
International Journal of Advanced Technology and Engineering Exploration (IJATEE)

ISSN (Print): 2394-5443    ISSN (Online): 2394-7454
Volume-9 Issue-97 December-2022
Paper Title : Analysis on localization and prediction of depth chili fruits images using YOLOv5
Author Name : M. N. Shah Zainudin, M. S. S. Shahrul Azlan, L. L. Yin, W. H. Mohd Saad, M. I. Idris, Sufri Muhammad and M. S. J. A. Razak
Abstract :

Chili fruits are an ingredient that Malaysians consider essential for cooking, and adding chili to a dish gives it a fiery flavour. Findings from south-western Ecuador, one of the most important plant-growing regions on the American continent at the time, provide evidence that people were already using chili as an additional food element as early as 600 years ago. Traditional manual picking of chili remains common, but it is imprecise and time-consuming, and incorrect picking and grading prolong the harvesting process. Advances in computer vision and pattern recognition have demonstrated their effectiveness in image recognition, and two-dimensional (2D) images are frequently used because of their simplicity and low complexity. As a result, automated picking systems built on object detection have become common. However, because 2D images lack information such as depth, it is difficult to identify the growing stage or maturity level of chili fruits from them. Object detection is widely used to determine both the location and the category of objects, and you only look once (YOLO) is one of the most well-established methods for this task. In this work, YOLOv5, which is fast, reliable and able to recognise small objects, is proposed to localise chili fruits and predict their category, allowing a chili's form and class to be determined from its colour. The proposed model is able to localise chili fruits and distinguish their colour with an average accuracy above 94%. This result demonstrates the effectiveness of the approach and supports our larger goal of developing an autonomous chili fruit picking robot that could help farmers and the agricultural sector reduce labour during the grading process.
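As an illustration of the detection step described above, the following minimal sketch loads a pretrained YOLOv5 model from the public Ultralytics repository (reference [45]) and runs it on a single image. This is not the authors' implementation: the image file name is a hypothetical placeholder, and weights fine-tuned on labelled chili images (for example, classes for red and green fruits) would replace the generic 'yolov5s' variant.

import torch

# Load YOLOv5 from the Ultralytics repository through PyTorch Hub (reference [45]).
# 'yolov5s' is the small pretrained variant; a model fine-tuned on labelled chili
# images would be loaded here instead for the task described in the abstract.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Run inference on a single RGB image; YOLOv5 handles resizing and normalisation.
# "chili_sample.jpg" is a hypothetical placeholder, not a file from the paper.
results = model("chili_sample.jpg")

# Each detection row holds the bounding box (xmin, ymin, xmax, ymax),
# a confidence score and the predicted class name.
detections = results.pandas().xyxy[0]
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])

Bounding boxes obtained this way give each fruit's position in the 2D image; pairing them with the depth channel of an RGB-D camera is one way such detections could feed the autonomous picking robot the authors describe.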

Keywords : Chili, YOLOv5, Semi-autonomous, Depth images, Object detection.
Cite this article : Zainudin MS, Azlan MS, Yin LL, Saad WM, Idris MI, Muhammad S, Razak MS. Analysis on localization and prediction of depth chili fruits images using YOLOv5. International Journal of Advanced Technology and Engineering Exploration. 2022; 9(97):1786-1801. DOI: 10.19101/IJATEE.2021.876501.
References :
[1]Gené-mola J, Vilaplana V, Rosell-polo JR, Morros JR, Ruiz-hidalgo J, Gregorio E. Multi-modal deep learning for Fuji apple detection using RGB-D cameras and their radiometric capabilities. Computers and Electronics in Agriculture. 2019; 162:689-98.
[2]Wu G, Zhu Q, Huang M, Guo Y, Qin J. Automatic recognition of juicy peaches on trees based on 3D contour features and colour data. Biosystems Engineering. 2019; 188:1-13.
[3]Davies FT. Opportunities for horticulture to feed the world©. In proceedings of the 2014 annual meeting of the international plant propagators society 2014 (pp. 455-8).
[4]Brondino L, Borra D, Giuggioli NR, Massaglia S. Mechanized blueberry harvesting: preliminary results in the Italian context. Agriculture. 2021; 11(12):1-14.
[5]Zainudin MN, Husin N, Saad WH, Radzi SM, Noh ZM, Sulaiman NA, et al. A framework for chili fruits maturity estimation using deep convolutional neural network. Przegląd Elektrotechniczny. 2021; 97(12):77-81.
[6]Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In proceedings of the conference on computer vision and pattern recognition 2016 (pp. 779-88). IEEE.
[7]Quinn J, Mceachen J, Fullan M, Gardner M, Drummy M. Dive into deep learning: tools for engagement. Corwin Press; 2019.
[8]Sarker IH. Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions. SN Computer Science. 2021; 2(6):1-20.
[9]Wu W, Liu H, Li L, Long Y, Wang X, Wang Z, et al. Application of local fully convolutional neural network combined with YOLO v5 algorithm in small target detection of remote sensing image. PloS one. 2021; 16(10):1-15.
[10]Choi RY, Coyner AS, Kalpathy-cramer J, Chiang MF, Campbell JP. Introduction to machine learning, neural networks, and deep learning. Translational Vision Science & Technology. 2020; 9(2):1-14.
[11]Diwan T, Anirudh G, Tembhurne JV. Object detection using YOLO: challenges, architectural successors, datasets and applications. Multimedia Tools and Applications. 2022;1-33.
[12]Zainudin MN, Mohd SM, Ismail MM. Feature extraction on medical image using 2D gabor filter. In applied mechanics and materials 2011 (pp. 2128-32). Trans Tech Publications Ltd.
[13]Sulaiman NA, Abdullah MP, Abdullah H, Zainudin MN, Yusop AM. Fault detection for air conditioning system using machine learning. IAES International Journal of Artificial Intelligence. 2020; 9(1):109-16.
[14]Ahmad HM, Rahimi A. Deep learning methods for object detection in smart manufacturing: a survey. Journal of Manufacturing Systems. 2022; 64:181-96.
[15]Kumar S, Balyan A, Chawla M. Object detection and recognition in images. International Journal of Engineering Development and Research. 2017; 5(4):1029-34.
[16]Buhmann JM, Malik J, Perona P. Image recognition: visual grouping, recognition, and learning. Proceedings of the National Academy of Sciences. 1999; 96(25):14203-4.
[17]Murphy K, Torralba A, Eaton D, Freeman W. Object detection and localization using local and global features. In toward category-level object recognition 2006 (pp. 382-400). Springer, Berlin, Heidelberg.
[18]www.intel.com/design/literature.htm. Accessed 20 June 2022.
[19]Kadambi A, Bhandari A, Raskar R. 3D depth cameras in vision: benefits and limitations of the hardware. In computer vision and machine learning with RGB-D sensors 2014 (pp. 3-26). Springer, Cham.
[20]Jeon HG, Lee JY, Im S, Ha H, Kweon IS. Stereo matching with color and monochrome cameras in low-light conditions. In proceedings of the conference on computer vision and pattern recognition 2016 (pp. 4086-94). IEEE.
[21]Gongal A, Karkee M, Amatya S. Apple fruit size estimation using a 3D machine vision system. Information Processing in Agriculture. 2018; 5(4):498-503.
[22]Tian Y, Yang G, Wang Z, Wang H, Li E, Liang Z. Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Computers and Electronics in Agriculture. 2019; 157:417-26.
[23]Tian Y, Yang G, Wang Z, Li E, Liang Z. Detection of apple lesions in orchards based on deep learning methods of CycleGAN and YOLOv3-dense. Journal of Sensors. 2019; 2019:1-14.
[24]Liu J, Wang X. Tomato diseases and pests detection based on improved YOLO V3 convolutional neural network. Frontiers in Plant Science. 2020; 11:1-12.
[25]Kuznetsova A, Maleva T, Soloviev V. Using YOLOv3 algorithm with pre-and post-processing for apple detection in fruit-harvesting robot. Agronomy. 2020; 10(7):1-19.
[26]Lawal MO. Tomato detection based on modified YOLOv3 framework. Scientific Reports. 2021; 11(1):1-11.
[27]Fu L, Feng Y, Wu J, Liu Z, Gao F, Majeed Y, et al. Fast and accurate detection of kiwifruit in orchard using improved YOLOv3-tiny model. Precision Agriculture. 2021; 22(3):754-76.
[28]Yao J, Qi J, Zhang J, Shao H, Yang J, Li X. A real-time detection algorithm for Kiwifruit defects based on YOLOv5. Electronics. 2021; 10(14):1-13.
[29]Kuznetsova A, Maleva T, Soloviev V. Detecting apples in orchards using YOLOv3 and YOLOv5 in general and close-up images. In international symposium on neural networks 2020 (pp. 233-43). Springer, Cham.
[30]Manan AA, Razman MA, Khairuddin IM, Shapiee MN. Chili plant classification using transfer learning models through object detection. Mekatronika. 2020; 2(2):23-7.
[31]Hespeler SC, Nemati H, Dehghan-niri E. Non-destructive thermal imaging for object detection via advanced deep learning for robotic inspection and harvesting of chili peppers. Artificial Intelligence in Agriculture. 2021; 5:102-17.
[32]Sihombing YF, Septiarini A, Kridalaksana AH, Puspitasari N. Chili classification using shape and color features based on image processing. Scientific Journal of Informatics. 2022; 9(1):42-50.
[33]Cruz-domínguez O, Carrera-escobedo JL, Guzmán-valdivia CH, Ortiz-rivera A, García-ruiz M, Durán-muñoz HA, et al. A novel method for dried chili pepper classification using artificial intelligence. Journal of Agriculture and Food Research. 2021; 3:1-7.
[34]Sembiring A, Basuki RS, Rosliani R, Rahayu ST. Farmers’ challenges on chili farming in the acid dry land: a case study from Pasir Madang-Bogor Regency, Indonesia. In E3S web of conferences 2021 (pp. 1-7). EDP Sciences.
[35]Xu R, Lin H, Lu K, Cao L, Liu Y. A forest fire detection system based on ensemble learning. Forests. 2021; 12(2):1-17.
[36]Ferguson M, Ak R, Lee YT, Law KH. Detection and segmentation of manufacturing defects with convolutional neural networks and transfer learning. ASTM International. 2018; 2(1):1-28.
[37]Redmon J, Farhadi A. YOLO9000: better, faster, stronger. In proceedings of the IEEE conference on computer vision and pattern recognition 2017 (pp. 7263-71). IEEE.
[38]Gupta S, Devi DT. YOLOv2 based real time object detection. International Journal of Computer Science Trends and Technology. 2020; 8(3):26-30.
[39]Redmon J, Farhadi A. YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767. 2018:1-6.
[40]Hsieh KW, Huang BY, Hsiao KZ, Tuan YH, Shih FP, Hsieh LC, et al. Fruit maturity and location identification of beef tomato using R-CNN and binocular imaging technology. Journal of Food Measurement and Characterization. 2021; 15(6):5170-80.
[41]Sari AC, Setiawan H, Adiputra TW, Widyananda J. Fruit classification quality using convolutional neural network and augmented reality. Journal of Theoretical and Applied Information Technology. 2021; 99(22):5300-11.
[42]Bochkovskiy A, Wang CY, Liao HY. YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934. 2020:1-17.
[43]Yu J, Zhang W. Face mask wearing detection algorithm based on improved YOLO-v4. Sensors. 2021; 21(9):1-21.
[44]Liao Z, Tian M. A bird species detection method based on YOLO-v5. In international conference on neural networks, information and communication engineering 2021 (pp. 65-75). SPIE.
[45]https://github.com/ultralytics/yolov5. Accessed 24 April 2022.
[46]Song Q, Li S, Bai Q, Yang J, Zhang X, Li Z, et al. Object detection method for grasping robot based on improved YOLOv5. Micromachines. 2021; 12(11):1-18.