Design and implementation of a cleaning robot
Abstract
This study focuses on the development of a cleaning robot, covering three main aspects: mechanical design, circuit design, and software design. First, in mechanical design, we created a structure capable of agile movement and efficient cleaning so that the robot can operate smoothly in various environments. Second, in circuit design, we developed a microcontroller-based control system to coordinate the operation of the components, including the drive motors, sensors, and image recognition module. Third, in software design, we used YOLO (You Only Look Once) and OpenCV to enable the robot to accurately identify and classify waste during automatic cleaning. Finally, we conducted practical cleaning experiments to verify the robot's waste recognition ability and classification accuracy. The experimental results show that the cleaning robot not only recognizes waste but also classifies it accurately, demonstrating its potential for practical applications.
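To illustrate the kind of software pipeline described above, the following Python sketch shows a generic YOLO-plus-OpenCV detection loop. It is an assumed, minimal example rather than the authors' implementation: the weights file "waste_detector.pt", the class names it contains, and the camera index are all hypothetical.

# Minimal sketch of a YOLO + OpenCV waste-detection loop (assumed setup, not the authors' code).
import cv2
from ultralytics import YOLO

model = YOLO("waste_detector.pt")      # hypothetical trained waste-detection weights
cap = cv2.VideoCapture(0)              # assumed on-board camera index

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]         # run detection on the current frame
    for box in result.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])      # bounding-box corners
        label = result.names[int(box.cls[0])]       # predicted waste class
        score = float(box.conf[0])                  # detection confidence
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{label} {score:.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("waste detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):           # press q to stop
        break

cap.release()
cv2.destroyAllWindows()

In a deployed robot, the predicted class for each detection would typically be forwarded to the microcontroller-based control system so it can trigger the corresponding cleaning or sorting action.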