Research on a Computer Vision-Based Guide Glasses System for the Visually Impaired
DOI: https://doi.org/10.54097/zm9d0r75

Keywords: guide glasses, computer vision, YOLOv5, target detection, deep learning

Abstract
As the global population of visually impaired individuals continues to grow, traditional assistive tools struggle to meet the demands of safe navigation. This paper presents the design of an intelligent guide glasses system based on the YOLOv5 deep learning model, integrating ultrasonic ranging, computer vision, and global positioning to provide obstacle detection, environmental sensing, and navigation. A specialized dataset, Blind Vision-YOLO, was constructed to train the model, and the resulting system delivers real-time, high-precision target detection. Experimental results show that the system detects targets in real time at 28.01 FPS with a mean average precision (mAP) of 74.1%, accurately identifying potential obstacles during daily travel and providing timely voice feedback. The smart guide glasses designed in this study are portable and practical, offering visually impaired users a safer and more convenient travel experience.
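
The detection-and-alert pipeline summarized above (YOLOv5 inference on a camera stream, throughput measurement, and spoken obstacle warnings) can be illustrated with a minimal Python sketch. The weight file name blind_vision_yolo.pt, the camera index, and the use of pyttsx3 for speech output are placeholders assumed for illustration; the paper's actual implementation is not reproduced here.

# Minimal sketch of a YOLOv5 detection-and-alert loop, under the assumptions above.
import time

import cv2
import pyttsx3
import torch

# Load YOLOv5 through torch.hub with custom weights (placeholder path).
model = torch.hub.load("ultralytics/yolov5", "custom", path="blind_vision_yolo.pt")
model.conf = 0.5  # report only reasonably confident detections

tts = pyttsx3.init()          # offline text-to-speech engine
cap = cv2.VideoCapture(0)     # wearable camera stream (index assumed)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    start = time.time()
    results = model(frame[..., ::-1])      # BGR -> RGB, then run inference
    fps = 1.0 / (time.time() - start)      # rough per-frame throughput

    # Collect the distinct obstacle classes detected in this frame.
    classes = results.xyxy[0][:, 5].tolist()
    labels = {results.names[int(c)] for c in classes}

    # Announce each detected obstacle class by voice.
    for label in labels:
        tts.say(f"{label} ahead")
    tts.runAndWait()

    print(f"{fps:.1f} FPS, detected: {', '.join(labels) or 'nothing'}")

cap.release()

In practice, the announcement step would be throttled (for example, repeating a class only after a cooldown) so the wearer is not flooded with identical alerts on every frame; the sketch omits this for brevity.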
License
Copyright (c) 2025 Highlights in Science, Engineering and Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.







