Paper Title


Performance Analysis of YOLO-based Architectures for Vehicle Detection from Traffic Images in Bangladesh

Paper Authors

Alamgir, Refaat Mohammad, Shuvro, Ali Abir, Mushabbir, Mueeze Al, Raiyan, Mohammed Ashfaq, Rani, Nusrat Jahan, Rahman, Md. Mushfiqur, Kabir, Md. Hasanul, Ahmed, Sabbir

Paper Abstract


The task of locating and classifying different types of vehicles has become a vital element in numerous applications of automation and intelligent systems, ranging from traffic surveillance to vehicle identification and many more. In recent times, deep learning models have been dominating the field of vehicle detection. Yet, Bangladeshi vehicle detection has remained a relatively unexplored area. One of the main goals of vehicle detection is its real-time application, where 'You Only Look Once' (YOLO) models have proven to be the most effective architecture. In this work, intending to find the best-suited YOLO architecture for fast and accurate vehicle detection from traffic images in Bangladesh, we have conducted a performance analysis of different variants of YOLO-based architectures, namely YOLOv3, YOLOv5s, and YOLOv5x. The models were trained on a dataset containing 7390 images belonging to 21 types of vehicles, comprising samples from the DhakaAI dataset, the Poribohon-BD dataset, and our self-collected images. After thorough quantitative and qualitative analysis, we found the YOLOv5x variant to be the best-suited model, outperforming the YOLOv3 and YOLOv5s models by 7 and 4 percent, respectively, in mAP, and by 12 and 8.5 percent in terms of accuracy.
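The mAP figures reported above rest on matching predicted boxes to ground-truth boxes by Intersection-over-Union (IoU): a prediction counts as a true positive only if its IoU with a ground-truth box of the same class exceeds a threshold (commonly 0.5). A minimal sketch of the IoU computation is shown below; this is a generic illustration, not the authors' evaluation code, and the `(x1, y1, x2, y2)` box format is an assumption.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes.

    Boxes are assumed to be in (x1, y1, x2, y2) corner format.
    Returns a value in [0, 1]; 0 when the boxes do not overlap.
    """
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Clamp to zero so disjoint boxes yield no intersection area
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0


# Example: a perfect match scores 1.0; partial overlap scores lower.
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # → 1.0
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap
```

At an IoU threshold of 0.5, per-class precision-recall curves are built from these matches, and mAP is the mean of the per-class average precisions.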
