Paper Title
Artificial Intelligence Enabled Traffic Monitoring System
Paper Authors
Paper Abstract
Manual traffic surveillance can be a daunting task, as Traffic Management Centers operate a myriad of cameras installed over a network. Injecting some level of automation could help lighten the workload of human operators performing manual surveillance and facilitate proactive decision-making that reduces the impact of incidents and recurring congestion on roadways. This article presents a novel approach to automatically monitoring real-time traffic footage using deep convolutional neural networks and a stand-alone graphical user interface. The authors describe the findings obtained while developing the models that together form an integrated framework for an artificial intelligence enabled traffic monitoring system. The proposed system deploys several state-of-the-art deep learning algorithms to automate different traffic monitoring needs. Taking advantage of a large database of annotated video surveillance data, deep learning-based models are trained to detect queues, track stationary vehicles, and tabulate vehicle counts. A pixel-level segmentation approach is applied to detect traffic queues and predict their severity. Real-time object detection algorithms coupled with different tracking systems are deployed to automatically detect stranded vehicles and perform vehicle counts. At each stage of development, experimental results are presented to demonstrate the effectiveness of the proposed system. Overall, the results demonstrate that the proposed framework performs satisfactorily under varied conditions without being severely impacted by environmental hazards such as blurry camera views, low illumination, rain, or snow.
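To illustrate how a pixel-level queue-segmentation step could be wired up, the minimal sketch below uses an off-the-shelf DeepLabV3 backbone from torchvision and converts a predicted queue mask into a coarse severity label. The backbone choice, the hypothetical checkpoint `queue_seg.pt`, and the severity thresholds are illustrative assumptions; the paper's actual segmentation model and severity criteria are not reproduced here.

```python
# Minimal sketch (not the authors' implementation): pixel-level queue
# segmentation followed by a coarse severity estimate.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Two classes: background vs. queued-vehicle pixels. In practice the model
# would be fine-tuned on annotated queue masks.
model = deeplabv3_resnet50(num_classes=2)
# model.load_state_dict(torch.load("queue_seg.pt"))  # hypothetical fine-tuned weights
model.eval()

def queue_severity(frame_tensor: torch.Tensor) -> tuple[float, str]:
    """Return the fraction of pixels labeled 'queue' and a coarse severity label.

    frame_tensor: normalized float tensor of shape [3, H, W].
    """
    with torch.no_grad():
        logits = model(frame_tensor.unsqueeze(0))["out"]   # [1, 2, H, W]
        mask = logits.argmax(dim=1).squeeze(0)             # [H, W] of {0, 1}
    coverage = mask.float().mean().item()
    # Illustrative thresholds only; the paper's severity criteria may differ.
    if coverage < 0.10:
        label = "light"
    elif coverage < 0.30:
        label = "moderate"
    else:
        label = "severe"
    return coverage, label
```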
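Similarly, a minimal sketch of the detection-plus-tracking counting idea is given below, assuming a pretrained Faster R-CNN detector from torchvision, a naive nearest-centroid association step, and a virtual counting line. The detector, COCO class ids, gating distance, score threshold, and line position are illustrative stand-ins rather than the authors' configuration.

```python
# Minimal sketch (not the authors' implementation): detect vehicles per frame,
# associate detections to tracks by nearest centroid, and count line crossings.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Off-the-shelf COCO detector as a stand-in for the paper's real-time detector.
detector = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT).eval()

VEHICLE_CLASSES = {3, 6, 8}   # COCO ids for car, bus, truck
COUNT_LINE_Y = 360            # illustrative virtual counting line (pixel row)
GATE = 50.0                   # max centroid jump (pixels) to keep the same track id

tracks = {}                   # track_id -> last centroid (x, y)
next_id = 0
total_count = 0

def update(frame: torch.Tensor) -> int:
    """Process one [3, H, W] frame: detect, associate, count, return running total."""
    global next_id, total_count
    with torch.no_grad():
        det = detector([frame])[0]
    centroids = [
        (float((x1 + x2) / 2), float((y1 + y2) / 2))
        for (x1, y1, x2, y2), lbl, score in zip(det["boxes"], det["labels"], det["scores"])
        if int(lbl) in VEHICLE_CLASSES and float(score) > 0.6
    ]
    new_tracks = {}
    for cx, cy in centroids:
        # Greedy nearest-centroid association within the gating distance.
        match = min(
            tracks.items(),
            key=lambda kv: (kv[1][0] - cx) ** 2 + (kv[1][1] - cy) ** 2,
            default=None,
        )
        if match and (match[1][0] - cx) ** 2 + (match[1][1] - cy) ** 2 < GATE ** 2:
            tid, (_, prev_y) = match
            if prev_y < COUNT_LINE_Y <= cy:   # track crossed the counting line
                total_count += 1
            tracks.pop(tid)                   # consume the matched track
        else:
            tid, next_id = next_id, next_id + 1
        new_tracks[tid] = (cx, cy)
    tracks.clear()
    tracks.update(new_tracks)
    return total_count
```

In a deployed system the greedy centroid association would normally be replaced by a more robust tracker (e.g., Kalman-filter-based SORT-style tracking) to handle occlusions and missed detections.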