Paper Title
Multi-Model Learning for Real-Time Automotive Semantic Foggy Scene Understanding via Domain Adaptation

Authors

Alshammari, Naif, Akcay, Samet, Breckon, Toby P.

Abstract
Robust semantic scene segmentation for automotive applications is a challenging problem in two key aspects: (1) labelling every individual scene pixel and (2) performing this task under unstable weather and illumination changes (e.g., foggy weather), which results in poor outdoor scene visibility. Such visibility limitations lead to non-optimal performance of generalised deep convolutional neural network-based semantic scene segmentation. In this paper, we propose an efficient end-to-end automotive semantic scene understanding approach that is robust to foggy weather conditions. As an end-to-end pipeline, our proposed approach provides: (1) the transformation of imagery from foggy to clear weather conditions using a domain transfer approach (correcting for poor visibility) and (2) semantically segmenting the scene using a competitive encoder-decoder architecture with low computational complexity (enabling real-time performance). Our approach incorporates RGB colour, depth and luminance images via distinct encoders with dense connectivity and feature fusion to effectively exploit information from different inputs, which contributes to an optimal feature representation within the overall model. Using this architectural formulation with dense skip connections, our model achieves comparable performance to contemporary approaches at a fraction of the overall model complexity.
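The multi-encoder fusion idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the per-modality "encoder" below is a toy placeholder, and all function names are hypothetical; it only shows the structural pattern of encoding RGB, depth and luminance inputs separately and fusing their features by concatenation before decoding.

```python
# Hypothetical sketch of multi-encoder feature fusion (not the paper's code):
# each input modality passes through its own encoder, and the resulting
# per-pixel feature vectors are fused by concatenation.

def encode(pixel_values, scale):
    # toy per-modality encoder: scaled ReLU of each channel
    return [max(v * scale, 0.0) for v in pixel_values]

def fuse(feature_lists):
    # feature fusion by concatenating per-modality feature vectors
    fused = []
    for features in feature_lists:
        fused.extend(features)
    return fused

# one pixel drawn from each input modality
rgb = [0.2, 0.5, 0.9]   # RGB colour channels
depth = [3.1]           # depth channel
luminance = [0.7]       # luminance channel

features = [encode(rgb, 2.0), encode(depth, 1.0), encode(luminance, 2.0)]
fused = fuse(features)
print(len(fused))  # 5 fused feature channels for this pixel
```

In the full model, a shared decoder would then map the fused representation back to per-pixel class labels; concatenation-based fusion keeps each modality's features distinct, leaving the decoder to learn how to weight them.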