Paper Title
A Generic Visualization Approach for Convolutional Neural Networks
Paper Authors
Paper Abstract
Retrieval networks are essential for search and indexing. Compared to classification networks, attention visualization for retrieval networks is hardly studied. We formulate attention visualization as a constrained optimization problem, leveraging the unit L2-norm constraint as an attention filter (L2-CAF) to localize attention in both classification and retrieval networks. Unlike recent literature, our approach requires neither architectural changes nor fine-tuning; thus, a pre-trained network's performance is never undermined. L2-CAF is quantitatively evaluated using weakly supervised object localization. State-of-the-art results are achieved on classification networks, and for retrieval networks, significant improvement margins are achieved over a Grad-CAM baseline. Qualitative evaluation demonstrates how L2-CAF visualizes attention per frame for a recurrent retrieval network. Further ablation studies highlight the computational cost of our approach and compare L2-CAF with other feasible alternatives. Code is available at https://bit.ly/3iDBLFv
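The core idea described in the abstract can be illustrated with a minimal NumPy sketch: optimize a spatial filter over a feature map, projecting it back onto the unit L2-norm sphere after each gradient step, so that the filtered feature map approximately preserves the network's output. This is only an illustrative toy, not the paper's implementation: the stand-in "network" (global average pooling plus a linear head), the random feature map, the squared-error objective, and the learning rate are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a CNN's last conv feature map (C channels, H x W spatial)
# and a linear classifier head. These are illustrative assumptions, not the
# paper's architecture.
C, H, W = 8, 7, 7
A = rng.standard_normal((C, H, W))   # feature map
w = rng.standard_normal(C)           # toy classifier weights

def logit(feat):
    # Global-average-pool then linear head: a stand-in for the network tail.
    return w @ feat.mean(axis=(1, 2))

target = logit(A)  # the unmodified network output to preserve

# Spatial attention filter constrained to unit L2 norm (the L2-CAF idea):
# minimize (logit(A * f) - target)^2 subject to ||f||_2 = 1, via projected
# gradient descent.
f = np.ones((H, W))
f /= np.linalg.norm(f)

lr = 0.1
for _ in range(500):
    out = logit(A * f)  # f broadcasts over channels
    # d out / d f[h,w] = sum_c w[c] * A[c,h,w] / (H*W)
    grad = 2.0 * (out - target) * np.einsum("c,chw->hw", w, A) / (H * W)
    f -= lr * grad
    f /= np.linalg.norm(f)  # project back onto the unit sphere

heatmap = np.abs(f)  # larger values = spatial locations the network attends to
```

The unit-norm constraint is what keeps the problem well-posed: without it, a trivial all-ones filter reproduces the output exactly, whereas on the sphere the optimizer must concentrate mass on the spatial locations that matter most to the prediction.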