Paper Title
Vision Transformer with Deformable Attention
Paper Authors
Paper Abstract
Transformers have recently shown superior performance on various vision tasks. The large, sometimes even global, receptive field endows Transformer models with higher representation power than their CNN counterparts. Nevertheless, simply enlarging the receptive field also gives rise to several concerns. On the one hand, using dense attention, e.g., in ViT, leads to excessive memory and computational cost, and features can be influenced by irrelevant parts that lie beyond the region of interest. On the other hand, the sparse attention adopted in PVT or Swin Transformer is data-agnostic and may limit the ability to model long-range relations. To mitigate these issues, we propose a novel deformable self-attention module, where the positions of the key and value pairs in self-attention are selected in a data-dependent way. This flexible scheme enables the self-attention module to focus on relevant regions and capture more informative features. On this basis, we present the Deformable Attention Transformer, a general backbone model with deformable attention for both image classification and dense prediction tasks. Extensive experiments show that our models achieve consistently improved results on comprehensive benchmarks. Code is available at https://github.com/LeapLabTHU/DAT.
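To make the core idea concrete, the following is a minimal, hedged sketch of data-dependent key/value sampling in the spirit of the abstract; it is not the authors' implementation (see the linked repository for that). It assumes a single attention head, a small uniform grid of reference points shared by all queries, and a stand-in "offset network" that predicts offsets from the pooled queries; deformed sampling locations are read out with bilinear interpolation, and keys/values are formed from the sampled features.

```python
import numpy as np

def bilinear_sample(feat, coords):
    """Sample a feature map feat (H, W, C) at fractional coords (N, 2).
    Coordinates are (y, x) in pixel units, clipped to the valid range."""
    H, W, _ = feat.shape
    y = np.clip(coords[:, 0], 0, H - 1)
    x = np.clip(coords[:, 1], 0, W - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    wy = (y - y0)[:, None]; wx = (x - x0)[:, None]
    return (feat[y0, x0] * (1 - wy) * (1 - wx) + feat[y0, x1] * (1 - wy) * wx +
            feat[y1, x0] * wy * (1 - wx) + feat[y1, x1] * wy * wx)

def deformable_attention(x, Wq, Wk, Wv, W_off, n_ref=4):
    """Toy single-head deformable attention over a feature map x (H, W, C).

    A uniform n_ref x n_ref grid of reference points is shifted by offsets
    predicted from the mean query (a simplification of the paper's offset
    network, which is hypothetical here); keys and values are bilinearly
    sampled at the deformed points and shared by every query position.
    """
    H, W, C = x.shape
    q = x.reshape(-1, C) @ Wq                      # (H*W, C) queries
    # Uniform reference grid in pixel coordinates.
    ys = np.linspace(0, H - 1, n_ref)
    xs = np.linspace(0, W - 1, n_ref)
    ref = np.stack(np.meshgrid(ys, xs, indexing="ij"), -1).reshape(-1, 2)
    # Data-dependent offsets, bounded by tanh and scaled to the grid spacing.
    off = np.tanh(q.mean(0) @ W_off).reshape(-1, 2) * (min(H, W) / n_ref)
    pts = ref + off                                # deformed sampling points
    sampled = bilinear_sample(x, pts)              # (n_ref**2, C)
    k = sampled @ Wk
    v = sampled @ Wv
    # Standard scaled dot-product attention over the sampled keys/values.
    attn = q @ k.T / np.sqrt(C)
    attn = np.exp(attn - attn.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)
    return (attn @ v).reshape(H, W, C)
```

Note that, unlike dense attention, each query attends to only `n_ref**2` sampled locations, and unlike fixed sparse patterns those locations move with the input, which is the data-dependent behavior the abstract describes.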