Paper Title
Augmented Equivariant Attention Networks for Microscopy Image Reconstruction
Paper Authors
Paper Abstract
Acquiring high-quality or high-resolution electron microscopy (EM) and fluorescence microscopy (FM) images is time-consuming and expensive. Taking these images can even be invasive to the samples and may damage certain subtleties in them after the long or intense exposures that are often necessary for achieving high quality or high resolution in the first place. Advances in deep learning enable us to perform image-to-image transformation tasks for various types of microscopy image reconstruction, computationally producing high-quality images from physically acquired low-quality ones. When training image-to-image transformation models on pairs of experimentally acquired microscopy images, prior models suffer from performance loss due to their inability to capture inter-image dependencies and common features shared among images. Existing methods that exploit shared features in image classification tasks cannot be properly applied to image reconstruction tasks because they fail to preserve the equivariance property under spatial permutations, which is essential for image-to-image transformation. To address these limitations, we propose augmented equivariant attention networks (AEANets) with a better capability to capture inter-image dependencies while preserving the equivariance property. The proposed AEANets capture inter-image dependencies and shared features via two augmentations of the attention mechanism: shared references and batch-aware attention during training. We theoretically derive the equivariance property of the proposed augmented attention model and experimentally demonstrate its consistent superiority over baseline methods in both quantitative and visual results.
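The abstract does not spell out the exact formulation of the two augmentations, but one plausible reading is a self-attention block whose key/value set is extended with (1) learned reference features shared across all images and (2) keys/values pooled from the other images in the training batch. The sketch below is only an illustration of that idea; the module and parameter names (`BatchAwareAttention`, `num_refs`, `ref_keys`, etc.) are hypothetical and not the authors' implementation. Note that because each output position is a softmax-weighted sum driven solely by its own query, permuting the spatial positions of the input permutes the output in the same way, which is consistent with the claimed equivariance under spatial permutations.

```python
# Hypothetical sketch of the two attention augmentations suggested by the
# abstract: shared reference features plus batch-aware keys/values.
# Shapes, names, and design choices here are assumptions for illustration.
import torch
import torch.nn as nn


class BatchAwareAttention(nn.Module):
    def __init__(self, channels, num_refs=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learned shared references: feature vectors reused for every image,
        # intended to carry features common to the whole dataset.
        self.ref_keys = nn.Parameter(torch.randn(num_refs, channels))
        self.ref_values = nn.Parameter(torch.randn(num_refs, channels))

    def forward(self, x):
        # x: (B, C, H, W) feature maps of a training batch.
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.key(x).flatten(2).transpose(1, 2)     # (B, HW, C)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)

        # Batch-aware attention: pool keys/values from all images in the
        # batch so every query position can attend to inter-image features.
        k_batch = k.reshape(1, b * h * w, c).expand(b, -1, -1)
        v_batch = v.reshape(1, b * h * w, c).expand(b, -1, -1)

        # Append the shared references to the key/value sets.
        k_all = torch.cat([k_batch, self.ref_keys.expand(b, -1, -1)], dim=1)
        v_all = torch.cat([v_batch, self.ref_values.expand(b, -1, -1)], dim=1)

        attn = torch.softmax(q @ k_all.transpose(1, 2) / c ** 0.5, dim=-1)
        out = attn @ v_all                              # (B, HW, C)
        return out.transpose(1, 2).reshape(b, c, h, w) + x  # residual


if __name__ == "__main__":
    # Tiny smoke test: a batch of 4 feature maps, 32 channels, 64x64.
    block = BatchAwareAttention(channels=32)
    y = block(torch.randn(4, 32, 64, 64))
    print(y.shape)  # torch.Size([4, 32, 64, 64])
```

Since the abstract restricts the batch-aware component to training, one would presumably fall back to the shared references alone at inference time; that detail, like the rest of the sketch, is an assumption rather than a description of the paper's method.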