Paper Title
Hybrid Window Attention Based Transformer Architecture for Brain Tumor Segmentation
Paper Authors
Paper Abstract
As intensities of MRI volumes are inconsistent across institutes, it is essential to extract universal features of multi-modal MRIs to precisely segment brain tumors. To this end, we propose a volumetric vision transformer that employs two windowing strategies in its attention mechanism to extract fine-grained features, and we enforce local distributional smoothness (LDS) during model training, inspired by virtual adversarial training (VAT), to make the model robust. We trained and evaluated the network architecture on the FeTS Challenge 2022 dataset. Our performance on the online validation dataset is as follows: Dice Similarity Scores of 81.71%, 91.38%, and 85.40%, and Hausdorff Distances (95%) of 14.81 mm, 3.93 mm, and 11.18 mm for the enhancing tumor, whole tumor, and tumor core, respectively. Overall, the experimental results verify the effectiveness of our method, which yields better segmentation accuracy for each tumor sub-region. Our code implementation is publicly available: https://github.com/himashi92/vizviva_fets_2022
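The abstract's LDS regularizer follows the general virtual adversarial training recipe: find a small input perturbation that maximally changes the network's prediction, then penalize the divergence between predictions on the clean and perturbed volumes. The following is a minimal sketch of such a VAT-style LDS term for a 3D segmentation network, not the authors' implementation (see the linked repository for that); the function name `lds_loss` and the hyperparameters `xi`, `eps`, and `n_power` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def lds_loss(model, volume, xi=1e-6, eps=1.0, n_power=1):
    """VAT-style local distributional smoothness (sketch).

    Approximates the adversarial perturbation direction by power iteration
    on the KL divergence, then penalizes the divergence between predictions
    on the clean volume and the perturbed volume.
    """
    with torch.no_grad():
        # Clean prediction, shape (B, C, D, H, W); treated as the target distribution.
        p_clean = F.softmax(model(volume), dim=1)

    # Random unit direction to start the power iteration.
    d = torch.randn_like(volume)
    d = d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1, 1) + 1e-12)

    for _ in range(n_power):
        d.requires_grad_(True)
        logp_hat = F.log_softmax(model(volume + xi * d), dim=1)
        adv_div = F.kl_div(logp_hat, p_clean, reduction="batchmean")
        grad = torch.autograd.grad(adv_div, d)[0]
        # Normalized gradient approximates the worst-case perturbation direction.
        d = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1, 1) + 1e-12)
        d = d.detach()

    # LDS: divergence under the (approximately) worst-case small perturbation.
    logp_hat = F.log_softmax(model(volume + eps * d), dim=1)
    return F.kl_div(logp_hat, p_clean, reduction="batchmean")
```

In a typical setup this term would be added, with a weighting coefficient, to the supervised segmentation loss (e.g. Dice plus cross-entropy) during training.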