Title
Unsupervised Tissue Segmentation via Deep Constrained Gaussian Network
Authors
Abstract
Tissue segmentation is the mainstay of pathological examination, whereas manual delineation is unduly burdensome. To assist this time-consuming and subjective manual step, researchers have devised methods to automatically segment structures in pathological images. Recently, machine learning and deep learning based methods have come to dominate tissue segmentation research. However, most such approaches are supervised and developed using a large number of training samples, for which pixelwise annotations are expensive and sometimes impossible to obtain. This paper introduces a novel unsupervised learning paradigm by integrating an end-to-end deep mixture model with a constrained indicator to acquire accurate semantic tissue segmentation. This constraint aims to centralise the components of the deep mixture model during the calculation of the optimisation function. In so doing, the redundant or empty class issues, which are common in current unsupervised learning methods, can be greatly reduced. By validation on both public and in-house datasets, the proposed deep constrained Gaussian network achieves significantly (Wilcoxon signed-rank test) better performance (with average Dice scores of 0.737 and 0.735, respectively) on tissue segmentation, with improved stability and robustness, compared to other existing unsupervised segmentation approaches. Furthermore, the proposed method presents a similar performance (p-value > 0.05) compared to the fully supervised U-Net.
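The abstract describes an unsupervised objective built from a Gaussian mixture model plus a constraint that discourages redundant or empty classes. The paper's exact formulation is not reproduced here; the following is only a minimal numpy sketch of one plausible reading of that idea, in which a diagonal-Gaussian mixture negative log-likelihood over pixel features is regularised by a penalty on components whose average responsibility collapses toward zero. All names (`gmm_nll_with_balance`, `lam`, the penalty form) are illustrative assumptions, not the authors' method.

```python
import numpy as np

def gmm_nll_with_balance(features, means, sigmas, pis, lam=1.0):
    """Hypothetical loss: GMM negative log-likelihood + empty-class penalty.

    features: (N, D) pixel embeddings
    means, sigmas: (K, D) diagonal-Gaussian parameters per component
    pis: (K,) mixture weights
    lam: weight of the balance penalty (illustrative)
    """
    # Per-pixel, per-component log density under diagonal Gaussians.
    diff = features[:, None, :] - means[None, :, :]                   # (N, K, D)
    log_norm = -0.5 * np.sum(np.log(2 * np.pi * sigmas**2), axis=1)   # (K,)
    log_comp = log_norm[None, :] - 0.5 * np.sum((diff / sigmas[None])**2, axis=2)
    log_mix = np.log(pis)[None, :] + log_comp                         # (N, K)

    # Log-sum-exp over components gives the per-pixel log-likelihood.
    m = log_mix.max(axis=1, keepdims=True)
    ll = m.squeeze(1) + np.log(np.exp(log_mix - m).sum(axis=1))
    nll = -ll.mean()

    # Soft class usage: average responsibility assigned to each component.
    resp = np.exp(log_mix - m) / np.exp(log_mix - m).sum(axis=1, keepdims=True)
    usage = resp.mean(axis=0)                                         # (K,)

    # Penalty grows without bound as any component's usage approaches zero,
    # i.e. one way to penalise a redundant/empty class.
    balance_penalty = -np.sum(np.log(usage + 1e-8)) / len(pis)
    return nll + lam * balance_penalty
```

In this sketch a segmentation map would be read off as the argmax of `resp` per pixel; the penalty term is what would, in the abstract's words, "centralise the components" by keeping every class in use during optimisation.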