Journal of Computer Applications ›› 2020, Vol. 40 ›› Issue (10): 2910-2916. DOI: 10.11772/j.issn.1001-9081.2020020162

• Artificial Intelligence •

Automatic segmentation of breast epithelial and stromal regions based on conditional generative adversarial network

ZHANG Zelin, XU Jun

  1. Jiangsu Key Laboratory of Big Data Analysis Technology (Nanjing University of Information Science and Technology), Nanjing, Jiangsu 210044, China
  • Received: 2020-02-19  Revised: 2020-04-24  Online: 2020-10-10  Published: 2020-05-13
  • Corresponding author: XU Jun
  • About the authors: ZHANG Zelin (1993-), male, born in Zhangye, Gansu, is an M.S. candidate whose research interests include artificial intelligence and medical imaging. XU Jun (1972-), male, born in Jingdezhen, Jiangxi, is a professor with a Ph.D. whose research interests include pathological image analysis for computer-aided cancer diagnosis and prognosis, medical image analysis, and biomedical engineering.
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (U1809205, 61771249) and the Natural Science Foundation of Jiangsu Province (BK20181411).

Abstract: The automatic segmentation of epithelial and stromal regions in breast pathological images has very important clinical significance for the diagnosis and treatment of breast cancer. However, because epithelial and stromal regions in breast pathological images are highly complex, it is difficult for general segmentation models, trained only on the provided segmentation labels, to segment the two regions quickly and accurately. Therefore, an EPithelium and Stroma segmentation conditional Generative Adversarial Network (EPScGAN) model based on the conditional Generative Adversarial Network (cGAN) was proposed. In EPScGAN, the discrimination mechanism of the discriminator provided a trainable loss function for the training of the generator, measuring the error between the generator's segmentation outputs and the real labels more accurately and thereby guiding the training of the generator better. A total of 1 286 images of size 512×512 were randomly cropped from the expert-labeled breast pathological image datasets provided by the Netherlands Cancer Institute (NKI) and the Vancouver General Hospital (VGH) to form an experimental dataset, which was then divided into a training set and a test set at a ratio of 7:3 to train and test the EPScGAN model. Experimental results show that the mean Intersection over Union (mIoU) of the EPScGAN model on the test set is 78.12%, and that compared with six other popular deep learning segmentation models, the proposed EPScGAN has better segmentation performance.
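The reported evaluation metric, mean Intersection over Union (mIoU), averages the per-class overlap between predicted and ground-truth label maps. The following is an illustrative NumPy sketch, not the authors' code; the class layout (e.g. epithelium, stroma, background as integer labels) is an assumption for the example.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union over all classes.

    pred, target: integer label maps of identical shape.
    Classes absent from both prediction and ground truth are skipped,
    so an unused class does not drag the average down.
    """
    ious = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:  # class appears nowhere: skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy example with 2 classes on a 2x2 label map:
# class 0: intersection 1, union 2 -> 0.5
# class 1: intersection 2, union 3 -> 2/3
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(round(mean_iou(pred, target, num_classes=2), 4))  # 0.5833
```

In practice the metric would be computed over every 512×512 test image and averaged; a perfect segmentation yields an mIoU of 1.0 (100%).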

Key words: deep learning, conditional Generative Adversarial Network (cGAN), breast pathological tissue image, epithelial and stromal regions, image segmentation

CLC number: