Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (3): 825-832. DOI: 10.11772/j.issn.1001-9081.2021040856
• 2021 CCF Conference on Artificial Intelligence (CCFAI 2021) •
Guangzhu XU1,2, Wenjie LIN1, Sha CHEN1, Wan KUANG1, Bangjun LEI1,2, Jun ZHOU3
Received: 2021-05-25
Revised: 2021-06-29
Accepted: 2021-06-30
Online: 2021-11-09
Published: 2022-03-10
Contact: Bangjun LEI
About author: XU Guangzhu, born in 1979 in Shan County, Shandong, Ph.D., professor, CCF member. His research interests include artificial neural networks and computer vision.
Supported by:
Abstract:
Fundus vessel segmentation is highly challenging because the vascular structure of the fundus is complex and variable and the contrast between vessels and background is low; tiny vessels are especially difficult to segment. U-Net, built on a deep fully convolutional neural network, can effectively extract global and local information from vessel images, but its output is a grayscale image binarized with a hard threshold, which causes loss of vessel regions and over-thinning of vessels. To address these problems, a fundus vessel segmentation method combining the respective strengths of U-Net and the Pulse Coupled Neural Network (PCNN) was proposed. First, an iterative U-Net model was used to highlight the vessels: the features extracted by the first pass of U-Net were fused with the original image, and the result was fed into the improved U-Net model again for vessel enhancement. Then, the U-Net output was treated as a grayscale image and segmented precisely by a PCNN with an adaptive threshold. Batch Normalization and Dropout were introduced into the U-Net model to speed up training and effectively alleviate overfitting. Experimental results show that the AUC of the proposed method reaches 0.979 6, 0.980 9 and 0.982 7 on the DRIVE, STARE and CHASE_DB1 datasets respectively. The proposed method extracts more vessel details and has strong generalization ability and good application prospects.
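The adaptive-threshold PCNN binarization stage described in the abstract can be illustrated with a minimal simplified PCNN in NumPy. This is a sketch of the general mechanism only; `beta`, `alpha_theta`, `v_theta`, the iteration count and the linking weights are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np

def pcnn_segment(img, beta=0.2, alpha_theta=0.2, v_theta=20.0, iterations=10):
    """Minimal simplified PCNN binarization sketch (illustrative parameters).

    img: 2D float array in [0, 1]. Returns a boolean fire map of pixels that
    pulsed at least once while the dynamic threshold decayed.
    """
    def neighbor_sum(y):
        # 8-neighbour linking input with zero padding; diagonals half-weighted
        p = np.pad(y, 1)
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
                + 0.5 * (p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]))

    y = np.zeros_like(img)        # pulse output of the previous step
    theta = np.ones_like(img)     # dynamic threshold, decays every step
    fired = np.zeros(img.shape, dtype=bool)
    for _ in range(iterations):
        l = neighbor_sum(y)                    # linking from neighbouring pulses
        u = img * (1.0 + beta * l)             # internal activity
        y = (u > theta).astype(float)          # pulse where activity beats threshold
        theta = theta * np.exp(-alpha_theta) + v_theta * y  # decay, recharge fired pixels
        fired |= y > 0
    return fired
```

Bright, mutually linked pixels pulse together early, while the threshold must decay much further before background pixels can fire; collecting the early pulses yields a binarization that adapts to local context instead of applying a single hard cut to the U-Net grayscale output.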
CLC Number:
Guangzhu XU, Wenjie LIN, Sha CHEN, Wan KUANG, Bangjun LEI, Jun ZHOU. Fundus vessel segmentation method based on U-Net and pulse coupled neural network with adaptive threshold[J]. Journal of Computer Applications, 2022, 42(3): 825-832.
Downsampling | Feature map size | Upsampling | Feature map size | Kernel size
---|---|---|---|---
Layer_1 | 48×48 | Layer_1 | 6×6 | 3×3
Layer_2 | 24×24 | Layer_2 | 12×12 | 3×3
Layer_3 | 12×12 | Layer_3 | 24×24 | 3×3
Layer_4 | 6×6 | Layer_4 | 48×48 | 3×3

Tab. 1 Network layer parameters of U-Net
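The feature-map sizes in Tab. 1 follow from three successive 2×2 downsampling steps on 48×48 input patches, mirrored by the decoder. A small NumPy sketch of the encoder-side size trace (2×2 max pooling is an illustrative choice here; the table does not name the pooling operator):

```python
import numpy as np

def max_pool2x2(x):
    """2x2, stride-2 max pooling on a 2D array with even side lengths."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Trace the encoder sizes of Tab. 1: 48x48 -> 24x24 -> 12x12 -> 6x6.
x = np.random.rand(48, 48)
sizes = []
for _ in range(3):
    x = max_pool2x2(x)
    sizes.append(x.shape)
```

The decoder then doubles the resolution at each stage (6×6 back up to 48×48), which is why the upsampling column of Tab. 1 is the downsampling column reversed.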
Dataset | Total patches | Training patches | Validation patches | Test patches | Patch size
---|---|---|---|---|---
DRIVE | 36 000 | 32 400 | 3 600 | 11 232 | 48×48
STARE | 36 000 | 32 400 | 3 600 | 14 672 | 48×48
CHASE_DB1 | 39 200 | 35 280 | 3 920 | 34 953 | 48×48

Tab. 2 Number of image patches used in each stage
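The training/validation columns of Tab. 2 are consistent with a 90/10 split of the training-stage patch pool; the ratio is our inference from the figures (32 400 / 3 600 of 36 000), not something restated on this page:

```python
def split_patches(total):
    """Split the training-stage patch pool into train/validation counts.

    Assumes the 90/10 ratio inferred from Tab. 2.
    """
    train = total * 9 // 10          # 90% for training (exact integer arithmetic)
    return train, total - train      # remainder for validation

# Reproduce the training/validation columns of Tab. 2.
assert split_patches(36_000) == (32_400, 3_600)   # DRIVE and STARE
assert split_patches(39_200) == (35_280, 3_920)   # CHASE_DB1
```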
Method | Planned epochs | Time per epoch/s | Actual epochs | Training time/s
---|---|---|---|---
U-Net | 150 | 135 | 120 | 16 200
Proposed | 25 | 112 | 15 | 1 680

Tab. 3 Training cycle time comparison
Method | Acc | Se | Sp
---|---|---|---
4 regions | 0.956 6(±0.012 6) | 0.783 1(±0.100 9) | 0.982 3(±0.009 3)
9 regions | 0.955 4(±0.012 4) | 0.808 6(±0.092 7) | 0.977 5(±0.011 8)

Tab. 4 Segmentation results of 4 regions and 9 regions
Dataset | Acc (first pass) | Se (first pass) | Sp (first pass) | Acc (second pass) | Se (second pass) | Sp (second pass)
---|---|---|---|---|---|---
DRIVE | 0.956 8 | 0.786 8 | 0.981 8 | 0.957 5 | 0.801 8 | 0.980 4
STARE | 0.960 3 | 0.785 5 | 0.980 9 | 0.961 2 | 0.765 6 | 0.983 3
CHASE_DB1 | 0.956 2 | 0.680 3 | 0.989 5 | 0.958 9 | 0.725 6 | 0.988 3

Tab. 5 Comparison of U-Net iterative enhancement results (first vs. second vessel enhancement)
Method | Acc | Se | Sp
---|---|---|---
Hard threshold | 0.957 0±0.011 5 | 0.788 5±0.107 0 | 0.981 6±0.009 0
Adaptive threshold PCNN | 0.957 6±0.011 4 | 0.800 3±0.073 4 | 0.980 4±0.046 2

Tab. 6 Comparison of adaptive threshold PCNN and hard threshold segmentation results
Category | Method | Se | Sp | Acc | AUC
---|---|---|---|---|---
Unsupervised | Hugo | 0.785 4 | — | 0.950 3 | 0.875 8
 | Zhao | 0.782 0 | 0.979 0 | 0.957 0 | 0.886 0
 | Zhou | 0.726 2 | 0.980 3 | 0.947 5 | —
 | Azzopardi | 0.752 6 | 0.970 7 | 0.942 7 | 0.957 1
Supervised | Strisciuglio | 0.777 7 | 0.970 2 | 0.945 4 | 0.959 7
 | Zhang | 0.786 1 | 0.971 2 | 0.946 6 | 0.970 3
 | Orlando | 0.789 7 | 0.968 4 | 0.945 4 | 0.950 6
 | Fu | 0.760 3 | — | 0.952 3 | —
 | Jin | — | — | 0.956 6 | 0.980 2
 | Yu | 0.764 3 | 0.980 3 | 0.952 4 | 0.972 3
 | Proposed | 0.801 8 | 0.980 4 | 0.957 5 | 0.979 6

Tab. 7 Performance comparison of different methods on DRIVE dataset
Category | Method | Se | Sp | Acc | AUC
---|---|---|---|---|---
Unsupervised | Hugo | 0.711 6 | 0.945 4 | 0.923 1 | 0.828 5
 | Zhao | 0.789 0 | 0.978 0 | 0.956 0 | 0.885 0
 | Zhou | 0.786 5 | 0.973 0 | 0.953 5 | —
 | Azzopardi | 0.754 3 | 0.968 9 | 0.947 6 | 0.948 7
Supervised | Strisciuglio | 0.804 6 | 0.971 0 | 0.953 4 | 0.963 8
 | Zhang | 0.788 2 | 0.972 9 | 0.954 7 | —
 | Orlando | 0.768 0 | 0.973 8 | 0.951 9 | 0.957 0
 | Fu | 0.741 2 | — | 0.958 5 | —
 | Soomro | 0.801 0 | 0.969 0 | 0.961 0 | 0.945 0
 | Proposed | 0.765 6 | 0.983 3 | 0.961 2 | 0.980 9

Tab. 8 Performance comparison of different methods on STARE dataset
Category | Method | Se | Sp | Acc | AUC
---|---|---|---|---|---
Unsupervised | Azzopardi | 0.725 7 | 0.965 1 | 0.941 1 | 0.943 4
 | Srinidhi | 0.829 7 | 0.966 3 | 0.947 4 | 0.959 1
 | Zhang | 0.756 3 | 0.967 5 | 0.945 7 | 0.956 5
 | Roychowdhury | 0.720 1 | 0.982 4 | 0.953 0 | 0.953 2
Supervised | Zhang | 0.764 4 | 0.971 6 | 0.950 2 | 0.970 6
 | Orlando | 0.727 7 | 0.971 2 | 0.946 7 | 0.947 8
 | Fu | 0.713 0 | — | 0.948 9 | —
 | Jin | — | — | 0.961 0 | 0.980 4
 | Yan | 0.764 1 | 0.980 6 | 0.960 7 | 0.977 6
 | Thangaraj | 0.628 8 | — | 0.946 8 | 0.797 1
 | Suraj | 0.880 5 | 0.965 1 | 0.960 1 | 0.976 3
 | Proposed | 0.725 6 | 0.988 3 | 0.958 9 | 0.982 7

Tab. 9 Performance comparison of different methods on CHASE_DB1 dataset
Training set | Test set | Se | Sp | Acc | AUC
---|---|---|---|---|---
DRIVE | DRIVE | 0.755 1 | 0.986 0 | 0.956 6 | 0.980 6
STARE | STARE | 0.783 4 | 0.980 2 | 0.960 2 | 0.979 7
CHASE_DB1 | CHASE_DB1 | 0.679 6 | 0.990 4 | 0.955 9 | 0.981 1
DRIVE | STARE | 0.739 5 | 0.975 0 | 0.951 0 | 0.962 8
DRIVE | CHASE_DB1 | 0.542 4 | 0.991 4 | 0.928 3 | 0.959 7
STARE | DRIVE | 0.567 8 | 0.995 2 | 0.940 8 | 0.971 6
STARE | CHASE_DB1 | 0.573 2 | 0.982 5 | 0.937 1 | 0.946 2
CHASE_DB1 | DRIVE | 0.489 7 | 0.991 5 | 0.927 7 | 0.954 6
CHASE_DB1 | STARE | 0.641 2 | 0.985 3 | 0.950 2 | 0.971 8

Tab. 10 Cross-dataset validation performance
1 | WANG S L, YIN Y L, CAO G B, et al. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning[J]. Neurocomputing, 2015, 149: 708-717. 10.1016/j.neucom.2014.07.059 |
2 | HUGO A R, JUAN G-A C, IVAN C A, et al. Blood vessel segmentation in retinal fundus images using Gabor filters, fractional derivatives, and expectation maximization[J]. Applied Mathematics and Computation, 2018, 339:568-587. 10.1016/j.amc.2018.07.057 |
3 | SIGUROSSON E M, VALERO S, BENEDIKTSSON J A, et al. Automatic retinal vessel extraction based on directional mathematical morphology and fuzzy classification[J]. Pattern Recognition Letters, 2014, 47:164-171. 10.1016/j.patrec.2014.03.006 |
4 | ZHAO Y T, ZHAO J L, YANG J, et al. Saliency driven vasculature segmentation with infinite perimeter active contour model[J]. Neurocomputing, 2017, 259:201-209. 10.1016/j.neucom.2016.07.077 |
5 | STRISCIUGLIO N, GEORGE A, MARIO V, et al. Supervised vessel delineation in retinal fundus images with the automatic selection of B-COSFIRE filters[J]. Machine Vision and Applications, 2016, 27(8):1137-1149. 10.1007/s00138-016-0781-7 |
6 | ZHANG J, CHEN Y, BEKERS E, et al. Retinal vessel delineation using a brain-inspired wavelet transform and random forest[J]. Pattern Recognition, 2017, 69:107-123. 10.1016/j.patcog.2017.04.008 |
7 | ORLANDO J I, PROKOFYEVA E, BLASCHKO M. A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images[J]. IEEE Transactions on Biomedical Engineering, 2016, 64(1):16-27. 10.1109/tbme.2016.2535311 |
8 | FU H Z, XU Y W, WONG D W K, et al. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields[C]// Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging. Piscataway: IEEE, 2016: 698-701. 10.1109/isbi.2016.7493362 |
9 | OLIVEIRA A, PEREIRA S, SILVA C A. Retinal vessel segmentation based on fully convolutional neural networks[J]. Expert Systems with Applications, 2018, 112:229-242. 10.1016/j.eswa.2018.06.034
10 | JIN Q G, MENG Z P, TUAN D P, et al. DUNet: a deformable network for retinal vessel segmentation[J]. Knowledge-Based Systems, 2019, 178:178. 10.1016/j.knosys.2019.04.025 |
11 | XU G Z, HU S, CHEN S, et al. Retinal blood vessel extraction by combining U-Net and Dense-Net[J]. Journal of Image and Graphics, 2019, 24(9): 1569-1580.
12 | PAN X Q, ZHANG Q R, ZHANG H, et al. A fundus retinal vessels segmentation scheme based on the improved deep learning U-Net model[J]. IEEE Access, 2019, 7:122634-122643. 10.1109/access.2019.2935138 |
13 | SOOMRO A T, AHMED J A, GAO J, et al. Boosting sensitivity of a retinal vessel segmentation algorithm with convolutional neural network[C]// Proceedings of the 2017 International Conference on Digital Image Computing: Techniques and Applications. Piscataway: IEEE, 2017: 1-8. 10.1109/dicta.2017.8227413 |
14 | DONA F J J. Fundus image vessel segmentation using PCNN model[C]// Proceedings of the 2016 Online International Conference on Green Engineering and Technologies. Piscataway: IEEE, 2016: 1-5. 10.1109/get.2016.7916778 |
15 | XU G Z, WANG Y W, HU S, et al. Retinal vascular segmentation combined with PCNN and morphological matching enhancement[J]. Opto-Electronic Engineering, 2019, 46(4): 180466. 10.12086/oee.2019.180466
16 | RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]// Proceedings of the 2015 Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2015: 234-241. 10.1007/978-3-319-24574-4_28
17 | WU C, LIU Z, JIANG H. Catenary image segmentation using the simplified PCNN with adaptive parameters[J]. Optik, 2018, 157:914-923. 10.1016/j.ijleo.2017.11.171
18 | IOFFE S, SZEGEDY C. Batch normalization: accelerating deep network training by reducing internal covariate shift[C]// Proceedings of the 32nd International Conference on Machine Learning. JMLR.org, 2015: 448-456.
19 | SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1):1929-1958.
20 | RAYA T H, BETTAIAH V, RANGANATH H S. Adaptive pulse coupled neural network parameters for image segmentation[J]. World Academy of Science Engineering & Technology, 2011, 5(1):90-96. |
21 | STAAL J, ABRÀMOFF M D, NIEMEIJER M, et al. Ridge-based vessel segmentation in color images of the retina[J]. IEEE Transactions on Medical Imaging, 2004, 23(4):501-509. 10.1109/tmi.2004.825627
22 | HOOVER A, KOUZNETSOVA V, GOLDBAUM M. Locating blood vessels in retinal images by piece-wise threshold probing of a matched filter response[J]. IEEE Transactions on Medical Imaging, 2000, 19(3):203-210. 10.1109/42.845178 |
23 | OWEN C G, RUDNICKA A R, MULLEN R. Measuring retinal vessel tortuosity in 10-year-old children: validation of the Computer-Assisted Image Analysis of the Retina (CAIAR) program[J]. Investigative Ophthalmology & Visual Science, 2009, 50(5):2004-2010. 10.1167/iovs.08-3018
24 | ZHOU C, ZHANG X G, CHEN H. A new robust method for blood vessel segmentation in retinal fundus images based on weighted line detector and hidden Markov model[J]. Computer Methods and Programs in Biomedicine, 2020, 187:105231. 10.1016/j.cmpb.2019.105231 |
25 | AZZOPARDI G, NICOLA S, MARIO V, et al. Trainable COSFIRE filters for vessel delineation with application to retinal images[J]. Medical Image Analysis, 2015, 19(1):46-57. 10.1016/j.media.2014.08.002 |
26 | YU L F, QIN Z, ZHUANG T, et al. A framework for hierarchical division of retinal vascular networks[J], Neurocomputing, 2020, 6(392):221-232. 10.1016/j.neucom.2018.11.113 |
27 | SOOMRO A T, AHMED J A, AHMED A S, et al. Impact of image enhancement technique on CNN model for retinal blood vessels segmentation[J]. IEEE Access, 2020, 7:158183-158197. |
28 | SRINIDHI C L, APARNA P, JENY R. A visual attention guided unsupervised feature learning for robust vessel delineation in retinal images[J]. Biomedical Signal Processing and Control, 2018, 44:110-126. 10.1016/j.bspc.2018.04.016 |
29 | ZHANG J, DASHTBOZORG B, BEKKERS E, et al. Robust retinal vessel segmentation via locally adaptive derivative frames in orientation scores[J]. IEEE Transactions on Medical Imaging, 2016, 35 (12):2631-2644. 10.1109/tmi.2016.2587062 |
30 | ROYCHOWDHURY S, KOOZEKANANI D D, PARHI K K. Blood vessel segmentation of fundus images by major vessel extraction and subimage classification[J]. IEEE Journal of Biomedical & Health Informatics, 2015, 19(3):1118-1128. 10.1109/JBHI.2014.2335617 |
31 | YAN Z Q, YANG X, CHENG K T. A three-stage deep learning model for accurate retinal vessel segmentation[J]. IEEE Journal of Biomedical and Health Informatics, 2018,23(4):1427-1436. 10.1109/jbhi.2018.2872813 |
32 | THANGARAJ S, PERIYASAMY V, BALAJI R. Retinal vessel segmentation using neural network[J]. IET Image Processing, 2018, 12(5):669-678. 10.1049/iet-ipr.2017.0284
33 | SURAJ M, DANNY Z, CHEN X, et al. A data-aware deep supervised method for retinal vessel segmentation[C]// Proceedings of the 17th IEEE International Symposium on Biomedical Imaging. Piscataway: IEEE, 2020: 1254-1257. 10.1109/isbi45749.2020.9098403 |