The only official website of Journal of Computer Applications (《计算机应用》)


Print defects detection method based on deep comparison network

WANG Youxin1,2, CHEN Bin2,3,4*

  1. Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu Sichuan 610041, China;
    2. School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China;
    3. International Research Institute of Artificial Intelligence, Harbin Institute of Technology (Shenzhen), Shenzhen Guangdong 518055, China;
    4. Chongqing Research Institute, Harbin Institute of Technology, Chongqing 401100, China
  • Received:2021-11-13 Revised:2022-03-13 Accepted:2022-04-01 Online:2022-06-17 Published:2022-06-17
  • Corresponding author: CHEN Bin





Abstract: Print defects detection methods based on traditional image processing have poor robustness, and generic object detection methods based on deep learning are not fully suitable for print defects detection tasks. To address these problems, the comparison idea of template matching was combined with the semantic features of deep learning, and a Deep Comparison Network (CoNet) for print defects detection tasks was proposed. Firstly, a Deep Comparison Module (DCM) with a Siamese structure was proposed, which mines the semantic relationship between the detection image and the reference image by extracting and fusing their feature maps in the semantic space. Then, based on an asymmetric dual-path feature pyramid structure, a Multi-scale Change Detection Module (MsCDM) was proposed to locate and classify print defects. On the public printed circuit board defects dataset, the mean Average Precision (mAP) of CoNet is 99.1%, which is 9.8 and 1.5 percentage points higher than those of the template matching method and Faster Region-based Convolutional Neural Network (Faster R-CNN) respectively. Compared with the two baseline models that also adopt the change detection idea, Max-Pooling Group Pyramid Pooling (MP-GPP) and Change-Detection Single Shot Detector (CD-SSD), the mAP of CoNet is increased by 0.5 and 0.8 percentage points respectively. Under the same experimental settings, the mAP of CoNet on the more complex Lijin defects dataset is 69.8%, which is not only 12 and 5.3 percentage points higher than those of the template matching method and Faster R-CNN, but also 3.5 and 2.4 percentage points higher than those of MP-GPP and CD-SSD respectively. Besides, when the input image resolution is 640×640, the average time consumption of CoNet is 35.7 ms, which fully meets the real-time requirements of industrial detection tasks.
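As a rough illustration of the Siamese compare-and-fuse idea described in the abstract (a minimal NumPy sketch, not the paper's actual DCM, which uses learned convolutional features; `shared_backbone` and `deep_compare` are hypothetical names introduced here):

```python
import numpy as np

def shared_backbone(img, pool=2):
    """Toy stand-in for a shared CNN backbone: average-pool the image
    into a coarse 'feature map'. Illustrative only."""
    h, w = img.shape
    h2, w2 = h // pool, w // pool
    return img[:h2 * pool, :w2 * pool].reshape(h2, pool, w2, pool).mean(axis=(1, 3))

def deep_compare(detect_img, ref_img):
    """Siamese-style comparison: run BOTH images through the SAME
    backbone (shared weights), then fuse the two feature maps together
    with their absolute difference, a common change-detection fusion."""
    f_det = shared_backbone(detect_img)
    f_ref = shared_backbone(ref_img)
    diff = np.abs(f_det - f_ref)                   # highlights changed regions
    return np.stack([f_det, f_ref, diff], axis=0)  # (3, H', W') fused tensor

# Usage: a simulated print defect shows up as a nonzero response
# in the difference channel of the fused output.
ref = np.zeros((8, 8))
det = ref.copy()
det[2:4, 2:4] = 1.0          # simulated defect on the detection image
fused = deep_compare(det, ref)
```

In the actual network, the difference channel would feed the multi-scale change detection head rather than being read off directly.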

Key words: print defects detection, deep learning, Siamese convolutional neural network, feature pyramid, change detection
