Cross-layer fusion feature based on richer convolutional features for edge detection
SONG Jie, YU Yu, LUO Qifeng
Journal of Computer Applications 2020, 40(7): 2053-2058. DOI: 10.11772/j.issn.1001-9081.2019112057
Abstract
Aiming at problems such as chaotic and blurred edge lines produced by current deep-learning-based edge detection methods, an end-to-end Cross-layer Fusion Feature (CFF) model for edge detection based on RCF (Richer Convolutional Features) was proposed. In this model, RCF was used as the baseline, the Convolutional Block Attention Module (CBAM) was added to the backbone network, translation-invariant downsampling was adopted, and some downsampling operations in the backbone were removed to preserve image detail information, while dilated convolution was used to enlarge the receptive field of the model. In addition, feature maps were fused across layers so that high-level and low-level features could be fully combined. To balance the loss of each stage against the fusion loss, and to avoid excessive loss of low-level detail after multi-scale feature fusion, weight parameters were added to the losses. The model was trained on the Berkeley Segmentation Data Set (BSDS500) and the PASCAL VOC Context dataset, and image pyramid technology was used at test time to improve the quality of the edge maps. Experimental results show that the contours extracted by the CFF model are clearer than those of the baseline network and that the model alleviates the edge blurring problem. Evaluation on the BSDS500 benchmark shows that the model improves the Optimal Dataset Scale (ODS) and Optimal Image Scale (OIS) measures to 0.818 and 0.839 respectively.
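The weighting of per-stage losses against the fusion loss described in the abstract can be illustrated with a minimal PyTorch-style sketch; the function name, the plain binary cross-entropy, and the weight values below are illustrative assumptions, not the authors' actual implementation or parameters.

```python
# Minimal sketch (not the authors' code): combine per-stage side-output losses
# with the fused-output loss using weights, so that low-level detail is not
# washed out after multi-scale feature fusion.
import torch
import torch.nn.functional as F

def weighted_edge_loss(side_outputs, fused_output, target,
                       stage_weights=(0.5, 0.5, 0.5, 0.75, 1.0),  # assumed values
                       fusion_weight=1.1):                         # assumed value
    # Plain BCE-with-logits keeps the sketch short; RCF-style methods typically
    # use a class-balanced variant because edge pixels are rare.
    loss = fusion_weight * F.binary_cross_entropy_with_logits(fused_output, target)
    for w, side in zip(stage_weights, side_outputs):
        loss = loss + w * F.binary_cross_entropy_with_logits(side, target)
    return loss
```

In this sketch, `side_outputs` would be the per-stage edge logits and `fused_output` the logits produced after cross-layer fusion; tuning the weights is what balances stage losses against the fusion loss.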