Tracking appearance features based on attention self-correlation mechanism
Guangyi DOU, Fanan WEI, Chuangyi QIU, Jianshu CHAO
Journal of Computer Applications    2023, 43 (4): 1248-1254.   DOI: 10.11772/j.issn.1001-9081.2022030426

To address ID Switch (IDS) errors caused by ambiguous pedestrian features in Multi-Object Tracking (MOT) algorithms, and to verify the importance of pedestrian appearance in the tracking process, an Attention Self-Correlation Network (ASCN) based on a center-point detection model was proposed. First, channel and spatial attention networks were applied to the original image to obtain two different feature maps, decoupling the deep information. Then, more accurate pedestrian appearance features and pedestrian orientation information were obtained through self-correlation learning between the two feature maps, and this information was used in the tracking association process. In addition, a low-frame-rate video tracking dataset was produced to verify the performance of the improved algorithm. Under non-ideal frame-rate conditions, the improved algorithm obtained pedestrian appearance information through ASCN and achieved better accuracy and robustness than algorithms that use only pedestrian orientation information. Finally, the improved algorithm was evaluated on the MOT17 dataset of the MOT Challenge. Experimental results show that, compared with FairMOT (Fairness in MOT) without ASCN, the improved algorithm increases Multiple Object Tracking Accuracy (MOTA) and Identification F-Score (IDF1) by 0.5 and 1.1 percentage points respectively, reduces the number of IDS by 32.2%, and runs at 21.2 frames per second on a single NVIDIA Tesla V100 card. These results demonstrate that the improved algorithm not only reduces errors during tracking but also improves overall tracking performance while meeting real-time requirements.
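The abstract describes refining one feature map with channel attention and another with spatial attention, then correlating the two to decouple appearance information. A minimal numpy sketch of that general pattern is below; it assumes SE-style sigmoid gating and a normalized channel-wise correlation, which are common choices but not the paper's published implementation, and all function names are illustrative.

```python
import numpy as np

def channel_attention(x):
    # x: (C, H, W). Pool over space, then gate each channel (SE-style assumption).
    w = x.mean(axis=(1, 2))               # (C,) global average pooling
    w = 1.0 / (1.0 + np.exp(-w))          # sigmoid gate per channel
    return x * w[:, None, None]

def spatial_attention(x):
    # x: (C, H, W). Pool over channels, then gate each spatial location.
    m = x.mean(axis=0)                    # (H, W)
    m = 1.0 / (1.0 + np.exp(-m))          # sigmoid gate per location
    return x * m[None, :, :]

def self_correlation(a, b):
    # Correlate the two attention-refined maps: flatten spatial dims and
    # take the cosine similarity between every pair of channels.
    c = a.shape[0]
    fa = a.reshape(c, -1)
    fb = b.reshape(c, -1)
    fa = fa / (np.linalg.norm(fa, axis=1, keepdims=True) + 1e-8)
    fb = fb / (np.linalg.norm(fb, axis=1, keepdims=True) + 1e-8)
    return fa @ fb.T                      # (C, C) correlation matrix

rng = np.random.default_rng(0)
x = rng.random((8, 4, 4))                 # a toy feature map
corr = self_correlation(channel_attention(x), spatial_attention(x))
print(corr.shape)  # (8, 8)
```

The resulting correlation matrix could then feed the association step, where detections are matched across frames by appearance similarity.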

Cloth-changing person re-identification model based on semantic-guided self-attention network
Jianhua ZHONG, Chuangyi QIU, Jianshu CHAO, Ruicheng MING, Jianfeng ZHONG
Journal of Computer Applications    2023, 43 (12): 3719-3726.   DOI: 10.11772/j.issn.1001-9081.2022121875

Focused on the difficulty of extracting effective information in the cloth-changing person Re-IDentification (ReID) task, a cloth-changing person re-identification model based on a semantic-guided self-attention network was proposed. First, semantic information was used to segment the clothing out of the original image, producing a cloth-free image; both images were fed into a two-branch multi-head self-attention network to extract cloth-independent features and complete person features, respectively. Then, a Global Feature Reconstruction (GFR) module was designed to reconstruct the two global features, in which the clothing region was replaced by the more robust head features, making the salient information in the global features more prominent. A Local Feature Reorganization and Reconstruction (LFRR) module was also proposed to extract head and shoe features from the original and cloth-free images, emphasizing their detailed information and reducing the interference caused by changing shoes. Finally, in addition to the identity loss and triplet loss commonly used in person re-identification, a Feature Pull Loss (FPL) was proposed to pull together the local and global features, and the features of the complete and cloth-free images. On the PRCC (Person ReID under moderate Clothing Change) and VC-Clothes (Virtually Changing-Clothes) datasets, the mean Average Precision (mAP) of the proposed model improved by 4.6 and 0.9 percentage points respectively compared with the Clothing-based Adversarial Loss (CAL) model. On the Celeb-reID (Celebrities re-IDentification) and Celeb-reID-light (a light version of Celeb-reID) datasets, the mAP of the proposed model improved by 0.2 and 5.0 percentage points respectively compared with the Joint Loss Capsule Network (JLCN) model. The experimental results show that the proposed method has clear advantages in highlighting effective information in cloth-changing scenarios.
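The Feature Pull Loss described above draws the local/global features of the complete and cloth-free images toward each other. A minimal sketch of one plausible formulation, the mean pairwise squared Euclidean distance over the feature set, is shown below; the exact formulation in the paper may differ, and the function name is illustrative.

```python
import numpy as np

def feature_pull_loss(features):
    # features: list of (D,) vectors, e.g. global/local features of the
    # complete image and the cloth-free image. The loss is the mean
    # squared Euclidean distance over all pairs, pulling them together.
    n = len(features)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = features[i] - features[j]
            total += float(d @ d)
            pairs += 1
    return total / pairs

# Toy example: two identical features and one distant feature.
feats = [np.ones(4), np.ones(4), np.zeros(4)]
loss = feature_pull_loss(feats)
print(loss)  # (0 + 4 + 4) / 3 = 8/3
```

Minimizing this term alongside the identity and triplet losses encourages the cloth-free and complete-image representations to agree, which is the stated goal of the FPL.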
