Video anomaly detection for moving foreground regions
Lihu PAN, Shouxin PENG, Rui ZHANG, Zhiyang XUE, Xuzhen MAO
Journal of Computer Applications 2025, 45(4): 1300-1309. DOI: 10.11772/j.issn.1001-9081.2024040519

Imbalance in data distribution between static background information and moving foreground objects often leads to insufficient learning of abnormal foreground region information, thereby affecting the accuracy of Video Anomaly Detection (VAD). To address this issue, a Nested U-shaped Frame Predictive Generative Adversarial Network (NUFP-GAN) was proposed for VAD. In the proposed method, a nested U-shaped frame prediction network, capable of highlighting salient targets in video frames, was used as the frame prediction module. In the discrimination stage, a self-attention patch discriminator was designed to extract more important appearance and motion features from video frames through receptive fields of different sizes, thereby improving the accuracy of anomaly detection. In addition, to keep the multi-scale features of predicted frames and real frames consistent in high-level semantics, a multi-scale consistency loss was introduced to further improve anomaly detection performance. Experimental results show that the proposed method achieves Area Under the Curve (AUC) values of 87.6%, 85.2%, 96.0%, and 73.3% on the CUHK Avenue, UCSD Ped1, UCSD Ped2, and ShanghaiTech datasets, respectively; on the ShanghaiTech dataset, its AUC is 1.8 percentage points higher than that of the MAMC (Memory-enhanced Appearance-Motion Consistency) method. These results indicate that the proposed method effectively addresses the challenges posed by the imbalanced data distribution in VAD.
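As a rough illustration of the multi-scale consistency idea described in the abstract, the sketch below compares a predicted frame with the corresponding real frame at several spatial scales and averages the per-scale distances. This is a minimal assumption-laden sketch, not the paper's implementation: the function name, the use of average pooling for downsampling, the L1 distance, and the number of scales are all illustrative choices.

```python
# Hypothetical sketch of a multi-scale consistency loss: compare predicted and
# real frames at several spatial scales so their features stay aligned.
# All names and the choice of L1 distance are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def multi_scale_consistency_loss(pred_frame: torch.Tensor,
                                 real_frame: torch.Tensor,
                                 num_scales: int = 3) -> torch.Tensor:
    """pred_frame, real_frame: (N, C, H, W) tensors in the same value range."""
    loss = pred_frame.new_zeros(())
    for s in range(num_scales):
        if s > 0:
            # Downsample both frames by a factor of 2 per additional scale.
            pred_frame = F.avg_pool2d(pred_frame, kernel_size=2)
            real_frame = F.avg_pool2d(real_frame, kernel_size=2)
        loss = loss + F.l1_loss(pred_frame, real_frame)
    return loss / num_scales

# Example usage with random frames (batch of 2, RGB, 256x256):
pred = torch.rand(2, 3, 256, 256)
real = torch.rand(2, 3, 256, 256)
print(multi_scale_consistency_loss(pred, real))
```

In practice such a term would be added to the generator's objective alongside the adversarial and prediction losses; the weighting between terms is not specified here.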
