Face forgery detection method based on tri-branch feature extraction
Shengwei XU, Jianbo WANG, Jijie HAN, Yijie BAI
Journal of Computer Applications    2026, 46 (4): 1292-1299.   DOI: 10.11772/j.issn.1001-9081.2025040461

To address the problems of insufficient feature representation, poor robustness, and weak cross-domain generalization when handling diverse forgery types and low-quality images, a face forgery detection method based on tri-branch feature extraction, Tri-BranchNet (Tri-Branch feature extraction Network), was proposed to achieve the complementarity and integration of multiple feature types, and to enhance forgery trace representation and the model's detection performance. The architecture was designed as follows: 1) global semantic representations were captured using Vision Transformer (ViT); 2) local texture feature modeling ability was improved by introducing an Invertible Neural Network (INN); 3) an edge feature extraction branch was designed to address traditional models' inadequate extraction of features from boundary forgery regions. Experimental results on multiple public datasets show that the proposed method achieves 98.75% accuracy on the FaceForensics++ (C23) dataset, outperforming F3-Net (Frequency in Face Forgery Network) and CORE (COnsistent REpresentation learning) by 1.26% and 1.17%, respectively. In cross-compression and cross-dataset tests, the proposed method achieves Area Under Curve (AUC) scores of 85.26% and 81.09% on C40 and Celeb-DF, respectively, demonstrating strong robustness and generalization. These results indicate that the proposed tri-branch fusion mechanism significantly enhances detection accuracy in complex forgery environments and provides a new approach to multi-dimensional feature modeling of forgery images.
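The tri-branch idea described above (global semantics, local texture, edge cues, then fusion) can be sketched with simple stand-in feature extractors. This is a minimal illustration only: the `global_branch`, `local_branch`, and `edge_branch` functions below are hypothetical NumPy placeholders for the paper's ViT, INN, and edge branches, and concatenation is assumed as one simple fusion choice; none of this reproduces the actual Tri-BranchNet implementation.

```python
import numpy as np

def global_branch(img):
    """Stand-in for the ViT branch (hypothetical): coarse 8x8 patch means
    as a crude proxy for global semantic features."""
    return img.reshape(8, 28, 8, 28).mean(axis=(1, 3)).ravel()   # 64-dim

def local_branch(img):
    """Stand-in for the INN branch (hypothetical): per-patch variance
    as a crude local-texture statistic."""
    return img.reshape(8, 28, 8, 28).var(axis=(1, 3)).ravel()    # 64-dim

def edge_branch(img):
    """Stand-in for the edge branch (hypothetical): pooled gradient
    magnitude, emphasizing boundary regions."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return mag.reshape(8, 28, 8, 28).mean(axis=(1, 3)).ravel()   # 64-dim

def tri_branch_features(img):
    """Fuse the three branches by concatenation into one feature vector."""
    return np.concatenate([global_branch(img),
                           local_branch(img),
                           edge_branch(img)])

rng = np.random.default_rng(0)
img = rng.random((224, 224))           # one grayscale face crop
feat = tri_branch_features(img)
print(feat.shape)                      # (192,)
```

A downstream classifier would then score `feat` as real or forged; the complementarity claim in the abstract corresponds to each branch contributing a distinct slice of this fused vector.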
