Vehicle insurance fraud detection method based on improved graph attention network
Jinjiao LIN, Canshun ZHANG, Shuya CHEN, Tianxin WANG, Jian LIAN, Yonghui XU
Journal of Computer Applications    2026, 46 (2): 437-444.   DOI: 10.11772/j.issn.1001-9081.2025020151

Aiming at the problem that existing vehicle insurance fraud detection methods ignore the complex correlations in the data, a vehicle insurance fraud detection method based on an improved graph attention network was proposed. In this method, the ability to capture complex correlations in the data was enhanced through the collaborative design of a dynamic attention mechanism and serialized global modeling. Firstly, each vehicle insurance case was abstracted as a node in a graph structure. Secondly, the similarity between multiple attributes of the nodes, such as time, age, and amount, was calculated by the K-Nearest Neighbor (KNN) algorithm, so as to construct the complex correlations among the cases. Thirdly, the case graph data was input into GATv2 (dynamic Graph ATtention network), and the local features of adjacent nodes were aggregated by allocating node weights dynamically, thereby obtaining a new feature representation for each case node. Fourthly, Transformer was introduced to serialize the graph-structured output of GATv2. Finally, a fusion module was used to perform nonlinear integration of the final features, so as to obtain the classification results of the case nodes. Experimental results show that, compared with the baseline methods, the proposed method improves accuracy on the two datasets by at least 1.11 and 1.34 percentage points, respectively, and achieves a False Positive Rate (FPR) as low as 0.035% on the insurance company dataset, providing a new technical solution for improving the accuracy and efficiency of vehicle insurance fraud detection.
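The KNN-based graph construction step described in the abstract can be sketched as follows. This is a minimal illustration only: the feature columns, normalization, distance metric, and value of k are assumptions for the example, not details taken from the paper.

```python
# Illustrative sketch: build a k-nearest-neighbor graph over insurance-case
# feature vectors (here assumed to be normalized time, age, amount), so that
# each case node is linked to its k most similar cases.
import math

def knn_edges(features, k=2):
    """Return directed edges (i, j) linking each case i to its k most
    similar cases j under Euclidean distance on the attribute vectors."""
    edges = []
    for i, fi in enumerate(features):
        # sort all other nodes by distance to node i
        dists = sorted(
            (math.dist(fi, fj), j)
            for j, fj in enumerate(features) if j != i
        )
        edges.extend((i, j) for _, j in dists[:k])
    return edges

cases = [
    [0.10, 0.30, 0.20],   # each row: assumed normalized (time, age, amount)
    [0.20, 0.35, 0.25],
    [0.90, 0.80, 0.70],
    [0.15, 0.32, 0.22],
]
print(knn_edges(cases, k=2))
```

The resulting edge list would then define the adjacency used by the graph attention layers; in the paper's pipeline, GATv2 aggregates neighbor features along exactly such KNN-derived edges.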

Cross‑resolution person re‑identification by generative adversarial network based on multi‑granularity features
Yanbing GENG, Yongjian LIAN
Journal of Computer Applications    2022, 42 (11): 3573-3579.   DOI: 10.11772/j.issn.1001-9081.2021122124

Existing Super Resolution (SR) reconstruction methods based on Generative Adversarial Network (GAN) for cross-resolution person Re-IDentification (ReID) suffer from deficiencies in both recovering the texture and structure content and maintaining the feature consistency of the reconstructed images. To solve these problems, a cross-resolution person re-identification method based on a multi-granularity information generation network was proposed. Firstly, a self-attention mechanism was introduced into multiple layers of the generator to focus on multi-granularity stable regions with structural correlation, concentrating on recovering the texture and structure information of the Low Resolution (LR) person image. At the same time, an identifier was added at the end of the generator to minimize the loss between the generated image and the real image at different feature granularities during training, improving the feature consistency between the generated image and the real image. Secondly, the self-attention generator and the identifier were combined and optimized alternately with the discriminator to improve the generated image in terms of both content and features. Finally, the improved GAN and the person re-identification network were combined, and the model parameters of the optimized network were trained alternately until the model converged. Comparative experimental results on several cross-resolution person re-identification datasets show that the proposed algorithm improves rank-1 accuracy on the Cumulative Match Characteristic (CMC) curve by 10 percentage points on average, and performs better in enhancing both the content consistency and the feature expression consistency of SR images.
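The rank-1 metric reported above can be illustrated with a minimal sketch of the CMC evaluation protocol: a query counts as a rank-1 hit when its nearest gallery entry shares its identity. The feature vectors, identities, and function names below are assumptions for illustration, not data from the paper.

```python
# Illustrative sketch of rank-1 accuracy under the Cumulative Match
# Characteristic (CMC) protocol used to evaluate re-identification.
import math

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Fraction of queries whose nearest gallery feature (Euclidean
    distance) carries the same identity label."""
    hits = 0
    for qf, qid in zip(query_feats, query_ids):
        # index of the gallery entry closest to this query
        best = min(range(len(gallery_feats)),
                   key=lambda g: math.dist(qf, gallery_feats[g]))
        hits += (gallery_ids[best] == qid)
    return hits / len(query_feats)

gallery = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]   # toy gallery features
g_ids = ["A", "B", "C"]
queries = [[0.1, 0.9], [0.9, 0.1]]               # toy query features
q_ids = ["A", "B"]
print(rank1_accuracy(queries, q_ids, gallery, g_ids))  # → 1.0
```

In the cross-resolution setting the paper addresses, the query features would come from super-resolved low-resolution images, which is why the quality of the GAN reconstruction directly affects this metric.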
