Semi-supervised stance detection based on category-aware curriculum learning
Zhaoze GAO, Xiaofei ZHU, Nengqiang XIANG
Journal of Computer Applications 2024, 44 (10): 3281-3287. DOI: 10.11772/j.issn.1001-9081.2023101558

Pseudo-label generation is an effective strategy for semi-supervised stance detection. In practice, the quality of generated pseudo-labels varies; however, existing works treat all pseudo-labels as equally reliable. Moreover, the influence of category imbalance on pseudo-label quality is not fully considered. To address these issues, a Semi-supervised stance Detection model based on Category-aware curriculum Learning (SDCL) was proposed. Firstly, a pre-trained classification model was employed to generate pseudo-labels for unlabeled tweets. Then, the tweets were sorted by pseudo-label quality within each category, and the top-k high-quality tweets of each category were selected. Finally, the selected tweets of all categories were merged, re-sorted, and fed into the classification model together with their pseudo-labels, thereby further optimizing the model parameters. Experimental results show that, compared with the best-performing baseline SANDS (Stance Analysis via Network Distant Supervision), the proposed model improves the Mac-F1 (Macro-averaged F1) score on the StanceUS dataset by 2, 1, and 3 percentage points respectively under three different splits (with 500, 1 000, and 1 500 labeled tweets), and improves the Mac-F1 score on the StanceIN dataset by 1 percentage point under all three splits, which validates the effectiveness of the proposed model.
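A minimal sketch of the category-aware pseudo-label selection step described above, assuming that pseudo-label quality is approximated by classifier confidence and that the classifier exposes a `predict_proba`-style interface returning an array of class probabilities (neither detail is specified in the abstract; all names are illustrative):

```python
from collections import defaultdict

def select_pseudo_labels(model, unlabeled_tweets, k):
    """Return the top-k most confident pseudo-labeled tweets per stance category."""
    probs = model.predict_proba(unlabeled_tweets)      # assumed shape: (n_tweets, n_classes)
    by_category = defaultdict(list)
    for tweet, p in zip(unlabeled_tweets, probs):
        label = int(p.argmax())                        # pseudo-label = most likely class
        by_category[label].append((float(p.max()), tweet, label))

    selected = []
    for items in by_category.values():
        items.sort(key=lambda x: x[0], reverse=True)   # sort each category by quality
        selected.extend(items[:k])                     # keep the top-k per category

    # Merge all categories and re-sort into an easy-to-hard curriculum order.
    selected.sort(key=lambda x: x[0], reverse=True)
    return [(tweet, label) for _, tweet, label in selected]
```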

Information diffusion prediction model of prototype-aware dual-channel graph convolutional neural network
Nengqiang XIANG, Xiaofei ZHU, Zhaoze GAO
Journal of Computer Applications 2024, 44 (10): 3260-3266. DOI: 10.11772/j.issn.1001-9081.2023101557

Aiming at the problem that existing information diffusion prediction models have difficulty in mining users' dependencies within cascades, a Prototype-aware Dual-channel Graph Convolutional neural Network (PDGCN) information diffusion prediction model was proposed. Firstly, a HyperGraph Convolutional Network (HGCN) was used to learn user and cascade representations at the cascade hypergraph level, while a Graph Convolutional Network (GCN) was used to learn user representations from the dynamic friendship forwarding graph. Secondly, for a given target cascade, the user representations corresponding to the current cascade were retrieved from these two channels and fused. Thirdly, prototypes of the cascade representations were obtained through a clustering algorithm. Finally, the prototype that best matched the current cascade was found and integrated into each user representation in the cascade to compute the diffusion probability of candidate users. Compared with the Memory-enhanced Sequential HyperGraph ATtention network (MS-HGAT), PDGCN improves Hits@100 by 1.17% and MAP@100 by 5.02% on the Twitter dataset, and improves Hits@100 by 3.88% and MAP@100 by 0.72% on the Android dataset. Experimental results show that the proposed model outperforms the comparison models in the information diffusion prediction task and has better prediction performance.
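A rough sketch of the channel fusion and prototype-matching steps, under assumed embedding shapes and a simple additive fusion; the abstract does not specify the exact fusion or scoring functions, so every helper name and design choice below is hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

def predict_diffusion(hyper_user_emb, graph_user_emb, cascade_embs,
                      target_cascade_emb, cascade_user_ids, n_prototypes=8):
    """Score every candidate user's probability of joining the target cascade."""
    # Fuse the two channels of user representations (simple sum as an assumption).
    user_emb = hyper_user_emb + graph_user_emb              # (n_users, d)

    # Cluster cascade representations to obtain prototypes.
    prototypes = KMeans(n_clusters=n_prototypes, n_init=10).fit(cascade_embs).cluster_centers_

    # Pick the prototype closest to the current cascade.
    best_proto = prototypes[np.linalg.norm(prototypes - target_cascade_emb, axis=1).argmin()]

    # Integrate the prototype into each user representation of the current cascade,
    # pool them into a context vector, then score all users and softmax-normalize.
    context = (user_emb[cascade_user_ids] + best_proto).mean(axis=0)
    scores = user_emb @ context
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()                               # diffusion probability per user
```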

Hardware reconstruction acceleration method of convolutional neural network-based single image defogging model
Guanjun WANG, Chunlian JIAN, Qiang XIANG
Journal of Computer Applications 2022, 42 (10): 3184-3190. DOI: 10.11772/j.issn.1001-9081.2021081475

Single image defogging models based on Convolutional Neural Network (CNN) are difficult to deploy on mobile/embedded systems and to use for real-time video defogging. To solve this problem, a hardware reconstruction and acceleration method based on the Zynq System-on-Chip (SoC) was proposed. First, a quantization-dequantization algorithm was proposed to quantize two representative defogging models. Second, based on a video-stream memory architecture, hardware/software co-design, pipelining, and a High-Level Synthesis (HLS) tool, the quantized defogging model was reconstructed and a hardware IP core with Advanced eXtensible Interface 4 (AXI4) was generated. Experimental results show that the model parameters can be quantized from float32 to int5 (5-bit) without degrading defogging performance, saving about 84.4% of storage space; the maximum pixel clock frequency of the generated hardware IP core is 182 Mpixel/s, enabling 1080P@60 frame/s video defogging; the hardware IP core processes a single hazy image with a resolution of 640 pixel × 480 pixel in only 2.4 ms, with an on-chip power consumption of only 2.25 W. The AXI4-based hardware IP core is also convenient for cross-platform migration and deployment, which expands the application scope of CNN-based single image defogging models.
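A hedged illustration of a symmetric quantize-dequantize pass that maps float32 weights to signed 5-bit integer codes and back; the paper's actual quantization-dequantization algorithm is not given in the abstract, so this is only a generic example with illustrative names:

```python
import numpy as np

def quantize_dequantize(weights: np.ndarray, n_bits: int = 5):
    """Map float32 weights to signed n-bit integer codes and back (simulated quantization)."""
    qmax = 2 ** (n_bits - 1) - 1                    # 15 for int5
    scale = float(np.abs(weights).max()) / qmax
    if scale == 0.0:                                # all-zero tensor edge case
        scale = 1.0
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, q.astype(np.float32) * scale          # integer codes, dequantized weights

# Example: quantize one convolution kernel and check the reconstruction error.
w = np.random.randn(3, 3, 16, 16).astype(np.float32)
q, w_hat = quantize_dequantize(w, n_bits=5)
print(q.min(), q.max(), float(np.abs(w - w_hat).max()))
```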
