Image watermarking method combining attention mechanism and multi-scale feature
Tianqi ZHANG, Shuang TAN, Xiwen SHEN, Juan TANG
Journal of Computer Applications    2025, 45 (2): 616-623.   DOI: 10.11772/j.issn.1001-9081.2024030282

Aiming at the problems that deep learning-based watermarking methods neither fully highlight the key features of the image nor effectively utilize the output features of the intermediate convolutional layers, and to improve the visual quality of the watermarked image and its robustness against noise attacks, an image watermarking method combining an attention mechanism with multi-scale features was proposed. An attention module was designed in the encoder to focus on important image features, thereby reducing the image distortion caused by watermark embedding; a multi-scale feature extraction module was designed in the decoder to capture different levels of image detail. Experimental results show that, compared with the deep watermarking model HiDDeN (Hiding Data with Deep Networks) on the COCO dataset, the proposed method increases the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) of the generated watermarked image by 11.63% and 1.29% respectively, and reduces the average Bit Error Rate (BER) of watermark extraction under dropout, cropout, crop, Gaussian blur, and JPEG compression attacks by 53.85%. In addition, ablation results confirm that adding the attention module and the multi-scale feature extraction module yields better invisibility and robustness.
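
As an illustration of the two modules described above, here is a minimal PyTorch sketch; the layer shapes, reduction ratio and kernel sizes are assumptions for illustration, not the paper's actual configuration:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention: reweight channels so the
    encoder embeds the watermark into less perceptually important features."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

class MultiScaleBlock(nn.Module):
    """Parallel convolutions with different receptive fields, concatenated,
    so the decoder sees several levels of image detail at once."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```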

Deep temporal event detection algorithm based on signal temporal logic
Siqi ZHANG, Jinjun ZHANG, Tianyi WANG, Xiaolin QIN
Journal of Computer Applications    2025, 45 (1): 90-97.   DOI: 10.11772/j.issn.1001-9081.2024010131

Aiming at the insufficient accuracy of deep event detection models on complex temporal events and their neglect of inter-event correlations, a deep temporal event detection algorithm based on signal temporal logic, DSTL (Deep Signal Temporal Logic), was proposed. On the one hand, a signal temporal logic framework was introduced and events in time series were modeled with Signal Temporal Logic (STL) formulae, so that both the logical and the temporal structure of events were considered. On the other hand, neural network base classifiers were used to detect the occurrence of atomic events, and the detection of complex events was guided by the structure and semantics of the STL formulae; the logical connectives and temporal operators were replaced with corresponding neural network modules supporting GPU acceleration and gradient descent. Experiments on six time series datasets validated the effectiveness of the proposed algorithm for temporal event detection, comparing the model using the DSTL algorithm with deep temporal event detection models based on MLP (MultiLayer Perceptron), Long Short-Term Memory (LSTM) network and Transformer that do not use it. The results indicate that the model using the DSTL algorithm improves the average F1 score on five event categories by approximately 12%, improves the average F1 score on three categories of cross-time-point events by approximately 14%, and offers better interpretability.
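
To make the idea of replacing logical and temporal operators with differentiable modules concrete, here is a hedged Python sketch of soft STL semantics (a soft minimum for conjunction, sliding-window max/min for "eventually" and "globally"); the paper's actual neural modules may differ:

```python
import torch

def soft_and(a, b, temp=10.0):
    # Smooth minimum via a softmin weighting: a differentiable surrogate
    # for logical AND over two robustness values.
    x = torch.stack([a, b], dim=-1)
    w = torch.softmax(-temp * x, dim=-1)
    return (w * x).sum(-1)

def eventually(r, window):
    # F_[0,w] phi : max of the subformula's robustness over a sliding window.
    # r: (batch, T) robustness signal; returns (batch, T - window + 1).
    return r.unfold(dimension=1, size=window, step=1).max(dim=-1).values

def globally(r, window):
    # G_[0,w] phi : min over a sliding window, expressed via the dual of F.
    return -eventually(-r, window)
```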

Robust resource allocation optimization in cognitive wireless network integrating information communication and over-the-air computation
Hualiang LUO, Quanzhong LI, Qi ZHANG
Journal of Computer Applications    2024, 44 (4): 1195-1202.   DOI: 10.11772/j.issn.1001-9081.2023050573

To address the limited power of wireless sensors in over-the-air computation networks and their spectrum competition with existing wireless information communication networks, a cognitive wireless network integrating information communication and over-the-air computation was studied, in which the primary network carries wireless information communication while the secondary network supports over-the-air computation, with the sensors harvesting energy from the signals sent by the primary network's base station. Considering the constraints on the Mean Square Error (MSE) of over-the-air computation and on the transmit power of each node, and accounting for random channel uncertainty, a robust resource optimization problem was formulated with the objective of maximizing the sum rate of the wireless information communication users. To solve the robust optimization problem efficiently, an Alternating Optimization (AO)-based Improved Constrained Stochastic Successive Convex Approximation (ICSSCA) algorithm, called AO-ICSSCA, was proposed: the original robust problem was transformed into deterministic optimization sub-problems, and the downlink beamforming vector of the primary network's base station, the power factors of the sensors, and the fusion beamforming vector of the secondary network's fusion center were optimized alternately. Simulation results demonstrate that the AO-ICSSCA algorithm achieves better performance with less computing time than the original Constrained Stochastic Successive Convex Approximation (CSSCA) algorithm.

Review of interactive machine translation
Xingbin LIAO, Xiaolin QIN, Siqi ZHANG, Yangge QIAN
Journal of Computer Applications    2023, 43 (2): 329-334.   DOI: 10.11772/j.issn.1001-9081.2021122067

With the development and maturation of deep learning, the quality of neural machine translation has improved, yet it is still imperfect and requires human post-editing to reach acceptable quality. Interactive Machine Translation (IMT) is an alternative to this serial workflow: the human interacts during the translation process, verifying the candidate translations produced by the system and providing new input when necessary, while the system generates new candidates based on the current user feedback; the process repeats until a satisfactory output is produced. Firstly, the basic concept of IMT and current research progress were introduced. Then, common methods and state-of-the-art works were classified, with the background and innovation of each work briefly described. Finally, the development trends and research difficulties of IMT were discussed.

Deep review attention neural network model for enhancing explainability of recommendation system
Chuyuan WEI, Mengke WANG, Chuanhao HU, Guangqi ZHANG
Journal of Computer Applications    2023, 43 (11): 3443-3448.   DOI: 10.11772/j.issn.1001-9081.2022101628

To improve the explainability of Recommendation Systems (RS), break their inherent limitations, and enhance users' trust in and satisfaction with them, a Deep Review Attention Neural Network (DRANN) model with enhanced explainability was proposed. Based on the potential relationships between users and items in text reviews, the rich semantic information in user reviews and item reviews was used by the model to predict users' interest preferences and sentiment tendencies. Firstly, a Text Convolutional Neural Network (TextCNN) was used to perform shallow feature extraction on word vectors. Then, an attention mechanism was used to assign weights to review data and filter out invalid review information. Meanwhile, a deep autoencoder module was constructed to reduce the dimension of the high-dimensional sparse data, remove interference, learn deep semantic representations, and enhance the explainability of the recommendation model. Finally, the predicted rating was obtained through the prediction layer. Experimental results on the four public datasets Patio, Automotive, Musical Instrument (MI) and Beauty show that the DRANN model achieves the smallest Root Mean Square Error (RMSE) compared with Probabilistic Matrix Factorization (PMF), Singular Value Decomposition++ (SVD++), Deep Cooperative Neural Network (DeepCoNN), Tree-enhanced Embedding Model (TEM), DeepCF (Deep Collaborative Filtering) and DER (Dynamic Explainable Recommender), verifying its effectiveness in improving performance and the feasibility of the adopted explanation strategy.
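
A minimal PyTorch sketch of the TextCNN step, the shallow feature extraction over review word vectors; the filter count and kernel widths are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Parallel 1-D convolutions with several kernel widths over the review's
    word vectors, max-pooled over the sequence into one feature vector."""
    def __init__(self, emb_dim: int, n_filters: int = 100, widths=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w) for w in widths])

    def forward(self, x):
        # x: (batch, seq_len, emb_dim) word vectors; seq_len >= max(widths)
        x = x.transpose(1, 2)                         # Conv1d wants channels first
        feats = [torch.relu(c(x)).max(dim=-1).values for c in self.convs]
        return torch.cat(feats, dim=-1)               # (batch, 3 * n_filters)
```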

Image denoising model based on approximate U-shaped network structure
Huazhong JIN, Xiuyang ZHANG, Zhiwei YE, Wenqi ZHANG, Xiaoyu XIA
Journal of Computer Applications    2022, 42 (8): 2571-2577.   DOI: 10.11772/j.issn.1001-9081.2021061126

Aiming at the problems of poor denoising effect and long training time in image denoising, an image denoising model based on an approximate U-shaped network structure was proposed. Firstly, the original linear network structure was modified into an approximate U-shaped structure by using convolutional layers with different strides. Then, image information from different receptive fields was superimposed to preserve the original information of the image as much as possible. Finally, a deconvolutional layer was introduced for image restoration and further noise removal. Experimental results on the Set12 and BSD68 test sets show that, compared with the Denoising Convolutional Neural Network (DnCNN) model, the proposed model achieves an average Peak Signal-to-Noise Ratio (PSNR) increase of 0.04 to 0.14 dB and an average training-time reduction of 41%, verifying that it has a better denoising effect with a shorter training period.
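
A hedged PyTorch sketch of such an approximate U-shaped denoiser: a strided convolution going down, a transposed convolution going up, and a skip connection that superimposes the two receptive fields. Channel counts and depth are assumptions, and even input sizes are assumed:

```python
import torch
import torch.nn as nn

class ApproxUNetDenoiser(nn.Module):
    """Strided conv shrinks the feature maps, transposed conv restores them,
    and the full-resolution features are added back before the output layer."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(True))
        self.down = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(True))
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True))
        self.up = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(True))
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):
        f1 = self.head(x)                         # full-resolution features
        f2 = self.up(self.body(self.down(f1)))    # half-resolution path
        return x - self.tail(f1 + f2)             # residual: predict the noise
```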

Lip language recognition algorithm based on single-tag radio frequency identification
Yingqi ZHANG, Dawei PENG, Sen LI, Ying SUN, Qiang NIU
Journal of Computer Applications    2022, 42 (6): 1762-1769.   DOI: 10.11772/j.issn.1001-9081.2021061390

In recent years, a wireless speech recognition platform using multiple customized, stretchable Radio Frequency Identification (RFID) tags has been proposed; however, such tags have difficulty accurately capturing the large frequency shifts caused by stretching, and multiple tags must be detected and recalibrated when they fall off or wear out naturally. In response to these problems, a lip language recognition algorithm based on a single RFID tag was proposed, in which one flexible, easily concealable and non-invasive general-purpose RFID tag is attached to the face, enabling lip language recognition even when the user makes no sound and relies only on facial micro-movements. Firstly, a model was established to process the Received Signal Strength (RSS) and phase changes of the single tag received by the RFID reader over time and frequency. Then, a Gaussian function was used to smooth and denoise the raw data, and the Dynamic Time Warping (DTW) algorithm was used to evaluate and analyze the collected signal features, solving the problem of mismatched pronunciation lengths. Finally, a wireless speech recognition system was built to recognize and distinguish the facial expressions corresponding to speech, thereby achieving lip language recognition. Experimental results show that the proposed algorithm achieves an accuracy of more than 86.5% on RSS when identifying 200 groups of digit signal features from different users.
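
A minimal Python sketch of the smoothing and matching steps; the Gaussian sigma and the per-word template dictionary are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Plain dynamic time warping distance, tolerating utterances of
    different lengths (the pronunciation-length mismatch above)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(rss_trace, templates):
    """Match a Gaussian-smoothed RSS trace against per-word templates
    (hypothetical dict: word -> reference trace) by DTW distance."""
    x = gaussian_filter1d(np.asarray(rss_trace, float), sigma=2.0)
    return min(templates, key=lambda word: dtw_distance(x, templates[word]))
```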

Density peak clustering algorithm based on adaptive nearest neighbor parameters
Huanhuan ZHOU, Bochuan ZHENG, Zheng ZHANG, Qi ZHANG
Journal of Computer Applications    2022, 42 (5): 1464-1471.   DOI: 10.11772/j.issn.1001-9081.2021050753

Aiming at the problem that the nearest neighbor parameter of the shared-nearest-neighbor-based density peak clustering algorithm must be set manually, a density peak clustering algorithm based on adaptive nearest neighbor parameters was proposed. Firstly, the proposed nearest neighbor parameter search algorithm was used to obtain the nearest neighbor parameters automatically. Then, the clustering centers were selected through the decision graph. Finally, according to the proposed representative-point allocation strategy, all sample points were clustered by allocating the representative points and then the non-representative points. The clustering results of the proposed algorithm were compared with those of six algorithms, Shared-Nearest-Neighbor-based Clustering by fast search and find of Density Peaks (SNN-DPC), Clustering by fast search and find of Density Peaks (DPC), Affinity Propagation (AP), Ordering Points To Identify the Clustering Structure (OPTICS), Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and K-means, on synthetic and UCI datasets. Experimental results show that the proposed algorithm outperforms the other six algorithms on evaluation indicators such as Adjusted Mutual Information (AMI), Adjusted Rand Index (ARI) and Fowlkes and Mallows Index (FMI). The proposed algorithm can obtain effective nearest neighbor parameters automatically and allocates the sample points in cluster edge regions better.
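
For orientation, a minimal sketch of the decision-graph quantities at the heart of density peak clustering; a plain k-nearest-neighbor kernel density stands in here for the paper's shared-nearest-neighbor density and adaptive parameter search:

```python
import numpy as np
from scipy.spatial.distance import cdist

def decision_graph(X: np.ndarray, k: int):
    """Compute the two DPC quantities: local density rho (here a simple
    k-nearest-neighbor kernel density) and delta, the distance to the
    nearest point of higher density. Cluster centers are the points
    where both rho and delta are large."""
    d = cdist(X, X)
    knn = np.sort(d, axis=1)[:, 1:k + 1]      # k nearest distances per point
    rho = np.exp(-knn.mean(axis=1))           # higher for denser points
    order = np.argsort(-rho)                  # indices by decreasing density
    delta = np.zeros(len(X))
    delta[order[0]] = d[order[0]].max()       # densest point: max distance
    for rank, i in enumerate(order[1:], start=1):
        delta[i] = d[i, order[:rank]].min()   # nearest higher-density point
    return rho, delta
```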

Sparse subspace clustering method based on random blocking
Qi ZHANG, Bochuan ZHENG, Zheng ZHANG, Huanhuan ZHOU
Journal of Computer Applications    2022, 42 (4): 1148-1154.   DOI: 10.11772/j.issn.1001-9081.2021071271

Aiming at the problem of large clustering errors in Sparse Subspace Clustering (SSC) methods, an SSC method based on random blocking was proposed. First, the original dataset was randomly divided into several subsets to construct several sub-problems. Then, after obtaining the coefficient matrices of the sub-problems with the sparse subspace Alternating Direction Method of Multipliers (ADMM), these coefficient matrices were expanded to the size of the original problem and integrated into one coefficient matrix. Finally, a similarity matrix was calculated from the integrated coefficient matrix, and the clustering result of the original problem was obtained with the Spectral Clustering (SC) algorithm. The SSC method based on random blocking reduces the subspace clustering error by 3.12 percentage points on average compared with the best of the SSC, Stochastic Sparse Subspace Clustering via Orthogonal Matching Pursuit with Consensus (S3COMP-C), scalable Sparse Subspace Clustering by Orthogonal Matching Pursuit (SSCOMP), SC and K-Means algorithms, and its mutual information, Rand index and entropy are all significantly better than those of the comparison algorithms. Experimental results show that the SSC method based on random blocking can significantly reduce the subspace clustering error and improve the clustering performance.
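
A hedged Python sketch of the random-blocking pipeline; `solve_block` is a hypothetical placeholder for the per-block sparse self-expression solver (ADMM in the paper):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def ssc_random_blocking(X, n_clusters, n_blocks, solve_block):
    """Randomly partition the samples, solve sparse self-expression on each
    block, embed each block's coefficients back into a full-size matrix,
    and run spectral clustering on the combined affinity."""
    n = X.shape[0]
    perm = np.random.permutation(n)
    C = np.zeros((n, n))
    for block in np.array_split(perm, n_blocks):
        C_sub = solve_block(X[block])             # |block| x |block| coefficients
        C[np.ix_(block, block)] = C_sub           # expand to full problem size
    W = np.abs(C) + np.abs(C).T                   # symmetric similarity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```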

Knowledge graph recommendation model with multiple time scales and feature enhancement
Suqi ZHANG, Xinxin WANG, Shiyao SHE, Junhua GU
Journal of Computer Applications    2022, 42 (4): 1093-1098.   DOI: 10.11772/j.issn.1001-9081.2021071241

Aiming at the problems that existing knowledge graph recommendation models consider neither the user's periodic features nor the influence of the items to be recommended on the user's recent interests, a knowledge graph recommendation model with Multiple Time scales and Feature Enhancement (MTFE) was proposed. Firstly, a Long Short-Term Memory (LSTM) network was used to mine the user's periodic features on different time scales and integrate them into the user representation. Then, an attention mechanism was used to mine the features of the items to be recommended that are strongly correlated with the user's recent features, which were enhanced and integrated into the item representation. Finally, a scoring function was used to calculate the user's ratings of the items to be recommended. The proposed model was compared with the PER (Personalized Entity Recommendation), CKE (Collaborative Knowledge base Embedding), LibFM, RippleNet, KGCN (Knowledge Graph Convolutional Network) and CKAN (Collaborative Knowledge-aware Attentive Network) knowledge graph recommendation models on the real datasets Last.FM, MovieLens-1M and MovieLens-20M. Experimental results show that, compared with the model with the best prediction performance, the MTFE model improves the F1 value by 0.78, 1.63 and 1.92 percentage points and the Area Under the ROC Curve (AUC) metric by 3.94, 2.73 and 1.15 percentage points on the three datasets respectively. In summary, the proposed knowledge graph recommendation model has a better recommendation effect than the comparison models.
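
One plausible reading of the multi-scale user encoding, sketched in PyTorch; the number of scales, the hidden size and the way the scales are formed are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class MultiScaleUserEncoder(nn.Module):
    """One LSTM per time scale (e.g. daily / weekly / monthly interaction
    sequences); the final hidden states are concatenated and projected
    into the user representation."""
    def __init__(self, item_dim: int, hidden: int, n_scales: int = 3):
        super().__init__()
        self.lstms = nn.ModuleList(
            [nn.LSTM(item_dim, hidden, batch_first=True) for _ in range(n_scales)])
        self.proj = nn.Linear(n_scales * hidden, item_dim)

    def forward(self, sequences):
        # sequences: list of (batch, seq_len_s, item_dim) tensors, one per scale
        states = [lstm(seq)[1][0][-1] for lstm, seq in zip(self.lstms, sequences)]
        return self.proj(torch.cat(states, dim=-1))   # (batch, item_dim)
```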

Knowledge graph attention network fusing collaborative filtering information
Junhua GU, Rui WANG, Ningning LI, Suqi ZHANG
Journal of Computer Applications    2022, 42 (4): 1087-1092.   DOI: 10.11772/j.issn.1001-9081.2021071269

Since Knowledge Graphs (KGs) can alleviate the data sparsity and cold start problems of collaborative filtering algorithms, they have been widely studied and applied in the recommendation field. Many existing KG-based recommendation models conflate the collaborative filtering information in the user-item bipartite graph with the association information between entities in the KG, so the learned user and item vectors cannot accurately express the characteristics of users and items, and wrong information may even be introduced to interfere with recommendation. To address these issues, a model called Knowledge Graph Attention Network fusing Collaborative Filtering information (KGANCF) was proposed. Firstly, the collaborative filtering information of users and items was extracted from the user-item bipartite graph by the network's collaborative filtering layer, avoiding interference from the entity information of the KG. Then, a graph attention mechanism was applied in the KG attention embedding layer to extract the attribute information closely related to users and items from the KG. Finally, the collaborative filtering information and the KG attribute information were merged at the prediction layer to obtain the final vector representations of users and items, from which the users' scores for items were predicted. Experiments were carried out on the MovieLens-20M and Last.FM datasets. Compared with Collaborative Knowledge-aware Attentive Network (CKAN), KGANCF improves the F1-score by 1.1 percentage points and the Area Under Curve (AUC) by 0.6 percentage points on MovieLens-20M, and improves the F1-score by 3.3 percentage points and the AUC by 8.5 percentage points on Last.FM. Experimental results show that KGANCF can effectively improve the accuracy of recommendation results, and it is significantly better than the CKE (Collaborative Knowledge base Embedding), KGCN (Knowledge Graph Convolutional Network), KGAT (Knowledge Graph Attention Network) and CKAN models on datasets with sparse KGs.
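
A minimal PyTorch sketch of the kind of user-conditioned graph attention aggregation a KG attention embedding layer performs; the scoring network and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KGAttentionAggregate(nn.Module):
    """Attention-weighted aggregation of an item's KG neighbors: each
    (relation, tail-entity) pair is scored against the user vector, so
    attributes the user cares about dominate the item representation."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(3 * dim, 1)

    def forward(self, user, rel, tail):
        # user: (B, d); rel, tail: (B, N, d) for N sampled neighbors
        u = user.unsqueeze(1).expand_as(rel)
        att = F.softmax(self.score(torch.cat([u, rel, tail], -1)).squeeze(-1), -1)
        return (att.unsqueeze(-1) * tail).sum(1)   # (B, d) neighborhood vector
```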

Long- and short-term recommendation model and updating method based on knowledge graph preference attention network
Junhua GU, Shuai FAN, Ningning LI, Suqi ZHANG
Journal of Computer Applications    2022, 42 (4): 1079-1086.   DOI: 10.11772/j.issn.1001-9081.2021071242

Current research on knowledge graph recommendation mainly focuses on model establishment and training, whereas in practical applications the model must be updated regularly by incremental updating to adapt to the changing preferences of new and old users. Most of these models use only the users' long-term interest representations for recommendation and ignore short-term interests; when neighborhood entities are aggregated to obtain the item vector representation, the aggregation methods lack interpretability; and catastrophic forgetting occurs during model updating. To address these problems, a Knowledge Graph Preference ATtention network based Long- and Short-term recommendation (KGPATLS) model and its updating method were proposed. Firstly, the aggregation method of the preference attention network and a user representation combining the users' long- and short-term interests were proposed through the KGPATLS model. Then, to alleviate catastrophic forgetting during model updating, an incremental updating method Fusing Predict Sampling and Knowledge Distillation (FPSKD) was proposed. The proposed model and incremental updating method were tested on the MovieLens-1M and Last.FM datasets. Compared with the best baseline model, Knowledge Graph Convolutional Network (KGCN), KGPATLS increases the Area Under Curve (AUC) by 2.2% and 1.4% and the Accuracy (Acc) by 2.5% and 2.9% on the two datasets respectively. Compared with three baseline incremental updating methods on the two datasets, FPSKD outperforms Fine Tune and Random Sampling on the AUC and Acc indexes, and reduces the training time to about one eighth and one quarter of that of Full Batch respectively. Experimental results verify the performance of the KGPATLS model and that FPSKD can update the model efficiently while maintaining its performance.
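
A hedged sketch of the knowledge distillation part of an incremental update in the spirit of FPSKD; the temperature T, the weight alpha and the binary targets are assumptions, and the predict-sampling part is not shown:

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, targets, T=2.0, alpha=0.5):
    """Fit the new interactions (hard targets) while staying close to the
    frozen old model's predictions (soft targets), limiting catastrophic
    forgetting of previously learned preferences."""
    hard = F.binary_cross_entropy_with_logits(new_logits, targets)
    soft = F.binary_cross_entropy_with_logits(new_logits / T,
                                              torch.sigmoid(old_logits / T))
    return alpha * hard + (1 - alpha) * (T * T) * soft
```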

Recommendation service for API use cases based on open source community analysis
Jiaqi ZHANG, Yanchun SUN, Gang HUANG
Journal of Computer Applications    2022, 42 (11): 3520-3526.   DOI: 10.11772/j.issn.1001-9081.2021122070

Current research on Application Programming Interface (API) learning and code reuse focuses on mining frequent API usage patterns, extracting component information, and recommending personalized API services based on user requirements and target functions. However, beginners in software development, who lack the professional knowledge, experience and skills to implement specific use cases, often need real code use cases as references beyond the official documents. Most existing code recommendation research works in single-fragment mode; the lack of cross-function cases in case selection makes it hard for beginners to learn to build a complete usage scenario or functional module, and the semantic description extracted from a single function annotation is not enough for learners to understand how a project implements a complete function. To solve these problems, an API use case recommendation service based on open source community analysis was proposed. Taking the software development back-end framework Spring Boot as an example, a cross-function case recommendation learning-assistance service was constructed. The feasibility and effectiveness of the proposed API use case recommendation service were then verified through questionnaires and expert verification.

Pulse condition recognition method based on optimized reinforcement learning path feature classification
Jiaqi ZHANG, Yueqin ZHANG, Jian CHEN
Journal of Computer Applications    2021, 41 (11): 3402-3408.   DOI: 10.11772/j.issn.1001-9081.2021010008

Pulse condition recognition is one of the important methods of traditional Chinese medical diagnosis. For a long time, reliance on personal experience for recognizing pulse conditions has restricted the promotion and development of traditional Chinese medicine, so research on recognizing pulse conditions with sensing devices is growing. To solve problems in neural network-based pulse condition recognition such as large training datasets, "black box" processing and high time cost, a new pulse condition diagram analysis method using Markov decision processes and Monte Carlo search within a reinforcement learning framework was proposed. Firstly, based on the theory of traditional Chinese medicine, the paths of specific pulse conditions were classified. Then, representative features were selected for the different paths. Finally, pulse condition recognition was realized by comparing threshold values of the representative features. Experimental results show that the proposed method reduces the training time and required resources, retains the complete experience trace, solves the "black box" problem during data processing, and improves the accuracy of pulse condition recognition.

Multi-function rendering technology based on graphics processing unit accelerated ray casting algorithm
LV Xiaoqi, ZHANG Chuanting, HOU He, ZHANG Baohua
Journal of Computer Applications    2014, 34 (1): 135-138.   DOI: 10.11772/j.issn.1001-9081.2014.01.0135
To overcome the drawbacks of traditional rendering algorithms, which cannot interact fluently with the user, are time-consuming, and produce only a single rendering result, a ray casting algorithm based on the Graphics Processing Unit (GPU) was proposed for real-time volume rendering of medical tomographic images, with fast switching between different rendering effects. Firstly, the medical tomographic images were read into memory to construct voxels. Afterwards, the properties (interpolation, shading and lighting) of the corresponding voxels were set, and transfer functions for color and opacity were designed to display different organs and tissues. Finally, the volume data were loaded and the ray casting algorithm was executed on the GPU. Experiments show that the rendering speed of the proposed algorithm reaches 40 frames per second, which satisfies clinical application requirements. Regarding rendering quality, the jagged artifacts produced during interaction by resampling on the GPU are clearly fewer than those of the CPU-based ray casting algorithm, whose time consumption is about 9 times that of the proposed algorithm.
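
For reference, a minimal Python sketch of the front-to-back compositing loop at the core of ray casting (the GPU runs one such loop per pixel in parallel); `transfer_fn`, the nearest-neighbor resampling and the early-termination threshold are illustrative assumptions:

```python
import numpy as np

def cast_ray(volume, ray_points, transfer_fn, early_stop=0.98):
    """Front-to-back alpha compositing along one ray. `transfer_fn` maps a
    sampled intensity to (rgb, alpha); switching transfer functions is what
    switches the rendering effect for different organs and tissues."""
    color = np.zeros(3)
    alpha = 0.0
    for p in ray_points:                      # samples ordered front to back
        i, j, k = np.round(p).astype(int)     # nearest-neighbor resampling
        if not (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            continue
        rgb, a = transfer_fn(volume[i, j, k])
        color += (1.0 - alpha) * a * np.asarray(rgb)
        alpha += (1.0 - alpha) * a
        if alpha >= early_stop:               # early ray termination
            break
    return color, alpha
```
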
Fusion prediction of mine multi-sensor chaotic time series data
MU Wen-yu, LI Ru, YIN Zhi-zhou, WANG Qi, ZHANG Bao-yan
Journal of Computer Applications    2012, 32 (06): 1769-1773.   DOI: 10.3724/SP.J.1087.2012.01769
To address the one-sidedness of prediction based on mining data from a single sensor, a multi-sensor data mining prediction model combining information fusion technology with phase-space reconstruction technology was proposed, in which multiple underground sensors, including gas concentration, wind speed and temperature sensors, are fused for forecasting. Taking the time series data of multiple sensor types as the research object, information fusion methods were first applied to each sensor type, at the data level and then at the feature level. Next, the correlation integral method was used on the fused data of each sensor type to determine the two phase-space reconstruction parameters, the time delay τ and the embedding dimension m. Finally, multivariate phase-space reconstruction technology was used to fuse the phase spaces of the various sensor types, and a prediction model based on the weighted one-rank local-region method with K-Means clustering was applied to the multi-sensor data. Nearly 20 GB of gas concentration, wind speed and temperature data collected from coal mines in Shanxi Province were used in the experiments. The results show that, for feature-level fusion, using the fused data of each 15-minute period as the features of that period yields the minimum error (ESS = 0.003) after the prediction model calculations, compared with 5-minute, 10-minute and 20-minute periods; relative to the current minimum error value of 0.05, the error is greatly decreased. The fused prediction therefore performs better: it can predict the sensor data 15 minutes into the future more accurately, giving people sufficient time for underground safety assessment and decision-making.
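
A minimal sketch of the delay-embedding step under stated assumptions: `delay_embed` reconstructs the phase space of one sensor channel from its τ and m (as determined by the correlation integral method), and `multivariate_embed` fuses several channels by concatenating their delay vectors; the paper's weighting and prediction details are not reproduced:

```python
import numpy as np

def delay_embed(series, m, tau):
    """Phase-space reconstruction of one sensor channel: each row is the
    delay vector [x(t), x(t+tau), ..., x(t+(m-1)*tau)]."""
    series = np.asarray(series, float)
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(m)])

def multivariate_embed(channels, ms, taus):
    """Fuse gas, wind-speed and temperature channels into one phase space by
    concatenating their delay vectors, truncated to a common length."""
    embeds = [delay_embed(c, m, t) for c, m, t in zip(channels, ms, taus)]
    n = min(e.shape[0] for e in embeds)
    return np.hstack([e[:n] for e in embeds])
```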