DeepsORF: coding sORFs prediction method based on graph coding with improved flow attention
Dongmei XIE, Xinye BIAN, Lianfei YU, Wenbo LIU, Ziling WANG, Zhijian QU, Jiafeng YU
Journal of Computer Applications    2025, 45 (2): 546-555.   DOI: 10.11772/j.issn.1001-9081.2024020177

Small Open Reading Frames (sORFs) play a critical role in various biological processes, and accurately distinguishing coding from non-coding sORFs is a significant and challenging task in genomics. Most existing algorithms for predicting coding sORFs rely heavily on hand-crafted features derived from prior biological knowledge and therefore lack universality, while the variable lengths of raw sORF sequences prevent them from being fed directly into prediction models. To address these issues, DeepsORF, an end-to-end deep learning framework based on the sORF-Graph graph encoding method, was developed for predicting coding sORFs. Firstly, all sORF sequences were encoded into corresponding graphs through sORF-Graph, standardizing the input by embedding sequence information into graph element features. Then, a convolutional and residual flow attention mechanism was introduced to capture interactions among distant bases within sORFs, thereby enriching the representation of sORF features and improving prediction accuracy. Experimental results demonstrate that the DeepsORF framework improves performance on all six independent test sets. Compared with the csORF-finder method, DeepsORF achieves increases of 9.97, 19.49, and 13.07 percentage points in accuracy, Matthews Correlation Coefficient (MCC), and precision, respectively, on the D. melanogaster nonCDS-sORFs test set, validating the effectiveness and generalization ability of the DeepsORF model in identifying coding and non-coding sORFs.
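
The abstract does not spell out the sORF-Graph encoding itself, so the following is only a minimal sketch of the general idea it describes: mapping variable-length sequences onto fixed-size graph inputs. The codon-node/successor-edge scheme and the `sorf_to_graph` helper are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

BASES = "ACGT"

def sorf_to_graph(seq):
    # Hypothetical encoding (the paper's exact sORF-Graph scheme is not
    # reproduced here): nodes are the 64 codons; an edge u -> v is added
    # whenever codon v directly follows codon u in the reading frame, so
    # a sequence of any length maps to a fixed-size 64-node graph.
    codons = [BASES.index(a) * 16 + BASES.index(b) * 4 + BASES.index(c)
              for a, b, c in zip(seq[0::3], seq[1::3], seq[2::3])]
    adj = np.zeros((64, 64))
    feat = np.zeros(64)                      # node feature: codon frequency
    for u, v in zip(codons, codons[1:]):
        adj[u, v] += 1.0
    for u in codons:
        feat[u] += 1.0
    return adj, feat / max(len(codons), 1)

adj, feat = sorf_to_graph("ATGGCCTGGTTTGCAACCTAA")   # 21 nt = 7 codons
print(adj.shape, feat.sum())                         # (64, 64) 1.0
```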

Chinese entity and relation extraction model based on parallel heterogeneous graph and sequential attention mechanism
Dianhui MAO, Xuebo LI, Junling LIU, Denghui ZHANG, Wenjing YAN
Journal of Computer Applications    2024, 44 (7): 2018-2025.   DOI: 10.11772/j.issn.1001-9081.2023071051

In recent years, with the rapid development of deep learning technology, entity and relation extraction has made remarkable progress in many fields. However, due to the complex syntactic structures and semantic relationships of Chinese text, Chinese entity and relation extraction still faces many challenges, among which overlapping triples are one of the most important. A Hybrid Neural Network Entity and Relation Joint Extraction (HNNERJE) model was proposed to address the overlapping triple issue in Chinese text. The HNNERJE model fused a sequence attention mechanism and a heterogeneous graph attention mechanism in parallel and combined them with a gated fusion strategy, so that it could capture both the word order information and the entity association information of Chinese text and adaptively adjust the output of subject and object markers, effectively solving the overlapping triple issue. Moreover, an adversarial training algorithm was introduced to improve the model's adaptability when processing unseen samples and noise. Finally, the SHapley Additive exPlanations (SHAP) method was adopted to explain and analyze the HNNERJE model, effectively revealing the key features in extracting entities and relations. The HNNERJE model achieved high performance on the NYT, WebNLG, CMeIE, and DuIE datasets, with F1 scores of 92.17%, 93.42%, 47.40%, and 67.98%, respectively. The experimental results indicate that the HNNERJE model can transform unstructured text data into structured knowledge representations and effectively extract valuable information.
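
As a rough illustration of the gated fusion step described above, the sketch below blends two channel outputs with a learned sigmoid gate; the gate parameterization (`W_g`, `b_g`) and the dimensions are assumptions, since the abstract does not give them.

```python
import numpy as np

def gated_fusion(h_seq, h_graph, W_g, b_g):
    # A sigmoid gate computed from both channels decides, per dimension,
    # how much sequence-attention vs. graph-attention information to keep.
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([h_seq, h_graph]) @ W_g + b_g)))
    return g * h_seq + (1.0 - g) * h_graph

d = 8
rng = np.random.default_rng(7)
fused = gated_fusion(rng.normal(size=d), rng.normal(size=d),
                     rng.normal(size=(2 * d, d)), np.zeros(d))
print(fused.shape)   # (8,)
```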

Adaptive computing optimization of sparse matrix-vector multiplication based on heterogeneous platforms
Bo LI, Jianqiang HUANG, Dongqiang HUANG, Xiaoying WANG
Journal of Computer Applications    2024, 44 (12): 3867-3875.   DOI: 10.11772/j.issn.1001-9081.2023111707

Sparse Matrix-Vector multiplication (SpMV) is an important numerical linear algebra operation. Existing optimizations for SpMV suffer from issues such as incomplete consideration of preprocessing and communication time and a lack of universality across storage structures. To address these issues, an adaptive optimization scheme for SpMV on heterogeneous platforms was proposed. In the proposed scheme, Pearson correlation coefficients were utilized to select highly correlated feature parameters, and two Gradient Boosting Decision Tree (GBDT) based algorithms, eXtreme Gradient Boosting (XGBoost) and Light Gradient Boosting Machine (LightGBM), were employed to train prediction models that determine the optimal storage format for a given sparse matrix. With grid search used to identify better hyperparameters for model training, both algorithms achieved more than 85% accuracy in selecting a suitable storage structure. Furthermore, for sparse matrices in the HYBrid (HYB) storage format, the ELLPACK (ELL) and COOrdinate (COO) parts were computed on the GPU and CPU respectively, establishing a CPU+GPU parallel hybrid computing mode. At the same time, hardware platforms were also selected for sparse matrices with small data sizes to improve computational speed. Experimental results demonstrate that the adaptive computing optimization achieves an average speedup of 1.4 compared to the Compressed Sparse Row (CSR) storage format in the cuSPARSE library, and average speedups of 2.1 and 2.6 compared to the HYB and ELL storage formats, respectively.
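
A minimal sketch of the format-selection stage, assuming a small set of structural matrix features and toy labels; the real feature set is Pearson-selected and the labels come from benchmarking each format on real matrices, neither of which is reproduced here.

```python
import numpy as np
from scipy import sparse
from xgboost import XGBClassifier   # LightGBM could be swapped in identically

def matrix_features(m):
    # Structural features of a sparse matrix -- an illustrative subset of
    # the Pearson-selected feature parameters described in the paper.
    csr = m.tocsr()
    row_nnz = np.diff(csr.indptr)
    return [m.shape[0], m.shape[1], m.nnz,
            m.nnz / (m.shape[0] * m.shape[1]),       # density
            row_nnz.mean(), row_nnz.std(), row_nnz.max()]

# Toy training set: random matrices with made-up "best format" labels
# (0=CSR, 1=ELL, 2=COO, 3=HYB); real labels would come from benchmarking.
rng = np.random.default_rng(0)
X = np.array([matrix_features(sparse.random(200, 200, density=d, random_state=i))
              for i, d in enumerate(rng.uniform(0.001, 0.1, 80))])
y = rng.integers(0, 4, 80)
clf = XGBClassifier(n_estimators=50, max_depth=4).fit(X, y)
new = sparse.random(300, 300, density=0.05, random_state=99)
print(clf.predict(np.array([matrix_features(new)])))   # predicted format id
```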

Long-term prediction model of time series based on multi-scale feature fusion
Wenbo LIU, Lianfei YU, Dongmei XIE, Chuang CAI, Zhijian QU, Chongguang REN
Journal of Computer Applications    2024, 44 (11): 3435-3441.   DOI: 10.11772/j.issn.1001-9081.2023111705

Long-term time series prediction has wide application requirements in many fields. However, the non-stationarity exhibited during long-term prediction of time series is a key factor affecting prediction accuracy. To improve the long-term prediction accuracy of time series and the universality of the prediction model, a Multi-Scale Decomposition Fusion Attention Network (MSDFAN) was constructed. The model uses time series decomposition to extract the seasonal and trend components of the input data, builds separate predictions for the different components, and is thus able to model and predict non-stationary time components with multi-scale stability characteristics. Experimental results show that, compared with FEDformer, the Mean Squared Error (MSE) and Mean Absolute Error (MAE) of MSDFAN on five benchmark datasets are reduced by an average of 12.95% and 8.49%, respectively. MSDFAN achieves better prediction accuracy on multivariate time series.
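
The decomposition step MSDFAN builds on is the standard moving-average split into trend and seasonal parts; a minimal single-kernel sketch follows (that the paper's multi-scale variant applies several kernel sizes is an assumption here).

```python
import numpy as np

def series_decomp(x, kernel=25):
    # Moving-average decomposition into trend and seasonal parts, the
    # standard block behind Autoformer/FEDformer-style models; edges are
    # padded by repeating the boundary values.
    pad = kernel // 2
    xp = np.concatenate([np.repeat(x[:1], pad), x, np.repeat(x[-1:], pad)])
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    return x - trend, trend            # (seasonal, trend)

t = np.arange(200)
x = 0.05 * t + np.sin(2 * np.pi * t / 24)   # linear trend + daily cycle
seasonal, trend = series_decomp(x)
print(seasonal.shape, trend.shape)          # (200,) (200,)
```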

Aspect sentiment triplet extraction integrating semantic and syntactic information
Yanbo LI, Qing HE, Shunyi LU
Journal of Computer Applications    2024, 44 (10): 3275-3280.   DOI: 10.11772/j.issn.1001-9081.2023101479

Aspect Sentiment Triplet Extraction (ASTE) is a challenging subtask of aspect-based sentiment analysis, which aims to extract aspect terms, opinion terms, and corresponding sentiment polarities from a given sentence. Existing ASTE models are divided into pipeline models and end-to-end models. To address error propagation in pipeline models and the tendency of most end-to-end models to overlook the rich semantic information in sentences, a Semantic and Syntax Enhanced Dual-channel model for ASTE (SSED-ASTE) was proposed. First, a BERT (Bidirectional Encoder Representation from Transformers) encoder was used to encode the context. Then, a Bi-directional Long Short-Term Memory (Bi-LSTM) network was used to capture contextual semantic dependencies. Next, two parallel Graph Convolution Networks (GCN) were utilized to extract semantic features and syntactic features using a self-attention mechanism and dependency parsing, respectively. Finally, the Grid Tagging Scheme (GTS) was used for triplet extraction. Experimental analysis was conducted on four public datasets; compared with the GTS-BERT model, the F1 values of the proposed model increased by 0.29, 1.50, 2.93, and 0.78 percentage points, respectively. The experimental results demonstrate that the proposed model effectively utilizes the implicit semantic and syntactic information in sentences and achieves more accurate triplet extraction.

Review of research on aquaculture counting based on machine vision
Hanyu ZHANG, Zhenbo LI, Weiran LI, Pu YANG
Journal of Computer Applications    2023, 43 (9): 2970-2982.   DOI: 10.11772/j.issn.1001-9081.2022081261

Aquaculture counting is an important part of the aquaculture process, and the counting results provide an important basis for feeding, adjusting breeding density, and estimating the economic efficiency of aquatic animals. Because traditional manual counting methods are time-consuming, labor-intensive, and prone to large errors, a large number of machine vision-based methods and applications have been proposed, greatly promoting the development of non-destructive counting of aquatic products. To provide a deep understanding of research on machine vision-based aquaculture counting, the relevant domestic and international literature of the past 30 years was collated and analyzed. Firstly, aquaculture counting was reviewed from the perspective of data acquisition, and the methods for acquiring the data required by machine vision were summarized. Secondly, aquaculture counting methods were analyzed and summarized in terms of traditional machine vision and deep learning. Thirdly, the practical applications of counting methods in different farming environments were compared and analyzed. Finally, the difficulties in the development of aquaculture counting research were summarized in terms of data, methods, and applications, and corresponding views on the future trends of aquaculture counting research and equipment applications were presented.

Identity-based ring signature scheme on number theory research unit lattice
Jinbo LI, Ping ZHANG, Ji ZHANG, Muhua LIU
Journal of Computer Applications    2023, 43 (9): 2798-2805.   DOI: 10.11772/j.issn.1001-9081.2022081268

To address the problems that trapdoor bases are too large and that ring members' public keys require digital certificate authentication in lattice-based ring signature schemes, an NTRU (Number Theory Research Unit) lattice-based Identity-Based Ring Signature scheme (NTRU-IBRS) was proposed. Firstly, the trapdoor generation algorithm on the NTRU lattice was used to generate the system master public-private key pair. Secondly, the master private key was taken as the trapdoor information, and the one-way function was inverted to obtain the private key of every ring member. Finally, based on the Small Integer Solution (SIS) problem, the ring signature was generated by using the rejection sampling technique. Security analysis shows that NTRU-IBRS is anonymous and existentially unforgeable under adaptive chosen message and chosen identity attacks. Performance analysis and experimental simulation show that, compared with a ring signature scheme on an ideal lattice and an identity-based linkable ring signature scheme on the NTRU lattice, NTRU-IBRS decreases the system private key length by 0 to 99.6% and the signature private key length by 50.0% to 98.4% in storage overhead, and reduces the total time overhead by 15.3% to 21.8%. Simulation results of applying NTRU-IBRS to a dynamic Internet of Vehicles (IoV) scenario show that NTRU-IBRS can simultaneously ensure privacy security and improve communication efficiency during vehicle interaction.

Multi-similarity K-nearest neighbor classification algorithm with ordered pairs of normalized real numbers
Haoyang CUI, Hui ZHANG, Lei ZHOU, Chunming YANG, Bo LI, Xujian ZHAO
Journal of Computer Applications    2023, 43 (9): 2673-2678.   DOI: 10.11772/j.issn.1001-9081.2022091376

The performance of nearest neighbor classification is greatly affected by the adopted similarity or distance measure, yet selecting the optimal measure is difficult. To address this, a multi-similarity K-Nearest Neighbor algorithm with Ordered Pairs of Normalized real numbers (OPNs-KNN) was proposed. Firstly, the new mathematical theory of Ordered Pairs of Normalized real numbers (OPN) was introduced into machine learning, and all samples in the training and test sets were converted into OPNs through multiple similarity or distance measures, so that each OPN contained different similarity information. Then, an improved nearest neighbor algorithm was used to classify the OPNs, allowing different similarity or distance measures to be mixed and complement one another to improve classification performance. Experimental results show that, compared with six improved nearest neighbor classification algorithms such as the distance-Weighted K-Nearest-Neighbor (WKNN) rule on the Iris, seeds, and other datasets, OPNs-KNN improves classification accuracy by 0.29 to 15.28 percentage points, demonstrating that the proposed algorithm can greatly improve classification performance.
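
The OPN algebra itself is not reproduced here; the sketch below only illustrates the underlying multi-similarity idea of letting several distance measures contribute to one k-NN decision, with the voting rule as a stand-in assumption for the OPN-based classification.

```python
import numpy as np

def multi_metric_knn(X_train, y_train, x, k=5):
    # Run k-NN under several distance measures and let the neighbor sets
    # vote; the paper instead packs the measures into ordered pairs of
    # normalized real numbers (OPNs) before classifying.
    metrics = [
        lambda A, b: np.linalg.norm(A - b, axis=1),                 # Euclidean
        lambda A, b: np.abs(A - b).sum(axis=1),                     # Manhattan
        lambda A, b: 1 - (A @ b) / (np.linalg.norm(A, axis=1)
                                    * np.linalg.norm(b) + 1e-12),   # cosine
    ]
    votes = []
    for dist in metrics:
        idx = np.argsort(dist(X_train, x))[:k]
        votes.extend(y_train[idx])
    return np.bincount(votes).argmax()

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)
print(multi_metric_knn(X, y, rng.normal(size=4)))
```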

Approximate query processing approach based on deep autoregressive model
Libin CEN, Jingdong LI, Chunbo LIN, Xiaoling WANG
Journal of Computer Applications    2023, 43 (7): 2034-2039.   DOI: 10.11772/j.issn.1001-9081.2022071128

Approximate Query Processing (AQP) of aggregate functions has recently become a research hotspot in the database field. Existing approximate query techniques suffer from problems such as high query response time, high storage overhead, and lack of support for multi-predicate queries. Thus, a deep autoregressive model-based AQP approach, DeepAQP (Deep Approximate Query Processing), was proposed. DeepAQP leveraged a deep autoregressive model to learn the joint probability distribution of the multi-column data in a table, in order to estimate the selectivity and the target column's probability distribution for a given query, enhancing the ability to handle approximate aggregate queries with multiple predicates over a single table. Experiments were conducted on the TPC-H and TPC-DS datasets. The results show that, compared with VerdictDB, a sample-based method, DeepAQP reduces query response time by 2 to 3 orders of magnitude and storage space by 3 orders of magnitude; compared with DBEst++, a machine learning-based method, DeepAQP reduces query response time by 1 order of magnitude and significantly reduces model training time. Moreover, DeepAQP can handle multi-predicate query requests, which DBEst++ does not support. It can be seen that DeepAQP achieves good accuracy and speed at the same time while significantly reducing the training and storage overhead of the algorithm.
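
To make the selectivity-based estimation concrete, here is a minimal sketch in which a histogram stands in for the learned deep autoregressive model; the single-column predicate and the aggregation identities are textbook AQP, not DeepAQP's actual multi-column machinery.

```python
import numpy as np

def approx_count_sum(probs, centers, n_rows, lo, hi):
    # COUNT is n_rows * P(lo <= x <= hi); SUM is COUNT * E[x | predicate].
    # A histogram density stands in for the deep autoregressive model.
    mask = (centers >= lo) & (centers <= hi)
    sel = probs[mask].sum()                       # predicate selectivity
    count = n_rows * sel
    mean = (centers[mask] * probs[mask]).sum() / max(sel, 1e-12)
    return count, count * mean

data = np.random.default_rng(2).gamma(2.0, 10.0, 100_000)
counts, edges = np.histogram(data, bins=64)
probs = counts / counts.sum()
centers = (edges[:-1] + edges[1:]) / 2
est = approx_count_sum(probs, centers, len(data), 10.0, 30.0)
hit = (data >= 10) & (data <= 30)
print(est, (hit.sum(), data[hit].sum()))   # estimate vs. exact answer
```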

Super-resolution reconstruction of lung CT images based on feature pyramid network and dense network
Lihua SHEN, Bo LI
Journal of Computer Applications    2023, 43 (5): 1612-1619.   DOI: 10.11772/j.issn.1001-9081.2022040620

To pay more attention to pulmonary nodules and ensure that reconstructed features correspond to objectively existing structures in lung Computed Tomography (CT) image Super-Resolution (SR) reconstruction, a lung image SR reconstruction method based on Feature Pyramid Network (FPN) and dense network was proposed. Firstly, at the feature extraction layer, an FPN was used to extract features. Secondly, a local structure based on residual networks was designed at the feature mapping layer, and a special dense network was used to connect the local structures. Thirdly, at the feature reconstruction layer, a Convolutional Neural Network (CNN) was used to gradually reduce convolution layers of different depths to the image size. Finally, a residual network was used to integrate the initial Low-Resolution (LR) features with the reconstructed High-Resolution (HR) features to form the final SR image. In comparison experiments, the network with two feature fusions in the FPN and five local structure connections in the feature mapping layer performed best. Compared with classic networks such as the Super-Resolution Convolutional Neural Network (SRCNN), the proposed network achieves higher Peak Signal-to-Noise Ratio (PSNR) and better visual quality of the reconstructed SR images.

Improved method of convolution neural network based on matrix decomposition
Zhenliang LI, Bo LI
Journal of Computer Applications    2023, 43 (3): 685-691.   DOI: 10.11772/j.issn.1001-9081.2022010032

Aiming at the difficulty of optimizing traditional Convolutional Neural Networks (CNN) during training, an improved CNN method based on matrix decomposition was proposed. Firstly, the convolution kernel parameter tensor of each convolution layer was converted, during training, into the product of multiple parameter matrices through matrix decomposition, forming an overparameterization. Secondly, these additional linear parameters were added to the back propagation of the network and updated synchronously with the other model parameters, improving the optimization behavior of gradient descent. After training, the matrix product was folded back into standard convolution kernel parameters, so that the computational complexity of forward propagation during inference remained the same as before the improvement. With thin QR decomposition and reduced Singular Value Decomposition (SVD) applied, classification experiments were carried out on the CIFAR-10 (Canadian Institute For Advanced Research, 10 classes) dataset, and further generalization experiments were carried out using different image classification datasets and different initialization methods. Experimental results show that the classification accuracies of seven Visual Geometry Group (VGG) and Residual Network (ResNet) models of different depths based on matrix decomposition are higher than those of the original convolutional neural network models. It can be seen that the matrix decomposition method enables CNNs to achieve higher classification accuracy and eventually converge to a better local optimum.
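
A minimal sketch of the overparameterization idea under simplifying assumptions (a plain two-factor product rather than the paper's QR/SVD-shaped factors): train the factors, then fold their product back into a standard kernel so inference cost is unchanged.

```python
import numpy as np

# Shapes are illustrative. During training the kernel W is represented as
# a product A @ B of trainable matrices (overparameterization); both
# factors receive gradients. After training, the product is folded back
# into one standard kernel, so forward-pass cost at inference is unchanged.
out_c, in_c, k = 16, 8, 3
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(out_c, out_c))          # extra linear factor
B = rng.normal(scale=0.1, size=(out_c, in_c * k * k))   # base kernel matrix

# ... A and B are updated jointly by back propagation during training ...

W = (A @ B).reshape(out_c, in_c, k, k)   # fold back for inference
print(W.shape)                           # (16, 8, 3, 3)
```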

Controllable face editing algorithm with closed-form solution
Lingling TAO, Bo LIU, Wenbo LI, Xiping HE
Journal of Computer Applications    2023, 43 (2): 601-607.   DOI: 10.11772/j.issn.1001-9081.2022010030

To solve the problems of unnatural editing results and large changes in generated images in face editing, a controllable face editing algorithm with a closed-form solution was proposed. Firstly, n latent vectors were randomly sampled to construct a sample matrix, and the top k principal component vectors of the matrix were calculated. Then, five attributes of the face image were obtained by ResNet-50, and the semantic boundary of each attribute was calculated by a Support Vector Machine (SVM). Finally, interpretable direction vectors of these attributes were calculated to be as close to the principal component vectors as possible while staying as far away from the semantic boundary of the corresponding attribute as possible, thereby reducing the coupling between facial attributes and improving controllability in face editing. Because the algorithm has a closed-form solution, it is highly efficient. Experimental results show that, compared with the closed-form Factorization of latent Semantics in GANs (SeFa) algorithm and the Discovering Interpretable Generative Adversarial Network Controls (GANSpace) algorithm, the proposed algorithm increases the Inception Score (IS) by 19% and 26% respectively, decreases the Fréchet Inception Distance (FID) by 4% and 37% respectively, and decreases the Maximum Mean Discrepancy (MMD) by 15% and 48% respectively. It can be seen that this algorithm has good controllability and decoupling.
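
The first step, finding the top-k principal components of randomly sampled latent vectors, has a closed-form solution via SVD; a minimal sketch follows, with the latent dimension and sample count as arbitrary assumptions.

```python
import numpy as np

def latent_principal_directions(sample_latents, k=5):
    # Closed-form: center the sampled latent vectors and take the top-k
    # right singular vectors as the principal component directions.
    Z = sample_latents - sample_latents.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[:k]                                   # shape (k, latent_dim)

Z = np.random.default_rng(3).normal(size=(1000, 512))  # n sampled latents
print(latent_principal_directions(Z).shape)            # (5, 512)
```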

Deep spectral clustering algorithm with L1 regularization
Wenbo LI, Bo LIU, Lingling TAO, Fen LUO, Hang ZHANG
Journal of Computer Applications    2023, 43 (12): 3662-3667.   DOI: 10.11772/j.issn.1001-9081.2022121822

Aiming at the problems that deep spectral clustering models perform poorly in training stability and generalization capability, a Deep Spectral Clustering algorithm with L1 Regularization (DSCLR) was proposed. Firstly, L1 regularization was introduced into the objective function of deep spectral clustering to sparsify the eigenvectors of the Laplacian matrix generated by the deep neural network model, enhancing the generalization capability of the model. Secondly, the network structure of the deep neural network-based spectral clustering algorithm was improved by using the Parametric Rectified Linear Unit (PReLU) activation function to address training instability and underfitting. Experimental results on the MNIST dataset show that the proposed algorithm improves Clustering Accuracy (CA), Normalized Mutual Information (NMI), and Adjusted Rand Index (ARI) by 11.85, 7.75, and 17.19 percentage points respectively compared with the deep spectral clustering algorithm. Furthermore, the proposed algorithm also significantly improves the three evaluation metrics, CA, NMI, and ARI, compared with algorithms such as Deep Embedded Clustering (DEC) and Deep Spectral Clustering using Dual Autoencoder Network (DSCDAN).
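
As a sketch of where the L1 term enters, the snippet below assumes a SpectralNet-style trace objective for the learned embedding (the abstract does not give the exact objective) and adds the sparsifying penalty on the embedding matrix.

```python
import numpy as np

def spectral_loss_l1(Y, W, lam=1e-3):
    # Assumed SpectralNet-style objective: for batch embedding Y (n x k)
    # and affinity W (n x n), minimize tr(Y^T L Y) with graph Laplacian
    # L = D - W, plus the L1 term lam * ||Y||_1 that DSCLR adds to
    # sparsify the learned (approximate) Laplacian eigenvectors.
    L = np.diag(W.sum(axis=1)) - W
    spectral = np.trace(Y.T @ L @ Y) / len(Y)
    return spectral + lam * np.abs(Y).sum()

rng = np.random.default_rng(9)
pts = rng.normal(size=(32, 2))
W = np.exp(-((pts[:, None] - pts[None]) ** 2).sum(-1))  # RBF affinity
Y = rng.normal(size=(32, 10))                           # toy embedding
print(round(spectral_loss_l1(Y, W), 3))
```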

Group activity recognition based on partitioned attention mechanism and interactive position relationship
Bo LIU, Linbo QING, Zhengyong WANG, Mei LIU, Xue JIANG
Journal of Computer Applications    2022, 42 (7): 2052-2057.   DOI: 10.11772/j.issn.1001-9081.2021060904

Group activity recognition is a challenging task in complex scenes, involving the interactions and relative spatial positions of a group of people in a scene. Current group activity recognition methods either lack fine-grained design or do not take full advantage of interactive features among individuals. Therefore, a network framework based on a partitioned attention mechanism and interactive position relationships was proposed, which further considered the semantic features of individual limbs and explored the relationship between interaction feature similarity and behavioral consistency among individuals. Firstly, the original video sequences and optical flow image sequences were used as network input, and a partitioned attention feature module was introduced to refine the limb motion features of individuals. Secondly, spatial positions and interactive distances were taken as individual interaction features. Finally, the individual motion features and spatial position relation features were fused as the node features of an undirected graph of the group scene, and a Graph Convolutional Network (GCN) was adopted to further capture activity interactions in the global scene, thereby recognizing the group activity. Experimental results show that this framework achieves 92.8% and 97.7% recognition accuracy on two group activity recognition datasets (CAD (Collective Activity Dataset) and CAE (Collective Activity Extended Dataset)). Compared with the Actor Relationship Graph (ARG) and the Confidence Energy Recurrent Network (CERN) on the CAD dataset, this framework improves recognition accuracy by 1.8 and 5.6 percentage points respectively. At the same time, ablation experiments show that the proposed algorithm achieves better recognition performance.

Air combat maneuver decision method based on three-way decision
Kaiqiang YUE, Bo LI, Panlong FAN
Journal of Computer Applications    2022, 42 (2): 616-621.   DOI: 10.11772/j.issn.1001-9081.2021050855

In order to improve the maneuver decision ability of fighters under conditions of insufficient information, an air combat maneuver decision method based on three-way decision was proposed. Firstly, a three-way decision intention recognition model was used to recognize the target's intention. Secondly, after introducing the target's combat intention factor into threat assessment, a dynamic adjustment method for maneuver decision weight factors based on three-way decision was proposed in combination with the target threat degree. Finally, the evaluation function of maneuver decision factors was constructed using fuzzy logic, and the optimal maneuver mode of the aircraft at each stage was obtained by combining the dynamic weight adjustment strategy with the maneuver decision evaluation function, thus forming an effective and feasible flight route. Simulation results show that the proposed three-way decision based air combat maneuver decision method is feasible and effective.

CT three-dimensional reconstruction algorithm based on super-resolution network
Junbo LI, Pinle QIN, Jianchao ZENG, Meng LI
Journal of Computer Applications    2022, 42 (2): 584-591.   DOI: 10.11772/j.issn.1001-9081.2021020219

Computed Tomography (CT) three-dimensional reconstruction improves the quality of three-dimensional models by upsampling volume data, reducing jagged edges, streak artifacts, and discontinuous surfaces in the model, so as to improve the accuracy of disease diagnosis in clinical medicine. A CT three-dimensional reconstruction algorithm based on a super-resolution network was proposed to address the insufficient clarity of previously reconstructed CT models. The network model, a Double Loss Refinement Network (DLRNET), performs three-dimensional reconstruction of abdominal CT through uniaxial super-resolution. An optimization learning module was introduced at the end of the network model, and besides the loss between the baseline image and the super-resolution image, the loss between the roughly reconstructed image inside the network and the baseline image was also calculated. In this way, driven by optimization learning and the double loss, results closer to the baseline image were produced by the network. Then, spatial pyramid pooling and a channel attention mechanism were introduced into the feature extraction module to learn the features of vascular tissues of different thicknesses and scales. Finally, an upsampling method that dynamically generates the convolution kernel set was used, so that a single network model could perform upsampling with different scaling factors. Experimental results show that, compared with the Residual Channel Attention Network (RCAN), the proposed network model improves the Peak Signal-to-Noise Ratio (PSNR) by 0.789 dB on average under scaling factors of 2, 3, and 4, showing that the network model effectively improves the quality of three-dimensional CT models, recovers the continuous detail features of vascular tissues to some extent, and is practical.

Single image dehazing based on conditional generative adversarial network with enhanced generator
Yang ZHAO, Bo LI
Journal of Computer Applications    2021, 41 (12): 3686-3691.   DOI: 10.11772/j.issn.1001-9081.2021010092

The presence of atmospheric particles such as smoke reduces the visibility of scenes captured by the naked eye. Most traditional dehazing methods estimate the transmissivity and atmospheric light of the hazy scene and restore the haze-free image using the atmospheric scattering model. Although these methods have made significant progress, they rely excessively on harsh prior conditions, and their dehazing effect is not ideal when those priors are absent. Therefore, an end-to-end integrated dehazing network was proposed, in which a Conditional Generative Adversarial Network (CGAN) with an enhanced generator was used to directly restore the haze-free image. On the generator side, U-Net was used as the basic structure, and a simple and effective enhanced decoder following an "integration-enhance-subtraction" promotion strategy was used to strengthen the recovery of features in the decoder. In addition, the Multi-Scale Structural SIMilarity (MS-SSIM) loss function was added to enhance the restoration of the edge details of the image. In experiments on synthetic and real datasets, the model was significantly better in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) than traditional dehazing models such as Dark Channel Prior (DCP), All-in-One Dehazing Network (AOD-Net), Progressive Feature Fusion Network (PFFNet), and Conditional Wasserstein Generative Adversarial Network (CWGAN). Experimental results show that, compared with these algorithms, the proposed network recovers haze-free images closer to the ground truth, with a better dehazing effect.

Dynamic graph representation learning method based on deep neural network and gated recurrent unit
Huibo LI, Yunxiao ZHAO, Liang BAI
Journal of Computer Applications    2021, 41 (12): 3432-3437.   DOI: 10.11772/j.issn.1001-9081.2021060994

Learning latent vector representations of the nodes in a graph is an important and ubiquitous task that aims to capture various attributes of the nodes. A lot of work demonstrates that static graph representation learning can capture part of the node information; however, real-world graphs evolve over time. To solve the problem that most dynamic network algorithms cannot effectively retain node neighborhood structure and temporal information, a dynamic network representation learning method based on Deep Neural Network (DNN) and Gated Recurrent Unit (GRU), namely DynAEGRU, was proposed. With an Auto-Encoder (AE) as its framework, DynAEGRU first aggregated neighborhood information with a DNN encoder to obtain low-dimensional feature vectors, then extracted node temporal information with a GRU network, and finally reconstructed the adjacency matrix with the decoder and compared it with the real graph to construct the loss. Experimental results on three real-world datasets show that DynAEGRU outperforms several static and dynamic graph representation learning algorithms.

Super-resolution and multi-view fusion based on magnetic resonance image inter-layer interpolation
Meng LI, Pinle QIN, Jianchao ZENG, Junbo LI
Journal of Computer Applications    2021, 41 (11): 3362-3367.   DOI: 10.11772/j.issn.1001-9081.2020122065

The high resolution within Magnetic Resonance (MR) image slices and the low resolution between slices leave MR images lacking diagnostic value in the coronal and sagittal planes. To solve this problem, a medical image processing algorithm based on inter-layer interpolation and a multi-view fusion network was proposed. Firstly, an inter-layer interpolation module was introduced to cut the three-dimensional MR volume data into two-dimensional images along the coronal and sagittal directions. Then, after feature extraction on the coronal and sagittal planes, weights were dynamically calculated by a spatial matrix filter and used to magnify the image by an upsampling factor of any size. Finally, the coronal and sagittal results obtained by the inter-layer interpolation module were aggregated into three-dimensional data and cut into two-dimensional images along the axial direction; the obtained two-dimensional images were fused in pairs and corrected with the axial data. Experimental results show that, compared with other super-resolution algorithms, the proposed algorithm improves the Peak Signal-to-Noise Ratio (PSNR) by about 1 dB at ×2, ×3, and ×4 scales. It can be seen that the proposed algorithm effectively improves the quality of image reconstruction.

Task offloading method based on probabilistic performance awareness and evolutionary game strategy in “cloud + edge” hybrid environment
Ying LEI, Wanbo ZHENG, Wei WEI, Yunni XIA, Xiaobo LI, Chengwu LIU, Hong XIE
Journal of Computer Applications    2021, 41 (11): 3302-3308.   DOI: 10.11772/j.issn.1001-9081.2020121932

Aiming at the low multi-task offloading efficiency in a "cloud + edge" hybrid environment composed of a central cloud server and multiple edge servers, a task offloading method based on probabilistic performance awareness and evolutionary game theory was proposed. Firstly, assuming that all edge servers in the environment exhibit time-varying, volatile performance, the historical performance data of the edge servers was probabilistically analyzed to obtain an evolutionary game model. Then, an Evolutionary Stability Strategy (ESS) for service offloading was generated to guarantee that each user could offload tasks with a high satisfaction rate. Simulation experiments were carried out on the cloud-edge resource locations dataset and the cloud service performance test dataset, and different methods were tested and compared over 24 consecutive time windows. Experimental results show that the proposed method outperforms traditional task offloading methods such as the Greedy algorithm, the Genetic Algorithm (GA), and the Nash-based Game algorithm on multiple performance indexes: compared with these three methods, it achieves an average user satisfaction rate higher by 13.7%, 117.0%, and 13.8% respectively, an average offloading time lower by 6.5%, 24.9%, and 8.3% respectively, and an average monetary cost lower by 67.9%, 88.7%, and 18.0% respectively.
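
As a sketch of the evolutionary-game machinery, the replicator dynamics below evolve the population shares of offloading strategies toward a stable point; the payoff matrix is a made-up stand-in, and the paper's actual ESS derivation may differ.

```python
import numpy as np

def replicator_step(x, payoff, lr=0.1):
    # One replicator-dynamics update: strategies (offload targets) whose
    # payoff beats the population average grow in share; a fixed point of
    # this process approximates an evolutionarily stable strategy (ESS).
    f = payoff @ x                      # fitness of each pure strategy
    return x * (1 + lr * (f - x @ f))   # proportional-growth update

payoff = np.array([[2.0, 0.5, 1.0],     # illustrative: cloud, edge-1, edge-2
                   [1.5, 1.0, 0.8],
                   [1.0, 0.9, 1.2]])
x = np.ones(3) / 3                      # start from uniform strategy shares
for _ in range(500):
    x = replicator_step(x, payoff)
    x = x / x.sum()                     # renormalize the shares
print(x.round(3))                       # converged offloading proportions
```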

Single shot multibox detector recognition method for aerial targets of unmanned aerial vehicle
Huaiyu ZHU, Bo LI
Journal of Computer Applications    2021, 41 (11): 3234-3241.   DOI: 10.11772/j.issn.1001-9081.2021010026

Unmanned Aerial Vehicle (UAV) aerial images have a wide field of view, and the targets in them are small with blurred boundaries, so the existing Single Shot multibox Detector (SSD) model has difficulty detecting small targets in aerial images accurately. To effectively reduce the missed detections of the original model, a new SSD model based on continuous upsampling was proposed on the basis of the Feature Pyramid Network (FPN). In the improved SSD model, the input image size was adjusted to 320 × 320, the Conv3_3 feature layer was added, the high-level features were upsampled, and the features of the first five layers of the VGG16 network were fused using a feature pyramid structure to enhance the semantic representation ability of each feature layer. Meanwhile, the sizes of the anchor boxes were redesigned. Training and validation were carried out on the open aerial dataset UCAS-AOD. Experimental results show that the improved SSD model achieves 94.78% in mean Average Precision (mAP) over the different categories, an increase of 17.62% over the existing SSD model, including 4.66% for the plane category and 34.78% for the car category.

Digital camouflage generation method based on cycle-consistent adversarial network
Xu TENG, Hui ZHANG, Chunming YANG, Xujian ZHAO, Bo LI
Journal of Computer Applications    2020, 40 (2): 566-570.   DOI: 10.11772/j.issn.1001-9081.2019091625

Traditional methods of generating digital camouflage cannot generate camouflage from background information in real time. To cope with this problem, a digital camouflage generation method based on a cycle-consistent adversarial network was proposed. Firstly, image features were extracted using a densely connected convolutional network, and the learned digital camouflage features were mapped into the background image. Secondly, a color retention loss was added to improve the quality of the generated camouflage, ensuring that the generated digital camouflage stayed consistent with the surrounding background colors. Finally, a self-normalizing neural network was added to the discriminator to improve the model's robustness to noise. Given the lack of objective evaluation criteria for digital camouflage, an edge detection algorithm and the Structural SIMilarity (SSIM) algorithm were used to evaluate the camouflage effect of the generated patterns. Experimental results show that the SSIM score of the digital camouflage generated by the proposed method on self-made datasets is reduced by more than 30% compared with existing algorithms, verifying the effectiveness of the proposed method for the digital camouflage generation task.
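
The SSIM part of that evaluation can be reproduced with off-the-shelf tooling; a toy sketch follows, comparing a camouflage patch against its background (the arrays here are synthetic, not the paper's data).

```python
import numpy as np
from skimage.metrics import structural_similarity

# Score how structurally similar a generated camouflage patch is to its
# surrounding background; synthetic grayscale arrays stand in for images.
rng = np.random.default_rng(8)
background = rng.random((64, 64))
camouflage = np.clip(background + rng.normal(0, 0.05, (64, 64)), 0, 1)
score = structural_similarity(background, camouflage, data_range=1.0)
print(round(score, 3))   # closer to 1 = blends better with the background
```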

New scheme for privacy-preserving in electronic transaction
YANG Bo, LI Shundong
Journal of Computer Applications    2014, 34 (9): 2635-2638.   DOI: 10.11772/j.issn.1001-9081.2014.09.2635

For the privacy security of users in electronic transactions, an electronic transaction scheme protecting users' privacy was proposed. The scheme combined oblivious transfer with the ElGamal signature to achieve privacy security for both trading parties. A user employed a serial number to choose digital goods and paid the bank anonymously and correctly. The bank then sent the user a digital signature on the digital goods, and the user interacted with the merchant obliviously through the paid-for digital signature. The user obtained the key through exponentiation encryption of the serial number, while the merchant could not distinguish which digital goods were ordered. Because the serial number was concealed and restricted, the user could not open messages with unselected serial numbers and could obtain only the digital goods actually paid for. Correctness proof and security analysis show that the proposed scheme can protect the mutual information of both trading parties in electronic transactions and prevent malicious fraud by the merchant. The scheme features short signatures, a small amount of calculation, and dynamically changing keys, and its security is strong.

Face recognition via kernel-based non-negative sparse representation
BO Chunjuan, ZHANG Rubo, LIU Guanqun, JIANG Yuzhe
Journal of Computer Applications    2014, 34 (8): 2227-2230.   DOI: 10.11772/j.issn.1001-9081.2014.08.2227

A novel Kernel-based Non-negative Sparse Representation (KNSR) method was presented for face recognition. The contributions are mainly three-fold: first, non-negative constraints on the representation coefficients were introduced into Sparse Representation (SR), and a kernel function was exploited to depict the non-linear relationships among different samples, based on which the corresponding objective function was proposed. Second, a multiplicative gradient descent method was proposed to solve the objective function, which can achieve the global optimum in theory. Finally, local binary features and the Hamming kernel were used to model the non-linear relationships among face samples, achieving robust face recognition. Experimental results on several challenging face databases demonstrate that the proposed algorithm has higher recognition rates than the Nearest Neighbor (NN), Support Vector Machine (SVM), Nearest Subspace (NS), SR, and Collaborative Representation (CR) algorithms, achieving about 99% recognition rates on both the YaleB and AR databases.
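
A minimal sketch of a multiplicative update for kernel non-negative sparse coding, assuming the objective min ||phi(y) - Phi(X)c||^2 + lam*sum(c) with c >= 0, and an RBF kernel in place of the paper's Hamming kernel; each update keeps the coefficients non-negative without a step size.

```python
import numpy as np

def knsr_code(K_xx, k_xy, lam=0.1, iters=200):
    # Multiplicative update c *= k_xy / (K_xx c + lam): derived from the
    # gradient of the kernelized least-squares term, it preserves
    # non-negativity, in the spirit of the paper's multiplicative
    # gradient descent (the exact objective there may differ).
    c = np.full(len(k_xy), 1.0 / len(k_xy))
    for _ in range(iters):
        c = c * k_xy / (K_xx @ c + lam + 1e-12)
    return c

rng = np.random.default_rng(4)
X = rng.random((20, 50))                 # 20 training samples
y = X[:3].mean(axis=0)                   # query as a mix of three samples
gamma = 0.5
K_xx = np.exp(-gamma * ((X[:, None] - X[None]) ** 2).sum(-1))  # RBF kernel
k_xy = np.exp(-gamma * ((X - y) ** 2).sum(-1))
print(knsr_code(K_xx, k_xy).round(3))    # sparse non-negative coefficients
```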

Improvement of DV-Hop based localization algorithm
XIA Shaobo, LIAN Lijun, WANG Luna, ZHU Xiaoli, ZOU Jianmei
Journal of Computer Applications    2014, 34 (5): 1247-1250.   DOI: 10.11772/j.issn.1001-9081.2014.05.1247

The DV-Hop algorithm estimates the distance between nodes as the hop count multiplied by the average distance per hop, and estimates node coordinates by trilateration or maximum likelihood; these steps have defects that cause large positioning errors. This paper presented an improved DV-Hop algorithm based on node density regional division (Density Zoning DV-Hop, DZDV-Hop), which uses the network connectivity and node density to limit the hop count used in estimating node coordinates, and the weighted centroid method to estimate positions. Matlab simulation results show that, compared with the traditional DV-Hop algorithm under the same network hardware and topology, the improved algorithm effectively reduces the communication traffic of nodes and lowers the positioning error rate by 13.6%, improving positioning accuracy.
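
The weighted centroid step is straightforward to sketch; the 1/d weighting below is one common illustrative choice, and the estimated distances would come from the DV-Hop phase (hop count × average hop distance).

```python
import numpy as np

def weighted_centroid(anchors, est_dists):
    # Weighted-centroid position estimate: anchors estimated to be closer
    # get larger weights (w = 1/d is an illustrative choice, not
    # necessarily the paper's exact weighting).
    w = 1.0 / np.maximum(est_dists, 1e-9)
    return (anchors * w[:, None]).sum(axis=0) / w.sum()

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
# est_dists = hop_count * average_hop_distance from the DV-Hop phase
print(weighted_centroid(anchors, np.array([70.0, 75.0, 72.0])))
```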

Fatigue behavior detection by mining keyboard and mouse events
WANG Tianben, WANG Haipeng, ZHOU Xingshe, NI Hongbo, LIN Qiang
Journal of Computer Applications    2014, 34 (1): 227-231.   DOI: 10.11772/j.issn.1001-9081.2014.01.0227
Long-term continuous use of computers brings negative effects on users' health. In order to detect users' fatigue levels in a non-invasive manner, an approach that measures the fatigue level of hand muscles based on keyboard and mouse events was proposed. The proposed method integrated keying action matching, data noise filtering, and feature vector extraction/classification to collect and analyze the delay characteristics of keying and hitting actions, upon which the detection of hand muscle fatigue level was enabled. With the detected fatigue level, friends belonging to the same virtual community on current social networks could be alerted in real time and persuaded to adopt a health-conscious way of using computers in their daily lives. In particular, an interesting conclusion was reached: there is an obvious negative correlation between keying (hitting) delay and the fatigue level of hand muscles. Experimental validation on two weeks of data collected from 15 participants shows that the proposed method is effective in detecting users' fatigue levels and distributing fatigue-related health information on social network platforms.
Application of support vector regression in prediction of due date under uncertain assemble-to-order environment
SUN Dechang, SHI Haibo, LIU Chang
Journal of Computer Applications    2013, 33 (08): 2362-2365.  
To quickly estimate an accurate, reliable due date from order information and the features of the production system in an Assemble-To-Order (ATO) environment, a due date prediction model was constructed based on analysis of the influence mechanisms of the uncertainty factors. The model parameters comprise three parts: order release time, assembly cycle time, and abnormal tardiness. The order release time was based on the availability of materials and production capacity, while the assembly cycle time and abnormal tardiness were predicted using the Support Vector Regression (SVR) method on actual production history data. The case study shows that the predictions of the model are close to actual due dates, and the model can be used to guide delivery time negotiation for orders.
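
A minimal sketch of the SVR piece under assumed order features (quantity, product complexity, shop load); the synthetic data and feature names are stand-ins, not the paper's inputs.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Predict assembly cycle time from order features; the three columns
# (quantity, complexity, shop load) and the synthetic target are
# illustrative assumptions.
rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(300, 3))
y = 2.0 + 5 * X[:, 0] + 3 * X[:, 1] * X[:, 2] + rng.normal(0, 0.2, 300)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:250], y[:250])
cycle_time = model.predict(X[250:])
# due date = order release time + predicted cycle time + abnormal tardiness
print(cycle_time[:5].round(2))
```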
Validation method of security features in safety critical software requirements specification
WANG Fei, GUO Yuanbo, LI Bo, HAO Yaohui
Journal of Computer Applications    2013, 33 (07): 2041-2045.   DOI: 10.11772/j.issn.1001-9081.2013.07.2041
Since the security features described in natural language in safety-critical software requirements specifications tend to be inaccurate and inconsistent, a validation method for security features based on UMLsec was proposed. The method completed the UMLsec model by customizing stereotypes, tags, and constraints for the security features of the core classes on the basis of the class and sequence diagrams of the UML requirements model. Afterwards, the support tool designed and implemented for UMLsec was used for automatic verification of the security features. The experimental results show that the proposed method can accurately describe security features in safety-critical requirements specifications and automatically verify whether they meet the security requirements.
Mine gas monitoring by multi-source information clustering fusion
SUN Yanbo, LIU Zongzhu, MENG Ke, TANG Yang
Journal of Computer Applications    2013, 33 (06): 1783-1786.   DOI: 10.3724/SP.J.1087.2013.01783
Due to the complexity and dynamic changes of the coal mine environment, the concentrations of harmful gases are difficult to monitor accurately. Traditional monitoring methods use a single sensor to collect information, and the collected data suffer from a simplistic form, low reliability, and large errors. Concerning these problems, a new method was proposed: sampling a variety of heterogeneous gas sources, filtering with a strong classification algorithm, and finally fusing the obtained information. Experiments show that the new method significantly improves the reliability of the mine monitoring system.
Imbalanced data learning based on particle swarm optimization
CAO Peng, LI Bo, LI Wei, ZHAO Dazhe
Journal of Computer Applications    2013, 33 (03): 789-792.   DOI: 10.3724/SP.J.1087.2013.00789
In order to improve classification performance on imbalanced data, a new Particle Swarm Optimization (PSO) based method was introduced. It simultaneously optimized the re-sampling rate and selected the feature set, taking an imbalanced-data evaluation metric as the objective function of the particle swarm optimization, so as to achieve the best data distribution. The proposed method was tested on a large number of UCI datasets and compared with state-of-the-art methods. The experimental results show that the proposed method has substantial advantages over other methods; moreover, they prove that simultaneously optimizing the re-sampling rate and the feature set can effectively improve performance on imbalanced data.
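
A compact sketch of the idea, assuming each particle encodes an oversampling rate plus a binary feature mask and is scored by minority-class F1 with a logistic-regression base learner; all of these concrete choices are stand-ins for the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def fitness(particle, X, y):
    # Decode one particle as [oversampling rate, feature mask ...] and
    # score it with an imbalance-aware metric (minority-class F1 here;
    # the paper's objective is an imbalanced-data evaluation metric).
    rate, mask = particle[0], particle[1:] > 0.5
    if not mask.any():
        return 0.0
    Xtr, Xte, ytr, yte = train_test_split(X[:, mask], y, random_state=0)
    minor = np.where(ytr == 1)[0]                      # oversample minority
    extra = np.random.default_rng(0).choice(minor, int(rate * len(minor)))
    Xtr = np.vstack([Xtr, Xtr[extra]])
    ytr = np.concatenate([ytr, ytr[extra]])
    clf = LogisticRegression(max_iter=500).fit(Xtr, ytr)
    return f1_score(yte, clf.predict(Xte), zero_division=0)

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 8))
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)              # ~14% minority class
P = rng.random((10, 9))                                # 10 particles
V = np.zeros_like(P)
pbest, pbest_f = P.copy(), np.array([fitness(p, X, y) for p in P])
for _ in range(15):
    g = pbest[pbest_f.argmax()]                        # global best
    V = (0.7 * V + 1.5 * rng.random(P.shape) * (pbest - P)
               + 1.5 * rng.random(P.shape) * (g - P))
    P = np.clip(P + V, 0.0, 1.0)
    f = np.array([fitness(p, X, y) for p in P])
    better = f > pbest_f
    pbest[better], pbest_f[better] = P[better], f[better]
print(round(pbest_f.max(), 3))                         # best F1 found
```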