Recommendation model combining self-features and contrastive learning
Xingyao YANG, Yu CHEN, Jiong YU, Zulian ZHANG, Jiaying CHEN, Dongxiao WANG
Journal of Computer Applications    2024, 44 (9): 2704-2710.   DOI: 10.11772/j.issn.1001-9081.2023091264

Aiming at the over-smoothing and noise problems that arise in embedding representations during the message passing of graph convolution in graph neural network based recommendation, a Recommendation model combining Self-features and Contrastive Learning (SfCLRec) was proposed. The model was trained using a pre-training plus formal-training architecture. Firstly, the embedding representations of users and items were pre-trained, fusing node self-features to maintain the feature uniqueness of each node, and a hierarchical contrastive learning task was introduced to mitigate the noisy information from higher-order neighboring nodes. Then, the collaborative graph adjacency matrix was reconstructed according to the scoring mechanism in the formal training stage. Finally, the predicted scores were obtained from the final embeddings. Compared with existing graph neural network recommendation models such as LightGCN and Simple Graph Contrastive Learning (SimGCL), SfCLRec achieves better recall and NDCG (Normalized Discounted Cumulative Gain) on the three public datasets ML-latest-small, Last.FM and Yelp, validating its effectiveness.
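
A minimal sketch of the contrastive ingredient (an assumption: the exact hierarchical pairing and loss used in SfCLRec are not specified here, so a generic InfoNCE contrast between two views of the same node embeddings stands in):

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """z1, z2: (N, d) embeddings of the same N nodes from two views/layers."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # (N, N) pairwise similarities
    labels = torch.arange(z1.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# usage: contrast shallow-layer embeddings against deeper-layer embeddings
loss = info_nce(torch.randn(64, 32), torch.randn(64, 32))
```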

High-efficiency dual-LAN Terahertz WLAN MAC protocol based on spontaneous data transmission
Zhi REN, Jindong GU, Yang LIU, Chunyu CHEN
Journal of Computer Applications    2024, 44 (2): 519-525.   DOI: 10.11772/j.issn.1001-9081.2023020250

In the existing MAC (Medium Access Control) protocols for Dual-LAN (Local Area Network) Terahertz WLAN (Dual-LAN THz WLAN), some nodes may repeatedly send the same Channel Time Request (CTRq) frame within multiple superframes to apply for time slot resources, and idle time slots exist in some periods of network operation. Therefore, SDTE-MAC, a high-efficiency MAC protocol based on spontaneous data transmission, was proposed. The SDTE-MAC protocol enabled each node to maintain one or more time-unit linked lists that kept it synchronized in running time with the rest of the nodes in the network, so that every node knew where to start sending data frames in the idle channel time slots. The protocol optimized the traditional channel slot allocation and channel remaining slot reallocation processes, improved network throughput and channel slot utilization, reduced data delay, and could further improve the performance of Dual-LAN THz WLAN. Simulation results showed that, at network saturation, compared with the N-CTAP (Normal Channel Time Allocation Period) slot resource allocation mechanism and the adaptive superframe-period shortening mechanism in AHT-MAC (Adaptive High Throughput multi-PAN MAC protocol), SDTE-MAC increased MAC-layer throughput by 9.2%, increased channel slot utilization by 10.9%, and reduced data delay by 22.2%.
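
A toy sketch of the time-unit bookkeeping idea (assumptions: the real SDTE-MAC slot structure and synchronization are far more elaborate; here every node is presumed to hold an identical ordered list of time units, so the first idle unit can be claimed for spontaneous transmission):

```python
from collections import deque

class TimeUnitList:
    def __init__(self, n_units: int):
        self.units = deque({"t": t, "owner": None} for t in range(n_units))

    def reserve_first_idle(self, node_id: str):
        for u in self.units:
            if u["owner"] is None:
                u["owner"] = node_id  # node transmits spontaneously in this unit
                return u["t"]
        return None  # no idle unit: fall back to a CTRq-based request

schedule = TimeUnitList(8)
print(schedule.reserve_first_idle("node-A"))  # -> 0
print(schedule.reserve_first_idle("node-B"))  # -> 1
```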

Differential and linear characteristic analysis of full-round Shadow algorithm
Yong XIANG, Yanjun LI, Dingyun HUANG, Yu CHEN, Huiqin XIE
Journal of Computer Applications    2024, 44 (12): 3839-3843.   DOI: 10.11772/j.issn.1001-9081.2023121762

As Radio Frequency IDentification (RFID) technology and wireless sensors become increasingly common, the need to secure the data transmitted and processed by such resource-limited devices has driven the emergence and growth of lightweight ciphers. Because lightweight ciphers are characterized by small key sizes and a limited number of encryption rounds, precise security evaluation is needed before they are put into service. For these security requirements, the differential and linear characteristics of the full-round Shadow algorithm were analyzed. Firstly, a concept of second difference was proposed to describe the differential characteristic more clearly, the existence of a full-round differential characteristic with probability 1 in the algorithm was proved, and the correctness of the differential characteristic was verified experimentally. Secondly, a full-round linear characteristic was provided: it was proved that, given a set of Shadow-32 (or Shadow-64) plaintext-ciphertext pairs, 8 (or 16) bits of key information can be obtained, and this was also verified experimentally. Thirdly, based on the linear equation relationship between plaintexts, ciphertexts and round keys, the number of equations and independent variables of the quadratic Boolean function were estimated, and the computational complexity of recovering the initial key was calculated to be 2^63.4. Finally, the structural features of the Shadow algorithm were summarized, and directions for future research were suggested. Moreover, this differential and linear characteristic analysis of the full-round Shadow algorithm provides a reference for the differential and linear analysis of other lightweight ciphers.
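
A generic harness for checking a claimed differential characteristic empirically (assumptions: `toy_cipher` below is a stand-in, not the Shadow algorithm; substituting a real Shadow-32 implementation would let one reproduce the paper's probability-1 full-round characteristic):

```python
import random

def toy_cipher(x: int, key: int) -> int:
    # placeholder 32-bit ARX-style toy round function, NOT Shadow
    for _ in range(4):
        x = ((x + key) & 0xFFFFFFFF) ^ (((x << 3) & 0xFFFFFFFF) | (x >> 29))
    return x

def differential_probability(din: int, dout: int, trials: int = 10000) -> float:
    key = random.getrandbits(32)
    hits = sum(
        toy_cipher(x, key) ^ toy_cipher(x ^ din, key) == dout
        for x in (random.getrandbits(32) for _ in range(trials))
    )
    return hits / trials  # fraction of pairs following the characteristic

print(differential_probability(din=0x1, dout=0x8))
```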

Directed gene regulatory network inference algorithm based on t-test and stepwise network search
Du CHEN, Yuanyuan LI, Yu CHEN
Journal of Computer Applications    2024, 44 (1): 199-205.   DOI: 10.11772/j.issn.1001-9081.2023010086

To overcome the shortcoming that the Path Consensus Algorithm based on Conditional Mutual Information (PCA-CMI) cannot identify regulation directions, and to further improve the accuracy of network inference, a Directed Network Inference algorithm enhanced by t-Test and Stepwise Regulation Search (DNI-T-SRS) was proposed. First, the upstream-downstream relationships of genes were identified by t-tests performed on expression data under different perturbation settings; the conditional genes selected in this way guided the Path Consensus (PC) algorithm and the calculation of Conditional Mutual Inclusive Information (CMI2) for removing redundant regulations, yielding an algorithm named CMI2-based Network Inference guided by t-Test (CMI2NI-T). Then, a corresponding Michaelis-Menten differential equation model was established to fit the expression data, and the network inference result was further corrected by a stepwise network search based on the Bayesian information criterion. Numerical experiments were conducted on two benchmark networks of the DREAM6 challenge, where the Areas Under Curve (AUCs) of CMI2NI-T were 0.7679 and 0.9796, which were 16.23% and 11.62% higher than those of PCA-CMI. With the additional data-fitting process, DNI-T-SRS achieved inference accuracies of 86.67% and 100.00%, which were 18.19% and 10.52% higher than those of PCA-CMI. The experimental results demonstrate that the proposed DNI-T-SRS can eliminate indirect regulatory relationships while preserving direct regulatory connections, contributing to precise inference of gene regulatory networks.
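
A hedged sketch of the t-test step (assumptions: synthetic wild-type vs. gene-A-perturbed expression arrays; a significant shift in gene B's expression under perturbation of A is read as evidence for the direction A -> B, as in the algorithm's first stage):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wild_type_B = rng.normal(1.0, 0.1, size=30)   # gene B, unperturbed samples
perturbed_B = rng.normal(1.6, 0.1, size=30)   # gene B when gene A is perturbed

t, p = stats.ttest_ind(wild_type_B, perturbed_B, equal_var=False)
if p < 0.01:
    print(f"infer directed edge A -> B (t={t:.2f}, p={p:.1e})")
```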

Dam surface disease detection algorithm based on improved YOLOv5
Shengwei DUAN, Xinyu CHENG, Haozhou WANG, Fei WANG
Journal of Computer Applications    2023, 43 (8): 2619-2629.   DOI: 10.11772/j.issn.1001-9081.2022081207

Since current water conservancy dam inspection relies mainly on manual on-site work, with high operating costs and low efficiency, an improved detection algorithm based on YOLOv5 was proposed. Firstly, a modified multi-scale visual Transformer structure was used to improve the backbone: the multi-scale global information captured by the Transformer and the local information extracted by the Convolutional Neural Network (CNN) were aggregated into fused features, making full use of multi-scale semantic and location information to strengthen the feature extraction capability of the network. Then, a coordinate attention mechanism was added in front of each feature detection layer of the network to encode features along the height and width directions of the image, and the encoded features were used to build long-distance associations between pixels on the feature map, enhancing the target localization ability of the network in complex environments. Next, the sampling of positive and negative training samples was improved: by constructing the average fit and the deviation between prior boxes and ground-truth boxes, candidate positive samples were helped to respond to prior boxes of shapes similar to their own, making the network converge faster and better and improving overall performance and generalization. Finally, to meet application requirements, the network was made lightweight and optimized through structural pruning and structural re-parameterization. Experimental results on the adopted dam disease data show that, compared with the original YOLOv5s algorithm, the improved network has its mAP@0.5 (mean Average Precision) improved by 10.5 percentage points and its mAP@0.5:0.95 improved by 17.3 percentage points; compared with the network before lightening, the lightweight network has its number of parameters and FLOPs (FLoating-point OPerations) reduced by 24% and 13% respectively, with detection speed improved by 42%, verifying that the network meets the precision and speed requirements of disease detection in current application scenarios.
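
A sketch of a coordinate attention block (an assumption: this follows the published Coordinate Attention design in simplified form, with plain ReLU in place of the usual normalization and activation details; the paper's exact variant may differ):

```python
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        pooled_h = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
        pooled_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.conv1(torch.cat([pooled_h, pooled_w], dim=2)))
        yh, yw = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(yh))                        # height attention
        a_w = torch.sigmoid(self.conv_w(yw)).permute(0, 1, 3, 2)    # width attention
        return x * a_h * a_w

out = CoordAttention(64)(torch.randn(2, 64, 32, 32))  # same shape out
```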

Blockchain smart contract privacy authorization method based on TrustZone
Luyu CHEN, Xiaofeng MA, Jing HE, Shengzhi GONG, Jian GAO
Journal of Computer Applications    2023, 43 (6): 1969-1978.   DOI: 10.11772/j.issn.1001-9081.2022050719

To meet current data sharing needs in the context of digitalization while protecting the security of private data, a blockchain smart contract private data authorization method based on TrustZone was proposed. The blockchain system realizes data sharing in different application scenarios and meets regulatory requirements, while TrustZone Trusted Execution Environment (TEE) technology provides a securely isolated environment for private computation. In the integrated system, private data was uploaded by the regulatory agency, and the plaintext of that data was obtained by other business nodes only after the user's authorization, thereby protecting the user's privacy and security. To address the limited memory space of the TrustZone architecture in this technology fusion, a private set intersection algorithm for small-memory conditions was proposed, in which the intersection of large-scale datasets was computed on the basis of a grouping idea. The proposed algorithm was tested with datasets of different orders of magnitude. The results show that the time and space consumption of the proposed algorithm fluctuates within a very small range and is relatively stable, with variances of 1.0 s² and 0.01 MB² respectively. When the order of magnitude of the dataset increases, the time consumption remains predictable. Furthermore, using a pre-sorted dataset can greatly improve the algorithm performance.
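
A minimal sketch of the group-wise intersection idea (assumptions: both datasets are pre-sorted, so matching groups can be loaded into limited memory one pair at a time; the cryptographic protections of the actual private set intersection are omitted):

```python
from itertools import groupby

def grouped_intersection(sorted_a, sorted_b, group_bits: int = 24):
    key = lambda v: v >> group_bits               # high bits identify the group
    ga, gb = groupby(sorted_a, key), groupby(sorted_b, key)
    a, b = next(ga, None), next(gb, None)
    out = []
    while a is not None and b is not None:
        if a[0] == b[0]:                          # same group: intersect in memory
            out.extend(set(a[1]) & set(b[1]))
            a, b = next(ga, None), next(gb, None)
        elif a[0] < b[0]:
            a = next(ga, None)                    # skip unmatched group of A
        else:
            b = next(gb, None)                    # skip unmatched group of B
    return sorted(out)

a = sorted([42, 7, 1_000_003, 900_000_001])
b = sorted([42, 900_000_001, 55])
print(grouped_intersection(a, b))                 # -> [42, 900000001]
```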

Large-scale subspace clustering algorithm with local structure learning
Qize REN, Hongjie JIA, Dongyu CHEN
Journal of Computer Applications    2023, 43 (12): 3747-3754.   DOI: 10.11772/j.issn.1001-9081.2022111750

Conventional large-scale subspace clustering methods ignore the local structure that prevails among the data when computing the anchor affinity matrix, and they have large errors when calculating the approximate eigenvectors of the Laplacian matrix, which is not conducive to data clustering. Aiming at these problems, a Large-scale Subspace Clustering algorithm with Local structure learning (LLSC) was proposed. In the proposed algorithm, local structure learning was embedded into the learning of the anchor affinity matrix, enabling comprehensive use of global and local information to mine the subspace structure of the data. In addition, inspired by Nonnegative Matrix Factorization (NMF), an iterative optimization method was designed to simplify the solution of the anchor affinity matrix. Then, the mathematical relationship between the anchor affinity matrix and the Laplacian matrix was established according to the Nyström approximation method, and the calculation of the Laplacian matrix's eigenvectors was modified to improve clustering performance. Compared with LMVSC (Large-scale Multi-View Subspace Clustering), SLSR (Scalable Least Square Regression), LSC-k (Landmark-based Spectral Clustering using k-means) and k-FSC (k-Factorization Subspace Clustering), LLSC demonstrates significant improvements on four widely used large-scale datasets; specifically, on the Pokerhand dataset, the accuracy of LLSC is 28.18 percentage points higher than that of k-FSC. These results confirm the effectiveness of LLSC.
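
A sketch of the anchor-based spectral shortcut (assumptions: the paper's corrected Nyström-style computation differs in its normalization details; shown here is the standard fact that for an anchor affinity matrix Z, the eigenvectors of A = Z Zᵀ are the left singular vectors of Z, so the n-by-n affinity never needs to be formed):

```python
import numpy as np

def anchor_spectral_embedding(Z: np.ndarray, k: int) -> np.ndarray:
    """Top-k eigenvectors of A = Z @ Z.T via the thin SVD of the n-by-m Z."""
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :k]                    # eigenvalues of A are s[:k] ** 2

rng = np.random.default_rng(1)
Z = np.abs(rng.standard_normal((1000, 50)))    # toy anchor affinities
emb = anchor_spectral_embedding(Z, k=10)       # feed to k-means for clusters
print(emb.shape)                               # (1000, 10)
```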

Rethinking errors in human pose estimation heatmap
Feiyu YANG, Zhan SONG, Zhenzhong XIAO, Yaoyang MO, Yu CHEN, Zhe PAN, Min ZHANG, Yao ZHANG, Beibei QIAN, Chaowei TANG, Wu JIN
Journal of Computer Applications    2022, 42 (8): 2548-2555.   DOI: 10.11772/j.issn.1001-9081.2021050805

Currently, the leading human pose estimation algorithms are heatmap-based. Heatmap decoding, i.e. transforming heatmaps into coordinates of human joint points, is a basic step of these algorithms, and the existing heatmap decoding algorithms neglect the effect of systematic errors. Therefore, an error-compensation-based heatmap decoding algorithm was proposed. Firstly, an error compensation factor of the system was estimated during training. Then, in the inference stage, this factor was used to compensate the prediction errors of human joint points, covering both systematic and random error. Extensive experiments were carried out on different network architectures, input resolutions, evaluation metrics and datasets. The results show that, compared with the existing optimal algorithm, the proposed algorithm achieves significant accuracy gains: the Average Precision (AP) of the HRNet-W48-256×192 model is improved by 2.86 percentage points on the Common Objects in COntext (COCO) dataset, and the head-normalized Percentage of Correct Keypoints (PCKh) of the ResNet-152-256×256 model is improved by 7.8 percentage points on the Max Planck Institute for Informatics (MPII) dataset. Moreover, unlike existing algorithms, the proposed algorithm needs neither Gaussian smoothing preprocessing nor derivative operations, making it two times faster than the existing optimal algorithm. The proposed algorithm thus has practical value for fast and accurate human pose estimation.
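
A sketch of heatmap decoding with a compensation offset (an assumption: `comp` plays the role of the learned error compensation factor; the common fixed quarter-pixel shift toward the second-highest neighbor is shown, while the paper's training-time estimation of the factor is not reproduced):

```python
import numpy as np

def decode_heatmap(hm: np.ndarray, comp: float = 0.25):
    """Return (x, y) of the peak, shifted toward the larger neighbor."""
    y, x = np.unravel_index(np.argmax(hm), hm.shape)
    dx = np.sign(hm[y, min(x + 1, hm.shape[1] - 1)] - hm[y, max(x - 1, 0)])
    dy = np.sign(hm[min(y + 1, hm.shape[0] - 1), x] - hm[max(y - 1, 0), x])
    return x + comp * dx, y + comp * dy

hm = np.zeros((64, 48)); hm[30, 20] = 1.0; hm[30, 21] = 0.6
print(decode_heatmap(hm))   # peak shifted a quarter pixel toward x = 21
```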

Semantic extraction of domain-dependent mathematical text
Xiaoyu CHEN, Wei WANG
Journal of Computer Applications    2022, 42 (8): 2386-2393.   DOI: 10.11772/j.issn.1001-9081.2021060924

Aiming at the problem of insufficient acquisition of document semantic information in the field of science and technology, a set of rule-based methods for extracting semantics from domain-dependent mathematical text was proposed. Firstly, domain concepts were extracted from the text, and semantic mapping between mathematical entities and domain concepts was realized. Secondly, through context analysis of mathematical symbols, the entity mentions or corresponding textual descriptions of the symbols were obtained and the semantics of the symbols were extracted. Finally, the semantic analysis of expressions was completed based on the extracted semantics of the mathematical symbols. Taking linear algebra texts as research examples, a semantic tagging dataset was constructed for experiments. Experimental results show that the proposed methods achieve a precision higher than 93% and a recall higher than 91% in the semantic extraction of identifiers, linear algebra entities and expressions.
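
A toy sketch of rule-based symbol-semantics extraction (assumptions: the paper's rule set is far richer; these two regular expressions only catch patterns such as "let A be a matrix" or "the vector x", which suffice to illustrate the context-analysis idea):

```python
import re

PATTERNS = [
    re.compile(r"let\s+(?P<sym>[A-Za-z])\s+be\s+(?:a|an|the)\s+"
               r"(?P<desc>[\w\s-]+?)(?:[.,]|$)", re.I),
    re.compile(r"the\s+(?P<desc>matrix|vector|scalar|eigenvalue)\s+"
               r"(?P<sym>[A-Za-z])\b", re.I),
]

def extract_symbol_semantics(sentence: str) -> dict:
    found = {}
    for pat in PATTERNS:
        for m in pat.finditer(sentence):
            found.setdefault(m.group("sym"), m.group("desc").strip())
    return found

print(extract_symbol_semantics(
    "Let A be an invertible matrix, and the vector x solves Ax = b."))
# -> {'A': 'invertible matrix', 'x': 'vector'}
```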

Named entity recognition method combining multiple semantic features
Yayao ZUO, Haoyu CHEN, Zhiran CHEN, Jiawei HONG, Kun CHEN
Journal of Computer Applications    2022, 42 (7): 2001-2008.   DOI: 10.11772/j.issn.1001-9081.2021050861

Aiming at the common non-linear relationships between characters in languages, and in order to capture richer semantic features, a Named Entity Recognition (NER) method based on Graph Convolutional Network (GCN) and a self-attention mechanism was proposed. Firstly, drawing on deep learning's capability to extract character features effectively, the GCN was used to learn global semantic features between characters, and the Bidirectional Long Short-Term Memory network (BiLSTM) was used to extract the context-dependent features of the characters. Secondly, these features were fused, and a self-attention mechanism was introduced to weigh their internal importance. Finally, a Conditional Random Field (CRF) was used to decode the optimal coding sequence from the fused features as the entity recognition result. Experimental results show that, compared with methods that only use BiLSTM or CRF, the proposed method improves recognition precision by 2.39% and 15.2% respectively on the Microsoft Research Asia (MSRA) dataset and the Biomedical Natural Language Processing/Natural Language Processing in Biomedical Applications (BioNLP/NLPBA) 2004 dataset, indicating that it has good sequence labeling capability on both Chinese and English datasets and strong generalization capability.
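
A sketch of fusing the two character-level feature streams with self-attention (assumptions: `gcn_feats` and `lstm_feats` stand in for the GCN and BiLSTM outputs; the paper's exact fusion and the CRF decoding layer are omitted):

```python
import torch
import torch.nn as nn

seq_len, d = 16, 128
gcn_feats = torch.randn(1, seq_len, d)      # global semantic features (GCN)
lstm_feats = torch.randn(1, seq_len, d)     # contextual features (BiLSTM)

fused = torch.cat([gcn_feats, lstm_feats], dim=-1)       # (1, seq, 2d)
attn = nn.MultiheadAttention(embed_dim=2 * d, num_heads=4, batch_first=True)
weighted, _ = attn(fused, fused, fused)     # self-attention re-weights features
print(weighted.shape)                       # (1, 16, 256) -> to the CRF layer
```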

Integrating posterior probability calibration training into text classification algorithm
Jing JIANG, Yu CHEN, Jieping SUN, Shenggen JU
Journal of Computer Applications    2022, 42 (6): 1789-1795.   DOI: 10.11772/j.issn.1001-9081.2021091638

Pre-trained language models used for text representation have achieved high accuracy on various text classification tasks, but the following problems remain. On the one hand, after computing the posterior probabilities over all categories, the category with the largest posterior probability is selected as the final classification result, yet in many scenarios the quality of the posterior probability itself provides more reliable information than the final classification result. On the other hand, the classifier of the pre-trained language model degrades when assigning different labels to texts with similar semantics. In response to these two problems, a model combining posterior probability calibration and negative-example supervision, named PosCal-negative, was proposed. In the PosCal-negative model, the difference between the predicted probability and the empirical posterior probability was dynamically penalized in an end-to-end way during training, and texts with different labels were used to realize negative supervision of the encoder, so that different feature vector representations were generated for different categories. Experimental results show that the classification accuracies of the proposed model on the two Chinese maternal and child care text classification datasets MATINF-C-AGE and MATINF-C-TOPIC reach 91.55% and 69.19% respectively, which are 1.13 percentage points and 2.53 percentage points higher than those of the Enhanced Representation through kNowledge IntEgration (ERNIE) model.
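
A sketch of a posterior-calibration penalty (assumptions: following the general PosCal idea, predictions are binned by confidence and the squared gap between mean predicted probability and empirical accuracy in each bin is penalized; the paper's exact penalty and the negative-supervision term are not reproduced):

```python
import torch

def poscal_penalty(probs: torch.Tensor, labels: torch.Tensor, n_bins: int = 10):
    conf, pred = probs.max(dim=1)                  # confidence, predicted class
    correct = (pred == labels).float()
    penalty = probs.new_zeros(())
    for b in range(n_bins):
        in_bin = (conf >= b / n_bins) & (conf < (b + 1) / n_bins)
        if in_bin.any():
            gap = conf[in_bin].mean() - correct[in_bin].mean()
            penalty = penalty + gap ** 2           # predicted vs. empirical
    return penalty

probs = torch.softmax(torch.randn(32, 4), dim=1)
labels = torch.randint(0, 4, (32,))
print(poscal_penalty(probs, labels))               # added to the task loss
```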

Parameter asynchronous updating algorithm based on multi-column convolutional neural network
Xinyu CHEN, Mingzhe LIU, Jun REN, Ying TANG
Journal of Computer Applications    2022, 42 (2): 395-403.   DOI: 10.11772/j.issn.1001-9081.2021020367

To address the problem that existing algorithms optimize deep learning networks synchronously and manually while ignoring the negative information of network learning, which leads to a large number of redundant parameters or even overfitting and thereby affects counting accuracy, a parameter asynchronous updating algorithm based on the Multi-column Convolutional Neural Network (MCNN) was proposed. Firstly, a single frame image was input to the network, and after three columns of convolutions extracted features at different scales, the correlation between every two columns of feature maps was learned through the inter-column mutual information. Then, the parameters of each column were updated asynchronously according to the optimized mutual information and the updated loss function, until the algorithm converged. Finally, dynamic Kalman filtering was used to deeply fuse the density maps output by the columns, and all pixels in the fused density map were summed to obtain the total number of people in the image. Experimental results show that: on the UCSD (University of California San Diego) dataset, the proposed algorithm reduces the Mean Absolute Error (MAE) by 1.1% compared with ic-CNN+McML (iterative crowd counting Convolution Neural Network Multi-column Multi-task Learning), the best MAE performer on that dataset, and reduces the Mean Square Error (MSE) by 4.3% compared with the Contextual Pyramid Convolution Neural Network (CP-CNN), the best MSE performer; on the ShanghaiTech Part_A dataset, it reduces MAE by 1.7% compared with ic-CNN+McML and MSE by 3.2% compared with ACSCP (Adversarial Cross-Scale Consistency Pursuit), the respective best performers; on the ShanghaiTech Part_B dataset, it reduces MAE and MSE by 18.3% and 35.2% respectively compared with ic-CNN+McML, the best performer on both metrics; and on the UCF_CC_50 (University of Central Florida Crowd Counting) dataset, it reduces MAE and MSE by 1.9% and 9.8% respectively compared with ic-CNN+McML. These results show that the proposed algorithm can effectively improve the accuracy and robustness of crowd counting, allows input images of any size or resolution, and adapts to large scale variations of detected targets.
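
A sketch of fusing per-column density maps and reading off the count (an assumption: a simple per-pixel scalar Kalman-style update stands in for the paper's dynamic Kalman fusion, and the column outputs are toy arrays):

```python
import numpy as np

def kalman_fuse(maps: list, meas_var: float = 1.0) -> np.ndarray:
    est, var = maps[0].astype(float), np.full(maps[0].shape, 1.0)
    for z in maps[1:]:
        gain = var / (var + meas_var)       # Kalman gain per pixel
        est = est + gain * (z - est)        # pull estimate toward measurement
        var = (1.0 - gain) * var            # posterior variance shrinks
    return est

cols = [np.random.rand(32, 32) * 0.01 for _ in range(3)]   # 3-column outputs
fused = kalman_fuse(cols)
print("estimated count:", fused.sum())      # total people = sum of density map
```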

Federated-autonomy-based cross-chain scheme for blockchain
Jianhui ZHENG, Feilong LIN, Zhongyu CHEN, Zhaolong HU, Changbing TANG
Journal of Computer Applications    2022, 42 (11): 3444-3457.   DOI: 10.11772/j.issn.1001-9081.2021111922

To deal with the "information and value islands" caused by the lack of interoperation among the increasingly numerous blockchain systems, a federated-autonomy-based cross-chain scheme was proposed. The essential idea of this scheme is to form a relay alliance chain, maintained by the participating blockchain systems themselves in keeping with blockchain philosophy, to solve the data sharing, value circulation and business collaboration problems among different blockchain systems. Firstly, a relay-mode cross-chain structure was proposed to provide interoperation services for heterogeneous blockchain systems. Secondly, the detailed design of the relay alliance chain was presented, together with the rules for the participating blockchain systems and their users. Then, the basic types of cross-chain interactions were summarized, and a process for implementing cross-chain interoperability based on smart contracts was designed. Finally, through multiple experiments, the feasibility of the cross-chain scheme was validated, the performance of the cross-chain system was evaluated, and the security of the whole cross-chain network was analyzed. Simulation results and security analysis show that the channel allocation strategy and block-out right allocation scheme of the proposed scheme are practically feasible; the throughput of the proposed scheme reaches up to 758 TPS (Transactions Per Second) when asset transactions are involved and up to 960 TPS when they are not; and the scheme offers high-level security with both coarse- and fine-grained privacy protection mechanisms. The proposed federated-autonomy-based cross-chain scheme can therefore provide secure and efficient cross-chain services and is suitable for most current cross-chain scenarios.
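
A toy sketch of one relay-mediated interaction pattern (assumptions: plain Python objects stand in for smart contracts on the relay alliance chain, and the lock/credit/finalize steps are a simplified reading of a single cross-chain asset transfer, not the scheme's actual contract logic):

```python
class Chain:
    def __init__(self, name: str):
        self.name, self.balances, self.locked = name, {}, {}

    def lock(self, user: str, amount: int):
        assert self.balances.get(user, 0) >= amount, "insufficient funds"
        self.balances[user] -= amount
        self.locked[user] = self.locked.get(user, 0) + amount

class Relay:
    def transfer(self, src: Chain, dst: Chain, user: str, amount: int):
        src.lock(user, amount)                                   # 1: lock on source
        dst.balances[user] = dst.balances.get(user, 0) + amount  # 2: credit on target
        src.locked[user] -= amount                               # 3: finalize lock

a, b = Chain("A"), Chain("B")
a.balances["alice"] = 100
Relay().transfer(a, b, "alice", 30)
print(a.balances["alice"], b.balances["alice"])                  # 70 30
```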

Automatic tuning of Ceph parameters based on random forest and genetic algorithm
Yu CHEN, Yingchi MAO
Journal of Computer Applications    2020, 40 (2): 347-351.   DOI: 10.11772/j.issn.1001-9081.2019081366

The performance of the Ceph system is significantly affected by its configuration parameters. When optimizing the configuration of a Ceph cluster, the many kinds of configuration parameters and their complex meanings make fast and accurate optimization difficult. To solve this problem, a parameter tuning method based on Random Forest (RF) and Genetic Algorithm (GA) was proposed to adjust the Ceph parameter configuration automatically and thereby optimize Ceph system performance. The RF algorithm was used to construct a performance prediction model for the Ceph system, and the output of the prediction model served as the input of the GA, which then automatically and iteratively optimized the parameter configuration. Simulation results show that, compared with the default configuration, the Ceph file system with optimized parameter configuration improves read and write performance by about 1.4 times, with optimization time much lower than that of black-box parameter tuning methods.
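
A sketch of the RF-surrogate-plus-GA loop (assumptions: a toy objective stands in for measured Ceph benchmarks, a minimal mutation-only GA stands in for the paper's GA, and the three parameters are illustrative, not real Ceph configuration keys):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))               # sampled configs (3 params)
y = -(X - 0.7).sum(axis=1) ** 2 + rng.normal(0, 0.01, 200)  # toy throughput

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

pop = rng.uniform(0, 1, size=(30, 3))              # initial GA population
for _ in range(50):
    fitness = surrogate.predict(pop)               # RF predicts performance
    elite = pop[np.argsort(fitness)[-10:]]         # keep the 10 best configs
    children = elite[rng.integers(0, 10, 20)] + rng.normal(0, 0.05, (20, 3))
    pop = np.vstack([elite, children.clip(0, 1)])  # next generation

best = pop[np.argmax(surrogate.predict(pop))]
print("best config found:", best.round(3))
```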

Privacy preservation algorithm of original data in mobile crowd sensing
JIN Xin, WAN Taochun, LYU Chengmei, WANG Chengtian, CHEN Fulong, ZHAO Chuanxin
Journal of Computer Applications    2020, 40 (11): 3249-3254.   DOI: 10.11772/j.issn.1001-9081.2020020236
With the popularity of mobile smart devices, Mobile Crowd Sensing (MCS) has been widely used while facing serious privacy leaks. Focusing on the issues that existing original-data privacy protection schemes cannot resist collusion attacks and that they reduce the availability of sensed data, a Data Privacy Protection algorithm based on Mobile Node (DPPMN) was proposed. Firstly, the node manager in DPPMN was used to establish an online node list and send it to the source node, and an anonymous path for data transmission was built by the source node through the list. Then, the data was encrypted using the Paillier encryption scheme, and the ciphertext was uploaded to the application server along the path. Finally, the required sensing data was obtained by the server by decrypting the ciphertext. Because the data remained encrypted throughout transmission, an attacker could neither wiretap the content of the sensing data nor trace its source along the path. DPPMN ensures that the application server can access the original data without the nodes' privacy being invaded. Theoretical analysis and experimental results show that DPPMN achieves higher data security at the cost of a modest increase in communication, and can resist collusion attacks without affecting data availability.
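
A sketch of the encrypt-at-source idea with Paillier (assumptions: this uses the third-party python-paillier package `phe`, and the anonymous path is reduced to relay hops that merely forward the ciphertext; DPPMN's path construction is not reproduced):

```python
from phe import paillier

server_pub, server_priv = paillier.generate_paillier_keypair()

reading = 42                                   # source node's sensed value
ciphertext = server_pub.encrypt(reading)       # encrypted at the source

for relay in ["relay-1", "relay-2"]:
    pass   # relays only forward the ciphertext; without the key they learn nothing

recovered = server_priv.decrypt(ciphertext)    # only the application server decrypts
print(recovered)                               # -> 42
```
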
Survivability analysis of interdependent network with incomplete information
JIANG Yuxiang, LYU Chen, YU Hongfang
Journal of Computer Applications    2015, 35 (5): 1224-1229.   DOI: 10.11772/j.issn.1001-9081.2015.05.1224

This paper proposed a method for analyzing the survivability of interdependent networks with incomplete information. Firstly, definitions of structure information and attack information were given, and a novel model of interdependent networks under incomplete attack information was built by treating the acquisition of attack information as unequal-probability sampling, parameterized by information breadth and information accuracy, under the condition that the structure information was known. Secondly, with the help of generating functions and percolation theory, survivability analysis models of interdependent networks under random incomplete information and preferential incomplete information were derived. Finally, the scale-free network was taken as an example for further simulations. The results show that both the information breadth and the information accuracy parameters have tremendous impacts on the percolation threshold of an interdependent network, with information accuracy mattering more than information breadth: information on a small number of high-accuracy nodes provides the same survivability performance as information on a large number of low-accuracy nodes, and knowing a small number of the most important nodes can reduce interdependent network survivability to a large extent. Even under incomplete attack information, an interdependent network has far lower survivability than a single network.
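
A toy percolation experiment on two one-to-one interdependent networks (assumptions: a single cascade round on Erdős-Rényi graphs merely illustrates the fragility being analyzed; the paper derives thresholds analytically via generating functions, and real cascades iterate until stable):

```python
import random
import networkx as nx

def giant_fraction_after_attack(n=2000, k=4.0, attack_frac=0.4, seed=0):
    random.seed(seed)
    A = nx.gnp_random_graph(n, k / n, seed=seed)       # network A
    B = nx.gnp_random_graph(n, k / n, seed=seed + 1)   # network B; B_i depends on A_i
    A.remove_nodes_from(random.sample(range(n), int(attack_frac * n)))
    giant_A = max(nx.connected_components(A), key=len)
    B.remove_nodes_from(set(range(n)) - giant_A)       # failures cascade to B
    if len(B) == 0:
        return 0.0
    return len(max(nx.connected_components(B), key=len)) / n

print(giant_fraction_after_attack())   # surviving giant-component fraction
```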

Data quality assessment of Web article content based on simulated annealing
HAN Jingyu, CHEN Kejia
Journal of Computer Applications    2014, 34 (8): 2311-2316.   DOI: 10.11772/j.issn.1001-9081.2014.08.2311

Because the existing Web quality assessment approaches rely on trained models and users' interactions, they can neither meet the requirement of online response nor capture the semantics of Web content. Therefore, a data Quality Assessment based on Simulated Annealing (QASA) method was proposed. Firstly, the relevant space of the target article was constructed by collecting topic-relevant articles on the Web, and the scheme of open information extraction was employed to extract the Web articles' facts. Secondly, Simulated Annealing (SA) was employed to construct the baselines of the two most important quality dimensions, namely accuracy and completeness. Finally, the quality dimensions were quantified by comparing the facts of the target article with those of the dimension baselines. Experimental results show that QASA finds near-optimal solutions within the time window while achieving comparable or even up to 10 percent higher accuracy than related works. The QASA method can precisely grasp data quality in real time, which caters to the online identification of high-quality Web articles.
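
A generic simulated annealing sketch (assumptions: in QASA the state would be a candidate dimension baseline and `energy` its quality score; a simple one-dimensional numeric objective stands in here):

```python
import math
import random

def simulated_annealing(energy, state, steps=5000, t0=1.0):
    best = cur = state
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9              # linear cooling schedule
        cand = cur + random.uniform(-0.5, 0.5)       # neighbor proposal
        delta = energy(cand) - energy(cur)
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand                               # accept downhill, or uphill by chance
        if energy(cur) < energy(best):
            best = cur
    return best

print(simulated_annealing(lambda x: (x - 3.14) ** 2, state=0.0))  # ~3.14
```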

Positioning and display of intensive point of interest for augmented reality browser
ZHANG Yu, CHEN Jing, WANG Yongtian, ZHOU Qi
Journal of Computer Applications    2014, 34 (5): 1435-1438.   DOI: 10.11772/j.issn.1001-9081.2014.05.1435

When an Augmented Reality (AR) browser runs in a region dense with Points of Interest (POI), problems arise such as slow data loading, icons occluding one another, and low positioning accuracy. To solve these problems, a new calculation method for Global Positioning System (GPS) coordinate mapping was proposed that introduces a distance factor and improves the angle-projection-based coordinate calculation, so that icons remain effectively distinguishable after the phone's posture changes. Secondly, to improve the user experience, a POI label focus display method more in accord with human visual habits was proposed. Meanwhile, to address the low positioning accuracy of GPS, distributed large-scale scene visual recognition technology was adopted to achieve high-precision positioning of the scene.
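
A hedged sketch of the angle-projection idea (assumptions: `poi_screen_x` and `icon_scale` are illustrative constructions, not the paper's formulas; the distance factor is read here as scaling icon size, which is one plausible interpretation):

```python
import math

def poi_screen_x(user_heading_deg, bearing_deg, fov_deg=60, screen_w=1080):
    """Map a POI's bearing, relative to the camera heading, to a screen column."""
    rel = (bearing_deg - user_heading_deg + 180) % 360 - 180   # wrap to [-180, 180)
    return screen_w / 2 + (rel / (fov_deg / 2)) * (screen_w / 2)

def icon_scale(distance_m, d_max=1000):
    return max(0.4, 1 - distance_m / d_max)      # nearer POIs get larger icons

print(poi_screen_x(user_heading_deg=90, bearing_deg=100))  # right of center
print(icon_scale(250))
```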

Dynamic resource management mechanism of debris resources in cloud computing
WANG Xiaoyu, CHENG Lianglun
Journal of Computer Applications    2014, 34 (4): 999-1004.   DOI: 10.11772/j.issn.1001-9081.2014.04.0999

Concerning the problems that the resource specifications and services required by users are not entirely consistent and that whole resources are cut into debris during resource reservation in cloud computing environments, a dynamic resource management strategy considering the reuse of debris resources was put forward. The causes of debris resources were studied to construct debris resource pools, and a metric was defined to measure how many tasks a debris resource could accept. While taking full account of the current task in resource discovery, scheduling and matching, resource partitioning under task scheduling was further discussed, and its influence on the capacity of resource debris to accept subsequent tasks was indicated. Finally, a dynamic resource debris scheduling model was built. Theoretical analysis and CloudSim simulation results prove that the resource management strategy can effectively optimize and reorganize resource debris; it not only improves the capability to accept subsequent tasks but also ensures high resource utilization.
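
A toy sketch of debris-pool reuse (assumptions: a "debris" here is leftover capacity from a reservation, and a first-fit placement stands in for the paper's scheduling model):

```python
class DebrisPool:
    def __init__(self):
        self.fragments = []                  # leftover capacities, e.g. CPU cores

    def release(self, capacity: int):
        self.fragments.append(capacity)

    def try_place(self, demand: int) -> bool:
        for i, cap in enumerate(self.fragments):
            if cap >= demand:                # first fragment that fits the task
                self.fragments.pop(i)
                if cap - demand:
                    self.fragments.append(cap - demand)  # smaller debris remains
                return True
        return False

pool = DebrisPool()
pool.release(6); pool.release(3)
print(pool.try_place(4), pool.fragments)     # True [3, 2]
```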

Reachability analysis for attribute based user-role assignment model
REN Zhiyu, CHEN Xingyuan
Journal of Computer Applications    2014, 34 (2): 428-432.  
It is difficult to express diverse policies with the traditional RBAC (Role-Based Access Control) management model. To solve this problem, an Attribute-Based User-Role Assignment (ABURA) model was proposed, in which attributes are adopted as prerequisite conditions to provide richer semantics for RBAC management policies. In distributed systems, user-role reachability analysis is an important mechanism for verifying the correctness of authorization management policies, so the definition of the user-role reachability analysis problem for the ABURA model was given. According to the characteristics of state transition in the ABURA model, several policy reduction theorems were given. Based on these theorems, a user-role reachability analysis algorithm was proposed and verified through examples.
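
A minimal sketch of attribute-conditioned role assignment (assumptions: the rule format and names here are illustrative only, not the ABURA formalism or its dynamic description logic):

```python
RULES = [
    {"role": "auditor", "requires": {"dept": "finance", "clearance": "high"}},
    {"role": "operator", "requires": {"dept": "ops"}},
]

def assignable_roles(user_attrs: dict) -> set:
    """A role is assignable when the user's attributes satisfy its precondition."""
    return {r["role"] for r in RULES
            if all(user_attrs.get(k) == v for k, v in r["requires"].items())}

print(assignable_roles({"dept": "finance", "clearance": "high"}))  # {'auditor'}
```
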
Single video temporal super-resolution reconstruction algorithm based on maximum a posteriori
GUO Li, LIAO Yu, CHEN Weilong, LIAO Honghua, LI Jun, XIANG Jun
Journal of Computer Applications    2014, 34 (12): 3580-3584.  

Any video camera has a finite temporal resolution, which causes motion blur and motion aliasing in the captured video sequence. Spatial deblurring and temporal interpolation are usually adopted to address this problem, but such methods cannot solve it at its origin. A temporal super-resolution reconstruction method for a single video based on Maximum A Posteriori (MAP) probability estimation was therefore proposed. In this method, the conditional probability model was determined by the reconstruction constraint, and the prior model was established from the temporal self-similarity of the video itself. From these two models the MAP estimate was obtained, i.e., a high-temporal-resolution video was reconstructed from a single low-temporal-resolution video, effectively removing the motion blur caused by overlong exposure times and the motion aliasing caused by an inadequate camera frame rate. Theoretical analysis and experiments demonstrate that the proposed method is effective and efficient.
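
As a reminder of the general form (a sketch under assumptions: the notation S, Φ, λ is generic and not taken from the paper, and the final argmin form presumes a Gaussian reconstruction likelihood):

```latex
\hat{X} = \arg\max_{X} p(X \mid Y)
        = \arg\max_{X} p(Y \mid X)\, p(X)
        = \arg\min_{X} \; \|Y - SX\|_2^2 + \lambda\, \Phi(X)
```

Here Y is the observed low-temporal-resolution video, X the high-temporal-resolution video to be reconstructed, S the temporal sampling/blur operator encoding the reconstruction constraint, and Φ a prior term built from temporal self-similarity.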

Classification of multivariate time series based on singular value decomposition and discriminant locality preserving projection
DONG Hongyu, CHEN Xiaoyun
Journal of Computer Applications    2014, 34 (1): 239-243.   DOI: 10.11772/j.issn.1001-9081.2014.01.0239
The existing multivariate time series classification algorithms require sequences of equal length and neglect category information. To overcome these defects, a multivariate time series classification algorithm based on Singular Value Decomposition (SVD) and discriminant locality preserving projection was proposed. Based on the idea of dimensionality reduction, the first right singular vector obtained by SVD of each sample was used as its feature vector, transforming unequal-length sequences into representations of identical size. Then the feature vectors were projected using discriminant locality preserving projection based on the maximum margin criterion, which makes full use of category information to keep samples of the same class as close as possible and samples of different classes as dispersed as possible. Finally, classification was performed in the low-dimensional subspace using 1-Nearest Neighbor (1NN), Parzen window, Support Vector Machine (SVM) and naive Bayes classifiers. Experiments were carried out on three public multivariate time series datasets: Australian Sign Language (ASL), Japanese Vowels (JV) and Wafer. The results show that the proposed algorithm achieves a lower classification error rate with essentially the same time complexity.
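
A sketch of the SVD feature step (assumptions: each sample is a (length, n_vars) matrix with its own length; the first right singular vector yields a fixed-size n_vars-dimensional feature regardless of sequence length):

```python
import numpy as np

def first_right_singular_vector(sample: np.ndarray) -> np.ndarray:
    _, _, vt = np.linalg.svd(sample, full_matrices=False)
    return vt[0]                      # shape (n_vars,), identical for all samples

rng = np.random.default_rng(0)
samples = [rng.standard_normal((rng.integers(50, 200), 6)) for _ in range(5)]
feats = np.stack([first_right_singular_vector(s) for s in samples])
print(feats.shape)                    # (5, 6): equal-size features from unequal lengths
```
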
Cross-domain authorization management model based on two-tier role mapping
REN Zhiyu, CHEN Xingyuan, SHAN Dibin
Journal of Computer Applications    2013, 33 (09): 2511-2515.   DOI: 10.11772/j.issn.1001-9081.2013.09.2511
To address the single role establishment method of traditional cross-domain authorization management models, as well as problems such as implicit privilege escalation and separation-of-duty conflicts, a new cross-domain authorization management model based on two-tier role mapping was proposed. The two-tier role architecture meets the practical needs of role establishment and management, and on this basis unidirectional role mapping avoids role-mapping rings. By introducing attributes and conditions, dynamic adjustment of permissions was realized. The model was formalized with dynamic description logic, covering its concepts, relations and management actions. Finally, the security of the model was analyzed.
Replica allocation policy of cloud services based on social network properties
LUO Haoyu, CHEN Wanghu
Journal of Computer Applications    2013, 33 (08): 2143-2146.  
To improve the running efficiency of business workflows in cloud environments, a replica allocation policy for cloud services was proposed. Taking advantage of social network analysis, the policy identifies the central service nodes in a service network by mining social network properties, such as connectivity and centrality, within a service community. The physical machine to host the replica of a central service is chosen according to an analysis of the logical sequence between the central service and its predecessor services, together with the usage of the other physical machines. Analysis and simulation show that the policy can improve the running efficiency of data-intensive business workflows in a cloud environment by balancing the load across physical machines and reducing the time wasted on long-distance service interactions.
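
A sketch of picking "central" services with social-network-style measures (an assumption: networkx betweenness centrality stands in for the paper's connectivity and centralization mining; the service names are illustrative):

```python
import networkx as nx

calls = [("auth", "catalog"), ("catalog", "pricing"), ("auth", "pricing"),
         ("pricing", "billing"), ("catalog", "billing")]
G = nx.DiGraph(calls)                        # service interaction network

centrality = nx.betweenness_centrality(G)
central_service = max(centrality, key=centrality.get)
print(central_service, centrality[central_service])   # replicate this service first
```
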
Multifocus image fusion algorithm based on Brenner function and new contourlet transform
MO Jian-wen, MA Ai-hong, SHOU Zhao-yu, CHEN Li-xia
Journal of Computer Applications    2012, 32 (12): 3353-3356.   DOI: 10.3724/SP.J.1087.2012.03353
To eliminate spectral aliasing in the directional subbands of contourlet-domain fusion algorithms and to improve the accuracy of extracting effective coefficients, a multifocus image fusion method based on the Brenner function and the New Contourlet Transform with Sharp Frequency Localization (NCT-SFL) was proposed. Firstly, the new contourlet transform was used to decompose the multifocus images to be fused. Then the traditional arithmetic-mean rule was used to fuse the low-frequency coefficients, and a maximum local energy rule based on the Brenner function was used to fuse the high-frequency coefficients. Finally, the inverse new contourlet transform was applied to obtain the fused image. Experimental results demonstrate that the algorithm effectively extracts the contour information of the images to be fused; while achieving better subjective visual quality, it improves the objective criteria of mutual information and transferred edge information by 99.34% and 77.95% respectively. In addition, the more levels of new contourlet decomposition are used, the more obvious the advantages of the algorithm become.
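
A sketch of the Brenner focus measure (assumptions: the classical definition summing squared intensity differences over a two-pixel horizontal shift; the paper applies it as a local energy inside subbands rather than globally as here):

```python
import numpy as np

def brenner(img: np.ndarray) -> float:
    d = img[:, 2:].astype(float) - img[:, :-2].astype(float)   # two-pixel shift
    return float((d ** 2).sum())      # larger value = sharper / better focused

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (32, 32)).astype(float)   # high-detail toy image
blurred = np.full((32, 32), sharp.mean())
print(brenner(sharp) > brenner(blurred))               # True
```
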
Related Articles | Metrics
Incentive strategy based on Bayesian game model in wireless multi-hop network
XU Li, CHEN Xin-yu, CHEN Zhi-de
Journal of Computer Applications    2011, 31 (12): 3169-3173.  
The growing intelligence of nodes brings more applications to wireless multi-hop networks, and the security problem also becomes more crucial. To prevent the adverse effects of selfish or malicious nodes, this paper proposes a cross-layer mechanism based on game theory. A Bayesian game model is developed for information sharing between the physical layer and the link layer; it is applied to derive and analyze the mutual information among nodes and to form an effective mutual-supervision incentive for node cooperation. The effectiveness of the proposed Bayesian game model is shown through careful case studies and comprehensive computer simulation.
Ship video transmission and protection system based on 3G network
ZHAI Xiao-yu, CHEN Zhao-zheng, CHEN Qi-mei
Journal of Computer Applications    2011, 31 (11): 3161-3164.   DOI: 10.3724/SP.J.1087.2011.03161
The control and treatment of water pollution is an important issue in China. To remedy the lack of remote water monitoring, a ship video transmission and protection system based on the 3G network was proposed. The structure of the system was described, and the characteristics of 3G network video transmission were analyzed. Smooth real-time video transmission was achieved based on the 3G network, a simple reliable user datagram protocol, the H.264 video codec, and Quality of Service (QoS) control. The results show that the system is effective and can be applied to real-time video surveillance of water.
Path editing technique based on motion graphs
DU Yu, CHEN Zhi-hua, XU Jun-jian
Journal of Computer Applications    2011, 31 (10): 2745-2749.   DOI: 10.3724/SP.J.1087.2011.02745
This paper improved the algorithms for generating transitions and searching for paths, and proposed a path editing method based on motion graphs. For transition generation, motion clips suitable for blending are detected automatically by minimizing the average frame distance between blended frames, and an Enhanced Dynamic Time Warping (EDTW) algorithm was proposed to solve this optimization problem. For path search in the motion graph, the area between two curves was used as the objective function, and the incremental search and branch-and-bound strategies were improved. Results show that the proposed algorithm can edit and generate character motions that closely match user-specified paths.
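
The classic dynamic time warping distance that EDTW builds on (an assumption: this is the standard DTW recurrence on 1-D frame features; the paper's enhancements and its frame distance metric are not reproduced):

```python
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])          # frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

print(dtw(np.array([0., 1., 2., 1.]), np.array([0., 1., 1., 2., 1.])))  # 0.0
```
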
Comparative analysis on three ultra wideband chip design methods
Liang ZHAO, Liang JIN, Zhong-heng JI, Jin-yu CHEN, Shuang-ping LIU
Journal of Computer Applications    2011, 31 (07): 1971-1975.   DOI: 10.3724/SP.J.1087.2011.01971
At present, classified by carrier mode, the chip design methods for ultra-wideband systems mainly involve carrier-free ultra wideband, single-carrier ultra wideband and multi-carrier ultra wideband. Although the three chip design methods are relatively mature, some technical difficulties still exist, and none of them has achieved absolute advantage or extensive application. Through research on the related technologies and chip design examples, a comparative analysis of the three ultra-wideband chip design methods was made in terms of system complexity, peak-to-average power ratio, overall system power consumption, resistance to frequency-selective fading, carrier synchronization, symbol synchronization, and spreading gain. The conclusions may provide a useful reference for selecting ultra-wideband chip design methods in different application scenarios.
New method for maximum distribution reduction in inconsistent decision information systems
YU Chengyi, LI Jinjin
Journal of Computer Applications    2011, 31 (06): 1645-1647.   DOI: 10.3724/SP.J.1087.2011.01645
To obtain the maximum distribution attribute reduction rapidly in inconsistent decision information systems, a new decision maximum distribution binary relation was defined after analyzing the existing methods. Judgment theorems for maximum distribution consistent sets were then obtained, from which a new maximum distribution attribute reduction algorithm for inconsistent decision information systems was derived. Moreover, the characterization of core attributes, relatively necessary attributes and unnecessary attributes was discussed based on the decision maximum distribution binary relation. Finally, a case study illustrates the validity of the method.