UMCS tree based hybrid similarity measure of UML class diagram
Zhongchen YUAN, Zongmin MA
Journal of Computer Applications    2024, 44 (3): 883-889.   DOI: 10.11772/j.issn.1001-9081.2022111702

Software reuse retrieves previously developed software artifacts from a repository according to given conditions, and the retrieval is based on similarity measures. The UML (Unified Modeling Language) class diagram is widely applied in software design, and its reuse, as the core of software design reuse, has attracted much attention; therefore, research on the similarity of UML class diagrams was carried out. A UML class diagram contains both semantic and structural content. At present, similarity research on UML class diagrams mainly focuses on semantics; there are also some discussions of structural similarity, but the combination of semantics and structure has not been considered. Therefore, a hybrid similarity measure combining semantics and structure was proposed. Because the UML class diagram is non-formal, it was transformed into a graph model for similarity measurement: the Maximum Common Subgraph List (MCSL) was searched, a Maximum Common Subgraph (MCS) tree was created based on the MCSL, and a hybrid similarity measure method was proposed based on MCS sequences. Semantic matching and structural matching were defined for concept and structure common subgraphs, respectively. Similarity comparison and similarity-based classification quality comparison experiments were carried out, and the experimental results validate the advantages of the proposed method.
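As a rough illustration of the hybrid idea (not the paper's MCS-tree algorithm), a semantic score over class names can be combined with a structural score over relationship edges through a weight alpha; the Jaccard overlaps and the toy diagrams below are assumptions for the sketch:

```python
def semantic_similarity(classes_a, classes_b):
    """Jaccard overlap of class-name sets (a stand-in for concept matching)."""
    a, b = set(classes_a), set(classes_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def structural_similarity(edges_a, edges_b):
    """Jaccard overlap of relationship edges (a stand-in for common-subgraph matching)."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def hybrid_similarity(diagram_a, diagram_b, alpha=0.5):
    """Weighted combination of semantic and structural similarity."""
    (classes_a, edges_a), (classes_b, edges_b) = diagram_a, diagram_b
    return (alpha * semantic_similarity(classes_a, classes_b)
            + (1 - alpha) * structural_similarity(edges_a, edges_b))

# two toy class diagrams: (class names, association edges)
d1 = ({"Order", "Customer", "Item"}, {("Order", "Customer"), ("Order", "Item")})
d2 = ({"Order", "Customer", "Invoice"}, {("Order", "Customer")})
score = hybrid_similarity(d1, d2)
```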

Table and Figures | Reference | Related Articles | Metrics
Survey of research progress on crowdsourcing task assignment for evaluation of workers’ ability
MA Hua, CHEN Yuepeng, TANG Wensheng, LOU Xiaoping, HUANG Zhuoxuan
Journal of Computer Applications    2021, 41 (8): 2232-2241.   DOI: 10.11772/j.issn.1001-9081.2020101629
With the rapid development of internet technology and the sharing economy mode, crowdsourcing, as a new crowd computing mode, has been widely applied and has recently become a research focus. Aiming at the characteristics of crowdsourcing applications and to ensure the completion quality of crowdsourcing tasks, existing research has proposed different crowdsourcing task assignment methods from the perspective of evaluating workers' ability. Firstly, the concept and classification of crowdsourcing were introduced, and the workflow and task characteristics of crowdsourcing platforms were analyzed. On this basis, existing research on the evaluation of workers' ability was summarized. Then, crowdsourcing task assignment methods and the related challenges were reviewed from three different aspects: matching-based, planning-based and role-based collaboration. Finally, directions for future work were put forward.
Blockchain storage expansion model based on Chinese remainder theorem
QING Xinyi, CHEN Yuling, ZHOU Zhengqiang, TU Yuanchao, LI Tao
Journal of Computer Applications    2021, 41 (7): 1977-1982.   DOI: 10.11772/j.issn.1001-9081.2020081256
Blockchain stores transaction data in the form of a distributed ledger, and its nodes hold copies of the current data by storing the hash chain. Due to the particularity of the blockchain structure, the number of blocks increases over time and the storage pressure on nodes grows with it, so storage scalability has become one of the bottlenecks in blockchain development. To address this problem, a blockchain storage expansion model based on the Chinese Remainder Theorem (CRT) was proposed. In the model, the blockchain was divided into high-security blocks and low-security blocks, stored under different strategies: low-security blocks were stored in the form of network-wide preservation (all nodes preserve the data), while high-security blocks were stored in distributed form after being sliced by the CRT-based partitioning algorithm. In addition, the error detection and correction of the Redundant Residue Number System (RRNS) was used to restore data and resist malicious node attacks, so as to improve the stability and integrity of the data. Experimental results and security analysis show that the proposed model not only provides security and fault tolerance but also ensures data integrity, while effectively reducing the storage consumption of nodes and increasing the storage scalability of the blockchain system.
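As a minimal sketch of the CRT slicing idea (with small hypothetical moduli, not the paper's RRNS parameters), a block encoded as an integer can be stored as residues modulo pairwise-coprime moduli, one residue per node, and rebuilt with the Chinese Remainder Theorem:

```python
from math import prod

MODULI = [97, 101, 103, 107]          # pairwise coprime; their product bounds the block value

def slice_block(block_int):
    """Each node stores only one residue of the block."""
    return [block_int % m for m in MODULI]

def rebuild_block(residues):
    """Classic CRT reconstruction from the residues."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse (Python 3.8+)
    return x % M

block = 12_345_678                     # must be smaller than prod(MODULI)
shares = slice_block(block)
restored = rebuild_block(shares)
```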
Dual-channel night vision image restoration method based on deep learning
NIU Kangli, CHEN Yuzhang, SHEN Junfeng, ZENG Zhangfan, PAN Yongcai, WANG Yichong
Journal of Computer Applications    2021, 41 (6): 1775-1784.   DOI: 10.11772/j.issn.1001-9081.2020091411
Due to the low light level and low visibility of night scenes, night vision images suffer from problems such as low signal-to-noise ratio and low imaging quality. To solve these problems, a dual-channel night vision image restoration method based on deep learning was proposed. Firstly, two Convolutional Neural Networks (CNNs) based on Fully connected Multi-scale Residual learning Blocks (FMRB) were used to extract multi-scale features and fuse hierarchical features of infrared night vision images and low-light-level night vision images respectively, so as to obtain a reconstructed infrared image and an enhanced low-light-level image. Then, the two processed images were fused by an adaptive weighted averaging algorithm, which adaptively highlights the effective information of the more salient of the two images according to the scene. Finally, night vision restoration images with high resolution and good visual effect were obtained. The reconstructed infrared night vision image obtained by the FMRB-based deep learning network had average Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) values 3.56 dB and 0.0912 higher, respectively, than the image reconstructed by the Super-Resolution Convolutional Neural Network (SRCNN) algorithm, and the enhanced low-light-level night vision image had average PSNR and SSIM values 6.82 dB and 0.1321 higher, respectively, than the image obtained by Multi-Scale Retinex with Color Restoration (MSRCR). Experimental results show that the proposed method obviously improves the resolution of the reconstructed image, significantly improves the brightness of the enhanced image, and yields a fused image with better visual effect. It can be seen that the proposed algorithm can effectively restore night vision images.
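A minimal sketch of adaptive weighted-average fusion (illustrative only, not the paper's network): here each source image is weighted globally by its contrast (variance), so the more salient image dominates the fused result; the tiny 2x2 "images" are assumptions for the example:

```python
def variance(img):
    """Variance of all pixels in an image given as a list of rows."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def fuse(img_a, img_b):
    """Weighted average, with the weight proportional to each image's variance."""
    va, vb = variance(img_a), variance(img_b)
    wa = va / (va + vb) if va + vb else 0.5
    return [[wa * a + (1 - wa) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

infrared = [[0.9, 0.1], [0.8, 0.2]]    # high-contrast image
lowlight = [[0.5, 0.5], [0.5, 0.5]]    # flat image carries no detail here
fused = fuse(infrared, lowlight)
```

With a completely flat second image, the fused result reduces to the salient one; real methods compute such weights per region rather than per image.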
Reverse hybrid access control scheme based on object attribute matching in cloud computing environment
GE Lina, HU Yugu, ZHANG Guifen, CHEN Yuanyuan
Journal of Computer Applications    2021, 41 (6): 1604-1610.   DOI: 10.11772/j.issn.1001-9081.2020121954
Cloud computing improves the efficiency of the use, analysis and management of big data, but it also brings data contributors worries about the data security and privacy disclosure of cloud services. To solve this problem, a reverse hybrid access control method based on object attribute matching in the cloud computing environment was proposed, combining role-based and attribute-based access control methods under the architecture of next-generation access control. Firstly, the access right level of the shared file was set by the data contributor, and the minimum weight of the access object was reversely specified. Then, the weight of each attribute was directly calculated by the variation coefficient weighting method, and the policy rule matching process of attribute-centered role-based access control was cancelled. Finally, the right value the data contributor set for the data file was used as the threshold for allowing a data visitor access, which not only realizes data access control but also protects private data. Experimental results show that, as the number of visits increases, the method's judgment standards for malicious behaviors and insufficient-right behaviors tend to be stable, its detection ability becomes stronger, and its success rate tends to a relatively stable level. Compared with traditional access control methods, the proposed method achieves higher decision-making efficiency under large numbers of user visits, which verifies its effectiveness and feasibility.
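The variation coefficient weighting step mentioned above can be sketched directly (the attribute values below are made up for illustration): each attribute's weight is its coefficient of variation (standard deviation over mean), normalized across all attributes, so attributes that vary more get more weight:

```python
from statistics import pstdev, mean

def cv_weights(columns):
    """Normalized coefficient-of-variation weight for each attribute column."""
    cvs = [pstdev(col) / mean(col) for col in columns]
    total = sum(cvs)
    return [cv / total for cv in cvs]

# three object attributes observed over four samples (hypothetical values)
attrs = [[2, 2, 2, 2],       # no variation -> contributes nothing
         [1, 2, 3, 4],
         [10, 10, 20, 20]]
w = cv_weights(attrs)
```

A constant attribute receives weight zero, which is exactly why this scheme needs no hand-tuned policy rules.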
Drug-target association prediction algorithm based on graph convolutional network
XU Guobao, CHEN Yuanxiao, WANG Ji
Journal of Computer Applications    2021, 41 (5): 1522-1526.   DOI: 10.11772/j.issn.1001-9081.2020081186
Traditional drug-target association prediction based on biological experiments can hardly meet the demand of pharmaceutical research because of its low efficiency and high cost. To solve this problem, a novel Graph Convolution for Drug-Target Interactions (GCDTI) algorithm was proposed. In GCDTI, graph convolution and auto-encoder technology were combined through semi-supervised learning to construct an encoding layer for integrating node features and a decoding layer for predicting the full-link interaction network. At the same time, graph convolution was used to build a latent factor model that effectively utilizes the high-dimensional attribute information of drugs and targets for end-to-end learning. In this method, the input feature information can be combined with the known interaction network without preprocessing, which shows that the graph convolution layer of the model can effectively fuse the input data and node features. Compared with other advanced methods, GCDTI has the highest prediction accuracy and average Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) (0.9246 ± 0.0048), and strong robustness. Experimental results show that GCDTI, with its end-to-end model architecture, has the potential to be a reliable prediction method when large amounts of drug and target data need to be predicted.
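A single graph-convolution layer in the spirit of GCDTI's encoder can be written down with the generic GCN propagation rule (this is the standard rule, not the authors' code; the adjacency matrix and random features are assumptions): H' = ReLU(Â H W), where Â is the symmetrically normalized adjacency with self-loops:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN layer: symmetric normalization with self-loops, then ReLU."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)   # ReLU

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
h = gcn_layer(adj, rng.standard_normal((3, 4)), rng.standard_normal((4, 2)))
```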
Coevolutionary ant colony optimization algorithm for mixed-variable optimization problem
WEI Mingyan, CHEN Yu, ZHANG Liang
Journal of Computer Applications    2021, 41 (5): 1412-1418.   DOI: 10.11772/j.issn.1001-9081.2020081200
For the Mixed-Variable Optimization Problem (MVOP) containing both continuous and categorical variables, a coevolution strategy was proposed to search the mixed-variable decision space, and a Coevolutionary Ant Colony Optimization Algorithm for MVOP (CACOA MV) was developed. In CACOA MV, continuous and categorical sub-populations were generated by the continuous and discrete Ant Colony Optimization (ACO) strategies respectively, the sub-vectors of continuous and categorical variables were evaluated with the help of cooperators, and the two sub-populations were updated separately to realize an efficient coevolutionary search of the mixed-variable decision space. Furthermore, global exploration of the categorical-variable solution space was improved by introducing a pheromone smoothing mechanism, and a "best+random cooperators" restart strategy oriented to the coevolution framework was proposed to enhance the efficiency of the coevolutionary search. Comparison with the Mixed-Variable Ant Colony Optimization (ACO MV) algorithm and the Success-History-based Adaptive Differential Evolution algorithm with linear population size reduction and Ant Colony Optimization (L-SHADE ACO) demonstrates that CACOA MV performs better local exploitation and thus improves the approximation quality of the final results in the objective space; comparison with the set-based Differential Evolution algorithm with Mixed-Variables (DE MV) shows that CACOA MV better approximates the global optimal solutions in the decision space and has better global exploration ability. In conclusion, CACOA MV, with its coevolutionary strategy, keeps a balance between global exploration and local exploitation, which results in better optimization ability.
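The categorical side of such a search can be sketched as pheromone-proportional choice plus smoothing (the floor constant and pheromone values are illustrative assumptions, not the paper's update rules): smoothing lifts weak trails so rarely chosen categories are never starved, which is what preserves global exploration:

```python
import random

def smooth(pheromone, floor=0.1):
    """Pheromone smoothing: lift every trail to at least `floor` of the maximum."""
    top = max(pheromone.values())
    return {cat: max(tau, floor * top) for cat, tau in pheromone.items()}

def pick_category(pheromone, rng=random):
    """An ant samples a category with probability proportional to pheromone."""
    cats = list(pheromone)
    return rng.choices(cats, weights=[pheromone[c] for c in cats])[0]

tau = smooth({"steel": 5.0, "alloy": 2.0, "carbon": 0.01})
choice = pick_category(tau)
```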
Database star-join optimization for multicore CPU and GPU platforms
LIU Zhuan, HAN Ruichen, ZHANG Yansong, CHEN Yueguo, ZHANG Yu
Journal of Computer Applications    2021, 41 (3): 611-617.   DOI: 10.11772/j.issn.1001-9081.2020091430
Focusing on the high execution cost of the star-join between the fact table and multiple dimension tables in On-Line Analytical Processing (OLAP), a star-join optimization technique was proposed for modern multicore CPU (Central Processing Unit) and GPU (Graphics Processing Unit) platforms. Firstly, a vectorized star-join algorithm based on a vector index was proposed for CPU and GPU platforms to address the intermediate materialization cost of star-joins on these platforms. Secondly, a star-join operation based on vector granularity was presented, dividing vectors according to the CPU cache size and GPU shared memory size, so as to optimize the vector index materialization cost in the star-join. Finally, a star-join algorithm based on a compressed vector index was proposed, compressing the fixed-length vector index into a variable-length binary vector index, so as to improve the storage access efficiency of the vector index in cache under low selection rates. Experimental results show that the vectorized star-join algorithm achieves more than 40% performance improvement over traditional row-wise or column-wise star-join algorithms on the multicore CPU platform, and more than 15% improvement over conventional star-join algorithms on the GPU platform; compared with mainstream main-memory and GPU databases, the optimized star-join algorithm achieves 130% performance improvement over the best main-memory database, Hyper, and 80% improvement over the best GPU database, OmniSci. It can be seen that the vector-index-based star-join optimization technique effectively improves multi-table join performance: compared with traditional optimization techniques, vector-index-based vectorized processing improves data storage access efficiency in small caches, and the compressed vector further improves vector index access efficiency in cache.
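The vector-index idea can be sketched in miniature (pure Python, with made-up tables; real implementations work on columnar arrays, not dicts): each dimension-table predicate produces a mapping keyed by the fact table's foreign keys, and a fact row survives the star-join only when every foreign key hits its vector:

```python
def dim_vector(dim_table, predicate):
    """Map dimension key -> group value for rows that pass the predicate."""
    return {key: group for key, (group, attr) in dim_table.items() if predicate(attr)}

fact = [(1, 10, 5.0), (2, 20, 7.0), (1, 20, 3.0)]   # (cust_key, prod_key, revenue)
customers = {1: ("Asia", "retail"), 2: ("EU", "retail")}
products = {10: ("widget", 2020), 20: ("gadget", 2021)}

cust_vec = dim_vector(customers, lambda segment: segment == "retail")
prod_vec = dim_vector(products, lambda year: year >= 2021)

# probe phase: no intermediate join result is materialized, only vector lookups
result = [(cust_vec[c], prod_vec[p], rev)
          for c, p, rev in fact if c in cust_vec and p in prod_vec]
```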
Secure energy transaction scheme based on alliance blockchain
LONG Yangyang, CHEN Yuling, XIN Yang, DOU Hui
Journal of Computer Applications    2020, 40 (6): 1668-1673.   DOI: 10.11772/j.issn.1001-9081.2019101784
Blockchain technology is widely used in vehicular networks, the energy internet, smart grids, etc., but attackers can combine social engineering and data mining algorithms to obtain users' private data recorded in the blockchain; in particular, in a microgrid, the data generated by games between neighboring energy nodes are even more likely to leak user privacy. To solve this problem, a secure energy transaction model with a one-to-many energy node account matching mechanism was proposed on the basis of the alliance (consortium) blockchain. The model mainly uses the generation of new accounts to prevent attackers from using data mining algorithms to obtain private data such as energy node accounts, geographical locations and energy usage from transaction records. The simulation experiments analyze privacy protection performance, transaction efficiency and security in terms of the characteristics of the alliance chain, the number of new accounts generated by energy nodes, and the change in transaction verification time. The experimental results show that the proposed model requires less time in the transaction initiation and verification stages, has higher security, and can hide the transaction trend between adjacent users. The proposed scheme is well suited to energy internet transaction scenarios.
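The one-to-many account idea can be illustrated with a purely hypothetical sketch (this is not the paper's protocol; the derivation scheme and sizes are assumptions): a node derives a fresh pseudonymous account for each transaction, so the ledger never records a stable identifier for data mining to correlate:

```python
import hashlib
import secrets

def fresh_account(master_secret: bytes) -> str:
    """Derive a new pseudonymous account from a per-transaction nonce."""
    nonce = secrets.token_bytes(16)                  # new randomness each trade
    return hashlib.sha256(master_secret + nonce).hexdigest()[:40]

master = secrets.token_bytes(32)
a1, a2 = fresh_account(master), fresh_account(master)
# two trades by the same node appear under unlinkable accounts
```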
Improving machine simultaneous interpretation by punctuation recovery
CHEN Yuna, SHI Xiaodong
Journal of Computer Applications    2020, 40 (4): 972-977.   DOI: 10.11772/j.issn.1001-9081.2019101711
In the Machine Simultaneous Interpretation (MSI) pipeline system, semantic incompleteness occurs when the Automatic Speech Recognition (ASR) outputs are fed directly into Neural Machine Translation (NMT). To address this problem, a model based on Bidirectional Encoder Representations from Transformers (BERT) and Focal Loss was proposed. Firstly, several segments generated by the ASR system were cached and concatenated into a string. Then a BERT-based sequence labeling model was used to recover the punctuation of the string, with Focal Loss as the loss function during model training to alleviate the class imbalance between the many unpunctuated samples and the few punctuated ones. Finally, the punctuation-restored string was input into the NMT. Experimental results on English-German and Chinese-English translation show that, in terms of translation quality, the MSI using the proposed punctuation recovery model improves by 8.19 BLEU and 4.24 BLEU respectively over the MSI that feeds ASR outputs directly into NMT, and by 2.28 BLEU and 3.66 BLEU respectively over the MSI using a punctuation recovery model based on a bi-directional recurrent neural network with attention mechanism. Therefore, the proposed model can be effectively applied to MSI.
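Focal Loss, used above against the punctuated/unpunctuated imbalance, follows the standard formulation FL(p_t) = -(1 - p_t)^gamma * log(p_t); the gamma value and example probabilities below are illustrative, not the paper's settings:

```python
import math

def focal_loss(p_true, gamma=2.0):
    """Loss for one token, given the predicted probability of its true label."""
    return -((1.0 - p_true) ** gamma) * math.log(p_true)

easy = focal_loss(0.95)   # confident, abundant "no punctuation" token
hard = focal_loss(0.30)   # rare punctuation token the model struggles with
# the (1 - p_t)^gamma factor down-weights the easy example far more,
# so training focuses on the rare punctuated positions
```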
Gaussian mixture clustering algorithm combining elbow method and expectation-maximization for power system customer segmentation
CHEN Yu, TIAN Bojin, PENG Yunzhu, LIAO Yong
Journal of Computer Applications    2020, 40 (11): 3217-3223.   DOI: 10.11772/j.issn.1001-9081.2020050672
In order to further improve the user experience of power system customers, and aiming at the problems of poor optimization ability, lack of compactness and difficulty in determining the optimal number of clusters, a Gaussian mixture clustering algorithm combining the elbow method and Expectation-Maximization (EM) was proposed, which can mine the potential information in large amounts of customer data. Good clustering results were obtained by EM iterations, and the number of customer clusters, which the traditional Gaussian mixture clustering algorithm must be given in advance, was reasonably determined by the elbow method. The case study shows that, compared with the hierarchical clustering algorithm and the K-Means algorithm, the proposed algorithm increases both the FM (Fowlkes-Mallows) and AR (Adjusted-Rand) indexes by more than 10%, and decreases the Compactness Index (CI) and Degree of Separation (DS) by less than 15% and 25% respectively. It can be seen that the performance of the algorithm is greatly improved.
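The elbow step can be sketched on toy 1-D data, with a simple k-means standing in for the Gaussian mixture (the data and the plain-SSE criterion are assumptions for illustration): the within-cluster error is computed for each candidate k, and the elbow is the k after which the error curve flattens:

```python
def kmeans_sse(data, k, iters=20):
    """Within-cluster sum of squared errors for k clusters on sorted 1-D data."""
    # spread the initial centers across the sorted data range
    centers = [data[i * (len(data) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            clusters[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return sum(min((x - c) ** 2 for c in centers) for x in data)

data = sorted([1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 9.0, 8.8])   # three clumps
sse = {k: kmeans_sse(data, k) for k in range(1, 6)}
# the SSE drop flattens sharply after k = 3, so the elbow picks 3 clusters
```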
Underwater image super-resolution reconstruction method based on deep learning
CHEN Longbiao, CHEN Yuzhang, WANG Xiaochen, ZOU Peng, HU Xuemin
Journal of Computer Applications    2019, 39 (9): 2738-2743.   DOI: 10.11772/j.issn.1001-9081.2019020353

Due to the characteristics of water itself and the absorption and scattering of light by suspended particles in the water, underwater images suffer from a series of problems, such as low Signal-to-Noise Ratio (SNR) and low resolution. Most traditional processing methods, including image enhancement, restoration and reconstruction, rely on a degradation model and face ill-posed algorithm problems. In order to further improve the effect and efficiency of underwater image restoration, an improved image super-resolution reconstruction method based on a deep convolutional neural network was proposed. An Improved Dense Block structure (IDB) was introduced into the network, which can effectively solve the gradient vanishing problem of deep convolutional neural networks while improving the training speed. The network was trained on registered pairs of underwater images before and after degradation, and learned the mapping between the low-resolution and high-resolution images. The experimental results show that, on a self-built underwater image training set, the underwater image reconstructed by the deep convolutional neural network with IDB has its Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) improved by 0.38 dB and 0.013 respectively compared with SRCNN (an image Super-Resolution method using a Convolutional Neural Network), so the proposed method can effectively improve the reconstruction quality of underwater images.

Single image super resolution algorithm based on structural self-similarity and deformation block feature
XIANG Wen, ZHANG Ling, CHEN Yunhua, JI Qiumin
Journal of Computer Applications    2019, 39 (1): 275-280.   DOI: 10.11772/j.issn.1001-9081.2018061230
To solve the problems of insufficient sample resources and poor noise immunity in single-image Super Resolution (SR) restoration, a single-image super-resolution algorithm based on structural self-similarity and deformed block features was proposed. Firstly, a scale model was constructed to expand the search space as much as possible and overcome the lack of training samples for single-image super-resolution. Secondly, the limited internal dictionary was enlarged by geometric deformation of the sample blocks. Finally, in order to improve the anti-noise performance of the reconstructed picture, a group sparse learning dictionary was used to reconstruct the image. The experimental results show that, compared with excellent algorithms such as Bicubic, the Sparse coding Super Resolution (ScSR) algorithm and the Super-Resolution Convolutional Neural Network (SRCNN) algorithm, the proposed algorithm obtains super-resolution images with better subjective visual effect and higher objective evaluation scores, and its Peak Signal-to-Noise Ratio (PSNR) is increased by about 0.35 dB on average. In addition, geometric deformation expands the scale of the dictionary and increases the accuracy of the search, and the time consumption of the algorithm is reduced by about 80 s on average.
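The geometric-deformation step can be sketched with the standard flips and rotations (an illustration of the idea, not necessarily the paper's exact set of deformations): each sample block is expanded into its eight dihedral variants, enlarging the internal dictionary without needing new images:

```python
def deformations(block):
    """All 8 rotations/reflections of a square block given as a list of rows."""
    variants = []
    b = block
    for _ in range(4):
        variants.append(b)
        variants.append([row[::-1] for row in b])   # horizontal mirror
        b = [list(row) for row in zip(*b[::-1])]    # rotate 90 degrees clockwise
    return variants

patch = [[1, 2], [3, 4]]
augmented = deformations(patch)   # one patch becomes eight dictionary entries
```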
Supernetwork link prediction method based on spatio-temporal relation in location-based social network
HU Min, CHEN Yuanhui, HUANG Hongcheng
Journal of Computer Applications    2018, 38 (6): 1682-1690.   DOI: 10.11772/j.issn.1001-9081.2017122904
The accuracy of link prediction in existing methods for Location-Based Social Networks (LBSNs) is low because social, location and time factors are not integrated effectively. To solve this problem, a supernetwork link prediction method based on spatio-temporal relations in LBSN was proposed. Firstly, in view of the heterogeneity of the network and the spatio-temporal relations among users in LBSN, the network was divided into a four-layer "spatio-temporal-user-location-category" supernetwork to reduce the coupling between the influencing factors. Secondly, considering the impact of edge weights on the network, the edge weights of the subnets were defined and quantified by mining user influence, implicit association relationships, user preferences and node degree information, and a four-layer weighted supernetwork model was built. Finally, on the basis of the weighted supernetwork model, the super edge and the weighted super-edge structure were defined to mine the multivariate relationships among users for prediction. The experimental results show that, compared with link prediction methods based on homogeneity and on heterogeneity, the proposed method achieves a certain increase in accuracy, recall, F1-measure (F1) and Area Under the receiver operating characteristic Curve (AUC), with its AUC index 4.69% higher than that of the heterogeneity-based link prediction method.
Single image super resolution combining with structural self-similarity and convolution networks
XIANG Wen, ZHANG Ling, CHEN Yunhua, JI Qiumin
Journal of Computer Applications    2018, 38 (3): 854-858.   DOI: 10.11772/j.issn.1001-9081.2017081920
Aiming at the ill-posed inverse problem of single-image Super Resolution (SR) restoration, a single-image super-resolution algorithm combining structural self-similarity and convolutional networks was proposed. Firstly, the structural self-similarity of the samples to be reconstructed was obtained by scale decomposition and combined with external database samples as training samples, which alleviates the problem of overly dispersed samples. Secondly, the samples were input into a Convolutional Neural Network (CNN) for training and learning, obtaining prior knowledge for single-image super-resolution. Then, the optimal dictionary was used to reconstruct the image under a nonlocal constraint. Finally, an iterative back-projection algorithm was used to further improve the super-resolution effect. The experimental results show that, compared with excellent algorithms such as Bicubic, the K-SVD (K-Singular Value Decomposition) algorithm and the Super-Resolution Convolutional Neural Network (SRCNN) algorithm, the proposed algorithm obtains super-resolution reconstructions with clearer edges.
Data augmentation method based on conditional generative adversarial net model
CHEN Wenbing, GUAN Zhengxiong, CHEN Yunjie
Journal of Computer Applications    2018, 38 (11): 3305-3311.   DOI: 10.11772/j.issn.1001-9081.2018051008
A deep Convolutional Neural Network (CNN) is trained on large-scale labelled datasets; after training, the model can achieve a high recognition rate or good classification effect. However, training CNN models on smaller-scale datasets usually leads to overfitting. In order to solve this problem, a novel data augmentation method called GMM-CGAN, which integrates a Gaussian Mixture Model (GMM) with a Conditional Generative Adversarial Net (CGAN), was proposed. Firstly, the sample number was increased by randomly sliding sampling around the core region. Secondly, the random noise vector was assumed to follow the distribution of the GMM and was used as the initial input to the CGAN generator, with the image label as the CGAN condition, to train the parameters of the CGAN and GMM models. Finally, the trained CGAN was used to generate a new dataset matching the real distribution of the samples. The original dataset contained 386 items in 12 classes; after applying GMM-CGAN, the new dataset contained 38600 items in total. The experimental results show that, compared with CNN training datasets augmented by affine transformation or by CGAN alone, the proposed method achieves an average classification accuracy of 89.1%, an improvement of 18.2% and 14.1% respectively.
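The GMM noise step can be sketched on its own (the mixture parameters below are illustrative, not values fitted as in GMM-CGAN): instead of drawing the generator's input from a single Gaussian, one draw first picks a mixture component and then samples from it:

```python
import random

def sample_gmm_noise(dim, weights, means, stds, rng=random):
    """Draw one noise vector from a 1-D Gaussian mixture, coordinate-wise."""
    component = rng.choices(range(len(weights)), weights=weights)[0]
    return [rng.gauss(means[component], stds[component]) for _ in range(dim)]

# two-component mixture (hypothetical parameters, not fitted values)
z = sample_gmm_noise(dim=100, weights=[0.3, 0.7], means=[-1.0, 1.0], stds=[0.5, 0.5])
# z would then be fed to the conditional generator together with a class label
```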
Compression method for trajectory data based on prediction model
CHEN Yu, JIANG Wei, ZHOU Ji'en
Journal of Computer Applications    2018, 38 (1): 171-175.   DOI: 10.11772/j.issn.1001-9081.2017061411
A Compression method for Trajectory data based on a Prediction Model (CTPM) was proposed to improve the compression efficiency of massive trajectory data in road network environments. The temporal and spatial information of the trajectory data were compressed separately, so that the compressed trajectory data is lossless in the spatial dimension and has bounded error in the time dimension. In terms of space, the Prediction by Partial Matching (PPM) algorithm was used to predict the possible position at the next moment from the partial trajectory already travelled, and the correctly predicted road segments were deleted to reduce storage cost. In terms of time, a statistical traffic speed model for different time intervals was constructed according to the periodicity of traffic conditions to predict the time required for a moving object to enter the next section, and compression was performed by deleting the road section information whose predicted time error was smaller than a given threshold. In comparison experiments with the Paralleled Road-network-based trajectory comprESSion (PRESS) algorithm, the average compression ratio of CTPM was increased by 43% in space and 1.5% in time, and the temporal error was decreased by 9.5%. The experimental results show that the proposed algorithm can effectively reduce compression time and compression error while improving the compression ratio.
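The spatial compression idea can be sketched with an order-1 Markov predictor standing in for PPM (a simplification; the trajectories are made up): segments the model predicts correctly from their predecessor are dropped, and the decompressor regenerates them from the same model, so the round trip is lossless:

```python
from collections import Counter, defaultdict

def train(trajectories):
    """Most likely next segment after each segment, from historical trajectories."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for prev, nxt in zip(traj, traj[1:]):
            counts[prev][nxt] += 1
    return {seg: c.most_common(1)[0][0] for seg, c in counts.items()}

def compress(traj, model):
    literals = [(0, traj[0])]
    for i in range(1, len(traj)):
        if model.get(traj[i - 1]) != traj[i]:   # mispredicted: must be kept
            literals.append((i, traj[i]))
    return len(traj), literals

def decompress(length, literals, model):
    lit = dict(literals)
    traj = []
    for i in range(length):
        traj.append(lit[i] if i in lit else model[traj[-1]])
    return traj

history = [["a", "b", "c", "d"], ["a", "b", "c", "e"], ["a", "b", "c", "d"]]
model = train(history)
packed = compress(["a", "b", "c", "e"], model)   # only "a" and the surprise "e" remain
```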
Hierarchical representation model of APT attack
TAN Ren, YIN Xiaochuan, LIAN Zhe, CHEN Yuxin
Journal of Computer Applications    2017, 37 (9): 2551-2556.   DOI: 10.11772/j.issn.1001-9081.2017.09.2551
Aiming at the problem that the attack phases in the attack chain model are too coarse to indicate the means of attack, an Advanced Persistent Threat (APT) Hierarchical Attack Representation Model (APT-HARM) was proposed. By summarizing the analysis of a large number of published APT event reports and referring to the APT attack chain model and HARM, the APT attack was divided into two formally defined layers: an upper-layer attack chain and lower-layer attack trees. Firstly, the APT attack was divided into four stages, reconnaissance, infiltration, operation and exfiltration, and the characteristics of each stage were studied. Then, the attack methods in each stage were studied and composed into an attack tree according to their logical relationships. An APT attack is carried out in stages according to the attack chain, and the attack in each stage is performed in accordance with its attack tree. The case study shows that the model has the advantages of reasonable granularity classification and better attack description compared with the attack chain model. APT-HARM formally defines the APT attack, which provides an approach to the prediction and prevention of APT attacks.
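The two-layer structure can be sketched as a data model (the attack techniques listed are hypothetical examples, not taken from the paper): the upper layer is the four-stage chain, each stage holds a lower-layer attack tree of alternative (OR) or combined (AND) techniques, and an APT succeeds only if every stage's tree is satisfiable:

```python
ATTACK_CHAIN = ["reconnaissance", "infiltration", "operation", "exfiltration"]

# attack tree per stage: ("OR", ...) any child suffices; ("AND", ...) all required
ATTACK_TREES = {
    "reconnaissance": ("OR", "port scanning", "social engineering"),
    "infiltration":   ("OR", "spear phishing", ("AND", "watering hole", "zero-day exploit")),
    "operation":      ("AND", "privilege escalation", "lateral movement"),
    "exfiltration":   ("OR", "DNS tunnelling", "encrypted upload"),
}

def achievable(node, observed):
    """Is this (sub)tree satisfied by the set of observed techniques?"""
    if isinstance(node, str):
        return node in observed
    op, *children = node
    results = [achievable(child, observed) for child in children]
    return all(results) if op == "AND" else any(results)

def attack_succeeds(observed):
    """The APT succeeds only if every stage of the chain is achievable."""
    return all(achievable(ATTACK_TREES[stage], observed) for stage in ATTACK_CHAIN)

steps = {"port scanning", "spear phishing", "privilege escalation",
         "lateral movement", "DNS tunnelling"}
```

Blocking any single stage, e.g. lateral movement, breaks the whole chain, which is the defensive insight the hierarchical model is meant to expose.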
Link prediction method for complex network based on closeness between nodes
DING Dazhao, CHEN Yunjie, JIN Yanqing, LIU Shuxin
Journal of Computer Applications    2017, 37 (8): 2129-2132.   DOI: 10.11772/j.issn.1001-9081.2017.08.2129
Many link prediction methods focus only on the standard metric AUC (Area Under the receiver operating characteristic Curve), ignoring the precision metric and the closeness between common neighbors and endpoints under different topological structures. To solve these problems, a link prediction method based on closeness between nodes was proposed. In order to describe the similarity between endpoints more accurately, the closeness of common neighbors was designed by considering the local topological information around the common neighbors, and was adjusted for different networks through a parameter. Empirical studies on six real networks show that, compared with similarity indicators such as Common Neighbor (CN), Resource Allocation (RA), Adamic-Adar (AA), Local Path (LP) and Katz, the proposed index improves the prediction accuracy.
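A closeness-weighted common-neighbour score can be sketched as follows (the general idea with an assumed weighting, not the paper's exact index): each common neighbour contributes more when its own neighbourhood overlaps more with the endpoint pair, tuned by a parameter beta:

```python
def neighbours(edges, node):
    """Neighbour set of a node in an undirected edge set."""
    return {v for u, v in edges if u == node} | {u for u, v in edges if v == node}

def closeness_cn(edges, x, y, beta=1.0):
    """Common-neighbour score where closer neighbours count more (RA-style 1/deg factor)."""
    nx, ny = neighbours(edges, x), neighbours(edges, y)
    score = 0.0
    for z in nx & ny:
        nz = neighbours(edges, z)
        overlap = len(nz & (nx | ny))        # how tightly z ties into the pair's vicinity
        score += (1.0 + overlap) ** beta / len(nz)
    return score

edges = {(1, 2), (1, 3), (2, 3), (3, 4), (2, 4)}
score = closeness_cn(edges, 1, 4)
```

Setting beta per network is the tuning knob the abstract refers to: a larger beta rewards densely embedded common neighbours more strongly.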
Reference | Related Articles | Metrics
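A minimal sketch of a closeness-weighted common-neighbor index in the spirit described above: each common neighbor contributes a weight that decays with its degree, and a parameter `alpha` tunes that decay per network. The exact closeness definition and the role of `alpha` are assumptions standing in for the paper's formulation (with `alpha = 1` this reduces to the Resource Allocation index).

```python
# Toy undirected graph as an adjacency dict; node names are illustrative.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c", "e"},
    "c": {"a", "b", "d", "e"},
    "d": {"a", "c"},
    "e": {"b", "c"},
}

def closeness_score(x, y, alpha=1.0):
    """Score a candidate link (x, y): sum over common neighbors z of a
    weight inversely proportional to deg(z)**alpha. Larger alpha penalizes
    hub-like common neighbors more strongly."""
    common = graph[x] & graph[y]
    return sum(1.0 / (len(graph[z]) ** alpha) for z in common)
```

For example, `closeness_score("d", "e")` has one common neighbor `c` with degree 4, giving a score of 0.25 at `alpha = 1`.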
Space target sphere grid index based on orbit restraint and region query application
LYU Liang, SHI Qunshan, LAN Chaozhen, CHEN Yu, LIU Yiping, LIANG Jing
Journal of Computer Applications    2017, 37 (7): 2095-2099.   DOI: 10.11772/j.issn.1001-9081.2017.07.2095
Abstract513)      PDF (768KB)(376)       Save
Since the efficiency of retrieving and querying massive, high-speed space targets remains low, a construction method of a sphere grid index for space targets based on orbit restraint was proposed. The method exploits the fact that the orbit of a space target is relatively stable in the Earth inertial coordinate system, achieving a stable index for high-speed moving objects by maintaining the list of space targets that pass through each sphere subdivision grid. On this basis, a region query application scheme was proposed. Firstly, the query time period was discretized according to a particular step value. Secondly, the boundary coordinates of the query region in inertial space were calculated and the intersected grids were determined. Then the space targets in those grids were extracted, and the spatial relationship between the targets and the region was calculated and evaluated. Finally, the whole time period was queried recursively and the space target transit query analysis was accomplished. In the simulation experiment, the time consumed by the traditional method of calculating target by target has a linear positive correlation with the number of targets but no relevance to the region size, costing 0.09 ms per target on average. By contrast, the time of the proposed method decreases linearly with the decline of region size. When the number of region grids is less than 2750, its time efficiency is higher than that of the comparison method, while maintaining fairly good accuracy. The experimental results show that the proposed method can effectively improve the efficiency of queries in actual region applications.
Reference | Related Articles | Metrics
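The core indexing idea above can be sketched as a grid-to-target inverted list: each cell of a latitude/longitude subdivision stores the identifiers of targets whose (stable) orbit tracks pass through it, so a region query only inspects targets registered in the cells the region overlaps. The 10-degree cell size, the flat lat/lon grid (rather than a true sphere subdivision) and all identifiers are simplifying assumptions.

```python
from collections import defaultdict

class SphereGridIndex:
    def __init__(self, cell_deg=10):
        self.cell_deg = cell_deg
        self.cells = defaultdict(set)   # (lat_cell, lon_cell) -> target ids

    def _cell(self, lat, lon):
        return (int(lat // self.cell_deg), int(lon // self.cell_deg))

    def register(self, target_id, track):
        """Register a target by the (lat, lon) samples of its orbit track;
        the orbit's stability means this is done once, not per time step."""
        for lat, lon in track:
            self.cells[self._cell(lat, lon)].add(target_id)

    def query_region(self, lat_range, lon_range):
        """Return candidate targets registered in cells overlapping the region."""
        la0, la1 = lat_range
        lo0, lo1 = lon_range
        hits = set()
        for (la, lo), ids in self.cells.items():
            if (la0 // self.cell_deg <= la <= la1 // self.cell_deg
                    and lo0 // self.cell_deg <= lo <= lo1 // self.cell_deg):
                hits |= ids
        return hits
```

A full transit analysis would then check the precise spatial relationship only for the returned candidates, at each discretized time step.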
Salient object detection and extraction method based on reciprocal function and spectral residual
CHEN Wenbing, JU Hu, CHEN Yunjie
Journal of Computer Applications    2017, 37 (7): 2071-2077.   DOI: 10.11772/j.issn.1001-9081.2017.07.2071
Abstract554)      PDF (1167KB)(381)       Save
To solve the problems of the "center-surround" salient object detection and extraction method, such as incompletely detected or extracted objects, unsmooth boundaries, and the redundancy caused by down-sampling a 9-level pyramid, a salient object detection method based on Reciprocal Function and Spectral Residual (RFSR) was proposed. Firstly, the difference between the intensity image and its Gaussian low-pass counterpart was used to substitute for the normalization of the intensity image in the "center-surround" model, and the number of Gaussian pyramid levels was further reduced to 6 to avoid redundancy. Secondly, a reciprocal function filter was used instead of a Gabor filter to extract local orientation information. Thirdly, the spectral residual algorithm was used to extract spectral features. Finally, the three extracted features were combined to generate the final saliency map. The experimental results on two widely used benchmark datasets show that, compared with the "center-surround" and spectral residual models, the proposed method significantly improves precision, recall and F-measure, and lays a foundation for subsequent image analysis, object recognition, visual-attention-based image retrieval and so on.
Reference | Related Articles | Metrics
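Of the three feature channels above, the spectral residual step is a well-known construction (Hou and Zhang style) and can be sketched as: take the log amplitude spectrum, subtract its local average, and transform back with the original phase. The 3x3 averaging kernel size is an assumption; the paper's smoothing parameters may differ.

```python
import numpy as np

def spectral_residual_saliency(image):
    """image: 2-D float array. Returns a non-negative saliency map of the
    same shape, following the spectral residual recipe."""
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Local average of the log amplitude via a 3x3 box filter (edge-padded).
    pad = np.pad(log_amp, 1, mode="edge")
    h, w = log_amp.shape
    avg = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # Spectral residual: log amplitude minus its smoothed version.
    residual = log_amp - avg
    # Reconstruct with the original phase; squared magnitude gives saliency.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal
```

In the full RFSR pipeline this map would be one of the three features fused into the final saliency map.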
Cascading failure model in interdependent network considering global distribution of loads
DONG Chongjie, CHEN Yuqiang
Journal of Computer Applications    2017, 37 (7): 1861-1865.   DOI: 10.11772/j.issn.1001-9081.2017.07.1861
Abstract602)      PDF (933KB)(470)       Save
Concerning interdependent networks coupled from different networks, a new model for cascading failures was proposed that considers the combined effects of traffic load and interdependent edges. In the new model, the roles of interdependent edges and connected edges were considered separately, and a variant-load global distribution principle based on the shortest path length was adopted for load allocation: the additional load assigned to a normal node was inversely proportional to its distance from the failed node. Finally, cascading failures of the interdependent networks coupled from the IEEE118 standard grid network, a small-world network and a random network were simulated. The simulation results show that when the effect of global load distribution is smaller, the failure resistance is stronger and the contribution of traffic load to cascading failures is smaller, while the IEEE118 coupled network and the small-world coupled network exhibit more failure steps when the tolerance coefficient is small. Meanwhile, when the effect of global load distribution is larger, the network cannot maintain its integrity, and the number of failure steps increases approximately monotonically with the tolerance coefficient.
Reference | Related Articles | Metrics
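The load-allocation rule stated above (extra load inversely proportional to shortest-path distance from the failed node) can be written down directly. Here the distances are supplied as input; in the model they come from the network topology, and the exact normalization is an assumption.

```python
def redistribute(failed_load, distances):
    """Distribute a failed node's load globally over normal nodes.
    distances: {node: shortest-path distance from the failed node}.
    Each node's share is proportional to 1/distance, normalized so the
    shares sum to the failed load."""
    weights = {n: 1.0 / d for n, d in distances.items()}
    total = sum(weights.values())
    return {n: failed_load * w / total for n, w in weights.items()}
```

For example, with a failed load of 6.0 and two normal nodes at distances 1 and 2, the nearer node receives 4.0 and the farther one 2.0.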
New words detection method for microblog text based on integrating of rules and statistics
ZHOU Shuangshuang, XU Jin'an, CHEN Yufeng, ZHANG Yujie
Journal of Computer Applications    2017, 37 (4): 1044-1050.   DOI: 10.11772/j.issn.1001-9081.2017.04.1044
Abstract491)      PDF (1117KB)(650)       Save
The formation rules of microblog new words are extremely complex and highly dispersed, and the results extracted by the traditional C/NC-value method have several problems, including relatively low accuracy of the boundaries of identified new words and low detection accuracy for low-frequency new words. To solve these problems, a method integrating heuristic rules, a modified C/NC-value method and a Conditional Random Field (CRF) model was proposed. On one hand, the heuristic rules included abstracted classification information and inductive rules focusing on the components of microblog new words; the rules were manually summarized by using Part Of Speech (POS), character types and symbols through observing a large number of microblog documents. On the other hand, to improve the boundary accuracy of identified new words and the detection accuracy of low-frequency new words, the traditional C/NC-value method was modified by merging word frequency, branch entropy, mutual information and other statistical features to reconstruct the objective function. Finally, the CRF model was used to train and detect new words. The experimental results show that the F value of the proposed method in new word detection is effectively improved.
Reference | Related Articles | Metrics
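Two of the statistical features merged into the modified objective function above have standard definitions and can be sketched directly: the (pointwise) mutual information of a candidate's parts, and the branch entropy of the characters adjacent to a candidate (high entropy suggests a free-standing word). How the paper weights and combines them with word frequency into the new objective is not shown here.

```python
import math

def mutual_information(p_xy, p_x, p_y):
    """Pointwise mutual information of a two-part candidate:
    log2 of how much more often the parts co-occur than chance predicts."""
    return math.log2(p_xy / (p_x * p_y))

def branch_entropy(context_counts):
    """Entropy of the distribution of characters adjacent to a candidate.
    context_counts: {adjacent_char: count}. Higher entropy means the
    candidate combines freely with many contexts, i.e. a likely word boundary."""
    total = sum(context_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in context_counts.values())
```

A candidate that always appears inside one fixed longer string has branch entropy 0 and would be penalized relative to one with diverse contexts.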
Application case of big data analysis-robustness of a trading model
QIN Xiongpai, CHEN Yueguo, WANG Bangguo
Journal of Computer Applications    2017, 37 (3): 660-667.   DOI: 10.11772/j.issn.1001-9081.2017.03.660
Abstract586)      PDF (1417KB)(532)       Save
The robustness of a trading model means that its profitability curve is less volatile and does not fluctuate dramatically. To improve the robustness of an algorithmic trading model based on Support Vector Regression (SVR), several strategies for deriving a unified trading model, together with a portfolio diversification method, were proposed. Firstly, the SVR-based algorithmic trading model was introduced. Then, based on commonly used indicators, a number of derived indicators were constructed for short-term forecasting of stock prices; these indicators characterized the typical patterns of recent price movements, overbought/oversold market conditions, and divergences of market conditions. The indicators were normalized and used to train the trading model so that the model could generalize to different stocks. Finally, a portfolio diversification method was designed. In a portfolio, strong correlation between stocks can lead to great investment losses, because the prices of strongly correlated stocks move in the same direction: if the trading model mispredicts the price trend, stop losses are triggered and these stocks cause losses in a mutually accelerating manner. Therefore, stocks were clustered into categories according to their similarity, defined as the similarity of the recent profit curves produced by the trading models on those stocks, and a diversified portfolio was formed by selecting stocks from different clusters. Experiments were carried out on the data of 900 stocks over 10 years. The experimental results show that the trading model can obtain an excess profit rate over time deposits, with an annualized profit rate of 8.06%. The maximum drawdown of the trading model was reduced from 13.23% to 5.32%, and the Sharpe ratio increased from 81.23% to 88.79%. The volatility of the profit margin curve of the trading model decreased, which means that the robustness of the trading model was improved.
Reference | Related Articles | Metrics
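The diversification step above can be sketched as: measure similarity between stocks as the correlation of their recent profit curves, and keep only stocks not too similar to any already selected. The greedy threshold scheme below is an assumption standing in for whatever clustering procedure the paper actually uses; the 0.9 threshold is illustrative.

```python
def correlation(xs, ys):
    """Pearson correlation of two equally long profit curves."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def diversify(profit_curves, threshold=0.9):
    """Greedily build a portfolio: admit a stock only if its profit curve is
    not highly correlated with that of any already-selected stock."""
    selected = []
    for name, curve in profit_curves.items():
        if all(correlation(curve, profit_curves[s]) < threshold
               for s in selected):
            selected.append(name)
    return selected
```

Two stocks whose profit curves are perfectly correlated would never both enter the portfolio, which is exactly the mutual-loss scenario the abstract describes.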
Real-time crowd counting method from video stream based on GPU
JI Lina, CHEN Qingkui, CHEN Yuanjing, ZHAO Deyu, FANG Yuling, ZHAO Yongtao
Journal of Computer Applications    2017, 37 (1): 145-152.   DOI: 10.11772/j.issn.1001-9081.2017.01.0145
Abstract769)      PDF (1340KB)(672)       Save
Focusing on the low counting accuracy caused by serious occlusions and abrupt illumination variations, a new real-time statistical method for video crowd counting based on Gaussian Mixture Model (GMM) and Scale-Invariant Feature Transform (SIFT) features was proposed. Firstly, the moving crowd was detected by using a GMM-based motion segmentation method, and then the Gray Level Co-occurrence Matrix (GLCM) and morphological operations were applied to remove small moving background objects and the dense noise in the non-crowd foreground. Considering the high time complexity of the GMM algorithm, a novel, more efficient parallel model was proposed. Secondly, SIFT feature points served as the basis of the crowd statistics, and the execution time was reduced by extracting features from binary images. Finally, a novel statistical analysis method based on crowd features and crowd number was proposed: datasets with different levels of crowd number were chosen for training to obtain the average number of features per person, and pedestrians at different densities were counted in the experiments. The algorithm was accelerated by using multi-stream processors on a Graphics Processing Unit (GPU), and the efficient scheduling of tasks on Compute Unified Device Architecture (CUDA) streams in practical applications was analyzed. The experimental results indicate that the speed is increased by 31.5% compared with the single-stream implementation, and by 71.8% compared with the CPU.
Reference | Related Articles | Metrics
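The final statistical step above reduces to a simple ratio: divide the number of SIFT feature points found in the crowd foreground by the average number of feature points per person learned from training data at that density level. The per-level averages below are invented numbers for illustration; the paper learns them from its training sets.

```python
# Illustrative averages: denser crowds yield fewer visible features per person
# because of occlusion (values are assumptions, not from the paper).
AVG_FEATURES_PER_PERSON = {"low": 40.0, "medium": 28.0, "high": 18.0}

def estimate_count(num_feature_points, density_level):
    """Estimate the crowd size from the foreground SIFT feature count."""
    avg = AVG_FEATURES_PER_PERSON[density_level]
    return round(num_feature_points / avg)
```

For example, 280 foreground feature points at the "medium" density level give an estimate of 10 pedestrians.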
Electricity customers arrears alert based on parallel classification algorithm
CHEN Yuzhong, GUO Songrong, CHEN Hong, LI Wanhua, GUO Kun, HUANG Qicheng
Journal of Computer Applications    2016, 36 (6): 1757-1761.   DOI: 10.11772/j.issn.1001-9081.2016.06.1757
Abstract633)      PDF (755KB)(610)       Save
The "consumption first, payment afterward" operation model of power supply companies entails a risk of arrears due to the poor credit of some power consumers. It is therefore necessary to analyze the tremendous amount of user data quickly and in real time before arrears occur, and to provide a list of potential customers in arrears. To solve this problem, a method for arrears alerts for power consumers based on a parallel classification algorithm was proposed. Firstly, arrears behaviors were modeled by the parallel Random Forest (RF) classification algorithm based on the Spark framework. Secondly, based on previous consumption behaviors and payment records, the future characteristics of consumption and payment behavior were predicted by time series. Finally, the list of potential high-risk customers in arrears was obtained by using the obtained model to classify users. The proposed algorithm was compared with the parallel Support Vector Machine (SVM) algorithm and the Online Sequential Extreme Learning Machine (OSELM) algorithm. The experimental results demonstrate that the prediction accuracy of the proposed algorithm is better than that of the compared algorithms. Therefore, the proposed method offers a convenient way for electricity billing management to remind customers to pay their electricity bills ahead of time, ensuring timely recovery of electricity fees, and it also benefits the arrears risk management of power supply companies.
Reference | Related Articles | Metrics
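The classification step above can be caricatured as a forest majority vote: each tree flags a customer as high-risk or not from behavioral features, and the forest takes the majority. The feature names, thresholds and one-split "stump" trees below are invented for illustration; the paper trains a real random forest in parallel on Spark.

```python
def stump(feature, threshold):
    """A one-split decision 'tree': vote high-risk if feature > threshold."""
    return lambda customer: customer[feature] > threshold

# A toy 'forest' of three stumps over hypothetical behavioral features.
forest = [
    stump("overdue_days", 30),
    stump("unpaid_amount", 500.0),
    stump("late_payments_last_year", 2),
]

def high_risk(customer):
    """Majority vote over the forest, as a random forest classifier does."""
    votes = sum(1 for tree in forest if tree(customer))
    return votes * 2 > len(forest)
```

A customer tripping two of the three stumps lands on the alert list; one tripping none does not.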
Random service system model based on UPnP service discovery
HU Zhikun, SONG Jingye, CHEN Yuan
Journal of Computer Applications    2016, 36 (3): 591-595.   DOI: 10.11772/j.issn.1001-9081.2016.03.591
Abstract536)      PDF (718KB)(553)       Save
In the automatic discovery process of smart home network devices, serious jams occur because devices randomly and independently choose the delay time for sending service response messages. In order to solve this problem, taking the Universal Plug and Play (UPnP) service discovery protocol as an example and considering the differing demands of reliability and real-time performance, a random service system model based on UPnP service discovery was proposed. A profit-loss function including a system response index and a waiting index was designed. Finally, the relation between the optimal buffer queue length and the profit-loss coefficient was obtained. Through comparison of arrival time, departure time, waiting time and travel time under different buffer queue lengths, the necessity of designing the profit-loss function and the feasibility of the proposed model are verified.
Reference | Related Articles | Metrics
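The profit-loss idea above can be sketched as: for each candidate buffer queue length K, combine a response (reliability) term and a waiting (real-time) term through loss coefficients, then choose the K that minimizes the total. Both index functions below are placeholder forms chosen only to make the trade-off visible; the paper derives the real ones from its queueing model.

```python
def profit_loss(K, c_wait=1.0, c_drop=5.0, arrival=0.8):
    """Toy profit-loss function for buffer length K: a drop term that shrinks
    as the buffer grows, plus a waiting term that grows with it."""
    drop_prob = arrival ** K          # stand-in for the loss probability
    mean_wait = K / 2.0               # stand-in for the mean queueing delay
    return c_drop * drop_prob + c_wait * mean_wait

def best_queue_length(max_K=20, **kw):
    """Pick the buffer queue length minimizing the profit-loss function."""
    return min(range(1, max_K + 1), key=lambda K: profit_loss(K, **kw))
```

With the default coefficients the optimum sits at a small finite K: a longer buffer stops paying for itself once the residual drop probability is cheaper than the added waiting cost, which is exactly the reliability/real-time trade-off the abstract describes.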
Finger-vein image segmentation based on level set
WANG Baosheng, CHEN Yufei, ZHAO Weidong, ZHOU Qiangqiang
Journal of Computer Applications    2016, 36 (2): 526-530.   DOI: 10.11772/j.issn.1001-9081.2016.02.0526
Abstract509)      PDF (752KB)(852)       Save
To deal with the weak edges, intensity inhomogeneity and low contrast that may appear in finger-vein images, a new segmentation algorithm based on an even-symmetric Gabor filter and the level set method was proposed. Firstly, the even-symmetric Gabor filter was used to filter the finger-vein image along 8 different orientations; secondly, the finger-vein image was reconstructed from the 8 filtered results to obtain a high-quality image with significantly improved gray contrast between target and background; finally, a level set algorithm combining local and global features was applied to segment the finger-vein image. Compared with the level set algorithm proposed by Li et al. (LI C, HUANG R, DING Z, et al. A variational level set approach to segmentation and bias correction of images with intensity inhomogeneity. MICCAI'08: Proceedings of the 11th International Conference on Medical Image Computing and Computer-Assisted Intervention, Part II. Berlin: Springer, 2008: 1083-1091) and the Legendre Level Set (L2S) algorithm, the proposed algorithm decreased the percentage of Area Difference (AD) by 1.116% and 0.370% respectively, and reduced the Relative Difference Degree (RDD) by 1.661% and 1.379% respectively. The experimental results show that the proposed algorithm achieves better results than traditional level set image segmentation algorithms that consider only local information or only global information.
Reference | Related Articles | Metrics
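The reconstruction step above (fusing 8 orientation-filtered results into one enhanced image) can be sketched with one plausible fusion rule: take, per pixel, the strongest response across orientations. Whether the paper uses a max, a sum, or another combination is not specified here, so this is an assumption.

```python
def fuse_responses(responses):
    """Fuse Gabor filter responses from several orientations into one image
    by taking the per-pixel maximum.
    responses: list of equally sized 2-D lists (one per orientation)."""
    rows, cols = len(responses[0]), len(responses[0][0])
    return [[max(r[i][j] for r in responses) for j in range(cols)]
            for i in range(rows)]
```

A vein segment aligned with any one filter orientation thus survives into the fused image, which is what raises the gray contrast between vein and background before the level set step.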
Review of modeling, statistical properties analysis and routing strategies optimization in Internet of vehicles
CHEN Yufeng, XIANG Zhengtao, DONG Yabo, XIA Ming
Journal of Computer Applications    2015, 35 (12): 3321-3324.   DOI: 10.11772/j.issn.1001-9081.2015.12.3321
Abstract740)      PDF (842KB)(986)       Save
Using complex network theory and methods to model communication networks, analyze the statistical properties of their evolving processes, and guide the optimization of routing strategies has been a hot research area. The research status of modeling, statistical property analysis, routing strategy optimization and routing protocol design in the Internet of Vehicles (IoV) was analyzed, and three improvements were proposed. The first is using a directed weighted graph to describe the topology of the IoV. The second is analyzing the key statistical properties influencing the transmission capacity of the IoV, based on the differences in statistical properties between the IoV and mobile Ad Hoc networks. The third is optimizing multi-path routing strategies based on Multiple-Input Multiple-Output (MIMO) technologies through complex network methods, i.e., utilizing multiple channels and multiple paths for transmission.
Reference | Related Articles | Metrics
Analysis of public emotion evolution based on probabilistic latent semantic analysis
LIN Jianghao, ZHOU Yongmei, YANG Aimin, CHEN Yuhong, CHEN Xiaofan
Journal of Computer Applications    2015, 35 (10): 2747-2751.   DOI: 10.11772/j.issn.1001-9081.2015.10.2747
Abstract388)      PDF (900KB)(537)       Save
Concerning the problem of topic mining and the corresponding public emotion analysis, an analytical method for public emotion evolution was proposed based on the Probabilistic Latent Semantic Analysis (PLSA) model. In order to discover the evolution patterns of topics, the method starts by extracting subtopics over the time series using the PLSA model. Then, emotion feature vectors, represented by emotion units and their weights matched with the topic context, were established via parsing and an ontology lexicon. Next, the strength of public emotion was computed at a fine-grained dimension, together with the holistic public emotion of the issue. In this way, the method mines deeply into the evolution patterns of public emotion, which were finally quantified and visualized. The advantage of the method lies in introducing grammatical rules and an ontology lexicon into the extraction of emotion units, conducted at a fine-grained dimension to improve the accuracy of extraction. The experimental results show that the method performs well in the evolution analysis of topics and public emotion over time series, which proves its positive effect.
Reference | Related Articles | Metrics
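The fine-grained strength computation above can be sketched as a weighted aggregation: each extracted emotion unit carries a polarity weight from the ontology lexicon, possibly scaled by a grammatical modifier, and a time slice's public-emotion strength averages them. The lexicon entries and modifier weights below are invented examples.

```python
# Hypothetical ontology lexicon: word -> polarity weight in [-1, 1].
lexicon = {"angry": -0.8, "pleased": 0.6, "worried": -0.4}

def emotion_strength(units):
    """units: list of (word, modifier_weight) pairs extracted from parsed
    text, where modifier_weight scales for intensifiers or negation.
    Returns the average signed strength over recognized units."""
    scored = [lexicon[w] * m for w, m in units if w in lexicon]
    return sum(scored) / len(scored) if scored else 0.0
```

Computed per time slice, such values give the quantified emotion-evolution curve the abstract says is finally visualized.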