Aspect-based sentiment analysis model integrating match-LSTM network and grammatical distance
LIU Hui, MA Xiang, ZHANG Linyu, HE Rujin
Journal of Computer Applications    2023, 43 (1): 45-50.   DOI: 10.11772/j.issn.1001-9081.2021111874
Aiming at the problems that aspect words are easily mismatched with irrelevant context and that grammatical-level features are lacking in current Aspect-Based Sentiment Analysis (ABSA), an improved ABSA model integrating match-Long Short-Term Memory (mLSTM) network and grammatical distance, namely mLSTM-GCN, was proposed. Firstly, the correlation between the aspect word and the context was calculated word by word, and the obtained attention weights were fused with the context representation as the input of the mLSTM, so that a context representation more strongly correlated with the aspect word was obtained. Then, grammatical distance was introduced to obtain context more grammatically related to the aspect word, so that more contextual features guided the modeling of the aspect word, and the aspect representation was obtained through an aspect masking layer. Finally, to exchange information, location weights, context representations and aspect representations were combined, thereby obtaining the features for sentiment classification. Experimental results on the Twitter, REST14 and LAP14 datasets show that compared with Aspect-Specific Graph Convolutional Network (ASGCN), mLSTM-GCN improves accuracy by 1.32, 2.50 and 1.63 percentage points respectively, and improves Macro-F1 score by 2.52, 2.19 and 1.64 percentage points respectively. Therefore, mLSTM-GCN can effectively reduce the probability of mismatching aspect words with irrelevant context and improve classification performance.
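As an illustration of the word-by-word attention step described above, the following minimal NumPy sketch fuses a context sequence with an attention-weighted view of the aspect; the shapes, names and fusion-by-concatenation choice are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed shapes/names) of word-by-word aspect-context attention.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_context_with_aspect(context, aspect):
    """context: (n, d) word representations; aspect: (d,) aspect vector.
    Returns an (n, 2d) sequence: each context word concatenated with its
    attention-weighted view of the aspect, as input to a match-LSTM."""
    scores = context @ aspect             # word-by-word correlation
    weights = softmax(scores)             # attention over context words
    attended = np.outer(weights, aspect)  # per-word weighted aspect copy
    return np.concatenate([context, attended], axis=1)

context = np.random.randn(8, 16)   # 8 context words, 16-dim embeddings
aspect = np.random.randn(16)
print(fuse_context_with_aspect(context, aspect).shape)  # (8, 32)
```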
Aspect-based sentiment analysis model embedding different neighborhood representations
LIU Huan, DOU Quansheng
Journal of Computer Applications    2023, 43 (1): 37-44.   DOI: 10.11772/j.issn.1001-9081.2021122099
The Aspect-Based Sentiment Analysis (ABSA) task aims to identify the sentiment polarity of a specific aspect. However, for natural sentences with uncertain structure, existing models lack short-distance constraints on the context of the aspect word and easily ignore syntactic relations, making it difficult to determine the sentiment polarity of the aspect accurately. Aiming at these problems, an ABSA model Embedding Different Neighborhood Representations (EDNR) was proposed. In this model, on the basis of the word order information of sentences, a nearest-neighbor strategy combined with Convolutional Neural Network (CNN) was used to obtain aspect neighborhood information, so as to reduce the influence of distant irrelevant information on the model. At the same time, the grammatical information of sentences was introduced to increase the dependency between words. After fusing the two features, mask and attention mechanisms were used to focus on the aspect information and reduce the interference of useless information with the sentiment analysis model. Besides, in order to evaluate the degree of influence of contextual and grammatical information on sentiment polarity, an information evaluation coefficient was proposed. Experiments were carried out on five public datasets, and the results show that compared with the sentiment analysis model AGCN-MAX (Aggregated Graph Convolutional Network-MAX), the EDNR model improves accuracy and F1 score on the 14Lap dataset by 2.47 and 2.83 percentage points respectively. It can be seen that the EDNR model can effectively capture sentiment features and improve classification performance.
Knowledge graph driven recommendation model of graph neural network
LIU Huan, LI Xiaoge, HU Likun, HU Feixiong, WANG Penghua
Journal of Computer Applications    2021, 41 (7): 1865-1870.   DOI: 10.11772/j.issn.1001-9081.2020081254
The abundant structure and association information contained in a Knowledge Graph (KG) can not only alleviate data sparseness and cold-start problems in recommender systems, but also make personalized recommendations more accurate. Therefore, a knowledge graph driven end-to-end graph neural network recommendation model, named KGLN, was proposed. First, a single-layer neural network framework was used to fuse the features of individual nodes in the graph, and influence factors were added to change the aggregation weights of different neighbor entities. Second, the single layer was extended to multiple layers by iteration, so that entities were able to obtain abundant multi-order associated entity information. Finally, the obtained entity and user features were integrated to generate the prediction score for recommendation. The effects of different aggregation methods and influence factors on the recommendation results were analyzed. Experimental results show that on the MovieLen-1M and Book-Crossing datasets, compared with benchmark methods such as Factorization Machine Library (LibFM), Deep Factorization Machine (DeepFM), Wide&Deep and RippleNet, KGLN obtains an AUC (Area Under the ROC (Receiver Operating Characteristic) curve) improvement of 0.3%-5.9% and 1.1%-8.2% respectively.
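The following sketch illustrates one aggregation layer of the kind described: neighbor entities are weighted by user-conditioned influence factors before being fused with the target entity. All names, shapes and the softmax/tanh choices are assumptions for illustration, not the published KGLN code.

```python
# One neighbor-aggregation layer with user-conditioned influence factors.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate(entity, neighbors, relations, user):
    """entity: (d,); neighbors, relations: (k, d); user: (d,)."""
    influence = softmax(relations @ user)   # per-neighbor influence factor
    neighborhood = influence @ neighbors    # weighted sum, shape (d,)
    return np.tanh(entity + neighborhood)   # fuse and apply nonlinearity

d, k = 16, 5
out = aggregate(np.random.randn(d), np.random.randn(k, d),
                np.random.randn(k, d), np.random.randn(d))
print(out.shape)  # (16,)
```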
Subgraph isomorphism matching algorithm based on neighbor information aggregation
XU Zhoubo, LI Zhen, LIU Huadong, LI Ping
Journal of Computer Applications    2021, 41 (1): 43-47.   DOI: 10.11772/j.issn.1001-9081.2020060935
Graph matching is widely used in practice, among which subgraph isomorphism matching is a research hotspot with important scientific significance and practical value. Most existing subgraph isomorphism algorithms build constraints on neighbor relationships alone, ignoring the local neighborhood information of nodes. To solve this problem, a subgraph isomorphism matching algorithm based on neighbor information aggregation was proposed. Firstly, the aggregated local neighborhood information of the nodes was obtained by feeding the graph attributes and structure into an improved graph convolutional neural network for feature-vector representation learning. Then, the efficiency of the algorithm was improved by optimizing the matching order according to characteristics such as the labels and degrees of the graph. Finally, a Constraint Satisfaction Problem (CSP) model of subgraph isomorphism was established by combining the obtained feature vectors and the optimized matching order with the search algorithm, and the model was solved by a CSP backtracking algorithm. Experimental results show that the proposed algorithm significantly improves the solving efficiency of subgraph isomorphism compared with the traditional tree search algorithm and constraint solving algorithm.
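A common instantiation of the matching-order heuristic described above orders query nodes by rarest label in the target graph first, then by descending degree; the sketch below assumes this form, which may differ in detail from the paper's ordering.

```python
# Matching-order heuristic: rare labels first, high degree first.
from collections import Counter

def matching_order(query_nodes, labels, degrees, target_label_freq):
    """query_nodes: ids; labels/degrees: dicts; target_label_freq: Counter."""
    return sorted(query_nodes,
                  key=lambda v: (target_label_freq[labels[v]], -degrees[v]))

labels = {0: "A", 1: "B", 2: "A", 3: "C"}
degrees = {0: 2, 1: 3, 2: 1, 3: 2}
freq = Counter({"A": 50, "B": 5, "C": 20})
print(matching_order([0, 1, 2, 3], labels, degrees, freq))  # [1, 3, 0, 2]
```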
Protein complex identification algorithm based on XGBoost and topological structural information
XU Zhoubo, YANG Jian, LIU Huadong, HUANG Wenwen
Journal of Computer Applications    2020, 40 (5): 1510-1514.   DOI: 10.11772/j.issn.1001-9081.2019111992
The large amount of uncertainty in Protein-Protein Interaction (PPI) networks and the incompleteness of the known protein complex data make methods that search only by topological structural information, or that perform supervised learning only on the known complex data, inaccurate. To solve this problem, a search method called XGBoost model for Predicting protein complexes (XGBP) was proposed. Firstly, features were extracted based on the topological structural information of complexes. Then, the extracted features were used to train an XGBoost model. Finally, a mapping relationship between features and protein complexes was constructed by combining topological structural information with supervised learning, in order to improve the accuracy of protein complex prediction. Comparisons were performed with eight popular unsupervised algorithms: Markov CLustering (MCL), Clustering based on Maximal Clique (CMC), Core-Attachment based method (COACH), Fast Hierarchical clustering algorithm for functional modules discovery in Protein Interaction (HC-PIN), Cluster with Overlapping Neighborhood Expansion (ClusterONE), Molecular COmplex DEtection (MCODE), Detecting Complex based on Uncertain graph model (DCU) and Weighted COACH (WCOACH); and with three supervised methods: Bayesian Network (BN), Support Vector Machine (SVM) and Regression Model (RM). The results show that the proposed algorithm performs well in terms of precision, sensitivity and F-measure.
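The pipeline lends itself to a short sketch: candidate subgraphs are reduced to simple topological descriptors and scored by an XGBoost classifier. The feature set and synthetic data below are illustrative assumptions, not the paper's exact descriptors.

```python
# Topological features + XGBoost scoring for candidate complexes (sketch).
import numpy as np
import networkx as nx
from xgboost import XGBClassifier

def topo_features(g: nx.Graph):
    n, m = g.number_of_nodes(), g.number_of_edges()
    density = nx.density(g) if n > 1 else 0.0
    avg_deg = 2 * m / n if n else 0.0
    return [n, m, density, avg_deg, nx.average_clustering(g)]

# candidates: (subgraph, is_known_complex) pairs drawn from a PPI network
candidates = [(nx.gnp_random_graph(8, p), int(p > 0.5))
              for p in np.random.uniform(0.1, 0.9, 200)]
X = np.array([topo_features(g) for g, _ in candidates])
y = np.array([label for _, label in candidates])

model = XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # complex-membership scores
```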
Service composition partitioning method based on process partitioning technology
LIU Huijian, LIU Junsong, WANG Jiawei, XUE Gang
Journal of Computer Applications    2020, 40 (3): 799-805.   DOI: 10.11772/j.issn.1001-9081.2019071290
In order to solve the bottleneck of the central controller in centralized service composition, a method of constructing decentralized service composition based on process partitioning was proposed. Firstly, the business process was modeled by a typed directed graph. Then, a grouping algorithm based on the graph transformation method was proposed, and the process model was partitioned according to the grouping algorithm. Finally, the decentralized service composition was constructed according to the partitioning results. Test results show that compared with the single-thread algorithm, the grouping algorithm reduces the time consumption for model 1 by 21.4%, and the constructed decentralized service composition has lower response time and higher throughput. The experimental results show that the proposed method can effectively partition the business processes in service composition, and the constructed decentralized service composition can improve service performance.
Fine-grained vehicle recognition under multiple angles based on multi-scale bilinear convolutional neural network
LIU Hu, ZHOU Ye, YUAN Jiabin
Journal of Computer Applications    2019, 39 (8): 2402-2407.   DOI: 10.11772/j.issn.1001-9081.2019010133
In view of the difficulty of accurately recognizing vehicle types under scale changes and deformations at multiple angles, a fine-grained vehicle recognition model based on Multi-Scale Bilinear Convolutional Neural Network (MS-B-CNN) was proposed. Firstly, B-CNN was improved into MS-B-CNN to realize multi-scale fusion of the features of different convolutional layers and improve feature expression ability. In addition, a joint learning strategy based on center loss and Softmax loss was adopted: on the basis of Softmax loss, a class center was maintained for each category of the training set in the feature space, and as new samples were added during training, the distances between samples and their class centers were constrained, improving recognition ability in multi-angle situations. Experimental results show that the proposed vehicle recognition model achieves 93.63% accuracy on the CompCars dataset, verifying the accuracy and robustness of the model under multiple angles.
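The joint objective can be sketched compactly: softmax cross-entropy plus a center loss pulling each feature toward its class center. The NumPy sketch below uses assumed shapes and a hypothetical weighting factor lam; the MS-B-CNN feature extractor itself is not reproduced.

```python
# Joint loss: softmax cross-entropy + center loss (illustrative shapes).
import numpy as np

def joint_loss(features, logits, labels, centers, lam=0.5):
    """features: (b, d); logits: (b, c); labels: (b,); centers: (c, d)."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    center = 0.5 * ((features - centers[labels]) ** 2).sum(axis=1).mean()
    return ce + lam * center

b, d, c = 4, 8, 3
loss = joint_loss(np.random.randn(b, d), np.random.randn(b, c),
                  np.random.randint(0, c, b), np.random.randn(c, d))
print(loss)
```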
Task requirement-oriented user selection incentive mechanism in mobile crowdsensing
CHEN Xiuhua, LIU Hui, XIONG Jinbo, MA Rong
Journal of Computer Applications    2019, 39 (8): 2310-2317.   DOI: 10.11772/j.issn.1001-9081.2019010226
Most existing incentive mechanisms in mobile crowdsensing are platform-centered or user-centered designs that do not consider sensing task requirements in multiple dimensions; therefore, they can neither select users effectively based on sensing tasks nor maximize and diversify the fulfillment of task requirements. To solve these problems, a Task Requirement-oriented user selection Incentive Mechanism (TRIM), a task-centered design, was proposed. Firstly, sensing tasks were published by the sensing platform according to task requirements: based on multiple dimensions such as task type, spatio-temporal characteristics and sensing reward, task vectors were constructed to optimally express the task requirements. To realize personalized sensing participation, user vectors were constructed by the sensing users based on user preferences, individual contribution values and expected rewards. Then, by introducing the Privacy-preserving Cosine Similarity Computation protocol (PCSC), the similarities between sensing tasks and sensing users were calculated, and the sensing platform selected users based on the similarity comparison results to obtain the target user set, thereby better meeting the sensing task requirements while protecting user privacy. Finally, simulation results indicate that in the matching process between sensing tasks and sensing users, TRIM avoids the exponentially increasing computational time overhead of an incentive mechanism using the Paillier encryption protocol and improves computational efficiency; compared with an incentive mechanism using direct PCSC, TRIM guarantees the privacy of the sensing users and achieves 98% matching accuracy.
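The matching step reduces to a cosine-similarity comparison between task and user vectors; the sketch below shows it in the clear (the PCSC privacy protocol is not reproduced), with the vector layout and threshold as assumptions.

```python
# Task-user matching by cosine similarity (privacy layer omitted).
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_users(task_vec, user_vecs, threshold=0.8):
    sims = [cosine(task_vec, u) for u in user_vecs]
    return [i for i, s in enumerate(sims) if s >= threshold], sims

# assumed layout: [type one-hot..., time, location..., reward]
task = np.array([1.0, 0.0, 0.6, 0.3, 0.8])
users = np.random.rand(10, 5)
chosen, sims = select_users(task, users)
print(chosen)
```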
Industrial X-ray image enhancement algorithm based on gradient field
ZHOU Chong, LIU Huan, ZHAO Ailing, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2019, 39 (10): 3088-3092.   DOI: 10.11772/j.issn.1001-9081.2019040694
In X-ray detection of components with uneven thickness, low contrast or uneven contrast and low illumination often occur, which makes it difficult to observe and analyze some details of the components in the obtained images. To solve this problem, an X-ray image enhancement algorithm based on the gradient field was proposed. The algorithm takes gradient field enhancement as its core and is divided into two steps. Firstly, an algorithm based on logarithmic transformation was proposed to compress the gray range of the image, remove redundant gray information and improve image contrast. Then, an algorithm based on the gradient field was proposed to enhance image details, improve local contrast and improve image quality, so that the details of components can be clearly displayed on the detection screen. A group of X-ray images of components with uneven thickness was selected for experiments, with comparisons against algorithms such as Contrast Limited Adaptive Histogram Equalization (CLAHE) and homomorphic filtering. Experimental results show that the proposed algorithm has a more obvious enhancement effect and displays the detailed information of the components better. Quantitative evaluation by average gradient and No-Reference Structural Sharpness (NRSS) texture analysis further demonstrates the effectiveness of the algorithm.
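The first step, logarithmic gray-range compression, can be sketched in a few lines; the scaling constant below is a standard choice and an assumption, not necessarily the paper's.

```python
# Logarithmic gray-range compression of a 16-bit X-ray image (sketch).
import numpy as np

def log_compress(img, out_max=255.0):
    img = img.astype(np.float64)
    c = out_max / np.log1p(img.max())   # scale so output fills [0, out_max]
    return (c * np.log1p(img)).astype(np.uint8)

xray = np.random.randint(0, 65535, (4, 4), dtype=np.uint16)
print(log_compress(xray))
```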
Online behavior recognition using space-time interest points and probabilistic latent-dynamic conditional random field model
WU Liang, HE Yi, MEI Xue, LIU Huan
Journal of Computer Applications    2018, 38 (6): 1760-1764.   DOI: 10.11772/j.issn.1001-9081.2017112805
In order to improve the recognition of continuous online behavior sequences and enhance the stability of the behavior recognition model, an online behavior recognition method based on the Probabilistic Latent-Dynamic Conditional Random Field (PLDCRF) for surveillance video was proposed. Firstly, Space-Time Interest Points (STIP) were used to extract behavior features. Then, the PLDCRF model was applied to identify indoor human activity states. The proposed PLDCRF model incorporates hidden state variables and can construct the substructure of gesture sequences, select the dynamic features of gestures and label unsegmented sequences directly; it can also label the transition process between behaviors correctly, which greatly improves the effect of behavior recognition. Recognition rate comparisons on 10 different behaviors with Hidden Conditional Random Field (HCRF), Latent-Dynamic Conditional Random Field (LDCRF) and Latent-Dynamic Conditional Neural Field (LDCNF) show that the proposed PLDCRF model has a stronger recognition ability for continuous behavior sequences and better stability.
Adaptive threshold algorithm based on statistical prediction under spatial crowdsourcing environment
LIU Hui, LI Sheng'en
Journal of Computer Applications    2018, 38 (2): 415-420.   DOI: 10.11772/j.issn.1001-9081.2017071805
Focusing on the problems that task assignment is too random and the utility value is not ideal in the spatial crowdsourcing environment, an adaptive threshold algorithm based on statistical prediction was proposed. Firstly, the numbers of free tasks, free workers and free positions on the crowdsourcing platform were counted in real time to set the threshold value. Secondly, according to historical statistical analysis, the distributions of tasks and workers were divided into two balanced parts, and the min-max normalization method was applied to match each task to a certain worker. Finally, the probability of the appearance of the matched workers was calculated to verify the effectiveness of the task distribution. Experimental results on real data show that compared with the random threshold algorithm and the greedy algorithm, the utility value of the proposed algorithm is increased by 7% and 10% respectively, indicating that the proposed adaptive threshold algorithm can reduce randomness and improve the utility value in task assignment.
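The min-max normalization step can be sketched as follows; the attribute set and the way the normalized scores are combined are hypothetical.

```python
# Min-max normalization of worker attributes before task matching (sketch).
import numpy as np

def min_max(col):
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if hi > lo else np.zeros_like(col)

# rows: workers; columns: distance to task, historical acceptance rate
raw = np.array([[120.0, 0.7], [450.0, 0.9], [60.0, 0.4]])
scores = np.column_stack([1 - min_max(raw[:, 0]),  # nearer is better
                          min_max(raw[:, 1])]).mean(axis=1)
print(scores.argmax())  # index of the worker matched to the task
```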
Dimension reduction method of brain network state observation matrix based on Spectral Embedding
DAI Zhaokun, LIU Hui, WANG Wenzhe, WANG Yanan
Journal of Computer Applications    2017, 37 (8): 2410-2415.   DOI: 10.11772/j.issn.1001-9081.2017.08.2410
Since the brain network state observation matrix reconstructed from functional Magnetic Resonance Imaging (fMRI) is high-dimensional and lacks distinctive features, a dimensionality reduction method based on Spectral Embedding was presented. Firstly, a Laplacian matrix was constructed from the similarity measurements between samples. Secondly, to map the dataset from high-dimensional to low-dimensional space, the first two principal eigenvectors obtained by Laplacian matrix factorization were selected to construct a two-dimensional eigenvector space. The method was applied to reduce the dimension of the matrix and visualize it in two-dimensional space, and the results were evaluated by category validity indicators. Compared with dimensionality reduction algorithms such as Principal Component Analysis (PCA), Locally Linear Embedding (LLE) and Isometric Mapping (Isomap), the mapping points in the low-dimensional space obtained by the proposed method have clearer category significance. According to the category validity indicators, compared with the Multi-Dimensional Scaling (MDS) and t-distributed Stochastic Neighbor Embedding (t-SNE) algorithms, the Di index (the average distance among within-class samples) of the proposed method is decreased by 87.1% and 65.2% respectively, and the Do index (the average distance among between-class samples) is increased by 351.3% and 25.5% respectively. Finally, the visualization results on a number of samples show a certain regularity, validating the effectiveness and universality of the proposed method.
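A compact re-creation of the reduction step, assuming a Gaussian similarity graph: build the Laplacian and keep the first two nontrivial eigenvectors (sklearn's SpectralEmbedding wraps the same idea).

```python
# Spectral embedding to two dimensions via the graph Laplacian (sketch).
import numpy as np

def spectral_embed(X, sigma=1.0):
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)   # pairwise squared dists
    W = np.exp(-d2 / (2 * sigma ** 2))              # similarity matrix
    L = np.diag(W.sum(1)) - W                       # graph Laplacian
    vals, vecs = np.linalg.eigh(L)                  # ascending eigenvalues
    return vecs[:, 1:3]                             # skip trivial eigenvector

X = np.random.randn(60, 100)   # 60 samples, high-dimensional observations
print(spectral_embed(X).shape)  # (60, 2)
```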
Energy hole avoidance strategy based on multi-level energy heterogeneity for wireless sensor networks
XIE Lin, PENG Jian, LIU Tang, LIU Huashan
Journal of Computer Applications    2016, 36 (6): 1475-1479.   DOI: 10.11772/j.issn.1001-9081.2016.06.1475
In order to alleviate the energy hole problem in Wireless Sensor Networks (WSN), a Multi-level Energy Heterogeneous algorithm (MEH) was proposed. The energy consumption characteristics of WSN were analyzed, and nodes with different initial energies were deployed according to these characteristics: nodes in regions with heavy communication load were configured with higher initial energy, so as to balance the energy consumption rate of each region, alleviate the energy hole problem and prolong the network lifecycle. Simulation results show that compared with Low-Energy Adaptive Clustering Hierarchy (LEACH), the Distributed Energy-Balanced Unequal Clustering routing protocol (DEBUC) and the Nonuniform Distributed Strategy (NDS), MEH increases the network energy utilization rate, network lifecycle and period ratio of network energy by nearly 10 percentage points each, while maintaining a good balance of energy consumption. The experimental results show that MEH can effectively prolong the network lifecycle and ease the energy hole problem.
Existence detection algorithm for non-cooperative burst signals in wideband
WANG Yang, WANG Bin, JIANG Tianli, LIU Huaixing, CHEN Ting
Journal of Computer Applications    2016, 36 (3): 620-627.   DOI: 10.11772/j.issn.1001-9081.2016.03.620
With the extensive application of wideband receivers, blind detection of non-cooperative burst signals in broadband is increasingly important. It is difficult to detect burst signals with low duty cycles and to distinguish burst signals with high duty cycles from continuous-time signals. This problem was solved by constructing two broadband spectral statistics: the maximum spectrum and the maximum difference spectrum. By keeping the maximum value of the instantaneous spectrum, the maximum spectrum retains the information of both burst and non-burst signals; by keeping the maximum difference between adjacent instantaneous spectra, the maximum difference spectrum extracts burst information while suppressing continuous-time signals. Using these two statistics, the detection of burst signals in broadband is completed. Test results show that the proposed algorithm can handle burst signals of any duty cycle.
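The two statistics are easy to state in code: a running elementwise maximum of instantaneous spectra, and a running maximum of differences between adjacent spectra. The sketch below uses synthetic frames; the thresholding and decision logic are omitted.

```python
# Maximum spectrum and maximum difference spectrum over STFT-like frames.
import numpy as np

def burst_statistics(frames):
    """frames: (t, f) magnitude spectra over t time frames."""
    max_spectrum = frames.max(axis=0)        # bursts and continuous signals
    diffs = frames[1:] - frames[:-1]
    max_diff_spectrum = diffs.max(axis=0)    # bursts only: continuous
    return max_spectrum, max_diff_spectrum   # components cancel in the diff

frames = np.abs(np.random.randn(100, 64))
frames[40:45, 10] += 8.0                     # a short burst at frequency bin 10
ms, mds = burst_statistics(frames)
print(ms[10], mds[10])
```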
Application of weighted Fast Newman modularization algorithm in human brain structural network
XIA Yidan, WANG Bin, DONG Yingzhao, LIU Hui, XIONG Xin
Journal of Computer Applications    2016, 36 (12): 3347-3352.   DOI: 10.11772/j.issn.1001-9081.2016.12.3347
Binary brain network modularization is insufficient to describe the physiological features of the human brain. To solve this problem, a modularization algorithm for weighted brain networks based on the binary Fast Newman algorithm was presented. Taking the hierarchical clustering idea of agglomerating nodes as the basis, a weighted modularity indicator was built mainly on single-node weights and whole-network weights, and the modularity increment was taken as the test index to decide which two nodes should be merged in the weighted brain network, thus realizing module partition. The proposed method was applied to detect the modular structure of group-averaged data from 60 healthy people. The experimental results show that compared with the modular structure of the binary brain network, the brain network modularity of the proposed method is increased by 28% and more significant differences between the inside and outside of modules are revealed; moreover, the modular structure found by the proposed method is more consistent with the physiological characteristics of the human brain. Compared with two existing weighted modularization algorithms, the proposed method also slightly improves modularity while guaranteeing a reasonable identification of human brain modular structure.
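The weighted modularity on which the merge criterion rests can be sketched with the standard Newman weighted form; the paper's exact indicator may differ.

```python
# Weighted modularity Q for a given community labeling (standard form).
import numpy as np

def weighted_modularity(W, labels):
    """W: (n, n) symmetric weight matrix; labels: community id per node."""
    total = W.sum()                       # 2m in Newman's notation
    k = W.sum(axis=1)                     # weighted degrees
    same = labels[:, None] == labels[None, :]
    return ((W - np.outer(k, k) / total) * same).sum() / total

W = np.array([[0, 2, 1, 0], [2, 0, 2, 0], [1, 2, 0, 1], [0, 0, 1, 0.0]])
print(weighted_modularity(W, np.array([0, 0, 0, 1])))
```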
Optimization algorithm based on R-λ model rate control in H.265/HEVC
LIAO Jundong, LIU Licheng, HAO Luguo, LIU Hui
Journal of Computer Applications    2016, 36 (11): 2993-2997.   DOI: 10.11772/j.issn.1001-9081.2016.11.2993
In order to improve the bit allocation of the Largest Coding Unit (LCU) and the update precision of the parameters (α, β) in the R-λ model based rate control algorithm of H.265/HEVC, an optimized rate control algorithm was proposed. Bit allocation was carried out on the existing basic coding units, and the parameters (α, β) were updated by using the coding distortion degree. Experimental results show that in the constant bit rate case, compared with the HM13.0 rate control algorithm, the PSNR gain of the three components improves by at least 0.76 dB, the coding bits are reduced by at least 0.46%, and the coding time is reduced by at least 0.54%.
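For orientation, the R-λ relationship underlying the algorithm is λ = α·bpp^β, with (α, β) refined from the actually achieved bits; the update gains and functional form in this sketch are assumptions rather than the proposed algorithm's exact rules.

```python
# R-lambda model and a simple (alpha, beta) refinement step (sketch).
import math

def compute_lambda(bpp, alpha, beta):
    return alpha * (bpp ** beta)

def update_params(alpha, beta, bpp_real, lam_used, d_a=0.1, d_b=0.05):
    lam_est = compute_lambda(bpp_real, alpha, beta)
    err = math.log(lam_used) - math.log(lam_est)
    alpha *= math.exp(d_a * err)            # multiplicative correction
    beta += d_b * err * math.log(bpp_real)  # slope correction in log domain
    return alpha, beta

alpha, beta = 3.2, -1.367                   # common HM initial values
lam = compute_lambda(0.05, alpha, beta)
alpha, beta = update_params(alpha, beta, bpp_real=0.06, lam_used=lam)
print(round(lam, 2), round(alpha, 3), round(beta, 3))
```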
Hierarchical modeling method based on extensible port technology in real-time field
WANG Bin, CUI Xiaojie, HE Bi, LIU Hui, XU Shenglei, WANG Xiaojun
Journal of Computer Applications    2015, 35 (3): 872-877.   DOI: 10.11772/j.issn.1001-9081.2015.03.872
When the Model Driven Development (MDD) method is used in the real-time field, it is difficult to describe the whole control system completely and clearly in a single layer. A real-time multi-layer modeling method based on hierarchy theory was presented in this study. Extensible input and output ports were adopted to extend the existing meta-model technique in the real-time field; the eXtensible Markup Language (XML) was used to describe the ports, and a channel-based message transfer mechanism was applied to realize communication between models in multiple layers. The modeling results for a real-time control system show that compared with the single-layer modeling method, the hierarchical modeling method can effectively support the description of parallel interactions between multiple tasks when using model driven development in the real-time field, thus enhancing the visibility and reusability of complex real-time system models.
Image restoration algorithm of adaptive weighted encoding and L1/2 regularization
ZHA Zhiyuan, LIU Hui, SHANG Zhenhong, LI Runxin
Journal of Computer Applications    2015, 35 (3): 835-839.   DOI: 10.11772/j.issn.1001-9081.2015.03.835
Aiming at the denoising problem in image restoration, an adaptive weighted encoding and L1/2 regularization method was proposed. Firstly, since many real images contain not only Gaussian noise but also Laplace noise, an Improved L1-L2 Hybrid Error Model (IHEM) method combining the advantages of the L1 norm and L2 norm was proposed. Secondly, considering that the noise distribution changes during iteration, an adaptive membership degree method was proposed to reduce the number of iterations and the computational cost, and an adaptive weighted encoding method was applied, which works well on noise with heavy-tailed distributions. In addition, an L1/2 regularization method was adopted to obtain sparser solutions. Experimental results demonstrate that the proposed algorithm improves Peak Signal-to-Noise Ratio (PSNR) by about 3.5 dB and Structural SIMilarity (SSIM) by about 0.02 on average over the IHEM method, and achieves ideal results in dealing with different kinds of noise.
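The adaptive weighting idea can be sketched as an iteratively reweighted solve: residuals with heavy tails are down-weighted on each pass, followed by a power-reweighted shrinkage standing in for the L1/2 proximal step. This is an illustration under stated assumptions, not the paper's solver.

```python
# Iteratively reweighted encoding with an L1/2-style shrinkage (illustration).
import numpy as np

def adaptive_weighted_encode(A, y, lam=0.1, iters=20, eps=1e-6):
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        r = y - A @ x
        w = 1.0 / (np.abs(r) + eps)          # adaptive residual weights
        W = np.diag(w / w.max())
        # weighted ridge re-solve, then power-reweighted soft threshold
        x = np.linalg.solve(A.T @ W @ A + lam * np.eye(n), A.T @ W @ y)
        shrink = lam / (np.sqrt(np.abs(x)) + eps)
        x = np.sign(x) * np.maximum(np.abs(x) - shrink, 0.0)
    return x

A = np.random.randn(30, 10)
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true + 0.05 * np.random.randn(30)
print(np.round(adaptive_weighted_encode(A, y), 2))
```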
LTE downlink cross-layer scheduling algorithm based on QoS for value-added service
LIU Hui, ZHANG Sailong
Journal of Computer Applications    2015, 35 (2): 336-339.   DOI: 10.11772/j.issn.1001-9081.2015.02.0336
Aiming at the problem of how to achieve differentiated rates for value-added service users in the Long Term Evolution (LTE) system, an optimized Proportional Fairness (PF) algorithm was proposed. Taking channel conditions, paying level and satisfaction into consideration, the optimized PF algorithm with a QoS-aware service eigenfunction could properly schedule paying users when the paid rates could not otherwise be achieved, thereby providing rates differentiated by paying level. Simulations were conducted in the Matlab environment, where the optimized PF algorithm outperformed the traditional PF algorithm in satisfaction and effective throughput: the difference of average satisfaction between paying users was about 26%, and the average effective throughput increased by 17%. The simulation results indicate that under the premise of QoS in multi-service scenarios, the optimized algorithm can achieve users' different perceived average rates, guarantee satisfaction among the different paying parties and raise the effective throughput of the system.
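A minimal sketch of such a modified PF metric: the classic instantaneous-rate over average-throughput ratio scaled by a pay-level and satisfaction weight. The weighting form here is an assumption for illustration.

```python
# PF scheduling metric weighted by pay level and satisfaction (sketch).
import numpy as np

def schedule(inst_rate, avg_thr, pay_level, satisfaction):
    """All arrays of shape (n_users,); returns the user to schedule."""
    weight = pay_level * (1.0 - satisfaction)   # favor unsatisfied payers
    metric = weight * inst_rate / np.maximum(avg_thr, 1e-9)
    return int(metric.argmax())

user = schedule(inst_rate=np.array([5.0, 3.0, 4.0]),
                avg_thr=np.array([2.0, 1.0, 1.5]),
                pay_level=np.array([1.0, 2.0, 1.5]),
                satisfaction=np.array([0.9, 0.4, 0.6]))
print(user)
```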
Vibration measurement system based on ZigBee and Ethernet
CAO Mengchao, LIU Hua
Journal of Computer Applications    2015, 35 (10): 3000-3003.   DOI: 10.11772/j.issn.1001-9081.2015.10.3000
Traditional vibration measurement systems have weak network construction ability and low transmission rates. To solve these problems, a new vibration measurement system was designed using ZigBee and Ethernet. The system has three layers. ZigBee based on XBee-PRO was used to establish communication between collector nodes and router nodes to suit multipoint, long-span measurement. Ethernet based on LwIP was used to transmit the data accurately in real time. On the end device layer, the data were stored on an SD card in a server node and offered to computers. The experimental results show that the three-layer structure combines the strengths of ZigBee's network construction ability with Ethernet's high speed and stability; it can not only realize effective control of the measurement points, but also meet the requirements of long-span measurement and real-time data transmission.
Frequent closed itemset mining algorithm over uncertain data
LIU Huiting, SHEN Shengxia, ZHAO Peng, YAO Sheng
Journal of Computer Applications    2015, 35 (10): 2911-2914.   DOI: 10.11772/j.issn.1001-9081.2015.10.2911
Due to the downward closure property over uncertain data, existing solutions that mine all frequent itemsets may produce an exponential number of results. In order to obtain a reasonably small result set, the discovery of frequent closed itemsets over uncertain data was studied, and a new algorithm called Normal Approximation-based Probabilistic Frequent Closed Itemset Mining (NA-PFCIM) was proposed. The new method regarded itemset support as a probability distribution and mined frequent itemsets by using a normal distribution model, which supports large databases and extracts frequent itemsets with a high degree of accuracy. The algorithm then adopted a depth-first search strategy to obtain all probabilistic frequent closed itemsets, so as to reduce the search space and avoid redundant computation, using two probabilistic pruning techniques: superset pruning and subset pruning. Finally, the effectiveness and efficiency of the proposed method were verified by comparison with the Poisson distribution based algorithm A-PFCIM. The experimental results show that NA-PFCIM can decrease the number of extended itemsets and reduce the complexity of calculation, performing better than the compared algorithm.
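The normal-approximation test itself is short: with per-transaction existence probabilities p_i, support is approximated as N(sum p_i, sum p_i(1 - p_i)) and the itemset is probabilistically frequent if P(support >= minsup) >= tau. A sketch with continuity correction:

```python
# Normal approximation of probabilistic frequentness (sketch).
import math

def is_probabilistic_frequent(probs, minsup, tau=0.9):
    mu = sum(probs)
    var = sum(p * (1 - p) for p in probs)
    if var == 0:
        return mu >= minsup
    # P(X >= minsup) under N(mu, var), with continuity correction
    z = (minsup - 0.5 - mu) / math.sqrt(var)
    p_freq = 0.5 * math.erfc(z / math.sqrt(2))
    return p_freq >= tau

print(is_probabilistic_frequent([0.9, 0.8, 0.7, 0.95, 0.6], minsup=3))
```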
Blowing state recognition of basic oxygen furnace based on feature of flame color texture complexity
LI Pengju, LIU Hui, WANG Bin, WANG Long
Journal of Computer Applications    2015, 35 (1): 283-288.   DOI: 10.11772/j.issn.1001-9081.2015.01.0283
In converter blowing state recognition based on flame images, existing methods underutilize flame color texture information and their state recognition rates still need improvement. To deal with this problem, a new converter blowing state recognition method based on a flame color texture complexity feature was proposed. Firstly, the flame image was transformed into HSI color space and nonuniformly quantized; secondly, the co-occurrence matrix of the H and S components was computed to fuse the color information of the flame image; thirdly, a texture complexity feature descriptor was calculated from the color co-occurrence matrix; finally, the Canberra distance was used as the similarity criterion to classify and identify the blowing state. Experimental results show that while meeting real-time requirements, the recognition rate of the proposed method is increased by 28.33% and 3.33% respectively compared with the gray-level co-occurrence matrix and gray differential statistics methods.
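The final classification step can be sketched directly: Canberra distance between a flame image's complexity feature vector and per-state templates (feature extraction is summarized, not reproduced; the state names and vectors are hypothetical).

```python
# Nearest-template classification with the Canberra distance (sketch).
import numpy as np

def canberra(u, v, eps=1e-12):
    return float((np.abs(u - v) / (np.abs(u) + np.abs(v) + eps)).sum())

def classify(feature, templates):
    """templates: dict state -> feature vector; returns the nearest state."""
    return min(templates, key=lambda s: canberra(feature, templates[s]))

templates = {"early": np.array([0.2, 0.5, 0.1]),
             "middle": np.array([0.6, 0.3, 0.4]),
             "late": np.array([0.9, 0.1, 0.7])}
print(classify(np.array([0.58, 0.32, 0.38]), templates))
```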
Wireless communication system of capsule endoscope based on ZL70102
WEI Xueling, LIU Hua
Journal of Computer Applications    2015, 35 (1): 279-282.   DOI: 10.11772/j.issn.1001-9081.2015.01.0279
Traditional methods of digestive tract disease diagnosis have low accuracy and a painful examination process. To solve these problems, a wireless capsule endoscope system was designed that uses wireless communication technology to transmit images of the digestive tract out of the body. Firstly, the image gathering module was used to capture images of the digestive tract. Secondly, the image data were transmitted out of the body by the digital wireless communication system. Finally, the data were quickly uploaded to a PC by the receiving module to decompress and display the images. The experimental results show that the wireless communication system built on the MSP430 and ZL70102 is small, low-power and high-rate. Compared with existing capsule endoscopes that transmit analog signals, this digital wireless communication system has strong anti-interference capability; the accuracy of transmitted image data can reach 80% while the power consumption is only 31.6 mW.
Homomorphic compensation for recaptured image detection based on direction prediction
XIE Zhe, WANG Rangding, YAN Diqun, LIU Huacheng
Journal of Computer Applications    2014, 34 (9): 2687-2690.   DOI: 10.11772/j.issn.1001-9081.2014.09.2687
To resist recaptured-image attacks on face recognition systems, an algorithm based on predicting the gradient direction of face images was proposed. The contrast between real and recaptured images was enhanced by adaptive Gaussian homomorphic illumination compensation, and after convolution with 8-direction Sobel operators, a Support Vector Machine (SVM) classifier was trained and tested on the two kinds of images. In experiments on 522 live and recaptured faces from domestic and foreign face databases, including the NUAA Imposter Database and the Yale Face Database, the detection rate reached 99.51%; on a library of 522 samples built by taking 261 live face photos with a Samsung Galaxy Nexus phone and recapturing them, the detection rate was 98.08% and the feature extraction time was 167.04 s. The results show that the proposed algorithm can classify live and recaptured faces with high extraction efficiency.
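The gradient-direction feature step might look like the following: the image is convolved with Sobel kernels rotated to eight directions and the responses are pooled into a feature vector for the SVM. The kernel family and mean-pooling are assumptions.

```python
# 8-direction Sobel responses pooled into an SVM feature vector (sketch).
import numpy as np
from scipy.ndimage import convolve

base = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
kernels = [np.rot90(base, k) for k in range(4)]          # 0/90/180/270 deg
diag = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float)
kernels += [np.rot90(diag, k) for k in range(4)]         # 45-degree family

def direction_features(img):
    return np.array([np.abs(convolve(img, k)).mean() for k in kernels])

img = np.random.rand(64, 64)
print(direction_features(img).round(3))  # 8-dim feature fed to the SVM
```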
Information hiding technology based on digital screening and its application in covert secrecy communication
GUO Wei, LIU Huiping
Journal of Computer Applications    2014, 34 (9): 2645-2649.   DOI: 10.11772/j.issn.1001-9081.2014.09.2645
To address the confidentiality and capacity problems of modern network communication, an information hiding method based on digital screening technology was proposed, in which information is embedded into digital text documents for secure communication. In this method, the watermark information was hidden in background shades composed of screen dots, fused with the shades and a stochastic Frequency Modulation (FM) screen dot image, and the background shades with embedded information were then added to the text document as regular elements. Analysis and experimental results indicate that the proposed method has a large information capacity, embedding the equivalent of 72000 Chinese characters in one A4 document page. In addition, it offers good visual quality, strong concealment, a high security level and small file size, so it can be widely used in modern secure network communication.
Multi-label classification based on singular value decomposition-partial least squares regression
MA Zongjie, LIU Huawen
Journal of Computer Applications    2014, 34 (7): 2058-2060.   DOI: 10.11772/j.issn.1001-9081.2014.07.2058
To tackle multi-label data with high dimensionality and label correlations, a multi-label classification approach based on Singular Value Decomposition (SVD) and Partial Least Squares Regression (PLSR) was proposed, aiming at dimensionality reduction and regression analysis. Firstly, the label space was treated as a whole so as to exploit label correlations. Then, the score vectors of both the instance space and the label space were obtained by SVD for dimensionality reduction. Finally, the multi-label classification model was established based on PLSR. Experiments performed on four real high-dimensional datasets verify the effectiveness of the proposed method.
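Under assumed data shapes, the pipeline can be sketched with standard components: SVD compresses both the instance and label spaces, PLS regression links the score spaces, and predictions are mapped back to the label space and thresholded.

```python
# SVD-compressed spaces linked by PLS regression (sketch, assumed shapes).
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cross_decomposition import PLSRegression

X = np.random.rand(100, 500)                        # high-dimensional instances
Y = (np.random.rand(100, 14) > 0.7).astype(float)   # multi-label targets

svd_x = TruncatedSVD(n_components=20).fit(X)
svd_y = TruncatedSVD(n_components=5).fit(Y)
Xs, Ys = svd_x.transform(X), svd_y.transform(Y)

pls = PLSRegression(n_components=5).fit(Xs, Ys)
Y_hat = svd_y.inverse_transform(pls.predict(Xs))    # back to label space
print((Y_hat > 0.5).astype(int)[:2])
```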
Real-time scheduling algorithm for periodic priority exchange
WANG Bin, WANG Cong, XUE Hao, LIU Hui, XIONG Xin
Journal of Computer Applications    2014, 34 (3): 668-672.   DOI: 10.11772/j.issn.1001-9081.2014.03.0668
A static priority scheduling algorithm with periodic priority exchange was proposed to resolve the latency problem of low-priority tasks in real-time multi-task systems. In this method, a fixed timeslice period was defined, and two independent tasks of different priorities in the multi-task system exchanged their priority levels periodically. Under the precondition that the execution time of the higher-priority task is guaranteed, the lower-priority task gets more opportunities to run as soon as possible, shortening its execution delay. The proposed method can effectively mitigate the poor real-time performance of low-priority tasks and improve the overall control capability of real-time multi-task systems.
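A toy simulation of the exchange rule, with a simplified scheduler and timing model: two tasks swap priority levels every fixed timeslice, which bounds the low-priority task's delay.

```python
# Periodic priority exchange between two tasks (toy model).
def run(slices, timeslice=4):
    prio = {"taskA": 1, "taskB": 2}       # lower number = higher priority
    timeline = []
    for t in range(slices):
        if t % timeslice == 0 and t > 0:
            prio["taskA"], prio["taskB"] = prio["taskB"], prio["taskA"]
        timeline.append(min(prio, key=prio.get))  # run higher-priority task
    return timeline

print("".join("A" if x == "taskA" else "B" for x in run(16)))
# AAAABBBBAAAABBBB: each task runs within a bounded delay
```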
Dynamic replacement policy based on cost and popularity in Named Data Networking
HUANG Sheng, TENG Mingnian, CHEN Shenglan, LIU Huanlin, XIANG Jinsong
Journal of Computer Applications    2014, 34 (12): 3369-3372.  
In view of the problem of replacing cached data efficiently in Named Data Networking (NDN), a new replacement policy considering both the popularity and the request cost of data was proposed. It dynamically allocated the proportions of the popularity factor and the request cost factor according to the interval between two successive requests for the same data, so that nodes cache data with high popularity and high request cost. Users can then get such data from the local node on the next request, reducing the response time of data requests and easing link congestion. The simulation results show that the proposed replacement policy can efficiently improve the in-network hit rate and reduce the delay and distance for users to fetch data.
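The replacement score can be sketched as a popularity/cost mixture whose weight adapts to the inter-request interval; the exponential weighting below is an assumption for illustration.

```python
# Adaptive popularity/cost cache value; evict the lowest-valued item (sketch).
import math

def cache_value(popularity, cost, interval, tau=10.0):
    w = math.exp(-interval / tau)   # recent re-requests favor popularity
    return w * popularity + (1 - w) * cost

# items: name -> (popularity, request cost, seconds since last request)
cache = {"a": (0.9, 0.2, 3.0), "b": (0.3, 0.8, 40.0), "c": (0.5, 0.5, 12.0)}
victim = min(cache, key=lambda k: cache_value(*cache[k]))
print(victim)
```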
Fault detection approach for MPSoC by redundancy core
TANG Liu, HUANG Zhangqin, HOU Yibin, FANG Fengcai, ZHANG Huibing
Journal of Computer Applications    2014, 34 (1): 41-45.   DOI: 10.11772/j.issn.1001-9081.2014.01.0041
For a better trade-off between fault-tolerance mechanisms and fault-tolerance overhead in processor reliability research, a fault detection approach for Multi-Processor System-on-Chip (MPSoC) was proposed that places the calculation and comparison parts of the detecting code on a redundant core to achieve MPSoC failure detection. The technique requires no additional hardware modification, and shortens the design cycle while reducing performance and memory overheads. The verification experiment was implemented on an MPSoC by fault injection and running multiple benchmark programs. Compared with several previous fault detection methods in terms of capability, area, memory and performance overhead, the experimental results show that the approach is effective and achieves a better trade-off between performance and overhead.
Belly shape modeling with new combined invariant moment based on stereo vision
LIU Huan, ZHU Ping, XIAO Rong, TANG Weidong
Journal of Computer Applications    2013, 33 (11): 3183-3186.  
To overcome the influence of illumination changes and blurring during actual shooting on stereo-vision-based three-dimensional reconstruction, new illumination-robust combined invariant moments were put forward. Meanwhile, to improve image feature matching that depends solely on similarity, dual constraints of slope and distance were incorporated into the similarity measurement, and matching was carried out under their combined action. Finally, the three-dimensional reconstruction of the whole belly contour was built automatically. The belly shape parameters obtained by the proposed method achieve the same accuracy as a 3D scanner, with a measurement error of less than 0.5 cm from the actual values. The experimental results show that the hardware of the system is simple and low-cost, and its information collection is fast and reliable, making it suitable for apparel design.