Visual analysis system for exploring spatio-temporal exhibition data
LIU Li, HU Haibo, YANG Tao
Journal of Computer Applications    2020, 40 (9): 2719-2727.   DOI: 10.11772/j.issn.1001-9081.2019111976
Spatio-temporal data in the exhibition environment are complex, highly discrete, discontinuous and incompletely recorded. In most cases the data contain not only time, longitude and latitude but also additional attributes such as speed, acceleration and direction, which makes their analysis challenging. Therefore, an interactive visual analysis system, Visual Analysis system for Spatio-Temporal Exhibition Data (VASTED), was proposed, which combines multiple interactions to analyze participants' types and movement patterns, as well as possible abnormal events, at both overview and detail levels. The system utilizes and further improves the 3D map and Gantt chart to effectively represent the various attributes of the data. A case study on the dataset of the ChinaVis 2019 Challenge 1 demonstrates the feasibility of the system.
Hybrid multi-objective grasshopper optimization algorithm based on fusion of multiple strategies
WANG Bo, LIU Liansheng, HAN Shaocheng, ZHU Shixing
Journal of Computer Applications    2020, 40 (9): 2670-2676.   DOI: 10.11772/j.issn.1001-9081.2020030315
In order to improve the performance of the Grasshopper Optimization Algorithm (GOA) on multi-objective problems, a Hybrid Multi-objective Grasshopper Optimization Algorithm (HMOGOA) fusing multiple strategies was proposed. First, a Halton sequence was used to build the initial population, ensuring a uniform distribution and high diversity in the initial stage. Then, a differential mutation operator was applied to guide the population mutation, drawing the population toward the elite individuals and extending the search range. Finally, an adaptive weight factor was used to dynamically adjust the global exploration and local exploitation abilities of the algorithm according to the state of the optimization, improving the optimization efficiency and the solution-set quality. In experiments on seven typical test functions, HMOGOA was compared with the Multi-Objective Grasshopper Optimization Algorithm (MOGOA), Multi-Objective Particle Swarm Optimization (MOPSO), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The experimental results indicate that, compared with these algorithms, HMOGOA avoids falling into local optima, produces a markedly more uniform and broader solution-set distribution, and achieves higher convergence accuracy and stability.
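The Halton-sequence initialization step described above can be sketched as follows; the function names and the choice of prime bases are illustrative, not the paper's implementation:

```python
def halton(index, base):
    """index-th element (1-based) of the van der Corput sequence in the given base."""
    result, f = 0.0, 1.0 / base
    i = index
    while i > 0:
        result += f * (i % base)  # append next radix-inverse digit
        i //= base
        f /= base
    return result

def halton_population(n_agents, dims, bases=(2, 3, 5, 7, 11, 13)):
    """Initialize n_agents points in [0,1)^dims, one prime base per dimension,
    giving a low-discrepancy (well-spread) initial population."""
    return [[halton(i + 1, bases[d]) for d in range(dims)]
            for i in range(n_agents)]

pop = halton_population(8, 2)  # 8 grasshoppers in a 2-D unit search space
```

Each coordinate would then be scaled to the actual variable bounds of the problem.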
Improved redundant point filtering-based 3D object detection method
SONG Yifan, ZHANG Peng, ZONG Libo, MA Bo, LIU Libo
Journal of Computer Applications    2020, 40 (9): 2555-2560.   DOI: 10.11772/j.issn.1001-9081.2019122092
VoxelNet is the first end-to-end object detection model based on point clouds; taking only point cloud data as input, it achieves good results. However, because VoxelNet takes the point cloud of the whole scene as input, much computation is spent on background points, and since a point cloud carrying only geometric information has low recognition granularity for targets, false detections and missed detections easily occur in complex scenes. To solve these problems, an improved VoxelNet model with a view frustum was proposed. Firstly, the targets of interest were located in the RGB front-view image. Then each 2D target was lifted into a spatial view frustum, and the frustum candidate region was extracted from the point cloud to filter out redundant points; only the points within the candidate region were processed to obtain the detection results. Compared with VoxelNet, the improved algorithm reduces the computational complexity of point cloud processing and avoids computation on background points, increasing detection efficiency; at the same time it avoids the interference of redundant background points and reduces the false and missed detection rates. Experimental results on the KITTI dataset show that the improved algorithm outperforms VoxelNet in 3D detection, with average precisions of 67.92%, 59.98% and 53.95% at the easy, moderate and hard levels.
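The frustum filtering idea above can be sketched as a projection test: keep only the 3D points whose image projection falls inside the 2D detection box. This is a minimal sketch assuming a known 3x4 camera projection matrix; it is not the paper's implementation:

```python
import numpy as np

def frustum_filter(points, proj, box):
    """Keep 3D points whose projection lies inside the 2D box.
    points: (N,3) array; proj: (3,4) projection matrix; box: (xmin,ymin,xmax,ymax)."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
    img = (proj @ homo.T).T                                    # project to image plane
    in_front = img[:, 2] > 0                                   # discard points behind camera
    uv = img[:, :2] / img[:, 2:3]                              # perspective divide
    xmin, ymin, xmax, ymax = box
    inside = ((uv[:, 0] >= xmin) & (uv[:, 0] <= xmax) &
              (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax))
    return points[in_front & inside]
```

Only the returned points would then be voxelized and fed to the detection network.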
WeChat payment behavior recognition model based on division of large and small burst blocks
LIANG Denggao, ZHOU Anmin, ZHENG Rongfeng, LIU Liang, DING Jianwei
Journal of Computer Applications    2020, 40 (7): 1970-1976.   DOI: 10.11772/j.issn.1001-9081.2019122063
WeChat red packet and fund transfer functions are exploited for illegal activities such as red packet gambling and illicit transactions, yet existing work in this field can hardly identify the specific numbers of red packets sent and received or of fund transfers, and suffers from low recognition rates and high resource consumption. To effectively identify red packet and transfer behaviors, a method that divides traffic into large and small burst blocks to extract traffic features was proposed. Firstly, exploiting the burstiness of sending and receiving red packets and of fund transfers, a large burst time threshold was set to delimit the burst blocks of such behaviors. Then, since these behaviors consist of several consecutive user operations, a small burst threshold was set to further divide each traffic block into small bursts. Finally, the features of the small burst blocks within each large burst block were synthesized into the final features. The experimental results show that the proposed method generally outperforms existing work on WeChat payment behavior recognition in time efficiency, space occupancy, recognition accuracy and algorithm universality, with an average accuracy of up to 97.58%. Tests in a real environment show that the method can largely identify the numbers of red packets sent and received and of fund transfers by a user over a period of time.
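The two-level burst division can be sketched directly from packet timestamps: a new burst starts whenever the inter-packet gap exceeds a threshold, applied first with the large threshold and then with the small one inside each large burst. The threshold values here are placeholders, not the paper's tuned parameters:

```python
def split_bursts(timestamps, gap):
    """Group a sorted list of packet timestamps into bursts: a new burst
    starts whenever the inter-packet gap exceeds the threshold."""
    bursts, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > gap:
            bursts.append(current)
            current = []
        current.append(t)
    bursts.append(current)
    return bursts

def divide(timestamps, large_gap, small_gap):
    """Hierarchical division: large burst blocks first, then small bursts inside each."""
    return [split_bursts(b, small_gap) for b in split_bursts(timestamps, large_gap)]
```

Features (packet counts, sizes, durations) would then be computed per small burst and aggregated within each large burst block.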
Lifetime estimation for human motion with WiFi channel state information
LIU Lishuang, WEI Zhongcheng, ZHANG Chunhua, WANG Wei, ZHAO Jijun
Journal of Computer Applications    2019, 39 (7): 2056-2060.   DOI: 10.11772/j.issn.1001-9081.2018122431
Concerning the poor privacy and flexibility of traditional lifetime estimation for human motion, a lifetime estimation system based on analyzing the amplitude variation of WiFi Channel State Information (CSI) was proposed. In this system, the continuous and complex lifetime estimation problem was transformed into a discrete and simple human motion detection problem. Firstly, CSI was collected, and outliers and noise were filtered out. Secondly, Principal Component Analysis (PCA) was used to reduce the dimensionality of the subcarriers, obtaining the principal components and the corresponding eigenvectors. Thirdly, the variance of the principal components and the mean of the first difference of the eigenvectors were calculated, and a Back Propagation Neural Network (BPNN) model was trained with the ratio of these two parameters as the feature. Fourthly, human motion detection was performed by the trained BPNN model, and the CSI data were divided into segments of equal width when human motion was detected. Finally, after motion detection had been performed on all CSI segments, the human motion lifetime was estimated according to the number of segments in which motion was detected. In a real indoor environment, the average accuracy of human motion detection reaches 97% and the error rate of the estimated lifetime is below 10%. The experimental results show that the proposed system can effectively estimate the lifetime of human motion.
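The ratio feature in steps two and three can be sketched as follows: variance of the first principal component divided by the mean absolute first difference of its eigenvector. This is a minimal sketch of that computation only; the paper's denoising and multi-component handling are omitted:

```python
import numpy as np

def csi_feature(csi):
    """csi: (T, S) amplitude matrix (T time samples, S subcarriers).
    Returns variance of the first principal component over the mean
    absolute first difference of its eigenvector."""
    centered = csi - csi.mean(axis=0)
    cov = np.cov(centered, rowvar=False)          # (S, S) subcarrier covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    v = eigvecs[:, -1]                            # eigenvector of largest eigenvalue
    pc = centered @ v                             # first principal component over time
    return pc.var() / np.abs(np.diff(v)).mean()
```

The resulting scalar would be the input feature of the BPNN detector for each CSI window.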
Directed fuzzing method for binary programs
ZHANG Hanfang, ZHOU Anmin, JIA Peng, LIU Luping, LIU Liang
Journal of Computer Applications    2019, 39 (5): 1389-1393.   DOI: 10.11772/j.issn.1001-9081.2018102194
To address the blindness of mutation in current fuzzing, where most mutated samples follow the same high-frequency paths, a binary fuzzing method based on lightweight program analysis was proposed and implemented. Firstly, the target binary was statically analyzed to filter out the comparison instructions that hinder sample files from penetrating deep into the program during fuzzing. Secondly, the binary was instrumented to obtain the concrete operand values of these comparison instructions, from which real-time comparison progress information was maintained for each instruction, and the importance of each sample was measured by this progress information. Thirdly, real-time path coverage information was used to increase the probability that samples exercising rare paths were selected for mutation. Finally, the input files were mutated in a directed way using the comparison progress information combined with a heuristic strategy, improving the efficiency of generating valid inputs that can bypass the comparison checks in the program. The experimental results show that the proposed method outperforms the binary fuzzing tool AFL-Dyninst in both finding crashes and discovering new paths.
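One plausible form of the "comparison progress" metric is the fraction of leading bytes of the two cmp operands that already match; this is an illustrative sketch, since the abstract does not specify the exact metric:

```python
def compare_progress(a: bytes, b: bytes) -> float:
    """Progress of a sample toward satisfying a multi-byte comparison:
    fraction of leading bytes of the two operands that already match.
    1.0 means the check would pass; higher values mark more promising samples."""
    n = max(len(a), len(b), 1)
    matched = 0
    for x, y in zip(a, b):
        if x != y:
            break
        matched += 1
    return matched / n
```

Samples with higher progress on a blocking comparison would be prioritized for further mutation.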
Identification method of user's medical intention in chatting robot
YU Hui, FENG Xupeng, LIU Lijun, HUANG Qingsong
Journal of Computer Applications    2018, 38 (8): 2170-2174.   DOI: 10.11772/j.issn.1001-9081.2018010190
Traditional user intention recognition methods in chatting robots are usually based on template matching or hand-crafted feature sets, which are laborious and time-consuming to build and poorly extensible. To address these problems, an intention recognition model combining the Biterm Topic Model (BTM) and a Bidirectional Gated Recurrent Unit (BiGRU) was proposed, taking into account the characteristics of health-related chat texts. The identification of a user's medical intention was treated as a classification problem, with topic features used in the hybrid model. Firstly, the topic of each of the user's chat sentences was mined and quantified by BTM. Then the results were fed into the BiGRU for context-based learning to obtain the final representation of the user's consecutive statements, which was finally classified. In comparison experiments on a crawled corpus, the BTM-BiGRU model clearly outperforms traditional methods such as the Support Vector Machine (SVM), and its F value increases by about 1.5 percentage points over the state-of-the-art combination of a Convolutional Neural Network and a Long Short-Term Memory network (CNN-LSTM). The experimental results show that the proposed method, by exploiting the characteristics of the studied texts, can effectively improve the accuracy of intention recognition.
Obfuscator low level virtual machine deobfuscation framework based on symbolic execution
XIAO Shuntao, ZHOU Anmin, LIU Liang, JIA Peng, LIU Luping
Journal of Computer Applications    2018, 38 (6): 1745-1750.   DOI: 10.11772/j.issn.1001-9081.2017122892
The deobfuscation result of the deobfuscation framework Miasm is an image, which cannot be decompiled to recover the program source code. After in-depth study of the obfuscation strategy of the Obfuscator Low Level Virtual Machine (OLLVM) and of Miasm's deobfuscation approach, a general automatic OLLVM deobfuscation framework based on symbolic execution was proposed and implemented. Firstly, a basic block identification algorithm was used to find the useful and useless basic blocks in the obfuscated program. Secondly, symbolic execution was used to determine the topological relations among the useful blocks. Then, instruction repair was applied directly to the assembly code of the basic blocks. Finally, a deobfuscated executable file was obtained. The experimental results show that, while keeping the deobfuscation time as short as possible, the code similarity between the deobfuscated program and the non-obfuscated source program reaches 96.7%. The proposed framework handles the OLLVM deobfuscation of C/C++ files under the x86 architecture well.
Password strength estimation model based on ensemble learning
SONG Chuangchuang, FANG Yong, HUANG Cheng, LIU Liang
Journal of Computer Applications    2018, 38 (5): 1383-1388.   DOI: 10.11772/j.issn.1001-9081.2017102516
Existing password strength evaluation models are not universal: no single model applies from simple passwords to very complex ones. To address this, a password evaluation model based on multi-model ensemble learning was designed. Firstly, a real password training set was used to train multiple existing password evaluation models as sub-models. Secondly, the trained sub-models were used as base learners for ensemble learning, with an ensemble strategy deliberately biased toward weak estimates, so as to combine the advantages of all sub-models. Finally, a general password evaluation model with high accuracy was obtained. A set of real user passwords leaked on the network was used as the experimental dataset. The experimental results show that the ensemble model evaluates the strength of passwords of different complexity with high accuracy and is universally applicable, demonstrating good applicability to password strength evaluation.
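A weakness-biased ensemble can be sketched as a weighted average that pulls the aggregate toward the lowest sub-model score, on the intuition that a password is only as strong as its weakest assessment. The weighting scheme and `bias` parameter are illustrative assumptions, not the paper's strategy:

```python
def ensemble_strength(scores, bias=2.0):
    """Combine sub-model strength scores in [0, 1] with extra weight on the
    weakest estimates; bias > 1 pulls the aggregate toward the minimum score."""
    weights = [(1.0 - s) ** bias + 1e-9 for s in scores]  # weaker score -> larger weight
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

For example, one very weak estimate among strong ones dominates the combined score instead of being averaged away.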
Online signature verification based on curve segment similarity matching
LIU Li, ZHAN Enqi, ZHENG Jianbin, WANG Yang
Journal of Computer Applications    2018, 38 (4): 1046-1050.   DOI: 10.11772/j.issn.1001-9081.2017092186
Aiming at the mismatching and overly large matching distances caused by curve scaling, shifting, rotation and non-uniform sampling in online signature verification, a curve segment similarity matching method was proposed. During verification, the two curves were first partitioned into segments and matched coarsely, with a dynamic programming algorithm based on a windowed cumulative difference matrix introduced to obtain the matching relationship. Then, the similarity distance of each matching pair and the weighted sum over all pairs were calculated: each curve of a matching pair was fitted, similarity-transformed within a certain range, and resampled to obtain the Euclidean distance. Finally, the average similarity distance between the test signature and all template signatures was used as the authentication distance and compared with the training threshold to judge authenticity. The method was validated on the open databases SUSIG Visual and SUSIG Blind with Equal Error Rates (EER) of 3.56% and 2.44% respectively when using personalized thresholds, and on the Blind dataset the EER was about 14.4% lower than that of the traditional Dynamic Time Warping (DTW) method. The experimental results show that the proposed method has advantages in verifying both skilled and random forgeries.
Vessel traffic pattern extraction based on automatic identification system data and Hough transformation
CHEN Hongkun, CHA Hao, LIU Liguo, MENG Wei
Journal of Computer Applications    2018, 38 (11): 3332-3335.   DOI: 10.11772/j.issn.1001-9081.2018040841
Traditional trajectory clustering algorithms are no longer applicable to pattern extraction over large-scale sea areas because continuous ship navigation data are lacking there. To solve this problem, a vessel traffic pattern extraction technique using the Hough transform was proposed. Based on Automatic Identification System (AIS) data, the target area was divided into grids and the ship density distribution was analyzed. Considering the limited resolution of the density distribution, median filtering and morphological filtering were used to optimize it. A method combining the Hough transform and kernel density estimation was then proposed to extract the vessel traffic pattern and estimate its width. Experimental verification of the method with real historical AIS data shows that the trajectory clustering method cannot extract traffic patterns in low-ship-density areas: the ship trajectories in its trajectory clusters account for only 29.81% of the total number in the area, compared with 95.89% for the proposed method. The experimental results validate the effectiveness of the proposed method.
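The Hough transform step on the gridded density map can be sketched with a standard (rho, theta) accumulator over occupied cells; this is a generic sketch, not the paper's implementation, and the filtering and width estimation steps are omitted:

```python
import numpy as np

def hough_lines(grid, n_theta=180):
    """Accumulate (rho, theta) votes for occupied cells of a binary density
    grid and return the parameters of the strongest line, where
    rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(grid)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*grid.shape)))       # max possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1           # one vote per (cell, angle)
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t]
```

The dominant (rho, theta) peak corresponds to the main shipping route through the density map.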
Detection of SQL injection behaviors for PHP applications
ZHOU Ying, FANG Yong, HUANG Cheng, LIU Liang
Journal of Computer Applications    2018, 38 (1): 201-206.   DOI: 10.11772/j.issn.1001-9081.2017071692
Structured Query Language (SQL) injection attacks are a threat to Web applications. Aiming at SQL injection behaviors in PHP (Hypertext Preprocessor) applications, a detection model based on taint analysis was proposed. Firstly, when an SQL function was executed, the SQL statement was captured and the attacker's identity information was recorded through PHP extension technology; from this information a request log was generated and used as the analysis source. Secondly, taint-marked SQL parsing was implemented on the basis of SQL grammar analysis and the abstract syntax tree, and taint analysis was used to extract multiple features reflecting SQL injection behaviors. Finally, the random forest algorithm was used to identify malicious SQL requests. The experimental results indicate that the proposed model reaches an accuracy of 96.9%, 7.2 percentage points higher than that of regular-expression matching detection. The information acquisition module of the model can be loaded as an extension into any PHP application, so the model is portable and applicable to security auditing and attack tracing.
Online service evaluation based on social choice theory
LI Wei, FU Xiaodong, LIU Li, LIU Lijun
Journal of Computer Applications    2017, 37 (7): 1983-1988.   DOI: 10.11772/j.issn.1001-9081.2017.07.1983
Inconsistent user evaluation standards and preferences make online services in cyberspace unfairly incomparable, so users can hardly choose satisfactory services. A ranking method for online service quality based on social choice theory was therefore proposed. First, a group preference matrix was built from the user-service evaluation matrix given by users; second, a 0-1 integer programming model was built from the group preference matrix and the Kemeny social choice function; finally, the optimal service ranking was obtained by solving this model. The method aggregates individual preferences into a group preference, so that the decision is consistent with the majority preference of the group and maximally consistent with the individual preferences. Theoretical analysis and experimental results verify the rationality and effectiveness of the method. The experimental results show that it resolves the incomparability between online services, realizes online service quality ranking, and effectively resists recommendation attacks, demonstrating strong resistance to manipulation.
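The Kemeny objective being optimized can be illustrated with a brute-force search over all rankings (the paper solves it as a 0-1 integer program; exhaustive search shown here is only feasible for a handful of services):

```python
from itertools import permutations, combinations

def kemeny_rank(prefs):
    """Kemeny aggregation by brute force: return the ranking that minimizes
    total pairwise disagreement with the users' preference rankings.
    prefs: list of rankings, each a tuple of service ids from best to worst."""
    items = prefs[0]

    def disagreements(order):
        pos = {s: i for i, s in enumerate(order)}
        total = 0
        for r in prefs:
            rpos = {s: i for i, s in enumerate(r)}
            for a, b in combinations(items, 2):
                # disagreement: the candidate order and the user rank a,b oppositely
                if (pos[a] - pos[b]) * (rpos[a] - rpos[b]) < 0:
                    total += 1
        return total

    return min(permutations(items), key=disagreements)
```

With two users preferring a > b > c and one preferring b > a > c, the Kemeny-optimal ranking follows the majority.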
Parallel trajectory compression method based on MapReduce
WU Jiagao, XIA Xuan, LIU Linfeng
Journal of Computer Applications    2017, 37 (5): 1282-1286.   DOI: 10.11772/j.issn.1001-9081.2017.05.1282
The massive spatio-temporal trajectory data produced by the proliferation of Global Positioning System (GPS)-enabled devices is a heavy burden to store, transmit and process, and many trajectory compression methods have been devised to reduce it. A parallel trajectory compression method based on MapReduce was proposed. To solve the loss of correlation near segmentation points caused by parallelization, the trajectory was first divided by two segmentation schemes whose segmentation points interleave. Then the trajectory segments were assigned to different nodes for parallel compression. Lastly, the compression results were matched and merged. The performance tests and analysis show that the proposed method not only increases compression efficiency significantly but also eliminates the error caused by the loss of correlation.
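The abstract does not name the per-segment compressor; assuming the common Douglas-Peucker algorithm, the map-side compression of one trajectory segment can be sketched as:

```python
def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:
        return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
    return abs(dy * (x - x1) - dx * (y - y1)) / norm

def douglas_peucker(points, eps):
    """Keep the endpoints; recursively keep the farthest point while it
    deviates more than eps from the chord, otherwise drop the interior."""
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], eps)
    right = douglas_peucker(points[i:], eps)
    return left[:-1] + right
```

In the MapReduce scheme, each node would run this on its assigned segments, and the reduce step would merge the two interleaved segmentations.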
Bilingual collaborative Chinese relation extraction based on parallel corpus
GUO Bo, FENG Xupeng, LIU Lijun, HUANG Qingsong
Journal of Computer Applications    2017, 37 (4): 1051-1055.   DOI: 10.11772/j.issn.1001-9081.2017.04.1051
In relation extraction from Chinese resources, long Chinese sentences have complex structure, syntactic feature extraction is difficult, and accuracy is low. A bilingual cooperative relation extraction method based on a parallel corpus was proposed to resolve these problems. On a Chinese-English parallel corpus, an English relation extraction classifier was trained with dependency syntactic features obtained from mature English parsing tools, and a Chinese relation extraction classifier was trained with n-gram features suited to Chinese; together they constituted a bilingual view. Finally, based on the annotated and aligned parallel corpus, the training instances that each classifier labeled with high confidence were added to the other for bilingual co-training, yielding a Chinese relation extraction classifier with better performance. Experimental results on a Chinese test corpus show that the proposed method improves on weakly supervised Chinese relation extraction, increasing the F value by 3.9 percentage points.
Mining denial of service vulnerability in Android applications automatically
ZHOU Min, ZHOU Anmin, LIU Liang, JIA Peng, TAN Cuijiang
Journal of Computer Applications    2017, 37 (11): 3288-3293.   DOI: 10.11772/j.issn.1001-9081.2017.11.3288
When the receiver of an Intent does not validate empty or abnormal data, the process will crash and cause denial of service. Concerning this, an automated Android component vulnerability mining framework based on static analysis and fuzzing was proposed. In this framework, reverse analysis and static data flow analysis were used to extract package names, components, Intents carrying traffic data, and data flow paths from exported components to private components, to assist the fuzzing. In addition, more mutation strategies for the attributes of an Intent (such as Action, Category, Data and Extra) were added when generating Intent test cases, and the Accessibility technology was adopted to close crash windows in order to realize automation. Finally, a tool named DroidRVMS was implemented, and a comparative experiment with Intent Fuzzer was designed to verify the validity of the framework. The experimental results show that DroidRVMS can find denial-of-service vulnerabilities resulting from dynamic broadcast receivers and most types of exceptions.
Adaptive N-sigma amplitude spectrum shaping algorithm in transform domain communication system
LIU Li, ZHANG Hengyang, MAO Yuquan, SUN Le, MA Lihua
Journal of Computer Applications    2016, 36 (6): 1492-1495.   DOI: 10.11772/j.issn.1001-9081.2016.06.1492
In order to reduce the relatively high probabilities of missed and false detection of the traditional hard threshold setting algorithm and improve the anti-interference performance of the Transform Domain Communication System (TDCS), an adaptive N-sigma amplitude spectrum shaping algorithm was proposed. The amplitude information of the environmental power spectrum was obtained by spectrum sensing, and the mean and standard deviation of the spectrum were calculated. According to the theory of the normal distribution, the threshold was set adaptively; therefore, when the electromagnetic environment changed, the mean and standard deviation were recomputed and the threshold updated. The simulation results show that, compared with the traditional hard threshold setting algorithm, the threshold setting of the adaptive N-sigma amplitude spectrum shaping algorithm is more flexible and accurate, reducing the missed and false detection probabilities of interference and improving the overall anti-interference performance of the system.
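The adaptive threshold itself is simple to sketch: mark spectrum bins above mean + N * std as interfered and zero them in the amplitude spectrum. The mask convention (1 = usable bin) is an illustrative assumption:

```python
import numpy as np

def n_sigma_threshold(spectrum, n=3.0):
    """Adaptive amplitude threshold: mean + n * std of the sensed power
    spectrum. Returns the threshold and a shaping mask where bins above
    the threshold (interfered) are zeroed and clean bins keep unit amplitude."""
    mu, sigma = spectrum.mean(), spectrum.std()
    thr = mu + n * sigma
    mask = (spectrum <= thr).astype(float)  # 1 = usable bin, 0 = interfered bin
    return thr, mask
```

Re-running this on each new sensing snapshot gives the adaptive behavior: the threshold tracks the changing electromagnetic environment automatically.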
Similar circular object recognition method based on local contour feature in natural scenario
BAN Xiaokun, HAN Jun, LU Dongming, WANG Wanguo, LIU Liang
Journal of Computer Applications    2016, 36 (5): 1399-1403.   DOI: 10.11772/j.issn.1001-9081.2016.05.1399
In natural scenes it is difficult to extract a complete object outline because of background texture, lighting and occlusion, so an object recognition method based on local contour features was proposed. The local contour features are chains of two adjacent straight or curved contour segments (2AS). First, the angle between adjacent segments, the segment lengths and the bending strength were analyzed, and a semantic model of the 2AS contour feature was defined. Then, based on the relative positions of an object's 2AS features, a 2AS mutual relation model was defined. Next, the 2AS semantic model of the object template was coarsely matched against the 2AS features of the test image, followed by accurate matching with the template's 2AS mutual relation model. Finally, the detected pairs of local contour features were repeatedly grouped, and the grouped objects were verified against the template's mutual relation model. Compared with a 2AS feature algorithm based on chains of similar straight lines, the proposed algorithm achieves higher accuracy and lower false positive and miss rates in recognizing grading rings, and can thus recognize them more effectively.
Micro-blog hot-spot topic discovery based on real-time word co-occurrence network
LI Yaxing, WANG Zhaokai, FENG Xupeng, LIU Lijun, HUANG Qingsong
Journal of Computer Applications    2016, 36 (5): 1302-1306.   DOI: 10.11772/j.issn.1001-9081.2016.05.1302
In view of the real-time, sparse and massive characteristics of micro-blogs, a topic discovery model based on a real-time word co-occurrence network was proposed. Firstly, the model extracted the keyword set from the raw data and calculated relationship weights incorporating a time parameter to build the word co-occurrence network. Sparsity was then reduced by finding strongly correlated latent features through a weight adjustment coefficient. Secondly, incremental topic clustering was achieved with an improved Single-Pass algorithm. Finally, the feature words of each topic were ranked by a heat calculation, yielding the most representative keywords of each topic. The experimental results show that the accuracy and comprehensive index of the proposed model increase by 6% and 8% respectively compared with the Single-Pass algorithm, proving the validity and accuracy of the proposed model.
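The baseline Single-Pass clustering that the model improves on can be sketched as follows: assign each document vector to the most similar existing cluster centroid, or open a new cluster when the best similarity falls below a threshold. This is the generic algorithm, not the paper's improved variant:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def single_pass(vectors, threshold):
    """Incremental Single-Pass clustering; returns a cluster label per vector."""
    centroids, labels, counts = [], [], []
    for v in vectors:
        if centroids:
            sims = [cosine(v, c) for c in centroids]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                counts[best] += 1
                # incremental running-mean update of the matched centroid
                centroids[best] = centroids[best] + (v - centroids[best]) / counts[best]
                labels.append(best)
                continue
        centroids.append(v.astype(float))   # open a new cluster
        counts.append(1)
        labels.append(len(centroids) - 1)
    return labels
```

Because each document is examined once against current centroids, the method suits streaming micro-blog data.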
Multipath error of deep coupling system based on integrity
LIU Linlin, GUO Chengjun, TIAN Zhong
Journal of Computer Applications    2016, 36 (3): 610-615.   DOI: 10.11772/j.issn.1001-9081.2016.03.610
Focusing on the elimination of multipath error in the Global Positioning System (GPS), a multipath error elimination method combining integrity monitoring with a deeply coupled structure was proposed. Firstly, GPS and the Strapdown Inertial Navigation System (SINS) were combined into a deeply coupled structure. Then the pseudorange residual and pseudorange rate residual output by the phase frequency detector were used as test statistics. Secondly, since both residuals follow a Gaussian distribution, their detection thresholds were calculated accordingly. Finally, the thresholds were used to screen the test statistics, and the corrected pseudorange and pseudorange rate residuals were fed into the Kalman filter. In simulation, compared with the same multipath elimination method without integrity monitoring, the latitude error decreased by about 40 m, the yaw angle error by about 4 degrees, and the north velocity error by about 2 m/s; compared with the traditional multipath elimination method using wavelet filtering, the height error decreased by about 40 m and the pitch angle error by about 5 degrees. The simulation results show that the integrity-based method can effectively eliminate the positioning error caused by multipath, as reflected in the position, attitude angle and velocity errors, and reduces it more effectively than the traditional filtering method.
Optimization algorithm based on R-λ model rate control in H.265/HEVC
LIAO Jundong, LIU Licheng, HAO Luguo, LIU Hui
Journal of Computer Applications    2016, 36 (11): 2993-2997.   DOI: 10.11772/j.issn.1001-9081.2016.11.2993
In order to improve the bit allocation of the Largest Coding Unit (LCU) and the update precision of the parameters (α, β) in the R-λ model based rate control algorithm of H.265/HEVC, an optimized rate control algorithm was proposed. Bit allocation was carried out on the existing basic coding units, and the parameters (α, β) were updated using the coding distortion. The experimental results show that, in the constant bit rate case and compared with the HM13.0 rate control algorithm, the PSNR of the three components gains at least 0.76 dB, the coded transmission bits are reduced by at least 0.46%, and the coding time is reduced by at least 0.54%.
Data discretization algorithm based on adaptive improved particle swarm optimization
DONG Yuehua, LIU Li
Journal of Computer Applications    2016, 36 (1): 188-193.   DOI: 10.11772/j.issn.1001-9081.2016.01.0188
Abstract517)      PDF (915KB)(397)       Save
Focusing on the issue that the classical rough set can only deal with discrete attributes, a discretization algorithm based on Adaptive Hybrid Particle Swarm Optimization (AHPSO) was proposed. Firstly, an adaptive adjustment strategy was introduced, which not only overcame the tendency of the particle swarm to fall into local extrema but also improved its global search ability. Secondly, Tabu Search (TS) was applied to the global optimal particle of each generation to obtain a better global optimum, which enhanced the local search ability of the particle swarm. Finally, the attribute discretization points were encoded into the particle swarm while keeping the classification ability of the decision table, and the optimal discretization points were sought through the interaction between particles. Using the J48 decision tree classifier on the WEKA (Waikato Environment for Knowledge Analysis) platform, the classification accuracy of the proposed algorithm improved by about 10% to 20% compared with the discretization algorithms based on attribute importance and information entropy, and by about 2% to 5% compared with the discretization algorithms based on Niche Discrete PSO (NDPSO) and linearly decreasing weight PSO. The experimental results show that the proposed algorithm significantly enhances the classification accuracy of the J48 decision tree and is effective for the discretization of continuous attributes.
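The adaptive-inertia idea can be sketched in a minimal continuous PSO (not the paper's AHPSO: the tabu-search step and the discretization-point encoding are omitted, and the average-fitness rule for switching the inertia weight is an assumption):

```python
import random

def adaptive_pso(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, seed=1):
    """Minimal PSO with an adaptive inertia weight: particles doing worse
    than the swarm's average personal best explore (large w), better ones
    exploit (small w)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        avg = sum(pbest_f) / n
        for i in range(n):
            w = 0.9 if pbest_f[i] > avg else 0.4   # adaptive inertia weight
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# e.g. minimising the sphere function in three dimensions
best, best_f = adaptive_pso(lambda x: sum(v * v for v in x), dim=3)
```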
Reference | Related Articles | Metrics
Broken strand and foreign body fault detection method for power transmission line based on unmanned aerial vehicle image
WANG Wanguo, ZHANG Jingjing, HAN Jun, LIU Liang, ZHU Mingwu
Journal of Computer Applications    2015, 35 (8): 2404-2408.   DOI: 10.11772/j.issn.1001-9081.2015.08.2404
Abstract920)      PDF (840KB)(896)       Save

In order to improve the efficiency of power transmission line inspection by Unmanned Aerial Vehicle (UAV), a method was proposed for detecting broken strands and foreign-body defects of transmission lines based on the perception of line structure. Since transmission line images acquired by UAV are easily affected by background texture and illumination, horizontal and vertical gradient operators capable of detecting line width were used to extract line objects from the inspection image. Gestalt cues of similarity, continuity and collinearity were then computed to connect intermittent wire segments into continuous wires, and parallel wire groups were further determined by calculating the parallel relationship between wires. To reduce the detection error rate, spacers and stockbridge dampers on the wires were recognized based on a local contour feature. Finally, the width change and gray similarity of the segmented conductor wires were calculated to detect broken strands and foreign-body defects. The experimental results show that the proposed method can efficiently detect broken wire strands and foreign-body defects in UAV transmission line images under complicated backgrounds.
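The width-aware gradient test for wire pixels might look roughly like this toy sketch on a synthetic image (the one-dimensional operator form, the width bound and the threshold are all assumptions, not the paper's operators):

```python
def detect_line_columns(img, max_width=3, thresh=50):
    """Find columns that look like a bright vertical wire: a horizontal
    rising edge followed by a falling edge no more than `max_width`
    pixels away, i.e. a gradient pair bounding the line width."""
    h, w = len(img), len(img[0])
    hits = set()
    for y in range(h):
        for x in range(1, w):
            g1 = img[y][x] - img[y][x - 1]        # rising edge
            if g1 > thresh:
                for d in range(1, max_width + 1):
                    if x + d >= w:
                        break
                    g2 = img[y][x + d] - img[y][x + d - 1]
                    if g2 < -thresh:              # matching falling edge
                        hits.add(x)
                        break
    return sorted(hits)

# synthetic 8x10 image: a 2-pixel-wide bright wire at columns 4-5
img = [[200 if 4 <= x <= 5 else 20 for x in range(10)] for _ in range(8)]
```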

Reference | Related Articles | Metrics
Intelligent environment measuring and controlling system of textile workshop based on Internet of things
LIU Xiangju, LI Jingzhao, LIU Lina
Journal of Computer Applications    2015, 35 (7): 2073-2076.   DOI: 10.11772/j.issn.1001-9081.2015.07.2073
Abstract639)      PDF (722KB)(702)       Save

To improve the workshop environment of textile mills and raise the level of automatic environment control, an intelligent environment measuring and controlling system for textile workshops based on the Internet of Things (IoT) was proposed, and its overall design scheme was given. To reduce the traffic load on sink nodes and improve the data transmission rate of the network, a wireless network topology with single-hop multi-sink nodes was designed. The concrete hardware design and software workflow of the sensing nodes, controlling nodes and other nodes were described in detail. An improved Newton interpolation algorithm was used as the fitting function to process the detection data, which improved the detection and control precision of the system. The application results show that the system is simple, stable and reliable, low in cost, easy to maintain and upgrade, and achieves good results in practice.
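Standard Newton divided-difference interpolation, the basis of the fitting step above, can be sketched as follows (the textbook form, not the paper's improved variant):

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients of Newton's interpolating polynomial,
    computed in place column by column."""
    n = len(xs)
    c = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton form with Horner's scheme."""
    y = c[-1]
    for i in range(len(c) - 2, -1, -1):
        y = y * (x - xs[i]) + c[i]
    return y
```

Interpolating samples of y = x² at x = 0..3, for example, reproduces 1.5² = 2.25 exactly, since the interpolant is exact for polynomials up to the sample count minus one.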

Reference | Related Articles | Metrics
Adaptive moving object extraction algorithm based on visual background extractor
LYU Jiaqing, LIU Licheng, HAO Luguo, ZHANG Wenzhong
Journal of Computer Applications    2015, 35 (7): 2029-2032.   DOI: 10.11772/j.issn.1001-9081.2015.07.2029
Abstract554)      PDF (628KB)(686)       Save

Foreground detection in complex scenes is a prerequisite for video analysis. In order to solve the problem of low accuracy in foreground moving object detection, an improved moving object extraction algorithm based on the Visual Background Extractor (ViBE), called ViBE+, was proposed. Firstly, in the model initialization stage, each background pixel was modeled by a collection of its diamond neighborhood to simplify the sample information. Secondly, in the moving object extraction stage, the segmentation threshold was obtained adaptively to extract moving objects in dynamic scenes. Finally, to handle sudden illumination changes, a method of background rebuilding and update-parameter adjustment was proposed for the background update process. The experimental results show that, compared with the Gaussian Mixture Model (GMM) algorithm, the Codebook algorithm and the original ViBE algorithm, the similarity metric of the improved algorithm on moving object extraction increases by 1.3 times, 1.9 times and 3.8 times respectively on the complex video scene LightSwitch. The proposed algorithm adapts better to complex scenes and outperforms the compared algorithms.
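The per-pixel ViBE test with an adaptive segmentation threshold might be sketched like this (the spread-based radius rule and the 1/16 update probability are assumptions standing in for the paper's adaptive scheme, and a single gray value replaces the diamond-neighborhood initialization):

```python
import random

class VibePixel:
    """Per-pixel ViBE model: N background samples; a new value is background
    if at least MIN_MATCHES samples lie within an adaptive radius."""
    N, MIN_MATCHES = 20, 2

    def __init__(self, value, rng):
        self.samples = [value] * self.N
        self.rng = rng

    def radius(self):
        # adaptive segmentation threshold derived from the sample spread
        spread = max(self.samples) - min(self.samples)
        return max(20, spread // 2)

    def classify_and_update(self, value):
        r = self.radius()
        matches = sum(1 for s in self.samples if abs(s - value) <= r)
        is_bg = matches >= self.MIN_MATCHES
        # conservative update: only background values may enter the model
        if is_bg and self.rng.random() < 1 / 16:
            self.samples[self.rng.randrange(self.N)] = value
        return is_bg
```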

Reference | Related Articles | Metrics
Classification method of text sentiment based on emotion role model
HU Yang, DAI Dan, LIU Li, FENG Xupeng, LIU Lijun, HUANG Qingsong
Journal of Computer Applications    2015, 35 (5): 1310-1313.   DOI: 10.11772/j.issn.1001-9081.2015.05.1310
Abstract552)      PDF (780KB)(845)       Save

In order to solve the misjudgment caused by emotions pointing to unknown targets or to missing hidden opinions in traditional sentiment classification methods, a text sentiment classification method based on an emotion role model was proposed. The method first identified the evaluation objects in the text and used a measure based on local semantic analysis to tag the emotion of sentences containing potential evaluation objects. It then distinguished the positive and negative polarity of the evaluation objects by defining their emotion roles, and integrated the tendency value of the emotion role into the feature space to improve the feature weight computation. Finally, a concept named "feature convergence" was proposed to reduce the dimension of the model. The experimental results show that, compared with approaches that simply pick strongly subjective emotional items as features, the proposed method improves the accuracy of text sentiment classification by 3.2%.

Reference | Related Articles | Metrics
Hybrid trajectory compression algorithm based on multiple spatiotemporal characteristics
WU Jiagao, QIAN Keyu, LIU Min, LIU Linfeng
Journal of Computer Applications    2015, 35 (5): 1209-1212.   DOI: 10.11772/j.issn.1001-9081.2015.05.1209
Abstract610)      PDF (593KB)(902)       Save

To reduce the storage space of trajectory data and speed up data analysis and transmission in the Global Positioning System (GPS), a hybrid trajectory compression algorithm based on multiple spatiotemporal characteristics was proposed. On the one hand, a new online trajectory compression strategy based on multiple spatiotemporal characteristics was adopted, which chooses characteristic points more accurately by using the position, direction and speed of each GPS point. On the other hand, a hybrid compression strategy combining online compression with batch compression was used, with the Douglas-Peucker batch algorithm performing the second compression pass. The experimental results show that the compression error of the new online strategy decreases significantly compared with the existing spatiotemporal compression algorithm, although the compression ratio falls slightly. By choosing an appropriate batching cycle time, both the compression ratio and the compression error of the algorithm improve over the existing spatiotemporal compression algorithm.
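The batch stage's Douglas-Peucker algorithm can be sketched as follows (a standard 2-D implementation; the online multi-characteristic stage is not shown):

```python
import math

def douglas_peucker(points, eps):
    """Batch trajectory compression: recursively keep the point that
    deviates most from the chord between the endpoints, if that deviation
    exceeds `eps`; otherwise keep only the endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], 1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm  # point-to-chord distance
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right  # drop the duplicated split point
```

A near-straight run collapses to its endpoints, while a sharp detour point survives compression.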

Reference | Related Articles | Metrics
Image mosaic approach of transmission tower based on saliency map
ZHANG Xu, GAO Jiao, WANG Wanguo, LIU Liang, ZHANG Jingjing
Journal of Computer Applications    2015, 35 (4): 1133-1136.   DOI: 10.11772/j.issn.1001-9081.2015.04.1133
Abstract593)      PDF (664KB)(676)       Save

Images of transmission towers acquired by Unmanned Aerial Vehicle (UAV) have high resolution and complex backgrounds, so traditional feature-point stitching algorithms detect a large number of background feature points, which costs much time and degrades matching accuracy. To solve this problem, a fast and robust image mosaic algorithm was proposed. To reduce the influence of the background, each image was first segmented into foreground and background based on a new implementation of salient region detection. To improve feature point extraction and reduce computational complexity, the transformation matrix was calculated and image registration was completed with ORB (Oriented FAST and Rotated BRIEF) features, where FAST stands for Features from Accelerated Segment Test and BRIEF for Binary Robust Independent Elementary Features. Finally, the image mosaic was realized with an image fusion method based on multi-scale analysis. The experimental results indicate that the proposed algorithm completes image mosaics precisely and quickly, with a satisfactory mosaic effect.
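ORB registration reduces to Hamming-distance matching of binary descriptors; a toy brute-force matcher might look like this (the mutual nearest-neighbour cross-check and the distance cutoff are common practice for 256-bit ORB descriptors, not necessarily this paper's settings):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(des1, des2, max_dist=64):
    """Brute-force matching with a mutual nearest-neighbour cross-check."""
    matches = []
    for i, d1 in enumerate(des1):
        j = min(range(len(des2)), key=lambda k: hamming(d1, des2[k]))
        # cross-check: i must also be the nearest neighbour of des2[j]
        back = min(range(len(des1)), key=lambda k: hamming(des2[j], des1[k]))
        if back == i and hamming(d1, des2[j]) <= max_dist:
            matches.append((i, j))
    return matches
```

The surviving correspondences would then feed a RANSAC-style homography estimate to obtain the transformation matrix.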

Reference | Related Articles | Metrics
Fine-grained sentiment analysis oriented to product comment
LIU Li, WANG Yongheng, WEI Hang
Journal of Computer Applications    2015, 35 (12): 3481-3486.   DOI: 10.11772/j.issn.1001-9081.2015.12.3481
Abstract907)      PDF (1058KB)(852)       Save
Traditional sentiment analysis is coarse-grained and ignores comment targets, while existing fine-grained sentiment analysis ignores multi-target and multi-opinion sentences. To solve these problems, a fine-grained sentiment analysis method based on Conditional Random Field (CRF) and syntax tree pruning was proposed. A parallel tri-training method based on MapReduce was used to label the corpus autonomously. A CRF model integrating various features was used to extract positive/negative opinions and the targets of opinions from comment sentences. To handle multi-target and multi-opinion sentences, syntax tree pruning was employed by building a domain ontology and a syntactic path library to eliminate irrelevant opinion targets and extract the correct appraisal expressions. Finally, a visual product attribute report was generated. After syntax tree pruning, the accuracy of the proposed method on sentiment elements and appraisal expressions reaches approximately 89%. The experimental results on two product domains, mobile phones and cameras, show that the proposed method outperforms traditional methods in both sentiment analysis accuracy and training performance.
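The syntactic-path pruning step might be sketched as follows (entirely hypothetical: the path-library contents, the dependency labels and the candidate-pair representation are illustrative stand-ins for the paper's syntactic path library):

```python
# hypothetical library of dependency-relation paths known to connect an
# opinion word to its true evaluation target
PATH_LIBRARY = {("amod",), ("nsubj",), ("nsubj", "acomp")}

def prune_pairs(candidates):
    """Keep a (target, opinion) pair only when its syntactic path is in the
    library; this discards opinions attached to irrelevant targets in
    multi-target, multi-opinion sentences."""
    return [(t, o) for t, o, path in candidates if tuple(path) in PATH_LIBRARY]

pairs = [
    ("screen",  "bright", ["amod"]),                   # direct modifier: keep
    ("battery", "bright", ["amod", "conj", "nsubj"]),  # unrelated target: prune
]
```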
Reference | Related Articles | Metrics
PM2.5 concentration prediction model of least squares support vector machine based on feature vector
LI Long, MA Lei, HE Jianfeng, SHAO Dangguo, YI Sanli, XIANG Yan, LIU Lifang
Journal of Computer Applications    2014, 34 (8): 2212-2216.   DOI: 10.11772/j.issn.1001-9081.2014.08.2212
Abstract498)      PDF (781KB)(1214)       Save

To solve the problem of Fine Particulate Matter (PM2.5) concentration prediction, a PM2.5 concentration prediction model was proposed. First, a comprehensive meteorological index was introduced to jointly account for wind, humidity and temperature; then the feature vector was constructed by combining it with the measured concentrations of SO2, NO2, CO and PM10; finally, a Least Squares Support Vector Machine (LS-SVM) prediction model was built from the feature vectors and PM2.5 concentration data. Experiments on 2013 data from the environmental monitoring centers of city A and city B show that the forecast accuracy improves after introducing the comprehensive meteorological index, with the error reduced by nearly 30%. The proposed model predicts PM2.5 concentration more accurately and generalizes well. Furthermore, the relationship between PM2.5 concentration and the hospitalization rate and hospital outpatient volume was analyzed, and a high correlation was found between them.
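A minimal LS-SVM regressor of the kind described can be sketched in pure Python (a sketch under assumptions: the RBF kernel width and regularization value are invented, and the one-dimensional inputs stand in for the paper's meteorological feature vectors):

```python
import math

def rbf(u, v, s=1.0):
    """Gaussian RBF kernel between two feature vectors."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(u, v)) / (2 * s * s))

def solve(a_mat, rhs):
    """Gauss-Jordan elimination with partial pivoting (pure stdlib)."""
    n = len(a_mat)
    m = [row[:] + [r] for row, r in zip(a_mat, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c and m[r][c]:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def lssvm_fit(xs, ys, gamma=10.0, s=1.0):
    """LS-SVM regression: solve the dual linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] and return the
    predictor f(x) = b + sum_i alpha_i * k(x, x_i)."""
    n = len(xs)
    a = [[0.0] + [1.0] * n]
    for i in range(n):
        row = [1.0] + [rbf(xs[i], xs[j], s) for j in range(n)]
        row[i + 1] += 1.0 / gamma          # ridge term I/gamma
        a.append(row)
    sol = solve(a, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(al * rbf(x, xi, s) for al, xi in zip(alpha, xs))
```

Unlike the classic SVM, every training point contributes a nonzero α here, which is why the fit reduces to one linear solve instead of quadratic programming.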

Reference | Related Articles | Metrics