Meta label correction method based on shallow network predictions
Yuxin HUANG, Yiwang HUANG, Hui HUANG
Journal of Computer Applications    2024, 44 (11): 3364-3370.   DOI: 10.11772/j.issn.1001-9081.2023111616

Aiming at the overfitting caused by the memorization behavior of Deep Neural Networks (DNNs) on image data with noisy labels, a meta label correction method based on shallow neural network predictions was proposed. In this method, a label reweighting network was set up under weakly supervised training to reweight noisy data, meta learning was employed to make the model learn from noisy data dynamically, and the prediction outputs of both the deep and shallow networks were used as pseudo labels to train the model. At the same time, a knowledge distillation algorithm was applied so that the deep network guided the training of the shallow networks. In this way, the memorization behavior of the model was alleviated effectively and its robustness was enhanced. Experiments conducted on the CIFAR10/100 and Clothing1M datasets demonstrate the superiority of the proposed method over the Meta Label Correction (MLC) method: on the CIFAR10 dataset with symmetric noise ratios of 60% and 80%, the accuracy improvements are 3.49 and 1.56 percentage points respectively. Furthermore, in ablation experiments on the CIFAR100 dataset with an asymmetric noise ratio of 40%, the proposed method achieves an accuracy improvement of at most 5.32 percentage points over models trained without predicted labels, confirming its feasibility and effectiveness.
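The two central ingredients, blending deep and shallow predictions into soft pseudo labels and a distillation term that lets the deep network guide the shallow one, can be sketched numerically. This is an illustrative NumPy sketch, not the paper's implementation; the blending weight `alpha` and temperature `T` are hypothetical parameters.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def pseudo_label(deep_logits, shallow_logits, alpha=0.5):
    """Blend deep and shallow predictions into one soft pseudo label."""
    return alpha * softmax(deep_logits) + (1 - alpha) * softmax(shallow_logits)

def distill_loss(deep_logits, shallow_logits, T=2.0):
    """KL(teacher || student): the deep net guides the shallow net."""
    t = softmax(deep_logits, T)
    s = softmax(shallow_logits, T)
    return float(np.sum(t * (np.log(t) - np.log(s))))
```

The pseudo label stays a valid distribution, and the distillation loss vanishes exactly when the shallow network already agrees with the deep one.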

Parallel decompression algorithm for high-speed train monitoring data
WANG Zhoukai, ZHANG Jiong, MA Weigang, WANG Huaijun
Journal of Computer Applications    2021, 41 (9): 2586-2593.   DOI: 10.11772/j.issn.1001-9081.2020111173
The real-time monitoring data generated by high-speed trains in operation are usually compressed with variable-length coding, which is convenient for transmission and storage. However, this complicates the internal structure of the compressed data, so decompression must follow the composition order of the compressed data, which is inefficient. To improve the decompression efficiency of high-speed train monitoring data, a parallel decompression algorithm based on speculation technology was proposed. Firstly, the structural characteristics of the monitoring data were studied, and the internal dependences that affect data division were analyzed. Secondly, speculation was used to eliminate these internal dependences, and the data were tentatively divided into different parts. Thirdly, the divided parts were decompressed in parallel in a distributed computing environment. Finally, the partial decompression results were combined. In this way, the decompression efficiency of high-speed train monitoring data was improved. Experimental results showed that, on a computing cluster composed of 7 computing nodes, the speedup of the proposed speculative parallel algorithm over the serial algorithm was about 3, showing good performance. This algorithm can thus improve monitoring data decompression efficiency significantly.
Nonlinear constraint based quasi-homography warps for image stitching
WANG Huai, WANG Zhanqing
Journal of Computer Applications    2021, 41 (8): 2318-2323.   DOI: 10.11772/j.issn.1001-9081.2020101637
In order to solve the longitudinal projection distortion in non-overlapping regions of images caused by the quasi-homography warp algorithm for image stitching, an image stitching algorithm based on a nonlinear constraint was proposed. Firstly, the nonlinear constraint was used to make the image regions around the dividing line transition smoothly. Then, the linear equation of the quasi-homography warp was replaced by a parabolic equation. Finally, a mesh-based method was used to speed up image texture mapping, and a method based on the optimal stitching line was used to fuse the images. For images of 1 200 pixel×1 600 pixel, the texture mapping time of the proposed algorithm ranges from 4 s to 7 s, and the average deviation degree of the diagonal structure is 11 to 31. Compared with the quasi-homography warp algorithm, the proposed algorithm reduces the texture mapping time by 55% to 67% and the average deviation degree of the diagonal structure by 36% to 62%. It can be seen that the proposed algorithm not only corrects the oblique diagonal structure but also improves the efficiency of image stitching. Experimental results show that the proposed algorithm achieves better visual effects in the stitched images.
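Replacing the linear extrapolation with a parabola under a nonlinear (smoothness) constraint amounts to matching both value and slope at the dividing line. A one-dimensional sketch for intuition only (the paper operates on 2D mesh warps; the curvature parameter `a` here is hypothetical):

```python
def smooth_extrapolation(f, df, x0, a):
    """Return a 1-D warp that follows f(x) up to the dividing line x0 and
    a parabola beyond it, with value and slope matched at x0 (C1-smooth)."""
    f0, d0 = f(x0), df(x0)
    def warp(x):
        if x <= x0:
            return f(x)
        dx = x - x0
        # parabola replaces the linear term while keeping C1 continuity
        return f0 + d0 * dx + a * dx * dx
    return warp
```

Because the constant and linear coefficients are taken from f at x0, there is no jump in position or slope at the dividing line; only the curvature term bends the extrapolation.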
Dynamic group based effective identity authentication and key agreement scheme in LTE-A networks
DU Xinyu, WANG Huaqun
Journal of Computer Applications    2021, 41 (6): 1715-1722.   DOI: 10.11772/j.issn.1001-9081.2020091428
As one of the communication modes in future mobile communications, Machine Type Communication (MTC) is an important communication method in the Internet of Things (IoT). When many MTC devices try to access the network at the same time, each device needs to perform independent identity authentication, which causes network congestion. To solve this problem and improve the security of key agreement for MTC devices, a dynamic group based effective identity authentication and key agreement scheme in Long Term Evolution-Advanced (LTE-A) networks was proposed. Based on symmetric bivariate polynomials, the proposed scheme is able to authenticate a large number of MTC devices at the same time and establish independent session keys between the devices and the network. The scheme supports multiple group authentications and provides updating of access policies. Compared with a scheme based on linear polynomials, bandwidth analysis shows that the transmission bandwidth consumption of the proposed scheme is optimized: the transmission bandwidth between the MTC devices in the Home Network (HN) and the Service Network (SN) is reduced by 132 bits for each group authentication, and the transmission bandwidth between the MTC devices within the HN is reduced by 18.2%. Security analysis and experimental results show that the proposed scheme is secure in actual identity authentication and session key establishment, and can effectively avoid signaling congestion in the network.
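The key property of a symmetric bivariate polynomial is that f(x, y) = f(y, x), so two parties who each hold a share f(id, y) and evaluate it at the other's identity derive the same session key. A minimal sketch under assumed toy parameters (small prime, 3x3 coefficient matrix); the paper's actual protocol, identities and message flow are not reproduced here:

```python
P = 2**31 - 1  # illustrative prime modulus

def poly_share(coeff, my_id):
    """Evaluate f(my_id, y) mod P. coeff[i][j] is a_ij with a_ij == a_ji
    (symmetric); f(x, y) = sum a_ij x^i y^j. The returned coefficient list
    of g(y) = f(my_id, y) is the share handed to one device."""
    n = len(coeff)
    return [sum(coeff[i][j] * pow(my_id, i, P) for i in range(n)) % P
            for j in range(n)]

def session_key(share, other_id):
    """Evaluate the share at the peer's identity: f(my_id, other_id)."""
    return sum(c * pow(other_id, j, P) for j, c in enumerate(share)) % P
```

Symmetry of the coefficient matrix guarantees both sides compute the identical key without ever transmitting it.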
Intrusion detection model based on combination of dilated convolution and gated recurrent unit
ZHANG Quanlong, WANG Huaibin
Journal of Computer Applications    2021, 41 (5): 1372-1377.   DOI: 10.11772/j.issn.1001-9081.2020071082
Intrusion detection models based on machine learning play a vital role in the security protection of network environments. Aiming at the problem that existing network intrusion detection models cannot fully learn the data features of network intrusions, deep learning theory was applied to intrusion detection and a deep network model with automatic feature extraction was proposed. In this model, dilated convolution was used to enlarge the receptive field and extract high-level features, the Gated Recurrent Unit (GRU) model was used to extract long-term dependencies between the retained features, and a Deep Neural Network (DNN) was then used to fully learn the data features. Compared with classical machine learning classifiers, this model has a higher detection rate. Experiments conducted on the well-known KDD CUP99, NSL-KDD and UNSW-NB15 datasets show that the model outperforms the other classifiers, with an accuracy of 99.78% on KDD CUP99, 99.53% on NSL-KDD, and 93.12% on UNSW-NB15.
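A dilated convolution enlarges the receptive field by inserting gaps between filter taps, without adding parameters. A minimal 1-D NumPy sketch (illustrative only, not the paper's network; filter values and dilation rate are arbitrary):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D convolution with gaps of size (dilation - 1) between
    filter taps; returns the outputs and the receptive-field span."""
    k = len(w)
    span = (k - 1) * dilation + 1          # receptive field of one output
    out = [sum(w[i] * x[t + i * dilation] for i in range(k))
           for t in range(len(x) - span + 1)]
    return np.array(out), span
```

With a 3-tap filter and dilation 2, each output already sees 5 input positions; stacking layers with growing dilation rates is what lets such models capture wide context cheaply.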
Microservice identification method based on class dependencies under resource constraints
SHAO Jianwei, LIU Qiqun, WANG Huanqiang, CHEN Yaowang, YU Dongjin, SALAMAT Boranbaev
Journal of Computer Applications    2020, 40 (12): 3604-3611.   DOI: 10.11772/j.issn.1001-9081.2020040495
To effectively improve the automation level of reconstructing legacy software systems into the microservice architecture, and based on the principle that the resource data operated on by two classes with dependencies are correlated, a microservice identification method based on class dependencies under resource constraints was proposed. Firstly, a class dependency graph was built from the class dependencies in the legacy software, and a resource entity label was set for each class. Then, a dividing algorithm based on the resource entity labels was designed for the class dependency graph, and used to divide the original software system into candidate microservices. Finally, candidate microservices with high mutual dependency degrees were combined to obtain the final microservice set. Experimental results on four open source projects from GitHub demonstrate that the proposed method achieves a microservice division accuracy higher than 90%, which proves that it is reasonable and effective to identify microservices by considering both class dependencies and resource constraints.
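The division and merging steps, grouping classes by resource entity label into candidate services and then merging candidates with many cross dependencies, can be sketched as follows. The class names, labels and merge threshold are hypothetical; the paper's actual dividing algorithm is more elaborate:

```python
from collections import defaultdict

def identify_microservices(deps, labels, merge_threshold=2):
    """deps: iterable of (class_a, class_b) dependency edges;
    labels: class -> resource entity label. Classes sharing a label form a
    candidate service; candidates joined by at least merge_threshold
    cross edges are merged into one microservice."""
    groups = defaultdict(set)
    for cls, lab in labels.items():
        groups[lab].add(cls)
    cross = defaultdict(int)                 # edges between candidates
    for a, b in deps:
        if labels[a] != labels[b]:
            cross[frozenset((labels[a], labels[b]))] += 1
    parent = {lab: lab for lab in groups}    # union-find over candidates
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for pair, n in cross.items():
        if n >= merge_threshold:
            a, b = tuple(pair)
            parent[find(a)] = find(b)
    merged = defaultdict(set)
    for lab, classes in groups.items():
        merged[find(lab)] |= classes
    return sorted(map(sorted, merged.values()))
```

Here the 'order' and 'user' candidates merge because two dependency edges join them, while a single edge to 'pay' is below the threshold.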
Modeling of dyeing vat scheduling and slide time window scheduling heuristic algorithm
WEI Qianqian, DONG Xingye, WANG Huanzheng
Journal of Computer Applications    2020, 40 (1): 292-298.   DOI: 10.11772/j.issn.1001-9081.2019060981
Considering the characteristics of the dyeing vat scheduling problem, such as complex constraints, large task scales and demanding efficiency requirements, an incremental dyeing vat scheduling model was established and a Slide Time Window Scheduling heuristic (STWS) algorithm was proposed to improve the applicability of the model and algorithm in real scenarios. To meet the optimization objectives of minimizing the delay cost, the washing cost and the vat switching cost, heuristic scheduling rules were applied to schedule the products in priority order. For each product, a dynamic batch combination algorithm and a batch splitting algorithm were used to divide batches, and then a batch optimal sorting algorithm was used to schedule the batches. Simulated scheduling on actual production data provided by a dyeing enterprise shows that the algorithm can complete scheduling of a monthly plan within 10 s. Compared with manual scheduling, the proposed algorithm improves scheduling efficiency and significantly optimizes all three objectives. Additionally, experiments on incremental scheduling show that the algorithm clearly reduces the washing cost and the vat switching cost. All the results indicate that the proposed algorithm has excellent scheduling ability.
Measurement of spatial straightness of train axle
WANG Hua, HOU Daishuang, ZHANG Shuang, GAO Jingang
Journal of Computer Applications    2019, 39 (10): 2960-2965.   DOI: 10.11772/j.issn.1001-9081.2019020318
In order to measure the spatial straightness of a train axle accurately and quickly, a measurement system of train axle spatial straightness was constructed, and algorithms for spatial circle fitting, spatial straight line fitting and straightness measurement were studied. Firstly, a spatial circle fitting algorithm based on the tangency of a spatial plane and a spatial sphere was introduced according to the characteristics of the measured object, and the RANdom SAmple Consensus (RANSAC) algorithm was used to iteratively find the point set that best fits the model. Then, based on the circle center coordinates obtained from spatial circle fitting of the axle sections, the wolf colony algorithm was used to fit the spatial straight line of the train axle. Finally, the wolf colony algorithm was used to measure the spatial straightness of the axle, and the measured data were compared with those of a laser tracker. Experimental results show that the accuracy of measuring the spatial straightness of the train axle based on the wolf colony algorithm is 0.01 mm, which meets the requirements of high accuracy, high stability and repeatability in measuring the spatial straightness of train axles.
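The RANSAC step, repeatedly fitting a circle to a random 3-point sample and keeping the model with the most inliers, can be sketched in 2D. This planar version with a hypothetical inlier tolerance is for illustration only; the paper fits spatial circles on axle cross-sections:

```python
import math
import random

def circle_from_3pts(p1, p2, p3):
    """Circumscribed circle of three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

def ransac_circle(pts, iters=200, tol=0.05, seed=0):
    """Keep the sampled circle that explains the most points within tol."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        model = circle_from_3pts(*rng.sample(pts, 3))
        if model is None:
            continue
        (cx, cy), r = model
        inliers = [p for p in pts
                   if abs(math.hypot(p[0] - cx, p[1] - cy) - r) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers
```

Even with several gross outliers, the consensus model recovers the underlying circle, which is exactly why RANSAC suits noisy section scans.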
Fast indoor positioning algorithm of airport terminal based on spectral regression kernel discriminant analysis
DING Jianli, MU Tao, WANG Huaichao
Journal of Computer Applications    2019, 39 (1): 256-261.   DOI: 10.11772/j.issn.1001-9081.2018051074
Aiming at the characteristics of airport terminals, such as large passenger flow and complex, variable indoor environments, an indoor positioning algorithm based on Spectral Regression Kernel Discriminant Analysis (SRKDA) was proposed. In the offline phase, Received Signal Strength (RSS) data of known locations were collected, and the nonlinear features of the Original Location Fingerprint (OLF) were extracted by the SRKDA algorithm to generate a new feature fingerprint database. In the online phase, SRKDA was first used to process the RSS data of the point to be positioned, and then the Weighted K-Nearest Neighbor (WKNN) algorithm was used to estimate the position. In positioning simulation experiments, in terms of the Cumulative Distribution Function (CDF), the positioning accuracies of the proposed algorithm within a 1.5 m error are 91.2% and 88.25% in two different localization scenarios, which are 16.7 and 18.64 percentage points higher than those of the Kernel Principal Component Analysis (KPCA)+WKNN model, and 3.5 and 9.07 percentage points higher than those of the KDA+WKNN model. With a large number of offline samples (more than 1 100), the data processing time of the proposed algorithm is much shorter than that of KPCA and KDA. The experimental results show that the proposed algorithm can effectively improve indoor positioning accuracy, save data processing time and enhance positioning efficiency.
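The online-phase WKNN estimate, an inverse-distance-weighted average of the positions of the k fingerprints nearest in signal space, can be sketched as follows. The SRKDA feature transform that precedes it is omitted, and the fingerprint values below are invented for illustration:

```python
import math

def wknn_locate(fingerprints, rss, k=3, eps=1e-6):
    """fingerprints: list of ((x, y), rss_vector); rss: observed vector.
    Returns the inverse-distance-weighted mean position of the k
    fingerprints closest to the observation in signal space."""
    scored = sorted(fingerprints, key=lambda f: math.dist(f[1], rss))[:k]
    weights = [1.0 / (math.dist(v, rss) + eps) for _, v in scored]
    s = sum(weights)
    x = sum(w * p[0] for w, (p, _) in zip(weights, scored)) / s
    y = sum(w * p[1] for w, (p, _) in zip(weights, scored)) / s
    return x, y
```

An observation that exactly matches one stored fingerprint is pulled almost entirely to that fingerprint's position, since its weight dominates.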
Feature selection model for harmfulness prediction of clone code
WANG Huan, ZHANG Liping, YAN Sheng, LIU Dongsheng
Journal of Computer Applications    2017, 37 (4): 1135-1142.   DOI: 10.11772/j.issn.1001-9081.2017.04.1135
To solve the problem of irrelevant and redundant features in the harmfulness prediction of clone code, a combination model for harmfulness feature selection of clone code based on relevance and influence was proposed. Firstly, a preliminary ranking of the correlation of the feature data was performed using the information gain ratio; features with high correlation were preserved and irrelevant features were removed to reduce the feature search space. Next, the optimal feature subset was determined by a wrapper sequential floating forward selection algorithm combined with six classifiers, including Naive Bayes. Finally, the different feature selection methods were analyzed, and the feature data were analyzed, filtered and optimized by exploiting the advantages of the various methods under different selection criteria. Experimental results show that the prediction accuracy is increased by 15.2 to 34 percentage points after feature selection; compared with other feature selection methods, the F1-measure of this method is increased by 1.1 to 10.1 percentage points, and the AUC measure by 0.7 to 22.1 percentage points. As a result, this method can greatly improve the accuracy of the harmfulness prediction model.
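The first stage, ranking features by information gain ratio, can be sketched for discrete features (a C4.5-style formulation; the paper's exact discretization and thresholds are not specified here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(feature, labels):
    """Information gain of a discrete feature, normalised by the feature's
    own split entropy (gain ratio as used for correlation ranking)."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    split = entropy(feature)
    return (entropy(labels) - cond) / split if split else 0.0
```

A feature that determines the class perfectly scores 1, while one independent of the class scores 0, which is the ordering the preliminary filter relies on.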
Survey on construction of measurement matrices in compressive sensing
WANG Qiang, ZHANG Peilin, WANG Huaiguang, YANG Wangcan, CHEN Yanlong
Journal of Computer Applications    2017, 37 (1): 188-196.   DOI: 10.11772/j.issn.1001-9081.2017.01.0188
The construction of measurement matrices in compressive sensing varies widely and is constantly developing. To sort out the research results and identify the development trend of measurement matrices, the process of measurement matrix construction was introduced systematically. Firstly, by comparison with traditional signal acquisition theory, the advantages of compressive sensing, namely high resource utilization and small storage space, were expounded. Secondly, on the basis of the compressive sensing framework, the construction of measurement matrices was summarized from four aspects: the construction principle, the generation method, the structural design and the optimization method; the advantages of different principles, generation methods and structures were introduced in detail. Finally, based on these research results, the development directions of measurement matrices were discussed.
Improved adaptive collaborative filtering algorithm to change of user interest
HU Weijian, TENG Fei, LI Lingfang, WANG Huan
Journal of Computer Applications    2016, 36 (8): 2087-2091.   DOI: 10.11772/j.issn.1001-9081.2016.08.2087
As a recommendation algorithm widely used in industry, collaborative filtering can predict likely favorite items based on a user's historical behavior records. However, traditional collaborative filtering algorithms do not take the drifting of user interests into account, and also have deficiencies in recommendation timeliness. To solve these problems, the similarity measure was improved by incorporating the characteristic that user interests change over time. At the same time, an enhanced time attenuation model was introduced to measure the predictive value. By combining these two aspects, the concept drifting problem of user interests was addressed and the timeliness of the recommendation algorithm was also taken into account. In simulation experiments, the predictive scoring accuracy and Top-N recommendation accuracy were compared among the proposed algorithm and the UserCF, TCNCF, PTCF and TimeSVD++ algorithms on different datasets. The experimental results show that the improved algorithm reduces the Root Mean Square Error (RMSE) of the predicted score and outperforms all the compared algorithms in Top-N recommendation accuracy.
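A time attenuation model can be illustrated by weighting each rating with an exponential decay of its age, so that recent behaviour dominates the predicted value. This is a generic sketch with a hypothetical half-life, not the paper's exact attenuation function:

```python
import math

def decayed_score(ratings, now, half_life=30.0):
    """ratings: iterable of (timestamp, rating). Each rating is weighted
    by exp(-ln2 * age / half_life), i.e. its weight halves every
    half_life time units; the result is the weighted mean rating."""
    num = den = 0.0
    for t, r in ratings:
        w = math.exp(-math.log(2) * (now - t) / half_life)
        num += w * r
        den += w
    return num / den
```

With one old low rating and one fresh high rating, the decayed estimate sits close to the recent value instead of the plain mean, which is the intended drift-tracking behaviour.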
Clone group mapping method based on improved vector space model
CHEN Zhuo, ZHANG Liping, WANG Huan, ZHANG Jiujie, WANG Chunhui
Journal of Computer Applications    2016, 36 (7): 2031-2037.   DOI: 10.11772/j.issn.1001-9081.2016.07.2031
Focusing on the small number and low efficiency of mapping methods for Type-3 clone code, a mapping method based on an improved Vector Space Model (VSM) was proposed. The improved VSM was introduced into clone code analysis to obtain an effective clone group mapping method for Type-1, Type-2 and Type-3 clones. Firstly, the clone group documents were pretreated to obtain code documents with useless words removed, and features such as the file names and function names of each clone group document were extracted. Secondly, the word frequency vector space of each clone group was built, and the similarity of clone groups was calculated with the cosine measure. Then, the clone group mapping was constructed from the clone group similarities and feature matching, and the clone group mapping results were obtained. Five pieces of open source software were tested and verified by experiments. The proposed method guarantees a recall and precision of not less than 96.1% and 97.1% respectively at low time consumption. The experimental results show that the proposed method is feasible and provides data support for the analysis of software evolution.
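The core mapping step, term-frequency vectors plus cosine similarity between clone-group documents of adjacent versions, can be sketched as follows. The feature matching on file and function names is omitted, and the similarity threshold is hypothetical:

```python
import math
from collections import Counter

def cosine(doc_a, doc_b):
    """Cosine similarity of two token lists under a term-frequency VSM."""
    va, vb = Counter(doc_a), Counter(doc_b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_clone_groups(old_groups, new_groups, threshold=0.7):
    """Map each clone group of version n to its most similar group of
    version n+1 when the similarity passes the threshold."""
    mapping = {}
    for i, g in enumerate(old_groups):
        j, s = max(((j, cosine(g, h)) for j, h in enumerate(new_groups)),
                   key=lambda x: x[1])
        if s >= threshold:
            mapping[i] = j
    return mapping
```

Groups that merely gain or lose a few tokens between versions still map correctly, while unrelated groups fall below the threshold and stay unmapped.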
Solution for classification imbalance in harmfulness prediction of clone code
WANG Huan, ZHANG Liping, YAN Sheng
Journal of Computer Applications    2016, 36 (12): 3468-3475.   DOI: 10.11772/j.issn.1001-9081.2016.12.3468
Focusing on the imbalance between harmful and harmless samples in the harmfulness prediction of clone code, a K-Balance algorithm based on Random Under-Sampling (RUS) was proposed, which can adjust the classification imbalance automatically. Firstly, a sample data set was constructed by extracting static features and evolution features of clone code. Then, new data sets with different classification imbalance ratios were selected. Next, harmfulness prediction was carried out on each newly selected data set. Finally, the most suitable classification imbalance percentage was chosen automatically by observing the different performances of the classifier. The performance of the harmfulness prediction model of clone code was evaluated on seven different types of open-source software systems, containing 170 versions, written in the C language. Compared with other solutions to classification imbalance, the experimental results show that the proposed method improves the classification prediction effect for harmful and harmless clones, in terms of Area Under the Receiver Operating Characteristic (ROC) Curve (AUC), by 2.62 to 36.7 percentage points. The proposed method can improve imbalanced classification prediction effectively.
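The RUS step underlying K-Balance, keeping only a chosen fraction of the majority class, can be sketched as follows. The automatic selection of the best imbalance percentage by comparing classifier performance across ratios is omitted:

```python
import random

def under_sample(samples, labels, majority, keep_ratio, seed=0):
    """Randomly keep a keep_ratio fraction of the majority class and all
    minority samples, producing a less imbalanced data set."""
    rng = random.Random(seed)
    maj = [i for i, y in enumerate(labels) if y == majority]
    keep = set(rng.sample(maj, int(len(maj) * keep_ratio)))
    idx = [i for i, y in enumerate(labels) if y != majority or i in keep]
    return [samples[i] for i in idx], [labels[i] for i in idx]
```

K-Balance would repeat this for several keep ratios and retain the one whose classifier scores best.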
Harmfulness prediction of clone code based on Bayesian network
ZHANG Liping, ZHANG Ruixia, WANG Huan, YAN Sheng
Journal of Computer Applications    2016, 36 (1): 260-265.   DOI: 10.11772/j.issn.1001-9081.2016.01.0260
During software development, programmer activities such as copy and paste result in many code clones, and inconsistent changes to cloned code are often harmful to programs. To solve this problem and find harmful code clones effectively, a method was proposed to predict harmful code clones by using a Bayesian network. First, referring to related research on software defect prediction and clone evolution, two kinds of software metrics, static metrics and evolution metrics, were proposed to characterize the features of clone code. Then, the prediction model was constructed by using the core algorithm of Bayesian networks. Finally, the probability of harmful code clones occurring was predicted. Five different types of open-source software systems, containing 99 versions, written in the C language were tested to evaluate the prediction model. The experimental results show that the proposed method can predict the harmfulness of clones with good applicability and high accuracy, reducing the threat of harmful code clones while improving software quality.
Support vector machine combined model forecast based on ensemble empirical mode decomposition-principal component analysis
SANG Xiuli, XIAO Qingtai, WANG Hua, HAN Jiguang
Journal of Computer Applications    2015, 35 (3): 766-769.   DOI: 10.11772/j.issn.1001-9081.2015.03.766

To solve the problems of feature extraction and state prediction of intermittent non-stationary time series in the industrial field, a new prediction approach based on Ensemble Empirical Mode Decomposition (EEMD), Principal Component Analysis (PCA) and Support Vector Machine (SVM) was proposed. Firstly, the intermittent non-stationary time series was analyzed on multiple time scales and decomposed by the EEMD algorithm into a set of Intrinsic Mode Function (IMF) components of different scales. Then, the noise energy was estimated on the basis of the 3-sigma principle to determine the cumulative contribution rate adaptively, and the PCA algorithm was used to reduce the feature dimension and redundancy and to remove the noise in the IMFs. Finally, after determining the key parameters of the SVM, the principal components were used as input variables for prediction. Instance testing on the output power time series of a wind farm shows that the Mean Absolute Error (MAE), Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE) and Mean Squared Percentage Error (MSPE) are 514.774, 78.216, 12.03% and 1.862% respectively. It is concluded that the SVM prediction achieves higher accuracy with PCA than without, because the frequency mixing phenomenon is suppressed, the non-stationarity is reduced and the noise is further eliminated by the EEMD and PCA algorithms.
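The PCA stage, keeping the fewest principal components whose cumulative variance contribution reaches an adaptively determined target, can be sketched with NumPy. The EEMD decomposition and the 3-sigma noise-energy estimate are omitted, and the 0.95 target below is hypothetical:

```python
import numpy as np

def pca_by_contribution(X, target=0.95):
    """Project X (n_samples x n_features) onto the fewest principal
    components whose cumulative variance contribution reaches target."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / np.sum(s**2)                       # contribution rates
    k = int(np.searchsorted(np.cumsum(var), target) + 1)
    return Xc @ Vt[:k].T, k
```

On data dominated by one direction, a single component already clears the target, so the noisy low-variance directions are dropped.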

JavaScript code protection method based on temporal diversity
FANG Dingyi, DANG Shufan, WANG Huaijun, DONG Hao, ZHANG Fan
Journal of Computer Applications    2015, 35 (1): 72-76.   DOI: 10.11772/j.issn.1001-9081.2015.01.0072

Web applications face the malicious host problem just as native applications do, and ensuring the security of a Web application's core algorithms and main business processes on the browser side has become a serious problem. To address the low effectiveness of existing JavaScript code protection methods against dynamic analysis and cumulative attacks, a JavaScript code Protection method based on Temporal Diversity (TDJSP) was proposed. To resist cumulative attacks, the method first makes the JavaScript program diversify at runtime by building the program's diversity set and obfuscating its branch space. It then detects features of abnormal execution environments, such as debuggers and emulators, to increase the difficulty of dynamic analysis. Theoretical analyses and experimental results show that the method improves the ability of JavaScript programs to resist reverse analysis, with a space growth rate of 3.1 (better than that of JScrambler3) and a delay at the millisecond level. Hence, the proposed method can protect Web applications effectively without much overhead.

Mining multiple sequential patterns with gap constraints
WANG Huadong, YANG Jie, LI Yajuan
Journal of Computer Applications    2014, 34 (9): 2612-2616.   DOI: 10.11772/j.issn.1001-9081.2014.09.2612

For given multiple sequences, a threshold and gap constraints, the objective is to discover frequent patterns whose supports in the multiple sequences are no less than the given threshold, where any two successive elements of a pattern fulfill the user-specified gap constraints and any two occurrences of a pattern in a given sequence meet the one-off condition. Existing algorithms consider only the first occurrence of each character of a pattern when computing the support of the pattern in a given sequence, so many frequent patterns are not mined. An efficient mining algorithm for multiple sequential patterns with gap constraints, named MMSP, was proposed. Firstly, it stores the candidate positions of a pattern in a two-dimensional table; then it selects positions from the candidates according to a left-most strategy. Experiments were conducted on DNA sequences. The number of frequent patterns mined by MMSP was 3.23 times that mined by the related algorithm M-OneOffMine when the number of sequences was constant and the sequence length changed, and 4.11 times on average when the number of sequences changed. The average number of patterns mined by MMSP was 2.21 and 5.24 times those mined by M-OneOffMine and MPP respectively when the number of sequences changed, and the frequent patterns mined by M-OneOffMine were a subset of those mined by MMSP. The experimental results show that MMSP mines more frequent patterns in less time and is more suitable for practical applications.
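The matching problem can be made concrete: count one-off occurrences of a pattern in one sequence, where adjacent pattern characters are separated by a gap of min_gap to max_gap characters and candidate positions are tried left-most first. A compact sketch (illustrative; MMSP's two-dimensional candidate-position table is replaced here by direct recursion):

```python
def one_off_support(seq, pattern, min_gap, max_gap):
    """Count one-off (position-non-reusing) occurrences of pattern in seq,
    where consecutive pattern characters are separated by min_gap..max_gap
    characters, taking candidate positions left-most first."""
    used = set()
    count = 0
    def extend(pos, k):          # place pattern[k:] after position pos
        if k == len(pattern):
            return []
        lo, hi = pos + 1 + min_gap, pos + 1 + max_gap
        for q in range(lo, min(hi, len(seq) - 1) + 1):
            if q not in used and seq[q] == pattern[k]:
                rest = extend(q, k + 1)
                if rest is not None:
                    return [q] + rest
        return None
    for p, c in enumerate(seq):
        if p in used or c != pattern[0]:
            continue
        rest = extend(p, 1)
        if rest is not None:
            used.update([p] + rest)
            count += 1
    return count
```

The one-off condition is enforced by the `used` set: once a position participates in an occurrence it cannot be reused, and the left-most strategy tries the earliest admissible positions first.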

Visual localization for mobile robots in complex urban scene using building features and 2D map
LI Haifeng, WANG Huaiqiang
Journal of Computer Applications    2014, 34 (9): 2557-2561.   DOI: 10.11772/j.issn.1001-9081.2014.09.2557

For the localization problem in urban areas, where the Global Positioning System (GPS) cannot provide accurate locations because its signal is easily blocked by high-rise buildings, a visual localization method based on vertical building facades and a 2D building boundary map was proposed. Firstly, the vertical line features across two views, captured with an onboard camera, were matched into pairs. Then, the vertical building facades were reconstructed from the matched vertical line pairs. Finally, a visual localization method utilizing the reconstructed vertical building facades and the 2D building boundary map was designed under the RANdom SAmple Consensus (RANSAC) framework. The proposed localization method works in real complex urban scenes. The experimental results show that the average localization error is around 3.6 m, which effectively improves the accuracy and robustness of mobile robot self-localization in urban environments.

High-dimensional data visualization based on random forest
LYV Bing, WANG Huazhen
Journal of Computer Applications    2014, 34 (6): 1613-1617.   DOI: 10.11772/j.issn.1001-9081.2014.06.1613

Currently, high-dimensional data mining methods are mostly based on mathematical theory rather than visual intuition. To facilitate the visual analysis and evaluation of high-dimensional data, Random Forest (RF) was introduced to visualize high-dimensional data. Firstly, RF applied supervised learning to obtain a proximity measure from the source data, and principal coordinate analysis was used for dimension reduction, transforming the high-dimensional data relationships into a low-dimensional space. Then scatter plots were used to visualize the data in the low-dimensional space. Experimental results on high-dimensional gene datasets show that supervised dimension reduction based on RF clearly illustrates the discrimination of the class distributions and outperforms traditional unsupervised dimension reduction.

Washout control and stability analysis for cyclic traffic flows
XUE Peng, REN Pengfei, WANG Hua
Journal of Computer Applications    2014, 34 (2): 597-600.  
Concerning traffic jams caused by poor speed control in cyclic traffic flows, a suppression method based on Washout control was proposed. With an optimal velocity function, a nonlinear dynamic mathematical model for cyclic traffic flow was derived. To reduce the unreasonable dependence on drivers' sensitivities as the main parameters, Washout control was employed to stabilize the system at its equilibrium. The parameter ranges that keep the system stable were also derived using the small gain theorem. A cyclic traffic flow with 20 cars was simulated, and the simulation results show that the traffic flow reaches its equilibrium within 100 seconds with the proposed controller.
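The cyclic (ring-road) traffic model can be sketched with the standard optimal velocity function V(h) = tanh(h − 2) + tanh(2), a common textbook choice; the paper's exact function and its Washout controller are not reproduced here, and the sensitivity value below is made up.

```python
import math

def simulate_ring(n=20, length=40.0, a=2.5, dt=0.05, steps=4000):
    """Euler simulation of the optimal velocity model on a circular road:
    each car accelerates toward the optimal velocity for its headway."""
    def V(h):                                 # optimal velocity function
        return math.tanh(h - 2.0) + math.tanh(2.0)
    # Uniform spacing, with car 0 nudged forward to perturb the flow.
    x = [length / n * i + (0.5 if i == 0 else 0.0) for i in range(n)]
    v = [V(length / n)] * n
    for _ in range(steps):
        h = [(x[(i + 1) % n] - x[i]) % length for i in range(n)]  # headways
        acc = [a * (V(h[i]) - v[i]) for i in range(n)]
        x = [(x[i] + v[i] * dt) % length for i in range(n)]
        v = [v[i] + acc[i] * dt for i in range(n)]
    return v

v = simulate_ring()
spread = max(v) - min(v)   # near zero once the flow settles at equilibrium
```

With sensitivity a above the linear stability threshold (a > 2V'(h*) for this model), the perturbation decays and all velocities converge to V(2) ≈ 0.964.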
Related Articles | Metrics
Algorithm for generating weighted Voronoi diagram based on quadtree structure
LI Rui LI Jia-tian WANG Hua PU Hai-xia HE Yu-feng
Journal of Computer Applications    2012, 32 (11): 3078-3081.   DOI: 10.3724/SP.J.1087.2012.03078
Abstract887)      PDF (642KB)(426)       Save
Considering the limitations of research on ordinary Voronoi diagrams and the low efficiency of existing methods for building weighted Voronoi diagrams, a method based on a quadtree structure for generating weighted Voronoi diagrams was proposed. The key idea of this method was to obtain the correlated seed regions to be searched for non-expansion nodes through the quadtree structure, to calculate a time-consumption value in place of the weighted distance, and to assign each node to a seed according to the node's shortest time-consumption value. The computing model based on the quadtree structure and several basic characteristics of the method were given. The test results show that the seeds are dilated rapidly and the time complexity is effectively lower than that of a uniform grid structure. The algorithm is simple, highly operable, and of practical value.
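The "time consumption in place of weighted distance" idea can be sketched on a uniform grid by brute force, without the paper's quadtree acceleration (seed positions and weights below are made up): a seed with weight w reaches a cell at distance d in time d/w, and each cell belongs to the fastest seed.

```python
import math

def weighted_voronoi(width, height, seeds):
    """Label each grid cell with the seed that reaches it first, where the
    'time consumption' is distance divided by the seed's weight."""
    labels = [[0] * width for _ in range(height)]
    for yc in range(height):
        for xc in range(width):
            times = [math.hypot(xc - sx, yc - sy) / w
                     for (sx, sy, w) in seeds]
            labels[yc][xc] = times.index(min(times))
    return labels

# Two seeds; the right-hand seed is twice as fast, so it claims more cells.
seeds = [(2, 5, 1.0), (7, 5, 2.0)]
labels = weighted_voronoi(10, 10, seeds)
share = sum(row.count(1) for row in labels) / 100.0
```

This brute-force version is O(cells × seeds); the quadtree structure in the paper avoids scanning every cell against every seed.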
Reference | Related Articles | Metrics
Descriptive query method based on unstructured text data in GIS
PU Hai-xia LI Jia-tian LI Rui HE Yu-feng WANG Hua
Journal of Computer Applications    2012, 32 (09): 2483-2487.   DOI: 10.3724/SP.J.1087.2012.02483
Abstract1264)      PDF (759KB)(540)       Save
Since the structured or semi-structured attribute queries of traditional Geographic Information Systems (GIS) impose limitations on the input accuracy and scope of query sentences, a GIS descriptive query method for unstructured text data was proposed based on the expanded version of TongYiCi CiLin, a Chinese synonym dictionary compiled by Harbin Institute of Technology. The basic process is to calculate the correlation between a descriptive query sentence and the text attached to each geographic element, and then to rank the query results accordingly. The comparison experiments show that the descriptive query method not only supports diverse input query sentences, but also effectively retrieves the geographic elements related to the input descriptive query.
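The correlation step can be sketched with a toy synonym table standing in for TongYiCi CiLin (all words, elements, and the scoring rule below are hypothetical): query and text words are mapped to canonical synonym classes before overlap is measured, so "stream" matches "river".

```python
# Toy synonym classes standing in for a real synonym dictionary.
SYNONYMS = {
    "river": "waterway", "stream": "waterway", "waterway": "waterway",
    "hill": "mountain", "peak": "mountain", "mountain": "mountain",
}

def canon(word):
    return SYNONYMS.get(word, word)

def correlation(query, text):
    """Fraction of query words that match the text up to synonymy."""
    q = {canon(w) for w in query.lower().split()}
    t = {canon(w) for w in text.lower().split()}
    return len(q & t) / len(q) if q else 0.0

# Text attached to geographic elements (made-up examples).
elements = {
    "Lancang": "a long river crossing the city",
    "Xishan": "a scenic hill west of the lake",
}
query = "stream near the city"
ranked = sorted(elements, key=lambda k: correlation(query, elements[k]),
                reverse=True)
```

The element whose attached text correlates best with the descriptive sentence is returned first.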
Reference | Related Articles | Metrics
Real-time scan conversion for ultrasound based on CUDA
WANG Wei-min WANG He-chuang WANG Hua-jun
Journal of Computer Applications    2011, 31 (10): 2760-2763.   DOI: 10.3724/SP.J.1087.2011.02760
Abstract1106)      PDF (802KB)(604)       Save
Scan conversion is one of the most important and widely used technologies in medical ultrasound imaging. Unfortunately, the traditional scan conversion algorithm requires intensive computation, which becomes one of the performance bottlenecks of an ultrasound system. To overcome this shortcoming, three parallel real-time ultrasound scan conversion algorithms based on Compute Unified Device Architecture (CUDA) were proposed. By assigning the best thread structure, rationally arranging data transfers between the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU), and dividing the computing tasks, the throughput of the algorithm was increased and the real-time requirement was met. Finally, the three real-time scan conversion algorithms on CUDA were compared with the traditional method. A frame rate of more than 746 fps was achieved with a picture size of 3121×936, which is about 300 times faster than the CPU implementation.
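The per-pixel work that the CUDA kernels parallelize — mapping each Cartesian pixel back to a fractional (beam, depth) position in the polar frame and interpolating — can be sketched in NumPy (a generic polar-to-Cartesian sketch with an assumed sector geometry, not the paper's kernels).

```python
import numpy as np

def scan_convert(polar, n_out=64, fov=np.pi / 2):
    """Map a polar ultrasound frame (rows = beams spanning `fov`,
    cols = depth samples) onto a Cartesian grid, with bilinear
    interpolation between the four surrounding polar samples."""
    n_beams, n_depth = polar.shape
    out = np.zeros((n_out, n_out))
    ys, xs = np.mgrid[0:n_out, 0:n_out]
    x = (xs / (n_out - 1) - 0.5) * 2          # lateral position in [-1, 1]
    y = ys / (n_out - 1)                      # depth in [0, 1]
    r = np.hypot(x, y)
    th = np.arctan2(x, y)                     # angle from the probe axis
    bi = (th / fov + 0.5) * (n_beams - 1)     # fractional beam index
    ri = r * (n_depth - 1)                    # fractional range index
    valid = (bi >= 0) & (bi <= n_beams - 1) & (ri <= n_depth - 1)
    b0 = np.clip(np.floor(bi), 0, n_beams - 2).astype(int)
    r0 = np.clip(np.floor(ri), 0, n_depth - 2).astype(int)
    fb, fr = bi - b0, ri - r0
    interp = ((1 - fb) * (1 - fr) * polar[b0, r0]
              + fb * (1 - fr) * polar[b0 + 1, r0]
              + (1 - fb) * fr * polar[b0, r0 + 1]
              + fb * fr * polar[b0 + 1, r0 + 1])
    out[valid] = interp[valid]
    return out

img = scan_convert(np.ones((32, 128)))
```

Each output pixel is computed independently, which is exactly why the algorithm maps well onto one GPU thread per pixel.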
Related Articles | Metrics
Efficient data collection algorithm in sensor networks with optimal-path mobile sink
LI Bin LIN Ya-ping ZHOU Si-wang HUANG Cen-xi LUO Qing
Journal of Computer Applications    2011, 31 (10): 2625-2629.   DOI: 10.3724/SP.J.1087.2011.02625
Abstract1331)      PDF (917KB)(630)       Save
A mobile sink can efficiently collect data and extend the network lifetime. However, existing research on data collection based on mobile sinks mainly focuses on path-constrained mobile sinks. Hence, a path-controlled traversal model for mobile sink data collection was constructed, and a data collection algorithm for a mobile sink based on optimal-path traveling was proposed. The algorithm discretized the continuous path problem with a local Voronoi grid, used the amount of data collected and the system energy consumption as performance metrics, and combined a tabu search algorithm to maximize the amount of data collected while minimizing the network energy consumption of the traversal. Theoretical analysis and experiments show that the proposed algorithm can solve the optimal-path traveling problem of data collection with a path-controlled mobile sink.
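The tabu search component can be sketched on a toy version of the traversal problem — choosing the visiting order of the sink's stop points that minimizes tour length (the data-amount and energy terms of the paper's objective are omitted): the best non-tabu 2-swap is taken each step, and recently used swaps are forbidden for a few iterations to escape local optima.

```python
import math
import random

def tour_len(order, pts):
    """Length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def tabu_search(pts, iters=50, tenure=3, seed=1):
    """Toy tabu search over visiting orders with a 2-swap neighborhood."""
    rng = random.Random(seed)
    cur = list(range(len(pts)))
    rng.shuffle(cur)
    best, best_len = cur[:], tour_len(cur, pts)
    tabu = {}                                  # swap -> step until tabu
    for step in range(iters):
        cand = None
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if tabu.get((i, j), -1) >= step:
                    continue                   # move is currently tabu
                nb = cur[:]
                nb[i], nb[j] = nb[j], nb[i]
                l = tour_len(nb, pts)
                if cand is None or l < cand[0]:
                    cand = (l, i, j, nb)
        l, i, j, nb = cand                     # best admissible neighbor
        cur = nb
        tabu[(i, j)] = step + tenure
        if l < best_len:
            best, best_len = nb[:], l
    return best, best_len

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]   # optimum: the unit square, length 4
best, length = tabu_search(pts)
```

In the paper's setting, the candidate stop points come from the local Voronoi grid discretization rather than being given directly.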
Related Articles | Metrics
Tree-ART2 model for clustering spatial data in two-dimensional space
YU Li LI Jia-tian LI Jia DUAN Ping WANG Hua
Journal of Computer Applications    2011, 31 (05): 1328-1330.   DOI: 10.3724/SP.J.1087.2011.01328
Abstract1561)      PDF (470KB)(849)       Save
Adaptive Resonance Theory 2 (ART2) is one of the well-known clustering algorithms and has been applied to many fields in practice. However, as a clustering algorithm for two-dimensional spatial data, it not only suffers from pattern drift and information loss in the vector model, but is also difficult to adapt to spatial data clustering with irregular distributions. A Tree-ART2 (TART2) network model was proposed. It retained the memory of the old model and maintained the spatial distance constraint by learning and adjusting the Long-Term Memory (LTM) pattern and the amplitude information of vectors. Meanwhile, introducing a tree structure into the model reduced the subjective requirement on the vigilance parameter and decreased the occurrence of pattern mixing. The comparative experimental results show that the TART2 network is suitable for clustering ribbon-distributed spatial data, and it has higher plasticity and adaptability.
Related Articles | Metrics
Application of improved cerebellar model articulation controller in forest fire recognition
WANG Hua-qiu LIU Ke
Journal of Computer Applications    2011, 31 (03): 860-864.   DOI: 10.3724/SP.J.1087.2011.00860
Abstract1153)      PDF (794KB)(1034)       Save
Concerning the defects of traditional fire recognition, a forest fire recognition system based on a Cerebellar Model Articulation Controller (CMAC) network with a variable-step Least Mean Square (LMS) algorithm using the hyperbolic secant function was presented. Forest fires were preliminarily identified by analyzing some initial static and dynamic characteristics. Then, on the basis of image segmentation using an optimal threshold search method, the corresponding eigenvectors were extracted as the input of the improved CMAC network to detect and identify forest fires. The simulation results show that the improved method can identify flames accurately and efficiently.
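The variable-step idea can be sketched on a plain adaptive filter, using one common sech-based rule, μ = β(1 − sech(αe)): the step grows with the error magnitude and shrinks toward zero as the error vanishes. The paper's exact step rule and the CMAC network itself are not reproduced; all constants below are made up.

```python
import math
import random

def sech(x):
    return 1.0 / math.cosh(x)

def vslms_identify(h_true, n_iter=3000, alpha=5.0, beta=0.5, seed=0):
    """Variable-step LMS identification of an FIR system h_true:
    large errors get a large step, small errors a vanishing one."""
    rng = random.Random(seed)
    w = [0.0] * len(h_true)                    # adaptive weights
    x = [0.0] * len(h_true)                    # input delay line
    for _ in range(n_iter):
        x = [rng.uniform(-1, 1)] + x[:-1]      # shift in a new sample
        d = sum(h * xi for h, xi in zip(h_true, x))   # desired output
        e = d - sum(wi * xi for wi, xi in zip(w, x))  # prediction error
        mu = beta * (1.0 - sech(alpha * e))    # sech-based variable step
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

w = vslms_identify([0.7, -0.3])
err = max(abs(a - b) for a, b in zip(w, [0.7, -0.3]))
```

The weights converge to the true system, with the step size automatically annealing as the error shrinks.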
Related Articles | Metrics
Research and implementation of trace capture technique based on aspect-oriented programming
ZHANG Zhu-Xi WANG Huai-Min
Journal of Computer Applications   
Abstract1588)      PDF (802KB)(958)       Save
Because traditional software development methods do not provide a mechanism that separates the trace capture concern from other business concerns, the implementation code of all the concerns is seriously tangled. To solve this problem, Aspect-Oriented Programming (AOP) was applied to the research of software trace capture, and a trace capture technique was studied that can weave the monitoring requirement into the system without changing the source code. This technique can effectively improve the modularity of software. Based on it, a monitoring tool named Software Runtime Tracer (SRT) was implemented, which can be used to analyze system behavior, find program bugs, and enhance the trustworthiness of software.
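In Python, the weaving idea can be illustrated with a decorator that wraps trace capture around a function without editing its body — an analog of AOP advice, not the SRT implementation:

```python
import functools

TRACE = []   # captured trace events, kept apart from business logic

def traced(fn):
    """Cross-cutting trace-capture 'aspect': records entry and exit
    events around any function it is applied to."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        TRACE.append(("enter", fn.__name__, args))
        result = fn(*args, **kwargs)
        TRACE.append(("exit", fn.__name__, result))
        return result
    return wrapper

@traced
def add(a, b):          # the business code stays untouched
    return a + b

value = add(2, 3)
```

The business function carries no tracing code of its own; removing the decorator removes the concern, which is the modularity benefit the abstract describes.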
Related Articles | Metrics
Study on transaction scheduling for real-time database systems
CHEN Chuan-bo,WANG Hua
Journal of Computer Applications    2005, 25 (09): 2004-2006.   DOI: 10.3724/SP.J.1087.2005.02004
Abstract910)      PDF (160KB)(1079)       Save
Generally, only the transaction deadline is considered in real-time transaction scheduling, but in many cases the scheduling method is directly related to the data deadline. This paper analyzed in detail whether different scheduling strategies could take the data deadline into account, and thus induced the existing rules. Based on the overall run-time estimation of transactions, a transaction scheduling strategy aimed at both the data deadline and the transaction deadline was offered.
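The idea of scheduling by the tighter of the data deadline and the transaction deadline can be sketched as an earliest-effective-deadline ordering (the field names and values below are made up, not the paper's strategy):

```python
def effective_deadline(txn):
    """A transaction must finish both by its own deadline and before any
    data item it reads expires, so schedule by the tighter of the two."""
    return min([txn["deadline"]] + txn["data_deadlines"])

def schedule(transactions):
    """Order transactions by earliest effective deadline."""
    return sorted(transactions, key=effective_deadline)

txns = [
    {"id": "T1", "deadline": 50, "data_deadlines": [40]},
    {"id": "T2", "deadline": 30, "data_deadlines": [90]},
    {"id": "T3", "deadline": 60, "data_deadlines": [20]},
]
order = [t["id"] for t in schedule(txns)]
```

T3 runs first even though its transaction deadline is the latest, because the data it reads expire at time 20.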
Related Articles | Metrics
Design and implementation of user permission management component system based on software reuse
SUI Hong-wei, WANG Hua-yu, LIU Hong, WANG Rui-xia
Journal of Computer Applications    2005, 25 (05): 1166-1169.   DOI: 10.3724/SP.J.1087.2005.1166
Abstract1001)      PDF (204KB)(854)       Save
Concerning the trivial design and the complexity of user permission management systems, a user permission management component model was put forward, and a user permission management component generating system was developed. This system implemented the management and reuse of user permission components, and has been applied to the development of concrete information systems.
Related Articles | Metrics