Table of Contents
10 October 2015, Volume 35 Issue 10
MTRF: a topic model with spatial information
PAN Zhiyong, LIU Yang, LIU Guojun, GUO Maozu, LI Pan
2015, 35(10): 2715-2720. DOI: 10.11772/j.issn.1001-9081.2015.10.2715
To overcome the limitations of the word-independence and topic-independence assumptions in topic models, a topic model that incorporates the spatial relationships of visual words, namely Markov Topic Random Field (MTRF), was proposed. In addition, it was discussed that the "topic" of a topic model represents a part of an object in image processing. Neighboring visual words are generated from the same topic with high probability, and whether the visual words were generated from the same topic determined whether the topic was drawn from a Markov Random Field (MRF) or from the multinomial distribution of the topic model. Both theoretical analysis and experimental results show that the "topics" of a topic model appear as mid-level features that represent the parts of objects rather than instances of objects. In image classification experiments, the average accuracy of MTRF was 3.91% higher than that of Latent Dirichlet Allocation (LDA) on the Caltech101 dataset, and the mean Average Precision (mAP) of MTRF was 2.03% higher than that of LDA on the VOC2007 dataset. Furthermore, MTRF assigned topics to visual words more accurately and obtained mid-level features that represent the parts of objects more effectively than LDA. The experimental results show that MTRF makes effective use of spatial information and improves the accuracy of the model.
Simple multi-label ranking for Chinese microblog sentiment classification
SHI Shaoliang, WEN Yimin, MIAO Yuqing
2015, 35(10): 2721-2726. DOI: 10.11772/j.issn.1001-9081.2015.10.2721
To handle the specific case in which each sample carries at most two emotion labels in the emotion classification of Chinese microblog text, a simple multi-label ranking algorithm named TSMLR was proposed. The algorithm employs a strategy of two-stage learning and two-stage classification, classifying and ranking the emotion labels of each microblog text by learning the relations between labels. It first transforms the emotion classification problem into eight single-label classification problems: one model is trained for the dominant emotion and seven models for the secondary emotion. At prediction time it first classifies the dominant emotion label and then dispatches to the corresponding model for the secondary emotion label. Experiments were conducted on the Chinese Weibo Texts dataset provided by NLP&CC2014. The results show that, compared with Calibrated Label Ranking (CLR), the proposed method improves accuracy and average precision by 8.59% and 9.28% respectively and decreases one-error by 9.77%; in addition, its running time is lower than those of the two baseline methods. These results illustrate that the proposed algorithm can effectively learn the label order and produce more accurate emotion classification for Chinese microblogs.
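The two-stage strategy lends itself to a compact sketch. The following Python code (not the authors' implementation) illustrates one way to wire it up with linear SVMs: one model for the dominant emotion, plus one secondary-emotion model per dominant label. The label set and classifier choice are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC

EMOTIONS = ["like", "happiness", "sadness", "anger",
            "fear", "surprise", "disgust", "none"]   # assumed label set

class TwoStageEmotionRanker:
    """Stage 1 predicts the dominant emotion; stage 2 dispatches to the
    secondary-emotion model that matches the predicted dominant label."""
    def fit(self, X, y_dom, y_sec):
        X, y_dom, y_sec = map(np.asarray, (X, y_dom, y_sec))
        self.dom_clf = LinearSVC().fit(X, y_dom)
        self.sec_clfs = {}
        for e in EMOTIONS:
            mask = y_dom == e
            if mask.sum() > 1 and len(set(y_sec[mask])) > 1:
                self.sec_clfs[e] = LinearSVC().fit(X[mask], y_sec[mask])
        return self

    def predict(self, X):
        X = np.asarray(X)
        dom = self.dom_clf.predict(X)
        return [(d, self.sec_clfs[d].predict(x.reshape(1, -1))[0]
                 if d in self.sec_clfs else "none")
                for d, x in zip(dom, X)]
```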
Analysis on distinguishing product reviews based on top-k emerging patterns
LIU Lu, WANG Yining, DUAN Lei, NUMMENMAA Jyrki, YAN Li, TANG Changjie
2015, 35(10): 2727-2732. DOI: 10.11772/j.issn.1001-9081.2015.10.2727
With the development of e-commerce, online shopping Web sites provide reviews to help customers make the best choices. However, the number of reviews is huge, and their content is typically redundant and non-standard, so it is difficult for users to go through all reviews in a short time and find the distinguishing characteristics of a product. To resolve this problem, a method to mine top-k emerging patterns was proposed and applied to mining the reviews of different products. Based on the proposed method, a prototype called ReviewScope was designed and implemented. ReviewScope can find significant comments on certain goods as a decision basis and provide visualized results. A case study on a real-world dataset from JD.com demonstrates that ReviewScope is effective, flexible and user-friendly.
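As a rough illustration of the underlying idea (not ReviewScope itself), the sketch below scores candidate word patterns by their growth rate, the support in one product's reviews divided by the support in another's, which is the standard definition of an emerging pattern; the candidate enumeration is deliberately truncated to single words and word pairs.

```python
from itertools import combinations

def support(pattern, reviews):
    """Fraction of reviews (as word sets) containing every word in pattern."""
    return sum(1 for r in reviews if pattern <= r) / len(reviews)

def top_k_emerging(reviews_a, reviews_b, k=10, min_sup=0.05):
    """Patterns frequent for product A but rare for product B."""
    words = sorted({w for r in reviews_a for w in r})
    candidates = [frozenset([w]) for w in words]
    candidates += [frozenset(p) for p in combinations(words, 2)]
    scored = []
    for p in candidates:
        sa, sb = support(p, reviews_a), support(p, reviews_b)
        if sa >= min_sup:                      # growth rate = support ratio
            scored.append((sa / sb if sb else float("inf"), p))
    scored.sort(key=lambda gp: gp[0], reverse=True)
    return scored[:k]
```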
Heuristic algorithms for paper index ranking based on sparse matrix
WAN Xiaosong, WANG Zhihai, YUAN Jidong
2015, 35(10): 2733-2736. DOI: 10.11772/j.issn.1001-9081.2015.10.2733
In order to enhance the accuracy of retrieving academic papers and thereby facilitate academic research, a series of ranking strategies for the academic paper retrieval problem were proposed. Firstly, heuristic methods for paper index ranking based on the page ranking algorithm were described, which take advantage of a hash indexing technique to effectively reduce the memory consumption of sparse matrix computation. Secondly, the intensive equilibrium value of citation relationships among papers was defined, and the correlation between the number of iterations of different ranking algorithms and the intensive equilibrium value was clarified through extensive experiments. Finally, the proposed heuristic algorithms for paper index ranking were tested on the SCI index database and compared with the classical ranking by descending citation count. The experimental results show that, among the three proposed page-ranking-based algorithms, the stochastic process approach with link-structure analysis is the most suitable for ranking papers retrieved by keyword search within a given field.
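A minimal sketch of the core idea follows, assuming the memory saving comes from keeping only nonzero citation links in a hash map (a Python dict) rather than a dense matrix; this is generic PageRank over a citation graph, not the paper's exact heuristics.

```python
def rank_papers(citations, d=0.85, iters=50):
    """citations: dict mapping each paper to the list of papers it cites."""
    papers = set(citations) | {q for v in citations.values() for q in v}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        nxt = {p: (1.0 - d) / n for p in papers}
        for src in papers:
            outs = citations.get(src, [])
            if outs:                      # spread rank along citation links
                share = d * rank[src] / len(outs)
                for dst in outs:
                    nxt[dst] += share
            else:                         # dangling paper: spread uniformly
                for p in papers:
                    nxt[p] += d * rank[src] / n
        rank = nxt
    return sorted(rank.items(), key=lambda kv: -kv[1])
```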
Constructing high-level architecture of online social network through community detection
QIU Dehong, XU Fangxiang, LI Yuan
2015, 35(10): 2737-2741. DOI: 10.11772/j.issn.1001-9081.2015.10.2737
The online social network poses severe challenges because of its large size and complex structure, so it is meaningful to construct a concise high-level architecture of it, composed of the communities, the hub nodes and the relationships between them. The original online social network was represented by a new representation named the quantitative attribute graph, and a new method was proposed to construct the concise high-level architecture. The communities were detected by using the attributes of nodes and edges in combination, the hub nodes were then identified based on the found communities, and the relationships between the communities and the hub nodes were reproduced. The new method was used to construct the concise high-level architecture of a large online social network extracted from a practical business Bulletin Board System (BBS). The experimental results show that the proposed method performs well when the relationship strength and the community size are set to 0.5 and 3 respectively.
Overlapping community discovery method based on symmetric nonnegative matrix factorization
HU Liying, GUO Gongde, MA Changfeng
2015, 35(10): 2742-2746. DOI: 10.11772/j.issn.1001-9081.2015.10.2742
In view of the important nodes (including overlapping nodes, central nodes and outlier nodes) in overlapping communities and the problem of discovering the inherent overlapping community structure, a new symmetric nonnegative matrix factorization algorithm was proposed. First, the sum of the approximation error and an asymmetric penalty term was used as the objective function. Then the algorithm was derived by using gradient-based update rules under the nonnegativity constraints. Simulation experiments were carried out on five real networks. The results show that the proposed algorithm can find the important nodes of the actual networks and their inherent community structures. The average conductance and execution time of its community discovery results are better than those of Community Detection with Nonnegative Matrix Factorization (CDNMF); the weighted harmonic mean of precision and recall shows that the proposed method is more suitable for large databases.
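For orientation, a numpy sketch of generic symmetric NMF is given below: it minimizes ||A - HH^T||_F^2 under H >= 0 with a damped multiplicative update and reads overlapping memberships off the rows of H. The paper's asymmetric penalty term and derivation details are not reproduced here.

```python
import numpy as np

def symmetric_nmf(A, k, iters=200, beta=0.5, eps=1e-9, seed=0):
    """A: symmetric adjacency matrix (n x n); returns an n x k membership H."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        num = A @ H                          # numerator of the update ratio
        den = H @ (H.T @ H) + eps            # denominator, kept positive
        H *= (1 - beta) + beta * num / den   # damped multiplicative update
    return H

# Overlap falls out naturally: a node belongs to every community whose
# membership weight in its row of H exceeds a chosen threshold.
```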
Analysis of public emotion evolution based on probabilistic latent semantic analysis
LIN Jianghao, ZHOU Yongmei, YANG Aimin, CHEN Yuhong, CHEN Xiaofan
2015, 35(10): 2747-2751. DOI: 10.11772/j.issn.1001-9081.2015.10.2747
Concerning the problem of topic mining and the corresponding public emotion analysis, an analytical method for public emotion evolution was proposed based on the Probabilistic Latent Semantic Analysis (PLSA) model. To find the evolutionary patterns of topics, the method starts by extracting subtopics over a time series using the PLSA model. Then, emotion feature vectors, represented by emotion units and weights matched with the topic context, were established via parsing and an ontology lexicon. Next, the strength of public emotion was computed in a fine-grained dimension together with the holistic public emotion of the issue. In this way, the method mines the evolutionary patterns of public emotion, which were finally quantified and visualized. The advantage of the method lies in introducing grammatical rules and an ontology lexicon when extracting emotion units, which is done in a fine-grained dimension to improve extraction accuracy. The experimental results show that the method performs well in the evolutionary analysis of topics and public emotion over a time series, which proves its positive effect.
New discriminative feature selection method
WU Jinhua, ZUO Kaizhong, JIE Biao, DING Xintao
2015, 35(10): 2752-2756. DOI: 10.11772/j.issn.1001-9081.2015.10.2752
As a common method for data preprocessing, feature selection can not only improve classification performance but also increase the interpretability of classification results. In sparse-learning-based feature selection methods, some useful discriminative information is ignored, which may affect the final classification performance. To address this problem, a new discriminative feature selection method called Discriminative Least Absolute Shrinkage and Selection Operator (D-LASSO) was proposed to choose the most discriminative features. In detail, the proposed D-LASSO method first contains an L1-norm regularization term used to produce sparse solutions. Secondly, in order to induce the most discriminative features, a new discriminative regularization term was introduced to embed the geometric distribution information of samples with the same class label and samples with different class labels. Finally, comparison experiments on a series of benchmark datasets show that the proposed D-LASSO method can not only improve classification accuracy but is also robust against parameters.
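A sketch of how such an objective can be optimized follows, assuming (this is a guess at the form, not the paper's exact formulation) a loss ||y - Xw||^2 + lam1*||w||_1 + lam2 * w^T M w, where M encodes within-class compactness minus between-class scatter; ISTA handles the non-smooth L1 part.

```python
import numpy as np

def d_lasso(X, y, M, lam1=0.1, lam2=0.1, iters=500):
    """Hypothetical D-LASSO-style solver; M is an assumed discriminative
    matrix (e.g., within-class minus between-class graph Laplacian)."""
    n, d = X.shape
    w = np.zeros(d)
    # Lipschitz constant of the gradient of the smooth part
    L = 2 * (np.linalg.norm(X, 2) ** 2 + lam2 * np.linalg.norm(M, 2))
    for _ in range(iters):
        grad = 2 * X.T @ (X @ w - y) + 2 * lam2 * (M @ w)
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0)  # soft-threshold
    return w  # nonzero entries index the selected features
```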
Extreme learning machine based on conjugate gradient
ZHANG Peizhou, WANG Xizhao, GU Di, ZHAO Shixin
2015, 35(10): 2757-2760. DOI: 10.11772/j.issn.1001-9081.2015.10.2757
Extreme Learning Machine (ELM) has been widely used in many applications due to its fast convergence and good generalization performance. However, training slows down or even fails when the number of training samples reaches a certain scale. The conjugate gradient algorithm was therefore introduced into the ELM model in place of the generalized inverse. The experimental results show that, at the same generalization accuracy, conjugate-gradient-based ELM trains faster than ELM with matrix inversion, because it does not need to compute the generalized inverse of the hidden-layer output matrix, whereas most generalized inverse calculations depend on Singular Value Decomposition (SVD), which is inefficient for high-order matrices. Since the conjugate gradient algorithm provably finds the solution in a finite number of iterations, the conjugate-gradient-based ELM algorithm has a faster training speed and is also suitable for processing big data.
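The replacement of the generalized inverse is easy to sketch: build the random hidden layer as usual, then solve the regularized normal equations H^T H beta = H^T T with conjugate gradient, which never forms a pseudo-inverse or an SVD. The ridge term and tanh activation below are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from scipy.sparse.linalg import cg

def elm_cg_train(X, T, n_hidden=100, reg=1e-3, seed=0):
    """X: (n_samples, n_features); T: one-hot targets (n_samples, n_classes)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer outputs
    A = H.T @ H + reg * np.eye(n_hidden)              # SPD, so CG applies
    beta = np.column_stack([cg(A, H.T @ t)[0] for t in T.T])
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```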
Multi-label K nearest neighbor algorithm by exploiting label correlation
TAN Hefeng, LIU Zhengyi
2015, 35(10): 2761-2765. DOI: 10.11772/j.issn.1001-9081.2015.10.2761
Since the Multi-Label K Nearest Neighbor (ML-KNN) classification algorithm ignores the correlation between labels, a multi-label classification algorithm exploiting label correlation, named CML-KNN, was proposed. Firstly, the conditional probability between each pair of labels was calculated. Secondly, the conditional probabilities between the already-predicted labels and the label to be predicted were ranked and the maximum was obtained. Finally, a new classification model combining Maximum A Posteriori (MAP) with the product of this maximum and its corresponding label value was proposed, and the new label value was predicted. The experimental results show that CML-KNN outperforms the other four algorithms, namely ML-KNN, AdaboostMH, RAkEL and BPMLL, on the Emotions dataset, while only two evaluation metric values are lower than those of ML-KNN and RAkEL on the Yeast and Enron datasets. The experimental analyses show that CML-KNN obtains better classification results.
Enhanced differential evolution algorithm with non-prior-knowledge DFP local search under Memetic framework
MA Zhenyuan, YE Shujin, LIN Zhiyong, LIANG Yubin, HUANG Han
2015, 35(10): 2766-2770. DOI: 10.11772/j.issn.1001-9081.2015.10.2766
In order to improve the performance of the Differential Evolution (DE) algorithm and extend its adaptability for solving continuous optimization problems, an enhanced DE algorithm was proposed by using efficient local search under the Memetic framework. Specifically, based on the Davidon-Fletcher-Powell (DFP) method, an improved local search method named NDFP was put forward, which speeds up finding locally optimal solutions around the excellent individuals explored by the DE algorithm. Furthermore, a strategy on when and how to run the NDFP local search was given, so as to strike a good balance between global search (DE) and local search (NDFP); this strategy also enhances the adaptability of the NDFP local search within the DE algorithm. To verify the efficiency of the proposed algorithm, extensive simulation experiments were conducted on 53 test functions from the CEC2005 and CEC2013 benchmarks. The experimental results show that, compared with the DE/current-to-best/1, SaDE and EPSDE algorithms, the proposed algorithm achieves better performance in terms of both precision and stability.
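The Memetic pattern itself is compact: classic DE/rand/1/bin exploration plus an occasional quasi-Newton polish of the current best. SciPy ships no DFP optimizer, so the sketch below substitutes BFGS (the same quasi-Newton family as DFP); the trigger schedule `ls_every` is a made-up stand-in for the paper's when-to-run strategy.

```python
import numpy as np
from scipy.optimize import minimize

def memetic_de(f, bounds, pop=30, gens=200, F=0.5, CR=0.9, ls_every=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop, len(lo)))
    fit = np.array([f(x) for x in X])
    for g in range(gens):
        for i in range(pop):                      # DE/rand/1/bin step
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True   # keep >= 1 mutant gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fit[i]:
                X[i], fit[i] = trial, ft
        if (g + 1) % ls_every == 0:               # periodic local refinement
            best = fit.argmin()
            res = minimize(f, X[best], method="BFGS")
            if res.fun < fit[best]:
                X[best], fit[best] = np.clip(res.x, lo, hi), res.fun
    best = fit.argmin()
    return X[best], fit[best]
```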
Conditional information entropy and attribute reduction based on boundary region
HUANG Guoshun, WEN Han
2015, 35(10): 2771-2776. DOI: 10.11772/j.issn.1001-9081.2015.10.2771
To establish the relationship between the conditional information entropy defined on the boundary region and attribute reduction, it was proved that the conditional information entropy defined on the universe of discourse equals the one defined on the boundary region; this means a characterization of information entropy reduction can be obtained from the conditional information entropy defined on the boundary region. Its properties were discussed via strictly convex functions and the Jensen inequality. A sufficient and necessary condition for keeping the conditional information entropy defined on the boundary region unchanged was presented. A sufficient and necessary condition for characterizing positive region reduction by the conditional information entropy defined on the boundary region was also given, yielding a judgment approach for positive region reduction from the viewpoint of conditional information entropy on the boundary region; this generalizes the analogous method for consistent decision information systems. Finally, a numerical example was designed to show how to use the conditional information entropy defined on the boundary region to compute positive region or conditional information entropy reductions.
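For reference, the conventional rough-set form of the conditional information entropy discussed above is shown below, in standard notation with U/C = {X_1, ..., X_m} and U/D = {Y_1, ..., Y_n}; the paper's result restricts the outer sum to the classes lying in the boundary region.

```latex
% Conditional information entropy of decision D given condition attributes C
H(D \mid C) = -\sum_{i=1}^{m} \frac{|X_i|}{|U|}
              \sum_{j=1}^{n} \frac{|X_i \cap Y_j|}{|X_i|}
              \log \frac{|X_i \cap Y_j|}{|X_i|}
```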
Ant colony optimization algorithm based on Spark
WANG Zhaoyuan, WANG Hongjie, XING Huanlai, LI Tianrui
2015, 35(10): 2777-2780. DOI: 10.11772/j.issn.1001-9081.2015.10.2777
To deal with combinatorial optimization problems in the era of big data, a parallel Ant Colony Optimization (ACO) algorithm based on Spark, a distributed in-memory computing framework, was presented. To parallelize the solution construction phase of ACO, the ant colony was encapsulated as a resilient distributed dataset and the corresponding transformation operators were given. Simulation results on the Traveling Salesman Problem (TSP) prove the feasibility of the proposed parallel algorithm. Under the same experimental environment, comparison with a MapReduce-based ant colony algorithm shows that the proposed algorithm improves the optimization speed by at least ten times.
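A hedged PySpark sketch of the parallel construction phase follows: ants are an RDD of seeds, each task builds a TSP tour against a broadcast pheromone matrix, and the driver evaporates and deposits pheromone from the collected tours. The parameter values and toy distance matrix are placeholders, and a running Spark environment is assumed.

```python
import numpy as np
from pyspark import SparkContext

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def build_tour(seed, pher, dist, alpha=1.0, beta=2.0):
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    tour, left = [int(rng.integers(n))], set(range(n))
    left.remove(tour[0])
    while left:                                # roulette-wheel city choice
        i, cand = tour[-1], sorted(left)
        w = (pher[i, cand] ** alpha) * (1.0 / dist[i, cand]) ** beta
        nxt = int(rng.choice(cand, p=w / w.sum()))
        tour.append(nxt)
        left.remove(nxt)
    return tour

sc = SparkContext(appName="spark-aco")
n = 50
dist = np.random.rand(n, n) + 0.1              # toy distances, no zeros
pher = np.ones((n, n))
for it in range(20):
    b_pher = sc.broadcast(pher)                # ship pheromone to executors
    tours = (sc.parallelize(range(100))        # one RDD element per ant
               .map(lambda s: build_tour(s + it * 100, b_pher.value, dist))
               .collect())
    pher *= 0.5                                # evaporation on the driver
    for t in tours:
        q = 1.0 / tour_length(t, dist)
        for i in range(len(t)):
            pher[t[i], t[(i + 1) % len(t)]] += q
```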
Matrix factorization recommendation algorithm based on Spark
ZHENG Fengfei, HUANG Wenpei, JIA Mingzheng
2015, 35(10): 2781-2783. DOI: 10.11772/j.issn.1001-9081.2015.10.2781
In order to solve the bottleneck problems of processing speed and resource allocation, a Spark-based matrix factorization recommendation algorithm was proposed. Firstly, the user factor matrix and the item factor matrix were initialized according to historical data. Secondly, the factor matrices were iteratively updated, with each result kept in memory as the input of the next iteration. Finally, the recommendation model was generated when the iteration ended. Experiments on MovieLens show that the speedup is linear and that the proposed Spark-based algorithm saves time and significantly improves the execution efficiency of the collaborative filtering recommendation algorithm.
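The same pipeline can be sketched with Spark's built-in ALS factorizer (the paper implements its own iterative updates; ALS is a stand-in here). The MovieLens file path and tab-separated format are assumptions.

```python
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext(appName="spark-mf")
lines = sc.textFile("ml-100k/u.data")          # user \t item \t rating \t ts
ratings = lines.map(lambda l: l.split("\t")).map(
    lambda p: Rating(int(p[0]), int(p[1]), float(p[2]))).cache()

# factor matrices are updated iteratively, intermediates stay in memory
model = ALS.train(ratings, rank=10, iterations=10, lambda_=0.01)
print(model.predict(1, 100))                   # predicted score, user 1 / item 100
```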
Real-time fault-tolerant technology for Hadoop based on heartbeat expired time mechanism
GUAN Guodong, TENG Fei, YANG Yan
2015, 35(10): 2784-2788. DOI: 10.11772/j.issn.1001-9081.2015.10.2784
The heartbeat mechanism in Hadoop is not well suited to short jobs and ignores the fairness of expired-time settings among nodes in a heterogeneous cluster. To overcome these problems, a fair expired-time fault-tolerant mechanism was proposed. First of all, a failure misjudgment loss model and a Fair MisJudgment Loss (FMJL) algorithm were put forward according to the reliability and computational performance of nodes, so as to meet the requirements of long jobs and short jobs at the same time. Then a fair expired-time mechanism based on the FMJL algorithm was designed and implemented. For a 345-second short job running on Hadoop with the proposed mechanism, the results showed a 44% reduction in completion time when a TaskTracker node failed, and a 23% reduction compared with a self-adaptive expired-time mechanism. The experimental results show that the proposed fair expired-time mechanism shortens fault-tolerant processing time without affecting the completion time of long jobs, and improves the real-time processing ability of a heterogeneous Hadoop cluster.
Framework of serial multimodal biometrics with parallel fusion
LI Haixia, ZHANG Qing
2015, 35(10): 2789-2792. DOI: 10.11772/j.issn.1001-9081.2015.10.2789
In multimodal biometric systems, the parallel fusion mode has advantages over the serial fusion mode in convenience and efficiency. Based on current work on serial multimodal biometric systems, a framework combining the parallel and serial fusion modes was proposed. In the framework, a weighted score-level fusion algorithm using gait, face and finger features was first proposed; then semi-supervised learning techniques were used to improve the performance of weak traits in the system, achieving simultaneous improvements in user convenience and recognition accuracy. Analysis and experimental results indicate that the performance of the weak classifier can be improved by online learning, and that both convenience and recognition accuracy are successfully promoted in this framework.
Diagnosis decision of breast cancer combining with attribute reduction and support vector machine
LU Xingning, ZHANG Li
2015, 35(10): 2793-2797. DOI: 10.11772/j.issn.1001-9081.2015.10.2793
The approaches that combine Genetic Algorithm (GA) and Support Vector Machine (SVM) ensembles for disease diagnosis still suffer from attribute redundancy. A decision method for the diagnosis of breast cancer based on attribute reduction and SVM was therefore proposed. The attribute reduction method took minimizing the number of attributes, maximizing the number of distinct attributes in the discernibility matrix, and maximizing the dependency degree of the decision attributes on the reduced condition attributes as the fitness function of the GA. After attribute reduction, multiple attribute subsets were selected for SVM ensemble learning. Experimental results on the breast cancer dataset from the UCI databases validate that the classification accuracy increases by at least 2 percentage points compared with SVM.
Modified multi-class support vector machine recursive feature elimination for cancer multi-classification
HUANG Xiaojuan, ZHANG Li
2015, 35(10): 2798-2802. DOI: 10.11772/j.issn.1001-9081.2015.10.2798
To deal with cancer multi-classification problems, a Multi-class feature selection method based on Support Vector Machine Recursive Feature Elimination (MSVM-RFE) had previously been proposed. However, it considers only the combined weights of all SVM-RFE sub-classifiers and ignores the feature selection ability of each individual sub-classifier. To improve the recognition rate for multi-classification problems, a Modified MSVM-RFE (MMSVM-RFE) was presented. Like MSVM-RFE, MMSVM-RFE converts a multi-class problem into multiple binary tasks; each binary feature elimination problem is then solved by an SVM-RFE that iteratively removes irrelevant features to obtain a feature subset. All these feature subsets are merged into one final subset, on which an SVM classifier is trained. The experimental results on three gene datasets show that the proposed method selects a feature subset that is effective for cancer classification, increasing the overall recognition rate by about 2% and significantly enhancing the precision of individual categories, in some cases to 100%. Compared with random forest, the K-Nearest Neighbor (KNN) classifier and PCA dimension reduction, the proposed method achieves better performance.
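A minimal sklearn sketch of the modified scheme described above: RFE is run independently on each one-vs-rest binary task, the per-task feature subsets are merged by union, and the final SVM is trained on the merged subset. The subset size per task is a hypothetical parameter.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC, LinearSVC

def mmsvm_rfe(X, y, n_per_task=50):
    selected = set()
    for c in np.unique(y):
        y_bin = (y == c).astype(int)            # one-vs-rest binary task
        rfe = RFE(LinearSVC(max_iter=5000), n_features_to_select=n_per_task)
        rfe.fit(X, y_bin)
        selected |= set(np.where(rfe.support_)[0])
    feats = sorted(selected)                    # union of per-task subsets
    clf = SVC(kernel="linear").fit(X[:, feats], y)
    return feats, clf
```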
Inconsistent decision algorithm in region of interest based on certainty degree, inclusion degree and cover degree
ZHOU Tao, LU Huiling, MA Miao, YANG Pengfei
2015, 35(10): 2803-2807. DOI: 10.11772/j.issn.1001-9081.2015.10.2803
Noisy data and disease misjudgment in the Region of Interest (ROI) of medical images are a typical inconsistent decision problem of an Inconsistent Decision System (IDS), and they pose a huge challenge in clinical diagnosis. Focusing on this problem, a decision algorithm named ItoC-CIC, based on certainty degree, inclusion degree and cover degree and combining macro-micro and global-local characteristics, was proposed for the ROI of prostate tumor Magnetic Resonance Imaging (MRI). Firstly, high-dimensional features of prostate tumor MRI ROIs were extracted to construct a complete inconsistent decision table. Secondly, the equivalence classes containing inconsistent samples were found by calculating the certainty degree. Thirdly, a Score value was obtained by calculating the inclusion degree and cover degree of the inconsistent equivalence classes, and was used to filter inconsistent samples, converting the inconsistent decision into a consistent one. Finally, experiments were conducted on typical examples, UCI data, and 102 features of MRI prostate tumor ROIs. The experimental results illustrate that the algorithm is effective and feasible, achieving a 100% conversion rate from inconsistent to consistent decisions.
Human promoter recognition based on single nucleotide statistics and support vector machine ensemble
XU Wenxuan, ZHANG Li
2015, 35(10): 2808-2812. DOI: 10.11772/j.issn.1001-9081.2015.10.2808
To efficiently discriminate promoters in the human genome, an algorithm for human promoter recognition based on single nucleotide statistics and Support Vector Machine (SVM) ensemble was proposed. Firstly, a gene dataset was divided into C-preferred and G-preferred subsets by single nucleotide statistics. Secondly, DNA rigidity features, word-based features and CpG-island features were extracted for each subset. Finally, these features were combined by SVM ensemble learning. Three ensemble schemes were discussed: single SVM ensemble, double-layer SVM ensemble and cascaded SVM ensemble. The experimental results show that the proposed method improves the sensitivity and specificity of human promoter recognition; the double-layer SVM ensemble achieves the highest sensitivity of 79.51%, while the cascaded SVM ensemble has the highest specificity of 84.58%.
Prostate tumor CAD model based on neural network with feature-level fusion in magnetic resonance imaging
LU Huiling, ZHOU Tao, WANG Huiqun, WANG Wenwen
2015, 35(10): 2813-2818. DOI: 10.11772/j.issn.1001-9081.2015.10.2813
Focusing on feature relevancy and the dimension-disaster problem in the high-dimensional representation of Magnetic Resonance Imaging (MRI) prostate tumor Regions of Interest (ROI), a prostate tumor CAD model based on a Neural Network (NN) with Principal Component Analysis (PCA) feature-level fusion in MRI was proposed. Firstly, 102 features were extracted from MRI prostate tumor ROIs, including 6 geometry features, 6 statistical features, 7 Hu invariant moment features, 56 GLCM texture features, 3 Tamura texture features and 24 frequency features. Secondly, 8 features with a cumulative contribution rate of 89.62% were obtained by using PCA for feature-level fusion, reducing the dimension of the feature vectors. Thirdly, a classical NN trained with the Broyden-Fletcher-Goldfarb-Shanno (BFGS), Back-Propagation (BP), Gradient Descent (GD) and Levenberg-Marquardt algorithms was used as the classifier. Finally, 180 MRI images of prostate patients were used as the original data to evaluate the model. The experimental results illustrate that PCA feature-level fusion improves the ability of the neural network to identify benign and malignant prostate tumors by at least 10%, and that the feature-level fusion strategy is effective, increasing feature irrelevancy to a certain extent.
Fault diagnosis method of high-speed rail based on compute unified device architecture
CHEN Zhi, LI Tianrui, LI Ming, YANG Yan
2015, 35(10): 2819-2823. DOI: 10.11772/j.issn.1001-9081.2015.10.2819
Concerning the problem that traditional fault diagnosis of High-Speed Rail (HSR) vibration signals is slow and cannot meet the actual requirement of real-time processing, an accelerated fault diagnosis method for HSR vibration signals based on Compute Unified Device Architecture (CUDA) was proposed. First, the HSR data were processed by CUDA-based Empirical Mode Decomposition (EMD), and the fuzzy entropy of each resulting component was calculated. Finally, the K-Nearest Neighbor (KNN) classification algorithm was used to classify the feature space consisting of multiple fuzzy entropy features. The experimental results show that the proposed method is effective for the fault classification of HSR vibration signals, and that its processing speed is significantly improved compared with the traditional method.
Regularized approach for incomplete robust principal component analysis and its applications in background modeling
SHI Jiarong, ZHENG Xiuyun, YANG Wei
2015, 35(10): 2824-2827. DOI: 10.11772/j.issn.1001-9081.2015.10.2824
Because existing Robust Principal Component Analysis (RPCA) approaches do not consider the continuity and incompleteness of sequential data, a low-rank matrix recovery model, named Regularized Incomplete RPCA (RIRPCA), was proposed. First, the RIRPCA model was constructed based on a metric function evaluating continuity, minimizing a weighted combination of the matrix nuclear norm, the L1 norm and a regularization term. Then, the augmented Lagrange multipliers algorithm was employed to solve the proposed convex optimization problem; this algorithm has good scalability and low computational complexity. Finally, RIRPCA was applied to video background modeling. The experimental results demonstrate that the proposed method is superior to matrix completion and incomplete RPCA in recovering missing entries and separating foreground.
Uncertain life strength rescue path planning based on particle swarm optimization
GENG Na, GONG Dunwei, ZHANG Yong
2015, 35(10): 2828-2832. DOI: 10.11772/j.issn.1001-9081.2015.10.2828
In order to rescue the maximum number of trapped people within limited time after a disaster, robots were used in place of rescue workers, and a robot rescue path planning method was studied for the situation where the life strengths of the trapped people are uncertain. Firstly, considering that each target has a life strength whose value differs according to various factors, the life strength was modeled as an interval number. Secondly, taking the life strength constraint into account, the number of rescued people was treated as the objective function, an interval function related to life strength. Then a modified Particle Swarm Optimization (PSO) algorithm was used to solve the established objective function, and the particle encoding and decoding method and the global best solution update strategy were introduced. Finally, the effectiveness of the proposed method was verified by simulations of different scenarios.
Corridor scene recognition for mobile robots based on multi-sonar-sensor information and NeuCube
WANG Xiuqing, HOU Zengguang, PAN Shiying, TAN Min, WANG Yongji, ZENG Hui
2015, 35(10): 2833-2837. DOI: 10.11772/j.issn.1001-9081.2015.10.2833
To improve the perception ability of indoor mobile robots, a classification method for commonly structured corridor scenes was studied based on the Spiking Neural Network (SNN) and NeuCube, a novel computing model built on SNN. SNNs convey spatio-temporal information by spikes, are more suitable than traditional Neural Networks (NN) for analyzing dynamic and time-series data and for recognizing data of various patterns, and are easy to implement in hardware. The principle, learning methods and calculation steps of NeuCube were discussed. Then seven common corridor scenes were recognized by the classification method based on multi-sonar-sensor information and NeuCube. The experimental results show that the proposed method is effective and helps improve the autonomy and intelligence of mobile robots.
Slight-pause marks boundary identification based on conditional random field
MO Yiwen, JI Donghong, HUANG Jiangping
2015, 35(10): 2838-2842. DOI: 10.11772/j.issn.1001-9081.2015.10.2838
The boundary identification of punctuation marks is an important research field of natural language processing, and it underlies applications such as word segmentation and phrase chunking. To identify the boundaries of Chinese slight-pause marks, which separate coordinate words and phrases in Chinese, the Conditional Random Field (CRF), which is suited to sequence segmentation and labeling, was adopted. At first, the slight-pause mark boundary recognition task was described in two types; then the corpus tagging method, the tagging process and feature selection were studied. Following the corpus recommendation practice and ten-fold cross validation, a series of experiments on slight-pause marks were carried out. The experimental results show that the proposed method, with the selected boundary identification features, is effective for slight-pause mark boundary identification: the F-measure of boundary identification increases by 10.57% over the baseline, and the F-measure for words delimited by slight-pause marks reaches 85.24%.
Anonymous circuit control method for the onion router based on node failure
ZHUO Zhongliu, ZHANG Xiaosong, LI Ruixing, CHEN Ting, ZHANG Jingzhong
2015, 35(10): 2843-2847. DOI: 10.11772/j.issn.1001-9081.2015.10.2843
Focusing on the issue that the communication path selected by the random routing algorithm of The Onion Router (Tor) cannot be controlled, which leads to problems such as the abuse of anonymity techniques and the failure of tracing methods, a Tor anonymous circuit control method based on node failure was proposed. To effectively control the circuit, fake TCP reset packets were sent to mimic node failure, so that the Tor client would keep re-selecting nodes until it selected the controlled ones. Theoretical analysis of the Tor path selection algorithm and tests in a private Tor network composed of 256 onion routers demonstrate the effectiveness of the proposed approach. Compared with traditional methods that deploy high-bandwidth routers to attract users to controlled nodes, the proposed method improves the probability of choosing a controlled entry node from 4.8% to about 60% when entry guards are enabled by default in the Tor client. The results also show that as the length of a controlled path increases, the success rate of building the path decreases, so the proposed method is most suitable for controlling short paths.
Multi-user multi-input multi-output multi-hop relay system based on cooperative likelihood detection and sphere decoding
XU Yuanfei
2015, 35(10): 2848-2851. DOI: 10.11772/j.issn.1001-9081.2015.10.2848
Concerning Bit Error Rate (BER) and channel capacity optimization in the data transfer process of Multi-Input Multi-Output (MIMO) systems, a multi-user MIMO multi-hop relay system based on cooperative likelihood detection and sphere decoding was proposed. Firstly, a second-order cooperative MIMO relay system model was constructed to analyze the relay transmission process of channel data, as well as path loss and shadow fading. Secondly, sphere decoding was used to derive the equivalent maximum likelihood rule for log-normal shadow fading detection. Finally, a maximum channel power harmonic mean selection policy was put forward, and an access link with smaller BER was chosen for each user based on the correlated link metric and the maximum channel power threshold, so as to improve the performance of the multi-user MIMO system. Simulation results show that, compared with a multi-user MIMO multi-hop relay system based on mutual information maximization and a relay system based on decode-and-forward with MIMO orthogonal Space-Time Block Codes (STBC), the average BER of the proposed system is reduced by 27.4% and 32.6% respectively, and the average channel capacity is increased by 9.5% and 12.7% respectively. The results demonstrate that the proposed system is effective in reducing BER and improving channel capacity.
Improved algorithm of time synchronization based on control perspective of wireless sensor network
ZENG Pei, CHEN Wei
2015, 35(10): 2852-2857. DOI: 10.11772/j.issn.1001-9081.2015.10.2852
Focusing on the low synchronization accuracy and slow convergence caused by disturbance and communication delay during time synchronization in Wireless Sensor Networks (WSN), an improved time synchronization algorithm was proposed from a control perspective. Firstly, the clock synchronization state model was established; then, following modern control theory, a centralized control strategy was introduced and a time synchronization state model based on this strategy was established. The centralized control scheme was designed based on global clock status information, and optimal control was obtained by minimizing the performance index function together with the optimal estimation of Kalman filtering. Comparison simulations were carried out between the proposed clock synchronization optimization algorithm and the Timing-sync Protocol for Sensor Networks (TPSN). The results showed that from the 6th step of clock synchronization onward, the synchronization error of the former was smaller than that of the latter; to achieve the same relatively high synchronization precision, the former required about twenty percent of the steps of the latter; and the variance of the converged synchronization error of the former was two orders of magnitude lower. The results prove that the proposed time synchronization algorithm has higher synchronization accuracy, faster convergence speed and lower communication load than TPSN.
Multi-slot allocation data transmission algorithm based on dynamic tree topology for wireless sensor network
SUN Li, SONG Xizhong
2015, 35(10): 2858-2862. DOI: 10.11772/j.issn.1001-9081.2015.10.2858
Concerning the load imbalance of nodes in Wireless Sensor Networks (WSN), a new multi-slot allocation data transmission algorithm based on a dynamic tree topology was proposed. The data transmission mode and slot allocation were first analyzed with a tree link model. Then each node performed frame slot allocation according to slot requirements by using the parent-child relationships in the tree topology; a sequencing mode for reception slots and one for transmission slots were given, allowing nodes to receive packets from other nodes in a more orderly way over less-interfered channels, reducing slot waste and improving channel slot utilization. Compared with a life-cycle extension algorithm for WSN based on data transmission optimization and a reliable data transmission algorithm based on energy awareness and time slot allocation, the simulation results show that the network energy efficiency of the proposed algorithm increases by 42.8% and 51.7% respectively, and the average lifetime of the nodes extends by 1.7% and 37.5% respectively; both energy efficiency and network life cycle are optimized.
Cipher texts generation method in elliptic curve cryptography based on plaintext length
ZHANG Xidong, TONG Weiming, WANG Tiecheng, JIN Xianji
2015, 35(10): 2863-2866. DOI: 10.11772/j.issn.1001-9081.2015.10.2863
Since the space for storing ciphertexts exceeds that for storing plaintexts in elliptic curve encryption, a method of generating ciphertexts with elliptic curve cryptography based on plaintext length was proposed. Firstly, by analyzing the encryption process of elliptic curve encryption, it was deduced that the space for ciphertext elliptic curve points is determined by the number of plaintext elliptic curve points. Secondly, by fusing encryption patterns based on plaintext segmentation and combination, an encryption model was constructed, and plaintext segmentation and plaintext combination algorithms were put forward to generate the minimum number of elliptic curve points. Finally, the space demanded for storing ciphertext elliptic curve points was calculated, and solutions for reducing the number of ciphertext elliptic curve points were given. Analysis and calculation show that the space for ciphertext elliptic curve points decreases by 88.2% with plaintext segmentation and by 90.2% with plaintext combination. The results show the method can decrease the number of ciphertext elliptic curve points and the storage space demanded of hardware.
Code-based blind signature scheme
WANG Qian, ZHENG Dong, REN Fang
2015, 35(10): 2867-2871. DOI: 10.11772/j.issn.1001-9081.2015.10.2867
Code-based cryptography has attracted widespread attention because it resists quantum algorithms. To keep messages anonymous, a code-based blind signature scheme was proposed. Using a hash technique and a blinding factor, the message owner sends an irreversible, blinded message to the signer; the signer produces a blind signature using the Courtois-Finiasz-Sendrier (CFS) signature scheme and returns it; the message owner then obtains the signature by an unblinding operation. Analysis shows that the new scheme not only has the basic properties of general blind signatures, but also inherits the advantages of the CFS signature scheme, such as high security and short signature length; in addition, it can effectively resist attacks by quantum algorithms.
Service quality evaluation model based on trusted recommendation
ZHOU Guoqiang, YANG Xihui, LIU Hongfang
2015, 35(10): 2872-2876. DOI: 10.11772/j.issn.1001-9081.2015.10.2872
Due to the diversity of Web users and their complex personal demands, the Quality of Service (QoS) information released by some users is not completely reliable, which affects the accuracy of service quality evaluation. To address this problem, a service quality evaluation model based on trusted recommendation (TR-SQE) was presented. In TR-SQE, a user's recommendation trust was defined as the degree of similarity between the user's recommendation data and the accumulated recommendation data of the user group, and the QoS data released by users whose recommendation trust fell below a threshold were shielded. Using the corrected QoS information as the recommendation data of service quality, users then evaluate service quality according to their degree of similarity with the recommendation preference. Analysis and simulation results demonstrate that the evaluation results of TR-SQE are basically consistent with the real quality of service, have a smaller MAE than the compared methods, and are helpful for users' service selection.
Provably-secure two-factor authentication scheme for wireless sensor network
CHEN Lei, WEI Fushan, MA Chuangui
2015, 35(10): 2877-2882. DOI: 10.11772/j.issn.1001-9081.2015.10.2877
With the development of Wireless Sensor Networks (WSN), user authentication has become a critical security issue due to their unattended and hostile deployment in the field. To improve the security of user authentication, a new provably-secure two-factor authenticated key exchange scheme was proposed based on Nam's first security model. The proposed scheme is based on elliptic curve cryptography and achieves authentication security and user anonymity. Its safety was proved under the ECCDH assumption in the random oracle model. Performance analysis demonstrates that, compared with Nam's schemes, the proposal is more efficient and better suited to wireless sensor network environments.
Intrusion detection model based on decision tree and Naive-Bayes classification
YAO Wei, WANG Juan, ZHANG Shengli
2015, 35(10): 2883-2885. DOI: 10.11772/j.issn.1001-9081.2015.10.2883
Intrusion detection requires a system to identify network intrusions quickly and accurately, which demands highly efficient detection algorithms. In order to improve the efficiency and accuracy of an intrusion detection system and reduce its false positive and false negative rates, an H-C4.5-NB intrusion detection model combining C4.5 with Naive Bayes (NB) was proposed after fully analyzing the two algorithms. In this model the distribution of decision categories is described in the form of probabilities, and the final decision is given as a probability-weighted sum of the C4.5 and NB outputs. The performance of the model was tested on the KDD 99 dataset. The experimental results show that, compared with traditional methods such as C4.5, NB and NBTree, the detection accuracy for Denial of Service (DoS) attacks improves by about 9%, and that for U2R and R2L attacks by about 20%-30%, with H-C4.5-NB.
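The probability-weighted combination is straightforward to sketch. sklearn has no C4.5, so an entropy-criterion decision tree stands in for it below, and the weight alpha is a hypothetical parameter rather than the paper's tuned value.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

class HybridTreeNB:
    """Weighted sum of class-probability vectors from a tree and NB."""
    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.tree = DecisionTreeClassifier(criterion="entropy")  # C4.5 stand-in
        self.nb = GaussianNB()

    def fit(self, X, y):
        self.tree.fit(X, y)
        self.nb.fit(X, y)
        return self

    def predict(self, X):
        proba = (self.alpha * self.tree.predict_proba(X)
                 + (1 - self.alpha) * self.nb.predict_proba(X))
        return self.tree.classes_[np.argmax(proba, axis=1)]
```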
Cloud architecture intrusion detection system based on KKT condition and hyper-sphere incremental SVM algorithm
ZHANG Wenxing, FAN Jiejie
2015, 35(10): 2886-2890. DOI: 10.11772/j.issn.1001-9081.2015.10.2886
In view of the overload, the lack of multi-machine joint analysis and the maintenance burden of a huge rule database in traditional Intrusion Detection Systems (IDS), a cloud-architecture IDS with an Incremental Support Vector Machine (ISVM) algorithm based on the KKT condition and hyper-sphere, namely KS-ISVM, was proposed. The network data captured by a client are preprocessed and sent to the cloud as samples, where KS-ISVM analyzes them. According to the KKT condition, the samples that violate it are selected as useful samples, while the others that satisfy it are removed. In addition, to ensure that the removed samples are truly redundant, they are screened again by the hyper-sphere rule: the samples satisfying it are kept as useful samples, and the rest are deleted. Finally, the SVM is trained and updated by merging the selected useful samples. Comparison experiments with SVM, Batch-SVM and the KKT-based incremental SVM (K-ISVM) were carried out on KDDCUP 99. The results show that KS-ISVM performs well in prediction and sample selection, reaching an accuracy of 90.3%, while the accuracies of SVM, Batch-SVM and K-ISVM are all below 89%. Analysis of the parallelized KS-ISVM shows that a single process takes 6351 s while 16 processes take 146 s, proving that the multi-process technique is effective and can meet the efficiency and accuracy requirements of an IDS in a cloud computing environment.
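The KKT screening step can be sketched directly from the soft-margin conditions: for a trained SVM decision function f, new samples with y * f(x) >= 1 are consistent with the current model (the zero-loss, non-support-vector region) and are candidates for removal, while violators are kept for retraining. The hyper-sphere re-check is omitted in this sketch.

```python
import numpy as np
from sklearn.svm import SVC

def kkt_violators(clf, X_new, y_new):
    """y_new in {-1, +1}; returns the new samples worth retraining on."""
    margin = y_new * clf.decision_function(X_new)
    keep = margin < 1.0            # violates KKT w.r.t. the current model
    return X_new[keep], y_new[keep]

# usage sketch:
#   clf = SVC(kernel="rbf").fit(X0, y0)
#   Xv, yv = kkt_violators(clf, X1, y1)
#   clf.fit(np.vstack([X0, Xv]), np.hstack([y0, yv]))   # incremental pass
```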
Efficient plaintext gathering method for data protected by SSL/TLS protocol in network auditing
DONG Haitao, TIAN Jing, YANG Jun, YE Xiaozhou, SONG Lei
2015, 35(10): 2891-2895. DOI: 10.11772/j.issn.1001-9081.2015.10.2891
In order to audit data protected by the Secure Sockets Layer/Transport Layer Security (SSL/TLS) protocol on the Internet, a plaintext gathering method based on man-in-the-middle principles was proposed for network data protected by SSL/TLS. A data gatherer is connected in series between the server and the client; it obtains the encryption key by modifying handshake messages during the SSL/TLS handshake, decrypts the secured data and gathers its plaintext. Compared with the existing method based on proxy-server principles, the proposed method has a shorter transmission delay, a larger SSL throughput and a smaller memory occupation. Compared with the existing method in which the gatherer possesses the server's private key, it has a wider application scope and is unaffected by packet loss on the Internet. The experimental results show that the proposed method decreases transmission delay by about 27.5% and increases SSL throughput by about 10.4% compared with the proxy-server-based method, and that its SSL throughput approaches the ideal maximum.
Partition-based binary file similarity comparison method
DONG Qihai, WANG Yagang
2015, 35(10): 2896-2900. DOI: 10.11772/j.issn.1001-9081.2015.10.2896
Focusing on the huge time and space consumption and the all-or-nothing Basic Block (BB) comparison results caused by the one-to-one mapping of basic blocks in traditional file structural similarity detection, a partition-based method for the structural comparison of binary files was proposed. Firstly, the small-primes algorithm used to compare basic blocks was improved to classify the basic blocks within a function; then the similarity rates of basic blocks were obtained by combining the weights of basic block signatures and attributes, from which the function similarity rate and the final file similarity rate were derived. Compared with the absolute comparison method that ignores partitioning when computing function similarity, the proposed method has advantages in both efficiency and accuracy. The experiments show that the proposed method improves comparison accuracy and reduces comparison time, making it more practical for the similarity comparison of binary files.
Focused topic Web crawler based on improved TF-IDF algorithm
WANG Jingzhong, QIU Tongxiang
2015, 35(10): 2901-2904. DOI: 10.11772/j.issn.1001-9081.2015.10.2901
Considering the large amount of irrelevant data in Web search results and the low accuracy of semantic retrieval with the traditional TF-IDF algorithm, the K-means algorithm and the adaptive genetic algorithm, an improvement of the TF-IDF algorithm and its application to semantic retrieval were studied. The TF-IDF algorithm was improved by applying regular expressions to semantic analysis. The search topic was described by a semantic database. The similarity of the regular atoms in documents was obtained by a weighted calculation according to the importance of regular-atom semantics and their positions in Web pages; the final results were obtained by a cosine operation between the document similarity and the subject model in a vector space model. Finally, the improved TF-IDF algorithm, the traditional TF-IDF algorithm, the K-means algorithm and the adaptive genetic algorithm were applied to a focused-topic Web crawler and compared. The results show that in the vertical search of the focused-topic Web crawler, the accuracy of the improved TF-IDF algorithm rises by 17.1 percentage points and its omission rate falls by 7.76 percentage points relative to the traditional TF-IDF algorithm; compared with the K-means algorithm and the adaptive genetic algorithm, its accuracy rises by 6 and 8.1 percentage points respectively. In summary, the improved TF-IDF algorithm effectively promotes the accuracy of document similarity detection and greatly improves the focused-topic Web crawler's semantic analysis.
Evolution analysis method of microblog topic-sentiment based on dynamic topic sentiment combining model
LI Chaoxiong, HUANG Faliang, WEN Xiaoqian, LI Xuan, YUAN Chang'an
2015, 35(10): 2905-2910. DOI: 10.11772/j.issn.1001-9081.2015.10.2905
For the problem that existing models cannot analyze the topic-sentiment evolution of microblogs, a Dynamic Topic Sentiment Combining Model (DTSCM) was proposed based on the Topic Sentiment Combining Model (TSCM) and the emotional cycle theory. By capturing the topics and sentiments of microblogs at different times, DTSCM can track the topic-sentiment evolution trend and produce a topic-sentiment evolution graph. The experimental results on a real microblog corpus show that, in contrast with the state-of-the-art models Joint Sentiment/Topic (JST), Sentiment-Latent Dirichlet Allocation (S-LDA) and Dependency Phrases-Latent Dirichlet Allocation (DPLDA), the sentiment classification accuracy of DTSCM increases by 3.01%, 4.33% and 8.75% respectively, and DTSCM can reveal the topic-sentiment evolution of microblogs. The proposed approach not only achieves higher sentiment classification accuracy but also analyzes topic-sentiment evolution, and it is helpful for public opinion analysis.
Frequent closed itemset mining algorithm over uncertain data
LIU Huiting, SHEN Shengxia, ZHAO Peng, YAO Sheng
2015, 35(10): 2911-2914. DOI: 10.11772/j.issn.1001-9081.2015.10.2911
Because of the downward closure property over uncertain data, existing solutions that mine all frequent itemsets may produce an exponential number of results. In order to obtain a reasonably small result set, the discovery of frequent closed itemsets over uncertain data was studied, and a new algorithm called Normal Approximation-based Probabilistic Frequent Closed Itemset Mining (NA-PFCIM) was proposed. The new method regards itemset mining as a probability distribution problem and mines frequent itemsets using a normal distribution model, which supports large databases and extracts frequent itemsets with a high degree of accuracy. The algorithm then adopts a depth-first search strategy to obtain all probabilistic frequent closed itemsets, reducing the search space and avoiding redundant computation. Two probabilistic pruning techniques, superset pruning and subset pruning, are also used. Finally, the effectiveness and efficiency of the proposed method were verified by comparing it with the Poisson-distribution-based algorithm A-PFCIM. The experimental results show that NA-PFCIM decreases the number of extended itemsets and reduces computational complexity, achieving better performance than the compared algorithm.
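The normal-approximation test at the core of such algorithms can be sketched in a few lines: over uncertain transactions the support count of an itemset is Poisson-binomial, so its mean and variance follow from the per-transaction probabilities, and a Gaussian tail gives the probabilistic-frequentness test. The threshold names below are illustrative.

```python
from math import erf, sqrt

def prob_frequent(probs, minsup, tau=0.9):
    """probs: per-transaction probability that the itemset appears;
    True if P(support >= minsup) >= tau under the normal approximation."""
    mu = sum(probs)                          # Poisson-binomial mean
    var = sum(p * (1 - p) for p in probs)    # Poisson-binomial variance
    if var == 0:
        return mu >= minsup
    z = (minsup - 0.5 - mu) / sqrt(var)      # 0.5 = continuity correction
    p_freq = 1 - 0.5 * (1 + erf(z / sqrt(2)))   # 1 - Phi(z)
    return p_freq >= tau
```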
Efficient information search method based on semantic Web
XIA Meicui, SHI Hongtao
2015, 35(10): 2915-2919. DOI: 10.11772/j.issn.1001-9081.2015.10.2915
In order to improve the accuracy of Web information retrieval, an efficient information search method based on the semantic Web was proposed. Firstly, all semantic paths between the target resources and the query keywords were extracted from the ontology library, and the weight of each semantic path was calculated by analyzing the weights and identification power of the attributes it contains. Then, based on the weights, the number and the specificity of the semantic paths between resources and query keywords, the semantic correlation between each resource and each keyword was calculated; combined with the coverage and identification power of each keyword, the semantic correlation between each resource and the keyword set was then calculated. Finally, all the resources were ranked by this correlation and output. The experimental results show that, compared with three other semantic Web search algorithms, namely OntoLook, tf*idf and TMSubtree, the proposed method improved the average precision by 69.0, 25.0 and 21.0 percentage points respectively, the average recall by 77.1, 28.3 and 24.3 percentage points respectively, and the average F-measure by 72.4, 26.4 and 22.4 percentage points respectively. These results prove that the proposed method can not only effectively improve the accuracy of semantic search, but also return good query results for indirect information.
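The abstract gives no formulas, but the scoring it describes can be pictured as aggregating weighted semantic paths. A speculative sketch; all names and the aggregation rule are illustrative assumptions, not the paper's definitions:

    def path_weight(attr_weights):
        # One semantic path is scored from the weights (and, in the paper,
        # the identification power) of the attributes along it; a product
        # is one common choice.
        w = 1.0
        for a in attr_weights:
            w *= a
        return w

    def resource_keyword_correlation(paths):
        # paths: one attribute-weight list per semantic path between a
        # resource and a keyword; shorter (more specific) paths count more.
        return sum(path_weight(p) / len(p) for p in paths if p)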
Adaptive H∞ control for longitudinal plane of hypersonic vehicle based on hierarchical fuzzy system
WANG Yongchao, ZHANG Shengxiu, CAO Lijia, HU Xiaoxiang
2015, 35(10): 2920-2926. DOI:
10.11772/j.issn.1001-9081.2015.10.2920
To deal with the output tracking problem of a hypersonic vehicle with parameter uncertainty, an adaptive controller achieving H∞ performance was proposed based on a hierarchical fuzzy system. In order to solve the problem that the number of rules in a fuzzy controller grows exponentially with the number of input variables, reduce the number of parameters to be identified on-line, and enhance the real-time performance of the control system, the adaptive controller was designed on a hierarchical fuzzy structure. To weaken the impact on stability caused by the approximation error of the fuzzy logic system, parameter uncertainty and external disturbances, robust compensation terms were introduced to improve the H∞ performance of the system. Lyapunov theory was applied to analyze and prove the stability of the system. The simulation results demonstrate that the system can not only track the input exactly, but also possesses strong robustness.
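The rule-explosion argument can be made concrete: a flat fuzzy system with n inputs and m fuzzy sets per input needs m^n rules, while a hierarchy of two-input subsystems needs only (n-1)m^2. A worked comparison; the cascade topology is the standard one, not necessarily the paper's exact structure.

    def flat_rule_count(n_inputs, m_sets):
        # Standard single-layer fuzzy system: one rule per input combination.
        return m_sets ** n_inputs

    def hierarchical_rule_count(n_inputs, m_sets):
        # Cascade of two-input subsystems: each layer adds one new input,
        # so the rule count grows linearly with the number of inputs.
        return (n_inputs - 1) * m_sets ** 2

    # For 5 inputs with 5 fuzzy sets each:
    # flat: 5**5 = 3125 rules; hierarchical: 4 * 5**2 = 100 rules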
Service-level agreement negotiation mechanism based on semantic Web technology
WANG Xiaolong, ZHANG Heng, YANG Bochao, SHEN Yulin
2015, 35(10): 2927-2932. DOI:
10.11772/j.issn.1001-9081.2015.10.2927
Aiming at the lack of semantic description for the Service-Level Agreement (SLA) elements used in negotiation and for the negotiation process itself in SLA auto-negotiation, a negotiation mechanism based on semantic Web technology was proposed. At first, a negotiation ontology named Osn was proposed to describe the SLA elements directly used in negotiation; the mapping function and the negotiation evaluation function for these SLA elements were designed and described in Osn, and a formal description of the main concepts and the relationships between them was given in description logic to provide a satisfiable semantic model for Osn. Then a bargain model was put forward for SLA negotiation, and it was shown, through the proof of the related propositions and theorems, that a Pareto optimal offer could be generated by adopting this model; using this bargain model, the service ontology was designed for SLA negotiation based on the mapping between OWL-S and Unified Modeling Language (UML). The result of a case study shows that this knowledge can form a sequence of offers that satisfies the need to maximize the interests of the negotiation participants. It illustrates that Osn can provide the service ontology with parameter type support for the negotiation of an arbitrary SLA, and that the SLA negotiation oriented bargain model can generate an SLA accepted by both negotiation participants.
Parallel constrained differential evolution algorithm merging with multi-constraint handling techniques
WEI Wenhong
2015, 35(10): 2933-2938. DOI:
10.11772/j.issn.1001-9081.2015.10.2933
Aiming at the problem that constrained differential evolution with a single constraint handling technique is not suitable for all constrained optimization problems, a parallel constrained differential evolution algorithm using multiple constraint handling techniques was proposed. The algorithm divided the initial population into several sub-populations; the sub-populations then evolved with different constraint handling techniques in parallel and communicated with each other at fitness evaluation. Using four constraint handling techniques, the algorithm found the best known solutions of all benchmark functions in about a quarter of the computation time of the serial algorithm. The experimental results show that the proposed algorithm decreases computation time, and improves solution accuracy and convergence speed in the majority of test cases compared with the corresponding serial algorithm and with algorithms that use only one constraint handling technique.
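An island-style skeleton of the scheme the abstract outlines: each sub-population evolves under its own constraint-handling rule, and the sub-populations periodically exchange their best members. A sketch only; the DE variant, the migration policy and the handle_constraints callables stand in for the paper's four techniques, and a real implementation would run the islands in separate processes.

    import random

    def evolve_subpop(pop, evaluate, handle_constraints, f=0.5, cr=0.9):
        # One DE/rand/1/bin generation under one constraint-handling rule.
        new_pop = []
        for i, x in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [ai + f * (bi - ci) if random.random() < cr else xi
                     for xi, ai, bi, ci in zip(x, a, b, c)]
            # Survivor selection is where penalty functions, feasibility
            # rules, epsilon-constraint handling, etc. differ.
            new_pop.append(handle_constraints(x, trial, evaluate))
        return new_pop

    def parallel_cde(subpops, techniques, evaluate, generations, migrate_every=10):
        for g in range(generations):
            subpops = [evolve_subpop(p, evaluate, t)
                       for p, t in zip(subpops, techniques)]
            if g % migrate_every == 0:
                # Communication step: broadcast the best individual found.
                best = min((min(p, key=evaluate) for p in subpops), key=evaluate)
                for p in subpops:
                    p[random.randrange(len(p))] = list(best)
        return subpops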
Virtual reality display model based on human vision
XU Yujie, GUAN Huichao, ZHANG Zongwei, GUO Qing, ZHANG Qing
2015, 35(10): 2939-2944. DOI:
10.11772/j.issn.1001-9081.2015.10.2939
Aiming at the problem that current display modules cannot provide perfect stereo vision grounded in the principles of the human visual system, a Virtual Reality (VR) stereo vision solution based on an oblique crossing frustum camera was proposed. Firstly, by studying the field-of-view model and the theory of how the eyes acquire depth information, a mathematical model of binocular parallax was built. Secondly, the industrial engine 3DVIA Studio was used as the simulation platform, with the VSL programming language for on-screen rendering; the parent-child relationship was set up and a visual interaction module was designed to construct the stereo camera. Then, a point cloud model was developed to quantify the stereo sense. The advantages and disadvantages of each model were analyzed according to its depth display and distortion characteristics, and the models were optimized step by step: a camera model with parallel central axes and normal frusta, and a normal frustum model whose central axes cross at the viewing distance, were developed, and the frustum was then optimized into a VR camera model with oblique crossing frusta. At last, using 3DVIA Studio as the experiment platform, specific data were substituted into it for projective transformation. The result shows that the proposed oblique crossing frustum camera model eliminates distortion, guarantees the depth display effect, and provides an excellent visual experience.
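In its simplest similar-triangles form, the parallax model behind such a stereo camera reduces to the on-screen disparity of a point at a given depth. A toy formula under that assumption; the paper's full oblique-frustum derivation is more involved.

    def screen_parallax(eye_sep, view_dist, depth):
        # Horizontal disparity on a display plane at view_dist for a point
        # at the given depth; zero at the screen plane, negative in front.
        return eye_sep * (depth - view_dist) / depth

    # e.g. eyes 0.065 m apart, screen at 0.6 m, object at 1.2 m:
    # screen_parallax(0.065, 0.6, 1.2) -> 0.0325 m of positive parallax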
Key technologies of human-computer interaction based on virtual hand
YANG Xiaowen, ZHANG Zhichun, KUANG liqun, HAN Xie
2015, 35(10): 2945-2949. DOI:
10.11772/j.issn.1001-9081.2015.10.2945
Human-computer interaction based on a virtual hand is a research focus of virtual reality. Its key technologies were studied: a realistic geometric model of the virtual hand was established using polygon mesh modeling. A data acquisition module for the data glove was developed on Virtools, and a data conversion method based on initial values was proposed, which eliminated the jump phenomenon when the virtual hand begins to move. For finger abduction, a constraint-value method was used, which increased the matching consistency between the virtual hand and the real hand. Finally, for the interaction of the virtual hand with virtual objects, a grasping algorithm based on an effective threshold angle was put forward. Verified on the Virtools platform, the virtual hand model achieves high fidelity and the hand shape changes naturally. Meanwhile, the interaction of the virtual hand with virtual objects is implemented, showing strong practicability.
Surface blending using resultant and three-dimensional geometric modeling
LI Yaohui, WU Zhifeng, XUAN Zhaocheng
2015, 35(10): 2950-2954. DOI:
10.11772/j.issn.1001-9081.2015.10.2950
As many geometric modeling tasks are essentially surface blending problems with constraint conditions, a nonlinear homotopy mapping method was presented to compute the surface equation of a three-dimensional model, on the basis of the linear homotopy method. In this method, an interpolation polynomial was first computed using the positions of cross sections or biological slices as interpolation points. This interpolation polynomial was then regarded as a nonlinear continuous homotopy mapping function and substituted into the polynomials of the primary and auxiliary surfaces respectively to obtain the blending surface equation. Thus, two univariate equations were obtained when the interpolated variable of the interpolation polynomial was treated as the variable and the other unknowns in the primary and auxiliary surface equations were treated as parameters. Further, the Sylvester resultant was used to eliminate the interpolated variable from these two equations to obtain the modeling surface satisfying the constraints. The proposed method can realize surface modeling with control points and geometric modeling with constraints, and it is more practical because it can redefine and change the intermediate position and shape.
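The elimination step can be reproduced with SymPy's resultant. A toy example with made-up surface equations that are not the paper's: eliminating the interpolated variable t from a primary and an auxiliary equation leaves one implicit blending surface in x, y, z.

    from sympy import symbols, resultant

    x, y, z, t = symbols('x y z t')

    f = x**2 + y**2 - 1 - t   # primary surface, swept along t (illustrative)
    g = z - t**2              # auxiliary constraint tying t to z (illustrative)

    # The Sylvester resultant eliminates t, leaving the blending surface:
    blend = resultant(f, g, t)
    print(blend)              # z - (x**2 + y**2 - 1)**2, in expanded form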
Image completion algorithm based on depth information
HE Ye, LI Guangyao, XIAO Mang, XIE Li, PENG Lei, TANG Ke
2015, 35(10): 2955-2958. DOI:
10.11772/j.issn.1001-9081.2015.10.2955
Aiming at the object structure discontinuity and incompleteness that occur in image completion, an image completion algorithm based on depth information was proposed. Firstly, a plane-parameter Markov random field model was established to infer the depth information of the pixels in the scene depicted by the image; the coplanar regions in the image were then determined, and the target matching blocks were located. Secondly, according to the principle of perspective projection, the transformation matrix was derived to guide the geometric transformation of the matching blocks. Finally, a target cost function including a depth term was designed. Experimental results show the proposed algorithm is superior in both subjective details and Peak Signal-to-Noise Ratio (PSNR) statistics.
Cross-based adaptive guided filtering in image denoising
QUAN Li, HU Yueli, YAN Ming
2015, 35(10): 2959-2962. DOI:
10.11772/j.issn.1001-9081.2015.10.2959
To resolve the contradiction between edge preservation in homogeneous regions and structure preservation in boundary regions of an image, a new algorithm combining a cross-based framework with the guided filter was proposed. The main idea of the algorithm is to add an adjustment offset to the guided filter to preserve edge structure. Instead of the usual fixed-size filtering window, the new algorithm employed a cross-based framework that chooses a threshold on grayscale similarity; borrowing from stereo matching, adaptive filtering blocks whose sizes and shapes adjust automatically were generated. The adjustment offset was made proportional to the threshold, which is more robust than a hard threshold. In simulation experiments on international standard sequences, the blocks were generated efficiently and effectively by the cross-based framework, and homogeneous regions were smoothed well. The added offset outperforms many other algorithms in terms of sharpness enhancement. Compared with the guided filter, the Peak Signal-to-Noise Ratio (PSNR) of the proposed method is improved by about 2 dB. Test results on real natural pictures show that the proposed algorithm has good prospects in practical applications.
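For reference, the guided filter that the paper modifies can be written in a few lines of NumPy; here the window is a fixed box rather than the paper's cross-based adaptive region, and eps plays the role of the adjustable offset. A sketch, not the authors' code.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(I, p, radius=4, eps=1e-2):
        # Classic guided filter: q = mean(a) * I + mean(b), with a, b fitted
        # per window. I, p are float arrays; I is the guidance image.
        size = 2 * radius + 1
        mean = lambda arr: uniform_filter(arr, size)
        mI, mp = mean(I), mean(p)
        cov_Ip = mean(I * p) - mI * mp
        var_I = mean(I * I) - mI * mI
        a = cov_Ip / (var_I + eps)   # eps trades edge preservation for smoothing
        b = mp - a * mI
        return mean(a) * I + mean(b)

    # Self-guided denoising: denoised = guided_filter(noisy, noisy, 4, 0.02)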
Image authentication algorithm based on two-dimensional histogram shifting
WANG Bing, MAO Qian, SU Dongqi
2015, 35(10): 2963-2968. DOI:
10.11772/j.issn.1001-9081.2015.10.2963
To detect whether digital image data is complete and whether an image has been tampered with, an image authentication algorithm based on two-dimensional histogram shifting was proposed to improve the quality of the authenticated image. Firstly, the two-dimensional histogram of the cover image was constructed using two prediction difference calculation methods; the embeddable channels were chosen by preset parameters, the peak positions of the embeddable channels were determined, and the embeddable channels were shifted. Then, the authentication information was embedded into image blocks by histogram shifting. Hierarchical tamper detection was adopted during tamper detection to effectively improve accuracy. The experimental results showed that the algorithm could resist noise attacks, and the average Peak Signal-to-Noise Ratio (PSNR) of the authenticated image was 52.37 dB and 50.33 dB when the parameter was set to 2 and 4 respectively, which improves image quality. The results prove that the algorithm has high security, implements reversible watermarking, and precisely locates tampered regions.
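The building block the abstract extends to two dimensions is one-dimensional histogram shifting on prediction differences. A minimal reversible embed/extract pair; peak selection and block layout are simplified, and the payload here is a plain bit list.

    def embed_bits(diffs, bits, peak):
        # Differences beyond the peak are shifted by 1 to open a slot;
        # differences equal to the peak each carry one payload bit.
        out, it = list(diffs), iter(bits)
        for i, d in enumerate(out):
            if d > peak:
                out[i] = d + 1              # shift to make room
            elif d == peak:
                out[i] = d + next(it, 0)    # embed 0 or 1
        return out

    def extract_bits(diffs, peak):
        bits, restored = [], list(diffs)
        for i, d in enumerate(restored):
            if d in (peak, peak + 1):
                bits.append(d - peak)       # recover the payload bit
                restored[i] = peak
            elif d > peak + 1:
                restored[i] = d - 1         # undo the shift
        return bits, restored               # restored == original diffs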
Autofocus method based on blur difference qualitative analysis
LIN Zhong, HUANG Chenrong, LU Ali
2015, 35(10): 2969-2973. DOI:
10.11772/j.issn.1001-9081.2015.10.2969
In order to solve the low accuracy and large error of the hill-climbing search method when the focal value function is not strictly unimodal, a new autofocus method based on qualitative blur difference analysis was presented. First, a spatial-domain convolution/deconvolution transform was used to compute the blur difference at every point of two probe images corresponding to two different focus positions. Second, a qualitative blur-difference measurement of the two images was made by a voting policy. Then, the search direction was determined by this qualitative measurement. Finally, using a variable step scheme, the search range was gradually narrowed and the number of search steps was reduced until the best focus position was found. Three image sequences at different focus positions were collected by an 18X zoom surveillance camera. The experimental results indicate that, compared with two typical methods based on the focal value function, the proposed method keeps the advantages of hill-climbing search while increasing accuracy, reducing error, and resolving the influence of local minima.
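The variable-step search loop reads roughly as below, with the qualitative blur-difference vote abstracted into a comparator. A sketch with assumed callables: move_to drives the lens, and sharper_side reports which probe position lies nearer the focus.

    def autofocus(move_to, sharper_side, lo, hi, min_step=1):
        # sharper_side(p1, p2) -> -1 if best focus lies toward p1, else +1;
        # in the paper this comes from voting over per-pixel blur
        # differences of the two probe images.
        step = max((hi - lo) // 4, min_step)
        pos = (lo + hi) // 2
        while step >= min_step:
            p1 = max(lo, pos - step)
            p2 = min(hi, pos + step)
            move_to(p1)
            move_to(p2)
            pos = p1 if sharper_side(p1, p2) < 0 else p2
            step //= 2                      # narrow the search range
        move_to(pos)
        return pos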
Semi-supervised composite kernel support vector machine image classification with adaptive parameters
WANG Shuochen, WANG Xili
2015, 35(10): 2974-2979. DOI:
10.11772/j.issn.1001-9081.2015.10.2974
When constructing cluster kernels, semi-supervised composite kernel Support Vector Machine (SVM) methods commonly suffer from high complexity and are not suitable for large-scale image classification; in addition, when K-means is used for image clustering, its parameter is difficult to estimate. To address these problems, a semi-supervised composite kernel SVM image classification method based on Mean-Shift with adaptive parameters was proposed. The method used Mean-Shift for cluster analysis of the pixels to avoid the limitations of K-means image clustering, determined the parameters adaptively from the structural features of the image to avoid volatility of the algorithm, and constructed a Mean Map cluster kernel from the Mean-Shift clustering results to strengthen the likelihood that samples in the same cluster belong to the same category, so that the composite kernel function guides SVM image classification better. The experimental results show that the improved clustering algorithm and parameter selection method capture the image clustering information better; on ordinary and noisy images the classification rate of the proposed method is generally 1-7 percentage points higher than that of other semi-supervised methods, and the method remains applicable to larger images, making image classification more efficient and stable.
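A sketch of the kernel composition described above: a base RBF kernel blended with a cluster kernel that is 1 when two samples fall in the same Mean-Shift cluster. The bandwidth estimate stands in for the paper's image-structure-based parameter choice, and the Mean Map construction is simplified to cluster-membership agreement.

    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def composite_kernel(X, lam=0.5):
        bw = estimate_bandwidth(X, quantile=0.2)     # adaptive parameter
        labels = MeanShift(bandwidth=bw).fit_predict(X)
        K_base = rbf_kernel(X)                       # ordinary feature kernel
        K_cluster = (labels[:, None] == labels[None, :]).astype(float)
        return (1 - lam) * K_base + lam * K_cluster

    # Training with a precomputed kernel:
    # K = composite_kernel(X_train); clf = SVC(kernel='precomputed').fit(K, y_train)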
Fast algorithm for object tracking based on binary feature and structured output support vector machine
LI Xinye, SUN Zhihua, CHEN Mingyu
2015, 35(10): 2980-2984. DOI:
10.11772/j.issn.1001-9081.2015.10.2980
Object tracking algorithms based on discriminative classifiers usually adopt complex appearance models to improve tracking precision in complex scenes, which degrades the real-time performance of tracking. To solve this problem, a binary feature based on halftoning was proposed to describe the object appearance, and the kernel function of the structured output Support Vector Machine (SVM) was improved, so as to realize fast updating and evaluation of the discriminative model. In addition, a model updating strategy based on part matching was proposed to ensure the reliability of the training samples. In experiments conducted on Benchmark, compared with three algorithms, namely Compressive Tracking (CT), Tracking-Learning-Detection (TLD) and Structured Output Tracking with Kernels (Struck), the tracking speed of the proposed algorithm increased by 0.2 times, 4.6 times and 5.7 times respectively. In terms of tracking precision, when the overlap rate threshold was set to 0.6, the success rate of the proposed algorithm reached 0.62 while those of the other three algorithms were all below 0.4; when the position error threshold was set to 10, the precision of the proposed algorithm reached 0.72 while those of the other three were all below 0.5. The experimental results show that the proposed algorithm achieves good robustness and real-time performance in complex scenes such as illumination change, scale change, full occlusion and abrupt motion.
Improved TLD target tracking algorithm based on automatic adjustment of surveyed areas
QU Haicheng, SHAN Xiaochen, MENG Yu, LIU Wanjun
2015, 35(10): 2985-2989. DOI:
10.11772/j.issn.1001-9081.2015.10.2985
The classical Tracking-Learning-Detection (TLD) target tracking algorithm suffers from long detection time caused by an overly large surveyed area, and it handles similar targets poorly. Therefore, an efficient approach called TLD-DO was proposed, in which the surveyed area is automatically adjusted according to the target's velocity. To accelerate the TLD algorithm without reducing tracking precision, a novel algorithm named Double Kalman Filter (DKF) with an optimal surveyed area was constructed based on two Kalman filtering operations with acceleration correction, reducing the detection range of the TLD detector. Meanwhile, the improved method also increases target tracking accuracy by eliminating the interference of similar targets in complex backgrounds. The experimental results show that the tracking effect of the improved method is better than that of the original TLD algorithm under similar-target disturbance; furthermore, the detection speed is improved by a factor of 1.31-3.19 on different videos and scenes. In addition, the improved method is robust to target vibration and distortion.
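The surveyed-area idea can be sketched with a single constant-velocity Kalman filter whose predicted speed scales the detector's scan window; the paper's double filter adds an acceleration-correcting second pass, omitted here.

    import numpy as np

    class CVKalman:
        # Constant-velocity filter over state (x, y, vx, vy).
        def __init__(self, q=1e-2, r=1.0):
            self.x = np.zeros(4)
            self.P = np.eye(4)
            self.F = np.eye(4)
            self.F[0, 2] = self.F[1, 3] = 1.0
            self.H = np.eye(2, 4)
            self.Q, self.R = q * np.eye(4), r * np.eye(2)

        def step(self, z):
            self.x = self.F @ self.x                          # predict
            self.P = self.F @ self.P @ self.F.T + self.Q
            S = self.H @ self.P @ self.H.T + self.R           # update
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x

    def surveyed_area(kf, base=40.0, gain=2.0):
        # Scan window centered on the prediction, grown with predicted speed.
        cx, cy, vx, vy = kf.x
        r = base + gain * np.hypot(vx, vy)
        return (cx - r, cy - r, cx + r, cy + r)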
Constructing method of metamorphic relations in object-oriented software testing
HOU Xuemei, YU Lei, ZHANG Xinglong, LI Zhibo
2015, 35(10): 2990-2994. DOI:
10.11772/j.issn.1001-9081.2015.10.2990
To solve the oracle problem of method sequence calls in object-oriented software testing, a method of constructing metamorphic relations based on algebraic specifications was proposed. Firstly, criteria for constructing metamorphic relations in object-oriented testing were defined based on the algebraic specification. Then the normal-form metamorphic relation construction method in the Generating a Finite number of Test cases (GFT) algorithm was improved according to these criteria. Finally, the improved method was verified by constructing metamorphic relations for the IntStack class. The experimental results showed that, compared with the normal-form construction method, metamorphic relation redundancy was reduced by 66% at the same mutation score. The results indicate that the new method has low metamorphic relation redundancy and improves the efficiency of software testing.
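For a stack like IntStack, a metamorphic relation derived from the algebraic axiom pop(push(s, y)) = s looks like the check below; the stack interface (push, pop, contents) is assumed for illustration, and a harness would run the relation over many generated inputs rather than rely on an explicit expected output.

    def mr_push_pop_identity(stack_factory, xs, y):
        # Build a stack from xs, record its observable state, then verify
        # that push followed by pop leaves that state unchanged.
        s = stack_factory()
        for x in xs:
            s.push(x)
        before = s.contents()
        s.push(y)
        s.pop()
        return s.contents() == before   # must hold for every (xs, y)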
Optimal algorithm for infrared touch screen based on target tracking
ZHOU Aiguo, PAN Qiangbiao
2015, 35(10): 2995-2999. DOI:
10.11772/j.issn.1001-9081.2015.10.2995
Current infrared touch screens have problems with multi-touch recognition and produce zigzag traces when drawing. To deal with these problems, an optimization algorithm based on target tracking was presented, combining a Kalman filter with a validation region algorithm. In data association, the validation region algorithm was used to match correct touch-point estimates to their original target trajectories and to delete wrong estimates. With the touch point motion model, the Kalman filter was adopted for track smoothing and target movement prediction. Compared with the original recognition algorithm, the optimized algorithm adds about 3 μs to single touch point recognition, but the smoothness at trajectory corners is improved and the amount of burrs is reduced by about 60%. The experimental results show that the optimized algorithm solves these problems of the infrared touch screen, demonstrates effectiveness for multi-touch applications and trajectory smoothness, and improves the drawing experience on the infrared touch screen.
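The validation-region check used in the data association step is commonly a chi-square gate on the Mahalanobis distance between a measurement and the filter's prediction. A sketch for 2D touch points; the gate value is a standard statistical threshold, not taken from the paper.

    import numpy as np

    def in_validation_region(z, z_pred, S, gate=9.21):
        # S: innovation covariance from the Kalman filter; 9.21 is the
        # chi-square 99% threshold for 2 degrees of freedom.
        v = np.asarray(z, float) - np.asarray(z_pred, float)
        return v @ np.linalg.inv(S) @ v <= gate

    # Estimates failing the gate are rejected as wrong associations;
    # passing ones extend their original trajectory.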
Vibration measurement system based on ZigBee and Ethernet
CAO Mengchao, LIU Hua
2015, 35(10): 3000-3003. DOI:
10.11772/j.issn.1001-9081.2015.10.3000
Traditional vibration measurement systems have weak network construction ability and slow transmission rates. To solve these problems, a new vibration measurement system was designed using ZigBee and Ethernet. The system has three layers. ZigBee based on XBee-PRO was used to establish communication between collector nodes and router nodes to suit multipoint, long-span measurement. Ethernet based on LwIP was used to transmit the data accurately in real time. On the end device layer, the data were stored on an SD card in a server node and offered to computers. The experimental results show that the three-layer structure combines the strengths of ZigBee's networking ability with Ethernet's high speed and stability. It can not only realize effective control of the measurement points, but also meet the requirements of long-span measurement and real-time data transmission.
Signal denoising method based on singular value decomposition and Savitzky-Golay filter
ZHU Hongyun, WANG Changlong, WANG Jianbin, MA Xiaolin
2015, 35(10): 3004-3007. DOI:
10.11772/j.issn.1001-9081.2015.10.3004
In order to reduce signal noise, a new denoising approach was proposed based on Singular Value Decomposition (SVD) and the Savitzky-Golay filter. The variation of negentropy with Signal-to-Noise Ratio (SNR) was analyzed, the negentropy was treated as an evaluation parameter of noise suppression, and the optimal dimension of the signal's Hankel matrix was obtained. Then the Savitzky-Golay filter was used to process the singular values used to reconstruct the denoised signal; the effect of the Savitzky-Golay filter configuration on the denoising result was studied, and the optimal configuration was determined by defining an error function. The proposed approach was applied to denoising a multi-component periodic signal and a linear frequency modulation signal. The results show that the proposed approach reduces noise effectively and is an effective denoising method.
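The pipeline the abstract outlines, in sketch form: embed the signal in a Hankel matrix of the chosen dimension, smooth the singular-value spectrum with a Savitzky-Golay filter, and reconstruct by anti-diagonal averaging. Dimension and filter settings here are illustrative; the paper selects them via negentropy and an error function.

    import numpy as np
    from scipy.signal import savgol_filter

    def hankel_svd_denoise(x, dim, window=7, polyorder=3):
        # Requires window odd, polyorder < window <= dim <= len(x) - dim + 1.
        n = len(x)
        rows = n - dim + 1
        H = np.stack([x[i:i + dim] for i in range(rows)])  # Hankel matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        s = np.clip(savgol_filter(s, window, polyorder), 0.0, None)
        Hd = (U * s) @ Vt
        out, cnt = np.zeros(n), np.zeros(n)
        for i in range(rows):                 # average the anti-diagonals
            out[i:i + dim] += Hd[i]
            cnt[i:i + dim] += 1
        return out / cnt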
Directory-adaptive journaling mode selective mechanism for Android systems
XU Yuanchao, SUN Fengyun, YAN Junfeng, WAN Hu
2015, 35(10): 3008-3012. DOI:
10.11772/j.issn.1001-9081.2015.10.3008
Unexpected power loss or a system crash can leave a persistent data structure inconsistent when it is being updated. Most existing file systems use consistency techniques such as write-ahead logging or copy-on-write to avoid this, but these mechanisms introduce significant overhead and fail to adapt to the diversity of directories and the heterogeneity of data reliability demands; the existing file-adaptive journaling technique requires modifying legacy applications. Therefore, a directory-adaptive journaling mode selection mechanism for Android systems was proposed to choose journaling modes with strong or weak consistency guarantees according to the reliability demands of different directories. The mechanism is transparent to developers and matches the characteristics of Android systems, so it greatly reduces the consistency guarantee overhead without sacrificing reliability. The experimental results show that the modified file system can identify the directory in which a file resides and choose the appropriate pre-defined journaling mode.
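In user space the selection logic reduces to a policy table from directory prefix to journaling mode. An illustrative mapping only: the directories and mode names below are examples in ext4's vocabulary, not the paper's measured configuration, and the real mechanism lives inside the kernel file system.

    # Strong consistency for app databases and settings, weaker and faster
    # modes for data that can be regenerated.
    JOURNAL_POLICY = {
        "/data/data":  "data=journal",    # strongest guarantee
        "/data/app":   "data=ordered",    # metadata-consistent default
        "/data/cache": "data=writeback",  # fastest, weakest
    }

    def journaling_mode(path, default="data=ordered"):
        # Longest matching directory prefix wins.
        best = max((d for d in JOURNAL_POLICY if path.startswith(d)),
                   key=len, default=None)
        return JOURNAL_POLICY[best] if best else default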
3D geoinformation network sharing based on decomposition and combination method
ZENG Wenhua
2015, 35(10): 3013-3016. DOI:
10.11772/j.issn.1001-9081.2015.10.3013
A large amount of 3D landscape data has accumulated during digital city construction, but differences in standards and technical routes have turned 3D landscape applications into "islands of information". Aiming at the public demand for online 3D landscape sharing, the content and organization standards of 3D landscape data and the sharing requirements were analyzed. Then, network sharing techniques were studied by comparing the sharing and integration mechanisms of 2D and 3D geographic data. Finally, ideas and a technical route for trans-regional online 3D landscape sharing based on a decomposition and combination method were presented. The approach decomposes 3D information into components such as terrain, imagery and models, uses standard geographic information services, and implements the construction and visualization of 3D information with HTML5 on the client. The simulation results demonstrate that the proposed method can effectively restructure 3D landscape models and realize resource sharing among provinces, cities and counties with few changes.