Table of Contents
10 September 2016, Volume 36 Issue 9
Topology-aware congestion control algorithm in data center networks
WANG Renqun, PENG Li
2016, 36(9): 2357-2361. DOI: 10.11772/j.issn.1001-9081.2016.09.2357
To solve the link congestion problem in Data Center Networks (DCN), a Topology-Aware Congestion Control (TACC) algorithm was proposed. Exploiting the multi-dimensional orthogonality and single-dimensional full-mesh properties of the generalized hypercube, a topology-aware strategy was put forward that uses the max-flow min-cut theorem to find disjoint routes over which a flow request can be distributed. The disjoint routes were then adjusted adaptively to satisfy the bandwidth requirement. Finally, the residual bandwidth of each selected path was used as a weight to dynamically adjust the flow distribution across routes, thereby alleviating network congestion, balancing link load and reducing the data-reassembly pressure at the destination. The experimental results show that, compared with the Link Criticality Routing Algorithm (LCRA), the Multipath Oblivious Routing Algorithm (MORA), Min-Cut Multi-Path (MCMP) and the Congestion-Free Routing Strategy (CFRS), the TACC algorithm performs well in link load balancing and deployment time.
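A minimal sketch of the final step described above, assuming a flow is split across already-computed disjoint routes in proportion to their residual bandwidth; the route names and capacities are illustrative, not taken from the paper:

```python
def distribute_flow(flow_demand, routes):
    """Split a flow across disjoint routes proportionally to residual bandwidth.

    routes: dict mapping route id -> residual bandwidth (illustrative units).
    """
    total_residual = sum(routes.values())
    if flow_demand > total_residual:
        raise ValueError("demand exceeds aggregate residual bandwidth")
    # A route with more slack carries a larger share, which balances
    # link load instead of saturating a single path.
    return {r: flow_demand * bw / total_residual for r, bw in routes.items()}

print(distribute_flow(300, {"p1": 400, "p2": 250, "p3": 150}))
# {'p1': 150.0, 'p2': 93.75, 'p3': 56.25}
```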
Coverage optimization algorithm for three-dimensional directional heterogeneous sensor network
WANG Changzheng, MAO Jianlin, FU Lixa, GUO Ning, QU Weixian
2016, 36(9): 2362-2366. DOI: 10.11772/j.issn.1001-9081.2016.09.2362
To address the coverage overlaps and blind spots caused by random node deployment in three-dimensional directional heterogeneous sensor networks, a Particle Swarm Optimization (PSO) based coverage optimization algorithm for such networks, named PSOTDH, was proposed. Built on a new three-dimensional directed perception model, it introduces the concepts of the three-dimensional overlapping centroid, three-dimensional effective centroid and three-dimensional boundary centroid, and uses PSO to optimize the overlapping areas and boundary nodes. PSOTDH changes the sensing directions of the nodes so that the distribution of the three kinds of centroids becomes more uniform, thereby improving coverage. Simulation results show that the proposed algorithm improves the coverage rate by about 27.82% after 25 iterations, which means it can improve coverage quickly and effectively.
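The PSO kernel that PSOTDH builds on can be sketched as follows; here each particle would encode candidate sensing directions and f() stands in for the coverage objective (the real 3D perception model is more involved, and all constants are illustrative):

```python
import random

def pso(f, dim, n_particles=20, iters=25, w=0.7, c1=1.5, c2=1.5):
    """Maximize f over R^dim with a plain global-best PSO."""
    xs = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = max(pbest, key=f)
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vs[i][d] = (w * vs[i][d] + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if f(x) > f(pbest[i]):
                pbest[i] = x[:]
        gbest = max(pbest, key=f)
    return gbest

# Toy objective: "coverage" peaks when every direction parameter is 0.5.
best = pso(lambda x: -sum((xi - 0.5) ** 2 for xi in x), dim=3)
print(best)
```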
Anti-jamming network architecture self-adaption technology based on cooperation and cognition
WANG Haijun, LI Jiaxun, ZHAO Haitao, WANG Shan
2016, 36(9): 2367-2373. DOI: 10.11772/j.issn.1001-9081.2016.09.2367
Cooperative Cognitive Radio Networks (CCRN) currently work under a fixed architecture, which leaves them inflexible and poorly adapted to complex environments. To improve the anti-jamming and anti-damage ability of CCRN, a network architecture self-adaption technology based on cooperation and cognition was proposed. The technology lets a CCRN switch autonomously and flexibly among three architectures (centralized control, self-organization and cooperative relay) to cope with electromagnetic interference, equipment failure and obstructions on the communication link, which greatly enhances network robustness. The switching scheme and the protocol implementation for different node types were described in detail. Moreover, a CCRN testbed consisting of GNU Radio and the second-generation Universal Software Radio Peripheral (USRP2) was set up to measure switching time consumption and throughput. The results show that, compared with a network working under a single fixed architecture, the technology significantly improves the anti-destroying ability, connectivity and Quality of Service (QoS) of CCRN.
Application of parameter-tuning stochastic resonance for detecting weak signal with ultrahigh frequency
HAO Jing, DU Taihang, JIANG Chundong, SUN Shuguang, FU Chao
2016, 36(9): 2374-2380. DOI: 10.11772/j.issn.1001-9081.2016.09.2374
Common nonlinear Stochastic Resonance (SR) systems are restricted to small parameters and fail to detect high-frequency weak signals. To overcome this, a new parameter-tuning SR detection method for high-frequency weak signals was proposed. Firstly, the relationship between the damping coefficient and the signal frequency in a bistable system was derived, and an analysis based on the Kramers rate verified how changing the damping coefficient influences the SR of the system. Then, the influence of the system shape parameters on the SR phenomenon was deduced; SR for high-frequency weak signals was realized by jointly adjusting the damping coefficient and the shape parameters, and the effect of different sampling frequencies on the output spectrum characteristics of the system was discussed, with the results verifying the stability of the algorithm. Finally, experiments on actually received noisy signals show that ultrahigh-frequency weak signals under a strong noise background can be extracted effectively and stably, even when the signal frequency reaches the MHz and GHz range. The proposed method extends the application field of SR-based weak signal detection.
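For readers unfamiliar with bistable SR, a toy Euler-Maruyama simulation of the standard model dx/dt = a*x - b*x^3 + A*sin(2*pi*f0*t) + noise shows the mechanism that the paper rescales to high frequencies via the damping and shape parameters; all values below are illustrative small-parameter choices, not the paper's:

```python
import math, random

a, b = 1.0, 1.0             # shape parameters of the bistable potential
A, f0 = 0.3, 0.01           # weak, low-frequency drive
D = 0.5                     # noise intensity
dt, n = 0.01, 200000

x, out = 0.0, []
for k in range(n):
    t = k * dt
    drift = a * x - b * x ** 3 + A * math.sin(2 * math.pi * f0 * t)
    # Euler-Maruyama step: deterministic drift plus Gaussian noise increment
    x += drift * dt + math.sqrt(2 * D * dt) * random.gauss(0, 1)
    out.append(x)

# Inter-well hopping synchronized with the drive indicates resonance.
print("mean |x| =", sum(abs(v) for v in out) / n)
```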
Rumor spreading model with conformity effect
WAN Youhong, WANG Xiaochu
2016, 36(9): 2381-2385. DOI: 10.11772/j.issn.1001-9081.2016.09.2381
Considering the conformity effect present in real social networks, the dynamic descriptions of the probability of rumor spreading and of a spreader coming to reason were improved. A rumor spreading model with conformity effect was proposed, and the corresponding transmission dynamics equations were established for different network topologies. Numerical analysis of the eventual scale of rumor spreading in this model shows that the eventual scale increases with the initial probability of rumor spreading. Matlab simulations of the model show that the conformity effect accelerates rumor spreading. The Monte Carlo method was used to simulate rumor spreading in small-world and scale-free networks, and the results show that, under the influence of the conformity effect, rumors spread faster and deeper in the scale-free network. Simulations of the improved model on a real social network show that the initial rumor spreader exerts a great influence on the spreading process.
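As a loose illustration only: a mean-field ignorant/spreader/stifler model in which the spreading probability grows with the current spreader fraction is one simple way to encode a conformity effect. The form of lambda(s) and all parameters below are assumptions for demonstration, not the paper's equations:

```python
def simulate(lam0=0.3, alpha=0.5, delta=0.1, dt=0.01, steps=10000):
    """Euler-integrate an ISR rumor model with a conformity-boosted rate."""
    i, s, r = 0.99, 0.01, 0.0          # ignorant, spreader, stifler fractions
    for _ in range(steps):
        lam = lam0 * (1 + alpha * s)    # conformity: more spreaders, more spreading
        di = -lam * i * s               # ignorants converted by spreaders
        dr = delta * s + lam * s * (s + r)  # forgetting + stifling on contact
        i += di * dt
        s += (-di - dr) * dt
        r += dr * dt
    return i, s, r

print(simulate())          # with conformity
print(simulate(alpha=0))   # without: smaller eventual spread (1 - i)
```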
Friend recommendation method for mobile social networks
WANG Shanshan, LENG Supeng
2016, 36(9): 2386-2389. DOI: 10.11772/j.issn.1001-9081.2016.09.2386
For friend recommendation in Mobile Social Networks (MSN), a new method based on multi-dimensional similarity was proposed. The method is content-based but not confined to matching a single dimension: it judges users' similarity along the three dimensions of space, time and interest, and then combines them into a comprehensive judgment through a "difference distance". The method recommends users to a target user when they are consistent in geographical position, online time and interests. The experimental results show that, when used for friend recommendation in mobile social networks, the method reaches a precision of 80% and an efficiency of 60%, clearly outperforming friend recommendation methods based on a single dimension; moreover, by adjusting the three dimensional weights, the method can be applied to a variety of mobile social networks with different characteristics.
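A hedged sketch of the kind of three-dimensional similarity fusion the method describes, with illustrative per-dimension measures (distance decay, Jaccard overlaps) and adjustable weights; the paper's actual "difference distance" combination may differ:

```python
import math

def similarity(u, v, w=(0.4, 0.3, 0.3)):
    """Fuse spatial, temporal and interest similarity with tunable weights."""
    # spatial: decays with distance between locations (illustrative units)
    s_space = 1.0 / (1.0 + math.dist(u["loc"], v["loc"]))
    # temporal: Jaccard overlap of online hours
    s_time = len(u["hours"] & v["hours"]) / len(u["hours"] | v["hours"])
    # interest: Jaccard overlap of interest tags
    s_int = len(u["tags"] & v["tags"]) / len(u["tags"] | v["tags"])
    return w[0] * s_space + w[1] * s_time + w[2] * s_int

alice = {"loc": (0, 0), "hours": {20, 21, 22}, "tags": {"music", "ski"}}
bob   = {"loc": (1, 1), "hours": {21, 22, 23}, "tags": {"ski", "chess"}}
print(round(similarity(alice, bob), 3))
```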
Performance analysis of OSTBC-MIMO systems over i.n.i.d. generalized-K fading channels
HE Jie, XIAO Kun
2016, 36(9): 2390-2395. DOI: 10.11772/j.issn.1001-9081.2016.09.2390
Concerning the performance of the Orthogonal Space-Time Block Code based Multiple-Input Multiple-Output (OSTBC-MIMO) system over independent but not necessarily identically distributed (i.n.i.d.) generalized-K fading channels, a system model adopting the M-QAM modulation scheme was established. The equivalent Signal-to-Noise Ratio (SNR) at the receiver was approximated by the product of two variables composed of multiple independent Gamma-distributed random variables. On that basis, the probability density function of the equivalent SNR was derived, together with expressions for the average symbol error probability, channel capacity and outage probability. The simulation results show that, in addition to the parameter m, the parameter k also has a significant impact on overall system performance, producing non-negligible differences between the i.n.i.d. and i.i.d. generalized-K fading channels.
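Since a generalized-K power gain is the product of two normalized Gamma variates, a quick Monte Carlo check (with illustrative SNR and threshold values) shows how the shaping parameters m and k both drive outage:

```python
import random

def outage_prob(m, k, avg_snr_db=10.0, thr_db=3.0, trials=200000):
    """Estimate P(SNR < threshold) for a generalized-K channel gain."""
    avg_snr, thr = 10 ** (avg_snr_db / 10), 10 ** (thr_db / 10)
    count = 0
    for _ in range(trials):
        # product of two unit-mean Gamma variates: fading (m) x shadowing (k)
        gain = random.gammavariate(m, 1 / m) * random.gammavariate(k, 1 / k)
        if avg_snr * gain < thr:
            count += 1
    return count / trials

for m, k in [(1, 1), (2, 1), (2, 5)]:
    print(m, k, outage_prob(m, k))
# Larger k (milder shadowing) visibly lowers outage, matching the
# conclusion that k, like m, has a significant impact.
```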
Dynamic resource configuration based on multi-objective optimization in cloud computing
DENG Li, YAO Li, JIN Yu
2016, 36(9): 2396-2401. DOI: 10.11772/j.issn.1001-9081.2016.09.2396
Most resource reallocation methods in cloud computing aim at reducing the number of active physical nodes for green computing, but ignore the node stability of the resulting virtual machine placement. Using the varying workload information of applications, a new virtual machine placement method based on multi-objective optimization was proposed that considers both the overhead of virtual machine reallocation and the stability of the new placement, and a Multi-Objective optimization based Genetic Algorithm for Node Stability (MOGANS) was designed to solve it. The simulation results show that the stability time of the Virtual Machine (VM) placement obtained by MOGANS is 10.42 times that of the placement obtained by GA-NN (Genetic Algorithm for greeN computing and Numbers of migration). Meanwhile, MOGANS strikes a good balance between stability time and migration overhead.
Coordinator selection strategy based on RAMCloud
WANG Yuefei, YU Jiong, LU Liang
2016, 36(9): 2402-2408. DOI: 10.11772/j.issn.1001-9081.2016.09.2402
Focusing on the issue that ZooKeeper cannot meet RAMCloud's requirements of low latency and quick recovery, a Coordinator Election Strategy (CES) based on RAMCloud was proposed. First of all, according to the network environment of RAMCloud and factors of the coordinator itself, the performance indexes of the coordinator were divided into individual indexes and coordinator indexes, and models were built for each. Next, the operation of RAMCloud was divided into an error-free running period and a data recovery period, a fitness function was built for each, and the two were merged into a total fitness function according to their time ratio. Lastly, based on the fitness values of the RAMCloud Backup Coordinators (RBC), a new operator combining randomness with the capacity to select an ideal target was proposed: CES first eliminates poorly performing RBCs by screening, and as the range of choice narrows, it selects the final RBC from the collection of ideal coordinators by roulette. The experimental results show that, compared with other RBCs in the NS2 simulation environment, the coordinator selected by CES decreases latency by 19.35%; compared with ZooKeeper in the RAMCloud environment, it reduces recovery time by 10.02%. In practical RAMCloud applications, CES can choose a coordinator with better performance, ensuring low latency and quick recovery.
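The screening-then-roulette step can be sketched as below, with illustrative fitness totals from the merged fitness function; the screening rule (keep the top half) is an assumption for demonstration:

```python
import random

def select_coordinator(fitness, keep_ratio=0.5):
    """Screen out low-fitness RBCs, then roulette-select among the rest."""
    ranked = sorted(fitness.items(), key=lambda kv: kv[1], reverse=True)
    pool = ranked[: max(1, int(len(ranked) * keep_ratio))]  # screening
    total = sum(f for _, f in pool)
    r = random.uniform(0, total)
    acc = 0.0
    for node, f in pool:            # roulette: chance proportional to fitness
        acc += f
        if r <= acc:
            return node
    return pool[-1][0]

print(select_coordinator({"rbc1": 0.9, "rbc2": 0.7, "rbc3": 0.2, "rbc4": 0.1}))
```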
Design and implementation of QoE measuring tool for personal cloud storage services
YUAN Bin, LI Wenwei
2016, 36(9): 2409-2415. DOI: 10.11772/j.issn.1001-9081.2016.09.2409
With users' growing demand for network storage, a large number of Personal Cloud Storage (PCS) service platforms are emerging, and the Quality of Experience (QoE) perceived by end users has become a common concern of both users and service providers. The factors affecting QoE in personal cloud storage were analyzed from the perspective of the different characteristics of control flows and data flows. Several indicators that reasonably evaluate PCS QoE from the end user's perspective were proposed, and an accurate measurement method for them was designed. A QoE measuring tool for personal cloud services was implemented based on passive measurement, and solutions were given for implementation issues such as capturing the packets of a specific process and classifying network flows. The experimental results show that the measuring tool runs robustly and obtains accurate results, and it can be used to measure the QoE of personal cloud services on terminals.
Intelligent algorithm acceleration strategy for nonlinear 0-1 programming based on improved Markov neighborhood
LI Weipeng, ZENG Jing, ZHANG Guoliang
2016, 36(9): 2416-2421. DOI: 10.11772/j.issn.1001-9081.2016.09.2416
In order to reduce the time consumed in solving large-scale nonlinear 0-1 programming problems, an intelligent algorithm acceleration strategy based on an improved Markov neighborhood was presented by analyzing the characteristics of nonlinear 0-1 programming and the Markov process of intelligent algorithms. First, a rewritten model of the nonlinear 0-1 programming problem was given. Next, an improved Markov neighborhood was constructed on the rewritten model, and the reachable probability between two random states under this neighborhood, together with its conditions, was derived and proven. Combining a further analysis of the structure of nonlinear 0-1 programming with the improved Markov neighborhood, a recursive updating strategy for the constraint and objective functions was designed to accelerate intelligent algorithms. The experimental results illustrate that the proposed strategy improves the operating efficiency of intelligent algorithms while producing search results consistent with the original algorithms.
Design and implementation of automatic C code parallelization based on JavaCC
LIU Youyao, YANG Pengcheng
2016, 36(9): 2422-2426. DOI: 10.11772/j.issn.1001-9081.2016.09.2422
Aiming at the problem that a large amount of legacy code cannot be reused, a new compilation tool was designed to convert serial C code into hybrid parallel code based on MPI+OpenMP, which reduces the development cost of parallel programming. First of all, by customizing the Java Compiler Compiler (JavaCC), a lexical and syntax analyzer for the C language was implemented; it parses the source code and generates the abstract syntax tree. Secondly, based on the abstract syntax tree, the control dependence and data dependence of the source code were analyzed to produce parallelizable statement-block partitions. Thirdly, the object code was obtained according to the proposed parallel code generation method. Finally, a simulation environment for the target code was built on Visual Studio 2010. The experimental results show that the tool can effectively parallelize serial code automatically, and compared with hand-written parallel code, the error of its speedup is between 8.2% and 18.4%.
Improvement of penalty factor in suppressed fuzzy C-means clustering
XIAO Mansheng, XIAO Zhe
2016, 36(9): 2427-2431. DOI: 10.11772/j.issn.1001-9081.2016.09.2427
Aiming at the slow convergence of the general Fuzzy C-Means (FCM) algorithm and its weak real-time performance when processing large data, an improved method that applies a penalty factor to sample memberships was proposed. Firstly, the characteristics of Suppressed Fuzzy C-Means (SFCM) clustering were analyzed and the trigger condition for adjusting sample memberships by the penalty factor was studied; a dynamic membership adjustment scheme for SFCM based on the penalty factor was then designed. Under this algorithm, samples are "moved to the poles" to achieve rapid convergence. Theoretical analysis and experimental results show that, under the same initial conditions, the execution time efficiency of the improved algorithm is improved by 40% and 10% respectively compared with traditional FCM and Optimal-Selection-based SFCM (OS-SFCM), while the clustering accuracy is also improved.
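The suppression step can be illustrated with the classical SFCM update, where the winning membership is boosted and the rest rescaled once a trigger threshold is exceeded; the fixed alpha and trigger below are placeholders for the paper's dynamically derived values:

```python
def suppress(memberships, alpha=0.5, trigger=0.6):
    """Classical SFCM suppression: boost the winner, rescale the rest."""
    p = max(range(len(memberships)), key=lambda j: memberships[j])
    if memberships[p] < trigger:
        return memberships                 # penalty factor not triggered
    # Winner gets 1 - alpha + alpha*u_p; others shrink to alpha*u_j,
    # which "moves the sample to the poles" while keeping the sum at 1.
    return [1 - alpha + alpha * u if j == p else alpha * u
            for j, u in enumerate(memberships)]

print(suppress([0.7, 0.2, 0.1]))  # -> [0.85, 0.1, 0.05], still sums to 1
```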
Data aggregation scheme for wireless sensor network to timely determine compromised nodes
WANG Jie, LU Jianzhu, ZENG Xiaofei
2016, 36(9): 2432-2437. DOI: 10.11772/j.issn.1001-9081.2016.09.2432
In Wireless Sensor Networks (WSN), when compromised sensor nodes disturb data and transmission, it is particularly important to determine the compromised nodes in time and take appropriate measures to ensure the security of the entire network. Therefore, a data aggregation scheme for WSNs that can timely determine compromised sensor nodes was proposed. First, stateful public key encryption, symmetric key encryption, a pseudo-random function and a message authentication code were used to encrypt the plaintext twice. Secondly, the cluster head node authenticated the ciphertext and filtered out false data. Then, the cluster head node decrypted the ciphertext and sent the identifiers of the compromised nodes to the base station. At last, the base station decrypted the ciphertext to recover the plaintext and authenticated the data. The proposed scheme solves the problem of erroneous aggregation values caused by compromised nodes, filters false data in time and determines the compromised sensor nodes. The analysis shows that the scheme is secure under the secure one-way hash function, the message authentication code and the Discrete Logarithm Problem (DLP) assumption, and also greatly reduces the communication and computational costs. Simulation results show that, compared with a secure aggregation scheme for WSN using stateful public key cryptography, the computational cost, the communication cost and the time consumed to determine compromised sensor nodes are decreased by at least 19.96%, 36.81% and 28.10%, respectively.
Frequency self-adjusting algorithm for network intrusion detection based on target prediction
YANG Zhongming, LIANG Benlai, QIN Yong, CAI Zhaoquan
2016, 36(9): 2438-2441. DOI: 10.11772/j.issn.1001-9081.2016.09.2438
In clusters, intruders conventionally increase attack efficiency by concentrating on a specific target, so scheduling computing resources accordingly is an effective way to improve detection efficiency. A frequency self-adjusting algorithm for Network Intrusion Detection Systems (NIDS) based on target prediction, named DFSATP, was proposed. By detecting and analyzing the collected packets, packets sent to potentially attacked targets are marked as high-risk and all other packets as low-risk. The efficiency of the NIDS is improved by inspecting high-risk packets at high frequency and low-risk packets at low frequency, so the detection rate of abnormal data is also increased to some extent under limited computing resources. The simulation results show that the detection rate of abnormal packets is increased by the detection frequency adjustment of the NIDS using DFSATP.
Global directional search algorithm for efficiently adapting NLBF sequence ciphers
WANG Zhouchuang, DAI Zibin, LI Wei
2016, 36(9): 2442-2446. DOI: 10.11772/j.issn.1001-9081.2016.09.2442
In view of the lack of universality and the high resource consumption of existing sequence cipher adaptation algorithms, a global directional search algorithm based on the AND terms of Non-Linear Boolean Functions (NLBF) and truth tables was proposed. Firstly, reasonable adaptation models for Look-Up Tables (LUT) were obtained by analyzing the ratio of terms in NLBFs. Then a classification algorithm for Boolean functions was established, which searches all AND terms from high order to low order and "absorbs" or "unites" the terms. Finally, a configuration generating algorithm based on truth tables was obtained, which generates the configuration information needed to compute the NLBF by traversing the truth tables. Existing NLBF sequence ciphers can be adapted by the proposed classification algorithm, and commonly used algorithms such as ACH-128, Trivium and Grain are especially easy to adapt. At the same time, the LUT resource consumption is obviously lower than that of adaptations based on the Shannon decomposition theory and genetic algorithms; the consumption results show that the adaptation consumes the most resources with 4-input look-up tables and the least with 6-input ones.
Trusted network management model based on clustering analysis
XIE Hong'an, LI Dong, SU Yang, YANG Kai
2016, 36(9): 2447-2451. DOI: 10.11772/j.issn.1001-9081.2016.09.2447
To improve the availability of dynamic trust models in trusted networks, a trusted network management model based on clustering analysis was built. By introducing clustering analysis into the traditional trust model, behavior expectations were used to describe the trustworthiness of user behavior: clustering analysis of a user's historical data was used to build a behavior expectation model, against which the user's behaviors were evaluated. Finally, the trust evaluation results were used to manage the network users. The experimental results show that the proposed model generates trust evaluation results reliably and detects and isolates malicious users rapidly; it has better accuracy and efficiency than the traditional model, improving network reliability.
Proxy re-encryption scheme based on conditional asymmetric cross-cryptosystem
HAO Wei, YANG Xiaoyuan, WANG Xu'an, WU Liqiang
2016, 36(9): 2452-2458. DOI: 10.11772/j.issn.1001-9081.2016.09.2452
In order to reduce the decryption burden of mobile devices in cloud applications, an asymmetric cross-cryptosystem proxy re-encryption scheme with multiple conditions was proposed by using an Identity-Based Broadcast Encryption (IBBE) scheme, an Identity-Based Encryption (IBE) scheme and a conditional identity-based broadcast proxy re-encryption scheme. In this scheme, the sender encrypts information into IBBE ciphertext, which can be sent to multiple recipients at a time. Any one of the receivers can authorize a multi-condition re-encryption key to the proxy, which re-encrypts the original ciphertexts meeting the conditions into IBE ciphertexts that a new receiver can decrypt. The scheme realizes asymmetric proxy re-encryption from the IBBE cryptosystem to the IBE cryptosystem and allows the proxy to re-encrypt the original ciphertext selectively according to the conditions, which prevents the proxy from re-encrypting unnecessary original ciphertexts. The scheme not only improves the re-encryption efficiency of the proxy, but also saves the time for the receiver to obtain the correct plaintext.
Signcryption scheme based on low-density generator-matrix code
LIU Mingye, HAN Yiliang, YANG Xiaoyuan
2016, 36(9): 2459-2464. DOI: 10.11772/j.issn.1001-9081.2016.09.2459
Code-based cryptography has a natural advantage in resisting attacks from quantum computers. Considering the long ciphertexts and large key sizes of traditional Goppa-code-based cryptography, a provably secure signcryption scheme was constructed from Low-Density Generator-Matrix (LDGM) codes and hash functions. The generator matrix of an LDGM code is sparse, which effectively reduces the amount of data, and the hash functions are computationally efficient. The scheme satisfies IND-CCA2 (INDistinguishability under Adaptive Chosen Ciphertext Attacks) and EUF-CMA (Existential UnForgeability under Chosen Message Attacks) security in the random oracle model. While guaranteeing data confidentiality and integrity, its ciphertext is 25% shorter than in the traditional "sign then encrypt" case; compared with the "two birds one stone" and SCS signcryption schemes, its computational efficiency is significantly improved.
Survey on big data storage frameworks and algorithms
YANG Junjie, LIAO Zhuofan, FENG Chaochao
2016, 36(9): 2465-2471. DOI: 10.11772/j.issn.1001-9081.2016.09.2465
With the growing demand for big data computing, the processing speed of clusters needs to improve rapidly. However, the processing performance of existing big data frameworks can no longer keep up with the development of computing. Since the storage framework is distributed, the placement of the data to be processed has become one of the key factors affecting cluster performance. Firstly, the structure of current distributed file systems was introduced. Then the popular data placement algorithms were summarized and classified according to different optimization goals, such as network load balancing, energy saving and fault tolerance. Finally, future challenges and research directions in storage frameworks and algorithms were presented.
Homogeneous pattern discovery of time series based on derivative series
ZOU Lei, GAO Xuedong
2016, 36(9): 2472-2474. DOI: 10.11772/j.issn.1001-9081.2016.09.2472
As the basis of time series data mining tasks such as indexing, clustering, classification and anomaly detection, subsequence matching has been widely researched. Since traditional time series subsequence matching only targets exactly or approximately identical patterns, a new sequence pattern with similar tendency, called the time series homogeneous pattern, was defined. Judgment rules for time series homogeneous patterns were given through mathematical derivation, and an algorithm for discovering them was proposed based on those rules. Firstly, the raw time series were preprocessed. Secondly, homogeneous patterns were matched on the segmented and fitted subsequences. Since practical data cannot satisfy the theoretical constraints, a homogeneous pattern tolerance parameter was defined to make homogeneous pattern mining possible on practical data. The experimental results show that the proposed algorithm can effectively mine time series homogeneous patterns.
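A minimal sketch of tendency matching on derivative series, assuming two subsequences count as homogeneous when their slope signs agree on all but a tolerated fraction of segments; the tolerance plays the same relaxation role as in the paper, while the matching rule itself is simplified:

```python
def derivative(series):
    """First-difference (discrete derivative) of a time series."""
    return [b - a for a, b in zip(series, series[1:])]

def homogeneous(x, y, tolerance=0.2):
    """True if the slope signs of x and y disagree on <= tolerance of segments."""
    dx, dy = derivative(x), derivative(y)
    n = min(len(dx), len(dy))
    sign = lambda v: (v > 0) - (v < 0)
    mismatches = sum(sign(dx[i]) != sign(dy[i]) for i in range(n))
    return mismatches / n <= tolerance

print(homogeneous([1, 2, 4, 3, 5], [10, 12, 15, 14, 18]))  # True: same ups/downs
print(homogeneous([1, 2, 4, 3, 5], [5, 4, 3, 2, 1]))       # False
```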
Composite classification model learned on multiple isolated subdomains for imbalanced classes
JIN Yan, PENG Xinguang
2016, 36(9): 2475-2480. DOI: 10.11772/j.issn.1001-9081.2016.09.2475
Starting from the regional distribution characteristics, a composite classification model learned on multiple isolated subdomains was proposed to further study the class imbalance problem. In the subdomain division stage, each class was described as ultra-small spheres by an improved Support Vector Data Description (SVDD) algorithm, the class domain was divided into intensive and sparse domains, and instances found near class boundaries composed the class-overlapping domains. In the subdomain cleanup stage, noise data was removed by an improved K-Nearest Neighbor (KNN) method according to sample availability parameters related to domain tightness. By sequentially combining the classifiers learned on the isolated subdomains, the Composite Classification model (CCRD) was generated. In comparison with similar algorithms including SVM (Support Vector Machine), KNN, C4.5 and MetaCost, CCRD obviously improves the accuracy on positive instances without increasing mistakes on negative instances; in comparison with SMOTE (Synthetic Minority Over-sampling TEchnique) sampling, CCRD improves the misjudgment of negative instances without affecting the classification of positive instances; in experiments on five datasets, the classification performance of CCRD is also improved, especially on Haberman_sur. Experimental results indicate that the composite classification model learned on multiple isolated subdomains has excellent classification capability and is an effective method for imbalanced datasets.
Design and implementation of a performance comparison framework for higher-order code elimination
ZHAO Di, HUA Baojian, ZHU Hongjun
2016, 36(9): 2481-2485. DOI: 10.11772/j.issn.1001-9081.2016.09.2481
In functional language compilation, closure conversion and defunctionalization are two widely used methods for eliminating higher-order code. To improve the operational efficiency of functional programming languages, a compiler framework for comparing the performance of code generated by closure conversion and by defunctionalization was proposed, focusing on the higher-order code elimination phase. Both transformations were used in parallel in the comparison framework, which has a diamond structure. A functional programming language named FUN and a compiling system for FUN based on the framework were presented. Comparison experiments between closure conversion and defunctionalization were conducted on the proposed system using typical test cases, and the results were compared in code size and operating efficiency. The results suggest that, compared with closure conversion, defunctionalization produces shorter and faster target code: the amount of code can be decreased by up to 33.76% and performance improved by up to 69.51%.
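A tiny illustration (in Python rather than FUN) of what defunctionalization does: every lambda becomes a tagged record, and a single first-order apply function dispatches on the tag. Closure conversion would instead pair a code pointer with an environment record. This is a pedagogical sketch, not the framework's generated code:

```python
# Higher-order source being eliminated:  compose(add1, double)(10)

def apply_fn(fn, arg):
    """First-order dispatcher replacing all higher-order calls."""
    tag, env = fn
    if tag == "add1":
        return arg + 1
    if tag == "double":
        return arg * 2
    if tag == "compose":        # env holds the two defunctionalized closures
        f, g = env
        return apply_fn(f, apply_fn(g, arg))
    raise ValueError(tag)

compose = ("compose", (("add1", ()), ("double", ())))
print(apply_fn(compose, 10))    # 21: add1(double(10))
```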
Software defect detection algorithm based on dictionary learning
ZHANG Lei, ZHU Yixin, XU Chun, YU Kai
2016, 36(9): 2486-2491. DOI: 10.11772/j.issn.1001-9081.2016.09.2486
Since existing dictionary learning methods cannot effectively construct a discriminant structured dictionary, a dictionary learning method with both discriminative and representative ability was proposed and applied to software defect detection. Firstly, the sparse representation model was redesigned by adding a discriminant constraint term to the objective function, so that each class dictionary gains strong representation ability for samples of its own class but poor representation ability for samples of other classes. Secondly, a Fisher-criterion discriminant term was added to make the representation coefficients discriminative across classes. Finally, the designed dictionary learning model was optimized to obtain a strongly structured and sparsely representative dictionary. The NASA defect dataset was used as the experimental data; compared with Principal Component Analysis (PCA), Logistic Regression (LR), decision tree, Support Vector Machine (SVM) and a typical dictionary learning method, both the accuracy and the F-measure of the proposed method were increased. Experimental results indicate that the proposed method improves detection accuracy along with classifier performance.
Test data augmentation method based on adaptive particle swarm optimization algorithm
WANG Shuyan, WEN Chunyan, SUN Jiaze
2016, 36(9): 2492-2496. DOI: 10.11772/j.issn.1001-9081.2016.09.2492
It is difficult for the original test data to meet the requirements of testing a new software version in regression testing; thus a test data augmentation method based on the Adaptive Particle Swarm Optimization (APSO) algorithm was proposed. Firstly, according to the similarity between the cross path of the original test data in the new program version and the target path, suitable test data was chosen from the original test data as the individuals of the initial population. Secondly, by comparing the differing sub-paths of the initial test data's cross path and the target path, the input components causing the deviation were identified. Finally, a fitness function was created according to path similarity, and new data was generated by applying the APSO algorithm to those input components. Compared with genetic-algorithm-based and random test data augmentation methods on four benchmark programs, the augmentation efficiency of the proposed method was improved on average by approximately 56% and 81% respectively. The experimental results show that the proposed method can effectively increase the efficiency and stability of test data augmentation in regression testing.
Pheromone updating strategy of ant colony algorithm for multi-objective test case prioritization
XING Xing, SHANG Ying, ZHAO Ruilian, LI Zheng
2016, 36(9): 2497-2502. DOI: 10.11772/j.issn.1001-9081.2016.09.2497
Ant Colony Optimization (ACO) converges slowly and is easily trapped in local optima when solving Multi-Objective Test Case Prioritization (MOTCP). Thus, a pheromone updating strategy based on the Epistatic-domain Test case Segment (ETS) was proposed. In this scheme, the ETS in the test case sequence, which determines the fitness value, was selected as the scope of pheromone updating. Then, the pheromone on the trail was updated according to the fitness value increments between test cases and the execution times of the test cases in the ETS. To further improve the efficiency of ACO and reduce the time consumed when ants visit test cases one by one, the end point of an ant's tour was reset by estimating the length of the ETS. The experimental results show that, compared with the original ACO and NSGA-II, the optimized ACO converges faster and obtains better Pareto-optimal solution sets for MOTCP.
Parallel cyclic redundancy check Verilog program generating method based on Matlab
XUE Jun, DUAN Fajie, JIANG Jiajia, LI Yanchao, YUAN Jianfu, WANG Xianquan
2016, 36(9): 2503-2507. DOI: 10.11772/j.issn.1001-9081.2016.09.2503
During underwater signal transmission, computing the Cyclic Redundancy Check (CRC) code on a Field Programmable Gate Array (FPGA) with the traditional serial method cannot meet the demand for fast computation, while the much faster parallel checking method is hard to apply in practical engineering because of its programming complexity. To meet the transmission speed requirement while eliminating the programming difficulty and time cost, a method was proposed that uses Matlab to automatically generate parallel CRC code for data frames of any length. Based on the matrix method, it completes all the mathematical derivation and calculation in Matlab and then generates a parallel CRC calculation program conforming to the Verilog HDL grammar rules. Finally, the CRC program statements generated by Matlab were first simulated in Quartus II 9.0 and then demonstrated by data transmission experiments on a civil towed sonar system. The results prove the validity of the proposed method: program generation finishes in tens of seconds, and the CRC module accurately computes the CRC code of every long data frame defined by the transmission protocol within the requested time.
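The serial CRC update is linear over GF(2), which is exactly what the matrix method exploits: the response to the old state and the response to the input word can be computed independently and XORed. A sketch with an illustrative CRC-8 polynomial (0x07) verifies that processing a byte at a time this way matches bit-serial computation:

```python
POLY, N, W = 0x07, 8, 8   # poly x^8+x^2+x+1, 8-bit state, 8-bit input words

def serial_step(state, bit):
    """One bit of a standard MSB-first CRC LFSR."""
    fb = ((state >> (N - 1)) & 1) ^ bit
    state = (state << 1) & ((1 << N) - 1)
    return state ^ (POLY * fb)           # taps applied when feedback is 1

def parallel_step(state, word):
    """One W-bit step via superposition: next = F(state) XOR G(word)."""
    probe = state
    for _ in range(W):                   # response to the state alone
        probe = serial_step(probe, 0)
    impulse = 0
    for i in range(W):                   # response to the input word alone
        impulse = serial_step(impulse, (word >> (W - 1 - i)) & 1)
    return probe ^ impulse

s1 = s2 = 0
for byte in [0x31, 0x32, 0x33]:          # "123"
    for i in range(8):
        s1 = serial_step(s1, (byte >> (7 - i)) & 1)
    s2 = parallel_step(s2, byte)
print(hex(s1), hex(s2))                  # identical: serial and parallel agree
```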
Survey of convolutional neural network
LI Yandong, HAO Zongbo, LEI Hang
2016, 36(9): 2508-2515. DOI: 10.11772/j.issn.1001-9081.2016.09.2508
In recent years, the Convolutional Neural Network (CNN) has achieved a series of breakthrough research results in image classification, object detection, semantic segmentation and other fields. The powerful feature learning and classification ability of CNN has attracted wide attention, so a review of the work in this research field is of great value. A brief history and the basic framework of CNN were introduced. Recent research on CNN was thoroughly summarized and analyzed in four respects: the over-fitting problem, network structure, transfer learning and theoretical analysis. State-of-the-art CNN-based methods for various applications were surveyed and discussed. Finally, some shortcomings of current CNN research were pointed out and new insights for future research were presented.
Enhanced multi-species-based particle swarm optimization for multi-modal functions
XIE Hongxia, MA Xiaowei, CHEN Xiaoxiao, XING Qiang
2016, 36(9): 2516-2520. DOI: 10.11772/j.issn.1001-9081.2016.09.2516
It is difficult to balance local exploitation and global exploration when optimizing multi-modal functions, so an Enhanced Multi-Species-based Particle Swarm Optimization (EMSPSO) was proposed. An improved multi-species evolution strategy was introduced into Species-based Particle Swarm Optimization (SPSO): several independently evolving species were established by selecting seeds among the individual optima, which improves the stability of convergence. A redundant-particle reinitialization strategy was introduced to improve particle utilization and to enhance the global search capability and search efficiency of the algorithm. Meanwhile, to avoid missing optimal extreme points during optimization, the velocity update formula was also improved to effectively balance the local exploitation and global exploration capabilities of the algorithm. Finally, six typical test functions were selected to test the performance of EMSPSO. The experimental results show that EMSPSO achieves a high success rate in multi-modal optimization and excellent global extremum search performance.
Deep belief network algorithm based on multi-innovation theory
LI Meng, QIN Pingle, LI Chuanpeng
2016, 36(9): 2521-2525. DOI: 10.11772/j.issn.1001-9081.2016.09.2521
When the Deep Belief Network (DBN) algorithm corrects the connection weights and biases of the network by back propagation, it suffers from small gradients, a low learning rate and slow error convergence. To address this, a new algorithm called Multi-Innovation DBN (MI-DBN) was proposed by combining the standard DBN algorithm with multi-innovation theory. The back propagation process of the standard DBN was remodeled to make full use of the innovations of multiple previous cycles, whereas the original algorithm uses only the single current innovation; the convergence rate of the error is thus significantly increased. MI-DBN was compared with other representative classifiers through dataset classification experiments. The results show that MI-DBN converges faster than the other algorithms; in particular, on the MNIST and Caltech101 datasets, MI-DBN makes the fewest recognition errors among all the compared algorithms.
Query expansion with semantic vector representation
LI Yan, ZHANG Bowen, HAO Hongwei
2016, 36(9): 2526-2530. DOI: 10.11772/j.issn.1001-9081.2016.09.2526
To address the lack of semantic relations between expansion terms and original queries in traditional query expansion for professional domains, a query expansion approach based on semantic vector representations was proposed. First, a semantic vector representation model was designed to learn the semantic vectors of words from their contexts in the corpus. Then, similarities between words were computed from their semantic representations, and the most similar words in the corpus were selected as expansion terms to enrich the queries. Finally, a biomedical literature search system was built on this expansion approach and compared, with significance analysis, against traditional query expansion approaches based on Wikipedia or WordNet and against the BioASQ participants. The experimental results indicate that the proposed approach outperforms the baselines, increasing mean average precision by at least one percentage point, and the search system performs significantly better than the BioASQ participants.
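A minimal sketch of the expansion step, assuming word vectors have already been learned from the corpus; the toy 3-dimensional vectors stand in for embeddings trained on biomedical text:

```python
import math

# Toy stand-ins for learned semantic vectors.
vectors = {
    "tumor":    [0.9, 0.1, 0.0],
    "neoplasm": [0.85, 0.15, 0.05],
    "protein":  [0.1, 0.9, 0.2],
    "kinase":   [0.15, 0.85, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def expand(query_term, k=1):
    """Append the k most similar corpus terms to the query."""
    q = vectors[query_term]
    ranked = sorted((w for w in vectors if w != query_term),
                    key=lambda w: cosine(q, vectors[w]), reverse=True)
    return [query_term] + ranked[:k]

print(expand("tumor"))  # ['tumor', 'neoplasm']
```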
Collaborative filtering recommendation based on entropy and timeliness
LIU Jiangdong, LIANG Gang, FENG Cheng, ZHOU Hongyu
2016, 36(9): 2531-2534. DOI: 10.11772/j.issn.1001-9081.2016.09.2531
Aiming at the noise data problem in collaborative filtering recommendation, a user entropy model was put forward. Borrowing the concept of entropy from information theory, the model measures the information content of each user and filters noise data by computing users' entropy and removing users with low entropy. Meanwhile, the user entropy model was combined with an item timeliness model, which derives the timeliness of each item from the contextual information of the rating data, alleviating the data sparsity problem of collaborative filtering. The experimental results show that the proposed algorithm can effectively filter out noise data and improve recommendation accuracy, with a recommendation precision about 1.1% higher than that of the basic algorithm.
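The user entropy filter itself is straightforward; a sketch with illustrative ratings and an illustrative cutoff threshold:

```python
import math
from collections import Counter

def user_entropy(ratings):
    """Shannon entropy of a user's rating distribution (bits)."""
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

users = {"u1": [5, 5, 5, 5, 5], "u2": [1, 3, 4, 2, 5], "u3": [4, 4, 5, 3, 4]}
threshold = 0.5
kept = {u for u, r in users.items() if user_entropy(r) >= threshold}
print(kept)  # u1 (entropy 0, a likely noise/spam profile) is filtered out
```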
Type-2 fuzzy multiple attribute decision-making method based on entropy and risk attitude
WANG Cuicui, YAO Dengbao, LI Baoping
2016, 36(9): 2535-2539. DOI: 10.11772/j.issn.1001-9081.2016.09.2535
To deal with type-2 fuzzy decision-making problems in which the attribute weights are unknown, a decision-making method based on type-2 fuzzy entropy and the decision-maker's risk attitude was proposed. Firstly, axiomatic principles of type-2 fuzzy entropy were constructed by introducing a fuzzy factor and a hesitancy factor to measure the uncertainty of Type-2 Fuzzy Sets (T2FS), and entropy formulas based on different distance measures were given. Secondly, to reduce the effect of uncertain information on decision results, a non-linear programming model combined with type-2 fuzzy entropy was constructed to determine the attribute weights. Meanwhile, a score function accounting for the decision-maker's risk attitude was proposed, and the specific decision-making procedure was given. Finally, the feasibility of the proposed method was verified through an example analysis, and its flexibility was demonstrated by comparison with existing work.
No-fit-polygon-based heuristic nesting algorithm for irregular shapes
TANG Deyou, ZHOU Zilin
2016, 36(9): 2540-2544. DOI: 10.11772/j.issn.1001-9081.2016.09.2540
To raise the material utilization ratio of heuristic nesting for irregular shapes, a Gravity No-Fit-Polygon (NFP) and Edge Fitness-based Heuristic Nesting Algorithm (GEFHNA) was proposed. Firstly, Edge Fitness (EF) was defined to measure how well an irregular shape fits the material during packing, and a packing strategy combining Gravity NFP (GNFP) with edge fitness was proposed to reduce the gap area generated in packing. Secondly, a Weiler-Atherton-based algorithm was presented to compute the remaining material and add the holes produced in each round of packing to the material list; the heuristic prefers these holes in subsequent rounds, reducing the proportion of holes in the final layout. Finally, a heuristic nesting algorithm based on the above packing and reuse strategies was put forward, and comparison experiments against an intelligent algorithm and similar software were presented. Experimental results on benchmarks provided by ESICUP (EURO Special Interest Group on Cutting and Packing) show that GEFHNA needs only about 1/1000 of the time of an intelligent-algorithm-based nesting scheme and achieves the relatively optimal utilization rate on 7 of 11 benchmarks in contrast with the two commercial packages NestLib and SigmaNest.
Localization for mobile robots based on improved support vector regression algorithm
WANG Chunrong, XIA Erdong, WU Long, LIU Jianjun, XIONG Changjiong
2016, 36(9): 2545-2549. DOI: 10.11772/j.issn.1001-9081.2016.09.2545
In order to improve the positioning accuracy of mobile robots, a positioning system for wheeled mobile robots based on an orthogonal encoder and a gyroscope was proposed, and the positioning and kinematics models of the robot were established. To obtain better robustness, the Support Vector Regression (SVR) algorithm was studied, the squared errors in the objective function were weighted, and the effect of different parameter optimization algorithms on SVR accuracy was analyzed. An experimental platform was built on a self-made mobile robot, and the improved algorithm was compared with the Least Squares Support Vector Regression (LSSVR) and Weighted Least Squares Support Vector Regression (WLSSVR) algorithms. The positioning errors of the improved algorithm on ceramic and wooden floors were compared, and the orthogonal encoder plus gyroscope positioning system was compared with a double-encoder system and a single encoder plus gyroscope system. The experimental results show that the positioning accuracy of the improved algorithm is higher than that of the comparison algorithms, and the proposed positioning system has better localization performance.
Deep network for person identification based on joint identification-verification
CAI Xiaodong, YANG Chao, WANG Lijuan, GAN Kaijin
2016, 36(9): 2550-2554. DOI: 10.11772/j.issn.1001-9081.2016.09.2550
Finding a person feature representation that reduces intra-personal variations while enlarging inter-personal differences is a challenge in person identification. A deep network based on joint identification-verification was proposed to solve this problem. First, the identification model was used to enlarge the inter-personal differences between different people, while the verification model was used to reduce the intra-personal distance of the same person. Second, discriminative feature vectors were extracted by jointly training the identification and verification networks with shared parameters. At last, the joint Bayesian algorithm was adopted to calculate the similarity of two persons, improving the accuracy of pedestrian matching. Experimental results show that the proposed method achieves higher pedestrian recognition accuracy than several state-of-the-art methods on the VIPeR database; meanwhile, the joint identification-verification network converges faster and recognizes more accurately than the separate networks.
Short-term lightning prediction based on multi-machine learning competitive strategy
SUN LiHua, YAN Junfeng, XU Jianfeng
2016, 36(9): 2555-2559. DOI: 10.11772/j.issn.1001-9081.2016.09.2555
Traditional lightning forecasting methods often use a single optimal machine learning algorithm, without considering the spatial and temporal variation of meteorological data. To address this, an ensemble-learning-based multi-machine-learning model was put forward. Firstly, attribute reduction was performed on the meteorological data to reduce its dimensionality; secondly, multiple heterogeneous machine learning classifiers were trained on the data set and the optimal base classifiers were screened according to predictive quality; finally, the final classifier was generated by weighted training of the optimal base classifiers using an ensemble strategy. The experimental results show that, compared with the traditional single-optimal-algorithm approach, the prediction accuracy of the proposed model is increased by 9.5% on average.
Saliency detection combining foreground and background features based on manifold ranking
ZHU Zhengyu, WANG Mei
2016, 36(9): 2560-2565. DOI: 10.11772/j.issn.1001-9081.2016.09.2560
Focusing on the issue that the saliency detection algorithm via graph-based manifold ranking (the MR algorithm) is overly dependent on the background features extracted from boundary nodes, an improved saliency detection algorithm combining foreground and background features was proposed. Firstly, an image was divided into superpixels and a close-loop graph model was constructed. Secondly, foreground and background seeds were obtained with the manifold ranking algorithm according to foreground and background features, and the two kinds of seed nodes were then combined through brightness and color characteristics into more accurate query nodes. Finally, the saliency map of the image was obtained by computing saliency values via manifold ranking. Experimental results show that, compared with the MR algorithm, the precision, recall and F-measure of the proposed algorithm are significantly improved, and the obtained saliency maps are much closer to the ground truth.
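The manifold ranking engine shared by MR and the improved algorithm has the closed form f* = (D - alpha*W)^(-1) y, where W is the superpixel affinity matrix, D its degree matrix and y marks the seed (query) nodes; a toy 4-node example with illustrative weights and alpha:

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.2, 0.0],
              [1.0, 0.0, 0.3, 0.0],
              [0.2, 0.3, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
D = np.diag(W.sum(axis=1))          # degree matrix
alpha = 0.99
y = np.array([1.0, 0.0, 0.0, 0.0])  # node 0 marked as a foreground seed

# Closed-form ranking: nodes strongly connected to the seed score high.
f = np.linalg.solve(D - alpha * W, y)
print(np.round(f / f.max(), 3))     # saliency scores relative to the seed
```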
Object tracking algorithm based on random sampling consensus estimation
GOU Chengfu, CHEN Bin, ZHAO Xuezhuan, CHEN Gang
2016, 36(9): 2566-2569. DOI: 10.11772/j.issn.1001-9081.2016.09.2566
To solve the tracking failures caused by target occlusion, appearance variation and long-term tracking in practical monitoring, an object tracking algorithm based on RANdom SAmple Consensus (RANSAC) estimation was proposed. Firstly, the local invariant feature set in the search area was extracted. Then the object features were separated from the feature set by using the transfer property of feature matching and a non-parametric learning algorithm. Finally, RANSAC estimation over the object features was used to track the object location. The algorithm was tested on video datasets of different scenarios and evaluated with three indicators: precision, recall and F1-measure. The experimental results show that the proposed method improves tracking accuracy and overcomes the drift caused by long-term tracking.
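A minimal RANSAC sketch of the estimation step, reduced to recovering an inter-frame translation from noisy feature matches; the synthetic matches and tolerance are illustrative, not the paper's full motion model:

```python
import random

def ransac_translation(matches, iters=100, tol=2.0):
    """Estimate (dx, dy) from matches [((x1, y1), (x2, y2)), ...], ignoring outliers."""
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.choice(matches)   # minimal sample: one match
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < tol
                   and abs(m[1][1] - m[0][1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on the consensus set for a robust final estimate.
    n = len(best_inliers)
    dx = sum(b[0] - a[0] for a, b in best_inliers) / n
    dy = sum(b[1] - a[1] for a, b in best_inliers) / n
    return dx, dy

good = [((x, y), (x + 5.0, y + 3.0)) for x, y in [(10, 10), (20, 15), (30, 40), (12, 33)]]
bad = [((0, 0), (50, 50)), ((5, 5), (-20, 7))]        # occlusion outliers
print(ransac_translation(good + bad))                 # ~ (5.0, 3.0)
```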
Image super-resolution reconstruction combined with compressed sensing and nonlocal information
CHEN Weiye, SUN Quansen
2016, 36(9): 2570-2575. DOI: 10.11772/j.issn.1001-9081.2016.09.2570
Existing super-resolution reconstruction algorithms only consider the gray-level information of image patches and ignore texture information, while most nonlocal methods emphasize nonlocal information without considering local information. In view of these disadvantages, an image super-resolution reconstruction algorithm combining compressed sensing and nonlocal information was proposed. Firstly, the similarity between pixels was calculated according to the structural features of image patches, taking both gray-level and texture information into account. Then, the weights of similar pixels were evaluated by merging local and nonlocal information, and a regularization term combining the two was constructed. Finally, the nonlocal information was introduced into the compressed sensing framework, and the sparse representation coefficients were solved by an iterative shrinkage algorithm. Experimental results demonstrate that the proposed algorithm outperforms other learning-based algorithms in Peak Signal-to-Noise Ratio and Structural Similarity, recovering fine textures better and suppressing noise effectively.
Improved hierarchical Markov random field color image segmentation algorithm
WANG Lei, HUANG Chenxue
2016, 36(9): 2576-2579. DOI: 10.11772/j.issn.1001-9081.2016.09.2576
The distribution of color image pixel values is difficult to describe in hierarchical Markov Random Field (MRF) segmentation, so a hierarchical MRF segmentation algorithm based on the RGB color statistical distribution was proposed. The key parameters of the MRF model were set up and the related formulas were derived. With the RGB color statistical distribution model, the hierarchical MRF energy function was rewritten, and the k-means algorithm was used for presegmentation to realize unsupervised segmentation. Compared with the traditional MRF segmentation model, the proposed algorithm has fewer color distribution parameters and lower computational cost while describing the color distribution more accurately; it can model different targets and backgrounds well without being restricted by the color distribution of target and background or by the target's spatial distribution. Experimental results prove the effectiveness of the proposed algorithm, which is superior to the MRF algorithm and the Fuzzy C-Means (FCM) algorithm in computing speed and segmentation accuracy.
Edge quality evaluation based on fuzzy comprehensive evaluation
JIE Dan, HU Qiangqiang, XU Chengwu, GAO Baolu, LI Haifang
2016, 36(9): 2580-2583. DOI: 10.11772/j.issn.1001-9081.2016.09.2580
Traditional edge quality evaluation of map data relies on experts, which is inefficient, easily influenced by subjective factors and prone to biased results. To address this, a new mapping evaluation operator called the Geographical Mapping Operator (GMO) was proposed. Fuzzy comprehensive evaluation was applied to edge quality evaluation: the comment set and evaluation indexes were determined according to the national standard, and the fuzzy weight vectors of the evaluation factors were determined through the entropy weight method. The new operator was also justified theoretically. When the GMO was applied to actual data, 65% of the geographic data was of unqualified quality before edge matching, while 55% was of perfect quality after edge matching, which indicates the effectiveness of the GMO.
Improved global parameterization method
HONG Cheng, ZHANG Dengyi, SU Kehua, WU Xiaoping, ZHENG Changjin
2016, 36(9): 2584-2589. DOI: 10.11772/j.issn.1001-9081.2016.09.2584
Focusing on the large deformation and high computational complexity of non-zero genus surface parameterization, an improved global parameterization approach based on holomorphic 1-forms was proposed, which starts from the gradient field and adopts an easier and faster method to compute the homology and cohomology groups. Firstly, a simplified cut-graph method was used to construct the homology group and determine the topology. Secondly, the cohomology group of the linear space formed by the gradient field was calculated by defining special harmonic functions, yielding closed 1-forms. Thirdly, the closed 1-forms were diffused to harmonic 1-forms by minimizing the harmonic energy. Finally, holomorphic 1-forms were computed by linearly combining the harmonic 1-forms, and the parameterization was obtained by integrating a holomorphic 1-form on the fundamental domain of the surface. Theoretical analysis of the homology and cohomology groups shows that the parameterization is a global, border-free conformal mapping. Experimental results on non-zero genus models show that, compared with the previous global parameterization based on holomorphic 1-forms, the proposed algorithm achieves better visual effects, smaller average error and higher efficiency.
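Only the final integration step lends itself to a short sketch; assuming the mesh graph and a discrete closed 1-form (one value per directed edge) are already computed, per-vertex parameter coordinates follow by integrating along a spanning tree:

```python
from collections import deque

def integrate_one_form(n_vertices, edges, omega, root=0):
    """edges: list of (u, v); omega[(u, v)] = -omega[(v, u)] is the 1-form."""
    adj = {v: [] for v in range(n_vertices)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    coord = {root: 0.0}
    q = deque([root])
    while q:                 # BFS integration: coord[v] = coord[u] + omega(u->v)
        u = q.popleft()
        for v in adj[u]:
            if v not in coord:
                coord[v] = coord[u] + omega.get((u, v), -omega.get((v, u), 0.0))
                q.append(v)
    return coord
```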
Fast and effective compression for 3D dynamic scene based on KD-tree division
MA Zhiqiang, LI Haisheng
2016, 36(9): 2590-2596. DOI: 10.11772/j.issn.1001-9081.2016.09.2590
In order to take full advantage of the GPU to realize fast and effective compression and to reduce the limitation of network bandwidth, a fast and effective compression method based on KD-tree division was presented. Firstly, the dynamic scene was divided by a KD-tree at the first time step, and small rigid bodies were constructed in each leaf node in parallel. Then, the mapping relations between the rigid-body leaves and the 3D division grid were established, and the rigid bodies were merged by using a disjoint-set structure. Finally, the compressed dynamic data were transmitted to the client to reconstruct the 3D dynamic scene within a certain period of time. The algorithm greatly improves the compression speed on the server and effectively reduces the amount of data. The experimental results show that the proposed algorithm not only guarantees the compression quality, but also compresses dynamic datasets quickly and effectively, which reduces the network bandwidth required for the dynamic data.
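The merging step can be sketched with a standard disjoint set (union-find); the grid mapping and the rigidity test `same_motion` are placeholders for the paper's criteria:

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def merge_rigid_bodies(n_leaves, grid_cells, same_motion):
    """grid_cells[i] = grid cell of leaf i; same_motion(i, j) tests rigidity."""
    ds, by_cell = DisjointSet(n_leaves), {}
    for i, c in enumerate(grid_cells):
        by_cell.setdefault(c, []).append(i)
    for leaves in by_cell.values():         # only merge within the same cell
        for i, j in zip(leaves, leaves[1:]):
            if same_motion(i, j):
                ds.union(i, j)
    return [ds.find(i) for i in range(n_leaves)]  # merged-body id per leaf
```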
Person re-identification based on feature fusion and kernel local Fisher discriminant analysis
ZHANG Gengning, WANG Jiabao, LI Yang, MIAO Zhuang, ZHANG Yafei, LI Hang
2016, 36(9): 2597-2600. DOI: 10.11772/j.issn.1001-9081.2016.09.2597
Feature representation and metric learning are fundamental problems in person re-identification. In feature representation, existing methods cannot describe pedestrians well under large variations in viewpoint. To solve this problem, the Color Name (CN) feature was combined with color and texture features, and the image was divided into zones and blocks from which histograms were extracted as image features. In metric learning, the traditional kernel Local Fisher Discriminant Analysis (kLFDA) method maps all query images into the same feature space, disregarding the differing importance of the regions of a query image. For this reason, the features were grouped by region on the basis of kLFDA, and the importance of different image regions was described by Query-Adaptive Late Fusion (QALF). Experimental results on the VIPeR and iLIDS datasets show that the extracted features are superior to the original features, and the improved metric learning method effectively increases the accuracy of person re-identification.
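A simplified sketch of the region-wise feature extraction and a late fusion of per-region scores; the stripe layout and the region-weighting rule are illustrative stand-ins for the zone/block division and the exact QALF formulation:

```python
import numpy as np

def region_histograms(img, n_stripes=6, bins=16):
    """Split an (h, w, 3) image into horizontal stripes; one histogram each."""
    h = img.shape[0]
    feats = []
    for s in range(n_stripes):
        stripe = img[s * h // n_stripes:(s + 1) * h // n_stripes]
        hist, _ = np.histogramdd(stripe.reshape(-1, 3),
                                 bins=bins, range=[(0, 256)] * 3)
        feats.append(hist.ravel() / max(hist.sum(), 1))
    return feats

def late_fusion(query_feats, gallery_feats_list):
    """Score each gallery image per region, then weight regions adaptively."""
    scores = np.array([[-np.abs(q - g).sum() for q, g in zip(query_feats, gf)]
                       for gf in gallery_feats_list])   # gallery x region
    sep = scores.max(axis=0) - scores.mean(axis=0)      # discriminative power
    w = sep / max(sep.sum(), 1e-12)                     # per-region weights
    return scores @ w                                   # fused score per gallery
```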
fMRI time series stepwise denoising based on wavelet transform
LI Weiwei, MEI Xue, ZHOU Yu
2016, 36(9): 2601-2604. DOI: 10.11772/j.issn.1001-9081.2016.09.2601
The neural activity signal of interest is often contaminated by structural noise and random noise in functional Magnetic Resonance Imaging (fMRI) data. In order to eliminate noise effects in the analysis of activated voxels, the time series of voxels preprocessed by Statistical Parametric Mapping (SPM) were transformed with the Activelets wavelet. After obtaining the scale and detail coefficients, the two kinds of noise were eliminated separately according to their respective characteristics. Firstly, Independent Component Analysis (ICA) was used to identify and remove the structural noise sources. Secondly, an improved spatial-correlation algorithm was applied to the detail coefficients; in particular, the similarity of voxels within a neighborhood was used to decide whether a detail coefficient reflected noise or neural activity. Experimental results show that the processing effectively eliminates the effect of noise: the frame displacement decreased by 1.5 mm and the percentage of spikes decreased by 2%; in addition, false activation regions are obviously suppressed in the spatial map obtained from the denoised signals.
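The stepwise idea can be sketched with an ordinary wavelet from PyWavelets in place of the Activelets transform (which pywt does not provide), and a band-level neighborhood-correlation test that is coarser than the paper's per-coefficient rule; ICA removal of structural noise is assumed to have run already:

```python
import numpy as np
import pywt

def denoise_series(ts, neighbor_ts, wavelet="db4", level=3, corr_thresh=0.3):
    """ts: 1-D voxel time series; neighbor_ts: series of neighboring voxels."""
    coeffs = pywt.wavedec(ts, wavelet, level=level)
    nb_coeffs = [pywt.wavedec(n, wavelet, level=level) for n in neighbor_ts]
    for lv in range(1, len(coeffs)):        # detail levels only
        d = coeffs[lv]
        # Keep a detail band if it correlates with its spatial neighbors:
        # neural activity is locally coherent, random noise is not.
        r = np.mean([np.corrcoef(d, c[lv])[0, 1] for c in nb_coeffs])
        if not np.isfinite(r) or r < corr_thresh:
            coeffs[lv] = np.zeros_like(d)
    return pywt.waverec(coeffs, wavelet)[:len(ts)]
```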
Metropolis ray tracing based integrated filter
WU Xi, XU Qing, BU Hongjuan, WANG Zheng
2016, 36(9): 2605-2608. DOI: 10.11772/j.issn.1001-9081.2016.09.2605
The Monte Carlo method is the basis of global illumination computation, and many Monte Carlo-based global illumination algorithms have been proposed; however, most of them are limited by long rendering times. Based on the Monte Carlo method, a new global illumination algorithm was proposed that combines the Metropolis ray tracing algorithm with an integrated filter. The algorithm consists of two parts: first, multiple sets of filters with different scales were used to smooth the image; second, the filtered images were combined into the final result. Relative Mean Squared Error (RMSE) was used as the criterion for selecting the filtering scale, and an appropriate filter was adaptively selected for each pixel during sampling and reconstruction, minimizing the error and yielding better reconstruction results. Experimental results show that the proposed method outperforms many traditional Metropolis algorithms in both efficiency and image quality.
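A toy version of the per-pixel scale selection, assuming two half-buffers of samples are available and using a Gaussian filter bank with the cross-buffer squared error standing in for the paper's RMSE criterion:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_scale_filter(half_a, half_b, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """half_a, half_b: two (h, w) half-buffers of the same rendered image."""
    filtered = np.stack([gaussian_filter(half_a, s) for s in sigmas])
    # Smoothed squared error of each scale against the other half-buffer.
    err = np.stack([gaussian_filter((f - half_b) ** 2, 1.0) for f in filtered])
    best = err.argmin(axis=0)                  # per-pixel scale index
    return np.take_along_axis(filtered, best[None], axis=0)[0]
```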
Acoustic modeling approach of multi-stream feature incorporated convolutional neural network for low-resource speech recognition
QIN Chuxiong, ZHANG Lianhai
2016, 36(9): 2609-2615. DOI: 10.11772/j.issn.1001-9081.2016.09.2609
Aiming at the insufficient training of Convolutional Neural Network (CNN) acoustic model parameters under low-resource training data conditions in speech recognition tasks, a method utilizing multi-stream features was proposed to improve CNN acoustic modeling performance in low-resource speech recognition. Firstly, in order to extract enough acoustic information from limited data for building the acoustic model, multiple types of features were extracted from the low-resource training data. Secondly, a convolutional subnetwork was built for each type of feature, forming a parallel structure that regularizes the distributions of the multiple features. Then, fully connected layers were added above the parallel convolutional subnetworks to incorporate the multi-stream features into a new CNN acoustic model. Finally, a low-resource speech recognition system was built on this acoustic model. Experimental results show that the parallel convolutional subnetworks normalize the different feature spaces to be more similar, and the method gains 3.27% and 2.08% recognition accuracy improvements over the traditional multi-feature splicing approach and the baseline CNN system respectively. Furthermore, the method remains applicable when multilingual training is introduced, improving recognition accuracy by 5.73% and 4.57% respectively.
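A structural sketch of the parallel-subnetwork architecture in PyTorch; layer sizes, stream count and input shapes are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class MultiStreamCNN(nn.Module):
    def __init__(self, n_streams=2, n_states=1000):
        super().__init__()
        # One convolutional subnetwork per feature stream.
        self.streams = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            ) for _ in range(n_streams)
        ])
        # Fully connected fusion layers on top of the parallel subnetworks.
        self.classifier = nn.Sequential(
            nn.Linear(n_streams * 64 * 4 * 4, 1024), nn.ReLU(),
            nn.Linear(1024, n_states),
        )

    def forward(self, xs):      # xs: list of (batch, 1, freq_bins, frames)
        z = torch.cat([s(x).flatten(1) for s, x in zip(self.streams, xs)], dim=1)
        return self.classifier(z)

# e.g. filterbank and PLP streams:
# logits = MultiStreamCNN()([fbank_batch, plp_batch])
```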
GIS map updating algorithm based on new road finding
GUO Sen, QIN Guihe, XIAO Xiao, REN Pengfei, SUN Minghui
2016, 36(9): 2616-2619. DOI: 10.11772/j.issn.1001-9081.2016.09.2616
Aiming at the high cost and long time consumption of updating the electronic map in navigation systems, a new-road judgment and electronic map updating algorithm based on failed-match data screening was proposed, which utilizes the historical GPS tracks of floating vehicles that fail to match the current electronic map. First of all, the main direction of the travel path was determined by calculating the horizontal and vertical spans of all the failure points. Secondly, point screening was used to cull groups of mis-registered data points caused by malfunction of the on-board GPS equipment; then the linear least squares method was used to fit the unmatched abnormal trajectories and determine the position and direction of the track, and positioning point groups with large errors were culled by angle screening. Finally, the screened trajectory data were fused and ordered along the main direction, and, combined with the road network structure of the electronic map, the new road was inserted into the current road network according to the matching results of its endpoints. Experiments were conducted on the electronic map of a local road network of a city. The results show that the method can accurately identify and screen new roads and correctly insert them into the current road network structure of the electronic map.
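The line-fitting and angle-screening step might look as follows; the thresholds and the main-direction test are illustrative:

```python
import numpy as np

def fit_and_screen(points, max_angle_deg=20.0):
    """points: (n, 2) array of unmatched GPS positions, ordered by time."""
    x, y = points[:, 0], points[:, 1]
    if np.ptp(x) >= np.ptp(y):              # main direction along x
        k, b = np.polyfit(x, y, 1)          # least-squares line y = kx + b
        road_dir = np.arctan2(k, 1.0)
    else:                                   # near-vertical road: swap axes
        k, b = np.polyfit(y, x, 1)
        road_dir = np.arctan2(1.0, k)
    seg = np.diff(points, axis=0)
    headings = np.arctan2(seg[:, 1], seg[:, 0])
    # Wrapped angular deviation from the fitted road direction, in degrees.
    dev = np.degrees(np.abs(np.angle(np.exp(1j * (headings - road_dir)))))
    keep = np.concatenate([[True],
                           np.minimum(dev, 180 - dev) < max_angle_deg])
    return points[keep], road_dir
```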
Comprehensive evaluation on merchants based on G1 method improved by composite power function
LI Zhongxun, HUA Jinzhi, LIU Zhen, ZHENG Jianbin
2016, 36(9): 2620-2625. DOI: 10.11772/j.issn.1001-9081.2016.09.2620
Considering that the objective weight can overwhelm the subjective weight when the two are inconsistent in multi-index evaluation problems, an assembled weighting model combining the G1 method improved by a composite power function with an objective weighting method was proposed. Firstly, an index system was built, the subjective ranking and subjective initial vector were determined by the G1 method, and each objective index vector was calculated by the objective weighting method. Secondly, without changing the subjective ranking order, comprehensive weights integrating both subjective and objective components were obtained by means of the composite power function. Lastly, the comprehensive evaluation was calculated from the standardized index values and the comprehensive weights. Merchant data crawled from Dianping.com was adopted for the comprehensive evaluation experiments. The Root-Mean-Square Error (RMSE) of the new model was 3.891, lower than the 8.818 of the G1-entropy weighting and the 4.752 of the standard-deviation-improved G1, and the coverage rate of the new model was also better than that of the two baselines. On the other hand, the RMSE obtained by changing the subjective ranking order was 5.430, higher than the 1.17 obtained by changing the subjective initial vector. The experimental results demonstrate that the evaluation values obtained by the new model closely match those given by Dianping.com, and that the model significantly weakens the effect of the initial subjective values, reflecting the fundamental status of the subjective ranking.
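The exact composite power function is the paper's contribution; as a placeholder, the sketch below combines the two weight vectors with a generic weighted power form and checks that the subjective ranking survives:

```python
import numpy as np

def combine_weights(w_subj, w_obj, alpha=0.7):
    """alpha in (0, 1]: larger alpha preserves more subjective influence."""
    w = (w_subj ** alpha) * (w_obj ** (1.0 - alpha))  # weighted power form
    w = w / w.sum()
    # The combination must not disturb the subjective ranking; for extreme
    # objective weights this check fails and alpha must be increased.
    assert (np.argsort(-w) == np.argsort(-w_subj)).all()
    return w

w_s = np.array([0.4, 0.3, 0.2, 0.1])        # subjective weights from G1
w_o = np.array([0.25, 0.35, 0.25, 0.15])    # objective weights (e.g. entropy)
scores = np.array([0.8, 0.6, 0.9, 0.5])     # standardized index values
print(scores @ combine_weights(w_s, w_o))   # comprehensive evaluation value
```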
Route planning method for unmanned aerial vehicle based on improved teaching-learning algorithm
WU Wei, ZOU Jie
2016, 36(9): 2626-2630. DOI: 10.11772/j.issn.1001-9081.2016.09.2626
Aiming at the slow convergence and the susceptibility to local optima of the traditional teaching-learning-based optimization algorithm in route planning, an adaptive crossover teaching-learning-based optimization algorithm was proposed. Firstly, the teaching factor of the algorithm was varied with the number of iterations to improve the learning speed. Secondly, when the algorithm was likely to fall into a local optimum, a certain disturbance was added to help it escape. Finally, the crossover operation of the genetic algorithm was introduced to improve the convergence. Route planning for an Unmanned Aerial Vehicle (UAV) was then carried out with the traditional teaching-learning-based optimization algorithm, the adaptive crossover teaching-learning-based optimization algorithm and the Quantum Particle Swarm Optimization (QPSO) algorithm. The simulation results show that, in 10 runs of planning, the adaptive crossover teaching-learning-based optimization algorithm found the global optimal route 8 times, while the traditional algorithm and the QPSO algorithm found it only 2 times and once respectively, and the adaptive crossover algorithm converged faster than the other two.
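One iteration of the modified teacher phase, crossover and stall perturbation might be sketched as follows (all constants illustrative; the learner phase is omitted):

```python
import numpy as np

def tlbo_step(pop, fitness, it, max_it, rng):
    """pop: (n, d) learners with d >= 2; fitness: (n,) values to minimize."""
    n, d = pop.shape
    teacher = pop[fitness.argmin()]
    tf = 1.0 + it / max_it                  # adaptive teaching factor in [1, 2]
    mean = pop.mean(axis=0)
    # Teacher phase with the iteration-dependent teaching factor.
    new = pop + rng.random((n, d)) * (teacher - tf * mean)
    # GA-style single-point crossover with a random partner.
    partners = rng.permutation(n)
    cuts = rng.integers(1, d, size=n)
    for i in range(n):
        new[i, cuts[i]:] = pop[partners[i], cuts[i]:]
    # Disturb the population if it has collapsed (likely local optimum).
    if pop.std(axis=0).mean() < 1e-3:
        new += rng.normal(scale=0.1, size=(n, d))
    return new

# rng = np.random.default_rng(0); pop = tlbo_step(pop, fit, it, 100, rng)
```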
Adaptive tracking control for unmanned aerial vehicle's three-dimensional trajectory
ZHANG Kun, GAO Xiaoguang
2016, 36(9): 2631-2635. DOI: 10.11772/j.issn.1001-9081.2016.09.2631
Concerning the trajectory tracking control of an Unmanned Aerial Vehicle (UAV) when the nominal values of the autopilot parameters deviate from their actual values, a three-dimensional trajectory adaptive tracking control law was proposed. First, assuming no deviation in the autopilot parameters, a guidance law was derived, yielding commands for the UAV's airspeed, track angle and flight-path angle; the global asymptotic stability of the closed-loop tracking system was proved by Lyapunov stability theory. Then, considering the deviation of the autopilot parameters, a parameter adaptation algorithm was designed to estimate the actual autopilot parameters online, giving an adaptive tracking control law for the UAV's three-dimensional trajectory. Simulation results show that the proposed adaptive tracking control law achieves three-dimensional trajectory tracking effectively despite autopilot parameter deviations.
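The adaptation idea can be shown on a toy first-order autopilot model x_dot = (u - x)/tau with unknown tau; the gradient law below is a standard Lyapunov-based identifier, not the paper's exact design, and all gains are illustrative:

```python
dt, k_e, gamma = 0.01, 2.0, 5.0
theta_true = 0.5                 # actual 1/tau of the autopilot channel
theta_hat = 1.0                  # nominal (wrong) value used by the controller
x = x_hat = 0.0
for k in range(4000):
    u = 1.0 if (k // 1000) % 2 == 0 else -1.0   # exciting track-angle command
    x += dt * theta_true * (u - x)               # actual first-order response
    e = x - x_hat                                # prediction error
    x_hat += dt * (theta_hat * (u - x) + k_e * e)
    theta_hat += dt * gamma * e * (u - x)        # Lyapunov-based gradient law
print(f"estimated 1/tau ~ {theta_hat:.3f} (true {theta_true})")
```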
Fast learning algorithm of grammatical probabilities in multi-function radars based on Earley algorithm
CAO Shuai, WANG Buhong, LIU Xinbo, SHEN Haiou
2016, 36(9): 2636-2641. DOI: 10.11772/j.issn.1001-9081.2016.09.2636
To deal with the probability learning problem of Multi-Function Radar (MFR) based on the Stochastic Context-Free Grammar (SCFG) model, a fast learning algorithm for grammatical probabilities in MFR based on the Earley algorithm was presented, building on the traditional Inside-Outside (IO) algorithm and Viterbi-Score (VS) algorithm. The intercepted radar data was preprocessed to construct an Earley parsing chart describing the derivation process, and the best parse tree was extracted from the chart under the criterion of maximum subtree probability. The modified IO and VS algorithms were then used to learn the grammatical probabilities and estimate the MFR parameters. After the grammatical parameters were obtained, the state of the MFR was estimated by the Viterbi algorithm. Theoretical analysis and simulation results show that, compared with the conventional IO and VS algorithms, the modified algorithms effectively reduce computational complexity and running time while keeping the same level of estimation accuracy, validating that the proposed method speeds up grammatical probability learning.
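The paper's Earley chart and modified IO/VS updates are too involved for a short excerpt; as a plainly labeled stand-in, the sketch below extracts the maximum-probability parse of a toy radar word sequence with a CKY-style Viterbi pass over an SCFG in Chomsky normal form:

```python
import math
from collections import defaultdict

# Toy SCFG in CNF: S -> A B (1.0); A -> W1 (0.6) | W2 (0.4); B -> W2 (1.0)
unary = {"W1": [("A", math.log(0.6))],
         "W2": [("A", math.log(0.4)), ("B", math.log(1.0))]}
binary = {("A", "B"): [("S", math.log(1.0))]}

def viterbi_parse(words):
    n = len(words)
    chart = defaultdict(dict)              # chart[(i, j)][X] = best log-prob
    for i, w in enumerate(words):
        for X, lp in unary.get(w, []):
            chart[(i, i + 1)][X] = lp
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):      # try every split point
                for Y, lpy in chart[(i, k)].items():
                    for Z, lpz in chart[(k, j)].items():
                        for X, lp in binary.get((Y, Z), []):
                            cand = lp + lpy + lpz
                            if cand > chart[(i, j)].get(X, -math.inf):
                                chart[(i, j)][X] = cand
    return chart[(0, n)].get("S", -math.inf)

print(viterbi_parse(["W1", "W2"]))         # log(0.6): best S-parse probability
```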
Random matrix denoising method amended by Monte Carlo simulation
LUO Qi, HAN Hua, GONG Jiangtao, WANG Haijun
2016, 36(9): 2642-2646. DOI: 10.11772/j.issn.1001-9081.2016.09.2642
Since a small combined stock market contains less noise information, a random matrix denoising method amended by Monte Carlo simulation was proposed. Firstly, random matrices were generated by simulation; secondly, the lower and upper bounds of the noise were corrected simultaneously using a large amount of simulated data; finally, the range of the noise was determined precisely. The Dow Jones China 88 Index and the Hang Seng 50 Index were used for empirical analysis. The results show that, compared with LCPB (Laloux-Cizeau-Potters-Bouchaud), PG+ (Plerou-Gopikrishnan) and KR (an RMT denoising method based on the Krzanowski stability of correlation matrix eigenvectors), the rationality and validity of the noise range corrected by the Monte Carlo simulation denoising method are greatly improved in terms of eigenvalues, eigenvectors and inverse participation ratio. Investment portfolios based on the correlation matrix before and after denoising were compared, and the results indicate that the Monte Carlo simulation denoising method yields the smallest Value at Risk (VaR) under the same expected rate of return, which can provide a reference for portfolio selection, risk management and other financial applications.
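The Monte Carlo correction of the noise band can be sketched as follows, assuming i.i.d. simulated returns of the same dimensions as the real data; the percentile choices are illustrative:

```python
import numpy as np

def mc_noise_band(T, N, n_sims=200, q=(0.5, 99.5), seed=0):
    """Empirical eigenvalue band of pure-noise correlation matrices."""
    rng = np.random.default_rng(seed)
    eigs = []
    for _ in range(n_sims):
        R = rng.standard_normal((T, N))          # i.i.d. returns: pure noise
        eigs.append(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))
    lo, hi = np.percentile(np.concatenate(eigs), q)
    return lo, hi

def denoise_corr(C, T, n_sims=200):
    """C: (N, N) empirical correlation matrix from T observations."""
    lo, hi = mc_noise_band(T, C.shape[0], n_sims)
    w, V = np.linalg.eigh(C)
    noise = (w >= lo) & (w <= hi)                # eigenvalues inside noise band
    w_clean = w.copy()
    if noise.any():
        w_clean[noise] = w[noise].mean()         # flatten the noisy spectrum
    C_clean = V @ np.diag(w_clean) @ V.T
    np.fill_diagonal(C_clean, 1.0)
    return C_clean
```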