
Table of Contents

    10 April 2015, Volume 35 Issue 4
    Design of data traffic optimization system for large-scale wireless sensor networks
    CHEN Yi, XU Li, ZHANG Meiping
    2015, 35(4):  905-909.  DOI: 10.11772/j.issn.1001-9081.2015.04.0905

    Aiming at the problem that data traffic rises as the number of data visitors increases in large-scale Wireless Sensor Networks (WSN), a traffic-optimized WSN system framework was designed and implemented to build large-scale WSNs and reduce network data traffic. IPv6 and IPv6 over Low Power Wireless Personal Area Network (6LoWPAN) technologies were adopted to build the large-scale WSN. To integrate the WSN with the traditional Internet, the Message Queuing Telemetry Transport (MQTT) and MQTT for Sensor Networks (MQTT-SN) protocols were deployed in the application layer to build the system's publish/subscribe model. The experimental results show that, with 5 sensor nodes, the data traffic of the proposed system is 18% of that of a Constrained Application Protocol (CoAP) based WSN system. This demonstrates that the proposed framework can effectively control the impact of a growing number of visitors on WSN data traffic.

    Monte Carlo boxed localization algorithm for mobile nodes based on received signal strength indication ranging
    WU Xiaolin, SHAN Zhilong, CAO Shulin, CAO Chuqun
    2015, 35(4):  916-920.  DOI: 10.11772/j.issn.1001-9081.2015.04.0916

    To overcome the shortcomings in sampling efficiency and positioning accuracy of the Monte Carlo localization algorithm in Wireless Sensor Networks (WSN), a Monte Carlo localization Boxed (MCB) algorithm for mobile nodes based on Received Signal Strength Indication (RSSI) ranging was proposed. To improve positioning accuracy, the filter conditions were strengthened by mapping the ranging information into different distance intervals. At the same time, samples that already satisfied the filter conditions were used to generate more valid samples, which improved sampling efficiency. Finally, Newton interpolation was used to predict the node's trajectory: the closer a sample lies to the predicted trajectory, the greater its weight, and the best position estimate was obtained from these weighted samples. The simulation results indicate that the proposed algorithm performs well under different anchor node densities, communication radii and movement velocities, and achieves higher positioning accuracy than the MCB algorithm.
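    To make the trajectory-weighting step concrete, the following Python sketch predicts the next position from a short position history with Newton divided-difference interpolation and then weights candidate samples by closeness to the prediction. The unit time steps and the inverse-distance weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import math

def newton_predict(positions):
    """Predict the next position from a short history using Newton
    divided-difference interpolation (sampling times assumed 0, 1, 2, ...)."""
    n = len(positions)
    ts = list(range(n))

    def predict_coord(vals):
        coef = list(vals)
        # Build the divided-difference table in place.
        for j in range(1, n):
            for i in range(n - 1, j - 1, -1):
                coef[i] = (coef[i] - coef[i - 1]) / (ts[i] - ts[i - j])
        # Evaluate the Newton polynomial at the next time step t = n.
        t, result = n, coef[-1]
        for i in range(n - 2, -1, -1):
            result = result * (t - ts[i]) + coef[i]
        return result

    return (predict_coord([p[0] for p in positions]),
            predict_coord([p[1] for p in positions]))

def weight_samples(samples, predicted):
    """Weight candidate samples by closeness to the predicted position:
    the closer a sample, the larger its (normalized) weight."""
    weights = [1.0 / (1e-9 + math.dist(s, predicted)) for s in samples]
    total = sum(weights)
    return [w / total for w in weights]

# A node moving on a straight line: history (0,0), (1,1), (2,2).
pred = newton_predict([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)])
w = weight_samples([(3.0, 3.0), (10.0, 10.0)], pred)
```

    For linear motion the predictor extrapolates the line exactly, and the sample nearer the prediction receives the larger weight.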

    Interference-aware routing for wireless sensor networks based on signal power random fading model
    ZHANG Kaiping, MAO Jianjing
    2015, 35(4):  921-924.  DOI: 10.11772/j.issn.1001-9081.2015.04.0921

    To reduce the effect of node signal power attenuation and node interference on transmission efficiency in Wireless Sensor Networks (WSN), an interference-aware routing algorithm based on a random signal power fading model was proposed. First, according to probability theory, two probabilistic interference models for successful data transmission under different distributions of interfering nodes were put forward. Then, interference, route convergence and residual energy were used as assessment weights to establish interference-aware routes and determine the best next-hop node. NS2 simulation data show that, compared with interference-aware routing algorithms based on differentiated services and on coding, the proposed algorithm performs better in packet delivery success rate, energy consumption and average delay.

    Deep space data delivery strategy based on optimized LT code
    ZHAO Hui, FANG Gaofeng, WANG Qin
    2015, 35(4):  925-928.  DOI: 10.11772/j.issn.1001-9081.2015.04.0925

    Focusing on shortcomings of deep space communication such as long delay, high Bit Error Rate (BER) and asymmetric channels, as well as the high redundancy and low decoding success rate of short LT codes, a new deep space data delivery strategy based on Optimized LT (OLT) codes was proposed. First, the OLT codes were constructed by adjusting the degree distribution function and adopting a new packet selection strategy and a joint decoding algorithm. Then, a deep space data delivery strategy based on OLT codes was presented, in which the sender encoded the data file and sent it out, and the receiver recovered the data by decoding the encoded packets with the joint decoding algorithm. The simulation results show that, compared with LT codes, OLT codes improve the decoding success rate and reduce redundancy. Moreover, compared with the CCSDS File Delivery Protocol (CFDP), the proposed strategy effectively reduces delay and improves the validity and reliability of data delivery, especially at high packet loss rates.
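    As background for the encoder/decoder roles described above, here is a plain-LT sketch in Python: each packet is the XOR of a random subset of source blocks, with the degree drawn from the ideal soliton distribution, and a peeling decoder recovers the blocks. The OLT adjustments (modified degree distribution, packet selection, joint decoding) are not modeled; all names are illustrative.

```python
import random

def ideal_soliton(k):
    """Ideal soliton distribution: rho(1) = 1/k, rho(d) = 1/(d(d-1))."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode(blocks, n_packets, rng):
    """Each packet is the XOR of a random degree-d subset of blocks."""
    k = len(blocks)
    degrees = rng.choices(range(1, k + 1), weights=ideal_soliton(k), k=n_packets)
    packets = []
    for d in degrees:
        idx = set(rng.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((idx, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly resolve packets with one unknown block."""
    decoded = {}
    progress = True
    while progress and len(decoded) < k:
        progress = False
        for idx, val in packets:
            unknown = set(idx) - set(decoded)
            if len(unknown) == 1:
                i = unknown.pop()
                for j in set(idx) - {i}:
                    val ^= decoded[j]       # strip already-known blocks
                decoded[i] = val
                progress = True
    return [decoded.get(i) for i in range(k)]

packets = lt_encode([1, 2, 4], 8, random.Random(0))
```

    With enough received packets the peeling process cascades: each degree-1 packet reveals a block, which lowers the effective degree of the remaining packets.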

    Tradeoff between multicast rate and number of coding nodes based on network coding
    PU Baoxing, ZHAO Chenglin
    2015, 35(4):  929-933.  DOI: 10.11772/j.issn.1001-9081.2015.04.0929

    Based on single-source multicast network coding, in order to explore the relationship between the multicast rate and the minimal number of required coding nodes, a theoretical analysis and formula derivation of the relationship were given by employing the generation and extension techniques of linear network coding. It is concluded that the minimal number of required coding nodes increases monotonically with the multicast rate. A multi-objective optimization model was constructed to accurately describe the quantitative relationship between them. To solve this model, a search strategy covering all feasible coding schemes was derived; combining this search strategy with NSGA-II yielded an algorithm for solving the model. When the tradeoff between the two objectives must be considered, the solution of the model provides the basis for choosing a network coding scheme. The proposed algorithm can search not only the whole Pareto set, but also, at lower search cost, the partial Pareto set associated with a feasible multicast rate region given by the user. The simulation results verify the conclusion of the theoretical analysis and indicate that the proposed algorithm is feasible and efficient.

    Layered and cascaded stochastic resonance algorithm for direct sequence spread spectrum signal receiving
    WANG Aizhen, HOU Chengguo, REN Guofeng
    2015, 35(4):  934-937.  DOI: 10.11772/j.issn.1001-9081.2015.04.0934

    To improve the demodulation performance for received direct sequence spread spectrum signals, a layered and cascaded stochastic resonance algorithm was presented. Two cascaded bistable stochastic resonance systems were designed, one in the down-conversion process for carrier removal and one for despreading the baseband signal. Together, the two systems implemented wideband reception of narrowband signals and the transfer of channel noise energy into signal energy. The theoretical analysis and simulation results show that the proposed algorithm enlarges the usable frequency range of the receiver, and improves receiver performance as the number of cascaded stages in the two-layer cascaded stochastic resonance system increases.
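    The bistable system at the heart of such a receiver can be illustrated with a simple Euler integration of dx/dt = a*x - b*x^3 + s(t) + n(t), with stages fed into one another to form the cascade. The parameter values and the stage interface here are simplifying assumptions; carrier removal and despreading are not modeled.

```python
import math
import random

def bistable_sr(signal, a=1.0, b=1.0, noise_std=0.3, dt=0.01, seed=0):
    """Euler integration of the bistable stochastic-resonance equation
    dx/dt = a*x - b*x**3 + s(t) + noise(t); returns the state trajectory."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for s in signal:
        x += dt * (a * x - b * x ** 3 + s + rng.gauss(0.0, noise_std))
        out.append(x)
    return out

def cascade(signal, stages=2, **kw):
    """Feed each stage's output into the next (cascaded SR)."""
    out = signal
    for _ in range(stages):
        out = bistable_sr(out, **kw)
    return out

# A weak low-frequency tone as the input signal.
sig = [0.2 * math.sin(2 * math.pi * 0.5 * k * 0.01) for k in range(2000)]
y = cascade(sig, stages=2)
```

    The cubic term pulls the state back toward the two stable wells, so properly tuned noise helps the weak input drive transitions between them rather than being pure disturbance.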

    Load balancing algorithm of task scheduling in cloud computing environment based on honey bee behavior
    YANG Shi, WANG Yanling, WANG Yongli
    2015, 35(4):  938-943.  DOI: 10.11772/j.issn.1001-9081.2015.04.0938

    To address the high response time and communication cost of task scheduling in cloud computing environments, a Honey Bee Behavior inspired Load Balancing (HBB-LB) algorithm was proposed. Firstly, the load was balanced across Virtual Machines (VMs) to maximize throughput. Then the priorities of tasks on the machines were balanced. The HBB-LB algorithm improved overall processing throughput, while priority-based balancing focused on reducing the waiting time of tasks queued on a VM. The experiments were carried out in a cloud computing environment simulated by CloudSim. The experimental results show that the HBB-LB algorithm reduced the average response time by 5%, 13%, 17%, 67% and 37% compared with the Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Dynamic Load Balancing (DLB), First In First Out (FIFO) and Weighted Round Robin (WRR) algorithms respectively, and reduced the maximum completion time by 20%, 23%, 18%, 55% and 46%. This indicates that the HBB-LB algorithm is well suited to cloud computing systems and helpful for balancing non-preemptive independent tasks.
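    The core rebalancing idea, treating tasks removed from overloaded machines as bees that are resubmitted to underloaded machines with priority taken into account, can be sketched as below; the (priority, cost) task representation and the capacity rule are simplifying assumptions, not the paper's exact model.

```python
def balance(vms, capacity):
    """Honey-bee style rebalancing sketch. `vms` maps a VM name to its
    task list; each task is a (priority, cost) pair. Overloaded VMs evict
    their lowest-priority tasks, which are then resubmitted, highest
    priority first, to the least-loaded VM."""
    load = lambda v: sum(cost for _, cost in vms[v])
    scouts = []
    for v in vms:
        tasks = sorted(vms[v], key=lambda t: t[0])   # low priority first
        while sum(c for _, c in tasks) > capacity:
            scouts.append(tasks.pop(0))              # evict lowest priority
        vms[v] = tasks
    scouts.sort(key=lambda t: -t[0])                 # reassign high priority first
    for task in scouts:
        vms[min(vms, key=load)].append(task)
    return vms

vms = balance({'a': [(1, 4), (2, 4), (3, 4)], 'b': [(1, 2)]}, capacity=8)
```

    In this toy run the lowest-priority task on the overloaded VM 'a' migrates to the lightly loaded VM 'b', leaving both within capacity.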

    Scheduling mechanism based on service level objective in multi-tenant cluster
    DU Xiongjie, WANG Min, TANG Xuehai, ZHANG Zhang
    2015, 35(4):  944-949.  DOI: 10.11772/j.issn.1001-9081.2015.04.0944

    A scheduling mechanism based on Service Level Objective (SLO) for multi-tenant clusters, comprising a preference scheduling algorithm and a resource preemption algorithm, was proposed to solve the problem that the SLOs of jobs cannot be guaranteed in multi-tenant clusters. The preference scheduling algorithm distinguished users who overused resources beyond their quota from those who did not, assigned a higher priority to the jobs of the latter, and preferentially allocated resources to the job with the highest priority. When resources were limited, the resource preemption algorithm preempted resources for jobs whose urgency was above a threshold, choosing victims with the lowest urgency from the corresponding range of running jobs according to the tenants' resource usage. The experimental results show that, compared with the current multi-tenant scheduler Capacity Scheduler, the proposed mechanism significantly improves the deadline guarantee rate and SLO of jobs while preserving job execution efficiency and equity among tenants.

    Vector exploring path optimization algorithm of superword level parallelism with subsection constraints
    XU Jinlong, ZHAO Rongcai, HAN Lin
    2015, 35(4):  950-955.  DOI: 10.11772/j.issn.1001-9081.2015.04.0950

    Superword Level Parallelism (SLP) is a vector parallelism exploration approach for basic blocks. With loop unrolling, more parallelism can be explored, but too many exploration paths are introduced at the same time. To solve this problem, an optimized SLP method with subsection constraints was proposed. Redundancy elimination on segments was used to obtain homogeneous segments; an inter-section exploration method based on SLP was used to constrain the exploration paths and reduce the complexity of the algorithm; and finally pack adjustment was used to handle overlapping memory accesses. The experimental results show that the vectorization capability of SLP is enhanced: for the tested serial programs, the average speedup of the vectorized version is close to 2.

    Distributed massive molecule retrieval model based on consistent Hash
    SUN Xia, YU Long, TIAN Shengwei, YAN Yilin, LIN Jiangli
    2015, 35(4):  956-959.  DOI: 10.11772/j.issn.1001-9081.2015.04.0956

    In view of the problems that traditional general graph matching search is inefficient and refractive index data cannot be located quickly in big data environments, a distributed massive molecule retrieval model based on a consistent Hash function was established. Combined with the characteristics of molecular storage structures, the continuous refractive index was discretized by a fixed-width algorithm to build a high-speed Hash index and improve molecule retrieval efficiency, and a distributed massive retrieval system was realized. The size of the dataset was effectively reduced, and Hash collisions were handled according to visiting frequency. The experimental results show that, on chemical data containing 200 thousand molecular structures, the average retrieval time of this method is about five percent of that of traditional general graph matching search. Moreover, the model performs steadily and scales well. It is applicable to retrieving high-frequency molecules by refractive index in massive data environments.
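    A consistent-hash ring with virtual nodes, the standard construction behind such a distributed index, can be sketched as follows; keying records by a discretized refractive index value is an assumption made for illustration.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes for spreading
    records (e.g. molecules keyed by a discretized refractive index)
    across storage nodes."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node is hashed onto the ring `vnodes` times.
        self.ring = sorted((self._hash(f"{node}#{i}"), node)
                           for node in nodes for i in range(vnodes))
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def lookup(self, key):
        """Return the node owning `key`: the first ring position at or
        after the key's hash, wrapping around."""
        i = bisect.bisect(self.keys, self._hash(str(key))) % len(self.keys)
        return self.ring[i][1]

ring = ConsistentHashRing(['n1', 'n2', 'n3'])
owner = ring.lookup(1.3345)   # e.g. a discretized refractive index bucket
```

    Because only the arcs adjacent to a joining or leaving node change owners, most keys keep their placement when the cluster is resized, which is what makes the scheme attractive for a scalable retrieval system.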

    Parallel algorithm of raster river network extraction based on CUDA
    WANG Yuzhuo, LIU Xiuguo, ZHANG Wei
    2015, 35(4):  960-963.  DOI: 10.11772/j.issn.1001-9081.2015.04.0960

    Concerning the low efficiency of calculating flow accumulation on high-resolution digital terrain data, a parallel algorithm based on the Compute Unified Device Architecture (CUDA) and a flooding model was put forward. Based on Graphics Processing Unit (GPU) technology, two strategies were designed to speed up the extraction. Firstly, the calculation of flow accumulation was divided into a number of independent tasks for parallel processing. Secondly, data exchange time was reduced through asynchronous data transmission. The experimental results show that the parallel algorithm is more efficient than the serial algorithm: river network extraction was accelerated by a factor of 62 on an NVIDIA GeForce GTX 660 for 600 MB of DEM data with a 9784×8507 grid.

    Access control mechanism with dynamic authorization and file evaluation
    ZHANG Yue, ZHENG Dong, ZHANG Yinghui
    2015, 35(4):  964-967.  DOI: 10.11772/j.issn.1001-9081.2015.04.0964

    Concerning that traditional access control methods fail to support dynamic authorization and file evaluation and suffer from malicious re-sharing, an Access Control mechanism with Dynamic Authorization and File Evaluation (DAFE-AC) was proposed. DAFE-AC adopts a dynamic authorization mechanism to monitor authorized users in real time and allows users to supervise each other. Its file evaluation mechanism dynamically updates the access threshold of files. Based on a Hash/index database, DAFE-AC ensures the uniqueness of files in the system. In DAFE-AC, a user's authorization value changes dynamically with the behaviors of other users, and users can perform file evaluation to eliminate malicious re-sharing of files.

    Analysis of egress detection strategy of Internet service provider based on game theory
    BU Junrong, FENG Liping, SHI Qiong, SONG Lipeng
    2015, 35(4):  968-971.  DOI: 10.11772/j.issn.1001-9081.2015.04.0968

    Internet Service Providers (ISP), as the convergers and distributors of network information, are the best defenders against network viruses. However, an ISP usually prefers to detect ingress traffic while ignoring egress traffic because of the cost. The security measures available to ISPs across the whole network were analyzed, and an egress detection strategy was presented to provide a reference for ISPs' strategy selection. First, a game model between the ISP and attackers and a spreading model of network viruses were proposed. Second, the impact of the ISP's chosen strategy on virus spreading was analyzed under dynamic virus propagation. The results show that an ISP faces increased invasion risk when it forgoes egress detection, whereas adopting egress detection improves not only the ISP's own utility but also the security of the whole network. The validity of the theoretical results was verified by Matlab simulation.

    Malware behavior assessment system based on support vector machine
    OUYANG Boyu, LIU Xin, XU Chan, WU Jian, AN Xiao
    2015, 35(4):  972-976.  DOI: 10.11772/j.issn.1001-9081.2015.04.0972

    Aiming at the low classification accuracy of malware behavior analysis systems, a malware classification method based on Support Vector Machine (SVM) was proposed. First, a risk behavior library using software behavior results as features was built manually. Then all software behaviors were captured and matched against the risk behavior library, and the matching results were converted by a conversion algorithm into data suitable for SVM training. For selecting the SVM model, kernel function and parameters (C, g), a method combining grid search and a Genetic Algorithm (GA) was used, after theoretical analysis, to search for the optimum. A malware behavior assessment system based on the SVM classification model was designed to verify the effectiveness of the proposed method. The experiments show that the false positive rate and false negative rate of the system were 5.52% and 3.04% respectively, so the proposed method outperforms K-Nearest Neighbor (KNN) and Naive Bayes (NB); its performance is on a par with the BP neural network, while training and classification are more efficient.
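    The parameter-search step can be pictured as a toy hybrid of grid search and a simple GA over (C, g). For self-containment a synthetic objective stands in for the real cross-validated SVM accuracy, and the GA is reduced to a bare mutation-plus-elitism loop; both are assumptions for illustration only, not the paper's procedure.

```python
import random

def search_params(objective, c_grid, g_grid, generations=20, pop=20, seed=0):
    """Hybrid (C, g) search sketch: a coarse grid pass finds a promising
    cell, then a small mutation-only GA with elitism refines around it."""
    rng = random.Random(seed)
    # 1) Coarse grid search over the supplied axes.
    _, c0, g0 = max((objective(c, g), c, g) for c in c_grid for g in g_grid)
    # 2) GA refinement seeded around the best grid cell.
    population = [(c0 * rng.uniform(0.5, 2.0), g0 * rng.uniform(0.5, 2.0))
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=lambda p: objective(*p), reverse=True)
        parents = population[:pop // 2]              # elitism: keep best half
        children = [(c * rng.uniform(0.9, 1.1), g * rng.uniform(0.9, 1.1))
                    for c, g in parents]             # multiplicative mutation
        population = parents + children
    return max(population, key=lambda p: objective(*p))

# Stand-in objective with a single optimum at C = 10, g = 0.5.
objective = lambda c, g: -((c - 10.0) ** 2 + (g - 0.5) ** 2)
best_c, best_g = search_params(objective, [1, 10, 100], [0.1, 0.5, 1.0])
```

    In a real pipeline the objective would be cross-validation accuracy of an SVM trained with the candidate (C, g); the grid pass narrows the exponential search range, and the GA does the fine tuning.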

    Authentication protocol based on pseudo-random function for mobile radio frequency identification
    ZHANG Qi, LIANG Xiangqian, WEI Shumin
    2015, 35(4):  977-980.  DOI: 10.11772/j.issn.1001-9081.2015.04.0977

    To solve the security problems between the reader and the server in mobile Radio Frequency IDentification (RFID) caused by wireless transmission, a two-way authentication protocol based on a pseudo-random function was provided. It satisfies the EPC Class-1 Generation-2 industry standard and achieves mutual authentication among tags, readers and servers. The security of the protocol was proved using GNY logic. It can effectively resist tracking, replay and desynchronization attacks; at the same time, its main calculations are transferred to the server, thereby reducing the computation and cost of the tag.

    Efficient quantum information splitting scheme based on W states
    XIE Shuzhen, TAN Xiaoqing
    2015, 35(4):  981-984.  DOI: 10.11772/j.issn.1001-9081.2015.04.0981

    To improve the efficiency of quantum communication based on W states, a new Quantum Information Splitting (QIS) scheme based on W states was proposed. In this scheme, the dealer used local operations to encode the classical information into qubits. Nonorthogonal state particles were inserted to detect eavesdropping during qubit distribution. To recover the secret, participants only needed to perform three-particle projective measurements. One W state can transmit two bits of classical information between the participants. Moreover, the scheme can resist attacks such as the intercept-and-measure attack, the intercept-and-resend attack and the entangled ancillary particle attack, ensuring its security. The scheme is efficient, with a theoretical quantum efficiency of 67%.

    Enhanced attack-resistible ant-based trust and reputation model
    WANG Hao, ZHANG Yuqing
    2015, 35(4):  985-990.  DOI: 10.11772/j.issn.1001-9081.2015.04.0985

    Traditional trust and reputation models do not pay enough attention to nodes' deceit in recommendation, so their reputation evaluation may be distorted by malicious nodes' collusion. A trust and reputation model named Enhanced Attack-Resistible Ant-based Trust and Reputation Model (EAraTRM) was proposed based on the ant colony algorithm. Analysis of node recommendation behaviors and an adaptive mechanism for malicious node density were added to the reputation evaluation of EAraTRM to overcome the shortage of traditional models. Simulation experiments show that EAraTRM can restrain the collusion of malicious nodes and give more accurate reputation evaluation results, even when 90% of the nodes in a network are malicious and the comparison models have failed.

    Hardware acceleration model of MD5 algorithm based on NetMagic platform
    MENG Xiangyang, LIN Qi
    2015, 35(4):  991-995.  DOI: 10.11772/j.issn.1001-9081.2015.04.0991

    Aiming at the disadvantages of software implementations of MD5, such as high resource occupancy and poor security, a hardware acceleration model of MD5 was put forward based on the NetMagic platform, and then non-pipelined and pipelined hardware acceleration models were verified and analyzed on the ModelSim and NetMagic platforms. Compared with the non-pipelined model, the pipelined acceleration model improves the operational efficiency of MD5 fivefold. The result can be applied to hardware encryption engines such as network processors, effectively improving the security and processing efficiency of such hardware.

    Application of improved point-wise mutual information in term extraction
    DU Liping, LI Xiaoge, ZHOU Yuanzhe, SHAO Chunchang
    2015, 35(4):  996-1000.  DOI: 10.11772/j.issn.1001-9081.2015.04.0996

    The traditional Point-wise Mutual Information (PMI) method has the shortcoming of overvaluing the co-occurrence of two low-frequency words. To determine a proper value of k for the improved PMI, named PMIk, that overcomes this shortcoming, to solve the problem that terms cannot be extracted from a segmented corpus containing segmentation errors, and to maintain the portability of the term extraction system, a new method combining the PMIk method with two fundamental rules was put forward to identify terms in an unsegmented corpus. Firstly, 2-gram extension seeds were determined by computing the bonding strength of two adjoining words with the PMIk method. Secondly, whether a 2-gram seed could be extended to a 3-gram was determined by computing the bonding strength between the seed and the word immediately before it and the word immediately after it, and multi-gram term candidates were obtained iteratively in this way. Finally, garbage strings among the term candidates were filtered out using the two fundamental rules to obtain terms. The theoretical analysis shows that PMIk can overcome the shortcoming of PMI when k ≥ 3 (k ∈ N+). Experiments on a 1 GB SINA finance blog corpus and a 300 MB Baidu Tieba corpus verify the theoretical analysis, and PMIk outperforms PMI with good portability.
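    The damping effect of PMIk on rare co-occurrences is easy to see numerically. Assuming the common definition PMIk(x, y) = log(p(x,y)^k / (p(x)p(y))), a minimal sketch:

```python
import math

def pmi_k(x, y, xy, total, k=3):
    """PMI^k association score log( p(x,y)^k / (p(x)*p(y)) ), where x, y
    and xy are word and co-occurrence counts out of `total` positions.
    k = 1 gives plain PMI; k >= 3 damps rare co-occurrences."""
    px, py, pxy = x / total, y / total, xy / total
    return math.log(pxy ** k / (px * py))

# Two rare words that always co-occur vs. two frequent words that often do.
rare = pmi_k(x=2, y=2, xy=2, total=10000)
freq = pmi_k(x=500, y=500, xy=300, total=10000)
```

    Plain PMI (k = 1) ranks the rare pair far above the frequent one, which is exactly the overestimation problem the abstract describes; with k = 3 the ordering is reversed.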

    Application of labeled author topic model in scientific literature
    CHEN Yongheng, ZUO Wanli, LIN Yaojin
    2015, 35(4):  1001-1005.  DOI: 10.11772/j.issn.1001-9081.2015.04.1001

    The Author Topic (AT) model is widely used to discover authors' interests in scientific literature, but it cannot exploit the correlation between category labels and topics. By integrating the inherent category labels of documents into the AT model, a Labeled Author Topic (LAT) model was proposed. The LAT model realizes multi-label prediction by optimizing the mapping between labels and topics, and improves the clustering results. The experimental results suggest that, compared with the Latent Dirichlet Allocation (LDA) model and the AT model, the LAT model improves multi-label prediction accuracy and offers better generalization ability and operating efficiency.

    Word sense disambiguation method based on knowledge context
    YANG Zhizhuo
    2015, 35(4):  1006-1008.  DOI: 10.11772/j.issn.1001-9081.2015.04.1006

    To overcome the data sparseness problem of traditional Word Sense Disambiguation (WSD) methods, a new WSD method based on knowledge context was proposed. The method rests on the assumption that sentences within one article share some common topics. First, a similarity algorithm was used to retrieve sentences from the article containing the same ambiguous word; these sentences serve as knowledge context for the ambiguous sentence and provide disambiguation knowledge. Then a graph-based ranking algorithm was used for WSD. The experimental results on real data show that, with two knowledge context sentences and a window size of 1, the disambiguation accuracy of this method is 3.2% higher than that of the baseline method (OrigDisam).

    Prediction technology of translation query behavior in interactive machine translation
    JI Duo, MA Bin, YE Na
    2015, 35(4):  1009-1012.  DOI: 10.11772/j.issn.1001-9081.2015.04.1009

    To deal with the frequent switching between mouse and keyboard when users look up bilingual words during translation, a prediction model oriented to word-searching behavior was proposed based on Interactive Machine Translation (IMT). The word-searching behavior was transformed into a selection problem conditioned on the current translation, and the behavior was predicted with high accuracy using the alignment model, translation model and language model. In tests on a manually aligned bilingual corpus, the prediction accuracy of the approach is about 64.99%, and for nouns in particular it reaches 71.43%. The approach effectively reduces repetitive, mechanical operations in manual translation, improves the user experience of interactive translation systems, and increases translation efficiency.

    Family relation extraction from Wikipedia by self-supervised learning
    ZHU Suyang, HUI Haotian, QIAN Longhua, ZHANG Min
    2015, 35(4):  1013-1016.  DOI: 10.11772/j.issn.1001-9081.2015.04.1013

    Traditional supervised relation extraction demands a large amount of manually annotated training data, while semi-supervised learning suffers from low recall. A self-supervised learning based approach was proposed to extract personal family relationships. First, semi-structured information (family relation triples) was mapped to the free text of Chinese Wikipedia to automatically generate annotated training data. Then family relations between person entities were extracted from Wikipedia text with a feature-based relation extraction method. The experimental results on a manually annotated family network test set show that this method outperforms Bootstrapping with an F1-measure of 77%, implying that self-supervised learning can effectively extract personal family relationships.

    Evaluation of microblog users' influence based on HRank
    JIA Chongchong, WANG Mingyang, CHE Xin
    2015, 35(4):  1017-1020.  DOI: 10.11772/j.issn.1001-9081.2015.04.1017

    An evaluation algorithm based on HRank was proposed to evaluate users' influence on microblog social networking platforms. By introducing the H parameter, originally used for judging the scientific research achievements of scientists, and considering a user's followers and microblog forwarding numbers, two new H-index models, the follower H-index and the microblog-forwarded H-index, were given; they represent users' static characteristics and dynamic activities in microblogs respectively. The HRank model was then established to comprehensively assess users' influence. Finally, experiments were conducted on Sina microblog data using the HRank and PageRank models; the results were analyzed by correlation of user influence ranks and compared with the rankings given by Sina microblog. The results show that user influence does not correlate strongly with the number of fans, and the HRank model outperforms the PageRank model, indicating that HRank can identify user influence effectively.
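    Both H-index variants rest on the classic h-index computation, sketched below; applying it to per-microblog forwarding counts corresponds to the microblog-forwarded H-index described above, while the exact HRank combination rule is left out.

```python
def h_index(counts):
    """Largest h such that at least h items each have a count >= h."""
    h = 0
    for i, c in enumerate(sorted(counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Forwarding counts of one user's microblogs -> microblog-forwarded H-index.
forwarded_h = h_index([10, 8, 5, 4, 3, 0])
```

    The same function applied to the follower counts of a user's followers would give the follower H-index; unlike a raw fan count, both indices are hard to inflate with many low-activity accounts.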

    Information extraction of history evolution based on Wikipedia
    ZHAO Jiapeng, LIN Min
    2015, 35(4):  1021-1025.  DOI: 10.11772/j.issn.1001-9081.2015.04.1021

    Domain concepts in software engineering are complex and varied, and the development history of a concept is hard to capture, which makes the concepts difficult for students to understand and remember. An effective new method for extracting historical evolution information in software engineering was proposed. Firstly, candidate sets of entities and entity relationships were extracted from Wikipedia using Natural Language Processing (NLP) and information extraction technology. Secondly, the entity relationships closest to historical evolution were extracted from the candidate sets using TextRank. Finally, a knowledge base was constructed from quintuples composed of neighboring time entities and concept entities together with the key entity relationship. In the extraction process, the TextRank algorithm was improved with text semantic features to increase accuracy. The results verify the effectiveness of the proposed algorithm, and the knowledge base organizes the concepts of the software engineering field in time order.

    One-class support vector data description based on local patch
    YANG Xiaoming, HU Wenjun, LOU Jungang, JIANG Yunliang
    2015, 35(4):  1026-1029.  DOI: 10.11772/j.issn.1001-9081.2015.04.1026

    Because Support Vector Data Description (SVDD) fails to capture local geometric information, a new detection method, One-Class SVDD based on Local Patch (OCSVDDLP), was proposed. First, the data was divided into local patches. Then, each sample was reconstructed using its corresponding local patch. Finally, the decision model was obtained by training SVDD on the reconstructed data. The experimental results on an artificial data set demonstrate that OCSVDDLP captures not only the global geometric structure of the data set but also its local geometric information. The results on real-world data sets further validate the effectiveness of the proposed method.

    Computing method of attribute information granule of information system
    HAO Yanbin, GUO Xiao, YANG Naiding
    2015, 35(4):  1030-1034.  DOI: 10.11772/j.issn.1001-9081.2015.04.1030

    Based on functional dependency over attributes, the concept of the attribute information granule of an information system was proposed, and a method to calculate the structure of the attribute granule of a separable information system was given. Firstly, the separability of an information system was defined, and it was proved that if an information system is separable, the structure of its attribute granule can be decomposed into the Cartesian product of the structures of the attribute granules of its sub-systems. Secondly, a method to judge the separability of an information system and a decomposition algorithm were given. Lastly, the complexity of the proposed method was analyzed: direct calculation of the structure of the attribute granule costs O(2^n), while the proposed method reduces this to O(2^(n1)+2^(n2)+…+2^(nk)) where n=n1+n2+…+nk. The theoretical analysis and an example show that the method is feasible.

    Design and implementation of light-weight rules engine on IoT gateway
    TIAN Ruiqin, WU Jinzhao, TANG Ding
    2015, 35(4):  1035-1039.  DOI: 10.11772/j.issn.1001-9081.2015.04.1035
    Abstract ( )   PDF (770KB) ( )
    References | Related Articles | Metrics

    In order to apply the Internet of Things (IoT) gateway to various scenarios, a light-weight rules engine was proposed, through which users can define personalized rules on demand. However, limited computing and storage resources prevent traditional rules engines, such as JRules, from being applied on the IoT gateway directly. By adding a "related facts" attribute to each rule and introducing an "Agent-Inference" mechanism, both the running time and the response time of the rules engine were reduced: the "related facts" attribute reduces the number of rules involved in matching operations, and the "Agent-Inference" mechanism reduces the waiting time for available rules. Based on these methods, a Faster Light-weight Rules Engine (FLRE) was implemented and applied to IoT gateways. Experiments on data sets of different sizes show that running efficiency was increased by 8%-30% by adding the "related facts" attribute, and response time was decreased by 7%-35% by using the "Agent-Inference" mechanism. The evaluation shows that the two methods make the light-weight rules engine practical on the IoT gateway.
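
    The effect of the "related facts" attribute can be sketched as follows (an illustrative Python sketch, not the FLRE code; the rule names and fact keys are invented). Only rules whose declared related facts intersect the newly changed facts enter the matching phase:

```python
# Each rule lists the fact keys it depends on ("related facts").
rules = [
    {"name": "high_temp_alarm", "related": {"temperature"},
     "when": lambda f: f.get("temperature", 0) > 30},
    {"name": "door_open_light", "related": {"door"},
     "when": lambda f: f.get("door") == "open"},
]

def match(changed_keys, facts):
    # pre-filter: skip rules unrelated to the changed facts
    candidates = [r for r in rules if r["related"] & changed_keys]
    # only the surviving candidates are actually evaluated
    return [r["name"] for r in candidates if r["when"](facts)]
```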

    Particle swarm optimization with adaptive task allocation
    LIN Guohan, ZHANG Jing, LIU Zhaohua
    2015, 35(4):  1040-1044.  DOI: 10.11772/j.issn.1001-9081.2015.04.1040
    Abstract ( )   PDF (695KB) ( )
    References | Related Articles | Metrics

    The conventional Particle Swarm Optimization (PSO) algorithm suffers from premature convergence and is easily trapped in local optima. An improved PSO algorithm with adaptive task allocation was proposed to avoid these disadvantages. Tasks were allocated adaptively to particles according to their distribution status and fitness: all particles were divided into exploration particles and exploitation particles, which carried out different tasks with a global model and a dynamic local model respectively. This strategy achieves a better trade-off between exploration and exploitation and enhances swarm diversity. The dynamic neighborhood strategy broadened the search space and effectively inhibited premature stagnation, and Gaussian disturbance learning was applied to stagnant elite particles to help them jump out of local optima. The superior global search ability and solution accuracy of the proposed algorithm were validated by optimizing six complicated composition test functions.
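
    For orientation, the global-model PSO update that the task-allocation strategy builds on looks roughly like this (a minimal sketch of standard gbest PSO on the sphere function, not the proposed adaptive variant; the inertia and acceleration coefficients are common textbook values):

```python
import random

def pso_sphere(dim=2, n_particles=10, iters=100, seed=0):
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)          # sphere test function
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=f)[:]                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.72 * vel[i][d]
                             + 1.49 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.49 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)
```

    In the adaptive variant described above, exploitation particles would follow this global model while exploration particles use a dynamic local neighborhood instead of `gbest`.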

    Partner selection based on grey relational analysis and particle swarm optimization algorithm
    HUANG Huiqun
    2015, 35(4):  1045-1048.  DOI: 10.11772/j.issn.1001-9081.2015.04.1045
    Abstract ( )   PDF (492KB) ( )
    References | Related Articles | Metrics

    Concerning the slow search, poor practicability and difficulty in obtaining reasonable options of existing methods for the cloud service partner selection problem, a new partner selection method was proposed based on grey relational analysis and the Particle Swarm Optimization (PSO) algorithm. Firstly, grey relational analysis was used to select the evaluation indexes of cloud providers, and the weight of each index was calculated. Secondly, a mathematical model of the service partner selection problem in the cloud environment was built and solved with the PSO algorithm to find the best partners. Performance tests on specific application examples show that the proposed method is feasible and rational, and can select the best partners.
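
    Deng's grey relational coefficient, the core of the index-evaluation step, can be sketched as follows (a simplified single-candidate version for illustration; in the full method the min/max deltas run over all candidate sequences, and the formulation here is our own assumption):

```python
def grey_relational_coefficients(reference, candidate, rho=0.5):
    """Grey relational coefficients between a reference sequence and one
    candidate sequence (both assumed already normalised); rho is the
    distinguishing coefficient."""
    deltas = [abs(r - c) for r, c in zip(reference, candidate)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:                      # identical sequences: perfect relation
        return [1.0] * len(deltas)
    return [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
```

    The grey relational grade (the mean of these coefficients) would then rank the evaluation indexes before weighting.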

    Improved harmony search algorithm based on circular trust region
    LIU Le
    2015, 35(4):  1049-1056.  DOI: 10.11772/j.issn.1001-9081.2015.04.1049
    Abstract ( )   PDF (1149KB) ( )
    References | Related Articles | Metrics

    Concerning the drawbacks of the standard Harmony Search (HS) algorithm, namely trapping in local optima and low convergence accuracy, a new harmony search algorithm based on a Circular Trust Region (CTR), named CTRHS, was proposed. CTRHS generates two pitches at a time: in the memory considering process, intensive consideration is conducted within the circular trust region, and during the adjustment of the two pitches, the bandwidth is determined from the best or worst harmony vector of the current Harmony Memory (HM). The HM is updated by replacing its worst harmony with the newly generated one. Computational experiments were conducted on 9 benchmark functions to validate the performance of CTRHS. As the results demonstrate, CTRHS outperforms 7 other reported HS variants in terms of solution quality and convergence efficiency. Moreover, when the Harmony Memory Size (HMS) and Harmony Memory Considering Rate (HMCR) are set to 5 and 0.99 respectively, it performs better in searching for global optimal solutions.
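
    For reference, the basic HS improvisation loop that CTRHS modifies looks roughly like this (a sketch of standard harmony search, not CTRHS itself; the defaults follow the HMS=5, HMCR=0.99 setting mentioned above, and the other parameter values are our own choices):

```python
import random

def harmony_search(f, bounds, dim=2, hms=5, hmcr=0.99, par=0.3, bw=0.05,
                   iters=2000, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # memory considering
                x = hm[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjusting
                    x += rng.uniform(-bw, bw)
            else:                                    # random selection
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):                    # update HM
            hm[worst] = new
    return min(hm, key=f)
```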

    Improved artificial bee colony algorithm using phased search
    LI Guoliang, WEI Zhenhua, XU Lei
    2015, 35(4):  1057-1061.  DOI: 10.11772/j.issn.1001-9081.2015.04.1057
    Abstract ( )   PDF (707KB) ( )
    References | Related Articles | Metrics

    Aiming at the shortcomings of the Artificial Bee Colony (ABC) algorithm and its variants in solving high-dimensional complex function optimization problems, such as low solution precision, slow convergence, easily falling into local optima and too many control parameters, an improved artificial bee colony algorithm using phased search was proposed. In this algorithm, to reduce the probability of falling into local extrema, a phased-search strategy gives the employed bees different characteristics in different stages of the search. An escape radius was defined to guide premature individuals out of local extrema and avoid blind escape operations. Meanwhile, to improve the quality of the initial food sources, the uniform distribution method and opposition-based learning were used. Simulation results on eight typical high-dimensional complex optimization functions show that the proposed method not only achieves higher solution accuracy but also converges faster, and it is especially suitable for solving high-dimensional optimization problems.

    Flower pollination algorithm based on simulated annealing
    XIAO Huihui, WAN Changxuan, DUAN Yanming, ZHONG Qing
    2015, 35(4):  1062-1066.  DOI: 10.11772/j.issn.1001-9081.2015.04.1062
    Abstract ( )   PDF (799KB) ( )
    References | Related Articles | Metrics

    A hybrid algorithm of Simulated Annealing (SA) and the flower pollination algorithm was presented to overcome the problems of low computation accuracy, slow convergence and easily relapsing into local extrema. The sudden-jump strategy of SA was utilized to avoid falling into local optima, and the global searching performance of SA was exploited to enhance the global search ability of the hybrid algorithm. The hybrid algorithm was tested on six standard functions and compared with the basic Flower Pollination Algorithm (FPA), Bat Algorithm (BA), Particle Swarm Optimization (PSO) algorithm and an improved PSO algorithm. The simulation results show that the hybrid algorithm found the optimal values of 4 functions with better convergence precision, convergence rate and robustness. Additionally, experimental results on solving nonlinear equation systems verify the validity of the hybrid algorithm.

    Spectrum difference allocation of cognitive radio network based on chaotic artificial physics optimization
    JIA Suimin, WEI Meng, HU Mingsheng
    2015, 35(4):  1067-1070.  DOI: 10.11772/j.issn.1001-9081.2015.04.1067
    Abstract ( )   PDF (549KB) ( )
    References | Related Articles | Metrics

    Focusing on the spectrum allocation problem in cognitive radio networks, an allocation model considering spectrum availability was proposed, in which spectrum with higher availability is assigned to cognitive users when handling the constraint. Given the NP (Non-deterministic Polynomial) nature of the spectrum allocation problem, a chaotic artificial physics optimization was proposed: the ergodicity of chaos was used to initialize the population, and the force equation between particles was improved to prevent the algorithm from falling into local optima. The simulation results show that the proposed algorithm achieves better network revenue and improves spectrum usage.

    Test point optimization under unreliable test based on simulated annealing particle swarm optimization
    QIANG Xiaoqing, JING Bo, DENG Sen, JIAO Xiaoxuan, SU Yue
    2015, 35(4):  1071-1074.  DOI: 10.11772/j.issn.1001-9081.2015.04.1071
    Abstract ( )   PDF (693KB) ( )
    References | Related Articles | Metrics

    Considering false alarms and missed detections during the testing and diagnosis of complex systems, a new method based on the Simulated Annealing Particle Swarm Optimization (SA-PSO) algorithm was proposed to solve test selection problems under unreliable tests. Firstly, a heuristic function was established to evaluate the detection capability, coverage and reliability of test points. Then, combining the heuristic function with the least-test-cost principle and the testability requirements, a fitness function ensuring optimal selection was designed. Lastly, the process and key steps of SA-PSO were introduced, and the superiority of the algorithm was verified by simulation on the Apollo launch system. The results show that the proposed method can find the globally optimal test points; it minimizes test cost under the testability requirements and achieves higher fault detection and isolation rates than the greedy algorithm and the genetic algorithm.

    Improved non-negativity and support constraint recursive inverse filtering algorithm for blind restoration based on interband prediction
    HUANG Detian, ZHENG Lixin, LIU Peizhong, GU Peiting
    2015, 35(4):  1075-1078.  DOI: 10.11772/j.issn.1001-9081.2015.04.1075
    Abstract ( )   PDF (792KB) ( )
    References | Related Articles | Metrics

    To overcome the noise sensitivity and high time consumption of the Non-negativity And Support constraint Recursive Inverse Filtering (NAS-RIF) algorithm, an improved NAS-RIF algorithm for blind restoration was proposed. Firstly, a new cost function was introduced, improving both noise resistance and restoration quality. Secondly, to enhance computational efficiency, the degraded image was decomposed by the Haar wavelet transform and only the low-frequency sub-band was restored with the NAS-RIF algorithm, while the high-frequency sub-bands were predicted from the restored low-frequency image by interband prediction. Finally, an interband prediction based on the Minimum Mean Square Error (MMSE) was presented to guarantee the accuracy of the predicted high-frequency information. In experiments on synthetic degraded images and real images, the Signal-to-Noise Ratio (SNR) gains of the proposed algorithm were 5.2216 dB and 8.1039 dB respectively. The experimental results demonstrate that the proposed algorithm not only preserves image edges but also suppresses noise well, and its computational efficiency is greatly enhanced.

    Image matching algorithm based on histogram of gradient angle local feature descriptor
    FANG Zhiwen, CAO Zhiguo, ZHU Lei
    2015, 35(4):  1079-1083.  DOI: 10.11772/j.issn.1001-9081.2015.04.1079
    Abstract ( )   PDF (858KB) ( )
    References | Related Articles | Metrics

    To balance matching effectiveness against efficiency, an image matching algorithm based on the Histogram of Gradient Angle (HGA) was proposed. After key points are obtained by Features from Accelerated Segment Test (FAST), block gradients and a new dartboard-shaped structure are used to describe the local structure feature. The HGA-based matching algorithm is robust to rotation, blur and illumination changes, and partly overcomes affine distortion. Experimental results in complex scenes, compared with Speeded Up Robust Feature (SURF), Scale Invariant Feature Transform (SIFT) and ORB (Oriented FAST and Rotated Binary Robust Independent Elementary Features (BRIEF)), demonstrate that HGA outperforms the other descriptors. Additionally, HGA achieves an accuracy of over 94.5% with only 1/3 of the time consumption of SIFT.

    Image enlargement based on anisotropic fourth-order partial differential equation coupled with second-order partial differential equation
    HAI Tao, XI Zhihong
    2015, 35(4):  1084-1088.  DOI: 10.11772/j.issn.1001-9081.2015.04.1084
    Abstract ( )   PDF (903KB) ( )
    References | Related Articles | Metrics

    To enhance weak edges and textures while avoiding the staircase effect, an image enlargement method was proposed which couples an anisotropic fourth-order partial differential equation with a second-order one. In this method, weak edges and textures are enhanced and staircasing is reduced by an improved anisotropic fourth-order equation whose diffusion coefficient adapts to a threshold constrained by the pixel's local variance; an improved total variation term and adaptive-amplitude shock filters controlled by the gradient are coupled with the fourth-order equation to enhance the edges; and bi-orthogonal projection enforces the constraint of the degradation model. Simulation results validate the proposed method in enhancing edges, details and textures and reducing staircases. Compared with existing second-order PDE-based zoom methods, images zoomed by the proposed method have better visual effect and higher Peak Signal-to-Noise Ratio (PSNR) and Mean Structural Similarity Measure (MSSIM); for example, the PSNR of a zoomed image dominated by smooth regions is about 0.1 dB higher than that of the improved Total Variation (TV) enlargement method, and the PSNR of a zoomed image with many weak edges and textures is more than 0.5 dB higher. Therefore, images zoomed by the method look more natural, and the resolution of weak edges and textures is enhanced better.

    Near outlier detection of scattered point cloud
    ZHAO Jingdong, YANG Fenghua, LIU Aijng
    2015, 35(4):  1089-1092.  DOI: 10.11772/j.issn.1001-9081.2015.04.1089
    Abstract ( )   PDF (747KB) ( )
    References | Related Articles | Metrics

    Concerning that the original Surface Variation based Local Outlier Factor (SVLOF) cannot filter out outliers on the edges or corners of a three-dimensional solid, a new near-outlier detection algorithm for scattered point clouds was proposed. The algorithm first defines SVLOF on a k-neighborhood-like region, expanding the definition of SVLOF. The expanded SVLOF can filter outliers not only on smooth surfaces but also on the edges and corners of a three-dimensional solid, while still retaining enough threshold-selection space of the original SVLOF. Experimental results on simulated and measured data show that the new algorithm can detect near outliers of scattered point clouds effectively without obviously reducing efficiency.

    Semantic annotation model for scenes based on formal concept analysis
    ZHANG Sulan, ZHANG Jifu, HU Lihua, CHU Meng
    2015, 35(4):  1093-1096.  DOI: 10.11772/j.issn.1001-9081.2015.04.1093
    Abstract ( )   PDF (590KB) ( )
    References | Related Articles | Metrics

    To generate an effective visual dictionary for representing image scenes and further improve the accuracy of semantic annotation, a scene annotation model based on Formal Concept Analysis (FCA) was presented, which treats the training image set with its initial visual dictionary as a formal context. The weight of each visual word was first computed with information entropy, and FCA structures were built for the various scene types. Then the arithmetic mean of the weights of the visual words in each intent was used to describe the contribution of the visual words to the semantics, and the visual vocabulary of each scene type was extracted from the structure according to vocabulary thresholds. Finally, each test image was assigned a class label using the K-nearest neighbor method. The proposed approach was evaluated on the Fei-Fei Scene 13 natural scene data set, and the experimental results show that, with β=0.05 and γ=15, it achieves better classification accuracy than the methods of Fei-Fei and Bai.

    Retrieval of medical images based on fusion of global feature and scale-invariant feature transform feature
    ZHOU Dongyao, WU Yueqing, YAO Yu
    2015, 35(4):  1097-1100.  DOI: 10.11772/j.issn.1001-9081.2015.04.1097
    Abstract ( )   PDF (820KB) ( )
    References | Related Articles | Metrics

    Feature extraction is a key step in image retrieval and image registration, but a single feature cannot express the information of medical images efficiently. To overcome this shortcoming, a new medical image retrieval algorithm combining global and local features was proposed based on the characteristics of medical images. First, after studying single-feature medical image retrieval techniques, a retrieval method considering global features and relevance feedback was proposed. Then, an improved Scale-Invariant Feature Transform (SIFT) feature extraction and matching algorithm was proposed to optimize the SIFT features. Finally, local features were used for stepwise refinement to ensure the accuracy of the results and improve retrieval quality. Experimental results on general Digital Radiography (DR) images prove the effectiveness of the proposed algorithm.

    Detection of elliptical hole group based on maximum inscribed circle
    HUAN Hai, HUANG Lingxiao, ZHANG Yu, LU Song
    2015, 35(4):  1101-1105.  DOI: 10.11772/j.issn.1001-9081.2015.04.1101
    Abstract ( )   PDF (742KB) ( )
    References | Related Articles | Metrics

    In view of the high cost and low effectiveness of current detection methods for workpieces with elliptical hole groups, an elliptical hole group detection method based on the maximal inscribed circle was proposed. Firstly, the image was preprocessed by denoising, binarization and edge detection. According to the geometric properties of the ellipse, the maximal inscribed circle of each ellipse was calculated using a center estimation method and a distance selection method, from which the center coordinates, the lengths of the major and minor axes, and the inclination angle were determined. The experimental results show that the proposed method detects elliptical hole groups quickly and accurately; it can quickly intercept valid elliptical arcs based on the estimated ellipse center, reducing invalid samples. Compared with Hough ellipse detection based on center estimation and ellipse detection based on the improved least-squares method, it offers higher precision and shorter time consumption, and can be effectively applied to the automatic inspection of workpieces with elliptical hole groups.

    Target tracking algorithm for underwater bearings-only system with incomplete measurements
    DING Wei, LI Yinya
    2015, 35(4):  1106-1109.  DOI: 10.11772/j.issn.1001-9081.2015.04.1106
    Abstract ( )   PDF (545KB) ( )
    References | Related Articles | Metrics

    Concerning the problem of underwater bearings-only target tracking with incomplete measurements, where the probability of sensor detection is less than 1, an improved extended Kalman filtering algorithm for target state estimation was presented. First, the mathematical model of underwater bearings-only target tracking with incomplete measurements was established. Second, when a measurement was missing, the previous update data was used to compensate for it before filtering. Finally, two evaluation criteria, the Cramer-Rao Lower Bound (CRLB) and Root Mean Square Error (RMSE), were used to evaluate the proposed algorithm. The simulation results show that the proposed extended Kalman filtering algorithm achieves high real-time performance with the desired tracking precision for underwater bearings-only target tracking with incomplete measurements.
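
    The compensation idea, reusing the last available data when detection fails, can be sketched with a scalar Kalman filter (an illustrative stand-in for the paper's extended Kalman filter on the bearings-only model; the noise values here are arbitrary):

```python
def kalman_with_gaps(zs, q=1e-4, r=0.04):
    """Scalar Kalman filter; a missing measurement (None) is replaced by
    the previous measurement before the update step."""
    x, p = 0.0, 1.0          # state estimate and its variance
    last_z = 0.0
    estimates = []
    for z in zs:
        if z is None:        # detection failed: compensate with last data
            z = last_z
        last_z = z
        p += q               # predict (random-walk process model)
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update
        p *= (1 - k)
        estimates.append(x)
    return estimates
```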

    Virtual development model of plant-reed based on growth kinetics
    TANG Weidong, LI Pingping, LI Jinzhong
    2015, 35(4):  1110-1115.  DOI: 10.11772/j.issn.1001-9081.2015.04.1110
    Abstract ( )   PDF (927KB) ( )
    References | Related Articles | Metrics

    Because physiological and ecological characteristics are lacking in plant morphology modeling, the law of plant development cannot be expressed in the model. To solve this problem, a new plant morphology modeling method based on growth kinetics was proposed, taking the reed as an example. Firstly, the growth kinetics of the plant was studied, and a morphological model was constructed based on effective accumulated temperature and growth rate. Then the topological changes of the plant canopy structure were described using the Open L-systems (Open-L) method. Finally, an algorithm for constructing a virtual plant development model was presented by coupling the geometric model and the display model of the plant topology and organs. The simulation results demonstrate that the proposed method is effective and feasible in visualizing plant morphogenesis and reflecting its growth mechanism, providing valuable evidence for the dynamic control and prediction of plant development.

    Error analysis of unmanned aerial vehicle remote sensing images stitching based on simulation
    LI Pengjun, LI Jianzeng, SONG Yao, ZHANG Yan, DU Yulong
    2015, 35(4):  1116-1119.  DOI: 10.11772/j.issn.1001-9081.2015.04.1116
    Abstract ( )   PDF (702KB) ( )
    References | Related Articles | Metrics

    Concerning the serious distortion of Unmanned Aerial Vehicle (UAV) remote sensing image stitching caused by the increase of accumulated error, a projection error correction algorithm based on space intersection was proposed. Using space intersection theory, the spatial coordinates of 3D points were calculated from corresponding points; all 3D points were then orthographically projected onto the same space plane, and the orthographic points were projected back onto the image plane to obtain corrected corresponding points. Finally, the M-estimator Sample Consensus (MSAC) algorithm was used to estimate the homography matrix and obtain the stitched image. The simulation results show that the algorithm can effectively eliminate projection error and thereby inhibit UAV remote sensing image stitching error.

    Reconstruction of images at intermediate phases of lung 4D-CT data based on deformable registration
    GENG Dandan, WANG Tingting, CAO Lei, ZHANG Yu
    2015, 35(4):  1120-1123.  DOI: 10.11772/j.issn.1001-9081.2015.04.1120
    Abstract ( )   PDF (609KB) ( )
    References | Related Articles | Metrics

    Due to the high radiation dose to the patient when acquiring lung four-Dimensional Computed Tomography (4D-CT) data, a method was proposed for deriving the phase-binned 4D-CT image sets through deformable registration of images acquired at some known phases. First, the Active Demons registration algorithm was employed to estimate the motion field between the inhale and exhale phases. Then, images at an intermediate phase were reconstructed by linear interpolation of the deformation coefficients. The experimental results showed that images at intermediate phases could be reconstructed efficiently, and quantitative analysis of landmark displacements showed that 3 mm accuracy was achievable. The difference maps between reconstructed and acquired images illustrated a similar level of success. The proposed method can accurately reconstruct images at intermediate phases of lung 4D-CT data.
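
    The interpolation step can be sketched as follows (illustrative only; the displacement-field layout and the nearest-neighbour warp are our own simplifications of what a Demons registration would output):

```python
import numpy as np

def interpolate_phase(disp_full, t):
    """Linearly scale the end-inhale-to-end-exhale displacement field to
    an intermediate breathing phase t in [0, 1]."""
    return t * np.asarray(disp_full, dtype=float)

def warp(image, disp):
    """Backward warp with nearest-neighbour sampling; disp[y, x] holds the
    (dy, dx) displacement of pixel (y, x)."""
    img = np.asarray(image)
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            sy = int(round(y - disp[y, x, 0]))
            sx = int(round(x - disp[y, x, 1]))
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = img[sy, sx]
    return out
```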

    Echocardiography chamber segmentation based on integration of speeded up robust feature fitting and Chan-Vese model
    CHEN Xiaolong, WANG Xiaodong, LI Xin, YE Jianyu, YAO Yu
    2015, 35(4):  1124-1128.  DOI: 10.11772/j.issn.1001-9081.2015.04.1124
    Abstract ( )   PDF (757KB) ( )
    References | Related Articles | Metrics

    In the automatic segmentation of cardiac structures in echocardiographic sequences within a cardiac cycle, contours with weak edges cannot be extracted effectively. A new approach combining Speeded Up Robust Feature (SURF) and the Chan-Vese model was proposed to resolve this problem. Firstly, the weak boundary of the heart chamber in the first frame was marked manually. Then, SURF points around the boundary were extracted to build a Delaunay triangulation, and the positions of weak boundaries in subsequent frames were predicted by matching feature points between adjacent frames. A coarse contour was extracted using the Chan-Vese model, and the fine contour was acquired by a region growing algorithm. The experiments prove that the proposed algorithm can effectively extract heart chamber contours with weak edges, with results similar to those of manual segmentation.

    Dorsal hand vein recognition algorithm based on sparse coding
    JIA Xu, WANG Jinkai, CUI Jianjiang, SUN Fuming, XUE Dingyu
    2015, 35(4):  1129-1132.  DOI: 10.11772/j.issn.1001-9081.2015.04.1129
    Abstract ( )   PDF (726KB) ( )
    References | Related Articles | Metrics

    In order to improve the effectiveness of vein feature extraction, a dorsal hand vein recognition method based on sparse coding was proposed. Firstly, during image acquisition, system parameters were adaptively adjusted in real time according to image quality assessment results, so that high-quality vein images could be acquired. Then, concerning that the effectiveness of subjectively designed vein features mainly depends on experience, a feature learning mechanism based on sparse coding was proposed to extract high-quality objective vein features. Experiments show that vein features obtained by the proposed method have good inter-class separability and intra-class compactness, and a system using this algorithm achieves a high recognition rate.

    Image mosaic approach of transmission tower based on saliency map
    ZHANG Xu, GAO Jiao, WANG Wanguo, LIU Liang, ZHANG Jingjing
    2015, 35(4):  1133-1136.  DOI: 10.11772/j.issn.1001-9081.2015.04.1133
    Abstract ( )   PDF (664KB) ( )
    References | Related Articles | Metrics

    Images of transmission towers acquired by Unmanned Aerial Vehicle (UAV) have high resolution and complex backgrounds, so traditional feature-point-based stitching algorithms detect a large number of feature points in the background, which costs much time and degrades matching accuracy. To solve this problem, a fast and robust image mosaic algorithm was proposed. To reduce the influence of the background, each image was first segmented into foreground and background based on a new implementation of salient region detection. To improve feature point extraction and reduce computational complexity, the transformation matrix was calculated and image registration was completed with the ORB (Oriented Features from Accelerated Segment Test (FAST) and Rotated Binary Robust Independent Elementary Features (BRIEF)) feature. Finally, the mosaic was produced with an image fusion method based on multi-scale analysis. The experimental results indicate that the proposed algorithm can complete image mosaicing precisely and quickly with a satisfactory effect.

    Big data benchmarks: state of the art and trends
    ZHOU Xiaoyun, QIN Xiongpai, WANG Qiuyue
    2015, 35(4):  1137-1142.  DOI: 10.11772/j.issn.1001-9081.2015.04.1137
    Abstract ( )   PDF (1039KB) ( )
    References | Related Articles | Metrics

    A big data benchmark is eagerly needed by customers, industry and academia to evaluate big data systems, improve current techniques and develop new ones. A number of prominent works of the last several years were reviewed: their characteristics were introduced and their shortcomings analyzed. On this basis, some suggestions on building a new big data benchmark are provided: 1) component benchmarks as well as end-to-end benchmarks should be used in combination to test individual tools inside the system and the system as a whole, with component benchmarks serving as ingredients of the whole benchmark suite; 2) besides SQL queries, workloads should be enriched with complex analytics to cover different application requirements; 3) in addition to performance metrics (response time and throughput), other metrics should be considered, including scalability, fault tolerance, energy saving and security.

    Implementation of decision tree algorithm dealing with massive noisy data based on Hadoop
    LIU Yaqiu, LI Haitao, JING Weipeng
    2015, 35(4):  1143-1147.  DOI: 10.11772/j.issn.1001-9081.2015.04.1143
    Abstract ( )   PDF (750KB) ( )
    References | Related Articles | Metrics

    Concerning that current decision tree algorithms seldom consider the influence of noise in the training set on the model, and that traditional memory-resident algorithms have difficulty processing massive data, an Imprecise Probability C4.5 algorithm named IP-C4.5 was proposed based on Hadoop. When training the model, IP-C4.5 assumes the training set used to build the decision tree is not reliable, and uses the imprecise-probability information gain rate as the split criterion to reduce the influence of noisy data on the model. To handle massive data, IP-C4.5 was implemented on Hadoop with MapReduce programming based on file splits. The experimental results show that on noisy training sets the accuracy of IP-C4.5 is higher than that of C4.5 and Complete CDT (CCDT), especially when the noise degree exceeds 10%, and that the parallelized IP-C4.5 on Hadoop can process massive data.
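
    For comparison, the classic C4.5 gain ratio that IP-C4.5 replaces with its imprecise-probability variant can be computed as follows (a sketch of the standard criterion only, not the imprecise-probability one):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    """C4.5 gain ratio of a discrete attribute: information gain divided
    by the split information of the attribute."""
    n = len(labels)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - cond
    split = entropy([row[attr] for row in rows])
    return gain / split if split else 0.0
```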

    Personalized recommendation technique for mobile life services based on location cluster
    ZHENG Hui, LI Bing, CHEN Donglin, LIU Pingfeng
    2015, 35(4):  1148-1153.  DOI: 10.11772/j.issn.1001-9081.2015.04.1148
    Abstract ( )   PDF (842KB) ( )
    References | Related Articles | Metrics

    Current mobile recommendation systems limit the real role of location information because they treat location as just an ordinary attribute; more importantly, the correlation of locations and the boundary of users' activities are ignored. To address this issue, a personalized recommendation technique for mobile life services based on location clusters was proposed, which considers both the user preference within a location cluster and the related weights derived from a forgetting factor and information entropy. Fuzzy clustering is used to obtain the location clusters, the forgetting factor adjusts the preference for service resources within each cluster, and the related weights are obtained using probability distributions and information entropy. The top-N recommendation set is obtained by matching user preferences with service resources, so the algorithm can provide service resources highly similar to the user's preference. This conclusion was verified by a case study.
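
    The entropy-based weighting idea can be sketched as follows (a generic information-entropy weighting sketch over a positive decision matrix; the paper's exact formulation may differ):

```python
from math import log

def entropy_weights(matrix):
    """Objective index weights from information entropy: indexes whose
    values vary more across alternatives carry more weight.
    `matrix` is rows = alternatives, columns = indexes, entries > 0."""
    m = len(matrix)
    weights = []
    for col in zip(*matrix):
        s = sum(col)
        probs = [v / s for v in col]
        e = -sum(p * log(p) for p in probs if p > 0) / log(m)
        weights.append(1 - e)          # degree of divergence
    total = sum(weights)
    return [w / total for w in weights]
```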

    Skyline based search results sorting method
    YIN Wenke, WU Shanshan, DING Feng, XUN Zhide
    2015, 35(4):  1154-1158.  DOI: 10.11772/j.issn.1001-9081.2015.04.1154
    Abstract ( )   PDF (871KB) ( )  
    References | Related Articles | Metrics

    Concerning the high redundancy and low diversity of search result ranking in current vertical search engines, a skyline-based search result sorting method was proposed. The search results were sorted according to skyline level, domination degree and coverage. To reduce the time cost, a bitmap-based algorithm for computing skyline level and domination degree was proposed. The experimental results show that the proposed method achieves better search-result diversity with low redundancy, and computes skyline level and domination degree faster.
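
    The skyline level used for ranking can be computed by iteratively "peeling" undominated layers; this naive quadratic sketch illustrates the idea (larger attribute values assumed better), whereas the paper's bitmap-based algorithm is a faster variant not reproduced here:

    ```python
    def dominates(a, b):
        """a dominates b: no worse in every dimension, strictly better in one."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def skyline_levels(points):
        """Assign each point its skyline level: level 1 = undominated
        points; remove them and repeat on the remainder."""
        remaining = list(points)
        levels, level = {}, 1
        while remaining:
            layer = [p for p in remaining
                     if not any(dominates(q, p) for q in remaining if q != p)]
            for p in layer:
                levels[p] = level
            remaining = [p for p in remaining if p not in layer]
            level += 1
        return levels
    ```

    Results on the same level are mutually incomparable, which is what gives the sorted list its diversity.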

    Automated parallel software test case generation for cloud testing
    LIU Xiaoqiang, XIE Xiaomeng, DU Ming, CHANG Shan, CAI Lizhi, LIU Zhenyu
    2015, 35(4):  1159-1163.  DOI: 10.11772/j.issn.1001-9081.2015.04.1159
    Abstract ( )   PDF (780KB) ( )  
    References | Related Articles | Metrics

    To achieve efficient software testing in cloud computing environments, a method for automatically generating parallel test cases for the functional testing of Web application systems was proposed. First, parallel test paths were obtained by running a depth-first traversal algorithm on the scene flow graph; then parallel test scripts were assembled from the test scripts referenced by those paths, and parameterized valid test data sets that can traverse the target test paths and replace the test data in the scripts were generated using the Search Based Software Testing (SBST) method. A large number of distributable parallel test cases were then generated automatically by feeding the test data into the parallel test scripts. Finally, a prototype system for automatic testing in a cloud computing environment was built to examine the method. The experimental results show that the method can rapidly generate a large number of valid test cases for testing in cloud computing environments and improve testing efficiency.
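
    The first step, enumerating parallel test paths by depth-first traversal of the scene flow graph, might look like the following sketch; the adjacency-dict graph encoding and node names are assumptions for illustration:

    ```python
    def test_paths(graph, start, end, path=None):
        """Depth-first enumeration of all simple paths from start to end;
        each path corresponds to one parallel test script skeleton."""
        path = (path or []) + [start]
        if start == end:
            return [path]
        paths = []
        for nxt in graph.get(start, []):
            if nxt not in path:  # simple paths only: never revisit a node
                paths.extend(test_paths(graph, nxt, end, path))
        return paths
    ```

    Each returned path can then be handed to the SBST step, which searches for input data that actually drives execution down that path.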

    Test case generation of data gathering protocol for wireless sensor networks based on timed automata model
    WANG Fei, YANG Hongli, QIN Shengchao, HU Shichao, LIU Yuan
    2015, 35(4):  1164-1168.  DOI: 10.11772/j.issn.1001-9081.2015.04.1164
    Abstract ( )   PDF (739KB) ( )  
    References | Related Articles | Metrics

    A test case generation method for the data gathering protocol of Wireless Sensor Networks (WSN) based on the timed automata model was studied from the perspective of protocol testing: the timed automata model of the data gathering protocol was built with UPPAAL, and a set of test traces satisfying certain coverage criteria was generated with UPPAAL CoVer. To facilitate practical test case generation, an auxiliary tool named Auto Test Case Generation Tool (ATCGT) was developed. The effectiveness of the method was demonstrated by modeling and generating test cases for an industrial wireless meter-reading data gathering protocol.

    Clone genealogies extraction based on software evolution over multiple versions
    TU Ying, ZHANG Liping, WANG Chunhui, HOU Min, LIU Dongsheng
    2015, 35(4):  1169-1173.  DOI: 10.11772/j.issn.1001-9081.2015.04.1169
    Abstract ( )   PDF (985KB) ( )  
    References | Related Articles | Metrics

    Since clone detection results alone cannot fully reflect the features of clones, clone genealogies extracted across multiple versions can uncover the patterns and characteristics exhibited by clones in an evolving system. A clone genealogy extraction method named FCG was proposed: FCG first mapped clones between each pair of adjacent versions, then identified clone evolution patterns, and combined the results into clone genealogies. Experiments on 6 open source systems found that the average lifetime of the clones in the current version spans over 70 percent of the studied versions, and that most of them do not change, which indicates that the majority of clones are well maintained; some unstable clones, however, may be defect-prone and need modification or refactoring. The results show that FCG can efficiently extract clone genealogies, which contributes to a better understanding of clones and provides insights for their targeted management.
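
    The version-to-version clone mapping step can be approximated by greedy textual similarity matching; `difflib.SequenceMatcher` and the 0.8 threshold below are stand-ins for whatever mapping criterion FCG actually uses:

    ```python
    from difflib import SequenceMatcher

    def map_clones(prev, curr, threshold=0.8):
        """Greedily map clone fragments of the previous version to the
        most similar unmatched fragment of the current version."""
        mapping = {}
        unused = set(range(len(curr)))
        for i, old in enumerate(prev):
            best, best_sim = None, threshold
            for j in unused:
                sim = SequenceMatcher(None, old, curr[j]).ratio()
                if sim >= best_sim:
                    best, best_sim = j, sim
            if best is not None:
                mapping[i] = best
                unused.discard(best)
        return mapping
    ```

    Fragments left unmapped on either side would then be classified as disappeared or newly introduced clones when identifying evolution patterns.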

    GPS signal tracking algorithm based on carrier frequency assist phase
    SHEN Feng, LI Weidong
    2015, 35(4):  1174-1178.  DOI: 10.11772/j.issn.1001-9081.2015.04.1174
    Abstract ( )   PDF (707KB) ( )  
    References | Related Articles | Metrics

    Since the tracking performance of a traditional Global Positioning System (GPS) receiver is not ideal in high dynamic environments, a GPS signal tracking algorithm based on carrier-frequency-assisted phase tracking was proposed. The traditional single tracking loop was replaced by a frequency lock loop assisting a phase lock loop, and the intermediate-frequency signals in all tracking channels of the receiver were processed with a Kalman filter. Based on the pseudorange and pseudorange-rate residuals of multiple tracking channels, a comprehensive estimate of the system state parameters was obtained; the state and measurement equations of the Kalman filter were established and the tracking loop feedback was derived. The tracking performance of the traditional scalar tracking mode was compared with that of the proposed method. The simulation results show that the proposed algorithm reaches steady state about 100 ms sooner, improves the position accuracy by 5 m and the velocity accuracy by about 3 m/s, and handles high dynamic signals better when the receiver user is moving fast.
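
    A heavily simplified sketch of the Kalman-filter loop is given below: a two-state (position, rate) filter driven by residuals, with a diagonal gain approximation to keep it short. It illustrates only the predict/update cycle; the state layout, noise values `q` and `r`, and the diagonal approximation are assumptions, not the paper's multi-channel model:

    ```python
    def kalman_step(x, P, z, dt, q=1e-3, r=1.0):
        """One predict/update cycle. x = [pos, rate], P = 2x2 covariance
        (nested lists), z = (pos_meas, rate_meas)."""
        # Predict with a constant-rate model: x' = F x, P' = F P F^T + Q.
        x = [x[0] + dt * x[1], x[1]]
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
        P = [[p00, P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with both states measured directly (H = I); the gain is
        # computed per axis from the diagonal only -- a crude shortcut.
        for i in (0, 1):
            k = P[i][i] / (P[i][i] + r)
            x[i] += k * (z[i] - x[i])      # residual-driven correction
            P[i][i] *= (1 - k)
        return x, P
    ```

    Fed a consistent trajectory, the rate estimate converges and the position estimate tracks the measurements, which is the behavior the loop feedback relies on.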

    Parallel multiple input and multiple output equalization based on software defined radio
    ZHANG Yongjun, CHEN Ting
    2015, 35(4):  1179-1184.  DOI: 10.11772/j.issn.1001-9081.2015.04.1179
    Abstract ( )   PDF (866KB) ( )  
    References | Related Articles | Metrics

    Since baseband processors for Multiple Input and Multiple Output (MIMO) equalization require high throughput and high flexibility, a parallel MIMO detector for the 3GPP-LTE standard was proposed based on the Software Defined Radio (SDR) methodology, which adopted Single Instruction Multiple Data (SIMD) and Very Long Instruction Word (VLIW) techniques to exploit parallelism in both inter-tone and intra-tone MIMO equalization. Each SIMD lane supported both 16-bit fixed-point and 20-bit floating-point complex vector and matrix operations, meeting the power, processing-delay and precision requirements of different MIMO configurations. The experimental results show that the proposed MIMO detector is highly efficient, with a 4×4 matrix inversion rate of up to 95 MInversion/s, which satisfies the requirement of the 3GPP-LTE standard; moreover, its programmability and configurability support different MIMO equalization algorithms.

    Optimization between multiple input multiple output radar signal and target interference based on Stackelberg game
    LAN Xing, WANG Xingliang, LI Wei, WU Haotian, JIANG Mengran
    2015, 35(4):  1185-1189.  DOI: 10.11772/j.issn.1001-9081.2015.04.1185
    Abstract ( )   PDF (677KB) ( )  
    References | Related Articles | Metrics

    To solve the game of detection and stealth between Multiple Input Multiple Output (MIMO) radar and a target in the presence of clutter, a new two-step water-filling method was proposed. Firstly, a space-time coding model was built. Then, based on mutual information, water-filling was applied to distribute the target interference power, and generalized water-filling was applied to distribute the radar signal power. Finally, optimization schemes for the Stackelberg games with target dominance and with radar dominance were obtained under both strong and weak clutter. The simulation results indicate that both the radar signal power allocation and the trend of the generalized water-filling level are affected by clutter: in the strong clutter environment, the mutual information of the two optimization schemes drops to about half, the interference factors decrease by 0.2 and 0.25 respectively, and the mutual information becomes less sensitive to interference. This proves the availability of the proposed algorithm.
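
    The classical water-filling step can be sketched with a bisection search on the water level; this is the textbook form (allocate power where the noise-plus-clutter floor is lowest), not the paper's generalized variant:

    ```python
    def water_filling(noise, total_power):
        """Allocate p_i = max(mu - noise_i, 0) so that sum(p_i) equals
        total_power, finding the water level mu by bisection."""
        lo, hi = min(noise), max(noise) + total_power
        for _ in range(100):  # bisection on the water level mu
            mu = (lo + hi) / 2
            used = sum(max(mu - n, 0.0) for n in noise)
            if used > total_power:
                hi = mu
            else:
                lo = mu
        return [max(mu - n, 0.0) for n in noise]
    ```

    Channels whose floor sits above the water level receive no power at all, which is why strong clutter reshapes the whole allocation rather than scaling it uniformly.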

    Automatic generation of urban rail transit train diagram for express/slow trains operation and CAD realization
    WANG Xianming, CHEN Rongwu, CAI Zheyang, WANG Fangchao
    2015, 35(4):  1190-1195.  DOI: 10.11772/j.issn.1001-9081.2015.04.1190
    Abstract ( )   PDF (988KB) ( )  
    References | Related Articles | Metrics

    Following the idea of manually drawing train diagrams for express/slow train operation, a modularization approach for different proportions of express and slow trains was proposed. Under the condition of satisfying train operation requirements, transitions and adjustments between different modules were added to match the entrance/exit tracks of the depot, so that the automatically generated diagram meets the needs of designers. The method can draw full-day operation plans for different lines at particular express-to-slow proportions, convert the train diagram into CAD scripts, and realize automatic drawing in CAD software. Finally, taking Line 18 of the Chengdu Metro as an example, the proposed method was implemented and its feasibility proved.

    Ranking of military training performances based on data envelopment analysis of common weights
    ZHANG Youliang, ZHANG Hongjun, ZHANG Rui, YANG Bojiang, ZENG Zilin, GUO Lisheng
    2015, 35(4):  1196-1199.  DOI: 10.11772/j.issn.1001-9081.2015.04.1196
    Abstract ( )   PDF (521KB) ( )  
    References | Related Articles | Metrics

    Conventional approaches to Common Weights (CW) generation in Data Envelopment Analysis (DEA) are either non-linear or scale-relevant. To solve this problem, according to the demands of military training performance evaluation, a new method was proposed to generate CW in DEA. The new method took the DEA-efficient units as the basis of calculation: the training data were first normalized, and multi-objective programming was then employed for CW generation, leading to a fairer and more reasonable ranking of performances. The proposed method is not only linear but also scale-irrelevant. Lastly, a military application illustrates that the proposed method is scientific and effective.
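
    Once common weights are available, ranking reduces to a normalized efficiency ratio per decision-making unit; the sketch below shows only that scoring step with max-normalization assumed (deriving the weights themselves requires the paper's multi-objective programme, which is not reproduced here):

    ```python
    def dea_scores(inputs, outputs, w_in, w_out):
        """Efficiency of each decision-making unit under one shared
        (common) weight vector: weighted outputs / weighted inputs,
        with each column normalized by its maximum for scale-freeness."""
        def normalize(rows):
            maxes = [max(r[j] for r in rows) for j in range(len(rows[0]))]
            return [[v / m for v, m in zip(row, maxes)] for row in rows]
        X, Y = normalize(inputs), normalize(outputs)
        return [sum(w * y for w, y in zip(w_out, Yr)) /
                sum(w * x for w, x in zip(w_in, Xr))
                for Xr, Yr in zip(X, Y)]
    ```

    Because every unit is scored with the same weights, the resulting efficiencies are directly comparable and yield a total ranking, unlike classical DEA where each unit picks its own most favorable weights.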

    Fuzzy-proportion integration differentiation control system based on image visual servo
    WANG Sheng, CHEN Ning
    2015, 35(4):  1200-1204.  DOI: 10.11772/j.issn.1001-9081.2015.04.1200
    Abstract ( )   PDF (743KB) ( )  
    References | Related Articles | Metrics

    In view of the difficult parameter tuning and unsatisfactory control performance of the conventional Proportion Integration Differentiation (PID) controller, a fuzzy-PID controller combining the PID controller with fuzzy control theory was proposed. The control system applied the Eye-to-Hand visual model, introduced a visual servo mechanism, and realized real-time, online and adaptive adjustment of the three PID parameters Kp, Ti and Td from the errors measured in the image. The experiment was performed on a punching machine visual servo motion control system composed of a PC, a CompactRIO, an NI-9401, a Complementary Metal Oxide Semiconductor (CMOS) camera, a motor driver and a brushless Direct Current (DC) motor. The results show that, compared with the traditional PID controller, the response speed of the image-visual-servo-based fuzzy-PID controller is increased by 60%, the overshoot is reduced by 80%, and the robustness is better. It can not only improve the positioning accuracy of holes, but also process and detect holes almost simultaneously.
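
    A crude sketch of fuzzy gain adjustment inside a PID loop is shown below; the membership function, the scaling rules, and the names `fuzzy_gains`/`pid_step` are illustrative assumptions, not the tuned rule base used in the paper:

    ```python
    def fuzzy_gains(error, d_error, base=(1.0, 0.1, 0.05)):
        """Fuzzy-style gain scheduling: scale the base PID gains by the
        membership of |error| in a 'large error' set (illustrative)."""
        kp, ki, kd = base
        big = min(abs(error) / 10.0, 1.0)   # membership of "large error"
        # Large error -> raise Kp for speed, cut Ki to limit overshoot.
        return kp * (1 + big), ki * (1 - 0.5 * big), kd * (1 + 0.5 * abs(d_error))

    def pid_step(setpoint, measured, state, dt=0.01):
        """One PID update with fuzzy-adjusted gains;
        state = (integral, previous error)."""
        integral, prev = state
        error = setpoint - measured
        d_error = (error - prev) / dt
        kp, ki, kd = fuzzy_gains(error, d_error)
        integral += error * dt
        u = kp * error + ki * integral + kd * d_error
        return u, (integral, error)
    ```

    In the visual-servo setting, `error` would come from the image-space deviation of the hole position, recomputed each frame.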

Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn