Table of Contents

    01 November 2014, Volume 34 Issue 11
    Parallel algorithms for multi-grid lattice Boltzmann method
    LIU Zhixiang SONG Anping XU Lei ZHEN Hanyuan ZHANG Wu
    2014, 34(11):  3065-3068.  DOI: 10.11772/j.issn.1001-9081.2014.11.3065

    Due to the large number of computational grids and the slow convergence of numerical simulations of complex flows, a parallel generation algorithm for multiple Cartesian grids based on three-dimensional geometry was proposed for the Lattice Boltzmann Method (LBM), and a multi-grid parallel LBM was developed on top of this grid generation algorithm. The proposed algorithms efficiently reduce the number of computational grids and improve convergence by coupling grids of different sizes. Numerical results also show that the proposed parallel algorithms have considerable scalability.

    Related task scheduling algorithm based on task hierarchy and time constraint in cloud computing
    CHEN Xi MAO Yingchi JIE Qing ZHU Lili
    2014, 34(11):  3069-3072.  DOI: 10.11772/j.issn.1001-9081.2014.11.3069

    Concerning the delay of related-task scheduling in cloud computing, a Related Task Scheduling algorithm based on Task Hierarchy and Time Constraint (RTS-THTC) was proposed. The related tasks and their execution order were represented by a Directed Acyclic Graph (DAG), and task execution concurrency was improved by the proposed hierarchical task model. By calculating the total time constraint of each task layer, the tasks were dispatched to the resource with the minimum execution time. The experimental results demonstrate that RTS-THTC achieves better performance than the Heterogeneous Earliest-Finish-Time (HEFT) algorithm in terms of total execution time and task delay.
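
    As a rough illustration of the layering idea (not the authors' implementation), the Python sketch below assigns DAG tasks to layers by their longest path from the roots and greedily dispatches each layer to the resource with the minimum estimated execution time; the cost function exec_time is a stand-in assumption:

        from collections import defaultdict

        def layer_dag(tasks, deps):
            """Assign each task the index of its DAG layer (longest path from a root)."""
            level = {}
            def depth(t):
                if t not in level:
                    level[t] = 1 + max((depth(p) for p in deps.get(t, [])), default=-1)
                return level[t]
            for t in tasks:
                depth(t)
            layers = defaultdict(list)
            for t, l in level.items():
                layers[l].append(t)
            return [layers[l] for l in sorted(layers)]

        def dispatch(layers, resources, exec_time):
            """Greedy dispatch: each task in a layer goes to the fastest resource."""
            schedule = {}
            for layer in layers:
                for t in layer:
                    schedule[t] = min(resources, key=lambda r: exec_time(t, r))
            return schedule

        tasks = ["a", "b", "c", "d"]
        deps = {"c": ["a", "b"], "d": ["c"]}      # c depends on a and b, d on c
        print(layer_dag(tasks, deps))             # [['a', 'b'], ['c'], ['d']]
        times = {"r1": 1.0, "r2": 0.5}            # toy per-resource speeds
        print(dispatch(layer_dag(tasks, deps), ["r1", "r2"], lambda t, r: times[r]))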

    Parallel fuzzy partition algorithm based on MapReduce
    ZHANG Guangrong CHEN Qingkui ZHANG Gang ZHAO Haiyan GAO Liping HUO Huan
    2014, 34(11):  3073-3077.  DOI: 10.11772/j.issn.1001-9081.2014.11.3073

    It is difficult for users to find the items they need in a large-scale project resource repository because the project resources in it are disordered, so a parallel fuzzy partition algorithm based on MapReduce was proposed. The algorithm first abstracted and standardized the characteristic attributes of the original project resources. A similarity matrix was then built from the standardized attributes and split into blocks; MapReduce was used to process the block matrices and merge the results. Finally, the partition was obtained according to a threshold. Contrast experiments with the K-means algorithm and a genetic algorithm show that the proposed algorithm has higher accuracy and recall, achieves better speedup on large-scale data, and divides project resources effectively and accurately.

    Real-time clustering for massive data using Storm
    WANG Mingkun YUAN Shaoguang ZHU Yongli WANG Dewen
    2014, 34(11):  3078-3081.  DOI: 10.11772/j.issn.1001-9081.2014.11.3078

    In order to improve the real-time responsiveness of massive data processing, the Storm distributed real-time platform was introduced for data mining, and a Storm-based version of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm was designed to deal with massive data. The algorithm was divided into three main steps: data collection, clustering analysis and result output. All procedures were realized with the pre-defined components of Storm and submitted to the Storm cluster for execution. Comparative analysis and performance monitoring show that the system has low latency and high throughput, which proves that Storm is well suited to real-time processing of massive data.
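
    For reference, a minimal in-memory DBSCAN core in plain Python (the clustering step only; the paper's data collection and result output run as Storm components, which are not reproduced here):

        def dbscan(points, eps, min_pts):
            """Label each point with a cluster id, or -1 for noise (O(n^2) sketch)."""
            def neighbors(i):
                return [j for j, q in enumerate(points)
                        if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]
            labels, cluster = [None] * len(points), 0
            for i in range(len(points)):
                if labels[i] is not None:
                    continue
                seeds = neighbors(i)
                if len(seeds) < min_pts:
                    labels[i] = -1                    # provisionally noise
                    continue
                labels[i] = cluster
                queue = [j for j in seeds if j != i]
                while queue:
                    j = queue.pop()
                    if labels[j] in (None, -1):
                        if labels[j] is None:
                            nbrs = neighbors(j)
                            if len(nbrs) >= min_pts:  # j is a core point: expand
                                queue.extend(nbrs)
                        labels[j] = cluster
                cluster += 1
            return labels

        print(dbscan([(0, 0), (0, 1), (1, 0), (9, 9)], eps=1.5, min_pts=2))
        # [0, 0, 0, -1]: three nearby points form a cluster, the outlier is noise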

    Dynamic load balancing mechanism in cloud-based virtual cluster system
    LI Liyao ZHAO Shaoka LIN Dongsen XU Cong YANG Jiahai
    2014, 34(11):  3082-3085.  DOI: 10.11772/j.issn.1001-9081.2014.11.3082

    As conventional physical cluster systems fail to cope flexibly with large-scale Internet applications, a comprehensive load balancing mechanism for a cloud-based virtual cluster system was proposed. It periodically collected the CPU and memory usage, number of connections and response time of all virtual machines and physical hosts, then calculated the weighted load of the physical hosts, and finally scheduled and assigned task requests based on the calculated comprehensive load, so it could adapt to a complex, dynamic and variable computing environment. The experimental results show that, compared with other scheduling mechanisms such as Weighted Round Robin (WRR) and Weighted Least Connections (WLC), the proposed mechanism is delay-optimal under heavy workload; moreover, it can increase or decrease the number of Virtual Machines (VMs) dynamically to balance the server load, usually within 5 seconds.
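
    A minimal sketch of the weighted-load calculation and host selection; the weight vector below is an illustrative assumption, not the paper's calibrated values:

        def weighted_load(m, w=(0.4, 0.3, 0.2, 0.1)):
            """Combine CPU, memory, connection and response-time metrics (all in [0,1])."""
            return sum(wi * xi for wi, xi in zip(w, (m["cpu"], m["mem"], m["conn"], m["rt"])))

        def pick_host(hosts):
            """Dispatch the next request to the host with the lowest comprehensive load."""
            return min(hosts, key=lambda h: weighted_load(h["metrics"]))

        hosts = [
            {"name": "vm1", "metrics": {"cpu": 0.9, "mem": 0.7, "conn": 0.8, "rt": 0.6}},
            {"name": "vm2", "metrics": {"cpu": 0.3, "mem": 0.4, "conn": 0.2, "rt": 0.1}},
        ]
        print(pick_host(hosts)["name"])   # vm2, the lightly loaded machine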

    Performance tests and analysis of distributed evolutionary algorithms
    CHEN Bingliang ZHANG Yuhui JI Zhiyuan
    2014, 34(11):  3086-3090.  DOI: 10.11772/j.issn.1001-9081.2014.11.3086

    Due to the lack of performance analysis when designing a distributed Evolutionary Algorithm (dEA), the designed algorithm may not reach the expected speedup. To solve this problem, a comprehensive performance analysis method was proposed. According to the components of dEAs, the factors that influence their performance can be divided into three parts: evolutionary cost, fitness evaluation cost and communication cost. Firstly, the behavior of the evolutionary cost under different individual encoding lengths was studied. Then, keeping the evolutionary cost unchanged, the fitness evaluation cost was controlled by using the delay function of the operating system, and the communication cost was controlled by changing the length of the individual encoding. Finally, the effect of each factor was tested through the control variable method. The experimental results reveal the constraint relations among the three factors and point out the necessary conditions for speeding up dEAs.

    Optimization of small files storage and accessing on Hadoop distributed file system
    LI Tie YAN Cairong HUANG Yongfeng SONG Yalong
    2014, 34(11):  3091-3095.  DOI: 10.11772/j.issn.1001-9081.2014.11.3091

    In order to improve the efficiency of processing small files in Hadoop Distributed File System (HDFS), a new approach named SmartFS was proposed. By analyzing the file access log to obtain users' access behavior, SmartFS established a probability model of file associations. Guided by this model, a merging algorithm merged the relevant small files into large files that were stored on HDFS. When a file was accessed, SmartFS prefetched the related files according to a prefetching algorithm to accelerate access. To guarantee enough cache space, a cache replacement algorithm was put forward. The experimental results show that SmartFS saves metadata space on the HDFS NameNode, reduces the interaction between users and HDFS, and improves the efficiency of storing and accessing small files on HDFS.
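
    A sketch of how such an association model might be estimated from an access log (the window size and threshold are assumptions; SmartFS's actual probability model may differ):

        from collections import Counter

        def association_model(access_log, window=3):
            """Estimate P(B is accessed soon after A) within a sliding window."""
            pair, single = Counter(), Counter()
            for i, a in enumerate(access_log):
                single[a] += 1
                for b in access_log[i + 1:i + window]:
                    if b != a:
                        pair[(a, b)] += 1
            return {ab: n / single[ab[0]] for ab, n in pair.items()}

        def prefetch(current, model, threshold=0.5):
            """Files whose follow probability exceeds the threshold get prefetched."""
            return [b for (a, b), p in model.items() if a == current and p >= threshold]

        log = ["f1", "f2", "f1", "f2", "f3", "f1", "f2"]
        model = association_model(log)
        print(prefetch("f1", model))   # ['f2']: f2 usually follows f1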

    Parallel framework based on aspect-oriented programming and run-time reflection
    ZHANG Yang ZHANG Dongwen WANG Yizhuo
    2014, 34(11):  3096-3099.  DOI: 10.11772/j.issn.1001-9081.2014.11.3096

    JOMP, the OpenMP-like implementation in Java, needs optimization, so a parallel framework that separates parallel logic from functional logic was proposed. The framework was implemented as a parallel library named waxberry; the parts that need to run in parallel were annotated and executed using Aspect-Oriented Programming (AOP) and run-time reflection. AOP was used to separate the parallel parts from the core ones and to weave them together, while run-time reflection was used to obtain the related information during parallel execution. The library waxberry was evaluated using the Java Grande Forum (JGF) benchmarks on a quad-core processor. The experimental results show that waxberry obtains good performance.

    Composite model of manufacturing cloud service based on business process
    ZHAO Qiuyun WEI Le SHU Hongping
    2014, 34(11):  3100-3103.  DOI: 10.11772/j.issn.1001-9081.2014.11.3100

    Based on the business process, a composite model of manufacturing cloud services was proposed to improve the success rate of manufacturing cloud service composition and to match the composite cloud service with the user's business demand correctly. As its foundation, formal descriptions were given for concepts such as the manufacturing cloud service, the process node task, service composability and process matching. The model consisted of the business process engine, the business process, the selection logic, the evaluation logic, the monitoring logic, the knowledge base and the atomic cloud service set. With function matching, the composability of candidate services was checked; the load, the Quality of Service (QoS) and the business process information were also considered. Suitable cloud services were then selected and integrated into the business process to form the composite manufacturing cloud service. The process of service composition was described in detail, and a realization method was offered. The composite service model was verified through an example. The results prove that valid cloud service entities can be selected and combined effectively with the model, the success rate of service composition is raised, and users' manufacturing activities can be carried out smoothly.

    Optimal storing strategy based on small files in RAMCloud
    YING Changtian YU Jiong LU Liang LIU Jiankuang
    2014, 34(11):  3104-3108.  DOI: 10.11772/j.issn.1001-9081.2014.11.3104

    RAMCloud stores data using a log segment structure. When a large number of small files are stored in RAMCloud, each small file occupies a whole segment, which may lead to many fragments inside the segments and low memory utilization. In order to solve this small-file problem, a strategy based on file classification was proposed to optimize the storage of small files. Small files were classified into three categories: structurally related, logically related and independent files. Before uploading, a merging algorithm and a grouping algorithm were used to deal with these files respectively. The experiment demonstrates that, compared with non-optimized RAMCloud, the proposed strategy improves memory utilization.

    Cost optimization-oriented caching strategy for cloud storage
    TANG Bing ZHANG Li
    2014, 34(11):  3109-3111.  DOI: 10.11772/j.issn.1001-9081.2014.11.3109

    A cost-optimization-oriented caching strategy for cloud storage was proposed to increase the access rate and reduce cost. Nearly-free desktop machines in a local area network were utilized to build a local distributed file system, which was deployed as a data cache for the remote cloud storage service. When a user read a file, the system first determined whether the file was in the cache; if so, the file was read directly from the cache, otherwise it was read from the remote cloud storage service. The Least Recently Used (LRU) algorithm was used for cache replacement, evicting cold data. A simple performance evaluation of the prototype system was carried out using Amazon Simple Storage Service (S3) as the remote cloud storage service. The results show that the proposed caching strategy not only reduces cost but also greatly improves file reading speed.
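
    A minimal sketch of the read path with LRU replacement; the fetch_remote callback stands in for the Amazon S3 read and is a placeholder, not the prototype's code:

        from collections import OrderedDict

        class LRUCache:
            """Local cache in front of remote cloud storage with LRU eviction."""
            def __init__(self, capacity, fetch_remote):
                self.capacity, self.fetch_remote = capacity, fetch_remote
                self.store = OrderedDict()            # key -> file content

            def read(self, key):
                if key in self.store:                 # cache hit: serve locally
                    self.store.move_to_end(key)
                    return self.store[key]
                data = self.fetch_remote(key)         # cache miss: go to cloud storage
                self.store[key] = data
                if len(self.store) > self.capacity:   # evict the least recently used
                    self.store.popitem(last=False)
                return data

        cache = LRUCache(2, fetch_remote=lambda k: f"<content of {k}>")
        cache.read("a.txt"); cache.read("b.txt"); cache.read("a.txt")
        cache.read("c.txt")                           # evicts b.txt, the coldest entry
        print(list(cache.store))                      # ['a.txt', 'c.txt']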

    GPU-accelerated segmented Top-k query algorithm
    HUANG Yulong ZOU Xunjin LIU Kui SU Benyue
    2014, 34(11):  3112-3116.  DOI: 10.11772/j.issn.1001-9081.2014.11.3112

    Existing Top-k query algorithms cannot make full use of the powerful parallel throughput of the Graphics Processing Unit (GPU) to return query results in a timely manner, so a segmented query algorithm based on the Compute Unified Device Architecture (CUDA) model was proposed. By dividing the query process and applying a segmented parallel processing strategy, the algorithm maximizes calculation and comparison efficiency during the query. The experimental results show that this algorithm has obvious performance advantages over a four-thread parallel optimization algorithm on a multi-core CPU. When the number of ordered lists is 6 and the traversal stride is 120, the optimal performance is obtained, which is 40 times faster than the multi-core CPU algorithm.
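
    The segmentation idea can be illustrated on the CPU (a sketch standing in for the CUDA kernels): each segment yields local top-k candidates independently, which is what the GPU threads exploit, and a cheap final merge keeps the global top k:

        import heapq

        def segmented_topk(values, k, stride):
            """Top-k by segments: each segment's local top-k is independent
            (one GPU thread block per segment in the paper's CUDA setting);
            the partial results are then merged into the global top-k."""
            partial = []
            for start in range(0, len(values), stride):
                segment = values[start:start + stride]
                partial.extend(heapq.nlargest(k, segment))   # independent per segment
            return heapq.nlargest(k, partial)                # cheap final merge

        data = [7, 42, 3, 19, 88, 5, 61, 23, 14, 90, 2, 55]
        print(segmented_topk(data, k=3, stride=4))           # [90, 88, 61]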

    Parallel alignment algorithm of large scale biological networks based on message passing interface
    SHU Junhui ZHANG Wu XUE Qianfei XIE Jiang
    2014, 34(11):  3117-3120.  DOI: 10.11772/j.issn.1001-9081.2014.11.3117

    In order to reduce the time complexity of biological network alignment, an implementation of large-scale biological network alignment based on Scalable Protein Interaction Network Alignment (SPINAL) was proposed as a Message Passing Interface (MPI) program. Based on MPI, the SPINAL algorithm was combined with a parallelization method: instead of serial sorting, a parallel sorting algorithm was used in the multi-core environment, and a load balancing strategy was chosen to assign tasks reasonably. Experiments on aligning large-scale biological networks show that, compared with the algorithm without parallelization and load balancing, the proposed algorithm effectively reduces the runtime and improves computation efficiency.
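
    A minimal mpi4py sketch of the task-distribution pattern (the scoring function is a placeholder; the actual SPINAL computation is not reproduced):

        # Run with: mpiexec -n 4 python align_mpi.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        def align_score(pair):                 # placeholder for the SPINAL scoring step
            u, v = pair
            return (u, v, abs(u - v))

        if rank == 0:                          # master: split node pairs into chunks
            pairs = [(u, v) for u in range(8) for v in range(8)]
            chunks = [pairs[i::size] for i in range(size)]   # round-robin load balancing
        else:
            chunks = None

        local = comm.scatter(chunks, root=0)   # each process receives one chunk
        local_scores = [align_score(p) for p in local]
        all_scores = comm.gather(local_scores, root=0)

        if rank == 0:
            merged = [s for part in all_scores for s in part]
            print(len(merged), "pair scores computed")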

    Parallel face contour extraction algorithm based on Chan-Vese model on multi-core CPU and GPU
    WANG Lina SHI Xiaohua
    2014, 34(11):  3121-3125.  DOI: 10.11772/j.issn.1001-9081.2014.11.3121

    Aiming at the problem that the Chan-Vese model for face contour extraction involves a large amount of computation and slow segmentation, a parallel algorithm was proposed for the Graphics Processing Unit (GPU) and multi-core CPU using the Open Computing Language (OpenCL) parallel programming model. Firstly, by restructuring the computing architecture, the algorithm eliminated the data dependences of the model; secondly, the algorithm was parallelized and optimized with OpenCL. The results show that, compared with the single-thread method, the proposed algorithm achieves higher average speedup ratios on an NVIDIA GTX660 and an AMD FX-8530.

    Query algorithm based on mesh structure in large-scale smart grid
    WANG Yan HAO Xiuping SONG Baoyan LI Xuecheng XING Zengwei
    2014, 34(11):  3126-3130.  DOI: 10.11772/j.issn.1001-9081.2014.11.3126

    Currently, queries in transmission line monitoring systems for the smart grid are mostly global queries over the Wireless Sensor Network (WSN), which cannot satisfy flexible and efficient query requirements over arbitrary areas. The layout and query characteristics of the network were analyzed in detail, and a query algorithm based on a mesh structure in large-scale smart grids, named MSQuery, was proposed. MSQuery aggregated the data of query nodes within different grids into one or more logical query trees, and an optimized path for collecting query results was built by a merging strategy over the logical query trees. Experiments compared MSQuery with RSA, which uses a routing structure for querying, and SkySensor, which uses a cluster structure for querying. The simulation results show that MSQuery can quickly return query results in the query window, reduce communication cost, and save the energy of sensor nodes.

    Data crawler for Sina Weibo based on Python
    ZHOU Zhonghua ZHANG Huiran XIE Jiang
    2014, 34(11):  3131-3134.  DOI: 10.11772/j.issn.1001-9081.2014.11.3131

    Nowadays, most research on social networks uses data from foreign social network platforms, while Sina Weibo, the largest social network platform in China, provides no data interface for researchers. A Sina Weibo data crawler combined with parallelization technology was therefore put forward. It collects follower information and microblog content of different Weibo users in real time, and supports keyword matching and parallelization. The serial crawler and its parallel version were compared, and an experiment about flu was conducted on the collected Weibo data. The results indicate that, with parallelization, the tool achieves linear speedup, and the fetched data are timely and accurate.
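
    A hedged sketch of the parallel fetch pattern; the endpoint URL and JSON fields are hypothetical, since Sina Weibo exposes no public data interface, which is exactly the gap the crawler works around:

        import multiprocessing
        import requests

        API = "https://example.com/weibo/user/{uid}/statuses"   # hypothetical endpoint

        def fetch_user(uid, keyword=None):
            """Fetch one user's posts and optionally keep only keyword matches."""
            try:
                resp = requests.get(API.format(uid=uid), timeout=10)
                posts = resp.json().get("statuses", [])         # hypothetical field
            except (requests.RequestException, ValueError):
                posts = []                                      # endpoint is a placeholder
            if keyword:
                posts = [p for p in posts if keyword in p.get("text", "")]
            return uid, posts

        if __name__ == "__main__":
            uids = [1001, 1002, 1003, 1004]
            with multiprocessing.Pool(processes=4) as pool:     # parallel version
                results = pool.starmap(fetch_user, [(u, "flu") for u in uids])
            for uid, posts in results:
                print(uid, len(posts))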

    Personalization recommendation algorithm for Web resources based on ontology
    LIANG Junjie LIU Qiongni YU Dunhui
    2014, 34(11):  3135-3139.  DOI: 10.11772/j.issn.1001-9081.2014.11.3135

    To improve the accuracy of Web resource recommendation, a personalized recommendation algorithm based on ontology, named BO-RM, was proposed. Subject extraction and similarity measurement methods were designed, and ontology semantics were used to cluster Web resources. As a user's browsing tracks were captured, the preference tendency and recommendations were adjusted dynamically. Comparison experiments with a situation-based collaborative filtering algorithm (CFR-RM) and a model-based personalized prediction algorithm were conducted. The results show that BO-RM has relatively stable overhead and good performance in Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP). They also show that BO-RM improves efficiency by analyzing large Web resources offline, which makes it practical; in addition, it captures users' interests in real time to update the recommendation list dynamically, which meets the real needs of users.

    Collaborative filtering recommendation based on number of common items and common rating interest of users
    WANG Xuexia LI Qing LI Jihong
    2014, 34(11):  3140-3143.  DOI: 10.11772/j.issn.1001-9081.2014.11.3140

    In order to reduce the negative impact of sparse data, a new collaborative filtering recommendation algorithm was put forward based on the number of common rating items among users and the similarity of user interests. The similarity calculation was made more credible by combining the number of common rating items with the similarity of user interests, so as to provide better recommendations for the target user. Compared with the method based on Pearson similarity, the new algorithm produces better recommendations with a smaller Mean Absolute Error (MAE); it is thus effective and feasible.
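
    One common way to realize such a combination (a sketch assuming the weighting factor is the share of co-rated items; the paper's exact formula may differ):

        from math import sqrt

        def pearson(ru, rv, common):
            au = sum(ru[i] for i in common) / len(common)
            av = sum(rv[i] for i in common) / len(common)
            num = sum((ru[i] - au) * (rv[i] - av) for i in common)
            den = sqrt(sum((ru[i] - au) ** 2 for i in common)) * \
                  sqrt(sum((rv[i] - av) ** 2 for i in common))
            return num / den if den else 0.0

        def similarity(ru, rv):
            """Pearson similarity damped by the number of co-rated items,
            so users sharing few items cannot look deceptively similar."""
            common = set(ru) & set(rv)
            if len(common) < 2:
                return 0.0
            weight = 2 * len(common) / (len(ru) + len(rv))   # assumed weighting factor
            return weight * pearson(ru, rv, common)

        u = {"i1": 5, "i2": 3, "i3": 4, "i4": 1}
        v = {"i1": 4, "i2": 2, "i3": 5}
        print(round(similarity(u, v), 3))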

    Web text clustering method based on topic
    ZHANG Wanshan XIAO Yao LIANG Junjie YU Dunhui
    2014, 34(11):  3144-3146.  DOI: 10.11772/j.issn.1001-9081.2014.11.3144

    Since traditional Web text clustering algorithms do not consider topic information, they achieve low accuracy when clustering multi-topic Web texts; therefore, a new topic-based Web text clustering algorithm was proposed. In this method, multi-topic Web texts were clustered in three steps: topic extraction, feature extraction and text clustering. Compared with traditional Web text clustering algorithms, the proposed method fully considers the topic information of Web texts. The experimental results show that the accuracy of the proposed algorithm on multi-topic Web texts is higher than that of text clustering methods based on K-means or HowNet.

    Incentive scheme of auction algorithm based on the discriminatory second price
    SONG Wei YU Qiang PENG Jun SUN Qingzhong
    2014, 34(11):  3147-3151.  DOI: 10.11772/j.issn.1001-9081.2014.11.3147

    In real-time big-data Peer-to-Peer (P2P) applications, to avoid free-riding behavior in Video on Demand (VOD) systems, a new incentive scheme based on a discriminatory second-price auction was proposed. Nodes obtained the video data blocks they needed through distributed dynamic auctions among nodes. In an auction, a bidding node first determined, based on the discrimination rule, whether its budget was enough to bid, and set its upload bandwidth according to the number of bidding nodes. The winner was then determined by the bid price. Finally, after receiving the data block, the winning node paid the auctioning node the second-highest price, which became the latter's income. Analysis of node revenue, bandwidth utilization and the proportion of selfless and selfish nodes indicates that the proposed scheme can effectively motivate nodes to take an active part in sharing video data blocks while making efficient use of the upload bandwidth.
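
    A minimal sketch of one auction round under the discriminatory second-price rule: budget screening first, the highest bid wins, and the winner pays the second-highest price (all values are toy numbers):

        def auction_round(bids, budgets, reserve=0):
            """bids/budgets: node -> value. Returns (winner, price_paid) or None."""
            qualified = {n: b for n, b in bids.items()
                         if budgets[n] >= b and b > reserve}   # discrimination rule
            if len(qualified) < 2:
                return None                                    # need a second price
            ranked = sorted(qualified, key=qualified.get, reverse=True)
            winner, runner_up = ranked[0], ranked[1]
            return winner, qualified[runner_up]                # pay second-highest bid

        bids = {"n1": 8, "n2": 5, "n3": 11}
        budgets = {"n1": 10, "n2": 9, "n3": 6}                 # n3 cannot cover its bid
        print(auction_round(bids, budgets))                    # ('n1', 5)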

    Interference awareness based multicast scheme in wireless Ad Hoc networks
    TAN Guoping FENG Fei PENG Xinhua JU Meiyan
    2014, 34(11):  3152-3156.  DOI: 10.11772/j.issn.1001-9081.2014.11.3152

    For multicast scenarios using network coding in Wireless Ad Hoc NETworks (WANET), a routing metric measuring path interference was proposed to overcome the impact of interference on overall multicast performance when the node density is large. Using this metric, a Partial Network Coding based Interference Aware (PNCIA) multicast routing scheme was built to trade off network coding opportunities against interference avoidance among nodes. The simulation shows that the proposed scheme outperforms traditional network-coding-based multicast schemes in terms of energy consumption, delay and throughput, so it is especially suitable for scenarios with large node density.

    Multi-dimensional trust management mechanism for peer-to-peer networks
    ZHAO Yuan LU Tianbo
    2014, 34(11):  3157-3159.  DOI: 10.11772/j.issn.1001-9081.2014.11.3157

    Concerning trust management in Peer-to-Peer (P2P) networks, a multi-dimensional trust management mechanism was put forward. It utilized direct and indirect trust evaluation methods to determine the trust degree of users in the system according to each user's behavior, so as to prevent the negative impact of malicious users' feedback on the network. Compared with EigenTrust under bad-mouthing and on-off attack scenarios, the proposed method achieves a higher Success Rate of Transmission (SRT). The simulation results show that the mechanism can effectively inhibit malicious user behavior.

    Evolutionary game theory based media access control protocol in underwater acoustic sensor networks
    XU Ming LIU Guangzhong SUN Wei
    2014, 34(11):  3160-3163.  DOI: 10.11772/j.issn.1001-9081.2014.11.3160

    In order to decrease the influence of low bandwidth and high latency on the Media Access Control (MAC) layer in Underwater Acoustic Sensor Networks (UWASN), an Evolutionary Game Theory based MAC (EGT-MAC) protocol was proposed. In EGT-MAC, each sensor node adopted two strategies: spatial multiplexing and temporal multiplexing. Through the replicator dynamics equation, each strategy converged to an evolutionarily stable strategy and reached a stable evolutionary equilibrium. In this way, the protocol improved channel utilization and data transmission efficiency to optimize MAC performance. The simulation results show that EGT-MAC improves network throughput as well as the transmission rate of data packets.
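
    For concreteness, a discrete-time replicator-dynamics update for the two strategies; the payoff matrix is an illustrative assumption:

        def replicator_step(x, payoff, dt=0.1):
            """x: fraction playing spatial multiplexing; payoff[i][j] is the payoff
            of strategy i against strategy j. dx/dt = x(1-x)(f_spatial - f_temporal)."""
            f_s = x * payoff[0][0] + (1 - x) * payoff[0][1]
            f_t = x * payoff[1][0] + (1 - x) * payoff[1][1]
            return x + dt * x * (1 - x) * (f_s - f_t)

        payoff = [[1.0, 3.0],      # illustrative: mixing the two strategies pays off
                  [3.5, 0.5]]
        x = 0.9
        for _ in range(200):       # iterate until the population stabilizes
            x = replicator_step(x, payoff)
        print(round(x, 3))         # 0.5: the interior evolutionarily stable point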

    Energy-balanced unequal clustering routing protocol based on game theory for wireless sensor networks
    SUN Qingzhong YU Qiang SONG Wei
    2014, 34(11):  3164-3169.  DOI: 10.11772/j.issn.1001-9081.2014.11.3164

    In Wireless Sensor Network (WSN) clustering routing algorithms, imbalanced sensor energy consumption results in the "energy hole" phenomenon, which affects the network lifetime. For this problem, an energy-balanced unequal clustering routing protocol based on game theory, named GBUC, was put forward. In the clustering stage, the WSN was divided into clusters of different sizes, with each cluster radius determined by the distance from the cluster head to the sink node and by the residual energy, so that energy balance was achieved by adjusting the cluster heads' energy consumption for intra-cluster communication and data forwarding. In the inter-cluster communication phase, a game model was established using residual energy efficiency and link reliability as the benefit functions, and its Nash equilibrium solution gave a transmission path that jointly optimizes energy balance and link reliability, thereby improving network performance. The simulation results show that, compared with the Energy-Efficient Uneven Clustering (EEUC) algorithm and the Unequal Clustering Energy-Economical Routing (UCEER) algorithm, GBUC significantly improves performance in balancing node energy consumption and prolonging the network lifetime.
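
    An illustrative form of the unequal competition radius, modeled after the EEUC-style rule the paper compares against (the abstract does not give GBUC's exact weights, so c1 and c2 are assumptions):

        def competition_radius(d, d_min, d_max, e_res, e_max, r0=90, c1=0.3, c2=0.2):
            """Cluster radius shrinks near the sink (to leave energy for relaying)
            and shrinks further for heads with little residual energy."""
            distance_term = c1 * (d_max - d) / (d_max - d_min)
            energy_term = c2 * (1 - e_res / e_max)
            return (1 - distance_term - energy_term) * r0

        # A head close to the sink with half its energy left gets a small cluster:
        print(round(competition_radius(d=60, d_min=50, d_max=200, e_res=0.5, e_max=1.0), 1))
        # A far head with full energy keeps the full radius r0:
        print(round(competition_radius(d=200, d_min=50, d_max=200, e_res=1.0, e_max=1.0), 1))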

    Improved RFID indoor positioning algorithm based on reference tags' credibility and deviation correction
    WANG Dong GE Wancheng MO Guomin WANG Yunguang
    2014, 34(11):  3170-3172.  DOI: 10.11772/j.issn.1001-9081.2014.11.3170

    After analyzing LANDMARC, a classical Radio Frequency Identification (RFID) indoor positioning system, an improved RFID indoor positioning algorithm based on reference tags' credibility and deviation correction was proposed to enhance positioning accuracy. The algorithm introduced reference tags to assist the positioning process. After checking the credibility of all nearest-neighbor reference tags, a small number of incredible tags were discarded. The system then performed positioning deviation correction with the finally selected nearest-neighbor reference tags, and at last calculated the position of the tag to be positioned. The experimental results show that, compared with the original LANDMARC, this algorithm improves positioning accuracy and is suitable for indoor object positioning applications.
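
    For reference, the classical LANDMARC weighting that the improvement builds on (only the baseline k-nearest-neighbor estimate is sketched; the credibility filtering and deviation correction are the paper's additions):

        def landmarc_estimate(target_rss, reference_tags, k=4):
            """reference_tags: list of (position, rss_vector) with known coordinates.
            Weight each of the k nearest neighbors by 1/E^2, E being the Euclidean
            distance between RSS vectors, then take the weighted centroid."""
            def rss_distance(rss):
                return sum((a - b) ** 2 for a, b in zip(target_rss, rss)) ** 0.5
            nearest = sorted(reference_tags, key=lambda t: rss_distance(t[1]))[:k]
            weights = [1.0 / (rss_distance(rss) ** 2 + 1e-9) for _, rss in nearest]
            total = sum(weights)
            x = sum(w * p[0] for w, (p, _) in zip(weights, nearest)) / total
            y = sum(w * p[1] for w, (p, _) in zip(weights, nearest)) / total
            return x, y

        tags = [((0, 0), [-40, -55]), ((0, 2), [-42, -50]),
                ((2, 0), [-35, -60]), ((2, 2), [-38, -52])]
        print(landmarc_estimate([-39, -53], tags, k=3))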

    Frequency modulation location method based on dynamic radio map
    HE Yanjun LUO Haiyong DAI Yong CHEN Zili
    2014, 34(11):  3173-3176.  DOI: 10.11772/j.issn.1001-9081.2014.11.3173

    In view of the fact that dynamic changes of Frequency Modulation (FM) radio signals greatly influence positioning performance, a localization method based on a dynamic radio map was proposed. During the offline phase, the relationship between the radio signal strength at a few calibration points and at each reference point was established by multiple linear regression and by a neural network; during the online phase, the real-time radio signal strength values at all reference points were predicted from the radio signal strength collected at the calibration points in real time. A dynamic radio map with adaptive ability was thus established with these two methods, and Bayesian estimation was used to locate the target. The experimental results show that, compared with a static radio map, the multiple linear regression model reduces the deviation by 9.1% and the neural network model by 36.3%, which restrains the negative impact of the radio signal's time-varying nature on positioning performance.
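
    A sketch of the offline regression step using ordinary least squares (numpy's lstsq stands in for the paper's multiple linear regression; the neural-network variant is not reproduced):

        import numpy as np

        # Training: RSS at 2 calibration points (columns) observed over 6 time slots,
        # paired with the RSS measured at one reference point at the same moments.
        calib = np.array([[-50, -61], [-52, -63], [-55, -66],
                          [-48, -59], [-53, -64], [-51, -62]], dtype=float)
        ref = np.array([-70, -72, -75, -68, -73, -71], dtype=float)

        X = np.hstack([calib, np.ones((len(calib), 1))])    # add intercept column
        coef, *_ = np.linalg.lstsq(X, ref, rcond=None)

        # Online: predict the reference point's current RSS from a fresh calibration
        # reading, keeping the radio map up to date without re-surveying every point.
        fresh = np.array([-49, -60, 1.0])
        print(float(fresh @ coef))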

    DNA image encryption algorithm based on chaotic system
    XU Guangxian GUO Xiaojuan
    2014, 34(11):  3177-3179.  DOI: 10.11772/j.issn.1001-9081.2014.11.3177

    In order to solve the problems of scheme complexity and poor security in digital image encryption algorithms, a DNA-fusion image encryption algorithm based on a chaotic system was proposed. Firstly, the image was scrambled by the Baker transform and encoded as a DNA sequence. Then, the Logistic map was used to generate a chaotic sequence, with which the DNA sequence was finally encrypted. The method has good sensitivity to initial values and strong resistance to statistical and differential attacks. The simulation results show that the algorithm is not only simple but also achieves a good encryption effect and high security.
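
    As a minimal illustration of the chaotic keystream idea (Logistic map only; the Baker scrambling and DNA coding steps are omitted):

        def logistic_keystream(n, x0=0.3141592, mu=3.99):
            """Generate n chaotic bytes from the Logistic map x <- mu*x*(1-x)."""
            x, out = x0, []
            for _ in range(n):
                x = mu * x * (1 - x)
                out.append(int(x * 256) % 256)
            return bytes(out)

        def xor_cipher(data, key_seed):
            ks = logistic_keystream(len(data), x0=key_seed)
            return bytes(a ^ b for a, b in zip(data, ks))

        plain = b"pixel data of the image"
        cipher = xor_cipher(plain, 0.7071067)
        print(xor_cipher(cipher, 0.7071067) == plain)   # True: XOR is its own inverse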

    Selection-free judgment for productions based on edge-based context-sensitive graph grammar
    WANG Yi DING Han
    2014, 34(11):  3180-3183.  DOI: 10.11772/j.issn.1001-9081.2014.11.3180

    In order to reduce the time complexity of the parsing algorithm for the Edge-based context-sensitive Graph Grammar (EGG), a selection-free judgment method was proposed for EGG productions under appropriate constraints. The method can effectively judge the selection independence of EGG productions. For selection-free productions, the order in which productions are applied does not affect the parsing result, so the parsing process can avoid backtracking and the time complexity of the parsing algorithm can be effectively reduced.

    Network and communications
    Effects analysis of network evolution speed on propagation in temporal networks
    ZHU Yixin ZHANG Fengli QIN Zhiguang
    2014, 34(11):  3184-3187.  DOI: 10.11772/j.issn.1001-9081.2014.11.3184

    An index of network evolution speed and a network evolution model were put forward to analyze the effects of evolution speed on propagation. The definition of the temporal correlation coefficient was modified to characterize the speed of network evolution, and a non-Markov model of temporal networks was proposed: at each time step, every active node connected to a random node from the whole network with probability r, and to a random node from its former neighbors with probability 1-r. The simulation results confirm a monotonically increasing relationship between the model parameter r and the network evolution speed; meanwhile, the greater the value of r, the larger the scope of spreading on the network. This means that temporal networks with high evolution speed are conducive to spreading: a rapidly changing network topology favors the rapid spread of information, but does not favor the suppression of virus propagation.
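
    The rewiring rule of the model can be stated in a few lines (a sketch of the abstract's non-Markov model; network size and step count are toy parameters):

        import random

        def evolve(num_nodes=100, steps=50, r=0.3, seed=1):
            """At each step every active node links to a uniformly random node with
            probability r, and to one of its former neighbors with probability 1-r,
            so small r means strong memory and slow network evolution."""
            random.seed(seed)
            former = {v: set() for v in range(num_nodes)}
            snapshots = []
            for _ in range(steps):
                edges = set()
                for v in range(num_nodes):
                    if random.random() < r or not former[v]:
                        u = random.choice([u for u in range(num_nodes) if u != v])
                    else:
                        u = random.choice(sorted(former[v]))
                    edges.add((min(u, v), max(u, v)))
                    former[v].add(u); former[u].add(v)
                snapshots.append(edges)
            return snapshots

        snaps = evolve()
        overlap = len(snaps[-1] & snaps[-2]) / len(snaps[-1])
        print(round(overlap, 2))    # higher overlap = slower evolution (smaller r)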

    Address resolution protocol proxy mechanism in hybrid environment of software-defined network and traditional network
    WANG Junjun MENG Xuhui WANG Jian
    2014, 34(11):  3188-3191.  DOI: 10.11772/j.issn.1001-9081.2014.11.3188

    To eliminate the common problem of flooding Address Resolution Protocol (ARP) messages in Ethernet, a new ARP proxy mechanism for the hybrid environment of Software-Defined Network (SDN) and traditional network was proposed. In this mechanism, the network-wide view of the SDN paradigm was used to register host information when hosts access the network, and to update the records in real time by keeping track of host dynamics and network failures, so that most ARP request messages could be answered directly by the controller. The evaluation results show that the proposed scheme retains auto-configuration, is transparent to hosts, is compatible with existing hardware without any changes, reduces network traffic and allows redundant links in the network, thereby improving the scalability of Ethernet.

    Priority-based device-to-device data forwarding strategy in cluster
    WANG Junyi GONG Zhishuai FU Jielin QIU Hongbing
    2014, 34(11):  3192-3195. 

    In cellular networks, users requesting the same data from the base station can be grouped into a cluster, and Device-to-Device (D2D) links can be utilized to improve data dissemination efficiency. However, the quality differences among D2D links may become a bottleneck that leads to conservative resource utilization. To solve this problem, a priority-based D2D relay scheme was proposed. The scheme first set an optimal threshold by traversing the D2D link quality matrix, and then preferentially selected the D2D links with high achievable data rates to forward the data. In the simulation experiment, compared with the traditional scheme without priority, the proposed scheme consumed less time and frequency resources and achieved higher efficiency, especially as the ratio of ACK users to NACK users decreased; the cluster size also influenced the improvement of resource efficiency. The results show that the proposed priority-based scheme improves resource efficiency when there is a small number of users, especially when there are more NACK users than ACK users.

    Medium access control protocol with network utility maximization and collision avoidance for wireless sensor networks
    LIU Tao LI Tianrui YIN Feng ZHANG Nan
    2014, 34(11):  3196-3200.  DOI: 10.11772/j.issn.1001-9081.2014.11.3196

    In order to avoid transmission collisions and improve energy efficiency in periodic-report Wireless Sensor Networks (WSN), a Medium Access Control (MAC) protocol with network utility maximization and collision avoidance, called UM-MAC, was proposed. UM-MAC used a Time Division Multiple Access (TDMA) scheduling mechanism and introduced a utility model into the slot assignment process. A utility maximization problem jointly optimizing link reliability and energy consumption was formulated, and a heuristic algorithm was proposed so that the network can quickly find a slot scheduling strategy that maximizes network utility and avoids transmission collisions. Comparison experiments among the UM-MAC, S-MAC and CA (Collision Avoidance)-MAC protocols were conducted on networks with different numbers of nodes: UM-MAC achieved larger network utility and a higher average packet delivery ratio, its lifetime fell between those of S-MAC and CA-MAC, and its average transmission delay increased under different loads. The simulation results show that UM-MAC achieves collision avoidance and improves network performance in terms of packet delivery ratio and energy efficiency; meanwhile, the TDMA-based protocol is not better than competition-based protocols in low-load networks.

    Design and implementation of high-speed network traffic capture system
    JIANG Lalin YANG Jiajia JIANG Lei TANG Qiu
    2014, 34(11):  3201-3205.  DOI: 10.11772/j.issn.1001-9081.2014.11.3201

    Since software-implemented network traffic capture systems cannot effectively cope with high-speed network traffic, and multiple network flows need to be captured simultaneously to improve capture efficiency, a high-speed network flow capture framework combining hardware and software was presented, and its implementation on NetFPGA-10G, called HSNTCS, was discussed. Various network flows were filtered and classified in hardware by an exact string matching engine and a regular expression matching engine. After being transmitted to the corresponding data buffers at the driver layer, the flows were copied directly to the corresponding databases in user space. The experiments show that the throughput of UDP (User Datagram Protocol) and TCP (Transmission Control Protocol) in the hardware-implemented system reached 1.2 Gb/s with exact string matching and 640 Mb/s with regular expression matching, in both cases about 3 times that of the software implementation. The results demonstrate that hardware-based capture outperforms the software-based method.

    Semi-supervised network traffic feature selection algorithm based on label extension
    LIN Rongqiang LI Qing LI Ou LI Linlin
    2014, 34(11):  3206-3209.  DOI: 10.11772/j.issn.1001-9081.2014.11.3206

    Aiming at the problem of sample labeling in network traffic feature selection, and at the deficiency that traditional semi-supervised methods cannot select a strongly correlated feature set, a Semi-supervised Feature Selection based on Extension of Label (SFSEL) algorithm was proposed. The model started from a small number of labeled samples, extended the labels of unlabeled samples with the K-means algorithm, and then combined the MDrSVM (Multi-class Doubly regularized Support Vector Machine) algorithm to achieve feature selection on multi-class network data. Comparison experiments with other semi-supervised algorithms, including Spectral, PCFRSC and SEFR, were conducted on the Moore network dataset: SFSEL achieved higher classification accuracy and recall with fewer selected features. The experimental results show that the proposed algorithm achieves better classification performance by selecting a strongly correlated feature set of network traffic.

    Transmission resource scheduling method for remote sensing images based on ant colony algorithm
    LIU Wanjun WANG Xiaoyu QU Chenghai MENG Yu JIANG Qingling
    2014, 34(11):  3210-3213.  DOI: 10.11772/j.issn.1001-9081.2014.11.3210

    Given the huge volume of remote sensing data and the heavy server load caused by concurrent multi-user requests, which decrease the transmission efficiency of remote sensing images, a block resource scheduling strategy for remote sensing images in a multi-line server environment was proposed. To improve transmission efficiency, an Improved Ant Colony Optimization (IACO) algorithm was used, which introduced a line waiting factor γ to dynamically select the optimal transmission lines. Comparison experiments among the IACO, Ant Colony Optimization (ACO), Max-min, Min-min and Random algorithms were conducted: IACO finished the client tasks and executed on the server in the shortest time, and the larger the number of tasks, the more obvious the effect; besides, the line resource utilization of IACO was the highest. The simulation results show that combining the multi-line server block scheduling strategy with the IACO algorithm raises the speed of remote sensing image transmission and the utilization of line resources to a certain degree.

    Nonlinear robust detection Kalman filter algorithm based on M-estimation
    LI Kailong HU Boqing GAO Jingdong FENG Guoli
    2014, 34(11):  3214-3217.  DOI: 10.11772/j.issn.1001-9081.2014.11.3214

    Aiming at the problem that traditional nonlinear robust filtering degrades severely when the distribution of the measurement noise deviates from the assumed Gaussian distribution, a new robust nonlinear Kalman filter based on M-estimation and a detection method was proposed. The algorithm set a threshold using the Chi-square test to delete mutant outliers, and modified the measurement update using M-estimation. Several conventional nonlinear filtering methods were evaluated under different measurement noises in terms of accuracy and stability. Under non-Gaussian noise and strong interference, the proposed algorithm outperforms the traditional robust algorithm, improving estimation accuracy by 25.5% and reducing the estimation covariance by 18.3%. The experimental results show that the proposed filter can suppress the influence of non-Gaussian noise and strong interference, and increase estimation accuracy and stability.

    Multi-dimensional cloud index based on KD-tree and R-tree
    HE Jing WU Yue YANG Fan YIN Chunlei ZHOU Wei
    2014, 34(11):  3218-3221.  DOI: 10.11772/j.issn.1001-9081.2014.11.3218

    Most existing cloud storage systems are based on the key-value model, which leads to a full dataset scan for multi-dimensional queries and low query efficiency. A KD-tree and R-tree based multi-dimensional cloud data index, named KD-R index, was proposed. The KD-R index adopted a two-layer architecture: a KD-tree based global index was built on the global server, and R-tree based local indexes were built on the local servers. A cost model was used to adaptively select appropriate R-tree nodes for publication into the global KD-tree index. The experimental results show that, compared with an R-tree based global index, the KD-R index is efficient for multi-dimensional range queries and remains highly available in the case of server failure.

    Two-phase placement algorithm with energy efficiency optimization for virtual machines based on data center
    ZHANG Xiaoqing HE Zhongtang
    2014, 34(11):  3222-3226.  DOI: 10.11772/j.issn.1001-9081.2014.11.3222

    In view of the high energy consumption of dynamic virtual machine placement in data centers, a two-phase energy-efficiency-optimized placement algorithm for virtual machines, named DVMP_VMMA, was proposed. The first phase was initial placement: a Dynamic Virtual Machine Placement (DVMP) algorithm was presented to limit the number of deployed hosts to the optimum, which reduced idle energy consumption. To respond to dynamic load changes, a Virtual Machine Migration Algorithm (VMMA) was put forward in the second phase to further optimize the initial placement under migration constraints through dynamic VM (Virtual Machine) migration, which lowered system energy consumption while ensuring the Quality of Service (QoS) of applications. Comparison experiments with FL (Full Load), FT (Fixed Threshold), MAD (Median Absolute Deviation), QD (Quartile Deviation), MTM (Migration Time Minimum) and MIU (Minimum Utilization) were conducted. The experimental results show that DVMP_VMMA considers energy optimization to increase resource utilization while avoiding frequent VM migration to preserve performance; it achieves good results in data center energy consumption, SLA (Service Level Agreement) violation, number of VM migrations and performance loss, and its comprehensive performance is better than that of the compared algorithms.

    Three-queue job scheduling algorithm based on Hadoop
    ZHU Jie ZHAO Hong LI Wenrui
    2014, 34(11):  3227-3230.  DOI: 10.11772/j.issn.1001-9081.2014.11.3227

    Single-queue job scheduling in a homogeneous Hadoop cluster causes short jobs to wait and lowers resource utilization; multi-queue scheduling algorithms solve the problems of unfairness and low execution efficiency, but most of them need manually set parameters, let queues compete with each other for resources, and are more complex. To resolve these problems, a three-queue scheduling algorithm was proposed. The algorithm used job classification, dynamic priority adjustment, a shared resource pool and job preemption to realize fairness, simplify the scheduling flow of normal jobs and improve concurrency. Comparison experiments with the First In First Out (FIFO) algorithm were conducted in three situations: a high percentage of short jobs, similar percentages of all job types, and mostly normal jobs with occasional long and short jobs. The proposed algorithm reduced the running time of jobs. The experimental results show that the improvement in execution efficiency is not obvious when short jobs dominate, but the gain is remarkable when the job types are balanced. This is consistent with the algorithm's design rules: prioritize short jobs, simplify the scheduling flow of normal jobs and take long jobs into account, thereby improving scheduling performance.

    Video-on-demand video stream scheduling policy based on ant colony optimization algorithm under cloud environment
    WANG Qingfeng LIU Zhiqing HUANG Jun WANG Yaobin
    2014, 34(11):  3231-3233.  DOI: 10.11772/j.issn.1001-9081.2014.11.3231

    Concerning the low resource utilization and load imbalance of large-scale concurrent video stream scheduling under cloud environments, a Video-on-Demand (VOD) scheduling policy based on the Ant Colony Optimization (ACO) algorithm, named VodAco, was proposed. The correlation between the expected performance of a video stream and the idle performance of a server was analyzed, a mathematical model was built on the definition of comprehensive matching degree, and the ACO method was adopted to search for the best scheduling scheme. Contrast experiments with Round Robin (RR) and greedy schemes were conducted on CloudSim. The experimental results show that the proposed policy has obvious advantages in task completion time, platform resource occupancy and node load balancing.

    Proportional extension of parallel computing under fixed structure constraint
    WU Canghai XIONG Huanliang JIANG Huowen YANG Wenji
    2014, 34(11):  3234-3240.  DOI: 10.11772/j.issn.1001-9081.2014.11.3234

    Aiming at the problem that the performance of parallel computing cannot be improved by extending its scale under a fixed-structure constraint, a method of proportionally adjusting graph weights was proposed to handle such extension. The method first investigated the factors in the architecture and the parallel tasks that affect scalability, then modeled the architecture and the parallel tasks as weighted graphs, and finally realized the extension of parallel computing by proportionally adjusting the weights of the vertices and edges in the graph models. The experimental results show that the proposed method can realize isospeed-efficiency extension of parallel computing under the fixed-structure constraint.

    Artificial intelligence
    Improved particle swarm optimization algorithm using mean information and elitist mutation
    LIN Guohan ZHANG Jing LIU Zhaohua
    2014, 34(11):  3241-3244.  DOI: 10.11772/j.issn.1001-9081.2014.11.3241

    Concerning that conventional Particle Swarm Optimization (PSO) is easily trapped in local optima and has low search efficiency in the later stage, an improved PSO based on mean information and elitist mutation, named MEPSO, was proposed. Average information of the swarm was introduced into MEPSO to improve global search ability, and the Time-Varying Acceleration Coefficient (TVAC) strategy was adopted to balance local and global search. In the later stage of the iteration, a Cauchy mutation was applied to the global best particle to further improve global search and reduce the risk of falling into local optima. Contrast experiments on six benchmark functions were conducted. Compared with basic PSO (BPSO), PSO with TVAC (PSO-TVAC), PSO with Time-Varying Inertia Weight factor (PSO-TVIW) and Hybrid PSO with Wavelet Mutation (HPSOWM), MEPSO achieved better mean values and standard variances with shorter optimization time and better reliability. The results show that MEPSO better balances local and global search, and converges faster with higher accuracy and efficiency.
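
    The elitist Cauchy mutation can be sketched in isolation (applied to the global best particle in the late iterations; the scale parameter is an assumption):

        import math
        import random

        def cauchy_mutate(gbest, bounds, scale=0.1, seed=None):
            """Perturb the global best with Cauchy noise: the heavy tails occasionally
            make long jumps, helping the swarm escape local optima late in the run."""
            rng = random.Random(seed)
            mutated = []
            for x, (lo, hi) in zip(gbest, bounds):
                step = scale * (hi - lo) * math.tan(math.pi * (rng.random() - 0.5))
                mutated.append(min(hi, max(lo, x + step)))     # clamp to the search box
            return mutated

        gbest = [0.52, -1.30]
        bounds = [(-5.0, 5.0), (-5.0, 5.0)]
        print(cauchy_mutate(gbest, bounds, seed=42))
        # In MEPSO the mutant replaces gbest only if it improves the fitness value.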

    Improved biogeography-based optimization algorithm using hybrid quasi-oppositional learning
    WANG Lei JIA Yanchi
    2014, 34(11):  3245-3249.  DOI: 10.11772/j.issn.1001-9081.2014.11.3245

    To deal with the poor exploration capability and slow convergence of the Biogeography-Based Optimization (BBO) algorithm, a hybrid quasi-oppositional learning based BBO algorithm named HQBBO was proposed. Firstly, the definition of the heuristic hybrid quasi-oppositional point was given and its advantage in search efficiency was proven theoretically. Then, a hybrid quasi-oppositional learning operator was introduced to enhance exploration and accelerate convergence; a dynamic scaling strategy for the search domain and an elitism preservation strategy were also utilized to further boost optimization efficiency. Simulation results on eight benchmark functions illustrate that the proposed algorithm outperforms the basic BBO algorithm and the oppositional BBO (OBBO) algorithm in convergence accuracy and speed, which verifies the effectiveness of the hybrid quasi-oppositional learning operator in improving convergence speed and global exploration.

    Evaluation of Agent coalition based on fuzzy soft set theory
    GUI Haixia ZHOU Huaping
    2014, 34(11):  3250-3253.  DOI: 10.11772/j.issn.1001-9081.2014.11.3250

    To handle the strong ambiguity and uncertainty of the factors that affect coalition efficacy in Multi-Agent Systems (MAS), fuzzy soft set theory was adopted to comprehensively evaluate Agent coalitions. First, the coalition to be evaluated presented its own attributes, and each expert gave an evaluation index set and the corresponding evaluation matrix according to their knowledge and experience. Then, fuzzy soft set theory was used to fuse the evaluation matrices and obtain the final evaluation result. Finally, a practical example shows that the method deals with ambiguous and uncertain information effectively and reasonably, and that the evaluation process accords with human thinking and judgment.

    Epidemic model with multiple infection stages on complex networks
    LIAO Liefa MENG Xiangmao
    2014, 34(11):  3254-3257.  DOI: 10.11772/j.issn.1001-9081.2014.11.3254

    To remedy the lack of multiple infection stages in existing epidemic propagation models, and drawing on the characteristics of the classical SIR and SEIR models, an improved SIR epidemic propagation model with multiple infection stages, named SInR, was put forward. The model considered how infection stages with non-uniform infectiousness affect epidemic spreading and the spreading threshold in different network structures; relative infectiousness and propagation time were introduced into the model, and simulations on network structure, network scale and relative infectiousness were carried out. In the simulations, scale-free and small-world networks were generated with the BA and WS model generation algorithms respectively, and infected nodes obeyed a Poisson distribution during infection; under the SInR model, propagation on scale-free networks was faster and spread more widely. There was a spreading threshold of relative infectiousness for a massive outbreak: when the relative infectiousness exceeded the threshold, the epidemic broke out over a wide range; otherwise, it spread only in a small local range until it disappeared. The threshold was 0.2 for scale-free networks and 0.24 for small-world networks. As the network scale increased, the propagation time scale increased and the propagation speed decreased. The simulation results show that under this model epidemics spread faster and influence a larger range on scale-free networks; in addition, the spreading threshold of relative infectiousness on scale-free networks is smaller than that on small-world networks, and setting a reasonable threshold helps reduce the influence of epidemic propagation.

    Reputation-based supply chain partner selection model with multi-attribute and multi-period
    LU Zhigang GUAN Wei
    2014, 34(11):  3258-3263.  DOI: 10.11772/j.issn.1001-9081.2014.11.3258

    In order to solve the low-trust problem in supply chain cooperation under decision conditions with unknown attribute and time weights, a reputation-based multi-attribute and multi-period supply chain partner selection model was proposed. Triangular fuzzy numbers were introduced to describe linguistic appraisal information. The attribute weights and the time weight of each period were determined by the grey correlations among attributes and by an improved time decay function respectively. A punishing variable-weight method was introduced into the time weighting, allowing the time weight of each period to vary with the reputation values. The experimental results show that the model can select preferable partners whose reputations are well balanced over different periods; it is flexible and can be adapted to different partner selection requirements.

    Improved player skill estimation algorithm by modeling first-move advantage
    WU Lin CHEN Lei YUAN Meiyu JIANG Hong
    2014, 34(11):  3264-3267.  DOI: 10.11772/j.issn.1001-9081.2014.11.3264

    Since traditional player skill estimation algorithms based on probabilistic graphical models neglect the first-move advantage (or home advantage), which harms estimation accuracy, a new method for modeling the first-move advantage was proposed. Based on the graphical model, first-move advantage nodes were introduced and added to players' skills. According to the game results, the true skills and first-move advantages of players were then calculated by Bayesian learning, and predictions for upcoming matches were made from the estimates. Two real-world datasets were used to compare the proposed method with the traditional model that neglects the first-move advantage. The results show that the proposed method noticeably improves average estimation accuracy.

    Classification method for interval uncertain data based on improved naive Bayes
    LI Wenjin XIONG Xiaofeng MAO Yimin
    2014, 34(11):  3268-3272.  DOI: 10.11772/j.issn.1001-9081.2014.11.3268
    Abstract ( )   PDF (711KB) ( )  
    References | Related Articles | Metrics

    Considering the high computational complexity and storage requirement of Naive Bayes (NB) based on Parzen Window Estimation (PWE), especially for classification of interval uncertain data, an improved method named IU-PNBC was proposed. Firstly, the Class-Conditional Probability Density Function (CCPDF) was estimated by PWE. Secondly, an approximate function for the CCPDF was obtained by algebraic interpolation. Finally, the posterior probability was computed from the approximate interpolation function and used for classification. Artificial simulation data and UCI standard datasets were used to verify the rationality of the proposed algorithm and the effect of the number of interpolation points on the classification accuracy of IU-PNBC. The experimental results show that the accuracy of IU-PNBC increases with the number of interpolation points and tends to be stable once more than 15 points are used, and that IU-PNBC avoids the dependence on the training samples and improves computational efficiency effectively. Thus, IU-PNBC is suitable for classification of large interval uncertain datasets, with lower computational complexity and storage requirement than NB based on Parzen window estimation.
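
    A minimal sketch of the idea, under assumed Gaussian kernels: estimate the class-conditional density with a Parzen window once, then replace it with a cheap interpolating approximation at query time. The 15 interpolation points echo the paper's setting; the polynomial form of the interpolant, the bandwidth and the data are assumptions.

```python
import numpy as np

def parzen_pdf(samples, h):
    samples = np.asarray(samples, dtype=float)
    def pdf(x):
        z = (x - samples[:, None]) / h
        return np.mean(np.exp(-0.5 * z**2), axis=0) / (h * np.sqrt(2 * np.pi))
    return pdf

train = np.random.normal(5.0, 1.5, size=500)       # one class's attribute values
pdf = parzen_pdf(train, h=0.4)

xs = np.linspace(train.min(), train.max(), 15)     # 15 interpolation points
coef = np.polyfit(xs, pdf(xs), deg=7)              # algebraic approximation
approx = np.poly1d(coef)

query = np.array([4.0, 5.5])
print(pdf(query), approx(query))                   # full vs. approximated density
```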

    Trust assessment method for scientific papers based on matching between title and its content
    YU Xuanxuan ZENG Guosun DING Chunling
    2014, 34(11):  3273-3278.  DOI: 10.11772/j.issn.1001-9081.2014.11.3273
    Abstract ( )   PDF (919KB) ( )  
    References | Related Articles | Metrics

    It is increasingly difficult to find valuable scientific papers accurately and efficiently on the Internet, so a new paper evaluation method based on the consistency between a title and its body text was proposed. First, the title and the body text were each modeled as feature vectors. Then a word-similarity technique was used to calculate the matching degree of each pair of feature words from the title vector and the text vector; a feature word pair was considered matched when its matching degree exceeded a given threshold. All matched pairs and their word weights were then aggregated to calculate the credibility of the title. Based on the hierarchical tree structure of the paper's headings, the similarity matching degrees of all headings and their corresponding text were calculated by a Depth-First Traversal (DFT) algorithm, and the credibility of the whole paper was evaluated. Case study results show that the proposed method can realize credibility assessment of scientific papers, making paper reading more efficient for readers.
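
    The matching step can be sketched as follows, assuming a word-similarity function is available; here a toy character-overlap (Jaccard) similarity stands in for the lexical-semantic measure the paper uses, and the 0.5 threshold and example vectors are illustrative.

```python
def word_sim(w1, w2):
    a, b = set(w1), set(w2)
    return len(a & b) / len(a | b)                  # toy Jaccard stand-in

def title_credibility(title_vec, text_vec, threshold=0.5):
    """title_vec/text_vec: {feature word: weight}. Returns matched weight share."""
    matched = 0.0
    for w, wt in title_vec.items():
        best = max(word_sim(w, t) for t in text_vec)
        if best >= threshold:                       # a successful matching pair
            matched += wt
    return matched / sum(title_vec.values())

title = {"cluster": 0.6, "parallel": 0.4}
text = {"clustering": 0.3, "mapreduce": 0.4, "parallelism": 0.3}
print(title_credibility(title, text))
```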

    Improved information gain text feature selection algorithm based on word frequency information
    SHI Hui JIA Daiping MIAO Pei
    2014, 34(11):  3279-3282.  DOI: 10.11772/j.issn.1001-9081.2014.11.3279
    Abstract ( )   PDF (574KB) ( )  
    References | Related Articles | Metrics

    On the basis of a detailed analysis of the traditional algorithm and related improved algorithms, an improved Information Gain (IG) algorithm based on word frequency information was proposed to address the insufficient consideration of feature frequency in the traditional IG feature selection algorithm. The improved algorithm modified the parameters of the traditional IG algorithm, mainly with respect to the frequency of features within a category, their distribution within a category, and their distribution among different categories, so as to make full use of feature frequency. Text categorization experiments comparing the proposed algorithm with the traditional IG algorithm and a weighted improved IG algorithm indicate that the proposed algorithm yields an obvious improvement in text categorization accuracy.
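
    A minimal sketch of frequency-aware information gain, under an assumed correction: classic document-level IG is scaled by how concentrated the term's occurrences are across categories, standing in for one of the paper's three frequency aspects. The toy counts are illustrative.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def freq_weighted_ig(doc_counts, term_doc_counts, term_freqs):
    """doc_counts[c]: docs in class c; term_doc_counts[c]: docs in c with term;
    term_freqs[c]: total occurrences of the term in class c."""
    doc_counts = np.asarray(doc_counts, dtype=float)
    with_t = np.asarray(term_doc_counts, dtype=float)
    N = doc_counts.sum()
    pt = with_t.sum() / N
    ig = (entropy(doc_counts / N)
          - pt * entropy(with_t / with_t.sum())
          - (1 - pt) * entropy((doc_counts - with_t) / (doc_counts - with_t).sum()))
    tf = np.asarray(term_freqs, dtype=float)
    concentration = tf.max() / tf.sum()            # frequency distribution factor
    return ig * concentration

print(freq_weighted_ig([50, 50], [30, 5], [120, 8]))
```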

    Vulnerability classification based on binary tree with entropy multi-class support vector machine
    ZHANG Peng XIE Xiaoyao
    2014, 34(11):  3283-3286.  DOI: 10.11772/j.issn.1001-9081.2014.11.3283
    Abstract ( )   PDF (646KB) ( )  
    References | Related Articles | Metrics

    To effectively improve the accuracy of vulnerability classification, a vulnerability classification algorithm based on a binary tree of entropy-based multi-class support vector machines was proposed to overcome the shortcomings of the classification algorithm based on binary-tree multi-class support vector machines, effectively reducing the complexity of classification and the dependence on the binary tree structure. The smallest hypersphere was defined to partition the vulnerability sample space, and entropy was used to describe the degree of confusion among vulnerabilities, which simplified the calculation process of vulnerability classification and reduced the dependence of the classification results on the binary tree structure. Experiments were conducted on 3000 vulnerability samples from the Common Weakness Enumeration (CWE); the average classification accuracy and average recall were 93.3% and 93.25% respectively, higher than those of the classification algorithm based on binary-tree multi-class support vector machines and the K-Nearest Neighbor (KNN) classification algorithm. The experimental results indicate that the proposed algorithm is practical and feasible, and can achieve precise vulnerability classification.

    Effective multi-base scalar multiplication algorithm with side-channel attack resistance
    YIN Heng JIANG Chaohui FU Wei
    2014, 34(11):  3287-3290.  DOI: 10.11772/j.issn.1001-9081.2014.11.3287
    Abstract ( )   PDF (572KB) ( )  
    References | Related Articles | Metrics

    To improve the security and efficiency of scalar multiplication in Elliptic Curve Cryptography (ECC), a new multi-base scalar multiplication algorithm was presented based on existing side-channel attack countermeasures and scalar multiplication algorithms. To enhance security, a random number and base-point masking were introduced to hide the algorithm's side-channel information. Meanwhile, fast point halving was combined with the multi-base representation of the scalar to improve efficiency. Security analysis shows that the algorithm resists various side-channel attacks well. Experimental results also show that, on the elliptic curves recommended by the National Institute of Standards and Technology (NIST), including NIST B-163, NIST B-233 and NIST B-283, the efficiency of the new method is improved by about 36%-42% over Purohit's method and about 8%-11% over Lai's method (LAI Z, ZHANG Z, TAO D. Algorithm for directly computing 7P elliptic curves and its application[J]. Journal of Computer Applications, 2013, 33(7): 1870-1874.) when the numbers of pre-computed points are 2 and 5 respectively. The new algorithm is applicable to smart cards and other storage-constrained domains, making the encryption and decryption of sensitive data more secure and efficient.
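
    As one illustrative building block, the sketch below computes a greedy multi-base representation of the scalar over the bases {2, 3}. A real implementation would share the point doublings and triplings across terms, and the paper additionally uses point halving and base-point masking, which are omitted here; the base set is an assumption.

```python
def multibase_digits(k):
    """Greedily peel off the largest 2^a * 3^b term not exceeding k."""
    terms = []
    while k > 0:
        best = 1
        a_pow = 1
        while a_pow <= k:                 # enumerate 2^a * 3^b <= k
            b_pow = a_pow
            while b_pow <= k:
                best = max(best, b_pow)
                b_pow *= 3
            a_pow *= 2
        terms.append(best)
        k -= best
    return terms                          # k*P = sum(t * P for t in terms)

terms = multibase_digits(314159)
print(terms, sum(terms) == 314159)
```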

    ID-based proxy re-signature scheme with strong unforgeability
    FENG Jie LAN Caihui XIE Borong
    2014, 34(11):  3291-3294.  DOI: 10.11772/j.issn.1001-9081.2014.11.3291
    Abstract ( )   PDF (598KB) ( )  
    References | Related Articles | Metrics

    Concerning the problem that existing ID-based proxy re-signature schemes are only existentially unforgeable, an ID-based bidirectional proxy re-signature scheme was proposed by using a target collision-resistant hash function and bilinear pairing. Under the computational Diffie-Hellman assumption, the proposed scheme is provably strongly unforgeable under adaptive chosen-message attacks. Moreover, the scheme has short system parameters, short re-signatures, low re-signing computation cost and high security, overcoming the low security and complex key management of existing proxy re-signature schemes.

    Web video copy detection based on variable-length step key frame selection
    CHEN Xiaohui CHEN Xiuhong GAN Yuesong
    2014, 34(11):  3295-3299.  DOI: 10.11772/j.issn.1001-9081.2014.11.3295
    Abstract ( )   PDF (776KB) ( )  
    References | Related Articles | Metrics

    To weaken the influence of unrepresentative frames and reduce the time complexity of detecting video copies on the network, a fast and efficient variable-length-step algorithm was proposed to select the most representative key frames. According to the continuity of video content, highly similar adjacent frames can represent the video segment between them. The algorithm first selected the core region and the less affected edge region of a key frame and assigned different weights to the different areas, so that the Ordinal Measures (OM) transformation distance could be used to judge the similarity between two frames. Then a sliding window was used to find the most similar match so as to detect copied clips of query videos. The experimental results on real network data and the MUSCLE-VCD-2007 dataset show that the proposed algorithm has stronger robustness and higher detection efficiency than existing OM-based copy detection algorithms.
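
    A minimal sketch of the OM comparison between two frames: each frame is split into blocks, block mean intensities are ranked, and the distance between the rank permutations measures dissimilarity. The 3x3 block grid, uniform weights and random frames are illustrative assumptions; the paper additionally weights core and edge regions differently.

```python
import numpy as np

def om_signature(frame, grid=3):
    h, w = frame.shape
    means = [frame[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))            # rank of each block

def om_distance(f1, f2, weights=None):
    r1, r2 = om_signature(f1), om_signature(f2)
    weights = np.ones_like(r1, dtype=float) if weights is None else weights
    return float(np.sum(weights * np.abs(r1 - r2)))

a = np.random.rand(90, 120)
b = a + 0.01 * np.random.rand(90, 120)              # near-duplicate frame
print(om_distance(a, b), om_distance(a, np.random.rand(90, 120)))
```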

    Super-resolution reconstruction based on dictionary learning and non-local similarity
    SHOU Zhaoyu WU Guangxiang CHEN Lixia
    2014, 34(11):  3300-3303.  DOI: 10.11772/j.issn.1001-9081.2014.11.3300
    Abstract ( )   PDF (784KB) ( )  
    References | Related Articles | Metrics

    To deal with the single-image scale-up problem, a super-resolution reconstruction algorithm based on dictionary learning and non-local similarity was proposed. The difference images between the high-resolution images and the results of iterative back-projection reconstruction were computed; then, following the idea that the high- and low-resolution dictionaries can be co-trained for super-resolution reconstruction, the pair of dictionaries was jointly generated from the difference image patches and the corresponding low-resolution image patches by the K-Singular Value Decomposition (K-SVD) algorithm. In addition, a non-local similarity regularization constraint was introduced to further improve the quality of the reconstructed images. The experimental results show that the proposed algorithm outperforms learning-based algorithms in terms of both visual perception and objective evaluation.

    Joint reconstruction of breast dynamic contrast-enhanced magnetic resonance images with group sparsity method
    WANG Guanhao XU Jun
    2014, 34(11):  3304-3308.  DOI: 10.11772/j.issn.1001-9081.2014.11.3304
    Abstract ( )   PDF (861KB) ( )  
    References | Related Articles | Metrics

    Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI), in which malignant tumors generally show faster and higher levels of enhancement than benign or normal tissue after an intravenous injection of the contrast agent Gd-DTPA, plays an important role in diagnosing and detecting malignant tumors. However, fast reconstruction of DCE-MR images remains a challenge. Based on the idea of group sparsity and the theory of Compressed Sensing (CS), a conjugate gradient algorithm combined with a variable-density random sampling method was employed to obtain samples from the local k-space (Fourier coefficient) data, and the traditional l1-norm conjugate gradient descent algorithm was extended to the l2,1 norm to jointly reconstruct multiple DCE-MR images simultaneously. Compared with the conventional Multiple-Measurement-Vector (MMV) algorithm, the proposed approach yields faster and more accurate reconstruction. The experimental results show that when the sampling rate is below 40%, the joint reconstruction time based on the conjugate gradient algorithm decreases by almost 30% compared with the MMV algorithm; moreover, compared with uniform random sampling, the variable-density random sampling method improves the accuracy by about 70%.
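
    Two ingredients named above are easy to sketch: the l2,1 norm that couples the frames in joint reconstruction, and a variable-density random sampling mask. The Gaussian density profile, target rate and sizes are illustrative assumptions.

```python
import numpy as np

def l21_norm(X):
    """X: (n_coefficients, n_frames). Sum of row-wise l2 norms: group sparsity
    keeps a coefficient active across frames or suppresses it in all of them."""
    return np.sum(np.sqrt(np.sum(np.abs(X) ** 2, axis=1)))

def variable_density_mask(shape, rate=0.3, sigma=0.25):
    """Sample k-space more densely near the (low-frequency) center."""
    ny, nx = shape
    y, x = np.mgrid[-1:1:ny*1j, -1:1:nx*1j]
    density = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    density *= rate * ny * nx / density.sum()       # expected sampling rate
    return np.random.rand(ny, nx) < density

mask = variable_density_mask((128, 128), rate=0.3)
print(mask.mean())                                  # close to the target rate
X = np.random.randn(64, 4)                          # 4 jointly reconstructed frames
print(l21_norm(X))
```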

    Segmentation method for affinity propagation clustering images based on fuzzy connectedness
    DU Yanxin GE Hongwei XIAO Zhiyong
    2014, 34(11):  3309-3313.  DOI: 10.11772/j.issn.1001-9081.2014.11.3309
    Abstract ( )   PDF (796KB) ( )  
    References | Related Articles | Metrics

    Considering the low accuracy of existing image segmentation methods based on affinity propagation clustering, an FCAP algorithm combining fuzzy connectedness and affinity propagation clustering was proposed. A Whole Fuzzy Connectedness (WFC) algorithm was also proposed to address the shortcoming that traditional fuzzy connectedness algorithms cannot obtain the fuzzy connectedness of every pair of pixels. In FCAP, the image was first over-segmented into superpixels; the superpixels were treated as data points, their fuzzy connectedness was computed by WFC, and the affinities between superpixels were calculated from their fuzzy connectedness and spatial distances. Finally, the affinity propagation clustering algorithm was used to complete the segmentation. The experimental results show that FCAP is much better than methods that apply affinity propagation clustering directly to superpixels, and achieves competitive performance compared with other unsupervised segmentation methods.
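
    The final clustering stage can be sketched as follows, assuming the fuzzy connectedness between superpixels has already been computed by WFC; here a random symmetric matrix stands in for it, and the weighting of connectedness against spatial distance is an assumption. sklearn's AffinityPropagation accepts a precomputed affinity matrix.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
n = 40                                              # number of superpixels
fc = rng.random((n, n))
fc = (fc + fc.T) / 2                                # stand-in fuzzy connectedness
centers = rng.random((n, 2))                        # superpixel centroids
dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)

affinity = fc - 0.5 * dist                          # connectedness vs. distance
ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(affinity)
print(len(set(labels)), "segments")
```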

    Target recognition method based on deep belief network
    SHI Hehuan XU Yuelei YANG Zhijun LI Shuai LI Yueyun
    2014, 34(11):  3314-3317.  DOI: 10.11772/j.issn.1001-9081.2014.11.3314
    Abstract ( )   PDF (796KB) ( )  
    References | Related Articles | Metrics

    Aiming at improving the robustness of pre-processing and extracting features sufficiently from Synthetic Aperture Radar (SAR) images, an automatic target recognition algorithm for SAR images based on the Deep Belief Network (DBN) was proposed. Firstly, a non-local means despeckling algorithm based on the Dual-Tree Complex Wavelet Transform (DT-CWT) was proposed; then, combined with estimation of the target azimuth, robust pre-processing of the original data was achieved; finally, a multi-layer DBN was applied to extract deep abstract visual information as features to complete target recognition. Experiments were conducted on three Moving and Stationary Target Acquisition and Recognition (MSTAR) databases. The results show that the algorithm performs efficiently with high accuracy and robustness.

    Compressed sensing measurement matrix based on quasi-cyclic low density parity check code
    JIANG Xiaoyan XIE Zhengguang HUANG Hongwei CAI Xu
    2014, 34(11):  3318-3322.  DOI: 10.11772/j.issn.1001-9081.2014.11.3318
    Abstract ( )   PDF (783KB) ( )  
    References | Related Articles | Metrics

    To overcome the shortcoming that random measurement matrices are hard to implement in hardware because their elements are randomly generated, a new structured, sparse, deterministic measurement matrix was proposed by studying measurement matrix theory in Compressed Sensing (CS). The new matrix was based on the parity check matrix of a Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) code over a finite field. Owing to the good channel decoding performance of QC-LDPC codes, the CS measurement matrix based on them was expected to perform well. To verify the performance of the new matrix, CS reconstruction experiments on one-dimensional and two-dimensional signals were conducted. The experimental results show that, compared with commonly used matrices, the proposed matrix has lower reconstruction error under the same reconstruction algorithm and compression ratio, improving the Peak Signal-to-Noise Ratio (PSNR) by about 0.5-1 dB. In particular, if the new matrix is applied in hardware, the required physical storage space and the complexity of the hardware implementation can be greatly reduced thanks to the quasi-cyclic and symmetric structure.
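
    The storage saving comes from the quasi-cyclic structure: the matrix is a block array of circulantly shifted identity matrices, so only a small shift table needs storing. The sketch below uses an illustrative shift table, not the paper's QC-LDPC exponent matrix over a finite field.

```python
import numpy as np

def circulant_shift(p, s):
    """p x p identity with its columns cyclically shifted by s."""
    return np.roll(np.eye(p, dtype=np.int8), s, axis=1)

def qc_matrix(shift_table, p):
    return np.block([[circulant_shift(p, s) for s in row] for row in shift_table])

shifts = [[0, 1, 3, 7], [2, 5, 6, 1]]               # 2x4 blocks of size p x p
Phi = qc_matrix(shifts, p=16)                        # 32 x 64 measurement matrix
print(Phi.shape, Phi.sum(axis=1)[:3])                # each row has one 1 per block
```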

    Resampling tampering detection based on Markov process and pseudo-polar fast Fourier transform
    ZHOU Zhiping HU Chengyan ZHU Dan
    2014, 34(11):  3323-3326.  DOI: 10.11772/j.issn.1001-9081.2014.11.3323
    Abstract ( )   PDF (725KB) ( )  
    References | Related Articles | Metrics

    Resampling tampering in digital images changes the correlation between Discrete Cosine Transform (DCT) coefficients, and a new resampling tampering detection method was proposed to exploit this. First, Markov features were extracted and the relationship between DCT coefficients was analyzed with high-order statistics. Second, the Cartesian coordinates were mapped onto pseudo-polar axes, the smoothness of the image was extracted as a texture feature, and this texture feature was used to detect the resampling operation. Finally, the two types of features were fed into a Support Vector Machine (SVM) for training and classification to detect resampling tampering. The experimental results show that the method can detect resampling tampering in images with a high detection rate, and is robust to noise within a certain range.

    Video shot boundary detection method based on pre-processing
    ZHANG Yikui ZHAO Hui
    2014, 34(11):  3327-3331.  DOI: 10.11772/j.issn.1001-9081.2014.11.3327
    Abstract ( )   PDF (777KB) ( )  
    References | Related Articles | Metrics

    To reduce the high cost of fast video Shot Boundary Detection (SBD), an improved SBD method based on video pre-processing was proposed. An adaptive threshold was used to select candidate segments that may contain shot boundaries; the beginning frame was detected by comparing the similarity between the first frame and the remaining frames in a candidate segment, and cuts were detected immediately. If no cut was detected, gradual transition detection was performed; candidate segments were adjusted to ensure a whole transition lay within the same segment, and the ending frame was confirmed by comparing the similarity between the beginning frame and the remaining frames in the segment. Experimental results demonstrate that the proposed algorithm achieves an accuracy above 90% and reduces the time cost by 15.6%-30.2% compared with the inverted-triangle pattern matching method. The proposed algorithm meets the accuracy requirement and improves detection speed compared with traditional methods that must detect both cuts and gradual transitions.
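
    The pre-processing stage can be sketched as follows: frame-to-frame histogram differences are computed once, and an adaptive (mean plus k standard deviations) threshold selects candidate segments likely to contain a boundary. The k value, histogram size, padding and synthetic frames are illustrative assumptions, not the paper's exact similarity measure.

```python
import numpy as np

def candidate_segments(frames, bins=64, k=2.0, pad=3):
    hists = [np.histogram(f, bins=bins, range=(0, 1))[0] for f in frames]
    diffs = np.array([np.abs(hists[i+1] - hists[i]).sum()
                      for i in range(len(hists) - 1)])
    thresh = diffs.mean() + k * diffs.std()          # adaptive threshold
    hits = np.flatnonzero(diffs > thresh)
    segments = []                                    # merge nearby hits
    for i in hits:
        if segments and i <= segments[-1][1] + pad:
            segments[-1][1] = i + pad
        else:
            segments.append([max(0, i - pad), i + pad])
    return segments

frames = [np.random.rand(32, 32) * (0.3 if t < 50 else 0.9) for t in range(100)]
print(candidate_segments(frames))                    # segment around frames 49-50
```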

    Multi-level image projection detection algorithm for low-altitude targets
    ZHANG Yu WANG Xiaoyan
    2014, 34(11):  3332-3335.  DOI: 10.11772/j.issn.1001-9081.2014.11.3332
    Abstract ( )   PDF (585KB) ( )  
    References | Related Articles | Metrics

    To solve the problems of earth-surface background disturbance and the detection of small, weak targets in rain, a multi-level image projection detection algorithm for low-altitude targets was proposed. Firstly, the position and gray-level characteristics of the ground and sky areas in the original image were analyzed, and the image was split into sky and ground areas according to the step change in the horizontal gray-level projection. Secondly, the horizontal and vertical strip areas containing the target were extracted from the sky area based on the maxima of the first-order differences of the horizontal and vertical projections respectively; then the vertical gray projection of the horizontal strip and the horizontal gray projection of the vertical strip were calculated, and two candidate target coordinates were determined from the maxima of the first-order differences of these projections. Finally, the two candidate coordinates were verified and the target coordinates were calculated. The experimental results show that the proposed algorithm can detect low-altitude targets against complex earth-surface backgrounds as well as small, weak targets in rain, and is fast enough to satisfy the requirement of real-time video image processing.
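
    The first level is simple to demonstrate: the row-wise gray projection and its first-order difference locate the step change at the horizon, splitting the image into sky and ground. The synthetic image below is an illustrative assumption.

```python
import numpy as np

def split_sky_ground(img):
    row_proj = img.mean(axis=1)                      # horizontal gray projection
    diff = np.abs(np.diff(row_proj))                 # first-order difference
    horizon = int(np.argmax(diff)) + 1               # largest step change
    return img[:horizon, :], img[horizon:, :]

img = np.vstack([np.full((60, 100), 0.8),            # bright sky
                 np.full((40, 100), 0.3)])           # darker ground
img += 0.02 * np.random.rand(*img.shape)
sky, ground = split_sky_ground(img)
print(sky.shape, ground.shape)                       # (60, 100) (40, 100)
```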

    Hybrid detection technique for Android kernel hook
    HUA Baojian ZHOU Aiting ZHU Hongjun
    2014, 34(11):  3336-3339.  DOI: 10.11772/j.issn.1001-9081.2014.11.3336
    Abstract ( )   PDF (820KB) ( )  
    References | Related Articles | Metrics

    To address the challenge of Android kernel hook detection, a new approach was proposed that combines a static technique based on characteristic patterns with a dynamic technique based on behavioral analysis. Attacks that modify the system call table, as well as inline hooks, can be detected by the proposed approach. A software prototype was implemented and evaluated. The experimental results show that the proposed method detects Android kernel hooks effectively and efficiently: for most test cases the runtime overhead is below 7%, making it suitable for practical detection of Android kernel hooks.

    Identification method of system reliability structure
    LI Qingmin LI Hua XU Li YUAN Wei
    2014, 34(11):  3340-3343.  DOI: 10.11772/j.issn.1001-9081.2014.11.3340
    Abstract ( )   PDF (591KB) ( )  
    References | Related Articles | Metrics

    In integrated support engineering, the number of components in a reliability block diagram is large, a high level of mastery of the system's principles is required, and operational data are often incomplete. To resolve these problems, a method was proposed that identifies the reliability structure of a system from operational data and unit reliability information. The system reliability was estimated from system performance information; all candidate reliability structure models were traversed and their theoretical reliabilities were calculated from the units' reliability information; the deviations between the estimated system reliability and all theoretical values were then computed, and after sorting the deviations, the N reliability structure models with the lowest deviations were output as the identification result. The calculation results of a given example show that a combined system based on a voting reliability structure can be identified with a probability of around 80%, narrowing the candidates to about 3% of all possible forms, which significantly reduces the workload of identifying the system reliability structure.
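
    The identification loop can be sketched over a small model library: series, parallel and k-of-n voting structures are evaluated from unit reliabilities, and candidates are ranked by deviation from the estimated system reliability. The library and unit values below are illustrative assumptions, far smaller than the paper's traversal.

```python
from itertools import combinations
import math

def voting(rs, k):
    """Reliability of a k-of-n voting system with independent units."""
    n = len(rs)
    total = 0.0
    for m in range(k, n + 1):
        for up in combinations(range(n), m):
            p = 1.0
            for i in range(n):
                p *= rs[i] if i in up else (1 - rs[i])
            total += p
    return total

def identify(rs, r_system, top=3):
    models = {"series": math.prod(rs),
              "parallel": 1 - math.prod(1 - r for r in rs)}
    for k in range(1, len(rs) + 1):
        models[f"{k}-of-{len(rs)} voting"] = voting(rs, k)
    ranked = sorted(models.items(), key=lambda kv: abs(kv[1] - r_system))
    return ranked[:top]                              # N best-matching structures

units = [0.9, 0.85, 0.88]
print(identify(units, r_system=0.93))                # 2-of-3 voting ranks first
```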

    Fitting of scaling curve and financial time series clustering
    YUAN Ming
    2014, 34(11):  3344-3347.  DOI: 10.11772/j.issn.1001-9081.2014.11.3344
    Abstract ( )   PDF (767KB) ( )  
    References | Related Articles | Metrics

    In order to take the multifractal properties of financial time series into consideration, a clustering method based on measuring similarities among CSI 300 constituent stocks through scaling curves was proposed. The algorithm first fitted scaling curves at different autocorrelation orders through Multi-Scale Detrended Fluctuation Analysis (MSDFA). Then it extracted the distribution and shape features of the scaling curves to construct pattern vectors. Clustering was implemented with a weighted K-means algorithm, and the optimal number of clusters was determined by the Davies-Bouldin Index (DBI). The results show that clustering based on scaling curves can discover industry aggregation and the strong linkage of different sectors within the stock market; portfolios built from different clusters can greatly reduce risk, and the proposed method outperforms clustering based on the linear trend features of the original series.
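
    The core of MSDFA is detrended fluctuation analysis: the scaling curve comes from the slope of log F(s) against log s. The sketch below uses first-order (linear) detrending and illustrative window sizes; the paper additionally varies the autocorrelation order, which is omitted here.

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    profile = np.cumsum(x - np.mean(x))              # integrated series
    F = []
    for s in scales:
        n = len(profile) // s
        segs = profile[:n * s].reshape(n, s)
        t = np.arange(s)
        resid = []
        for seg in segs:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend
            resid.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(resid)))            # fluctuation function F(s)
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope                                     # scaling exponent

returns = np.random.randn(4096)                      # stand-in for stock returns
print(round(dfa_exponent(returns), 2))               # ~0.5 for white noise
```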

    Gas emission prediction model of working face based on chaos immune particle swarm optimizations and generalized regression neural network
    WANG Yuhong FU Hua HOU Fujian ZHANG Yang
    2014, 34(11):  3348-3352.  DOI: 10.11772/j.issn.1001-9081.2014.11.3348
    Abstract ( )   PDF (739KB) ( )  
    References | Related Articles | Metrics

    To improve the accuracy and efficiency of absolute gas emission prediction, a new algorithm based on Chaos Immune Particle Swarm Optimization (CIPSO) and the Generalized Regression Neural Network (GRNN) was proposed. In this algorithm, CIPSO was employed to dynamically optimize the smoothing factor of the GRNN, reducing the impact of artificial factors in GRNN model construction, and the optimized network was then used to establish the gas emission prediction model. Simulation results on the gas emission data of a coal mine show that the model converges faster and predicts more accurately than prediction models based on BP and Elman neural networks, proving the proposed method feasible and effective.
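
    A GRNN prediction has a compact Nadaraya-Watson form, and the smoothing factor sigma is exactly the parameter the paper tunes with CIPSO; in the sketch below it is fixed by hand, and the training data are illustrative assumptions.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Output is a Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2 * sigma ** 2))               # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)             # summation / division layers

X = np.random.rand(100, 4)                           # stand-in mining features
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.1 * np.random.randn(100)
print(grnn_predict(X, y, X[:3], sigma=0.3), y[:3])    # CIPSO would search sigma
```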

    Teaching resources recommendation system for K12 education
    ZHANG Haidong NI Wancheng ZHAO Meijing YANG Yiping
    2014, 34(11):  3353-3356.  DOI: 10.11772/j.issn.1001-9081.2014.11.3353
    Abstract ( )   PDF (767KB) ( )  
    References | Related Articles | Metrics

    In the data layer, the course model and resource model were built based on a Markov chain and the vector space model, and the teacher model was built from teachers' registration information and course model nodes. In the off-line layer, the content features of the course and resource models were extracted via the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm, and the course and resource models of the data layer were initialized and optimized; then the relations between any two resources, or between a resource and a course, were calculated using association rule mining and similarity measures, and intermediate recommendation results were generated using the teacher and course models. In the on-line layer, a weighted hybrid recommendation algorithm was proposed to generate the recommendation list. The proposed system has been successfully applied in a real education resource sharing platform containing 600 thousand teaching resources.

    Electrooculogram assisted electromyography human-machine interface system based on multi-class support vector machine
    ZHANG Yi LIU Rui LUO Yuan
    2014, 34(11):  3357-3360.  DOI: 10.11772/j.issn.1001-9081.2014.11.3357
    Abstract ( )   PDF (714KB) ( )  
    References | Related Articles | Metrics

    Concerning the low recognition rate of Electromyography (EMG) control systems, a new Human-Computer Interaction (HCI) system based on Electrooculogram (EOG)-assisted EMG was designed and implemented. The feature vectors of EOG and EMG were extracted by a threshold method and an improved wavelet transform respectively, and then fused. The fused features were classified by a multi-class Support Vector Machine (SVM), and different control commands were generated according to the pattern recognition results. The experimental results show that, compared with a single-EMG control system, the new system achieves a higher recognition rate with better operability and stability.
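
    The final stage, feature-level fusion followed by multi-class SVM classification, can be sketched with sklearn; the feature dimensions, class count and random data below are illustrative assumptions. SVC handles multi-class problems internally via one-vs-one voting.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_classes = 300, 5                                # 5 control commands
eog = rng.normal(size=(n, 4))                        # threshold-method features
emg = rng.normal(size=(n, 8))                        # wavelet features
y = rng.integers(0, n_classes, size=n)

X = np.hstack([eog, emg])                            # feature-level fusion
clf = SVC(kernel="rbf", C=1.0).fit(X[:240], y[:240])
commands = clf.predict(X[240:])                      # -> control commands
print((commands == y[240:]).mean())
```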

    High-speed data acquisition and transmission system for low-energy X-ray industrial CT
    YANG Lei GAO Fuqiang LI Ling CHEN Yan LI Ren
    2014, 34(11):  3361-3364.  DOI: 10.11772/j.issn.1001-9081.2014.11.3361
    Abstract ( )   PDF (623KB) ( )  
    References | Related Articles | Metrics

    To meet the demands of high-speed scanning and massive data transmission in low-energy X-ray industrial Computed Tomography (CT), a high-speed data acquisition and transmission system for low-energy X-ray industrial CT was designed. The X-CARD 0.2-256G detector from DT was selected. To achieve high-speed analog-to-digital conversion, a high-speed time-division multiplexing circuit was combined with ping-pong buffering of the data cache, and a Gigabit Ethernet interface was designed with a Field Programmable Gate Array (FPGA) as the master chip to meet the requirements of high-speed multi-channel data transmission. The experimental results show that the acquisition rate of the system reaches 1 MHz, the transmission speed reaches 926 Mb/s, and the dynamic range is greater than 5000. The system can effectively shorten the scanning time of low-energy X-ray detection and meet the data transmission requirements of larger channel counts.

    Application of fractal theory on identification of near-surface defects in ultrasonic phased array testing
    CHEN Shili HUANG Yuqiu ZHANG Hui YANG Xiaoxia GUO Wei
    2014, 34(11):  3365-3368.  DOI: 10.11772/j.issn.1001-9081.2014.11.3365
    Abstract ( )   PDF (599KB) ( )  
    References | Related Articles | Metrics

    Near-surface defects are hard to identify in ultrasonic phased array Non-Destructive Testing (NDT), so a new intelligent identification method based on fractal theory was proposed. A box-counting dimension algorithm based on linear interpolation was described and used to calculate the box-counting dimensions of 140 groups of ultrasonic A-scan time-domain signals, and the distribution of the box-counting dimensions was analyzed statistically. The experimental results show that ultrasonic A-scan signals are clearly fractal and that fractal analysis of A-scan signals is effective. Since the box-counting dimensions of defective signals differ from those of defect-free signals, the method has the potential to identify near-surface defects, improving the detection rate of near-surface defects and reducing omissions caused by human factors in automated ultrasonic phased array testing.
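
    A minimal sketch of the box-counting dimension of an A-scan signal, with linear interpolation used to resample the signal onto the box grid as the paper describes; the box sizes, box-height scale and synthetic signal are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(signal, box_sizes=(2, 4, 8, 16, 32, 64)):
    n = len(signal)
    x = np.arange(n)
    dy = (signal.max() - signal.min()) / n           # vertical box unit
    counts = []
    for size in box_sizes:
        grid = np.linspace(0, n - 1, n // size * size)
        y = np.interp(grid, x, signal)               # linear interpolation
        y = y.reshape(-1, size)                      # one column per box window
        # boxes of height size*dy needed to cover each window's value range
        counts.append(np.sum(np.ceil((y.max(1) - y.min(1)) / (size * dy) + 1)))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes, float)),
                          np.log(counts), 1)
    return slope                                     # slope = box-counting dim

t = np.linspace(0, 1, 1024)
a_scan = np.sin(40 * np.pi * t) * np.exp(-3 * t) + 0.05 * np.random.randn(1024)
print(round(box_counting_dimension(a_scan), 3))
```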
