
Table of Contents

    10 November 2015, Volume 35 Issue 11
    DPCS 2015 Paper
    Communication-aware multiple directed acyclic graph scheduling considering cost and fairness
    WANG Yuxin, CAO Shijie, GUO He, CHEN Zheng, CHEN Xin
    2015, 35(11):  3017-3020.  DOI: 10.11772/j.issn.1001-9081.2015.11.3017
    Multiple Directed Acyclic Graph (DAG) scheduling algorithms are supposed to take many factors into account, such as execution time, communication overhead, cost, and fairness of all DAGs. Therefore, in order to increase fairness and reduce cost, a new scheduling strategy named CAFS (Communication Aware Fair Scheduling), based on CA-DAG (Communication Aware-DAG), was proposed, and a BD (Backward Difference) rule was introduced to optimize the finish time of all DAGs. CAFS consists of two parts: the pre-scheduling part adopts CACO (Communication Aware Cost Optimization) to optimize the total cost and utilizes the classical fairness algorithm to decide the scheduling sequence; based on this sequence, the second part schedules all the DAGs using the BD rule to reduce the finish time. The simulation results show that CAFS can reduce the finish time without increasing cost while guaranteeing fairness, and the average execution time is reduced by 19.82%.
    Three-dimensional coverage algorithm based on virtual force in sensor network
    DANG Xiaochao, YANG Dongdong, HAO Zhanjun
    2015, 35(11):  3021-3025.  DOI: 10.11772/j.issn.1001-9081.2015.11.3021
    To meet the requirement of non-uniform coverage of nodes, a Three-Dimensional Coverage Algorithm based on Virtual Force (3D-CAVF) in sensor networks was introduced. In this algorithm, virtual force was applied in the wireless sensor network to implement node arrangement. By means of virtual force and congestion degree control, the nodes could automatically cover the events, so that the node distribution and the density of events reached a balanced state. According to the simulation experiments in Matlab, when the events are in T-shaped non-uniform arrangement and linear non-uniform arrangement, the event set coverage efficiency of the proposed algorithm is 3.6% and 3.1% higher than that of APFA3D (Artificial Potential Field Algorithm in Three-Dimensional space) and ECA3D (Exact Covering Algorithm in Three-Dimensional space) respectively. The simulation results indicate that the proposed algorithm can arrange nodes efficiently in three-dimensional wireless sensor networks.
    Cloud migration performance of Tsinghua cloud monitoring platform
    MA Haifeng, YI Hebali, WANG Ye, YANG Jiahai, ZHANG Chao
    2015, 35(11):  3026-3030.  DOI: 10.11772/j.issn.1001-9081.2015.11.3026
    With the popularization of cloud computing technology, many enterprises have migrated or are planning to migrate their business and applications to the cloud. However, migration may cause application performance degradation, and key business applications may suffer security threats. Therefore, whether to migrate to the cloud or to deploy on independent servers is a problem that needs further study. Based on the Tsinghua cloud platform, a monitoring platform was set up using Nagios. Firstly, the Tsinghua cloud platform and its architecture were introduced, and then Nagios and the architecture of the Tsinghua cloud monitoring platform were discussed. For cloud migration performance evaluation, Ubuntu and Windows were used as operating system platforms, CPU load and memory usage were used as evaluation indexes, and two applications, one CPU-intensive and one server-load-intensive, were run on the cloud server and the independent server respectively. At last, the experimental results were analyzed and compared. The experimental results show that some applications perform better on independent servers and may not be suitable for migrating to the cloud platform.
    Regional cluster-based lifetime optimization strategy for large-scale wireless sensor networks
    WANG Yan, ZHANG Tingting, SONG Zhirun, WANG Junlu, GUO Jingyu
    2015, 35(11):  3031-3037.  DOI: 10.11772/j.issn.1001-9081.2015.11.3031
    In view of the wide monitoring areas and large numbers of sensors in large-scale monitoring systems such as environment monitoring and power grid ice-disaster monitoring, a Regional Cluster-based lifetime optimization Strategy for large-scale wireless sensor networks (RCS) was proposed to save network energy consumption and prolong the lifetime of the network. The strategy firstly used the AGNES (Agglomerative Nesting) algorithm to divide the network into several subareas based on node location, optimizing the distribution of cluster heads. Secondly, uneven clusters were constructed after cluster heads were generated, and a time threshold was set to balance node energy consumption. Finally, for inter-cluster communication, multi-hop routing was adopted by constructing a minimum spanning tree on the basis of the calculated network energy cost, balancing the energy consumption of the cluster heads. In the simulation, compared with the LEACH (Low Energy Adaptive Clustering Hierarchy) and EEUC (Energy-Efficient Uneven Clustering) algorithms, RCS reduced the cluster head nodes' energy consumption by 45.1% and 2.4% on average respectively, and extended the network lifetime by 38% and 3.7% respectively. The simulation results show that RCS can balance the overall network energy consumption more efficiently and significantly prolong the network lifetime.
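    An illustrative note on the subarea-division step: AGNES is the classic agglomerative clustering scheme, so a minimal Python sketch with scikit-learn (not the authors' code; node coordinates and the cluster count are invented) could look like this:

```python
# Sketch: dividing sensor nodes into subareas with agglomerative
# clustering (AGNES-style), as in the pre-clustering step of RCS.
# Node coordinates and the number of subareas are illustrative.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 200, size=(300, 2))   # 300 nodes in a 200m x 200m field

agnes = AgglomerativeClustering(n_clusters=6, linkage="average")
subarea = agnes.fit_predict(nodes)           # subarea id for every node

for k in range(6):
    members = nodes[subarea == k]
    print(f"subarea {k}: {len(members)} nodes, centroid {members.mean(axis=0)}")
```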
    Data forwarding strategy based on gathering point in mobile opportunistic networks
    YUAN Peiyan, WANG Chenyang, LI Sijia
    2015, 35(11):  3038-3042.  DOI: 10.11772/j.issn.1001-9081.2015.11.3038
    The characteristic that mobile opportunistic networks exploit node contacts to forward packets is very suitable for the Ad Hoc networking requirements of actual environments, and thus a large number of applications have been produced. Considering that smart devices are generally carried by people or integrated in vehicles, human involvement is one of the most important factors for the success of these applications, so the influence of human mobility on data communication was explored. It was observed that people always visit hot regions, while other regions are visited less frequently. Motivated by this observation, GS (Gathering Spray), a human gathering point assisted spraying scheme for mobile opportunistic scenarios, was proposed. GS assumes that each hot region is configured with an Access Point (AP), which has a higher priority to cache and spray messages than other mobile nodes. Theoretical analysis verifies that GS achieves a lower mean delivery delay than Spray-and-Wait, and the simulation results show that GS improves the packet delivery ratio at the same time.
    Design method of measurement matrix for compressive sensing in wireless sensor network
    LIU Yanxing, DANG Xiaochao, HAO Zhanjun, DONG Xiaohui
    2015, 35(11):  3043-3046.  DOI: 10.11772/j.issn.1001-9081.2015.11.3043
    In order to solve the problem of redundancy and transmission energy consumption in the process of data acquisition in wireless sensor networks, a method for designing the measurement matrix of compressive sensing was proposed. The method is based on the linear representation theory of the diagonal matrix orthogonal basis; the matrix construction process is simple and fast, with high sparsity and low redundancy, which is very suitable for nodes with limited hardware resources. The simulation results show that the measurement matrix based on the linear representation theory of the diagonal matrix gains a higher signal recovery rate than the Gauss random matrix and the partial Hadamard matrix at the same signal reconstruction accuracy. The proposed method greatly reduces network traffic, saves network energy consumption and prolongs the network life cycle.
    Medium access control protocol based on partially observable Markov decision process in underwater acoustic sensor networks
    XU Ming, LIU Guangzhong
    2015, 35(11):  3047-3050.  DOI: 10.11772/j.issn.1001-9081.2015.11.3047
    Concerning the problem of spatial-temporal uncertainty caused by low bandwidth and high latency as well as insufficient network state observations in UnderWater Acoustic Sensor Networks (UWASN), a medium access control protocol based on Partially Observable Markov Decision Process (POMDP) was presented. Firstly, the link quality and residual energy of each node were divided into multiple discrete levels to express the nodes' state information. After that, the channel access probability was predicted by receivers through the history information of channel state observations and channel access actions, and the optimal channel scheduling strategy for senders was acquired. Senders communicated with receivers and transmitted data packets in their time slots according to the sequence assigned by the optimal channel scheduling strategy. When the communication was completed, the states of the related nodes in the next time slot were predicted based on the statistics of the network transfer probabilities. Simulation results show that the proposed protocol can improve the data packet transmission rate as well as the network throughput, and decrease the energy consumption.
    Trust model based on node dependency and interaction frequency in wireless Mesh network
    SONG Xiaoyu, XU Huan, BAI Qingyue
    2015, 35(11):  3051-3054.  DOI: 10.11772/j.issn.1001-9081.2015.11.3051
    The openness and dynamics of Wireless Mesh Networks (WMN) make them widely used; however, they also bring security problems, and traditional trust models can no longer meet the security requirements of WMN. Based on the trust principle of social networks, a new trust model named TFTrust was proposed. In TFTrust, a multi-dimensional factor calculation method was defined, including node contribution value, node dependency value and interaction frequency value, and the calculation method of the direct trust value was established. The simulation results show that the TFTrust model is better than the Ad Hoc On-demand Distance Vector routing (AODV) protocol and the Beth model in safety and quality of service, and reduces the cost of network communications.
    Performance evaluation method based on objective weight determination for data center network
    NAN Yang, CHEN Lin
    2015, 35(11):  3055-3058.  DOI: 10.11772/j.issn.1001-9081.2015.11.3055
    For large-scale data center networks, how to monitor the network effectively, discover performance bottlenecks and potential points of failure, and provide support for network performance optimization has become a new research subject. However, many factors affect network performance and their influences differ, so giving an accurate performance evaluation is difficult. To solve these problems, a network performance evaluation index system was proposed. On this basis, a method for evaluating the network performance of data centers based on objective weight determination (PE-OWD) was put forward. Using objective weight determination, the weights of the indexes were calculated dynamically, and with a data normalization method based on historical parameters, a complete network performance evaluation model was established. For the Tianhe-2 real network environment, the performance indexes of the network equipment were evaluated, and the validity of the evaluation method was verified.
    Private desktop cloud architecture with instant-start virtual machines
    CHEN Xin, XU Yizhen, GUO He, YU Yulong, LUO Jie, WANG Yuxin
    2015, 35(11):  3059-3062.  DOI: 10.11772/j.issn.1001-9081.2015.11.3059
    Private desktop clouds are widely used in various application scenarios, such as centralized computing, centralized management and remote office. Existing private clouds are mostly based on OpenStack; however, OpenStack takes a long time to start a virtual machine when used in a private desktop cloud, so it is unable to meet the real-time requirement of some application scenarios. Using a template image strategy and a network attached storage strategy for the cloud storage layer, an Instant-Start Virtual Machine (ISVM) private desktop cloud architecture was proposed, consisting of a cloud management layer, a cloud storage layer and a cloud service layer. Testing and analysis show that the time of starting virtual machines in the ISVM architecture is about one percent of that in the OpenStack architecture, reaching the millisecond order of magnitude and meeting the real-time requirement of some application scenarios.
    Private cloud platform design based on virtualization technology
    ZHANG Qian, CHEN Chaogen, LIANG Hong
    2015, 35(11):  3063-3069.  DOI: 10.11772/j.issn.1001-9081.2015.11.3063
    Combined with virtualization technology, a realization scheme of a private cloud platform based on multi-framework technology was put forward in order to improve the hardware utilization of distributed cluster systems and avoid the economic loss caused by idle equipment. This scheme met the demands of resource allocation, dynamic allocation and dynamic migration, and thus the underlying hardware was integrated. Aiming at the unbalanced loading of traditional virtual machine deployment methods, a virtual machine deployment mechanism based on dynamic allocation decisions was proposed: according to the resource characteristics of virtual machines and the load of the current physical nodes, dynamic deployment of virtual machines was carried out. At last, a private cloud computing platform with strong flexibility and good scalability was realized and tested by Fourier finite difference pre-stack depth migration in oil exploration. The results prove the feasibility and effectiveness of the private cloud platform. The virtual machine deployment test shows that the dynamic allocation decision can be used to deploy a large number of virtual machines while keeping good load balancing of the private cloud platform.
    Dandelion: Rapid deployment mechanism of cloud platform based on OpenStack
    LI Liyao, ZHAO Shaoka, WANG Ye, YANG Jiahai, XU Huarong
    2015, 35(11):  3070-3074.  DOI: 10.11772/j.issn.1001-9081.2015.11.3070
    A rapid and automatic deployment solution of cloud platform based on OpenStack, named Dandelion, was presented in order to improve OpenStack deployment efficiency. Firstly, the solution created image template files for different node types, replicated the image template by node type (such as network node or computing node), and, according to the properties of the nodes (such as IP address and hostname tag), automatically modified the configuration files using scripts to complete single-node deployment. The same strategy was then used to achieve rapid deployment of the other nodes. After that, the solution took advantage of the network services (PXE (Preboot eXecute Environment), DHCP (Dynamic Host Configuration Protocol) and TFTP (Trivial File Transfer Protocol)) provided by the management servers and mounted the image block file; finally, the nodes were started up to complete the Dandelion deployment. In addition, a performance evaluation model was established to determine the optimal number of image copies and storage servers in order to optimize the storage network topology. Compared with other deployment schemes such as Cobbler and NFS (Network File System), whether using the same size storage network to deploy different size cloud platforms, or using different size storage networks to deploy the same size cloud platform, the experimental results show that the proposed solution can greatly reduce deployment time and improve deployment efficiency.
    Parallelization of deformable part model algorithm based on graphics processing unit
    LIU Baoping, CHEN Qingkui, LI Jinjing, LIU Bocheng
    2015, 35(11):  3075-3078.  DOI: 10.11772/j.issn.1001-9081.2015.11.3075
    At present, in the field of target recognition, the Deformable Part Model (DPM) achieves the highest accuracy for human detection. Aiming at the disadvantage of its large amount of calculation, a parallel solution based on Graphics Processing Unit (GPU) was proposed. With the GPU programming model of OpenCL, the whole DPM algorithm was implemented with parallel methods, and the memory model and thread allocation were optimized. Compared with the OpenCV library implementation, under the premise of ensuring the detection effect, the execution efficiency of the algorithm was increased by nearly 8 times.
    Parallel algorithm of AES encryption based on MapReduce
    FU Yadan, YANG Geng, HU Chi, MIN Zhao'e
    2015, 35(11):  3079-3082.  DOI: 10.11772/j.issn.1001-9081.2015.11.3079
    In order to protect the privacy of consumers in cloud computing, encrypted data storage is a feasible way. To speed up the process of encryption and decryption, a parallel Advanced Encryption Standard (AES) encryption algorithm was proposed, taking the multi-node characteristic of cloud computing into account. The performance of the algorithm was analyzed in theory, and then experiments were conducted to demonstrate the efficiency of the designed algorithm. The experimental results show that the speed-up ratio of the proposed encryption scheme can reach 15.9, and the total time cost of AES encryption can be reduced by 72.7% on a cloud cluster consisting of 4 compute nodes with 16 CPUs in total.
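    The abstract does not detail the MapReduce partitioning, so the following only sketches the underlying idea that AES-CTR lets independent chunks be encrypted in parallel and concatenated, using Python's multiprocessing and the cryptography package as stand-ins for the cloud cluster:

```python
# Sketch: encrypting independent chunks of a message in parallel with
# AES-CTR. Chunk boundaries fall on 16-byte blocks so every worker can
# derive its own counter value and work independently (the map step);
# concatenating the ciphertext chunks plays the role of the reduce step.
import os
from multiprocessing import Pool
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes(range(32))        # fixed demo key so worker processes agree
CHUNK = 1 << 20               # 1 MiB chunks, a multiple of the 16-byte block

def encrypt_chunk(args):
    index, data = args
    # Initial counter block for this chunk is its offset in AES blocks,
    # so the parallel result equals one sequential AES-CTR pass.
    counter = (index * (CHUNK // 16)).to_bytes(16, "big")
    enc = Cipher(algorithms.AES(KEY), modes.CTR(counter)).encryptor()
    return enc.update(data) + enc.finalize()

if __name__ == "__main__":
    plaintext = os.urandom(4 * CHUNK + 12345)      # dummy payload
    chunks = [(i, plaintext[off:off + CHUNK])
              for i, off in enumerate(range(0, len(plaintext), CHUNK))]
    with Pool(4) as pool:                          # 4 workers ~ 4 nodes
        ciphertext = b"".join(pool.map(encrypt_chunk, chunks))
    print(len(ciphertext))
```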
    Analysis method of inclusion relations between firewall rules
    YIN Yi, WANG Yun
    2015, 35(11):  3083-3086.  DOI: 10.11772/j.issn.1001-9081.2015.11.3083
    It is difficult to understand all the relations between firewall rules, and poorly-organized rules may cause the firewall to filter packets incorrectly. In order to solve this problem, an analysis method of inclusion relations between firewall rules based on set theory was proposed. Based on the inclusion relations of set theory, the proposed method analyzed and classified the relations between firewall rules without considering the actions of the rules. The proposed method simplified the process of analyzing relations between firewall rules, and it was implemented using a functional programming language, Haskell. The Haskell code is concise, easy to maintain and easy to extend. The experimental results show that, for medium-scale rule sets, the proposed method can analyze the inclusion relations between firewall rules rapidly and effectively, and it provides an important basis for subsequent rule conflict detection.
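    The set-theoretic idea is easy to illustrate outside Haskell: model each rule's match set as a product of intervals and test inclusion field by field. A minimal Python sketch (field layout and rule values invented):

```python
# Sketch: a firewall rule as a product of closed intervals, one per
# header field; rule A is included in rule B iff every field interval
# of A lies inside the corresponding interval of B.
from dataclasses import dataclass

@dataclass
class Rule:
    src: tuple      # (low, high) source address range as integers
    dst: tuple      # (low, high) destination address range
    sport: tuple    # (low, high) source port range
    dport: tuple    # (low, high) destination port range

def included(a: Rule, b: Rule) -> bool:
    return all(bl <= al and ah <= bh
               for (al, ah), (bl, bh) in zip(
                   (a.src, a.dst, a.sport, a.dport),
                   (b.src, b.dst, b.sport, b.dport)))

r1 = Rule(src=(10, 20), dst=(0, 255), sport=(80, 80), dport=(0, 65535))
r2 = Rule(src=(0, 255), dst=(0, 255), sport=(0, 1023), dport=(0, 65535))
print(included(r1, r2))   # True: r1 matches a subset of r2's packets
```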
    Intrusion detection based on dendritic cell algorithm and twin support vector machine
    LIANG Hong, GE Yufei, CHEN Lin, WANG Wenjiao
    2015, 35(11):  3087-3091.  DOI: 10.11772/j.issn.1001-9081.2015.11.3087
    In order to solve the problem that network intrusion detection suffers from slow training, poor real-time processing and a high false positive rate when dealing with big data, a Dendritic Cell TWin Support Vector Machine (DCTWSVM) approach was proposed. The Dendritic Cell Algorithm (DCA) was first used for basic intrusion detection, and then the TWin Support Vector Machine (TWSVM) was applied to optimize the first-step detection outcome. Experiments were carried out to test the performance of the approach. The experimental results show that DCTWSVM improves the detection accuracy by 2.02%, 2.30%, and 5.44% compared with DCA, Support Vector Machine (SVM) and Back Propagation (BP) neural network respectively, and reduces the false positive rate by 0.26%, 0.46%, and 0.90%. The training speed is approximately twice that of SVM, and the short training time is another advantage. The results indicate that DCTWSVM is suitable for comprehensive intrusion detection environments and helpful for real-time intrusion detection.
    Incremental learning method for fault diagnosis in large-scale InfiniBand network
    HU Yinhui, CHEN Lin
    2015, 35(11):  3092-3096.  DOI: 10.11772/j.issn.1001-9081.2015.11.3092
    Aiming at effectively monitoring network abnormal events and finding performance bottlenecks and potential points of failure in large-scale data center networks, an incremental learning method of fault diagnosis for large-scale InfiniBand (IB) networks (IL_Bayes) was proposed, based on a deep analysis of the characteristics of IB networks and introducing a feature selection strategy and an incremental learning strategy. The method is based on Bayes classification with an added incremental learning mechanism, and it can effectively improve the accuracy of fault classification. The diagnostic accuracy and misdiagnosis rate of this method were tested and verified in the Tianhe-2 real network environment; the results show that the IL_Bayes method has higher classification accuracy and a lower misdiagnosis rate.
    Geographically distributed replication management based on HBase
    LI Yong, WU Lihui, HUANG Ning, WU Weigang
    2015, 35(11):  3097-3101.  DOI: 10.11772/j.issn.1001-9081.2015.11.3097
    Concerning the problem that data in distributed systems usually has many replicas among several datacenters and a robust mechanism is required to maintain data consistency, an algorithm of geographically distributed replication management was proposed after further research on the replication theory of distributed systems. Microsoft Research's Pileus system uses Service Level Agreements (SLA) to divide the consistency requirements of users into several levels, each associated with a tolerable delay, and the system ensures that users get higher service levels within the tolerable delay. The Tuba system extends Pileus: it can dynamically change the locations of primary and secondary replicas according to statistics sent by all users, so as to raise the average performance of the system; however, the replication of Tuba is carried out based on a single target unit. Improving on Tuba, a set of algorithms that independently change the locations of primary and secondary replicas was proposed, and the mechanism was implemented for replication among HBase distributed systems. The results show that taking the correlation between two regions into consideration when changing the locations of primary and secondary replicas can improve the overall utility of the system.
    Online abnormal event detection with spatio-temporal relationship in river networks
    MAO Yingchi, JIE Qing, CHEN Hao
    2015, 35(11):  3106-3111.  DOI: 10.11772/j.issn.1001-9081.2015.11.3106
    When a network abnormal event occurs, the spatial-temporal correlation of the sensor nodes is very obvious. Since existing methods generally treat temporal and spatial data properties separately, a decentralized algorithm of spatial-temporal abnormal detection based on Probabilistic Graphical Model (PGM) was proposed. Firstly, the Connected Dominating Set (CDS) algorithm was used to select part of the sensor nodes to avoid monitoring all sensor nodes; then Markov Chain (MC) was used to predict temporal abnormal events; at last, Bayesian Network (BN) was utilized to model the spatial dependency of the sensors, combining spatio-temporal events to predict whether an abnormal event would occur. Compared with the simple threshold algorithm and the BN algorithm, the experimental results demonstrate that the proposed algorithm has higher detection precision and a lower delay rate, greatly reducing the communication overhead and improving the response speed.
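    A minimal sketch of the Markov Chain step (invented data and thresholds; the CDS and Bayesian Network parts are omitted): discretize readings into levels, learn a transition matrix from history, and flag transitions whose learned probability is very low:

```python
# Sketch of the temporal step: discretize sensor readings into levels,
# learn a Markov transition matrix from history, then flag a new
# transition as abnormal when its learned probability is very low.
import numpy as np

LEVELS = 5

def discretize(x, lo, hi):
    return np.clip(((x - lo) / (hi - lo) * LEVELS).astype(int), 0, LEVELS - 1)

history = np.sin(np.linspace(0, 20, 500)) + 0.05 * np.random.randn(500)
s = discretize(history, -1.5, 1.5)

T = np.ones((LEVELS, LEVELS))              # Laplace-smoothed counts
for a, b in zip(s[:-1], s[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)          # row-normalize to probabilities

def abnormal(prev_level, cur_level, eps=0.02):
    return T[prev_level, cur_level] < eps

print(abnormal(0, LEVELS - 1))             # a jump across all levels
```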
    Improved Kalman algorithm for abnormal data detection based on multidimensional impact factors
    HUA Qing, XU Guoyan, ZHANG Ye
    2015, 35(11):  3112-3115.  DOI: 10.11772/j.issn.1001-9081.2015.11.3112
    With the widespread application of data flows, the abnormal data detection problem in data flows has attracted more attention. Existing Kalman filtering algorithms need only a small amount of historical data, but they only apply to single abnormal point detection, and their effect on complex continuous outlier points is poor. In order to solve this problem, a Kalman filtering algorithm based on multidimensional impact factors was proposed. The algorithm incorporated impact factors of three dimensions: space, time and provenance. For different weather and flood seasons, the algorithm adjusted the control parameters of the system model and obtained a more accurate estimate of the measurement noise, so the detection accuracy could be improved significantly. The experimental results show that, under the premise of similar running time, the detection error rate of this algorithm is far lower than that of the Amnesic Kalman Filtering (AKF) and Wavelet Kalman Filtering (WKF) algorithms.
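    As a hedged illustration of the core mechanism, here is a scalar Kalman filter whose measurement-noise variance is rescaled by a context factor (standing in for the paper's space, time and provenance factors; all values are invented):

```python
# Sketch: a scalar Kalman filter whose measurement-noise variance R is
# rescaled by a context factor (e.g., larger in flood season), standing
# in for the paper's multidimensional impact factors.
def kalman_step(x, p, z, q=1e-4, r_base=0.25, season_factor=1.0):
    r = r_base * season_factor      # context-adjusted measurement noise
    # predict (random-walk model)
    p = p + q
    # update
    k = p / (p + r)                 # Kalman gain
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p, abs(z - x)         # residual can feed outlier detection

x, p = 0.0, 1.0
for z in [0.1, 0.12, 0.11, 3.0, 0.13]:      # 3.0 is a planted outlier
    x, p, resid = kalman_step(x, p, z, season_factor=2.0)
    print(f"estimate={x:.3f} residual={resid:.3f}")
```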
    New classification method based on neighborhood relation fuzzy rough set
    HU Xuewei, JIANG Yun, LI Zhilei, SHEN Jian, HUA Fengliang
    2015, 35(11):  3116-3121.  DOI: 10.11772/j.issn.1001-9081.2015.11.3116
    Since fuzzy rough sets induced by fuzzy equivalence relations cannot accurately reflect decision problems described by numerical attributes in a fuzzy concept domain, a fuzzy rough set model based on neighborhood relations, called NR-FRS, was proposed. First of all, the definitions of the rough set model were presented. Based on the properties of NR-FRS, fuzzy neighborhood approximation space reasoning was carried out, and attribute dependency in the characteristic subspace was analyzed. Finally, a feature selection algorithm based on NR-FRS was presented, which constructed feature subsets that make the fuzzy positive region greater than a specific threshold, thereby getting rid of redundant features and reserving attributes with strong classification capability. Classification experiments using the Radial Basis Function (RBF) support vector machine as the classifier were implemented on UCI standard data sets. The experimental results show that, compared with fast forward feature selection based on neighborhood rough sets as well as Kernel Principal Component Analysis (KPCA), the feature number of the subset obtained by the NR-FRS feature selection algorithm changes more smoothly and stably with the parameters, while the average classification accuracy increases by 5.2% in the best case and varies stably with the parameters.
    Improved MIMLBoost algorithm based on importance evaluation of labels
    HAO Ning, XIA Shixiong, NIU Qiang, ZHAO Zhijun
    2015, 35(11):  3122-3125.  DOI: 10.11772/j.issn.1001-9081.2015.11.3122
    In order to solve the class imbalance problem caused by the original degradation method in the MIMLBoost algorithm, the importance of classes was introduced into the original algorithm and an improved degradation method based on label importance evaluation was proposed. First, a clustering algorithm was used to cluster all bags into groups; each group could be treated as a concept of the multi-instance bags, and every class label could be quantified in each group. Then, the TF-IDF (Term Frequency-Inverse Document Frequency) algorithm was used to get the importance of each label in each group. Finally, for each group, the label with the lowest importance in the group was removed, because such a label easily creates many negative samples when the MIML (Multi-Instance Multi-Label) samples are transformed into multi-instance samples. The experimental results show that the new degradation method is effective and the improved algorithm performs better than the original one, especially in terms of hamming loss, coverage and ranking loss. This confirms that the new algorithm can reduce the classification error rate and improve precision effectively.
    Improved teaching-learning-based optimization algorithm based on K-means
    HUANG Xiangdong, XIA Shixiong, NIU Qiang, ZHAO Zhijun
    2015, 35(11):  3126-3129.  DOI: 10.11772/j.issn.1001-9081.2015.11.3126
    For complex multimodal optimization problems, the traditional Teaching-Learning-Based Optimization (TLBO) algorithm easily falls into local search and has low optimization efficiency. In order to solve this problem, an improved TLBO algorithm based on K-means was proposed. It used K-means to divide the population into small subpopulations to reduce the population size, and correspondingly improved the "teaching" and "learning" stages to improve the speed of global convergence. At the same time, a "mutation" operation was added to avoid local optima. In the experiments, seven unimodal and two multimodal optimization problems were optimized by the proposed algorithm, and the optimization results were compared with those of the grenade explosion method and the traditional TLBO algorithm. The experimental results show that the improved algorithm can quickly and efficiently find the globally optimal solution on both unimodal and multimodal functions, and that it is better than the traditional TLBO algorithm in the ability of searching for globally optimal solutions.
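    A minimal sketch of the combination (synthetic objective and invented settings; not the authors' implementation): K-means splits the population, then one TLBO "teacher" iteration runs inside each subpopulation:

```python
# Sketch: one TLBO "teacher" iteration run inside each K-means
# subpopulation, on the sphere function; all settings are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def sphere(X):                       # objective to minimize
    return (X ** 2).sum(axis=1)

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(60, 10))
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(pop)

for k in range(4):
    group = pop[labels == k]
    fit = sphere(group)
    teacher = group[fit.argmin()]                # best learner teaches
    mean = group.mean(axis=0)
    tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
    candidate = group + rng.random(group.shape) * (teacher - tf * mean)
    better = sphere(candidate) < fit             # greedy acceptance
    group[better] = candidate[better]
    pop[labels == k] = group
print("best fitness:", sphere(pop).min())
```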
    Novel text clustering approach based on R-Grams
    WANG Xianming, GU Qiong, HU Zhiwen
    2015, 35(11):  3130-3134.  DOI: 10.11772/j.issn.1001-9081.2015.11.3130
    Focusing on the issue that the clustering accuracy rate and recall rate are difficult to balance in traditional text clustering algorithms, a clustering approach based on the R-Grams text similarity computing algorithm was proposed. Firstly, the documents to be clustered were sorted in descending order; secondly, the symbolic documents were identified and initial clustering results were achieved by using an R-Grams-based similarity computing algorithm; finally, the final clustering results were obtained by merging the initial clusters. The experimental results show that the proposed approach can flexibly regulate the clustering results by adjusting the clustering threshold to satisfy different demands, and the optimal threshold is about 15. With the increase of the clustering threshold, the clustering accuracy increases, while the recall increases at first and then decreases. In addition, the approach is free from time-consuming processing procedures such as word segmentation and feature extraction, and can be easily implemented.
    Indoor and outdoor scene recognition algorithm based on support vector machine multi-classifier
    RUAN Jinjia, LUO Dan, LUO Haiyong
    2015, 35(11):  3135-3138.  DOI: 10.11772/j.issn.1001-9081.2015.11.3135
    Considering the need for low-power continuous indoor and outdoor scene perception in complex and dynamic environments, a lightweight indoor and outdoor scene identification algorithm based on Support Vector Machine (SVM) multi-classifiers was proposed, which can accurately distinguish indoor and outdoor scenes with low power consumption. The algorithm adopted data mining methods to obtain the different characteristics of indoor and outdoor scenes from the sensors integrated in smart phones (such as visible light sensors, magnetic sensors, acceleration sensors, gyro sensors and pressure sensors), and also took advantage of the differences in human behavior between indoor and outdoor scenes. According to different time and weather conditions, SVM multi-classifiers were designed to identify complex indoor and outdoor scenes based on these behavioral differences. The simulation results show that the proposed algorithm has good universality, can determine indoor and outdoor scenes with more than 95% accuracy, and consumes less than 5 mW of average power.
    Improved heterogeneous earliest finish time scheduling algorithm with multi-dimensional quality of service for data stream processing tasks in Internet of vehicles
    LI Huiyong, CHEN Yixiang
    2015, 35(11):  3139-3145.  DOI: 10.11772/j.issn.1001-9081.2015.11.3139
    In order to solve the scheduling problem of distributed data stream processing in the Internet of vehicles, two improved Heterogeneous Earliest Finish Time (HEFT) scheduling algorithms for multi-dimensional QoS (Quality of Service) were proposed. Firstly, the weighted directed acyclic graph model of distributed data stream processing tasks and the seven-dimensional QoS-attribute weighted undirected topology model of the distributed computing resources of the Internet of vehicles were established. Secondly, the list-building method of the classic HEFT scheduling algorithm was improved to one based on the priority of highest-level, minimum-successor tasks. Meanwhile, the seven-dimensional QoS attributes of computing resources in the Internet of vehicles were grouped and reduced into two-dimensional composite attributes: the computing priority and the communicating priority. Based on this improvement, two different improved HEFT scheduling algorithms reflecting users' preferences for multi-dimensional QoS were proposed. Finally, the analysis of an example shows that the overall performance of these two multi-dimensional QoS improved HEFT scheduling algorithms is better than that of the classical HEFT scheduling algorithm and the round-robin scheduling algorithm.
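    For reference, the upward-rank computation that classic HEFT uses for task ordering can be sketched as follows (the DAG and costs are invented, and the paper's composite QoS priorities are not reproduced):

```python
# Sketch: the classic HEFT upward rank that drives task ordering; the
# DAG, computation costs and communication costs are made up.
from functools import lru_cache

succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
w    = {"A": 10, "B": 8, "C": 12, "D": 6}          # mean computation cost
c    = {("A", "B"): 4, ("A", "C"): 3, ("B", "D"): 5, ("C", "D"): 2}

@lru_cache(maxsize=None)
def rank_u(t):
    # rank_u(t) = w(t) + max over successors s of (c(t,s) + rank_u(s))
    if not succ[t]:
        return w[t]
    return w[t] + max(c[(t, s)] + rank_u(s) for s in succ[t])

order = sorted(succ, key=rank_u, reverse=True)      # schedule priority
print([(t, rank_u(t)) for t in order])
```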
    Group trip planning queries on road networks
    ZHU Haiquan, LI Wengen, ZHANG Yichao, GUAN Jihong
    2015, 35(11):  3146-3150.  DOI: 10.11772/j.issn.1001-9081.2015.11.3146
    Group Trip Planning (GTP) queries aim at finding common activity sites (usually expressed as Points of Interest (PoI)) for a group of users, in order to minimize the total travel cost. Existing researches on GTP queries are limited to Euclidean space; however, real travel is restricted by road networks. Motivated by this observation, two algorithms (NE-GTP and ER-GTP) were designed to solve GTP queries. NE-GTP expanded the network around every user's location to iteratively find the PoIs, while ER-GTP used an R-tree index and Euclidean distance to quickly get the results. The experimental results show that ER-GTP is on average an order of magnitude faster than NE-GTP in processing time. In addition, when the dataset becomes large, ER-GTP also has good scalability.
    Short-term traffic flow prediction algorithm based on orthogonal differential evolution unscented Kalman filter
    YUAN Lei, LIANG Dingwen, CAI Zhihua, WU Zhao, GU Qiong
    2015, 35(11):  3151-3156.  DOI: 10.11772/j.issn.1001-9081.2015.11.3151
    A state-space model based on macroscopic traffic flow forecasting was established for the short-term traffic flow prediction problem under complex road conditions. In order to solve the problem of parameter optimization of the dynamic traffic forecast model, a method to improve the performance of the Unscented Kalman Filter (UKF) with orthogonal adaptive Differential Evolution (DE) was proposed. The orthogonal method maximized the diversity of the initial population of the DE algorithm, and the crossover operator of DE was optimized by the orthogonal method and quantification technology to balance exploitation and exploration, which was more beneficial for finding the model parameters of the UKF. The experimental results show that, compared with initializing the parameters with a random distribution or setting model parameters based on experience, using the orthogonal design method for the initialization strategy, mutation operator and adaptive parameter control strategy of differential evolution can effectively save computing resources, improve forecasting performance and accuracy, and provide better robustness.
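    The orthogonal initialization and adaptive control are specific to the paper, but the DE/rand/1/bin core being tuned can be sketched generically (a stand-in objective replaces the UKF prediction error; all settings are invented):

```python
# Sketch: the DE/rand/1/bin loop that would search UKF noise parameters;
# here it just minimizes a stand-in quadratic objective.
import numpy as np

def objective(theta):               # stand-in for UKF prediction error
    return ((theta - np.array([0.3, 1.7])) ** 2).sum()

rng = np.random.default_rng(2)
NP, D, F, CR = 20, 2, 0.5, 0.9
pop = rng.uniform(0, 3, size=(NP, D))
fit = np.apply_along_axis(objective, 1, pop)

for _ in range(100):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i],
                                 size=3, replace=False)]
        mutant = a + F * (b - c)                 # DE/rand/1 mutation
        cross = rng.random(D) < CR               # binomial crossover
        cross[rng.integers(D)] = True            # keep at least one gene
        trial = np.where(cross, mutant, pop[i])
        f = objective(trial)
        if f < fit[i]:                           # greedy selection
            pop[i], fit[i] = trial, f
print("best:", pop[fit.argmin()], fit.min())
```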
    Fall detection algorithm based on random forest
    LUO Dan, LUO Haiyong
    2015, 35(11):  3157-3160.  DOI: 10.11772/j.issn.1001-9081.2015.11.3157
    To handle the overfitting and inadaptability problems of current fall detection algorithms, caused by the lack of real fall samples of elderly people and the use of small fall sample sets collected from young people, a fall detection algorithm based on random forest was proposed. By adopting a sliding window mechanism, the sequentially collected acceleration data within the window were first processed to extract feature parameters in the time domain and frequency domain; then the Bootstrap approach was employed to randomly select the same number of samples from the whole training sample collection, and random feature selection was performed to construct a collection of basic SVM (Support Vector Machine) classifiers with the best feature partitions. In the online fall detection stage, the final classification result was obtained by majority vote over the results of the multiple basic SVM classifiers. The experimental results demonstrate that the proposed algorithm outperforms the SVM and BP (Back Propagation) neural network algorithms with 95.2% accuracy, 90.6% sensitivity and 93.5% specificity, which reflects that the fall detection algorithm based on random forest can accurately recognize fall activities and has strong generalization ability and robustness.
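    A minimal sketch of the ensemble construction (synthetic data and invented sizes, not real fall recordings): bootstrap samples plus random feature subsets train SVM base classifiers that vote by majority:

```python
# Sketch: a random-forest-style ensemble whose base learners are SVMs,
# built with bootstrap samples and random feature subsets, then combined
# by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
rng = np.random.default_rng(0)
models = []
for _ in range(15):
    rows = rng.integers(0, len(X), size=len(X))           # bootstrap sample
    cols = rng.choice(X.shape[1], size=8, replace=False)  # feature subset
    clf = SVC().fit(X[rows][:, cols], y[rows])
    models.append((clf, cols))

def predict(x):
    votes = [clf.predict(x[:, cols]) for clf, cols in models]
    return (np.mean(votes, axis=0) > 0.5).astype(int)     # majority rule

print("train accuracy:", (predict(X) == y).mean())
```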
    Sparse trajectory prediction method based on iterative grid partition and entropy estimation
    LIU Leijun, ZHU Meng, ZHANG Lei
    2015, 35(11):  3161-3165.  DOI: 10.11772/j.issn.1001-9081.2015.11.3161
    Concerning the "data sparsity" problem of moving object's trajectory prediction, i.e., the available historical trajectories are far from enough to cover all possible query trajectories that can obtain predicted destinations, a Trajectory Prediction Algorithm suffer from Data Sparsity based on Iterate Grid Partition and Entropy Estimation (TPDS-IGP&EE) was proposed. Firstly, the moving region of trajectories was iteratively divided into a two-dimensional plane grid graph, and then the original trajectories were mapped to the grid graph so that each trajectory could be represented as a grid sequence. Secondly, an L-Z entropy estimator was used to calculate the entropy value of trajectory sequence, and a new trajectory space was generated by doing trajectory synthesis based on trajectory entropy. At last combining with the Sub-Trajectory Synthesis (SubSyn) algorithm, sparse trajectory prediction was implemented. The experimental results show when trajectory completed percentage increases towards 90%, the coverage of the Baseline algorithm decreases to almost 25%. TPDS-IGP&EE algorithm successfully coped with it as expected with only an unnoticeable drop in coverage, and could constantly answer almost 100% of query trajectories. And TPDS-IGP&EE algorithm's prediction accuracy was generally 4% higher than Baseline algorithm. At the same time, the prediction time of Baseline algorithm to 100 ms was too long, while the prediction time of TPDS-IGP&EE algorithm could be negligible (10 μs). TPDS-IGP&EE algorithm can make an effective prediction for the sparse trajectory, providing much wider predicting range, faster predicting speed and better predicting accuracy.
    Subjective trust model based on consumers' risk attitude
    XU Jun, ZHONG Yuansheng
    2015, 35(11):  3166-3171.  DOI: 10.11772/j.issn.1001-9081.2015.11.3166
    Aiming at the problem that existing evaluation methods do not take consumers' risk attitude into account, a subjective trust model based on consumers' risk attitude was proposed. Firstly, the historical information of entity evaluation attributes was converted into interval numbers by using set-valued statistics theory. Then, by introducing a risk attitude factor, the interval evaluation matrix was transformed into an evaluation matrix with risk attitude. Subsequently, the trust level of the entity was calculated by using the idea of relative closeness. Finally, the simulation results verify that the proposed method can make better trust decisions by considering the risk attitude of consumers, and an anti-fraud simulation experiment further confirms the feasibility of the subjective trust model.
    Fork/Join-oriented software refactoring and performance analysis
    ZHANG Dongwen, LIU Chenguang, ZHANG Yang
    2015, 35(11):  3172-3177.  DOI: 10.11772/j.issn.1001-9081.2015.11.3172
    There are few works on the application and analysis of the Fork/Join framework at present. Several benchmarks in the Java Grande Forum (JGF) benchmark suite, including series, crypt, sparsematmult and sor, were refactored using the Fork/Join framework, and the detailed refactoring process was presented with the series benchmark as an example. In the experiments, the execution time of each benchmark was evaluated with different thresholds, the execution time of the Fork/Join version was compared with that of the multi-threaded version, and the number of work-stealing operations was reported. The experimental results show that the execution time using the Fork/Join framework is 14.2% lower on average than that using multi-threading; when evaluating the series benchmark with data size sizeC and two threads, the execution time of the Fork/Join version is 40% lower than that of the multi-threaded version. Programs using the Fork/Join framework can get better performance than those using multi-threading.
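    The threshold idea is language-independent; here is a Python stand-in for the Java Fork/Join pattern (the paper uses ForkJoinPool; the depth cap below is an assumption to keep a fixed thread pool deadlock-free, something Java's work-stealing pool handles natively):

```python
# Sketch: the threshold-controlled divide-and-conquer behind the
# Fork/Join refactoring, as a Python stand-in for Java's ForkJoinPool.
from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 10_000

def fork_join_sum(data, pool, depth=2):
    # Fork only while depth allows, so blocked parents never exhaust the
    # fixed pool (a real ForkJoinPool avoids this via work-stealing).
    if depth == 0 or len(data) <= THRESHOLD:
        return sum(data)
    mid = len(data) // 2
    left = pool.submit(fork_join_sum, data[:mid], pool, depth - 1)  # fork
    right = fork_join_sum(data[mid:], pool, depth - 1)  # compute in place
    return left.result() + right                        # join

with ThreadPoolExecutor(max_workers=4) as pool:
    print(fork_join_sum(list(range(1_000_000)), pool))  # 499999500000
```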
    Parallel processing of nephogram display based on Visualization ToolKit and Message Passing Interface
    LIU Weihui, TANG Peng, SONG Anping, LIU Zhixiang, XU Lei, ZHANG Wu
    2015, 35(11):  3178-3181.  DOI: 10.11772/j.issn.1001-9081.2015.11.3178
    The visualization pipeline mechanism and the basic structure of parallel programs were discussed based on the characteristics of the Visualization ToolKit (VTK). For the problem of visualization post-processing in computational fluid dynamics, the VTK color mapping algorithm was introduced and a program for showing nephograms was written. In order to reduce the running time, a parallel algorithm was proposed, which made full use of the parallelism between VTK tasks, reduced the program running time and improved the running efficiency. Finally, the speedup ratios for files of different sizes were compared and analyzed. The results show that the requirements of visualization post-processing are satisfied by the visualization technology based on VTK, and a good parallel effect is obtained with the Message Passing Interface (MPI).
    Improvement of WordNet application programming interface and its application in Mashup services discovery
    ZENG Cheng, TANG Yong, ZHU Zilong, LI Bing
    2015, 35(11):  3182-3186.  DOI: 10.11772/j.issn.1001-9081.2015.11.3182
    The traditional WordNet Application Programming Interface (API) is based on file operations, so each execution of an API call on the WordNet library leads to serious time consumption in text analysis and similarity calculation. Therefore, an improved solution for the WordNet API was proposed: the semantic net of WordNet concepts was constructed in computer memory, and several APIs convenient for calculating similarity were added. These improvements accelerate the process of tracking the relationships between concepts and calculating text similarity. The solution was applied in the process of Mashup service discovery. The experimental results show that the improved API can effectively improve the query efficiency and the recall of Mashup service discovery.
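    The same keep-it-in-memory idea can be illustrated with NLTK's WordNet interface (not the API the paper improves): load the relations once and cache synsets for repeated similarity queries:

```python
# Sketch: caching WordNet lookups in memory so repeated similarity
# queries avoid re-reading the lexical database.
from functools import lru_cache
from nltk.corpus import wordnet as wn   # needs: nltk.download("wordnet")

@lru_cache(maxsize=None)
def synsets(word):
    return tuple(wn.synsets(word, pos=wn.NOUN))

def max_similarity(w1, w2):
    best = 0.0
    for s1 in synsets(w1):
        for s2 in synsets(w2):
            sim = s1.path_similarity(s2) or 0.0   # None when unrelated
            best = max(best, sim)
    return best

print(max_similarity("car", "automobile"))   # 1.0: same synset
print(max_similarity("car", "banana"))       # much lower
```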
    Binary probability segmentation of video based on graphics processing unit
    LI Jinjing, CHEN Qingkui, LIU Baoping, LIU Bocheng
    2015, 35(11):  3187-3193.  DOI: 10.11772/j.issn.1001-9081.2015.11.3187
    Since the segmentation performance of existing binary segmentation algorithms for video is excessively low, a real-time binary probability segmentation algorithm based on Graphics Processing Unit (GPU) was proposed. The algorithm implemented a probabilistic segmentation based on the Quadratic Markov Measure Field (QMMF) model by regularizing the likelihood of each pixel of a frame belonging to the foreground class or the background class. First, two kinds of likelihood models, the Static Background Likelihood Model (SBLM) and the Unstable Background Likelihood Model (UBLM), were proposed. Secondly, the probability of each pixel belonging to the background was computed by tonal transforming, cast shadow detecting and camouflage detecting algorithms. Finally, the background probability that minimizes the energy function was computed by Gauss-Seidel iteration and the binary value of each pixel was calculated. Moreover, illumination change, cast shadow and camouflage were handled to improve the accuracy of the segmentation algorithm. In order to fulfill the real-time requirement, a parallel version of the algorithm was implemented on an NVIDIA GPU. The accuracy and GPU execution time of the segmentation algorithm were analyzed. The experimental results show that the average missing rate and false detection rate of ViBe+ and GMM+ are 3 and 6 times those of QMMF, the average GPU execution time of ViBe+ and GMM+ is about 1.3 times that of QMMF, and the average speedup of the algorithm is about 76.8.
    Texture image restoration model based on combined subdivision
    WAN Jinliang, WANG Jian
    2015, 35(11):  3194-3197.  DOI: 10.11772/j.issn.1001-9081.2015.11.3194
    A texture image restoration model based on combined subdivision was proposed, which can solve the problems of piecewise iterative curve fitting, especially the discontinuous contour and the size error of reconstructed regions. Firstly, the segmentation regions of the original image were extracted, and the feature vector of region shape was obtained by using contour tracing and downsampling. Then the region contour curve was reconstructed by combining ternary approximating and interpolating subdivision schemes. Finally, the region texture was synthesized to get the texture image restoration result. The proposed model was tested on many natural images. The experimental results show that the proposed model is valid and the restoration results are consistent with the human visual system. The proposed algorithm has lower time complexity of image restoration, and its performance in subjective image quality assessment is much better than that of the multi-region image reconstruction algorithm.
    Blind image forensics based on JPEG double quantization effect
    DUAN Xintao, PENG Tao, LI Feifei, WANG Jingjuan
    2015, 35(11):  3198-3202.  DOI: 10.11772/j.issn.1001-9081.2015.11.3198
    The double quantization effect of JPEG (Joint Photographic Experts Group) compression provides important clues for detecting image tampering. When an original JPEG image undergoes localized tampering and is saved again in JPEG format, the Discrete Cosine Transform (DCT) coefficients of untampered regions undergo double JPEG compression, while the DCT coefficients of tampered regions undergo only a single compression. The Alternating Current (AC) coefficient distribution accords with a Laplace probability density distribution with a suitable parameter. On this basis, a new double compression probability model of JPEG images was proposed to describe the change of DCT coefficients after double compression, and the Bayes criterion was combined to express the eigenvalues of image blocks that have undergone single and double JPEG compression. A threshold was set for the eigenvalues, and the tampered region was automatically detected and extracted by using the threshold to classify the eigenvalues. The experimental results show that the method can detect and locate the tampered area effectively, and it outperforms the blind detection algorithm of composite images that measures inconsistencies of JPEG blocking artifacts as well as the image forgery detection algorithm based on quantization tables, especially when the second compression factor is smaller than the first one.
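    A hedged illustration of the statistical building block, fitting a Laplace density to the AC coefficients of one DCT frequency, on toy data rather than real quantized JPEG coefficients:

```python
# Sketch: fitting a Laplace density to AC coefficients of one DCT
# frequency, the building block of the double-compression model; real
# detectors work on quantized JPEG coefficients, this uses toy data.
import numpy as np
from scipy.stats import laplace

ac = np.random.default_rng(4).laplace(0, 3.0, size=5000)  # toy AC coeffs
loc, scale = laplace.fit(ac)                # ML estimate of the parameter
print(f"fitted scale b = {scale:.2f}")
loglik = laplace.logpdf(ac, loc, scale).sum()
print(f"log-likelihood = {loglik:.1f}")
```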
    Discovery method of travelling companions based on big data of license plate recognition
    CAO Bo, HAN Yanbo, WANG Guiling
    2015, 35(11):  3203-3207.  DOI: 10.11772/j.issn.1001-9081.2015.11.3203
    Discovering travelling companions by processing and analyzing license plate recognition big data has become widely used in many applications such as involved-vehicle tracking. However, discovery algorithms for travelling companions perform poorly in single machine mode, in terms of both time and space. To solve this problem, a discovery method of travelling companions named FP-DTC was proposed. The method parallelized the FP-Growth algorithm with the distributed processing framework Spark, and made some improvements and optimizations to discover travelling companions more efficiently. The experimental results show that the method performs well on the discovery of travelling companions, and achieves a speedup of nearly four times over the same algorithm on Hadoop.
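    A minimal sketch of the Spark side (toy records and thresholds; the paper's FP-DTC improvements are not reproduced): each transaction is the set of plates seen at one checkpoint in one time window, and frequent itemsets of size two or more are companion candidates:

```python
# Sketch: mining travelling-companion candidates with Spark's built-in
# FP-Growth over plate-recognition "transactions".
from pyspark.sql import SparkSession
from pyspark.ml.fpm import FPGrowth

spark = SparkSession.builder.appName("fp-dtc-sketch").getOrCreate()
df = spark.createDataFrame([
    (0, ["A123", "B456", "C789"]),       # checkpoint 1, window 08:00-08:05
    (1, ["A123", "B456"]),               # checkpoint 2, window 08:10-08:15
    (2, ["A123", "B456", "D000"]),       # checkpoint 3, window 08:20-08:25
], ["id", "items"])

fp = FPGrowth(itemsCol="items", minSupport=0.6, minConfidence=0.8)
model = fp.fit(df)
model.freqItemsets.filter("size(items) >= 2").show()   # companion groups
spark.stop()
```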
    CRSSC 2015 Paper
    Rough set based matrix method for dynamically changing covering families
    LIN Yidong, ZHANG Yanlan, LIN Menglei
    2015, 35(11):  3208-3212.  DOI: 10.11772/j.issn.1001-9081.2015.11.3208
    To calculate upper and lower approximations effectively and quickly under covering variation in covering information systems, a relation matrix was defined by using the concept of the characteristic function, and the expressions for the approximations, positive, boundary and negative regions were presented intuitively from the view of matrices. Furthermore, the matrix idea was used to research the approaches for incrementally updating the approximations of sets when the number of coverings changes dynamically. The investigations enrich and improve the covering rough set based dynamic learning theory and provide a method for dynamic knowledge updating in covering information systems.
    Vertical union algorithm of interval concept lattices
    ZHANG Ru, ZHANG Chunying, WANG Liya, LIU Baoxiang
    2015, 35(11):  3213-3217.  DOI: 10.11772/j.issn.1001-9081.2015.11.3213
    To solve the practical problem that some rules may be lost when association rules are extracted directly after the construction of interval concept lattices for different formal contexts, the different interval concept lattices must be merged first. To improve the efficiency of lattice generation and consolidation, the incremental generation algorithm of the interval concept lattice was first improved, and the concepts were stored in the form of structures divided into existence concepts, redundancy concepts and empty concepts. Secondly, the binary relation between extension and intension was analyzed, and the sufficient condition of vertical merger, the consistency of interval concept lattices, was defined. Thirdly, the concepts with consistent intensions were divided into six kinds after merging and the corresponding decision theorems were given. In the end, based on the breadth-first principle, a new vertical integration algorithm was designed through type judgment and different processing methods for the concept lattice nodes in the original interval concept lattices. Finally, an application example verified the effectiveness and efficiency of the algorithm.
    Multi-label learning with label-specific feature reduction
    XU Suping, YANG Xibei, QI Yunsong
    2015, 35(11):  3218-3221.  DOI: 10.11772/j.issn.1001-9081.2015.11.3218
    In multi-label learning, since different labels may have their own characteristics, the multi-label learning approach with label-specific features named LIFT has been proposed. However, the construction of label-specific features may increase the dimension of the feature vector, which brings redundant information into the feature space. To solve this problem, a multi-label learning approach named FRS-LIFT was presented, which implements label-specific feature reduction with fuzzy rough sets. FRS-LIFT contains four steps: construction of label-specific features, reduction of feature dimensionality, training of classification models and prediction of unknown samples. The experimental results on 5 multi-label datasets show that, compared with LIFT, the proposed method can not only reduce the dimension of label-specific features, but also achieve satisfactory performance on 5 evaluation metrics.
    Research and application of dynamic rule extraction algorithm based on rough set and decision tree
    CHEN Lifang, WANG Yun, ZHANG Feng
    2015, 35(11):  3222-3226.  DOI: 10.11772/j.issn.1001-9081.2015.11.3222
    For the shortcomings of static algorithms in processing big data and incremental data, a dynamic rule extraction algorithm based on rough set and decision tree was constructed to diagnose rotating machinery faults. Through the combination of rough set and decision tree, sample selection was carried out by incremental sampling. Through a four-step loop iteration process of dynamic reduction, decision tree construction, rule extraction and selection, and matching, dynamic rule extraction was achieved, which improved the credibility of the extracted rules. Meanwhile, the effectiveness of the algorithm was verified by applying it to a dynamic problem: rotating machinery fault diagnosis. Finally, the efficiency of the algorithm was compared with that of a static algorithm and an incremental dynamic algorithm. The results demonstrate that the proposed algorithm can obtain more implied information in the most streamlined way.
    Multi-dimensional fuzzy clustering image segmentation algorithm based on kernel metric and local information
    WANG Shaohua, DI Lan, LIANG Jiuzhen
    2015, 35(11):  3227-3231.  DOI: 10.11772/j.issn.1001-9081.2015.11.3227
    Abstract ( )   PDF (1025KB) ( )  
    References | Related Articles | Metrics
    In image segmentation based on clustering analysis, spatial constraints are imposed to suppress noise while preserving details. Based on the Fuzzy C-Means (FCM) method, a multi-dimensional fuzzy clustering image segmentation algorithm based on a kernel metric and local information was proposed to balance noise suppression and detail preservation. In the algorithm, two extra images carrying local information, derived from the original through a smoothing and a sharpening filter respectively, were introduced to construct a multi-dimensional gray-level vector replacing the original one-dimensional gray level. A kernel method was then employed to strengthen robustness, and a penalty term representing the diversity between a local pixel and its neighbors was added to the objective function to further improve anti-noise ability. Compared with NNcut (Nystrom Normalized cut) and FLICM (Fuzzy Local Information C-Means), its segmentation accuracy reached almost 99%. The experimental results on natural and medical images, together with parameter tuning, demonstrate its flexibility and robustness when dealing with noise and details.
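    A minimal sketch of the two ingredients the abstract describes, assuming a Gaussian kernel and 0-255 gray levels: a three-dimensional per-pixel feature (original, smoothed, sharpened) and a kernel FCM loop. The local-diversity penalty term is omitted for brevity, and sigma is an illustrative choice.

```python
import numpy as np
from scipy import ndimage

def multi_dim_features(img):
    """Stack original, smoothed and sharpened versions of a gray image
    into a 3-D feature vector per pixel, as the abstract describes."""
    img = img.astype(float)
    smooth = ndimage.uniform_filter(img, size=3)
    sharp = img + (img - ndimage.gaussian_filter(img, sigma=1.0))  # unsharp mask
    return np.stack([img, smooth, sharp], axis=-1).reshape(-1, 3)

def kernel_fcm(X, c=3, m=2.0, sigma=150.0, iters=50):
    """Minimal kernel FCM with a Gaussian kernel; the paper's extra
    local-diversity penalty term is not included in this sketch."""
    rng = np.random.default_rng(0)
    V = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        K = np.exp(-((X[:, None, :] - V[None]) ** 2).sum(-1) / sigma ** 2)
        d = np.clip(1.0 - K, 1e-10, None)         # kernel-induced distance
        U = (1.0 / d) ** (1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)         # fuzzy memberships
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]    # kernel-weighted centers
    return U.argmax(axis=1), V
```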
    Lossless compression method of industrial remote monitoring data based on improved floating point data compression
    QIU Jie, LIANG Jiuzhen, WU Qin, WANG Peibin
    2015, 35(11):  3232-3237.  DOI: 10.11772/j.issn.1001-9081.2015.11.3232
    Abstract ( )   PDF (927KB) ( )  
    References | Related Articles | Metrics
    In order to solve the problem that transmitting large amounts of industrial remote monitoring data leads to transmission delay on General Packet Radio Service (GPRS) networks, a lossless compression method of industrial remote monitoring data based on improved FPC (Floating Point data Compression) was proposed. First of all, according to the characteristics of industrial remote monitoring data, the structure of the predictor in the original FPC algorithm was improved, and the predictor was then combined with a range coder to compress the entire monitoring stream. The experimental results show that the improved FPC predicts more accurately than the original, and the compression ratio is enhanced while compression remains fast. Comparison experiments against general lossless compression algorithms show that the average compression ratio is increased by more than 12% and the average compression time is decreased by more than 38.5%, so that the transmission time is decreased by more than 23.7%. The method can improve real-time monitoring performance when the network transmission rate is very low and the volume of transmitted data is very large.
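    For orientation, a simplified sketch of the FPC idea: a finite-context hash predictor guesses each double, and the XOR residual (rich in leading zero bytes when prediction is good) is what a range coder would then compress. Table size and hash are illustrative choices, not the paper's improved predictor.

```python
import struct

class FCMPredictor:
    """Simplified finite-context-method predictor in the spirit of FPC:
    a hash of recent values indexes a table of predictions."""
    def __init__(self, bits=16):
        self.table = [0] * (1 << bits)
        self.mask = (1 << bits) - 1
        self.hash = 0

    def predict_and_update(self, true_bits):
        pred = self.table[self.hash]
        self.table[self.hash] = true_bits
        self.hash = ((self.hash << 6) ^ (true_bits >> 48)) & self.mask
        return pred

def residuals(values):
    """XOR each double against its prediction; a good predictor leaves
    many leading zero bytes for the entropy coder to squeeze out."""
    p = FCMPredictor()
    for v in values:
        bits = struct.unpack('<Q', struct.pack('<d', v))[0]
        yield bits ^ p.predict_and_update(bits)
```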
    Artificial intelligence
    Heterogeneous particle swarm optimization algorithm with multi-strategy parallel learning
    WANG Yun, SUN Hui
    2015, 35(11):  3238-3242.  DOI: 10.11772/j.issn.1001-9081.2015.11.3238
    Abstract ( )   PDF (769KB) ( )  
    References | Related Articles | Metrics
    The standard Particle Swarm Optimization (PSO) algorithm suffers from premature convergence and slow convergence speed when solving complex optimization problems, so a Heterogeneous PSO with Multi-strategy parallel learning (MHPSO) was presented. Firstly, two new learning strategies, local disturbance learning and Gaussian subspace learning, were proposed to maintain population diversity and escape from local optima, and an efficient and stable strategy pool was constructed by combining these two strategies with an existing one (MBB-PSO). Secondly, a simpler and more effective strategy change mechanism was proposed to decide when particles should change their learning strategy. The experimental study on a set of classical test functions shows that the proposed approach greatly improves solution accuracy and convergence speed, and outperforms several other improved PSO algorithms such as APSO (Adaptive Particle Swarm Optimization).
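    A compact sketch of a PSO with a per-particle strategy pool, under our reading of the abstract: a bare-bones (MBB-PSO-like) update, a local disturbance of pbest, and Gaussian subspace learning around gbest, with a simple stagnation counter as the switching rule. All strategy formulas and constants are illustrative, not the paper's.

```python
import numpy as np

def mhpso(f, dim=30, n=40, iters=1000, lo=-100.0, hi=100.0, seed=0):
    """Sketch of PSO with a pool of three learning strategies."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    pbest = X.copy()
    pval = np.apply_along_axis(f, 1, X)
    g = pbest[pval.argmin()].copy()
    stall = np.zeros(n, int)
    strat = rng.integers(0, 3, n)                 # strategy pool of size 3
    for _ in range(iters):
        for i in range(n):
            if strat[i] == 0:                     # bare-bones (MBB-PSO-like)
                mu, sd = (pbest[i] + g) / 2, np.abs(pbest[i] - g) + 1e-12
                x = rng.normal(mu, sd)
            elif strat[i] == 1:                   # local disturbance of pbest
                x = pbest[i] * (1 + 0.1 * rng.standard_normal(dim))
            else:                                 # Gaussian subspace learning
                x = pbest[i].copy()
                d = rng.choice(dim, max(1, dim // 5), replace=False)
                x[d] = g[d] + rng.standard_normal(len(d)) * (hi - lo) * 0.01
            x = np.clip(x, lo, hi)
            v = f(x)
            if v < pval[i]:
                pbest[i], pval[i], stall[i] = x, v, 0
            else:
                stall[i] += 1
                if stall[i] > 5:                  # switch strategy on stagnation
                    strat[i], stall[i] = rng.integers(0, 3), 0
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```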
    Clustering by density and distance analysis based on genetic algorithm
    WANG Ze, ZHANG Hongjun, ZHANG Rui, HE Dengchao
    2015, 35(11):  3243-3246.  DOI: 10.11772/j.issn.1001-9081.2015.11.3243
    Abstract ( )   PDF (725KB) ( )  
    References | Related Articles | Metrics
    In order to overcome the difficulty of selecting cluster centers and the weak generalization of density analysis, a novel clustering method was proposed that completes clustering by density and distance analysis based on a genetic algorithm. It computes density with an exponential method to reduce the impact of parameters, and adopts a genetic algorithm to search for the optimum threshold values, introducing a penalty factor to prevent the search region from drifting and thus accelerate convergence. Numerical experiments on both artificial and UCI data sets show that, compared with K-means, clustering by fast search and Max_Min_SD, the proposed algorithm achieves better or comparable performance on Rand index, accuracy, precision and recall.
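    The two quantities whose thresholds the genetic algorithm searches can be sketched directly (the GA itself is omitted): an exponential-kernel density and, for each point, the distance to its nearest higher-density point. The cutoff heuristic is an assumption of this sketch.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def density_distance(X, dc=None):
    """Exponential-kernel density and distance-to-denser-point; the
    paper's GA would search thresholds over these two quantities."""
    D = squareform(pdist(X))
    if dc is None:
        dc = np.percentile(D[D > 0], 2)           # heuristic cutoff distance
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1  # exclude self-contribution
    order = np.argsort(-rho)
    delta = np.full(len(X), D.max())
    for rank, i in enumerate(order[1:], 1):
        delta[i] = D[i, order[:rank]].min()       # nearest denser point
    return rho, delta

# Candidate centers are points with both large rho and large delta; a GA
# would tune the two thresholds against a clustering-quality index.
```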
    Video recommendation algorithm fusing comment analysis and latent factor model
    YIN Lutong, YU Jiong, LU Liang, YING Changtian, GUO Gang
    2015, 35(11):  3247-3251.  DOI: 10.11772/j.issn.1001-9081.2015.11.3247
    Abstract ( )   PDF (790KB) ( )  
    References | Related Articles | Metrics
    Video recommendation still faces many challenges, such as the lack of meta-data for online videos and the difficulty of extracting features directly from multimedia data. Therefore, a Video Recommendation algorithm Fusing Comment analysis and Latent factor model (VRFCL) was proposed. Starting from video comments, it first analyzes the sentiment orientation of user comments on multiple videos and produces numeric values representing each user's attitude towards the corresponding video. It then constructs a virtual rating matrix from these values, which compensates for data sparsity to some extent. Considering the diversity and high dimensionality of online videos, and in order to dig deeper into users' latent interests, the Latent Factor Model (LFM) was adopted to categorize online videos; LFM adds a latent category layer to the traditional recommendation model built on the dual user-item relationship. A series of experiments on YouTube review data demonstrate the effectiveness of VRFCL.
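    A minimal latent-factor sketch over the virtual rating matrix: sentiment scores (assumed already computed) fill R where comments exist, and SGD factorizes it. Hyperparameters are illustrative.

```python
import numpy as np

def lfm_train(R, mask, k=10, lr=0.01, reg=0.05, epochs=50, seed=0):
    """Plain latent-factor model trained by SGD on a virtual rating
    matrix R (sentiment scores mapped to ratings); `mask` marks the
    user-video pairs that actually have comments. The sentiment
    analysis that fills R is outside this sketch."""
    rng = np.random.default_rng(seed)
    n_u, n_v = R.shape
    P = rng.normal(0, 0.1, (n_u, k))              # user latent factors
    Q = rng.normal(0, 0.1, (n_v, k))              # video latent factors
    us, vs = np.nonzero(mask)
    for _ in range(epochs):
        for u, v in zip(us, vs):
            e = R[u, v] - P[u] @ Q[v]
            pu = P[u].copy()                      # use pre-update value for Q
            P[u] += lr * (e * Q[v] - reg * P[u])
            Q[v] += lr * (e * pu - reg * Q[v])
    return P, Q                                   # predict with P @ Q.T
```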
    Fault prediction model based on health analysis and harmony search-ant colony algorithm-support vector machine
    QIU Wenhao, HUANG Kaoli, JIN Saisai, LIAN Guangyao
    2015, 35(11):  3252-3255.  DOI: 10.11772/j.issn.1001-9081.2015.11.3252
    Abstract ( )   PDF (730KB) ( )  
    References | Related Articles | Metrics
    A new fault prediction method based on health analysis was proposed for the problem that existing fault prediction techniques cannot reflect the declining trend of system performance as a whole. Firstly, in order to achieve multi-step prediction, a multiple-output Support Vector Machine (SVM) was constructed on the basis of support vector regression, while the Harmony Search-Ant Colony Algorithm (HSACA) was used to optimize the SVM parameters and avoid local optima. Then a nonlinear Harmony Search-Ant Colony Algorithm-Support Vector Machine (HSACA-SVM) mapping model between monitoring data and health degree was built with the optimal parameters. Finally, the proposed model was used to evaluate a power supply system. The results indicate that the HSACA-SVM model can predict the downward trend of health degree with 97% accuracy and thereby realize fault prediction.
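    A sketch of the prediction model with scikit-learn, where a randomized parameter search stands in for the paper's harmony search-ant colony hybrid: one RBF SVR per prediction step yields the multi-step output.

```python
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def fit_health_predictor(X, Y):
    """Multi-step health-degree predictor sketch: X holds windows of
    monitoring data, Y holds several future health values per row.
    The paper tunes C/gamma with HSACA; randomized search stands in."""
    search = RandomizedSearchCV(
        SVR(kernel='rbf'),
        {'C': np.logspace(-2, 3, 50), 'gamma': np.logspace(-4, 1, 50)},
        n_iter=25, cv=3)
    search.fit(X, Y[:, 0])                    # tune on the first horizon
    model = MultiOutputRegressor(SVR(kernel='rbf', **search.best_params_))
    return model.fit(X, Y)                    # one SVR per prediction step
```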
    Application of stacked denoising autoencoder in spam filtering
    LI Yantao, FENG Weisen
    2015, 35(11):  3256-3260.  DOI: 10.11772/j.issn.1001-9081.2015.11.3256
    Abstract ( )   PDF (914KB) ( )  
    References | Related Articles | Metrics
    Aiming at the continually increasing number of spams, an approach for spam filtering based on the Stacked Denoising Autoencoder (SDA) was proposed. Firstly, to obtain a more abstract and robust feature representation of the raw data, a greedy layer-wise unsupervised algorithm was used to train the SDA on an unlabeled data set by minimizing the reconstruction error. Then a classifier was added on top of the SDA, and the parameters of the whole network were optimized with a supervised algorithm on a labeled data set by minimizing the classification error, yielding an optimal model. Lastly, experiments were performed on six different public corpora with the trained SDA. Its performance was compared with Support Vector Machine (SVM), a Bayes approach and Deep Belief Network (DBN), using precision, recall and the Matthews Correlation Coefficient (MCC), a more balanced performance measure. The experimental results indicate that using SDA to filter spam yields higher precision and more robustness: it not only attains the best average performance, with all precision values greater than 95%, but also approaches perfect prediction, with all MCC values greater than 0.88.
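    A condensed PyTorch sketch of the two training phases, assuming row-wise feature vectors scaled to [0, 1]; layer sizes, masking noise and the optimizer are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

def pretrain_sda(X, sizes=(784, 500, 200), noise=0.3, epochs=20, lr=1e-3):
    """Greedy layer-wise pretraining of a stacked denoising autoencoder
    on unlabeled data X (float tensor, rows = feature vectors)."""
    encoders, h = [], X
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
        dec = nn.Sequential(nn.Linear(d_out, d_in), nn.Sigmoid())
        opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=lr)
        for _ in range(epochs):
            corrupted = h * (torch.rand_like(h) > noise)   # masking noise
            loss = nn.functional.mse_loss(dec(enc(corrupted)), h)
            opt.zero_grad()
            loss.backward()
            opt.step()
        encoders.append(enc)
        h = enc(h).detach()                                # feed next layer
    return encoders

def finetune(encoders, X, y, n_classes=2, epochs=50, lr=1e-3):
    """Stack the pretrained encoders, add a classifier on top, and
    fine-tune end to end (y: long tensor of class indices)."""
    head = nn.Linear(encoders[-1][0].out_features, n_classes)
    model = nn.Sequential(*encoders, head)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(model(X), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```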
    Network and communications
    Compressive wideband spectrum blind detection based on high-order statistics
    CAO Kaitian, CHEN Xiaosi, ZHU Wenjun
    2015, 35(11):  3261-3264.  DOI: 10.11772/j.issn.1001-9081.2015.11.3261
    Abstract ( )   PDF (803KB) ( )  
    References | Related Articles | Metrics
    In cognitive radio networks, wideband spectrum sensing faces the technical restriction of high-speed Analog-to-Digital Converters (ADC). To cope with this issue, the probability distribution of the high-order decision statistic for wideband spectrum sensing fed by compressed observations, based on Compressive Sampling (CS) theory, was deduced, and a High-Order Statistics (HOS)-based Compressive Wideband Spectrum Blind Detection (HOS-CWSBD) scheme using these compressive measurements was proposed. The proposed algorithm requires neither prior knowledge of the transmitted signal nor signal recovery. Both theoretical analyses and simulation results show that the proposed scheme has lower computational complexity and greater robustness to noise uncertainty than traditional CS-based spectrum sensing schemes requiring signal recovery and HOS-based spectrum sensing with Nyquist samples.
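    The decision rule can be sketched in a few lines. The toy below uses a random-subsampling measurement operator, which preserves the amplitude statistics of the signal; with a dense Gaussian sensing matrix the measurements themselves Gaussianize, and the paper's deduced H0 distribution is what sets the threshold, left here as a free parameter.

```python
import numpy as np
from scipy.stats import kurtosis

def hos_detect(y, threshold):
    """Noise-only measurements are Gaussian (excess kurtosis near 0),
    while modulated signals are typically sub- or super-Gaussian, so a
    fourth-order statistic separates the hypotheses without any signal
    recovery."""
    return abs(kurtosis(y)) > threshold

# Toy usage: 8x sub-Nyquist random subsampling, noisy BPSK vs. pure noise
rng = np.random.default_rng(1)
idx = rng.choice(4096, 512, replace=False)
noise = rng.standard_normal(4096)
signal = rng.choice([-2.0, 2.0], 4096) + noise    # sub-Gaussian mixture
print(hos_detect(noise[idx], 0.5), hos_detect(signal[idx], 0.5))
```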
    Improved linear minimum mean square error channel estimation algorithm based on orthogonal frequency division multiplexing
    XIE Bin, CHEN Bo, LE Honghao
    2015, 35(11):  3265-3269.  DOI: 10.11772/j.issn.1001-9081.2015.11.3265
    Abstract ( )   PDF (768KB) ( )  
    References | Related Articles | Metrics
    Traditional Linear Minimum Mean Square Error (LMMSE) channel estimation requires knowledge of the statistical characteristics of the channel, which are usually unknown in practical applications. Aiming at the uncertainty of wireless channel statistics, and taking into account the sparsity of the time-domain channel energy distribution, an improved LMMSE channel estimation algorithm based on Least Squares (LS) estimation was proposed. The algorithm starts from the subcarrier with the highest confidence, takes the channel estimate of the adjacent subcarrier as the true response of the current subcarrier to compute the weighting coefficients, and then completes the channel response over all subcarriers by weighted averaging. This avoids complicated matrix inversion and decomposition operations and is simple to implement. The experimental results show that the improved algorithm performs better than LS and SVD-LMMSE (Singular Value Decomposition-Linear Minimum Mean Square Error) channel estimation, with a Bit Error Ratio (BER) close to that of the traditional LMMSE algorithm.
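    A very rough sketch of our reading of the sweep: starting from a high-confidence end of the band, the neighbouring estimate is treated as the true response and blended with the local LS value. The plain exponential smoother below stands in for the paper's derived weighting coefficients.

```python
import numpy as np

def ls_estimate(Y, X):
    """Least-squares channel estimate at pilot subcarriers."""
    return Y / X

def smooth_ls(H_ls, w=0.5):
    """Two-pass weighted averaging over subcarriers; the fixed weight w
    is an assumption standing in for the paper's derived coefficients."""
    H = H_ls.copy()
    for k in range(1, len(H)):                    # left-to-right sweep
        H[k] = w * H[k - 1] + (1 - w) * H_ls[k]
    for k in range(len(H) - 2, -1, -1):           # right-to-left sweep
        H[k] = w * H[k + 1] + (1 - w) * H[k]
    return H
```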
    Automatic scheme for implementing access control rules in OpenFlow network
    LIU Yi, ZHANG Hongqi, DAI Xiangdong, LEI Cheng
    2015, 35(11):  3270-3274.  DOI: 10.11772/j.issn.1001-9081.2015.11.3270
    Abstract ( )   PDF (933KB) ( )  
    References | Related Articles | Metrics
    Focusing on the issue that an OpenFlow network cannot continuously satisfy access control policies because its data plane changes frequently, an automatic scheme for implementing access control rules in OpenFlow networks was proposed. Firstly, the reachable space was obtained by building real-time forwarding paths, and conflicts among access control rules were resolved with a dynamic synthesis algorithm. Then, the denied space was extracted from the synthesized set of access control rules with a rule space division algorithm and compared with the reachable space to detect direct and indirect violations. According to network update situations and violation detection results, automatic violation resolutions were applied flexibly, such as rejecting a rule update, removing a rule sequence, deploying rules near the source based on Linear Programming (LP), and deploying rules terminally. Lastly, the access control rules were converted into the format required for deployment. Theoretical analysis and simulation results demonstrate that the proposed scheme is applicable when multiple security applications run on the controller and switch memory is limited, and that deploying rules near the source based on LP minimizes unwanted network traffic.
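    At its core the violation check is an intersection of spaces; the toy below models reachable and denied spaces as sets of (src, dst) pairs, whereas real header-space checks are wildcard/interval computations.

```python
def violations(reachable, denied):
    """Flows that the forwarding paths can deliver but the synthesized
    access control rules forbid; a non-empty result triggers one of the
    resolutions (reject update, remove sequence, deploy near source,
    deploy terminally)."""
    return reachable & denied

# Toy usage with flows as (src, dst) pairs
reachable = {('10.0.0.1', '10.0.1.5'), ('10.0.0.2', '10.0.1.6')}
denied = {('10.0.0.2', '10.0.1.6')}
print(violations(reachable, denied))   # {('10.0.0.2', '10.0.1.6')}
```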
    Design of a medium access control protocol trading off throughput and fairness in MANET
    ZHU Qingchao, CHEN Jing, GONG Shuiqing, SHI Ting
    2015, 35(11):  3275-3279.  DOI: 10.11772/j.issn.1001-9081.2015.11.3275
    Abstract ( )   PDF (928KB) ( )  
    References | Related Articles | Metrics
    Since Mobile Ad Hoc NETworks (MANET) suffer from an imbalance between high throughput and low fairness, a novel Medium Access Control (MAC) protocol named MAC-FT was proposed. Firstly, two expressions were deduced, relating optimal throughput to the number of nodes and idle slot probability to the number of nodes. On this basis, an idle slot probability model was developed, whose feasibility and stability were proved using Lyapunov drift. Secondly, the idle slot probability was estimated with an Auto-Regressive and Moving Average (ARMA) model filter, and its dynamics were controlled with a Proportional Integral Controller (PIC). Finally, the performance of MAC-FT was analyzed comprehensively. Results show that its fairness index and throughput reached 0.98 and 6.15 Mb/s respectively, close to the optimal values of 1 and 5.85 Mb/s. MAC-FT therefore outperforms Asymptotically Optimal Backoff (AOB), Idle Sense (IS), Distribution Coordination Function (DCF) and Gentle DCF (GDCF), and improves the balance between throughput and fairness.
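    A sketch of the control loop described in the abstract: an estimate of the idle-slot probability (standing in for the ARMA filter over observed slots) is driven toward its optimal value by a proportional-integral controller acting on the contention window. Gains, bounds and the toy plant are illustrative choices.

```python
def pi_tune_cw(measure_idle, target, cw0=32, kp=100.0, ki=20.0, steps=300):
    """Drive the measured idle-slot probability toward `target` by
    adjusting the contention window with a position-form PI law."""
    cw, integ = cw0, 0.0
    for _ in range(steps):
        err = target - measure_idle(cw)   # too few idle slots -> widen CW
        integ += err
        cw = int(min(1024, max(16, cw0 + kp * err + ki * integ)))
    return cw

# Toy plant: n stations attempt with probability tau ~ 2/(CW+1), so an
# idle slot occurs with probability (1 - tau)^n.
n = 20
print(pi_tune_cw(lambda cw: (1 - 2.0 / (cw + 1)) ** n, target=0.6))
```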
    High accuracy frequency estimation algorithm of sinusoidal signals based on fast Fourier transform
    FAN Lei, QI Guoqing
    2015, 35(11):  3280-3283.  DOI: 10.11772/j.issn.1001-9081.2015.11.3280
    Abstract ( )   PDF (574KB) ( )  
    References | Related Articles | Metrics
    In order to further improve the precision of sinusoid frequency estimation in an additive white Gaussian noise background, a new frequency estimation algorithm for sinusoidal signals based on the interpolated Fast Fourier Transform (FFT) was proposed. Firstly, the N sinusoid samples were padded with N zeros in the time domain. Next, a 2N-point FFT was performed and a coarse estimate was obtained by locating the discrete spectral line with maximum amplitude. Finally, a fine estimate was made using the maximum spectral line and two Discrete-Time Fourier Transform (DTFT) samples of the original signal on either side of it. Simulation results show that the root mean square error of the proposed estimator is close to the Cramer-Rao lower bound when the signal frequency lies anywhere between two neighboring FFT spectral lines, and that the performance is stable. Its estimation precision is higher than that of the Candan, Fang, Rational Combination of Three Spectrum Lines (RCTSL) and Aboutanios estimators, and its signal-to-noise ratio threshold is lower than those of the existing estimators.
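    A minimal sketch of the coarse-to-fine procedure for a complex tone: pad N samples with N zeros, take a 2N-point FFT for the coarse peak, then refine with two DTFT samples half a bin either side of it. The fine step below is an Aboutanios-Mulgrew-style interpolation; the paper's exact fine formula may differ.

```python
import numpy as np

def estimate_freq(x, fs):
    """Coarse 2N-point FFT peak plus two-sample DTFT interpolation."""
    N = len(x)
    X = np.fft.fft(x, 2 * N)
    k = np.argmax(np.abs(X[:N]))                 # coarse peak (positive freqs)
    n = np.arange(N)
    def dtft(f):                                 # DTFT of the *original* signal,
        return np.sum(x * np.exp(-2j * np.pi * f * n / N))  # f in N-grid bins
    f_coarse = k / 2.0                           # 2N-grid index -> N-grid bins
    Xp, Xm = dtft(f_coarse + 0.5), dtft(f_coarse - 0.5)
    delta = 0.5 * np.real((Xp + Xm) / (Xp - Xm)) # residual in N-grid bins
    return (f_coarse + delta) * fs / N

# Toy check: complex tone near 1 kHz sampled at 8 kHz
fs, N = 8000.0, 256
t = np.arange(N) / fs
x = np.exp(2j * np.pi * 1003.7 * t)
print(estimate_freq(x, fs))                      # ~1003.7
```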
    Virtual reality and digital media
    Fast selection algorithm for intra prediction in AVS2
    ZHAO Chao, ZHAO Haiwu, WANG Guozhong, LI Guoping, TENG Guowei
    2015, 35(11):  3284-3287.  DOI: 10.11772/j.issn.1001-9081.2015.11.3284
    Abstract ( )   PDF (650KB) ( )  
    References | Related Articles | Metrics
    Since the intra-prediction mode decision process of Audio Video coding Standard II (AVS2) is computationally complex, and the popularity of ultra-high-definition video puts encoding and decoding systems under great pressure, a fast intra-prediction algorithm was presented. The algorithm selects only a subset of prediction modes for the Smallest Coding Unit (SCU), reducing the computation of the bottom-layer SCU, and the upper-layer Coding Unit (CU) then derives its prediction mode from the lower-layer CU modes, reducing the computation of the upper CU. The experimental results show that the impact of the algorithm on compression efficiency is very small, while the encoding time decreases by more than 15% on average, effectively reducing the complexity of intra-coding.
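    The bottom-up mode reuse can be illustrated generically; `cost` abstracts the rate-distortion evaluation, and the subset size and number of propagated modes are illustrative choices, not AVS2's.

```python
def fast_intra_modes(cost, subset, levels=3):
    """Toy view of the abstract's idea: the bottom layer (SCU) tests
    only a reduced mode subset; each upper CU layer tests just the best
    modes passed up from below. `cost(level, mode)` is any RD cost."""
    candidates = list(subset)                 # reduced search at the SCU
    best_per_level = []
    for level in range(levels):
        scored = sorted(candidates, key=lambda m: cost(level, m))
        best_per_level.append(scored[0])
        candidates = scored[:3]               # propagate top modes upward
    return best_per_level
```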
    Quantization algorithm based on image lowpass subband maxima mapping
    HUANG Sheng, DU Chengchen, JIAN Wei
    2015, 35(11):  3288-3292.  DOI: 10.11772/j.issn.1001-9081.2015.11.3288
    Abstract ( )   PDF (826KB) ( )  
    References | Related Articles | Metrics
    Concerning the problem that deadzone quantization in image compression cannot protect image edges effectively, a novel quantization algorithm called Lowpass subband Maxima Mapping Quantization (LMMQ) was proposed. Among the subbands produced by the wavelet transform, the importance of each lowpass coefficient is decided by the average of all highpass coefficients that have a mapping relationship with it. During quantization, the highpass coefficients are quantized by the deadzone quantization of JPEG2000, while the quantization step size of the lowpass coefficients is adaptively refined according to their importance, so image edges are protected effectively. The proposed algorithm selects coefficients adaptively when refining the step size, and encodes faster in Tier1 of EBCOT (Embedded Block Coding with Optimized Truncation) than traditional JPEG2000. The experimental results show that the proposed algorithm protects image edges well and gains about 0.2 dB over traditional deadzone quantization.
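    A sketch of the maxima-mapping refinement with PyWavelets, using the co-located detail coefficients as the mapped highpass set; the refinement rule is an illustrative stand-in for the paper's, and the highpass bands would still use JPEG2000 deadzone quantization.

```python
import numpy as np
import pywt

def lmmq_steps(img, base_step=16.0):
    """Refine the lowpass (LL) quantization step per coefficient by the
    mean magnitude of the mapped highpass coefficients: important
    (edge) areas get a finer step."""
    LL, (LH, HL, HH) = pywt.dwt2(img, 'db4')
    importance = (np.abs(LH) + np.abs(HL) + np.abs(HH)) / 3.0
    scale = importance / (importance.mean() + 1e-12)
    steps = base_step / (1.0 + scale)          # larger importance -> finer step
    q_LL = np.round(LL / steps) * steps        # adaptively quantized lowpass
    return q_LL, steps
```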
    Pedestrian texture extraction by fusing significant factors
    MA Qiang, WANG Wenwei
    2015, 35(11):  3293-3296.  DOI: 10.11772/j.issn.1001-9081.2015.11.3293
    Abstract ( )   PDF (634KB) ( )  
    References | Related Articles | Metrics
    Since texture-based pedestrian feature extraction suffers from redundant feature information and cannot depict human visual sensitivity, an algorithm named SF-LBP was proposed to extract pedestrian texture features with a Significant Local Binary Pattern that fuses characteristics of the human visual system. Firstly, the algorithm calculates a significant factor for each region by a saliency detection method. Then it rebuilds the eigenvector of the image by weighting the pedestrian texture features with the significant factors, and generates a feature histogram from the local features. Finally, an adaptive AdaBoost classifier is integrated to construct the pedestrian detection system. The experimental results on the INRIA database show that the SF-LBP feature achieves a detection rate of 97%, about 2-3 percentage points higher than the HOG (Histogram of Oriented Gradients) and Haar features, and a recall rate of 90%, 2 percentage points higher than the other features. This indicates that the SF-LBP feature effectively describes pedestrian texture and improves the accuracy of the pedestrian detection system.
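    A minimal sketch with scikit-image, where a cell's normalized contrast stands in for the paper's saliency detector; grid size and LBP parameters are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def sf_lbp(img, grid=(4, 4), P=8, R=1):
    """Saliency-weighted LBP descriptor sketch: per-cell uniform-LBP
    histograms are scaled by a 'significant factor' and concatenated.
    `img` is a 2-D gray image (integer dtype preferred for LBP)."""
    lbp = local_binary_pattern(img, P, R, method='uniform')
    n_bins = P + 2
    h, w = img.shape
    ch, cw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = lbp[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
            raw = img[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
            sf = raw.std() / (img.std() + 1e-8)   # stand-in saliency factor
            hist, _ = np.histogram(cell, bins=n_bins,
                                   range=(0, n_bins), density=True)
            feats.append(sf * hist)
    return np.concatenate(feats)
```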
    Occluded object tracking algorithm based on mirror image and Mean Shift
    CAO Yiqing, XIAO Jinsheng, HUANG Xiaosheng
    2015, 35(11):  3297-3301.  DOI: 10.11772/j.issn.1001-9081.2015.11.3297
    Abstract ( )   PDF (826KB) ( )  
    References | Related Articles | Metrics
    A new occluded object tracking algorithm based on mirror image and Mean Shift was proposed to solve the problem that a tracked object is located inaccurately, or even lost, during full occlusion. The algorithm includes three steps. Firstly, when the object is unoccluded (the Bhattacharyya coefficient matching degree of adjacent frames is greater than or equal to 80%), color and contour features are used to locate the target, and adaptive size adjustment is realized by a partition-based sandbag kernel window. Secondly, when the object is occluded (the matching degree of adjacent frames falls below 80%), the location and size of the target are predicted using a pre-trained classifier and the mirror principle. Thirdly, when the target leaves the occlusion area (the matching degree is again greater than or equal to 80%), the Mean Shift algorithm is used to track the target. The experimental results show that, under full occlusion, the proposed algorithm is more accurate and robust than the sub-regional on-line Boosting algorithm and the multi-view tracking algorithm combining modified fusion features with a dynamic occlusion threshold, and meets real-time requirements.
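    The switching test that drives all three steps can be sketched directly; gray-level histograms stand in for the paper's full color/contour feature model.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms;
    1 means identical distributions."""
    return np.sum(np.sqrt(p * q))

def gray_hist(patch, bins=16):
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def occlusion_state(prev_patch, cur_patch, thresh=0.8):
    """Matching degree >= 80% -> visible (Mean Shift / feature tracking);
    below 80% -> occluded (fall back to classifier + mirror prediction)."""
    rho = bhattacharyya(gray_hist(prev_patch), gray_hist(cur_patch))
    return 'visible' if rho >= thresh else 'occluded'
```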
    Design and implementation of multi-target detection and recognition algorithm for optical remote sensing image
    JI Xiaofei, QIN Ningli
    2015, 35(11):  3302-3307.  DOI: 10.11772/j.issn.1001-9081.2015.11.3302
    Abstract ( )   PDF (936KB) ( )  
    References | Related Articles | Metrics
    At present, the processing and analysis of optical remote sensing images concentrates mostly on the detection and recognition of single targets, so multi-target detection and recognition has become an important research topic. A multi-target detection and recognition algorithm for optical remote sensing images was proposed. Firstly, an adaptive threshold algorithm was used for fast target detection and segmentation. Then, a hierarchical BoF-SIFT (Bag of Feature-Scale Invariant Feature Transform) feature was constructed by effectively combining the image pyramid with the BoF-SIFT feature, representing both the global and local features of the target and describing its distribution characteristics in detail. Finally, a support vector machine with a radial basis function kernel was used as the weak classifier for the AdaBoost algorithm, and after continuously updating the weights a strong classifier was obtained to complete classification and recognition of the target images; the recognition rate reached 93.52%. A large number of experimental results show that the proposed algorithm segments multi-class targets in remote sensing images notably well, that the features are selected appropriately, and that the recognition method is fast and effective.
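    A sketch of the hierarchical BoF-SIFT descriptor with OpenCV and a pre-trained visual vocabulary (e.g. a fitted sklearn KMeans); pyramid depth and normalization are illustrative.

```python
import numpy as np
import cv2

def bof_sift_pyramid(img, kmeans, levels=2):
    """Quantize SIFT descriptors against a visual vocabulary and
    histogram them over a spatial pyramid, keeping both global and
    local structure of the target chip."""
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(img, None)
    if desc is None:
        return None
    words = kmeans.predict(desc.astype(np.float64))
    k = kmeans.n_clusters
    h, w = img.shape[:2]
    feats = []
    for level in range(levels + 1):
        cells = 2 ** level
        hist = np.zeros((cells, cells, k))
        for kp, word in zip(kps, words):
            i = min(int(kp.pt[1] / h * cells), cells - 1)
            j = min(int(kp.pt[0] / w * cells), cells - 1)
            hist[i, j, word] += 1
        feats.append(hist.ravel() / max(len(words), 1))
    return np.concatenate(feats)
```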
    Scale invariant feature transform mismatches removal based on canonical correlation analysis
    ZHAO Wei, TIAN Zheng, YANG Lijuan, YAN Weidong, WEN Jinhuan
    2015, 35(11):  3308-3311.  DOI: 10.11772/j.issn.1001-9081.2015.11.3308
    Abstract ( )   PDF (654KB) ( )  
    References | Related Articles | Metrics
    A method to remove Scale Invariant Feature Transform (SIFT) mismatches based on Canonical Correlation Analysis (CCA) was presented to improve the quality of feature matching when feature points lie in similar structures of an image. At first, the SIFT matching algorithm is used to obtain the initial matching pairs. Then, a line is fitted based on the linear relation between the canonical correlation components, and mismatches are removed by thresholding the distances from the points to the line. Furthermore, the influence of each remaining match on the degree of collinearity is analyzed to indicate false matches. The experimental results show that the proposed algorithm removes mismatches efficiently while keeping more correct matches.
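    A sketch of the CCA-based filter with scikit-learn: both coordinate sets are projected onto their first pair of canonical components, a line is least-squares fitted to the (near-linear) relation between them, and matches far from the line are dropped. Thresholding at k times the median distance is an assumption of this sketch.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def filter_matches(pts1, pts2, k=3.0):
    """Keep-mask over SIFT matches: pts1/pts2 are (n, 2) arrays of
    matched coordinates in the two images."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(np.asarray(pts1, float),
                             np.asarray(pts2, float))
    u, v = u.ravel(), v.ravel()                  # canonical components
    a, b = np.polyfit(u, v, 1)                   # least-squares line v = a*u + b
    d = np.abs(a * u - v + b) / np.hypot(a, 1.0) # point-to-line distances
    return d <= k * np.median(d) + 1e-12         # boolean keep-mask
```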
    Randomized Hough transform straight line detection based on least square correction
    QIAO Yinqi, XIAO Jianhua, HUANG Yinhe, YIN Kuiying
    2015, 35(11):  3312-3315.  DOI: 10.11772/j.issn.1001-9081.2015.11.3312
    Abstract ( )   PDF (777KB) ( )  
    References | Related Articles | Metrics
    When the Hough transform is applied to straight line detection, a line's mapping model in the parameter space is easily interfered with by other lines, short-segment noise, or its own non-ideality, which produces invalid votes leading to false detection, missed detection and inaccurate endpoint location. A novel method was proposed that introduces the Least Square Method (LSM), performed in the ρ-θ domain, into the Randomized Hough Transform (RHT) for detecting straight lines. The validity of each sample is verified by a pixel-length ratio before voting in order to reject pseudo lines, followed by least-squares linear fitting in the parameter space for parameter correction. By setting an accumulation threshold, straight line candidates are picked out one by one via peak detection, and the endpoints of line segments are located by setting a gap-and-scale threshold. The method is shown to have higher detection precision than the conventional Hough transform.
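    A minimal sketch of randomized voting with least-squares correction: random point pairs vote for (ρ, θ) cells, and a cell passing the vote threshold is refined by re-fitting all nearby points. The pixel-length-ratio sample validation of the paper is omitted, and resolutions and thresholds are illustrative.

```python
import numpy as np

def fit_line_lsq(pts):
    """Total-least-squares (rho, theta) fit via the smallest principal
    component of the centered points."""
    c = pts.mean(axis=0)
    _, _, V = np.linalg.svd(pts - c)
    nx, ny = V[-1]                                # unit normal of the line
    theta = np.arctan2(ny, nx) % np.pi
    rho = c[0] * np.cos(theta) + c[1] * np.sin(theta)
    return rho, theta

def rht_lines(points, n_trials=2000, res=(1.0, np.pi / 180), votes=15, seed=0):
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    acc = {}
    for _ in range(n_trials):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        if np.allclose(p, q):
            continue
        theta = np.arctan2(q[0] - p[0], p[1] - q[1]) % np.pi  # normal angle
        rho = p[0] * np.cos(theta) + p[1] * np.sin(theta)
        key = (round(rho / res[0]), round(theta / res[1]))
        acc[key] = acc.get(key, 0) + 1
    lines = []
    for (kr, kt), n in acc.items():
        if n < votes:
            continue
        rho, theta = kr * res[0], kt * res[1]
        d = np.abs(pts[:, 0] * np.cos(theta) + pts[:, 1] * np.sin(theta) - rho)
        inliers = pts[d < 2.0]                    # pixels near the candidate
        if len(inliers) >= 2:
            lines.append(fit_line_lsq(inliers))   # least-squares correction
    return lines
```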
    Fast visual optimization defogging algorithm based on atmospheric physical model
    FU Hui, WU Bin, HAN Dongxuan, HUANG Yangqiang
    2015, 35(11):  3316-3320.  DOI: 10.11772/j.issn.1001-9081.2015.11.3316
    Abstract ( )   PDF (840KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems of single image degradation under foggy weather and the high time complexity of existing defogging methods, a fast visual optimization defogging algorithm based on the atmospheric physical model was proposed. The method first uses threshold segmentation to find the sky region and combines it with a binary tree algorithm to locate the global atmospheric light precisely. It then adopts an improved constrained least squares filter, which keeps edge detail while reducing noise, to optimize the original transmittance map. Finally, the fog-free image is restored through the atmospheric physical model, and the average gradient, information entropy and visual information fidelity are adopted as evaluation indexes. The experimental results show that, compared with adaptive image enhancement based on the multi-scale Retinex algorithm, image restoration based on independent component analysis, a quick visual image restoration method, and the dark-channel prior dehazing algorithm, the proposed method achieves good visual evaluation indexes and strong real-time processing capability.
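    The restoration step shared by atmospheric-model methods is J = (I - A) / max(t, t_min) + A. The paper locates A by sky segmentation plus a binary-tree search and refines t with a constrained least-squares filter; in the sketch below, A and t come from crude stand-ins (brightest-region mean and a dark-channel-style estimate) so it stays self-contained.

```python
import numpy as np

def recover(I, t_min=0.1):
    """Restore a fog-free image from I (H x W x 3 floats in [0, 1])
    via the atmospheric scattering model, with stand-in estimates of
    the airlight A and transmission t."""
    gray = I.mean(axis=2)
    A = I[gray >= np.percentile(gray, 99)].mean(axis=0)  # stand-in airlight
    dark = (I / A).min(axis=2)                           # per-pixel min channel
    t = np.clip(1.0 - 0.95 * dark, t_min, 1.0)[..., None]
    return np.clip((I - A) / t + A, 0.0, 1.0)
```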