
Table of Contents

    10 December 2015, Volume 35 Issue 12
    Network and communications
    Review of modeling, statistical properties analysis and routing strategies optimization in Internet of vehicles
    CHEN Yufeng, XIANG Zhengtao, DONG Yabo, XIA Ming
    2015, 35(12):  3321-3324.  DOI: 10.11772/j.issn.1001-9081.2015.12.3321
    Using complex network theory and methods to model communication networks, analyze the statistical properties of their evolution, and guide the optimization of routing strategies has become a hot research area. The research status of modeling, statistical property analysis, routing strategy optimization and routing protocol design in the Internet of Vehicles (IoV) was reviewed. In addition, three improvements were proposed. The first is using a directed weighted graph to describe the topology of the IoV. The second is analyzing the key statistical properties that influence the transmission capacity of the IoV, based on the differences in statistical properties between the IoV and the mobile Ad Hoc network. The third is optimizing multi-path routing strategies based on Multiple-Input Multiple-Output (MIMO) technologies from the complex network perspective, that is, transmitting over multiple channels and multiple paths.
    Fast routing micro-loop avoidance algorithm in IP network
    YANG Shiqi, YU Hongfang, LUO Long
    2015, 35(12):  3325-3330.  DOI: 10.11772/j.issn.1001-9081.2015.12.3325
    When a link weight changes in an Internet Protocol (IP) network, routing loops may occur. Such loops increase network latency and cause packet losses, which cannot meet the needs of high-level real-time services. A fast routing micro-loop avoidance algorithm using a weight sequence was proposed. The link weights were reallocated according to the weight sequence so that no loops occur during the convergence phase. To calculate the weight sequence, a safety weight interval was defined to describe the loop-avoidance condition, and the safety interval was then used to search for a set of safe weight ranges. During the calculation, pruning was used to reduce the search range and improve efficiency. Finally, the weight sequence was obtained from these ranges. Simulation results on typical network topologies show that, on average, five rounds of link weight reallocation successfully avoid loops in 87% of the topologies. In addition, compared with existing algorithms that iteratively adjust link weights to eliminate routing micro-loops, the computational complexity of the proposed algorithm is reduced by an order of magnitude and the computational efficiency is improved by 30%-80%. The proposed algorithm can greatly shorten the calculation time and solve the routing micro-loop problem more efficiently, avoiding network latency and packet loss and thus providing a high level of service quality.
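    The paper's weight-sequence construction is not reproduced here. As a rough illustration only, the following sketch (invented topology, plain Dijkstra) detects the transient micro-loop condition that such algorithms try to avoid: after a weight change, node u can loop with neighbour v if u's new next hop is v while v still forwards through its old next hop u.

```python
import heapq

def dijkstra_next_hops(adj, dst):
    """Shortest-path next hop toward dst for every node, by running Dijkstra
    from dst (the toy graphs below are undirected with symmetric weights)."""
    dist, pq = {dst: 0.0}, [(0.0, dst)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    # next hop of u = neighbour v minimising w(u, v) + dist(v)
    return {u: min(adj[u], key=lambda v: adj[u][v] + dist.get(v, float("inf")))
            for u in adj if u != dst and u in dist}

def transient_loops(adj_old, adj_new, dst):
    """Node pairs that may loop while some routers still use the old weights."""
    old, new = dijkstra_next_hops(adj_old, dst), dijkstra_next_hops(adj_new, dst)
    return [(u, v) for u, v in new.items() if old.get(v) == u]

# toy topology: the weight of link a-d is raised from 1 to 10
g_old = {"a": {"b": 1, "d": 1}, "b": {"a": 1, "d": 3}, "d": {"a": 1, "b": 3}}
g_new = {n: dict(nb) for n, nb in g_old.items()}
g_new["a"]["d"] = g_new["d"]["a"] = 10
print(transient_loops(g_old, g_new, "d"))   # [('a', 'b')] -> potential micro-loop
```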
    Clustering routing algorithm based on attraction factor and hybrid transmission
    ZHAO Zuopeng, ZHANG Nana, HOU Mengting, GAO Meng
    2015, 35(12):  3331-3335.  DOI: 10.11772/j.issn.1001-9081.2015.12.3331
    In order to effectively reduce the energy consumption of Wireless Sensor Network (WSN) and extend the life cycle of the network, Low Energy Adaptive Clustering Hierarchy (LEACH) and other clustering routing protocols were analyzed, and a Clustering Routing algorithm based on Attraction factor and Hybrid transmission (CRAH algorithm) was proposed to overcome their weaknesses. Firstly, to solve the problem of unreasonable Cluster Head (CH) selection, the node residual energy and node location were combined by weighted summation into a new index for CH selection. Then, the tasks of the CH nodes were reassigned and new fusion nodes were chosen. The fusion nodes sent data to the Base Station (BS) using a hybrid of single-hop and multi-hop transmission, and a new algorithm, the Attraction Factor-Dijkstra (AF-DK) algorithm, which combines the attraction factor with the Dijkstra algorithm, was proposed to find the optimal paths for the fusion nodes. The simulation results show that, compared with the LEACH, LEACH-Centralized (LEACH-C) and Hybrid Energy-Efficient Distributed clustering (HEED) protocols, the CRAH algorithm improves the network lifetime by about 51.56%, 47.1% and 42% respectively, and slows network energy consumption significantly. The amount of data received by the BS decreases by 69.9% on average. The CRAH algorithm makes CH selection more reasonable, effectively reduces redundant data in the communication process, balances the network energy consumption, and extends the life cycle of the network.
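    The exact weighting used by CRAH is not given in the abstract; the sketch below only illustrates the idea of scoring cluster-head candidates by a weighted sum of normalised residual energy and closeness to the base station. The weights, field size and node data are invented.

```python
import math, random

def ch_score(node, bs, e_init, w_energy=0.6, w_dist=0.4, d_max=200.0):
    """Weighted sum of normalised residual energy and closeness to the base
    station; the weights and normalisation are illustrative assumptions."""
    energy_term = node["energy"] / e_init
    dist_term = 1.0 - math.dist((node["x"], node["y"]), bs) / d_max
    return w_energy * energy_term + w_dist * dist_term

random.seed(1)
bs, e_init = (50.0, 150.0), 2.0
nodes = [{"id": i, "x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "energy": random.uniform(0.5, 2.0)} for i in range(20)]
# pick the highest-scoring nodes as cluster heads for this round
cluster_heads = sorted(nodes, key=lambda n: ch_score(n, bs, e_init), reverse=True)[:3]
print([n["id"] for n in cluster_heads])
```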
    Optimal routing selection algorithm of end-to-end key agreement in quantum key distribution network
    SHI Lei, SU Jinhai, GUO Yixi
    2015, 35(12):  3336-3340.  DOI: 10.11772/j.issn.1001-9081.2015.12.3336
    Focusing on the routing selection of end-to-end key agreement in Quantum Key Distribution (QKD) networks, an optimal routing selection algorithm for end-to-end key agreement based on the Dijkstra algorithm was designed. Firstly, the unavailable links in the QKD network were eliminated according to the available-path selection strategy. Secondly, based on the shortest-path selection strategy, the Dijkstra algorithm was improved to find all the shortest paths with the least key consumption. Finally, according to the optimal-path selection strategy, the path with the highest network service efficiency was selected from these shortest paths. The analysis results show that the proposed algorithm solves problems such as the optimal path not being unique, the best path not being the shortest, and the selected path not being truly optimal. The proposed algorithm can reduce the key consumption of end-to-end key agreement in QKD networks and improve the efficiency of network services.
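    As a simplified illustration of the strategy chain described above (not the paper's improved Dijkstra), the sketch below drops links whose key pool is exhausted and then runs plain Dijkstra with per-link key consumption as the cost; the paper's two-stage shortest-path/least-consumption criterion is collapsed into a single cost, and the link data are invented.

```python
import heapq

def qkd_route(links, src, dst, min_keys=1):
    """links: {(u, v): (residual_key_pool, key_cost_per_session)}.
    Drop links below min_keys, then run Dijkstra on key cost (sketch only)."""
    adj = {}
    for (u, v), (key_pool, key_cost) in links.items():
        if key_pool >= min_keys:                      # availability strategy
            adj.setdefault(u, []).append((v, key_cost))
            adj.setdefault(v, []).append((u, key_cost))
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

links = {("A", "B"): (120, 2), ("B", "D"): (80, 2), ("A", "C"): (5, 1),
         ("C", "D"): (0, 1), ("B", "C"): (60, 1)}
print(qkd_route(links, "A", "D"))   # link C-D is excluded: its key pool is empty
```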
    Estimating algorithm for missing values based on attribute correlation in wireless sensor network
    XU Ke, LEI Jianjun
    2015, 35(12):  3341-3343.  DOI: 10.11772/j.issn.1001-9081.2015.12.3341
    The loss of sensing data is inevitable due to the inherent characteristics of Wireless Sensor Network (WSN), and it significantly affects many applications. To solve this problem, an estimation algorithm for missing values based on the attribute correlation of the sensing data was proposed. A multiple regression model was adopted to estimate the missing values of attribute-correlated sensing data, and a data-interleaved transmission strategy was proposed to improve the robustness of the algorithm. The simulation results show that the proposed algorithm can estimate the missing values more accurately and reliably than algorithms based on temporal and spatial correlation, such as the Linear interpolation Model (LM) algorithm and the traditional Nearest Neighbor Interpolation (NNI) algorithm.
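    A minimal sketch of the core idea, multiple regression over correlated attributes, on synthetic sensor readings; the regression form and the data are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic readings: humidity and light are correlated with temperature
temp = rng.uniform(15, 30, 200)
humidity = 80 - 1.5 * temp + rng.normal(0, 1, 200)
light = 20 + 3.0 * temp + rng.normal(0, 5, 200)

# fit temperature = b0 + b1*humidity + b2*light on complete records
X = np.column_stack([np.ones_like(humidity), humidity, light])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

# estimate a missing temperature value from the attributes that did arrive
missing_record = np.array([1.0, 47.5, 86.0])      # [bias, humidity, light]
print("estimated temperature:", missing_record @ beta)
```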
    Incremental selection algorithm for incremental network monitoring points
    DING Sanjun, TAO Xingyu, SHI Xiangchao, XU Lei
    2015, 35(12):  3344-3347.  DOI: 10.11772/j.issn.1001-9081.2015.12.3344
    In order to resolve the difficulty of changing the monitoring points of the original network after its topology has been extended, an incremental selection algorithm for incremental network monitoring points was proposed. The proposed algorithm optimizes the greedy algorithm that uses vertex degree as the greedy-choice strategy for the weak vertex cover of the graph, so as to obtain an approximate solution with fewer vertices. When calculating the incremental monitoring point set, only the extended part of the network topology is used to obtain the corresponding monitoring points of the new network. The obtained incremental monitoring points can be added directly to the original monitoring point set to form the monitoring point set of the whole new network, which reduces the cost of redeploying all monitoring points. The experimental results show that the number of vertices in the whole monitoring point set obtained by the proposed incremental selection algorithm is basically the same as that of a monitoring point set generated by recalculating the whole new network topology. The proposed algorithm can be effectively applied to the deployment of actual network monitoring points.
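    The sketch below illustrates the incremental idea with a plain degree-greedy cover; the paper's weak-vertex-cover condition is simplified to an ordinary edge cover here and the toy graphs are invented. Old monitors are kept, and the greedy selection is rerun only on the newly added edges.

```python
def greedy_cover_by_degree(edges, already_covered=frozenset()):
    """Greedy cover: repeatedly pick the vertex of highest degree among the
    edges not yet covered (weak-vertex-cover condition simplified here)."""
    uncovered = {e for e in edges if not (set(e) & set(already_covered))}
    chosen = set()
    while uncovered:
        degree = {}
        for u, v in uncovered:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)
        chosen.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return chosen

original = [(1, 2), (1, 3), (2, 3), (3, 4)]
monitors = greedy_cover_by_degree(original)
# network is extended: only the new edges are examined, old monitors are reused
new_edges = [(4, 5), (5, 6), (6, 7)]
incremental = greedy_cover_by_degree(new_edges, monitors)
print(monitors, monitors | incremental)
```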
    Imbalanced network traffic classification method based on improved rotation forest algorithm
    DING Yaojun
    2015, 35(12):  3348-3351.  DOI: 10.11772/j.issn.1001-9081.2015.12.3348
    Aiming at the low accuracy of imbalanced network traffic classification, an improved rotation forest algorithm was proposed on the basis of the rotation forest algorithm, combining the Bootstrap sampling of Bagging with a base classifier selection step based on accuracy ranking. Firstly, feature subsets were divided from the original training set, Bagging was used for sampling, and the principal component coefficient matrix was computed by Principal Component Analysis (PCA). Then, the features of each subset were transformed based on the original training set and the principal component coefficient matrix to generate new training subsets; Bagging was applied to these subsets again to enhance the diversity of the training sets, and C4.5 base classifiers were trained on them. Finally, the base classifiers were evaluated on the testing set, sorted and filtered by overall classification accuracy, and the classifiers with high accuracy were chosen to produce the ensemble result. An imbalanced network traffic data set was chosen for the experiments; precision and recall were used to evaluate the C4.5, Bagging, rotation forest and improved rotation forest classifiers, and the time efficiency of the four algorithms was evaluated by the training and testing time of the models. The experimental results show that the classification accuracy of the improved rotation forest algorithm is above 99.5% on the World Wide Web (WWW), Mail, Attack and Peer-to-Peer (P2P) protocols, and its recall is also higher than those of rotation forest, Bagging and C4.5. The proposed algorithm can be used for network intrusion forensics, maintaining network security and improving the quality of network service.
    Detection method of linear frequency modulated signal based on frequency domain phase variance weighting
    WANG Sixiu, GUO Wenqiang, TANG Jianguo, WANG Xiaojie
    2015, 35(12):  3352-3356.  DOI: 10.11772/j.issn.1001-9081.2015.12.3352
    Concerning the problem of detecting unknown Linear Frequency Modulated (LFM) signals, a detection method based on frequency domain phase variance weighting was proposed according to the feature that the phase of an LFM signal is stable. The method exploits the fact that the phase of an LFM signal frequency unit is stable while the phase of a noise frequency unit is random, and weights each frequency unit by its phase variance. This further suppresses disturbances from the background noise energy, enhances the Signal-to-Noise Ratio (SNR) gain of signal detection, and achieves detection of unknown LFM signals. Under the simulation conditions, when the input average Spectrum Level Ratio (SLR) was greater than -10 dB, the output average SLR of the proposed method was further improved compared with the phase difference alignment method, and the improvement grew as the input average SLR increased. The theoretical analysis and experimental results show that the proposed method can effectively enhance the energy of LFM signals, suppress the background noise energy, and improve the SNR.
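    A rough sketch of phase-variance weighting on synthetic data; the signal parameters and the use of several aligned pulse repetitions are assumptions, not the paper's processing chain. Bins where the chirp keeps the phase stable across snapshots get a weight close to 1, while noise-only bins are suppressed.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 8000.0, 0.1
t = np.arange(int(fs * T)) / fs
pulse = np.cos(2 * np.pi * (500 * t + 0.5 * 20000 * t**2))   # 500 -> 2500 Hz chirp

# several aligned pulse repetitions in noise
snapshots = np.array([pulse + 1.5 * rng.normal(size=t.size) for _ in range(20)])
spectra = np.fft.rfft(snapshots, axis=1)

# circular variance of the phase in each frequency bin: near 0 where the chirp
# keeps the phase stable across snapshots, near 1 where only noise is present
phase = np.angle(spectra)
circ_var = 1.0 - np.abs(np.mean(np.exp(1j * phase), axis=0))

weighted = (1.0 - circ_var) * np.mean(np.abs(spectra), axis=0)
band = slice(int(500 * T), int(2500 * T))     # bins covering the chirp band (10 Hz/bin)
print(weighted[band].mean() / weighted.mean())   # > 1: chirp band is emphasised
```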
    Multiple input multiple output radar orthogonal waveform design of joint frequency-phase modulation based on chaos
    ZHOU Yun, LU Xiaxia, YU Xuelian, WANG Xuegang
    2015, 35(12):  3357-3361.  DOI: 10.11772/j.issn.1001-9081.2015.12.3357
    A single frequency-modulated or phase-modulated waveform based on a chaotic sequence has low waveform complexity, which affects the unpredictability of the chaotic signal, the probability of radar interception, and the anti-interference performance. To solve these problems, a radar waveform with joint frequency-phase modulation based on chaotic sequences was proposed. Firstly, the radar signal was chaotically frequency-coded: a pulse was divided into a series of sub-pulses and a different frequency modulation was applied to each sub-pulse. At the same time, within each frequency-coded sub-pulse, a random initial phase was used in each waveform cycle. The simulation results show that the maximum autocorrelation sidelobe peak of the chaos-based joint frequency-phase modulated radar signal reaches -24.71 dB. Compared with chaos-based frequency modulation or phase modulation alone, the correlation performance of the proposed joint frequency-phase modulation is improved. The experimental results show that the joint frequency-phase modulated chaotic radar waveform combines the advantages of phase modulation and frequency modulation, and is an ideal detection signal with the flat power spectrum of phase modulation and the anti-noise-interference ability of frequency modulation.
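    A toy illustration of chaos-driven joint frequency-phase coding, ending with the peak autocorrelation sidelobe level; the logistic-map sequence, sub-pulse counts and radar parameters are all invented and do not reproduce the paper's design.

```python
import numpy as np

def logistic_map(n, x0=0.37, r=3.99):
    """Generate a chaotic sequence in (0, 1) with the logistic map."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return np.array(xs)

fs, sub_len, n_sub = 100e6, 128, 32          # illustrative parameters
chaos = logistic_map(2 * n_sub)
freqs = 5e6 + 20e6 * chaos[:n_sub]           # chaotic frequency code per sub-pulse
phases = 2 * np.pi * chaos[n_sub:]           # chaotic initial phase per sub-pulse

t = np.arange(sub_len) / fs
pulse = np.concatenate([np.exp(1j * (2 * np.pi * f * t + p))
                        for f, p in zip(freqs, phases)])

acf = np.abs(np.correlate(pulse, pulse, mode="full"))
acf /= acf.max()
peak = acf.argmax()
sidelobes = np.delete(acf, range(peak - sub_len, peak + sub_len))  # mask mainlobe
print("peak sidelobe level: %.1f dB" % (20 * np.log10(sidelobes.max())))
```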
    Advanced computing
    Dynamic power consumption profiling and modeling by structured query language
    GUO Binglei, YU Jiong, LIAO Bin, YANG Dexian
    2015, 35(12):  3362-3367.  DOI: 10.11772/j.issn.1001-9081.2015.12.3362
    In order to build an energy-saving green database, a dynamic power consumption model was proposed based on the resource consumption (Central Processing Unit (CPU) and disk) of the smallest unit of Structured Query Language (SQL). The proposed model profiles the dynamic power consumption and maps the main hardware (CPU, disk) resource consumption to power consumption. The key parameters of the model were fitted by multiple linear regression so as to estimate the dynamic system power in real time and build a unit-unified dynamic power consumption model. The experimental results show that, compared with a model based on the total number of tuples, the total number of CPU instructions better reflects CPU power consumption. When the DataBase Management System (DBMS) monopolizes system resources in a static environment, the average relative error of the model is less than 6% and its absolute error is less than 9%. The proposed dynamic power consumption model is therefore well suited for building energy-saving green databases.
    Efficient memory management algorithm based on segment tree and its space optimization
    WANG Donghui, HAN Jianmin, ZHUANG Jiaqi
    2015, 35(12):  3368-3373.  DOI: 10.11772/j.issn.1001-9081.2015.12.3368
    Most existing work on memory management focuses on efficiency and achieves real-time performance, but suffers from memory fragmentation. To address this problem, an efficient memory management algorithm based on segment trees was proposed. The proposed method builds a memory management segment tree by dividing the memory space into segments, and allocates and reclaims memory efficiently and flexibly based on this segment tree to reduce memory fragmentation. Furthermore, a method was proposed to optimize the space complexity of the segment tree. The experimental results show that the proposed method has advantages in terms of efficiency, memory fragmentation, storage space, and so on.
    MapReduce performance model based on multi-phase dividing
    LI Zhenju, LI Xuejun, YANG Sheng, LIU Tao
    2015, 35(12):  3374-3377.  DOI: 10.11772/j.issn.1001-9081.2015.12.3374
    In order to resolve the low precision and high complexity of existing MapReduce models caused by unreasonable phase partitioning granularity, a multi-phase MapReduce Model (MR-Model) with five partition granularities was proposed. Firstly, the research status of MapReduce models was reviewed. Secondly, a MapReduce job was divided into the five phases of Read, Map, Shuffle, Reduce and Write, and the processing time of each phase was studied. Finally, the prediction performance of MR-Model was tested by experiments. The experimental results show that MR-Model fits the actual MapReduce job execution process. Compared with the two existing models P-Model and H-Model, the time prediction accuracy of MR-Model is improved by 10%-30%; in the Reduce phase, its time prediction accuracy is improved by a factor of 2-3, and the overall performance of MR-Model is better.
    Design and implementation of cloud monitor system based on P2P monitor network
    LI Tengyao, ZHANG Shuiping, ZHANG Yueling, ZHANG Jingyi
    2015, 35(12):  3378-3382.  DOI: 10.11772/j.issn.1001-9081.2015.12.3378
    To solve the performance bottleneck and low reliability of a single core node, a cloud monitor system based on a Peer-to-Peer (P2P) monitor network was designed and implemented. In the hardware deployment, the monitor nodes were encapsulated in application containers and distributed on different racks to build the P2P monitor network. By establishing distributed storage clusters with a non-relational database, remote data access and backup were supported to improve system reliability. In the software implementation, the system was designed hierarchically: data were collected by a combination of push and pull, the trust degree of the data was estimated, the data were stored on distributed nodes, and the hosts in the cloud were managed with threshold control and free-host estimation strategies. The system tests show that the system accounts for only 2.17% of computing resource usage on average, and its average response ratio for read and write requests per millisecond reaches above 93%. The results indicate that the monitor system offers low resource consumption and high read/write efficiency.
    Resource matching maximum set job scheduling algorithm under Hadoop
    ZHU Jie, LI Wenrui, ZHAO Hong, LI Ying
    2015, 35(12):  3383-3386.  DOI: 10.11772/j.issn.1001-9081.2015.12.3383
    Concerning the inefficient execution of jobs with a high proportion of resources in the job scheduling algorithms of the present hierarchical queue structure, a resource matching maximum set algorithm was proposed. The proposed algorithm analyses job characteristics and introduces the percentage of completion, waiting time, priority and rescheduling times as urgency factors. Jobs with a high proportion of resources or a long waiting time are considered preferentially to improve job fairness. Under the condition of a limited amount of available resources, double queues are applied to preferentially select jobs with high urgency values, and the maximum job set is selected from the job sets with different proportions of resources in order to achieve scheduling balance. Compared with the Max-min fairness algorithm, the proposed algorithm can decrease the average waiting time and improve resource utilization. The experimental results show that, with the proposed algorithm, the running time of a same-type job set consisting of jobs with different proportions of resources is reduced by 18.73%, and the running time of jobs with a high proportion of resources is reduced by 27.26%; the corresponding reductions for a mixed-type job set are 22.36% and 30.28%. The results indicate that the proposed algorithm can effectively reduce the waiting time of jobs with a high proportion of resources and improve the overall job execution efficiency.
    Finite element parallel computing based on minimal residual-preconditioned conjugate gradient method
    FU Chaojiang, CHEN Hongjun
    2015, 35(12):  3387-3391.  DOI: 10.11772/j.issn.1001-9081.2015.12.3387
    Finite element analysis for elastic-plastic problem is very time-consuming. A parallel substructure Preconditioned Conjugate Gradient (PCG) algorithm combined with Minimal Residual (MR) smoothing was proposed under the environment of Message Passing Interface (MPI) cluster. The proposed method was based on domain decomposition, and substructure was treated as isolated finite element model via the interface conditions. Throughout the analysis, each processor stored only the information relevant to its substructure and generated the local stiffness matrix. A parallel substructure oriented preconditioned conjugate gradient method was developed, which combined with MR smoothing and diagonal storage scheme. Load balance was discussed and interprocessor communication was optimized in the parallel algorithm. A substepping scheme to integrate elastic-plastic stress-strain relations was used. The errors in the integration process were controlled by adjusting the substep size automatically according to a prescribed tolerance. Numerical example was implemented to validate the performance of the proposed PCG algorithm on workstation cluster. The performance of the proposed PCG algorithm was analyzed and the performance was compared with conventional PCG algorithm. The example results indicate that the proposed algorithm has good speedup and efficiency and is superior in performance to the conventional PCG algorithm. The proposed algorithm is efficient for parallel computing of 3D elastic-plastic problems.
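    For reference, a minimal Jacobi-preconditioned conjugate gradient in Python; the paper's MR smoothing, substructure decomposition and MPI parallelism are omitted, and the test matrix is synthetic.

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=500):
    """Jacobi (diagonal) preconditioned conjugate gradient for SPD systems."""
    M_inv = 1.0 / np.diag(A)                 # diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
Q = rng.normal(size=(200, 200))
A = Q @ Q.T + 200 * np.eye(200)              # well-conditioned SPD test matrix
b = rng.normal(size=200)
x = pcg(A, b)
print(np.linalg.norm(A @ x - b))             # residual norm, ~1e-8
```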
    Data migration model based on RAMCloud hierarchical storage architecture
    GUO Gang, YU Jiong, LU Liang, YING Changtian, YIN Lutong
    2015, 35(12):  3392-3397.  DOI: 10.11772/j.issn.1001-9081.2015.12.3392
    In order to achieve efficient storage of and access to huge amounts of online data under the RAMCloud hierarchical storage architecture, a Migration Model based on Data Significance (MMDS) was proposed. Firstly, the importance of the data itself was calculated based on factors such as data size, time importance and the total amount of user access. Secondly, the potential value of the data was evaluated by user similarity and the importance ranking of the PageRank algorithm used in recommendation systems; the overall importance of the data was determined jointly by its own importance and its potential value. Then, a data migration mechanism was designed based on the data importance. The experimental results show that the proposed model can identify the importance of data and place data hierarchically, and it improves the data access hit rate of the storage system compared with the Least Recently Used (LRU), Least Frequently Used (LFU) and Migration Strategy based on Data Value (MSDV) algorithms. The proposed model can relieve part of the storage pressure and improves the data access performance.
    Lowest label algorithm for minimum cut/maximum flow based on preflow push
    ZHAO Lifeng, YAN Ziheng
    2015, 35(12):  3398-3402.  DOI: 10.11772/j.issn.1001-9081.2015.12.3398
    Aiming at the low execution efficiency caused in part of the network by the backtracking phenomena of the original highest-label preflow push algorithm, a lowest-label preflow push algorithm was proposed. Based on preflow pushing, the proposed algorithm follows a greedy strategy and chooses the lowest-label active node as the adjustment point when selecting active nodes. A backtracking verification method was introduced to terminate backtracking loops and enhance the efficiency of the algorithm. The proposed algorithm can handle various kinds of network graphs, and in the simulation experiments it was more than five times faster than the classic highest-label preflow push algorithm on sparse networks. When applied to image segmentation, the proposed method achieved a speed improvement of more than 50% over the classic algorithm. The proposed lowest-label minimum cut/maximum flow algorithm based on preflow pushing can satisfy large-scale network traffic distribution and the image processing needs of computer vision.
    Information fusion algorithm for Argo buoy profile based on MapReduce
    JIANG Hua, HU Ying
    2015, 35(12):  3403-3407.  DOI: 10.11772/j.issn.1001-9081.2015.12.3403
    The analysis of Argo buoy profiles is not comprehensive when a single Argo buoy is taken as the processing object, and the computation time of uniprocessing methods is long. In order to solve these problems, a new algorithm that takes a latitude-longitude cell as the analysis object and combines MapReduce with principal curve analysis was proposed. In the Map stage, the effective information of Argo buoys was extracted from the large data files and the extracted Argo profiles were classified according to latitude and longitude. In the Reduce stage, the principal Argo profile of each region was generated: the information was first normalized, and then the principal Argo profile representing the regional profile characteristics, consisting of a small number of profile points and lines, was obtained through Kegl's principal curve theory, realizing the information fusion of massive Argo buoys. The proposed algorithm was verified on global Argo buoy sample data. Compared with the traditional uniprocessing-based method, the new algorithm kept the mean residual error within 0.1 under a squared distance of 0.03-0.10, saved 99.4% of the data storage space, and increased the computation speed by 36.4%. The experimental results show that the proposed algorithm can generate principal profiles accurately while reducing the data storage space and effectively improving the computation speed.
    Information security
    Parallel algorithm for homomorphic encryption based on MapReduce
    HU Chi, YANG Geng, YANG Beisi, MIN Zhao'e
    2015, 35(12):  3408-3412.  DOI: 10.11772/j.issn.1001-9081.2015.12.3408
    According to the distributed nature of cloud computing, a parallel homomorphic encryption scheme based on MapReduce was proposed by combining homomorphic encryption with the MapReduce parallel framework under the Hadoop environment. The concrete parallel homomorphic encryption algorithm was implemented, and a theoretical analysis was given to prove the security and correctness of the proposed algorithm. The evaluation experiments on a cloud cluster consisting of 4 computing nodes with 16 Central Processing Units (CPUs) in total show that the data encryption of the parallel homomorphic encryption algorithm can reach a speed-up ratio of 13. The experimental results show that the proposed algorithm can reduce the time cost of data encryption and can be applied to real-time applications.
    Cloud storage system with fine-grained access control and low storage space overhead
    YIN Kaize, WANG Haihang
    2015, 35(12):  3413-3418.  DOI: 10.11772/j.issn.1001-9081.2015.12.3413
    Concerning the confidentiality of data stored in public cloud storage systems and the system performance, a secure and efficient scheme with cryptographic fine-grained access control based on Ciphertext-Policy Attribute-Based Encryption (CP-ABE) was proposed and applied to a cloud storage system. In the proposed scheme, the original data are first divided into a number of slices by a (k,n) algorithm, some of the slices are randomly chosen to be encrypted, and the slices are then published to the cloud storage with only one copy stored. The proposed scheme was shown to improve the performance of user revocation and to reduce the storage space cost, and the security analysis shows that the system is computationally secure. The experimental results show that the data management time for the data owner decreases obviously thanks to the optimized user revocation phase, and the data storage cost also decreases because only one copy of the data is stored. The proposed scheme achieves secure sharing and efficient storage of sensitive data in public cloud storage.
    Dynamic information security evaluation model in mobile Ad Hoc network
    PAN Lei, LI Tingyuan
    2015, 35(12):  3419-3423.  DOI: 10.11772/j.issn.1001-9081.2015.12.3419
    In the field of information security risk evaluation, it is difficult for the traditional static evaluation methods to adapt to the dynamic topology of Mobile Ad hoc NETwork (MANET). In order to solve the problem, a new dynamic reevaluation model was proposed. In the proposed model, the whole system was abstracted into a topology which was comprised of components and access paths. The relations between components were abstracted into three kinds of association relations and four kinds of combination relations. In addition, the methods of security metrics under different relations were provided. When system changed, the influence range of its change and new relation types were determined by taking the changed component as a center. Under that condition, only adjacent components were reevaluated. Then, the new local and whole security metrics were obtained. The experimental results show that the proposed model has higher evaluation efficiency and can decrease evaluation cost greatly.
    Cryptanalysis of two anonymous user authentication schemes for wireless sensor networks
    XUE Feng, WANG Ding, CAO Pinjun, LI Yong
    2015, 35(12):  3424-3428.  DOI: 10.11772/j.issn.1001-9081.2015.12.3424
    Aiming at the problem of designing secure and efficient user authentication protocols with anonymity for wireless sensor networks, based on the widely accepted assumptions about the capabilities of attackers and using the scenarios-based attacking techniques, the security of two recently proposed two-factor anonymous user authentication schemes for wireless sensor networks was analyzed. The following two aspects were pointed out:1) the protocol suggested by Liu etc. (LIU C, GAO F, MA C, et al. User authentication protocol with anonymity in wireless sensor network. Computer Engineering, 2012, 38(22):99-103) cannot resist against offline password guessing attack as the authors claimed and is also subject to a serious design flaw in usability; 2) the protocol presented by Yan etc. (YAN L, ZHANG S, CHANG Y. A user authentication and key agreement scheme for wireless sensor networks. Journal of Chinese Computer Systems, 2013, 34(10):2342-2344) cannot withstand user impersonation attack and offline password guessing attack as well as fall short of user un-traceability. The analysis results demonstrate that, these two anonymous authentication protocols have serious security flaws, which are not suitable for practical applications in wireless sensor networks.
    Software-defined networking-oriented intrusion tolerance controller architecture and its implementation
    HUANG Liang, JIANG Fan, XUN Hao, MA Duohe, WANG Liming
    2015, 35(12):  3429-3436.  DOI: 10.11772/j.issn.1001-9081.2015.12.3429
    In the centralized network control environment of Software-Defined Network (SDN), the control plane suffers from the single point of failure problem. To solve this problem, a controller architecture based on the idea of intrusion tolerance was proposed to improve the availability and reliability of the network by using a redundant and diverse central controller platform. In the proposed architecture, intruded controllers are detected by comparing their messages. Firstly, the key message types and fields to be compared were defined. Then, the messages of different controllers were compared using a consistency judgement algorithm. Finally, controllers producing abnormal messages were isolated and restored. The Mininet-based intrusion tolerance reliability test demonstrates that the architecture can detect and filter abnormal controller messages. The Mininet-based response-delay test shows that the request delay of the underlying network increases by 16% and 42% when the tolerance degree is 1 and 3 respectively. In addition, the Cbench-based response-delay and throughput tests show that the performance of the intrusion tolerance controller lies between those of its subsidiary controllers, such as Ryu and Floodlight, and approaches the better one. In practical applications, the quantity and type of the subsidiary controllers can be configured according to the security level of the application scenario, and the proposed intrusion tolerance controller can satisfy the application requirements on response rate and intrusion tolerance degree.
    K-anonymity privacy-preserving for trajectory in uncertain environment
    ZHU Lin, HUANG Shengbo
    2015, 35(12):  3437-3441.  DOI: 10.11772/j.issn.1001-9081.2015.12.3437
    To comprehensively consider the factors influencing moving objects in an uncertain environment, a k-anonymity privacy-preserving method for trajectories recorded by the automatic identification system was presented. Firstly, an uncertain spatial index model stored in a grid quadtree was established. Then the continuous k-Nearest Neighbor (KNN) query method was used to find trajectories whose areas are similar to that of the current trajectory, and these trajectories were added to the anonymous candidate set. By considering the influence of the network scale on the effectiveness of the anonymous information and the probability of an attacker attacking the trajectory, the optimal exploit chain of the trajectory was generated by a heuristic algorithm to strengthen trajectory privacy preservation. The experimental results show that, compared with the traditional method, the proposed method can decrease the information loss by 20% to 50%, keep the information distortion below 50% as the query range enlarges, and cut the cost loss by 10% to 30%. The proposed method can effectively prevent malicious attackers from accessing trajectory information, and can be applied to official vessels for law enforcement at sea.
    Efficient partitioning error concealment method for I frame
    WANG Chaolin, ZHOU Yu, WANG Xiaodong, ZHANG Lianjun
    2015, 35(12):  3442-3446.  DOI: 10.11772/j.issn.1001-9081.2015.12.3442
    The existing error concealment algorithms for I frames find it difficult to balance the quality of the recovered image and the algorithm complexity. To solve this problem, an efficient intra-frame partitioning error concealment method was proposed. Firstly, according to the motion correlation between video frames, the lost macroblocks were divided into motion blocks and static blocks. For static blocks, the frame copy error concealment method was used. Motion blocks were further divided into smooth blocks and texture blocks according to the texture information of the correctly decoded macroblocks: the bilinear interpolation method was adopted to restore the smooth blocks, and the more delicate Weighted Template matching with Exponentially distributed weights (WTE) method was used to conceal the texture blocks. The experimental results show that, compared with the WTE method, the proposed method improves the Peak Signal-to-Noise Ratio (PSNR) by 2.6 dB on average and decreases the computational complexity by 90% on average. For video sequences with different features and resolutions in continuous scenes, the proposed method shows good applicability.
    Derivation and spectrum analysis of a kind of low weight spectral annihilator
    HU Jianyong, ZHANG Wenzheng
    2015, 35(12):  3447-3449.  DOI: 10.11772/j.issn.1001-9081.2015.12.3447
    To mount an effective fast discrete Fourier spectra attack on a stream cipher, it is necessary to find a low spectral weight relation or a low spectral weight annihilator. By using the discrete Fourier transform of periodic sequences, a necessary and sufficient condition for sequences satisfying the product relation was obtained. On this basis, by defining the spectral cycle difference, a kind of low spectral weight relation and annihilator was derived. At the same time, the spectral properties of m-sequences were studied, a method to quickly calculate the spectral space was proposed, and an example was given.
    Error detection algorithm of program loop control
    ZOU Yu, XUE Xiaoping, ZHANG Fang, PAN Yong, PAN Teng
    2015, 35(12):  3450-3455.  DOI: 10.11772/j.issn.1001-9081.2015.12.3450
    In program loop control, errors occur in which memory data is not updated, the loop exits early, or the loop exits late. In order to ensure the correctness of program execution in safety-critical systems, a new error detection algorithm for program loop control based on the ANBD-code (arithmetic code with signature and timestamp) was proposed. Through the ANBD-code, program variables are encoded into signed code words, and errors in loop control are detected by verifying the code signature; the error of memory data not being updated can be detected by using the time label of the ANBD-code. In addition, on the basis of the ANBD-code, the errors of the loop exiting early or late can be detected by using the online statement block signature allocation algorithm, the block signature function and the variable signature compensation function. The theoretical probability of an undetected error is 1/A, where A is the coding prime. Primes between 97 and 10993 were selected to test the probability of an undetected error, and the Normalized Mean Square Error (NMSE) between the theoretical model and the test results was about -30 dB. The test results show that the proposed algorithm can effectively detect all kinds of errors in loop control, and the probability of an undetected error reaches 10^-9 when the prime A is close to 2^32. The proposed algorithm can satisfy the requirements of safety-critical systems.
    Artificial intelligence
    Multi-attribute decision-making method of intuitionistic fuzziness based on entropy and co-correlation degree
    WANG Feng, MAO Junjun, HUANG Chao
    2015, 35(12):  3456-3460.  DOI: 10.11772/j.issn.1001-9081.2015.12.3456
    Multi-attribute decision-making problems in which the decision information is an Intuitionistic Fuzzy Set (IFS) and the attribute weights are completely unknown were studied, and a decision-making method based on Intuitionistic Fuzzy (IF) entropy and co-correlation degree was proposed. Considering the intuitionism and fuzziness of IFS, an improved IF entropy was defined from the axiomatic definition. Furthermore, based on the criterion that the total uncertain information of all attributes should be minimized, a nonlinear programming model was established using the proposed IF entropy, and the formula of the attribute weights was obtained. From the structure of the statistical correlation coefficient between variables, the concept of the co-correlation degree of IFS was proposed, and properties similar to those of the correlation coefficient were discussed; the formula of the weighted co-correlation degree between each object and the ideal object was then derived. Finally, a new multi-attribute decision-making approach was presented and successfully applied to a teacher selection example: by calculating the co-correlation degree of each candidate, the best candidate was determined and the optimal decision was achieved. With reasonable operations, reliable calculation results and ease of implementation, the proposed method can be used for a variety of decision problems.
    Improvement of constraint conditions and new constructional method for intuitionistic fuzzy entropy
    ZHAO Fei, WANG Qingshan, HAO Wanliang
    2015, 35(12):  3461-3464.  DOI: 10.11772/j.issn.1001-9081.2015.12.3461
    To resolve the irrationality in the definition and measurement of intuitionistic fuzzy entropy, a new axiomatic definition for intuitionistic fuzzy entropy was proposed, and a new measuring formula was structured. Firstly, the existing differences in research of axiomatic definition for intuitionistic fuzzy entropy were analyzed, its defects and insufficiency were also pointed out. Secondly, an improved axiomatic definition for intuitionistic fuzzy entropy and a calculation formula of intuitionistic fuzzy entropy were proposed. Finally, the new formula was compared with the existing formulas for intuitionistic fuzzy entropy by examples. The results of the example analysis show that, the proposed entropy formula can reflect better the uncertainty and fuzziness of intuitionistic fuzzy sets, and the capability to discriminate the uncertainty of intuitionistic fuzzy sets is stronger.
    Generalized intuitionistic fuzzy geometric Bonferroni mean and its applications for multi-attribute decision making
    MA Qinggong, WANG Feng
    2015, 35(12):  3465-3471.  DOI: 10.11772/j.issn.1001-9081.2015.12.3465
    Concerning the problem of information aggregation in the intuitionistic fuzzy environment, a new Generalized Intuitionistic Fuzzy Geometric Bonferroni Mean (GIFGBM) operator was proposed on the basis of the Archimedean T-norm and S-norm. The proposed operator considered the importance of each attribute and could capture the interrelationships among attributes. Firstly, based on the intuitionistic fuzzy operational laws with Archimedean T-norm and S-norm, the GIFGBM was investigated. Then, its desirable properties were studied, including idempotency, monotonicity, boundedness and permutation invariance. Some special cases of the GIFGBM were further discussed in detail. Finally, an approach to intuitionistic fuzzy multi-attribute decision making was developed with the proposed aggregation operator, and the presented method was applied to research on the development of the regional economy. The experimental results show that the proposed decision making method is practical and effective, and the decision makers can make decision by their attitude.
    Stopping criterion of active learning for scenario of single-labeling mode
    YANG Ju, LI Qingwen, YU Hualong
    2015, 35(12):  3472-3476.  DOI: 10.11772/j.issn.1001-9081.2015.12.3472
    In order to solve the problem that selected accuracy stopping criterion can only be applied in the scenario of batch mode-based active learning, an improved stopping criterion for single-labeling mode was proposed. The matching relationship between each predicted label and the corresponding real label existing in a pre-designed number of learning rounds was used to approximately estimate and calculate the selected accuracy. The higher the match quality was, the higher the selected accuracy was. Then, the variety of selected accuracy could be monitored by moving a sliding-time window. Active learning would stop when the selected accuracy was higher than a pre-designed threshold. The experiments were conducted on 6 baseline data sets with active learning algorithm based on Support Vector Machine (SVM) classifier for indicating the effectiveness and feasibility of the proposed criterion. The experimental results show that when pre-designing an appropriate threshold, active learning can stop at the right time. The proposed method expands the applications of selected accuracy stopping criterion and improves its practicability.
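    A minimal sketch of such a criterion (the window length, threshold and toy match sequence are invented, not the paper's settings): keep a sliding window of predicted-versus-provided label matches and stop once the window accuracy exceeds a threshold.

```python
import random
from collections import deque

def should_stop(history, window=20, threshold=0.9):
    """history: booleans, True when the label predicted for the queried sample
    matched the label the oracle then provided (approximates selected accuracy)."""
    recent = deque(history, maxlen=window)
    return len(recent) == window and sum(recent) / window >= threshold

# toy single-labeling loop: matches become more frequent as the model improves
random.seed(0)
matches = []
for round_no in range(200):
    p_match = min(0.5 + round_no * 0.005, 0.98)   # stand-in for a real learner
    matches.append(random.random() < p_match)
    if should_stop(matches):
        print("stop active learning at round", round_no)
        break
```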
    Overlapping community discovering algorithm based on latent features
    SUN Huixia, LI Yuexin
    2015, 35(12):  3477-3480.  DOI: 10.11772/j.issn.1001-9081.2015.12.3477
    In order to solve the problem of exponential increase of label space, an overlapping community discovery algorithm based on latent feature was proposed. Firstly, a generative model for network including overlapping communities was proposed. And based on the proposed generative model, an optimal object function was presented by maximizing the generative probability of the whole network, which was used to infer the latent features for each node in the network. Next, the network was induced into a bipartite graph, and the lower bound of feature number was analyzed, which was used to optimize the object function. The experiments show that, the proposed overlapping community discovering algorithm can improve the recall greatly while keeping the precision and execution efficiency unchanged, which indicates that the proposed algorithm is effective with the exponential increase of label space.
    Fine-grained sentiment analysis oriented to product comment
    LIU Li, WANG Yongheng, WEI Hang
    2015, 35(12):  3481-3486.  DOI: 10.11772/j.issn.1001-9081.2015.12.3481
    Traditional sentiment analysis is coarse-grained and ignores comment targets, while existing fine-grained sentiment analysis ignores sentences with multiple targets and multiple opinions. To solve these problems, a fine-grained sentiment analysis method based on Conditional Random Field (CRF) and syntax tree pruning was proposed. A parallel tri-training method based on MapReduce was used to label the corpus autonomously. A CRF model integrating various features was used to extract positive/negative opinions and the targets of the opinions from comment sentences. To deal with multi-target and multi-opinion sentences, syntax tree pruning was employed by building a domain ontology and a syntactic path library to eliminate irrelevant opinion targets and extract the correct appraisal expressions. Finally, a visual product attribute report was generated. After syntax tree pruning, the accuracy of the proposed method on sentiment elements and appraisal expressions reaches approximately 89%. The experimental results on the two product domains of mobile phones and cameras show that the proposed method outperforms the traditional methods in both sentiment analysis accuracy and training performance.
    User influence algorithm based on user content and relational structure
    MA Huifang, SHI Yakai, XIE Meng, ZHUANG Fuzhen
    2015, 35(12):  3487-3490.  DOI: 10.11772/j.issn.1001-9081.2015.12.3487
    In order to rapidly detect information dissemination paths and alleviate the influence of malicious information, a user Content and Structure-based Influence Algorithm with Iteration (CSIAI) was proposed. The word-user document similarity was computed iteratively by modeling the content of users' microblogs. Based on the follow and attention behaviors in the microblog network, user relational structures were established and user influence weights were calculated to obtain the adjacency matrix of user influence, and the k nodes with the highest influence were extracted as the information transmission path. In the detection simulation experiments, the influence coverage rate and response time were adopted as evaluation indexes, and the parameters α and β of CSIAI were determined based on the extended new knowledge base. With the increase of users, the influence coverage rate and response time of the proposed CSIAI are superior to those of PageRank, CELF and the Content and Structure-based Influence Algorithm (CSIA) without iteration. The experimental results show that the proposed CSIAI can effectively detect the dissemination of microblog information.
    Multi-Agent path planning algorithm based on hierarchical reinforcement learning and artificial potential field
    ZHENG Yanbin, LI Bo, AN Deyu, LI Na
    2015, 35(12):  3491-3496.  DOI: 10.11772/j.issn.1001-9081.2015.12.3491
    Aiming at the slow convergence and low efficiency of path planning algorithms, a multi-Agent path planning algorithm based on hierarchical reinforcement learning and the artificial potential field was proposed. Firstly, the multi-Agent operating environment was regarded as an artificial potential field, and the potential energy of every point, which represents the maximal reward obtainable under the optimal strategy, was determined by a priori knowledge. Then, the strategy update process was limited to a smaller local space or a lower-dimensional high-level space, using learning without an environment model and the partial updates of hierarchical reinforcement learning, to enhance the performance of the learning algorithm. Finally, the proposed algorithm was tested on the taxi problem in a grid environment; to be closer to the real environment and increase the portability of the algorithm, it was also verified in a three-dimensional simulation environment. The experimental results show that the algorithm converges quickly and the convergence procedure is stable.
    Probabilistic matrix factorization algorithm based on AdaBoost
    PENG Xingxiong, XIAO Ruliang, ZHANG Guigang
    2015, 35(12):  3497-3501.  DOI: 10.11772/j.issn.1001-9081.2015.12.3497
    Concerning the poor generalization ability (recommendation performance for new users and items) and low prediction accuracy of Probabilistic Matrix Factorization (PMF) in recommender systems, a Probabilistic Matrix Factorization algorithm based on AdaBoost (AdaBoostPMF) was proposed. Firstly, an initial weight was assigned to each sample. Secondly, in each round the feature vectors of users and items were learned by PMF with stochastic gradient descent, and the global mean and standard deviation of the prediction error were calculated; the sample weights were then adaptively adjusted by AdaBoost from a global perspective, which made the proposed algorithm pay more attention to the training samples with larger prediction errors. Finally, the sample weights were applied to the prediction error, which steers the feature vectors of users and items in a more appropriate optimization direction. Compared with the traditional PMF algorithm, the proposed AdaBoostPMF algorithm improves the prediction precision by about 2.5% on average. The experimental results show that, by weighting the samples with larger prediction errors, the proposed algorithm can better fit the user and item feature vectors and improve the prediction accuracy, and it can be effectively applied to personalized recommendation.
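    The sketch below follows the spirit of the method, weighted matrix factorization updates with per-sample weights boosted for badly predicted ratings, rather than the paper's exact AdaBoost update; the data, learning rate and reweighting rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 30, 40, 5
true_u, true_v = rng.normal(size=(n_users, k)), rng.normal(size=(n_items, k))
obs = [(u, i, true_u[u] @ true_v[i] + rng.normal(0, 0.1))
       for u in range(n_users) for i in range(n_items) if rng.random() < 0.3]

U, V = 0.1 * rng.normal(size=(n_users, k)), 0.1 * rng.normal(size=(n_items, k))
w = np.ones(len(obs)) / len(obs)                     # per-sample weights
lr, reg = 0.02, 0.05

for rnd in range(30):
    for (u, i, r), wi in zip(obs, w):                # weighted SGD pass of PMF
        pu = U[u].copy()
        err = r - pu @ V[i]
        U[u] += lr * (wi * len(obs) * err * V[i] - reg * pu)
        V[i] += lr * (wi * len(obs) * err * pu - reg * V[i])
    # boost the weights of badly predicted samples (AdaBoost-flavoured step)
    errs = np.array([abs(r - U[u] @ V[i]) for u, i, r in obs])
    w *= np.exp((errs - errs.mean()) / (errs.std() + 1e-12) * 0.1)
    w /= w.sum()

print("final RMSE:", np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in obs])))
```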
    Non-equilibrium mass diffusion recommendation algorithm based on popularity
    GUO Qiang, SONG Wenjun, HU Zhaolong, HOU Lei, ZHANG Yilu, CHEN Fangjiao
    2015, 35(12):  3502-3505.  DOI: 10.11772/j.issn.1001-9081.2015.12.3502
    In order to make better use of product heterogeneity in recommendation algorithms, a modified mass diffusion algorithm was presented by considering the effect of object popularity on user preference prediction. By introducing a tunable parameter of product popularity and simulating the mass diffusion process on the user-product bipartite network, the effect of product popularity was characterized quantitatively. The experimental results on the three empirical data sets MovieLens, Netflix and Last.FM show that, compared with the traditional mass diffusion method, the proposed algorithm improves the average ranking score by 25.6%, 10.96% and 1.2% respectively, and increases the diversity of the recommendation lists by 59.30%, 53.07% and 8.59% respectively. The proposed non-equilibrium mass diffusion algorithm yields more practical results.
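    A compact sketch of two-step mass diffusion on a user-item bipartite network with a tunable popularity exponent; the abstract does not specify where exactly the parameter enters the paper's formulation, so its placement here is an assumption, and the adjacency matrix is random toy data.

```python
import numpy as np

def mass_diffusion_scores(A, user, beta=0.0):
    """Two-step mass diffusion (items -> users -> items); item scores are
    rescaled by item_degree**(-beta) to tune the role of popularity."""
    k_item = A.sum(axis=0)                      # item degrees
    k_user = A.sum(axis=1)                      # user degrees
    resource = A[user].astype(float)            # items the target user collected
    to_users = A @ (resource / np.where(k_item > 0, k_item, 1))
    back_to_items = A.T @ (to_users / np.where(k_user > 0, k_user, 1))
    scores = back_to_items * np.power(np.where(k_item > 0, k_item, 1), -beta)
    scores[A[user] > 0] = 0.0                   # do not re-recommend collected items
    return scores

rng = np.random.default_rng(1)
A = (rng.random((50, 80)) < 0.1).astype(int)    # toy user-item adjacency matrix
print(np.argsort(mass_diffusion_scores(A, user=0, beta=0.3))[::-1][:5])
```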
    Improvement of term frequency-inverse document frequency algorithm based on Document Triage
    LI Zhenjun, ZHOU Zhurong
    2015, 35(12):  3506-3510.  DOI: 10.11772/j.issn.1001-9081.2015.12.3506
    The Term Frequency-Inverse Document Frequency (TF-IDF) algorithm does not consider the importance of the index terms themselves within a document when computing their weights. To solve this problem, users' reading behaviors were utilized to improve TF-IDF. By introducing Document Triage into TF-IDF, the Interest Profile Manager (IPM) was used to collect data about users' reading behaviors and compute document scores; users' annotations are also quite important in the target text, since they reflect the users' interests. The improved term weighting algorithm, Document Triage-Term Frequency-Inverse Document Frequency (DT-TF-IDF), was therefore proposed by introducing document scores and users' annotations into TF-IDF and giving a greater weight to annotated terms. The experimental results show that the recall, the precision and their harmonic mean of DT-TF-IDF are all higher than those of the traditional TF-IDF algorithm. The proposed DT-TF-IDF algorithm is more effective than TF-IDF and improves the accuracy of text similarity calculation.
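    A toy sketch of the weighting idea; the document score values, annotation sets and the multiplicative combination are assumptions rather than the paper's exact formula. The TF-IDF weight of a term is scaled by the document's reading-behaviour score and boosted when the term was annotated.

```python
import math
from collections import Counter

docs = [
    "sensor network energy routing protocol".split(),
    "energy efficient clustering routing".split(),
    "image segmentation with neural network".split(),
]
doc_scores = [1.4, 1.0, 0.8]          # from reading behaviour (IPM); made-up values
annotations = [{"routing"}, set(), {"segmentation"}]
ANNOTATION_BOOST = 2.0                # assumed extra weight for annotated terms

n_docs = len(docs)
df = Counter(term for d in docs for term in set(d))   # document frequencies

def dt_tf_idf(doc_id):
    tf = Counter(docs[doc_id])
    weights = {}
    for term, count in tf.items():
        idf = math.log(n_docs / df[term])
        boost = ANNOTATION_BOOST if term in annotations[doc_id] else 1.0
        weights[term] = (count / len(docs[doc_id])) * idf * doc_scores[doc_id] * boost
    return weights

print(dt_tf_idf(0))
```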
    Detecting community in bipartite network based on cluster analysis
    ZHANG Qiangqiang, HUANG Tinglei, ZHANG Yinming
    2015, 35(12):  3511-3514.  DOI: 10.11772/j.issn.1001-9081.2015.12.3511
    Concerning the low accuracy of community detection in bipartite networks and the strong dependence on additional parameters, a new community detection algorithm based on the idea of spectral clustering and relying only on the original network topology was proposed. The proposed algorithm mines communities by mapping the bipartite network to a one-mode network and substituting the resource distribution matrix for the traditional similarity matrix, which effectively preserves the information of the original network, improves the input of the spectral clustering algorithm, and raises the accuracy of community detection. The modularity function was applied to the clustering analysis, and the modularity was used to measure the quality of community mining, which effectively solves the problem of automatically determining the number of clusters. The experimental results on real and artificial networks show that, compared with the ant colony optimization algorithm, the edge clustering coefficient algorithm and others, the proposed algorithm can accurately identify the number of communities in a bipartite network and obtain higher-quality community partitions without previously known parameters. The proposed algorithm can be applied to the deep understanding of bipartite networks, such as recommendation and influence analysis.
    Computer software technology
    Energy consumption optimization of stochastic real-time tasks for dependable embedded system
    PAN Xiong, JIANG Wei, WEN Liang, ZHOU Keran, DONG Qi, WANG Junlong
    2015, 35(12):  3515-3519.  DOI: 10.11772/j.issn.1001-9081.2015.12.3515
    Asbtract ( )   PDF (864KB) ( )  
    References | Related Articles | Metrics
    Taking the Worst Case Execution Time (WCET) as the actual execution time of a task may cause a great waste of system resources. In order to solve this problem, a method based on a stochastic task probability model was proposed. Firstly, Dynamic Voltage and Frequency Scaling (DVFS) was utilized to reduce energy consumption while considering the effect of DVFS on system reliability, the specific probability distribution of task execution time and the task's requirement of No-Deadline Violation Probability (NDVP). Then, a new optimization algorithm with polynomial running time was proposed based on dynamic programming. In addition, the execution overhead of the algorithm was reduced by designing state-eliminating rules. The simulation results show that, compared with the optimal algorithm under the WCET model, the proposed algorithm can reduce system energy consumption by more than 30%. The experimental results indicate that considering the random execution time of tasks can save system resources while ensuring system reliability.
    Application of time-Petri net for process modeling of point-of-care testing
    WANG Lei, WANG Bidou, LUO Gangyin, NIE Lanshun, ZHAN Dechen, TIAN Haoran
    2015, 35(12):  3520-3523.  DOI: 10.11772/j.issn.1001-9081.2015.12.3520
    Asbtract ( )   PDF (699KB) ( )  
    References | Related Articles | Metrics
    Concerning the problems of designing and modeling the process of a Point-Of-Care Testing (POCT) system, a concurrent system modeling and analysis method based on the Time-Petri Net (TPN) was proposed, which builds a more accurate information model for the process design of POCT systems. The activity holding duration was introduced into the classical TPN, and a TPN modeling method for the POCT control process was proposed. A scheduling simulator embedded in the Petri net model was also designed to assist the analysis and optimization of the POCT control process. The simulation results show that the proposed TPN modeling method can satisfy the practical requirements of process modeling for parallel multi-class POCT control systems in terms of reachable nodes and running time, and provides a powerful tool for process simulation and analysis. Furthermore, the proposed TPN can assist system designers in optimizing POCT systems.
    Trust evaluation method for component reuse based on component use dependency relation
    WANG Yanling, ZENG Guosun
    2015, 35(12):  3524-3529.  DOI: 10.11772/j.issn.1001-9081.2015.12.3524
    Asbtract ( )   PDF (970KB) ( )  
    References | Related Articles | Metrics
    The number of components in networked component libraries is growing continuously, and it is hard for users to select high-quality components from the mass of components of uneven quality. In order to solve this problem, a reuse trust evaluation method based on component use dependency relations was proposed, in which the component base was used as an evidence base. Firstly, component dependency relations were collected from the evidence base. Secondly, a basic trust function was defined for each component, and on this basis a different believable weight was set for each piece of evidence according to the source of the component dependency relation. Finally, the final trust value of a component was generated from the obtained results by a specific conversion algorithm. In the instance analysis, the component evaluation result of the proposed method was consistent with the expectation and with the conclusion obtained by the internal and external quality model of the reference components, while the proposed method greatly reduced the workload of trustworthiness evaluation and improved the evaluation efficiency. The analysis results show that the proposed method can objectively reflect the credibility of components, and can be used as a trusted measurement mechanism for component retrieval in a component library, which helps to realize high-quality retrieval and reuse of components.
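    As a rough illustration of combining dependency evidence with believable weights into a single trust value, the Python sketch below averages per-source reuse outcomes with source-specific weights. The evidence structure, the weights and the weighted-average conversion are assumptions; the paper's basic trust function and conversion algorithm are not reproduced here.
```python
def component_trust(evidence, weights, base_trust=0.5):
    """Weighted combination of dependency evidence into a trust value (a sketch).

    evidence: maps evidence source -> list of 0/1 reuse outcomes for the component;
    weights: maps evidence source -> believable weight of that source."""
    num, den = 0.0, 0.0
    for source, outcomes in evidence.items():
        if not outcomes:
            continue
        w = weights.get(source, 1.0)
        num += w * sum(outcomes) / len(outcomes)   # success rate reported by this source
        den += w
    return num / den if den > 0 else base_trust    # fall back to a prior trust value

# Example: evidence from direct use is weighted more than transitive dependencies.
trust = component_trust(
    {"direct_use": [1, 1, 0, 1], "transitive_use": [1, 0]},
    {"direct_use": 2.0, "transitive_use": 1.0})
```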
    Fault tolerance as a service method in cloud platform based on virtual machine deployment policy
    LIU Xiaoxia, LIU Jing
    2015, 35(12):  3530-3535.  DOI: 10.11772/j.issn.1001-9081.2015.12.3530
    Asbtract ( )   PDF (930KB) ( )  
    References | Related Articles | Metrics
    Concerning the problem of how to make full use of the resources in cloud infrastructure to satisfy the various and highly reliable fault-tolerance requirements of cloud application systems and their tenants, a fault tolerance as a service method in the cloud platform, oriented to cloud application tenants and service providers, was proposed based on a virtual machine deployment policy. According to the specific fault-tolerance requirements of cloud application tenants, suitable fault tolerance methods with corresponding fault-tolerance levels were adopted. Then, the revenue and resource usage of the service provider were computed and optimized. Based on this analysis, the virtual machines providing fault-tolerant services were deployed so as to make full use of resources at the virtual machine level and provide more reliable fault-tolerant services for cloud application systems and their tenants. The experimental results show that the proposed method can guarantee the revenue of service providers, and achieve more flexible and more reliable fault-tolerant services for cloud application systems with multiple tenants.
    Clone code detection based on Levenshtein distance of token
    ZHANG Jiujie, WANG Chunhui, ZHANG Liping, HOU Min, LIU Dongsheng
    2015, 35(12):  3536-3543.  DOI: 10.11772/j.issn.1001-9081.2015.12.3536
    Asbtract ( )   PDF (1361KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems that few existing clone code detection tools target Type-3 clones and that their efficiency is low, an effective clone code detection method for Type-3 based on the Levenshtein distance of token sequences was proposed. Type-1, Type-2 and Type-3 clone codes can all be detected efficiently by the proposed method. Firstly, the source code of a subject system was tokenized into token sequences with a specified code size. Secondly, each definite-sized substring of the token sequences was mapped to a corresponding index. Thirdly, clone pairs were built by the Levenshtein distance algorithm and clone groups were built by the disjoint-set algorithm on the basis of querying the mapping information. Finally, feedback information about the detected clone codes was given. A prototype tool named FClones was implemented. It was evaluated by a code mutation-based framework and compared with two state-of-the-art tools, SimCad and NiCad. The experimental results show that the recall of FClones is equal to or greater than 95% and its precision is not lower than 98% in detecting all three types of clones, and that FClones performs better than the other tools in detecting Type-3 clones.
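    The core distance computation is the classical Levenshtein algorithm applied to token sequences rather than characters. The sketch below shows that computation together with a normalized-distance clone test; the dissimilarity threshold is an illustrative assumption, not the tool's configured value.
```python
def token_levenshtein(a, b):
    """Dynamic-programming Levenshtein distance over token sequences a and b."""
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        cur = [i]
        for j, tb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ta != tb)))  # substitution
        prev = cur
    return prev[-1]

def is_clone_pair(tokens_a, tokens_b, max_dissimilarity=0.3):
    """Treat two fragments as a (Type-3) clone pair when the normalized token
    edit distance stays below an assumed dissimilarity threshold."""
    dist = token_levenshtein(tokens_a, tokens_b)
    return dist / max(len(tokens_a), len(tokens_b), 1) <= max_dissimilarity
```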
    Virtual reality and digital media
    Real-time object tracking method based on multi-channel kernel correlation filter
    HU Zhaohua, XING Weiguo, HE Jun, ZHANG Xiuzai
    2015, 35(12):  3544-3549.  DOI: 10.11772/j.issn.1001-9081.2015.12.3544
    Asbtract ( )   PDF (1057KB) ( )  
    References | Related Articles | Metrics
    Most existing algorithms have to build complex models and draw a large number of training samples to achieve accurate object tracking, which produces a large amount of computation and is not conducive to real-time tracking. In order to solve this problem, a real-time tracking method based on a multi-channel kernel correlation filter was presented. Firstly, the target information of the video frames was trained by kernelized ridge regression to obtain the filter template. Secondly, the filter template was used to perform a correlation measure on the candidate region of the frame to be detected. Finally, the most relevant location was taken as the tracking result, and the independent responses of the multiple channels were weighted and summed to handle multi-channel input. A large number of comparison experiments with existing tracking methods show that the proposed method guarantees tracking accuracy and that its tracking speed has an obvious advantage under different challenge factors. By virtue of the correlation filter, the proposed method avoids extracting a large number of samples and replaces the time-domain correlation operation with an element-wise product in the frequency domain, which greatly reduces the computational complexity and makes the tracking speed fully meet the demands of real-time scenarios.
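    The frequency-domain shortcut mentioned above is the correlation theorem: correlation in the time domain becomes an element-wise product in the frequency domain. The sketch below shows this detection step for a single channel; the kernel mapping and the multi-channel weighting of the proposed method are omitted, and the single-channel linear form is an assumption for illustration.
```python
import numpy as np

def correlation_response(template, patch):
    """Correlation-filter detection via FFTs: corr(t, p) = IFFT(conj(FFT(t)) * FFT(p)).

    template, patch: 2-D float arrays of the same shape.
    Returns the (row, col) location of the response peak."""
    F_t = np.fft.fft2(template)
    F_p = np.fft.fft2(patch)
    response = np.real(np.fft.ifft2(np.conj(F_t) * F_p))   # circular cross-correlation
    return np.unravel_index(np.argmax(response), response.shape)
```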
    Improved target tracking algorithm based on kernelized correlation filter
    YU Liyang, FAN Chunxiao, MING Yue
    2015, 35(12):  3550-3554.  DOI: 10.11772/j.issn.1001-9081.2015.12.3550
    Asbtract ( )   PDF (798KB) ( )  
    References | Related Articles | Metrics
    Focusing on the issue that the Kernelized Correlation Filter (KCF) tracking algorithm has poor performance in handling scale-variant targets, a multi-scale tracking algorithm called Scale-KCF (SKCF), based on the Correlation Filter (CF) and a multi-scale image pyramid, was proposed. Firstly, the occlusion status of the target was obtained through the response of the conventional KCF classifier, and a multi-scale image pyramid was built for the occluded target. Secondly, the scale information of the target was obtained by calculating the maximum response of the correlation filter on the multi-scale image pyramid. Finally, the appearance model and the scale model of the target were updated with the newly located target. The experimental results in comparison with state-of-the-art trackers such as Structured Output tracking with kernels (Struck), KCF, Tracking-Learning-Detection (TLD) and Multiple Instance Learning (MIL) demonstrate that the proposed SKCF tracker achieves higher accuracy and overlap rate than the other algorithms. Meanwhile, the proposed tracker can be widely applied to target tracking and achieves highly precise results.
    Robust tracking operator using augmented Lagrange multiplier
    LI Feibin, CAO Tieyong, HUANG Hui, WANG Wen
    2015, 35(12):  3555-3559.  DOI: 10.11772/j.issn.1001-9081.2015.12.3555
    Asbtract ( )   PDF (970KB) ( )  
    References | Related Articles | Metrics
    Focusing on the problem of robust video object tracking, a robust generative algorithm based on sparse representation was proposed. Firstly, object and background templates were constructed by extracting image features, and sufficient candidates were acquired by random sampling at each frame. Secondly, the sparse coefficient vector used to build the similarity map was obtained by a novel optimization formulation, the multi-task reverse sparse representation formulation, which searches multiple subsets of the whole candidate set to simultaneously reconstruct multiple templates with minimum error; a customized Augmented Lagrange Multiplier (ALM) method was derived to solve this L1-minimization problem within a few iterations. Finally, additive pooling was proposed to extract discriminative information from the similarity map and effectively select as the tracking result the candidate that is most similar to the object templates and most different from the background templates, and the tracking was implemented within the Bayesian filtering framework. Moreover, a simple but effective update mechanism was designed to update the object and background templates so as to handle appearance variations caused by illumination change, occlusion, background clutter and motion blur. Both qualitative and quantitative evaluations on a variety of challenging sequences demonstrate that the tracking accuracy and stability of the proposed algorithm are improved compared with other tracking algorithms, and that the proposed algorithm can effectively handle target tracking in scenes with illumination and scale changes, occlusion, complex background, and so on.
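    For readers unfamiliar with augmented Lagrangian treatments of L1 problems, the sketch below solves a generic L1-regularized reconstruction with a standard scaled ADMM (an augmented Lagrange multiplier scheme). It is not the paper's customized ALM for the multi-task reverse sparse formulation; the penalty parameter, regularization weight and iteration count are assumptions.
```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_min_admm(A, b, lam=0.1, rho=1.0, iters=50):
    """Solve min_x 0.5*||A x - b||^2 + lam*||x||_1 with scaled ADMM (a sketch)."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    chol = np.linalg.cholesky(AtA + rho * np.eye(n))      # factor once, reuse each iteration
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))  # x-update (least squares)
        z = soft_threshold(x + u, lam / rho)                     # z-update (L1 prox)
        u += x - z                                               # dual (multiplier) update
    return z
```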
    Object detection based on visual saliency map and objectness
    LI Junhao, LIU Zhi
    2015, 35(12):  3560-3564.  DOI: 10.11772/j.issn.1001-9081.2015.12.3560
    Asbtract ( )   PDF (889KB) ( )  
    References | Related Articles | Metrics
    A novel salient object detection approach based on a visual saliency map and objectness was proposed for detecting salient objects in images. For each input image, a number of bounding boxes with high objectness scores were exploited to estimate the rough object location, and a scheme transferring the bounding box-level objectness scores to the pixel level was used to weight the input saliency map. The input saliency map and the weighted saliency map were adaptively binarized, and the convex hull algorithm was used to obtain the maximum search region and the seed region, respectively. Finally, a globally optimal solution was obtained by combining edge density with the search region and the seed region. The experimental results on the public MSRA-B dataset with 5000 images show that the proposed approach outperforms the maximum saliency region method, the region diversity maximization method and the objectness detection method in terms of precision, recall and F-measure.
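    One simple way to transfer box-level objectness to the pixel level is to let each pixel accumulate the scores of the boxes covering it and normalize the result; the sketch below uses this accumulation rule as an illustrative assumption, not as the paper's exact scheme.
```python
import numpy as np

def pixel_objectness(boxes, scores, height, width):
    """Accumulate box-level objectness scores into a per-pixel map in [0, 1].

    boxes: list of integer (x0, y0, x1, y1) boxes; scores: matching objectness scores."""
    acc = np.zeros((height, width), dtype=float)
    for (x0, y0, x1, y1), s in zip(boxes, scores):
        acc[y0:y1, x0:x1] += s                    # every covered pixel collects the score
    return acc / acc.max() if acc.max() > 0 else acc

# The weighted saliency map is then an element-wise product:
# weighted = saliency_map * pixel_objectness(boxes, scores, *saliency_map.shape)
```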
    Regional stereo matching algorithm based on visual saliency
    ZHANG Huadong, PAN Chen, ZHANG Dongping
    2015, 35(12):  3565-3569.  DOI: 10.11772/j.issn.1001-9081.2015.12.3565
    Asbtract ( )   PDF (847KB) ( )  
    References | Related Articles | Metrics
    Regional stereo matching algorithms are sensitive to illumination change, and their disparity maps suffer from mismatches in the target and weakly textured regions, unsmooth boundaries, and so on. In order to solve these problems, an improved fast stereo matching algorithm using visual saliency characteristics was proposed. Saliency detection was used to locate the main target area in the image, then feature matching was completed by combining the image's Sobel edge features and phase features to obtain a rough disparity map. Finally, by detecting visual saliency in the disparity map, abrupt noise in weakly textured areas was eliminated. Compared with traditional algorithms such as Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD) and Normalized Cross Correlation (NCC), the proposed algorithm is insensitive to illumination changes and obtains a better disparity map and a higher matching rate, which is conducive to real-time applications.
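    For reference, the SAD baseline the paper compares against is plain block matching between rectified images: for each pixel, slide a window over candidate disparities and keep the disparity with the minimum absolute difference. The sketch below implements that baseline; the window size and disparity range are illustrative assumptions.
```python
import numpy as np

def sad_disparity(left, right, max_disp=32, win=5):
    """Baseline SAD block matching between rectified grayscale images."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            block = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(block - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # disparity with minimum SAD cost
    return disp
```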
    Multi-feature based descriptions for automated grading on breast histopathology
    GONG Lei, XU Jun, WANG Guanhao, WU Jianzhong, TANG Jinhai
    2015, 35(12):  3570-3575.  DOI: 10.11772/j.issn.1001-9081.2015.12.3570
    Asbtract ( )   PDF (1207KB) ( )  
    References | Related Articles | Metrics
    In order to assist in the fast and efficient diagnosis of breast cancer and provide prognostic information for pathologists, a computer-aided diagnosis approach for automatically grading breast pathological images was proposed. In the proposed algorithm, cells in the pathological images were first automatically detected by a deep convolutional neural network with a sliding window. Then, color separation based on sparse non-negative matrix factorization, marker-controlled watershed and ellipse fitting were integrated to obtain the boundary of each cell. A total of 203 image-derived features, including architectural features of the tumor as well as texture and shape features of epithelial cells, were extracted from the pathological images based on the detected cells and the fitted boundaries. A Support Vector Machine (SVM) classifier was trained on the extracted features to realize automated grading of the pathological images. In order to verify the proposed algorithm, 49 Hematoxylin & Eosin (H&E)-stained breast pathological images obtained from 17 patients were considered. The experimental results show that, over 100 ten-fold cross-validation trials, the features describing cell shape and the spatial structure of the tissue successfully distinguish test samples of low, intermediate and high grade with a classification accuracy of 90.20%. Moreover, the proposed algorithm is able to distinguish high-grade, intermediate-grade and low-grade patients with accuracies of 92.87%, 82.88% and 93.61%, respectively. Compared with methods using only texture features or only architectural features, the proposed algorithm achieves higher accuracy. The proposed algorithm can accurately distinguish the tumor grade of pathological images, and the differences in accuracy between grades are small.
    Polarized image dehazing algorithm based on dark channel prior
    ZHANG Jingjing, CHEN Zihong, ZHANG Dexiang, YAN Qing, XUN Lina, ZHANG Weiguo
    2015, 35(12):  3576-3580.  DOI: 10.11772/j.issn.1001-9081.2015.12.3576
    Asbtract ( )   PDF (806KB) ( )  
    References | Related Articles | Metrics
    Aiming at the unsatisfactory defogging effect of traditional defogging algorithms based on polarization characteristics in heavy fog, a new polarization image dehazing algorithm using color space conversion and the dark channel prior was proposed. Compared with traditional imaging technology, polarization imaging detection has remarkable advantages in target detection and recognition in complex environments; intensity, degree of polarization and angle of polarization are usually used to describe a target's polarization information in polarization images. In order to combine the polarization information with the defogging model, a color space transformation was adopted. Firstly, the polarization information was converted into the intensity, hue and saturation components of the Hue-Intensity-Saturation (HIS) color space, and then the HIS color space was mapped to the Red-Green-Blue (RGB) space. Secondly, the dark channel prior was applied, in combination with the atmospheric scattering model for hazy weather, to obtain the dark channel image. Finally, the atmospheric transmission rate was refined by using a soft matting algorithm based on the sparse prior of the image. The experimental results show that, compared with existing polarization defogging algorithms, many technical indicators of the defogged images, such as standard deviation, entropy and average gradient, are greatly improved by the proposed algorithm under very low visibility conditions. The proposed algorithm can effectively enhance the global contrast in heavy fog and improve the identification capability for polarized images.
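    The dark channel prior itself is the per-pixel minimum over the color channels and a local patch. The sketch below computes this standard quantity; the patch size, the plain minimum filter and the transmission formula in the closing comment are assumptions about the conventional prior, not the paper's full polarization pipeline.
```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel of an RGB image: channel-wise minimum followed by a local
    minimum filter over a square patch.

    image: float array of shape (H, W, 3) with values in [0, 1]."""
    h, w, _ = image.shape
    min_rgb = image.min(axis=2)                 # per-pixel minimum over R, G, B
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()   # local minimum
    return dark

# Under the atmospheric scattering model, a rough transmission estimate is
# t = 1 - omega * dark_channel(I / A), with A the airlight and omega an
# assumed aerosol-retention factor (commonly around 0.95).
```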
    Industries and fields application
    Dynamic obstacle avoidance method of mobile online water quality monitoring platform
    LAO Jiajun, YANG Jiang, ZHU Wuming
    2015, 35(12):  3581-3585.  DOI: 10.11772/j.issn.1001-9081.2015.12.3581
    Asbtract ( )   PDF (876KB) ( )  
    References | Related Articles | Metrics
    Focusing on the problem that a mobile water quality monitoring platform may encounter moving obstacles during autonomous navigation, a new dynamic obstacle avoidance method based on an obstacle motion prediction model and a velocity obstacle avoidance model was proposed. Firstly, an ultrasonic ranging module and an image acquisition module were used to measure the distance and azimuth angle between the obstacle and the platform, and then the obstacle's speed and direction were calculated by coordinate transformation. Secondly, the obstacle motion prediction model was built based on the maximum likelihood estimation method, and the obstacle's speed and direction at the next sampling instant were obtained from this model. Finally, the platform's course angle for the next sampling instant was calculated using the velocity obstacle avoidance model. The experimental results show that the proposed obstacle avoidance method can plan a more realistic and optimal path, and that, compared with the obstacle avoidance method without the obstacle motion prediction model, it improves the success rate of obstacle avoidance.
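    The velocity obstacle idea reduces to a geometric test: a relative velocity leads to collision if its direction falls inside the cone spanned by the obstacle's safety disc as seen from the platform. The sketch below implements only that test under assumed 2-D inputs; the course-angle selection built on top of it in the paper is not reproduced.
```python
import math

def in_velocity_obstacle(p_rel, v_rel, radius):
    """Return True if the relative velocity points into the collision cone.

    p_rel: (x, y) obstacle position relative to the platform;
    v_rel: (x, y) platform velocity relative to the obstacle;
    radius: combined safety radius of platform and obstacle."""
    px, py = p_rel
    vx, vy = v_rel
    dist = math.hypot(px, py)
    if dist <= radius:
        return True                                  # already inside the safety disc
    half_cone = math.asin(radius / dist)             # half-aperture of the collision cone
    angle_to_obstacle = math.atan2(py, px)
    angle_of_velocity = math.atan2(vy, vx)
    diff = abs((angle_of_velocity - angle_to_obstacle + math.pi) % (2 * math.pi) - math.pi)
    return diff <= half_cone

# A safe course angle can then be chosen as the heading closest to the goal whose
# resulting relative velocity lies outside this cone for the predicted obstacle motion.
```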
    Information acquisition solution for automotive after-sales service based on Android platform
    KONG Yu, WANG Shuying
    2015, 35(12):  3586-3591.  DOI: 10.11772/j.issn.1001-9081.2015.12.3586
    Asbtract ( )   PDF (991KB) ( )  
    References | Related Articles | Metrics
    Aiming at the possible fraud problem of maintenance service picture information in the after-sales maintenance service of an industry chain collaboration Software as a Service (SaaS) platform, a scheme for collecting and processing after-sales service information by mobile intelligent terminal equipment based on the Android platform was proposed. Firstly, the proposed solution collected maintenance service information through digital image processing on the mobile intelligent terminal. Secondly, it utilized image and character recognition technology to extract key information contained in the maintenance service information, such as the chassis number and odometer reading. Thirdly, it embedded the above key information into the collected images via digital watermarking technology. Lastly, the mobile intelligent terminal and the after-sales service system were integrated through Web services. The feasibility and effectiveness of the scheme are verified by the concrete use of after-sales service image information acquisition to prevent fraud in the after-sales maintenance service of the industry chain collaboration SaaS platform.
    Tunnel intersection modeling based on cylinder-axis aligned bounding box detection
    WANG Chong, AN Weiqiang, WANG Hongjuan
    2015, 35(12):  3592-3596.  DOI: 10.11772/j.issn.1001-9081.2015.12.3592
    Asbtract ( )   PDF (677KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems of long running time and complex modeling in three-dimensional roadway intersection modeling for geotechnical engineering, a cylinder-Axis Aligned Bounding Box (AABB) two-level bounding box detection method was proposed according to the characteristics of the tunnel's shape. The proposed method can quickly find the possibly intersecting triangular elements, and establishes a new approach to the Triangulated Irregular Network (TIN) modeling problem of tunnel intersections by combining it with three-Dimensional (3D) Boolean operations. The basic principles of the cylinder-AABB two-level bounding box collision detection and the key technologies of implementing intersection modeling with 3D Boolean operations were described, and an optimization scheme for the generated entity mesh was proposed. Engineering examples prove that, compared with the Oriented Bounding Box (OBB) hierarchical bounding box method, the cylinder-AABB detection method improves bounding box generation efficiency by nearly 50% in roadway surface intersection modeling. The proposed method has the advantages of simple modeling, short detection time, high detection accuracy, and so on.
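    The two-level idea is a cheap broad-phase test followed by a standard per-axis AABB overlap test on the surviving triangles. The sketch below shows both pieces; the cylinder broad phase (distance of a vertex to the tunnel's axis segment) is a plausible reading of the scheme and is marked as an assumption.
```python
import math
from dataclasses import dataclass

@dataclass
class AABB:
    min_pt: tuple   # (x, y, z)
    max_pt: tuple   # (x, y, z)

def aabb_overlap(a: AABB, b: AABB) -> bool:
    """Standard axis-aligned bounding box test: the boxes intersect only if
    their intervals overlap on every axis (the fine level of the scheme)."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))

def near_tunnel_axis(point, axis_start, axis_end, radius) -> bool:
    """Broad-phase cylinder test (assumption): keep a triangle vertex only if
    it lies within `radius` of the tunnel's axis segment."""
    ax = [e - s for s, e in zip(axis_start, axis_end)]
    ap = [p - s for s, p in zip(axis_start, point)]
    length_sq = sum(c * c for c in ax) or 1e-12
    t = max(0.0, min(1.0, sum(a * b for a, b in zip(ap, ax)) / length_sq))
    closest = [s + t * c for s, c in zip(axis_start, ax)]
    return math.dist(point, closest) <= radius
```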
    Fast calculation method of multivariable control for reheat steam temperature based on Smith control and predictive functional control
    WANG Fuqiang, LI Xiaoli, ZHANG Qiusheng, ZHANG Jinying
    2015, 35(12):  3597-3601.  DOI: 10.11772/j.issn.1001-9081.2015.12.3597
    Asbtract ( )   PDF (700KB) ( )  
    References | Related Articles | Metrics
    The reheat steam temperature control system suffers from multivariable coupling and is difficult to control. In order to solve these problems, a fast calculation method of multivariable control for reheat steam temperature based on the Smith control method and Predictive Functional Control (PFC) was proposed. First of all, the reheat steam temperature multivariable control system was decomposed into three single-variable control systems, and in each single-variable control system the other two control variables were treated as disturbance terms. Secondly, each single-variable control system was designed according to the Smith control idea. Finally, on the basis of improving the performance index of predictive functional control, the three single-variable control systems were considered jointly to realize reheat steam temperature control. The simulation results of reheat steam temperature control show that, with fewer parameters and explicit physical meaning, the proposed method is about 50 times as fast as traditional predictive control with constraints. The field experimental results show that the proposed algorithm can effectively improve the quality of reheat steam temperature control.
    Passenger counting system based on intelligent detection of polyvinylidene fluoride human gait
    XIE Yu, HU Xintong, MENG Xiyun, LIU Yunjie
    2015, 35(12):  3602-3606.  DOI: 10.11772/j.issn.1001-9081.2015.12.3602
    Asbtract ( )   PDF (741KB) ( )  
    References | Related Articles | Metrics
    The existing passenger flow counting sensors based on PolyVinyliDene Fluoride (PVDF) piezoelectric material have the advantages of low cost and resistance to wear, but lack accuracy because of erroneous and missed counts. In order to solve this problem, a passenger counting system based on PVDF gait intelligent detection technology was proposed. The ANSYS software was applied to carry out stress analysis of passengers' gait when stepping on and off the bus and to observe the distribution of the PVDF piezoelectric signal. A multi-input signal conditioning circuit was designed to acquire the multi-channel plantar signals. Combined with the signal processing algorithm, the sensor's mechanical structure and the on-bus people-counting system were implemented with the Laboratory Virtual Instrument Engineering Workbench (LabVIEW). The experimental results indicate that the proposed system improves precision compared with the existing PVDF passenger flow counting sensors and reduces cost compared with video image counting and human body infrared detection technology, with an average counting error of 5.3%. The proposed system is highly practical and can be widely used in Chinese public transport buses.
    Design of serial peripheral interface module for system-on-chip
    YANG Xiao, LI Zhanming
    2015, 35(12):  3607-3610.  DOI: 10.11772/j.issn.1001-9081.2015.12.3607
    Asbtract ( )   PDF (586KB) ( )  
    References | Related Articles | Metrics
    The traditional module design for the Serial Peripheral Interface (SPI) is inflexible, hard to extend and does not support out-of-order access. In order to solve these problems, an SPI module for System-on-a-Chip (SoC) was designed. Firstly, the basic architecture of the SPI was designed according to the SPI communication protocol. Secondly, the input and output Finite State Machines (FSMs), the extension ports and the Identification (ID) module supporting out-of-order access were designed according to this architecture. Thirdly, the correctness of the SPI design was verified by using the Verilog Compile Simulator (VCS) simulation tool of Synopsys. Finally, a random verification environment with configurable parameters was built for the SPI design; the code coverage report was analyzed and test points were added manually to improve the code coverage rate. The simulation results show that, compared with the traditional SPI design, the proposed SoC-oriented SPI module supports Advanced eXtensible Interface (AXI) bus extension and has eight independent read and write channels, each of which can be accessed out of order, and channel congestion does not occur in the proposed design.