Table of Contents

    10 June 2015, Volume 35 Issue 6
    Joint switch scheduling and resource allocation algorithm based on energy efficiency in heterogeneous wireless networks
    QIU Changxiao, LENG Supeng, YE Yu
    2015, 35(6):  1505-1508.  DOI: 10.11772/j.issn.1001-9081.2015.06.1505

    In order to improve the energy efficiency of heterogeneous wireless networks with macro-cells and micro-cells, a Joint Switch scheduling and Resource Allocation (JSRA) algorithm was proposed. Firstly, based on micro base station sleeping, a Centralized User Switch scheduling Algorithm (CUSA) was adopted to determine the associated base station for each user. The sleeping/waking status of a micro base station was decided according to whether network power consumption decreased when all of its users switched to the macro base station. Then, the Best Channel quality Subcarrier Adjustment (BCSA) algorithm was used to assign subcarriers and transmission power to users. The network energy efficiency was kept close to the optimal solution by adjusting the subcarrier allocation between the user with the maximum energy efficiency and the user with the minimum energy efficiency. Theoretical analysis and simulation experiments show that, compared with three existing algorithms that consider user handoff or resource allocation separately, JSRA has higher computational complexity; however, with 120 users, JSRA reduces network power consumption by up to 44.4%, increases the total effective data rate of users by up to 80% (with only a slight reduction relative to one of the contrast algorithms), and increases the energy efficiency of the network by up to 200%. The experimental results show that JSRA can effectively improve the energy efficiency of heterogeneous wireless networks.

    Optimized bandwidth allocation mechanism in peer-to-peer streaming based on network coding
    CHEN Zhuo, ZHOU Jiang
    2015, 35(6):  1509-1513.  DOI: 10.11772/j.issn.1001-9081.2015.06.1509

    For Peer-to-Peer (P2P) streaming applications based on network coding, a load-transfer-based node bandwidth resource balancing policy was proposed to avoid node overload caused by random neighbor node selection and bandwidth resource requests. When some nodes were overloaded, nodes with spare bandwidth resources could be selected as load transfer nodes to alleviate the load of the overloaded peers. Through a ladder-based bandwidth allocation mechanism, the coded data required by the requesting nodes could be pushed to the selected load transfer nodes. The numerical simulation results show that the proposed load transfer policy can efficiently alleviate the bandwidth resource consumption of overloaded nodes and avoid hot areas in the network.

    WSN node localization based on improved bi-system cooperative optimization algorithm
    SHANG Junna, LIU Chunju, YUE Keqiang, LI Lin
    2015, 35(6):  1514-1518.  DOI: 10.11772/j.issn.1001-9081.2015.06.1514

    In order to improve the locating accuracy of Wireless Sensor Network (WSN) node localization, an algorithm based on Particle Swarm Optimization (PSO) and the Shuffled Frog Leaping Algorithm (SFLA), namely the Bi-system Cooperative Optimization (BCO) algorithm, was proposed. Combining the fast convergence of PSO with the high optimization precision of SFLA, the proposed algorithm converges in fewer iterations and achieves higher depth-search accuracy. The simulation experiments indicate that the BCO algorithm is effective. First, when solving the test objective functions, the BCO algorithm gets very close to the optimal solution, with better accuracy and higher convergence speed. Meanwhile, when the proposed algorithm is used for node localization based on Received Signal Strength Indicator (RSSI), the absolute error between the predicted and the actual location is less than 0.05 m. Compared with the Improved Particle Swarm Optimization algorithm based on RSSI (IPSO-RSSI), the locating accuracy of the proposed algorithm is improved by at least a factor of ten.

    Node localization in wireless sensor networks based on improved particle swarm optimization
    YU Quan, SUN Shunyuan, XU Baoguo, CHEN Shujuan
    2015, 35(6):  1519-1522.  DOI: 10.11772/j.issn.1001-9081.2015.06.1519

    The estimation error of the least square method in the traditional Distance Vector-Hop (DV-Hop) algorithm is too large, and the Particle Swarm Optimization (PSO) algorithm easily gets trapped in local optima. To overcome these problems, a fusion of an improved PSO algorithm and the DV-Hop algorithm was presented. First of all, the PSO algorithm was improved in terms of particle velocity, inertia weight, learning strategy and mutation, which enhanced its ability to jump out of local optima and increased its search speed in the later iterative stage. The node localization result was then optimized by using the improved PSO algorithm in the third stage of the DV-Hop algorithm. The simulation results show that, compared with the traditional DV-Hop algorithm, an improved DV-Hop based on chaotic PSO, and a DV-Hop algorithm based on improved PSO, the proposed algorithm has higher positioning accuracy and better stability.
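    As a minimal illustration of such a refinement stage, the sketch below uses a plain PSO (with only a linearly decreasing inertia weight; the paper's velocity, learning-strategy and mutation improvements are omitted) to minimize the squared range error against DV-Hop distance estimates. All names and parameter values are illustrative, not the paper's.

```python
import random

def pso_locate(anchors, dists, n_particles=30, iters=100, seed=0):
    """Refine a DV-Hop position estimate with a basic PSO.

    anchors: list of (x, y) anchor coordinates
    dists:   DV-Hop estimated distances to each anchor
    Minimizes the sum of squared range errors (an assumed fitness
    function; the paper's improved PSO variants are not reproduced).
    """
    rng = random.Random(seed)

    def fitness(p):
        return sum((((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2) ** 0.5 - d) ** 2
                   for a, d in zip(anchors, dists))

    # initialize particles uniformly in the bounding box of the anchors
    xs = [a[0] for a in anchors]
    ys = [a[1] for a in anchors]
    pos = [[rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys))]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters  # linearly decreasing inertia weight
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return tuple(gbest)
```

    With exact distances to three anchors, the refined estimate lands near the true node position.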

    Transmission reliability algorithm based on power control in Internet of vehicles
    HU Rongna, GUO Aihuang
    2015, 35(6):  1523-1526.  DOI: 10.11772/j.issn.1001-9081.2015.06.1523

    To address the low transmission reliability of Vehicle-to-Vehicle (V2V) communication in the Internet of Vehicles (IoV), a Minimum Maximum Power-control Evaluation of Transmission Reliability (MMPETR) algorithm was proposed. Firstly, the effect of transmit power control on data transmission reliability between moving vehicles was studied, and it was concluded that the probability of successful message transmission increases with transmit power. Secondly, the existing Evaluation of Transmission Reliability (ETR) algorithm was improved by fully considering the relationships between the power required by the sending vehicle to deliver a warning message successfully and the vehicle's own minimum and maximum transmit power. Finally, appropriate values of the sending vehicle's minimum and maximum transmit power were determined via simulation. The simulation results show that the successful transmission probability of the MMPETR algorithm is 4% higher than that of the ETR algorithm, so the MMPETR algorithm can effectively improve the transmission reliability of an IoV system.

    Transmission channel selection of cognitive radio networks based on quality optimization for video streaming
    LIU Jinxia, CHEN Lianna, LIU Yanwei, WANG Zunyi, PENG Guangchao
    2015, 35(6):  1527-1530.  DOI: 10.11772/j.issn.1001-9081.2015.06.1527

    The conventional channel selection approach in cognitive radio networks randomly selects the transmission channel based on channel characteristics, while neglecting the channel quality requirement of the video streaming at the application layer. To solve this problem, a cross-layer optimized channel selection method targeting video streaming quality was presented. By minimizing the end-to-end video distortion, the video encoding quantization parameter at the application layer, and the adaptive modulation and channel coding as well as the specific transmission channel at the physical layer, were jointly selected. A large number of video transmission simulation experiments were conducted for the proposed algorithm over multi-channel cognitive radio networks. The experimental results show that this cross-layer optimized channel selection approach can improve the objective quality of the secondary user's video streaming by more than 1.5 dB compared with cross-layer optimization without channel selection.

    Joint blind estimation of multiple frequency offsets and multiple channels for distributed MIMO-OFDM systems
    HUANG Yanyan, PENG Hua
    2015, 35(6):  1531-1536.  DOI: 10.11772/j.issn.1001-9081.2015.06.1531

    Joint blind estimation of multiple frequency offsets and multiple channels is difficult in distributed Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems under multipath fading channels. To solve this problem, an effective algorithm was proposed. The proposed algorithm used a blind deconvolution method to separate the received signal, simultaneously obtaining the multiple channels with embedded frequency offsets. After estimating the frequency offsets of the separated signals, the real channel estimates could be obtained by removing channel ambiguity and compensating the whole channels. The simulation results show that, compared with an existing pilot-based frequency offset and channel estimation method, the proposed algorithm achieves an average Mean Square Error (MSE) of 1e-6 for frequency offset estimation at 5 dB and of 1e-2 for channel estimation at 15 dB; thus, joint blind estimation of multiple frequency offsets and multiple channels for distributed MIMO-OFDM signals is realized.

    Gaussian transmission channel optimization algorithm based on superposition coding and multi-user scheduling
    SONG Hailong, ZHANG Shuzhen
    2015, 35(6):  1537-1540.  DOI: 10.11772/j.issn.1001-9081.2015.06.1537

    To improve the data transmission efficiency of the Gaussian transmission channel, a Gaussian transmission channel optimization algorithm based on Superposition Coding and Multi-user scheduling (MGSC) was proposed. First, the system model of the Gaussian transmission channel was established, and the Probability Density Function (PDF) and Cumulative Distribution Function (CDF) of the source-to-end distance and the user's average power gain were analyzed. Next, optimal superposition coding and successive interference cancellation were applied to the Gaussian transmission channel, optimal power and rate allocation was performed according to each user's utility function, and the channel users were selected by a probabilistic optimal method to obtain the optimal transmission channel. Finally, MGSC was compared with the optimization algorithm of Two-Way Relay Channel wireless signal and power transmission (TWRC) and the data transmission algorithm based on matrix and Lattice Sphere Decoding (LSD). The simulation results show that the total channel data rate of MGSC improves by 10.2% and 21.7% respectively, and the average channel gain is 5.7% and 6.4% higher as the number of channel access users changes. Therefore, MGSC achieves better results in both channel transmission rate and channel gain.

    Algorithm design of early detection and offset polyline decoding for IRA codes
    BAO Zhixiang, LYU Na, CHEN Kefan
    2015, 35(6):  1541-1545.  DOI: 10.11772/j.issn.1001-9081.2015.06.1541

    Irregular Repeat Accumulate (IRA) codes are usually decoded with the Belief Propagation (BP) algorithm, but BP decoding requires hyperbolic tangent calculations, whose high complexity makes hardware implementation difficult. A decoding algorithm combining an early detection mechanism with an offset polyline approximation was put forward. Its performance approached that of the BP algorithm through non-uniform error compensation of the polyline-approximation decoding. In addition, an early detection method was introduced: it observed the information transmitted by check nodes in advance, identified the edges whose log-likelihood values had negligible influence on the next iteration, and removed them from the iteration, thereby reducing the computational complexity of subsequent iterations. The simulation results show that the proposed algorithm greatly reduces computational complexity by approximating the hyperbolic tangent with an offset polyline, while its decoding performance remains close to that of the BP algorithm.
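    To illustrate the idea, the sketch below replaces the hyperbolic-tangent kernel phi(x) = -ln(tanh(x/2)) used in BP check-node processing with a four-segment polyline. The breakpoints and offsets here are hand-fitted for illustration only; they are not the non-uniformly compensated values derived in the paper.

```python
import math

def phi(x):
    # exact check-node kernel used in BP decoding of IRA/LDPC codes
    return -math.log(math.tanh(x / 2.0))

def phi_polyline(x):
    """Illustrative 4-segment polyline approximation of phi.

    Each linear segment carries its own slope and offset, standing in
    for the paper's non-uniform error compensation. Valid for x > 0.
    """
    if x < 0.5:
        return -3.975 * x + 3.3945   # steep segment near the origin
    elif x < 1.2:
        return -1.1219 * x + 1.9680
    elif x < 2.5:
        return -0.3517 * x + 1.0437
    else:
        return max(0.0, -0.0852 * x + 0.3775)  # flat tail, clipped at 0
```

    Over a typical LLR magnitude range the polyline stays within a small absolute error of the exact kernel while needing only multiply-add operations.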

    Achievable information rate region of multi-source multi-sink multicast network coding
    PU Baoxing, ZHU Hongpeng, ZHAO Chenglin
    2015, 35(6):  1546-1551.  DOI: 10.11772/j.issn.1001-9081.2015.06.1546

    In order to solve the multi-source multi-sink multicast network coding problem, an algorithm for computing the achievable information rate region and an approach for constructing a linear network coding scheme were proposed. Based on previous studies, the multi-source multi-sink multicast network coding problem was transformed into a specific single-source multicast network coding scenario with a constraint at the source node. Through theoretical analysis and formula derivation, the constraint relationship among the multicast rates of the source nodes was found. Then a multi-objective optimization model was constructed to describe the boundary of the achievable information rate region. Two methods were presented for solving this model: an enumeration method, and a multi-objective optimization method based on a genetic algorithm. The achievable information rate region could be derived from the Pareto boundary of the multi-objective optimization model. After assigning the multicast rates of the source nodes, the linear network coding scheme could be constructed by solving the constrained single-source multicast network coding scenario. The simulation results show that the proposed methods can find the boundary of the achievable information rate region, including its integral points, and construct a linear network coding scheme.

    Community detection algorithm based on signal adaptive transmission
    TAN Chunni, ZHANG Yumei, ZHANG Jiatong, WU Xiaojun
    2015, 35(6):  1552-1554.  DOI: 10.11772/j.issn.1001-9081.2015.06.1552

    In order to accurately detect the community structure of complex networks, a community detection algorithm based on adaptive signal transmission was proposed. First, signals were adaptively passed over the complex network, thereby obtaining, for each node, a vector describing its influence on the entire network; the topological position of each node was thus translated into geometric relationships in an algebraic vector space. Then, the community structure of the network was detected by clustering these vectors. To obtain feasible space vectors, the optimal transfer number was determined, which reduced the search space and effectively strengthened the search capability of community detection. The proposed algorithm was tested on a computer-generated network, the Zachary network and the American college football network. Compared with the Girvan-Newman (GN) algorithm, spectral clustering, extremal optimization and the original signal transmission algorithm, the results show that the proposed community detection algorithm is feasible and effective in terms of accuracy and precision.
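    The signal-passing step can be sketched as repeated multiplication by (A + I): each node starts with a unit signal and repeatedly forwards signals to its neighbours while keeping its own. This is a minimal sketch of the signal-vector construction; the adaptive choice of the transfer number t, and the clustering that follows, are not reproduced.

```python
import numpy as np

def signal_vectors(adj, t):
    """Signal-transmission node vectors.

    adj: adjacency matrix of an undirected network
    t:   number of signal transfer steps (tuned adaptively in the paper;
         taken here as a plain parameter)
    Row i of the result is node i's influence vector over the network;
    rows of nodes in the same community end up geometrically close.
    """
    A = np.asarray(adj, dtype=float)
    M = A + np.eye(len(A))  # pass to neighbours and keep own signal
    return np.linalg.matrix_power(M, t)
```

    On a toy graph of two triangles joined by one bridge edge, vectors of nodes inside the same triangle are much closer to each other than to vectors from the other triangle, so any standard clustering of the rows recovers the two communities.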

    Virtual machine introspection and memory security monitoring based on light-weight operating system
    MA Lele, YUE Xiaomeng, WANG Yuqing, YANG Qiusong
    2015, 35(6):  1555-1559.  DOI: 10.11772/j.issn.1001-9081.2015.06.1555

    Using Virtual Machine Introspection (VMI) in a traditional privileged Virtual Machine (VM) to monitor the memory security of other VMs may weaken the isolation between the security module and the rest of the system, and slow down the overall performance of the virtualization platform. To mitigate these disadvantages, a security architecture implementing VMI in a light-weight operating system was proposed, along with a security checking scheme based on memory integrity measurement. By monitoring and checking other VMs' runtime memory from a light-weight VM, both the attack surface and the performance overhead were reduced. Through non-intrusive checking and a personalized authentication policy for the virtualization platform, the isolation of the security module was strengthened. A prototype system of VMI and memory detection was implemented based on the Mini-OS of Xen. Compared with achieving the same functionality in the privileged VM, the proposed scheme reduces performance loss by more than 92%, which proves that it can significantly improve the performance of VMI and real-time checking.

    Quantitative evaluation for null models of complex networks
    LI Huan, LU Gang, GUO Junxia
    2015, 35(6):  1560-1563.  DOI: 10.11772/j.issn.1001-9081.2015.06.1560

    For null models of complex networks generated by random scrambling algorithms, it is often hard to tell when the null model becomes stable, because the probability of a successful scramble differs between null models of different orders. Focusing on this issue, the concept of "successful scrambling times" was defined and used in place of the usual "attempted scrambling times" to configure the algorithm. The proposed successful-scrambling-times counter is incremented only when the randomly selected edges meet the scrambling conditions of the corresponding null model and are thus successfully scrambled. Generation experiments on null models of every order show that every index stabilizes within a small number of successful scrambling times. Further quantitative analyses show that 0-order, 1-order and 2-order null models of good quality can be obtained by setting the number of successful scrambling times to 2, 1 and 1 times the number of edges of the actual network, respectively.
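    For the 1-order (degree-preserving) case, the idea can be sketched as a double-edge-swap loop that counts successful swaps rather than attempts. The function name and structure are illustrative; rejection conditions (self-loops, multi-edges) are the standard ones for a simple undirected graph.

```python
import random

def null_model_1k(edges, n_success, seed=0):
    """Generate a 1-order (degree-preserving) null model by random
    double-edge swaps, stopping after `n_success` *successful* swaps
    rather than a fixed number of attempts, as the paper proposes.

    Assumes a simple undirected graph with enough swappable edge
    pairs that the loop terminates.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(frozenset(e) for e in edges)
    done = 0
    while done < n_success:
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        # candidate rewiring: (a,b),(c,d) -> (a,d),(c,b)
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1  # count only successful swaps
    return edges
```

    Per the paper's recommendation for 1-order null models, `n_success` would be set to the number of edges of the actual network; the degree sequence is preserved by construction.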

    Optimization of spherical Voronoi diagram generating algorithm based on graphic processing unit
    WANG Lei, WANG Pengfei, ZHAO Xuesheng, LU Lituo
    2015, 35(6):  1564-1566.  DOI: 10.11772/j.issn.1001-9081.2015.06.1564

    The spherical Voronoi diagram generating algorithm based on distance computation and comparison over a Quaternary Triangular Mesh (QTM) has higher precision than the dilation algorithm; however, its massive amount of distance computation and comparison leads to low efficiency. To improve efficiency, Graphics Processing Unit (GPU) parallel computation was used to implement the algorithm, which was then optimized with respect to access to GPU shared memory, constant memory and registers. Finally, an experimental system was developed in C++ and Compute Unified Device Architecture (CUDA) to compare efficiency before and after the optimization. The experimental results show that efficiency can be improved to a great extent by using the different GPU memories reasonably, and that a higher speed-up ratio is obtained as the data scale grows.

    Trust evaluation model based on service level agreement in cloud
    MA Manfu, WANG Mei
    2015, 35(6):  1567-1572.  DOI: 10.11772/j.issn.1001-9081.2015.06.1567

    Service consumers lack trust in cloud service providers during interactions. Aiming at this problem, a trust model for cloud computing based on Service Level Agreement (SLA) was proposed. In the model, when registering with a third-party trust platform called the service center, a cloud service provider must submit an evaluation report on its strength, operations, technologies, service attributes, etc. According to the relevant criteria, the service center evaluates the cloud service provider to obtain the system trust, which is then combined with traditional reputation. Thus direct trust, indirect trust and system trust are used as the three factors for evaluating a cloud service provider, from which the final trust value is calculated. Finally, the service consumer can conduct SLA negotiation with a service provider according to the required service and the final trust value, which is used to select among multiple service providers, thereby excluding dishonest or less reputable cloud service providers. The experimental results show that, owing to the introduction of system trust, the final trust value is obtained more comprehensively and accurately, and the service consumer can select a cloud service provider with high credibility, which effectively prevents dishonest behavior by cloud service providers and improves the success rate of interactions under the proposed trust model.
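    The three-factor combination can be sketched as a weighted sum followed by a best-provider selection. The weights and function names below are purely illustrative assumptions; the paper's actual combination formula is not reproduced here.

```python
def final_trust(direct, indirect, system, w=(0.5, 0.3, 0.2)):
    """Weighted combination of the three trust factors (direct,
    indirect, SLA-based system trust). Weights are hypothetical."""
    return w[0] * direct + w[1] * indirect + w[2] * system

def select_provider(candidates):
    """candidates: {name: (direct, indirect, system)} with values in
    [0, 1]; pick the provider with the highest final trust value."""
    return max(candidates, key=lambda n: final_trust(*candidates[n]))
```

    A consumer would then open SLA negotiation only with the selected, most trusted provider.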

    Anonymous privacy-preserving scheme for cloud storage based on CP_ABE
    XU Qian, TAN Chengxiang
    2015, 35(6):  1573-1579.  DOI: 10.11772/j.issn.1001-9081.2015.06.1573

    In order to solve confidentiality issues of data stored on cloud servers, such as key exposure and attribute revocation, an advanced anonymous privacy-preserving scheme based on Ciphertext-Policy Attribute-Based Encryption (CP_ABE) was proposed, considering both the confidentiality of data storage and the indistinguishability of access. First, the scheme constructed a forward-secure irreversible key-update algorithm to address key exposure. On the basis of classified user groups and an advanced Subset-Difference algorithm, fine-grained attribute revocation was implemented with the help of a cloud data re-encryption algorithm. The potential interests of a user were concealed by introducing k-anonymity, l-diversity data requests based on a homomorphic encryption algorithm. Backward security against key exposure was realized through a secondary encryption inserted into the data response. Under the l-Bilinear Diffie-Hellman Exponent (l-BDHE) assumption, the selective security of the proposed scheme was proved in the standard model. The performance advantages of the proposed scheme were demonstrated in terms of efficiency, key length and security.

    Execution optimization policy of scientific workflow based on cluster aggregation under cloud environment
    DUAN Ju, CHEN Wanghu, WANG Runping, YU Maoyi, WANG Shikai
    2015, 35(6):  1580-1584.  DOI: 10.11772/j.issn.1001-9081.2015.06.1580

    Focusing on increasing processor utilization and lowering the execution cost of a scientific workflow in the cloud, an execution optimization policy based on task cluster aggregation was proposed. First, tasks were reasonably replicated and aggregated into several clusters so that the key tasks could be scheduled as early as possible. Then, the task clusters were aggregated again to exploit the spare time among the tasks within each cluster. The experimental results show that the proposed policy improves the parallelism of workflow tasks and advances the earliest finish time of the whole workflow, with a significant effect on improving processor utilization and lowering the cost of workflow execution.

    Service layer agreement-aware resource allocation for cloud center profit maximization
    HE Huaiwen, FU Yu, YANG Liang
    2015, 35(6):  1585-1589.  DOI: 10.11772/j.issn.1001-9081.2015.06.1585

    For the problem of optimizing resource allocation to maximize the profit of a cloud computing center, a Service Level Agreement (SLA)-aware analysis model was proposed for optimizing the number and speed of servers in the cloud center, taking into account important factors such as energy cost, server rental cost, customer waiting time and SLA violation penalties. The impact of server number and speed on cloud center profit was analyzed by numerical simulation. The numerical results indicate that, at a given request rate, the cloud center obtains maximum profit by optimizing the server number and speed; as the request rate increases, the profit obtained with optimized server number and speed increases linearly. The analysis results provide a reference method for cloud service providers to improve their net business gain.

    Energy-efficient scheduling algorithm under reliability constraint in multiprocessor system
    ZHANG Binlian, XU Hongzhi
    2015, 35(6):  1590-1594.  DOI: 10.11772/j.issn.1001-9081.2015.06.1590

    A kind of Energy-efficient Scheduling Algorithm under the Constraint of Reliability (ESACR) for random tasks in a multiprocessor system was proposed. It chooses the processor that is expected to consume the least energy while still guaranteeing the task's deadline. On a single processor, the Earliest Deadline First (EDF) strategy was used to schedule tasks, and all tasks were made to execute at the same voltage/frequency. When a new task could not meet its deadline, the voltage/frequency of the not-yet-executed parts of earlier tasks was raised. At the same time, recovery time was reserved for the executing task to ensure that it could be rescheduled if an error occurred. The simulations show that ESACR provides better energy efficiency while guaranteeing system reliability, compared with Highest Voltage Energy-Aware (HVEA), Minimum Energy Minimum Completion time (ME-MC) and Earliest Finish First (EFF).

    Decentralized online alternating direction method of multipliers
    XU Haofeng, LING Qing
    2015, 35(6):  1595-1599.  DOI: 10.11772/j.issn.1001-9081.2015.06.1595

    Aiming at the problem of learning, in an online manner, from streaming data collected by a decentralized network, a decentralized online learning algorithm, Decentralized Online alternating direction method of Multipliers (DOM), was proposed based on the Alternating Direction Method of Multipliers (ADMM). Firstly, observing that decentralized online learning requires each node to update its local iterate based on new local data while keeping the estimates of all nodes converging to a consensual iterate, a mathematical model was developed and solved by DOM. Secondly, a regret bound for decentralized online learning was defined to evaluate the performance of online estimation. DOM was proved to be convergent when the instantaneous local cost functions are convex, and its convergence rate was given. Finally, numerical experiments show that, compared with existing algorithms such as Distributed Online Gradient Descent (DOGD) and Distributed Autonomous Online Learning (DAOL), the proposed DOM algorithm has the fastest convergence rate.
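    The problem setting can be illustrated with the simpler DOGD-style baseline the paper compares against (not the ADMM updates of DOM itself): each node takes a gradient step on the loss of its newly arrived sample, then averages its iterate with its neighbours through a mixing matrix. The squared-error local cost is an assumption made for the sketch.

```python
import numpy as np

def decentralized_online_gd(streams, W, lr=0.1):
    """Decentralized online learning baseline (DOGD-style).

    streams[k][t] = (x, y) sample arriving at node k at time t;
    local cost assumed to be the squared error (x . theta - y)^2 / 2.
    W is a doubly stochastic mixing matrix encoding the network.
    Returns the final (n_nodes x dim) matrix of local iterates.
    """
    n = len(streams)
    T = len(streams[0])
    d = len(streams[0][0][0])
    theta = np.zeros((n, d))
    for t in range(T):
        for k in range(n):
            x, y = streams[k][t]
            x = np.asarray(x, float)
            grad = (x @ theta[k] - y) * x  # gradient of the local loss
            theta[k] -= lr * grad          # local online gradient step
        theta = W @ theta                  # consensus averaging step
    return theta
```

    On a toy 1-D regression stream where every node observes y = 2x, all local iterates converge to the consensual value 2; DOM's contribution is reaching such consensus faster via ADMM updates.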

    Performance analysis of multi-scale quantum harmonic oscillator algorithm
    YUAN Yanan, WANG Peng, LIU Feng
    2015, 35(6):  1600-1604.  DOI: 10.11772/j.issn.1001-9081.2015.06.1600
    The Multi-scale Quantum Harmonic Oscillator Algorithm (MQHOA) has good global convergence and adaptability. To analyze the specific performance of MQHOA in terms of solution precision and speed, theoretical models and experiments were compared among MQHOA, the classic and widely used Quantum-behaved Particle Swarm Optimization (QPSO) algorithm, and QPSO with Random Mean best position (QPSO-RM), by solving integer nonlinear programming problems. In the simulation experiments, MQHOA achieved a 100% success rate on seven unconstrained integer nonlinear programming problems and was faster than QPSO and QPSO-RM in most cases. MQHOA was slightly slower than QPSO and QPSO-RM on two constrained integer nonlinear programming problems, but still obtained a 100% success rate, higher than the latter two. Comparison of the convergence processes shows that MQHOA converges to the global optimal solution faster and earlier than QPSO and QPSO-RM. The experimental results show that MQHOA is well suited to integer programming problems and can avoid falling into local optima so as to obtain the global optimal solution; MQHOA outperforms the contrast algorithms QPSO and QPSO-RM in accuracy and convergence rate.
    Weighted online sequential extreme learning machine based on imbalanced sample-reconstruction
    WANG Jinwan, MAO Wentao, HE Ling, WANG Liyun
    2015, 35(6):  1605-1610.  DOI: 10.11772/j.issn.1001-9081.2015.06.1605

    Many traditional machine learning methods tend to produce biased classifiers on imbalanced online sequential data, which leads to low classification precision for the minority class. To improve the classification accuracy of the minority class, a new weighted online sequential extreme learning machine based on imbalanced sample reconstruction was proposed. The algorithm starts from the distributional characteristics of online sequential data and contains two stages. In the offline stage, the principal curve is introduced to construct a confidence region, in which over-sampling of the minority class builds a balanced sample set consistent with the sample distribution trend, and the initial model is then established. In the online stage, a new weighting method dynamically updates the sample weights, where the weight value is related to the training error. The proposed method was evaluated on UCI datasets and Macao meteorological data. Compared with existing methods such as the Online Sequential Extreme Learning Machine (OS-ELM), the Extreme Learning Machine (ELM) and the Meta-Cognitive Online Sequential Extreme Learning Machine (MCOS-ELM), the experimental results show that the proposed method identifies the minority class better, while its training time differs little from the others'; that is, the proposed method greatly increases the minority-class prediction accuracy without increasing the complexity of the algorithm.
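    The weighting idea can be illustrated with a minimal batch weighted-ELM sketch (not the online sequential update, and not the paper's principal-curve over-sampling): a random hidden layer followed by weighted regularized least squares, with each sample's weight inversely proportional to its class frequency as a simple stand-in for the paper's error-driven dynamic weights.

```python
import numpy as np

def weighted_elm(X, y, n_hidden=40, C=10.0, seed=0):
    """Minimal weighted ELM sketch for imbalanced binary data.

    Random tanh hidden layer; output weights solved by *weighted*
    regularized least squares, where minority-class samples receive
    larger weights. Binary labels in {0, 1} assumed. Returns a
    predict function (threshold 0.5 on the network output).
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    Win = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                  # random hidden biases
    H = np.tanh(X @ Win + b)                       # hidden-layer output matrix
    counts = np.array([np.sum(y == c) for c in (0, 1)], float)
    w = 1.0 / counts[y.astype(int)]                # minority samples weigh more
    Wm = np.diag(w)
    beta = np.linalg.solve(H.T @ Wm @ H + np.eye(n_hidden) / C,
                           H.T @ Wm @ y)           # weighted ridge solution
    return lambda Xn: (np.tanh(np.asarray(Xn, float) @ Win + b) @ beta > 0.5)
```

    On a toy set with 49 majority samples near the origin and 4 minority samples near (3, 3), the weighted solution still classifies the minority cluster correctly instead of collapsing to the majority class.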

    Graph transduction via alternating minimization method based on multi-graph
    XIU Yu, WANG Jun, WANG Zhongqun, LIU Sanmin
    2015, 35(6):  1611-1616.  DOI: 10.11772/j.issn.1001-9081.2015.06.1611

    The performance of Graph-based Semi-Supervised Learning (GSSL) methods that use a single graph depends mainly on a well-structured graph, while most multi-graph algorithms are difficult to apply when the data has only a single view. Aiming at this issue, a Graph Transduction via Alternating Minimization method based on Multi-Graph (MG-GTAM) was proposed. Firstly, using different graph construction parameters, multiple graphs were constructed from single-view data to represent the relations among data points. Secondly, the most confident unlabeled examples were chosen for pseudo-label assignment by integrating the information of multiple graphs, and higher weights were imposed on the most relevant graphs through alternating optimization, which optimized the agreement and smoothness of the prediction function over all graphs. Finally, more accurate labels were given to the entire unlabeled set by combining the predictions of all individual graphs. Compared with the classical algorithms Local and Global Consistency (LGC), Gaussian Fields and Harmonic Functions (GFHF), Graph Transduction via Alternating Minimization (GTAM) and Combined Graph Laplacian (CGL), the classification error rates of MG-GTAM decrease on the COIL20 and NEC Animal datasets. The experimental results show that the proposed method efficiently represents data-point relations with multiple graphs and achieves a lower classification error rate.

    Teaching and peer-learning particle swarm optimization for multi-objective flexible job-shop scheduling problem
    WU Dinghui, KONG Fei, TIAN Na, JI Zhicheng
    2015, 35(6):  1617-1622.  DOI: 10.11772/j.issn.1001-9081.2015.06.1617
    Abstract | PDF (1018KB)

    To solve multi-objective Flexible Job-shop Scheduling Problems (FJSP), a Teaching and Peer-Learning Particle Swarm Optimization with Pareto Non-Dominated Solution Set (PNDSS-TPLPSO) algorithm was proposed. First, the minimum completion time of jobs, the maximum workload of machines and the total workload of all machines were taken as the optimization goals to establish a multi-objective flexible job-shop scheduling model. Then, the proposed algorithm combined the multi-objective Pareto method with Teaching and Peer-Learning Particle Swarm Optimization (TPLPSO). A fast Pareto non-dominated sorting operator was applied to generate the initial Pareto non-dominated solution set, and a Pareto dominance layer extraction procedure was adopted to update it. Furthermore, a composite dispatching rule was adopted to generate the initial population, and an upward-opening parabola decreasing inertia weight strategy was taken to improve the convergence speed. Finally, the proposed algorithm was applied to three benchmark instances. In comparison experiments with Multi-Objective Evolutionary Algorithm with Guided Local Search (MOEA-GLS) and Controlled Genetic Algorithm with Approach by Localization (AL-CGA), the proposed algorithm obtains more and better Pareto non-dominated solutions for the same benchmark instance, and requires less computing time than MOEA-GLS. The simulation results demonstrate that the proposed algorithm can solve multi-objective FJSP effectively.
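    One common form of an upward-opening parabola decreasing inertia weight is sketched below (the start/end weights 0.9 and 0.4 and the exact coefficients are conventional assumptions; the paper's formula may differ):

```python
def parabola_inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Upward-opening parabola decreasing inertia weight: starts at w_start,
    decreases to w_end at t = t_max, dropping fastest early in the run."""
    r = t / t_max
    return (w_start - w_end) * (r - 1.0) ** 2 + w_end
```

    A large early weight favors exploration; the parabola then shrinks it toward w_end, shifting the swarm toward exploitation.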

    Solving approach of capacity constrained P-median problem based on Power diagram
    ZHENG Liping, JIANG Ting, ZHOU Chenglong, CHENG Yajun
    2015, 35(6):  1623-1627.  DOI: 10.11772/j.issn.1001-9081.2015.06.1623
    Abstract | PDF (739KB)

    Aiming at the capacity-constrained P-median problem over continuous domains with dense demand, the Centroidal Capacity Constrained Power Diagram (CCCPD) was proposed to approximately model the continuous P-median problem and accelerate the solving process. The Power diagram was constructed by an extended Balzer's method; a centroid restriction was imposed to satisfy the requirements of the P-median, and a capacity constraint was imposed to meet the capacity requirements under given demand densities. The experimental results show that the proposed algorithm can quickly obtain an approximate feasible solution, with better computing efficiency than Alper Murata's method and better capacity accuracy than Centroidal Capacity Constrained Voronoi Tessellation (CCCVT). Additionally, the proposed method adapts well to complex density functions.

    Approach for multi-period and multi-attribute matching decision based on perceived expectation
    LIN Yang, WANG Yingming
    2015, 35(6):  1628-1632.  DOI: 10.11772/j.issn.1001-9081.2015.06.1628
    Abstract | PDF (794KB)

    Current research on the bilateral matching problem is limited to single-period scenarios. To address this issue, an approach was proposed for the matching decision problem with multiple periods and multiple attributes. First, using the orness, a measurement of an Agent's preference, an optimization program was constructed to determine the cumulative weight of an Agent for each attribute. More specifically, the criteria of this program consisted of two parts: one was to minimize the sum of deviations between the orness and the corresponding cumulative weight of an Agent in different periods; the other was to minimize the maximum disparity among the cumulative weights of an Agent. Then, based on the obtained cumulative weights, the matching degree, represented by the distances between each Agent's cumulative evaluation value and its positive and negative ideal perceived expectations, was determined via the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Furthermore, a double-objective optimization model based on perceived expectation was constructed, and the minimax method was used to solve it and obtain the matching results. Finally, a numerical example was given to compare the minimax method with the linear weighting method. The results show that the difference of profit and loss of utility obtained by the former is 0.33, less than the 0.36 obtained by the latter, which also demonstrates that the proposed method can maximize the profit and loss of utility of the inferior side.
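    The TOPSIS step used to derive matching degrees can be sketched for simple benefit criteria as follows (the vector normalization and equal weights below are standard textbook choices, not necessarily the paper's exact setup):

```python
import numpy as np

def topsis_scores(X, weights):
    """Rank alternatives by relative closeness to the ideal solution.
    X: rows = alternatives, columns = benefit criteria."""
    Z = X / np.linalg.norm(X, axis=0)          # vector-normalize each criterion
    V = Z * weights
    ideal, anti = V.max(axis=0), V.min(axis=0) # positive / negative ideal points
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness coefficient in [0, 1]
```

    The alternative coinciding with the positive ideal scores 1, the one coinciding with the negative ideal scores 0, and the rest fall in between.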

    Wolf pack algorithm based on modified search strategy
    LI Guoliang, WEI Zhenhua, XU Lei
    2015, 35(6):  1633-1636.  DOI: 10.11772/j.issn.1001-9081.2015.06.1633
    Abstract | PDF (724KB)

    Aiming at the shortcomings of the Wolf Pack Algorithm (WPA), such as slow convergence, a tendency to fall into local optima and unsatisfactory interaction among artificial wolves, a wolf pack algorithm based on a modified search strategy, named Modified Wolf Pack Algorithm (MWPA), was proposed. To promote the exchange of information between artificial wolves, improve the pack's grasp of global information and enhance its exploration ability, an interactive strategy was introduced into the scouting and summoning behaviors. An adaptive besieging strategy was proposed for the besieging behavior, which gave the algorithm a self-regulating role: as the algorithm evolved, the besieging range of the wolves shrank continually, the exploitation ability strengthened, and the convergence rate was thus enhanced. The simulation results on six typical complex function optimization problems show that, compared with the Wolf Colony search Algorithm based on the strategy of the Leader (LWCA), the proposed method obtains higher solution accuracy and faster convergence speed, and is especially suitable for function optimization problems.

    Protein function prediction based on doubly indexed matrix
    MENG Jun, ZHANG Xin
    2015, 35(6):  1637-1642.  DOI: 10.11772/j.issn.1001-9081.2015.06.1637
    Abstract | PDF (880KB)

    A single data source cannot effectively predict protein function, and the information of protein interaction networks is incomplete. To solve this problem, a Multi-Source Integration and Random Walk with Doubly Indexed Matrix (MSI-RWDIM) algorithm was proposed. The algorithm used protein sequences, gene expression and protein-protein interactions to predict protein function. Weighted networks were constructed from the data sources according to their characteristics; the network fused from these weighted networks was then integrated with a function correlation network to construct a doubly indexed matrix. Random walk was used to calculate annotation scores and predict protein function. Cross-validation experiments on yeast data show that MSI-RWDIM achieves higher prediction accuracy, lower coverage and a lower loss rate of function labels. The results show that the overall performance of MSI-RWDIM is much better than the commonly used k-nearest neighbor, transductive multi-label ensemble classifier and fast simultaneous weighting methods.
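    The random-walk scoring idea can be sketched as a random walk with restart over a small weighted network (the restart probability, toy network and fixed iteration count below are assumptions for illustration, not the paper's doubly indexed construction):

```python
import numpy as np

def random_walk_with_restart(W, seed_scores, restart=0.3, iters=100):
    """Score nodes by a random walk with restart over a weighted network.
    W: nonnegative affinity matrix (no isolated nodes assumed);
    seed_scores: initial annotation vector concentrated on seed nodes."""
    P = W / W.sum(axis=0, keepdims=True)   # column-normalize: P[i, j] = j -> i
    e = seed_scores / seed_scores.sum()    # restart distribution
    r = e.copy()
    for _ in range(iters):
        r = (1 - restart) * (P @ r) + restart * e
    return r
```

    The resulting vector is a probability distribution whose mass concentrates on nodes well connected to the seeds, which is what makes it usable as an annotation score.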

    Feature transfer weighting algorithm based on distribution and term frequency-inverse class frequency
    QIU Yunfei, LIU Shixing, LIN Mingming, SHAO Liangshan
    2015, 35(6):  1643-1648.  DOI: 10.11772/j.issn.1001-9081.2015.06.1643
    Abstract | PDF (908KB)

    Traditional machine learning faces a problem: when the training data and test data no longer obey the same distribution, a classifier trained on the training data cannot classify the test data accurately. To solve this problem, following the transfer learning principle, features in the intersection of the source and target domains were weighted according to an improved distribution similarity, while semantic similarity and Term Frequency-Inverse Class Frequency (TF-ICF) were used to weight the non-intersection features in the source domain. Plenty of labeled source domain data and a small amount of labeled target domain data were used to quickly obtain the features required for building a text classifier. The experimental results on the text dataset 20Newsgroups and the non-text dataset UCI show that the feature transfer weighting algorithm based on distribution and TF-ICF can transfer and weight features rapidly while guaranteeing precision.
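    A minimal sketch of TF-ICF weighting, assuming the common form tf(t, c) * log(|C| / cf(t)), where cf(t) counts the classes containing term t (the paper may use a smoothed variant):

```python
import math
from collections import Counter

def tf_icf(docs_by_class):
    """Term Frequency-Inverse Class Frequency weights.
    docs_by_class: {class_label: list of token lists}."""
    n_classes = len(docs_by_class)
    class_tokens = {c: Counter(tok for doc in docs for tok in doc)
                    for c, docs in docs_by_class.items()}
    class_freq = Counter()                 # in how many classes each term occurs
    for counts in class_tokens.values():
        class_freq.update(counts.keys())
    weights = {}
    for c, counts in class_tokens.items():
        total = sum(counts.values())
        weights[c] = {tok: (cnt / total) * math.log(n_classes / class_freq[tok])
                      for tok, cnt in counts.items()}
    return weights
```

    Terms that occur in every class get weight 0, while class-specific terms are weighted up, which is the discriminative behavior the algorithm relies on.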

    Project keyword lexicon and keyword semantic network based on word co-occurrence matrix
    WANG Qing, CHEN Zeya, GUO Jing, CHEN Xi, WANG Jinghua
    2015, 35(6):  1649-1653.  DOI: 10.11772/j.issn.1001-9081.2015.06.1649
    Abstract | PDF (877KB)

    To solve the problems of keyword extraction and keyword lexicon establishment for technological projects in professional fields, an algorithm for building the lexicon based on semantic relations and the co-occurrence matrix was proposed. On the basis of conventional co-occurrence-matrix-based keyword extraction, the algorithm considered several additional factors, such as the position, property and Inverse Document Frequency (IDF) index of the keywords, to improve the traditional approach. Meanwhile, a method was given for establishing the keyword semantic network using the co-occurrence matrix and for identifying hot keywords by computing similarity with semantic base vectors. Finally, 882 project documents in the power field were used for simulation. The experimental results show that the proposed algorithm can effectively extract keywords for technological projects and establish the keyword correlation network, and has better precision, recall and F1-score than a keyword extraction algorithm for Chinese text based on multi-feature fusion.

    Calculation method of user similarity based on location sequence generalized suffix tree
    XIAO Yanli, ZHANG Zhenyu, YUAN Jiangtao
    2015, 35(6):  1654-1658.  DOI: 10.11772/j.issn.1001-9081.2015.06.1654
    Abstract | PDF (807KB)

    To measure the similarity between users from the trajectories formed by their mobility data, an algorithm based on the Location Sequence Generalized Suffix Tree (LSGST) was proposed. First, the location sequence was extracted from the mobility data and mapped to a string, converting the processing of location sequences into string processing. Then the generalized suffix tree of the location sequences of different users was constructed, and similarity was calculated from the number of similar positions, the longest common substring and the frequent common position sequence. The theoretical analysis and simulation results show that the proposed algorithm measures similarity well, and its construction has lower time complexity than the ordinary construction method. Compared with dynamic programming and naive string matching, the proposed algorithm is more efficient when searching for the longest common substring and the frequent common position sequence. The experimental results indicate that LSGST can measure similarity effectively, reduces the trajectory data examined when searching for the measurement indices, and has better time complexity.
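    The dynamic-programming baseline that LSGST is compared against for the longest common substring can be sketched as follows (O(|s|*|t|) time, versus the linear-time query a suffix tree allows):

```python
def longest_common_substring(s, t):
    """Classic DP baseline: cur[j] = length of the common suffix of
    s[:i] and t[:j]; track the best length and where it ends in s."""
    best_len, best_end = 0, 0
    prev = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        cur = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return s[best_end - best_len:best_end]
```

    Here each location symbol would be one character of the mapped string, so a common substring corresponds to a shared stretch of consecutive locations.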

    Recommendation algorithm of taxi passenger-finding locations based on spatio-temporal context collaborative filtering
    QIAN Wenyi, JIANG Xinhua, LIAO Lyuchao, ZOU Fumin
    2015, 35(6):  1659-1662.  DOI: 10.11772/j.issn.1001-9081.2015.06.1659
    Abstract | PDF (772KB)

    Because existing passenger-finding algorithms do not consider a taxi's spatio-temporal context, a collaborative filtering recommendation algorithm for taxi passenger-finding based on spatio-temporal context was proposed. The algorithm mapped potential passenger locations onto a spatial network and introduced a time delay factor into the similarity measure to obtain the set of neighbors whose driving behavior was similar to that of the target taxi. Based on the location context, the algorithm then chose from this neighbor set the potential passenger location of most interest to the target taxi. The experimental results on Fuzhou taxi trajectory data show that the algorithm obtains the best recommendation when the time delay factor is 0.7. Meanwhile, compared with traditional collaborative filtering recommendation algorithms, the proposed algorithm obtains better recommendations under neighbor sets of different sizes, which means it is more accurate than the traditional collaborative filtering algorithms.
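    A time delay factor can be folded into a similarity measure, for example by attenuating a cosine similarity exponentially with the time gap; the attenuation form below is an illustrative assumption (with 0.7, the factor reported to work best, as the default), not the paper's exact formula:

```python
import math

def time_decayed_similarity(a, b, delta_t, delay_factor=0.7):
    """Cosine similarity of two behavior vectors, attenuated by
    delay_factor ** delta_t so that older co-behavior counts for less."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0 or nb == 0:
        return 0.0
    return (dot / (na * nb)) * (delay_factor ** delta_t)
```

    With delta_t = 0 the measure reduces to plain cosine similarity; each additional period of delay multiplies it by the delay factor.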

    Potential friend recommendation based on user tagging
    WU Buxiao, XIAO Jing
    2015, 35(6):  1663-1667.  DOI: 10.11772/j.issn.1001-9081.2015.06.1663
    Abstract | PDF (727KB)

    At present, most social networking systems recommend potential friends mainly according to existing friend relationships, without emphasizing users' interests; moreover, it is very difficult to mine users' interests with high precision from large amounts of data. A Friend Recommendation Based on user Tagging (FRBT) algorithm was proposed to find potential friends with the same interests by mining users' interests from tagging behavior data. First, Term Frequency-Inverse Document Frequency (TF-IDF) was used to cluster semantically similar tags into topics, and a new formula for calculating users' topic similarity was described. Combining the user similarities based on topics and on items, the algorithm recommended users with high similarity as potential friends. The experimental results on a tagging dataset from Delicious validate that, compared with the item-based, tag-based and tri-graph algorithms, FRBT has better precision and recall.

    Homomorphism of a public key encryption scheme based on the Chinese Residue Theorem
    WANG Huiyong, SUN Shuang, FENG Yong
    2015, 35(6):  1668-1672.  DOI: 10.11772/j.issn.1001-9081.2015.06.1668
    Abstract | PDF (688KB)

    The existing (fully) homomorphic encryption schemes fail to meet practical needs because of poor efficiency. To explore new ways toward better homomorphic encryption schemes, the possibility of constructing homomorphism on a public key encryption scheme from the literature based on the Chinese Residue Theorem (CRT) was studied. The possibility of constructing addition and multiplication homomorphic operations on the original scheme was investigated, and the original scheme was proved unsuitable for constructing homomorphic addition and multiplication. Several security and efficiency problems in the original scheme were analyzed. A revised scheme with stronger security under proper configurations was then given, together with its correctness verification, followed by an analysis of the security and computational complexity of the revised scheme, with emphasis on its resistance to lattice reduction attacks. Afterwards, the feasibility of building homomorphic operations on the revised scheme was studied, and the main performance comparison between the original and revised schemes was made. Finally, experience on building homomorphism was summarized and some advice on constructing an ideal (fully) homomorphic encryption scheme was presented.
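    The Chinese Residue Theorem underlying the scheme can be sketched as follows (a standard constructive CRT for pairwise coprime moduli, not the paper's encryption scheme itself):

```python
def crt(residues, moduli):
    """Chinese Remainder Theorem: find x with x = r_i (mod m_i) for
    pairwise coprime moduli, by Garner-style accumulation."""
    x, m = 0, 1
    for r, mod in zip(residues, moduli):
        # solve x + m*k = r (mod mod)  =>  k = (r - x) * m^-1 (mod mod)
        k = ((r - x) * pow(m, -1, mod)) % mod
        x += m * k
        m *= mod
    return x % m
```

    For example, the unique x below 3*5*7 = 105 with x = 2 (mod 3), 3 (mod 5), 2 (mod 7) is 23. (The three-argument `pow` modular inverse requires Python 3.8+.)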

    Fast greatest common divisor algorithm based on k-ary reduction
    WANG Guangsai, ZENG Guang, HAN Wenbao, LI Yongguang
    2015, 35(6):  1673-1677.  DOI: 10.11772/j.issn.1001-9081.2015.06.1673
    Abstract | PDF (874KB)

    Greatest Common Divisor (GCD) computation is one of the basic subjects in computational number theory and is widely applied in cryptography and cryptanalysis. For inputs B and C, an algorithm based on the right-shift k-ary reduction proposed by Sorenson was presented for finding integers x and y such that the least significant bits of Bx - Cy are 0, i.e., Bx - Cy ≡ 0 (mod 2^e), where the positive integer e is a constant. Taking advantage of this algorithm for finding x and y, many right shifts can be performed and a large number of iterations can be eliminated. Combined with a modulus algorithm, a fast GCD algorithm was proposed. When the size of the input is n bits, the worst-case complexity of the fast GCD algorithm is still O(n^2), while in the best case its complexity can achieve O(n log^2 n log log n). The experimental data show that, in actual implementations, the fast GCD algorithm is faster than the binary GCD algorithm for inputs of more than about 200000 bits, and is twice as fast for inputs of 1 million bits.
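    The binary GCD algorithm used as the experimental baseline can be sketched in Stein's classic shift-and-subtract form:

```python
def binary_gcd(a, b):
    """Stein's binary GCD: uses only shifts and subtractions, the baseline
    the fast k-ary reduction algorithm is measured against."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:        # factor out common powers of two
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:              # a need not stay even
        a >>= 1
    while b != 0:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a                     # both odd here, so b - a is even
    return a << shift
```

    The k-ary reduction generalizes the same idea: instead of stripping one factor of 2 per subtraction, it finds small x, y with Bx - Cy divisible by a power of 2 and strips many bits at once.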

    Identity-based proxy re-signature scheme without bilinear pairing
    HUANG Ping, YANG Xiaodong, LI Yan, WANG Caifen
    2015, 35(6):  1678-1682.  DOI: 10.11772/j.issn.1001-9081.2015.06.1678
    Abstract | PDF (761KB)

    The existing identity-based bidirectional proxy re-signature schemes require expensive bilinear pairing operations. To address this issue, an identity-based bidirectional proxy re-signature scheme without bilinear pairing was presented by using hash functions. Under the discrete logarithm assumption, the proposed proxy re-signature scheme was proved secure against forgery under adaptive chosen message attacks. Furthermore, the proposed scheme is bidirectional, versatile, transparent and key-optimal while eliminating bilinear pairing operations. Compared with the identity-based bidirectional proxy re-signature Shao scheme, the proposed scheme reduces the computational complexity of the re-signature algorithm and improves the efficiency of the signature verification algorithm. Based on the proposed scheme, an aggregate proxy re-signature scheme was also proposed, which can aggregate re-signatures only if they are generated in the same time period, greatly reducing the communication overhead.

    Neural cryptography algorithm based on "Do not Trust My Partner" and fast learning rule
    ZHANG Lisheng, LIU Fengchai, DONG Tao, ZHANG Huachuan, HU Wenjie
    2015, 35(6):  1683-1687.  DOI: 10.11772/j.issn.1001-9081.2015.06.1683
    Abstract | PDF (737KB)

    Focusing on the key exchange problem of how to achieve higher security for neural cryptography within a short synchronization time, a new hybrid algorithm combining "Do not Trust My Partner" (DTMP) and a fast learning rule was proposed. The algorithm could send erroneous output bits over the public channel to disrupt an attacker's eavesdropping on the exchanged bits and reduce the success rate of passive attacks. Meanwhile, the proposed algorithm estimated the synchronization level by estimating the probability of unequal outputs, and then adjusted the weight changes according to this level to speed up synchronization. The simulation results show that the proposed algorithm outperforms the original DTMP in the time needed for the partners to synchronize, and is more secure than the original DTMP when the partners do not send erroneous output bits at the same time. It also clearly outperforms the feedback algorithm in both synchronization time and security. The experimental results show that the proposed algorithm can obtain the key with a high level of security in less synchronization time.

    Cache pollution attack defense scheme based on cache diversification in content centric networking
    ZHENG Linhao, TANG Hongbo, GE Guodong
    2015, 35(6):  1688-1692.  DOI: 10.11772/j.issn.1001-9081.2015.06.1688
    Abstract | PDF (775KB)

    To deal with cache pollution attacks in Content Centric Networking (CCN), a defense scheme based on cache diversification was proposed. To reduce the attack scope, in-network content services were divided into three categories and a different cache strategy was used for each: for private and real-time services, contents were delivered directly without being cached; for streaming media services, contents were probabilistically pushed to caches at the network edge; for document services, contents were cached preferentially upstream and then pushed downstream. Different defense methods were then configured on different nodes: edge nodes detected attacks by observing variations in the request probability of different contents, while upstream nodes excluded contents with low request rates from the cache space by setting filter rules. The simulation results show that the network average hit ratio under the service diversification mechanism is 17.3% higher than that of CCN with traditional caching strategies. The proposed scheme can effectively improve the network's defense capability against cache pollution attacks.

    Trust model for wireless sensor network based on grey theory
    CHEN Di, ZHOU Mingzheng
    2015, 35(6):  1693-1697.  DOI: 10.11772/j.issn.1001-9081.2015.06.1693
    Abstract | PDF (761KB)

    Focusing on the accurate trust assessment of communication nodes in Wireless Sensor Networks (WSN), a Grey Theory-based Trust Model named GTTM was proposed. The model monitored the behaviors of nodes in the network, built the sample matrix, computed the weights of recommending nodes by the grey relational method, and computed nodes' trust values through the clustering algorithm of grey theory. The simulation experiments show that, compared with the classic Reputation-based Framework for Sensor Networks (RFSN) model, the trust values of nodes in GTTM converge more smoothly; GTTM can resist malicious recommendations, reduce the trust values of untrusted nodes in a timely manner, and still achieve a high rate of successful transactions when the network is under attack. Compared with the Trust Computation Model based on Bayes Estimation (TCM-BE), GTTM keeps a low false positive rate for malicious nodes even with few recommendation samples. The experimental results show that GTTM can evaluate the trust values of nodes more accurately and ensure the reliable operation of the network.
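    The grey relational computation at the heart of such models can be sketched in its textbook form (distinguishing coefficient rho = 0.5 by convention; the paper's sample matrix construction and clustering step are not reproduced here):

```python
def grey_relational_grades(reference, series, rho=0.5):
    """Grey relational analysis: grade of each comparison series against
    the reference series; grades near 1 mean close agreement."""
    diffs = [[abs(r - x) for r, x in zip(reference, s)] for s in series]
    d_min = min(min(row) for row in diffs)   # global minimum difference
    d_max = max(max(row) for row in diffs)   # global maximum difference
    grades = []
    for row in diffs:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

    A series identical to the reference gets grade 1; the farther a recommending node's behavior pattern lies from the reference, the lower its grade, and hence its weight.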

    Construction of mobile botnet based on URL shortening services flux and Google cloud messaging for Android
    LI Na, DU Yanhui, CHEN Mo
    2015, 35(6):  1698-1704.  DOI: 10.11772/j.issn.1001-9081.2015.06.1698
    Abstract | PDF (1055KB)

    To enhance the defensive and predictive capability of mobile networks, a method for constructing a mobile botnet based on URL Shortening Services Flux (USSes-Flux) and Google Cloud Messaging for Android (GCM) was proposed. The mobile botnet model was designed with a hybrid topology of centralized and Peer-to-Peer (P2P) structures, and a USSes-Flux algorithm was presented that increased the robustness and stealthiness of the Command and Control (C&C) channel. The control model was discussed, and the state changes of different bots, the command design and the propagation algorithm were analyzed. In a test environment, the relationship between the probability of short URL invalidation and the number of required short URLs was discussed, and static analysis, dynamic analysis and power testing of the mobile botnet and of samples using different C&C channels were carried out. The results show that the proposed mobile botnet is more stealthy, robust and low-cost.

    Optimization method of tracing distributed denial of service attacks based on autonomous system and dynamic probabilistic packet-marking
    SHEN Xueli, SHEN Jie
    2015, 35(6):  1705-1709.  DOI: 10.11772/j.issn.1001-9081.2015.06.1705
    Abstract | PDF (752KB)

    Distributed Denial of Service (DDoS) attacks are a serious threat to network security. To address this problem, an effective method for tracing DDoS attacks based on Autonomous System (AS) and Dynamic Probabilistic Packet Marking (DPPM) was proposed. In the proposed method, a new packet marking scheme was designed with two markers, domain marks and routing marks, for inter-domain and in-domain tracing; both were set simultaneously using dynamic packet marking. Finally, through path reconstruction within and between domains, the attack node was traced back rapidly. The experimental results show that the proposed algorithm is efficient and feasible, and provides an important basis for DDoS attack prevention.

    Expectation-maximization Bernoulli-asymmetric-Gaussian approximate message passing algorithm based on compressed sensing
    ZHANG Zheng, XIE Zhengguang, YANG Sanjia, JIANG Xinling
    2015, 35(6):  1710-1715.  DOI: 10.11772/j.issn.1001-9081.2015.06.1710
    Abstract | PDF (932KB)

    The Bernoulli-Gaussian (BG) model in the Expectation-Maximization Bernoulli-Gaussian Approximate Message Passing (EM-BG-AMP) algorithm is constrained by its symmetry, which restricts its approximation of the actual prior distribution of signals, while the Gaussian-Mixture (GM) model in the Expectation-Maximization Gaussian-Mixture Approximate Message Passing (EM-GM-AMP) algorithm is a higher-order extension of the BG model with quite high complexity. To solve these problems, the Bernoulli-Asymmetric-Gaussian (BAG) model was proposed, and from it the Expectation-Maximization Bernoulli-Asymmetric-Gaussian Approximate Message Passing (EM-BAG-AMP) algorithm was derived. Assuming the input signal obeys the BAG model, the algorithm used Generalized Approximate Message Passing (GAMP) to reconstruct the signal and update the model parameters iteratively. The experimental results show that, when processing different images, the running time of EM-BAG-AMP increases by only 1.2% compared with EM-BG-AMP while its Peak Signal-to-Noise Ratio (PSNR) values improve by 0.1-0.5 dB; in particular, for images with simple texture and obvious color-difference changes, the PSNR values increase by 0.4-0.5 dB. EM-BAG-AMP is an expansion and extension of EM-BG-AMP and better adapts to actual signals.

    Terrain rendering for level of detail based on hardware tessellation
    WANG Wenbo, YIN Hong, XIE Wenbin, WANG Jiateng
    2015, 35(6):  1716-1719.  DOI: 10.11772/j.issn.1001-9081.2015.06.1716
    Abstract | PDF (849KB)

    When subdividing a terrain grid in the vertex shader, an extra generated pattern is needed and the calculation of the subdivision level is complicated. To address this insufficiency, a Level of Detail (LOD) terrain rendering algorithm using the tessellation shader was put forward. The method used a block quadtree organization to build a coarse hierarchical terrain grid structure, and filtered the active terrain blocks with an LOD discrimination function. A tessellation factor calculation method based on the continuous three-dimensional distance from the viewpoint was proposed for the tessellation control shader, and cracks between adjacent blocks were eliminated through the outer tessellation factors. Displacement mapping was then performed in the tessellation evaluation shader to displace the height component of the fine grid blocks. Meanwhile, the quadtree was saved in the vertex buffer, decreasing the exchange of resources between the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU), and the subdivision process was accelerated by introducing a subdivision queue. The experimental results show that the proposed algorithm has smooth detail level transitions and good subdivision effect, and can increase GPU utilization and terrain rendering efficiency.

    Automatic matching method for aviation oblique images based on homography transformation
    ZHAO Xia, ZHU Qing, XIAO Xiongwu, LI Deren, GUO Bingxuan, ZHANG Peng, HU Han, DING Yulin
    2015, 35(6):  1720-1725.  DOI: 10.11772/j.issn.1001-9081.2015.06.1720
    Abstract | PDF (1010KB)

    To reduce the high computational complexity of the Affine Scale Invariant Feature Transform (ASIFT) algorithm, a robust and rapid matching method for large-angle aviation oblique images based on homography transformation, named H-SIFT, was proposed. Firstly, the homography matrix between two oblique images was calculated by making full use of the rough Exterior Orientation (EO) elements of the images; the left image was then rectified by this homography to eliminate geometric distortion and differences in scale and rotation. Secondly, matches between the rectified image and the right image were obtained using the Scale Invariant Feature Transform (SIFT) algorithm: coarse matches were obtained using two matching constraints, the Nearest Neighbor Distance Ratio (NNDR) and consistency checking, and the false matches among them were then eliminated using the RANdom SAmple Consensus (RANSAC) algorithm. Finally, with the matching points on the rectified image obtained, the corresponding matching points on the original left image were computed through the homography matrix. Experiments on three pairs of typical oblique images acquired by the Si Wei Digital Camera 5 (SWDC-5) demonstrate that the proposed algorithm significantly outperforms ASIFT in computational efficiency and in the quantity and distribution of matching points: it takes only about 0.93%, 0.88% and 0.97% of ASIFT's time while obtaining about 2.18, 1.31 and 1.70 times as many correct matching points.
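    Transferring matching points through a homography, as in the final step above, can be sketched as a plain homogeneous-coordinate mapping (the matrix below is an illustrative translation, not an estimated one):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography matrix H."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T      # lift to homogeneous, transform
    return ph[:, :2] / ph[:, 2:3]          # divide by w to return to Euclidean
```

    With the rectifying homography H known, points found on the rectified image are mapped back to the original left image by applying the inverse matrix in the same way.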

    Improved Hough transform to detect planar geologic features in borehole images
    PENG Cheng, ZOU Changchun
    2015, 35(6):  1726-1729.  DOI: 10.11772/j.issn.1001-9081.2015.06.1726
    Abstract | PDF (621KB)

    To automatically extract planar geologic features from borehole images, methods of detecting single-cycle sinusoidal curves in images were studied and an improved Hough transform was proposed. The proposed method voted in a two-dimensional accumulator based on three related points on the sinusoidal curve to determine the phase and baseline depth, and then voted in a one-dimensional accumulator to determine the amplitude. Simulated images and borehole images were processed, and the proposed method was compared with the traditional Hough transform and the fast Hough transform. The results show that the proposed method not only improves the detection efficiency but also has high accuracy.
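    A planar feature crossing a cylindrical borehole unrolls to a single-cycle sinusoid depth(theta) = d + A*sin(theta + phi); the parameters the accumulators vote for can be illustrated with a direct least-squares fit (a sketch of the underlying model, not the paper's Hough voting itself):

```python
import numpy as np

def fit_borehole_sinusoid(theta, depth):
    """Fit depth(theta) = d + A*sin(theta + phi) by linear least squares,
    using A*sin(theta + phi) = a*sin(theta) + b*cos(theta)."""
    A_mat = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    (a, b, d), *_ = np.linalg.lstsq(A_mat, depth, rcond=None)
    amplitude = float(np.hypot(a, b))     # A = sqrt(a^2 + b^2)
    phase = float(np.arctan2(b, a))       # phi = atan2(b, a)
    return amplitude, phase, float(d)
```

    The amplitude encodes the dip of the plane, the phase its dip direction, and d its baseline depth, which is exactly the parameter triple the voting scheme recovers.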

    Design of virtual surgery system in reduction of maxillary fracture
    LI Danni, LIU Qi, TIAN Qi, ZHAO Leiyu, HE Ling, HUANG Yunzhi, ZHANG Jing
    2015, 35(6):  1730-1733.  DOI: 10.11772/j.issn.1001-9081.2015.06.1730

    Based on the open-source software Computer Haptics, visualizAtion and Interactive in 3D (CHAI 3D) and the Open Graphics Library (OpenGL), a virtual surgery system was designed for the reduction of maxillary fracture. The virtual simulation scenario was constructed from real patients' CT data. A Geomagic force feedback device was used to manipulate the virtual 3D models and output haptic feedback. On the basis of the original single finger-proxy algorithm, a multi-proxy collision algorithm was proposed to solve the problem that the tools might stab into the virtual organs during simulation. In the virtual surgery system, the operator can use the force feedback device to select, move and rotate the virtual skull model to simulate the movement and placement performed in a real operation. The proposed system can be used to train medical students and for the preoperative planning of complicated surgeries.
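    The finger-proxy idea underlying this work can be illustrated in one dimension: the proxy is constrained to the surface while the haptic device may penetrate it, and the feedback force is a spring between the two. This is a hedged sketch with an invented stiffness, not the paper's multi-proxy algorithm, which extends the same principle to several proxies per tool:

```python
def proxy_force(device_z, surface_z=0.0, k=200.0):
    """1-D proxy haptic rendering: the proxy stays on or above the surface,
    and the force is a spring pulling the device back toward the proxy."""
    proxy_z = max(device_z, surface_z)   # proxy cannot sink below the surface
    return k * (proxy_z - device_z)      # zero in free space, restoring in contact

f_free    = proxy_force(0.5)             # device above the surface: no force
f_contact = proxy_force(-0.01)           # device penetrating by 0.01: restoring force
```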

    Spatial-temporal filtering for coronary angiography image sequence
    ZHOU Zhenzhen, SUN Fengrong, SONG Shangling, WANG Lixin, LUAN Yuhuan, YAO Guihua
    2015, 35(6):  1734-1738.  DOI: 10.11772/j.issn.1001-9081.2015.06.1734

    In order to reduce the noise of coronary angiography image sequences, enhance diagnostic accuracy for coronary heart disease, and ultimately acquire superior image quality under a low X-ray dose, a spatial-temporal filtering method for coronary angiography images was proposed. By introducing the idea of threshold denoising from wavelet denoising into the Fast Discrete Orthonormal Stockwell Transform (FDOST), a soft-threshold denoising algorithm based on FDOST was proposed for the spatial denoising of coronary angiography images. Conventional wavelet denoising was used for temporal denoising, taking advantage of its time-smoothing property, and the Hessian matrix was used in pre-processing to track the line-like structures of coronary angiography images. The simulation and experimental results show that the signal-to-noise ratio and contrast-to-noise ratio of the denoised images are improved significantly compared with the original images, and that the proposed method is suitable for denoising low-dose coronary angiography image sequences.
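    Soft thresholding itself is transform-agnostic: coefficients with small magnitude are set to zero and larger ones are shrunk toward zero. The sketch below uses the FFT as a stand-in for FDOST (which is not implemented here) and an invented threshold and signal:

```python
import numpy as np

def soft_threshold(c, t):
    """Soft-thresholding of (possibly complex) coefficients: shrink magnitudes by t."""
    mag = np.abs(c)
    return np.where(mag > t, c * (mag - t) / np.maximum(mag, 1e-12), 0.0)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 256))          # clean 1-D test signal
noisy = signal + 0.3 * rng.standard_normal(256)          # additive Gaussian noise

# Transform, threshold, inverse transform -- the same pattern the paper
# applies with FDOST coefficients in the spatial domain.
coeffs = np.fft.fft(noisy)
den = np.fft.ifft(soft_threshold(coeffs, 20.0)).real
```

The dominant sinusoid's coefficients (magnitude far above the threshold) survive shrinkage, while the noise floor is suppressed.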

    Noise-suppression method for flicker pixels in dynamic outdoor scenes based on ViBe
    ZHOU Xiao, ZHAO Feng, ZHU Yanlin
    2015, 35(6):  1739-1743.  DOI: 10.11772/j.issn.1001-9081.2015.06.1739

    The Visual Background extractor (ViBe) model for moving target detection cannot avoid the interference caused by irregular flicker-pixel noise in dynamic outdoor scenes. To solve this issue, a flicker-pixel noise-suppression method based on the ViBe algorithm was proposed. In the background model initialization stage, a fixed standard deviation of the background model samples was used as a threshold to limit their range and obtain suitable background samples for each pixel. In the foreground detection stage, an adaptive detection threshold was applied to improve the accuracy of the detection result. Edge inhibition was performed on image-edge background pixels to prevent erroneous background sample values from being updated into the background model during the model update process. On this basis, morphological operations were added to repair connected components and obtain more complete foreground images. Finally, the proposed method was compared with the original ViBe algorithm and a ViBe variant with morphological post-processing on multiple video sequences. The experimental results show that the proposed method can suppress flicker-pixel noise effectively and obtain more accurate results.
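    The base ViBe decision this method builds on classifies a pixel as background when enough of its stored samples lie within a radius of the current value. A minimal per-pixel sketch, with invented sample values and a fixed radius (the paper replaces this fixed radius with an adaptive threshold):

```python
import numpy as np

def vibe_classify(pixel, samples, radius=20, min_matches=2):
    """ViBe foreground test for one pixel: background if at least
    `min_matches` model samples are within `radius` of the current value."""
    return np.sum(np.abs(samples - pixel) < radius) >= min_matches

# Hypothetical 20-sample background model for a single pixel
samples = np.array([100, 102, 99, 101, 98, 100, 103, 97, 101, 100,
                    99, 102, 100, 98, 101, 100, 99, 103, 102, 100])

is_bg_stable  = vibe_classify(100, samples)   # value near the model: background
is_bg_flicker = vibe_classify(180, samples)   # outlier value: flagged as foreground
```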

    Error concealment for high efficiency video coding based on block-merging
    GAO Wenhua, ZHANG Yiyun, WANG Haidong
    2015, 35(6):  1744-1748.  DOI: 10.11772/j.issn.1001-9081.2015.06.1744

    The Coding Unit (CU) size in High Efficiency Video Coding (HEVC) is several times larger than that in previous coding standards, so error concealment in HEVC performs poorly when packet loss occurs during decoding. An error concealment method based on merging the segmentation blocks within a CU was proposed. Firstly, the correlation between residual energy and block segmentation was analyzed. Secondly, the residual energy of the reference frame was compared with a set threshold, and the segmentation blocks of the lost CU were merged based on the comparison result. Then, the vector extrapolation method was optimized with weights to ensure its applicability to HEVC error concealment. Finally, the optimized vector extrapolation method was used to conceal errors in the merged blocks. The experimental results show that, compared with classic error concealment methods such as copying and motion compensation, the proposed method preserves the Structural Similarity Index Measurement (SSIM) of the decoded video and improves its Peak Signal-to-Noise Ratio (PSNR) for sequences of different motion levels, verifying the feasibility of the proposed algorithm.
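    A weighted vector extrapolation step of this kind can be sketched as a weighted average of candidate motion vectors, with weights proportional to how much each extrapolated block overlaps the lost region. The vectors and overlap areas below are invented for illustration, and the weighting scheme is a simplification of the paper's optimization:

```python
import numpy as np

# Hypothetical motion vectors of neighboring/co-located blocks whose
# extrapolations overlap the lost (merged) block, with overlap areas in pixels
mvs     = np.array([[4.0, 0.0], [2.0, 2.0], [0.0, 0.0]])
overlap = np.array([32.0, 16.0, 16.0])

weights = overlap / overlap.sum()      # normalize overlap areas into weights
mv_lost = weights @ mvs                # weighted extrapolated MV for the lost block
```

The concealed block is then motion-compensated from the reference frame using `mv_lost`.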

    Fast super-resolution reconstruction for single image based on predictive sparse coding
    SHEN Hui, YUAN Xiaotong, LIU Qingshan
    2015, 35(6):  1749-1752.  DOI: 10.11772/j.issn.1001-9081.2015.06.1749

    The classic super-resolution algorithm via sparse coding has a high computational cost during the reconstruction phase. In view of this disadvantage, a predictive sparse coding-based single-image super-resolution method was proposed. In the training phase, the proposed method added a code prediction error term to the traditional sparse coding error function, and used an alternating minimization procedure to minimize the resulting objective function. In the testing phase, the reconstruction coefficients could be estimated by simply multiplying the low-dimensional image patch with the low-dimensional dictionary, without any need to solve sparse regression problems. The experimental results demonstrate that, compared with the classic single-image super-resolution algorithm via sparse coding, the proposed method significantly reduces reconstruction time while maintaining the super-resolution visual effect.
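    The speed-up in the testing phase comes from replacing an iterative sparse regression with two matrix products. A minimal sketch with random stand-ins for the trained predictor and high-resolution dictionary (all sizes and matrices are invented; training is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
d_low, d_high, k = 25, 100, 128            # hypothetical patch and dictionary sizes

W = 0.1 * rng.standard_normal((k, d_low))  # learned code predictor (assumed trained)
D_high = rng.standard_normal((d_high, k))  # high-resolution dictionary (assumed trained)

def reconstruct(lr_patch):
    """Test-phase inference: one matrix product predicts the code,
    a second one synthesizes the high-resolution patch."""
    code = W @ lr_patch                    # predicted sparse code, no optimization loop
    return D_high @ code                   # high-resolution patch estimate

hr_patch = reconstruct(rng.standard_normal(d_low))
```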

    Music genre classification based on multiple kernel learning and support vector machine
    SUN Hui, XU Jieping, LIU Binbin
    2015, 35(6):  1753-1756.  DOI: 10.11772/j.issn.1001-9081.2015.06.1753

    Multiple Kernel Learning and Support Vector Machine (MKL-SVM) was applied to automatic music genre classification to choose the optimal kernel function for each feature, and a method that combines the optimal kernel functions into a synthetic kernel by weighting was proposed for music genre classification. Different optimal kernel functions were chosen for different acoustic features through multiple kernel learning, the weight of each kernel function in classification was obtained, and thus the weight of each acoustic feature in genre classification was clarified, which provides a clear and definite basis for analyzing and selecting feature vectors in music genre classification. The experiments on the ISMIR 2011 dataset show that, compared with traditional single-kernel SVM classification, the accuracy of the proposed MKL-SVM-based automatic music genre classification method is improved by 6.58%. Moreover, the proposed method more clearly reveals the impact of different features on the classification results, and the results are further improved by selecting the features with larger effects on classification.
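    The synthetic kernel at the heart of MKL is a weighted sum of per-feature kernels. A minimal sketch with invented feature groups, RBF bandwidths and weights (in MKL the weights are learned jointly with the SVM, which is not shown here):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between row-wise sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
timbre = rng.standard_normal((5, 4))   # hypothetical timbre descriptors, 5 tracks
rhythm = rng.standard_normal((5, 3))   # hypothetical rhythm descriptors, 5 tracks

# One kernel per acoustic feature, combined with non-negative learned weights
weights = [0.7, 0.3]                   # assumed learned, summing to 1
K = weights[0] * rbf_kernel(timbre, timbre, 0.5) \
  + weights[1] * rbf_kernel(rhythm, rhythm, 0.5)
```

`K` can be passed to any kernel SVM as a precomputed Gram matrix; the weight of each term is the feature's contribution the abstract refers to.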

    Efficient mining algorithm for uncertain data in probabilistic frequent itemsets
    LIU Haoran, LIU Fang'ai, LI Xu, WANG Jiwei
    2015, 35(6):  1757-1761.  DOI: 10.11772/j.issn.1001-9081.2015.06.1757

    When using pattern growth to construct a tree structure, existing algorithms for mining probabilistic frequent itemsets suffer from problems such as generating a large number of tree nodes, occupying large memory space and having low efficiency. To solve these problems, a Progressive Uncertain Frequent Pattern Growth algorithm named PUFP-Growth was proposed. By reading the uncertain database tuple by tuple, the proposed algorithm constructed a tree structure as compact as the Frequent Pattern Tree (FP-Tree) and dynamically updated the arrays of expected values whose header table entries hold identical itemsets. After all transactions were inserted into the Progressive Uncertain Frequent Pattern tree (PUFP-Tree), all probabilistic frequent itemsets could be mined by traversing the dynamic arrays. The experimental results and theoretical analysis show that the PUFP-Growth algorithm can find probabilistic frequent itemsets effectively; compared with the Uncertain Frequent pattern Growth (UF-Growth) and Compressed Uncertain Frequent-Pattern Mine (CUFP-Mine) algorithms, it improves the mining efficiency of probabilistic frequent itemsets on uncertain datasets and reduces memory usage to a certain degree.
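    The basic quantity such algorithms accumulate is the expected support of an itemset over an uncertain database. A minimal sketch on an invented three-tuple database (the tree construction itself is not shown):

```python
from math import prod

# Hypothetical uncertain database: each transaction maps item -> existence probability
db = [
    {"a": 0.9, "b": 0.8},
    {"a": 0.5, "c": 1.0},
    {"a": 0.7, "b": 0.6, "c": 0.4},
]

def expected_support(itemset, db):
    """Expected support = sum over tuples containing the itemset of the
    product of its members' existence probabilities."""
    return sum(prod(t[i] for i in itemset)
               for t in db if all(i in t for i in itemset))

es_ab = expected_support({"a", "b"}, db)   # 0.9*0.8 + 0.7*0.6
```

An itemset is probabilistic frequent when this expectation (or, in stricter formulations, its frequentness probability) exceeds the minimum support threshold.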

    Spatial range query in wireless broadcast environment
    MA Xiaoqin, PENG Xiufen, YANG Li
    2015, 35(6):  1762-1765.  DOI: 10.11772/j.issn.1001-9081.2015.06.1762

    In order to realize fast and energy-efficient spatial range queries in wireless broadcast environments, a Range Query based on Grid Spatial Index (RQGSI) algorithm was proposed. On the server, a grid spatial index was established over all data objects to shorten tuning time, and the grid cells were then scheduled according to the Hilbert curve filling order to optimize access time. On the client, a query processing algorithm was designed to filter and prune the data objects. Finally, simulation experiments verified the performance of the proposed RQGSI algorithm. The experimental results show that, compared with the R-tree Index (RI) algorithm, RQGSI reduces tuning time by about 10% and access time by about 8%, achieving faster range queries with lower energy consumption.
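    The Hilbert-order scheduling step relies on a standard conversion from distance along the curve to grid coordinates, so that cells broadcast consecutively are spatially adjacent. Below is the classic distance-to-coordinate routine as a sketch (the grid order is an invented parameter; this is not the paper's code):

```python
def hilbert_d2xy(order, d):
    """Convert distance d along a Hilbert curve over a 2^order x 2^order grid
    to (x, y) cell coordinates (standard bit-twiddling formulation)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Broadcast schedule for a hypothetical 4x4 grid: visit cells in Hilbert order
schedule = [hilbert_d2xy(2, d) for d in range(16)]
```

Consecutive entries of `schedule` are always neighboring cells, which is what lets a client listening for a range query read the relevant cells in one contiguous stretch of the broadcast.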

    Open robot Agent: construction of host SoftMan
    WU Danfeng, ZENG Guangping, XIAO Chao'en, ZHANG Qingchuan
    2015, 35(6):  1766-1772.  DOI: 10.11772/j.issn.1001-9081.2015.06.1766

    To overcome the limitation that robot functions can only be updated, modified, upgraded and maintained offline and statically, SoftMan was introduced into the robot platform, and a robot system architecture whose managing center is a host SoftMan was built. The host SoftMan was the main research focus. Firstly, the architecture of the host SoftMan was constructed. Then a unified descriptive model of the knowledge and behavior of the host SoftMan was put forward: the knowledge model was constructed and implemented based on data structures, and design specifications and a reference implementation of the algorithms were given for its main service behaviors. Finally, the robot system was unified with the SoftMan system. In tests, robot functions were successfully replaced online and dynamically, verifying the correctness and feasibility of the method for designing and implementing the host SoftMan.

    Formal model supporting Web service composition and verification
    HOU Jinkui, WANG Lei
    2015, 35(6):  1773-1779.  DOI: 10.11772/j.issn.1001-9081.2015.06.1773

    To address the problems of Web service composition and verification, a formal model based on the framework of category theory was proposed. Process algebra was introduced into the framework to describe the external behavior of service components, establishing a formal semantic model for the architecture of Web service systems. The service network was described with category diagrams, in which Web services serve as categorical objects and the interaction and composition relationships between services serve as morphisms. On the basis of formal definitions of service interface, Web service and service composition, the semantics of service composition and interaction were further analyzed and discussed, and the concepts of Web service substitutability and service request satisfiability were formally defined. The application research shows that the proposed framework enhances the semantic description capability of Web service architectures.

    Authentication algorithm of multi-touch based on mobile touch sensor
    PANG Yongchun, SUN Ziwen, WANG Yao
    2015, 35(6):  1780-1784.  DOI: 10.11772/j.issn.1001-9081.2015.06.1780

    A multi-touch authentication method based on mobile touch sensors was proposed to address the information security threats to smart phones. First, the original finger-sliding data sequences collected from a touch sensor were preprocessed by smoothing denoising and by normalizing position and length. Then, authentication feature sequences were extracted from the normalized first and second derivatives and the direction of the preprocessed data sequences. Finally, template matching was adopted to judge the user's identity, using the dynamic time warping algorithm to compare the registered template feature sequences with the test feature sequences. The simulation results show that the proposed algorithm achieves an average False Rejection Rate (FRR) of 3.83% and an average False Acceptance Rate (FAR) of 2.07% for different users. Compared with a Support Vector Distribution Estimation (SVDE) classifier with a Radial Basis Function (RBF) kernel, the FRR and FAR of the proposed algorithm are reduced by 1.81% and 2.35% respectively. The performance analysis shows that the proposed algorithm can obviously improve the accuracy of user authentication.
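    The dynamic time warping comparison at the core of the matching step can be sketched as the textbook DP recurrence; the feature sequences and acceptance threshold below are invented, and real feature sequences would be multi-dimensional:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A probe swipe is accepted when its DTW distance to the enrolled template
# falls below a hypothetical decision threshold.
template = [0.0, 1.0, 2.0, 3.0, 2.0]
probe    = [0.0, 1.0, 1.0, 2.0, 3.0, 2.0]   # same shape, executed slightly slower
accepted = dtw_distance(template, probe) < 1.5
```

DTW tolerates exactly the kind of speed variation between two swipes of the same gesture that plain Euclidean comparison would penalize.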

    Human detection algorithm with variable rotation angle
    DONG Zhicong, LI Fuhai, LIU Shaoxiong
    2015, 35(6):  1785-1790.  DOI: 10.11772/j.issn.1001-9081.2015.06.1785

    Prevalent human detection methods are usually applied to cases without rotation, and their detection rates are poor when the rotation angle varies. To solve this issue, an algorithm that can identify humans under variable rotation angles was proposed. Firstly, the Radial Gradient Transform (RGT) method was adopted to obtain rotation-invariant gradients. Then, using an approach similar to the overlapping of blocks in the Histogram of Oriented Gradients (HOG) feature, a set of descriptors carrying rotation angle information was obtained and linearly concatenated, according to the descriptors' rotation angles, into a descriptor group with rotation invariance. Finally, human detection was performed with a two-level cascaded classifier based on Support Vector Machine (SVM). The recognition rate of the proposed algorithm exceeds 86% on a human test set with 144 different rotation angles built from the INRIA pedestrian database, while the false detection rate is less than 10% on a non-human test set with 144 different rotation angles. The experiments indicate that the proposed algorithm can be used for human detection in images with arbitrary rotation angles.
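    The rotation invariance of the radial gradient idea comes from projecting each gradient onto the radial and tangential directions relative to a center point, since both the position and the gradient rotate together. A minimal sketch with invented values (the full RGT descriptor and histogramming are not shown):

```python
import numpy as np

def radial_projection(g, p, c):
    """Project the gradient g at pixel p onto the radial and tangential
    directions defined by center c -- the core of the Radial Gradient Transform."""
    r = (p - c) / np.linalg.norm(p - c)   # unit radial direction
    t = np.array([-r[1], r[0]])           # tangential direction (r rotated 90 degrees)
    return float(g @ r), float(g @ t)

# Rotating the pixel position and its gradient by the same angle leaves
# the (radial, tangential) pair unchanged.
c = np.zeros(2)
p, g = np.array([2.0, 0.0]), np.array([1.0, 1.0])
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation matrix
before = radial_projection(g, p, c)
after  = radial_projection(R @ g, R @ p, c)
```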

    Feature detection method of fingertip and palm based on depth image
    FAN Wenjie, WANG Mingyan, YANG Wenji
    2015, 35(6):  1791-1794.  DOI: 10.11772/j.issn.1001-9081.2015.06.1791

    To solve the gesture segmentation deviation caused by interference from other skin regions and overlapping objects, a method that uses depth data and skeleton tracking to segment gestures accurately was proposed. The minimum circumscribed circle, the average of the convexity defects and the maximal inscribed circle were combined to improve the detection of the palm and of the palm region's radius for various gestures. A fingertip candidate set was obtained by integrating finger arcs with the convex hull, and the real fingertips were then selected by three-step filtering. Six gestures were tested under four transformation cases: the recognition rates for flipping, parallel movement and overlapping are all higher than 90%, but the rate decreases obviously when tilting by more than 70 degrees or yawing by more than 60 degrees. The experimental results show that the accuracy of the proposed method is high in a variety of real scenes.

    Fingertip detection and gesture recognition method based on Kinect
    TAN Jiapu, XU Wensheng
    2015, 35(6):  1795-1800.  DOI: 10.11772/j.issn.1001-9081.2015.06.1795

    Recognizing bent fingers from video is difficult and the recognition rate is not high. Focusing on this issue, a hand gesture recognition method based on depth, skeleton and color image information was proposed. Firstly, the gesture area in the color image was rapidly located by using the depth and skeleton information from Kinect, and the hand posture region was extracted from the gesture area by using the YCrCb color model. Then the distances between the gesture contour points and the palm point were calculated to generate a distance curve, and a ratio of curve peaks to troughs was set to detect fingertip points. Finally, 12 commonly used gestures were identified by combining the bent-finger point features with the maximum contour area. Six experimenters were invited to validate the proposed method; each gesture was tested 120 times under relatively stable lighting conditions, and the average recognition rate over the 12 gestures was 97.92%. The experimental results show that the proposed method can quickly locate gestures and accurately recognize the 12 commonly used hand gestures with a high recognition rate.
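    The distance-curve step can be sketched as below: compute the distance from each contour point to the palm point and count the local maxima that stand out from the curve. The star-shaped synthetic contour and the ratio test against the mean distance are invented simplifications of the paper's peak-to-trough criterion:

```python
import numpy as np

def count_fingertips(contour, palm, peak_ratio=1.3):
    """Count peaks of the contour-to-palm distance curve: a fingertip
    candidate is a local maximum exceeding peak_ratio * mean distance."""
    d = np.linalg.norm(contour - palm, axis=1)   # the distance curve
    n, mean = len(d), d.mean()
    tips = 0
    for i in range(n):                            # contour is cyclic
        if d[i] > d[(i - 1) % n] and d[i] > d[(i + 1) % n] and d[i] > peak_ratio * mean:
            tips += 1
    return tips

# Synthetic hand-like contour around the palm: unit circle with two "fingers"
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
radius = np.ones(36)
radius[[5, 20]] = 3.0                             # two extended fingers
contour = np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)
n_tips = count_fingertips(contour, np.array([0.0, 0.0]))
```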

    Design and implementation of Wi-Fi Direct based multi-screen interaction system
    LIU Wei, ZHANG Shuben, ZHU Ruiyi, YANG Jian
    2015, 35(6):  1801-1804.  DOI: 10.11772/j.issn.1001-9081.2015.06.1801

    To solve the problems of current multi-screen interaction systems, such as high bandwidth occupancy of the Wireless Local Area Network (WLAN) and unstable connections between terminal devices and the router, a multi-screen interaction system based on Wi-Fi Direct was proposed, which directly connects two intelligent devices without any access point and delivers content from one device to the other. The design of the system was described in detail. Following the principles of low delay and high compatibility, the proposed system was realized as an Android app running on a smart phone or a smart TV. Tests of the proposed system in practice show that time delay and packet loss rate are reduced in comparison with a conventional multi-screen system depending on the WLAN. Also, the Wi-Fi Direct connection between two devices is stable and the working distance is doubled. Besides, the proposed system places no demand on WLAN bandwidth.
