Table of Contents

    10 February 2015, Volume 35 Issue 2
    Reliability-aware virtual data center embedding algorithm
    ZUO Cheng, YU Hongfang
    2015, 35(2):  299-304.  DOI: 10.11772/j.issn.1001-9081.2015.02.0299

    After reviewing current research on Virtual Data Center (VDC) embedding, and in accordance with the reliability requirement of VDCs, a new heuristic algorithm was proposed to address the reliability-aware VDC embedding problem. The algorithm restricted the number of Virtual Machines (VMs) that could be embedded onto the same physical server to guarantee VDC reliability, and then embedded the VDC with the reduction of bandwidth and energy consumption as the main objective. First, it reduced the bandwidth consumption of the data center by consolidating VMs with heavy mutual communication into the same group and placing each group onto a single physical server. Second, the consolidated groups were mapped onto servers that were already powered on, decreasing the number of powered-on servers and thus the power consumption. The results of experiments conducted on a fat-tree topology show that, compared with the 2EM algorithm, the proposed algorithm satisfies the VDC reliability requirement and reduces the bandwidth consumption of the data center by up to 30% without extra energy consumption.

    Minimum MPR set selection algorithm based on OLSR protocol
    LIU Jie, WANG Ling, WANG Shan, FENG Wei, LI Wen
    2015, 35(2):  305-308.  DOI: 10.11772/j.issn.1001-9081.2015.02.0305

    Aiming at the redundancy introduced when the greedy algorithm is used to compute the minimum MultiPoint Relay (MPR) set in the traditional Optimized Link State Routing (OLSR) protocol, a globally improved algorithm named Global_OP_MPR was proposed. First, an improved OP_MPR algorithm based on the greedy algorithm was introduced; it removed redundancy by gradually optimizing the MPR set, obtaining the minimum MPR set simply and efficiently. Then, building on OP_MPR, Global_OP_MPR added global factors into the MPR selection criteria, replacing "local optimization" with "global optimization" so as to obtain the minimum MPR set over the entire network. Simulations were conducted on OPNET using the Random Waypoint mobility model. Compared with the traditional OLSR protocol, the OLSR protocol combined with OP_MPR or Global_OP_MPR effectively reduced the number of MPR nodes in the network, carried a smaller Topology Control (TC) packet load, and achieved lower network delay. The simulation results show that both OP_MPR and Global_OP_MPR can reduce the size of the MPR set and improve the network performance of the protocol. In addition, by taking global factors into consideration, Global_OP_MPR achieves better network performance.
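    A minimal Python sketch of the classical greedy MPR heuristic that OP_MPR and Global_OP_MPR refine is given below; the redundancy pruning and the global selection criteria of the proposed algorithms are not reproduced, and the inputs one_hop and two_hop_of are hypothetical names for a node's neighbour tables.

        def greedy_mpr(one_hop, two_hop_of):
            # one_hop: set of 1-hop neighbour ids
            # two_hop_of[n]: set of strict 2-hop neighbours reachable through n
            uncovered = set().union(*two_hop_of.values())
            mpr = set()
            # Rule 1: a neighbour that is the sole cover of a 2-hop node is mandatory
            for n2 in set(uncovered):
                covers = [n for n in one_hop if n2 in two_hop_of[n]]
                if len(covers) == 1:
                    mpr.add(covers[0])
            for n in mpr:
                uncovered -= two_hop_of[n]
            # Rule 2: repeatedly add the neighbour covering the most uncovered nodes
            while uncovered and one_hop - mpr:
                best = max(one_hop - mpr, key=lambda n: len(two_hop_of[n] & uncovered))
                if not two_hop_of[best] & uncovered:
                    break  # remaining 2-hop nodes are unreachable via 1-hop neighbours
                mpr.add(best)
                uncovered -= two_hop_of[best]
            return mpr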

    Communication scheme design for embedded control node cluster
    ZHOU Haiyang, CHE Ming
    2015, 35(2):  309-312.  DOI: 10.11772/j.issn.1001-9081.2015.02.0309

    Since current RS485 network designs limit the number of communication nodes and the scalability, a networking scheme for an embedded control node cluster and a corresponding control protocol based on the RS485 bus were proposed. By adding repeaters between the master and the leaves, the scheme expanded the number of nodes on the RS485 bus to 27000 and constituted a cluster of embedded network nodes in one-to-many control mode. Accordingly, the Modbus protocol was extended and a repeater-layer stipulation was appended. Unlike the traditional way of simply lengthening the physical address, the new protocol used local addressing to break the restriction that the Modbus address length places on the number of nodes, and introduced node scanning and error feedback mechanisms, thereby achieving reliable node control by the master controller. The control protocol keeps the original simplicity of Modbus, is portable and scalable, and is easy to implement on a microcontroller unit. With the extended protocol, the turnaround delay increased by 10.36% with one layer of repeaters and by 69.9% with two layers; with two layers of repeaters, the total delay was 2.4 times that of the original Modbus system. Nevertheless, the average delay of a practical two-layer relay system remained below 70 ms. The results show that the scheme attains clustered management of embedded nodes at the expense of some real-time performance.

    Routing protocol for bus vehicle network with cyclical movement
    PENG Yali, XU Hong, YIN Hong, ZHANG Zhiming
    2015, 35(2):  313-316.  DOI: 10.11772/j.issn.1001-9081.2015.02.0313

    As an important part of the urban vehicle network, the bus vehicle network supports a wide range of urban vehicle communication owing to its cyclical movement pattern. However, the complex urban road environment poses great challenges to efficient and reliable routing protocols for bus vehicle networks. For bus vehicle networks characterized by cyclical movement, a new protocol named SRMHR (Single & Realmending-Multi Hop Routing) was proposed to guarantee the lifetime of single-hop links and the multi-hop delivery probability within a bounded delay. Based on the signal propagation attenuation model and the vehicle mobility model, a single-hop selection mechanism and a multi-hop delay-probability forwarding mechanism were proposed to ensure the reliability and effectiveness of bus-assisted forwarding. On an urban traffic simulation platform using slightly adjusted real road traffic data, the performance of the signal attenuation model, the single-hop selection mechanism and the light correction model was tested under different traffic densities; the results prove the validity of each component of the scheme. Comparison with SF (Spray and Focus) and SW (Spray and Wait) proves that the SRMHR protocol achieves a higher data delivery success rate and lower delivery delay.

    Enhanced frequency-domain channel contention mechanism in wireless local area network
    WANG Jing, GAO Zehua, GAO Feng, PAN Xiang
    2015, 35(2):  317-321.  DOI: 10.11772/j.issn.1001-9081.2015.02.0317

    Concerning the high overhead of current channel access mechanisms and the frequent collisions in densely deployed Wireless Local Area Networks (WLAN), an enhanced mechanism based on frequency-domain channel contention, named Hybrid Frequency-domain Channel Contention (HFCC), was proposed. First, the subcarriers of the Orthogonal Frequency Division Multiplexing (OFDM) symbols used for frequency-domain contention were classified into contention subcarriers and information subcarriers. Second, two rounds of channel contention were performed when a station (STA) needed to access the channel, and the success of the contention was confirmed if necessary. Finally, a single OFDM symbol was used to acknowledge the correct reception of a packet. Theoretical analysis shows that, compared with the Distributed Coordination Function (DCF), when many STAs (around 35) contend for the channel, the collision probability of HFCC declines by 99.1% and its system throughput increases by 73.2%. In addition, the throughput of HFCC increases by 35.7% and 75.2% compared with Back2F and REPICK respectively. The analysis results show that HFCC reduces overhead while improving robustness, making it suitable for densely deployed networks.

    Information propagation model for social network based on local information
    CHENG Xiaotao, LIU Caixia, LIU Shuxin
    2015, 35(2):  322-325.  DOI: 10.11772/j.issn.1001-9081.2015.02.0322

    The traditional information propagation model is suited to homogeneous networks and cannot be effectively applied to non-homogeneous, scale-free Social Networks (SN). To solve this problem, an information propagation model based on local information was proposed. The model considered the topological differences between users and the varying effect of user influence on propagation, and calculated the infection probability of a node from its neighbors' infection states and authority, so that it could simulate information propagation on real social networks. Simulation experiments on Sina microblog networks show that the proposed model reflects the scope and speed of propagation better than the traditional Susceptible-Infective-Recovered (SIR) model. By adjusting the model's parameters, the impact of control measures on the propagation results can be verified.
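    As an illustration of the infection rule described above, the following Python sketch runs one propagation round. The closed form used here (each infected neighbour u independently fails to infect v with probability 1 - beta*authority[u]) is one plausible reading of "calculated according to the neighbor nodes' infection and authority", not the paper's exact formula.

        import random

        def step(graph, authority, infected, recovered, beta=0.3, gamma=0.1):
            """One round: graph maps a node to its neighbour set, authority maps
            a node to a score in [0, 1]; infected users recover at rate gamma."""
            newly_infected = set()
            for v in graph:
                if v in infected or v in recovered:
                    continue
                p_not = 1.0
                for u in graph[v]:
                    if u in infected:
                        p_not *= 1.0 - beta * authority[u]  # u fails to infect v
                if random.random() < 1.0 - p_not:
                    newly_infected.add(v)
            newly_recovered = {u for u in infected if random.random() < gamma}
            infected = (infected | newly_infected) - newly_recovered
            return infected, recovered | newly_recovered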

    Method of IOT complex event processing based on event sharing mechanism
    XU Dongdong, YUAN Lingyun
    2015, 35(2):  326-331.  DOI: 10.11772/j.issn.1001-9081.2015.02.0326

    Concerning the repetitive querying, storage and processing that arise in complex event query processing in the Internet of Things (IOT), an Event Sharing Mechanism (ESM) was proposed. First, to realize the query and detection of complex events, a semantic event definition for the IOT and semantic descriptions of event operators were presented. Second, the IOT ESM was studied from three aspects: the definition of public subqueries, the design of the public internal query structure, and the sharing of event resources. By rewriting query expressions, building the Directed Acyclic Graph (DAG) associated with each query expression, and applying an improved Continuous parameter context at each node to handle event streams, shared querying, storage and processing of public events was implemented. Finally, a Semantics Formal Query-plan Processing Model (SFQPM) based on the ESM was designed, which could process query expressions and predicates automatically and fulfill automated complex event detection and processing. The simulation results show that, compared with the method based on BTree (Binary Tree), the proposed SFQPM is efficient and reliable, and can process massive real-time IOT event streams in a timely and efficient manner. In addition, a case study verified the effectiveness and feasibility of the proposed SFQPM.

    Load balancing algorithm for non-uniform clustering with distributed hierarchical structure
    GUO Jinqin, HAN Yan
    2015, 35(2):  332-335.  DOI: 10.11772/j.issn.1001-9081.2015.02.0332

    Aiming at problems such as short network lifetime and high energy consumption caused by load imbalance in Wireless Sensor Networks (WSN), a load-balancing algorithm for non-uniform clustering with a distributed hierarchical structure, named DCWSN, was proposed. First, a network topology with multilayer clusters was established for the WSN, and the energy consumption pattern of the nodes in its clusters was analyzed. Second, the non-uniform clustering load-balancing algorithm chose the node with the highest weight as cluster head, taking into account the node's connected density, residual energy and cluster-head selection time. In the cluster establishment stage, the energy load of the cluster heads was balanced through a cluster-size decision threshold and a cluster updating mechanism to prevent the premature death of cluster nodes. Comparison experiments on network lifetime and energy consumption were conducted against EDDIE, M-TRAC, DDC and EELBC to verify the effectiveness of the proposed algorithm; DCWSN achieved a higher node survival rate of 37.7% and higher energy efficiency. The experimental results show that DCWSN balances the load well, effectively controls node overload, and improves the energy efficiency of nodes.
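    A small Python sketch of the weight-based cluster head election follows; the weighted sum is only a plausible combination of the three criteria named above (connected density, residual energy, cluster-head selection time), and the field names and coefficients are hypothetical.

        def elect_cluster_head(nodes, alpha=0.5, beta=0.3, gamma=0.2):
            """nodes: dicts with 'density' (normalised connected density),
            'e_res'/'e_init' (residual/initial energy) and 't_ch' (normalised
            time since the node last served as cluster head)."""
            def weight(n):
                return (alpha * n['density']
                        + beta * n['e_res'] / n['e_init']
                        + gamma * n['t_ch'])
            return max(nodes, key=weight)  # highest-weight node becomes head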

    LTE downlink cross-layer scheduling algorithm based on QoS for value-added service
    LIU Hui, ZHANG Sailong
    2015, 35(2):  336-339.  DOI: 10.11772/j.issn.1001-9081.2015.02.0336

    Aiming at the problem of how to deliver differentiated rates to value-added service users in the Long Term Evolution (LTE) system, an optimized Proportional Fairness (PF) algorithm was proposed. Considering channel conditions, payment level and satisfaction, the optimized PF algorithm with a QoS-aware service eigenfunction could properly schedule paying users when the paid-for rates could not all be achieved, thereby delivering rates differentiated by payment level. Simulations were conducted in the Matlab environment, where the optimized PF algorithm outperformed the traditional PF algorithm in satisfaction and effective throughput: compared with the traditional PF algorithm, the difference in average satisfaction between paying user classes was about 26%, and the average effective throughput increased by 17%. The simulation results indicate that, under the premise of multi-service QoS, the optimized algorithm can deliver the average rates perceived by different users, guarantee satisfaction among the different paying parties, and raise the effective throughput of the system.

    Improvement on DV-Hop localization algorithm in wireless sensor networks
    XIA Shaobo, ZOU Jianmei, ZHU Xiaoli, LIAN Lijun
    2015, 35(2):  340-344.  DOI: 10.11772/j.issn.1001-9081.2015.02.0340

    The DV-Hop localization algorithm estimates the distance between nodes as the hop count multiplied by the average distance per hop. Without changing the steps of the original DV-Hop algorithm or requiring additional hardware, the traditional DV-Hop algorithm was improved in two respects to reduce its large localization error. On the one hand, the hop count between nodes was corrected based on the communication radius; on the other hand, the average distance per hop was corrected using the deviation between the actual and estimated distances among beacon nodes. In the same network environment, the positioning error of the proposed algorithm was about 15% lower than that of the original DV-Hop algorithm, and 5%-7% lower than that of another improved algorithm that also uses the ideal estimated hop count between beacon nodes to correct the actual value. The experimental results show that the proposed algorithm can effectively reduce the distance estimation error between nodes and improve the positioning accuracy.
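    The classical DV-Hop hop-size computation, together with one plausible form of the beacon-deviation correction described above, can be sketched in Python as follows (the hop-count correction based on the communication radius is omitted):

        import math

        def corrected_hop_sizes(beacons, hops):
            """beacons: {i: (x, y)}; hops[i][j]: minimum hop count between
            beacons i and j. Returns, per beacon, the classical DV-Hop average
            hop distance plus the mean per-hop deviation between true and
            estimated beacon-to-beacon distances."""
            dist = lambda a, b: math.dist(beacons[a], beacons[b])
            ids, result = list(beacons), {}
            for i in ids:
                others = [j for j in ids if j != i]
                size = (sum(dist(i, j) for j in others) /
                        sum(hops[i][j] for j in others))      # classical estimate
                err = sum((dist(i, j) - size * hops[i][j]) / hops[i][j]
                          for j in others) / len(others)      # average deviation
                result[i] = size + err
            return result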

    Reliable data delivery with low delay in energy harvesting wireless sensor network
    QIU Shuwei, LI Yanyan
    2015, 35(2):  345-350.  DOI: 10.11772/j.issn.1001-9081.2015.02.0345

    Using Network Coding (NC) can effectively improve the reliability of data delivery in Energy Harvesting Wireless Sensor Networks (EH-WSN). However, most existing work uses a fixed Data Rate (DR) and a fixed Maximum Number of Retransmissions (MNR) for reliable data delivery, which leads to long end-to-end delay. In order to reduce the delivery delay, a data delivery scheme was proposed that combines the energy harvesting characteristics of EH-WSN nodes with the link quality between adjacent nodes, achieving low end-to-end delay by optimizing the DR and the MNR. The energy harvesting process and the energy consumption were modeled, and the residual energy equation of a node was given. By modeling the link quality between adjacent nodes, the probability of successfully transmitting a packet under the retransmission mechanism was derived, as was the transmission delay over one hop of the delivery path. The scheme minimizes the transmission delay of each hop by optimizing the DR and the MNR through the optimization equation, under the condition that the node satisfies the link quality and residual energy constraints. The experimental results show that, compared with schemes using a fixed DR and a fixed MNR, the proposed scheme achieves the lowest end-to-end data delivery delay.
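    A hedged sketch of the joint optimization: for each candidate (DR, MNR) pair, the expected one-hop delay under truncated retransmissions is (1-(1-p)^n)/p attempts times the per-attempt airtime. The link model p_of(r) and the per-attempt energy model e_tx_of(r) are caller-supplied assumptions standing in for the paper's link-quality and energy-harvesting models.

        def best_rate_and_retx(rates, p_of, max_retx, packet_bits,
                               e_budget, e_tx_of, p_min=0.9):
            """Return (delay, rate, retx) minimising expected one-hop delay
            subject to reliability and residual-energy constraints."""
            best = None
            for r in rates:
                p, t_attempt = p_of(r), packet_bits / r
                for n in range(1, max_retx + 1):
                    reliability = 1.0 - (1.0 - p) ** n
                    mean_attempts = reliability / p      # truncated-geometric mean
                    if reliability < p_min or mean_attempts * e_tx_of(r) > e_budget:
                        continue                         # constraint violated
                    delay = mean_attempts * t_attempt
                    if best is None or delay < best[0]:
                        best = (delay, r, n)
            return best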

    DOA estimation for wideband chirp signal with a few snapshots
    LIU Deliang, LIU Kaihua, YU Jiexiao, ZHANG Liang, ZHAO Yang
    2015, 35(2):  351-353.  DOI: 10.11772/j.issn.1001-9081.2015.02.0351

    Conventional Direction-Of-Arrival (DOA) estimation approaches suffer from low angular resolution or rely on a large number of snapshots. The sparsity-based SPICE works with few snapshots and has high resolution and a low sidelobe level, but it only applies to narrowband signals. To solve these problems, a new FrFT-SPICE method was proposed to estimate the DOA of wideband chirp signals with high resolution from only a few snapshots. First, the wideband chirp signal was transformed by the Fractional Fourier Transform (FrFT) at a specific order, converting the chirp in the time domain into a single-frequency sine wave in the FrFT domain. Then, the steering vector of the received signal was obtained in the FrFT domain. Finally, the SPICE algorithm was applied with the obtained steering vector to estimate the DOA of the wideband chirp. In simulations with the same scanning grid and the same snapshots, the DOA resolution of the proposed FrFT-SPICE method was better than that of the FrFT-MUSIC method, which combines the MUltiple SIgnal Classification (MUSIC) algorithm with the FrFT; and compared with SR-IAA, which combines Spatial Resampling (SR) with the Iterative Adaptive Approach (IAA), the proposed method had better accuracy. The simulation results show that the proposed method can estimate the DOA of wideband chirp signals with high accuracy and high resolution from only a few snapshots.

    Spectrum sensing algorithm based on least eigenvalue distribution
    YANG Zhi, XU Jiapin
    2015, 35(2):  354-357.  DOI: 10.11772/j.issn.1001-9081.2015.02.0354

    Among existing spectrum sensing algorithms, energy detection is easy to implement, but its detection performance depends on the noise power. Spectrum sensing algorithms based on random matrix theory skillfully avoid the influence of noise uncertainty on detection performance, but most of them use an approximate distribution of the largest eigenvalue, and the accuracy of the threshold expression derived from it needs further improvement. Aiming at these problems, and drawing on the latest results in random matrix theory, a spectrum sensing algorithm based on the distribution of the least eigenvalue of the sample covariance matrix of the received signals was proposed. The cumulative distribution function of the least eigenvalue does not rest on asymptotic assumptions, which makes it more suitable for realistic communication scenarios. The threshold expression derived from it is a function of the false-alarm probability, whose effectiveness and superiority were analyzed and verified with few samples. Simulations complying with the single-variable principle were conducted with few samples, few collaborative users, low signal-to-noise ratio and low false-alarm probability, in comparison with the classic maximum-minimum eigenvalue algorithm; the detection probability of the proposed algorithm increased by about 0.2. The results show that the proposed algorithm can significantly improve the detection performance of the system.
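    The decision step reduces to comparing the least eigenvalue of the sample covariance matrix with a threshold. A numpy sketch follows, with the threshold gamma assumed to be computed offline from the paper's closed-form function of the false-alarm probability (not reproduced here):

        import numpy as np

        def sense(X, gamma):
            """X: (M, N) matrix of N samples from M cooperating receivers.
            Declare the primary user present when the least eigenvalue of
            the sample covariance matrix exceeds the threshold."""
            M, N = X.shape
            R = X @ X.conj().T / N                 # sample covariance matrix
            lam_min = np.linalg.eigvalsh(R)[0]     # eigenvalues in ascending order
            return lam_min > gamma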

    Data combination method based on structure's granulation
    YAN Lin, LIU Tao, YAN Shuo, LI Feng, RUAN Ning
    2015, 35(2):  358-363.  DOI: 10.11772/j.issn.1001-9081.2015.02.0358

    In order to study data combination problems occurring in real life, different kinds of data information were combined together, leading to a structure called an associated-combinatorial structure, constituted by a data set, an associated relation and a partition. The aim was to use this structure to establish a method of data combination. To this end, the associated-combinatorial structure was transformed into a granulation structure by granulating the associated relation; in this process, data combinations were completed in accordance with the data classifications. Moreover, because an associated-combinatorial structure or a granulation structure can be represented by an associated matrix, the transformation from one structure to the other was characterized by algebraic calculations determined by matrix transformations. Therefore, the research not only provides theoretical analysis for data combination, but also establishes a data processing method based on matrix transformations. Accordingly, a computer program with linear complexity was written according to the data combination method. The experimental results show that the program is accurate and fast.

    Non-fragile synchronous guaranteed cost control for complex dynamic network with time-varying delay
    LUO Yiping, DENG Fei, ZHOU Bifeng
    2015, 35(2):  364-368.  DOI: 10.11772/j.issn.1001-9081.2015.02.0364

    A non-fragile synchronization guaranteed cost control method was put forward for a class of complex network systems with time-varying delay. Under the assumption that the nonlinear vector function f(x) is differentiable, a non-fragile state feedback controller with gain perturbations was designed through Jacobian linearization with the remainder satisfying matching conditions, so that the controller parameters remain effective under small perturbations. Sufficient conditions for the existence of non-fragile synchronous guaranteed cost control of the system were obtained by constructing a suitable Lyapunov-Krasovskii functional and using integral equations, matrix analysis, the Schur complement theorem and related tools. For a given guaranteed performance index, the existence condition was shown to be equivalent to the feasibility of a set of Linear Matrix Inequality (LMI) problems; a convex optimization formulation under LMI constraints was given, and the minimum guaranteed performance value of the closed-loop time-varying delay system was calculated. Finally, a numerical comparison example verified the feasibility of the design method.

    Multi-universe parallel quantum-inspired evolutionary algorithm based on adaptive mechanism
    LIU Xiaohong, QU Zhijian, CAO Yanfeng, ZHANG Xianwei, FENG Gang
    2015, 35(2):  369-373.  DOI: 10.11772/j.issn.1001-9081.2015.02.0369

    The choice of evolutionary parameters is vital to the optimization performance of the Quantum-inspired Evolutionary Algorithm (QEA). However, in the conventional QEA, all individuals use the same evolutionary parameters for their updates regardless of individual differences within the population, which leads to slow convergence and easy trapping in local optima on combinatorial optimization problems. To address these problems, an adaptive evolutionary mechanism was employed to adjust the rotation angle step and the quantum mutation probability of the QEA. In the algorithm, the evolutionary parameters of each individual in each generation were determined by its fitness, so that as many individuals as possible evolved toward the optimal solution. Since the adaptive mechanism requires evaluating the fitness of every individual and thus lengthens the running time, the proposed adaptive quantum-inspired evolutionary algorithm was also implemented in parallel across multiple universes to improve execution efficiency. The proposed algorithms were tested by searching for the optima of three multimodal functions and by solving the knapsack problem. The experimental results show that, compared with the conventional QEA, the proposed algorithms achieve better convergence speed and a better chance of finding the global optimum.
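    A sketch of the adaptive parameter mapping: each individual's rotation-angle step and mutation probability follow its fitness, so individuals far from the current best move and mutate more. The linear mapping and the parameter ranges below are illustrative assumptions, not the paper's exact settings.

        import numpy as np

        def adaptive_params(fitness, f_best, f_worst,
                            theta=(0.01 * np.pi, 0.05 * np.pi), pm=(0.01, 0.1)):
            """Return (rotation angle step, mutation probability) for one
            individual; ratio is 0 for the best individual, 1 for the worst."""
            ratio = 0.0 if f_best == f_worst else (f_best - fitness) / (f_best - f_worst)
            step = theta[0] + ratio * (theta[1] - theta[0])
            p_mut = pm[0] + ratio * (pm[1] - pm[0])
            return step, p_mut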

    microRNA identification method based on feature clustering and random subspace
    RUI Zhiliang, ZHU Yuquan, GENG Xia, CHEN Geng
    2015, 35(2):  374-377.  DOI: 10.11772/j.issn.1001-9081.2015.02.0374

    The sensitivity and specificity of current microRNA identification methods are unsatisfactory or imbalanced, because these methods emphasize new features while ignoring the weak classification ability and redundancy of features. An ensemble algorithm based on feature clustering and the random subspace method, named CLUSTER-RS, was proposed. After eliminating features with weak classification ability using the information gain ratio, the algorithm measured feature relevance by information entropy and grouped the features into clusters. It then randomly selected an equal number of features from each cluster to compose a feature set, which was used to train the base classifiers constituting the final identification model. After tuning the parameters and selecting the base classifiers to optimize the algorithm, CLUSTER-RS was compared with five classic microRNA identification methods (Triplet-SVM, miPred, MiPred, microPred, HuntMi) on the latest microRNA dataset. CLUSTER-RS was inferior only to microPred in sensitivity, performed best in specificity, and also had advantages in accuracy and the Matthews correlation coefficient. Experiments show that CLUSTER-RS achieves good performance and is superior to its rivals in balancing sensitivity and specificity.

    Energy-efficient strategy of distributed file system based on data block clustering storage
    WANG Zhengying, YU Jiong, YING Changtian, LU Liang
    2015, 35(2):  378-382.  DOI: 10.11772/j.issn.1001-9081.2015.02.0378

    Concerning the low server utilization and complicated energy management caused by the random block placement strategy in distributed file systems, a visiting-feature vector was built for each data block to describe its random access behavior. The K-means algorithm was adopted for clustering based on these vectors, and the datanodes were divided into multiple regions to store the blocks of different clusters. The data blocks were dynamically reorganized according to the clustering results when the system load was low, so that unneeded datanodes could sleep to reduce energy consumption. The flexible setting of the inter-cluster distance parameters makes the strategy suitable for scenarios with different requirements on energy consumption and utilization. Mathematical analysis and experimental results show that, compared with hot-cold zoning strategies, the proposed method has higher energy-saving efficiency, reducing energy consumption by 35% to 38%.
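    A minimal scikit-learn sketch of the clustering step; the 24-dimensional hourly access-count vector is a hypothetical choice of visiting-feature vector.

        import numpy as np
        from sklearn.cluster import KMeans

        access = np.random.rand(500, 24)   # stand-in for per-block hourly access logs
        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(access)
        for c in range(3):
            blocks = np.where(km.labels_ == c)[0]
            # blocks of one cluster share a datanode region, so regions holding
            # only cold clusters can be put to sleep under low load
            print(f"region {c}: {len(blocks)} blocks")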

    Secure outsourcing computation of square matrix power to public cloud
    LIU Wuyang, LIAO Xiaofeng
    2015, 35(2):  383-386.  DOI: 10.11772/j.issn.1001-9081.2015.02.0383

    Computing high powers of huge-dimension square matrices is a hard job for entities (clients) with limited computing capability. To resolve this problem, a secure and verifiable cloud outsourcing protocol for computing square matrix powers was designed. In the protocol, the client first constructed a random permutation and, combining it with the Kronecker delta function, generated a secret key consisting of a non-singular matrix and its inverse. The client then encrypted the original matrix with the secret key and sent the encrypted matrix to the cloud along with the original exponent. After computing the power of the encrypted matrix, the cloud returned the result to the client, who decrypted it with the secret key and verified its correctness by comparing randomly chosen elements with their correct values. Theoretical analysis shows that the protocol meets the requirements of an outsourcing protocol well: correctness, security, verifiability and high efficiency. Based on this protocol model, simulation experiments were conducted with the dimension fixed and the exponent varying, and with the exponent fixed and the dimension varying. The results indicate that, compared with performing the original job locally, outsourcing substantially reduces the client's time consumption in both cases, and the outsourcing performance improves as the dimension and exponent increase.
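    A numpy sketch of the protocol flow under simplifying assumptions: a plain permutation matrix (built entry-wise with the Kronecker delta) stands in for the paper's non-singular key matrix, and a Freivalds-style matrix-vector spot check stands in for its random element comparison.

        import numpy as np

        def keygen(n, rng):
            perm = rng.permutation(n)
            P = np.eye(n)[perm]                  # P[i, j] = delta(perm[i], j)
            return P, P.T                        # a permutation matrix's inverse

        def encrypt(A, P, P_inv):
            return P @ A @ P_inv                 # similarity transform hides A

        def decrypt(Y, P, P_inv):
            return P_inv @ Y @ P                 # since (P A P^-1)^k = P A^k P^-1

        def verify(C, Y, k, rng, rounds=3):
            """Check Y == C^k with k matrix-vector products per round,
            O(k n^2) instead of recomputing the O(k n^3) power."""
            for _ in range(rounds):
                v = r = rng.random(C.shape[0])
                for _ in range(k):
                    v = C @ v
                if not np.allclose(Y @ r, v, rtol=1e-6):
                    return False
            return True

        rng = np.random.default_rng(0)
        P, P_inv = keygen(64, rng)
        A, k = rng.random((64, 64)), 8
        C = encrypt(A, P, P_inv)
        Y = np.linalg.matrix_power(C, k)         # the cloud's heavy job
        assert verify(C, Y, k, rng)
        A_k = decrypt(Y, P, P_inv)               # equals matrix_power(A, k)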

    Provable secure certificateless fully homomorphic encryption scheme in standard model
    LI Shaokun
    2015, 35(2):  387-392.  DOI: 10.11772/j.issn.1001-9081.2015.02.0387

    Focused on the flaw of large-scale public keys shared by existing fully homomorphic encryption schemes, the idea of certificateless public-key encryption was introduced into the design of fully homomorphic encryption, and a certificateless fully homomorphic encryption scheme was proposed. Since the scheme's public keys no longer need identity authentication, the overall efficiency of the cryptosystem is improved. A full-rank difference matrix was used to embed identities into the scheme, and random oracles were no longer needed in the security proof because no hash function is used. The partial private keys were extracted by a pair of dual normal-distribution sampling functions and transformed into private keys through an instance of the learning with errors problem. The scheme employs double encryption to deprive the server of the ability to decrypt, thereby avoiding key escrow. The security of the scheme reduces to the hardness of the learning with errors problem.

    Context and role based access control for cloud computing
    HUANG Jingjing, FANG Qun
    2015, 35(2):  393-396.  DOI: 10.11772/j.issn.1001-9081.2015.02.0393

    The open and dynamic nature of the cloud computing environment easily causes security problems, so the security of data resources and the privacy of users face severe challenges. According to the dynamic characteristics of users and data resources in cloud computing, a context- and role-based access control model was proposed. The model takes the context information and context constraints of the cloud environment into account, and evaluates user access requests against the authorization policies on the server, granting permissions to users dynamically. The implementation process of a cloud user accessing a resource was given, and analysis and comparison further illustrate the model's advantages in access control. The scheme not only reduces management complexity, but also limits the privileges of cloud service providers, thus effectively ensuring the safety of cloud resources.

    Quantum secret sharing of arbitrary N-qubit via entangled state
    WU Junqin, LIN Huiying
    2015, 35(2):  397-400.  DOI: 10.11772/j.issn.1001-9081.2015.02.0397

    Focused on the issue that quantum secret sharing is usually limited to maximally entangled states, a scheme for sharing an arbitrary unknown N-qubit state using entangled states as the quantum channel was proposed. The sender Alice performed Bell-basis measurements, and then the receiver Bob or Charlie performed single-particle measurements. The participants chose the appropriate joint unitary operation according to Alice's results and the single-particle measurements, thereby realizing arbitrary N-qubit secret sharing. The eavesdropping analysis shows explicitly that the scheme is secure against both external eavesdroppers and internal dishonest participants.

    Signcryption scheme based on multivariate cryptosystem
    LAN Jinjia, HAN Yiliang, YANG Xiaoyuan
    2015, 35(2):  401-406.  DOI: 10.11772/j.issn.1001-9081.2015.02.0401

    Aiming at the problem that signcryption schemes built on conventional public key cryptosystems cannot resist quantum attacks, a new signcryption scheme based on multivariate public key cryptosystems was proposed. Combining the central map of the multilayer structure in Multi-layer Matsumoto-Imai (MMI) with the CyclicRainbow signature scheme, and using the structure of the central map in Hidden Field Equation (HFE), the signcryption scheme was designed by introducing an improved method of constructing the central map. Analysis shows that, compared with the original MMI, the scheme's key size decreases by 5% and the ciphertext shrinks by 50%, while encryption and signature are realized at the same time. In the random oracle model, its indistinguishability under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption were proved respectively. The proposed scheme thus achieves indistinguishability under adaptive chosen-ciphertext attacks as well as unforgeability under adaptive chosen-message attacks.

    Efficient certificate-based verifiably encrypted signature scheme
    DU Guiying, HUANG Zhenjie
    2015, 35(2):  407-411.  DOI: 10.11772/j.issn.1001-9081.2015.02.0407

    Focusing on the certificate management problem in traditional public key cryptography, the key escrow problem in identity-based cryptography, and the unfairness exposed in online transactions, a new Certificate-Based Verifiably Encrypted Signature (CBVES) scheme was proposed by combining Verifiably Encrypted Signatures (VES) with Certificate-Based Signatures (CBS). First, the security model of the certificate-based verifiably encrypted signature scheme was defined; second, a new CBVES scheme was proposed based on the hardness of the k-CAA (Collision Attack Algorithm with k traitors) problem and the Squ-CDH (Square Computational Diffie-Hellman) problem, and its security was proved in the random oracle model. Compared with previous VES schemes, the proposed scheme is efficient and requires only a small amount of computation, and the ordinary signature can be recovered from the VES only when the adjudicator and the verifier cooperate.

    Provably secure identity-based aggregate signcryption scheme
    WANG Daxing, TENG Jikai
    2015, 35(2):  412-415.  DOI: 10.11772/j.issn.1001-9081.2015.02.0412

    In order to protect the security of network information more effectively, the confidentiality and authentication of messages need to be realized at the same time. Signcryption performs signature and encryption simultaneously in one logical step. To improve the safety and efficiency of existing signcryption, an identity-based aggregate signcryption scheme was proposed by incorporating the idea of aggregate signatures. Under the random oracle model, the scheme was proved to be indistinguishable against adaptive chosen-ciphertext attacks and existentially unforgeable against adaptive chosen-message attacks, with security reducible to the elliptic curve discrete logarithm problem and the computational bilinear pairing Diffie-Hellman problem. Compared with several schemes of high efficiency and short key length, the analysis shows that the new scheme requires only one pairing operation in both signcryption and unsigncryption, and thus features low computational cost and short ciphertext length.

    Real-time detection framework for network intrusion based on data stream
    LI Yanhong, LI Deyu, CUI Mengtian, LI Hua
    2015, 35(2):  416-419.  DOI: 10.11772/j.issn.1001-9081.2015.02.0416

    Computer network access requests are real-time and dynamically changing. In order to detect network intrusions in real time while adapting to the dynamic changes of network access data, a real-time intrusion detection framework based on data streams was proposed. First, the misuse detection model and the anomaly detection model were combined, and a knowledge base consisting of normal patterns and abnormal patterns was established by initial clustering. Second, the similarity between network access data and the normal and abnormal patterns was measured using the dissimilarity between a data point and a data cluster, determining the legitimacy of the access data. Finally, as the network access data stream evolved, the knowledge base was updated by re-clustering to reflect the current state of network access. Experiments on the intrusion detection dataset KDD Cup 99 show that, with 10000 initial clustering samples, 10000 buffered samples and an adjustment coefficient of 0.9, the proposed framework achieves a recall of 91.92% and a false positive rate of 0.58%, approaching the results of the traditional non-real-time detection model while scanning the network access data only once during the whole process of learning and detection. With the knowledge-base update mechanism, the proposed framework is more advantageous in the real-time performance and adaptability of intrusion detection.
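    A sketch of the point-to-cluster decision: the dissimilarity below (centroid distance scaled by the cluster's mean radius) is one plausible form of the measure described above, and the cluster fields are hypothetical.

        import numpy as np

        def dissimilarity(x, cluster):
            d = np.linalg.norm(x - cluster["centroid"])
            return d / (cluster["radius"] + 1e-12)   # scale by cluster spread

        def classify(x, knowledge_base):
            """knowledge_base: clusters with 'centroid', 'radius' and a 'label'
            in {'normal', 'abnormal'}; the request inherits the label of the
            least dissimilar cluster."""
            best = min(knowledge_base, key=lambda c: dissimilarity(x, c))
            return best["label"]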

    FPGA-based implementation for fault detection of SMS4
    XIN Xiaoxia, WANG Yi, LI Renfa
    2015, 35(2):  420-423.  DOI: 10.11772/j.issn.1001-9081.2015.02.0420

    Faults frequently occur during the computation of hardware implementations of the SMS4 algorithm, and an attacker can easily break the algorithm by exploiting the fault information in a fault attack. In order to solve this issue, a new fault detection method for SMS4 was proposed. First, the locations where faults occur and their impact were analyzed. Then three detection points on the critical path were selected, and faults were located by monitoring these three points in real time. Once a fault was detected, the system immediately re-executed the algorithm to prevent the attacker from obtaining the fault information. Furthermore, the proposed SMS4 with fault detection and the original SMS4 without fault detection were implemented on two Field Programmable Gate Array (FPGA) platforms, Xilinx Virtex-7 and Altera Cyclone Ⅱ. Compared with the original SMS4, the hardware resource usage of the SMS4 with fault detection increased by 30% with similar throughput on Virtex-7; on EP2C35F76C6, the hardware resource usage increased by 0.1% and the throughput was around 93% of the original. The experimental results show that the proposed method can effectively detect faults with affordable hardware resources to avoid fault attacks, without affecting throughput.

    Function pointer attack detection with address integrity checking
    DAI Wei, LIU Zhi, LIU Yihe
    2015, 35(2):  424-429.  DOI: 10.11772/j.issn.1001-9081.2015.02.0424

    Traditional detection techniques for function pointer attacks cannot detect Return-Oriented Programming (ROP) attacks. A new approach that checks the integrity of jump addresses was proposed to detect a variety of function pointer attacks on binary code. First, function addresses were obtained by static analysis; then the target addresses of jump instructions were checked dynamically to verify that they fell within the allowed function address space. Non-entry function calls were analyzed, and on this basis a new method combining static and dynamic analysis was proposed to detect ROP attacks. A prototype system named fpcheck was developed using a binary instrumentation tool and evaluated with real-world attacks and normal programs. The experimental results show that fpcheck can detect various function pointer attacks including ROP, that the false positive rate drops substantially with accurate policies, and that the performance overhead increases by only 10% to 20% compared with vanilla instrumentation.
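    A pure-Python mock of the checking logic only: the real fpcheck instruments binaries, and the address sets and the entry-only policy below are simplified, hypothetical stand-ins (the paper additionally accounts for legitimate non-entry calls).

        FUNC_ENTRIES = {0x400520, 0x400610, 0x4007a0}   # from static analysis
        VALID_RETURNS = {0x400655, 0x400732}            # instructions after call sites

        def check_indirect_call(target):
            if target not in FUNC_ENTRIES:
                raise RuntimeError(f"function pointer attack: call to {hex(target)}")

        def check_return(target):
            # ROP chains end in ret but jump to addresses that never follow a call
            if target not in VALID_RETURNS:
                raise RuntimeError(f"possible ROP: return to {hex(target)}")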

    Robust medical image encryption algorithm based on fast chaotic scrambling
    HAI Jie, DU Hailong, DENG Xiaohong
    2015, 35(2):  430-434.  DOI: 10.11772/j.issn.1001-9081.2015.02.0430

    In order to improve the robustness and efficiency of chaos-based medical image encryption, a robust medical image encryption algorithm based on fast chaotic scrambling, named RMIEF-CS, was presented. First, the algorithm used two low-dimensional chaotic systems to generate chaotic sequences in an alternating iterative way, solving the problem of chaotic convergence caused by finite computer precision. Second, the data stream of the plaintext image was scrambled with the generated chaotic sequence, and the ciphertext was scrambled once again with a new chaotic sequence to obtain the final ciphertext image; in the second scrambling, a bidirectional ciphertext feedback mechanism was used to enhance the security and robustness of RMIEF-CS. Because the algorithm used simple low-dimensional chaotic systems to generate the key sequence and required no time-consuming sorting operations, it had good time efficiency and was applicable to images of any shape. The simulation results show that the algorithm has better encryption performance and can decrypt an approximation of the original medical image even if the ciphertext image has been damaged. In addition, compared with the method based on even scrambling and chaotic mapping, the time consumption of RMIEF-CS is reduced to 1/6. The algorithm is suitable for real-time transmission of medical images with large amounts of data.
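    A hedged sketch of the keystream generation and the two scrambling passes: two low-dimensional maps (logistic and tent) are iterated alternately, and a one-directional XOR ciphertext feedback stands in for the paper's bidirectional feedback; all parameters are illustrative.

        import numpy as np

        def keystream(n, x0, y0, r=3.99, mu=1.97):
            """Alternate a logistic and a tent map so neither iterates long
            enough to collapse onto a short cycle under finite precision."""
            ks, x, y = np.empty(n, dtype=np.uint8), x0, y0
            for i in range(n):
                if i % 2 == 0:
                    x = r * x * (1.0 - x)          # logistic map step
                    v = x
                else:
                    y = mu * min(y, 1.0 - y)       # tent map step
                    v = y
                ks[i] = int(v * 1e6) % 256
            return ks

        def encrypt(data: bytes, key=(0.31, 0.72, 0.55, 0.48)):
            buf = np.frombuffer(data, dtype=np.uint8).copy()
            buf ^= keystream(buf.size, key[0], key[1])    # first scrambling pass
            ks2, prev = keystream(buf.size, key[2], key[3]), 0
            for i in range(buf.size):                     # second pass with feedback
                buf[i] = int(buf[i]) ^ int(ks2[i]) ^ prev
                prev = int(buf[i])
            return buf.tobytes()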

    Soft partition based clustering models with reference to historical knowledge
    SUN Shouwei, QIAN Pengjiang, CHEN Aiguo, JIANG Yizhang
    2015, 35(2):  435-439.  DOI: 10.11772/j.issn.1001-9081.2015.02.0435

    Conventional soft partition based clustering algorithms usually cannot achieve the desired outcomes when the data are quite sparse or distorted. To address this problem, based on maximum entropy clustering and a strategy of learning from historical knowledge, two novel soft partition based clustering models, called SPBC-RHK-1 and SPBC-RHK-2 for short, were proposed. SPBC-RHK-1 is the basic model, which refers only to historical cluster centroids, whereas SPBC-RHK-2 is the advanced variant, combining historical cluster centroids with historical memberships. With historical knowledge, the effectiveness of both algorithms improved distinctly, and SPBC-RHK-2 showed better effectiveness and robustness owing to its greater ability to exploit knowledge. In addition, because the historical knowledge involved does not expose the historical raw data, both approaches protect the privacy of historical data well. Finally, experiments on artificial and real-world datasets verified these merits.

    Query expansion method based on semantic property feature graph
    HAN Caili, LI Jiajun, ZHANG Xiaopei, XIAO Min
    2015, 35(2):  440-443.  DOI: 10.11772/j.issn.1001-9081.2015.02.0440

    Because they ignore the semantic relations between words, traditional query expansion methods cannot achieve the desired results on nonstandard short texts. Linked Data technology exploits the graph structure of RDF (Resource Description Framework) to form the Linked Open Data cloud and provides rich semantic information. In order to take semantic relationships into account, a new query expansion method based on a semantic property feature graph was proposed, combining the Semantic Web with graphs. It used DBpedia resources as nodes to build an RDF property graph in which the relevance of a node was given by its relations. First, the weights of 15 kinds of semantic properties expressing the semantic similarity between resources were obtained by supervised learning. Then, the query keywords were mapped to DBpedia resources through the labelling properties of the whole DBpedia graph. According to the semantic features, neighbor nodes found by breadth-first search were used as expansion candidates. Finally, the words with the highest relevance scores were selected as the query expansion terms. The experimental results show that, compared with LOD Keyword Expansion, the proposed method based on the semantic graph achieves a recall of 0.89 and improves the Mean Reciprocal Rank (MRR) by 4%, offering better matching results to users.

    Density-sensitive clustering by data competition algorithm
    SU Hui, GE Hongwei, ZHANG Huanqing, YUAN Yunhao
    2015, 35(2):  444-447.  DOI: 10.11772/j.issn.1001-9081.2015.02.0444

    Since the clustering by data competition algorithm performs poorly on complex datasets, a density-sensitive clustering by data competition algorithm was proposed. First, a local distance was defined based on a density-sensitive distance measure to describe the local consistency of the data distribution. Second, the global distance was calculated from the local distances to describe the global consistency of the data distribution and to mine the information of the data's spatial distribution, making up for the inability of the Euclidean distance to describe global consistency. Finally, the global distance was used in the clustering by data competition algorithm. Comparison experiments with the original Euclidean-distance-based clustering by data competition were conducted on synthetic and real-life datasets. The simulation results show that the proposed algorithm obtains better clustering accuracy and overcomes the original algorithm's difficulty in handling complex datasets.
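    A numpy/scipy sketch of one common density-sensitive construction consistent with the description above: a local edge length rho**d - 1 (with rho > 1) stretches long Euclidean edges, and graph shortest paths then turn local consistency into global consistency.

        import numpy as np
        from scipy.sparse.csgraph import shortest_path

        def density_sensitive_distances(X, rho=2.0):
            """X: (n, d) data matrix; returns the (n, n) global distance matrix
            in which paths through dense regions are cheap."""
            diff = X[:, None, :] - X[None, :, :]
            euclid = np.sqrt((diff ** 2).sum(-1))
            local = rho ** euclid - 1.0                # density-sensitive local distance
            return shortest_path(local, method='FW')   # global distance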

    Community detection by label propagation with LeaderRank method
    SHI Mengyu, ZHOU Yong, XING Yan
    2015, 35(2):  448-451.  DOI: 10.11772/j.issn.1001-9081.2015.02.0448

    Focusing on the instability of the Label Propagation Algorithm (LPA), an improved label propagation algorithm for community detection was proposed. It introduced the LeaderRank score to quantify node importance, chose core nodes in descending order of importance, and then updated labels layer by layer outward from each core node until no node changed its label, thereby eliminating the instability caused by the random ordering of nodes. Compared with several existing label propagation algorithms on LFR benchmark networks and real networks, the proposed algorithm achieved higher Normalized Mutual Information (NMI) and modularity in its community detection results. The theoretical analysis and experimental results demonstrate that the proposed algorithm not only improves stability effectively but also increases accuracy.
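    A compact numpy sketch of the LeaderRank score used above to rank nodes: a ground node linked bidirectionally to every node replaces PageRank's damping factor, and its final score is shared out evenly.

        import numpy as np

        def leaderrank(adj, iters=200):
            """adj: (n, n) 0/1 symmetric adjacency matrix."""
            n = adj.shape[0]
            A = np.zeros((n + 1, n + 1))
            A[:n, :n] = adj
            A[:n, n] = A[n, :n] = 1.0       # ground node wired to all nodes
            P = A / A.sum(axis=1)[:, None]  # row-stochastic transition matrix
            s = np.ones(n + 1)
            s[n] = 0.0
            for _ in range(iters):
                s = P.T @ s                 # one random-walk step
            return s[:n] + s[n] / n         # share the ground node's score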

    Space vector model algorithm for query of continuous and multidirectional regions
    LIU Runtao, ZHAO Zhenguo, TIAN Guangyue
    2015, 35(2):  452-455.  DOI: 10.11772/j.issn.1001-9081.2015.02.0452

    The directional relations between spatial objects cannot be quantitatively analyzed with the directional features in existing models, which restricts regional queries to a single-direction open region. To resolve this problem, a space vector model algorithm combining vector operations with the MB-tree was proposed to handle directional relation queries in continuous open regions. The algorithm consists of two steps: filtration and purification. In the filtration step, the relations between the query region and the vertexes of the spatial objects' Minimum Bounding Rectangles (MBRs) were analyzed quantitatively and a corresponding judging method was given; an effective pruning rule for MB-tree nodes was also given to reduce the I/O cost when a directional query is executed in MB-tree order. In the purification step, the MBRs saved in the filtration step were traversed to find the actual target objects. The experiments show that the proposed algorithm can handle not only single-open-region queries but also continuous, multidirectional region queries, and that it applies to both two- and three-dimensional spaces.

    Topic evolution in text stream based on feature ontology
    CHEN Qian, GUI Zhiguo, GUO Xin, XIANG Yang
    2015, 35(2):  456-460.  DOI: 10.11772/j.issn.1001-9081.2015.02.0456

    In the era of big data, research on topic evolution is mostly based on classical probabilistic topic models, whose bag-of-words assumption leads to topics lacking semantics and to retrospective evolution analysis. An online incremental topic evolution algorithm based on feature ontology was proposed to tackle these problems. First, a feature ontology was built from word co-occurrence and the general WordNet ontology base, with which the topics in the text stream were modeled. Second, a text-stream topic matrix construction algorithm was put forward to realize online incremental topic evolution analysis. Finally, a topic ontology evolution diagram construction algorithm based on the text-stream topic matrix was put forward, with topic similarity computed by sub-graph similarity, so that the evolution of topics in the text stream was obtained along the time scale. Experiments on scientific literature show that the proposed algorithm reduces the time complexity to O(nK+N), outperforming the classical probabilistic topic evolution models and performing no worse than sliding-window based Latent Dirichlet Allocation (LDA). With ontology and semantic relations introduced, the algorithm can present the semantic features of topics graphically and build the topic evolution diagram incrementally, giving it advantages in semantic interpretability and topic visualization.

    Applicability evaluating method of Dempster's combination rule for bodies of evidence with non-singleton elements
    LIU Zhexi, YANG Jianhong, YANG Debin, LI Min, MIN Xianchun
    2015, 35(2):  461-465.  DOI: 10.11772/j.issn.1001-9081.2015.02.0461

    When bodies of evidence contain non-singleton elements whose basic probability assignments differ greatly between any two bodies, evaluating the applicability of Dempster's combination rule can yield fuzzy or even inaccurate conclusions. To overcome this problem, a modified pignistic probability distance was proposed to describe the relevance between bodies of evidence. Then, combining the modified pignistic probability distance with the classical conflict coefficient, a new method for evaluating the applicability of Dempster's combination rule was presented, in which a new conflict coefficient measures the conflict between bodies of evidence: it equals the modified pignistic probability distance when the classical conflict coefficient is zero, and the average of the modified pignistic probability distance and the classical conflict coefficient otherwise. The numerical examples demonstrate that, compared with the evaluation method based on the pignistic probability distance, the proposed method based on the improved pignistic probability distance provides more applicable and reasonable evaluations of the applicability of Dempster's combination rule.
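    A Python sketch of the combined measure, with mass functions given as dicts from frozenset focal elements to masses; the plain total-variation distance between pignistic transforms below is a stand-in for the paper's modified pignistic probability distance.

        def classical_conflict(m1, m2):
            """K: total mass product over focal elements that do not intersect."""
            return sum(v1 * v2 for a, v1 in m1.items()
                       for b, v2 in m2.items() if not (a & b))

        def pignistic(m):
            """Spread each focal element's mass evenly over its singletons."""
            bet = {}
            for a, v in m.items():
                for x in a:
                    bet[x] = bet.get(x, 0.0) + v / len(a)
            return bet

        def distance(m1, m2):
            b1, b2 = pignistic(m1), pignistic(m2)
            return 0.5 * sum(abs(b1.get(k, 0.0) - b2.get(k, 0.0))
                             for k in set(b1) | set(b2))

        def new_conflict(m1, m2):
            k, d = classical_conflict(m1, m2), distance(m1, m2)
            return d if k == 0 else 0.5 * (d + k)   # the rule described above

        m1 = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
        m2 = {frozenset({'b'}): 0.7, frozenset({'a', 'b'}): 0.3}
        print(new_conflict(m1, m2))   # 0.535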

    Object tracking with efficient multiple instance learning
    PENG Shuang, PENG Xiaoming
    2015, 35(2):  466-469.  DOI: 10.11772/j.issn.1001-9081.2015.02.0466

    Methods based on Multiple Instance Learning (MIL) can alleviate the drift problem to a certain extent. However, the MIL method has relatively poor running efficiency and accuracy, because the update strategy of its strong classifier is inefficient and the update speed of the classifiers does not match the speed at which the target's appearance changes. To solve this problem, a new update strategy for the strong classifier was proposed to improve the running efficiency of the MIL method. In addition, to improve tracking accuracy, a new dynamic mechanism for updating the learning rate of the classifier was given, so that the updated classifier conforms better to the appearance of the target. Experimental comparisons with the MIL method and the Weighted Multiple Instance Learning (WMIL) method show that the proposed method performs best among the three in running efficiency and accuracy, and that it has an advantage in tracking when the background contains no interfering objects similar to the target.

    Nonlinear feature extraction based on discriminant diffusion map analysis
    ZHANG Cheng, LIU Yadong, LI Yuan
    2015, 35(2):  470-475.  DOI: 10.11772/j.issn.1001-9081.2015.02.0470

    Aiming at the problems that high-dimensional data are hard to understand intuitively and cannot be effectively processed by traditional machine learning and data mining techniques, a new nonlinear dimensionality reduction method called Discriminant Diffusion Maps Analysis (DDMA) was proposed. It was implemented by applying a discriminant kernel scheme to the framework of diffusion maps. The Gaussian kernel window width was selected from the within-class width and the between-class width according to the samples' category labels, enabling the kernel function to extract data correlation features effectively and describe the structural characteristics of the data space accurately. DDMA was applied to an artificial Swiss-roll test and a penicillin fermentation process, with comparisons with Principle Component Analysis (PCA), Linear Discriminant Analysis (LDA), Kernel Principle Component Analysis (KPCA), Laplacian Eigenmaps (LE) and Diffusion Maps (DM). The results show that DDMA represents high-dimensional data in a low-dimensional space while retaining the original characteristics of the data, and that the data structure features in the low-dimensional space generated by DDMA are superior to those generated by the comparison methods; the performance in dimension reduction and feature extraction verifies the effectiveness of the proposed scheme.

    Flexible job-shop scheduling optimization based on two-layer particle swarm optimization algorithm
    KONG Fei, WU Dinghui, JI Zhicheng
    2015, 35(2):  476-480.  DOI: 10.11772/j.issn.1001-9081.2015.02.0476
    Abstract ( )   PDF (674KB) ( )  
    References | Related Articles | Metrics

    To deal with the Flexible Job-shop Scheduling Problem (FJSP), an Improved Two-Layer Particle Swarm Optimization (ITLPSO) algorithm was proposed. First, minimizing the maximal completion time over all machines was taken as the optimization objective to establish a flexible job-shop scheduling model. Then the improved two-layer PSO algorithm was presented, in which a stagnation-prevention strategy and a concave decreasing function strategy were adopted to avoid falling into local optima and to improve the convergence rate. Finally, the proposed algorithm was applied to a benchmark instance and compared with existing methods. The experimental results showed that, compared with the standard PSO algorithm and the Two-Layer Particle Swarm Optimization (TLPSO) algorithm, the optimal value of the maximum completion time was reduced by 11 and 6 respectively, the average maximum completion time was reduced by 15.7 and 4 respectively, and the convergence rate was improved markedly. The performance analysis shows that the proposed algorithm can clearly improve the efficiency of flexible job-shop scheduling and obtain better schedules.
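
    One plausible reading of the concave decreasing strategy mentioned above is a quadratically decaying inertia weight; the bounds w_max and w_min and the quadratic form are assumptions for illustration, not taken from the paper:

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Concave decrease from w_max to w_min: the weight stays large early
    (favoring global search) and drops quickly near the end (favoring
    local refinement)."""
    return w_max - (w_max - w_min) * (t / t_max) ** 2

# Typical use in the velocity update of each particle:
#   v = inertia_weight(t, t_max) * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```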

    Image retargeting algorithm based on parallel translation of gridlines
    ZHANG Zijuan, KANG Baosheng
    2015, 35(2):  481-485.  DOI: 10.11772/j.issn.1001-9081.2015.02.0481
    Abstract ( )   PDF (1021KB) ( )  
    References | Related Articles | Metrics

    To reduce the distortions produced by image retargeting algorithms, an image retargeting algorithm based on the Parallel Translation of Gridlines (PTG) was put forward. Firstly, the Achanta algorithm was used to compute the importance map and extract the main object. Secondly, the optimal gridline displacement was calculated: moving whole gridlines preserves the size of important areas and the aspect ratio of the main object, and these dual constraints help avoid distortion; at the same time, lower and upper thresholds were used to restrain the distortion caused by excessively narrowing or widening grids. Finally, an edge-discarding step was introduced to assign more space to the important area and further reduce distortion. An image retargeting survey system was used to compare PTG with column removal with importance diffusion, seam carving with importance diffusion and grid warping, and PTG obtained better results on images with a salient main object. The experimental results show that PTG not only produces less distortion than the comparison methods but also better preserves the region of interest and the important objects of the image.

    Smooth surface reconstruction based on fourth-order partial differential equation
    DENG Shiwu, JIA Yu, YAO Xingmiao
    2015, 35(2):  486-489.  DOI: 10.11772/j.issn.1001-9081.2015.02.0486
    Abstract ( )   PDF (763KB) ( )  
    References | Related Articles | Metrics

    Common surface reconstruction methods based on scattered points, including Kriging interpolation and spline surface fitting, suffer from problems such as a large amount of calculation, unsmooth reconstructed surfaces and the inability to interpolate the given points. Aiming at these issues, a new surface reconstruction method based on a fourth-order partial differential equation was proposed. In this method, a fourth-order partial differential equation was selected and its difference scheme was built, and the stability and convergence of the difference scheme were analyzed. On this basis, following the idea of evolution, the finite difference method was used to obtain the numerical solution of the partial differential equation, and the steady-state solution was treated as an approximation of the original surface. As an example, a geological surface was reconstructed from well-logging data in geological exploration by the proposed partial differential surface modeling method. The result shows that the method is easy to implement, the reconstructed surface is naturally smooth, and the surface interpolates the given scattered data points.
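
    A minimal sketch of the evolution idea, under an assumed model: the biharmonic flow u_t = -laplace^2(u) as a representative fourth-order PDE, stepped explicitly with the known scattered samples re-imposed at every iteration so that the steady state interpolates them:

```python
import numpy as np
from scipy.ndimage import laplace

def reconstruct_surface(shape, points, values, dt=0.01, n_iter=5000):
    """points: list of (row, col) sample locations; values: their heights."""
    u = np.zeros(shape)
    rows, cols = zip(*points)
    for _ in range(n_iter):
        u[rows, cols] = values           # hard interpolation constraint
        u -= dt * laplace(laplace(u))    # explicit step of u_t = -laplace^2(u)
    u[rows, cols] = values
    return u                             # near-steady-state smooth surface
```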

    Multi-focus image fusion algorithm based on nonsubsampled shearlet transform and focused regions detection
    OUYANG Ning, ZOU Ning, ZHANG Tong, CHEN Lixia
    2015, 35(2):  490-494.  DOI: 10.11772/j.issn.1001-9081.2015.02.0490
    Abstract ( )   PDF (861KB) ( )  
    References | Related Articles | Metrics

    To improve the accuracy of focused region detection in multi-focus image fusion based on multiscale transforms, a multi-focus image fusion algorithm based on the NonSubsampled Shearlet Transform (NSST) and focused region detection was proposed. Firstly, an initial fused image was acquired by a fusion algorithm based on NSST. Secondly, the initial focused regions were obtained by comparing the initial fused image with the source multi-focus images. Then, morphological opening and closing were used to correct the initial focused regions. Finally, the fused image was acquired by applying an Improved Pulse Coupled Neural Network (IPCNN) within the corrected focused regions. The experimental results show that, compared with classic fusion algorithms based on the wavelet or Shearlet transform and with current popular algorithms based on NSST and the Pulse Coupled Neural Network (PCNN), the proposed method clearly improves objective evaluation criteria including Mutual Information (MI), spatial frequency and transferred edge information. This illustrates that the proposed method can identify the focused regions of the source images more accurately and transfer more sharpness information from the source images into the fused image.
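
    A minimal sketch of the morphological correction step, assuming a binary focus map and a 5x5 structuring element (the element size is an assumption): opening removes isolated misclassified pixels, closing fills small holes.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def correct_focus_map(initial_map):
    se = np.ones((5, 5), dtype=bool)                    # structuring element
    opened = binary_opening(initial_map, structure=se)  # drop speckle-like errors
    return binary_closing(opened, structure=se)         # fill small holes
```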

    Image retrieval based on multi-feature fusion
    ZHANG Yongku, LI Yunfeng, SUN Jinguang
    2015, 35(2):  495-498.  DOI: 10.11772/j.issn.1001-9081.2015.02.0495
    Abstract ( )   PDF (608KB) ( )  
    References | Related Articles | Metrics

    The accuracy of image retrieval remains a difficult research problem, mainly because of how features are extracted. In order to improve retrieval precision, a new multi-feature image retrieval method called CAUC (Comprehensive Analysis based on the Underlying Characteristics) was presented. First, based on the YUV color space, the mean value and the standard deviation were used to extract a global color feature of the image, and an image bitmap was introduced to describe the local characteristics of the image. Secondly, compactness and Krawtchouk moments were extracted to describe the shape features. Then, the texture features were described by an improved four-pixel co-occurrence matrix. Finally, the similarity between images was computed by fusing the multiple features, and the images with the highest similarity were returned. On the Corel-1000 image set, comparative experiments with a method that only considered the four-pixel co-occurrence matrix showed that the retrieval time of CAUC was greatly reduced without significantly reducing precision and recall. In addition, compared with two other retrieval methods based on multi-feature fusion, CAUC improved precision and recall while keeping a high retrieval speed. The experimental results demonstrate that CAUC is effective in extracting image features and improves retrieval efficiency.
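
    A minimal sketch of the final fusion step, with assumed weights and an assumed normalized-L1 distance per feature group; the abstract does not specify the paper's exact fusion rule:

```python
import numpy as np

def fused_distance(query_feats, db_feats, weights=(0.4, 0.3, 0.3)):
    """query_feats / db_feats: (color, shape, texture) feature vectors."""
    total = 0.0
    for w, fq, fd in zip(weights, query_feats, db_feats):
        fq, fd = np.asarray(fq, float), np.asarray(fd, float)
        d = np.abs(fq - fd).sum() / (1.0 + np.abs(fq).sum() + np.abs(fd).sum())
        total += w * d                   # weighted, normalized L1 distance
    return total                         # smaller distance = more similar image
```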

    Image classification based on global dictionary learning method with sparse representation
    PU Guolin, QIU Yuhui
    2015, 35(2):  499-501.  DOI: 10.11772/j.issn.1001-9081.2015.02.0499
    Abstract ( )   PDF (568KB) ( )  
    References | Related Articles | Metrics

    To address the low efficiency of traditional massive image classification, a global dictionary learning method based on sparse representation was designed. The traditional dictionary learning steps were distributed to parallel nodes: local dictionaries were first learnt on local nodes, and a global dictionary was then updated in real time from those local dictionaries and variables by a convex optimization method, thereby improving the efficiency of dictionary learning and of classifying massive data. Experiments on the MapReduce platform show that the new algorithm performs better than classical image classification methods without sacrificing classification accuracy, and it can be widely used in massive and distributed image classification tasks.

    Image multi-scale recognition method based on computer vision
    ZHANG Yupu, YANG Qi, ZHANG Qi
    2015, 35(2):  502-505.  DOI: 10.11772/j.issn.1001-9081.2015.02.0502
    Abstract ( )   PDF (726KB) ( )  
    References | Related Articles | Metrics

    Focusing on the issues of varying size and angle in images, and on the low recognition rate and poor robustness of image recognition, a morphological image recognition method was proposed. Firstly, the image was centralized and normalized, and its silhouette was converted into a binary image. Secondly, variable circles were used to extract the morphological features of the image, and a fan-shaped area feature vector was established. Finally, a multi-scale analysis method was applied to image recognition and image angle analysis. Compared with traditional methods under conditions such as angle independence, proportion independence and profile robustness, the experimental results show that the proposed method has a higher recognition rate and can analyze the angular difference between images. The method is robust to noise and significantly reduces the influence of image scale and rotation angle on recognition.

    MAP super-resolution reconstruction based on adaptive constraint regularization HL-MRF prior model
    QIN Longlong, QIAN Yuan, ZHANG Xiaoyan, HOU Xue, ZHOU Qin
    2015, 35(2):  506-509.  DOI: 10.11772/j.issn.1001-9081.2015.02.0506
    Abstract ( )   PDF (716KB) ( )  
    References | Related Articles | Metrics

    Aiming at the poor suppression of high-frequency noise by the Huber-MRF prior model and the excessive penalty imposed on high-frequency image information by the Gauss-MRF prior model, an adaptive regularization HL-MRF model was proposed. The method combined the low-frequency part of the Huber edge penalty with the high-frequency part of the Lorentzian edge penalty, realizing a linear constraint on low frequencies and a lighter penalty on high frequencies. The model obtained the optimal parameter values by determining the regularization parameter with an adaptive constraint method. Compared with super-resolution reconstruction methods based on the Gauss-MRF and Huber-MRF prior models, the method based on the HL-MRF prior model obtains a higher Peak Signal-to-Noise Ratio (PSNR) and better detail performance; it therefore has a certain advantage in suppressing high-frequency noise while avoiding over-smoothing of image details.
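
    One plausible form of the combined edge penalty described above (shown purely as an illustration; the threshold $T$, scale $\gamma$ and constants $\lambda$, $c$ are assumptions, with $\lambda$ and $c$ chosen so that $\rho$ is continuous at $|x|=T$):

    $$\rho(x)=\begin{cases}\dfrac{x^{2}}{2}, & |x|\le T\\[6pt]\lambda\,\ln\!\left(1+\dfrac{x^{2}}{2\gamma^{2}}\right)+c, & |x|>T\end{cases}$$

    The quadratic branch acts as the Huber-style constraint on low-frequency content, while the slowly growing logarithmic Lorentzian branch penalizes high-frequency detail less.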

    Shadow detection method based on shadow probability model for remote sensing images
    LI Pengwei, GE Wenying, LIU Guoying
    2015, 35(2):  510-514.  DOI: 10.11772/j.issn.1001-9081.2015.02.0510
    Abstract ( )   PDF (812KB) ( )  
    References | Related Articles | Metrics

    The inhomogeneous spectral response of shadow areas means that threshold-based shadow detection methods often produce results that differ greatly from the real situation. In order to overcome this problem, a new shadow probability model combining opacity and intensity was proposed. To account for the interaction between neighboring pixels, a method based on a multiresolution Markov Random Field (MRF) was proposed for shadow detection in remote sensing images. First, the proposed probability model was used to describe the shadow probability of pixels in the multiresolution images. Then, the Potts model was employed to model the multiscale label fields. Finally, the detection result was obtained by Maximum A Posteriori (MAP) estimation. This method was compared with several shadow detection methods, such as the hue/intensity-based method, the difference dual-threshold method and a Support Vector Machine (SVM) classifier. The experimental results reveal that the proposed method improves the accuracy of shadow detection for high-resolution urban remote sensing images.
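
    In outline, the final step is the generic MAP labeling below, where $y$ is the observed image, $\ell$ the shadow label field and $\beta$ the Potts coupling parameter; the likelihood term is the paper's opacity-intensity shadow probability model, so only the skeleton is shown:

    $$\hat{\ell}=\arg\max_{\ell}\,P(y\mid \ell)\,P(\ell),\qquad P(\ell)\propto\exp\Big(\beta\sum_{\langle s,t\rangle}\delta(\ell_{s},\ell_{t})\Big)$$

    where the sum runs over neighboring pixel pairs and $\delta$ rewards identical labels.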

    Trajectory segment-based abnormal behavior detection method using LDA model
    ZHENG Bingbin, FAN Xinnan, LI Min, ZHANG Ji
    2015, 35(2):  515-518.  DOI: 10.11772/j.issn.1001-9081.2015.02.0515
    Abstract ( )   PDF (830KB) ( )  
    References | Related Articles | Metrics

    Most current trajectory-based abnormal behavior detection algorithms do not consider the internal information of a trajectory, which can lead to a high false alarm rate. An abnormal behavior detection method based on trajectory segments and a topic model was therefore presented. Firstly, the original trajectories were partitioned into trajectory segments according to turning angles. Secondly, the behavior characteristics were extracted by quantifying the observations from these segments into different visual words. Then the spatio-temporal relationships among trajectories were explored with the Latent Dirichlet Allocation (LDA) model. Finally, behavior pattern analysis and abnormal behavior detection were implemented by learning the corresponding generative topic model combined with Bayesian theory. Simulation experiments on behavior pattern analysis and abnormal behavior detection were conducted on two video scenes, and various abnormal behavior patterns were detected. The experimental results show that, combined with trajectory segmentation, the proposed method can mine the internal behavior characteristics of trajectories to identify a variety of abnormal behavior patterns and improve the accuracy of abnormal behavior detection.
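
    A minimal sketch of the quantization-plus-topic-model stage; the codebook size, topic count and the use of scikit-learn's LDA are assumptions standing in for the paper's inference procedure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

def fit_behavior_model(segment_features, traj_ids, n_words=64, n_topics=8):
    """segment_features: (n_segments, d) descriptors; traj_ids: trajectory
    index of each segment. Trajectories play the role of documents."""
    segment_features = np.asarray(segment_features, float)
    traj_ids = np.asarray(traj_ids)
    words = KMeans(n_clusters=n_words, n_init=10).fit_predict(segment_features)
    counts = np.zeros((traj_ids.max() + 1, n_words))
    np.add.at(counts, (traj_ids, words), 1)          # bag-of-words per trajectory
    lda = LatentDirichletAllocation(n_components=n_topics).fit(counts)
    return lda, counts

# A trajectory whose likelihood under the learned topic model is low can
# then be flagged as a candidate abnormal behavior.
```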

    No-reference temporal flickering identification method for video
    LU Qianqian, CHEN Li, TIAN Jing, HUANG Xiaotong
    2015, 35(2):  519-522.  DOI: 10.11772/j.issn.1001-9081.2015.02.0519
    Abstract ( )   PDF (794KB) ( )  
    References | Related Articles | Metrics

    Temporal flickering in video is a key factor affecting video quality, and its accurate identification is required for the automatic analysis and diagnosis of video quality; moreover, identification can be integrated with artifact removal and quality enhancement algorithms to improve their adaptivity. A study of temporal flickering in video surveillance showed that the temporal differential signal of flickering follows a Laplacian distribution. Motivated by this statistical observation and the idea of small-probability events, the proposed method iteratively segmented out the differential signal of foreground motion, which otherwise interferes with the identification of temporal flickering. Furthermore, the approach exploited the Just-Noticeable Difference (JND) mechanism of the human visual system to identify temporal flickering from its frequency and amplitude. The proposed method outperformed the conventional Gaussian mixture model, achieving more accurate classification of normal and flickering videos, as verified by the ROC (Receiver Operating Characteristic) curves in the experimental results. The proposed no-reference algorithm achieves fairly good performance in temporal flickering identification.
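
    A minimal sketch of the statistical step: fit a Laplacian to the temporal differential signal by maximum likelihood and flag tail samples as small-probability events (the coverage level q is an assumption):

```python
import numpy as np

def laplacian_tail_mask(diff_signal, q=0.995):
    x = np.asarray(diff_signal, float)
    mu = np.median(x)                  # ML estimate of the Laplacian location
    b = np.mean(np.abs(x - mu))        # ML estimate of the Laplacian scale
    t = -b * np.log(1.0 - q)           # from P(|X - mu| > t) = exp(-t / b) = 1 - q
    return np.abs(x - mu) > t          # small-probability (motion/outlier) samples
```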

    Automatic road extraction from high resolution SAR images based on fuzzy connectedness
    FU Xiyou, ZHANG Fengli, WANG Guojun, SHAO Yun
    2015, 35(2):  523-527.  DOI: 10.11772/j.issn.1001-9081.2015.02.0523
    Abstract ( )   PDF (895KB) ( )  
    References | Related Articles | Metrics

    High-resolution Synthetic Aperture Radar (SAR) images are affected by speckle noise and depict complex road environments; to address this, an automatic road extraction method based on fuzzy connectedness was proposed. Firstly, speckle filtering was applied to the SAR image to reduce the influence of speckle noise. Then seed points were extracted automatically by combining the results of a Ratio of Exponentially Weighted Averages (ROEWA) detector with Fuzzy C-Means (FCM) clustering. Finally, the roads were extracted by a fuzzy connectedness method whose affinity was characterized by gray level and edge intensity, and a morphological operation was applied to optimize the final result. Comparison experiments between an FCM-based road extraction method and the proposed method were performed on two SAR images; the detection completeness, correctness and quality of the proposed method were all better. The experimental results show that the proposed approach can effectively extract roads from high-resolution SAR images without manually input seed points.
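
    A minimal sketch of fuzzy connectedness as used above: the strength of a path is its weakest affinity, connectedness to the seeds is the strongest path, and the map is computed with a Dijkstra-style propagation; the particular affinity combining gray-level similarity and edge intensity is an assumed form:

```python
import heapq
import numpy as np

def fuzzy_connectedness(gray, edge, seeds, sigma_g=10.0, sigma_e=0.5):
    """gray, edge: 2-D arrays; seeds: list of (row, col) seed points."""
    gray, edge = np.asarray(gray, float), np.asarray(edge, float)
    h, w = gray.shape
    conn = np.zeros((h, w))
    heap = []
    for r, c in seeds:
        conn[r, c] = 1.0
        heapq.heappush(heap, (-1.0, r, c))
    while heap:
        neg, r, c = heapq.heappop(heap)
        if -neg < conn[r, c]:
            continue                               # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                aff = (np.exp(-(gray[r, c] - gray[rr, cc]) ** 2 / (2 * sigma_g ** 2))
                       * np.exp(-edge[rr, cc] ** 2 / (2 * sigma_e ** 2)))
                s = min(conn[r, c], aff)           # path strength = weakest link
                if s > conn[rr, cc]:
                    conn[rr, cc] = s
                    heapq.heappush(heap, (-s, rr, cc))
    return conn  # threshold this map to obtain the road mask
```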

    Visibility estimation algorithm for fog weather based on inflection point line
    LIU Jianlei, LIU Xiaoliang
    2015, 35(2):  528-530.  DOI: 10.11772/j.issn.1001-9081.2015.02.0528
    Abstract ( )   PDF (575KB) ( )  
    References | Related Articles | Metrics

    The existing visibility estimation methods based on region growing suffer from low precision and high computational complexity; a new algorithm was therefore proposed to measure visibility based on the Inflection Point Line (IPL). Firstly, three characteristics of the inflection point line, namely its anisotropy, continuity and levelness, were analyzed. Secondly, a new 2-D filter based on these three characteristics was proposed to detect the IPL, improving the accuracy and speed of inflection point detection. Finally, the visibility in foggy weather was calculated by combining the visibility model with the detection results of the proposed filter. Compared with the visibility estimation algorithm based on region growing, the proposed algorithm reduced the time cost by 80% and the detection error by 12.2%. The experimental results demonstrate that the proposed algorithm effectively improves detection accuracy while reducing the computational complexity of locating inflection points.

    Fast algorithm for color image haze removal using principal component analysis and atmospheric scattering model
    LIANG Zengyan, LIU Benyong
    2015, 35(2):  531-534.  DOI: 10.11772/j.issn.1001-9081.2015.02.0531
    Abstract ( )   PDF (621KB) ( )  
    References | Related Articles | Metrics

    For haze removal in color images, a fast algorithm based on Principal Component Analysis (PCA) and the atmospheric scattering model was proposed. Firstly, the principal components of the three color channels were extracted from the original image, the channels were reconstructed using the maximum principal component, and a Minimum Reconstruction Map (MRM) was obtained by taking the minimum gray value over the three channels. Then, the MRM was filtered by a median filter to improve the accuracy of the global atmospheric light estimate, and the global atmospheric light was estimated from the MRM. Finally, the medium transmittance and the scene radiance of the haze-free image were obtained according to the atmospheric scattering model. The experimental results showed that the proposed algorithm achieves better visual recovery than the dark channel prior haze removal algorithm and the contrast-limited adaptive histogram equalization algorithm. The results demonstrate that the proposed algorithm improves operational efficiency, is simple and easy to implement, and can quickly remove haze from color images.
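
    For reference, the atmospheric scattering model referred to above is commonly written, together with the scene radiance recovery it implies, as

    $$I(x)=J(x)\,t(x)+A\,\big(1-t(x)\big),\qquad J(x)=\frac{I(x)-A}{\max\big(t(x),t_{0}\big)}+A$$

    where $I$ is the hazy image, $J$ the scene radiance, $A$ the global atmospheric light, $t$ the medium transmittance, and $t_{0}$ a small lower bound (an assumed safeguard) that prevents division by near-zero transmittance.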

    Face sketch-photo synthesis based on locality-constrained neighbor embedding
    HU Yanting, WANG Nannan, CHEN Jianjun, MURAT Hamit, ABDUGHRNI Kutluk
    2015, 35(2):  535-539.  DOI: 10.11772/j.issn.1001-9081.2015.02.0535
    Abstract ( )   PDF (863KB) ( )  
    References | Related Articles | Metrics

    The neighboring relationships of sketch patches and photo patches on the manifold cannot always reflect their intrinsic data structure. To resolve this problem, a Locality-Constrained Neighbor Embedding (LCNE) based face sketch-photo synthesis algorithm was proposed. The Neighbor Embedding (NE) based synthesis method was first applied to estimate initial sketches or photos. Then the weight coefficients were constrained according to the similarity between the estimated sketch or photo patches and the training sketch or photo patches. Subsequently, alternating optimization was used to determine the weight coefficients, select K candidate image patches and update the target synthesis patch. Finally, the synthesized image was generated by merging all the estimated sketch or photo patches. In the comparison experiments, the proposed method outperformed the NE-based synthesis method by 0.0503 in terms of the Structural SIMilarity (SSIM) index and by 14% in terms of face recognition accuracy. The experimental results illustrate that the proposed method resolves the weak compatibility among neighboring patches in the NE-based method and greatly alleviates noise and deformation in the synthesized image.

    End-user programming language for mobile children educational game
    HU Zhengyu, SHEN Beijun
    2015, 35(2):  540-544.  DOI: 10.11772/j.issn.1001-9081.2015.02.0540
    Abstract ( )   PDF (741KB) ( )  
    References | Related Articles | Metrics

    Compared with the rapidly growing demand for mobile game-based learning, the number of games that are both playful and instructive is quite small. To deal with this problem, an End-User Programming (EUP) language called Kids was designed, which allows end-users to create mobile educational games for preschool-aged children. Through an analysis of the domain of mobile game-based learning for children, the game elements were identified and a feature model was developed. Kids was designed on the basis of this feature model and is easy to use for users without programming experience. A Kids development tool was also built that supports users in creating games effectively with a visual editor and generates Android code through a code generation engine. Finally, an initial experimental evaluation shows that users can create games easily and rapidly with Kids.

    Automatic software test data generation based on hybrid particle swarm optimization
    DONG Yuehua, DAI Yuqian
    2015, 35(2):  545-549.  DOI: 10.11772/j.issn.1001-9081.2015.02.0545
    Abstract ( )   PDF (776KB) ( )  
    References | Related Articles | Metrics

    Since the fully connected topology of the particle swarm algorithm has low convergence precision and easily falls into local extrema, an approach for automatically generating structural test data based on a hybrid particle swarm algorithm named HPSO (Hybrid Particle Swarm Optimization) was proposed. Firstly, under the premise of preserving global convergence, a fixed-length ring topology replaced the fully connected one when the population lacked diversity. Secondly, the roulette wheel method was introduced to select candidate solutions and update the position and velocity information. Lastly, conditions from the tabu search algorithm were introduced to control and direct particles out of local minima. The experimental results show that HPSO maintains population diversity better than Basic Particle Swarm Optimization (BPSO). Compared with GA-PSO, a combination of the Genetic Algorithm and Particle Swarm Optimization, HPSO is superior in search success rate and path coverage for test data generation, while its average time consumption differs little from that of BPSO.
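
    A minimal sketch of the fixed-length ring topology mentioned above: each particle follows the best personal best within a wrap-around neighborhood of radius k rather than the swarm-wide global best (k = 1 is an assumption):

```python
import numpy as np

def ring_best(pbest_pos, pbest_fit, k=1):
    """pbest_pos: (n, d) personal-best positions; pbest_fit: (n,) fitness
    values, lower is better. Returns the (n, d) neighborhood bests."""
    pbest_pos, pbest_fit = np.asarray(pbest_pos), np.asarray(pbest_fit)
    n = len(pbest_fit)
    lbest = np.empty_like(pbest_pos)
    for i in range(n):
        neigh = [(i + j) % n for j in range(-k, k + 1)]   # wrap-around ring
        lbest[i] = pbest_pos[neigh[int(np.argmin(pbest_fit[neigh]))]]
    return lbest
```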

    Reliability modeling and analysis of embedded system hardware based on Copula function
    GUO Rongzuo, FAN Xiangkui, CUI Dongxia, LI Ming
    2015, 35(2):  550-554.  DOI: 10.11772/j.issn.1001-9081.2015.02.0550
    Abstract ( )   PDF (843KB) ( )  
    References | Related Articles | Metrics

    The reliability of Embedded System Hardware (ESH) is very important and directly affects the quality and longevity of an embedded system. To analyze this reliability, the ESH was studied from the hardware perspective using Copula functions. At first, an abstract formalization of the ESH was defined at the composition level. Then a reliability model of each function module of the ESH was given, considering the integration of hardware and software, and a Copula function was used to establish the reliability model of the whole ESH. Finally, the parameters of the proposed reliability model were estimated, and a specific calculation example using the proposed model was presented and compared with some other Copula functions. The result shows that the proposed Copula-based model is effective.
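
    In outline, a copula couples the module reliabilities $R_{i}(t)$ into a system-level reliability; one common single-parameter choice, shown purely as an illustration since the abstract does not name the copula family used, is the Gumbel copula:

    $$R_{s}(t)=C\big(R_{1}(t),\dots,R_{n}(t)\big),\qquad C(u_{1},\dots,u_{n})=\exp\!\Bigg(-\Big(\sum_{i=1}^{n}(-\ln u_{i})^{\theta}\Big)^{1/\theta}\Bigg),\quad \theta\ge 1$$

    where $\theta$ captures the dependence among module lifetimes ($\theta=1$ reduces to independence).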

    Smart meter software function test model based on disassembly technique
    LIU Jinshuo, WANG Xiebing, CHEN Xin, DENG Juan
    2015, 35(2):  555-559.  DOI: 10.11772/j.issn.1001-9081.2015.02.0555
    Abstract ( )   PDF (776KB) ( )  
    References | Related Articles | Metrics

    During smart meter production, electric power enterprises have noticed significant differences between the sample meters used for checking and the batch meters produced in large quantities. Owing to insufficient detection, many batch meters either work unstably or are rejected for quality reasons, and maintaining these meters causes unnecessary expense. Aiming at this problem, a smart meter software function test scheme was formulated and an embedded smart meter code reversal model was developed. Taking the analysis of the smart meter kernel program to obtain system operating characteristics as its main idea, the model performs a software function difference test on smart meters, using disassembly technology to analyze the function of the smart meter firmware code. The model includes three modules, namely firmware code extraction, firmware code disassembly and software function comparison. A Single-step Disassembly Algorithm (SDA), based on the traditional linear sweep and recursive scanning algorithms, was adopted in the firmware code disassembly module. The model proves remarkably effective when applied to identifying sample and batch meters, and it can keep the function and quality error within 20 percent when maintaining meters in use or about to be put into use.

    Evolving model of multi-local world based on supply chain network with core of manufacturers
    SUN Junyan, FU Weiping, WANG Wen
    2015, 35(2):  560-565.  DOI: 10.11772/j.issn.1001-9081.2015.02.0560
    Abstract ( )   PDF (892KB) ( )  
    References | Related Articles | Metrics

    In order to reveal the evolution rules of a supply chain network centered on manufacturers, a five-level local-world network model was put forward. The model took the BA model and multi-local-world theory as its foundation, combined with realistic mechanisms of network node generation and exit. First of all, the intrinsic characteristics and evolution mechanism of the network were studied. Secondly, the topology and evolution rules of the network were analyzed, and a simulation model was established. Finally, the changes of the network characteristic parameters, including node number, clustering coefficient and degree distribution, were simulated and analyzed under different time steps and critical conditions, and the evolution law of the network was derived. The simulation results show that a manufacturer-centered supply chain network is scale-free and highly concentrated. As time and the growth rate of network nodes increase, the degree distribution of the overall network approaches a power-law distribution with exponent three. The degree distributions at the different levels differ: sub-tier suppliers and retailers obey a power-law distribution, suppliers and distributors obey an exponential distribution, and manufacturers roughly obey a Poisson distribution.

    Optimal strategy for production-distribution network of perishable products based on WCVaR
    ZHANG Lei, YANG Chenghu, LU Meijin
    2015, 35(2):  566-571.  DOI: 10.11772/j.issn.1001-9081.2015.02.0566
    Abstract ( )   PDF (1090KB) ( )  
    References | Related Articles | Metrics

    Given that the probability distribution of demand information in the production-distribution network of perishable products is only partially known, WCVaR (Worst-Case Conditional Value-at-Risk) was introduced to measure the risk. Considering the effects of factors such as production, logistics distribution and transportation paths on production cost, transportation cost, storage cost and stockout loss, an optimization model minimizing WCVaR at a given service level was proposed, and the optimal strategy was obtained by minimizing the tail risk loss of the production-distribution network. The numerical simulation results show that, compared with the robust optimization method, the WCVaR method can handle more volatile uncertainty and is more stable. When demand obeys a mixture distribution, the WCVaR optimization model solves the optimization problem of a production-distribution network under uncertainty well.
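
    For reference, with decision $x$, random demand $\xi$, loss $f(x,\xi)$, confidence level $\beta$ and an ambiguity set $\mathcal{P}$ of admissible demand distributions, the worst-case CVaR used above is commonly defined as

    $$\mathrm{WCVaR}_{\beta}(x)=\sup_{\pi\in\mathcal{P}}\mathrm{CVaR}_{\beta}^{\pi}(x),\qquad \mathrm{CVaR}_{\beta}^{\pi}(x)=\min_{\alpha\in\mathbb{R}}\Big\{\alpha+\frac{1}{1-\beta}\,\mathbb{E}_{\pi}\big[(f(x,\xi)-\alpha)^{+}\big]\Big\}$$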

    Nearest neighbor query algorithm in microscopic traffic simulation system
    SONG Zhu, QIN Zhiguang, DENG Weiwei, ZHAO Yuping
    2015, 35(2):  572-577.  DOI: 10.11772/j.issn.1001-9081.2015.02.0572
    Abstract ( )   PDF (898KB) ( )  
    References | Related Articles | Metrics

    Since linked-list-based methods in existing microscopic traffic simulation systems are neither efficient nor scalable for processing Nearest Neighbor (NN) queries, a method based on a variant of the B+ tree was proposed. The method combines ideas from NN queries in databases with the advantages of linked lists: by maintaining, for each vehicle, indices of the nearby vehicles in its lane, the performance of NN queries within that lane can be greatly improved. Under the assumption of randomly distributed vehicles, a mathematical model was also proposed to optimize the parameter setting according to lane parameters and the number of vehicles; the model minimizes the average query length of each NN query. Theoretical analysis and simulations showed that in common traffic conditions, including sparse, normal and congested traffic, the main indicator, namely the average simulation time cost per vehicle, was reduced by 64.2% and 12.8% compared with the linked list and the B+ tree respectively. The results prove that the proposed method is suitable for large-scale microscopic traffic simulation systems.
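
    A minimal sketch of an in-lane neighbor lookup; a sorted position array with binary search stands in for the paper's B+ tree variant, since both give logarithmic access to the vehicles around a query position:

```python
import bisect

def neighbors_in_lane(positions, x):
    """positions: sorted longitudinal positions of vehicles in one lane.
    Returns the nearest positions at-or-behind and ahead of x."""
    i = bisect.bisect_right(positions, x)
    behind = positions[i - 1] if i > 0 else None          # nearest vehicle <= x
    ahead = positions[i] if i < len(positions) else None  # nearest vehicle > x
    return behind, ahead
```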

    Human performance model with temporal constraint in human-computer interaction
    ZHOU Xiaolei
    2015, 35(2):  578-584.  DOI: 10.11772/j.issn.1001-9081.2015.02.0578
    Abstract ( )   PDF (1193KB) ( )  
    References | Related Articles | Metrics

    Focusing on the issue that prediction models for task accuracy do not capture the speed-accuracy tradeoff in human-computer interaction, a predictive model of accuracy based on temporal constraints was proposed. Controlled experiments studied the relationship between task accuracy and a specified temporal constraint when users tried to complete a task within a specified amount of time in a computer user interface, measuring human performance in temporally constrained tasks. A series of steering tasks with temporal constraints was designed, manipulating the tunnel amplitude, tunnel width and specified movement time. The dependent variable was task accuracy, quantified as the lateral deviation of the trajectory. Analysis of the experimental data from 30 participants showed that task accuracy was linearly related to tunnel width and steering pace (the specified movement time divided by the tunnel amplitude). Finally, a quantitative model predicting task accuracy in temporally constrained steering tasks was established by least-squares regression. The proposed model fits the real data set well, with a goodness of fit of 0.857.
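
    Given the linear relationship reported above, the fitted model has the form below; the coefficient names $a$, $b$, $c$ are notation introduced here, with $\sigma$ the lateral deviation, $W$ the tunnel width, $T$ the specified movement time and $A$ the tunnel amplitude:

    $$\sigma = a + b\,W + c\,\frac{T}{A},\qquad R^{2}=0.857$$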

    New self-localization method for indoor mobile robot
    ZHOU Yancong, DONG Yongfeng, WANG Anna, GU Junhua
    2015, 35(2):  585-589.  DOI: 10.11772/j.issn.1001-9081.2015.02.0585
    Abstract ( )   PDF (837KB) ( )  
    References | Related Articles | Metrics

    Current self-localization algorithms for indoor mobile robots suffer from low positioning accuracy, positioning errors that grow with time, and the multipath and non-line-of-sight effects of the signal. To address these problems, a new mobile robot self-localization method based on Monte Carlo Localization (MCL) was proposed. Firstly, by analyzing a mobile robot self-localization system based on Radio Frequency IDentification (RFID), the robot motion model was established. Secondly, by analyzing the positioning system based on the Received Signal Strength Indicator (RSSI), the observation model was put forward. Finally, to improve the computational efficiency of the particle filter, a particle culling strategy and a particle weighting strategy that considers particle orientation were given, enhancing both the positioning accuracy and the execution efficiency of the new positioning system. The positioning errors of the new algorithm were about 3 cm in both the X and Y directions, while those of the traditional localization algorithm were about 6 cm in both directions. Simulation results show that the new algorithm doubles the positioning accuracy and has good robustness.
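
    A minimal sketch of one MCL update with an RSSI observation; the log-distance path-loss form and every parameter are assumptions standing in for the paper's motion and observation models:

```python
import numpy as np

def mcl_step(particles, weights, odom, rssi, tag_pos,
             motion_noise=0.02, p0=-40.0, eta=2.0, sigma=4.0):
    """particles: (n, 3) array of [x, y, theta]; odom: (dx, dy, dtheta)
    in the robot frame; rssi: measured strength from a tag at tag_pos."""
    n = len(particles)
    dx, dy, dth = odom
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * dx - s * dy + np.random.normal(0, motion_noise, n)
    particles[:, 1] += s * dx + c * dy + np.random.normal(0, motion_noise, n)
    particles[:, 2] += dth + np.random.normal(0, motion_noise, n)
    # RSSI observation model: log-distance path loss
    d = np.hypot(particles[:, 0] - tag_pos[0], particles[:, 1] - tag_pos[1])
    expected = p0 - 10.0 * eta * np.log10(np.maximum(d, 1e-3))
    weights = weights * np.exp(-0.5 * ((rssi - expected) / sigma) ** 2)
    weights /= weights.sum()
    idx = np.random.choice(n, n, p=weights)            # multinomial resampling
    return particles[idx], np.full(n, 1.0 / n)
```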

    Mobile robot motion estimation based on classified feature points
    YIN Jun, DONG Lida, CHI Tianyang
    2015, 35(2):  590-594.  DOI: 10.11772/j.issn.1001-9081.2015.02.0590
    Abstract ( )   PDF (779KB) ( )  
    References | Related Articles | Metrics

    In order to solve the real-time problem of visual navigation systems using traditional motion estimation algorithms, a new approach based on classified feature points was proposed for mobile robot motion estimation. The distances between the feature points and the mobile robot were calculated from the 3-dimensional coordinates of the feature points, and the points were divided into far points and near points. Far points are sensitive to the rotational movement of the robot and were therefore used to calculate the rotation matrix; near points are sensitive to translational motion and were used to calculate the translation matrix. When the far and near points together amounted to 30% of the original feature points, the proposed approach attained accuracy equivalent to RANdom SAmple Consensus (RANSAC) while reducing the computing time by 60%. The results demonstrate that, by classifying feature points, the proposed algorithm can effectively reduce computing time while maintaining the accuracy of motion estimation, and it can meet real-time requirements even with many feature points.

    Load forecasting based on multi-variable LS-SVM and fuzzy recursive inference system
    HU Shiyu, LUO Diansheng, YANG Shuang, YANG Jingwei
    2015, 35(2):  595-600.  DOI: 10.11772/j.issn.1001-9081.2015.02.0595
    Abstract ( )   PDF (961KB) ( )  
    References | Related Articles | Metrics

    In the smart grid, the development of electric power Demand Response (DR) greatly changes the traditional power utilization mode: combined with real-time electricity prices, consumers can adjust their power utilization according to their energy demand, which makes load forecasting more complicated. A multi-input, two-output Least Squares Support Vector Machine (LS-SVM) was proposed to predict the load and the price simultaneously as a preliminary step. Considering the interaction between the real-time electricity price and the load, a fuzzy recursive inference system based on data mining technology was adopted to simulate the game process between price and load forecasting, and the preliminary results of the multi-variable LS-SVM were corrected recursively until the forecasts stabilized. The multi-variable LS-SVM avoids falling into local optima and generalizes well, and the improved association rule mining and loop predictive control algorithms are complete and robust, correcting the forecasting results closely in nearly every real situation. Simulation results on an actual power system show that the proposed method achieves good application effects.
