Table of Contents

    01 March 2014, Volume 34 Issue 3
    Network and communications
    IP address lookup algorithm based on multi-bit priority tries tree
    HUANG Sheng, ZHANG Wei, WU Chuanchuan, CHEN Shenglan
    2014, 34(3):  615-618.  DOI: 10.11772/j.issn.1001-9081.2014.03.0615

    Concerning the low efficiency of existing IP lookup methods, a new lookup algorithm based on Multi-Bit Priority Tries (MBPT) was proposed. By storing higher-priority prefixes in proper order in the dummy nodes of a multi-bit trie, and storing the prefixes to be expanded in an auxiliary storage structure, the algorithm enables the structure to find the longest matching prefix at an internal node instead of a leaf node. Meanwhile, the algorithm avoids reconstructing the routing table when it needs to be updated. The simulation results show that the proposed algorithm can effectively minimize the number of memory accesses for dynamic routing-table operations, including lookup, insertion and deletion, which significantly improves the speed of routing-table lookup as well as update.
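
    The longest-prefix matching that trie-based lookup accelerates can be sketched with a plain single-bit binary trie (illustrative Python only; the paper's MBPT additionally keeps high-priority prefixes in the dummy internal nodes of a multi-bit trie, which this sketch does not reproduce):

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # '0' / '1' -> child TrieNode
        self.next_hop = None  # set when a prefix ends at this node

def insert(root, prefix, next_hop):
    """Walk the prefix bits, creating nodes as needed, and mark the end."""
    node = root
    for bit in prefix:
        node = node.children.setdefault(bit, TrieNode())
    node.next_hop = next_hop

def longest_match(root, address_bits):
    """Descend along the address bits, remembering the deepest prefix seen."""
    node, best = root, None
    for bit in address_bits:
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children.get(bit)
        if node is None:
            return best
    return node.next_hop if node.next_hop is not None else best

root = TrieNode()
insert(root, "0", "A")
insert(root, "01", "B")
insert(root, "011", "C")
```

    With this table, looking up the address bits "0110" returns next hop "C", while "00" falls back to the shorter prefix "0" and returns "A".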

    Network throughput analysis of IEEE 802.15.4 based on M/G/1/K queuing theory
    GUO Ning, MAO Jianlin, WANG Rui, QIAO Guanhua, HU Yujie, ZHANG Chuanlong
    2014, 34(3):  619-622.  DOI: 10.11772/j.issn.1001-9081.2014.03.0619

    According to the IEEE 802.15.4 slotted Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) algorithm, a network analysis model based on a two-dimensional Markov chain was proposed. The model especially considered not only the sleep mode of the IEEE 802.15.4 protocol but also the condition where the backoff window reaches its maximum value before the Number of Backoffs (NB) does. On this basis, combined with M/G/1/K queuing theory, the throughput expression was derived, and the effect of the packet arrival rate on the throughput was analyzed for an unsaturated network. The experimental results on the Network Simulator version 2 (NS2) platform show that the theoretical analysis fits well with the simulation results and describes the network throughput accurately, which validates the effectiveness of the analytical model.

    Clock synchronization algorithm based on component decoupling fusion for wireless sensor networks
    SHI Xin, ZHAO Xiangmo, HUI Fei, YANG Lan
    2014, 34(3):  623-627.  DOI: 10.11772/j.issn.1001-9081.2014.03.0623

    Improving synchronization accuracy in Wireless Sensor Networks (WSN) usually incurs additional synchronization overhead. To reconcile higher sync accuracy with lower sync overhead, a time sync algorithm based on component decoupling fusion was proposed. With the two-way broadcast sync mechanism and the clock correlations, the clock deviations between synchronized nodes and reference nodes were estimated by component decoupling fusion. Besides, the computation of the weights of the different components was discussed according to linear unbiased minimum variance estimation. The simulation results show that, without additional energy consumption, the proposed algorithm improves its sync accuracy after 20 sync rounds by 4.52μs, 13.8μs and 25.48μs compared to PBS (Pairwise Broadcast Synchronization), TPSN (Timing-sync Protocol for Sensor Networks) and RBS (Reference Broadcast Synchronization) respectively.
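
    The linear unbiased minimum-variance weighting mentioned above corresponds to the classical inverse-variance combination of independent unbiased estimates; a generic sketch follows (the function name `blue_fusion` and the interface are illustrative, not the paper's exact decoupling-fusion formulas):

```python
def blue_fusion(estimates, variances):
    """Fuse independent unbiased clock-offset estimates with inverse-variance
    weights: the best linear unbiased (minimum-variance) combination."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    weights = [w / total for w in inv]           # weights sum to 1
    fused = sum(w * e for w, e in zip(weights, estimates))
    fused_var = 1.0 / total                      # variance of the fused estimate
    return fused, fused_var, weights
```

    For example, fusing offset estimates of 10 and 20 time units with variances 1 and 4 weights them 0.8 and 0.2, giving a fused estimate of 12 with variance 0.8, lower than either input variance.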

    Parameter optimization of cognitive wireless network based on cloud immune algorithm
    ZHANG Huawei, WEI Meng
    2014, 34(3):  628-631. 
    In order to improve the parameter optimization results of cognitive wireless networks, an immune-optimization-based parameter adjustment algorithm was proposed. Engine parameter adjustment in a cognitive wireless network is a multi-objective optimization problem, for which intelligent optimization methods such as immune clonal optimization are well suited. Since the mutation probability affects the searching capability of immune optimization, and cloud droplets in the normal cloud model combine randomness with a stable tendency, an adaptive mutation probability adjustment method based on the cloud model was proposed and applied to parameter optimization of cognitive radio networks. Simulation experiments under a multi-carrier system show that, compared with related algorithms, the proposed algorithm has better convergence, and the parameter adjustment results are consistent with the preferences for the objective functions, so it can obtain optimal parameter results for the cognitive engine.
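
    The normal cloud model generates droplets whose dispersion is itself random: the entropy is perturbed by the hyper-entropy before the droplet is drawn. A hypothetical mapping from a droplet's certainty degree to a mutation probability can be sketched as follows (the function and the clamping rule are assumptions for illustration; the paper's exact adjustment rule is not reproduced here):

```python
import math
import random

def cloud_mutation_prob(Ex, En, He, rng=random):
    """One droplet of the normal cloud model (expectation Ex, entropy En,
    hyper-entropy He), mapped to a mutation probability in [0, Ex]."""
    En_p = rng.gauss(En, He)                 # perturbed entropy
    x = rng.gauss(Ex, abs(En_p))             # cloud droplet
    mu = math.exp(-(x - Ex) ** 2 / (2 * En_p ** 2 + 1e-12))  # certainty degree
    return max(0.0, min(1.0, Ex * mu))       # clamp to a valid probability
```

    Droplets near the expectation Ex get certainty close to 1 and hence a mutation probability near Ex, while outlying droplets are damped, giving the "random but stably tending" behavior the abstract describes.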
    Active queue management algorithm of queue delay control
    WU Dong
    2014, 34(3):  632-634.  DOI: 10.11772/j.issn.1001-9081.2014.03.0632

    To solve the problem that queue delay cannot meet the demand of media applications, such as VoIP, real-time video and remote video conferencing, in the existing Active Queue Management (AQM) algorithms, a new AQM algorithm named DCQA (Direct Control Queue Delay Algorithm) was proposed. To keep the queue delay below the expected value, packet-drop decisions and the corresponding processing were performed before a new packet entered the router buffer, and the packet dropping rate was computed by a PID controller. Then, a simulation in three network environments was conducted to compare the performance of DCQA with the CoDel algorithm. The experimental results show that DCQA controls the queue delay effectively while maintaining high link utilization, namely 99.93%, 99.88% and 99.95% in the three environments. Meanwhile, the probabilities of the queue delay staying below the expected value are 50.45%, 51.59% and 52.4%, improved by 3.6%, 40.53% and 50.69% compared with the CoDel algorithm. Therefore, DCQA is applicable to streaming media transmission.
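
    The control step of such a scheme can be sketched as a textbook PID loop that maps the queue-delay error to a drop probability (the class name, gains and target below are illustrative, not DCQA's tuned values):

```python
class PIDDropController:
    """Incremental PID controller: error between measured queue delay and the
    target delay drives the packet drop probability, clamped to [0, 1]."""
    def __init__(self, kp, ki, kd, target_delay):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_delay
        self.integral = 0.0
        self.prev_error = 0.0

    def drop_prob(self, measured_delay):
        error = measured_delay - self.target
        self.integral += error                    # accumulated error (I term)
        derivative = error - self.prev_error      # error change (D term)
        self.prev_error = error
        p = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(1.0, max(0.0, p))              # valid drop probability
```

    Each arriving packet would be dropped with probability `drop_prob(current_delay)` before entering the buffer, which is how the delay is held near the target rather than controlled indirectly through queue length.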

    Research and implementation of WLAN centralized management system based on control and provisioning of wireless access points protocol
    LIU Qian, HU Zhikun, LIAO Beiping, LIAO Yuanqin, GUO Hailiang
    2014, 34(3):  635-639.  DOI: 10.11772/j.issn.1001-9081.2014.03.0635

    In view of the maintenance difficulties and high cost in large-scale deployment of Wireless Local Area Networks (WLAN), the Control and Provisioning of Wireless Access Points (CAPWAP) protocol, which applies to communication between the Access Controller (AC) and the Wireless Termination Point (WTP), was researched and implemented. In a Linux environment, the main features were realized, such as state machine management and centralized WTP configuration, and a WLAN centralized management platform based on the local Medium Access Control (MAC) framework was built. The Wireshark capture tool, Chariot and Iperf were used to test the platform. The capture test results verify the feasibility of the framework, and the throughput and User Datagram Protocol (UDP) test results show that the network performance is efficient and stable.

    Non-data-aided feedforward symbol timing recovery for multi-h CPM
    ZHONG Sheng, XIE Shunqin, ZHANG Jian, YANG Chun
    2014, 34(3):  640-643.  DOI: 10.11772/j.issn.1001-9081.2014.03.0640

    To address the difficulty and complexity of the symbol timing problem of multi-h CPM (Continuous Phase Modulation), a non-data-aided feedforward symbol timing recovery algorithm for multi-h CPM was proposed. The joint likelihood function of the modulation index offset and the symbol timing offset was simplified into a likelihood function of the symbol timing offset alone, and the estimate of the symbol timing offset was acquired by averaging the expectation functions over the possible modulation index offsets. The Modified Cramer-Rao Bound (MCRB) and an implementation scheme of the proposed symbol timing recovery were presented. The simulation analyses show that the proposed algorithm is suitable for both full-response and partial-response multi-h CPM, and that its symbol timing recovery performance is good and insensitive to the carrier frequency offset and modulation index offset.

    Active congestion control strategy based on historical probability in delay tolerant networks
    SHEN Jian, XIA Jingbo, FU Kai, SUN Yu
    2014, 34(3):  644-648.  DOI: 10.11772/j.issn.1001-9081.2014.03.0644

    To solve the congestion problem at nodes in delay tolerant networks, an active congestion control strategy based on historical probability was proposed. The strategy put forward the concept of a referenced probability that can be adjusted dynamically according to the degree of congestion. The referenced probability controls the forwarding conditions so as to avoid and relieve congestion at nodes, while promoting the utilization of idle resources and the transmission efficiency of the network. The simulation results show that the strategy raises the delivery ratio of the entire network and reduces the load ratio and message loss rate. As a result, active congestion control is realized and the transmission performance of the network is enhanced.

    Adaptive beamforming algorithm based on interference-noise covariance matrix reconstruction
    HOU Yunshan, ZHANG Xincheng, JIN Yong
    2014, 34(3):  649-652.  DOI: 10.11772/j.issn.1001-9081.2014.03.0649

    In adaptive beamforming, the presence of the desired signal component in the training data, small sample size, and imprecise knowledge of the desired signal steering vector are the main causes of performance degradation. To solve this problem, a robust adaptive beamforming algorithm was proposed that performs interference-plus-noise covariance matrix reconstruction and desired signal steering vector estimation. First, the interference-plus-noise covariance matrix was reconstructed using the Multiple Signal Classification (MUSIC) spatial spectrum over the signal-free angle sector. Then a constraint was derived that prevents the estimate of the desired signal steering vector from converging to any of the interference steering vectors or their linear combinations. This constraint was combined with the maximization of the array output power to formulate an optimization problem for estimating the desired signal steering vector, which was solved with convex optimization software. The computational complexity of the proposed method was discussed, and its effectiveness and superiority were validated by simulations. The simulation results demonstrate that the Signal to Interference plus Noise Ratio (SINR) of the proposed beamformer stays close to optimal over a very large range of Signal-to-Noise Ratio (SNR) in scenarios with random signal and interference look direction mismatch and incoherent local scattering, making it more robust than the existing beamformers.

    Intellectual property core design of communication terminal based on 1553B bus
    LI Yanjie, HE Jingsong, LI Ran
    2014, 34(3):  653-657.  DOI: 10.11772/j.issn.1001-9081.2014.03.0653

    To meet the needs of ground simulation equipment for spacecraft, a design of a 1553B bus communication terminal Intellectual Property (IP) core based on Field Programmable Gate Array (FPGA) was proposed. On the premise of reliability, the bus system was designed with a top-down approach and the "two-process" coding method to generate object code in the Very-High-Speed Integrated Circuit Hardware Description Language (VHDL); it was then simulated with the ModelSim software, and finally verified and applied on an actual device. The IP core can be configured to work as a bus controller, a remote terminal, or a bus monitor. In addition, the IP core is easy to integrate into a System on Chip (SoC), and provides more choices for further applications of the 1553B bus.

    Under-determined blind source separation based on potential function and compressive sensing
    LI Lina, ZENG Qingxun, GAN Xiaoye, LIANG Desu
    2014, 34(3):  658-662.  DOI: 10.11772/j.issn.1001-9081.2014.03.0658

    The traditional two-step algorithm for under-determined blind source separation has several deficiencies: the value of K is difficult to determine, the algorithm is sensitive to initial values, noise and singular points are difficult to exclude, and the algorithm lacks a theoretical basis. To solve these problems, a new two-step algorithm based on the potential function algorithm and compressive sensing theory was proposed. Firstly, the mixing matrix was estimated by an improved potential function algorithm based on multi-peak particle swarm optimization; after the sensing matrix was constructed from the estimated mixing matrix, a compressive sensing algorithm based on orthogonal matching pursuit was introduced into the under-determined blind source separation process to realize signal reconstruction. The simulation results show that the estimation precision of the mixing matrix can reach up to 99.13%, and all the signal-to-interference ratios of the reconstructed signals exceed 10dB, which meets the reconstruction accuracy requirements and confirms the effectiveness of the proposed algorithm. The algorithm offers good universality and high accuracy for under-determined blind source separation of one-dimensional mixed signals.
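
    The orthogonal matching pursuit step can be sketched in a few lines of NumPy: greedily pick the sensing-matrix column most correlated with the residual, then re-fit all selected columns by least squares (a generic noiseless sketch with random data, not the paper's mixing-matrix construction):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.astype(float).copy()
    support, x = [], np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # project out selected atoms
    x[support] = coef
    return x

# Noiseless demo: a 3-sparse signal measured by a 30x60 Gaussian matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.5, -2.0, 0.7]
y = A @ x_true
x_hat = omp(A, y, 3)
```

    In the noiseless under-determined setting (30 measurements, 60 unknowns), the 3-sparse signal is recovered exactly, which is the reconstruction step the two-step algorithm relies on.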

    Relationships between latency scalability and execution time
    XIONG Huanliang, ZENG Guosun, WU Canghai, KUANG Guijuan, HE Huojiao
    2014, 34(3):  663-667.  DOI: 10.11772/j.issn.1001-9081.2014.03.0663

    Previous studies on scalability do not fully consider parallel execution time, and the relationships between latency scalability and parallel execution time have not yet been studied thoroughly. This paper therefore studied these relationships in depth and drew some important conclusions about the relationships between latency scalability and parallel execution time after different algorithm-machine combinations are extended from the same initial state, together with their proofs. The derived conclusions enrich the research on the relationships between latency scalability and parallel execution time and provide a theoretical basis for obtaining ideal latency scalability in parallel computing. Finally, the conclusions and analytical expressions were verified through experimental results obtained for different algorithm-machine combinations.

    Real-time scheduling algorithm for periodic priority exchange
    WANG Bin, WANG Cong, XUE Hao, LIU Hui, XIONG Xin
    2014, 34(3):  668-672.  DOI: 10.11772/j.issn.1001-9081.2014.03.0668

    A static priority scheduling algorithm with periodic priority exchange was proposed to resolve the low-priority task latency problem in real-time multi-task systems. In this method, a fixed timeslice period was defined, and two independent tasks of different priorities in the multi-task system exchanged their priority levels periodically. Under the precondition that the execution time of the higher-priority task is guaranteed, the lower-priority task gets more opportunities to run as soon as possible, shortening its execution delay. The proposed method can effectively remedy the poor real-time performance of low-priority tasks and improve the overall control capability of a real-time multi-task system.

    Reliability-aware workflow scheduling strategy on cloud computing platform
    YAN Ge, YU Jiong, YANG Xingyao
    2014, 34(3):  673-677.  DOI: 10.11772/j.issn.1001-9081.2014.03.0673

    Through analysis and research of the reliability problems in existing workflow scheduling algorithms, which improve the reliability of the entire workflow by sacrificing efficiency or money, a reliability-aware workflow scheduling strategy was proposed. Combining the reliability of tasks in the workflow with the duplication ideology, and taking full consideration of the priorities among tasks, the strategy lessens the failure rate of the transmission procedure while shortening transmission time, so it not only enhances overall reliability but also reduces makespan. Experiments with different numbers of tasks and different Communication to Computation Ratios (CCR) show that the reliability of the cloud workflow under this strategy is better than that of the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and its improved variant SHEFTEX, and that the proposed algorithm also outperforms HEFT in completion time.

    Service trust evaluation method based on weighted multiple attribute cloud
    WEI Bo, WANG Jindong, ZHANG Hengwei, YU Dingkun
    2014, 34(3):  678-682.  DOI: 10.11772/j.issn.1001-9081.2014.03.0678

    With regard to the randomness and fuzziness of service trust in the computing environment, and the lack of consideration of timeliness and recommendation trust, a service trust evaluation method based on a weighted multiple attribute cloud was proposed. Firstly, each service evaluation was weighted by introducing a time decay factor, the evaluation granularity was refined by evaluating trust over multiple attributes of the service, and the direct trust cloud was generated using the weighted attribute trust cloud backward generator. Then, the weight of each recommender was confirmed by the similarity of evaluations, and the recommended trust cloud was obtained from the recommendation information. Finally, the comprehensive trust cloud was obtained by merging the direct and recommended trust clouds, and the trust rating was confirmed by cloud similarity calculation. The simulation results show that the proposed method obviously improves the success rate of service interactions, effectively restrains malicious recommendation, and reflects the actual service trust situation more truly.
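
    A time decay factor of the kind mentioned above can be sketched as an exponential weighting of timestamped ratings, so that recent evaluations dominate (an illustrative decay rule with a made-up `half_life` parameter, not the paper's exact factor):

```python
import math

def decayed_trust(ratings, now, half_life):
    """Weighted mean of (timestamp, score) ratings, where a rating's weight
    halves every `half_life` time units of age."""
    weights = [math.exp(-math.log(2) * (now - t) / half_life)
               for t, _ in ratings]
    total = sum(weights)
    return sum(w * s for w, (_, s) in zip(weights, ratings)) / total
```

    For instance, an old score of 0.2 (one half-life ago, weight 0.5) combined with a fresh score of 0.8 (weight 1.0) yields a trust value of 0.6 rather than the plain average of 0.5, reflecting the timeliness of the newer evidence.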

    Reliability optimization approach for Web service composition based on cost benefit coefficient
    TIAN Qiang, XIA Yongying, FU Xiaodong, LI Changzhi, WANG Wei
    2014, 34(3):  683-689.  DOI: 10.11772/j.issn.1001-9081.2014.03.0683

    To solve the problem of the large amount of calculation and nonlinear programming in service composition optimization, a Cost Benefit Coefficient (CBC) approach was proposed for optimizing the reliability of Web service composition under a given cost investment. First, the structure patterns of service composition and the related reliability functions were analyzed; the reliability calculation method for Web service composition was then proposed and a nonlinear optimization model was established accordingly. Next, the cost benefit coefficient was computed from the relationship between the cost and the reliability of the component services, the optimization schemes of the Web service composition were decided, and the optimization results were computed according to the nonlinear optimization model. Finally, under a given cost investment, the higher reliability achieved by the approach was verified by comparing it with the traditional method on reliability data of component services. The experimental results show that the proposed algorithm is effective and reasonable for reliability optimization of Web service composition.
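
    The reliability functions of the two basic composition structure patterns follow standard reliability theory: a sequence succeeds only if every component does, while a redundant parallel pattern fails only if all components do (a textbook sketch, assuming independent component failures; the paper's full model covers further patterns):

```python
def series_reliability(rs):
    """Sequence pattern: the composition works iff every component works."""
    r = 1.0
    for ri in rs:
        r *= ri
    return r

def parallel_reliability(rs):
    """Parallel (redundant) pattern: the composition works iff at least one
    component works, i.e. 1 minus the probability that all fail."""
    f = 1.0
    for ri in rs:
        f *= (1.0 - ri)
    return 1.0 - f
```

    Two services of reliability 0.9 and 0.8 give 0.72 in sequence but 0.98 in parallel, which is exactly the trade-off a cost-benefit coefficient must weigh: redundancy buys reliability at extra cost.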

    Cloud framework for hierarchical batch-factor algorithm
    YUAN Xinhui, LIU Yong, QI Fengbin
    2014, 34(3):  690-694.  DOI: 10.11772/j.issn.1001-9081.2014.03.0690

    Bernstein's batch factoring algorithm can test the B-smoothness of a large set of integers in a short time, but it costs so much memory that it is widely used in theoretical analyses yet rarely in practice. Based on splitting the product of primes into pieces, a hierarchical batch-factoring cloud framework was proposed to solve this problem. The hierarchical design keeps development clear and simple, and can easily be ported to other architectures. The cloud computing framework, borrowed from MapReduce, uses services provided by cloud clients, such as distributed memory, shared memory and messaging, to carry out the mapping of the split-primes batch factoring algorithm, which removes the great memory cost of Bernstein's method. Experiments show that the framework has good scalability and can adapt to batch factoring at different scales, with the prime product varying from 1.5GB to 192GB, which enhances the practicality of the algorithm significantly.

    Large-scale image retrieval solution based on Hadoop cloud computing platform
    ZHU Weisheng, WANG Peng
    2014, 34(3):  695-699.  DOI: 10.11772/j.issn.1001-9081.2014.03.0695

    Concerning the massive image data processing problems confronting traditional image retrieval methods, a new solution for large-scale image retrieval, named MR-BoVW, was proposed. It builds on the traditional Bag of Visual Words (BoVW) approach and the MapReduce model to take advantage of the massive storage capacity and powerful parallel computing ability of Hadoop. To handle image data well, an improved method for Hadoop image processing was first introduced; the MapReduce workflow was then divided into three stages: feature vector generation, feature clustering, and image representation together with inverted index construction. The experimental results demonstrate that MR-BoVW performs well in terms of speedup, scaleup and sizeup: the efficiency values all exceed 0.62, and the scaleup and sizeup curves are gentle. It is therefore suitable for large-scale image retrieval.

    Reputation model based on fuzzy prediction in wireless sensor networks
    CAO Xiaomei, SHEN Heyang, ZHU Haitao
    2014, 34(3):  700-703.  DOI: 10.11772/j.issn.1001-9081.2014.03.0700

    In view of the trust value update problem in Wireless Sensor Networks (WSN), a trust model based on Fuzzy Prediction (FP), called RMFP, was proposed. The behavior of nodes was described using fuzzy mathematics and converted into fuzzy membership degrees by the membership functions; the trust value was then obtained by integrating these membership degrees. The simulation results show that the accuracy of the trust value is increased by 10.8%, and the judgment speed for suspect nodes is increased twofold. The model is thus markedly better at discovering and eliminating malicious nodes quickly and accurately, especially at judging pre-made malicious nodes with high trust values.
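
    A membership function of the kind used to fuzzify node behavior can be sketched with the common triangular form (an illustrative choice; the paper does not specify its exact functions, and the breakpoints below are made up):

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at or below a, rising to 1 at the
    peak b, falling back to 0 at or above c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

    A raw behavior metric (say, packet-forwarding rate) is mapped through one such function per linguistic grade ("low", "medium", "high"), and the resulting membership degrees are what the model integrates into a trust value.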

    Credible service quality evaluation model based on separation of explicit quality attributes and implicit quality attributes
    ZHOU Guoqiang, DING Chengcheng, ZHANG Weifeng, ZHANG Yingzhou
    2014, 34(3):  704-709.  DOI: 10.11772/j.issn.1001-9081.2014.03.0704

    Concerning the fact that present Quality of Service (QoS) evaluation methods ignore implicit service quality and thus produce inaccurate results, a service evaluation method that comprehensively considers explicit and implicit quality attributes was put forward. Explicit quality attributes were expressed in vector form and, after quantization and normalization by the service quality assessment model, their evaluation values were calculated; implicit quality attributes were expressed according to the recommendations of similar users. The users' credibility and the difference between old and new users were considered in the evaluation process. Finally, the explicit and implicit quality evaluations were combined as the QoS evaluation result. Experiments comparing three algorithms on one million Web service QoS records show that the proposed method is feasible and accurate.

    Wormhole detection based on neighbor routing in Ad Hoc network
    CAO Xiaomei, WU Lei, LI Jiageng
    2014, 34(3):  710-713.  DOI: 10.11772/j.issn.1001-9081.2014.03.0710

    To solve the high energy and time delay costs caused by wormhole detection in Ad Hoc networks, a lightweight wormhole detection method with lower delay and energy consumption was proposed. The method uses the neighbor numbers of routing nodes to obtain a set of abnormal nodes, and then detects the presence of a wormhole from the neighbor information of the abnormal nodes once the routing process completes. The simulation results show that the proposed method can detect wormholes with fewer routing queries. Compared with the DeWorm (Detect Wormhole) method and the E2SIW (Energy Efficient Scheme Immune to Wormhole attacks) method, it effectively reduces the time delay and energy costs.
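
    The first step, selecting abnormal nodes from neighbor counts, can be sketched with a simple threshold rule (the 1.5x-mean threshold and the function are illustrative assumptions; a wormhole endpoint typically advertises far more neighbors than an honest node because the tunnel makes distant nodes appear adjacent):

```python
def abnormal_nodes(neighbors, factor=1.5):
    """Flag nodes whose neighbor count exceeds `factor` times the
    network-wide mean neighbor count."""
    counts = {n: len(v) for n, v in neighbors.items()}
    mean = sum(counts.values()) / len(counts)
    return {n for n, c in counts.items() if c > factor * mean}
```

    The flagged set is then the only part of the network whose neighbor lists need closer (and costlier) inspection, which is where the method saves delay and energy over whole-network checks.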

    Integrity-preserving data aggregation scheme based on distributed authentication
    YANG Wenwen, MA Chunguang, HUANG Yuluo
    2014, 34(3):  714-719.  DOI: 10.11772/j.issn.1001-9081.2014.03.0714

    To protect data integrity during data aggregation in Wireless Sensor Networks (WSN), a secure and efficient data aggregation scheme based on Dual-head Cluster Based Secure Aggregation (DCSA) was proposed. By setting symmetric keys between nodes and using a distributed authentication method, the scheme performs node authentication and aggregation simultaneously, so the integrity checking of a child node completes immediately during aggregation. Moreover, by exploiting the mutual supervision of the red and black cluster heads, the scheme can locate malicious nodes and strengthens resistance to collusion attacks. The experimental results show that the proposed scheme ensures the same security level as DCSA while detecting and discarding erroneous data immediately, improving the efficiency of the integrity detection mechanism with lower network energy consumption.

    Game-theoretic model of active attack on steganographic system
    LIU Jing, TANG Guangming
    2014, 34(3):  720-723.  DOI: 10.11772/j.issn.1001-9081.2014.03.0720

    To model active attacks on steganographic systems, the adversarial relationship between the steganographer and the active attacker was formalized as a steganographic game with the embedding rate and error rate as the payoff function. With the basic theory of two-person finite zero-sum games, the equilibrium between the steganographer and the active attacker was analyzed, and a method to obtain their equilibrium strategies was given. An example case was then solved to demonstrate the ideas presented in the model. This model not only provides a theoretical basis for the steganographer and the active attacker to determine their optimal strategies, but also offers guidance for designing steganographic algorithms robust to active attack.
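
    For the smallest non-trivial case, a 2x2 zero-sum game without a saddle point, the mixed-strategy equilibrium has a textbook closed form, sketched below (a generic game-theory illustration, not the paper's specific payoff function):

```python
def solve_2x2_zero_sum(a):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with payoff matrix
    `a` (row player's gain), assuming no pure-strategy saddle point."""
    (a11, a12), (a21, a22) = a
    d = a11 - a12 - a21 + a22
    p = (a22 - a21) / d                   # prob. row player plays row 1
    q = (a22 - a12) / d                   # prob. column player plays column 1
    v = (a11 * a22 - a12 * a21) / d       # value of the game
    return p, q, v
```

    For the matching-pennies matrix [[1, -1], [-1, 1]] both players randomize 50/50 and the game's value is 0; in the steganographic game, the analogous mixed strategies tell each side how to randomize over embedding and attack options at equilibrium.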

    Cryptographic access control scheme for cloud storage based on proxy re-encryption
    LANG Xun, WEI Lixian, WANG Xuan, WU Xuguang
    2014, 34(3):  724-727.  DOI: 10.11772/j.issn.1001-9081.2014.03.0724

    Concerning data confidentiality when data is stored in untrusted cloud storage, a new encryption algorithm based on Proxy Re-Encryption (PRE) was proposed and applied in an access control scheme for cloud storage. The scheme stores part of the ciphertext in the cloud for sharing and sends the rest directly to users. It was proven that the scheme ensures the confidentiality of sensitive data stored in an untrusted third-party open environment, and the experimental results show that the transmission of ciphertexts can be controlled by the sender. Thanks to the properties of proxy re-encryption, the number of ciphertext operations and the required storage do not increase linearly with the number of users, which effectively decreases the data computation cost, the interaction cost and the data storage space. The scheme thus achieves secure and efficient sharing of sensitive data stored in the cloud.

    Implementation of data encryption and device authentication in Konnex/European installation bus protocol
    DING Jun, ZHANG Xihuang
    2014, 34(3):  728-732.  DOI: 10.11772/j.issn.1001-9081.2014.03.0728

    To implement secure data transmission in Home and Building Automation (HBA), an encryption and authentication mechanism was introduced into Konnex/European Installation Bus (KNX/EIB). The Diffie-Hellman algorithm was used to realize asymmetric key sharing, the Advanced Encryption Standard (AES) was applied for data encryption, a hash algorithm was adopted for challenge-response authentication, and a device called the controller was employed to coordinate the procedures of key sharing and device authentication. The simulation results show the proposed method is feasible in terms of space and time cost. Compared with other improvements, this one is easier to implement and operate, and it can ensure data security.
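
    The Diffie-Hellman key sharing step works as follows: each side publishes g^x mod p and both derive the same g^(ab) mod p, which can then seed an AES session key. A toy sketch with textbook small numbers (real deployments would use standardized large-parameter groups, and the derived value would be hashed into a key, neither of which is shown here):

```python
def dh_shared_key(p, g, private_a, private_b):
    """Textbook Diffie-Hellman over a toy prime: both sides compute the
    same shared secret from each other's public values."""
    pub_a = pow(g, private_a, p)          # A publishes g^a mod p
    pub_b = pow(g, private_b, p)          # B publishes g^b mod p
    key_a = pow(pub_b, private_a, p)      # A computes (g^b)^a mod p
    key_b = pow(pub_a, private_b, p)      # B computes (g^a)^b mod p
    assert key_a == key_b                 # both hold g^(ab) mod p
    return key_a
```

    Only the public values cross the bus; an eavesdropper on KNX/EIB traffic sees g^a and g^b but cannot feasibly recover g^(ab) for properly sized parameters.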

    Artificial intelligence
    Shopping information extraction method based on rapid construction of template
    LI Ping, ZHU Jianbo, ZHOU Lixin, LIAO Bin
    2014, 34(3):  733-737.  DOI: 10.11772/j.issn.1001-9081.2014.03.0733

    Concerning shopping information Web pages constructed from templates, with their large amount of Web information and complex structure, this paper studied how to extract shopping information from template-based Web pages without complex learning rules. The paper defined the Web page template and the extraction template, designed a template language for constructing templates, and gave a template-based extraction model. The experimental results on 450 standard Web pages show that the recall rate of the proposed method is 12% higher than that of the Extraction problem Algorithm (EXALG); on 250 standard Web pages, its recall rate is 7.4% higher than that of the Visual information and Tag structure based wrapper generator (ViNTs) and 0.2% higher than that of Augmenting automatic information extraction with visual perceptions (ViPER), while its accuracy rate is 5.2% higher than ViNTs and 0.2% higher than ViPER. The clear improvements in recall and accuracy of the rapid-construction-template extraction method in turn improve the accuracy of Web page analysis and the information recall rate in shopping information retrieval and shopping comparison systems.

    Label propagation algorithm based on potential function for community detection
    SHI Lixin ZHANG Junxing
    2014, 34(3):  738-741.  DOI: 10.11772/j.issn.1001-9081.2014.03.0738
    Abstract ( )   PDF (643KB) ( )
    Related Articles | Metrics

    Because of its randomness, the robustness of the Label Propagation Algorithm (LPA) is severely hampered. To improve the robustness, an LPA based on the potential function of a data field (LPAP) was proposed. The potential of every node was calculated and local potential extrema were searched. Only the nodes with extreme potential were labeled initially, and during iteration each node's label was updated according to the summed potential of its equally labeled neighbors; when no node changed its label, the iteration stopped. The experimental results show that the average number of distinct community partitions of LPAP is 4.0% of that of LPA and 12.9% of that of the Balanced Propagation Algorithm (BPA), and the average Variation of Information (VOI) of LPAP is 45.1% of that of LPA and 73.3% of that of BPA. LPAP is significantly more robust and is suitable for community detection in large networks.
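    The labeling scheme can be sketched as follows (a minimal sketch assuming a degree-based data-field potential with unit edge lengths; the paper's exact potential function and update order may differ):

```python
import math

def lpap(adj, sigma=1.0, max_iter=100):
    # data-field potential with unit edge length reduces to a weighted
    # degree (an assumption -- the paper's potential may differ)
    w = math.exp(-(1.0 / sigma) ** 2)
    phi = {v: w * len(adj[v]) for v in adj}
    # seed labels only at local potential maxima
    label = {v: v for v in adj if all(phi[v] >= phi[u] for u in adj[v])}
    for _ in range(max_iter):
        changed = False
        for v in sorted(adj):
            votes = {}
            if v in label:
                votes[label[v]] = phi[v]
            for u in adj[v]:
                if u in label:
                    votes[label[u]] = votes.get(label[u], 0.0) + phi[u]
            if not votes:
                continue
            best = max(sorted(votes), key=votes.get)
            cur = label.get(v)
            # relabel only on a strictly larger summed potential
            if cur is None or (best != cur and votes[best] > votes[cur]):
                label[v] = best
                changed = True
        if not changed:
            break
    return label

# two triangles joined by one bridge edge: two communities expected
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
     3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
```

    Seeding only at potential maxima removes the random initial ordering that makes plain LPA unstable, which is the source of the robustness gain the abstract reports.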

    Modeling on box-office revenue prediction of movie based on neural network
    ZHENG Jian ZHOU Shangbo
    2014, 34(3):  742-748.  DOI: 10.11772/j.issn.1001-9081.2014.03.0742
    Abstract ( )   PDF (1041KB) ( )
    Related Articles | Metrics

    Concerning the limitations that prediction accuracy is low and box-office classification is not significant in application, this paper proposed a new model to predict the box-office revenue of a movie, based on the real movie market. The algorithm can be summarized as follows. Firstly, the factors affecting the box office and the format of the output were determined. Secondly, these factors were analyzed and quantified within [0, 1]. Then, the number of neurons was determined to build the architecture of the neural network according to the input and output, and the algorithm and procedure were improved before finishing the prediction model. Finally, the model was trained with denoised historical movie data, and the output of the model was optimized to dispel randomness so that the result could reflect the box office more reliably. The experimental results demonstrate that the model based on the back-propagation neural network algorithm performs better on prediction and classification (for the first five weeks, the average relative error is 43.2% while the average accuracy rate reaches 93.69%), so it can provide a more comprehensive and reliable suggestion for publicity and risk assessment before a movie is released, and it possesses good application value and research prospects in the prediction field.

    Residents travel mode choice based on prospect theory
    ZHANG Wei HE Ruichun
    2014, 34(3):  749-753.  DOI: 10.11772/j.issn.1001-9081.2014.03.0749
    Abstract ( )   PDF (753KB) ( )
    Related Articles | Metrics

    Concerning the influence of residents' psychological factors on travel mode choice in actual travel, a travel mode choice model based on prospect theory was established and a choice method more consistent with human thinking habits was put forward. By comprehensively considering psychological reference points for travel time and travel cost, the travel mode that satisfies the resident was obtained. The influence of the reference point on travel mode was analyzed by comparing the changes of the comprehensive prospect value under different reference points. Finally, an example illustrated the application of this travel mode choice method. The experimental results show that the minority of residents whose expected travel time is lower prefer bus travel, although the comprehensive prospect value changes of taxi and private car are identical; more residents tend to use the private car mode, which is consistent with the facts. The proposed method provides a new way to predict residents' travel mode.
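    The prospect computation behind such a model can be sketched as follows (a hedged illustration using the standard Kahneman-Tversky value and weighting functions with their textbook parameters; the outcome numbers are invented, not the paper's survey data):

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Kahneman-Tversky value function around the reference point:
    # concave for gains, convex and loss-averse (lam > 1) for losses
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    # inverse-S probability weighting: small probabilities overweighted
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect(outcomes):
    # comprehensive prospect value of a travel mode; each outcome is
    # (gain in minutes relative to the reference travel time, probability)
    return sum(weight(p) * value(x) for x, p in outcomes)

# illustrative numbers only -- not the paper's data
bus = [(-10.0, 0.6), (5.0, 0.4)]   # often slower than the reference
car = [(-2.0, 0.5), (2.0, 0.5)]    # small symmetric deviations
```

    Loss aversion makes even a symmetric mode score negative, and the mode with the higher comprehensive prospect value is the one the model predicts residents choose.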

    Particle swarm optimization algorithm based on Gaussian disturbance
    ZHU Degang SUN Hui ZHAO Jia YU Qing
    2014, 34(3):  754-759.  DOI: 10.11772/j.issn.1001-9081.2014.03.0754
    Abstract ( )   PDF (836KB) ( )
    Related Articles | Metrics

    As the standard Particle Swarm Optimization (PSO) algorithm has some shortcomings, such as getting trapped in local minima, converging slowly and low precision in the late stage of evolution, a new improved PSO algorithm based on Gaussian disturbance (GDPSO) was proposed. A Gaussian disturbance was added to the personal best positions, which could prevent the algorithm from falling into local minima and improve the convergence speed and accuracy. While keeping the same number of function evaluations, experiments were conducted on eight well-known benchmark functions with a dimension of 30. The experimental results show that the GDPSO algorithm outperforms some recently proposed PSO algorithms in terms of convergence speed and solution accuracy.
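    The disturbance step can be sketched as follows (a minimal illustration on the sphere function; the constriction coefficients and the disturbance variance are assumptions, not the paper's settings):

```python
import random

def gdpso(f, dim=5, swarm=20, iters=300, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                       # personal best positions
    pf = [f(x) for x in X]                      # personal best fitness
    g = min(range(swarm), key=lambda i: pf[i])  # global best index
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # constriction-factor velocity and position update
                V[i][d] = 0.729 * (V[i][d]
                          + 1.49445 * rng.random() * (P[i][d] - X[i][d])
                          + 1.49445 * rng.random() * (P[g][d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
            # the GDPSO idea: Gaussian disturbance on the personal best,
            # kept only if it improves (sigma = 0.1 is an assumption)
            trial = [p + rng.gauss(0, 0.1) for p in P[i]]
            ft = f(trial)
            if ft < pf[i]:
                P[i], pf[i] = trial, ft
            if pf[i] < pf[g]:
                g = i
    return P[g], pf[g]

sphere = lambda x: sum(v * v for v in x)
best, fit = gdpso(sphere)
```

    Because the perturbed personal best is accepted only when it improves, the disturbance can pull particles out of a stagnant basin without ever degrading the swarm's memory.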

    Global weighted sparse locality preserving projection
    LIN Kezheng CHENG Weiyue
    2014, 34(3):  760-762.  DOI: 10.11772/j.issn.1001-9081.2014.03.0760
    Abstract ( )   PDF (556KB) ( )
    Related Articles | Metrics

    For the problems of long runtime and ignoring the differences between sample classes, the paper put forward an algorithm called Global Weighted Sparse Locality Preserving Projection (GWSLPP) based on Sparsity Preserving Projection (SPP). The algorithm gave samples good discriminative ability while maintaining their sparse reconstruction relations. It processed the samples through sparse reconstruction, then projected them and maximized the between-class divergence; finally it obtained the projection and classified the samples. Experiments were conducted on the FERET and YALE face databases. The experimental results show that the GWSLPP algorithm is superior to the Locality Preserving Projection (LPP), SPP and FisherFace algorithms in both execution time and recognition rate: the execution time is only 25s and the recognition rate reaches more than 95%. The experimental data prove the effectiveness of the algorithm.

    Application of fuzzy integral fusion of multiple decision trees into commercial bank credit management system
    FU Yue PAN Shiying WANG Jianling
    2014, 34(3):  763-766.  DOI: 10.11772/j.issn.1001-9081.2014.03.0763
    Abstract ( )   PDF (687KB) ( )
    Related Articles | Metrics

    In order to improve the credit risk assessment level of a data-mining-based commercial bank credit management system, a model of multiple decision trees fused by the Choquet fuzzy integral (MTCFF) was applied to the system. The basic idea was to mine classified customer data with decision trees, form different trees and rules under different pruning degrees, detect unclassified customer data with the rules of the different decision trees, and then nonlinearly combine the results of the multiple trees by the Choquet fuzzy integral to reach the best decision. Experiments on the German credit dataset from the UCI repository show that Choquet fuzzy integral fusion is superior to a single decision tree in classification accuracy, and also superior to other linear fusion methods; the Choquet fuzzy integral likewise outperforms the Sugeno fuzzy integral.
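    The fusion step rests on the discrete Choquet integral, which can be sketched as follows (the fuzzy measure values and tree scores here are invented for illustration; with an additive measure the integral reduces to a weighted average):

```python
def choquet(scores, mu):
    # discrete Choquet integral of classifier scores with respect to a
    # fuzzy measure mu defined on subsets (frozensets) of the classifiers
    items = sorted(scores, key=scores.get)      # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for clf in items:
        # each score increment is weighted by the measure of the set of
        # classifiers whose score is at least that high
        total += (scores[clf] - prev) * mu[frozenset(remaining)]
        prev = scores[clf]
        remaining.discard(clf)
    return total

# two hypothetical decision trees scoring one loan applicant
scores = {'tree1': 0.2, 'tree2': 0.6}
mu = {frozenset(): 0.0, frozenset({'tree1'}): 0.4,
      frozenset({'tree2'}): 0.6, frozenset({'tree1', 'tree2'}): 1.0}
```

    A non-additive measure (mu of the pair above or below the sum of the singletons) is what lets the fusion model interaction between the trees, which a linear combination cannot express.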

    Modeling and solving of resource allocation problem in automatic guided vehicle system
    WANG Wenrui WU Yaohua
    2014, 34(3):  767-770.  DOI: 10.11772/j.issn.1001-9081.2014.03.0767
    Abstract ( )   PDF (768KB) ( )
    Related Articles | Metrics

    For the resource allocation problem of an automatic guided vehicle system, which comprises both task assignment and route scheduling, a model based on the automatic warehouse input/output system was built, and an algorithm combining the framework of Particle Swarm Optimization (PSO) with a conflict-free routing process was proposed to overcome the shortcomings of merely assigning tasks in sequence. Firstly, the iteration process searched for the optimal task assignment scheme; then conflict-free routing was employed to obtain the resource allocation result. Constraints such as time windows, workload balance and conflict-free routes were added to the solution evaluation mechanism to ensure that the final scheme was feasible. Through the simulation of an automatic input system, the traditional scheduling algorithm and the new algorithm were compared. The proposed algorithm saves 10% of the total travelling distance and balances task assignment better, which means the proposed solution can improve the efficiency of the whole system.

    Research on asynchronous reading-imitation brain-computer interface
    CAO Qiaoling GUAN Jinan
    2014, 34(3):  771-774.  DOI: 10.11772/j.issn.1001-9081.2014.03.0771
    Abstract ( )   PDF (573KB) ( )
    Related Articles | Metrics

    Reading-imitation Brain-Computer Interfaces (BCI) work in synchronous mode, but in practice users want to switch freely between the "work" state and the "idle" state, namely asynchronously. Therefore, closing the eyes for a fixed time was proposed as the switch between the two states. Firstly, an experimental scheme was put forward; then the features of the Electroencephalography (EEG) signal were extracted in the time and frequency domains respectively. Time-domain features were classified by Support Vector Machine (SVM) and the K-means algorithm, and frequency-domain features were classified by SVM. The highest recognition rates in the time domain were 95% and 89.17%, with average classification times of 1.89s and 0.11s respectively; the highest and average recognition rates in the frequency domain were 86.25% and 81.875% respectively. The experimental results show that this scheme can achieve the goal of switching between the two states freely.

    Thermal comfort prediction model based on improved particle swarm optimization-back propagation neural network
    ZHANG Ling WANG Ling WU Tong
    2014, 34(3):  775-779.  DOI: 10.11772/j.issn.1001-9081.2014.03.0775
    Abstract ( )   PDF (734KB) ( )
    Related Articles | Metrics

    Aiming at the problem that thermal comfort prediction, a complicated nonlinear process, cannot be applied directly to the real-time control of air conditioning, this paper proposed a thermal comfort prediction model based on an improved Particle Swarm Optimization-Back Propagation (PSO-BP) neural network algorithm. By using the PSO algorithm to optimize the initial weights and thresholds of the BP neural network, the prediction model alleviated the problems that the traditional BP algorithm converges slowly and is sensitive to the initial network values. Meanwhile, for the shortcomings of the standard PSO algorithm, such as premature convergence and weak local search capability, this paper put forward some improvement strategies to further enhance the capabilities of the PSO-BP neural network. The experimental results show that the thermal comfort prediction model based on the improved PSO-BP neural network algorithm converges faster and has higher prediction accuracy than the traditional BP model and the standard PSO-BP model.

    GIS-based EIA visualization of complex river course using Cartesian cut cell method
    WU Peining
    2014, 34(3):  780-784.  DOI: 10.11772/j.issn.1001-9081.2014.03.0780
    Abstract ( )   PDF (915KB) ( )
    Related Articles | Metrics

    The numerical simulation of pollution dispersion within a complex river course and its visualization on a Geographic Information System (GIS) are very important to surface water Environmental Impact Assessment (EIA), but many difficulties remain, such as grid generation, the numerical simulation model of pollution dispersion, and the visualization of results. To resolve these problems of river pollution calculation and GIS-based visualization for the point-source side-discharge situation, ways of surface water EIA visualization based on the Cartesian cut cell method were presented. The Cartesian cut cell method was applied to grid generation: by using a cut-cell intersection-point chasing algorithm and choosing the background Cartesian grid at the river course boundary, Cartesian grids of the complex river course were achieved, and a self-adaptive grid refinement algorithm with a steady-state pollution decay model was proposed. Based on the unstructured Cartesian grids, a point-source side-discharge river water pollution simulation model was provided, and an area filling algorithm was developed to visualize the EIA results. Through the visualization and analysis of a river pollution EIA example, the practicability and efficiency of the proposed methods were confirmed.

    Tone mapping algorithm based on multi-scale decomposition
    HU Qingxin CHEN Yun FANG Jing
    2014, 34(3):  785-789.  DOI: 10.11772/j.issn.1001-9081.2014.03.0785
    Abstract ( )   PDF (1008KB) ( )
    Related Articles | Metrics

    A new Tone Mapping (TM) algorithm based on multi-scale decomposition was proposed to display High Dynamic Range (HDR) images on ordinary display devices. The algorithm decomposed an HDR image into multiple scales using a Local Edge-Preserving (LEP) filter, which smooths the details of the image effectively while still retaining the salient edges. Then a parameterized dynamic range compression function was proposed according to the characteristics of the decomposed layers and the compression requirements. By changing the parameters, the coarse-scale layer was compressed and the fine-scale layer was boosted, which compressed the dynamic range of the image while boosting the details. Finally, by restructuring the image and restoring the color, the mapped image had good visual quality. The experimental results demonstrate that the proposed method is better than the algorithms proposed by Gu et al. (GU B, LI W J, ZHU M Y, et al. Local edge-preserving multiscale decomposition for high dynamic range image tone mapping [J]. IEEE Transactions on Image Processing, 2013, 22(1): 70-79) and Yeganeh et al. (YEGANEH H, WANG Z. Objective quality assessment of tone-mapped images [J]. IEEE Transactions on Image Processing, 2013, 22(2): 657-667) in naturalness, structural fidelity and quality assessment; moreover, it avoids the halo artifacts that are a common problem of local tone mapping algorithms. The algorithm can be used for tone mapping of HDR images.

    Texture description based on local spectrum energy self-similarity matrix
    YANG Hongbo HOU Xia
    2014, 34(3):  790-796.  DOI: 10.11772/j.issn.1001-9081.2014.03.0790
    Abstract ( )   PDF (1166KB) ( )
    Related Articles | Metrics

    To deal with texture detection and classification, a new texture description method based on the self-similarity matrix of the local spectrum energy of a Gabor filter bank output was presented. Firstly, the local frequency band and orientation information of a texture template were obtained by convolving the template with a polar LogGabor filter bank. Then the self-similarities of different local frequency patches were measured and stored in a self-similarity matrix, which was defined as the texture descriptor in this paper. Finally, this texture descriptor could be used in texture detection and classification. Because it reflects the self-similarity level of different bands and orientations, the descriptor depends less on the Gabor filter bank parameters. In the tests, this descriptor produced better detection results than the Homogeneous Texture Descriptor (HTD) and other self-similarity descriptors, and the accuracy of multi-texture classification reached 91%. The experimental results demonstrate that the self-similarity matrix of the local power spectrum is an effective texture descriptor. The output of texture detection and classification can be widely used in later texture analysis tasks, such as texture segmentation and recognition.

    Improved algorithm for no-reference quality assessment of blurred image
    LI Honglin ZHANG Qi YANG Dawei
    2014, 34(3):  797-800.  DOI: 10.11772/j.issn.1001-9081.2014.03.0797
    Abstract ( )   PDF (629KB) ( )
    Related Articles | Metrics

    A fast and effective no-reference quality assessment algorithm for blurred images, based on improving the classic re-blur processing algorithm, was proposed to address the high computational cost of traditional methods. Taking the human visual system into account, the proposed algorithm used the local variance to select the image blocks that humans are interested in instead of the entire image, constructed blurred image blocks through a low-pass filter, and calculated the difference of adjacent pixels between the original and the blurred image blocks to obtain the objective quality evaluation parameter of the original image. The simulation results show that, compared with the traditional method, the proposed algorithm is more consistent with subjective evaluation (the Pearson correlation coefficient increases by 0.01) and less complex, with half the running time.
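    The re-blur comparison at the heart of such a metric can be sketched as follows (a hedged illustration: a mean filter stands in for the low-pass step, and random blocks stand in for the variance-selected salient blocks):

```python
import numpy as np

def mean_blur(block, k=5):
    # simple k-tap mean filter along rows as the low-pass "re-blur" step
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'),
                               1, block)

def reblur_score(block):
    # re-blur the block and measure how much adjacent-pixel variation is
    # lost: close to 1 for a sharp block, lower for an already-blurred one
    blurred = mean_blur(block)
    d_orig = np.abs(np.diff(block, axis=1))
    d_blur = np.abs(np.diff(blurred, axis=1))
    lost = np.maximum(0.0, d_orig - d_blur).sum()
    return float(lost / max(d_orig.sum(), 1e-12))

rng = np.random.default_rng(0)
sharp = rng.random((64, 256))    # a detail-rich "interesting" block
pre = mean_blur(sharp)           # simulated already-blurred block
```

    A sharp block loses most of its adjacent-pixel variation when re-blurred, while a blurred block barely changes; restricting the score to a few high-variance blocks is what buys the reported halving of the running time.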

    New image denoising method based on rational-order differential
    JIANG Wei LI Xiaolong YANG Yongqing ZHANG Heng
    2014, 34(3):  801-805.  DOI: 10.11772/j.issn.1001-9081.2014.03.0801
    Abstract ( )   PDF (792KB) ( )
    Related Articles | Metrics

    The effect of the existing Total Variation (TV) method for image denoising is not ideal: it is not good at keeping the characteristics of image edges and texture details. A new image denoising method based on rational-order differentials was proposed in this paper. First, the advantages and disadvantages of the present TV and fractional-differential image denoising methods were discussed in detail. Then, by combining the TV model with fractional differential theory, the new denoising method was obtained, and a rational-order differential mask in eight directions was derived. The experimental results demonstrate that, compared with the existing denoising methods, the Signal-to-Noise Ratio (SNR) is increased by about 2%, and the method effectively retains the respective advantages of the integer-order and fractional-order differential methods. In significantly improving the high-frequency components of the image and effectively keeping the details of the image texture, it is an effective and superior image denoising method; it is also an effective method for edge detection.

    Copy-paste image forgery blind detection based on mean shift
    JIAO Lixin DU Zhenglong
    2014, 34(3):  806-809.  DOI: 10.11772/j.issn.1001-9081.2014.03.0806
    Abstract ( )   PDF (684KB) ( )
    Related Articles | Metrics

    Traditional blind detection methods for image copy-paste forgery are time-consuming, computationally expensive and of low detection precision. A blind detection algorithm for copy-paste image forgery based on Mean Shift (MS) was proposed in this paper. It extracted Speeded-Up Robust Features (SURF) points and performed feature matching with the best-bin-first method to filter redundant points and preliminarily locate the copy-paste forgery regions. After applying MS, pixels with the same or similar attributes were segmented into the same region. The copy-paste regions could then be detected according to the positional dependency between a matched feature point and its MS segment, and the detection result was further refined by comparing the similarity of the edge histogram and the HSV (Hue-Saturation-Value) color histogram between the segments containing matched SURF points and their neighborhoods; regions with large similarity were included in the forged region. The experimental results show that copy-paste forgery regions are detected accurately in images with clear outlines and rich details, and the proposed algorithm can robustly and efficiently detect the copy-paste forgery regions of an image.

    Implementation algorithm of spherical screen projection system via internal projection
    CHEN Ke WU Jianping
    2014, 34(3):  810-814.  DOI: 10.11772/j.issn.1001-9081.2014.03.0810
    Abstract ( )   PDF (1019KB) ( )
    Related Articles | Metrics

    Addressing the issue of computer processing in internal spherical screen projection, an internal spherical screen projection algorithm was proposed based on virtual spherical transform and virtual fisheye lens mapping. Concerning the spherical screen output distortion caused by irregular fisheye projection, a sextic-polynomial distortion correction algorithm based on the equal-solid-angle mapping function was presented, which approximates any fisheye mapping function to eliminate the distortion; the six coefficients of the polynomial can be obtained by solving a linear algebraic equation. The experimental results show this method is able to completely eliminate the spherical screen projection distortion. Addressing the illumination distribution change stemming from the spherical screen projection, an illumination correction algorithm based on the cosine of the projection angles was also proposed. The experimental results show the illumination correction method recovers the severely altered illumination distribution to one almost identical to that of the original picture. This algorithm has theoretical instructive importance and significant practical value for the design and software development of spherical projection systems.
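    The coefficient-fitting idea can be sketched as follows (a hedged illustration: the equi-solid-angle mapping is used as the target function, and the six coefficients of a constant-free sextic polynomial are obtained by one linear least-squares solve, since r(0) = 0):

```python
import numpy as np

def f_equisolid(theta, fl=1.0):
    # equi-solid-angle fisheye mapping r = 2 f sin(theta / 2)
    return 2 * fl * np.sin(theta / 2)

# sample the mapping over the field of view
theta = np.linspace(0.05, np.pi / 2, 40)

# design matrix with terms theta^1 .. theta^6 (no constant term), so the
# six coefficients come from a single linear least-squares solve
A = np.stack([theta ** k for k in range(1, 7)], axis=1)
coef, *_ = np.linalg.lstsq(A, f_equisolid(theta), rcond=None)

def corrected_radius(t):
    # evaluate the fitted sextic polynomial at angle(s) t
    return sum(c * t ** k for k, c in enumerate(coef, start=1))
```

    The same solve works for any sampled fisheye curve, which is what lets the polynomial approximate an arbitrary (even irregular) mapping function rather than one fixed model.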

    Image resampling tampering detection based on further resampling
    LIU Yi LIU Yongben
    2014, 34(3):  815-819.  DOI: 10.11772/j.issn.1001-9081.2014.03.0815
    Abstract ( )   PDF (771KB) ( )
    Related Articles | Metrics

    Resampling is a typical operation in image forgery. Since most existing resampling tampering detection algorithms for JPEG images are insufficiently robust and inefficient at estimating the zoom factor accurately, an image resampling detection algorithm via further resampling was proposed. First, a JPEG-compressed image was resampled again with a scaling factor less than 1 to reduce the effects of the JPEG compression applied when the image file was saved. Then the periodicity of the second derivative of a resampled signal was exploited for resampling detection. The experimental results show that the proposed algorithm is robust to JPEG compression; in this manner, the real zoom factor can be accurately estimated, which is useful for resampling detection when a synthesized image is formed from resampled originals with different scaling factors.
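    The periodicity the detector relies on can be demonstrated as follows (a hedged 1-D illustration: linear interpolation stands in for the resampling kernel, and the zero pattern in the second derivative is the trace a detector looks for in the spectrum):

```python
import numpy as np

rng = np.random.default_rng(2)
signal = rng.random(100)                 # "original" scanline

# upsample by a factor of 2 with linear interpolation (a resampling step)
up = np.interp(np.arange(0, 99.5, 0.5), np.arange(100), signal)

# every interpolated sample is the exact average of its two neighbours,
# so the second derivative vanishes there -- a strictly periodic pattern
# (period 2 here) whose frequency reveals the zoom factor
d2 = np.abs(np.diff(up, n=2))
```

    An unresampled signal shows no such regular zeros, so the periodic dips (or the corresponding spectral peak of |d2|) separate resampled from original content.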

    Single image dehazing method based on exposure fusion
    TANG Jianbo ZHU Guibin WANG Tian GUO Yu JIANG Tie
    2014, 34(3):  820-823.  DOI: 10.11772/j.issn.1001-9081.2014.03.0820
    Abstract ( )   PDF (746KB) ( )
    Related Articles | Metrics

    Outdoor images captured in bad weather often have poor visibility and contrast. A simple and effective algorithm was designed to remove haze. Firstly, spatial high-pass filtering was used to suppress the low-frequency component and enhance edge details; then a contrast-stretching transformation was used to acquire an image with a high dynamic range. Finally, an exposure fusion method based on the Laplacian pyramid was utilized to fuse the two results above into the defogged image. The experimental results show that the proposed method performs well on enhancing images degraded by fog, dust or underwater conditions, and it is appropriate for real-time applications.

    Fast convergent stereo matching algorithm based on sum of absolute difference and belief propagation
    ZHANG Lihong HE Shucheng
    2014, 34(3):  824-827.  DOI: 10.11772/j.issn.1001-9081.2014.03.0824
    Abstract ( )   PDF (741KB) ( )
    Related Articles | Metrics

    Concerning the high computational complexity and low efficiency of traditional stereo matching methods based on Belief Propagation (BP), a fast convergent stereo matching algorithm based on the Sum of Absolute Differences (SAD) algorithm and the BP algorithm was proposed. Firstly, SAD matching was used as a similarity decision criterion to determine the initial disparity map. When the energy function was constructed, the initial disparity map was used as a constraint of the function, and the disparity distribution was optimized by BP. When calculating the confidence level of each pixel during the optimization, the algorithm only used the messages passed from neighboring pixels inside an adaptive support window, ignoring the impact of pixels beyond the window; thus the BP nodes were reduced and the convergence speed was improved. The experimental results show that the proposed algorithm reduces matching computation time by 50%-60% and improves efficiency while maintaining matching accuracy, laying a foundation for real-time applications.
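    The SAD criterion used for the initial disparity map can be sketched as follows (a minimal single-window illustration on a synthetic pair with a uniform known shift; the window size and disparity range are arbitrary choices, not the paper's):

```python
import numpy as np

def sad_disparity(left, right, x, y, win=3, max_d=8):
    # choose the disparity whose right-image window has the minimum Sum
    # of Absolute Differences against the window around (x, y) in the
    # left image
    h = win // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    costs = [np.abs(ref - right[y - h:y + h + 1,
                                x - d - h:x - d + h + 1]).sum()
             for d in range(max_d + 1)]
    return int(np.argmin(costs))

rng = np.random.default_rng(3)
right = rng.random((20, 40))
left = np.roll(right, 5, axis=1)   # synthetic pair: uniform disparity 5
```

    In the full algorithm this per-pixel winner seeds the disparity map that constrains the BP energy function, so BP only refines rather than searches from scratch.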

    Similarity measure method of Gaussian mixture model by integrating Kullback-Leibler divergence and earth mover's distance
    YU Yan
    2014, 34(3):  828-832.  DOI: 10.11772/j.issn.1001-9081.2014.03.0828
    Abstract ( )   PDF (842KB) ( )
    Related Articles | Metrics

    To improve the computational efficiency and effectiveness of similarity measures between two Gaussian Mixture Models (GMM), a new measure method integrating the symmetrized Kullback-Leibler Divergence (KLD) and the earth mover's distance was proposed. At first, the KL divergence between the Gaussian components of the two GMMs to be compared was computed and symmetrized to construct the ground distance matrix. Then, the earth mover's distance between the two GMMs was computed using linear programming and used as the GMM similarity measure. The new measure method was tested in color image retrieval. The experimental results show that the proposed method is more effective and efficient than traditional measure methods.
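    The ground-distance construction can be sketched as follows (a hedged 1-D illustration using the closed-form Gaussian KL divergence; the final earth mover's step, a small linear program over this matrix with the mixture weights as supplies and demands, is omitted here):

```python
import math

def kl_gauss(m1, s1, m2, s2):
    # closed-form KL divergence between two 1-D Gaussians N(m1, s1^2),
    # N(m2, s2^2)
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

def sym_kl(m1, s1, m2, s2):
    # symmetrized KL, used as the ground distance between components
    return kl_gauss(m1, s1, m2, s2) + kl_gauss(m2, s2, m1, s1)

# two toy GMMs as (weight, mean, std) component lists
gmm_a = [(0.6, 0.0, 1.0), (0.4, 3.0, 0.5)]
gmm_b = [(0.5, 0.1, 1.0), (0.5, 2.9, 0.6)]

# ground-distance matrix between the components; the earth mover's
# distance over D (weights as supplies/demands) gives the final measure
D = [[sym_kl(m1, s1, m2, s2) for (_, m2, s2) in gmm_b]
     for (_, m1, s1) in gmm_a]
```

    Because the component-wise KL is closed-form, only the small transport problem needs numerical optimization, which is the source of the efficiency gain over sampling-based GMM distances.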

    Vision-based gesture recognition method and its implementation on digital signal processor
    ZHANG Yi LIU Yuran LUO Yuan
    2014, 34(3):  833-836.  DOI: 10.11772/j.issn.1001-9081.2014.03.0833
    Abstract ( )   PDF (762KB) ( )
    Related Articles | Metrics

    Existing gesture recognition algorithms perform inefficiently on embedded devices because of their high complexity. A shape-feature-based algorithm using mostly fixed-point arithmetic was proposed, which used the largest internal circle algorithm and the circle-cutting algorithm to obtain the features. The method extracted the center of the palm by finding the largest circle inside the palm, and extracted the fingertips by drawing circles at the edge of the hand. Finally, gestures were classified and recognized according to the number of fingers, the orientation and the position of the palm. The algorithm was improved and ported to a Digital Signal Processor (DSP). The experimental results show that the proposed method can adapt to the hands of different people and is well suited to DSP. Compared with other shape-based algorithms, the average recognition rate increases by 1.6%-8.6%, and the processing speed increases by 2%. Therefore, the proposed method facilitates the implementation of embedded gesture recognition systems and lays a foundation for them.

    Pedestrian segmentation based on Graph Cut with shape prior
    HU Jianghua WANG Wenzhong LUO Bin TANG Jin
    2014, 34(3):  837-840.  DOI: 10.11772/j.issn.1001-9081.2014.03.0837
    Abstract ( )   PDF (640KB) ( )
    Related Articles | Metrics

    Most variants of the Graph Cut algorithm do not impose any shape constraint on the segmentation, making it difficult to obtain semantically valid results; for pedestrian segmentation, this leads to segmented objects with non-human shapes. An improved Graph Cut algorithm combining shape priors with a discriminatively learned appearance model was proposed in this paper to segment pedestrians in static images. In this approach, a large number of real pedestrian silhouettes were used to encode the a priori shape of pedestrians, and a hierarchical pedestrian template model was built to reduce matching time, which biases the segmentation results toward human-like shapes. A discriminative appearance model of the pedestrian was also proposed to better distinguish persons from the background. The experimental results verify the improved performance of this approach.

    Affine-invariant shape matching algorithm based on modified multi-scale product Laplacian of Gaussian operator
    DU Haijing XIAO Yanghui ZHU Dan TONG Xinxin
    2014, 34(3):  841-845.  DOI: 10.11772/j.issn.1001-9081.2014.03.0841
    Abstract ( )   PDF (830KB) ( )
    Related Articles | Metrics

    In most situations, the geometric transforms of an object during imaging can be represented by an affine transform; therefore, a shape matching method using corners was proposed. Firstly, the corners of the contour were detected with a Multi-scale Product Laplacian of Gaussian (MPLoG) operator, and feature points were adaptively extracted based on corner intervals to obtain the key features of the shape. To cope with affine transforms, the similarity of two shapes was represented and measured on the Grassmann manifold Gr(2,n). Finally, iterative sequence-shift matching was adopted to overcome the dependency of the Grassmann manifold representation on the starting point, achieving shape matching. The proposed algorithm was tested on a shape database. The simulation results show that the proposed method achieves shape recognition and retrieval effectively, and it has strong robustness against noise.

    Sheep body size measurement based on computer vision
    JIANG Jie ZHOU Lina LI Gang
    2014, 34(3):  846-850.  DOI: 10.11772/j.issn.1001-9081.2014.03.0846
    Abstract ( )   PDF (964KB) ( )
    Related Articles | Metrics

    Body size parameters are important indicators for evaluating the growth status of sheep, and how to measure them with non-stress instruments is an urgent problem in the sheep breeding process. This paper introduced machine vision methods to measure these parameters. The sheep body in a complex environment was detected by a gray-based background subtraction method and the chromaticity invariance principle. By virtue of a grid method, the contour envelope of the sheep body was extracted. After analyzing the contour sequence with the D-P algorithm and the Heron-Qin Jiushao formula, the point with maximum curvature in the contour was acquired and chosen as the measurement point at the hip of the sheep. Based on the above information, the other three measurement points were obtained using a four-point method and, combining the spatial resolution, the body size parameters of the sheep were acquired, achieving contactless measurement. The experimental results show that the proposed method can effectively extract the sheep body in a complex environment, and the measurement point at the hip and the height of the sheep can be stably determined. Due to the complexity of the ambient light, some problems still exist when determining the shoulder points.

    Formal modeling and verification of train safety distance control
    HU Xiaohui XIAO Zhiqi CHEN Yong LI Xin
    2014, 34(3):  851-856.  DOI: 10.11772/j.issn.1001-9081.2014.03.0851
    Abstract ( )   PDF (769KB) ( )
    Related Articles | Metrics

    With the rapid development of Chinese railways, the requirements for the running safety of trains are getting higher. This paper used the Event-B formal modeling approach to study high-speed train safety distance control. With the support of the tool Rodin and combined with multi-Agent theory, a safety distance control model of multi-train operation was constructed, and the modeling and verification of minimum-interval tracking control for high-speed trains were studied. The simulation results show that combining the formal verification method of Event-B with a Multi-Agent System (MAS) is meaningful, so the method has practical significance for the modeling and verification of complex systems.

    Runtime error site analysis tool based on variable tracking
    ZHANG Tianjiong WANG Zheng
    2014, 34(3):  857-860.  DOI: 10.11772/j.issn.1001-9081.2014.03.0857
    Abstract ( )   PDF (574KB) ( )  
    Related Articles | Metrics

    A runtime error is generated in the course of a program's dynamic execution. When such an error occurs, traditional debugging tools are needed to analyze its cause; however, because the real execution environment of some exceptions and multi-threaded programs cannot be reproduced, traditional debugging is often ineffective. If variable information can be captured during program execution, the runtime error site can be caught and used as a basis for analyzing the cause of the error. In this paper, a technique for capturing the runtime error site based on variable tracking was proposed; it can capture specific variable information according to user needs and effectively improves the flexibility of access to variable information. Based on this technique, a tool named Runtime Fault Site Analysis (RFST) was implemented, which can be used to analyze error causes and provides the error site as well as aided analysis approaches.
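    The abstract does not give RFST's implementation; as a hedged sketch of the underlying idea, Python's tracing hook can record user-selected variables at each executed line, so that the last snapshot serves as the error site when an exception occurs. All names here (`VariableTracker`, `buggy`) and the choice of Python are illustrative assumptions, not the paper's tool.

```python
import sys

class VariableTracker:
    """Record values of user-selected variables at each executed line,
    so the most recent snapshot is available when an error occurs."""

    def __init__(self, watched):
        self.watched = set(watched)   # variable names chosen by the user
        self.snapshots = []           # (filename, lineno, {name: value})

    def _trace(self, frame, event, arg):
        if event == "line":
            values = {name: frame.f_locals[name]
                      for name in self.watched if name in frame.f_locals}
            if values:
                self.snapshots.append(
                    (frame.f_code.co_filename, frame.f_lineno, values))
        return self._trace

    def run(self, func, *args):
        sys.settrace(self._trace)
        try:
            return func(*args)
        finally:
            sys.settrace(None)

def buggy(n):
    total = 0
    for i in range(n):
        total += 10 // (2 - i)   # raises ZeroDivisionError when i == 2
    return total

tracker = VariableTracker({"i", "total"})
try:
    tracker.run(buggy, 5)
except ZeroDivisionError:
    # The last snapshot is the error site: variable values just before failure
    site = tracker.snapshots[-1][2]
```

Only the watched variables are captured, which mirrors the abstract's point that capturing by user need is more flexible than dumping the whole state.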

    Formal description of dynamic construction method for square Hmong language characters
    MO Liping ZHOU Kaiqing
    2014, 34(3):  861-864.  DOI: 10.11772/j.issn.1001-9081.2014.03.0861
    Abstract ( )   PDF (746KB) ( )  
    Related Articles | Metrics

    Focusing on the problem of how to generate and display square Hmong language characters in computer, a dynamic construction method for square Hmong language characters was proposed. Firstly, the basic principles of the proposed method were presented. Then, the operators for generating square Hmong language characters were discussed. Finally, the transformation operations were explained based on predicate rules. The proposed method realized the dynamic construction of square Hmong language characters by storing the components and isolated characters and then combining displayed characters in up-down, left-right and half-include modes. This technique provides important support for glyph component extraction and the automatic generation of compound characters.

    Improvement of Boyer-Moore string matching algorithm
    HAN Guanghui ZENG Cheng
    2014, 34(3):  865-868.  DOI: 10.11772/j.issn.1001-9081.2014.03.0865
    Abstract ( )   PDF (489KB) ( )  
    Related Articles | Metrics

    A new variant of the Boyer-Moore (BM) algorithm was proposed on the basis of analyzing the BM algorithm. The basic idea of the improvement is to build the match heuristic (i.e. good-suffix rule) for the expanded pattern Pa in the preprocessing phase, where P is the pattern and a is an arbitrary character of the alphabet. This both increases the length of the matched suffix and subsumes Sunday's occurrence heuristic (i.e. bad-character rule), so a larger shift distance of the scanning window is obtained. Theoretical analyses show that the improved algorithm has linear time complexity even in the worst case, sublinear behavior in the average case, and space complexity of O(m(σ+1)). The experimental results also show that the performance of the improved algorithm is significantly better, especially for small alphabets.
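    For reference, a minimal sketch of Sunday's occurrence heuristic, the baseline bad-character rule that the improved algorithm subsumes (this is the classical rule, not the proposed variant itself):

```python
def sunday_search(text, pattern):
    # Sunday's occurrence heuristic: on mismatch, look at the character just
    # past the current window and shift so it aligns with its last occurrence
    # in the pattern (or shift the whole window past it if it does not occur).
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # distance from each character's last occurrence to the end of the pattern
    shift = {c: m - i for i, c in enumerate(pattern)}
    i = 0
    while i + m <= n:
        if text[i:i + m] == pattern:
            return i
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)
    return -1
```

The improved algorithm in the paper gets the same bad-character information implicitly by building the good-suffix table for the extended pattern Pa, which is why its shifts are at least as large as Sunday's.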

    Anti-money laundering management model of central bank high-value payment system based on limited information fusion
    WANG Zheng PENG Jialing FU Lili ZHANG Jialing
    2014, 34(3):  869-872.  DOI: 10.11772/j.issn.1001-9081.2014.03.0869
    Abstract ( )   PDF (774KB) ( )  
    Related Articles | Metrics

    To deal with the problem of inter-bank money laundering, a new anti-money laundering model combined with limited information management methods was presented for the central bank High-Value Payment System (HVPS) architecture. The proposed model utilized distributed monitor nodes to trace money laundering crimes and used an event description method to record the crime procedures. A new grey relational information fusion algorithm was designed to integrate multi-monitor information, and an improved power spectrum algorithm was proposed for fast data analysis and money laundering recognition. The simulation results show that the model outperforms others in processing performance and anti-money laundering recognition accuracy: specifically, it improves money-laundering client coverage by 12%, the discovery rate by 12% and the recall rate by 5%.
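    The fusion step builds on grey relational analysis. A sketch of the classical grey relational grade, which scores how closely each monitor's observation sequence tracks a reference sequence, is given below; this is the standard formula, not necessarily the paper's improved fusion algorithm, and the names are illustrative.

```python
def grey_relational_grades(reference, sequences, rho=0.5):
    # Grey relational analysis: for each comparison sequence, compute the
    # pointwise relational coefficients against the reference and average
    # them (rho is the distinguishing coefficient, conventionally 0.5).
    diffs = [[abs(r - x) for r, x in zip(reference, seq)] for seq in sequences]
    d_min = min(min(row) for row in diffs)
    d_max = max(max(row) for row in diffs)
    grades = []
    for row in diffs:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

A sequence identical to the reference gets the maximum grade of 1, so the grades can serve directly as fusion weights across monitor nodes.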

    Financial failure prediction using truncated Hinge loss support vector machine with smoothly clipped absolute deviation penalty
    LIU Zunxiong HUANG Zhiqiang LIU Jiangwei CHEN Ying
    2014, 34(3):  873-878.  DOI: 10.11772/j.issn.1001-9081.2014.03.0873
    Abstract ( )   PDF (878KB) ( )  
    Related Articles | Metrics

    Aiming at the problems that the traditional Support Vector Machine (SVM) classifier is sensitive to outliers, produces a large number of Support Vectors (SV) and yields a non-sparse separating hyperplane parameter, the Truncated hinge loss SVM with Smoothly Clipped Absolute Deviation (SCAD) penalty (SCAD-TSVM) was put forward and used to construct a financial early-warning model, and an iterative updating algorithm was proposed to solve the SCAD-TSVM model. Experiments were conducted on the financial data of A-share manufacturing companies listed on the Shanghai and Shenzhen stock markets. Compared with the T-2 and T-3 models constructed by SVM with L1-norm penalty (L1-SVM), SVM with SCAD penalty (SCAD-SVM) and Truncated hinge loss SVM (TSVM), the T-2 and T-3 models constructed by SCAD-TSVM had the best sparseness and the highest prediction accuracy, and their average prediction accuracies with different numbers of training samples were higher than those of the L1-SVM, SCAD-SVM and TSVM algorithms.
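    The SCAD penalty that distinguishes these models from L1-penalized SVM has a standard piecewise form (Fan and Li's definition; the tuning constant a = 3.7 used below is the conventional choice, assumed here rather than taken from the paper):

```python
def scad_penalty(beta, lam, a=3.7):
    # SCAD penalty: behaves like the L1 penalty lam*|beta| near zero,
    # transitions quadratically, and is constant for large |beta|, which
    # gives sparse yet nearly unbiased coefficient estimates.
    b = abs(beta)
    if b <= lam:
        return lam * b
    if b <= a * lam:
        return (2 * a * lam * b - b * b - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2
```

Because the penalty flattens out for large coefficients, large true coefficients are not shrunk the way L1 shrinks them, which is the source of the sparseness-plus-accuracy advantage reported in the abstract.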

    Design and implementation of simulation platform for urban rail transit system
    LI Shaowei CHEN Yongsheng
    2014, 34(3):  879-883.  DOI: 10.11772/j.issn.1001-9081.2014.03.0879
    Abstract ( )   PDF (704KB) ( )  
    Related Articles | Metrics

    To evaluate the operational efficiency and emergency strategies of rail transit under different passenger flow conditions, and to simulate and analyze the emergency strategies quantitatively, a simulation platform for urban rail transit was proposed. The system modeled four main objects: the kinetic model of the train, the Automatic Train Control (ATC) system, the trackside equipment and the moving block system. On this basis, the whole simulation system was designed and implemented on the VC++ development platform combined with computer network and database technology. Finally, train operation could be automatically simulated on this platform, driven by the train timetable. The system was assessed using data from Shanghai Metro Line 8, and the simulation results show good consistency with the real timetable.

    Parking guidance system based on ZigBee and geomagnetic sensor technology
    YUE Xuejun LIU Yongxin WANG Yefu CHEN Shurong LIN Da QUAN Dongping YAN Yingwei
    2014, 34(3):  884-887.  DOI: 10.11772/j.issn.1001-9081.2014.03.0884
    Abstract ( )   PDF (601KB) ( )  
    Related Articles | Metrics

    Concerning the phenomenon that common parking services cannot satisfy the increasing demand of private vehicle owners, an intelligent parking guidance system based on a ZigBee network and geomagnetic sensors was designed. Real-time vehicle positions and related traffic information were collected by geomagnetic sensors around parking lots and uploaded to the central server via the ZigBee network; outdoor Liquid Crystal Display (LCD) screens controlled by the central server displayed information on available parking places. The guidance strategy was divided into four levels, providing clear and effective information to drivers. The experimental results show that the distance detection accuracy of the geomagnetic sensors is within 0.4 m, and the packet loss rate of the wireless network within a range of 150 m can be as low as 0%. This system can provide a solution for better parking service in intelligent cities.

    Compensation method for abnormal temperature data of automatic weather station
    ZHANG Yingchao GUO Dong XIONG Xiong HE Lei
    2014, 34(3):  888-891.  DOI: 10.11772/j.issn.1001-9081.2014.03.0888
    Abstract ( )   PDF (656KB) ( )  
    Related Articles | Metrics

    To ensure the integrity and accuracy of meteorological data, three types of membership functions were proposed for automatic weather station daily average temperature data containing discontinuous noise. A compensation algorithm of Fuzzy Support Vector Machine (FSVM) based on the root-mean-square membership function was designed, and the corresponding compensation model was established. Finally, the FSVM method was compared with the traditional Support Vector Machine (SVM) method. The experimental results show that the proposed algorithm has good recognition capability for noise points. After interpolation, the data precision was 1.4℃, better than the 1.6℃ of the traditional SVM method; moreover, the precision on the whole data set was 1.13℃, superior to the 1.42℃ of the traditional SVM method.

    Supervised learning for visibility combining features in spatial domain with those in frequency domain
    XU Xi LI Yan HAO Weihong
    2014, 34(3):  892-897.  DOI: 10.11772/j.issn.1001-9081.2014.03.0892
    Abstract ( )   PDF (958KB) ( )  
    Related Articles | Metrics

    Atmospheric visibility not only impacts marine, land and air transportation and residents' travel, but is also a leading indicator of air quality. The existing visibility estimators based on image processing suffer from fixed computational formulae, poor stability and stringent requirements on the application environment. The proposed visibility measurement method with supervised learning extracted features related to image edges in the spatial domain and features of energy distribution in the frequency domain directly from observed scene images to constitute a high-dimensional feature vector, requiring no manual object installation or modeling of the observed scene. It trained a Support Vector Regression (SVR) model on the samples most similar to the test image, chosen by k-Nearest Neighbors (kNN), dynamically establishing the learning model between image features and visibility while hiding the various impact factors of visibility inside the model. The experimental results on natural scenes show that the accuracy of the method can reach 96.29%; moreover, it has good stability, real-time performance and simplicity of operation, which makes it suitable for large-scale deployment.
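    The kNN-then-local-model step can be sketched as follows. For brevity this sketch substitutes a distance-weighted average for the SVR that the method actually trains on the selected neighbors; the function name and data layout are assumptions.

```python
import math

def knn_local_predict(train_X, train_y, query, k=3):
    # Pick the k training samples nearest to the query in feature space.
    # The paper trains an SVR on exactly these neighbors; here a simple
    # inverse-distance-weighted average stands in for that local model.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    neighbors = dists[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in neighbors]
    return sum(w * y for w, (_, y) in zip(weights, neighbors)) / sum(weights)
```

The key design point survives the substitution: the model is rebuilt per query from similar samples only, which is what lets the visibility-to-feature relationship adapt to the observed scene.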

    Wind shear recognition based on improved genetic algorithm and wavelet moment
    JIANG Lihui CHEN Hong ZHUANG Zibo XIONG Xinglong YU Lan
    2014, 34(3):  898-901.  DOI: 10.11772/j.issn.1001-9081.2014.03.0898
    Abstract ( )   PDF (785KB) ( )  
    Related Articles | Metrics

    Based on the shape features of wind shear images extracted by wavelet invariant moments with a cubic B-spline wavelet basis, an improved Genetic Algorithm (GA) was proposed for recognizing microburst, low-level jet stream, crosswind shear and tailwind-or-headwind shear. In the improved algorithm, the adaptive crossover probability considers only the generation number, while the mutation probability emphasizes the fitness values of individuals relative to the group, so that the evolution direction can be controlled uniformly while population diversity is largely maintained. Finally, the best feature subset chosen by the improved GA was fed into a 3-nearest neighbor classifier. The experimental results show that the algorithm has good directionality, converges rapidly to the global optimal solution, and steadily selects the critical feature subset, achieving a mean wind shear recognition rate of more than 97%.
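    The abstract does not give the exact adaptive expressions; one illustrative realization of a generation-only crossover probability and a fitness-only mutation probability (all constants are assumptions) is:

```python
def crossover_prob(gen, max_gen, p_max=0.9, p_min=0.5):
    # Crossover probability depends only on the generation number:
    # start high for exploration and decay linearly toward p_min,
    # steering the evolution direction uniformly across the population.
    return p_min + (p_max - p_min) * (1 - gen / max_gen)

def mutation_prob(fitness, f_avg, f_max, p_max=0.1, p_min=0.01):
    # Mutation probability depends only on fitness relative to the group:
    # below-average individuals mutate at the full rate (preserving
    # diversity), while the best individuals mutate least.
    if f_max == f_avg or fitness <= f_avg:
        return p_max
    return p_max - (p_max - p_min) * (fitness - f_avg) / (f_max - f_avg)
```

Decoupling the two probabilities in this way matches the abstract's claim: the crossover schedule controls the search direction while the fitness-based mutation rate maintains diversity.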

    Research on dynamic stability of badminton
    ZHANG Jinghua WANG Renhuang YUE Hongwei
    2014, 34(3):  902-906.  DOI: 10.11772/j.issn.1001-9081.2014.03.0902
    Abstract ( )   PDF (754KB) ( )  
    Related Articles | Metrics

    To solve the problem of regulating the dynamic stable equilibrium of a badminton shuttlecock, a particle influence coefficient method for the feather piece was put forward. The method combined the shuttlecock mass model with the feather piece mass, bending camber degree, angle of attack and other related factors. The particle influence coefficient of the feather piece was obtained by adjusting the centroid height so as to satisfy the dynamic stability requirement derived by minimizing the square of the striking tilt. Compared with traditional shuttlecock dynamic stabilization, which depends on experience accumulated over a long time, the proposed particle influence coefficient method forms a theoretical system with less time consumption and higher efficiency. The numerical results show that the proposed method is correct and effective.

    Automated Fugl-Meyer assessment based on genetic algorithm and extreme learning machine
    WANG Jingli LI Liang YU Lei WANG Jiping FANG Qiang
    2014, 34(3):  907-910.  DOI: 10.11772/j.issn.1001-9081.2014.03.0907
    Abstract ( )   PDF (775KB) ( )  
    Related Articles | Metrics

    To realize automatic and quantitative assessment in home-based upper extremity rehabilitation for stroke, an Extreme Learning Machine (ELM) based prediction model was proposed to automatically estimate the Fugl-Meyer Assessment (FMA) scale score for the shoulder-elbow section. Two accelerometers were utilized for data recording while 24 patients performed 4 tasks selected from the shoulder-elbow FMA. The accelerometer-based estimation was obtained by preprocessing the raw sensor data, extracting data features, selecting features with a Genetic Algorithm (GA), and applying ELM. Then 4 single-task models and a comprehensive model were built individually using the selected features. The results show that accurate estimation of the shoulder-elbow FMA score can be achieved from accelerometer data, with a root mean squared prediction error of 2.1849 points. This approach overcomes the subjective and time-consuming nature of traditional outcome measures, which rely on clinicians being at hand, and can be easily utilized in home settings.

    Induction logging inversion algorithm based on differential evolution
    XIONG Jie ZOU Changchun
    2014, 34(3):  911-914.  DOI: 10.11772/j.issn.1001-9081.2014.03.0911
    Abstract ( )   PDF (567KB) ( )  
    Related Articles | Metrics

    An induction logging inversion algorithm based on Differential Evolution (DE) was proposed to avoid dependence on an initial model. The inversion algorithm was applied to induction logging inversion on 2-D axisymmetric models with layers of different thickness, and yielded results consistent with the models in the noise-free case. When noise of 5%, 10% and 15% was added, the inversion results for thick reservoirs remained fairly good, while the results for thin reservoirs became slightly inferior. The numerical experimental results demonstrate that the proposed inversion algorithm has global optimization and anti-noise capabilities, and is less dependent on the initial model than traditional algorithms.
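    A minimal DE/rand/1/bin sketch of the kind of optimizer described follows; the objective, bounds and control parameters are placeholders, and the paper's logging forward model is not reproduced. Note that the population is sampled uniformly from the search bounds, which is exactly why no initial model is needed.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=100, seed=0):
    # DE/rand/1/bin: mutate each target with a scaled difference of two
    # random population vectors, apply binomial crossover, keep the trial
    # only if it does not worsen the cost (greedy selection).
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clip to the search bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            cost = objective(trial)
            if cost <= costs[i]:
                pop[i], costs[i] = trial, cost
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]
```

In the inversion setting, `objective` would measure the misfit between the measured logging response and the forward-modeled response of a candidate layered resistivity model.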

    Model error restoration for lower E-type membrane of six-axis force sensor based on adaptive Kalman filtering
    ZHU Wenchao XU Dezhang
    2014, 34(3):  915-920.  DOI: 10.11772/j.issn.1001-9081.2014.03.0915
    Abstract ( )   PDF (847KB) ( )  
    Related Articles | Metrics

    To reduce the influence of noise on the measurement accuracy of the six-axis force sensor and solve the problem that the standard Kalman filter cannot obtain the optimal estimate because of errors in the sensor's state-space model, a new adaptive Kalman filter with two adaptive factors was proposed. An augmented state-space model with colored noise for the lower E-type membrane was established based on the relationship between the response to a sinusoidal excitation force and the strain. Based on the principle of the standard Kalman filter, the impact of model errors on the filter estimates was analyzed, and a technique for dynamically adjusting the weight of the state prediction in the filter estimate was introduced; the adaptive Kalman filter estimation principle and the recursion formulae were presented. Finally, the dual adaptive factors were constructed through a three-section function model on the basis of the orthogonality principle and the least squares method. The simulation results indicate that, compared with the strong tracking filter and the standard Kalman filter, the proposed algorithm has better estimation accuracy and stability; it can effectively enhance the measurement accuracy of the six-axis force sensor and control the influence of model errors.
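    The prediction-reweighting idea can be sketched in one dimension: when the innovation is larger than the model predicts, an adaptive factor inflates the predicted covariance, which down-weights the state prediction against the measurement. This is an illustrative single-factor sketch, not the paper's dual-factor, three-section construction; all parameter values are assumptions.

```python
def adaptive_kalman_1d(measurements, q=1e-3, r=0.1, threshold=1.0):
    # 1-D Kalman filter for a constant-state model with one adaptive
    # factor: a normalized innovation above `threshold` signals model
    # error, so the predicted covariance is inflated proportionally,
    # reducing the weight of the (possibly wrong) state prediction.
    x, p = measurements[0], 1.0
    estimates = [x]
    for z in measurements[1:]:
        x_pred, p_pred = x, p + q          # time update (constant state)
        innovation = z - x_pred
        ratio = abs(innovation) / ((p_pred + r) ** 0.5)
        alpha = max(1.0, ratio / threshold)  # adaptive factor, >= 1
        p_pred *= alpha
        k = p_pred / (p_pred + r)            # Kalman gain
        x = x_pred + k * innovation          # measurement update
        p = (1 - k) * p_pred
        estimates.append(x)
    return estimates
```

On a signal with a sudden step, the inflated covariance makes the gain jump toward 1, so the filter re-acquires the new level in a few samples instead of drifting there slowly as a fixed-gain filter would.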

Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn