
Table of Contents

    01 July 2013, Volume 33 Issue 07
    Network and communications
    Vehicular Ad Hoc network routing protocol and its research progress
    FU Yuanke TANG Lun CHEN Qianbin GONG Pu
    2013, 33(07):  1793-1801.  DOI: 10.11772/j.issn.1001-9081.2013.07.1793
    In recent years, with the rapid development of Vehicular Ad hoc NETworks (VANET), well-classified routing protocols and comparative analysis have become important for future study. To address the ambiguous classification and insufficient analysis of routing protocols in VANET, this paper classified and summarized the routing protocols on the basis of topology-based, geography-based and hybrid routing in turn. Mainly introducing some classical routing protocols, it analyzed their characteristics and performance, and put forward their advantages and disadvantages as well as possible improvements. Special stress was put on the comparative analysis of Delay Tolerant Network (DTN) and opportunistic routing protocols, which are expected to become hot spots of future geographic routing study. Besides, the major challenges and potential opportunities facing VANET were pointed out, laying a foundation for future protocol study and clearly suggesting research directions.
    Probabilistic transmittingbased data aggregation scheme for wireless sensor networks
    GUO Jianghong LUO Yudong LIU Zhihong
    2013, 33(07):  1798-1801.  DOI: 10.11772/j.issn.1001-9081.2013.07.1798
    To reduce the communication overhead of traditional data aggregation methods in wireless sensor networks, the authors proposed a probabilistic transmission-based data aggregation scheme for Wireless Sensor Networks (WSN). Since the number of nodes in a cluster is limited and aggregation error is unavoidable, probabilistic transmission was adopted to reduce the number of inner-cluster transmissions and lower the communication overhead within a tolerable error. Besides, the Dixon criterion was adopted to eliminate gross errors in small samples, providing high reliability of inner-cluster aggregation. The experimental results show that probabilistic transmission can effectively lower the number of inner-cluster transmissions within a tolerable error: the communication overhead of the proposed scheme is about 27.5% of that of traditional data aggregation schemes. The aggregation errors of probabilistic transmission and all-node transmission are at the same level, and both are acceptable for wireless sensor networks.
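The Dixon criterion mentioned in the abstract can be illustrated with a minimal sketch of Dixon's Q test for a single gross error in a small sample. The 95% critical values are the standard r10 table; the sensor readings are made up for illustration and are not from the paper.

```python
def dixon_q_test(sample, q_crit=None):
    """Flag one gross error in a small sample (n in 3..10) with Dixon's Q test.
    Returns the sorted sample with the outlier removed if Q exceeds the
    critical value, otherwise the sorted sample unchanged."""
    # Standard 95%-confidence critical values for Dixon's r10 statistic.
    Q95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568,
           8: 0.526, 9: 0.493, 10: 0.466}
    xs = sorted(sample)
    n = len(xs)
    rng = xs[-1] - xs[0]
    if n < 3 or n > 10 or rng == 0:
        return xs
    q_low = (xs[1] - xs[0]) / rng          # gap below the smallest value
    q_high = (xs[-1] - xs[-2]) / rng       # gap above the largest value
    crit = q_crit if q_crit is not None else Q95[n]
    if q_high >= q_low and q_high > crit:
        xs.pop()                           # discard suspected high outlier
    elif q_low > crit:
        xs.pop(0)                          # discard suspected low outlier
    return xs

# Six hypothetical inner-cluster readings, one of them a gross error (90.0).
readings = [20.1, 20.3, 19.9, 20.2, 20.0, 90.0]
print(dixon_q_test(readings))  # the gross error 90.0 is rejected
```

With the gross error removed before aggregation, the cluster head averages only consistent readings, which is the reliability role the Dixon criterion plays in the scheme.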
    Low-power secure localization algorithm based on distributed reputation evaluation
    WANG Yong YUAN Chaoyan TANG Jing HU Liangliang
    2013, 33(07):  1802-1808.  DOI: 10.11772/j.issn.1001-9081.2013.07.1802
    A new low-power localization algorithm based on distributed reputation evaluation was proposed to improve the security and energy consumption of node positioning in Wireless Sensor Networks (WSN). The concepts of the Trustworthy Node Table (TNT) and the backup cluster head node were introduced to find reliable beacon nodes quickly; the backup cluster head node could assist and monitor the cluster head node, reducing the workload of the cluster head and participating in the integration of the beacon nodes' reputation values. The proposed algorithm enhanced the reliability and integrity of the beacon nodes, improved the efficiency and security of node localization, reduced the system's energy consumption and improved the detection rate of malicious nodes. The simulation results show that, in a malicious node environment, the algorithm can effectively improve the detection rate of malicious nodes, reduce the positioning error, and weaken the damage and influence of malicious nodes on the positioning system, achieving secure positioning of the nodes.
    Shortest dynamic time flow problem in continuous-time capacitated network
    MA Yubin XIE Zheng CHEN Zhi
    2013, 33(07):  1805-1808.  DOI: 10.11772/j.issn.1001-9081.2013.07.1805
    Concerning a kind of continuous-time capacitated network with limits on node processing rates, a shortest dynamic time flow problem was proposed and its corresponding linear programming form was given. Based on the inner relationship between the above-mentioned network and the classical continuous-time capacitated network, efficient algorithms based on the ideas of maximal-received flow and returning flow were designed to exactly solve the shortest dynamic time flow problem in these two kinds of network respectively. Afterwards, the algorithms were proved to be correct and their complexities were shown to be low. Finally, an example was used to demonstrate the execution of the algorithm.
    Adaptive TCP congestion algorithm based on fuzzy loss discrimination in heterogeneous networks
    WU Xiaochuan ZHANG Zhixue
    2013, 33(07):  1809-1812.  DOI: 10.11772/j.issn.1001-9081.2013.07.1809
    In hybrid wired/wireless networks, the traditional Transmission Control Protocol (TCP) versions designed for wired networks simply ascribe packet loss to congestion, which causes unnecessary performance degradation. To solve this problem, a new adaptive control algorithm based on fuzzy theory was proposed. It selected new network status parameters and used a fuzzy loss-differentiating method to make a comprehensive evaluation of network status. Based on feedback theory, it built an adaptive control model: the evaluation result set was obtained, and the Transmission Performance Index (TPI) was yielded by summing up the weighted elements of the result set; the TPI entered the next evaluation cycle as one of the input factors and also adjusted the factors' weights. The simulation results show this algorithm better reflects the real congestion status of hybrid networks, has better network adaptability and performs better than current mainstream TCP mechanisms. Using fuzzy methods in a multi-parameter setting, this algorithm makes new explorations into hybrid network congestion and its adaptive control.
    Inter-cluster routing algorithm in wireless sensor network based on game theory
    ZHAO Xin ZHANG Xin
    2013, 33(07):  1813-1815.  DOI: 10.11772/j.issn.1001-9081.2013.07.1813
    In Wireless Sensor Networks (WSN), the network coverage is wide, the communication range of sensor nodes is limited, and long-distance transmission easily causes data loss. To solve these problems, a routing algorithm based on game theory for WSN was proposed: the network Quality of Service (QoS) and the residual energy of nodes were established as the utility function of the game model, and the Nash equilibrium was solved. The simulation results show that the proposed game model can optimize network service quality, reduce the energy consumption of nodes and prolong the survival time of the entire network.
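As a toy illustration of the game-theoretic formulation, a pure-strategy Nash equilibrium can be found by checking mutual best responses. The utility matrices below are hypothetical stand-ins for a QoS-benefit-minus-energy-cost utility; the paper's actual model is not reproduced here.

```python
def pure_nash_equilibria(U1, U2):
    """Enumerate pure-strategy Nash equilibria of a two-player game:
    cell (i, j) is an equilibrium iff row i is a best response to column j
    for player 1, and column j is a best response to row i for player 2."""
    rows, cols = len(U1), len(U1[0])
    eqs = []
    for i in range(rows):
        for j in range(cols):
            if (U1[i][j] == max(U1[k][j] for k in range(rows)) and
                    U2[i][j] == max(U2[i][l] for l in range(cols))):
                eqs.append((i, j))
    return eqs

# Hypothetical utilities for two neighboring nodes choosing between two
# forwarding strategies; each entry combines a QoS gain and an energy cost
# (made-up numbers, for illustration only).
U1 = [[3, 1],
      [2, 4]]
U2 = [[2, 1],
      [1, 3]]
print(pure_nash_equilibria(U1, U2))   # [(0, 0), (1, 1)]
```

At either equilibrium, neither node can improve its own utility by unilaterally changing strategy, which is the stability property the routing algorithm exploits.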
    Adaptive epidemic routing algorithm based on controlled mechanism for sparse vehicle Ad Hoc networks
    SU Chunbo XU Jiapin
    2013, 33(07):  1816-1819.  DOI: 10.11772/j.issn.1001-9081.2013.07.1816
    An adaptive Epidemic routing (Ad-EPI) algorithm based on a control mechanism was proposed to overcome the performance defects of the traditional Epidemic algorithm. An overall balance among transmission peak, bandwidth resource consumption, cache utilization and delay was achieved by a controlled flooding mechanism, an information-copy control mechanism, the introduction of an information lifetime, and an adaptive control strategy, under the condition of ensuring a high arrival rate. The Ad-EPI algorithm was implemented in VC++ 6.0 and compared with the classic Epidemic algorithm on the VanetMobiSim simulation platform. The simulation results confirm that the Ad-EPI algorithm not only pays a smaller delay cost than the classic Epidemic algorithm but also obtains returns of bandwidth usage decreasing by 27.62%, transmission peak reducing by 15.19% on average, and cache utilization increasing by 92.14%. The Ad-EPI algorithm has achieved performance improvements in the three above-mentioned areas, and it has engineering significance and application value.
    Multi-objective coverage control in wireless sensor network based on Poisson distribution
    XU Yixin BAI Yan ZHAO Tianyang WANG Renshu
    2013, 33(07):  1820-1824.  DOI: 10.11772/j.issn.1001-9081.2013.07.1820
    A multi-objective optimization coverage control was proposed to solve the intractable problems of k-coverage rate, energy consumption and reliability in wireless sensor networks, on the assumption that nodes follow a Poisson distribution. To overcome the shortcomings of population initialization, parameter control and population maintenance in the multi-objective differential evolution algorithm, the authors designed strategies of orthogonal swarm initialization, self-adaptive parameter control and dynamic swarm maintenance, and an improved multi-objective differential evolution algorithm (I-DEMO) was proposed to solve this model. The results show that the control strategy can effectively achieve a 3-coverage rate of 81.2%, reduce the energy consumption effectively, and ensure reliability. This algorithm can dominate 76% of the Pareto front of the traditional algorithm and be applied to other multi-objective problems.
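Under the Poisson-deployment assumption, k-coverage has a standard closed form: the number of sensors covering a given point is Poisson with mean m = λπr², so the probability of being covered by at least k sensors is 1 − Σ_{i<k} e^(−m) m^i / i!. A sketch of this relation, with illustrative parameters that are not taken from the paper:

```python
import math

def k_coverage_probability(lam, r, k):
    """P(a point is covered by >= k sensors) when sensors form a homogeneous
    Poisson point process of intensity lam and each senses a disc of radius r.
    The number of sensors covering a fixed point is Poisson(m), m = lam*pi*r^2."""
    m = lam * math.pi * r * r
    p_less = sum(math.exp(-m) * m**i / math.factorial(i) for i in range(k))
    return 1.0 - p_less

# Illustrative numbers only: 0.01 nodes per unit area, sensing radius 15.
print(round(k_coverage_probability(0.01, 15.0, 3), 4))
```

This is the kind of coverage objective (here for k = 3, i.e. 3-coverage) that the multi-objective model trades off against energy consumption and reliability.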
    Vertical handover trigger mechanism between 3G and WLAN based on available data rate
    ZHANG Jinfu YUAN Ling YOU Jianqiang
    2013, 33(07):  1825-1827.  DOI: 10.11772/j.issn.1001-9081.2013.07.1825
    For the issue that the traditional vertical handover trigger mechanism cannot guarantee the real-time highest data rate, a method which makes the best of network resources and effectively avoids unnecessary handover was proposed. Based on the IEEE 802.21 Media Independent Handover (MIH) standard, the Media Independent Information Service (MIIS) provided by the MIH Function (MIHF) was used to get the parameters of the candidate Wireless LANs (WLAN) covering the mobile terminal; the maximum available data rate of each candidate WLAN was calculated from the acquired parameters and compared with that of the 3G network to make the handover decision. When a WLAN with a higher data rate was detected, handover to it could be made timely; if the mobile terminal moved out of the WLAN coverage, handover back to the 3G network was made. The simulation results show that, compared to the traditional vertical handover trigger mechanism, the proposed algorithm makes fuller use of network resources and avoids unnecessary handover effectively.
    Direction of arrival estimation of orthogonal frequency division multiplexing signal based on wideband focused matrix and higher-order cumulant
    WANG Zhichao ZHANG Tianqi WAN Yilong ZHU Hongbo
    2013, 33(07):  1828-1832.  DOI: 10.11772/j.issn.1001-9081.2013.07.1828
    To solve the problem of Orthogonal Frequency Division Multiplexing (OFDM) broadband signal processing, two algorithms for the Direction of Arrival (DOA) estimation of OFDM signals, based on the broadband focused matrix and higher-order cumulants respectively, were introduced. In the former algorithm, broadband array data was decomposed into several narrowband signals by Fourier transform, the direction matrices in different frequency bands were transformed to the same reference frequency by a focused matrix, and then DOA was estimated with the Multiple Signal Classification (MUSIC) algorithm. In the higher-order cumulant algorithm, through the focusing operation, array output vectors at different frequency bins were transformed to the focusing frequency and individual cumulant matrices were obtained. The cumulant matrices were weighted, averaged and eigen-decomposed, and then the MUSIC algorithm was applied to estimate DOA. Theoretical analysis and simulation results show that the two methods can accurately estimate the DOA of OFDM signals, and the spatial resolution of the fourth-order cumulant method is better than that of the focusing matrix method. The fourth-order cumulant method expands the array aperture and also adapts well when the Signal-to-Noise Ratio (SNR) is low.
    Method of fast cyclic redundancy check reverse decoding
    LIANG Haihua PAN Lina
    2013, 33(07):  1833-1835.  DOI: 10.11772/j.issn.1001-9081.2013.07.1833
    Cyclic Redundancy Check (CRC) has been widely used in the field of computer networks. Since the existing First In First Out (FIFO) method can only decode checksums encoded with an all-zero initial register state, a Last In First Out (LIFO) method was proposed. First of all, by analyzing two kinds of serial encoding circuit based on the state-transition matrix, the authors theoretically proved that the matrix was invertible, so a serial LIFO method and its circuit could be derived. Based on the serial method, a rapid parallel LIFO method was given that needs no dummy bits, thus simplifying the calculation process. A case study verified the correctness of this method when decoding checksums, regardless of the initial register state. The simulation results show that FIFO and LIFO have similar calculation speed.
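The invertibility argument behind the LIFO method can be sketched at the bit-serial level: when the CRC polynomial's constant term is 1, every register step can be undone, so decoding can run backwards from any initial state. A minimal sketch using CRC-8 with polynomial 0x07 (an assumed example for illustration, not the paper's circuit):

```python
def crc_forward(state, bits, poly=0x07, width=8):
    """Serial CRC register: shift in one message bit per step."""
    mask = (1 << width) - 1
    for b in bits:
        fb = ((state >> (width - 1)) & 1) ^ b       # feedback bit
        state = ((state << 1) & mask) ^ (poly if fb else 0)
    return state

def crc_reverse(state, bits, poly=0x07, width=8):
    """LIFO inverse: undo the steps, consuming the message bits in reverse.
    Invertibility relies on the polynomial's constant term being 1, so the
    register's LSB after a step equals that step's feedback bit."""
    for b in reversed(bits):
        fb = state & 1                               # recover the feedback bit
        t = state ^ (poly if fb else 0)              # = (old_state << 1) masked
        state = (t >> 1) | ((fb ^ b) << (width - 1)) # restore old top bit
    return state

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1]
init = 0x5A                                          # arbitrary non-zero start
final = crc_forward(init, msg)
print(hex(crc_reverse(final, msg)))                  # recovers 0x5a
```

Because each reverse step exactly inverts one forward step, the recovery works for any initial register state, which is the property the FIFO method lacks.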
    Shaping method for low-density lattice codes based on lower triangular matrix
    ZHU Lianxiang LUO Hongyu
    2013, 33(07):  1836-1838.  DOI: 10.11772/j.issn.1001-9081.2013.07.1836
    To solve the problem that Low Density Lattice Codes (LDLC) cannot be used on the power-constrained Additive White Gaussian Noise (AWGN) channel, shaping methods were studied. In this paper, a lower triangular H-matrix with a special structure was constructed first, together with the hypercube and systematic shaping methods; then, with the average power fixed, the position change of lattice points before and after the shaping process and the corresponding shaping gain were analyzed. The simulation results show that the codewords are uniformly distributed within the Voronoi regions of the lattice after shaping, and these shaping methods can achieve a shaping gain of 1.31dB at a Symbol Error Rate (SER) of 10^-5 and a code length of 10000, an improvement of 0.31dB over the traditional shaping technique. Power-limited lattice points were generated efficiently after shaping.
    Echo cancellation technique solutions based on parallel filter
    WANG Zhenchao GAO Yang XUE Wenling YANG Jianpo
    2013, 33(07):  1839-1841.  DOI: 10.11772/j.issn.1001-9081.2013.07.1839
    To improve the convergence rate of digital repeater echo cancellation, the echo cancellation technique based on the adaptive filter was first studied; then the recursion algorithm of the adaptive filter was improved by technical schemes in which two adaptive filters compute in parallel and update their weights jointly and recursively. Since the error signal used to adjust the weights of the two adaptive filters was generated in different ways, the schemes fell into two categories: in scheme one, the weights of both filters were adjusted simultaneously by the error signal of echo cancellation; in scheme two, the weights of the first filter were adjusted by the error signal formed as the difference between the signal received from the antenna and the output of the first filter, and the weights of the second filter were adjusted separately by the error signal formed as the difference between the above-mentioned error signal and the output of the second filter. The simulation results show that the convergence rate of echo cancellation is increased by 11.11%-17.78% by the improved technique, effectively alleviating the slow convergence of digital repeater echo cancellation.
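The adaptive-filter recursion underlying such schemes can be sketched with a single least-mean-squares (LMS) filter identifying a hypothetical echo path; the paper's parallel two-filter variants are not reproduced, and the echo path, step size and signal below are made up for illustration.

```python
import random

def lms(x, d, taps, mu):
    """LMS adaptive filter: at each sample, predict the echo d[n] from the
    last `taps` transmitted samples, then nudge the weights along the
    negative gradient of the squared error."""
    w = [0.0] * taps
    for n in range(taps, len(x)):
        frame = x[n - taps:n][::-1]                  # x[n-1], x[n-2], ...
        y = sum(wi * xi for wi, xi in zip(w, frame)) # estimated echo
        e = d[n] - y                                 # residual echo
        w = [wi + mu * e * xi for wi, xi in zip(w, frame)]
    return w

rng = random.Random(0)
h = [0.5, -0.3, 0.2, 0.1]                            # hypothetical echo path
x = [rng.uniform(-1, 1) for _ in range(5000)]        # transmitted signal
# d[n]: echo of past samples x[n-1..n-4] through h (no near-end noise).
d = [sum(hk * x[n - 1 - k] for k, hk in enumerate(h)) if n >= len(h) else 0.0
     for n in range(len(x))]
w = lms(x, d, taps=4, mu=0.05)
print([round(wi, 2) for wi in w])                    # approaches the echo path h
```

Once the weights match the echo path, subtracting the filter output cancels the echo; the parallel schemes in the paper aim to reach this converged state in fewer iterations.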
    Information security
    Node behavior and identity-based trusted authentication in wireless sensor networks
    LIU Tao XIONG Yan HUANG Wenchao LU Qiwei GONG Xudong
    2013, 33(07):  1842-1845.  DOI: 10.11772/j.issn.1001-9081.2013.07.1842
    Concerning the vulnerability to attacks from external and internal nodes and node failure due to openness and limited resources in Wireless Sensor Networks (WSN), an efficient, secure trusted authentication scheme was proposed. Identity-based cryptography and bilinear pairings were adopted for authentication key agreement and update. The node trust value was computed by node behavior reputation management based on the Beta distribution. A symmetric cryptosystem combined with message authentication codes was used in the certification process between trusted nodes, which were identified by their trust values. The scheme not only prevents eavesdropping, injection, replay, denial of service and other external attacks, but also withstands internal threats such as selective forwarding, Wormhole, Sinkhole and Sybil attacks. The analysis and comparison with the SPINS scheme show that the scheme achieves longer network lifetime, smaller certification delay, and greater security and scalability in the same network environment. The scheme has good application value in unattended WSNs with high safety requirements.
    Efficient provably secure certificateless signcryption scheme in standard model
    SUN Hua MENG Kun
    2013, 33(07):  1846-1850.  DOI: 10.11772/j.issn.1001-9081.2013.07.1846
    At present, most existing certificateless signcryption schemes proven secure are proposed in the random oracle model. Concerning the problem that this kind of scheme usually cannot be instantiated in practical applications, a certificateless signcryption scheme was designed in the standard model. By analyzing several certificateless signcryption schemes in the standard model, it was pointed out that they were all insecure. Based on Au et al.'s scheme (AU M H, LIU J K, YUEN T H, et al. Practical hierarchical identity based encryption and signature schemes without random oracles. http://eprint.iacr.org/2006/368.pdf), a new provably secure certificateless signcryption scheme in the standard model was proposed by using the bilinear pairing technique of elliptic curves. In the end, it is proved that the scheme satisfies indistinguishability against adaptive chosen ciphertext attacks and existential unforgeability against adaptive chosen message and identity attacks under complexity assumptions such as the Decisional Bilinear Diffie-Hellman (DBDH) problem. Therefore, the scheme is secure and reliable.
    Security key pre-distribution scheme for wireless sensor networks
    ZHANG Ji DU Xiaoni LI Xu LIN Jipo
    2013, 33(07):  1851-1853.  DOI: 10.11772/j.issn.1001-9081.2013.07.1851
    Key management is the core security issue in Wireless Sensor Networks (WSN). The random key pre-distribution scheme based on bivariate symmetric polynomials provides a security mechanism for node communication; however, it has the "t-security" problem and is vulnerable to node capture attacks. To solve this problem and to improve the security threshold of the network and the nodes' resistance to capture, the authors used a convertible trivariate polynomial instead of the bivariate polynomial to establish inter-node communication, and introduced key distribution nodes to perform key distribution in the clustered network. The new scheme improved the network's resistance to capture. Moreover, decoding a sensor node's key became almost impossible because of the use of a one-way Hash function. The analytical results show that this scheme has higher anti-destruction ability, scalability and security, and also reduces the storage and computation overhead of common sensor nodes.
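The bivariate symmetric polynomial mechanism that the scheme extends can be sketched as follows: since f(x, y) = f(y, x), nodes i and j derive the same pairwise key from their own pre-distributed shares. The field, coefficients and node ids below are illustrative; the paper's trivariate variant adds one more variable in the same spirit.

```python
P = 2_147_483_647  # a Mersenne prime; all arithmetic is over GF(P) (illustrative)

# Symmetric coefficient matrix C (C[a][b] == C[b][a]) defines
# f(x, y) = sum_{a,b} C[a][b] * x^a * y^b, which satisfies f(x, y) == f(y, x).
C = [[5, 17, 9],
     [17, 2, 31],
     [9, 31, 11]]

def share(node_id):
    """Share pre-distributed to a node: g_i(y) = f(node_id, y),
    stored as the coefficients of y^0, y^1, y^2."""
    return [sum(C[a][b] * pow(node_id, a, P) for a in range(3)) % P
            for b in range(3)]

def pairwise_key(my_share, peer_id):
    """Evaluate one's own share at the peer's id: g_i(j) = f(i, j)."""
    return sum(c * pow(peer_id, b, P) for b, c in enumerate(my_share)) % P

g7, g42 = share(7), share(42)
assert pairwise_key(g7, 42) == pairwise_key(g42, 7)   # both derive f(7, 42)
```

The "t-security" limitation is also visible here: a degree-2 polynomial is fully recovered from 3 captured shares, which is exactly the weakness the trivariate construction and key distribution nodes are meant to mitigate.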
    Matrix-based authentication protocol for RFID and BAN logic analysis
    LI Hongjing LIU Dan
    2013, 33(07):  1854-1857.  DOI: 10.11772/j.issn.1001-9081.2013.07.1854
    Currently, most proposed Radio Frequency Identification (RFID) authentication protocols cannot resist replay and altering attacks. This article proposed a low-cost secure protocol, called the Matrix-based Secure Protocol (MSP), which can resist these attacks. MSP utilizes matrix theory and a Pseudo-Random Number Generator (PRNG), and requires only 1000 gate equivalents. Compared to previously proposed protocols using the same algorithm, MSP places less demand on storage and computing capability. Then, this article analyzed the security of MSP with Burrows-Abadi-Needham (BAN) logic. The conclusion is that MSP applies well to RFID.
    Certificateless aggregate signcryption scheme with public verifiability
    ZHANG Xuefeng WEI Lixian WANG Xu'an
    2013, 33(07):  1858-1860.  DOI: 10.11772/j.issn.1001-9081.2013.07.1858
    Research on aggregate signcryption is mostly based on identity-based encryption to provide confidentiality and authentication while improving efficiency; however, identity-based aggregate signcryption has problems with certificate management and key escrow. Therefore, new aggregate signcryption schemes are needed that not only solve the problems of certificate management and key escrow, but also guarantee the confidentiality and authentication of the scheme. This paper analyzed the mainstream aggregate signcryption schemes and their development. Combining the scheme of Zhang et al. (ZHANG L, ZHANG F T. A new certificateless aggregate signature scheme. Computer Communications, 2009, 32(6): 1079-1085) with the needs mentioned above, this article designed a certificateless aggregate signcryption scheme, and proved its confidentiality and unforgeability based on the Bilinear Diffie-Hellman (BDH) problem and the Computational Diffie-Hellman (CDH) problem. The experimental results show that the proposed scheme is more efficient and its amount of computation is equal or lower in comparison with other schemes. Moreover, the new scheme is publicly verifiable, eliminates the use of public key certificates and solves the key escrow problem.
    Hybrid spam filtering method based on users' feedback
    HUANG Guowei XU Yuwei
    2013, 33(07):  1861-1865.  DOI: 10.11772/j.issn.1001-9081.2013.07.1861
    Several limitations exist in current spam filtering methods: for example, they usually rely on only one type of E-mail characteristic for classification, and they adapt poorly to the dynamic changes of E-mail characteristics. Concerning these limitations, a hybrid spam filtering method based on users' feedback was proposed. Based on the Social Network (SN) relationships among users, the knowledge for E-mail classification was dynamically updated with the help of a user feedback scheme. Furthermore, a Bayesian model was introduced to integrate the content-based and identity-based characteristics of E-mail in the classification. The simulation results show that the proposed method outperforms the traditional method in E-mail classification when the E-mail characteristics change dynamically: the overall recall, precision and accuracy of the method reach 90% or above. While guaranteeing classification performance, the proposed method effectively improves the adaptability of classification to changes of E-mail characteristics. Therefore, the proposed method can act as a useful complement to current spam filtering methods.
    Blind watermarking algorithm in H.264 compressed domain
    LIU Lidong TIAN Xiang
    2013, 33(07):  1866-1869.  DOI: 10.11772/j.issn.1001-9081.2013.07.1866
    To solve the problem of H.264 video copyright protection, a new blind watermarking algorithm was proposed. Based on the texture features of the picture, watermarking information was embedded in the Discrete Cosine Transform (DCT) domain of Instantaneous Decoding Refresh (IDR) frames. First, a rectangular sliding window was used to search for regions of complex texture. Second, in the selected region, the 4×4 sub-block of maximum energy was chosen for embedding one watermarking bit. Last, one Alternating Current (AC) coefficient of the selected 4×4 sub-block was modified adaptively. The experimental results show that the Peak Signal-to-Noise Ratio (PSNR) decreases by 0.15dB and the bitrate rises by 0.49% on average, while the accuracy of watermark detection is above 91%; moreover, the algorithm can effectively resist re-encoding attacks with different Quantization Parameters (QP).
    Algorithm for directly computing 7P on elliptic curves and its application
    LAI Zhongxi ZHANG Zhanjun TAO Dongya
    2013, 33(07):  1870-1874.  DOI: 10.11772/j.issn.1001-9081.2013.07.1870
    To raise the efficiency of scalar multiplication on elliptic curves, based on the idea of trading inversions for multiplications, two efficient algorithms were proposed to compute 7P directly over the binary field F2n in affine coordinates. The two algorithms introduced the common divisor and the division polynomial respectively to compute 7P; their computational complexities were 2I+7S+14M and I+6S+20M, saving one and two inversions respectively compared with Purohit's method (PUROHIT G N, RAWAT S A, KUMAR M. Elliptic curve point multiplication using MBNR and point halving. International Journal of Advanced Networking and Applications, 2012, 3(5): 1329-1337). Moreover, a new method was given to compute 7kP directly, which was more efficient than computing 7P k times. Finally, these new algorithms were applied to scalar multiplication combined with point halving and the extended MBNS (Multi-Base Number Representation). The experimental results show that the efficiency of the new method is improved by about 30%-37% over Purohit's method and about 9%-13% over Hong's method (HONG Y F, GUI F, DING Y. Extended algorithm for scalar multiplication based on point halving and MBNS. Computer Engineering, 2011, 37(4): 163-165) on the elliptic curves recommended by NIST (National Institute of Standards and Technology), when the number of pre-stored points is 2 and 5. The new method can reduce the computational complexity of scalar multiplication efficiently at the cost of a small amount of pre-computation storage.
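The saving behind direct-7P formulas can be illustrated with an addition-chain sketch. For readability this uses a toy curve over a small prime field rather than the paper's binary field F2n: the chain 7P = 2(3P) + P already beats six naive additions, and the paper's formulas go further by merging the chain into a single formula that trades field inversions for multiplications.

```python
p, a = 11, 1   # toy curve y^2 = x^3 + x + 6 over GF(11); P0 = (2, 7) lies on it

def ec_add(P, Q):
    """Affine point addition (doubling included); None is the point at infinity.
    Each call with affine coordinates costs one field inversion (the pow call)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                           # P + (-P)
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul_naive(k, P):
    """k successive additions: the baseline a direct-7P formula competes with."""
    R = None
    for _ in range(k):
        R = ec_add(R, P)
    return R

P0 = (2, 7)
T = ec_add(ec_add(P0, P0), P0)      # 3P = 2P + P
sevenP = ec_add(ec_add(T, T), P0)   # 7P = 2*(3P) + P: 4 group ops vs 6 naive
assert sevenP == mul_naive(7, P0)
print(sevenP)
```

Even the 4-operation chain still pays one inversion per group operation; the paper's 2I+7S+14M and I+6S+20M direct formulas reduce precisely that inversion count.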
    Network and distributed technology
    Research status and development trend of human computation
    YANG Jie HUANG Xiaopeng SHENG Yin
    2013, 33(07):  1875-1879.  DOI: 10.11772/j.issn.1001-9081.2013.07.1875
    Human computation is a technology that combines human abilities with distributed-computing theory to solve problems that computers cannot solve alone. The concept of human computation and its properties were introduced, and the distinctions between human computation and many similar concepts were clarified. Based on a literature review, the current research methods and design criteria of human computation were sorted out. Finally, research directions and development trends of human computation were discussed.
    Secure and distributed cloud storage model from threshold attribute-based encryption
    WU Shengyan XU Li LIN Changlu
    2013, 33(07):  1880-1884.  DOI: 10.11772/j.issn.1001-9081.2013.07.1880
    Since there are more and more security issues in cloud storage, this paper designed a new secure and distributed cloud storage model based on threshold Attribute-Based Encryption (ABE). The model includes three phases: encryption, data storage and decryption, and all messages in these phases are distributed throughout the process. Using a multi-attribute-authority method, the model not only enhances the security of the stored data through ABE, but also supports threshold decryption and allows arbitrary attribute authorities to be added or removed. In the data storage phase, a distributed erasure code was used to improve the robustness of the model, and the model can resist collusion attacks. It can be applied in some special cloud situations and provides secure cloud storage service for users.
    Chord protocol and algorithm in distributed programming language
    PENG Chengzhang JIANG Zejun CAI Xiaobin ZHANG Zhike
    2013, 33(07):  1885-1889.  DOI: 10.11772/j.issn.1001-9081.2013.07.1885
    The Peer-to-Peer (P2P) Distributed Hash Table (DHT) protocol is concise and easily understood, but implementing and deploying a fully functional component like Chord in practice is difficult and complicated because of the mismatch between popular imperative languages and distributed architectures. To resolve this problem, a P2P DHT implementation based on the Bloom system was proposed. Firstly, the key elements of Bloom, a distributed logic programming language, were expounded. Secondly, a minimal distributed system was designed. Thirdly, a Chord prototype system was implemented by defining persistent, transient, asynchronous-communication and periodic collections and designing several algorithms for finger table maintenance, successor listing, stabilization and so on. The experimental results show that the prototype system fulfills the full functionality of Chord and, compared to traditional languages, saves 60% of the code lines. The analysis indicates that such a high degree of uniformity between the final code and the DHT protocol specification makes it more readable and reusable, and helps in understanding the protocol and related applications.
    Auto-clustering algorithm based on compute unified device architecture and gene expression programming
    DU Xin LIU Dagang ZHANG Kaihuo SHEN Yuan ZHAO Kang NI Youcong
    2013, 33(07):  1890-1893.  DOI: 10.11772/j.issn.1001-9081.2013.07.1890
    Two steps in the GEP-Cluster algorithm are inefficient: the screening and aggregation of clustering centers, and the calculation of distances between data objects and clustering centers. To address this, an auto-clustering algorithm based on the Compute Unified Device Architecture (CUDA) and Gene Expression Programming (GEP), named CGEP-Cluster, was proposed. Specifically, the clustering-center screening and aggregation step was improved by the Gene Read & Compute Machine (GRCM) method, and CUDA was used to parallelize the distance calculation between data objects and clustering centers. The experimental results show that, compared with the GEP-Cluster algorithm, CGEP-Cluster achieves a speedup of almost eight when the data scale is large. CGEP-Cluster can be used for automatic clustering when the number of clusters is unknown and the data scale is large.
    Least square support vector classification-regression machine for multi-classification problems
    ZHAI Jia HU Yiqing XU Er
    2013, 33(07):  1894-1897.  DOI: 10.11772/j.issn.1001-9081.2013.07.1894
    The tri-class classification method based on Support Vector Machine (SVM) is one approach to solving multi-class classification problems. A Least Square Support Vector Classification-Regression (LSSVCR) machine was proposed, whose least squares objective function takes the effect of every sample point into account, so that even wrongly labeled sample points in the training set do not affect the result greatly. LSSVCR is more accurate and faster, and it is efficient for problems in which the numbers of sample points differ greatly among classes. The numerical experiments show that the proposed method raises the accuracy by 2.57% on average compared to existing tri-classification methods.
    Hardware/software partitioning based on greedy algorithm and simulated annealing algorithm
    ZHANG Liang XU ChengCheng TIAN Zheng LI Tao
    2013, 33(07):  1898-1902.  DOI: 10.11772/j.issn.1001-9081.2013.07.1898
    Hardware/Software (HW/SW) partitioning is one of the crucial steps in the co-design of embedded systems, and it has been proven to be an NP-hard problem. Considering that recent work suffers from slow convergence and poor solution quality, a HW/SW partitioning method based on the greedy algorithm and the Simulated Annealing (SA) algorithm was proposed. This method reduces the HW/SW partitioning problem to an extended 0-1 knapsack problem and uses the greedy algorithm for a rapid initial partition; it then divides the solution space reasonably, designs a new cost function, and uses the improved SA algorithm to search for the global optimum. The experimental results show that, compared to existing improved algorithms, the new algorithm is more effective and practical in terms of partitioning quality and running time, with improvements of 8% and 17% respectively.
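    The abstract's reduction to an extended 0-1 knapsack problem can be sketched as follows; this is a minimal illustration of greedy seeding followed by simulated annealing, not the paper's cost function or solution-space partition:

```python
import math
import random

def greedy_init(values, weights, capacity):
    """Greedy seed: take items in decreasing value/weight ratio."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    x, load = [0] * len(values), 0
    for i in order:
        if load + weights[i] <= capacity:
            x[i], load = 1, load + weights[i]
    return x

def knapsack_value(values, weights, capacity, x):
    """Total value of a selection, or -1 if it violates the capacity."""
    if sum(w for w, xi in zip(weights, x) if xi) > capacity:
        return -1
    return sum(v for v, xi in zip(values, x) if xi)

def anneal(values, weights, capacity, x, t0=10.0, cooling=0.99, steps=2000, seed=0):
    """Simulated annealing: flip one item per step; accept worse moves
    with probability exp(delta / T) so early exploration can escape
    the greedy solution's local optimum."""
    rng = random.Random(seed)
    score = lambda s: knapsack_value(values, weights, capacity, s)
    best, t = x[:], t0
    for _ in range(steps):
        cand = x[:]
        cand[rng.randrange(len(cand))] ^= 1
        delta = score(cand) - score(x)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            x = cand
        if score(x) > score(best):
            best = x[:]
        t *= cooling
    return best

values, weights, capacity = [6, 10, 12], [1, 2, 3], 5
x0 = greedy_init(values, weights, capacity)   # picks items 0 and 1
best = anneal(values, weights, capacity, x0)
```

The greedy pass gives a feasible starting point in O(n log n); annealing then spends its exploration budget only on improving that seed, which is the division of labour the abstract describes.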
    Memory dependence prediction method based on instruction distance
    LU Dongdong HE Jun YANG Jianxin WANG Biao
    2013, 33(07):  1903-1907.  DOI: 10.11772/j.issn.1001-9081.2013.07.1903
    Memory dependence prediction plays a very important role in reducing memory order violations and improving microprocessor performance. However, traditional methods usually have large hardware overhead and poor realizability. Through an analysis of the locality of memory dependences, a new memory dependence predictor based on instruction distance was proposed. Compared with other memory dependence predictors, this predictor makes full use of the locality of memory dependences in instruction distance: it predicts the violation distance of memory instructions and restricts the speculation of only a few instructions, thereby reducing the number of memory order violations and improving performance. The simulation results show that with only a 1KB hardware budget, the average Instructions Per Cycle (IPC) gets a 1.70% speedup, and the largest improvement is 5.11%. Performance is thus greatly improved at a small hardware cost.
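    The flavour of a distance-based predictor can be sketched in a few lines; the table layout and the distance metric below are illustrative assumptions, not the paper's hardware design:

```python
class DistancePredictor:
    """Toy memory-dependence predictor indexed by load instruction
    address. It remembers, per load, the instruction distance at which
    a conflicting store caused a memory order violation, and afterwards
    lets the load speculate only when no in-flight store lies within
    that distance."""

    def __init__(self):
        self.violation_dist = {}   # load address -> learned distance

    def train(self, load_addr, store_addr):
        """Called when a memory order violation is detected."""
        d = abs(load_addr - store_addr)
        prev = self.violation_dist.get(load_addr)
        self.violation_dist[load_addr] = d if prev is None else max(prev, d)

    def may_speculate(self, load_addr, inflight_store_addrs):
        d = self.violation_dist.get(load_addr)
        if d is None:
            return True            # never violated: speculate freely
        return all(abs(load_addr - s) > d for s in inflight_store_addrs)

p = DistancePredictor()
p.train(100, 92)                   # a violation observed at distance 8
```

Only loads with a recorded violation are ever throttled, which matches the abstract's point that speculation is restricted for just a few instructions.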
    Optimum packing of rectangles based on heuristic dynamic decomposition algorithm
    LI Bo WANG Shi SHI Songxin HU Junyong
    2013, 33(07):  1908-1911.  DOI: 10.11772/j.issn.1001-9081.2013.07.1908
    To solve the two-dimensional rectangle packing problem, a heuristic dynamic decomposition algorithm was proposed, which can also be applied to three-dimensional rectangle layout and global optimization problems. The container is orthogonally decomposed according to the placed rectangles, the best sub-container is selected according to the degree of placement coupling, and the state of all containers is then updated according to their interference relationships, so that large-scale and complex problems can be solved quickly and efficiently. The experimental results on internationally recognized benchmark cases show that the proposed algorithm outperforms similar algorithms: the layout utilization is increased by 9.4% and the computational efficiency is improved by up to 95.7%. The algorithm has been applied to the commercial packing software AutoCUT and has good application prospects.
    New coordinate optimization method for non-smooth losses based on alternating direction method of multipliers
    GAO Qiankun WANG Yujun WANG Jingxiao
    2013, 33(07):  1912-1916.  DOI: 10.11772/j.issn.1001-9081.2013.07.1912
    The Alternating Direction Method of Multipliers (ADMM) already has practical applications in the machine learning field. To adapt it to large-scale data processing and convex optimization with non-smooth losses, the original batch ADMM algorithm was improved with the mirror descent method, and a new coordinate optimization algorithm was proposed for solving non-smooth loss convex optimization problems. The new algorithm is simple to operate and computationally efficient. Detailed theoretical analysis verifies its convergence and shows that it attains the optimal convergence rate in the general convex case. Finally, experimental comparison with state-of-the-art algorithms demonstrates that it achieves a better convergence rate when the solution is sparse.
    Artificial intelligence
    Manifold learning and visualization based on self-organizing map
    SHAO Chao WAN Chunhong
    2013, 33(07):  1917-1921.  DOI: 10.11772/j.issn.1001-9081.2013.07.1917
    Self-Organizing Map (SOM) tends to suffer from the topological defect problem when learning and visualizing the intrinsic low-dimensional manifold structure of high-dimensional data sets. To solve this problem, a manifold learning algorithm named Dynamic Self-Organizing Map (DSOM) was presented. In DSOM, the training data set is expanded gradually according to its neighborhood structure and the map is trained step by step, by which local minima can be avoided and the topological defect problem can be overcome; meanwhile, the map size is increased dynamically, by which the time cost of the algorithm can be greatly reduced. The experimental results show that DSOM can learn and visualize the intrinsic low-dimensional manifold structure of high-dimensional data sets more faithfully than SOM. Moreover, compared with traditional manifold learning algorithms, DSOM obtains more concise visualization results and is less sensitive to the neighborhood size and to noise, which is also verified by the experimental results. The innovation of this paper lies in expanding the map size and the training data set synchronously according to the intrinsic neighborhood structure, by which the intrinsic low-dimensional manifold structure of high-dimensional data sets can be learned and visualized more concisely and faithfully.
    Artificial glowworm swarm optimization algorithm based on adaptive t distribution mixed mutation
    DU Xiaoxin ZHANG Jianfei SUN Ming
    2013, 33(07):  1922-1925.  DOI: 10.11772/j.issn.1001-9081.2013.07.1922
    The convergence of the Artificial Glowworm Swarm Optimization (AGSO) algorithm slows down, and may even fall into local minima, when some glowworms gather at non-global extreme points or wander around aimlessly. Concerning this problem, an AGSO algorithm based on adaptive t distribution mixed mutation was proposed. Adaptive t distribution mutation and optimization adjustment mutation were introduced into the AGSO algorithm to improve the diversity of the glowworm swarm and prevent the algorithm from falling into local minima. A mutation control factor was defined and, combined with historical status information, the adaptive t distribution mixed mutation was formulated. This mutation method enhances the abilities of global exploration and local exploitation. The simulation results on representative test functions and several application examples show that the proposed algorithm is reliable and efficient, and outperforms the traditional algorithm in both speed and precision of finding the optimum.
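    A minimal sketch of the t-distribution mutation idea follows; tying the degrees of freedom to the iteration count is one common adaptive choice, not necessarily the paper's mutation control factor:

```python
import numpy as np

def t_mutation(position, iteration, scale=0.1, rng=None):
    """Perturb a glowworm position with Student-t noise. With few
    degrees of freedom (early iterations) the heavy tails favour
    global exploration; as the degrees of freedom grow, the t
    distribution approaches a Gaussian, favouring local exploitation."""
    rng = rng if rng is not None else np.random.default_rng(0)
    df = max(1, iteration)          # degrees of freedom grow with time
    return position + scale * rng.standard_t(df, size=position.shape)

pos = np.zeros(3)
early = t_mutation(pos, iteration=1)    # heavy-tailed, Cauchy-like step
late = t_mutation(pos, iteration=500)   # near-Gaussian step
```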
    Multi-objective particle swarm optimization method with balanced diversity and convergence
    GENG Huantong GAO Jun JIA Tingting WU Zhengxue
    2013, 33(07):  1926-1929.  DOI: 10.11772/j.issn.1001-9081.2013.07.1926
    Particle Swarm Optimization (PSO) is a population-based algorithm that is effective for multi-objective optimization problems. Because premature convergence of the swarm makes the classical algorithm converge easily to a local Pareto front, the convergence and diversity of its solutions are not satisfactory. This paper proposed a multi-objective particle swarm optimization with independent dynamic inertia weights (DWMOPSO), in which each particle's inertia weight is changed according to its evolution speed, calculated from the history of the particle's best fitness. This improves the probability of escaping from local optima. Comparison with Coello's MOPSO on five standard test functions shows that the solutions of the new algorithm improve greatly in both convergence to the true Pareto front and diversity.
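    One way to map a particle's evolution speed to an inertia weight is sketched below; the linear mapping and its bounds are illustrative assumptions, since the abstract does not give the exact formula used in DWMOPSO:

```python
def dynamic_inertia(prev_best, curr_best, w_min=0.4, w_max=0.9):
    """Map a particle's evolution speed -- the relative improvement of
    its personal best fitness -- to an inertia weight. Fast-improving
    particles keep a large weight to go on exploring; stalled particles
    get a small weight to refine locally."""
    speed = abs(prev_best - curr_best) / (abs(prev_best) + 1e-12)
    speed = min(speed, 1.0)
    return w_min + (w_max - w_min) * speed
```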
    Dimensionality reduction algorithm of local marginal Fisher analysis based on Mahalanobis distance
    LI Feng WANG Zhengqun XU Chunlin ZHOU Zhongxia XUE Wei
    2013, 33(07):  1930-1934.  DOI: 10.11772/j.issn.1001-9081.2013.07.1930
    Considering that face recognition involves high-dimensional image data and that Euclidean distance cannot accurately reflect the similarity between samples, a Mahalanobis distance based Local Marginal Fisher Analysis (MLMFA) dimensionality reduction algorithm was proposed. A Mahalanobis distance was first learned from the existing samples and then used to choose neighbors and to reduce the dimensionality of new samples. Meanwhile, to describe intra-class compactness and inter-class separability, an intra-class "similarity" graph and an inter-class "penalty" graph were constructed using the Mahalanobis distance, so that the local structure of the data set was well preserved. In experiments on YALE and FERET, MLMFA outperforms the algorithms based on traditional Euclidean distance, with the maximum average recognition rate higher by 1.03% and 6% respectively. The results demonstrate that the proposed algorithm has very good classification and recognition performance.
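    For readers unfamiliar with the metric, a short sketch of the Mahalanobis distance on a toy data set (in MLMFA the metric would be learned from the face samples themselves):

```python
import numpy as np

def mahalanobis(x, y, cov_inv):
    """Mahalanobis distance between two samples, given the inverse of
    a covariance matrix estimated from training data. Unlike Euclidean
    distance, it accounts for the scale and correlation of features."""
    d = x - y
    return float(np.sqrt(d @ cov_inv @ d))

X = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.0, -2.0]])
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
dist = mahalanobis(X[0], X[1], cov_inv)
```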
    Speaker recognition method based on utterance level principal component analysis
    CHU Wen LI Yinguo XU Yang MENG Xiangtao
    2013, 33(07):  1935-1937.  DOI: 10.11772/j.issn.1001-9081.2013.07.1935
    To improve the calculation speed and robustness of Speaker Recognition (SR) systems, a speaker recognition algorithm based on utterance-level Principal Component Analysis (PCA) was proposed, in which utterance-level features derived from frame-level features were used in both training and recognition instead of the frame-level features themselves. Moreover, PCA was used for dimension reduction and redundancy removal. The experimental results show that the algorithm not only achieves a somewhat higher recognition rate, but also suppresses the effect of noise on the speaker recognition system. This verifies that the algorithm based on utterance-level PCA achieves faster recognition and a higher system recognition rate, and enhances the recognition rate in different noise environments under different Signal-to-Noise Ratio (SNR) conditions.
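    The two ingredients can be sketched as follows; pooling frame features into per-dimension mean and standard deviation is one simple way to build an utterance-level vector, not necessarily the paper's exact construction:

```python
import numpy as np

def utterance_vector(frames):
    """Collapse a (num_frames, dim) matrix of frame-level features
    into one utterance-level vector (per-dimension mean and std)."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def pca_project(X, k):
    """Project the rows of X onto the top-k principal components for
    dimension reduction and redundancy removal."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

frames = np.array([[1.0, 0.0, 0.0],
                   [2.0, 0.0, 0.0],
                   [3.0, 0.0, 0.0],
                   [4.0, 0.0, 0.0]])
u = utterance_vector(frames)
P = pca_project(frames, 1)
```

Because only one vector per utterance survives, both training and scoring touch far less data than frame-level processing, which is the speed argument the abstract makes.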
    Speech emotion recognition algorithm based on modified SVM
    LI Shuling LIU Rong ZHANG Liuqin LIU Hong
    2013, 33(07):  1938-1941.  DOI: 10.11772/j.issn.1001-9081.2013.07.1938
    In order to effectively improve the recognition accuracy of speech emotion recognition systems, an improved speech emotion recognition algorithm based on Support Vector Machine (SVM) was proposed, in which the SVM parameters, the penalty factor and the kernel function parameter, were optimized with a genetic algorithm. Furthermore, an emotion recognition model was established with the SVM method. The performance of the algorithm was assessed by computer simulations: recognition rates of 91.03% and 96.59% were achieved respectively in seven-emotion and common five-emotion recognition experiments on the Berlin database, and the rate increased to 97.67% on a Chinese emotional database. The simulation results demonstrate the validity of the proposed algorithm.
    Extension clustering-based extreme learning machine neural network
    LUO Genghe
    2013, 33(07):  1942-1945.  DOI: 10.11772/j.issn.1001-9081.2013.07.1942
    During the construction of an Extreme Learning Machine (ELM), its input weights are randomly generated; these parameters are not optimized and contain no prior knowledge of the inputs. To solve these problems, combining the clustering method of the Extension Neural Network type 2 (ENN-2), an Extension Clustering based Extreme Learning Machine (EC-ELM) neural network was proposed. In the EC-ELM network, the radial basis function centers of the hidden neurons are first taken as the input weights, the extension clustering method is then used to adaptively adjust the number of hidden neurons and the center vectors, and the output weights are obtained by training with the Moore-Penrose generalized inverse. The effectiveness of the network was tested on the Friedman#1 and Wine datasets. The results indicate that EC-ELM provides a simple and convenient way to train the structure and parameters of a neural network, with higher modeling accuracy and faster learning speed than the Extension theory based Radial Basis Function (ERBF) network or ELM, which provides a new way to apply EC-ELM to complex process modeling.
    Feature evaluation of radar signal based on aggregation, discreteness and divisibility
    DENG Yanli JIN Weidong LI Jiahui LIU Xin
    2013, 33(07):  1946-1949.  DOI: 10.11772/j.issn.1001-9081.2013.07.1946
    The quality of intrapulse features of radar signals is a significant basis for deciding whether the signals can be differentiated effectively. For evaluating this quality quantitatively, a method adopting fuzziness and close-degree to evaluate the aggregation and discreteness of intrapulse features was proposed. The spatial distribution of intrapulse features of radar signals was analyzed with this method: intrapulse feature aggregation was evaluated by fuzziness, and intrapulse feature discreteness was evaluated by close-degree. For the overlapping states of the feature space distribution, a linear separability measure of intrapulse features was put forward based on within-class distance, between-class distance and the linear discriminant criterion. The simulation results, based on experiments with two intrapulse features extracted via the time-frequency atom approach from five kinds of radar signals, show that the proposed method and measure are effective and feasible, and provide a new idea and approach for quantitatively evaluating features of radar emitter signals.
    Online transfer-Bagging question recommendation based on hybrid classifiers
    WU Yunfeng FENG Jun SUN Xia LI Zhan FENG Hongwei HE Xiaowei
    2013, 33(07):  1950-1954.  DOI: 10.11772/j.issn.1001-9081.2013.07.1950
    Traditional Collaborative Filtering (CF) often suffers from a shortage of historical information. A transfer-Bagging algorithm based on hybrid classifiers was proposed for question recommendation. The main idea is to cast the recommendation and prediction problem into the framework of transfer learning: the users for whom questions are to be recommended are treated as the target domain, while similar users with applicable historical information are employed as the auxiliary domain to help train the target classifiers. The experimental results on both a question recommendation platform and popular open datasets show that the accuracy of the proposed algorithm is 10%-20% higher than CF and 5%-10% higher than the single Bagging algorithm. The method solves the cold start and sparse data problems in question recommendation, and can be generalized to product recommendation on E-commerce platforms.
    Adaptive multi-view learning and its application to image classification
    MAO Jinlian
    2013, 33(07):  1955-1959.  DOI: 10.11772/j.issn.1001-9081.2013.07.1955
    Since existing multi-view learning algorithms often lack data adaptiveness in the nearest neighbor graph construction procedure, an Adaptive Multi-View Learning (AMVL) algorithm was proposed. Firstly, by utilizing the automatic sample selection property of the L1 norm constraint, multiple view-related directed L1-graphs are constructed. Secondly, according to the obtained L1 graphs, the low-dimensional reconstruction error in each view is minimized. Lastly, the objective function of the AMVL algorithm is obtained by performing a global coordinate alignment process across the different views, and an iterative optimization method is proposed to solve it. The algorithm was applied to image classification on two public image datasets, Corel5K and NUS-WIDE-OBJECT, and compared with several existing methods. The experimental results show that: a) the proposed algorithm increases the classification accuracy by up to 5% and 2% respectively on the two datasets; b) the optimization method converges within 100 iterations; c) the number of nearest neighbors learned by the algorithm is datum-adaptive.
    Prediction of trajectory based on modified Bayesian inference
    LI Wangao ZHAO Xuemei SUN Dechang
    2013, 33(07):  1960-1963.  DOI: 10.11772/j.issn.1001-9081.2013.07.1960
    Existing trajectory prediction algorithms have very low accuracy when only a limited number of trajectories are available. To address this problem, a Modified Bayesian Inference (MBI) approach was proposed, which constructs a Markov model to quantify the correlation between adjacent locations. MBI decomposes historical trajectories into sub-trajectories to obtain a more precise Markov model, from which the probability formula of Bayesian inference is derived. The experimental results on real datasets show that the MBI approach is two to three times faster than the existing algorithm, and has higher prediction accuracy and stability. MBI makes full use of the available trajectories and improves both the efficiency and the accuracy of trajectory prediction.
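    The Markov model over adjacent locations can be sketched in a few lines; this toy version only counts transitions and predicts the most probable next location, omitting MBI's sub-trajectory decomposition and Bayesian correction:

```python
from collections import Counter, defaultdict

def build_markov(trajectories):
    """First-order Markov model quantifying the correlation between
    adjacent locations: transition counts over consecutive pairs."""
    trans = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            trans[a][b] += 1
    return trans

def predict_next(trans, location):
    """Most probable next location given the current one."""
    counts = trans.get(location)
    return counts.most_common(1)[0][0] if counts else None

trajs = [['A', 'B', 'C'], ['A', 'B', 'D'], ['E', 'B', 'C']]
model = build_markov(trajs)
```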
    Continuous k nearest neighbor query algorithm based on road network
    LIU Degao LI Xiaoyu
    2013, 33(07):  1964-1968.  DOI: 10.11772/j.issn.1001-9081.2013.07.1964
    Concerning the redundant search of the Incremental Monitoring Algorithm (IMA), a new algorithm improving Continuous k Nearest Neighbor (CkNN) queries for moving objects based on IMA was proposed, adopting an incremental query processing mechanism. Exploiting the fact that nearby queries have similar results, a pretreatment process is first performed before the network expansion centered at the query point: the expansion trees of other queries are examined and their effective parts reused, thus avoiding blind expansion of the road network. During the network expansion of nodes, the expansion results of other queries with the same expansion direction are applied, which not only reduces repetitive expansion but also saves computational cost. The experimental results show that the query response time of the proposed algorithm is reduced and its efficiency improved compared with the traditional algorithm. In addition, the improved algorithm is applicable to different types of k nearest neighbor queries.
    Selective K-means clustering ensemble based on random sampling
    WANG Lijuan HAO Zhifeng CAI Ruichu WEN Wen
    2013, 33(07):  1969-1972.  DOI: 10.11772/j.issn.1001-9081.2013.07.1969
    Without any prior information about the data distribution, parameters or labels, not all base clustering results truly benefit the combination decision of a clustering ensemble; in addition, if every base clustering plays the same role, the performance of the ensemble may be weakened. This paper proposed a selective K-means clustering ensemble based on random sampling, called RS-KMCE. In RS-KMCE, random sampling helps avoid local minima in the process of selecting the base clustering subset for the ensemble, and an evaluation index defined from diversity and accuracy leads to a better base clustering subset, improving the performance of the clustering ensemble. The experimental results on two synthetic datasets and four UCI datasets show that the proposed RS-KMCE performs better than K-means, the K-means clustering ensemble, and the selective K-means clustering ensemble based on bagging.
    Network and distributed technology
    Implementation of gray level error compensation for optical 4f system
    HAN Liang JIANG Ziqi PU Xiujuan
    2013, 33(07):  1973-1975.  DOI: 10.11772/j.issn.1001-9081.2013.07.1973
    To compensate the gray level error in an optical 4f system, a compensation method based on histogram matching and a Radial Basis Function (RBF) neural network was proposed. The nonlinear transformation between the histograms of the input and output images in the optical 4f system was fitted by the RBF neural network to obtain the optimal estimate of the histogram matching curve, and the gray level error compensated image was then obtained by histogram matching according to this curve. Applying the proposed method in an actual optical 4f system achieved an average Peak Signal-to-Noise Ratio (PSNR) gain of 2.96dB and improved the visual quality of the processed images. The experimental results show that the gray level error in the optical 4f system can be compensated effectively and the precision of optical information processing improved by the proposed method.
    Codebook generation based on self-organizing incremental neural network for image classification
    YUAN Feiyun
    2013, 33(07):  1976-1979.  DOI: 10.11772/j.issn.1001-9081.2013.07.1976
    To solve the problem that traditional image classification ignores topological information during incremental learning, a new codebook generation method was proposed to improve classification accuracy. After a review of several codebook methods, the proposed method was discussed in detail: based on the Self-Organizing Incremental Neural Network (SOINN), which automatically generates clusters while preserving topological structures, it produces a more effective way of representing words and coding. The experimental results show that the new method gains up to nearly 1% in precision over similar algorithms across different sample scales and different kinds of codebook models. The results reveal that the new method classifies images more appropriately and accurately, and it can be widely used in all kinds of image classification tasks with higher precision and efficiency.
    Image retrieval based on edge direction histogram correlation matching
    SHEN Haiyang LI Yue'e ZHANG Tian
    2013, 33(07):  1980-1983.  DOI: 10.11772/j.issn.1001-9081.2013.07.1980
    Considering the advantages and disadvantages of image retrieval based on the edge orientation autocorrelogram, an image retrieval algorithm based on edge direction histogram correlation matching was proposed. Firstly, the salt-and-pepper noise in the image is filtered by an adaptive median filter, and the Sobel operator is then used to extract the image edges. The edge direction histogram is obtained by calculating the edge gradient magnitude and angle, and constitutes the feature vector. Lastly, the Spearman rank correlation coefficient between the feature vectors of images is calculated as the measure of image similarity. Compared with the algorithm based on the edge orientation autocorrelogram, the average precision and recall of the new algorithm increased by 10.5% and 9.7% respectively, and the retrieval time was reduced by 7.5%. The experimental results verify the effectiveness of the proposed algorithm, which can be applied in medium-to-large image retrieval systems to improve retrieval quality and system speed.
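    The descriptor and the matching measure can both be sketched briefly; note that a plain finite-difference gradient stands in for the paper's Sobel operator, and the rank correlation assumes no ties:

```python
import numpy as np

def edge_direction_histogram(gray, bins=8):
    """Magnitude-weighted histogram of gradient directions as a
    global edge-shape descriptor."""
    gx = np.gradient(gray.astype(float), axis=1)
    gy = np.gradient(gray.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # fold into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total else hist

def spearman(u, v):
    """Spearman rank correlation between two feature vectors."""
    ru = np.argsort(np.argsort(u)).astype(float)
    rv = np.argsort(np.argsort(v)).astype(float)
    ru -= ru.mean(); rv -= rv.mean()
    return float((ru @ rv) / np.sqrt((ru @ ru) * (rv @ rv)))

img = np.zeros((8, 8))
img[:, 4:] = 255.0          # a single vertical edge
h = edge_direction_histogram(img)
```

Because Spearman correlation compares ranks rather than raw bin values, the similarity measure is insensitive to monotone rescaling of the histogram.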
    Star pattern recognition algorithm of large field of view based on concentric circles segmentation
    LIU Heng ZHENG Quan QIN Long ZHAO Tianhao WANG Song
    2013, 33(07):  1984-1987.  DOI: 10.11772/j.issn.1001-9081.2013.07.1984
    Since the triangle identification algorithm commonly used in star sensor systems suffers from high data redundancy and low recognition speed, especially in initial recognition, a concentric circles based star pattern recognition algorithm for a large Field Of View (FOV) was proposed. After analyzing the star map to acquire its main star, eight concentric circles are drawn around the main star at certain radii, and the number of stars in each annulus is computed from their coordinates to obtain the distribution vector of companion stars. The navigation star feature database is constructed from the base catalogue with the same method, and pattern matching against the distribution vector yields the recognition result; the vectors in the database are sorted by their first dimension to accelerate recognition. The simulation results show that the algorithm needs much less storage for the navigation star feature database, and possesses good real-time performance, noise resistance and recognition rate: it achieves more than 88.9% accuracy with a recognition time of 95.3μs. It can also be integrated with other recognition algorithms at different stages to realize more efficient and accurate celestial navigation.
    Multi-pose cooperative face detection based on hypersphere support vector machine
    TENG Shaohua CHEN Haitao ZHANG Wei
    2013, 33(07):  1988-1990.  DOI: 10.11772/j.issn.1001-9081.2013.07.1988
    With regard to the poor accuracy of multi-pose face detection, a hyper-sphere Support Vector Machine (SVM) was used to detect human faces. A model composed of thirteen SVMs was proposed, divided into three levels: one SVM at the first level, three at the second and nine at the third. Each SVM is a hyper-sphere support vector machine, exploited to detect multi-pose faces from various angles. The three-tier model quickly reduces the detection area, which both accelerates detection and favors careful detection in a small local area. In addition, the k-Nearest Neighbor (kNN) algorithm was improved and applied to handle samples in the overlap of hyper-spheres. The experimental results show that the proposed algorithm improves face detection accuracy by about 5% over the traditional SVM-based face detection algorithm while maintaining the detection speed.
    Face recognition method fusing Monogenic magnitude and phase
    LI Kunming WANG Ling YAN Haiting LIU Jifu
    2013, 33(07):  1991-1994.  DOI: 10.11772/j.issn.1001-9081.2013.07.1991
    In order to use the magnitude and phase information of the filtered image for face recognition, a new method fusing the Monogenic local phase and local magnitude was proposed. Firstly, the phase was encoded with the exclusive or (XOR) operator, combining the orientation and scale information; the phase pattern maps and the magnitude-based binary pattern maps were then divided into blocks, from which histograms were extracted. Secondly, the block-based Fisher criterion was used to reduce the feature dimension and improve the discrimination ability. Finally, the cosine similarities of magnitude and phase were fused at the score level. The phase method, Monogenic Local XOR Pattern (MLXP), reached recognition rates of 0.97 and 0.94, and the fusing method reached 0.99 and 0.979 on the ORL and CAS-PEAL face databases respectively, outperforming all the other methods in the experiment. The results verify that the MLXP method is effective, and that fusing the Monogenic magnitude and phase not only avoids the Small Sample Size (SSS) problem of conventional Fisher discriminant methods, but also improves recognition performance significantly with lower time and space complexity.
    Novel fast rendition algorithm for dehazed image
    ZHANG Xiao WU Jun LOU Xiaolong
    2013, 33(07):  1995-1997.  DOI: 10.11772/j.issn.1001-9081.2013.07.1995
    Collected images are often degraded in contrast and visibility by scattering from atmospheric particles. To solve this problem, a new fast rendition algorithm for haze-degraded images was proposed, based on the monochrome atmospheric scattering model. According to prior knowledge and a priori assumptions about the airlight, a constrained optimization problem is constructed to estimate the airlight, and the scene albedo is then restored from the scattering model and the estimated airlight. The experimental results show that the proposed algorithm restores images with various depth maps well and improves scene visibility; it is also robust, and its running efficiency is more than double that of comparable algorithms. The algorithm can be applied in intelligent traffic monitoring and other visible-light computer vision systems.
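    Once the airlight A and transmission t are estimated, recovering the albedo is a direct inversion of the scattering model; a minimal sketch (the transmission floor t_min is a common safeguard, not taken from the paper):

```python
import numpy as np

def restore_albedo(I, airlight, t, t_min=0.1):
    """Invert the monochrome atmospheric scattering model
    I = J * t + A * (1 - t) to recover the scene albedo J.
    Flooring t avoids amplifying noise where transmission is tiny."""
    t = np.maximum(t, t_min)
    return (I - airlight) / t + airlight

# Synthetic check: haze a known albedo, then undo it.
J_true = np.array([0.2, 0.5, 0.8])
A, t = 1.0, np.array([0.5, 0.5, 0.5])
I = J_true * t + A * (1 - t)
J_rec = restore_albedo(I, A, t)
```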
    Dehazing algorithm based on dark channel with feedback regulation mechanism
    FANG Wen LIU Binghan
    2013, 33(07):  1998-2001.  DOI: 10.11772/j.issn.1001-9081.2013.07.1998
    When dark-channel image dehazing algorithms deal with bright regions that do not satisfy the dark channel prior, the estimated transmission is too small, which leads to large deviations from the original image in color, smoothness and texture. Therefore, a feedback regulation mechanism for dark-channel dehazing was proposed. First, haze was removed using the dark channel prior algorithm, and the difference in texture smoothness between the haze-free image and the original image was fed back; the bright region was then segmented using the Fuzzy C-Means (FCM) algorithm, and a Gaussian function was used to adjust the transmission of the bright region, making it closer to the actual transmission. Finally, the haze-free image was obtained using the adjusted transmission. The experimental results show that the proposed algorithm can effectively deal with bright regions that do not meet the dark channel assumption; the color of the dehazed image accords better with the real scene, and its visual effect is also better. This method can improve the robustness of outdoor surveillance systems.
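For context, the standard dark-channel transmission estimate that the feedback mechanism adjusts can be sketched as follows. This is a simplified one-dimensional illustration (the real dark channel uses a 2-D window); the window size, `omega` and image values are made-up assumptions.

```python
def dark_channel(img, patch=3):
    """Per-pixel dark channel: minimum over the RGB channels, then
    minimum over a 1-D sliding window (2-D in the full algorithm)."""
    mins = [min(px) for px in img]
    h = patch // 2
    return [min(mins[max(0, i - h):i + h + 1]) for i in range(len(mins))]

def transmission(img, A, omega=0.95):
    """Dark-channel transmission estimate t = 1 - omega * dark(I / A);
    omega keeps a trace of haze so distant scenery still looks natural."""
    norm = [[c / a for c, a in zip(px, A)] for px in img]
    return [1 - omega * d for d in dark_channel(norm)]

# Three RGB pixels and a white airlight (made-up data).
img = [(0.2, 0.3, 0.4), (0.8, 0.9, 1.0), (0.1, 0.2, 0.3)]
t = transmission(img, A=(1.0, 1.0, 1.0))
```

A bright pixel like the second one drags its neighborhood minimum up, so its estimated transmission stays small; this is exactly the bright-region failure mode the paper's feedback loop corrects.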
    GPU parallel implementation of edge-detection algorithm based on multidirectional linear gradient adjusted predictor
    DANG Xiangying BAO Rong JIANG Daihong
    2013, 33(07):  2002-2004.  DOI: 10.11772/j.issn.1001-9081.2013.07.2002
    Concerning the fixed and single direction of the Gradient Adjusted Predictor (GAP) template used in lossless image compression, and according to the characteristic that actual edges have the same linear increments, this paper proposed a Multidirectional Linear Gradient Adjusted Predictor (MLGAP) template. Firstly, the image was cut into four sub-images from the center to the periphery. Using Graphics Processing Unit (GPU) parallel technology, the predictive value of each sub-image was calculated with the MLGAP template, and error feedback was then used to construct the prediction error image. The threshold was calculated by the Otsu algorithm, the edges of the error image were classified, and finally the edges were thinned by the Hilditch algorithm. The simulation results show that the proposed method yields clear, complete and precise edges, and that the GPU parallel technology accelerates the image processing.
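For reference, the single-direction GAP predictor that MLGAP generalizes can be sketched as follows. This is a rendering of the classic CALIC predictor as commonly formulated, not the paper's MLGAP template; the neighbor naming (W, N, NE, NW, WW, NN, NNE) follows the usual raster-scan convention.

```python
def gap_predict(W, N, NE, NW, WW, NN, NNE):
    """Classic gradient-adjusted prediction: estimate the current pixel
    from causal neighbors, blending toward W or N near strong edges."""
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal activity
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical activity
    if dv - dh > 80:       # sharp horizontal edge: copy the west pixel
        return W
    if dh - dv > 80:       # sharp vertical edge: copy the north pixel
        return N
    pred = (W + N) / 2 + (NE - NW) / 4
    if dv - dh > 32:
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return pred
```

In a flat region the prediction reduces to the neighborhood value itself, so the prediction error image is near zero everywhere except along edges, which is what makes error-image thresholding usable for edge detection.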
    Unstructured road detection based on two-dimensional entropy and contour features
    GUO Qiumei HUANG Yuqing
    2013, 33(07):  2005-2008.  DOI: 10.11772/j.issn.1001-9081.2013.07.2005
    The scene of an unstructured road is complex and easily influenced by many factors. To address this detection difficulty, a road detection algorithm based on contour features and two-dimensional maximum entropy was proposed. A twice-applied two-dimensional maximum entropy segmentation method, combined with an invariant color feature, was used for road image segmentation. Afterwards, contour features were extracted from the segmentation result by a boundary tracking algorithm, and the maximum contour was chosen. Finally, an improved mid-to-side algorithm was used to search for road edge points; the road boundary was then reconstructed through a road model and the road direction was judged. The experimental results show that, compared with the traditional algorithm, the detection accuracy is improved by about 25% in three kinds of unstructured scenes. In addition, the method is robust against shadows and can recognize the road direction efficiently.
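A one-dimensional sketch of maximum-entropy thresholding (Kapur's criterion) may help fix ideas. The paper applies the two-dimensional gray-level/neighborhood-mean version, so this is only an illustrative simplification, and the histogram values are made up.

```python
import math

def max_entropy_threshold(hist):
    """Kapur's 1-D maximum-entropy threshold: pick the gray level that
    maximizes the sum of the entropies of the two resulting classes."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(p[:t])
        w1 = 1 - w0
        if w0 == 0 or w1 == 0:
            continue
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal toy histogram: dark road surface vs. bright background.
hist = [10, 40, 10, 0, 0, 12, 35, 13]
t = max_entropy_threshold(hist)
```

The chosen threshold falls in the empty valley between the two modes, separating road from background without any tuning.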
    Plant leaf recognition method based on clonal selection algorithm and K nearest neighbor
    ZHANG Ning LIU Wenping
    2013, 33(07):  2009-2013.  DOI: 10.11772/j.issn.1001-9081.2013.07.2009
    To reduce classifier design and training time, a new method combining the Clonal Selection Algorithm and K Nearest Neighbor (CSA+KNN) was proposed. After image preprocessing, comprehensive feature information was obtained from geometric and texture features, and CSA+KNN was used to train on and classify the plant leaf samples. A plant leaf database with 100 species was used to test the proposed algorithm, and the recognition accuracy was 91.37%. Compared with other methods, the experimental results demonstrate the efficiency, accuracy and high training speed of the proposed method, and verify the significance of texture features in leaf recognition. The CSA+KNN method broadens the field of plant leaf recognition, and it can be applied to build digitalized plant specimen museums.
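The KNN half of the pipeline can be sketched as follows: a plain Euclidean-distance vote on hypothetical leaf features. The clonal selection stage, which would first condense the training set into high-affinity memory cells, is omitted, and all features and labels are inventions for illustration.

```python
from collections import Counter

def knn_classify(sample, train, k=3):
    """Vote among the k training samples nearest to `sample`;
    `train` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(sample, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D leaf features (e.g. aspect ratio, texture energy).
train = [((1.0, 1.0), 'oak'), ((1.2, 0.9), 'oak'), ((5.0, 5.0), 'maple')]
label = knn_classify((1.1, 1.0), train, k=3)
```

Because KNN has no training phase of its own, the overall training cost is dominated by the clonal-selection condensation, which is where the speed-up claimed in the abstract comes from.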
    Automatic brain extraction method based on hybrid level set model
    AO Qian ZHU Yanping JIANG Shaofeng
    2013, 33(07):  2014-2017.  DOI: 10.11772/j.issn.1001-9081.2013.07.2014
    Automatic extraction of the brain is an important preprocessing step for the analysis of internal brain structures. To improve the extraction result, a method for automatic brain extraction based on a modified Brain Extraction Tool (BET) and a hybrid level set model was proposed. The first step of the proposed method was obtaining a rough brain boundary with the improved BET algorithm. Morphological dilation was then applied to the rough brain boundary to initialize the Region of Interest (ROI), in which a hybrid active contour model was defined to obtain a new contour. The ROI and the new contour were updated iteratively until the accurate brain boundary was achieved. Seven Magnetic Resonance Imaging (MRI) volumes from the Internet Brain Segmentation Repository (IBSR) website were used in the experiment, on which the proposed method achieved a low average total misclassification ratio of 7.89%. The experimental results show that the proposed method is effective and feasible.
    New engineering method for defect detection of batteries based on computer vision
    XU Jianyuan YU Hongyang
    2013, 33(07):  2018-2021.  DOI: 10.11772/j.issn.1001-9081.2013.07.2018
    Equipment failure often introduces defects on the surface of batteries during battery production. Traditional manual inspection is weak in timeliness and durability, yet no efficient automatic detection method for ordinary batteries has been available so far. Concerning the distribution and morphological characteristics of the defects, a new automatic optical detection method based on computer vision was proposed. The method used the Canny operator and a virtual-granule collision method with minimum-value searching to determine the area to be detected, based on the morphological features of the battery anode surface. Considering the sharpness of the defects, Harris corner points were used to mark the defects, and false mark points were filtered out by the degree of aggregation of the points. The defect region was finally extracted according to the locations of the mark points. The experimental results illustrate that the detection success rate of the proposed method is over 90% and that the method works more efficiently than the popular wavelet analysis method. The study provides a reference for automatic product quality inspection in battery production.
    Research and design of Web service-based matter-centric middleware in Internet of Things
    ZHENG Shuquan WANG Qian DING Zhigang
    2013, 33(07):  2022-2025.  DOI: 10.11772/j.issn.1001-9081.2013.07.2022
    To solve problems in the Internet of Things (IOT) such as high coupling between levels, low reusability and the difficulty of massive data processing, this paper proposed a Web service-based middleware model for the IOT. It used XML files for configuration to achieve logical separation, achieved function modularization through the application of "objects" in the IOT, processed massive data in a distributed manner via load balancing, and ensured information security by setting role permissions. The experimental results show that the middleware model reduces reliance on the gateway type, improves program reusability and handles data at the ten-thousand-record scale. In a vehicle monitoring application of the middleware system developed from this model, the development cost was reduced, the development efficiency was improved, and the system could be configured flexibly.
    Overview of complex event processing technology and its application in logistics Internet of Things
    JING Xin ZHANG Jing LI Junhuai
    2013, 33(07):  2026-2030.  DOI: 10.11772/j.issn.1001-9081.2013.07.2026
    Complex Event Processing (CEP) is an advanced analytical technology that deals with high-velocity event streams in real-time and is primarily applied in Event-Driven Architecture (EDA) systems; it is helpful for realizing intelligent business in many applications. To report its research status, the paper introduced the basic meaning and salient features of CEP, and proposed a system architecture model composed of nine parts. Afterwards, the main constituents of the model were reviewed in terms of key technologies and their formalization. To illustrate how to use CEP in the logistics Internet of Things, an application framework with CEP infiltrated into it was also proposed. It can be concluded that CEP has many merits and can play an important role in its application fields. Finally, the shortcomings of this research domain were pointed out and future work was discussed. The paper systematically analyzed CEP technology in terms of theory and practice so as to further its development.
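A toy illustration of the kind of pattern rule a CEP engine evaluates over an event stream: a "load followed by unload of the same tag within a time window" rule, as might arise in logistics tracking. The rule, the tag names and the timestamps are all hypothetical; a real engine would compile such rules from a dedicated query language.

```python
def detect(events, window=5.0):
    """Emit an alert for every tag whose 'load' event is followed by an
    'unload' event within `window` seconds. `events` is a time-ordered
    stream of (timestamp, kind, tag) tuples."""
    pending = {}   # tag -> timestamp of its open 'load' event
    alerts = []
    for ts, kind, tag in events:
        if kind == "load":
            pending[tag] = ts
        elif kind == "unload" and tag in pending:
            if ts - pending.pop(tag) <= window:
                alerts.append(tag)
    return alerts

# Hypothetical RFID event stream from two pallets.
stream = [(0.0, "load", "pallet1"), (2.0, "load", "pallet2"),
          (3.0, "unload", "pallet1"), (9.0, "unload", "pallet2")]
alerts = detect(stream)
```

Only the first pallet matches, because the second pallet's unload falls outside the window; this stateful correlation across primitive events is what distinguishes CEP from simple per-event filtering.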
    New inverted index storage scheme for Chinese search engine
    MA Jian ZHANG Taihong CHEN Yanhong
    2013, 33(07):  2031-2036.  DOI: 10.11772/j.issn.1001-9081.2013.07.2031
    After analyzing the inverted index structure and access mode of the open-source search engine ASPSeek, this paper gave an abstract definition of the inverted index. To solve the difficulty of updating ASPSeek's inverted index and the efficiency issues caused by accessing it directly through the operating system's file cache, and considering the characteristics of 1.25 million Chinese agricultural Web pages, this article proposed a new blocked inverted index storage scheme with a buffer mechanism based on the CLOCK replacement algorithm. The experimental results show that the new scheme is more efficient than ASPSeek whether the buffer system is disabled or enabled. When the buffer system was enabled and 160 thousand Chinese terms or 50 thousand high-frequency Chinese terms were used as the test set, the retrieval time of the new scheme tended to a constant after one million accesses. Even when using all 827309 terms as the test set, the retrieval time of the new scheme began to converge after two million accesses.
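The CLOCK replacement policy underlying the buffer mechanism can be sketched as follows. This is a minimal in-memory illustration under the assumption that each frame caches one term's posting block; the class name, frame layout and the sample terms are inventions.

```python
class ClockBuffer:
    """Minimal CLOCK page buffer: each frame carries a reference bit;
    on a miss the hand sweeps forward, giving referenced frames a
    second chance (clearing their bit) and evicting the first
    unreferenced frame it finds."""
    def __init__(self, size):
        self.size = size
        self.frames = []   # list of [key, ref_bit]
        self.hand = 0
        self.hits = 0

    def access(self, key):
        for frame in self.frames:
            if frame[0] == key:
                frame[1] = 1        # hit: mark as recently used
                self.hits += 1
                return
        if len(self.frames) < self.size:
            self.frames.append([key, 1])
            return
        while self.frames[self.hand][1]:    # second chance
            self.frames[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.size
        self.frames[self.hand] = [key, 1]   # evict and load
        self.hand = (self.hand + 1) % self.size

# Hypothetical query terms hitting a two-frame buffer.
buf = ClockBuffer(2)
for term in ["小麦", "玉米", "小麦", "水稻", "小麦"]:
    buf.access(term)
```

CLOCK approximates LRU at a fraction of its bookkeeping cost, which is why it suits a retrieval workload of millions of term accesses.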
    Metagraph for genealogical relationship visualization
    LIU Jundan ZHAO Shuliang ZHAO Jiaojiao GUO Xiaobo CHEN Min LIU Mengmeng
    2013, 33(07):  2037-2040.  DOI: 10.11772/j.issn.1001-9081.2013.07.2037
    Concerning the poor readability and understandability of existing display forms for genealogical data, this paper presented a metagraph visualization of genealogical data. In the metagraph representation of a genealogy, the generating set consists of all persons in the family, and each edge represents only a "parents-child" relationship: an edge is a pair consisting of an invertex and an outvertex, where the invertex contains the two nodes of the marital relationship and the outvertex is the set of their child nodes. The experimental results show that, on the same data, the number of edges in the metagraph form is almost half that of the common form, and the visualization effect is significantly improved. The proposed methodology also offers guidance for the mathematical modeling of genealogy, research on genealogy visualization and the improvement of genealogical information systems.
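The edge structure described above can be illustrated with a small hypothetical family; the person names and the `children_of` helper are inventions for illustration.

```python
# Each metagraph edge maps an invertex (the married couple) to an
# outvertex (the set of their children), so one edge replaces several
# person-to-person links of an ordinary genealogy graph.
edges = [
    (frozenset({"grandfather", "grandmother"}), frozenset({"father", "aunt"})),
    (frozenset({"father", "mother"}), frozenset({"son", "daughter"})),
]

def children_of(person, edges):
    """Union of the outvertices of every edge whose invertex
    contains the given person."""
    kids = set()
    for invertex, outvertex in edges:
        if person in invertex:
            kids |= outvertex
    return kids
```

Two edges here encode what an ordinary "one arrow per parent-child pair" drawing would need eight arrows for, which is the roughly-halved edge count the abstract reports.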
    Validation method of security features in safety critical software requirements specification
    WANG Fei GUO Yuanbo LI Bo HAO Yaohui
    2013, 33(07):  2041-2045.  DOI: 10.11772/j.issn.1001-9081.2013.07.2041
    Since the security features described in natural language in safety-critical software requirements specifications are inaccurate and inconsistent, a validation method for security features based on UMLsec was proposed. The method completed the UMLsec model by customizing stereotypes, tags and constraints for the security features of the core classes, on the basis of the class and sequence diagrams of the UML requirements model. Afterwards, a support tool designed and implemented for UMLsec was used for automatic verification of the security features. The experimental results show that the proposed method can accurately describe the security features in safety-critical requirements specifications and can automatically verify whether they meet the security requirements.
    Behavior analysis technology of software network communication based on session association
    DU Kunping KANG Fei SHU Hui SUN Jing
    2013, 33(07):  2046-2050.  DOI: 10.11772/j.issn.1001-9081.2013.07.2046
    Concerning software network communication behavior, a reverse analysis method based on session association was proposed. The method first restored the network traffic communication sessions and the Application Programming Interface (API) call sequence sessions produced by software, and then associated the restored sessions. Through this association, a direct mapping was built between the two kinds of software network behavior analysis, which are based on execution trace analysis and on network traffic analysis respectively. A prototype system was designed and implemented, and the function call list was extracted based on it. The reverse analysis method based on session association makes the reverse analysis of software network behaviors fast and convenient.
    Contention management model based on relativity-detection of conflicts
    CHU Caijun HU Dasha JIANG Yuming
    2013, 33(07):  2051-2054.  DOI: 10.11772/j.issn.1001-9081.2013.07.2051
    The Contention Manager (CM), which resolves conflicting transactions, plays a significant role in obstruction-free software transactional memory. A relativity-detection contention management model was put forward to solve the problem that the performance of existing contention management policies is sensitive to their workloads. This model detects and analyzes the relativity of conflicts from past decision-making records, and then takes the relativity as the basis of the current arbitration, which helps to reach more favorable resolutions. Two benchmarks were tested, and the experimental results show that the model is flexible and adaptable: the detected transactions that finally commit can account for up to 30% of the system throughput, and with this model the total transaction throughput is about 11% higher than that of the other reference objects.
    Typical applications
    Dynamic simulation based on dyna971 for computer numerical control boring and milling machine
    WU Youde ZHU Liuxian LI Bailin
    2013, 33(07):  2055-2058.  DOI: 10.11772/j.issn.1001-9081.2013.07.2055
    To ensure good dynamic performance, a dynamic simulation analysis is needed at the design stage for a Computer Numerical Control (CNC) floor-type boring and milling machine because of its complex structure. Accordingly, a practical dynamic simulation scheme was presented. The proposed scheme first obtained the modal parameters of joint surfaces such as bed-slide, slide-column and column-spindle box via modal tests on a prototype of the CNC floor boring and milling machine; then the joint parameters, identified from the modal parameters by finite element optimization and simulated by COMBIN14 elements, were used to establish the finite element model of the machine. In the dynamic simulation on the dyna971 software platform, the smooth stress and strain waveforms under external force showed good dynamic characteristics. The research result has been integrated into the batch production of this kind of machine.
    Fireworks simulation based on CUDA particle system
    CHEN Xiuliang LIANG Yingjie GUO Fuliang
    2013, 33(07):  2059-2062.  DOI: 10.11772/j.issn.1001-9081.2013.07.2059
    The elementary theory of the particle system coincides with the objective laws of the natural world, so particle systems can be used to simulate fireworks and other complex phenomena. To solve the problem that particle system simulation consumes huge computation and memory resources, the paper built a basic particle system model on the Compute Unified Device Architecture (CUDA) framework, in which the storage and motion update of particles were considered. A parallel KD-TRIE neighbor particle search algorithm based on CUDA was then studied. Finally, the detailed implementation of the fireworks simulation based on the CUDA particle system was discussed. The results show that the model can realistically simulate the rise and bloom of fireworks at frame rates of up to 320 frames per second, enhancing the fidelity and real-time performance of the simulation.
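A CPU-side sketch of the particle life cycle (spawn, Euler update, cull) may clarify the model. The spawn point, speeds and lifetimes are made-up values, and the real system would run the per-particle update as a CUDA kernel rather than a Python loop.

```python
import random

random.seed(42)

GRAVITY = (0.0, -9.8, 0.0)

def make_burst(n, speed=20.0):
    """Spawn n spark particles at the burst point with random directions;
    each particle is (position, velocity, remaining life in seconds)."""
    parts = []
    for _ in range(n):
        v = [random.uniform(-1, 1) for _ in range(3)]
        norm = sum(c * c for c in v) ** 0.5 or 1.0
        parts.append(([0.0, 50.0, 0.0],
                      [speed * c / norm for c in v],
                      random.uniform(1.0, 2.0)))
    return parts

def step(particles, dt):
    """One Euler integration step; expired particles are dropped, which
    on the GPU corresponds to compacting the particle buffer each frame."""
    alive = []
    for pos, vel, life in particles:
        if life - dt <= 0:
            continue
        vel = [v + g * dt for v, g in zip(vel, GRAVITY)]
        pos = [p + v * dt for p, v in zip(pos, vel)]
        alive.append((pos, vel, life - dt))
    return alive

burst = make_burst(100)
for _ in range(30):            # half a second of simulation at 60 fps
    burst = step(burst, dt=1 / 60)
```

Every particle follows the same independent update rule, which is exactly why the workload maps so well onto one CUDA thread per particle.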
    Small fault detection method of instruments based on independent component subspace algorithm and ensemble strategy
    HU Jichen HUANG Guoyong SHAO Zongkai WANG Xiaodong ZOU Jinhui
    2013, 33(07):  2063-2066.  DOI: 10.11772/j.issn.1001-9081.2013.07.2063
    To solve the problem of small fault detection for instruments in the process industry, independent components were extracted by Independent Component Analysis (ICA) from recorded instrument data, and independent component subspaces were established according to the contribution matrix. A fault detection model with statistical variables was constructed in each independent component subspace, and a proper ensemble strategy was chosen to combine all the fault detection results. Finally, the faulty instrument was located by a contribution algorithm. The simulation results on the TE (Tennessee Eastman) process show that this method has higher precision in small fault detection and more flexibility with a proper ensemble strategy.
    MicroWindows-based multi-device support intelligent Chinese input system
    ZHOU Huijuan XIANG Rong
    2013, 33(07):  2067-2070.  DOI: 10.11772/j.issn.1001-9081.2013.07.2067
    Existing embedded Chinese input systems are restricted by problems such as supporting a single type of input device, low efficiency and poor user experience. To solve these problems, this paper put forward a MicroWindows-based intelligent Chinese input system. First, messages from different types of devices were packed and delivered in the device input layer. The delivered messages were then uniformly encoded and distributed by the message processing center. Finally, an improved N-gram model was combined with a user model to implement the Chinese input method. Experiments on Microprocessor without Interlocked Piped Stages (MIPS) and other platforms show that the system runs well with fluent and fast Chinese input, and its input efficiency is raised by 35% compared with traditional Chinese input methods.
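The N-gram step can be illustrated with a miniature bigram table; all counts, characters and the `best_candidate` helper are hypothetical, and the paper's improved model and user model are not reproduced here.

```python
# Hypothetical bigram counts from a corpus: how often character w2
# follows character w1. They rank candidate characters for an
# ambiguous pinyin syllable.
bigram = {
    ("我", "们"): 9, ("我", "门"): 1,
    ("他", "们"): 8, ("他", "门"): 2,
}
candidates = {"men": ["们", "门"]}

def best_candidate(prev_char, pinyin):
    """Pick the candidate with the highest bigram count after
    prev_char; smoothing and user-model interpolation are omitted."""
    return max(candidates[pinyin],
               key=lambda c: bigram.get((prev_char, c), 0))

choice = best_candidate("我", "men")
```

After "我", the syllable "men" resolves to "们" rather than "门", so the user gets the intended character without scrolling a candidate list, which is where the claimed efficiency gain comes from.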
    Design and implementation of regional malodor on-line monitoring platform
    YU Hui LI Jinhang WANG Yuangang
    2013, 33(07):  2071-2073.  DOI: 10.11772/j.issn.1001-9081.2013.07.2071
    To improve malodor management and emergency response ability, this paper proposed a regional malodor on-line monitoring platform. With attention to network load balancing and dynamic extensibility, the platform implemented real-time and remote monitoring. The remote monitoring module, based on the Remote Desktop Protocol (RDP), could adjust a terminal's parameters, raise over-limit alarms and sample at different rates. A combined algorithm based on the Advanced Encryption Standard (AES) and MD5 digital signatures was designed to improve the safety of the RDP. The platform has been piloted in the Dagang Petrochemical Industrial Park in Tianjin Binhai New Area; it can accumulate data and experience for future research on malodor diffusion models and malodor pollution control, and makes technical preparation for merging malodor on-line monitoring systems into the Internet of Things (IOT) for environmental protection.
    TT&C network resource assignment model and simulation based on genetic algorithm
    DONG Jiaqiang
    2013, 33(07):  2074-2077.  DOI: 10.11772/j.issn.1001-9081.2013.07.2074
    To relieve the resource pressure of multi-satellite on-orbit telemetry in the TT&C (Tracking, Telemetry and Control) network, a resource assignment model of the TT&C network was constructed on the basis of analyzing resource conflicts in the network. The model was built on the existing hardware resources of the TT&C network without increasing its construction cost. An effectiveness factor was introduced, which not only preferentially met the TT&C demands of higher-priority satellites, but also fully considered practical factors at each TT&C station, such as delay and bandwidth, when receiving a TT&C mission. By allocating different weights, the model maximized the utilization of existing TT&C resources, and it was solved by a Genetic Algorithm (GA). The simulation results demonstrate that, compared with the traditional resource assignment method, the task completion ratio of the model is improved by 23% and the utilization of TT&C resources is over two times higher, while the running time of the algorithm is comparable. Therefore, under conditions of multi-window, simultaneous multi-satellite telemetry, the model can assign TT&C resources much more reasonably and improve the utilization of the whole TT&C network.
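A minimal GA sketch of the assignment idea follows. The benefit matrix (folding priority, delay and bandwidth into one effectiveness value per task-station pair), the per-station load limit and all GA parameters are made-up assumptions, not the paper's model.

```python
import random

random.seed(0)

# Hypothetical toy instance: 4 satellite passes, 2 TT&C stations.
# benefit[task][station] is the effectiveness of that pairing;
# 0 means the visibility window is infeasible.
benefit = [
    [5, 3],
    [0, 4],
    [2, 6],
    [7, 0],
]

def fitness(assign):
    """Total effectiveness of an assignment (one station index per
    task); a station may serve at most 2 passes in this toy setup."""
    load = [0, 0]
    score = 0
    for task, station in enumerate(assign):
        load[station] += 1
        score += benefit[task][station]
    return score if max(load) <= 2 else 0   # hard overload penalty

def evolve(pop_size=30, gens=60, pm=0.2):
    """Elitist GA: keep the best half, refill with one-point
    crossover children, occasionally flip one gene."""
    pop = [[random.randrange(2) for _ in benefit] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(benefit))
            child = a[:cut] + b[cut:]
            if random.random() < pm:
                child[random.randrange(len(child))] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

For this toy instance the optimum is 22 (tasks 0 and 3 on station 0, tasks 1 and 2 on station 1); the real model replaces the benefit matrix with weighted effectiveness factors over many stations, windows and satellites.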
    Improved complementary filter for attitude estimation of micro air vehicles using low-cost inertial measurement units
    YAN Shiliang WANG Yinling ZHANG Hua
    2013, 33(07):  2078-2082.  DOI: 10.11772/j.issn.1001-9081.2013.07.2078
    Concerning how to achieve an effective estimate of gravitational acceleration for attitude estimation of a Micro Air Vehicle (MAV) under all dynamic conditions, an improved explicit complementary filter combined with a stepped-gain schedule was proposed. To keep the nonlinear complementary filter valid when a MAV circles for an extended period, a centripetal acceleration model was built from gyroscope and indicated airspeed data, which yielded precise estimation based on the estimated gravitational acceleration and avoided reconstructing the attitude estimate. In the Proportional-Integral (PI) compensation phase, the proportional and integral gains achieve better adaptability by assigning different cut-off frequencies to the pitch and roll angle estimates respectively. The experimental results show that the attitude angle estimation error can be kept within ±2°. Compared with a typical filter algorithm, better efficiency and lower estimation error were achieved, so the proposed algorithm can be applied to accurate attitude estimation for MAVs with low-cost inertial measurement units.
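The complementary-filter idea, gyro integration corrected at low frequency by an accelerometer-derived angle, can be sketched for a single axis. The gain value, gyro bias and update loop below are illustrative assumptions, not the paper's explicit filter or its stepped gain schedule.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, k):
    """One update step: trust the integrated gyro at high frequency and
    the accelerometer-derived angle at low frequency. k in [0, 1] sets
    the crossover; the paper schedules this gain by flight condition."""
    return (1 - k) * (angle + gyro_rate * dt) + k * accel_angle

# Stationary vehicle: the accelerometer reads the true 0 rad pitch
# while a biased gyro reports a constant 0.01 rad/s drift.
pitch = 0.5   # deliberately wrong initial estimate
for _ in range(2000):
    pitch = complementary_filter(pitch, gyro_rate=0.01, accel_angle=0.0,
                                 dt=0.01, k=0.02)
```

The estimate converges to a small residual near (1 - k) * bias * dt / k instead of drifting without bound, which is the whole point of blending the two sensors.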
    Scale design of Chen's system and its hardware implementation
    LYU Ensheng ZHANG Guangfeng
    2013, 33(07):  2083-2086.  DOI: 10.11772/j.issn.1001-9081.2013.07.2083
    To use chaotic systems effectively, a scale design method based on Chen's chaotic system was proposed. Scaling transformation and differential-to-integral conversion were performed on Chen's system, and the characteristics of the scaled system were analyzed in detail. Taking the scaled system as the model, the scaled Chen's chaotic circuit was built with common electronic circuit structures. Synchronization of the drive and response systems could be realized by applying a unidirectional single-variable linking substitution method to the scaled system. The theoretical analyses and hardware experimental results show that the proposed method can be applied directly to industrial production.
    Modeling and simulation for route-transition of mine-hunting and mine-sweeping by helicopter
    REN Dongyan SUN Mingtai
    2013, 33(07):  2087-2090.  DOI: 10.11772/j.issn.1001-9081.2013.07.2087
    Concerning route transition of a helicopter towing mine-hunting and mine-sweeping equipment, three patterns of route transition were put forward. By analyzing the turning characteristics of the helicopter and the real movement of the underwater towed cable system, a kinematic model of the turning process was established, which avoided the complexity of fluid dynamics and increased the calculation speed. Finally, the correctness was verified by examples of three flight states of a helicopter towing mine-hunting and mine-sweeping equipment. By increasing the helicopter's velocity, the underwater towed cable system quickly approached the straight sea lane, in accordance with the real movement. The trajectory of the turning process during route transition was truly reflected, and the result can be used for decision-making in route optimization and operation schemes.
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn