Table of Contents

    10 January 2015, Volume 35 Issue 1
    Design of relay node selection scheme for cooperative communication
    ZHAO Yuli, GUO Li, ZHU Zhiliang, YU Hai
    2015, 35(1):  1-4.  DOI: 10.11772/j.issn.1001-9081.2015.01.0001
    Abstract | PDF (604KB)

    As the instantaneous Channel State Information (CSI) of the source-relay and relay-destination links affects the overall Bit Error Rate (BER) of a cooperative communication system, a relay selection scheme which evaluated the two-stage channel coefficients was proposed. Firstly, for each candidate relay, the channel coefficients of the source-relay channel and the relay-destination channel were compared according to its CSI, and the worse of the two was identified. Then, a node set containing the approximately optimal relays was obtained by sorting the candidate relays by their worse channel coefficients. Finally, the relay in the set with the highest sum of the two-stage channel coefficients was selected to participate in the cooperative transmission. The simulation results reveal that the required Signal-to-Noise Ratio (SNR) of the proposed scheme decreases by 0.4 dB and 0.2 dB compared with the best-worse-channel selection scheme and the relay selection scheme based on the nearest neighbor relation, respectively, when the number of candidate relay nodes is 100 and 5 and the BER decreases to 10^-4 and 10^-5. In general, the proposed scheme can extend the information transmission range and improve the reliability of the wireless relay network.
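    The three-step selection described above can be sketched as follows; the candidate set size and the coefficient values in the usage are illustrative assumptions, not values from the paper.

```python
def select_relay(channels, set_size=3):
    """channels maps a relay id to its (source-relay, relay-destination)
    channel coefficient magnitudes, known from the instantaneous CSI."""
    # Steps 1-2: rank candidates by their worse (bottleneck) coefficient
    # and keep a small set of approximately optimal relays.
    ranked = sorted(channels, key=lambda r: min(channels[r]), reverse=True)
    candidates = ranked[:set_size]
    # Step 3: within the set, choose the relay with the largest sum of
    # the two-stage coefficients.
    return max(candidates, key=lambda r: sum(channels[r]))
```

    For example, with candidates {'a': (0.9, 0.1), 'b': (0.5, 0.6), 'c': (0.55, 0.5)} and a set size of 2, relay 'a' is filtered out by its weak second hop and 'b' wins on coefficient sum.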

    Optimal beacon nodes-based centroid localization algorithm for wireless sensor network
    CHEN Xiaohai, PENG Jian, LIU Tang
    2015, 35(1):  5-9.  DOI: 10.11772/j.issn.1001-9081.2015.01.0005
    Abstract | PDF (854KB)

    To improve the accuracy of the Centroid Localization (CL) algorithm in Wireless Sensor Network (WSN), an Optimal Beacon nodes-based Centroid Localization (OBCL) algorithm was proposed. In this algorithm, four mobile beacon nodes were used. First, the path of each mobile beacon node was planned. Second, each unknown node selected the optimal beacon nodes from the candidate beacon nodes according to Set Deviation Degree (SDD) to estimate its location. Besides, to solve the problem of beacon node shortage, a role-change mechanism was adopted: once an unknown node obtained its estimated location, it could act as an expectant beacon node and assist other unknown nodes to locate. At last, to ensure that every unknown node could get its location, a relocation procedure was executed after the initial locating was completed. The simulation results show that the average locating error is reduced by 67.7%, 39.2% and 24.4% compared with the CL, WCL (Weighted Centroid Localization) and RR-WCL (Weighted Centroid Localization based on Received signal strength indication Ratio) algorithms, respectively. Since OBCL can achieve better locating results using only four mobile beacon nodes, it is suitable for scenarios that require low network cost and high locating accuracy.
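    The centroid estimate at the core of CL-style algorithms can be sketched as below; the SDD-based beacon selection step is omitted, and the coordinates in the usage are made up for illustration.

```python
def centroid_locate(beacons):
    """Estimate an unknown node's position as the centroid of the
    positions of the beacon nodes it selected (plain, unweighted)."""
    n = len(beacons)
    x = sum(b[0] for b in beacons) / n
    y = sum(b[1] for b in beacons) / n
    return (x, y)
```

    A node hearing beacons at the corners of a unit square, e.g. (0,0), (2,0), (2,2), (0,2), would estimate its position as (1.0, 1.0).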

    Enhanced tag anti-collision algorithm based on multi-bit identification for radio frequency identification
    JIN Zefen, WU Chuankun
    2015, 35(1):  10-14.  DOI: 10.11772/j.issn.1001-9081.2015.01.0010
    Abstract | PDF (768KB)

    Most Radio Frequency Identification (RFID) tag anti-collision protocols suffer from the problem that too many bits are transmitted by tags during identification. To solve this issue, an Enhanced Multi-Bit Identification (EnMBI) algorithm was proposed. On the premise of guaranteeing identification efficiency, a frame-slotted structure was adopted to avoid the repeated transmission of common prefixes. Meanwhile, by locating the collision bits, only the collision bits were recovered so as to further decrease the communication overhead. The simulation results show that the EnMBI algorithm has lower tag overhead and total overhead than the multi-bit identification anti-collision algorithm; its total overhead is at most 20% lower than that of the multi-bit identification algorithm.

    Heuristic anti-monitoring path finding algorithm based on local Voronoi tessellation in sensory field
    CHEN Juan
    2015, 35(1):  15-18.  DOI: 10.11772/j.issn.1001-9081.2015.01.0015
    Abstract | PDF (728KB)

    Considering the safety problem of mobile objects traversing a sensory field, a novel heuristic anti-monitoring path finding algorithm based on local Voronoi Tessellation (VT) was proposed in this paper. First, an approximate estimation model of path exposure based on local Voronoi tessellation was presented. In this model, the mobile object dynamically generates the local Voronoi tessellation from the currently detected sensor node information, and approximately estimates the exposure risk of each path corresponding to an edge of the tessellation using a newly defined exposure risk formula. Then, based on this exposure model, a heuristic anti-monitoring path finding algorithm was designed. The mobile object first determines its candidate set of next-hop location points from the local Voronoi tessellation, then selects the location point with the minimum risk cost from the candidate set according to a newly defined heuristic cost function, and moves to the selected next-hop location along the corresponding minimum-exposure-risk path in the local Voronoi tessellation. The theoretical analysis and simulation results show that the proposed algorithm has good anti-monitoring performance: for a sensory field with n sensor nodes in total, the mobile object can select a path with relatively small risk to the destination in time no more than O(n log n).

    Adaptive tree grouping and blind separation anti-collision algorithm for radio frequency identification system
    MU Yuchao, ZHANG Xiaohong
    2015, 35(1):  19-22.  DOI: 10.11772/j.issn.1001-9081.2015.01.0019
    Abstract | PDF (583KB)

    To address the low tag identification rate caused by a single-antenna Radio Frequency Identification (RFID) reader's inability to identify multiple tags simultaneously, an adaptive tree grouping and blind separation anti-collision algorithm for RFID systems was proposed, which combines multi-antenna technology with the grouping of binary tree slots based on tag ID sequences. In the presented algorithm, the reader adjusts its query code length according to the number of antennas in the RFID system and sends a query signal, so that the eligible responding tags are assigned to appropriate slots; the number of tags in each slot is then less than or equal to the number of antennas and meets the identifiability conditions of the Blind Source Separation (BSS) system, achieving the purpose of identifying tags simultaneously and quickly. Compared with the Blind Separation and Dynamic Bit-slot Grouping (BSDBG) algorithm, which uses the same multi-antenna technology, the simulation results show that the tag identification speed of the proposed algorithm increases by 20% to 69% and the tag identification rate improves by 60% to 88% as the number of antennas grows from 4 to 32, while the algorithm has low complexity and low hardware overhead, and is relatively simple to implement and deploy.

    Optimization of anti-collision algorithm for radio frequency identification reader system in Internet of things
    PAN Hao, CHEN Meng
    2015, 35(1):  23-26.  DOI: 10.11772/j.issn.1001-9081.2015.01.0023
    Abstract | PDF (721KB)

    Concerning the reader collision problem in Radio Frequency Identification (RFID) applications, the polling-based frame slot algorithm and the binary bit anti-collision algorithm were compared, and an improved frame slot algorithm was proposed. First, the frame length was divided into several slots; second, the number of tags was dynamically estimated and the frame length to be transmitted was determined so that the response probability of an electronic tag in a given slot of the frame reached its maximum; finally, the minimum system collision probability was reached. The simulation results show that the system throughput rate of the improved frame slot anti-collision algorithm stays above 50%, and reaches more than 65% when the number of electronic tags in the working scope is large. Compared with the frame slot anti-collision algorithm, whose system throughput rate is 36% on average, the system throughput rate of the improved algorithm nearly doubles. The structure is also simple, so it is easy to apply in practice.
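    The frame-length choice behind such dynamic frame-slotted schemes can be illustrated with the standard single-slot success probability of framed ALOHA; the candidate frame lengths below are assumptions for the sketch, not values from the paper.

```python
def slot_success_prob(n_tags, frame_len):
    """Probability that a given slot contains exactly one tag reply when
    n_tags tags each pick one of frame_len slots uniformly at random."""
    p = 1.0 / frame_len
    return n_tags * p * (1.0 - p) ** (n_tags - 1)

def best_frame_len(n_tags, choices=(16, 32, 64, 128, 256)):
    # Throughput peaks when the frame length is close to the estimated
    # number of tags, which is why the frame length is adapted per round.
    return max(choices, key=lambda L: slot_success_prob(n_tags, L))
```

    With 60 estimated tags, the sketch picks a frame length of 64 — choosing the frame length nearest the tag count is what maximizes per-slot success probability.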

    Transmission length design of IP network with wavelength-selectable reconfigurable optical add/drop multiplexer
    XIONG Ying, MAO Xuesong, LIU Xing, WANG Yaling, JIN Gang
    2015, 35(1):  27-30.  DOI: 10.11772/j.issn.1001-9081.2015.01.0027
    Abstract | PDF (560KB)

    To deal with the low efficiency and high maintenance cost caused by multi-point breakdowns or changes in a high-speed, large-capacity Wavelength Division Multiplexing (WDM) network, the Reconfigurable Optical Add/Drop Multiplexer (ROADM) component was used to construct a flexible network. Firstly, a 5-node network configuration model was provided. Then, the relation between loss and transmission length was investigated for an optical network composed of ROADMs under dynamic conditions, and the design flow of network transmission length was proposed. Next, a 5-node bi-directional fiber ring experiment network was constructed and its optical loss characteristics were measured. Finally, the analysis of the experimental data shows that the computed and measured optical loss values are approximately equal (0.8 dB difference). Thus, the feasibility of the design is verified, which assures reliable transmission between nodes.

    Optimization method of energy consumption for 802.15.4 networks based on multi-condition sleep
    CHENG Hongbin, SUN Xia
    2015, 35(1):  31-34.  DOI: 10.11772/j.issn.1001-9081.2015.01.0031
    Abstract | PDF (791KB)

    Aiming at the energy consumption problems of 802.15.4 networks, a channel access mechanism for the Media Access Control (MAC) layer based on a multi-condition sleep mode was proposed. First, a Markov model of the mechanism was established. Then, the steady-state probabilities of the main states and the related parameters were mathematically derived from the model, and the average energy consumption of a node in a superframe was analyzed. At last, the influence of protocol parameters such as packet arrival rate, number of backoffs, superframe order and minimum backoff exponent on the steady-state probabilities of the main states, the average energy consumption and the survival time of a node was studied. The experimental results show that, compared with an 802.15.4 network without a node sleep state, the node energy consumption is reduced by about 84.4%; compared with methods using fewer sleep conditions, node energy consumption is reduced by 62.8% on average, and the average survival time of the network is increased by 70%. The model describes the proposed channel access mechanism well, and reasonable parameter settings can improve the energy performance of nodes. It also provides a reference for energy optimization in practical Wireless Sensor Network (WSN) applications.

    Emergency data scheduling method for asynchronous and multi-channel industrial wireless sensor networks
    YANG Li, ZHANG Xiaoling, LIANG Wei, ZHU Lizhong
    2015, 35(1):  35-38.  DOI: 10.11772/j.issn.1001-9081.2015.01.0035
    Abstract | PDF (727KB)

    The existing Time Division Multiple Access (TDMA) scheduling methods for industrial emergency data under asynchronous, multi-channel conditions suffer from high delay, a saturated Control Channel (CC), and large energy consumption. To solve these problems, an Emergency data scheduling algorithm Oriented to Asynchronous Multi-channel industrial wireless sensor networks, called EOAM, was proposed. First, a receiver-based strategy was adopted to solve the problem of control channel saturation during asynchronous multi-channel scheduling. Then a well-designed Special Channel (SC), together with a priority indication method, was proposed to provide fast channel switching and real-time transmission of emergency data; additionally, non-urgent data was allowed to occupy the channel through a backoff-based mechanism indicated by the priority indication method, which ensured the utilization of the special channel. EOAM is suitable for both unicast and broadcast communications. The simulation results show that, compared with the Distributed Control Algorithm (DCA), the transmission delay of EOAM can reach 8 ms, the reliability is above 95%, and the energy consumption is reduced by 12.8%, which meets the transmission requirements of industrial emergency data.

    Optimal power consumption of heterogeneous servers in cloud center under performance constraint
    HE Huaiwen, FU Yu, YANG Liang, YANG Yihong
    2015, 35(1):  39-42.  DOI: 10.11772/j.issn.1001-9081.2015.01.0039
    Abstract | PDF (697KB)

    For the problem of minimizing the energy consumption of a cloud center under a performance constraint, an optimal power allocation method among multiple heterogeneous servers was proposed. First, a mathematical model of the cloud center's optimal energy consumption was built. Second, a Minimizing Power Consumption (MPC) algorithm for calculating the minimum energy was developed, using the Lagrange multiplier method to obtain the optimal solution of the model. Finally, the MPC algorithm was verified by extensive numerical experiments and compared with the Equal-Power (EP) baseline method. The experimental results indicate that the MPC algorithm saves approximately 30% more energy than the EP baseline method under the same load and response time conditions, and the proportion of energy saved increases with the load. The MPC algorithm can effectively avoid energy configuration overload, and it provides ideas and reference data for the optimal resource allocation of cloud centers.
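    As a toy illustration of the Lagrange-multiplier step (the paper's actual power model is not reproduced here), consider minimizing a quadratic power cost over per-server loads subject to a fixed total load; stationarity gives 2*c_i*x_i = λ for every server, so the optimal load is proportional to 1/c_i and has a closed form.

```python
def min_power_allocation(costs, total_load):
    """Closed-form Lagrange-multiplier solution of
        minimize sum(c_i * x_i**2)  subject to  sum(x_i) = total_load.
    costs are per-server coefficients of this assumed quadratic power
    model (a stand-in for the heterogeneous server models of the paper)."""
    inv = [1.0 / c for c in costs]
    s = sum(inv)
    return [total_load * v / s for v in inv]
```

    For two servers with costs 1 and 3 sharing a load of 4, the cheaper server receives load 3 and the expensive one load 1; equal costs yield an equal (EP-like) split.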

    Graph data processing technology in cloud platform
    LIU Chao, TANG Zhengwang, YAO Hong, HU Chengyu, LIANG Qingzhong
    2015, 35(1):  43-47.  DOI: 10.11772/j.issn.1001-9081.2015.01.0043
    Abstract | PDF (794KB)

    The MapReduce computation model cannot satisfy the efficiency requirements of graph data processing on the Hadoop cloud platform. To address this issue, a new computation framework for graph data processing, called MyBSP (My Bulk Synchronous Parallel), was proposed; MyBSP is similar to Google's Pregel. Firstly, the running mechanism and shortcomings of MapReduce were analyzed. Secondly, the structure, workflow and principal interfaces of the MyBSP framework were described. Finally, the principle of the PageRank graph processing algorithm was analyzed, and the design and implementation of the PageRank algorithm on MyBSP were presented. The experimental results show that the iteration performance of the graph processing algorithm based on the MyBSP framework is 1.9 to 3 times that of the algorithm based on MapReduce, and the execution time of the MyBSP algorithm is reduced by 67% compared with the MapReduce approach. Thus, MyBSP can efficiently support graph data processing applications.

    PageRank parallel algorithm based on Web link classification
    CHEN Cheng, ZHAN Yinwei, LI Ying
    2015, 35(1):  48-52.  DOI: 10.11772/j.issn.1001-9081.2015.01.0048
    Abstract | PDF (740KB)

    Concerning the low efficiency of the serial PageRank algorithm when dealing with massive Web data, a PageRank parallel algorithm based on Web link classification was proposed. Firstly, Web pages were classified according to their links, and different weights were assigned to pages from different websites. Secondly, the page ranks were computed in parallel on the Hadoop parallel computation platform with MapReduce, which has a divide-and-conquer character. At last, a three-layer data compression method comprising a data layer, a preprocessing layer and a computation layer was adopted to optimize the parallel algorithm. The experimental results show that, compared with the serial PageRank algorithm, the accuracy of the proposed algorithm is improved by 12% and the efficiency is improved by 33% in the best case.
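    A serial reference version of the PageRank iteration being parallelized can be sketched as follows; the paper's link-classification weights are omitted, and the toy graph in the usage is made up.

```python
def pagerank(links, d=0.85, iters=50):
    """links: dict page -> list of pages it links to.
    Plain power iteration with uniform teleport; dangling pages spread
    their rank evenly over all pages."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank
```

    On a symmetric two-page graph {'a': ['b'], 'b': ['a']} both ranks converge to 0.5, and the ranks always sum to 1.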

    Parallel implementation of OpenVX and 3D rendering on polymorphic graphics processing unit
    YAN Youmei, LI Tao, WANG Pengbo, HAN Jungang, LI Xuedan, YAO Jing, QIAO Hong
    2015, 35(1):  53-57.  DOI: 10.11772/j.issn.1001-9081.2015.01.0053
    Abstract | PDF (742KB)

    Since image processing, computer vision and 3D rendering all feature massive parallelism, the programmability and flexible parallel processing modes of the Polymorphic Array Architecture for Graphics (PAAG) platform were fully utilized, and a design method combining operation-level parallelism with data-level parallelism was used to implement the OpenVX kernel functions and 3D rendering pipelines. The experimental results indicate that, in the parallel implementation of the OpenVX kernel functions and graphics rendering, the Multiple Instruction Multiple Data (MIMD) parallel processing of PAAG obtains a linear speedup with a slope of 1, which is more efficient than the nonlinear speedup with a slope of less than 1 obtained by the traditional Single Instruction Multiple Data (SIMD) parallel processing of Graphics Processing Units (GPU).

    Implementation and performance analysis of Knuth39 parallelization based on many integrated core platform
    ZHANG Baodong, ZHOU Jinyu, LIU Xiao, HUA Cheng, ZHOU Xiaohui
    2015, 35(1):  58-61.  DOI: 10.11772/j.issn.1001-9081.2015.01.0058
    Abstract | PDF (588KB)

    To solve the low running speed problem of the Knuth39 random number generator, a Knuth39 parallelization method based on the Many Integrated Core (MIC) platform was proposed. Firstly, the random number sequence of the Knuth39 generator was divided into subsequences at regular intervals. Then, each thread generated random numbers from the starting point of its corresponding subsequence. Finally, the sequences generated by all threads were combined into the final sequence. The experimental results show that the parallelized Knuth39 generator successfully passed 452 tests of TestU01, with the same results as the non-parallelized Knuth39 generator. Compared with a single thread on a Central Processing Unit (CPU), the optimal speedup on the MIC platform is 15.69. The proposed method effectively improves the running speed of the Knuth39 generator while preserving the randomness of the generated sequences, making it well suited for high performance computing.
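    Splitting a generator's sequence at regular intervals requires a jump-ahead routine to find each subsequence's starting state. Knuth39 is a lagged-Fibonacci generator, whose jump-ahead differs in detail; as a generic illustration of the idea, the sketch below jumps a simple LCG (hypothetical constants) ahead k steps in O(log k) by composing the affine update map with itself.

```python
def lcg_jump(x, k, a, c, m):
    """Advance state x by k steps of x -> (a*x + c) % m in O(log k).
    Composing two affine maps (a1,c1) after (a0,c0) gives
    (a1*a0, a1*c0 + c1), so we square-and-multiply on the map itself."""
    acc_a, acc_c = 1, 0          # identity map
    cur_a, cur_c = a % m, c % m  # the one-step map
    while k > 0:
        if k & 1:
            acc_a, acc_c = (cur_a * acc_a) % m, (cur_a * acc_c + cur_c) % m
        cur_a, cur_c = (cur_a * cur_a) % m, (cur_a * cur_c + cur_c) % m
        k >>= 1
    return (acc_a * x + acc_c) % m
```

    Thread i would then seed its private generator with `lcg_jump(seed, i * block_len, a, c, m)`, so the threads' outputs concatenate into the original sequence.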

    S-DIFC: software defined network-based decentralized information flow control system
    WANG Tao, YAN Fei, WANG Qingfei, ZHANG Leyi
    2015, 35(1):  62-67.  DOI: 10.11772/j.issn.1001-9081.2015.01.0062
    Abstract | PDF (1155KB)

    To solve the problem that current Decentralized Information Flow Control (DIFC) systems are unable to effectively monitor the integration of host and network sensitive data, a new DIFC system framework based on Software Defined Network (SDN), called S-DIFC, was proposed. Firstly, this framework used DIFC modules to monitor files and processes in the host plane at fine granularity. Moreover, label mapping modules were used to intercept network communication and insert sensitive data labels into network flows, while the multi-level access control of flows with security labels was implemented by the SDN controller in the network plane. Finally, S-DIFC recovered the security labels carried by sensitive data on the target host. The experimental results show that S-DIFC decreases host CPU performance by less than 10% and memory performance by less than 1.3%. Compared with the Dstar system, which introduces an extra delay of more than 15 seconds, S-DIFC effectively mitigates the communication overhead of distributed network control systems. This framework can meet the sensitive data security requirements of next generation networks, and its distributed method enhances the flexibility of the monitoring system.

    Propagation modeling and analysis of peer-to-peer botnet
    FENG Liping, SONG Lipeng, WANG Hongbin, ZHAO Qingshan
    2015, 35(1):  68-71.  DOI: 10.11772/j.issn.1001-9081.2015.01.0068
    Abstract | PDF (543KB)

    To effectively control large-scale outbreaks, the propagation properties of leeching P2P (Peer-to-Peer) botnets were studied using dynamics theory. Firstly, a delayed differential-equation model was proposed according to the formation of the botnet. Secondly, the threshold expression for controlling the botnet was obtained by explicit mathematical analysis. Finally, numerical simulations verified the correctness of the theoretical analysis. The theoretical analysis and experimental results show that the botnet can be completely eliminated if the basic reproduction number is less than 1; otherwise, defense measures can only reduce the scale of the botnet. The simulation results show that decreasing the infection rate of bot programs or increasing the immunization rate of nodes in the network can effectively inhibit the outbreak of a botnet. In practice, the propagation of bot programs can be controlled by measures such as an uneven distribution of nodes in the network and timely downloading of patches.
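    The threshold behavior around the basic reproduction number can be illustrated with a minimal SIR-style simulation; the Euler integration, the rate values, and the omission of the paper's delay term are simplifying assumptions for this sketch.

```python
def simulate_botnet(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=50000):
    """Integrate S' = -beta*S*I, I' = beta*S*I - gamma*I by Euler steps
    and return the final and peak infected (bot) fractions.
    R0 = beta/gamma: below 1 the botnet dies out without spreading,
    above 1 it breaks out before immunization shrinks it."""
    s, i = s0, i0
    peak = i0
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += ds * dt
        i += di * dt
        peak = max(peak, i)
    return i, peak
```

    With beta=0.1, gamma=0.5 (R0 = 0.2) the infected fraction decays monotonically to zero; with beta=0.5, gamma=0.1 (R0 = 5) it first climbs well above its initial value, matching the threshold result stated above.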

    JavaScript code protection method based on temporal diversity
    FANG Dingyi, DANG Shufan, WANG Huaijun, DONG Hao, ZHANG Fan
    2015, 35(1):  72-76.  DOI: 10.11772/j.issn.1001-9081.2015.01.0072
    Abstract | PDF (943KB)

    Web applications face the malicious host problem just as native applications do; how to ensure the security of a Web application's core algorithms or main business processes on the browser side has become a serious problem. To address the low effectiveness of present JavaScript code protection methods in resisting dynamic analysis and cumulative attacks, a JavaScript code Protection method based on Temporal Diversity (TDJSP) was proposed. To resist cumulative attacks, the method first makes the JavaScript program diversify itself at runtime by building the program's diversity set and obfuscating its branch space. It then detects features of abnormal execution environments, such as debuggers and emulators, to increase the difficulty of dynamic analysis. The theoretical analyses and experimental results show that the method improves the JavaScript program's resistance to reverse analysis, with a space growth rate of 3.1 (superior to JScrambler3) and a delay at the millisecond level. Hence, the proposed method can protect Web applications effectively without much overhead.

    Secure storage and self-destruction scheme for privacy data in mobile devices
    SHEN Weiwei, YAO Zhiqiang, XIONG Jinbo, LIU Ximeng
    2015, 35(1):  77-82.  DOI: 10.11772/j.issn.1001-9081.2015.01.0077
    Abstract | PDF (1010KB)

    To protect the privacy data stored in mobile devices, a secure storage and self-destruction scheme for mobile devices was proposed, based on data compression, threshold secret sharing and mobile social networks. In this security scheme, the private data was first compressed with a lossless compression technique, and the compressed data was then encrypted with a symmetric key to obtain the primitive ciphertext, which was divided into two parts. With a time attribute, one part of the ciphertext was encapsulated into a Mobile Data Self-destructing Object (MDSO) and stored on cloud servers. Furthermore, with the symmetric key and the time attribute, the other part of the ciphertext was processed by a Lagrange polynomial to generate mixture ciphertext shares. At last, these mixture ciphertext shares were embedded into pictures shared on social networks. Once the authorization expires, no one can obtain the ciphertext blocks to recombine the original ciphertext, so the security of the privacy data is protected. The experimental results show that the total compression and encryption time is only 22 ms for a 10 KB file, which indicates that the proposed scheme has low performance overhead; furthermore, the comprehensive analysis indicates that the proposed scheme has high security, resists attacks effectively, and protects mobile privacy data.
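    The Lagrange-polynomial sharing step can be sketched with textbook Shamir secret sharing over a prime field; the field prime and parameters below are illustrative, and the paper additionally mixes the time attribute into its shares.

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

    Any 3 of 5 shares produced by `make_shares(secret, 3, 5)` recover the secret, while 2 or fewer reveal nothing — which is what lets expired shares render the ciphertext unrecoverable.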

    Optimized construction scheme of seeded-key matrices of collision-free combined public key
    LI Tao, ZHANG Haiying, YANG Jun, YU Dan
    2015, 35(1):  83-87.  DOI: 10.11772/j.issn.1001-9081.2015.01.0083
    Abstract | PDF (716KB)

    Concerning the key collision problem and the storage space of seeded-key matrices in Combined Public Key (CPK), a coefficient remapping method was proposed and rules for selecting the elements of the seeded matrices were designed. Firstly, in the identification mapping phase, binary bit streams were produced and divided into a coefficient sequence and a row sequence. The coefficient sequence was then remapped according to the remapping rules, which prevented any coefficient from being zero; the coefficient remapping thus reduced the storage space of the matrices. Secondly, in the seeded-key matrix generation step, based on the coefficient remapping, rules were specified for choosing the elements of the seeded-key matrices to ensure that the generated keys were unique. Finally, the elements of the matrices were selected according to the row sequence and the increasing column sequence, and the public and private keys were generated on the basis of the coefficient sequence and the selected elements. The theoretical analysis results suggest that the proposed scheme optimizes matrix storage and solves the key collision problem.

    Differentially private statistical publication for two-dimensional data stream
    LIN Fupeng, WU Yingjie, WANG Yilei, SUN Lan
    2015, 35(1):  88-92.  DOI: 10.11772/j.issn.1001-9081.2015.01.0088
    Abstract | PDF (760KB)

    Current research on the statistical publication of differentially private data streams considers only one-dimensional data streams. However, many applications require privacy-preserving publication of two-dimensional data streams, for which traditional models and methods are unusable. To solve this issue, a differentially private statistical publication algorithm for fixed-length two-dimensional data streams, called PTDSS, was first proposed. With low space cost, the tuple frequency of the two-dimensional data stream under certain conditions was calculated by a one-time linear scan of the stream; based on a sensitivity analysis, a certain amount of noise was added to the statistical results to satisfy the differential privacy requirement. After that, a differentially private continuous statistical publication algorithm for two-dimensional data streams of arbitrary length, called PTDSS-SW, was presented using the sliding window model. The theoretical analysis and experimental results show that the proposed algorithms can safely preserve privacy in the statistical publication of two-dimensional data streams while keeping the relative error of the released data within 10% to 95%.
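    The core of PTDSS-style publication — count tuple frequencies in one linear scan, then add Laplace noise calibrated to sensitivity 1 — can be sketched as follows; the epsilon value and the toy stream in the usage are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) by inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_counts(stream, epsilon):
    """One linear scan over 2-D tuples, then Laplace(1/epsilon) noise on
    each count; adding or removing a single tuple changes each count by
    at most 1, so sensitivity is 1 and the release is epsilon-DP."""
    counts = {}
    for t in stream:
        counts[t] = counts.get(t, 0) + 1
    scale = 1.0 / epsilon
    return {t: c + laplace_noise(scale) for t, c in counts.items()}
```

    Smaller epsilon means larger noise and stronger privacy; the sliding-window variant would maintain these counts incrementally over the most recent window of the stream.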

    Image compression encryption algorithm based on compression ratio control of JPEG-LS
    CHEN Yigang, DENG Jiaxian, XIE Kaiming
    2015, 35(1):  93-98.  DOI: 10.11772/j.issn.1001-9081.2015.01.0093
    Abstract | PDF (913KB)

    Concerning the coarseness of traditional image compression ratio control and the low confidentiality of low-dimensional chaotic systems, an image compression encryption algorithm based on the compression ratio control of Joint Photographic Experts Group-Lossless Standard (JPEG-LS) was proposed. Building on an analysis of the JPEG-LS distortion control parameter Near, which significantly influences the image compression ratio and reconstruction quality, the raster-scanned image data was first gradient-processed in a causal model, and the gradient value was compared with Near to choose between run-length coding in run mode and Golomb coding in regular mode. Then a random processing sequence generated by the three-dimensional Lorenz chaotic system was used as the key to encrypt the compressed code stream in run mode, regular mode, or both. Finally, fine control of the image compression ratio and improved confidentiality were realized by dynamically adjusting Near in real time. The simulation results indicate that the proposed algorithm controls the compression ratio well, and the quality of its reconstructed image is about 0.5 dB higher than that of the linear compression ratio control algorithm. Furthermore, it has high security, efficiently resisting entropy attack, differential attack, brute-force attack and statistical attack, and the encryption has no effect on the compression efficiency.

    KStore: Linux kernel-based Key-Value store system
    XIE Peidong, WU Yanjun
    2015, 35(1):  99-102.  DOI: 10.11772/j.issn.1001-9081.2015.01.0099
    Abstract | PDF (749KB)

    Nowadays Key-Value store systems are widely used in various Internet services. However, the existing Key-Value store systems, which mostly run in user mode, cannot meet the demands of high concurrency and low latency, mainly because user-mode access interfaces and transaction processing are made inefficient by mode switches and context switches. To solve these problems, an in-kernel Key-Value store system, called KStore, was proposed in this paper. It has an in-kernel index and an in-kernel memory allocator to manage Key-Value data efficiently. To guarantee low-latency responses, KStore provides a remote interface based on in-kernel sockets and a local interface based on the file system. In addition, KStore processes concurrent requests with a novel in-kernel multi-threading mechanism. The experimental results show that KStore gains a remarkable advantage over Memcached in real-time response and concurrency.

    HBase-based real-time storage system for traffic stream data
    LU Ting, FANG Jun, QIAO Yanke
    2015, 35(1):  103-107.  DOI: 10.11772/j.issn.1001-9081.2015.01.0103

    Traffic stream data is multi-source, high-speed and large-volume. When dealing with such data, traditional data storage methods and systems expose problems of weak scalability and poor real-time storage performance. To address these problems, an HBase-based real-time storage system for traffic stream data was designed and implemented. The system adopted a distributed storage architecture, standardized the data through front-end preprocessing, divided the different kinds of stream data into different queues using a multi-source cache structure, and combined a consistent Hash algorithm, multi-threading and a row-key optimization strategy to write data into the HBase cluster in parallel. The experimental results demonstrate that the storage performance of the system is 3-5 times that of a real-time storage system based on Oracle, and 2-3 times that of the original HBase; the system also has good scalability.
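    The consistent-Hash step of the parallel write path can be illustrated with a minimal sketch. The node names, virtual-node count and key format below are hypothetical; the paper's actual row-key design and queue structure are not reproduced here.

    ```python
    import hashlib
    from bisect import bisect

    class ConsistentHashRing:
        """Minimal consistent-hash ring: maps a record key to one of several
        parallel writers so that load stays balanced and adding or removing
        a writer only remaps a small fraction of keys."""
        def __init__(self, nodes, vnodes=100):
            self.ring = []  # sorted list of (hash, node), vnodes entries per node
            for node in nodes:
                for i in range(vnodes):
                    h = int(hashlib.md5(f"{node}#{i}".encode()).hexdigest(), 16)
                    self.ring.append((h, node))
            self.ring.sort()
            self.keys = [h for h, _ in self.ring]

        def node_for(self, key):
            # walk clockwise from the key's hash to the next virtual node
            h = int(hashlib.md5(key.encode()).hexdigest(), 16)
            i = bisect(self.keys, h) % len(self.ring)
            return self.ring[i][1]

    ring = ConsistentHashRing(["writer-1", "writer-2", "writer-3"])
    target = ring.node_for("vehicle-42:2015-01-10T08:00:00")
    ```

    Each preprocessed record is routed by its key, so the same source always reaches the same writer thread while different sources spread across the cluster.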

    Ranking-k: effective subspace dominating query algorithm
    LI Qiusheng, WU Yadong, LIN Maosong, WANG Song, WANG Haiyang, FENG Xinmiao
    2015, 35(1):  108-114.  DOI: 10.11772/j.issn.1001-9081.2015.01.0108

    The Top-k dominating query algorithm requires considerable time and space to build combined indexes on the attributes, and its query accuracy is low for data with identical attribute values. To solve these problems, a Ranking-k algorithm was proposed: a new subspace dominating query algorithm combining B+-trees with a probability distribution model. Firstly, an ordered list for each data attribute was constructed with B+-trees. Secondly, the round-robin scheduling algorithm was used to scan the ordered attribute lists satisfying the skyline criterion, generating candidate tuples and obtaining k end tuples. Thirdly, the dominating scores of the end tuples were calculated with the probability distribution model from the generated candidate tuples and end tuples. By iterating this process, the optimal query results were obtained. The experimental results show that the overall query efficiency of the proposed Ranking-k algorithm is improved by 94.43% compared with the Basic-Scan Algorithm (BSA) and by 7.63% compared with the Differential Algorithm (DA), and the query results of the Ranking-k algorithm are much closer to the theoretical values than those of the Top-k Dominating with Early Pruning (TDEP) algorithm, BSA and DA.
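    The dominating score that these queries rank by (the number of tuples a given tuple dominates) can be stated in a brute-force form, akin to the BSA baseline rather than the paper's B+-tree method; the example points are invented for illustration.

    ```python
    def dominates(a, b):
        """a dominates b if a is no worse in every dimension and strictly
        better in at least one (here: smaller values are better)."""
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))

    def dominating_score(p, data):
        # count how many other tuples p dominates
        return sum(dominates(p, q) for q in data if q is not p)

    def top_k_dominating(data, k):
        return sorted(data, key=lambda p: dominating_score(p, data),
                      reverse=True)[:k]

    pts = [(1, 1), (2, 2), (3, 1), (2, 3)]
    best = top_k_dominating(pts, 1)  # (1, 1) dominates the other three
    ```

    The index-based algorithms in the paper exist precisely to avoid this O(n²) scan.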

    Construction of rectangle trapezoid circle tree and indeterminate near neighbor relations query
    LI Song, LI Lin, WANG Miao, CUI Huanyu, ZHANG Liping
    2015, 35(1):  115-120.  DOI: 10.11772/j.issn.1001-9081.2015.01.0115

    The spatial index structure and query technology play an important role in spatial databases. To overcome the disadvantages of existing methods in approximating and organizing complex spatial objects, a new index structure based on the Minimum Bounding Rectangle (MBR), trapezoid and circle, the RTC (Rectangle Trapezoid Circle) tree, was proposed. To handle the Nearest Neighbor (NN) query over complex spatial data objects effectively, the NN query algorithm based on the RTC tree (NNRTC) was given; NNRTC reduces node traversals and distance computations by using pruning rules. Considering the influence of barriers on the spatial data set, the barrier-NN query algorithm based on the RTC tree (BNNRTC) was proposed; BNNRTC first queries in an ideal space and then judges the query result. To handle dynamic simple continuous NN chain queries, the Simple Continuous NN chain query algorithm based on the RTC tree (SCNNCRTC) was given. The experimental results show that, compared with query methods based on the R tree, the proposed methods can improve efficiency by 60%-80% when dealing with large data sets of complex spatial objects.

    Classification method for imbalance dataset based on genetic algorithm improved synthetic minority over-sampling technique
    HUO Yudan, GU Qiong, CAI Zhihua, YUAN Lei
    2015, 35(1):  121-124.  DOI: 10.11772/j.issn.1001-9081.2015.01.0121

    When the Synthetic Minority Over-sampling Technique (SMOTE) is used for imbalanced data set classification, it sets the same sampling rate for all minority-class samples when synthesizing new samples, which is a blind strategy. To overcome this problem, a Genetic Algorithm (GA) improved SMOTE, namely GASMOTE (Genetic Algorithm improved Synthetic Minority Over-sampling Technique), was proposed. First, GASMOTE set a different sampling rate for each minority-class sample, and one combination of sampling rates corresponded to one individual in the population. Then, the selection, crossover and mutation operators of GA were iteratively applied to the population to obtain the best combination of sampling rates when the stopping criterion was met. Finally, the best combination of sampling rates was used by SMOTE to synthesize new samples. The experimental results on ten typical imbalanced data sets show that, compared with SMOTE, GASMOTE increases the F-measure by 5.9 percentage points and the G-mean by 1.6 percentage points, and compared with Borderline-SMOTE, it increases the F-measure by 3.7 percentage points and the G-mean by 2.3 percentage points. GASMOTE can serve as a new over-sampling technique for the imbalanced data set classification problem.
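    The core idea, a per-sample sampling rate rather than one global rate, can be sketched as follows for the synthesis step, with a fixed rate vector standing in for one GA individual. The GA search over rate vectors, the data and the neighbor count k are all assumptions for illustration.

    ```python
    import random

    def smote_per_sample(minority, rates, k=2, seed=0):
        """Synthesize new minority samples with an individual rate (number of
        synthetic points) per original sample, mirroring how one GASMOTE
        individual encodes a sampling-rate combination. Pure-Python sketch."""
        rng = random.Random(seed)
        synthetic = []
        for x, rate in zip(minority, rates):
            # k nearest minority neighbors of x (brute-force distance sort)
            neighbors = sorted((p for p in minority if p is not x),
                               key=lambda p: sum((a - b) ** 2
                                                 for a, b in zip(x, p)))[:k]
            for _ in range(rate):
                nb = rng.choice(neighbors)
                gap = rng.random()  # interpolate between x and the neighbor
                synthetic.append(tuple(a + gap * (b - a)
                                       for a, b in zip(x, nb)))
        return synthetic

    minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (5.0, 5.0)]
    new_pts = smote_per_sample(minority, rates=[2, 1, 1, 0])
    ```

    Setting a rate of 0 for the outlying sample (5.0, 5.0) shows the flexibility the GA exploits: noisy or borderline samples can contribute fewer synthetic points than safe ones.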

    Online pedigree editing system based on graph database
    JIANG Yang, PENG Zhiyong, PENG Yuwei
    2015, 35(1):  125-130.  DOI: 10.11772/j.issn.1001-9081.2015.01.0125

    Motivated by the poor performance of existing domestic pedigree systems in data sharing, scalability and editing efficiency, an online pedigree editing system based on the Browser/Server (B/S) architecture and a graph database was proposed. First, the system took advantage of the B/S architecture to support online collaborative entry, promoting data-entry efficiency. Second, it stored pedigrees in a database for better management and retrieval, improving data sharing. Third, it greatly improved data-processing efficiency, because the data was managed by a graph database and pedigrees are graphs in nature. Finally, the system was empirically shown to be effective through systematic experiments on real pedigree data, the LIU's pedigree, which contains over 200000 people. Specifically, the proposed system based on the graph database Neo4j uses 50% less storage space than the same system based on the relational database PostgreSQL, and its query response time is respectively 20%, 80%, 16% and 15% of that of the PostgreSQL version for descendant, ancestor, relative and descendant-gender queries. These results indicate that the system can process massive pedigree data efficiently and support online collaborative entry.

    Design and implementation of context driven SoftMan knowledge communication framework
    WU Danfeng, XU Xiaowei, WANG Kang
    2015, 35(1):  131-135.  DOI: 10.11772/j.issn.1001-9081.2015.01.0131

    The traditional message-based SoftMan communication approach has problems in expressive ability, communication efficiency and quality. Based on early research on the SoftMan system and its communication theory, as well as the SoftMan cogmatics model and context awareness mechanism, this paper proposed the Context-driven SoftMan Knowledge Communication (CSMKC) framework by learning from mature Agent communication language specifications. First, the message layer, knowledge layer and scenario layer of the knowledge communication framework were designed; second, the key points of context-driven SoftMan knowledge communication were introduced from the implementation of these three layers; finally, knowledge-level communication between different SoftMen and the maintenance of scenario context were basically realized. The experimental results show that when the communication content depends heavily on the scenario, the communication overhead per unit time of CSMKC decreases by 46.15% on average compared with the traditional message-based SoftMan communication approach. Thus, the higher the dependence on the scenario, the more obvious the advantage of CSMKC in reducing communication while accomplishing a task in the system.

    Consensus analysis for a class of heterogeneous multi-Agent systems via nonlinear protocols
    SUN Yijie, ZHANG Guoliang, ZHANG Shengxiu
    2015, 35(1):  136-139.  DOI: 10.11772/j.issn.1001-9081.2015.01.0136

    To solve the problem that the states of Agents are unmeasurable and only stationary consensus can be achieved in heterogeneous multi-Agent systems composed of first-order and second-order Agents, a novel nonlinear consensus protocol with a reference velocity was proposed. Firstly, the consensus analysis was transformed into a stability demonstration. Then, a Lyapunov function was constructed. Finally, sufficient conditions for achieving consensus were obtained by using Lyapunov stability theory and LaSalle's invariance principle. The simulation results show that if the conditions are satisfied, consensus can be achieved.

    Construction method for Bayesian network based on Dempster-Shafer/analytic hierarchy process
    DU Yuanwei, SHI Fangyuan, YANG Na
    2015, 35(1):  140-146.  DOI: 10.11772/j.issn.1001-9081.2015.01.0140

    Concerning the lack of completeness and accuracy in individual inference information and the lack of scientific rigor in the overall integration results when inferring the Conditional Probability Table (CPT) of a Bayesian network from expert knowledge, this paper presented a method based on the Dempster-Shafer/Analytic Hierarchy Process (DS/AHP) to derive optimal conditional probabilities from expert inference information. Firstly, an inference information extraction mechanism was proposed, which makes the judgment objects more intuitive and the judgment modes more complete by introducing the knowledge matrix of the DS/AHP method. Then, the construction process of the Bayesian network was given, following an inference sequence from anterior to posterior nodes. Finally, the traditional method and the presented method were both applied to infer the missing conditional probability table of the same Bayesian network. The numerical comparisons show that the proposed method improves calculation efficiency and decreases the accumulative total deviation by 41%. Meanwhile, the proposed method is illustrated to be scientific, applicable and feasible.

    Multi-label classification algorithm based on floating threshold classifiers combination
    ZHANG Danpu, FU Zhongliang, WANG Lili, LI Xin
    2015, 35(1):  147-151.  DOI: 10.11772/j.issn.1001-9081.2015.01.0147

    To solve the multi-label classification problem in which a target belongs to multiple classes, a new multi-label classification algorithm based on a combination of floating-threshold classifiers was proposed. Firstly, the theory and error estimation of the AdaBoost algorithm with floating threshold (AdaBoost.FT) were analyzed and discussed, and it was proved that AdaBoost.FT overcomes the instability of fixed-threshold classifiers on points near the classification boundary, improving the accuracy of single-label classification. Then, the Binary Relevance (BR) method was introduced to apply AdaBoost.FT to the multi-label classification problem, yielding the multi-label AdaBoost.FT algorithm. The experimental results show that the average precision of multi-label AdaBoost.FT exceeds that of three other multi-label algorithms, AdaBoost.MH (multiclass, multi-label version of AdaBoost based on Hamming loss), ML-kNN (Multi-Label k-Nearest Neighbor) and RankSVM (Ranking Support Vector Machine), by about 4%, 8% and 11% respectively on the Emotions dataset, and is only slightly worse than RankSVM, by about 3% and 1%, on the Scene and Yeast datasets respectively. The analyses show that multi-label AdaBoost.FT obtains better classification results on datasets that have a small number of labels or whose labels are irrelevant to each other.

    Extension pattern distinguishing model and its application
    ZHANG Haitao, WANG Binjun
    2015, 35(1):  152-156.  DOI: 10.11772/j.issn.1001-9081.2015.01.0152

    To solve the problem of extension state recognition, an extension pattern distinguishing model was proposed. First, a definition of extension pattern distinguishing was given; second, the characteristics of both the static and dynamic states of the universe of discourse were analyzed; furthermore, a general framework for extension pattern discrimination was designed, and formulas for calculating the degrees of quantitative and qualitative change were given; finally, both the general and extension states of a given case were distinguished using the proposed method. The experimental results demonstrate the feasibility of the proposed model for the expression, analysis and discrimination of extension states. The extension pattern distinguishing model can effectively solve the pattern recognition problem of extension and state transformation, which is intractable for traditional pattern classifiers.

    Topic group discovering algorithm based on trust chain in social network
    LI Meizi, XIANG Yang, ZHANG Bo, JIN Bo
    2015, 35(1):  157-161.  DOI: 10.11772/j.issn.1001-9081.2015.01.0157

    To address the challenge of accurate user group discovery, a topic group discovering algorithm based on trust chains was proposed, composed of three steps: topic space discovery, group core user discovery and topic group discovery. Firstly, the related definitions of the proposed algorithm were given formally. Secondly, the topic space was discovered through a topic-correlation calculation method, and a user interest calculation method for the topic space was addressed. Further, the trust chain model, composed of atomic, serial and parallel trust chains, and its trust computation method for the topic space were presented. Finally, the detailed algorithms of topic group discovery, including the topic space discovering algorithm, the core user discovering algorithm and the topic group discovering algorithm, were proposed. The experimental results show that the average accuracy of the proposed algorithm is 4.1% and 11.3% higher than that of the traditional interest-based and edge-density-based group discovering methods respectively. The presented algorithm can effectively improve the accuracy of user group organization, and it has good application value for user identification and classification in social networks.

    New recommendation algorithm based on multi-objective optimization
    SHE Xiangyang, CAI Yuanqiang, DONG Lihong
    2015, 35(1):  162-166.  DOI: 10.11772/j.issn.1001-9081.2015.01.0162

    In view of the efficiency problem of multi-objective recommender systems, this paper used an online/offline separation strategy to construct a new recommender system framework. Aiming at the multi-objective nature of recommender systems and the limited adaptability of current recommendation algorithms, a new multi-objective recommendation algorithm based on a hybrid strategy was put forward. Firstly, the algorithm combined multiple recommendation algorithms by weighting. Secondly, it established a multi-objective optimization model, using the weight sequence as the variables and evaluation metrics including F-score, diversity and novelty as the objective functions. Then, it solved the model with the second version of the Strength Pareto Evolutionary Algorithm (SPEA2). Finally, it recommended items to users based on the users' shopping preferences and the Pareto set. The experimental results show that, compared with the best single-metric sub-recommendation algorithm, the new algorithm performs nearly as well in F-score while increasing diversity by 1% and novelty by 11.5%, and the Pareto solutions of the multi-objective model form a dense and contiguous curve in the solution space. Therefore, the recommendation algorithm can satisfy the requirements of users with different shopping preferences.

    Trust-aware collaborative filtering recommendation method for social E-commerce
    CAI Zhiwen, LIN Jianzong
    2015, 35(1):  167-171.  DOI: 10.11772/j.issn.1001-9081.2015.01.0167

    To improve the accuracy and validity of social E-commerce recommendation services, a trust-aware collaborative filtering recommendation method was proposed, considering the factors that influence the trust relationships of users in social E-commerce, such as transaction evaluation score, transaction frequency, transaction amount, direct trust and recommended reputation. A belief factor was introduced to compute the trust relationships of social E-commerce users, the cosine similarity method was used to calculate the similarity between users, a harmonic factor was used to combine the influence of the trust relationship and the similarity, and the Mean Absolute Error (MAE), rating coverage and user coverage were used as evaluation indexes. The experimental results show that the trust-aware collaborative filtering method is superior to the traditional collaborative filtering method and the regularized matrix factorization based collaborative filtering recommendation method: the MAE is reduced to 0.162, and the rating coverage and user coverage rise to 77% and 80% respectively. This proves that the trust-aware collaborative filtering method can solve the problem of recommending commodities with few transaction evaluations.
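    The blend of rating similarity and trust via a harmonic factor can be sketched as below. The symbols (alpha for the harmonic factor, a precomputed trust score) and the mean-centered prediction rule are assumptions for illustration; the paper's trust computation from transaction frequency and amount is not reproduced.

    ```python
    from math import sqrt

    def cosine_sim(u, v):
        """Cosine similarity between two users' rating vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def combined_weight(sim, trust, alpha=0.5):
        """Harmonic factor alpha blends rating similarity with trust."""
        return alpha * sim + (1 - alpha) * trust

    def predict(target_avg, neighbors):
        """Mean-centered weighted prediction.
        neighbors: list of (weight, rating, neighbor_avg) tuples."""
        num = sum(w * (r - avg) for w, r, avg in neighbors)
        den = sum(abs(w) for w, _, _ in neighbors)
        return target_avg + num / den if den else target_avg

    w = combined_weight(cosine_sim([4, 5, 3], [4, 4, 3]), trust=0.8, alpha=0.6)
    rating = predict(3.0, [(w, 4.0, 3.5)])
    ```

    When a commodity has few transaction evaluations, the similarity term is unreliable and the trust term carries the weight, which is the intuition behind the coverage gains reported above.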

    Quadratic path planning algorithm based on sliding window and ant colony optimization algorithm
    LAI Zhiming, GUO Gongde
    2015, 35(1):  172-178.  DOI: 10.11772/j.issn.1001-9081.2015.01.0172

    A Quadratic path planning algorithm based on sliding windows and Ant Colony Optimization (QACO) was put forward to address the weak planning ability of the Ant Colony Optimization (ACO) algorithm in complex environments. The feedback strategy of the ACO based on Feedback Strategy (ACOFS) algorithm was improved, reducing the number of feedbacks by decreasing the pheromone along feedback paths. In the first planning stage, the improved ACO algorithm was applied to plan a global path in the grid environment. In the second stage, a sliding window slid along the global path, the local path within the window was planned with the ACO algorithm, and the global path was optimized by the local paths until the target location fell inside the sliding window. The simulation results show that the average planning time of the QACO algorithm is reduced by 26.21% and 52.03%, and the average path length by 47.82% and 42.28%, compared with the ACO and ACOFS algorithms respectively. Thus the QACO algorithm has a relatively strong path planning ability in complex environments.

    Construction method of radial basis function approximation model based on parameter optimization of space decomposition
    WU Zongyu, LUO Wencai
    2015, 35(1):  179-182.  DOI: 10.11772/j.issn.1001-9081.2015.01.0179

    To improve the accuracy of the Radial Basis Function (RBF) approximation model, the factors influencing approximation accuracy were studied in depth. By thoroughly analyzing the influence of rounding error on approximation accuracy, it was pointed out that the matrix condition number and the shape parameter are two important factors of approximation accuracy. The matrix condition number was decreased and the design freedom increased by decomposing the design space based on sensitivity analysis. Building on the traditional RBF model with an optimal shape parameter, a construction method for the RBF approximation model based on parameter optimization with space decomposition was proposed. The numerical test results show that, in two test cases, the Root Mean Square Error (RMSE) of the proposed method is reduced by 51.3% and 58.0% respectively compared with the traditional construction method based on the optimal shape parameter. The proposed method has high approximation accuracy.
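    The role of the shape parameter can be seen in a minimal Gaussian RBF interpolation sketch: the parameter c below shapes the basis functions and, with it, the condition number of the interpolation matrix A. The 1-D data and the value of c are invented for illustration; the paper's space decomposition and parameter optimization are not reproduced.

    ```python
    from math import exp

    def gauss_solve(A, b):
        """Solve A x = b by Gaussian elimination with partial pivoting."""
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c]
                                  for c in range(r + 1, n))) / M[r][r]
        return x

    def rbf_fit(xs, ys, c=1.0):
        """Gaussian RBF interpolant; c is the shape parameter whose choice,
        together with the condition number of A, drives accuracy."""
        A = [[exp(-c * (xi - xj) ** 2) for xj in xs] for xi in xs]
        w = gauss_solve(A, ys)
        return lambda x: sum(wi * exp(-c * (x - xi) ** 2)
                             for wi, xi in zip(w, xs))

    f = rbf_fit([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], c=1.0)
    ```

    A flatter basis (small c) approximates smooth functions well but makes A nearly singular; splitting the design space, as the paper does, keeps each sub-problem small and better conditioned.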

    Modified binary cuckoo search algorithm for multidimensional knapsack problem
    ZHANG Jing, WU Husheng
    2015, 35(1):  183-188.  DOI: 10.11772/j.issn.1001-9081.2015.01.0183

    The Multidimensional Knapsack Problem (MKP) is a typical multi-constraint combinatorial optimization problem. To solve it, a Modified Binary Cuckoo Search (MBCS) algorithm was proposed. Firstly, the Binary Cuckoo Search (BCS) algorithm was built with the help of a classical binary code transformer. Secondly, a virus evolution mechanism and a virus infection operation were introduced into BCS: on the one hand, they gave the nest positions a mutation mechanism, which improved population diversity; on the other hand, the main group of nest positions transmitted information across generations and guided the global search, while the virus group transferred evolutionary information within the same generation through virus infection and guided the local search. These improvements accelerated convergence and decreased the probability of falling into local optima. Thirdly, a hybrid repair strategy for infeasible solutions was designed according to the characteristics of the MKP. Finally, comparison experiments among the MBCS algorithm, the Quantum Genetic Algorithm (QGA), the Binary Particle Swarm Optimization (BPSO) algorithm and the BCS algorithm were conducted on 15 problems from the ELIB and OR_LIB databases. The experimental results show that the computational error and standard deviation of MBCS are less than 1% and 170 respectively, indicating that MBCS achieves better solutions with higher accuracy and robustness than QGA, BPSO and BCS. It is an effective algorithm for solving NP-hard problems such as the MKP.
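    A repair strategy for infeasible MKP solutions, in the drop-then-add style such hybrids commonly use, can be sketched as follows. The profit-to-total-weight ratio and the tiny instance are assumptions for illustration; the paper's exact hybrid repair rule is not reproduced.

    ```python
    def repair(x, profits, weights, capacities):
        """Repair sketch for the MKP: drop items with the worst
        profit-to-weight ratio until every constraint holds, then greedily
        re-add any item that still fits. x is a 0/1 list, weights[j][i] is
        item i's weight in constraint j."""
        m = len(capacities)
        def used(sol):
            return [sum(weights[j][i] * sol[i] for i in range(len(sol)))
                    for j in range(m)]
        def feasible(sol):
            return all(u <= c for u, c in zip(used(sol), capacities))
        ratio = lambda i: profits[i] / (1 + sum(weights[j][i] for j in range(m)))
        # drop phase: remove worst-ratio items while infeasible
        for i in sorted((i for i in range(len(x)) if x[i]), key=ratio):
            if feasible(x):
                break
            x[i] = 0
        # add phase: re-insert best-ratio items that still fit
        for i in sorted(range(len(x)), key=ratio, reverse=True):
            if not x[i]:
                x[i] = 1
                if not feasible(x):
                    x[i] = 0
        return x

    sol = repair([1, 1, 1], profits=[10, 6, 4],
                 weights=[[5, 4, 3]], capacities=[8])
    ```

    Every nest produced by the binarized cuckoo search can be passed through such a repair step, so the search always evaluates feasible solutions.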

    Acceleration gesture recognition based on random projection
    LIU Hong, LIU Rong, LI Shuling
    2015, 35(1):  189-193.  DOI: 10.11772/j.issn.1001-9081.2015.01.0189

    Since the gesture signals in gesture interaction are similar to one another and unstable, an acceleration gesture recognition method based on Random Projection (RP) was designed and implemented. The system comprised two stages: training and testing. In the training stage, the system employed the Dynamic Time Warping (DTW) and Affinity Propagation (AP) algorithms to create exemplars for each gesture; in the testing stage, it first calculated the distance between the unknown trace and all exemplars to find candidate traces, then used the RP algorithm to project all candidate traces and the unknown trace onto the same lower-dimensional subspace, and finally recognized the unknown trace by formulating the recognition problem as an l1-minimization problem. The experimental results on 2400 gesture traces show that the proposed algorithm achieves an accuracy of 98.41% for specific users and 96.67% for unspecific users, and it can effectively identify acceleration gestures.
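    The DTW distance used to match an unknown trace against exemplars can be sketched for 1-D sequences as below. The toy exemplar dictionary is invented; the paper clusters real traces with AP and adds the RP/l1 stage, which is not reproduced here.

    ```python
    def dtw(a, b):
        """Dynamic Time Warping distance between two 1-D sequences:
        aligns them elastically so traces of different speeds compare well."""
        n, m = len(a), len(b)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i][j] = cost + min(D[i - 1][j],      # insertion
                                     D[i][j - 1],      # deletion
                                     D[i - 1][j - 1])  # match
        return D[n][m]

    def nearest_exemplar(trace, exemplars):
        """exemplars: dict gesture-name -> representative trace."""
        return min(exemplars, key=lambda g: dtw(trace, exemplars[g]))

    exemplars = {"circle": [0, 1, 2, 1, 0], "flick": [0, 3, 0]}
    label = nearest_exemplar([0, 1, 2, 2, 1, 0], exemplars)
    ```

    Because DTW tolerates local stretching, a gesture performed slightly slower than its exemplar still matches, which is why it suits the unstable acceleration traces described above.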

    Collision detection optimization algorithm based on classified traversal
    SUN Jinguang, WU Suhong
    2015, 35(1):  194-197.  DOI: 10.11772/j.issn.1001-9081.2015.01.0194

    To solve the low efficiency of existing hierarchical-tree traversal methods, a new collision detection algorithm based on classified traversal was proposed. Firstly, object pairs were classified according to the difference between the balance factors of the two trees' nodes: the simultaneous depth-first traversal method was applied to objects with similar structures, and the commutative depth-first traversal method to the other objects, which reduced the number of intersection tests. Then, the traversal process was optimized using temporal-spatial coherence and a priority strategy. The experimental results show that, compared with a collision detection algorithm based on unified traversal, the proposed algorithm shortens intersection testing time by about one fifth, and the larger the number of objects, the more significant the speed advantage.

    Adaptive improvement of video compressed sensing based on linear dynamic system
    JIANG Xingguo, LI Zhifeng, ZHANG Long
    2015, 35(1):  198-201.  DOI: 10.11772/j.issn.1001-9081.2015.01.0198

    The model parameters of video Compressed Sensing based on a Linear Dynamic System (CS-LDS) can be estimated directly from random sampling data; however, if all video frames are sampled in the same way, the sampled data is redundant. To solve this problem, an adaptive improvement algorithm based on adaptive compressive sampling was proposed in this paper. Firstly, a Linear Dynamic System (LDS) model of the video signal was established. Then the sampling data of the video signal was obtained by the adaptive compressive sampling method. Finally, the model parameters were estimated and the video signal was reconstructed from the sampling data. The experimental results show that, without affecting video reconstruction quality, the proposed algorithm outperforms the CS-LDS algorithm: it reduces the sampling data by 20%-40% in the uniform measurement process and saves 0.1-0.3 s of average running time per frame. The improved algorithm reduces both the number of samples and the running time.

    Remote sensing image enhancement algorithm based on Shearlet transform and multi-scale Retinex
    WANG Jingjing, JIA Zhenhong, QIN Xizhong, YANG Jie, Nikola KASABOV
    2015, 35(1):  202-205.  DOI: 10.11772/j.issn.1001-9081.2015.01.0202

    Aiming at the problem that the traditional wavelet, curvelet and contourlet transforms cannot provide an optimal sparse representation of an image and thus cannot achieve a good enhancement effect, an image enhancement algorithm based on the Shearlet transform was proposed. The image was decomposed into low-frequency and high-frequency components by the Shearlet transform. Firstly, Multi-Scale Retinex (MSR) was used to enhance the low-frequency components to remove the effect of illumination on the image; secondly, threshold denoising was used to suppress noise in the high-frequency coefficients of each scale; finally, a fuzzy contrast enhancement method was applied to the reconstructed image to improve its overall contrast. The experimental results show that the proposed algorithm can significantly improve the visual effect of the image, with more texture detail and better noise resistance. The image definition, entropy and Peak Signal-to-Noise Ratio (PSNR) are improved to a certain extent compared with the Histogram Equalization (HE), MSR and fuzzy contrast enhancement in Non-Subsampled Contourlet domain (NSCT_fuzzy) algorithms, while the running time is reduced to about one half of that of MSR and one tenth of that of NSCT_fuzzy.

    Boundary handling algorithm for weakly compressible fluids
    NIE Xiao, CHEN Leiting
    2015, 35(1):  206-210.  DOI: 10.11772/j.issn.1001-9081.2015.01.0206

    To simulate the interaction of fluids with solid boundaries, a boundary handling algorithm based on weakly compressible Smoothed Particle Hydrodynamics (SPH) was presented. First, a novel volume-weighted function was introduced to correct the density estimation errors in non-uniformly sampled solid boundary regions. Then, a new boundary force computation model was proposed to avoid penetration without position correction of fluid particles. Last, an improved fluid pressure force model was proposed to enforce the weak incompressibility constraint. The experimental results show that the proposed method can effectively solve the stability problems that position-correction-based boundary handling methods exhibit in interactions between weakly compressible fluids and non-uniformly sampled solid boundaries. In addition, only the positions of the boundary particles are needed, which saves memory as well as the extra computation required for position correction.

    Curved planar reformation algorithm based on coronary artery outline extraction by multi-planar reformation
    HOU He, LYU Xiaoqi, JIA Dongzheng, YU Hefeng
    2015, 35(1):  211-214.  DOI: 10.11772/j.issn.1001-9081.2015.01.0211

    To solve the problems of three-dimensional clipping and Multi-Planar Reformation (MPR), namely that only the geometric information of tissues or organs can be obtained and the structure of a curving organ cannot be displayed in a single image, a Curved Planar Reformation (CPR) algorithm that extracts the outline based on MPR was proposed to reform the coronary artery. Firstly, discrete points describing the outline of the coronary artery were extracted using MPR, and Cardinal interpolation was applied to obtain a smooth fitted outline curve. Secondly, the outline was projected along the direction of interest to obtain the scanning curved plane. Finally, the scanning curved plane corresponding to the cardiac volume data was displayed, yielding the CPR image of the artery. The experimental results show that, compared with the three-dimensional clipping method and the three-dimensional data field method, the speed of extracting the coronary artery outline increases by about 4 to 6 frames per second and the rendering time is shorter. In terms of rendering quality, compared with the three-dimensional segmentation method, the curved planar image of the coronary artery is clear and complete, which helps doctors analyze lesions clearly and satisfies the demands of actual clinical diagnosis.

    Object localization method based on fusion of visual saliency and superpixels
    SHAO Mingzheng, QI Jianfeng, WANG Xiwu, WANG Lu
    2015, 35(1):  215-219.  DOI: 10.11772/j.issn.1001-9081.2015.01.0215
    Abstract   PDF (800KB)

    Considering the weakness of the selective search method that a large number of windows are needed to localize objects, a novel object localization method based on the fusion of visual saliency and superpixels was proposed. Firstly, the visual saliency map was used to coarsely localize the objects; then, starting from these coarse positions, adjacent superpixels were merged according to the appearance features of the image. Furthermore, a simple background detector was employed to avoid over-merging. Finally, a greedy algorithm was used to iteratively combine the merged regions and generate the final bounding boxes. The experimental results on Pascal VOC 2007 show that the proposed method reduces the number of bounding boxes by 20% at the same detection rate (recall of 0.91) compared with the selective search algorithm, and its overlap rate reaches 0.77. The presented method keeps higher overlap rate and recall with fewer windows owing to its coarse-to-fine process.
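The overlap rate and recall quoted above are standard box metrics. A minimal sketch of how they are computed — the `(x1, y1, x2, y2)` box format and the 0.5 default threshold are assumptions, not from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def recall_at(gt_boxes, proposals, thr=0.5):
    """Fraction of ground-truth boxes covered by at least one proposal
    whose IoU with the ground truth reaches the threshold."""
    hits = sum(1 for g in gt_boxes if any(iou(g, p) >= thr for p in proposals))
    return hits / len(gt_boxes)
```

Fewer proposals at equal recall — the abstract's 20% reduction claim — is exactly what this metric pair measures.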

    Video image mosaic based on relative orientation and small region fusion
    DU Bingxin
    2015, 35(1):  220-223.  DOI: 10.11772/j.issn.1001-9081.2015.01.0220
    Abstract   PDF (759KB)

    Aiming at the problems in video mosaic systems that image transformation distortion is caused by improper selection of the stitching surface and fusion blur is caused by image parallax, a half-angle correction method was proposed to help select a proper stitching surface, and a small region fusion method was applied to solve the fusion blur problem. First, the attitude parameters of the images were calculated by using the relative orientation of photogrammetry; then, according to the attitude parameters, the images to be stitched were each rotated by half of the relative angle and mapped onto the same intermediary plane; next, the images were aligned by matching points; finally, a bar area in the middle of the overlapping area was selected as a transition area and the fade fusion method was applied to it. The experimental results show that the half-angle correction method achieves smaller distortion than the traditional plane stitching method, and that, compared with the fade fusion method applied to the whole overlap, the small region fusion method avoids the large-area blur caused by parallax. Therefore, the half-angle correction and small region fusion methods can effectively solve the problems of image transformation distortion and overlapping area fusion blur.
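The fade (linear cross-fade) fusion restricted to a narrow transition band can be sketched for a single image row; the band coordinates and pixel values are illustrative:

```python
def fade_fuse_row(left, right, band_start, band_end):
    """Fuse two aligned rows: keep `left` before the band, `right` after,
    and linearly cross-fade inside the narrow transition band, so parallax
    blur is confined to a small region instead of the whole overlap."""
    assert len(left) == len(right) and band_end > band_start
    width = band_end - band_start
    out = []
    for x in range(len(left)):
        if x < band_start:
            out.append(left[x])
        elif x >= band_end:
            out.append(right[x])
        else:
            w = (x - band_start) / width   # 0 -> all left, 1 -> all right
            out.append((1 - w) * left[x] + w * right[x])
    return out
```

Applying this per row over a thin central strip is the "small region" idea: outside the strip each output pixel comes from exactly one source image.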

    Image classification approach based on statistical features of speed up robust feature set
    WANG Shu, LYU Xueqiang, ZHANG Kai, LI Zhuo
    2015, 35(1):  224-230.  DOI: 10.11772/j.issn.1001-9081.2015.01.0224
    Abstract   PDF (1151KB)

    The current image classification methods which use the Speed Up Robust Feature (SURF) are low in efficiency and accuracy. To overcome these shortcomings, an approach for image classification using the statistical features of the SURF set was proposed. The approach took all dimensions and the scale information of the SURF as independent random variables, and split the data with the sign of the Laplacian response. Firstly, the SURF vector set of the image was obtained. Then the feature vector was constructed from the first absolute central moments and the weighted first absolute central moments of each dimension. Finally, a Support Vector Machine (SVM) accomplished the image classification with this vector. The experimental results show that the precision of this approach is higher than that of the SURF histogram method and the 3-channel-Gabor texture feature method by 17.6% and 5.4% respectively. By combining this approach with the HSV histogram, a high-level feature fusion method was obtained with good classification performance. Compared with the fusion of the SURF histogram and HSV histogram, the fusion of 3-channel-Gabor texture features and HSV histogram, and the multiple-instance-learning method based on the Bag of Visual Words (BoVW) model, the fusion of this approach and the HSV histogram improves the precision by 5.2%, 6.8% and 3.2% respectively.
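The per-dimension first absolute central moment used to build the feature vector can be sketched as below. The weighting option (e.g. by keypoint scale) is an assumption about how the weighted variant might look, not the paper's exact formula:

```python
def first_abs_central_moments(descriptors, weights=None):
    """Per-dimension first absolute central moment of a descriptor set:
    E|x_d - mean_d|, optionally with per-descriptor weights (which could
    come from SURF scale -- an illustrative assumption)."""
    n = len(descriptors)
    dims = len(descriptors[0])
    if weights is None:
        weights = [1.0] * n
    wsum = sum(weights)
    feats = []
    for d in range(dims):
        col = [v[d] for v in descriptors]
        mean = sum(w * x for w, x in zip(weights, col)) / wsum
        feats.append(sum(w * abs(x - mean)
                         for w, x in zip(weights, col)) / wsum)
    return feats
```

For 64-dimensional SURF descriptors this yields a fixed-length 64-element vector regardless of how many keypoints the image produced, which is what makes it directly usable as an SVM input.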

    Registration of multispectral magnetic resonance images based on cross cumulative residual entropy
    XIANG Yan, HE Jianfeng, YI Sanli, XING Zhengwei
    2015, 35(1):  231-234.  DOI: 10.11772/j.issn.1001-9081.2015.01.0231
    Abstract   PDF (643KB)

    To solve the problem that classical Mutual Information (MI) based image registration may fall into local extrema, a registration method for multispectral magnetic resonance images based on Cross Cumulative Residual Entropy (CCRE) was proposed. Firstly, the gray levels of the reference and floating images were compressed to 5 bits and 7 bits. Then the Hanning-windowed Sinc interpolation was used to calculate the CCRE of the 5-bit grayscale images, and the Brent algorithm was used to search the CCRE to obtain the initial transformation parameters of pre-registration. Finally, Partial Volume (PV) interpolation was adopted to calculate the CCRE of the 7-bit grayscale images, and the Powell algorithm was applied to optimize the CCRE, starting from the pre-registration parameters, to obtain the final parameters. The experimental results show that the robustness of the proposed method is improved compared with CCRE registration using PV interpolation, while about 90% of the registration time is saved and the accuracy is improved compared with CCRE registration using Hanning-windowed Sinc interpolation. The presented method ensures robustness, efficiency and accuracy, so it is suitable for multispectral image registration.
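A minimal sketch of the CCRE similarity measure computed from a joint gray-level histogram, assuming the common definition CCRE(X;Y) = CRE(X) − E_Y[CRE(X|Y)] with CRE(X) = −Σ P(X>x) log P(X>x); unit bin widths and the function names are illustrative:

```python
import math

def survival(hist):
    """P(X > x) for each bin of a (possibly unnormalized) histogram."""
    total = sum(hist)
    tail, surv = total, []
    for h in hist:
        tail -= h
        surv.append(tail / total)
    return surv

def cre(hist):
    """Cumulative residual entropy: -sum P(X>x) log P(X>x) over bins."""
    return -sum(s * math.log(s) for s in survival(hist) if s > 0)

def ccre(joint):
    """CCRE(X; Y) = CRE(X) - E_Y[CRE(X | Y=y)] from a joint histogram
    joint[y][x] of the two images' quantized gray levels."""
    total = sum(sum(row) for row in joint)
    marg_x = [sum(joint[y][x] for y in range(len(joint)))
              for x in range(len(joint[0]))]
    val = cre(marg_x)
    for row in joint:
        py = sum(row) / total
        if py > 0:
            val -= py * cre(row)
    return val
```

As with mutual information, the measure is zero for an independent joint histogram and grows as the gray levels of the two images become more predictable from each other, which is what the Brent and Powell searches maximize.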

    Image denoising based on nonlocal self-similarity and Shearlet adaptive shrinkage model
    XU Zhiliang, DENG Chengzhi
    2015, 35(1):  235-238.  DOI: 10.11772/j.issn.1001-9081.2015.01.0235
    Abstract   PDF (704KB)

    For the Gibbs artifacts and "cracks" phenomenon introduced by Shearlet shrinkage denoising, an image denoising method based on Shearlet adaptive shrinkage and a nonlocal self-similarity model was proposed. First, the noisy image was decomposed into multiple scales and orientations by the Shearlet transform. Second, the Shearlet coefficients were modeled with a Gaussian Scale Mixture (GSM) model, the image noise was reduced by adaptively approximating the Shearlet coefficients with a Bayesian least squares estimator, and the preliminary denoised image was reconstructed by the inverse Shearlet transform. Finally, the preliminary denoised image was further filtered by the nonlocal self-similarity model to produce the final denoised image. The experimental results show that the proposed method better preserves edge information while effectively reducing image noise and the Gibbs-like artifacts produced by shrinkage. Compared with Non-Subsampled Shearlet Transform (NSST)-based image denoising with hard thresholding, the proposed method improves the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) by 1.41 dB and 0.08 respectively; compared with GSM model-based image denoising in the Shearlet domain, it improves the PSNR and SSIM by 1.04 dB and 0.045 respectively; compared with Shearlet-based image denoising using a trivariate prior model, it improves the PSNR and SSIM by 0.64 dB and 0.025 respectively.

    Research and development of intelligent healthy community system based on mobile Internet
    YUAN Xi, LI Qiang
    2015, 35(1):  239-242.  DOI: 10.11772/j.issn.1001-9081.2015.01.0239
    Abstract   PDF (762KB)

    To solve the problems of low resource utilization in community health centers, little contact between community health centers and community residents, and the difficulty for residents to participate in personal health management and medical care, an intelligent healthy community system was developed. Building on increasingly popular mobile devices, the system provided support for health record management, chronic disease management, immunization, appointment registration, medical information query and other services in the community health center. It realized data sharing and interaction among smart phones, tablet PCs and the Hospital Information System (HIS), which allowed residents to actively participate in personal health management. The system has now been deployed in a community health center in Chengdu; it makes it convenient for community residents to manage their personal health, and improves the work efficiency and service quality of the community health center.

    Research of location-routing problem in emergency logistics system for post-earthquake transitional stage
    WANG Yong, XU Dongchuan, NONG Lanjing
    2015, 35(1):  243-246.  DOI: 10.11772/j.issn.1001-9081.2015.01.0243
    Abstract   PDF (604KB)

    During the post-earthquake transitional phase, there are problems of relief goods recycling and environmental protection. On the premise of meeting the basic demands of people in the disaster area, a Location-Routing Problem (LRP) model of emergency logistics facilities with both forward and reverse flows was built. First, according to the characteristic that the recycled materials could be partially transported, a mathematical model was established whose objective function was the minimum time of the emergency system. Second, a two-phase heuristic algorithm was used to solve the model. Finally, example analyses verified the feasibility of the model and the algorithm. The experimental results show that, compared with the traditional one-way LRP model, the objective function value of the proposed method decreases by 51%. The proposed model can effectively improve the operational efficiency of the emergency logistics system and provide auxiliary decision support for emergency management departments.

    Quay crane allocation and scheduling joint optimization model for single ship
    ZHENG Hongxing, WU Yue, TU Chuang, LIU Jinping
    2015, 35(1):  247-251.  DOI: 10.11772/j.issn.1001-9081.2015.01.0247
    Abstract   PDF (885KB)

    This paper proposed a linear programming model to deal with the Quay Crane (QC) allocation and scheduling problem for a single ship under the circumstance of fixed berth allocation. With the aim of minimizing the working time of the ship at berth, the model considered not only the disruptive waiting time when the quay cranes were working, but also the workload balance among the cranes. An Improved Ant Colony Optimization (IACO) algorithm embedding a solution space split strategy was presented to solve the model. The experimental results show that the proper allocation and scheduling of quay cranes given by the model can save 31.86% of the crane resource on average compared with the full application of all available cranes. Compared with the solution obtained by Lingo, the results of the IACO algorithm have an average deviation of 5.23%, while the average CPU (Central Processing Unit) time is reduced by 78.7%, which shows the feasibility and validity of the proposed model and algorithm.

    Data processing techniques on sensors of smart terminals for 3D navigation
    JI Lianen, ZOU Yinlong, XIN Bing
    2015, 35(1):  252-256.  DOI: 10.11772/j.issn.1001-9081.2015.01.0252
    Abstract   PDF (1006KB)

    Since there are severe instability problems in the human-computer interaction of 3D scene navigation, a data processing algorithm with a variable smoothing and normalization model for the sensors of smart terminals was proposed. In view of the data characteristics of the sensors and the processing characteristics of each smoothing model, different smoothing models were combined and variation of the smoothing window was enabled. A normalization algorithm was applied to process the interaction data of the sensors equidistantly. The experimental results in a 3D scene navigation system on smart terminals show that the standard deviation decreases by 71.3% after processing the low-frequency sensor data with the proposed algorithm, which means the data perturbation is significantly reduced; when processing the high-frequency data, the standard deviation decreases by 7.9%, which effectively preserves the signal features. The proposed algorithm can distinctly improve the stability and continuity of 3D navigation interaction on smart terminals.
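The variable-window smoothing plus normalization idea can be sketched as follows. The rule for switching the window size on local signal change, and all thresholds, are illustrative assumptions rather than the paper's exact model:

```python
def variable_smooth(data, base=5, max_win=15, thresh=0.5):
    """Moving-average smoothing with a variable window: widen it on
    slowly varying (low-frequency) stretches to suppress perturbation,
    and shrink it where the local change exceeds `thresh` so genuine
    high-frequency signal features are preserved."""
    out = []
    for i in range(len(data)):
        delta = abs(data[i] - data[i - 1]) if i else 0.0
        win = base if delta > thresh else max_win
        lo = max(0, i - win + 1)
        window = data[lo:i + 1]
        out.append(sum(window) / len(window))
    return out

def normalize(data):
    """Map the smoothed sensor readings onto [0, 1] equidistantly."""
    lo, hi = min(data), max(data)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in data]
```

A pipeline would smooth the raw accelerometer/gyroscope stream first and then normalize it before mapping it to camera motion in the 3D scene.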

    Robust speech recognition algorithm based on articulatory features for vocal effort variability
    CHAO Hao, SONG Cheng, PENG Weiping
    2015, 35(1):  257-261.  DOI: 10.11772/j.issn.1001-9081.2015.01.0257
    Abstract   PDF (785KB)

    Aiming at the problem of robust speech recognition under Vocal Effort (VE) variability, a speech recognition algorithm based on a multi-model framework was presented. Firstly, the changes of acoustic characteristics under different VE modes, as well as the influence of these changes on speech recognition, were analyzed. Secondly, a VE detection method based on the Gaussian Mixture Model (GMM) was proposed. Finally, dedicated acoustic models were trained to recognize whisper speech if the result of VE detection was the whisper mode; otherwise articulatory features, together with spectrum features, were introduced to recognize speech of the remaining four VE modes. The experiments conducted on isolated-word recognition show that significant improvement of recognition accuracy can be achieved by the proposed method: compared with the baseline system, the mixed corpus training method and the Maximum Likelihood Linear Regression (MLLR) adaptation method, the average character error rate of the five VE modes is reduced by 26.69%, 14.51% and 15.30% respectively. These results prove that articulatory features are more robust than traditional spectrum features against VE variability, and that the multi-model framework is an efficient method for robust speech recognition under VE variability.
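VE-mode detection by likelihood scoring can be sketched with degenerate single-Gaussian, diagonal-covariance models standing in for full GMMs; the feature layout and the model parameters below are illustrative placeholders:

```python
import math

def log_gauss(x, mean, var):
    """Log-density of a diagonal-covariance Gaussian at feature vector x."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m)**2 / v)
               for xi, m, v in zip(x, mean, var))

def detect_mode(frames, models):
    """Pick the vocal-effort mode whose model gives the highest total
    log-likelihood over the utterance's feature frames. A real system
    would sum over GMM components per frame; one Gaussian per mode is
    the minimal stand-in here."""
    best, best_ll = None, float("-inf")
    for name, (mean, var) in models.items():
        ll = sum(log_gauss(f, mean, var) for f in frames)
        if ll > best_ll:
            best, best_ll = name, ll
    return best
```

In the paper's framework, the whisper decision made here routes the utterance to whisper-specific acoustic models; all other modes go to the articulatory-plus-spectrum recognizer.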

    Real-time human identification algorithm based on dynamic electrocardiogram signals
    LU Yang, BAO Shudi, ZHOU Xiang, CHEN Jinheng
    2015, 35(1):  262-264.  DOI: 10.11772/j.issn.1001-9081.2015.01.0262
    Abstract   PDF (603KB)

    Electrocardiogram (ECG) signals have attracted widespread interest for potential use in biometrics due to their ease of monitoring and individual uniqueness. To address the accuracy and real-time performance problems of human identification, a fast and robust ECG-based identification algorithm particularly suitable for miniaturized embedded platforms was proposed. Firstly, a dynamic-threshold method was used to extract stable ECG waveforms as template samples and test samples; then, based on a modified Dynamic Time Warping (DTW) method, the degree of difference between matching samples was calculated to reach a recognition result. Considering that ECG is a time-varying and non-stationary signal, the ECG template database was dynamically updated to ensure the consistency between the templates and the body status, further improving recognition accuracy and robustness. The analysis results on the MIT-BIH Arrhythmia database and the authors' own experimental data show that the proposed algorithm has an accuracy rate of 98.6%. Meanwhile, the average running times of the dynamic threshold setting and optimized DTW algorithms on Android mobile terminals are about 59.5 ms and 26.0 ms respectively, which demonstrates significantly improved real-time performance.
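The core matching step — a DTW distance between a template waveform and a test waveform — can be sketched with the standard O(nm) dynamic program (without the paper's specific modifications):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences, with absolute
    difference as the point-wise cost. Warping lets heartbeats of slightly
    different durations still align sample-to-sample."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Identification then reduces to comparing the test beat's DTW distance to each enrolled template against a decision threshold; a band constraint (e.g. Sakoe-Chiba) is the usual optimization for embedded real-time budgets.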

    Clutter suppression method based on dynamic region regression and singular value decomposition in ultrasound flow image
    XIAO Lei, XIONG Xiujuan, CHEN Fei, CHEN Bo
    2015, 35(1):  265-269.  DOI: 10.11772/j.issn.1001-9081.2015.01.0265
    Abstract   PDF (876KB)

    To address the inaccurate estimation of blood flow velocity caused by clutter signals in ultrasound Color Flow Imaging (CFI), a clutter suppression method based on dynamic region polynomial regression and Singular Value Decomposition (SVD), called the ARS algorithm, was proposed. First, according to the time-domain characteristics and energy intensity of the echo signal, a dynamic partitioning method was adopted to distinguish the signal ranges; then, according to the divided ranges, the polynomial regression method or the SVD method was dynamically selected to reject the clutter signal. Simulations were performed to compare the proposed method with the projection-initialized Infinite Impulse Response (IIR) filter, the non-stationary filter, the regression filter and the SVD algorithm. The experimental results show that the proposed method can completely reject the interference of tissue motion (the velocity is almost zero in the tissue area and the clutter-to-blood ratio is about 5.427 dB after clutter suppression), the estimated maximum blood flow velocity (0.968 m/s) is close to the theoretical value, the blood flow is distributed uniformly, the integrity of the blood flow velocity profile is well maintained, and the obtained blood flow velocity map shows the authentic flow velocity with high image quality.
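The polynomial-regression branch of a clutter filter fits a low-order polynomial to each slow-time ensemble and subtracts it, leaving the faster blood signal. A self-contained sketch via the normal equations — the degree and the elimination solver are illustrative choices, not the paper's exact filter:

```python
def polyfit(y, degree):
    """Least-squares polynomial fit to samples y at x = 0..n-1,
    via the normal equations and Gaussian elimination with pivoting."""
    xs = list(range(len(y)))
    k = degree + 1
    A = [[float(sum(x**(i + j) for x in xs)) for j in range(k)]
         for i in range(k)]
    b = [float(sum((x**i) * yi for x, yi in zip(xs, y))) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

def regression_filter(y, degree=2):
    """Remove slowly varying clutter: subtract the fitted polynomial
    from the slow-time ensemble; the residual approximates blood flow."""
    coef = polyfit(y, degree)
    return [yi - sum(c * (x**p) for p, c in enumerate(coef))
            for x, yi in zip(range(len(y)), y)]
```

The ARS idea is to apply this only in regions where the partitioning step labels the clutter as polynomial-like, falling back to SVD-based suppression elsewhere.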

    Design of aerial photography control system for unmanned aerial vehicle
    ZHAO Haimeng, ZHANG Wenkai, GU Jingbo, WANG Qiang, SHEN Luning, YAN Lei
    2015, 35(1):  270-275.  DOI: 10.11772/j.issn.1001-9081.2015.01.0270
    Abstract   PDF (920KB)

    Aiming at the problems of automatic control of camera load parameters and real-time tracking of the flight path in Unmanned Aerial Vehicle (UAV) remote sensing photography, a design scheme that automatically completes camera load control and aerial photography control was presented. First, the real-time geographic location and environment forecasting information was acquired according to experimental requirements, and the parameter encoding was completed based on the table of camera control parameters; second, the custom protocol instruction set was sent to the hardware control circuits through the communication port to set the camera load parameters and complete the photography. Meanwhile, the geographic coordinates of the real-time flight path were recorded by the route planning software. The system combines the hardware control platform with software data processing to achieve collaborative control. The UAV experiment results show that, compared with the single-parameter aerial control mode, the proposed system can automatically control camera parameters and track the real-time flight path according to different photography conditions and scenes.

    Data acquisition and transmission system for γ-ray industrial computed tomography
    GAO Fuqiang, CHEN Chunjiang, LAN Yang, AN Kang
    2015, 35(1):  276-278.  DOI: 10.11772/j.issn.1001-9081.2015.01.0276
    Abstract   PDF (634KB)

    In order to meet the requirements of high-speed, multi-channel data acquisition and transmission for γ-ray industrial Computed Tomography (CT), a system based on the User Datagram Protocol (UDP) and controlled by a Field-Programmable Gate Array (FPGA) was designed. The system enlarged the FPGA counting unit, so more channels could be used for data collection. The main control took the FPGA as its core, used the UDP protocol and was implemented by Verilog programming. Data was then transmitted to the upper computer for image reconstruction through an Ethernet interface chip. The upper computer interface and the mutual communication with the underlying transmission circuit were realized by VC++ 6.0 programming. The experimental results indicate that, in the 100 Mb/s full-duplex mode, the network utilization rate reaches 93%, the transmission speed is 93 Mb/s (11.625 MB/s), and the upper computer can receive data correctly over a long distance. Therefore, the system satisfies the requirements of high speed and long distance for γ-ray industrial CT.
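The UDP transfer path can be illustrated with a loopback round trip in which one socket stands in for the FPGA-side sender and another for the upper computer; the payload size and addresses are placeholders (the real system speaks UDP from Verilog through an Ethernet PHY, not from Python):

```python
import socket

def udp_transfer_once(host="127.0.0.1"):
    """Minimal loopback demo of the UDP data path: `tx` plays the
    FPGA-side acquisition unit pushing one packet, `rx` the upper
    computer that receives it for image reconstruction."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind((host, 0))                  # let the OS pick a free port
    rx.settimeout(2.0)
    port = rx.getsockname()[1]

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytes(range(16))          # stand-in for one projection frame
    tx.sendto(payload, (host, port))

    data, _ = rx.recvfrom(2048)
    tx.close()
    rx.close()
    return data
```

UDP's lack of retransmission is what makes the 93% utilization figure achievable; in return, the receiver must tolerate or detect lost packets itself (e.g. with sequence numbers in the payload).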

    Wireless communication system of capsule endoscope based on ZL70102
    WEI Xueling, LIU Hua
    2015, 35(1):  279-282.  DOI: 10.11772/j.issn.1001-9081.2015.01.0279
    Abstract   PDF (657KB)

    The traditional methods of digestive tract disease diagnosis have a low accuracy rate and a painful examination process. To solve these problems, a wireless capsule endoscope system was designed, using wireless communication technology to transmit images of the digestive tract out of the body. Firstly, the image gathering module was used to capture images of the digestive tract. Secondly, the image data was transmitted out of the body by the digital wireless communication system. Finally, the data was quickly uploaded to a PC by the receiving module to decompress and display the images. The experimental results show that the wireless communication system based on the MSP430 and ZL70102 has the features of small size, low power consumption and high data rate. Compared with existing capsule endoscopes that transmit analog signals, this digital wireless communication system has strong anti-interference capacity. The accuracy of transmitted image data reaches 80% and the power consumption is only 31.6 mW.

    Blowing state recognition of basic oxygen furnace based on feature of flame color texture complexity
    LI Pengju, LIU Hui, WANG Bin, WANG Long
    2015, 35(1):  283-288.  DOI: 10.11772/j.issn.1001-9081.2015.01.0283
    Abstract   PDF (881KB)

    In the process of converter blowing state recognition based on flame images, the flame color texture information is underutilized and the state recognition rate of existing methods still needs to be improved. To deal with this problem, a new converter blowing state recognition method based on the feature of flame color texture complexity was proposed. Firstly, the flame image was transformed into the HSI color space and non-uniformly quantified; secondly, the co-occurrence matrix of the H component and S component was computed to fuse the color information of the flame image; thirdly, the feature descriptor of flame texture complexity was calculated from the color co-occurrence matrix; finally, the Canberra distance was used as the similarity criterion to classify and identify the blowing state. The experimental results show that, while meeting real-time requirements, the recognition rate of the proposed method is increased by 28.33% and 3.33% respectively, compared with the methods based on the gray-level co-occurrence matrix and gray differential statistics.
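The final step — nearest prototype under the Canberra distance — can be sketched as follows; the prototype dictionary is an illustrative stand-in for the per-state reference feature vectors:

```python
def canberra(u, v):
    """Canberra distance between two feature vectors. Each coordinate's
    difference is normalized by its magnitude, so small-valued texture
    features still influence the match; terms where both coordinates
    are zero contribute nothing."""
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(u, v) if a != 0 or b != 0)

def classify_blowing_state(features, prototypes):
    """Assign the blowing state whose prototype feature vector is
    nearest to the observed flame-texture-complexity descriptor."""
    return min(prototypes, key=lambda k: canberra(features, prototypes[k]))
```

Because every term of the Canberra sum lies in [0, 1], the distance is insensitive to the absolute scale of individual complexity features, which suits descriptors mixing statistics of very different magnitudes.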

    Dynamic prediction model for gas emission quantity based on least square support vector machine and Kalman filter
    FU Hua, ZI Hai
    2015, 35(1):  289-293.  DOI: 10.11772/j.issn.1001-9081.2015.01.0289
    Abstract   PDF (726KB)

    In order to solve the multifactor problem of gas emission quantity prediction, a dynamic prediction method coupling the Least Squares Support Vector Machine (LS-SVM) with a Kalman filter was proposed. Based on the strategy of predicting the variance ratio of residual errors, a dynamically adaptive training sample set was obtained to replace the fixed training sample set. The LS-SVM identification network was used to perform a nonlinear mapping on the relevant factors of gas emission quantity and extract the state vector with the best dimension. The Kalman filter based gas emission quantity forecasting model was then established using this state vector. Experiments were carried out with the monitoring data of a mine. The experimental results show that the average relative error of the predicted results is 2.17% and the average relative variance is 0.008873. The proposed model is superior to other prediction models based on neural networks and support vector machines in terms of prediction accuracy and generalization ability.
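The Kalman-filter stage can be illustrated with a scalar filter under a random-walk state model; the noise settings here are illustrative, and in the paper's pipeline the input would be the LS-SVM-derived state rather than raw measurements:

```python
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.
    q: process noise variance, r: measurement noise variance.
    Returns the filtered state estimate after each measurement."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                 # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement innovation
        p = (1 - k) * p           # posterior uncertainty shrinks
        out.append(x)
    return out
```

The filter's recursive form is what makes the prediction "dynamic": each new gas-emission observation refines the state estimate without refitting the whole model.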

    Characteristic information extraction and dynamic warning algorithm of watershed disaster
    JIAO Fangyuan, LI Jia, LI Wei
    2015, 35(1):  294-298.  DOI: 10.11772/j.issn.1001-9081.2015.01.0294
    Abstract   PDF (767KB)

    To address the problem that current methods cannot meet the practical needs of dynamic characteristic information extraction and watershed disaster early warning, and to enhance the technology level of characteristic information extraction and the dynamic early warning process, an in-depth study was conducted on the core aspects of performance parameter calculation and the implementation of dynamic early warning for typical watershed disasters. A performance parameter calculation method based on Wireless Sensor Network (WSN) was proposed, and an algorithm to process the dynamic early warning information of disasters was designed. With the sampled performance parameter data of typical watershed disasters, a simulation analysis of the core performance parameters was carried out on the Matlab simulation platform. The experimental results show that the proposed algorithm can effectively capture the dynamic core characteristic information of watershed disasters and improve the accuracy of dynamic early warning indication.

Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn