
Table of Contents

    01 February 2013, Volume 33 Issue 02
    Advanced computing
    Resource allocation strategies for cloud platform
    QIN Zhiguang KE Tao LIU Mengjuan WANG Cong
    2013, 33(02):  299-307.  DOI: 10.3724/SP.J.1087.2013.00299
    Resource allocation strategy has been a hot and difficult research topic in the field of cloud computing. Research and analysis of current resource allocation strategies were carried out. Firstly, the challenges facing cloud platform resource allocation were analyzed. Then a formal description of cloud platform resource allocation was given. The current mainstream resource allocation strategies were described from the perspectives of heuristic allocation algorithms, economic-theory-based strategies, and other allocation strategies, and their advantages and disadvantages were discussed. Finally, a comprehensive comparison of the mainstream strategies based on specific indicators was made, and some future research directions were pointed out.
    Study of chaos in double coupled oscillators system based on topological horseshoe theory
    XU Guilan TANG Song YANG Yanfang
    2013, 33(02):  304-307.  DOI: 10.3724/SP.J.1087.2013.00304
    Recently, collective chaos in coupled oscillator networks has become a new hot spot in chaos research. Because the study of collective chaos has a short history and lacks mature theories and methods, research still mainly relies on rough means such as numerical computation, the power spectrum and Lyapunov exponents, which are not mathematically rigorous criteria and can hardly describe the mechanism of chaos. By means of topological horseshoe theory, the authors studied in depth the collective chaos of a four-dimensional continuous system consisting of a pair of coupled oscillators, and found a topological horseshoe with expansion in one direction in the phase space of the corresponding Poincare map. This not only rigorously demonstrates, in a numerical way, that the coupled oscillator system is chaotic, but also reveals the dynamic mechanism of the chaos.
    Hierarchical clustering algorithm based on sticker and 2-armed DNA model
    BAI Xue REN Xiaoling LIU Xiyu
    2013, 33(02):  308-315.  DOI: 10.3724/SP.J.1087.2013.00308
    In order to take full advantage of the high parallelism and huge storage capacity of DNA molecules in biological computing, this paper introduced DNA computing into hierarchical clustering to perform a global search on the data set. To realize nearest-neighbor hierarchical clustering, an algorithm combining the sticker model with 2-armed DNA molecules was put forward. Based on the idea of the Minimum Spanning Tree (MST), the first step was to generate complex DNA strands for all combinations of edges and then screen those containing n-1 edges. Based on the edges, the corresponding vertex stickers were then set, and the strands covering all the vertices were kept. After that, weight strands constructed by 2-armed molecules were appended at the end of the complex strands, and the shortest ones could be detected by gel electrophoresis. Finally, the clustering result was obtained by fluorescence analysis. In computer simulation, this algorithm can take different edge lengths into account without changing its polynomial time complexity, and the number of steps to read the final results is a constant.
    Improved binary particle swarm optimization algorithm with experience factor
    CAO Yiqin ZHANG Zhen HUANG Xiaosheng
    2013, 33(02):  311-315.  DOI: 10.3724/SP.J.1087.2013.00311
    The traditional Binary Particle Swarm Optimization (BPSO) algorithm does not make full use of the historical position information in its iterative optimization, which impedes further improvement of the algorithm's efficiency. To deal with this problem, an improved BPSO algorithm with an experience factor was proposed. The new algorithm exploited the experience factor, which reflects the historical information of a particle's position, to influence the velocity update of particles and therefore improve the optimization process. In order to avoid excessive dependence on the particles' historical experience information, the algorithm regulated the historical information through a reward and punishment mechanism and a history-forgetting coefficient, and finally empirical weights were used to determine the final effect of the experience factor. The experimental results show that, compared with the classic BPSO and related improved algorithms, the new algorithm achieves better results in both convergence speed and global search ability.
    Quantum particle swarm optimization based on Bloch coordinates of qubits
    CHEN Yixiong LIANG Ximing HUANG Ya-fei
    2013, 33(02):  316-322.  DOI: 10.3724/SP.J.1087.2013.00316
    To improve the efficiency of Particle Swarm Optimization (PSO), a quantum particle swarm optimization algorithm combining quantum theory on the basis of the Bloch sphere was proposed. In Bloch spherical coordinates, the particle automatically updated its rotation angle and position without setting the rotation angle through a look-up table (or fixing it to a value within an interval), which made up for this deficiency of the quantum evolutionary algorithm and the quantum genetic algorithm based on the Bloch sphere and made the algorithm more general. Using the quantum Hadamard gate to realize particle mutation enhanced the diversity of the population and prompted particles to jump out of local extrema. The simulation results on typical function optimization problems show that the algorithm is stable, has high precision and a fast convergence rate, and is practical.
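    For illustration only: a minimal Python sketch of the standard Bloch-sphere description of a qubit used by such encodings, where a qubit with angles (theta, phi) has Cartesian coordinates (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)); the linear mapping of each coordinate to a search interval is an assumed convention, not taken from the paper.

import numpy as np

def bloch_to_cartesian(theta, phi):
    """Map the Bloch angles of a qubit to its Cartesian coordinates on the unit sphere."""
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    return x, y, z

def bloch_to_solution(coord, lower, upper):
    """Assumed convention: linearly map a Bloch coordinate in [-1, 1] to [lower, upper]."""
    return lower + (upper - lower) * (1.0 + coord) / 2.0

# Example: one particle dimension encoded by the angles (theta, phi)
theta, phi = np.pi / 3, np.pi / 4
x, y, z = bloch_to_cartesian(theta, phi)
print([bloch_to_solution(c, -5.0, 5.0) for c in (x, y, z)])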
    Improved PSO algorithm based on cosine functions and its simulation
    ZHANG Min HUANG Qiang XU Zhouzhao JIANG Baizhuang
    2013, 33(02):  319-322.  DOI: 10.3724/SP.J.1087.2013.00319
    The advantages of simplicity and easy implementation of the Particle Swarm Optimization (PSO) algorithm have been validated in science and engineering fields. However, PSO shares the weaknesses of other evolutionary algorithms, such as easily falling into local minima and premature convergence. The causes of these disadvantages were analyzed, and an improved algorithm named Cosine PSO (CPSO) was proposed, in which the inertia weight of the particle was nonlinearly adjusted based on cosine functions, the learning factors were changed symmetrically, and population diversity was maintained based on bacterial chemotaxis. Therefore, the CPSO algorithm improves on the Standard PSO (SPSO) to a certain degree. A simulation comparison of the three algorithms on five standard test functions indicates that the CPSO algorithm not only jumps out of local optima and effectively alleviates premature convergence, but also has a fast convergence speed.
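    For illustration only: a plausible Python sketch of a cosine-based nonlinear inertia weight and symmetrically varied learning factors; the exact schedules used by CPSO are not given in the abstract, so the formulas and parameter values below are assumptions.

import math

def cosine_inertia_weight(t, t_max, w_min=0.4, w_max=0.9):
    """One plausible cosine schedule: w falls nonlinearly from w_max to w_min over t_max iterations."""
    return w_min + (w_max - w_min) * (1.0 + math.cos(math.pi * t / t_max)) / 2.0

def symmetric_learning_factors(t, t_max, c_min=1.5, c_max=2.5):
    """Assumed symmetric variation: c1 decreases while c2 increases by the same amount."""
    c1 = c_max - (c_max - c_min) * t / t_max
    c2 = c_min + (c_max - c_min) * t / t_max
    return c1, c2

for t in (0, 50, 100):
    print(t, cosine_inertia_weight(t, 100), symmetric_learning_factors(t, 100))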
    Quorum generation algorithm with time complexity of O(n)
    WU Peng LI Meian
    2013, 33(02):  323-360.  DOI: 10.3724/SP.J.1087.2013.00323
    It is necessary to generate the quorums as quickly as possible in a large-scale fully distributed system for its mutual exclusion problem. Based on the theory of the relaxed cyclic difference set, the definition of the second relaxed cyclic difference set was proposed. With this new concept, the subtraction steps in the previous classical methods can be changed into summation steps. Furthermore, many summation steps can be cut down by the recurrence relation deduced from the summation steps. The time complexity of this algorithm is only O(n), and the size of the symmetric quorums is still close to 2n^(1/2).
    Artificial intelligence
    Remote sensing image classification based on active learning with manifold structure
    LIU Kang QIAN Xu WANG Ziqiang
    2013, 33(02):  326-328.  DOI: 10.3724/SP.J.1087.2013.00326
    To efficiently solve the remote sensing image classification problem, a new classification algorithm based on manifold structure and Support Vector Machine (SVM) was proposed. Firstly, the proposed algorithm trained the SVM with the initial training set and found the samples close to the decision hyperplane, then built the manifold structure of the samples by using the Laplacian graph of the selected samples. The manifold structure was applied to find representative samples for the classifier. The experimental evaluations were conducted on hyperspectral images, and the effectiveness of the proposed algorithm was evaluated by comparing it with other active learning techniques existing in the literature. The experimental results on the data sets confirm that the algorithm has higher classification accuracy.
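    For illustration only: a minimal scikit-learn sketch of the sample-selection step described above, picking the pool samples closest to the SVM decision hyperplane (binary case); the Laplacian-graph manifold refinement of the paper is not reproduced, and the data and parameters are placeholders.

import numpy as np
from sklearn.svm import SVC

def select_uncertain_samples(X_train, y_train, X_pool, n_select=10):
    """Train an SVM on the labelled set and return the pool samples closest to the
    decision hyperplane (the most uncertain ones), as candidates for labelling."""
    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
    margins = np.abs(clf.decision_function(X_pool))  # distance-like score to the hyperplane
    return np.argsort(margins)[:n_select]            # indices of the most ambiguous samples

# Toy usage with random placeholder data (binary case for simplicity)
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(40, 5)), rng.integers(0, 2, 40)
X_pool = rng.normal(size=(200, 5))
print(select_uncertain_samples(X_train, y_train, X_pool))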
    Differential evolution harmony search algorithm for solving job-shop scheduling problem
    ZHANG Jingmin LI Xia
    2013, 33(02):  329-356.  DOI: 10.3724/SP.J.1087.2013.00329
    To solve the Job-Shop Scheduling Problem (JSSP) efficiently, a Differential Evolution Harmony Search Algorithm (DEHSA) was put forward. First of all, a sorting-based process number conversion method was designed to convert a floating-point harmony into a workpiece sequence, resolving the mismatch that the harmony function is continuous while the process is discrete. Secondly, in order to improve the convergence rate of the Harmony Search Algorithm (HSA), its evolution model was improved instead of merely replacing the worst solution, and an evolution probability of the harmony variables depending on the current optimal solution, named "guide excellent", was also proposed. At last, the Differential Evolution Algorithm (DEA) was introduced into HSA to overcome its poor directionality and stagnation in the late stage. A large number of simulation results show that DEHSA has good feasibility and effectiveness for the job-shop scheduling problem.
    Approach on irregular block dynamic stacking in shipbuilding based on grid technology
    LU Chunxia MA Shaohui
    2013, 33(02):  333-337.  DOI: 10.3724/SP.J.1087.2013.00333
    In order to solve the problem of the spatial scheduling of irregular blocks in the stacking field after they are built, a dynamic stacking algorithm based on grid searching was proposed. An improved Particle Swarm Optimization (PSO) algorithm was used to determine the optimal processing sequence, and the locations of the blocks were determined during spatial layout decoding by a dynamic location strategy based on grid technology. In the decoding process, bitmaps were used to describe the information of the yard and the polygons and to find the best location for every block. The space utilization and the number of blocks that need moving were used as the evaluation function, which fully considers the dynamic nature of the stacking problem and the correlation between time and space. Every particle in the population represented a stacking sequence, and the optimal solution could be found during evolution by the improved PSO. Finally, results on actual data from a shipyard and a comparative analysis with other stacking algorithms show that the proposed algorithm performs best when the space utilization, the number of movements of other blocks and the efficiency of the algorithm are considered together.
    Variable neighborhood search algorithm for nurse rostering problem
    WANG Chao DONG Xingye
    2013, 33(02):  338-352.  DOI: 10.3724/SP.J.1087.2013.00338
    The variable neighborhood search algorithm is effective for nurse rostering, and the perturbation method used in it has a significant effect on its performance. In order to improve the satisfaction of nurses in the nurse rostering problem, an Improved Variable Neighborhood Search (IVNS) algorithm was proposed. The algorithm used three neighborhood structures; when none of the neighborhoods could improve the current solution further, a method for perturbing the current optimal solution was applied: firstly, two days in the rostering period were randomly selected, then a group of nurses was selected and their shifts on these two days were exchanged under the restriction of the hard constraints. Comparison experiments with a Hybrid Variable Neighborhood Search (HVNS) algorithm were carried out on the benchmarks provided by the first international nurse rostering competition in 2010. The results on the Sprint-early, Medium-early and Long-early instance groups show that the optimal value of the IVNS algorithm is at least not inferior to that of HVNS, and its average value is superior to that of HVNS; the maximum variance of the IVNS algorithm is 0.72, which means the fluctuation range is small and the solution performance is stable. The IVNS perturbation scheme makes only small disturbances to the existing roster and can effectively jump out of the current local optimum, enhancing the optimization ability of variable neighborhood search. Compared with the HVNS algorithm, its performance is better.
    Multi-constraint hierarchical optimization combined with forecast calculation used in intelligent strategy of generating test paper
    LU Ping WANG Yuying
    2013, 33(02):  342-345.  DOI: 10.3724/SP.J.1087.2013.00342
    Multiple constraints reduce the success rate of test paper generation, and it is difficult to distribute the knowledge points uniformly and automatically in intelligent test paper generation. To solve these problems, a multi-constraint hierarchical optimization strategy was proposed. It used a hierarchical method to reduce the problem size, and used a tree structure to manage knowledge points and realize their uniform distribution. With regard to the low success rate and efficiency of small test banks in generating test papers, a forecast calculation algorithm without backtracking was put forward on the basis of the hierarchical optimization algorithm to increase the success rate. The experimental results indicate that the algorithm is suitable for large, medium and small question banks, with good results on all of them.
    New task negotiation model of multiple mobile-robots
    KE Wende PENG Zhiping CHEN Ke CAI Zesu
    2013, 33(02):  346-349.  DOI: 10.3724/SP.J.1087.2013.00346
    Concerning the problems of lacking mind states and task handling capability, poor real-time performance caused by congested bandwidth, and slow learning from negotiation history, a task negotiation model for multiple mobile robots was proposed. Firstly, the basic moving state of a robot was described. Secondly, the states of mind (belief, goal, intention, knowledge update, etc.) and ability (cooperation, capability judgment, task allocation, etc.) based on the π calculus for the negotiation of multiple mobile robots were defined. Thirdly, the negotiation model of multiple mobile robots was constructed, in which the negotiation period, negotiation task, utility estimation, negotiation allocation protocol and learning mechanism were studied. Finally, the validity of the model was proved through robot soccer experiments.
    Fault diagnosis for batch processes based on two-dimensional principal component analysis
    KONG Xiaoguang GUO Jinyu LIN Aijun
    2013, 33(02):  350-352.  DOI: 10.3724/SP.J.1087.2013.00350
    Multiway Principal Component Analysis (MPCA) has been widely used to monitor multivariate batch processes. In the MPCA method, the batch data are unfolded into a vector in a high-dimensional space, which inevitably results in heavy computation, large storage space and the loss of important information. A new batch process fault diagnosis method based on two-Dimensional Principal Component Analysis (2DPCA) was presented. Essentially, each batch of data was represented as a second-order array, i.e., a matrix. In this case, 2DPCA could deal with the two-dimensional batch data matrix directly, without the vectorizing procedure, and with low memory and storage requirements. In addition, 2DPCA built its model with the average covariance of all the batches, which accurately reflected the different faults and enhanced the accuracy of fault diagnosis to a certain extent. The monitoring results of an industrial example show that the 2DPCA method outperforms the conventional MPCA.
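    For illustration only: a minimal NumPy sketch of 2DPCA applied to batch data matrices (time x variables), computing the matrix covariance and projecting each batch onto its leading eigenvectors; the monitoring statistics and control limits used for the actual fault diagnosis are not reproduced, and the data shapes are placeholders.

import numpy as np

def fit_2dpca(batches, n_components=3):
    """Two-dimensional PCA: work on the batch data matrices directly (time x variables)
    instead of unfolding them into long vectors, as plain MPCA does."""
    mean = np.mean(batches, axis=0)                        # average batch trajectory
    cov = np.zeros((batches.shape[2], batches.shape[2]))
    for A in batches:
        D = A - mean
        cov += D.T @ D                                     # matrix (image) covariance
    cov /= len(batches)
    vals, vecs = np.linalg.eigh(cov)                       # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :n_components]                    # leading eigenvectors
    return mean, W

def project(batch, mean, W):
    """Feature matrix of one batch: (time x n_components)."""
    return (batch - mean) @ W

# Toy usage: 20 normal batches, 100 time points, 8 process variables
batches = np.random.randn(20, 100, 8)
mean, W = fit_2dpca(batches)
print(project(batches[0], mean, W).shape)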
    Lithology identification based on genetic optimized radial basis probabilistic neural network
    JIN Yuping LI Baolin
    2013, 33(02):  353-356.  DOI: 10.3724/SP.J.1087.2013.00353
    Lithology identification is the most critical procedure in the field of logging data interpretation, while the traditional lithology identification methods have many defects such as low interpretation efficiency, low accuracy, and strong influence of human factors. To resolve these problems, a new lithology identification method using a genetically optimized Radial Basis Probabilistic Neural Network (RBPNN) was put forward. The Probabilistic Neural Network (PNN) and the Radial Basis Function Neural Network (RBFNN) were combined to construct the RBPNN. To optimize the network structure and improve convergence speed and accuracy, a Genetic Algorithm (GA) was used to search for the optimal hidden center vectors and the matching kernel function control parameters of the RBPNN structure that minimize the training error, thus forming the genetically optimized RBPNN model. The case study shows that lithology identification based on the genetically optimized RBPNN can reach the standard of practical application, is feasible and effective, and can provide scientific and theoretical support for oil and gas geological exploration.
    Network and communications
    HiCuts classification algorithm based on non-uniform cutting
    WANG Wenyong REN Chunmei HUANG Lisheng
    2013, 33(02):  357-360.  DOI: 10.3724/SP.J.1087.2013.00357
    Packet classification technology has been widely used in many network services, and the Hierarchical Intelligent Cuttings (HiCuts) algorithm is the most representative multi-dimensional packet classification algorithm. However, due to the uneven distribution of rules, it is difficult to divide the rules into different nodes by cutting each domain equally, which causes the depth of the decision tree to increase dramatically and greatly reduces the time and space efficiency of the algorithm. Massive statistical analysis shows that the rules of a rule set are not uniformly distributed within their range. A non-uniform cutting technique named N-HiCuts was proposed to build the decision tree on the basis of HiCuts. For an unevenly distributed domain, non-uniform cutting was adopted on the basis of the rule statistics; for an evenly distributed domain, equal division was adopted, thereby improving the efficiency of cutting the rule set. The experimental results show that the overall performance of the proposed algorithm is greatly improved.
    Routing algorithm in opportunistic network based on historical utility
    LIU Qilie XU Meng LI Yun YANG Jun
    2013, 33(02):  361-364.  DOI: 10.3724/SP.J.1087.2013.00361
    In view of the low delivery ratio of conventional probabilistic routing in opportunistic networks, an improved routing algorithm based on History Meeting Predictability Routing (HMPR) was put forward. The algorithm was primarily based on the contact duration and meeting frequency in the history information of nodes, and predicted the utility of packets being successfully delivered to the destination. By comparing the utility values, nodes could determine whether packets should be forwarded to the next-hop nodes. The simulation results show that, compared with traditional epidemic routing and probabilistic routing, the proposed routing scheme has better performance in packet delivery ratio, average delay time and average buffer time.
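    For illustration only: a PRoPHET-style Python sketch of a history-based delivery predictability that is raised on each encounter, aged over time and compared to make forwarding decisions; the actual HMPR utility additionally weighs contact duration, and its exact formula is not reproduced here, so the constants and update rule below are assumptions.

# PRoPHET-style predictability, shown only to illustrate history-based forwarding utilities.
P_INIT, GAMMA = 0.75, 0.98

def on_encounter(P, a, b):
    """Raise the delivery predictability of node a for node b when they meet."""
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1.0 - old) * P_INIT

def age(P, elapsed_units):
    """Decay all predictabilities as time passes without contact."""
    for key in P:
        P[key] *= GAMMA ** elapsed_units

def forward_decision(P, current, candidate, dest):
    """Hand the packet over only if the candidate's utility for the destination is higher."""
    return P.get((candidate, dest), 0.0) > P.get((current, dest), 0.0)

P = {}
on_encounter(P, "A", "B"); age(P, 5); on_encounter(P, "A", "B")
print(P, forward_decision(P, "C", "A", "B"))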
    New power control scheme with maximum energy efficiency in wireless transmission
    ZHAO Hui ZHANG Xue LIU Ming GONG Haigang WU Yue
    2013, 33(02):  365-381.  DOI: 10.3724/SP.J.1087.2013.00365
    Energy efficiency is an important metric in wireless Ad Hoc networks. Until now, there is no universally accepted definition of energy efficiency, and the related results are asymptotic or qualitative, which has limited its applicability. By regarding a bit as a physical particle with one unit of mass, the authors assumed that a bit in transmission possessed a certain amount of kinetic information energy. As a result, the energy efficiency of wireless transmission was defined as the ratio of information energy to physical energy. A quantitative analysis on energy efficiency in wireless transmission was carried out and meaningful results were obtained. It is concluded that the energy efficiency changes non-monotonously with the transmission power, and there is an optimal transmission power, with which the maximum energy efficiency will be acquired. The optimal transmission power was given to help the protocol design. Based on the theoretical results, a practical solution for transmission power control was proposed, and an extensive experimental study of it was given on the CC2420 radio. The results show the effectiveness of the proposed transmission power control scheme.
    Cluster-based and energy-efficient hierarchical time synchronization algorithm for multi-hop wireless sensor network
    WANG Yuxiu HUANG Jian SHI Xin WANG Xiaogang
    2013, 33(02):  369-373.  DOI: 10.3724/SP.J.1087.2013.00369
    Concerning the problem that typical time synchronization algorithms for multi-hop Wireless Sensor Networks (WSN) mainly focus on increasing synchronization precision but ignore issues such as network energy, path hops and error accumulation, a cluster-based, energy-efficient hierarchical time synchronization algorithm was proposed on the basis of a clustered hierarchical network structure. In this algorithm, a cluster node was selected to perform two-way synchronization with the cluster head, and its neighbor nodes achieved bidirectional synchronization indirectly through a passive monitoring mode. In this way, the number of packet transmissions was reduced, while a digital signature scheme ensured the safety of transmission. In addition, a cycle update coefficient for the synchronization group delay was introduced to further reduce the consumption caused by synchronization packets. The simulation results show that the algorithm reduces energy consumption and prolongs the network lifetime, and therefore has practical value.
    Correlation adaptive compressed sensing of wireless sensor network data
    ZHOU Jian ZHANG Mingxin
    2013, 33(02):  374-389.  DOI: 10.3724/SP.J.1087.2013.00374
    In order to eliminate the influence of the varying correlation of Wireless Sensor Network (WSN) data caused by transmission on the performance of current Compressed Sensing (CS) reconstruction algorithms, a correlation-adaptive reconstruction algorithm for network data was proposed. Firstly, an iterative algorithm was used to estimate the correlation of the data to be reconstructed; then a two-step correlation of the support set was used to coordinate the non-zero values in the sparse coefficient vector, and eventually a more precise reconstruction of the data was realized. The simulation results show that this algorithm can effectively restrain the effect of noise in WSN data reconstruction and improve the reconstruction accuracy under varying correlation.
    Link fault monitoring in optical networks based on wavelet transform
    XIONG Yu LIU Xiaoqing PENG Haiying WANG Ruyan
    2013, 33(02):  382-399.  DOI: 10.3724/SP.J.1087.2013.00382
    The traditional fault monitoring methods have problems such as large deviation and slow speed. To solve these problems, a link fault monitoring algorithm based on the wavelet transform was presented. This algorithm used a dynamic polling scheme to detect the optical power and used the local time-frequency characteristics of the wavelet transform to extract the fault information from the detected values. The monitored optical power values were decomposed at multiple scales to eliminate the effects of noise, thereby improving the accuracy of fault monitoring. The experimental results show that, compared with analytical methods in the time domain, the proposed wavelet-based fault monitoring algorithm better overcomes the effects of noise. The missed alarm rate is reduced to zero and the false alarm rate is decreased by five percentage points. The monitoring time is between 2.53ms and 3.12ms, which meets the real-time requirement.
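    For illustration only: a minimal PyWavelets sketch of multi-scale decomposition and soft-thresholding of a monitored optical-power trace; the wavelet, decomposition level and threshold rule are illustrative choices, not those of the paper, and the trace is synthetic.

import numpy as np
import pywt

def denoise_power_trace(samples, wavelet="db4", level=4):
    """Multi-scale wavelet decomposition of a monitored optical-power trace:
    soft-threshold the detail coefficients to suppress noise, then reconstruct."""
    coeffs = pywt.wavedec(samples, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from the finest scale
    thr = sigma * np.sqrt(2 * np.log(len(samples)))          # universal threshold (assumed rule)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Toy trace: a step drop in optical power (a link fault) buried in noise
t = np.arange(1024)
trace = np.where(t < 600, -3.0, -12.0) + 0.5 * np.random.randn(1024)
clean = denoise_power_trace(trace)
print(np.argmax(np.abs(np.diff(clean))))   # index of the sharpest change, near the fault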
    IPv6 routing table lookup algorithm based on multi-layer hybrid structure
    DENG Yaping ZHOU Meihong
    2013, 33(02):  385-389.  DOI: 10.3724/SP.J.1087.2013.00385
    Most of the existing routing lookup algorithms usually use a variety of optimization methods to improve search performance; however they need to reconstruct the entire routing table to update routing information. Concerning this, a multi-layer hybrid structure algorithm for IPv6 routing table lookup was proposed. The first layer used the advantage of the optimal search tree for reference. The different values of prefix 1-16 bit were ordered by their probability of occurrence in the routing table, and were stored in the linear list. The 17-32 bit and 33-48 bit of prefix were organized respectively with the balanced binary tree. The 49~64 bit of prefix was organized with the linear list. The evaluation results show that the proposed scheme performs faster, occupies less memory, and supports incremental update.
    Study of active queue management algorithm based on particle swarm optimization
    WANG Junxiang LIN Bogang
    2013, 33(02):  390-396.  DOI: 10.3724/SP.J.1087.2013.00390
    In order to mitigate network congestion, a novel active queue management algorithm named RQQM (Rate and Queue-based Queue Management) was proposed using particle swarm optimization. In this algorithm, the actual queue length was deduced with particle swarm optimization and a variation factor, and the dropping strategy and dropping rate were determined based on the arrival rate and the actual queue length. Then, a simulation with actual data was conducted to compare the performance of RQQM with that of RFQM (Rate-based Fair Queue Management) and ABLUE (Adaptive BLUE). The results show that RQQM has better adaptability.
    Performance analysis and improvement of forward error correction encoder in G3-PLC
    WU Xiaomeng LIU Hongli LI Cheng GU Zhiru
    2013, 33(02):  393-396.  DOI: 10.3724/SP.J.1087.2013.00393
    To solve the problems of the single, low code rate of the convolutional code and the large loss of data rate in the G3 standard, the low-voltage power line carrier communication system model based on Orthogonal Frequency Division Multiplexing (OFDM) in the G3 standard was analyzed, and a forward error correction encoder was designed based on RS encoding, convolutional encoding, puncturing and depuncturing, repetition encoding and a two-dimensional time-frequency interleaving algorithm. Moreover, a method for raising the code rate by puncturing and depuncturing was introduced in detail. The simulation results show that the rate of the convolutional code is raised from 1/2 to 2/3 and the data rate is improved without increasing the decoding complexity, while effective and reliable communication is maintained, which means the scheme can be widely used in low-voltage Power Line Communication (PLC).
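    For illustration only: a minimal Python sketch of puncturing and depuncturing with the common pattern that turns a rate-1/2 mother convolutional code into rate 2/3; the exact pattern and framing defined by the G3 specification are not reproduced here, so treat the pattern as an assumption.

import numpy as np

# Common puncturing pattern for rate 1/2 -> 2/3: over every two input bits, keep both
# coded bits of the first and only the first coded bit of the second (4 coded bits, 3 kept).
PATTERN = np.array([1, 1, 1, 0], dtype=bool)

def puncture(coded_bits):
    """Drop the coded bits marked 0 in the pattern (input length a multiple of 4)."""
    mask = np.tile(PATTERN, len(coded_bits) // len(PATTERN))
    return coded_bits[mask]

def depuncture(received_llrs):
    """Re-insert neutral values (LLR = 0, i.e. no information) at the punctured positions
    so the ordinary rate-1/2 Viterbi decoder can be reused unchanged."""
    n_blocks = len(received_llrs) // int(PATTERN.sum())
    out = np.zeros(n_blocks * len(PATTERN))
    out[np.tile(PATTERN, n_blocks)] = received_llrs
    return out

coded = np.arange(8)                 # stand-in for 8 coded bits from the rate-1/2 encoder
print(puncture(coded))               # 6 transmitted values -> effective rate 2/3
print(depuncture(puncture(coded).astype(float)))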
    Optimization and improvement for Turbo product code decoding algorithm
    LIU Zhao WEI Yanqing ZHANG Xiaoming
    2013, 33(02):  397-399.  DOI: 10.3724/SP.J.1087.2013.00397
    The Chase2 algorithm is one of the soft-decision decoding algorithms of Turbo Product Code (TPC). The conventional Chase2 algorithm needs a large amount of computation to calculate the Euclidean distance and search for competing codeword, so it is complex to implement in the hardware. Therefore, on the basis of the conventional Chase2 algorithm, the correlation was substituted as the metric for the Euclidean distance and the process of searching for competing codeword was simplified to reduce the decoding complexity; and the soft-output value was adjusted when there was no competing codeword to improve the coding gain. The simulation results show that, compared with the conventional algorithm, the modified algorithm has better performance and faster decoding speed, and it is suitable for hardware implementation.
    Information security
    Markov-based survivability model for Web applications
    QIN Zhiguang SONG Xu GENG Ji CHEN Wei
    2013, 33(02):  400-403.  DOI: 10.3724/SP.J.1087.2013.00400
    Current survivability models can hardly provide a practical solution, nor do they reflect the properties of Web applications. Firstly, the properties of Web applications were analyzed, especially the differences between atomic Web applications and composite Web applications. Secondly, a mathematical model reflecting the invoking relationship of the atomic Web applications used in the development of a composite Web application was constructed. Lastly, a survivability model for atomic Web applications and a Markov-based survivability model for composite Web applications with regard to the runtime environment were proposed. On the basis of these models, a recovery approach was given for Web applications whose functions partly or completely fail in an adverse environment. Besides, a case was analyzed using these models and its recovery procedures were given, in which high survivability was guaranteed.
    Network security situation awareness model based on autonomic computing
    ZHANG Dan ZHENG Ruijuan WU Qingtao DAI Yumei
    2013, 33(02):  404-407.  DOI: 10.3724/SP.J.1087.2013.00404
    Concerning the complexity of network security management and the absence of self-adaptation on situation awareness process, a Network Security Situation Awareness Model (NSSAM) based on autonomic computing was proposed. The situation extraction was analyzed in real-time by an autonomic feedback law. From the perspectives of attack and defense, a multi-level and multi-angle network security situation assessment model employing Analytic Hierarchy Process (AHP) was established according to the extracted situation information. The model of future network security situation prediction adopting improved genetic neural network was built on the basis of the past and current network security situation. Test results show that NSSAM with autonomic feedback mechanism can effectively enhance self-adaptation of the system.
    Information aggregation leakage proof model based on assignment partition
    XIE Wenchong YANG Yingjie WANG Yongwei DAI Xiangdong
    2013, 33(02):  408-416.  DOI: 10.3724/SP.J.1087.2013.00408
    To solve the problems existing in the BLP (Bell-LaPadula) model, such as information aggregation leakage, the excessive privileges of trusted subjects and the deficiency of integrity, and with reference to the application requirements of hierarchical file protection, an information aggregation leakage proof model named IALP (Information Aggregation Leakage Proof) was proposed based on assignment partition. First of all, the cause of information aggregation leakage and the current research situation were discussed. Secondly, on the basis of assignment partition, the knowable degree of the subject and the information weight of the object were quantified, and the relatively trusted subject was proposed. Security axioms and state transition rules were then given. Finally, theoretical proof, application examples and analysis indicate that IALP can control the knowable degree of the subject towards an object set with the aggregation leakage relation, limit the privileges of the trusted subject, and enhance integrity to some extent.
    Clustering-based approach for multi-level anonymization
    GUI Qiong CHENG Xiaohui
    2013, 33(02):  412-416.  DOI: 10.3724/SP.J.1087.2013.00412
    To prevent the privacy disclosure caused by linking attack and reduce information loss resulting from anonymous protection, a (λα,k) multi-level anonymity model was proposed. According to the requirement of privacy preservation, sensitive attribute values could be divided into three levels: high, medium, and low. The risk of privacy disclosure was flexibly controlled by privacy protection degree parameter λ. On the basis of this, clustering-based approach for multi-level anonymization was proposed. The approach used a new hierarchical clustering algorithm and adopted more flexible strategies of data generalization for numerical attributes and classified attributes in a quasi-identifier. The experimental results show that the approach can meet the requirement of multi-level anonymous protection of sensitive attribute, and effectively reduce information loss.
    Fully anonymous multi-service subscription system without random oracles
    LIU Xin LEI Wenqing
    2013, 33(02):  417-429.  DOI: 10.3724/SP.J.1087.2013.00417
    Lately, Canard et al. (CANARD S, JAMBERT A. Untraceability and profiling are not mutually exclusive [C]// TrustBus 2010: Proceedings of the 7th International Conference on Trust, Privacy and Security in Digital Business, LNCS 6264. Berlin: Springer-Verlag, 2010: 117-128) introduced the notion of multi-service subscription and proposed several instantiations. Unfortunately, their systems only satisfy a weaker variant of anonymity called revocable anonymity and are not fit for "pay-per-use" services. To this end, a revised multi-service subscription system extending Canard et al.'s system was put forward. The new system achieved pay-per-use subscriptions by incorporating the anonymous payment system raised by Liu et al. (LIU J K, AU M H, SUSILO W, et al. Enhancing location privacy for electric vehicles (at the right time) [EB/OL]. [2012-08-01]. http://eprint.iacr.org/2012/342). To allow users to prove in zero-knowledge that their account balance is sufficient to pay for the required access, it also utilized the Peng-Bao range proof for small ranges. Furthermore, it was constructed on several 4-round perfect zero-knowledge proofs of knowledge, which were obtained by applying a technique of Cramer et al. to the underlying Sigma-protocols. Compared with typical systems in the literature, the new solution gains advantages in terms of security. Concretely, it can be proved secure in the standard model. Moreover, it achieves the strongest level of three crucial security notions, namely inseparability of spendable tokens, anonymity of users, and zero-knowledge of the underlying proof systems.
    Information security defense mechanism based on wireless sensor network correlation
    HONG Yong LI Ping
    2013, 33(02):  423-467.  DOI: 10.3724/SP.J.1087.2013.00423
    When sensor nodes in a Wireless Sensor Network (WSN) are captured, internal attacks may occur, resulting in a security deficiency of the system information. Regarding this, a security defense mechanism based on an annular space correlation model was proposed. The combined trust value between nodes was calculated based on the annular space model, and a renewed trust assessment of the adjacent nodes was carried out. The captured nodes were recognized according to the trust assessment, and the information from the captured nodes was removed indirectly, thus achieving information security defense. The simulation results demonstrate that data distortion is significantly improved by the proposed mechanism, which can effectively identify and remove erroneous and detrimental information and improves the security of information in the system.
    Application of multi-class classification algorithm based on twin support vector machine in intrusion detection
    NIE Panpan ZANG Li LIU Leilei
    2013, 33(02):  426-429.  DOI: 10.3724/SP.J.1087.2013.00426
    The multi-class classification algorithms based on traditional Support Vector Machine (SVM) are weak on training speed when dealing with large-scale data. To solve the problem, this paper proposed a multi-class classification algorithm based on Twin Support Vector Machine (TWSVM). It combined binary tree multi-class classification and constructed classifiers based on TWSVM on the nodes of binary tree. To reduce the error accumulation of Binary Tree SVM (BT-SVM), it firstly got clustering centers through the clustering algorithm, and then compared the distances between them to determine the separation sequence of classes. Finally, it was applied to network intrusion detection. The experimental results show that the proposed algorithm has higher detection accuracy and certain advantages on training speed especially for large-scale data. The training speed is approximately two times faster than that of the traditional BT-SVM algorithm. It is valuable for large-scale data processing in the field of network intrusion detection.
    Detection model and method of website defacements based on attributes partial changes
    WEI Wenhan DENG Yigui
    2013, 33(02):  430-433.  DOI: 10.3724/SP.J.1087.2013.00430
    The traditional methods of remote website monitoring are limited to static webpages. A rule-based classifier for dynamic webpages was proposed. The method took partial changes of a website into consideration and divided its pages into dynamic regions and static regions according to the dynamic updates of the historical pages; it then calculated thresholds based on the historical features for the dynamic regions and built a block-based MD5 history database for the static regions. Finally, it decided whether to raise alarms according to the defined IF-THEN rules. The test results show that the model can scan the whole website in a shorter time, with a lower false detection rate for normal pages and a higher detection rate for defaced pages.
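    For illustration only: a minimal Python sketch of a block-wise MD5 history database for the static regions, flagging blocks whose digest no longer matches the stored baseline; fixed-size blocks and the alarm rule are simplifications, and the thresholds for dynamic regions and the IF-THEN rules of the paper are not reproduced.

import hashlib

def block_fingerprints(html, block_size=1024):
    """Split a fetched page into fixed-size blocks and record an MD5 per block, so later
    scans only need to flag blocks whose digest differs from the stored history."""
    data = html.encode("utf-8")
    return [hashlib.md5(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(history, current):
    """Indices of static-region blocks that no longer match the history database."""
    return [i for i, (h, c) in enumerate(zip(history, current)) if h != c]

baseline = block_fingerprints("<html>... original static region ...</html>")
latest   = block_fingerprints("<html>... original static region ...</html>")
print(changed_blocks(baseline, latest))   # [] -> no alarm for the static part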
    Image zero-watermarking algorithm against geometric attacks based on Tchebichef moments
    CHENG Xinghong HOU Yuqing CHENG Jingxing PU Xin
    2013, 33(02):  434-437.  DOI: 10.3724/SP.J.1087.2013.00434
    The existing watermarking algorithms based on image moments have the disadvantages of small capacity and high complexity, and their robustness needs to be improved in further study. A new zero-watermarking algorithm against geometric attacks was proposed. Using image normalization and the features of Tchebichef moments, the rotation-normalized Tchebichef moments of the original image were calculated in the unit circle, and the upper-left corner of the Tchebichef moment matrix was scanned into a numerical matrix. Afterwards, a binary secret key was generated according to the numerical matrix and the watermark image, and saved to the zero-watermarking information database. In detection, the same process was executed to extract the numerical matrix from the unauthenticated image, and the watermark image was extracted by using the secret key and the numerical matrix. The experimental results show that this algorithm is robust against rotation attacks of random angles, scaling attacks and common signal processing operations, and even combined attacks.
    Robust watermark algorithm for adaptive choice of information embedding position
    LI Song GU Qiaolun GAO Tiegang
    2013, 33(02):  438-446.  DOI: 10.3724/SP.J.1087.2013.00438
    To improve the quality of the watermarked image, an improved robust watermark algorithm based on Support Vector Regression (SVR) and Genetic Algorithm (GA) was proposed. Following the Haar wavelet transform, the wavelet coefficients with strong similarity within an image subband were adopted as the feature vector, and then the SVR optimized by GA was used to build a wavelet coefficient direction tree model. The Mean Square Error (MSE) values of the feature vectors were compared to adaptively determine the information embedding positions. According to the difference between the predicted value and the real value of the model, the watermark was embedded and extracted. The experimental results show that the proposed algorithm is strongly robust to common image attacks; the Peak Signal to Noise Ratio (PSNR) can reach 44.15dB even with an embedding capacity of 16384 bits. Thus, the proposed algorithm can resist watermarking attacks more effectively and has high transparency even when a large amount of information is embedded.
    New provable secure public key encryption scheme in standard model
    WANG Zecheng
    2013, 33(02):  441-446.  DOI: 10.3724/SP.J.1087.2013.00441
    The existing public key encryption schemes with semantic security against adaptively chosen ciphertext attacks in the standard model suffer from the drawbacks of low efficiency or strong computational assumptions. Concerning these problems, a new provably secure public key encryption scheme was proposed based on the newly introduced d-decisional Diffie-Hellman problem. To obtain both security and efficiency, the methodology of the hash proof system was adopted in the construction and the security proof of the scheme. The intractability of the d-decisional Diffie-Hellman problem lies between that of the computational Diffie-Hellman problem and that of the decisional Diffie-Hellman problem. The efficiency of the scheme surpasses that of the schemes based on the computational Diffie-Hellman problem and approximates that of the schemes based on the decisional Diffie-Hellman problem. Therefore, the proposed scheme reaches a good compromise between efficiency and computational assumption. Moreover, different values of d can be selected for different security demands of applications.
    Improved bidirectional blind proxy re-signature scheme
    LI Xihe YANG Xiaodong
    2013, 33(02):  447-449.  DOI: 10.3724/SP.J.1087.2013.00447
    The security of the bidirectional blind proxy re-signature scheme proposed by Deng et al. (DENG Y Q, DU M H, YOU Z L, et al. A blind proxy re-signatures scheme based on standard model [J]. Journal of Electronics and Information Technology, 2010, 32(5): 1119-1223) was analyzed, and the scheme was found to be insecure. Meanwhile, a forgery attack on this scheme was given, in which the delegatee could produce a valid signature of the delegator without the help of the proxy. To overcome this weakness, an improved blind proxy re-signature scheme was proposed and proved secure in the standard model. It can efficiently resist the forgery attack. Neither the delegatee nor the proxy can obtain the content of the message to be signed, which better protects the privacy of the message. The proposed scheme is blind, bidirectional, multi-use, transparent and key-optimal.
    Reconfigurable serial AES encryption and decryption circuit design
    XIE Huimin GUO Donghui
    2013, 33(02):  450-459.  DOI: 10.3724/SP.J.1087.2013.00450
    To improve the hardware resource efficiency of the Advanced Encryption Standard (AES) algorithm on a Field Programmable Gate Array (FPGA), an implementation of a serial AES circuit that can perform both encryption and decryption with 128/192/256bit key options was proposed. The design computed the multiplicative inverse of bytes through a composite field transformation, integrated the MixColumn and InvMixColumn circuits, and fused the three key expansion algorithms at the same time. The design was implemented on a Xilinx Virtex-5 FPGA and consumed 1871 slices and 4 block RAMs. The results show that the throughput reaches 2119/1780/1534Mb·s^(-1) for the 128/192/256bit key lengths at a maximum frequency of 173.904MHz. The design achieves a high throughput/hardware-resource ratio and can be applied to Gigabit Ethernet.
    Research on implementation mechanism and detection technique of BIOS trapdoor
    JIANG Zifeng ZENG Guangyu WANG Wei GAO Hongbo
    2013, 33(02):  455-459.  DOI: 10.3724/SP.J.1087.2013.00455
    Basic Input Output System (BIOS) trapdoor has huge impact on computer system, and it is difficult to detect the existence of BIOS trapdoor effectively with the existing tools. After researching BIOS structure and BIOS code obfuscation technique based on reverse analysis, BIOS trapdoors were divided into module-level BIOS trapdoor and instruction-level BIOS trapdoor according to implementation granularity, followed by analyzing the implementation principle and characteristics of these two BIOS trapdoors in detail. Finally the detection method of module-level trapdoor based on analyzing module structure and the detection method of instruction-level trapdoor based on integrity measurement were presented. The experimental results show that these two methods can detect the existence of their corresponding BIOS trapdoors effectively.
    Multimedia processing technology
    Content-based image re-ranking technology in search engine
    XIE Hui LU Yueming
    2013, 33(02):  460-462.  DOI: 10.3724/SP.J.1087.2013.00460
    As the existing text-based ranking of image search results cannot meet users' query expectations, two content-based re-ranking methods for image search results, named the SI (Similarity Integral) algorithm and the D (Dijkstra) algorithm, were put forward. These methods treated images as nodes, used color and shape features to calculate the similarity between images, and took the similarity as the edge weight to construct a similarity graph. The SI algorithm sorted the images according to the similarity integral of each node image, and the D algorithm traversed all the images from a specified image using Dijkstra's algorithm. The experimental results show that both methods can improve the ranking performance of image search. In addition, the SI algorithm is suitable for situations with an initial precision rate of 0.5-0.9, while the D algorithm does not require a high initial precision rate but has high accuracy requirements on the similarity values between images, and can be used for re-ranking the images retrieved by a specified query image.
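    For illustration only: a minimal Python sketch of the D-algorithm idea, running Dijkstra from the query image over the similarity graph; converting similarity s into an edge cost 1-s is an assumption made here so that higher similarity means a cheaper edge, and the similarity matrix is a placeholder.

import heapq

def dijkstra_rerank(similarity, source):
    """Re-rank images by Dijkstra distance from a query image over the similarity graph.
    Edge costs are taken as 1 - similarity (assumed: higher similarity = cheaper edge)."""
    n = len(similarity)
    dist = [float("inf")] * n
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in range(n):
            if v == u:
                continue
            nd = d + (1.0 - similarity[u][v])
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return sorted(range(n), key=lambda i: dist[i])   # closest (most relevant) first

sim = [[1.0, 0.8, 0.1],
       [0.8, 1.0, 0.3],
       [0.1, 0.3, 1.0]]
print(dijkstra_rerank(sim, source=0))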
    Extended ray-based method in 3D model retrieval
    JIANG Yang LYU Xueqiang LI Lin SHI Shuicai
    2013, 33(02):  463-467.  DOI: 10.3724/SP.J.1087.2013.00463
    The basic ray-based method is time consuming and only uses the information of triangle facets. An extended ray-based method was proposed based on the principle of non-intersecting pencil of planes. The key points of this method were as follows: firstly, a group of rays was scattered evenly from the center of the 3D model to intersect with triangle facets, and the non-intersecting pencil of planes determined by the rays was used to get the intersection points; secondly, the retrieval model was established to improve the 3D model retrieval effectiveness, according to the distances from the center to those intersection points and the vertices of the 3D model. Applying this method on ten categories of 3D models in PSB (Princeton Shape Benchmark), the results show that this approach not only reduces the processing time, but improves the retrieval accuracy.
    Remote sensing image fusion combining entropy principal component transform and optimization methods
    LUO Xiaoqing WU Xiaojun
    2013, 33(02):  468-475.  DOI: 10.3724/SP.J.1087.2013.00468
    In the process of remote sensing images fusion, the spectral distortion of fusion image is the main problem. To reduce distortion, an optimization image fusion method in combination with entropy component analysis transform was proposed. First, multi-band image was transformed to a small amount of bands by the entropy component analysis to reduce the spectral dimension. Projection transformation was finished from the perspective of entropy contribution so as to keep more information of source bands. Wavelet decomposition was done between the first entropy component and the high resolution image after histogram matching to get low frequency and high frequency subbands. For the fusion of low frequency subbands, Quantum-behaved Particle Swarm Optimization (QPSO) algorithm was applied to select the optimal weight coefficients. For the high frequency subbands, statistical feature and statistical model were used to perform fusion. The result of wavelet fusion was regarded as the first entropy principal component. The fusion image was obtained by wavelet and entropy component inverse transform. Entropy, cross entropy, standard deviation, grad, correlation coefficient and spectral distortion were selected as objective evaluation indexes. The experimental results show that the proposed method can enhance the spatial information and avoid spectral distortion.
    New approach for super-resolution from a single color image based on sparse coding
    YANG Ling LIU Yiguang HUANG Ronggang HUANG Zengxi
    2013, 33(02):  472-475.  DOI: 10.3724/SP.J.1087.2013.00472
    Traditional learning-based super-resolution algorithms generally adopt external training images for learning dictionary pairs, which is time-consuming, and the results strongly depend on the training images. To address these problems, a new super-resolution approach from a single color image was proposed based on the sparse coding model. Exploiting the self-similarity and redundancy of images, this algorithm utilized the low-resolution image itself, combined with an image pyramid structure, to train the dictionary pairs. Meanwhile, for color images, a sparse-representation-based color image storage technique was used, which concatenates the values of the three channels into a single vector and represents them sparsely as a whole. The experimental results illustrate that the proposed method not only generates high-resolution images with better visual effects and higher Peak Signal-to-Noise Ratio (PSNR) but also requires less computation time.
    New self-adaptive method for image denoising based on sparse decomposition and clustering
    WEI Yali WEN Xianbin LIAO Yongchun ZHENG Yongchun
    2013, 33(02):  476-479.  DOI: 10.3724/SP.J.1087.2013.00476
    The theory of sparse signal representation has been extensively and deeply researched in recent years and widely applied to image processing. Concerning the huge computation involved in over-complete dictionary construction and sparse decomposition, a new self-adaptive method for image denoising based on sparse decomposition and clustering was proposed. Firstly, an over-complete dictionary was designed by training samples with a modified K-means clustering algorithm; in the training process, the atoms of the dictionary were updated adaptively in every iteration to better fit the sparse representation of the samples. Secondly, the sparse representation of the test image was obtained by using the dictionary combined with the Orthogonal Matching Pursuit (OMP) algorithm, so as to achieve image denoising. The experimental results show that, in terms of both denoising quality and computational complexity, the performance of the proposed method is better than that of the traditional dictionary training algorithms.
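    For illustration only: a simplified scikit-learn sketch in the spirit of the method above, using plain K-means centroids as dictionary atoms and OMP for sparse coding of patches; the paper's modified K-means with per-iteration atom updates is not reproduced, and the sizes, sparsity and data are placeholders.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import orthogonal_mp

def train_dictionary(patches, n_atoms=128):
    """Cluster training patches with K-means and use the (normalized) centroids as atoms,
    a simplified stand-in for the modified K-means dictionary learning in the paper."""
    centers = KMeans(n_clusters=n_atoms, n_init=10, random_state=0).fit(patches).cluster_centers_
    return (centers / np.linalg.norm(centers, axis=1, keepdims=True)).T   # columns = atoms

def denoise_patches(noisy_patches, D, sparsity=5):
    """Sparse-code each noisy patch over the dictionary with OMP and reconstruct it."""
    codes = orthogonal_mp(D, noisy_patches.T, n_nonzero_coefs=sparsity)
    return (D @ codes).T

patch_dim = 64                                   # 8x8 patches flattened
train = np.random.randn(500, patch_dim)          # stand-in for training patches
D = train_dictionary(train)
noisy = train[:10] + 0.1 * np.random.randn(10, patch_dim)
print(denoise_patches(noisy, D).shape)           # (10, 64)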
    Super-resolution image reconstruction algorithms based on compressive sensing
    FAN Bo YANG Xiaomei HU Xuezhu
    2013, 33(02):  480-483.  DOI: 10.3724/SP.J.1087.2013.00480
    Compressed Sensing (CS) theory can reconstruct original images from fewer measurements using the priors of the images sparse representation. The CS theory was applied into the single-image Super-Resolution (SR), and a new reconstruction algorithm based on two-step iterative shrinkage and Total Variation (TV) sparse representation was proposed. The proposed method does not need an existing training set but the single input low resolution image. A down-sampling low-pass filter was incorporated into measurement matrix to make the SR problem meet the restricted isometry property of CS theory, and the TV regularization method and a two-step iterative method with TV denoising operator were introduced to make an accurate estimate of the image's edge. The experimental results show that compared with the existing super-resolution techniques, the proposed algorithm has higher precision and better performance under different magnification level, the proposed method achieves significant improvement (about 4~6dB) in Peak Signal-to-Noise Ratio (PSNR), and the filter plays a decisive role in the reconstruction quality.
    Stereo matching algorithm based on fast-converging belief propagation
    ZHANG Hongying LIU Yixuan YANG Yu
    2013, 33(02):  484-494.  DOI: 10.3724/SP.J.1087.2013.00484
    Concerning the high computation complexity and low efficiency in traditional stereo matching method based on belief propagation, a fast-converging algorithm was proposed. When calculating the confidence level of each pixel, the algorithm only utilized the information translated from the neighboring pixels in an adaptive support window, while ignoring the impact of the pixels beyond the window. The experimental results show that the proposed algorithm can reduce 40% to 50% of computation time while maintaining the matching accuracy. Therefore, it can meet the real-time requirement for stereo matching.
    Parameter-adaptive approach to image sub-pixel registration
    HAN Lei HUANG Chenrong XU Mengxi ZHENG Shengnan
    2013, 33(02):  487-490.  DOI: 10.3724/SP.J.1087.2013.00487
    The performance of some current area-based image registration algorithms declines when image transformation parameters are of both wide range and high precision. Concerning this problem, a parameter-adaptive registration algorithm was proposed based on the natures of image transformation in frequency/space domain, and the estimation steps and fusion method for rotation parameter and shift parameter were designed. A set of simulation experiments were implemented to compare the performance of the proposed algorithm with the Vandewalle's and improved Keren's. Mean square error and standard deviation of square error were used as evaluation indicators for registration precision and parameters adaptation. The two indicators of the proposed algorithm are lower than those of the other two methods, which means the proposed algorithm has adaptive ability in wide range parameters estimation and high accuracy of registration.
    Fast local binary fitting optimization approach for image segmentation
    LIN Yazhong LI Xin ZHANG Huiqi LUAN Qinbo
    2013, 33(02):  491-494.  DOI: 10.3724/SP.J.1087.2013.00491
    It is difficult to obtain correct segmentation results for images with intensity inhomogeneity, and the results are very sensitive to the initial contours. Thus, a fast and stable approach was proposed to overcome these disadvantages. First, an Adaptive Distance Preserving Level Set (ADPLS) method was used to obtain a better initial contour. Second, the Local Binary Fitting (LBF) model was used for further segmentation. The experimental results show that the improved model achieves good performance and better balances segmentation speed, accuracy and stability.
    Improved target tracking method based on on-line Boosting
    SUN Laibing CHEN Jianmei SONG Yuqing YANG Gang
    2013, 33(02):  495-502.  DOI: 10.3724/SP.J.1087.2013.00495
    When the tracked target is heavily occluded, temporarily leaves the frame, or undergoes large displacement, the adjacent-interval updating scheme of on-line Boosting accumulates errors, causing drift or even tracking failure. Therefore, an improved target tracking method based on on-line Boosting was proposed. The classifier feature library was updated with the on-line Boosting algorithm, and the threshold was renewed dynamically with a Kalman filter, so that the system could automatically capture local features and adjust the threshold according to the tracking confidence of the object. When the confidence of the moving target fell below the lower threshold, a Blob tracking procedure was applied: the target was segmented into regions according to color and spatial similarity, each region carrying its number, location and size; one region was randomly selected and fed into the on-line Boosting tracking module for testing, and the tracker switched to an adjacent region with the updating algorithm only when the captured confidence reached the upper threshold. Tests on different video sequences show that the proposed algorithm captures the target quickly and accurately in real time and is more robust than the traditional on-line Boosting algorithm and other tracking algorithms.
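    The dynamic threshold update can be pictured with a one-dimensional Kalman filter; the sketch below uses a random-walk state model, and the noise values `q`, `r` and the initial state are assumptions, not the paper's settings.
```python
# Hedged sketch: smoothing the tracking-confidence threshold with a 1-D Kalman filter.
class ScalarKalman:
    def __init__(self, x0=0.5, p0=1.0, q=1e-3, r=1e-2):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, measured_confidence):
        self.p += self.q                            # predict (random-walk model)
        k = self.p / (self.p + self.r)              # Kalman gain
        self.x += k * (measured_confidence - self.x)
        self.p *= (1 - k)
        return self.x                               # filtered threshold estimate
```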
    CamShift tracking algorithm based on speed-up robust features
    WANG Jinjiang LIU Yang WU Mingyun
    2013, 33(02):  499-502.  DOI: 10.3724/SP.J.1087.2013.00499
    To deal with the poor or failed tracking caused by the color sensitivity of the Continuously Adaptive Mean Shift (CamShift) algorithm, a new CamShift tracking algorithm based on local feature matching was proposed. The algorithm used Speeded-Up Robust Features (SURF) to extract local feature points carrying image information from the target and search areas of multi-channel images, and matched the feature points by approximate nearest neighbor search. The location, scale and orientation information of the feature points was obtained from the purified matching results and used to constrain and update CamShift, improving tracking accuracy and stability. The experimental results show that the new algorithm outperforms the classic CamShift algorithm and similar improved algorithms when tracking rotating and zooming objects against complex backgrounds in real time.
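    A minimal OpenCV sketch of the feature-constrained CamShift idea follows; ORB is used here as a freely available substitute for SURF, and `roi_hist` (a normalized hue histogram), the initial `track_window` and the frames are assumed to be supplied by the caller.
```python
# Hedged sketch: feature matching refines the search window before a CamShift step.
import cv2
import numpy as np

def refine_window_with_features(prev_frame, frame, track_window):
    x, y, w, h = track_window
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(prev_frame[y:y + h, x:x + w], None)
    k2, d2 = orb.detectAndCompute(frame, None)
    if d1 is None or d2 is None:
        return track_window
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if not matches:
        return track_window
    pts = np.float32([k2[m.trainIdx].pt for m in matches])
    cx, cy = np.median(pts, axis=0)                  # robust center of matched points
    return (int(cx - w / 2), int(cy - h / 2), w, h)

def camshift_step(frame, track_window, roi_hist):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, crit)
    return rot_rect, track_window
```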
    Fast segmentation of sign language video based on cellular neural network
    ZHANG Aihua LEI Xiaoya CHEN Xiaolei CHEN Lili
    2013, 33(02):  503-506.  DOI: 10.3724/SP.J.1087.2013.00503
    To achieve region-of-interest coding of sign language video and improve video-call efficiency, a fast segmentation method for sign language video based on Cellular Neural Network (CNN) was proposed. Firstly, the skin regions of the sign language video were detected with the corresponding CNN templates using skin color characteristics. Secondly, CNN-based motion detection was carried out on the skin detection results using an inter-frame difference algorithm, yielding the initial gesture region. Finally, morphological processing was employed to fill small holes and smooth region boundaries, realizing the segmentation of the face and hand regions of the sign language video sequence. The results show that the method can segment sign language video rapidly and accurately.
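    A minimal sketch of the skin-detection, inter-frame difference and morphology pipeline is given below, using ordinary OpenCV operations in place of the paper's CNN templates; the YCrCb skin range and thresholds are common heuristics, not values from the paper.
```python
# Hedged sketch: skin mask AND inter-frame motion, cleaned up by morphology.
import cv2

def gesture_mask(prev_frame, frame):
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))    # heuristic skin-color mask
    diff = cv2.absdiff(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    _, motion = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)  # inter-frame motion
    mask = cv2.bitwise_and(skin, motion)                         # moving skin regions
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)       # fill small holes
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)        # smooth region boundaries
```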
    Face recognition method for scenario with lighting variation
    LI Xinxin CHEN Dan XU Fengjiao
    2013, 33(02):  507-514.  DOI: 10.3724/SP.J.1087.2013.00507
    Under strong side lighting, it is difficult for traditional algorithms to eliminate shadows. To improve illumination compensation, a logarithmic transformation function was presented. To improve recognition performance, face recognition was treated as a classic pattern classification problem and a new method combining Local Binary Pattern (LBP) and Support Vector Machine (SVM) was proposed, with the one-against-one strategy used to decompose the multi-class problem into two-class problems that SVM can handle. Simulation experiments were conducted on the CMU PIE, AR and CAS-PEAL databases and on a face database collected by the authors. The results show that lighting effects are well eliminated and that the proposed method performs better than traditional ones.
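    A minimal sketch of LBP features plus a one-against-one SVM with scikit-image and scikit-learn follows; the LBP parameters, histogram size and SVM settings are illustrative defaults rather than the paper's, and the logarithmic illumination compensation step is omitted.
```python
# Hedged sketch: uniform LBP histograms + one-vs-one SVM classifier.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, points=8, radius=1, bins=59):
    lbp = local_binary_pattern(gray, points, radius, method="nri_uniform")
    hist, _ = np.histogram(lbp, bins=bins, range=(0, bins), density=True)
    return hist

def train_recognizer(face_images, labels):
    X = np.array([lbp_histogram(img) for img in face_images])
    # SVC is one-vs-one internally; 'ovo' keeps the decomposition explicit.
    return SVC(kernel="rbf", C=10.0, decision_function_shape="ovo").fit(X, labels)
```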
    Facial feature point tracking of active appearance model based on prediction of strong tracking filter
    TONG Lei ZHAO Hui
    2013, 33(02):  511-514.  DOI: 10.3724/SP.J.1087.2013.00511
    The Active Appearance Model (AAM) can locate facial feature points in video sequences. When the initial position is far from the target, the fitting process often falls into a local minimum, so the iteration cannot converge to the correct location and localization fails. Concerning this problem, a facial feature point tracking method combining AAM with the prediction of a strong tracking filter (STF-AAM) was proposed. The head movement in the video was treated as a dynamic system and predicted with a Strong Tracking Filter (STF), so that the fitting of each frame started from a better initial position and converged more quickly and accurately. The experimental results show that the proposed method outperforms the traditional method in both tracking speed and fitting accuracy.
    Building's vanishing points detection method with line parameter information
    CHU Jun WANG Li ZHANG Guimei
    2013, 33(02):  515-538.  DOI: 10.3724/SP.J.1087.2013.00515
    Most existing vanishing point detection methods remove outliers by statistical analysis of vanishing point candidates and do not make full use of the parameter information of the straight lines, which leads to low precision and heavy computation. In this paper, a robust vanishing point detection method using line parameter information was proposed. Firstly, the algorithm extracted and analyzed the parameters of lines along the Manhattan directions and showed that they satisfy a linear relation. Secondly, a linear model of the parameters was established with a robust regression algorithm, and outliers were removed to obtain the effective lines. Finally, the optimal vanishing point in each Manhattan direction was estimated from the effective lines. The experimental results show that the average error of the focal length calibrated with the detected vanishing points is 1.05 pixels, so the detected vanishing points can be applied effectively to camera calibration.
    Controllable and realistic terrain synthesis based on feature sketch and fractal interpolation
    WANG Jidong ZHAO Ruibin PANG Mingyong
    2013, 33(02):  519-542.  DOI: 10.3724/SP.J.1087.2013.00519
    Three-dimensional terrain is used widely in the design and production of virtual outdoor scenes. To make synthesized terrain predictable and controllable, a new method for synthesizing realistic terrain was presented based on feature-sketch control and the piecing together of individual hills. Firstly, hills with different shapes and features were generated using a quadtree structure and an improved fractal interpolation method. Then, the required terrain was synthesized by piecing the hills together under the control of the feature sketch. The experimental results show that the method is effective and versatile for synthesizing various realistic terrains, and also allows users to control the features and style of the synthesized terrain.
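    The fractal interpolation step can be illustrated by classic one-dimensional midpoint displacement; the sketch below is a generic version (the paper's quadtree-based 2-D construction and sketch control are not reproduced), with the roughness and endpoint heights chosen arbitrarily.
```python
# Hedged sketch: 1-D midpoint-displacement fractal interpolation of a hill profile.
import numpy as np

def fractal_profile(levels=8, roughness=0.6, scale=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    heights = np.array([0.0, 0.0])                   # flat baseline endpoints
    for _ in range(levels):
        mids = (heights[:-1] + heights[1:]) / 2
        mids += rng.normal(0.0, scale, size=mids.shape)   # random midpoint displacement
        out = np.empty(len(heights) + len(mids))
        out[0::2], out[1::2] = heights, mids         # interleave old points and midpoints
        heights = out
        scale *= 2 ** (-roughness)                   # shrink displacement at each level
    return heights
```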
    Simulation algorithm for broken branches phenomenon
    SUN Jing-ping
    2013, 33(02):  522-529.  DOI: 10.3724/SP.J.1087.2013.00522
    To simulate branches broken by strong wind quickly and realistically, a simulation algorithm was proposed. Firstly, a wind model was built based on a noise function. Then, mechanics of materials was applied to analyze the motion of branches and obtain their deformation parameters, which were incorporated into the fractal-geometry description of trees. Finally, on the basis of these models, a visual simulation method for broken branches was designed; by modifying the wind model parameters, it can produce visual simulations of branches broken by winds of different strengths. The simulation results show that the algorithm is correct and efficient, and it provides new ideas for estimating wind damage to trees.
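    The mechanics-of-materials break test can be illustrated by treating a branch as a cantilever beam with a circular cross-section; the wind force, geometry and strength threshold below are illustrative inputs, not the paper's parameters.
```python
# Hedged sketch: break a branch when its bending stress exceeds the material strength.
import math

def branch_breaks(wind_force, length, radius, bending_strength=40e6):
    moment = wind_force * length                    # bending moment at the base, N*m
    section_modulus = math.pi * radius ** 3 / 4     # W = pi*r^3/4 for a circular section
    stress = moment / section_modulus               # maximum bending stress, Pa
    return stress > bending_strength                # break if stress exceeds strength
```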
    Integrated model construction and simplification methods for Web 3D road
    PU Hao LI Wei ZHAO Haifeng
    2013, 33(02):  525-529.  DOI: 10.3724/SP.J.1087.2013.00525
    To realize Web 3D visualization of road engineering, key technologies such as integrated 3D road model construction and constraint-aware simplification were studied. Based on constrained Delaunay triangulation, a 3D road model with integrated appearance and internal topology was created. A half-edge collapse error metric that respects road constraint edges was proposed; with it, the original road model was simplified by half-edge collapse, and an operation hierarchy tree was built on the server to store the operation records. A view-dependent strategy was put forward in which constraint edges are refined preferentially and simplified last. Combined with the view-dependent reconstruction criteria, the data transmitted for 3D visualization was substantially reduced and fast view-dependent reconstruction of the 3D road model was realized on the client. Based on these methods, a corresponding system was developed and successfully applied to the Web-based construction management of many highways.
    Fast collision detection algorithm based on image space
    YU Haijun MA Chunyong ZHANG Tao CHEN Ge
    2013, 33(02):  530-533.  DOI: 10.3724/SP.J.1087.2013.00530
    To meet the demanding real-time requirements of collision detection in increasingly complex virtual environments, a fast collision detection algorithm based on image space was proposed, making efficient use of the Graphics Processing Unit (GPU). Based on a hierarchical binary tree and collision detection between Oriented Bounding Boxes (OBB), the algorithm can quickly cull disjoint objects in the virtual scene. Using the resulting potentially colliding set, the algorithm improves efficiency significantly over the RECODE algorithm. The experimental results show that the algorithm achieves good results and higher efficiency, especially in highly complex virtual environments.
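    The OBB stage can be illustrated by the standard separating-axis overlap test; the sketch below is a generic CPU version (each box given as center, half extents and a rotation matrix whose columns are the box axes), not the paper's GPU implementation.
```python
# Hedged sketch: OBB-vs-OBB overlap test via the separating axis theorem (15 axes).
import numpy as np

def obb_overlap(c1, e1, R1, c2, e2, R2, eps=1e-9):
    axes = [R1[:, i] for i in range(3)] + [R2[:, i] for i in range(3)]
    # Cross products of the two boxes' edge directions give the remaining candidate axes.
    axes += [np.cross(a, b) for a in axes[:3] for b in axes[3:]]
    t = c2 - c1
    for axis in axes:
        n = np.linalg.norm(axis)
        if n < eps:                                 # parallel edges give a degenerate axis
            continue
        axis = axis / n
        r1 = sum(e1[i] * abs(axis @ R1[:, i]) for i in range(3))
        r2 = sum(e2[i] * abs(axis @ R2[:, i]) for i in range(3))
        if abs(t @ axis) > r1 + r2:                 # found a separating axis
            return False
    return True                                     # no separating axis: boxes overlap
```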
    SAR target recognition method based on weighted two-directional and two-dimensional linear discriminant analysis
    LIU Zhen JIANG Hui WANG Libin
    2013, 33(02):  534-538.  DOI: 10.3724/SP.J.1087.2013.00534
    To solve the Small Sample Size (SSS) problem and the "inferiority" problem of traditional Fisher Linear Discriminant Analysis (FLDA) when applied to Synthetic Aperture Radar (SAR) image recognition, a new image feature extraction technique based on weighted two-directional two-dimensional linear discriminant analysis (W(2D)2LDA) was proposed. First, the scatter matrices in the two-directional two-dimensional linear discriminant analysis criterion were modified by adding weights; then the feature matrix was extracted by W(2D)2LDA. Experimental results on the MSTAR dataset verify the effectiveness of the proposed method: it strengthens feature discrimination and achieves better recognition performance with lower memory requirements.
    Detection of camouflaged miner objects based on color and texture features
    XIAN Xiaodong LI Kewen
    2013, 33(02):  539-542.  DOI: 10.3724/SP.J.1087.2013.00539
    Due to low illumination, low contrast and the similar colors of target and background in a coal mine, missed detections and false detections occur. An improved miner target detection method was proposed that integrates a Gaussian Mixture Model (GMM) with Local Binary Pattern (LBP). The color information of the background was fitted by the GMM, the texture information was extracted with LBP, and miner targets were then detected by combining the color and texture information. The simulation results indicate that the proposed algorithm reduces missed and false detections and can detect miner targets in real time with high precision.
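    A minimal sketch of combining a Gaussian-mixture background model with an LBP texture check is given below, using OpenCV's MOG2 in place of the paper's GMM; `bg_texture_hist` (a precomputed background LBP histogram with 10 bins) and the texture threshold are assumptions.
```python
# Hedged sketch: GMM foreground mask kept only when the LBP texture differs from the background's.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

bg_model = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

def detect_foreground(frame, bg_texture_hist, tex_thresh=0.25):
    fg_mask = bg_model.apply(frame)                      # color/intensity cue (GMM)
    if not np.any(fg_mask):
        return fg_mask
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, 8, 1, method="uniform")
    hist, _ = np.histogram(lbp[fg_mask > 0], bins=10, range=(0, 10), density=True)
    # Keep the foreground only if its texture differs enough from the background's.
    texture_changed = np.abs(hist - bg_texture_hist).sum() / 2 > tex_thresh
    return fg_mask if texture_changed else np.zeros_like(fg_mask)
```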
    Microaneurysm detection based on multi-scale match filtering and ensemble learning
    PENG Yinghui ZHANG Dongbo SHEN Ben
    2013, 33(02):  543-566.  DOI: 10.3724/SP.J.1087.2013.00543
    According to the gray-level distribution characteristics of microaneurysms, a new microaneurysm detection algorithm was proposed. First, candidate microaneurysm lesions were picked out as seed points by multi-scale matched filtering, and region growing was applied to segment the lesion areas. Then features of the lesion areas were extracted. Finally, an AdaBoost neural-network ensemble was designed to distinguish real microaneurysms from the candidate lesions. The proposed method was tested on the public ROC database. The experimental results show that the average detection accuracy is 40.92%, better than that of previous double-ring filtering and morphological methods.
    Database technology
    Second clustering algorithm for fuzzy points based on clear radius
    GAO Cuifang HU Quan
    2013, 33(02):  547-582.  DOI: 10.3724/SP.J.1087.2013.00547
    Concerning the problem of wrong partitioning at fuzzy boundaries in the Fuzzy C-Means (FCM) clustering algorithm, an improved recalculation technique for fuzzy points was proposed that takes the data distribution characteristics of different classes into account. Firstly, hypersphere central regions were constructed from the clearly assigned data; then a new similarity distance based on the clear radius of each central region was defined to recalculate the membership of fuzzy points; finally, the fuzzy points were reassigned to the right categories. The experimental results show that the new algorithm can correct some wrong partitions and improve the assignment of fuzzy points, and that it is promising for datasets with significant density differences.
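    A minimal sketch of the second clustering step follows: points whose largest membership falls below a threshold are treated as fuzzy and reassigned by a distance normalized with each cluster's clear radius. The membership matrix `U`, data `X` and `centers` are assumed to come from a prior FCM run, and the threshold is illustrative.
```python
# Hedged sketch: reassign fuzzy points using a clear-radius-normalized distance.
import numpy as np

def reassign_fuzzy_points(X, U, centers, fuzzy_thresh=0.6):
    labels = U.argmax(axis=1)
    clear = U.max(axis=1) >= fuzzy_thresh            # confidently assigned points
    # Clear radius of each cluster: mean distance of its clear points to the center.
    radii = np.array([
        np.linalg.norm(X[clear & (labels == k)] - centers[k], axis=1).mean()
        if np.any(clear & (labels == k)) else 1.0
        for k in range(len(centers))
    ])
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels[~clear] = (dist[~clear] / radii).argmin(axis=1)   # radius-normalized distance
    return labels
```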
    Adaptive random subspace ensemble classification aided by X-means clustering
    CAO Peng LI Bo LI Wei ZHAO Dazhe
    2013, 33(02):  550-553.  DOI: 10.3724/SP.J.1087.2013.00550
    To address the low accuracy and efficiency of large-scale data classification, an adaptive random subspace ensemble classification algorithm aided by X-means clustering was proposed. X-means clustering was adopted to separate the original data space into multiple clusters automatically while preserving the original data structure; the adaptive random subspace ensemble then enhanced the diversity of the base classifiers and determined their number automatically, improving robustness and accuracy. The experimental results show that the proposed method outperforms traditional single and ensemble classifiers in accuracy and robustness on large-scale, high-dimensional datasets, and also improves the overall efficiency of the algorithm.
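    A rough scikit-learn sketch of the cluster-then-subspace idea: KMeans stands in for X-means (which chooses the cluster count itself), and BaggingClassifier with max_features below 1.0 provides a random-subspace ensemble per cluster; all parameter values are illustrative, and scikit-learn 1.2+ is assumed for the `estimator` argument.
```python
# Hedged sketch: per-cluster random-subspace ensembles over a KMeans partition.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def fit_clustered_subspace_ensemble(X, y, n_clusters=4, n_estimators=20):
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    models = {}
    for c in range(n_clusters):
        idx = km.labels_ == c
        if len(np.unique(y[idx])) < 2:        # degenerate cluster: a global fallback would be needed
            continue
        models[c] = BaggingClassifier(
            estimator=DecisionTreeClassifier(),
            n_estimators=n_estimators,
            max_features=0.5,                 # random subspaces
            bootstrap=False,
        ).fit(X[idx], y[idx])
    return km, models

def predict(km, models, X):
    clusters = km.predict(X)
    out = np.empty(len(X), dtype=object)
    for c, model in models.items():
        mask = clusters == c
        if mask.any():
            out[mask] = model.predict(X[mask])
    return out
```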
    Detection and elimination of similar Web pages based on text structure and string of feature code
    XIONG Zhongyang YA Man ZHANG Yufang
    2013, 33(02):  554-557.  DOI: 10.3724/SP.J.1087.2013.00554
    To reduce the interference of duplicated Web pages and improve the efficiency of detecting and eliminating similar pages, a new detection algorithm for large-scale Web pages was proposed. Firstly, using Web page tag values, the algorithm built text structure trees to compute fingerprint similarity layer by layer. Secondly, the first and last words of sentences containing high-frequency punctuation were extracted as the feature code. Lastly, the fingerprint similarity of the Web page features was judged with the Bloom filter algorithm. The experimental results show that the algorithm raises the recall rate above 90% and reduces the time complexity to O(n).
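    The final step can be illustrated with a plain Bloom filter over feature-code fingerprints; the bit-array size, hash count and double-hashing construction below are generic choices, not the paper's.
```python
# Hedged sketch: a simple Bloom filter for feature-code fingerprints.
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 20, n_hashes=7):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        digest = hashlib.sha1(item.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.n_bits for i in range(self.n_hashes)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):   # may give false positives, never false negatives
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))
```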
    KNN algorithm of average mutual information and class distinction pruning rules
    ZHOU Jing
    2013, 33(02):  558-562.  DOI: 10.3724/SP.J.1087.2013.00558
    Large sample sizes and high-dimensional features degrade the classification performance of the K-Nearest Neighbor (KNN) algorithm. Therefore, an improved KNN method named AMI&CD-KNN was put forward, with feature dimension reduction and a pruning mechanism based on the average mutual information and class distinction of the feature parameters. Firstly, the concept of average mutual information from entropy was used to measure how accurately a feature parameter reflects class information. Secondly, class distinction was described through the odds ratio of each feature parameter relative to a class and its distribution probability in the dataset, thereby measuring the amount of class information the feature parameter provides. Finally, the relation between average mutual information and class distinction was established and a sample pruning method was designed, so that classification was sped up while accuracy was maintained. Theoretical analysis and simulation experiments show that, compared with KNN and other algorithms with pruning mechanisms, the improved algorithm generalizes better.
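    A minimal scikit-learn sketch of mutual-information feature selection followed by KNN is shown below; it keeps the top-k informative features but does not reproduce the paper's class-distinction measure or sample pruning rules, and the parameter values are illustrative. Calling `build_mi_knn().fit(X_train, y_train)` and then `.predict(X_test)` applies selection and classification in one pipeline.
```python
# Hedged sketch: rank features by mutual information, then classify with KNN.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_mi_knn(k_features=20, k_neighbors=5):
    return make_pipeline(
        SelectKBest(score_func=mutual_info_classif, k=k_features),  # mutual-information ranking
        KNeighborsClassifier(n_neighbors=k_neighbors),
    )
```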
    Index structure with self-adaptive mechanism in flash-based database system
    FANG Junhua WANG Hanhu CHEN Mei MA Dan
    2013, 33(02):  563-566.  DOI: 10.3724/SP.J.1087.2013.00563
    The log-based index update mechanism in flash-based database systems has the following shortcomings: low query efficiency, high update cost, and unreasonable space allocation and merging for the log. To solve these problems, a new adaptive index structure named LM-B+TREE was proposed. LM-B+TREE maps index-update buffer pages to the corresponding nodes of a traditional B+ tree; furthermore, according to the read/write workload and read/write overhead, it dynamically maintains the update buffer and adjusts the index structure adaptively. The experimental results show that LM-B+TREE can dynamically adjust the index structure to fit the read/write workload, significantly reduce the overhead of index updates, and improve query performance.
    Computer software technology
    Design of supervisory control and data acquisition system based on GIS technology
    YANG Zeping LIU Deqiang WANG Qian XIANG Qiangming
    2013, 33(02):  567-574.  DOI: 10.3724/SP.J.1087.2013.00567
    To solve the problems of distributed control and the lack of geographic information support in the integrated automation of Supervisory Control and Data Acquisition (SCADA) systems, system integration technology was applied to design the Geographic Information System (GIS) and the SCADA system together in terms of data, interface and function. The integrated monitoring system was developed on a multi-layer C/S architecture. ActiveX technology was used during development to complete the system integration, with SCADA as the basic means of monitoring and control and GIS providing the necessary geographic information analysis and processing functions. With an integrated database, seamless connection and data sharing between GIS and SCADA were realized, and the data consistency and real-time performance of the integrated system were ensured. Communication between the devices and remote clients enabled on-site video monitoring, and safety measures were also integrated into the system. The system has been successfully applied in an actual working environment, and all functions have been verified, with stable operation and good scalability.
    Design and implementation of EPON performance management system based on SSH framework
    GONG Shangfu GONG Qin FENG Jian
    2013, 33(02):  571-574.  DOI: 10.3724/SP.J.1087.2013.00571
    To solve the poor portability and low load capacity of network management systems based on the C/S model, a Web-based Ethernet Passive Optical Network (EPON) performance management system was designed and implemented. The system used the SNMP4J library to develop the underlying application, adopted the Struts+Spring+Hibernate (SSH) framework based on the Model-View-Controller (MVC) pattern to separate the user interface, business logic and data access logic, and realized the acquisition, statistical analysis and display of the relevant performance parameters. The test results indicate that the performance alarm reporting delay and single-operation response time of the system are lower than those required by the "China Telecom mobile service network management system norms - total volume (v1.1)" standard.
    Modeling and application for software reliability of embedded real-time control system
    GUO Rongzuo HUANG Jun
    2013, 33(02):  575-578.  DOI: 10.3724/SP.J.1087.2013.00575
    Embedded Real-time Control Systems (ERCS) are widely used in all kinds of control applications, and their software differs from ordinary software: besides meeting real-time requirements, it must be reliable. First, an abstract formalization of ERCS software was defined. Then, reliability models were given for software modules that cannot be subdivided, and a reliability model for the whole ERCS software was built by applying a Copula function. The reliability of a specific system's software was calculated with the model. The results show that the reliability model established with the Copula function takes the correlation between software modules into account, so the computed reliability of the ERCS software is higher than that obtained by treating the modules as independent.
    Typical applications
    Automatic detection algorithm for new roads based on trajectory of floating cars
    JIANG Xinhua LIAO Lyuchao ZOU Fumin
    2013, 33(02):  579-582.  DOI: 10.3724/SP.J.1087.2013.00579
    To keep digital map data up to date and support geographic information services for rapidly developing road networks, an automatic new-road detection algorithm based on Floating Car Data (FCD) was proposed. The moving trajectories of massive numbers of floating cars were computed in real time, and candidate new-road sets were extracted by matching the trajectories against the existing map layers. After a filtering algorithm cleaned the data sets, new-road detection reports covering the location and length of the new roads were generated automatically and saved as temporary map layers. The field test results show that the algorithm can detect new roads quickly, even within five minutes, and is a cost-effective solution for real-time road map layer updates.
    Travel route identification method of subway passengers based on mobile phone location data
    LAI Jianhui CHEN Yanyan ZHONG Yuan WU Decang YUAN Yifang
    2013, 33(02):  583-586.  DOI: 10.3724/SP.J.1087.2013.00583
    In a complex rail transit network, routes deduced from traditional theory often deviate considerably from the actual ones. Signaling data were collected from passengers' mobile phones in the rail wireless communication network, and a new travel route identification algorithm based on normal location updates was proposed. Meanwhile, to handle missing data, a repair algorithm was put forward that uses users' other signaling data and the K shortest paths to deduce their actual travel routes; the validity of candidate routes is then checked to obtain the actual travel route. Finally, a typical application in the Beijing rail transit network was used to validate the algorithm. The results show that it performs well in reflecting travelers' actual behaviors.
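    The K-shortest-paths step can be illustrated with networkx; the toy network and travel times below are purely illustrative and are not the Beijing network.
```python
# Hedged sketch: K candidate routes between two stations, ordered by total travel time.
from itertools import islice
import networkx as nx

def k_shortest_routes(graph, origin, destination, k=3):
    paths = nx.shortest_simple_paths(graph, origin, destination, weight="time")
    return list(islice(paths, k))            # the K routes with the smallest total travel time

G = nx.Graph()
G.add_weighted_edges_from(
    [("A", "B", 4), ("B", "C", 5), ("A", "D", 6), ("D", "C", 6), ("B", "D", 2)],
    weight="time",
)
print(k_shortest_routes(G, "A", "C"))        # candidate routes such as ['A', 'B', 'C']
```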
    Remote sensing service discovery mechanism based on trusted QoS clustering
    YAO Jianhua WU Jiamin NIU Wenjia TONG Endong
    2013, 33(02):  587-591.  DOI: 10.3724/SP.J.1087.2013.00587
    Web service technology has been used in remote sensing to increase the dynamics and scalability of remote sensing resources. However, the underlying remote sensing data of these services change frequently over a large span, and the Quality of Service (QoS) values given by service providers are open and untrusted. Both aspects reduce the efficiency and accuracy of remote sensing service discovery, posing new challenges for its effective use in remote sensing applications. To resolve these issues, a new remote sensing service discovery mechanism based on trusted QoS clustering was proposed, in which the underlying service response time and updating frequency, together with upper-level evaluation criteria, are used to measure QoS. Based on this QoS measurement, an updating-frequency-adaptive QoS detection and updating mechanism was developed. Through clustering, the proposed approach increases the QoS credibility of remote sensing services. The experiments show that the proposed approach improves the efficiency of remote sensing service discovery and user satisfaction.
    New group-based processing tag anti-collision algorithm for RFID system
    LIU Chishi WANG Chunhua FU Kui
    2013, 33(02):  592-599.  DOI: 10.3724/SP.J.1087.2013.00592
    Concerning the long identification time and large amount of data transmission in some binary search schemes, a new anti-collision algorithm was proposed. The algorithm adopts a grouping strategy, and the reader identifies the tags in each subset in turn; grouping reduces the number of responding tags in each query and thus the probability of collision. In addition, the tag ID is divided into two segments, the first seven bits forming the first part and the remaining bits the second, which reduces the transmission of redundant data. The simulation results show that, compared with several other algorithms, the proposed algorithm requires fewer search iterations, and its data transmission is only one-sixth of that of the Dynamic Binary Search Tree (DBS) algorithm, so its identification efficiency is significantly improved.
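    A minimal sketch of the grouping step is shown below: tags are grouped by the first seven bits of their ID so that each round only addresses one smaller group. The 96-bit ID length is an assumption, and the reader/tag air protocol itself is not simulated.
```python
# Hedged sketch: group tags by the first 7 bits of their ID before identification.
from collections import defaultdict

def group_tags(tag_ids, id_bits=96, prefix_bits=7):
    groups = defaultdict(list)
    for tag in tag_ids:
        prefix = tag >> (id_bits - prefix_bits)   # first 7 bits select the group
        groups[prefix].append(tag)
    return groups                                  # the reader then identifies each group in turn
```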
    RFID fingerprint-based localization based on different resampling algorithms
    HUANG Baohu LIU Ran ZHANG Hua ZHANG Zhao
    2013, 33(02):  595-599.  DOI: 10.3724/SP.J.1087.2013.00595
    To meet the need for precise positioning of mobile robots, a particle-filter fingerprint positioning method based on different resampling algorithms was presented. Firstly, during positioning, a motion model built on robot kinematics served as the proposal density of the particle filter, and observation information and the environment fingerprint were fused into the filtering process to enhance the particles' refining capacity and reduce the required number of particles. Secondly, an Exquisite Resampling (ER) algorithm was introduced to further improve the refining ability of the particles, reducing particle impoverishment and improving localization accuracy. Finally, the influence of different resampling algorithms on positioning accuracy was analyzed, and the accuracy and reliability of the localization algorithm were investigated from different experimental perspectives. The experimental results show that the algorithm has advantages in localization accuracy and robustness.
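    For illustration, here is systematic resampling, one of the standard schemes a particle filter can use (the paper's Exquisite Resampling is not reproduced); `particles` is assumed to be an (N, d) array and `weights` normalized to sum to one.
```python
# Hedged sketch: systematic resampling for a particle filter.
import numpy as np

def systematic_resample(particles, weights, rng=None):
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n       # one uniform offset, n strata
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                                 # guard against floating-point drift
    indexes = np.searchsorted(cumulative, positions)     # pick one particle per stratum
    return particles[indexes], np.full(n, 1.0 / n)       # equal weights after resampling
```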
    Application of high-precision clock-synchronous method in hydrophone's linear array
    CHEN Jin DUAN Fajie JIANG Jiajia CHANG Zongjie HUA Xiangning LI Yanchao
    2013, 33(02):  600-602.  DOI: 10.3724/SP.J.1087.2013.00600
    A precise full-array-synchronization clock generation and transmission method for data acquisition was proposed to handle the synchronization problem of hydrophone data acquisition in ocean underwater acoustic detection. By using an independent high-precision clock source and asynchronous differential transmission lines, long-distance synchronous acquisition of the hydrophone array was realized with high anti-jamming capability. A detailed model of the full-array synchronization principle was analyzed, a prototype system was established, and a circuit experiment was carried out to verify the feasibility of the entire system. The recovered clock's time delay at the acquisition node was less than 165 ns when the low-speed synchronous clock signal was transmitted over an 18-meter unshielded twisted pair. The experimental results show that the proposed method works well for transferring the standard clock of data acquisition in the linear array.