
Table of Contents

    01 January 2013, Volume 33 Issue 01
    Network and distributed technology
    Tamper proofing technique based on three-thread protection and software guard
    YU Yanwei ZHAO Yaxin
    2013, 33(01):  1-3.  DOI: 10.3724/SP.J.1087.2013.00001
    Software guard is a dynamic tamper-proofing technique. However, a guard cannot guarantee its own security and is easily bypassed or removed by attackers. This paper studied this problem and implemented a dynamic tamper-proofing method that combines a three-thread architecture with software guards, using an improved three-thread structure to protect the guards themselves. Compared with the traditional three-thread protection technique, the improved structure increases the level of protection and the difficulty of attack through mutual watching and protection between the remote thread and the watch thread. The experimental results show that software guards protected by the improved three-thread structure can not only prevent software tampering attacks but also effectively prevent attacks on the guards themselves.
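The mutual watch-and-repair idea can be illustrated with a toy, single-threaded sketch (not the paper's implementation, which runs these as watch/remote threads inside the protected process): each guard stores a hash of its partner's code and, on each watch pass, re-hashes the partner and restores a saved copy on mismatch. All names here are illustrative.

```python
import hashlib

class Guard:
    """Toy guard: holds code bytes, a backup copy, and a partner to watch."""
    def __init__(self, name, code):
        self.name = name
        self.code = code
        self.backup = code      # pristine copy used for repair
        self.partner = None
        self.expected = None    # hash the partner is supposed to have

    def fingerprint(self):
        return hashlib.sha256(self.code).hexdigest()

    def watch(self):
        """Re-hash the partner; restore its backup copy on mismatch."""
        if self.partner.fingerprint() != self.expected:
            self.partner.code = self.partner.backup
            return True         # tampering detected and repaired
        return False

# Wire two guards to watch each other, as the watch/remote threads do.
a = Guard("watch-thread", b"guard-a-code")
b = Guard("remote-thread", b"guard-b-code")
a.partner, b.partner = b, a
a.expected, b.expected = b.fingerprint(), a.fingerprint()
```

Tampering with one guard's code is then caught and undone on the other guard's next watch pass.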
    Reliable peer exchange mechanism based on semi-distributed peer-to-peer system
    ZHANG Han ZHANG Jianbiao LIN Li
    2013, 33(01):  4-7.  DOI: 10.3724/SP.J.1087.2013.00004
    The Peer Exchange (PEX) technique, widely used in Peer-to-Peer (P2P) systems, brings in more peers but also a security flaw: a malicious peer can pollute a normal peer's neighbor table by exploiting peer exchange. This paper first analyzed the flaw and discussed its main causes. Based on this analysis, a reliable peer exchange mechanism for semi-distributed P2P systems was proposed. It introduced an approach to estimating a super node's trust value based on an incentive mechanism, together with the concept of a peer's source trust value, which forms the foundation of the mechanism. Using peers' source trust values, peer exchange can be controlled. The experimental results show that, although trust value miscalculation caused by network heterogeneity leads to 2.5% of good peers being denied exchange, the mechanism significantly reduces pollution of good peers' neighbor tables and passive infection from good peers, thereby guaranteeing system reliability.
    Fine-grained access control scheme for social network with transitivity
    GAO Xunbing MA Chunguang ZHAO Ping XIAO Liang
    2013, 33(01):  8-11.  DOI: 10.3724/SP.J.1087.2013.00008
    A fine-grained access control scheme based on Attribute-Based Encryption (ABE) was proposed to satisfy the demands of personal privacy protection in social networks. The scheme describes members of the social network at different granularities through attribute settings, which form the basis for fine-grained encryption and access control. In particular, a proxy server was introduced to judge the relationship between unauthorized and authorized members. If an unauthorized member was judged to have access rights, the key generation center would generate an ABE private key for it, so the scheme achieves transitivity of access rights. Compared with other privacy protection methods based on access control or encryption alone, the proposed scheme combines the two and realizes encryption and fine-grained access control at the same time.
    New algorithm for computing min-error linear complexity of p^n-periodic binary sequences
    NIU Zhihua GUO Danfeng
    2013, 33(01):  12-14.  DOI: 10.3724/SP.J.1087.2013.00012
    With a classical k-error linear complexity algorithm, the cost of the sequence must be calculated and stored at each step. If only the first drop of the linear complexity, namely the min-error linear complexity, is considered, much calculation and memory space can be saved. A new algorithm for computing the min-error linear complexity of p^n-periodic binary sequences was proposed in this paper, where p is an odd prime and 2 is a primitive root modulo p^2. The new algorithm eliminates the storage and computation of the sequence cost and focuses on calculating the linear complexity when k is the min-error value that causes the first drop of the linear complexity; a theoretical proof was also given. Although the new algorithm saves more than half of the storage space and computation time, its results are identical to those of the classical algorithm. It is an effective algorithm for research on sequence stability.
    Efficient threshold signature scheme in standard model
    SHI Xianzhi LIN Changlu ZHANG Shengyuan TANG Fei
    2013, 33(01):  15-18.  DOI: 10.3724/SP.J.1087.2013.00015
    To improve computational efficiency in threshold signature schemes, the authors proposed a new threshold signature scheme based on bilinear pairing, combining Gennaro's (GENNARO R, JARECKI S, KRAWCZYK H, et al. Secure distributed key generation for discrete-log based cryptosystems. Journal of Cryptology, 2007, 20(1): 51-83) distributed key generation solution and Gu's (GU K, JIA W J, JIANG C L. Efficient and secure identity-based signature scheme. Journal of Software, 2011, 22(6): 1350-1360) signature scheme in the standard model. There is no trusted dealer for secret key share distribution, and each party can verify the validity of important information, which guarantees that the proposed scheme avoids the malicious private key generator attack and the public key share replacing attack. Comparison with two previous threshold signature schemes shows that the proposed scheme needs less pairing computation and raises computational efficiency.
    Software security measurement based on information entropy and attack surface
    ZHANG Xuan LIAO Hongzhi LI Tong XU Jing ZHANG Qianru QIAN Ye
    2013, 33(01):  19-22.  DOI: 10.3724/SP.J.1087.2013.00019
    Software security measurement is critical to the development of software and the improvement of software security. Based on entropy and the attack surface proposed by Manadhata et al. (MANADHATA P K, TAN K M C, MAXION R A, et al. An approach to measuring a system's attack surface, CMU-CS-07-146. Pittsburgh: Carnegie Mellon University, 2007; MANADHATA P K, WING J M. An attack surface metric. IEEE Transactions on Software Engineering, 2011, 37(3): 371-386), a software security measurement method was used to assess the threat of the software's resources and provide threat weights for these resources. Based on the threat weights, the attack surface metric was calculated to determine whether a software product is secure in design, or in what aspects it can be improved. A case study demonstrates that, using the method, probable security threats can be found as early as possible to avoid producing software products with vulnerabilities, and directions for improving software security are pointed out clearly.
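In the attack surface metric of Manadhata et al., each resource contributes a damage-potential to attack-effort ratio, and the metric sums those ratios per resource class (methods, channels, data). A minimal sketch of that bookkeeping, with illustrative resource names and weights (not the paper's calibration):

```python
def der(damage, effort):
    """Damage-effort ratio of one resource."""
    return damage / effort

def attack_surface(resources):
    """Sum damage-effort ratios grouped by resource kind."""
    totals = {}
    for kind, damage, effort in resources:
        totals[kind] = totals.get(kind, 0.0) + der(damage, effort)
    return totals

# Illustrative inventory: (kind, damage potential, attack effort).
resources = [
    ("method", 3, 1), ("method", 2, 2),   # entry/exit points
    ("channel", 2, 1),                    # open sockets
    ("data", 1, 2),                       # writable files
]
metric = attack_surface(resources)
# metric == {"method": 4.0, "channel": 2.0, "data": 0.5}
```

A design with lower per-class totals exposes a smaller attack surface along that dimension.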
    Cloud service selection based on trust evaluation for cloud manufacturing environment
    WEI Le ZHAO Qiuyun SHU Hongping
    2013, 33(01):  23-27.  DOI: 10.3724/SP.J.1087.2013.00023
    In a cloud manufacturing environment, many manufacturing cloud services have the same or similar functions, so it is difficult to select the most suitable one. This study designed a selection method for manufacturing cloud services based on trust evaluation. Cloud service selection was described abstractly; reliability, usability, timeliness, price and honesty were used together as trust characteristics; the evaluation time and the effect of evaluators' honesty on a service's credibility were also taken into account; the overall credibility was then calculated from these data by a weighted average. Furthermore, with factors such as the cloud services' function, workload, current state and physical distance considered together, the method guides cloud service selection by matching the services' function, workload and price and combining the trust evaluation. Simulation results show that the selection method can recognize manufacturing cloud service entities, improves the rate of cloud service trades, and better meets users' functional and non-functional requests.
    Numerical method based on total variation wavelet inpainting
    HU Wenjin LI Zhanming
    2013, 33(01):  28-30.  DOI: 10.3724/SP.J.1087.2013.00028
    This paper proposed a new numerical method for the Total Variation (TV) wavelet inpainting algorithm. The neighborhood information of a damaged pixel was fully employed to calculate the curvature in the pixel domain. In comparison with the traditional approximate solution method, the proposed algorithm has higher accuracy, and its result is also less sensitive to noise. Experiments with different image loss ratios show that the proposed method achieves a better inpainting effect, especially when the wavelet coefficient loss is relatively high. The proposed method provides a new idea for restoring missing information in the wavelet domain.
    Shadow removal algorithm based on Gaussian mixture model
    ZHANG Hongying LI Hong SUN Yigang
    2013, 33(01):  31-34.  DOI: 10.3724/SP.J.1087.2013.00031
    Shadow removal is one of the most important parts of moving object detection in intelligent video, since shadows definitely affect recognition results. To address the disadvantages of texture-based shadow removal methods, a new algorithm based on the Gaussian Mixture Model (GMM) and the YCbCr color space was proposed. Firstly, moving regions were detected using a GMM. Secondly, a Gaussian mixture shadow model was built by analyzing the color statistics of the difference between the foreground and background of the moving regions in YCbCr space. Lastly, the shadow threshold was obtained according to the Gaussian probability distribution in YCbCr space. In the experiments, more than 70 percent of shadow pixels in image sequences were detected accurately by the algorithm. The experimental results show that the proposed algorithm is efficient and robust for object extraction and shadow detection under different scenes.
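The intuition behind YCbCr shadow thresholds can be sketched as follows (a minimal rule-based stand-in, not the paper's learned Gaussian mixture thresholds): a foreground pixel is labelled shadow when its luma Y drops relative to the background while its chroma Cb/Cr stay close. The threshold values here are illustrative assumptions.

```python
import numpy as np

def shadow_mask(fg, bg, y_lo=0.4, y_hi=0.95, c_tol=10.0):
    """Label foreground pixels as shadow: darker luma, near-unchanged chroma.

    fg, bg: float arrays of shape (H, W, 3) in YCbCr order.
    """
    y_ratio = fg[..., 0] / np.maximum(bg[..., 0], 1e-6)   # luma attenuation
    dcb = np.abs(fg[..., 1] - bg[..., 1])                 # chroma shifts
    dcr = np.abs(fg[..., 2] - bg[..., 2])
    return (y_lo <= y_ratio) & (y_ratio <= y_hi) & (dcb < c_tol) & (dcr < c_tol)
```

A darkened pixel with unchanged chroma passes the test; a pixel whose chroma changes (a true object) does not.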
    Image denoising based on Riemann-Liouville fractional integral
    HUANG Guo XU Li CHEN Qingli PU Yifei
    2013, 33(01):  35-39.  DOI: 10.3724/SP.J.1087.2013.00035
    To preserve more image texture information while obtaining better denoising performance, the Riemann-Liouville (R-L) fractional integral operator was introduced into signal processing. R-L fractional integral theory was applied to digital image denoising, and a staircase approximation was used for the numerical calculation. The model constructs the corresponding denoising mask by setting a tiny integral order to achieve local fine-tuning of the noisy image, and it controls the denoising effect through iteration to obtain better results. The experimental results show that, compared with traditional image denoising algorithms, the proposed R-L fractional integral algorithm improves the Signal-to-Noise Ratio (SNR) of the image: the SNR of the denoised image reaches 18.3497 dB, at least about 4% higher than the traditional denoising algorithms. In addition, the proposed algorithm better retains weak edges and texture details of the image.
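To make the "tiny integral order" idea concrete, here is a hedged sketch using Grünwald-Letnikov style coefficients for a fractional integral of order v (a related discretization; the paper's exact R-L mask construction may differ). The coefficients c_k = Γ(k+v)/(Γ(v)·k!) follow the recurrence c_0 = 1, c_k = c_{k-1}·(k-1+v)/k, and a normalized mask built from them gives gentle, texture-preserving smoothing when v is small.

```python
import numpy as np

def gl_coeffs(v, n):
    """First n Grünwald-Letnikov coefficients for fractional order v > 0."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 + v) / k)   # c_k = c_{k-1}*(k-1+v)/k
    return np.array(c)

def smooth_1d(signal, v=0.05, n=5):
    """Convolve a 1-D signal with the normalized fractional-integral mask."""
    w = gl_coeffs(v, n)
    w /= w.sum()                            # preserve the signal mean
    return np.convolve(signal, w, mode="same")
```

With v near zero the mask approaches the identity, which is why a tiny order only fine-tunes the image locally.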
    Image transition region extraction and thresholding based on local feature fusion
    WU Tao YANG Junjie
    2013, 33(01):  40-43.  DOI: 10.3724/SP.J.1087.2013.00040
    To select the optimal threshold for image segmentation, a new method based on local complexity and local difference was proposed. Firstly, the local grayscale features of a given image were generated, including local complexity and local difference. Next, a new feature matrix was constructed by local feature fusion. Then, an automatic threshold was defined based on the mean and standard deviation of the feature matrix, and the image transition region was extracted. Finally, the optimal grayscale threshold was obtained by calculating the grayscale mean of the transition pixels, and the binary result was produced. The experimental results show that the proposed method performs well in transition region extraction and thresholding; it is reasonable, effective, and can be an alternative to traditional methods.
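The pipeline above can be sketched end to end. The local feature definitions here are illustrative choices (distinct gray levels and max-min range in a 3x3 window, fused by product); the mean-plus-standard-deviation cutoff and the final threshold as the mean gray level of transition pixels follow the description.

```python
import numpy as np

def transition_threshold(img):
    """Extract the transition region and return its mean gray level."""
    h, w = img.shape
    feat = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            complexity = len(np.unique(win))            # local complexity
            difference = float(win.max() - win.min())   # local difference
            feat[i, j] = complexity * difference        # fused feature
    mask = feat > feat.mean() + feat.std()              # transition region
    if not mask.any():
        return float(img.mean())
    return float(img[mask].mean())                      # segmentation threshold
```

On a step image the transition pixels straddle the edge, so the returned threshold lands between the two region intensities.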
    Graph context and its application in graph similarity measurement
    WEI Zheng TANG Jin JIANG Bo LUO Bin
    2013, 33(01):  44-48.  DOI: 10.3724/SP.J.1087.2013.00044
    Feature extraction and similarity measurement for graphs are important issues in computer vision and pattern recognition. Traditional methods cannot adequately describe graphs under some non-rigid transformations, so a new graph feature descriptor, Graph Context (GC), and its similarity measurement method were proposed. Firstly, a sample point set was obtained by discrete sampling. Secondly, the graph context descriptor was computed from the sample point set. Finally, an improved Earth Mover's Distance (EMD) was used to measure similarity between graph context descriptors. Unlike graph edit distance methods, the proposed method does not need a cost function, which is difficult to define in those methods. The experimental results demonstrate that the proposed method performs better for graphs under some non-rigid transformations.
    Improved image segmentation algorithm based on GrabCut
    ZHOU Liangfen HE Jiannong
    2013, 33(01):  49-52.  DOI: 10.3724/SP.J.1087.2013.00049
    To solve the problems that the GrabCut algorithm is sensitive to local noise, time-consuming, and not ideal at edge extraction, this paper put forward an improved image segmentation algorithm based on GrabCut. A multi-scale watershed was used to smooth and denoise the gradient image, and the watershed operation was applied again to the new gradient image, which not only enhanced image edge points but also reduced the computation cost of subsequent processing. Then an entropy penalty factor was used to optimize the segmentation energy function to prevent loss of target information. The experimental results show that, compared with the traditional algorithm, the error rate of the proposed algorithm is reduced, the Kappa coefficient is increased, and the efficiency is improved; in addition, the edge extraction is more complete and smooth. The improved algorithm is applicable to different types of image segmentation.
    Study on relationship between system matrix and reconstructed image quality in iterative image reconstruction
    CHEN Honglei HE Jianfeng LIU Junqing MA Lei
    2013, 33(01):  53-56.  DOI: 10.3724/SP.J.1087.2013.00053
    In view of the complicated and inefficient calculation of the system matrix, a simple length-weighted algorithm was proposed. Compared with the traditional length-weighted algorithm, the proposed algorithm reduces the number of cases where photon rays are intercepted by the grid, and determines the grid index in two-dimensional coordinates. The computational process of the system matrix was improved based on the proposed algorithm; an image was reconstructed with the system matrix built through the new process, and the quality of the reconstructed image was assessed. The experimental results show that the proposed algorithm runs more than three times faster than the improved Siddon algorithm, and the more ray lengths the length-weighted algorithm considers, the better the quality of the reconstructed image.
    Template matching tracking algorithm with position and orientation information of camera
    RAN Huanhuan HUANG Zili
    2013, 33(01):  57-60.  DOI: 10.3724/SP.J.1087.2013.00057
    While an image-guided target tracking system is tracking, image rotation and changes in target scale and imaging perspective cause tracking drift. To solve these problems, a method of correcting and updating the matching template using the position and orientation information of the camera was proposed. According to the relative position and orientation of the camera and the target, a perspective imaging model of the target was established, together with template correction equations for different camera positions and orientations. The correction was decomposed into two parts, rotation of the real-time image and affine transformation of the template, to reduce error variation; an update strategy based on the correlation coefficient and distance information was designed to adapt to changes as the tracking system gradually approaches the target. The algorithm was validated on flight simulation video generated by the visual simulation software Vega Prime. The experimental results show that the proposed algorithm can effectively adapt to changes in target scale, imaging perspective and image rotation during tracking, and reduces tracking drift.
    Fast inter mode decision algorithm for enhancement layers in scalable video coding
    2013, 33(01):  61-64.  DOI: 10.3724/SP.J.1087.2013.00061
    Inter-Layer Residual Prediction (ILRP) in Scalable Video Coding (SVC) reduces the number of bits while significantly increasing the encoding complexity. To reduce the complexity, this paper proposed a fast mode decision algorithm. The algorithm analyzes the difference in Lagrangian Rate-Distortion cost (RDCost) between using and not using residual prediction in the enhancement layer, and selects ILRP dynamically according to this difference. Meanwhile, the inter mode correlation between the lower and higher spatial layers was also analyzed; based on this correlation, the algorithm uses the best inter mode in the lower layer to decide the best inter mode in the higher layer, saving further encoding time. The experimental results show that, compared with the algorithm in the Joint Scalable Video Model (JSVM), the proposed algorithm saves more than 50% of encoding time on average with negligible Peak Signal-to-Noise Ratio (PSNR) change and small bit-rate loss. The proposed algorithm provides a valuable reference for further encoder optimization.
    All-zero block detection algorithm in H.264 based on radial basis function network
    GAO Fei ZHOU Changlin DANG Liming HOU Xuemei
    2013, 33(01):  65-68.  DOI: 10.3724/SP.J.1087.2013.00065
    In this paper, an all-zero block detection algorithm based on a Radial Basis Function (RBF) Neural Network (NN) was proposed to improve detection accuracy. By analyzing the features of the H.264 encoder, six effective features were selected: Sum of Absolute Difference (SAD), Sum of Absolute Transformed Difference (SATD), block type, Rate Distortion Optimization (RDO) cost, Quantization Parameter (QP), and the situation of the reference block. Considering that the SATD is used with the Hadamard Transform (HT), the relationship between QP and the RBF network width parameter was obtained by the least squares method, and the algorithm used two classifiers to separate all-zero blocks from non-all-zero blocks according to the encoding situation of the reference block. The algorithm improves coding speed by over 50% on average while keeping bit rate and video quality almost unchanged. The experimental results show that the proposed algorithm effectively improves all-zero block detection accuracy and coding efficiency.
    Stratified sampling particle filter algorithm based on clustering method
    ZHOU Hang YE Junyong
    2013, 33(01):  69-71.  DOI: 10.3724/SP.J.1087.2013.00069
    To address the poor robustness caused by a changing moving target and inaccurate tracking, a stratified sampling particle filter algorithm based on a clustering method was proposed. The sampling space is divided into several strata by group sampling so that sampling points concentrate on the parts with large probability density, reducing the sampling error to half of the original; the clustering algorithm groups the particles reasonably by weight and keeps their diversity, thus improving the tracking precision. The experimental results show that the tracking error of the proposed method is less than half that of the original one, and both the stability over the simulation time and the tracking precision are strengthened.
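The stratified sampling step is a standard variance-reduction device for particle filters and can be sketched as follows (the paper additionally combines it with weight-based clustering, which is not shown): the unit interval is split into N equal strata, one uniform draw is taken per stratum, and each draw is mapped through the cumulative weight curve, so heavily weighted particles are replicated where the curve rises fast.

```python
import numpy as np

def stratified_resample(weights, rng):
    """Return resampled particle indices via stratified sampling.

    weights: 1-D array summing to 1; rng: a numpy Generator.
    """
    n = len(weights)
    positions = (rng.random(n) + np.arange(n)) / n   # one draw per stratum
    cumsum = np.cumsum(weights)
    return np.searchsorted(cumsum, positions)        # invert the CDF
```

A particle with weight w is drawn close to n·w times, which keeps the sampling error low compared with plain multinomial resampling.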
    Uighur handwriting identification based on feature fusion
    GUO Shichao KAMIL Moydi ZHANG Weiyu
    2013, 33(01):  72-75.  DOI: 10.3724/SP.J.1087.2013.00072
    Concerning the instability of texture-based Uighur handwriting identification, the authors proposed a text-independent handwriting identification method based on feature fusion, combining a mesh-window microstructure feature and a curvature-direction feature. After extracting edge strokes from the original image, a large number of local window models were created; by scanning the edge image, the probability density distribution of the fused features was obtained, and various distance formulas were used to measure the distance between probability density feature vectors. The identification rate is 89.2% on a database of 80 handwriting samples. The method captures the local writing trends of the handwriting and the curvature direction of the strokes, records the mesh-window microstructure and curvature-direction features statistically through probability density distributions, and achieves a satisfactory identification effect.
    Face recognition based on improved isometric feature mapping algorithm
    LIU Jiamin WANG Huiyan ZHOU Xiaoli LUO Fulin
    2013, 33(01):  76-79.  DOI: 10.3724/SP.J.1087.2013.00076
    The Isometric feature mapping (Isomap) algorithm is topologically unstable when the input data are distorted, so an improved Isomap algorithm was proposed in which the IMage Euclidean Distance (IMED) is embedded into Isomap. Firstly, the images were transformed into the IMED space through a linear transformation by introducing metric coefficients and a metric matrix; then, the Euclidean distance matrix of the images in the transformed space was calculated to build the neighborhood graph and the geodesic distance matrix; finally, the low-dimensional embedding was constructed by the MultiDimensional Scaling (MDS) algorithm. Experiments with the improved algorithm and a nearest-neighbor classifier were conducted on the ORL and Yale face databases. The results show that the proposed algorithm outperforms Isomap in average recognition rate by 5.57% and 3.95% respectively, and has stronger robustness for face recognition under small changes.
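A sketch of the IMED building block may help: pixel pairs are coupled by a metric matrix G with Gaussian decay in spatial distance, and the distance is d(x, y)² = (x−y)ᵀG(x−y); equivalently, images are filtered by G^(1/2) and then compared with the ordinary Euclidean distance. The sigma value below is an illustrative choice.

```python
import numpy as np

def imed_matrix(h, w, sigma=1.0):
    """Metric matrix G over all pixel pairs of an h x w image."""
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

def imed(img1, img2, G):
    """IMage Euclidean Distance between two equal-sized gray images."""
    diff = (np.asarray(img1, float) - np.asarray(img2, float)).ravel()
    return float(np.sqrt(diff @ G @ diff))
```

Because nearby pixels are coupled, small spatial shifts change the distance far less than they change the plain per-pixel Euclidean distance, which is what stabilizes the Isomap neighborhood graph.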
    Network and communications
    Internet traffic classification method based on selective clustering ensemble of mutual information
    DING Yaojun CAI Wandong
    2013, 33(01):  80-82.  DOI: 10.3724/SP.J.1087.2013.00080
    Because it is difficult to label Internet traffic and the generalization ability of a single clustering algorithm is weak, a selective clustering ensemble method based on Mutual Information (MI) was proposed to improve traffic classification accuracy. In the method, the Normalized Mutual Information (NMI) between the clustering results of the K-means algorithm under different initial cluster numbers and the distribution of protocol labels in the training set is computed first, and a series of initial cluster numbers K is then selected based on NMI. Finally, a consensus function based on Quadratic Mutual Information (QMI) is used to build the consensus partition, and the clusters are labeled with a semi-supervised method. The overall accuracies of the clustering ensemble method and single clustering algorithms were compared over four test sets; the experimental results show that the overall accuracy of the clustering ensemble method can reach 90%. Using a clustering ensemble model to classify Internet traffic enhances both the overall accuracy of traffic classification and the stability of classification across different datasets.
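The NMI-based selection step can be sketched with a hand-rolled NMI (the paper scores K-means clusterings; here any labelling array works, and the cutoff value is an illustrative assumption): NMI(U, V) = I(U; V) / sqrt(H(U)·H(V)), and candidate clusterings are kept only when their NMI against the protocol labels clears the cutoff.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def nmi(u, v):
    """Normalized mutual information between two labelings."""
    u, v = np.asarray(u), np.asarray(v)
    mi = 0.0
    for a in np.unique(u):
        for b in np.unique(v):
            p_ab = np.mean((u == a) & (v == b))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(u == a) * np.mean(v == b)))
    denom = np.sqrt(entropy(u) * entropy(v))
    return mi / denom if denom > 0 else 1.0

def select_clusterings(candidates, labels, cutoff=0.5):
    """Keep indices of candidate clusterings whose NMI clears the cutoff."""
    return [i for i, c in enumerate(candidates) if nmi(c, labels) >= cutoff]
```

Only the retained clusterings then feed the QMI consensus function.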
    Quality assurance mechanism based on wireless TCP cross-layer service in mobile Ad Hoc network
    LI Ming YANG Lei WU Yanling
    2013, 33(01):  83-87.  DOI: 10.3724/SP.J.1087.2013.00083
    As one of the most popular routing protocols proposed by the Internet Engineering Task Force (IETF), Ad Hoc On-Demand Distance Vector Routing (AODV) has been implemented in many applications of Mobile Ad Hoc Networks (MANET). In AODV, numerous broadcast messages are generated during the route discovery procedure, which consumes much bandwidth and significantly degrades the Quality of Service (QoS) of the network. To solve this problem, a cross-layer mechanism with an Enhanced AODV (E-AODV) routing protocol was proposed. In E-AODV, the Signal-to-Noise Ratio (SNR) of received signals is the key criterion for selecting the next hop. Furthermore, the Wireless Transmission Control Protocol (WTCP) was incorporated into E-AODV to obtain better QoS. The simulation results show that the proposed mechanism can reduce the Data Delivery Latency (DDL) by up to 56% and improve the Data Delivery Ratio (DDR) by up to 24%.
    Improved sliding window non-parametric cumulative sum algorithm
    CHEN Bo MAO Jianlin QIAO Guanhua DAI Ning
    2013, 33(01):  88-91.  DOI: 10.3724/SP.J.1087.2013.00088
    To detect selfish behavior in IEEE 802.15.4 Wireless Sensor Networks (WSN), an improved Sliding Window Non-parameter Cumulative Sum (SWN-CUSUM) algorithm based on statistics was proposed to decrease the detection delay. By tracing the delay characteristic sequence between successful transmissions, the algorithm can determine whether there is selfish behavior in the WSN. Simulations with the NS2 tool were conducted to validate the feasibility of the proposed algorithm. The experimental results show that the improved algorithm not only weakens the impact of the threshold on performance, but also reduces the size of the sliding window used to detect selfish behavior; it improves on the primitive SWN-CUSUM algorithm in both computation and detection delay, and can therefore detect selfish node behavior in IEEE 802.15.4 WSNs effectively and rapidly.
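The core CUSUM idea over the inter-success delay sequence can be sketched as follows (a generic non-parametric CUSUM, not the paper's windowed variant; the reference level and threshold are illustrative tuning parameters): the statistic accumulates positive drift whenever delays exceed a reference level, and an alarm fires once it crosses a threshold h.

```python
def cusum_alarm(x, ref, h):
    """Return the index of the first alarm over delay sequence x, or -1.

    ref: reference delay level under normal behavior; h: alarm threshold.
    """
    s = 0.0
    for t, xi in enumerate(x):
        s = max(0.0, s + xi - ref)   # accumulate only positive drift
        if s > h:
            return t                 # selfish behavior suspected here
    return -1                        # no alarm raised
```

Normal traffic keeps the statistic pinned near zero, while a selfish node's inflated delays drive it over the threshold within a few samples.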
    IP traffic matrix estimation based on ant colony optimization
    WEI Duo LYU Guanghong
    2013, 33(01):  92-94.  DOI: 10.3724/SP.J.1087.2013.00092
    It is very difficult to estimate the Traffic Matrix (TM) of a network because it is a highly ill-posed problem. To solve this problem, a traffic matrix estimation method based on the Ant Colony Optimization (ACO) algorithm was proposed. Through appropriate modeling, traffic matrix estimation was transformed into an optimization problem, and the model was then solved by the ACO algorithm, which can effectively estimate the traffic matrix. The test results show that the accuracy of the proposed algorithm is slightly weaker than that of entropy maximization and quadratic programming; however, those two methods have high complexity and cannot be applied to large-scale networks. Therefore, in large-scale networks, the proposed algorithm is the better choice and can improve the accuracy of traffic matrix estimation.
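Why the problem is ill-posed can be shown in a few lines: link loads satisfy y = Ax with far fewer links than origin-destination flows, so Ax = y has infinitely many solutions. The minimum-norm solution from the pseudo-inverse is one baseline that heuristic searches such as ACO try to improve on. The routing matrix below is a toy example, not a real topology.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],    # link 1 carries OD flows 1 and 2
              [0.0, 1.0, 1.0]])   # link 2 carries OD flows 2 and 3
x_true = np.array([3.0, 2.0, 4.0])  # the (unknown) OD traffic
y = A @ x_true                      # observed link loads: [5, 6]

# Minimum-norm flow vector consistent with the observed loads.
x_est = np.linalg.pinv(A) @ y
# x_est reproduces the link loads exactly, yet differs from x_true.
```

Two links cannot pin down three flows: extra structure (priors, entropy terms, or a heuristic search objective) is needed to pick the right solution.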
    Node scheduling algorithm based on combinatorial assignment code model for heterogeneous sensor network
    CHEN Juan
    2013, 33(01):  96-100.  DOI: 10.3724/SP.J.1087.2013.00096
    For the node scheduling problem in Wireless Sensor Networks (WSN) with heterogeneous sensing radii, a new distributed node scheduling scheme based on a combinatorial assignment code model was proposed. The largest possible group number is decided first, then nodes are divided into clusters in a distributed way based on the concept of two-hop clusters, and finally the nodes in each cluster are scheduled into different groups based on the combinatorial assignment code model. Theoretical analysis and experimental results show that the proposed algorithm prolongs the network lifetime better than existing methods such as random-based and two-hop-cluster-based methods, so it is more suitable for WSN environments with heterogeneous sensing radii.
    Hybrid emulation test method for large scale mobile Ad Hoc network
    GUO Yichen CHEN Jing ZHANG Li HUANG Conghui
    2013, 33(01):  101-104.  DOI: 10.3724/SP.J.1087.2013.00101
    Current test methods for Mobile Ad Hoc Networks (MANET) suffer from simple models, high cost and difficulty of reproduction. In this paper, a Large-scale MANET Hybrid Emulation testing method based on NS2 (LHEN) was proposed to solve these problems. Using the simulation function of NS2, the encapsulation and decapsulation of real and virtual packets are completed with a Tap agent, so communication between the virtual and real environments is achieved through network objects and the NS2 real-time scheduler. Real node movement can be emulated by controlling the wireless signal strength, thereby establishing a real network environment. Finally, the authors constructed large-scale MANETs for contrast experiments using both hybrid emulation and pure simulation. The experimental results show that the performance is almost consistent, with a mean difference below 18.7%, which means LHEN can be applied to test and verify some indicators of large-scale MANETs.
    Characteristic analysis of information propagation pattern in online social network
    HAN Jia XIAO Ruliang HU Yao TANG Tao FANG Lina
    2013, 33(01):  105-107.  DOI: 10.3724/SP.J.1087.2013.00105
    Abstract   PDF (656KB)
    Because of its unique advantages in information propagation, the online social network has become a popular social communication platform. In view of the characteristics of information propagation and the dynamics theory of infectious diseases, this paper put forward a model of information propagation through online social networks. The model considered the influence of different users' behaviors on the transmission mechanism, set up evolution equations for the different types of user nodes, simulated the process of information propagation, and analyzed the behavior characteristics of the different types of users and the main factors that influence information propagation. The experimental results show that the different types of users have distinctive behavior rules in the process of information propagation: information is not propagated endlessly but reaches a stationary state, and the larger the spread coefficient or the immune coefficient is, the faster that stationary state is reached.
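    The epidemic-style dynamics described above can be illustrated with a minimal discrete-time sketch. The coefficients, initial fractions and step size below are hypothetical stand-ins for illustration only, not the paper's actual evolution equations.

```python
def simulate(beta, gamma, s0=0.99, i0=0.01, steps=200, dt=0.1):
    """Discrete-time SIR-style dynamics for information propagation.

    beta  -- spread coefficient (rate of forwarding to susceptible users)
    gamma -- immune coefficient (rate at which users stop forwarding)
    Returns the history of the "spreading" fraction i over time.
    """
    s, i, r = s0, i0, 0.0
    history = []
    for _ in range(steps):
        new_spread = beta * s * i * dt   # users newly exposed to the message
        new_immune = gamma * i * dt      # users who lose interest
        s, i, r = s - new_spread, i + new_spread - new_immune, r + new_immune
        history.append(i)
    return history

fast = simulate(beta=0.8, gamma=0.3)
slow = simulate(beta=0.4, gamma=0.3)
# The spreading fraction rises, peaks, then settles toward a stationary
# state; a larger spread coefficient produces a larger peak.
print(max(fast) > max(slow))  # True
```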
    Multi-hop clustering routing protocol with low energy consumption based on control
    DENG Yaping TANG Jun
    2013, 33(01):  108-111.  DOI: 10.3724/SP.J.1087.2013.00108
    Abstract   PDF (653KB)
    In multi-hop routing protocols for Wireless Sensor Networks (WSN), the cluster heads near the Sink node consume energy faster than other cluster heads, the cluster heads are unevenly distributed, and the multi-hop links are not efficient. To solve these problems, a low energy consumption multi-hop routing protocol based on control was proposed, in which the number and size of clusters, the energy consumption of multi-hop links, the number of rounds and the quantity of transmitted data were all controlled. The simulation results show that, compared with the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol and the Energy-Efficient Uneven Clustering (EEUC) protocol respectively, this routing protocol prolongs the network's stable period by 138% and 13%, and prolongs the network's lifetime by 13% and 8%. Therefore, the proposed protocol reduces the energy consumption of the network, balances the network load, and prolongs the survival time of the network efficiently.
    Hierarchical mobile IPv6 strategy based on new mobile anchor point selection algorithm
    SUN Wensheng HUANG Ji
    2013, 33(01):  112-114.  DOI: 10.3724/SP.J.1087.2013.00112
    Abstract   PDF (586KB)
    To deal with the big gap between intra-domain and inter-domain handoff delay in Hierarchical Mobile IPv6 (HMIPv6), this paper proposed a new Mobile Anchor Point (MAP) selection algorithm. When an intra-domain handoff happened, the Mobile Node (MN) continued to use HMIPv6; otherwise it used a new mechanism based on Duplicate Address Detection (DAD), called D-HMIPv6. An element called the Partner Node (PN) was introduced to help the MN complete part of the layer-three handoff work in advance during inter-domain handoff, which reduced the delay caused by DAD. The results from the network simulation tool NS-2 show that, compared with HMIPv6, D-HMIPv6 reduces the inter-domain handoff delay by nearly two seconds and improves the real-time support capability of mobile IPv6.
    Analysis and improvement of joint routing and sleep scheduling algorithm
    SUN Hong ZHANG Xihuang
    2013, 33(01):  115-119.  DOI: 10.3724/SP.J.1087.2013.00115
    Abstract   PDF (840KB)
    To maximize the lifetime of a Wireless Sensor Network (WSN) with small link load and low network delay, the Iterative Geometric Programming (IGP) algorithm for joint routing and sleep scheduling was analyzed, and an improved algorithm was proposed. The improved algorithm counted the packets sent and received by a node and the number of idle cycles over a period of time. According to this record, the sleep time that minimized the node's working power was calculated and set as the node's sleep time for the next period. Finally, the working power was transmitted to the adjacent nodes and the node's residual energy was forecast, so that energy-aware route selection could be performed. The experimental results show that the improved algorithm prolongs the network lifetime by about 23% and reduces network delay.
    LEACH-DRT: dynamic round-time algorithm based on low energy adaptive clustering hierarchy protocol
    ZHONG Yiyang LIU Xingchang
    2013, 33(01):  120-123.  DOI: 10.3724/SP.J.1087.2013.00120
    Abstract   PDF (591KB)
    Regarding the disadvantages of uneven clustering and fixed round time in the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol, a Dynamic Round-Time (DRT) algorithm based on LEACH (LEACH-DRT) was proposed to prolong network lifetime. The algorithm obtained information about the clusters and their member nodes from the base station, and then computed each cluster's round time according to the number of its member nodes and its remaining energy. The base station sent this time information to the different clusters, which then began to work according to the received schedule. Meanwhile, a new cluster head election mechanism avoided the data loss and wasted energy caused by a cluster head with insufficient energy. The analysis and simulation results show that the improved algorithm prolongs network lifetime by about four times and reduces the probability of data loss by 18% compared with the LEACH protocol, demonstrating that LEACH-DRT achieves a better balance between energy consumption and data loss rate.
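    As a rough illustration of tying round length to cluster state, the sketch below computes a round time that grows with residual energy and shrinks with membership. The formula and constants are hypothetical, since the paper's exact rule is not reproduced in the abstract.

```python
def dynamic_round_time(residual_energy, member_count,
                       energy_per_member_round=0.5, base_time=10.0):
    """Hypothetical round-time rule in the spirit of LEACH-DRT: a cluster
    head with more residual energy, or fewer members to serve, is allowed
    a longer round before re-clustering."""
    # Energy drained per round grows with cluster membership.
    drain = energy_per_member_round * max(member_count, 1)
    rounds_sustainable = residual_energy / drain
    return base_time * rounds_sustainable

# A small cluster with the same energy budget gets a longer round:
print(dynamic_round_time(10, 2) > dynamic_round_time(10, 5))  # True
```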
    Energy-efficient multi-hop uneven clustering algorithm for underwater acoustic sensor network
    LEI Hui JIANG Weidong GUO Yong
    2013, 33(01):  124-126.  DOI: 10.3724/SP.J.1087.2013.00124
    Abstract   PDF (618KB)
    Concerning the unbalanced energy consumption in existing Underwater Acoustic Sensor Network (UW-ASN) clustering routing algorithms, an Energy-Efficient Multi-hop Uneven Clustering (EEMUC) routing algorithm was proposed in this paper. In EEMUC, an uneven layered model was constructed according to the distance between a node and the base station, and cluster head selection in each layered area was based on the comprehensive properties of the nodes. The clusters closer to the sink had smaller sizes than those farther away, and multi-hop inter-cluster data routing was formed to balance energy consumption. The experimental results show that the EEMUC algorithm performs much better than the Low-Energy Adaptive Clustering Hierarchy (LEACH) and Energy-Efficient Uneven Clustering (EEUC) algorithms in terms of the number of cluster heads and residual energy, and it improves the energy efficiency and lifetime of UW-ASN.
    Adaptive radio resource allocation strategy based on density of femtocell
    LIU Gongmin ZHAO Yue
    2013, 33(01):  127-130.  DOI: 10.3724/SP.J.1087.2013.00127
    Abstract   PDF (655KB)
    Serious interference and low radio resource utilization exist in the femtocell, also known as the Home Node B (HNB) or Home Enhanced Node B (HeNB). To solve these problems, an adaptive radio resource allocation strategy based on femtocell density was proposed. The interference between the macrocell and femtocells was suppressed by frequency division, the interference among femtocells was suppressed by resource reuse and power control, and the Femtocell Access Point (FAP) was configured automatically based on a self-organized network. The simulation and performance analysis show that the proposed strategy maximizes radio resource utilization, reduces the interference to nearly zero, and increases the overall system throughput by 20%. This strategy is especially suitable for dense femtocell deployments or occasions with radio resource constraints.
    Information security
    Research and security analysis on open RFID mutual authentication protocol
    ZHANG Nan ZHANG Jianhua
    2013, 33(01):  131-134.  DOI: 10.3724/SP.J.1087.2013.00131
    Abstract   PDF (613KB)
    Considering that Radio Frequency Identification (RFID) systems have many security problems because of limited resources and broadcast transmission, a new improved mutual authentication protocol was put forward. In the protocol, symmetric encryption was combined with random numbers, which balances security, efficiency and cost well. The protocol can be applied in an open environment in which the security of the channel between the database and the reader is not guaranteed, thus improving the mobility and application range of the reader. BAN logic was used for formal analysis, which proved that the proposed protocol is secure and its authentication goals are reachable. The proposed protocol can effectively resist security attacks such as eavesdropping, tracing and replaying.
    Password multimodality method in financial transactions
    DAI Yong ZHANG Weijing SUN Guangwu
    2013, 33(01):  135-137.  DOI: 10.3724/SP.J.1087.2013.00135
    Abstract   PDF (516KB)
    In financial transactions, safety and reliability problems exist in client authentication systems that use a single keyboard password mode. To solve these problems, a password multimodality method was proposed. Modal sensors captured the password codeword information and transmitted its normalized result; the formatted code information was pre-processed and then classified by its attributes, after which the multimodal passwords were fused through shared public units. For an M-bit password in which each bit can be entered through N possible modals, the password theft rate was 1/(10^M × C(N×M, M)). In this multimodal input system, the keyboard password and black-box handwriting are the two default modals, and the application results demonstrate that the proposed method realizes disordered, blended input of multimodal password codewords. At M=6 and N=2, the password theft rate is 1/(10^6 × C(12, 6)), and the difficulty of cracking the password increases with the number of modals. The input system's safety, reliability and other performance indicators are significantly better than those of a system with only one modal.
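    Reading the abstract's rate as 1/(10^M · C(N×M, M)), the quoted M=6, N=2 figure can be checked numerically; the interpretation of the combinatorial term is ours.

```python
from math import comb

def theft_rate(M, N):
    """Password theft rate 1/(10^M * C(N*M, M)): 10^M possible digit
    strings, times the number of ways the M codeword positions can be
    spread over the N*M modal slots (our reading of the abstract)."""
    return 1 / (10 ** M * comb(N * M, M))

# M = 6 digits, N = 2 modals (keyboard + handwriting): C(12, 6) = 924.
print(theft_rate(6, 2) == 1 / (10 ** 6 * 924))  # True
```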
    Blind extraction algorithm of spread-spectrum watermark based on discrete wavelet transform and discrete cosine transform domain
    HU Ran ZHANG Tianqi GAO Hongxing
    2013, 33(01):  138-141.  DOI: 10.3724/SP.J.1087.2013.00138
    Abstract   PDF (800KB)
    Concerning blind extraction of spread-spectrum watermarks, a blind extraction algorithm for digital audio signals was proposed. In the algorithm, the wavelet transform was applied to the audio file, and the Discrete Cosine Transform (DCT) was then applied to its low-frequency coefficients. The fifth coefficient was selected to hide the spread watermark information. As the spread-spectrum sequence and its length were unknown during extraction, spectrum reprocessing and Singular Value Decomposition (SVD) were introduced to estimate the spread-spectrum sequence used in the embedding process, and blind extraction of the spread-spectrum watermark from the given digital signal was thus fulfilled. The simulation results show that, with unknown spread-spectrum parameters, a watermark image with a Normalized Coefficient (NC) of one can be extracted, and the algorithm is strongly robust. Under noise and low-pass filtering attacks, the accuracy of estimating the spread-spectrum sequence is over 90%, which guarantees the recovery of a clear watermark image with a normalized coefficient higher than 0.98.
    Research of trust assessment method in trust computing based on fuzzy theory
    MO Jiaqing HU Zhongwang YE Xuelin
    2013, 33(01):  142-145.  DOI: 10.3724/SP.J.1087.2013.00142
    Abstract   PDF (649KB)
    The trust chain is one of the key technologies in trusted computing, and how to express and assess it is a current research focus. Considering the complex factors that influence trust assessment and the uncertainty and dynamism of trust relations in a trusted computing environment, a trust assessment method based on fuzzy theory was proposed. This method established a five-level space for the trust particle, introduced historical measurement records and a time attenuation factor to construct direct trust, and gave a fuzzy evaluation method for indirect trust. By defining a similarity degree function with an improved Einstein operator, the fuzzy reasoning and evaluation process for entities in the trust chain was given. This method combines fuzzy reasoning with trust transfer, and can fully assess the trust of entities in the trust chain. The simulation experiments show that, compared with other similar methods, the proposed method resists malicious assessment better and yields more credible and reliable assessment results. It is a new method for trust assessment in trusted computing.
    New neural synchronization learning rule based on tree parity machine
    LIANG Yifeng LIAO Xiaofeng REN Xiaoxia
    2013, 33(01):  146-148.  DOI: 10.3724/SP.J.1087.2013.00146
    Abstract   PDF (594KB)
    To solve the low speed of neural synchronization, a new learning rule was proposed employing the Tree Parity Machine (TPM). By setting up queues to record the result of each communication during synchronization, the rule estimated in real time the degree of synchronization of the two communicating TPMs. According to the estimation, the rule selected appropriate values to modify the weights: weight modifications were increased when the degree of synchronization was low and reduced when it was high. The simulation results show that synchronization efficiency is improved by more than 80% with the new learning rule. Meanwhile, the rule is computationally inexpensive and improves the security of communication compared with the classic learning rules.
    User permission isolation model based on finite state machine
    LI Jianjun JIANG Yixiang QIAN Jie LI Wei LI Yu
    2013, 33(01):  149-152.  DOI: 10.3724/SP.J.1087.2013.00149
    Abstract   PDF (645KB)
    For the privilege escalation problem in operating systems, a user permission isolation model based on the Finite State Machine (FSM) was proposed, in which a user's permissions were depicted as an FSM. Each user's permission set was mapped to an FSM that could distinguish whether the user's operation sequence was legal. Besides, the model showed that shared permission points easily lead to privilege escalation. Ultimately, through the FSM, the model achieves effective identification and judgment of user permission isolation.
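    A toy version of the idea, with a hypothetical transition table (not from the paper), shows how an FSM flags an operation sequence that strays outside a user's granted permissions:

```python
# Minimal sketch: a user's permitted operation sequences as a finite
# state machine; any transition not in the table is rejected as an
# illegal (potentially privilege-escalating) operation.
class PermissionFSM:
    def __init__(self, transitions, start="logged_out"):
        self.transitions = transitions   # {(state, op): next_state}
        self.state = start

    def apply(self, op):
        """Return True and advance if `op` is legal in the current state."""
        nxt = self.transitions.get((self.state, op))
        if nxt is None:
            return False                 # illegal sequence: reject
        self.state = nxt
        return True

# Hypothetical permission table for illustration:
table = {
    ("logged_out", "login"): "user",
    ("user", "read"): "user",
    ("user", "sudo"): "admin",
    ("admin", "write"): "admin",
}
fsm = PermissionFSM(table)
print([fsm.apply(op) for op in ["login", "read", "write"]])  # [True, True, False]
```

An ordinary user may log in and read, but "write" is only reachable through the "admin" state, so the direct attempt is flagged.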
    Image steganography algorithm based on human visual system and nonsubsampled contourlet transform
    LIANG Ting LI Min HE Yujie XU Peng
    2013, 33(01):  153-155.  DOI: 10.3724/SP.J.1087.2013.00153
    Abstract   PDF (480KB)
    To improve the capacity and invisibility of image steganography, this article analyzed the respective advantages and application fields of the Nonsubsampled Contourlet Transform (NSCT) and the Contourlet transform. An image steganography algorithm based on the Human Visual System (HVS) and NSCT was then put forward. By modeling the human visual masking effect, different secret messages were embedded separately into different coefficients of the high-frequency subbands of the NSCT. The experimental results show that, in comparison with wavelet-based steganography, the proposed algorithm improves the steganographic capacity by at least 70000 bits, and the Peak Signal-to-Noise Ratio (PSNR) increases by about 4dB. Therefore, both invisibility and embedding capacity are well balanced, and the algorithm has a better application outlook than the wavelet approach.
    Probability matching efficient-optimization mechanism on self-set detection in network intrusion detection system
    GAO Miaofen QIN Yong LI Yong ZOU Yu LI Qingxia SHEN Lin
    2013, 33(01):  156-159.  DOI: 10.3724/SP.J.1087.2013.00156
    Abstract   PDF (628KB)
    To deal with the huge spatial and temporal consumption caused by large-scale self-set data, a self-set matching mechanism for artificial-immune-based Network Intrusion Detection Systems (NIDS) was designed, and an efficient probability matching optimization mechanism was proposed to improve the detection efficiency of the intrusion detection system. The authors first demonstrated the relative concentration of network data, then analyzed the validity of the probability matching mechanism by calculating the Average Search Length (ASL), and finally verified the fast matching efficiency of the mechanism through simulation experiments. The mechanism has been applied in a new artificial immune network intrusion detection system based on a self-set scale reduction mechanism, where it has achieved satisfactory matching results.
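    The ASL computation mentioned above can be sketched as follows. The entry match probabilities are invented for illustration, but they show why ordering the self-set by matching probability shortens the expected linear scan:

```python
def average_search_length(probabilities):
    """ASL = sum_i p_i * i for a linear scan that stops at the first
    matching entry, where probabilities[i-1] is the probability that
    the i-th entry is the one that matches."""
    return sum(p * i for i, p in enumerate(probabilities, start=1))

# Hypothetical match probabilities for four self-set entries:
p = [0.1, 0.2, 0.6, 0.1]
unordered = average_search_length(p)
optimized = average_search_length(sorted(p, reverse=True))  # hot entries first
print(round(unordered, 2), round(optimized, 2))  # 2.7 1.7
```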
    Research on certificate revocation list mechanism based on intrusion tolerance
    LYU Hongwei XU Lei
    2013, 33(01):  160-162.  DOI: 10.3724/SP.J.1087.2013.00160
    Abstract   PDF (670KB)
    In Public Key Infrastructure (PKI) systems, the Certificate Authority (CA) signature is hard to forge; thus, intrusions into certificate revocation systems based on the Certificate Revocation List (CRL) usually aim at destroying system usability and data integrity. Concerning this intrusion feature, an intrusion-tolerant CRL service system was designed, in which the CRL was stored on multiple redundant servers. To replicate and use data among these servers, a passive replication algorithm that randomly selects the main server and a simple voting algorithm that selects the most recently updated CRL were proposed. Under the given experimental intrusion conditions, although system overhead increased, the certificate revocation query accuracy of the intrusion-tolerant system was about 20% higher than that of a system without intrusion tolerance. The experimental results show that properly adding more servers increases the query accuracy of certificate revocation while keeping the system overhead under control.
    Security algorithm for Eta bilinear pairing over binary fields in crypto chip
    CHAI Jiajing GU Haihua BAO Sigang
    2013, 33(01):  163-167.  DOI: 10.3724/SP.J.1087.2013.00163
    Abstract   PDF (770KB)
    In order to realize the Eta bilinear pairing over binary fields securely and efficiently in a crypto chip, a power-analysis-resistant algorithm based on the square method was proposed. Key masking and data masking schemes based on the square method were studied respectively, and the implementation details of the power-analysis-resistant algorithm were given. Over typical fields, the implementation efficiency of the proposed algorithm was increased by 10% or more compared with the algorithm based on the square root method, and the proposed algorithm did not need to store any pre-computed variables. Furthermore, the loop unrolling idea used in characteristic-three fields was extended to the proposed algorithm, which further increased the implementation efficiency by about 3%. With its improved efficiency and optimized storage, the proposed algorithm is well suited to secure crypto chips.
    Cryptanalysis of efficient identity-based signature scheme
    HUANG Bin DENG Xiaohong
    2013, 33(01):  168-170.  DOI: 10.3724/SP.J.1087.2013.00168
    Abstract   PDF (475KB)
    Identity-based signatures are the groundwork of many cryptographic protocols. This paper analyzed the efficient identity-based signature scheme of Gu et al. (GU K, JIA W J, JIANG C L. Efficient and secure identity-based signature scheme. Journal of Software, 2011, 22(6): 1350-1360). Two equivalent signature generating algorithms were proposed, and it was pointed out that Gu et al.'s scheme cannot satisfy the basic security properties: any attacker can use the equivalent secret key and signature generating algorithms proposed in this paper to forge a valid secret key of a user and a valid signature on any message with respect to any identity in their scheme. Furthermore, the reason the scheme is insecure was analyzed, and it was pointed out that designing an identity-based signature scheme more efficient than the classical one is almost impossible.
    Artificial intelligence
    Multi-user detector based on improved binary artificial bee colony algorithm
    LIU Ting ZHANG Liyi BAO Weiwei ZOU Kang
    2013, 33(01):  171-174.  DOI: 10.3724/SP.J.1087.2013.00171
    Abstract   PDF (779KB)
    Optimum Multi-user Detection (OMD) can achieve the theoretically minimum error probability, but it has been proven to be a Non-deterministic Polynomial (NP) problem. As a new swarm intelligence algorithm, the Artificial Bee Colony (ABC) algorithm has been widely used in various optimization problems; however, the traditional Binary Artificial Bee Colony (BABC) algorithm suffers from slow convergence and easily falls into local optima. Concerning these shortcomings, an improved binary artificial bee colony algorithm was proposed and applied to optimum multi-user detection: the initialization process was simplified and a one-dimensional-reversal neighborhood search strategy was adopted. Compared with optimum multi-user detection, the computational complexity of the improved algorithm is obviously lower. The simulation results show that the proposed scheme significantly outperforms conventional detection in resisting multiple access interference and the near-far effect.
    Speech endpoint detection based on critical band and energy entropy
    ZHANG Ting HE Ling HUANG Hua LIU Xiaoheng
    2013, 33(01):  175-178.  DOI: 10.3724/SP.J.1087.2013.00175
    Abstract   PDF (605KB)
    The accuracy of speech endpoint detection has a direct impact on the precision of speech recognition, synthesis, enhancement, etc. To improve the effectiveness of speech endpoint detection, an algorithm based on critical bands and energy entropy was proposed. It took full advantage of the frequency distribution characteristics of human hearing and divided the speech signal according to critical bands. Combined with the different distributions of energy entropy in each critical band of speech segments and noise segments, speech endpoint detection under different background noises was completed. The experimental results indicate that the average accuracy of the proposed algorithm is 1.6% higher than that of the traditional short-time energy algorithm, and the method can detect speech endpoints under various noise environments with low Signal-to-Noise Ratio (SNR).
    QR code recognition based on sparse representation
    SUN Daoda ZHAO Jian WANG Rui FENG Ning HU Jianghua
    2013, 33(01):  179-181.  DOI: 10.3724/SP.J.1087.2013.00179
    Abstract   PDF (585KB)
    With regard to the problem that recognition software fails when a Quick Response (QR) code image is contaminated, damaged or obscured, a QR code recognition method based on sparse representation was proposed. Forty categories of QR code images were used as research subjects, with 13 images per category. Three images were randomly selected from each category, giving a total of 120 images as the training samples, with the remaining 400 as test samples. The sparse representation dictionary was composed of all training samples, so that each test sample was a sparse linear combination of the training samples with sparse coefficients. The projection of each test sample onto the dictionary was calculated, and the category with the smallest residual was taken as the classification result. Finally, the recognition results of the proposed method were compared with those of the QR code recognition software PsQREdit. The experimental results show that the proposed method can correctly identify partially contaminated, damaged and obscured images with good robustness, making it a new and effective means for QR code recognition.
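    The residual-based classification step can be sketched as below. This is a simplified stand-in: it represents the test vector with each class's training columns by least squares, whereas a full sparse-representation classifier would use an l1-minimizing solver over the whole dictionary; the data are synthetic.

```python
import numpy as np

def classify_by_residual(train, labels, test):
    """Pick the class whose training columns reconstruct `test` with the
    smallest residual (least-squares stand-in for sparse coding)."""
    best_label, best_res = None, float("inf")
    for label in set(labels):
        cols = [i for i, l in enumerate(labels) if l == label]
        A = train[:, cols]
        coef, *_ = np.linalg.lstsq(A, test, rcond=None)
        res = np.linalg.norm(test - A @ coef)
        if res < best_res:
            best_label, best_res = label, res
    return best_label

# Synthetic two-class demo: noisy copies of one prototype per class.
rng = np.random.default_rng(0)
proto = {0: rng.normal(size=8), 1: rng.normal(size=8)}
labels = [0, 0, 0, 1, 1, 1]
train = np.column_stack([proto[l] + 0.05 * rng.normal(size=8) for l in labels])
test = proto[1] + 0.05 * rng.normal(size=8)
print(classify_by_residual(train, labels, test))  # 1
```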
    Chinese story link detection based on extraction of elements correlative word
    CHEN Zhimin MENG Zuqiang LIN Qifeng
    2013, 33(01):  182-185.  DOI: 10.3724/SP.J.1087.2013.00182
    Abstract   PDF (686KB)
    At present, the cost of Chinese story link detection is high, since both the miss rate and the false alarm rate are high. Concerning this problem, based on a multi-vector space model, this paper introduced correlative words of story elements (time, site, people, content) to represent the relevance of the different elements, integrated coherence similarity and cosine similarity with a Support Vector Machine (SVM), and then proposed an algorithm based on the extraction of element correlative words. The proposed algorithm represents a story more completely and provides more evidence for detection; the detection cost was decreased by nearly 11%. Finally, the experimental results confirm the validity of the proposed algorithm.
    Knowledge retrieval based on text clustering and distributed Lucene
    FENG Ruwei XIE Qiang DING Qiulin
    2013, 33(01):  186-188.  DOI: 10.3724/SP.J.1087.2013.00186
    Abstract   PDF (474KB)
    To solve the low performance and efficiency of a traditional centralized index when processing large-scale unstructured knowledge, a retrieval algorithm based on text clustering was proposed. The algorithm uses text clustering to improve the existing index distribution method and reduces the search range by judging the query intent through the distance between the query and the clusters. The experimental results show that the proposed scheme can effectively alleviate the pressure of indexing and retrieval when handling large-scale data. It greatly improves the performance of distributed retrieval while maintaining relatively high precision and recall.
    Blog community detection based on formal concept analysis
    LIU Zhaoqing FU Yuchen LING Xinghong XIONG Xiangyun
    2013, 33(01):  189-191.  DOI: 10.3724/SP.J.1087.2013.00189
    Abstract   PDF (631KB)
    Several problems exist in the trawling algorithm, such as too many Web communities, a high repetition rate between community cores, and isolated communities formed by the strict definition of community. Thus, an algorithm for detecting Blog communities based on Formal Concept Analysis (FCA) was proposed. Firstly, a concept lattice was formed according to the linkage relations between Blogs; then clusters were divided from the lattice based on equivalence relations; finally, communities were clustered within each cluster based on the similarity of concepts. The experimental results show that community cores whose network density is greater than 40% account for 83.42% of all cores in the test data set, the network diameter of the combined community is 3, and the content of the community is significantly enriched. The proposed algorithm can be effectively used to detect communities in Blogs, micro-Blogs and other social networks, and it has significant application value and practical meaning.
    Composite metric method for time series similarity measurement based on symbolic aggregate approximation
    LIU Fen GUO Gongde
    2013, 33(01):  192-198.  DOI: 10.3724/SP.J.1087.2013.00192
    Abstract   PDF (914KB)
    The key-point-based Symbolic Aggregate approximation (SAX) improving algorithm (KP_SAX) uses key points to measure the point distance of time series based on SAX, which measures the similarity of time series more effectively; however, it carries too little information about the patterns of the time series to measure similarity reasonably. To overcome this defect, a composite metric method for time series similarity measurement based on SAX was proposed, which synthesizes both point distance and pattern distance measurement. First, key points were used to further subdivide the Piecewise Aggregate Approximation (PAA) segments into several sub-segments; then a triple containing the information for the two kinds of distance measurement was used to represent each sub-segment; finally, a composite metric formula was used to measure the similarity between two time series, and the calculated results reflect the difference between the two series more effectively. The experimental results show that the proposed method is only 0.96% lower than the KP_SAX algorithm in time efficiency, but it is superior to both KP_SAX and the traditional SAX algorithm in differentiating between two time series.
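    The SAX representation that the method builds on can be sketched as follows (4-symbol alphabet with the classic normal-distribution breakpoints; the sample series is ours):

```python
import statistics

# Breakpoints for a 4-symbol alphabet under the standard normal
# distribution (the classic SAX breakpoint table).
BREAKPOINTS = [-0.6745, 0.0, 0.6745]

def sax(series, n_segments, alphabet="abcd"):
    """Classic SAX: z-normalize, apply Piecewise Aggregate Approximation
    (PAA), then map each segment mean to a symbol via the breakpoints."""
    mu = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0   # guard against constant series
    z = [(x - mu) / sd for x in series]
    seg = len(z) // n_segments
    symbols = []
    for k in range(n_segments):
        m = statistics.mean(z[k * seg:(k + 1) * seg])   # PAA segment mean
        idx = sum(m > b for b in BREAKPOINTS)           # breakpoint bin
        symbols.append(alphabet[idx])
    return "".join(symbols)

print(sax([1, 1, 2, 2, 8, 8, 9, 9], 4))  # "aadd"
```

KP_SAX and the proposed composite metric refine this representation by subdividing the PAA segments at key points before measuring distance.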
    Improved path planning algorithm of rapidly-exploring random tree for biped robot
    MO Dongcheng LIU Guodong
    2013, 33(01):  199-201.  DOI: 10.3724/SP.J.1087.2013.00199
    Abstract   PDF (574KB)
    To solve the problems that Rapidly-exploring Random Tree (RRT) path planning is unstable and does not take cost into consideration, an anytime RRT algorithm was proposed. The algorithm produces an initial solution very quickly, and then improves tree growth by reusing information from previous trees. Besides, a biased sampling distribution built from a waypoint cache was introduced to reduce cost. The resulting approach produces an initial solution very quickly, then improves its quality within the given time, and by using the cost bound, each subsequent solution is guaranteed to be less costly than all previous ones. In the biped robot simulation experiment, compared with the original algorithm, the number of search nodes created by the improved algorithm decreases from 883 to 704 and the efficiency increases by approximately 25%. The simulation results demonstrate the effectiveness of the improved algorithm.
    Data stream clustering algorithm based on dependent function
    PAN Lina WANG Zhihe DANG Hui
    2013, 33(01):  202-206.  DOI: 10.3724/SP.J.1087.2013.00202
    Abstract   PDF (776KB)
    Traditional data stream clustering algorithms are mostly based on distance or density, so their clustering quality and processing efficiency are limited. To address these problems, this paper proposed a data stream clustering algorithm based on the dependent function. Firstly, the data points were modeled as matter-elements and a dependent function was established. Secondly, the value of the dependent function was calculated, and according to this value the degree to which a data point belongs to a certain cluster was judged. Then, the proposed method was applied to the online-offline framework of data stream clustering. Finally, the algorithm was tested on the real data set KDD-CUP99 and on randomly generated artificial data sets. The experimental results show that the clustering purity of the proposed method is over 92% and that it can process about 6300 records per second; compared with traditional algorithms, its processing efficiency is greatly improved. The algorithm also shows stronger scalability with respect to dimensionality and the number of clusters, and is suitable for processing large dynamic data sets.
    Curriculum scheduling based on improved particle swarm optimization algorithm
    WANG Nianqiao YAO Sigai
    2013, 33(01):  207-210.  DOI: 10.3724/SP.J.1087.2013.00207
    Abstract | PDF (615KB)
    After analyzing the curriculum scheduling problem, an algorithm based on the discrete particle swarm algorithm was proposed, together with a framework for solving the problem. Because the particle swarm algorithm converges slowly in the late stage of its iterations and is easily trapped in local optimal solutions, an improved algorithm was applied that fully considers the features of curriculum scheduling: the algorithm was modeled in three-dimensional space, its particles were initialized so as to avoid conflicts, and mutation was introduced to escape local optima. The application shows that the proposed algorithm can solve the curriculum scheduling problem effectively.
    Network and distributed technology
    Distributed storage solution based on parity coding
    CHEN Dongxiao WANG Peng
    2013, 33(01):  211-214.  DOI: 10.3724/SP.J.1087.2013.00211
    Abstract | PDF (727KB)
    To guarantee reliability, traditional cloud storage solutions generally back up data through mirror redundancy, which lowers the usage efficiency of the storage space. A storage solution was proposed to reduce the storage space consumed by redundant backup data. The solution introduced: 1) parity-coding backup instead of mirror backup, which reduced the size of the backup data; 2) a conflict-jump mechanism to confirm the backup data, which guaranteed reliability while the number of backup copies was reduced. A comparison between the simulation results and the performance of mainstream cloud storage solutions shows that the proposed solution significantly reduces the storage space used for distributed storage while reliability remains guaranteed.
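The core of parity-coding backup is that one parity block protects N data blocks (overhead 1/N) instead of a full mirror copy (overhead 1), and any single lost block is recoverable by XOR. The following is a minimal single-parity sketch of that idea, not the paper's conflict-jump scheme; function names are hypothetical.

```python
def make_parity(blocks):
    """Compute a single parity block as the bytewise XOR of all data blocks
    (all blocks assumed equal length)."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild the one missing block by XORing the parity with all survivors:
    XOR cancels every block that is still present."""
    missing = bytearray(parity)
    for block in surviving_blocks:
        for i, byte in enumerate(block):
            missing[i] ^= byte
    return bytes(missing)
```

For three 4KB data blocks this stores 4KB of redundancy instead of the 12KB a mirror would need, at the cost of tolerating only one failure per parity group.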
    Multiple samples alignment for GC-MS data in parallel on Sector/Sphere
    YANG Huihua REN Hongjun LI Lingqiao DUAN Lixin GUO Tuo DU Lingling QI Xiaoquan
    2013, 33(01):  215-218.  DOI: 10.3724/SP.J.1087.2013.00215
    Abstract | PDF (616KB)
    To deal with the problem that processing Gas Chromatography-Mass Spectrometry (GC-MS) data is complex and time-consuming, which delays the whole experimental progress, and taking the alignment of multiple samples as an example, a parallel framework for processing GC-MS data on Sector/Sphere was proposed, and an algorithm for aligning multiple samples in parallel was implemented. First, the similarity matrix of all the samples was computed; then the sample set was divided into small sets by hierarchical clustering and the samples in each set were aligned respectively; finally, the results of each set were merged according to the average sample of the set. The experimental results show that the error rate of the parallel alignment algorithm is 2.9% and the speedup ratio reaches 3.29 on a cluster of 4 PCs, so the approach can accelerate the processing at high accuracy and alleviate the problem of overly long processing time.
    Bayes decision-based singularity detection
    LIU Mige LI Xiaobin
    2013, 33(01):  219-221.  DOI: 10.3724/SP.J.1087.2013.00219
    Abstract | PDF (603KB)
    This paper proposed a new method for detecting and locating the singularities in the signal. By analyzing the characteristics of singular signal, the detection of the pulse singularities was first modeled as a classification task of two classes: one class consisted of the pulse singularities and the other contained the other points in the signal. Then, based on the Bayes decision rule and Neyman-Pearson criterion, a decision surface was derived by constraining the probability of missed detection for the class containing the pulse singularities to be fixed. As a result, a Bayes Decision Based Pulse Singularity Detection (BDPSD) method was directly developed. The experimental results on a number of artificial and real signals show that the BDPSD method can greatly improve the detection quality and locating accuracy of the pulse singularity, compared with the singularity detection method based on the wavelet transform local modulus maximum theory. This also shows that BDPSD is indeed an effective and practical singularity detection method.
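The Neyman-Pearson step above (fix the miss probability for the pulse class, then derive the decision threshold) can be sketched for the simple case of Gaussian-distributed pulse amplitudes. This is an illustrative reduction under assumed distributions, not the BDPSD method itself.

```python
from statistics import NormalDist

def np_threshold(pulse_mean, sigma, miss_prob=0.01):
    """Neyman-Pearson style threshold: constrain the probability of missing
    a pulse (amplitude ~ N(pulse_mean, sigma)) to miss_prob, which fixes the
    threshold at the miss_prob quantile of the pulse distribution."""
    return pulse_mean + sigma * NormalDist().inv_cdf(miss_prob)

def detect(signal, threshold):
    """Return indices of samples classified as pulse singularities."""
    return [i for i, x in enumerate(signal) if x >= threshold]
```

Lowering `miss_prob` pushes the threshold down, trading more false alarms for fewer missed pulses, which is exactly the constraint the decision surface in the paper is built around.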
    Research on adaptive time-varying terminal sliding mode control
    HUANG Guoyong HU Jichen WU Jiande FAN Yugang WANG Xiaodong
    2013, 33(01):  222-225.  DOI: 10.3724/SP.J.1087.2013.00222
    Abstract | PDF (569KB)
    To resolve the problem of poor robustness during the reaching phase of Terminal sliding mode control, a time-varying sliding mode control method was proposed. A nonlinear time-varying sliding mode surface was designed after analyzing the influence of the sliding mode surface's design parameters on system performance. To deal with the disturbances of a class of Multi-Input Multi-Output (MIMO) nonlinear systems, a disturbance observer was constructed, through which the external disturbances were approximated online by adjusting the weights. The simulation results show that the settling time of the proposed scheme is 80% less than that of PID control, and that the proposed method exhibits no overshoot, demonstrating that the proposed design can be used for the control of MIMO nonlinear systems.
    Experimental analysis for calculation performance of mass data based on GemFire
    XU Xiang ZOU Fumin LIAO Lyuchao ZHU Quan
    2013, 33(01):  226-229.  DOI: 10.3724/SP.J.1087.2013.00226
    Abstract | PDF (885KB)
    To meet the demand for real-time and dynamically scalable processing of multi-source mass data in transportation, this paper proposed a distributed in-memory database experimental platform based on GemFire. The platform exploited GemFire features such as the key-value data storage structure and distributed dynamic membership. Actual data from a floating-car system was used to carry out the performance analysis in a cloud computing architecture. The experimental results show that the platform can shorten the computation time for mass data to less than 10% of that of the existing system and basically satisfies the application requirements of transport data resource integration on a cloud computing platform.
    Calculation method for singular solutions of a class of nonlinear equations and its application
    JI Zhenyi WU Wenyuan FENG Yong
    2013, 33(01):  230-233.  DOI: 10.3724/SP.J.1087.2013.00230
    Abstract | PDF (561KB)
    To resolve the problem that the Jacobian matrix of a special class of nonlinear equations is singular at the solution, an improved Newton method was proposed based on the dual space. This paper gave an explicit formula, based on polynomial multiplication, for computing the dual space of an ideal at a point, and used the dual space to construct augmented equations. The Jacobian matrix of the augmented equations is of full rank at the initial point, so the algorithm recovers the quadratic convergence of Newton's iteration. The experimental results show that after three iterations the accuracy of the computation reaches 10^(-15). The proposed method further enriches the theory of the dual space of an ideal in algebraic geometry and provides a new method for numerical calculation in engineering applications.
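The effect of augmentation can be seen on the textbook singular example f(x) = x^2, whose Jacobian 2x vanishes at the double root x = 0: plain Newton only halves the error each step, while a Gauss-Newton iteration on the augmented system (f, f') regains fast convergence. This sketch is not the paper's dual-space construction, only an illustration of why augmenting restores the convergence rate.

```python
def newton_plain(x, steps):
    """Plain Newton on f(x) = x^2: converges only linearly to the double root 0."""
    for _ in range(steps):
        x = x - (x * x) / (2 * x)   # f/f' = x/2, so the error merely halves
    return x

def newton_augmented(x, steps):
    """Gauss-Newton on the augmented system F(x) = (x^2, 2x), whose Jacobian
    (2x, 2) keeps full rank even at the singular root x = 0."""
    for _ in range(steps):
        f1, f2 = x * x, 2 * x
        j1, j2 = 2 * x, 2.0
        # least-squares Newton step for a scalar unknown: dx = (J^T F) / (J^T J)
        x = x - (j1 * f1 + j2 * f2) / (j1 * j1 + j2 * j2)
    return x
```

Starting from x = 1, the plain iteration needs about 50 steps to reach 1e-15, while the augmented iteration gets there in 5.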
    Chaotic ship maneuvering control in course-keeping
    LI Tianwei LIU Xiaoguang PENG Weihua LI Wei
    2013, 33(01):  234-238.  DOI: 10.3724/SP.J.1087.2013.00234
    Abstract | PDF (730KB)
    Concerning the problem of chaotic maneuvering control for a sailing ship, a rectangular impulsive parametric perturbation control method based on the Melnikov function of the controlled chaotic system was proposed for the nonlinear ship steering model. The proposed method perturbed the chaotic system parameters with a rectangular pulse. By solving the chaotic system's homoclinic orbit and constructing the Melnikov function of the controlled system, the value range of the pulse parameters was determined mathematically from the simple-zero condition of the Melnikov function, which avoided choosing the pulse parameters blindly when controlling chaos. The experiments on chaotic ship steering control show that the proposed method can stabilize the chaotic system onto periodic orbits, and the amplitude of the stabilized system is only 8.5% of that of the original system. The experimental results demonstrate the effectiveness of the proposed method for chaotic ship steering control.
    Personalized Web services selection method based on collaborative filtering
    HE Chunlin XIE Qi
    2013, 33(01):  239-242.  DOI: 10.3724/SP.J.1087.2013.00239
    Abstract | PDF (626KB)
    The traditional Web service selection algorithms were analyzed and their problems in dynamic environments were pointed out. To address these problems, a personalized Web service selection method based on collaborative filtering was proposed, and a personalized Web service selection framework was designed, which used collaborative filtering to predict the Quality of Service (QoS) and selected the best service meeting the users' requirements. About 1.5 million real-world QoS records were employed to evaluate the proposed method against four other methods, and the experimental results demonstrate that the proposed method is feasible and provides better prediction results.
    Fault behaviors analysis of embedded programs
    ZHANG Danqing JIANG Jianhui CHEN Linbo
    2013, 33(01):  243-249.  DOI: 10.3724/SP.J.1087.2013.00243
    Abstract | PDF (1411KB)
    To analyze the abnormal behavior of programs induced by software defects, a characterization method of program behavior was proposed first, and the baseline behavior and fault behavior of a program were defined and formally described. A quantitative approach to representing the fault behavior of a program was then proposed. Furthermore, a Program Fault Behavior Analysis (PFBA) method, which selected the system call as the state granularity of program behavior, was designed and implemented. Experiments on specific embedded benchmarks were conducted with fault injection to obtain the fault behavior indices described above. The experimental results show that program behaviors differ across individual fault types, and an in-depth analysis demonstrates that this diversity of fault behaviors is induced by the algorithm implementations and structural characteristics of the embedded programs themselves. Therefore, the presented fault behavior analysis can reveal the characteristics of an embedded program's response to specific software defects, as well as provide important feedback to the program development process.
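With the system call as the state granularity, one common way to quantify how far an observed trace departs from the baseline behavior is the fraction of system-call n-grams never seen in the baseline profile. This is a generic sketch of that quantification, with assumed names, not the paper's exact indices.

```python
def ngrams(trace, n=3):
    """Sliding-window n-grams of a system-call trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def deviation(baseline_trace, observed_trace, n=3):
    """Fraction of observed n-grams absent from the baseline profile:
    0.0 means behavior matches the baseline, 1.0 means entirely novel."""
    profile = ngrams(baseline_trace, n)
    observed = ngrams(observed_trace, n)
    if not observed:
        return 0.0
    return len(observed - profile) / len(observed)
```

Comparing this deviation score across injected fault types makes the per-fault-type behavioral differences measurable.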
    Web service composition method based on community service chain
    HE Li ZHAO Fuqiang RAO Jun
    2013, 33(01):  250-253.  DOI: 10.3724/SP.J.1087.2013.00250
    Abstract | PDF (623KB)
    A new Web service composition method based on service communities and service chains was proposed in this paper to improve the time efficiency of service composition. In the method, a service network was constructed for the Web service collection, a service community discovery algorithm based on the information center was applied to find service communities in the service network, and then a community service chain discovery algorithm and a Web service composition algorithm based on service chains were built. With these algorithms, all service interface associations in a service community were converted into service chains, and the Web service composition process based on community service chains and Quality of Service (QoS) pruning was implemented. The experimental results indicate that, compared with the traditional service composition method based on depth-first graph traversal, the response time of the proposed method on five test sets is improved by 42% on average, and by up to 67%. Community service chains can effectively reduce the service search space for the current service request and improve the time efficiency of service composition.
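The underlying composition task (chain services whose outputs satisfy later services' inputs until the requested concepts are produced) can be sketched as a breadth-first forward-chaining search. This is the baseline graph-search formulation the paper improves on, not its community-chain algorithm; the service format is an assumption.

```python
from collections import deque

def compose(services, have, want):
    """Forward-chaining BFS: services maps name -> (input set, output set).
    Return one sequence of services turning the concepts in `have` into
    every concept in `want`, or None if no composition exists."""
    start = frozenset(have)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        known, chain = queue.popleft()
        if set(want) <= known:
            return chain
        for name, (ins, outs) in services.items():
            if set(ins) <= known:
                nxt = known | set(outs)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, chain + [name]))
    return None
```

Precomputed community service chains shrink the branching factor of exactly this kind of search, which is where the reported response-time gains come from.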
    Real-time monitoring and on-demand adjustment of virtual machine memory based on Xen
    HU Yao XIAO Ruliang JIANG Jun HAN Jia NI Youcong DU Xin FANG Lina
    2013, 33(01):  254-257.  DOI: 10.3724/SP.J.1087.2013.00254
    Abstract | PDF (808KB)
    In a Virtual Machine (VM) computing environment, it is difficult to monitor and allocate a VM's memory in real-time. To overcome this shortcoming, a real-time memory monitoring and adjusting method for the Xen virtual machine, called Xen Memory Monitor and Control (XMMC), was proposed and implemented. The method uses Xen hypercalls, and can not only monitor a VM's memory usage in real-time but also dynamically allocate a VM's memory on demand. The experimental results show that XMMC causes only a very small performance loss, less than 5%, to the VM's applications. It can monitor and adjust VMs' memory occupation on demand in real-time, which facilitates the management of multiple virtual machines.
    Efficient and universal testing method of user interface based on SilkTest and XML
    HE Hao CHENG Chunling ZHANG Zhengyu ZHANG Dengyin
    2013, 33(01):  258-261.  DOI: 10.3724/SP.J.1087.2013.00258
    Abstract | PDF (646KB)
    In software testing, User Interface (UI) testing plays an important role in ensuring software quality and reliability. Concerning the lack of stability and generality in handle-based UI testing methods, an improved method of recognizing and testing UI controls based on Extensible Markup Language (XML) was proposed. The method exploited XML's convenience for data processing and combined it with the automation testing tool SilkTest to improve traditional UI testing. Considering the multi-language, multi-version nature of AutoCAD, an automation testing scheme based on the proposed method was designed to test dialog boxes in a series of AutoCAD products. The experimental results show that the improved method reduces control recognition time and program redundancy, and improves both testing efficiency and the stability of UI control recognition.
    Research and design of lightweight workflow model based on improved activity-on-vertex network
    2013, 33(01):  262-265.  DOI: 10.3724/SP.J.1087.2013.00262
    Abstract | PDF (863KB)
    To address the deficiencies of existing workflow models in dealing with large and complex systems, the concept of a lightweight model was introduced, and a lightweight workflow model based on an improved Activity-On-Vertex (AOV) network was proposed to meet the requirements of large and complex business processes in workflow management. Along with the detailed definition and design of the model, two critical process scheduling algorithms, a scheduling algorithm for branch points and a synchronization algorithm for convergence points, were given to ensure the correct operation of processes. The lightweight advantage of the model was demonstrated through the workflow modeling of a concrete example, and static and dynamic verification based on graph theory proves that the model is sound.
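Scheduling over an AOV network reduces to topological ordering: an activity becomes ready once all of its predecessors have completed, and a cycle means the process definition is invalid. The sketch below uses Kahn's algorithm to illustrate that core step; it is a generic illustration, not the paper's branch/synchronization algorithms.

```python
from collections import deque

def schedule(activities, edges):
    """Kahn's algorithm over an activity-on-vertex network: return an
    execution order of the activities, or None if the process contains a cycle."""
    indeg = {a: 0 for a in activities}
    succ = {a: [] for a in activities}
    for u, v in edges:          # edge (u, v): activity u must precede v
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(a for a in activities if indeg[a] == 0)
    order = []
    while ready:
        a = ready.popleft()
        order.append(a)
        for b in succ[a]:
            indeg[b] -= 1
            if indeg[b] == 0:   # all predecessors done: b becomes ready
                ready.append(b)
    return order if len(order) == len(activities) else None
```

Branch points correspond to vertices with several outgoing edges and convergence points to vertices whose in-degree must drain to zero, which is exactly what the synchronization algorithm guards.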
    Semantics of OWL-S process model based on temporal description logic
    LI Ming LIU Shiyi NIAN Fuzhong
    2013, 33(01):  266-269.  DOI: 10.3724/SP.J.1087.2013.00266
    Abstract | PDF (650KB)
    Concerning the problem that the Ontology Web Language for Services (OWL-S) process model lacks the capacity to express dynamic interaction and timing characteristics, a formalization method for the process model based on temporal description logic was proposed. It described the atomic processes and composite processes of the OWL-S process model, from which the dynamic semantics of the OWL-S process model was obtained, and finally the formal modeling of the OWL-S process model was realized. The experimental results show that the proposed method is feasible and provides a foundation for analysis and validation.
    Typical applications
    Data comparison software system for new generation Doppler weather radar network
    ZHOU Haiguang
    2013, 33(01):  270-275.  DOI: 10.3724/SP.J.1087.2013.00270
    Abstract | PDF (1110KB)
    China's Doppler weather radar network, composed of 216 radars, will be set up soon. Comparing data across the radar network is important not only for finding radar network operation malfunctions and assisting radar calibration, but also as a foundation for improving the accuracy of short-range and nowcasting weather forecasts. To this end, radar reflectivity comparison software was developed. Firstly, it was proposed to compare the data in the vertical cross-section along the equidistant line between adjacent radars, so as to avoid distance attenuation and beam broadening. Secondly, a rapid coordinate transform algorithm was proposed that saves 85% of the computation time. Furthermore, a three-dimensional spatial mixed interpolation algorithm was proposed to preserve the spatial character of the radar data and improve comparison accuracy. The system can automatically identify and process the synchronous data of radar pairs, and can interpolate and analyze the grid point data in the vertical cross-section along the equidistant line, revealing the reliability and coherence of the network data. The simulation results show that the system meets the requirements of objective analysis of network data.
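A standard way to speed up the polar-to-Cartesian transform of radar gates, plausibly related to the rapid transform mentioned above (the paper's exact algorithm is not given here), is to precompute the sine and cosine of each azimuth once so every gate costs only two multiplications. Names and the 360-azimuth layout are assumptions.

```python
import math

def build_tables(num_azimuths=360):
    """Precompute sin/cos for each radar azimuth (measured clockwise from
    north) so per-gate coordinate transforms avoid repeated trig calls."""
    step = 2 * math.pi / num_azimuths
    return ([math.sin(i * step) for i in range(num_azimuths)],
            [math.cos(i * step) for i in range(num_azimuths)])

def polar_to_xy(az_index, gate_range, sin_t, cos_t):
    """Return (x east, y north) of one gate at distance gate_range
    on azimuth az_index, using the precomputed tables."""
    return gate_range * sin_t[az_index], gate_range * cos_t[az_index]
```

For a full volume scan of hundreds of thousands of gates, replacing two trig evaluations per gate with two table lookups accounts for the bulk of the transform's cost.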
    Real-time evaluation system of rainstorm risk degree based on GIS for Guangxi
    CHEN Chaoquan WANG Zhengfeng UANG Zhaomin LI Li MENG Cuili HE Li
    2013, 33(01):  276-280.  DOI: 10.3724/SP.J.1087.2013.00276
    Abstract | PDF (863KB)
    Concerning the lack of refined and quantitative real-time evaluation of rainstorm disaster risk levels, this paper used meteorological data, historical disaster data, and the elevation and distance from the sea of Guangxi, and established the identification technique and data-sequence building method for the hazard-formative factors of rainstorms in Guangxi, based on the hazard-bearing body, the hazard-formative environment, hazard-formative factors, and anti-disaster capability. A real-time evaluation model and grade index of rainstorm disaster risk level, based on risk, environmental fragility, vulnerability, and anti-disaster capability, were constructed for different hazard-bearing bodies such as agriculture and the social economy. A real-time rainstorm risk level evaluation system was then developed with the real-time evaluation model as its core. By using Geographic Information System (GIS) secondary development techniques, the operating process of real-time rainstorm risk level evaluation was simplified and standardized. The proposed system was used to evaluate the violent typhoon Nesat of September 29, 2011, and the evaluation results are consistent with the actual disaster conditions.
    Staff evacuation simulation of different staff distribution in high-speed rail compartment
    HU Xiaohui TIAN Qiyuan CHEN Yong LI Xin
    2013, 33(01):  281-284.  DOI: 10.3724/SP.J.1087.2013.00281
    Abstract | PDF (795KB)
    Based on cellular automata theory, and aiming at the safe evacuation of passengers inside a high-speed rail compartment, the authors proposed a method that considers individual differences among passengers and dynamically adjusts their behavior to model the evacuation process under multiple speeds, and studied it by computer simulation. The experimental results show that, with respect to the distribution of individual differences, evacuation under the strong-ordinary-weak distribution is two time steps faster than under the weak-ordinary-strong distribution; meanwhile, when disaster points appear at different locations inside the compartment, the evacuation times differ considerably. Simulating different personnel distributions and disaster points models emergency evacuation more realistically and provides theoretical guidance for the safe evacuation problem.
    Optimization design of multi-commodity logistics network based on variational inequalities
    PENG Yongtao ZHANG Jin LI Yanlai
    2013, 33(01):  285-290.  DOI: 10.3724/SP.J.1087.2013.00285
    Abstract | PDF (906KB)
    To design a multi-level, multi-commodity flow logistics network, the network was divided according to its status into a static network and a dynamic network. This paper analyzed the infrastructure construction of the static network and the logistics activities of the dynamic network, and constructed operating cost and construction cost functions describing the different stages of the network. Considering the environmental pollution caused by the operating process, a management cost function was also constructed. Based on these functions, general logistics network design and re-design optimization models were proposed, with supply capacity as the constraint and minimization of the total cost as the goal. The two optimization models were converted into variational inequalities, which were solved and verified by the modified projection method, yielding the facility construction program and logistics organization program at optimal cost.
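The modified projection (extragradient) method referred to above alternates a predictor projection using F at the current point with a corrector projection using F at the predicted point. The following one-dimensional sketch, with an assumed box constraint and cost mapping, shows the iteration shape; the paper's multi-commodity VI is of course much larger.

```python
def project(x, lo, hi):
    """Euclidean projection onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def solve_vi(F, lo, hi, x0=0.0, tau=0.5, iters=200):
    """Extragradient (modified projection) method for the variational
    inequality VI(F, [lo, hi]): find x* with F(x*) * (x - x*) >= 0 for all
    feasible x. Predictor uses F(x); corrector re-projects with F(y)."""
    x = x0
    for _ in range(iters):
        y = project(x - tau * F(x), lo, hi)   # predictor step
        x = project(x - tau * F(y), lo, hi)   # corrector step
    return x
```

For a monotone F and a small enough step `tau`, the iterates converge to the VI solution, which in the logistics model corresponds to the cost-minimizing flow pattern.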
    Check valve's fault detection with wavelet packet's kernel principal component analysis
    TIAN Ning FAN Yugang WU Jiande HUANG Guoyong WANG Xiaodong
    2013, 33(01):  291-294.  DOI: 10.3724/SP.J.1087.2013.00291
    Abstract | PDF (599KB)
    The high-pressure piston diaphragm pump is the most important power source in pipeline transportation. To solve the problem of online monitoring of its internal check valve faults, the authors put forward a detection method based on the wavelet packet frequency bands of acoustic emission signals and Kernel Principal Component Analysis (KPCA). Firstly, wavelet packet decomposition was applied to the acoustic emission data to obtain the energy value of each frequency band. Secondly, KPCA was used to decompose the energy in a high-dimensional space to build the feature model, and the SPE and T² statistics of the feature model were used to detect fault signals. Finally, experiments were conducted on the acoustic emission statistics of a GEHO diaphragm pump's check valve. Compared with the PCA method, the proposed method can monitor internal valve faults online quickly and accurately, so it has a good application prospect in the non-destructive fault detection of high-pressure piston diaphragm pumps.
    Rate-distortion analysis for quantizing compressive sensing
    ZHANG Xukun MA Shexiang
    2013, 33(01):  295-298.  DOI: 10.3724/SP.J.1087.2013.00295
    Abstract | PDF (598KB)
    Recent studies in Compressive Sensing (CS) have shown that sparse signals can be recovered from a small number of random measurements, which raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. To examine the influence of quantization errors, the average distortion introduced by quantizing compressive sensing measurements was studied using rate-distortion theory, considering both uniform and non-uniform quantization. The asymptotic rate-distortion functions were obtained for signals recovered from quantized measurements using different reconstruction algorithms. Both theoretical and experimental results show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal, but compressive sensing is able to exploit sparsity to reduce the distortion, so quantized compressive sampling is suitable for encoding signals of low sparsity.
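The scalar quantization at the heart of the analysis is easy to reproduce empirically: for a uniform quantizer with step size Δ applied to well-spread inputs, the mean squared error is approximately Δ²/12, so each extra bit of rate (halving Δ) cuts the distortion by a factor of four. This is a generic illustration of that rate-distortion tradeoff, not the paper's derivation.

```python
import random

def uniform_quantize(x, step):
    """Midtread uniform scalar quantizer: snap x to the nearest multiple of step."""
    return step * round(x / step)

def mse(samples, step):
    """Empirical mean squared quantization error over a list of samples."""
    return sum((x - uniform_quantize(x, step)) ** 2 for x in samples) / len(samples)
```

Measuring `mse` at steps 0.1 and 0.05 on uniform random inputs shows the expected roughly fourfold drop in distortion per added bit.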