
Table of Contents

    01 April 2013, Volume 33 Issue 04
    Network and communications
    Adaptive cache management strategy with node forwarding ability estimation
    WU Dapeng BAI Na WANG Ruyan
    2013, 33(04):  901-904.  DOI: 10.3724/SP.J.1087.2013.00901
    In an opportunistic network, nodes communicate in a store-carry-forward manner, so messages have to be stored at intermediate nodes for a long time while waiting for a communication opportunity. The limited buffer therefore has to be used sensibly through an effective buffer management strategy. Based on a method for estimating node forwarding ability, an adaptive buffer management strategy was proposed. According to the status of message transmission in the network, a node's message forwarding ability was evaluated in combination with the storage time of each message, and the priorities for forwarding and deleting messages were then decided dynamically and adaptively. The results show that the proposed buffer management mechanism can effectively improve the delivery probability and greatly reduce the load ratio.
    Node deployment of wireless sensor network based on glowworm swarm optimization algorithm
    LIU Cuiping ZHANG Haitao BAI Ge
    2013, 33(04):  905-907.  DOI: 10.3724/SP.J.1087.2013.00905
    In order to improve the coverage rate of sensor node deployment, and concerning coverage traps, node redundancy and the lack of re-optimization, a sensor node deployment method based on glowworm swarm optimization was proposed for a known detection area, and the optimization itself was improved. In this algorithm, each sensor node was regarded as a glowworm, and the signal intensity was treated as the luciferin intensity. Firstly, the initial deployment of nodes was performed; then, after calculating the movement probability, the direction of movement was determined; finally, the deployment of the sensor nodes was completed. The simulation results show that this deployment method is suitable for deploying huge numbers of sensor nodes, and has such characteristics as a high coverage rate and strong flexibility.
    Routing algorithm for wireless sensor networks based on improved method of cluster heads selection
    YAO Guangshun WEN Weiming ZHANG Yongding DONG Zaixiu ZHAO Liang
    2013, 33(04):  908-911.  DOI: 10.3724/SP.J.1087.2013.00908
    In order to alleviate the energy hole in wireless sensor networks caused by the energy over-consumption of cluster heads, an improved algorithm was put forward that refines both the selection and the replacement of cluster heads. During cluster head selection, the algorithm divided the network into unequal clusters, selected the nodes with the most residual energy as cluster heads, and had the cluster heads record the changes in node energy. During cluster head replacement, the cluster heads adopted a local replacement strategy and appointed the node with the most residual energy as the next cluster head. In this way the algorithm improved the cluster heads' energy efficiency and balanced the energy consumption among them. Finally, a simulation experiment was carried out, and the results show that the improved algorithm can effectively improve network performance and prolong the network life cycle.
    Energy efficient MAC protocol with power control and collision avoidance for wireless mesh network
    LI Dan GE Zhihui
    2013, 33(04):  912-915.  DOI: 10.3724/SP.J.1087.2013.00912
    In order to improve the low energy utilization efficiency of IEEE 802.11 in wireless mesh networks, a modified low-energy-consumption MAC protocol, Power Control and Collision Avoidance (PCCA), was proposed. Two core algorithms, the Dynamic Power Control Algorithm (DPCA) and the Collision Avoidance Algorithm (CAA), were introduced into IEEE 802.11 to reduce energy consumption. DPCA reduces transmission energy by carefully computing the best transmission power from information collected at the receiving node; CAA takes advantage of the neighbors' communication state table to put potentially colliding nodes into the sleep state to save energy. The simulation experiments show that the PCCA protocol can save up to about 20% of transmission energy.
    Hierarchical, secure routing protocol based on level in wireless sensor network
    ZHOU Xubao PAN Xiaozhong
    2013, 33(04):  916-918.  DOI: 10.3724/SP.J.1087.2013.00916
    Current research on Wireless Sensor Network (WSN) routing protocols pays little attention to the security of the routes, or merely proposes key management algorithms without combining them with a practical network model. Therefore, the authors proposed a secure routing algorithm based on hierarchy and level administration which guarantees the network lifetime as well as its security. In the algorithm, nodes were clustered by level administration, information was transmitted level by level from low to high, and redundancy was reduced by data merging. By combining level administration with Distributed approach to Security in sensornets (DSPS) key management, the energy consumption of key management was greatly reduced, which not only extended the lifetime of the network but also guaranteed its security. Finally, the experimental results in NS2 indicate that this algorithm is suitable for large-scale WSNs, balances node energy consumption and extends the network lifetime.
    Ordering λ-generalized sphere decoding algorithm based on reliability measurement
    LIU Kai XING Shuangshuang
    2013, 33(04):  923-925.  DOI: 10.3724/SP.J.1087.2013.00923
    To solve the rank-deficient problem in underdetermined Multiple-Input Multiple-Output (MIMO) systems, this paper proposed an ordering λ-Generalized Sphere Decoding (λ-GSD) algorithm based on reliability measurement. The proposed algorithm transformed the rank-deficient channel matrix into a full-column-rank one, adopted a new ordering strategy based on reliability measurement, then sorted the sub-optimal values of the Minimum Mean Square Error (MMSE) algorithm in descending order and took the first point as the initial value of the λ-GSD algorithm to reduce the initial search radius. Meanwhile, the shrinking of the radius was accelerated with an exponentially converging schedule. The simulation results indicate that the proposed algorithm can approach the optimal maximum-likelihood decoding performance with a lower average running time than the original λ-GSD algorithm.
    Packet verification based traceback method for IPv6 translation mechanism
    ZHU Tian TIAN Ye MA Di
    2013, 33(04):  926-930.  DOI: 10.3724/SP.J.1087.2013.00926
    IP address security is always a critical Internet security issue. During the transition from IPv4 to IPv6, multiple IP address allocation modes, IPv6 translation techniques and IP spoofing increase the uncertainty of a host's IP address, and a host may have multiple IP addresses, which makes the IP address resource more insecure. Emerging IPv6 home networks, small enterprise networks and campus networks interconnect with the traditional IPv4 network, which is a typical and inevitable IPv6 translation scenario, and traditional IP traceback techniques cannot be applied directly to these scenarios. Therefore, this paper presented a new approach to IP traceback under these scenarios, which is able to go across the gateway that interconnects the IPv4 and IPv6 networks and makes the destination network knowable to the source network. The traceback method helps keep the fundamental Internet address resource safe.
    Data scheduling strategy in P2P streaming system based on improved particle swarm optimization algorithm
    LI Zhenxing LIU Zhuojun
    2013, 33(04):  931-934.  DOI: 10.3724/SP.J.1087.2013.00931
    The data scheduling strategy is a key research issue in Peer-to-Peer (P2P) media streaming systems. In this paper, a Particle Swarm Optimization (PSO) algorithm was modified according to the features of P2P streaming data scheduling, and a digital-encoding string representation for the algorithm was proposed. The data scheduling strategy chose data chunks by taking account of resource urgency and scarcity, and the modified discrete particle swarm algorithm was used to choose peers and obtain the optimal scheduling peer set. In order to verify the feasibility and effectiveness of the algorithm, experiments were done to simulate the convergence of the algorithm, the scheduling time, the P2P network uplink bandwidth utilization and the load balancing of peers.
    Wireless sensor networks data recovery algorithm based on quadratic programming
    WU Guifeng YU Xuan
    2013, 33(04):  935-938.  DOI: 10.3724/SP.J.1087.2013.00935
    To improve the real-time performance of the recovery algorithm for Compressed Sensing (CS) of Wireless Sensor Network (WSN) data, a quadratic-programming-based data recovery algorithm was proposed in this paper. The CS recovery problem was transformed into a bound-constrained quadratic program, and the network data were then recovered by solving this quadratic program with the Armijo rule. The analysis and experimental results demonstrate that the proposed algorithm can significantly reduce the complexity while ensuring the recovery accuracy, thus improving the real-time performance of data recovery in WSN.
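    The abstract does not spell out the quadratic program, but the general recipe (a bound-constrained QP solved by projected gradient with an Armijo step-size rule) can be sketched as follows; the solver and its toy problem are illustrative only, not the paper's formulation.

        import numpy as np

        def solve_box_qp(Q, c, lower, upper, x0, max_iter=500, beta=0.5, sigma=1e-4):
            """Minimize 0.5*x'Qx + c'x subject to lower <= x <= upper
            by projected gradient descent with an Armijo step-size rule."""
            def f(x):
                return 0.5 * x @ Q @ x + c @ x

            def project(x):
                return np.clip(x, lower, upper)

            x = project(x0)
            for _ in range(max_iter):
                g = Q @ x + c                      # gradient of the quadratic
                t = 1.0
                # Armijo rule: shrink the step until sufficient decrease holds
                while True:
                    x_new = project(x - t * g)
                    if f(x_new) <= f(x) + sigma * g @ (x_new - x) or t < 1e-12:
                        break
                    t *= beta
                if np.linalg.norm(x_new - x) < 1e-8:
                    return x_new
                x = x_new
            return x

        # toy example: recover a 3-dimensional vector constrained to [0, 1]^3
        Q = np.array([[2.0, 0.1, 0.0], [0.1, 2.0, 0.1], [0.0, 0.1, 2.0]])
        c = np.array([-1.0, -0.5, -2.5])
        print(solve_box_qp(Q, c, lower=0.0, upper=1.0, x0=np.zeros(3)))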
    Target tracking algorithm of information detection for wireless sensor network
    DING Xiaoyang LI Xiaoyan
    2013, 33(04):  939-942.  DOI: 10.3724/SP.J.1087.2013.00939
    A target tracking and localization algorithm can obtain the current position of a target or predict its next position through communication between cluster heads and the sink, but the information may be corrupted or lost during routing because of out-of-order delivery, packet loss or external attacks, which eventually affects the prediction accuracy. This paper analyzed the error sources in routing and the data characteristics after attacks, and then took effective measures: information detection was performed after the sink received the data, in order to eliminate abnormal information and prevent interference from erroneous data. The simulation results show that, under adverse conditions, the positioning accuracy and tracking trajectory obtained with the information detection mechanism are better than in the general case.
    Design and implementation of UDP-based terminal adaptive protocol
    WANG Bin CHEN Hongmei ZHANG Baoping
    2013, 33(04):  943-946.  DOI: 10.3724/SP.J.1087.2013.00943
    Aiming at the terminal performance bottleneck in current data transfer processes, a UDP-based terminal adaptive protocol was proposed. After analyzing and comparing the many factors that affect terminal performance, the protocol takes both the previous and the current packet loss ratio as congestion detection parameters. It employs various rate adaptation methods, such as a finite loop counter and a process scheduling function, to balance performance differences in real time and ensure reliable and fast data transfer. Compared with the traditional idle Automatic Repeat reQuest (ARQ) method, the average delay is reduced by more than 25%. The experimental results show that the proposed protocol responds quickly with strong real-time performance; it is compatible with large data transfers and especially suitable for small data transfers in engineering applications.
    Self-similar traffic discrimination and generating methods based on fractal Brown motion
    ZHANG Xueyuan WANG Yonggang ZHANG Qiong
    2013, 33(04):  947-949.  DOI: 10.3724/SP.J.1087.2013.00947
    To deal with the lack of a discrimination method for network traffic self-similarity and the generation of negative traffic by the classical Fractal Brown Motion (FBM) model, a discrimination method based on multiple-order moments and a generation method based on a modified FBM model were proposed. Firstly, the mathematical formula of the sample moments was studied, and a discrimination method for self-similar traffic was obtained from fractal moment analysis. Secondly, the classical Random Midpoint Displacement (RMD) algorithm was modified. Finally, the discrimination and generation methods were applied to the real Bellcore and LBL traffic. The comparison of the simulation results with the actual experimental data proves that the method is feasible.
    Data transmission method based on hierarchical network coding
    PU Baoxing YANG Sheng
    2013, 33(04):  950-952.  DOI: 10.3724/SP.J.1087.2013.00950
    In order to reduce the size of the finite field GF(2^n) needed for the encoding computation at intermediate network nodes, a data transmission method based on hierarchical network coding was proposed in this paper. Focusing on single-source multicast networks with a backbone/sub-network structure, the authors decoded at the node connecting the backbone network and the sub-network, and the decoded information was then multicast to the sub-network with the network-coding data transmission method. The theoretical analysis and the simulation results show that this method can reduce the size of the finite field GF(2^n) and hence the computation delay of data transmission, while making full use of the network capacity.
    Artificial intelligence
    Artificial immune algorithm based on intelligence complementary strategy
    ZHANG Liwei YUAN Jinsha
    2013, 33(04):  953-956.  DOI: 10.3724/SP.J.1087.2013.00953
    The self-organizing Antibody Network (soAbNet) retains redundant antibodies after training and its performance is unstable. In order to improve the performance of soAbNet, a hybrid immune diagnosis method based on an intelligence complementary strategy was proposed. An immune operator consisting of two components, vaccination and immunoselection, was introduced into soAbNet: vaccines obtained through the K-means algorithm were taken as the initial antibodies, and the immune network architecture was optimized by immunoselection. The experimental results on the Iris dataset demonstrate that the proposed hybrid immune algorithm makes sufficient use of prior knowledge and learns the data characteristics effectively, and that its diagnostic accuracy and data enrichment rate are higher than those of soAbNet.
    Human behavior recognition based on stratified fractal conditional random field
    WANG Kejun LV Zhuowen SUN Guozhen YAN Tao
    2013, 33(04):  957-959.  DOI: 10.3724/SP.J.1087.2013.00957
    In view of the real-time issue of the Hidden Conditional Random Field (HCRF) and the mark deviation problem of the Latent-Dynamic Conditional Random Field (LDCRF) during behavior transitions, this article proposed a behavior recognition algorithm based on a Stratified Fractal Conditional Random Field (SFCRF). The proposed algorithm improved LDCRF and put forward the concept of a score mark, which makes the integrity and direction of human behavior explicit. The experimental results show that the proposed algorithm achieves a better recognition effect than the Conditional Random Field (CRF), HCRF and LDCRF.
    Fuzzy iterative learning control based on genetic algorithm
    HAO Xiaohong JIN Yarong MA Yu LI Hengjie
    2013, 33(04):  960-963.  DOI: 10.3724/SP.J.1087.2013.00960
    In order to improve the control precision and speed up the convergence rate of the controlled system, a fuzzy PD-type iterative learning control algorithm based on a genetic algorithm was put forward. In the proposed approach, the iterative learning controller was designed with a fuzzy Takagi-Sugeno-Kang (TSK) system, the parameters of the fuzzy TSK system were calculated by the genetic algorithm, and an appropriate updating law was then created. The resulting iterative learning control algorithm was compared with PD iterative learning control and fuzzy PID iterative learning control, and was then applied to a two-joint manipulator simulation. The simulation results show the effectiveness of the proposed algorithm.
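    For reference, a discrete-time PD-type iterative learning update law of the kind such controllers build on has the general form (notation ours; the paper's fuzzy updating law is not reproduced here):

        u_{k+1}(t) = u_k(t) + K_p\, e_k(t) + K_d\,\bigl(e_k(t) - e_k(t-1)\bigr), \qquad e_k(t) = y_d(t) - y_k(t)

    where u_k is the control input at iteration k, e_k the tracking error and y_d the desired trajectory; in the proposed approach the role of fixed gains K_p and K_d is presumably played by the fuzzy TSK system tuned by the genetic algorithm.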
    Fruit fly optimization algorithm based on bacterial chemotaxis
    HAN Junying LIU Chengzhong
    2013, 33(04):  964-966.  DOI: 10.3724/SP.J.1087.2013.00964
    In this paper, the attraction and exclusion operations of bacterial chemotaxis were introduced into the original Fruit Fly Optimization Algorithm (FOA), and an FOA Based on Bacterial Chemotaxis (BCFOA) was proposed. Whether to perform exclusion (escaping from the worst individual) or attraction (being attracted by the best individual) was decided by judging whether the fitness variance is zero, which solves the premature convergence caused by the loss of population diversity that results from individuals only being attracted by the best one in FOA. The experimental results show that the new algorithm has better global search ability and faster, more precise convergence.
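    A minimal sketch of the decision rule described above, attraction toward the best individual in the normal case and exclusion away from the worst individual once the fitness variance drops to zero, is given below; array shapes, step sizes and the noise term are illustrative, not the paper's.

        import numpy as np

        def chemotaxis_step(positions, fitness, step=0.1, rng=np.random):
            """One swarm update: attraction to the best individual normally,
            exclusion (escape from the worst) when diversity is lost."""
            best = positions[np.argmin(fitness)]    # assume minimization
            worst = positions[np.argmax(fitness)]
            if np.var(fitness) == 0.0:              # diversity lost -> exclusion
                direction = positions - worst
            else:                                   # normal case -> attraction
                direction = best - positions
            noise = rng.uniform(-1, 1, positions.shape)
            return positions + step * direction + step * noise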
    Fuzzy multi-objective group decision making based on interval-valued intuitionistic fuzzy set
    WANG Huiying ZHANG Chaokun DONG Dong
    2013, 33(04):  967-970.  DOI: 10.3724/SP.J.1087.2013.00967
    In order to improve the accuracy of data decision making, an optimization approach was proposed for multi-objective group decision making. Using interval-valued intuitionistic fuzzy set theory, the optimization was carried out by gradual iterative computation for the common situation where part of the weight information is incomplete. The simulation results indicate that the algorithm has low time complexity and can easily be implemented on a computer, and they also show its effectiveness and accuracy.
    Promotion-pricing decision and endogenous timing in a supply chain
    LIU Jun TAN Deqing
    2013, 33(04):  971-975.  DOI: 10.3724/SP.J.1087.2013.00971
    To derive the endogenous timing in a supply chain, a promotion-pricing game model was established for a supply chain with two manufacturers and one retailer. The effects of product substitutability and promotional efficiency on promotion-pricing strategies and endogenous timing were analyzed, and the effect of cost difference on member decisions and endogenous timing was explored through numerical simulations. It is found that the level of supply chain coordination improves as the promotional efficiency of the famous brand increases. The overall endogenous timing cannot be changed by the cost difference; only a regional area is influenced. If a researcher arbitrarily assumes the action timing of the participants in the game, wrong conclusions may be drawn.
    Order acceptance policy in Make-to-Order manufacturing based on average-reward reinforcement learning
    HAO Juan YU Jianjun ZHOU Wenhui
    2013, 33(04):  976-979.  DOI: 10.3724/SP.J.1087.2013.00976
    From the perspective of revenue management, a new approach to order acceptance under uncertainty in Make-to-Order (MTO) manufacturing using average-reward reinforcement learning was proposed. In order to maximize the average expected revenue, the proposed approach took order types and different combinations of price and lead time as criteria for classifying the system states, based on a multi-level pricing mechanism. The simulation results show that the proposed algorithm can learn to accept orders selectively. Comparisons with other order acceptance policies show the effectiveness of the proposed algorithm in terms of average revenue, accepted order types and adaptability.
    Application of cultural algorithm in cross-docking scheduling
    MAO Daoxiao XU Kelin ZHANG Zhiying
    2013, 33(04):  980-983.  DOI: 10.3724/SP.J.1087.2013.00980
    This paper studied the operational scheduling problem in a cross-docking center with a single receiving door, a single shipping door and finite temporary storage. A dynamic programming model was built with the objective of minimizing the costs of additional handling, temporary storage and truck replacement. A cultural algorithm with a two-layer evolutionary mechanism was proposed to solve the problem: the population space evolved with a genetic algorithm, and the belief space received good individuals from the population space to form knowledge that in turn guided the evolution. Numerical experiments on small- and large-scale instances prove the validity of the proposed cultural algorithm.
    Chinese cross document co-reference resolution based on SVM classification and semantics
    ZHAO Zhiwei GU Jinghang HU Yanan QIAN Longhua ZHOU Guodong
    2013, 33(04):  984-987.  DOI: 10.3724/SP.J.1087.2013.00984
    The task of Cross-Document Co-reference Resolution (CDCR) is to merge words distributed in different texts that refer to the same entity into co-reference chains. Traditional research on CDCR addresses the name disambiguation posed in information retrieval using clustering methods. This paper instead cast CDCR as a classification problem, using a Support Vector Machine (SVM) classifier to resolve both name disambiguation and variant consolidation, both of which are prevalent in information extraction. This method can effectively integrate various features, such as morphological, phonetic and semantic knowledge collected from the corpus and the Internet. The experiment on a Chinese cross-document co-reference corpus shows that the classification method outperforms clustering methods in both precision and recall.
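    As a sketch of the classification formulation (not the paper's actual feature set), a mention pair can be represented by similarity features and fed to an off-the-shelf SVM; the feature values below are placeholders.

        from sklearn.svm import SVC

        # Each training instance is a pair of entity mentions from different documents,
        # represented by similarity features (e.g. name-string, phonetic and semantic
        # similarity); the concrete numbers below are placeholders, not the paper's.
        X_train = [
            [0.9, 0.8, 0.7],   # coreferent pair
            [0.2, 0.1, 0.3],   # non-coreferent pair
            [0.85, 0.9, 0.6],
            [0.1, 0.3, 0.2],
        ]
        y_train = [1, 0, 1, 0]     # 1 = same entity, 0 = different entities

        clf = SVC(kernel="rbf").fit(X_train, y_train)

        # Pairs predicted as coreferent are then linked into cross-document chains,
        # e.g. by taking connected components over the positive predictions.
        print(clf.predict([[0.8, 0.75, 0.65]]))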
    Exploratory text mining algorithm based on high-dimensional clustering
    ZHANG Aike FU Baolong
    2013, 33(04):  988-990.  DOI: 10.3724/SP.J.1087.2013.00988
    Because of the unstructured nature of free text, text mining has become an important branch of data mining, and many text mining algorithms have emerged in recent years. In this paper, an exploratory text mining algorithm based on high-dimensional clustering was proposed. The algorithm requires only a small number of iterations to produce favorable clusters from very large texts. Mapping to other recorded data and relating the text to user groups allows the result of the algorithm to be improved further. The feasibility and validity of the proposed method are verified by tests on related data and analysis of the experimental results.
    Enhanced clustering algorithm based on fuzzy C-means and support vector machine
    HU Lei NIU Qinzhou CHEN Yan
    2013, 33(04):  991-993.  DOI: 10.3724/SP.J.1087.2013.00991
    To improve the accuracy and efficiency of clustering, this paper proposed an enhanced algorithm based on Fuzzy C-Means (FCM) and Support Vector Machine (SVM). The data sets were first clustered into c clusters by FCM and then classified in detail by SVM, with a cascade SVM model based on a fully binary decision tree constructed to enhance the clustering. In order to solve the imbalance problem that arises when constructing new features, the idea of dividing a data set to eliminate the adverse effect was put forward. Several related algorithms were compared on the Iris data set, and the experimental results show that the algorithm can improve the precision, save system resources and enhance the efficiency of clustering.
    Study on construction of Fisher-kernel-based mixed kernel
    FANG Wangang ZHU Jiagang LU Xiao
    2013, 33(04):  994-997.  DOI: 10.3724/SP.J.1087.2013.00994
    To address the excessive time consumed by parameter selection in mixed kernels composed of traditional kernels, a method of constructing a mixed kernel based on the Fisher kernel was proposed. Because the Fisher kernel is parameter-free, the number of parameters in the Fisher-kernel-based mixed kernel is effectively reduced, and so is the time spent selecting parameter values. The experimental results on typical color face databases show that, compared with a traditional mixed kernel, the parameter selection time of the Fisher-kernel-based mixed kernel is significantly reduced and the recognition rate is improved, which confirms the effectiveness of the proposed method.
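    For reference, the Fisher kernel that the mixed kernel builds on is standardly defined from a generative model P(x|θ) as

        K(x_i, x_j) = g_{x_i}^{\top} F^{-1} g_{x_j}, \qquad g_x = \nabla_{\theta} \log P(x \mid \theta), \qquad F = \mathbb{E}_x\bigl[ g_x g_x^{\top} \bigr]

    with g_x the Fisher score and F the Fisher information matrix; since the kernel is fixed once the generative model is fitted, it contributes no extra kernel parameters to tune, which is the parameter-free property the abstract relies on.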
    Margin maximizing hyperplanes based enhanced feature extraction algorithm
    HOU Yong ZHENG Xuefeng
    2013, 33(04):  998-1000.  DOI: 10.3724/SP.J.1087.2013.00998
    Kernel Principal Component Analysis (KPCA) and the Multi-Layer Perceptron (MLP) neural network are popular feature extraction algorithms, but they are inefficient and easily fall into local optima. This paper proposed a new feature extraction algorithm, the margin-maximizing-hyperplanes-based Enhanced Feature Extraction algorithm (EFE), which overcomes these problems. The proposed EFE algorithm maps the input samples to the subspace spanned by the normals of pairwise orthogonal margin-maximizing hyperplanes, and is independent of the probability distribution of the input samples. The feature extraction experiments on the real-world data sets wine and AR show that EFE surpasses KPCA and MLP in both implementation efficiency and recognition accuracy.
    Deterministic prediction of wavelet neural network model and its application
    PAN Yumin DENG Yonghong ZHANG Quanzhu
    2013, 33(04):  1001-1005.  DOI: 10.3724/SP.J.1087.2013.01001
    Concerning the random prediction results of neural network models, a compact wavelet neural network was constructed. The method moved the wavelet function into the hidden layer of a Back-Propagation (BP) network and fixed the random state with a deterministic-state command so as to obtain definite prediction results. Compared with a wavelet neural network realized by programming and with the BP network, this method is suitable for training on massive data, has strong adaptability and robustness to data samples, adapts particularly well to high-frequency stochastic time series, and produces deterministic prediction results with strong practicability. It can obviously improve the training speed, prediction accuracy and prediction efficiency of the model. Its efficiency has been proved by a gas emission prediction experiment using wavelet packet transformation and the wavelet neural network.
    Network and distributed technology
    Two-step task scheduling strategy for scientific workflow on cloud computing platform
    YAN Ge YU Jiong YANG Xingyao
    2013, 33(04):  1006-1009.  DOI: 10.3724/SP.J.1087.2013.01006
    Based on research and analysis of existing task scheduling strategies for scientific workflows in the cloud environment, a two-step task scheduling strategy was proposed. The strategy aims to eliminate or alleviate the idle-resource phenomenon of the Heterogeneous Earliest Finish Time (HEFT) algorithm and the SHEFT algorithm. Derived from the SHEFT algorithm and adapted to the characteristics of the cloud computing environment, it makes the most of resource idle time and obtains the minimum makespan. The experiments and performance analysis show that the scheduling strategy brings a significant improvement in workflow makespan and resource utilization.
    Data-modeling and implementation for massive construction project data based on manageable entity-oriented object
    LI Chenghua JIANG Xiaoping XIANG Wen LI Bin
    2013, 33(04):  1010-1014.  DOI: 10.3724/SP.J.1087.2013.01010
    To meet the requirement of building a Project Information Portal (PIP) data center on a unified data model, a manageable-entity object-oriented data model was proposed. Project data were treated as a series of manageable entities based on the management workflows decomposed from the whole life cycle, and the conceptual-layer data model was designed so that project data could be represented and recorded naturally. A data organization method based on MongoDB (a document-oriented database technology) was presented, and the cluster storage architecture for the PIP was also addressed. The experiments show that the approach performs efficiently in data writing and querying, and offers high availability and storage capacity scalability.
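    As a rough illustration of the document-oriented storage described above (database, collection and field names are made up for the example, not taken from the paper), a manageable entity can be stored and queried with MongoDB as follows.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client["pip_data_center"]            # database / collection names are illustrative

        # A "manageable entity" is stored as one self-describing document, so entities
        # produced by different management workflows need no fixed relational schema.
        entity = {
            "entity_type": "quality_inspection",
            "project_id": "PRJ-001",
            "lifecycle_phase": "construction",
            "payload": {"inspector": "...", "result": "passed"},
        }
        db.entities.insert_one(entity)

        # Query all entities of one project in a given lifecycle phase.
        for doc in db.entities.find({"project_id": "PRJ-001", "lifecycle_phase": "construction"}):
            print(doc["entity_type"])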
    Object-oriented full-time domain moving object data model
    LUO Jianping WU Qunyong ZHU Li
    2013, 33(04):  1015-1017.  DOI: 10.3724/SP.J.1087.2013.01015
    An object-oriented moving object data model supporting full-time-domain data storage and query was put forward, which added dynamic attributes to the object-oriented model. The influence of Global Positioning System (GPS) positioning accuracy and direction on moving object location updating was discussed, and a new dynamic-threshold location updating strategy based on positioning accuracy, speed and direction was constructed, thereby solving the storage and query of moving objects over the full time domain. Finally, an experiment on the new moving object data model was carried out. The results show that the dynamic-threshold location updating strategy can effectively reduce the frequency of location updates, save data transmission traffic, and reduce the amount of data stored without affecting the precision of the moving objects' trajectories.
    Least cache value replacement algorithm
    LIU Lei XIONG Xiaopeng
    2013, 33(04):  1018-1022.  DOI: 10.3724/SP.J.1087.2013.01018
    In order to improve cache performance for search applications, this paper proposed a new replacement algorithm, the Least Cache Value (LCV) algorithm. The algorithm takes into account both the access frequency and the size of each object: the set of cached objects contributing least to the Byte Hit Ratio (BHR) is replaced first. The selection of the optimal replacement set was formulated as a classical 0-1 knapsack problem, and a rapid approximate solution together with the algorithm's data structures was given. The experiments prove that the LCV algorithm outperforms LRU (Least Recently Used), FIFO (First-In First-Out) and GD-Size (Greedy Dual-Size) in increasing the BHR and reducing the Average Latency Time (ALT).
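    A minimal sketch of the eviction step is shown below. The paper's exact cache-value definition and data structures are not reproduced here; instead, the value of an object is read as its expected byte-hit contribution (frequency × size), and the 0-1 knapsack selection is approximated by the usual greedy value-per-byte rule.

        def evict_for(cache, need_bytes):
            """Free at least `need_bytes` by removing objects whose loss hurts the
            byte hit ratio least.  `cache` maps object id -> (size_bytes, access_freq).
            The optimal eviction set is a 0-1 knapsack choice; here it is approximated
            greedily by value per byte, which for value = freq * size reduces to
            evicting the coldest (least frequently accessed) objects first."""
            victims = sorted(cache.items(), key=lambda kv: kv[1][1])
            freed, evicted = 0, []
            for obj_id, (size, _freq) in victims:
                if freed >= need_bytes:
                    break
                evicted.append(obj_id)
                freed += size
            for obj_id in evicted:
                del cache[obj_id]
            return evicted

        cache = {"a": (100, 5), "b": (400, 1), "c": (200, 3)}
        print(evict_for(cache, 300))   # -> ['b']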
    Parallel K-Medoids algorithm based on MapReduce
    ZHANG Xueping GONG Kangli ZHAO Guangcai
    2013, 33(04):  1023-1025.  DOI: 10.3724/SP.J.1087.2013.01023
    To overcome the memory capacity and CPU speed bottlenecks that the traditional K-Medoids clustering algorithm meets on massive data, and based on an in-depth study of K-Medoids, a parallel K-Medoids algorithm based on the MapReduce programming model was proposed. The Map function calculates the distance of each data object to the cluster medoids and (re)assigns it to its cluster, while the Reduce function calculates the new medoid of each cluster from the intermediate results of the Map phase. The experimental results show that the parallel K-Medoids algorithm running on a Hadoop cluster achieves good clustering results and scalability, and obtains close to linear speedup on large data sets.
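    The two phases can be sketched in plain Python (simple stand-ins for the Hadoop Map and Reduce tasks; the data, initial medoids and Euclidean metric are illustrative):

        import numpy as np

        def map_phase(points, medoids):
            """Map: assign every point to its nearest medoid; emit (medoid_index, point)."""
            for p in points:
                k = int(np.argmin([np.linalg.norm(p - m) for m in medoids]))
                yield k, p

        def reduce_phase(assigned):
            """Reduce: for each cluster, pick as new medoid the member that minimizes
            the total distance to all other members of that cluster."""
            new_medoids = []
            for k in sorted(assigned):
                members = np.array(assigned[k])
                costs = [np.sum(np.linalg.norm(members - m, axis=1)) for m in members]
                new_medoids.append(members[int(np.argmin(costs))])
            return new_medoids

        def kmedoids(points, medoids, iters=10):
            points = np.asarray(points, dtype=float)
            medoids = [np.asarray(m, dtype=float) for m in medoids]
            for _ in range(iters):
                assigned = {}
                for k, p in map_phase(points, medoids):      # shuffle/group by key
                    assigned.setdefault(k, []).append(p)
                medoids = reduce_phase(assigned)
            return medoids

        print(kmedoids([[0, 0], [0, 1], [5, 5], [6, 5]], medoids=[[0, 0], [5, 5]]))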
    Design and implementation of distributed retrieval system for electronic products information
    ZHANG Yuanyuan ZHANG Qinyan JIANG Guanfu
    2013, 33(04):  1026-1030.  DOI: 10.3724/SP.J.1087.2013.01026
    In order to obtain useful information that satisfies user requirements, this paper proposed a distributed information retrieval system based on Hadoop and Lucene for Web electronic product information. To improve retrieval efficiency, the Map and Reduce functions of Hadoop were used to implement the storage of distributed index files, and Lucene was used to implement access to them. An improved method at the fine-grained retrieval level was also proposed, which reduces the index building time. The experiments demonstrate that the distributed information retrieval system achieves good retrieval performance for Web electronic product information.
    Multi-quantum states quantum-inspired evolutionary algorithm for layout optimization problem and its application
    MAI Jiahui XIAO Renbin
    2013, 33(04):  1031-1035.  DOI: 10.3724/SP.J.1087.2013.01031
    To overcome the prematurity of evolutionary algorithms on the equilibrium-constrained circle packing problem, a Multi-Quantum States Quantum-Inspired Evolutionary Algorithm (MQSQIEA) was proposed, which helps keep the population diverse and is combined with heuristics based on an order-based positioning technique. The layout sequence was optimized efficiently by MQSQIEA. The solving speed was improved by multi-quantum-state coding and a convergence criterion based on the average convergence probability. An observation method based on a taboo strategy and heuristic information was introduced to obtain n-ary solutions with distinct integers and to ensure that circles with large mass and large radius are placed first. A dynamic quantum evolutionary strategy was applied to guide the population towards the best individual, and a positioning probability function introduced into the positioning rule improved the solution quality. The numerical experimental results show that the proposed method can effectively solve the circle packing problem with equilibrium constraints.
    Second-extraconnectivity of 3-ary n-cube networks
    ZHAO Yuanqing JIN Xianhua
    2013, 33(04):  1036-1038.  DOI: 10.3724/SP.J.1087.2013.01036
    To evaluate the connectivity of parallel and distributed computer systems which take 3-ary n-cubes as underlying topologies, by constructing the extra 2-cut of 3-ary n-cubes, the second-extraconnectivity of 3-ary n-cubes was proved to be equal to 6n-7 for an arbitrary integer n no less than 2. The result shows that for any two nodes of the parallel and distributed system which take the 3-ary n-cube as underlying topology, there is at least a fault-free path connecting them if the number of the faulty nodes in the system does not exceed 6n-8 and each connected branch still has at least three healthy nodes.
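    Writing Q_n^3 for the 3-ary n-cube and κ_2 for the second-extraconnectivity, the result stated in the abstract reads

        \kappa_2(Q_n^3) = 6n - 7 \quad (n \ge 2),

    i.e. no set of at most 6n-8 faulty nodes whose removal leaves every connected branch with at least three healthy nodes can disconnect the surviving network.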
    Efficient algorithm of depth-first stable in-place merge sort
    BAI Yu GUO Xiane
    2013, 33(04):  1039-1042.  DOI: 10.3724/SP.J.1087.2013.01039
    Based on the divide-and-conquer strategy, a depth-first method was used to design a stable in-place merge sort algorithm for linear arrays. Its time complexity is O(n lb n), its auxiliary space complexity is O(1), and the space complexity of its recursion stack is O(lb n); the algorithm analysis and experimental testing were completed. The experimental results show that its efficiency is 67.51% better than that of the stable in-place merge sort algorithm in the STL, which addresses the high time or space complexity of stable sorting algorithms.
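    The paper's O(n lb n) in-place merge itself is not described in the abstract; as a simple illustration of a depth-first, stable merge sort that uses only O(1) auxiliary space, the sketch below merges by rotating elements one at a time (quadratic in the worst case, so far slower than the algorithm above):

        def merge_in_place(a, lo, mid, hi):
            """Stably merge the sorted runs a[lo:mid] and a[mid:hi] using only O(1)
            extra space, by rotating out-of-place right-run elements leftwards."""
            while lo < mid and mid < hi:
                if a[lo] <= a[mid]:
                    lo += 1
                else:
                    a.insert(lo, a.pop(mid))   # rotate a[mid] in front of a[lo]
                    lo += 1
                    mid += 1

        def merge_sort(a, lo=0, hi=None):
            """Depth-first (recursive) stable merge sort of the list slice a[lo:hi]."""
            if hi is None:
                hi = len(a)
            if hi - lo > 1:
                mid = (lo + hi) // 2
                merge_sort(a, lo, mid)
                merge_sort(a, mid, hi)
                merge_in_place(a, lo, mid, hi)

        data = [5, 3, 8, 3, 1, 9, 2]
        merge_sort(data)
        print(data)   # [1, 2, 3, 3, 5, 8, 9]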
    Information security
    Image encryption algorithm based on fractional-order Chen chaotic system
    WANG Yaqing ZHOU Shangbo
    2013, 33(04):  1043-1046.  DOI: 10.3724/SP.J.1087.2013.01043
    In this paper, a new image encryption algorithm was presented based on the fractional-order Chen chaotic system, for fractional-order chaotic dynamical systems have more complex dynamical behaviors than those of integer-order systems and can provide more freedom for image encryption schemes. In the transmitter, the positions of the image pixels were scrambled by the chaotic signal generated by the driving system firstly. Then the disturbed image was embedded into the chaotic signal and the encrypted image for transmission was obtained. In the receiver, the chaotic signal was removed by the synchronization system. Then the inverse process of pixel scrambling was carried out and the original image was recovered. The security of the proposed algorithm was analyzed in the end. The experimental results demonstrate that the encryption algorithm is of high security and has good research value and application prospects.
    Identity-based broadcast encryption scheme against selective opening attack
    GE Yunlong WANG Xu'an PAN Feng
    2013, 33(04):  1047-1050.  DOI: 10.3724/SP.J.1087.2013.01047
    Recently SUN Jin et al. proposed an identity-based broadcast encryption scheme against selective opening attack (SUN JIN, HU YU-PU. Identity-based broadcast encryption scheme against selective opening attack. Journal of Electronics and Information Technology, 2011, 33(12): 2929-2934) and claimed that the scheme resists Selective-Opening Attack (SOA) and has constant-size keys and ciphertexts in the standard model without random tags. However, this paper proved that their proposal cannot work at all. Furthermore, the authors corrected their scheme and then proved its security in the standard model.
    Adaptively-chosen ciphertext secure and publicly verifiable encryption scheme
    DU Weidong YANG Xiaoyuan ZHANG Xianghuo WANG Xu'an
    2013, 33(04):  1051-1054.  DOI: 10.3724/SP.J.1087.2013.01051
    There is a great demand for publicly verifiable encryption in key escrow, optimistic fair exchange, publicly verifiable secret sharing and secure multiparty computation, but the current schemes are either only chosen-plaintext secure or chosen-ciphertext secure in the random oracle model, which is not secure enough for complicated circumstances. Based on an analysis of the current schemes and of practical application requirements, this paper proposed a new publicly verifiable encryption scheme by combining the CS encryption scheme with a non-interactive zero-knowledge proof protocol. The new scheme enables any third party other than the sender and receiver to verify the validity of the ciphertext while leaking no information about the message. Finally, without using the random oracle, the adaptive chosen-ciphertext security of the scheme is proved in the standard model.
    Publicly verifiable signcryption with proxy re-encryption
    LI Haifeng LAN Caihui
    2013, 33(04):  1055-1060.  DOI: 10.3724/SP.J.1087.2013.01055
    Existing SignCryption with Proxy Re-Encryption (SCPRE) schemes only provide non-repudiation, a security notion weaker than security against adaptive chosen-message attack. Since there are two kinds of signcryption texts, the authors gave two unforgeability definitions: first-level unforgeability for signcryption texts produced by the delegator, and second-level unforgeability for signcryption texts produced by the delegatee. A new signcryption with proxy re-encryption scheme was also proposed, in which both the first-level and the second-level signcryption texts are publicly verifiable, and its security can be proved in the random oracle model. The proposed scheme is secure and effective, and is therefore suitable for application areas with high security requirements.
    New blind signature scheme without trusted private key generator
    HE Junjie ZHANG Fan QI Chuanda
    2013, 33(04):  1061-1064.  DOI: 10.3724/SP.J.1087.2013.01061
    In order to eliminate the inherent key escrow problem of identity-based public key cryptosystems, a new identity-based blind signature scheme without a trusted Private Key Generator (PKG) was proposed. Under the random oracle model, the scheme was proved existentially unforgeable against adaptive chosen-message and identity attacks from common attackers or a semi-honest PKG, with the security reduced to the Computational Diffie-Hellman assumption. Against forgery attacks by a malicious PKG, the legitimate signer can prove to an arbitration institution, by means of a tracing algorithm, that the signature was forged.
    Dynamic software watermarking algorithm based on stack-state relations
    XU Jinchao ZENG Guosun
    2013, 33(04):  1065-1069.  DOI: 10.3724/SP.J.1087.2013.01065
    This paper proposed a new dynamic software watermarking algorithm based on stack-state relations in order to overcome the weaknesses of existing software watermarking algorithms. The watermark is hidden in the stack-state relations generated by executing the program, and is extracted by recognizing the relationship of stack states at runtime. The paper introduced the concept of a stack-state transition graph and explained its properties, gave detailed descriptions of the watermark embedding and extraction algorithms, and illustrated them with a specific example. The analysis and experiments show that the algorithm does not affect the host program significantly and can resist various attacks.
    Image encryption algorithm based on chaos and bit operations
    LIU Lepeng ZHANG Xuefeng
    2013, 33(04):  1070-1073.  DOI: 10.3724/SP.J.1087.2013.01070
    In order to improve image encryption effectiveness and security, and based on a study of image encryption algorithms using chaotic systems and bit operations, an improved digital image encryption algorithm was proposed. Firstly, the Logistic map was used to generate chaotic sequences, from which row and column vectors were constructed to scramble the pixel positions. Secondly, another piecewise nonlinear Logistic sequence was used to construct a gray-scale scrambling amplification factor to scramble the image gray levels, and the two processes were performed iteratively. Compared with traditional algorithms, the proposed algorithm enlarges the key space, flattens the gray histogram, weakens the pixel correlation and runs faster. The experimental results show that the improved algorithm has good encryption efficacy and security.
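    The position-scrambling stage can be sketched as follows; the Logistic-map parameters and keys are illustrative, and the gray-scale diffusion stage described in the abstract is omitted.

        import numpy as np

        def logistic_sequence(n, x0=0.3456, mu=3.99):
            """Generate n values of the Logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
            seq = np.empty(n)
            x = x0
            for i in range(n):
                x = mu * x * (1.0 - x)
                seq[i] = x
            return seq

        def scramble(image, key=0.3456):
            """Permute rows and columns with orderings derived from chaotic sequences
            (only the position-scrambling step, not the full encryption algorithm)."""
            h, w = image.shape
            row_perm = np.argsort(logistic_sequence(h, x0=key))
            col_perm = np.argsort(logistic_sequence(w, x0=key / 2))
            return image[row_perm][:, col_perm]

        img = np.arange(16, dtype=np.uint8).reshape(4, 4)
        print(scramble(img))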
    Steganalysis of JPEG images based on bilateral transition probability matrix
    ZHAO Yanli WANG Xing
    2013, 33(04):  1074-1076.  DOI: 10.3724/SP.J.1087.2013.01074
    For typical steganographic algorithms in JPEG images, this paper first analyzed the correlation between neighboring intra-block and inter-block coefficients in the Discrete Cosine Transform (DCT) domain, and then extracted the conditional distribution probability matrix of the two outer coefficients, conditioned on the middle one of three neighboring coefficients, as sensitive steganalytic features. Finally, a JPEG image steganalytic algorithm based on the bilateral transition probability distribution of DCT coefficients was proposed. The experimental results show that, for different embedding ratios, the proposed algorithm outperforms existing algorithms.
    Design of distributed honeypot system based on clustering and data shunting algorithm
    BAI Qing SU Yang
    2013, 33(04):  1077-1080.  DOI: 10.3724/SP.J.1087.2013.01077
    Concerning the lack of activity and the low speed and accuracy of attack recognition in current network security defense systems, this paper proposed a distributed honeypot system. During clustering, an improved cluster-center selection algorithm was used to fuzzily cluster the network data, so that unclassified data are diverted into the honeypots, where their features are learned and new types of attack can be detected as soon as possible. This design not only lightens the monitoring and recording load of the honeypots and lowers the rate at which they are compromised, but also helps adopt a more effective defense strategy. The system can be used in government private networks. The clustering algorithm used in this paper has a higher success rate than common clustering algorithms without obviously increasing the computational load of the system.
    Block encryption algorithm based on chaotic S-box for wireless sensor network
    HE Yuan TIAN Simei
    2013, 33(04):  1081-1084.  DOI: 10.3724/SP.J.1087.2013.01081
    The pros and cons of existing encryption algorithms for Wireless Sensor Networks (WSN) were analyzed, and a block cipher based on a chaotic S-box was proposed for WSN in accordance with its security requirements. Random numbers were generated mainly by exploiting the aperiodicity, unpredictability and related characteristics of chaos, and the corresponding S-box was designed from these random numbers. Finally, the scheme was compared with RC5 and RC6 in terms of statistical performance and energy consumption. The results show that this encryption scheme performs better for WSN.
    Graphics and image technology
    Quantum-inspired image decomposition and edge detection
    XIE Kefu XU Wusheng
    2013, 33(04):  1089-1091.  DOI: 10.3724/SP.J.1087.2013.01089
    A new approach based on quantum theory was proposed for image decomposition. In this approach, an image can be decomposed into a series of characteristic sub-images according to relations between pixels built by means of the superposition of states in quantum mechanics. The characteristics of these sub-images were analyzed, and on this basis a rule for generating an image edge detection operator was given; a new edge detection algorithm was then proposed with this rule. The computer simulation results show the effectiveness of the rule and the superiority of the proposed algorithm.
    Active contour segmentation model of combining global and dual-core local fitting energy
    ZHAO Jie QI Yongmei PAN Zhengyong
    2013, 33(04):  1092-1095.  DOI: 10.3724/SP.J.1087.2013.01092
    As the Region-Scalable Fitting (RSF) model is sensitive to the location of the initial curve, an active contour model combining global and local image information in a variational level set formulation was proposed. The local energy term was defined as a linear combination of the RSF model and our model by taking both domain and range kernel functions into account, which makes up for the defect that sampling weights are related only to spatial distance and improves segmentation accuracy. An adaptive area term was also constructed by introducing a global indicating function as the global intensity fitting force, which speeds up the convergence of the proposed model and avoids trapping in local minima. In the numerical calculation, Gaussian filtering was utilized to regularize the level set function, ensuring its smoothness and eliminating the need for re-initialization. The experimental results show that the proposed model allows flexible initialization and achieves desirable results on images with intensity inhomogeneity.
    Binary projection for image local descriptor
    TANG Peikai CHEN Wei MAI Yicheng
    2013, 33(04):  1096-1099.  DOI: 10.3724/SP.J.1087.2013.01096
    In order to reduce the computational burden while maintaining the recognition rate of image local descriptors, a binary projection method for image local descriptors was proposed. The image patch is projected and transformed into a binary string, which boosts performance and speeds up matching; the projection matrix is optimized by machine learning to maintain recognition rate and robustness. The experimental results indicate that a 32-bit binary string performs as well as state-of-the-art descriptors while matching significantly faster.
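    A minimal sketch of the idea, projecting the patch with a learned matrix, keeping only the signs, and matching by Hamming distance, is given below; the random projection matrix merely stands in for the one the paper learns offline.

        import numpy as np

        def binary_descriptor(patch, proj):
            """Project a flattened image patch and binarize by sign, giving a compact
            binary string for fast Hamming-distance matching."""
            bits = (proj @ patch.ravel()) > 0
            return np.packbits(bits)          # e.g. 32 bits -> 4 bytes

        rng = np.random.default_rng(0)
        proj = rng.standard_normal((32, 64 * 64))       # 32-bit descriptor for a 64x64 patch
        patch = rng.random((64, 64))
        desc = binary_descriptor(patch, proj)

        # matching = Hamming distance between packed descriptors
        other = binary_descriptor(rng.random((64, 64)), proj)
        hamming = np.unpackbits(desc ^ other).sum()
        print(desc, hamming)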
    Improved image Poisson denoising model based on fractional variation
    HU Xuegang LI Yu
    2013, 33(04):  1100-1102.  DOI: 10.3724/SP.J.1087.2013.01100
    An effective Poisson denoising model based on fractional derivatives was proposed to improve the denoising of images corrupted by Poisson noise. The model inherits the noise-removal advantages of the total variation model, and thanks to the amplitude-frequency property of fractional differentiation, it preserves "weak" information such as image details and texture characteristics well. The numerical experimental results demonstrate that the proposed fractional variation method removes noise better than traditional integer-order variation and protects the detail characteristics of image edges.
    Improved image denoising algorithm of Contourlet transform based on gray relational degree
    ZENG Youwei YANG Huixian TANG Fei TAN Zhenghua HE Yali
    2013, 33(04):  1103-1107.  DOI: 10.3724/SP.J.1087.2013.01103
    In order to denoise images more effectively, an improved Contourlet transform denoising algorithm based on gray relational degree was proposed. On the one hand, the Bayes threshold was improved by considering the gray relational degree between scales of the high-frequency and low-frequency sub-bands of the Contourlet transform; on the other hand, the characteristics of the Contourlet coefficients were used to improve the compromise threshold function so as to achieve adaptive denoising. The experimental results show that the proposed algorithm denoises images effectively, obtains higher PSNR and better visual quality, and has good practicability.
    Medical image classification based on scale space multi-feature fusion
    LI Bo CAO Peng LI Wei ZHAO Dazhe
    2013, 33(04):  1108-1111.  DOI: 10.3724/SP.J.1087.2013.01108
    In order to describe different kinds of medical images more consistently and reduce scale sensitivity, a classification model based on scale-space multi-feature fusion was proposed according to the characteristics of medical images. First, a scale space was built by difference of Gaussians, and then complementary features were extracted, such as gray-scale, texture and shape features as well as features extracted in the frequency domain. In addition, maximum likelihood estimation was used to realize decision-level fusion. The scale-space multi-feature fusion classification model was applied to a medical image classification task following the IRMA code. The experimental results show that, compared with traditional methods, the F1 value increases by 5%-20%. The fusion classification model describes medical images more comprehensively, avoids the information loss caused by feature dimension reduction, improves classification accuracy, and has clinical value.
    l2-total variation image restoration based on subspace optimization
    LIU Xiaoguang GAO Xingbao ZHOU Dongmei
    2013, 33(04):  1112-1114.  DOI: 10.3724/SP.J.1087.2013.01112
    The alternating direction method is widely used for total variation image restoration. A correction method was proposed to address the inaccurate search directions of the alternating direction method, which can adversely affect both the efficiency of the algorithm and the quality of the restored images. Combining the Taylor expansion of the energy function with the properties of differentiable functions, this subspace-optimization-based method corrects the current direction with the previous one and improves the accuracy of the search direction. The numerical experiments demonstrate the efficiency of the algorithm and the quality of the restored images in terms of running time and Peak Signal-to-Noise Ratio (PSNR), respectively.
    Point matching based on linear programming with similarity regularization
    ZHAO Yulan LIAN Wei
    2013, 33(04):  1115-1118.  DOI: 10.3724/SP.J.1087.2013.01115
    This paper proposed a linear-programming-based point matching method with similarity regularization to handle non-rigid deformation, positional noise and outliers. Point matching was modeled as an energy minimization problem: shape context was used to reduce the ambiguity of point correspondence, and a similarity transform was used to preserve the continuity of the spatial mapping. The continuously relaxed optimization problem reduces to a linear program whose optimality can be guaranteed. The simulation results verify the effectiveness of the method.
    Surface development of oblique circular cone with SolidWorks re-development
    SONG Yan ZHANG Jingjing CHEN Xiaopeng QIAN Qing
    2013, 33(04):  1119-1121.  DOI: 10.3724/SP.J.1087.2013.01119
    Because the unfolding process of a three-way conical pipe is complex, its development drawing cannot be completed automatically. This paper introduced a general analytic algorithm for the surface development of an oblique circular cone based on the idea of symbolic-graphic combination. The equation of the intersection line of the two cones was established by exploring the mathematical model of the skew intersection relative to the axis of the cone. The intersection lines were created in the SolidWorks interface by means of VB programming and were verified through SolidWorks. After setting up the flattening curve equation, the development drawing was carried out in SolidWorks. Thus, the expansion plan of two cones with arbitrary sizes and skew axes was designed by parametric means on the SolidWorks interface. The unfolding method is reliable and makes automatic development drawing practical. The results and relevant data also reveal that the algorithm has fast speed, high precision and strong universality, and can be used for the expansion mapping of a variety of sheet metal parts to facilitate their manufacture.
    Method of merging face detection windows based on Euclidean distance
    HEI Jianye XIONG Shuhua MA Yali
    2013, 33(04):  1122-1124.  DOI: 10.3724/SP.J.1087.2013.01122
    Abstract ( )   PDF (698KB) ( )
    References | Related Articles | Metrics
    To address the problem that, due to scale transformation in face detection, the position and size of detection windows for the same face cannot be guaranteed to coincide across different results, the window merging method used in face detection based on statistical training was studied, and a merging method based on Euclidean distance was proposed, without considering falsely detected or missed faces. A judging circle and the Euclidean distance were employed to merge face detection windows according to the distribution of their center coordinates. Verification experiments were conducted on different pictures, and the experimental results show that the method is simple and effective.
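    A minimal Python sketch of the center-distance merging idea follows: windows whose centers fall inside a judging circle (its radius taken here as a fraction of the window size, an assumed parameter) are averaged into one window.

        import numpy as np

        def merge_windows(windows, radius_ratio=0.5):
            """Merge detection windows (x, y, w, h) whose centers are close.

            Two windows are grouped when the Euclidean distance between their
            centers is below radius_ratio times the smaller window width.
            """
            windows = [tuple(map(float, w)) for w in windows]
            merged, used = [], [False] * len(windows)
            for i, (xi, yi, wi, hi) in enumerate(windows):
                if used[i]:
                    continue
                group = [(xi, yi, wi, hi)]
                used[i] = True
                ci = np.array([xi + wi / 2, yi + hi / 2])
                for j in range(i + 1, len(windows)):
                    if used[j]:
                        continue
                    xj, yj, wj, hj = windows[j]
                    cj = np.array([xj + wj / 2, yj + hj / 2])
                    if np.linalg.norm(ci - cj) < radius_ratio * min(wi, wj):
                        group.append((xj, yj, wj, hj))
                        used[j] = True
                merged.append(tuple(np.mean(group, axis=0)))  # average the group
            return merged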
    Contrast enhancement of hand vein images based on histogram equalization
    CAI Chaofeng REN Jingying
    2013, 33(04):  1125-1127.  DOI: 10.3724/SP.J.1087.2013.01125
    Abstract ( )   PDF (520KB) ( )
    References | Related Articles | Metrics
    Hand vein images tend to be of low contrast, which affects the recognition accuracy of the whole hand vein recognition system. The effective area of the hand vein image was extracted first, and then the Histogram Equalization (HE) algorithm and its improved versions were employed to enhance the contrast of the extracted hand vein image. The results indicate that Partially Overlapped Sub-block Histogram Equalization (POSHE) can enhance not only the overall contrast but also the local details. Meanwhile, its high efficiency makes it suitable for hand vein image contrast enhancement.
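    For reference, a plain NumPy sketch of global histogram equalization, the baseline that POSHE improves on, is given below; the sub-block variant would apply the same gray-level mapping within partially overlapped sub-blocks.

        import numpy as np

        def equalize_hist(image):
            """Global histogram equalization of an 8-bit gray-scale image."""
            hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
            cdf = hist.cumsum().astype(np.float64)
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
            lut = np.round(cdf * 255).astype(np.uint8)          # gray-level mapping
            return lut[image]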
    Reachability analysis of Petri net based on constraint optimization
    YANG Xia'ni LONG Faning ZHANG Yuanxia
    2013, 33(04):  1128-1131.  DOI: 10.3724/SP.J.1087.2013.01128
    Abstract ( )   PDF (573KB) ( )
    References | Related Articles | Metrics
    The judgment of reachability is one of the fundamental issues in Petri net analysis. This paper analyzed the existing methods and the constraint programming based method for the reachability of Petri nets, and then proposed a judgment method for the reachability problem based on constraint optimization. The method was based on the state equation, separately using constraint programming and optimization to seek a feasible solution and the optimal solution, thereby decreasing the search paths and reducing the solution space of the state equation. Finally, an example was given to prove that the algorithm can improve the determination efficiency.
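    The state-equation step can be sketched in a few lines of Python: reaching marking M from M0 requires a non-negative firing-count vector x with C x = M - M0, where C is the incidence matrix. Solving the LP relaxation with scipy gives a fast necessary-condition check; the constraint-optimization refinement described in the abstract is not reproduced here.

        import numpy as np
        from scipy.optimize import linprog

        def state_equation_feasible(C, m0, m):
            """Necessary reachability test via the Petri net state equation.

            C  -- incidence matrix, shape (places, transitions)
            m0 -- initial marking, m -- target marking (1-D arrays)
            Returns True if C @ x = m - m0 has a non-negative solution x.
            """
            C = np.asarray(C, dtype=float)
            b = np.asarray(m, dtype=float) - np.asarray(m0, dtype=float)
            # Minimize the total firing count subject to the state equation.
            res = linprog(c=np.ones(C.shape[1]), A_eq=C, b_eq=b,
                          bounds=[(0, None)] * C.shape[1], method="highs")
            return res.success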
    Formal analysis approaches of train control system based on Petri nets
    LIU Jiankun SONG Wen ZHOU Tao
    2013, 33(04):  1132-1135.  DOI: 10.3724/SP.J.1087.2013.01132
    Abstract ( )   PDF (789KB) ( )
    References | Related Articles | Metrics
    Formal approaches are construction methods with accurate mathematical semantics, which are based on strict mathematical proofs. Generally, Petri nets are considered as a class of computation models for modeling concurrent behavior, and formal specifications and analysis of a system can be conveniently developed with them. However, it is difficult to model a train control system with prototype Petri nets; the difficulties can be resolved by Petri nets extended with inhibitor arcs. Hence, some key problems of train control systems were modeled and analyzed with such extended Petri nets in this paper. Two control sub-systems, a station management sub-system and an interval operation sub-system, were proposed. The former performed the entering and leaving of trains at stations by cooperative control. The latter executed the safety control of block regions in stations, the safety recovery from emergency situations such as lightning strikes and loss of signals, and the management of railway crossings. Finally, the liveness, reachability and boundedness of the proposed models were analyzed by S-invariants.
    Optimized channel routing algorithms for dynamically adjusting channel with the program size
    HU Kaibao ZHANG Yikun ZHAO Ming
    2013, 33(04):  1136-1138.  DOI: 10.3724/SP.J.1087.2013.01136
    Abstract ( )   PDF (618KB) ( )
    References | Related Articles | Metrics
    To solve the routing confusion of the conventional hierarchical layout algorithm for large-scale programs, this paper proposed an optimized channel routing algorithm based on the Sugiyama hierarchical layout algorithm, which dynamically adjusts the number of channels according to the program size. To address the low efficiency and line overlap, the algorithm built functional relationships between the number of channels and the program size, and used the generalized tensor balance algorithm to reduce crossings and achieve an aesthetic layout. The algorithm also gave the corresponding line distribution and application strategy according to the relative positional relationship between the calling nodes to achieve ordered routing. The experimental results show that the algorithm has higher layout efficiency, reduces crossings effectively, realizes a clear layout, and is easy to implement.
    Path test data generation based on improved artificial fish swarm algorithm
    WANG Peichong QIAN Xu
    2013, 33(04):  1139-1141.  DOI: 10.3724/SP.J.1087.2013.01139
    Abstract ( )   PDF (464KB) ( )
    References | Related Articles | Metrics
    To generate path test data automatically in software testing, a new scheme for searching the solution space based on the Artificial Fish Swarm (AFS) algorithm was proposed. To improve the ability of the original AFS algorithm, chaotic search was introduced to enhance its local search ability and the precision of solutions. Once the AFS algorithm finished an iteration, the chaos algorithm was executed on the global best solution. At the same time, some individuals in bad states were eliminated, and, with the optimal individual contracting the search space, some new individuals were generated randomly. Two kinds of triangle programs were tested, and the results show that the improved AFS algorithm has faster convergence and higher calculation accuracy.
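    The chaotic refinement of the global best individual can be illustrated with the Python sketch below, applied to a distance-style fitness for the classic triangle program; the surrounding artificial fish swarm loop is omitted and all parameters (logistic map, step scale, bounds) are assumptions for illustration.

        import random

        def triangle_type(a, b, c):
            """Classic triangle classification program under test."""
            if a + b <= c or b + c <= a or a + c <= b:
                return "not a triangle"
            if a == b == c:
                return "equilateral"
            if a == b or b == c or a == c:
                return "isosceles"
            return "scalene"

        def fitness(ind):
            """Distance-style fitness: 0 when the target branch (equilateral)
            of the program under test is covered, larger otherwise."""
            a, b, c = ind
            if triangle_type(a, b, c) == "equilateral":
                return 0.0
            return abs(a - b) + abs(b - c)

        def chaotic_local_search(best, lo, hi, steps=50):
            """Refine the global best with a logistic-map chaotic sequence."""
            z = random.uniform(0.01, 0.99)
            best = list(best)
            for _ in range(steps):
                z = 4.0 * z * (1.0 - z)                 # logistic map, mu = 4
                cand = [min(max(x + (z - 0.5) * (hi - lo) * 0.1, lo), hi)
                        for x in best]
                if fitness(cand) < fitness(best):
                    best = cand
            return best

        print(chaotic_local_search([3.0, 5.0, 9.0], lo=1.0, hi=10.0))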
    Fuzzy multi-objective software reliability redundancy allocation based on swarm intelligence algorithm
    HOU Xuemei LIU Wei GAO Fei LI Zhibo WANG Jing
    2013, 33(04):  1142-1145.  DOI: 10.3724/SP.J.1087.2013.01142
    Abstract ( )   PDF (602KB) ( )
    References | Related Articles | Metrics
    A fuzzy multi-objective software reliability allocation model was established, and a bacterial foraging optimization algorithm based on estimation of distribution was proposed to solve the software reliability redundancy allocation problem. In the fuzzy objective function, software reliability and cost were regarded as triangular fuzzy numbers, and the bacterial foraging optimization algorithm based on the Gaussian distribution was applied. By setting different membership function parameters, different Pareto optimal solutions could be obtained. The experimental results show that the proposed swarm intelligence algorithm can solve multi-objective software reliability allocation effectively and correctly, and the Pareto optimal solutions can support the trade-off decision between software reliability and cost.
    Graphics performance optimization method for Wine based on client software rendering
    HUANG Conghui CHEN Jing ZHU Qingchao GUO Weiwu
    2013, 33(04):  1146-1148.  DOI: 10.3724/SP.J.1087.2013.01146
    Abstract ( )   PDF (486KB) ( )
    References | Related Articles | Metrics
    To deal with the performance bottleneck of operating the Device Independent Bitmap (DIB) in Wine, a method of client software rendering was brought forward. This method first analyzed the GDI APIs for operating DIBs, then determined the load point of client software rendering, subsequently designed a list linking different device contexts with their corresponding GDI APIs, and finally realized the client software rendering of the GDI APIs. The performance test shows that, compared with Wine without the optimization, the average graphics performance of Wine with client software rendering improves by at least 10 times when operating DIBs, and is close to the performance of native Windows XP, which effectively avoids the performance bottleneck of operating DIBs.
    Typical applications
    Transit assignment based on stochastic user equilibrium with passengers' perception consideration
    ZENG Ying LI Jun ZHU Hui
    2013, 33(04):  1149-1152.  DOI: 10.3724/SP.J.1087.2013.01149
    Abstract ( )   PDF (763KB) ( )
    References | Related Articles | Metrics
    Concerning the special nature of the transit network, a generalized path concept that can easily describe passengers' route choice behavior was put forward, and the key cost of each path was considered. Based on the analytical framework of cumulative prospect theory and passengers' perception, a stochastic user equilibrium assignment model was developed. A simple example reveals that the limitations of the traditional method can be effectively overcome by the proposed method, and the basic assumption of complete rationality in the traditional model was relaxed. The model helps enhance the understanding of the complexity of urban public transportation behavior and the rules of decision-making. The facility layout and planning of public transportation, as well as the evaluation of the level of service, can be determined with this result; in addition, it can also provide valid data support for traffic guidance.
    Service parts logistics system dynamics model and simulation based on lateral transshipment
    WANG Chaofeng SHUAI Bin
    2013, 33(04):  1153-1156.  DOI: 10.3724/SP.J.1087.2013.01153
    Abstract ( )   PDF (725KB) ( )
    References | Related Articles | Metrics
    Service parts logistics is a complex and stable system. From the view of system dynamics, the service parts logistics system was analyzed, a system dynamics model containing a central warehouse and two local spare parts warehouses was established, considering both lateral transshipment and vertical emergency transshipment, and its reasonableness was then examined. Three conclusions were obtained through simulation analysis. Firstly, the volatility of service parts inventory increases before and after the production idling period, which is a sensitive period for spare parts inventory management. Secondly, the shorter the product life cycle and the longer the product life, the more difficult inventory management becomes and the more obviously the service parts inventory fluctuates. Lastly, the more evenly the service load is distributed among the local warehouses, the stronger the collaboration capability of the warehouses and the lower the service parts inventory.
    Cross-country path planning based on improved ant colony algorithm
    WU Tianyi XU Jiheng LIU Yongjian
    2013, 33(04):  1157-1160.  DOI: 10.3724/SP.J.1087.2013.01157
    Abstract ( )   PDF (663KB) ( )
    References | Related Articles | Metrics
    For the vehicle cross-country path planning problem, the general influence of terrain slope and surface attributes on path planning was researched and analyzed. The "window moving method" was introduced for the prior judgment and trafficability analysis of terrain slope, the rating indices of landform roughness for wheeled vehicles and crawler vehicles were established, and the terrain roughness was rasterized with the "area dominant method". The constraint effects of slope and roughness were stacked to reduce the search scope and improve the search efficiency through establishing a taboo list. The evaluation function of the improved ant colony algorithm was constructed, and, with reference to the path table, a path optimization algorithm was designed with consideration of the slope and roughness constraints. The simulation results show that the algorithm can effectively realize cross-country path planning in accordance with the real terrain environment.
    Navigation algorithm and implementation for blind based on GPS trajectory
    LU Yuanyao JIANG Jin
    2013, 33(04):  1161-1164.  DOI: 10.3724/SP.J.1087.2013.01161
    Abstract ( )   PDF (664KB) ( )
    References | Related Articles | Metrics
    This paper described an outdoor Global Positioning System (GPS) navigation algorithm for the blind, based on analyzing the GPS trajectory data received by the embedded platform Windows CE. Based on the analysis of the GPS trajectory data, the algorithm introduced a method for judging turning points and turning directions. During practical navigation, the blind can use the real-time voice prompts for walking guidance. The experimental results show that the algorithm can judge turning points effectively, and that the navigation information is effective and practical in helping users arrive at their destinations safely and accurately.
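    A simplified Python sketch of the turning-point judgment: compute the bearing between consecutive GPS fixes and report a turn (and its direction) when the heading change exceeds a threshold. The 30-degree threshold and the spherical bearing formula are assumptions for illustration, not values from the paper.

        import math

        def bearing(p1, p2):
            """Initial bearing in degrees from p1 to p2, given (lat, lon) in degrees."""
            lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
            dlon = lon2 - lon1
            x = math.sin(dlon) * math.cos(lat2)
            y = (math.cos(lat1) * math.sin(lat2)
                 - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
            return math.degrees(math.atan2(x, y)) % 360.0

        def turning_points(track, threshold=30.0):
            """Yield (index, 'left'/'right') for heading changes above threshold."""
            for i in range(1, len(track) - 1):
                delta = (bearing(track[i], track[i + 1])
                         - bearing(track[i - 1], track[i]) + 540.0) % 360.0 - 180.0
                if abs(delta) >= threshold:
                    yield i, "right" if delta > 0 else "left"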
    Automation system for computing geographic sunshine hours based on GIS
    ZHAO Hongwei LIAO Shunbao
    2013, 33(04):  1165-1168.  DOI: 10.3724/SP.J.1087.2013.01165
    Abstract ( )   PDF (616KB) ( )
    References | Related Articles | Metrics
    Because the model for computing geographic sunshine hours based on the Digital Elevation Model (DEM) is complex and time-consuming, it is difficult to achieve both high resolution and vast coverage at the same time when geographic sunshine hours are studied on a national scale in China. Some scholars have proposed methods for calculating high-resolution geographic sunshine hours nationwide; however, they did not specify the computing platform and calculation methods. In this study, the authors developed an automated system for calculating geographic sunshine hours based on existing models and DEM, in which the curvature of the earth was corrected. The system was developed on the VS2008 platform with ArcGIS Engine component technology, and can be used to calculate geographic sunshine hours at multiple spatial scales and resolutions. The raster data of geographic sunshine hours are generated as long as the user inputs DEM data with geographic coordinates of the region and the specific date.
    Design and implementation of covering software in automatic meteorological station quality control system
    ZHANG Zhiqiang SUN Chao
    2013, 33(04):  1169-1172.  DOI: 10.3724/SP.J.1087.2013.01169
    Abstract ( )   PDF (641KB) ( )
    References | Related Articles | Metrics
    The precipitation data quality of regional automatic weather stations determines the missed-alarm and false-alarm probabilities of weather warnings, but it is difficult to verify the validity of the data by relying only on the spatial and temporal correlation of automatic data. Weather radar precipitation can be used to verify automatic weather station data. However, it is difficult to determine the coverage of weather stations by a radar station because of the heterogeneous and complex geographical environment of the radar station, which results in frequent checking errors. This paper built a coverage matching algorithm for radar stations and automatic weather stations by using the layered volume scan mode and the radar beam occlusion model of weather radar. The automatic station and radar station matching software based on this algorithm can effectively support Automatic Weather Station (AWS) precipitation data inspection.
    Arrhythmia classification based on mathematical morphology and support vector machine
    LIU Xiongfei YAN Chenwei HU Zhikun
    2013, 33(04):  1173-1175.  DOI: 10.3724/SP.J.1087.2013.01173
    Abstract ( )   PDF (442KB) ( )
    References | Related Articles | Metrics
    To achieve automatic analysis of different types of ElectroCardioGram (ECG), a sequential maximum-value screening method was introduced to detect the R wave, and a Support Vector Machine (SVM) was used to identify arrhythmia heart beats. The localization algorithm based on mathematical morphology, combined with the characteristics of ECG, defined the R-wave screening interval to avoid the threshold selection of traditional algorithms. After the R peaks were positioned, various types of arrhythmia heart beats were extracted with the R-wave crest as the center and classified by an SVM with a Radial Basis Function (RBF) kernel. The results of the simulation experiments on the MIT-BIH database files indicate that this algorithm achieved a high detection rate of 99.36% for ECG with different types of heart beats. After learning, the SVM can effectively identify as many as 4 types of beats, such as atrial premature beat, premature ventricular beat, bundle branch block and normal heart beat, and the overall recognition rate is 99.75%.
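    The baseline-removal and peak-screening steps can be illustrated with SciPy morphology: a grey opening-closing removes baseline wander, and the R peaks are then screened as local maxima separated by a refractory interval. The structuring-element size, threshold and interval below are assumptions, and the SVM classification stage is omitted.

        import numpy as np
        from scipy.ndimage import grey_opening, grey_closing

        def remove_baseline(ecg, size=71):
            """Estimate baseline wander with an opening-closing filter and subtract it."""
            baseline = grey_closing(grey_opening(ecg, size=(size,)), size=(size,))
            return ecg - baseline

        def detect_r_peaks(ecg, fs=360, min_rr=0.3):
            """Screen R peaks as local maxima separated by a refractory window."""
            signal = remove_baseline(np.asarray(ecg, dtype=float))
            refractory = int(min_rr * fs)
            threshold = 0.5 * np.max(signal)
            peaks, last = [], -refractory
            for i in range(1, len(signal) - 1):
                if (signal[i] > threshold and signal[i] >= signal[i - 1]
                        and signal[i] >= signal[i + 1] and i - last >= refractory):
                    peaks.append(i)
                    last = i
            return peaks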
    Feature extraction of energy entropy of ECG signal on meridian systems using wavelet packet analysis
    LIU Xin HE Hong TAN Yonghong
    2013, 33(04):  1176-1178.  DOI: 10.3724/SP.J.1087.2013.01176
    Abstract ( )   PDF (603KB) ( )
    References | Related Articles | Metrics
    In order to study meridian characteristics, a feature extraction method for ElectroCardioGram (ECG) signals on the meridians based on wavelet packet analysis and energy entropy was proposed. A meridian measuring experiment was first built to complete the acquisition of meridian data. Then the meridian ECG signals were decomposed by a three-layer wavelet packet decomposition, and the energy entropy features of the meridian ECG signals were extracted from the results of signal reconstruction. After that, both K-means and Fuzzy C-Means (FCM) clustering techniques were used to effectively partition acupoints and non-acupoints. The clustering results indicate that the energy entropy values of ECG signals at the acupoints are obviously higher than those at the non-acupoints, which can serve as a scientific basis for the discrimination of acupoints and non-acupoints.
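    A short PyWavelets sketch of the energy-entropy feature extraction step: decompose the signal to three levels, take the energy share of each terminal node, and compute the Shannon entropy. The wavelet basis 'db4' is an assumption, since the abstract does not name one.

        import numpy as np
        import pywt

        def energy_entropy(signal, wavelet="db4", level=3):
            """Wavelet-packet energy entropy of a 1-D signal."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                    mode="symmetric", maxlevel=level)
            energies = np.array([np.sum(np.square(node.data))
                                 for node in wp.get_level(level, order="natural")])
            p = energies / np.sum(energies)          # energy distribution
            p = p[p > 0]                             # avoid log(0)
            return -np.sum(p * np.log(p))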
    Threat modeling and assessment of unmanned aerial vehicle under complicated meteorological conditions
    WU Zhongjie ZHANG Yaozhong WANG Qiang
    2013, 33(04):  1179-1182.  DOI: 10.3724/SP.J.1087.2013.01179
    Abstract ( )   PDF (542KB) ( )
    References | Related Articles | Metrics
    To study the effect of meteorological conditions on Unmanned Aerial Vehicles (UAV), a multi-level fuzzy comprehensive evaluation algorithm based on threat value was proposed. The algorithm improved the two-level weight determination and the comprehensive evaluation model, from which the comprehensive threat index can be calculated. The simulation results show that the algorithm can assess the degree of weather threat accurately, with faster operation speed, smaller error and lower complexity, and that its efficiency and validity are also improved.
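    A minimal NumPy sketch of one level of fuzzy comprehensive evaluation in its weighted-average form: combine a factor weight vector with a membership matrix, then score the result against the grade values. All numbers below are placeholders, not data from the paper.

        import numpy as np

        def fuzzy_evaluate(weights, membership, grade_values):
            """One level of fuzzy comprehensive evaluation (weighted-average operator).

            weights      -- factor weight vector, sums to 1
            membership   -- matrix, rows = factors, columns = evaluation grades
            grade_values -- numeric value assigned to each grade
            """
            b = np.asarray(weights) @ np.asarray(membership)   # fuzzy evaluation vector
            b = b / b.sum()                                    # normalize
            return float(b @ np.asarray(grade_values))         # comprehensive threat index

        # Placeholder example: three weather factors, four threat grades.
        w = [0.5, 0.3, 0.2]
        r = [[0.1, 0.2, 0.4, 0.3],
             [0.3, 0.4, 0.2, 0.1],
             [0.2, 0.5, 0.2, 0.1]]
        print(fuzzy_evaluate(w, r, grade_values=[0.25, 0.5, 0.75, 1.0]))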
    Data exchanging technology research of intelligent measuring and control system based on IEEE1451
    YE Tingdong HUANG Guojian HONG Xiaobin
    2013, 33(04):  1183-1186.  DOI: 10.3724/SP.J.1087.2013.01183
    Abstract ( )   PDF (630KB) ( )
    References | Related Articles | Metrics
    With regard to the application development requirements of the Internet of Things (IoT) and the unified data modeling requirement of the Measuring and Control System (MCS), this paper designed an MCS structure based on the IEEE1451 standard, which has sensor plug-and-play and IPv6 communication functions. The paper realized a dynamic description of the network data flow from the underlying sensors and intelligent measuring and control nodes to the remote monitoring and control application through UML modeling, and completed a general XML data-exchanging interface design. The designed MCS was applied in the process industry of ethanol production. The application results show that accurate and efficient transmission is realized by using XML data exchanging technology based on the IEEE1451 standard. Its data exchanging delay is about 0.51 ms, which can satisfy the requirements of openness, cross-platform operation and real-time network monitoring applications.
    Implementation of soft start on design of body control system
    ZHANG Xiaoliang ZHU Qing WANG Yaonan CAO Shiwei
    2013, 33(04):  1187-1190.  DOI: 10.3724/SP.J.1087.2013.01187
    Abstract ( )   PDF (613KB) ( )
    References | Related Articles | Metrics
    To solve the instantaneous overcurrent of large-power inductive loads, a body control system based on soft start was established. The abundant peripherals of the Micro Control Unit (MCU) and the scalability of the Field Programmable Gate Array (FPGA) were fully utilized to implement the rapid sampling of the multiplexed switching signals and the output control of the Pulse Width Modulation (PWM) signals. The software was designed in a modular way, which completed the tasks and met the needs of the vehicle. Finally, the test shows that the Electromagnetic Interference (EMI) on the vehicle is reduced, the soft start of large inductive equipment is realized, and the instantaneous overcurrent of the load is reduced by 40%, so the body control system can be applied to unmanned vehicle control.