Table of Contents

    10 August 2015, Volume 35 Issue 8
    Stateless communication mechanism between an IPv4 network and the IPv6 Internet
    HAN Guoliang, SHENG Maojia, BAO Congxiao, LI Xing
    2015, 35(8):  2113-2117.  DOI: 10.11772/j.issn.1001-9081.2015.08.2113
    Abstract   PDF (938KB)

    In the IPv4/IPv6 transition process, legacy IPv4 networks still need to communicate with the IPv6 Internet. To meet this need, a stateless communication mechanism between an IPv4 network and the IPv6 Internet, which complements the current IPv4/IPv6 translation framework, was proposed. First, the communication procedures in two related scenarios were demonstrated: IPv6 Internet clients accessing IPv4 servers, and IPv4 clients accessing IPv6 Internet servers. The one-way IPv6-IPv4 address mapping function is the key component of the mechanism, so the requirements and three quantitative criteria of this one-way mapping function were discussed. Afterwards, multiple Hash functions were compared and analyzed as candidates for the one-way mapping function, using real user data from large IPv6 websites and real IPv6 server addresses. The simulation results show that the FarmCity Hash function is suitable for deployment in both scenarios because of its short average processing time, low collision rate and low reverse query complexity; they also verify the validity of the stateless communication mechanism. Compared with current stateful communication mechanisms, the stateless mechanism has better scalability and traceability. Moreover, its capacity for bidirectional communication facilitates a smooth migration path towards the IPv6 Internet.
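
    The core of such a stateless mechanism is a deterministic one-way hash from an IPv6 address into an IPv4 pool, so no per-flow state table is needed. A minimal sketch follows; SHA-256 stands in for the FarmCity Hash evaluated in the paper, and the pool prefix `198.51.100.0/24` is an illustrative placeholder.

```python
import hashlib
import ipaddress

def map_ipv6_to_ipv4(v6_addr: str, v4_prefix: str = "198.51.100.0/24") -> str:
    """Map an IPv6 address to an address inside an IPv4 pool via a
    deterministic one-way hash (stateless: no per-flow table needed)."""
    net = ipaddress.ip_network(v4_prefix)
    digest = hashlib.sha256(ipaddress.ip_address(v6_addr).packed).digest()
    # Fold the digest into an offset within the pool. Collisions are possible;
    # the paper compares candidate hash functions by collision rate.
    offset = int.from_bytes(digest[:4], "big") % net.num_addresses
    return str(net[offset])
```

    Because the mapping is a pure function of the source address, any translator instance computes the same IPv4 address, which is what gives the mechanism its scalability.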

    Progressive auction based switch migration mechanism in software defined network
    CHEN Feiyu, WANG Binqiang, WANG Wenbo, WANG Zhiming
    2015, 35(8):  2118-2123.  DOI: 10.11772/j.issn.1001-9081.2015.08.2118
    Abstract   PDF (988KB)

    In multi-controller Software Defined Networks (SDN), existing switch migration strategies consider only a single migration factor, which makes them inefficient and forces repeated migrations. To address this, a switch migration mechanism based on progressive auction, named PASMM (Progressive Auction based Switches Migration Mechanism), was proposed. To improve network benefit, the switch migration problem was optimized by auctioning controllers' remaining resources. By increasing the trading price of over-demanded controllers' resources, PASMM completed the auction and redeployed the controllers and switches. The simulation results show that, compared with some typical switch migration policies, PASMM achieves good load balancing among controllers, reduces the response time of PACKET_IN messages by an average of 13.5%, and spends the least migration time as switch flow requests increase.
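
    The price-raising idea can be sketched with a toy ascending auction: unassigned switches bid for their cheapest controller, and an over-demanded controller evicts its costliest holder and raises its price, pricing out extra switches. This is a generic auction-algorithm sketch, not PASMM's actual pricing rule or migration factors; all names and cost values are illustrative. It assumes total capacity covers all switches.

```python
def progressive_auction(capacities, costs, step=1.0):
    """Toy progressive auction. capacities[c]: slots controller c offers;
    costs[s][c]: base cost of assigning switch s to controller c.
    Assumes sum(capacities) >= number of switches."""
    num_c = len(capacities)
    prices = [0.0] * num_c
    assigned = [None] * len(costs)
    holders = [[] for _ in range(num_c)]
    unassigned = list(range(len(costs)))
    while unassigned:
        s = unassigned.pop(0)
        # bid for the cheapest controller at current prices
        c = min(range(num_c), key=lambda j: costs[s][j] + prices[j])
        holders[c].append(s)
        assigned[s] = c
        if len(holders[c]) > capacities[c]:
            # over-demanded: evict the costliest holder and raise the price
            out = max(holders[c], key=lambda t: costs[t][c])
            holders[c].remove(out)
            assigned[out] = None
            unassigned.append(out)
            prices[c] += step
    return assigned, prices
```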

    Design and implementation of hardware-in-loop simulation system of wireless network MAC protocol in USRP2
    LI Jiaxun, ZHANG Shaojie, ZHAO Haitao, MA Dongtang
    2015, 35(8):  2124-2128.  DOI: 10.11772/j.issn.1001-9081.2015.08.2124
    Abstract   PDF (896KB)

    Currently, network protocol development is limited by hardware, and building hardware networks for performance evaluation is costly, so most of the literature focuses on software simulation, which limits the results to theory. To solve these problems, a hardware-in-loop simulation system for distributed wireless network MAC (Media Access Control) protocols based on GNU Radio and the second-generation Universal Software Radio Peripheral (USRP2) was designed and implemented. Referring to the standard IEEE 802.11 Distributed Coordination Function (DCF) protocol, the simulation system adopted discrete-event simulation to simulate multi-node distributed wireless networks with minimal hardware resources, i.e., one Personal Computer (PC) and two USRP2s. In software, the MAC protocols were implemented in Python, which is flexible and easy to modify or extend; in the physical layer, modular C++ components were adopted for signal processing, which further improves the scalability of the simulation system. The experimental results validate the reliability of the hardware-in-loop simulation system by comparison with the Bianchi model and a time-slot-based saturation throughput calculation model.

    Blind source separation method for mixed digital modulation signals based on RobustICA
    ZHANG Guangyu, CHEN Hong, CAI Xiaoxia
    2015, 35(8):  2129-2132.  DOI: 10.11772/j.issn.1001-9081.2015.08.2129
    Abstract   PDF (597KB)

    Since the Bit Error Rate (BER) of Blind Source Separation (BSS) of mixed digital modulation signals in noisy environments is excessively high, a two-stage blind source separation algorithm named R-TSBS was proposed based on RobustICA (Robust Independent Component Analysis). Firstly, the algorithm used RobustICA to estimate the mixing matrix consisting of array response vectors. In the second stage, the symbol sequence transmitted by each digital modulation source was estimated by Maximum Likelihood Estimation (MLE), exploiting the fact that the symbols take values in a finite alphabet, thus achieving blind source separation. The simulation results show that, at a Signal-to-Noise Ratio (SNR) of 10 dB, the BER of traditional Independent Component Analysis (ICA) algorithms such as FastICA (Fast Independent Component Analysis) and RobustICA reaches 3.5×10^-2, which is unacceptably high, while the BER of both F-TSBS (the two-stage separation built on FastICA) and the proposed R-TSBS drops to 10^-3, a significant improvement in separation performance. Moreover, R-TSBS obtains about 2 dB of performance gain over F-TSBS at low SNR (0-4 dB).

    RS code design in extended 1090ES based on phase modulation
    SONG Yan, LI Huaqiong, WANG Hong, SUN Qingqing, HUANG Zhongtao
    2015, 35(8):  2133-2136.  DOI: 10.11772/j.issn.1001-9081.2015.08.2133
    Abstract   PDF (599KB)

    The data link capacity of 1090ES (1090 MHz Extended Squitter) can be expanded by modulating the 1090 MHz signal with phase information, so RS (Reed-Solomon) error-correction coding for the 1090ES expansion system based on 8PSK (8 Phase Shift Keying) phase modulation was studied. Firstly, the total RS code length was designed as 54 symbols according to the characteristics of RS codes and the data link structure of the 1090ES expansion system. Secondly, the error performance at different RS code rates was discussed and its influence on the performance of the 1090ES expansion system was analyzed; thereby the optimum RS code rate range was determined as 0.6-0.7. Finally, a concrete analysis of the error performance within the selected code rate range was given, and the experimental results show that the number of information symbols can be chosen as 32, 34 or 36. Furthermore, taking RS(54, 32) as an example, Matlab simulation shows that the designed RS code can effectively improve the error performance of the 1090ES expansion system.
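
    The trade-off behind the candidate information lengths is standard RS theory: an RS(n, k) code has rate k/n and corrects up to t = (n - k)/2 symbol errors. A few lines make the abstract's candidates concrete:

```python
def rs_params(n, k):
    """Code rate and symbol-error-correcting capability t = (n - k) / 2
    of an RS(n, k) code."""
    return k / n, (n - k) // 2

# Candidate information lengths from the abstract, with n = 54 symbols:
for k in (32, 34, 36):
    rate, t = rs_params(54, k)
    print(f"RS(54, {k}): rate = {rate:.3f}, corrects up to {t} symbol errors")
```

    Larger k raises the rate (more payload per squitter) but lowers t, which is the tension the paper resolves by constraining the rate to 0.6-0.7.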

    Approximation algorithm of cluster vertex deletion
    GAO Wenyu, LI Hua
    2015, 35(8):  2137-2139.  DOI: 10.11772/j.issn.1001-9081.2015.08.2137
    Abstract   PDF (562KB)

    The solution set obtained by the classic 3-approximation algorithm for the cluster vertex deletion problem can be large, so a new approximation algorithm was proposed by analyzing the characteristics of clusters. In the new algorithm, the number of induced paths P3 containing each vertex was counted by examining the first-order and second-order neighbors of each vertex; the vertex contained in the most P3s was then moved into the solution set, eliminating as many P3s as possible per deletion and leading to a smaller vertex deletion set. Several sets of randomized simulations were designed to verify the performance of the new algorithm. The simulation results show that the new algorithm outperforms the classic 3-approximation algorithm.
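
    The greedy rule described above can be sketched directly: a graph is a cluster graph exactly when it contains no induced P3 (three vertices with exactly two edges among them), so the heuristic repeatedly deletes the vertex lying in the most induced P3s. This brute-force sketch enumerates triples rather than scanning first- and second-order neighbors as the paper does, so it is only suitable for small graphs.

```python
from itertools import combinations

def p3_count(adj, v):
    """Count induced P3 paths (3 vertices, exactly 2 edges) containing v.
    adj: dict vertex -> set of neighbours."""
    count = 0
    for trio in combinations(adj, 3):
        if v in trio:
            a, b, c = trio
            edges = sum(1 for x, y in ((a, b), (a, c), (b, c)) if y in adj[x])
            count += edges == 2
    return count

def cluster_vertex_deletion(adj):
    """Greedy heuristic: repeatedly delete the vertex contained in the most
    induced P3s until none remain (the rest is a cluster graph)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    deleted = []
    while adj:
        counts = {v: p3_count(adj, v) for v in adj}
        v = max(counts, key=counts.get)
        if counts[v] == 0:
            break
        deleted.append(v)
        for u in adj.pop(v):
            adj[u].discard(v)
    return deleted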

    Normal equitable total coloring algorithm of random graphs
    YIN Bo, LI Jingwen, DAI Sumin, HU Tengyun
    2015, 35(8):  2140-2146.  DOI: 10.11772/j.issn.1001-9081.2015.08.2140
    Abstract   PDF (847KB)

    Research on equitable total coloring has been limited to special graphs such as complete graphs and join graphs; no feasible method for the equitable total coloring of general simple connected graphs has been published. To study the equitable total coloring of general graphs, a new heuristic algorithm was proposed according to four constraint rules of equitable total coloring: the vertex constraint, the edge constraint, the vertex-edge constraint and the equity constraint. First, four sub-functions and one total function were defined. Second, using the coloring matrix and its complementary matrix, each sub-function was iteratively exchanged until its value reached zero, meaning the corresponding sub-goal coloring was completed; when every sub-function value was 0, the total function value was 0 and the coloring was successful. The experimental results show that the proposed algorithm can generate all simple connected graphs with at most 8 vertices, color them and obtain the corresponding equitable total chromatic numbers, and that for any positive integer k with 3 ≤ k ≤ 9, a random graph G has a k-equitable total coloring. In addition, 72 graphs with between 20 and 400 vertices were colored, and a diagram relating vertex number, edge number and color number was drawn from the equitable total coloring results.

    Cloud manufacturing service resource combination optimization model and algorithm under group buying mode
    MA Shugang, YANG Jianhua
    2015, 35(8):  2147-2152.  DOI: 10.11772/j.issn.1001-9081.2015.08.2147
    Abstract   PDF (1107KB)

    A cloud manufacturing service resource combination optimization model under the group buying mode was proposed in order to reduce the service costs of demanders. In the early stage of a cloud manufacturing platform, service resource optimization management was analyzed from the perspective of service demanders, so as to reduce their overall service costs on the platform. The group buying mode was introduced into the resource combination optimization model, and the key impact factors in group buying, including pricing and trust level, were considered. Comprehensive decision-making for cloud manufacturing resource optimization under group buying was studied, and an improved genetic algorithm was designed to solve the model. Simulation experiments on problems of different scales verify the validity and feasibility of the proposed model and the improved algorithm. The simulation results show that the cost advantage of the group buying mode grows as the group buying scale increases.

    Combined prediction scheme for runtime of tasks in computing cluster
    YU Ying, LI Kenli, XU Yuming
    2015, 35(8):  2153-2157.  DOI: 10.11772/j.issn.1001-9081.2015.08.2153
    Abstract   PDF (972KB)

    A Combined Prediction Scheme (CPS) and the concept of Prediction Accuracy Assurance (PAA) were put forward for the runtime of local and remote tasks, addressing the inapplicability of any single policy to all heterogeneous tasks. The GridSim toolkit was used to implement CPS, and PAA served as a quantitative evaluation standard for the runtime predicted by a specific strategy. The simulation experiments show that, compared with local task prediction strategies such as Last and Sliding Median (SM), the average relative residual error of CPS is reduced by 1.58% and 1.62% respectively; compared with remote task prediction strategies such as Running Mean (RM) and Exponential Smoothing (ES), it is reduced by 1.02% and 2.9% respectively. The results indicate that PAA can select a near-optimal value from the results of the component prediction strategies, and that CPS enhances the PAA of the runtime of local and remote tasks in computing clusters.
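
    A minimal sketch of such a combined scheme, assuming the usual definitions of the four component strategies (the paper's exact selection rule and PAA metric may differ): backtest each strategy on the most recent observation and let the winner make the next prediction.

```python
import statistics

def last(h):
    return h[-1]

def sliding_median(h, w=5):
    return statistics.median(h[-w:])

def running_mean(h):
    return sum(h) / len(h)

def exp_smoothing(h, alpha=0.5):
    s = h[0]
    for x in h[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

STRATEGIES = (last, sliding_median, running_mean, exp_smoothing)

def combined_predict(history):
    """Backtest every strategy on the most recent observation and use the
    winner for the next prediction (needs len(history) >= 2)."""
    past, actual = history[:-1], history[-1]
    best = min(STRATEGIES, key=lambda f: abs(f(past) - actual))
    return best(history)
```

    On a steadily increasing runtime series, for instance, the Last strategy wins the backtest and is used for the next step.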

    Architecture of new generation blog system with cloud storage
    ZHANG Baojun, PAN Ruifang
    2015, 35(8):  2158-2163.  DOI: 10.11772/j.issn.1001-9081.2015.08.2158
    Abstract   PDF (1000KB)

    To solve the problem of storing the massive information of a new generation blog system, a new blog system architecture combined with cloud storage, named BlogCloud, was proposed. By using distributed storage technology, BlogCloud avoids the bottleneck of centralized storage and has high scalability; with a semi-distributed Peer-to-Peer (P2P) architecture, it locates storage resources quickly. To avoid network churn caused by unstable nodes, only stable nodes are used as storage nodes. The rule of nearest storage is adopted and data is kept in the client cache to reduce network transmission. Users are allowed to customize the file block size, so large files can be divided into blocks and transmitted in parallel to increase transmission speed, while small files need not be divided, saving the cost of segmentation and recombination. BlogCloud also provides redundancy functions to enhance data security and reliability, including storing file copies on multiple storage nodes and remote backup. A comparison test between BlogCloud and ZSWIN was performed on virtual machines. The results show that the throughput of BlogCloud is obviously higher than that of ZSWIN; that the performance of BlogCloud degrades when unstable nodes are used as storage nodes; and that BlogCloud has high reliability, continuing to run stably when storage nodes and index nodes are removed. The results verify that BlogCloud can satisfy the storage requirements of a new generation blog system.

    Network security situation assessment method based on Naive Bayes classifier
    WEN Zhicheng, CAO Chunli, ZHOU Hao
    2015, 35(8):  2164-2168.  DOI: 10.11772/j.issn.1001-9081.2015.08.2164
    Abstract   PDF (778KB)

    Concerning the problems of current network security situation assessment, such as limited scope, a single information source, high time and space complexity and large deviations in accuracy, a new network security situation assessment method based on the Naive Bayes classifier was proposed. It fully considers multiple information sources and the fusion of multi-level heterogeneous information, and features rapidity and high efficiency. It dynamically demonstrates the overall security state of the network and can precisely reflect the network security situation. Finally, the proposed method was verified with real network data, and its validity was proved.
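
    The classifier at the heart of such a method is standard: Naive Bayes assumes features are conditionally independent given the class. A minimal categorical version with Laplace smoothing is sketched below; the feature names and labels are illustrative, not the paper's actual fused indicators.

```python
import math
from collections import Counter, defaultdict

class CategoricalNB:
    """Minimal Naive Bayes over categorical features with Laplace smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.class_count = Counter(y)
        self.total = len(y)
        self.feat_count = defaultdict(Counter)   # (class, i) -> value counts
        self.values = [set() for _ in range(len(X[0]))]
        for xs, c in zip(X, y):
            for i, v in enumerate(xs):
                self.feat_count[(c, i)][v] += 1
                self.values[i].add(v)
        return self

    def predict(self, xs):
        def log_posterior(c):
            lp = math.log(self.class_count[c] / self.total)
            for i, v in enumerate(xs):
                num = self.feat_count[(c, i)][v] + 1      # Laplace smoothing
                den = self.class_count[c] + len(self.values[i])
                lp += math.log(num / den)
            return lp
        return max(self.classes, key=log_posterior)
```

    Training is a single counting pass and prediction is a handful of multiplications per class, which is what gives the method its low time and space complexity.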

    New network vulnerability diffusion analysis method based on cumulative effect
    LI Yan, HUANG Guangqiu, ZHANG Bin
    2015, 35(8):  2169-2173.  DOI: 10.11772/j.issn.1001-9081.2015.08.2169
    Abstract   PDF (851KB)

    Network vulnerability assessment, which aims at security situation analysis and the establishment of defensive measures before an attack, is a kind of active defense technology; however, traditional quantitative analysis models cannot express the dynamic interactions between entities, and most of them cannot obtain global results for risk diffusion. Drawing on the influence of social networks in the communication process, a new network vulnerability diffusion analysis method based on the cumulative effect was proposed. The defined vulnerability diffusion analysis model describes the subject relation structure at a more detailed level, and the proposed algorithm, which uses the accumulation characteristics of attack effects, describes the vulnerability diffusion rule more accurately so as to better estimate the influence range. Finally, the model and algorithm were verified on a typical example, with a horizontal comparison covering the simplicity of the model description, the accuracy of the analysis results and the rationality of the security recommendations. The results show that the method has advantages in the intuitiveness of the assessment results and in formulating minimum-cost security measures.

    Cascading failure model based on community theory in complex network
    LU Jingqiao, FU Xiufen
    2015, 35(8):  2174-2177.  DOI: 10.11772/j.issn.1001-9081.2015.08.2174
    Abstract   PDF (616KB)

    To overcome the limitation of studying only single nodes or simple neighbor relations in research on cascading failures, a cascading failure model considering the local characteristics of node-community structure was proposed. The model gives each node a dynamic initial load based on its community properties, and different attack strategies were applied to the Western States Power Grid of the United States, US Airlines, the IEEE 118 standard grid and a Scale-Free Network (SFN) to simulate the cascading failure process. The simulation results show that nodes within a community cause relatively minor faults when the community factor dominates the initial load, but special nodes connecting multiple communities cause serious cascading failures. Calculating the Pearson correlation coefficients of different properties also indicates that the number of neighbor nodes is more relevant than the other properties.
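
    The generic load-redistribution mechanism underlying such simulations can be sketched in a few lines (a Motter-Lai-style model with even redistribution; the paper's community-based initial loads and redistribution rule are not reproduced here):

```python
def cascade(adj, load, alpha, attacked):
    """Toy cascading-failure simulation: each node's capacity is
    (1 + alpha) * initial load; a failed node's current load is split evenly
    among its surviving neighbours, which fail in turn when overloaded."""
    capacity = {v: (1 + alpha) * load[v] for v in load}
    load = dict(load)                 # work on a copy
    failed = {attacked}
    frontier = [attacked]
    while frontier:
        v = frontier.pop()
        alive = [u for u in adj[v] if u not in failed]
        if alive:
            share = load[v] / len(alive)
            for u in alive:
                load[u] += share
                if load[u] > capacity[u]:
                    failed.add(u)
                    frontier.append(u)
        load[v] = 0
    return failed
```

    Attacking a leaf of a small star topology, for example, overloads the hub when the tolerance alpha is small, after which the hub's load takes down the remaining leaves.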

    (θ, k)-anonymous method in the subsets of social networks
    ZHANG Xiaolin, WANG Ping, GUO Yanlei, WANG Jingyu
    2015, 35(8):  2178-2183.  DOI: 10.11772/j.issn.1001-9081.2015.08.2178
    Abstract   PDF (864KB)

    Focusing on the issue that current research on social networks does not consider subsets in neighborhood privacy preservation, and that specific properties of neighborhood subsets can also lead to individual privacy disclosure, a new (θ, k)-anonymous model was proposed. Following the k-isomorphism ideology, the model removes the labels of the neighborhood subsets to be protected in the social network, uses neighborhood component coding and node refinement to process the nodes in the candidate set and their neighborhood information, and then performs the isomorphism operation on the specific subsets while considering the sensitive attribute distribution. Ultimately, each node in a neighborhood subset is neighborhood-isomorphic with at least k-1 other nodes, and the difference between the attribute distribution of each node in the neighborhood subset and that of the whole subset is no more than θ. The experimental results show that the (θ, k)-anonymous model can reduce the anonymization cost and maximize the utility of the data.

    Research on access control policy for Web service
    HE Zhengqiu, ZHANG Yelin, XU Junkui, SUN Danhui
    2015, 35(8):  2184-2188.  DOI: 10.11772/j.issn.1001-9081.2015.08.2184
    Abstract   PDF (829KB)

    In a Web service environment, the interacting entities usually cannot be predetermined and may be in different security domains. To address access authorization for unknown users across domain borders, access control for Web services should be based on domain-independent access control information rather than identities. A context-based access control policy model appropriate for the Web service environment was proposed. Its main idea is that various kinds of access control information are abstracted and represented as a concept of context, which serves as the center for defining and enforcing access control policies. The context concept acts as an intermediary between requesters and access permissions, similar in a way to the role in Role-Based Access Control (RBAC). Context-based access control policy axioms were defined based on Description Logic (DL), and on the basis of these axioms an access control policy knowledge base capable of reasoning about access control policies was put forward. Finally, the effect of access control policy enforcement was verified in the Racer reasoning system, and the experimental results prove the feasibility and validity of the presented method.

    Attribute repeatable multi-authority attribute-based encryption scheme on prime order group
    LI Zuohui, YANG Mengmeng, CHEN Xingyuan
    2015, 35(8):  2189-2194.  DOI: 10.11772/j.issn.1001-9081.2015.08.2189
    Abstract   PDF (948KB)

    Since previous Multi-Authority Attribute-Based Encryption (MA-ABE) schemes limit each attribute to appear only once in the access structure and suffer from the superfluous computation overhead of the repetitive encoding technique, an adaptively secure and unrestricted Multi-Authority Ciphertext-Policy ABE (MA-CP-ABE) scheme on prime order groups was proposed. Firstly, based on dual pairing vector spaces and linear secret-sharing scheme technology, an MA-CP-ABE scheme was constructed on prime order groups. Then, the q-Parallel BDHE (Bilinear Diffie-Hellman Exponent) assumption was introduced to remove the statistical hypothesis in classical dual system encryption that requires each attribute to appear only once in the access structure, and a series of mutually indistinguishable attack games was designed to prove that the scheme is adaptively secure in the standard model. Finally, performance analysis indicates that, in comparison with two other adaptively secure MA-CP-ABE schemes on prime order groups, decryption speed improves by nearly 20%-40% and 0%-50% respectively as the number of participating attributes increases, even without considering attribute repetition. The scheme is therefore more efficient in real applications.

    Method for increasing S-box nonlinearity based on combination of hill climbing
    QIN Guanjie, MA Jianshe, CHENG Xuemin
    2015, 35(8):  2195-2198.  DOI: 10.11772/j.issn.1001-9081.2015.08.2195
    Abstract   PDF (720KB)

    Focusing on the issue that the 3-point and 4-point hill climbing algorithms require heavy computation yet are inefficient at enhancing the nonlinearity of a Substitution box (S-box), an algorithm named Combination of Hill Climbing (CHC), which can apply multiple swap elements at a time, was proposed. The algorithm defines swapping two output values of an S-box as a swap element, uses a weighted prioritizing function to select the swap elements that contribute most to the enhancement of nonlinearity, and then applies multiple selected swap elements simultaneously. In the experiments, up to 12 output values were swapped at a time using the CHC algorithm, and the nonlinearity of most random 8-input, 8-output S-boxes surpassed 102, with a maximum of 106. The experimental results show that, compared with the 3-point and 4-point hill climbing algorithms, the proposed CHC algorithm not only reduces the amount of computation but also enhances the nonlinearity of random S-boxes more significantly.
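
    The objective function and the swap-element move can be sketched for 4-bit S-boxes (8-bit works identically but is much slower in plain Python). Nonlinearity is computed from the Walsh spectrum; the hill climb below is the baseline single-swap variant, whereas CHC ranks swap elements and applies several of them at once.

```python
# Parity lookup table: PARITY[i] = popcount(i) mod 2, for fast inner products.
PARITY = [bin(i).count("1") & 1 for i in range(256)]

def nonlinearity(sbox, n=4):
    """Nonlinearity of an n-bit S-box via the Walsh spectrum:
    NL = 2^(n-1) - max|W(a, b)| / 2 over all nonzero output masks b."""
    size = 1 << n
    max_w = 0
    for b in range(1, size):            # nonzero output mask
        for a in range(size):           # input mask
            w = sum(1 - 2 * (PARITY[a & x] ^ PARITY[b & sbox[x]])
                    for x in range(size))
            max_w = max(max_w, abs(w))
    return (size >> 1) - max_w // 2

def hill_climb_once(sbox):
    """Baseline hill climb: try every single swap element (one pair of output
    positions) and keep the best improvement, if any."""
    best, best_nl = list(sbox), nonlinearity(sbox)
    for i in range(len(sbox)):
        for j in range(i + 1, len(sbox)):
            cand = list(sbox)
            cand[i], cand[j] = cand[j], cand[i]   # one swap element
            nl = nonlinearity(cand)
            if nl > best_nl:
                best, best_nl = cand, nl
    return best
```

    As a sanity check, the identity S-box is perfectly linear (nonlinearity 0), while the 4-bit PRESENT S-box reaches the optimum value 4 for this size.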

    Cryptanalysis and improvement of two multi-server remote user authentication schemes using smart cards
    QU Juan, LI Yanping, WU Xili
    2015, 35(8):  2199-2204.  DOI: 10.11772/j.issn.1001-9081.2015.08.2199
    Abstract   PDF (896KB)

    User authentication is an important security issue when users access resources over a network. Recently, Xu et al. (XU C, JIA Z, WEN F, et al. Cryptanalysis and improvement of a dynamic ID based remote user authentication scheme using smart cards [J]. Journal of Computational Information Systems, 2013, 9(14): 5513-5520) proposed a dynamic ID based remote user authentication scheme using smart cards. Through rigorous security analysis, it was found that Xu's scheme cannot resist man-in-the-middle attack and session key disclosure attack, and cannot provide perfect forward secrecy for the session key. Additionally, it was demonstrated that the scheme proposed by Choi et al. (CHOI Y, NAM J, LEE D, et al. Security enhanced anonymous multiserver authenticated key agreement scheme using smart cards and biometrics [J]. The Scientific World Journal, 2014, 2014: 281305) is vulnerable to smart card loss attack and server spoofing attack, and cannot provide user anonymity. Therefore, neither scheme is suitable for practical applications. Finally, an improved scheme based on biometrics and extended chaotic maps was proposed to overcome the drawbacks of Xu's scheme and the CNL scheme.

    Infrared image encryption scheme using Lorenz chaotic system
    WANG Congli, CHEN Zhibin, GE Yong
    2015, 35(8):  2205-2209.  DOI: 10.11772/j.issn.1001-9081.2015.08.2205
    Abstract   PDF (887KB)

    In order to ensure the security of infrared images in infrared imaging systems and avoid the shortcomings of traditional image encryption methods, such as low security and poor real-time performance, a new encryption scheme for infrared images using the Lorenz chaotic system was proposed based on an analysis of the bit-plane features of infrared images. Based on the statistical characteristics of the bit planes of infrared images, the scheme scrambles the abscissa, ordinate and bit plane of the image's four higher bit planes in a single operation using the Lorenz chaotic system, extending image encryption from the pixel level to the bit level. Compared with traditional image encryption methods, the scheme exploits the particular bit-plane statistics of infrared images and achieves fast encryption speed and good performance; it can effectively resist exhaustive attack and has good reliability and scrambling degree. The scheme can be applied to infrared monitoring systems with high security requirements to improve system security and effectively prevent intrusion.
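
    The core idea, deriving a key-dependent permutation from a chaotic trajectory, can be sketched on a 1-D sequence (the paper scrambles row, column and bit-plane coordinates jointly; Euler integration and all parameter values below are illustrative):

```python
def lorenz_sequence(n, x=0.1, y=0.0, z=0.0, dt=0.005,
                    sigma=10.0, rho=28.0, beta=8.0 / 3.0, burn=1000):
    """n samples of the Lorenz x-variable via forward-Euler integration;
    `burn` steps discard the transient before sampling."""
    seq = []
    for i in range(burn + n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn:
            seq.append(x)
    return seq

def scramble(data, key=(0.1, 0.0, 0.0)):
    """Permute data by the ranking of a Lorenz trajectory; the key (initial
    condition) regenerates the same permutation, so scrambling is invertible."""
    chaos = lorenz_sequence(len(data), *key)
    perm = sorted(range(len(data)), key=chaos.__getitem__)
    return [data[i] for i in perm], perm

def unscramble(scrambled, perm):
    out = [None] * len(perm)
    for pos, i in enumerate(perm):
        out[i] = scrambled[pos]
    return out
```

    Sensitivity to the initial condition means a receiver with even a slightly wrong key regenerates a completely different permutation, which is what gives chaotic scrambling its security.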

    W-POS language model and its selecting and matching algorithms
    QIU Yunfei, LIU Shixing, WEI Haichao, SHAO Liangshan
    2015, 35(8):  2210-2214.  DOI: 10.11772/j.issn.1001-9081.2015.08.2210
    Abstract   PDF (877KB)

    The n-grams language model uses text features composed of several consecutive words to train a classifier, but it contains many redundant words, and a large amount of sparse data is generated when n-grams are matched or quantified against test data, which badly degrades classification precision and limits its application. Therefore, an improved language model named W-POS (Word-Parts of Speech) was proposed based on the n-grams language model. After word segmentation, parts of speech were used to replace rare and redundant words, so the W-POS language model is composed of both words and parts of speech. The selection rules, selecting algorithm and matching algorithm of the W-POS language model were also put forward. The experimental results on the Fudan University Chinese Corpus and 20Newsgroups show that the W-POS language model not only inherits the advantages of n-grams, including reducing the number of features and carrying part of the semantics, but also overcomes the shortcomings of producing large amounts of sparse data and containing redundant words. The experiments also verify the effectiveness and feasibility of the selecting and matching algorithms.
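
    The word-or-POS substitution idea can be sketched simply: keep frequent words as features and back off to the part-of-speech tag for rare ones. The frequency threshold and the assumption that a POS tagger has already run are illustrative; the paper's actual selection rules are more elaborate.

```python
from collections import Counter

def build_wpos(tagged_docs, min_freq=2):
    """Replace words occurring fewer than min_freq times in the corpus with
    their part-of-speech tag; frequent words are kept as-is, so features mix
    words and POS tags. tagged_docs: list of [(word, pos), ...] sequences."""
    freq = Counter(w for doc in tagged_docs for w, _ in doc)
    return [[w if freq[w] >= min_freq else pos for w, pos in doc]
            for doc in tagged_docs]
```

    Collapsing rare words onto a handful of POS tags is what shrinks the feature space and removes most of the sparsity that plain n-grams suffer from.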

    Conflict detection model in collaborative design based on constraint
    YANG Kangkang, WU Shijing, LIU Yujie, ZHOU Lu
    2015, 35(8):  2215-2220.  DOI: 10.11772/j.issn.1001-9081.2015.08.2215
    Abstract   PDF (893KB)

    Focusing on the issue that conflicts are hard to detect accurately and comprehensively in collaborative design, a conflict detection model based on constraints was proposed. Considering hierarchical constraints and constraint satisfaction, the model divides constraints into two sets, one of known constraints and one of unknown constraints, which are detected separately: the set of known constraints is checked by the interval propagation algorithm, while a Back Propagation (BP) neural network is used to detect the set of unknown constraints. An Immune Algorithm (IA) was utilized to optimize the weights and thresholds of the BP neural network, and the steps of the optimization process were given. In comparison experiments with a BP neural network optimized by a Genetic Algorithm (GA), convergence speed increased by 69.96%, indicating that the IA-optimized BP neural network has better convergence speed and global search ability. The constraints were described in eXtensible Markup Language (XML) so that computers can automatically recognize them and establish the constraint network. The implementation of a conflict detection system based on constraint satisfaction was designed, and, taking the co-design of a wind turbine planetary gear train as an example, a conflict detection system for collaborative design was developed in Matlab with C#. The conflict detection model is proved to be feasible and effective, and provides a conflict detection solution for collaborative design.

    Hybrid sampling extreme learning machine for sequential imbalanced data
    MAO Wentao, WANG Jinwan, HE Ling, YUAN Peiyan
    2015, 35(8):  2221-2226.  DOI: 10.11772/j.issn.1001-9081.2015.08.2221
    Abstract   PDF (882KB)

    Many traditional machine learning methods tend to produce biased classifiers on sequential imbalanced data, which leads to lower classification precision for the minority class. To improve the classification accuracy of the minority class, a new hybrid sampling online extreme learning machine for sequential imbalanced data was proposed. The algorithm, which improves the classification accuracy of the minority class while limiting the loss of accuracy on the majority class, contains two stages. In the offline stage, principal curves are introduced to model the confidence regions of the minority and majority classes respectively based on a balanced-sample strategy; over-sampling of the minority class and under-sampling of the majority class are performed based on these confidence regions, and the initial model is established. In the online stage, only the most valuable majority-class samples are chosen according to sample importance, and the network weights are updated dynamically. The algorithm is theoretically proved to have an upper bound on information loss. Experiments were conducted on two UCI datasets and a real-world air pollutant forecasting dataset from Macao. The results show that, compared with existing methods such as Online Sequential Extreme Learning Machine (OS-ELM), Extreme Learning Machine (ELM) and Meta-Cognitive Online Sequential Extreme Learning Machine (MCOS-ELM), the proposed method has higher prediction precision and better numerical stability.
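
    The hybrid over-/under-sampling step can be sketched with plain random selection (the paper selects samples via principal-curve confidence regions, which this sketch does not reproduce): both classes are pulled toward the mid-point class size.

```python
import random

def hybrid_sample(X, y, minority, seed=0):
    """Balance a binary dataset toward the mid-point: duplicate random
    minority samples (over-sampling) and keep a random subset of majority
    samples (under-sampling)."""
    rng = random.Random(seed)
    minor = [(x, c) for x, c in zip(X, y) if c == minority]
    major = [(x, c) for x, c in zip(X, y) if c != minority]
    target = (len(minor) + len(major)) // 2
    minor = minor + [rng.choice(minor) for _ in range(target - len(minor))]
    major = rng.sample(major, target)
    rows = minor + major
    rng.shuffle(rows)
    return [x for x, _ in rows], [c for _, c in rows]
```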

    New orthogonal basis neural network based on quantum particle swarm optimization algorithm for fractional order chaotic time series single-step prediction
    LI Ruiguo, ZHANG Hongli, WANG Ya
    2015, 35(8):  2227-2232.  DOI: 10.11772/j.issn.1001-9081.2015.08.2227

    Since fractional order chaotic time series prediction suffers from low precision and slow speed, a prediction model using a new orthogonal basis neural network based on the Quantum Particle Swarm Optimization (QPSO) algorithm was proposed. Firstly, on the basis of the Laguerre orthogonal basis function, a new orthogonal basis function was put forward and combined with the neural network topology to form a new orthogonal basis neural network. Secondly, the QPSO algorithm was used to optimize the parameters of the new orthogonal basis neural network, transforming the parameter optimization problem into a function optimization problem in a multidimensional space. Finally, the prediction model was established with the optimized parameters. Taking the fractional order Birkhoff-Shaw and Jerk chaotic systems as models, chaotic time series generated by the Adams-Bashforth-Moulton predictor-corrector algorithm were used as the simulation objects. In single-step prediction experiments compared with the Back Propagation (BP) neural network, the Radial Basis Function (RBF) neural network and the plain new orthogonal basis neural network, the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) of the QPSO-based new orthogonal basis neural network were significantly reduced, its Coefficient of Determination (CD) was closer to 1, and its Mean Modeling Time (MMT) was greatly shortened. The theoretical analysis and simulation results show that the new orthogonal basis neural network based on the QPSO algorithm improves the precision and speed of fractional order chaotic time series prediction, and the prediction model can easily be extended and applied.
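    A bare-bones quantum-behaved PSO of the kind used for the parameter optimization can be sketched as follows; the sphere function stands in for the network's training error, and the swarm settings are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def qpso(f, dim=4, n=20, iters=100, beta=0.75):
    """Quantum-behaved PSO: particles collapse around a local attractor
    built from personal bests and the global best (beta is the
    contraction-expansion coefficient)."""
    X = rng.uniform(-5, 5, size=(n, dim))
    pbest = X.copy()
    pval = np.apply_along_axis(f, 1, X)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                 # mean best position
        phi = rng.uniform(size=(n, dim))
        p = phi * pbest + (1 - phi) * g            # local attractor
        u = rng.uniform(size=(n, dim))
        sign = np.where(rng.uniform(size=(n, dim)) < 0.5, -1.0, 1.0)
        X = p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
        val = np.apply_along_axis(f, 1, X)
        improved = val < pval
        pbest[improved], pval[improved] = X[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

sphere = lambda x: float(np.sum(x * x))            # stand-in objective
best, best_val = qpso(sphere)
print(best_val)
```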

    Multi-instance multi-label learning method based on topic model
    YAN Kaobi, LI Zhixin, ZHANG Canlong
    2015, 35(8):  2233-2237.  DOI: 10.11772/j.issn.1001-9081.2015.08.2237

    Concerning that most current methods for the Multi-Instance Multi-Label (MIML) problem do not consider how to represent object features more effectively, a new MIML approach combining the Probabilistic Latent Semantic Analysis (PLSA) model and a Neural Network (NN) was proposed based on the topic model. The proposed algorithm learned the latent topic allocation of all training examples using the PLSA model; this process is equivalent to feature learning that yields a better feature representation. It then used the latent topic allocation of each training example to train the neural network. Given a test example, the algorithm learned its latent topic distribution and fed the learned allocation into the trained neural network to obtain the multiple labels of the test example. The experimental results on comparison with two classical algorithms based on decomposition strategies show that the proposed method has superior performance on two real-world MIML tasks.

    Improved ant colony optimization for QoS-based Web service composition optimization
    NI Zhiwei, FANG Qinghua, LI Rongrong, LI Yiming
    2015, 35(8):  2238-2243.  DOI: 10.11772/j.issn.1001-9081.2015.08.2238

    The basic Ant Colony Optimization (ACO) algorithm searches slowly in the early stage and easily falls into local optima in the later stage. To overcome these shortcomings, an initial pheromone distribution strategy and a local optimization strategy were proposed, and a new pheromone updating rule was put forward to strengthen the effective accumulation of pheromone. The improved ACO was applied to the QoS-based Web service composition optimization problem, and its feasibility and effectiveness were verified on the QWS2.0 dataset. The experimental results show that, compared with the basic ACO, an improved ACO that updates the pheromone with the distance between the solution and the ideal solution, and an improved genetic algorithm that introduces individual domination strength into environment selection, the proposed ACO can find more Pareto solutions, and has stronger optimizing capacity and more stable performance.
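    The generic pheromone update such rules build on is evaporation plus a deposit on the edges of a solution; in the distance-based variant mentioned above, the deposit grows as the solution approaches the ideal point. The deposit formula and service names below are hypothetical stand-ins:

```python
def update_pheromone(tau, visited_edges, rho=0.1, q=1.0, dist_to_ideal=0.5):
    """Evaporate all trails, then deposit extra pheromone on the edges of
    one solution; deposits are larger for solutions closer to the ideal."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)                # evaporation on every edge
    deposit = q / (1.0 + dist_to_ideal)        # closer to ideal => more
    for edge in visited_edges:
        tau[edge] += deposit
    return tau

# Three candidate services s1..s3; one ant traversed s1->s2->s3
tau = {("s1", "s2"): 1.0, ("s2", "s3"): 1.0, ("s1", "s3"): 1.0}
tau = update_pheromone(tau, [("s1", "s2"), ("s2", "s3")])
print(tau)
```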

    Complete parameter-free local neighborhood preserving and non-local maximization algorithm
    LIN Yu'e, CHEN Jingyi, XU Guangyu, LIANG Xingzhu
    2015, 35(8):  2244-2248.  DOI: 10.11772/j.issn.1001-9081.2015.08.2244

    Parameter-free locality preserving projection requires no parameter setting and has stable performance, but it cannot effectively preserve the local structure of the samples, ignores the role of non-local samples, and suffers from the Small Sample Size (SSS) problem. A complete parameter-free local neighborhood preserving and non-local maximization algorithm was proposed. To make full use of the nearest-neighbor and non-nearest-neighbor samples, which were separated by whether the distance between two samples exceeds 0.5, a neighbor scatter matrix and a non-nearest-neighbor scatter matrix were constructed. The objective function of the algorithm then sought a set of projection vectors that simultaneously minimized the neighbor scatter and maximized the non-nearest-neighbor scatter. To solve the objective function, the high-dimensional samples were first projected to a low-dimensional subspace by the Principal Component Analysis (PCA) algorithm, which was proved by two theorems to lose no effective discriminant information. To solve the SSS problem, the objective function was converted to a difference form. The experimental results on face and palmprint databases illustrate that the proposed method outperforms parameter-free locality preserving projection in average recognition rate, which proves the effectiveness of the proposed algorithm.

    Constrained multiobjective optimization algorithm based on repairing strategy for solving dynamic environment/economic dispatch
    QIAN Shuqu, WU Huihong, XU Guofeng
    2015, 35(8):  2249-2255.  DOI: 10.11772/j.issn.1001-9081.2015.08.2249

    Classical multiobjective optimization algorithms have difficulty obtaining high-quality feasible solutions for the Multiobjective Dynamic Environment/Economic Dispatch (MODEED) model and converge slowly. Firstly, a new constraint repairing strategy was developed based on the constraint characteristics of the MODEED model. Secondly, the repairing strategy was embedded into the Nondominated Sorting Genetic Algorithm-Ⅱ (NSGAⅡ), yielding a Constrained Multiobjective Evolutionary Algorithm based on a Repairing strategy (CMEA/R). Thirdly, fuzzy decision theory was applied to determine the best compromise solution of the MODEED. Finally, to validate the optimization ability of CMEA/R, it was applied to the MODEED problem of the standard IEEE 30-bus 10-generator system and compared with NSGAⅡ under various population sizes. The simulation results reveal that, compared with NSGAⅡ, the pollutant emission and fuel cost obtained by CMEA/R were reduced by 480 lb (217.7 kg) and 7 800 dollars respectively, and the average running time was reduced by 0.021 second. Furthermore, CMEA/R shows superior performance in terms of the Hypervolume Rate (HR) indicator and convergence ability.
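    A constraint repair of the general kind described — adjusting generator outputs so that the power-balance equality holds while each unit stays within its limits — might look like the sketch below. The limits, demand and equal-share heuristic are made up for illustration and are not the paper's repairing strategy:

```python
def repair(powers, p_min, p_max, demand, iters=50):
    """Spread the power-balance violation equally across units, clipping
    each unit to its [p_min, p_max] limits, until output matches demand."""
    p = [min(max(x, lo), hi) for x, lo, hi in zip(powers, p_min, p_max)]
    for _ in range(iters):
        gap = demand - sum(p)                  # remaining balance violation
        if abs(gap) < 1e-9:
            break
        share = gap / len(p)                   # equal share per unit
        p = [min(max(x + share, lo), hi)
             for x, lo, hi in zip(p, p_min, p_max)]
    return p

# Hypothetical 3-unit system: outputs must sum to a demand of 150 MW
p = repair([10.0, 40.0, 80.0], [5, 20, 30], [50, 60, 100], demand=150.0)
print(p, sum(p))
```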

    Mobile robot obstacle avoidance based on improved fuzzy algorithm
    PENG Yuqing, LI Mu, ZHANG Yuanyuan
    2015, 35(8):  2256-2260.  DOI: 10.11772/j.issn.1001-9081.2015.08.2256

    In order to improve the obstacle avoidance performance of mobile robots in environments with continuous obstacles, a fuzzy obstacle avoidance algorithm with speed feedback was proposed. Ultrasonic sensors were utilized to perceive the surroundings, and based on fuzzy control, the mobile robot adjusted its speed according to the distribution of obstacles. Graceful degradation was then introduced and combined with the improved fuzzy obstacle avoidance to enhance the robustness of the mobile robot. The experimental results show that the method can adjust the speed through interaction with the environment, steer the robot without collisions, and optimize the obstacle avoidance path, demonstrating good effectiveness.

    Improved detection algorithm of AdaBoost
    LIU Pingguang, WEN Chengyu, DU Hong
    2015, 35(8):  2261-2265.  DOI: 10.11772/j.issn.1001-9081.2015.08.2261

    Considering the degradation of the traditional AdaBoost algorithm and the problem that the weight distribution of training targets becomes overly dispersed during face image training, an improved AdaBoost algorithm based on adjusting the margin of error and setting a threshold value was proposed. First, the sample weights were updated according to the comparison between the threshold value and the weighted matching errors of the current samples. Then, the emphasis of the training samples was controlled by adjusting the balance between positive and negative errors. The experimental results showed that different face image databases and different ratios of positive and negative errors had little effect on the validity of the improved AdaBoost algorithm. With a positive-to-negative error ratio of 1:1 on the unconstrained face database LFW, the detection rate was 86.7%, higher than that of the traditional AdaBoost algorithm, while the number of weak classifiers was 116, 15 more than that of the traditional AdaBoost algorithm. The results prove that the proposed algorithm suppresses the degradation and the over-dispersion of the training weight distribution, and effectively improves the face detection rate.
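    A thresholded reweighting of this flavor can be sketched as a variant of the standard AdaBoost update: misclassified samples are boosted as usual, but every weight is capped before renormalizing so no sample dominates training. The cap value and toy weights are hypothetical:

```python
import math

def update_weights(weights, errors, alpha, cap=0.2):
    """AdaBoost-style reweighting with an upper cap to curb degradation:
    misclassified samples (errors[i] == 1) get their weights boosted by
    exp(alpha), correct ones shrunk by exp(-alpha); every weight is then
    clipped at `cap` before renormalization."""
    w = [wi * math.exp(alpha if e else -alpha)
         for wi, e in zip(weights, errors)]
    w = [min(wi, cap) for wi in w]             # threshold runaway weights
    total = sum(w)
    return [wi / total for wi in w]

# Four samples, uniform weights; only the first was misclassified
w = update_weights([0.25] * 4, [1, 0, 0, 0], alpha=1.0)
print(w)
```

Without the cap, the misclassified sample's weight after this step would be about 0.71 of the total mass; the cap keeps the distribution from collapsing onto a few hard samples.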

    Polynomial interpolation algorithm framework based on osculating polynomial approximation
    ZHAO Xiaole, WU Yadong, ZHANG Hongying, ZHAO Jing
    2015, 35(8):  2266-2273.  DOI: 10.11772/j.issn.1001-9081.2015.08.2266

    Polynomial interpolation is a common approximation technique in approximation theory, widely used in numerical analysis, signal processing, and so on. Traditional polynomial interpolation algorithms are mainly developed by combining numerical analysis with experimental results, lacking a unified theoretical description and a regular solution procedure. A unified theoretical framework for polynomial interpolation algorithms based on osculating polynomial approximation theory was proposed. Existing interpolation algorithms can be analyzed and new algorithms developed under this framework, which consists of the number of sample points, the osculating order at the sample points, and the derivative approximation rules. The representation of existing mainstream interpolation algorithms within the proposed framework was analyzed, and the general process for developing new algorithms was demonstrated using a four-point, second-order osculating polynomial interpolation. Theoretical analysis and numerical experiments show that almost all mainstream polynomial interpolation algorithms belong to osculating polynomial interpolation, and that their effects are strongly related to the number of sample points, the osculating order, and the derivative approximation rules.
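    A familiar instance of this framework is four-point interpolation with first-order osculation at the two inner samples — the Catmull-Rom rule, whose derivative approximation rule is the central difference. This sketch shows one such instance, not the paper's new four-point second-order scheme:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic through p1..p2 for t in [0, 1]; the slopes at p1 and p2 are
    the central differences (p2 - p0)/2 and (p3 - p1)/2, i.e. a four-point
    rule with first-order osculation at the inner samples."""
    m1 = 0.5 * (p2 - p0)                       # derivative estimate at p1
    m2 = 0.5 * (p3 - p1)                       # derivative estimate at p2
    t2, t3 = t * t, t * t * t
    return ((2*t3 - 3*t2 + 1) * p1 + (t3 - 2*t2 + t) * m1
            + (-2*t3 + 3*t2) * p2 + (t3 - t2) * m2)

print(catmull_rom(0.0, 1.0, 2.0, 3.0, 0.5))    # -> 1.5 (linear data stays linear)
```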

    Uniform SILTP based background modeling and its implementation on Intel HD graphics
    LIN Zecheng, ZHU Jianqing, LIAO Shengcai, LI Stan Z.
    2015, 35(8):  2274-2279.  DOI: 10.11772/j.issn.1001-9081.2015.08.2274

    Since the Scale Invariant Local Ternary Pattern (SILTP) background modeling algorithm is computationally complex and slow, making it unsuitable for real-time video processing, a Uniform Scale Invariant Local Ternary Pattern (USILTP) background modeling algorithm was proposed. Firstly, the USILTP feature was extracted by regulating the frequency of SILTP coding jumps in order to reduce the feature dimension of SILTP. Secondly, a USILTP background modeling parallel algorithm based on Intel integrated graphics (Intel HD) and the Open Computing Language (OpenCL) was designed and implemented to further accelerate the algorithm. Finally, the foreground result of USILTP background modeling was optimized by combining multiple color channel models. The experimental results show that the proposed algorithm can process 320×240 video at 98 frame/s on the Intel HD 4600, which is 4 times faster than the SILTP background modeling algorithm. In terms of foreground detection, the performance of the proposed algorithm is improved by 2.1% compared with SILTP background modeling on a public dataset.
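    The SILTP coding at the heart of the method assigns each neighbor a ternary code relative to a scale-invariant tolerance band around the center pixel; a "uniform" pattern in the LBP sense is one with few transitions between adjacent codes. A minimal sketch (the tolerance and pixel values are illustrative):

```python
def siltp_code(center, neighbors, tau=0.05):
    """Scale-invariant local ternary pattern: each neighbor is coded
    '01' (brighter), '10' (darker) or '00' (similar) relative to a
    tolerance band of width tau around the center intensity."""
    upper, lower = center * (1 + tau), center * (1 - tau)
    bits = []
    for g in neighbors:
        if g > upper:
            bits.append("01")
        elif g < lower:
            bits.append("10")
        else:
            bits.append("00")
    return "".join(bits)

def jumps(code):
    """Count circular transitions between adjacent 2-bit codes; uniform
    patterns keep this count low, which shrinks the feature dimension."""
    pairs = [code[i:i + 2] for i in range(0, len(code), 2)]
    return sum(a != b for a, b in zip(pairs, pairs[1:] + pairs[:1]))

c = siltp_code(100, [120, 98, 80, 101])
print(c, jumps(c))
```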

    Fourier representation, rendering techniques and applications of periodic dynamic images
    LYU Ruimin, CHEN Wei, MENG Lei, CHEN Lifang, WU Haotian, LI Jingyuan
    2015, 35(8):  2280-2284.  DOI: 10.11772/j.issn.1001-9081.2015.08.2280

    In order to create novel artistic effects, a periodic dynamic image model was proposed, in which each element is a periodic function. Instead of using an array of color pixels to represent a digital image, a Fourier model was used to represent a periodic dynamic image as an array of functional pixels, and the output of each pixel was computed by a Fourier synthesis process. Three applications with three rendering styles were then put forward to visually display periodic dynamic images: dynamic painting, dynamic distortion effects and dynamic speech balloons. A prototype system was constructed and a series of experiments were performed. The results demonstrate that the proposed method can effectively explore the novel artistic effects of periodic dynamic images and can serve as a new art medium.
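    A functional pixel in such a model stores Fourier coefficients instead of a single color, and synthesizes its value over time; a minimal sketch with one made-up pixel:

```python
import math

def pixel_value(t, a0, coeffs, period=1.0):
    """Fourier synthesis for one functional pixel: mean value a0 plus
    harmonics (a_k, b_k) at multiples of the fundamental frequency."""
    w = 2.0 * math.pi / period
    return a0 + sum(a * math.cos(k * w * t) + b * math.sin(k * w * t)
                    for k, (a, b) in enumerate(coeffs, start=1))

# One hypothetical pixel: mid-gray 0.5 pulsing with a single harmonic
coeffs = [(0.25, 0.0)]
print(pixel_value(0.0, 0.5, coeffs))   # 0.75 at phase 0
print(pixel_value(0.5, 0.5, coeffs))   # 0.25 half a period later
```

Rendering a frame at time t then just evaluates every functional pixel at t, so the image animates with period `period`.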

    Improved binary robust invariant scalable keypoints algorithm fusing depth information
    ZHANG Heng, LIU Dayong, LIU Yanli, NIE Chenxi
    2015, 35(8):  2285-2290.  DOI: 10.11772/j.issn.1001-9081.2015.08.2285

    To effectively utilize the depth information of RGB-D (Red Green Blue and Depth) images and enhance the scale and rotation invariance of the BRISK (Binary Robust Invariant Scalable Keypoints) algorithm, an improved BRISK algorithm incorporating depth information was proposed. Firstly, keypoints were detected by the FAST (Features from Accelerated Segment Test) algorithm and their Harris corner response values were computed. Then, the entire image was divided into grids of equal size, and within each grid only the keypoint with the maximum Harris corner response was retained. Next, the scale factor of each keypoint was computed directly from the depth information of the image. Finally, the intensity centroid of the circular region centered on the keypoint was calculated, and the orientation of the keypoint was given by the offset to its intensity centroid. A comparison of several algorithms on scale and rotation invariance was performed. The experimental results show that, compared with the BRISK algorithm, the number of correctly matched keypoints of the improved algorithm increases by more than 90% under scale changes and by at least 70% under rotation.

    Contrast restoration algorithm for single image based on physical model
    WANG Fan, YANG Yan, BAI Haiping
    2015, 35(8):  2291-2294.  DOI: 10.11772/j.issn.1001-9081.2015.08.2291

    Concerning that parameter estimation in defogging algorithms based on image restoration easily causes the loss of scene information, a new defogging algorithm for a single image was proposed. On the basis of the dark channel prior method, the atmospheric scattering model was analyzed and the influence of fog distribution on the dark channel image was summarized, which served as the basis for adding fog to outdoor images. The transmittance was estimated through the depth-of-field relationship between the fog-added reference image and the outdoor image to be defogged. The algorithm used the physical model and multiple images to estimate the relevant parameters, retaining scene information better. The experimental results show that the proposed algorithm is more effective than the comparison algorithms, and its processing speed is also significantly improved.
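    The dark channel prior the method builds on takes, for each pixel, the minimum color value within a local patch; a NumPy sketch on a random test image (patch size is illustrative):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: per-pixel minimum over the color channels, followed
    by a local minimum filter of size `patch` x `patch`."""
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)                  # min over R, G, B
    r = patch // 2
    padded = np.pad(min_rgb, r, mode="edge")
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

rng = np.random.default_rng(2)
img = rng.uniform(0.0, 1.0, size=(8, 8, 3))    # toy RGB image in [0, 1]
dc = dark_channel(img)
print(dc.shape, float(dc.max()))
```

In haze-free outdoor images the dark channel is close to zero almost everywhere; the deviation from zero is what drives the transmittance estimate.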

    Adaptive slicing algorithm to retain model characteristics
    LI Wenkang, CHEN Changbo, WU Wenyuan
    2015, 35(8):  2295-2300.  DOI: 10.11772/j.issn.1001-9081.2015.08.2295

    To resolve the problem that existing adaptive slicing algorithms in 3D printing cannot effectively retain model characteristics, a new adaptive slicing method that recognizes and retains model characteristics was proposed. Firstly, the definition of a model characteristic was extended, and the concepts of loss and offset of model characteristics were introduced. Secondly, a characteristic recognition method was proposed, whose key point is that the surface complexity and the number of contours must change around model characteristics. Finally, building on existing adaptive slicing algorithms, the proposed algorithm retained model characteristics by slicing the model with the minimum layer thickness near the characteristics. The uniform slicing algorithm, an existing adaptive slicing algorithm and the proposed slicing algorithm were implemented on the self-developed software Slicer3DP. Comparison shows that the proposed algorithm effectively resolves the loss and offset of model characteristics while maintaining both slicing precision and efficiency, indicating that the proposed method can be used for 3D printing with high precision requirements.

    Low-illumination image enhancement based on physical model
    WANG Xiaoyuan, ZHANG Hongying, WU Yadong, LIU Yan
    2015, 35(8):  2301-2304.  DOI: 10.11772/j.issn.1001-9081.2015.08.2301

    Since a low-illumination image becomes a pseudo fog map after inversion, and the "fog" concentration of this pseudo fog map is determined by illumination rather than depth of field, a low-illumination image enhancement method based on a physical model was proposed, providing a fast and accurate way to estimate the transmittance. Firstly, the dark channel prior was used to estimate the atmospheric light value of the pseudo fog map, and the transmittance was estimated according to the illumination. Secondly, the fog-free image was restored based on the atmospheric scattering model. Finally, the enhanced image was obtained by inverting the fog-free image, and a clear image was obtained by applying detail compensation to the enhanced image. Extensive experiments show that, compared with existing algorithms including enhancement based on the dark channel prior, defogging techniques and multi-scale Retinex with color restoration, the proposed algorithm is faster and performs well without losing information, and it can improve the efficiency of image analysis and recognition systems.

    Denoising algorithm for bilateral filtered point cloud based on noise classification
    YUAN Hua, PANG Jiankeng, MO Jianwen
    2015, 35(8):  2305-2310.  DOI: 10.11772/j.issn.1001-9081.2015.08.2305

    Focusing on the issue that noise of different scales exists in the denoising and smoothing of 3D point cloud data, a bilateral filtering denoising algorithm for 3D point clouds based on noise classification was proposed. Firstly, the noise points were subdivided into large-scale and small-scale noise, and the large-scale noise was removed with statistical filtering and radius filtering. Secondly, the curvature of the 3D point cloud was estimated, and the bilateral filter was improved to enhance its robustness and safety. Finally, the small-scale noise was smoothed with the improved bilateral filter to achieve the smoothing and denoising of the 3D point cloud. Compared with algorithms based on plain bilateral filtering or Fleishman bilateral filtering, the average smoothing error of the 3D point cloud model obtained by the proposed method decreased by 50.53% and 21.67% respectively. The experimental results show that the proposed algorithm increases computational efficiency through the scale subdivision of noise points, and avoids excessive smoothing and detail distortion, better maintaining the geometric characteristics of the model.
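    The statistical filtering used for large-scale noise is commonly the mean-distance criterion: points whose average distance to their k nearest neighbors is far above the global mean are discarded. A NumPy sketch on a toy cloud (k and the threshold ratio are illustrative):

```python
import numpy as np

def statistical_filter(points, k=4, std_ratio=1.5):
    """Drop points whose mean k-NN distance exceeds mean + std_ratio*std
    of the per-point mean distances (classic statistical outlier removal)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    knn = np.sort(d, axis=1)[:, 1:k + 1]       # skip the zero self-distance
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]

rng = np.random.default_rng(3)
cloud = rng.normal(size=(50, 3))               # dense cluster of 50 points
outlier = np.array([[30.0, 30.0, 30.0]])       # one far-away noise point
kept = statistical_filter(np.vstack([cloud, outlier]))
print(len(kept))
```

The pairwise-distance matrix makes this O(n²); for large clouds a k-d tree would replace it, but the criterion is the same.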

    Pedestrian tracking method based on road environment context
    FANG Yi, JI Zhiyuan, SHENG Hao
    2015, 35(8):  2311-2315.  DOI: 10.11772/j.issn.1001-9081.2015.08.2311

    Since tracking pedestrians in scenes mixed with moving vehicles remains a difficult multi-object tracking problem, a pedestrian tracking method based on road environment context was proposed. Firstly, with the analysis of the road environment context, an interaction algorithm based on the path model was proposed for predicting pedestrians' motion; then a model of pedestrians mixed with vehicles was introduced. Finally, the model was applied to pedestrian tracking. The experimental results show that, compared with the discrete-continuous tracking approach, the Multiple Object Tracking Accuracy (MOTA) of the proposed algorithm grows from 47.6% to 63.2% and the Multiple Object Tracking Precision (MOTP) grows from 68.8% to 74.3%. The results prove that road environment context effectively improves pedestrian tracking in scenes with mixed vehicle traffic.

    Study of human motion tracking system based on wireless sensor network
    CHEN Pengzhan, LI Jie, LUO Man
    2015, 35(8):  2316-2320.  DOI: 10.11772/j.issn.1001-9081.2015.08.2316

    To solve the problems of attitude drift, poor real-time performance and high price in motion capture systems based on inertial sensors, a real-time motion capture system was designed that effectively overcomes attitude drift with low cost and power consumption. First, distributed joint motion capture nodes were built based on human body kinematics, and every node worked in a low-power mode: when the data acquired by a node fell below a predetermined threshold, the node automatically entered sleep mode to reduce the power consumption of the system. To reduce the data drift of traditional algorithms, an algorithm combining inertial navigation with Kalman filtering was designed to calculate real-time motion data. Using a Wi-Fi module, the TCP/IP protocol was adopted to transmit the attitude data, which could drive the model in real time. Finally, the accuracy of the algorithm was evaluated on a multi-axis motor test platform, and the system's ability to track real human motion was compared. The experimental results show that, compared with the traditional complementary filtering algorithm, the proposed algorithm has higher accuracy, keeping the angle drift below one degree, and exhibits no obvious lag, realizing accurate tracking of human motion.
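    The traditional complementary filter used as the baseline fuses gyroscope integration (accurate at high frequency but drifting) with accelerometer-derived angles (noisy but drift-free); a one-axis sketch with an illustrative gain and made-up sensor readings:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """One-axis complementary filter: blend the integrated gyro rate
    (weight alpha) with the accelerometer angle (weight 1 - alpha) to
    bound the drift that pure integration would accumulate."""
    angle = accel_angles[0]
    history = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        history.append(angle)
    return history

# Constant true angle of 10 deg: the gyro reads pure drift (0.5 deg/s),
# the accelerometer reads the truth; pure integration would end 10 deg off.
n = 2000
angles = complementary_filter([0.5] * n, [10.0] * n)
print(angles[-1])
```

The accelerometer term pulls the estimate back toward 10 degrees, so the steady-state error stays a fraction of a degree instead of growing without bound; a Kalman filter additionally adapts the blend from the sensor noise statistics.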

    Real-time face pose estimation system based on 3D face model on Android mobile platform
    WANG Haipeng, WANG Zhengliang, XU Weiwei, FAN Ran
    2015, 35(8):  2321-2326.  DOI: 10.11772/j.issn.1001-9081.2015.08.2321

    Concerning the high performance requirements of face pose estimation systems, which prevent them from running on mobile phones in real time, a real-time face pose estimation system was realized for Android mobile phone terminals. First, one frontal face image and one face image with a certain offset angle were captured by the camera to establish a simple 3D face model using the Structure from Motion (SfM) algorithm. Secondly, the system matched corresponding feature points between the real-time face image and the 3D face model, and the 3D face pose parameters were obtained by the POSIT (Pose from Orthography and Scaling with ITeration) algorithm. Finally, the 3D face model was displayed on Android mobile terminals in real time using OpenGL (Open Graphics Library). The experimental results showed that the speed of detecting and displaying the face pose reached 20 frame/s on real-time video, which is close to that of the 3D face pose estimation algorithm based on affine correspondence running on computer terminals, and the speed of detecting a large batch of image sequences reached 50 frame/s. The results indicate that the system satisfies the performance constraints of Android mobile phone terminals and the real-time requirements of face pose detection.

    Fast intra prediction algorithm for high efficiency video coding
    ZHANG Jun, DONG Lanfang, YU Jiakui
    2015, 35(8):  2327-2331.  DOI: 10.11772/j.issn.1001-9081.2015.08.2327

    Concerning the high computational complexity of the Coding Unit (CU) quad-tree splitting algorithm in High Efficiency Video Coding (HEVC) intra prediction, a fast intra CU splitting algorithm that narrows the depth range of CU splitting was proposed. Firstly, multiple texture features were obtained through a newly defined texture extraction algorithm. Secondly, a Support Vector Machine (SVM) was employed to train on the feature parameters and obtain a decision function. Finally, based on the decision function, the CU splitting depth range was narrowed by skipping unnecessary splitting and terminating splitting early. The experimental results show that, compared with the reference model HM 12.0, the fast CU splitting algorithm saves about 43.23% of encoding time on average with a 0.84% increase in bit-rate, which obviously improves coding efficiency. Besides, the proposed algorithm is easy to combine with other methods to further reduce the computational complexity of HEVC intra coding.

    Liver tumor CT image segmentation method using multi-scale morphology of eliminating local minima
    CHEN Lu, WANG Xiaopeng, ZHANG Huawei, WU Shuang
    2015, 35(8):  2332-2335.  DOI: 10.11772/j.issn.1001-9081.2015.08.2332

    Many methods for liver tumor Computed Tomography (CT) image segmentation have difficulty delineating the tumor accurately due to inhomogeneous gray levels and fuzzy edges. To obtain precise segmentation results, a multi-scale morphology method for eliminating local minima was proposed. Firstly, a morphological area operation was used to remove small burrs and irregular edges from the image so as to avoid boundary migration. Secondly, the local minima in the gradient image were distinguished by combining statistical characteristics with morphological properties, including depth and scale; after this partition, a functional relationship was established between multi-scale structuring elements and local minima. A morphological close operation was then employed to adaptively modify the image, filtering noise with large structuring elements while preserving major objects with small ones. Finally, the standard watershed transform was applied to segment the liver tumor. The experimental results show that this method can effectively reduce over-segmentation, segmenting the liver tumor accurately while locating object boundaries precisely.

    Speech denoising algorithm based on singular spectrum analysis and Wiener filtering
    JIN Liyan, CHEN Li, FAN Taiting, GAO Jing
    2015, 35(8):  2336-2340.  DOI: 10.11772/j.issn.1001-9081.2015.08.2336

    Concerning that the Wiener filtering algorithm causes signal distortion at low Signal-to-Noise Ratio (SNR) when dealing with noisy non-stationary speech signals, a new speech denoising algorithm named SSA-WF was proposed, combining Singular Spectrum Analysis (SSA) and Wiener Filtering (WF). To obtain a speech signal as smooth as possible, SSA was first used to denoise the nonlinear and non-stationary speech signal and improve the SNR of the noisy speech. The processed signal was then fed into the WF stage to further eliminate the high-frequency noise still present in the speech signal. Simulation results under background noise of different intensities show that the proposed algorithm is superior to traditional methods in SNR and Root-Mean-Square Error (RMSE), demonstrating that it can not only remove background noise efficiently but also preserve the details of the original signal, making it suitable for denoising nonlinear and non-stationary speech signals.
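    The SSA stage embeds the 1-D signal into a trajectory (Hankel) matrix, keeps the leading singular components, and reconstructs by anti-diagonal averaging. A compact NumPy sketch on a synthetic noisy sine (window length and rank are illustrative choices):

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """SSA denoising: Hankel embedding -> truncated SVD -> diagonal
    averaging back to a 1-D signal."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # low-rank part
    out = np.zeros(n)
    count = np.zeros(n)
    for j in range(k):                         # average the anti-diagonals
        out[j:j + window] += Xr[:, j]
        count[j:j + window] += 1
    return out / count

rng = np.random.default_rng(4)
t = np.linspace(0, 4 * np.pi, 400)
clean = np.sin(t)
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = ssa_denoise(noisy, window=40, rank=2)
err_noisy = np.mean((noisy - clean) ** 2)
err_den = np.mean((denoised - clean) ** 2)
print(err_noisy, err_den)
```

Rank 2 suffices here because a single sinusoid occupies two singular components; speech would keep more components, with the residual passed on to the Wiener filter.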

    Speech enhancement algorithm based on microphone array under multiple noise environments
    MA Jinlong, ZENG Qingning, HU Dan, LONG Chao, XIE Xianming
    2015, 35(8):  2341-2344.  DOI: 10.11772/j.issn.1001-9081.2015.08.2341

    Hearing aids used in environments with non-stationary or multiple noise sources suffer a sharp decline in speech enhancement effect and user experience. To address this, a Coherent Filter Generalized Sidelobe Canceller (CF-GSC) speech enhancement algorithm based on a small microphone array was proposed. For the weakly correlated noise caused by waves, fans and other approximately white sources, and the strongly correlated noise caused by point or other competing sources, coherent filtering and the traditional Generalized Sidelobe Canceller (GSC) structure were utilized to remove weakly and strongly correlated noise separately, with a Voice Activity Detection (VAD) algorithm applied in the process. The simulation results show that the proposed algorithm achieves an enhancement of almost 2 dB over the improved coherent filter and the traditional generalized sidelobe canceller under a variety of noise environments, while speech intelligibility is also obviously improved.

    Membership of mixed dependency set in strong partial ordered temporal scheme
    WAN Jing, LIU Fang
    2015, 35(8):  2345-2349.  DOI: 10.11772/j.issn.1001-9081.2015.08.2345

    The solution of the membership problem is essential to designing a usable scheme decomposition algorithm. Because of the partial order among temporal types in a strong partial ordered temporal scheme, its membership problem is difficult to solve. The concepts of the mixed dependency base on a given temporal type, the mixed dependency base in a strong partial ordered scheme, the mixed set closure of partial temporal functional dependency and temporal multi-valued dependency, and the mixed closure of a strong partial ordered scheme were given. Algorithms for the dependency base of an attribute and for the closure of attribute sets were also given. On this basis, an algorithm for the membership problem of the mixed dependency set in a strong partial ordered scheme was put forward, together with proofs of its termination and correctness and an analysis of its time complexity. Application examples show that the proposed theory and algorithms solve the determination of the membership problem for strong partial ordered mixed dependencies, and provide a theoretical basis for solving strong partial ordered temporal schemes and for the design of temporal database normalization.
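    The attribute-closure computation underlying such membership tests follows the classical fixpoint algorithm; this sketch handles ordinary functional dependencies only, omitting the paper's temporal-type ordering and multi-valued dependencies, and the dependencies shown are made up:

```python
def closure(attrs, fds):
    """Closure of an attribute set under functional dependencies:
    repeatedly fire every FD whose left-hand side is already covered,
    until no new attributes can be added."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# Membership test: does A -> C follow from the dependency set?
fds = [("A", "B"), ("B", "C"), ("CD", "E")]
print(sorted(closure("A", fds)))        # attributes derivable from A
print("C" in closure("A", fds))         # A -> C holds iff C is in closure(A)
```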

    Information measures for interval-valued fuzzy soft sets and their clustering algorithm
    PENG Xindong, YANG Yong
    2015, 35(8):  2350-2354.  DOI: 10.11772/j.issn.1001-9081.2015.08.2350
    Abstract ( )   PDF (793KB) ( )  
    References | Related Articles | Metrics

    Focusing on precise definitions of information measures for interval-valued fuzzy soft sets, the distance, similarity, entropy, inclusion and subsethood measures of interval-valued fuzzy soft sets were introduced. A series of formulae for these information measures were presented, and the transformation relationships among them were discussed. Then, combining the characteristics of interval-valued fuzzy soft sets, a clustering algorithm based on the similarity measure was developed, which clusters experts with similar levels of knowledge according to their evaluations of objects. Meanwhile, the computational complexity of the algorithm was discussed. Finally, a practical example was given to demonstrate that the proposed algorithm can effectively handle the problem of clustering experts.
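    One common way to define such a distance/similarity pair (not necessarily the paper's exact formulae) is a normalized Hamming distance over the interval bounds, with similarity as its complement:

```python
import numpy as np

def ivfss_distance(A, B):
    """Normalized Hamming-style distance between two interval-valued fuzzy
    soft sets over the same universe and parameter set, each represented as
    an (n_objects, n_parameters, 2) array of [lower, upper] membership
    bounds. Illustrative formula, assumed for this sketch."""
    assert A.shape == B.shape
    return float(np.mean(np.abs(A - B)))

def ivfss_similarity(A, B):
    """Similarity derived from the distance: s = 1 - d, so s lies in [0, 1]."""
    return 1.0 - ivfss_distance(A, B)
```

    With such a pair, experts can be clustered by thresholding the pairwise similarity of the evaluation matrices they produce.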

    Fast unsupervised feature selection algorithm based on rough set theory
    BAI Hexiang, WANG Jian, LI Deyu, CHEN Qian
    2015, 35(8):  2355-2359.  DOI: 10.11772/j.issn.1001-9081.2015.08.2355
    Abstract ( )   PDF (773KB) ( )  
    References | Related Articles | Metrics

    Focusing on the issue that feature selection for the large scale data sets commonly encountered in big data is too slow to meet practical requirements, a fast feature selection algorithm for unsupervised massive data sets was proposed, based on the incremental absolute reduction algorithm of traditional rough set theory. Firstly, the large scale data set was regarded as a random object sequence and the candidate reduct was initialized to the empty set. Secondly, objects were drawn one by one at random from the data set without replacement; each drawn object was checked against the set of objects already drawn: if the new object could not be distinguished from some existing object using the candidate reduct, an attribute that distinguishes them was added to the candidate reduct, and the new object was then merged into the current object set. Finally, if I successive objects were all distinguishable using the candidate reduct, the candidate reduct was taken as the reduct of the whole data set. Experiments on five unsupervised large-scale data sets demonstrated that a reduct distinguishing no less than 95% of object pairs could be found within 1% of the time needed by the discernibility matrix based algorithm and the incremental absolute reduction algorithm. In a text topic mining experiment, the topics found on the reduced data set were consistent with those of the original data set. The experimental results show that the proposed algorithm can obtain effective reducts for large scale data sets in practical time.
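    The sampling loop described above can be sketched as follows. This is an assumed reading of the procedure, with `stop_after` playing the role of the threshold I and objects represented as attribute-value tuples:

```python
import random

def fast_unsupervised_reduct(objects, attributes, stop_after=100):
    """Sketch of a sampling-based incremental reduct: draw objects without
    replacement, grow the reduct whenever a drawn object is indistinguishable
    from an already-seen one, and stop after `stop_after` consecutive
    distinguishable draws."""
    reduct = set()
    seen = []
    streak = 0
    pool = list(objects)
    random.shuffle(pool)  # draw in random order, without replacement
    for obj in pool:
        # A previously seen object indistinguishable under the current reduct
        clash = next((o for o in seen
                      if o != obj and all(o[a] == obj[a] for a in reduct)), None)
        if clash is not None:
            # Add one attribute that tells obj and clash apart
            for a in attributes:
                if obj[a] != clash[a]:
                    reduct.add(a)
                    break
            streak = 0
        else:
            streak += 1
            if streak >= stop_after:
                break
        seen.append(obj)
    return reduct
```

    Because only a prefix of the random sequence is examined, the cost is independent of the full data set size once the candidate reduct stabilizes.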

    Variable precision rough set model based on variable-precision tolerance relation
    ZHENG Shumei, XU Xinying, XIE Jun, YAN Gaowei
    2015, 35(8):  2360-2365.  DOI: 10.11772/j.issn.1001-9081.2015.08.2360
    Abstract ( )   PDF (979KB) ( )  
    References | Related Articles | Metrics

    Focusing on the poor robustness of existing extended rough set models in the presence of noise in incomplete information systems, the necessity of adjusting the size of the basic knowledge granule as well as introducing a relative degree of misclassification was analyzed. A Variable Precision Rough Set model based on a Variable-Precision Tolerance Relation (VPRS-VPTR) was then established on the basis of an object connection weight matrix, which was derived from the missing probability of attribute values in the system. Moreover, the properties of the VPRS-VPTR model were discussed, the classification accuracy under different basic knowledge granule sizes and relative degrees of misclassification was analyzed, the corresponding algorithm was described, and its time complexity was given. The experimental results show that the VPRS-VPTR model achieves higher classification accuracy than other extended rough set models, and that the classification accuracy follows a similar trend on the training and test sets of several incomplete data sets from the UCI database, which proves that the proposed model is more precise and flexible, and that the algorithm is feasible and effective.
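    The relative degree of misclassification enters the model the same way as in Ziarko's classical variable precision rough sets: a knowledge granule is admitted to the lower approximation when its misclassification rate with respect to the target set is at most a threshold beta. A minimal sketch of that inclusion test (the tolerance-relation granulation itself is the paper's contribution and is not reproduced here):

```python
def vp_lower_approximation(granules, target, beta=0.2):
    """Variable-precision lower approximation: admit a granule when its
    relative degree of misclassification w.r.t. `target` is at most beta."""
    lower = set()
    target = set(target)
    for g in granules:
        g = set(g)
        error = 1.0 - len(g & target) / len(g)  # misclassification rate
        if error <= beta:
            lower |= g
    return lower
```

    Raising beta makes the approximation more tolerant of noisy granules; beta = 0 recovers the classical lower approximation.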

    New attribute reduction algorithm of neighborhood rough set based on distinguished object set
    LIANG Hailong, XIE Jun, XU Xinying, REN Mifeng
    2015, 35(8):  2366-2370.  DOI: 10.11772/j.issn.1001-9081.2015.08.2366
    Abstract ( )   PDF (695KB) ( )  
    References | Related Articles | Metrics

    Since attribute reduction algorithms based on the positive region follow the idea of lower approximation, they only consider correctly distinguished samples. Using the idea of upper approximation and the concept of the neighborhood information granule, the distinguished object set and its basic characteristics were defined and analyzed; then a new attribute importance measure based on the distinguished object set and a heuristic attribute reduction algorithm were proposed. The proposed algorithm considers both the relative positive region of the information decision table and the influence on boundary samples as condition attributes are added. The feasibility of the algorithm was illustrated by an example, and comparative experiments with positive-region-based attribute reduction algorithms were carried out on UCI data sets. The experimental results show that the proposed algorithm obtains better reducts, while the classification precision on the sample sets remains the same or improves.
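    A neighborhood information granule is typically defined as the set of objects lying within a radius delta of a given object under some metric on the selected condition attributes. A small numpy sketch with an assumed Euclidean metric and radius:

```python
import numpy as np

def neighborhood(data, i, attrs, delta=0.2):
    """Neighborhood information granule of object i: all objects within
    Euclidean distance delta of object i on condition attributes `attrs`
    (metric and radius are assumptions of this sketch)."""
    diff = data[:, attrs] - data[i, attrs]
    dist = np.linalg.norm(diff, axis=1)
    return set(np.nonzero(dist <= delta)[0].tolist())
```

    Granules computed this way under a growing attribute subset are the basic units from which both the positive region and the distinguished object set are built.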

    SIMD compiler optimization by selecting single or double word mode for clustered VLIW DSP
    HUANG Shengbing, ZHENG Qilong, GUO Lianwei
    2015, 35(8):  2371-2374.  DOI: 10.11772/j.issn.1001-9081.2015.08.2371
    Abstract ( )   PDF (606KB) ( )  
    References | Related Articles | Metrics

    BWDSP100 is a 32-bit static scalar Digital Signal Processor (DSP) with Very Long Instruction Word (VLIW) and Single Instruction Multiple Data (SIMD) features, designed for high-performance computing. Its Instruction Level Parallelism (ILP) is obtained through clustering and special SIMD instructions. However, the existing compiler framework cannot generate these SIMD instructions. Since BWDSP100 provides abundant SIMD vectorization resources and radar digital signal processing places very high demands on program performance, an SIMD optimization supporting the selection of single or double word mode was implemented on top of the traditional Open64 compiler according to the characteristics of the BWDSP100 architecture; it can significantly improve the performance of the compute-intensive programs widely used in the DSP field. The experimental results show that the optimization achieves an average speedup of 5.66 over the unoptimized code.

    Self-repair method for autonomous underwater vehicle software based on micro-reboot and partially observable Markov decision process model
    ZHANG Rubo, MENG Lei, SHI Changting
    2015, 35(8):  2375-2379.  DOI: 10.11772/j.issn.1001-9081.2015.08.2375
    Abstract ( )   PDF (811KB) ( )  
    References | Related Articles | Metrics

    Aiming at the high repair cost and the partial observability of the system environment in the process of repairing Autonomous Underwater Vehicle (AUV) software faults, a failure repair method for AUV software was proposed based on the micro-reboot mechanism and the Partially Observable Markov Decision Process (POMDP) model. To enable a fine-grained micro-reboot self-repair strategy, a hierarchical micro-reboot structure was built according to the characteristics of AUV software. Meanwhile, a self-repair model was formulated using POMDP theory. With the goal of minimizing the repair cost, the repair strategy was solved by the Point Based Value Iteration (PBVI) algorithm, which allows repair actions to be executed at low cost in a partially observable environment. The simulation results show that the proposed method can repair AUV software failures caused by software aging and system calls. Compared with two-tier and three-tier micro-reboot repair strategies, the method is clearly superior in cumulative fault repair time and operational stability.

    Crowdsourcing quality control based on reputation model of Dempster-Shafer theory
    RUAN Shanshan, WANG Xiaoping, XUE Xiaoping
    2015, 35(8):  2380-2385.  DOI: 10.11772/j.issn.1001-9081.2015.08.2380
    Abstract ( )   PDF (956KB) ( )  
    References | Related Articles | Metrics

    Since existing crowdsourcing models cannot detect malicious behavior in a crowdsourcing system quickly and efficiently, a reputation model based on Dempster-Shafer theory, called DS_CQC (Dempster/Shafer Crowdsourcing Quality Control), was proposed for crowdsourcing quality control. Firstly, time-window based sequences of sustained credible evidence and sustained incredible evidence were obtained. Secondly, the original D-S evidence theory was improved in three aspects, namely the importance of evidence, the relationship among pieces of evidence and the reliability of witnesses, yielding a new basic probability assignment function. Finally, the evidence sequences were fused using the improved D-S evidence theory, and the direct, indirect and comprehensive reputations were computed. A reputation-based incentive mechanism was used to encourage people to participate actively in crowdsourcing and submit higher-quality work, while malicious workers were suppressed. Experiments on simulated and real crowdsourcing data show that, compared with a probabilistic trust model, DS_CQC detects malicious behavior at least 50% faster and with at least 3.1% higher efficiency, which proves that DS_CQC has high anti-attack capability.
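    The fusion step rests on Dempster's rule of combination. A minimal sketch of the classical (unimproved) rule over frozenset focal elements, before the paper's importance/relationship/reliability adjustments are applied:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments
    mapping frozenset focal elements to masses; conflicting (empty) products
    are removed and the result renormalized."""
    combined = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

    Fusing two pieces of evidence that each lean towards "credible" concentrates the combined mass on the credible hypothesis, which is how sustained evidence sequences sharpen a worker's reputation.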

    Object-based dynamic taint analysis for J2EE program
    ZENG Xiangfei, GUO Fan, TU Fengtao
    2015, 35(8):  2386-2391.  DOI: 10.11772/j.issn.1001-9081.2015.08.2386
    Abstract ( )   PDF (937KB) ( )  
    References | Related Articles | Metrics

    Injection vulnerabilities in Web applications, such as SQL injection and Cross Site Scripting (XSS), are mainly caused by unverified external inputs, and taint analysis can effectively locate these vulnerabilities. A dynamic analysis approach was presented that tracks all potentially tainted Java objects, unlike existing approaches that only track character or string objects. The approach uses the hash code to identify a tainted object, defines method nodes and method coordinates to record the locations of taint propagation, and supports tracing the taint propagation path. It also introduces a specific taint propagation analysis for stream-family objects, based on the decorator pattern of Java stream objects. A language specification was given to model Java library and user-defined methods related to taint propagation, and the taint propagation semantics of methods were designed and formalized according to a classification into taint introduction, taint propagation, taint sanitization and taint usage. The prototype system, implemented on SOOT, uses static analysis to collect reachable methods and instruments the Java bytecode of real Web sites; the experimental results demonstrate its effectiveness in detecting injection vulnerabilities.
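    The object-level propagation idea can be illustrated with a toy Python analogue. The paper instruments Java bytecode via SOOT; `Tainted` and `sql_sink` here are hypothetical illustration names showing the four roles of introduction, propagation, and usage in miniature:

```python
class Tainted(str):
    """Taint-marked string: concatenation involving a tainted value yields
    a tainted result, so the mark propagates along the data flow."""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))
    def __radd__(self, other):
        # Called even for `plain_str + tainted`, because subclasses take
        # precedence in Python's reflected-operator dispatch.
        return Tainted(str.__add__(other, self))

def sql_sink(query):
    """Taint usage point: refuse tainted data at a sensitive sink."""
    if isinstance(query, Tainted):
        raise ValueError("tainted data reached the SQL sink")
    return query
```

    A sanitization step would correspond to converting a `Tainted` value back to a plain `str` after validation.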

    Optimized fault tolerance method based on virtualization technology in simulated system
    CHEN Zhijia, ZHU Yuanchang, DI Yanqiang, FENG Shaochong
    2015, 35(8):  2392-2396.  DOI: 10.11772/j.issn.1001-9081.2015.08.2392
    Abstract ( )   PDF (741KB) ( )  
    References | Related Articles | Metrics

    The breakdown of simulation nodes or a shortage of simulation resources causes failures of a distributed simulation system and lowers its reliability. To improve fault tolerance and decrease its overhead, a fault tolerance method based on virtualization technology was proposed, in which different fault tolerance mechanisms are adopted according to the failure location. The optimization of the checkpoint strategy was analyzed and the optimal checkpoint interval was derived. Three main problems of the backup strategy were identified: the selection of nodes, the number of copies and the placement of the copies; these problems were solved through virtualization technology. A fault tolerance strategy based on virtual machine migration was proposed as a complement to the checkpoint and backup strategies to decrease the overhead. The performance of the dynamic fault tolerance strategy and a normal fault tolerance strategy was evaluated through experiments, and the experimental results show that the proposed fault tolerance strategy is efficient and its overhead remains low.
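    A classical result for checkpoint interval optimization is Young's first-order approximation. The abstract does not state the paper's exact derivation, so the standard textbook formula is shown only as a sketch of the trade-off: longer intervals lose more work per failure, shorter ones pay the checkpoint cost more often.

```python
import math

def optimal_checkpoint_interval(checkpoint_cost, mtbf):
    """Young's classical approximation of the checkpoint interval that
    minimizes expected lost work: T_opt = sqrt(2 * C * MTBF), where C is
    the time to take one checkpoint and MTBF the mean time between
    failures (both in the same time unit)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)
```

    For example, with a 2 s checkpoint cost and a 100 s mean time between failures, the optimal interval is 20 s.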

    Integrating piecewise linear representation and Gaussian process classification for stock turning points prediction
    LI Feng, GAO Feng, KOU Peng
    2015, 35(8):  2397-2403.  DOI: 10.11772/j.issn.1001-9081.2015.08.2397
    Abstract ( )   PDF (1083KB) ( )  
    References | Related Articles | Metrics

    Focusing on the prediction of price turning points in stock trading, a stock price turning point prediction algorithm named PLR-GPC was proposed based on Piecewise Linear Representation (PLR) and Gaussian Process Classification (GPC). The algorithm extracts the turning points of a historical stock price series by PLR and assigns them different class labels. A prediction model of stock price turning points was built based on GPC and trained with the turning points extracted by PLR. The model can then predict whether a new price is a turning point and explain the result with a probability. Experiments on real stock data compared PLR-GPC with PLR-BPN (PLR-Back Propagation Network) and PLR-WSVM (PLR-Weighted Support Vector Machine): PLR-GPC achieved higher forecast accuracy than the other two algorithms, and its rate of return was higher than that of PLR-BPN and almost equal to that of PLR-WSVM. The experimental results prove that PLR-GPC is effective for stock turning point prediction and can be applied in actual stock trading.
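    The PLR extraction step is commonly realized with a top-down split at the point of maximum deviation from the segment's chord; a sketch under that assumption (the paper's exact PLR variant and error threshold are not specified in the abstract):

```python
import numpy as np

def plr_top_down(series, max_error=1.0):
    """Top-down piecewise linear representation: recursively split a series
    at the point of maximum deviation from the straight line joining the
    segment endpoints, until every segment fits within max_error. Returns
    the indices of segment boundaries (candidate turning points)."""
    series = np.asarray(series, dtype=float)

    def split(lo, hi):
        if hi - lo < 2:
            return []
        x = np.arange(lo, hi + 1)
        line = np.interp(x, [lo, hi], [series[lo], series[hi]])
        dev = np.abs(series[lo:hi + 1] - line)
        k = int(np.argmax(dev))
        if dev[k] <= max_error:
            return []
        mid = lo + k
        return split(lo, mid) + [mid] + split(mid, hi)

    n = len(series) - 1
    return [0] + split(0, n) + [n]
```

    The interior boundaries found this way are the turning points that would be labeled and fed to the GPC model for training.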

    Broken strand and foreign body fault detection method for power transmission line based on unmanned aerial vehicle image
    WANG Wanguo, ZHANG Jingjing, HAN Jun, LIU Liang, ZHU Mingwu
    2015, 35(8):  2404-2408.  DOI: 10.11772/j.issn.1001-9081.2015.08.2404
    Abstract ( )   PDF (840KB) ( )  
    References | Related Articles | Metrics

    In order to improve the efficiency of power transmission line inspection by Unmanned Aerial Vehicle (UAV), a method based on the perception of line structure was proposed for detecting broken strands and foreign-body defects on transmission lines. Since transmission line images acquired by UAV are easily affected by background texture and lighting, horizontal and vertical gradient operators tuned to the line width were used to extract line objects from the inspection image. Gestalt perception cues of similarity, continuity and collinearity were then computed to connect intermittent wire segments into continuous wires, and parallel wire groups were further determined by computing the parallel relationships between wires. To reduce the detection error rate, spacers and stockbridge dampers on the wires were recognized using a local contour feature. Finally, the width change and gray-level similarity of the segmented conductor wires were computed to detect broken strands and foreign-body defects. The experimental results show that the proposed method can efficiently detect broken wire strands and foreign-body defects in UAV transmission line images with complicated backgrounds.
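    The width-tuned gradient idea can be illustrated with a toy detector that responds where opposite-signed horizontal gradients occur one line-width apart. This is an illustrative numpy sketch, not the paper's operator:

```python
import numpy as np

def line_response(img, width=3):
    """Toy width-tuned line detector: a bright ridge of `width` pixels
    produces a positive horizontal gradient at its left edge and a negative
    one `width` pixels later; their sign-flipped product is large exactly
    on such ridge structures."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gx[:, 1:] = np.diff(img, axis=1)  # horizontal gradient
    resp = np.zeros_like(img)
    resp[:, :-width] = -gx[:, :-width] * gx[:, width:]
    return np.clip(resp, 0.0, None)
```

    A vertical counterpart covers the other orientation; the responses are then linked across gaps by the Gestalt grouping step described above.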

    Bus emergency detection based on image processing
    LI Yanyan, WU Wei
    2015, 35(8):  2409-2414.  DOI: 10.11772/j.issn.1001-9081.2015.08.2409
    Abstract ( )   PDF (950KB) ( )  
    References | Related Articles | Metrics

    To address the facts that vehicle monitoring technology on buses is imperfect and that few methods exist for detecting emergencies, a real-time detection algorithm based on image processing was proposed to detect emergencies, here meaning the rapid flow of the crowd inside a bus. First, the main motion areas were grouped according to the trajectories of the passengers. Second, an improved moving foreground extraction method was used to extract the moving foreground. Then the feature points in the moving foreground were extracted by the Harris operator, and the optical flow constraint algorithm was used to establish the motion vector field of the feature points. Finally, a KPA (Kinetic Potential Area) model was built to recognize the emergency. Theoretical analysis and experimental results show that, when testing for emergencies in different environments, the proposed algorithm achieves a success rate of more than 83.9% and is capable of real-time detection in practical applications.
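    The Harris feature-point step computes a corner response from the local structure tensor; a minimal numpy/scipy sketch (window size and the constant k are assumed defaults, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor of image gradients averaged over a win x win window.
    Corners yield R > 0, edges R < 0, flat regions R ~ 0."""
    img = np.asarray(img, dtype=float)
    iy, ix = np.gradient(img)                 # per-axis image gradients
    sxx = uniform_filter(ix * ix, size=win)   # windowed structure tensor
    syy = uniform_filter(iy * iy, size=win)
    sxy = uniform_filter(ix * iy, size=win)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2
```

    Peaks of this response give the feature points whose optical flow vectors are then fed to the KPA model.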
