Table of Contents

    10 July 2015, Volume 35 Issue 7
    Cooperative spectrum sharing method based on spectrum contract
    ZHAO Nan, WU Minghu, ZHOU Xianjun, XIONG Wei, ZENG Chunyan
    2015, 35(7):  1805-1808.  DOI: 10.11772/j.issn.1001-9081.2015.07.1805

    To alleviate the shortage of licensed spectrum resources, a method to design and implement a multi-user Cooperative Spectrum Sharing (CSS) mechanism was proposed, based on the asymmetry of network information and the selfishness of communication users. First, by modeling CSS as a labor market, a modeling method for the multi-user contract-based CSS framework was investigated under the symmetric network information scenario. Then, to avoid the moral hazard caused by the hidden actions of Secondary Users (SUs) after contract assignment, a contract-based CSS model was proposed to incentivize SUs' contributions and ensure spectrum sharing. The experimental results show that, when the direct transmission rate of the Primary User (PU) is less than 0.2 b/s, the network capacity is more than 3 times that of non-cooperative spectrum sharing. The proposed multi-user contract-based CSS framework offers new ideas for the efficient sharing and utilization of spectrum resources.

    Congestion avoidance and fast traffic migration based on multi-topology routing
    LUO Long, YU Hongfang, LUO Shouxi
    2015, 35(7):  1809-1814.  DOI: 10.11772/j.issn.1001-9081.2015.07.1809

    For the potential link congestion problem during traffic migration caused by IP network updates, a Congestion Avoidance and Fast Traffic Migration based on Multi-Topology Routing (CAFTM-MTR) algorithm was proposed. Firstly, taking into account the link capacity constraints and the timing characteristics of source-node traffic migration, a congestion-avoiding migration sequence that moves one source node at a time was obtained. Secondly, to shorten the migration completion time, the algorithm was improved by exploiting the sequence independence of traffic flows, so that each batch moves multiple sequence-independent flows. Validated on typical topologies and Waxman topologies, the proposed algorithm improved the success rate of congestion avoidance from 20%-60% to 100% compared with the Non-Congestion-Avoidance Fast Traffic Migration based on MTR (NonCAFTM-MTR) method, and produced migration sequences of fewer than 8 rounds. In addition, the algorithm adapts to dynamic traffic and can accommodate traffic growth ranging from 5% to 284%. The simulation results show that the CAFTM-MTR algorithm can effectively improve the success rate of congestion avoidance while keeping traffic migration fast.
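
    The move-one-flow-at-a-time idea can be sketched as a greedy ordering over a simplified flow model (the data structures and the tie-breaking below are assumptions, not the paper's exact sequencing or batching rules):

        # Minimal sketch: each flow has a rate, an old path and a new path
        # (lists of links); a move is safe if, after rerouting, no link load
        # exceeds its capacity.
        def migration_sequence(flows, capacity):
            """flows: dict name -> (rate, old_path, new_path); capacity: dict link -> float."""
            load = {link: 0.0 for link in capacity}
            for rate, old_path, _ in flows.values():      # initial loads on old paths
                for link in old_path:
                    load[link] += rate

            def safe_to_move(rate, old_path, new_path):
                trial = dict(load)
                for link in old_path:
                    trial[link] -= rate
                for link in new_path:
                    trial[link] = trial.get(link, 0.0) + rate
                return all(trial[l] <= capacity[l] for l in new_path)

            remaining, order = dict(flows), []
            while remaining:
                movable = [f for f, (r, o, n) in remaining.items() if safe_to_move(r, o, n)]
                if not movable:
                    raise RuntimeError("no congestion-free move exists from this state")
                f = movable[0]                            # move one flow per round
                rate, old_path, new_path = remaining.pop(f)
                for link in old_path:
                    load[link] -= rate
                for link in new_path:
                    load[link] = load.get(link, 0.0) + rate
                order.append(f)
            return order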

    Hybrid multi-hop routing algorithm of effective energy-hole avoidance for wireless sensor networks
    YANG Xiaofeng, WANG Rui, PENG Li
    2015, 35(7):  1815-1819.  DOI: 10.11772/j.issn.1001-9081.2015.07.1815

    In cluster-based routing algorithms for Wireless Sensor Networks (WSNs), the "energy hole" phenomenon results from unbalanced energy consumption among sensors. To address this problem, a hybrid multi-hop routing algorithm with effective energy-hole avoidance was put forward on the basis of research on flat and hierarchical routing protocols. Firstly, the concept of a hotspot area was introduced to divide the monitoring area; in the clustering stage, the amount of data outside the hotspot area was reduced by an uneven clustering algorithm that aggregates data within clusters. Secondly, energy consumption in the hotspot area was cut down by not clustering there during the clustering stage. Finally, in the inter-cluster communication phase, the Particle Swarm Optimization (PSO) algorithm was employed to find an optimal transmission path that simultaneously minimizes the maximum next-hop distance between two nodes on the routing path and the maximum hop count, thereby minimizing whole-network energy consumption. Theoretical analysis and experimental results show that, compared with the Reinforcement-Learning-based Lifetime Optimal routing protocol (RLLO) and the Multi-Layer routing protocol through Fuzzy-logic-based Clustering mechanism (MLFC), the proposed algorithm performs better in energy efficiency and energy-consumption uniformity, raising network lifetime by 20.1% and 40.5% respectively, and can avoid the "energy hole" effectively.

    Improved evaluation method for node importance based on mutual information in weighted networks
    WANG Ban, MA Runnian, WANG Gang, CHEN Bo
    2015, 35(7):  1820-1823.  DOI: 10.11772/j.issn.1001-9081.2015.07.1820

    Existing evaluation methods for node importance in complex networks mainly focus on undirected, unweighted networks and cannot objectively reflect the status of many real-world networks. Focusing on the limited scope of evaluation indexes and the insufficiently comprehensive evaluation results in undirected-weighted and directed-weighted networks, and drawing on the mutual-information-based node importance evaluation method for undirected, unweighted networks, a new mutual-information-based evaluation method suitable for undirected-weighted and directed-weighted networks was proposed. In this method, each edge was regarded as a flow of information; the structural characteristics of the network and the definition of "amount of information" were taken into account, and the amount of information was calculated as the node importance index. Analyses of instance networks show that the proposed algorithm can describe the differences between nodes in a directed-weighted network in more detail while guaranteeing evaluation accuracy. In the evaluation of the ARPA (Advanced Research Project Agency) network, the five most important nodes identified by the proposed algorithm closely matched those of previous indexes, highlighting the algorithm's ability to find core nodes. The proposed algorithm provides theoretical support for evaluating core nodes in undirected-weighted and directed-weighted networks and for improving network invulnerability quickly and accurately.

    Quantized distributed Kalman filtering based on dynamic weighting
    CHEN Xiaolong, MA Lei, ZHANG Wenxu
    2015, 35(7):  1824-1828.  DOI: 10.11772/j.issn.1001-9081.2015.07.1824

    Focusing on the state estimation problem of a Wireless Sensor Network (WSN) without a fusion center, a Quantized Distributed Kalman Filtering (QDKF) algorithm was proposed. Firstly, based on a weighting criterion of node estimation accuracy, a weight matrix was dynamically chosen in the Distributed Kalman Filtering (DKF) algorithm to minimize the global estimation error covariance matrix. Then, considering the bandwidth constraint of the network, a uniform quantizer was added into the DKF algorithm; using quantized information during communication reduced the network bandwidth requirement. Simulations were conducted with the proposed QDKF algorithm using an 8-bit quantizer. Compared with the Metropolis weighting and the maximum-degree weighting, the estimation Root Mean Square Error (RMSE) of the proposed dynamic weighting method decreased by 25% and 27.33% respectively. The simulation results show that the QDKF algorithm with dynamic weighting can improve estimation accuracy while reducing the network bandwidth requirement, making it suitable for communication-limited network applications.
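
    The bandwidth-saving step can be illustrated with a plain uniform quantizer (a minimal sketch; the range [lo, hi] and the 8-bit setting are assumed parameters, with bits=8 matching the experiment):

        import numpy as np

        def uniform_quantize(x, lo, hi, bits=8):
            """Map x in [lo, hi] to one of 2**bits levels; return (index, reconstruction)."""
            levels = 2 ** bits
            step = (hi - lo) / (levels - 1)
            idx = np.clip(np.round((x - lo) / step), 0, levels - 1).astype(int)
            return idx, lo + idx * step      # idx is what a node would transmit

        # Example: an 8-bit quantizer on [-10, 10] has step ~0.078, so each
        # exchanged estimate costs 8 bits instead of a 64-bit float.
        idx, xq = uniform_quantize(np.array([0.123, -3.7]), -10.0, 10.0)
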
    Connectivity characteristics based on mobility model for vehicular Ad Hoc networks
    FENG Huifang, MENG Yuru
    2015, 35(7):  1829-1832.  DOI: 10.11772/j.issn.1001-9081.2015.07.1829

    Aiming at the connectivity problem in Vehicular Ad Hoc Networks (VANETs), the evolution of connectivity characteristics for VANET was analyzed. Firstly, the number of connected components, the connectivity probability and the connectivity length were proposed as connectivity metrics for VANET. Then, based on the Intelligent Driver Model with Lane Changes (IDM-LC), a VANET was set up using the VanetMobiSim software. Finally, the relations between the node communication radius and the average number of connected components, the average connectivity probability and the average connectivity length were given, and the statistical distribution of the number of connected components was analyzed. The results, obtained with Q-Q plots and T-tests, show that the number of connected components follows a normal distribution, and that this statistical distribution is independent of the node communication radius.
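
    The first metric reduces to counting connected components of the snapshot graph induced by the communication radius; a sketch assuming 2-D vehicle positions:

        import numpy as np
        from collections import deque

        def connected_components(positions, radius):
            """positions: (n, 2) vehicle coordinates; two nodes are linked if
            their Euclidean distance is at most the communication radius."""
            positions = np.asarray(positions, dtype=float)
            n = len(positions)
            d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
            adj = (d <= radius) & ~np.eye(n, dtype=bool)
            seen, components = np.zeros(n, dtype=bool), 0
            for s in range(n):                 # BFS from every unvisited node
                if seen[s]:
                    continue
                components += 1
                queue = deque([s]); seen[s] = True
                while queue:
                    u = queue.popleft()
                    for v in np.flatnonzero(adj[u]):
                        if not seen[v]:
                            seen[v] = True; queue.append(v)
            return components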

    Radio phase-based two-step ranging approach
    ZHAO Yang, HUANG Jianyao, LIU Deliang, LIU Kaihua, MA Yongtao
    2015, 35(7):  1833-1836.  DOI: 10.11772/j.issn.1001-9081.2015.07.1833

    Concerning ranging inaccuracy based on radio signal phase information in multipath environments, a two-step ranging approach based on double tags was proposed, with two tags attached to each target. Using single-frequency subcarrier amplitude modulation, firstly the wrapped phase of the carrier signal was extracted, from which the reader-tag distance within half a carrier wavelength was calculated as the fine ranging estimate. Secondly, the unwrapped phase of the subcarrier signal was extracted, and the integer number of half carrier wavelengths contained in the reader-tag distance was calculated. Thirdly, this integer was averaged between the double tags, and the corresponding distance was used as the coarse ranging value. Finally, the final ranging result was estimated as the sum of the fine and coarse ranging values. Additionally, a geometric localization method based on a single reader and double tags was introduced to reduce hardware cost. The simulation results show that, in multipath environments, compared with ranging directly from the subcarrier phase, the average ranging error of the double-tag two-step approach is reduced by 35%; the final average localization error is about 0.43 m and the maximum error is about 1 m. The proposed approach can effectively improve the accuracy of phase-based localization while reducing hardware cost.
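
    The fine/coarse combination amounts to the arithmetic below (a sketch under assumptions: ideal phase extraction, known wavelengths; the paper's signal model is not reproduced):

        import math

        def two_step_range(phi_carrier, phi_sub_a, phi_sub_b, lam_carrier, lam_sub):
            """Fine distance from the wrapped carrier phase (unambiguous within
            half a carrier wavelength); coarse distance from the subcarrier
            phases of the two tags. Phases in radians, wavelengths in metres."""
            half = lam_carrier / 2.0
            fine = (phi_carrier / (2 * math.pi)) * half    # within one half-wavelength

            def multiples(phi_sub):
                # subcarrier distance estimate, rounded to whole half-wavelengths
                d_sub = (phi_sub / (2 * math.pi)) * (lam_sub / 2.0)
                return round((d_sub - fine) / half)

            k = (multiples(phi_sub_a) + multiples(phi_sub_b)) / 2.0
            return k * half + fine                         # coarse + fine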

    Cloud resource re-allocation method focusing on user's evaluation feedback
    KUANG Guijuan, ZENG Guosun, XIONG Huanliang
    2015, 35(7):  1837-1842.  DOI: 10.11772/j.issn.1001-9081.2015.07.1837

    Since previous studies mostly take the resource provider's perspective and users' evaluations have not been fully utilized to improve resource decision making, a resource re-allocation method focusing on users' evaluation feedback was proposed. First, by analyzing the process of cloud resource allocation, several factors influencing decision making were defined, and an adaptive cloud resource management framework with user involvement was proposed. Next, the main idea of resource re-allocation with user involvement was elaborated, and a formula was designed to guide users' evaluations. Finally, based on similarity theory, the expected satisfaction of a user with a new cloud task was predicted and, together with the cloud task parameters and environment parameters, used as the input of a BP (Back Propagation) neural network to make the resource allocation decision. In comparison experiments with an allocation scheme without user involvement, the average user satisfaction of the proposed scheme increased by 7.4%, remained above 0.8 and showed a steady upward trend; compared with the Min-Max algorithm and the Cloud Tasks-Resources Satisfactory Matching (CTRSM) algorithm, average user satisfaction increased by 16.7% and 4.6% respectively. The theoretical analysis and simulation results show that the proposed cloud resource re-allocation method is self-improving and can enhance the adaptive ability of cloud resource management.

    Improvement and parallelization of A* algorithm
    XIONG Renhao, LIU Yu
    2015, 35(7):  1843-1848.  DOI: 10.11772/j.issn.1001-9081.2015.07.1843

    To improve the poor time performance of the A* algorithm, an algorithm based on Parallel Searching and Fast Inserting (PSFI) was presented. Firstly, well-known parallel heuristic search algorithms on shared-memory platforms were studied. Then the original serial A* algorithm was improved using a Delayed Single Table Searching (DSTS) method and a new data structure. Next, a parallel algorithm for shared-memory platforms was designed. Finally, the proposed algorithm was implemented with OpenMP. The experimental results on the 24-puzzle problem show that the improved serial and parallel algorithms reduce the runtime to 1/140 and 1/450 of their unimproved counterparts respectively, and the parallel algorithm raises the speed-up ratio to 3.2 compared with the Parallel Best-NBlock-First (PBNF) algorithm. Moreover, the improved algorithm is a strict best-first search, which ensures solution quality and is easy to implement.

    Massive terrain data storage based on HBase
    LI Zhenju, LI Xuejun, XIE Jianwei, LI Yannan
    2015, 35(7):  1849-1853.  DOI: 10.11772/j.issn.1001-9081.2015.07.1849

    With the development of remote sensing technology, the types and volume of remote sensing data have increased dramatically over the past decades, challenging traditional storage modes. A combination of quadtree and Hilbert spatial indexing was proposed to address the low storage efficiency of HBase data storage. Firstly, the state of research on traditional terrain data storage and HBase-based storage was reviewed. Secondly, a design combining quadtree and Hilbert spatial indexing for managing global data was proposed. Thirdly, an algorithm for calculating the row and column numbers from the longitude and latitude of terrain data, and an algorithm for computing the final Hilbert code, were designed. Finally, the physical storage infrastructure for the index was designed. The experimental results illustrate that the data loading speed on a Hadoop cluster improves by 63.79%-78.45% over a single computer, the query time decreases by 16.13%-39.68% compared with the traditional row-key index, and the query speed is at least 14.71 MB/s, which meets the requirements of terrain data visualization.
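
    A sketch of the two indexing steps under simplified assumptions: a regular grid maps latitude/longitude to a row/column (the paper's quadtree tiling is approximated by a power-of-two grid), and the standard Hilbert-curve conversion turns (row, column) into a one-dimensional code usable as an HBase row-key component:

        def lat_lon_to_cell(lat, lon, level):
            """Map (lat, lon) to (row, col) on a 2**level x 2**level global grid."""
            n = 2 ** level
            row = min(int((lat + 90.0) / 180.0 * n), n - 1)
            col = min(int((lon + 180.0) / 360.0 * n), n - 1)
            return row, col

        def hilbert_code(x, y, level):
            """Standard xy -> Hilbert distance conversion on a 2**level grid."""
            d, s = 0, 2 ** (level - 1)
            while s > 0:
                rx = 1 if (x & s) > 0 else 0
                ry = 1 if (y & s) > 0 else 0
                d += s * s * ((3 * rx) ^ ry)
                if ry == 0:                  # rotate the quadrant
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                s //= 2
            return d

        row, col = lat_lon_to_cell(34.26, 108.95, level=10)
        key = hilbert_code(col, row, level=10)   # candidate row-key component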

    Floating point divider design of high-performance double precision based on Goldschmidt's algorithm
    HE Tingting, PENG Yuanxi, LEI Yuanwu
    2015, 35(7):  1854-1857.  DOI: 10.11772/j.issn.1001-9081.2015.07.1854

    Focusing on the complexity and long latency of division, a design method for a high-performance double-precision floating-point divider based on Goldschmidt's algorithm, supporting the IEEE-754 standard, was proposed. Firstly, how division is computed with Goldschmidt's algorithm and the error produced during iteration were analyzed, and a method for controlling the error was proposed. Secondly, bipartite reciprocal tables were adopted to calculate the initial iteration value with low area cost, and parallel multipliers were adopted in the iteration unit for acceleration. Lastly, the execution stages were partitioned reasonably so that the divider supports pipelined execution under state machine control, improving its speed. The experimental results show that, with a 14-bit initial-value pipelined structure, the divider's synthesized cell area is 84902.2618 μm^2 and its frequency reaches 2.2 GHz in a 40 nm technology. Compared with an 8-bit initial-value pipelined structure, computing speed increases by 32.73% while area increases by 5.05%. The latency of a double-precision floating-point division instruction is 12 cycles, reduced to 3 cycles in pipelined execution. Data throughput is 3-7 times that of SRT-based dividers and 2-3 times that of Goldschmidt-based dividers implemented in other processors.
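
    The iteration itself is compact. A floating-point sketch of Goldschmidt division (a software stand-in for the paper's hardware pipeline, with the bipartite table lookup replaced by a crude scaling-based initial guess):

        import math

        def goldschmidt_div(a, b, iterations=4):
            """Compute a/b by Goldschmidt iteration: scale so the divisor is in
            [0.5, 1), then repeatedly multiply numerator and divisor by
            F = 2 - D, so D -> 1 quadratically and N -> a/b."""
            m, e = math.frexp(b)          # b = m * 2**e with m in [0.5, 1)
            N, D = a / (2 ** e), m        # scale numerator by the same power of two
            for _ in range(iterations):   # the error squares on every pass
                F = 2.0 - D
                N, D = N * F, D * F
            return N

        print(goldschmidt_div(355.0, 113.0))   # ~3.14159...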

    Automated trust negotiation model based on interleaved spiral matrix encryption
    LI Jianli, XIE Yue, WANG Yimou, DING Hongqian
    2015, 35(7):  1858-1864.  DOI: 10.11772/j.issn.1001-9081.2015.07.1858

    An Automated Trust Negotiation (ATN) model based on Interleaved Spiral Matrix Encryption (ISME) was proposed to protect sensitive information in automated trust negotiation. Interleaved spiral matrix encryption and policy migration were used in the model to protect three kinds of sensitive negotiation information. Compared with the traditional spiral matrix encryption algorithm, the concepts of odd-even bits and triples were added to the interleaved algorithm. To make the model fit applications better, a key-attribute flag was introduced into the negotiation certificates, effectively recording the sensitive information corresponding to the encryption key. Meanwhile, how negotiation rules are represented through the encryption function was described. To increase the efficiency and success rate of the model, a 0-1 graph policy parity algorithm was proposed, in which decomposition rules for six basic propositions were constructed using directed graphs; propositions abstracted from access control policies could be decided effectively, and soundness and completeness were proved to establish the equivalence of the semantic and syntactic notions in the logic system. Finally, the simulation results demonstrate that over 20 negotiations the model discloses on average 15.2 fewer policies than the traditional model, the negotiation success rate increases by 21.7%, and negotiation efficiency increases by 3.6%.

    Privacy-preserving various data sharing protocol in participatory sensing
    LIU Shubo, WANG Ying, LIU Mengjun, ZHU Guangjun
    2015, 35(7):  1865-1869.  DOI: 10.11772/j.issn.1001-9081.2015.07.1865

    In participatory sensing, users require not only a data matching level but also data variation. To meet these two requirements while protecting users' preference privacy, a privacy-preserving various data sharing protocol was proposed. Firstly, both parties' interactive data were mapped to sets of integers, and a Counting Bloom Filter (CBF) was used to compute the intersection of the two sets, whose size served as the data matching level. Secondly, the CBF's element-deletion capability was used to calculate the data variation value. Lastly, the matching level and the variation value were compared with preset thresholds to decide whether the interaction condition was satisfied. In addition, the construction of the CBF was improved to protect users' preference privacy. Theoretical analysis and experimental results show that, compared with protocols based on a non-cryptographic Bloom Filter (BF), the problem of overestimated results is overcome and computational overhead is reduced by more than 50%; the proposed protocol protects users' preference privacy, meets the need for data variation, and achieves higher matching precision and efficiency.
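
    A minimal Counting Bloom Filter with the deletion operation the protocol relies on (the hash choice and sizes are illustrative, not the paper's improved construction):

        import hashlib

        class CountingBloomFilter:
            """Counting Bloom filter supporting element deletion."""
            def __init__(self, size=1024, hashes=3):
                self.size, self.hashes = size, hashes
                self.counters = [0] * size

            def _positions(self, item):
                for i in range(self.hashes):
                    h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                    yield int(h, 16) % self.size

            def add(self, item):
                for p in self._positions(item):
                    self.counters[p] += 1

            def remove(self, item):
                for p in self._positions(item):
                    self.counters[p] -= 1

            def contains(self, item):
                return all(self.counters[p] > 0 for p in self._positions(item))

        # Matching level: insert one side's set, then count how many of the
        # other side's elements the filter (probably) contains.
        cbf = CountingBloomFilter()
        for x in {3, 7, 19, 42}:
            cbf.add(x)
        matching_level = sum(1 for y in {7, 42, 64} if cbf.contains(y))  # -> 2 (probabilistic)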

    Automatic verification of security protocols with strand space theory
    LIU Jiafen
    2015, 35(7):  1870-1876.  DOI: 10.11772/j.issn.1001-9081.2015.07.1870

    Strand space theory relies heavily on individual judgment and subjective experience, and can hardly be automated. To solve this problem, a more formal and objective verification process based on strand space theory and authentication tests was proposed. First, a set of type labels was defined for protocol terms to extend strand space and authentication test theory. By listing all possible occurrences of test instances, checking the agreement of arguments in test components, verifying the uniqueness of transforming edges, and examining the agreement of arguments in goal strands, protocol verification based on strand spaces could be formalized into a series of programmable procedures. The time complexity of the whole verification algorithm is O(n^2), so the state-space explosion common in state-space search is avoided. On this theoretical basis, a tool was implemented to verify the authentication attributes of security protocols automatically. The BAN-Yahalom protocol and the TLS handshake protocol 1.0 were analyzed as examples, and a new attack on BAN-Yahalom was found. It is similar to Syverson's attack but places no restriction on the server's verification of nonces, and hence applies to more general scenarios.

    Secret sharing scheme only needing XOR operations
    YUAN Qizhao, CAI Hongliang, ZHANG Jingzhong, XIA Hangyu
    2015, 35(7):  1877-1881.  DOI: 10.11772/j.issn.1001-9081.2015.07.1877

    Traditional secret sharing schemes based on interpolation polynomials require heavy computation, and their efficiency is particularly low when the data are large. Therefore, a new secret sharing scheme for protecting the security of large-scale data was proposed. The scheme uses data blocking and needs only exclusive-OR (XOR) operations over GF(2). The theoretical analysis and experimental results show that, compared with the traditional interpolation-polynomial-based secret sharing scheme, the new scheme improves operational efficiency by 19.3%.
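
    The XOR-only flavor can be seen in a minimal (n, n) sharing sketch (the paper's scheme is a blocked (k, n) construction; this only illustrates why sharing and recovery reduce to XOR over GF(2)):

        import os

        def xor_bytes(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def share(secret: bytes, n: int):
            """(n, n) XOR sharing: n-1 random shares, last = secret XOR the rest."""
            shares = [os.urandom(len(secret)) for _ in range(n - 1)]
            last = secret
            for s in shares:
                last = xor_bytes(last, s)
            return shares + [last]

        def recover(shares):
            out = bytes(len(shares[0]))
            for s in shares:
                out = xor_bytes(out, s)
            return out

        parts = share(b"large data block", 5)
        assert recover(parts) == b"large data block"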

    Network security situational awareness model based on information fusion
    LI Fangwei, ZHANG Xinyue, ZHU Jiang, ZHANG Haibo
    2015, 35(7):  1882-1887.  DOI: 10.11772/j.issn.1001-9081.2015.07.1882

    Since the evaluation of Distributed Denial of Service (DDoS) attacks is inaccurate and network security situational evaluation is not comprehensive, a new network security situational awareness model based on information fusion was proposed. Firstly, to improve evaluation accuracy, a DDoS situation assessment method based on packet-level information was proposed. Secondly, the Common Vulnerability Scoring System (CVSS) was improved and vulnerabilities were evaluated to make the assessment more comprehensive. Then, combining objective and subjective weights, a method for calculating combined weights and optimizing the results with the Sequential Quadratic Programming (SQP) algorithm was proposed to reduce the uncertainty of fusion. Finally, the network security situation was obtained by fusing the three evaluations. To verify that the original DDoS evaluation was inaccurate, a testing platform was built, on which alarms for the same DDoS attack differed by 3 orders of magnitude; compared with the original alarm-based method, the packet-based evaluation produced steady and accurate results. The experimental results show that the proposed method can improve the accuracy of evaluation results.

    Network security situation prediction based on hyper parameter optimization of relevance vector machine
    XIAO Hanjie, SANG Xiuli
    2015, 35(7):  1888-1891.  DOI: 10.11772/j.issn.1001-9081.2015.07.1888

    To deal with problems in current network security situation prediction methods, such as overfitting, underfitting, numerous free variables and insufficient prediction accuracy, a Relevance Vector Machine (RVM) model optimized by an improved Simulated Annealing algorithm (PSA-RVM) was proposed for network security situation prediction. In the prediction process, the network security situation samples were first reconstructed in phase space to form the training set; then Powell's algorithm was used to improve Simulated Annealing (PSA), and the RVM was embedded into the objective-function evaluation of the PSA algorithm to optimize the RVM hyperparameters, yielding a prediction model with enhanced learning capability and accuracy. The simulation results indicate that the proposed method achieves higher prediction accuracy than the Elman and Particle Swarm Optimization-based Support Vector Regression (PSO-SVR) models, with Mean Absolute Percentage Error (MAPE) of 0.39256 and Root Mean Squared Error (RMSE) of 0.01261; it depicts well the changing tendency of the network security situation, helping network administrators anticipate the trend of network security and take defensive measures proactively.

    Real-time detection system for stealthy P2P hosts based on statistical features
    TIAN Shuowei, YANG Yuexiang, HE Jie, WANG Xiaolei, JIANG Zhixiong
    2015, 35(7):  1892-1896.  DOI: 10.11772/j.issn.1001-9081.2015.07.1892

    Since most malware is designed with decentralized architectures to resist detection and takedown, and in order to detect Peer-to-Peer (P2P) bots quickly and accurately at the stealthy stage and minimize their damage, a real-time detection system for stealthy P2P bots based on statistical features was proposed. Firstly, all P2P hosts inside a monitored network were detected by a machine learning algorithm using three P2P statistical features. Secondly, P2P bots were discriminated based on two further statistical features of P2P bots. The experimental results show that the proposed system detects stealthy P2P bots with an accuracy of 99.7% and a false alarm rate below 0.3% within 5 minutes. Compared with existing detection methods, the system requires fewer statistical features and a smaller time window, and supports real-time detection.

    DR-PRO: cloud-storage privilege revoking optimization mechanism based on dynamic re-encryption
    DU Ming, HAO Guosheng
    2015, 35(7):  1897-1902.  DOI: 10.11772/j.issn.1001-9081.2015.07.1897

    To effectively solve the high computation, bandwidth and complexity overheads of revoking user access privileges in cloud-storage services, a cloud-storage privilege revoking optimization mechanism based on dynamic re-encryption (DR-PRO) was proposed. Firstly, based on the ciphertext access control scheme of Ciphertext-Policy Attribute-Based Encryption (CP-ABE), the (k,n) threshold algorithm of secret sharing was used to divide the data into a number of blocks, and a data block was then dynamically selected for re-encryption. Secondly, user access privilege revoking was implemented by sub-algorithms including data cutting, data reconstructing, data publishing, data extracting and data revoking. Theoretical analysis and simulation showed that, while preserving the security of user information in cloud storage, the average computation and bandwidth cost of privilege revoking was 5% lower than that of lazy re-encryption when the data file changed, and 20% lower than that of full re-encryption when the shared data block changed. The experimental results show that DR-PRO effectively improves the performance and efficiency of revoking user access privileges in cloud-storage services.

    Robust image information hiding algorithm based on error correcting codes
    REN Fang, ZHENG Dong
    2015, 35(7):  1903-1907.  DOI: 10.11772/j.issn.1001-9081.2015.07.1903

    Research on image Information Hiding (IH) algorithms based on error-correcting codes aims to overcome the poor robustness of spatial-domain image information hiding. The ability of error-correcting codes to correct random errors can improve the robustness of spatial-domain hiding, so that modifications of the cover by an attacker can be effectively resisted. Two algorithms were proposed: a Least Significant Bit (LSB) information hiding algorithm based on error-correcting codes, and a gray-bit information hiding algorithm based on error-correcting codes. The first embeds the encoded form of the secret information in the LSBs and achieves relatively high robustness under low-density noise; the second exploits the structure of image pixel gray values and encodes each pixel gray value carrying secret information as a Hamming code, so that each pixel can independently correct one error. Both theoretical analysis and experimental results show that, under the same noise density and value, the two algorithms recover a higher percentage of the secret information than the basic LSB algorithm, making them information hiding algorithms with higher robustness.
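
    A sketch of the error-correcting idea behind the first algorithm: secret bits are Hamming(7,4)-encoded before LSB embedding, so one flipped LSB per codeword is corrected on extraction (the generator and check matrices are the textbook systematic ones, not necessarily the paper's):

        import numpy as np

        # Textbook Hamming(7,4): G encodes 4 data bits to 7, H locates single errors.
        G = np.array([[1,0,0,0,1,1,0],[0,1,0,0,1,0,1],[0,0,1,0,0,1,1],[0,0,0,1,1,1,1]])
        H = np.array([[1,1,0,1,1,0,0],[1,0,1,1,0,1,0],[0,1,1,1,0,0,1]])

        def encode(nibble):                   # 4 secret bits -> 7-bit codeword
            return (np.array(nibble) @ G) % 2

        def decode(word):                     # correct up to one flipped bit
            word = np.array(word).copy()
            syndrome = (H @ word) % 2
            if syndrome.any():                # syndrome matches a column of H
                err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
                word[err] ^= 1
            return word[:4]                   # data bits sit in the first 4 positions

        # LSB embedding: overwrite the LSB of 7 cover pixels per codeword.
        pixels = np.array([52, 55, 61, 60, 70, 61, 76], dtype=np.uint8)
        stego = (pixels & 0xFE) | encode([1, 0, 1, 1])
        stego[3] ^= 1                         # attacker flips one LSB
        assert decode(stego & 1).tolist() == [1, 0, 1, 1]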

    Iterative adaptive reversible image watermarking algorithm combined with mean-adjustable integer transform
    CHEN Wenxin, SHAO Liping, SHI Jun
    2015, 35(7):  1908-1914.  DOI: 10.11772/j.issn.1001-9081.2015.07.1908

    Existing reversible watermarking algorithms based on mean-adjustable integer transform have the following defects: non-adaptive threshold selection; an incomplete location-map construction strategy, which leads to poor compression performance; and a compulsory partition strategy for embedded vectors, which may cause embedding to fail even when capacity is sufficient. To address these problems, an iterative adaptive reversible image watermarking algorithm combined with mean-adjustable integer transform was proposed. Firstly, according to the Peak Signal-to-Noise Ratio (PSNR) as affected by the payload size and the integer vectors, an iterative adaptive algorithm was used to select the mean-adjustment offsets, balancing embedding capacity against the visual quality of the embedded carrier. Secondly, based on the observation that adjacent pixels have similar values, a complete location-map generation strategy was proposed to improve location-map compression. Finally, to avoid embedding failures, the algorithm adopted a hierarchical embedding order, embedding payload data from the first to the third least significant bits. The experimental results show that the proposed algorithm has a large embedding capacity and needs no preset threshold; compared with the existing mean-adjustable integer transform algorithm, the location-map strategy yields smaller location-map data, indirectly increasing capacity, and the PSNR increases by 14.4% on average over the experimental samples.

    Computing method of attribute granule structure of information system based on incremental computation
    HAO Yanbin, GUO Xiao, YANG Naiding
    2015, 35(7):  1915-1920.  DOI: 10.11772/j.issn.1001-9081.2015.07.1915

    A computational method using divide-and-conquer and incremental computation was proposed to calculate the attribute granule structure of an inseparable information system. Firstly, the rule describing how the structure changes when a new Functional Dependency (FD) is added to the FD set of an information system was studied, and the increment theorem for information system structures was proved. Secondly, by removing part of the functional dependencies, an inseparable information system could be turned into a separable one, whose structure was calculated with the decomposition theorem. Thirdly, the removed functional dependencies were added back and the structure of the original system was calculated with the increment theorem. Lastly, the algorithm for computing the attribute granule structure of an inseparable information system was given and its complexity analyzed. Direct calculation of the structure has complexity O(n×m×2^n); the proposed method reduces this below O(n×k×2^n) (k<m), and for k=1,2 to O(n1×m1×2^n1)+O(n2×m2×2^n2) (n=n1+n2, m=m1+m2). The theoretical analysis and practical calculation demonstrate that the proposed method effectively reduces the computational complexity of the attribute granule structure of an inseparable information system.

    Frequent pattern mining algorithm from uncertain data based on pattern-growth
    WANG Le, CHANG Yanfeng, WANG Shui
    2015, 35(7):  1921-1926.  DOI: 10.11772/j.issn.1001-9081.2015.07.1921

    To improve the time and space efficiency of Frequent Pattern (FP) mining over uncertain datasets, the Uncertain Frequent Pattern Mining based on Max Probability (UFPM-MP) algorithm was proposed. First, the expected support number was estimated using the maximum probability of the transaction itemset. Second, by comparing this estimate with the minimum expected support threshold, candidate frequent itemsets were identified. Finally, corresponding sub-trees were built for recursively mining frequent patterns. The UFPM-MP algorithm was tested on 6 classical datasets against the state-of-the-art AT-Mine (Array based tail node Tree structure) algorithm with positive results: about 30% improvement on sparse datasets, and 3-4 times higher efficiency on dense datasets. The expected-support estimation strategy effectively reduces the number of sub-trees and header-table items, improving time and space efficiency; the time efficiency gain is especially remarkable when the minimum expected support threshold is low or there are many potential frequent patterns.

    Evolutionary data stream clustering algorithm based on integration of affinity propagation and density
    XING Changzheng, LIU Jian
    2015, 35(7):  1927-1932.  DOI: 10.11772/j.issn.1001-9081.2015.07.1927

    To solve the problems that outliers in data streams cannot be handled well, that the efficiency of data stream clustering is low, and that dynamic changes of the stream cannot be detected in real time, an evolutionary data stream clustering algorithm integrating affinity propagation and density (I-APDenStream) was proposed. The algorithm uses the traditional two-stage processing model of online and offline clustering. It introduces decaying micro-cluster density, which can represent the dynamic changes of the stream, together with a deletion mechanism for online dynamic maintenance of micro-clusters, as well as outlier detection and a simplification mechanism for model reconstruction using extended Weighted Affinity Propagation (WAP) clustering. The experimental results on two types of datasets demonstrate that the clustering accuracy of the proposed algorithm remains above 95% and that it achieves considerable improvements in purity compared with other algorithms. The proposed algorithm can cluster data streams with high real-time performance, quality and efficiency.

    Characterization of motor-related task brain states based on dynamic functional connectivity
    ZHANG Xin, HU Xintao, GUO Lei
    2015, 35(7):  1933-1938.  DOI: 10.11772/j.issn.1001-9081.2015.07.1933

    Focusing on the limitation of conventional static Functional Connectivity (FC) techniques in investigating dynamic functional brain states, an effective method based on whole-brain Dynamic Functional Connectivity (DFC) was proposed to characterize time-varying brain states. First, Diffusion Tensor Imaging (DTI) data were used to construct accurate individual whole-brain networks, and the functional Magnetic Resonance Imaging (fMRI) data of a motor-related task were projected into the corresponding DTI space to extract the fMRI signal of each node for each subject. Then, a sliding time window approach was applied to calculate time-varying whole-brain functional connectivity strength matrices, from which the corresponding Dynamic Functional Connectivity Vector (DFCV) samples were extracted. Finally, the DFCV samples were learned and classified by a sparse-representation method, Fisher Discriminative Dictionary Learning (FDDL). Eight different whole-brain functional connectome patterns representing dynamic brain states were obtained from the motor-related task experiment, with clearly different spatial distributions of functional connectivity strength. Patterns #1, #2 and #3 covered most of the samples (77.6%), and their similarities to the average static whole-brain functional connectivity strength matrix were obviously higher than those of the other five patterns. Furthermore, brain states were found to transfer from one pattern to another according to certain rules. The experimental results show that combining whole-brain DFC with FDDL learning is effective for describing and characterizing dynamic brain states during task activity, providing a foundation for exploring the brain's dynamic information processing mechanisms.
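
    The sliding-window step reduces to computing a correlation matrix per window and vectorizing its upper triangle; a sketch with assumed window parameters (FDDL itself is not shown):

        import numpy as np

        def dynamic_fc_vectors(signals, win_len=30, step=2):
            """signals: (n_nodes, n_timepoints) fMRI node time series.
            Returns one Dynamic Functional Connectivity Vector per window:
            the upper triangle of that window's correlation matrix."""
            n_nodes, T = signals.shape
            iu = np.triu_indices(n_nodes, k=1)
            vecs = []
            for start in range(0, T - win_len + 1, step):
                window = signals[:, start:start + win_len]
                fc = np.corrcoef(window)      # (n_nodes, n_nodes) FC strength matrix
                vecs.append(fc[iu])
            return np.array(vecs)             # (n_windows, n_nodes*(n_nodes-1)/2)

        dfcv = dynamic_fc_vectors(np.random.randn(90, 240))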

    Kernel improvement of multi-label feature extraction method
    LI Hua, LI Deyu, WANG Suge, ZHANG Jing
    2015, 35(7):  1939-1944.  DOI: 10.11772/j.issn.1001-9081.2015.07.1939

    Focusing on the issue that label kernel functions in multi-label feature extraction do not take the correlation between labels into consideration, two methods for constructing new label kernel functions were proposed. In the first, multi-label data were transformed into single-label data so that label correlation could be characterized by the label sets, and a new label kernel function was defined from the perspective of the loss function on the single-label data. In the second, mutual information was used to characterize label correlation, and a new label kernel function was proposed from the mutual-information perspective. Experiments on three real-life datasets with two multi-label classifiers demonstrated that the feature extraction method with the loss-function-based label kernel performed best on all measures, with the five evaluation measures improving by 10% on average; on the Yeast dataset in particular, the Coverage measure declined by about 30%. The mutual-information-based label kernel came second, improving the five evaluation measures by 5% on average. The theoretical analysis and simulation results show that feature extraction based on the new output kernel functions can effectively extract features, simplify the learning process of multi-label classifiers and improve multi-label classification performance.

    Entity recognition of clothing commodity attributes
    ZHOU Xiang, LI Shaobo, YANG Guanci
    2015, 35(7):  1945-1949.  DOI: 10.11772/j.issn.1001-9081.2015.07.1945

    For the recognition of commodity-attribute entities in clothing commodity titles, a hybrid method combining Conditional Random Fields (CRF) with entity boundary detection rules was proposed. Firstly, hidden entity hint characters were obtained by a statistical method; secondly, statistical word indicators and their implications were interpreted at character granularity; thirdly, entity boundary detection rules were proposed based on the entity hint characters and statistical word indicators; finally, a method for setting the threshold values in the rules was proposed based on empirical risk minimization. In comparison experiments with character-based CRF models, the overall precision, recall and F1 score increased by 1.61%, 2.54% and 2.08% respectively, validating the effectiveness of the entity boundary detection rules. The proposed method can be used in e-commerce Information Retrieval (IR), e-commerce Information Extraction (IE), query intention identification, and so on.

    LIBSVM-based relationship recognition method for adjacent sentences containing "jiushi"
    ZHOU Jiancheng, WU Ting, WANG Rongbo, CHANG Ruoyu
    2015, 35(7):  1950-1954.  DOI: 10.11772/j.issn.1001-9081.2015.07.1950

    Aiming at the low accuracy caused by rule-weight weakening during machine learning iterations when judging sentence relationships with combined rules and machine learning, a method of reinforcing the imported obvious rule features in the combination of rules and machine learning was proposed. Firstly, features with obvious rules, such as dependency vocabulary, syntactic and semantic information, were extracted; secondly, universal features were extracted from words that can indicate relationships; then, the features were written into the input data vector, with an additional dimension storing the obvious rule features; finally, rules and machine learning were combined using the LIBSVM model for the experiments. The experimental results show that the accuracy is on average 2% higher than before feature reinforcement, and the precision, recall and F1 values of all relationship types are good overall, averaging 82.02%, 88.95% and 84.76% respectively. The experimental ideas and methods are valuable for studying the cohesion of adjacent sentences.

    Similarity measurement between cloud models based on overlap degree
    SUN Nini, CHEN Zehua, NIU Yuguang, YAN Gaowei
    2015, 35(7):  1955-1958.  DOI: 10.11772/j.issn.1001-9081.2015.07.1955

    Similarity measurement of cloud models measures the correlation between cloud models that express the same concept in different linguistic forms. Similar clouds and their measurement methods are extensions of cloud model theory. To overcome the high computational cost and low precision of existing approaches, a similarity measurement algorithm based on overlap degree was proposed. Firstly, the positional and logical relationships between two clouds were defined according to three digital features: expected value, entropy and hyper-entropy; secondly, the overlap degree of the two clouds was calculated from their location and shape features; finally, combining overlap degree with similarity, the similarity measurement was converted into a quantitative description of the overlapping part. In time series classification experiments compared with the Likeness comparing method based on Cloud Model (LICM), the computational cost of the proposed algorithm is reduced by 50% while stability and accuracy are preserved, proving it feasible and effective in application.

    Prediction of retweeting behavior for imbalanced dataset in microblogs
    ZHAO Yu, SHAO Bilin, BIAN Genqing, SONG Dan
    2015, 35(7):  1959-1964.  DOI: 10.11772/j.issn.1001-9081.2015.07.1959

    Focusing on the influence of imbalanced datasets on the prediction of retweeting behavior in microblogs, a prediction algorithm based on oversampling techniques and the Random Forest (RF) algorithm was proposed. Firstly, retweeting-related features, including individual information, social relationships and topic information, were defined, and key features were selected with an information gain algorithm. Secondly, considering the characteristics of microblog feature data, an improved oversampling algorithm based on the Synthetic Minority Over-sampling Technique (SMOTE) was proposed: the probability distribution of the original dataset was estimated by nonparametric distribution estimation and, to balance the numbers of positive and negative examples, oversampling was performed with the improved SMOTE according to the approximate probability distribution of the original data. Finally, a random forest classifier was trained on the retweeting-related key features, with the forest's parameters selected by analyzing the Out-Of-Bag (OOB) error estimate. Compared with the Decision Tree (DT), Support Vector Machine (SVM), Naive Bayes (NB) and RF algorithms used in microblog retweeting analysis, the overall performance of the proposed method is superior to that of the SVM-based method, which achieves the best results among the baselines; the recall rate and F-measure of the proposed method improve by 8% and 5% respectively. The experimental results show that the proposed method can effectively improve the prediction accuracy of microblog retweeting behavior in practical applications.
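
    The core SMOTE step that the paper's improved variant builds on is short (a basic sketch; the nonparametric density-guided sampling the paper adds is not reproduced here):

        import numpy as np

        def smote(minority, n_new, k=5, seed=0):
            """Basic SMOTE: each synthetic sample is a random point on the segment
            between a minority sample and one of its k nearest minority neighbours."""
            rng = np.random.default_rng(seed)
            X = np.asarray(minority, dtype=float)
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            neighbours = np.argsort(d, axis=1)[:, :k]   # k-NN inside the minority class
            synthetic = []
            for _ in range(n_new):
                i = rng.integers(len(X))
                j = neighbours[i, rng.integers(k)]
                gap = rng.random()
                synthetic.append(X[i] + gap * (X[j] - X[i]))
            return np.array(synthetic)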

    Weakly-supervised training method about Chinese spoken language understanding
    LI Yanling, YAN Yonghong
    2015, 35(7):  1965-1968.  DOI: 10.11772/j.issn.1001-9081.2015.07.1965

    Acquiring annotated corpora is a difficult problem for supervised approaches. Aiming at the intention recognition task of Chinese spoken language understanding, two weakly supervised training approaches were studied: one combining active learning with self-training, the other co-training. A new method was proposed for acquiring two independent feature sets, as two views for co-training, from spoken language understanding data in a cascade framework: character features of the sentence, and semantic class features obtained from the key semantic concept recognition task. The experimental results on a Chinese spoken language corpus show that combining active learning with self-training minimizes manual annotation compared with passive learning and active learning alone; furthermore, given only a small amount of initial annotated data, co-training on the two feature sets reduces the classification error rate by an average of 0.52% compared with the single character feature set.

    Improved artificial bee colony algorithm based on dynamic evaluation selection strategy
    XU Xiangping, LU Haiyan, CHENG Biyun
    2015, 35(7):  1969-1974.  DOI: 10.11772/j.issn.1001-9081.2015.07.1969

    To overcome the tendency of the standard Artificial Bee Colony (ABC) algorithm to become trapped in local optima, its roulette selection strategy was modified and an improved ABC based on a dynamic evaluation selection strategy (DSABC) was proposed. Firstly, the quality of each food source position was evaluated dynamically according to how many times it had been continuously updated or had stagnated within a certain number of recent iterations. Then, onlooker bees were recruited for food sources according to the value of this evaluation function. The experimental results on six benchmark functions show that, compared with the standard ABC algorithm, the dynamic evaluation selection strategy greatly improves solution quality; in particular, for the Rosenbrock function at two different dimensions, the absolute error of the best solution is reduced from 0.0017 and 0.0013 to 0.000049 and 0.000057 respectively. Moreover, the DSABC algorithm avoids the premature convergence caused by decreasing population diversity at later stages and improves the accuracy and stability of solutions, providing an efficient and reliable method for function optimization.

    Finding method of users' real-time demands for literature search systems
    XU Hao, CHEN Xue, HU Xiaofeng
    2015, 35(7):  1975-1978.  DOI: 10.11772/j.issn.1001-9081.2015.07.1975

    Because literature search systems fail to comprehend users' real-time demands, a method to discover users' real-time demands for literature search systems was proposed. Firstly, the method analyzed users' personalized search behaviors such as browsing and downloading. Secondly, it established users' real-time Requirement Documents (RD) based on the relations between search behaviors and user requirements, and extracted a keyword network from the requirement documents. Finally, it obtained users' demand graphs, formed by core nodes extracted from the keyword network by means of random walk. The experimental results show that the demand-graph method increases the F-measure by 2.5% on average compared with the K-medoids algorithm when users' demands are simulated, and by 5.3% on average compared with the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm when users actually search for papers. Thus, applied in literature search systems where users' requirements are stable, the method can capture users' demands and enhance the search experience.

    Tensor factorization recommendation algorithm combined with social network and tag information
    DING Xiaohuan, PENG Furong, WANG Qiong, LU Jianfeng
    2015, 35(7):  1979-1983.  DOI: 10.11772/j.issn.1001-9081.2015.07.1979

    The item recommendation precision of social tagging recommendation systems suffers from sparse data matrices. Considering that Singular Value Decomposition (SVD) handles sparse matrices well and that friends' information reflects personal interests and hobbies, a tensor factorization recommendation algorithm combining social network and tag information was proposed. Firstly, Higher-Order Singular Value Decomposition (HOSVD) was used for latent semantic analysis and dimensionality reduction: the user-item-tag triples were analyzed by HOSVD to obtain the relationships among them. Then, combining the user-friend relationships with the similarity between friends, the tensor factorization result was modified and a third-order tensor model was built to realize item recommendation. Finally, experiments were conducted on two real datasets. The experimental results show that the proposed algorithm improves recall and precision by 2.5% and 4% respectively compared with the plain HOSVD method, further verifying that incorporating friend relations enhances recommendation accuracy; moreover, the tensor decomposition model is extended to realize personalized recommendation.
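
    HOSVD of the user-item-tag tensor can be sketched with mode-n unfoldings (a generic truncated HOSVD under assumed ranks; the paper's friend-based correction is applied afterwards and is not shown):

        import numpy as np

        def unfold(T, mode):
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def hosvd(T, ranks):
            """Truncated HOSVD: one factor matrix per mode from the unfolding's
            left singular vectors, then the core tensor by mode products."""
            U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
                 for m, r in enumerate(ranks)]
            core = T
            for m, Um in enumerate(U):
                core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
            return core, U

        # Low-rank reconstruction approximating the user-item-tag tensor:
        T = np.random.rand(20, 30, 15)               # users x items x tags
        core, U = hosvd(T, ranks=(5, 8, 4))
        approx = core
        for m, Um in enumerate(U):
            approx = np.moveaxis(np.tensordot(Um, np.moveaxis(approx, m, 0), axes=1), 0, m)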

    Social friend recommendation mechanism based on three-degree influence
    WANG Mingyang, JIA Chongchong, YANG Donghui
    2015, 35(7):  1984-1987.  DOI: 10.11772/j.issn.1001-9081.2015.07.1984

    For the friend recommendation problem in social networks, a friend recommendation algorithm based on the theory of three-degree influence was proposed. Relationships between social network users include not only mutual friends but also connecting paths of different lengths. By introducing the theory of three-degree influence, the algorithm took all relationships within three degrees between users into account, rather than considering only the number of mutual friends as the main basis of recommendation. By assigning corresponding weights to connections of different lengths, the strength of the friend relationship between users could be calculated and used as the recommendation criterion. The experimental results on Sina microblog and Facebook data show that the precision and recall of the proposed algorithm improve by about 5% and 0.8% respectively over recommendation based merely on mutual friends, indicating better recommendation performance. It can help social platforms improve their recommendation systems and enhance user experience.
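
    The scoring can be sketched with adjacency-matrix powers: entry (i, j) of A**k counts length-k paths, so weighting and summing the first three powers scores all relationships within three degrees (the decaying weights below are illustrative, not the paper's values):

        import numpy as np

        def friendship_strength(adj, weights=(1.0, 0.5, 0.25)):
            """adj: 0/1 adjacency matrix; weighted sum of 1-, 2- and 3-hop path counts."""
            A = np.asarray(adj, dtype=float)
            score = weights[0] * A + weights[1] * (A @ A) + weights[2] * (A @ A @ A)
            np.fill_diagonal(score, 0.0)          # ignore self-loops
            return score

        def recommend(adj, user, top_n=3):
            s = friendship_strength(adj)[user].copy()
            s[np.asarray(adj, dtype=bool)[user]] = -np.inf   # skip existing friends
            s[user] = -np.inf
            return np.argsort(s)[::-1][:top_n]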

    Collaborative filtering recommendation system based on multi-attribute utility
    DENG Feng, ZHANG Yongan
    2015, 35(7):  1988-1992.  DOI: 10.11772/j.issn.1001-9081.2015.07.1988

    Focusing on the heavy user burden and high dimensionality of Multi-Criteria Collaborative Filtering (MC-CF) recommendation systems, a Multi-Attribute Utility Collaborative Filtering (MAU-CF) recommendation system was proposed. Firstly, attribute weights and attribute-value utilities were extracted from user browsing behavior, and each user's multi-attribute utility function was built to obtain implicit ratings of items. Secondly, attribute-value collections matching user preferences were constructed based on a Genetic Algorithm (GA). Thirdly, the nearest neighborhood was found via the attribute-weight and attribute-value similarity of these collections. Finally, the utilities of items that the nearest neighbors had browsed and bought were predicted for the user by similarity, and high-utility items were recommended. In comparison experiments with MC-CF, explicit utility was replaced by the implicit utility computed by MAU-CF; the calculation dimension decreased by 44.16%, the time cost by 27.36% and the Mean Absolute Error (MAE) by 5.69%, while user satisfaction increased by 13.44%. The experimental results show that the MAU-CF recommendation system outperforms MC-CF in user burden, calculation dimension and recommendation quality.

    Service integration-oriented workflow model and implementation method
    ZHANG Xinglong, LI Songli, XIAO Junchao
    2015, 35(7):  1993-1998.  DOI: 10.11772/j.issn.1001-9081.2015.07.1993
    Abstract ( )   PDF (985KB) ( )
    References | Related Articles | Metrics

    Two problems arise when using current workflow techniques to integrate existing software services: 1) the information of the integrated services is insufficient to satisfy the service integration needs; 2) only a few node types without any business meaning can be chosen during process customization, which complicates customization. Therefore, a new workflow model was proposed. Firstly, the three information parts of the workflow model, namely structure information, service information and people information, were determined by analyzing actual business processes in a service integration environment; secondly, the correspondence between the three information parts and JPDL (JBoss jBPM Process Definition Language) was given to prove the completeness of the new workflow model; finally, the key elements of each information part were described. The experimental results show that 48 business processes meeting actual business needs can be quickly built on top of 35 services. The rich service information guarantees process operation, with an execution correctness rate of 100%, and process customization is more convenient, taking less than 2 min from customization to execution. The results show that the proposed model helps to build new business processes quickly from existing software services and saves software development costs.

    Source code summarization technology based on syntactic analysis
    WANG Jinshui, XUE Xingsi, WENG Wei
    2015, 35(7):  1999-2003.  DOI: 10.11772/j.issn.1001-9081.2015.07.1999
    Abstract ( )   PDF (792KB) ( )
    References | Related Articles | Metrics

    To overcome the drawback that the bag-of-words model ignores the semantic relationships between terms and the concept structure, a source code summarization technology based on syntactic analysis was proposed. Firstly, part-of-speech tagging was utilized to recognize the keywords that best characterize the code. Secondly, chunk parsing was used to revise the errors that could be introduced during part-of-speech tagging. Thirdly, noise reduction was carried out on those keywords to decrease the influence of text noise. Finally, the keywords with the highest weights were selected to compose the summaries. Compared with the TF-IDF (Term Frequency-Inverse Document Frequency)-based and extended TF-IDF-based source code summarization technologies in the experiment, the summaries obtained by the proposed technology improve the overlap coefficient with the golden set by at least 9% and 6% respectively, which indicates that the proposed technology is able to generate more precise source code summaries.
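
    The part-of-speech stage can be sketched as follows (Python with nltk assumed; the identifier splitting and the noun/verb filter below are simplifications of the paper's chunk parsing and noise reduction).

        import re
        from collections import Counter
        import nltk  # needs nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

        def summarize(source_code, k=5):
            # Split identifiers such as parseXmlConfigFile into plain words.
            words = re.findall(r'[A-Za-z][a-z]+', source_code)
            tagged = nltk.pos_tag(words)
            # Keep nouns and verbs, which carry most of the code's meaning,
            # and rank them by frequency as a simple keyword weight.
            keywords = [w.lower() for w, t in tagged if t.startswith(('NN', 'VB'))]
            return [w for w, _ in Counter(keywords).most_common(k)]

        print(summarize("public void parseXmlConfigFile(String fileName) { ... }"))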

    Generation method of thread scheduling sequence based on all synchronization pairs coverage criteria
    SHI Cunfeng, LI Zheng, GUO Junxia, ZHAO Ruilian
    2015, 35(7):  2004-2008.  DOI: 10.11772/j.issn.1001-9081.2015.07.2004
    Abstract ( )   PDF (994KB) ( )
    References | Related Articles | Metrics

    Aiming at the low efficiency of generating Thread Scheduling Sequences (TSS) that cover the synchronization statements in multi-threaded concurrent programs, a TSS Generation Based on All synchronization pairs coverage criteria (TGBA) method was proposed. First, according to the synchronization statements in a concurrent program, the synchronization pair and All Synchronization Pairs Coverage criteria (APSC) were defined. Second, a construction method of the Synchronization Pair Thread Graph (SPTG) was given, on the basis of which TSSs satisfying APSC were generated. Finally, by using the JPF (Java PathFinder) detection tool, TSS generation experiments were conducted on four Java library concurrent programs, and the generation efficiency was compared with that of the general sequence generation methods of Default Scheduling (DS), Preemptive Scheduling (PS) and Cross Scheduling (CS). The experimental results illustrate that, unlike the DS and CS methods, the TGBA method generates TSSs that cover all synchronization pairs. Moreover, when satisfying APSC, the TGBA method explores at least 19889 fewer states and 44352 fewer transitions than the PS method, and the average generation efficiency increases by 1.95 times. Therefore, the TGBA method can reduce the cost of state space exploration and improve the efficiency of TSS generation.

    Generation method for Web link testing cases with permissions and sequence based on UML diagram
    ZHANG Ju, WANG Shuyan, SUN Jiaze
    2015, 35(7):  2009-2014.  DOI: 10.11772/j.issn.1001-9081.2015.07.2009
    Abstract ( )   PDF (923KB) ( )
    References | Related Articles | Metrics

    Aiming at the misjudgments resulting from the lack of permission and time-sequence considerations in traditional Web test case generation, a method combining the Unified Modeling Language (UML) activity diagram and statechart diagram was proposed, which generates test cases according to different users' permissions and an interaction-process analysis of Web page links. The proposed method generated an extended statechart diagram containing the information elements, and obtained the final Web link test cases with permissions and sequence by transforming the extended statechart diagram and reordering the corresponding paths. Compared with traditional Web test case generation methods that lack permission and time-sequence considerations, this method avoids misjudgments effectively and has obvious advantages in coverage, accuracy and efficiency. The experimental results show that the proposed method can improve the efficiency, reliability and feasibility of Web testing.

    Remote sensing image fusion algorithm based on modified Contourlet transform
    CHEN Lixia, ZOU Ning, YUAN Hua, OUYANG Ning
    2015, 35(7):  2015-2019.  DOI: 10.11772/j.issn.1001-9081.2015.07.2015
    Abstract ( )   PDF (1075KB) ( )
    References | Related Articles | Metrics

    Focusing on the low spatial resolution of remote sensing images fused by the Contourlet transform, a remote sensing image fusion algorithm based on a Modified Contourlet Transform (MCT) was proposed. Firstly, the multi-spectral image was decomposed into intensity, hue and saturation components by the Intensity-Hue-Saturation (IHS) transform; secondly, the intensity component and the histogram-matched panchromatic image were decomposed by the Modified Contourlet transform to obtain low-pass subband coefficients and high-pass subband coefficients; then, the low-pass subband coefficients were fused by averaging, and the high-pass subband coefficients were merged by the Novel Sum-Modified-Laplacian (NSML). Finally, the fusion result was taken as the intensity component of the multi-spectral image, and the fused remote sensing image was obtained by the inverse IHS transform. Compared with the algorithms based on Principal Component Analysis (PCA) and Shearlet, on PCA and wavelet, and on the NonSubsampled Contourlet Transform (NSCT), the average gradient, which evaluates image sharpness, increased by 7.3%, 6.9% and 3.9% respectively. The experimental results show that the proposed method enhances the frequency localization of the Contourlet transform and the utilization of the decomposition coefficients, and effectively improves the spatial resolution of the fused remote sensing image while preserving the multi-spectral information.
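
    The IHS part of the pipeline is easy to make concrete (Python with numpy; for brevity this sketch substitutes the histogram-matched panchromatic band for the fused intensity, whereas the paper fuses the intensity and panchromatic images in the Modified Contourlet domain first).

        import numpy as np

        def ihs_pansharpen(rgb, pan):
            # rgb: (H, W, 3) floats in [0, 1]; pan: (H, W) floats.
            intensity = rgb.mean(axis=2)
            # Histogram matching reduced to mean/std matching for brevity.
            pan_m = (pan - pan.mean()) / (pan.std() + 1e-12)
            pan_m = pan_m * intensity.std() + intensity.mean()
            # Additive fast IHS: injecting the intensity difference into every
            # band is equivalent to replacing I and inverting the transform.
            return np.clip(rgb + (pan_m - intensity)[..., None], 0.0, 1.0)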

    Color image segmentation algorithm based on rough-set and hierarchical idea
    HAN Jiandong, ZHU Tingting, LI Yuexiang
    2015, 35(7):  2020-2024.  DOI: 10.11772/j.issn.1001-9081.2015.07.2020
    Abstract ( )   PDF (1017KB) ( )
    References | Related Articles | Metrics

    Aiming at the false segmentation of small regions and the high computational complexity of traditional color image segmentation algorithms, a hierarchical color image segmentation method based on rough set and the HSI (Hue-Saturation-Intensity) color space was proposed. Firstly, since the singularities in HSI space are the achromatic pixels in RGB space, the achromatic regions of RGB space were segmented and labeled to remove the singularities from the original image. Secondly, the original image was converted from RGB space to HSI space. For the intensity component, in view of spatial neighborhood information and regional distribution differences, the original histogram was weighted by a homogeneity function with varying thresholds and gradients. The weighted and original histograms were used as the upper and lower approximation sets of the rough set respectively, and a new roughness function was defined and applied to segmentation. Then, the regions obtained in the previous stage were further segmented according to the histogram of the hue component. Finally, homogeneous regions were merged in RGB space to avoid over-segmentation. Compared with the rough-set-based method proposed by Mushrif et al. (MUSHRIF M M, RAY A K. Color image segmentation: rough-set theoretic approach. Pattern Recognition Letters, 2008, 29(4): 483-493), the proposed method segments small regions easily, avoids the false segmentation caused by the correlation between RGB color components, and executes 5-8 times faster. The experimental results show that the proposed method yields better segmentation and is efficient and robust to noise.

    Target tracking approach based on adaptive fusion of dual-criteria
    ZHANG Canlong, TANG Yanping, LI Zhixin, CAI Bing, MA Haifei
    2015, 35(7):  2025-2028.  DOI: 10.11772/j.issn.1001-9081.2015.07.2025
    Abstract ( )   PDF (815KB) ( )
    References | Related Articles | Metrics

    Since single-criterion-based trackers cannot adapt to complex environments, a tracking approach based on the adaptive fusion of dual criteria was proposed. In the method, the second-order spatiogram was employed to represent the target; the similarity between a target candidate and the target model, as well as the contrast between the target candidate and its neighboring background, was used to evaluate the candidate's reliability; and the objective function (likelihood function) was established by a weighted fusion of the two criteria. A particle filter was used to search for the target, and fuzzy logic was applied to adaptively adjust the weights of similarity and contrast. Experiments were carried out on several challenging sequences of people and animals. The results show that, compared with other trackers such as the incremental visual tracker and the ℓ1 tracker, the proposed algorithm achieves better overall performance in handling occlusion, deformation, rotation and appearance change, with a success rate above 80% and an average overlap ratio above 0.76.

    Adaptive moving object extraction algorithm based on visual background extractor
    LYU Jiaqing, LIU Licheng, HAO Luguo, ZHANG Wenzhong
    2015, 35(7):  2029-2032.  DOI: 10.11772/j.issn.1001-9081.2015.07.2029
    Abstract ( )   PDF (628KB) ( )
    References | Related Articles | Metrics

    Foreground detection in complex scenes is the prerequisite of video analysis. To solve the problem of low accuracy in foreground moving-target detection, an improved moving object extraction algorithm based on the Visual Background Extractor (ViBE), called ViBE+, was proposed. Firstly, in the model initialization stage, each background pixel was modeled by a collection of its diamond-neighborhood pixels to simplify the sample information. Secondly, in the moving object extraction stage, the segmentation threshold was obtained adaptively so that moving objects could be extracted in dynamic scenes. Finally, for sudden illumination changes, a method of background rebuilding and update-parameter adjustment was proposed for the background update process. The experimental results show that, compared with the Gaussian Mixture Model (GMM) algorithm, the Codebook algorithm and the original ViBE algorithm, the improved algorithm's similarity metric on moving object extraction increases by 1.3 times, 1.9 times and 3.8 times respectively in the complex video scene LightSwitch. The proposed algorithm adapts better to complex scenes and outperforms the compared algorithms.
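
    The core ViBE classification rule that ViBE+ builds on can be sketched in a few lines (Python with numpy; grayscale, with the sample set assumed already initialized, e.g. from each pixel's diamond neighborhood, and with ViBE+'s adaptive threshold omitted).

        import numpy as np

        def vibe_classify(frame, samples, radius=20, min_matches=2):
            # frame: (H, W) uint8 current frame; samples: (N, H, W) background
            # samples per pixel. A pixel is background if at least
            # `min_matches` samples lie within `radius` of its current value.
            dist = np.abs(samples.astype(np.int16) - frame.astype(np.int16))
            matches = (dist < radius).sum(axis=0)
            return matches < min_matches   # True = foreground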

    Foreground detection algorithm based on dynamic threshold kernel density estimation
    YANG Dayong, YANG Jianhua, LU Wei
    2015, 35(7):  2033-2038.  DOI: 10.11772/j.issn.1001-9081.2015.07.2033
    Abstract ( )   PDF (971KB) ( )
    References | Related Articles | Metrics

    An improved Kernel Density Estimation (KDE) algorithm for foreground segmentation was proposed to deal with the difficulties that reciprocating pumps and similar disturbances cause for foreground segmentation in Coal Bed Methane (CBM) extraction fields, as well as with the poor real-time performance of KDE. Background Subtraction (BS) and three-frame difference were applied to divide the image into dynamic and non-dynamic background regions, and KDE was then used to segment the foreground in the dynamic background region only. A new method of determining the dynamic threshold for foreground segmentation was proposed. The mean absolute deviation of the samples and the sample variance were combined to compute the bandwidth, and a strategy combining regular updates with real-time updates was used to renew the second background model, in which samples were replaced by random selection instead of the First-In-First-Out (FIFO) mode. In the simulation experiments, compared with KDE and Background Subtraction Kernel Density Estimation (BS-KDE) respectively, the average per-frame processing time of the improved KDE is reduced by 94.18% and 15.38%, and the extracted moving objects are more complete. The experimental results show that the proposed algorithm can accurately detect the foreground in CBM extraction fields and basically meets the real-time requirement of standard-definition video surveillance systems.
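
    The kernel density test at the heart of the algorithm looks like this (Python with numpy; `bandwidth` stands in for the paper's combination of mean absolute deviation and sample variance, and `threshold` for its dynamic threshold, both passed as inputs to keep the sketch minimal).

        import numpy as np

        def kde_foreground(frame, samples, bandwidth, threshold):
            # frame: (H, W) current image; samples: (N, H, W) recent
            # background frames. Low Gaussian-kernel density under the
            # sample history marks a pixel as foreground.
            d = (samples.astype(float) - frame.astype(float)) / bandwidth
            density = np.exp(-0.5 * d * d).mean(axis=0) / (np.sqrt(2 * np.pi) * bandwidth)
            return density < threshold   # True = foreground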

    Pollen image classification and recognition based on Gaussian scale-space roughness descriptor
    XIE Yonghua, XU Zhaofei, FAN Wenxiao
    2015, 35(7):  2039-2042.  DOI: 10.11772/j.issn.1001-9081.2015.07.2039
    Abstract ( )   PDF (645KB) ( )
    References | Related Articles | Metrics

    To address the problem that most existing roughness descriptors depend on the average grey value, which easily causes the loss of image information, a new roughness descriptor based on Gaussian scale space was presented for pollen image classification and recognition. In this method, the Gaussian pyramid algorithm was used to decompose the image into several scale-space levels, and the roughness texture feature was extracted from each level. The statistical distribution of roughness frequencies was then calculated to build the Scale-Space Roughness Histogram Descriptor (SSRHD). Finally, the Euclidean distance was used to measure the similarity between images. The simulation results on the Confocal and Pollenmonitor image databases demonstrate that, compared with Discrete Hidden Markov Model Descriptors (DHMMD), the Correct Recognition Rate (CRR) achieved by SSRHD increases by 2.32% on Confocal and 1.2% on Pollenmonitor, while the False Recognition Rate (FRR) decreases by 0.1% on Confocal. The experimental results show that the SSRHD effectively describes pollen image texture and is robust to pollen rotation and pose variation.

    Part appearance model based on support vector machine and fuzzy k-means algorithm
    HAN Guijin
    2015, 35(7):  2043-2046.  DOI: 10.11772/j.issn.1001-9081.2015.07.2043
    Abstract ( )   PDF (660KB) ( )
    References | Related Articles | Metrics

    Existing part appearance models based on the Histogram of Oriented Gradients (HOG) have two defects: 1) the same cell size is used for different parts; 2) a linear Support Vector Machine (SVM) classifier cannot accurately represent the similarity between a position state and the appearance model. To overcome these defects, a part appearance model based on SVM and the fuzzy k-means algorithm was built. The appearance model is composed of two classifiers: a linear SVM classifier determines whether a position state belongs to a human part, and a similarity classifier, built from the normalized Euclidean distance between the position state and the cluster centers determined by fuzzy k-means, calculates the similarity between the position state and the appearance model. The experimental results show that the proposed appearance model represents the appearance features of real human parts more accurately than the part appearance model built by SVM with a single-cell-size HOG, and achieves higher accuracy when used for human pose estimation based on the tree-structured pictorial structure model.

    Fairing computation for T-Bézier curves based on energy method
    FANG Yongfeng, CHEN Jianjun, QIU Zeyang
    2015, 35(7):  2047-2050.  DOI: 10.11772/j.issn.1001-9081.2015.07.2047
    Abstract ( )   PDF (624KB) ( )
    References | Related Articles | Metrics

    To meet the fairing requirements of T-Bézier curves, the T-Bézier curve was smoothed by the energy method. A single control point of the T-Bézier curve was modified by the energy method to smooth the curve, and it was shown how the interference factor α influences the smoothness of the curve. This yields a procedure for fairing a T-Bézier curve by moving one control point: α is determined first, then the new control point is found, and the new T-Bézier curve is produced from the updated control points. To smooth the whole curve: firstly, the interference factors {α_i} (i = 1, …, n) were determined; secondly, the linear system whose coefficient matrix is a real symmetric tridiagonal matrix was solved; thirdly, the new control points {P_i} (i = 0, …, n) were obtained; finally, the new T-Bézier curve was produced. Both the overall fairness of the T-Bézier curve and C2 continuity at the data points were achieved. Three examples show that the proposed algorithm is simple, practical and effective.
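
    The linear-algebra step is standard: fairing all control points leads to a symmetric tridiagonal system, solvable in O(n) with a banded solver, as in this sketch (Python with scipy; the coefficients below are illustrative, whereas in the paper they come from the energy functional and the chosen interference factors).

        import numpy as np
        from scipy.linalg import solve_banded

        n = 6
        main = np.full(n, 4.0)        # diagonal entries (illustrative)
        off = np.full(n - 1, 1.0)     # sub-/super-diagonal entries (illustrative)
        rhs = np.random.rand(n, 2)    # right-hand side: x and y coordinates

        ab = np.zeros((3, n))         # banded storage: upper, main, lower
        ab[0, 1:] = off
        ab[1, :] = main
        ab[2, :-1] = off
        new_ctrl_pts = solve_banded((1, 1), ab, rhs)  # new control points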

    Face recognition algorithm based on cluster-sparse of active appearance model
    FEI Bowen, LIU Wanjun, SHAO Liangshan, LIU Daqian, SUN Hu
    2015, 35(7):  2051-2055.  DOI: 10.11772/j.issn.1001-9081.2015.07.2051
    Abstract ( )   PDF (864KB) ( )
    References | Related Articles | Metrics

    The recognition accuracy of the traditional Sparse Representation Classification (SRC) algorithm is relatively low under the interference of complex non-face components, large training sample sets and high similarity between training samples. To solve these problems, a novel face recognition algorithm based on Cluster-Sparse of Active Appearance Model (CS-AAM) was proposed. Firstly, the Active Appearance Model (AAM) was used to rapidly and accurately locate facial feature points and obtain the main facial information. Secondly, K-means clustering was run on the training sample set, images with high similarity were assigned to the same category, and the cluster centers were calculated. Then, the centers were used as atoms to construct an over-complete dictionary for sparse decomposition. Finally, face images were classified and recognized by computing sparse coefficients and reconstruction residuals. Face images with different numbers of samples and different dimensions from the ORL and Extended Yale B face databases were tested to compare CS-AAM with Nearest Neighbor (NN), Support Vector Machine (SVM), Sparse Representation Classification (SRC) and Collaborative Representation Classification (CRC). The recognition rate of CS-AAM is higher than that of the other algorithms for the same samples or the same dimensions: under the same dimensions, it reaches 95.2% with 210 samples on the ORL database and 96.8% with 600 samples on the Extended Yale B database. The experimental results demonstrate that the proposed method has higher recognition accuracy.
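
    The clustering-plus-sparse-coding core can be sketched with standard tools (Python with numpy and scikit-learn; AAM feature extraction is assumed to have produced the vectors in X_train, and the cluster count is illustrative).

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import orthogonal_mp

        def build_dictionary(X_train, y_train, centers_per_class=3):
            # Cluster each class and use the centers as dictionary atoms,
            # a condensed stand-in for grouping highly similar images.
            atoms, atom_labels = [], []
            for c in np.unique(y_train):
                km = KMeans(n_clusters=centers_per_class, n_init=10)
                km.fit(X_train[y_train == c])
                atoms.append(km.cluster_centers_)
                atom_labels += [c] * centers_per_class
            D = np.vstack(atoms).T               # columns are atoms
            D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
            return D, np.array(atom_labels)

        def classify(D, atom_labels, x, sparsity=5):
            coef = orthogonal_mp(D, x, n_nonzero_coefs=sparsity)
            # Assign the query to the class with the smallest residual.
            res = {c: np.linalg.norm(x - D[:, atom_labels == c] @ coef[atom_labels == c])
                   for c in np.unique(atom_labels)}
            return min(res, key=res.get)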

    Face recognition algorithm of collaborative representation based on Shearlet transform and uniform local binary pattern
    XIE Pei, WU Xiaojun
    2015, 35(7):  2056-2061.  DOI: 10.11772/j.issn.1001-9081.2015.07.2056
    Abstract ( )   PDF (1149KB) ( )
    References | Related Articles | Metrics

    To extract richer texture features from face images and thus improve face recognition accuracy, a new face recognition algorithm, Shearlet_ULBP CRC (Shearlet_ULBP feature based Collaborative Representation Classification), was proposed; its features are extracted by applying the Uniform Local Binary Pattern (ULBP) histogram to the Shearlet coefficients. First, the Shearlet transform was used to extract multi-orientation facial information, and average fusion was applied to fuse the original Shearlet features of the same scale. Second, each fused image was divided into several non-overlapping blocks, and the face image was described by the histogram sequence extracted from all blocks with the ULBP operator. Finally, the extracted features were fed into a collaborative-representation-based classifier. The proposed method extracts richer edge and texture information. Experiments on the ORL, Extended Yale B and AR face databases achieved more than 99% recognition accuracy for images without occlusion, and still more than 91% for occluded images. The experimental results show that the proposed method is robust to illumination, pose and expression variations as well as occlusion.
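
    Collaborative representation has a closed form, which is what makes it fast; a minimal sketch follows (Python with numpy; A would hold the Shearlet_ULBP histograms of the training images as columns, and the regularization weight is illustrative).

        import numpy as np

        def crc_classify(A, labels, y, lam=1e-3):
            # Code the query y over ALL training samples with l2-regularized
            # least squares: rho = argmin ||y - A rho||^2 + lam ||rho||^2.
            n = A.shape[1]
            rho = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
            # Class-wise reconstruction residuals decide the label.
            res = {c: np.linalg.norm(y - A[:, labels == c] @ rho[labels == c])
                   for c in np.unique(labels)}
            return min(res, key=res.get)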

    Eye state recognition algorithm based on online features
    XU Guoqing
    2015, 35(7):  2062-2066.  DOI: 10.11772/j.issn.1001-9081.2015.07.2062
    Abstract ( )   PDF (802KB) ( )
    References | Related Articles | Metrics

    Focusing on the issue that eye localization accuracy drastically affects the correct recognition rate of the eye state, an eye state recognition algorithm combined with an online skin feature model was proposed. Firstly, an online skin model was established by fusing the Active Appearance Model (AAM) of the received face image with the skin characteristics of the active user. Secondly, in the preliminarily positioned eye area, the online skin model was used again to calculate the precise locations of the inner and outer eye corners, and the optimal eye positions were computed with reference to these corners. Finally, the Local Binary Pattern (LBP) features of the eye area were extracted, and the open and closed states of the eyes were recognized effectively by a Support Vector Machine (SVM). In comparison experiments with a globally localized eye-corner detection algorithm, the localization error was further reduced; on low-resolution face images, the average recognition accuracy was 95.03% for the open state and 95.47% for the closed state. Compared with algorithms based on Haar features and Gabor features, the efficiency increased by 2.9% and 4.8% respectively. The theoretical analysis and simulation results show that the algorithm based on online features can effectively improve eye state recognition from real-time face video.
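
    The final stage (LBP features into an SVM) can be sketched as follows (Python with scikit-image and scikit-learn; the eye patches are assumed to come from the corner-based localization above, and X_patches/y_states are hypothetical training data).

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def lbp_histogram(eye_patch, P=8, R=1):
            # Uniform LBP gives P + 2 distinct codes; their normalized
            # histogram is the feature vector for the eye region.
            lbp = local_binary_pattern(eye_patch, P, R, method='uniform')
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        # Hypothetical usage: 1 = open, 0 = closed.
        # clf = SVC(kernel='rbf').fit([lbp_histogram(p) for p in X_patches], y_states)
        # state = clf.predict([lbp_histogram(new_eye_patch)])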

    Robust optimal control of single conveyor-serviced production station with uncertain service rate
    HUANG Hao, TANG Hao, ZHOU Lei, CHENG Wenjuan
    2015, 35(7):  2067-2072.  DOI: 10.11772/j.issn.1001-9081.2015.07.2067
    Abstract ( )   PDF (962KB) ( )
    References | Related Articles | Metrics

    The robust optimal control of a single Conveyor-Serviced Production Station (CSPS) with uncertain service rate was studied. In the case where only the interval of the service rate is given and the look-ahead range is controllable, the robust optimal control problem can be described as a mini-max problem by using a Semi-Markov Decision Process (SMDP) with uncertain parameters. A global optimization method was adopted to derive the state-dependent optimal robust control policy. Firstly, the worst performance value under a fixed policy was obtained by a genetic algorithm. Secondly, based on the obtained worst performance values, the optimal robust control policy was derived with a simulated annealing algorithm. The simulation results show that there is little difference between the optimal performance cost of the system whose service rate is fixed at the interval mean and the optimal robust performance cost of the CSPS system with uncertain service rate, and that the difference becomes smaller as the uncertainty interval narrows, which means that the global optimization algorithm works effectively.

    Intelligent environment measuring and controlling system of textile workshop based on Internet of things
    LIU Xiangju, LI Jingzhao, LIU Lina
    2015, 35(7):  2073-2076.  DOI: 10.11772/j.issn.1001-9081.2015.07.2073
    Abstract ( )   PDF (722KB) ( )
    References | Related Articles | Metrics

    To improve the workshop environment of textile mills and raise the level of automatic environment control, an intelligent environment measuring and controlling system for textile workshops based on the Internet of Things (IoT) was proposed, and its overall design scheme was given. To reduce the traffic load on sink nodes and improve the data transmission rate of the network, a wireless network topology with single-hop multi-sink nodes was designed. The concrete hardware design and software workflow of the sensing nodes, controlling nodes and other nodes were described in detail. An improved Newton interpolation algorithm was used as the fitting function to process the detection data, which improved the detection and control precision of the system. The application results show that the system is simple, stable and reliable, low in cost, easy to maintain and upgrade, and achieves good application effects.
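
    Classical Newton interpolation, the basis of the improved fitting function, reduces to a divided-difference table and a Horner-style evaluation (Python with numpy; the paper's improvement to the scheme is not reproduced here, and the sample points are illustrative).

        import numpy as np

        def newton_coefficients(x, y):
            # Divided differences computed in place, column by column.
            x = np.asarray(x, dtype=float)
            a = np.array(y, dtype=float)
            for j in range(1, len(x)):
                a[j:] = (a[j:] - a[j - 1:-1]) / (x[j:] - x[:-j])
            return a

        def newton_eval(a, x_nodes, t):
            # Horner-style evaluation of the Newton form at t.
            result = a[-1]
            for k in range(len(a) - 2, -1, -1):
                result = result * (t - x_nodes[k]) + a[k]
            return result

        # E.g. fit sampled (raw reading, calibrated value) pairs:
        nodes = [0.0, 1.0, 2.0, 4.0]
        a = newton_coefficients(nodes, [1.0, 3.0, 2.0, 5.0])
        print(newton_eval(a, nodes, 1.5))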

    Stock market volatility forecast based on calculation of characteristic hysteresis
    YAO Hongliang, LI Daguang, LI Junzhao
    2015, 35(7):  2077-2082.  DOI: 10.11772/j.issn.1001-9081.2015.07.2077
    Abstract ( )   PDF (869KB) ( )
    References | Related Articles | Metrics

    Focusing on the issue that inflection points in stock price volatility are hard to forecast, which degrades forecast accuracy, a Lag Risk Degree Threshold Generalized Autoregressive Conditional Heteroscedasticity in Mean (LRD-TGARCH-M) model was proposed. Firstly, hysteresis was defined based on the inconsistency between stock price volatility and index volatility, and a Lag Degree (LD) calculation model based on the energy volatility of the stock was proposed. The LD was then used to measure risk and put into the mean equation of the share price to overcome the deficiency of the Threshold Generalized Autoregressive Conditional Heteroscedasticity in Mean (TGARCH-M) model in predicting inflection points. Next, considering the drastic volatility near inflection points, the LD was also put into the variance equation to optimize the variance dynamics and improve forecast accuracy. Finally, the volatility forecasting formulas and an accuracy analysis of the LRD-TGARCH-M model were given. The experimental results on Shanghai stock data show that the forecast accuracy increases by 3.76% compared with the TGARCH-M model and by 3.44% compared with the Exponential Generalized Autoregressive Conditional Heteroscedasticity in Mean (EGARCH-M) model, which proves that the LRD-TGARCH-M model can reduce the errors in price volatility forecasting.
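
    The TGARCH-M base that the model extends is a short variance recursion with a leverage threshold, sketched below (Python with numpy; the parameters are illustrative, not fitted, and the paper's lag-degree term in the mean and variance equations is omitted).

        import numpy as np

        def tgarch_variance(eps, omega=0.05, alpha=0.1, gamma=0.1, beta=0.8):
            # s2[t] = omega + (alpha + gamma * 1{eps[t-1] < 0}) * eps[t-1]^2
            #         + beta * s2[t-1]; the indicator is the leverage term.
            s2 = np.empty(len(eps))
            s2[0] = np.var(eps)
            for t in range(1, len(eps)):
                e = eps[t - 1]
                s2[t] = omega + (alpha + gamma * (e < 0)) * e * e + beta * s2[t - 1]
            return s2

        # In the -M (in-mean) form, sqrt(s2[t]) also enters the return
        # equation as a risk premium; LRD-TGARCH-M adds the lag degree LD.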

    Application of extreme learning machine with kernels model based on iterative error correction in short term electricity load forecasting
    LANG Kun, ZHANG Mingyuan, YUAN Yongbo
    2015, 35(7):  2083-2087.  DOI: 10.11772/j.issn.1001-9081.2015.07.2083
    Abstract ( )   PDF (810KB) ( )
    References | Related Articles | Metrics

    Focusing on the issue that Back Propagation (BP) neural networks limit the prediction accuracy of short-term electricity load forecasting, a prediction model based on the Extreme Learning Machine with Kernels and Iterative Error Correction (KELM-IEC) was proposed. Firstly, an input index system was built in which 7 factors were selected as the inputs of the prediction model: month of the year, day of the month, day of the week, week number, holiday, daily average temperature, and the maximum electricity load of the previous day. Secondly, a load prediction model was built based on the Extreme Learning Machine with Kernels (KELM), which introduces the kernel function mapping of the Support Vector Machine (SVM) as the hidden-layer node mapping of the Extreme Learning Machine (ELM). KELM effectively combines the simple structure of ELM with the good generalization ability of SVM, which can improve prediction accuracy. Finally, an Iterative Error Correction (IEC) model was built following the IEC method in time series prediction: the prediction errors of the load prediction model were learned by another KELM, so that the prediction results could be corrected, decreasing the prediction errors and improving predictive performance. In simulation experiments on two actual electricity load data sets, compared with the BP neural network model, the Mean Absolute Percentage Error (MAPE) of the KELM-IEC model decreased by 74.39% and 34.73% respectively, while the Maximum Error (ME) decreased by 58.34% and 39.58%. Compared with the plain KELM model, MAPE decreased by 18.60% and 4.29% respectively, while ME decreased by 0.08% and 11.21%, which verifies the necessity of the IEC strategy. The simulation results show that the KELM-IEC model can improve the prediction accuracy of short-term electricity load forecasting, which benefits the planning, operation and management of power systems, helps guarantee the electricity demand of production and daily life, and improves both economic and social benefits.
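
    KELM itself trains in closed form, which is what the sketch below shows (Python with numpy; C and gamma are illustrative). The IEC wrapper then fits a second model of the same kind to the residuals and adds its output to the first model's prediction.

        import numpy as np

        def rbf_kernel(A, B, gamma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        class KELM:
            # Kernel extreme learning machine: beta = (I/C + K)^{-1} y.
            def __init__(self, C=100.0, gamma=1.0):
                self.C, self.gamma = C, gamma
            def fit(self, X, y):
                self.X = X
                K = rbf_kernel(X, X, self.gamma)
                self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, y)
                return self
            def predict(self, Xq):
                return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

        # IEC step (sketch): correct = KELM().fit(X, y - base.predict(X));
        # final prediction = base.predict(Xq) + correct.predict(Xq).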

    Improved artificial bee colony algorithm based on P system for 0-1 knapsack problem
    SONG Xiaoxiao, WANG Jun
    2015, 35(7):  2088-2092.  DOI: 10.11772/j.issn.1001-9081.2015.07.2088
    Abstract ( )   PDF (726KB) ( )
    References | Related Articles | Metrics

    Aiming at the defects of existing algorithms in solving large-scale 0-1 knapsack problems, an Improved Artificial Bee Colony algorithm based on P Systems (IABCPS) was introduced. IABCPS combines the ideas of Membrane Computing (MC), polar coordinate coding and the One Level Membrane Structure (OLMS), and its rules are designed from the evolutionary rules of an improved Artificial Bee Colony (ABC) algorithm together with the transformation and communication rules of P systems. The trial-limit parameter limit was adjusted to balance exploitation and exploration. The experimental results show that IABCPS finds the optimal solutions of small-scale knapsack problems. In solving a knapsack problem with 200 items, compared with the Clonal Selection Immune Genetic Algorithm (CSIGA), IABCPS increases the average result by 0.15% and decreases the variance by 97.53%; compared with the ABC algorithm, it increases the average result by 4.15% and decreases the variance by 99.69%, demonstrating good optimization ability and stability. Compared with the Artificial Bee Colony algorithm based on P Systems (ABCPS) in solving large-scale knapsack problems with 300, 500, 700 and 1000 items, IABCPS increases the average results by 1.25%, 3.93%, 6.75% and 11.21% respectively, while the ratio of the variance to the number of experiments stays in single digits, showing strong robustness.
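
    For reference, the plain binary ABC cycle that IABCPS builds on looks like this (Python with numpy; the membrane structure, polar coding and communication rules are omitted, and all parameters are illustrative).

        import numpy as np
        rng = np.random.default_rng(0)

        def fitness(sol, values, weights, cap):
            # Greedy repair: drop random chosen items until feasible.
            while weights @ sol > cap:
                sol[rng.choice(np.flatnonzero(sol))] = 0
            return values @ sol

        def abc_knapsack(values, weights, cap, n_bees=20, limit=30, iters=200):
            n = len(values)
            foods = rng.integers(0, 2, (n_bees, n))
            fits = np.array([fitness(f, values, weights, cap) for f in foods])
            trials = np.zeros(n_bees, dtype=int)
            for _ in range(iters):
                # Employed phase on every source, then onlookers drawn
                # proportionally to fitness; both flip one random bit.
                probs = fits / fits.sum() if fits.sum() else np.full(n_bees, 1 / n_bees)
                for i in list(range(n_bees)) + list(rng.choice(n_bees, n_bees, p=probs)):
                    cand = foods[i].copy()
                    cand[rng.integers(n)] ^= 1
                    f = fitness(cand, values, weights, cap)
                    if f > fits[i]:
                        foods[i], fits[i], trials[i] = cand, f, 0
                    else:
                        trials[i] += 1
                # Scout phase: abandon sources that exceeded the trial limit.
                for i in np.flatnonzero(trials > limit):
                    foods[i] = rng.integers(0, 2, n)
                    fits[i] = fitness(foods[i], values, weights, cap)
                    trials[i] = 0
            best = fits.argmax()
            return foods[best], fits[best]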

    Path optimization algorithm for team navigation based on multiple point collaboration
    QIU Jigang, LI Wenlong, YANG Jia
    2015, 35(7):  2093-2095.  DOI: 10.11772/j.issn.1001-9081.2015.07.2093
    Abstract ( )   PDF (608KB) ( )
    References | Related Articles | Metrics

    Concerning the suboptimal paths and mutual-waiting delays caused by information islands in team travel, a collaborative path optimization algorithm was proposed that employs centralized computing based on information sharing among team members. The algorithm calculates the optimal navigation path weighted by a meeting-priority factor, taking both meeting convenience and path/time shortening into overall consideration. Theoretical analysis shows that the computational complexity increases linearly with the number of team members and is approximately equal to that of the traditional path optimization algorithm. The simulation results show that the meeting-priority factor has a great influence on the optimized path and the meeting place, so it needs to be set according to actual requirements to keep a dynamic balance between team cooperation and path shortening. A typical application solution of the collaborative path optimization algorithm was given to illustrate how team members can support and help each other and travel together to the destination orderly, safely and quickly.

    Petrol-oil and lubricants support model based on multiple time windows
    YAN Hua, GAO Li, LIU Guoyong, WANG Hongqi
    2015, 35(7):  2096-2100.  DOI: 10.11772/j.issn.1001-9081.2015.07.2096
    Abstract ( )   PDF (762KB) ( )
    References | Related Articles | Metrics

    The military Petrol-Oil and Lubricants (POL) allotment and transportation problem was studied by introducing the concept of the support time window. Considering the complicated restrictions on POL support time and transportation capability, a POL allotment and transportation model based on multiple time windows was proposed using a Constraint Satisfaction Problem (CSP) modeling approach. Firstly, a formalized description of the problem elements was presented, including POL support stations, demand units, support time windows, support demands and support tasks. Based on this description, the CSP model for POL support was constructed, and the multi-objective model was transformed into a single-objective one by the perfect point method. Finally, a solving procedure based on the Particle Swarm Optimization (PSO) algorithm was designed, and a numerical example demonstrated the application of the method. In the example, the optimization scheme obtained by the proposed model was compared with that of a model whose objective is to maximize the supported quantity: both schemes utilize the transportation capacity fully, but the start supporting time of each POL demand in the proposed scheme is no later than that in the single-objective scheme. The comparison shows that the proposed model and algorithm can effectively solve the multi-objective POL support optimization problem.

    Integration algorithm of improved maximum a posteriori probability vector quantization and least squares support vector machine
    ZHANG Jun, GUAN Shengxiao
    2015, 35(7):  2101-2104.  DOI: 10.11772/j.issn.1001-9081.2015.07.2101
    Abstract ( )   PDF (584KB) ( )
    References | Related Articles | Metrics

    In view of the efficiency problems of current speaker recognition systems, a new speaker recognition framework based on an integrated algorithm was put forward. The traditional Maximum A Posteriori Vector Quantization (VQ-MAP) algorithm focuses only on the average vector regardless of weight; to solve this problem, an improved VQ-MAP algorithm using the weighted average vector instead was proposed. Moreover, since the Support Vector Machine (SVM) algorithm costs too much time, the Least Squares Support Vector Machine (LS-SVM) was used instead of SVM. Finally, the parameters calculated by the improved VQ-MAP algorithm were used as the training set of the LS-SVM. The experimental results show that, with the Radial Basis Function (RBF) kernel and a sample of 40 people, the modeling time of the integrated algorithm based on improved VQ-MAP and LS-SVM is about 40% less than that of the traditional SVM algorithm. With a threshold of 1 and a test speech length of 4 s, compared with the traditional VQ-MAP and SVM algorithms, the deterrent rate is reduced by 1.1%, the false rejection rate by 2.9%, and the recognition rate is increased by 3.9%; compared with the traditional VQ-MAP and LS-SVM algorithms, the deterrent rate is reduced by 3.6%, the false rejection rate by 2.7%, and the recognition rate is increased by 4.4%. The results show that the integrated algorithm can effectively improve the recognition rate, significantly reduce the operation time, and meanwhile reduce the deterrent rate and the false rejection rate.
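
    LS-SVM is attractive here because training reduces to a single linear system (Suykens' function-estimation formulation, with targets of ±1 for classification), as sketched below (Python with numpy; K would be built from the weighted mean vectors produced by the improved VQ-MAP step, and gamma is illustrative).

        import numpy as np

        def lssvm_train(K, y, gamma=10.0):
            # Solve  [ 0      1^T        ] [ b     ]   [ 0 ]
            #        [ 1   K + I/gamma   ] [ alpha ] = [ y ]
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
            return sol[0], sol[1:]   # bias b, coefficients alpha

        # Decision for a new utterance x: sign(k(x, X_train) @ alpha + b).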

    Cognitive radar waveform design for extended target detection based on signal-to-clutter-and-noise ratio
    YAN Dong, ZHANG Zhaoxia, ZHAO Yan, WANG Juanfen, YANG Lingzhen, SHI Junpeng
    2015, 35(7):  2105-2108.  DOI: 10.11772/j.issn.1001-9081.2015.07.2105
    Abstract ( )   PDF (703KB) ( )
    References | Related Articles | Metrics

    Focusing on the low Signal-to-Clutter-and-Noise Ratio (SCNR) of echo signals when cognitive radar detects extended targets, a waveform design method based on SCNR was proposed. Firstly, the relation between the SCNR of the cognitive radar echo signal and the Energy Spectral Density (ESD) of the transmitted signal was obtained by establishing an extended-target detection model instead of the previous point-target model; secondly, according to the maximum-SCNR criterion, the globally optimal ESD of the transmitted signal was derived; finally, to obtain a meaningful time-domain signal of constant amplitude, the ESD was synthesized by phase modulation combined with the Minimum Mean-Square Error (MMSE) criterion and an iterative algorithm, so as to meet the emission requirements of radar. In the simulation, the amplitude of the synthesized time-domain signal is uniform, and its SCNR at the output of the matched filter is 19.133 dB, only 0.005 dB less than the ideal value. The results show that the time-domain waveform meets the constant-amplitude requirement while the SCNR at the receiver output closely approximates the ideal value, which improves extended-target detection performance.

    Force estimation in different grasping mode from electromyography
    ZHANG Bingke, DUAN Xiaogang, DENG Hua
    2015, 35(7):  2109-2112.  DOI: 10.11772/j.issn.1001-9081.2015.07.2109
    Abstract ( )   PDF (577KB) ( )
    References | Related Articles | Metrics

    A method to analyze the grasping pattern and grasping force from Electromyography (EMG) simultaneously was proposed, to address the problem that most myoelectric studies focus only on pattern recognition and disregard the combination of grasping pattern and force. First, surface EMG signals were collected through 4 EMG electrodes, and force data were obtained by a Force Sensing Resistor (FSR). Then, Linear Discriminant Analysis (LDA) was used for pattern recognition, and an Artificial Neural Network (ANN) was applied to estimate force; 4 EMG-force relationships were built for the 4 different grasping modes. Once the grasping pattern was identified, the program called the corresponding force model to estimate the force value, thus combining force decoding with pattern recognition. The experimental results illustrate that when pattern and force are analyzed simultaneously, the average classification accuracy is about 77.8% and the force prediction accuracy is about 90%. The proposed method can be applied to the myoelectric control of prosthetic hands: not only can the user's intended grasping mode be decoded, but the desired force can also be estimated, which assists stable grasping.
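
    The two-stage decoding maps naturally onto standard tools, as in the sketch below (Python with scikit-learn; feature extraction from the 4-channel EMG is assumed done, with X holding feature vectors, y_mode the grasp-mode labels and y_force the FSR readings, and the network size is illustrative).

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neural_network import MLPRegressor

        def train(X, y_mode, y_force):
            classifier = LinearDiscriminantAnalysis().fit(X, y_mode)
            # One EMG-force model per grasping mode, as in the paper.
            force_models = {
                m: MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                random_state=0).fit(X[y_mode == m], y_force[y_mode == m])
                for m in np.unique(y_mode)}
            return classifier, force_models

        def decode(classifier, force_models, x):
            mode = classifier.predict(x.reshape(1, -1))[0]
            force = force_models[mode].predict(x.reshape(1, -1))[0]
            return mode, force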
