
Table of Contents

    10 April 2016, Volume 36 Issue 4
    Load-aware optimal placement for heterogeneous controllers in software-defined networking based on unbalanced graph partitioning
    WANG Yefu, ZENG Guosun, DING Chunling
    2016, 36(4):  883-888.  DOI: 10.11772/j.issn.1001-9081.2016.04.0883
    In wide-area Software Defined Networking (SDN) deployments, logically centralized controllers are usually physically placed in a distributed manner. To address the unbalanced controller load problem in controller placement, a load-aware optimal placement for heterogeneous controllers based on unbalanced graph partitioning was proposed. First, the controller placement problem and its related metrics, controller load balance and latency, were analyzed. Second, the controller load balance and latency induced by a placement were quantified using graph theory and cosine similarity. Then, based on graph partitioning theory, the load-aware controller placement problem was transformed into a specific graph partitioning problem. Finally, a load-aware optimal controller placement scheme based on multi-level graph partitioning was presented. Simulation experiments conducted on real network topologies indicate that the proposed scheme can effectively achieve nearly optimal controller load balance.
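    To make the cosine-similarity load metric above concrete, the following minimal sketch scores how balanced a set of controller loads is by comparing the load vector with the ideal uniform vector; the load values are hypothetical and the function is an illustration, not the authors' exact formulation.

```python
import math

def load_balance(loads):
    """Cosine similarity between the controller-load vector and the
    ideal uniform vector; 1.0 means a perfectly balanced placement."""
    dot = sum(loads)  # dot product with the all-ones vector
    norm = math.sqrt(sum(l * l for l in loads)) * math.sqrt(len(loads))
    return dot / norm

# hypothetical loads of three heterogeneous controllers
print(load_balance([10, 10, 10]))  # 1.0, perfectly balanced
print(load_balance([30, 5, 1]))    # well below 1.0, unbalanced placement
```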
    Dynamic weight traffic queue scheduling algorithm for network energy efficiency optimization
    XIE Zehua, ZHOU Jinhe, TANG Zhijun
    2016, 36(4):  889-893.  DOI: 10.11772/j.issn.1001-9081.2016.04.0889
    Focusing on the energy efficiency optimization of network traffic transmission, a dynamic-weight Weighted Fair Queuing algorithm (DW_WFQ) was proposed. In this algorithm, weights for the various types of flows were allocated dynamically on the basis of Weighted Fair Queuing (WFQ), so that service rates could be assigned to traffic flows in a more flexible way. The energy consumption model of the scheduling algorithm was then derived by combining it with the efficiency function of the continuous-speed-scaling model, and energy efficiency optimization was carried out. Finally, DW_WFQ, First Come First Served (FCFS) and WFQ were simulated and compared in Matlab. The simulation results show that the proposed traffic scheduling algorithm can effectively reduce system energy consumption while meeting Quality of Service (QoS) requirements.
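    The core of WFQ-style scheduling is allocating service rates in proportion to flow weights. The sketch below shows this, with a hypothetical backlog-based reweighting rule standing in for the dynamic weight assignment (the abstract does not specify DW_WFQ's actual rule).

```python
def wfq_rates(weights, capacity):
    """Allocate service rates in proportion to per-flow weights (WFQ)."""
    total = sum(weights.values())
    return {flow: capacity * w / total for flow, w in weights.items()}

def dynamic_weights(base_weights, backlog):
    """Hypothetical dynamic reweighting: scale each base weight by the
    flow's current backlog so congested flows receive more service."""
    return {f: base_weights[f] * (1 + backlog[f]) for f in base_weights}

base = {"voice": 4, "video": 3, "data": 1}     # illustrative flow classes
backlog = {"voice": 0, "video": 5, "data": 2}  # queued packets per flow
print(wfq_rates(dynamic_weights(base, backlog), capacity=100.0))
```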
    Energy-aware virtual network reconfiguration algorithm based on resource consolidation
    LYU Xinliang, ZHENG Xiangwei
    2016, 36(4):  894-898.  DOI: 10.11772/j.issn.1001-9081.2016.04.0894
    Concerning the high energy consumption, low acceptance rate and unbalanced load in virtual network embedding, a comprehensive energy-aware virtual network reconfiguration algorithm based on resource consolidation, named HEAR, was proposed; it consists of two stages: node reconfiguration and link reconfiguration. In the node reconfiguration stage, the virtual nodes on the physical node hosting the fewest virtual nodes, together with their associated virtual links, were migrated to other non-congested physical nodes to improve the acceptance rate and load balance, and physical nodes left with no load were suspended or shut down to save energy. In the link reconfiguration stage, an energy-aware method was adopted to select the candidate set of substrate links for migration, and Dijkstra's algorithm was used to select the shortest available physical path onto which the virtual links were redeployed. The simulation results show that, compared with an energy-aware relocation heuristic algorithm, the HEAR algorithm reduces energy consumption by about 20% and increases the acceptance rate by about 10%.
    Dynamic spectrum access based on budget and quality of service
    LIAO Yunfeng, CHEN Yong, SUN Aiwei, SHAO Hongxiang
    2016, 36(4):  899-904.  DOI: 10.11772/j.issn.1001-9081.2016.04.0899
    In dynamic spectrum access networks, how to utilize the spectrum resource efficiently is a vital problem. Existing studies mainly focus on the scenario in which spectrum providers lease spectrum to users for spectrum access. Considering that users differ in their actual payment capacity and Quality of Service (QoS) demands, a joint budget and power control scheme was proposed, and the interactions among spectrum providers and users were modeled as a two-stage Stackelberg game. Based on the budgets and QoS demands, providers dynamically adjusted spectrum leasing prices, and users adjusted their demands according to the prices. Finally, the existence and uniqueness of the Nash Equilibrium (NE) maximizing the profits of users and providers was proved. Simulation results show that the total budget of the users is an upper bound on the payoff of the whole network, and that users and providers can obtain the optimal profit and reach the NE.
    Fault localization for electric power communication network based on fault propagation model and supervised learning
    ZHAO Canming, LI Zhuhong, TAO Lei, ZHANG Xinming
    2016, 36(4):  905-908.  DOI: 10.11772/j.issn.1001-9081.2016.04.0905
    To solve the fault localization problem in electric power communication networks, the large-scale connected-area fault alarms caused by device or link faults were investigated, and a fault localization algorithm based on a fault propagation model and supervised learning was proposed. First, an improved fault propagation model was used to obtain an initial result with the minimum number of faults. Then the fault localization problem was transformed into a supervised classification problem by decomposing the fault alarm vector, so as to localize the faults within the fault warning areas. Finally, conjectured faulty devices and links were added to refine the results of the previous two steps and increase the accuracy. The simulation results show that the fault localization accuracy of the proposed algorithm reaches 84%-95%, achieving high reliability in fault localization.
    Locomotive wireless access communication strategy of underground linear roadway
    WEI Zhen, ZHANG Xiaoxu, LU Yang, WEI Xing
    2016, 36(4):  909-913.  DOI: 10.11772/j.issn.1001-9081.2016.04.0909
    To deal with the problem of the locomotive's dynamic wireless access to the transmission network in mine locomotive unmanned-driving systems, a Successive Interference Cancellation (SIC) region partition strategy was proposed. Firstly, a nonlinear region partition model was constructed within the Access Point (AP) communication coverage. Secondly, through theoretical derivation, the relationship between the number of region partitions and the AP communication coverage, and the relationship between the locomotive position and the transmission power, were obtained. Finally, the SIC region partition strategy was designed. The simulation results show that the SIC region partition strategy enables one AP to serve three locomotives simultaneously, and the overall optimization effect on the locomotives' total passing time and AP coverage utilization increases by at least 50%.
    Cloud service QoS prediction method based on Bayesian model
    CHEN Wei, CHEN Jiming
    2016, 36(4):  914-917.  DOI: 10.11772/j.issn.1001-9081.2016.04.0914
    For Quality of Service (QoS) guarantee in the cloud service area, a cloud service QoS prediction method based on time series prediction was proposed to select an appropriate cloud service that meets the QoS requirements of cloud users and to anticipate possible QoS violations. An improved Bayesian constant mean model was used to predict the QoS of cloud services accurately. In the experiment, a Hadoop system was established to simulate cloud computing, and a large amount of QoS data on response time and throughput was collected as the prediction target. The experimental results show that, compared with the Bayesian constant mean discount model and the Autoregressive Integrated Moving Average (ARIMA) model, the proposed method based on the improved Bayesian constant mean model is an order of magnitude smaller in Sum of Squared Errors (SSE), Mean Absolute Error (MAE), Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE), so it has higher accuracy; the comparison of prediction accuracy also illustrates that the proposed method has a better fitting effect.
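    The abstract compares predictors by SSE, MAE, MSE and MAPE; a minimal helper computing all four, with illustrative response-time values:

```python
import numpy as np

def forecast_errors(actual, predicted):
    """The four error metrics used above to compare the QoS predictors."""
    actual = np.asarray(actual, float)
    err = actual - np.asarray(predicted, float)
    return {
        "SSE": float(np.sum(err ** 2)),
        "MAE": float(np.mean(np.abs(err))),
        "MSE": float(np.mean(err ** 2)),
        "MAPE": float(np.mean(np.abs(err / actual))) * 100,  # percent
    }

# illustrative response times (ms): observed vs. predicted
print(forecast_errors([120, 130, 125], [118, 133, 126]))
```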
    Hadoop adaptive task scheduling algorithm based on computation capacity difference between node sets
    ZHU Jie, LI Wenrui, WANG Jiangping, ZHAO Hong
    2016, 36(4):  918-922.  DOI: 10.11772/j.issn.1001-9081.2016.04.0918
    Aiming at the problems of fixed task progress proportions and passive selection of slow tasks in the speculative task execution algorithm for heterogeneous clusters, an adaptive task scheduling algorithm based on the computation capacity difference between node sets was proposed. The computation capacity difference between node sets was quantified so that tasks could be scheduled across fast and slow node sets, and dynamic feedback on node and task speeds was used to update the slow node set in time, improving resource utilization and task parallelism. Within the two node sets, task progress proportions were adjusted dynamically to improve the accuracy of slow-task identification, and a fast node was selected to execute backup tasks for slow tasks through substitute execution, improving task execution efficiency. The experimental results showed that, compared with the Longest Approximate Time to End (LATE) algorithm, the proposed algorithm reduced the running time by 5.21%, 20.51% and 23.86% respectively on a short job set, a mixed-type job set and a mixed-type job set with node performance degradation, and significantly reduced the number of initiated backup tasks. The proposed algorithm can adapt tasks to node differences, and effectively improves overall job execution efficiency while reducing backup tasks for slow tasks.
    Error back propagation algorithm based on iterative MapReduce framework
    ZHAO Hu, YANG Yu
    2016, 36(4):  923-926.  DOI: 10.11772/j.issn.1001-9081.2016.04.0923
    The error Back Propagation (BP) algorithm is iterative, and how to implement it on an iterative MapReduce framework was studied. To overcome the shortcoming of the traditional MapReduce framework that tasks must be resubmitted repeatedly in iterative programs, a transmitting module was added to the iterative MapReduce framework. Training samples obtained from the simulation of the control system in the K/TGR146 radio switch were trained on Hadoop using the traditional framework and the iterative framework respectively. Training BP on the iterative framework is more than 10 times faster than on the traditional framework, and its accuracy rises by 10%-13%. The iterative framework can effectively reduce training time and avoid repeated task submission in iterative computation.
    High-performance regular expressions matching algorithm based on improved FPGA circuit
    ZHUO Yannan, LIU Qiang, JIANG Lei, DAI Qiong
    2016, 36(4):  927-930.  DOI: 10.11772/j.issn.1001-9081.2016.04.0927
    Concerning the low throughput and excessive logic resource usage in regular expression matching, an improved Deterministic Finite Automaton (DFA) regular expression matching algorithm implemented entirely in Field-Programmable Gate Array (FPGA) logic circuits was designed. Firstly, statistics showed that most transition edges of each DFA state point to the same state; then a default transition edge was set for each state according to the transition matrix of the regular expressions; finally, a simplified logic circuit was given and measured on the L7-filter rule set. The experimental results show that, compared with the previous Nondeterministic Finite Automaton (NFA) algorithm, 10%-60% of the rules achieve higher throughput, and 62%-87% of the rules cost fewer logic resources.
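    A minimal sketch of the default-transition idea described above: each state's most frequent target becomes its default edge and only the exceptions are stored, which is what shrinks the transition logic. The toy DFA is hypothetical, not an L7-filter rule.

```python
from collections import Counter

def compress_dfa(transitions):
    """Pick each state's most common target as its default edge and
    store only the exception transitions."""
    compressed = {}
    for state, row in transitions.items():  # row: symbol -> next state
        default = Counter(row.values()).most_common(1)[0][0]
        exceptions = {sym: nxt for sym, nxt in row.items() if nxt != default}
        compressed[state] = (default, exceptions)
    return compressed

# toy 2-state DFA over the symbols a-c
dfa = {0: {"a": 1, "b": 0, "c": 0}, 1: {"a": 1, "b": 1, "c": 0}}
print(compress_dfa(dfa))  # {0: (0, {'a': 1}), 1: (1, {'c': 0})}
```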
    Application of storage and segmentation encoding technology in mobile cloud security
    ZHANG Xiaohong, TU Pingsheng
    2016, 36(4):  931-936.  DOI: 10.11772/j.issn.1001-9081.2016.04.0931
    Concerning the low data update efficiency and the low security of mobile terminals, a storage and segmentation encoding technology was proposed. Firstly, the data of the mobile terminal was segmented equally and stored in the cloud, and each segment was marked by coding techniques; secondly, when users update the data, only the corresponding data block information was downloaded for updating; finally, the updated data was re-encoded, encrypted and uploaded to the appropriate position in the cloud to form the complete stored data. The experimental results show that, compared with traditional mobile security solutions including the Advanced Encryption Standard (AES) scheme and the Regenerating Code (RC) coding technique, the proposed model saves file conversion time when cloud data is updated frequently, and significantly reduces the performance overhead of the mobile terminal. The proposed model can markedly improve data update efficiency and the resource utilization of the mobile terminal, and effectively strengthens the confidentiality and integrity of mobile cloud data; it has obvious advantages when mobile terminal data must be updated frequently.
    Subjective trust model based on integrated intuitionistic fuzzy information
    XU Jun
    2016, 36(4):  937-940.  DOI: 10.11772/j.issn.1001-9081.2016.04.0937
    Aiming at the subjectivity and uncertainty of the online service environment, and at the inability of existing trust models to describe trust degree, distrust degree and uncertainty degree simultaneously, a subjective trust model based on intuitionistic fuzzy information was proposed. Firstly, an improved approach for aggregating crisp values into Intuitionistic Fuzzy Numbers (IFN) was developed. Then, based on this approach, the direct trust IFN and the indirect trust IFN were calculated. Furthermore, the final trust was obtained by a weight distribution strategy based on intuitionistic fuzzy entropy. The experimental results demonstrate that the proposed model is effective against credit fraud, and maintains a low error level even when the ratio of malicious entities reaches 35%.
    Malicious domain detection based on multiple-dimensional features
    ZHANG Yang, LIU Tingwen, SHA Hongzhou, SHI Jinqiao
    2016, 36(4):  941-944.  DOI: 10.11772/j.issn.1001-9081.2016.04.0941
    The Domain Name System (DNS) provides the domain name resolution service, i.e., converting domain names to IP addresses. Malicious domain detection mainly aims at discovering illegal activities and ensuring the normal operation of domain name servers. Prior work on malicious domain detection was summarized, and a new machine-learning-based malicious domain detection algorithm exploiting multi-dimensional features was proposed. With respect to domain name lexical features, more fine-grained features were extracted, such as the conversion frequency between numbers and letters and the maximum length of continuous letters. As for network attribute features, more attention was paid to the name servers, such as their quantity and degree of dispersion. The experimental results show that the accuracy, recall and F1 value of the proposed method all reach 99.8%, indicating better performance in malicious domain detection.
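    Two of the fine-grained lexical features named above (the digit/letter conversion frequency and the maximum run of continuous letters) can be computed as in the sketch below; the domain name is a made-up example.

```python
import itertools

def lexical_features(domain):
    """Digit/letter conversion frequency and maximum letter run."""
    name = domain.split(".")[0]
    classes = ["d" if c.isdigit() else "l" for c in name if c.isalnum()]
    conversions = sum(1 for a, b in zip(classes, classes[1:]) if a != b)
    max_letters = max((sum(1 for _ in g) for alpha, g in
                       itertools.groupby(name, key=str.isalpha) if alpha),
                      default=0)
    return {"conversions": conversions, "max_letter_run": max_letters}

print(lexical_features("a1b2c3xyz.example.com"))
# {'conversions': 6, 'max_letter_run': 3}
```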
    Hop difference based secure localization scheme for wireless sensor network
    XIAO Jiqing, LIU Zhenyu, XIAO Jiang
    2016, 36(4):  945-951.  DOI: 10.11772/j.issn.1001-9081.2016.04.0945
    To deal with the problem that the localization result of Distance Vector-Hop (DV-HOP) may be severely degraded by the Sybil attack in Wireless Sensor Networks (WSN), two hop-difference-based secure localization algorithms, namely HDDV-HOP and EHDDV-HOP, were proposed. Firstly, the neighbor node lists of other nodes were obtained by the detection nodes through a controlled flooding mechanism. Secondly, the neighbor node lists were analyzed to detect fake nodes, and white node lists were established. Finally, packets were selectively relayed based on the white node lists, and the unknown nodes were securely localized. The two algorithms differ in how they detect fake nodes: HDDV-HOP checks whether neighbor node lists are identical, while EHDDV-HOP analyzes the ratio of the number of elements in the intersection of two neighbor node lists to that in their union. The simulation results show that, compared with DV-HOP without the Sybil attack, when the ratio of beacon nodes to normal nodes reaches 20% and signal coverage is asymmetric, the localization error of HDDV-HOP increases by 133.4%, while that of EHDDV-HOP increases by only 7.3% with a suitable similarity threshold; the localization errors of both algorithms are smaller than that of DV-HOP under the Sybil attack. Both HDDV-HOP and EHDDV-HOP can defend against the Sybil attack; however, EHDDV-HOP outperforms HDDV-HOP.
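    A sketch of the EHDDV-HOP neighbor-list test described above: the ratio of the intersection size to the union size of two neighbor lists (the Jaccard index). The 0.8 threshold is an assumed illustrative value, not the paper's.

```python
def neighbor_similarity(list_a, list_b):
    """|intersection| / |union| of two neighbor lists (Jaccard index)."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

def is_sybil_pair(list_a, list_b, threshold=0.8):
    """Flag two identities whose neighbor lists are nearly identical;
    the threshold is illustrative."""
    return neighbor_similarity(list_a, list_b) >= threshold

print(is_sybil_pair([1, 2, 3, 4], [1, 2, 3, 5]))  # 3/5 = 0.6 -> False
print(is_sybil_pair([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0 -> True
```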
    Birkhoff interpolation-based verifiable hierarchical threshold secret sharing algorithm
    XU Xiaojie, WANG Lisheng
    2016, 36(4):  952-955.  DOI: 10.11772/j.issn.1001-9081.2016.04.0952
    A Distributed Key Generation (DKG) protocol is a central component of distributed cryptosystems: it allows a group of participants to jointly generate the private and public keys, while only authorized subgroups of participants are able to reconstruct the private key. However, the existing DKG-based literature assumes equal authority for all participants. Therefore, a Birkhoff Interpolation-based Verifiable Hierarchical Threshold Secret Sharing (BI-VHTSS) algorithm was proposed. Considering the DKG problem, authorized subsets were defined by a hierarchical threshold access structure in the BI-VHTSS algorithm. The correctness and security of the proposed algorithm were also proved on the basis of the intractability of the Discrete Logarithm Problem (DLP) and Birkhoff interpolation.
    Identity-based broadcast encryption based on lattice
    HUANG Wenzhen, YANG Xiaoyuan, WANG Xu'an, WU Liqiang
    2016, 36(4):  956-961.  DOI: 10.11772/j.issn.1001-9081.2016.04.0956
    Focusing on the low security and poor practicability of the lattice-based broadcast encryption scheme proposed by Wang et al. (WANG J, BI J. Lattice-based identity-based broadcast encryption. https://eprint.iacr.org/2010/288.pdf) in the random oracle model, an identity-based broadcast encryption scheme based on Learning With Errors (LWE) in the standard model was constructed by extending the bonsai tree control algorithm and a one-time signature algorithm. Firstly, the random oracle was replaced by an encoding function to place the scheme in the standard model. Then, the bonsai tree extension control algorithm was used to generate the users' private keys and the public key. Finally, the one-time signature algorithm was added to improve security. Analysis shows that, compared with existing similar schemes, the proposed scheme achieves stronger security, namely adaptive indistinguishability under chosen ciphertext attack, with dynamic extension: users can be added or deleted by expanding or contracting the identity matrix. Hence it has strong practicability.
    Leveled fully homomorphic encryption scheme based on Ring-LWE-GSW
    WANG Zhao, DING Yong, WANG Huiyong
    2016, 36(4):  962-965.  DOI: 10.11772/j.issn.1001-9081.2016.04.0962
    Focusing on the issue that current fully homomorphic encryption schemes are not practical, the Gentry-Sahai-Waters (GSW) homomorphic encryption scheme was improved, and a leveled fully homomorphic encryption scheme based on Ring Learning With Errors (Ring-LWE) and GSW was proposed. Firstly, a basic public key encryption scheme was constructed on the Ring-LWE problem, the approximate eigenvector method was used to endow it with homomorphic addition and multiplication, and the randomized function technique was introduced to simplify the analysis of noise growth. Secondly, the correctness and security of the proposed scheme were proved, and the correctness of homomorphic addition, multiplication and NAND operations was analyzed in detail. Finally, the security parameter was set in accordance with the noise growth under homomorphic evaluation and the security of the Ring-LWE problem, fast Fourier transformation was adopted to reduce the computational complexity of polynomial multiplication, and a leveled fully homomorphic encryption scheme was given. The public key in the new scheme is shorter than that in GSW, and the computational complexity of the NAND gate is reduced from Õ((nL)^2.37) to Õ(nL^2).
    Image encryption based on chaotic dynamic random grouping and modulating fractional Fourier transform rotation factor
    JIN Jianguo, XIAO Ying, DI Zhigang
    2016, 36(4):  966-972.  DOI: 10.11772/j.issn.1001-9081.2016.04.0966
    Concerning the security problem caused by the singleness of the rotation factor and grouping in FRactional Fourier Transform (FRFT) image encryption algorithms, a novel encryption method based on chaotic dynamic random grouping and random modulation of the FRFT rotation factor was proposed. Three chaotic systems were adopted. Firstly, the keys of subsystem 1 were used for nondestructive encryption of the plaintext to obtain the first ciphertext. Then subsystem 2 was used to realize dynamic grouping of the FRFT, and the FRFT rotation factor was randomly modulated by subsystem 3. After that, the FRFT was applied to encrypt the first ciphertext a second time. Tests on adjacent-pixel correlation and pixel change rate show that the method can effectively resist statistical and differential attacks. Compared with the destructive method in the first encryption, the real-time and restoration tests show that the decryption time of the nondestructive method decreases by 52.5% and the similarity of the decrypted image improves by 4.5%; thus the denoising process is avoided and the system overhead is reduced. The security test shows that the information entropy increases by 1.7% and the ability to resist exhaustive attack increases by 103635 times compared with the algorithm that simply modulates the FRFT rotation factor. The results show that the proposed method is better than the chaotic-key simple modulation of the FRFT rotation factor in real-time performance, restoration, security and so on.
    Behavior oriented method of Android malware detection and its effectiveness
    SUN Runkang, PENG Guojun, LI Jingwen, SHEN Shiqi
    2016, 36(4):  973-978.  DOI: 10.11772/j.issn.1001-9081.2016.04.0973
    Concerning the constrained resources of Android devices and the low detection rate for Android malware, a ROM-based dynamic monitoring framework for software behavior was constructed by considering the behavior characteristics of Android malware in installation mode, trigger mode and malicious payload, and the effectiveness of Support Vector Machine (SVM), decision tree, k-Nearest Neighbor (KNN) and Naive Bayesian (NB) classifiers was evaluated using information gain, the chi-square test and the Fisher score. The evaluation on the behavior logs of 20916 malicious samples and 17086 normal samples shows that SVM performs best in malware detection: its accuracy reaches 93%, and the False Positive Rate (FPR) is less than 2%. It can be applied to online cloud analysis environments and detection platforms, and meets the needs of mass sample processing.
    File hiding based on capacity disguise and double file system
    WANG Kang, LI Qingbao
    2016, 36(4):  979-984.  DOI: 10.11772/j.issn.1001-9081.2016.04.0979
    Concerning the poor robustness and low hiding strength of existing file hiding methods based on the Universal Serial Bus (USB), a new file hiding method based on capacity disguise and a double file system was proposed. By analyzing the characteristics and management mechanism of NAND flash chips, capacity disguise was achieved to deceive the host by tampering with the device capacity value in the Command Status Wrapper (CSW). Based on the memory management mechanism of the Flash Translation Layer (FTL), the storage area was divided into a hidden area and a common area with different marks, and a double file system was established using the format function. A request to switch file systems was sent by writing specific data and granted after user authentication, realizing secure access to the hidden area. The experimental results and theoretical analysis show that the proposed method achieves file hiding that is transparent to the operating system; moreover, it is not affected by device operations, and has better robustness and stronger hiding effect than methods based on hooking the Application Programming Interface (API), modifying the File Allocation Table (FAT) or encryption.
    Entity relationship search over extended knowledge graph
    WANG Qiuyue, QIN Xiongpai, CAO Wei, QIN Biao
    2016, 36(4):  985-991.  DOI: 10.11772/j.issn.1001-9081.2016.04.0985
    It is difficult for entity search and question answering over text corpora to join cues from multiple documents when processing relationship-centric search tasks. Structured querying over knowledge bases can resolve this problem, but it still suffers from poor recall because of the heterogeneity and incompleteness of knowledge bases. To address these problems, the knowledge graph was extended with information from textual corpora, and a corresponding triple pattern with textual phrases was designed for uniform querying of the knowledge graph and textual corpora. Accordingly, a model for automatic query relaxation and for scoring query answers (tuples of entities) was proposed, and an efficient top-k query processing strategy was put forward. Comparison experiments were conducted with two classical methods on three benchmarks (entity search, entity-relationship search and complex entity-relationship queries) using a combination of the Yago knowledge graph and the entity-annotated ClueWeb '09 corpus. The experimental results show that the entity-relationship search system with query relaxation over the extended knowledge base outperforms the comparison systems by a big margin: the Mean Average Precision (MAP) is improved by more than 27%, 37% and 64% respectively on the three benchmarks.
    Automatical construction of Chinese knowledge graph system
    E Shijia, LIN Peiyu, XIANG Yang
    2016, 36(4):  992-996.  DOI: 10.11772/j.issn.1001-9081.2016.04.0992
    To solve the problems that current methods for constructing Chinese knowledge graph systems are time-consuming, have low accuracy and require much manual intervention, an integrated end-to-end automatic construction solution based on the rich data of Chinese encyclopedias was proposed, and a user-oriented Chinese knowledge graph was implemented. In this solution, properties and related text of the original encyclopedia data were continuously scraped to the local system by a custom Web crawler and saved as triples with extended attributes. Through the graph-oriented database Cayley and the document-oriented database MongoDB, the data in the archived triple files was imported into the back-end system and converted into a huge knowledge graph system, providing various services dependent on the Chinese knowledge graph in the front-end system. Compared with other knowledge graph systems, the proposed system significantly reduces construction time; moreover, the number of entities and relations is at least 50% higher than that of other knowledge graph systems such as YAGO, HowNet and the Chinese Concept Dictionary.
    Improved frequent itemset mining algorithm based on interval list
    XU Yongxiu, LIU Xumin, XU Weixiang
    2016, 36(4):  997-1001.  DOI: 10.11772/j.issn.1001-9081.2016.04.0997
    Focusing on the problem that the PrePost algorithm needs to build a complex Pre-order and Post-order Code tree (PPC-tree) and Node lists (N-lists), an improved frequent itemset mining algorithm based on the Interval list (I-list) was proposed. Firstly, a data storage structure more compressed than the Frequent Pattern tree (FP-tree), called the Interval FP-tree (IFP-tree), was adopted, which mines frequent itemsets without iteratively building conditional trees. Secondly, the more concise I-list was used to replace the complex N-list in PrePost to improve mining speed. Finally, in the case of a single branch path, some frequent itemsets were obtained directly by combination. The experimental results verify the correctness of the proposed algorithm, which produces the same results as PrePost on the same dataset under the same minimum supports, and show that it outperforms PrePost by about 10 percent in both time and space, working well on both sparse and dense databases.
    Parallel instance recovery method based on multi-thread
    LU Dongdong, HE Qingfa
    2016, 36(4):  1002-1007.  DOI: 10.11772/j.issn.1001-9081.2016.04.1002
    Concerning the low efficiency of serialized execution in database instance recovery, a parallel instance recovery method based on multi-threading was proposed for the ShenTong database. First, two steps, "building the dirty page table" and "prefetching dirty pages", were added to the original instance recovery model to obtain an improved model. Second, the improved model was processed in a multi-threaded parallel way, yielding a parallel instance recovery model. Finally, by using a rollback segment management strategy, the undo log was managed as normal data so that parallel instance recovery could finish earlier. In comparison experiments with the original method under the Transaction Processing Performance Council-C (TPC-C) benchmark, the efficiency of reading and parsing the redo log increased by 2-7 times, the efficiency of redoing increased by 4-9 times, and the total recovery time was reduced to 20%-40%. The results prove that the parallel instance recovery method accomplishes parallel processing of each stage, reduces the time needed for recovery and ensures high database efficiency in practical applications.
    Improved particle swarm optimization based on re-sampling of particle filter and mutation
    HAN Xue, CHENG Qifeng, ZHAO Tingting, ZHANG Limin
    2016, 36(4):  1008-1014.  DOI: 10.11772/j.issn.1001-9081.2016.04.1008
    Concerning the low accuracy and slow convergence of the standard Particle Swarm Optimization (PSO) algorithm, an improved PSO based on particle filter resampling and mutation, named RSPSO, was proposed. The resampling characteristic of abandoning particles with low weights while duplicating and retaining particles with high weights, together with an existing mutation method, was adopted to overcome particle degeneracy, which greatly enhances the local search capability in the later stage of the PSO search. RSPSO was compared with the standard algorithm and other improved algorithms from the literature on different benchmark functions. The experimental results show that RSPSO has faster convergence, higher accuracy and better stability, and is able to solve multi-modal problems globally.
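    A compact sketch of the RSPSO idea under stated assumptions: a standard PSO loop in which, each iteration, the worst third of the particles is replaced by copies of the best personal bests plus Gaussian mutation. All coefficients (inertia 0.7, acceleration 1.5, mutation scale 0.1, one-third resampling fraction) are illustrative, not the paper's.

```python
import numpy as np

def rspso(f, dim=2, n=30, iters=200, seed=0):
    """PSO with a particle-filter-style resampling step."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    for _ in range(iters):
        g = pbest[np.argmin(pval)]                       # global best
        v = (0.7 * v + 1.5 * rng.random((n, dim)) * (pbest - x)
                     + 1.5 * rng.random((n, dim)) * (g - x))
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        order = np.argsort(pval)                         # best first
        k = n // 3
        # resampling: the worst third copies the best third, plus mutation
        x[order[-k:]] = pbest[order[:k]] + rng.normal(0, 0.1, (k, dim))
    return pbest[np.argmin(pval)], float(pval.min())

sphere = lambda z: float(np.sum(z ** 2))
print(rspso(sphere))  # converges near the origin
```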
    Fast multi-objective hybrid evolutionary algorithm for flow shop scheduling problem
    ZHANG Wenqiang, LU Jiaming, ZHANG Hongmei
    2016, 36(4):  1015-1021.  DOI: 10.11772/j.issn.1001-9081.2016.04.1015
    A fast multi-objective hybrid evolutionary algorithm was proposed for solving the bi-criteria Flow shop Scheduling Problem (FSP) with the objectives of minimizing makespan and total flow time. The sampling strategy of the Vector Evaluated Genetic Algorithm (VEGA) and a new sampling strategy based on a Pareto dominating and dominated relationship-based fitness function were integrated into the proposed algorithm. The new sampling strategy made up for the shortcoming of VEGA's strategy: VEGA is good at searching the edge regions of the Pareto front but neglects its central area, while the new sampling strategy prefers the center region of the Pareto front. The fusion of these two mechanisms ensures that the hybrid algorithm converges to the Pareto front quickly and smoothly, and the algorithm efficiency is improved greatly since no distance calculation is needed. Simulation experiments on the Taillard benchmark sets show that, compared with the Non-dominated Sorting Genetic Algorithm-Ⅱ (NSGA-Ⅱ) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), the fast multi-objective hybrid evolutionary algorithm improves convergence and distribution performance as well as efficiency. The proposed algorithm is well suited to the bi-criteria flow shop scheduling problem.
    Improved artificial bee colony algorithm based on central solution
    SONG Yuezhen, PEI Tengda, PEI Bingnan
    2016, 36(4):  1022-1026.  DOI: 10.11772/j.issn.1001-9081.2016.04.1022
    An improved Artificial Bee Colony (ABC) algorithm for function optimization based on a central solution was proposed to address the poor local search capability and low accuracy of the conventional ABC algorithm. The algorithm exploited the advantage of the central solution, which was introduced into the local search process of the onlooker bees. The onlooker bees chose nectar sources with better fitness values by roulette selection, performed further local optimization based on the central solution, and updated the value of each dimension of the nectar source in every iteration. To verify the validity of the proposed algorithm, six standard functions were selected for simulation and comparison with two other algorithms, the ABC algorithm and the Best-so-far ABC algorithm; the proposed algorithm greatly improved the solution quality and reached the theoretical optimum, especially on the Rastrigin function. The results show that the proposed algorithm significantly improves solution accuracy and convergence rate.
    Simulated annealing algorithm for solving the two-species small phylogeny problem
    WU Jingli, LI Xiancheng
    2016, 36(4):  1027-1032.  DOI: 10.11772/j.issn.1001-9081.2016.04.1027
    In order to solve the two-species Small Phylogeny Problem (SPP) in the duplication-loss model, a simulated annealing algorithm named SA2SP was devised for the duplication-loss alignment problem. An alignment algorithm was introduced to construct the initial solution; a labeling algorithm was used to construct the objective function and obtain the evolution cost; and three intelligent neighborhood functions were introduced to generate neighborhood solutions by exploiting the evolutionary characteristics of gene sequences. The ribosomal RiboNucleic Acid (rRNA) and transfer Ribonucleic Acid (tRNA) of four real bacteria were used to test the performance of SA2SP and the Pseudo-Boolean Linear Programming (PBLP) algorithm. The experimental results show that the SA2SP algorithm has a smaller evolution cost and is an effective method for solving the two-species SPP in the duplication-loss model.
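    A generic simulated annealing skeleton of the kind SA2SP instantiates; the paper's alignment-based initial solution, labeling-based cost and three neighborhood functions are abstracted into the caller-supplied 'cost', 'initial' and 'neighbor', and the toy usage minimizes a quadratic rather than a duplication-loss alignment.

```python
import math, random

def simulated_annealing(cost, initial, neighbor, t0=100.0, alpha=0.95, steps=5000):
    """Generic SA loop: accept worse solutions with probability
    exp(-delta/T) while the temperature T cools geometrically."""
    cur, cur_cost, t = initial, cost(initial), t0
    best, best_cost = cur, cur_cost
    for _ in range(steps):
        cand = neighbor(cur)
        delta = cost(cand) - cur_cost
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= alpha
    return best, best_cost

random.seed(0)
# toy usage: minimize a 1-D quadratic instead of an alignment cost
print(simulated_annealing(lambda s: (s - 3) ** 2, 0.0,
                          lambda s: s + random.uniform(-1, 1)))
```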
    Image target recognition method based on multi-scale block convolutional neural network
    ZHANG Wenda, XU Yuelei, NI Jiacheng, MA Shiping, SHI Hehuan
    2016, 36(4):  1033-1038.  DOI: 10.11772/j.issn.1001-9081.2016.04.1033
    Local deformations of images, such as translation, rotation and random scaling, are a complicated problem in image recognition tasks. An algorithm based on pre-trained convolutional filters and a Multi-Scale block Convolutional Neural Network (MS-CNN) was proposed to solve these problems. Firstly, an unlabeled training dataset was used to train a sparse autoencoder and obtain a collection of convolutional filters in accord with the characteristics of the dataset and with good initial values. To enhance robustness and reduce the impact of the pooling layer on feature extraction, a new Convolutional Neural Network (CNN) structure with multiple channels was proposed: a multi-scale block operation was applied to the input image to form several channels, and each channel was convolved with a filter of the corresponding size. Then a convolutional layer, a local contrast normalization layer and a pooling layer were stacked to obtain invariance. The feature maps were fed into the fully connected layer, and the final features were exported for target recognition. The recognition rates on the STL-10 database and remote-sensing airplane images were both improved compared with traditional CNN. The experimental results show that the proposed method is robust to deformations such as translation, rotation and scaling.
    Human activity pattern recognition based on block sparse Bayesian learning
    WU Jianning, XU Haidong, LING Yun, WANG Jiajing
    2016, 36(4):  1039-1044.  DOI: 10.11772/j.issn.1001-9081.2016.04.1039
    It is difficult for the traditional Sparse Representation Classification (SRC) algorithm to enhance the performance of human activity recognition because it ignores the correlation structure information hidden in the sparse coefficient vectors of the test sample. To address this problem, a human activity recognition approach based on a block sparse model was proposed. Given the inherent block structure of sparse coefficients in human activity patterns, the human activity recognition problem was treated as a sparse-representation-based classification problem. The block sparse Bayesian learning algorithm was used to solve the optimal sparse representation coefficients of a test sample as a linear combination of the training samples from the same class, and the reconstruction residual of the sparse coefficients was then used to determine the class of the test sample, which effectively improved the recognition rate of human activity patterns. The USC-HAD database containing different styles of human daily activity was selected to evaluate the effectiveness of the proposed approach. The experimental results show that the activity recognition rate of the proposed approach reaches 97.86%, an increase of 5% over traditional human activity recognition methods. These results demonstrate that the proposed method can effectively capture the discriminative information of different activity patterns and significantly improve the accuracy of human activity recognition.
    Sentiment analysis of product reviews based on contrastive divergence- restricted Boltzmann machine deep learning
    GAO Yan, CHEN Baifan, CHAO Xuyao, MAO Fang
    2016, 36(4):  1045-1049.  DOI: 10.11772/j.issn.1001-9081.2016.04.1045
    Focusing on the issue that most existing approaches need a manually annotated sentiment lexicon to extract sentiment features, a sentiment analysis method for product reviews based on Contrastive Divergence-Restricted Boltzmann Machine (CD-RBM) deep learning was proposed. Firstly, product reviews were preprocessed and represented as vectors using the bag-of-words model. Secondly, CD-RBM was used to extract sentiment features from the product review vectors. Finally, the sentiment features were classified with a Support Vector Machine (SVM), giving the sentiment analysis result. Without any manually pre-defined sentiment lexicon, CD-RBM automatically obtains sentiment features of higher semantic relevance; combined with SVM, the correctness of the sentiment analysis result is guaranteed. The optimal training period of the RBM was experimentally determined to be 10. In comparison experiments with RBM, SVM, PCA+SVM and RBM+SVM, the combination of CD-RBM feature extraction and SVM classification shows the best precision and F-measure, as well as better recall.
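    A minimal numpy sketch of one contrastive-divergence (CD-1) update for a binary RBM, the training step that CD-RBM repeats over the 10 epochs mentioned above; the layer sizes and input vector are toy values, not the paper's review representation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, bh, bv, rng, lr=0.1):
    """One CD-1 update: propagate up, sample hidden states, reconstruct,
    propagate up again, then apply the gradient estimate."""
    ph0 = sigmoid(v0 @ W + bh)                # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + bv)              # reconstruction of the input
    ph1 = sigmoid(pv1 @ W + bh)
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    bh += lr * (ph0 - ph1)
    bv += lr * (v0 - pv1)
    return float(np.mean((v0 - pv1) ** 2))    # reconstruction error

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 3                           # toy sizes
W = rng.normal(0, 0.1, (n_vis, n_hid))
bh, bv = np.zeros(n_hid), np.zeros(n_vis)
v = np.array([1, 0, 1, 1, 0, 0], float)       # one bag-of-words-style vector
for epoch in range(10):                       # the experimentally chosen period
    err = cd1_step(v, W, bh, bv, rng)
print("reconstruction error:", err)
```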
    Collaborative filtering recommendation algorithm based on score difference level and user preference
    DANG Bo, JIANG Jiulei
    2016, 36(4):  1050-1053.  DOI: 10.11772/j.issn.1001-9081.2016.04.1050
    To address the problem that traditional collaborative filtering algorithms compute user similarity only from rating data, which leads to poor recommendation precision, an improved collaborative filtering recommendation algorithm was put forward. Firstly, the user's score difference level was obtained by using the user's average score as the boundary point and was used as a weighting factor in the user similarity. Secondly, according to the rating data and the item category information, the user's interest level in each item category and the user's item preference were mined to calculate the preference similarity between users. Thirdly, the two similarities were combined into an integrated similarity between users. Finally, the traditional item similarity and the integrated user similarity were fused to predict scores and recommend items. The experimental results show that, compared with the traditional user-based collaborative filtering recommendation algorithm, the Mean Absolute Error (MAE) of the proposed algorithm is reduced by 2.4% on average. The new algorithm can effectively improve the accuracy and quality of recommendation.
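    A hypothetical illustration of the similarity fusion described above: a score-difference level (here, the fraction of co-rated items on the same side of each user's average score) weights a rating-based cosine similarity, which is then blended with a preference similarity. The weighting rule and 'alpha' are assumptions, not the paper's exact formulas.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dictionaries."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den

def fused_similarity(ratings_u, ratings_v, pref_u, pref_v, alpha=0.5):
    """Score-difference level weights the rating similarity, then the
    result is blended with the preference similarity."""
    mean_u = sum(ratings_u.values()) / len(ratings_u)
    mean_v = sum(ratings_v.values()) / len(ratings_v)
    common = set(ratings_u) & set(ratings_v)
    same_side = sum(1 for i in common
                    if (ratings_u[i] >= mean_u) == (ratings_v[i] >= mean_v))
    level = same_side / len(common) if common else 0.0
    return (alpha * level * cosine(ratings_u, ratings_v)
            + (1 - alpha) * cosine(pref_u, pref_v))

u = {"m1": 5, "m2": 2, "m3": 4}; v = {"m1": 4, "m2": 1, "m3": 5}
pu = {"action": 0.7, "drama": 0.3}; pv = {"action": 0.6, "drama": 0.4}
print(fused_similarity(u, v, pu, pv))
```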
    Distributed collaborative filtering recommendation algorithm based on gray association analysis
    QIU Gui, YAN Renwu
    2016, 36(4):  1054-1059.  DOI: 10.11772/j.issn.1001-9081.2016.04.1054
    In order to solve the problems of "hard classification" clustering, data sparsity and scalability in user-based or item-based Collaborative Filtering Recommendation (CFR) algorithms, a distributed collaborative filtering recommendation algorithm based on gray association analysis was proposed. On the Hadoop platform, the grey relational coefficient of each item in the rating matrix was calculated first, and then the Grey Relational Grade (GRG) of each item. Finally, the similar-item set of each item was constructed according to the GRG, and an item's rating by different users was predicted with the related similar items. Experiments were conducted on the MovieLens dataset. The results showed that the Mean Absolute Error (MAE) of the proposed algorithm was reduced by 1.07% and 0.06% respectively compared with the user-based and item-based CFR algorithms, and that, as the dataset grows, running efficiency can be improved by adding data nodes to the Hadoop cluster. The experimental results illustrate that the proposed algorithm can make effective recommendations for large-scale datasets and solves the data scalability problem.
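    The grey relational machinery reduces to a per-user coefficient and its mean (the GRG); a minimal sketch with toy ratings, using the usual distinguishing coefficient rho = 0.5 (an assumed default, as the abstract does not state it):

```python
import numpy as np

def grey_relational_grade(reference, candidate, rho=0.5):
    """Grey relational grade (GRG) between a reference item's rating
    vector and a candidate item's, averaged over per-user coefficients."""
    x0, xi = np.asarray(reference, float), np.asarray(candidate, float)
    delta = np.abs(x0 - xi)
    dmin, dmax = delta.min(), delta.max()
    if dmax == 0:
        return 1.0  # identical rating vectors
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return float(coeff.mean())

# ratings of two items by the same five users (toy values)
print(grey_relational_grade([5, 3, 4, 2, 1], [4, 3, 5, 2, 2]))
```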
    Community question answering-oriented Chinese question classification
    DONG Caizheng, LIU Baisong
    2016, 36(4):  1060-1065.  DOI: 10.11772/j.issn.1001-9081.2016.04.1060
    There are many questions without interrogative words in Community Question Answering (CQA), where non-factoid questions make up a high proportion. Traditional category sets for question classification are based on factoid questions, and traditional classification methods depend largely on interrogative words. To address this, a coarse-grained category set and a novel hierarchical question classification method based on interrogative words were proposed. A Support Vector Machine (SVM) model was used to classify the questions containing interrogative words, while for questions without interrogative words a classifier based on the focus word was constructed. A comparison experiment with an SVM-based method was conducted on a dataset of Chinese questions crawled from Zhihu, and the proposed method improved the accuracy by 4.7 percentage points. The experimental results illustrate that selecting different classifiers according to whether a question contains interrogative words can effectively reduce the dependence on interrogative words and classify Chinese questions more accurately.
    Virtual machine anomaly detection algorithm based on detection region dividing
    WU Tianshu, CHEN Shuyu, ZHANG Hancui, ZHOU Zhen
    2016, 36(4):  1066-1069.  DOI: 10.11772/j.issn.1001-9081.2016.04.1066
    The stable operation of virtual machines is an important support of cloud services. Because of the tremendous number of virtual machines and their changing status, it is hard for a management system to train a classifier for each virtual machine individually. In order to improve real-time performance and detection ability, a new mechanism for dividing virtual machine detection regions based on a modified k-medoids clustering algorithm was proposed; the iteration process of the clustering was optimized to speed up the division of detection regions, and the efficiency and accuracy of anomaly detection were consequently enhanced by this detection region strategy. Experiments and analysis show that the modified clustering algorithm has lower time complexity, and the detection method with detection region dividing outperforms the original algorithm in efficiency and accuracy.
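    A plain PAM-style k-medoids sketch of the detection-region dividing step: assign each virtual machine's status vector to its nearest medoid, then move each medoid to the cluster member with the smallest total intra-cluster distance. The status vectors are toy values, and the paper's modification to the iteration is not reproduced here.

```python
import numpy as np

def k_medoids(X, k=2, iters=20, seed=0):
    """Assign points to the nearest medoid, then move each medoid to the
    cluster member minimizing the total intra-cluster distance."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            costs = D[np.ix_(members, members)].sum(axis=1)
            new[c] = members[np.argmin(costs)]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# toy status vectors (e.g., CPU and memory load) of five virtual machines
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9], [0.15, 0.15]])
print(k_medoids(X))
```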
    Improved adaptive random testing algorithm based on crowding level of failure region
    HOU Shaofan, YU Lei, LI Zhibo, LI Gang
    2016, 36(4):  1070-1074.  DOI: 10.11772/j.issn.1001-9081.2016.04.1070
    Focusing on the issue that the effectiveness and efficiency of existing Adaptive Random Testing (ART) algorithms are not as good as Random Testing (RT) for the point failure pattern, an improved ART algorithm based on the concept of the crowding level of the failure region, named CLART, was proposed to improve traditional ART algorithms such as Fixed Sized Candidate Set (FSCS) and Restricted Random Testing (RRT). Firstly, the main crowding level was estimated according to the input region to determine a local search region. Secondly, some Test Cases (TCs) were generated in the local region by a traditional ART algorithm. Finally, if no failure was found, a new local region was selected and TCs were generated again until the first failure was found. The simulation results show that the effectiveness of the proposed CLART algorithm is about 20% higher than that of the FSCS algorithm, and its efficiency is about 60% higher. The experimental results indicate that the CLART algorithm can quickly locate concentrated failure regions by searching several regions one by one, improving both effectiveness and efficiency.
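    For reference, the FSCS baseline that CLART builds on can be sketched in a few lines: each new test is the candidate (out of k random ones) farthest from all executed tests. The one-dimensional input domain and the narrow failure interval are hypothetical.

```python
import random

def fscs_art(is_failure, domain=(0.0, 1.0), k=10, budget=1000, seed=0):
    """Fixed-Size-Candidate-Set ART in one dimension: of k random
    candidates, execute the one farthest from all previous tests."""
    random.seed(seed)
    lo, hi = domain
    executed = [random.uniform(lo, hi)]
    while len(executed) < budget:
        if is_failure(executed[-1]):
            return executed[-1], len(executed)  # first failure, test count
        candidates = [random.uniform(lo, hi) for _ in range(k)]
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
    return None, len(executed)

fails = lambda x: 0.42 <= x <= 0.425  # hypothetical narrow failure region
print(fscs_art(fails))
```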
    Correction technique for color difference of multi-sensor texture
    MA Qian, GE Baozhen, CHEN Lei
    2016, 36(4):  1075-1079.  DOI: 10.11772/j.issn.1001-9081.2016.04.1075
    The texture images obtained by the multiple sensors of a 3D color scanner exhibit color differences, which produce color blocks on the surface of the 3D color model. To solve this problem, a correction method based on color transfer was proposed. First, a comprehensive quality assessment function was used to choose the best of the color texture images obtained by the multiple sensors as the standard image. Then, the mean and variance of the other texture images in each color channel were adjusted with reference to the standard image. The proposed method was applied to texture image color correction in a 3D human body color scanner. The results show that, after correcting the color differences between texture images, the color blocks on the color 3D body model are significantly reduced and the color is more balanced and natural. Compared with the classical method, the improved color transfer method and the minimum-angle selection method, the subjective and objective evaluation results prove the superiority of the proposed method.
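    The per-channel mean/variance adjustment described above is the classic color-transfer recipe; a minimal sketch with toy RGB patches standing in for two sensors' texture images (the quality-assessment step for choosing the standard image is omitted).

```python
import numpy as np

def match_color(image, standard):
    """Shift each channel of 'image' to the mean and standard deviation
    of the corresponding channel of the standard image."""
    img = image.astype(float)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        src, ref = img[..., c], standard[..., c].astype(float)
        out[..., c] = (src - src.mean()) / (src.std() + 1e-8) * ref.std() + ref.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

# toy 2x2 RGB patches standing in for two sensors' texture images
a = np.array([[[200, 80, 60], [190, 90, 70]],
              [[210, 70, 50], [205, 85, 65]]], np.uint8)
b = np.array([[[150, 100, 90], [140, 110, 95]],
              [[160, 95, 85], [155, 105, 92]]], np.uint8)
print(match_color(a, b))
```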
    Wavelet domain distributed depth map video coding based on non-uniform quantization
    CHEN Zhenzhen, QING Linbo, HE Xiaohai, WANG Yun
    2016, 36(4):  1080-1084.  DOI: 10.11772/j.issn.1001-9081.2016.04.1080
    In order to improve the decoding quality of the depth map video in Distributed Multi-view Video plus Depth (DMVD) coding, a new non-uniform quantization scheme based on the sub-band layer and sub-band coefficients was proposed for wavelet-domain Distributed Video Coding (DVC). The main idea was to allocate more bits to pixels belonging to depth map edges, thereby improving depth map quality. According to the distribution characteristics of the wavelet coefficients of the depth map, the low-frequency wavelet coefficients of layer N kept the uniform quantization scheme, while the high-frequency wavelet coefficients of all layers used the non-uniform quantization scheme: for high-frequency wavelet coefficients around 0, a larger quantization step was adopted, and as the amplitude of the coefficients increased, the quantization step decreased, so that quantization became finer and edge quality was improved. The experimental results show that, for the "Dancer" and "PoznanHall2" depth sequences with more edges, the proposed scheme achieves up to 1.2 dB of Rate-Distortion (R-D) performance improvement by improving edge quality; for the "Newspaper" and "Balloons" depth sequences with fewer edges, the proposed scheme still gains 0.3 dB of R-D performance.
    Fast intra hierarchical algorithm for high efficiency video coding based on texture property and spatial correlation
    LI Peng, PENG Zongju, LI Chihang, CHEN Fen
    2016, 36(4):  1085-1091.  DOI: 10.11772/j.issn.1001-9081.2016.04.1085
    In order to reduce the encoding complexity of the emerging High Efficiency Video Coding (HEVC) standard, a fast hierarchical algorithm based on texture properties and spatial correlation for intra coding was proposed. Firstly, a Largest Coding Unit (LCU) level fast algorithm was adopted: the prediction depth of the current LCU was derived by weighted prediction from the depth levels of neighboring LCUs, and the texture complexity of the current LCU was determined by means of the standard deviation and an adaptive threshold strategy, so that the Most Probable Depth Range (MPDR) of the current LCU could be predicted by combining the prediction depth with its texture complexity. Secondly, using a Coding Unit (CU) level Depth Decision Fast Algorithm (CUDD-FA), which combines an edge-map-based early decision of the CU depth level with early termination of CU depth based on Rate Distortion (RD) cost correlation, the depth level of the CU was determined before coding, further reducing the complexity of intra coding. The experimental results show that, compared with the original HEVC encoding scheme, the proposed method reduces encoding time by an average of 41.81%, while the BD-rate (Bjøntegaard Delta bit rate) only increases by 0.74% and the BD-PSNR (Bjøntegaard Delta Peak Signal-to-Noise Rate) only decreases by 0.038 dB; compared with representative algorithms in the literature, the proposed algorithm achieves better RD performance while saving more encoding time. The proposed method can significantly reduce the complexity of intra coding with negligible RD performance loss, especially for high-resolution video sequences, which benefits real-time video applications of the HEVC standard.
    Adaptive video super-resolution reconstruction algorithm based on multi-order derivative
    JI Xiaohong, XIONG Shuhua, HE Xiaohai, CHEN Honggang
    2016, 36(4):  1092-1095.  DOI: 10.11772/j.issn.1001-9081.2016.04.1092
    Traditional video super-resolution reconstruction algorithms cannot preserve image edge details effectively while removing noise. In order to solve this problem, a video super-resolution reconstruction algorithm combining an adaptive regularization term with a multi-order derivative data term was put forward. Based on the regularized reconstruction model, the multi-order derivative of the noise, which describes the statistical characteristics of the noise well, was introduced into the improved data term; meanwhile, Total Variation (TV) and Non-Local Means (NLM), which have good denoising effects, were used as the regularization terms to constrain the reconstruction process. In addition, to better preserve details, the regularization coefficient was weighted adaptively according to structural information extracted by a regional spatially adaptive curvature difference algorithm. In comparison experiments with the kernel-regression algorithm and the clustering algorithm at a noise variance of 3, the video reconstructed by the proposed algorithm has sharper edges and a more accurate and clear structure; the average Mean Squared Error (MSE) is decreased by 25.75% and 22.50% respectively, and the Peak Signal-to-Noise Ratio (PSNR) is increased by 1.35 dB and 1.14 dB respectively. The results indicate that the proposed algorithm can effectively preserve image details while removing noise.
    Single image super-resolution via independently adjustable sparse coefficients
    NI Hao, RUAN Ruolin, LIU Fanghua, WANG Jianfeng
    2016, 36(4):  1096-1099.  DOI: 10.11772/j.issn.1001-9081.2016.04.1096
    Images recovered by example-based super-resolution have sharp edges but obvious artifacts. An improved super-resolution algorithm with independently adjustable sparse coefficients was proposed to eliminate the artifacts. In the dictionary training phase, since both the high-resolution training images and the low-resolution ones are known, the sparse coefficients in the high-dimensional and low-dimensional spaces differ, so an accurate high-resolution dictionary and a low-resolution dictionary were generated separately via an online dictionary learning algorithm. In the image reconstruction phase, the sparse coefficients in the two spaces were taken to be approximately the same, because the input low-resolution image is known while the target high-resolution image is unknown. Different regularization parameters were set in the two phases to tune the corresponding sparse coefficients independently and obtain the best super-resolution results. According to the experimental results, the Peak Signal-to-Noise Ratio (PSNR) of the proposed method is on average 0.45 dB higher than that of sparse-coding super-resolution, while the Structural SIMilarity (SSIM) is 0.011 higher. The proposed algorithm eliminates artifacts and effectively recovers edge sharpness and texture details, improving the super-resolution results.
    Combination of improved diffusion and bilateral filtering for low-dose CT reconstruction
    ZHANG Pengcheng, ZHANG Quan, ZHANG Fang, CHEN Yan, HAN Jianning, HAO Huiyan, GUI Zhiguo
    2016, 36(4):  1100-1105.  DOI: 10.11772/j.issn.1001-9081.2016.04.1100
    A Median Prior (MP) reconstruction algorithm combined with non-local means fuzzy diffusion and an extended-neighborhood bilateral filter was proposed to reduce the streak artifacts in low-dose Computed Tomography (CT) reconstruction. In the new algorithm, the non-local means fuzzy diffusion method was first used to improve the median prior of the Maximum A Posteriori (MAP) reconstruction algorithm, which reduced the noise in the reconstructed image; then, a bilateral filtering method based on the extended neighborhood was applied to preserve the edges and details of the reconstructed image and improve the Signal-to-Noise Ratio (SNR). The Shepp-Logan phantom and a thorax phantom were used to test the effectiveness of the proposed algorithm. Compared with the Filtered Back Projection (FBP), Median Root Prior (MRP), NonLocal Mean MP (NLMMP) and NonLocal Mean Bilateral Filter MP (NLMBFMP) algorithms, the proposed method achieves smaller values of Normalized Mean Square Distance (NMSD) and Mean Absolute Error (MAE) and the highest SNR on the two test images (10.20 dB and 15.51 dB, respectively). The experimental results show that the proposed reconstruction algorithm can reduce noise while keeping the edges and details of the image, alleviating the deterioration of low-dose CT images and yielding images with higher SNR and quality.
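    The edge-preserving step can be illustrated by an extended-neighborhood bilateral filter; the window radius and the two sigmas below are assumed values, not the paper's settings.

        import numpy as np

        def bilateral(img, radius=5, sigma_s=3.0, sigma_r=0.1):
            """Bilateral filter over an enlarged (2*radius+1)^2 neighborhood."""
            h, w = img.shape
            pad = np.pad(img, radius, mode='reflect')
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
            out = np.empty_like(img)
            for i in range(h):
                for j in range(w):
                    patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    # range kernel: penalize intensity differences
                    rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
                    wgt = spatial * rng
                    out[i, j] = (wgt * patch).sum() / wgt.sum()
            return out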
    Fast image dehazing based on negative correction and contrast stretching transform
    WANG Lin, BI Duyan, LI Xiaohui, HE Linyuan
    2016, 36(4):  1106-1110.  DOI: 10.11772/j.issn.1001-9081.2016.04.1106
    Existing image dehazing methods can hardly meet real-time demands because of their high complexity, so a fast image dehazing method combining negative correction and contrast stretching transform was proposed to enhance the contrast and saturation of hazy images. A contrast stretching transform was applied to the negative of the input image to enhance contrast, which saved computing time. An adaptive parameter was set for the structural information obtained via the Lipschitz exponent; it was composed of the Lipschitz exponent and a local mean function over pixel blocks. Finally, a dehazed image with natural color and clear details was obtained by using a Sigmoid function to stretch the image adaptively. The experimental results demonstrate that the proposed method achieves good subjective visual quality while ensuring real-time performance, and meets the requirements of practical engineering applications.
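    A minimal sketch of the pipeline (negative image, percentile contrast stretch, sigmoid re-stretch); the percentile bounds and sigmoid gain are assumptions standing in for the paper's Lipschitz-based adaptive parameter.

        import numpy as np

        def dehaze(img):                     # img: float array in [0, 1]
            neg = 1.0 - img                  # negative image
            lo, hi = np.percentile(neg, (1, 99))
            neg = np.clip((neg - lo) / (hi - lo + 1e-8), 0.0, 1.0)  # stretch
            out = 1.0 - neg                  # back to positive
            gain = 8.0                       # assumed sigmoid steepness
            return 1.0 / (1.0 + np.exp(-gain * (out - out.mean())))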
    Estimation of defocus blurring parameter based on grayscale mean gradient and particle swarm optimization
    WU Zhangping, LIU Benyong
    2016, 36(4):  1111-1114.  DOI: 10.11772/j.issn.1001-9081.2016.04.1111
    For image deblurring under defocus blur, a parameter estimation method based on the Grayscale Mean Gradient (GMG) and Particle Swarm Optimization (PSO) was proposed to estimate the blur parameter. First, a group of point spread functions with different blur radii were randomly generated by the PSO algorithm and used to process the blurred image with the Wiener filtering algorithm, producing a series of restored images whose GMG values were calculated. Secondly, exploiting the experimentally observed property that image sharpness increases with the GMG value, the GMG values were taken as the fitness values of the PSO algorithm; the particle with maximum fitness was found, and its corresponding blur radius was taken as the final estimate. The experimental results show that the proposed algorithm outperforms the spectral estimation method and the cepstrum estimation method in estimation accuracy, especially for large blur radii.
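    The estimation loop can be sketched as a small PSO over the blur radius, with scikit-image's Wiener deconvolution and a mean-gradient fitness; all PSO constants, the PSF support size and the Wiener balance parameter are illustrative assumptions.

        import numpy as np
        from skimage.restoration import wiener

        def disk_psf(radius, size=31):
            """Uniform disk PSF modeling defocus blur."""
            ys, xs = np.mgrid[-(size//2):size//2 + 1, -(size//2):size//2 + 1]
            psf = (xs**2 + ys**2 <= radius**2).astype(float)
            return psf / psf.sum()

        def gmg(img):
            """Grayscale mean gradient: mean gradient magnitude."""
            gy, gx = np.gradient(img)
            return np.mean(np.sqrt(gx**2 + gy**2))

        def estimate_radius(blurred, r_min=1.0, r_max=15.0, n=10, iters=20):
            pos = np.random.uniform(r_min, r_max, n); vel = np.zeros(n)
            best = pos.copy(); best_f = np.full(n, -np.inf)
            gbest, gbest_f = pos[0], -np.inf
            for _ in range(iters):
                for k in range(n):
                    # fitness = sharpness of the Wiener restoration
                    f = gmg(wiener(blurred, disk_psf(pos[k]), balance=0.01))
                    if f > best_f[k]: best_f[k], best[k] = f, pos[k]
                    if f > gbest_f: gbest_f, gbest = f, pos[k]
                r1, r2 = np.random.rand(n), np.random.rand(n)
                vel = 0.7 * vel + 1.5 * r1 * (best - pos) + 1.5 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, r_min, r_max)
            return gbest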
    Improved image inpainting algorithm based on priority belief propagation
    WANG Jiajun, YU Qiang, ZHANG Jingjing
    2016, 36(4):  1115-1119.  DOI: 10.11772/j.issn.1001-9081.2016.04.1115
    The Priority Belief Propagation (priority-BP) algorithm cannot satisfy real-time requirements, and there is much room to improve its computational efficiency. For its application to image inpainting, the priority-BP algorithm was improved in two respects: message passing and label search. In the message-passing step, a sparse representation of the image was incorporated into the first iteration, and the initial image information of the target area was rapidly updated to provide more accurate prior information, which accelerated message passing and improved the accuracy of label pruning and message passing. In the label-search step, global search was combined with local search, instead of global search alone, to improve the efficiency of constructing the label set. The improved algorithm was verified on examples. The results show that it has an obvious advantage in inpainting large images; even at the small size of 120×126, it still improves the Peak Signal-to-Noise Ratio (PSNR) by 1.1 dB over the original priority-BP algorithm and reduces the running time by up to 1.2 s. The experimental results indicate that the proposed algorithm can effectively improve inpainting accuracy and efficiency.
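    The combined label search can be sketched as a coarse global scan plus a dense window around the target patch; the patch size, global stride and local window radius below are assumptions, not the paper's settings.

        import numpy as np

        def build_label_set(img, target_xy, patch=8, global_stride=16, local_r=24):
            """Candidate source-patch positions for one target patch."""
            h, w = img.shape[:2]
            tx, ty = target_xy
            labels = set()
            # coarse global search over the whole known region
            for y in range(0, h - patch, global_stride):
                for x in range(0, w - patch, global_stride):
                    labels.add((x, y))
            # dense local search near the target position
            for y in range(max(0, ty - local_r), min(h - patch, ty + local_r)):
                for x in range(max(0, tx - local_r), min(w - patch, tx + local_r)):
                    labels.add((x, y))
            return list(labels)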
    Mean-shift segmentation algorithm based on density revision with saliency
    ZHAO Jiangui, SIMA Haifeng
    2016, 36(4):  1120-1125.  DOI: 10.11772/j.issn.1001-9081.2016.04.1120
    To address the mis-segmentation of the mean-shift algorithm caused by fixed spatial and color bandwidths, a mean-shift segmentation algorithm based on density revision with saliency features was proposed. A region saliency computation method was first proposed on the basis of density estimation over the quantized main colors. Secondly, region saliency was fused with pixel-level saliency as a density-modifying factor, and the modified image was used as the input for mean-shift segmentation. Finally, scattered regions were merged to obtain the final segmentation results. The experimental results show that, with respect to ground-truth boundaries, the average precision and recall of the proposed algorithm are 0.64 and 0.78 over 4 scales. Compared with other methods, the accuracy of the proposed method is significantly improved; it can effectively improve the integrity of segmented targets and the robustness of natural color image segmentation.
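    One way to sketch the density-modifying idea: compress colors in low-saliency regions before clustering, with scikit-learn's flat-kernel MeanShift standing in for the image-domain mean-shift filter; the fusion rule and bandwidth are assumptions.

        import numpy as np
        from sklearn.cluster import MeanShift

        def segment(img_lab, saliency, bandwidth=12.0):
            """img_lab: (h, w, 3) Lab image; saliency: (h, w) map in [0, 1]."""
            h, w, _ = img_lab.shape
            ys, xs = np.mgrid[0:h, 0:w]
            # density modification: salient pixels keep full color contrast,
            # background colors are compressed toward the global mean
            factor = 0.5 + 0.5 * saliency[..., None]
            mean = img_lab.mean((0, 1))
            colors = mean + factor * (img_lab - mean)
            feats = np.column_stack([xs.ravel(), ys.ravel(),
                                     colors.reshape(-1, 3)])
            return MeanShift(bandwidth=bandwidth, bin_seeding=True) \
                .fit_predict(feats).reshape(h, w)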
    Fast arc detection algorithm based on tangent line matching
    WANG Yonghui, LI Yuxin, GUO Song, YUAN Shuai
    2016, 36(4):  1126-1131.  DOI: 10.11772/j.issn.1001-9081.2016.04.1126
    Focusing on the low accuracy and long detection time of arc detection in engineering-drawing vectorization, a fast arc detection algorithm based on tangent line matching was proposed. Firstly, tangent lines on the outer boundaries of circles were detected along eight directions (0, π/4, π/2, …, 7π/4) and added to a tangent-line set. Secondly, the tangent lines in the set were paired up, and the centers and radii of circles were estimated to obtain a circle candidate set. Finally, after merging the candidate set, tracing detection was performed for every candidate circle, and each was confirmed as a full circle or an arc. The pairing process was executed during the tangent-line search, and the number of pairings was effectively reduced by removing the tangent lines belonging to already identified candidate circles. In contrast experiments with the RANdom SAmple Consensus (RANSAC) algorithm and the Effective Voting Method (EVM), the proposed method reached an average detection accuracy of 97.250% with an average detection time of 12.290 s, both better than those of the comparison methods. The experimental results illustrate that the proposed method can effectively detect arcs longer than 1/8 of the circumference in low-noise images, improving detection accuracy and shortening detection time.
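    Estimating a candidate circle from one matched tangent pair reduces to a small least-squares system, since the center must lie at distance r along each inward normal; this sketch is a plausible reading of the pairing step, not the paper's exact formulation.

        import numpy as np

        def circle_from_tangents(p1, n1, p2, n2):
            """p_i: tangent point; n_i: inward unit normal of that tangent.
            Each tangent gives the constraint center - r * n_i = p_i."""
            A = np.array([[1, 0, -n1[0]],
                          [0, 1, -n1[1]],
                          [1, 0, -n2[0]],
                          [0, 1, -n2[1]]], dtype=float)
            b = np.array([p1[0], p1[1], p2[0], p2[1]], dtype=float)
            (cx, cy, r), *_ = np.linalg.lstsq(A, b, rcond=None)
            return (cx, cy), r

        # Example: tangents at angles 0 and pi/2 of the circle center (10, 10), r=5.
        print(circle_from_tangents((15, 10), (-1, 0), (10, 15), (0, -1)))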
    Image segmentation method for pit areas in wild environments
    MENG Lingjiang, WANG Ting, YAO Chen
    2016, 36(4):  1132-1136.  DOI: 10.11772/j.issn.1001-9081.2016.04.1132
    Pit areas make it difficult for robots to move in wild environments, so a vision-based method was put forward to detect them. Firstly, according to project requirements, suspected areas of small size were removed, as well as some suspected areas filtered by edge gradient. Secondly, the ellipse similarity was calculated to determine the gray-level segmentation threshold, and the similarity threshold was confirmed by analyzing the ellipse-similarity curve, which was used to separate pit areas from the suspected ones. Finally, simulation results on 200 pictures with different angles, scenes and pit numbers show that the proposed method can extract pit areas in complex environments, is insensitive to the regularity of pit outlines, and adapts well to complex environments.
    Electrostatic force tactile rendering method for video perception
    WU Saiwen, CHEN Jian, SUN Xiaoying
    2016, 36(4):  1137-1140.  DOI: 10.11772/j.issn.1001-9081.2016.04.1137
    Since visually impaired people cannot fully enjoy videos and other digital media, an electrostatic-force tactile rendering method for video perception was put forward to extend the tactile perception channel for video media. Firstly, the target pixels of the current video frame were acquired according to the finger location, and their color information was transformed from the RGB color model to the HSI color model. Then the hue of the target pixels was mapped to the stimulus frequency of the electrostatic force, the intensity and saturation were mapped to the stimulus amplitude, and the tactile stimulus signal was synthesized to render the video in real time. Finally, dynamic, color and brightness perception identification experiments were designed. The results show that the proposed method enables users to sense the information of objects in a video: the average accuracy of dynamic identification reaches 90.6%, that of color identification reaches 69.4%, and that of brightness identification reaches 80.0%. The proposed method can effectively extract the dynamic characteristics of video information and enhance real-time tactile rendering for video.
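    A minimal sketch of the pixel-to-stimulus mapping; HLS stands in for HSI here, and the frequency band, voltage ceiling and mixing weights are assumed calibration values, not the paper's.

        import colorsys

        def pixel_to_stimulus(r, g, b, f_min=60.0, f_max=300.0, v_max=5.0):
            """r, g, b in [0, 1] -> (frequency in Hz, amplitude in V)."""
            h, l, s = colorsys.rgb_to_hls(r, g, b)   # HLS stands in for HSI
            freq = f_min + h * (f_max - f_min)       # hue -> frequency
            amp = v_max * (0.6 * l + 0.4 * s)        # intensity+saturation -> amplitude
            return freq, amp

        print(pixel_to_stimulus(1.0, 0.2, 0.2))      # a saturated red pixel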
    Cloth simulation bending model based on mean curvature
    LI Na, DING Hui
    2016, 36(4):  1141-1145.  DOI: 10.11772/j.issn.1001-9081.2016.04.1141
    In view of the bending properties of cloth, an approximate nonlinear bending model was proposed based on analysis of the fabric characteristics and internal structure of cloth. Firstly, the bending parameters were obtained by measuring the bending properties of real cloth. Then, a bending model based on mean curvature was put forward to calculate the bending force. Secondly, the surface mean curvature and Gaussian curvature were used to segment the triangular mesh model of the cloth during dynamic simulation. Finally, the bending force was updated according to the change of the mean curvature. In comparison experiments with Volino's bending model, the frame rate of the proposed model increased by an average of 2.7% during bending and 4.1% during arm lifting, without affecting the quality of the cloth simulation. The experimental results show that the proposed model is simple and accurate, and can naturally reproduce the details of clothing folds.
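    A per-edge version of a curvature-driven bending force can be sketched as below; the signed dihedral angle over the edge length stands in for mean curvature, and the stiffness constant and nonlinear response are assumptions, not the measured parameters.

        import numpy as np

        def bending_force(x1, x2, x3, x4, k=0.5):
            """3-D vertex positions (numpy arrays); edge (x1, x2) is shared
            by triangles (x1, x2, x3) and (x2, x1, x4)."""
            n1 = np.cross(x2 - x1, x3 - x1); n1 /= np.linalg.norm(n1)
            n2 = np.cross(x1 - x2, x4 - x2); n2 /= np.linalg.norm(n2)
            e = x2 - x1
            theta = np.arctan2(np.dot(np.cross(n1, n2), e / np.linalg.norm(e)),
                               np.dot(n1, n2))          # signed dihedral angle
            h_mean = theta / np.linalg.norm(e)          # curvature-like measure
            # nonlinear (stiffening) response pushing the flaps toward flatness
            mag = k * h_mean * (1.0 + h_mean**2)
            return mag * n1, mag * n2                   # forces on the flap vertices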
    Pedestrian route choice model considering subjective judgment delay
    ZHOU Sha, WANG Run, ZHEN Wenjie
    2016, 36(4):  1146-1150.  DOI: 10.11772/j.issn.1001-9081.2016.04.1146
    Route choice is a practical problem that cannot be avoided in daily life. Since pedestrians using a navigation system still need to confirm whether a real landmark is the one referred to in the route instructions, a pedestrian road network model was established, and a pedestrian route choice model was proposed by introducing the subjective judgment of delay time and distance into Prospect Theory (PT). A simulation experiment was performed on a partial region of China University of Geosciences (Wuhan). The results show that, for the same Origin-Destination (OD) pair, the subjective judgment delay time of the route chosen by the proposed model is 0.6 s less than that of the shortest route, while the chosen routes are longer than the shortest routes by less than 16 m. The experimental results show that the proposed model produces more realistic routes that meet the actual needs of pedestrian travel.
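    The prospect-theory scoring can be sketched as follows, using the classic Tversky-Kahneman constants as assumed parameters and treating savings relative to a reference route as gains; the weighting scheme is illustrative.

        def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
            """Prospect-theory value: concave for gains, loss-averse for losses."""
            return x**alpha if x >= 0 else -lam * (-x)**beta

        def route_score(length_m, judge_delay_s, ref_length_m, ref_delay_s,
                        w_dist=0.5, w_delay=0.5):
            g_dist = pt_value(ref_length_m - length_m)    # distance saving
            g_delay = pt_value(ref_delay_s - judge_delay_s)  # delay saving
            return w_dist * g_dist + w_delay * g_delay

        # With judgment delay weighted more heavily, a slightly longer but
        # easier-to-judge route can outscore the shortest one.
        print(route_score(502, 2.0, 500, 6.0, 0.3, 0.7))   # > 0
        print(route_score(500, 6.0, 500, 6.0, 0.3, 0.7))   # = 0 (reference)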
    High-quality background modeling of LCD-Mura defects
    XIE Rui, LI Gang, ZHANG Renbin
    2016, 36(4):  1151-1155.  DOI: 10.11772/j.issn.1001-9081.2016.04.1151
    Considering that the LCD-Mura defect background reconstructed by current background suppression methods is vulnerable to introduced noise and target defects, a defect-image background modeling method based on Singular Value Decomposition (SVD) and maximum entropy was proposed. The singular-value sequence was obtained by SVD of the image pixel matrix; the correspondence between image components and singular values was derived via matrix norms, the entropy of each image component was calculated from the proportion of its singular value, and the effective singular values for background reconstruction were then determined by the maximum-entropy criterion. Finally, the background was obtained by matrix reconstruction, and a general method for evaluating the quality of background reconstruction was put forward. Compared with the cubic B-spline curve fitting method, the proposed method improves the contrast of region Mura by at least 0.59 times and that of line Mura by up to 7.71 times; compared with the Discrete Cosine Transform (DCT) method, it reduces the noise of point Mura by at least 33.8 percent and that of line Mura by 76.76 percent. The simulation results show that the model has the advantages of low noise, low loss and high brightness, and can construct the background information of defect images more accurately.
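    A minimal sketch of the reconstruction: SVD the pixel matrix, keep the leading components selected by an entropy criterion over singular-value proportions, and rebuild; the 90% cumulative-entropy cutoff is a simplified stand-in for the paper's maximum-entropy rule.

        import numpy as np

        def background(img):
            U, s, Vt = np.linalg.svd(img, full_matrices=False)
            p = s / s.sum()                          # singular-value proportions
            ent = -np.cumsum(p * np.log(p + 1e-12))  # running component entropy
            k = int(np.argmax(ent >= 0.9 * ent[-1])) + 1   # assumed cutoff rule
            return (U[:, :k] * s[:k]) @ Vt[:k, :]    # low-rank background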
    Space positioning method of bridge crane payload based on monocular vision
    LUO Yuyang, XU Weimin, ZHANG Mengjie, LIU Yuqiang
    2016, 36(4):  1156-1162.  DOI: 10.11772/j.issn.1001-9081.2016.04.1156
    In the monocular-vision-based bridge crane payload space positioning system, to solve the degradation of matching performance caused by rotation and tilt of the target template, a real-time payload space positioning method based on circle detection with vertical gradient direction lines and linear pre-interpolation was proposed. A spherical target, insensitive to rotation and tilt during detection, was attached to the top of the payload. First, the spherical target in the Region Of Interest (ROI) was detected accurately and quickly by the circle detection method based on vertical gradient direction lines. Secondly, the space coordinates of the spherical target center were determined by the space geometry method. The space coordinates were then fed back by linear pre-interpolation. In a contrast experiment with the traditional method, the space coordinates were transformed into cable lengths and payload-swing angles. The experimental results show that the payload-swing angles obtained by this method are more accurate than those of the traditional method and that the method meets real-time requirements; moreover, the maximum length measurement error between this method and the traditional method is 2.49%, which meets the accuracy requirement.
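    With a pinhole model, the sphere's camera-frame position can be recovered from its detected image circle: depth from the apparent radius, then back-projection of the circle center. The intrinsics in the example are assumptions, and the paper's space-geometry derivation may refine this small-angle approximation.

        import numpy as np

        def sphere_position(u, v, r_px, R_mm, fx, fy, cx, cy):
            z = fx * R_mm / r_px            # depth from apparent size
            x = (u - cx) * z / fx           # back-project the circle center
            y = (v - cy) * z / fy
            return np.array([x, y, z])      # camera-frame coordinates, in mm

        # Example with assumed intrinsics (fx = fy = 1200 px, 640x480 image).
        print(sphere_position(400, 300, 24.0, 40.0, 1200, 1200, 320, 240))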
    Geometric structure analysis method based on scalable vector graphics
    LIU Dongming, CHEN Lian, LI Xinyan
    2016, 36(4):  1163-1166.  DOI: 10.11772/j.issn.1001-9081.2016.04.1163
    Complex graphics usually consist of geometric primitives, so beyond recognizing simple primitives, complex graphics recognition focuses on the spatial relationships between graphic elements. Geometric structures are often too complex for heuristic rules, and existing structure analysis methods are too complicated for practical use. The core technical issues of structural analysis for hand-drawn geometry recognition were analyzed, and a geometric structure description model based on Scalable Vector Graphics (SVG) tags was designed to represent graphics by a formal description of the graphical elements and their constraints. SVG tags were used to store the graphical elements and their constraints, and the geometric shape and its internal relationships were identified by parsing the SVG tags. The model was validated in a prototype system, GeoSketch, with promising results. The experimental results show that the proposed method is simple and low-dimensional, and makes it easy to determine the shape of a sketch and its internal relations.
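    The tag-based description can be sketched with standard SVG shape elements plus a constraint attribute; the attribute name data-constraint and the relation vocabulary below are assumptions, not the paper's schema.

        import xml.etree.ElementTree as ET

        SVG_DOC = """<svg xmlns="http://www.w3.org/2000/svg">
          <line id="a" x1="0" y1="0" x2="100" y2="0"/>
          <line id="b" x1="0" y1="0" x2="0" y2="80" data-constraint="perpendicular:a"/>
          <circle id="c" cx="50" cy="40" r="20" data-constraint="inside:a,b"/>
        </svg>"""

        ns = "{http://www.w3.org/2000/svg}"
        root = ET.fromstring(SVG_DOC)
        for elem in root:
            tag = elem.tag.replace(ns, "")               # primitive type
            constraint = elem.get("data-constraint", "none")
            print(tag, elem.get("id"), "->", constraint)  # element + relation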
    Inpainting of seismic signal using consistent sparse representation method
    WANG Lifu, SUN Yi
    2016, 36(4):  1167-1172.  DOI: 10.11772/j.issn.1001-9081.2016.04.1167
    Focusing on the issue that portions of seismic waveforms are lost due to mechanical failure of recording devices or damage to seismograms, an inpainting method based on a Consistent Sparse Representation (CSR) model was proposed. Firstly, each seismic frame was represented individually in a Sparse Representation (SR) model. Secondly, the consistency between the spectra of seismic frames was exploited, with Principal Component Analysis (PCA) introduced to extract this inter-frame consistency. Finally, combining the sparseness of each frame with the consistency between frames, the proposed algorithm was used to inpaint the lost portions. Simulation experiments show that when the missing interval was 50% of the frame length, the inpainting method based on the traditional sparse representation model produced incorrect results, whereas the method based on the consistent sparse representation model recovered the lost portion well; the performance of the CSR inpainting method is thus much better than that of traditional sparse representation inpainting.
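    The inter-frame consistency step can be sketched as iterative projection onto a PCA subspace shared across frames; the per-frame sparse-coding term of the full CSR model is omitted for brevity, and the component count and iteration budget are assumptions.

        import numpy as np

        def inpaint(frames, mask, n_comp=5, iters=50):
            """frames: (n_frames, frame_len); mask: True where samples are known."""
            x = np.where(mask, frames, frames[mask].mean())   # crude initial fill
            for _ in range(iters):
                mu = x.mean(axis=0)
                U, s, Vt = np.linalg.svd(x - mu, full_matrices=False)
                # project onto the low-dimensional structure shared by frames
                proj = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp, :] + mu
                x = np.where(mask, frames, proj)              # keep known samples
            return x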
    Laser bathymetry waveform processing based on robust least square support vector machine
    WANG Yong, ZHAO Xianli, FU Chengqun, XIE Lijun
    2016, 36(4):  1173-1178.  DOI: 10.11772/j.issn.1001-9081.2016.04.1173
    The traditional unweighted Least Squares Support Vector Machine (LS-SVM) and the weighted LS-SVM have several disadvantages when processing low Signal-to-Noise Ratio (SNR) laser echoes in lidar bathymetry, so a filtering method named HW-LS-SVM was proposed by combining robust least squares with the weighted LS-SVM. Firstly, a strong prior weight function, the residual errors and the mean square error were calculated through an elimination weight function; then the weights of the LS-SVM were computed by the weight function; finally, the echo signal was filtered by iterative computation. The simulation results show that HW-LS-SVM is more robust than the least squares SVM, the Bayesian least squares SVM and the traditional weighted least squares SVM: results remained satisfactory when the noise ratio reached 45%, and the water surface and bottom were extracted with 100% accuracy. The water depths extracted from four groups of laser echoes in a deep-water area and four groups in a shallow-water area all agree with the background data. The proposed method has better anti-noise performance and is more suitable for filtering low-SNR lidar bathymetry signals.
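    The iterative reweighting idea behind this kind of robust echo filtering can be sketched for LS-SVM regression as below; the RBF width, gamma, and the hard-cutoff weight rule are illustrative assumptions, not the HW-LS-SVM weight functions from the paper.

        import numpy as np

        def rbf(t, sigma=2.0):
            d = t[:, None] - t[None, :]
            return np.exp(-d**2 / (2 * sigma**2))

        def robust_lssvm(t, y, gamma=10.0, iters=3):
            """Fit y(t) with an iteratively reweighted LS-SVM; returns a predictor."""
            n = len(t); K = rbf(t); w = np.ones(n)
            for _ in range(iters):
                # LS-SVM dual system: [0 1^T; 1 K + diag(1/(gamma*w))] [b; a] = [0; y]
                A = np.zeros((n + 1, n + 1))
                A[0, 1:] = 1.0; A[1:, 0] = 1.0
                A[1:, 1:] = K + np.diag(1.0 / (gamma * w))
                sol = np.linalg.solve(A, np.r_[0.0, y])
                b, alpha = sol[0], sol[1:]
                r = y - (K @ alpha + b)                 # residuals
                s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale
                w = np.where(np.abs(r) <= 2.5 * s, 1.0, 1e-4)  # drop outliers
            return lambda tq: rbf_query(tq, t, alpha, b)

        def rbf_query(tq, t, alpha, b, sigma=2.0):
            d = tq[:, None] - t[None, :]
            return np.exp(-d**2 / (2 * sigma**2)) @ alpha + b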