
Table of Contents

    10 March 2016, Volume 36 Issue 3
    Random service system model based on UPnP service discovery
    HU Zhikun, SONG Jingye, CHEN Yuan
    2016, 36(3):  591-595.  DOI: 10.11772/j.issn.1001-9081.2016.03.591
    In the automatic-discovery process of smart home network devices, serious congestion occurs because devices randomly and independently choose a delay time before sending service response messages. To solve this problem, taking the Universal Plug and Play (UPnP) service discovery protocol as an example and considering the different demands of reliability and real-time performance, a random service system model based on UPnP service discovery was proposed. A profit-loss function combining a system response index and a waiting index was designed, and the relation between the optimal buffer queue length and the profit-loss coefficient was obtained. Through comparison of arrival time, departure time, waiting time and travel time under different buffer queue lengths, the necessity of the profit-loss function and the feasibility of the proposed model are verified.
    Firm real-time data-transmitting system based on data stream-transmitting mechanism
    CAO Jian, LIU Qiong, WANG Yuan
    2016, 36(3):  596-600.  DOI: 10.11772/j.issn.1001-9081.2016.03.596
    Aiming at the low data-transmitting efficiency of traditional message-oriented middleware in power information systems, a firm real-time data-transmitting system based on a data stream-transmitting mechanism was proposed. A queue caching mechanism was adopted to realize asynchronous sending and batch confirmation of messages. The data stream-transmitting mechanism was designed to eliminate the cache latency and cache resource cost of data on transit nodes, improving the timeliness and concurrency of data transmission. A distributed data-routing design was used to make the node network transparent to third-party systems and to provide a data-routing distribution function. Simulation results on a data-exchange scenario of a provincial electric power information system verified the system performance: the concurrent data-exchange capacity reached 3000 concurrent connections, the transmission speed reached 980 MB/s in a gigabit-bandwidth environment, and the switching delay remained at the millisecond level.
    Dynamic scheduling mechanism for wireless video transmission
    LI Yong, CHENG Zhirui
    2016, 36(3):  601-605.  DOI: 10.11772/j.issn.1001-9081.2016.03.601
    Focusing on the long delay of wireless real-time transmission networks for HD (High Definition) video, a dynamic scheduling mechanism based on Enhanced Hybrid Coordination Function (EHCF) was proposed. In this mechanism, firstly, truncation and extension of the beacon interval were used to adapt to bursts of video data. Then, the transmission needs of each station were determined by the size of its video data cache, and the priority of the station was defined accordingly. Next, the threshold of transmission needs and their differences were applied to evaluate the weights of the transmission needs of high-priority stations. Finally, the channel resource was allocated according to this assessment. A comparison with the traditional Hybrid Coordination Function (HCF) was made in H.264-based video transmission simulations at 22 Mb/s with the same number of stations. The results show a significant improvement in delay, ranging from 54.3% to 87.6%. The theoretical analysis and simulation results show that the EHCF mechanism can effectively reduce network delay in wireless video transmission.
    Multipath braided model and fault-tolerant routing scheme for wireless sensor network
    YU Leilei, ZHOU Yongli, HUANG Yu
    2016, 36(3):  606-609.  DOI: 10.11772/j.issn.1001-9081.2016.03.606
    In Wireless Sensor Networks (WSN), disjoint multipath routing can lead to the long-path problem, and braided multipath routing can weaken fault-tolerant performance. To address these issues, a multipath braided model and a fault-tolerant routing scheme based on the model were proposed. Firstly, the intersection of multiple paths from the source to the destination was quantified by establishing the corresponding multipath braided model, and a probability model of fault tolerance was proposed to relate path intersection to fault tolerance. Secondly, a fault-tolerant routing scheme was designed based on local intersection adjustment. Experimental results show that, when the proposed model and scheme are applied to typical multipath routing schemes—Sequential Assignment Routing (SAR) and Energy Efficient Fault-tolerant Multipath Routing (EEFTMR)—the data transfer success rate can be improved effectively, with good performance in network throughput and energy consumption as well.
    Multipath error of deep coupling system based on integrity
    LIU Linlin, GUO Chengjun, TIAN Zhong
    2016, 36(3):  610-615.  DOI: 10.11772/j.issn.1001-9081.2016.03.610
    Focusing on the elimination of multipath error in the Global Positioning System (GPS), a multipath error elimination method combining integrity monitoring with a deep coupling structure was proposed. Firstly, GPS and the Strapdown Inertial Navigation System (SINS) were combined into a deep coupling structure. Then, the pseudorange residual and pseudorange-rate residual output by the phase frequency detector were used as test statistics. Since the pseudorange residual and pseudorange-rate residual follow a Gaussian distribution, their detection thresholds were calculated accordingly. Finally, the detection thresholds were used to evaluate the test statistics, and the screened pseudorange residual and pseudorange-rate residual were fed into the Kalman filter. In a simulation comparison with a multipath error elimination method without integrity monitoring, the latitude error decreased by about 40 m, the yaw angle error by about 4 degrees, and the north velocity error by about 2 m/s. Compared with the traditional multipath elimination method using wavelet filtering, the height error decreased by about 40 m and the pitch angle error by about 5 degrees. The simulation results show that the proposed integrity-based method can effectively eliminate the positioning error caused by multipath (reflected in position error, attitude angle error and velocity error), and reduces it more effectively than the traditional filtering method.
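    The thresholding step of such an integrity check can be sketched briefly. A minimal Python illustration, assuming zero-mean Gaussian residuals with known standard deviation; the function names, the false-alarm rate, and the choice to zero out rejected residuals are illustrative assumptions rather than the paper's exact procedure:

    ```python
    import numpy as np
    from scipy.stats import norm

    def detection_threshold(sigma, p_fa=1e-3):
        # Two-sided threshold for a zero-mean Gaussian test statistic
        # at false-alarm probability p_fa.
        return sigma * norm.ppf(1.0 - p_fa / 2.0)

    def screen_residuals(residuals, sigma, p_fa=1e-3):
        # Reject (here: zero out) residuals exceeding the threshold, so
        # multipath-contaminated measurements do not reach the Kalman filter.
        t = detection_threshold(sigma, p_fa)
        r = np.asarray(residuals, dtype=float)
        return np.where(np.abs(r) <= t, r, 0.0)
    ```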
    Fault detection filter design based on genetic algorithm in wireless sensor and actuator network
    LIU Yong, SHEN Xuanfan, LIAO Yong, ZHAO Ming
    2016, 36(3):  616-619.  DOI: 10.11772/j.issn.1001-9081.2016.03.616
    To improve the reliability of the Wireless Sensor and Actuator Network (WSAN), an optimal design method based on a Genetic Algorithm (GA) for the WSAN fault detection filter was proposed. In system modeling, the influence of wireless network transmission delay on the networked control system was modeled as external noise; a composite optimization index composed of sensitivity and robustness was taken as the design goal of the fault detection filter, and this optimization objective served as the core of the GA—the fitness function. At the same time, according to the numerical characteristics of the optimization objective in WSAN, real coding, uniform mutation, arithmetic crossover and other operators were selected to speed up convergence while preserving the accuracy of the results. The optimized filter not only restrains the noise signal but also amplifies the fault signal. Finally, the effectiveness of the proposed design is demonstrated by Matlab/OMNET++ hybrid simulations.
    Existence detection algorithm for non-cooperative burst signals in wideband
    WANG Yang, WANG Bin, JIANG Tianli, LIU Huaixing, CHEN Ting
    2016, 36(3):  620-627.  DOI: 10.11772/j.issn.1001-9081.2016.03.620
    With the extensive application of wideband receivers, the blind detection of non-cooperative burst signals in broadband is increasingly important. It is difficult to detect burst signals with a low duty cycle and to distinguish burst signals with a high duty cycle from continuous-time signals. The problem was solved by constructing two broadband spectral statistics: the maximum spectrum and the maximum difference spectrum. By keeping the maximum value of the instantaneous spectrum, the maximum spectrum retains the information of both burst and non-burst signals; by keeping the maximum difference between adjacent instantaneous spectra, the maximum difference spectrum extracts burst information while suppressing continuous-time signals. Using these two spectra, the detection of burst signals in broadband is completed. The test results show that the proposed algorithm can handle burst signals of any duty cycle.
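    A minimal numpy sketch of the two statistics, assuming the input is a time-by-frequency array of instantaneous power spectra (all names here are illustrative):

    ```python
    import numpy as np

    def burst_statistics(frames):
        """frames: 2-D array (time x frequency) of instantaneous power spectra.
        Returns the maximum spectrum and the maximum difference spectrum."""
        frames = np.asarray(frames, dtype=float)
        # Maximum spectrum: per-bin maximum over time; keeps burst and
        # continuous signals alike.
        max_spectrum = frames.max(axis=0)
        # Maximum difference spectrum: per-bin maximum of the difference
        # between adjacent instantaneous spectra; bursts produce large
        # positive jumps while continuous signals are suppressed.
        max_diff_spectrum = np.diff(frames, axis=0).max(axis=0)
        return max_spectrum, max_diff_spectrum
    ```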
    Graphics processor unit parallel computing in Matlab and its application in topology optimization
    CAI Yong, LI Sheng
    2016, 36(3):  628-632.  DOI: 10.11772/j.issn.1001-9081.2016.03.628
    The hardware cost of fast structural topology optimization based on traditional parallel computing methods is high, and the efficiency of code development is low. To solve these problems, a fully parallel computing method for Bi-directional Evolutionary Structural Optimization (BESO) based on Matlab and the Graphics Processor Unit (GPU) was proposed. Firstly, the advantages, disadvantages and application range of three kinds of GPU computing methods available in Matlab were discussed. Secondly, built-in functions were used to directly parallelize math operations between vectors and dense matrices, MEX functions were introduced to realize fast solution of sparse finite element equations through the CUSOLVER library, and Parallel Thread eXecution (PTX) code was introduced to parallelize the optimization decisions of the element sensitivity analysis and other decisions in the topology optimization. Numerical examples show that the GPU parallel computing program based on Matlab has high coding efficiency and avoids precision differences between programming languages, ultimately achieving a considerable speedup with the same results.
    Task scheduling method based on template genetic algorithm in cloud environment
    SHENG Xiaodong, LI Qiang, LIU Zhaozhao
    2016, 36(3):  633-636.  DOI: 10.11772/j.issn.1001-9081.2016.03.633
    Cloud task scheduling is a hot issue in cloud computing research, and the scheduling method directly affects the overall performance of the cloud platform. A Template-Based Genetic Algorithm (TBGA) for task scheduling was proposed. Firstly, according to each processor's CPU speed, bandwidth and other parameters, the number of tasks that should be allocated to each processor was calculated; the result was called the allocation template. Secondly, according to the template, the tasks were combined into multiple subsets, and finally each subset of tasks was allocated to the corresponding processor using a genetic algorithm. Experimental results show that the method obtains shorter total scheduling times: TBGA reduced the task-set completion time by 20% compared with the Min-Min algorithm and by 30% compared with the Genetic Algorithm (GA). Therefore, TBGA is an effective task scheduling algorithm.
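    The allocation template can be illustrated with a short sketch: each processor receives a share of the tasks proportional to a capacity score (e.g. some weighted combination of CPU speed and bandwidth). The scoring and the largest-remainder rounding are assumptions for illustration, not the paper's exact formula:

    ```python
    def allocation_template(capacities, n_tasks):
        """capacities: per-processor capacity scores.
        Returns how many of n_tasks each processor should receive,
        proportional to its capacity (largest-remainder rounding)."""
        total = sum(capacities)
        raw = [c / total * n_tasks for c in capacities]
        counts = [int(r) for r in raw]
        remainder = n_tasks - sum(counts)
        # Hand the leftover tasks to the largest fractional parts.
        order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i],
                       reverse=True)
        for i in order[:remainder]:
            counts[i] += 1
        return counts

    # e.g. allocation_template([3.0, 1.0, 1.0], 100) -> [60, 20, 20]
    ```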
    Attribute-based encryption with fast decryption on prime order groups
    LI Zuohui, CHEN Xingyuan
    2016, 36(3):  637-641.  DOI: 10.11772/j.issn.1001-9081.2016.03.637
    The decryption cost of most Attribute-Based Encryption (ABE) schemes grows linearly with the number of attributes used in decryption. Attribute-Based Encryption with Fast decryption (FABE) solves this problem: ciphertexts can be decrypted with a constant number of pairings. To address the superfluous computation overhead of existing adaptively secure FABE schemes, which are designed on composite-order groups, an adaptively secure key-policy ABE scheme with fast decryption on prime-order groups, named PFKP-ABE, was proposed. Firstly, based on dual pairing vector spaces and Linear Secret-Sharing Scheme (LSSS) technology, PFKP-ABE was constructed on prime-order groups. Then, a sequence of mutually indistinguishable attacking games was designed to prove that the scheme is adaptively secure in the standard model using the dual system encryption approach. Performance analysis indicates that, in comparison with another adaptively secure key-policy FABE scheme on composite-order groups (FKP-ABE), decryption speed increases by roughly 15 times.
    Information hiding scheme for 3D model based on profile analysis
    REN Shuai, SHI Fangxia, ZHANG Tao
    2016, 36(3):  642-646.  DOI: 10.11772/j.issn.1001-9081.2016.03.642
    Aiming at confidential communication based on information hiding technology, an information hiding algorithm using interval analysis of Z-axis values on the vertical profile of three-dimensional (3D) models was proposed. First, the 3D model was disproportionately scaled and rotated according to a fixed size and angle respectively, and the vertical profile was obtained by horizontal mapping. Second, the vertical profile was mapped into a two-dimensional coordinate system and the values on the vertical axis were sampled with a fixed step size. Last, the vertical values were converted into binary numbers with interval constraints according to a fixed threshold. Through disproportionate scaling with a fixed size, the algorithm is resistant to scaling attacks; thanks to the fixed rotation angle and step size, the data are embedded in the redundancy of the whole model, making the algorithm robust against cutting. The experimental results illustrate that the algorithm is strongly robust against random noise below 0.2%, re-meshing and non-uniform simplification.
    Audio steganalysis method based on fuzzy C-means clustering and one class support vector machine
    WANG Yujie, JIANG Weiwei
    2016, 36(3):  647-652.  DOI: 10.11772/j.issn.1001-9081.2016.03.647
    Concerning the poor adaptability of traditional audio steganalysis methods using two-class classifiers to unknown steganography methods, an audio steganalysis method based on Fuzzy C-Means (FCM) clustering and a One-Class Support Vector Machine (OC-SVM) was proposed. In the training process, features were first extracted from the training audio, including statistical features of the Short-Time Fourier Transform (STFT) spectrum and features based on audio quality measures; then FCM clustering was executed on the extracted features to obtain C clusters; finally, the extracted features were used to train the OC-SVM classifier with multiple hyperspheres. In the detection process, features were extracted from the testing audio, which was then classified according to the boundary of the multi-hypersphere OC-SVM. The experimental results reveal that, for some typical audio steganography methods, this method detects accurately: at full embedding capacity, the total detection accuracy is 85.1%; furthermore, compared with K-means clustering, this method improves the detection accuracy by at least 2%. This steganalysis method is more universal than methods using two-class classifiers and is better suited to detecting stego-audio produced by steganography methods unknown beforehand.
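    A compact sketch of the training pipeline, using a minimal hand-rolled fuzzy C-means and scikit-learn's OneClassSVM as one hypersphere per cluster; feature extraction is omitted and all parameter values are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    def fcm(X, c, m=2.0, iters=100, seed=0):
        """Minimal fuzzy C-means: returns (centers, memberships)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
            U = 1.0 / d ** (2.0 / (m - 1.0))   # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    def train_hypersphere_models(X_cover, c=3):
        """One OC-SVM per fuzzy cluster of cover-audio features."""
        _, U = fcm(X_cover, c)
        labels = U.argmax(axis=1)
        return [OneClassSVM(kernel="rbf", nu=0.05).fit(X_cover[labels == k])
                for k in range(c) if np.any(labels == k)]

    def is_stego(models, x):
        # Flag as stego only if every hypersphere rejects the sample.
        return all(m.predict(x.reshape(1, -1))[0] == -1 for m in models)
    ```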
    Research on properties of shared resource matrix method based on double cross linked list
    YANG Peng, ZHAO Hui, BAO Zhonggui
    2016, 36(3):  653-656.  DOI: 10.11772/j.issn.1001-9081.2016.03.653
    Concerning the high time complexity of the shared resource matrix method based on array storage in the detection of system covert channels, an improved algorithm based on a double cross linked list was proposed. Firstly, the traditional array storage was replaced by double cross linked list storage in the transitive closure operation. Secondly, a probability model for the shared resource matrix method was constructed. Finally, the time complexity of the improved algorithm and the features of the shared resource matrix were analyzed under the probability model. When the shared resource matrix is sparse, the improved algorithm based on double cross linked list storage improves the time efficiency of the shared resource matrix method by 67% compared with the traditional array-based implementation. When the shared resource matrix is very large, the transitive closure operation quickly fills in the matrix elements, and the time-efficiency advantage of the improved algorithm over the traditional array-based algorithm declines. This property of the transitive closure operation was proven through theoretical deduction under the probability model.
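    For reference, the transitive closure operation at the core of the shared resource matrix method can be sketched in its dense (array-storage) form; the paper's contribution replaces this storage with a double cross linked list to exploit sparsity:

    ```python
    def transitive_closure(matrix):
        """Warshall-style transitive closure of a boolean shared-resource
        matrix, shown here in dense form for clarity."""
        n = len(matrix)
        closure = [row[:] for row in matrix]
        for k in range(n):
            for i in range(n):
                if closure[i][k]:           # skip rows with no path to k
                    for j in range(n):
                        if closure[k][j]:
                            closure[i][j] = True
        return closure
    ```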
    Mobile mutual authentication protocol based on Hash function
    TAO Yuan, ZHOU Xi, MA Yupeng, ZHAO Fan
    2016, 36(3):  657-660.  DOI: 10.11772/j.issn.1001-9081.2016.03.657
    Aiming at the channel insecurity caused by wireless transmission in mobile Radio Frequency IDentification (RFID) systems, a low-cost mobile mutual authentication protocol based on a Hash function was proposed, considering both protocol complexity and tag implementation cost. In the protocol, a squaring operation is used to dynamically update the tag identifier, and the reader identifier, a pseudo-random function and the Hash function are used to strengthen identity authentication between the reader and the back-end server, improving the mobility of the system. Compared with typical Hash-based authentication protocols and tag ownership transfer protocols, the proposed protocol resists tracking, impersonation, replay, man-in-the-middle and Denial of Service (DoS) attacks, among others, ensuring the security of tag ownership transfer. Analysis of computation and storage efficiency shows that the computation on the tag is reduced and the storage requirement is lower.
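    A minimal sketch of one authentication round, assuming SHA-256 as the Hash function; the message layout and helper names are illustrative, while the id^2 mod n identifier update follows the squaring operation described in the abstract:

    ```python
    import hashlib

    def h(*parts):
        # Hash function of the protocol (SHA-256 assumed here).
        return hashlib.sha256(b"|".join(parts)).digest()

    def tag_response(tag_id, reader_id, nonce):
        # The tag proves knowledge of its identifier without revealing it;
        # nonce is a fresh random challenge from the reader.
        return h(tag_id, reader_id, nonce)

    def server_verify(db, reader_id, nonce, response, modulus):
        # The back-end server searches for the tag whose hashed reply
        # matches, then updates the identifier with the squaring
        # operation id^2 mod n (the dynamic update from the abstract).
        for i, tag_id in enumerate(db):
            if h(tag_id, reader_id, nonce) == response:
                new_id = pow(int.from_bytes(tag_id, "big"), 2, modulus)
                db[i] = new_id.to_bytes((modulus.bit_length() + 7) // 8, "big")
                return True
        return False
    ```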
    Anti-fingerprinting model of operation system based on network deception
    CAO Xu, FEI Jinlong, ZHU Yuefei
    2016, 36(3):  661-664.  DOI: 10.11772/j.issn.1001-9081.2016.03.661
    Since traditional host operating system anti-fingerprinting technologies lack the ability of integrated defense, a Network Deception based operating system Anti-Fingerprinting model (NDAF) was proposed. Firstly, its basic working principle was introduced: a deception server creates the fingerprint deception template, and each host dynamically changes its protocol stack fingerprint according to the template, thereby misleading attackers' operating system fingerprinting. Secondly, a trust management mechanism was proposed to improve system efficiency: different deception strategies are carried out according to different threat levels. Experiments show that NDAF has a certain impact on network efficiency, about 11% to 15%. Comparative experiments show that the anti-fingerprinting ability of NDAF is better than that of typical operating system anti-fingerprinting tools (OSfuscate and IPmorph). NDAF can effectively increase the security of the target network through integrated defense and deception defense.
    Query processing method of XML streaming data using list
    HE Zhixue, LIAO Husheng
    2016, 36(3):  665-669.  DOI: 10.11772/j.issn.1001-9081.2016.03.665
    Focusing on the characteristics of processing semi-structured eXtensible Markup Language (XML) streaming data—the stream arrives continuously in real time, must be read sequentially and only once into memory, queries must be processed on the fly, and the usable buffer is very small—and concerning the limited query expressiveness and inefficiency of current approaches on large-scale data, a QXList method for massive data processing based on SAX parsing of XML was proposed. A data model and an integrated algorithm framework were defined first, and integrated methods for processing predicates and wildcards were discussed in detail. In this method, a layer value is used to determine the relationship between two elements, and relational pointers link multiple candidate-node lists to produce query results. Two optimizations for decreasing buffer size were analyzed. The experimental results show that the proposed approach is effective and efficient, outperforming state-of-the-art algorithms such as QStream++ and the query engines MonetDB and SAXSON by about 30 percent, especially on large data, while memory usage remains nearly constant.
    Non-relational data storage management mechanism for massive unstructured data
    LIU Chao, HU Chengyu, YAO Hong, LIANG Qingzhong, YAN Xuesong
    2016, 36(3):  670-674.  DOI: 10.11772/j.issn.1001-9081.2016.03.670
    Traditional relational data storage systems have been criticized for poor performance and lack of fault tolerance, and therefore cannot satisfy the efficiency requirements of massive unstructured data management. A non-relational storage management mechanism with high performance and high availability was proposed. First, a user-friendly application interface was designed, and data could be distributed to multiple storage nodes through an efficient consistent hashing algorithm. Second, a configurable data replication mechanism was presented to enhance the availability of the storage system. Finally, a query fault handling mechanism was proposed to improve the storage system's fault tolerance and avoid service outages caused by node failures. The experimental results show that the concurrent access capacity of the proposed storage system increases by 30% and 50% respectively compared with a traditional file system and a relational database under different user workloads; meanwhile, the availability loss of the storage system under fault conditions is less than 14% within a reasonable response time. Therefore, it is applicable to efficient storage management of massive unstructured data.
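    The consistent hashing step can be sketched as a hash ring with virtual nodes, so that adding or removing a storage node remaps only a small fraction of keys; the class and parameter names are illustrative:

    ```python
    import bisect
    import hashlib

    class ConsistentHashRing:
        """Hash ring with virtual nodes; a key is stored on the first
        node clockwise from its hash point."""
        def __init__(self, nodes, vnodes=100):
            self.ring = []                       # sorted (point, node) pairs
            for node in nodes:
                for i in range(vnodes):
                    self.ring.append((self._hash(f"{node}#{i}"), node))
            self.ring.sort()
            self.points = [p for p, _ in self.ring]

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def node_for(self, key):
            idx = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
            return self.ring[idx][1]

    ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
    print(ring.node_for("object-42"))   # stable mapping as nodes join/leave
    ```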
    Query optimization for distributed database based on parallel genetic algorithm and max-min ant system
    LIN Jiming, BAN Wenjiao, WANG Junyi, TONG Jichao
    2016, 36(3):  675-680.  DOI: 10.11772/j.issn.1001-9081.2016.03.675
    Since relations and their fragments in a distributed database have multiple copies stored at multiple sites, which increases the time and space complexity of queries and lowers the search efficiency of the Query Execution Plan (QEP), a Parallel Genetic Algorithm and Max-Min Ant System (PGA-MMAS) based on the design principles of a Fragments Site Selector (FSS) was proposed. Firstly, based on the design requirements of a distributed information management system for actual business, the FSS was designed to heuristically select the best copy of each relation, decreasing the query join cost and the search space of PGA-MMAS. Secondly, the Genetic Algorithm (GA) encoded the final join relations and performed parallel genetic operations to obtain a set of relatively optimal QEPs, taking advantage of the quick convergence of GA. Then, the QEPs were transformed into the initial pheromone distribution of the Max-Min Ant System (MMAS) to obtain the optimal QEP quickly and efficiently. Finally, simulation experiments were conducted with different numbers of relations; the results show that PGA-MMAS based on FSS searches for the optimal QEP more efficiently than the original GA, Fragments Site Selector-Genetic Algorithm (FSS-GA), Fragments Site Selector-Max-Min Ant System (FSS-MMAS) and Fragments Site Selector-Genetic Algorithm-Max-Min Ant System (FSS-GA-MMAS). In actual engineering applications, the proposed algorithm finds high-quality QEPs and improves the efficiency of multi-join queries in distributed databases.
    Particle swarm optimization algorithm based on multi-strategy synergy
    LI Jun, WANG Chong, LI Bo, FANG Guokang
    2016, 36(3):  681-686.  DOI: 10.11772/j.issn.1001-9081.2016.03.681
    Aiming at the shortcomings that the Particle Swarm Optimization (PSO) algorithm easily falls into local optima and has low precision in the later stage of evolution, a modified Multi-Strategy synergy PSO (MSPSO) algorithm was proposed. Firstly, a probability threshold of 0.3 was set: in every iteration, if the randomly generated probability value was less than the threshold, opposition-based learning was applied to the best individual to generate its opposite solution, improving the convergence speed and precision of PSO; otherwise, a Gaussian mutation strategy was applied to the particle positions to enhance population diversity. Secondly, a Cauchy mutation strategy with a linearly decreasing scale parameter was proposed to generate better solutions that guide particles toward the optimal search space. Finally, simulation experiments were conducted on eight benchmark functions. The MSPSO algorithm achieves convergence mean values of 1.68E+01, 2.36E-283, 8.88E-16, 2.78E-05 and 8.88E-16 on Rosenbrock, Schwefel's P2.22, Rotated Ackley, Quadric Noise and Ackley respectively, and converges to the optimal solution of 0 on Sphere, Griewank and Rastrigin, outperforming GDPSO (PSO based on Gaussian Disturbance) and GOPSO (PSO based on global best Cauchy mutation and Opposition-based learning). The results show that the proposed algorithm has higher convergence accuracy and can effectively avoid being trapped in local optima.
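    The per-iteration strategy switch can be sketched as follows, assuming a box-constrained search space; the mutation scales and function names are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def opposite_solution(x, lo, hi):
        # Opposition-based learning: reflect x across the search interval.
        return lo + hi - x

    def gaussian_mutation(x, lo, hi, scale=0.1):
        # Gaussian mutation keeps population diversity.
        return np.clip(x + rng.normal(0.0, scale * (hi - lo), x.shape), lo, hi)

    def cauchy_mutation(x, lo, hi, t, t_max, s0=1.0):
        # Cauchy mutation with a linearly decreasing scale parameter:
        # broad exploration early, fine exploitation late.
        scale = s0 * (1.0 - t / t_max)
        return np.clip(x + scale * rng.standard_cauchy(x.shape), lo, hi)

    def diversify(positions, gbest, lo, hi, threshold=0.3):
        # Strategy switch from the abstract: with probability `threshold`
        # generate the opposite solution of the best individual, otherwise
        # apply Gaussian mutation to the particle positions.
        if rng.random() < threshold:
            return opposite_solution(gbest, lo, hi)
        return gaussian_mutation(positions, lo, hi)
    ```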
    Improved particle swarm optimization algorithm based on Gaussian disturbance and natural selection
    AI Bing, DONG Minggang
    2016, 36(3):  687-691.  DOI: 10.11772/j.issn.1001-9081.2016.03.687
    In order to effectively balance the global and local search performance of the Particle Swarm Optimization (PSO) algorithm, an improved PSO algorithm based on Gaussian disturbance and natural selection (GDNSPSO) was proposed. Based on the simple PSO algorithm, the improved algorithm takes into account the mutual influence among all individual best particles, replacing each particle's individual best value with the Gaussian-disturbed mean of all individual bests. The survival-of-the-fittest mechanism of natural selection was employed to improve performance. At the same time, the inertia weight was adjusted nonlinearly by a cosine function with an adaptively adjusted threshold, and an asynchronous-change strategy was used to improve the learning ability of the particles. The simulation results show that the GDNSPSO algorithm improves convergence speed and precision, and outperforms some recently proposed improved PSO algorithms.
    Instance transfer learning model based on sparse hierarchical probabilistic self-organizing graphs
    WU Lei, TIAN Ruya, ZHANG Xuefu
    2016, 36(3):  692-696.  DOI: 10.11772/j.issn.1001-9081.2016.03.692
    Current instance-transfer learning suffers from the mismatch between the granularities of data from multi-source heterogeneous domains. A Transfer Sparse unsupervised Hierarchical Probabilistic Self-Organizing Graph (TSHiPSOG) method, based on the framework of the Hierarchical Probabilistic Self-Organizing Graph (HiPSOG) method in the single domain, was proposed. Firstly, representation vectors with different granularities were extracted from the source and target domains using a hierarchical self-organizing model based on a probabilistic mixture of multivariate Gaussian components, and a sparse graph probabilistic criterion was used to control the growth of the model. Secondly, the most similar representation vector for each target-domain datum was sought in the information-rich source domain using the Maximum Information Coefficient (MIC). Then, the data in the target domain were classified using the labels of similar representation vectors in the source domain. Finally, experimental results on the widely used 20 Newsgroups dataset and a spam detection dataset show that the proposed method improves the average classification accuracy of the target domain by 15.26% and 9.05% respectively using source-domain information; moreover, mining representation vectors of different granularities improves the average classification accuracy by a further 4.48% and 4.13%.
    Deep learning algorithm optimization based on combination of auto-encoders
    DENG Junfeng, ZHANG Xiaolong
    2016, 36(3):  697-702.  DOI: 10.11772/j.issn.1001-9081.2016.03.697
    In order to improve the learning accuracy of the Auto-Encoder (AE) algorithm and further reduce the classification error rate, a Sparse marginalized Denoising Auto-Encoder (SmDAE) was proposed, combining the Sparse Auto-Encoder (SAE) and the marginalized Denoising Auto-Encoder (mDAE). SmDAE is an auto-encoder to which the constraint conditions of SAE and mDAE are added, inheriting the characteristics of both so as to enhance deep learning ability. Experimental results show that SmDAE outperforms both SAE and mDAE on the given classification tasks; comparative experiments with a Convolutional Neural Network (CNN) show that SmDAE, with marginalized denoising and a more robust model, outperforms the convolutional neural network.
    Dynamic neural network structure design based on semi-supervised learning
    REN Hongge, LI Dongmei, LI Fujin
    2016, 36(3):  703-707.  DOI: 10.11772/j.issn.1001-9081.2016.03.703
    In view of the fact that a neural network's initial structure depends on the designer's experience and its adaptive ability is poor, a dynamic neural network structure design method based on a Semi-Supervised Learning (SSL) algorithm was proposed. To obtain a better-performing initial network structure, the neural network was trained with a semi-supervised learning method using both labeled and unlabeled samples, and the impact of hidden-layer neurons on the network output was judged using the Global Sensitivity Analysis (GSA) method. The dynamic neural network structure was optimized by pruning or adding hidden-layer neurons in time according to sensitivity magnitude, and the convergence of the dynamic process was investigated. Theoretical analysis and Matlab simulation experiments show that the number of hidden-layer neurons changes with training time under the SSL algorithm, completing the dynamic network structure design. In an application to a hydraulic Automatic Gauge Control (AGC) system, the system output stabilized after about 160 s with an output error as small as about 0.03 mm; compared with a Supervised Learning (SL) method and an UnSupervised Learning (USL) method, the output error was reduced by 0.03 mm and 0.02 mm respectively, indicating that the dynamic network based on the SSL algorithm effectively improves output precision in practical applications.
    Improved dynamic self-adaptive teaching-learning-based optimization algorithm
    WANG Peichong
    2016, 36(3):  708-712.  DOI: 10.11772/j.issn.1001-9081.2016.03.708
    The Teaching-Learning-Based Optimization (TLBO) algorithm has some weaknesses in function optimization problems, such as falling into local optima, converging slowly in the later period, and producing inaccurate solutions. To overcome these shortcomings, an improved TLBO algorithm with dynamic self-adaptive learning and dynamic random searching was proposed. Firstly, a linearly increasing dynamic variation coefficient was introduced into the teaching process to adjust the contribution of acquired knowledge to individual learning during iterative optimization. Secondly, to improve solution precision, the teacher individual performed dynamic random searching to exploit the solution space around the best individual. Experiments were conducted on 14 classic test functions, and the results show that the proposed algorithm is much better than standard TLBO in both solution accuracy and convergence speed, making it suitable for high-dimensional function optimization problems.
    Image recognition algorithm based on dual-view discriminant correlation analysis
    LI Jin, QIAN Xu
    2016, 36(3):  713-717.  DOI: 10.11772/j.issn.1001-9081.2016.03.713
    Focusing on the issue that multi-view correlation analysis is not effective in exploiting correlation information and neglects latent discriminant information in images, a Dual-View Discriminant Correlation Analysis (DVDCA) approach was proposed. Firstly, supervised within-class and between-class correlation variations were designed; secondly, the within-class correlation variation was maximized and the between-class correlation variation was minimized to extract discriminant features; finally, a constrained dual-view discriminant correlation model was designed to exploit rich view information both within and between views. Compared with multi-view linear discriminant analysis, Canonical Correlation Analysis (CCA), Multi-view Discriminant Latent Space (MDLS) and Uncorrelated Multi-view Discrimination Dictionary Learning (UMDDL) on the Multi-PIE dataset, the proposed algorithm improves the recognition rate by 1.45-4.73 percentage points; on the MFD dataset, it achieves an increase of 1.25-5.29 percentage points.
    Text keyword extraction method based on word frequency statistics
    LUO Yan, ZHAO Shuliang, LI Xiaochao, HAN Yuhui, DING Yafei
    2016, 36(3):  718-725.  DOI: 10.11772/j.issn.1001-9081.2016.03.718
    Focusing on the low efficiency and poor accuracy of the traditional TF-IDF (Term Frequency-Inverse Document Frequency) algorithm in keyword extraction, a text keyword extraction method based on word frequency statistics was proposed. Firstly, a formula for same-frequency words in a text was deduced according to Zipf's law; secondly, the proportion of words at each frequency in the text was determined in accordance with this formula, most of them being low-frequency words; finally, a TF-IDF algorithm based on word frequency statistics was proposed by applying the word frequency statistics law to keyword extraction. Simulation experiments were conducted on Chinese and English text datasets: the average relative error of the same-frequency-word formula was not more than 0.05, and the maximum absolute error of the proportion of words at each frequency was 0.04. Compared with the traditional TF-IDF algorithm, the average precision, average recall and average F1-measure of the proposed algorithm all increased, while the average runtime decreased. The simulation results show that, in text keyword extraction, the TF-IDF algorithm based on word frequency statistics is superior to the traditional TF-IDF algorithm in precision, recall and F1-measure, and effectively reduces the keyword-extraction runtime.
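    For illustration, the classical Zipf-derived estimate that a fraction 1/(f(f+1)) of distinct words occurs exactly f times (so roughly half the vocabulary are hapax legomena) can be paired with a plain TF-IDF ranking; the exact same-frequency-word formula deduced in the paper may differ:

    ```python
    import math
    from collections import Counter

    def zipf_proportion(f):
        """Classical Zipf-law estimate of the fraction of distinct words
        occurring exactly f times: p(f) = 1 / (f * (f + 1)).
        (f=1 -> 0.5: about half the vocabulary occurs only once.)"""
        return 1.0 / (f * (f + 1))

    def tfidf_keywords(doc_tokens, corpus, k=5):
        """Plain TF-IDF ranking; `corpus` is a list of token lists."""
        tf = Counter(doc_tokens)
        df = Counter(w for d in corpus for w in set(d))
        n_docs = len(corpus)
        scores = {w: (c / len(doc_tokens)) * math.log(n_docs / (1 + df[w]))
                  for w, c in tf.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]
    ```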
    Personal relation extraction based on text headline
    YAN Yang, ZHAO Jiapeng, LI Quangang, ZHANG Yang, LIU Tingwen, SHI Jinqiao
    2016, 36(3):  726-730.  DOI: 10.11772/j.issn.1001-9081.2016.03.726
    In order to overcome interference from non-person entities, the difficulty of selecting feature words, and the influence of multiple persons on target personal relation extraction, person judgment based on a decision tree, relation feature word generation based on minimum set cover, and a statistical approach based on three-layer sentence pattern rules were proposed. In the first step, 18 features were extracted from the attribute files of the China Conference on Machine Learning (CCML) competition 2015, and a C4.5 decision tree was used as the classifier, achieving 98.2% recall and 92.6% precision; the results of this step were used as the next step's input. Next, an algorithm based on minimum set cover was used: the feature word set covers all the personal relations while its scale is kept at a proper level, and it is used to identify the relation type in a text headline. In the last step, a statistical method based on three-layer sentence pattern rules was used to filter out low-proportion rules and to judge, from the proportions of positive and negative matches of the sentence pattern rules, whether an extracted personal relation is correct. The experimental results show that the approach achieves 82.9% recall, 74.4% precision and 78.4% F1-measure, so the proposed method can be applied to personal relation extraction from text headlines and helps to construct personal relation knowledge graphs.
    Web spam detection based on random forest and under-sampling ensemble
    LU Xiaoyong, CHEN Musheng
    2016, 36(3):  731-734.  DOI: 10.11772/j.issn.1001-9081.2016.03.731
    In order to solve the problems of imbalanced classification and the "curse of dimensionality", a binary classifier algorithm based on Random Forest (RF) and an under-sampling ensemble was proposed to detect Web spam. Firstly, the majority-class samples in the training dataset were sampled into several subsets; each of them was combined with the minority-class samples to generate several balanced training sample subsets. Then several RF classifiers were trained on these subsets to classify the testing samples. Finally, the classifications of the testing samples were determined by voting. Experiments on the WEBSPAM UK-2006 dataset show that the ensemble classifier outperforms RF, Bagging with RF, AdaBoost with RF and others: its accuracy, F1-measure and AUC increased by at least 14%, 13% and 11% respectively. Compared with the winners of the Web Spam Challenge 2007, the ensemble classifier increased the F1-measure by at least 1% and reached the optimum result in AUC.
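    A compact scikit-learn sketch of the under-sampling ensemble, assuming label 1 marks the minority (spam) class; parameter values are illustrative:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_undersampled_rf_ensemble(X, y, n_models=5, seed=0):
        """Split the majority class into subsets, pair each subset with
        all minority samples, and train one RF per balanced subset."""
        rng = np.random.default_rng(seed)
        maj_idx = rng.permutation(np.where(y == 0)[0])
        min_idx = np.where(y == 1)[0]
        models = []
        for chunk in np.array_split(maj_idx, n_models):
            idx = np.concatenate([chunk, min_idx])
            models.append(RandomForestClassifier(n_estimators=100,
                                                 random_state=seed)
                          .fit(X[idx], y[idx]))
        return models

    def predict_vote(models, X):
        # Final label is decided by majority vote over the ensemble.
        votes = np.stack([m.predict(X) for m in models])
        return (votes.mean(axis=0) >= 0.5).astype(int)
    ```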
    Combining topic similarity with link weight for Web spam ranking detection
    WEI Sha, ZHU Yan
    2016, 36(3):  735-739.  DOI: 10.11772/j.issn.1001-9081.2016.03.735
    Focusing on the issue that good-to-bad links on the Web degrade the detection performance of ranking algorithms (e.g. Anti-TrustRank), a distrust ranking algorithm, Topic Link Distrust Rank (TLDR), which combines topic similarity with link weight to adjust the propagation, was proposed. Firstly, the topic distribution of all pages was obtained by Latent Dirichlet Allocation (LDA), and the topic similarity of linked pages was computed. Secondly, link weights were computed according to the Web graph and combined with topic similarity to form the topic-link weight matrix. Then, the Anti-TrustRank and Weighted Anti-TrustRank (WATR) algorithms were improved by propagating distrust scores according to both topic and link weight. Finally, all pages were ranked by their distrust scores, and spam pages were detected by applying a threshold. The experimental results on the WEBSPAM-UK2007 dataset show that, compared with Anti-TrustRank and WATR, the SpamFactor of TLDR is raised by 45% and 23.7%, the F1-measure (with threshold 600) is improved by 3.4 and 0.5 percentage points, and the spam ratio (in the top 3 buckets) is increased by 15 and 10 percentage points, respectively.
    Multidimensional collaborative intelligence recommendation based on social media context
    LU Zhigang, SUN Yadan
    2016, 36(3):  740-745.  DOI: 10.11772/j.issn.1001-9081.2016.03.740
    To address the cold-start and data-scarcity problems of traditional collaborative intelligence recommendation technology, and to improve the efficiency and accuracy of the recommendation algorithm, multidimensional collaborative intelligence recommendation based on social media context was proposed. In this model, the feature attributes and behavioral characteristics of target users are incorporated into the social media context information, users' interests in different social media contexts are dynamically captured in real time, and OnLine Analytical Processing (OLAP) technology is used to process the multidimensional data. The social relationships between users and the political and economic environment are regarded as important indicators; the similarity between users is then calculated using the Pearson coefficient and a cloud model to produce personalized, customized recommendation results. The experimental results show that the mean absolute error of the model is significantly smaller than those of traditional collaborative intelligent recommendation and a simple recommendation technique based on the cloud model.
    Improved CBR-BDI reasoning mechanism for sorting operation mechanical arm
    ZHOU Haotian, MIN Huasong
    2016, 36(3):  746-750.  DOI: 10.11772/j.issn.1001-9081.2016.03.746
    Focusing on the issue that a sorting-operation mechanical arm using the Case-Based Reasoning (CBR) mechanism cannot handle complex scenarios with a large amount of object information, an improved Case-Based Reasoning-Belief, Desire, Intention (CBR-BDI) reasoning mechanism was proposed. Firstly, the input information was regarded as beliefs, and the case properties obtained through sentence segmentation and retrieval were regarded as desires. Secondly, map matching, desire analysis and guidance were added to complete the desires. Finally, the completed desires generated a solution, which was regarded as the intention. In scenarios with multiple objects and rich information, users can command the system to perform sorting operations through dialogue. The experimental results show that, compared with the traditional CBR mechanism, the improved CBR-BDI reasoning mechanism possesses analysis and guidance abilities and can be used in scenarios with multiple objects and rich information.
    Blocked person relation recognition system based on multiple features
    ZHANG Zhihua, WANG Jianxiang, TIAN Junfeng, WU Guoshun, LAN Man
    2016, 36(3):  751-757.  DOI: 10.11772/j.issn.1001-9081.2016.03.751
    With the rapid development of the Internet, a huge amount of textual information is accessible on the Web, and reliable person-person relation extraction from Web pages has become an important research topic in the field of information extraction. To address this problem, a blocked person relation recognition system was implemented, adopting abundant features such as bag-of-words, relevant frequency, Dependency Tree (DT) and Named Entity Recognition (NER) features. A series of experiments was conducted to select the optimal feature set and classification algorithm for each relation type to improve performance. The system was applied to two tasks of the 2015 China Conference on Machine Learning (CCML) Competition: recognizing person relations from a single Chinese news title (Task 1) and from a set of news titles (Task 2). For these two tasks, the system achieved MacroF1 scores of 75.68% and 76.58% respectively, ranking first on both.
    Automatic construction of software engineering linked data
    ZHANG Yuchen, SHEN Beijun
    2016, 36(3):  758-764.  DOI: 10.11772/j.issn.1001-9081.2016.03.758
    Information awareness and knowledge discovery have become key issues in distributed, heterogeneous and massive software development. In this situation, the semantic Web was introduced into software engineering to build fine-grained semantic links between multi-source heterogeneous data, and a novel approach was proposed to build ontology, extract and recover links, and automatically construct ontology-based software engineering linked data. The approach extracts and merges ontology concepts, resolves entities and their properties, and builds complete linked data without redundancy from structured datasets in the software repository. It also recovers missing linked data from the software repository using Natural Language Processing (NLP) and Information Retrieval (IR) techniques with three features: synonyms, verb-object phrases and structural information. The experimental results show that the proposed approach can automatically construct and merge software engineering ontology from distributed software engineering datasets, recover missing linked data and enlarge the ontology effectively. Compared with Baseline, Phrasing and O-CSTI, this approach performs much better in recall, precision and F-measure.
    Configuration tool design based on control-oriented multi-core real-time operating system
    JIANG Jianchun, CHEN Huiling, DENG Lu, ZHAO Jianpeng
    2016, 36(3):  765-769.  DOI: 10.11772/j.issn.1001-9081.2016.03.765
    Compared with single-core operating systems, multi-core real-time operating systems are more functional and more complicated. Aiming at the difficulty of configuring, tailoring and porting a multi-core operating system, a new configuration tool for multi-core real-time operating system applications was proposed, which improves application development efficiency and reduces the error rate. First, based on CMOS (Control-oriented Multi-core Operating System), a multi-core real-time operating system independently developed by Chongqing University of Posts and Telecommunications, the configuration tool was designed hierarchically: according to the demands of CMOS, a visualized configuration tool was designed to provide an interface generation engine and automatic code generation. Then, to ensure the correctness of the configuration logic, configuration correlation detection was proposed. The simulation results show that the CMOS configuration tool is suitable for the CMOS operating system because of its short code-generation time and low error rate. Compared with manual troubleshooting by developers, correlation detection accelerates troubleshooting by quickly locating erroneous code and ensures the correctness of the generated configuration file. Thus the configuration tool can promote the application of the CMOS multi-core operating system.
    Video semantic detection based on topographic independent component analysis and Gaussian mixture model
    KONG Weiting, ZHAN Yongzhao
    2016, 36(3):  770-773.  DOI: 10.11772/j.issn.1001-9081.2016.03.770
    To reduce the quantization error of Bag of Words (BoW) vector quantization in video semantic detection and to extract features automatically and effectively, a new video semantic detection method based on Topographic Independent Component Analysis (TICA) and the Gaussian Mixture Model (GMM) was proposed. Firstly, features of each video clip were extracted with the TICA algorithm, which learns complex invariant features from video clips. Secondly, the feature distribution of each video clip was described by a GMM. Finally, a GMM supervector was created from the GMM parameters, and the supervector of each shot was used as the input of a Support Vector Machine (SVM) for video semantic detection. A GMM can be regarded as an extension of BoW to a probabilistic framework and thus has less quantization error, better retaining the information of the original feature vectors. Experiments were conducted on the TRECVID 2012 and OV datasets. The experimental results show that, compared with the BoW and SIFT (Scale Invariant Feature Transform)-GMM algorithms, the proposed method improves the mean average precision on both datasets for video semantic detection.
    Three-dimensional SLAM using Kinect and visual dictionary
    LONG Chao, HAN Bo, ZHANG Yu
    2016, 36(3):  774-778.  DOI: 10.11772/j.issn.1001-9081.2016.03.774
    Since traditional filtering methods for the Simultaneous Localization And Mapping (SLAM) problem accumulate errors, a three-dimensional SLAM algorithm based on the Bag-Of-Words (BOW) model, which effectively solves the error-accumulation problem, was proposed. Compared with common algorithms such as random selection and k-Dimensional Tree (Kd-Tree), a tree-structured visual bag-of-words loop detection algorithm was designed that greatly increases the speed of similar-scene detection. Firstly, a GPU-based feature extraction algorithm was adopted, and robust inliers were obtained through cross matching and the k-Nearest Neighbors (kNN) algorithm. Secondly, the Random Sample Consensus Singular Value Decomposition (RANSAC SVD) algorithm was used to calculate the initial transformation between two frames, and a Generalized Iterative Closest Point (G-ICP) algorithm was used to refine it into a precise transformation. At last, the incremental Smoothing And Mapping (iSAM) graph optimization algorithm was used to calculate the camera poses, and the point cloud map and trajectory were created. Test results on a standard dataset show that the algorithm achieves good robustness and precision in complex environments.
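    The SVD-based initial alignment between two frames can be sketched with the standard Kabsch procedure on matched inlier points (shown here without the surrounding RANSAC loop):

    ```python
    import numpy as np

    def rigid_transform_svd(P, Q):
        """Least-squares rigid transform (R, t) mapping point set P onto Q
        via SVD — the initial alignment step before G-ICP refinement.
        P, Q: (N, 3) arrays of matched inlier points."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        # Guard against reflections so R is a proper rotation.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t
    ```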
    New method for image segmentation based on parametric active contour model
    HU Xuegang, LIU Jie
    2016, 36(3):  779-782.  DOI: 10.11772/j.issn.1001-9081.2016.03.779
    Aiming at the defects that existing methods based on Parametric Active Contour Models (PACM) cannot accurately locate corners and that discontinuous edges are easily affected by surrounding irrelevant information, a new method for image segmentation based on PACM was proposed. In this method, an edge-preserving term was first constructed and introduced into the active contour model for image segmentation, while the tangential component of the Laplace diffusion term was retained; two weight parameters were then introduced to control the tangential and normal directions, improving segmentation accuracy and efficiency. Experimental results show that the proposed model can detect weak edges and accurately locate corners, converges into deep concave boundaries, and reduces the impact of irrelevant information on discontinuous edges. Furthermore, it overcomes edge leakage and protects image details well. Both the efficiency and accuracy of segmentation are significantly improved in contrast with edge-preserving gradient vector flow models, normalized gradient vector flow models and their improved variants.
    No-reference stereoscopic image quality assessment model based on natural scene statistics
    MA Yun, WANG Xiaodong, ZHANG Lianjun
    2016, 36(3):  783-788.  DOI: 10.11772/j.issn.1001-9081.2016.03.783
    Focusing on the issue that most existing evaluation methods transform images into a different coordinate domain, a spatial Natural Scene Statistics (NSS) based no-reference stereoscopic image quality assessment method was proposed. To better match human binocular visual characteristics, the left and right images were fused to construct a cyclopean map. Firstly, natural scene statistical characteristics were extracted in the spatial domain from the cyclopean map via the statistical distribution of the Cyclopean Mean Subtracted Contrast Normalized (CMSCN) coefficients. Secondly, the same characteristics were extracted from the disparity map, obtained by an optical flow model, via the statistical distribution of the Disparity Mean Subtracted Contrast Normalized (DMSCN) coefficients. Finally, Support Vector Regression (SVR) was used to predict the objective scores of stereoscopic images by establishing the relationship between the stereoscopic image features and the Difference Mean Opinion Score (DMOS). The experimental results show that, compared with other methods, the Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank-Order Correlation Coefficient (SROCC) indicators reach 0.94 on a symmetric stereoscopic image database, while the PLCC reaches 0.91 and the SROCC 0.93 on an asymmetric database, indicating that the proposed method achieves high consistency with subjective assessment of stereoscopic images.
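    The MSCN coefficients underlying both the CMSCN and DMSCN features follow the standard local normalization used in spatial NSS models; a brief sketch (the Gaussian window width and the stabilizing constant are conventional choices, not necessarily the paper's):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mscn_coefficients(image, sigma=7/6, eps=1.0):
        """Mean Subtracted Contrast Normalized coefficients:
        (I - mu) / (sigma_local + eps), using a Gaussian-weighted
        local mean and local standard deviation."""
        img = image.astype(np.float64)
        mu = gaussian_filter(img, sigma)
        var = gaussian_filter(img * img, sigma) - mu * mu
        std = np.sqrt(np.maximum(var, 0.0))
        return (img - mu) / (std + eps)
    ```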
    No-reference image quality assessment based on scale invariance
    TIAN Jinsha, HAN Yongguo, WU Yadong, ZHAO Xiaole, ZHANG Hongying
    2016, 36(3):  789-794.  DOI: 10.11772/j.issn.1001-9081.2016.03.789
    Most existing general no-reference image quality assessment methods use machine learning to train regression models on images with associated human subjective scores and then predict the perceptual quality of a test image. However, such opinion-aware methods spend much time on training, depend on the distortion types in the training database, and have weak generalization capability, thereby limiting their usability in practice. To remove the database dependence, a no-reference image quality assessment method based on normalized scale invariance was proposed. In the proposed method, Natural Scene Statistics (NSS) features and edge characteristics are combined as the effective features for image quality assessment; no information beyond the test image is required, and the two feature vectors are used to compute the global difference across scales as the image quality score. The experimental results show that the proposed method evaluates multiply-distorted images well with low computational complexity. Compared with state-of-the-art no-reference image quality assessment models, the proposed method has better overall performance and is suitable for practical applications.
    Saliency detection using contrast and spatial location-relation
    LIU Zhiyuan, LI Huafeng
    2016, 36(3):  795-799.  DOI: 10.11772/j.issn.1001-9081.2016.03.795
    Abstract   PDF (839KB)
    References | Related Articles | Metrics
    Concerning that the existing methods cannot well detect the salient object boundary and the entire salient region, a new method based on superpixel segmentation was proposed. Firstly, bilateral filtering was applied to the original image to reduce local color differences and make the image smoother and more homogeneous while retaining the edge information of the salient object; an initial detection of the salient object's edges was implemented by calculating pixel differences within a local window. Then superpixel segmentation was applied to the filtered image so that pixels with the same or similar colors were grouped into the same superpixel block, and on this basis the local contrast, global contrast and spatial distribution of each superpixel block were considered jointly to calculate its saliency value. Finally, the results of the two parts were fused and optimized by guided filtering. Experiments were conducted on the public MSRA-1000 dataset against seven other methods: the average precision, average recall and F-measure of the proposed method are 81.57%, 77.13% and 80.50% respectively. The experimental results show that the proposed method can extract salient objects from images effectively and robustly.
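    The following minimal sketch illustrates the global-contrast part of such a superpixel saliency pipeline, using SLIC superpixels and Lab color distances; the segment count and compactness are illustrative assumptions, and the local-contrast, spatial-distribution and guided-filtering stages are omitted.

        import numpy as np
        from skimage.color import rgb2lab
        from skimage.segmentation import slic

        def superpixel_global_contrast(rgb):
            """Saliency of each superpixel from global color contrast in Lab space."""
            labels = slic(rgb, n_segments=200, compactness=10)  # superpixel blocks
            lab = rgb2lab(rgb)
            ids = np.unique(labels)
            means = np.array([lab[labels == i].mean(axis=0) for i in ids])
            dist = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
            saliency = dist.sum(axis=1)            # far from all others = salient
            sal_map = saliency[np.searchsorted(ids, labels)]
            return (sal_map - sal_map.min()) / (np.ptp(sal_map) + 1e-12)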
    Single image super-resolution algorithm based on unified iterative least squares regulation
    ZHAO Xiaole, WU Yadong, TIAN Jinsha, ZHANG Hongying
    2016, 36(3):  800-805.  DOI: 10.11772/j.issn.1001-9081.2016.03.800
    Abstract   PDF (984KB)
    References | Related Articles | Metrics
    Machine-learning-based image Super-Resolution (SR) has proved to be a promising single-image SR technology, in which sparse representation and dictionary learning have become a research hotspot. Aiming at time-consuming dictionary training and low-accuracy SR recovery, an SR algorithm was proposed from the perspective of reducing the inconsistency between the Low-Resolution (LR) and High-Resolution (HR) feature spaces as far as possible. The Iterative Least Squares Dictionary Learning Algorithm (ILS-DLA) was adopted to train the LR/HR dictionaries, and Anchored Neighborhood Regression (ANR) was used to recover HR images. Thanks to its integral optimization procedure, ILS-DLA can train the LR/HR dictionaries in relatively short time, and by adopting the same optimization strategy as ANR it effectively reduces the divergence between the LR and HR dictionaries in theory. Extensive experiments show that the proposed method achieves better dictionary learning than the K-means Singular Value Decomposition (K-SVD) and Beta Process Joint Dictionary Learning (BPJDL) algorithms, and provides better image restoration results than other state-of-the-art SR algorithms.
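    A minimal sketch of the ANR recovery step is given below, assuming trained LR/HR dictionaries D_l and D_h with L2-normalized columns; the neighborhood size k and the ridge parameter lam are illustrative assumptions.

        import numpy as np

        def anr_projections(D_l, D_h, k=40, lam=0.1):
            """Precompute one ridge-regression projection per anchor atom."""
            P = []
            for j in range(D_l.shape[1]):
                # k most correlated dictionary atoms form the neighborhood
                nn = np.argsort(-np.abs(D_l.T @ D_l[:, j]))[:k]
                N_l, N_h = D_l[:, nn], D_h[:, nn]
                G = N_l.T @ N_l + lam * np.eye(k)
                P.append(N_h @ np.linalg.solve(G, N_l.T))  # x_hr = P_j @ y_lr
            return P

        def anr_recover(y, D_l, P):
            j = np.argmax(np.abs(D_l.T @ y))    # anchor = most correlated atom
            return P[j] @ y

    Because the projections are precomputed offline, recovering each LR patch feature at test time costs a single nearest-anchor search plus one matrix-vector product.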
    Fast image dehazing algorithm based on relative transmittance estimation
    YANG Yan, WANG Fan, BAI Haiping
    2016, 36(3):  806-810.  DOI: 10.11772/j.issn.1001-9081.2016.03.806
    Abstract   PDF (904KB)
    References | Related Articles | Metrics
    Since the dark channel prior algorithm produces dim restoration results and takes too long to process, a fast single-image dehazing algorithm based on relative transmittance estimation was proposed. Based on an analysis of the relationship between scene depth under haze and the minimum channel image of the RGB color channels, a preliminary transmittance was estimated from the relative amount of scene depth, and then refined with an improved mean filter. Finally, the clear image was recovered by the atmospheric scattering model, and the brightness was enhanced to improve the visual effect. The proposed transmittance estimation is simple and effective; the restored images are clear and natural, with high detail visibility and good scene layering. The experimental results show that the proposed algorithm greatly improves both dehazing quality and computational time, which makes it well suited to real-time applications.
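    The following sketch shows the recovery step with the atmospheric scattering model J = (I - A)/t + A, using a transmittance estimated from the normalized minimum channel and smoothed by a plain mean filter; the paper's improved mean filter and brightness enhancement are not reproduced here.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def dehaze(I, A, omega=0.95, t0=0.1, win=15):
            """Recover a haze-free image with the atmospheric scattering model.
            I: hazy RGB image in [0,1]; A: atmospheric light (3-vector)."""
            min_channel = (I / A).min(axis=2)    # normalized minimum channel
            t = 1.0 - omega * uniform_filter(min_channel, win)  # smoothed estimate
            t = np.clip(t, t0, 1.0)              # lower bound avoids noise blow-up
            J = (I - A) / t[..., None] + A       # J = (I - A)/t + A
            return np.clip(J, 0.0, 1.0)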
    Fast reconstruction algorithm for photoacoustic computed tomography in vivo
    JIANG Zibo, ZHAO Jingxiu, ZHANG Yuanke, MENG Jing
    2016, 36(3):  811-814.  DOI: 10.11772/j.issn.1001-9081.2016.03.811
    Abstract   PDF (602KB)
    References | Related Articles | Metrics
    Focusing on the issue that the data acquisition amount of Photoacoustic Computed Tomography (PACT) based on an ultrasonic array is generally huge and the imaging process is time-consuming, a fast PACT method based on Principal Component Analysis (PCA) was proposed to extend PACT to the field of hemodynamics. First, a matrix of image samples was constructed from part of the fully-sampled data. Second, a projection matrix representing the signal features was derived by decomposing the sample matrix. Finally, high-quality three-dimensional photoacoustic images were recovered with this projection matrix under three-fold under-sampling. The experimental results on in vivo dorsal vascular imaging of a rat show that, compared with the traditional back-projection method, the data acquisition amount of the PCA-based PACT is decreased by about 35% and the three-dimensional reconstruction speed is improved by about 40%, so that fast data acquisition and high-accuracy image reconstruction are both achieved.
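    One common way to use a PCA basis for under-sampled recovery is sketched below: principal components are learned from fully-sampled data, and the coefficients of a new measurement are fitted by least squares on the sampled entries only. This is a hedged illustration; the paper's exact projection scheme may differ.

        import numpy as np

        def pca_basis(samples, k):
            """Principal components of a sample matrix (one sample per column)."""
            mean = samples.mean(axis=1, keepdims=True)
            U, _, _ = np.linalg.svd(samples - mean, full_matrices=False)
            return U[:, :k], mean

        def recover(y, idx, U, mean):
            """Estimate a full signal from measured entries y at indices idx."""
            c, *_ = np.linalg.lstsq(U[idx], y - mean[idx, 0], rcond=None)
            return U @ c + mean[:, 0]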
    Medical image retrieval with diffusion on tensor product graph and similarity of textons
    HUANG Bijuan, TANG Qiling, LIU Haihua, TANG Wenfeng
    2016, 36(3):  815-819.  DOI: 10.11772/j.issn.1001-9081.2016.03.815
    Abstract   PDF (865KB)
    References | Related Articles | Metrics
    Concerning the difficulty of similarity expression and the effect of noise in medical image retrieval, a diffusion approach on a tensor product graph was proposed to improve the texton-based pairwise similarity metric with context information from the other database objects. Firstly, medical image features were described and extracted by a texton-based statistical method, and the pairwise similarities were obtained with weights determined by the similarities between textons. Then a global similarity metric was achieved by using the tensor product graph to propagate similarity information along the intrinsic structure of the data manifold. Experimental results on the ImageCLEFmed 2009 database show that the proposed algorithm improves the average classification accuracy by 32% and 19% compared with the Gabor-based retrieval algorithm and the Scale-Invariant Feature Transform (SIFT)-based retrieval algorithm respectively, so it can be applied to medical image retrieval.
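    A hedged sketch of the diffusion step follows, after the tensor-product-graph iteration Q <- S Q S^T + I of Yang and Latecki; the kNN sparsification and the spectral-norm scaling used here to guarantee convergence are implementation assumptions.

        import numpy as np

        def tpg_diffusion(W, n_iter=200, k=10):
            """Similarity diffusion on the tensor product graph.
            W: symmetric float matrix of pairwise similarities."""
            # keep only each object's k strongest neighbours (noise-robust)
            S = np.zeros_like(W)
            nn = np.argsort(-W, axis=1)[:, :k]
            rows = np.arange(W.shape[0])[:, None]
            S[rows, nn] = W[rows, nn]
            S = S / (np.linalg.norm(S, 2) + 1e-12) * 0.9  # spectral radius < 1
            Q = np.eye(W.shape[0])
            for _ in range(n_iter):
                Q = S @ Q @ S.T + np.eye(W.shape[0])      # TPG diffusion iteration
            return Q                                      # learned global similarity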
    Craniofacial reconstruction method based on partial least squares regression model of local craniofacial morphological correlation
    HE Yiyue, MA Ziping, GAO Ni, GENG Guohua
    2016, 36(3):  820-826.  DOI: 10.11772/j.issn.1001-9081.2016.03.820
    Abstract   PDF (1192KB)
    References | Related Articles | Metrics
    Focusing on the issue that existing joint statistical craniofacial reconstruction methods based on Principal Component Analysis (PCA) modeling do not fully consider the strongly localized influence of the skull on facial surface shape, which weakens the descriptive ability of the craniofacial morphological correlation model, a craniofacial reconstruction method based on a Partial Least Squares Regression (PLSR) model of local craniofacial morphological correlation was proposed. Firstly, the defects of the joint statistical shape model that treats the skull and face as a PCA-modeled whole, and the advantages of a local morphological correlation model based on PLSR, were analyzed in depth. Secondly, PLSR was introduced into the modeling of craniofacial morphological correlation: based on 3D craniofacial surface models with established physiologically consistent correspondence, classified according to forensic anthropology knowledge, a PLSR coordinate-prediction model was constructed for each vertex of the facial surface, taking the closely related vertex set on the skull as its independent variables. Thirdly, with the coordinates of an unknown skull surface model as input to these coordinate-prediction models, the coordinates of every vertex of the predicted face were obtained and the predicted face was reconstructed; the concrete procedure of the new reconstruction method was elaborated. Finally, several craniofacial reconstruction experiments applying the PLSR-based method were presented, and the method was comparatively analyzed and evaluated with indicators including reconstruction effectiveness and absolute error. The experimental results show that the new method significantly improves the accuracy of craniofacial reconstruction.
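    A minimal sketch of the per-vertex regression, assuming scikit-learn's PLSRegression as the PLSR solver; the choice of local skull vertex set and the number of latent components are assumptions for illustration.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def fit_vertex_model(X, Y, n_components=5):
            """X: (n_samples, 3 * n_local_skull_vertices) coordinates of the skull
            vertex set most correlated with one facial vertex; Y: (n_samples, 3)
            coordinates of that facial vertex across the training set."""
            model = PLSRegression(n_components=n_components)
            model.fit(X, Y)
            return model

        # predicted facial vertex for an unknown skull's local coordinates x:
        # y_hat = fit_vertex_model(X, Y).predict(x.reshape(1, -1))

    Fitting one such small regression per facial vertex is what localizes the skull-to-face correlation, in contrast to a single global PCA model of the joint shape.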
    Development of teeth segmentation from computed tomography images using level set method
    WANG Ge, WANG Yuanjun
    2016, 36(3):  827-832.  DOI: 10.11772/j.issn.1001-9081.2016.03.827
    Abstract   PDF (936KB)
    References | Related Articles | Metrics
    In oral surgery, teeth segmentation has important application value. However, due to the fuzziness of tooth boundaries, the adhesion of adjacent teeth and the flexible changes of topological structure in dental Computed Tomography (CT) images, accurate segmentation is very difficult. To provide a useful reference for researchers, this paper surveyed the research progress of level-set-based dental CT image segmentation: it summarized the traditional methods of dental CT image segmentation, briefly introduced level set theory, detailed the level set methods used for teeth segmentation in recent years, studied the energy terms in the level set function, and carried out contrast experiments. In level-set-based dental CT segmentation, the energy terms mainly include the competition energy, edge energy, shape prior energy, global intensity prior energy and local intensity energy. The experimental results show that the hybrid level set model performs best: its segmentation accuracies for incisors and molars are 88.92% and 92.34% respectively, more than 10% higher overall than the adaptive threshold method and the level set method without re-initialization. By making full use of image information and prior knowledge, optimizing and innovating the energy terms in the level set function is expected to further improve segmentation accuracy.
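    For reference, a minimal sketch of one explicit update of an edge-based level set (curvature plus balloon force, weighted by an edge indicator g) is shown below; the surveyed hybrid models add shape and intensity prior energies on top of such a scheme.

        import numpy as np

        def level_set_step(phi, g, dt=0.1, nu=1.0, eps=1e-8):
            """One explicit update of an edge-based level set function phi."""
            gy, gx = np.gradient(phi)
            mag = np.sqrt(gx ** 2 + gy ** 2) + eps
            nx, ny = gx / mag, gy / mag                     # unit normal field
            curvature = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
            return phi + dt * g * mag * (curvature + nu)    # curvature + balloon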
    Analyzing and indexing method on LaTeX formulae
    ZHOU Nan, TIAN Xuedong
    2016, 36(3):  833-836.  DOI: 10.11772/j.issn.1001-9081.2016.03.833
    Abstract   PDF (704KB)
    References | Related Articles | Metrics
    Focused on the problem that ordinary full-text search technology cannot realize mathematical expression retrieval because of the complex two-dimensional structure of formulae, a method for analyzing and indexing LaTeX formulae was proposed. On the basis of a full consideration of the characteristics of formulae and the structure of the LaTeX language, a parsing algorithm was designed to analyze LaTeX expressions and extract their retrieval features. On this foundation, a hierarchical index model was designed that uses the operand and operator information extracted from mathematical expressions by the parsing algorithm. The index model has two layers, a Treap data structure layer and an inverted index layer, which lay the foundation for formula retrieval and matching. The experiment was carried out in browser/server mode on a data set of 6234 formulae taken from mathematical textbooks. The parsing algorithm extracted 124960 expression nodes from the source formulae, with a maximum baseline level of 11, and the average time consumed by the index system was 33.8317 seconds. The experimental results show that the proposed parsing algorithm and index method help realize efficient and correct mathematical expression retrieval.
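    A hedged, much-simplified sketch of the indexing idea follows: a toy tokenizer splits LaTeX into control sequences (operators such as \frac and \sqrt) and operands, and an inverted index maps each token to the formulae containing it. The regular expression and the index layout are illustrative and omit the Treap layer.

        import re
        from collections import defaultdict

        TOKEN = re.compile(r'\\[A-Za-z]+|[A-Za-z0-9]|[+\-*/=^_()]')

        def build_inverted_index(formulae):
            index = defaultdict(set)      # token -> ids of formulae containing it
            for fid, tex in enumerate(formulae):
                for tok in TOKEN.findall(tex):
                    index[tok].add(fid)
            return index

        index = build_inverted_index([r'\frac{a}{b}+c', r'\sqrt{a^2+b^2}'])
        # candidate formulae for a query token:  index[r'\frac']  ->  {0}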
    Domain-specific modeling for solution oriented knowledge resource
    WANG Dechuan, ZHAN Hongfei, YU Junhe
    2016, 36(3):  837-842.  DOI: 10.11772/j.issn.1001-9081.2016.03.837
    Abstract   PDF (973KB)
    References | Related Articles | Metrics
    Focusing on the problem of irrational use of knowledge resources in solving enterprise business problems, a knowledge resource model oriented to business problem solving was proposed using a domain-specific modeling method. First, the basic elements in the business-problem-solving domain were described, and the relationships between these objects were analyzed. Second, the business problem-solving process was formalized with the Problem-Knowledge Event-driven Process Chain (PK-EPC), which was built into a domain template and associated with the corresponding knowledge units. Third, the business problem was resolved through the application model, and each knowledge unit was associated with its corresponding knowledge carriers. On this basis, a multi-level problem-solving solution model covering business activities, knowledge units and knowledge carriers was constructed, providing a fast and accurate method for solving business problems. Finally, a solution modeling system oriented to business problem solving was implemented in Java, demonstrating the feasibility of the knowledge resource model.
    Coupling model and its algorithm for coordinated scheduling of quay crane and truck under uncertain environment
    FAN Lubin, LIANG Chengji, SHE Wenjing
    2016, 36(3):  843-848.  DOI: 10.11772/j.issn.1001-9081.2016.03.843
    Abstract   PDF (977KB)
    References | Related Articles | Metrics
    The container terminal is a complex production system composed of many subsystems, and equipment scheduling within it is a complex problem involving many kinds of uncertainty. With emphasis on the probability distributions of equipment operation parameters, the coordinated scheduling problem between quay cranes and yard trucks was studied. A multidisciplinary design optimization method with coupled variables and time-window constraints was presented: a yard truck distribution sub-model and a yard truck configuration sub-model were built, the completion time and the number of trucks were taken as the public design variables connecting the two sub-models, and a coupling model for coordinated scheduling was established. The model was instantiated with data from a terminal of Shanghai port and solved by calling Gurobi 4.0 under Visual Studio 2012. Compared with the original schedule, the cost of total delay time of the final schedule decreased by 90.69% and the number of container trucks decreased by 30.76%, which shows that the model is effective and practical.
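    A hypothetical, heavily simplified sketch of the coupling idea in Gurobi's Python interface is given below: time-window constraints on task starts, with the completion time and the truck count acting as the public design variables; all data and the capacity constraint are invented for illustration and do not reproduce the paper's two sub-models.

        import gurobipy as gp
        from gurobipy import GRB

        tasks, a, b, dur, horizon = range(3), [0, 2, 4], [6, 8, 10], [3, 2, 2], 8

        m = gp.Model("qc_truck_coupling")
        start = m.addVars(tasks, lb=0.0, name="start")
        finish = m.addVar(lb=0.0, name="finish")                   # public variable 1
        trucks = m.addVar(vtype=GRB.INTEGER, lb=1, name="trucks")  # public variable 2
        for i in tasks:
            m.addConstr(start[i] >= a[i])              # time-window constraints
            m.addConstr(start[i] + dur[i] <= b[i])
            m.addConstr(finish >= start[i] + dur[i])
        m.addConstr(sum(dur) <= horizon * trucks)      # crude truck-capacity coupling
        m.setObjective(finish + 10 * trucks, GRB.MINIMIZE)
        m.optimize()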
    Orientation angle estimation based on improved 2-D ESPRIT-like algorithm
    CHEN Xi, YANG Tao, HE Hongsen
    2016, 36(3):  849-853.  DOI: 10.11772/j.issn.1001-9081.2016.03.849
    Abstract   PDF (820KB)
    References | Related Articles | Metrics
    To deal with the mismatch between elevation and azimuth angles when estimating the orientation angles of coherent signals with the 2-D Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT)-like algorithm designed for a two-dimensional cross-shaped MEMS (Micro-Electro-Mechanical System) ultrasonic phased array, an improved 2-D ESPRIT-like algorithm based on joint diagonalization of the received signal matrices was proposed. Firstly, the correlation matrices along the x and y axes were derived from the received signal matrices, and the corresponding Toeplitz matrices were reconstructed for decoherence according to the ESPRIT-like algorithm. Secondly, the Toeplitz matrices were equivalently decomposed to obtain the equivalent received signal matrices after decoherence. Finally, joint diagonalization of the equivalent received signal matrices was used to pair the elevation and azimuth angles and estimate the orientation angles correctly. The simulation results show that the improved algorithm estimates the orientation angles correctly, in contrast to the algorithm before improvement. Compared with the commonly used 2-D MUltiple SIgnal Classification (MUSIC) algorithm based on spatial smoothing, the response time of the proposed algorithm is decreased by 79%, the resolution of the elevation and azimuth angles is increased by about 20% and 40% respectively, and the angle error is about 10% of that of the MUSIC algorithm at an SNR of 30 dB.
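    A hedged sketch of the Toeplitz decoherence step along one array axis follows; the reference-sensor correlation estimate used here is one standard construction and may differ in detail from the paper's.

        import numpy as np
        from scipy.linalg import toeplitz

        def toeplitz_decoherence(X, ref=0):
            """Reconstruct a Hermitian Toeplitz correlation matrix so that coherent
            sources regain a full-rank signal subspace (ESPRIT-like step).
            X: (M sensors, N snapshots) received data along one array axis."""
            N = X.shape[1]
            r = (X @ X[ref].conj()) / N      # correlations with a reference sensor
            return toeplitz(r, r.conj())     # first column r, first row conj(r)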
    Rock classification of multi-feature fusion based on collaborative representation
    LIU Juexian, TENG Qizhi, WANG Zhengyong, HE Xiaohai
    2016, 36(3):  854-858.  DOI: 10.11772/j.issn.1001-9081.2016.03.854
    Abstract   PDF (754KB)
    References | Related Articles | Metrics
    To solve the problems of time-consuming processing and low recognition rates in the traditional component analysis of rock slices, a component analysis method for rock slices based on Collaborative Representation (CR) was proposed. Firstly, the texture features of grains in rock slices were discussed, and the combination of the Hierarchical Multi-scale Local Binary Pattern (HMLBP) and the Gray Level Co-occurrence Matrix (GLCM) was shown to characterize the grain texture well. Then, to reduce the time complexity of classification, the dimension of the combined features was reduced to 100 by Principal Component Analysis (PCA). Finally, Collaborative Representation based Classification (CRC) was used as the classifier. Unlike Sparse Representation based Classification (SRC), test samples are encoded collaboratively over all samples in the training dictionary rather than by single samples alone, so shared attributes of different samples can improve the recognition rate. The experimental results show that, compared with SRC, the recognition speed of the proposed method increases by 300% and its recognition rate increases by 2%. In practical applications, it can distinguish the quartz and feldspar components in rock slices well.
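    The CRC decision rule is compact enough to sketch directly (in the style of CRC with regularized least squares): the test sample is coded over all training samples at once with an l2 penalty, and the class with the smallest regularized reconstruction residual wins.

        import numpy as np

        def crc_classify(D, labels, y, lam=0.01):
            """Collaborative Representation based Classification.
            D: (d, n) dictionary of training samples (columns L2-normalized),
            labels: (n,) class of each column, y: (d,) test sample."""
            n = D.shape[1]
            alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)  # code
            best, best_r = None, np.inf
            for c in np.unique(labels):
                a_c = np.where(labels == c, alpha, 0.0)  # class-c part of the code
                r = np.linalg.norm(y - D @ a_c) / (np.linalg.norm(a_c) + 1e-12)
                if r < best_r:
                    best, best_r = c, r
            return best

    Because the ridge system has a closed-form solution shared by all test samples, the inverse can be precomputed once, which is where the speed advantage over SRC's iterative l1 coding comes from.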
    Fast algorithm for ship detection based on local window K-distribution
    ZHANG Hao, MENG Xiangwei, LI Desheng, LIU Lei
    2016, 36(3):  859-863.  DOI: 10.11772/j.issn.1001-9081.2016.03.859
    Abstract   PDF (899KB)
    References | Related Articles | Metrics
    Aiming at the low detection efficiency and heavy computational cost of the local-window K-distribution detection algorithm, a fast ship target detection algorithm based on the local-window K-distribution was proposed. Firstly, the original Synthetic Aperture Radar (SAR) image was pre-screened by an iterative segmentation algorithm, and the potential target pixels were removed from the original SAR image according to the pre-screening result. Then second-order and fourth-order integral images were computed over the background image, so that the second-order and fourth-order moments needed to estimate the K-distribution parameters could be obtained from the integral images for the sliding window at each pixel. Next, the detection threshold was determined by solving the probability density function, and the regions of interest were obtained according to the threshold. Finally, false-alarm targets were discriminated by a fuzzy-difference method. Detection experiments on real SAR images show that the running time of the proposed algorithm is reduced by 50% compared with the local-window K-distribution algorithm, and the quality factor is improved from 44.4% to 100%. The proposed algorithm not only guarantees real-time performance but also improves detection accuracy, and has application value in automatic SAR ship detection.
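    The integral-image trick that makes the local moments cheap is sketched below: one pass builds cumulative sums of I^2 and I^4, after which every sliding-window second- and fourth-order moment costs four lookups; border handling is simplified to the valid interior region.

        import numpy as np

        def local_moments(img, win):
            """Local 2nd- and 4th-order moments via integral images (O(1)/pixel)."""
            def integral(a):
                return np.pad(a, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
            def box_sum(ii, w):
                return (ii[w:, w:] - ii[:-w, w:] - ii[w:, :-w] + ii[:-w, :-w])
            n = float(win * win)
            x = img.astype(np.float64)
            m2 = box_sum(integral(x ** 2), win) / n
            m4 = box_sum(integral(x ** 4), win) / n
            return m2, m4   # each output shrinks by win-1 along both axes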
    Parallel discovery of fake plates based on historical automatic number plate recognition data
    LI Yue, LIU Chen
    2016, 36(3):  864-870.  DOI: 10.11772/j.issn.1001-9081.2016.03.864
    Abstract   PDF (1135KB)
    References | Related Articles | Metrics
    The existing detection approaches for fake-plate vehicles have high cost and low efficiency. A new parallel detection approach, called TP-Finder, was proposed based on historical Automatic Number Plate Recognition (ANPR) data. To effectively handle the data skew that emerges in the parallel processing of large-scale datasets, a new data partition strategy based on the idea of integer partition was implemented, which obviously improved the performance of fake-plate vehicle detection. In addition, a prototype system for recognizing fake-plate vehicles was developed on top of the TP-Finder approach, which can exactly present the historical trajectories of all suspicious fake-plate vehicles. Finally, the performance of the TP-Finder approach was verified on a real ANPR dataset of a city. The experimental results show that the partition strategy of TP-Finder achieves up to 20% performance improvement over the default MapReduce partition strategy.
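    The paper's strategy builds on integer partition; as a stand-in illustration of the same load-balancing goal, the sketch below assigns the heaviest keys first, each to the currently lightest reducer (a longest-processing-time greedy), instead of MapReduce's default hash partitioning.

        from heapq import heappush, heappop

        def balanced_partition(key_loads, n_reducers):
            """key_loads: dict key -> estimated record count; returns key -> reducer."""
            heap = [(0, r) for r in range(n_reducers)]  # (assigned load, reducer id)
            assignment = {}
            for key, load in sorted(key_loads.items(), key=lambda kv: -kv[1]):
                total, r = heappop(heap)                # lightest reducer so far
                assignment[key] = r
                heappush(heap, (total + load, r))
            return assignment

        # balanced_partition({'A': 90, 'B': 60, 'C': 50, 'D': 10}, 2)
        # -> {'A': 0, 'B': 1, 'C': 1, 'D': 0}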
    Multiple-unmanned aerial vehicle environmental monitoring task schedule considering 3G/4G network feature
    OUYANG Qiuping, LI Jie, SHEN Lincheng
    2016, 36(3):  871-877.  DOI: 10.11772/j.issn.1001-9081.2016.03.871
    Abstract   PDF (1081KB)
    References | Related Articles | Metrics
    Focused on the limitations of monitoring distance, the restriction of online transmission, the large amount of information, and the fact that a high-power data link cannot be carried on board a small environment-monitoring Unmanned Aerial Vehicle (UAV), a multiple-UAV environment-monitoring task scheduling method considering 3G/4G network features was proposed. First, the time characteristics of the 3G/4G network were combined with multiple-UAV environment-monitoring task scheduling, and the issue was modeled as a Team Orienteering Problem with Time Windows (TOPTW). Secondly, since the problem involves huge computation and easily falls into local optima, an Iterated Local Search (ILS) algorithm was proposed to obtain the optimized solution. Thirdly, experiments on a large number of test data sets verified the feasibility and computing performance, and ILS was compared with the Ant Colony Algorithm (ACA) in terms of average profit and computing time. Finally, the algorithm was applied to a typical two-UAV environment-monitoring task scheduling scenario under a 3G/4G network. The results show that the profits obtained by ILS were slightly worse than those of ACA on most test sets, with an average gap of 1.09% over all test data sets and a maximum of 10.8%, while some results were better than those of ACA; meanwhile, the computing time of ILS was reduced to nearly a thousandth of that of ACA. The experimental results show that the ILS algorithm can solve the multiple-UAV environment-monitoring task scheduling issue quickly and effectively reduce the computing time while keeping the profit loss within an acceptable range.
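    A generic ILS skeleton is sketched below; the problem-specific parts (initial tour construction, time-window-feasible insertion moves, perturbation strength) are left abstract as function parameters rather than reproduced from the paper.

        def iterated_local_search(init, local_search, perturb, profit, n_iter=1000):
            """Generic ILS: repeated perturbation + local search, keep the best.
            For TOPTW, local_search typically inserts/removes/swaps visits while
            respecting time windows, and profit sums the scores of visited nodes."""
            best = local_search(init)
            for _ in range(n_iter):
                candidate = local_search(perturb(best))
                if profit(candidate) > profit(best):   # maximize collected profit
                    best = candidate
            return best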
    Complete electromyography decomposition algorithm based on fuzzy k-means clustering technique
    REN Xiaomei, YANG Gang
    2016, 36(3):  878-882.  DOI: 10.11772/j.issn.1001-9081.2016.03.878
    Abstract   PDF (767KB)
    References | Related Articles | Metrics
    ElectroMyoGraphy (EMG) signal decomposition is the inverse process of EMG signal generation. Complete EMG decomposition, including the resolution of superimposed waveforms, was carried out to obtain the template waveform and firing pattern of each Motor Unit (MU). Firstly, noise was removed from the original EMG signals by wavelet filtering with wavelet threshold estimation; then all Motor Unit Action Potential (MUAP) waveforms were detected by amplitude-slope double-threshold filtering, and the detected MUAPs were classified into their constituent Motor Unit Action Potential Trains (MUAPTs) by fuzzy K-means clustering and a minimum-distance classifier. Finally, the superimposed waveforms were resolved using the pseudo-correlation and peeling-off techniques. The decomposition system was evaluated on both synthetic and real EMG signals, and its average decomposition accuracy was above 87%.
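    A minimal sketch of the fuzzy K-means updates used for MUAP clustering follows (standard fuzzy c-means with fuzzifier m); feature extraction and the minimum-distance classifier stage are outside this sketch.

        import numpy as np

        def fuzzy_kmeans(X, k, m=2.0, n_iter=100, seed=0):
            """Fuzzy K-means: soft memberships U (n, k) and cluster centers C (k, d).
            X: (n, d) matrix of MUAP feature vectors."""
            rng = np.random.default_rng(seed)
            U = rng.random((X.shape[0], k))
            U /= U.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                W = U ** m                                # fuzzified memberships
                C = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted centers
                d = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-12
                inv = d ** (-2.0 / (m - 1))
                U = inv / inv.sum(axis=1, keepdims=True)  # membership update
            return U, C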