
Table of Contents

    10 May 2017, Volume 37 Issue 5
    Review on HDD-SSD hybrid storage
    CHEN Zhen, LIU Wenjie, ZHANG Xiao, BO Hailong
    2017, 37(5):  1217-1222.  DOI: 10.11772/j.issn.1001-9081.2017.05.1217
    The explosion of data in the big data environment brings great challenges to the architecture and capacity of storage systems. Nowadays, storage systems are developing toward large capacity, low cost and high performance. Meanwhile, storage devices such as the conventional rotating magnetic Hard Disk Drive (HDD), the Solid State Drive (SSD) and Non-Volatile Random Access Memory (NVRAM) are limited by their intrinsic characteristics, so that no single kind of storage device can meet all the requirements above. Hybrid storage, which utilizes different storage media, is a good solution to this problem. SSD, a storage medium with high reliability, low energy consumption and high performance, is more and more widely applied in hybrid storage systems. By combining magnetic disks with solid-state drives, the high performance of SSD and the low-cost, high-capacity features of HDD can both be exploited: hybrid storage provides users with a large storage space, guarantees high system performance, and reduces cost at the same time. The current research status of HDD-SSD hybrid storage systems was described, and different HDD-SSD hybrid storage systems were summarized and classified. For two different HDD-SSD hybrid storage architectures, the key technologies and their deficiencies were discussed. Finally, the future trend and research focus of hybrid storage were discussed.
    Design of DDR3 protocol parsing logic based on FPGA
    TAN Haiqing, CHEN Zhengguo, CHEN Wei, XIAO Nong
    2017, 37(5):  1223-1228.  DOI: 10.11772/j.issn.1001-9081.2017.05.1223
    Since the new generation of flash-based SSD (Solid-State Drive) uses DDR3 as its host interface, the SSD must communicate correctly with the memory controller. An FPGA (Field-Programmable Gate Array) was used to design the DDR3 protocol parsing logic. Firstly, the working principle of DDR3 was introduced to explain the control mechanism of the memory controller. Next, the architecture of the interface parsing logic was designed, and the key technical points, including clocking, write leveling, delay control and interface synchronization control, were implemented on the FPGA. Finally, the validity and feasibility of the proposed design were proved by ModelSim simulation results and board-level validation. In terms of performance, tests with single data, continuous data and mixed read/write data show that the bandwidth utilization of the DDR3 interface reaches up to 77.81%. The test results show that the proposed DDR3 parsing logic can improve the access performance of the storage system.
    Research on performance evaluation method of public cloud storage system
    LI Ani, ZHANG Xiao, ZHANG Boyang, LIU Chunyi, ZHAO Xiaonan
    2017, 37(5):  1229-1235.  DOI: 10.11772/j.issn.1001-9081.2017.05.1229
    With the rapid development and wide application of cloud storage systems, many enterprise developers and individual users migrate their applications from traditional storage to public cloud storage systems. Therefore, the performance of cloud storage systems has become the focus of enterprise developers and individual users. Traditional testing can hardly simulate simultaneous access to a cloud storage system by enough users; it is complex to set up, takes a long time and costs much. Besides, the evaluation results are unstable due to the network and other external factors. In view of these critical problems, a "cloud testing cloud" performance evaluation method was put forward for public cloud storage systems, in which the public cloud storage system was evaluated by applying for a sufficient number of instances on a cloud computing platform. Firstly, a general performance evaluation framework was built with abilities such as dynamic instance application, automated deployment of assessment tools and load, control of concurrent access to the cloud storage system, automated instance release, and evaluation result collection and feedback. Secondly, multi-dimensional performance evaluation indicators were presented, covering different typical applications and different cloud storage interfaces. Finally, an extensible general performance evaluation model was put forward, which could evaluate the performance of typical applications, analyze the factors influencing cloud storage performance, and be applied to any public cloud storage platform. In order to verify the feasibility, rationality, universality and extensibility of this method, the presented methods were applied to evaluate the Amazon S3 cloud storage system, and the accuracy of the evaluation results was verified by s3cmd. The results show that the evaluation output can provide useful references for enterprise developers and individual users.
    Service capacity testing method of private cloud platform
    LIU Chunyi, ZHANG Xiao, LI Ani, CHEN Zhen
    2017, 37(5):  1236-1240.  DOI: 10.11772/j.issn.1001-9081.2017.05.1236
    Concerning the problem that the lack of testing methods leads to a mismatch between supply and demand of private clouds, an adaptive and scalable testing method for private cloud systems was proposed, which can test the computing capacity of a private cloud at the IaaS (Infrastructure as a Service) layer. The number of virtual machines was increased dynamically through the private cloud application programming interface; the hardware configuration and operating system of the virtual machines were selected by a performance-characteristic model, and different load models were used according to different user needs to form the simulation environment. Finally, the cloud computing Service Level Agreement (SLA) was used as the test standard to measure the service capacity of the private cloud. The proposed method was implemented on OpenStack. The experimental results show that the service capacity of a private cloud platform can be obtained by the proposed method at lower cost and with higher efficiency than user testing. Compared with the OpenStack component Rally, the scalability and dynamic load simulation of the proposed method are greatly improved.
    Implementation of directory index for Pmfs
    YANG Shun, CHEN Zhiguang, XIAO Nong
    2017, 37(5):  1241-1245.  DOI: 10.11772/j.issn.1001-9081.2017.05.1241
    Emerging non-volatile, byte-addressable memories like phase-change memory can make data persistent at the main memory level instead of in storage. Since the read/write latency of Non-Volatile Memory (NVM) is very low, software overhead has become the main factor determining the performance of an entire persistent memory system. Pmfs is a file system specifically designed for NVM. However, it still has an undesirable characteristic: each directory operation (create, open or delete) in Pmfs requires a linear search of the entire directory file, so the cost increases linearly with the number of files in the directory. The performance of Pmfs under various workloads was evaluated, and the tests showed that the overhead of directory operations became the bottleneck of the whole system under particular workloads. To solve this problem, a persistent directory entry index was implemented in Pmfs to speed up directory operations. The experimental results show that with 100 000 files in a single directory, the file creation speed is increased by 12 times and the bandwidth is improved by 27.3%.
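The cost gap the abstract describes can be sketched in a few lines: an unindexed directory lookup scans every entry, while a hash index answers in one probe. This is only an illustrative model (a Python dict standing in for the persistent index), not Pmfs's actual on-NVM data structure; the names are invented for the example.

```python
# Hypothetical sketch: why a directory-entry index speeds up Pmfs-style lookups.
# A linear directory scan costs O(n) per operation; a persistent hash index
# (modeled here with a dict) makes it O(1) on average.

def linear_lookup(dirents, name):
    """O(n) scan, as in unindexed Pmfs directories."""
    for ino, entry_name in dirents:
        if entry_name == name:
            return ino
    return None

def build_index(dirents):
    """One-time index build; Pmfs would persist this in NVM."""
    return {entry_name: ino for ino, entry_name in dirents}

dirents = [(i, f"file{i}") for i in range(100_000)]
index = build_index(dirents)

assert linear_lookup(dirents, "file99999") == 99999   # scans all entries
assert index["file99999"] == 99999                    # single hash probe
```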
    Design and implementation of space-borne parallel remote sensing image compression system based on multi-core DSP
    TANG Guofei, ZHOU Haifang, TAN Qingping
    2017, 37(5):  1246-1250.  DOI: 10.11772/j.issn.1001-9081.2017.05.1246
    With the continuous development of space-borne remote sensing technology, remote sensing data volumes have become increasingly large, and the limited communication bandwidth cannot meet the demand of remote sensing image transmission. Therefore, research on image compression technology for space-borne applications is of great significance to the development of space application technology. A traditional single-core DSP (Digital Signal Processor) can hardly meet the performance requirements, while a Field-Programmable Gate Array (FPGA) can hardly meet the power requirements. In recent years, with the development of hardware technology, multi-core DSP technology has matured, and mature multi-core DSP image compression solutions in missile-borne scenarios can serve as references for space-borne applications. A parallel image compression system was constructed on a multi-core DSP, TI's C6678 multi-core floating-point DSP platform, making full use of its hardware resources. Considering that the compression of space-borne remote sensing images places high demands on compression quality, compression speed and other indicators, the system took JPEG2000 as the compression standard, with the master core responsible for external communication and internal task allocation and the slave cores implementing JPEG2000 image compression. Test results show that the system is stable and reliable, and its overall compression performance is excellent, meeting the performance requirements of a space-borne remote sensing image compression system.
    Performance optimization of distributed database aggregation computing
    XIAO Zida, ZHU Ligu, FENG Dongyu, ZHANG Di
    2017, 37(5):  1251-1256.  DOI: 10.11772/j.issn.1001-9081.2017.05.1251
    Aiming at the low aggregation computing performance of distributed databases in analytical applications, and taking the MongoDB database as an example, a method based on sharding and indexing was put forward to improve database performance. Firstly, the characteristics of the business were analyzed to guide the choice of the shard key field; the selected key field needed to ensure that the data was evenly distributed over the cluster nodes. Secondly, by studying the indexing efficiency of the distributed database, deleting the index on the query field was used to further improve computing performance, which could make full use of hardware resources to improve aggregation computing performance. The analysis and experimental results show that a shard key field with high cardinality can distribute data evenly over the data nodes in the cluster, and that using full-table queries can effectively improve the aggregation speed; thus the optimization method can effectively improve the performance of aggregation computing.
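The cardinality argument above can be demonstrated with a toy hash-sharding model: a high-cardinality key spreads documents evenly across shards, while a low-cardinality key piles them onto a few nodes. This is an illustration of the general principle, not MongoDB's internal chunk-balancing logic; the shard count and key names are made up.

```python
# Illustrative sketch (not MongoDB internals): a high-cardinality shard key
# hashes documents evenly across nodes; a low-cardinality key skews them.
from collections import Counter
import hashlib

def shard_of(key, n_shards=4):
    h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    return h % n_shards

high_card = [shard_of(f"user_{i}") for i in range(10_000)]   # ~unique values
low_card  = [shard_of(i % 2) for i in range(10_000)]         # only 2 values

print(Counter(high_card))  # roughly 2500 documents on each of the 4 shards
print(Counter(low_card))   # all documents land on at most 2 shards
```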
    Whole process optimized garbage collection for solid-state drives
    FANG Caihua, LIU Jingning, TONG Wei, GAO Yang, LEI Xia, JIANG Yu
    2017, 37(5):  1257-1262.  DOI: 10.11772/j.issn.1001-9081.2017.05.1257
    Due to NAND flash's inherent restrictions such as erase-before-write and a large erase unit, flash-based Solid-State Drives (SSD) require garbage collection operations to reclaim invalid physical pages. However, the high overhead caused by garbage collection significantly decreases the performance and lifetime of SSDs, and the impact becomes even more serious when the SSD is frequently used and its data is fragmented. Existing Garbage Collection (GC) algorithms only focus on some steps of the garbage collection operation, and none of them provides a comprehensive solution that takes all the steps of the GC process into consideration. On the basis of a detailed analysis of the GC process, a whole-process optimized garbage collection algorithm named WPO-GC (Whole Process Optimized Garbage Collection) was proposed, which integrated optimizations on each step of the GC in order to reduce the negative impact on normal read/write requests and on the SSD's lifetime to the greatest extent. Moreover, the WPO-GC was implemented on SSDsim, an open-source SSD simulator, to evaluate its efficiency. The experimental results show that compared with a typical GC algorithm, the proposed algorithm can decrease the read I/O response time by 20%-40% and the write I/O response time by 17%-40%, and balance wear by nearly 30% to extend the lifetime.
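One of the GC steps the abstract alludes to is victim selection; the classic greedy policy picks the block with the fewest valid pages, since every valid page must be copied out before the block can be erased. The sketch below models that single step with illustrative costs; it is not SSDsim's model or the paper's full WPO-GC pipeline.

```python
# A minimal sketch of one GC step: greedy victim-block selection.
# Picking the block with the fewest valid pages minimizes copy-back cost.
# Block layout and cost units are illustrative only.

def pick_victim(blocks):
    """blocks: {block_id: number_of_valid_pages}. Greedy choice."""
    return min(blocks, key=blocks.get)

def gc_cost(valid_pages, copy_cost=1, erase_cost=10):
    """Copy each valid page out, then erase the block."""
    return valid_pages * copy_cost + erase_cost

blocks = {0: 60, 1: 3, 2: 45, 3: 12}
victim = pick_victim(blocks)
assert victim == 1
assert gc_cost(blocks[victim]) == 13   # 3 page copies + 1 erase
```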
    Real-time data analysis system based on Spark Streaming and its application
    HAN Dezhi, CHEN Xuguang, LEI Yuxin, DAI Yongtao, ZHANG Xiao
    2017, 37(5):  1263-1269.  DOI: 10.11772/j.issn.1001-9081.2017.05.1263
    In order to realize rapid analysis of massive real-time data, a Distributed Real-time Data Analysis System (DRDAS) was designed, which handles the collection, storage and real-time analysis of massive concurrent data. According to the operating principle of Spark Streaming, a dynamic-sampling parallel K-means algorithm was proposed, which can quickly and efficiently detect various kinds of DDoS (Distributed Denial of Service) attacks. The experimental results show that the DRDAS has good scalability, fault tolerance and real-time processing ability, and together with the new parallel K-means algorithm, it can detect various DDoS attacks in real time and shorten the attack detection time.
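The detection idea (cluster normal traffic, flag points far from every centroid) can be sketched sequentially, without the Spark Streaming parallelization. Everything here is a toy stand-in: a 1-D feature, a tiny pure-Python Lloyd's iteration, and a made-up distance threshold, not the paper's sampled, distributed algorithm.

```python
# Toy, non-parallel sketch of "K-means for DDoS detection": cluster normal
# traffic features, then flag points far from all centroids as anomalous.
import random

def kmeans(points, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# 1-D feature: packets per second from each source (synthetic "normal" data).
normal = [random.Random(i).gauss(100, 5) for i in range(200)]
centers = kmeans(normal, k=2)

def is_attack(rate, centers, threshold=50):
    return min(abs(rate - c) for c in centers) > threshold

assert not is_attack(105, centers)
assert is_attack(5000, centers)   # flood-like rate sits far from all centroids
```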
    SET-MRTS: Schedulability experiment toolkit for multiprocessor real-time systems
    CHEN Zewei, YANG Maolin, LEI Hang, LIAO Yong, XIE Wei
    2017, 37(5):  1270-1275.  DOI: 10.11772/j.issn.1001-9081.2017.05.1270
    In recent years, the complexity of conducting schedulability experiments has increased with the rapid development of real-time scheduling research. In general, schedulability experiments are time-consuming in the absence of standardized and modularized experiment tools. Moreover, since the source codes are not publicly available, it is difficult to verify the results reported in the literature, and to reuse and extend the experiments. In order to reduce repetitive work and help verification, a basic schedulability experiment framework was proposed, which generated task systems through random distributions and then tested their schedulability. Based on this framework, a novel open-source schedulability platform called SET-MRTS (Schedulability Experiment Toolkit for Multiprocessor Real-Time Systems) was designed and implemented. The platform adopts a modular architecture, consisting of a task module, a processor module, a shared resource module, an algorithm library, a configuration module and an output module. The experimental results show that SET-MRTS supports uniprocessor and multiprocessor real-time scheduling algorithms and synchronization protocol analyses, correctly performs schedulability tests, outputs intuitive experimental results, and supports extension of the algorithm library. The experiments with the implemented algorithm library show that SET-MRTS has good compatibility and extensibility. SET-MRTS is the first open-source platform to support a complete experimental process, including algorithm implementation, parameter configuration, result statistics, charting, and so on.
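The experiment loop such a platform automates — generate random task sets, then run a schedulability test — can be sketched with two textbook ingredients: UUniFast utilization generation and the Liu & Layland rate-monotonic utilization bound. These are common choices in this literature, used here as illustrative stand-ins for whatever generators and tests SET-MRTS's library actually contains.

```python
# Hedged sketch of a schedulability experiment step: random task-set
# generation (UUniFast) plus a classic test (Liu & Layland RM bound).
import random

def uunifast(n, total_util, rng):
    """Draw n task utilizations summing to total_util (Bini & Buttazzo)."""
    utils, remaining = [], total_util
    for i in range(1, n):
        next_rem = remaining * rng.random() ** (1.0 / (n - i))
        utils.append(remaining - next_rem)
        remaining = next_rem
    utils.append(remaining)
    return utils

def rm_bound_schedulable(utils):
    """Sufficient test: total utilization <= n * (2^(1/n) - 1)."""
    n = len(utils)
    return sum(utils) <= n * (2 ** (1.0 / n) - 1)

rng = random.Random(42)
taskset = uunifast(5, 0.6, rng)
assert abs(sum(taskset) - 0.6) < 1e-9
print(rm_bound_schedulable(taskset))  # bound for n=5 is ~0.743, so True
```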
    Simplification method for testing behavior of parallel software
    ZHANG Wei, SUN Tao, WAN Xiaoyun
    2017, 37(5):  1276-1281.  DOI: 10.11772/j.issn.1001-9081.2017.05.1276
    Focusing on the difficulty of testing parallel software systems and the excessive size of their state spaces, a Colored Petri Net (CPN) model for simplifying the tested behavior of a parallel model was proposed. Firstly, the original model was divided into several sub-modules according to the special nodes, such as concurrent transitions, synchronous transitions, branch places and confluence places. Secondly, the places of the tested behavior were located and the test set was created. Finally, execution priorities were set for the non-tested behaviors in each parallel module that met the reduction condition. Comparison of the state space analysis results before and after simplification shows that the number of nodes in the state space is reduced by at least 40%, while the full-coverage test paths generated from the tested behavior are not affected by the simplification.
    Parallel trajectory compression method based on MapReduce
    WU Jiagao, XIA Xuan, LIU Linfeng
    2017, 37(5):  1282-1286.  DOI: 10.11772/j.issn.1001-9081.2017.05.1282
    The massive spatiotemporal trajectory data produced by the increasing number of Global Positioning System (GPS)-enabled devices is a heavy burden to store, transmit and process. To reduce this burden, many trajectory compression methods have been developed. A parallel trajectory compression method based on MapReduce was proposed. To solve the loss of correlation near segmentation points caused by parallelization, the trajectory was first divided by two segmentation methods whose segmentation points interleave; then the trajectory segments were assigned to different nodes for parallel compression; lastly, the compression results were matched and merged. The performance test and analysis results show that the proposed method not only increases the compression efficiency significantly, but also eliminates the error caused by the loss of correlation.
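The split-compress-merge pipeline can be sketched sequentially, with each "map task" reduced to a function call. Douglas-Peucker is used here as a common stand-in compressor (the abstract does not name the paper's algorithm), the segments overlap by one point so adjacent results share a joint to merge on, and the MapReduce plumbing and the paper's interleaved double segmentation are omitted.

```python
# Sequential sketch of parallel trajectory compression: split, compress each
# segment with Douglas-Peucker (a stand-in compressor), merge on shared joints.

def perp_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def douglas_peucker(points, eps):
    if len(points) < 3:
        return points
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= eps:
        return [points[0], points[-1]]
    return douglas_peucker(points[:i + 1], eps)[:-1] + douglas_peucker(points[i:], eps)

def compress_parallel(points, n_segments, eps):
    cuts = [len(points) * k // n_segments for k in range(n_segments)] + [len(points)]
    merged = []
    for s, e in zip(cuts, cuts[1:]):
        seg = douglas_peucker(points[max(s - 1, 0):e], eps)  # overlap one point
        merged.extend(seg if not merged else seg[1:])        # drop shared joint
    return merged

# Toy track: flat with a spike every 10 samples.
track = [(x, 0.0 if x % 10 else 3.0) for x in range(100)]
out = compress_parallel(track, n_segments=4, eps=0.5)
assert out[0] == track[0] and out[-1] == track[-1]
assert len(out) < len(track)
```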
    Weighted Slope One algorithm based on clustering and Spark framework
    LI Linlin, NI Jiancheng, YU Pingping, YAO Binxiu, CAO Bo
    2017, 37(5):  1287-1291.  DOI: 10.11772/j.issn.1001-9081.2017.05.1287
    Concerning that the traditional Slope One algorithm does not consider the influence of item attribute information and time factors on item similarity calculation, and that it suffers from high computational complexity and slow processing in the current big data context, a weighted Slope One algorithm based on clustering and the Spark framework was put forward. Firstly, a time weight was added to the traditional item score similarity calculation, and a comprehensive similarity was computed together with the similarity of item attributes. Then the set of nearest neighbors was generated by combining with the Canopy-K-means algorithm. Finally, the data was partitioned and iterated over to realize parallelization on the Spark framework. The experimental results show that the improved algorithm based on the Spark framework is more accurate than the traditional Slope One algorithm and the Slope One algorithm based on user similarity, improves operating efficiency by 3.5-5 times compared with the Hadoop platform, and is more suitable for large-scale dataset recommendation.
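For reference, here is the base weighted Slope One scheme that the paper extends; the time- and attribute-weighting terms and the Spark parallelization are deliberately omitted, and the ratings are a made-up toy example.

```python
# Minimal weighted Slope One: deviations weighted by co-rating counts.

def train(ratings):
    """ratings: {user: {item: score}}. Returns average deviations and counts."""
    dev, cnt = {}, {}
    for user_ratings in ratings.values():
        for i, ri in user_ratings.items():
            for j, rj in user_ratings.items():
                if i == j:
                    continue
                dev[(i, j)] = dev.get((i, j), 0.0) + (ri - rj)
                cnt[(i, j)] = cnt.get((i, j), 0) + 1
    return {k: v / cnt[k] for k, v in dev.items()}, cnt

def predict(user_ratings, item, dev, cnt):
    """Predict the score of `item` from the user's other ratings."""
    num = den = 0.0
    for j, rj in user_ratings.items():
        if (item, j) in dev:
            num += (dev[(item, j)] + rj) * cnt[(item, j)]
            den += cnt[(item, j)]
    return num / den if den else None

ratings = {"u1": {"a": 5, "b": 3, "c": 2},
           "u2": {"a": 3, "b": 4},
           "u3": {"b": 2, "c": 5}}
dev, cnt = train(ratings)
print(round(predict({"b": 3, "c": 2}, "a", dev, cnt), 2))  # → 4.0
```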
    Mutated bat algorithm for solving discounted {0-1} knapsack problem
    WU Congcong, HE Yichao, CHEN Yiying, LIU Xuejing, CAI Xiufeng
    2017, 37(5):  1292-1299.  DOI: 10.11772/j.issn.1001-9081.2017.05.1292
    Since deterministic algorithms can hardly solve the Discounted {0-1} Knapsack Problem (D{0-1}KP) with large scale and wide data ranges, a Mutated Double-coded Binary Bat Algorithm (MDBBA) was proposed. Firstly, the coding problem of D{0-1}KP was solved by double coding. Secondly, the Greedy Repair and Optimization Algorithm (GROA) was applied to the fitness calculation of bat individuals, so that the problem was solved quickly and effectively. Then, the mutation strategy of Differential Evolution (DE) was adopted to improve the global optimization ability. Finally, Lévy flights were performed by bat individuals with a certain probability to enhance the algorithm's ability to explore and jump out of local extrema. Simulations on four large-scale instances show that MDBBA is very suitable for solving large-scale D{0-1}KP, obtains better optimal and mean values than the FirEGA (First Genetic Algorithm) and the Double-coded Binary Bat Algorithm (DBBA), and converges significantly faster than DBBA.
    Optimized routing algorithm based on cooperative communication of cluster parent set for low power and lossy network
    YAO Yukun, LIU Jiangbing, LI Xiaoyong
    2017, 37(5):  1300-1305.  DOI: 10.11772/j.issn.1001-9081.2017.05.1300
    To deal with the problem that the routing algorithm based on collaborative communication of cluster parents (CRPL) for Low-Power and Lossy Networks (LLN) cannot balance the energy consumption of nodes or maximize the network lifetime, because it takes no account of the residual energy of nodes, a high-efficiency routing algorithm based on collaborative communication of the cluster parent set, HE-CRPL, was proposed. The proposed algorithm carried out three main optimizations. Firstly, both the wireless link quality and the residual energy of nodes were considered during cluster parent selection. Secondly, the wireless link quality and the Expected LifeTime (ELT) of cluster parent nodes were combined when estimating the priority of cluster parent nodes and selecting the optimal cluster parent set. Thirdly, the cluster parent nodes were notified of the priority list by Destination Advertisement Object (DAO) messages during network topology initialization. The simulation results show that, compared with the CRPL algorithm, HE-CRPL performs obviously better in prolonging the network lifetime, increasing the packet delivery success rate and reducing the number of packet retransmissions, with the network lifetime prolonged by more than 18.7% and the number of retransmissions reduced by more than 15.9%.
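The selection idea — rank candidate parents by combining link quality with expected lifetime — can be sketched as a scoring function. The blend weights, the ETX-style link metric and the normalization horizon below are all made-up illustration values, not the paper's metric; only ELT = residual energy / power draw follows the abstract's definition.

```python
# Illustrative cluster-parent ranking: blend link quality (ETX, lower is
# better) with expected lifetime ELT = residual_energy / power_draw.

def expected_lifetime(residual_energy_j, power_draw_w):
    return residual_energy_j / power_draw_w

def parent_score(etx, elt, w_link=0.5, w_energy=0.5, elt_scale=3600.0):
    # Normalize ELT against a reference horizon, then blend the two terms.
    return w_link * (1.0 / etx) + w_energy * min(elt / elt_scale, 1.0)

candidates = {
    "A": {"etx": 1.2, "elt": expected_lifetime(500.0, 0.05)},   # 10000 s
    "B": {"etx": 1.1, "elt": expected_lifetime(30.0, 0.05)},    # 600 s
    "C": {"etx": 2.5, "elt": expected_lifetime(400.0, 0.05)},   # 8000 s
}
ranked = sorted(candidates,
                key=lambda n: parent_score(**candidates[n]), reverse=True)
print(ranked)  # "A" first: good link *and* long expected lifetime
```

Node B has the best link but almost no energy left, so it ranks last; this is exactly the trade-off the residual-energy term is meant to enforce.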
    Localization algorithm based on factor graph and hybrid message passing for wireless networks
    CUI Jianhua, WANG Zhongyong, ZHANG Chuanzong, ZHANG Yuanyuan
    2017, 37(5):  1306-1310.  DOI: 10.11772/j.issn.1001-9081.2017.05.1306
    Concerning the high computational complexity and communication overhead of wireless network node localization algorithms based on message passing, a ranging-based hybrid message passing localization method with low complexity and low cooperation overhead was proposed. The uncertainty of the reference nodes was taken into account to avoid error accumulation, and the messages on the factor graph were restricted to Gaussian distributions to reduce the communication overhead. Firstly, the factor graph was designed based on the system model and Bayesian factorization. Secondly, belief propagation and mean field methods were employed, according to the linear state transition model and the nonlinear ranging model, to calculate the prediction messages and the cooperation messages respectively. Finally, in each iteration, the non-Gaussian beliefs were approximated by Gaussian distributions using Taylor expansion of the nonlinear terms. The simulation results show that the positioning accuracy of the proposed algorithm is comparable to that of the Sum-Product Algorithm over a Wireless Network (SPAWN), while the information transmitted between nodes decreases from a large number of particles to a mean vector and covariance matrix, and the computational complexity is also dramatically reduced.
    Resource allocation based on femto base station in Macrocell-Femtocell networks
    ZHANG Haibo, PENG Xingying, CHEN Shanxue
    2017, 37(5):  1311-1316.  DOI: 10.11772/j.issn.1001-9081.2017.05.1311
    Aiming at the cross-layer interference between the macrocell user layer and the femtocell user layer, and the co-layer interference among femtocells, in the macrocell-femtocell two-layer network model, a resource allocation algorithm based on femtocell base stations was proposed. The algorithm consists of two parts. In the first part, the macrocell base station used an improved difference method, setting virtual Macro User Equipment (MUE) to turn channel allocation into a balanced assignment problem, allocated the channels to macrocell users, and then used a water-filling algorithm for power allocation to guarantee the transmission of macrocell users. In the second part, on the basis of guaranteeing the service quality of macrocell users, an Enhanced Ant Colony Optimization (EACO) algorithm was adopted to group the femtocells, with the pheromone concentration range bounded to avoid the possibility that the original ant colony algorithm falls into a local optimum. Then, a heuristic algorithm and a distributed power allocation algorithm were used to allocate channels and power to the Femto User Equipment (FUE) respectively, maximizing the spectral efficiency under the data rate requirements of the femtocell users. The simulation results show that EACO effectively suppresses both cross-layer and co-layer interference, guarantees the data rate requirements of users, and effectively improves the spectral efficiency of the network.
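The macrocell power step uses water-filling; below is a standard textbook water-filling allocation over channel gains, shown as a self-contained sketch rather than the paper's exact formulation. The gains and power budget are arbitrary example values.

```python
# Standard water-filling power allocation: power goes preferentially to
# better channels, and weak channels may get none.

def water_filling(gains, total_power, noise=1.0):
    """Allocate total_power across channels with the given gains."""
    inv = sorted(noise / g for g in gains)           # per-channel "floor"
    # Find the water level mu with sum(max(mu - floor, 0)) == total_power.
    for k in range(len(inv), 0, -1):
        mu = (total_power + sum(inv[:k])) / k
        if mu > inv[k - 1]:                          # these k channels active
            break
    return [max(mu - noise / g, 0.0) for g in gains]

p = water_filling([1.0, 0.5, 0.1], total_power=4.0)
assert abs(sum(p) - 4.0) < 1e-9
assert p[0] > p[1] > p[2] >= 0.0   # better channels get more power
```

With these numbers the weakest channel (gain 0.1, floor 10) stays below the water level and receives zero power, which is the characteristic water-filling behavior.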
    Switch migration strategy based on improved gravitation search algorithm
    YU Mingqiu, ZHOU Chuangming, WANG Huijie, DU Ruichao
    2017, 37(5):  1317-1320.  DOI: 10.11772/j.issn.1001-9081.2017.05.1317
    In multi-controller Software Defined Networks (SDN), the existing switch migration strategies consider only a single migration factor and cannot adapt to changes of switch traffic, so a Switch Migration Strategy based on an Improved Gravitation Search Algorithm (IGS-SMS) was proposed. In the decision-making stage, multi-objective decision-making based on fuzzy satisfaction was used to order the objectives by the competitive priority of membership. In the calculation phase, the objective function with the top priority was optimized by the improved gravitational search algorithm. The simulation results show that IGS-SMS achieves good load balancing among controllers while ensuring the transmission delay and switch redistribution indexes. When the local load was heavy in the experiment, the Dynamic Switches Migration Algorithm (DSMA) and the Progressive Auction based Switches Migration Mechanism (PASMM) could not alleviate the overload; by contrast, IGS-SMS could, and its load balancing degree was lower than those of DSMA and PASMM.
    Proportional fairness and maximum weighted sum-rate in D2D communications underlaying cellular networks
    HU Jing, ZHENG Wu
    2017, 37(5):  1321-1325.  DOI: 10.11772/j.issn.1001-9081.2017.05.1321
    In order to solve the problem of user fairness in D2D (Device-to-Device) communication systems, the existing proportional fairness principle was first extended to derive an optimization problem on the weighted sum-rate, and then a KMPF (Kuhn-Munkres Proportional Fair) resource allocation algorithm was proposed to solve it. The algorithm maximized the users' weighted sum-rate through power control, and allocated the cellular users' resources that could be reused by the D2D users so as to maximize the total weighted sum-rate via the Kuhn-Munkres (KM) algorithm. Simulation results show that the fairness index of the proposed algorithm is 0.4 higher than that of the greedy resource allocation algorithm while its throughput stays above 95% of the latter's level, and that its throughput is about 50% higher than that of the random resource allocation algorithm. The proposed algorithm can thus solve the user fairness problem while taking the system throughput into account.
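The channel-assignment step pairs D2D users with cellular users' resources to maximize the total weighted sum-rate. The paper uses the Kuhn-Munkres algorithm for this; on a toy instance the same optimum can be found by brute force over permutations, which is what the sketch below does. The rate matrix is made-up example data.

```python
# Toy assignment step: pick the D2D-to-channel pairing that maximizes the
# total weighted sum-rate (brute force here; KM solves this in polynomial time).
from itertools import permutations

# rate[d][c]: weighted sum-rate if D2D pair d reuses cellular user c's channel
rate = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]

def best_assignment(rate):
    n = len(rate)
    return max(permutations(range(n)),
               key=lambda perm: sum(rate[d][perm[d]] for d in range(n)))

perm = best_assignment(rate)
total = sum(rate[d][perm[d]] for d in range(len(rate)))
print(perm, total)  # → (0, 2, 1) 11.0
```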
    Indoor activity recognition based on received signal strength in WLAN
    WEI Chunling, WANG Bufei
    2017, 37(5):  1326-1330.  DOI: 10.11772/j.issn.1001-9081.2017.05.1326
    Mainstream activity recognition technology depends on professional measurement equipment, which makes it difficult to deploy and use. An activity recognition technique based on the received signal strength characteristics of existing WiFi hotspots was proposed. The results show that the proposed algorithm can identify the presence of a person in a room with 80% accuracy, infer standing, walking and lying activities with 95% accuracy, and identify the walking direction with 80% accuracy. The signals required by the proposed algorithm exist everywhere in daily life, so it can be effectively used to identify indoor activities with low power consumption and high precision.
    Improved situation assessment method based on hidden Markov model
    LI Fangwei, LI Qi, ZHU Jiang
    2017, 37(5):  1331-1334.  DOI: 10.11772/j.issn.1001-9081.2017.05.1331
    Concerning the problem that the parameters of the Hidden Markov Model (HMM) are difficult to configure, an improved situation assessment method based on HMM was proposed to reflect the security of the network. The proposed method took the output of an intrusion detection system as input, classified the alarm events according to the Snort manual to obtain the observation sequence, and established the HMM model; an improved Simulated Annealing (SA) algorithm combined with the Baum-Welch (BW) algorithm was used to optimize the HMM parameters, and quantitative analysis was used to obtain the security situational value of the network. The experimental results show that the proposed method can improve the accuracy and convergence speed of the model.
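The assessment step itself — mapping an observation sequence of alarms to a security situational value through the HMM — can be sketched with forward filtering. All matrices, state names and risk weights below are illustrative placeholders, not the paper's trained parameters.

```python
# Minimal HMM forward filtering: from alarm-severity observations to a
# posterior over hidden security states, collapsed into one situation value.

states = ["safe", "probed", "compromised"]
pi = [0.8, 0.15, 0.05]                      # initial state distribution
A = [[0.7, 0.25, 0.05],                     # state transition matrix
     [0.2, 0.6, 0.2],
     [0.05, 0.25, 0.7]]
B = [[0.8, 0.15, 0.05],                     # P(observation | state);
     [0.3, 0.5, 0.2],                       # observations: low/med/high alarm
     [0.1, 0.3, 0.6]]
severity = [0.0, 0.5, 1.0]                  # per-state risk weight

def forward_filter(obs):
    """Return the normalized posterior over states after observing obs."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(3)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * A[p][s] for p in range(3)) * B[s][o]
                 for s in range(3)]
    z = sum(alpha)
    return [a / z for a in alpha]

def situation_value(obs):
    post = forward_filter(obs)
    return sum(p * w for p, w in zip(post, severity))

# A run of high-severity alarms should yield a higher situation value.
assert situation_value([0, 0, 0]) < situation_value([2, 2, 2])
```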
    Data destruction model for cloud storage based on lifecycle control
    CAO Jingyuan, LI Lixin, LI Quanliang, DING Yongshan
    2017, 37(5):  1335-1340.  DOI: 10.11772/j.issn.1001-9081.2017.05.1335
    Concerning the lack of an effective destruction mechanism for user data in cloud storage, where data security is threatened and the destruction time within the data lifecycle cannot be controlled, which greatly limits the development of cloud services, a data destruction model based on lifecycle control under the cloud storage environment was proposed. The plaintext was processed by a functional transformation to generate the ciphertext and metadata, avoiding complex key management. Furthermore, in order to improve the controllability of data destruction, a self-destructing data object based on controllable time was designed, which made any illegal access to an expired object trigger assured deletion by a rewriting program, thus realizing data destruction based on lifecycle control. The analysis and experimental results show that the scheme can enhance the flexibility and controllability of data destruction and reduce the performance cost while protecting the data safely and effectively.
    Research and design of AES algorithm based on high-level synthesis
    ZHANG Wang, JIA Jia, MENG Yuan, BAI Xu
    2017, 37(5):  1341-1346.  DOI: 10.11772/j.issn.1001-9081.2017.05.1341
    Since the widely used Advanced Encryption Standard (AES) algorithm faces increasingly high performance requirements, software-based implementations have become insufficient for high-throughput cipher cracking, and more and more encryption algorithms are therefore accelerated on Field-Programmable Gate Array (FPGA) platforms. Focused on the high complexity and long development cycle of FPGA-based AES development, an AES hardware acceleration algorithm was designed in a high-level programming language with High-Level Synthesis (HLS) design methodology. Firstly, techniques such as loop unrolling were used to improve operational parallelism. Secondly, resource balancing optimization was used to make full use of on-chip memory and circuit resources. Finally, a fully pipelined structure was added to improve the clock frequency and throughput of the overall design. The benchmark design and the designs optimized by structural expansion, resource balancing and pipelining were analyzed and compared in detail. The experimental results show that on the Xilinx xc7z020clg484 platform the clock frequency of the AES algorithm reaches 127.06 MHz and the throughput achieves 16.26 Gb/s, three orders of magnitude higher than the benchmark AES design.
    Improvement of OpenID Connect protocol and its security analysis
    LU Jintian, YAO Lili, HE Xudong, MENG Bo
    2017, 37(5):  1347-1352.  DOI: 10.11772/j.issn.1001-9081.2017.05.1347
    The OpenID Connect protocol is widely used in the identity authentication field and is one of the newest single sign-on protocols. In this paper, digital signature and asymmetric encryption were used to improve the OpenID Connect protocol, with a focus on the secrecy and authentication of the improved protocol. The improved protocol was then formalized with the applied pi calculus in the symbolic model: secrecy was modeled by queries and authentication by non-injective correspondences. Finally, the formal model of the improved protocol was transformed into the input of ProVerif, an automatic verification tool based on the symbolic model. The results indicate that the improved OpenID Connect protocol provides both authentication and secrecy.
    Vulnerability threat assessment based on improved variable precision rough set
    JIANG Yang, LI Chenghai
    2017, 37(5):  1353-1356.  DOI: 10.11772/j.issn.1001-9081.2017.05.1353
    Variable Precision Rough Set (VPRS) can effectively process noisy data, but its portability is poor. Aiming at this problem, an improved vulnerability threat assessment model was proposed by introducing a threshold parameter α. First of all, an assessment decision table was created according to the characteristic properties of vulnerabilities. Then, the k-means algorithm was used to discretize the continuous attributes. Next, by adjusting the values of β and α, attribute reduction was performed and probabilistic decision rules were derived. Finally, the test data was matched against the rule base to obtain the vulnerability assessment results. The simulation results show that the accuracy of the proposed method is 19.66 percentage points higher than that of the VPRS method, and the portability is enhanced.
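The k-means discretization step the abstract mentions reduces each continuous attribute to a small set of levels. A minimal 1-D sketch is below; the initialization scheme, k=3 and the sample values are illustrative assumptions, not the paper's configuration.

```python
def kmeans_1d(values, k=3, iters=20):
    """Cluster a single continuous attribute into k levels, as used to
    discretize vulnerability attributes before building decision rules."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    while len(centers) < k:
        centers.append(centers[-1] + 1.0)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            buckets[i].append(v)
        # Recompute each center as its bucket mean (keep it if empty).
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return centers

def discretize(v, centers):
    """Map a continuous value to the index of its nearest level."""
    return min(range(len(centers)), key=lambda j: abs(v - centers[j]))
```

After fitting, each attribute column is replaced by its level index in the decision table.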
    Automatic hierarchical approach of MAXQ based on action space partition
    WANG Qi, QIN Jin
    2017, 37(5):  1357-1362.  DOI: 10.11772/j.issn.1001-9081.2017.05.1357
    Since a hierarchy of the Markov Decision Process (MDP) needs to be constructed manually in hierarchical reinforcement learning, and some automatic hierarchical approaches based on the state space produce unsatisfactory results in environments without obvious subgoals, a new automatic hierarchical approach based on action space partition was proposed. Firstly, the set of actions was decomposed into disjoint subsets according to the state component of each action. Then, bottleneck actions were identified by analyzing the executable actions of the agent in different states. Finally, based on the execution order of actions and the bottleneck actions, the relationship between action subsets was determined and a hierarchy was constructed. Furthermore, the termination condition for subtasks in the MAXQ method was modified so that the optimal strategy could be found by the MAXQ method using the hierarchical structure produced by the proposed algorithm. The experimental results show that the algorithm can automatically construct a hierarchical structure that is not affected by environmental change. Compared with the Q-Learning and Sarsa algorithms, the MAXQ method with the proposed hierarchy obtains the optimal strategy faster and gets higher returns, which verifies that the proposed algorithm can effectively construct the MAXQ hierarchy and make finding the optimal strategy more efficient.
    Improved biogeography-based optimization algorithm based on local-decision domain of glowworm swarm optimization
    WANG Zhihao, LIU Peiyu, DING Ding
    2017, 37(5):  1363-1368.  DOI: 10.11772/j.issn.1001-9081.2017.05.1363
    Aiming at the insufficient search ability of the Biogeography-Based Optimization (BBO) algorithm, an improved migration operation based on a local decision domain was proposed to improve the global optimization ability of the algorithm. The improved migration operation can further exploit the interaction between habitats by taking the immigration and emigration rates of different habitats into consideration. The improved algorithm was applied to 12 typical function optimization problems to test its performance, and its effectiveness was verified. Compared with BBO, Improved BBO (IBBO) and Differential Evolution/BBO (DE/BBO), the experimental results show that the proposed algorithm improves the ability of searching for the global optimal solution, the convergence speed and the computational precision of the solution.
    Improved particle swarm optimization algorithm combining centroid and Cauchy mutation
    LYU Liguo, JI Weidong
    2017, 37(5):  1369-1375.  DOI: 10.11772/j.issn.1001-9081.2017.05.1369
    Concerning the problems of low convergence accuracy and easily falling into local optima of Particle Swarm Optimization (PSO), an improved PSO algorithm combining Centroid and Cauchy Mutation, namely CCMPSO, was proposed. Firstly, chaos initialization was adopted at the initialization stage to improve the uniformity of the initial particle distribution. Secondly, the concept of centroid was introduced to improve the convergence rate and optimization capability: by calculating the global centroid of all particles in the population and the individual centroid formed by all of the individual extreme values, sufficient information sharing could be realized within the particle swarm. To avoid falling into a local optimal solution, a Cauchy mutation operation was used to perturb the current optimal particle; in addition, the disturbance step length was adaptively adjusted according to the operation rule of Cauchy mutation, and the inertia weights were dynamically adjusted according to population diversity. Finally, seven classical test functions were used to verify the algorithm. Experimental results indicate that the new algorithm performs well in the convergence precision of the function execution results, including the mean, the variance and the minimum value.
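The Cauchy mutation of the current best particle can be sketched as below. This is a minimal illustration, not the paper's CCMPSO: the fixed mutation scale and the greedy keep-if-better rule are assumptions standing in for the adaptive step-length adjustment the abstract describes.

```python
import math
import random

def cauchy(rng):
    # Standard Cauchy variate via inverse-CDF sampling: tan(pi*(u - 1/2)).
    return math.tan(math.pi * (rng.random() - 0.5))

def mutate_best(gbest, scale, fitness, rng):
    """Perturb the global best particle with heavy-tailed Cauchy noise
    (occasional large jumps help escape local optima); keep the mutant
    only if it improves fitness (minimization)."""
    cand = [x + scale * cauchy(rng) for x in gbest]
    return cand if fitness(cand) < fitness(gbest) else gbest
```

Heavy tails are the point of the design choice: unlike Gaussian noise, the Cauchy distribution regularly produces long jumps that can carry the best particle out of a local basin.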
    Decision-making method with bounded rationality under intuitionistic fuzzy information environment
    DENG Daping, XIE Xiaoyun, GUO Zixuan
    2017, 37(5):  1376-1381.  DOI: 10.11772/j.issn.1001-9081.2017.05.1376
    Considering the intuitionistic fuzzy multi-attribute decision-making problem in which the decision makers have bounded-rationality psychological characteristics and the attribute weights and the probability information of the states of nature are completely unknown, a multi-attribute decision-making method based on prospect theory and Dempster-Shafer theory was proposed. Firstly, the probabilities of the states were calculated by Dempster-Shafer theory, and the decision weight functions of the states were determined. Then, the normal distribution probability density function was utilized to construct the intuitionistic fuzzy reference point; based on the differences between the attribute values and the reference point, the value function matrix and the prospect value matrix were obtained. In addition, an optimization model was developed to derive the attribute weights on the principle of maximizing the comprehensive prospect value, and all the alternatives were then ranked. Finally, the proposed approach was applied to a numerical example on the selection of game products. The experimental results show that the decision-making results are reasonable and reliable and reflect the actual situation.
    Micro blog user recommendation algorithm based on similarity of multi-source information
    YAO Binxiu, NI Jiancheng, YU Pingping, LI Linlin, CAO Bo
    2017, 37(5):  1382-1386.  DOI: 10.11772/j.issn.1001-9081.2017.05.1382
    Focusing on the data sparsity and low recommendation accuracy of the traditional Collaborative Filtering (CF) recommendation algorithm, a micro-blog User Recommendation algorithm based on the Similarity of Multi-source Information, named MISUR, was proposed. Firstly, micro-blog users were classified by the K-Nearest Neighbor (KNN) algorithm according to their tag information. Secondly, the similarities of multi-source information, such as micro-blog content, interaction relationships and social information, were calculated for each user in each class. Thirdly, time weight and richness weight were introduced to calculate the total similarity of the multi-source information, and top-N recommendation was performed in descending order. Finally, the experiment was carried out on the parallel computing framework Spark. The experimental results show that, compared with the CF recommendation algorithm and the micro-blog Friend Recommendation algorithm based on Multi-social Behavior (MBFR), the MISUR algorithm is superior in terms of accuracy, recall and efficiency.
    Item collaborative filtering recommendation algorithm based on improved similarity measure
    YU Jinming, MENG Jun, WU Qiufeng
    2017, 37(5):  1387-1391.  DOI: 10.11772/j.issn.1001-9081.2017.05.1387
    The traditional collaborative filtering algorithm cannot perform well under cold-start conditions. To solve this problem, an Item Collaborative Filtering algorithm based on an Inverse-Item-Frequency-weighted Proximity-Significance-Singularity measure (ICF_IPSS) was proposed, whose core is a novel similarity measure composed of a rating similarity and a structure similarity. The rating similarity takes into account the difference between the ratings of two items, the difference between an item rating and the median value, and the difference between the rating value and the average rating of other items. The structure similarity defines the IIF (Inverse Item Frequency) coefficient, which fully reflects the common-rating ratio and penalizes active users. Experiments were executed on the MovieLens and Jester data sets to test the accuracy of ICF_IPSS. On the MovieLens data set, when the number of nearest neighbors was 10, the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were 3.06% and 1.20% lower than those of ICF_JMSD (Item Collaborative Filtering based on Jaccard and Mean Square Difference), respectively; when the number of recommended items was 10, the precision and recall were 67.79% and 67.86% higher than those of ICF_JMSD, respectively. The experimental results show that ICF_IPSS is superior to traditional collaborative filtering algorithms such as ICF_JMSD.
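The abstract does not give the exact IIF formula, so the sketch below is one plausible IDF-style reading of it: users who rate many items get a logarithmically smaller weight, and co-rated users with closer ratings contribute more. Every detail here (the log weight, the 1/(1+|diff|) proximity term, the Jaccard-style normalization) is an illustrative assumption, not the paper's measure.

```python
import math

def iif_similarity(ratings_i, ratings_j, user_activity, n_items):
    """Illustrative item-item similarity: co-rating users are weighted
    by an inverse-item-frequency factor log(n_items / activity) that
    penalizes very active users, then scaled by rating proximity."""
    common = set(ratings_i) & set(ratings_j)
    if not common:
        return 0.0
    num = 0.0
    for u in common:
        w = math.log(n_items / user_activity[u])   # penalize active users
        num += w / (1.0 + abs(ratings_i[u] - ratings_j[u]))
    # Normalize by the number of users who rated either item.
    return num / len(set(ratings_i) | set(ratings_j))
```

With this shape, two items rated identically by selective users score highest, which matches the stated intent of the IIF coefficient.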
    Singular value decomposition recommender model based on phase sequential effect
    HUANG Kai, ZHANG Xihuang
    2017, 37(5):  1392-1396.  DOI: 10.11772/j.issn.1001-9081.2017.05.1392
    The traditional Singular Value Decomposition (SVD) recommendation model based on sequential effects considers only the scoring matrix and uses complicated time functions to fit an item's life cycle and a user's preferences, which makes the model difficult to interpret, captures user preferences inaccurately and yields low prediction accuracy. In view of these drawbacks, an improved sequential-effect model was proposed which comprehensively considers the scoring matrix, item attributes and user rating labels. Firstly, the time axis was divided into phases, and an item's popularity was converted by a sigmoid function into an influence value in [0,1] to improve the item bias. Secondly, the temporal variation of the user bias was transformed into the temporal variation of the user rating mean and the overall rating mean by a nonlinear function. Finally, the influence factors of user-item interaction were generated to improve the user-item interaction term by capturing the user's interest and combining it with the favorable rating rate of similar users. Tests on the MovieLens 10M and 20M movie rating data sets show that the improved model can better capture the temporal variation of user preferences, improve the accuracy of rating prediction, and reduce the Root Mean Square Error (RMSE) by 2.5%.
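The sigmoid squashing of raw popularity into a [0,1] influence value can be sketched in one line. The midpoint and steepness parameters below are illustrative assumptions; the paper would fit or choose its own.

```python
import math

def popularity_influence(count, midpoint=50.0, steepness=0.1):
    """Map a raw per-phase popularity count into an influence value in
    [0,1] via a sigmoid, as used to adjust the item-bias term."""
    return 1.0 / (1.0 + math.exp(-steepness * (count - midpoint)))
```

The sigmoid saturates at both ends, so a blockbuster and a mega-blockbuster get nearly the same bias adjustment while mid-popularity items are separated most sharply.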
    Score similarity based matrix factorization recommendation algorithm with group sparsity
    SHENG Wei, WANG Baoyun, HE Miao, YU Ying
    2017, 37(5):  1397-1401.  DOI: 10.11772/j.issn.1001-9081.2017.05.1397
    How to improve the accuracy of recommendation is an important issue for current recommender systems. The matrix factorization model was studied, and in order to exploit the group structure of the rating data, a Score-Similarity-based Matrix Factorization recommendation algorithm with Group Sparsity (SSMF-GS) was proposed. Firstly, the scoring matrix was divided into groups according to the users' rating behavior to obtain the scoring matrices of similar user groups. Then, the rating matrix of each similar user group was factorized with group sparsity by the SSMF-GS algorithm. Finally, an alternating optimization algorithm was applied to solve the proposed model. The model can filter out the latent item features of different user groups and enhances the interpretability of the latent features. Simulation experiments were conducted on the MovieLens data sets provided by the GroupLens website. The experimental results show that the proposed algorithm can improve recommendation accuracy significantly, with good performance in both Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE).
    Stance detection method based on entity-emotion evolution belief net
    LU Ling, YANG Wu, LIU Xu, LI Yan
    2017, 37(5):  1402-1406.  DOI: 10.11772/j.issn.1001-9081.2017.05.1402
    To deal with the problem of stance detection for Chinese social network reviews that lack theme or emotion features, a stance detection method based on an entity-emotion evolution Bayesian belief net was proposed. Firstly, three types of domain-dependent entities, including nouns, verb-object phrases and verb-noun compound attribute-centered structures, were extracted, the domain-related emotion features were extracted, and variable correlation strength was used as a constraint on the learning of the network structure. Then a 2-dependence Bayesian network classifier was constructed to describe the dependence among entities, stances and emotion features, and the stance of a review was deduced from the combined condition of entities and emotion features. Experiments were conducted on the Natural Language Processing & Chinese Computing 2016 (NLP&CC2016) evaluation data. The experimental results show that the average micro-F reaches 70.8%, and the average precision of FAVOR and AGAINST increases by 4.1 percentage points and 3.1 percentage points respectively over the Bayesian network classification method with emotion features only. The average micro-F on the 5 target data sets of the evaluation reaches 62.3%, which exceeds the average level of the evaluation.
    Biterm topic evolution model of microblog
    SHI Qingwei, LIU Yushi, ZHANG Fengtian
    2017, 37(5):  1407-1412.  DOI: 10.11772/j.issn.1001-9081.2017.05.1407
    Aiming at the problem that traditional topic models ignore the short-text nature and dynamic evolution of micro-blogs, a Biterm Topic over Time (BToT) model for micro-blog text was proposed, and topic evolution analysis was carried out with the proposed model. A continuous time variable was introduced into the text generation process of the BToT model to describe the dynamic evolution of topics in the time dimension, and the "biterm" structure of topic sharing within a document was formed to extend the short-text features. The Gibbs sampling method was used to estimate the parameters of BToT, and topic evolution was analyzed with the topic-time distribution parameters. The experimental results on real micro-blog datasets show that BToT can characterize latent topic evolution and has lower perplexity than Latent Dirichlet Allocation (LDA), Biterm Topic Model (BTM) and Topic over Time (ToT).
    Margin discriminant projection and its application in expression recognition
    GAN Yanling, JIN Cong
    2017, 37(5):  1413-1418.  DOI: 10.11772/j.issn.1001-9081.2017.05.1413
    Considering that global dimensionality reduction methods lack useful discriminant information and local dimensionality reduction methods have defects in measuring neighborhood relationships, a novel margin-based dimensionality reduction method, named Margin Discriminant Projection (MDP), was proposed. Depending on the neighbor structure of the class mean vectors, the boundary vectors of the class edges were defined by the heterogeneous neighbor relations of the class center means. On this basis, the between-class scatter matrix was redefined, and the within-class scatter matrix was constructed by the global method. A class margin criterion was established based on discriminant analysis, and the discriminant information of samples in the projection space was enhanced by maximizing the class margin. Expression recognition experiments on the JAFFE and Extended Cohn-Kanade data sets compared MDP with Principal Component Analysis (PCA), Maximum Margin Criterion (MMC) and Marginal Fisher Analysis (MFA). The experimental results show that the proposed method can extract more distinguishable low-dimensional features with relatively higher efficiency, and MDP has better classification accuracy than the other methods.
    Trend prediction of public opinion propagation based on parameter inversion — an empirical study on Sina micro-blog
    LIU Qiaoling, LI Jin, XIAO Renbin
    2017, 37(5):  1419-1423.  DOI: 10.11772/j.issn.1001-9081.2017.05.1419
    Concerning that existing research on public opinion propagation models is seldom combined with practical opinion data, while digging out the inherent laws of public opinion propagation from opinion big data is becoming an urgent problem, a parameter inversion algorithm for a public opinion propagation model using a neural network was proposed based on practical opinion big data. A network opinion propagation model was constructed by improving the classical Susceptible-Infective-Recovered (SIR) epidemic model. Based on this model, the parameter inversion algorithm was used to predict the public opinion trends of actual cases. Compared with the Markov prediction model, the proposed algorithm can accurately predict the specific heat value of public opinion. The experimental results show that the proposed algorithm has certain superiority in prediction and can be used for data fitting, process simulation and trend prediction of network emergency spreading.
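The classical SIR system that the opinion model builds on can be integrated with a simple forward-Euler scheme. The sketch below shows only the standard SIR dynamics (dS/dt = -βSI, dI/dt = βSI - γI, dR/dt = γI), not the paper's improved model or its neural-network parameter inversion; β, γ and the initial conditions are illustrative.

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One forward-Euler step of the classical SIR system."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + dt * ds, i + dt * di, r + dt * dr

def simulate(beta=0.5, gamma=0.1, steps=1000):
    """Run an outbreak from 1% initially 'infected' (opinion-spreading)
    users and track the peak fraction of spreaders."""
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
        peak = max(peak, i)
    return s, i, r, peak
```

In a parameter-inversion setting, β and γ would be the unknowns fitted so that the simulated curve matches the observed opinion-heat series.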
    Mining algorithm of maximal fuzzy frequent patterns
    ZHANG Haiqing, LI Daiwei, LIU Yintian, GONG Cheng, YU Xi
    2017, 37(5):  1424-1429.  DOI: 10.11772/j.issn.1001-9081.2017.05.1424
    Combinatorial explosion and the effectiveness of mining results are the essential challenges of meaningful pattern extraction. A Maximal Fuzzy Frequent Pattern Tree algorithm (MFFP-Tree) based on a base pattern structure and the uncertainty of items was proposed. Firstly, the fuzziness of items was analyzed comprehensively, the fuzzy support was given, the fuzzy weights of items in the transaction data set were analyzed, and the candidate item set was trimmed according to a fuzzy pruning strategy. Secondly, the database was scanned only once to build the FFP-Tree, and the overhead of pattern extraction was reduced by the fuzzy pruning strategy; an FFP-array structure was used to streamline the search and further reduce the space and time complexity. The experimental results on benchmark datasets reveal that the proposed MFFP-Tree outperforms the PADS and FPMax* algorithms: its time complexity is better by a factor of two up to one order of magnitude, and its space complexity is better by one to two orders of magnitude, on different datasets.
    Super-resolution algorithm for remote sensing images based on compressive sensing in wavelet domain
    YANG Xuefeng, CHENG Yaoyu, WANG Gao
    2017, 37(5):  1430-1433.  DOI: 10.11772/j.issn.1001-9081.2017.05.1430
    Focused on the issue that complex image textures cannot be fully expressed by a single dictionary in image Super-Resolution (SR) reconstruction, a remote sensing image super-resolution algorithm using multiple dictionaries based on compressive sensing and wavelet theory was proposed. Firstly, the K-Singular Value Decomposition (K-SVD) algorithm was used to train a separate dictionary for each frequency band in the wavelet domain. Secondly, the initial SR image was obtained by using a global constraint. Finally, the sparse solution over the multiple wavelet-domain dictionaries was computed using the Orthogonal Matching Pursuit (OMP) algorithm. The experimental results show that the proposed algorithm produces better subjective visual effects than the single-dictionary-based algorithm, with the Peak Signal-to-Noise Ratio (PSNR) and the Structural SIMilarity (SSIM) index increasing by more than 2.8 dB and 0.01 respectively. The computation time is reduced since the dictionaries can be reused.
    Echocardiogram view recognition using deep convolutional neural network
    TAO Pan, FU Zhongliang, ZHU Kai, WANG Lili
    2017, 37(5):  1434-1438.  DOI: 10.11772/j.issn.1001-9081.2017.05.1434
    A deep model for automatic recognition of standard echocardiographic views based on a deep convolutional neural network was proposed, and its effectiveness was analyzed by visualizing class activation maps. To overcome the shortcoming that the fully connected layer occupies most of the parameters of the model, spatial pyramid mean pooling was used to replace the fully connected layer, obtaining more spatial structure information while reducing the model parameters and the risk of over-fitting. An attention mechanism was introduced into the model visualization process through class saliency regions. The robustness and effectiveness of the deep convolutional neural network model were demonstrated on the task of recognizing standard echocardiographic views. Visualization analysis of echocardiograms shows that the decision basis of the improved deep model is consistent with the standard view classification made by sonographers, which indicates the validity and practicability of the proposed method.
    Indoor robot localization and 3D dense mapping based on ORB-SLAM
    HOU Rongbo, WEI Wu, HUANG Ting, DENG Chaofeng
    2017, 37(5):  1439-1444.  DOI: 10.11772/j.issn.1001-9081.2017.05.1439
    In indoor robot localization and 3D dense mapping, existing methods cannot satisfy the requirements of high-precision localization and large-scale, rapid mapping. The ORB-SLAM (Oriented FAST and Rotated BRIEF-Simultaneous Localization And Mapping) algorithm, which has three parallel threads including tracking, map building and relocalization, was used to estimate the three-dimensional (3D) pose of the robot, and a 3D dense point cloud was then obtained by using the Kinect depth camera. A key-frame extraction method in the spatial domain was introduced to eliminate redundant frames, and a sub-map method was proposed to reduce the cost of mapping, thereby improving the overall speed of the algorithm. The experimental results show that the proposed method can locate the robot accurately over a large range: within a range of 50 meters, the root-mean-square error of the robot position is 1.04 m, namely an error of 2%; the overall speed is 11 frames/s, and the localization speed reaches 17 frames/s. The proposed method can meet the requirements of high-precision, large-scale and rapid indoor robot localization and 3D dense mapping.
    Improved robust OctoMap based on full visibility model
    LIU Jun, YUAN Peiyan, LI Yongfeng
    2017, 37(5):  1445-1450.  DOI: 10.11772/j.issn.1001-9081.2017.05.1445
    An improved robust OctoMap based on a full visibility model was proposed to meet the accuracy needs of 3D maps for mobile robot autonomous navigation, and it was applied to Kinect-based RGB-D SLAM (Simultaneous Localization And Mapping). First of all, connectivity was judged by considering the relative position of the camera and the target voxel together with the map resolution, to obtain the number and locations of the adjacent voxels that satisfy connectivity. Secondly, according to the different connectivity cases, the visibility model of the target voxel was built to establish a more universal full visibility model, which effectively overcomes the limitations of the robust OctoMap visibility model and improves accuracy. Next, the simple depth error model was replaced by a Kinect sensor depth error model based on a Gaussian mixture model, to further overcome the effect of sensor measurement error on map accuracy and reduce the uncertainty of the map. Finally, the Bayesian formula and a linear interpolation algorithm were combined to update the occupancy probability of each node in the octree and build the octree-based volumetric occupancy map. The experimental results show that the proposed method can effectively overcome the influence of Kinect depth error on map precision and reduce map uncertainty, and the map accuracy is obviously improved compared with robust OctoMap.
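The per-node Bayesian occupancy update in OctoMap-style maps is conventionally done in log-odds form with clamping, which is the mechanism the abstract's probability updates build on. The sketch below shows that standard rule only, not the paper's full visibility model or GMM error model; the clamp bounds and measurement probabilities are illustrative.

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_voxel(l_prev, p_meas, l_min=-2.0, l_max=3.5):
    """Clamped log-odds occupancy update: add the measurement's
    log-odds to the node's current value, clamping so the node can
    still change state after many consistent observations."""
    l = l_prev + logodds(p_meas)
    return max(l_min, min(l_max, l))

def prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

Working in log-odds turns the Bayesian product of measurement likelihoods into a cheap addition per observation, which is why octree occupancy maps store log-odds rather than probabilities.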
    Robot hand-eye calibration by convex relaxation global optimization
    LI Wei, LYU Naiguang, DONG Mingli, LOU Xiaoping
    2017, 37(5):  1451-1455.  DOI: 10.11772/j.issn.1001-9081.2017.05.1451
    Hand-eye calibration based on nonlinear optimization cannot guarantee that the objective function converges to the global minimum when there are errors in both the robot forward kinematics and the camera extrinsic calibration. To solve this problem, a new hand-eye calibration algorithm based on quaternion theory and convex relaxation global optimization was proposed. Considering the critical factor of the angle between the rotation axes of different inter-station motions of the manipulator, an optimal set of relative movements was selected from the calibration data by the RANdom SAmple Consensus (RANSAC) approach. Then, the rotation matrix was parameterized by a quaternion, a polynomial geometric error objective function and constraints were established, and the hand-eye transformation matrix was solved to global optimality with a Linear Matrix Inequality (LMI) convex relaxation global optimization algorithm. Experimental validation on real data was provided. Compared with the classical quaternion nonlinear optimization algorithm, the proposed algorithm obtains the globally optimal solution; the geometric mean error of the hand-eye transformation matrix is no more than 1.4 mm, and the standard deviation is less than 0.16 mm.
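The quaternion parameterization of the rotation that the abstract mentions is the standard unit-quaternion-to-matrix map; a minimal sketch is below. This shows only the well-known conversion, not the paper's LMI relaxation; normalizing inside the function is a design choice made here for robustness to slightly non-unit inputs.

```python
def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) to a 3x3 rotation matrix, the
    parameterization used for the hand-eye rotation unknown."""
    w, x, y, z = q
    n = (w * w + x * x + y * y + z * z) ** 0.5
    w, x, y, z = w / n, x / n, y / n, z / n   # enforce unit norm
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ]
```

Because every entry of the matrix is a polynomial in (w, x, y, z), the geometric error objective becomes polynomial in the quaternion, which is what makes the convex (LMI) relaxation applicable.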
    Rapid prototyping method for bone tissue based on medical image surface rendering reconstruction
    LIU Siqi, ZHANG Laifeng, FAN Licheng, SHENG Xiaoming
    2017, 37(5):  1456-1459.  DOI: 10.11772/j.issn.1001-9081.2017.05.1456
    Concerning the problems of complex contour path generation and low slicing efficiency in the rapid prototyping of artificial bone tissue, a method to simplify the slicing process of triangle meshes was proposed. The medical image sequences were reconstructed by the Marching Cubes (MC) algorithm, and the triangle meshes were grouped into triangle arrays according to the order of the reconstruction process. Then, the intersection points between the slice plane and the triangle arrays were calculated by edge tracking. The slicing efficiency of the simplified process was found to be 4.65% higher on average than that of the triangular-mesh STereoLithography (STL) model. The experimental results indicate that the proposed method can generate contour data for 3D printing directly from medical image sequences of human bone tissue, so as to realize the rapid prototyping of bone tissue.
    Non-stationary subtle motion magnification based on S transform
    LEI Lin, LI Lepeng, YANG Min, DONG Fangmin, SUN Shuifa
    2017, 37(5):  1460-1465.  DOI: 10.11772/j.issn.1001-9081.2017.05.1460
    Existing Eulerian video subtle motion magnification methods do not automatically detect the motion information in a video: appropriate parameters for processing the motion information, such as the filter cut-off frequencies and the magnification factor, must be selected when realizing motion magnification, and for a general video these parameters usually cannot be determined directly but only by trial and error. In this paper, an automatic magnification method for non-stationary subtle motion in video based on the S transform was proposed. The instantaneous parameters of the band-pass filter were automatically determined based on the S transform and a corresponding dynamic filter was designed; on this basis, automated subtle motion magnification was achieved. Firstly, the instantaneous frequency of the subtle motion in the video was obtained by the S transform. Then the dynamic band-pass filter was used to process different frequencies at different times. Finally, the effective motion information passed by the band-pass filter was magnified to achieve subtle motion magnification. In addition, to analyze the anti-noise performance, a method for evaluating the signal-to-noise ratio of a video region was proposed. The experimental results show that when an actual video is magnified, the proposed method can automatically obtain parameters such as the filter settings and the magnification factor according to the change of the frequency of the motion information, without manual participation. After motion magnification, the amplification effect of the moving target can be displayed dynamically; at the same time, the accurate dynamic filter can suppress noise to some extent and makes the motion magnification effect better.
    Long-term visual object tracking algorithm based on correlation filter
    ZHU Mingmin, HU Maohai
    2017, 37(5):  1466-1470.  DOI: 10.11772/j.issn.1001-9081.2017.05.1466
    Abstract ( )   PDF (759KB) ( )
    References | Related Articles | Metrics
    Focusing on the issue that the Correlation Filter (CF) performs poorly when tracking fast-moving objects, a Long-term Kernelized Correlation Filter (LKCF) tracking algorithm combining optical flow with the Kernel Correlation Filter (KCF) was proposed. Firstly, while tracking with the KCF tracker, the Peak-to-Sidelobe Ratio (PSR) of the response map was calculated in every frame. Secondly, when the PSR in the current frame fell below a threshold, which indicates tracking failure, optical flow starting from the position in the last frame was used to compute a coarse position. Finally, an accurate position was recomputed by the tracker around the coarse position. The experimental results, compared with four tracking algorithms, namely Compressive Tracking (CT), Tracking-Learning-Detection (TLD), KCF and Spatio-Temporal Context (STC), show that the proposed algorithm is optimal in distance precision and success rate, which are 6.2 percentage points and 5.1 percentage points higher than those of KCF respectively. In other words, the proposed algorithm is robust when tracking fast-moving objects.
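    The PSR used as the tracking-failure test above can be sketched as follows: the peak of the correlation response is compared against the mean and standard deviation of the remaining ("sidelobe") region. The 11x11 exclusion window around the peak is an assumed choice for illustration, not a value taken from the paper.

```python
import numpy as np

def psr(response, peak_exclude=5):
    """Peak-to-Sidelobe Ratio of a correlation response map:
    (peak - sidelobe mean) / sidelobe std, where the sidelobe is
    everything outside a small window around the peak."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    r0 = max(peak_idx[0] - peak_exclude, 0)
    c0 = max(peak_idx[1] - peak_exclude, 0)
    mask[r0:peak_idx[0] + peak_exclude + 1,
         c0:peak_idx[1] + peak_exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
```

    A sharp, isolated peak yields a large PSR (confident tracking); a flat or noisy response yields a small one, which is what triggers the optical-flow fallback.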
    Image denoising via weighted nuclear norm minimization and Gaussian mixed model
    SUN Shaochao
    2017, 37(5):  1471-1474.  DOI: 10.11772/j.issn.1001-9081.2017.05.1471
    Abstract ( )   PDF (635KB) ( )
    References | Related Articles | Metrics
    The Nonlocal Self-Similarity (NSS) prior plays an important role in image restoration, but how to make full use of this prior to improve restoration performance is worthy of further research. An image denoising method via weighted nuclear norm minimization and a Gaussian Mixed Model (GMM) was proposed. Firstly, a GMM was trained on the NSS image blocks of clean natural images, and the trained GMM was then used to guide the grouping of NSS image blocks in the degraded image. Then, weighted nuclear norm minimization was used to realize image denoising; an extended model was proposed by modifying the fidelity term, and the corresponding convergent algorithm was given. The simulation results show that, compared with some advanced algorithms such as Block Matching with 3D filtering (BM3D), Learned Simultaneous Sparse Coding (LSSC) and Weighted Nuclear Norm Minimization (WNNM), the proposed method improves the Peak Signal-to-Noise Ratio (PSNR) by 0.11 dB to 0.49 dB.
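    The core step of weighted nuclear norm minimization, soft-thresholding the singular values of a patch-group matrix with weights inversely proportional to their magnitude, can be sketched as below. The weight form `c / (s + eps)` is the common WNNM choice and is assumed here for illustration, not quoted from the paper.

```python
import numpy as np

def weighted_svt(Y, c=2.0, eps=1e-8):
    """One weighted singular-value thresholding step on a patch
    matrix Y: large (signal) singular values get small weights and
    are shrunk little; small (noise) singular values get large
    weights and are suppressed."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = c / (s + eps)             # inverse-magnitude weights
    s_shrunk = np.maximum(s - w, 0)
    return U @ np.diag(s_shrunk) @ Vt
```

    For example, applying it to `np.diag([10.0, 0.5])` with `c=2` shrinks the singular values to 9.8 and 0: the dominant component survives nearly intact while the weak one is removed.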
    Face recognition based on complement null-space and nearest space distance
    YUAN Haojie, SUN Guiling, XU Yi, ZHENG Bowen
    2017, 37(5):  1475-1480.  DOI: 10.11772/j.issn.1001-9081.2017.05.1475
    Abstract ( )   PDF (924KB) ( )
    References | Related Articles | Metrics
    In order to solve the problem that classifiers do not make full use of the differences between different classes of face samples in face recognition, an effective face recognition method, the Complement Null-Space (CNS) algorithm, was proposed; furthermore, another method combining CNS with the nearest space Distance (CNSD) was proposed. Firstly, the subspace and the complement null-space of each class of training images were constructed. Secondly, the distances from the test image to each class subspace and to each class complement null-space were calculated. Finally, the test image was assigned to the class with the minimum subspace distance and the maximum complement null-space distance. On the ORL and AR face databases, the recognition rates of CNS and CNSD are much higher than those of the Nearest Neighbor (NN), Nearest Space (NS) and Nearest-Farthest Subspace (NFS) methods when the number of training samples is small, and slightly higher when the number of samples is large. The simulation results show that the proposed algorithms make full use of the differences between different classes of images and have good recognition ability.
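    The nearest-subspace half of the classification rule (the paper additionally scores the complement null-space distance, which is omitted here) can be sketched via QR projection: the distance from a test vector to a class is the norm of its residual after projection onto the span of that class's training samples.

```python
import numpy as np

def subspace_distance(x, class_samples):
    """Distance from x to the subspace spanned by a class's training
    samples (stored as columns), via an orthonormal basis."""
    Q, _ = np.linalg.qr(class_samples)
    proj = Q @ (Q.T @ x)          # orthogonal projection onto the span
    return np.linalg.norm(x - proj)

def nearest_subspace(x, classes):
    """Assign x to the class whose sample subspace is closest."""
    dists = [subspace_distance(x, C) for C in classes]
    return int(np.argmin(dists))
```

    With two classes spanning disjoint coordinate planes of R^4, a test vector lying mostly in the first plane is assigned to class 0 even though it has a small component outside it.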
    Improved dark channel prior dehazing algorithm combined with atmospheric light and transmission
    CHEN Gaoke, YANG Yan, ZHANG Baoshan
    2017, 37(5):  1481-1484.  DOI: 10.11772/j.issn.1001-9081.2017.05.1481
    Abstract ( )   PDF (851KB) ( )
    References | Related Articles | Metrics
    Since the dark channel prior estimates the transmission and atmospheric light poorly in bright regions, an improved dehazing algorithm combining atmospheric light and transmission estimation was proposed. Based on an analysis of the characteristics of the Gaussian function, a preliminary transmission was estimated by applying a Gaussian function to the dark channel prior of the foggy image, and maximum and minimum operations were used to eliminate the block effect. Next, the atmospheric light was obtained from an atmospheric light description area acquired by a halo operation and a morphological dilation operation. Finally, a clear image was reconstructed according to the atmospheric scattering model. The experimental results show that the proposed algorithm can effectively remove fog from images and recovers dense fog better than comparison algorithms such as the dark channel prior; meanwhile, it runs faster and is suitable for real-time applications.
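    The dark channel that the transmission estimate starts from can be sketched directly: the per-pixel minimum over the color channels followed by a local minimum filter. The 15-pixel patch size is an assumed typical value, and the brute-force double loop is for clarity rather than speed.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel prior of an H x W x 3 image: minimum over the
    RGB channels, then a minimum filter over a local patch."""
    min_rgb = image.min(axis=2)
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark
```

    In haze-free images this map is close to zero almost everywhere, which is exactly the regularity the prior exploits; a single dark pixel darkens the whole patch around it.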
    Robust vehicle route optimization for multi-depot hazardous materials transportation
    XIONG Ruiqi, MA Changxi
    2017, 37(5):  1485-1490.  DOI: 10.11772/j.issn.1001-9081.2017.05.1485
    Abstract ( )   PDF (1056KB) ( )
    References | Related Articles | Metrics
    Focused on the issue that hazardous materials transportation routes are excessively sensitive to uncertain factors, a robust vehicle route optimization method for multi-depot hazardous materials transportation was proposed. Firstly, a robust optimization model with the objectives of minimizing transportation risk and minimizing transportation cost was designed under the Bertsimas robust discrete optimization theory. Secondly, on the basis of the Strength Pareto Evolutionary Algorithm 2 (SPEA2), a multi-objective genetic algorithm with three-stage encoding was designed for the model. Then, different crossover and mutation operations were performed on the different segments of the chromosomes during genetic manipulation, which effectively avoided generating infeasible solutions during population evolution. Finally, part of the road network of Xifeng district, Qingyang was chosen as an empirical example, and a distribution plan was produced for the transportation process to form specific transportation routes. The results show that robust hazardous materials transportation routes under the multi-depot situation can be obtained quickly by the proposed model and algorithm.
    Application of binary clustering algorithm to crowd evacuation simulation based on social force
    LI Yan, LIU Hong, ZHENG Xiangwei
    2017, 37(5):  1491-1495.  DOI: 10.11772/j.issn.1001-9081.2017.05.1491
    Abstract ( )   PDF (985KB) ( )
    References | Related Articles | Metrics
    Pedestrian crowds need to be divided into groups by clustering before the Social Force Model (SFM) can be used to simulate crowd evacuation. However, k-medoids and STatistical INformation Grid (STING), two traditional clustering algorithms, cannot meet the requirements of efficiency and accuracy. To solve this problem, a new method named Binary Clustering Algorithm (BCA) was proposed, which combines two kinds of algorithms, center-point clustering and grid clustering, and uses dichotomy to divide the grid without repeated clustering. First of all, the data was divided into grids by dichotomy. Next, core grids were selected according to the data density in each grid. Then, each core grid was used as a center and its neighboring grids were clustered around it. Finally, the residual grids were merged according to the nearest-neighbor principle. The experimental results show that in clustering time, BCA needs only 48.3% of the time of the STING algorithm and less than 14% of that of the k-medoids algorithm; in clustering accuracy, k-medoids reaches only 50% of BCA, and STING does not reach 90% of BCA. Therefore, BCA outperforms both k-medoids and STING in efficiency and accuracy.
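    The grid-partition and core-grid-selection stages can be sketched as follows. Computing the 2^depth-way cell index directly is used here as a shortcut equivalent to `depth` successive bisections of each axis, and the density threshold for core grids is an illustrative parameter, not the paper's rule.

```python
import numpy as np

def grid_cells(points, depth=3):
    """Partition 2-D points into a 2^depth x 2^depth grid (the result
    of bisecting each axis `depth` times) and count points per cell."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    n = 2 ** depth
    idx = np.clip(((pts - lo) / (hi - lo + 1e-12) * n).astype(int), 0, n - 1)
    counts = np.zeros((n, n), dtype=int)
    for i, j in idx:
        counts[i, j] += 1
    return counts

def core_cells(counts, min_density):
    """Cells whose point count reaches the density threshold become
    cluster seeds (core grids)."""
    return list(zip(*np.nonzero(counts >= min_density)))
```

    Two well-separated pedestrian groups then show up as two dense cells, each of which would seed one cluster before the remaining sparse cells are merged to their nearest seed.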
    Bi-direction pedestrian flow by the spread of the influence of emergencies
    LIANG Mingfu, FANG Shaomei, HUANG Zhongzhan, CAI Qinyi
    2017, 37(5):  1496-1502.  DOI: 10.11772/j.issn.1001-9081.2017.05.1496
    Abstract ( )   PDF (1142KB) ( )
    References | Related Articles | Metrics
    When an emergency happens, pedestrian walking behavior changes with the emergency and its influence, so research on pedestrian walking characteristics in emergencies can improve evacuation efficiency. Aiming at the shortcomings of data acquisition in existing research, real pedestrian scene videos were processed, the relevant data were extracted, and the general walking characteristics of pedestrians without emergencies were analyzed. For pedestrian flow in emergencies, the spread of the emergency influence and the pedestrian self-organization phenomena were described by the k-nearest neighbor algorithm and a resultant force, and a novel Cellular Automata (CA) model, in which cells were subject to the combined influence of normal walking, emergencies and safety marks, was proposed. Bi-directional pedestrian evacuation in emergencies was simulated with the proposed model. The experimental results show that when the separation distance of safety marks in a narrow pedestrian passage is 0, 10 or 20 cells, the distribution distance of the safety marks has no obvious effect on pedestrian evacuation. The study of whether there is influence among the population shows that the evacuation effect is mainly affected by the spread of the emergency through nearby pedestrians; an emergency impact that is too large or too small causes congestion and is not conducive to crowd evacuation. The simulation results are consistent with real pedestrian evacuation scenarios.
    Redundant group based trajectory abstraction algorithm
    WEI Hao, XU Qing
    2017, 37(5):  1503-1506.  DOI: 10.11772/j.issn.1001-9081.2017.05.1503
    Abstract ( )   PDF (638KB) ( )
    References | Related Articles | Metrics
    In order to cluster the trajectory data collected by video surveillance equipment and detect anomalies in it, a novel trajectory abstraction algorithm was proposed. Trajectories were first resampled using the Jensen-Shannon Divergence (JSD) measure to improve the accuracy of the similarity measurement between trajectories; the subsequent non-local denoising requires resampled trajectories of equal length, i.e. with the same number of sampling points. The similarity thresholds of the trajectories were determined adaptively, and non-local means were used to cluster the trajectory data and identify abnormal trajectories. From the perspective of signal processing, each group of trajectory data was filtered by hard thresholding to obtain the summary trajectory. The proposed algorithm is insensitive to the order of input trajectories and provides visual multi-scale abstractions of trajectory data. Compared with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, the proposed algorithm performs better in terms of precision, recall and F1-measure.
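    The JSD measure used for the resampling step can be sketched as below. The natural-log convention for the underlying KL divergence is assumed, so the divergence is symmetric and bounded by ln 2; the small `eps` guards against zero bins.

```python
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    """JSD(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m) with m = (p + q) / 2,
    computed on two histograms normalized to probability vectors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

    Identical distributions give 0 and fully disjoint ones give ln 2, so the value is directly usable as a bounded dissimilarity between trajectory-point distributions.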
    Attitude calculation algorithm of low-cost aircraft based on combined filter
    WANG Shouhua, DENG Guihui, JI Yuanfa, SUN Xiyan
    2017, 37(5):  1507-1511.  DOI: 10.11772/j.issn.1001-9081.2017.05.1507
    Abstract ( )   PDF (768KB) ( )
    References | Related Articles | Metrics
    In low-cost aircraft attitude detection systems, the complementary filter is widely used because of its simple principle and low computational complexity. Aiming at the problem that the accelerometer cannot distinguish gravitational acceleration from motion acceleration, which causes attitude calculation errors in complementary filtering, an attitude calculation algorithm combining a complementary filter with an adaptive limiting filter was proposed for low-cost aircraft, and a design method for the threshold of the adaptive limiting filter was given. The angular velocity output by the gyroscope and the acceleration output by the accelerometer were fused to obtain the threshold of the limiting filter, and the normalized accelerometer output increment after limiting filtering then replaced the accelerometer input of the original complementary filter, improving the accuracy of attitude determination under uniform motion. The actual system test shows that the proposed algorithm has high accuracy and low cost, and is easy to implement in low-cost aircraft control systems.
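    A single step of the underlying complementary filter can be sketched as follows. The blend coefficient `k = 0.98` and the one-axis (single tilt angle) setting are illustrative assumptions, and the paper's adaptive limiting of the accelerometer term is omitted here.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, k=0.98):
    """One update step: integrate the gyro for the high-frequency
    part and blend in the accelerometer angle for the low-frequency,
    drift-correcting part."""
    return k * (angle + gyro_rate * dt) + (1 - k) * accel_angle

# Stationary board: the gyro reads zero, the accelerometer reads the
# true tilt, and the estimate converges toward the accelerometer angle.
angle = 0.0
true_tilt = math.radians(10)
for _ in range(500):
    angle = complementary_filter(angle, 0.0, true_tilt, dt=0.01)
```

    The proposed algorithm pre-filters `accel_angle` with the adaptive limiting filter before this blend, so transient motion acceleration cannot pull the low-frequency term away from gravity.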
    Prediction of eight-class protein secondary structure based on deep learning
    ZHANG Lei, LI Zheng, ZHENG Fengbin, YANG Wei
    2017, 37(5):  1512-1515.  DOI: 10.11772/j.issn.1001-9081.2017.05.1512
    Abstract ( )   PDF (644KB) ( )
    References | Related Articles | Metrics
    Predicting protein secondary structure is an important issue in structural biology. Aiming at eight-class protein secondary structure prediction, a novel deep learning prediction algorithm was proposed by combining a recurrent neural network with a feed-forward neural network. A bidirectional recurrent neural network was used to model local and long-range interactions between amino acid residues in a protein. To predict the eight classes of secondary structure, the outputs of the hidden layer of the bidirectional recurrent neural network were fed into a three-layer feed-forward neural network. The experimental results show that the proposed method achieves a Q8 accuracy of 67.9% on the CB513 dataset, which is significantly better than SSpro8 and SC-GSN (Supervised Convolutional-Generative Stochastic Network).
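    A forward pass of this kind of architecture can be sketched in NumPy with random weights. The layer sizes, the single softmax output layer standing in for the three-layer feed-forward network, and the plain tanh recurrence are all illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h, n_classes = 10, 20, 16, 8  # residues, features, hidden, Q8 labels

def rnn_pass(x, Wx, Wh):
    """Run a simple tanh RNN over the sequence x, returning all
    hidden states (one per residue)."""
    h, hs = np.zeros(Wh.shape[0]), []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ Wx + h @ Wh)
        hs.append(h)
    return np.array(hs)

Wx_f, Wh_f = rng.normal(size=(d_in, d_h)) * 0.1, rng.normal(size=(d_h, d_h)) * 0.1
Wx_b, Wh_b = rng.normal(size=(d_in, d_h)) * 0.1, rng.normal(size=(d_h, d_h)) * 0.1
W_out = rng.normal(size=(2 * d_h, n_classes)) * 0.1

x = rng.normal(size=(T, d_in))              # one protein window of features
h_f = rnn_pass(x, Wx_f, Wh_f)               # forward direction
h_b = rnn_pass(x[::-1], Wx_b, Wh_b)[::-1]   # backward direction, realigned
h = np.concatenate([h_f, h_b], axis=1)      # per-residue bidirectional context
logits = h @ W_out                          # output layer over the 8 classes
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
```

    Each residue thus receives a probability distribution over the eight secondary-structure classes that depends on context from both sequence directions.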
    Adaptive bee colony algorithm combined with BFGS algorithm for microwave circuit harmonic balance analysis
    NAN Jingchang, ZHANG Yunxue, GAO Mingming
    2017, 37(5):  1516-1520.  DOI: 10.11772/j.issn.1001-9081.2017.05.1516
    Abstract ( )   PDF (796KB) ( )
    References | Related Articles | Metrics
    In view of the initial-value limitation of traditional algorithms and the slow convergence of intelligent algorithms in harmonic balance analysis, an adaptive bee colony algorithm with a local search strategy based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm was proposed. On the basis of the basic bee colony algorithm, a nonlinear dynamic adjustment factor was introduced to replace the random variable in the search formula, improving the adaptability of the search. Meanwhile, the BFGS algorithm was applied in the later stage of the bee colony algorithm to speed up local search. The simulation results show that, compared with the standard bee colony algorithm, the number of iterations of the improved algorithm is reduced by 51.9%, and the proposed algorithm has better convergence performance than the traditional BFGS algorithm and some other improved intelligent algorithms.
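    The bee colony neighborhood-search step with the random coefficient replaced by a decaying adjustment factor can be sketched as below. The standard update is v = x_i + phi * (x_i - x_k) with phi drawn uniformly; the exponential decay schedule used here is a hypothetical example of such a nonlinear dynamic factor, not the paper's formula.

```python
import math
import random

def adaptive_candidate(x_i, x_k, iteration, max_iter):
    """Bee colony candidate-solution update v = x_i + phi*(x_i - x_k),
    where the usual uniform random phi is replaced by a nonlinearly
    decaying adjustment factor, shifting the search from exploration
    (large steps) to exploitation (small steps) over the run."""
    phi_max, phi_min = 1.0, 0.1
    # Hypothetical nonlinear (exponential) decay schedule for phi.
    phi = phi_min + (phi_max - phi_min) * math.exp(-3.0 * iteration / max_iter)
    sign = random.choice((-1.0, 1.0))
    return x_i + sign * phi * (x_i - x_k)
```

    Early iterations move a full neighbor-distance away from `x_i`, while late iterations make only small refinements; in the proposed method BFGS then takes over the final local polish.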
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn