
Table of Contents

    01 January 2014, Volume 34 Issue 1
    Network and communications
    High-efficiency and low-delay address assignment algorithm for LR-WPAN mesh networks
    REN Zhi SUO Jianwei LIU Yan LEI Hongjiang
    2014, 34(1):  1-3.  DOI: 10.11772/j.issn.1001-9081.2014.01.0001
    Abstract | PDF (552KB)
    To eliminate the redundant control overhead and allocation delay in the address assignment algorithm of Low-Rate Wireless Personal Area Network (LR-WPAN) mesh, a High-efficiency and Low-delay Address Assignment (HLAA) algorithm for LR-WPAN mesh was proposed. By performing address assignment within the network-access procedure and deleting redundant fields from the assignment messages, HLAA reduced both allocation time and control overhead while still fulfilling the address-allocation function. The simulation results show that, compared with the original algorithm, HLAA decreases the control overhead by 22.15% and reduces the allocation time by 7.68%.
    Geographic routing algorithm based on directional data transmission for opportunistic networks
    REN Zhi WANG Lulu YANG Yong LEI Hongjiang
    2014, 34(1):  4-7.  DOI: 10.11772/j.issn.1001-9081.2014.01.0004
    Abstract | PDF (724KB)
    The DIrection based Geographic routing scheme (DIG), an opportunistic-network routing algorithm based on geographic location information, suffers from large delay and a low success rate, because it keeps data waiting in the cache too long and cannot guarantee that the data-carrying node moves toward the destination node. To solve these problems, a Geographic Routing algorithm based on Directional Data Transmission (GRDDT) was proposed. The algorithm adopted a new data forwarding mechanism and made more effective use of the neighbor list information, effectively avoiding the above circumstances, so as to reduce data packet transmission delay and improve the success rate. OPNET simulation results show that the transmission delay and success rate of GRDDT are improved compared with DIG.
    Distributed time division multiple access scheduling strategy for wireless sensor networks
    LIU Tao CHEN Yihong TAN Ying CHEN Yaqian
    2014, 34(1):  8-12.  DOI: 10.11772/j.issn.1001-9081.2014.01.0008
    Abstract | PDF (705KB)
    In a periodic report Wireless Sensor Network (WSN), heavy data traffic very easily leads to serious transmission collisions. This paper proposed a distributed Time Division Multiple Access (TDMA) scheduling strategy, called DTSS, to construct an appropriate transmission schedule that avoided transmission collisions. DTSS took advantage of a distributed competitive algorithm to build the transmission schedule. Each node selected its next-hop forwarding node and competed for a transmission time slot with its contending nodes. After the construction of the schedule, the nodes sent and received the data according to the schedule. The simulation results confirm DTSS avoids transmission collisions, decreases the energy consumption of nodes and significantly improves the network lifetime.
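    The abstract describes slot competition without giving DTSS's exact rules, so the sketch below is only a hedged, centralized stand-in for the distributed competition: each node claims the smallest TDMA slot not already taken by any of its contending nodes. The function name and the `contenders` map are invented for this illustration.

```python
import random

def dtss_schedule(nodes, contenders, seed=0):
    """Greedy TDMA sketch: assign each node the smallest slot not
    already claimed by any of its contending nodes."""
    rng = random.Random(seed)
    order = list(nodes)
    rng.shuffle(order)  # stand-in for the order the distributed competition resolves
    slot = {}
    for n in order:
        taken = {slot[c] for c in contenders.get(n, ()) if c in slot}
        s = 0
        while s in taken:
            s += 1
        slot[n] = s
    return slot
```

In a conflict-free schedule produced this way, no node ever shares a slot with a contender, which is the collision-avoidance property the abstract claims for DTSS.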
    Sparse channel estimation method based on compressed sensing for OFDM cooperation system
    ZHANG Aihua LI Chunlei GUI Guan
    2014, 34(1):  13-17.  DOI: 10.11772/j.issn.1001-9081.2014.01.0013
    Abstract | PDF (702KB)
    A compressed channel sensing method was proposed for Orthogonal Frequency Division Multiplexing (OFDM) based Amplify-and-Forward (AF) cooperative communication networks over frequency-selective fading channels. First, by using cyclic matrix theory, the system model was established in a form similar to the traditional point-to-point model, consisting of a cascaded channel vector and a measurement matrix. Then, using compressed sensing theory, the measurement matrix was proven to satisfy the Restricted Isometry Property (RIP) with high probability. Finally, the convolutional channel impulse response was reconstructed with a compressed sensing algorithm. As the numerical examples illustrate, the cooperative channel exhibits an inherent sparse or cluster-sparse structure, which the proposed method can fully exploit. The simulation results confirm that the proposed method provides significant improvement in Mean Square Error (MSE) performance or spectral efficiency compared with traditional linear channel estimation methods.
    Cascading failures in coupled map lattices with harmonious unification hybrid preferential model
    MA Xiujuan ZHAO Haixing
    2014, 34(1):  18-22.  DOI: 10.11772/j.issn.1001-9081.2014.01.0018
    Abstract | PDF (817KB)
    Cascading failures in coupled map lattices with the Harmonious Unification Hybrid Preferential Model (HUHPM) were investigated through simulation in this paper. Two attack strategies, deliberate attack and random attack, were adopted in this fixed-node-number network. According to the simulation results, the HUHPM network is more robust under random attack than under deliberate attack. In addition, the hybrid ratio has an important effect on cascading failures of the HUHPM network. In the deliberate attack case, the network became more robust with the increase of random preferential attachment, whereas in the random attack case, it became more robust with the increase of deterministic preferential attachment. Therefore, in practical applications, the robustness of the HUHPM network can be enhanced by tuning the hybrid ratio.
    Unknown protocol reverse engineering for CCSDS protocol
    HOU Zhongyuan JIAO Jiao ZHU Lei
    2014, 34(1):  23-26.  DOI: 10.11772/j.issn.1001-9081.2014.01.0023
    Abstract | PDF (733KB)
    The Consultative Committee for Space Data Systems (CCSDS) protocol is the mainstream international space-ground link standard for space communication. The reversing of unknown CCSDS-based protocols can be used in at least two areas: one is to analyze unknown communication traffic; the other is to detect and analyze network attacks aiming at space stations and other space entities networked for international space cooperation. Thus, a computer-aided analytical system, covering both architecture design and workflow design, was designed to reverse unknown protocols based on the CCSDS protocol standard framework. Moreover, to improve the efficiency of telegram clustering in the iterative phylogenetic tree step of the workflow, an improved algorithm, called Feedback Dynamic Relaxation Factor-Affinity Propagation (FDRF-AP), was given to solve the unknown communication protocol reversing problem. The simulation results indicate that the algorithm enhances the efficiency of protocol reverse engineering.
    Network and distributed technology
    Integrity check method for fine-grained cloud storage data
    YU Xing HU Demin CHANG Huang
    2014, 34(1):  27-30.  DOI: 10.11772/j.issn.1001-9081.2014.01.0027
    Abstract | PDF (612KB)
    In the cloud storage service, in order to let users know the integrity of their data stored on the cloud server, this paper proposed a fine-grained integrity check method for cloud-stored data. In the proposed method, the file was divided into sub-blocks and then basic blocks; with bilinear pairings and data blocks randomly selected by users for detection, data integrity could be verified an unlimited number of times. Furthermore, by introducing a trusted Third Party Auditor (TPA), disputes between users and the cloud storage provider could be well resolved through public validation of cloud storage data. Afterwards, analyses of the correctness and security of the method were given. Finally, experiments verify that the method can detect the integrity of cloud-stored data effectively.
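    The paper builds its check on bilinear pairings, which let a verifier audit without keeping the data; the far simpler sketch below only illustrates the challenge-response idea behind such spot checks, using keyed hashes instead of pairings. The function names and the keyed-tag design are assumptions of this example, not the paper's scheme.

```python
import hashlib
import hmac
import random

def make_tags(blocks, key):
    # The verifier stores one small HMAC tag per block, not the blocks themselves.
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def spot_check(server_blocks, tags, key, n=2, seed=0):
    # Challenge n randomly chosen blocks and recheck their tags.
    rng = random.Random(seed)
    for i in rng.sample(range(len(tags)), n):
        expect = hmac.new(key, server_blocks[i], hashlib.sha256).digest()
        if expect != tags[i]:
            return False
    return True
```

Random sampling is what makes repeated, cheap audits possible: each challenge touches only a few blocks, yet a server that silently dropped data is caught with high probability over many rounds.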
    Prediction of hard disk failure on cloud computing platforms using SMART within the COG-OS framework
    SONG Yunhua BO Wenyang ZHOU Qi
    2014, 34(1):  31-35.  DOI: 10.11772/j.issn.1001-9081.2014.01.0031
    Abstract | PDF (802KB)
    Hard disks on cloud computing platforms are not fully reliable. This paper proposed to use Self-Monitoring Analysis and Reporting Technology (SMART) logs to predict hard disk failure based on the Classification using lOcal clusterinG with Over-Sampling (COG-OS) framework. First, faultless hard disks were divided into multiple disjoint sample subsets by the DBScan or K-means clustering algorithm. These subsets were then mixed with the sample set of faulty hard disks, and the Synthetic Minority Over-sampling TEchnique (SMOTE) was used to balance the overall sample set. Finally, faulty hard disks were predicted using the LIBSVM classification algorithm. The experimental results show that the method is feasible: COG-OS improves on SMOTE+Support Vector Machine (SVM) in faulty-disk recall and overall performance when K-means is used to divide the faultless samples and LIBSVM with a Radial Basis Function (RBF) kernel is used for prediction.
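    COG-OS combines clustering, SMOTE and LIBSVM; the sketch below shows only SMOTE's core step, placing each synthetic minority sample on the segment between one minority point and a partner. As a stated simplification, the partner is drawn at random rather than from the k nearest minority neighbours as in full SMOTE.

```python
import random

def smote(minority, n_new, seed=0):
    """SMOTE sketch: each synthetic point lies between a minority
    sample and a randomly chosen minority partner."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        a = rng.choice(minority)
        b = rng.choice(minority)  # simplification: any partner, not a k-NN one
        t = rng.random()
        out.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return out
```

Because every synthetic point is a convex combination of two real faulty-disk samples, the oversampled class stays inside the region the real samples span instead of adding arbitrary noise.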
    Selection sequence of parallel folding counter
    LI Yang LIANG Huaguo JIANG Cuiyun CHANG Hao YI Maoxiang FANG Xiangsheng YANG Bin
    2014, 34(1):  36-40.  DOI: 10.11772/j.issn.1001-9081.2014.01.0036
    Abstract | PDF (833KB)
    In order to reduce the test application time and guarantee high test data compression rate, a selection sequence of parallel folding counter was proposed. Selection test sequences were generated by recording group number and in-group number which represented folding index based on the analysis of parallel folding computing theory, so as to avoid generating useless and redundant test sequences. The experimental results on ISCAS benchmark circuits demonstrate the average test compression rate of the proposed scheme is 94.48%, and the average test application time is 15.31% of the similar scheme.
    Fault detection approach for MPSoC by redundancy core
    TANG Liu HUANG Zhangqin HOU Yibin FANG Fengcai ZHANG Huibing
    2014, 34(1):  41-45.  DOI: 10.11772/j.issn.1001-9081.2014.01.0041
    Abstract | PDF (737KB)
    For a better trade-off between fault-tolerance mechanism and fault-tolerance overhead in processor reliability research, a fault detection approach for Multi-Processor System-on-Chip (MPSoC) that placed the calculation task of detecting code on a redundancy core was proposed in this paper. The approach achieved MPSoC failure detection by placing the calculation and comparison parts of the detecting code on the redundancy core. The technique required no additional hardware modification, and shortened the design cycle while reducing performance and memory overheads. The verification experiment was implemented on an MPSoC by fault injection and running multiple benchmark programs. The approach was compared with several previous fault detection methods in terms of capability, area, memory and performance overhead; the experimental results show that it is effective and achieves a better trade-off between performance and overhead.
    Survey on energy-aware green databases
    JIN Peiquan XING Baoping JIN Yong YUE Lihua
    2014, 34(1):  46-53.  DOI: 10.11772/j.issn.1001-9081.2014.01.0046
    Abstract | PDF (1418KB)
    With the trend of global low-carbon development and data-centric computing, studying energy-saving green database systems has become a hot issue for government, industry and academia. However, traditional database systems mainly focus on performance and give little consideration to energy metrics, including energy efficiency and energy proportionality. In this paper, based on a requirement analysis of green database systems, some key issues on this topic were explored, and two critical problems were emphasized: the energy efficiency problem for database systems and the energy proportionality problem for database clusters. Furthermore, some future directions of energy-aware green database systems were pointed out to provide new insights for research into this new area.
    TRAP-4 based continuous data protection system
    WU Hao LIU Xiaojie LUO Peng
    2014, 34(1):  54-57.  DOI: 10.11772/j.issn.1001-9081.2014.01.0054
    Abstract | PDF (726KB)
    Since common continuous data protection systems simply back up the modified data directly and consume a large amount of storage space, this paper presented a continuous data protection system based on Timely Recovery to Any Point-in-time 4 (TRAP-4). The system captured the user's modified data with a volume filter driver, and the data were backed up to the backup center after calculation and compression. The recovery process can restore the data volume to any time point by reverse decompressing and reorganizing the compressed data. Experiments show this system can effectively save storage space compared to the common method. Moreover, whether the block size used to modify the file increases or decreases, the system further reduces the storage space usage.
    Index mechanism supporting location tracing for radio frequency identification mobile objects
    LIAO Jianguo YE Xiaoyu JIANG Jian DI Guoqiang LIU Dexi
    2014, 34(1):  58-63.  DOI: 10.11772/j.issn.1001-9081.2014.01.0058
    Abstract | PDF (867KB)
    As radio frequency communication technology matures and hardware manufacturing costs decrease, Radio Frequency IDentification (RFID) technology has been applied to real-time object monitoring, tracing and tracking. In supply chain applications, there are usually a great number of RFID objects to be monitored and traced, and their locations change constantly, so how to query the locations and location-change histories of RFID objects from the huge volume of RFID data is an urgent problem. Concerning the characteristics of mobile RFID objects and the tracing query requirements in supply chain applications, an effective spatio-temporal index called CR-L was put forward, and its structure and maintenance algorithms, including insertion, deletion, bi-splitting and lazy splitting, were discussed in detail. To support object queries effectively, a new calculation principle for the Minimum Bounding Rectangle (MBR), considering the three-dimensional information of readers, time and objects, was presented to cluster trajectories recorded by the same reader at close times into the same node or neighboring nodes. For trajectory queries, a linked list was designed to link all trajectories belonging to the same object. The experimental results verify that CR-L achieves better query efficiency and lower space overhead than the existing method.
    Weakly supervised method for attribute relation extraction
    YANG Yufei DAI Qi JIA Zhen YI Hongfeng
    2014, 34(1):  64-68.  DOI: 10.11772/j.issn.1001-9081.2014.01.0064
    Abstract | PDF (776KB)
    In order to solve the problem of insufficient training corpus for extracting attribute relations from Chinese encyclopedias, a weakly supervised method was proposed, which needed minimal human intervention. First, semi-structured attribute relations from Chinese encyclopedia entry infoboxes were used to tag entry texts to obtain a training corpus. Second, the training corpus was optimized based on Naive Bayesian theory. Third, a Conditional Random Field (CRF) was used to build the attribute relation extraction model. The F-score on the Hudong encyclopedia datasets was 80.9%. The experimental results show that this method can enhance the quality of the training corpus and achieves better extraction performance.
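    The first step above is a form of distant supervision: infobox attribute-value pairs label any entry sentence in which the value literally appears. The minimal sketch below illustrates that step only (the Naive Bayes filtering and CRF training are omitted); the function name and the toy English data are assumptions of this example.

```python
def distant_labels(sentences, infobox):
    """Tag each sentence with the (attribute, value) pairs from the
    entry's infobox whose value occurs verbatim in the sentence."""
    corpus = []
    for s in sentences:
        labels = [(attr, val) for attr, val in infobox.items() if val in s]
        if labels:
            corpus.append((s, labels))
    return corpus
```

Sentences that match no infobox value are simply dropped, which is why a later filtering stage (Naive Bayes in the paper) is needed to clean up noisy matches.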
    Biclique cryptanalysis of ARIRANG-256
    WEI Hongru ZHEN Yafei WANG Xinyu
    2014, 34(1):  69-72.  DOI: 10.11772/j.issn.1001-9081.2014.01.0069
    Abstract | PDF (623KB)
    The security of the block cipher ARIRANG-256, used in the compression function of ARIRANG, one of the SHA-3 candidates, was analyzed. Based on the key schedule and the encryption structure of the algorithm, 9-round 32-dimensional bicliques were constructed, and with these bicliques, the full 40-round ARIRANG-256 was attacked. The data complexity is 2^32 and the time complexity is 2^510.8. The attack has a very small data requirement and its time complexity is better than exhaustive search.
    Collision attack on Zodiac algorithm
    LIU Qing WEI Hongru PAN Wei
    2014, 34(1):  73-77.  DOI: 10.11772/j.issn.1001-9081.2014.01.0073
    Abstract | PDF (711KB)
    In order to research the ability of the Zodiac algorithm to resist collision attack, two distinguishers of the Zodiac algorithm, covering 8 rounds and 9 rounds, were proposed based on an equivalent structure of it. Firstly, collision attacks were applied to the algorithm from 12 rounds to 16 rounds by adding proper rounds before or after the 9-round distinguishers. The data complexities were 2^15, 2^31.2, 2^31.5, 2^31.7 and 2^63.9, and the time complexities were 2^33.8, 2^49.9, 2^75.1, 2^108 and 2^140.1, respectively. Then the 8-round distinguishers were applied to the full-round algorithm, with data complexity 2^60.6 and time complexity 2^173.9. These results show that neither full-round Zodiac-192 nor full-round Zodiac-256 is immune to collision attack.
    Lattice signature and its application based on small integer solution problem
    CAO Jie YANG Yatao LI Zichen
    2014, 34(1):  78-81.  DOI: 10.11772/j.issn.1001-9081.2014.01.0078
    Abstract | PDF (591KB)
    A lattice signature scheme was proposed, and parameter-choosing rules were illustrated concerning the Small Integer Solution (SIS) problem and the random oracle model on lattices. The lengths of the keys generated under different parameter settings were then compared, and the security and efficiency of the signature scheme were verified. Finally, to achieve fairness and reliability in multipartite authentication, the signature scheme was combined with key distribution and escrow, and a new authentication scheme using the Singular Value Decomposition (SVD) algorithm, based on matrix decomposition theory, was proposed.
    Timelag weakening strategy for enhancing reliability of trust evaluation
    HAN Zhigeng CHEN Geng JIANG Jian WANG Liangmin
    2014, 34(1):  82-85.  DOI: 10.11772/j.issn.1001-9081.2014.01.0082
    Abstract | PDF (608KB)
    To reduce the negative impact of the inherent time lag of trust evaluation on the evaluation result, new time-lag weakening policies for enhancing trust evaluation reliability were proposed by integrating the trust trend of the target entity into the trust evaluation process, with the second derivative as the measurement tool. To check the effectiveness of this strategy, following the idea of reverse engineering, the authors used it to extend the well-known trust evaluation model proposed by Srivatsa. The results show that the trust evaluation results of the extended model are closer to the real behavior of the target entity than those of the original model, and that the extended model is more capable of inhibiting the fluctuating behavior of malicious entities. All of these indicate that the time-lag weakening policies can be used to enhance the reliability of trust evaluation.
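    The abstract names the second derivative as the trend measurement but not the exact update rule, so the sketch below is only one plausible reading: nudge the newest trust value by its discrete first and second differences so the estimate lags the entity's real behaviour less. The function name, the weighting and `alpha` are all assumptions of this example.

```python
def trend_adjusted_trust(series, alpha=0.5):
    """Nudge the newest trust value by its first and second discrete
    differences, reducing the lag behind the entity's behaviour."""
    if len(series) < 3:
        return series[-1]
    d1 = series[-1] - series[-2]                   # first difference (velocity)
    d2 = series[-1] - 2 * series[-2] + series[-3]  # second difference (acceleration)
    return series[-1] + alpha * (d1 + 0.5 * d2)
```

On a steadily rising trust series the adjusted value overshoots the latest sample, anticipating continued good behaviour; on an oscillating series the second difference dampens that extrapolation.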
    Distributed intrusion detection model based on artificial immune
    CHENG Jian ZHANG Mingqing LIU Xiaohu FAN Tao
    2014, 34(1):  86-89.  DOI: 10.11772/j.issn.1001-9081.2014.01.0086
    Abstract | PDF (727KB)
    Concerning the problems of excessive interaction flow, single point of failure and low detection efficiency in existing Distributed Intrusion Detection Systems (DIDS), a new distributed intrusion detection model based on artificial immune theory was proposed. The new model presented a central-detector configuration and usage method, and combined misuse detection with anomaly detection. A simulation model was designed on the OMNeT++ network simulation platform and experiments were run. According to the simulation results, the model overcomes the excessive interaction flow of fully distributed systems, solves the single-point-of-failure problem and improves detection efficiency effectively, which verifies the validity and effectiveness of the improved model.
    Mixed key management scheme based on domain for wireless sensor network
    WANG Binbin ZHANG Yanyan ZHANG Xuelin
    2014, 34(1):  90-94.  DOI: 10.11772/j.issn.1001-9081.2014.01.0090
    Abstract | PDF (768KB)
    Concerning the problems of current key management strategies, namely low connectivity, high storage consumption and high communication cost, this paper proposed a domain-based mixed key management scheme for Wireless Sensor Network (WSN). The scheme divided the deployment area into a number of square areas consisting of member nodes and head nodes. According to their pre-distributed key space information, any pair of nodes in the same area could find a session key, while nodes in different areas could only communicate through head nodes. The eigenvalues and eigenvectors of multiple asymmetric quadratic form polynomials were computed to obtain the orthogonal diagonalization information, by which the head nodes could achieve identification and generate session keys with their neighbor nodes. The performance analysis shows that, compared with existing key management schemes, this scheme achieves full connectivity and considerable improvements in communication overhead, storage consumption and safety.
    Continuous query attack algorithm for location-based services
    YANG Qiong YU Lifeng
    2014, 34(1):  95-98.  DOI: 10.11772/j.issn.1001-9081.2014.01.0095
    Abstract | PDF (711KB)
    In order to mitigate the security risks of continuous query attacks in Location Based Service (LBS), a new algorithm, the Continuous Queries Attacking algorithm based on Cellular Ant (CQACA), was proposed using k-anonymity measurement. First, the objective function of the query recognition rate was defined with entropy and an anonymity measurement, and the computation of the objective function was formulated with a cellular ant method. Finally, a simulation with a moving-object data generator was conducted to study the key factors of CQACA, and its performance was compared with Cloaking. Compared with the actual trajectory, the error of CQACA was 13.27% and the error of Cloaking was 17.35%. The results show that CQACA is more effective.
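    The abstract's objective function combines entropy with an anonymity measurement; as a hedged illustration of the standard ingredients only (not CQACA itself), the sketch below computes the entropy of an attacker's posterior over candidate users and the derived anonymity degree 2**H.

```python
import math

def query_entropy(probs):
    """Shannon entropy of the attacker's posterior over candidate users."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def anonymity_degree(probs):
    """Effective number of indistinguishable users, 2**H; it equals the
    candidate-set size exactly when the posterior is uniform."""
    return 2 ** query_entropy(probs)
```

A uniform posterior over k candidates gives entropy log2(k) and anonymity degree k, which is the ideal k-anonymity case; a posterior concentrated on one user gives entropy 0, meaning the continuous query attack has fully re-identified the target.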
    RFID anti-collision strategy with embedded security mechanism
    LI Jia ZHENG Yiping LIU Chunlong
    2014, 34(1):  99-103.  DOI: 10.11772/j.issn.1001-9081.2014.01.0099
    Abstract | PDF (761KB)
    Current Radio Frequency IDentification (RFID) systems simply integrate the anti-collision algorithm and the security mechanism together. Based on an analysis of the classical adaptive dynamic anti-collision algorithm, an anti-collision strategy with an embedded security mechanism was proposed. It combined the first traversal mechanism with a Boolean mutual authentication protocol to solve the low efficiency and high cost of traditional RFID tag identification, while also providing high security. Compared with the backward binary, dynamic adaptive and binary tree search algorithms, the proposed strategy can greatly reduce the number of system searches and improve tag throughput.
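    The strategy above builds on tree-based tag identification; as a standalone illustration of that family only (not the paper's combined scheme, whose authentication step is omitted), the sketch below runs query-tree singulation: the reader broadcasts a bit prefix, and a collision among matching tags splits the prefix into two longer ones.

```python
def query_tree(tags):
    """Query-tree singulation sketch over fixed-length binary tag IDs.
    Returns the identified tags and the number of reader queries."""
    identified, stack, queries = [], [""], 0
    while stack:
        prefix = stack.pop()
        queries += 1
        matched = [t for t in tags if t.startswith(prefix)]
        if len(matched) == 1:          # exactly one reply: tag identified
            identified.append(matched[0])
        elif len(matched) > 1:         # collision: split the prefix
            stack += [prefix + "0", prefix + "1"]
    return identified, queries
```

The query count is the cost metric that anti-collision variants (backward binary, dynamic adaptive, and the proposed strategy) compete on; fewer queries directly translate into higher tag throughput.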
    Node identity authentication scheme for clustered WSNs based on P-ECC and congruence equation
    ZHOU Zhiping ZHUANG Xuebo
    2014, 34(1):  104-107.  DOI: 10.11772/j.issn.1001-9081.2014.01.0104
    Abstract | PDF (675KB)
    Concerning the problems of large node memory occupation, complex calculation and low information safety in the identity authentication performed when a new node joins a sensor network, a highly safe node authentication mechanism applicable to memory-limited networks was proposed. The mechanism assigned a password to the node itself, and a one-way Hash function was applied to hash the password and IDentity (ID). The password was involved in generating the elliptic curve signature, and an authentication scheme based on a congruence equation was adopted between trusted nodes. Each certification stage used a mutual authentication mode. The proposed algorithm not only prevents eavesdropping, replay and injection, but also resists guessing, mediation, anonymity and denial-of-service attacks. The comparison with existing algorithms shows that the proposed scheme can reduce the node's original memory occupation by about three units and reduce the key detection rate.
    Hybrid model of alert correlation based on attack graph and alert similarity
    ZHU Menging XU Lei
    2014, 34(1):  108-112.  DOI: 10.11772/j.issn.1001-9081.2014.01.0108
    Abstract | PDF (765KB)
    In order to reveal the logical attack strategy behind alarms generated by intrusion detection systems and to reconstruct the attack scenario, a hybrid alarm correlation model based on attack graphs and alert similarity analysis was proposed. The model combined the advantages of attack graphs and alert data analysis. First of all, it described the causal relationships between alarms according to an initial attack graph defined from prior knowledge of intrusion attacks. Afterwards, it used similarity analysis of the alert data to repair the defects of the initial attack graph, and then implemented alert correlation. The experimental results show that the model can not only recover the attack scenario but also fully repair the attack graph even when a single attack step is missing.
    Universal blind detection based on multiple classifiers fusion and image complexity
    WAN Baoji ZHANG Tao
    2014, 34(1):  113-118.  DOI: 10.11772/j.issn.1001-9081.2014.01.0113
    Abstract | PDF (888KB)
    Current blind detection techniques do not consider how the content of different images influences steganalysis performance. In this paper, a new approach based on image content and classifier fusion was proposed. In the training phase, the input images were first divided into several classes according to image complexity, the training was specialized per class, and the fuzzy measure was calculated for each class. In the testing phase, the class of the test image was first determined, the classified results were acquired by the classifiers, and a fuzzy integral was then used to fuse the different classes in the decision-making process. The experimental results on several image sets demonstrate that the proposed steganalyzer significantly enhances detection accuracy compared with prior art.
    Medical image privacy protection scheme based on reversible visible watermarking
    GAO Haibo DENG Xiaohong CHEN Zhigang
    2014, 34(1):  119-123.  DOI: 10.11772/j.issn.1001-9081.2014.01.0119
    Abstract | PDF (959KB)
    In order to solve the problem of privacy disclosure in the region of interest of medical images, a new privacy protection algorithm based on reversible visible watermarking was proposed. The method embedded a binary watermark image into the region of interest of the original medical image to protect privacy, and used the visual masking of the Human Visual System (HVS) and a pixel mapping mechanism to dynamically adjust the visibility and transparency of the watermark. In addition, a shrinking projection technique was utilized to solve the potential overflow and underflow during embedding. Finally, a random key was introduced to enhance the robustness of the embedded watermark. The experimental results show that the proposed method achieves good watermark visibility and transparency, and the additional information produced by embedding is only 65608 bits. Moreover, deleting the watermark is difficult without the correct key, and the quality difference between the watermarked image and the recovered image is less than 1 dB.
    Real-time simulation for 3D-dressing of random clothes and human body
    CHEN Yan XUE Yuan YANG Ruoyu
    2014, 34(1):  124-128.  DOI: 10.11772/j.issn.1001-9081.2014.01.0124
    Abstract | PDF (768KB)
    Recently, research on clothing simulation has attracted increasing attention, but flexibility, realism, real-time performance and integrity are difficult to unify. Therefore, a new dressing simulation system was designed for the automatic fitting of arbitrary human bodies and clothes. First, Non-Uniform Rational B-Spline (NURBS) surfaces were used to build a deformable body model. Then, particles were reconstructed from the 3DMAX model and multi-type springs were created to model arbitrary cloth. Finally, a Verlet integrator was adopted to complete the dressing simulation, while a new simplification algorithm for cloth models and a new method for judging whether a point lies inside a triangle were implemented. The results show that the proposed modeling approach for bodies and clothes guarantees the diversity of dressing effects, and the model simplification and interior-point judgment increase the simulation performance by about 30%, which ensures real-time quality.
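    Verlet integration of a mass-spring cloth is standard enough to sketch; the hedged 1-D toy below shows the two ingredients the abstract names: the position-Verlet update and a spring length constraint. Function names, the 1-D simplification and the symmetric constraint projection are assumptions of this example, not the paper's implementation.

```python
def verlet_step(pos, prev, acc, dt):
    """One position-Verlet update per particle: x' = 2x - x_prev + a*dt^2."""
    new = [2 * x - xp + a * dt * dt for x, xp, a in zip(pos, prev, acc)]
    return new, pos  # current positions become the previous ones

def satisfy_spring(p, q, rest):
    """Move two 1-D particles symmetrically back to the spring rest length."""
    d = q - p
    corr = 0.5 * (abs(d) - rest) * (1.0 if d > 0 else -1.0)
    return p + corr, q - corr
```

In a cloth solver these two steps alternate each frame: integrate all particles, then sweep the structural, shear and bend springs a few times to pull the mesh back toward its rest shape.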
    Solution method for inverse kinematics of virtual human's upper limb kinematic chain based on improved genetic algorithm
    DENG Gangfeng HUANG Xianxiang GAO Qinhe ZHANG Zhili LI Min
    2014, 34(1):  129-134.  DOI: 10.11772/j.issn.1001-9081.2014.01.0129
    Abstract | PDF (1016KB)
    An Improved Genetic Algorithm (IGA) was proposed for solving the inverse kinematics of the upper limb kinematic chain, which has many degrees of freedom and is too complex to be solved by geometric, algebraic or iterative methods. First, the joint units of the upper limb kinematic chain and its mathematical model were constructed using the Denavit-Hartenberg (D-H) method; then population diversity and initialization were handled by simulating a human population, and adaptive crossover and mutation operators were designed. The simulation results show that the IGA finds high-precision solutions with larger probability than the standard genetic algorithm, while avoiding premature convergence and inefficient searching in the later stage.
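    The paper's IGA targets a full D-H-modelled upper limb with adaptive operators; the hedged toy below shows only the underlying idea on a planar 2-joint chain: evolve joint angles whose forward kinematics lands near a target point. All parameters, the averaging crossover and the elitist selection are invented for this sketch.

```python
import math
import random

def fk(angles, lengths=(1.0, 1.0)):
    """Forward kinematics of a toy planar 2-joint chain."""
    x = y = a = 0.0
    for t, l in zip(angles, lengths):
        a += t
        x += l * math.cos(a)
        y += l * math.sin(a)
    return x, y

def ga_ik(target, pop=40, gens=150, seed=1):
    """Plain GA over joint angles minimizing end-effector distance."""
    rng = random.Random(seed)

    def err(g):
        x, y = fk(g)
        return math.hypot(x - target[0], y - target[1])

    P = [[rng.uniform(-math.pi, math.pi) for _ in range(2)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=err)
        kids = [list(g) for g in P[:pop // 2]]        # elitist survivors
        while len(kids) < pop:
            a, b = rng.sample(P[:10], 2)              # parents from the elite
            c = [(x + y) / 2 for x, y in zip(a, b)]   # averaging crossover
            c[rng.randrange(2)] += rng.gauss(0, 0.1)  # mutate one gene
            kids.append(c)
        P = kids
    return min(P, key=err)
```

Because the fitness is just the Cartesian error of the end effector, the same loop scales to longer chains by lengthening the genome, which is where adaptive operators like the paper's start to matter.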
    Multi-function rendering technology based on graphics processing unit accelerated ray casting algorithm
    LV Xiaoqi ZHANG Chuanting HOU He ZHANG Baohua
    2014, 34(1):  135-138.  DOI: 10.11772/j.issn.1001-9081.2014.01.0135
    Abstract | PDF (733KB)
    In order to overcome the drawbacks of traditional rendering algorithms, which cannot interact fluently with the user, consume much time and produce only a single rendering result, a ray casting algorithm based on the Graphics Processing Unit (GPU) was proposed for the real-time volume rendering of medical tomographic images; different rendering effects can be switched quickly by the proposed algorithm. Firstly, medical tomographic images were read into computer memory to construct voxels. Afterwards, the properties (interpolation, shading and light) of the corresponding voxels were set, and transfer functions of color and opacity were designed to display different organs and tissues. Finally, the volume data were loaded and the ray casting algorithm was executed on the GPU. The experiments show that the rendering speed of the proposed algorithm reaches 40 frames per second, which satisfies clinical application. In terms of rendering quality, the jags produced during interaction because of resampling are apparently fewer on the GPU than with the CPU-based ray casting algorithm, whose time consumption is about 9 times that of the proposed algorithm.
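    At the core of any ray caster, GPU or CPU, is per-ray compositing of the samples the transfer functions produce. The sketch below shows the standard front-to-back alpha compositing rule with early ray termination, on one ray's samples; it is a generic illustration, not the paper's GPU kernel.

```python
def composite_ray(samples):
    """Front-to-back alpha compositing along one ray.

    samples: (color, opacity) pairs ordered nearest-to-farthest."""
    C = A = 0.0
    for c, a in samples:
        C += (1.0 - A) * a * c   # remaining transparency scales the contribution
        A += (1.0 - A) * a
        if A >= 0.99:            # early ray termination once nearly opaque
            break
    return C, A
```

Early termination is one reason GPU ray casting reaches interactive frame rates: rays entering opaque tissue stop sampling almost immediately.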
    Implementation of calibration for machine vision electronic whiteboard
    XU Xiao WANG Run PENG Guojie YANG Qi WANG Yiwen LI Hui
    2014, 34(1):  139-141.  DOI: 10.11772/j.issn.1001-9081.2014.01.0139
    Asbtract ( )   PDF (564KB) ( )  
    Related Articles | Metrics
    A partitioned calibration approach was applied to electronic whiteboard based on machine vision, since its location error distribution on large screens was non-homogeneous. Based on Human Interface Device (HID)'s implementation, the specific computer software was developed and the communication between the computer and electronic whiteboard was established. Configuration of calibration points on the whiteboard, receiving coordinates of these points, and calculation of calibration coefficients were completed. Thus the whole system calibration was implemented. The experimental results indicate that after calibration, the location accuracy is about 1.2mm on average on electronic whiteboard with the size of 140cm×105cm. And basic touch operations are accurately performed on the electronic whiteboard prototype after calibration.
    Multi-frame image super-resolution reconstruction algorithm with radial basis function neural network
    YANG Xuefeng WANG Gao CHENG Yaoyu
    2014, 34(1):  142-144.  DOI: 10.11772/j.issn.1001-9081.2014.01.0142
    Asbtract ( )   PDF (652KB) ( )  
    Related Articles | Metrics
    Neural networks have strong nonlinear learning ability, so super-resolution algorithms based on neural networks have been preliminarily studied. However, these algorithms can only be used in controlled microscanning, where the displacement between frames is uniform, and they are difficult to apply to uncontrolled microscanning. In order to overcome this limitation and obtain better super-resolution performance, a deblurring algorithm using a Radial Basis Function (RBF) neural network was first proposed, and then combined with a non-uniform interpolation step to form a new two-step super-resolution algorithm. The simulation results show that the Structural SIMilarity (SSIM) index of the proposed algorithm is 0.55-0.7. The proposed two-step super-resolution algorithm not only extends the application scope of the RBF neural network but also achieves good super-resolution performance.
    Filtering method for medical images based on median filtering and anisotropic diffusion
    FU Lijuan YAO Yu FU Zhongliang
    2014, 34(1):  145-148.  DOI: 10.11772/j.issn.1001-9081.2014.01.0145
    Asbtract ( )   PDF (698KB) ( )  
    Related Articles | Metrics
    Medical image filtering should retain the edge details of diagnostic significance. Since the Perona-Malik (PM) anisotropic diffusion model fails when dealing with strong noise and the choice of its diffusion threshold K relies on experience, an improved anisotropic diffusion algorithm was proposed. First, PM was combined with the median filter: the gradient magnitude of the original image was replaced with that of the median-filtered image to control the diffusion process. By further applying an adaptive diffusion threshold (the Median Absolute Deviation (MAD) of the gradient in the current neighborhood) and an iteration termination criterion, the algorithm improved its robustness and efficiency. Denoising experiments were conducted on echocardiography, CT images and the Lena image, with Peak Signal-to-Noise Ratio (PSNR) and Edge Preservation Index (EPI) as evaluation criteria. The experimental results show that the improved algorithm outperforms the PM and Catte-PM methods in improving PSNR while preserving image detail, and better meets the requirements of medical image applications.
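    The combination described — the gradient of the median-filtered image driving the diffusion coefficients, with a MAD-based adaptive threshold — can be sketched as one diffusion step in NumPy. This is an illustrative reconstruction, not the authors' implementation; it uses 4-neighbor differences and periodic boundaries for brevity:

```python
import numpy as np

def median3(img):
    """3x3 median filter (periodic boundary, for brevity)."""
    shifts = [np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(shifts), axis=0)

def pm_median_step(img, lam=0.2):
    """One diffusion step: gradients of the median-filtered image drive
    the diffusion coefficients; K is the MAD of those gradients."""
    guide = median3(img)
    out = img.copy()
    diffs_guide, diffs_img = [], []
    for axis in (0, 1):
        for shift in (1, -1):
            diffs_guide.append(np.roll(guide, shift, axis) - guide)
            diffs_img.append(np.roll(img, shift, axis) - img)
    gmag = np.abs(np.stack(diffs_guide))
    k = np.median(np.abs(gmag - np.median(gmag))) + 1e-8  # MAD threshold
    for dg, di in zip(diffs_guide, diffs_img):
        c = np.exp(-(dg / k) ** 2)  # PM diffusion coefficient, in (0, 1]
        out += lam * c * di
    return out

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (32, 32))
denoised = pm_median_step(noisy)
```

Because the coefficients come from the pre-smoothed guide image, an isolated strong-noise pixel no longer produces a large gradient that blocks diffusion — the failure mode of plain PM noted above.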
    Fast image registration algorithm based on locally significant edge feature
    YANG Jian LI Ruonan HUANG Chenyang WANG Gang DING Chuang
    2014, 34(1):  149-153.  DOI: 10.11772/j.issn.1001-9081.2014.01.0149
    Asbtract ( )   PDF (889KB) ( )  
    Related Articles | Metrics
    Considering that the Scale Invariant Feature Transform (SIFT) algorithm extracts a great number of feature points and consumes much matching time while achieving low matching accuracy, a fast image registration algorithm based on locally significant edge features was proposed. The SIFT algorithm was used to extract feature points, while wavelet edge detection extracted image edges to establish the neighborhood characteristics of the feature points around the edges, filtering out the points with significant edge characteristics as significant feature points. A feature vector was formed by the shape-context operator and edge features. Euclidean distance was used as the matching metric to preliminarily match the feature points extracted from different images. Afterwards, the RANdom SAmple Consensus (RANSAC) algorithm was applied to eliminate mismatching points. The experimental results show that the algorithm effectively controls the number of feature points, improves the quality of the feature points, reduces the feature search space and enhances the efficiency of feature matching.
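    The final RANSAC stage can be illustrated with a deliberately simplified pure-translation motion model: hypothesize a shift from one random correspondence, count inliers, keep the best hypothesis, and refit on its inliers. The data and the translation-only model are illustrative assumptions; the paper's registration uses richer features and transforms:

```python
import numpy as np

def ransac_translation(src, dst, iters=100, tol=1.0, seed=0):
    """RANSAC with a pure-translation model between matched point sets."""
    rng = np.random.default_rng(seed)
    best_t, best_count = np.zeros(2), -1
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]
        count = int(np.sum(np.linalg.norm(src + t - dst, axis=1) < tol))
        if count > best_count:
            best_t, best_count = t, count
    # Refit on the inliers of the best hypothesis.
    inliers = np.linalg.norm(src + best_t - dst, axis=1) < tol
    return dst[inliers].mean(axis=0) - src[inliers].mean(axis=0)

# Synthetic matches: a true shift of (5, -3) plus two gross mismatches.
src = np.array([[0., 0.], [1., 2.], [4., 1.], [2., 5.], [7., 7.], [3., 3.]])
dst = src + np.array([5., -3.])
dst[4] = [0., 0.]   # mismatched points that RANSAC should reject
dst[5] = [9., 9.]
t_hat = ransac_translation(src, dst)
```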
    Ensemble registration of medical images with Gaussian mixture model and color component regularization
    WANG Yuwen HU Shunbo
    2014, 34(1):  154-157.  DOI: 10.11772/j.issn.1001-9081.2014.01.0154
    Asbtract ( )   PDF (542KB) ( )  
    Related Articles | Metrics
    In order to use the abundant information among several color images, and enhance the registration accuracy, the ensemble registration based on Gaussian Mixture Model (GMM) was extended from gray images to color images in this paper. To decrease the difference among color component deformations of the same image, the color component regularization term was incorporated into ensemble registration and a new total cost function was formulated. Color ensemble registration was applied to gastroscope images and tissue section images of color visible human. The test results show that the proposed color ensemble registration method can successfully align color images.
    Face recognition based on histograms of nonsubsampled contourlet oriented gradient
    FENG Junpeng YANG Huixian CAI Yongyong ZHAI Yunlong LI Qiuqiu
    2014, 34(1):  158-161.  DOI: 10.11772/j.issn.1001-9081.2014.01.0158
    Asbtract ( )   PDF (748KB) ( )  
    Related Articles | Metrics
    Concerning the low accuracy of face recognition systems, a face recognition algorithm based on Histograms of Nonsubsampled contourlet Oriented Gradient (HNOG) was proposed. Firstly, a face image was decomposed with the Non-Subsampled Contourlet Transform (NSCT) and the coefficients were divided into several blocks. Then histograms of oriented gradient were calculated over all the blocks and used as face features. Finally, a multi-channel nearest neighbor classifier was used to classify the faces. The experimental results on the YALE, ORL and CAS-PEAL-R1 face databases show that the HNOG descriptor is discriminative, its feature dimension is small, and the features are robust to variations of illumination, facial expression and position.
    Unequal error protection with adaptive genetic algorithm for scalable video coding
    TIAN Bo YANG Yimin CAI Shuting
    2014, 34(1):  162-166.  DOI: 10.11772/j.issn.1001-9081.2014.01.0162
    Asbtract ( )   PDF (697KB) ( )  
    Related Articles | Metrics
    In order to improve the packet loss resilience of Scalable Video Coding (SVC) over communication networks, an efficient Unequal Error Protection (UEP) algorithm for SVC using an adaptive genetic algorithm was proposed. A method to encapsulate network abstraction layer units according to the header information of a packet was introduced. Then the problems of pair codes assignment were transformed into multi-constraint optimization problems, which could be converted into an unconstrained objective by exploiting a penalty function; the adaptive genetic algorithm was then employed to obtain the globally optimal solution. The simulation results reveal that, compared with typical unequal error protection algorithms, the Peak Signal-to-Noise Ratio (PSNR) is improved by 0.8dB-1.95dB, and the proposed algorithm substantially improves the decoding speed and received video quality over best-effort packet networks.
    Fast inter-mode decision algorithm for multi-view video coding
    WANG Fengsui SHEN Qinghong DU Sidan
    2014, 34(1):  167-170.  DOI: 10.11772/j.issn.1001-9081.2014.01.0167
    Asbtract ( )   PDF (585KB) ( )  
    Related Articles | Metrics
    In order to reduce the high computational complexity of variable block-size mode decision in Multi-view Video Coding (MVC), a fast inter-mode decision algorithm based on mode complexity was proposed. First, the characteristics of each variable block size in Joint MVC (JMVC) were analyzed. Then, a mode complexity measure was presented to determine the mode characteristics of the current macroblock. Finally, macroblocks were divided into three mode classes: for macroblocks with simple mode, only the 16×16 mode size was checked and all other mode sizes were skipped; for macroblocks with medium mode, the 8×8 mode size was skipped; for macroblocks with complex mode, all mode sizes were tested. As a result, unnecessary mode decision processes could be terminated early and the computational load greatly reduced. The experimental results demonstrate that, compared with the full mode decision in the MVC reference software, the proposed method significantly reduces the computational load, by 62.75%, while keeping almost the same coding efficiency.
    H.264 authentication playing method based on video steganography
    CAI Yangyan ZHANG Yu
    2014, 34(1):  171-174.  DOI: 10.11772/j.issn.1001-9081.2014.01.0171
    Asbtract ( )   PDF (646KB) ( )  
    Related Articles | Metrics
    In a multimedia content distribution and playing system, the files allowed to be played should be restricted without compromising users' experience. Firstly, binary images were selected adaptively and embedded during intra prediction by modifying the AC coefficients of specific locations. Then, the extracted watermark was matched against the adaptively selected binary image; if they did not match, the video could not continue to be decoded and played. The experimental results demonstrate that the watermark algorithm has high robustness: after watermarks are embedded, the video's Peak Signal-to-Noise Ratio (PSNR) and bit rate remain almost unchanged. The proposed algorithm also has low complexity and strong practicability, and illegal videos can be filtered effectively.
    Preliminary application of combination resultant theory
    YUAN Xun
    2014, 34(1):  175-178.  DOI: 10.11772/j.issn.1001-9081.2014.01.0175
    Asbtract ( )   PDF (469KB) ( )  
    Related Articles | Metrics
    Making use of the flexibility of the combination resultant method, its rapid elimination, and the diversity of the polynomials derived from combination resultants, the author proposed an improved algorithm for constructing the Bezout matrix, and the combination resultant method was applied to solving nonlinear equations, inferring unknown relationships, implicitization of parametric curves and surfaces, constructing triangular columns, etc. Comparison with the original method on worked examples shows that the combination resultant method is simpler and more feasible.
    Differential evolution algorithm for high dimensional optimization problem
    WANG Xu ZHAO Shuguang
    2014, 34(1):  179-181.  DOI: 10.11772/j.issn.1001-9081.2014.01.0179
    Asbtract ( )   PDF (467KB) ( )  
    Related Articles | Metrics
    In order to solve the problem that high dimensional optimization problem is hard to optimize and time-consuming, a Differential Evolution for High Dimensional optimization problem (DEHD) was proposed. By introducing coevolutionary to differential evolution, a new coevolution scheme was adopted, which consisted of state observer and random grouping strategy. Specifically, state observer activated random grouping strategy according to the feedback of search status while random grouping strategy decomposed high dimensional problem into several smaller ones and then evolved them separately. The scheme enhanced the algorithm's search speed and effectiveness. The experimental results show that the proposed algorithm is effective and efficient while solving various high dimensional optimization problems. In particular, its search speed improves significantly. Therefore, the proposed algorithm is competitive on separable high dimensional problems.
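    The random grouping strategy at the heart of such cooperative coevolution — decomposing a high-dimensional problem into small groups of dimensions, reshuffled each cycle, and evolving each group while the others stay fixed — can be sketched as follows. This toy version uses a greedy random search per group in place of full differential evolution, and the sphere function as a stand-in objective:

```python
import random

def sphere(x):
    """Separable benchmark objective: sum of squares."""
    return sum(v * v for v in x)

def optimize_group(x, dims, f, step=0.5, trials=20, rng=None):
    """Greedy random search restricted to one group of dimensions."""
    rng = rng or random
    best = list(x)
    for _ in range(trials):
        cand = list(best)
        for d in dims:
            cand[d] += rng.uniform(-step, step)
        if f(cand) < f(best):    # accept only improvements
            best = cand
    return best

def random_grouping_search(f, dim=20, group_size=5, cycles=30, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    for _ in range(cycles):
        order = list(range(dim))
        rng.shuffle(order)                     # new random grouping each cycle
        for g in range(0, dim, group_size):
            x = optimize_group(x, order[g:g + group_size], f, rng=rng)
    return x

# Objective value of the same seeded starting point, for comparison.
x0_val = sphere([random.Random(1).uniform(-5, 5) for _ in range(20)])
```

Reshuffling the grouping each cycle is what lets interacting dimensions eventually land in the same subproblem, which is why the scheme also works on non-separable problems.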
    New algorithm for semidefinite programming
    YU Dongmei GAO Leifu
    2014, 34(1):  182-184.  DOI: 10.11772/j.issn.1001-9081.2014.01.0182
    Asbtract ( )   PDF (404KB) ( )  
    Related Articles | Metrics
    In order to improve the operational efficiency of SemiDefinite Programming (SDP), a new nonmonotonic trust region algorithm was proposed. The SDP problem and its duality problem were transformed into unconstrained optimization problem and the trust region subproblem was constructed, the trust region radius correction condition was modified. When the initial search point was near the canyon, the global optimal solution still could be found. The experimental results show that the number of iterations of the algorithm is less than the classical interior point algorithm for small and medium scale semidefinite programming problems, and the proposed algorithm works faster.
    Genetic algorithm for solving linear bilevel programming with interval coefficients
    FAN Yangyang LI Hecheng
    2014, 34(1):  185-188.  DOI: 10.11772/j.issn.1001-9081.2014.01.0185
    Asbtract ( )   PDF (539KB) ( )  
    Related Articles | Metrics
    For a kind of linear bi-level programming problems with interval coefficients in the upper level objective, a Genetic Algorithm (GA) was proposed by using a double fitness function evaluation technique, which was characterized by simultaneously obtaining the best optimal solution as well as the worst one in one run of the genetic algorithm. Firstly, individuals were encoded by using the vertices of the constraint region, and a double fitness was constructed by the upper and lower bounds of the upper level objective coefficients. Secondly, fitness functions were used to sort all individuals in populations. According to the order, the feasibility of individual was checked one by one until a feasible individual was found. Finally, the feasible individual was updated in executing algorithm. The simulation results on four computational examples show that the proposed algorithm is feasible and efficient.
    Cuckoo search algorithm for multi-resource leveling optimization
    SONG Yujian YE Chunming HUANG Zuoxing
    2014, 34(1):  189-193.  DOI: 10.11772/j.issn.1001-9081.2014.01.0189
    Asbtract ( )   PDF (819KB) ( )  
    Related Articles | Metrics
    An improved multi-objective Cuckoo Search Algorithm (CSA) was proposed to overcome the basic multi-objective CSA's defects of low convergence speed in the later period and low solution quality when solving the multi-resource leveling problem. Firstly, a non-uniform mutation operator was embedded in the basic multi-objective cuckoo search to strike a balance between exploration and exploitation. Secondly, a differential evolution operator was employed to boost cooperation and information exchange among the groups and enhance the convergence quality. The simulation test illustrates that the improved multi-objective CSA outperforms the basic multi-objective CSA and the Vector Evaluated Particle Swarm Optimization Based on Pareto (VEPSO-BP) algorithm in terms of global convergence.
    Continuous function optimization based on improved harmony search algorithm
    LU Jing GU Junhua
    2014, 34(1):  194-198.  DOI: 10.11772/j.issn.1001-9081.2014.01.0194
    Asbtract ( )   PDF (698KB) ( )  
    Related Articles | Metrics
    Concerning the difficulties in solving the continuous functions of general Harmony Search (HS) algorithm, an improved HS algorithm was proposed. With analogies to the concept of the simulated annealing algorithm, the way of updating parameter was redesigned. And it limited the number of identical harmonies stored in the harmony memory to increase the diversity of solutions. Simulation results of the proposed algorithm were compared with other HS approaches. The computational results reveal that the proposed algorithm is more effective in enhancing the solution quality and convergence speed than other HS approaches.
    Optimal iterative max-min ant system for solving quadratic assignment problem
    MOU Lianming DAI Xili LI Kun HE Lingrui
    2014, 34(1):  199-203.  DOI: 10.11772/j.issn.1001-9081.2014.01.0199
    Asbtract ( )   PDF (729KB) ( )  
    Related Articles | Metrics
    In order to improve the quality of solutions to the Quadratic Assignment Problem (QAP), an effective Max-Min Ant System (MMAS) was designed. Firstly, using the optimal iteration idea, a location and its corresponding task were selected randomly from the current optimal tour as the initial value of the next iteration, so as to enhance the effectiveness of each search in MMAS. Secondly, to make each search step more purposeful, the increment of the objective function after adding a new task was used as the heuristic factor to effectively guide the state transition. Then, the pheromone was updated using a multi-elitist strategy to increase the diversity of solutions, and an effective double-mutation technique was designed to improve solution quality and accelerate convergence. Finally, experiments were conducted on a large number of data sets from QAPLIB. The experimental results show that the proposed algorithm is significantly better than the other algorithms in accuracy and stability for solving the quadratic assignment problem.
    Improved artificial bee colony clustering algorithm based on K-means
    CAO Yongchun CAI Zhenqi SHAO Yabin
    2014, 34(1):  204-207.  DOI: 10.11772/j.issn.1001-9081.2014.01.0204
    Asbtract ( )   PDF (814KB) ( )  
    Related Articles | Metrics
    Since the K-means clustering method is sensitive to initial cluster centers and easily trapped in local optima, an Artificial Bee Colony (ABC) clustering algorithm based on K-means was proposed. The algorithm integrated an improved ABC algorithm with the K-means iteration, which reduced the dependence on initial cluster centers and the probability of being trapped in local optima, thus improving the stability of the algorithm. An initialization strategy based on opposition-based learning improved the diversity of the initial population; a nonlinear selection strategy overcame premature convergence and improved search efficiency; and dynamically adjusting the neighborhood search range accelerated convergence and enhanced the capability of local optimization. The experimental results show that the clustering efficiency, performance and stability are significantly improved.
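    The opposition-based learning initialization mentioned above has a compact standard form: generate a random population, mirror it through the search bounds (x' = low + high - x), and keep the fitter half of the union. A hedged sketch (an illustrative reconstruction, not the paper's code):

```python
import numpy as np

def opposition_init(pop_size, dim, low, high, fitness, seed=0):
    """Opposition-based initialization: keep the fitter half of a random
    population and its opposite population."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(low, high, (pop_size, dim))
    opposite = low + high - pop              # mirror through the bounds
    union = np.vstack([pop, opposite])
    scores = np.apply_along_axis(fitness, 1, union)
    return union[np.argsort(scores)[:pop_size]]  # fittest pop_size rows

# For clustering, an individual would encode candidate cluster centers;
# a sum-of-squares objective stands in for clustering error here.
pop = opposition_init(10, 3, -5.0, 5.0, lambda x: float(np.sum(x ** 2)))
```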
    Improved single pattern matching algorithm based on Sunday algorithm
    ZHU Yongqiang QIN Zhiguang JIANG Xue
    2014, 34(1):  208-212.  DOI: 10.11772/j.issn.1001-9081.2014.01.0208
    Asbtract ( )   PDF (696KB) ( )  
    Related Articles | Metrics
    When the Sunday algorithm is applied to the Chinese version of Unicode, problems arise: using Chinese characters directly to build the bad-character jump table expands the table's space, while splitting Chinese characters into two bytes reduces the space consumption at the cost of matching speed. Concerning the degraded time performance of the Sunday algorithm in the character-splitting environment of the Chinese version of Unicode, and exploiting the internal structure of Chinese code units in Unicode, the improved algorithm optimized the auxiliary jump table and the matching rules of the original Sunday algorithm in this environment. Consequently, the proposed algorithm not only solves the space expansion problem, but also improves the time performance of the Sunday algorithm in this environment. Finally, the improved time and space performance is verified via simulation.
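    For reference, the original Sunday algorithm that the paper improves upon can be written compactly. Its defining feature is that the jump distance is decided by the character immediately *after* the current window:

```python
def sunday_search(text, pattern):
    """Return the index of the first occurrence of pattern, or -1.

    The shift table maps each pattern character to how far the window
    may jump, based on that character's rightmost position in pattern.
    """
    n, m = len(text), len(pattern)
    if m == 0:
        return 0
    shift = {c: m - i for i, c in enumerate(pattern)}
    i = 0
    while i + m <= n:
        if text[i:i + m] == pattern:
            return i
        # Character immediately past the window decides the jump;
        # a character absent from the pattern allows the maximal jump.
        nxt = text[i + m] if i + m < n else None
        i += shift.get(nxt, m + 1)
    return -1

print(sunday_search("substring searching algorithm", "search"))  # -> 10
```

The space problem discussed above comes from this very `shift` table: indexed by full Chinese characters it becomes huge, while indexed by single bytes it forces slower byte-wise matching.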
    Trustworthy Web service recommendation based on collaborative filtering
    ZHANG Xuan LIU Cong WANG Lixia ZHAO Qian YANG Shuai
    2014, 34(1):  213-217.  DOI: 10.11772/j.issn.1001-9081.2014.01.0213
    Asbtract ( )   PDF (792KB) ( )  
    Related Articles | Metrics
    In order to recommend trustworthy Web services, the differences between Web service recommendation and electronic commerce recommendation were analyzed, and a trustworthy Web service recommendation approach based on the collaborative filtering algorithm was proposed. At first, the non-functional requirements of trustworthy software were evaluated, and according to the evaluation results similar users were filtered for the first time. Then, using the rating information and basic information, the similar users were filtered for the second time; after these two filtering procedures, the final recommendation users were determined. When using users' rating information to calculate the similarity between users, the similarity of different services to the users was taken into consideration; when using users' basic information to calculate the similarity, the Euclidean distance formula was introduced because of the nonlinear characteristics of the users. The problems of dishonest users and an insufficient number of users were also considered in the approach. Finally, the experimental results show that the recommendation approach for trustworthy Web services is effective.
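    The two similarity computations described — one over service ratings, one over basic user information via Euclidean distance — can be sketched as follows. The user data, the cosine form of the rating similarity, and the 1/(1+d) distance-to-similarity mapping are illustrative assumptions, not the paper's exact formulas:

```python
import math

def rating_similarity(a, b):
    """Cosine similarity over the services both users have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (math.sqrt(sum(a[i] ** 2 for i in common))
           * math.sqrt(sum(b[i] ** 2 for i in common)))
    return num / den

def profile_similarity(a, b):
    """Similarity from Euclidean distance over basic-information vectors."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

# Ratings keyed by (hypothetical) service id; profiles e.g. (age, region).
u1 = {"s1": 5, "s2": 3, "s3": 4}
u2 = {"s1": 5, "s2": 3, "s4": 1}
print(rating_similarity(u1, u2))             # -> 1.0 (identical on shared services)
print(profile_similarity([25, 1], [25, 1]))  # -> 1.0
```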
    Hybrid recommendation model for personalized trend prediction of fused recommendation potential
    CHEN Hongtao XIAO Ruliang NI Youcong DU Xin GONG Ping CAI Sheng-zhen
    2014, 34(1):  218-221.  DOI: 10.11772/j.issn.1001-9081.2014.01.0218
    Asbtract ( )   PDF (641KB) ( )  
    Related Articles | Metrics
    In recommendation systems, it is difficult to predict users' behavior on items and give accurate recommendations. In order to improve the accuracy of recommendation systems, the notion of recommendation potential was introduced and a novel personalized hybrid recommendation model fusing recommendation potential was proposed. Firstly, the trend momentum was calculated according to the visits to items over recent short and long time windows; then, the current recommendation potential was calculated from the trend momentum; finally, the hybrid recommendation model was obtained by fusing the recommendation potential with a personalized recommendation model. The experimental results show that personalized trend prediction fused with recommendation potential can improve the accuracy of the recommendation system to a large extent.
    Property of trust-based recommender systems
    LONG Yu TONG Xiangrong
    2014, 34(1):  222-226.  DOI: 10.11772/j.issn.1001-9081.2014.01.0222
    Asbtract ( )   PDF (852KB) ( )  
    Related Articles | Metrics
    Data sparseness is inherent in traditional collaborative filtering, and trust-based recommender systems can effectively deal with sparse data without losing accuracy. It is appropriate to use different methods for different users to give more personalized recommendations. The vertex characteristics in microcosmic stratums were studied, and a formal definition of interest was proposed; it was used to demonstrate the impact of the local structure around the recommended user on the results of recommender systems. In the end, several results were given to illustrate how the effects of recommender systems differ across users of different types.
    Fatigue behavior detection by mining keyboard and mouse events
    WANG Tianben WANG Haipeng ZHOU Xingshe NI Hongbo LIN Qiang
    2014, 34(1):  227-231.  DOI: 10.11772/j.issn.1001-9081.2014.01.0227
    Asbtract ( )   PDF (747KB) ( )  
    Related Articles | Metrics
    Long-term continuous use of computers brings negative effects on users' health. In order to detect users' fatigue level in a non-invasive manner, an approach that measures the fatigue level of hand muscles based on keyboard and mouse events was proposed. The proposed method integrated keying action matching, data noise filtering, and feature vector extraction and classification to collect and analyze the delay characteristics of both keying and hitting actions, upon which the detection of hand-muscle fatigue level could be based. With the detected fatigue level, friends belonging to the same virtual community on current social networks could be alerted in real-time and persuaded to use computers in a health-conscious way. Notably, an obvious negative correlation was found between keying (hitting) delay and the fatigue level of hand muscles. The experimental validation conducted on two weeks of data collected from 15 participants shows that the proposed method is effective in detecting users' fatigue level and distributing fatigue-related health information on a social network platform.
    Trajectory tracking control of manipulator based on FSMC
    CAI Zhuang ZHANG Guoliang TIAN Qi
    2014, 34(1):  232-235.  DOI: 10.11772/j.issn.1001-9081.2014.01.0232
    Asbtract ( )   PDF (539KB) ( )  
    Related Articles | Metrics
    A control law based on a Function Sliding Mode Controller (FSMC) was proposed for the trajectory tracking control of manipulators. The uncertainties of the system were derived from the dynamic model and the sliding mode function, and an RBF neural network was used to approximate them. Because of the approximation error of the neural network, especially in the initial phase, the function sliding mode controller and a robust compensator were designed to compensate the neural network's error. The function sliding mode controller overcomes the chattering problem of common Sliding Mode Control (SMC) and improves the tracking ability of the system. The global stability of the closed-loop system was proved based on Lyapunov theory, and the effectiveness of the proposed control approach was demonstrated by simulation results.
    Parameter estimation methods for pseudo-linear regressive systems based on auxiliary model and data filtering
    DING Sheng
    2014, 34(1):  236-238.  DOI: 10.11772/j.issn.1001-9081.2014.01.0236
    Asbtract ( )   PDF (514KB) ( )  
    Related Articles | Metrics
    For pseudo-linear output error regressive systems whose identification model contains unknown variables in the information vector, an auxiliary model based recursive least squares parameter estimation algorithm was presented, derived by constructing an auxiliary model and replacing the unknown inner variables with the outputs of the auxiliary model; however, its effect was not good. Furthermore, by filtering the observation data with the estimated transfer function of the noise model and using the filtered data to estimate the parameters, a data filtering based recursive least squares parameter estimation algorithm was presented. The simulation results show that the proposed algorithm can effectively estimate the parameters of pseudo-linear output error regressive systems.
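    Both presented algorithms build on the same recursive least squares core, which can be sketched generically: the parameter vector is corrected by a gain times the prediction error, and the covariance matrix shrinks as data accumulate. This is a plain RLS update on synthetic data, not the auxiliary-model or data-filtering variants themselves:

```python
import numpy as np

def rls_identify(phi_seq, y_seq, dim, lam=1.0):
    """Recursive least squares: theta tracks y = phi . theta_true + noise."""
    theta = np.zeros(dim)
    P = np.eye(dim) * 1e6            # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        phi = np.asarray(phi, dtype=float)
        K = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + K * (y - phi @ theta)   # correct by prediction error
        P = (P - np.outer(K, phi @ P)) / lam    # covariance update
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([1.5, -0.7])
phis = rng.normal(size=(200, 2))
ys = phis @ true_theta + rng.normal(scale=0.01, size=200)
est = rls_identify(phis, ys, 2)
```

In the auxiliary-model variant, the unknown entries of each information vector `phi` would be replaced by the auxiliary model's outputs before this update is applied.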
    Classification of multivariate time series based on singular value decomposition and discriminant locality preserving projection
    DONG Hongyu CHEN Xiaoyun
    2014, 34(1):  239-243.  DOI: 10.11772/j.issn.1001-9081.2014.01.0239
    Asbtract ( )   PDF (704KB) ( )  
    Related Articles | Metrics
    The existing multivariate time series classification algorithms require sequences of equal length and neglect category information. In order to overcome these defects, a multivariate time series classification algorithm based on Singular Value Decomposition (SVD) and discriminant locality preserving projection was proposed. Based on the idea of dimension reduction, the first right singular vector obtained from the SVD of each sample was used as its feature vector, transforming unequal-length sequences into vectors of identical size. Then the feature vectors were projected by discriminant locality preserving projection based on the maximum margin criterion, which made full use of category information to keep samples of the same class as close as possible and heterogeneous samples as dispersed as possible. Finally, classification was performed in the low-dimensional subspace using the 1 Nearest Neighbor (1NN), Parzen windows, Support Vector Machine (SVM) and Naive Bayes classifiers. Experiments were carried out on three public multivariate time series datasets: Australian Sign Language (ASL), Japanese Vowels (JV) and Wafer. The results show that the proposed algorithm achieves a lower classification error rate with basically the same time complexity.
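    The SVD step that maps unequal-length series to fixed-size features can be sketched in a few lines: a T×d series yields a d-dimensional first right singular vector regardless of T (sign-normalized here, since singular vectors are defined only up to sign):

```python
import numpy as np

def svd_feature(series):
    """Map a variable-length multivariate series (T x d) to a fixed
    d-dimensional feature: its first right singular vector."""
    _, _, vt = np.linalg.svd(np.asarray(series, dtype=float),
                             full_matrices=False)
    v = vt[0]
    # Resolve the sign ambiguity so equal series give equal features.
    return v if v[np.argmax(np.abs(v))] >= 0 else -v

# Two series of different lengths map to features of the same size.
short = np.random.default_rng(0).normal(size=(30, 4))
long_ = np.random.default_rng(1).normal(size=(80, 4))
f1, f2 = svd_feature(short), svd_feature(long_)
```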
    Method for customer segmentation based on three-way decisions theory
    HUANG Shunliang WANG Qi
    2014, 34(1):  244-248.  DOI: 10.11772/j.issn.1001-9081.2014.01.0244
    Asbtract ( )   PDF (752KB) ( )  
    Related Articles | Metrics
    To deal with the uncertainty in customer segmentation, a new method based on three-way decisions theory was proposed. The method comprehensively considered the risk cost and the profit of customer segmentation. The customer segmentation problem was modeled on three-way decisions theory, including the computation of thresholds and the application procedure. Finally, an example was given to illustrate the application procedure and the superiority of the new method. The three-way decision method can not only be used within a two-way decision procedure, but also independently as a decision method. According to the decision results, the three decision domains provide three different strategies. The introduction of three-way decisions theory provides a new view for customer segmentation that can minimize the risk cost.
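    The threshold computation can be illustrated with the standard decision-theoretic rough set formulas, where (α, β) are derived from six loss values λ<sub>xy</sub>: the cost of taking action x (accept, defer, reject) when a customer does (P) or does not (N) belong to the target segment. The loss values below are hypothetical:

```python
def three_way_thresholds(l_pp, l_bp, l_np, l_nn, l_bn, l_pn):
    """Compute (alpha, beta) from the six losses lambda_xy, assuming the
    usual ordering l_pp <= l_bp < l_np and l_nn <= l_bn < l_pn."""
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def decide(p, alpha, beta):
    """Route a membership probability p into one of three domains."""
    if p >= alpha:
        return "accept"       # positive region: target this customer
    if p <= beta:
        return "reject"       # negative region: do not target
    return "defer"            # boundary region: gather more information

alpha, beta = three_way_thresholds(0, 2, 5, 0, 2, 6)
print(alpha, beta)  # alpha = 2/3, beta = 2/5
```

The deferred (boundary) decision is exactly what a two-way segmentation cannot express, and is where the risk-cost savings of the three-way approach come from.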
    Improved K-means algorithm based on latent Dirichlet allocation for text clustering
    WANG Chunlong ZHANG Jingxu
    2014, 34(1):  249-254.  DOI: 10.11772/j.issn.1001-9081.2014.01.0249
    Asbtract ( )   PDF (932KB) ( )  
    Related Articles | Metrics
    The traditional K-means algorithm needs many iterations and often falls into local optima with unstable clustering, since the initial cluster centers are randomly selected. To solve these problems, an initial cluster center selection algorithm for K-means based on the Latent Dirichlet Allocation (LDA) model was proposed. In the improved algorithm, the top-m most important topics in the text corpus were first selected; then the corpus was preliminarily clustered on the m topic dimensions, which yielded m cluster centers that were used for further clustering on all dimensions of the text corpus. In this way, the center of each cluster can be determined based on probability rather than random selection. The experiment demonstrates that the clustering results of the improved algorithm are more accurate and need fewer iterations.
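    The topic-based center selection can be sketched as follows, assuming per-document topic distributions already estimated by an LDA model: pick the m most prominent topics, assign each document to its dominant topic among them, and use the group means as deterministic initial centers. The toy document-topic matrix is hypothetical:

```python
import numpy as np

def topic_based_centers(doc_topic, m):
    """Deterministic K-means initialization from LDA topic distributions:
    group documents by their dominant top-m topic, return group means."""
    importance = doc_topic.sum(axis=0)          # overall weight per topic
    top = np.argsort(importance)[::-1][:m]      # the top-m topics
    dominant = top[np.argmax(doc_topic[:, top], axis=1)]
    return np.array([doc_topic[dominant == t].mean(axis=0) for t in top])

# Toy document-topic matrix (rows: documents, columns: 3 topics).
doc_topic = np.array([[0.8, 0.1, 0.1],
                      [0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.1, 0.7, 0.2]])
centers = topic_based_centers(doc_topic, m=2)
```

Because the centers depend only on the topic distributions, repeated runs start from the same points, which is the source of the stability gain over random initialization.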
    Local clustering based adaptive linear neighborhood propagation algorithm for image classification
    SHENG Hongbo WANG Xili
    2014, 34(1):  255-259.  DOI: 10.11772/j.issn.1001-9081.2014.01.0255
    Asbtract ( )   PDF (772KB) ( )  
    Related Articles | Metrics
    Linear Neighborhood Propagation (LNP) is a graph-based semi-supervised classification method that can be used for image classification, but it has high computational complexity for large images, and if the number of neighbors is unsuitable, the classification results will be inaccurate. Therefore, a local clustering based adaptive LNP classification algorithm was proposed. The method improved the LNP classification algorithm in two aspects. Firstly, quick shift was used for local clustering to obtain point-clusters, which replaced pixels as the nodes of the graph model, thus reducing the size of the matrices involved. Secondly, the relationship between the geodesic distance and the Euclidean distance was used to determine the number of neighbors for each point dynamically. The experimental results show that the proposed method obtains better classification results and largely reduces the running time.
    Semi-supervised SVM image classification method with samples pre-selected by fuzzy C-means
    CHEN Yongjian WANG Xili
    2014, 34(1):  260-264.  DOI: 10.11772/j.issn.1001-9081.2014.01.0260
    Abstract ( )   PDF (691KB) ( )  
    Related Articles | Metrics
    The semi-supervised classification method based on Laplacian Support Vector Machines (LapSVM) requires all unlabeled samples to be added to the training set, so it demands much time and space and cannot handle large-scale image classification effectively. To solve these problems, a LapSVM image classification method with samples pre-selected by Fuzzy C-Means (FCM) was proposed. The method clustered the unlabeled samples with the FCM algorithm and, according to the clustering results, selected the unlabeled samples near the optimal separating hyperplane to add to the training set; these samples are likely to be support vectors carrying useful information for classification. Since they constitute only a small fraction of the unlabeled samples, the training set was reduced. The simulation results show that this method exploits the inherent discriminative information of the unlabeled samples, effectively improves classifier accuracy, and reduces the algorithm's time and space complexity.
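The pre-selection idea can be sketched with a tiny fuzzy C-means: samples whose memberships are ambiguous lie near cluster boundaries and are the candidates worth adding to the training set. The implementation below is a minimal illustration (deterministic initial centers passed by index, toy 2-D data, and the 0.6 ambiguity threshold are all assumptions, not values from the paper):

```python
import numpy as np

def fcm(X, init_idx, m=2.0, iters=30):
    """Tiny fuzzy C-means returning the membership matrix u (n x c)."""
    centers = X[init_idx].astype(float)
    for _ in range(iters):
        # distances from every sample to every center (epsilon avoids /0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)      # fuzzy memberships
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u

# two compact clusters plus one ambiguous point halfway between them
X = np.array([[0.0, 0.0], [0.5, 0.0], [10.0, 0.0], [10.5, 0.0], [5.2, 0.0]])
u = fcm(X, init_idx=[0, 2])
# pre-select samples with no dominant membership: they sit between
# clusters and are likely support-vector candidates
ambiguous = np.where(u.max(axis=1) < 0.6)[0]
```

Only the boundary sample survives the filter, so the training set grows by a small fraction of the unlabeled pool.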
    Discriminative maximum a posteriori for acoustic model adaptation
    QI Yaohui PAN Fuping GE Fengpei YAN Yonghong
    2014, 34(1):  265-269.  DOI: 10.11772/j.issn.1001-9081.2014.01.0265
    Abstract ( )   PDF (706KB) ( )  
    Related Articles | Metrics
    In Minimum Phone Error based Maximum A Posteriori (MPE-MAP) adaptation, to estimate the center of the prior distribution more accurately and thus improve recognition performance, Maximum Mutual Information based MAP (MMI-MAP) adaptation and H-criterion based MAP (H-MAP) adaptation, where the H-criterion interpolates the MMI and Maximum Likelihood (ML) criteria, were used to estimate the center of the prior distribution. This led to MMI-MAP prior based MPE-MAP (MPE-MMI-MAP) and H-MAP prior based MPE-MAP (MPE-H-MAP). The experimental results on task adaptation show that both proposed methods obtain better recognition performance than MPE-MAP, MMI-MAP and MAP adaptation, with MPE-MMI-MAP and MPE-H-MAP achieving 3.4% and 2.7% relative improvement over MPE-MAP respectively.
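For reference, the MAP mean update and the H-criterion interpolation mentioned above take the following standard textbook forms (general forms with a prior weight tau and occupancy statistics gamma; the paper's exact notation may differ):

```latex
\hat{\mu}_k \;=\; \frac{\tau\,\mu_k^{\mathrm{prior}} + \sum_{t}\gamma_k(t)\,x_t}{\tau + \sum_{t}\gamma_k(t)},
\qquad
\mathcal{F}_{H} \;=\; \mathcal{F}_{\mathrm{MMI}} + h\,\mathcal{F}_{\mathrm{ML}}
```

where $\gamma_k(t)$ is the occupancy of Gaussian $k$ at time $t$ and $h$ weights the ML term; the proposed methods differ in which criterion supplies $\mu_k^{\mathrm{prior}}$.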
    Test clue generation based on UML interaction overview diagram
    ZENG Yi WANG Cuiqin LI Hanyu HONG Hao
    2014, 34(1):  270-275.  DOI: 10.11772/j.issn.1001-9081.2014.01.0270
    Abstract ( )   PDF (814KB) ( )  
    Related Articles | Metrics
    Concerning the problem that a single UML model cannot test software sufficiently, a new method was proposed for automatically generating software test clues by exploiting the characteristics of the UML 2.0 interaction overview diagram. First, formal definitions of UML class diagrams, sequence diagrams and Interaction Overview Diagrams (IOD) were given. Second, the Node Control Flow Graph (NCFG) was constructed by extracting the process information of the interaction overview diagram, while the Message Sequence Diagrams (MSD) were constructed by extracting the object interaction information; the testable IOD model was then built by embedding the MSDs' message paths into the NCFG. Finally, test clues were generated under the two-two coverage criterion. The experiment verifies that this automatic method avoids combinatorial explosion while guaranteeing test adequacy.
    System-level test case generating method based on UML model
    FENG Qiuyan
    2014, 34(1):  276-280.  DOI: 10.11772/j.issn.1001-9081.2014.01.0276
    Abstract ( )   PDF (686KB) ( )  
    Related Articles | Metrics
    A software testing method based on Unified Modeling Language (UML) models was presented, in which use case diagrams and sequence diagrams were integrated for system testing. First, three algorithms were proposed: one for generating the Use case diagram Execution Graph (UEG), one for generating the Sequence diagram Execution Graph (SEG), and one for generating the System Testing Graph (STG) from the UEG and SEG. Then the UEG, SEG and STG were traversed to generate system-level test cases under three specified coverage criteria, mainly to detect interaction, scenario, use case and use case dependency faults. Finally, the experimental validation shows that the solution can perform system-level software testing based on use case diagrams and sequence diagrams.
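The traversal step that both of these UML-based papers rely on can be sketched generically: enumerate simple paths through a small execution graph, each path becoming one test case. The graph shape and node names below are illustrative, not taken from either paper:

```python
def all_paths(graph, start, end, path=None):
    """Depth-first enumeration of simple paths from start to end;
    a generic sketch of 'traverse the execution graph to get test cases'."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:          # keep paths simple: no node revisits
            paths += all_paths(graph, nxt, end, path)
    return paths

# toy execution graph: start -> {login, browse} -> checkout -> end
g = {"start": ["login", "browse"], "login": ["checkout"],
     "browse": ["checkout"], "checkout": ["end"]}
cases = all_paths(g, "start", "end")
```

Each returned path exercises one scenario; coverage criteria then decide which subset of paths must be kept.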
    Idle travel optimization of tool path on intensive multi-profile patterns
    LI Xun CHEN Ming
    2014, 34(1):  281-285.  DOI: 10.11772/j.issn.1001-9081.2014.01.0281
    Abstract ( )   PDF (805KB) ( )  
    References | Related Articles | Metrics
    In the garment industry, shortening the idle travel of the tool path is important for efficiently cutting patterns from a piece of cloth. Since the cutting patterns have complex shapes and are densely distributed, the existing algorithms cannot easily obtain the optimal cutting tool path. Based on the MAX-MIN Ant System (MMAS), a new algorithm was proposed to optimize the idle travel of the tool path. The algorithm consists of four steps: 1) use the standard MMAS algorithm to define the pattern order; 2) seek a node on each pattern to serve as the tool entrance; 3) optimize the node sequence with MMAS; 4) repeat steps 2) and 3) to achieve the optimal tool path. The experiments show that the proposed algorithm can effectively generate optimal tool paths; compared with the line-scanning algorithm and the Novel Ant Colony System (NACS) algorithm, the results are improved by 60.15% and 22.44% respectively.
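Steps 1) and 3) rest on the MMAS pheromone update, whose defining feature is clamping every trail into [tau_min, tau_max] so no edge is ever ruled out or locked in. A minimal sketch of that update (the tour, tour length, and bound values are illustrative choices, not values from the paper):

```python
def mmas_update(tau, best_tour, best_len, rho=0.1, tau_min=0.01, tau_max=1.0):
    """One MAX-MIN Ant System pheromone update on a symmetric matrix:
    evaporate everywhere, deposit only on the best tour, then clamp."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)               # evaporation
    deposit = 1.0 / best_len                        # shorter tour => more pheromone
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += deposit                        # reinforce best-tour edges
        tau[b][a] += deposit
    for i in range(n):
        for j in range(n):                          # MMAS clamping step
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))
    return tau

tau = [[1.0] * 4 for _ in range(4)]
tau = mmas_update(tau, best_tour=[0, 2, 1, 3], best_len=10.0)
```

After one update, best-tour edges sit at the upper bound while all others have decayed, which is how MMAS steers later ants without premature convergence.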
    Steel furnace online quality monitoring method based on real-time data processing
    LI Baolian ZHANG Xiaolong
    2014, 34(1):  286-291.  DOI: 10.11772/j.issn.1001-9081.2014.01.0286
    Abstract ( )   PDF (868KB) ( )  
    Related Articles | Metrics
    In the production process of steel heating furnaces, the data stream is hard to manage and analyze, and production monitoring and online quality analysis cannot be handled effectively. To solve these problems, this paper proposed a production monitoring and online quality analysis method based on real-time data analysis, which combines a real-time database with a relational database and applies Six Sigma management tools and control chart techniques. The implemented system covers real-time data processing, production monitoring, and online/offline quality monitoring. The performance of the system indicates that it can be effectively applied to real-time data analysis and online quality monitoring of heating furnaces.
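The control-chart step can be sketched with the standard Shewhart X-bar limits (center line plus or minus three within-subgroup standard deviations over the square root of the subgroup size); the furnace temperatures and sigma below are toy numbers, not data from the paper:

```python
import statistics

def xbar_limits(subgroup_means, sigma_within, n):
    """Shewhart X-bar chart limits: grand mean +/- 3*sigma/sqrt(n)."""
    center = statistics.fmean(subgroup_means)
    delta = 3.0 * sigma_within / n ** 0.5
    return center - delta, center, center + delta

# toy furnace-temperature subgroup means (deg C), subgroup size 4
lcl, cl, ucl = xbar_limits([850.0, 852.0, 851.0, 849.0], sigma_within=2.0, n=4)
# any new subgroup mean outside the limits triggers a quality alarm
out_of_control = [m for m in [850.5, 858.0] if not lcl <= m <= ucl]
```

In the online setting, each incoming subgroup mean from the real-time database would be checked against these limits as it arrives.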
    Real-time temperature monitoring system design based on Matlab GUI serial communication
    XUE Fei YANG Youliang MENG Fanwei DONG Futao
    2014, 34(1):  292-296.  DOI: 10.11772/j.issn.1001-9081.2014.01.0292
    Abstract ( )   PDF (731KB) ( )  
    Related Articles | Metrics
    In order to improve the speed of data processing and the efficiency of software development, a real-time temperature monitoring system was devised based on the Matlab Graphical User Interface (GUI). The serial port toolbox in Matlab and the Modbus communication protocol were used to connect a SHIMADEN SRS13A thermostat to a PC, implementing real-time monitoring of the metal surface temperature during heating. The system software has a simple interface, convenient operation and a small memory footprint, and various operating modes can be achieved by setting different parameters. The test results show that the system runs rapidly and stably, and that the temperature response curves under different parameter configurations are plotted promptly and accurately. The system's sampling interval is 1 s and its temperature measurement accuracy is 0.1℃.
    Application of OPTICS to lightning nowcasting
    HOU Rongtao LU Yu WANG Qin YUAN Chengsheng WANG Jun
    2014, 34(1):  297-301.  DOI: 10.11772/j.issn.1001-9081.2014.01.0297
    Abstract ( )   PDF (850KB) ( )  
    Related Articles | Metrics
    Concerning the unevenly distributed density of lightning location data, a lightning nowcasting model based on the Ordering Points To Identify the Clustering Structure (OPTICS) algorithm was proposed. The model analyzed continuous periods of lightning location data with OPTICS, effectively filtering out the sparse points that would distort the distribution of lightning clouds. Based on the lightning clusters produced by OPTICS, the model used a dilate-corrode algorithm to restore the real distribution of the lightning clouds, and then predicted the future lightning location area according to the moving trend of the clouds. Furthermore, to overcome the long running time of the traditional algorithm, an adjacency list and an improved seed-list updating strategy were introduced into OPTICS. The experimental results show that the OPTICS-based model is more applicable to lightning nowcasting, achieving higher accuracy and lower time consumption.
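The sparse-point filtering relies on the same local-density notion that OPTICS orders points by: a strike is kept only if enough other strikes fall within a radius eps. The stand-alone filter below is a simplified sketch of that step (eps, min_pts and the toy coordinates are illustrative, and the full OPTICS reachability ordering is not reproduced):

```python
def filter_sparse(strikes, eps, min_pts):
    """Keep only lightning strikes with at least min_pts other strikes
    within radius eps; low-density points are treated as noise."""
    def n_neighbours(p):
        # count other strikes within eps (distance 0 excludes the point itself)
        return sum(0 < ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= eps
                   for q in strikes)
    return [p for p in strikes if n_neighbours(p) >= min_pts]

# four tightly packed strikes form a cell; the fifth is an isolated outlier
strikes = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2), (0.2, 0.2), (5.0, 5.0)]
dense = filter_sparse(strikes, eps=1.0, min_pts=2)
```

The surviving dense points are then what the dilate-corrode step would grow back into a cloud shape.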
    Application of variance-based optimal combined weight process in evaluating Internet information resources
    JIANG Zhenghua
    2014, 34(1):  302-308.  DOI: 10.11772/j.issn.1001-9081.2014.01.0302
    Abstract ( )   PDF (1003KB) ( )  
    Related Articles | Metrics
    In view of the free openness, diffuse access and distributed sharing that essentially characterize Internet information resources, scientifically evaluating websites, which are important information nodes in the network space, helps not only to improve their construction quality but also to develop network resources better. Based on the minimum variance principle, a new optimal combined weighting process was proposed to determine the combined weight, which merges the objective weight from entropy-weight fuzzy synthetic evaluation with the subjective weight from the analytic hierarchy process; the weight determined by either single weighting model alone was thereby amended. The scores of every index were calculated by expert investigation and statistical analysis. The combined weighting model yields a more scientific and reasonable decision result for website evaluation. Finally, the website of the Department of Mathematics at Nanjing University was taken as an example to illustrate the proposed model, and some direct suggestions were put forward for optimizing future website reconstruction.
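The combination step can be sketched generically: blend the subjective (AHP) and objective (entropy) weight vectors, renormalize, and score each website against the combined weights. Note that alpha is fixed by hand here purely for illustration, whereas the paper derives the blend from its minimum variance criterion; all numbers are toy values:

```python
def combine(w_subj, w_obj, alpha):
    """Linear blend of subjective and objective index weights,
    renormalized so the combined weights sum to 1."""
    w = [alpha * s + (1 - alpha) * o for s, o in zip(w_subj, w_obj)]
    total = sum(w)
    return [x / total for x in w]

def score(indicators, weights):
    """Weighted sum of a website's normalized index scores."""
    return sum(i * w for i, w in zip(indicators, weights))

# toy three-index example: AHP weights, entropy weights, site index scores
w = combine([0.5, 0.3, 0.2], [0.4, 0.4, 0.2], alpha=0.5)
site_score = score([0.8, 0.6, 0.9], w)
```

The blend tempers either single model: an index over-weighted by expert judgment alone is pulled back toward its data-driven entropy weight, and vice versa.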
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn