
Table of Contents

    01 December 2014, Volume 34 Issue 12
    Network and communications
    Dynamical replacement policy based on cost and popularity in named data networking
    HUANG Sheng TENG Mingnian CHEN Shenglan LIU Huanlin XIANG Jinsong
    2014, 34(12):  3369-3372. 
    Abstract | PDF (625KB)

    In view of the problem of replacing cached data efficiently in Named Data Networking (NDN), a new replacement policy that considered both the popularity and the request cost of data was proposed. The policy dynamically allocated the proportions of the popularity factor and the request-cost factor according to the interval between two successive requests for the same data, so that nodes cached data with high popularity and high request cost. Users could then obtain the data from a local node on the next request, which reduced the response time of data requests and relieved link congestion. The simulation results show that the proposed replacement policy can efficiently improve the in-network hit rate and reduce the delay and distance for users to fetch data.
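
    The popularity/cost weighting described above can be sketched as a score-based eviction rule. The linear score and the example weight below are illustrative assumptions, not the paper's exact formula:

```python
def evict_candidate(cache, w):
    """Return the key with the lowest combined score w*popularity + (1-w)*cost.

    cache maps a content name to a (popularity, cost) pair; the entry with
    the smallest score is the one a node would replace first.
    """
    return min(cache, key=lambda k: w * cache[k][0] + (1 - w) * cache[k][1])

cache = {"a": (5, 1.0), "b": (1, 0.5), "c": (3, 2.0)}
# With w = 0.5, scores are a: 3.0, b: 0.75, c: 2.5 -> "b" is evicted first.
print(evict_candidate(cache, 0.5))  # -> b
```

    In the paper the weight w itself is adapted from the inter-request interval; here it is fixed for illustration.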

    Physical layer model for 802.11n-ZigBee coexistence: subcarrier-nulling multi-input multi-output
    LAI Xinyu ZHAO Zenghua WU Xuanxuan
    2014, 34(12):  3373-3380. 
    Abstract | PDF (1267KB)

    In view of the sharp fall in network performance caused by the channel overlap and interference that arise when WiFi and ZigBee share the ISM (Industrial, Scientific and Medical) band, and of the severe spectrum underutilization induced by the current CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) mechanism, a subcarrier-nulling 2×2 noncoherent-antenna MIMO (Multi-Input Multi-Output) PHY (Physical Layer) model was proposed. In this model, to avoid co-channel interference, a WiFi transmitter detects ZigBee signals in its adopted channel before data transmission; if any are present, the transmitter nulls the subcarriers within the spectrum occupied by the ZigBee devices and uses the remaining subcarriers to transmit its packets. The receiver identifies the subcarriers used by the transmitter and completes the follow-up processing. In this way, interference is eliminated by separating the signal spectra, achieving heterogeneous network coexistence and making parallel data transmission available. Experiments were run on a test bed composed of a GNURadio/USRP platform and ZigBee nodes. The results show that subcarrier nulling enables the 2×2 noncoherent-antenna MIMO to achieve 50%-70% of the full-bandwidth throughput, and that during parallel data transmission ZigBee's valid received-packet ratio is at least 90%.

    Network selection handover strategy based on context-awareness
    TAO Yang ZHOU Kun
    2014, 34(12):  3381-3386. 
    Abstract | PDF (847KB)

    In view of the problem of how to select a network dynamically in a heterogeneous wireless network environment, a network selection and handover strategy based on context-awareness was proposed. A dynamic network solution and a fuzzy-logic handover decision were designed. Based on them, the strategy chose a selection index to filter out the access networks that did not satisfy the requirements, and designed a network score function for network ranking. The simulation results show that the proposed handover strategy can select a suitable access network and use the resources effectively.

    Indoor positioning based on Kalman filter and weighted median
    XIAO Ruliang LI Yinuo JIANG Shaohua MEI Zhong CAI Shengzhen
    2014, 34(12):  3387-3390. 
    Abstract | PDF (755KB)

    In order to solve the problem of high-precision indoor positioning based on received signal strength, a novel WMKF (Kalman Filtering and Weighted Median) positioning algorithm was proposed. Unlike previous indoor localization algorithms, the Kalman filter was first used to smooth random errors, and the weighted median method was applied to reduce the influence of gross errors; then the log-distance path loss model was used to obtain the decline curve and calculate the estimated distance. Finally, the centroid method was used to obtain the position of the target node. The experimental results show that the WMKF algorithm alleviates the poor positioning stability in relatively complex environments and effectively enhances the positioning accuracy, keeping the error between 0.81 m and 1 m.
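
    The distance-estimation step can be illustrated with the standard log-distance path loss model together with a simple weighted median; the RSSI values and parameters below are illustrative assumptions, not the paper's calibration:

```python
def path_loss_distance(rssi, rssi_d0, n, d0=1.0):
    """Invert the log-distance path loss model
    rssi = rssi_d0 - 10*n*log10(d/d0), i.e. d = d0 * 10**((rssi_d0 - rssi)/(10*n))."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

def weighted_median(values, weights):
    """Value at which the cumulative weight first reaches half of the total."""
    pairs = sorted(zip(values, weights))
    half, acc = sum(weights) / 2.0, 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

# A 20 dB extra loss with path-loss exponent n = 2 corresponds to 10x the distance.
print(path_loss_distance(-60.0, -40.0, 2.0))  # -> 10.0
# Down-weighting the outlier -35 dBm keeps the median at the plausible reading.
print(weighted_median([-62.0, -60.0, -35.0], [1.0, 1.0, 0.2]))  # -> -60.0
```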

    Boundary node identification algorithm for three-dimensional sensor networks based on flipping plane
    CHENG Cheng KONG Mengmeng HU Guang-min YU Caifu
    2014, 34(12):  3391-3394. 
    Abstract | PDF (639KB)

    In view of boundary identification for sensor networks in 3D environments, a distributed algorithm for boundary node identification based on flipping a finite plane was presented. Starting from three known adjacent nodes, the finite plane was flipped about each edge of the triangle as an axis; the first node scanned became a new boundary node and, together with the two nodes on the axis, formed a new triangle. This process was carried out iteratively until the boundary contour was obtained and the boundary nodes were identified. The experimental results show that, compared with the Alpha-shape3D algorithm, the proposed algorithm can greatly reduce the number of redundant boundary nodes.

    New UWB localization algorithm based on modified DFP algorithm
    GUO Jianguang ZHEN Ziwei YANG Rener
    2014, 34(12):  3395-3399. 
    Abstract | PDF (651KB)

    Aiming at the slow convergence speed of traditional localization algorithms, and exploiting the precise timing characteristics of UWB (Ultra Wide-Band) communication, a novel Davidon-Fletcher-Powell (DFP) algorithm based on the Armijo step size was proposed to locate the target node under the TDOA (Time Difference Of Arrival) location model. The Taylor series expansion algorithm was further introduced at the initial position to acquire the final location, achieving precise localization for the UWB communication system. The experimental results show that the proposed algorithm not only lowers the sensitivity of the localization optimization to the initial position, but also improves the average localization precision 7-fold over the steepest descent method with precise time measurements. The proposed localization algorithm performs better in both localization accuracy and efficiency.
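
    The Armijo step-size rule at the heart of the modified DFP algorithm can be sketched as follows; for brevity the sketch drives plain steepest descent on a hypothetical one-dimensional objective rather than the full DFP quasi-Newton update:

```python
def armijo_step(f, grad, x, d, alpha0=1.0, beta=0.5, sigma=0.1):
    """Backtracking (Armijo) line search: shrink the step a until
    f(x + a*d) <= f(x) + sigma * a * grad(x) * d holds."""
    a = alpha0
    while f(x + a * d) > f(x) + sigma * a * grad(x) * d:
        a *= beta
    return a

# Hypothetical 1-D objective with its minimizer at x = 3.
f = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)

x = 0.0
for _ in range(50):
    d = -grad(x)                          # steepest-descent direction
    x += armijo_step(f, grad, x, d) * d   # DFP would use a quasi-Newton direction
print(x)  # -> 3.0
```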

    Localization and speed measurement algorithm targeting marine mammals for underwater cognitive acoustic networks
    YAO Guidan JIN Zhigang SHU Yishan
    2014, 34(12):  3400-3404. 
    Abstract | PDF (731KB)

    In view of the problem of environmental sensing in Underwater Cognitive Acoustic Networks (UCAN), a Passive Localization algorithm targeting Marine mammals (PLM) and a Speed Measurement algorithm based on the Doppler effect (SMD) were proposed. PLM localizes marine mammals by retrieving and screening the received signal power against the source-level range of their signals. SMD calculates speed from the Doppler shift of the received signals on the basis of the PLM localization. The experimental results show that PLM and SMD can achieve high accuracy. The average error of PLM increases with the dolphins' speed, with a mean value of about 10 m, and the localization success rate of PLM can reach 90%. The combination of PLM and SMD helps to estimate the movement area of marine mammals accurately.
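
    The Doppler-based speed computation of SMD can be illustrated with the textbook formula for a source approaching a stationary receiver; the 1500 m/s nominal underwater sound speed and the frequencies below are illustrative assumptions:

```python
def doppler_speed(f_emitted, f_observed, c=1500.0):
    """Radial speed of a source approaching a stationary receiver, from the
    Doppler relation f_observed = f_emitted * c / (c - v), solved for v.
    c defaults to a nominal underwater sound speed of 1500 m/s."""
    return c * (1.0 - f_emitted / f_observed)

# A source emitting 10 kHz that is heard as 10.1 kHz closes at roughly 14.85 m/s.
print(doppler_speed(10000.0, 10100.0))
```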

    Empirical analysis of symmetry degree for micro-blog social network
    KANG Zedong YU Jinghu DING Yiming
    2014, 34(12):  3405-3408. 
    Abstract | PDF (811KB)

    The abundant registered users of Twitter and Sina micro-blog form a social network of follow relationships; using the degree of symmetry, how symmetry changes with the scale of the social circle was studied. Firstly, based on a collection of 1,000,000 follow relationships among Sina micro-blog users, together with 236 Twitter users and their follow relationships, the initial social network was established. The focus lay on the connected sub-networks with obvious symmetrical connections; the elimination method was then applied to reach the conclusion that the major factors affecting the symmetry of the maximum connected sub-network are the so-called "big V" users and negligible users. After that, comparative analysis showed that the sub-network consisting of the big V users in Twitter has stronger symmetry. Finally, the difference between the two kinds of micro-blog services in functional positioning was identified. The study of the symmetry of all connected sub-networks within the initial network shows that as the scale of a public social circle decreases, the corresponding symmetry becomes stronger.

    Distortion estimated model for high definition stereoscopic video transmission
    CHEN Meizi WANG Xiaodong LI Shaobo ZHANG Lianjun
    2014, 34(12):  3409-3413. 
    Abstract | PDF (738KB)

    In view of the facts that high-definition stereoscopic video sequences have high resolution and little information per macroblock, and that network transmission introduces errors, an end-to-end transmission distortion model was proposed. Considering the inter-frame error diffusion caused by packet loss and the characteristics of spatial and temporal correlation, the recursive algorithm could estimate distortion accurately. The error concealment method of copying the frame preceding the lost frame was mainly used in the model, reducing the dependency on the decoder. The simulation results show that the average prediction error of the distortion model can be controlled within 6%, and that the model can be adapted to estimate transmission distortion for stereoscopic video sequences with different features and resolutions under different network environments.

    Steiner tree heuristic algorithm based on weighted node
    WANG Xiaolong ZHAO Lifeng
    2014, 34(12):  3414-3416. 
    Abstract | PDF (547KB)

    The minimum Steiner tree problem is NP-complete and is widely used in point-to-multipoint routing in communication networks. In order to realize more link sharing and reduce the cost of the desired Steiner tree, an algorithm named NWMPH (Node Weight based Minimum cost Path Heuristic) based on weighted nodes was proposed. The algorithm constructed a weighting formula for non-regular points, assigned each non-regular point a weight, and modified the link costs according to the weights; the shortest paths under the modified costs were then used to connect all regular points, yielding the minimum tree containing all regular points. Computation on part of the STEINLIB standard data set shows that the NWMPH algorithm takes basically the same time as the MPH algorithm but obtains a Steiner tree of lower cost, and that it uses less time and obtains a lower-cost Steiner tree than the KBMPH algorithm.

    Critical-path-unchanged extensions for parallel computing under fixed structure constraint
    XIONG Huanliang WU Canghai KUANG Guijuan YANG Wenji
    2014, 34(12):  3417-3423. 
    Abstract | PDF (1109KB)

    Extending the scale of parallel computing is an effective approach to achieve higher computing performance. However, under a fixed structure constraint, it is difficult to improve the performance of parallel computing merely by extending its scale. Concerning this extension problem, the factors of the architecture and the parallel tasks that affect scalability were investigated, and both the architecture and the parallel tasks were modeled as weighted graphs. Then, an extension method that keeps the critical path unchanged was proposed. The novel extension method, in essence, does not change the graph's structure and only adjusts the graph's weights. Additionally, some conclusions about the new extension method were drawn by further derivation. Finally, simulation experiments on the SimGrid platform were conducted to test the effectiveness of the proposed extension method. The results show that the proposed method can solve the extension problem while realizing isospeed-e extension, and that it can help guide practical extensions of parallel computing.

    Analysis of global convergence of crossover evolutionary algorithm based on state-space model
    WANG Dingxiang LI Maojun LI Xue CHENG Li
    2014, 34(12):  3424-3427. 
    Abstract | PDF (611KB)

    The Evolutionary Algorithm based on State-space model (SEA) is a novel real-coded evolutionary algorithm with good optimization effects on engineering optimization problems. The global convergence of the crossover SEA (SCEA) was studied to promote the theoretical and applied research of SEA, and the conclusion that SCEA is not globally convergent was drawn. A Modified Crossover Evolutionary Algorithm based on State-space model (SMCEA) was then presented by changing the construction of the state evolution matrix and introducing an elastic search operation, and its global convergence was proved using a homogeneous finite Markov chain. The experimental analysis on two test functions shows that SMCEA is substantially improved in convergence rate, ability to reach the optimal value, and running time, which proves the effectiveness of SMCEA and leads to the conclusion that SMCEA is better than the Genetic Algorithm (GA) and SCEA.

    Artificial intelligence
    Multi-target pinning flocking algorithm combined with local adaptive tracking
    WANG Hai LUO Qi XU Tengfei
    2014, 34(12):  3428-3432. 
    Abstract | PDF (868KB)

    Traditional multi-Agent flocking algorithms are not universal when only single-target tracking is considered, and the existing multi-target flocking control relies on centralized coordinated movement based on global target information rather than on distributed coordinated control based on local destination information. Therefore, a distributed motion-cooperative pinning flocking algorithm combined with local adaptive tracking was presented. First, a local adaptive tracking strategy based on separation, aggregation, velocity matching and direct feedback was introduced to achieve dynamic local following interaction. Secondly, a node influence index evaluation algorithm based on the pinning idea was presented to select the m information Agents that track the m targets, which plays an important role in sensing external information; owing to the local adaptive detection mechanism, different information individuals indirectly lead the individuals with different targets to track their respective targets. Finally, a new class of aggregation and exclusion potential functions with fewer adjustable parameters and high efficiency was designed; based on these potential functions, Agents with the same target gather during tracking while Agents with different targets avoid collision. The experimental results in three-dimensional space show the feasibility and effectiveness of the multi-target tracking.

    Protein function prediction based on directed bi-relational graph and multi-kernel fusion
    MENG Jun DIAO Yin
    2014, 34(12):  3433-3437. 
    Abstract | PDF (865KB)

    The protein interaction networks built from multiple kernels over heterogeneous data sources contain a huge amount of information, and owing to data redundancy, the predicted results cannot fully reflect the distribution of the data. Therefore, the functional category network and the protein interaction networks were combined, and a multi-label learning algorithm based on directed bi-relational graph theory and multi-kernel fusion was proposed. First, an adaptive learning model was built from a loss function and the expectation maximization algorithm. Then, multiple associative matrices were obtained by using the graph optimization strategy to fuse the functional category network and the protein interaction networks. Finally, the prediction model was built from the associative matrices and the adaptive learning model. The experimental results on multiple heterogeneous protein data sources of Yeast and Mouse show that the proposed method has higher prediction accuracy and a lower label loss rate.

    Depth-image based 3D map reconstruction of indoor environment for mobile robots
    ZHANG Yi WANG Longfeng YU Jiahang
    2014, 34(12):  3438-3440. 
    Abstract | PDF (567KB)

    Considering that the Extended Kalman Filter (EKF) performs well only in linear systems for real-time 3D mapping and is largely affected by linearization errors in nonlinear systems, an Iterated Extended Kalman Filter (IEKF) based on the depth data of Kinect was proposed. The method used IEKF to predict the camera trajectory from Microsoft Kinect RGB-D (Red-Green-Blue-Depth) data, after which the Iterative Closest Point (ICP) algorithm was employed to perform fine registration on the depth images and generate the 3D point cloud map. The experimental results show that, compared with the traditional EKF algorithm, IEKF generates less error and produces a smoother 3D point cloud map. The method realizes 3D map building and is more practical.

    Posture recognition method based on Kinect predefined bone
    ZHANG Dan CHEN Xingwen ZHAO Shuying LI Jiwei BAI Yu
    2014, 34(12):  3441-3445. 
    Abstract | PDF (740KB)

    In view of the problems that vision-based posture recognition imposes strict requirements on the environment and has low anti-interference capacity, a posture recognition method based on predefined bones was proposed. The algorithm detected the human body by combining Kinect multi-scale depth and gradient information, recognized every part of the body with a random forest trained on positive and negative samples, and built the body posture vector. According to the posture category, the optimal separating hyperplane and kernel function were built using an improved support vector machine to classify postures. The experimental results show that the recognition rate of this scheme is 94.3%, and that it offers good real-time performance, strong anti-interference capacity and good robustness.

    Criticality analysis method based on fuzzy Bayesian networks
    QU Sheng SHI Wuxi XIU Chunbo
    2014, 34(12):  3446-3450. 
    Abstract | PDF (825KB)

    Considering the defects of traditional Failure Modes, Effects and Criticality Analysis (FMECA), a criticality analysis method based on fuzzy Bayesian networks was proposed. The approach combined fuzzy theory with Bayesian network techniques: the fuzzy judgments of experts were described by triangular fuzzy numbers, which were transformed into fuzzy subsets of rankings through the mapping of fuzzy sets, and fuzzy rules with a belief structure were used to represent the relationship between the properties and the hazards of the failure modes. Bayesian network inference algorithms were used to synthesize the fuzzy rules with the belief structure, the hazard degree in the form of fuzzy subsets was obtained by Bayesian inference, and a precise value of the fault hazard ranking was gained through defuzzification to determine the hazard degree of the failure mode. The experimental results show that the proposed method improves the accuracy and the application range of the traditional analysis method.
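
    The defuzzification step can be illustrated with the standard centroid (center-of-gravity) formula for a triangular fuzzy number (a, b, c); this is the generic textbook formula, not necessarily the paper's exact mapping:

```python
def triangular_centroid(a, b, c):
    """Centroid (center-of-gravity) defuzzification of a triangular fuzzy
    number (a, b, c): the x-coordinate of the centroid of the triangle with
    vertices (a, 0), (b, 1) and (c, 0)."""
    return (a + b + c) / 3.0

# A skewed expert judgment (1, 2, 6) defuzzifies to the crisp ranking value 3.
print(triangular_centroid(1.0, 2.0, 6.0))  # -> 3.0
```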

    Generalized interval-valued trapezoidal fuzzy soft set and its application in group preferences aggregation
    CHEN Xiuming QIAN Li LI Jingming WU Weiwei CHENG Jiaxing
    2014, 34(12):  3451-3457. 
    Abstract | PDF (895KB)

    Since different users do not focus on exactly the same attributes of the same item, individuals distribute weights over commodity attributes differently. A method based on the generalized interval-valued trapezoidal fuzzy soft set was proposed to deal with this kind of recommendation problem. First, the concept of the generalized interval-valued trapezoidal fuzzy soft set was established by combining the concepts of the generalized interval-valued trapezoidal fuzzy set and the soft set, and some basic operations on it, such as the "and" operation and the "or" operation, were defined. Using these operations, as well as the center-of-gravity method for generalized interval-valued trapezoidal fuzzy numbers, commodities could be ranked, and a group preference model could be constructed from the preferences of the group members. Finally, car recommendation was used as an example to introduce the group preference aggregation algorithm, and this numerical example illustrates the feasibility and effectiveness of the proposed method.

    Fully secure identity-based online/offline encryption
    WANG Zhanjun LI Jie MA Haiying WANG Jinhua
    2014, 34(12):  3458-3461. 
    Abstract | PDF (659KB)

    The existing Identity-Based Online/Offline Encryption (IBOOE) schemes do not allow the attacker to choose the target identity adaptively, since they are only proven secure in the selective model. The online/offline technique was introduced into fully secure Identity-Based Encryption (IBE) schemes, and a fully secure IBOOE scheme was proposed. Based on three static assumptions in composite-order groups, the scheme was proven fully secure with the dual system encryption methodology. Compared with the well-known IBOOE schemes, the proposed scheme not only greatly improves the efficiency of online encryption, but also meets the demand for full security in practical systems.

    Robust zero watermarking algorithm based on bit plane theory and singular value decomposition
    QU Changbo WANG Dongfeng
    2014, 34(12):  3462-3465. 
    Abstract | PDF (825KB)

    In view of the watermark robustness of spatial-domain information hiding algorithms, a fast and robust zero-watermarking algorithm was proposed and applied to digital-image information hiding to realize watermark extraction and certification. Firstly, Bit Plane (BP) theory was used to analyze the bit planes at different levels, a bit-plane matrix structure was set up, and the numbers of non-zero values in the bit planes were combined to generate the eigen matrix. Then the eigen matrix was partitioned, singular value decomposition was applied to generate the singular value matrix of the largest block, and the zero-watermark information was obtained by registering the matrix with two-dimensional chaotic encryption. Simulation results show that the proposed algorithm has high robustness against attacks: compared with similar algorithms, it is improved by 6% against salt-and-pepper noise attacks and by up to 12% against common mixed attacks.
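
    The bit-plane decomposition that the eigen matrix is built from can be sketched as follows; the toy 2×2 image is an illustrative assumption:

```python
def bit_plane(image, k):
    """Extract the k-th bit plane (k = 0 is the least significant bit) of a
    grayscale image given as a list of rows of 8-bit pixel values."""
    return [[(pixel >> k) & 1 for pixel in row] for row in image]

img = [[200, 17], [129, 64]]
# MSB plane (k = 7): 200 and 129 have bit 7 set, 17 and 64 do not.
print(bit_plane(img, 7))  # -> [[1, 0], [1, 0]]
# Counting the non-zero values per plane gives the kind of statistic the
# algorithm combines into its eigen matrix.
print(sum(sum(row) for row in bit_plane(img, 7)))  # -> 2
```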

    Encryption algorithm based on 2D X-type reversible cellular automata
    YUAN Ye LI Jingyi CHEN Juhua
    2014, 34(12):  3466-3469. 
    Abstract | PDF (570KB)

    Concerning the complicated structure and evolution of traditional 2D neighborhood cellular automata and their low encryption efficiency, as well as the small key space, low diffusion speed and need for multiple rounds of iteration to produce an avalanche effect in 1D cellular automata, a new encryption algorithm based on 2D X-type reversible cellular automata and the Arnold transformation was proposed. Firstly, the plaintext was evolved by the proposed cellular automata; then it was transformed by the Arnold transformation and a cyclic shift transformation after every evolution, until the ciphertext was sufficiently encrypted. The experimental results show that the key space is increased by 16.8% and that the algorithm has excellent robustness against brute-force attack. In addition, its diffusion and confusion are good enough to produce a strong avalanche effect and resist chosen-plaintext attack.
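
    The Arnold transformation used between evolutions is the classic cat map on pixel coordinates; a minimal sketch on a toy 2×2 image (the image values are illustrative):

```python
def arnold(image):
    """One iteration of the Arnold (cat) map on an N x N image:
    the pixel at (x, y) moves to ((x + y) mod N, (x + 2*y) mod N)."""
    n = len(image)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = image[x][y]
    return out

img = [[1, 2], [3, 4]]
once = arnold(img)
print(once)  # -> [[1, 4], [2, 3]]
# The map is periodic: for N = 2 it returns to the original after 3 steps.
print(arnold(arnold(once)) == img)  # -> True
```

    The periodicity is what makes the scrambling reversible for decryption.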

    Methods of Voronoi diagram construction and near neighbor relations query
    ZHANG Liping LI Song MA Lin TANG Yuanxin HAO Xiaohong
    2014, 34(12):  3470-3474. 
    Abstract | PDF (754KB)

    The existing methods of constructing Voronoi diagrams have low efficiency and high complexity. To remedy these disadvantages, a new hybrid method of constructing and updating the Voronoi diagram was given to query the nearest neighbor of the given spatial data effectively, and a new method of searching the nearest neighbor based on the Voronoi diagram and the minimum inscribed circle was presented. To deal with frequent changes of the query point position, a method based on the Voronoi diagram and the minimum bounding rectangle was proposed. To improve the efficiency of dual-nearest-neighbor-pair and closest-pair queries, a new method based on Voronoi polygons and their minimum inscribed circles was given. The experimental results show that the proposed methods reduce the additional computation caused by the uneven distribution of the data and have a large advantage for big datasets and frequent queries.

    Locality-sensitive hashing index for multiple keywords over graphs
    HAN Jingyu YANG Jian
    2014, 34(12):  3475-3480. 
    Abstract | PDF (828KB)

    Since the existing inverted index can neither handle multiple-keyword queries efficiently nor find results for misspelled keywords, a bi-level index leveraging Bitmap and Locality-sensitive Hashing (BLH) was proposed to support multiple-keyword queries. The upper level of BLH consists of bitmaps, which map keywords onto clusters of sub-graphs based on the n-grams in the keywords; each cluster stores similar sub-graphs. On the lower level, each cluster has a locality-sensitive hashing index, which helps identify the sub-graphs that contain the keywords based on their n-grams. The indexing scheme of BLH can dramatically decrease query I/Os, reducing the query time by 80%. Furthermore, the n-gram-based index avoids sensitivity to keyword spelling mistakes, guaranteeing that the expected results are returned in any case. The experimental results on real data sets demonstrate the effectiveness of the BLH index, which can efficiently support querying over the Web and social networks.

    Collaborative filtering recommendation algorithm based on exact Euclidean locality-sensitive hashing
    LI Hongmei HE Wenning CHEN Gang
    2014, 34(12):  3481-3486. 
    Abstract | PDF (937KB)

    In recommendation systems, recommendation results are affected by the large volume, high dimensionality and extreme sparsity of rating data, and by the limitations of traditional similarity measures in finding the nearest neighbors, namely heavy calculation and inaccurate results. Aiming at the resulting poor recommendation quality, a new collaborative filtering recommendation algorithm based on Exact Euclidean Locality-Sensitive Hashing (E2LSH) was presented. Firstly, the E2LSH algorithm was utilized to reduce the dimensionality of the large rating data and construct an index, from which the nearest neighbor users of the target user could be obtained with great efficiency. Then, a weighted strategy was applied to predict the user ratings and perform collaborative filtering recommendation. The experimental results on a typical dataset show that the proposed method can overcome the bottleneck of high dimensionality and sparsity to some degree, with high running efficiency and good recommendation performance.
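
    The core of E2LSH is the hash family h(v) = floor((a·v + b)/w) with a Gaussian projection vector a and a uniform offset b; a minimal sketch, in which the dimension, bucket width and test vectors are illustrative assumptions:

```python
import random

def e2lsh_hash(v, a, b, w):
    """One E2LSH bucket function: h(v) = floor((a . v + b) / w)."""
    return int((sum(ai * vi for ai, vi in zip(a, v)) + b) // w)

random.seed(0)
w = 4.0
a = [random.gauss(0.0, 1.0) for _ in range(4)]  # random projection direction
b = random.uniform(0.0, w)                      # random offset in [0, w)

p = [1.0, 2.0, 3.0, 4.0]
q = [1.0, 2.0, 3.0, 4.1]   # a near neighbor of p
# Nearby points usually land in the same bucket; distant ones usually do not,
# so candidate neighbors can be read off the index instead of scanned.
print(e2lsh_hash(p, a, b, w), e2lsh_hash(q, a, b, w))
```

    In practice several such functions are concatenated and repeated over multiple tables to trade precision against recall.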

    Dynamically adaptive hybrid intelligent collaborative filtering recommendation algorithm
    CHEN Xiaoyu
    2014, 34(12):  3487-3490. 
    Abstract | PDF (710KB)

    In order to solve the problems of current collaborative filtering algorithms, such as sparse data, inconspicuous changes of user interest, poor timeliness and poor recommendation quality, an adaptive hybrid intelligent algorithm was proposed. The initial neighbor set of the target user was first obtained by a modified kernel fuzzy clustering analysis, which reduced the calculation range; furthermore, the initial equivalence relation and the equivalence relation similarity were redefined, and a dynamic x-nearest-neighbor algorithm was proposed to obtain the accurate neighbor set and then fill the matrix with predicted scores, which optimized the quality of the rating data. At last, the interest change factor and the rating time weight of the users were introduced, and potential interest changes were mined to obtain better recommendations. The experimental results show that the algorithm can obtain a more accurate nearest neighbor set, which improves the prediction accuracy and the quality of recommendation, and provides better personalized recommendation for users.

    Personalized microblogging recommendation based on weighted dynamic degree of interest
    TAO Yongcai HE Zongzhen SHI Lei WEI Lin CAO Yangjie
    2014, 34(12):  3491-3496. 
    Abstract | PDF (895KB)

    Considering that the information in microblogging is enormous and that microbloggers' interests change over time, a personalized microblogging recommendation model based on Weighted Dynamic Degree of Interest (WDDI) was proposed. The WDDI model considered microblog retweet features and the time factor of tweets, studied the tweets of microbloggers by exploiting the microblog topic model Retweet-Latent Dirichlet Allocation (RT-LDA), and built an individual dynamic interest model. WDDI then obtained the user's group dynamic interest from the similarity and interaction frequency between users and their followees. Combining the user's individual interest and the group interest, the weighted dynamic degree-of-interest model was built. By ranking the new tweets received by the user in descending order of the degree of interest, dynamic personalized microblogging recommendation was achieved. The experimental results show that WDDI reflects users' dynamic interests more precisely than the traditional models.

    Self-adaptive microblog hot topic tracking method using term correlation
    SUN Yuexin MA Huifang SHI Yakai CUI Tong
    2014, 34(12):  3497-3501. 
    Abstract | PDF (760KB)

    Aiming at the deficiency of the traditional text representation model, which usually ignores term correlation, and at the topic drifting problem during topic tracking, a self-adaptive microblog hot topic tracking method using term correlation was proposed. The mutual information between terms within the same microblog and across different microblogs was investigated, and the conventional text representation model was updated accordingly. Similarity calculation was then performed to decide whether a microblog is a subsequent discussion of a certain hot topic. Finally, the vectors of microblogs were updated to avoid topic drifting. The experimental results show the effectiveness of the proposed method.
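
    The term-correlation signal can be illustrated with pointwise mutual information computed from co-occurrence counts; the counts below are illustrative, not from the paper's data:

```python
import math

def pmi(n_xy, n_x, n_y, n_docs):
    """Pointwise mutual information of two terms from co-occurrence counts:
    PMI(x, y) = log( p(x, y) / (p(x) * p(y)) ), with probabilities estimated
    as document frequencies over n_docs microblogs."""
    return math.log((n_xy / n_docs) / ((n_x / n_docs) * (n_y / n_docs)))

# Terms seen in 10 and 20 of 100 microblogs, and together in 10 of them:
# p(x,y) = 0.1 versus p(x)*p(y) = 0.02, so PMI = log 5.
print(round(pmi(10, 10, 20, 100), 4))  # -> 1.6094
```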

    Improved PageRank algorithm based on user feedback and topic relevance
    WANG Chong CAO Shanshan
    2014, 34(12):  3502-3506. 

    Concerning the problems of the traditional PageRank algorithm, such as topic drifting, neglect of user browsing interests and bias toward old Web pages, an improved PageRank algorithm was proposed. To satisfy user requirements better, factors including users' clicks on links, link structure, browsing time on pages, topic relevance decided by contents and the existing time of pages were taken into consideration. The experimental results show that, compared with the traditional PageRank algorithm, the average user satisfaction of the proposed algorithm is improved by approximately 2.1%, and the ranking results are optimized to a certain extent.
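    A minimal sketch of how user feedback can enter the random-surfer model: edge weights modulate the transition probabilities of power iteration. The weight function below and its factors are illustrative stand-ins for the paper's combination of clicks, browsing time, topic relevance and page age:

```python
def weighted_pagerank(links, weight, d=0.85, iters=50):
    """Power-iteration PageRank where each edge (u, v) carries a
    feedback-derived weight; the normalized weights replace the uniform
    1/outdegree transition probability of classic PageRank."""
    nodes = sorted(links)
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for u in nodes:
            out = links[u]
            total = sum(weight(u, v) for v in out) or 1.0
            for v in out:
                new[v] += d * pr[u] * weight(u, v) / total
        pr = new
    return pr
```

A weight such as `lambda u, v: clicks[(u, v)] * relevance[v] / age[v]` would combine several of the paper's factors; the exact combination is not specified by the abstract.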

    Analysis of key disassembly problems based on embedded smart meter
    LIU Jinshuo WANG Xiebing ZHEN Wen DENG Juan CHEN Xin
    2014, 34(12):  3507-3510. 

    Two key problems, endianness and the memory capacity limit, become obstacles when electric power enterprises implement a function consistency model for embedded smart meter software via disassembly, affecting the overall performance of the model. To solve these problems, an in-depth analysis was conducted by combining the internal features of embedded smart meters with hardware architecture theory, and two algorithms named Code Double Inverse Preprocessing Algorithm (CDIPA) and Segmented Disassembling Algorithm (SDA) were proposed. CDIPA was used to generate adjusted binary code, which together with the raw binary served as the two inputs of disassembly; the endianness problem was thus solved by choosing the result more adaptable to the hardware environment. SDA was adopted to decrease the size of the input binary so that disassembly could be performed more times within limited memory. The experimental results show that CDIPA and SDA effectively resolve the problems mentioned above and exhibit favorable robustness and portability.

    Optimization method of taint propagation analysis based on semantic rules
    LIN Wei ZHU Yuefei SHI Xiaolong CAI Ruijie
    2014, 34(12):  3511-3514. 

    The time overhead of taint propagation analysis in off-line taint analysis is very large, so research on efficient taint propagation is of great significance. To solve this problem, an optimization method of taint propagation analysis based on semantic rules was proposed. The method defined semantic description rules for instructions to describe taint propagation semantics, automatically generated the semantics of assembly instructions by using an intermediate language, and then analyzed taint propagation according to the semantic rules, which avoided the repeated semantic parsing caused by repeatedly executed instructions in existing taint analysis methods and thus improved the efficiency of taint analysis. The experimental results show that this method can effectively reduce the time cost of taint propagation analysis, costing only 14% of the time of taint analysis based on an intermediate language.
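    The rule-table idea can be sketched as follows: each mnemonic maps to a taint-propagation rule that is defined once, so a repeatedly executed instruction never triggers re-parsing. The rules and register names below are illustrative, not the paper's automatically generated semantics:

```python
# Illustrative taint-propagation rules keyed by mnemonic; the paper
# generates such semantics automatically from an intermediate language.
RULES = {
    "mov": lambda t, d, s: t.__setitem__(d, t.get(s, False)),  # dst := taint(src)
    "add": lambda t, d, s: t.__setitem__(d, t.get(d, False) or t.get(s, False)),
    "xor_self": lambda t, d, s: t.__setitem__(d, False),       # xor r, r clears taint
}

def propagate(trace, tainted):
    """Run the cached rule for each executed instruction in a trace of
    (mnemonic, dst, src) tuples and return the finally tainted registers."""
    taint = {r: True for r in tainted}
    for mnem, dst, src in trace:
        rule = RULES["xor_self"] if mnem == "xor" and dst == src else RULES.get(mnem)
        if rule:
            rule(taint, dst, src)
    return {r for r, t in taint.items() if t}
```

Because the rule lookup is a dictionary access, an instruction executed a million times in a loop costs one parse, not a million.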

    Adaptive moving object detection method based on spatial-temporal background model
    LI Weisheng WANG Gao
    2014, 34(12):  3515-3520. 

    The available Visual Background extractor (ViBe) only uses the spatial information of pixels to build the background model while ignoring temporal information, which decreases detection accuracy. In addition, the detection radius and the random sampling factor for updating the background model are fixed parameters, so the detection effect is not ideal under dynamic background interference and camera shake. To solve these problems, an adaptive moving object detection method based on a spatial-temporal background model was proposed. Firstly, temporal information was added to ViBe to set up the spatial-temporal background model. Then, the complexity of the background was reflected by the standard deviation of the samples in the background model, so the standard deviation could adjust the detection radius and the random sampling factor for updating the background model to adapt to background changes. The experimental results indicate that the proposed method not only effectively detects the foreground under a static background and uniform illumination, but also has certain inhibitory effects in cases of great illumination change, camera shake and dynamic background interference, and is capable of improving detection precision.
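    A minimal sketch of the adaptive radius (the scaling rule and all constants are illustrative assumptions, not the paper's): the standard deviation of a pixel's background samples widens the ViBe matching radius where the background is dynamic:

```python
def classify_pixel(samples, pixel, base_radius=20.0, min_matches=2):
    """ViBe-style background test with a detection radius scaled by the
    sample standard deviation, so busy (dynamic) background pixels get a
    looser threshold than perfectly static ones. Returns True for
    background, False for foreground."""
    mean = sum(samples) / len(samples)
    std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    radius = base_radius * (1.0 + std / 25.0)   # adapt to background complexity
    matches = sum(1 for s in samples if abs(s - pixel) < radius)
    return matches >= min_matches
```

The same standard deviation could analogously scale the random subsampling factor of the model update, which is the second adaptation the abstract describes.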

    Passenger detection and tracking algorithm based on vehicle video surveillance
    XIE Lu JIN Zhigang WANG Ying
    2014, 34(12):  3521-3525. 

    Concerning the problems of occlusion among passengers and unstable illumination on buses, a detection and tracking algorithm was proposed based on edge features and local invariant features of the head-shoulder region. Firstly, the algorithm used an adaptive-threshold background subtraction method to achieve passenger segmentation. Secondly, it used the Histogram of Oriented Gradient (HOG) features of different sample sets to train Support Vector Machine (SVM) classifiers, and combined the Adaptive Boosting (AdaBoost) algorithm to obtain a strong classifier, with which the foreground was scanned to achieve passenger detection. Lastly, it extracted the Speeded-Up Robust Feature (SURF) of the target region and the current search region, and matched feature points to achieve passenger tracking. The experimental results show that the algorithm achieves detection and tracking rates of more than 80% under occlusion among passengers and unstable illumination, meets real-time requirements, and can be used for passenger flow counting.

    Particle filter tracking algorithm based on adaptive subspace learning
    WU Tong WANG Ling HE Fan
    2014, 34(12):  3526-3530. 

    In order to improve the robustness of visual tracking when the target appearance changes rapidly, a particle filter tracking algorithm based on adaptive subspace learning was presented. Within the particle filter framework, a state decision mechanism was established, and the appropriate learning method was chosen by combining the decision result with the characteristics of the Principal Component Analysis (PCA) subspace and the orthogonal subspace. The algorithm can not only accurately and stably learn the target in a low-dimensional subspace, but also quickly learn the change trend of the target appearance. For the occlusion problem, robust estimation techniques were added to avoid the impact of occlusion on target state estimation. The experimental results show that the algorithm has strong robustness under illumination change, posture change and occlusion.

    Key salient object detection based on filtering integration method
    WANG Chen FAN Yangyu LI Bo XIONG Lei
    2014, 34(12):  3531-3535. 

    Concerning the problem of background interference in salient object detection, a key salient object detection algorithm based on filtering integration was proposed. The proposed algorithm integrated locally guided filtering with improved DoG (Difference of Gaussian) filtering to make the salient object more highlighted. Then, the key point set was determined by using the saliency map, and the saliency detection result was obtained via an adjustment factor, making it more suitable for the human visual system. The experimental results show that the proposed algorithm is superior to existing saliency detection methods: it restrains background interference effectively, and achieves higher precision and better recall than methods such as Local Contrast (LC), Spectral Residual (SR), Histogram-based Contrast (HC), Region Contrast (RC) and Frequency-Tuned (FT).

    Clothing extraction algorithm based on pose estimation and salient object detection
    HE Ni ZHAO Bo
    2014, 34(12):  3536-3539. 

    Considering the influence of clothing recognition on clothing shopping image search, the characteristics of online clothing shopping images were analyzed and a novel clothing extraction algorithm combining pose estimation and salient object detection was proposed. By performing pose estimation on images, the method achieved adaptability to poses; pose estimation was then integrated into the region detection part of salient object detection to obtain a saliency map that combined the two complementary advantages, so that the clothing region was located automatically. Clothing was finally extracted by iteratively applying the graph cut principle. The experimental results demonstrate that the proposed algorithm can accurately extract clothing in complex backgrounds, illustrating the effectiveness of introducing pose estimation and saliency detection into clothing extraction. Moreover, it can be applied to most clothing shopping images and has good universality.

    Video background completion with complexly moving camera
    XU Zhan CAO Zhe
    2014, 34(12):  3540-3544. 

    Video background completion is attracting more and more attention. For videos captured by a complexly moving camera, the problem is even more challenging. To solve this problem, a motion-guided optimization algorithm was proposed to complete the spatio-temporal hole left by foreground object removal. First, to estimate the motion field in the hole, a global objective function was established, and a hierarchical iterative method was applied to find its optimal solution. The completion problem was then abstracted into a Markov Random Field (MRF) problem: using the motion field as guidance, the video background was completed by optimally assigning available pixels from other parts of the video to the missing regions. Finally, the traditional illumination transfer strategy was improved, and a new illumination adjustment method was proposed to eliminate illumination inconsistency in the completed parts. The approach obtains good results on a variety of videos. Compared with previous methods, it works better in keeping spatio-temporal coherence, and can be applied to videos with complex backgrounds captured by a complexly moving camera.

    Texture images retrieval based on Float-LBP
    ZHAO Yudan WANG Qian FAN Jiulun
    2014, 34(12):  3545-3548. 

    An improved method based on Local Binary Pattern (LBP) was proposed to solve the problem that the representation ability of LBP is limited, because only the relationship between the neighbors and the central pixel is considered while the floating relationship of gray values within the neighborhood is ignored. Firstly, each neighbor was compared clockwise with its next adjacent neighbor before thresholding, and an LBP-like code was generated. Secondly, the code was encoded into a decimal number named Float-LBP (F-LBP). Thirdly, the features extracted by the F-LBP and basic LBP operators were combined. The experimental results show that the combination of F-LBP and basic LBP operators improves retrieval accuracy by extracting more discriminative information while preserving the local micro-texture.
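    The F-LBP construction described above is concrete enough to sketch directly; `f_lbp` and `lbp` below are illustrative implementations for an 8-neighbour ring taken clockwise:

```python
def f_lbp(neighbors):
    """Float-LBP code: each neighbour of the ring (clockwise order) is
    compared with the next one, wrapping around, instead of with the
    centre pixel as in basic LBP."""
    n = len(neighbors)
    bits = [1 if neighbors[i] >= neighbors[(i + 1) % n] else 0 for i in range(n)]
    return sum(b << i for i, b in enumerate(bits))  # decimal F-LBP code

def lbp(center, neighbors):
    """Basic LBP: threshold the neighbours against the centre pixel."""
    return sum((1 if v >= center else 0) << i for i, v in enumerate(neighbors))
```

Shifting all neighbours by a constant brightness offset leaves the F-LBP code unchanged, which is exactly the "floating relationship" among neighbouring gray values that the operator captures.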

    Image retrieval based on clustering according to color and shape features
    ZHANG Yongku LI Yunfeng SUN Jingguang
    2014, 34(12):  3549-3553. 

    In order to improve the speed and accuracy of image retrieval, the drawbacks of image retrieval based on various clustering algorithms were analyzed, and a new partition clustering method for image retrieval was presented. First, based on the non-uniform (asymmetrical) quantization of color in the HSV model, the color feature of an image was extracted using color coherence vectors. Then, the global shape feature was extracted based on improved Hu invariant moments. Finally, images were clustered according to the contributions of the color and shape features, and an image feature index library was established. The described methods were applied to image retrieval on the Corel image library. The experimental results show that, compared with image retrieval algorithms based on improved K-means algorithms, the precision and recall of the proposed algorithm are greatly improved.
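    The HSV quantization step can be sketched with a common non-uniform 8x3x3 scheme (72 bins); the bin edges below are a conventional choice, assumed here rather than taken from the paper:

```python
def quantize_hsv(h, s, v):
    """Non-uniform HSV quantization into 8 hue x 3 saturation x 3 value
    bins (72 total). h in [0, 360), s and v in [0, 1]. The hue edges are
    wider where human colour perception is coarser."""
    h_edges = [20, 40, 75, 155, 190, 270, 295, 360]
    hq = next(i for i, e in enumerate(h_edges) if h < e)
    sq = 0 if s < 0.2 else (1 if s < 0.7 else 2)
    vq = 0 if v < 0.2 else (1 if v < 0.7 else 2)
    return 9 * hq + 3 * sq + vq        # single bin index in [0, 71]
```

A color coherence vector would then count, per bin, how many pixels lie in large connected regions versus scattered ones.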

    Removal of mismatches in scale-invariant feature transform algorithm using image depth information
    LIU Zheng LIU Yongben
    2014, 34(12):  3554-3559. 

    Feature point matching is of central importance in feature-based image registration algorithms such as the Scale-Invariant Feature Transform (SIFT) algorithm. Since most existing feature matching algorithms are neither powerful nor efficient in mismatch removal, a mismatch removal algorithm was proposed that adopts the depth information of an image to improve performance. In the proposed approach, the depth map of an acquired image was produced using defocus blurring cues and a machine learning algorithm, followed by SIFT feature point extraction. Then, the correct feature correspondences and the transformation between the two feature sets were iteratively estimated using the RANdom SAmple Consensus (RANSAC) algorithm and the rule of local depth continuity. The experimental results demonstrate that the proposed algorithm outperforms conventional ones in mismatch removal.

    Fast algorithm of high efficiency video coding intra prediction mode decision based on Hough transform
    DONG Duo DUANMU Chunjiang
    2014, 34(12):  3560-3564. 

    Concerning the high computational complexity of intra prediction mode selection in High Efficiency Video Coding (HEVC), an efficient fast algorithm for HEVC intra prediction mode decision based on the Hough transform was proposed, which aimed at reducing the number of traversed modes among the 35 intra prediction modes. Firstly, edge detection and the Hough transform were carried out on Prediction Units (PU) of various sizes before the Rough Mode Decision (RMD) process. After that, a statistical analysis of the tangent values of the detected line angles was conducted using a histogram. Finally, the applicable candidate modes were chosen for the RMD and Rate-Distortion Optimization (RDO) processes. The proposed algorithm was simulated in the VS 2008 environment using C++ and the OpenCV libraries. The experimental results show that the encoding time can be reduced by 23% with only a small bit-rate increase of 1.02% and a peak signal-to-noise ratio decrease of 0.035dB. The proposed algorithm greatly enhances the real-time performance of the encoder, and is suitable for videos with high resolution and large size.

    Improved compression vertex chain code based on Huffman coding
    WEI Wei LIU Yongkui DUAN Xiaodong GUO Chen
    2014, 34(12):  3565-3569. 

    The research works on various chain codes used in image processing and pattern recognition were reviewed, and a new chain code named Improved Compressed Vertex Chain Code (ICVCC) was proposed based on the Compressed Vertex Chain Code (CVCC). ICVCC added one code value compared with CVCC and adopted Huffman coding to encode each code value, achieving a set of chain codes with unequal lengths. The expression ability per code, average length and efficiency, as well as the compression ratio with respect to the 8-Directions Freeman Chain Code (8DFCC), were calculated through statistics over a large number of images. The experimental results show that the efficiency of the proposed ICVCC is the highest and its compression ratio is ideal.
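    The Huffman step can be sketched directly: code values that occur often along a contour receive shorter codewords, which is where the compression over fixed-length chain codes comes from (the symbol values and frequencies in the example are illustrative):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build Huffman codewords from the symbol frequencies of a chain
    code, giving a variable-length encoding like the one ICVCC uses."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # Heap entries carry a unique counter so dicts are never compared.
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (fa + fb, n, merged))
        n += 1
    return heap[0][2]

def encode(symbols, codes):
    return "".join(codes[s] for s in symbols)
```

For a skewed symbol distribution, the most frequent chain-code value gets a one-bit codeword, so the encoded contour is shorter than with any fixed-length code.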

    Reverse curvature-driven super-resolution algorithm based on Taylor formula
    ZHAO Xiaole WU Yadong ZHANG Hongying ZHAO Jing
    2014, 34(12):  3570-3575. 

    To solve the problem that traditional interpolation and model-based methods usually decrease the contrast and sharpness of images, a reverse curvature-driven Super-Resolution (SR) algorithm based on the Taylor formula was proposed. The algorithm used the Taylor formula to estimate the change trend of image intensity, and then image edge features were refined by the curvature of isophotes. Gradients were used as constraints to inhibit jagged edges and ringing effects. The experimental results show that the proposed algorithm has obvious advantages over conventional interpolation algorithms and model-based methods in clarity and information retention, and its results are more in line with human visual perception. The proposed algorithm is also more effective than traditional iterative algorithms, since the reverse diffusion is implemented based on the Taylor expansion.

    Noise face hallucination via data-driven local eigentransformation
    DONG Xiaohui GAO Ge CHEN Liang HAN Zhen JIANG Junjun
    2014, 34(12):  3576-3579. 

    Concerning the problem that linear eigentransformation methods cannot capture the statistical properties of nonlinear facial images, a Data-driven Local Eigentransformation (DLE) method for face hallucination was proposed. Firstly, the samples most similar to the input image patch were searched. Secondly, a patch-based eigentransformation method was used to model the relationship between the Low-Resolution (LR) and High-Resolution (HR) training samples. Finally, a post-processing approach refined the hallucinated results. The experimental results show that the proposed method has better visual performance, as well as a 1.81dB improvement over the locality-constrained representation method in the objective evaluation criterion, for face images especially with noise. The method can effectively hallucinate facial images from surveillance.

    Single video temporal super-resolution reconstruction algorithm based on maximum a posterior
    GUO Li LIAO Yu CHEN Weilong LIAO Honghua LI Jun XIANG Jun
    2014, 34(12):  3580-3584. 

    Any video camera has a certain temporal resolution, which causes motion blur and motion aliasing in the captured video sequence. Spatial deblurring and temporal interpolation are usually adopted to alleviate this problem, but these methods cannot solve it at its origin. A temporal super-resolution reconstruction method for a single video based on Maximum A Posterior (MAP) probability estimation was proposed. The conditional probability model was determined by the reconstruction constraint, and the prior information model was established by exploiting the temporal self-similarity of the video itself. From these two models, the maximum a posteriori estimate was obtained, namely a high temporal resolution video was reconstructed from a single low temporal resolution video, so as to effectively remove the motion blur caused by overlong exposure time and the motion aliasing caused by an inadequate camera frame rate. Theoretical analysis and experiments prove the validity and efficiency of the proposed method.

    Blind road recognition algorithm based on color and texture information
    PENG Yuqing XUE Jie GUO Yongfang
    2014, 34(12):  3585-3588. 

    Concerning the problems that existing blind road recognition methods have low recognition rates, handle scenes simplistically, and are easily influenced by light or shadow, an improved blind road recognition method was proposed. According to the color and texture features of blind roads, the algorithm used two segmentation methods: color histogram feature threshold segmentation combined with improved region growing, and fuzzy C-means clustering segmentation on gray level co-occurrence matrix features. Combined with Canny edge detection and the Hough transform, the proposed algorithm separated the blind road area from the pedestrian area and determined the walking direction for the blind. The experimental results show that the proposed algorithm can segment several kinds of blind roads accurately, detect the boundary and direction of a blind road, and partly solve the light and shadow problems. It can adaptively choose the fastest and most effective segmentation method, and can be used in a variety of devices, such as electronic guide equipment.

    3D craniofacial registration using parameterization
    QIAO Xuejun ZHAO Junli LU Jianqing XIE Wenkui
    2014, 34(12):  3589-3592. 

    The problem of 3D craniofacial registration was transformed into a registration problem in the 2D parameter domain by using surface parameterization. Firstly, six landmarks on the craniofacial surfaces were calibrated according to physiological characteristics, and the pose and size of the craniofacial surfaces were normalized by projecting them into a unified coordinate system determined by the six landmarks. Secondly, Least Squares Conformal Mapping (LSCM) was performed on a reference craniofacial surface by pinning the two outer corners of the eyes, from which the 2D parameters of the six landmarks were computed. Thirdly, any craniofacial surface could be mapped into the 2D domain using LSCM by pinning the six landmarks. Finally, the 3D point correspondences were obtained by mapping the 2D correspondences back onto the 3D surfaces. To validate the proposed method, the reference model was deformed into the target one by the Thin Plate Spline (TPS) transform with the corresponding vertices as control points, and the average distance between the two corresponding point sets after deformation was computed. By this average distance, the proposed method was compared with principal axes analysis based ICP (Iterative Closest Point) registration and random sampling control points based iterative TPS registration. The comparison shows that the proposed approach is more accurate and effective.

    Feature extraction using a fusion method based on sub-pattern row-column two-dimensional linear discriminant analysis
    DONG Xiaoqing CHEN Hongcai
    2014, 34(12):  3593-3598. 

    In order to solve problems such as facial change and uneven gray level caused by variations of expression and illumination in face recognition, a novel feature extraction method based on Sub-pattern Row-Column Two-Dimensional Linear Discriminant Analysis (Sp-RC2DLDA) was proposed. By dividing the original images into smaller sub-images, local features could be extracted effectively and the impact of variations in facial expression and illumination was reduced. By combining the sub-images at the same position into a subset, recognition performance could be improved through full use of the spatial relationship among sub-images. At the same time, two complementary classes of features were obtained by synthesizing the local sub-features achieved by performing 2DLDA (Two-Dimensional Linear Discriminant Analysis) and Extended 2DLDA (E2DLDA) on the partitioned sub-patterns in the row and column directions, respectively. Then, a fusion method was employed to effectively fuse these two classes of complementary features, and a nearest neighbor classifier was applied for classification. The experimental results on the Yale and ORL face databases show that the proposed Sp-RC2DLDA method effectively reduces the influence of variations in illumination and facial expression, and has better robustness and classification performance than the other related methods.

    Detection and quantitative evaluation of lung nodule spiculation in CT images
    XING Qiamqiam LIU Zhexing LIN Binquan QIAN Jun CAO Lei
    2014, 34(12):  3599-3604. 

    A new method was proposed to accurately detect and quantitatively evaluate lung nodule spiculation. First, the region growing method followed by the level set method was used to accurately segment the main part of the lung nodule. Then, spiculated lines connected to the nodule boundary were extracted using a line detector in the polar coordinate system. Finally, a spiculation index was introduced as the quantitative measurement of spiculation, which was then used as a criterion for distinguishing between spiculated and non-spiculated nodules. The consistency and correlation between the spiculation index of the proposed method and that of the Lung Image Database Consortium (LIDC) were evaluated in detail. The experimental results show that the proposed method can effectively detect and quantitatively describe lung nodule spiculation in CT images.

    MLEM low-dose CT reconstruction algorithm based on variable exponent anisotropic diffusion and non-locality
    ZHANG Fang CUI Xueying ZHANG Quan DONG Chanchan SUN Weiya BAI Yunjiao GUI Zhiguo
    2014, 34(12):  3605-3608. 

    Concerning the serious degradation of low-dose Computed Tomography (CT) reconstructed images, a low-dose CT reconstruction method of MLEM based on non-locality and a variable exponent was presented. Considering that traditional anisotropic diffusion is insufficient for noise reduction, a variable exponent that effectively compromises between the heat conduction and anisotropic diffusion P-M models, and a similarity function that detects edges and details instead of the gradient, were applied to traditional anisotropic diffusion to achieve the desired effect. In each iteration, the basic MLEM algorithm was first used to reconstruct the low-dose projection data; then the diffusion function was improved by the non-local similarity measure, the variable exponent and fuzzy mathematics theory, and the improved anisotropic diffusion was used to denoise the reconstructed image; finally, a median filter was used to eliminate impulse noise in the image. The experimental results show that the proposed algorithm obtains smaller Mean Absolute Error (MAE) and Normalized Mean Square Distance (NMSD) than OS-PLS (Ordered Subsets-Penalized Least Squares), OS-PML-OSL (Ordered Subsets-Penalized Maximum Likelihood-One Step Late) and the algorithm based on traditional PM, and its Signal-to-Noise Ratio (SNR) is up to 10.52. The algorithm can effectively eliminate bar artifacts while better preserving image edges and details.

    Adaptive non-local denoising of magnetic resonance images based on normalized cross correlation
    SHI Li XU Xiaohui CHEN Liwei
    2014, 34(12):  3609-3613. 

    In order to sufficiently remove the Rician noise in Magnetic Resonance (MR) images, the Normalized Cross Correlation (NCC) of local pixels was proposed to characterize geometric structure similarity, and was combined with the traditional method that uses only pixel intensity to determine the similarity weight. The improved measure was applied to the non-local means algorithm and the Non-Local Linear Minimum Mean Square Error (NLMMSE) estimation algorithm respectively. To realize adaptive denoising, the weight of the pixel to be filtered and the similarity threshold in the non-local algorithms were computed dynamically according to the local Signal-to-Noise Ratio (SNR). The experimental results show that the proposed algorithm not only better suppresses the Rician noise in MR images, but also effectively preserves image details, so it has good application value in further MR image analysis and clinical diagnosis.
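    A sketch of the combined weight (the multiplicative combination is an illustrative reading of the paper's idea, and the names are assumptions): the usual intensity-distance kernel of non-local means is multiplied by an NCC structure term, so patches that match in brightness but not in geometry are down-weighted:

```python
import math

def ncc(p, q):
    """Normalized cross correlation of two flattened image patches."""
    mp = sum(p) / len(p)
    mq = sum(q) / len(q)
    num = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    den = math.sqrt(sum((a - mp) ** 2 for a in p) * sum((b - mq) ** 2 for b in q))
    return num / den if den else 0.0

def nlm_weight(p, q, h):
    """Non-local-means weight: intensity distance kernel times an NCC
    structure term; only positively correlated patches contribute."""
    d2 = sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)
    structure = max(ncc(p, q), 0.0)
    return math.exp(-d2 / (h * h)) * structure
```

The filter parameter `h` (or the NCC threshold) would then be set per pixel from the local SNR, which is the adaptive part the abstract describes.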

    Seizure detection based on max-relevance and min-redundancy criteria and extreme learning machine
    ZHANG Xinjing XU Xin LING Zhipei HUANG Yongzhi WANG Shouyan WANG Xinzui
    2014, 34(12):  3614-3617. 

    Seizure detection is important for the localization and classification of epileptic seizures. To solve the problem brought by the large amount of data and the high-dimensional feature space of EEG (Electroencephalograph) signals in quickly and accurately detecting seizures, a method based on the max-Relevance and Min-Redundancy (mRMR) criterion and the Extreme Learning Machine (ELM) was proposed. Time-frequency measures obtained by the Short-Time Fourier Transform (STFT) were extracted as features, and features were selected from the large set based on the max-relevance and min-redundancy criterion. The states were classified using the extreme learning machine, Support Vector Machine (SVM) and Back Propagation (BP) algorithms. The results show that ELM outperforms the SVM and BP algorithms in both computation time and classification accuracy: the classification accuracy for interictal durations and seizures reaches more than 98%, and the computation time is only 0.8s. The approach can detect epileptic seizures accurately in real time.
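    The greedy mRMR selection can be sketched on discrete features (the incremental relevance-minus-redundancy form below is the standard mRMR criterion; the data layout and names are assumptions):

```python
import math
from collections import Counter

def mutual_info(x, y):
    """Discrete mutual information between two equally long sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log((c * n) / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def mrmr(features, labels, k):
    """Greedy max-relevance min-redundancy selection over discrete
    feature columns; `features` maps feature name -> value sequence."""
    selected = []
    while len(selected) < k:
        def score(f):
            rel = mutual_info(features[f], labels)
            red = (sum(mutual_info(features[f], features[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max((f for f in features if f not in selected), key=score)
        selected.append(best)
    return selected
```

A feature that duplicates an already selected one is penalized by the redundancy term, so a weaker but independent feature is picked instead, which is the point of the criterion.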

    Blind recognition of BCH codes under error conditions
    REN Yabo ZHANG Jian LIU Yinong ZHANG Wei
    2014, 34(12):  3618-3620. 

    A low-complexity method was proposed for the blind recognition of BCH codes under error conditions. Most existing recognition methods derive from generic methods for linear block codes, which cannot be applied when the code length is long and the bit error rate is high. The proposed method is based on the fact that BCH codes form a subspace of Hamming codes, so the parity check matrix of the Hamming code can be used to check BCH codes. The method covers recovery of the code length, synchronization and the generator polynomial. The simulations show that the algorithm runs successfully for a BCH code of length 1023 when the bit error rate is 0.5%.
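    The core check can be sketched in GF(2^4): because the generator of a narrow-sense BCH code is divisible by the minimal polynomial of the primitive element α, every error-free BCH codeword has a zero syndrome against the cyclic Hamming parity-check matrix whose i-th column is α^i. The BCH(15,7) generator x^8+x^7+x^6+x^4+1 used below is the standard one; the function names are illustrative:

```python
def gf_pows(m=4, prim=0b10011):
    """Powers alpha^0 .. alpha^(2^m - 2) of the primitive element of
    GF(2^m) as m-bit integers (prim is x^4 + x + 1 for m = 4)."""
    n = (1 << m) - 1
    p, out = 1, []
    for _ in range(n):
        out.append(p)
        p <<= 1
        if p >> m:
            p ^= prim
    return out

def hamming_syndrome(bits, pows):
    """Syndrome of a length-(2^m - 1) word against the cyclic Hamming
    parity-check matrix; 0 means the word passes the Hamming check."""
    s = 0
    for b, a in zip(bits, pows):
        if b:
            s ^= a
    return s

def poly_mul_mod2(a, b):
    """GF(2)[x] product of two polynomials given as bit masks (LSB = x^0)."""
    r = 0
    while a:
        if a & 1:
            r ^= b
        a >>= 1
        b <<= 1
    return r
```

Sliding this zero-syndrome test over a noisy bitstream and counting the pass rate is how candidate code lengths and synchronization offsets can be screened at low cost.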

    Research and improvement of neuro-space mapping structure
    YAN Shuxia ZHANG Qijun
    2014, 34(12):  3621-3623. 

    In some cases, the difference in DC responses between the coarse model and the device is large while the nonlinear responses are similar. Concerning the complex modeling process, an improved Neuro-Space Mapping (Neuro-SM) structure was proposed. Capacitors and inductors were added to the traditional Neuro-SM model to constitute a new Neuro-SM model, in which the DC component of the input signal is adjusted by the mapping network while the AC component is independent of the mapping network. The new model can improve the DC characteristics without changing the AC characteristics, and match the device with few optimization variables and a simple mapping relationship. The simulation results demonstrate that the enhanced Neuro-SM model can make full use of the similar nonlinear responses between the coarse model and the device, maintaining the accuracy of the model while simplifying the modeling process.

    Adaptive impedance matching for radio frequency antenna based on dual-aimed chaotic particle swarm optimization algorithm
    LIU Chuqun TAN Yanghong XIONG Zhiting
    2014, 34(12):  3624-3627. 
    Abstract ( )   PDF (704KB) ( )  
    References | Related Articles | Metrics

    Considering the non-ideal factors of practical impedance matching networks, and in order to achieve a low Standing Wave Ratio (SWR) and high transmission efficiency, an impedance matching method for radio frequency antennas was proposed based on a Dual-aimed Chaotic Particle Swarm Optimization (DCPSO) algorithm. Single-frequency matching experiments show that DCPSO improves both the SWR and the output power compared with standard Particle Swarm Optimization (PSO). Combined with the real frequency method, broadband matching experiments were carried out over the working bands of 2G, 3G and 4G mobile technologies; the results show that good matching and transmission efficiency are obtained over the entire band.
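    A plain PSO sketch of the single-frequency matching task described above: particles are (L, C) values of an L-network and the fitness is the SWR seen at the source. The dual-aimed/chaotic refinements of DCPSO, the network topology and all component values are assumptions for illustration:

```python
import math, random

Z0, ZL, W = 50.0, 10.0, 2 * math.pi * 1e9   # system, load, angular frequency

def swr(L, C):
    """SWR of a series-L / shunt-C network matching a resistive load."""
    z_series = complex(ZL, W * L)               # load plus series inductor
    y_in = complex(0.0, W * C) + 1 / z_series   # shunt capacitor at source
    zin = 1 / y_in
    gamma = abs((zin - Z0) / (zin + Z0))
    return (1 + gamma) / (1 - gamma) if gamma < 1 else float("inf")

def pso_match(n=24, iters=80):
    """Standard PSO over the (L, C) box; returns best point and its SWR."""
    lo, hi = (0.0, 0.0), (50e-9, 50e-12)        # L up to 50 nH, C up to 50 pF
    pos = [[random.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: swr(*p))
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            if swr(*pos[i]) < swr(*pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=lambda p: swr(*p))
    return gbest, swr(*gbest)
```

    With the values above the unmatched 10 Ω load sees SWR 5 on a 50 Ω line, and a perfect L-match exists inside the search box, so the swarm should drive the SWR close to 1.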

    Network on chip mapping algorithm optimized for testing
    ZHANG Ying WU Yu GE Fen
    2014, 34(12):  3628-3632. 
    Abstract ( )   PDF (703KB) ( )  
    References | Related Articles | Metrics

    NoC (Network on Chip) mapping for complex SoC (System on Chip) designs urgently needs to be solved, yet most existing mapping schemes do not consider testing requirements. A novel NoC mapping algorithm optimized for testing was proposed, which jointly considered the improvement of testability and the minimization of mapping cost. Firstly, a partition algorithm was adopted to arrange all the IP cores into parallel testing groups, combined with an optimized test structure, so that the testing time was minimized. Then, based on the traffic information between IP cores, a genetic algorithm was applied to accomplish the NoC mapping with the minimum mapping cost as objective. The experimental results on the ITC'02 benchmark circuits show that, compared with random mapping, the testing time is reduced by 12.67% on average and the mapping cost by 24.5% on average.

    Low power branch encoding scheme based on SoC bus
    LI Dong WANG Xiaoli YANG Bin ZHAO Changrui
    2014, 34(12):  3633-3636. 
    Abstract ( )   PDF (572KB) ( )  
    References | Related Articles | Metrics

    A low-power branch encoding method was presented to decrease SoC bus power dissipation. Its basic principle is as follows: for the address bus, when the addresses are sequential the bus is frozen; when they are non-sequential, the window size is adjusted dynamically and the Bus-Invert (BI) method is applied. For the data bus, two threshold values are derived for different data sizes; if the Hamming distance lies between these two thresholds, the switching-dense area of the valid data channel is located and inverted, otherwise BI encoding is applied. The encoding and decoding circuits were realized in an Advanced High-performance Bus (AHB) system. The experimental results demonstrate that, compared with the uncoded case, the method decreases the address/data bus toggle rate by 51.2%/22.4% and reduces the system power by 28.9%. Compared with T0, BI and other encoding methods realized in the same system, the branch encoding is superior in both toggle rate and power dissipation.
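    The Bus-Invert step the method builds on can be sketched as below; the dynamic window sizing and the dual data-bus thresholds from the abstract are omitted, so this is only the classical BI baseline:

```python
# Classical Bus-Invert (BI): if the Hamming distance between the new word
# and the word currently on the bus exceeds half the bus width, transmit
# the inverted word and assert the extra invert line, so at most width/2
# data lines toggle per transfer.
def bus_invert(words, width):
    prev = 0
    encoded = []   # list of (word_on_bus, invert_bit)
    for w in words:
        dist = bin(prev ^ w).count("1")       # lines that would toggle
        if dist > width // 2:
            w ^= (1 << width) - 1             # invert all data lines
            encoded.append((w, 1))
        else:
            encoded.append((w, 0))
        prev = w                              # bus now holds this value
    return encoded
```

    The receiver simply XORs each word with an all-ones mask whenever the invert bit is set, recovering the original data.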

    Novel sliding mode control with power reaching law based on frequency domain identification models
    ZHONG Hua WANG Yong SHAO Changxing
    2014, 34(12):  3637-3640. 
    Abstract ( )   PDF (680KB) ( )  
    References | Related Articles | Metrics

    Considering the complexity and inaccuracy of traditional theoretical modeling for rigid-flexible coupled systems, the frequency-domain subspace method was used to identify the models of the motor and the piezoelectric ceramic patch in the experimental system. To address the chattering and long reaching time of the traditional reaching law, a novel sliding mode control with a power reaching law was proposed. Theoretical analysis shows that the reaching time can be shortened and the admissible range of the traditional power reaching law's parameter α can be expanded without aggravating chattering. Considering the effect of the flexible beam's vibration characteristics on system performance, a sub-sliding-surface method was used to design the sliding mode controller. Experimental results show that the designed controller can rapidly track the angle of the center of the rigid body and quickly suppress the vibration of the flexible beam.
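    A small simulation of the classical power reaching law s' = -k|s|^a sgn(s) that the abstract builds on (the improved law's exact form is not given there); with 0 < a < 1 the sliding surface s = 0 is reached in finite time, here t = s0^(1-a) / (k(1-a)):

```python
import math

# Euler integration of the power reaching law. All gains and the initial
# condition are illustrative; for s0 = 1, k = 2, a = 0.5 the analytic
# reaching time is 1.0 s.
def simulate_power_reaching(s0, k=2.0, a=0.5, dt=1e-3, t_max=5.0):
    s, t = s0, 0.0
    while abs(s) > 1e-6 and t < t_max:
        s += -k * (abs(s) ** a) * math.copysign(1.0, s) * dt
        t += dt
    return s, t
```

    A pure exponential reaching law (a = 1) only approaches s = 0 asymptotically; the fractional exponent is what gives the finite reaching time the abstract shortens further.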

    Dynamically tuned gyroscope system identification method
    TIAN Lingzi LI Xingfei ZHAO Jianyuan WANG Yahui
    2014, 34(12):  3641-3645. 
    Abstract ( )   PDF (668KB) ( )  
    References | Related Articles | Metrics

    In Dynamically Tuned Gyroscope (DTG) systems, traditional identification methods, including the least squares method and the traditional frequency-domain method, cannot achieve an acceptable identification fitness. To deal with this problem, an outlier-eliminated frequency-domain identification method was proposed. Considering the characteristics of the DTG model structure and its intrinsic colored noise, outlier elimination was applied to the frequency-domain identification of the DTG. The experimental results indicate that the proposed method achieves a fitness above 90%, outperforming both the least squares method and the traditional frequency-domain method, and that it possesses good repeatability and stability. The outlier-eliminated frequency identification method can therefore improve the identification fitness of DTG systems.
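    A generic median/MAD outlier screen, shown only to illustrate the elimination step; the paper's actual criterion for frequency-response data is not specified in the abstract, so the threshold rule here is an assumption:

```python
# Drop samples farther than k median-absolute-deviations from the median,
# a robust screen that is insensitive to the outliers it removes.
def eliminate_outliers(values, k=3.0):
    def median(seq):
        s = sorted(seq)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:                       # degenerate case: nothing to scale by
        return list(values)
    return [v for v in values if abs(v - med) <= k * mad]
```

    Mean/standard-deviation screens fail here because a single large outlier inflates the standard deviation and hides itself; the median and MAD stay put.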

    Soft closed-loop fault-tolerant control method for encoder faults based on T-S fuzzy neural network model
    LI Wei LI Qingpeng MAO Haijie GONG Jianxing
    2014, 34(12):  3646-3650. 
    Abstract ( )   PDF (803KB) ( )  
    References | Related Articles | Metrics

    To address the code-loss and code-pause faults of the incremental encoder conventionally used as the speed feedback component in stage boom speed control systems, and to prevent the propagation of fault effects, a fault detection and soft closed-loop fault-tolerant control method for encoder faults was proposed, based on a Takagi-Sugeno Fuzzy Neural Network (T-S FNN) model combined with data-driven techniques. First, a T-S FNN prediction model of the system was established from historical data of normal operation, and residual information was obtained by comparing the measured encoder values with the predicted values. Next, encoder faults were detected from the real-time residual data by an improved Sequential Probability Ratio Test (SPRT) algorithm, so as to overcome detection delay and ensure the reliability of fault detection. Then, once a fault was detected, the output of the prediction model was substituted for the output of the faulty encoder, realizing soft fault-tolerant operation in closed-loop mode. Finally, the fault-tolerant handling of code-loss and code-pause faults was verified by simulation. The simulation results show that the method can detect encoder fault information rapidly and reliably and switch to the fault-tolerant mechanism in a timely and safe manner by reconstructing the encoder output from the prediction information, thereby realizing soft closed-loop fault-tolerant control of encoder faults and improving the safety and reliability of stage boom system operation.
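    The detection step can be illustrated with Wald's classical SPRT on a residual stream (the abstract's improved SPRT is not detailed there); the Gaussian mean-shift hypotheses and all parameters below are assumptions:

```python
import math

# Wald's SPRT: accumulate the log-likelihood ratio of the residuals under
# H1 (fault, mean mu1) versus H0 (healthy, mean 0), and decide as soon as
# the sum crosses the thresholds set by the error rates alpha and beta.
def sprt(residuals, mu1=1.0, sigma=0.5, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)    # accept H1 (fault)
    lower = math.log(beta / (1 - alpha))    # accept H0 (no fault)
    llr = 0.0
    for k, r in enumerate(residuals, 1):
        # log-likelihood ratio increment for a Gaussian mean shift
        llr += (mu1 / sigma ** 2) * (r - mu1 / 2)
        if llr >= upper:
            return "fault", k
        if llr <= lower:
            return "no fault", k
    return "undecided", len(residuals)
```

    Because the test decides as soon as the evidence is sufficient, it typically needs far fewer samples than a fixed-size test at the same error rates, which is the detection-delay advantage the abstract exploits.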

    Short-term electricity load forecasting based on complementary ensemble empirical mode decomposition-fuzzy permutation and echo state network
    LI Qing LI Jun MA Hao
    2014, 34(12):  3651-3655. 
    Abstract ( )   PDF (874KB) ( )  
    References | Related Articles | Metrics

    To improve the precision of short-term power load forecasting, a combined forecasting method based on Complementary Ensemble Empirical Mode Decomposition (CEEMD)-fuzzy entropy and the Echo State Network with Leaky-integrator neurons (LiESN) was proposed. Firstly, to reduce the scale of the partial analysis of the power load series and improve forecasting accuracy, the load time series was decomposed by CEEMD-fuzzy entropy into subsequences with clearly different complexity. Then, according to the characteristics of each subsequence, the corresponding LiESN forecasting sub-models were built, and the ultimate forecast was obtained by superposing the outputs of the sub-models. The CEEMD-LiESN method was applied to short-term load forecasting for the New England region; the experimental results show that the proposed combined method has high prediction precision.
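    A plain fuzzy-entropy (FuzzyEn) sketch of the complexity measure used to group the CEEMD subsequences; the embedding dimension m and tolerance fraction follow common defaults, and the exact variant in the paper may differ:

```python
import math

def fuzzy_entropy(x, m=2, r_frac=0.2):
    """FuzzyEn of series x: log(phi(m)) - log(phi(m+1)), using an
    exponential fuzzy membership exp(-(d/r)^2) of the Chebyshev distance
    between mean-removed embedding vectors."""
    n = len(x)
    mean = sum(x) / n
    std = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    r = r_frac * std                    # tolerance scaled to the series

    def phi(dim):
        vecs = []
        for i in range(n - dim + 1):
            seg = x[i:i + dim]
            mu = sum(seg) / dim
            vecs.append([a - mu for a in seg])   # local baseline removal
        sims, pairs = 0.0, 0
        for i in range(len(vecs)):
            for j in range(i + 1, len(vecs)):
                d = max(abs(a - b) for a, b in zip(vecs[i], vecs[j]))
                sims += math.exp(-(d / r) ** 2)
                pairs += 1
        return sims / pairs

    return math.log(phi(m)) - math.log(phi(m + 1))
```

    A slowly varying subsequence (low entropy) can be forecast by a simpler sub-model than an irregular one (high entropy), which is the grouping rationale in the abstract.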

    Improved partial hierarchical resampling algorithm for particle filtering
    ZENG Xiaohui SHI Yibing LIAN Yi
    2014, 34(12):  3656-3659. 
    Abstract ( )   PDF (607KB) ( )  
    References | Related Articles | Metrics

    Particle filtering is widely applied in many fields owing to its ability to deal with nonlinear and non-Gaussian problems; however, it suffers from serious issues such as particle degeneracy and impoverishment. An improved resampling algorithm was therefore proposed. Based on partial stratified resampling and residual resampling, it classifies particles into large-, medium- and small-weight groups and replicates samples from the three groups with different strategies, improving the efficiency of the algorithm while maintaining particle diversity. Comparisons with classic sequential importance sampling with resampling and with other partial resampling schemes on the UNG (Univariate Non-stationary Growth) and BOT (Bearings-Only Tracking) models verify the filtering performance and validity of the proposed algorithm.
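    Residual resampling, one of the two building blocks named above, can be sketched as follows; the three-way weight grouping of the proposed partial hierarchical scheme is not reproduced here:

```python
import random

# Residual resampling: copy each particle floor(N*w) times
# deterministically, then fill the remaining slots by a multinomial draw
# on the residual weights, which lowers the variance of plain
# multinomial resampling.
def residual_resample(weights):
    n = len(weights)
    counts = [int(n * w) for w in weights]
    residual = [n * w - c for w, c in zip(weights, counts)]
    spare = n - sum(counts)
    if spare > 0:
        total = sum(residual)
        probs = [r / total for r in residual]
        for _ in range(spare):                 # multinomial draw per slot
            u, acc = random.random(), 0.0
            for i, p in enumerate(probs):
                acc += p
                if u <= acc:
                    counts[i] += 1
                    break
            else:
                counts[-1] += 1                # guard against rounding
    # expand the counts into indices of the particles to replicate
    return [i for i, c in enumerate(counts) for _ in range(c)]
```

    Only the fractional remainders are left to chance, so a particle with weight well above 1/N is always replicated at least its deterministic share of times.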

    Continuous casting slab surface feature classification method based on complex Contourlet feature vectors
    YU Jirui WANG Zehao LI Peiyu
    2014, 34(12):  3660-3664. 
    Abstract ( )   PDF (713KB) ( )  
    References | Related Articles | Metrics

    Concerning the complexity of detecting surface features of continuous casting slabs, a surface feature extraction method based on complex Contourlet decomposition was developed. Compared with conventional methods, it offers shift invariance, excellent directional selectivity and a higher retrieval rate. The image was decomposed in the Contourlet domain, the directional matrices of the image's subbands were extracted using directional filter banks, and the feature vector was constructed from the energy, standard deviation and skewness of each subband. A support vector machine was trained with the feature vectors to classify the images. Industrial test results show that the accuracy of surface feature classification is about 90%, so the method can be used for image feature extraction and slab flaw detection.
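    The per-subband feature vector (energy, standard deviation, skewness) can be computed as below; the complex Contourlet decomposition that produces the coefficients is outside this sketch, so the input is assumed to be a flattened list of subband coefficients:

```python
# Three statistics per subband, concatenated over all subbands to form
# the SVM feature vector described in the abstract.
def subband_features(coeffs):
    n = len(coeffs)
    energy = sum(c * c for c in coeffs)
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n
    std = var ** 0.5
    skew = ((sum((c - mean) ** 3 for c in coeffs) / n) / std ** 3
            if std else 0.0)
    return [energy, std, skew]
```

    Energy captures how strongly a defect excites a given direction, while the standard deviation and skewness summarize the spread and asymmetry of the coefficient distribution in that subband.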

Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn