
Table of Contents

    01 September 2014, Volume 34 Issue 9
    Network and communications
    Uplink multi-base cooperative energy efficiency algorithm based on interference rejection
    DAI Cuiqin LI Tu ZHANG Zufan
    2014, 34(9):  2451-2455.  DOI: 10.11772/j.issn.1001-9081.2014.09.2451

    Since the energy consumption of joint processing in uplink multi-base cooperative communication systems is excessively high, an Inter-Cell Interference Rejection based Uplink Multi-Base Cooperative Energy Efficiency Algorithm (ICIR-UMBCEEA) was proposed. Firstly, the equivalent noise and the Coordinated Multi-Point (CoMP) estimated channel were obtained from the DeModulation Reference Signal (DMRS) sequence, and the Interference Rejection Combining (IRC) filtering matrix of the CoMP channel was derived. Secondly, an equivalent interference model was established and the average inter-cell interference was obtained by using the IRC filtering matrix. Finally, the interference level of the users in each cell to the non-CoMP set was calculated, and joint processing was applied to strong-interference users. In comparison experiments with the Uplink Multi-Base Cooperative Algorithm of Optimal Water Filling Control (UMBCA-OWFC), the normalized average interference of ICIR-UMBCEEA decreased by 19.2% for center users and 24.5% for edge users, and its energy efficiency increased by 25.48% for center users and 18.03% for edge users; ICIR-UMBCEEA consumed less energy, achieved higher throughput for center users and comparable throughput for edge users. The theoretical analysis and simulation results show that ICIR-UMBCEEA can effectively improve the energy efficiency of communication systems in engineering.
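
    As a rough numeric illustration of the IRC step described above (a minimal Python/numpy sketch under common MMSE-IRC assumptions, not the authors' implementation), the combining weights can be formed from the estimated CoMP channel H and the equivalent noise power as w = (HH^H + sigma2*I)^(-1) h0:

```python
import numpy as np

def irc_weights(H, sigma2):
    """MMSE-IRC combining weights for the desired layer (column 0 of H).
    H      : (n_rx, n_layers) estimated CoMP channel matrix
    sigma2 : equivalent noise power estimated from the DMRS
    Returns w such that the symbol estimate is s_hat = w.conj() @ y."""
    n_rx = H.shape[0]
    R = H @ H.conj().T + sigma2 * np.eye(n_rx)   # signal + interference + noise covariance
    return np.linalg.solve(R, H[:, 0])           # w = R^{-1} h_0

# toy usage: 4 receive antennas, desired layer plus one interfering layer
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
w = irc_weights(H, sigma2=0.1)
```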

    Cluster-based and energy-balanced time synchronization algorithm for wireless sensor networks
    SUN Yi NAN Jing WU Xin LU Jun
    2014, 34(9):  2456-2459.  DOI: 10.11772/j.issn.1001-9081.2014.09.2456

    To solve the problems of synchronization error accumulation and unbalanced energy consumption in multi-hop wireless sensor networks, a cluster-based and energy-balanced time synchronization algorithm for wireless sensor networks was proposed. Based on a hierarchical clustering topology, cluster heads in adjacent layers adopted a pairwise broadcast mechanism instead of the bidirectional pairwise synchronization mechanism to reduce communication overhead and the synchronization error caused by transmission delay. Cluster members synchronized with their cluster head using a combination of bidirectional pairwise synchronization and reference broadcast synchronization. In addition, the response node was selected according to residual energy to balance the energy consumption of cluster nodes. The synchronization precision and energy consumption of the proposed algorithm and a traditional algorithm were compared by theoretical analysis and simulation. The results show that the new algorithm not only ensures high synchronization accuracy, but also reduces communication overhead and balances network node energy consumption to lengthen the life cycle of the network.
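
    For reference, the bidirectional pairwise exchange that the broadcast mechanism replaces can be summarized by the classic two-way timestamp formulas; a minimal sketch (variable names are illustrative, not from the paper):

```python
def pairwise_offset_delay(t1, t2, t3, t4):
    """Two-way timestamp exchange between nodes A and B.
    t1: request sent (A's clock)   t2: request received (B's clock)
    t3: reply sent  (B's clock)    t4: reply received  (A's clock)"""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # B's clock offset relative to A
    delay  = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way propagation delay estimate
    return offset, delay

print(pairwise_offset_delay(100.0, 105.5, 106.0, 103.0))   # (4.25, 1.25)
```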

    Local cooperative localization algorithm for wireless sensor networks
    REN Xiuli AN Le
    2014, 34(9):  2460-2463.  DOI: 10.11772/j.issn.1001-9081.2014.09.2460

    In order to overcome the low accuracy and coverage rate of distance-based localization algorithms in wireless sensor networks, a Local Cooperative Localization Algorithm (LCLA) was proposed. The algorithm judged anchor nodes by calculating their local path loss indexes and marked those affected by the communication environment or obstacles as invalid. At the same time, the idea of cooperative localization was introduced: nodes that met certain localization error requirements were upgraded to anchor nodes to take part in the localization of other nodes, improving the coverage rate of localization. When nodes received multiple signals from anchor nodes, they first selected the initial valid anchor nodes to calculate their positions. If the number of initial valid anchor nodes was insufficient, nodes would then select from the upgraded ones to reduce the cumulative error and improve localization accuracy. The simulation results indicate that LCLA performs better in localization accuracy and coverage rate than the improved Received Signal Strength Indicator (RSSI) localization algorithm, MDS-MAP (MultiDimensional Scaling MAP) and cooperative localization.

    Indoor matching localization algorithm based on two-dimensional grid characteristic parameter fusion
    GUAN Weiguo LU Baochun
    2014, 34(9):  2464-2467.  DOI: 10.11772/j.issn.1001-9081.2014.09.2464

    Focused on the issue that the time-varying characteristic of the indoor Received Signal Strength Indicator (RSSI) drastically degrades localization accuracy, an indoor matching localization algorithm based on two-dimensional grid characteristic parameter fusion was proposed. The algorithm fused received signal strength and Time Difference of Arrival (TDOA) parameters to build a grid feature model, in which a two-dimensional grid quick search strategy was adopted to reduce the amount of computation. The normalized Euclidean distance between grid feature vectors was used to realize optimal grid match localization. Finally, the precise terminal location was computed from the reference nodes of the matched grid. In the localization simulation experiments, the proposed algorithm achieved a localization Root Mean Square Error (RMSE) of 1.079m, and the average localization accuracy was within 1.865m with 3m grid granularity; the probability of 3m localization accuracy reached 94.7%, which was 19.6% higher than that of the traditional method based only on RSSI. The proposed algorithm can effectively improve indoor positioning accuracy, while reducing the search data quantity and the computational complexity of matching localization.
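
    A minimal sketch of the normalized-Euclidean grid matching step (Python/numpy; the per-feature normalization by standard deviation is an assumption, since the abstract does not give the normalization):

```python
import numpy as np

def best_grid(measured, fingerprints):
    """Pick the grid cell whose stored (RSSI, TDOA) feature vector is closest
    to the live measurement under a normalized Euclidean distance."""
    F = np.asarray(fingerprints, dtype=float)      # (n_grids, n_features)
    m = np.asarray(measured, dtype=float)
    sigma = F.std(axis=0) + 1e-12                  # per-feature scale
    d = np.sqrt(np.sum(((F - m) / sigma) ** 2, axis=1))
    return int(np.argmin(d))
```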

    Optimized AODV routing protocol to avoid route breaks
    LI Xiangli JING Ruixia HE Yihan
    2014, 34(9):  2468-2471.  DOI: 10.11772/j.issn.1001-9081.2014.09.2468

    In Mobile Ad Hoc Networks (MANET), the movement of nodes is liable to cause link failures, while the local repair in the classic Ad Hoc On-demand Distance Vector (AODV) routing algorithm is performed only after a link breaks, which has limitations and may result in the loss of cached data packets when the repair process fails or proceeds too slowly. In order to solve this problem, an optimized AODV routing algorithm named ARB-AODV was proposed, which can avoid route breaks. In the ARB-AODV algorithm, links about to break were predicted and the stability degrees of the nodes' neighbors were calculated. Then the node with the highest stability was added to the weak link to eliminate the edge effect of nodes and avoid route breaks. Experiments were conducted on the NS-2 platform using the Random Waypoint Mobility Model (RWM) and Constant Bit Rate (CBR) traffic. When the nodes moved at speeds higher than 10m/s, the packet delivery ratio of the ARB-AODV algorithm remained at 80% or higher, the average end-to-end delay declined by up to 40% and the normalized routing overhead declined by up to 15% compared with AODV. The simulation results show that ARB-AODV outperforms AODV, and it can effectively improve network performance.

    Simulation of switch's processing delay in software defined network
    LYV Yilong HUANG Chuanhe JIA Yonghong ZHANG Hai
    2014, 34(9):  2472-2475.  DOI: 10.11772/j.issn.1001-9081.2014.09.2472

    In the simulation of Software Defined Network (SDN), the existing network simulation tools usually do not consider the processing delay of SDN switches. To make simulation results more realistic and accurate, a scheme to simulate this processing delay was proposed. First, the scheme divided the switch forwarding process into two parts, flow-table lookup operations and the execution of various actions, and then converted the two parts into processing delay by using the processor frequency and the memory cycle. Measurement and comparison were conducted on the processing delay of switches with different configurations in real and simulated environments. The results show that the simulated processing delay of the proposed method is close to that of the real environment, and it can accurately estimate the processing delay of switches.
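
    A toy version of the delay conversion described above (a sketch; the additive lookup-plus-action model follows the abstract, but the cycle counts are assumptions, not measurements from the paper):

```python
def switch_processing_delay(n_lookups, mem_cycle_ns, n_actions,
                            cycles_per_action, cpu_hz):
    """Per-packet delay = memory-bound flow-table lookups + CPU-bound actions."""
    lookup_s = n_lookups * mem_cycle_ns * 1e-9          # memory cycles -> seconds
    action_s = n_actions * cycles_per_action / cpu_hz   # CPU cycles -> seconds
    return lookup_s + action_s

# e.g. 3 table lookups at 60 ns each, 2 actions at 500 cycles on a 2 GHz core
print(switch_processing_delay(3, 60, 2, 500, 2e9))   # ~6.8e-07 s
```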

    Routing algorithm based on node similarity in delay/disruption tolerant networks
    DAI Chenqu LI Jianbo YOU Lei XU Jixing
    2014, 34(9):  2476-2481.  DOI: 10.11772/j.issn.1001-9081.2014.09.2476

    Delay/Disruption Tolerant Networks (DTN) are characterized by long delays, intermittent disruption, and limited buffer space and energy. To improve the delivery rate of messages while reducing network overhead and average latency, a new Routing Algorithm Based on Node Similarity (RABNS) in DTN was proposed. The algorithm used historical information to predict node encounter probabilities in the future. The nodes encountered in the past were recorded as a set, and a set intersection operation was applied to evaluate the similarity of a pair of nodes. This similarity was then used to control the number of message copies in the network. Simulations were conducted on The ONE platform using the RandomWaypoint motion model. In the simulation, RABNS performed better than PROPHET (Probabilistic ROuting Protocol using History of Encounters and Transitivity) in message delivery rate, and the network overhead of RABNS was about half that of PROPHET, which greatly improved the utilization of network resources. The average latency of RABNS was a little higher than that of Epidemic but lower than that of PROPHET; the node cache size did not have a significant impact on the average hop count, which was about half that of PROPHET. The simulation results show that RABNS can effectively limit message flooding while achieving a higher message delivery rate and lower network overhead and average latency; therefore it is suitable for DTN scenarios with limited node storage and also applicable in social DTNs with gregarious characteristics.
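
    A minimal sketch of the set-intersection similarity and copy control (Python; the overlap coefficient and the copy formula are assumptions; the abstract only states that an intersection-based similarity controls the number of copies):

```python
def encounter_similarity(hist_a, hist_b):
    """Similarity of two nodes from their encounter histories (sets of node ids)."""
    if not hist_a or not hist_b:
        return 0.0
    common = hist_a & hist_b                       # set intersection
    return len(common) / min(len(hist_a), len(hist_b))

def copies_to_forward(n_copies, similarity):
    """Hand more copies to relays whose contact set differs most from ours."""
    return max(1, round(n_copies * (1.0 - similarity)))

print(copies_to_forward(8, encounter_similarity({1, 2, 3, 4}, {3, 4, 5})))  # 3
```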

    Stackelberg game-based power allocation strategy for cooperative networks
    WEI Menghan QIN Shuang SUN Sanshan
    2014, 34(9):  2482-2485.  DOI: 10.11772/j.issn.1001-9081.2014.09.2482

    A distributed strategy based on the Stackelberg game was proposed to allocate cooperative power in cooperative networks. A Stackelberg game model was built first, in which the source node set the price according to the cooperative power. Considering its available resources, channel state, location and the price determined by the source node, the relay node allocated the cooperative power to construct a user utility function. Then, the utility function was shown to satisfy the conditions of a concave function, ensuring the existence of an equilibrium. Subsequently, each node maximized its utility by finding the Stackelberg Equilibrium (SE) of optimal power and price. Finally, the simulation results verified the existence of the equilibrium point, and the node's price, cooperative power and each node's utility were analyzed for different source node positions. In the experiments, the cooperative power and price of the closer user were 1.29 and 1.37 times those of the farther user, respectively. The experimental results show that the proposed strategy is effective, and it can be used in cooperative networks and other distributed networks.

    Simple method to improve the iterative detection convergence of SCCRFQPSK
    ZHANG Gaoyuan WEN Hong SONG Huanhuan LI Tengfei
    2014, 34(9):  2486-2490.  DOI: 10.11772/j.issn.1001-9081.2014.09.2486

    The Maximum A Posteriori probability (MAP) demodulation of recursive FQPSK-B over the Additive White Gaussian Noise (AWGN) channel was presented first. The bit extrinsic Log-Likelihood Ratio (ex-LLR) of FQPSK demodulation, required in the iterative detection of Serial Concatenation of Convolutional coded Recursive FQPSK (SCCRFQPSK), was also derived. Secondly, to weaken the positive feedback phenomenon during the iterative detection of SCCRFQPSK, the bit ex-LLR of FQPSK demodulation was appropriately adjusted by linear weighting. Monte Carlo simulation showed that the optimal weighting factor of the weighted SCCRFQPSK system was 0.7, which yielded a 0.3dB Signal-to-Noise Ratio (SNR) gain at a Bit Error Rate (BER) of 10^-5 with 4 iterations. The simulation results indicate that the proposed method can not only accelerate decoding convergence and improve the performance of the SCCRFQPSK system, but also reduce the delay. To a certain extent, it can cope with the low-SNR conditions of deep space communication caused by long distances.

    List scheduling algorithm for static tasks with dependences in Internet of Things environment
    YE Jia ZHOU Mingzheng
    2014, 34(9):  2491-2496.  DOI: 10.11772/j.issn.1001-9081.2014.09.2491

    The static task list scheduling problem in the distributed heterogeneous computing environment of the Internet of Things was studied, and a list scheduling algorithm named Heterogeneous Dynamic Priority Task Scheduling (HDPTS) was proposed, which can dynamically change the scheduling sequence based on the earliest-completion-time strategy. Since existing list scheduling algorithms cannot accurately determine the scheduling order before scheduling, a dynamic priority scheduling policy was added on the basis of the Improved Heterogeneous Earliest Finish Time (IHEFT) algorithm: when the predecessor tasks of a node complete scheduling, the scheduling priority of this node is updated. The scheduling priority of a task was calculated from the maximum of the latest completion time of all its immediate predecessor tasks and the maximum available time of all resources. At the same time, other factors were also considered, including the influence of tasks assigned to a resource on subsequent tasks, the resource load, the calculated upward-rank weight and the influence on exit tasks. These considerations make the priority calculation more reasonable, so the task scheduling sequence is changed dynamically and reasonably according to the task allocation situation. A randomly generated example test shows that the scheduling length of HDPTS is 14.29% shorter than that of IHEFT and HEFT (Heterogeneous Earliest Finish Time); test results on a large number of randomly generated Directed Acyclic Graphs (DAG) with specific structures prove that HDPTS is more effective than the IHEFT, HEFT and LDCP (Longest Dynamic Critical Path) algorithms.

    Design and analysis of a strong fault-tolerant on-board SpaceWire bus network
    NIU Yuehua ZHAO Wenyan
    2014, 34(9):  2497-2500.  DOI: 10.11772/j.issn.1001-9081.2014.09.2497

    The SpaceWire bus is nowadays mainly used in small star topologies in which a few nodes are connected to a central router; research on SpaceWire networks with large numbers of nodes in complex satellites is still rare. A bus-type topology of SpaceWire network was proposed to meet the high reliability demands of spacecraft. The operation principle and hierarchical fault-tolerant scheme of this network were analyzed, and its reliability and communication performance were derived. The results indicate that the network topology satisfies the requirements of spacecraft applications. Design rules for packet length, link data rate and node layout in SpaceWire networks were provided to support the planning of on-board SpaceWire networks.

    New method for dynamic non-uniform subband decomposition based on distribution of power spectral density
    MA Lingkun DAI Zhimei
    2014, 34(9):  2501-2504.  DOI: 10.11772/j.issn.1001-9081.2014.09.2501

    In order to solve the subband decomposition problem of wideband signals with a large spectrum scope, a new method for non-uniform subband decomposition based on Power Spectral Density (PSD) was proposed to dynamically adjust the number and bandwidths of subbands, reasonably control the autocorrelation matrix eigenvalue spread of subband signals, and improve the performance and efficiency of subband signal processing. For a given sequence, the number of subbands and the spectrum range of each subband were determined through power spectrum estimation. Then each subband signal was shifted to zero frequency using subband modulation to achieve signal decomposition. The eigenvalue spread of the subband signals and the signal reconstruction performance were analyzed using MATLAB. The experimental results show that, compared with existing methods with an equal number of subbands, the proposed method, which uses the distribution information of the PSD, can effectively control the eigenvalue spread of the subbands while maintaining good signal reconstruction performance.
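
    A minimal sketch of the two steps named in the abstract, PSD-driven band splitting and modulation of a subband to zero frequency (Python/scipy; the equal-power splitting rule is an assumption, and the low-pass filtering after demodulation is omitted):

```python
import numpy as np
from scipy.signal import welch

def band_edges_from_psd(x, fs, power_frac=0.25):
    """Split [0, fs/2] so that each subband carries roughly power_frac of
    the total power, i.e. more, narrower subbands where the PSD is strong."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    cdf = np.cumsum(pxx) / np.sum(pxx)
    inner = [f[np.searchsorted(cdf, q)] for q in np.arange(power_frac, 1.0, power_frac)]
    return [f[0]] + inner + [f[-1]]

def demodulate(x, fs, fc):
    """Shift a subband centred at fc down to zero frequency (low-pass omitted)."""
    t = np.arange(len(x)) / fs
    return x * np.exp(-2j * np.pi * fc * t)
```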

    Distributed clustering algorithm with high communication efficiency for streaming data
    ZHU Qiang SUN Yuqiang
    2014, 34(9):  2505-2509.  DOI: 10.11772/j.issn.1001-9081.2014.09.2505

    The resources of sensor nodes are limited, and high communication overhead consumes much power. In order to reduce the communication overhead of distributed streaming data clustering algorithms, a new efficient algorithm with two phases, online local clustering and offline coordinated clustering, was proposed. The online local clustering algorithm clustered the data at each remote stream data source, then sent the results to the collaborative node by a serialization method. The collaborative node collected and analyzed all local clusters to obtain the global clusters. The experimental results show that the time for sending data is constant, while the clustering time and the total time grow linearly with the size of the sliding window, meaning that the communication cost of the algorithm is not affected by the sliding window size or the number of clusters. The accuracy of the proposed algorithm is close to that of the centralized algorithm, and its communication overhead is far less than that of the distributed algorithm. The experimental results show that the proposed algorithm has good scalability, and can be applied to the clustering analysis of distributed large-scale streaming data.

    Blind separation method for source signals with temporal structure based on second-order statistics
    QIU Mengmeng ZHOU Li WANG Lei WU Jianqiang
    2014, 34(9):  2510-2513.  DOI: 10.11772/j.issn.1001-9081.2014.09.2510

    The objective of Blind Source Separation (BSS) is to restore unobservable source signals from their mixtures without prior knowledge of the mixing process. The latent source signals are assumed to be spatially uncorrelated but temporally correlated, i.e., they have non-vanishing temporal structure. A second-order statistics based BSS method was proposed for such sources. Robust prewhitening was first performed on the observed mixed signals, where the dimension of the sources was estimated based on the Minimum Description Length (MDL) criterion. Then, blind separation was realized by applying Singular Value Decomposition (SVD) to the time-delayed covariance matrix of the whitened signals. Simulation on the separation of a group of speech signals proves the effectiveness of the algorithm, whose performance was measured by the Signal-to-Interference Ratio (SIR) and the Performance Index (PI).
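
    A minimal sketch of this second-order pipeline (Python/numpy, in the classic AMUSE style; an eigendecomposition of the symmetrized lagged covariance is used here in place of the paper's SVD, and the MDL dimension estimate is omitted):

```python
import numpy as np

def amuse(X, tau=1):
    """AMUSE-style second-order BSS.
    X : (n_sensors, n_samples) observed mixtures."""
    X = X - X.mean(axis=1, keepdims=True)
    n = X.shape[1]
    d, E = np.linalg.eigh(X @ X.T / n)          # zero-lag covariance
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T     # whitening matrix
    Z = V @ X
    C = Z[:, :-tau] @ Z[:, tau:].T / (n - tau)  # time-lagged covariance
    C = (C + C.T) / 2                           # symmetrize -> real EVD
    _, B = np.linalg.eigh(C)
    return B.T @ Z                              # sources, up to order and scale
```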

    One projection subspace pursuit for signal reconstruction in compressed sensing
    LIU Xiaoqing LI Youming LI Chengcheng JI Biao CHEN Bin ZHOU Ting
    2014, 34(9):  2514-2517.  DOI: 10.11772/j.issn.1001-9081.2014.09.2514

    In order to reduce the complexity of signal reconstruction algorithms and reconstruct signals with unknown sparsity, a new algorithm named One Projection Subspace Pursuit (OPSP) was proposed. Firstly, the upper and lower bounds of the signal's sparsity were determined based on the restricted isometry property, and the sparsity was set to the integer midpoint of these bounds. Secondly, under the framework of Subspace Pursuit (SP), the projection of the observation onto the support set in each iteration was removed to decrease the computational complexity of the algorithm. Furthermore, the reconstruction rate of the whole signal was used as the index of reconstruction performance. The simulation results show that the proposed algorithm can reconstruct signals of unknown sparsity in less time and with a higher reconstruction rate compared with the traditional SP algorithm, and it is effective for signal reconstruction.

    Energy-efficient strategy for disks in RAMCloud
    LU Liang YU Jiong YING Changtian WANG Zhengying LIU Jiankuang
    2014, 34(9):  2518-2522.  DOI: 10.11772/j.issn.1001-9081.2014.09.2518

    The emergence of RAMCloud has improved the user experience of Online Data-Intensive (OLDI) applications. However, its energy consumption is higher than that of traditional cloud data centers. An energy-efficient strategy for disks under this architecture was put forward to solve this problem. Firstly, the fitness function and roulette wheel selection from the genetic algorithm were introduced to choose energy-saving disks for persistent data backup; secondly, a reasonable buffer size was chosen to extend the average continuous idle time of disks, so that some of them could be put into standby during their idle time. The simulation results show that the proposed strategy can save about 12.69% of energy in a given RAMCloud system with 50 servers. The buffer size affects both the energy-saving effect and data availability, and the two must be weighed against each other.
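
    A minimal sketch of the roulette wheel selection used to pick backup disks (Python; the fitness definition is left abstract, since the abstract does not give its formula):

```python
import random

def roulette_select(disks, fitness):
    """Pick one disk with probability proportional to its fitness value."""
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for disk, f in zip(disks, fitness):
        acc += f
        if r <= acc:
            return disk
    return disks[-1]           # guard against floating-point rounding

backup_disk = roulette_select(["d0", "d1", "d2"], [0.5, 0.2, 0.3])
```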

    Memory scheduling strategy for virtual machine in private cloud platform
    LI Dawei ZHAO Fengyu
    2014, 34(9):  2523-2526.  DOI: 10.11772/j.issn.1001-9081.2014.09.2523

    On private cloud platforms, the existing methods cannot flexibly monitor and effectively distribute virtual machine memory resources. To solve this problem, a Memory Monitor and Scheduler (MMS) model was put forward, and real-time monitoring and dynamic scheduling of virtual machine memory shortage and surplus were realized by using the libvirt and libxc function libraries provided by Xen. A small private cloud platform was built using Eucalyptus, with one physical machine as the master node and two physical machines as child nodes. In the experiments, when the host was short of memory, the MMS system effectively released memory space by starting the virtual machine migration strategy; when the memory of a virtual machine approached its initial maximum memory, the MMS system assigned it a new maximum memory; when the occupied memory decreased, the MMS system recycled part of the free memory, which had little effect on the performance of virtual machines as long as the released memory did not exceed 150MB (with a maximum memory of 512MB). The results show that the MMS model is effective for real-time monitoring and dynamic scheduling of memory on private cloud platforms.

    Multi-dimensional QoS cloud task scheduling algorithm based on task replication
    ZHANG Qiaolong ZHANG Guizhu WU Delong
    2014, 34(9):  2527-2531.  DOI: 10.11772/j.issn.1001-9081.2014.09.2527

    In the cloud environment, in order to take full advantage of the idle time of virtual resources and meet users' Quality of Service (QoS) requirements, a multi-dimensional QoS cloud task scheduling algorithm based on task replication was proposed. First, a cloud resource model and a user QoS model were built. Then, according to resource utilization and QoS satisfaction, the virtual resource with the highest overall performance was chosen. Simultaneously, the algorithm duplicated a parent task in idle time to reduce the execution time. In comparison experiments with HEFT (Heterogeneous Earliest Finish Time) and CPOP (Critical Path On a Processor), when the user's preference was reliability, the average reliability of the proposed algorithm was higher than that of HEFT and CPOP; when the user's preference was makespan and cost, the average makespan of the proposed algorithm was smaller than that of HEFT and CPOP; when the user expressed no preference, both the average makespan and the cost of the proposed algorithm were smaller than those of HEFT and CPOP. The experimental results indicate that the proposed algorithm can improve user satisfaction and resource utilization.

    Artificial intelligence
    Hybrid genetic algorithm based on two-dimensional variable neighborhood coding
    ZHU Biying ZHU Fuxi LIU Kegang LI Fanchen
    2014, 34(9):  2537-2542.  DOI: 10.11772/j.issn.1001-9081.2014.09.2537

    Concerning that general hybrid genetic algorithms cannot achieve both effectiveness and efficiency, a new hybrid genetic algorithm using two-dimensional variable neighborhood coding, named VNHGA, was proposed. Firstly, the traditional binary coding method was replaced by a new coding method designed to separate coding and synchronous inheritance for individuals. Secondly, the traditional mutation operator was replaced by a new stable mutation operator to improve efficiency. VNHGA was tested on the optimization of multi-dimensional functions. It was verified that, after adopting the new coding method, using the "Baldwin effect" as the embedding strategy was more effective but less efficient than using "Lamarckian evolution". After introducing the stable mutation operator, effectiveness was maintained while efficiency improved, and the running time was roughly halved. VNHGA was also compared with two other modified hybrid genetic algorithms to exhibit its advantages. The results indicate that VNHGA is both effective and efficient, and it can be used to solve optimization problems.

    Improved backtracking search optimization algorithm with new effective mutation scale factor and greedy crossover strategy
    WANG Xiaojuan LIU Sanyang TIAN Wenkai
    2014, 34(9):  2543-2546.  DOI: 10.11772/j.issn.1001-9081.2014.09.2543

    As the standard Backtracking Search Optimization Algorithm (BSA) converges slowly, a new mutation scale factor based on the Maxwell-Boltzmann distribution and a crossover strategy with a greedy property were introduced to improve it. The Maxwell-Boltzmann distribution was used to generate the mutation scale factor, which enhanced search efficiency and convergence speed. Having the mutation population learn from outstanding individuals was adopted in a crossover strategy with fewer exchanged dimensions, adding a greedy property to the crossover while fully preserving population diversity, which avoids the tendency of most existing greedy-enhanced algorithms to become trapped in local minima. Simulation experiments were conducted on fifteen benchmark functions. The results show that the improved algorithm has faster convergence speed and higher convergence precision; even on high-dimensional multimodal functions, its search results are nearly 14 orders of magnitude more accurate than those of the original BSA after the same number of iterations, and its convergence precision can reach 10^-10 or less.
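
    A minimal sketch of drawing the mutation scale factor from a Maxwell-Boltzmann distribution (Python/scipy; the scale parameter value is an assumption):

```python
from scipy.stats import maxwell

def mutation_scale_factor(scale=1.0, size=None, rng=None):
    """Draw BSA's mutation amplitude F from a Maxwell-Boltzmann distribution
    (standard BSA draws F = 3 * N(0, 1) instead)."""
    return maxwell.rvs(scale=scale, size=size, random_state=rng)

# BSA mutation: Mutant = P + F * (oldP - P), with F drawn as above
F = mutation_scale_factor(scale=1.0, size=(30, 10))   # population 30, dimension 10
```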

    Fast multi-objective differential evolution algorithm based on non-dominated solution sorting
    2014, 34(9):  2547-2551.  DOI: 10.11772/j.issn.1001-9081.2014.09.2547

    Concerning the high time complexity of multi-objective evolutionary algorithms based on Pareto non-dominated sorting, and exploiting the latent features of non-dominated solution sorting, a fast sorting method was introduced that only handles the individuals with the highest rank in the current population. Individuals could be chosen into the next generation during sorting, and the algorithm terminated once enough individuals for the next generation had been selected, which reduced the number of individuals involved in the sorting process and hence the time complexity. In addition, a uniform crowding distance calculation method was given. Finally, a Fast Multi-Objective Differential Evolution (FMODE) algorithm was proposed, which incorporated the introduced sorting method and the uniform crowding distance into Differential Evolution (DE). Simulation experiments were conducted on the standard multi-objective optimization problems ZDT1~ZDT4 and ZDT6. The time consumption of FMODE was far less than that of NSGA-II for large populations, and its overall performance was much better than that of the classic NSGA-II, SPEA2 and DEMO. Moreover, the uniform crowding distance method provided better performance than the classic crowding distance within the FMODE framework. The parameters of FMODE were also obtained by experiments. The simulation results show that the proposed algorithm has lower time complexity for sorting, and better convergence and diversity.

    Improvement analysis and application of firefly algorithm
    WANG Jiquan WANG Fulin
    2014, 34(9):  2552-2556.  DOI: 10.11772/j.issn.1001-9081.2014.09.2552

    The Firefly Algorithm (FA) has several disadvantages in solving constrained global optimization problems: it is difficult to produce the initial population, the relative attractiveness is unrelated to the absolute brightness of fireflies, the inertia weight does not take full advantage of the information of the objective function, and the movement distance of fireflies is poorly controlled and constrained. Therefore an improved FA was proposed. Firstly, a Genetic Algorithm (GA) was used to produce the initial population, which improved the production speed of the initial population. Secondly, a dynamic self-adaptive inertia weight based on the objective function was added to FA to improve the convergence speed. Furthermore, a calculation method for relative attractiveness was given that relates the relative attractiveness to the absolute brightness of fireflies. Finally, a compression factor was introduced into the location update formula of FA to control and constrain the movement distance of fireflies, thus improving the convergence speed of FA. The experimental results on four test functions show that, compared with the standard FA and the FA with inertia weight, the improved FA is more effective, significantly improving computing speed and reducing the number of iterations.
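
    A minimal sketch of one firefly position update with the two added ingredients (Python/numpy; the specific inertia-weight formula and the compression factor value chi = 0.729 are assumptions, not the paper's formulas):

```python
import numpy as np

def firefly_move(xi, xj, fi, f_best, f_worst,
                 beta0=1.0, gamma=1.0, alpha=0.2, chi=0.729, rng=None):
    """Move the firefly at xi toward the brighter firefly at xj.
    w   : assumed objective-based self-adaptive inertia weight
    chi : compression factor bounding the move (value assumed)."""
    rng = rng or np.random.default_rng()
    r2 = float(np.sum((xi - xj) ** 2))
    beta = beta0 * np.exp(-gamma * r2)                          # attractiveness
    w = 0.4 + 0.5 * (fi - f_best) / (f_worst - f_best + 1e-12)  # worse -> larger step
    step = beta * (xj - xi) + alpha * (rng.random(xi.shape) - 0.5)
    return xi + chi * w * step
```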

    Visual localization for mobile robots in complex urban scene using building features and 2D map
    LI Haifeng WANG Huaiqiang
    2014, 34(9):  2557-2561.  DOI: 10.11772/j.issn.1001-9081.2014.09.2557

    For the localization problem in urban areas, where the Global Positioning System (GPS) cannot provide accurate locations because its signal is easily blocked by high-rise buildings, a visual localization method based on vertical building facades and a 2D building boundary map was proposed. Firstly, the vertical line features across two views, captured with an onboard camera, were matched into pairs. Then, the vertical building facades were reconstructed using the matched vertical line pairs. Finally, a visual localization method utilizing the reconstructed vertical building facades and the 2D building boundary map was designed under the RANSAC (RANdom SAmple Consensus) framework. The proposed localization method can work in real, complex urban scenes. The experimental results show that the average localization error is around 3.6m, which can effectively improve the accuracy and robustness of the self-localization of mobile robots in urban environments.

    New particle swarm optimization algorithm for path planning simulation of virtual character
    ZHOU Jing FU Xuchang
    2014, 34(9):  2562-2565.  DOI: 10.11772/j.issn.1001-9081.2014.09.2562

    For the problem that a particle easily falls into a local optimum and cannot move forward when avoiding obstacles, a method was proposed that retreats the local optimal particle to its historical best position and searches for the feasible and optimal location in the 8-neighborhood of that position. The position nearest to the target that is not a barrier can be found by this method, and the particle is moved to it. Meanwhile, the global optimal position of the current generation of the particle swarm is found, and the location of each particle is set to this position to continue the iteration. In obstacle avoidance experiments on a grid map, particles fell into local optima with the traditional methods when encountering obstacles, but could successfully avoid obstacles and reach the destination using the improved algorithm. The improved algorithm was introduced into a 3D visual simulation system; in a large map with many obstacles, the probability of particles falling into a local optimum reached 50%, resulting in path-finding failure. After adding a circular slope to the obstacles as a further improvement of the algorithm, the probability of successful path-finding increased to 83%. The experimental results show that the improved algorithm has stronger search ability and can effectively plan paths in complex scenes.

    Real-time advertising trigger with advertiser behavioral analysis
    XIE Zhongqian CHANG Xiao JI Donghong
    2014, 34(9):  2566-2570.  DOI: 10.11772/j.issn.1001-9081.2014.09.2566

    In the process of advertising on search engines, the correlation between the auction word (Bidword) and the user's query (Query) must be calculated in real time. Dynamic term weights in advertisements and the assessment of a phrase's commercial value must be considered in this relevance calculation. Thus, a phrase correlation calculation approach named ADPCB was proposed based on advertiser behavioral analysis and the Continuous Bag-Of-Words (CBOW) model. Firstly, the approach obtained the vector of each term by CBOW. Secondly, advertiser behavior was analyzed to construct a global weighting tree over phrases, and the phrase structure was analyzed to obtain dynamic term weights. Finally, the phrase distributed representation produced by the term weights and a linear combination was applied to the correlation measurement between Bidword and Query. Experiments were conducted on 10000 Query-Bidword pairs (positive-negative ratio 1:1) with editorial judgments using Word2vec. ADPCB performed better than Term Frequency-Inverse Document Frequency (TF-IDF) combined with CBOW; at an accuracy of 0.70, ADPCB achieved higher recall than Latent Dirichlet Allocation (LDA), BM25 (Best Match 25) and TF-IDF. The experimental results and analysis show that ADPCB can recognize the commercial value of phrases to reduce the amount of advertising triggered by low-commercial-value Queries, and it can be used in real-time calculation scenarios.
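
    A minimal sketch of the final relevance step, a weighted linear combination of CBOW term vectors compared by cosine similarity (Python/numpy; `vec` and `weight` stand in for the CBOW embeddings and the dynamic term weights, which the paper derives from the weighting tree):

```python
import numpy as np

def phrase_vector(terms, vec, weight):
    """Weighted linear combination of CBOW term vectors, L2-normalized."""
    v = sum(weight[t] * vec[t] for t in terms)
    return v / (np.linalg.norm(v) + 1e-12)

def relevance(bidword_terms, query_terms, vec, weight):
    """Cosine similarity between the Bidword and Query phrase vectors."""
    return float(phrase_vector(bidword_terms, vec, weight)
                 @ phrase_vector(query_terms, vec, weight))
```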

    Consumption sentiment classification based on two-dimensional coordinate mapping method
    LIN Mingming QIU Yunfei SHAO Liangshan
    2014, 34(9):  2571-2576.  DOI: 10.11772/j.issn.1001-9081.2014.09.2571

    Aiming at sentiment classification of Chinese consumption comments, a corpus-based two-dimensional coordinate mapping method for sentiment classification was constructed. According to the characteristics of the Chinese language, firstly, a more pertinent corpus-based searching method was proposed. Secondly, rules for extracting Chinese subjective phrases were defined. Thirdly, an algorithm for choosing the optimal seed words of a specific field was constructed. Finally, the two-dimensional coordinate mapping algorithm was constructed, which maps a comment into two-dimensional Cartesian coordinates by calculating its coordinate values and decides its semantic orientation. Experiments were conducted on 1200 comments on milk (half positive and half negative) from Amazon. In the experiments, the word "henhao-lou" was chosen as the optimal seed word by the seed word selection algorithm, and the sentiment orientation of each comment was decided by the two-dimensional coordinate mapping algorithm. The average F-measure of the proposed algorithm reached more than 85%. The results show that the proposed algorithm can classify the sentiment of Chinese consumption comments.

    Ensemble learning algorithm for labels matching based on pairwise labelsets
    ZHANG Danpu WANG Lili FU Zhongliang LI Xin
    2014, 34(9):  2577-2580.  DOI: 10.11772/j.issn.1001-9081.2014.09.2577

    When the two labels of an instance come from two different labelsets in multi-label classification, the task is called a labels matching problem; however, there is no specific algorithm for solving it. Although the labels matching problem could be solved by traditional multi-label classification algorithms, it has its own particularities. After analyzing the labels matching problem, a new labels matching algorithm based on pairwise labelsets was proposed using an adaptive method, which draws on real Adaptive Boosting (real AdaBoost) and the idea of global optimization. This algorithm can learn the rules of labels matching well and complete the matching. The experimental results show that, compared with traditional algorithms, the new algorithm not only reduces the search scope of the label space, but also decreases the minimum learning error as the number of weak classifiers increases, making classification more accurate and faster.

    Solution of 0-1 knapsack problem based on expected efficiency and linear fitting
    ZHANG Lingling ZHANG Hong
    2014, 34(9):  2581-2584.  DOI: 10.11772/j.issn.1001-9081.2014.09.2581

    In order to further optimize the 0-1 Knapsack Problem (KP), the relationship among the backpack capacity, the number of objects, and the weight, price and cost performance of objects was analyzed, a linear fitting model based on mathematical theory was determined, and a hybrid algorithm for the 0-1 KP using expected efficiency was proposed. Three groups of experiments were conducted. For examples with ρ<0.7, when the backpack capacity was changed, the proposed algorithm converged to the objective value faster and saved storage space in comparison with the artificial glowworm swarm algorithm; in comparison with the absolute greedy and expected efficiency algorithm, the proposed algorithm obtained the optimal solution. The results prove that this hybrid algorithm is reasonable and exact, and it can be widely used to solve the 0-1 KP.
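
    A minimal sketch of the expected-efficiency (value/weight) greedy core (Python; the linear-fitting correction the paper adds on top is omitted here):

```python
def greedy_expected_efficiency(weights, values, capacity):
    """Fill the knapsack greedily by value/weight ratio."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_w = total_v = 0
    chosen = []
    for i in order:
        if total_w + weights[i] <= capacity:
            chosen.append(i)
            total_w += weights[i]
            total_v += values[i]
    return total_v, chosen

print(greedy_expected_efficiency([2, 3, 4, 5], [3, 4, 5, 6], 5))   # (7, [0, 1])
```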

    Regional blood supply system optimization under stochastic demand
    YU Juan WANG Wenxian ZHONG Qinglun
    2014, 34(9):  2585-2589.  DOI: 10.11772/j.issn.1001-9081.2014.09.2585

    From the perspective of supply chain integration, a blood supply model was developed that minimizes the blood acquisition risk, the system operation cost, and the penalties for both excessive and insufficient acquisition by the multi-objective programming method. Taking into account that the amount of expired blood is proportional to time, as well as the cost of expired blood processing, a regional supply and demand equilibrium model characterized by stochastic demand for the four blood types was built. The model was proved to be convex, and the variational inequality of the blood supply and demand network equilibrium was derived. Using a modified quasi-Newton method, the solutions of the blood supply chain's supply and demand equilibrium under stochastic demand were obtained. Finally, a case study in Chengdu verified the model's applicability.

    Face recognition algorithm based on Gabor wavelet and deep belief networks
    CHAI Ruimin CAO Zhenji
    2014, 34(9):  2590-2594.  DOI: 10.11772/j.issn.1001-9081.2014.09.2590

    Feature extraction and pattern classification are two key problems in face recognition. In order to solve the high-dimensional and Small Sample Size (SSS) problem of face recognition, starting from facial feature extraction and dimensionality reduction algorithms, a two-stage feature extraction and dimensionality reduction model based on the Restricted Boltzmann Machine (RBM) was put forward. First, the image was evenly divided into a number of local blocks and quantified, then the image was processed by the Gabor wavelet transform. The Gabor facial features were encoded by the RBM to learn more intrinsic characteristics of the data, so as to reduce the dimensionality of the high-dimensional facial features. On this basis, a multimodal face recognition algorithm based on the Deep Belief Network (DBN) was proposed. The recognition results on the ORL, UMIST and FERET face databases with different sample sizes and image resolutions show that, compared with linear dimensionality reduction methods and shallow network methods, the proposed method achieves better learning efficiency and good recognition results.

    3D face recognition of single sample based on fuzzy ARTMAP
    WANG Siteng TANG Xusheng CHEN Dan
    2014, 34(9):  2595-2599.  DOI: 10.11772/j.issn.1001-9081.2014.09.2595

    Traditional 3D face recognition and classification algorithms require multiple samples for training; however, recognition performance degrades seriously with single-sample training. To resolve this problem, the Fuzzy Adaptive Resonance theory MAP (Fuzzy ARTMAP) algorithm was used to classify a 3D face database. Firstly, the features of the 3D face depth image were extracted by Local Binary Patterns (LBP). Then the frequency-domain features of the LBP features, extracted by the Log-Gabor wavelet, were used as the input vectors for training. Finally, the feature vectors were sent to the Fuzzy ARTMAP classifier for recognition. Experiments compared with the Probabilistic Neural Network (PNN) and Extreme Learning Machine (ELM) were conducted on the FRGC v2.0 database: the recognition rate of the proposed algorithm reached 87.15%, the classifier training time was 24.88s, the matching time of a single sample against a single registered face was 0.0015s, and the search time for a new face sample in the database was 1.08s. The experimental results show that the proposed method outperforms PNN and ELM, achieving a higher recognition rate with a shorter training time, and has stable time performance with strong controllability.

    Wavelet thresholding method based on genetic optimization function curve for ECG noise removal
    WANG Zheng HE Hong TAN Yonghong
    2014, 34(9):  2600-2603.  DOI: 10.11772/j.issn.1001-9081.2014.09.2600

    In order to overcome the oscillation caused by hard-threshold wavelet filtering and the waveform distortion caused by soft-threshold wavelet filtering, a wavelet threshold de-noising method based on a genetically optimized function curve, named GOCWT, was proposed. In GOCWT, a quadratic function was used to approximate the optimal threshold function curve. The Root Mean Square Error (RMSE) and smoothness of the reconstructed signal were used to design the fitness function, and the Genetic Algorithm (GA) was utilized to optimize the parameters of the new thresholding function. Analysis of 48 segments of ECG signals showed that the new method increased the smoothness value by 36% compared to the hard-threshold method, and decreased the RMSE value by 32% compared to the soft-threshold method. The results show that the proposed algorithm outperforms hard- and soft-threshold wavelet filtering: it not only avoids the undesirable oscillation of the filtered signal, but also preserves the minute features of the signal, including peak values.
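
    A minimal sketch of a GA fitness combining the two criteria named above (Python/numpy; the abstract does not give the exact combination, so the product form and the difference-based smoothness measure are assumptions):

```python
import numpy as np

def threshold_fitness(x_denoised, x_noisy):
    """Fitness of one candidate threshold curve: fidelity times smoothness.
    Smaller is better, so a GA would minimize this value."""
    rmse = np.sqrt(np.mean((x_denoised - x_noisy) ** 2))
    smooth = np.sum(np.diff(x_denoised) ** 2) / (np.sum(np.diff(x_noisy) ** 2) + 1e-12)
    return rmse * smooth
```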

    Meta path-based dynamic similarity search in heterogeneous information network
    CHEN Xiangtao DING Pingjian WANG Jing
    2014, 34(9):  2604-2607.  DOI: 10.11772/j.issn.1001-9081.2014.09.2604

    Existing similarity search algorithms do not consider the time factor. To address this problem, a meta path-based dynamic similarity search algorithm named PDSim was proposed for heterogeneous information networks. Firstly, PDSim calculated the link matrix of objects under the given meta-path, thus obtaining the ratio of meta-path instances between different objects. Meanwhile, the differences of link-establishing times were calculated. Finally, the dynamic similarity under the given meta-path was measured. In multiple instances of similarity search, PDSim kept up with object interests that change dynamically over time. Compared with the PathSim (meta Path-based Similarity) and PCRW (Path-Constrained Random Walks) methods, the clustering accuracy in terms of Normalized Mutual Information (NMI) increased by 0.17% to 9.24% when PDSim was applied to clustering. The experimental results show that, compared with traditional link-based similarity search algorithms, PDSim significantly improves the efficiency of dynamic similarity search and user satisfaction, and it is a dynamic similarity search algorithm for objects that change with time.
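
    For reference, the underlying PathSim measure on a meta-path commuting matrix, plus an assumed exponential time-decay factor standing in for PDSim's use of link-establishing time differences (Python/numpy):

```python
import numpy as np

def pathsim(M, i, j):
    """PathSim: M is the meta-path commuting matrix, with M[i, j] the number
    of path instances between objects i and j under the given meta-path."""
    return 2.0 * M[i, j] / (M[i, i] + M[j, j] + 1e-12)

def dynamic_pathsim(M, i, j, dt, lam=0.1):
    """Discount by the gap dt between the objects' link-establishing times
    (the exponential decay is an assumption, not the paper's formula)."""
    return pathsim(M, i, j) * np.exp(-lam * dt)
```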

    Feature selection method based on integration of mutual information and fuzzy C-means clustering
    ZHU Jiewen XIAO Jun
    2014, 34(9):  2608-2611. 

    Plenty of redundant features may reduce the performance of data classification on massive datasets, so a new automatic feature selection method based on the integration of Mutual Information (MI) and Fuzzy C-Means (FCM) clustering, named FCC-MI, was proposed to resolve this problem. Firstly, MI and its correlation function were analyzed, and the features were sorted according to the correlation value. Secondly, the data was grouped according to the feature with the maximum correlation, and the number of optimal features was determined automatically by the FCM clustering method. Finally, the optimal selection of features was performed using the correlation values. Experiments on seven datasets from the UCI machine learning repository were conducted to compare FCC-MI with three methods from the literature: WCMFS (Within-class variance and Correlation Measure Feature Selection), B-AMBDMI (Based on Approximating Markov Blank and Dynamic Mutual Information), and T-MI-GA (Two-stage feature selection algorithm based on MI and GA). The theoretical analysis and experimental results show that the proposed method not only improves the efficiency of data classification, but also ensures classification accuracy while automatically determining the optimal feature subset, reducing the number of features of the dataset; thus it is suitable for feature reduction and analysis of massive data with strongly correlated features.

    Mining multiple sequential patterns with gap constraints
    WANG Huadong YANG Jie LI Yajuan
    2014, 34(9):  2612-2616.  DOI: 10.11772/j.issn.1001-9081.2014.09.2612

    For given multiple sequences, a threshold and gap constraints, the objective is to discover frequent patterns whose supports in the multiple sequences are no less than the given threshold, where any two successive elements of a pattern fulfill the user-specified gap constraints, and any two occurrences of a pattern in a given sequence meet the one-off condition. To solve this problem, existing algorithms only consider the first occurrence of each character of a pattern when computing the support of a pattern in a given sequence, so many frequent patterns are not mined. An efficient algorithm for mining multiple sequential patterns with gap constraints, named MMSP, was proposed. It first stored the candidate positions of a pattern in a two-dimensional table, then selected positions from the candidates according to a left-most strategy. Experiments were conducted on DNA sequences. The number of frequent patterns mined by MMSP was 3.23 times that mined by the related algorithm M-OneOffMine when the number of sequences was constant and the sequence length changed, and the average number of patterns mined by MMSP was 4.11 times that of M-OneOffMine when the number of sequences changed; the average number of mined patterns by MMSP was also 2.21 and 5.24 times that of M-OneOffMine and MPP respectively as the number of sequences changed, and the frequent patterns mined by M-OneOffMine were a subset of those mined by MMSP. The experimental results show that MMSP can mine more frequent patterns in less time, and it is well suited to practical applications.

    HBase-based distributed storage system for meteorological ground minute data
    CHEN Donghui ZENG Le LIANG Zhongjun XIAO Weiqing
    2014, 34(9):  2617-2621.  DOI: 10.11772/j.issn.1001-9081.2014.09.2617

    Meteorological ground minute data is characterized by diverse elements, large volumes of information and high generation frequency, so traditional relational database systems suffer from server overload and poor read/write performance in its storage and management. Based on a study of the storage model of the distributed database HBase, a database model for meteorological ground minute data was proposed to achieve distributed storage of massive meteorological data and meta-information management, in which the row key was designed as the observation time plus the station number. When processing complex meteorological queries, the response time of HBase's single row-key index is too long; to address this defect and meet retrieval time requirements, the API of the search engine Solr was used, in light of the query cases, to establish a secondary index on the related fields. The experimental results show that this system has high storage and indexing efficiency: the maximum storage rate can reach 34000 records/s, and generic queries return within milliseconds. This method can satisfy the performance requirements of large-scale meteorological data in business applications.
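
    A minimal sketch of the time-plus-station row key design (Python; the exact field widths and formats are assumptions):

```python
from datetime import datetime

def row_key(obs_time, station_id):
    """Observation time followed by station number: rows for the same minute
    cluster together, so time-range scans stay cheap."""
    return f"{obs_time:%Y%m%d%H%M}{station_id}"

print(row_key(datetime(2014, 9, 1, 8, 5), "54511"))   # 20140901080554511
```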

    Improved network security situational assessment method based on FAHP
    LI Fangwei YANG Shaocheng ZHU Jiang
    2014, 34(9):  2622-2626.  DOI: 10.11772/j.issn.1001-9081.2014.09.2622

    To minimize the damage from network security problems, an improved network security situation assessment model based on the Fuzzy Analytic Hierarchy Process (FAHP) was proposed. First, considering future large-scale network environments, an index system in conformity with the actual environment was established, consisting of an index layer, a criterion layer and a decision layer. To address the influence of data distribution uncertainty and fuzziness on situation assessment, the proposed model used the Fuzzy C-Means (FCM) clustering algorithm with the best clustering criterion for data preprocessing to obtain the optimal cluster number and cluster centers. Finally, a multi-factor secondary assessment model was established for the situation assessment vector. The simulation results show that, compared with the existing FAHP-based situation assessment method, the improved method better accounts for factors with small weights, so the standard deviation is smaller and the evaluation results are more objective and accurate.

    Entropy-based evaluation method of weighted network invulnerability
    ZHAO Jingxian
    2014, 34(9):  2627-2629.  DOI: 10.11772/j.issn.1001-9081.2014.09.2627

    In order to study the invulnerability of weighted networks after partial destruction, a standard stability entropy index was proposed that combines the stability of the network topology with the network flow: the contribution degree of the non-overlapping paths between nodes to the flow is calculated and the entropy concept is adopted, with the invulnerability between nodes in a fully-connected network used as a reference. On this basis, a model to evaluate the invulnerability of the whole network was given. The simulation results show that network invulnerability relates not only to the network topology and the sum of all edge weights, but also to the uniformity of the edge weights: the more uniform the critical edge weights are, the stronger the overall invulnerability is.
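
    A minimal sketch of a normalized (standard) stability entropy over the flow contributions of non-overlapping paths (Python/numpy; the normalization by the uniform case is an assumption):

```python
import numpy as np

def standard_stability_entropy(contributions):
    """Normalized entropy of the flow contributions of the non-overlapping
    paths between a node pair; 1.0 (uniform) reads as most invulnerable."""
    p = np.asarray(contributions, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)) / np.log(len(p)))

print(standard_stability_entropy([1, 1, 1, 1]))   # 1.0
print(standard_stability_entropy([4, 1, 1, 1]))   # < 1.0, less uniform
```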

    Matrix obfuscation based on data refinement
    SUN Yongyong HUANG Guangqiu
    2014, 34(9):  2630-2634.  DOI: 10.11772/j.issn.1001-9081.2014.09.2630

    At present, data obfuscation is usually applied to a single concrete data structure. In order to apply the same obfuscation method to different data structures, obfuscation was treated as data refinement over an abstract data type, and general equations were established to prove the correctness of the obfuscation. The meaning of a matrix was concealed by splitting the matrix and altering the pattern of its elements. Based on the operations of this data type, an obfuscation framework for the standard matrix operations was constructed using a functional language. How to use matrices to obfuscate scalars and their arithmetic operations was also described. The correctness of the obfuscation operations was proved mathematically. The results show that the complexity of the obfuscated operations is the same as that of the original operations, indicating that this obfuscation method increases the difficulty of understanding the operations, and is an effective method of data obfuscation.

    New scheme for privacy-preserving in electronic transaction
    YANG Bo LI Shundong
    2014, 34(9):  2635-2638.  DOI: 10.11772/j.issn.1001-9081.2014.09.2635
    Abstract ( )   PDF (625KB) ( )  
    References | Related Articles | Metrics

    To protect users' privacy in electronic transactions, an electronic transaction scheme combining oblivious transfer with the ElGamal signature was proposed, achieving privacy security for both traders. A user chooses digital goods by serial number and pays the bank anonymously and correctly; the bank then sends the user a digital signature on the digital goods, and the user interacts obliviously with the merchant using the signature as proof of payment. The user obtains the key through modular exponentiation, while the merchant cannot tell which digital goods were ordered. Because the serial numbers are concealed and restricted, the user cannot open messages for unselected serial numbers and can obtain exactly the digital goods that were paid for. The correctness proof and security analysis show that the scheme protects the private information of both traders and prevents malicious fraud by the merchant; it features short signatures, a small amount of computation and dynamically changing keys, and its security is strong.
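
    For readers unfamiliar with the signature primitive the scheme builds on, here is a textbook ElGamal signature in toy form. The tiny parameters are for illustration only (not secure), and this is only the building block, not the paper's full oblivious-transfer transaction protocol.

        import random
        from math import gcd

        p, g = 467, 2                     # toy prime and generator
        x = random.randrange(2, p - 1)    # private key
        y = pow(g, x, p)                  # public key

        def sign(h):                      # h: message hash, 0 < h < p-1
            while True:
                k = random.randrange(2, p - 1)
                if gcd(k, p - 1) == 1:
                    break
            r = pow(g, k, p)
            s = (h - x * r) * pow(k, -1, p - 1) % (p - 1)
            return r, s

        def verify(h, r, s):              # check g^h == y^r * r^s (mod p)
            return pow(g, h, p) == pow(y, r, p) * pow(r, s, p) % p

        r, s = sign(123)
        assert verify(123, r, s)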

    Sensitive information detection approach for documents based on document smoothing and query expansion
    SU Yingbin DU Xuehui XIA Chuntao LI Haihua
    2014, 34(9):  2639-2644.  DOI: 10.11772/j.issn.1001-9081.2014.09.2639
    Abstract ( )   PDF (925KB) ( )  
    References | Related Articles | Metrics

    Detecting sensitive information in terminal documents has become extremely important due to the potential risk of sensitive information leakage. To resolve the problems of imprecise document models caused by context-free indexing and of inadequate semantic expansion, firstly, a context-sensitive document smoothing algorithm was proposed to build the document index, which retains much more of the document's information; secondly, by incorporating the sensitivity of concepts in the domain ontology, query expansion was improved to widen the detection range of sensitive information; finally, document smoothing and query expansion were integrated into a language model, and a sensitive information detection approach based on this language model was proposed. In comparative experiments with four approaches using different index mechanisms, query expansion algorithms and detection models, the recall, precision and F-measure of the proposed approach were 0.798, 0.786 and 0.792 respectively, clearly better on every indicator than the compared algorithms. The experimental results show that the proposed approach is more effective.
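
    To make the language-model detection concrete, the sketch below scores a document against a query with Dirichlet-prior smoothing, a standard language-model baseline. It stands in for the paper's context-sensitive smoothing, which the abstract does not specify; the unseen-word floor is likewise an assumption.

        import math
        from collections import Counter

        def query_likelihood(query_terms, doc_terms, collection_terms, mu=2000):
            """Dirichlet-smoothed query likelihood:
            p(w|d) = (c(w,d) + mu * p(w|C)) / (|d| + mu)."""
            doc, coll = Counter(doc_terms), Counter(collection_terms)
            n_coll = sum(coll.values())
            score = 0.0
            for w in query_terms:
                p_c = coll[w] / n_coll if coll[w] else 1.0 / n_coll  # unseen floor
                score += math.log((doc[w] + mu * p_c) / (len(doc_terms) + mu))
            return score

        doc = "the report contains the confidential budget figures".split()
        coll = "the a of and report budget public figures data confidential memo".split()
        print(query_likelihood(["confidential", "budget"], doc, coll))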

    Information hiding technology based on digital screening and its application in covert secrecy communication
    GUO Wei LIU Huiping
    2014, 34(9):  2645-2649.  DOI: 10.11772/j.issn.1001-9081.2014.09.2645
    Abstract ( )   PDF (826KB) ( )  
    References | Related Articles | Metrics

    To address the confidentiality and capacity problems of modern networked information communication, an information hiding method based on digital screening was proposed, in which information is embedded into digital text documents for secure communication. In this method, the watermark information was hidden in background shading composed of screen dots and fused with the shading and a stochastic Frequency Modulation (FM) screen dot image; the information-bearing background shading was then added to the text document as a regular page element. The analysis and experimental results indicate that the proposed method has a large information capacity, embedding up to 72000 Chinese characters in a single A4 document page, and offers good visual quality, strong concealment, a high security level and small file size, so it can be widely used in modern secure network communication.
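
    One common textbook way to hide bits in screened shading, used here purely as a hypothetical illustration of the idea (the paper's actual FM screening construction is more elaborate), is to choose between two dot patterns of equal ink coverage per shading cell:

        import numpy as np

        # Two 2x2 patterns with equal coverage look identical as shading but
        # encode one bit per cell through the dot position. Toy scheme only.
        P0 = np.array([[1, 0], [0, 0]], dtype=np.uint8)   # bit 0
        P1 = np.array([[0, 0], [0, 1]], dtype=np.uint8)   # bit 1

        def embed(bits, cells_per_row):
            rows = [np.hstack([P1 if b else P0 for b in bits[i:i + cells_per_row]])
                    for i in range(0, len(bits), cells_per_row)]
            return np.vstack(rows)

        def extract(shade):
            h, w = shade.shape
            cells = (shade.reshape(h // 2, 2, w // 2, 2).swapaxes(1, 2)
                          .reshape(-1, 2, 2))
            return [int(c[1, 1] == 1) for c in cells]

        bits = [1, 0, 1, 1, 0, 0, 1, 0]
        assert extract(embed(bits, 4)) == bits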

    Improvement on chosen-prefix collisions for MD5 and complexity analysis
    CHENG Kuan HAN Wenbao
    2014, 34(9):  2650-2655.  DOI: 10.11772/j.issn.1001-9081.2014.09.2650
    Abstract ( )   PDF (962KB) ( )  
    References | Related Articles | Metrics

    In view of the unbalanced distribution of the complexity of chosen-prefix collisions under practical requirements, an improved chosen-prefix collision algorithm for MD5 was designed. Using the Non-Adjacent Form (NAF), the probability governing the complexity of the birthday search was derived under certain conditions, and the relation between the balance parameter and the complexity of the birthday search was established. On this basis, the improved algorithm was designed by changing the form of the birthday collision through the introduction of new message block differences. Under practical parameter requirements, the complexity of the improved algorithm is reduced by one bit on average. The results show that, compared with the original algorithm, the improved algorithm removes the imbalance in the complexity distribution, reduces the complexity, and is better suited to practical applications.
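
    The Non-Adjacent Form mentioned above is the standard signed-digit representation in which no two adjacent digits are non-zero; a minimal sketch of its computation (a well-known algorithm, independent of the paper's derivation):

        def naf(n):
            """Non-Adjacent Form of n: digits in {-1, 0, 1}, least significant
            first, with no two adjacent non-zero digits."""
            digits = []
            while n > 0:
                if n & 1:
                    d = 2 - (n % 4)    # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
                    n -= d
                else:
                    d = 0
                digits.append(d)
                n //= 2
            return digits

        assert naf(7) == [-1, 0, 0, 1]                     # 7 = 8 - 1
        assert sum(d * 2 ** i for i, d in enumerate(naf(12345))) == 12345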

    Cryptanalysis of an image encryption algorithm based on improved ergodic matrix and pixel value diffusion
    YANG Jiyun TIAN Weixing ZHOU Fagui
    2014, 34(9):  2656-2658.  DOI: 10.11772/j.issn.1001-9081.2014.09.2656
    Abstract ( )   PDF (500KB) ( )  
    References | Related Articles | Metrics

    Recently, an image encryption algorithm based on an improved ergodic matrix and pixel value diffusion was proposed, in which an ergodic matrix constructed from the Logistic chaotic map is used to iteratively permute the spatial image, after which pixel value diffusion is performed with a new chaotic sequence. Analysis of this algorithm revealed a security hole, so a chosen/known-plaintext attack was put forward to reveal the secret key: by choosing some special plaintext images and obtaining the corresponding ciphertext images, a ciphertext image of the same size can be recovered without the secret key. The simulation results illustrate the effectiveness of the proposed attack.
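
    For context, the Logistic map that drives such ciphers generates a key-dependent chaotic sequence, which is exactly what the attack recovers equivalents of. The byte quantization rule below is illustrative, not the attacked cipher's:

        def logistic_sequence(x0, mu=3.99, n=16, burn_in=100):
            """Iterate x <- mu*x*(1-x) and quantize each state to a byte,
            the kind of key-dependent sequence used for permutation and
            diffusion in chaos-based image ciphers."""
            x = x0
            for _ in range(burn_in):           # discard the transient
                x = mu * x * (1 - x)
            out = []
            for _ in range(n):
                x = mu * x * (1 - x)
                out.append(int(x * 256) % 256)
            return out

        print(logistic_sequence(0.3141592653))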

    Certificateless signcryption with online/offline technique
    ZHAO Jingjing ZHAO Xuexia SHI Yuerong
    2014, 34(9):  2659-2663.  DOI: 10.11772/j.issn.1001-9081.2014.09.2659
    Abstract ( )   PDF (759KB) ( )  
    References | Related Articles | Metrics

    Signcryption is a cryptographic primitive that combines the authentication of a signature with the confidentiality of encryption in a single operation, and the online/offline technique gives signcryption higher efficiency. However, most existing online/offline signcryption schemes are built in the identity-based setting, which suffers from the key escrow problem. Exploiting the advantages of certificateless cryptography, which removes certificate management and avoids key escrow, a secure online/offline certificateless signcryption scheme was proposed. The scheme does not need to know the recipient's information in the offline stage, and its security was proved in the Random Oracle Model (ROM).

    Efficient and provably-secure certificate-based aggregate signature scheme
    LIU Yunfang ZUO Weiping
    2014, 34(9):  2664-2667.  DOI: 10.11772/j.issn.1001-9081.2014.09.2664
    Abstract ( )   PDF (629KB) ( )  
    References | Related Articles | Metrics

    Aggregate signatures are useful in settings where signatures on many different messages generated by many different users need to be aggregated. Since the existing certificate-based aggregate signature schemes are not sufficiently efficient, an efficient certificate-based aggregate signature scheme from bilinear pairings was proposed. In the random oracle model, the scheme was proved existentially unforgeable against adaptive chosen message and identity attacks, with security reducible to the Computational Diffie-Hellman (CDH) assumption. The analysis shows that the scheme needs only a constant number of pairing computations, three in total, and is therefore efficient.

    Low-rank optimization characteristic dictionary training approach with category constraint
    LYV Xuan LIU Yushu DING Hongfu LI Aidi
    2014, 34(9):  2668-2672.  DOI: 10.11772/j.issn.1001-9081.2014.09.2668
    Abstract ( )   PDF (869KB) ( )  
    References | Related Articles | Metrics

    Bag Of Words (BOW) is a classical approach to image description, and the way the characteristic dictionary is constructed in this model is crucial. A category-constrained low-rank optimization dictionary training approach named LRC-DT was proposed for dictionary construction. Through low-rank optimization, the rank of the coding-coefficient matrix of images from the same category is minimized, introducing the classification information into dictionary learning and improving the discriminability of the dictionary for image description. Experiments were conducted on two standard image databases, Caltech-101 and Caltech-256, replacing the dictionaries of SPM (Spatial Pyramid Matching), ScSPM (Sparse codes SPM), LLC (Locality-constrained Linear Coding) and LSPM (Linear SPM) with the category-constrained low-rank optimized dictionary. The experimental results show that the proposed method consistently outperforms the versions without the category-constrained low-rank optimization, and its classification accuracy improves as the number of training samples increases.
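
    The standard workhorse behind rank minimization of a coefficient matrix is singular value thresholding, the proximal operator of the nuclear norm. LRC-DT's exact formulation is not given in the abstract, so the sketch below only illustrates this building block:

        import numpy as np

        def svt(X, tau):
            """Singular Value Thresholding: shrink every singular value of X
            by tau, which drives the rank of X down (nuclear-norm prox)."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s = np.maximum(s - tau, 0.0)
            return (U * s) @ Vt

        rng = np.random.default_rng(0)
        X = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 30))  # rank 5
        X_noisy = X + 0.1 * rng.standard_normal(X.shape)
        print(np.linalg.matrix_rank(svt(X_noisy, tau=2.0)))  # close to 5 again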

    Fast panorama stitching algorithm adapted for mobile devices
    DAI Huayang RAN Feipeng
    2014, 34(9):  2673-2677.  DOI: 10.11772/j.issn.1001-9081.2014.09.2673
    Abstract ( )   PDF (823KB) ( )  
    References | Related Articles | Metrics

    A new panorama generation algorithm for mobile devices was proposed to address low stitching speed, high memory consumption, chromatic aberration and ghosting. First, color correction was performed on the source image sequence to balance color and luminance between adjacent images. Then ghosting artifacts were detected during stitching; when one was found, the corresponding object was located in the source image and removed by a gradient-domain object removal and region filling operation. In addition, Poisson blending was used to further smooth color transitions and hide visible seams; after color correction the time needed for Poisson blending drops greatly, and a dedicated memory allocation mechanism during stitching further reduces memory consumption. Finally, the method was tested on a mobile phone with a 332MHz processor and 128MB of memory, taking 1280×720 photos under different illumination conditions and stitching 2 to 9 source images: the memory consumption of the traditional global panorama stitching algorithm ranged from 12.3MB to 23.6MB, while the proposed method used only 9.9MB to 14.5MB. The experimental results show that the method eliminates image seams and ghosting more thoroughly, with high stitching speed, low memory consumption and better panorama quality, so it is suitable for panorama generation on mobile devices.
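
    The paper does not detail its color correction; one simple, commonly used form of the idea, shown here as an assumption, is per-channel gain compensation that matches the mean of the overlap region between adjacent images:

        import numpy as np

        def gain_compensate(img_a, img_b, overlap_a, overlap_b):
            """Scale img_b channel-wise so the mean of its overlap region
            matches img_a's; overlap_a/overlap_b are boolean masks."""
            gains = (img_a[overlap_a].reshape(-1, 3).mean(axis=0) /
                     (img_b[overlap_b].reshape(-1, 3).mean(axis=0) + 1e-6))
            out = np.clip(img_b.astype(np.float32) * gains, 0, 255)
            return out.round().astype(np.uint8)

        a = np.full((4, 4, 3), 120, np.uint8)
        b = np.full((4, 4, 3), 60, np.uint8)
        mask = np.ones((4, 4), bool)
        print(gain_compensate(a, b, mask, mask)[0, 0])   # -> [120 120 120]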

    Application of scale invariant feature transform descriptor based on rotation invariant feature in image registration
    WANG Shuai SUN Wei JIANG Shuming LIU Xiaohui PENG Peng
    2014, 34(9):  2678-2682.  DOI: 10.11772/j.issn.1001-9081.2014.09.2678
    Abstract ( )   PDF (828KB) ( )  
    References | Related Articles | Metrics

    To address the reduced matching speed of the Scale Invariant Feature Transform (SIFT) algorithm caused by the high dimension of its descriptor, an improved SIFT algorithm was proposed. With the feature point as the center, a rotation-invariant circular structure was used to build the descriptor over a circular neighborhood of approximately the original size, divided into several concentric sub-rings. Under image rotation the set of pixels within each sub-ring remains essentially constant; only their positions change. The accumulated gradient values within each ring were therefore sorted to generate the descriptor, which stays stable when the image rotates, and the dimension of the descriptor was reduced from 128 to 48, lowering the complexity of the algorithm. The experimental results show that the improved algorithm raises the repetition rate of rotated registration to more than 85%; compared with the original SIFT algorithm, the average matching rate increases by 5% and the average registration time decreases by about 30% under image rotation, zoom and illumination changes. The improved SIFT algorithm is effective.
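
    A toy sketch of why ring accumulation is rotation invariant (the paper's per-ring sorting and exact ring layout are omitted; this only demonstrates the mechanism):

        import numpy as np

        def ring_descriptor(patch, n_rings=8):
            """Accumulate gradient magnitude inside concentric rings around
            the patch center; the ring sums do not change when the patch
            rotates, since each ring maps onto itself."""
            gy, gx = np.gradient(patch.astype(float))
            mag = np.hypot(gx, gy)
            h, w = patch.shape
            yy, xx = np.mgrid[0:h, 0:w]
            r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
            edges = np.linspace(0, r.max() + 1e-6, n_rings + 1)
            desc = np.array([mag[(r >= lo) & (r < hi)].sum()
                             for lo, hi in zip(edges[:-1], edges[1:])])
            return desc / (np.linalg.norm(desc) + 1e-12)

        patch = np.random.default_rng(0).random((33, 33))
        d1 = ring_descriptor(patch)
        d2 = ring_descriptor(np.rot90(patch))   # 90-degree rotation
        print(np.abs(d1 - d2).max())            # ~0: rings are preserved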

    Layer depth determination and projection transformation method oriented to tile-pyramid
    LI Jianxun GUO Lianli LI Yang SUN Xiao
    2014, 34(9):  2683-2686.  DOI: 10.11772/j.issn.1001-9081.2014.09.2683
    Abstract ( )   PDF (872KB) ( )  
    References | Related Articles | Metrics

    In order to improve the transformation efficiency of tile-pyramid images, a 15-parameter projection transformation was established from a quartic polynomial based on the view model of the digital earth. The factors influencing the choice of tile size were analyzed theoretically, and an optimization method to determine the size and depth of the tile pyramid was given. To test the algorithm, a basic digital earth environment, BDE2, was built with JOGL. The analysis and experimental results show that a tile pyramid at 10m pixel accuracy built by this algorithm has only 10 layers and an average error below 5×10^-5; the algorithm also has low complexity, seamless stitching, high definition and low distortion, effectively avoiding stitch cracks and feature distortion in the transformed image.
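
    The count of 15 parameters matches a bivariate polynomial of degree four, which has exactly 15 monomials x^i*y^j with i + j <= 4. Assuming that reading (the abstract does not spell out the parameterization), the transformation can be fitted by least squares:

        import numpy as np

        def quartic_design(x, y):
            """All 15 monomials x^i * y^j with i + j <= 4, hence the
            '15-parameter' quartic projection transformation."""
            return np.column_stack([x**i * y**j
                                    for i in range(5) for j in range(5 - i)])

        def fit_projection(x, y, u):
            """Least-squares fit of u ~ quartic(x, y); returns 15 coefficients."""
            coeffs, *_ = np.linalg.lstsq(quartic_design(x, y), u, rcond=None)
            return coeffs

        rng = np.random.default_rng(0)
        x, y = rng.random(200), rng.random(200)
        u = 1 + 2*x - y + 0.5*x*y**3               # itself quartic: exact fit
        resid = quartic_design(x, y) @ fit_projection(x, y, u) - u
        print(np.abs(resid).max())                  # ~0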

    Homomorphic compensation of recaptured image detection based on direction prediction
    XIE Zhe WANG Rangding YAN Diqun LIU Huacheng
    2014, 34(9):  2687-2690.  DOI: 10.11772/j.issn.1001-9081.2014.09.2687
    Abstract ( )   PDF (769KB) ( )  
    References | Related Articles | Metrics

    To resist recaptured-image attacks on face recognition systems, an algorithm based on predicting the gradient direction of face images was proposed. The contrast between real and recaptured images was enhanced by adaptive Gaussian homomorphic illumination compensation, and a Support Vector Machine (SVM) classifier was trained and tested on features obtained by convolving the images with 8-direction Sobel operators. On 522 live and recaptured faces taken from domestic and foreign face databases, including the NUAA Imposter Database and the Yale Face Database, the detection rate reached 99.51%; on a self-built library of 522 samples, obtained by taking 261 live face photos with a Samsung Galaxy Nexus phone and recapturing them, the detection rate was 98.08% and feature extraction took 167.04s. The results show that the proposed algorithm can distinguish live from recaptured faces with high extraction efficiency.

    Quasi-periodic background algorithm for suppressing swinging objects
    HE Feiyue LI Jiatian XU Heng ZHANG Lan XU Yanzhu WANG Hongmei
    2014, 34(9):  2691-2696.  DOI: 10.11772/j.issn.1001-9081.2014.09.2691
    Abstract ( )   PDF (1023KB) ( )  
    References | Related Articles | Metrics

    An accurate background model is the fundamental basis for object extraction and tracking. For swinging objects whose parts change quasi-periodically in complex scenes, a new Quasi-Periodic Background Algorithm (QPBA) based on the multi-Gaussian background model was proposed to suppress swinging objects and establish an accurate, stable background model. The specific process is as follows: according to the multi-Gaussian background model, the objects in the scene were classified and the effect of swinging objects on the Gaussian model parameters was analyzed; color distribution values were then used as samples to build a Gaussian model for swinging pixels, and the swing model of those pixels was integrated into the background model with weight factors based on occurrence frequency and time interval. QPBA was compared with classical background modeling algorithms such as GMM (Gaussian Mixture Model), ViBe (Visual Background extractor) and CodeBook in terms of quality, quantity and efficiency. The results show that QPBA suppresses swinging objects more distinctly, with a fall-out (false positive) ratio below 1%, so it can handle scenes with swinging objects; its correct detection count is consistent with the other algorithms, so moving objects are preserved intact; and its efficiency is high, with a processing time close to CodeBook's, satisfying real-time requirements.

    Compressive fusion for remote sensing images in Contourlet transform domain
    YANG Senlin GAO Jinghuai WAN Guobing
    2014, 34(9):  2697-2701.  DOI: 10.11772/j.issn.1001-9081.2014.09.2697
    Abstract ( )   PDF (984KB) ( )  
    References | Related Articles | Metrics

    Since Block-based Compressed Sensing (BCS) samples in the spatial domain and thus ignores the global features of an image, image fusion based on conventional BCS sampling suffers from reduced quality and blocking artifacts in reconstruction. A compressive fusion method for remote sensing images based on Contourlet-Transform Block-based Compressed Sensing (CTBCS) was therefore proposed, together with a detailed implementation algorithm. Firstly, the input images were sparsely represented by the Contourlet Transform (CT) and CTBCS sampling was performed in the CT domain; secondly, the compressive samples were fused by linear weighting; finally, the fused image was reconstructed by an Iterative Thresholding Projection (ITP) algorithm that accounts for blocking artifacts. In simulation, BCS and CTBCS were each used for compressive sampling followed by ITP reconstruction. The results show that CTBCS sampling, which takes global characteristics into account, converges faster, costs less computation and reconstructs more accurately than BCS, with a correspondingly higher Peak Signal-to-Noise Ratio (PSNR) of the recovered image. Tests on real data indicate that compressive fusion based on CTBCS outperforms that based on BCS and, with a very small number of samples, achieves results comparable to fusion by the conventional CT method, so the proposed method effectively implements compressive fusion for remote sensing images with large data volumes.
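
    A minimal sketch of the block-based sampling step the method builds on: every B×B block is measured with the same random Gaussian matrix, y = Phi · vec(block). The block size, sampling ratio and Gaussian Phi are conventional choices, not taken from the paper, and the ITP reconstruction is omitted.

        import numpy as np

        def bcs_sample(image, B=16, ratio=0.25, seed=0):
            rng = np.random.default_rng(seed)
            m = int(ratio * B * B)
            phi = rng.standard_normal((m, B * B)) / np.sqrt(m)  # measurement matrix
            h, w = image.shape
            blocks = (image[i:i + B, j:j + B].reshape(-1)
                      for i in range(0, h, B) for j in range(0, w, B))
            return phi, np.stack([phi @ b for b in blocks])     # one y per block

        phi, y = bcs_sample(np.random.default_rng(1).random((64, 64)))
        print(y.shape)   # (16, 64): 16 blocks, 64 measurements each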

    Real-time image/video haze removal algorithm with color restoration
    DIAO Yangjie ZHANG Hongying WU Yadong CHEN Meng
    2014, 34(9):  2702-2707.  DOI: 10.11772/j.issn.1001-9081.2014.09.2702
    Abstract ( )   PDF (1045KB) ( )  
    References | Related Articles | Metrics

    To overcome defects of existing algorithms such as poor real-time performance, poor results in sky regions and overly dark dehazed images, a real-time image haze removal algorithm was proposed. Firstly, the dark channel prior was used to estimate a rough transmission map. Secondly, optimized guided filtering was applied to refine the down-sampled rough transmission map, enabling real-time processing of higher-resolution images. Thirdly, the refined transmission map was up-sampled and corrected to obtain the final transmission map, overcoming the poor results in sky regions. Finally, the clear image was obtained by adaptive brightness adjustment with color restoration. The complexity of the algorithm is a linear function of the number of input image pixels, giving a very fast implementation: for a 600×400 image, the processing time is 80ms.
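
    The dark channel prior step can be sketched directly (following He et al.'s classic formulation t = 1 - omega · dark_channel(I/A); SciPy is assumed available, and the airlight estimate below is a crude stand-in):

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dark_channel(img, patch=15):
            """Per-pixel minimum over RGB, then a local minimum filter;
            haze-free regions have values near zero."""
            return minimum_filter(img.min(axis=2), size=patch)

        def rough_transmission(img, airlight, omega=0.95, patch=15):
            """Rough transmission map t = 1 - omega * dark_channel(I / A)."""
            return 1.0 - omega * dark_channel(img / airlight, patch)

        img = np.random.default_rng(0).random((40, 40, 3))
        A = img.reshape(-1, 3).max(axis=0)       # crude airlight estimate
        t = rough_transmission(img, A)
        print(round(t.min(), 3), round(t.max(), 3))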

    Bayesian blind deblurring based on Gauss-Markov random field
    ZHOU Luoyu ZHANG Zhengbing
    2014, 34(9):  2708-2710.  DOI: 10.11772/j.issn.1001-9081.2014.09.2708
    Abstract ( )   PDF (660KB) ( )  
    References | Related Articles | Metrics

    A Bayesian blind deblurring algorithm was proposed to resolve the conflict between restoring image details and amplifying blocking artifacts. Within the Bayesian framework, prior models were established for the original image, the observed image, the point spread function and the model parameters; a Gauss-Markov random field model, which effectively describes the local statistical features of an image, was introduced as the prior of the original image, and the iterative update formulas for the original image and the point spread function were derived from Bayes' formula. The experimental results show that the image restored by the proposed algorithm has fewer blocking artifacts and better visual quality than one restored with a Total Variation (TV) prior; whether or not the size of the point spread function is known, the proposed algorithm raises the Improved Signal-to-Noise Ratio (ISNR) of the restored image by about 1dB compared with the TV prior model.

    Brain tumor segmentation based on morphological multi-scale modification and fuzzy C-means clustering
    LIU Yue WANG Xiaopeng YU Hui ZHANG Wen
    2014, 34(9):  2711-2715.  DOI: 10.11772/j.issn.1001-9081.2014.09.2711
    Abstract ( )   PDF (856KB) ( )  
    References | Related Articles | Metrics

    Tumors in brain Magnetic Resonance Imaging (MRI) images are often difficult to segment accurately due to noise, gray-level inhomogeneity, complex structures, and fuzzy, discontinuous boundaries. To obtain precise segmentation with less position bias, a new method based on Fuzzy C-Means (FCM) clustering and morphological multi-scale modification was proposed. Firstly, a control parameter was introduced to distinguish noise points, edge points and region-interior points in a neighborhood, and, combined with spatial information, a functional relationship between pixels and structuring element sizes was established. Then, different pixels were modified by morphological closing with structuring elements of different sizes, removing most local minima caused by irregular details and noise while leaving the contour positions of the target region largely unchanged. Finally, the FCM clustering algorithm was applied to the multi-scale-modified image, avoiding local optima, misclassification and contour position bias while keeping the contours accurately located. Compared with standard FCM, Kernel FCM (KFCM), Genetic FCM (GFCM), Fuzzy Local Information C-Means (FLICM) and expert hand sketches, the experimental results show that the proposed method achieves more accurate segmentation, with less over- and under-segmentation and a higher similarity index with respect to the reference segmentation.

    Fast topological reconstruction algorithm for an STL file
    WANG Zengbo
    2014, 34(9):  2720-2724.  DOI: 10.11772/j.issn.1001-9081.2014.09.2716
    Abstract ( )   PDF (808KB) ( )  
    References | Related Articles | Metrics

    Because STL (Stereolithographic) files lack the necessary topological relations between graphic elements, a fast reconstruction algorithm was proposed: the STL file is analyzed and read, and a hash table is used as the lookup structure to quickly create the topological relations among the elements of the three-dimensional model. Using the hash table, the algorithm builds a point table and a face table for the elements and realizes the topological reconstruction. The time complexity of the algorithm is O(n), and its space complexity is O(3n+(4+m)f+m). Finally, the algorithm was compared with a direct algorithm and a red-black tree algorithm on five examples; the results show that the proposed algorithm costs less time, reconstructing a model with 650 thousand triangular facets within 2.3 seconds on a PC.
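
    The hash-table idea can be sketched in a few lines: deduplicate STL vertices with a dict so each face stores vertex indices, recovering the shared-vertex topology in O(n). The rounding tolerance for floating-point duplicates is an assumption, not taken from the paper.

        def build_topology(triangles, ndigits=6):
            """triangles: list of 3 (x, y, z) tuples per facet, as read
            from an STL file. Returns the point table and the face table."""
            vertex_index = {}        # hash table: rounded vertex -> index
            points, faces = [], []
            for tri in triangles:
                face = []
                for v in tri:
                    key = tuple(round(c, ndigits) for c in v)
                    if key not in vertex_index:
                        vertex_index[key] = len(points)
                        points.append(v)
                    face.append(vertex_index[key])
                faces.append(tuple(face))
            return points, faces

        tris = [[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                [(1, 0, 0), (1, 1, 0), (0, 1, 0)]]   # two facets share an edge
        points, faces = build_topology(tris)
        print(len(points), faces)                    # 4 vertices, shared indices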

    Virtual-real registration method based on improved ORB algorithm
    ZHAO Jian HAN Bin ZHANG Qiliang
    2014, 34(9):  2725-2729.  DOI: 10.11772/j.issn.1001-9081.2014.09.2720
    Abstract ( )   PDF (851KB) ( )  
    References | Related Articles | Metrics

    Aiming at the problem that virtual-real registration accuracy and real-time performance in Augmented Reality (AR) are affected by image texture and uneven illumination, a method based on an improved ORB (Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features)) algorithm was proposed. The method first thins dense regions of feature points by setting thresholds on their number and spacing, using a parallel algorithm to keep the N points with the largest eigenvalues; it then adopts a discrete difference feature to improve stability under uneven illumination changes and combines the improved ORB with a Bag-of-Features (BOF) model for fast retrieval of the benchmark image; finally, virtual-real registration is realized through the homography between images. Comparative experiments with the original ORB, Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) algorithms in terms of accuracy and efficiency show that the proposed method reduces registration time to about 40% while reaching accuracy above 95%. The experimental results show that the method achieves better real-time performance and higher accuracy under different textures and uneven illumination.

    Static register reallocation approach for soft error reduction of register files
    YAN Guochang HE Yanxiang LI Qingan
    2014, 34(9):  2730-2733.  DOI: 10.11772/j.issn.1001-9081.2014.09.2725
    Abstract ( )   PDF (787KB) ( )  
    References | Related Articles | Metrics

    Because the Register Swapping (RS) method does not consider the effect of register allocation on reducing soft errors in register files, a static register reallocation approach was proposed that takes live variables' effect on soft errors into account. The approach introduces a weight for each live variable to evaluate its impact on the soft error rate of the register file, and applies two rules to reallocate live variables after the register swapping phase, further reducing soft errors at the level of live variables. The experiments and analysis show that the approach reduces soft errors by a further 30% beyond the RS method, enhancing register reliability.

    Backward recovery of transient faults in multi-cross channel model
    MA Manfu YAO Jun ZHANG Qiang JIA Yongxin
    2014, 34(9):  2734-2737.  DOI: 10.11772/j.issn.1001-9081.2014.09.2734
    Abstract ( )   PDF (770KB) ( )  
    References | Related Articles | Metrics

    In the research and application of the multi-cross channel model, maximizing the fault recovery of each individual channel is the basis for correct voting. Within a task period there is some time redundancy: for a given processing step, the unused redundancy time of the steps before voting can be accumulated and, assuming the subsequent steps are fault-free, granted to the current step, yielding a larger time window for deeper fault recovery. Based on this idea, a dynamic time series for the multi-cross channel model was proposed and analyzed for deep recovery, and a backward recovery algorithm was given that grants more time to the faulty unit so that transient faults can be eliminated as far as possible; a monitoring logic was also put forward to support the recovery algorithm. Theoretical analysis and experiments show that the backward recovery algorithm effectively enhances the recovery rate and reduces the number of steps dropping out: compared with static recovery, the recovery rate increased by 47.49% and 72.35%, and the number of out-of-step occurrences decreased by 58% and 85%, under 4-channel and 6-channel conditions respectively, which boosts the reliability of the multi-cross channel model, especially when the number of voting steps is large.
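
    A hypothetical toy sketch of the core idea (not the paper's algorithm): unused slack from earlier steps is carried forward as a time budget that a faulty step may spend on re-execution before the channel drops out of step.

        import random

        def run_channel(steps, budgets, run_step):
            """steps: step ids; budgets: nominal time per step; run_step(s)
            returns (ok, time_used). True if every step eventually succeeds."""
            slack = 0.0
            for s, budget in zip(steps, budgets):
                remaining = budget + slack        # slack carried forward
                ok, used = run_step(s)
                remaining -= used
                while not ok and remaining > 0:   # deep recovery: retry on fault
                    ok, used = run_step(s)
                    remaining -= used
                if not ok:
                    return False                  # channel falls out of step
                slack = max(remaining, 0.0)       # pass leftover time forward
            return True

        random.seed(42)
        flaky = lambda s: (random.random() > 0.3, 1.0)  # 30% transient faults
        print(run_channel(range(10), [1.5] * 10, flaky))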

    On-line debugging technique for new programmable logic controller system based on field programmable gate array
    LUO Kui YAN Yi
    2014, 34(9):  2738-2741.  DOI: 10.11772/j.issn.1001-9081.2014.09.2738
    Abstract ( )   PDF (640KB) ( )  
    References | Related Articles | Metrics

    In order to realize online monitoring of the new Programmable Logic Controller based on Field Programmable Gate Array (FPGA-based PLC), an approach that employs FPGA technology for debugging an embedded SoC (System on Chip) was introduced. A monitoring system composed primarily of a ModBus module and a Double RAM (DRAM) module was designed on the FPGA chip to improve data communication efficiency between the target SoC and the monitoring terminal. The monitoring system realizes ModBus-RTU communication on top of a UART and transmits status data to the PC through the serial port; the DRAM module is shared by the target SoC and the PC, with data exchange realized through the interrupt mechanism. With the proposed approach, the communication time the target CPU spends handling monitoring procedures is reduced to 0.002%, which ensures real-time transmission of monitoring data and improves the control performance of the SoC. The feasibility of the scheme was verified on an Altera FPGA chip.
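
    ModBus-RTU frames carry a CRC-16 checksum, so a host-side sketch of frame construction (a standard, well-documented algorithm; the request values below are arbitrary examples, not from the paper) looks like this:

        def crc16_modbus(frame: bytes) -> bytes:
            """CRC-16/MODBUS (reflected poly 0xA001, init 0xFFFF), appended
            low byte first, as required by ModBus-RTU frames."""
            crc = 0xFFFF
            for byte in frame:
                crc ^= byte
                for _ in range(8):
                    crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
            return crc.to_bytes(2, "little")

        # Read-holding-registers request: slave 1, function 3, addr 0, count 2.
        pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
        print((pdu + crc16_modbus(pdu)).hex())   # trailing two bytes are the CRC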

    Optimization of CRC packet ALOHA algorithm for ultra-high frequency RFID system
    ZHANG Xiaohong LU Juan
    2014, 34(9):  2742-2746.  DOI: 10.11772/j.issn.1001-9081.2014.09.2742
    Abstract ( )   PDF (772KB) ( )  
    References | Related Articles | Metrics

    Tag collision in a Radio Frequency Identification (RFID) system increases time overhead and energy consumption and reduces recognition speed; as the number of tags grows, collisions become more frequent and system performance drops sharply. To solve the multi-tag collision problem, an optimized anti-collision algorithm based on tag grouping was proposed by analyzing the frame-slotted ALOHA algorithm. Tags are divided into groups by the Cyclic Redundancy Check (CRC) code they carry; the reader records the group numbers and identifies each group in sequence, so that fewer tags respond to a reader command simultaneously. For the problem of timeslot-selection conflicts during identification, a chaotic system was used to generate uniformly distributed pseudorandom numbers, helping tags in the identification state choose timeslots more uniformly within a frame and thereby reducing the collision frequency. In comparative experiments with the traditional algorithm, the optimized algorithm needed fewer commands for the same number of tags, with the command count approximately linear in the tag count; its tag identification speed improvement was stable at 50% when the number of tags was below 256 and rose to 80% above 256. Theoretical analysis and simulation results indicate that the optimized algorithm identifies tags faster, and its advantage grows with the number of tags.
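
    A minimal sketch of the grouping step: tags are bucketed by a few bits of the CRC-16 they already carry, so each frame-slotted ALOHA round only addresses one group. The number of group bits (2, giving 4 groups) and the toy tag IDs are assumptions for illustration.

        from collections import defaultdict

        def crc16(data: bytes, poly=0xA001, init=0xFFFF) -> int:
            crc = init
            for byte in data:
                crc ^= byte
                for _ in range(8):
                    crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
            return crc

        def group_tags(tag_ids, group_bits=2):
            groups = defaultdict(list)
            for tid in tag_ids:
                groups[crc16(tid) & ((1 << group_bits) - 1)].append(tid)
            return groups      # reader identifies group 0, then group 1, ...

        tags = [bytes([i, i ^ 0x5A]) for i in range(16)]
        for g, members in sorted(group_tags(tags).items()):
            print(g, len(members))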

    Multi-Agent system with information fusion for smart home
    WANG Liangzhou YU Weihong HUANG Guangchao
    2014, 34(9):  2747-2751.  DOI: 10.11772/j.issn.1001-9081.2014.09.2747
    Abstract ( )   PDF (812KB) ( )  
    References | Related Articles | Metrics

    A smart and green home is a dynamic large-scale system with high complexity and a huge amount of information. In order to improve coordination between subsystems and make the best use of multi-source information in the smart home, a multi-Agent intelligent home system based on multi-source information fusion was designed. The framework and interaction mechanisms of the Agents were introduced, and a multi-source information fusion model based on the Adaptive Network-based Fuzzy Inference System (ANFIS) was put forward to perform feature extraction and learn the occupant's personal behavior. A simulation platform, using lightweight embedded JADE Agents on Android together with Matlab on a personal computer, was developed to control the natural lighting system of the smart home. The theoretical analysis and simulation results show that the model improves the synergistic interaction of home subsystems and enhances the efficiency of multi-source information fusion in the decision-making process.

    Design of positioning and attitude data acquisition system for geostress monitoring
    GU Jingbo GUAN Guixia ZHAO Haimeng TAN Xiang YAN Lei WANG Wenxiang
    2014, 34(9):  2752-2756.  DOI: 10.11772/j.issn.1001-9081.2014.09.2752
    Abstract ( )   PDF (944KB) ( )  
    References | Related Articles | Metrics

    Aiming at the problems of efficient data acquisition, real-time precise positioning and attitude measurement in geostress low-frequency electromagnetic monitoring, a real-time data acquisition system combined with a positioning and attitude measurement module was designed and implemented. The hardware takes an ARM microprocessor (S3C6410) as the control core and runs embedded Linux; the hardware and software architectures were introduced in detail, and an algorithm for extracting characteristic positioning and attitude data was proposed. A monitoring terminal for data acquisition and processing was designed with Qt/Embedded GUI programming on an LCD (Liquid Crystal Display), providing human-computer interaction, while the acquired data can be stored to an SD card in real time. The results of system debugging and field experiments indicate that the system completes positioning and attitude data acquisition and processing, effectively solves the problem of real-time positioning for in-situ monitoring, and realizes geostress low-frequency electromagnetic monitoring with high speed, real-time response and high reliability.

    New method for designing fractional Hilbert transformer
    LIU Weiqing
    2014, 34(9):  2757-2760.  DOI: 10.11772/j.issn.1001-9081.2014.09.2757
    Abstract ( )   PDF (561KB) ( )  
    References | Related Articles | Metrics

    A new method for designing a Fractional Hilbert Transformer (FHT) was proposed. The basic idea is to realize the FHT by designing an allpass filter with the desired phase characteristic. It is well known that the denominator polynomial of a stable allpass filter must be a minimum-phase system. By constructing a purely imaginary, odd-symmetric phase function and using the symmetry properties of the Fourier transform, the method obtains the cepstral sequence of the denominator polynomial from the relationship between the cepstral sequence and the phase function of a minimum-phase system; the denominator coefficients are then determined from cepstral theory through a nonlinear recursive difference equation. Methods approximating both the ideal and non-ideal characteristics were given. Design examples indicate that the resulting filters approximate the desired phase response well, and the method is simple, efficient and of in-principle unlimited precision.
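
    To show what phase response the designed allpass filter must approximate, the sketch below applies the ideal FHT directly in the frequency domain, H(w) = exp(-j*p*pi/2) for w > 0 and exp(+j*p*pi/2) for w < 0. This is the ideal reference behavior, not the paper's cepstral allpass design:

        import numpy as np

        def fractional_hilbert(x, p):
            """Ideal fractional Hilbert transform of order p via the FFT;
            p = 1 gives the classical Hilbert transformer."""
            X = np.fft.fft(x)
            w = np.fft.fftfreq(len(x))
            H = np.exp(-1j * p * np.pi / 2 * np.sign(w))
            return np.fft.ifft(X * H).real

        t = np.arange(256)
        x = np.cos(2 * np.pi * 8 * t / 256)
        # An order-p FHT shifts each sinusoid's phase by p*pi/2:
        y = fractional_hilbert(x, p=0.5)
        print(np.allclose(y, np.cos(2 * np.pi * 8 * t / 256 - 0.25 * np.pi)))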
