
Table of Contents

    01 November 2010, Volume 30 Issue 11
    Advanced computing
    Scheduling algorithm on grid resources in consideration of local workloads
    2010, 30(11):  2861-2863. 
    Abstract ( )   PDF (477KB) ( )
    The performance of two grid resource scheduling algorithms was studied on resources with and without stochastic local workloads. A stochastic local workload model of grid resources was established, and two scheduling algorithms, Highest CPU-Rating Available Resource First (HRARF) and Most Suitable CPU-Number Available Resource First (MSNARF), were proposed. The makespans of grid workloads scheduled by the two algorithms were simulated with and without stochastic local workloads. The simulation results show that when resource loads are heavy, the relative performance of MSNARF and HRARF is reversed between resources with and without stochastic local workloads. In grid computing, the relative performance of two scheduling algorithms may therefore differ between shared and exclusive resources.
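    As a rough illustration of the two selection rules (the resource fields and values below are assumptions for illustration, not from the paper), HRARF picks the available resource with the highest CPU rating, while MSNARF picks the available resource whose CPU count most tightly fits the task's requirement:

```python
def hrarf(resources):
    """Highest CPU-Rating Available Resource First."""
    avail = [r for r in resources if r["available"]]
    return max(avail, key=lambda r: r["rating"]) if avail else None

def msnarf(resources, cpus_needed):
    """Most Suitable CPU-Number Available Resource First: the available
    resource whose CPU count is closest to (and at least) the requirement."""
    avail = [r for r in resources if r["available"] and r["cpus"] >= cpus_needed]
    return min(avail, key=lambda r: r["cpus"] - cpus_needed) if avail else None

# Hypothetical resource pool used for illustration only.
resources = [
    {"name": "A", "rating": 3.2, "cpus": 64, "available": True},
    {"name": "B", "rating": 2.4, "cpus": 8,  "available": True},
    {"name": "C", "rating": 2.8, "cpus": 16, "available": False},
]
```

    Under local workloads, "available" would itself fluctuate, which is what reverses the two rules' relative performance.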
    Scheduling algorithm for instance-intensive cloud workflow
    2010, 30(11):  2864-2866. 
    Abstract ( )   PDF (434KB) ( )
    The existing workflow scheduling algorithms are designed for single complex instances and are unsuitable for scheduling instance-intensive cloud workflows. To address this problem, a new multi-instance scheduling algorithm, named Minimum Total Cost Under User-designated Total Deadline (MCUD), was proposed. After classifying workflow instances of the same type, the MCUD algorithm distributed the user-designated overall deadline over the tasks with a new distribution method, and dynamically adjusted the sub-deadlines of successive tasks during scheduling. Instances of the same type thus receive slightly different sub-deadline distributions, which avoids fierce competition for cheaper services and increases the efficiency of resource utilization. The simulation results show that, compared with other algorithms, MCUD further decreases the total execution cost and total execution time while still meeting the user-designated deadline.
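    The deadline-distribution step can be sketched as below, assuming (hypothetically) that sub-deadlines are allotted in proportion to each task's estimated execution time; the actual MCUD distribution and its dynamic adjustment are more elaborate:

```python
def distribute_deadline(task_times, total_deadline):
    """Split a user-designated overall deadline into cumulative per-task
    sub-deadlines, proportional to each task's estimated time (a
    simplified stand-in for MCUD's distribution step)."""
    total = sum(task_times)
    subs, elapsed = [], 0.0
    for t in task_times:
        elapsed += total_deadline * t / total
        subs.append(elapsed)
    return subs
```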
    Task scheduling algorithm based on model decomposition for multi-machine with time windows
    2010, 30(11):  2867-2869. 
    Abstract ( )   PDF (591KB) ( )
    An algorithm based on model decomposition was proposed for the problem of multi-machine task scheduling with time windows. Classical Benders decomposition was extended to mixed integer linear programming models by introducing logic-based Benders decomposition. The state-of-the-art solvers MOSEK and GECODE were deployed to solve the master problem and the sub-problems respectively, and a method of generating Benders cuts was presented. By running the algorithm iteratively, the solution space converged to a satisfactory feasible solution. The algorithm was implemented and tested on test cases, and its effectiveness was verified.
    Unified task scheduling algorithm of reconfigurable system based on placement cost
    2010, 30(11):  2870-2872. 
    Abstract ( )   PDF (480KB) ( )
    Highly efficient task scheduling algorithms can greatly influence the performance of reconfigurable systems. This paper analyzed the disadvantages of some current online task scheduling algorithms for reconfigurable systems, and proposed a new scheduling algorithm based on placement cost. The algorithm considered three types of cost, namely hardware task execution time in the FPGA, occupied FPGA area, and FPGA fragmentation, and also scheduled hardware and software tasks in a unified way. When a hardware task was being scheduled, if its placement cost exceeded a preset threshold, the task was rejected from the FPGA and its software implementation was run on the CPU instead. By reasonably rejecting some high-cost tasks, the algorithm achieves a higher overall scheduling success rate. The simulation results show that this scheduling algorithm achieves a higher deadline-guarantee ratio than others.
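    A minimal sketch of the rejection rule, with illustrative (assumed) cost weights and threshold; the paper's actual cost function over the three factors is not specified here:

```python
def schedule_task(exec_time, area_used, fragmentation,
                  weights=(0.5, 0.3, 0.2), threshold=1.0):
    """Combine the three cost factors into one placement cost; tasks
    above the threshold fall back to their software version on the CPU.
    Weights and threshold are illustrative assumptions."""
    cost = (weights[0] * exec_time + weights[1] * area_used
            + weights[2] * fragmentation)
    return "FPGA" if cost <= threshold else "CPU"
```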
    High speed image processing based on Java processor for embedded systems
    2010, 30(11):  2873-2875. 
    Abstract ( )   PDF (655KB) ( )
    Currently, to improve development efficiency and portability, Java technology is receiving more attention from image processing researchers. However, a Java virtual machine implemented in software is slow and has poor real-time performance, and cannot meet the performance demands of computationally complex image processing. Therefore, a Java processor architecture implemented in hardware was presented, which executes bytecode directly; a monitor and a pre-processor were implemented at the same time, making up a complete test platform. The experimental results show that the efficiency of the platform is 860 times that of the software Java virtual machine, so applying a Java processor to image processing in embedded systems is a viable option.
    Parallel garbage collection algorithm based on LISP2 for multi-core systems
    2010, 30(11):  2876-2879. 
    Abstract ( )   PDF (549KB) ( )
    Parallel multi-core systems are widely used not only in high performance servers, but also in low-end devices such as embedded controllers. When garbage collection is used in these systems, decreasing its overhead becomes very important. This paper described a new parallel copying garbage collector based on the LISP2 algorithm, implemented by parallelizing each of the four garbage collection phases. The experimental results demonstrate that the proposed method improves the efficiency of garbage collection.
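    For reference, LISP2-style sliding compaction (the basis the collector parallelizes phase by phase) can be sketched sequentially as follows; the cell layout — each heap cell a `(payload, ref)` tuple where `ref` indexes another cell or is `None` — is an assumption for illustration:

```python
def lisp2_compact(heap, marked):
    """Sequential sketch of LISP2 compaction: (1) compute forwarding
    addresses for live cells, then (2) rewrite references and (3) slide
    live cells to the start of the heap in one pass. References to dead
    cells come out as None."""
    forward, free = {}, 0
    for i in range(len(heap)):            # phase 1: forwarding addresses
        if marked[i]:
            forward[i], free = free, free + 1
    out = [None] * free
    for i, (payload, ref) in enumerate(heap):
        if marked[i]:                     # phases 2+3: fix refs and slide
            out[forward[i]] = (payload, forward.get(ref))
    return out
```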
    Fast computation for point-to-point shortest path based on four closest nodes in higher level road network
    Cong Teng
    2010, 30(11):  2880-2883. 
    Abstract ( )   PDF (808KB) ( )
    Point-to-point shortest path computation is a hot research topic today; one straightforward application is finding optimal driving directions. To overcome the difficulties of shortest path computation on large-scale graphs, an efficient approximation algorithm based on road network hierarchies was proposed. First, the four closest higher-level-network nodes to the starting node and the four closest to the ending node were computed, along with the eight corresponding shortest paths. Then, for the subgraph T consisting only of higher-level roads, eight edges corresponding to these shortest paths were added to T, yielding a graph T'. Finally, the shortest path from the starting node to the ending node was searched in T'. This design makes the proposed algorithm suitable for large-scale problems, an error bound is provided for the approximate shortest path, and the data can be preprocessed. In real applications the computational results are quite competitive, which shows that the proposed algorithm is effective.
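    The overall scheme can be sketched as below, assuming a simple adjacency-dict graph representation and virtual terminals `s*`/`t*` (both assumptions of this sketch); k is 4 in the paper but is a parameter here:

```python
import heapq

def dijkstra(graph, src):
    """graph: {u: {v: weight}}. Returns shortest distances from src."""
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def hier_shortest(graph, high_graph, high_nodes, s, t, k=4):
    """Approximate s-t distance: attach s and t to their k closest
    higher-level nodes (with the corresponding shortest-path lengths as
    edge weights), then search only the higher-level subgraph."""
    ds, dt = dijkstra(graph, s), dijkstra(graph, t)
    near_s = sorted((n for n in high_nodes if n in ds), key=ds.get)[:k]
    near_t = sorted((n for n in high_nodes if n in dt), key=dt.get)[:k]
    aug = {u: dict(vs) for u, vs in high_graph.items()}   # T'
    aug.setdefault("s*", {}).update({n: ds[n] for n in near_s})
    for n in near_t:
        aug.setdefault(n, {})["t*"] = dt[n]
    return dijkstra(aug, "s*").get("t*", float("inf"))
```

    Preprocessing would precompute the higher-level subgraph once; only the eight attachment paths depend on the query.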
    Artificial intelligence
    Virus-evolutionary genetic algorithm based on hierarchy
    2010, 30(11):  2884-2886. 
    Abstract ( )   PDF (456KB) ( )
    The host population was divided into high-rank and low-rank sub-populations according to fitness; correspondingly, the viruses were divided into a small-virus population and a big-virus population. Infection by a small virus caused small-scale changes in the phenotype of high-rank host individuals, while infection by a big virus caused large-scale changes in the phenotype of low-rank host individuals. This made the best individuals search within their own small-scale regions and the poor individuals search away from their own regions, so as to enhance search speed and accuracy. The experiments demonstrate that the proposed algorithm outperforms the traditional virus-evolutionary genetic algorithm.
    Adaptive multi-objective hybrid differential evolution algorithm in union transport scheduling
    2010, 30(11):  2887-2890. 
    Abstract ( )   PDF (609KB) ( )
    A traditional single-objective algorithm can only obtain one solution, while a multi-objective algorithm obtains a solution set in every run. The proposed algorithm (DEASA) improved the differential evolution strategy, designed a reconstruction operator, adjusted parameters adaptively, adopted the arena principle to build the non-dominated set, and added simulated annealing strategies into the differential evolution algorithm, which further improved the performance of the algorithm, reduced its time complexity, and helped it avoid falling into local optima. The experiments show that the algorithm can effectively solve the union transport problem.
    Optimized research on order picking based on partheno-genetic algorithm
    2010, 30(11):  2891-2893. 
    Abstract ( )   PDF (515KB) ( )
    Concerning the characteristics of high storage density, high speed and high efficiency in Automated Storage and Retrieval Systems (AS/RS), the running process of order picking was analyzed, the corresponding optimization model of order picking was established, and an efficient Partheno-Genetic Algorithm (PGA) was designed. The simulation results show that the optimized design has good global searching capability, balances optimization time against optimization quality, meets actual operational requirements, and is suitable for real projects.
    Rational strategy in decentralized multi-factory resource scheduling
    2010, 30(11):  2894-2897. 
    Abstract ( )   PDF (620KB) ( )
    To raise the efficiency of multi-factory resource scheduling and better address the shortage of production and maintenance resources faced by some special industries, this paper proposed the GD2 bidding strategy and applied it to a continuous double auction mechanism, achieving more efficient multi-factory resource scheduling with a decentralized approach. GD2 is a two-dimensional bidding strategy that contains both bidding price and quantity; the Agents adjust the bidding price by establishing a belief function and calculating the maximum expected profit. The experimental results show that the GD2 strategy achieves high resource scheduling efficiency in multi-factory resource scheduling, with an overall average efficiency of 92%.
    Ant colony optimization and heuristic algorithms for rectangle layout optimization problem with equilibrium constraints
    2010, 30(11):  2898-2901. 
    Abstract ( )   PDF (720KB) ( )
    Taking the satellite module layout problem as the research background, this paper discussed the problem of Rectangle Layout Optimization with Equilibrium Constraints (RLOEC). A sub-regional distribution strategy was designed through heuristics: the circular container was divided into four regions in which layout proceeded synchronously, and once a rectangle and its region were decided, the Bottom Left Fill (BLF) strategy was used to place the rectangle. The heuristic algorithm made the resulting layout compact through geometric constraints and balanced by controlling the mass center of the system. On the basis of the heuristic strategy, an Ant Colony Optimization (ACO) algorithm was developed to search for the optimal positioning order and hence the optimal layout. The simulation results show that the proposed method has good computational performance.
    Optimal configuration of manufacturing resources based on transportation factors in networked manufacturing
    2010, 30(11):  2902-2905. 
    Abstract ( )   PDF (523KB) ( )
    Concerning the optimal configuration of manufacturing resources in a networked manufacturing environment, transportation time and cost were considered in addition to processing time, processing cost and other factors, in order to increase the practicability of the configuration result. A comprehensive optimization model was built, and the problem was solved by a genetic algorithm with an elitist strategy. The proposed model and algorithm were validated through an application case. The research shows that taking transportation factors into account decreases the total transportation time and cost and makes the result more practicable.
    Graphical model-based multi-Agent coordination fault diagnosis for complex system
    2010, 30(11):  2906-2909. 
    Abstract ( )   PDF (621KB) ( )
    To solve the real-time inference problem in fault diagnosis of complex, uncertain systems, a multi-Agent cooperative inference fault diagnosis approach based on Multiply Sectioned Bayesian Networks (MSBN), a kind of graphical model, was proposed. This method partitioned a complex Bayesian Network (BN) into several overlapping small BNs; each Agent, which monitored a sub-system, was abstracted as a moderate-size BN owning the local knowledge about that sub-system. Autonomous inference can be conducted by the Agents using existing BN inference algorithms, and multi-Agent cooperative inference for fault diagnosis is then performed through message propagation along the overlapping interfaces among the subnets. The experimental results demonstrate that the proposed graphical-model-based multi-Agent coordination fault diagnosis approach is correct and effective.
    Map matching algorithm based on short-term prediction
    2010, 30(11):  2910-2913. 
    Abstract ( )   PDF (831KB) ( )
    Efficient and reliable map matching algorithms are essential for vehicle navigation systems, yet most existing solutions cannot provide trustworthy outputs in ambiguous situations (such as at road intersections). To improve the precision of map matching, a new map matching algorithm based on short-term prediction was proposed. First, the algorithm used historical positioning information to build a short-term prediction model, from which future positions after the current matching time are obtained. Second, the distance similarity between the vehicle and a route was defined by a modified average distance, which replaced the projected distance between the current matching position and the route. Finally, Dempster-Shafer evidence theory was adopted to fuse the modified average distance with the direction information between the vehicle and the route, which effectively widens the credibility differences between candidate routes and enhances the robustness of the algorithm. The results of simulation and experiments demonstrate better efficiency and reliability of the estimates even in ambiguous environments.
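    The evidence-fusion step rests on Dempster's rule of combination, which for two mass functions over sets of candidate routes can be sketched as (the route names below are illustrative):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets: multiply masses of intersecting
    focal elements and renormalize by 1 minus the total conflict."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

    Here one mass function would come from the modified average distance and the other from direction agreement.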
    Electromagnetism-like mechanism algorithm for global optimization
    2010, 30(11):  2914-2916. 
    Abstract ( )   PDF (391KB) ( )
    A new Electromagnetism-like Mechanism (EM) algorithm was proposed to prevent data overflow and reduce computation load. The formulas for the particle charge and the total force vector were improved, and a lower bound on the objective function and formulas for particle charge filtration were introduced. Tests on standard functions comparing the new algorithm with the EM algorithm prove that the new algorithm converges faster; furthermore, the numerical results show that the approach is efficient and valid.
    Manifold learning algorithm based on the small world model
    2010, 30(11):  2917-2920. 
    Abstract ( )   PDF (650KB) ( )
    Isometric Feature Mapping (ISOMAP) not only has high complexity but also cannot handle new samples. L-ISOMAP lowers the complexity by preserving only the geodesic distances to some landmark points; however, a randomly selected landmark set often leads to poor embedding results. A manifold learning algorithm based on the small world model was proposed, which preserves only the geodesic distances between each point and its k nearest neighbors, plus some distant points randomly chosen according to the small world model. The steepest gradient descent method was used in the iterative optimization to obtain the low-dimensional representation of the data. The theoretical analysis demonstrates that the complexity of the proposed algorithm is far below that of ISOMAP. The stress function and the residual variance were used to compare the three methods: the experiments show that the results of the new method are close to those of ISOMAP and superior to those of L-ISOMAP. Moreover, the algorithm can handle new data and is insensitive to noise.
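    The pair-selection idea (k nearest neighbors plus a few random long-range "small-world" links per point) can be sketched for 1-D points as follows; the real algorithm would then preserve geodesic distances over exactly these pairs:

```python
import random

def small_world_pairs(points, k=2, n_random=1, seed=0):
    """For each point, keep its k nearest neighbours plus n_random
    randomly chosen distant points. Returns a set of frozenset pairs of
    indices. 1-D points and the parameters are illustrative."""
    rng = random.Random(seed)
    pairs = set()
    for i, p in enumerate(points):
        others = sorted((j for j in range(len(points)) if j != i),
                        key=lambda j: abs(points[j] - p))
        for j in others[:k]:                      # local neighbourhood
            pairs.add(frozenset((i, j)))
        distant = others[k:]                      # small-world shortcuts
        for j in rng.sample(distant, min(n_random, len(distant))):
            pairs.add(frozenset((i, j)))
    return pairs
```

    Compared with ISOMAP's all-pairs geodesic matrix, only O(n(k + n_random)) distances are preserved, which is the source of the complexity reduction.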
    Two-dimensional hyperbolic equilibrium manifold method
    2010, 30(11):  2921-2923. 
    Abstract ( )   PDF (632KB) ( )
    The computation of manifolds plays an important role in understanding the dynamics of a nonlinear system. This paper presented a revised algorithm for computing two-dimensional manifolds of vector fields. A trajectory was grown by solving an appropriate initial value problem, the manifold was then grown step by step with equal arc-length along the trajectory, and points on the trajectory were selected according to the local curvature. For the first time, both a curvature-control technique and distance constraints were employed when interpolating between mesh points; in this way the accuracy of the interpolated points was guaranteed, which is the main advantage over existing methods. The simulation results show that the proposed algorithm is effective.
    Database and data mining
    Research on shared ontology model for infectious disease emergency case
    2010, 30(11):  2924-2927. 
    Abstract ( )   PDF (675KB) ( )
    When a large-scale infectious disease breaks out, emergency response personnel need a large amount of emergency knowledge and information. In order to address knowledge sharing and semantic conflicts in infectious disease emergency cases, this paper defined an infectious disease emergency case ontology model and proposed an ontology-based sharing framework for infectious disease emergency cases. Finally, taking the response to SARS in Beijing as an example, a partial knowledge fragment of an ontology-based infectious disease emergency case was described.
    Musical named entity recognition method
    2010, 30(11):  2928-2931. 
    Abstract ( )   PDF (779KB) ( )
    In order to extract musical entities from different Web pages quickly and correctly, this paper presented a hybrid rule- and statistics-based approach for Chinese named entity recognition in the music domain, based on the characteristics of that domain, and implemented a musical named entity recognition system. The experimental results show that the system achieves high precision and recall.
    New method for building ASP knowledge base from knowledge in classical logic
    2010, 30(11):  2932-2936. 
    Abstract ( )   PDF (696KB) ( )
    Answer Set Programming (ASP) is now a mainstream tool for the representation of non-monotonic knowledge. In order to make use of existing knowledge in classical logic when using ASP for problem solving, a method was proposed for translating classical logic formulas into an ASP program or ASP knowledge base, such that the models of the formulas and the answer sets of the ASP program are in one-to-one correspondence. Some examples were presented to illustrate the effectiveness of the method. Two classes of knowledge were distinguished, i.e. constraint knowledge, which requires a formula to be satisfied, and definition knowledge, which defines a predicate. In practice, the method provides a way of building non-monotonic ASP knowledge bases from existing knowledge bases that use predicate logic as their representation language.
    Orientation analysis of Web reviews
    2010, 30(11):  2937-2940. 
    Abstract ( )   PDF (627KB) ( )
    The rise of Web 2.0 has produced a large number of Web reviews, including news reviews and product reviews. Since various problems arise in monitoring and using this review information, the orientation analysis of Web reviews was emphasized. HowNet was used as the basic semantic dictionary and an improved method for calculating word similarity was proposed; the calculation of word orientation was then improved by combining it with Tongyici Cilin; after that, orientation discrimination was carried from low-granularity words up to high-granularity sentences by using linguistic knowledge. The experimental results show that the method achieves high accuracy in the orientation analysis of Web reviews in a real Internet environment.
    Transformable method for XML Schema to relational schemata in data integration
    2010, 30(11):  2941-2944. 
    Abstract ( )   PDF (831KB) ( )
    This paper built a set of structural and semantic mapping rules according to the definitions of elements and the nesting relations between elements in XML Schema. A new mapping method based on these rules was proposed to transform an XML Schema into relational schemata, and the resulting relational schemata were proved to be in 4NF. The results indicate that the relational schemata not only contain all the structural and content information of the XML Schema, but also preserve most of the semantic constraints and reduce storage redundancy.
    XML keywords retrieval by integrating semantics of document and user inquiries
    2010, 30(11):  2945-2948. 
    Abstract ( )   PDF (626KB) ( )
    A semantics-aware keyword retrieval method was proposed to deal with the loss of semantic information in XML keyword retrieval. The semantics implied in a document were extracted by using the semi-structured nature of XML documents, and the user's query intent was captured by analyzing the query syntax. The elements satisfying the demand were then retrieved according to the user's query intent. Finally, combined with the document semantics, the presentation of query results was improved by using a semantically relevant entity sub-tree set instead of the traditional Smallest Lowest Common Ancestor (SLCA). The experimental results indicate that the precision of keyword retrieval can be improved by this method.
    Algorithm for mining data stream outliers based on distance
    2010, 30(11):  2949-2951. 
    Abstract ( )   PDF (598KB) ( )
    Traditional outlier mining algorithms cannot mine outliers in data streams effectively. Concerning the infinite input and dynamic changes of the data stream environment, a new distance-based algorithm for detecting outliers in data streams was proposed. Changes in the probability distribution of the data stream were dynamically detected using the Hoeffding bound and the central limit theorem for independent identically distributed variables, and the detection outcome was used to adaptively adjust the sliding window size for mining outliers. The experimental results show that the algorithm can effectively mine outliers both in an artificial data set and in the KDD-CUP99 data set.
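    The Hoeffding-based change test can be sketched as below; the window handling is simplified and the adaptive window resizing that follows detection is omitted:

```python
import math

def hoeffding_bound(window_size, delta=0.05, value_range=1.0):
    """Hoeffding bound: with probability at least 1 - delta, the window
    mean deviates from the true mean by less than the returned eps."""
    return value_range * math.sqrt(math.log(2 / delta) / (2 * window_size))

def distribution_changed(ref_window, new_window, delta=0.05):
    """Flag a distribution change when the two window means differ by
    more than the combined Hoeffding bounds (a simplified detector)."""
    m1 = sum(ref_window) / len(ref_window)
    m2 = sum(new_window) / len(new_window)
    eps = (hoeffding_bound(len(ref_window), delta)
           + hoeffding_bound(len(new_window), delta))
    return abs(m1 - m2) > eps
```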
    Research and improvement on Apriori algorithm of association rule mining
    2010, 30(11):  2952-2955. 
    Abstract ( )   PDF (628KB) ( )
    The classic Apriori algorithm for discovering frequent itemsets scans the database many times and repeatedly performs pattern matching between candidate itemsets and transactions, producing a large number of candidate itemsets, which results in low efficiency. The improved Apriori algorithm addresses this in three aspects: first, the join and prune strategies used to generate candidate frequent (k+1)-itemsets from frequent k-itemsets were improved; second, the handling of transactions was improved to reduce the pattern matching time; finally, the handling of the database was improved so that it is scanned only once during the whole course of the algorithm. With these improvements, the efficiency of the Apriori algorithm is improved in both time and space. The experimental results show that the improved algorithm is more efficient than the original.
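    For context, the baseline level-wise Apriori with its join and prune steps (the part the three improvements target) looks roughly like this sketch, which uses absolute support counts:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic level-wise Apriori: join frequent k-itemsets into
    candidate (k+1)-itemsets, prune candidates with an infrequent
    subset, then count candidates against the transactions."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def count(cands):
        return {c for c in cands
                if sum(c <= t for t in transactions) >= min_support}

    frequent, k = set(), 1
    level = count({frozenset([i]) for i in items})
    while level:
        frequent |= level
        k += 1
        cands = {a | b for a in level for b in level if len(a | b) == k}
        cands = {c for c in cands                      # prune step
                 if all(frozenset(s) in level
                        for s in combinations(c, k - 1))}
        level = count(cands)                           # one scan per level
    return frequent
```

    Note the `count` call inside the loop: one database scan per level is exactly the cost the improved algorithm reduces to a single scan.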
    Prediction method for outliers over data stream based on sparse representation
    2010, 30(11):  2956-2958. 
    Abstract ( )   PDF (597KB) ( )
    This paper proposed a new prediction method for outliers over data streams based on sparse representation, to improve the speed and performance of outlier prediction. Combining a wavelet noise detection method with newly developed tools for sparse representation, a transformation method for data stream outliers was proposed. To identify outliers, a random measurement matrix of wavelet transform coefficients was applied with sparse representation to forecast data values at future timestamps. The simulation results on real data sources show that the method can provide precise instantaneous detection under certain conditions.
    Prediction of O-glycosylation sites in protein sequence by kernel Fisher discriminant analysis
    2010, 30(11):  2959-2961. 
    Abstract ( )   PDF (422KB) ( )
    To predict O-glycosylation sites in protein sequences, a method based on Kernel Fisher Discriminant Analysis (KFDA) was proposed under various window sizes. Encoded by sparse coding, the samples were first mapped onto a feature space implicitly defined by a kernel function, and then classified into two classes in that space by Fisher discriminant analysis. Furthermore, a majority-vote scheme was used to combine all the pre-classifiers to improve the prediction performance. The results indicate that the performance of the KFDA ensemble is better than that of FDA, PCA and the individual pre-classifiers, with a prediction accuracy of about 86.5%.
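    The ensemble step is a plain majority vote over the window-size-specific pre-classifiers, e.g. (classifier callables are stand-ins for trained KFDA models):

```python
def majority_vote(classifiers, sample):
    """Combine binary pre-classifiers (callables returning 0 or 1) by
    majority vote; ties default to the negative class here."""
    votes = sum(c(sample) for c in classifiers)
    return 1 if votes * 2 > len(classifiers) else 0
```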
    Storage zoning algorithm based on page migration for database with hybrid architecture
    2010, 30(11):  2962-2964. 
    Abstract ( )   PDF (505KB) ( )
    To effectively exploit the high read speed of Solid-State Disks (SSDs) and the low cost of disk storage in a hybrid structure where disks and SSDs coexist, a storage zoning algorithm based on page migration, named SZA, was proposed. A migration cost calculation method different from that of NUMA was presented; the SZA algorithm chooses the appropriate storage medium based on the migration cost, and migration is performed according to the workload on the data. The simulation results show that the algorithm effectively improves I/O performance and significantly reduces the number of flash erasures.
    Effective solution to Hash collision
    2010, 30(11):  2965-2966. 
    Abstract ( )   PDF (493KB) ( )
    To improve the efficiency of Hash conflict resolution, a more effective method for dealing with Hash collisions was proposed, based on a conflict-solving mechanism and the prior probabilities of the searched elements, combined with the advantages of heap sort. The method, defined as a prior-probability-based big-top Hash heap, establishes a corresponding big-top (max) Hash heap according to the prior probabilities of the searched elements and then searches within it. The time complexity of searching is O(n log n) at worst. Hence, this algorithm not only reduces the search length when conflicts occur, shortening the search time, but is also applicable to huge key sets.
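    The idea can be sketched as a Hash bucket whose colliding keys are kept in a max ("big-top") heap ordered by prior search probability, so high-probability keys are probed first; this is a sketch of the principle, not the paper's exact structure:

```python
import heapq

class ProbHeapBucket:
    """One Hash bucket: colliding entries are heap-ordered by prior
    search probability so frequently searched keys are examined first."""
    def __init__(self):
        self._heap = []               # entries: (-prior_prob, key, value)

    def insert(self, key, value, prob):
        heapq.heappush(self._heap, (-prob, key, value))

    def search(self, key):
        # Probe entries in descending prior probability; sorting the
        # heap array is a shortcut for repeatedly popping a copy.
        for _, k, v in sorted(self._heap):
            if k == key:
                return v
        return None
```

    Frequently searched keys thus sit near the front of the probe order, shortening the expected search length within a collision chain.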
    Parallel evolutionary algorithm for dynamic data structures optimization in embedded system
    Wang Xiaosheng
    2010, 30(11):  2967-2969. 
    Abstract ( )   PDF (447KB) ( )
    In order to better solve dynamic data structure optimization in embedded systems, this paper combined NSGA-II and SPEA2, and adopted the island model and multi-threading to describe a parallel multi-objective evolutionary algorithm. Using its three parallel variants and the sequential NSGA-II and SPEA2, an embedded application was optimized on a multi-core architecture in the experiments. The results show that not only is the speed of the optimization process enhanced, but the quality and diversity of the solutions are also improved.
    Pattern recognition
    Face recognition method based on multi-channel Log-Gabor and (2D)^2PCALDA
    2010, 30(11):  2970-2973. 
    Abstract ( )   PDF (600KB) ( )
    To reduce the influence of lighting variations on the performance of image-based face recognition methods, a new face recognition method combining a multi-channel strategy with the (2D)^2PCALDA feature extraction technique was proposed. Taking each scale and orientation as an independent channel, features were extracted using (2D)^2PCALDA and classification was performed in each channel; the final classification was obtained by fusing the classification results of the different channels. The experimental results on the CAS-PEAL-R1, ORL and Yale face databases show that the proposed method has better recognition performance.
    Object matching with fuzzy randomized generalized Hough transform
    2010, 30(11):  2974-2976. 
    Abstract ( )   PDF (593KB) ( )
    A new algorithm called Fuzzy Randomized Generalized Hough Transform (FRGHT) was proposed in this paper to improve industrial detection accuracy and image matching speed. The algorithm combines the Fuzzy Inference System (FIS) and the Randomized Generalized Hough Transform (RGHT): the fuzzy sets of the FIS are used to compute the votes of the edge points of the reference image for the registration parameters, which effectively handles noise and distortion and improves matching accuracy, while the many-to-one mapping produced by random sampling reduces the memory requirements and improves matching speed. The experiments demonstrate that the proposed algorithm is faster and more accurate than RGHT and the Fuzzy GHT (FGHT); moreover, it is robust to serious noise pollution, distortion, occlusion, clutter, etc.
    Research on combined method for SAR target azimuth estimation
    2010, 30(11):  2977-2979. 
    Abstract ( )   PDF (701KB) ( )
    In order to obtain an accurate estimate for any azimuth, a combined azimuth estimation method for Synthetic Aperture Radar (SAR) targets was proposed, integrating leading contour fitting and peak value fitting. According to the projection ratio and the proportion of the object region's area to its enveloping rectangle, different estimation algorithms were employed based on an initial judgment of the aspect. The simulation results using Moving and Stationary Target Acquisition and Recognition (MSTAR) data indicate that the error of the proposed method is small and its accuracy high. The combined method improves the adaptability of azimuth estimation and can thus effectively facilitate target classification and recognition.
    Document images retrieval based on texture spectrum descriptor
    2010, 30(11):  2980-2982. 
    Abstract ( )   PDF (474KB) ( )  
    Related Articles | Metrics
    A new method for document image retrieval based on a texture spectrum descriptor was proposed. The algorithm first segmented the text area according to document image characteristics, then described the texture spectrum based on the edges of text characters and computed the texture spectrum histogram. Compared with retrieval that uses the gray histogram directly, this descriptor is more discriminative. The experimental results show that the method has high retrieval precision, resists cropping and rotation, and is suitable for document image retrieval.
    Defect detection of low contrast watermark image
    2010, 30(11):  2983-2985. 
    Abstract ( )   PDF (611KB) ( )  
    Related Articles | Metrics
    Based on the fundamental principles of feature extraction with Principal Component Analysis (PCA) and Kernel PCA (KPCA), an improved method for extracting features of nonlinear images and reconstructing them was proposed for detecting defects in embedded watermark images. The method greatly reduced the dimension of the kernel matrix while effectively preserving the information of the embedded watermark image, so that defects could be found by comparison. The experimental results show that the method reduces the dimension of the input data effectively, shortens computation time, and improves detection effect and accuracy; KPCA achieves a higher performance index and a wider range of application than PCA.
    Automatic image annotation method based on Gaussian mixture model
    2010, 30(11):  2986-2987. 
    Abstract ( )   PDF (480KB) ( )  
    Related Articles | Metrics
    Automatic image annotation has become a feasible way to reduce the "semantic gap". In order to improve annotation performance, an automatic image annotation method based on the Gaussian Mixture Model (GMM) was proposed. A GMM was built for each keyword to accurately characterize its semantic content, so that more precise annotation results could be obtained. Finally, experiments on the COREL database verify that the proposed method is effective in terms of average precision and average recall.
    Pulmonary nodule feature extraction based on short CT image series
    2010, 30(11):  2988-2990. 
    Abstract ( )   PDF (563KB) ( )  
    Related Articles | Metrics
    Concerning the limitation of using a single CT image to detect lung nodules, a short image series consisting of a few sequential CT images was used for automatic nodule detection. The image region corresponding to the Region of Interest (ROI) was treated as the surface of a 2D function, and new features, different from traditional image region features, were extracted to describe the surface's shape and its variation across the short image series. Finally, the effectiveness of the extracted features is verified using a Support Vector Machine (SVM).
    Automatic quantitative analysis on mitochondrial morphology method
    2010, 30(11):  2991-2994. 
    Abstract ( )   PDF (596KB) ( )  
    Related Articles | Metrics
    In order to achieve fast and accurate quantification of mitochondrial structure features, a new method based on multi-directional template response and a tracing algorithm was presented. Firstly, seed points were selected from the original mitochondrial images, and edge points and centerline points of the mitochondrial structure were detected using directional template responses in 16 directions. Then, starting from the seed points, the mitochondrial structure was traced automatically and recursively by detecting edge points, and the number, average length and average area of mitochondria were calculated during the tracing process. The experiments show that mitochondrial morphology differs significantly (P<0.05) between controls and smooth muscle cells with overloaded free cholesterol, and the proposed approach is more efficient and more accurate than the interactive method.
    Interpretation classification of river channel images
    2010, 30(11):  2995-2997. 
    Abstract ( )   PDF (496KB) ( )  
    Related Articles | Metrics
    Due to the low efficiency of manual interpretation of river channel remote sensing images, an image recognition system for river classification was proposed. A combination of sensitive factors was adopted: multi-band combination and region growing methods such as splitting and merging were used to extract rivers from the remote sensing images, and mathematical morphology was applied to standardize the river channels. The classification of river types was defined on the obtained images, and a feature vector construction method was given. Since the feature vectors extracted from rivers were highly aggregated and poorly separable, a fuzzy support vector machine was used for recognition, with fuzzy membership reflecting the contribution of sample properties, which reduces the influence of noise and outliers on classification. The experiments show that using the fuzzy support vector machine significantly improves the overall recognition rate.
    Graphics and image processing
    Fast 3D model generation from silhouettes
    2010, 30(11):  2998-3001. 
    Abstract ( )   PDF (618KB) ( )  
    Related Articles | Metrics
    A new approach for fast 3D model generation from silhouettes was proposed, in which the traditional 3D cone intersection problem was converted into a 2D silhouette intersection problem. Firstly, the 2D silhouettes from different viewpoints were back-projected onto parallel 3D planes; then the intersection of all back-projected silhouettes on each 3D plane was calculated; finally, corresponding points between the intersection silhouettes of neighboring 3D planes were matched, so that the mesh of the 3D model was obtained directly. Both theoretical analysis and experimental results show that the time complexity of the proposed algorithm increases linearly with the number of viewpoints. Since the method improves model accuracy mainly by increasing the number of viewpoints, it can generate precise 3D models faster than the 3D cone intersection method.
    A fast topological reconstruction algorithm for 3D mesh model
    2010, 30(11):  3002-3004. 
    Abstract ( )   PDF (442KB) ( )  
    Related Articles | Metrics
    In order to speed up topological reconstruction of 3D mesh models, the half-edge structure was selected to represent the topological relations of the solid model, and a new index method was designed to accelerate vertex merging. During vertex merging, vertex positions were located directly without an AVL lookup table, so the time complexity of topological reconstruction was reduced from O(n log n) to O(n). Tests on SMF format files show that a model with one hundred thousand triangular facets can be reconstructed within a second on a common PC.
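    As an illustration of the indexing idea above, the AVL lookup table can be replaced by a hash map keyed on quantized vertex coordinates, giving expected O(1) lookup per vertex and O(n) reconstruction overall. This is a minimal sketch, not the paper's implementation; the function and parameter names are ours.

```python
def merge_vertices(triangles, tol=1e-6):
    """Merge duplicate vertices from a triangle soup in O(n) expected time.

    triangles: list of faces, each a list of 3 (x, y, z) tuples, as read
    from an SMF/OBJ-like file. Returns (vertices, faces) where faces index
    into the deduplicated vertex list. A dict keyed on coordinates quantized
    to `tol` replaces the O(log n) AVL-tree lookup.
    """
    index = {}               # quantized coordinate -> vertex id
    vertices, faces = [], []
    for tri in triangles:
        face = []
        for v in tri:
            key = tuple(round(c / tol) for c in v)
            if key not in index:           # first time this position is seen
                index[key] = len(vertices)
                vertices.append(v)
            face.append(index[key])
        faces.append(tuple(face))
    return vertices, faces
```

With the shared-index faces in hand, half-edge adjacency can then be built in a single further pass over the face list.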
    Regularized super-resolution reconstruction based on M-estimation and bilateral filtering
    2010, 30(11):  3005-3007. 
    Abstract ( )   PDF (530KB) ( )  
    Related Articles | Metrics
    Within the regularized super-resolution reconstruction framework, a unified and robust energy function was constructed that incorporated both the robustness of M-estimation and the double-weighting idea of bilateral filtering, and hence behaves much better in robustness and edge preservation. Because the constrained least squares algorithm (using a least squares estimator) and Farsiu's algorithm (using a least absolute deviation estimator) fall short in edge preservation, the robust Huber estimator was used in the unified energy function. The experimental results demonstrate the effectiveness of the proposed algorithm in both visual effect and Peak Signal-to-Noise Ratio (PSNR).
    Electronic image stabilization algorithm based on filtering and curve fitting
    2010, 30(11):  3008-3010. 
    Abstract ( )   PDF (477KB) ( )  
    Related Articles | Metrics
    To solve the problem that no existing method can remove high-frequency and low-frequency noise simultaneously, a new method combining filtering and curve fitting was proposed. Firstly, the offset between frames was estimated quickly using a bit-plane matching algorithm. Secondly, the inter-frame offsets were accumulated to obtain the global motion vector relative to the reference frame, and this vector was filtered with a Kalman filter to remove the high-frequency noise. Finally, curve fitting was applied to the Kalman-filtered global motion vector to remove the low-frequency noise, yielding the steady intentional trajectory. The experimental results show that the method can effectively remove high-frequency and low-frequency noise simultaneously, with good video stabilization effect.
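    The two smoothing stages described in this abstract can be sketched in a few lines: a scalar Kalman filter suppresses jitter in the accumulated motion, and a low-order polynomial fit to the filtered signal serves as the intentional camera path. This is a simplified sketch under our own assumptions (random-walk state model, illustrative noise parameters q and r, cubic fit); the paper's actual models and parameters are not given in the abstract.

```python
import numpy as np

def stabilize_trajectory(global_motion, q=1e-3, r=1.0, deg=3):
    """Estimate the intentional trajectory from a noisy global motion vector.

    Stage 1: scalar Kalman filter (random-walk model; process noise q,
             measurement noise r) removes high-frequency jitter.
    Stage 2: polynomial curve fitting of degree `deg` on the filtered
             signal removes residual low-frequency noise.
    """
    x, p = float(global_motion[0]), 1.0
    filtered = []
    for z in global_motion:
        p += q                      # predict: state assumed constant
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct with measurement z
        p *= (1.0 - k)
        filtered.append(x)
    t = np.arange(len(filtered))
    coeffs = np.polyfit(t, filtered, deg)   # curve-fitting stage
    return np.polyval(coeffs, t)
```

The returned trajectory would then be subtracted from the raw global motion to obtain the compensation vector for each frame.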
    Fusion of infrared and visible images based on the second generation Curvelet transform and MPCA
    2010, 30(11):  3011-3014. 
    Abstract ( )   PDF (633KB) ( )  
    Related Articles | Metrics
    For the fusion problem of infrared and visible light images of the same scene, an image fusion algorithm based on the second generation Curvelet transform and Modular Principal Component Analysis (MPCA) was proposed. Firstly, the fast discrete Curvelet transform was performed on the original images to obtain coarse-scale and fine-scale coefficients at different scales and in various directions. Secondly, according to the different physical features of infrared and visible light images and the characteristics of the human visual system, fusion weights for the coarse-scale coefficients were determined by the MPCA method, while a fusion rule based on local region energy was used for the fine-scale coefficients. Finally, the fusion result was obtained through the inverse Curvelet transform. The experimental results illustrate that the proposed algorithm effectively extracts the characteristics of the original images and produces better fusion results than other methods in both subjective visual effect and objective evaluation indices.
    Network and distributed technology
    Color transfer method with multi-source images
    2010, 30(11):  3015-3018. 
    Abstract ( )   PDF  
    Related Articles | Metrics
    Color transfer is an effective approach to changing the colors of an image. Traditional methods mostly adopt a single source image, which greatly limits their practical application. A color transfer technique using multiple source images was presented. The method chose the best-matched source colors for re-coloring each target region through a series of operations such as brightness re-mapping, color region referencing and transfer synthesis. Different transfer synthesis methods were given for color images and grayscale images, and color transfer was finally applied to every target region. Using colorfulness as the evaluation index, the experiments show that the proposed method generates convincing results with better performance.
    Graphics and image processing
    Weighted mean filter based on local histogram
    2010, 30(11):  3019-3021. 
    Abstract ( )   PDF (533KB) ( )  
    Related Articles | Metrics
    A weighted mean filtering algorithm was proposed for gray-scale images polluted by salt-and-pepper noise of different degrees. According to the characteristics of salt-and-pepper noise, the algorithm detected image noise and established a noise-marked matrix; pixels marked as signal were left unprocessed, while pixels marked as noise were filtered by a weighted mean computed over windows whose size depended on the degree of pollution of the neighboring pixels. The weight of each pixel was determined by the local histogram of the noise point's region. The simulation results show that the algorithm can suppress noise efficiently while preserving image detail. Finally, comparison with other improved algorithms proves the effectiveness of the algorithm.
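    The detect-then-filter structure of such schemes can be illustrated with a simplified sketch: extreme values (0/255) are marked as noise, signal pixels are left untouched, and each noise pixel is replaced by the mean of the noise-free pixels in its 3x3 window. Note the simplifications: the paper weights neighbors by a local histogram and adapts the window size, whereas this sketch uses a plain mean over a fixed window.

```python
import numpy as np

def despeckle(img):
    """Remove salt-and-pepper noise from a gray-scale image (simplified).

    Pixels equal to 0 or 255 are marked in a noise matrix; only marked
    pixels are replaced, by the mean of the clean pixels in their 3x3
    neighborhood, so image detail at signal pixels is preserved.
    """
    img = np.asarray(img, dtype=float)
    noise = (img == 0) | (img == 255)        # noise-marked matrix
    out = img.copy()
    h, w = img.shape
    for i, j in zip(*np.nonzero(noise)):
        i0, i1 = max(i - 1, 0), min(i + 2, h)
        j0, j1 = max(j - 1, 0), min(j + 2, w)
        win = img[i0:i1, j0:j1]
        clean = win[~noise[i0:i1, j0:j1]]
        if clean.size:                       # in practice, enlarge the window if empty
            out[i, j] = clean.mean()
    return out
```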
    Video steganography based on motion vector
    2010, 30(11):  3022-3024. 
    Abstract ( )   PDF (436KB) ( )  
    Related Articles | Metrics
    In order to reduce the modification rate of the cover media caused by the embedding process, a new video steganography scheme based on motion vectors and linear block codes was proposed. The method embedded secret messages in the motion vectors of the cover media during H.264 compression, and linear block codes were used to increase the utilization rate and reduce the modification rate of the motion vectors. The proposed steganographic scheme not only has low computational complexity, but is also highly imperceptible to human observers; furthermore, the secret information can be extracted directly without the original video sequences. The experimental results show that the proposed scheme can embed large amounts of information while maintaining good video quality.
    Watermarking algorithm for four color images based on Hadamard transform and singular value decomposition
    2010, 30(11):  3025-3027. 
    Abstract ( )   PDF (489KB) ( )  
    Related Articles | Metrics
    Because embedding only one watermark in a digital product can no longer satisfy practical demands, a digital watermarking algorithm for four color images using the Hadamard transform and Singular Value Decomposition (SVD) in the Discrete Wavelet Transform-Discrete Cosine Transform (DWT-DCT) domain was proposed, based on the orthogonality of the Hadamard transform and the relative stability of SVD. First, the four watermark images were combined into one watermark by the Hadamard transform, and the combined watermark was decomposed by SVD. The original image was transformed by DWT, DCT and SVD, and the watermark was then embedded. Tests show that the algorithm not only can embed multiple watermarks at the same time to increase the embedded information quantity, but also has very strong robustness.
    Fast single image haze removal algorithm
    2010, 30(11):  3028-3031. 
    Abstract ( )   PDF (650KB) ( )  
    Related Articles | Metrics
    Firstly, the global atmospheric light was estimated using the Dark Channel Prior (DCP). Secondly, the global atmospheric light and the estimated atmospheric veil were used to compute the medium transmission. Finally, the recovered image was obtained using the atmospheric attenuation model. The algorithm combined the accurate estimation of global atmospheric light by the DCP with the fast computation of the atmospheric veil. The experimental results demonstrate that the proposed method not only achieves good visual recovery but also shortens the time needed for haze removal.
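    The three steps named here (atmospheric light from the dark channel, transmission, recovery via the attenuation model) can be sketched as follows. Parameter values (patch size 7, omega 0.95, transmission floor t0 0.1) follow common dark-channel practice, not necessarily this paper; the pixel loops are written for clarity, not speed.

```python
import numpy as np

def dehaze(img, patch=7, omega=0.95, t0=0.1):
    """Dark-channel-prior dehazing sketch for a float RGB image (H, W, 3).

    dark channel : per-pixel minimum over RGB channels and a local patch;
    A            : colour of the brightest dark-channel pixel;
    transmission : t = 1 - omega * dark(img / A), floored at t0;
    recovery     : J = (I - A) / t + A   (atmospheric scattering model).
    """
    h, w, _ = img.shape
    r = patch // 2

    def dark_channel(channel_min):
        out = np.empty_like(channel_min)
        for i in range(h):
            for j in range(w):
                out[i, j] = channel_min[max(i - r, 0):i + r + 1,
                                        max(j - r, 0):j + r + 1].min()
        return out

    dark = dark_channel(img.min(axis=2))
    idx = np.unravel_index(dark.argmax(), dark.shape)
    A = img[idx].astype(float)                     # global atmospheric light
    norm_dark = dark_channel((img / A).min(axis=2))
    t = np.clip(1.0 - omega * norm_dark, t0, 1.0)  # medium transmission
    return (img - A) / t[..., None] + A
```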
    Network and distributed technology
    Universal composable security of multi-signature schemes
    2010, 30(11):  3032-3035. 
    Abstract ( )   PDF (837KB) ( )  
    Related Articles | Metrics
    Since the security of multi-signature protocols has so far only been considered for a single instance, the security of a multi-signature protocol running concurrently with many other protocols was studied in the universally composable security framework. Firstly, the ideal functionality of a multi-signature protocol was defined formally. Then, based on Waters' signature scheme, a multi-signature protocol was presented and proved Universally Composable (UC) secure. The presented protocol can therefore run safely in multi-protocol execution environments such as the Internet.
    Information security
    Dynamic general secret sharing based on bilinear pairing
    2010, 30(11):  3036-3037. 
    Abstract ( )   PDF (311KB) ( )  
    Related Articles | Metrics
    An efficient multi-secret sharing scheme with a generalized access structure based on bilinear pairing was proposed. Each participant's private key was used as his secret share, and no secure channel was needed between the dealer and the participants. The participants' shadows did not need to be reselected when the system admitted a new participant or removed an old one, which makes the scheme easier to implement. The analysis shows that the scheme is correct, can prevent cheating attacks by participants, and allows the participants' sub-secrets to be shared.
    Stream cipher encryption scheme based on piecewise nonlinear chaotic map
    2010, 30(11):  3038-3039. 
    Abstract ( )   PDF (420KB) ( )  
    Related Articles | Metrics
    A stream cipher encryption scheme was designed based on a piecewise nonlinear chaotic map. The control parameter and iteration number of the piecewise nonlinear chaotic map were produced by computation from the Logistic map and the Henon map, and its outputs were added to the plaintext with modular arithmetic to obtain the ciphertext. The simulation and security analysis indicate that the proposed scheme possesses a large key space and high sensitivity to plaintext and key. It can resist brute-force attack, differential attack and statistical attack efficiently, and also has excellent real-time characteristics.
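    The modular-addition structure of such chaotic stream ciphers can be sketched with a single map. This is a heavily simplified illustration: only the logistic map is used here, whereas the scheme above drives a piecewise nonlinear map with both the Logistic and Henon maps; it is not a secure cipher, just the keystream-plus-plaintext mechanics.

```python
def chaotic_stream_encrypt(data: bytes, x0: float, mu: float = 3.99) -> bytes:
    """Toy chaotic stream cipher: key = (x0, mu) of the logistic map.

    Each iteration of x -> mu*x*(1-x) is quantized to a keystream byte,
    which is added to the plaintext byte modulo 256.
    """
    x, out = x0, bytearray()
    for byte in data:
        x = mu * x * (1.0 - x)         # iterate the chaotic map
        k = int(x * 256) % 256         # quantize state to a keystream byte
        out.append((byte + k) % 256)   # ciphertext = plaintext + key mod 256
    return bytes(out)

def chaotic_stream_decrypt(data: bytes, x0: float, mu: float = 3.99) -> bytes:
    """Inverse operation: subtract the same keystream modulo 256."""
    x, out = x0, bytearray()
    for byte in data:
        x = mu * x * (1.0 - x)
        k = int(x * 256) % 256
        out.append((byte - k) % 256)
    return bytes(out)
```

The sensitivity claims in the abstract correspond to the fact that a tiny change in x0 produces a completely different keystream after a few iterations.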
    High capacity reversible watermarking algorithm based on prediction and sorting
    2010, 30(11):  3040-3043. 
    Abstract ( )   PDF (586KB) ( )  
    Related Articles | Metrics
    Concerning the fact that most existing reversible watermarking algorithms need a location map, a reversible watermarking algorithm for images without a location map was presented. The algorithm employed full-enclosing prediction based on a new, highly efficient sorting technique, producing a sorted set of prediction errors into which data could be embedded with low distortion. The experimental results clearly indicate that this scheme is superior to most existing reversible watermarking algorithms and can embed large amounts of data with low distortion.
    Self-embedded watermarking algorithm for precise image tamper location against JPEG compression
    2010, 30(11):  3044-3045. 
    Abstract ( )   PDF (496KB) ( )  
    Related Articles | Metrics
    To improve the precision of tamper localization and the capability of resisting JPEG compression, a new image dual-watermarking scheme for both tamper detection and tampered image recovery was presented. The basic principle of the scheme was to perform a 1-level discrete wavelet transform on the original image, and then embed the authentication watermark and the self-recovery watermark into the high-frequency and low-frequency components respectively. The experimental results show that the proposed scheme not only can detect and locate tampering accurately, but is also robust against JPEG compression.
    Network threat analysis based on vulnerability relation model
    WANG Chun-zi HUANG Guang-qiu
    2010, 30(11):  3046-3050. 
    Abstract ( )   PDF (879KB) ( )  
    Related Articles | Metrics
    To solve the problems of existing network vulnerability models and threat analysis methods, a network vulnerability relation model based on an extended-time Petri net was proposed. By introducing the complexity and harmfulness of network attacks and defining the quantization of each index, a generation algorithm for the vulnerability relation model was given. Combined with the definition of network threat, a non-target-oriented network threat analysis method based on an improved Dijkstra algorithm was presented. The model is suitable for describing complicated network attacks and can effectively reduce the scale of the state space. The experiment proves the correctness and performance of the vulnerability relation model, and the threat analysis method based on the model is more reasonable and effective.
    Application of clustering algorithm based on density and grid in intrusion detection
    2010, 30(11):  3051-3052. 
    Abstract ( )   PDF (456KB) ( )  
    Related Articles | Metrics
    After discussing some problems of current intrusion detection techniques, an intrusion detection method applying a clustering algorithm based on density and grids was proposed. The algorithm shifted from non-dense parts to dense parts based on CLIQUE, avoiding the inaccuracy of the clustering results of the CLIQUE algorithm. It retains the merit of grid-based clustering, namely low time and space complexity, and the merit of density-based clustering, namely good noise immunity. Simulation experiments on the KDDCUP99 data sets show the effectiveness and feasibility of the clustering algorithm.
    Network and communications
    Efficient packet classification algorithm based on rules compression
    2010, 30(11):  3053-3055. 
    Abstract ( )   PDF (596KB) ( )  
    Related Articles | Metrics
    It was found that the search time and storage space performance of the fast packet classification algorithm EGT-PC are degraded by redundant copies of rules. According to the aggregation characteristics of rules, a new rule compression mechanism for the original algorithm was designed, and a new packet classification algorithm, EGT-SC, was put forward. The experiments show that the new algorithm improves search time and storage space performance significantly.
    New MPH-based delay-constrained Steiner tree algorithm
    2010, 30(11):  3056-3058. 
    Abstract ( )   PDF (466KB) ( )  
    Related Articles | Metrics
    In order to optimize cost and decrease complexity under a delay upper bound, the delay-constrained Steiner tree problem was studied. Based on the Delay-Constrained Minimum Path Heuristic (DCMPH) algorithm, with an improved search path, a new MPH-based delay-constrained Steiner tree algorithm was presented. With the new algorithm, a destination node joins the existing multicast tree by selecting the least-cost path; if that path's delay violates the delay upper bound, the least-delay path computed by a shortest path tree algorithm is used instead to join the current multicast tree. In this way, a low-cost multicast spanning tree can be constructed without violating the delay upper bound. The simulation results show that the new algorithm is superior to DCMPH in spanning tree performance and space complexity.
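    The per-destination decision described here (prefer the least-cost path; fall back to the least-delay path when the bound would be violated) can be sketched with two Dijkstra passes. This is our simplification: it decides a single source-to-destination path, whereas the algorithm above joins each destination to the growing multicast tree; all names are illustrative.

```python
import heapq

def dijkstra(graph, src, metric):
    """graph: {u: [(v, cost, delay), ...]}; metric is 'cost' or 'delay'.
    Returns a predecessor map for shortest paths from src under that metric."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, delay in graph.get(u, []):
            nd = d + (cost if metric == "cost" else delay)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return prev

def extract(prev, src, dst):
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def path_delay(graph, path):
    return sum(next(d for w, _, d in graph[u] if w == v)
               for u, v in zip(path, path[1:]))

def join_path(graph, src, dst, delay_bound):
    """Least-cost path if it meets the delay bound, else least-delay path."""
    path = extract(dijkstra(graph, src, "cost"), src, dst)
    if path_delay(graph, path) > delay_bound:
        path = extract(dijkstra(graph, src, "delay"), src, dst)
    return path
```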
    Dynamic branch elimination algorithm for topological design of PTN mesh networks
    2010, 30(11):  3059-3061. 
    Abstract ( )   PDF (521KB) ( )  
    Related Articles | Metrics
    According to the topological characteristics of Packet Transport Network (PTN) mesh networks, an improved dynamic elimination algorithm for topological design, the Dynamic Elimination of Stable Route (SR-DE) algorithm, was proposed to improve the computational efficiency of PTN mesh topology design. The algorithm first analyzed PTN network resources and traffic information; it then dynamically changed the number of branches eliminated in each loop of redundant-link elimination and routed traffic over stable routes, thereby reducing the number of network weight changes, avoiding repeated rerouting of traffic, and increasing computational efficiency. The simulation results show that it can improve the computational efficiency of PTN mesh topology design.
    Research on Hub nodes in scale-free networks
    2010, 30(11):  3062-3064. 
    Abstract ( )   PDF (394KB) ( )  
    Related Articles | Metrics
    A scale-free network is an extremely uneven network in which a small number of nodes (hub nodes) have very high degrees while a large number of nodes have small degrees. Through both theoretical analysis and simulation, the relations between the degrees and number of hubs and the scaling exponent of the network were carefully studied. It is found that a scaling exponent equal to 2 is a typical cutoff for scale-free networks.
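    The dependence of hub counts on the scaling exponent can be illustrated with a back-of-the-envelope calculation: for a degree distribution P(k) ~ k^-gamma, the expected number of nodes with degree at least k_hub is n times the tail mass of the distribution. This helper is our illustration (the threshold and cutoff parameters are arbitrary), not the paper's derivation.

```python
def expected_hubs(n, gamma, k_min=1, k_hub=100, k_max=10**5):
    """Expected number of hub nodes (degree >= k_hub) in an n-node network
    whose degree distribution is P(k) ~ k**-gamma, normalized over
    k_min..k_max. Smaller gamma means a heavier tail and more hubs."""
    norm = sum(k ** -gamma for k in range(k_min, k_max + 1))
    tail = sum(k ** -gamma for k in range(k_hub, k_max + 1))
    return n * tail / norm
```

For example, with n = 10^5 nodes this gives on the order of hundreds of hubs at gamma = 2 but only a handful at gamma = 3, consistent with small exponents producing markedly more hubs.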
    Hybrid sensor network with high throughput and fairness
    2010, 30(11):  3065-3068. 
    Abstract ( )   PDF (641KB) ( )  
    Related Articles | Metrics
    Although high transmission rate and fairness are demanded in Wireless Sensor Networks (WSNs), it is hard to enhance network throughput and fairness due to the constraints of WSNs. Inspired by some special scenarios in practice, Hybrid Sensor Networks (HSNs) can be designed to make up for these drawbacks in WSN deployment. An optimal throughput allocation mechanism for fixed sensor networks was proposed, and three heuristic algorithms for the wire deployment problem were presented: a greedy algorithm, a K-increment clustering algorithm and a hybrid algorithm. The simulation results show that the hybrid algorithm, which performs best among the three, achieves up to 75% improvement in minimum node throughput, so the whole network performance can be significantly improved.
    Design of immune model and clustering algorithm in wireless sensor networks
    2010, 30(11):  3069-3071. 
    Abstract ( )   PDF (484KB) ( )  
    Related Articles | Metrics
    Concerning the problems that Wireless Sensor Networks (WSNs) are prone to energy degradation and difficulties in cluster division, a WSN model based on the artificial immune system, called aiCWSN, was designed; it first divided clusters according to grid ideas and defined the nodes, clusters, etc. A clustering algorithm for aiCWSN was then presented. Tests show that the model and the algorithm can reduce energy consumption and improve the convergence of the network.
    Joint relay selection and power allocation optimization in industrial cognitive radio networks
    2010, 30(11):  3072-3076. 
    Abstract ( )   PDF (721KB) ( )  
    Related Articles | Metrics
    In industrial cognitive radio networks, the wireless channel suffers from interference and serious conflicts; in particular, the metal environment and mobility of industrial scenes cause multipath and shadow fading, so transmission reliability cannot be guaranteed. To solve this problem, a new joint relay selection and power allocation optimization algorithm was proposed based on the concepts of channel sensing probability and channel availability. Three cognitive relay selection schemes, maximum channel gain selection, nearest neighbor selection and harmonic mean selection, were discussed, and the power distributed among the source and relay nodes was selected optimally to minimize the outage probability. The simulation results show that, compared with average power assignment, the proposed algorithm achieves lower system outage probability and improves transmission reliability.
    Nonlinear precoding based on geometric mean decomposition for V-BLAST systems
    2010, 30(11):  3077-3079. 
    Abstract ( )   PDF (469KB) ( )  
    Related Articles | Metrics
    Considering the error propagation effect and high complexity of Vertical Bell Laboratories Layered Space-Time (V-BLAST), a new nonlinear modulo-algebra precoding scheme based on geometric mean decomposition for V-BLAST in MIMO-OFDM downlink systems was proposed. Geometric mean decomposition was used to construct a precoding matrix that gives the same equivalent noise gain for each sub-channel; nonlinear modulo-algebra precoding was applied across the sub-carrier channels of Orthogonal Frequency Division Multiplexing (OFDM) to eliminate interference from other signals at the transmitter, which can effectively eliminate the error propagation of layered space-time codes. At the receiver, the minimum mean square error criterion was used. The simulation results show that the proposed method outperforms traditional methods in system Bit Error Rate (BER), and the complexity of the downlink receiver can be reduced to some extent.
    Typical applications
    Design of performance monitoring system for next generation telecom networks
    2010, 30(11):  3080-3083. 
    Abstract ( )   PDF (653KB) ( )  
    Related Articles | Metrics
    In order to actively discover and effectively track telecom network service quality troubles and support smooth network management transition, a performance monitoring system oriented to the lifecycle management of performance troubles was proposed. With reference to Next Generation Operational Systems and Software (NGOSS) ideas, an end-to-end process flow for lifecycle management of performance troubles was proposed, a shared information and data model for performance monitoring was built, and a component-based system structure was designed. Applications indicate that the system helps effectively discover service quality troubles, supports their analysis, handling, resolution and evaluation, and supports flexible system integration and continuous business evolution.
    Virtual human simulation method based on Java 3D
    2010, 30(11):  3084-3086. 
    Abstract ( )   PDF (510KB) ( )  
    Related Articles | Metrics
    A virtual human simulation method combining 3DS MAX, MS3D and Java 3D programming was proposed, which can achieve relatively realistic, strongly interactive virtual human effects. First, 3DS MAX character animation techniques were used to construct static and dynamic human models; then the underlying basic action clips were converted into MS3D format for Java 3D's skeletal animation model interface; finally, the high-level behaviors of the virtual human were controlled by Java 3D programming. The experiments demonstrate that the method effectively divides responsibilities among character modeling, motion simulation and behavior control, and is also suitable for multi-character, complex-motion virtual human simulation in a network environment.
    Implementation of interactive control in open control system for industrial robot based on embedded PC
    2010, 30(11):  3087-3090. 
    Abstract ( )   PDF (623KB) ( )  
    Related Articles | Metrics
    In order to implement interactive control of a multi-Degree-Of-Freedom (DOF) articulated industrial robot, an open control system for this kind of robot was designed. The hardware was based on an embedded industrial PC and a Field Programmable Gate Array (FPGA), and the functional software was modularized on the RT-Linux operating system. The interactive control signals were transmitted through shared memory, and interactive control was implemented through different M instructions defined in the PLC program. The implementation of interactive control between a 3-axis articulated casting robot and two casting machines proves that the control system is truly open and has strong real-time control ability and reliability.
    Location and path planning of mobile robots based on data fusion
    2010, 30(11):  3091-3093. 
    Abstract ( )   PDF (564KB) ( )  
    Related Articles | Metrics
    Focusing on the sonar range error caused by specular reflection, a weighted fusion method was designed to fuse sonar and camera data so that mobile robots can localize accurately in corner areas, and a path planning method for the robot was then given. The experimental results show that the method enables the robot to pass through a corner safely and smoothly.
    High-speed image acquisition system and FPGA implementation
    2010, 30(11):  3094-3096. 
    Abstract ( )   PDF (458KB) ( )  
    Related Articles | Metrics
    To solve the problems of low speed and poor quality in image acquisition, a high-speed image acquisition system based on Nios II was introduced. First, a Field-Programmable Gate Array (FPGA) was used to control the image sensor, and images were captured through ping-pong operation. Then the principle of area exchange rate was applied to image processing, after which the data were stored and transmitted to the upper device using a BP neural network image compression algorithm. Simulation experiments on the acquired data demonstrate that this system provides image acquisition of higher speed and quality than traditional systems.
    Parametric spectrum estimation of flight vehicle vibration time series based on TVARMA model
    2010, 30(11):  3097-3100. 
    Abstract   PDF (458KB)
    To solve the problem of spectral deviation in Time-Varied Auto-Regressive Moving Average (TVARMA) parametric spectrum estimation, a TVARMA parameter estimation method based on a Genetic Algorithm (GA) and a combined objective function was proposed and then used to estimate the spectrum of flight vibration time series. Firstly, an initial estimate of the model parameters was acquired by the long time-varied auto-regressive and augmented Least-Squares (LS) method; secondly, from the extremum condition of a continuous function, a constraint equation on the model parameters was derived and a combined objective function was constructed based on the penalty function idea; lastly, the GA was used to optimize the initial parameters, taking as optimal the parameters that minimized the combined objective function. The experimental results demonstrate that the new method can overcome the spectrum deviation and improve model precision in both the time domain and the frequency domain.
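The combined objective can be sketched as a generic penalty-function construction; the toy residual, constraint, and weight `mu` below are illustrative assumptions, not the paper's actual TVARMA model:

```python
def combined_objective(params, residual, constraint, mu=10.0):
    """Fitting error plus a quadratic penalty for violating the
    constraint g(params) <= 0, following the penalty-function idea."""
    return residual(params) + mu * max(0.0, constraint(params)) ** 2

# Toy problem: fit target p = 3 subject to p <= 2, i.e. g(p) = p - 2.
residual = lambda p: (p - 3.0) ** 2
constraint = lambda p: p - 2.0
j_feasible = combined_objective(1.0, residual, constraint)    # no penalty
j_violating = combined_objective(2.5, residual, constraint)   # penalized
```

A GA would then minimize `combined_objective` over the parameter space, seeded with the least-squares initial estimate.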
    Weighted multi-step chaotic method of deep pump impeller ablation index in sandy water
    Gang Jiang
    2010, 30(11):  3101-3104. 
    Abstract   PDF (696KB)
    Based on the complete overhaul log data of pump sets 1-6 of a water plant, which used 500VYM pumps from Ebara Corporation, the mechanism by which sandy water ablates the box-shrouded impeller and its serious effects were analyzed. Chaos theory was adopted to forecast the ablation behavior of the impeller, which can be used to plan pump-set duty time, overhaul time, staffing and budget arrangements, purchases of high-priced spare parts, and so on. The analysis algorithm was derived and a comparison experiment was performed. The experimental results can serve as a reference for handling similar problems.
    ECoG signal classification using band power and k-nearest neighbor
    2010, 30(11):  3105-3107. 
    Abstract   PDF (488KB)
    Brain-Computer Interface (BCI) systems support direct communication and control between the brain and external devices without the use of peripheral nerves and muscles. A typical invasive BCI system based on Electrocorticography (ECoG) was analyzed off-line in this paper. Firstly, Band Power (BP) features were used for channel selection, and 11 channels with distinctive features were selected from the 64 channels. Then, BP features were extracted from the 11 ECoG channels, yielding 22-dimensional feature vectors. Finally, the k-Nearest Neighbor (kNN) classifier was used to classify two different mental tasks (imagined movement of the left finger or tongue). The off-line analysis results show that this method achieves good classification accuracy on the test data set.
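A minimal off-line sketch of the two stages, band-power feature extraction and kNN voting, assuming a synthetic sine signal in place of real ECoG data and a plain DFT in place of an FFT:

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within [f_lo, f_hi] Hz via a plain DFT
    (illustrative; a real system would use an FFT)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        if f_lo <= k * fs / n <= f_hi:
            x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += abs(x) ** 2 / n ** 2
    return power

def knn_classify(train_feats, train_labels, feat, k=3):
    """Plain k-nearest-neighbor vote with Euclidean distance."""
    dists = sorted((math.dist(f, feat), lbl)
                   for f, lbl in zip(train_feats, train_labels))
    votes = [lbl for _, lbl in dists[:k]]
    return max(set(votes), key=votes.count)

# Synthetic 10 Hz oscillation sampled at 100 Hz; its power concentrates
# in the 8-12 Hz band.
sig = [math.sin(2 * math.pi * 10 * t / 100) for t in range(100)]
alpha_power = band_power(sig, fs=100, f_lo=8, f_hi=12)
```

In the paper's setting, one such BP value per selected channel and band yields the 22-dimensional feature vector fed to kNN.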
    Anti-Radiation-Missile detection based on parameterized time-frequency analysis
    2010, 30(11):  3108-3110. 
    Abstract   PDF (609KB)
    Based on the characteristics of the Anti-Radiation Missile (ARM) echo, a three-parameter Chirplet Transform (CT) was proposed from the perspective of parameterized Time-Frequency (TF) analysis, combining three kinds of affine TF transforms into a single TF atom. Furthermore, based on the time-shift feature of the three-parameter CT of ARM and aircraft echoes, a new ARM detection method that can be implemented quickly with the Fast Fourier Transform (FFT) algorithm was proposed, taking the difference between the three-parameter CT moduli of the received signal and its delayed version as the detection quantity. Aircraft echo interference could be counteracted effectively, while the ARM echo energy was preserved without much attenuation; therefore, the detection system was simplified. Computer simulation results show that the ARM can be detected quickly and precisely in this way under strong aircraft echo interference and low Signal-to-Noise Ratio (SNR), so that reliable early warning of an ARM launch can be achieved.
    Adaptive threshold speech de-noising based on Bark scale wavelet packet
    2010, 30(11):  3111-3114. 
    Abstract   PDF (585KB)
    When the input signal has a low Signal-to-Noise Ratio (SNR), commonly used speech de-noising algorithms distort the reconstructed signal because weak unvoiced-sound information is lost. To overcome this, this paper presented a new speech enhancement method. Wavelet packet decomposition was used to fit the critical bands of speech, and voiced and unvoiced sounds were processed separately according to the sub-band energy ratio: eight-scale wavelet packet decomposition was employed for unvoiced sounds and four-scale decomposition for voiced sounds. A new adaptive wavelet threshold algorithm was derived on the Bark sub-bands; tracking the noise level in the Bark frequency domain in real time and adaptively adjusting the coefficients increases the accuracy of the threshold decision and effectively reduces reconstruction distortion. The computer simulation results indicate that, compared with traditional algorithms, the new method has obvious advantages in improving the output SNR and reducing speech distortion. When combined with spectral subtraction, it can further improve the quality of speech de-noising.
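The Bark-scale decomposition and real-time noise tracking are not reproduced here; as a minimal sketch of the generic shrinkage step, Donoho's universal threshold stands in for the paper's adaptive rule, applied to illustrative coefficient values:

```python
import math

def soft_threshold(coeffs, thr):
    """Shrink wavelet coefficients toward zero by `thr` (soft rule)."""
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

def universal_threshold(coeffs):
    """Universal threshold sigma * sqrt(2 ln N), with the noise level
    sigma estimated from the median absolute coefficient."""
    n = len(coeffs)
    mad = sorted(abs(c) for c in coeffs)[n // 2]
    sigma = mad / 0.6745
    return sigma * math.sqrt(2.0 * math.log(n))

noisy = [0.1, -0.05, 2.3, 0.02, -1.8, 0.07, 0.01, -0.03]
thr = universal_threshold(noisy)
clean = soft_threshold(noisy, thr)   # small (noise) coefficients vanish
```

An adaptive Bark-band scheme would replace the single global `thr` with one tracked threshold per sub-band.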
    File monitoring system based on minifilter
    2010, 30(11):  3115-3117. 
    Abstract   PDF (419KB)
    File security access control is at the core of bank Automatic Teller Machine (ATM) security. A file monitoring system based on minifilter combined user and process access control rights, monitored files in real time, and achieved secure file access. The minifilter model shortened the development time and was stable and compatible. The log file operation, based on a mutex lock, synchronized log event generation with log writing and improved log-writing efficiency. The file monitoring system enhances file security and system stability.
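The minifilter driver itself is kernel-mode C; the mutex-based synchronization between log-event generation and log writing can be modeled in a few lines, with all names illustrative:

```python
import threading

class SyncLog:
    """Log records guarded by a mutex so concurrent writers never interleave."""

    def __init__(self):
        self._lock = threading.Lock()
        self.records = []

    def append(self, event):
        with self._lock:             # one writer holds the mutex at a time
            self.records.append(event)

log = SyncLog()
workers = [threading.Thread(target=log.append, args=(f"op-{i}",))
           for i in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Every generated event lands in the log exactly once, regardless of thread scheduling.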
    Collision-free interleaver applied to parallel decoding Turbo codes
    2010, 30(11):  3118-3120. 
    Abstract   PDF (538KB)
    To improve the decoding performance of Parallel Decoding Turbo Codes (PDTC), a memory collision-free interleaver that avoids data collisions was proposed. The information was first written into a matrix, and S-random interleaving was then applied to its rows and columns. The simulation shows that the distance spectrum of this interleaver is close to that of a fully random interleaver, that the bit error rate performance of PDTC is better than with other interleavers of the same algorithmic complexity, and that the improvement becomes more obvious as the frame length increases. Hence this design performs outstandingly in PDTC.
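A sketch of S-random permutation generation (applied here to a flat index range for simplicity; the paper applies it to the matrix's rows and columns, and the sizes below are illustrative):

```python
import random

def s_random_interleaver(n, s, max_tries=10000, seed=0):
    """Greedily draw a permutation of range(n) in which any two entries
    within distance s of each other differ by more than s, restarting
    when the greedy placement gets stuck."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        remaining = list(range(n))
        rng.shuffle(remaining)
        perm = []
        for _ in range(n):
            for i, cand in enumerate(remaining):
                # Compare against the last s placed entries only.
                if all(abs(cand - p) > s for p in perm[-s:]):
                    perm.append(remaining.pop(i))
                    break
            else:
                break                # stuck: restart with a new shuffle
        if len(perm) == n:
            return perm
    raise RuntimeError("no S-random permutation found; lower s")

perm = s_random_interleaver(32, s=3)
```

The spreading constraint is what breaks up short error patterns and pushes the distance spectrum toward that of a random interleaver.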
    Design and implementation of floating-point multiplier in X-DSP
    2010, 30(11):  3121-3125. 
    Abstract   PDF (847KB)
    To meet the performance, power, and area requirements of the floating-point multiplier in X-DSP, the architecture of X-DSP was studied and the characteristics of all instructions related to its floating-point multiplier were analyzed. A high-performance, low-power floating-point multiplier was designed and implemented, using the Booth 2 encoding algorithm, a 4:2 compressor tree structure, and a 4-stage pipeline. The floating-point multiplier was synthesized with Design Compiler using a third-party 0.13μm CMOS process. The results show that the frequency is 500MHz, the circuit area is 67529.36μm², and the total power consumption is 22.3424mW.
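The Booth 2 (radix-4) recoding step can be illustrated as follows: each overlapping 3-bit group of the multiplier selects a partial-product multiple from {-2, -1, 0, +1, +2}. This models only the encoding and the summation of partial products, not the 4:2 compressor tree or pipeline:

```python
# Radix-4 Booth table: (b_{2i+1}, b_{2i}, b_{2i-1}) -> digit d_i.
BOOTH2 = {
    (0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 0): 1, (0, 1, 1): 2,
    (1, 0, 0): -2, (1, 0, 1): -1, (1, 1, 0): -1, (1, 1, 1): 0,
}

def booth2_digits(y, width=8):
    """Recode unsigned `y` (at most `width` bits) into radix-4 digits."""
    # bits[0] is the implicit b_{-1} = 0; two zero bits pad the top.
    bits = [0] + [(y >> i) & 1 for i in range(width)] + [0, 0]
    return [BOOTH2[(bits[i + 2], bits[i + 1], bits[i])]
            for i in range(0, width + 1, 2)]

def booth2_multiply(x, y, width=8):
    """Sum the Booth-recoded partial products: x*y = sum(d_i * x * 4**i)."""
    return sum(d * x * 4 ** i for i, d in enumerate(booth2_digits(y, width)))
```

Radix-4 recoding halves the number of partial products relative to bit-by-bit multiplication, which is why it pairs naturally with a compressor tree.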
    Parking lot management system based on embedded design
    2010, 30(11):  3126-3129. 
    Abstract   PDF (582KB)
    To address the poor stability of traditional PC-based parking management systems working in harsh environments, a parking lot management system based on embedded design was proposed in this paper. In this design, MVC patterns and QT/E techniques were combined to achieve a friendly interactive interface on embedded devices; taking into account the multiple-entry and multiple-exit work mode of large parking lots, a heartbeat signal was used to reinforce the unstable links common in network communication; and data collection based on multithreading effectively guaranteed the real-time performance and reliability of data acquisition. A great number of tests and practical deployments verify that this new parking management system is stable and reduces cost.
    High-speed data acquisition system based on PCI9656
    2010, 30(11):  3130-3133. 
    Abstract   PDF (584KB)
    The design of a high-speed data acquisition and real-time storage system based on the PCI-X bus was described in this paper. To achieve high-speed data transmission, the PCI9656 was used as a PCI bus master I/O accelerator. An IA server supporting 64-bit/66MHz PCI-X and SCSI RAID was adopted as the computer platform to ensure high-speed real-time data storage. The timing control of data acquisition and transmission was realized by a programmable logic device. This system has been successfully applied to echo signal acquisition in a radar system: continuous radar echo acquisition and storage were implemented with a 100MSPS sampling rate, 10-bit quantization precision, and signal distortion of less than 0.2%, meeting the requirements of high-speed acquisition and real-time storage.
    Design and implementation of digital chaotic signal generator
    2010, 30(11):  3134-3137. 
    Abstract   PDF (508KB)
    In order to generate new chaotic pseudo-random sequences, a new chaotic system was constructed. Some basic properties of this chaotic system, including dissipativity, equilibria, stability, the Lyapunov exponent spectrum, and bifurcation, were analyzed in detail through theoretical analysis and numerical simulation. The chaotic behaviors of the system were confirmed by designing an analog chaotic circuit. Moreover, a voltage comparator consisting of integrated op-amps was designed to quantize the chaotic analog signal, and the digital pseudo-random sequences generated by the quantization circuit were obtained experimentally. This approach to generating digital chaotic sequences can be applied in secure communications and information encryption.
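The paper's new chaotic system is not reproduced here; as an illustrative stand-in, the classical logistic map quantized by a fixed comparator threshold shows the bit-generation principle (all parameter values are assumptions):

```python
def logistic_bits(x0, n, r=3.99, threshold=0.5):
    """Iterate x -> r*x*(1-x) and emit 1 whenever the state exceeds the
    comparator threshold, mimicking the voltage-comparator quantizer."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > threshold else 0)
    return bits

bits = logistic_bits(0.37, 64)
```

In the hardware version, the analog chaotic voltage plays the role of `x` and the op-amp comparator plays the role of the threshold test.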
    Design of high-performance barrel integer adder
    2010, 30(11):  3138-3140. 
    Abstract   PDF (351KB)
    To accelerate addition, a new parallel integer addition algorithm, the carry barrel adder algorithm, was proposed. The adder applied half-adders, combining parallel and iterative feedback ideas, and judged whether a summation was complete from the value of the carry chain produced after each round of iteration, which maintains fast calculation at low power consumption. The simulation results show that the proposed barrel integer adder design accelerates addition prominently with only a slight increase in area.
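The half-adder iteration described above can be sketched on integers, where XOR forms the partial sums and a shifted AND forms the carry chain; iteration stops as soon as the carry chain is all zero, so the number of rounds depends on the operands rather than the worst case (this is a behavioral model, not the paper's circuit):

```python
def iterative_add(a, b):
    """Add two non-negative integers using only half-adder logic."""
    rounds = 0
    while b != 0:
        a, b = a ^ b, (a & b) << 1   # partial sum, new carry chain
        rounds += 1
    return a, rounds

total, rounds = iterative_add(13, 7)
```

Operands with short carry chains finish in few rounds, which is the source of the average-case speedup.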
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn