
Table of Contents

    01 February 2011, Volume 31 Issue 02
    Network and communications
    Research and implementation of intelligent selection mechanism in P2P
    2011, 31(02):  293-297. 
    The causes of the P2P traffic problem were analyzed first. Then Internet Service Provider (ISP) and geographic position information were used to design an intelligent node selection mechanism. In this mechanism, a node chose peers that belonged to the same ISP and were geographically close as objects for exchanging data, so the mechanism could confine data flows to the local network and the same ISP's network to the greatest extent, thus reducing the network load and improving data transmission performance. To verify the validity of the intelligent node selection mechanism, Round-Trip Time (RTT) and hop count were tested. The simulation results show that the mechanism can effectively reduce "flow travel" and improve P2P transmission efficiency.
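As an illustration only (the paper's actual selection logic is not given in this abstract), a peer-ranking rule of this kind — same-ISP peers first, geographically nearer peers next — might be sketched as follows; the dictionary keys and the use of straight-line distance are assumptions:

```python
import math

def rank_peers(me, peers):
    """Rank candidate peers: same-ISP peers first, then by geographic distance.

    `me` and each peer are dicts with assumed keys "isp", "lat", "lon".
    """
    def key(peer):
        same_isp = 0 if peer["isp"] == me["isp"] else 1   # prefer same ISP
        dist = math.dist((me["lat"], me["lon"]), (peer["lat"], peer["lon"]))
        return (same_isp, dist)
    return sorted(peers, key=key)
```

A node would then exchange data with the peers at the front of the ranked list, keeping traffic inside its own ISP whenever possible.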
    Entropy-based adaptive QoS routing for wireless sensor networks
    2011, 31(02):  298-300. 
    Concerning the different QoS requirements of different services and the real-time change of QoS indicators during network operation, an Entropy-based Adaptive QoS Routing (EAQR) algorithm for wireless sensor networks was proposed. In EAQR, route establishment was abstracted as a multi-index weighted scoring problem: node load, average energy potential and time delay were selected as QoS evaluation indexes, index weights were determined adaptively by the entropy method, and a sensor node could then choose an optimal node to relay the data. The simulation results show that, compared with Sequential Assignment Routing (SAR) and Energy-aware QoS Routing (EQR), the EAQR algorithm can effectively reduce the average end-to-end delay, decrease packet loss, and prolong the network lifetime.
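The entropy method of determining index weights works on a node-by-indicator matrix: indicators whose values vary more across candidate nodes carry more information and receive larger weights. A minimal sketch of the standard entropy-weight computation (EAQR's own normalization details are not given in the abstract):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: indicators (columns) that vary more across
    candidate nodes (rows) carry more information and get larger weights."""
    n, m = len(matrix), len(matrix[0])
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        e = 0.0
        for v in col:
            p = v / total              # share of node i in indicator j
            if p > 0.0:
                e -= p * math.log(p)
        e /= math.log(n)               # normalized entropy in [0, 1]
        divergences.append(1.0 - e)    # divergence degree of indicator j
    s = sum(divergences)
    return [d / s for d in divergences]
```

An indicator that is identical for every candidate node has maximum entropy and thus weight near zero — it cannot discriminate between relay choices.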
    QoS routing algorithm research based on cognitive radio
    2011, 31(02):  301-303. 
    General wireless network routing algorithms cannot be directly applied to cognitive radio networks; therefore, a new routing algorithm is needed to satisfy end-to-end QoS requirements in cognitive radio networks. Nodes in a cognitive radio network can select channels and switch between spectrum bands autonomously, trying their best to meet capacity demand and avoid intra-flow competition. Combining the basic process of on-demand routing and taking full account of the influence of channel capacity and of interference arising from intra-flow competition on route selection, a capacity- and interference-based routing algorithm suitable for cognitive radio networks was proposed. The experimental results show that this algorithm achieves better end-to-end QoS performance than two other algorithms.
    Scheme of BitTorrent traffic control
    2011, 31(02):  304-307. 
    In order to solve the problem that BitTorrent (BT) traffic occupies too much ingress and egress bandwidth, a cloned tracker was designed and implemented. When a BT client sent its list request to the tracker, the cloned tracker answered in place of the original tracker, returning a peer list selected by an algorithm different from the original tracker's. The cloned tracker fetched and updated its peer list from the original tracker. Finally, with the BT client and the original tracker server left unmodified, the cloned tracker could efficiently control the ingress and egress bandwidth occupied by BitTorrent traffic in a real network environment.
    P2P trust model for resisting collusion and dynamic behavior of node
    2011, 31(02):  308-312. 
    To address the great network cost and the deficiencies in handling node dynamics and collusion attacks in current P2P trust models, a new trust model was proposed. It computed the global reputation value by weighting the local evaluations of directly interacting peers with their evaluation credibility, thereby avoiding an iterative computing process. The standard deviation and concentration ratio of local evaluations were used to identify and control collusion attacks. Moreover, as a node changed, its reputation value and evaluation credibility were updated dynamically. The simulation and experimental results indicate that the new model outperforms existing trust models in reducing network cost and in coping with node dynamics and collusion attacks.
    Joint power control for trunking communication system in TD-SCDMA network
    2011, 31(02):  313-316. 
    Trunking systems based on TD-SCDMA networks are limited by the interference that users create for each other, and conventional power control algorithms cannot eliminate adjacent-group interference. Beamforming was used to suppress interference outside the main lobe, and joint detection over multiple groups was conducted to remove both intra-group and inter-group Multiple Access Interference (MAI) within the lobe for the uplink receiver; a two-step iterative power control algorithm was then combined with joint detection to effectively eliminate interference in the system, thus reducing the system's power control requirements and improving performance. The simulation results validate that the algorithm can effectively reduce system transmission power and suppress interference.
    AOA location and tracking algorithm in non-line-of-sight propagation environment
    2011, 31(02):  317-319. 
    Based on the Geometrically Based Single-Bounce (GBSB) statistical channel model, an Angle of Arrival (AOA)-based location and tracking algorithm for the Mobile Station (MS) in Non-Line-Of-Sight (NLOS) environments was proposed in this paper. The algorithm used a Radial Basis Function (RBF) neural network to correct the NLOS errors, and the positions of the MS were then estimated by a Least-Squares (LS) algorithm. Furthermore, in cooperation with a correlation detection gate, the MS was tracked by the algorithm. The simulation results show that the proposed algorithm can efficiently track a moving MS and achieves good results.
    Available bandwidth measurement accuracy based on non-fluid model
    2011, 31(02):  320-323. 
    Most available bandwidth measurement technologies are based on a single-hop path and the fluid model, and cannot achieve high accuracy on multi-hop paths with bursty cross traffic. Therefore, a non-fluid model was analyzed and a parameter-setting method was introduced to reduce the impact of packet size, cross traffic rate and probe traffic rate on measurement accuracy. An improved packet-pair structure, combined with setting the Time To Live (TTL) value, reduced the measurement error of packet-pair spacing, which improved the robustness of the method on multi-hop paths. The NS2 simulation results show that the measurement accuracy is improved by the parameter-setting method based on the non-fluid model and the improved packet structure.
    Improved LEACH-ID algorithm for wireless sensor networks
    2011, 31(02):  324-327. 
    The classical clustering communication protocol LEACH was analyzed. Concerning the problem that an improper number of cluster heads and too many or too few cluster members may accelerate node death and lower the network's energy utilization, the energy consumption was balanced by calculating the optimal number of cluster heads and controlling cluster membership, which improved the network's energy utilization and prolonged its lifetime. At the same time, a simple and effective method of assigning temporary IDs was given, which can ensure the distinctness of the IDs with high probability. The simulation results indicate that, compared with LEACH, LEACH-ID extends the network lifetime, delays the death of the first node, and enhances energy efficiency.
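For context, classic LEACH elects cluster heads stochastically: in round r, each node that has not yet served as head in the current epoch becomes one with threshold T(n) = p / (1 − p·(r mod 1/p)), where p is the desired head fraction. The improved head-count and membership control of LEACH-ID is not specified in the abstract; the sketch below shows only the classic baseline:

```python
import random

def leach_threshold(p, r):
    """Classic LEACH head-election threshold for round r, head fraction p."""
    return p / (1.0 - p * (r % int(round(1.0 / p))))

def elect_heads(node_ids, p, r, rng=None):
    """Each eligible node independently becomes a head if a uniform
    draw falls below the round's threshold."""
    rng = rng or random.random
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng() < t]
```

The threshold rises over the epoch, so nodes that have not yet been heads become certain to be elected by the epoch's last round — which is exactly the rotation that balances energy drain.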
    Channel assignment scheme and routing protocol with channel reuse in multi-channel mobile Ad Hoc network
    2011, 31(02):  328-331. 
    From the perspective of channel reuse, a simple and effective on-demand fixed channel assignment scheme and routing protocol (CA-AODV-R) was proposed, which combined channel assignment with the classical AODV routing. By carrying channel information in RREQ and RREP/HELLO messages, this protocol could avoid frequently invoking the channel allocation algorithm and did not require modifications to IEEE 802.11. A new fixed channel assignment algorithm that divided the data channels into groups of three was also proposed. The simulation results show that, compared with the single-channel AODV protocol, the CA-AODV-R protocol can enhance network throughput and packet delivery rate, and reduce the end-to-end delay of the network.
    Ant colony optimization applied in Ad Hoc network routing
    2011, 31(02):  332-334. 
    Concerning the shortcomings of the ant colony algorithm, such as its inherently long search time and its tendency to fall into local optima, an improved Ad Hoc network routing algorithm based on the ant colony algorithm was proposed. Route searching capability was improved by using roulette-wheel selection and spreading pheromones, node energy was balanced by bypassing low-energy neighbor nodes, and routing tables were modified accordingly, which improved the performance and adaptability of the algorithm. Comparing the improved routing algorithm with AODV, the simulation results show that the algorithm not only increases the diversity of route search and reduces the convergence time, but also prolongs the network lifetime.
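The roulette-wheel next-hop choice mentioned above can be sketched as follows. Combining pheromone with a heuristic term via exponents alpha and beta is the standard ant-colony form, not necessarily the paper's exact rule; low-energy neighbors could be excluded from `neighbors` or given a zero heuristic:

```python
import random

def roulette_choice(neighbors, pheromone, heuristic, alpha=1.0, beta=2.0, rng=None):
    """Pick the next hop with probability proportional to tau^alpha * eta^beta."""
    rng = rng or random.random
    scores = [(pheromone[n] ** alpha) * (heuristic[n] ** beta) for n in neighbors]
    total = sum(scores)
    r = rng() * total                  # spin the wheel once
    acc = 0.0
    for n, s in zip(neighbors, scores):
        acc += s
        if r <= acc:                   # landed in this neighbor's slice
            return n
    return neighbors[-1]               # numerical fallback
```

Unlike a greedy argmax, every neighbor with nonzero score keeps some probability of being chosen, which is what preserves search diversity.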
    Ant colony optimization and energy management routing algorithm for ZigBee network
    2011, 31(02):  335-337. 
    Prolonging the life of ZigBee networks is an important goal in designing a ZigBee routing protocol. An energy-management routing protocol based on Ant Colony Optimization and Ad hoc On-Demand Distance Vector routing (ACO-AODV) was proposed, which can maintain good network performance while extending the life of ZigBee networks. The simulation results show that the ACO-AODV algorithm is feasible and energy-saving. This approach can maintain a low average end-to-end packet delay while effectively reducing energy consumption, so the design goals of low energy consumption and low delay are achieved.
    Coverage optimization of wireless sensor networks based on chaos particle swarm algorithm
    2011, 31(02):  338-340. 
    To improve the unreasonable distribution caused by random sensor deployment and increase the network coverage rate, an optimization method for wireless sensor network coverage based on Chaos Particle Swarm Optimization (CPSO) was proposed in this paper, taking the network coverage rate as the optimization goal. Exploiting the ergodicity and stochastic properties of chaos, the algorithm can avoid the shortcoming of being easily trapped in local extrema at later evolution stages. The simulation results indicate that the proposed algorithm is superior to particle swarm optimization in coverage optimization.
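Chaos PSO variants typically obtain their ergodic sequences from the logistic map x_{k+1} = μ·x_k·(1 − x_k) with μ = 4, either to initialize particle positions or to perturb stagnating particles. A sketch of chaotic initialization over a square deployment area (the paper's exact use of chaos is not detailed in the abstract, and the seed value is an arbitrary assumption):

```python
def logistic_sequence(x0, n, mu=4.0):
    """Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k); for
    mu = 4 and x0 in (0, 1) the sequence is chaotic and stays in (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

def chaos_init_positions(n_particles, dim, bounds, x0=0.345):
    """Map a chaotic sequence in (0, 1) onto the deployment area."""
    lo, hi = bounds
    seq = logistic_sequence(x0, n_particles * dim)
    return [[lo + (hi - lo) * seq[i * dim + d] for d in range(dim)]
            for i in range(n_particles)]
```

Because the logistic map visits the whole unit interval, the initial swarm covers the search space more evenly than plain uniform sampling with a poor seed.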
    Graphics and image processing
    Extracting disparity map from bifocal monocular stereo vision in a novel way
    2011, 31(02):  341-343. 
    Disparity is a key quantity in stereo vision, as it indirectly encodes the depth information of the scene, and disparity calculation is the basis of depth calculation. Traditional disparity calculation methods all target binocular stereo; in monocular stereo, however, the disparity is radial along the epipolar line. Concerning the characteristics of bi-focal monocular stereo vision, an approach to obtaining a disparity map from bi-focal images was proposed in this paper. A preliminary depth map was obtained from the computed matching costs. Using the Mean-Shift algorithm, discontinuities and mismatches in the preliminary depth map were smoothed and eliminated according to the matched points and the graph cut result. The experimental results show that this method can efficiently obtain disparity maps from pairs of bi-focal images.
    3D voxel reconstruction under multiple constraints in multi-view environment
    2011, 31(02):  344-346. 
    A new voxel reconstruction method under multiple constraints was proposed. Firstly, the scene was discretized into voxels, and target silhouette information was extracted from 2D images obtained from multiple views. Secondly, two constraints were used to recover the target's 3D information: the silhouette constraint and the color consistency constraint. A smoothness constraint was used to address the floating voxels and glitches in the reconstructed model; based on the composition of the three forces, floating voxels and burrs could be suppressed efficiently. Finally, a model recovery algorithm was proposed to fill the holes on the surface of the reconstructed model. The experimental results demonstrate that the method can reconstruct the target's appearance with correct color and texture information, the model is smooth and delicate, and the holes are filled effectively.
    Collision detection based on topology hierarchy graph
    2011, 31(02):  347-350. 
    To improve realtime and accurate performance of collision detection in virtual environment, a collision detection method based on topology hierarchy graph was proposed. Firstly, the original model was separated into convex set using the relationship of topology, and combining the merit of convex set and Oriented Bounding Box (OBB), a topology hierarchy graph for model was constructed, which effectively eliminated nonintersected bounding box and enhanced detection accuracy. By using intelligent search algorithm-improved A* algorithm to search for Potential Collision Sets (PCS), thus the speed and accuracy of collision detection were improved. The experimental results show that the algorithm has higher accuracy and efficiency for realtime collision detection in virtual environment.
    Dressing simulation system for costume structure design
    2011, 31(02):  351-355. 
    Computer-aided design systems have been widely used in costume structure design; however, traditional costume structure design is based on 2D planar drawing and has some shortcomings: the design effect is not intuitive and the design process is not continuous. In addition, most dressing simulations in the field of computer graphics can only achieve a rough morphological result, lack accuracy, and cannot meet the parametric requirements of the costume industry. Hence, in this paper, a dressing simulation system for costume structure design was designed and realized. The system took a standard drawing as input and extracted various characteristic parameters from it. The three-dimensional cloth constructed from these parameters can reflect the original design, which improves the accuracy of the simulation. At the same time, the system can put the costume on the human model quickly, which helps the fashion designer work intuitively and continuously.
    Multi-scale Harris corner detection based on image block
    2011, 31(02):  356-357. 
    Harris corner detection is a classical and widely used algorithm, but it is not scale-invariant. To remedy its single-scale nature and make corner detection more accurate and valid, the concepts of multi-scale space and image blocking were introduced into the Harris algorithm in this paper. Corners were extracted in multi-scale space: at each scale, with the image segmented into blocks, the local maxima of the corner response were taken as candidate corner points. Finally, candidate corners were verified from small scale to large scale and false corners were eliminated, making corner detection more accurate. Comparative tests show that the new algorithm significantly improves corner detection performance.
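The response underlying any Harris variant is R = det(M) − k·trace(M)², where M is the gradient structure tensor summed over a window. A single-scale sketch in pure Python, using central-difference gradients and a box window (the paper's multi-scale verification and blocking steps are omitted):

```python
def harris_map(img, k=0.04, win=1):
    """Harris response over a grayscale image (list of lists of floats),
    with central-difference gradients and a (2*win+1)^2 box window."""
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(win + 1, h - win - 1):
        for x in range(win + 1, w - win - 1):
            sxx = syy = sxy = 0.0                      # structure tensor M
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    sxx += gx * gx; syy += gy * gy; sxy += gx * gy
            R[y][x] = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
    return R
```

A multi-scale scheme would recompute R with smoothing at several scales and, as the abstract describes, keep only candidates that survive from small to large scale.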
    Key frame extraction based on particle swarm optimization
    2011, 31(02):  358-361. 
    Key frame extraction is an important step in video retrieval. In order to effectively extract key frames from different types of video, a key frame extraction algorithm based on particle swarm optimization was proposed in this paper. The method first extracted the global motion and local motion features of each frame, and then extracted video key frames adaptively by Particle Swarm Optimization (PSO). The experimental results show that the key frames extracted by the algorithm are more representative for different types of video.
    New color image multi-level-inpainting method
    2011, 31(02):  362-365. 
    The Color Total Variation (CTV) model can denoise while maintaining the edges of damaged images, but it needs improvement in regions of complicated texture. In this paper, a new multi-level inpainting method for color images was proposed, whose main aim is to improve the visual quality of inpainted images with complicated texture. Damaged images were first processed on the basis of the CTV model, and then a patch-based texture synthesis method was used to improve the inpainting results in regions of complicated texture. The experimental results show that the inpainting performance is better in both the structure and the texture of color images. Moreover, the new multi-level inpainting method can be applied to image inpainting with larger damaged regions.
    Block increasing texture synthesis algorithm based on shuffled frog leaping algorithm and particle swarm optimization
    2011, 31(02):  366-368. 
    A texture synthesis algorithm based on the Shuffled Frog Leaping Algorithm and Particle Swarm Optimization (SFLA-PSO) and block increasing was proposed for example-based synthesis. Synthesis is sped up by using texture blocks whose size is doubled during the synthesis process. As the texture-block matching optimization strategy, SFLA and PSO were combined to accelerate searching and improve global search performance. The experimental results prove that the algorithm can significantly accelerate example-based synthesis while maintaining fine synthesis quality, and can also overcome the PSO algorithm's tendency to fall into local optima.
    Watershed segmentation algorithm based on gradient modification and region merging
    2011, 31(02):  369-371. 
    In order to solve the over-segmentation problem of the traditional watershed algorithm, a new watershed segmentation method based on gradient modification and hierarchical region merging was proposed. Firstly, nonlinear transformation and morphological opening-by-reconstruction and closing-by-reconstruction were used to modify the gradient image; secondly, the floating-point active image was computed as the input of the watershed algorithm; finally, small segmented regions were merged into neighboring larger regions based on the criteria of regional average gray value, contrast control and edge strength. The experimental results show that this method has good robustness and adaptability and can settle the over-segmentation problem effectively.
    Image segmentation by fissive K-mean clustering method
    2011, 31(02):  372-374. 
    Fuzzy CMeans (FCM) clustering algorithm is an efficient unsupervised segmentation method, which is suitable for any classification number without the need to predict the image characteristics, but its clustering result has direct impact from sample noise component and the set of initial conditions. Therefore, a Fissive K-Means(FKM) clustering algorithm for color image segmentation was proposed, which firstly denoised the sample data using median filtering, then preclassified the image samples according to a fissive clustering method to get an initial partition of sample set, finally iteratively optimized segmentation using K-means clustering based on the rule of probability distance from the initial partition. The experimental results show that the algorithm can avoid the misclassification of FCM such as dead center, center overlapping and local minima, and accelerate the segmentation speed.
    Improved FCM algorithm using difference of neighborhood information
    2011, 31(02):  375-378. 
    To overcome the shortcoming that Fuzzy C-means (FCM) cannot deal with image noise, as well as the weaknesses of its common improved variants, an improved FCM algorithm using the difference of neighborhood information was proposed in this paper. It used a Gaussian function to reasonably characterize the spatial and gray-value differences of neighborhood pixels and to adjust the membership of the center pixel, achieving the goal of noisy image segmentation. The experimental results show that the proposed algorithm can effectively deal with images corrupted by Gaussian and salt-and-pepper noise, removing the noise while retaining more complete image details. Its segmentation results are better than those of several improved FCM algorithms in the literature.
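A Gaussian weighting of the kind described — decaying with both spatial distance and gray-level difference, so that pixels across an edge influence the center less than equally distant pixels of similar gray value — might look as follows; the σ values and the exact membership-adjustment rule are assumptions:

```python
import math

def neighbor_weight(dy, dx, g_center, g_neigh, sigma_s=1.0, sigma_g=10.0):
    """Weight of a neighbor pixel at offset (dy, dx): decays with spatial
    distance and with gray-level difference, so edges are not smoothed
    away like noise."""
    spatial = math.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
    gray = math.exp(-((g_center - g_neigh) ** 2) / (2.0 * sigma_g ** 2))
    return spatial * gray
```

The center pixel's membership would then be adjusted toward a weighted average of its neighbors' memberships using these weights.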
    3D sculpture algorithm based on hierarchical slicing theory
    2011, 31(02):  379-382. 
    Several fast slicing algorithms based on STL models were analyzed in this paper. Firstly, sorting the vertices of the triangular facets in ascending order along the slicing direction was proposed to filter out redundant vertices quickly; secondly, a triangle-vertex list and facet vectors were established to reconstruct the topology information of the STL model; finally, an efficient adaptive slicing algorithm was implemented with a geometric continuity algorithm and programmed in C++. The experimental results show that it is feasible to apply the layered manufacturing of rapid prototyping technology to three-dimensional carving CAD/CAM.
    Improved PDE image denoising method based on logarithmic image processing
    2011, 31(02):  383-385. 
    Concerning the defects of the Logarithmic Image Processing Total Variation (LIP_TV) denoising model, an improved Partial Differential Equation (PDE) image denoising method based on LIP was proposed. Based on LIP mathematical theory, a new LIP gradient operator was obtained by introducing four directional derivatives into the original one, which can control the diffusion process effectively because it measures image information comprehensively and objectively. The fidelity coefficient was constructed by adopting a noise visibility function based on the structural characteristics of the human visual system, which can further preserve edge details and avoids estimating the noise level artificially. The theoretical analysis and experimental results show that the improved method is superior in visual effect and objective quality, better removing noise while preserving detailed edge features.
    Color image filtering algorithm based on neighborhood Mean-Shift
    Xiping He
    2011, 31(02):  386-389. 
    Given proper window sizes in the spatial and color domains respectively, the color data were filtered by Mean-Shift clustering, in which the color data in the r-circular neighborhood of the current data point were used as the clustering samples; the image data at the current position were then updated with the newly obtained cluster center. This algorithm overcomes the difficulty of choosing a proper window radius for joint spatial-color Mean-Shift filtering so as to adapt to possible variations in image size. Finally, the experimental results verify the validity of the Mean-Shift filtering.
    Dynamic window-based adaptive median filter algorithm
    2011, 31(02):  390-392. 
    Aiming to solve the problem that the median filtering algorithm copes poorly with high-density noise images and fine-texture images, a dynamic window-based adaptive median filter algorithm was proposed. According to the correlation between a noise point and its surroundings, the new algorithm adjusted the noise-point filter value, which handles image details better. The adaptive strategies of the new algorithm strengthen its denoising performance, making it effective at any noise density. The simulation analysis shows that the new algorithm can effectively improve the peak signal-to-noise ratio of the image, and its denoising effect is more satisfactory than that of other methods.
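A standard adaptive median filter grows its window until the window median is not itself an impulse, and only then replaces the center pixel when it is an extreme value — which is how fine texture survives while high-density impulses are removed. A minimal per-pixel sketch (the paper's dynamic-window rule may differ in detail):

```python
def adaptive_median(img, y, x, max_win=3):
    """Adaptive median at pixel (y, x): enlarge the window until the
    median is not an impulse, then keep the original pixel unless it
    is itself an extreme (impulse) value."""
    h, w = len(img), len(img[0])
    win = 1
    while win <= max_win:
        vals = sorted(
            img[yy][xx]
            for yy in range(max(0, y - win), min(h, y + win + 1))
            for xx in range(max(0, x - win), min(w, x + win + 1)))
        zmin, zmed, zmax = vals[0], vals[len(vals) // 2], vals[-1]
        if zmin < zmed < zmax:                        # median is trustworthy
            return img[y][x] if zmin < img[y][x] < zmax else zmed
        win += 1                                      # enlarge the window
    return zmed                                       # give up: use last median
```

At high noise densities a fixed 3×3 median often returns another impulse; letting the window grow is what restores performance there.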
    New fast prediction mode selection algorithm in H.264/AVC frame
    SU QI
    2011, 31(02):  393-395. 
    In order to obtain the optimal mode, H.264 adopts the Rate Distortion Optimization (RDO) technique to compute the cost of every mode for each macroblock, but this increases computational complexity. A fast intra-frame mode selection algorithm was proposed that builds on the algorithm proposed by Pan (the PAN algorithm). Firstly, the intra prediction type was decided by whether a macroblock was smooth. Secondly, the PAN algorithm was improved and a new algorithm based on the statistical properties of adjacent blocks was proposed. The experimental results show that, compared with the full search algorithm and the PAN algorithm, the proposed method reduces the total encoding time with negligible loss in Peak Signal-to-Noise Ratio (PSNR) and a slightly increased bit rate.
    Pattern recognition
    Paper currency number recognition based on binary tree and Adaboost algorithm
    2011, 31(02):  396-398. 
    A fast weak-classifier training algorithm and a fast caching strategy were used to accelerate Adaboost training. Since the ensemble learning algorithm Adaboost can accurately construct two-class classifiers, paper currency number recognition was formulated, quickly and flexibly, as a series of Adaboost two-class classification problems organized in a binary tree structure. The experimental results demonstrate that the fast Adaboost training algorithm speeds up training, and that the paper currency number recognition system based on the binary tree and Adaboost algorithm achieves a good recognition rate and processing speed; it has been widely used in currency counters, cash sorters and ATMs.
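Formulating multi-class character recognition as a binary tree of two-class problems means each internal node holds one two-class (e.g. Adaboost) classifier that routes a sample left or right: K classes need K − 1 classifiers, and a sample visits only about log₂K of them. A structural sketch, with the per-node classifier abstracted as a `predict` callback (the paper's actual split order and classifiers are not given in the abstract):

```python
def build_tree(labels):
    """Recursively split the label set; each internal node corresponds to
    one two-class classifier deciding left subset vs. right subset."""
    if len(labels) == 1:
        return labels[0]                      # leaf: a single class label
    mid = len(labels) // 2
    left, right = labels[:mid], labels[mid:]
    return (left, build_tree(left), build_tree(right))

def classify(tree, sample, predict):
    """Walk the tree; predict(left_labels, sample) plays the role of the
    trained binary classifier at each internal node."""
    while isinstance(tree, tuple):
        left_labels, left, right = tree
        tree = left if predict(left_labels, sample) else right
    return tree
```

With ten digit classes, classification needs at most four binary decisions per character instead of ten one-vs-rest evaluations.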
    Range profile target recognition based on second-order fuzzy clustering
    2011, 31(02):  399-401. 
    Concerning the sensitivity of the Fuzzy C-Means (FCM) algorithm to the initial cluster centers, a fuzzy clustering method based on Second-Order Fuzzy Clustering (SOFC) was proposed, which exploited the fact that the Transitive Closure (TC) algorithm requires no initialization: the samples were first classified at a certain classification level, the number of classes was selected, and the sample means of these classes were used to initialize the cluster centers of the FCM algorithm. On one hand, this yields good initial cluster centers; on the other hand, choosing the classification level optimizes both the number and the positions of the cluster centers, avoiding local optima and overcoming clustering inconsistency. The algorithm was applied to recognizing three types of aircraft targets from one-dimensional range profile data. The experimental results show that the recognition of the SOFC algorithm is obviously better than that of the FCM algorithm.
    Effective video feature for action recognition
    2011, 31(02):  406-409. 
    A video feature for action recognition was proposed. Observation of 2D videos of human movement shows that different movement behaviors exhibit, to some degree, different stretching changes of the human body and its outline. The outer and inner silhouettes of the body were taken as the pose of the current frame, and pose variation as the movement. The frequencies of the pose-change sequence and its temporal mean squared errors were gathered to construct the feature vectors. Cross-validation, feature selection and linear discriminant analysis were conducted on the collected data. The experimental results show that the feature vectors have good linear separability and robustness, contain appropriate discriminative information, and yield high recognition precision.
    Sudden violence identification algorithm based on motion feature
    2011, 31(02):  410-412. 
    In intelligent monitoring systems, violent behavior in public places should be detected and warned of in time; otherwise it can lead to serious consequences. In this paper, image information at the time of violent behavior was comprehensively abstracted, and an energy algorithm based on motion features was proposed. Digital image features describing motion intensity, motion irregularity and the relative positions of targets during violent behavior were defined, and an energy function combining kinetic energy and potential energy was formulated to identify violent behavior. Finally, experiments on many videos containing various kinds of behavior lead to the conclusion that this method achieves a high identification rate for sudden violent incidents.
    Feature extraction and recognition based on banana-radar echo images
    2011, 31(02):  413-415. 
    It is difficult for highway green channel inspection departments to detect prohibited goods in vehicles by manual methods. To solve this problem, this paper put forward a recognition and classification method based on a Back-Propagation (BP) neural network, using gray-level co-occurrence matrices to extract features from collected radar echo images of bananas. Furthermore, software for the recognition and classification of banana radar echo images was developed. Practical application in the highway green channel inspection department of Henan province shows that the software has good recognition and classification performance.
    Application of improved semi-supervised clustering in MEG brain computer interface
    2011, 31(02):  416-419. 
    Abstract ( )   PDF (610KB) ( )  
    Related Articles | Metrics
    Magneto-Encephalo-Graphy (MEG), which contains the pattern information of the hand movement direction, can be used as an input signal for Brain Computer Interface (BCI). In view of the fact that semi-supervised clustering combines the advantages of prior knowledge of the training data, a semi-supervised fuzzy clustering algorithm based on the training center was put forward. The algorithm was divided into two stages: dimensionality reduction and improved semi-supervised clustering. Principal component analysis and linear discriminant analysis were used to reduce the data from high dimension to low dimension. The improved semi-supervised clustering, based on fuzzy clustering of the training data, added the training center in proportion to the test data center. The experimental results show that the average recognition rate of the proposed algorithm reaches 55.1%, higher than that of the winner of competition Ⅳ in 2008.
    Face recognition algorithm fusing 2DPCA and fuzzy 2DLDA
    2011, 31(02):  420-422. 
    Abstract ( )   PDF (577KB) ( )  
    Related Articles | Metrics
    A new face image feature extraction method combining fuzzy set theory and Two-Dimensional Two-Dimensional Principal Component Analysis-Linear Discriminant Analysis ((2D)2PCALDA) was proposed. Firstly, Two-Dimensional Principal Component Analysis (2DPCA) was used to extract the optimal projective vectors from the face image. Then, the membership degree matrix was calculated by the fuzzy K-nearest neighbor algorithm and merged into the process of Two-Dimensional Linear Discriminant Analysis (2DLDA). Finally, the fuzzy between-class scatter matrix and fuzzy within-class scatter matrix were obtained. The method made full use of the advantages of (2D)2PCALDA: it not only effectively extracted the row and column recognition information, but also made full use of the distribution information of the samples. Experiments on the Yale and FERET face databases show that the method achieves better recognition results than (2D)2PCALDA and (2D)2PCA.
    Dorsal hand vein recognition based on wavelet decomposition and K2DPCA-2DLDA
    2011, 31(02):  423-425. 
    Abstract ( )   PDF (445KB) ( )  
    Related Articles | Metrics
    Dorsal hand vein recognition based on wavelet decomposition and K2DPCA-2DLDA was proposed in this paper, and the db4 wavelet was used to decompose the original image. K2DPCA transformation was applied to the low-frequency sub-image to obtain low-dimensional space characteristics. Then, 2DLDA transformation was used to further reduce the dimension and obtain the final feature expression. Finally, the features were classified according to the nearest neighbor classification rule. The experimental results show that the method can improve the dorsal hand vein recognition rate and reduce the recognition time effectively.
    Discrimination between closed and open shell pistachio nuts using machine vision
    2011, 31(02):  426-427. 
    Abstract ( )   PDF (435KB) ( )  
    Related Articles | Metrics
    To improve the quality of pistachio nuts, it is necessary to remove the closed shell ones. A single-row, equally spaced method was used to carry the pistachio nuts on the conveyor belt, and an image of every pistachio nut was captured by camera during transmission. Then an image processing algorithm was used to detect whether its shell was open or not. Finally, a gas nozzle at the end of the conveyor belt was used to blow away the closed shell pistachio nuts. The pistachio image was scaled down to 0.2 times its size, and the centroid of the binary pistachio image was found. After that, three horizontal pixel lines were drawn across the binary pistachio image, positioned at the centroid of the pistachio, the middle of its upper part and the middle of its lower part. The gray value changes of all pixels on the three lines were calculated respectively, and the features of the three pixel lines were integrated to judge whether the pistachio shell was open or not. The experimental results prove that the correct discrimination rates for open shell and closed shell pistachio nuts are 93% and 100% respectively.
    Database and data mining
    Parallel PSO combined with K-means clustering algorithm based on MPI
    2011, 31(02):  428-431. 
    Abstract ( )   PDF (798KB) ( )  
    Related Articles | Metrics
    The performance of traditional serial clustering algorithms cannot meet the needs of clustering huge amounts of data. To enhance clustering performance, a new clustering algorithm combining parallel Particle Swarm Optimization (PSO) with K-means based on MPI was proposed in this paper. Firstly, the improved PSO was combined with K-means to enhance the capacity of global search, and then a new parallel clustering algorithm was proposed and compared with the K-means and PSO clustering algorithms. The experimental results show that the new algorithm has better global convergence and a higher speed-up ratio.
    Improved K-means algorithm and its implementation based on density
    FU De-Sheng
    2011, 31(02):  432-434. 
    Abstract ( )   PDF (441KB) ( )  
    Related Articles | Metrics
    The initial clustering centers of the traditional K-means algorithm were generated randomly from the data set, and the clustering result was unstable. An improved K-means algorithm that optimizes the initial clustering centers based on density was proposed, which selected the k points with the largest mutual distances in the high-density region as the initial centers. The experimental results demonstrate that the improved K-means algorithm can eliminate the dependence on the initial cluster centers, and the clustering result is greatly improved.
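The abstract gives no pseudocode; a minimal sketch of the described initialization, under the assumption that density is estimated by counting neighbors within a fixed radius (the paper's exact density measure is not stated), might look like:

```python
import math

def density_init(points, k, radius):
    """Pick k initial K-means centers: restrict to the high-density region,
    then greedily choose points with the largest mutual distances."""
    # density of a point = number of neighbors within `radius`
    dens = [sum(1 for q in points if math.dist(p, q) <= radius) for p in points]
    avg = sum(dens) / len(dens)
    high = [p for p, d in zip(points, dens) if d >= avg]  # high-density region
    # start from the densest point, then repeatedly take the farthest point
    centers = [max(high, key=lambda p: dens[points.index(p)])]
    while len(centers) < k:
        centers.append(max(high, key=lambda p: min(math.dist(p, c) for c in centers)))
    return centers

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (9, 0)]
print(density_init(pts, 2, 0.5))  # one center per dense cluster; the outlier (9, 0) is excluded
```

Because the centers are chosen deterministically from dense, mutually distant points, repeated runs give the same starting configuration, which is what removes the instability of random initialization.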
    Effectiveness and implementation of low frequency rules based on Apriori algorithm
    2011, 31(02):  435-437. 
    Abstract ( )   PDF (429KB) ( )  
    Related Articles | Metrics
    Firstly, the defects of the classical Apriori algorithm, which is based on a global view and high frequency, were pointed out, and the effectiveness of low frequency rules in transaction databases was presented. By constructing the rules of a C4.5 decision tree, it was also proved that low frequency rules exist in transaction databases. On this foundation, a mining algorithm for low frequency rules based on the Apriori algorithm was given, which was compatible with the classical Apriori algorithm. However, it was not a simple extension of the Apriori algorithm: it theoretically broke the Apriori algorithm's view based on global perspective and high frequency. Finally, a case database was mined by the low frequency rule mining algorithm and the C4.5 algorithm, and the consistency of the two methods and the effectiveness of low frequency rules were proved. Moreover, the effectiveness of the low frequency rule mining algorithm based on Apriori was validated.
    Fast construction algorithm based on FP-tree
    2011, 31(02):  438-440. 
    Abstract ( )   PDF (623KB) ( )  
    Related Articles | Metrics
    The access frequency of the database is one of the key factors affecting association rule mining performance. Based on an analysis of FP-tree, a fast construction algorithm for FP-tree was proposed in this paper that scans the database only once. It dynamically adjusted not only the order of items in the Item Entry Table (IET), but also the order of nodes in the FP-tree that was not consistent with the order of nodes in the IET. Finally, it removed the infrequent items in the IET together with their related nodes in the FP-tree, completing the construction of the FP-tree. The experimental results validate the efficiency of the new algorithm.
    Clustering ensemble method based on co-occurrence similarity
    2011, 31(02):  441-445. 
    Abstract ( )   PDF (816KB) ( )  
    Related Articles | Metrics
    Firstly, a strict mathematical definition of co-occurrence similarity between categorical attribute values was given. Secondly, three other equivalent definitions were proposed. Then, the definition of co-occurrence similarity between attribute values was extended to calculate the co-occurrence similarity between data objects, and was successfully applied in clustering ensemble. Using the co-occurrence similarity between data objects, the individual similarity matrix of an initial clustering result can be calculated by taking the other initial clustering results into account. The experimental results show that the Co-occurrence Similarity based Clustering Ensemble (CSCE) method can effectively identify subtle structures in data and greatly improve the accuracy of clustering ensemble.
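The paper's exact similarity definitions are not reproduced in the abstract; the classic co-association idea behind such ensembles — the fraction of base clusterings that place two objects in the same cluster — can be sketched as follows (function name and data are illustrative):

```python
def co_occurrence_similarity(labelings, i, j):
    """Fraction of base clusterings that assign objects i and j
    to the same cluster (the co-association measure)."""
    same = sum(1 for labels in labelings if labels[i] == labels[j])
    return same / len(labelings)

# three base clusterings of five objects (cluster labels per object)
runs = [
    [0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1],
    [1, 1, 1, 0, 0],
]
print(co_occurrence_similarity(runs, 0, 1))  # objects 0 and 1 always co-occur -> 1.0
print(co_occurrence_similarity(runs, 2, 3))  # co-occur in one of three runs
```

A final consensus clustering is then typically obtained by running any similarity-based clusterer on the resulting pairwise matrix.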
    XML document clustering method based on quantum genetic algorithm
    2011, 31(02):  446-449. 
    Abstract ( )   PDF (615KB) ( )  
    Related Articles | Metrics
    This paper mainly targets XML clustering with kernel methods for pattern analysis and the quantum genetic algorithm, and a new method combining the quantum genetic algorithm with a kernel clustering algorithm was proposed. Firstly, the vector space kernel matrix of the XML documents was generated from frequent-tag sequences, and the initial clusters and clustering centers were obtained with Gaussian kernel functions; then the quantum genetic algorithm's initial populations were constructed from the initial clustering centers. The globally optimal clustering solution was obtained through the combination of the quantum genetic algorithm and the kernel clustering algorithm. The experimental results show that the proposed algorithm is superior to the improved kernel clustering algorithm and K-means in convergence, stability and global optimality.
    Mining frequent items on stream data
    2011, 31(02):  450-453. 
    Abstract ( )   PDF (583KB) ( )  
    Related Articles | Metrics
    A frequent items mining algorithm for stream data (SW-COUNT) was proposed, which used data sampling to mine frequent items of a data stream under sliding windows. Given an error threshold ε, SW-COUNT can detect ε-approximate frequent items of a data stream using O(ε-1) memory space, with O(1) processing time for each data item. Extensive experiments show that SW-COUNT outperforms other methods in terms of accuracy, memory requirement, and time and space efficiency.
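SW-COUNT's internals are not given in the abstract; the O(ε-1)-space, ε-approximate guarantee it cites is the same as that of the classic Misra-Gries summary, which can serve as an illustrative baseline (this is the textbook algorithm, not the paper's method):

```python
def misra_gries(stream, eps):
    """epsilon-approximate frequent items with at most 1/eps counters:
    any item occurring more than eps * len(stream) times is guaranteed
    to survive in the returned summary."""
    k = int(1 / eps)               # number of counters, O(1/eps) space
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:                      # no free counter: decrement all, drop zeros
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ['a'] * 50 + ['b'] * 30 + list('cdefghij') * 2
print(misra_gries(stream, 0.25))   # 'a' and 'b' (both above 25% of 96 items) must appear
```

Each item is processed in amortized O(1) time, matching the per-item cost stated in the abstract; a sliding-window variant additionally expires old counts.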
    Bayesian network classifier based on PSO with predatory escape behavior
    2011, 31(02):  454-457. 
    Abstract ( )   PDF (678KB) ( )  
    Related Articles | Metrics
    Learning a Bayesian network classifier with precise structure has been proven to be an NP-hard problem. A Bayesian network classifier based on the Particle Swarm Optimization-Predatory Escape (PSO_PE) algorithm was proposed in this paper, which could effectively avoid the direct influence of feature reduction on classification performance and accomplish precise Bayesian network learning. In addition, the proposed classifier was applied to employment prediction for vocational colleges and experimentally tested on Weka. The experimental results show that, compared with other Bayesian classifiers, the new classifier learns Bayesian networks more effectively and precisely.
    Access technology of remote database integrated Spring and Aglet
    2011, 31(02):  458-461. 
    Abstract ( )   PDF (583KB) ( )  
    Related Articles | Metrics
    To address the deficiencies of remote database access based on the traditional Client/Server mode, a remote database access technology integrating Spring and Aglets was proposed. The mobile agent platform Aglets was used in the remote database access system, which increased communication efficiency and improved performance significantly. Furthermore, Spring was used as the basic framework of the system, absorbing every component of the system into the Spring IoC container for uniform management, which improved the maintainability and flexibility of the system. The experimental results demonstrate that the proposed technology has an obvious performance advantage over traditional remote database access technology based on the Client/Server mode.
    Decentralized approach for metadata management in computing resource sharing platform
    Hua Yanjiang
    2011, 31(02):  462-465. 
    Abstract ( )   PDF (680KB) ( )  
    Related Articles | Metrics
    A decentralized approach for metadata management in a computing resource sharing platform was proposed, which used a peer network to distribute data and metadata to other nodes on the network instead of a central back-end server. Random access patterns, variable access grains and potentially heavy concurrency were supported in this system. Scalability under heavy concurrency was also achieved, supported by an original metadata scheme using a distributed segment tree built on top of a Distributed Hash Table (DHT). To validate the approach, a series of simulation experiments were conducted. The experimental results show that the aggregated bandwidth always increases when metadata servers are added. A high average client bandwidth was also obtained when workers generated and wrote output data simultaneously.
    Artificial intelligence
    Design of fuzzy selftuning PID controller based on field programmable analog array
    2011, 31(02):  466-469. 
    Abstract ( )   PDF (722KB) ( )  
    Related Articles | Metrics
    To improve the response rate of process control, a method for hardware implementation of a fuzzy self-tuning PID controller based on Field Programmable Analog Array (FPAA) was proposed. All cell circuits, such as the analog multiplier, minimum and summation circuits, and divider, were implemented on eight AN221E04s, and an intact controller was then assembled from the cell circuits. As a hardware circuit, this controller has better real-time quality than a fuzzy PID controller based on software programming; as an analog circuit, it does not need A/D and D/A conversion circuits because all internal signals in the controller are analog values. Compared with digital-circuit based fuzzy PID controllers, this controller has a simpler structure and faster operation. The experimental results show that the fuzzy self-tuning PID controller based on FPAA has small overshoot and steady-state error, and the response time is brought down to microseconds.
    Variable discourse fuzzy control of VAV air-conditioning terminal unit
    2011, 31(02):  470-472. 
    Abstract ( )   PDF (371KB) ( )  
    Related Articles | Metrics
    Concerning the nonlinearity, time variance and imprecise model of Variable Air Volume (VAV) air-conditioning systems, an adaptive variable discourse fuzzy control was proposed to improve the dynamic performance and accuracy of the control system. Based on the derivation of a VAV air-conditioning terminal model, a simulation was conducted. The simulation results show that, compared with traditional PID control and conventional fuzzy control, variable discourse fuzzy control improves the system's dynamic and static characteristics and steady-state accuracy. Therefore, the fuzzy controller's performance is greatly improved, with good control effect.
    Temperature state recognition based on dynamical selection of measuring point location
    2011, 31(02):  473-477. 
    Abstract ( )   PDF (813KB) ( )  
    Related Articles | Metrics
    In order to realize intelligent recognition of temperature states and related analysis of device surface temperature states, an improved Analytic Hierarchy Process (AHP) model was introduced, which could dynamically analyze the relevance among several temperature measuring points and select the key measuring points that reflect the temperature state of the device. At the same time, a Kohonen Self-Organizing Feature Map (SOFM) neural network was established, which could track and recognize the temperature series of the key measuring points over a period of time, so as to show the device's temperature status. Taking a traction motor as an example, Matlab simulation analysis shows that the recognition rate is 89%, which effectively reduces the false alarm rate of fire.
    Decentralized adaptive control of large-scale nonaffine time-varying delay systems
    2011, 31(02):  478-482. 
    Abstract ( )   PDF (761KB) ( )  
    Related Articles | Metrics
    Based on the approximation capability of neural networks, a decentralized adaptive neural network control scheme was proposed for a kind of unknown time-varying delay nonaffine interconnected large-scale system. The unknown nonaffine functions were separated by the mean value theorem, while the restrictions of the unknown time delays and the uncertain time-varying delay interconnections were relaxed by utilizing the separation technique and Young's inequality in the design. The number of adjustable parameters was considerably reduced. In addition, time delay uncertainties were compensated by using Lyapunov-Krasovskii functionals. Through theoretical analysis, all signals in the closed-loop system are proven to be bounded, while the output tracking errors converge to a small neighborhood of the origin. The simulation results show the effectiveness of the proposed control scheme.
    PID controller design of closed-loop gain shaping in CSTR process
    2011, 31(02):  483-484. 
    Abstract ( )   PDF (304KB) ( )  
    Related Articles | Metrics
    To solve the control problem of the Continuous Stirred Tank Reactor (CSTR), a straightforward PID design based on the closed-loop gain shaping algorithm was proposed in this paper to enhance the simplicity and robustness of the PID controller. Firstly, the transfer function of the desired closed-loop control system was assumed to be a first-order system, and the actual closed-loop transfer function consisted of the first-order transfer function and the PID controller. Then, the desired closed-loop transfer function was compared with the actual one, and the PID controller coefficients could thus be calculated. Finally, the robust PID controller was designed for a CSTR system. The simulation results demonstrate that the PID controller has better robust stability and dynamic performance.
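For a first-order plant, the gain-matching step described above can be carried out in closed form; a worked sketch (the paper's CSTR model and exact PID structure are not given, so the plant G(s)=K/(τs+1) here is an illustrative assumption) is:

```python
def gain_shaping_pi(K, tau, T):
    """Closed-loop gain shaping for a first-order plant G(s) = K/(tau*s + 1).
    Requiring G*C/(1 + G*C) = 1/(T*s + 1) gives C(s) = (tau*s + 1)/(K*T*s),
    i.e. a PI controller with the gains computed below."""
    kp = tau / (K * T)   # proportional gain
    ki = 1.0 / (K * T)   # integral gain
    return kp, ki

# plant gain 2, time constant 5 s, desired closed-loop time constant 1 s
print(gain_shaping_pi(K=2.0, tau=5.0, T=1.0))  # -> (2.5, 0.5)
```

The single tuning knob T directly sets the speed of the desired closed loop, which is what makes the design "straightforward" compared with tuning three PID gains independently.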
    Particle swarm optimization algorithm based on adaptive Tent chaos search
    2011, 31(02):  485-489. 
    Abstract ( )   PDF (702KB) ( )  
    Related Articles | Metrics
    To solve the premature convergence problem of Particle Swarm Optimization (PSO), a new PSO algorithm based on adaptive chaos search was proposed. Uniformly distributed particles were produced by the Tent map so as to improve the quality of the initial solutions. A Tent chaotic sequence was then produced around the current optimal location, and the adaptive adjustment of search scopes avoided redundant computation and accelerated the convergence of the evolutionary process. The experimental results show that the new algorithm outperforms several other well-known improved PSO algorithms on many well-known benchmark problems.
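The Tent-map initialization step can be sketched in a few lines; the seed, bounds and swarm shape below are illustrative, not taken from the paper:

```python
def tent_sequence(x0, n, mu=2.0):
    """Generate n values of the Tent chaotic map on (0, 1):
    x_{k+1} = mu*x_k if x_k < 0.5 else mu*(1 - x_k)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs.append(x)
    return xs

def init_swarm(num, dim, low, high, seed=0.37):
    """Map one Tent sequence onto the search box to spread initial particles."""
    vals = tent_sequence(seed, num * dim)
    return [[low + v * (high - low) for v in vals[i * dim:(i + 1) * dim]]
            for i in range(num)]

swarm = init_swarm(5, 2, -10.0, 10.0)
print(swarm)  # 5 particles in [-10, 10]^2, spread by the chaotic sequence
```

The same generator can seed a local chaotic search around the current best position by shrinking `low`/`high` to a window centered on it, which is the "adaptive adjustment of search scopes" the abstract mentions.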
    Fuzzy inference system based on B-spline membership function
    2011, 31(02):  490-492. 
    Abstract ( )   PDF (593KB) ( )  
    Related Articles | Metrics
    It is hard to determine the membership functions and inference rules in fuzzy reasoning. By studying the fuzzy reasoning process and the characteristics of the B-spline function, the reasoning method applying B-spline function fitting to fuzzy membership functions was improved. Through the calculation and selection of the extreme points of error and of curvature, the B-spline data points were obtained. After obtaining the control points by inverting the data points, the curve fitting of the membership function was improved by adding control points, which solved the B-spline membership function fitting problem. A B-spline inference rule was established and a B-spline inference system was constructed, and the final result of the system was calculated as a B-spline hypersurface. Finally, the experimental results validate the effectiveness and feasibility of this method.
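The basis functions underlying such membership functions are standard; a minimal Cox-de Boor evaluation (textbook recursion, not the paper's fitting procedure) shows how a bell-shaped membership value is computed:

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline basis at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

# a quadratic basis on a uniform knot vector, usable as a bell-shaped
# membership function peaking at the middle of its support
knots = [0, 1, 2, 3, 4]
print(bspline_basis(0, 2, 1.5, knots))  # -> 0.75 at the peak
```

Because the bases are nonnegative, locally supported and sum to one on the valid span, a weighted combination of them behaves like a smooth, tunable family of membership functions, which is the property the fitting method exploits.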
    Optimal credible rules based on rough sets in concept hierarchies
    Liang DeCui
    2011, 31(02):  493-497. 
    Abstract ( )   PDF (783KB) ( )  
    Related Articles | Metrics
    Concept hierarchies provide a new way to solve such problems as rapidly increasing data, too many rules, and different decision makers needing to obtain rules from different hierarchies. In the context where every condition attribute has a concept hierarchy, the relations of positive regions and rules from different levels were analyzed based on rough set theory. Then, an algorithm for optimal credible rules based on rough sets in concept hierarchies was proposed, which proceeds along the concept hierarchies from top to bottom. The algorithm improved the existing reduction strategy, realized the reduction, and obtained the optimal credible rules by descriptors. Besides, it also considered whether a new object appeared in the positive region or not, so the effectiveness of rule extraction was improved. Finally, the feasibility of the algorithm was validated by an example.
    Parameter optimization of mixed kernel SVM based on momentum particle swarm optimization
    2011, 31(02):  501-503. 
    Abstract ( )   PDF (576KB) ( )  
    Related Articles | Metrics
    Support Vector Machine (SVM) can be used to solve classification problems, and it is very important to optimize its parameters. With the introduction of a mixed kernel, SVM has one more adjustable parameter. Because this parameter is hard to obtain manually or by experience, Momentum Particle Swarm Optimization (MPSO) was used to find the best combination of the basic SVM parameters and the adjustable parameter of the mixed kernel. Finally, simulations on UCI data show that the proposed algorithm provides an effective way to search for the best parameter combination, giving the SVM higher performance and better classification accuracy.
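The abstract does not define its mixed kernel; a common form — a convex combination of a local (RBF) and a global (polynomial) kernel, with the mixing weight as the extra parameter — can be sketched as follows (the weights and hyperparameters are illustrative):

```python
import math

def rbf(x, y, gamma):
    """Gaussian (RBF) kernel: local, strong interpolation ability."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly(x, y, degree, c=1.0):
    """Polynomial kernel: global, strong extrapolation ability."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def mixed_kernel(x, y, lam, gamma=0.5, degree=2):
    """Convex combination of the two kernels; lam in [0, 1] is the extra
    adjustable parameter that the optimizer (MPSO in the paper) must tune
    together with the usual SVM parameters."""
    return lam * rbf(x, y, gamma) + (1.0 - lam) * poly(x, y, degree)

x, y = [1.0, 0.0], [0.0, 1.0]
print(mixed_kernel(x, y, 0.6))
```

Since a nonnegative combination of positive semidefinite kernels is itself positive semidefinite, the mixed kernel remains a valid SVM kernel for any lam in [0, 1].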
    Information security
    Correlation power analysis and implementation on KATAN32 cipher
    2011, 31(02):  504-506. 
    Abstract ( )   PDF (587KB) ( )  
    Related Articles | Metrics
    KATAN32, proposed at CHES 2009, is a lightweight block cipher with the features of simple hardware implementation and low power cost. A kind of chosen-plaintext correlation power analysis on KATAN32 was presented, by which the true key could finally be extracted. The experimental results show that the method is effective and avoids the influence of environmental factors when the actual circuit is running. The new method only needs to choose 160 different plaintexts and collect 160 power traces to realize the correlation power analysis of KATAN32.
    Threshold rekeying protocol based on multi-receiver signcryption algorithm
    2011, 31(02):  507-510. 
    Abstract ( )   PDF (583KB) ( )  
    Related Articles | Metrics
    An ID-based multi-message and multi-receiver signcryption algorithm was proposed, which can simultaneously signcrypt many messages for many receivers. Based on it, a threshold rekeying protocol for Ad Hoc networks was proposed, and the new algorithm and protocol were analyzed. The analysis indicates that the algorithm achieves authenticity and confidentiality with high efficiency, and that the rekeying protocol not only possesses the characteristics of the above algorithm, but can also resist collusion attacks.
    Design and analysis of USB-Key based strong password authentication scheme
    2011, 31(02):  511-513. 
    Abstract ( )   PDF (446KB) ( )  
    Related Articles | Metrics
    Concerning that the OSPA protocol is vulnerable to replay attacks and denial-of-service attacks, a USB-Key based strong password authentication scheme was proposed in this paper, which used a USB-Key to verify the user's password and store the security parameters. In this scheme, the user's identity is protected by using a temporary identity, and the authentication parameters are computed by a Hash function. The scheme achieves mutual authentication between user and server by transferring the authentication parameters. The security analysis proves that the scheme is resistant to replay attack, impersonation attack and Denial of Service (DoS) attack, has high security, and can be used by users with limited computation ability.
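The abstract does not spell out the message flow; the hash-based challenge-response pattern it relies on can be illustrated with a generic sketch (all names and the message layout are illustrative, not the paper's exact protocol):

```python
import hashlib
import secrets

def h(*parts):
    """Concatenate byte strings and hash them (the scheme's Hash function)."""
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def register(password):
    """Enrollment: server stores a salted verifier, never the raw password."""
    salt = secrets.token_bytes(16)
    return salt, h(salt, password.encode())

def client_response(password, salt, challenge):
    """Client proves knowledge of the password for this fresh challenge;
    the fresh nonce is what defeats straight replay of an old response."""
    return h(h(salt, password.encode()), challenge)

salt, verifier = register("s3cret")
challenge = secrets.token_bytes(16)            # fresh server nonce per session
resp = client_response("s3cret", salt, challenge)
assert resp == h(verifier, challenge)          # server-side check
print("authenticated")
```

In the paper's scheme, the salt/verifier role is played by parameters held inside the USB-Key, and a symmetric response from the server gives the mutual authentication described above.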
    Inverse insert algorithm based on segment hash
    2011, 31(02):  514-516. 
    Abstract ( )   PDF (440KB) ( )  
    Related Articles | Metrics
    A segment hash algorithm with inverse insertion, based on peacock hashing and segment hashing, was proposed. The algorithm keeps the average number of off-chip accesses close to 1 by changing the operating sequence and data structure of peacock hashing. The analysis and experimental results show that the algorithm has high efficiency and can reduce the memory overhead.
    Encryption algorithm based on nonuniform B-spline curve
    2011, 31(02):  517-519. 
    Abstract ( )   PDF (493KB) ( )  
    Related Articles | Metrics
    A new encryption method for software encryption and authentication was proposed. The approach was based on the construction method and geometric characteristics of non-uniform B-spline curves. The key was embedded into the nodes of the non-uniform B-spline basis function, and the control points, which can construct curves of rich shapes, were then generated explicitly, yielding the spline curve. Finally, effective information was collected from the curve as the cipher text for encryption. The encryption algorithm effectively used the curve's feature information for hashed encryption, strengthening the encryption and enhancing its efficiency. The proposed algorithm can be widely used in software encryption and authentication, and it can satisfy the current need for frequent updates.
    Adaptive digital audio watermarking method based on DCT and lifting wavelet transform
    2011, 31(02):  520-522. 
    Abstract ( )   PDF (637KB) ( )  
    Related Articles | Metrics
    In order to protect copyright, an adaptive digital audio watermarking method based on Discrete Cosine Transform (DCT) and lifting wavelet transform was proposed, which exploits the computing power of the lifting wavelet and the largest auditory tolerance of the DC coefficients after DCT. Firstly, the original audio signal was decomposed into low-frequency and high-frequency components by the lifting wavelet transform; then DCT was performed on the low-frequency component, and watermarks were embedded into the DC coefficient sequences according to the largest DC auditory tolerance. Meanwhile, considering the balance between the robustness and imperceptibility of audio watermarking, adaptive adjustments were used for watermark embedding. The experimental results show that the proposed method not only has low computational complexity, but is also robust to malicious replacement operations and common signal attacks such as noise addition and low-pass filtering.
    Departmentrole based finely granular access control model in management information system
    2011, 31(02):  523-526. 
    Abstract ( )   PDF (656KB) ( )  
    Related Articles | Metrics
    Concerning the characteristics and disadvantages of the Role-Based Access Control (RBAC) model, a department-role based fine-grained access control (D-RBAC) model was proposed in this paper. A formal description of the model elements, the implementation mechanism of the model, and the access control algorithm were given. In the D-RBAC model, roles are related to departments, which effectively implements accurate control of access objects and data and resolves the permission assignment problem of the same role in different departments. Fine-grained permission control is realized as well. Through the model, the number of roles is decreased, development tasks are simplified, and the accuracy and flexibility of permission management are increased. Finally, an application example of this model in an equipment safeguard comprehensive information system was given.
    Data recovery algorithm based on file feature on Windows platform
    2011, 31(02):  527-529. 
    Abstract ( )   PDF (479KB) ( )  
    Related Articles | Metrics
    In order to solve the problem of data recovery on the Windows system, a recovery algorithm based on file features, especially for when the directory data of the file system is lost, was presented in this paper. The algorithm identified the start and end sectors of a lost file by scanning all sectors of the disk and matching them against the header and footer feature codes of the lost file, then recovered the file by restoring the data between the start and end sectors. For restored Word documents that could not be displayed because part of their data had been overwritten or for other reasons, the characters users are most interested in were extracted according to the Word document structure and the coding rules of the characters. The experimental results show that the proposed algorithm has good performance.
    Advanced computing and signal processing
    Relationship between makespan of grid job and granularity of job partitioning
    2011, 31(02):  530-532. 
    Abstract ( )   PDF (575KB) ( )  
    Related Articles | Metrics
    Many divisible compute-intensive grand-challenge jobs run on volunteer grids. The relationship between the makespans of such grid jobs and the granularity of job partitioning was studied. Firstly, the relationship between makespan and partitioning granularity of compute-intensive jobs, with and without communication between sub-jobs, was analyzed theoretically. Then, the relationship between makespan and partitioning granularity of a job with and without communication between sub-jobs, running on dedicated grid resources in parallel mode, was simulated. The simulation results show that a grand-challenge job's makespan decreases at first and then increases as granularity increases. The granularity can be finer, and the best makespan will decrease, when the ratio of a sub-job's computation time to its communication time increases. To optimize a job's makespan, the partitioning granularity should be neither too coarse nor too fine.
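A simple analytic model reproduces the "decreases then increases" shape: assume the work is split into n equal sub-jobs run in waves on a fixed pool of workers, with a fixed communication cost per sub-job (the cost model is an illustrative assumption, not the paper's simulator):

```python
def makespan(total_work, comm_per_subjob, n, workers):
    """Makespan when total_work is split into n equal sub-jobs on `workers`
    dedicated resources: per-sub-job compute shrinks with n, while total
    communication overhead grows with n."""
    waves = -(-n // workers)                     # ceil(n / workers): rounds of sub-jobs
    return waves * (total_work / n) + n * comm_per_subjob

spans = {n: makespan(1000.0, 2.0, n, workers=8) for n in (1, 2, 4, 8, 16, 64, 256)}
print(spans)  # falls toward a minimum near n == workers, then rises again
```

Lowering `comm_per_subjob` relative to the compute term shifts the minimum toward finer granularity, matching the abstract's observation about the computation-to-communication ratio.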
    High-order FIR filter design on FPGA using MA distributed algorithm
    2011, 31(02):  533-536. 
    Abstract ( )   PDF (545KB) ( )  
    Related Articles | Metrics
    Concerning the problems of excessive resource consumption and low processing speed, a new high-order FIR filter targeting Field Programmable Gate Array (FPGA) was proposed. Firstly, polyphase decomposition architecture and pipeline technology were adopted to decompose the high-order FIR filter into low-order ones, and the proposed MA distributed algorithm architecture was then used to implement the decomposed filters. A series of serial and parallel FIR filters with orders from 8 to 256 were implemented with ISE 10.1 targeting a Xilinx XC2VP30-7FF896 FPGA device. The experimental results show that the proposed method effectively reduces system resource consumption and improves the timing performance of the system.
    Direct data domain based method for ground moving target indication by bistatic airborne radar
    2011, 31(02):  537-539. 
    Abstract ( )   PDF (633KB) ( )  
    Related Articles | Metrics
    Since the clutter received by bistatic airborne radar exhibits severe range dependence, the clutter-plus-noise covariance matrix cannot be estimated accurately from clutter range samples, so the clutter suppression performance of Space-Time Adaptive Processing (STAP) degrades greatly. To solve this problem, a method using the Direct Data Domain (DDD) approach was proposed. Because the DDD method obtains enough samples by spatial-temporal sliding within a single range gate, it does not need to compensate for clutter range dependence. The simulation results verify the effectiveness of the method.
    Fast implementation of point multiplication over elliptic curves on GF(2^m) based on FPGA
    2011, 31(02):  540-542. 
    The implementation speed of Elliptic Curve Cryptography (ECC) depends on the speed of elliptic curve point multiplication. Point multiplication using the Montgomery algorithm was proposed in this paper. Parallelism was exploited in the modular multiplication and modular squaring algorithms, and Fermat's Little Theorem was used and optimized for modular inversion, thus implementing fast elliptic curve point multiplication. Synthesis and implementation were carried out on a Xilinx XC5VLX220T device. Timing simulation shows that the clock frequency can reach 40MHz and that one point multiplication operation takes only 14.9μs.
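    The Montgomery-ladder structure and the Fermat-based inversion can be illustrated in the multiplicative group of a prime field (a hedged analogue: the paper works with elliptic curve points over GF(2^m), not modular exponentiation, but the ladder and the inversion-by-exponentiation idea are the same):

```python
def mont_ladder_pow(g, k, p):
    """Montgomery ladder: one multiply and one square per scalar bit,
    regardless of the bit's value -- the same ladder structure used for
    elliptic-curve point multiplication."""
    r0, r1 = 1, g % p
    for bit in bin(k)[2:]:
        if bit == '0':
            r1 = (r0 * r1) % p
            r0 = (r0 * r0) % p
        else:
            r0 = (r0 * r1) % p
            r1 = (r1 * r1) % p
    return r0

def inv_fermat(a, p):
    """Fermat's Little Theorem: a^(p-2) = a^(-1) (mod p) for prime p,
    so inversion reduces to a single exponentiation."""
    return mont_ladder_pow(a, p - 2, p)
```

    The uniform per-bit operation sequence is also why the Montgomery ladder is favored in hardware: it keeps the datapath regular and resistant to simple timing analysis.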
    Variable metric threshold algorithm for denoising identification signals of fractional systems based on wavelet analysis
    2011, 31(02):  543-547. 
    The identification theory and methods of fractional systems form an important research direction that has drawn much attention recently, and reducing the noise in identification test data is one of the problems that must be addressed. In this paper, on the basis of wavelet theory and methods, the characteristics of the noise and the output signal of a fractional system were analyzed first. To overcome the limitations of the conventional threshold denoising method, a nonlinear variable metric algorithm for the multilevel wavelet decomposition coefficients was proposed, forming a denoising method for identification signals of fractional systems. The simulation experiments indicate that this method can reduce the noise to a satisfactory level and adapts well to different Signal-to-Noise Ratio (SNR) cases. The purpose of this research is to provide a reference for further identification algorithm design and to improve identification precision.
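    A conventional threshold denoising baseline, of the kind the paper's variable metric algorithm improves upon, can be sketched as follows (the level-dependent scaling rule here is an illustrative assumption, not the paper's metric):

```python
import math

def soft_threshold(coeffs, t):
    """Classic soft-threshold rule: zero out small coefficients, shrink the rest."""
    return [0.0 if abs(c) <= t else math.copysign(abs(c) - t, c) for c in coeffs]

def denoise_levels(detail_levels, sigma, n):
    """Threshold each wavelet decomposition level with its own threshold.
    The universal threshold sigma*sqrt(2*ln(n)) scaled per level is one
    simple 'variable' rule, used here only to show the shape of the approach."""
    base = sigma * math.sqrt(2.0 * math.log(n))
    return [soft_threshold(level, base / math.log(j + 2))
            for j, level in enumerate(detail_levels)]
```

    The point of varying the threshold across levels is that noise energy concentrates in the fine-scale detail coefficients, so a single fixed threshold either over-smooths the signal or under-suppresses the noise.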
    Correlation analysis of LSF and differential LSF parameters
    2011, 31(02):  548-552. 
    In light of the intra-frame correlation of Line Spectrum Frequencies (LSFs) and Differential LSFs (DLSFs) from English/Chinese female/male speech databases, an optimal partition scheme for LSFs and DLSFs was proposed. The experimental results show that if the codebook size is not limited, dividing the 10th-order LSF vector into two sub-vectors as (4,6) gives better quantization performance; otherwise, dividing the LSF vector as (4,2,4) or (4,4,2) performs better. The intra-frame correlation between DLSFs is significantly smaller than that between LSFs, and at least 68% of DLSFs have feeble intra-frame correlation. DLSFs were quantized by DSQ and EEDSVQ, and the experiments show that the quantization performance of DLSFs is better than that of LSFs. In speech coding systems, adopting DLSFs instead of LSFs yields less spectral distortion and achieves high-quality speech at lower bit rates.
    Algorithm for underdetermined blind source separation based on DSNMF
    2011, 31(02):  553-555. 
    The decomposed left matrix of Non-negative Matrix Factorization (NMF) is required to be of full column rank, which limits its application to Underdetermined Blind Source Separation (UBSS). To address this issue, an algorithm for UBSS based on determinant and sparsity constraints of NMF, named DSNMF, was proposed in this paper. On the basis of standard NMF, a determinant criterion was used to constrain the left matrix of NMF, while sparsity was used to constrain the right one. In this way, the reconstruction error, the uniqueness of the mixing matrix and the sparsity of the original sources can be balanced, which leads to underdetermined blind separation of the mixing matrix and the original sources. The simulation results show that DSNMF works well for sources of both good and poor sparsity.
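    The standard NMF machinery that DSNMF builds on can be sketched with the Lee-Seung multiplicative updates plus an L1 (sparsity) penalty on the right matrix (a minimal baseline; the paper's determinant constraint on the left matrix is omitted here):

```python
import numpy as np

def nmf_sparse(V, r, iters=300, lam=0.01, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with an L1 sparsity
    penalty on H; the L1 term simply enters the H-update's denominator.
    Non-negativity is preserved automatically by the multiplicative form."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-9)   # lam = sparsity weight
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

    In DSNMF the interesting case is r larger than the number of observations (underdetermined), where the extra determinant and sparsity terms are what make the factorization identifiable.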
    Independent component analysis with innovation model
    2011, 31(02):  556-558. 
    In order to improve the convergence rate and accuracy of Independent Component Analysis (ICA) algorithms, an ICA method based on an innovation model was proposed. The fundamental mechanism of the innovation model is to reduce the redundancy among the observed samples, thereby increasing the non-Gaussianity of the latent components. Approximately independent image signals were used in the simulation. The simulation results show that the new method outperforms the traditional one in both convergence rate and accuracy.
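    The redundancy-reduction idea behind an innovation model can be sketched with a first-order linear prediction residual (an illustrative simplification; a real innovation model would fit a higher-order predictor, and the paper's exact formulation may differ):

```python
def innovations(x, eps=1e-12):
    """First-order innovation process: subtract the best linear prediction
    from the previous sample, leaving only the unpredictable 'new' part.
    Removing this temporal redundancy is the pre-processing step applied
    before running ICA on the residuals."""
    num = sum(x[n] * x[n - 1] for n in range(1, len(x)))
    den = sum(v * v for v in x[:-1]) + eps
    a = num / den                                  # least-squares AR(1) coefficient
    return [x[n] - a * x[n - 1] for n in range(1, len(x))]
```
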
    Typical applications
    Design and implementation of Android phone based access and control in smart space
    2011, 31(02):  559-561. 
    As a new computing mode, ubiquitous computing, embodied in intelligent space, has physical integration and spontaneous interoperation as its most essential features, through which a smart phone can obtain personalized services. A remote access and control system based on Open Service Gateway initiative (OSGi) intelligent gateway technology was proposed in this paper. Firstly, the system architecture was introduced; then the realization of each module was analyzed; finally, testing on actual devices was carried out. Using surrounding services, an Android mobile phone realizes universal access to and remote control of other devices.
    Design and implementation of parallel architecture for legacy system
    2011, 31(02):  562-564. 
    Service-Oriented Architecture (SOA) provides a solution for reengineering legacy systems to support distributed application environments. However, due to the limitations of the technologies and architectures involved, issues such as lack of multithreading support, memory leaks and lack of parallel support still exist in parts of legacy systems, which immensely restricts their application. Through analysis and research on the communication mechanism of Windows Communication Foundation (WCF), a parallel architecture was proposed to solve these problems. The architecture improves the default WCF architecture by adding a service controller that delivers messages between client and server and selects services for clients. The proposed architecture has solved these problems and been applied in a large financial system.
    FPGA implementation and application of PCI interface controller in bus based computer numerical control system
    2011, 31(02):  565-567. 
    To meet the requirements of high-speed and high-accuracy machining, transmitting large amounts of data fast and stably between the operating system and peripheral interfaces becomes very important in a bus-based multiprocessor advanced Computer Numerical Control (CNC) system. After analyzing the advantages of PCI-based transmission in the CNC system, a PCI interface controller was designed in the FPGA main control chip on the CNC device's interface control board. The key components of the internal structure of the PCI interface controller, as well as its core state machine design, were discussed in detail, and the method of using the PCI interface controller in a bus-based CNC system was explained. Finally, an experimental platform was set up to verify the feasibility and effectiveness of the scheme.
    Directed level graph-based approach to automatic Web services composition
    2011, 31(02):  568-571. 
    To solve the problem of automatic Web services composition with multiple inputs/outputs, an approach based on a directed level graph was proposed. It provides an optimal composition sequence through the following steps: 1) build a directed level graph from the inputs/outputs of the user request; 2) build a complete reduction graph of the directed level graph; 3) search all reachable paths for every node of the complete reduction graph; 4) convert the optimal path for the user request into a service composition sequence. This approach can generate all composition sequences with the least steps and an optimal composition sequence according to the quality of services. Compared with the traditional graph-based approach, it reduces the search space, avoids cycle searching, and can be applied to large-scale Web service repositories.
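    The level-by-level construction in step 1) resembles forward chaining over service inputs/outputs, which can be sketched as follows (a simplified illustration; the reduction graph and QoS-based optimal path selection of steps 2)-4) are omitted):

```python
def compose(services, inputs, goal):
    """Forward chaining over service inputs/outputs.
    services: {name: (required_inputs, produced_outputs)}; goal: set of
    outputs the user needs.  Each 'level' holds every not-yet-used service
    runnable with the data known so far, mirroring a level-graph build."""
    known = set(inputs)
    levels, used = [], set()
    while not goal <= known:
        level = [s for s, (req, _) in services.items()
                 if s not in used and set(req) <= known]
        if not level:
            return None                            # goal unreachable
        for s in level:
            known |= set(services[s][1])
            used.add(s)
        levels.append(level)
    return levels
```

    Because each service enters exactly one level, the number of levels bounds the composition length, which is what makes a least-steps sequence directly readable from the graph.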
    Enterprise unified authentication and authorization system based on Web services
    2011, 31(02):  577-580. 
    In order to achieve unified scheduling and unified management in application systems, unified authentication and authorization has become more and more important and sophisticated. The background and goals of unified authentication and authorization systems were analyzed first; then unified authentication, authorization and Web services were analyzed and compared, and a unified method of enterprise authentication and authorization combining Web services was proposed. This makes it more convenient for administrators to manage concrete application systems and truly realizes completely unified authentication and authorization. The design and implementation of the concrete service specification (interface) in a practical project were also given.
    3D flight path planning based on optimized potential field theory
    2011, 31(02):  581-583. 
    An improved potential field theory was used to plan three-dimensional paths for aircraft. The potential field method was optimized to effectively evade not only radar and fire threats but also terrain threats, giving three-dimensional flight path planning practical value. Simulated terrain elevation data was overlapped with the radar and fire threats according to their respective weights to form an integrated threat field, and the weights of the terrain, radar and fire threats were determined according to the given penetration task. By confining the search area between the start point and the end point, the final flight path is guaranteed to terminate at the end point. Finally, the paths were smoothed by a gradient smoothing algorithm and a curvature-limiting algorithm on acceleration and curvature, so that the resulting paths meet the requirements of maneuverability and flight. The simulation results show that the optimized potential field theory can further account for terrain and other threats (radar and/or fire) near the end point, which improves the method's practicability and reduces the planning time.
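    The attraction/repulsion mechanics of a potential field planner can be sketched in 2-D (a toy illustration only; the paper's method is 3-D and overlays weighted terrain, radar and fire fields, then smooths the result):

```python
def plan_path(start, goal, threats, steps=500, step_size=0.05, k_rep=0.3):
    """2-D artificial potential field: unit attraction toward the goal plus
    repulsion from each threat (x, y, radius) when inside its radius.
    The path follows the normalized force field in fixed-size steps."""
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        gx, gy = goal[0] - x, goal[1] - y
        d_goal = (gx * gx + gy * gy) ** 0.5
        if d_goal < step_size:                      # close enough to stop
            break
        fx, fy = gx / d_goal, gy / d_goal           # unit attractive force
        for tx, ty, radius in threats:
            dx, dy = x - tx, y - ty
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9
            if d < radius:                          # repulsion only inside radius
                w = k_rep * (1.0 / d - 1.0 / radius) / (d * d)
                fx += w * dx
                fy += w * dy
        norm = (fx * fx + fy * fy) ** 0.5 or 1e-9
        x, y = x + step_size * fx / norm, y + step_size * fy / norm
        path.append((x, y))
    return path
```

    The raw path produced this way is jagged, which is why the paper follows planning with gradient and curvature smoothing before checking maneuverability constraints.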
    Feature gene selection for Chinese hamster classification based on support vector machine
    2011, 31(02):  584-586. 
    Concerning the characteristics of the Chinese hamster gene expression profile, such as high dimensionality and small sample size, a method of feature selection for Chinese hamster classification based on Support Vector Machine (SVM) was proposed in this paper. The method uses an improved FDR gene feature score criterion to remove genes irrelevant to classification. A new distance composed of space distance and function distance was proposed as the similarity criterion to remove redundant genes. An SVM was used as the classifier to validate the classification performance of the selected feature genes. The experimental results show that this method effectively removes irrelevant and redundant genes and selects the smallest set of feature genes that classifies Chinese hamsters accurately.
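    The FDR (Fisher Discriminant Ratio) score used for the first filtering step can be sketched as follows (the standard FDR criterion; the paper's improved variant and its composite space/function distance are not reproduced):

```python
def fdr_score(class_a, class_b):
    """Fisher Discriminant Ratio of one gene: (mu_a - mu_b)^2 / (var_a + var_b).
    Higher scores mark genes whose expression separates the two classes better."""
    def stats(xs):
        m = sum(xs) / len(xs)
        return m, sum((x - m) ** 2 for x in xs) / len(xs)
    ma, va = stats(class_a)
    mb, vb = stats(class_b)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

def top_genes(expr_a, expr_b, k):
    """Keep the k genes (one row per gene in each class's expression
    matrix) with the highest FDR scores."""
    scores = [fdr_score(a, b) for a, b in zip(expr_a, expr_b)]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

    Filtering by a univariate score like this removes clearly irrelevant genes cheaply; the redundancy among the survivors is what the paper's distance-based second stage then prunes.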
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn