
Table of Contents

    01 January 2011, Volume 31 Issue 01
Excellent papers from the 8th Chinagraph conference
3D model semantic retrieval method combining descriptive text
    2011, 31(01):  1-5. 
To improve 3D model retrieval performance, and to address the difficulty that semantic-based 3D model retrieval systems have in supporting users' subjective vocabulary, a 3D model semantic retrieval method combining content and descriptive text is proposed. The method first constructs a semantic tree for the 3D models. It then computes the similarity between the input text and the tree nodes with a word-statistics method, gathers the 3D models attached to the nodes with high similarity, and narrows them to a smaller candidate set by semantic constraint. Finally, the user's example 3D model is matched by shape similarity against this smaller, semantically constrained set, and the search results are returned to the user. In the experiments, WordNet definitions of selected words were used as input. Experiments on the Princeton Shape Benchmark (PSB) show that the method achieves better recall-precision results than the compared 3D model retrieval methods.
    Context-aware Focus+Context visualization technique
    2011, 31(01):  6-10. 
Rendering vast amounts of information on relatively small screens has become increasingly problematic. Various Focus+Context visualization techniques have been proposed to address this problem. While some of these techniques give good results for particular visualizations, only a few provide an intuitive and flexible interaction manner. This paper presents a method that allows the user to specify an arbitrary polygon as the focus area. A well-designed energy model is proposed to preserve the details in the focused area. To reduce the distortion in the context region, a glue area can also be specified by the user; it distributes the distortion of the focused area to the other regions by propagating the deformation energy smoothly. A number of experimental results demonstrate that the proposed method can largely improve the visualization effect and help users understand much more information on small screens.
    NPU-based image composition and display in parallel rendering system
    2011, 31(01):  11-15. 
In real-time rendering of massive data sets, PC clusters are a popular solution for real-time parallel rendering. Compositing the images rendered in parallel by the cluster computers is a notorious bottleneck in sort-last clustered rendering systems. This paper presents a network processing unit (NPU)-based image composition method and a sort-last distributed rendering system, called NPUPR. The experimental results show that the NPU-based scheme achieves a frame rate three times higher than the direct-send scheme in the case of four rendering nodes. This paper also presents a scheme to extend the system from four rendering nodes to more. Analytically, the system is fully scalable with negligible penalty in frame rate.
    Computing hierarchical curve-skeletons of 3D objects based on generalized potential field
    Rui Ma
    2011, 31(01):  16-19. 
The curve-skeleton is a one-dimensional abstraction of a 3D object. It reflects the essential topology of the object and is widely used in animation, virtual navigation and matching. This paper gives a new algorithm for computing hierarchical curve-skeletons based on Cornea's generalized potential field algorithm. The algorithm reduces the force-field computation time by using different radii r to simplify the boundary points. It differs in that it uses surface variation instead of curvature, and takes the boundary points with higher local surface variation as seed points to obtain hierarchical curve-skeletons. Because surface variation reflects the properties of point-sampled surfaces better than curvature and is faster to compute, the method is more suitable for point clouds and more robust. The paper analyses the relationship between different r values, the connectivity and the computation time. Experimental results indicate that computing the force field with simplified boundary points takes about half the time of the original algorithm, while the resulting curve-skeletons keep good smoothness and connectivity. The paper also tries another surface-variation-based rule for simplifying boundary points and carefully studies the influence of high-variation points, the neighbourhood size k and the spatial division n on the hierarchical curve-skeletons.
    Edge-based adaptive real-time 3D tracking
    2011, 31(01):  20-24. 
In order to handle the tracking of textureless objects effectively, this paper proposes an edge-based adaptive real-time 3D tracking method. Given the 3D model of the tracked object, the proposed method can robustly detect and track the object edges using historical motion information, and then accurately calculate the extrinsic camera parameters. The contributions of this paper are as follows: 1) adaptive thresholds and historical information are used for motion prediction, which improves the robustness of tracking under fast movement; 2) a RANSAC-based edge matching strategy is proposed, which effectively eliminates outliers and makes the tracking of complex objects robust; 3) the model types supported by edge-based tracking are extended from CAD models to general mesh models by extracting the silhouette from the 3D model. Experimental results demonstrate the robustness and efficiency of the method, which can satisfy the demands of augmented reality and virtual assembly.
    3D model data compression in parallel rendering system Chromium
    2011, 31(01):  25-28. 
Insufficient network bandwidth severely limits the rendering speed of large geometric scenes in parallel graphics rendering systems such as Chromium. An efficient method is proposed that alleviates the network burden through lossless compression of the geometric data in network transmission. The method makes it easy to plug in different algorithms for a specific geometry data compression. We integrated the ZLib and Huffman compression algorithms into Chromium, and tested the speedup and compression ratios of ten OpenGL applications as well as the parallel performance in four configurations. The speed of the tested programs using ZLib improves to varying degrees, by up to a factor of three; the average data compression ratio is above 5, with a maximum of 30; the speedup of parallel rendering on a single server is the highest. The overall performance of the ZLib algorithm is good, and it can effectively reduce the network traffic.
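The abstract reports lossless compression ratios for ZLib on geometry data sent over the network; a minimal sketch, assuming a synthetic vertex buffer, of how such a compression ratio might be measured (the data and compression level are illustrative, not the paper's setup):

```python
import random
import struct
import zlib

# Synthetic vertex buffer: 10 000 triangles, 3 vertices each, xyz floats.
random.seed(0)
vertices = [random.uniform(-1.0, 1.0) for _ in range(10000 * 3 * 3)]
raw = struct.pack(f"{len(vertices)}f", *vertices)

compressed = zlib.compress(raw, level=6)   # lossless DEFLATE, as in the ZLib variant
restored = zlib.decompress(compressed)

assert restored == raw                      # lossless round trip
# Random floats compress poorly; real OpenGL command streams contain far more
# redundancy, which is why the paper reports much higher ratios.
print("compression ratio: %.2f" % (len(raw) / len(compressed)))
```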
    SIFT feature matching algorithm based on second moment matrix
    2011, 31(01):  29-32. 
An improved Scale-Invariant Feature Transform (SIFT) matching algorithm based on the second moment matrix is presented to address the low matching ratio of SIFT when the viewpoint of the image changes. Feature points are detected in scale space; the affine second moment matrix is used to estimate each point's neighborhood; feature vectors are then computed by assigning a dominant orientation to each feature point based on its elliptical neighboring region; finally the feature vectors are matched using Euclidean distance. The experimental results show that the algorithm is as robust as SIFT, achieves good affine invariance under viewpoint change, and greatly improves the matching results.
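The method relies on the second moment matrix (structure tensor) of a point's neighborhood to define an elliptical, affine-adapted region. A minimal numpy sketch of computing that matrix for a grayscale patch; the patch size and Gaussian weighting below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def second_moment_matrix(patch, sigma=1.5):
    """Structure tensor (second moment matrix) of a grayscale patch.

    Its eigenvalues/eigenvectors describe the elliptical shape of the local
    neighborhood used for orientation assignment in the affine-adapted SIFT.
    """
    # Image gradients by finite differences (rows = y, columns = x).
    Iy, Ix = np.gradient(patch.astype(float))
    # Gaussian weights centered on the patch.
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    g = np.exp(-(((x - w // 2) ** 2 + (y - h // 2) ** 2) / (2.0 * sigma ** 2)))
    # Weighted outer products of the gradient, summed over the patch.
    return np.array([[np.sum(g * Ix * Ix), np.sum(g * Ix * Iy)],
                     [np.sum(g * Ix * Iy), np.sum(g * Iy * Iy)]])

# Eigen-decomposition gives the axes of the elliptical neighboring region.
patch = np.random.rand(15, 15)
eigvals, eigvecs = np.linalg.eigh(second_moment_matrix(patch))
```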
    Relief pasting algorithm based on normal vector adjustment
    2011, 31(01):  33-36. 
Relief is a kind of complex surface in which detail is attached to a flat or curved background surface. The detail is usually designed on a planar background and then pasted onto 3D shapes for various applications, which is defined as the relief pasting problem. In this paper, both the relief and the target object are represented as triangular mesh models. Firstly, an area on the target object is specified and parameterized onto a plane, establishing the correspondence between the source relief and the target area. Secondly, an algorithm based on normal vector adjustment is designed to wrap the relief onto the new base with less distortion. Finally, the relief and the target area are merged into a complete triangular mesh. The proposed method is not only suitable for flat or low-curvature target surfaces, but also gives good results for target areas with large curvature.
Real-time rendering of dynamic grass blades based on FFT
    2011, 31(01):  37-41. 
Large-scale natural environments have become an indispensable part of 3D games and simulation systems, and vegetation is an essential element that improves the immersion of simulated scenes. Accurate rendering of geometry-based grass often requires drawing a large number of patches, and the computational cost grows significantly as the number of geometric patches increases. When grass blades are moved by the wind, the geometric model of the grass must change dynamically, which makes real-time rendering difficult. Dynamically creating primitive strips with the geometry shader and image-based rendering are effective ways to reduce the number of geometric patches. Combining geometry-shader-generated primitive strips, image-based rendering and an FFT-based flow control method makes real-time rendering of dynamic grass blades possible.
    Motion fusion technique based on characteristics of human motion
    2011, 31(01):  42-44. 
This paper proposes a motion fusion method that requires no manual intervention and suggests a way to identify the motion cycle based on joint movement. The cycle is calculated by analysing the motion capture data and the angles between the knee and hip joints. High-quality motion blending animation is then obtained after time and space warping, interpolation, and constraint reconstruction. The results show that the proposed algorithm can calculate the motion cycle accurately and makes the fused motion under constraints more realistic.
    Frame design and implementation of realistic and controllable fire animation
    2011, 31(01):  45-49. 
Realistic and controllable fire animation is widely used in many fields, such as film-making, entertainment and fire protection. This paper presents a general flame simulation framework with three phases: pre-processing, fire simulation and post-processing, through which the realistic fire effects the user needs can be generated. To implement this fire animation framework, corresponding solutions are provided for the key problems in every phase. According to the two-phase flow characteristics of fire and the behaviour of turbulent fluids, a turbulence model for fire is proposed to enrich the flame details with limited computing resources. The experimental results show the effectiveness of the proposed flame simulation framework, which can generate not only basic fire phenomena but also fire animations controlled by complex curves, surfaces and spreading rules.
    Separation method for handwritten shape and text based on improved stroke entropy
    2011, 31(01):  50-52. 
As the stroke complexity of shapes and text differs, this paper proposes a shape and text separation method based on calculating the entropy of the strokes. Because the entropy of a document may vary with stroke size, adaptive resampling is introduced to handle different writing stroke sizes. In addition, the paper employs a symmetrical judgement mechanism to handle the separation of text and shapes with equivalent stroke constitutions. The experimental results demonstrate the effectiveness of the proposed method.
    Non-uniform subdivision approach for free-form curves and surfaces of arbitrary degree
    2011, 31(01):  53-57. 
In this paper, a new recursive non-uniform subdivision approach for modeling free-form curves and surfaces is presented. Based on the knot insertion technique, non-uniform subdivision rules are provided for modeling arbitrary-degree curves and surfaces of arbitrary topology with non-uniform knot intervals. In particular, this approach generalizes the traditional uniform subdivisions, such as Doo-Sabin and Catmull-Clark subdivision.
    Coordinate calibration based on vanishing point
    2011, 31(01):  58-60. 
A new coordinate calibration algorithm based on the vanishing point is proposed and applied to moving vehicle tracking and detection. With this algorithm, coordinate calibration can be carried out knowing only the vanishing point and the positions of scaled objects. It is more robust and flexible than other methods because the camera configuration and parameters are not required. The foreground matrix is represented by a 1-D array instead of a 2-D array, which effectively reduces the space complexity of the algorithm.
    Multi-resolution morphable model and face modeling based on features refinement
    2011, 31(01):  61-64. 
An algorithm for aligning prototypical 3D faces based on feature refinement is proposed to establish a multi-resolution face morphable model, which is then used to reconstruct 3D face models through multi-resolution model matching. The algorithm takes the eyes, eyebrows, mouth, nose and other main geometric features to mark a base grid, then aligns the 3D faces and establishes the face morphable model by refinement. According to the characteristics of the morphable model, an input image is refined in the same way as the model, and the face model is reconstructed through multi-resolution matching. Experimental results show that the alignment algorithm aligns the face prototypes well and improves the precision of the morphable model, while the matching algorithm accelerates model matching and improves its precision and efficiency.
    Self-feedback photometric correction approach for multi-projector tiled display
    2011, 31(01):  65-69. 
In order to solve the problems of traditional photometric correction, this paper presents a self-feedback photometric correction approach. The proposed approach first calculates an initial mask, and then updates the photometric correction mask based on images captured by a camera in a feedback process. The feedback process is iterated until the luminance non-uniformity falls below a threshold. The initial mask computed by the approach has better photometric transition than traditional blending masks, and the mask-correction rules adopted in the approach achieve better color uniformity after several self-feedback steps. The experimental results show that the approach can efficiently solve the photometric correction problems of multi-projector autostereoscopic systems.
    Image sharing scheme based on error diffusion
    2011, 31(01):  74-77. 
    In this paper, a novel (n,n)-threshold binary image sharing scheme is proposed. The secret binary image is encoded into n meaningful halftone shares using the proposed modified error diffusion algorithm. Numerical results show that our proposed sharing scheme is secure and effective. The halftone shares generated from this scheme have good visual quality.
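The paper's contribution is embedding the shares into meaningful halftones via modified error diffusion; that step is not reproduced here. A minimal sketch of just the underlying (n,n)-threshold splitting and recovery of a binary image, with n and the image size chosen for illustration:

```python
import numpy as np

def share_binary_image(secret, n, rng=None):
    """Split a binary image into n shares; XOR of all n shares recovers it.

    This is only the (n,n)-threshold splitting step; the paper additionally
    uses modified error diffusion so each share looks like a meaningful
    halftone image, which is omitted in this sketch.
    """
    rng = rng or np.random.default_rng(0)
    shares = [rng.integers(0, 2, size=secret.shape, dtype=np.uint8)
              for _ in range(n - 1)]
    last = secret.copy()
    for s in shares:
        last ^= s                      # accumulate XOR of the random shares
    shares.append(last)                # final share completes the XOR to the secret
    return shares

secret = (np.random.default_rng(1).random((64, 64)) > 0.5).astype(np.uint8)
shares = share_binary_image(secret, n=4)
recovered = np.bitwise_xor.reduce(np.stack(shares), axis=0)
assert np.array_equal(recovered, secret)   # all n shares together reveal the secret
```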
    Modeling adaptive perception system of virtual human based on Q-learning
Zhen Liu
    2011, 31(01):  78-81. 
In the design of modern computer games, it is very important to model a virtual human with believable perception behaviors. In previous research, the range of the perception system is fixed. An adaptive perception system based on Q-learning is proposed, in which a virtual human determines the range of its perception system dynamically according to its appraisal of the objects in the environment. A demo system in which a virtual human searches for specific herbs is realized on a PC. The results show that the model can drive a virtual human to exhibit believable perception behaviors.
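The abstract does not spell out the state and action design; a minimal sketch of the standard tabular Q-learning update such a system would build on, with the action set (adjusting the perception range) assumed here for illustration:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2      # learning rate, discount, exploration
ACTIONS = ["narrow_range", "keep_range", "widen_range"]  # illustrative action set

Q = defaultdict(float)                      # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the perception-range actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard tabular Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```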
    Artificial intelligence
    Multi-objective particle swarm optimization based on crossover and mutation
    2011, 31(01):  82-84. 
In order to minimize the distance between the Pareto front produced by PSO and the global Pareto front, and to maximize the spread of the solutions found, we propose a multi-objective particle swarm optimizer based on crossover and mutation (CMMOPSO). In CMMOPSO, the particles in sparse parts of the Pareto front are first identified and a crossover operator is applied to increase the diversity of the non-dominated solutions; a mutation operator is then applied to particles far away from the Pareto front to increase their probability of flying towards it. On benchmark functions, CMMOPSO achieves better solutions than the compared algorithms, so it can be used as an effective algorithm for multi-objective problems.
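The abstract names a mutation operator for particles far from the Pareto front without defining it; a minimal sketch of one common choice (uniform mutation within the variable bounds), with the bounds and mutation rate assumed for illustration:

```python
import random

def mutate_far_particle(position, bounds, rate=0.1):
    """Uniform mutation for a particle judged far from the Pareto front.

    Each coordinate is reset uniformly at random inside its bounds with
    probability `rate`, pushing the particle to explore new regions. The
    paper's exact operator may differ; this is an illustrative stand-in.
    """
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(position, bounds)]

bounds = [(-5.0, 5.0)] * 4
particle = [4.9, -4.8, 4.7, -4.6]          # illustrative particle far from the front
print(mutate_far_particle(particle, bounds))
```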
    Perceptive particle swarm optimization algorithm for constrained optimization problems
    2011, 31(01):  85-88. 
A new perceptive particle swarm optimization (PPSO) algorithm is proposed for solving constrained optimization problems. A feasibility-based rule is used for updating the individual and global best solutions. Adaptive perceptive ability is assigned to the particles to balance their global and local search and to avoid premature convergence. The velocity of particles near the boundary of the feasible region is revised according to the perception results to enhance the search around the boundary. Simulation results show that the proposed approach converges fast, has good optimization ability, and is suitable for solving constrained optimization problems.
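The abstract relies on a feasibility-based rule for choosing the best solutions but does not state it; a minimal sketch of the widely used version of such a rule (feasible beats infeasible, feasible solutions compare by objective, infeasible ones by total constraint violation), with the example problem invented for illustration:

```python
def constraint_violation(solution, constraints):
    """Total violation of inequality constraints g(x) <= 0 (0 if feasible)."""
    return sum(max(0.0, g(solution)) for g in constraints)

def better(a, b, objective, constraints):
    """Feasibility-based comparison commonly used in constrained PSO."""
    va = constraint_violation(a, constraints)
    vb = constraint_violation(b, constraints)
    if va == 0 and vb == 0:
        return objective(a) < objective(b)   # both feasible: compare objective
    if va == 0 or vb == 0:
        return va == 0                       # feasible beats infeasible
    return va < vb                           # both infeasible: smaller violation wins

# Illustrative use: minimize x0 + x1 subject to x0 * x1 >= 1 (i.e. 1 - x0*x1 <= 0).
objective = lambda x: x[0] + x[1]
constraints = [lambda x: 1.0 - x[0] * x[1]]
print(better([1.0, 1.0], [0.1, 0.1], objective, constraints))   # True: feasible beats infeasible
```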
    Real-time recommendation method based on browsing preferences mining
    2011, 31(01):  89-92. 
Recommendation systems are currently a hot topic in information service research. After analysing the advantages and disadvantages of various algorithms and the main problems of current recommendation techniques, this paper puts forward a real-time recommendation method based on mining users' browsing preferences. Experiments indicate that this method can provide personalized recommendations more accurately and efficiently.
    Method for BBS topic tracking based on semantic similarity
    2011, 31(01):  93-96. 
In studying the BBS topic tracking problem, it was found that most traditional topic tracking methods deal with news reports and are not appropriate when applied to BBS data. This paper exploits the characteristics of BBS and presents a topic tracking method for BBS data based on semantic similarity. The method first constructs keyword tables of the topic and of each post as their representation models, then computes the semantic similarity of the two tables with the help of HowNet, which serves as the correlation degree between the post and the topic. Finally, the correlation degree is used to perform BBS-oriented topic tracking. The method effectively avoids the shortcomings of the vector space model. The experimental results show that it can solve the BBS-oriented topic tracking problem effectively.
    Rough K-Modes clustering algorithm
    2011, 31(01):  97-100. 
Michael K. Ng et al. proposed the new K-Modes clustering algorithm, which adopts a heuristic dissimilarity measure based on relative frequency and improves clustering accuracy. However, when computing the frequency of each attribute category in a cluster, it assumes that every object contributes uniformly to the cluster center. To account for the particular contributions of different objects, a rough K-Modes algorithm is proposed in this paper. By measuring the importance of each object within its cluster through the upper and lower approximations of rough sets, better clustering results are obtained than with the new K-Modes algorithm, and the computational complexity is reduced compared with the rough-set-based improved K-Modes algorithm of Bai Liang et al. while keeping equivalent clustering quality. The experimental results on several UCI data sets illustrate the effectiveness of the proposed algorithm.
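A minimal sketch of the frequency-based dissimilarity the abstract builds on: with uniform weights it reproduces the relative-frequency measure of the new K-Modes algorithm, and the rough K-Modes idea replaces the uniform weights with per-object importance weights (shown here simply as numbers, not derived from rough-set approximations):

```python
def frequency_dissimilarity(obj, cluster_members, weights=None):
    """Frequency-based dissimilarity between an object and a cluster.

    For each categorical attribute, the penalty is 1 minus the (weighted)
    relative frequency of the object's value inside the cluster.
    """
    if weights is None:
        weights = [1.0] * len(cluster_members)    # uniform contribution (new K-Modes)
    total = sum(weights)
    dist = 0.0
    for a in range(len(obj)):
        mass = sum(w for member, w in zip(cluster_members, weights)
                   if member[a] == obj[a])
        dist += 1.0 - mass / total
    return dist

cluster = [("red", "small"), ("red", "large"), ("blue", "small")]
print(frequency_dissimilarity(("red", "small"), cluster))           # uniform weights
print(frequency_dissimilarity(("red", "small"), cluster, [1.0, 0.5, 0.5]))  # weighted
```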
Frequent pattern mining algorithm based on improved FP-tree
    2011, 31(01):  101-103. 
FP-growth is an efficient frequent pattern mining algorithm based on the FP-tree data structure that does not generate candidate sets. However, constructing the frequent pattern tree (FP-tree) requires scanning the data twice, and transactions containing only non-frequent items are also scanned during the second pass. To solve this problem, after analysing the particularities of the FP-tree, we improve the FP-tree construction process and employ an auxiliary storage structure based on a hash table, which saves the time spent searching for items and enhances mining efficiency.
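A minimal sketch of the hash-table idea the abstract mentions: a dict maps each frequent item directly to its header entry (list of tree nodes), so locating an item during insertion and mining avoids scanning a header list. The node layout is simplified and not the paper's exact structure:

```python
class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent, self.count = item, parent, 1
        self.children = {}          # item -> child FPNode, also hash-based

class FPTree:
    def __init__(self):
        self.root = FPNode(None, None)
        self.header = {}            # hash table: item -> list of nodes carrying it

    def insert(self, transaction):
        """Insert one transaction (already filtered to frequent items, sorted)."""
        node = self.root
        for item in transaction:
            if item in node.children:
                node.children[item].count += 1
            else:
                child = FPNode(item, node)
                node.children[item] = child
                self.header.setdefault(item, []).append(child)
            node = node.children[item]

tree = FPTree()
tree.insert(["a", "b", "c"])
tree.insert(["a", "c"])
print({item: len(nodes) for item, nodes in tree.header.items()})   # {'a': 1, 'b': 1, 'c': 2}
```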
    Method based on equivalent effect for transforming vague sets into fuzzy sets
    2011, 31(01):  104-106. 
Current methods for transforming a vague set into a fuzzy set are usually based on intuition and lack theoretical elaboration. To better understand the transformation from vague sets to fuzzy sets, the process is modeled as a multi-person vote without abstention, and a kind of effect function is defined to indicate the degree of support. To ensure that the overall effect of interest remains unchanged during the transformation, a new method based on equivalent-effect conversion is proposed in view of the first mean value theorem for integrals. Finally, the good properties and rationality of the method are analysed on example data.
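The abstract does not give the paper's equivalent-effect function; as a baseline for comparison, a minimal sketch of the common intuition-based conversion it argues against, which splits the hesitation part of a vague value evenly:

```python
def vague_to_fuzzy_median(t, f):
    """Baseline conversion of a vague value [t, 1 - f] to a fuzzy membership.

    A vague element carries a truth membership t and a false membership f
    with t + f <= 1; the hesitation part is pi = 1 - t - f. The equivalent-effect
    method in the paper distributes pi according to an effect function; the
    common baseline shown here simply splits it evenly: mu = t + pi / 2.
    """
    assert 0.0 <= t and 0.0 <= f and t + f <= 1.0
    return t + (1.0 - t - f) / 2.0

print(vague_to_fuzzy_median(0.5, 0.3))   # -> 0.6
```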
    Information security
    Pattern matching engine based on multi-dimensional bloom filters
    2011, 31(01):  107-109. 
To overcome the defects of traditional rule matching engines, a solution using multi-dimensional Bloom filters on FPGA is proposed. The rule matching engine is designed to process packet headers and payloads in parallel. Suspicious strings are picked up by the multi-dimensional Bloom filter engines and then sent to a bit-split state machine for verification. The experimental results demonstrate that the false positive probability of the engine is reduced by using the multi-dimensional Bloom filters, which results in higher throughput.
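The multi-dimensional engine runs several Bloom filters (one per pattern length) in parallel in FPGA hardware; that parallel structure is not reproduced here. A minimal software sketch of a single Bloom filter, with sizes and hash construction chosen for illustration:

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: membership test with no false negatives and a
    tunable false positive rate, determined by m_bits and k_hashes."""

    def __init__(self, m_bits=1 << 16, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
bf.add("GET /etc/passwd")                  # an illustrative suspicious substring
print(bf.might_contain("GET /etc/passwd"), bf.might_contain("harmless"))
```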
    Integrated security over outsourced database services based on encryption
    2011, 31(01):  110-114. 
Privacy requirements have an increasing impact on real-world applications. Technical considerations and many significant commercial and legal regulations demand that privacy guarantees be provided whenever sensitive information is stored, processed, or communicated to external parties. In this paper, we propose a solution to enforce data confidentiality, data privacy, user privacy and access control over outsourced database services. The approach starts from a flexible definition of privacy constraints on a relational schema, applies encryption to information in a parsimonious way, and mostly relies on attribute partitioning to protect sensitive information. Based on an approximation algorithm for the minimal encryption attribute partition with quasi-identifier detection, the approach allows storing the outsourced data on a single database server while minimizing the amount of data represented in encrypted form. Meanwhile, by applying cryptographic techniques in an auxiliary random server protocol, the approach solves the private information retrieval problem to protect user privacy and access control. The theoretical analysis shows that the new model provides efficient data privacy protection and query processing, is efficient in computational complexity, and does not increase the communication cost of user privacy protection and access control.
    Change impact analysis in authorization policies
    2011, 31(01):  115-117. 
Due to the lack of tools for analysing policies, most authorization policies on the Internet are plagued with errors. A policy error can create security holes that compromise the security of the IT system. A major source of policy errors is policy change: authorization policies often need to be changed as networks evolve and new requests emerge. A theory and algorithms for authorization policy change-impact analysis are presented. The algorithms take as input an authorization policy and a proposed change, and output the exact impact of the change, so an administrator can verify a proposed change before committing it. A prototype was built to demonstrate the use of the algorithms.
    Universally composable security of identity-based signature schemes
    Zecheng Wang
    2011, 31(01):  118-122. 
    A definition of universally composable security of identity-based signature schemes is proposed in the universally composable security framework. The equivalence of the universally composable security and the traditional security of identity-based signature schemes is proved. This result shows that an identity-based signature scheme can be used as a primitive block to design more complicated cryptographic protocols.
    Quantitative approach to dynamic security of intrusion tolerant systems
    2011, 31(01):  123-126. 
A quantitative analysis approach to the security of intrusion tolerant systems is proposed. The exposure window of intrusion tolerant systems is introduced into the quantitative analysis; this parameter represents the deteriorating process of the system. A Markov analysis process with the parameter is discussed, and the simulation results obtained with this method conform well to the practical course of security change. The new method provides a theoretical basis for building safer intrusion tolerant systems.
    ID-based bidirectional threshold proxy re-signature
Yu-lei ZHANG, Xiao-dong YANG, Cai-fen WANG
    2011, 31(01):  127-128. 
Based on Shao et al.'s ID-based proxy re-signature, an ID-based bidirectional threshold proxy re-signature scheme in the standard model is presented in this paper. Our scheme eliminates the cost of storing and managing certificates, and solves the problem of excessive rights of the proxy in proxy re-signature schemes. The scheme can tolerate t
    Distributed trust model in wireless Ad Hoc networks
    2011, 31(01):  129-132. 
For the security of wireless Ad Hoc networks, a new trust model is proposed in this paper. Owing to the introduction of a risk element, the model becomes more sensitive to malicious behaviors and more immune to sudden changes in node behavior. Meanwhile, because direct trust is computed with weights assigned to files, the model effectively prevents malicious behavior carried out through the accumulation of reputation. Experiments show that, compared with wireless Ad Hoc networks without a trust model, the proposed model can identify malicious nodes effectively and notably reduce the number of bad transactions.
    Key agreement protocol for Web service authentication in wireless environment
    2011, 31(01):  133-134. 
This paper puts forward a new key agreement protocol for Web service authentication in wireless networks, which provides password privacy, secure mutual authentication and key secrecy. The protocol combines challenge-response techniques with SEKE protocols, and uses the Diffie-Hellman protocol in the key agreement process. Its security properties and its computational and communication performance are also analysed.
    Local search immunization strategy in inhomogeneous networks
    2011, 31(01):  135-138. 
There is much interest in how to immunize a population or a computer network with a minimal number of immunization doses. It is widely accepted that the targeted strategy, based on global knowledge of the nodes' connectivity hierarchy, is the most efficient immunization strategy. We present a newly developed local search immunization strategy for inhomogeneous networks. The proposed strategy achieves the same density of infected nodes while requiring no more immunization doses than the targeted strategy. We use the susceptible-infectious-susceptible epidemic spreading model to demonstrate the efficiency of the proposed strategy on ER, BA scale-free and two real networks. The efficiency of the strategy increases as the clustering coefficient increases.
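The abstract does not detail the local search itself; a minimal sketch of one plausible degree-climbing variant that uses only local neighborhood information (the walk length, budget and toy graph below are illustrative assumptions, not the paper's procedure):

```python
import random

def local_search_immunize(adj, budget, walk_len=5, rng=None):
    """Choose nodes to immunize using only locally visible degree information.

    Start from a random node, repeatedly step to the highest-degree neighbour
    for a few hops, and immunize the node where the walk ends; repeat until
    `budget` distinct nodes are chosen.
    """
    rng = rng or random.Random(0)
    nodes = list(adj)
    immunized = set()
    while len(immunized) < budget:
        v = rng.choice(nodes)
        for _ in range(walk_len):
            neighbours = [u for u in adj[v] if u not in immunized]
            if not neighbours:
                break
            best = max(neighbours, key=lambda u: len(adj[u]))
            if len(adj[best]) <= len(adj[v]):
                break                      # local maximum of degree reached
            v = best
        immunized.add(v)
    return immunized

# Illustrative hub-and-spoke graph: node 0 is the hub and should be found quickly.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4]}
print(local_search_immunize(adj, budget=1))
```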
    Dynamic fuzzy comprehensive trust model based on P2P network
    2011, 31(01):  139-142. 
The trust models employed by existing P2P networks inadequately consider the differences and dynamics of peers' behavior when aggregating peer trust. To address this deficiency, a new dynamic fuzzy comprehensive trust model (DFCTrust) is proposed, in which a time attenuation factor and a fluctuation punishment measure are added on top of static fuzzy comprehensive evaluation. In the model, the satisfaction degree of every transaction is first computed by static fuzzy comprehensive evaluation. Because the transaction context factor is taken into consideration, the model is good at defeating the malicious behavior of being trustworthy in small transactions while cheating in large ones. Secondly, because the time attenuation factor and the fluctuation punishment measure are applied when aggregating peer trust, the model reduces failures due to trading with inactive peers and resists periodic oscillation cheating by malicious peers. Simulation experiments show that DFCTrust adapts well to strategic behavior changes and improves the transaction success rate.
    Join-tree-based contributory group key management scheme for key update
    2011, 31(01):  143-146. 
To provide content protection in large groups with highly dynamic membership, a secure group key management scheme that is efficient in key establishment and update is fundamental. In this paper, a join-tree-based contributory group key management scheme (JDH) is presented to achieve better time efficiency. First, a new key tree topology comprising a main tree and a join tree is put forward. Then, a new join algorithm in the join tree is proposed to reduce the time complexity. Last, the optimal capacity of the join tree is selected through an optimization method. Theoretical analysis and simulations show that the asymptotic average join time is reduced to a function of the group size.
    Reputation-based P2P trust system
    2011, 31(01):  147-150. 
    Concerning the "hot spots" problem of resource access in the enhancedreputation system, the P2P reputation system of resource balance access mechanism was proposed, and automated trust negotiation was joined to improve the system reason mechanism of confidence and negotiation efficiency. The simulation results show that P2P reputation system solves the bottleneck problem providing services of among nodes, and the success rate of interaction between requesting and providing resource nodes has been significantly improved.
    Graphics and image processing
    Compressed video super-resolution reconstruction based on adaptive quantization constrain set
    2011, 31(01):  151-153. 
Super-resolution is the task of reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) images. The quantization constraint set (QCS) is widely used as prior information about the coding process in super-resolution reconstruction of compressed video. An adaptive quantization constraint set (AQCS) is proposed, based on the theory of projection onto the narrow quantization constraint set (NQCS), by exploiting the statistical properties of quantization errors. A new smooth constraint set (SCS) is also proposed by using the properties of DCT block edges. The experimental results show that the proposed AQCS outperforms the traditional QCS in both peak signal-to-noise ratio (PSNR) and subjective image quality.
    Compression method for ordered dither halftone image
    2011, 31(01):  154-155. 
An initial codebook algorithm whose codewords are distributed uniformly over the training set is proposed and combined with a lossless compression method. To overcome the disadvantages of existing initial codebook algorithms, an initial codebook with uniform distribution is produced, and the final codebook is obtained iteratively using the Linde-Buzo-Gray (LBG) algorithm. Experimental results show that the new method achieves better visual effects.
    Interpolation method of color image metamorphosis
    2011, 31(01):  156-158. 
Current gradient algorithms only consider the gradient between two color images without considering the inherent correlation among the three color components. To solve this problem, a new nonlinear image metamorphosis method among multiple images based on bivariate rational interpolation is presented. First, a Newton-Thiele vector-valued interpolation surface is constructed for the RGB pixel values, and then a series of morphing images is generated by resampling this interpolation surface. The experimental results show that this algorithm is better than several other algorithms in both the integrity of image features and the visual quality of the transition images.
    Difference analysis and registration research of a pair of complementary cylindrical panoramic images
    2011, 31(01):  159-162. 
After introducing the complementary catadioptric omnidirectional imaging system, the difference between corresponding points in two complementary cylindrical panoramic images is analysed in detail. Given the coordinate difference of corresponding points in the two images, Harris corners are chosen for registration and the registration transform model is searched, in order to solve the registration problem of the two complementary cylindrical panoramic images. Experiments show the registration results and precision obtained with affine, projective and polynomial models, and suggest that a third-order polynomial model is the most suitable.
    Adaptive stereo matching algorithms for color stereo images
    2011, 31(01):  163-166. 
Two algorithms are proposed in this article to improve the matching of stereo image pairs. One is an improved adaptive image window algorithm: compared with the previous rectangular window, the new window obtained by the improved algorithm includes more intensity variation in low-texture areas, which makes it more effective at approaching the real edge of a texture area in the stereo pair. The other is an improved algorithm for edge pixel areas, which increases the correct matching rate of edge pixels by decreasing the color correlation measurement (Corr) of the edge pixels. These two algorithms are evaluated by matching four stereo images (Tsukuba, Venus, Teddy and Cones) with ground truth provided in the Middlebury stereo database.
    Fast multi-spectral image registration based on NCCSS
    2011, 31(01):  167-169. 
Among image registration methods, normalized cross correlation (NCC) is the most widely used; it is conceptually straightforward and easy to implement. The classic NCC method operates in the spatial domain on a single band and does not exploit the spectral information of all bands of an image. A normalized spatial-spectral cross correlation (NSSCC) method was recently proposed that utilizes all spectral bands for multi-spectral image registration, effectively increasing registration reliability and discrimination compared with classic NCC. However, when the image contains many spectral bands and is large, NSSCC requires a high computation cost. This paper presents an improved algorithm for fast calculation of NSSCC and applies it to multi-spectral image registration. The simulation results show that the improved algorithm can effectively reduce the computation cost of the NSSCC method.
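For reference, a minimal sketch of the classic single-band zero-mean NCC that the spatial-spectral variant extends; the spectral part (correlating across bands) and the fast-calculation trick of the paper are not shown:

```python
import numpy as np

def ncc(template, window):
    """Zero-mean normalized cross correlation between two equally sized patches."""
    t = template.astype(float) - template.mean()
    w = window.astype(float) - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
print(ncc(patch, patch))                         # 1.0: perfect correlation with itself
print(round(ncc(patch, rng.random((16, 16))), 3))  # near 0 for an unrelated patch
```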
    Survey on image mosaic algorithm of unmanned aerial vehicle
    2011, 31(01):  170-174. 
Image mosaicking for Unmanned Aerial Vehicles (UAVs) is an increasingly popular research area and has become a hotspot in photogrammetric cartography, computer graphics, and related fields. This survey first gives the general steps of image mosaicking and emphasizes three kinds of mosaic algorithms. It then describes the steps and algorithms of image fusion. Finally, it selects an algorithm suited to image mosaicking after analysis and discusses the future prospects of UAV image mosaicking.
    Stereo matching algorithm based on image segmentation
    2011, 31(01):  175-178. 
Stereo matching algorithms based on MRFs constrain the continuity of the disparity through the MRF model, but cannot describe image features exactly because of the generative nature of the model. This paper presents SGC, a stereo matching algorithm based on image segmentation. SGC builds the MRF model from the image segmentation result, so the edge information of the disparity map is kept in the continuity (smoothness) constraints. Moreover, to improve disparity accuracy, a new energy function is designed to constrain the depth continuity of the image and is applied within the graph cut algorithm to describe the image features. The experiments show that SGC reflects the depth information more exactly than the existing algorithm and achieves high-precision disparity by avoiding the errors introduced by the continuity constraints.
    Sub-pixel edge detection algorithm based on Gauss fitting
    2011, 31(01):  179-181. 
Aiming at the low localization accuracy and noise sensitivity of traditional edge detection algorithms, a sub-pixel edge detection algorithm based on function curve fitting is proposed: Gaussian fitting along the gradient direction. According to the gradient distribution of the image, Gaussian curves are fitted to the edges to achieve sub-pixel localization. Compared with two other algorithms, the proposed algorithm has a shorter running time and relatively high efficiency.
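A minimal sketch of the closed-form heart of such a fit: a Gaussian profile is a parabola in the log domain, so three gradient-magnitude samples along the gradient direction give the peak (edge) position directly. The exact sampling scheme of the paper is not specified in the abstract; this is the standard three-point version:

```python
import math

def gaussian_subpixel_offset(g_minus, g_zero, g_plus):
    """Sub-pixel offset of the edge peak from three gradient-magnitude samples
    taken along the gradient direction at positions -1, 0, +1."""
    lm, l0, lp = math.log(g_minus), math.log(g_zero), math.log(g_plus)
    return 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)

# Samples of a Gaussian centred at +0.3 (sigma = 1): the fit recovers 0.3 exactly.
samples = [math.exp(-((x - 0.3) ** 2) / 2.0) for x in (-1, 0, 1)]
print(round(gaussian_subpixel_offset(*samples), 3))
```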
    Normalized cut segmentation algorithm combined with wavelet coefficient
    2011, 31(01):  182-183. 
Wavelet coefficients are used to calculate the edge information of the image. A graph is first constructed for the original image, its Laplacian matrix is obtained, and the first k eigenvalues are computed. The eigenvector corresponding to the second eigenvalue is then used to classify the pixels and obtain the final segmentation results. Experimental results show that the proposed method obtains more accurate results and preserves more useful information.
    Network and communications
    Task scheduling algorithm based on improved genetic algorithm in cloud computing environment
    2011, 31(01):  184-186. 
In cloud computing, the number of users is huge, and the number of tasks and the amount of data are also huge; how to schedule tasks efficiently is therefore an important issue. A Double-Fitness Genetic Algorithm (DFGA) is proposed for the cloud computing programming framework. With this algorithm, a task schedule with both a shorter total task completion time and a shorter average completion time can be found. DFGA is compared with an Adaptive Genetic Algorithm (AGA) in simulation experiments, and the results show that DFGA performs better and is an efficient task scheduling algorithm for cloud computing environments.
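A minimal sketch of a "double fitness" evaluation of a task-to-node assignment, combining the two objectives the abstract names: total completion time (makespan) and average completion time. The weighting, the completion-time model and the task/node data are illustrative assumptions; the paper's exact fitness definition is not given in the abstract:

```python
def node_finish_times(schedule, task_len, node_speed):
    """schedule[i] = node assigned to task i; returns the finish time of each node."""
    finish = [0.0] * len(node_speed)
    for task, node in enumerate(schedule):
        finish[node] += task_len[task] / node_speed[node]
    return finish

def double_fitness(schedule, task_len, node_speed, w=0.5):
    finish = node_finish_times(schedule, task_len, node_speed)
    makespan = max(finish)                      # total task completion time
    avg_completion = sum(finish) / len(finish)  # average completion time (per node)
    # Smaller times -> larger fitness; w balances the two objectives.
    return 1.0 / (w * makespan + (1.0 - w) * avg_completion)

task_len = [4.0, 2.0, 6.0, 3.0]          # task sizes (arbitrary units)
node_speed = [1.0, 2.0]                  # processing speed of each node
print(double_fitness([0, 1, 1, 0], task_len, node_speed))
```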
    Storage design in peer-to-peer based video-on-demand systems
    2011, 31(01):  187-189. 
Peer-to-peer based video-on-demand (P2P VoD) systems have received a lot of research attention in recent years. Based on practical experience with a real system, this work on P2P VoD storage consists of three parts: 1) the P2P VoD storage process is decomposed into key modules so that the functionality and relationships of these modules can be analysed; 2) a general storage algorithm, VSVR, is presented, which can represent most existing data redundancy algorithms and derive new designs; 3) through system modeling, the main objective and the key principle of the server-side storage scheduling policy are given.
    Graph coloring-based scheduling algorithm for P2P media streaming
    2011, 31(01):  190-193. 
To increase the transmission performance of P2P media streaming, a graph coloring based scheduling (GCB) algorithm is proposed. In GCB, every peer joining the system and every chunk is assigned a color, and a chunk with the same color as the requesting peer is requested first. To select the best supplier of a chunk, the algorithm considers the urgency, rarity and freshness of chunks and defines the priority of chunks as well as the supply capacity of neighbors. With GCB scheduling, node load balance is strengthened, system bandwidth is used efficiently, and data distribution becomes more uniform, so the transmission performance of the system is improved. Simulation results show that the proposed GCB scheduling algorithm for P2P media streaming outperforms conventional scheduling algorithms in data fullness rate, chunk arrival ratio and start-up delay.
    Node sleeping algorithm for wireless sensor networks based on the minimal hop routing protocol
    2011, 31(01):  194-197. 
In the minimum-hop routing protocol for wireless sensor networks, nodes are divided into terminal nodes and intermediate nodes according to their functions: terminal nodes only collect data, while intermediate nodes both collect and forward data. A node sleeping algorithm with different sleep/wake-up strategies for the two kinds of nodes is proposed to reduce energy consumption. Theoretical analysis and simulation results show that the proposed node sleeping algorithm can reduce node energy consumption and extend the lifetime of wireless sensor networks.
    Pre-handoff mechanism with adaptive threshold for heterogeneous wireless networks
    2011, 31(01):  198-201. 
Taking heterogeneous networks composed of Wireless Local Area Network (WLAN) and WiMAX networks as the research object, the vertical handoff process of multi-mode terminals in heterogeneous wireless networks is studied, based on the FMIPv6 mechanism. To avoid the shortcomings of the fixed-threshold pre-handoff mechanism in the vertical handoff process, a pre-handoff mechanism with an adaptive threshold is proposed, and the time required to hand off to the target network is analysed in detail. For simulation, the existing functional modules of the NS2 platform are extended, and the performance of the adaptive-threshold mechanism is verified.
    Energy efficient reliable delivery protocol for wireless sensor networks
    2011, 31(01):  202-207. 
A fuzzy integrated assessment based reliable forwarding mechanism (FiaRD) is proposed to enhance transmission efficiency in wireless sensor networks (WSNs) characterized by unreliable link quality. Exploiting the dense deployment of sensor nodes and the broadcast nature of the wireless channel, FiaRD lets neighboring nodes along the transmission path self-organize into multi-hop clusters, and then leverages collaborative forwarding among cluster members to increase the delivery ratio and reduce the energy consumed by redundant transmissions. In FiaRD, the forwarding cluster at each hop is elected dynamically through a distributed fuzzy integrated assessment combined with back-off competition: a minority of superior clusters is selected from several next-hop candidate clusters to compete for forwarding, based on the integrated assessment, so as to lower the collision probability and raise the forwarding efficiency. Simulation results corroborate that FiaRD can effectively reduce transmission energy consumption while ensuring reliable transmission.
    PSMR: Power control and scheduling scheme in multi-rate wireless mesh networks
    2011, 31(01):  208-211. 
Transmission power control (TPC) is a key technique in wireless networking. In this paper, a power control and scheduling scheme for multi-rate wireless mesh networks (PSMR) is introduced to improve throughput and fairness in the context of multi-rate Wireless Mesh Networks (WMNs). Traffic patterns of the system are analysed based on a conflict graph model, and a differential evolution based algorithm is proposed to optimize the time allocation vector. Simulations demonstrate that PSMR can improve throughput and also strike a balance between throughput and fairness.
    Improved fast topology estimation method based on maximum likelihood
    2011, 31(01):  212-214. 
Maximum likelihood based topology estimation can obtain globally optimal estimates, and its performance is better than that of other methods such as general local optimization and node fusion. However, when the network is large, the topology estimation is computationally intensive. To solve this problem, the authors first prove that the likelihood function of network topology estimation is single-peaked, with only one extreme value, the maximum. Using this single-peaked property, the current maximum likelihood based topology estimation method is improved: the search for the maximum likelihood tree never returns to a state with a smaller likelihood value, which effectively reduces the computational complexity. Matlab and ns-2 simulation results show that the improved method cuts the computational complexity by 30%-46% without reducing topology estimation accuracy.
    TDMA-based quality of service mechanism for VoWLAN
    2011, 31(01):  215-218. 
Wireless Local Area Networks (WLANs) have become a ubiquitous networking technology, and Voice over Internet Protocol (VoIP) is one of the most popular applications and an alternative to the traditional Public Switched Telephone Network (PSTN) due to its cost efficiency. Voice over WLAN (VoWLAN) is an emerging application that takes advantage of VoIP technology and the wide deployment of WLANs. Focusing on the lack of Quality of Service (QoS) for VoWLAN, a new QoS mechanism is proposed. The mechanism applies a central scheduling strategy inspired by the IEEE 802.15.3 Wireless Personal Area Network (WPAN), which allows all nodes to share the channel using Time Division Multiple Access (TDMA). The access delay, jitter and packet loss rate of IEEE 802.11b, IEEE 802.11e and the proposed mechanism are compared. Simulation results show that the proposed mechanism improves the access delay and the packet loss rate by about 20% and 50% respectively compared with IEEE 802.15.3.
    Reliable multicast protocol based on hybrid bus-ring architecture
    2011, 31(01):  219-221. 
The paper first discusses the conflict between network overhead and scalability in existing reliable multicast protocols, then proposes a Bus-Ring Reliable Multicast Protocol (BRMP) and analyses it in terms of architecture, the establishment of the overlay network, and reliability. In addition, it describes the principle and implementation mechanism by which the protocol achieves reliable multicast. The simulation results indicate that the NACK-based BRMP achieves much lower network overhead and better scalability.
    Optimal routing for hybrid optical switching networks under traffic demand uncertainties
    2011, 31(01):  222-224. 
Based on hybrid optical switching (HOS) networks, which combine the advantages of OCS and OBS, we propose a method for designing optimized routing for HOS networks under traffic demand uncertainties. Given the number of wavelengths, we first reserve several wavelengths for OBS, then construct an OCS virtual topology, and finally use the reserved wavelengths and an optimization method to compute the OBS routing. The network performance metric optimized in this paper is the packet loss probability. Simulation results show that, for a given number of wavelengths, the proposed optimal routing method reduces the packet loss probability of the whole network compared with shortest path routing. To describe the traffic uncertainty, an uncertainty factor is introduced; as the uncertainty factor decreases, the optimal routing method achieves a lower packet loss probability.
    Geographic routing protocol in Ad Hoc networks
    2011, 31(01):  225-228. 
Geographic routing in Ad Hoc networks faces a local minimum problem when the greedy forwarding strategy fails. A Geographic Ad hoc Routing protocol (GAR) is proposed. GAR divides the routing regions and makes use of a slope forwarding strategy; as a result, the search scope is narrowed and the paths are optimized. GALMR is then proposed by improving GAR: it takes advantage of landmarks to reduce the hop count of routing paths and further improves performance. Analytical and experimental results show that GALMR achieves a high data packet arrival rate and low average end-to-end delay.
    Object identification methodology in multi-user digital beamforming system
    2011, 31(01):  229-231. 
In traditional digital beamforming (DBF), user signals cannot be distinguished from interference, even though direction of arrival (DOA) estimation extracts the spatial information of these signals. Similar to multi-user detection in CDMA mobile communication systems, each user is allocated a pseudo-random (PN) code at the transmitter. The receiver generates replicas of the transmitted PN codes and correlates them with the received signals to achieve identification. The simulation illustrates that the proposed approach picks up the desired signals and ensures low false alarm and missed detection rates.
Study of a grouped conjugate algorithm for ICI cancellation in OFDM
    2011, 31(01):  232-234. 
In OFDM wireless mobile environments, traditional ICI cancellation algorithms achieve a certain cancellation effect but still lack accuracy and the necessary mathematical analysis. Compared with previous ICI self-cancellation algorithms, this paper improves the system model and proposes a grouped conjugate cancellation algorithm for OFDM digital communication systems, together with a new analytical method for the error rate under frequency offset and with lower inter-carrier interference. The mathematical realization of the algorithm is also analysed, and simulation results compare its BER performance with that of other methods.
    Chaotic spread spectrum-based model design and simulation of multi-user emergency communication system
    2011, 31(01):  235-238. 
In view of the practical problem that channel noise seriously affects the communication quality of emergency communications, this paper studies the channel noise sources and essential characteristics of emergency communication systems and proposes a channel model for emergency communication. It discusses the principles of interference and noise suppression in chaotic spread-spectrum emergency communication, designs a multi-user simulation model of a chaotic spread-spectrum emergency communication system, and carries out BER simulation and analysis under typical channel interference and noise. Simulation results show that the chaotic spread-spectrum communication system can effectively suppress strong noise interference and significantly improve BER performance in emergency communications.
    Pattern recognition
    Blind identification algorithm for retouched images based on Bi-Laplacian
    2011, 31(01):  239-242. 
    Asbtract ( )   PDF (691KB) ( )  
    Related Articles | Metrics
    Image retouching is a technique widely used in image tampering. To achieve blind detection of image retouching, a blind identification algorithm for retouched images was proposed. The algorithm first inserts every image block into a KD-tree to obtain identical or nearest-matching blocks, then applies hierarchical clustering to the position vectors to eliminate scattered matches. Finally, it applies a 7-tap Laplacian filter and counts the zero-connectivity weight of the suspicious blocks to eliminate false positives, so that the tampered area is located accurately. Experiments show that this method can efficiently and accurately identify the use of this class of image manipulation techniques in uncompressed images and high-quality compressed images. For images with higher compression levels, accurate results are also obtained if the retouched region is sufficiently large.
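    A minimal sketch of the block-matching stage is given below, assuming non-overlapping 8x8 blocks, a SciPy KD-tree and an arbitrary distance threshold; the hierarchical clustering of position vectors and the Laplacian-based false-positive filter of the full algorithm are omitted.

        import numpy as np
        from scipy.spatial import cKDTree

        def find_duplicate_blocks(image, block=8, max_dist=1.0):
            """Return pairs of top-left positions whose block content nearly matches."""
            h, w = image.shape
            positions, vectors = [], []
            for y in range(0, h - block + 1, block):
                for x in range(0, w - block + 1, block):
                    positions.append((y, x))
                    vectors.append(image[y:y+block, x:x+block].ravel().astype(float))
            tree = cKDTree(np.array(vectors))
            pairs = tree.query_pairs(r=max_dist)          # blocks lying within max_dist of each other
            return [(positions[i], positions[j]) for i, j in pairs]

        # Hypothetical test image in which one block has been copied elsewhere.
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64))
        img[32:40, 32:40] = img[0:8, 0:8]
        print(find_duplicate_blocks(img))                  # reports the duplicated block pair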
    Images classification based on combination of positive and negative fuzzy rules
    2011, 31(01):  243-246. 
    Asbtract ( )   PDF (621KB) ( )  
    Related Articles | Metrics
    Traditional image classification systems usually rely on positive fuzzy rules only, yet negative fuzzy rules can also play an important role in image classification. Accordingly, this paper proposes a new fuzzy rule system that classifies images using a combination of positive and negative fuzzy rules, focusing on how to combine negative fuzzy rules with traditional positive ones. Experiments show that the proposed method achieves high accuracy and better results than methods using positive rules only.
    Modified linear local tangent space alignment algorithm
    Li WenHua
    2011, 31(01):  247-249. 
    Asbtract ( )   PDF (602KB) ( )  
    Related Articles | Metrics
    Linear local tangent space alignment (LLTSA) is a nonlinear dimensionality reduction method that can easily be applied to recognition problems. It focuses on the local geometric structure of the data but neglects the global information. This paper proposes an improved LLTSA algorithm based on principal component analysis (PCA), called PLLTSA, which takes the global structure of the samples into consideration and yields better dimensionality reduction results. In the classical 3D manifold experiment and handwriting recognition on the MNIST image dataset, PLLTSA achieves a higher recognition rate than PCA, LPP and LLTSA, which verifies its effectiveness.
    Improved linear discriminant analysis method
    2011, 31(01):  250-253. 
    Asbtract ( )   PDF (527KB) ( )  
    Related Articles | Metrics
    Linear Discriminant Analysis (LDA) is an effective feature extraction method, but it suffers from at least two critical drawbacks: the small sample size problem and the rank limitation. To solve these problems, this paper presents an improved LDA algorithm (ILDA), which introduces a between-class scatter scalar and a within-class scatter scalar and extracts features by computing the weight of each dimension of the sample space. Numerical experiments on the ORL face database and synthetic datasets show that ILDA achieves good feature extraction performance.
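    The per-dimension weighting idea can be read, for example, as a Fisher-score-style ratio of between-class to within-class scatter computed dimension by dimension; the sketch below follows that reading, which is an assumption and not necessarily ILDA's exact formulation.

        import numpy as np

        def dimension_weights(X, y):
            """Ratio of between-class to within-class scatter for each dimension
            (one plausible reading of the per-dimension weighting idea)."""
            classes = np.unique(y)
            overall_mean = X.mean(axis=0)
            between = np.zeros(X.shape[1])
            within = np.zeros(X.shape[1])
            for c in classes:
                Xc = X[y == c]
                mc = Xc.mean(axis=0)
                between += Xc.shape[0] * (mc - overall_mean) ** 2
                within += ((Xc - mc) ** 2).sum(axis=0)
            return between / (within + 1e-12)

        # Hypothetical two-class data: dimension 0 is discriminative, dimension 1 is pure noise.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([5, 0], 1, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)
        print(dimension_weights(X, y).round(2))            # dimension 0 receives a far larger weight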
    Mean-Shift tracking algorithm based on adaptive bandwidth
    2011, 31(01):  254-257. 
    Asbtract ( )   PDF (644KB) ( )  
    Related Articles | Metrics
    The Mean-Shift algorithm with a fixed bandwidth often fails to track an object that moves too fast or undergoes a dramatic change in scale. To solve this problem, a novel Mean-Shift tracking algorithm based on adaptive bandwidth is proposed. The Mean-Shift vector is used to predict the center position and automatically adjust the size of the tracking window, so that the object stays inside the window and an accurate object position is obtained. Once the position is confirmed, a dichotomy based on the Bhattacharyya coefficient is adopted to select the scaling ratio automatically, yielding a tracking window adapted to the scale of the object. Experimental results prove the algorithm's capability in locating the object's position and scale.
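    As an illustration of the scale-selection idea (not the paper's exact dichotomy rule), the sketch below repeatedly shrinks a search interval over a unimodal similarity curve to find the window half-size whose gray-level histogram best matches a target histogram in the Bhattacharyya sense; the histogram settings, search interval and synthetic frame are assumptions.

        import numpy as np

        def bhattacharyya(p, q):
            """Bhattacharyya coefficient between two normalized histograms."""
            return np.sum(np.sqrt(p * q))

        def window_hist(image, center, half, bins=16):
            """Gray-level histogram of a square window of half-size `half` around `center`."""
            y, x = center
            r = max(1, int(half))
            patch = image[max(0, y - r):y + r, max(0, x - r):x + r]
            h, _ = np.histogram(patch, bins=bins, range=(0, 256))
            return h / max(1, h.sum())

        def select_scale(image, center, target_hist, lo=4.0, hi=40.0, iters=25):
            """Shrink the scale interval toward the best Bhattacharyya match."""
            for _ in range(iters):
                m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
                b1 = bhattacharyya(window_hist(image, center, m1), target_hist)
                b2 = bhattacharyya(window_hist(image, center, m2), target_hist)
                if b1 < b2:
                    lo = m1
                else:
                    hi = m2
            return (lo + hi) / 2

        # Synthetic frame: a 20x20 bright object on a dark background, centered at (60, 60).
        frame = np.zeros((120, 120))
        frame[50:70, 50:70] = 200
        target = window_hist(frame, (60, 60), 14)               # reference window: object plus some background
        print(round(select_scale(frame, (60, 60), target), 1))  # converges near the reference half-size (~14)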
    Video watermarking based on moving object detection
    2011, 31(01):  258-259. 
    Asbtract ( )   PDF (478KB) ( )  
    Related Articles | Metrics
    To improve the robustness of video watermarking, a video watermarking algorithm based on moving object detection was proposed. Temporal differencing is used to extract and mark the moving targets in the video image sequence, and the watermark is then embedded and extracted by the singular value decomposition (SVD) method. In the simulation, the peak signal-to-noise ratio shows that the scheme has good invisibility and concealment, and applying geometric attacks to the watermarked frames with the StirMark software and analyzing the correlation coefficients show that the algorithm is highly robust.
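    The two building blocks named in the abstract can be sketched in a few lines: temporal differencing to obtain a moving-object mask, and additive embedding of a watermark into the singular values of an image block with non-blind extraction. Block size, embedding strength and the synthetic data are assumptions, and the full scheme's frame selection and robustness testing are not reproduced.

        import numpy as np

        def moving_mask(prev_frame, cur_frame, thresh=25):
            """Temporal differencing: pixels whose intensity changed by more than `thresh`."""
            return np.abs(cur_frame.astype(int) - prev_frame.astype(int)) > thresh

        def embed_svd(block, watermark, alpha=0.05):
            """Additively embed a watermark into the singular values of a block."""
            U, S, Vt = np.linalg.svd(block, full_matrices=False)
            return U @ np.diag(S + alpha * watermark) @ Vt

        def extract_svd(marked_block, original_block, alpha=0.05):
            """Non-blind extraction: compare singular values of marked and original blocks."""
            S_marked = np.linalg.svd(marked_block, compute_uv=False)
            S_orig = np.linalg.svd(original_block, compute_uv=False)
            return (S_marked - S_orig) / alpha

        rng = np.random.default_rng(0)
        block = rng.random((8, 8)) * 255
        wm = rng.random(8)
        marked = embed_svd(block, wm)
        err = np.max(np.abs(extract_svd(marked, block) - wm))
        print(err)    # near zero as long as the small perturbation keeps the singular values ordered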
    Application of Otsu method in motion detection system
    2011, 31(01):  260-262. 
    Asbtract ( )   PDF (405KB) ( )  
    Related Articles | Metrics
    Fast and efficient image segmentation is an essential part of a motion detection system. The Otsu method is a common and efficient image segmentation algorithm and has been used in various real-time systems. To meet real-time requirements, an implementation of the between-class variance computation (BCVC) of Otsu's method on an FPGA of Altera's Cyclone II series is presented. The design is modeled in Verilog and simulated on the Quartus II platform. Experimental results show that the design obtains the threshold quickly and effectively, and can effectively preserve fuzzy targets.
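    The between-class variance computation at the core of Otsu's method is shown below as a plain Python sketch rather than the paper's Verilog/FPGA design; the synthetic bimodal image is an assumption.

        import numpy as np

        def otsu_threshold(gray):
            """Evaluate the between-class variance for every candidate threshold
            and return the one that maximizes it (Otsu's criterion)."""
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            prob = hist / hist.sum()
            best_t, best_var = 0, -1.0
            for t in range(1, 256):
                w0, w1 = prob[:t].sum(), prob[t:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                mu0 = (np.arange(t) * prob[:t]).sum() / w0
                mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
                var_between = w0 * w1 * (mu0 - mu1) ** 2
                if var_between > best_var:
                    best_t, best_var = t, var_between
            return best_t

        # Hypothetical bimodal image: dark background near 60, bright objects near 180.
        rng = np.random.default_rng(0)
        img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)]).clip(0, 255)
        print(otsu_threshold(img))          # the threshold lands between the two modes (~120)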
    Moving object segmentation algorithm for river surveillance video
    2011, 31(01):  263-265. 
    Asbtract ( )   PDF (478KB) ( )  
    Related Articles | Metrics
    An object segmentation algorithm based on background subtraction is presented according to the characteristics of river surveillance video. First, pixels are classified by their hue and intensity values in the HSI color space; then the background pixels are determined by a block-based algorithm; finally, the moving objects are extracted by background subtraction.
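    A minimal grayscale sketch of the background-subtraction idea follows; the per-pixel temporal median background, the difference threshold and the synthetic river scene are assumptions, and the paper's HSI pixel classification and block-based background determination are not reproduced.

        import numpy as np

        def background_median(frames):
            """Estimate a static background as the per-pixel temporal median."""
            return np.median(np.stack(frames), axis=0)

        def foreground_mask(frame, background, thresh=30):
            """Background subtraction: pixels differing strongly from the background."""
            return np.abs(frame.astype(float) - background) > thresh

        # Hypothetical river scene: a static background plus a small object drifting right.
        rng = np.random.default_rng(0)
        bg = rng.integers(90, 110, size=(60, 80)).astype(float)
        frames = []
        for t in range(5):
            f = bg + rng.normal(0, 2, bg.shape)
            f[20:25, 10 + 10 * t:15 + 10 * t] = 220.0
            frames.append(f)
        mask = foreground_mask(frames[-1], background_median(frames))
        print(mask.sum())                   # roughly the 25 pixels of the moving object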
    Text extraction algorithm for traffic signs
    2011, 31(01):  266-269. 
    Asbtract ( )   PDF (631KB) ( )  
    Related Articles | Metrics
    A fast and robust approach for extracting text from road signs based on color and stroke is proposed. First, a novel color model derived from the Karhunen-Loeve (KL) transform is applied to find all possible road sign candidates. Then, an affine transformation is performed to rectify the road signs so that each appears perpendicular to the camera's optical axis, which improves the accuracy of detecting the text embedded in them. Finally, mathematical morphology and region growing are used to obtain a cleaner binary image that is sent to OCR software. Experimental results demonstrate the robustness and efficiency of the proposed algorithm.
    Typical applications
    Application of unequal error protection method with low density parity check codes in scalable video coding
    2011, 31(01):  270-272. 
    Asbtract ( )   PDF (587KB) ( )  
    Related Articles | Metrics
    Based on an analysis of joint source-channel coding theory, an unequal error protection (UEP) scheme for scalable video coding with Low Density Parity Check (LDPC) codes was proposed to minimize the total end-to-end distortion of the reconstructed video. A bit allocation algorithm is proposed that optimally allocates bits to each frame according to its contribution to the reconstructed video, and LDPC codes with different code rates are applied to different layers to provide UEP. The simulation results show that the proposed algorithm yields results competitive with the Lagrangian-based algorithm at lower complexity, while the peak signal-to-noise ratio (PSNR) of the reconstructed video is improved noticeably.
    Extension and implementation from spatial-only to spatiotemporal Kriging interpolation
    2011, 31(01):  273-276. 
    Asbtract ( )   PDF (610KB) ( )  
    Related Articles | Metrics
    Spatial-only Kriging interpolation is usually used when sampled data are scarce. When sampled and unsampled locations depend on both time and space, directly applying spatial-only Kriging to the spatiotemporal domain loses valuable information in the time dimension, which has motivated the study of spatiotemporal Kriging. The goal of this paper is to extend spatial-only Kriging to the spatiotemporal domain and to implement spatiotemporal variograms, spatiotemporal interpolation and spatiotemporal cross validation. First, the maximum-likelihood variogram model and the effective spatiotemporal sill, nugget and range are derived; second, the spatiotemporal Kriging interpolation is implemented; finally, cross validation is performed to test the validity of the spatiotemporal interpolation. The experiment shows that the spatiotemporal extension of spatial-only Kriging can provide sufficient information about random fields at a certain accuracy, offering an effective approach to spatiotemporal estimation and interpolation for various spatiotemporal phenomena.
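    For concreteness, the sketch below performs ordinary Kriging with a metric space-time distance (time rescaled into spatial units) and a spherical variogram; the variogram model, its sill/nugget/range values, the time-scaling factor and the sample data are all assumptions and differ from the paper's maximum-likelihood fitting.

        import numpy as np

        def variogram(h, sill=1.0, rng_=10.0, nugget=0.1):
            """Spherical variogram model (assumed sill, range and nugget)."""
            h = np.asarray(h, dtype=float)
            g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
            return np.where(h >= rng_, sill, np.where(h == 0, 0.0, g))

        def st_distance(p, q, time_scale=2.0):
            """Metric space-time distance: time rescaled into spatial units (an assumption)."""
            return np.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (time_scale * (p[2] - q[2])) ** 2)

        def ordinary_kriging(samples, values, target):
            """Solve the ordinary Kriging system (with a Lagrange multiplier) and predict."""
            n = len(samples)
            A = np.ones((n + 1, n + 1))
            A[n, n] = 0.0
            for i in range(n):
                for j in range(n):
                    A[i, j] = variogram(st_distance(samples[i], samples[j]))
            b = np.ones(n + 1)
            b[:n] = variogram([st_distance(s, target) for s in samples])
            w = np.linalg.solve(A, b)
            return float(w[:n] @ values)

        # Hypothetical (x, y, t) samples interpolated at an unsampled location and time.
        samples = [(0, 0, 0), (5, 0, 0), (0, 5, 1), (5, 5, 1), (2, 2, 2)]
        values = np.array([1.0, 2.0, 1.5, 2.5, 1.8])
        print(round(ordinary_kriging(samples, values, (3, 3, 1)), 3))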
    Process of workflow exception handling based on extended UML activity diagrams
    2011, 31(01):  277-280. 
    Asbtract ( )   PDF (799KB) ( )  
    Related Articles | Metrics
    Generation of testing script based on XML for safety-critical system
    2011, 31(01):  281-285. 
    Asbtract ( )   PDF (687KB) ( )  
    Related Articles | Metrics
    Aiming at the large scale, high complexity and maintenance difficulty of testing scripts in the automatic testing of safety-critical systems (SCS), an automatic generation approach for XML-based testing scripts is put forward. XML is adopted as the testing script language, the SCS operation scenario is modeled with a finite state machine (FSM), the testing scenario is designed in the SED schema, and the XML testing script is generated automatically by a series of algorithms. The automatic generation of testing scripts for SCS is thus realized, and the approach has been successfully applied to the simulation testing of the CTCS-2 train control command system.
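    A toy sketch of the generation idea follows: a finite state machine of an operation scenario is walked transition by transition and one XML test step is emitted per transition. The element names, the scenario and the FSM are hypothetical and do not follow the SED schema used in the paper.

        import xml.etree.ElementTree as ET

        # Hypothetical operation-scenario FSM: state -> {event: next_state}
        fsm = {
            "Idle":    {"start": "Running"},
            "Running": {"fault": "Safe", "stop": "Idle"},
            "Safe":    {"reset": "Idle"},
        }

        def generate_script(fsm):
            """Emit one <step> element per FSM transition."""
            root = ET.Element("testScript", scenario="demo")
            for state, transitions in fsm.items():
                for event, nxt in transitions.items():
                    step = ET.SubElement(root, "step")
                    ET.SubElement(step, "precondition").text = state
                    ET.SubElement(step, "stimulus").text = event
                    ET.SubElement(step, "expectedState").text = nxt
            return ET.tostring(root, encoding="unicode")

        print(generate_script(fsm))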
    Whole-body online modification of biped robot walking pattern
    2011, 31(01):  286-288. 
    Asbtract ( )   PDF (510KB) ( )  
    Related Articles | Metrics
    This paper introduces a method for online whole-body motion modification. The walking pattern is generated from a simplified dynamic model of the robot. When walking in a real environment, errors arise between the pre-planned walking pattern and the actual state of motion; to reduce and suppress these errors, compensation of the center of mass (CoM) is adopted to correct the joint gait online. This method reduces the robot's ZMP errors and improves walking stability. Experiments on the biped walking robot AFU-09 prove the effectiveness of the method.
    Real-time cloth simulation based on particle constraints
    2011, 31(01):  289-292. 
    Asbtract ( )   PDF (579KB) ( )  
    Related Articles | Metrics
    A physicsbased cloth simulation using modified massspring model was presented. There was no force generated by springs between particles, but a set of constraints was built between particles. And realtime cloth simulation was achieved according to physical rules to continually adjust the positions of the particles to satisfy these constraints. Based on the spatial hash function, the cloth selfcollision detection and response were solved. Using NVIDIA video’s CUDA technology, hardware acceleration on cloth simulation was implemented and its frame rate was increased about several times.