
Table of Contents

    01 April 2011, Volume 31 Issue 04
    Special column on database technology
    Mining causality, segment-wise intervention and contrast inequality based on intervention rules
    Chang-jie TANG Lei DUAN Jiao-ling ZHENG Ning YANG Yue WANG Jun ZHU
    2011, 31(04):  869-873.  DOI: 10.3724/SP.J.1087.2011.00869
    In order to discover the special behaviors of Sub Complex Systems (SCS) under intervention, the authors proposed the concept of contrast inequality and proposed and implemented an algorithm for mining segment-wise interventions; by imposing perturbation interventions on SCS, the authors also proposed and implemented a causality discovery algorithm. Experiments on real data show that the segment-wise intervention algorithm discovers new intervention rules and the causality discovery algorithm discovers causality relations in the air pollution data set, both of which are difficult to discover with traditional methods.
    Pattern match queries oriented to uncertain planar graphs
    Guo-ren WANG Ye YUAN Xi-jia ZHANG
    2011, 31(04):  874-881. 
    Pattern match search on planar graphs is widely used in biological networks, social networks, fingerprint identification and image segmentation. Meanwhile, data extracted from these applications is inherently uncertain due to noise, incompleteness and inaccuracy, and query processing techniques for certain planar graphs cannot be applied to uncertain graph data. Therefore, in this paper, pattern match queries on uncertain planar graphs were studied under probabilistic semantics. Specifically, Uncertain Pattern Match (UPM) queries under possible-world semantics were investigated. Firstly, to avoid enumerating all possible worlds, a basic deterministic algorithm that exactly computes UPM queries was proposed. To further speed up the basic algorithm, an improved deterministic approach was developed based on tightened bounds. Secondly, a sampling algorithm that can quickly and accurately estimate the matching probability was designed. The effectiveness of the proposed solutions for UPM queries was verified through extensive experiments on real uncertain planar graph datasets. Finally, UPM queries were applied to the segmentation of pulmonary CT images. The results show that the proposed approaches are superior to classical image segmentation techniques.
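    To make the possible-world semantics concrete, the sketch below estimates a pattern-match probability by plain Monte Carlo sampling. It is a generic illustration only (using the networkx library, with made-up edge probabilities), not the paper's exact, bound-based or sampling algorithms.

```python
# Monte Carlo estimate of an uncertain pattern-match probability under
# possible-world semantics: sample each edge with its probability, then test
# for a subgraph isomorphic to the pattern. Illustrative sketch only.
import random
import networkx as nx
from networkx.algorithms import isomorphism

def upm_probability(uncertain_edges, pattern, trials=1000, seed=0):
    """uncertain_edges: list of (u, v, p) with existence probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        world = nx.Graph()  # sample one possible world
        world.add_edges_from((u, v) for u, v, p in uncertain_edges if rng.random() < p)
        if isomorphism.GraphMatcher(world, pattern).subgraph_is_isomorphic():
            hits += 1
    return hits / trials

edges = [(0, 1, 0.9), (1, 2, 0.8), (2, 0, 0.5), (2, 3, 0.7)]
triangle = nx.cycle_graph(3)               # pattern: a 3-cycle
print(upm_probability(edges, triangle))    # ~0.36 = 0.9 * 0.8 * 0.5
```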
    Integration of metabolic pathway database based on similarity derivation
    Li-hua YUE Qi-zhou WANG Rong-feng CAI
    2011, 31(04):  882-884.  DOI: 10.3724/SP.J.1087.2011.00882
    To solve the problem of biological database integration, combining metabolic pathway data features with automated schema matching methods, and exploiting the fact that the same molecules and enzyme reactions share the same representation, a similarity-derivation-based metabolic pathway data integration algorithm was proposed. The method was verified by applying it effectively to the LIGAND, EcoCyc and MetaCyc metabolic pathway databases. In addition, to provide a friendly interface for end users, a metabolic pathway visual integration tool was designed and implemented with a dynamic-layout graphical user interface.
    Birth defects detection algorithm based on emerging patterns
    Bao-hua WU Lei DUAN Zhong-hua YU Chang-jie TANG Jun ZHU
    2011, 31(04):  885-889. 
    Birth defects are one of the most important public health problems in the world, and applying data mining methods to improve the diagnostic accuracy for birth defects is a hot medical research issue. The authors proposed two kinds of emerging patterns for birth defect feature extraction: defective samples contrasted with normal ones, and normal samples contrasted with defective ones. The Birth Defects Detection based on Emerging Patterns (BDD-EP) algorithm was implemented by combining the proposed patterns with the C4.5 decision tree. Extensive experimental results show that the detection accuracy of BDD-EP is as high as 90.1%, the F-measure on normal samples is 93.9%, and the F-measure on defective samples is 74.1%. Compared with other well-known classification algorithms, BDD-EP achieves better results.
    Network and communications
    Improved nonlinear random early detection algorithm
    Jun MA Yan-ping ZHANG Yong-cheng WANG Xiao-yan CHEN
    2011, 31(04):  890-892.  DOI: 10.3724/SP.J.1087.2011.00890
    Active queue management is a focus of current research, and Random Early Detection (RED) is a classical queue management algorithm. Linear RED is simple and easy to compute; however, when the average queue size is near the minimum or maximum threshold, the loss rate is unreasonable. After verifying the nonlinear relation between average queue size and packet loss rate, an improved RED algorithm named JRED was presented. Simulation on NS2 shows that the average throughput is improved and the packet loss rate is decreased. With the JRED algorithm, the stability and reliability of the network are enhanced.
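    The contrast between linear RED and a nonlinear drop curve can be sketched as follows. The abstract does not give JRED's exact curve, so the smoothstep mapping below is only an illustrative stand-in that stays gentle near both thresholds.

```python
# Sketch of RED's linear drop probability versus one plausible nonlinear
# variant. The cubic smoothstep is illustrative, not the actual JRED curve.

def red_linear(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

def red_nonlinear(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    x = (avg - min_th) / (max_th - min_th)   # normalize to [0, 1)
    return max_p * x * x * (3 - 2 * x)       # smoothstep: flat near both thresholds

for q in (4, 6, 10, 14, 16):
    print(q, round(red_linear(q), 4), round(red_nonlinear(q), 4))
```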
    Resource allocation strategy in wireless heterogeneous networks
    Zhi-yuan HU Ning LI Jian-ding GUO Lei XU
    2011, 31(04):  893-896.  DOI: 10.3724/SP.J.1087.2011.00893
    Wireless heterogeneous networks require efficient resource allocation between communication systems with different modes. To achieve effective wireless resource utilization across systems, a resource allocation strategy based on the XG system architecture was proposed. A multidimensional resource container was then presented, which can meet the traffic demands of different communication systems. A two-level resource allocation strategy and its corresponding allocation algorithm were used to match the multidimensional resource container with the diversified traffic demands of different communication systems. The simulation and performance analysis show that the proposed strategy can improve wireless frequency resource utilization and satisfy the traffic demands of different communication systems.
    Analysis of approximate algorithms for adaptive p-persistent slotted ALOHA
    Fei FANG Yu-ming MAO Su-peng LENG Jian-bing MAO
    2011, 31(04):  897-900.  DOI: 10.3724/SP.J.1087.2011.00897
    The distribution network of the Dynamic Spectrum Sharing Communication System (DSSCS) uses p-persistent slotted ALOHA as its media access control protocol, but the Media Access Control (MAC) driver does not support floating-point operations. After analyzing the principle of the Dynamic Adaptive p-Persistent Algorithm (DA-pPAA), and to address the issue of computing the logarithm function, two replacement algorithms, the Binary Shifted Approximate Algorithm (BSAA) and the Taylor Series Approximate Algorithm (TSAA), were proposed. The system performance of the approximate algorithms was tested via numerical calculation and simulation. Test results show that the approximate algorithms simplify the computation and achieve the same performance as the theoretical p-persistent algorithm.
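    The flavor of such approximations can be shown with a fixed-point log2 that needs no floating point: a binary shift supplies the integer part, and a truncated Taylor series supplies the fraction. The Q8 format and constants below are illustrative choices, not the BSAA/TSAA details.

```python
# Fixed-point approximation of log2(n) using integer arithmetic only, in the
# spirit of the binary-shift and Taylor-series ideas described above.
FRAC_BITS = 8  # Q8 fixed point: a value x represents x / 256

def ilog2_q8(n):
    assert n >= 1
    k = n.bit_length() - 1          # binary-shift part: floor(log2 n)
    r = (n << FRAC_BITS) >> k       # n / 2^k in Q8, lies in [1, 2)
    f = r - (1 << FRAC_BITS)        # fractional part in Q8, lies in [0, 1)
    # ln(1+f) ~= f - f^2/2 + f^3/3 (truncated Taylor series), then / ln(2)
    f2 = (f * f) >> FRAC_BITS
    f3 = (f2 * f) >> FRAC_BITS
    ln = f - f2 // 2 + f3 // 3
    log2e_q8 = 369                  # round(log2(e) * 256)
    return (k << FRAC_BITS) + ((ln * log2e_q8) >> FRAC_BITS)

for n in (1, 3, 10, 100):
    print(n, ilog2_q8(n) / 256)     # compare against math.log2(n)
```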
    Network traffic prediction based on wavelet FARIMA model
    Yong SUN Guang-wei BAI Lu ZHAO
    2011, 31(04):  901-903.  DOI: 10.3724/SP.J.1087.2011.00901
    Many studies indicate that Internet traffic exhibits Long-Range Dependence (LRD) and Short-Range Dependence (SRD) features simultaneously. To capture these features of the traffic flow, a prediction method was proposed based on the W-FARIMA (Wavelet Fractional Autoregressive Integrated Moving Average) model. First, the original data series were decomposed by the Haar wavelet into a low-frequency signal and several high-frequency signals. Second, the low-frequency signal was modeled and predicted by the FARIMA model. Third, the high-frequency signals were predicted with the weighted first-order local prediction method. Finally, the respective prediction series were synthesized to obtain the final prediction. The experimental results and mathematical analysis confirm that the model performs well in long-term network traffic prediction.
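    A skeleton of this decompose-predict-recombine pipeline is sketched below using the PyWavelets package. Fitting a genuine FARIMA model is out of scope here, so a naive last-difference extrapolation stands in for both band predictors; only the pipeline structure follows the paper.

```python
# W-FARIMA-style pipeline skeleton: Haar decomposition, per-band prediction,
# wavelet reconstruction. The stand-in predictor is NOT a FARIMA fit.
import numpy as np
import pywt

def naive_extend(band, steps):
    """Stand-in predictor: extrapolate with the band's last first difference."""
    d = band[-1] - band[-2] if len(band) > 1 else 0.0
    return np.concatenate([band, band[-1] + d * np.arange(1, steps + 1)])

def wavelet_predict(series, coarse_steps=1, level=3):
    coeffs = pywt.wavedec(series, "haar", level=level)  # [cA_L, cD_L, ..., cD_1]
    # extend cA_L and cD_L by s, cD_{L-1} by 2s, ... so that lengths stay
    # consistent for the inverse transform
    ext = [coarse_steps] + [coarse_steps * 2 ** i for i in range(level)]
    extended = [naive_extend(c, n) for c, n in zip(coeffs, ext)]
    return pywt.waverec(extended, "haar")  # length grows by 2**level * coarse_steps

t = np.arange(256)
traffic = np.sin(t / 8.0) + 0.1 * np.random.default_rng(0).standard_normal(256)
print(wavelet_predict(traffic)[-8:])      # the extrapolated tail
```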
    Quality control approach of variable-rate communications based on satellite-to-ground communication system
    Hai-yan LIU Fei CAI Cheng-sheng PAN Rui-yan CAI
    2011, 31(04):  904-906.  DOI: 10.3724/SP.J.1087.2011.00904
    In order to improve communication quality effectively, an approach for changing the information transmission rate, named VCTRM, was proposed. The approach adaptively changed the modulation type and the chip transmission rate according to the current channel status when the satellite communicated with the ground station. It not only overcomes the defect that a variable modulation type alone cannot meet the system's Bit Error Rate (BER) requirement, but also solves the problem that a variable chip transmission rate alone cannot improve throughput under bad channel conditions. Three kinds of information transmission methods were simulated in Matlab: the variable modulation method, the variable chip transmission method and the VCTRM method. The simulation results indicate that the proposed approach can always meet the system BER requirement and achieves appreciably higher throughput than the other approaches.
    Sparse channel estimation based on modified subspace pursuit algorithm
    Ying GUO Tian-shuang QIU
    2011, 31(04):  907-909.  DOI: 10.3724/SP.J.1087.2011.00907
    Due to the sparse structure of channels in a number of communication systems, the sparse channel estimation problem can be formulated as the reconstruction of sparse signals and then solved by algorithms from Compressive Sensing (CS) theory. To avoid requiring prior knowledge of the sparsity level, a Modified Subspace Pursuit (MSP) algorithm was proposed. The feedback and refinement processes of MSP are the same as those of the existing Subspace Pursuit (SP); the difference is that in MSP the number of vectors added to the candidate set grows one by one per iteration, rather than being fixed at the sparsity level as in SP. The simulation results show that, compared with the existing subspace pursuit method, the proposed algorithm does not need to assume the channel's sparsity level yet offers superior estimation resolution.
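    The one-atom-per-iteration idea can be sketched as below. This simplified version degenerates to an orthogonal-matching-pursuit-style loop with least-squares refinement and an illustrative stopping rule; SP's backward pruning step is omitted.

```python
# Minimal sketch: grow the candidate support one atom per iteration (no
# sparsity level assumed), refine by least squares, stop when the residual
# stalls. The stopping rule and omissions are my own illustrative choices.
import numpy as np

def msp(A, y, max_iter=20, tol=1e-6):
    m, n = A.shape
    support, residual = [], y.copy()
    x = np.zeros(n)
    for _ in range(max_iter):
        scores = np.abs(A.T @ residual)        # correlation with the residual
        scores[support] = -np.inf              # never re-pick a chosen atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        new_residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) - np.linalg.norm(new_residual) < tol:
            support.pop()                      # last atom did not help; stop
            break
        residual = new_residual
        x = np.zeros(n); x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
true_x = np.zeros(100); true_x[[3, 27, 60]] = [1.0, -2.0, 0.5]
y = A @ true_x + 0.01 * rng.standard_normal(40)
print(np.nonzero(msp(A, y))[0])               # recovered support: [3 27 60]
```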
    Self-adaptive communication architecture of wireless sensor network based on Ptolemy Ⅱ
    Jian-feng GUO Xiao-jun CHEN Jia KE Zhu-jue CHEN
    2011, 31(04):  910-914.  DOI: 10.3724/SP.J.1087.2011.00910
    In existing heterogeneous Wireless Sensor Network (WSN) architectures, the various data link layers lack a common structure. To solve this problem, the concept of an attribute assembly layer was proposed. Based on the Ptolemy Ⅱ modeling and simulation platform, the authors modeled the attribute assembly layer and the data link layer respectively, and then put forward an adaptive architecture for WSN. In the attribute assembly layer, an attribute factory was designed to classify the prototypes of communication protocols and encapsulate upper-layer applications, while an assembly factory was designed to generate packet headers and distribute packets to different networks. This architecture unifies the data link layers of heterogeneous networks; it is compatible with existing platforms, communication protocols and network mechanisms, and can also accommodate future communication protocols and mechanisms. The experimental results show that communication systems based on this architecture have low memory occupation and time cost, as well as good adaptive capacity.
    Web QoS identification and proportional-integral control based on admission probability
    Fu-quan TIAN Wen-bo XU
    2011, 31(04):  915-917.  DOI: 10.3724/SP.J.1087.2011.00915
    A new control strategy named Admission Probability Control (APC) was proposed for the Web QoS control problem. In the new control process, admission control was executed randomly according to an Admission Probability (AP) rather than deterministically. Introducing service differentiation into the control process, a service-differentiated APC named SDAPC was established. Moreover, an Adaptive Genetic Algorithm (AGA) was used to tune the control parameters. The simulation results demonstrate that the SDAPC system is capable of guaranteeing the QoS of the Web server, as well as providing more desirable service for premium requests.
    Active scheduling protocol of local multi-line barrier coverage sensors
    Ying-ying CAO Jian-jiang YU Li-cai ZHU Jia-jun SUN Xiao-xia WAN
    2011, 31(04):  918-921.  DOI: 10.3724/SP.J.1087.2011.00918
    To meet the needs of intrusion detection systems used in complex natural environments such as coastal mudflats, an improved barrier coverage model, a multi-line barrier coverage scheduling protocol named k-MLBCSP, a coverage layout algorithm and a coverage adjustment algorithm were proposed. The k-MLBCSP protocol divided the network lifetime into three phases. In the initialization phase, the coverage layout algorithm guaranteed reasonable network settings. In the adjustment phase, the coverage adjustment algorithm provided an effective way for the sink and live sensors to further negotiate coverage layout strategies. The theoretical analysis and simulations show that, compared with LBCP and RIS, k-MLBCSP increases the sensor network's coverage probability and lifetime. Furthermore, k-MLBCSP reduces the time complexity and the network load.
    Improvement of DV-Hop localization based on shuffled frog leaping algorithm
    Yu GE Xue-ping WANG Jing LIANG
    2011, 31(04):  922-924.  DOI: 10.3724/SP.J.1087.2011.00922
    In order to reduce the node localization error of the DV-Hop algorithm in Wireless Sensor Networks (WSN), the calculation of the average distance per hop was adjusted using the shuffled frog leaping algorithm. The improved DV-Hop algorithm makes the average distance per hop closer to the actual value, thereby reducing the localization error. The simulation results indicate that the improved DV-Hop algorithm reduces localization error effectively and has good stability without additional hardware; therefore, it is a practical localization solution for WSN.
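    For reference, the baseline DV-Hop hop-size estimate that the paper refines is simply the ratio of summed inter-anchor distances to summed hop counts; the shuffled-frog-leaping search itself is not reproduced in this sketch.

```python
# Baseline DV-Hop average-hop-distance estimate (the quantity the paper
# tunes with the shuffled frog leaping algorithm).
import numpy as np

def avg_hop_distance(anchors, hops):
    """anchors: (k, 2) positions; hops[i][j]: hop count between anchors i, j."""
    dist_sum, hop_sum = 0.0, 0
    k = len(anchors)
    for i in range(k):
        for j in range(i + 1, k):
            dist_sum += np.linalg.norm(anchors[i] - anchors[j])
            hop_sum += hops[i][j]
    return dist_sum / hop_sum   # distance per hop, averaged over anchor pairs

anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0]])
hops = [[0, 3, 4], [3, 0, 5], [4, 5, 0]]
print(avg_hop_distance(anchors, hops))  # an unknown node at h hops is ~h * this
```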
    Delay-constrained multicast routing algorithm based on optimized path selection
    Wei-qun LIU Yuan-chen LI
    2011, 31(04):  925-927.  DOI: 10.3724/SP.J.1087.2011.00925
    A fast and effective delay-constrained multicast routing algorithm was put forward for the generation of delay-constrained multicast trees. Drawing on KPP, the algorithm used a new path selection function that balances cost and delay. Meanwhile, it guaranteed the performance of the multicast tree with lower complexity while jointly optimizing cost and delay. The simulation results show that the algorithm not only constructs delay-constrained multicast trees correctly, but also achieves lower cost and lower computational complexity than KPP.
    Minimum delay and minimum interference routing algorithm for multi-protocol label switch traffic engineering
    Xiao FU Xing-ming LI
    2011, 31(04):  928-930.  DOI: 10.3724/SP.J.1087.2011.00928
    Based on the idea of the Minimum Interference Routing Algorithm (MIRA), a Minimum Delay Minimum Interference (MDMI) routing algorithm was proposed to implement a QoS-based routing scheme for Multi-Protocol Label Switching (MPLS) traffic engineering. The algorithm first calculated k minimum-delay candidate routes, then selected the optimal route while avoiding critical resources through network flow computation. The algorithm can balance the overall load, improve resource utilization and offer a delay-constrained routing mechanism at the same time. The simulation results show that the new algorithm works better at decreasing the Label Switched Path (LSP) rejection rate and increasing network throughput, though at the cost of a k-fold increase in time complexity.
    Simulation and design of space communication network based on IP
    Jia-qiang DONG
    2011, 31(04):  931-934.  DOI: 10.3724/SP.J.1087.2011.00931
    In order to adapt to the development of space communications, a space communication network model based on IP was put forward. The key technologies of the communication network, including the data link structure, the data transport protocol and the data packet structure, were designed in detail. Finally, the communication capacity of the network model was compared with that of the commercial communication system Teledesic. The simulation results demonstrate that the communication network can effectively reduce the switching frequency and communication delay between spacecraft and communication satellites, provide long-duration and stable communications to spacecraft, and deliver strong average throughput, which meets the demands of future space communications.
    Computer software technology
    Operation partitioning for heterogeneous VLIW DSP based on dataflow graph
    Peng-fei QIU Yi HONG Rui GENG Yun XU
    2011, 31(04):  935-937.  DOI: 10.3724/SP.J.1087.2011.00935
    The Instruction-Level Parallelism (ILP) of a VLIW DSP processor is obtained through operation partitioning and software pipelining. Previous research on operation partitioning focused on reducing move operations between clusters, but rarely considered the effect of heterogeneous architectures and the registers that must be placed on a reserved cluster. A method based on the DataFlow Graph (DFG) for heterogeneous architectures was described to solve this problem. First, the DFG was partitioned into several sub-graphs according to the relations between operations; then the sub-graphs were refined with a heuristic method to meet the requirements of special registers. The experimental results show that this method makes the cluster load more balanced and achieves an average improvement of 8% over the traditional method.
    Realization of multiprocessor scheduling algorithm and its modeling simulation based on Petri net
    Yi-qi WANG Qing-kun LIU Jian ZHANG
    2011, 31(04):  938-941.  DOI: 10.3724/SP.J.1087.2011.00938
    Multiprocessor scheduling algorithms are key to embedded real-time systems. According to multiprocessor features, a new dynamic parallel real-time multiprocessor scheduling algorithm, named Split-Parallel (SPara), was proposed. The algorithm addressed the problem that previous algorithms, such as Myopic and EDPF, schedule tasks by deadline alone: it added an urgency restriction and an effective treatment for tasks with long execution times and tight deadlines. Furthermore, the scheduling algorithm was modeled and analyzed with high-level coloured timed Petri nets, and based on the model an example of the SPara algorithm was simulated and tested. The experimental results show that SPara performs much better than algorithms such as Myopic in processor utilization and scheduling success ratio.
    Storage solution to security of embedded remote upgrade
    Heng YIN Hua YAN
    2011, 31(04):  942-944.  DOI: 10.3724/SP.J.1087.2011.00942
    Given the limitations of the boot function and memory in embedded remote upgrading, the failure of the remote upgrade procedure for ARM7 industrial control devices was analyzed. A storage structure named "Double-System", based on scatter loading, was proposed, and its principle was described in detail. A set of experimental results verifies its effectiveness in ensuring the safety of remote upgrades.
    Construction method of low density parity check code matrix based on two-generation tree structure
    Jing ZHANG
    2011, 31(04):  945-947.  DOI: 10.3724/SP.J.1087.2011.00945
    To remedy the defects of traditional sparse matrix construction algorithms for Low Density Parity Check (LDPC) codes, which are hard to implement or yield unsatisfactory results, a new parity-check matrix searching algorithm was proposed based on a two-generation tree structure. The algorithm used the tree data structure and could more reasonably describe the skipping relations of the non-zero elements across the rows and columns of the sparse check matrix. Combined with the Ant Colony Algorithm (ACA), which has advantages in path seeking, the proposed algorithm is simple and easy to realize. Furthermore, it is easy to apply to irregular codes by importing outside information into the code.
    Data generation method of database system test based on reverse query process
    Li-yun FENG Hong MEI Qiu-hui YANG Hong-yu ZHOU Kang ZANG
    2011, 31(04):  948-951.  DOI: 10.3724/SP.J.1087.2011.00948
    It is important to generate test data and database states for database system testing. The Reverse Query Processing (RQP) algorithm provides a way to generate test data for database system testing, but it can only process Select queries. In order to address this limitation, RQP was expanded into Reverse Manipulate Processing (RMP) so that it can be used for all data manipulation statements in SQL. The main idea of RMP is to transform DML statements such as Delete, Insert and Update into Select; that is, a Select statement describes the conditions a database instance should meet for testing the DML statements. RQP then takes the transformed Select statements as inputs and returns a possible database instance that satisfies the conditions. RMP can reversely process the fundamental SQL statements and provides better support for automatic data generation in database system testing.
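    A toy illustration of this DML-to-Select rewriting: each statement is mapped to a Select describing the rows a test instance must contain. The string handling below covers only the statement shapes shown and is not a general SQL rewriter.

```python
# Naive sketch of the RMP transformation idea: rewrite a DML statement into
# a Select whose result set a generated test database must satisfy.
def dml_to_select(stmt):
    s = stmt.strip().rstrip(";")
    lowered = s.lower()
    if lowered.startswith("delete from"):
        # rows matching the WHERE clause must exist to be deleted
        return "SELECT * FROM " + s[len("delete from"):].strip()
    if lowered.startswith("update"):
        table, rest = s[len("update"):].split(" SET ", 1)
        where = rest.split(" WHERE ", 1)[1] if " WHERE " in rest else "1=1"
        return f"SELECT * FROM {table.strip()} WHERE {where}"
    if lowered.startswith("insert into"):
        table = s[len("insert into"):].replace("(", " ").split()[0]
        return f"SELECT * FROM {table}"  # instance only needs the target table
    raise ValueError("unsupported statement")

print(dml_to_select("UPDATE emp SET sal = sal * 1.1 WHERE dept = 10;"))
print(dml_to_select("DELETE FROM emp WHERE sal > 5000;"))
```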
    Information security
    Security model of elastic applications for mobile cloud computation
    Guang-xia XU Shu-yu CHEN
    2011, 31(04):  952-955.  DOI: 10.3724/SP.J.1087.2011.00952
    Based on elastic computing resources from clouds, a security model for elastic applications on resource-constrained devices was set up. First, an elastic application consisting of one or more Weblets was introduced. Each Weblet could be launched on the device or in the cloud, and could be migrated between them according to dynamic changes of the computation environment or user preferences. Then, the security requirements of this access pattern were analyzed, and a security design model for elastic applications was proposed, covering authentication and secure session management between Weblets running on the mobile device and those in the cloud. The proposed model handles secure migration between Weblets and authorizes cloud Weblets to access sensitive user data via external Web services. The principles can be applied in cloud computing scenarios such as application integration between private and public clouds in an enterprise environment.
    Steganalysis for JPEG MB2 steganography based on histogram differentiation random sequence
    Ke QI Dong-qing XIE
    2011, 31(04):  956-959.  DOI: 10.3724/SP.J.1087.2011.00956
    Based on the Discrete Cosine Transform (DCT) coefficient histogram difference sequences of JPEG images, a new steganalytic method was proposed for the steganalysis of JPEG MB2. The DCT coefficient histogram difference sequences were defined as a measure of the correlation between a JPEG image and its Cauchy-distributed fitting model, and this measure was used to construct classification feature sequences that discriminate stego-images from carrier images. The Hilbert-Huang transform decomposed the sequences by empirical mode decomposition and constructed Hilbert-based feature vectors. The experimental results demonstrate that the detection rate for hidden messages reaches 84.48% with a Support Vector Machine (SVM) classifier. The approach is also applicable to JPEG MB1 steganalysis.
    High-capacity video steganography based on motion vector phase and convolutional code
    Peng YANG Li-xian WEI Xiao-yuan YANG
    2011, 31(04):  960-962.  DOI: 10.3724/SP.J.1087.2011.00960
    In order to increase the capacity of secret information in video steganography, a high-capacity video steganography algorithm was proposed based on motion vector phase and convolutional codes. By studying the motion information of P frames and B frames in each Group of Pictures (GOP), the secret information was first processed by an interlace switch; different sizes of motion vector phase were then used to represent different information and denote the basic generator matrix, and the convolutional code was used to embed the secret information. The experimental results show that the proposed algorithm not only has a large embedding capacity, but also provides good imperceptibility and strong robustness for the secret information. It achieves high capacity in video steganography while maintaining good video quality.
    Choice of hiding capacity and frequency coefficients in DCT domain hiding algorithm
    Jian-quan XIE Qing XIE Li-jun TIAN
    2011, 31(04):  963-965.  DOI: 10.3724/SP.J.1087.2011.00963
    Hiding capacity, robustness and invisibility are key parameters of an information hiding system, and they are strongly affected by the choice of Discrete Cosine Transform (DCT) coefficients in DCT-domain hiding algorithms. The embedding capacity, as affected by the mutual interference between the visual perception of different DCT coefficients and the inverse DCT, was analyzed. Furthermore, the relation between hiding capacity and coefficient selection in DCT-domain hiding algorithms was derived. The conclusion that there is no correlation between the embedding position and the robustness of the embedded information against compression was put forward, and the embedding capacity can be improved in light of this conclusion. The experimental results show that the conclusion holds even when the system is disturbed by noise.
    Precise three-watermarking algorithm for image tamper localization and recovery
    Yan ZHOU Fan-zhi ZENG Yang-ju ZHUO
    2011, 31(04):  966-969.  DOI: 10.3724/SP.J.1087.2011.00966
    Concerning the limited tamper localization accuracy and tamper recovery performance of existing image tamper localization and recovery algorithms, the authors proposed a precise three-watermarking algorithm. It generated three types of watermarks, namely a detection watermark, a localization watermark and a recovery watermark, by binary coding based on the Least Significant Bit (LSB). The watermarks were embedded into the low bits of the image. Tamper detection and recovery were implemented block-wise by the detection and recovery watermarks, and precise localization was implemented pixel-wise by the localization watermark. The simulation results show that the proposed algorithm achieves precise tamper localization for grayscale and RGB images of any size, and has good tamper recovery performance.
    Software protection model for measurement applications
    Qin-gui XU Gui-xiong LIU Fu-rong GAO
    2011, 31(04):  970-974.  DOI: 10.3724/SP.J.1087.2011.00970
    Measurement applications such as trade settlement require their metrological software and running environment to be protected against unauthorized modifications from attackers, including the management user, which is not fully supported by existing security models. A measurement-oriented software protection model named MBSPM was proposed. A role-domain-type access control strategy was adopted to authorize data access permissions to software modules. Mandatory access control was employed to enforce multi-level data protection and the separation of legally relevant software. The integrity of the system software was validated using a Trusted Platform Module (TPM), and unauthorized modification of metrology parameters was prevented with tamper-proof storage. The experimental results with a virtual weighing system show that MBSPM supports the software protection features required by metrological applications: compared with running without MBSPM, the startup time increases by about 50%, while the speed of opening files and starting applications drops by no more than 20%.
    Modification of Cao's multicast key management scheme based on generalized cat map
    Quan-di WANG Jin-feng LI Jie ZHOU
    2011, 31(04):  975-977.  DOI: 10.3724/SP.J.1087.2011.00975
    It is shown that Cao Guoliang et al.'s multicast key management scheme based on the generalized cat map does not satisfy the independent security requirement of group members' individual keys. A modified scheme using a one-way function was proposed, which satisfies this requirement. Compared with Cao Guoliang et al.'s scheme, the modification offers higher security with lower computation and communication overheads.
    Broadcast encryption scheme based on secret sharing
    Zhi-wei LIAO Xiao-ming WANG
    2011, 31(04):  978-980.  DOI: 10.3724/SP.J.1087.2011.00978
    Many applications require a broadcast encryption scheme to minimize the amount of decryption computation. Concerning this requirement, a new broadcast encryption scheme was proposed by using secret sharing in a different way. The improved scheme reduced the decryption computation by pre-reconstructing the interpolation share. Analysis shows that the scheme only needs to encrypt the plaintext once, after which subscribers can decrypt the ciphertext using their secret keys with less computation. The scheme can also remove and add subscribers securely without changing the subscribers' secret keys, and it is collusion-resistant.
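    The interpolation-share idea rests on Shamir-style secret sharing, where the Lagrange weights for reconstructing f(0) can be precomputed. The sketch below shows only that generic mechanism, not the paper's broadcast scheme; the prime and polynomial are arbitrary.

```python
# Shamir-style sharing: the session secret is f(0) of a low-degree polynomial
# over GF(P); precomputing the Lagrange weights moves work out of decryption.
P = 2**127 - 1  # a Mersenne prime

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def lagrange_weight(xs, i):
    """Lagrange coefficient for interpolating at 0; can be precomputed."""
    num = den = 1
    for j, xj in enumerate(xs):
        if j != i:
            num = num * (-xj) % P
            den = den * (xs[i] - xj) % P
    return num * pow(den, -1, P) % P

secret = 123456789
coeffs = [secret, 11, 7]                    # degree-2 polynomial, f(0) = secret
xs = [1, 2, 3]
shares = [eval_poly(coeffs, x) for x in xs]
weights = [lagrange_weight(xs, i) for i in range(3)]   # precomputation step
print(sum(w * s for w, s in zip(weights, shares)) % P == secret)  # True
```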
    Identity-based group key management scheme in pervasive computing environment
    Shi-wei HUO Zhong-min CAI Chang-yuan LUO
    2011, 31(04):  981-983.  DOI: 10.3724/SP.J.1087.2011.00981
    The requirements of group key management in pervasive environments were analyzed. A new identity-based group key management scheme was proposed by combining identity-based cryptography with the STR protocol. For nodes freely joining and leaving the group, a group key renewal protocol was proposed, which guarantees the forward and backward security of the group key. The scheme achieves the security requirements with low computation and communication loads.
    Sensitive information transmission scheme based on magic cube algorithm in automated trust negotiation
    Jian-li LI Guang-lei HUO Bo LIU Yong GAO
    2011, 31(04):  984-988.  DOI: 10.3724/SP.J.1087.2011.00984
    To solve the problem of transmitting credentials and other resources over insecure physical channels during Automated Trust Negotiation (ATN), a transmission scheme for credentials and resources was proposed based on a magic cube algorithm. Through the magic cube algorithm, a transformation sequence was formed from the request or the resource of the negotiation initiator, followed by a digital digest to generate the information transformation sequence. According to the logical expression composed of the credentials that represent the conditions for negotiation success, the information transformation sequence was shuffled into an information transmission sequence, which was sent to the negotiation receiver. The receiver inverted the transformation according to his own credentials. The scheme features one-round credential exchange and low network cost. The example shows that the scheme is feasible, and the experimental results show that it offers good security and efficiency with a low transmission volume.
    Analysis and improvement of a proxy blind signature
    Li WAN Fang-wei LI Shao-jun YAN
    2011, 31(04):  989-991.  DOI: 10.3724/SP.J.1087.2011.00989
    After analyzing a proxy blind signature scheme proposed by Huang et al., it was pointed out that the scheme is insecure against forgery attacks by the original signer and the signature receiver. To overcome these security problems, an improved proxy blind signature scheme was proposed. The new scheme resists the forgery attacks on the former scheme and meets the security requirements of proxy blind signatures. The analytical results show that the new scheme is more secure and practicable, and suitable for electronic cash.
    Efficient partially blind signature scheme without trusted private key generator
    Xiao-ping ZHANG Cheng ZHONG
    2011, 31(04):  992-995.  DOI: 10.3724/SP.J.1087.2011.00992
    The partially blind signature scheme without a trusted Private Key Generator (PKG) proposed by Feng Tao et al. was analyzed, and it was found that the scheme does not satisfy unforgeability: a dishonest PKG can forge a valid partially blind signature. By using a gap Diffie-Hellman group and bilinear pairings, a new partially blind signature scheme without a trusted PKG was proposed. The analysis shows that the proposed scheme overcomes the defect of the original scheme and possesses unforgeability, correctness, partial blindness and traceability. Compared with the original scheme, it is computationally more efficient because it saves two pairing operations.
    Simple improvement for S/KEY authentication scheme
    Bing HE
    2011, 31(04):  996-998.  DOI: 10.3724/SP.J.1087.2011.00996
    The author analyzed some defects of the traditional S/KEY One-Time Password (OTP) authentication scheme and its recent improvements, and proposed a new S/KEY improvement scheme. The new scheme provides mutual authentication using hash values of the user password as an authentication factor, and it effectively prevents key messages from being forged by adding message integrity protection. The new scheme is as simple and easy to implement as the traditional S/KEY scheme, while effectively resisting replay attacks, small-integer attacks and impersonation attacks.
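    The underlying S/KEY hash chain is easy to sketch; the mutual-authentication and integrity additions described above are not shown.

```python
# Minimal S/KEY-style one-time password chain: the server stores h^N(seed);
# the client reveals h^(N-1)(seed), which the server hashes once to verify
# and then stores as the new reference value.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(seed: bytes, n: int) -> bytes:
    out = seed
    for _ in range(n):
        out = h(out)
    return out

seed, N = b"user-password-and-salt", 100
server_stored = chain(seed, N)      # enrollment
otp = chain(seed, N - 1)            # client's next one-time password
assert h(otp) == server_stored      # server check; then stores otp as reference
print("login ok")
```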
    Privacy protection method for composite sensitive attribute based on semantics similarity and multi-dimensional weighting
    Long-qin XU Shuang-yin LIU
    2011, 31(04):  999-1002.  DOI: 10.3724/SP.J.1087.2011.00999
    In view of the many privacy disclosures that occur when the k-anonymity method is applied directly to publishing data with multiple sensitive attributes, a privacy preserving algorithm for composite sensitive attributes based on semantic similarity and multi-dimensional weighting was proposed. The algorithm protects joint sensitive attribute values and preserves the semantic diversity of privacy groups with the help of a semantic-similarity anti-clustering principle. According to different application needs, different degrees of data privacy protection can be provided. The experimental results show that the method can effectively protect data privacy and enhance data security and practicality.
    AES security model based on multi-core multithread
    Dan-hua LU Cheng ZHONG Feng YANG
    2011, 31(04):  1003-1005.  DOI: 10.3724/SP.J.1087.2011.01003
    To meet the speed requirements for encrypting and decrypting large files in high-speed networks, an Advanced Encryption Standard (AES) security model based on multi-core technology, named MACBC, was presented. Exploiting features of multi-core computers such as multi-level caches and shared memory, MACBC split a large file into data blocks that were encrypted and decrypted by multiple threads, with security guaranteed and memory usage essentially unchanged. The experimental results indicate that the speedup of this model is obvious, and the larger the file, the higher the speedup ratio.
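    The split-and-thread idea can be sketched as below: chunks are encrypted concurrently, each as an independent AES-CBC stream with its own IV. The chunking and IV handling are illustrative rather than the MACBC design, and the third-party pycryptodome package is assumed.

```python
# Parallel chunked AES encryption sketch; assumes `pip install pycryptodome`.
from concurrent.futures import ThreadPoolExecutor
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad
import os

KEY = os.urandom(16)
CHUNK = 1 << 20  # 1 MiB per chunk

def encrypt_chunk(chunk: bytes) -> bytes:
    iv = os.urandom(16)  # independent IV so chunks need no chaining
    return iv + AES.new(KEY, AES.MODE_CBC, iv).encrypt(pad(chunk, 16))

def encrypt_parallel(data: bytes, workers=4):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encrypt_chunk, chunks))  # order preserved

ciphertext_chunks = encrypt_parallel(os.urandom(8 * CHUNK))
print(len(ciphertext_chunks))  # pycryptodome releases the GIL, so threads scale
```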
    Malware detection based on attributes order reduction
    Ning GUO Xiao-yan SUN He LIN Hua MOU
    2011, 31(04):  1006-1009.  DOI: 10.3724/SP.J.1087.2011.01006
    Existing methods for malware feature selection and reduction were studied. Current attribute reduction methods for malware do not exploit the information in the feature selection evaluation function, so a method was proposed that orders all features by their information gain and size, and applies attribute-order reduction to obtain a reduction. The spatial and temporal complexity was analyzed, and the overall design was given. Test results show that attribute-order reduction obtains smaller reductions in less time, and the reduction results yield higher classification accuracy.
    Graphics and image technology
    Local-global algorithm for triangular mesh optimization
    Wei LI Wen-biao JIN Xian-qian XIAO
    2011, 31(04):  1013-1015.  DOI: 10.3724/SP.J.1087.2011.01013
    In image resizing algorithms based on mesh deformation, mesh quality is crucial. A new local-global triangular mesh optimization algorithm was proposed to improve the quality of the triangular mesh representing the image being resized. In the local step, the equilateral triangle most similar to each triangle in the mesh was obtained using custom rules, yielding a set of target affine transformations. In the global step, the optimal position of each node was solved by the least-squares method in the as-rigid-as-possible manner, minimizing the deformation energy function. Meanwhile, constraints were added in the optimization to keep the critical areas of the mesh from changing. The experimental results demonstrate that the quality of the planar triangular mesh is greatly improved.
    Improved rectangle NAM algorithm for image representation
    Peng WU Hou-quan XU Chuan-bo CHEN Guang-yue LU Kai XIE
    2011, 31(04):  1016-1018.  DOI: 10.3724/SP.J.1087.2011.01016
    In order to improve image representation efficiency, an improved rectangle Non-symmetry and Anti-packing representation Model (NAM), named IRNAM, was proposed for image coding. The scheme adopted double-rectangle sub-patterns to represent gray images, integrated a bit-plane optimization strategy and stored the sub-patterns continuously, so the number of sub-patterns decreased sharply. The experimental results show that, compared with the rectangle NAM algorithm and other improved NAM algorithms, IRNAM reduces the number of sub-patterns apparently, saving data storage space effectively and proving to be a high-performance image representation method.
    Feature matching of scale invariant feature transform images based on fractional differential approach
    Li-min ZHANG Shang-bo ZHOU
    2011, 31(04):  1019-1023.  DOI: 10.3724/SP.J.1087.2011.01019
    The fractional differential approach can strengthen and extract the textural features of two-dimensional digital images, so image features can be enhanced and extracted more easily. In order to improve the accuracy of image matching, an improved Scale Invariant Feature Transform (SIFT) matching algorithm based on fractional differential theory was proposed. By combining the Gaussian filter with a fractional differential filter to enhance image features, more extrema and keypoints can be detected. The simulation results show that, compared with the original SIFT, the proposed algorithm detects more keypoints and improves the accuracy of image matching.
    Image feature extraction and matching of color-based scale-invariant feature transform
    Yin-chu WU Rong MA
    2011, 31(04):  1024-1026.  DOI: 10.3724/SP.J.1087.2011.01024
    Performing image feature extraction and matching after a color image has been converted to grayscale may lose color information and lead to false matches. To solve this problem, an image feature extraction and matching method named Color-based Scale-Invariant Feature Transform (CSIFT) was used for color images. Combining the color features with the geometrical features of objects, feature points were extracted, and the neighborhood around these points was described using color invariant values as the input information. The points of two images were then matched using the nearest-neighbor method. The algorithm was applied to visual odometry: feature extraction and matching were performed on two adjacent camera frames and compared with the SIFT algorithm in the experiment. The result shows that the algorithm is effective.
    Vascular segmentation for lung CT images based on fractional differential enhancement
    Jun LAI Mei XIE
    2011, 31(04):  1027-1029.  DOI: 10.3724/SP.J.1087.2011.01027
    In order to improve the accuracy of automatic vascular segmentation in lung CT images, after studying image enhancement and segmentation methods and the ability of the fractional differential operator to enhance image details, a local sub-regional segmentation method based on fractional differential enhancement was proposed. The method first enhanced the lung CT images with a fractional differential operator, and then the optimal threshold, obtained under two control indexes, was used for vascular segmentation in each local region. The experimental results show that the proposed method can effectively extract the vascular network while retaining more vessel detail, and that compared with traditional pulmonary vascular segmentation methods, it segments the pulmonary vessels in CT images more accurately.
    Segmentation of maximum entropy threshold based on gradient boundary control
    Qian WANG
    2011, 31(04):  1030-1032.  DOI: 10.3724/SP.J.1087.2011.01030
    Combining two essential characteristics of an image, the gradient and the gray level, a maximum entropy threshold segmentation approach with gradient boundary control was proposed. In this approach, a boundary gradient control function was defined to quantify the richness of detail information in the image. The local maxima of this function indicate a candidate threshold set for image segmentation. Within this set, the best threshold can be selected using the maximum entropy principle to obtain the binary image. The experimental results show that the regions segmented by this method retain ample detail information and thus carry more of the image's semantics. Moreover, the method also exhibits a degree of noise immunity.
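    The maximum-entropy half of the method is the standard Kapur criterion, sketched below over an unrestricted candidate set; the gradient-based boundary control that narrows the candidate thresholds is not reproduced.

```python
# Kapur maximum-entropy thresholding: choose the threshold maximizing the
# summed entropies of the background and foreground histogram segments.
import numpy as np

def entropy(part):
    s = part.sum()
    if s <= 0:
        return 0.0
    q = part[part > 0] / s
    return float(-(q * np.log(q)).sum())

def max_entropy_threshold(gray, candidates=range(1, 255)):
    p = np.bincount(gray.ravel(), minlength=256).astype(float)
    p /= p.sum()
    # the paper would pass only the gradient-selected thresholds as candidates
    return max(candidates, key=lambda t: entropy(p[:t]) + entropy(p[t:]))

img = np.clip(np.random.default_rng(0).normal(100, 30, (64, 64)), 0, 255).astype(np.uint8)
t = max_entropy_threshold(img)
print(t, (img >= t).mean())   # threshold and resulting foreground fraction
```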
    Adaptive and reversible image pre-processing for efficient entropy coding
    Yu-ying ZHENG
    2011, 31(04):  1033-1036.  DOI: 10.3724/SP.J.1087.2011.01033
    Most image pre-processing approaches applied before encoding cause information loss. An adaptive and reversible image pre-processing method was proposed to improve the performance of entropy coding. It consists of two steps, namely bit-plane decomposition and a new reversible image transform scheme under different transform patterns in 8/4-pixel processing. Among the transformed images, the one with the least entropy was selected as the pre-processing output, with its histogram redistributed into a U shape. The experimental results illustrate that the proposed method efficiently improves entropy coding, achieving a higher compression ratio than plain entropy coding.
    Adaptive filtering method for images based on pulse-coupled neural network
    Hai-yan LI Yu-feng ZHANG Xin-ling SHI Jian-hua CHEN
    2011, 31(04):  1037-1039. 
    An adaptive filter based on the Pulse Coupled Neural Network (PCNN) firing matrix was proposed to detect and remove salt-and-pepper noise in an image. The PCNN was simplified, and a unidirectionally decaying threshold was introduced to avoid complex parameter selection and improve processing speed. Noise-polluted pixels were detected by analyzing the PCNN firing matrix, and only the noisy pixels were filtered by a median filter, protecting image edges and details. The window size of the filter and the filtering time were adaptively determined by estimating the noise intensity of the contaminated image. The experimental results show that the proposed method removes noise better than traditional filters while preserving detailed information.
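    The selective filtering stage can be sketched as below, where a crude extreme-value test stands in for the PCNN firing-matrix detector and the window size is fixed rather than derived from the estimated noise intensity.

```python
# Filter only the pixels flagged as noise, leaving the rest untouched.
# The stand-in detector simply flags pure black/white pixels.
import numpy as np

def selective_median(img, flagged, k=3):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for r, c in zip(*np.nonzero(flagged)):
        out[r, c] = np.median(padded[r:r + k, c:c + k])  # window centered at (r, c)
    return out

rng = np.random.default_rng(0)
img = rng.integers(60, 200, (64, 64)).astype(np.uint8)
noisy = img.copy()
mask = rng.random(img.shape) < 0.05                  # 5% salt-and-pepper noise
noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
flagged = (noisy == 0) | (noisy == 255)              # stand-in noise detector
print(np.abs(selective_median(noisy, flagged).astype(int) - img).mean())
```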
    Color filter array interpolation based on contourlet local Gaussian model and total variation
    Li-yuan DING Qiu-sheng LIAN
    2011, 31(04):  1040-1042.  DOI: 10.3724/SP.J.1087.2011.01040
    Single-chip digital cameras use a Color Filter Array (CFA) to sample color information, and a CFA interpolation algorithm interpolates these data to produce an RGB image. A color image CFA interpolation algorithm was proposed based on a contourlet local Gaussian model and Total Variation (TV). In order to improve the edge interpolation quality, the sparsity of the image gradient was incorporated into the interpolation process, and the Color Total Variation (CTV) was introduced to measure it. The experimental results show that the proposed algorithm outperforms classical algorithms in terms of both Peak Signal-to-Noise Ratio (PSNR) and visual quality.
    3D reconstruction algorithm for computer vision using four camera array
    Ze-xi FENG Hui ZHANG Yong-ming XIE Min ZHU
    2011, 31(04):  1043-1046.  DOI: 10.3724/SP.J.1087.2011.01043
    Current three-dimensional reconstruction algorithms in computer vision are limited in that they need cameras deployed and calibrated around the scene, or require structured light; furthermore, they are not robust for every object. A new four-camera-array reconstruction algorithm that properly combines image registration with the camera array method was proposed to address these robustness and deployment limitations. It needs neither calibration nor structured-light support. Experiments on a complex indoor scene with shadows demonstrate that the method performs dense point cloud reconstruction robustly and overcomes the shortcomings of current reconstruction algorithms.
    Precise spatial-temporal tracking method for maneuvering video objects
    Bo HU
    2011, 31(04):  1047-1049.  DOI: 10.3724/SP.J.1087.2011.01047
    A video object tracking method combining Bhattacharyya coefficient maximization with spatial-temporal information was proposed. A Kalman filter was used to predict the target's movement in the time domain, while in the space domain the target was precisely matched using the Camshift algorithm. Because of the strong maneuverability of the moving target, there can be a relatively large discrepancy between the predicted and true positions, which causes tracking failure in subsequent frames. To deal with this problem, a kernel matching approach based on Bhattacharyya coefficients was adopted in a coarse-to-fine manner: the search scope was properly enlarged around the predicted position, the initial matching window was selected by Bhattacharyya coefficient maximization, and finally the target was precisely matched by the Camshift algorithm. The experimental results indicate that the proposed method tracks fast-maneuvering targets with high precision.
    Computation of approximate geodesics on point cloud
    Bin YANG Yuan-yuan FAN Ji-dong WANG
    2011, 31(04):  1050-1052.  DOI: 10.3724/SP.J.1087.2011.01050
    In order to compute approximate geodesics between two points on a point cloud efficiently, a weighted graph was constructed by splitting the point cloud along the Cartesian coordinate axes, so that an initial approximate geodesic between any two given points could be computed with Dijkstra's algorithm. The conjugate gradient method was then adopted to minimize the defined energy function, and the approximate geodesic was obtained after some iterations. The proposed algorithm avoids meshing the point cloud or reconstructing it into a local or global surface, and it is suitable for computing geodesics on large-scale point clouds.
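    The graph-plus-Dijkstra stage can be illustrated on a k-nearest-neighbor graph; the paper instead builds its graph by splitting the cloud along coordinate axes and follows with conjugate-gradient refinement, both of which are skipped here.

```python
# Approximate geodesic distance via Dijkstra on a k-NN graph over the points.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def knn_geodesic(points, src, dst, k=8):
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)   # first neighbor is the point itself
    n = len(points)
    rows = np.repeat(np.arange(n), k)
    graph = csr_matrix((dists[:, 1:].ravel(), (rows, idx[:, 1:].ravel())), shape=(n, n))
    d = dijkstra(graph, directed=False, indices=src)
    return d[dst]

# points on a unit sphere: the pole-to-pole geodesic should approach pi
rng = np.random.default_rng(0)
pts = rng.standard_normal((2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts[0], pts[1] = (0, 0, 1.0), (0, 0, -1.0)
print(knn_geodesic(pts, 0, 1))   # ~3.2, close to pi
```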
    Matching method of plane image based on random sample consensus algorithm
    Bo ZHOU Jian YANG Dong-ping WANG
    2011, 31(04):  1053-1056.  DOI: 10.3724/SP.J.1087.2011.01053
    Concerning the large amount of computation and the poor accuracy of traditional plane image target point matching methods, a fast and highly precise matching method was proposed based on the RANdom SAmple Consensus (RANSAC) random sampling algorithm. Firstly, based on the dual conic model, the ellipse's gradient vector was estimated by directly exploiting the raw gradient information in the neighborhood of the ellipse's boundary. Secondly, the ellipse's parameters and center were fitted from the gradient vector field. Finally, the points in the image were matched with the target points on the calibration board with the help of RANSAC random sampling. The experimental results verify that the method is simple and highly accurate.
    Line drawing algorithm based on pixel chain
    Xiao-lin ZHU Yong CAI Jiang-sheng ZHANG
    2011, 31(04):  1057-1061.  DOI: 10.3724/SP.J.1087.2011.01057
    In order to increase the efficiency of line drawing when the slope of the line is greater than 0.5, a line drawing algorithm based on pixel chains was proposed. A straight line is treated as an aggregation of horizontal pixel chains or diagonal pixel chains. An algorithm for drawing lines in the reverse direction, similar to the Bresenham algorithm, was introduced, by which slopes between 0.5 and 1 are converted to slopes between 0 and 0.5 during line generation, and one pixel chain is generated per decision. The simulation results show that the line generated by the new algorithm is identical to that generated by the Bresenham algorithm, but the amount of calculation is greatly reduced. The new algorithm uses only integer addition and multiplication, and it is suitable for hardware implementation. Its generation speed is 4 times that of the Bresenham algorithm at the same design complexity.
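    The pixel-chain view can be illustrated by generating Bresenham pixels and grouping them into horizontal and diagonal chains. Note that the paper's algorithm emits whole chains with a single decision each, rather than deriving them from per-pixel output as done below.

```python
# First-octant Bresenham, then a decomposition of its output into chains.
def bresenham(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0          # assumes 0 <= dy <= dx (first octant)
    err, y, pts = 2 * dy - dx, y0, []
    for x in range(x0, x1 + 1):
        pts.append((x, y))
        if err > 0:
            y += 1
            err -= 2 * dx
        err += 2 * dy
    return pts

def to_chains(pts):
    chains, run, kind = [], [pts[0]], None
    for prev, cur in zip(pts, pts[1:]):
        step = "D" if cur[1] != prev[1] else "H"   # diagonal or horizontal move
        if step == kind or kind is None:
            run.append(cur); kind = step
        else:
            chains.append((kind, run)); run, kind = [prev, cur], step
    chains.append((kind or "H", run))
    return [(k, len(r)) for k, r in chains]       # chain type and pixel count

print(to_chains(bresenham(0, 0, 12, 5)))
```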
    Artificial intelligence
    Representing object role modeling models with Web ontology language description logic axioms
    Wen-lin PAN Da-xin LIU
    2011, 31(04):  1062-1066.  DOI: 10.3724/SP.J.1087.2011.01062
    Object Role Modeling (ORM) has been used in ontology engineering to model domain ontologies, which requires representing ORM models in OWL DL axioms in order to check semantic conflicts and redundancy with DL reasoners and to publish ORM ontologies on the semantic Web. By comparing the semantics of ORM models and OWL DL axioms, converting models equivalently, and introducing new operators and properties, mapping rules to represent ORM models in OWL DL axioms were proposed. Except for a few constraints, most ORM model elements can be represented by OWL DL axioms.
    Improved ontology matching method
    Yu-fang ZHANG Chuan LI Zhong-yang XIONG
    2011, 31(04):  1067-1069.  DOI: 10.3724/SP.J.1087.2011.01067
    Traditional ontology matching methods that use the ontology's structure to find matches do not make good use of the structural features, which leads to considerable computational redundancy during the matching process. Therefore, a modified method named TARA was proposed to improve the matching process. The method first performs matching strictly using the ontology's structural information, and then applies a re-match process to overcome the defects that the first pass inevitably causes. The experimental results show that the method performs well in both recall and precision.
    Method of mutual information filtration with dual-threshold for term extraction
    Shi-chao CHEN Bin YU
    2011, 31(04):  1070-1073.  DOI: 10.3724/SP.J.1087.2011.01070
    In order to reduce the impact of the problems inherent in the mutual information method on filtering, a method for candidate term filtration and extraction was proposed, together with a determination algorithm based on a partial evaluation indicator, which can find the best upper and lower thresholds quickly and accurately through data sampling, statistics and computation. Compared with single-threshold mutual information filtration, the proposed method filters and extracts candidate terms by setting two thresholds without changing the mutual information formula. The experimental results show that the proposed method improves the precision rate and F-measure significantly under the same conditions.
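    A dual-threshold filter over pointwise mutual information can be sketched as follows; the paper's sampling-based procedure for choosing the two thresholds is not reproduced, so fixed illustrative values are used.

```python
# Keep a candidate bigram only when its pointwise mutual information lies
# between the lower and upper thresholds.
import math
from collections import Counter

def pmi_filter(tokens, lo=0.5, hi=6.0):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    kept = []
    for (a, b), c in bigrams.items():
        pmi = math.log((c * n) / (unigrams[a] * unigrams[b]))
        if lo <= pmi <= hi:   # dual-threshold acceptance band
            kept.append(((a, b), round(pmi, 2)))
    return kept

text = "support vector machine learns a support vector machine model".split()
print(pmi_filter(text))
```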
    Discriminative parameter learning of Bayesian network classifier
    Hong-bo SHI Ya-qin LIU Ai-jun LI
    2011, 31(04):  1074-1078.  DOI: 10.3724/SP.J.1087.2011.01074
    Concerning the characteristics of Bayesian network classifiers, a discriminative parameter learning algorithm based on a parameter ensemble, named PEBNC, was proposed to improve classification performance. The algorithm treats parameter learning as a regression problem and applies the additive regression model to the parameter learning of Bayesian network classifiers, realizing discriminative parameter learning. The experimental results indicate that PEBNC improves classification performance in most cases. Furthermore, compared with a general ensemble of Bayesian classifiers, PEBNC requires less space because there is no need to store the parameters of individual classifiers.
    Maximum a posteriori classification method based on kernel method under t distribution
    Ru-yan ZHANG Shi-tong WANG Yao XU
    2011, 31(04):  1079-1083.  DOI: 10.3724/SP.J.1087.2011.01079
    Abstract   PDF (738KB)
    To address the failure of the multivariate normal distribution to fit heavy-tailed sample data, a multiclass classification method under the t distribution was proposed. Sample data were mapped into a high-dimensional feature space by the kernel method, and the classification result was obtained from a Bayesian classifier under the Maximum A Posteriori (MAP) rule. Because the multivariate t distribution has an additional degrees-of-freedom parameter ν, it captures the complexity of the sample data more easily and is more robust. Extensive experiments on five standard UCI data sets and three facial image data sets show that the method is feasible and achieves better classification.
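    A minimal sketch of the MAP rule under a multivariate t density (working directly in the input space; the kernel mapping and the paper's parameter estimation are omitted, and the parameter layout is an assumption):

        import numpy as np
        from scipy.special import gammaln

        def mvt_logpdf(x, mu, sigma, v):
            """Log density of a multivariate t distribution with v degrees of freedom."""
            d = len(mu)
            diff = x - mu
            maha = diff @ np.linalg.solve(sigma, diff)      # squared Mahalanobis distance
            return (gammaln((v + d) / 2) - gammaln(v / 2)
                    - 0.5 * d * np.log(v * np.pi)
                    - 0.5 * np.linalg.slogdet(sigma)[1]
                    - 0.5 * (v + d) * np.log1p(maha / v))

        def map_classify(x, params, priors):
            """Pick the class maximizing log prior + log likelihood; params[c] = (mu, sigma, v)."""
            return max(params, key=lambda c: np.log(priors[c]) + mvt_logpdf(x, *params[c]))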
    High-dimensional optimization problems via disturbance chaotic ant swarm algorithm
    Fang-zen GE Zhen WEI Yi-ming TIAN Lu-yang LU
    2011, 31(04):  1084-1089.  DOI: 10.3724/SP.J.1087.2011.01084
    Abstract   PDF (913KB)
    To resolve the problems of high computational complexity and limited search precision in Chaotic Ant Swarm (CAS), a Disturbance CAS (DCAS) algorithm was proposed. DCAS reduces computational complexity through a new greedy method for updating each ant's best position and a random neighbor selection method. Furthermore, a self-adaptive disturbance strategy was introduced to improve precision by strengthening the ants' local search. Extensive computational studies were carried out on a new suite of benchmark functions with up to 1000 dimensions. The results show clearly that the proposed algorithm is both effective and efficient for complex high-dimensional optimization problems.
    Ethnic group evolution algorithm for constrained numerical optimization
    Hao CHEN Xiao-ying PAN Du-wu CUI
    2011, 31(04):  1090-1093.  DOI: 10.3724/SP.J.1087.2011.01090
    Abstract   PDF (556KB)
    In order to improve the performance of the Ethnic Group Evolution Algorithm (EGEA) on constrained functions, a macrogamete filter mechanism based on a linear truncation strategy was proposed to keep the macrogamete scale stable during evolution. The strategy reduces sharp fluctuations in ethnic group structure and effectively improves the search efficiency of EGEA. Simulations on six classical constrained functions show that the linear truncation strategy makes EGEA a competent algorithm for constrained optimization.
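    A minimal sketch of a linear truncation filter (minimization and the fixed-capacity reading are assumptions; the EGEA-specific macrogamete structures are not reproduced):

        def truncate_macrogametes(pop, fitness, cap):
            """Keep only the cap fittest macrogametes (lower fitness is better here)."""
            keep = sorted(range(len(pop)), key=lambda i: fitness[i])[:cap]
            return [pop[i] for i in keep]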
    Self-adaptive differential evolution algorithm based on orthogonal and niche elite for high-dimensional multi-modal optimization
    Shou-heng TUO Wen-yong WANG
    2011, 31(04):  1094-1098.  DOI: 10.3724/SP.J.1087.2011.01094
    Abstract   PDF (741KB)
    The traditional Differential Evolution (DE) algorithm is easily trapped in local optima and suffers from slow convergence and low solution precision. An Orthogonal Niche Differential Evolution (ONDE) algorithm was proposed to resolve these problems. Firstly, an orthogonal table was used to generate the initial population; secondly, a niche elite selection strategy was used to produce the Niche Population (NP) and update the Elite Population (EP) with it; thirdly, crowding-based cutting was applied to prevent trapping in local search; finally, the differential evolution operator was improved with self-adaptive mutation operators. Simulations on seven benchmark functions show that ONDE has advantages in convergence speed, solution precision and stability.
    Improved differential evolution algorithm based on Laplace distribution mutation
    Xing-yang LIU Li MAO
    2011, 31(04):  1099-1102.  DOI: 10.3724/SP.J.1087.2011.01099
    Abstract   PDF (512KB)
    To improve the convergence speed and optimization accuracy of the Differential Evolution Algorithm (DEA), an improved DEA was proposed. In this algorithm, a new mutation operator following the Laplace distribution is used, and both the mutation strategy and the crossover probability are gradually self-adapted to the different phases of evolution by learning from previous successes. Experiments on five classical benchmark functions show that the algorithm converges faster, is more accurate and more robust, and is suitable for high-dimensional complex global optimization problems.
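    A minimal DE/rand/1 sketch with a Laplace-distributed scale factor (the location and scale of the Laplace draw are assumptions, and the paper's self-adaptation of the mutation strategy and crossover probability is omitted):

        import numpy as np

        def de_laplace(f, lo, hi, pop_size=30, iters=500, cr=0.9, seed=0):
            """Minimize f over the box [lo, hi] with Laplace-mutated DE/rand/1."""
            rng = np.random.default_rng(seed)
            dim = len(lo)
            pop = rng.uniform(lo, hi, (pop_size, dim))
            fit = np.apply_along_axis(f, 1, pop)
            for _ in range(iters):
                for i in range(pop_size):
                    a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                    beta = rng.laplace(0.5, 0.3)              # Laplace-distributed scale factor
                    mutant = pop[a] + beta * (pop[b] - pop[c])
                    cross = rng.random(dim) < cr
                    cross[rng.integers(dim)] = True           # at least one gene from the mutant
                    trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
                    f_trial = f(trial)
                    if f_trial < fit[i]:                      # greedy one-to-one selection
                        pop[i], fit[i] = trial, f_trial
            return pop[fit.argmin()], fit.min()

    For example, de_laplace(lambda x: (x ** 2).sum(), np.full(10, -5.0), np.full(10, 5.0)) minimizes a 10-dimensional sphere function.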
    Improved genetic algorithm based on Latin hypercube sampling and immune mechanism
    Ben-da ZHOU Hong-liang YAO Ming-hua ZHOU
    2011, 31(04):  1103-1106.  DOI: 10.3724/SP.J.1087.2011.01103
    Abstract   PDF (621KB)
    Concerning the defects of the Genetic Algorithm (GA), such as poor preservation of population diversity, prematurity and a low success rate, the crossover operation was redesigned using Latin hypercube sampling. Combined with an immune mechanism, chromosome concentration was defined and a selection strategy designed, yielding an improved genetic algorithm based on Latin hypercube sampling and the immune mechanism. The Traveling Salesman Problem (TSP) and the Maximum Clique Problem (MCP) were used to verify the new algorithm. The results show that, in solution quality, convergence speed and other indicators, the new algorithm outperforms both the classical genetic algorithm and the good-point-set genetic algorithm.
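    A minimal sketch of Latin hypercube sampling itself (the paper embeds it in a crossover operator for TSP/MCP chromosomes, which is not reproduced here):

        import numpy as np

        def latin_hypercube(n, dim, seed=0):
            """n points in [0, 1)^dim with exactly one point per stratum in every dimension."""
            rng = np.random.default_rng(seed)
            u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n   # one sample per stratum
            for j in range(dim):
                rng.shuffle(u[:, j])                                  # decouple the dimensions
            return u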
    Performance of an improved artificial bee colony algorithm
    Ke HU Xun-bo LI Zhen-lin WANG
    2011, 31(04):  1107-1110.  DOI: 10.3724/SP.J.1087.2011.01107
    Abstract   PDF (516KB)
    An improved Artificial Bee Colony (ABC) algorithm was proposed to address the tendency of the traditional ABC algorithm to fall into local optima. In the first stage, mathematical extrapolation techniques were used to update the bees' positions. In the second stage, a fine-tuning mechanism was introduced to overcome the high position similarity and slow renewal rate in the later stage of evolution and to enhance local search within the feasible region. The effect of different perturbation factors on convergence was also discussed. Simulation results on three benchmark functions show that the proposed algorithm outperforms the traditional one in search ability and accuracy.
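    One plausible reading of the extrapolation update (an assumption, not the paper's formula): step from the current position through the best-known position and, when the random factor exceeds one, beyond it:

        import numpy as np

        def extrapolate(x_i, x_best, rng, lam=1.7):
            """Move x_i toward x_best and possibly past it (lam * r can exceed 1)."""
            return x_i + lam * rng.random(len(x_i)) * (x_best - x_i)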
    Multi-criteria decision making based on Vague integral
    Lin-lin CAO Jia-rong LIANG
    2011, 31(04):  1111-1113.  DOI: 10.3724/SP.J.1087.2011.01111
    Abstract   PDF (421KB)
    Current decision-making methods ignore the mutual influence among criteria, which leads to results inconsistent with reality. To overcome this drawback, a new multi-criteria decision-making approach based on the Vague integral was given. The approach uses a Vague measure to weigh the importance of the criteria and the Vague integral to obtain the overall evaluation of each alternative, from which the optimal alternative is selected. Finally, a practical example illustrates the effectiveness and correctness of the approach.
    Classification method for SVDD based on information entropy
    Wei-cheng HE Jing-long FANG
    2011, 31(04):  1114-1116.  DOI: 10.3724/SP.J.1087.2011.01114
    Abstract   PDF (428KB)
    Most Support Vector Data Description (SVDD) methods suffer from blindness and bias on two-class problems. The authors proposed a new SVDD method based on information entropy. In this algorithm, the entropy values of the two classes of samples are first computed; one class is then placed inside the ball according to these values; finally, the penalty is set using the sizes of the two sample sets and their entropy values. Experiments on artificial data and UCI data sets for the imbalanced classification problem show the feasibility and effectiveness of the proposed method.
    Recognition of splice sites based on fuzzy support vector machine
    Bo SUN Xiao-xia LI Cheng-guo LI
    2011, 31(04):  1117-1120.  DOI: 10.3724/SP.J.1087.2011.01117
    Abstract   PDF (592KB)
    In order to improve the splice site recognition accuracy of the Fuzzy Support Vector Machine (FSVM), a new method for computing sample membership degrees was proposed. The initial membership was defined as the distance ratio of the sample to the two cluster centers of the positive and negative samples, K-Nearest Neighbor (KNN) was adopted to compute the tightness of the samples, and the product of the tightness and the initial membership degree was used as the final membership. This not only raises the membership degree of support vectors but also lowers that of noisy samples. The method was applied to splice site recognition, and the experimental results show that the recognition accuracy for constitutive 5′ and 3′ splice sites reaches 94.65% and 88.97% respectively. Compared with the classical support vector machine, the recognition accuracy for constitutive 3′ splice sites increases by 7.94%.
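    A minimal sketch of the membership computation (the normalized form of the distance ratio is an assumption; k and the Euclidean metric are illustrative choices):

        import numpy as np

        def memberships(X, y, k=5):
            """Membership = (distance-ratio initial degree) x (KNN tightness); X, y are NumPy arrays."""
            centers = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
            m = np.empty(len(X))
            for i, (x, c) in enumerate(zip(X, y)):
                d_own = np.linalg.norm(x - centers[c])
                d_other = min(np.linalg.norm(x - centers[o]) for o in centers if o != c)
                init = d_other / (d_own + d_other)       # nearer own center -> larger degree
                nn = np.argsort(np.linalg.norm(X - x, axis=1))[1:k + 1]
                tight = (y[nn] == c).mean()              # share of neighbors with x's label
                m[i] = init * tight                      # noisy points get small membership
            return m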
    Vehicle route planning for cash transport vans
    Xiao-chong LIU Min DAI Gang ZHENG Qing-jun HUANG
    2011, 31(04):  1121-1124.  DOI: 10.3724/SP.J.1087.2011.01121
    Abstract   PDF (629KB)
    Since the actual number of nodes in a cash transport network changes dynamically, a route planning strategy for dynamic cash transport routing was proposed. The strategy partitions first and optimizes second. Firstly, the Dijkstra algorithm computes the shortest route between two nodes, and the number of vehicles and the nodes on each route are obtained with a nearest-neighbor algorithm and a workload balancing factor. Secondly, a pre-cross genetic algorithm optimizes each route and determines its node sequence, yielding the route with the shortest distance and minimum travel time. The experimental results show that the proposed strategy meets the requirements of a dynamic vehicle number and routes, and saves resources.
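    A minimal sketch of the shortest-route step (standard Dijkstra; the partitioning, workload balancing and pre-cross genetic stages of the strategy are not reproduced):

        import heapq

        def dijkstra(graph, source):
            """Shortest distances from source; graph maps node -> [(neighbor, weight), ...]."""
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                              # stale heap entry
                for v, w in graph.get(u, []):
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            return dist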
    Typical applications
    Scheduling optimization model and algorithm for Milk-run auto parts pickup
    Xu WANG Dong CHEN Zhen-feng WANG
    2011, 31(04):  1125-1128.  DOI: 10.3724/SP.J.1087.2011.01125
    Abstract   PDF (796KB)
    To find optimal vehicle routes for collecting auto parts under the Milk-run mode, a model was put forward in which each supplier's parts are picked up in circular, batched deliveries so that vehicles are used as fully as possible. A vehicle routing optimization model was established with constraints on vehicle capacity, arrival time windows, suppliers' dynamic supply time windows and maximum travel distance. A heuristic saving algorithm (the Clarke-Wright, or C-W, algorithm) was then designed to solve the model. Finally, an example demonstrates the validity of the model and algorithm.
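    The core of the C-W saving algorithm is the savings value s(i,j) = d(0,i) + d(0,j) - d(i,j) gained by serving customers i and j on one route instead of two; a minimal sketch of the savings ranking (route merging under the capacity and time-window constraints is omitted):

        def savings_list(d, depot, customers):
            """Clarke-Wright savings for each customer pair, largest saving first."""
            s = [(d[depot][i] + d[depot][j] - d[i][j], i, j)
                 for i in customers for j in customers if i < j]
            return sorted(s, reverse=True)

    Routes are then merged in descending order of savings for as long as the capacity and time-window constraints remain satisfied.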
    Design and implementation of YHFT-DSP simulation platform based on E-Bus
    Hai-yan CHEN Hong HUANG
    2011, 31(04):  1129-1132.  DOI: 10.3724/SP.J.1087.2011.01129
    Abstract   PDF (623KB)
    The E-Bus is a parallel peripheral interface of a self-developed Digital Signal Processor (DSP) chip named YHFT-DSP. It supports various synchronous and asynchronous commercial host-device interface protocols for efficient data exchange. To design, test and apply the E-Bus interface effectively, a simulation platform based on the E-Bus was implemented. It uses hardware/software co-design and Field Programmable Gate Array (FPGA) technology to perform protocol translation between the E-Bus and USB 2.0. Test results show that the platform offers a usable interface and implements data exchange between the host device and the YHFT-DSP in both master and slave modes of the E-Bus.
    Study on performance of typical source camera classification algorithms
    Chang-hui ZHOU Yong-jian HU Li-ling TAN
    2011, 31(04):  1133-1137.  DOI: 10.3724/SP.J.1087.2011.01133
    Abstract   PDF (795KB)
    In the literature, there is little discussion of how the performance of source camera classification algorithms changes when test images undergo minor image processing. Using Support Vector Machines (SVM), this paper analyzed the performance and robustness of source camera classification algorithms. It compared the detection accuracy on unprocessed images with that on processed images, and investigated the robustness of different types of image features. Since pattern-classification-based algorithms often reduce the number of image features for computational efficiency, the paper also discussed the performance of camera classification algorithms on feature subsets and explored the impact of these subsets on robustness.
    Evaluation model for selection scheme of command protection engineering site
    Guo-qing QIU Duo-dian WANG Ting-ting DAI Yu-qing LONG
    2011, 31(04):  1138-1140.  DOI: 10.3724/SP.J.1087.2011.01138
    Abstract   PDF (488KB)
    An evaluation model for Command Protection Engineering (CPE) site selection was built, taking its complex characteristics into consideration. The weights of the evaluation indices were determined by an improved Analytic Hierarchy Process (AHP) and modified by the entropy method. According to the characteristics of the CPE site-selection indices, spatial information analysis was combined with multi-expert group decision making to obtain spatial and non-spatial index values. Grey Correlation Analysis (GCA) was used to improve the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), avoiding the limitation of traditional TOPSIS, which relies on Euclidean distance alone. An example shows that the model achieves an ideal evaluation result.
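    A minimal sketch of the entropy method used to modify the AHP weights (a positive, benefit-type decision matrix and the standard normalization are assumptions; the rule for combining these with the AHP weights is not reproduced):

        import numpy as np

        def entropy_weights(M):
            """Entropy-method weights for an m-alternative x n-criterion decision matrix."""
            M = np.asarray(M, dtype=float)
            P = M / M.sum(axis=0)                      # column proportions
            k = 1.0 / np.log(M.shape[0])
            logP = np.log(np.where(P > 0, P, 1.0))     # treat 0 * log(0) as 0
            e = -k * (P * logP).sum(axis=0)            # entropy of each criterion
            return (1 - e) / (1 - e).sum()             # lower entropy -> larger weight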
    Active intelligent parking guidance system
    Long-fei WANG Hong CHEN Yang LI Hai-peng SHAO
    2011, 31(04):  1141-1144.  DOI: 10.3724/SP.J.1087.2011.01141
    Abstract   PDF (652KB)
    Based on the intrinsic spatial distribution, temporal distribution and high dynamics of parking activities, a negotiation approach was introduced into the design of an intelligent parking guidance system. An IEEE FIPA-compliant multi-Agent system, the Active negotiation-based Intelligent Parking Guidance System (AIPGIS), was proposed, and its architecture, operation mechanism, negotiation algorithms and characteristics were analyzed. AIPGIS enables effective sharing of urban traffic state information and strengthens the coordination and decision-making capacities of the active Agents.
    Application of high-performance quasi-cyclic low-density parity-check codes to satellite navigation signals
    Hong QIAN Guang-xia LI Jiang CHANG
    2011, 31(04):  1145-1147.  DOI: 10.3724/SP.J.1087.2011.01145
    Abstract   PDF (562KB)
    In the Global Positioning System (GPS) modernization plan, Low-Density Parity-Check (LDPC) codes are used as the channel coding scheme of the L1C signal, which brings excellent decoding performance but high complexity, and the encoders and decoders of randomly constructed LDPC codes are difficult to implement in Very Large Scale Integration (VLSI). A class of enhanced Quasi-Cyclic LDPC (QC-LDPC) codes was proposed, which has both a quasi-cyclic structure and an approximate lower triangular form, with a girth of 8. The simulation results show that the proposed QC-LDPC codes are superior to the LDPC codes used in 802.16e and the GPS L1C signal, which is helpful for the design of new signal schemes for the COMPASS navigation system.
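    A minimal sketch of the quasi-cyclic structure: the parity-check matrix is expanded from a small base matrix of circulant shift values (the shift values, expansion factor and the paper's girth-8 design rules are not reproduced):

        import numpy as np

        def expand_qc(base, z):
            """Expand a base matrix of shifts into a binary QC-LDPC parity-check matrix.
            -1 stands for an all-zero z x z block; s >= 0 for the identity shifted by s."""
            eye = np.eye(z, dtype=np.uint8)
            rows = [np.hstack([np.zeros((z, z), np.uint8) if s < 0 else np.roll(eye, s, axis=1)
                               for s in row])
                    for row in base]
            return np.vstack(rows)

    For example, expand_qc([[0, 1, -1], [2, -1, 3]], z=4) yields an 8 x 12 parity-check matrix.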
    Image-pyramid-oriented linear quadtree coding and features
    Jian-xun LI Bing SHEN Jian-hua GUO
    2011, 31(04):  1148-1151.  DOI: 10.3724/SP.J.1087.2011.01148
    Abstract   PDF (645KB)
    Based on the linear quadtree, an image-element coding method oriented to the image pyramid was introduced. According to the coding rules and the Bounding Box (BBOX) recurrence formula, the correspondence feature, location feature, existence feature and neighborhood feature were analyzed. Furthermore, a global multi-resolution virtual terrain environment and a zoom-in algorithm were built to test the coding method. The results show that the method can identify boundary and neighborhood image-elements rapidly and runs faster than similar algorithms.
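    The standard linear-quadtree key interleaves the row and column bits of an image element; a sketch of that interleaving only (the paper's own coding rules and BBOX recurrences are not reproduced):

        def morton_encode(row, col, level):
            """Interleave row/col bits into a linear quadtree code with `level` levels."""
            code = 0
            for b in range(level):
                code |= ((col >> b) & 1) << (2 * b) | ((row >> b) & 1) << (2 * b + 1)
            return code

        def morton_decode(code, level):
            """Recover (row, col) from a linear quadtree code."""
            row = col = 0
            for b in range(level):
                col |= ((code >> (2 * b)) & 1) << b
                row |= ((code >> (2 * b + 1)) & 1) << b
            return row, col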
    Data fusion estimation algorithm for serial communication with missing measurements
    2011, 31(04):  1152-1154.  DOI: 10.3724/SP.J.1087.2011.01152
    Abstract   PDF (413KB)
    Concerning the multi-way delay problem in serial communication monitoring systems, a new data fusion estimation algorithm for serial communication with missing measurements was proposed. Firstly, the algorithm maps the measurements randomly received from the multi-sensor system into the reference frame of the fusion center. During each fusion period, the measurements closest to the previous fusion period are used in the state update step, and in the state and measurement prediction step the extended Kalman filter is used to obtain the global estimate and its error. The simulation results show that the algorithm achieves a lower estimation error, and as the number of serial systems increases, the growth rate of the estimation error decreases.
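    A minimal linear Kalman predict/update cycle that simply skips the update when a measurement is missing (the paper's extended Kalman filter, multi-sensor mapping and fusion-period bookkeeping are not reproduced):

        import numpy as np

        def kf_step(x, P, z, F, H, Q, R):
            """One predict/update cycle; pass z=None when the measurement did not arrive."""
            x = F @ x                                  # state prediction
            P = F @ P @ F.T + Q
            if z is not None:                          # fuse only measurements that arrived
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ (z - H @ x)
                P = (np.eye(len(x)) - K @ H) @ P
            return x, P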
    Skeleton-structure-based conceptual innovative design using genetic algorithm
    Hai-long WU Xi-yu LIU Lai-sheng XIANG
    2011, 31(04):  1155-1158.  DOI: 10.3724/SP.J.1087.2011.01155
    Abstract   PDF (678KB)
    A new computer-aided conceptual design method was studied: the control parameters of skeleton structure theory were analyzed, radian was added as a control parameter, and the corresponding entities were analyzed. The skeleton structure parameters were optimized by a Genetic Algorithm (GA), and the quality of the GA results was improved by controlling the parents' codes. The experiment used Visual C++ as the platform with the ACIS/HOOPS class libraries as the foundation. A conceptual design of a chandelier shape was given; after the steps of abstracting the skeleton structure, modeling, genetic variation and producing conceptual design results, a large number of well-coordinated and elegant chandelier forms were available for the user to choose from.