
Table of Contents

    01 January 2012, Volume 32 Issue 01
    Network and distributed technology
    Research survey of network security situation awareness
    XI Rong-rong YUN Xiao-chun JIN Shu-yuan ZHANG Yong-zheng
    2012, 32(01):  1-4.  DOI: 10.3724/SP.J.1087.2012.00001
    Research on network security Situation Awareness (SA) is important for improving the abilities of network detection, emergency response and prediction of network security trends. In this paper, based on the conceptual model of situation awareness, three main problems of network security situation awareness were discussed: extraction of the elements of the network security situation, comprehension of the network security situation, and projection of the future situation. The discussion focused on the core issues to be resolved and the major algorithms, together with their respective advantages and disadvantages. Finally, the open issues and challenges for network security situation awareness in the near future, concerning both theory and implementation, were presented.
    Research of trust model based on multidimensional trust cloud
    CAI Hong-yun DU Rui-zhong TIAN Jun-feng
    2012, 32(01):  5-7.  DOI: 10.3724/SP.J.1087.2012.00005
    According to the characteristics of fuzziness and uncertainty in subjective trust, and the coarse granularity of the existing trust models based on the cloud model, a new trust model based on multidimensional trust cloud was proposed. Based on the feedback and time of direct transactions between entities, a direct trust cloud for every entity could be generated using the weighted backward cloud algorithm. A recommendation trust cloud was obtained by integrating recommendation information, with the recommendation credibility of each entity used as the weight of its recommendation. Finally, the direct trust cloud and the recommendation trust cloud were synthesized, and the result could serve as a reference for object selection. The experimental results show that the multidimensional trust cloud model can detect every kind of service provider effectively and improve the transaction success rate between entities.
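    The weighted backward cloud step above can be illustrated with a short sketch: given feedback samples of one entity (and, as an assumption for illustration, per-transaction weights such as recency), the classic backward cloud generator estimates the three numerical characteristics of a cloud, expectation Ex, entropy En and hyper-entropy He. The function name and the weighting scheme below are illustrative, not the authors' implementation.

        import numpy as np

        def weighted_backward_cloud(samples, weights=None):
            # Estimate cloud characteristics (Ex, En, He) from trust feedback samples;
            # the weights are an assumed extension of the classic backward cloud
            # generator (without certainty degree).
            x = np.asarray(samples, dtype=float)
            w = np.ones_like(x) if weights is None else np.asarray(weights, dtype=float)
            w = w / w.sum()
            ex = np.sum(w * x)                                      # expectation
            en = np.sqrt(np.pi / 2.0) * np.sum(w * np.abs(x - ex))  # entropy
            s2 = np.sum(w * (x - ex) ** 2)                          # weighted variance
            he = np.sqrt(max(s2 - en ** 2, 0.0))                    # hyper-entropy
            return ex, en, he

        # Example: feedback scores of one entity, recent transactions weighted higher.
        print(weighted_backward_cloud([0.9, 0.85, 0.95, 0.7, 0.88], [1, 2, 3, 4, 5]))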
    Data integrity verification protocol in cloud storage system
    CAO Xi, XU li, CHEN Lan-xiang
    2012, 32(01):  8-12.  DOI: 10.3724/SP.J.1087.2012.00008
    In a cloud storage network, the security and integrity of data are major concerns of clients. Taking full consideration of the security requirements of cloud storage networks, a new Cloud Storage-Data Integrity Verification (CS-DIV) protocol was proposed. The clients uploaded files and tags to the servers and then performed random checks; the servers returned proofs and the clients judged the results. This protocol could not only ensure the integrity of data in cloud storage effectively, but also resist cheating by untrusted servers and attacks from malicious clients, thereby improving the reliability and stability of the whole cloud storage system. The simulation results show that the proposed protocol realizes the protection of data integrity at a low cost in storage, communication and delay.
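    The upload/challenge/proof/verify flow can be sketched as follows. The abstract does not give the concrete tag construction, so the HMAC-per-block tags below are a simplified stand-in for illustration, not the CS-DIV protocol itself; the block size, function names and proof format are assumptions.

        import hmac, hashlib

        BLOCK = 4096

        def make_tags(data: bytes, key: bytes):
            # Client side: one keyed tag per fixed-size block, uploaded with the file.
            blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
            return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

        def server_proof(data: bytes, tags, indices):
            # Server side: return the challenged blocks together with their stored tags.
            return [(i, data[i * BLOCK:(i + 1) * BLOCK], tags[i]) for i in indices]

        def client_verify(proof, key: bytes) -> bool:
            # Client side: recompute each tag from the returned block and compare.
            return all(hmac.compare_digest(hmac.new(key, blk, hashlib.sha256).digest(), tag)
                       for _, blk, tag in proof)

        key = b"client-secret-key"
        data = b"x" * 3 * BLOCK
        tags = make_tags(data, key)                                  # upload data and tags
        print(client_verify(server_proof(data, tags, [0, 2]), key))  # True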
    Research on framework of security service cloud computing
    SUN Lei DAI Zi-shan
    2012, 32(01):  13-15.  DOI: 10.3724/SP.J.1087.2012.00013
    Following the analysis of cloud computing security in the paper, a framework of security service cloud computing was proposed based on cloud computing service pattern, which provided consistent standard model. Furthermore, the mechanism of the framework was introduced and analyzed, and a deployment algorithm of security service was proposed based on selection of the best computing server. The simulation results show that the proposed algorithm is better than random algorithm in terms of system load balance and service time.
    Performance vector-based algorithm for virtual machine deployment in infrastructure clouds
    YANG Xing MA Zi-tang SUN Lei
    2012, 32(01):  16-19.  DOI: 10.3724/SP.J.1087.2012.00016
    Regarding virtual machine deployment in cloud computing, a Performance Matching-Load Balancing (PM-LB) algorithm for virtual machine deployment was proposed. The performance of the virtual infrastructure was standardized and described with a performance vector. The matching vector was obtained by calculating the relative vector distance between the virtual machine and the servers, and then a comprehensive analysis of the matching vector and the load-balancing vector was performed to obtain the deployment result. The results of simulation in the CloudSim environment show that the proposed algorithm can obtain better load-balancing performance and higher resource utilization.
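    A toy version of the matching step can clarify the idea: both the virtual machine request and each server's capability are written as performance vectors, the relative vector distance gives the matching degree, and load information is folded in to pick the target server. The normalization, the weighting factor alpha and the score combination below are assumptions for illustration only.

        import numpy as np

        def deploy(vm_perf, server_perfs, server_loads, alpha=0.5):
            # Pick the server whose normalized performance vector best matches the
            # VM request while keeping load balanced; alpha trades matching against
            # balance (assumed combination rule).
            vm = np.asarray(vm_perf, float)
            servers = np.asarray(server_perfs, float)
            loads = np.asarray(server_loads, float)
            scale = servers.max(axis=0)                    # per-dimension normalization
            match = np.linalg.norm(servers / scale - vm / scale, axis=1)
            balance = loads / loads.max()
            return int(np.argmin(alpha * match + (1 - alpha) * balance))

        # Example: servers described by (CPU cores, memory GB, disk GB, net Gbps).
        print(deploy([2, 4, 100, 1],
                     [[8, 16, 500, 10], [4, 8, 200, 1], [16, 64, 1000, 10]],
                     [0.7, 0.2, 0.5]))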
    Design of DoS attack script language based on domain specific language
    ZHU Ning ZHANG Yong-fu CHEN Xing-yuan
    2012, 32(01):  20-24.  DOI: 10.3724/SP.J.1087.2012.00020
    Considering the basic needs of attack resistance testing for trustworthiness, controllability and effectiveness of attack operations, a Denial of Service (DoS) Attack Script Language (DASL) was designed based on Domain Specific Language (DSL) techniques, with which DoS attacks can be developed simply, quickly and conveniently. In this article, the attack unit was defined, the domain-specific syntax was constructed based on the analysis of attack samples, the semantic functions were realized on the basis of LIBNET, and the interpreter of DASL was designed on the basis of ANTLR. The experimental results show that attacks developed with DASL are effective and controllable, and that DASL can lower the complexity of development, reduce the amount of code to be written, increase development efficiency and provide powerful support for DoS penetration testing.
    Software network behavior analysis based on message semantics analysis
    WU Yi-lun ZHANG Bo-feng LAI Zhi-quan SU Jin-shu
    2012, 32(01):  25-29.  DOI: 10.3724/SP.J.1087.2012.00025
    Through studying software network behavior, a new system model for analyzing software network behavior based on dynamic binary analysis and message semantics analysis was proposed. The system consisted of a dynamic binary analysis module, a message semantics analysis module and a network behavior analyzer. With dynamic binary analysis, the Application Programming Interface (API) functions and system functions called by the software could be obtained; by using dynamic taint analysis, the message semantics could be extracted. The experimental results show that the combination of dynamic binary analysis and message semantics extraction can be used to analyze software network behavior.
    Lightweight authentication and evaluation protocol for mobile trusted access
    QIN Xi GAO Li CHANG Chao-wen HAN Pei-sheng
    2012, 32(01):  30-34.  DOI: 10.3724/SP.J.1087.2012.00030
    To enhance the usability of the authentication and evaluation protocol for trusted network access of mobile terminals and to reduce the overhead of network communication and terminal computation, a lightweight authentication and evaluation protocol was proposed. Relying on the authentication shared key and the platform configuration information established at the first access, both parties of the communication could complete quick authentication and evaluation without a trusted third party. The proposed protocol reduced the number of data exchanges and the computing load; it not only ensured the security attributes of authentication and integrity verification, but also enhanced the privacy of platform configuration information and the ability to resist replay attacks. The security and performance analysis shows that the proposed protocol is suitable for mobile trusted access in wireless networks.
    Efficient ID-based authenticated key agreement protocol
    GAO Hai-ying
    2012, 32(01):  35-37.  DOI: 10.3724/SP.J.1087.2012.00035
    Wang et al. (WANG SHENG-BAO, CAO ZHEN-FU, DONG XIAO-LEI. Provably secure identity-based authenticated key agreement protocols in the standard model. Chinese Journal of Computers, 2007, 30(10): 1842-1854) proposed an ID-based Authenticated Key Agreement (IDAKA) protocol that was proved secure in the standard model but lacked the property of Private Key Generator (PKG) forward security. In order to remedy this flaw, a new protocol was introduced in which the shared secret was calculated from the private keys and temporary secret information of the protocol participants, and its security was also proved in the standard model. Compared with known protocols, the new protocol is more efficient. Additionally, a method of jointly generating the private key by the PKG and the user was proposed, in which the private key of the user is computed from the master secret key of the system and secret information provided by the user. It effectively solves the problem of PKG forward security in ID-based authenticated key agreement protocols.
    Fully secure attribute-based authenticated key exchange protocol
    WEI Jiang-hong LIU Wei-fen HU Xue-xian
    2012, 32(01):  38-41.  DOI: 10.3724/SP.J.1087.2012.00038
    Attribute-Based Encryption (ABE) scheme has been drawing attention for having a broad application in the area of fine-grained access control, directed broadcast, and so on. Combined with NAXOS technique, this paper proposed a fully secure Attribute-Based Authenticated Key Exchange (ABAKE) protocol based on an ABE scheme, and gave a detailed security proof in the Attribute-Based eCK (ABeCK) model by provable security theory. Compared with other similar protocols, the proposed protocol obtains stronger security and flexible attribute authentication policy, while decreasing communications cost.
    Research of man-in-the-middle attack in robust security network
    WANG Ding MA Chun-guang WENG Chen JIA Chun-fu
    2012, 32(01):  42-44.  DOI: 10.3724/SP.J.1087.2012.00042
    Man-in-the-Middle (MitM) attacks pose severe threats to the Robust Security Network (RSN). Based on the state machine models of the authenticator and supplicant in 802.1X-2004, MitM attacks were analyzed systematically from the perspective of the whole establishment of an RSN association. With the one-sided understanding of MitM attacks in RSN clarified, a framework for MitM attacks in RSN and the conditions for launching such attacks effectively were put forward, which were fully verified by an effective attack instance. The analytical results reveal that RSN can withstand MitM attacks if strong mutual authentication methods are adopted; otherwise it is vulnerable to this threat.
    New impossible differential attack on 7-round reduced ARIA
    SU Chong-mao
    2012, 32(01):  45-48.  DOI: 10.3724/SP.J.1087.2012.00045
    How to give new security evaluations of the standard block cipher ARIA is a current hot issue. Based on the structure of the ARIA cipher, a new 4-round distinguisher was designed by adopting the meet-in-the-middle principle. Based on this distinguisher, and combining the features of the ARIA algorithm, a new attack on 7-round ARIA-256 was proposed by adding two rounds at the beginning and one round at the end. It is shown that the new attack requires a data complexity of about 2^120 chosen plaintexts and a time complexity of about 2^219 7-round ARIA-256 encryptions. Compared with previously known impossible differential attacks, the new attack efficiently reduces the data complexity and time complexity.
    Construction of Boolean functions with optimum algebraic immunity
    WANG Yong-juan ZHANG Shi-wu
    2012, 32(01):  49-51.  DOI: 10.3724/SP.J.1087.2012.00049
    Any Boolean function can be uniquely expressed as a univariate polynomial function over a finite field. By using this representation and algebraic coding theory to discuss the criterion by which a Boolean function achieves its Maximum Algebraic Immunity (MAI), the authors provided an equivalent criterion by which the algebraic immunity of a Boolean function with an odd number of variables reaches MAI. According to the criterion, an equivalent condition for a Boolean function of three variables to achieve MAI was obtained. Thus, all MAI Boolean functions of three variables were constructed.
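    For three variables the defining criterion can simply be checked by brute force: the MAI of a function of odd n variables is ceil(n/2), and the algebraic immunity AI(f) is the minimum degree of a nonzero annihilator of f or of f+1. The sketch below is a direct check over all 2^8 candidate annihilators, not the algebraic construction used in the paper; the majority function is used as a known MAI example.

        from itertools import product

        N = 3
        POINTS = list(product([0, 1], repeat=N))

        def anf_degree(tt):
            # Algebraic degree from the truth table via the Moebius transform.
            a = list(tt)
            for i in range(N):
                for x in range(len(a)):
                    if x & (1 << i):
                        a[x] ^= a[x ^ (1 << i)]
            return max((bin(u).count("1") for u, c in enumerate(a) if c), default=0)

        def algebraic_immunity(tt):
            # Minimum degree of a nonzero annihilator of f or of its complement.
            best = N
            for g_int in range(1, 2 ** len(tt)):
                g = [(g_int >> i) & 1 for i in range(len(tt))]
                for h in (tt, [1 - b for b in tt]):
                    if all(gx == 0 for hx, gx in zip(h, g) if hx == 1):
                        best = min(best, anf_degree(g))
            return best

        # The majority function of three variables reaches the MAI bound ceil(3/2) = 2.
        maj = [1 if sum(x) >= 2 else 0 for x in POINTS]
        print(algebraic_immunity(maj))   # -> 2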
    Design and implementation of uniform identity authentication system based on enterprise service bus
    LI Fu-lin XU Kai-yong LI Li-xin
    2012, 32(01):  52-55.  DOI: 10.3724/SP.J.1087.2012.00052
    Self-governed identity authentication and user management in heterogeneous information systems lead to inconsistent identities, redundant information, isolated systems and poor security. A new integration method based on a uniform data exchange standard and interface standard was put forward, covering in particular the system model, data flow and authentication protocol. Furthermore, a uniform identity authentication system based on Enterprise Service Bus (ESB) was realized. The experimental results show that the system can avoid redundant authentication logic and data, enhance authentication efficiency and make the best use of the available resources.
    Trusted attestation of measurement action information base
    YAN Jian-hong PENG Xin-guang
    2012, 32(01):  56-59.  DOI: 10.3724/SP.J.1087.2012.00056
    To improve the flexibility and efficiency of remote attestation, a behavior dynamic attestation scheme based on the Merkle hash tree was proposed, and the process of creating the AM_AIB tree was designed. The client measured and calculated the current root hash value, which was signed by the Trusted Platform Module (TPM) and then transmitted to the server side for certification. If it was consistent with the hash value held by the server side, the behavior was considered credible. The Attestation Measurement Action Information Base (AM_AIB) model could also be designed at different granularities according to the characteristics of the behavior. The experimental results show that the proposed method can improve time performance and protect the privacy of the platform. It is flexible, overcomes the static nature of attribute-based verification, and ensures that the platform application software runs credibly.
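    The abstract only outlines the AM_AIB construction, so the sketch below shows just the Merkle-tree part: the root hash over a list of measured actions, which is what would be TPM-signed and compared on the server side. The hashing and odd-level padding conventions are assumptions.

        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(leaves):
            # Root hash over measurement records; any change in a measured action
            # changes the root, so only the signed root needs to be transmitted.
            level = [h(leaf) for leaf in leaves] or [h(b"")]
            while len(level) > 1:
                if len(level) % 2:               # duplicate the last node on odd levels
                    level.append(level[-1])
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            return level[0]

        # Example: measurements of three program actions.
        print(merkle_root([b"open /etc/passwd", b"connect 10.0.0.1:443", b"exec /bin/ls"]).hex())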
    Research of digital time-stamping service in unreliable networks
    CHANG Chao-wen CHEN Jun-feng QIN Xi
    2012, 32(01):  60-65.  DOI: 10.3724/SP.J.1087.2012.00060
    The technology of Digital Time-Stamping (DTS) is widely used in digital signatures, electronic commerce, and the protection of patents and property rights of various software and hardware. For unreliable networks, in which network conditions are poor, the speed varies greatly and links are often intermittent, there are no adequate technical means to guarantee the normal and effective operation of a DTS service. According to the characteristics of unreliable networks, a new time-stamping scheme was proposed. In this scheme, it is not necessary to communicate with the Time Stamp Authority (TSA) every time a time-stamping service is required; the local trusted platform offers the time-stamping service itself. A new DTS service protocol based on the Trusted Platform Module (TPM) was also proposed for unreliable networks. The security analysis shows that the protocol is secure, that the time error in the protocol can be kept under control, and that the adaptability of the protocol to unreliable networks is excellent.
    Key techniques of conditional access system for Internet-TV based on network coding
    LI Wei-jian
    2012, 32(01):  66-69.  DOI: 10.3724/SP.J.1087.2012.00066
    Network coding has the advantages of providing higher network throughput, using bandwidth efficiently and balancing traffic, which makes it suitable for wireless networks, Ad Hoc networks, P2P content distribution and streaming media services, especially Internet-TV. The authors studied the conditional access system for Internet-TV under network coding and proposed a conditional access scheme based on Random Linear Network Coding (RLNC), Secure Practical Network Coding (SPOC) and hierarchical key distribution. The scheme also provides different media quality to customers paying different fees. The scheme has low-complexity cryptographic overhead and is thus suitable for the real-time encryption of streaming media. The performance analysis shows that the proposed scheme improves the throughput of Internet-TV, that the quantity of encrypted data is far less than that of traditional methods, and that the hierarchical group key distribution effectively solves the problem of key distribution.
    Node secure localization algorithm of wireless sensor network based on reputation mechanism
    LING Yuan-jing YE A-yong XU Li HUANG Chen-zhong
    2012, 32(01):  70-73.  DOI: 10.3724/SP.J.1087.2012.00070
    A new localization algorithm based on a reputation mechanism was proposed to improve the robustness of the node positioning system in Wireless Sensor Networks (WSN). The algorithm introduced a monitoring mechanism and a reputation model to filter out malicious beacon nodes giving false location information, and used the Beta distribution to update and integrate the reputation of the beacon nodes. Through the cluster head nodes, the proposed algorithm collected reputations and judged which beacon nodes were reliable, which increased the detection rate of malicious beacon nodes while reducing the positioning error. Finally, the simulation and detailed analysis prove its efficiency and robustness. The algorithm is efficient for self-positioning of sensor nodes in distributed WSN, and the localization accuracy and security are greatly improved.
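    Beta-based reputation of a beacon node is typically maintained as two counters (consistent and inconsistent observations) whose Beta-distribution mean gives the reputation value. The forgetting factor and the decision threshold in the sketch below are assumptions for illustration, not the exact update rule of the paper.

        class BetaReputation:
            # alpha counts positive (consistent) observations, beta negative ones;
            # the forgetting factor lets old behaviour fade (assumed extension).
            def __init__(self, forgetting=1.0):
                self.alpha, self.beta, self.forgetting = 1.0, 1.0, forgetting

            def update(self, positive: bool):
                self.alpha = self.forgetting * self.alpha + (1.0 if positive else 0.0)
                self.beta = self.forgetting * self.beta + (0.0 if positive else 1.0)

            def expected(self) -> float:
                # Mean of Beta(alpha, beta), used as the reputation value.
                return self.alpha / (self.alpha + self.beta)

        # Example: a beacon that is consistent 8 times and inconsistent twice.
        rep = BetaReputation(forgetting=0.9)
        for ok in [True] * 8 + [False] * 2:
            rep.update(ok)
        print(rep.expected())   # compare against an assumed reliability threshold, e.g. 0.5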
    Real-time monitoring method based on improved A* algorithm for topology state of wireless mesh network
    NIU Ling GUO Yuan-bo LIU Wei
    2012, 32(01):  74-77.  DOI: 10.3724/SP.J.1087.2012.00074
    Since it is difficult to determine the network boundary and the topology is very flexible in a Wireless Mesh Network (WMN), topology information collection and reconstruction suffer great delay, so that the accuracy of real-time WMN monitoring cannot be ensured. This paper proposed a real-time monitoring method based on an improved A* algorithm for the topology state of WMN, so as to obtain the real-time state and respond to abnormity. By limiting the path length, reducing the search scope and adding the number of repeatedly searched edges to the heuristic of A*, the method solved the problem that paths may be searched repeatedly and become too long for real-time topology monitoring. The simulation results show that, compared with the original algorithm, the improved algorithm converges faster and can update the topology in a shorter time.
    Implementation of network covert channel model based on shared file
    WANG Biao ZHANG Shi-tao FANG Ying-jue
    2012, 32(01):  78-81.  DOI: 10.3724/SP.J.1087.2012.00078
    Network covert channel techniques create secret information leaking channels that violate the Bell-La Padula (BLP) model by evading the detection of mandatory access control measures, which threatens the confidentiality of high-level information. The authors first discussed the relations between covert channels and non-discretionary access control models, and then formed a covert channel model by designing different covert channel protocols according to the number of shared files under the assumed scenario and transmission pattern. The performances of the network covert channels produced by these protocols were compared by experiments, and the extent of the threat each poses to confidentiality was discussed separately. Finally, the authors summarized the transfer characteristics of the models generated by these protocols and the menace they might bring, which is meaningful for the prevention of network covert channels.
    Privacy protection method in E-government information resource sharing
    Lǚ Xin GAO Feng
    2012, 32(01):  82-85.  DOI: 10.3724/SP.J.1087.2012.00082
    To protect privacy in E-government information resource sharing, a privacy protection model was proposed. The model could classify information resource sharing into two types: one is decision making business based on data mining or statistics and the other is business collaboration. The model adopted data preprocessing method to generalize privacy and a business collaboration simulator to determine the minimum privacy set for business collaboration respectively to protect privacy in the two types of businesses. The analytical results show the proposed method is effective in privacy protection.
    Fault resistant finite state machine controller design of elliptic curve scalar multiplication
    YAN Ying-jian LI Zhi-qiang DUAN Er-peng ZHU Wei-wei
    2012, 32(01):  86-88.  DOI: 10.3724/SP.J.1087.2012.00086
    To enhance resistance to fault attacks, this paper proposed a non-concurrent fault detection scheme for controller circuits based on the Finite State Machine (FSM). Using linear codes, the scheme constructs a dedicated path to detect faults in the FSM. Finally, this paper used the scheme to design a secure FSM circuit for the NAF-based left-to-right scalar multiplication algorithm, and simulated and analyzed the resistance of the circuit to fault attacks. The simulation shows that, compared with the concurrent error detection scheme, this design can detect errors correctly and raise an alarm while greatly reducing the decoding workload of the state machine, and it also improves the ability to withstand fault attacks.
    Modified algorithm of bit-plane complexity segmentation steganography based on preprocessing
    LIU Hu YUAN Hai-dong
    2012, 32(01):  89-91.  DOI: 10.3724/SP.J.1087.2012.00089
    Since Bit-Plane Complexity Segmentation (BPCS) steganography is vulnerable to complexity histogram attacks, this paper proposed an improved algorithm based on preprocessing. The algorithm derived a compensation rule from the distribution of the cover image, and then applied reverse preprocessing to compensate for the change of complexity caused by the embedded information. The experimental results show that the proposed algorithm can properly hide information and counteract the complexity histogram attack. Since the compensation happens before information hiding, the algorithm maintains the large capacity of the original algorithm.
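    For reference, the standard BPCS block complexity that both the embedding and the histogram attack rely on can be computed as below: the number of 0-1 transitions between adjacent bits in a bit-plane block, divided by the maximum possible number of transitions. The preprocessing and compensation rule of the improved algorithm is not reproduced here.

        import numpy as np

        def bpcs_complexity(block) -> float:
            # Border complexity of a binary bit-plane block: adjacent-bit
            # transitions (horizontal + vertical) over the maximum possible.
            b = np.asarray(block, dtype=np.uint8)
            rows, cols = b.shape
            changes = np.sum(b[:, 1:] != b[:, :-1]) + np.sum(b[1:, :] != b[:-1, :])
            return float(changes) / (rows * (cols - 1) + cols * (rows - 1))

        # A checkerboard block is maximally complex, a flat block has zero complexity.
        x, y = np.indices((8, 8))
        print(bpcs_complexity((x + y) % 2), bpcs_complexity(np.zeros((8, 8))))   # 1.0 0.0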
    Samples selection method of differential power attack against advanced encryption standard
    LI Zhi-qiang YAN Ying-jian DUAN Er-peng
    2012, 32(01):  92-94.  DOI: 10.3724/SP.J.1087.2012.00092
    To resolve the problem of selecting samples in the Differential Power Attack (DPA), this paper proposed a set of sample selection methods. Based on the given experimental platform, the mode and amount of sample selection were derived through theoretical analysis and then validated by experiments. For the Advanced Encryption Standard (AES), this paper put forward sample selection methods for both simulation tests and practical experiments, and verified their correctness. The results show that for simulation the plaintext samples should be taken in sequence, with the quantity of a full permutation, while for practical measurement a large number of random plaintexts should be used directly; the sample requirements of the two cases differ considerably.
    Information security
    Efficient strongly-secure identity-based authenticated key agreement protocol
    SHU Jian
    2012, 32(01):  95-98.  DOI: 10.3724/SP.J.1087.2012.00095
    Most of the existing Identity-based (ID) authenticated key agreement protocols are proven secure in the Canetti-Krawczyk (CK) model, which is weaker than the extended Canetti-Krawczyk (eCK) model. Based on the NAXOS trick, a new scheme using bilinear pairing was proposed. The security of the scheme was proven in the eCK model under the random oracle assumption and the Gap Bilinear Diffie-Hellman (GBDH) assumption. The proposed protocol is efficient in computational cost and communication rounds compared with other solutions. The new protocol also satisfies master key forward security, perfect forward security and resistance to key-compromise impersonation.
    ID-based public verifiability signcryption scheme
    LI Zhi-min XU Xin LI Cun-hua
    2012, 32(01):  99-103.  DOI: 10.3724/SP.J.1087.2012.00099
    Using bilinear pairing, a new identity-based signcryption scheme was proposed in this paper. Under the assumption that the Computational Diffie-Hellman (CDH) problem is hard, the newly proposed scheme was proved to be existentially unforgeable against adaptively chosen message/ciphertext and identity attacks in the random oracle model. The advantage of the proposed scheme is that it is identity-based and needs no certificates, so it has simple key management. In addition, the proposed scheme provides public verifiability: it allows a third party to verify that the signcryption is valid for the given message without needing the receiver's private key.
    Identity-based key management scheme for Ad Hoc network
    SUN Mei ZHAO Bing
    2012, 32(01):  104-106.  DOI: 10.3724/SP.J.1087.2012.00104
    According to the characteristics of Ad Hoc networks, such as mobility and self-organization, an identity-based key management scheme for Ad Hoc networks was proposed. In this scheme, by means of secure distributed key generation based on threshold cryptography, the interior members of the Ad Hoc network collaborate to construct the system private key. Compared with existing protocols, the proposed scheme does not require a fixed set of service nodes, and service nodes can dynamically join and leave the network. At the same time, the system key can be updated among the service nodes. The analytical results show that the proposed scheme is flexible and secure, and it is suitable for Mobile Ad Hoc Networks (MANET).
    Security localization based on DV-Hop in wireless sensor network
    LIU Xiao-shuang CHEN Jia-xing LIU Zhi-hua LI Gai-yan
    2012, 32(01):  107-110.  DOI: 10.3724/SP.J.1087.2012.00107
    Concerning the problem that the impact of illegal nodes (including nodes unable to locate themselves) on the localization process has not been taken into consideration in the DV-Hop localization algorithm, this paper proposed a secure localization mechanism based on DV-Hop. The characteristics of message exchange between nodes were introduced to detect wormhole attacks. Time and space properties were used to define valid beacon nodes, along with encryption and authentication mechanisms to resist node-tampering attacks in the communication process, so that the nodes could be located securely. The simulation results show that, in a hostile environment, the proposed mechanism has a high probability of detecting wormhole attacks, and the relative localization error can be reduced by about 63%.
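    The underlying DV-Hop estimate that the secure mechanism protects works as sketched below: each beacon's average hop size is derived from beacon-to-beacon distances and hop counts, the unknown node converts its hop counts into distance estimates with the hop size of its nearest beacon, and a linearized least-squares multilateration yields the position. The sketch assumes the hop counts have already passed the paper's filtering; the function names and example values are illustrative.

        import numpy as np

        def dv_hop_estimate(beacons, beacon_hops, node_hops):
            # beacons: (k, 2) coordinates; beacon_hops[i][j]: hop count between
            # beacons i and j; node_hops[i]: hops from the unknown node to beacon i.
            B = np.asarray(beacons, float)
            H = np.asarray(beacon_hops, float)
            h = np.asarray(node_hops, float)
            d = np.linalg.norm(B[:, None, :] - B[None, :, :], axis=2)
            hop_size = d.sum(axis=1) / H.sum(axis=1)       # per-beacon average hop size
            est = hop_size[np.argmin(h)] * h               # estimated distances to beacons
            # Linearized least-squares multilateration against the last beacon.
            A = 2 * (B[:-1] - B[-1])
            b = (est[-1] ** 2 - est[:-1] ** 2
                 + np.sum(B[:-1] ** 2, axis=1) - np.sum(B[-1] ** 2, axis=1))
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        beacons = [(0, 0), (10, 0), (0, 10), (10, 10)]
        beacon_hops = [[0, 2, 2, 3], [2, 0, 3, 2], [2, 3, 0, 2], [3, 2, 2, 0]]
        print(dv_hop_estimate(beacons, beacon_hops, node_hops=[1, 2, 2, 2]))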
    Anonymous bidirectional RFID authentication protocol based on low-cost tags
    HU Tao WEI Guo-heng
    2012, 32(01):  111-114.  DOI: 10.3724/SP.J.1087.2012.00111
    To remove hidden risks and address the often-neglected Denial-of-Service (DoS) attacks against the back-end database in current low-cost Radio Frequency Identification (RFID) authentication protocols, a new anonymous bidirectional RFID authentication protocol based on low-cost tags was proposed, and its performance was analyzed. Exploiting the simple operation logic and shielded operation of the reader, the scheme uses two cascaded 16-bit Cyclic Redundancy Check (CRC) messages as the mutual authentication factor between the tag and the reader. The analytical results show that the proposed protocol possesses untraceability, authenticity and usability, and resists replay attacks and desynchronization attacks. Overall, it is a secure, efficient and practical low-power RFID authentication scheme.
    Security assurance capability assessment based on entropy weight method for cryptographic module
    SU Deng-yin XU Kai-yong GAO Yang
    2012, 32(01):  115-118.  DOI: 10.3724/SP.J.1087.2012.00115
    To solve the problems that the index values of cryptographic modules are not fixed, the index system is hard to build, and the security assurance ability cannot be quantitatively assessed, a security assurance capability assessment method for cryptographic modules was proposed. Interval numbers were used to describe the security attributes of cryptographic modules. The weight vector of each period point was determined by the entropy weight coefficient method combined with an expert decision weight method. According to interval multi-attribute decision-making methodology, a feasible method was adopted to solve the interval Information Assurance (IA) capability evaluation problem of cryptographic modules. Finally, through the analysis of two kinds of cryptographic modules, the experimental results show that the proposed method is feasible.
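    The entropy weight part of the method can be illustrated with a crisp-valued sketch: indexes whose values differ more across the assessed modules carry more information and therefore receive larger weights. Handling of interval-valued indexes and the combination with expert weights, as in the paper, are not reproduced here.

        import numpy as np

        def entropy_weights(X):
            # Entropy weight method for an m x n decision matrix X
            # (m alternatives, n benefit-type indexes), crisp values only.
            X = np.asarray(X, float)
            m, _ = X.shape
            P = X / X.sum(axis=0)                          # column-wise proportions
            with np.errstate(divide="ignore", invalid="ignore"):
                logs = np.where(P > 0, np.log(P), 0.0)
            e = -(P * logs).sum(axis=0) / np.log(m)        # entropy of each index
            d = 1.0 - e                                    # degree of divergence
            return d / d.sum()

        # Example: three modules scored on four security indexes.
        print(entropy_weights([[0.8, 0.6, 0.9, 0.5],
                               [0.7, 0.9, 0.4, 0.6],
                               [0.9, 0.5, 0.8, 0.7]]))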
    Network and communications
    Modeling and simulation of multi-states random mobility model
    ZHANG Heng-yang ZHENG Bo CHEN Xiao-ping
    2012, 32(01):  119-122.  DOI: 10.3724/SP.J.1087.2012.00119
    The mobility model is the basis of protocol design and evaluation for mobile Ad Hoc networks. A multi-state entity random mobility model was proposed according to the requirements of entity mobility modeling; it can reflect realistic node movement and has more controllable parameters. Several familiar mobility models can be derived from it by adjusting some parameters. The mobility model can be applied to the simulation of mobile Ad Hoc networks owing to its flexibility and versatility.
    Congestion control mechanism based on sensitive queue in wireless access network
    YAN Li-ming NIU Yu-gang
    2012, 32(01):  123-126.  DOI: 10.3724/SP.J.1087.2012.00123
    Since wireless access networks are subject to strong nonlinearity, large delay and random link loss, classical Active Queue Management (AQM) suffers from slow convergence and long queue response time in the actual control process. By analyzing the characteristics of the Random Exponential Marking (REM) algorithm in wireless access networks, this paper proposed a new congestion control method for wireless access networks based on a sensitive queue, which improved the original price model of REM and overcame the insensitivity of the price to changes of the queue size. Moreover, a single neuron was utilized to optimize the parameters. Finally, the proposed algorithm was compared with REM and PI on the NS2 simulation platform. The simulation results show that the proposed algorithm has fast convergence and strong robustness.
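    The baseline REM update that the paper modifies can be sketched as follows: a link "price" integrates both the queue mismatch and the rate mismatch, and packets are marked with probability 1 - phi^(-price). The parameter values below are illustrative defaults, not those used in the paper.

        def rem_step(price, queue, q_ref, in_rate, capacity,
                     gamma=0.001, alpha=0.1, phi=1.001):
            # One update of Random Exponential Marking (REM): the price grows with
            # both queue mismatch and rate mismatch, and the marking probability
            # is an exponential function of the price.
            price = max(0.0, price + gamma * (alpha * (queue - q_ref) + in_rate - capacity))
            mark_prob = 1.0 - phi ** (-price)
            return price, mark_prob

        # Example: queue above target and arrivals above capacity push the price up.
        p = 0.0
        for _ in range(100):
            p, m = rem_step(p, queue=320, q_ref=100, in_rate=1200, capacity=1000)
        print(round(p, 3), round(m, 4))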
    Approach of enhancing network survivability by optimizing weights based on particle swarm optimization
    YUAN Rong-kun MENG Xiang-ru LI Ming-xun WEN Xiang-xi
    2012, 32(01):  127-130.  DOI: 10.3724/SP.J.1087.2012.00127
    As most network failures are transient single-link failures, a new approach that uses the Particle Swarm Optimization (PSO) algorithm to optimize link weights for enhancing network survivability was proposed. A cost function was introduced that puts a high cost on links with high utilization, so as to avoid link overload. The objective function was a weighted sum of two parts: the maximum link cost under the normal state, and the maximum link cost under all single-link failures. The algorithm model was then built and the PSO algorithm was used to find the optimal weights. The experimental results show that the weights calculated by the proposed method keep link utilization lower under failure states and prevent the network from congestion due to traffic diversion; therefore, the network survivability can be improved.
    Uneven clustering routing algorithm based on particle swarm optimization
    ZOU Jie SHI Chang-qiong JI Wen-yan
    2012, 32(01):  131-133.  DOI: 10.3724/SP.J.1087.2012.00131
    To deal with the “hot area” problem and cluster head selection in clustering routing algorithms for Wireless Sensor Networks (WSN), the paper designed an uneven clustering routing algorithm based on adaptive Particle Swarm Optimization (PSO). Firstly, according to the distance between candidate nodes and the sink node, the competitive radius was calculated and clusters of various sizes were constructed. Then the PSO was introduced according to the cluster size and used to select the final cluster heads by evaluating factors such as the residual energy of nodes and the distances between nodes. The cluster heads with more residual energy were chosen as the next hop to form a multi-hop route rooted at the sink node. The simulation results show that, compared with two similar algorithms, LEACH and EUCC, the proposed algorithm extends the network lifetime by 34% and 16% respectively, reduces the average energy consumption by 22% and 12% respectively, and effectively decreases the energy consumption of network nodes.
    Three-dimensional DV-Hop algorithm based on mobile Agent
    CAO Dun ZHANG Jing FU Ming
    2012, 32(01):  134-138.  DOI: 10.3724/SP.J.1087.2012.00134
    Node localization of Wireless Sensor Networks (WSN) in three-dimensional space is one of the current hot research topics. A three-dimensional DV-Hop localization algorithm based on mobile Agents was proposed by analyzing the shortcomings of the existing three-dimensional localization algorithms, extending the range-free DV-Hop algorithm to three-dimensional space, and improving its performance in traffic and positioning accuracy. The simulation results show that the proposed algorithm can position sensor nodes effectively in a three-dimensional environment, that the beacon node density and communication radius have little effect on the positioning error and coverage, and that the positioning accuracy and coverage are significantly improved relative to other algorithms.
    Functional test method for electronic control unit based on controller area network bus
    CHENG An-yu ZHAO Guo-qing FENG Hui-zong ZHANG Ling
    2012, 32(01):  139-142.  DOI: 10.3724/SP.J.1087.2012.00139
    With the rapid development of the automotive electronics market, more and more Electronic Control Units (ECU) for vehicle controllers appear, and their functional testing becomes more complex. In order to solve the problem of ECU functional testing, an automatic ECU test method based on the Controller Area Network (CAN) was studied. The system included the software and hardware platform of National Instruments (NI) and a CAN bus communication platform, through which the system and the ECU formed a closed-loop structure. By transmitting test messages over the CAN bus, the system could achieve batch testing of ECUs of the same type. The new test method can reduce test errors and support assembly-line testing of ECUs, which greatly reduces the complexity of ECU functional testing and the test workload. At the same time, the system can also be applied to functional testing of other types of ECU by extending the simulated signal generation module and the use case library.
    Research of super-node based on leader-follower algorithm
    WANG Xiao-juan ZHOU Zhu-rong
    2012, 32(01):  143-146.  DOI: 10.3724/SP.J.1087.2012.00143
    Analyzing how to deal with new nodes that do not match any super-node in a super-node P2P network based on the leader-follower algorithm can help improve the efficiency and performance of super-nodes. The paper introduced a general class node and a splitting algorithm: the nodes that do not match any super-node are managed by the general class node, and when these nodes reach a certain number, the splitting algorithm is used to split them into several semantically similar clusters. Finally, a merge sorting algorithm chooses the optimal node as the super-node. The experimental results show that the proposed method improves the efficiency and performance of super-nodes, and that it has good feasibility.
    Correlation rotation precoding algorithm based on criterion of minimum mean square error
    QI Mei-juan WU Yu-cheng
    2012, 32(01):  147-149.  DOI: 10.3724/SP.J.1087.2012.00147
    Concerning the problem that the traditional Correlation Rotation (CR) precoding algorithm enlarges noise, a Lagrange function was used to minimize the error between the received signal and the transmitted signal, statistics and Bayesian theory were used to estimate the imperfect channel state information, and a CR precoding scheme based on the Minimum Mean Square Error (MMSE) criterion was designed for both perfect and imperfect Channel State Information (CSI). The simulation results show that under perfect CSI the bit error rate performance of the designed scheme is improved by about 2-3dB at the same Signal-to-Noise Ratio (SNR), and that under imperfect CSI the system performance is improved obviously compared with CR precoding based on the Zero Forcing (ZF) criterion.
    Improved Normalized BP-Based decoding algorithm for low density parity check code
    ZHANG Xiao-hua LI Yan-ping
    2012, 32(01):  150-152.  DOI: 10.3724/SP.J.1087.2012.00150
    To further improve the decoding performance of the Normalized BP-Based algorithm, an improved Normalized BP-Based algorithm was proposed. The correction factor was changed dynamically according to the magnitude of the information passed from the check nodes to the variable nodes, in order to achieve nonlinear compensation of the BP-Based algorithm. The simulation results show that when the bit error rate is less than 0.5×10^-2, the improved algorithm obtains a gain of about 0.1dB compared with the Normalized BP-Based algorithm, with only a slight increase in complexity and no increase in the number of iterations.
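    For context, the check-node update of the Normalized BP-Based (normalized min-sum) algorithm that is being improved looks as follows; the improvement would choose the correction factor dynamically from the incoming magnitudes instead of keeping it fixed. The fixed alpha below is an illustrative value.

        import numpy as np

        def check_node_update(incoming, alpha=0.8):
            # For each edge, the outgoing LLR is the product of the signs and the
            # minimum magnitude of the OTHER incoming LLRs, scaled by alpha.
            L = np.asarray(incoming, float)
            signs = np.sign(L)
            total_sign = np.prod(signs)
            mags = np.abs(L)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            out = np.empty_like(L)
            for i in range(len(L)):
                other_min = min2 if i == order[0] else min1
                out[i] = alpha * total_sign * signs[i] * other_min
            return out

        print(check_node_update([1.2, -0.4, 2.5, -3.0]))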
    Network and distributed technology
    Partial transshipment strategy in a three-echelon emergency supply system under uncertain circumstances
    LIU Xue-heng XU Chang-yan WANG Chuan-xu
    2012, 32(01):  153-157.  DOI: 10.3724/SP.J.1087.2012.00153
    To solve the multi-spot inventory sharing problem in an emergency system, the emergency transshipment strategy of a multi-product, three-echelon emergency supply system with random fuzzy demand was studied. When a stockout happened, the nearest-first emergency lateral transshipment principle and a partial inventory sharing strategy among the spots were adopted to satisfy the demand, and a model for the expected total cost under random fuzzy demand was developed accordingly, taking account of the service time constraints and the storage space limitation of the spots. An advanced computing method combining Particle Swarm Optimization (PSO) and the Simulated Annealing (SA) algorithm, called the PSO-SA algorithm, was proposed to solve the model, and the effects of the transshipment trigger inventory level, the per-item transshipment time and the inventory storage space on partial transshipment were analyzed through a numerical example. Finally, the effectiveness of the proposed algorithm and the applicability of the model were verified.
    Reconfigurable hardware task partitioning algorithm based on depth first greedy search
    CHEN Nai-jin
    2012, 32(01):  158-162.  DOI: 10.3724/SP.J.1087.2012.00158
    This paper proposed a hardware task partitioning algorithm, called DFGSP (Depth First Greedy Search Partitioning), for the problem of minimizing communication cost in reconfigurable computing. First, the front task was taken from the ready queue, and the Directed Acyclic Graph (DAG) transformed from a computing-intensive task was scanned and partitioned by Depth First Search (DFS). Then, when a task node did not meet the area constraint, the number of outgoing edges (quantified as communication cost) of the current partitioning module was computed. Finally, after skipping the task nodes that did not meet the area constraint, the algorithm scanned and partitioned the ready task nodes, giving priority to those that did not increase the outgoing edges of the current partitioning module and that made full use of the remaining reconfigurable hardware fragments. In comparison with the Cluster-Based Partitioning (CBP) algorithm and the Level Sensitive Cluster-Based Partitioning (LSCBP) algorithm, the experimental results show that the proposed algorithm obtains the smallest number of partitioning modules and the smallest average number of input-output edges crossing modules, and the practical results indicate that the proposed algorithm achieves a prominent improvement in hardware task partitioning performance over previous algorithms while preserving run-time efficiency.
    Performance analysis and improvement of parallel molecular dynamics algorithm based on OpenMP
    BAI Ming-ze CHENG Li DOU Yu-sheng SUN Shi-xin
    2012, 32(01):  163-166.  DOI: 10.3724/SP.J.1087.2012.00163
    To enhance the computing speed of molecular dynamics simulations on shared-memory servers, the performance of a parallel molecular dynamics program based on the Open Multi-Processing (OpenMP) critical-section method was analyzed and improved. After testing the performance on a multi-core server and calculating the speedup and parallel efficiency, an optimized triangle method was developed. In this method, fixed atom sets were assigned to the threads, and the number of atoms per set increased stepwise, which made the threads arrive at the critical sections at different times. The triangle method can efficiently halve the idle time in critical sections and therefore significantly enhances the parallel performance.
    Grid service discovery algorithm based on attribute weight and rough set
    ZHAO Xu HUANG Yong-zhong AN Liu-yang
    2012, 32(01):  167-169.  DOI: 10.3724/SP.J.1087.2012.00167
    To solve the low efficiency problem of grid service discovery, an optimized service discovery algorithm that considers the weights of service properties was put forward, based on ontology technology, decision table theory and the knowledge representation system of rough sets. Through rule extraction from the service invocation history and the calculation of the service property weights, the two main phases of the service discovery algorithm, information pre-processing and rough set service matching, could be carried out. This article also gave a theoretical analysis and experimental verification of both precision and recall. The results show that the proposed algorithm provides higher precision and recall; besides, the ranking of the candidate services is more reasonable.
    Service-oriented self-adapted software architecture of cloud resource information integration
    WAN Nian-hong
    2012, 32(01):  170-174.  DOI: 10.3724/SP.J.1087.2012.00170
    Service-Oriented Architecture (SOA) is the key software development technology for realizing cloud resource information integration. Nowadays, common SOA platforms usually have low cloud service efficiency and are especially incapable of supporting the dynamic evolution of self-adaptive cloud resource information integration software. To improve the efficiency and extensibility of software for cloud resource information integration, a service-oriented, self-adaptive software architecture for cloud resource information integration was proposed by studying the cloud model of software resource integration, the behavior specification and the service combination algorithms of the cloud resource integration software architecture, and by improving the corresponding algorithms. Finally, application experiments were carried out. The experimental results demonstrate that the proposed model achieves sound resource information integration effects and utility compared with conventional architectures.
    Vibration source bearing estimation based on array time delay system
    YANG Chun-zhi
    2012, 32(01):  175-178.  DOI: 10.3724/SP.J.1087.2012.00175
    In order to effectively achieve vibration source bearing estimation, a triangle array time delay system composed of three acceleration sensors was designed. Localization and distance determination algorithms were derived for a single triangle array and a double triangle array respectively, and the localization accuracy of the single triangle array time delay system was analyzed, so that the relationship between localization accuracy and distance determination accuracy could be investigated. The experimental results show that both vibration source position estimation by the single triangle array and vibration source distance estimation by the double triangle array are effective.
    Autonomous landing of unmanned helicopter based on landmark's geometrical feature
    SUN Wei-guang HAO Ying-guang
    2012, 32(01):  179-181.  DOI: 10.3724/SP.J.1087.2012.00179
    To acquire landmark information and calculate the attitude information of an unmanned helicopter, a landmark recognition method based on image contour fitting was proposed in this paper. The method judged the landmark's situation by imposing geometric constraints. If the image contained the full landmark, real-time calculation of the corners could yield the helicopter's attitude information; if the image contained only part of the landmark, the method could estimate the direction and size of the helicopter movement that would bring the landmark completely into view. The simulations under laboratory conditions show that the proposed method is stable and feasible.
    Database technology
    Research and advances on graph data mining
    DING Yue ZHANG Yang LI Zhan-huai WANG Yong
    2012, 32(01):  182-190.  DOI: 10.3724/SP.J.1087.2012.00182
    With the rapid growth of bioinformatics (protein structure analysis, genome identification), social networks (links between entities), Web analysis (interlinkage structure analysis, content mining and Web log retrieval), as well as the complex structure of text information retrieval, mining graph data has become a hot research field in recent years. Some traditional data mining algorithms have been gradually extended to graph data, such as clustering, classification, and frequent pattern mining. In this paper, the authors presented several state-of-the-art mainstream techniques for mining graph data, and gave a comprehensive summary of their characteristics, practical significance, as well as real-life applications in mining graph data. Finally, several research directions on graph data, particularly uncertain graph data, were pointed out.
    E-learning resource library model based on domain ontology
    ZHANG Hu-yin ZHANG Ming-yang LI Xin
    2012, 32(01):  191-195.  DOI: 10.3724/SP.J.1087.2012.00191
    With the rapid development of E-learning systems, E-learning resources grow explosively. How to effectively organize E-learning resources is a key factor in constructing an efficient E-learning system. Concerning the deficiency of resource organization in existing E-learning resource libraries, this paper proposed an E-learning resource retrieval model based on domain ontology. The model built a domain knowledge library by making use of domain knowledge, constructed an E-learning resource metadata database by extracting resource metadata, realized semantic organization of E-learning resources through mapping relations, and constructed a semantic retrieval model on this basis, in order to effectively solve the problem of the loss of semantic background in E-learning resource retrieval. The model also enhances the recall rate and the precision rate of the retrieval results, and it is more in line with the needs of users.
    Method of data tendency measure mining in dynamic association rules
    ZHANG Zhong-lin ZENG Qing-fei XU Fan
    2012, 32(01):  196-198.  DOI: 10.3724/SP.J.1087.2012.00196
    Based on the original definition and classification of the Support Vector (SV) and confidence vector, this paper put forward a method of data tendency measure mining in dynamic association rules, according to the characteristic that such rules change with time. First, a tendency measure threshold was used to eliminate useless rules, so that the candidate item sets could be reduced. Second, when producing the dynamic association rules, the method found valuable rules and improved the mining quality. Finally, a transaction database characterized by tendency changes and cycles was analyzed, and the analytical results verify the validity of the proposed method.
    Improvement of data mining algorithm and its application in Chord network
    WANG Chun-feng ZHOU Ning
    2012, 32(01):  199-201.  DOI: 10.3724/SP.J.1087.2012.00199
    To improve the efficiency of data mining algorithms and the speed of Chord resource location, the paper optimized the data mining algorithm by introducing a conditional model and a depth-first strategy, and then applied the data mining algorithm to the Chord network routing table. The resource location process was sped up by deleting routing information that was invalid or rarely used, and by adding relevant routing information. Finally, the performance comparison experiments show that the improved data mining algorithm exhibits superior performance and, by mining the association rules of the Chord network, effectively improves the resource location performance of the system.
    Artificial intelligence
    Concept similarity computation method based on edge weighting between concepts
    FENG Yong ZHANG Yang
    2012, 32(01):  202-205.  DOI: 10.3724/SP.J.1087.2012.00202
    The traditional distance-based similarity calculation method was described. Concerning the fact that the distance calculation method does not contain sufficient semantic information, this paper proposed an improved method that uses WordNet and edge weighting information between concepts to measure similarity. It considers the depth and density of concepts in the corpus, i.e. the semantic richness of concepts. Using this method, semantic similarity calculation issues can be solved and the calculation of similarity among concepts becomes easy. The experimental results show that the proposed method has a correlation of 0.9109 with the benchmark data set of Rubenstein concept pairs. Compared with the classical method, the proposed method has higher accuracy.
    Knowledge representation method of product design based on ontology
    ZHANG Dong-ming NIU Zhan-wen ZHAO Nan HUO Ming
    2012, 32(01):  206-209.  DOI: 10.3724/SP.J.1087.2012.00206
    According to the diversity, dynamics and correlation of product design knowledge, this paper put forward a knowledge representation method for product design based on ontology, established a knowledge unit with the core of object, concept set, property set, proposition set and function set, and designed the links between them. On this basis, this paper introduced input and output modules to enhance the comprehensiveness and flexibility of product design knowledge representation. Finally, the paper took cylindrical spiral spring design as an example and verified the effectiveness of the proposed method.
    Classification algorithm for imbalance dataset based on quotient space theory
    ZHANG Jian FANG Hong-bin SUN Qi-lin LIU Mingshu
    2012, 32(01):  210-212.  DOI: 10.3724/SP.J.1087.2012.00210
    The application of data classification is usually confronted with the problem of imbalanced datasets in machine learning. To improve the performance of imbalanced dataset classification, an over-sampling classification algorithm based on quotient space theory (QMSVM) was proposed. The algorithm partitioned the majority-class data by clustering, and combined the results with the minority-class data for linear Support Vector Machine (SVM) learning. Support vectors and misclassified samples of the majority class were obtained from those granules. On the other hand, support vectors and misclassified samples of the minority class were obtained and the Synthetic Minority Over-sampling Technique (SMOTE) was applied to them. The two new kinds of samples were then merged for SVM learning, so as to rebalance the training set and obtain a more reasonable classification hyperplane. The experimental results show that, in comparison with several other algorithms, the overall accuracy of the proposed algorithm decreases slightly, but the g_means value and the classification accuracy on positives are significantly improved, and the effect is better on datasets with larger imbalance rates.
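    A minimal rebalancing sketch in the spirit of the training step, using SMOTE over-sampling followed by a linear SVM (scikit-learn and imbalanced-learn): it illustrates only the generic SMOTE+SVM combination, not the quotient-space granulation of the majority class; the synthetic dataset and parameters are placeholders.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import balanced_accuracy_score
        from sklearn.svm import LinearSVC
        from imblearn.over_sampling import SMOTE

        # Imbalanced two-class toy dataset (about 5% positives).
        X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95, 0.05],
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # rebalance training set
        clf = LinearSVC(dual=False).fit(X_bal, y_bal)                   # linear SVM on balanced data
        print(balanced_accuracy_score(y_te, clf.predict(X_te)))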
    Load balance strategy of cloud computing based on fuzzy clustering analysis
    YAO Jing HE Ju-hou
    2012, 32(01):  213-217.  DOI: 10.3724/SP.J.1087.2012.00213
    Implementing the load balance of resource access plays an important role in realizing cloud computing. Therefore, based on the characteristics of the cloud computing environment, an improved fuzzy clustering analysis algorithm was proposed in this paper, in which the Particle Swarm Optimization (PSO) algorithm was integrated with the fuzzy C-means algorithm to improve clustering accuracy. The improved fuzzy clustering algorithm was then used to analyze the I/O and Central Processing Unit (CPU) utilization of all computing nodes, and each node was assigned to a definite collection representing its load level, which served as the basis for deciding which nodes needed to transfer tasks, so as to achieve load balance. The experimental results show that, in terms of accuracy, both on datasets from the UCI machine learning repository and in the proposed load balancing mechanism, the improved fuzzy clustering algorithm reaches about 110% of the accuracy of the traditional algorithm, and it also surpasses the traditional algorithm in stability.
    Multiple attribute decision-making method with intervals based on possibility degree matrix
    GUO Kai-hong MU You-jing
    2012, 32(01):  218-222.  DOI: 10.3724/SP.J.1087.2012.00218
    Abstract ( )   PDF (767KB) ( )  
    References | Related Articles | Metrics
    The authors studied the relations among several possibility degree formulas, and proposed a possibility degree matrix-based method that aims to objectively determine the weights of criteria in Multiple Attribute Decision-Making (MADM) with intervals. Each pair of interval values belonging to the same attribute in a decision matrix was compared to construct the corresponding possibility degree matrices, whose priority vectors were then used to convert the interval-valued decision matrix into a matrix of precise numbers. In this way, an uncertain problem of determining criterion weights in MADM with intervals could be converted into a certain one that is easier to handle, and with the attribute weights obtained, the possibility degree method for ranking interval numbers was still used to derive the priorities of the alternatives. Two numerical examples were given to illustrate the proposed method and examine its feasibility and validity. Finally, a necessary discussion was made on the conversion from uncertainty to certainty in MADM with intervals and on some potential problems arising from it.
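    The sketch below shows a commonly used possibility degree formula for comparing two intervals and one standard way of turning a possibility degree matrix into a priority vector; the aggregation formula w_i = (sum_j p_ij + n/2 - 1) / (n(n - 1)) is the conventional one and may differ from the exact variant adopted in the paper, and the interval data are illustrative.

```python
import numpy as np

def possibility(a, b):
    """p(a >= b) for intervals a = (a_l, a_u), b = (b_l, b_u)."""
    (al, au), (bl, bu) = a, b
    denom = (au - al) + (bu - bl)
    if denom == 0:                                   # both intervals degenerate to crisp numbers
        return 0.5 if al == bl else float(al > bl)
    return min(max((au - bl) / denom, 0.0), 1.0)

def priority_vector(intervals):
    n = len(intervals)
    P = np.array([[possibility(a, b) for b in intervals] for a in intervals])
    w = (P.sum(axis=1) + n / 2 - 1) / (n * (n - 1))  # standard ranking formula
    return w, P

# Attribute values of three alternatives given as intervals (illustrative data).
vals = [(0.2, 0.4), (0.3, 0.5), (0.35, 0.45)]
w, P = priority_vector(vals)
print(np.round(P, 3))
print("priority vector (crisp weights for this attribute):", np.round(w, 3))
```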
    Learning Naive Bayes parameters gradually on a series of contracting spaces
    OUYANG Ze-hua GUO Hua-ping FAN Ming
    2012, 32(01):  223-227.  DOI: 10.3724/SP.J.1087.2012.00223
    Abstract ( )   PDF (773KB) ( )  
    References | Related Articles | Metrics
    Locally Weighted Naive Bayes (LWNB) is a good improvement of Naive Bayes (NB), and Discriminative Frequency Estimate (DFE) remarkably improves the generalization accuracy of NB. Inspired by LWNB and DFE, this paper proposed the Gradually Contracting Spaces (GCS) algorithm to learn the parameters of NB. Given a test instance, GCS found a series of subspaces of the global space containing all training instances; each subspace contained the test instance, and the subspaces were nested, every smaller one being contained in the larger ones. GCS then used the training instances in these subspaces to gradually learn the NB parameters with a Modified version of DFE (MDFE), and used the resulting NB to classify the test instance. Unlike LWNB, GCS trained NB with all training data and produced an eager version of the classifier, which is the essential difference between the two. A decision-tree version of GCS, named GCS-T, was implemented in this paper. The experimental results show that GCS-T achieves higher generalization accuracy than C4.5 and several Bayesian classification algorithms such as NB, BayesNet, NBTree, Hidden Naive Bayes (HNB) and LWNB, and its classification speed is remarkably faster than that of LWNB.
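    The following sketch is a simplified reconstruction of the discriminative frequency estimate idea that GCS builds on: instead of counting each training instance once, the class and attribute counts are incremented by the current prediction error 1 - P(true class | x). It handles discrete attributes only and is not the authors' MDFE code; all data and parameter choices are illustrative.

```python
import numpy as np

def dfe_train(X, y, n_classes, n_values, passes=5):
    """Learn Naive Bayes counts with error-weighted (discriminative) increments."""
    n_attr = X.shape[1]
    class_cnt = np.ones(n_classes)                          # Laplace-style smoothing
    cond_cnt = np.ones((n_attr, n_classes, n_values))
    for _ in range(passes):
        for x, c in zip(X, y):
            # Current Naive Bayes posterior under the running counts.
            log_p = np.log(class_cnt / class_cnt.sum())
            for a in range(n_attr):
                log_p += np.log(cond_cnt[a, :, x[a]] / cond_cnt[a].sum(axis=1))
            p = np.exp(log_p - log_p.max()); p /= p.sum()
            err = 1.0 - p[c]                                # discriminative update size
            class_cnt[c] += err
            for a in range(n_attr):
                cond_cnt[a, c, x[a]] += err
    return class_cnt, cond_cnt

# Tiny discrete toy data: two binary attributes, two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [1, 1], [0, 0]])
y = np.array([0, 0, 1, 1, 1, 0])
class_cnt, cond_cnt = dfe_train(X, y, n_classes=2, n_values=2)
print("class counts after DFE-style training:", np.round(class_cnt, 3))
```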
    Simulation and research on processing of pre-mRNA in electronic cell
    WANG Yu-xian LU Xin-hua
    2012, 32(01):  228-233.  DOI: 10.3724/SP.J.1087.2012.00228
    Abstract ( )   PDF (928KB) ( )  
    References | Related Articles | Metrics
    The processing of pre-mRNA is a necessary step in gene expression, and it is an important mechanism for regulating gene expression and producing the proteins that drive the life activities of cells. Existing electronic cell (E-Cell) models seldom address the simulation of pre-mRNA processing. This paper proposed an E-Cell model named Analog-Cell, which realistically reproduced the procedure of gene expression by defining reasonable reaction rules and several algorithms that simulate the processing of pre-mRNA; Analog-Cell thus obtained satisfactory simulation results.
    Improved shuffled frog leaping algorithm
    GE Yu WANG Xue-ping LIANG Jing
    2012, 32(01):  234-237.  DOI: 10.3724/SP.J.1087.2012.00234
    Abstract ( )   PDF (570KB) ( )  
    References | Related Articles | Metrics
    To enhance the performance of the Shuffled Frog Leaping Algorithm (SFLA) in solving optimization problems, this paper proposed an improved shuffled frog leaping algorithm. A mutation operator was added to the original algorithm, and its scale was regulated by a fuzzy controller so that it could be adjusted dynamically according to the search range in the solution space at different phases and the distribution of candidate solutions during evolution. The simulation results on four typical benchmark functions show that, compared with the basic shuffled frog leaping algorithm and a known improved algorithm, the proposed algorithm achieves more than twice the improvement in accuracy, convergence speed and success rate, and demonstrates better optimization capability, especially for high-dimensional complex optimization problems.
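    The following compact SFLA sketch adds a simple Gaussian mutation whose scale shrinks over the run, standing in for the fuzzy-controlled mutation of the paper; the memeplex count, step sizes, mutation schedule and benchmark function are illustrative assumptions.

```python
import numpy as np

def sphere(x):                                   # benchmark objective to minimize
    return float(np.sum(x ** 2))

def sfla(f, dim=10, frogs=30, memeplexes=5, gens=100, inner=10, bound=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-bound, bound, (frogs, dim))
    for g in range(gens):
        fit = np.array([f(x) for x in pop])
        order = np.argsort(fit)                  # sort frogs, best first
        pop, fit = pop[order], fit[order]
        best_global = pop[0].copy()
        for m in range(memeplexes):              # shuffle frogs into memeplexes
            idx = np.arange(m, frogs, memeplexes)
            for _ in range(inner):
                sub = idx[np.argsort(fit[idx])]
                worst, best_local = sub[-1], sub[0]
                step = rng.random() * (pop[best_local] - pop[worst])
                cand = np.clip(pop[worst] + step, -bound, bound)
                if f(cand) >= fit[worst]:        # fall back to the global best
                    step = rng.random() * (best_global - pop[worst])
                    cand = np.clip(pop[worst] + step, -bound, bound)
                if f(cand) >= fit[worst]:        # otherwise restart the frog randomly
                    cand = rng.uniform(-bound, bound, dim)
                pop[worst], fit[worst] = cand, f(cand)
        # Gaussian mutation whose scale shrinks as the search proceeds.
        scale = 0.1 * bound * (1.0 - g / gens)
        mutants = np.clip(pop + rng.normal(0.0, scale, pop.shape), -bound, bound)
        mfit = np.array([f(x) for x in mutants])
        improved = mfit < fit
        pop[improved] = mutants[improved]
    fit = np.array([f(x) for x in pop])
    return pop[np.argmin(fit)], fit.min()

best, val = sfla(sphere)
print("best value found:", val)
```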
    Graphics and image technology
    Depth map acquisition technique based on Quaternion-Gabor wavelet motion estimation
    LUO Gui-e XU Yun-bin
    2012, 32(01):  238-240.  DOI: 10.3724/SP.J.1087.2012.00238
    Abstract ( )   PDF (571KB) ( )  
    References | Related Articles | Metrics
    The depth map is the key technology of the “2D video+depth map” format for 3D display. Based on research into quaternions and the Gabor filter, a depth map acquisition technique based on Quaternion-Gabor wavelet motion estimation was proposed. By calculating the global motion vector of the images in an ordinary video, the background motion model was estimated and the motion field was obtained; the foreground and background of the image were then separated and the depth map of the image was derived. By extending the ordinary Gabor filter to a Quaternion-Gabor filter, extra information can be obtained by transforming the picture into the frequency domain, and each pixel's RGB channels can be filtered independently. The experimental results show that the depth map obtained by Quaternion-Gabor wavelet motion estimation varies smoothly and has more prominent edges.
    Application of exposure fusion to single image dehazing
    CHEN Chen HU Shi-qiang ZHANG Jun
    2012, 32(01):  241-244.  DOI: 10.3724/SP.J.1087.2012.00241
    Abstract ( )   PDF (723KB) ( )  
    References | Related Articles | Metrics
    This paper proposed a simple and effective method for removing haze from a single input image degraded by bad weather. First, the airlight was estimated using the dark channel prior. Then, based on a mathematical model describing the formation of a hazy image, the depth of field of each pixel was sampled to generate a virtual haze-free image sequence. Finally, an exposure criterion introduced from the exposure fusion algorithm was used to extract a haze-free image from the image sequence by multi-scale image fusion. The experimental results show that the proposed method yields good results and is appropriate for real-time applications.
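    The sketch below shows the first step mentioned above, airlight estimation with the dark channel prior: take the per-pixel minimum over colour channels and a local patch, then read the airlight from the brightest pixels of that dark channel. The patch size and the 0.1% quantile are conventional choices and not necessarily the paper's; the synthetic image only exercises the functions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: H x W x 3 float array in [0, 1]; returns the local dark channel."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_airlight(img, patch=15, top_fraction=0.001):
    dark = dark_channel(img, patch)
    n_top = max(1, int(dark.size * top_fraction))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n_top:], dark.shape)
    return img[idx].mean(axis=0)          # average colour of the haziest pixels

# Synthetic hazy-looking image just to exercise the functions.
rng = np.random.default_rng(0)
img = np.clip(0.7 + 0.3 * rng.random((120, 160, 3)), 0, 1)
print("estimated airlight:", np.round(estimate_airlight(img), 3))
```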
    Geometric active contour model based on signed pressure force function
    YANG Jian-gong WANG Xi-li
    2012, 32(01):  245-247.  DOI: 10.3724/SP.J.1087.2012.00245
    Abstract ( )   PDF (632KB) ( )  
    References | Related Articles | Metrics
    To address several drawbacks of the Geometric Active Contour (GAC) model, a new active contour model based on a Signed Pressure Force (SPF) function was proposed in this article. A region-based signed pressure force function was constructed as the edge detector of the model, which made the evolving contour more sensitive to blurred edges during evolution and more robust against noise. Compared with the traditional geometric active contour model, the proposed model has the following advantages: first, it can segment objects with blurred boundaries; second, it is more robust to noise; third, the contour can move bi-directionally. The experimental results show the efficiency of the proposed model.
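    The sketch below computes a common region-based SPF function, spf = (I - (c1 + c2)/2) / max|I - (c1 + c2)/2|, where c1 and c2 are the mean intensities inside and outside the current contour. This follows the widely used formulation and may differ in detail from the paper; the toy image and initial mask are illustrative.

```python
import numpy as np

def spf(image, inside_mask):
    c1 = image[inside_mask].mean()                 # mean intensity inside the contour
    c2 = image[~inside_mask].mean()                # mean intensity outside the contour
    r = image - (c1 + c2) / 2.0
    return r / (np.abs(r).max() + 1e-12)           # values in [-1, 1] that drive the contour

# Toy image: bright square on a dark background, initial contour as a centred box.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)
init = np.zeros_like(img, dtype=bool); init[10:54, 10:54] = True
force = spf(img, init)
print("SPF range:", round(float(force.min()), 3), round(float(force.max()), 3))
```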
    Image segmentation algorithm based on incomplete K-means clustering and category optimization
    YANG Ming-chuan Lǚ Xue-bin ZHOU Qun-biao
    2012, 32(01):  248-251.  DOI: 10.3724/SP.J.1087.2012.00248
    Abstract ( )   PDF (758KB) ( )  
    References | Related Articles | Metrics
    To improve clustering efficiency and image segmentation quality, the paper proposed an Incomplete K-means and Category Optimization (IKCO) method. First, the algorithm used a simple approach to subsample the data and determine the initial centers. Then, the image was segmented according to the clustering rules. Finally, a category optimization method was used to improve the segmentation results. The experimental results show that, compared with the traditional K-means clustering method, the proposed algorithm has better segmentation efficiency, and its segmentation results are more consistent with human visual perception.
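    For reference, the baseline pipeline that IKCO improves on can be sketched as K-means colour segmentation with simple pixel subsampling: fit the cluster centres on a random subset of pixels, then assign every pixel to its nearest centre. This only illustrates the general pipeline, not the IKCO initialization or the category-optimization step; the synthetic image and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, k=4, sample=5000, seed=0):
    """image: H x W x 3 array; returns an H x W label map."""
    pixels = image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    subset = pixels[rng.choice(len(pixels), size=min(sample, len(pixels)), replace=False)]
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(subset)   # fit on the subsample
    return km.predict(pixels).reshape(image.shape[:2])                    # label every pixel

# Synthetic two-region image just to exercise the function.
img = np.zeros((80, 80, 3)); img[:, :40] = [20, 20, 180]; img[:, 40:] = [200, 30, 30]
labels = kmeans_segment(img, k=2)
print("pixels per segment:", np.bincount(labels.ravel()))
```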
    Crowd motion segmentation algorithm based on video particle flow and FTLE field
    TONG Chao ZHANG Dong-ping CHEN Fei-yu
    2012, 32(01):  252-255.  DOI: 10.3724/SP.J.1087.2012.00252
    Abstract ( )   PDF (693KB) ( )  
    References | Related Articles | Metrics
    To segment moving crowds with different dynamics in complex video surveillance scenes, this paper proposed a crowd motion segmentation algorithm based on video particle flow and the Finite Time Lyapunov Exponent (FTLE) field. Firstly, video particle flow was used to represent long-range particle motion estimation, and the particle trajectories were optimized by minimizing an energy function combining point-based appearance matching and distortion between particles. Then the spatial gradient of the particle flow map was computed and the FTLE field was constructed. Finally, the Lagrangian Coherent Structure (LCS) in the FTLE field was used to divide the flow into regions with qualitatively different dynamics. The experimental results show that the proposed algorithm can effectively segment crowd flows with different dynamics in complex video surveillance scenes and has strong robustness.
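    The sketch below computes an FTLE field from a flow map: the Jacobian of the map from initial to final particle positions is taken with finite differences, and the FTLE is the log of the largest eigenvalue of the Cauchy-Green tensor scaled by the integration time. The analytic shear flow is only a stand-in for the video particle flow of the paper, and the grid spacing is assumed to be one.

```python
import numpy as np

def ftle(flow_x, flow_y, T):
    """flow_x, flow_y: final positions of particles seeded on a regular unit grid."""
    d_fx = np.gradient(flow_x)                   # [d(flow_x)/d(row), d(flow_x)/d(col)]
    d_fy = np.gradient(flow_y)
    F = np.stack([np.stack(d_fx, -1), np.stack(d_fy, -1)], -2)   # per-point Jacobian
    C = np.swapaxes(F, -1, -2) @ F                               # Cauchy-Green tensor
    lam_max = np.linalg.eigvalsh(C)[..., -1]                     # largest eigenvalue per point
    return np.log(np.maximum(lam_max, 1e-12)) / (2.0 * abs(T))

# Seed particles on a grid and advect them through a simple shear flow for time T.
T = 5.0
y, x = np.mgrid[0:50, 0:50].astype(float)
flow_x, flow_y = x + T * np.tanh(0.2 * (y - 25)), y.copy()       # shear concentrated near y = 25
field = ftle(flow_x, flow_y, T)
print("max FTLE (the ridge marks the shear layer):", round(float(field.max()), 3))
```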
    Multi-pose and expression face synthesis method based on tensor representation
    Lǚ Xuan WANG Zhi-cheng ZHAO Wei-dong
    2012, 32(01):  256-260.  DOI: 10.3724/SP.J.1087.2012.00256
    Abstract ( )   PDF (938KB) ( )  
    References | Related Articles | Metrics
    To synthesize facial images with new poses and expressions simultaneously from one image, a tensor-based subspace projection method for synthesizing multi-pose and multi-expression face images was proposed. Firstly, a fourth-order texture tensor and a shape tensor were built from the feature-annotated images. Then the Tucker tensor decomposition was applied to build the projection subspaces (person, expression, pose and feature subspaces). The core tensor and the expression, pose and feature subspaces were properly reorganized into a new tensor, which was used to synthesize new facial poses and expressions. The proposed method took full advantage of the intrinsic relationships among the various factors affecting the face. The experimental results show that the proposed method can synthesize face images with different expressions and various poses from a single image with a known expression and pose.
    Image restoration algorithm based on sparse regularized optimization
    XIAO Su HAN Guo-qiang
    2012, 32(01):  261-263.  DOI: 10.3724/SP.J.1087.2012.00261
    Abstract ( )   PDF (427KB) ( )  
    References | Related Articles | Metrics
    To speed up image restoration and improve the restored results, a new algorithm was proposed. Image restoration was formulated as a class of standard optimization problems, which were decomposed into two subproblems by the alternating minimization algorithm; a solution to the restoration problem was obtained by solving the two subproblems iteratively, with the iterative soft-thresholding algorithm introduced for the denoising subproblem. In the experiments, images degraded by different types of blur were restored. The experimental results show the effectiveness of the proposed algorithm: compared with the Multilevel Thresholded Landweber (MLTL) algorithm and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), the proposed algorithm reduces the running time by 28% and 71% respectively, and improves the Signal-to-Noise Ratio (SNR) by 0.7dB to 3.5dB.
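    The sketch below shows the mechanics of the iterative soft-thresholding step mentioned above, applied to a generic sparse recovery problem min 0.5*||Ax - b||^2 + lam*||x||_1; the paper's alternating-minimization splitting and the MLTL/FISTA baselines are not reproduced, and the problem data are synthetic.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, iters=300):
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)  # gradient step followed by shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = 3.0 * rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = ista(A, b)
print("nonzeros recovered on the true support:",
      int(np.sum((np.abs(x_hat) > 0.1) & (np.abs(x_true) > 0))), "of 8")
```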
    Multi-biometric feature fusion identification based on D-S evidence theory of image quality with different weights
    XIAO Bin-jie
    2012, 32(01):  264-268.  DOI: 10.3724/SP.J.1087.2012.00264
    Abstract ( )   PDF (845KB) ( )  
    References | Related Articles | Metrics
    For decision-level fusion recognition with face and finger-vein, a new image quality score combining three indexes was presented, and an improved fusion strategy based on D-S evidence theory was adopted to fuse the two biometric traits. First, the image quality score was computed by combining the distinctness, contrast and coefficient indexes. Then an improved fusion method based on image quality and D-S evidence theory was adopted; it reduced the impact of the maximum image quality score and made the weighting parameters more consistent with the actual situation. Compared with the result of D-S evidence theory without considering image quality, the results reveal that the proposed fusion method, which takes image quality information into account, improves the recognition performance.
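    The sketch below shows Dempster's rule of combination for two mass functions (one per biometric trait), preceded by a simple reliability discounting step driven by an image quality score. The frame of discernment and the quality weights are illustrative and the discounting scheme is a generic one, not necessarily the weighting strategy of the paper.

```python
from itertools import product

FRAME = frozenset({"genuine", "impostor"})

def discount(mass, alpha):
    """Shift (1 - alpha) of every belief to the full frame, i.e. to ignorance."""
    out = {k: alpha * v for k, v in mass.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1.0 - alpha)
    return out

def combine(m1, m2):
    """Dempster's rule: conjunctive combination followed by conflict normalization."""
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Quality-discounted evidence from the two matchers (alpha plays the quality weight).
face = discount({frozenset({"genuine"}): 0.8, frozenset({"impostor"}): 0.2}, alpha=0.9)
vein = discount({frozenset({"genuine"}): 0.6, frozenset({"impostor"}): 0.4}, alpha=0.5)
fused = combine(face, vein)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```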
    Taboo matching method for carton missing detection
    NI Song-peng WANG Xiao-nian ZHU Jin
    2012, 32(01):  269-271.  DOI: 10.3724/SP.J.1087.2012.00269
    Abstract ( )   PDF (715KB) ( )  
    References | Related Articles | Metrics
    To avoid the problem of missing cartons in the cigarette production process, this paper introduced a new pattern matching method based on machine vision. Using this method avoided the effect of randomly reflected light on the images. After the taboo area of the image was obtained, the pattern matching result was used to determine whether any cartons were missing. In addition, the taboo matching method could adjust the image and obtain the template image automatically, regardless of the pattern or color of the carton. In a real system, taboo matching reduces the false detection rate and provides a way of solving similar problems.
    Typical applications
    Recent advances in sparse representation of non-stationary signal
    FAN Hong GUO Peng WANG Fang-mei
    2012, 32(01):  272-278.  DOI: 10.3724/SP.J.1087.2012.00272
    Abstract ( )   PDF (1220KB) ( )  
    References | Related Articles | Metrics
    Signal decomposition is a process of obtaining information from signals, and it is a fundamental and key technique in many fields such as pattern recognition, intelligent systems and machinery fault diagnosis. Non-stationary signals exist widely and carry a great deal of information that reflects changes of the system, so their decomposition is particularly important to study. Focusing on improving the sparsity of signal representation, this paper studied the engineering background of feature extraction for non-stationary signals, analyzed in depth the characteristics, mechanisms, development history and current and future challenges of five types of methods, compared their models, and systematically reviewed the state of the art of feature extraction models in signal processing and analysis together with some successful applications. Finally, several main problems and deficiencies were pointed out, and future research directions were anticipated.
    Development and application of video driver based on RAW capture mode of DM642
    HE Wei YOU Jing ZHANG Ling
    2012, 32(01):  279-283.  DOI: 10.3724/SP.J.1087.2012.00279
    Abstract ( )   PDF (705KB) ( )  
    References | Related Articles | Metrics
    To solve the problem that the video driver for the C64X Digital Signal Processor (DSP) family cannot be used in RAW capture mode, the authors designed a data interface between the DM642 and a high-definition Charge-Coupled Device (CCD) sensor, and analyzed and modified the standard-format video driver. After optimization, the video driver could be used in RAW capture mode, and applications were developed on a multilevel buffer management framework. Finally, the capture speed reaches at least 15 frames per second.
    Design and implementation of taxi anti-counterfeiting management system based on radio frequency identification technique
    DU Cheng-yang WEN Guang-jun LEI Bin-bin
    2012, 32(01):  284-287.  DOI: 10.3724/SP.J.1087.2012.00284
    Abstract ( )   PDF (559KB) ( )  
    References | Related Articles | Metrics
    Making comprehensive use of 2.45GHz active Radio Frequency Identification (RFID) technology, information processing, General Packet Radio Service (GPRS) wireless communication, Global Positioning System (GPS), mobile computing and network technology, this paper designed the software and hardware architecture of a taxi anti-counterfeiting management system, and developed 2.45GHz active tags and information terminals with the functions of identification, positioning, navigation and mobile communication. Based on an analysis of the main application models, the upper-layer application software of the system was also developed. The system was deployed and tested, and the results show that it works at ultra-low power with a peak current of only 2mA, transmits data in real time with a delay of less than 4 seconds, and the RFID terminal has an identification range of about 110m and can read no fewer than 150 tags at the same time.
    RFID anti-collision algorithm based on novel jumping and dynamic searching
    FENG Na PAN Wei-jie LI Shao-bo YANG Guan-ci
    2012, 32(01):  288-291.  DOI: 10.3724/SP.J.1087.2012.00288
    Abstract ( )   PDF (636KB) ( )  
    References | Related Articles | Metrics
    The paper briefly introduced the merits and shortcomings of existing anti-collision algorithms. Based on the idea of the Jumping and Dynamic Searching (JDS) algorithm, a Novel JDS (NJDS) anti-collision algorithm for tags was proposed. The algorithm introduced a stack into its new back-and-forth jumping search strategy to reduce the number of collision slots and avoid idle slots. In the reader's requests, it adopted a dynamic transmission and variable-length adjustment strategy, and used the known information remembered from the tags' feedback to identify the unknown data bits of the tags, which reduced the number of reader queries and the amount of data transmitted in the system. The analysis of the simulation results indicates that the proposed algorithm performs significantly better than existing anti-collision algorithms: the transmission is greatly reduced and the system throughput is increased significantly.
    Cataplasm uniformity detection system based on fuzzy pattern recognition
    CAI Gui-fang SU Han-song
    2012, 32(01):  292-294.  DOI: 10.3724/SP.J.1087.2012.00292
    Abstract ( )   PDF (501KB) ( )  
    References | Related Articles | Metrics
    To detect the uniformity of cataplasm, a method based on fuzzy pattern recognition was proposed. Considering the spatial and temporal correlation of pixels and the fuzziness of feature boundaries, fuzzy theory was introduced and a fuzzy algorithm was used to recognize and classify pixel values. An Altera CycloneⅡ Field-Programmable Gate Array (FPGA) was chosen, and the modeling and implementation were carried out in Verilog HDL; the detection system passed simulation and verification. In the online detection system, the uniformity detection of cataplasm was completed after analyzing, processing and recognizing the digital video data. According to the statistical results, the accuracy of fuzzy pattern recognition on digital image signals reaches 95%. Experiments and online detection verify the feasibility of fuzzy pattern recognition and the reliability of the quality detection system.
    Dynamic initialization reset algorithm for particle filtering based on kernel density
    BAI Jian-feng NAN Jian-guo WU Meng ZHA Xiang
    2012, 32(01):  295-298.  DOI: 10.3724/SP.J.1087.2012.00295
    Abstract ( )   PDF (600KB) ( )  
    References | Related Articles | Metrics
    It has been found that the accuracy of particle filtering becomes much lower when maneuvering target tracking runs for a long time, because excessive resampling rapidly destroys the diversity of the sampled particles, so that the track estimated by the particle filter deviates widely from the true one. Through research on the distribution of the sampled particles, a new algorithm was proposed: a detection threshold was set to judge whether particle impoverishment was severe, and when it was, the particles in the state space were reset so that the new particles could carry more distribution information. The new algorithm shows a high capability in simulations of 2-D maneuvering target tracking.
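    The sketch below is a minimal 1-D particle filter that monitors diversity with the effective sample size and, when it drops below a threshold, resets the particles by resampling and jittering them with a kernel-density-style bandwidth. It only illustrates the reset idea; the paper's detection criterion, kernel construction and 2-D tracking scenario are not reproduced, and all noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 500, 60
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)
true_state = 0.0

for t in range(steps):
    true_state += 1.0 + rng.normal(0, 0.3)                  # target motion
    z = true_state + rng.normal(0, 0.5)                     # noisy measurement
    particles += 1.0 + rng.normal(0, 0.3, N)                # propagate particles
    weights *= np.exp(-0.5 * ((z - particles) / 0.5) ** 2)  # measurement update
    weights += 1e-300                                       # guard against underflow
    weights /= weights.sum()
    n_eff = 1.0 / np.sum(weights ** 2)                      # effective sample size
    if n_eff < 0.5 * N:                                     # impoverishment detected
        # Kernel-density style reset: resample, then jitter with a data-driven bandwidth.
        idx = rng.choice(N, size=N, p=weights)
        bandwidth = 1.06 * np.std(particles[idx]) * N ** (-0.2)   # Silverman's rule
        particles = particles[idx] + rng.normal(0, bandwidth + 1e-6, N)
        weights = np.full(N, 1.0 / N)

estimate = np.sum(weights * particles)
print("true state:", round(true_state, 2), "estimate:", round(estimate, 2))
```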