
Table of Contents

    01 July 2014, Volume 34 Issue 7
    High-speed algorithm for density of parallel tasks in cloud computing load testing
    BAI Yu GUO Xiane
    2014, 34(7):  1839-1842.  DOI: 10.11772/j.issn.1001-9081.2014.07.1839
    Abstract | PDF (731KB)

    To solve the low efficiency of existing algorithms for calculating the density of parallel tasks over the data collected during cloud computing load testing, a high-speed algorithm for the density of parallel tasks was proposed, based on the idea of trading space for time and the method of mathematical analysis. Its time complexity is O(n lb n) and its space complexity is O(n). The experimental results show that, compared with the OpenSTA algorithm of the same time complexity, efficiency increases by a factor of about 6 to 8. When multiple groups of parallel tasks have the same density, the algorithm obtains the maximum duration, which accurately reflects the heaviest-load situation. The algorithm can be applied to obtain real reference data for the design of cloud computing load balancing algorithms.
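    The abstract does not give the algorithm's details; as a hedged illustration, the standard event-sweep technique matches the stated O(n lb n) time and O(n) space bounds and can also report the longest duration at the peak density (function and variable names below are hypothetical, not the paper's):

```python
from typing import List, Tuple

def peak_density(tasks: List[Tuple[float, float]]) -> Tuple[int, float]:
    """Return (peak number of concurrent tasks, longest duration at that peak).

    Sorting the 2n start/end events costs O(n lb n); both sweeps are O(n).
    """
    events = []
    for start, end in tasks:
        events.append((start, 1))    # a task begins
        events.append((end, -1))     # a task ends (ties: ends sort first)
    events.sort()
    # Pass 1: find the peak concurrency.
    density = peak = 0
    for _, delta in events:
        density += delta
        peak = max(peak, density)
    # Pass 2: longest contiguous span during which density stays at the peak.
    density = 0
    longest = 0.0
    span_start = None
    for t, delta in events:
        density += delta
        if density == peak and span_start is None:
            span_start = t
        elif density < peak and span_start is not None:
            longest = max(longest, t - span_start)
            span_start = None
    return peak, longest
```

Returning the longest duration at the peak mirrors the "maximum duration" result mentioned in the abstract, which identifies the heaviest-load situation.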

    Service performance analysis of cloud computing center based on M/M/n/n+r queuing model
    HE Huaiwen FU Yu YANG Yihong XIAO Tao
    2014, 34(7):  1843-1847.  DOI: 10.11772/j.issn.1001-9081.2014.07.1843
    Abstract | PDF (634KB)

    Since it is necessary to evaluate and analyze the service performance of a cloud computing center to guarantee Quality of Service (QoS) and avoid violating the Service Level Agreement (SLA), an approximate analysis model based on M/M/n/n+r queuing theory was proposed for the cloud computing center. By solving this model, the probability distribution function of the response time and other QoS indicators were obtained; meanwhile, the relationships among the number of servers, the queue buffer size, the response time, the blocking probability and the probability of immediate service were revealed and verified by simulation. The experimental results indicate that improving the server service rate is better than increasing the number of servers for improving service performance.
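    For readers unfamiliar with the M/M/n/n+r model, its steady-state blocking probability follows directly from the birth-death balance equations; a minimal sketch (not the paper's code; parameter names are mine):

```python
from math import factorial

def mmnnr_blocking(lam: float, mu: float, n: int, r: int) -> float:
    """Blocking probability of an M/M/n/n+r queue.

    lam: arrival rate, mu: per-server service rate,
    n: number of servers, r: buffer size.
    Steady state: p_k = p0 * a^k / k!  for k <= n,
                  p_k = p_n * rho^(k-n) for n < k <= n+r,
    with a = lam/mu and rho = a/n; blocking probability is p_{n+r}.
    """
    a = lam / mu                # offered load
    rho = a / n                 # per-server utilization
    p = [a**k / factorial(k) for k in range(n + 1)]          # unnormalized
    p += [p[n] * rho**(k - n) for k in range(n + 1, n + r + 1)]
    return p[-1] / sum(p)
```

With r = 0 the formula reduces to the classical Erlang B loss probability, which gives a quick sanity check.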

    Cloud resource game allocation based on cooperation
    ZHANG Xiaoqing YUE Qiang
    2014, 34(7):  1848-1851.  DOI: 10.11772/j.issn.1001-9081.2014.07.1848
    Abstract | PDF (698KB)

    Considering the heterogeneity of user requirements and the competition in clouds, a cooperative resource allocation game strategy was presented. The cooperative game model of resource allocation was established, and the utility function and the evaluation function of users were defined. Meanwhile, it was proved that the cooperative game has a unique Nash equilibrium in the utility function, and how user coalitions affect the characteristic function and the overall utility was discussed. The experimental results show that, under this cooperative game strategy, individual users in a coalition gain more utility, and a Pareto improvement with convergence is achieved by building a coalition of multiple users.

    Parallel algorithm of polygon topology validation for simple feature model
    REN Yibin CHEN Zhenjie LI Feixue ZHOU Chen YANG Liyun
    2014, 34(7):  1852-1856.  DOI: 10.11772/j.issn.1001-9081.2014.07.1852
    Abstract | PDF (789KB)

    Parallel computation was applied to validating the topology of polygons stored in the simple feature model. This paper designed and implemented a parallel algorithm for this task. The algorithm modified the master-slave strategy according to the characteristics of topology validation and generated threads in the master processor to implement task parallelism, so that the running time of computing and writing topology errors was hidden. MPI and PThread were used to combine processes with threads. The land use data of five cities in Jiangsu, China, was used to evaluate the performance of the algorithm. The tests show that the parallel algorithm validates the topology of massive polygons stored in the simple feature model correctly and efficiently; compared with the master-slave strategy, its speedup increases by 20%.

    Parallel solving shortest common superstring using genetic algorithm and ant colony optimization
    WU Shigang ZHONG Cheng
    2014, 34(7):  1857-1861.  DOI: 10.11772/j.issn.1001-9081.2014.07.1857
    Abstract | PDF (949KB)

    According to the capacity of multi-level caches, the population individuals and ant data in CPU main memory were assigned to the L3, L2 and L1 caches to reduce the data transfer overhead among caches during parallel computing. Asynchronous and incomplete transmission was performed between CPU and GPU, and multiple streams were executed asynchronously by multiple GPU kernel functions. The number of threads per GPU block was set to a multiple of 16, and GPU shared memory was divided into banks in multiples of 32. GPU constant memory was used to store frequently read read-only parameters such as the crossover probability and mutation probability, and large read-only data structures such as the string set and the overlap matrix were bound to GPU texture memory. On this basis, a computation-, cache- and communication-efficient parallel algorithm in which CPU and GPU cooperate to solve the shortest common superstring problem was designed and implemented. The experimental results on instances of several sizes show that the proposed CPU-GPU parallel algorithm is over 70 times faster than the sequential algorithm.

    Multivariate linear regression forecasting model based on MapReduce
    DAI Liang XU Hongke CHEN Ting QIAN Chao LIANG Dianpeng
    2014, 34(7):  1862-1866.  DOI: 10.11772/j.issn.1001-9081.2014.07.1862
    Abstract | PDF (730KB)

    Considering that the traditional multivariate linear regression method has long processing time and limited memory, a parallel multivariate linear regression forecasting model based on MapReduce was designed for time-series sample data. The model was composed of three MapReduce processes, which were used respectively to solve the eigenvectors and standard orthogonal vectors of the cross-product matrix composed of historical data, to forecast the future parameters of the eigenvalue and eigenvector matrices, and to estimate the regression parameters at the next moment. Experiments were designed and implemented to validate the effectiveness of the proposed parallel multivariate linear regression forecasting model. The experimental results show that the multivariate linear regression prediction model based on MapReduce has good speedup and scaleup, and is suitable for the analysis and forecasting of big data.
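    The three MapReduce passes described are specific to the paper; as a generic sketch of the underlying pattern, the sufficient statistics of a linear regression are associative, so each map task can emit per-partition partial sums and the reduce task adds them before a single closed-form solve (simple one-variable case; all names hypothetical):

```python
from functools import reduce
from typing import List, Tuple

Partition = List[Tuple[float, float]]  # (x, y) samples in one data split

def map_partition(part: Partition) -> tuple:
    """Map step: per-partition sufficient statistics for y = b0 + b1*x."""
    n = len(part)
    sx = sum(x for x, _ in part)
    sy = sum(y for _, y in part)
    sxx = sum(x * x for x, _ in part)
    sxy = sum(x * y for x, y in part)
    return (n, sx, sy, sxx, sxy)

def reduce_stats(s: tuple, t: tuple) -> tuple:
    """Reduce step: sufficient statistics simply add across partitions."""
    return tuple(a + b for a, b in zip(s, t))

def fit(partitions: List[Partition]) -> Tuple[float, float]:
    """Combine all partition statistics, then solve the normal equations."""
    n, sx, sy, sxx, sxy = reduce(reduce_stats, map(map_partition, partitions))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return b0, b1
```

Because the reduce step is just elementwise addition, partitions can be processed on separate nodes and combined in any order, which is what gives the MapReduce formulation its speedup and scaleup.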

    Anonymity-preserving remote user password authentication with key agreement scheme based on smart cards
    LIU Sha ZHU Shuhua
    2014, 34(7):  1867-1870.  DOI: 10.11772/j.issn.1001-9081.2014.07.1867
    Abstract | PDF (689KB)

    This paper first analyzed some security problems in Li-Niu's (LI X, NIU J W, KHAN M K, et al. An enhanced smart card based remote user password authentication scheme[J]. Journal of Network and Computer Applications, 2013, 36(5): 1365-1371.) enhanced smart-card-based remote user password authentication scheme, and then proposed a novel smart-card-based scheme. In the new scheme, a self-verified timestamp technique was combined with symmetric encryption to solve the difficulty of implementing clock synchronization in most typical smart-card-based schemes. Compared with Li-Niu's scheme, the new scheme not only provides user anonymity, but also resists impersonation attacks and privileged insider attacks. The scheme is more secure and efficient for complicated network environments.

    Review on lightweight cryptography suitable for constrained devices
    YANG Wei WAN Wunan CHEN Yun ZHANG Yantao
    2014, 34(7):  1871-1877.  DOI: 10.11772/j.issn.1001-9081.2014.07.1871
    Abstract | PDF (1113KB)

    With the rapid development of the Internet of Things (IoT), the security of constrained devices faces serious challenges. As the main security measure for constrained devices, LightWeight Cryptography (LWC) is receiving more and more attention from researchers. Recent advances in lightweight cryptography, including design strategy, security and performance, were reviewed. Firstly, design strategies and the key issues in design were elaborated, and the principles and implementation mechanisms of some typical and commonly used lightweight ciphers were analyzed and discussed. Then the commonly used cryptanalysis methods were summarized, the threat of side-channel attacks was emphasized, and the issues to note when adding resistance mechanisms were pointed out. Furthermore, the existing lightweight ciphers were compared and analyzed in detail from the perspective of the important performance indicators of lightweight cryptography, and the suitable environments for hardware-oriented and software-oriented lightweight ciphers were given. Finally, some unresolved difficult issues and possible future directions of lightweight cryptography research were pointed out. Considering the characteristics of lightweight cryptography and its application environments, the comprehensive assessment of security and performance will be an issue worth in-depth research in the future.

    Overview of public-key encryption with keyword search
    YANG Jian YANG Dengqi WANG Jian
    2014, 34(7):  1878-1883.  DOI: 10.11772/j.issn.1001-9081.2014.07.1878
    Abstract | PDF (1139KB)

    With the intensive research and application of cloud computing, the security and privacy protection of data stored on remote servers has become a hot issue in business and academia. Traditional cryptographic methods can partly solve these problems, but they also bring many obstacles to remote users for data query and usage. Searchable encryption is an option to resolve this contradiction, as it enables users to search encrypted data stored on a remote server, and it has become a research hotspot in the field of information security. This paper gave an overview of public-key searchable encryption, focusing especially on the origin, background and recent research results of Public key Encryption with Keyword Search (PEKS). The formal definition and security model definitions of PEKS were given, followed by a detailed discussion of other aspects of this domain, such as the secure-channel dependency problem of PEKS and improvements of query functionality. Finally, future trends, hotspots and open problems of this novel research area were put forward.

    Radio frequency identification group proof protocol based on secret key-sharing tree
    YANG Chao ZHANG Hongqi YANG Zhi SHAN Dibin
    2014, 34(7):  1884-1889.  DOI: 10.11772/j.issn.1001-9081.2014.07.1884
    Abstract | PDF (911KB)

    Aiming at the problem that existing RFID (Radio Frequency Identification) group proof protocols are inefficient and vulnerable to attacks such as replay and tracking, this paper proposed a new group proof protocol based on a secret key-sharing tree. The protocol designed a new group-proofing key construction based on a secret sharing scheme: the group-proofing key was divided repeatedly into sub-keys to create a key tree. This method increases the complexity of the key construction, the difficulty for attackers attempting to recover the group key, and the security of the tags' group proof. The reader interacts with each tag only once to authenticate its validity and collect the group-proof information, which greatly improves proof efficiency. Compared with existing protocols such as Yoking-Proofs, ECC-based and Tree-based ones, this protocol has better security and higher efficiency.
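    The repeated division of the group-proofing key into sub-keys can be illustrated with the simplest n-out-of-n secret sharing, XOR splitting; this is a stand-in for the paper's tree construction, not its actual scheme:

```python
import os

def split_key(key: bytes, n: int) -> list:
    """Split `key` into n shares; all n shares are needed to recover it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]  # n-1 random shares
    last = key
    for s in shares:                 # last share = key XOR all random shares
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list) -> bytes:
    """XOR all shares back together to recover the key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key
```

Any subset of fewer than n shares is statistically independent of the key, which is why an attacker who compromises some tags still cannot recover the group-proofing key.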

    Strongly blind and efficient certificateless blind signature scheme
    GONG Guochang SHI Zhihan
    2014, 34(7):  1890-1892.  DOI: 10.11772/j.issn.1001-9081.2014.07.1890
    Abstract | PDF (563KB)

    Concerning that existing certificateless blind signature schemes are neither strongly blind nor efficient, a strongly blind and efficient scheme was proposed. In strict accordance with the definition of certificateless blind signature, the scheme was divided into four stages: setup, extract, sign and verify. Meanwhile, the scheme was based on the Elliptic Curve Discrete Logarithm Problem (ECDLP) and introduced three random blinding parameters. The performance analysis shows that the proposed scheme is secure in the random oracle model, and that it is efficient and strongly blind.

    Efficient and secure ID-based partially blind signature scheme
    YIN Heng JIANG Chaohui
    2014, 34(7):  1893-1896.  DOI: 10.11772/j.issn.1001-9081.2014.07.1893
    Abstract | PDF (611KB)

    Partially blind signature allows a signer to explicitly embed pre-agreed common information into a blind signature without losing the blindness property, overcoming the defects of completely blind signatures and restrictive blind signatures. For the low efficiency and security problems common in present ID-based partially blind signature schemes, a new efficient and secure ID-based partially blind signature scheme was proposed. Utilizing the Chosen-Target Accompanied Computational Diffie-Hellman (CT-ACDH) assumption and effective precomputation, the scheme not only achieves unforgeability against adaptive chosen-message and chosen-identity attacks in the random oracle model, but also reduces the overall computational complexity. Compared with the existing ID-based partially blind signature schemes in the random oracle model, the proposed scheme is the most efficient; compared with Chow's scheme (CHOW S, HUI L, YIU S. Two improved partially blind signature schemes from bilinear pairings[C]// Proceedings of ACISP'05. Berlin: Springer-Verlag, 2005: 316-328.) and He's scheme (HE J, SUN F, QI C. Cryptanalysis and improvement of ID-based partially blind signature scheme[J]. Journal of Computer Applications, 2013, 33(3): 762-765.), the computational efficiency is increased by about 64.1% and 13.2% respectively. Hence, the scheme can enhance the efficiency and safety of electronic voting, electronic cash systems, etc.

    Privacy-preserving access control for mobile cloud services
    JI Zhengbo BAI Guangwei SHEN Hang ZHANG Peng
    2014, 34(7):  1897-1901.  DOI: 10.11772/j.issn.1001-9081.2014.07.1897
    Abstract | PDF (800KB)

    In response to the issues of security and privacy preservation in mobile cloud computing, an anonymity mechanism for cloud storage was proposed. Zero-knowledge proofs and digital signature technology were introduced into anonymous registration to simplify the steps of key authentication; building upon this, a third party was used to bind users to their identity certificates, which prevents legitimate cloud services from being exploited for malicious purposes. For data sharing, the focus is on how to exploit the account parameters of sharers so as to solve the security issues caused by secret key loss. Theoretical analysis shows that the proposed identity certificate and shared key generation schemes contribute to users' privacy.

    Image encryption algorithm based on maze permutation and Logistic chaotic map
    YANG Lu SHAO Liping GUO Yi SHI Jun
    2014, 34(7):  1902-1908.  DOI: 10.11772/j.issn.1001-9081.2014.07.1902
    Abstract | PDF (1243KB)

    Conventional permutation-and-confusion based image encryption algorithms usually suffer from inefficient permutation and difficulty in resisting known- or chosen-plaintext attacks. To solve these problems, an image encryption algorithm based on maze permutation and the Logistic chaotic map was proposed, where Depth First Search (DFS) maze permutation was used to produce permutations efficiently. To resist known- or chosen-plaintext attacks, the Message Digest Algorithm 5 (MD5) digest of the plaintext image was bound with the user key to generate the maze starting coordinates and the parameters and initial values of the Logistic chaotic maps, which drive the Logistic maps to generate random numbers. These random numbers determine the maze node probing directions and participate in image confusion, so that all encryption stages are tightly coupled with the plaintext image. Experiments show the proposed algorithm has good encryption quality and can resist known- or chosen-plaintext attacks with high security.
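    A hedged sketch of the binding idea described above: deriving the Logistic-map seed and parameter from the MD5 digest of the plaintext image combined with the user key, then iterating the map to produce a keystream. The exact derivation below is illustrative, not the paper's:

```python
import hashlib

def logistic_stream(image_bytes: bytes, user_key: bytes, length: int) -> bytes:
    """Keystream bound to both the plaintext image and the user key.

    x0 in (0,1) and mu in (3.57, 4) (the chaotic region of x -> mu*x*(1-x))
    are derived from MD5(image || key), so changing either input changes
    the whole keystream -- the property that defeats chosen-plaintext attacks.
    """
    digest = hashlib.md5(image_bytes + user_key).digest()
    x = (int.from_bytes(digest[:8], 'big') % 10**8) / 10**8 or 0.5
    mu = 3.57 + 0.43 * ((int.from_bytes(digest[8:], 'big') % 10**8) / 10**8)
    for _ in range(100):              # discard transient iterations
        x = mu * x * (1 - x)
    out = bytearray()
    for _ in range(length):
        x = mu * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)
```

In the actual algorithm these random numbers would also steer the DFS maze probing directions; here only the keystream derivation is shown.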

    Conformance verification method for e-government network based on graph approximate matching
    ZENG Guang CHEN Xingyuan DU Xuehui XIA Chuntao
    2014, 34(7):  1909-1914.  DOI: 10.11772/j.issn.1001-9081.2014.07.1909
    Abstract | PDF (1021KB)

    In view of the difficulty of verifying the conformance of e-government network structures, a conformance verification method for e-government networks based on approximate graph matching was proposed. The method firstly abstracted the graph model of the e-government network, then used the modular characteristics of the network structure and the k-hop neighboring relationships of vertices to realize extensible approximate graph matching, which obtained all similar structures between the two graphs. Then an improved graph similarity measure function was proposed by introducing a node importance factor and a path distance attenuation factor, so as to make the conformity assessment results more accurate. The experimental results show that the method can accurately evaluate the conformance degree of an e-government network structure, and reflect at a fine granularity the similarities and differences between network structures, including all kinds of violations in network topology and system deployment.

    Generalized hybrid dislocated function projective synchronization between different-order chaotic systems and its application to secure communication
    LI Rui ZHANG Guangjun ZHU Tao WANG Xiangbo WANG Yu
    2014, 34(7):  1915-1918.  DOI: 10.11772/j.issn.1001-9081.2014.07.1915
    Abstract | PDF (684KB)

    To improve the security of secure communication, a new Generalized Hybrid Dislocated Function Projective Synchronization (GHDFPS), based on generalized hybrid dislocated projective synchronization and function projective synchronization, was studied by means of Lyapunov stability theory and an adaptive active control method. Control methods for GHDFPS between two different-order chaotic systems with an uncertain parameter, together with parameter identification, were presented, and the application of the novel synchronization to secure communication was analyzed. Through strict mathematical proof and numerical simulation, GHDFPS between two different-order chaotic systems with an uncertain parameter was achieved and the uncertain parameter was identified. Owing to the variety of the function scaling factor matrix, GHDFPS increases the security of secure communication. Moreover, this synchronization form and control method were applied to secure communication via chaotic masking modulation, and the information signals can be recovered and validated.

    New random number generator based on decimal sequence
    BAO Long LIU Hongli
    2014, 34(7):  1919-1921.  DOI: 10.11772/j.issn.1001-9081.2014.07.1919
    Abstract | PDF (568KB)

    To solve the high computational and storage cost of existing random number generators, a new random number generator was proposed. It generates arbitrary new random sequences of longer length by introducing a random variable into the generation process. It has four advantages: simple structure, low computational cost, low storage cost and excellent chaotic properties. Besides, it solves the limited-length problem of the decimal sequence. The auto-correlation, cross-correlation and probability distribution analyses demonstrate that the new decimal sequence outperforms the existing one in randomness. These properties make the new random number generator more suitable for applications in Wireless Sensor Networks (WSN) than existing complex random number generators, such as chaos-based and hardware-based ones, considering the limited computational ability, storage and energy of sensor nodes.
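    The "decimal sequence" here most likely refers to the classical d-sequence construction (binary digits of the expansion of 1/q), which the paper extends with a random variable; a sketch of that baseline generator, under that assumption:

```python
def d_sequence(q: int, length: int) -> list:
    """Binary d-sequence a_i = (2^i mod q) mod 2 for an odd prime q.

    The classical construction: the sequence is periodic with period equal
    to the multiplicative order of 2 mod q, which is the length limitation
    the abstract says the new generator overcomes.
    """
    seq = []
    state = 1
    for _ in range(length):
        state = (state * 2) % q   # running value of 2^i mod q
        seq.append(state % 2)     # least significant bit is the output digit
    return seq
```

For q = 7 the order of 2 is 3, so the output repeats every three bits; choosing q with a large order of 2 lengthens the period but cannot remove the limit entirely.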

    Data-leakage detection scheme based on fingerprint and Bloom filters
    HUANG Weiwen LUO Jia
    2014, 34(7):  1922-1928.  DOI: 10.11772/j.issn.1001-9081.2014.07.1922
    Abstract | PDF (1131KB)

    The existing Data-Leakage Prevention (DLP) solutions are based on generic keyword search over outgoing data, and hence severely lack the ability to control data flow at a fine granularity with low false-positive probability. To address this, a DLP architecture based on white-listing was first designed, which used a white-list to provide strong security for data transmission. On this basis, a data leakage detection algorithm combining document fingerprinting with Bloom filters was proposed. The algorithm computed the optimal fingerprint locations by dynamic programming to minimize the memory overhead and enable a high-speed implementation. The simulation results show that the proposed algorithm checks the fingerprints of a large number of documents at very low cost: for 1TB of documents, the proposed solution only requires 340MB of memory to achieve a worst-case expected detection lag (i.e. leakage length) of 1000 bytes.
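    A hedged sketch of the fingerprint-plus-Bloom-filter idea: fingerprints of sliding windows of a sensitive document are inserted into a Bloom filter, and outgoing data is flagged when any of its window fingerprints hits the filter. The window size, hashing choices and class names below are illustrative, not the paper's (in particular, the paper selects fingerprint locations by dynamic programming rather than fingerprinting every window):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over byte strings (m bits, k hash functions)."""
    def __init__(self, m_bits: int, k_hashes: int):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):                     # k salted hashes
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], 'big') % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

def shingle_fingerprints(text: bytes, w: int = 8) -> list:
    """Fingerprint every w-byte sliding window of a document."""
    return [text[i:i + w] for i in range(len(text) - w + 1)]

def leaked(outgoing: bytes, bf: BloomFilter, w: int = 8) -> bool:
    """Flag outgoing data if any of its window fingerprints hits the filter."""
    return any(f in bf for f in shingle_fingerprints(outgoing, w))
```

The compactness of the filter (bits rather than stored fingerprints) is what makes a small memory footprint for terabytes of documents plausible; the false-positive rate is tuned through the bit-array size and the number of hashes.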

    Cryptographic procedure analysis based on cryptographic library function
    ZHANG Yanwen YIN Qing LI Zhenglian SHU Hui CHANG Rui
    2014, 34(7):  1929-1935.  DOI: 10.11772/j.issn.1001-9081.2014.07.1929
    Abstract | PDF (1118KB)

    Since it is hard to analyze cryptographic procedures by property scanning or debugging, owing to the variety of cryptographic algorithms and their different implementations, a method based on library function prototype analysis and calling-graph building was proposed. Library function prototype analysis extracts cryptographic algorithm knowledge and library framework knowledge to form a knowledge base. Calling-graph building constructs a calling-graph that reflects the function calling order according to the parameter values of the functions. Finally, cryptographic algorithm knowledge and library framework knowledge were extracted from the calling-graph. The method identifies common cryptographic algorithms with almost 100% accuracy; it can not only recover the encrypted data, the key and the cryptographic mode, but also help to analyze the relationship between two or more cryptographic algorithms processing the same data. The method can be used to analyze Trojans and worms, and to test whether a library is used correctly.

    Network interconnection model based on trusted computing
    LIU Yibo YIN Xiaochuan GAO Peiyong ZHANG Yibo
    2014, 34(7):  1936-1940.  DOI: 10.11772/j.issn.1001-9081.2014.07.1936
    Abstract | PDF (767KB)

    The problem of intranet security was born almost together with network interconnection, and is especially pressing now that the demand for network interconnection is booming throughout the world. Traditional technology cannot achieve both security and connectivity well. In view of this, a method based on trusted computing technology was put forward. The basic idea is to build a trusted model of the network interconnection system, whose core is the verification of the identity and behavior of accessors. First, the IBA algorithm was reformed to design a cryptographic protocol between the authentication system and accessors, and its effectiveness was analyzed with respect to both function and accuracy. Second, an evaluation tree model was established through the analysis of sustained entity behavior, so that the security situation of access terminals can be evaluated. Finally, the evaluation method was verified through an experiment.

    Network and communications
    Single-sink scheduling problem in wireless sensor networks
    ZHANG Meiping GU Yu XU Li
    2014, 34(7):  1941-1946.  DOI: 10.11772/j.issn.1001-9081.2014.07.1941
    Abstract | PDF (1055KB)

    This article focused on the mobile sink scheduling problem in Wireless Sensor Networks (WSN). A mobile single-sink scheduling algorithm based on Linear Programming (LP) was proposed. Firstly, the problem was mathematically modeled and formulated in the time domain, and then re-formulated from the time domain to the space domain to reduce the complexity. On this basis, a polynomial-time optimal algorithm based on linear programming was proposed. The simulations confirm the efficiency of the algorithm, and the results show that it can significantly improve the network lifetime of wireless sensor networks.

    Variable-length slot based MAC for underwater wireless sensor networks
    TIAN Zhihui JIN Zhigang WANG Ying
    2014, 34(7):  1947-1950.  DOI: 10.11772/j.issn.1001-9081.2014.07.1947
    Abstract | PDF (724KB)

    A motion model of underwater sensor nodes was analyzed, and a new Medium Access Control (MAC) protocol for underwater mobile sensor networks based on variable-length slots, VS FAMA, was proposed. In VS FAMA, the slot length is adaptively adjusted according to the periodically measured sensor locations, being positively correlated with the distance between nodes. The new protocol makes better use of the time-slot resource in underwater sensor networks. NS-2 simulation results show that VS FAMA achieves about 15% higher network goodput under mobility.

    Relay selection and power allocation optimization algorithm based on long-delay channel in underwater wireless sensor networks
    LIU Zixin JIN Zhigang SHU Yishan LI Yun
    2014, 34(7):  1951-1955.  DOI: 10.11772/j.issn.1001-9081.2014.07.1951
    Abstract | PDF (648KB)

    To deal with channel fading in Underwater Wireless Sensor Networks (UWSN), which changes randomly in the time, space and frequency domains, an underwater cooperative communication model with relays was proposed to improve the reliability of the communication system and obtain diversity gain. Based on the new model, a relay selection algorithm for UWSN was proposed, which used a new evaluation criterion to select the best relay node by considering two indicators: channel gain and long propagation delay. With the selected relay node, the source node and relay nodes can adjust their transmission power by a power allocation algorithm based on the principle of minimizing the bit error rate. In a typical scenario, compared with the traditional relay selection algorithm and equal power allocation, the new algorithm reduces the delay by 16.7% and improves the bit error rate performance by 1.81dB.

    Traffic balancing routing algorithm for wireless mesh networks based on Grover search
    LIU Yongguang
    2014, 34(7):  1956-1959.  DOI: 10.11772/j.issn.1001-9081.2014.07.1956
    Abstract | PDF (547KB)

    In Wireless Mesh Networks (WMN), users access the Internet through mesh gateways. This architecture is prone to traffic imbalance among mesh routers located at different places, which makes some mesh routers become bottlenecks and hence affects network performance and users' Quality of Service (QoS). To solve this problem, a traffic balancing routing algorithm based on the Grover quantum search algorithm was presented. The algorithm utilized the parallelism of quantum computation: the operation matrix was constructed according to the traffic balancing function model, and the traffic-balanced paths were obtained by Grover iteration. Simulations show that the paths selected by the algorithm can balance the traffic of a WMN effectively and maximize the minimum bandwidth obtained by each user. The execution efficiency of the algorithm is also better than that of similar algorithms.

    Dynamic information spreading model based on online social network
    MENG Zaiqiao FU Xiufen
    2014, 34(7):  1960-1963.  DOI: 10.11772/j.issn.1001-9081.2014.07.1960
    Abstract | PDF (643KB)

    Traditional spreading models have difficulty describing the complex activity patterns and the topological differences between nodes in online social networks, and their contact-based spreader annihilation mechanisms do not fit reality. To fill the gap between spreading simulations of theoretical models and the realities of information spreading, a new dynamic information spreading model (D-SIR) based on online social networks was proposed. Taking practical factors of the information dissemination process into consideration, the model introduced a time-delay annihilation mechanism in which spreaders change to stiflers spontaneously, and a dynamic node authority and resistance mechanism to apply to inhomogeneous networks; it also considered the effect of receiving reinforced signals and social reinforcement. Simulations with varying parameters on a real-world online social network constructed from crawled Sina microblog data verify that the D-SIR model can reflect the real spreading situation in online social networks. Compared with traditional spreading models, the new model is more flexible and extensible.

    Fault recovery mechanism for reconfigurable service carrying networks based on equivalent resource
    XING Chiqiang LAN Julong HU Yuxiang
    2014, 34(7):  1964-1968.  DOI: 10.11772/j.issn.1001-9081.2014.07.1964
    Abstract | PDF (899KB)

    Aiming at the low recovery efficiency and prolonged service interruption of traditional re-mapping failure recovery algorithms, a Fault Recovery Algorithm based on Equivalent Resource (FRA-ER) was proposed. FRA-ER converts the recovery problem into the problem of finding equivalent resources, recovering all or part of the faulty Reconfigurable Service Carrying Networks (RSCN) at once. A Network Reconfiguration Algorithm (NRA) was also proposed to detect and regulate the RSCNs periodically, so as to optimize their architectures and reduce costs. The numerical results show that the proposed FRA-ER achieves a 15% reduction in recovery time compared with the conventional overall re-mapping algorithm and the fast recovery algorithm, while the NRA achieves a bandwidth reduction of 80 on average and improves the recovery success ratio by 10%.

    Energy efficiency and time efficiency based joint optimization scheme for green communication
    WU Pengyue JI Wei
    2014, 34(7):  1969-1973.  DOI: 10.11772/j.issn.1001-9081.2014.07.1969
    Abstract | PDF (728KB)

    Traditional power allocation schemes ignore channel estimation errors and circuit energy consumption. To solve this problem, an improved green joint optimization scheme was proposed. On the premise of guaranteeing the user's Quality of Service (QoS), relay selection and the power allocation of each relay were jointly optimized for energy efficiency and time efficiency, taking channel estimation errors and relay circuit energy consumption into consideration. Finally, closed-form solutions for the transmit power of the source node and relay nodes were obtained. The simulation results show that the proposed scheme achieves 30% higher energy efficiency than the traditional optimization scheme at high Signal-to-Noise Ratio (SNR), and comparable performance at low SNR.

    Improved time synchronization algorithm for time division long term evolution system
    TIAN Zengshan BO Chen YUAN Zheng-Wu
    2014, 34(7):  1974-1977.  DOI: 10.11772/j.issn.1001-9081.2014.07.1974
    Abstract ( )   PDF (715KB) ( )  
    References | Related Articles | Metrics

    To deal with the high computational complexity and poor anti-Carrier Frequency Offset (anti-CFO) performance of conventional time synchronization algorithms for Time Division Long Term Evolution (TD-LTE) systems, an improved algorithm exploiting the time-domain conjugate symmetry of the Secondary Synchronization Signal (SSS) was proposed. In this algorithm, the SSS location is estimated as the peak of the cross-correlation between the received signal and its time reversal, and by combining the SSS location with the detection of the cell group ID, the Cyclic Prefix (CP) type can also be determined. Analysis and simulation results demonstrate that the improved algorithm has low computational complexity, good anti-CFO performance and better reliability than common methods, and that it also performs well in multi-path channels. Its application to a third-party TD-LTE UE detection system proves the algorithm effective and feasible.
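The symmetry test at the core of this approach can be sketched in a few lines (a toy illustration, not the paper's exact metric; the signal, offsets and normalization below are invented for the example):

```python
import numpy as np

def symmetry_metric(r, start, N):
    """How close the window r[start:start+N] is to conjugate symmetry:
    for w[n] = conj(w[N-1-n]), sum(w * w[::-1]) equals the window energy."""
    w = r[start:start + N]
    return np.abs(np.sum(w * w[::-1])) / (np.sum(np.abs(w) ** 2) + 1e-12)

def locate_symmetric_block(r, N):
    """Return the offset whose length-N window is most conjugate-symmetric."""
    scores = [symmetry_metric(r, d, N) for d in range(len(r) - N + 1)]
    return int(np.argmax(scores))

# Embed a conjugate-symmetric block of length 32 at offset 40 in weak noise.
rng = np.random.default_rng(0)
sig = 0.1 * (rng.normal(size=128) + 1j * rng.normal(size=128))
half = rng.normal(size=16) + 1j * rng.normal(size=16)
sig[40:72] += np.concatenate([half, np.conj(half[::-1])])
print(locate_symmetric_block(sig, 32))
```

Because the metric needs no local reference sequence, a frequency offset rotates both factors of each product equally, which is the intuition behind the anti-CFO robustness claimed above.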

    Design of multi-carrier transceivers based on time domain improved discrete Fourier transform
    JI Xiang GUO Zhigang WANG Kai
    2014, 34(7):  1978-1982.  DOI: 10.11772/j.issn.1001-9081.2014.07.1978
    Abstract ( )   PDF (720KB) ( )  
    References | Related Articles | Metrics

    Concerning the power-complementary limitation caused by the time-reversed assumption on the prototype filter in the design of traditional DFT (Discrete Fourier Transform) modulated filter banks, a time-domain modified method was introduced to design DFT filter banks from the perspective of time-domain perfect reconstruction, and the designed filter banks were applied to filter-bank-based multi-carrier transceivers. The time-domain modified method relaxes the time-reversed assumption on the prototype filter: the filter banks at the receiver are the conjugate transpose of the filter banks at the transmitter. It solves for the receiver filter banks using the time-domain formulation of the perfect reconstruction property, which ensures perfect reconstruction while avoiding the power-complementary limitation in the prototype filter design. Compared with the traditional design method, the time-domain modified method increases the design freedom of the prototype filter, so suitable prototype filters can be obtained for various application environments without power-complementary restrictions. Moreover, multi-carrier transceivers based on the time-domain modified DFT filter banks achieve a better Symbol Error Ratio (SER) under QPSK (Quadrature Phase Shift Keying) modulation, in both the ideal channel and the 3GPP TS 25.104 pedestrian multipath channel with one-tap frequency-domain equalization.

    Peer-to-peer based Web service registration system
    LONG Yunjian HE Qian WANG Yong WANG Xiaofeng
    2014, 34(7):  1983-1987.  DOI: 10.11772/j.issn.1001-9081.2014.07.1983
    Abstract ( )   PDF (725KB) ( )  
    References | Related Articles | Metrics

    Since traditional centralized Web service registry systems suffer from performance bottlenecks and single points of failure, a Web service registry system based on structured Peer-to-Peer (P2P) overlay was designed and implemented. The registry system consists of six modules: configuration, scheduling and distribution, peer-to-peer communication, rank validation, JUDDI, and network resource monitoring. Pastry-based system scheduling and communication algorithms were proposed, and the corresponding Web service registration and discovery processes were designed. The system was tested and analyzed using SoapUI and LoadRunner. The experimental results show that it supports large-scale access with dynamic scalability; in multi-concurrency simulation experiments, the response speeds of Web service registration and discovery are doubled.

    Relationships retrospect algorithm on kinship network
    GUO Ruiqiang YAN Shaohui ZHAO Shuliang SHEN Yufeng
    2014, 34(7):  1988-1991.  DOI: 10.11772/j.issn.1001-9081.2014.07.1988
    Abstract ( )   PDF (652KB) ( )  
    References | Related Articles | Metrics

    A kinship network is made up of marriage and parent-child relationships, and searching for a specific relationship in a huge kinship network is very difficult. This paper proposed two algorithms that extend breadth-first search: radius-search and directional-search. The kinship network data were extracted from the Hebei province population database, containing about 4150000 vertexes and about 10880000 edges. The network stores bilateral relationships, which avoids some unnecessary backtracking. The experimental results show that the kinship retrospect algorithms can exactly locate specific persons through the network, while achieving high performance and maintaining high flexibility.
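The radius-search idea, breadth-first search cut off at a hop radius, can be sketched as follows (the tiny graph and the names are invented for illustration, not taken from the population database):

```python
from collections import deque

def radius_search(adj, start, radius):
    """Breadth-first search bounded by a hop radius.
    Returns {person: distance} for everyone within `radius` hops of `start`."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        if dist[u] == radius:        # stop expanding at the radius boundary
            continue
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Tiny kinship graph: edges are bilateral marriage or parent-child ties.
adj = {
    "grandfather": ["father"],
    "father": ["grandfather", "mother", "child"],
    "mother": ["father", "child"],
    "child": ["father", "mother"],
}
print(radius_search(adj, "child", 1))
```

Storing both directions of every tie, as the paper's network does, is what lets a single forward pass like this reach in-laws and ancestors without a separate backtracking phase.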

    Spatial keyword query based on collective object
    LIANG Yin DONG Yongquan
    2014, 34(7):  1992-1996.  DOI: 10.11772/j.issn.1001-9081.2014.07.1992
    Abstract ( )   PDF (740KB) ( )  
    References | Related Articles | Metrics

    A spatial keyword query retrieves a group of objects that collectively cover the query keywords with as few objects as possible, while being compact and close to the query location; in contrast, current query methods generally return only single spatial objects that each contain all the query keywords. To address this query, an approximate algorithm and an exact algorithm were introduced. The query problem was formally defined, and a new cost function describing the quality of the collective objects was defined and normalized. The approximate algorithm uses best-first search based on the IR-tree to prune the search space, while the exact algorithm uses breadth-first search based on the IR-tree to retrieve objects containing some of the query keywords, reducing the query processing cost. The experiments show that the approximate algorithm is far more efficient than the exact one, while still achieving very accurate results.

    Improved K-medoids clustering algorithm based on improved granular computing
    PAN Chu LUO Ke
    2014, 34(7):  1997-2000.  DOI: 10.11772/j.issn.1001-9081.2014.07.1997
    Abstract ( )   PDF (632KB) ( )  
    References | Related Articles | Metrics

    Due to the disadvantages of the traditional K-medoids clustering algorithm, such as sensitivity to the initial selection of centers, slow convergence and poor accuracy, a novel K-medoids algorithm based on improved Granular Computing (GrC), a granule iterative search strategy and a new fitness function was proposed. Using the granular computing idea, the algorithm selects K granules in high-density areas that lie far apart, takes their center points as the K initial cluster centers, and updates the K center points within candidate granules to reduce the number of iterations. Moreover, a new fitness function based on between-class distance and within-class distance was presented to improve clustering accuracy. Tested on several standard UCI data sets, the experimental results show that the new algorithm effectively reduces the number of iterations and improves clustering accuracy.
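A minimal sketch of the density-based, far-apart initialization step (the neighborhood radius `eps` and the counting density are illustrative assumptions, not the paper's exact granule construction):

```python
import numpy as np

def initial_medoids(X, k, eps):
    """Pick k initial medoids from high-density regions that lie far apart."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    density = (d < eps).sum(axis=1)                       # neighbours within eps
    order = np.argsort(-density)                          # densest points first
    chosen = [int(order[0])]
    for i in order[1:]:
        if len(chosen) == k:
            break
        if all(d[i, j] >= eps for j in chosen):           # keep medoids apart
            chosen.append(int(i))
    return chosen

X = np.array([[0, 0], [0.1, 0], [0, 0.1],     # dense cluster A
              [5, 5], [5.1, 5], [5, 5.1]])    # dense cluster B
print(initial_medoids(X, 2, eps=1.0))
```

Seeding one medoid per dense, well-separated region is what removes the sensitivity to random initialization described above; the subsequent medoid updates then only need to search within each candidate granule.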

    Detail-preserving histogram equalization based on scene classification
    HU Jing MA Xiaofeng SHENG Weixing HAN Yubing
    2014, 34(7):  2001-2004.  DOI: 10.11772/j.issn.1001-9081.2014.07.2001
    Abstract ( )   PDF (770KB) ( )  
    References | Related Articles | Metrics

    To address the gray-level swallowing and over-enhancement problems of traditional histogram equalization, an improved histogram equalization algorithm combining scene classification and detail preservation was proposed. In this algorithm, images are classified according to their histogram features, and the parameters of piecewise histogram equalization are optimized according to the scene class and the characteristics of the image histogram. The complexity of the improved algorithm is only O(L), where L is the number of gray levels (256 here). The improved algorithm requires little computation and solves the swallowing and over-enhancement problems of traditional histogram equalization. Results on a TI (Texas Instruments) DM648 platform show the algorithm can be used for real-time video image enhancement.
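The scene-aware piecewise variant is beyond a short example, but the O(L) lookup-table core that all histogram equalization shares can be sketched as follows (plain global equalization, for illustration only):

```python
import numpy as np

def equalize(img, L=256):
    """Plain global histogram equalization; building the mapping costs O(L)."""
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf[0]) / max(cdf[-1] - cdf[0], 1)     # normalise to [0, 1]
    lut = np.round(cdf * (L - 1)).astype(np.uint8)      # O(L) lookup table
    return lut[img]                                     # one table lookup per pixel

img = np.tile(np.arange(0, 64, dtype=np.uint8), (8, 1))  # dark, low-contrast strip
out = equalize(img)
print(out.min(), out.max())
```

The piecewise method in the paper replaces the single cumulative mapping with per-segment mappings whose breakpoints depend on the scene class, but the per-frame cost stays a table build plus a lookup, which is why it fits real-time video on a DSP.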

    Motion blurred image blind restoration based on Radon transform
    LIAO Yongzhong CAI Zixing HE Xianghua
    2014, 34(7):  2005-2009.  DOI: 10.11772/j.issn.1001-9081.2014.07.2005
    Abstract ( )   PDF (682KB) ( )  
    References | Related Articles | Metrics

    A fast blind restoration algorithm for motion-blurred images was proposed: a robust Radon-transform-domain method determines the blur kernel function, and a modified total variation algorithm then restores the blurred images. Its cost function is the sum of three terms: a total variation l2-norm regularization term, a least-squares fidelity term and an l1-norm fidelity term. Compared with the algorithms of Fergus and Levin, the experimental results show that, for the class of motion blur caused by linear movement parallel to the lens, the proposed algorithm is faster and achieves a good recovery effect.

    Adaptive median filtering algorithm based on neighborhood correlation
    ZHANG Jieyu WANG Feng
    2014, 34(7):  2010-2013.  DOI: 10.11772/j.issn.1001-9081.2014.07.2010
    Abstract ( )   PDF (878KB) ( )  
    References | Related Articles | Metrics

    Aiming at the impulse noise widely existing in images, an adaptive median filtering algorithm was proposed that suppresses noise effectively while preserving more image details. Firstly, noise pixels and signal pixels are preliminarily distinguished according to the characteristic that impulse noise takes the gray value 0 or 1. Whether a pixel is noise then depends on the correlation of the adjacent pixels within the window centered on it. The gray value of a contaminated pixel is reconstructed as the median of the uncontaminated pixels in the window, while signal pixels are output directly without change. Results on visible-light and infrared images show that the proposed method outperforms other methods, such as the Traditional Median (TM) and Extremal Median (EM) filtering algorithms, achieving the highest peak signal-to-noise ratio.
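A simplified sketch of the detection-and-replacement step, assuming 8-bit images in which the normalized impulse values 0/1 correspond to gray levels 0/255 (the window size and the correlation test are reduced to their simplest form here):

```python
import numpy as np

def adaptive_median(img, w=1):
    """Replace suspected impulse pixels (extreme gray values) with the median
    of the uncontaminated pixels in the surrounding window; others pass through."""
    noisy = (img == 0) | (img == 255)            # assumed impulse values
    out = img.copy()
    h, wd = img.shape
    for y, x in zip(*np.nonzero(noisy)):
        y0, y1 = max(0, y - w), min(h, y + w + 1)
        x0, x1 = max(0, x - w), min(wd, x + w + 1)
        win = img[y0:y1, x0:x1]
        good = win[(win != 0) & (win != 255)]    # uncontaminated neighbours
        if good.size:
            out[y, x] = np.median(good)
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                                  # one salt pixel
print(adaptive_median(img)[2, 2])
```

Because the median is computed only over uncontaminated neighbors, a genuine black or white edge pixel whose neighborhood is correlated with it survives, which is the detail-preservation property claimed above.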

    Improved block-based image noise estimation algorithm
    CHEN Huijuan DAI Shenkui
    2014, 34(7):  2014-2017.  DOI: 10.11772/j.issn.1001-9081.2014.07.2014
    Abstract ( )   PDF (673KB) ( )  
    References | Related Articles | Metrics

    To estimate the noise variance of an image corrupted by white Gaussian noise, an improved block-based algorithm was proposed. The improved approach observes that the limited gray-level range suppresses some of the noise: when dealing with brighter or darker images, this phenomenon can cause serious underestimation. Starting from this key cause of underestimation, the approach derives the boundary condition for gray-level overflow by combining a filter-based method, and adaptively selects the partition window size, the sub-blocks without overflow, and a mathematical proportion parameter. The approach applies both to classical noise-estimation test images and to the surveillance images common in daily life. It is hardly affected by image details and performs well on images of different sizes, different Signal-to-Noise Ratios (SNR) and uneven brightness. The experimental results show that the proposed algorithm possesses wider applicability, higher accuracy and better robustness.

    Strong scattering object segmentation from SAR images based on graph cut and Mean Shift algorithm
    LYU Qian GAO Jun GAO Xin
    2014, 34(7):  2018-2022.  DOI: 10.11772/j.issn.1001-9081.2014.07.2018
    Abstract ( )   PDF (807KB) ( )  
    References | Related Articles | Metrics

    Aiming at the characteristics of Synthetic Aperture Radar (SAR) images and the high computational complexity of the standard graph cut segmentation algorithm, a strong-scattering-object segmentation method based on graph cut and the Mean Shift algorithm was proposed. Firstly, the image is pre-processed with the Mean Shift algorithm to produce over-segmented regions. A graph is then built whose nodes correspond to these regions, and the strong scattering targets are segmented with the graph cut algorithm. Compared with the standard graph cut algorithm, in which nodes correspond to pixels, the numbers of nodes and edges in the graph are reduced by two orders of magnitude and the computational efficiency is significantly improved. Furthermore, according to the strong scattering characteristics of targets in SAR images, the “object” and “background” terminals are defined automatically, reducing human interaction. The experiments show that the proposed method effectively combines the advantages of Mean Shift and graph cut, and can extract strong scattering targets in SAR images from the background clutter.

    Support vector machine based approach for leaf occlusion detection in security surveillance video
    YUAN Yuan DING Sheng XU Xin CHEN Li
    2014, 34(7):  2023-2027.  DOI: 10.11772/j.issn.1001-9081.2014.07.2023
    Abstract ( )   PDF (899KB) ( )  
    References | Related Articles | Metrics

    Aiming at the problem that security surveillance cameras may be occluded by leaves, a leaf occlusion detection algorithm based on Support Vector Machine (SVM) was proposed. The algorithm contains three steps. First, the regions of the video where leaves appear are segmented by an accumulated frame subtraction method. Second, the color and area information of the whole video image and of the segmented regions are extracted as key features. Third, these features are used to model and detect obstacle occlusion with an SVM. Over all collected samples, the detection accuracy of the method reaches 84%. The experimental results show that the proposed algorithm can effectively detect leaf occlusion in security surveillance video.

    Real-time image positioning and scaling for mobile video terminals
    LAI Chunlei XUE He ZHOU Yimin
    2014, 34(7):  2028-2032.  DOI: 10.11772/j.issn.1001-9081.2014.07.2028
    Abstract ( )   PDF (819KB) ( )  
    References | Related Articles | Metrics

    An image positioning and scaling architecture for mobile video terminals was proposed to allow freely zooming into and inspecting video details, together with a gesture recognition approach covering single-finger dragging and double-finger zooming detection. In addition, a coordinate conversion method was proposed that binds the coordinate transformation parameters at the boundaries using crossing-boundary detection. A video display system was presented consisting of video decoding, image rendering and synchronized interaction, implemented concurrently in three threads. The simulation results show that the proposed system achieves real-time image positioning and scaling while preserving the traditional way of video playback. The interaction response time is kept within 6 ms, eliminating the screen flicker and skipping caused by interaction. Real-time image positioning and scaling of video playback on resource-limited mobile terminals promises a wide range of applications.
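The boundary-binding step of the coordinate conversion can be illustrated by a small clamp function (the parameter names and the centring rule for under-sized images are assumptions made for the example, not the paper's API):

```python
def clamp_offset(offset, view, image, zoom):
    """Bind a pan offset so the zoomed image never leaves a gap at the edges.

    offset : desired top-left corner of the viewport, in zoomed-image pixels
    view   : viewport size, image : source size, zoom : scale factor
    """
    scaled = image * zoom
    if scaled <= view:                  # image smaller than viewport: centre it
        return (scaled - view) / 2
    return min(max(offset, 0.0), scaled - view)

# Dragging far past the right edge of a 1920-wide frame shown at 2x in a 480 view:
print(clamp_offset(5000, view=480, image=1920, zoom=2.0))   # -> 3360.0
```

Applying the same rule independently to the x and y axes after each gesture update is what keeps a drag or pinch from ever exposing the area outside the frame.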

    Pedestrian detection based on improved color self-similarity feature
    GU Huijian CHEN Junzhou
    2014, 34(7):  2033-2035.  DOI: 10.11772/j.issn.1001-9081.2014.07.2033
    Abstract ( )   PDF (594KB) ( )  
    References | Related Articles | Metrics

    In recent years, multi-scale pedestrian detection has received extensive attention in the field of computer vision. In traditional methods, the input image must be resized to multiple scales to compute the features, which significantly reduces the detection speed. The Color Self-Similarity Feature (CSSF) was presented to overcome this problem, but its dimensionality is very high and makes classifier training time-consuming, so an improved CSSF with lower dimensionality was proposed. Exploiting pedestrian structural similarity, a fixed-size window is defined, and the improved CSSF is extracted by sliding this window in different color spaces; the pedestrian detection classifier is then constructed with the AdaBoost algorithm. Tests show that, compared with the traditional CSSF whose dimensionality is on the order of ten million, the new feature has only a few thousand dimensions and can be extracted and trained much faster, with only a slight decrease in detection performance; compared with the Histogram of Oriented Gradients (HOG), feature extraction is 5 times faster with essentially the same detection performance. The new method is valuable for real-time pedestrian detection and monitoring systems.

    Multi-scale local binary pattern Fourier histogram features for facial expression recognition
    WANG Li LI Ruifeng WANG Ke
    2014, 34(7):  2036-2039.  DOI: 10.11772/j.issn.1001-9081.2014.07.2036
    Abstract ( )   PDF (763KB) ( )  
    References | Related Articles | Metrics

    To achieve simple and convenient facial expression recognition, a method combining multi-scale Local Binary Pattern Histogram Fourier (LBP-HF) features and the Active Shape Model (ASM) was proposed. Firstly, the face regions are detected and segmented by ASM to reduce the influence of unrelated regions, then LBP-HF features are extracted to form recognition vectors, and finally a nearest-neighbor classifier recognizes the expressions. The influence of LBP-HF features at various scales on facial expression recognition was studied by extracting them at different scales; the multi-scale LBP-HF features were then concatenated to discriminate expressions, yielding more effective expression features. Comparison with the experimental results of Gabor features validates the method's feasibility and simplicity, with a highest mean recognition rate of 93.50%. The experimental results demonstrate that the method can be used for human-computer interaction.

    Fast recognition model for cerebrospinal fluid images based on sparse coding
    HUANG Wenming CAI Wenzheng DENG Zhenrong
    2014, 34(7):  2040-2043.  DOI: 10.11772/j.issn.1001-9081.2014.07.2040
    Abstract ( )   PDF (791KB) ( )  
    References | Related Articles | Metrics

    Considering that traditional image segmentation algorithms have difficulty segmenting cerebrospinal fluid cell images accurately, a fast recognition model based on sparse coding was presented. In this model, local features and feature descriptors are first extracted from the image by sparse coding; the feature descriptors are then passed through a linear Spatial Pyramid Matching (SPM) structure, and the result is fed into a linear Support Vector Machine (SVM) for training and prediction. In a test of recognizing and classifying abnormal cerebrospinal fluid cell images, the recognition accuracy reached 89.4±0.9%, and the average recognition time for a 760×570 image was only 1.3 seconds. The presented model can therefore distinguish normal and abnormal cerebrospinal fluid cell images effectively and quickly.

    Research on cluster analysis in pulmonary nodule recognition
    SUN Juan WANG Bing YANG Ying TIAN Xuedong
    2014, 34(7):  2050-2053.  DOI: 10.11772/j.issn.1001-9081.2014.07.2050
    Abstract ( )   PDF (620KB) ( )  
    References | Related Articles | Metrics

    Aiming at the problem that small pulmonary nodules are difficult to identify, a method using fuzzy C-means clustering to analyze lung Regions Of Interest (ROI) was presented. An improved Fuzzy C-Means clustering algorithm based on a Plurality of Weights (PWFCM) was proposed to enhance the accuracy and speed of small nodule recognition. To improve convergence, each sample and its features are weighted and a new membership constraint is introduced, and a double clustering strategy reduces the low sensitivity caused by uneven ROI data. The experimental results on real CT image data show that the PWFCM algorithm detects lung nodules with higher sensitivity and a lower false positive rate.

    Smoothening in surface blending of quadric algebraic surfaces
    LI Yaohui XUAN Zhaocheng WU Zhifeng SUN Yuan
    2014, 34(7):  2054-2057.  DOI: 10.11772/j.issn.1001-9081.2014.07.2054
    Abstract ( )   PDF (643KB) ( )  
    References | Related Articles | Metrics

    To solve the discontinuity problem when blending two surfaces with coplanar perpendicular axes, this paper discussed how to improve the equations of the blending surface so as to obtain a smooth and continuous blending surface. First, the paper analyzed the cause of the discontinuity and pointed out that the terms in one variable vanish when the other variables take certain specified values; at those values the blending equation is independent of this variable, which indicates that the blending surface is disconnected. On the basis of this analysis, a method that guarantees a continuous blending surface was presented. The paper then discussed how to smoothen the blending surface once a continuous one has been computed. For a G0 blending surface, the polynomial of the auxiliary surface, taken as a factor, is multiplied by a degree-one function f′ and the result is added to the primary surface fi; the smoothness of the blending surface can then be adjusted by changing the coefficients of f′. For a Gn blending surface, a compensating polynomial of degree at most 2 is added directly to the proposed primary blending equation when computing the blending surface. This method smoothens the blending surface without increasing the degree of the G0 blending surface.

    Artificial intelligence
    Multi-label classification based on singular value decomposition-partial least squares regression
    MA Zongjie LIU Huawen
    2014, 34(7):  2058-2060.  DOI: 10.11772/j.issn.1001-9081.2014.07.2058
    Abstract ( )   PDF (581KB) ( )  
    References | Related Articles | Metrics

    To tackle multi-label data with high dimensionality and label correlations, a multi-label classification approach based on Singular Value Decomposition (SVD) and Partial Least Squares Regression (PLSR) was proposed, performing dimensionality reduction and regression analysis. Firstly, the label space is treated as a whole so as to exploit the label correlations. Then the score vectors of both the instance space and the label space are obtained by SVD, which serves for dimensionality reduction. Finally, the multi-label classification model is established with PLSR. Experiments on four real high-dimensional data sets verify the effectiveness of the proposed method.
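The SVD-based reduction followed by regression can be sketched with plain least squares standing in for the full PLSR iterations (the data, the rank r and the construction of the labels are invented for the example; the labels are placed in the leading score directions so the reduced fit is exact):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))                  # 50 instances, 200 features

# Truncated SVD of the instance space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Toy labels living in the top-3 score directions, so a rank-10 fit is exact.
Y = U[:, :3] @ rng.normal(size=(3, 4))          # 4 correlated labels

r = 10
T = X @ Vt[:r].T                                # score vectors, 50 x 10
B, *_ = np.linalg.lstsq(T, Y, rcond=None)       # regression in the reduced space
pred = T @ B
print(np.allclose(pred, Y))
```

A new instance is classified by projecting it through the same `Vt[:r].T` and applying `B`, so the expensive part, the decomposition, is paid once at training time.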

    Hyper-spherical multi-task learning algorithm with adaptive grouping
    MAO Wentao WANG Haicheng LIU Shangwang
    2014, 34(7):  2061-2065.  DOI: 10.11772/j.issn.1001-9081.2014.07.2061
    Abstract ( )   PDF (741KB) ( )  
    References | Related Articles | Metrics

    To solve the problem that most conventional multi-task learning algorithms evaluate the risk of each task independently and lack a uniform constraint across all tasks, a new hyper-spherical multi-task learning algorithm with adaptive grouping was proposed. Taking Extreme Learning Machine (ELM) as the basic framework, the algorithm introduces a hyper-spherical loss function to evaluate the risks of all tasks uniformly, and obtains the decision model via an iteratively reweighted least squares solution. Furthermore, considering the relatedness between tasks, a regularizer with grouping structure was constructed under the assumption that related tasks have more similar weight vectors, so that the tasks in the same group are trained independently. Finally, the optimization objective is transformed into a mixed 0-1 programming problem, and a multi-objective method identifies the optimal grouping structure and the model parameters. The simulation results on toy data and cylindrical vibration signal data show that the proposed algorithm outperforms state-of-the-art methods in terms of generalization performance and the ability to identify the inner structure of tasks.

    Uncertainty data processing by fuzzy support vector machine with fuzzy similarity measure and fuzzy mapping
    WANG Yufan LIANG Gongqian YANG Jing
    2014, 34(7):  2066-2070.  DOI: 10.11772/j.issn.1001-9081.2014.07.2066
    Abstract ( )   PDF (697KB) ( )  
    References | Related Articles | Metrics

    To improve the ability of the traditional Fuzzy Support Vector Machine (FSVM) to process uncertain data, an FSVM with a fuzzy similarity measure and a high-dimensional-space fuzzy mapping was proposed. Firstly, a fuzzy similarity measure function was established from the Gregson similarity measure, which effectively expresses uncertain information. Then, using mapping theory and Mercer's theorem, a fuzzy similarity kernel was formulated and used in the FSVM algorithm. Finally, the algorithm was applied to modeling the material removal rate in rotary ultrasonic machining with uncertain data. Compared with traditional FSVM methods, the proposed approach processes uncertain data in fewer steps, with higher accuracy and lower computational complexity.

    Floating point representation genetic algorithm based on adaptive wavelet shrinkage
    CUI Mingyi SHAO Chao
    2014, 34(7):  2071-2073.  DOI: 10.11772/j.issn.1001-9081.2014.07.2071
    Abstract ( )   PDF (514KB) ( )  
    References | Related Articles | Metrics

    The noise of floating-point representation was analyzed and shown to be independent and identically distributed, and it was removed by adaptive wavelet shrinkage to improve the performance of the genetic algorithm. Denoising is implemented as a mutation operation during the run of the algorithm. Considering the influence of threshold variation on the wavelet coefficients, the correctness of the wavelet-denoising mutation was proved for a single gene. A soft threshold function with adaptive wavelet shrinkage was proposed and embedded dynamically into the running algorithm. A concrete algorithm was given, and an example verifies its feasibility. The simulation experiments indicate that the algorithm improves the convergence rate, and that the convergence point coincides with the theoretical convergence point.
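The soft threshold function at the heart of wavelet shrinkage is compact enough to state directly (the adaptive choice of the threshold t is omitted here; a common option, not necessarily the one in the paper, is the universal threshold σ√(2 ln n)):

```python
import numpy as np

def soft_threshold(w, t):
    """Wavelet-shrinkage soft thresholding: kill small coefficients,
    shrink the surviving ones toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
print(soft_threshold(coeffs, 1.0))
```

Used as a mutation operator, each gene's wavelet coefficients are shrunk this way and the gene is reconstructed from the cleaned coefficients, which is the denoising-as-mutation idea described above.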

    Hybrid discrete optimization algorithm based on gravity search and estimation of distribution
    JIANG Yue SHEN Dongmei ZHAO Yan GAO Shangce
    2014, 34(7):  2074-2079.  DOI: 10.11772/j.issn.1001-9081.2014.07.2074
    Abstract ( )   PDF (892KB) ( )  
    References | Related Articles | Metrics

    To address the tendency of the traditional Gravitational Search Algorithm (GSA) to fall into local minima, a hybrid algorithm combining Estimation of Distribution (ED) and gravitational search (GSEDA) was proposed. By characterizing the distribution of the current solutions found by GSA, the ED component generates promising solutions from the constructed probability matrix, thus guiding the search to new solution areas. GSEDA balances the exploration and exploitation of the search and therefore has a better capacity for escaping local optima. The experimental results on the traveling salesman problem indicate that GSEDA outperforms traditional algorithms in terms of solution quality and robustness.
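The ED component can be illustrated by learning an edge-frequency matrix from elite tours and sampling new tours from it (a deliberate simplification: greedy argmax sampling stands in for a stochastic draw, and the smoothing floor is an assumption, not the paper's construction):

```python
import numpy as np

def edge_frequency_matrix(tours, n):
    """Estimate P[i, j]: how often city j follows city i in the elite tours."""
    P = np.full((n, n), 1e-6)                  # small floor keeps every edge possible
    for t in tours:
        for a, b in zip(t, t[1:] + t[:1]):     # tour edges, including the return leg
            P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)    # row-normalise to probabilities

def sample_tour(P, start=0):
    """Build a tour by repeatedly picking the most probable unvisited successor."""
    n = len(P)
    tour, seen = [start], {start}
    while len(tour) < n:
        probs = P[tour[-1]].copy()
        probs[list(seen)] = 0                  # forbid revisits
        tour.append(int(np.argmax(probs)))
        seen.add(tour[-1])
    return tour

elite = [[0, 2, 1, 3], [0, 2, 1, 3]]           # two identical elite tours
P = edge_frequency_matrix(elite, 4)
print(sample_tour(P))
```

In the hybrid, tours generated this way inject edges that the gravitational moves alone would be unlikely to reach, which is how the probability matrix steers the search out of a local optimum.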

    Improved probabilistic algorithm of mechanical geometry theorem proving
    CHEN Mingyan ZENG Zhenbing
    2014, 34(7):  2080-2084.  DOI: 10.11772/j.issn.1001-9081.2014.07.2080
    Abstract ( )   PDF (835KB) ( )  
    References | Related Articles | Metrics

    The research methods of mechanical geometry theorem proving can be summed up in two categories, deterministic algorithms and probabilistic algorithms; an improved probabilistic algorithm was proposed to overcome drawbacks of the existing methods, such as poor efficiency and high memory consumption. Specifically, the upper bounds of the degrees of the variables in the pseudo-remainder are estimated with an improved algorithm, and then, combining the Schwartz-Zippel theorem with statistical theory, a geometric theorem can be proved by checking several random instances, with the probability of an erroneous result computable and controllable within any given small range. With the improved probabilistic algorithm, the Five Circles Theorem, which is quite difficult to prove by existing algebraic methods because of its high complexity, was proved within two seconds. Comparative experimental results also show that the improved probabilistic algorithm is highly efficient.

    New particle swarm optimization based on blast wave model
    YAN Tao GU Leye RUAN Bo
    2014, 34(7):  2085-2089.  DOI: 10.11772/j.issn.1001-9081.2014.07.2085
    Abstract ( )   PDF (632KB) ( )  
    References | Related Articles | Metrics

    A new Particle Swarm Optimization algorithm based on the blast wave model (BW-PSO) was proposed to address the tendency of the basic PSO algorithm to fall into local optima when solving complex multimodal problems. A population diversity monitor is added to the basic PSO algorithm so that a particle blast process is triggered when diversity drops below a given threshold. Crossover and mutation occur between the optimal and suboptimal particles; particles within the blast radius are pulled toward the current optimum and converge faster, while particles outside the blast radius are scattered outward, increasing the possibility of finding the global optimum. BW-PSO not only improves the accuracy of the current solution through the mutation between optimal and suboptimal particles, but also increases population diversity through the blast wave process and enhances the particles' ability to explore the global space. Compared with mutative PSO and charged PSO, the results indicate that BW-PSO performs better on multimodal optimization problems.

    Robust model and algorithms for uncertain traveling salesman problem
    MA Cunrui MA Changxi
    2014, 34(7):  2090-2092.  DOI: 10.11772/j.issn.1001-9081.2014.07.2090

    Considering that uncertain parameters are widespread in the Traveling Salesman Problem (TSP), a robust optimization model for the TSP was built under the framework of Bertsimas' robust discrete optimization theory and then transformed into its robust counterpart according to the transformation rule. In addition, a single-parent genetic algorithm based on Prufer coding was designed to solve the problem. Compared with the traditional genetic algorithm, this method shortens the chromosome length and prevents feasible solutions from being destroyed by the traditional crossover and mutation operators. Validation on numerical examples shows that the algorithm has higher solving efficiency, and that the robust model developed for the uncertain environment can obtain better robust solutions.
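Prufer coding is what keeps the chromosomes short: a labeled tree on n nodes is represented by a sequence of only n-2 node labels. A minimal stdlib decoder (illustrative background, not the paper's genetic algorithm) looks like this:

```python
import heapq
from collections import Counter

def prufer_decode(seq, n):
    """Rebuild the edge list of a labeled tree on nodes 0..n-1 from its
    Prufer sequence (length n - 2). Each node appears in the sequence
    exactly degree - 1 times, so the leaves are the absent nodes."""
    degree = Counter(seq)
    leaves = [v for v in range(n) if degree[v] == 0]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)      # smallest current leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 0:                # v has just become a leaf
            heapq.heappush(leaves, v)
    edges.append(tuple(sorted(heapq.heappop(leaves) for _ in range(2))))
    return edges

# The sequence [0, 0] on 4 nodes decodes to a star centred at node 0.
star = prufer_decode([0, 0], 4)
```

Because every length-(n-2) sequence over the labels decodes to a valid tree, a mutation-only (single-parent) operator on the sequence can never produce an infeasible chromosome, which is the destruction-avoidance property the abstract refers to.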

    Route choice model and algorithm under restriction of multi-source traffic information
    GUO Hongxiang ZHANG Xi LIU Lan LIU Xuhai YAN Kai
    2014, 34(7):  2093-2098.  DOI: 10.11772/j.issn.1001-9081.2014.07.2093

    To address the lack of theoretical support in the policy-making process of traffic guidance management, a research method for choice behavior under the confinement mechanism of traffic information was proposed. From the perspective of human perception, the constraint rules of Multi-Source Traffic Information (MSTI) were analyzed in depth based on a fuzzy clustering algorithm; the road network environment was then simulated with VISSIM, and a traffic state pattern recognition model was established to simulate the mental activity of travelers under the restriction of information. A choice model was then constructed with the Biogeme software from behavior survey data obtained in a road network example using the Stated Preference (SP) method. The results show that the sanction of traffic information on travel behavior is very limited: travelers keep to their preferred path when its traffic is not very heavy, while the sanction gradually strengthens and information-induced path changes become more frequent as the preferred path becomes more congested. These conclusions provide a new idea and reference for research on incompletely rational behavior in an information environment, as well as decision support for traffic management departments.

    Joint scheduling game model and optimization of emergency resources
    YANG Jijun XU Chenhua
    2014, 34(7):  2099-2102.  DOI: 10.11772/j.issn.1001-9081.2014.07.2099

    After an unconventional emergency breaks out, how to use different modes of transportation to convey emergency resources in a coordinated way becomes the main problem. Emergency resources are delivered among emergency resource centers, resource transfer stations and demanding spots (crisis locations). To depict the movement of emergency resources, a flow model of emergency resources was designed, and a cooperative game model and algorithm for emergency resource scheduling in unconventional emergencies were then presented. In view of the shortcomings of the classic core method for solving the model, an improved core method was put forward. Analysis and comparison of the results on a numerical example of emergency resource scheduling show that the scheduling model and algorithm are effective and that the scheduling strategies based on the improved core method are better.
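The abstract does not spell out the improved core method, but the classic core condition it builds on is easy to state: an allocation is in the core if and only if it is efficient and no coalition can do better by seceding. A brute-force membership check on a hypothetical 3-player game:

```python
from itertools import combinations

def in_core(alloc, value, players, eps=1e-9):
    """Test core membership of a payoff allocation in a cooperative game.

    alloc maps each player to a payoff; value maps each frozenset
    coalition to its worth. The allocation must distribute exactly the
    grand coalition's worth, and no sub-coalition may receive less in
    total than its own worth (otherwise it would secede)."""
    grand = frozenset(players)
    if abs(sum(alloc[p] for p in players) - value[grand]) > eps:
        return False  # not efficient
    for r in range(1, len(players)):
        for coal in combinations(players, r):
            if sum(alloc[p] for p in coal) + eps < value[frozenset(coal)]:
                return False  # coalition 'coal' would secede
    return True

# Hypothetical game: singletons earn 0, pairs 0.5, the grand coalition 1.
v = {frozenset(c): w for c, w in [((1,), 0), ((2,), 0), ((3,), 0),
                                  ((1, 2), 0.5), ((1, 3), 0.5),
                                  ((2, 3), 0.5), ((1, 2, 3), 1.0)]}
fair = in_core({1: 1/3, 2: 1/3, 3: 1/3}, v, (1, 2, 3))
```

The exponential loop over coalitions is exactly the scalability weakness that motivates refinements of the classic core method.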

    Route optimization of urban waterlogging rescue based on improved ant colony optimization
    JIANG Jingui ZHANG Pengfei
    2014, 34(7):  2103-2106.  DOI: 10.11772/j.issn.1001-9081.2014.07.2103

    When urban waterlogging disasters occur, the scientific deployment of rescue resources can improve the efficiency of urban emergency rescue and minimize disaster losses. In view of the fact that urban routes are affected by terrain, road conditions and the seriousness of waterlogging, the connected coefficient and the unblocked coefficient were introduced to better reflect urban route conditions under waterlogging disasters. Considering that the ant colony algorithm suffers from slow convergence and easily falls into local optima, it was improved by randomly selecting the affected areas and introducing a pheromone update operator strategy, and the improved algorithm was used to solve the route optimization model. Empirical analysis shows that the improved ant colony algorithm achieves better results in solving urban waterlogging rescue route optimization.
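A toy ant colony search with evaporation and best-path reinforcement illustrates the mechanism being improved; the edge costs below could stand in for routes weighted by the unblocked coefficient. This is a generic sketch, not the paper's improved operator, and the graph is made up.

```python
import random

def aco_shortest_path(graph, src, dst, ants=20, iters=30, rho=0.5, seed=0):
    """Toy ant colony search for a cheapest src->dst path.

    graph: {node: {neighbour: cost}}. Ants build paths by sampling the
    next node with probability proportional to pheromone * (1 / cost);
    pheromone evaporates by factor rho each round and the best path seen
    so far is reinforced. Returns the best (cost, path) ever found."""
    rnd = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_cost, best_path = float("inf"), None
    for _ in range(iters):
        for _ in range(ants):
            node, path = src, [src]
            while node != dst:
                choices = [v for v in graph[node] if v not in path]
                if not choices:
                    path = None  # dead end: discard this ant
                    break
                weights = [tau[(node, v)] / graph[node][v] for v in choices]
                node = rnd.choices(choices, weights)[0]
                path.append(node)
            if path is None:
                continue
            cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
            if cost < best_cost:
                best_cost, best_path = cost, path
        for edge in tau:                 # evaporation ...
            tau[edge] *= 1 - rho
        if best_path:                    # ... then reinforce the best path
            for a, b in zip(best_path, best_path[1:]):
                tau[(a, b)] += 1.0 / best_cost
    return best_cost, best_path

roads = {0: {1: 1, 2: 1, 3: 4}, 1: {0: 1, 3: 1},
         2: {0: 1, 3: 3}, 3: {1: 1, 2: 3, 0: 4}}
cost, path = aco_shortest_path(roads, 0, 3)
```

Reinforcing only the incumbent best path concentrates the search quickly, which is the local-optimum risk that the paper's randomized area selection and modified pheromone update operator are meant to counteract.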

    Trajectory outlier detection method based on kernel principal component analysis
    BAO Suning ZHANG Lei YANG Guang
    2014, 34(7):  2107-2110.  DOI: 10.11772/j.issn.1001-9081.2014.07.2107

    In view of the fact that existing algorithms cannot be effectively applied to multi-factor trajectory outlier detection, a new method named TOD-KPCA (Trajectory Outlier Detection based on Kernel Principal Component Analysis) was proposed. Firstly, to enhance trajectory feature extraction, KPCA was used to transform the trajectories from a nonlinear space into a high-dimensional linear space. Then, to improve detection accuracy, a one-class Support Vector Machine (SVM) was used for unsupervised learning and prediction on the trajectory feature data, and trajectories with abnormal behavior were thereby detected. The proposed algorithm was tested on Atlantic hurricane data. The experimental results show that it can effectively extract trajectory features and, compared with similar algorithms, achieves better detection results for multi-factor trajectory outlier detection.
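As a rough illustration of the KPCA step (not the paper's feature pipeline), the leading kernel principal component can be obtained by double-centring an RBF kernel matrix and running power iteration. The point set and kernel parameter below are made up.

```python
import math
import random

def rbf_kernel_matrix(points, gamma):
    """Pairwise RBF kernel: K[i][j] = exp(-gamma * ||xi - xj||^2)."""
    return [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(p, q)))
             for q in points] for p in points]

def kpca_first_component(points, gamma=0.1, steps=200):
    """Project points onto the leading kernel principal component.

    The kernel matrix is double-centred (shifting the implicit feature
    map to zero mean) and its top eigenvector is found by power
    iteration; the projections are the centred-kernel rows dotted with
    that eigenvector."""
    n = len(points)
    K = rbf_kernel_matrix(points, gamma)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    Kc = [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
          for i in range(n)]
    rnd = random.Random(0)
    v = [rnd.uniform(-1, 1) for _ in range(n)]
    for _ in range(steps):
        w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]

# Two well-separated clusters project to opposite signs on the first PC.
pts = [(0.0,), (0.1,), (0.2,), (10.0,), (10.1,), (10.2,)]
proj = kpca_first_component(pts)
```

In the full method these projected features would then be fed to a one-class SVM for outlier scoring.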

    Method to detect radio frequency identification reading errors based on workflow technology
    GUO Chao
    2014, 34(7):  2111-2114.  DOI: 10.11772/j.issn.1001-9081.2014.07.2011

    In a system combining workflow and Radio Frequency Identification (RFID), an RFID reader's reading errors on objects can lead to incorrect execution of workflow tasks. A method to detect RFID reading errors using a theoretical workflow model was proposed. First, a workflow process model reflecting changes of resources, called the Workflow Resource Change Model (WRCM), was built based on the actual background. Using a Bayesian network, predicted data for the model were generated and compared with the actually detected data, and inaccurate RFID readers could be identified by analyzing this comparison. The experiment shows that the method can find inaccurate RFID readers to a certain extent.

    Annotation-based compliance checking for business processes
    GONG Ping FENG Zaiwen
    2014, 34(7):  2115-2123.  DOI: 10.11772/j.issn.1001-9081.2014.07.2115

    At present, enterprises' business is increasingly circumscribed by laws, regulations, standards and internal control systems, and how to make enterprise process-aware information systems compliant has become an important issue in Information System (IS) research. Ensuring the compliance of the process model is an important premise for realizing the compliance of the process-aware system. Addressing the compliance of process models at the design stage, and extending previous work on executability checking for semantically annotated process models, an annotation-based compliance checking approach was proposed, which mainly includes techniques for generating annotation expressions for compliance rule patterns and for analyzing compliance-annotated process models. Compliance annotation expressions specify the involved activities and constraints, which are essential information for compliance debugging as well as run-time detection and evaluation. Using a Satisfiability (SAT) solver, a compliance-annotated process model can be efficiently checked and debugged.
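Two of the classic compliance rule patterns such approaches encode can be checked directly on an execution trace. This is a brute-force sketch of the patterns themselves, not the paper's SAT encoding; the activity names are made up.

```python
def precedes(trace, a, b):
    """Compliance pattern 'a must precede b': every occurrence of b is
    preceded by at least one earlier occurrence of a in the trace."""
    seen_a = False
    for act in trace:
        if act == a:
            seen_a = True
        elif act == b and not seen_a:
            return False
    return True

def leads_to(trace, a, b):
    """Compliance pattern 'a leads to b': every occurrence of a is
    eventually followed by at least one occurrence of b."""
    pending = False
    for act in trace:
        if act == a:
            pending = True
        elif act == b:
            pending = False
    return not pending

ok = precedes(["approve", "pay", "ship"], "approve", "pay")
bad = precedes(["pay", "approve"], "approve", "pay")
```

A SAT-based checker verifies such patterns over all executions of the annotated process model at once, rather than over one observed trace at a time.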

    Optimized design for automatic test system based on multithreading
    ZHAO Yuan JIANG Xiaofeng
    2014, 34(7):  2124-2128.  DOI: 10.11772/j.issn.1001-9081.2014.07.2124

    The traditional testing process does not specifically consider system performance. With the wide application of parallel testing methods, more attention is paid to system performance and data throughput capacity, and optimizing software design with multithreading technology has become an effective way to improve the performance of automatic test systems. By modeling the testing pipeline process, using asynchronous pipeline design patterns and combining task-oriented concepts, an available test system programming model was proposed. The experimental results prove that the model can significantly shorten the average test time in the ideal case of randomly arriving test tasks. Applying this model to an instance of measuring the characteristic parameters of an Alternating Current (AC) contactor, the results further indicate that the model can significantly increase the flexibility of test configuration while avoiding the complexity of multithreaded programming.
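A minimal stdlib sketch of the asynchronous-pipeline pattern such a model builds on: one worker thread per stage, connected by queues, so successive test tasks overlap in time. The stage functions and task values are made up.

```python
import queue
import threading

def pipeline(tasks, stages):
    """Run tasks through a chain of stage functions, one worker thread
    per stage, connected by FIFO queues so the stages overlap in time
    (an asynchronous pipeline). Returns the processed results in order."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    done = object()  # sentinel marking the end of the task stream

    def worker(stage, q_in, q_out):
        while True:
            item = q_in.get()
            if item is done:
                q_out.put(done)  # propagate shutdown to the next stage
                return
            q_out.put(stage(item))

    threads = [threading.Thread(target=worker, args=(s, qs[i], qs[i + 1]))
               for i, s in enumerate(stages)]
    for t in threads:
        t.start()
    for task in tasks:
        qs[0].put(task)
    qs[0].put(done)
    results = []
    while True:
        item = qs[-1].get()
        if item is done:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results

# Two hypothetical stages, e.g. 'measure' then 'check', run concurrently.
out = pipeline([1, 2, 3], [lambda x: x * 10, lambda x: x + 1])
```

Because the queues and threads are hidden behind one function, callers only supply stage functions, which is the sense in which such a model shields test engineers from the complexity of multithreaded programming.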

    Construction of protein-compound interactions model
    LI Huaisong YUAN Qin WANG Caihua LIU Juan
    2014, 34(7):  2129-2131.  DOI: 10.11772/j.issn.1001-9081.2014.07.2129

    Building an interpretable, large-scale model of protein-compound interactions is a very important subject. A new chemically interpretable model of protein-compound interactions was proposed. Its core idea is the hypothesis that a protein-compound interaction can be decomposed into interactions between protein fragments and compound fragments, so that composing the fragment interactions yields the protein-compound interaction. Firstly, amino acid oligomer clusters and compound substructures were used to describe proteins and compounds respectively. The protein fragments and compound fragments were then viewed as the two parts of a bipartite graph, with fragment interactions as the edges. Based on the hypothesis, a protein-compound interaction is determined by the sum of the protein fragment and compound fragment interactions. The experiment demonstrates that the model achieves a prediction accuracy of 97% with very good interpretability.
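The decomposition hypothesis reduces to a sum over edges of the bipartite fragment graph. A minimal sketch, with entirely hypothetical fragment names and edge weights:

```python
def interaction_score(protein_frags, compound_frags, edge_weight):
    """Score a protein-compound pair as the sum of its fragment-pair
    interactions, per the decomposition hypothesis. Fragment pairs with
    no learned edge contribute zero."""
    return sum(edge_weight.get((p, c), 0.0)
               for p in protein_frags for c in compound_frags)

# Hypothetical learned weights on bipartite fragment edges.
weights = {("oligomer_A", "ring"): 0.9, ("oligomer_B", "amide"): -0.4}
score = interaction_score({"oligomer_A", "oligomer_B"},
                          {"ring", "carbonyl"}, weights)
```

The interpretability claim follows directly from this form: each nonzero edge weight names a specific fragment pair responsible for part of the predicted interaction.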

    ECG beat classification algorithm based on cluster analysis
    YAN Yu SUN Cheng
    2014, 34(7):  2132-2135.  DOI: 10.11772/j.issn.1001-9081.2014.07.2132

    To improve the accuracy and universality of computer-assisted classification algorithms, an Electrocardiography (ECG) beat classification algorithm based on cluster analysis was presented. Considering that one patient's ECG beats repeat periodically, the algorithm used two-stage cluster analysis to select representative ECG beats, which were combined with the diagnoses of cardiologists to achieve an accurate beat classification rate. To verify its accuracy, simulation experiments were performed on the international standard MIT-BIH arrhythmia database using the ECG beat classification and accuracy evaluation methods specified by the AAMI/ANSI standard; the final overall classification accuracy was 99.07%. Compared with the method of Kiranyaz et al. (KIRANYAZ S, INCE T, PULKKINEN J, et al. Personalized long-term ECG classification: a systematic approach [J]. Expert Systems with Applications, 2011, 38(4): 3220-3226.), this method requires no specific training step, and the sensitivity for beats labeled S rises from 40.15% to 89.82%, significantly improving the generalization capability of the classification algorithm.

    Big data processing system based on opportunistic cooperation for agricultural Internet of things
    FEN Yuan XU Congfu
    2014, 34(7):  2136-2139.  DOI: 10.11772/j.issn.1001-9081.2014.07.2136

    Aiming at the complex communication environment and low efficiency of big data processing in the agricultural Internet Of Things (IOT), a big data processing mechanism based on adaptive opportunistic cooperation was proposed. According to the requirements of agricultural applications and the impact of the agricultural environment on wireless data transmission, a cross-layer interaction analysis model was established, which combined opportunistic cooperation mechanisms with big data processing requirements; a big data processing scheme was then designed on this basis. Experimental analysis and testing show that the proposed scheme achieves better system throughput, reliability and processing performance than traditional coordination mechanisms and data processing schemes.

    Non-fragile H∞ control of linear system with time-domain constraints
    GAO Xingquan HU Yunfeng
    2014, 34(7):  2140-2144.  DOI: 10.11772/j.issn.1001-9081.2014.07.2140

    For linear systems with time-domain constraints, including control input constraints, state constraints or their mixture, an H∞ control scheme via Linear Matrix Inequality (LMI) optimization was proposed. First, under assumptions on the initial states and the energy of the external disturbance, a fixed ellipsoid containing all perturbed feasible trajectories was determined. Then, sufficient conditions for the closed-loop system to satisfy the time-domain constraints while the controller gain varies within a certain scope were derived and converted to LMI, with the derivation process given in detail. Finally, the non-fragile H∞ controller design with time-domain constraints was reduced to solving an optimization problem with LMI constraints. Simulation results for the disturbance rejection problem of a mass-spring-damper system show that the designed controller improves the robustness of the closed-loop system against controller gain variations while the time-domain constraints are respected.
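A standard building block behind such invariant-ellipsoid arguments (the paper's exact conditions may differ) expresses an input bound over the trajectory-containing ellipsoid as an LMI:

```latex
% Input constraint \|u\| = \|Kx\| \le u_{\max} over the invariant
% ellipsoid \mathcal{E}(P) = \{ x : x^{\top} P x \le 1 \}, P \succ 0,
% holds if and only if the following LMI in (P, K) is satisfied:
\begin{bmatrix} P & K^{\top} \\ K & u_{\max}^{2} I \end{bmatrix} \succeq 0
% which, by the Schur complement, is equivalent to
% K P^{-1} K^{\top} \preceq u_{\max}^{2} I .
```

Conditions of this form are what allow the constrained, non-fragile design to be posed as a single optimization problem over LMI constraints.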
