Table of Contents
01 March 2012, Volume 32 Issue 03
Network and distributed technology
BI-PaaS: parallel-based business intelligence system
JIANG Zhi-xiong, JIN Hai, HUANG Xiao-qing
2012, 32(03): 595-598. DOI: 10.3724/SP.J.1087.2012.00595
Abstract | PDF (765KB) | References | Related Articles | Metrics
To address the challenge that massive data poses to traditional Business Intelligence (BI) systems, a prototype BI system based on parallel mechanisms (BI-PaaS) was designed and implemented. It was built upon China Mobile's Big Cloud project, powered by massively parallel computing and distributed storage, and integrated with several technologies such as ETL, DM, OLAP and reporting. The experimental results prove that the parallel-computing-based functions greatly enhance data processing capability and effectively support data analysis.
Multi-objective evolutionary algorithm for grid job scheduling based on adaptive neighborhood
YANG Ming, XUE Sheng-jun, CHEN Liang, LIU Yong-sheng
2012, 32(03): 599-602. DOI: 10.3724/SP.J.1087.2012.00599
Abstract | PDF (608KB) | References | Related Articles | Metrics
A new adaptive neighborhood Multi-Objective Grid Task Scheduling Algorithm (ANMO-GTSA) was proposed for the collaborative optimization of multi-objective job scheduling in grid computing. In the ANMO-GTSA, an adaptive neighborhood method was applied to find the non-inferior solution set and to maintain the diversity of the multi-objective job scheduling population. The experimental results indicate that the proposed algorithm can not only balance the multiple scheduling objectives, but also improve resource utilization and task execution efficiency. Moreover, it achieves better performance in both the time and cost dimensions than the traditional Min-min and Max-min algorithms.
Analysis on schedulability of fixed-priority multiprocessor scheduling
BAI Lu, YAN Li
2012, 32(03): 603-605. DOI: 10.3724/SP.J.1087.2012.00603
Abstract | PDF (613KB) | References | Related Articles | Metrics
Concerning the Fixed-Priority (FP) algorithm for multiprocessor real-time scheduling, an improved schedulability test was proposed. Baruah's window analysis framework for Earliest Deadline First (EDF) was applied to FP, and the maximum number of higher-priority tasks with carry-in was bounded by m-1 (where m is the number of processors), yielding a new upper bound on the interference a task suffers. From this, a tighter sufficient condition for schedulability was derived. The simulation results show that the schedulability test is more effective, as it detects a larger number of schedulable task sets.
Quorum generation algorithm of dynamic and multi-node initiation based on local recursion
LI Mei-an, LIN Lan, CHEN Zhi-dang
2012, 32(03): 606-608. DOI: 10.3724/SP.J.1087.2012.00606
Abstract | PDF (467KB) | References | Related Articles | Metrics
How to reduce the time complexity of a quorum generation algorithm without significantly increasing the quorum length is a question that must be resolved by researchers of symmetric quorum generation algorithms for distributed mutual exclusion. A new quorum generation algorithm based on local recursion was proposed in this paper. It reduces the time complexity effectively while keeping the quorum length from increasing significantly compared with WK's algorithm and the global recursion algorithm. Therefore, by exploiting the features of quorums, the tradeoff between quorum length and the time complexity of quorum generation can be improved.
Improved L7-Filter's pattern matching algorithm based on multi-core processors
YU Tao, WU Wei-dong
2012, 32(03): 609-613. DOI: 10.3724/SP.J.1087.2012.00609
Abstract | PDF (816KB) | References | Related Articles | Metrics
According to the architecture of multi-core processors and the temporal locality of network data flows, a division and dynamic adaptation algorithm based on multi-core processors was proposed. By classifying network data flows by type and dynamically optimizing the rule chain according to the temporal locality of network flows, the number of matches performed by L7-Filter on multi-core processors was effectively reduced and processing efficiency was improved dramatically. The simulation results show that, for the same number of packets, the algorithm improves multi-core processing performance by about 7 percent; as the number of network packets increases, the performance advantage becomes more obvious.
Adaptive accrual failure detection model
2012, 32(03): 614-616. DOI: 10.3724/SP.J.1087.2012.00614
Abstract | PDF (597KB) | References | Related Articles | Metrics
Traditional failure detection outputs binary information representing trust or suspicion, but this mechanism lacks flexibility. To address this problem, accrual failure detection, which outputs a suspicion level, can adapt to the QoS requirements of different processes running simultaneously. An accrual failure detection model named EXP-ACC-FD was proposed on the basis of analyzing the existing failure detection models and accrual failure detection algorithms. It computes a power-law weighted mean of heartbeat inter-arrival times, and substitutes this weighted mean, together with the time elapsed since the last heartbeat, into an exponential distribution to obtain the suspicion level of the monitored process. The simulation analyses show that the accuracy of EXP-ACC-FD is higher than that of NFD-E and PHI within the same detection time.
Compressed sensing parallel processing algorithm based on OpenMP
WU Xiao-ting, DENG Jia-xian
2012, 32(03): 617-619. DOI: 10.3724/SP.J.1087.2012.00617
Abstract | PDF (454KB) | References | Related Articles | Metrics
Concerning the high complexity and long running time of compressed sensing reconstruction algorithms, a compressed sensing parallel algorithm based on multi-core processors was proposed. After a careful analysis of the compressed sensing algorithm, OpenMP was used to parallelize the compressed sensing measurement and the Orthogonal Matching Pursuit (OMP) reconstruction, improving program performance. The experimental results show that the speedup grows linearly with the number of threads and the program executes more efficiently. Moreover, the more complex the reconstruction process is, the more obvious the performance gain will be.
Replica placement study in large-scale cloud storage system
DONG Ji-guang, CHEN Wei-wei, TIAN Lang-jun, WU Hai-jia
2012, 32(03): 620-624.
Abstract | PDF (814KB) | References | Related Articles | Metrics
In large-scale cloud storage systems based on replica redundancy, previous layout algorithms can only partially meet the requirements of high reliability, high scalability and high efficiency in replica layout. To solve this problem, this paper proposed a Replica Placement algorithm based on Grouping and Consistent Hashing (RPGCH). The storage nodes were classified into groups by their correlativity; the replicas of an object were then assigned to different groups by consistent hashing, and each replica was placed onto a storage node within its group, again by consistent hashing. The theoretical analysis proves that data reliability is improved. The simulation results show that RPGCH distributes data evenly among storage nodes and adapts well to the changing scale of the cloud storage system. Moreover, RPGCH is time-efficient with little memory overhead.
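The two-level lookup described in the abstract can be sketched as follows; this is a minimal illustration of group-then-node consistent hashing, not the paper's actual RPGCH implementation, and all names (`Ring`, `place_replicas`, the salt scheme) are hypothetical. Virtual nodes, which production rings use for balance, are omitted for brevity.

```python
import hashlib

def _h(key: str) -> int:
    # Map a key to a point on the hash ring using MD5.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """A minimal consistent-hash ring (no virtual nodes, for brevity)."""
    def __init__(self, members):
        self.points = sorted((_h(m), m) for m in members)

    def lookup(self, key: str):
        p = _h(key)
        for point, member in self.points:
            if point >= p:
                return member
        return self.points[0][1]  # wrap around the ring

def place_replicas(obj_id: str, groups: dict, n_replicas: int = 3):
    """Assign each replica to a distinct group via a ring over groups,
    then to a node inside that group via a ring over the group's nodes."""
    group_ring = Ring(groups.keys())
    placement, chosen = [], set()
    i = 0
    while len(placement) < n_replicas and len(chosen) < len(groups):
        g = group_ring.lookup(f"{obj_id}#{i}")  # salt spreads replicas apart
        i += 1
        if g in chosen:
            continue
        chosen.add(g)
        placement.append((g, Ring(groups[g]).lookup(obj_id)))
    return placement

groups = {"g1": ["n1", "n2"], "g2": ["n3", "n4"], "g3": ["n5", "n6"]}
print(place_replicas("object-42", groups))
```

Because lookups depend only on hashed positions, adding or removing a node moves only the keys adjacent to it on its group's ring, which is what gives the scheme its scalability.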
RFID data compression storage method based on three-level storage model
XIA Xiu-feng, ZHAO Long
2012, 32(03): 625-628. DOI: 10.3724/SP.J.1087.2012.00625
Abstract | PDF (683KB) | References | Related Articles | Metrics
Concerning the problem of massive data storage in the Internet of Things, this paper proposed a three-level Radio Frequency IDentification (RFID) data compression storage model. The model divides the data into a current level, a temporary level and a historical level, with a data collection algorithm designed for each level according to the features of its data. A path coding algorithm based on the model was also proposed to store the compressed paths. The experimental results show that the three-level storage model can store compressed data effectively, and that the data-gathering algorithm achieves a higher compression rate with lower time complexity.
Artificial intelligence
Ensemble classification algorithm for high speed data stream
LI Nan, GUO Gong-de
2012, 32(03): 629-633. DOI: 10.3724/SP.J.1087.2012.00629
Abstract | PDF (760KB) | References | Related Articles | Metrics
Algorithms for mining data streams have to respond fast and adapt to concept drift while placing light demands on memory resources. This paper proposed an ensemble classification algorithm for high speed data streams. After dividing a given data stream into several data blocks, it computed the central point and subspace for every class on each block, and integrated them as the classification model; meanwhile, it used statistics to detect concept drift. The experimental results show that the proposed method not only classifies the data stream fast and adapts to concept drift quickly, but also achieves better classification performance.
Bacteria foraging optimization algorithm based on immune algorithm
LIU Xiao-long, ZHAO Kui-ling
2012, 32(03): 634-637. DOI: 10.3724/SP.J.1087.2012.00634
Abstract | PDF (811KB) | References | Related Articles | Metrics
To correct defects of the bacteria foraging optimization algorithm such as slow convergence and a fixed step size, this paper presented the concept of bacterial sensitivity, which increases convergence speed by adjusting the bacterial swimming step size. The clonal selection ideas of immune algorithms were used to implement cloning, high-frequency variation and random crossover of the elite group, guiding the search to improve accuracy. Tests on a number of typical high-dimensional functions show that the improved algorithm is greatly improved in terms of search speed and accuracy, and is better suited to practical engineering optimization problems featuring high dimensionality and constraints.
Improved rival penalized competitive learning algorithm based on pattern distribution of samples
XIE Juan-ying, GUO Wen-juan, XIE Wei-xin, GAO Xin-bo
2012, 32(03): 638-642. DOI: 10.3724/SP.J.1087.2012.00638
Abstract | PDF (784KB) | References | Related Articles | Metrics
The original Rival Penalized Competitive Learning (RPCL) algorithm ignores the influence of the geometric structure of a dataset on the weight variation of its nodes. A new RPCL algorithm proposed by Wei Limei et al. (WEI LIMEI, XIE WEIXIN. A new competitive learning algorithm for clustering analysis. Journal of Electronics, 2000, 22(1): 13-18) overcame this drawback by introducing the density of samples to adjust the node weights, but that density was not very objective. This paper defined a new sample density according to the pattern distribution of samples in a dataset, and introduced it into the weight adjustment of RPCL to overcome the disadvantages of the available RPCL algorithms. The improved RPCL algorithm was tested on well-known datasets from the UCI machine learning repository and on synthetic datasets with noisy samples. The accuracy in determining the number of clusters, the run time and the clustering error of the algorithms were compared, and the Rand index, the Jaccard coefficient and the Adjusted Rand index were used to analyze their performance. The experimental results show that the improved RPCL algorithm greatly outperforms the original RPCL and the version of Wei Limei et al., achieving much better clustering results and stronger anti-interference performance on noisy data. All the analyses demonstrate that the improved RPCL algorithm can not only determine the right number of clusters for a dataset according to its sample distribution, but also uncover suitable cluster centers, improve clustering accuracy and quickly approximate the globally optimal clustering result.
Semi-supervised binary classification algorithm based on global and local regularization
Lü Jia
2012, 32(03): 643-645. DOI: 10.3724/SP.J.1087.2012.00643
Abstract | PDF (570KB) | References | Related Articles | Metrics
For semi-supervised classification, global learning alone makes it difficult to obtain a good classification function over the entire input space, while local learning alone yields a good classification function only on specific regions of the input space. Accordingly, a new semi-supervised binary classification algorithm based on mixed local and global regularization was presented, integrating the benefits of both regularizers. The global regularizer smooths the class labels of the data so as to compensate for the insufficient training of the local regularizer, while the local regularizer, built on each neighboring region, makes the class label of each datum have the desired property; together they form the objective function of the semi-supervised binary classification problem. Comparative experiments on several benchmark datasets validate that the average classification accuracy and standard error of the proposed algorithm are clearly superior to those of other algorithms.
Improved fuzzy C-means clustering algorithm based on distance correction
LOU Xiao-jun, LI Jun-ying, LIU Hai-tao
2012, 32(03): 646-648. DOI: 10.3724/SP.J.1087.2012.00646
Abstract | PDF (446KB) | References | Related Articles | Metrics
Based on Euclidean distance, the classic Fuzzy C-Means (FCM) clustering algorithm tends to partition datasets into clusters of equal size, and its clustering accuracy drops when the distribution of data points is not spherical. To solve these problems, a distance correction factor based on dot density was introduced, a distance matrix incorporating this factor was built to measure the differences between data points, and the new matrix was applied to modify the classic FCM algorithm. Two sets of experiments on artificial and UCI data were conducted, and the results show that the proposed algorithm suits non-spherical datasets and outperforms the classic FCM algorithm in clustering accuracy.
Method for multi-attribute group decision-making based on multi-experts' interval numbers
MAO Jun-jun, WANG Cui-cui, YAO Deng-bao
2012, 32(03): 649-653. DOI: 10.3724/SP.J.1087.2012.00649
Abstract | PDF (703KB) | References | Related Articles | Metrics
A group decision-making method based on a non-linear programming model was proposed for multi-attribute problems with multiple experts' interval numbers. The method followed this principle: under given objects and attribute conditions, an expert's weight should be larger when his evaluation is close to the mean of all experts' evaluations, and smaller otherwise. On this basis, the difficulty of determining experts' weights was resolved with an interval distance formula and a programming model. According to aggregation operator theory, the decision matrices were aggregated into a collective decision matrix by the interval weighted arithmetic aggregation operator, and then into overall attribute values by the attribute weights; with a two-dimensional possibility degree, a possibility degree matrix was constructed to rank all objects by the ranking vector method. Finally, a case study verifies the feasibility and rationality of the proposed method.
Personalized recommendation algorithm based on weighted bipartite network
2012, 32(03): 654-657. DOI: 10.3724/SP.J.1087.2012.00654
Abstract | PDF (767KB) | References | Related Articles | Metrics
In the Network-Based Inference (NBI) algorithm, the weight of the edge between user and item is ignored; therefore, highly rated items get no priority in recommendation. To solve this problem, a Weighted Network-Based Inference (WNBI) algorithm was proposed. The edge between user and item was weighted with the item's rating, and resources were allocated according to the ratio of each edge's weight to the total weight of the node's edges, so that highly rated items could be recommended with priority. The experimental results on the MovieLens dataset demonstrate that the number of hit highly rated items by WNBI increases obviously compared with NBI; in particular, when the recommendation list is shorter than 20 items, both the number of hit items and the number of hit highly rated items increase.
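The weighted two-step resource spreading described above can be sketched as follows. This is a minimal illustration of the idea (ratings as edge weights, resource split in proportion to weight), not the paper's exact WNBI formulation; the function name and toy data are hypothetical.

```python
from collections import defaultdict

def wnbi_scores(ratings, target_user):
    """Two-step weighted resource spreading on the user-item bipartite graph.
    `ratings` maps (user, item) -> rating; ratings serve as edge weights,
    so highly rated items receive (and return) more resource."""
    user_items = defaultdict(dict)
    item_users = defaultdict(dict)
    for (u, i), r in ratings.items():
        user_items[u][i] = r
        item_users[i][u] = r

    # Step 1: each item collected by the target user holds one unit of
    # resource and spreads it to its users in proportion to edge weight.
    user_res = defaultdict(float)
    for i in user_items[target_user]:
        total_w = sum(item_users[i].values())
        for u, w in item_users[i].items():
            user_res[u] += w / total_w

    # Step 2: each user spreads the received resource back to items,
    # again in proportion to edge weight.
    item_res = defaultdict(float)
    for u, res in user_res.items():
        total_w = sum(user_items[u].values())
        for i, w in user_items[u].items():
            item_res[i] += res * w / total_w

    # Recommend items the target user has not collected, highest first.
    seen = set(user_items[target_user])
    return sorted(((i, s) for i, s in item_res.items() if i not in seen),
                  key=lambda x: -x[1])

ratings = {("u1", "a"): 5, ("u1", "b"): 3,
           ("u2", "a"): 4, ("u2", "c"): 5,
           ("u3", "b"): 2, ("u3", "d"): 4}
print(wnbi_scores(ratings, "u1"))
```

Setting every weight to 1 recovers plain NBI, which is why high-rating items gain priority only in the weighted variant.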
Collaborative filtering recommendation algorithm based on item attribute and cloud model filling
SUN Jin-gang, AI Li-rong
2012, 32(03): 658-660. DOI: 10.3724/SP.J.1087.2012.00658
Abstract | PDF (593KB) | References | Related Articles | Metrics
The user rating data in traditional collaborative filtering recommendation algorithms are extremely sparse, which results in poor similarity measurement and poor recommendation quality. In view of this problem, this paper presented an improved collaborative filtering algorithm based on item attributes and cloud-model filling, with a new similarity measurement method. The method computed the rating similarity by applying traditional similarity measures to the matrix filled via the cloud model, computed the attribute similarity from the items' attributes, and then combined the two with a weighting factor to obtain the final similarity. The experimental results show that this method can effectively solve the inaccuracy of similarity measurement caused by the extreme sparsity of user rating data, and provides better recommendations than traditional collaborative filtering algorithms.
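The weighted combination of the two similarities can be sketched as below. This is only an illustration of the combining step under stated assumptions: cosine for the rating similarity and Jaccard over attribute sets are our stand-ins (the paper does not specify them here), the cloud-model filling itself is out of scope, and `weight` is a hypothetical parameter.

```python
import math

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def combined_similarity(r1, r2, attrs1, attrs2, weight=0.6):
    """Final similarity = weight * rating similarity + (1 - weight) *
    attribute similarity. The rating vectors are assumed already filled
    (the paper fills missing ratings with a cloud model; any filling
    strategy fits this sketch)."""
    sim_rating = cosine(r1, r2)
    a1, a2 = set(attrs1), set(attrs2)
    sim_attr = len(a1 & a2) / len(a1 | a2) if (a1 | a2) else 0.0  # Jaccard
    return weight * sim_rating + (1 - weight) * sim_attr

# Two items with identical filled rating vectors but partly different genres.
print(combined_similarity([5, 3, 4], [5, 3, 4],
                          ["action", "sci-fi"], ["action"]))
```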
Enhanced M-ary support vector machine by error correction coding for multi-category classification
BAO Jian, LIU Ran
2012, 32(03): 661-664. DOI: 10.3724/SP.J.1087.2012.00661
Abstract | PDF (687KB) | References | Related Articles | Metrics
The M-ary Support Vector Machine (M-ary SVM) for multi-category classification has the advantage of a simple structure but the disadvantage of weak generalization ability. This paper presented an enhanced M-ary SVM algorithm that incorporates error correction coding theory. The main idea is to generate a group of optimal codes from the information codes derived from the original category labels and use them as the basis for training the classifiers; in the feed-forward phase, the output code composed by the sub-classifiers can be corrected by the error detection and correction principle if any identification error occurs. The experimental results confirm the effectiveness of the improved algorithm, which is achieved while introducing as few sub-classifiers as possible.
Multi-tree anti-collision algorithm based on heuristic function
DING Zhi-guo, ZHU Xue-yong, LEI Ying-ke, WANG Xin-ling
2012, 32(03): 665-668. DOI: 10.3724/SP.J.1087.2012.00665
Abstract | PDF (587KB) | References | Related Articles | Metrics
To overcome the low efficiency of traditional binary-tree anti-collision algorithms, an adaptive multi-tree anti-collision algorithm based on a heuristic function was presented. By defining a heuristic function computed from the number of collision bits, the new algorithm can effectively estimate the number of tags in a branch. Because it dynamically adjusts the fork number of the search at different branches and depths, it improves searching efficiency. The theoretical analyses and simulation results show that the new algorithm overcomes the deficiencies of traditional algorithms; for a large number of tags in particular, it reduces the searching and recognition time and increases the throughput of the Radio Frequency IDentification (RFID) system.
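The adaptive forking idea can be sketched with a toy query-tree simulation. This is not the paper's algorithm: the concrete heuristic (fork 4-ways when at least two bits collide, else 2-ways) and the fixed-length tag IDs are simplifying assumptions for illustration.

```python
def query(tags, prefix):
    """Return the tags matching `prefix` and the set of collided bit
    positions (bits where responding tags differ), as Manchester-coded
    responses would reveal to the reader."""
    resp = [t for t in tags if t.startswith(prefix)]
    if not resp:
        return resp, set()
    collided = {k for k in range(len(resp[0]))
                if len({t[k] for t in resp}) > 1}
    return resp, collided

def identify(tags, prefix=""):
    """Adaptive multi-branch query tree: fork 4 ways when the collision-bit
    heuristic suggests many tags under this prefix, otherwise 2 ways.
    Returns (identified tags, number of reader queries issued)."""
    resp, collided = query(tags, prefix)
    queries = 1
    if len(resp) <= 1:
        return list(resp), queries
    # Heuristic: more collided bits -> more tags -> use a wider fork.
    fork = 4 if len(collided) >= 2 else 2
    bits = 2 if fork == 4 else 1
    found = []
    for i in range(fork):
        sub = format(i, f"0{bits}b")
        ids, q = identify(tags, prefix + sub)
        found += ids
        queries += q
    return found, queries

tags = ["0010", "0111", "1000", "1011", "1101"]
ids, n_queries = identify(tags)
print(sorted(ids), n_queries)
```

Widening the fork trades extra (often empty) queries per level for a shallower tree, which pays off exactly when many tags collide under one prefix.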
Information security
Secret image sharing and its algebraic coding method
WANG Xiao-jing, FANG Jia-jia, CAI Hong-liang, WANG Yi-ding
2012, 32(03): 669-678. DOI: 10.3724/SP.J.1087.2012.00669
Abstract | PDF (1792KB) | References | Related Articles | Metrics
Image sharing is an attractive research subject in the field of computer image information security. Seeking a Perfect and Ideal image threshold secret sharing scheme (i.e., a complete image sharing scheme) is one of the unresolved challenging problems. By introducing pixel-matrix secret sharing over the pixel value field GF(2^m) and algebraic-geometry coding, a complete image sharing scheme with a (t, n) threshold structure was achieved in this paper. The scheme encodes a secret image into n shadow images such that the shadow images form a Perfect and Ideal (t, n) threshold structure, while each shadow image has its own visual content assigned at random. This approach can be applied to new information carrier technologies, e.g., high-security multipath network transmission of secret images, distributed storage control of secret images, k-dimensional bar-codes and Popcode. This paper also presented a method, called "partition and paralleling of m-bit pixels", that greatly cuts down the computational time of image sharing over the pixel field GF(2^m).
Vulnerability threat correlation assessment method
XIE Li-xia, JIANG Dian-sheng, ZHANG Li, YANG Hong-yu
2012, 32(03): 679-682. DOI: 10.3724/SP.J.1087.2012.00679
Abstract | PDF (494KB) | References | Related Articles | Metrics
Since present network security assessment methods cannot evaluate vulnerability relevance effectively, a vulnerability threat assessment method based on relevance was presented. Firstly, an attack graph was created as the source data. Secondly, taking the diversity of both pre-nodes and post-nodes into consideration and integrating the Forward In (FI) and Backward Out (BO) methods, the probability of a vulnerability being exploited on multiple attack routes was calculated through formulas optimized from Bayesian networks; the weighted average method was then used to evaluate the risk of a vulnerability on a particular host, and finally quantitative results were obtained. The experimental results show that this method can clearly and effectively describe the security features of systems.
Low-cost RFID authentication protocol based on PUF
HE Zhang-qing, ZHENG Zhao-xia, DAI Kui, ZOU Xue-cheng
2012, 32(03): 683-685. DOI: 10.3724/SP.J.1087.2012.00683
Abstract | PDF (687KB) | References | Related Articles | Metrics
The available security mechanisms for low-cost Radio Frequency IDentification (RFID) systems are either defective or costly. Therefore, this paper proposed an efficient security authentication protocol for low-cost RFID systems based on a Physical Unclonable Function (PUF) and a Linear Feedback Shift Register (LFSR). The protocol provides strong security and privacy, and can resist physical attacks and tag cloning.
Cyclic policy interdependency detection in automated trust negotiation
WANG Kai, ZHANG Hong-qi, REN Zhi-yu
2012, 32(03): 686-689. DOI: 10.3724/SP.J.1087.2012.00686
Abstract | PDF (804KB) | References | Related Articles | Metrics
Since the Automated Trust Negotiation (ATN) process may encounter the infinite cycling problem, the causes of cycles were analyzed and a corresponding detection algorithm was designed to find and terminate negotiation cycles. The interdependency relationships among policies in ATN were modeled as a simple graph and the model's correctness was proved. The process of calculating the simple graph's reachability matrix was analyzed and a cycle detection theorem was given, according to which the algorithm for detecting cyclic policy interdependency was designed. Finally, a case study verifies the feasibility of the algorithm.
Study on usability of privacy control functions in domestic social networking service
SHEN Hong-zhou, ZONG Qian-jin, YUAN Qin-jian, ZHU Qing-hua
2012, 32(03): 690-693. DOI: 10.3724/SP.J.1087.2012.00690
Abstract | PDF (739KB) | References | Related Articles | Metrics
Concerning privacy disclosure in Social Networking Services (SNS), the usability of privacy controls in domestic SNS was studied. From the users' point of view, using experiments and interviews, usability testing and comparative analysis of the privacy controls in Renren and Pengyou were conducted. The results indicate that the privacy control in Pengyou is better than that in Renren, but there is no significant difference between the two sites, and both need improvement: Renren in its centralized privacy-control navigation and centralized privacy setting interface, Pengyou in its decentralized privacy-control navigation and blacklist function.
Security reconsideration of knapsack public-key cryptosystem
2012, 32(03): 694-698. DOI: 10.3724/SP.J.1087.2012.00694
Abstract | PDF (764KB) | References | Related Articles | Metrics
Concerning the fact that knapsack public-key cryptosystems have been broken repeatedly, this paper analyzed the cause. A knapsack public-key sequence is generated by transforming an initial sequence composed of an easy knapsack problem with redundancy; hence, a knapsack public-key sequence is unlikely to be completely random. Most broken knapsack cryptosystems use only confusion, such as modular multiplication, which does not conceal the redundancy of the initial sequence adequately, and exploiting this redundancy is essential for breaking a cryptosystem. Therefore, addition diffusion was introduced in this paper to diffuse the redundancy of the initial sequence, so that an adversary cannot make use of it when attacking the cryptosystem. Inner-item diffusion and inter-item diffusion were illustrated. The analysis indicates that, with diffusion, the cryptosystem is secure against the known attacks.
Hidden identity-based signature scheme with distributed open authorities
LIU Xin
2012, 32(03): 699-704. DOI: 10.3724/SP.J.1087.2012.00699
Abstract | PDF (1095KB) | References | Related Articles | Metrics
Hidden identity-based signature schemes built from bilinear maps do not achieve exculpability or Chosen-Ciphertext Attack (CCA) anonymity, while schemes of this type built on RSA groups suffer significant communication and computation overheads. Concerning this situation, an improved scheme with distributed open authorities was put forward. It satisfies exculpability by using block message signatures, and achieves efficient distribution of the open authority by applying distributed key extraction and simultaneous proofs of knowledge to the underlying threshold encryption scheme. Furthermore, to cope with the shortcoming of traditional serial registration, i.e., vulnerability to denial-of-service attacks, the registration protocol was made concurrent-secure using committed proofs of knowledge. In the random oracle model, the proposed scheme can be proved to fulfill all the required properties. Performance comparison shows that the resulting signature is shorter and the Sign and Verify algorithms are more efficient. Moreover, the threshold decryption by trusted servers is proved concurrent-secure and immune to adaptive adversaries.
Security analysis and improvement of certificateless proxy blind signature
GE Rong-liang, GAO De-zhi, LIANG Jing-ling, ZHANG Yun
2012, 32(03): 705-706. DOI: 10.3724/SP.J.1087.2012.00705
Abstract | PDF (451KB) | References | Related Articles | Metrics
The blind signature is widely applied in electronic voting and electronic payment systems, among others; when issuing a blind signature, the signer does not learn the content of the signed message. This paper analyzed the security of a new certificateless proxy blind signature scheme (WEI CHUN-YAN, CAI XIAO-QIU. New certificateless proxy blind signature scheme. Journal of Computer Applications, 2010, 30(12): 3341-3342) and found a security loophole: the signer can link the signed message with the original message, so the scheme cannot satisfy the security requirements of a blind signature scheme. To solve this problem, an improved scheme that eliminates this defect was proposed.
Attack detection method based on statistical process control in collaborative recommender system
LIU Qing-lin, MENG Ke, LI Su-feng
2012, 32(03): 707-709. DOI: 10.3724/SP.J.1087.2012.00707
Abstract | PDF (471KB) | References | Related Articles | Metrics
Because of the open nature of collaborative recommender systems and their reliance on user-specified judgments for building profiles, an attacker can bias the predictions by injecting a large amount of biased data. In order to keep recommendations authentic, an attack detection method based on Statistical Process Control (SPC) was proposed. The method constructs a Shewhart control chart from the users' deviation from the average number of ratings and detects attackers according to the chart's warning rules, thus improving the robustness of collaborative recommender systems. The experiments demonstrate that the method is effective, with high precision and high recall against a variety of attack models.
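The control-chart idea can be sketched as follows: compute mean and standard deviation of the per-user rating counts and flag profiles outside the control limits. This is a minimal illustration, not the paper's detector; the `k = 2` limit and the toy profiles are assumptions for this sketch (classic Shewhart charts use 3-sigma limits plus additional warning rules).

```python
import statistics

def shewhart_flags(rating_counts, k=2.0):
    """Flag users whose number of ratings falls outside the Shewhart
    control limits (mean +/- k * sigma) computed over all user profiles."""
    counts = list(rating_counts.values())
    mean = statistics.mean(counts)
    sigma = statistics.pstdev(counts)
    ucl, lcl = mean + k * sigma, mean - k * sigma
    return {u for u, c in rating_counts.items() if c > ucl or c < lcl}

# 20 genuine users rating 10-12 items each, plus two injected attack
# profiles that rate far more items to push a target item's prediction.
profiles = {f"user{i}": 10 + (i % 3) for i in range(20)}
profiles.update({"attacker1": 60, "attacker2": 55})
print(shewhart_flags(profiles))
```

One caveat worth noting: heavy attack profiles inflate the estimated sigma, so robust statistics (median, MAD) or sequential warning rules are often preferred in practice.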
Graphics and image technology
Objective quality evaluation method of stereo image based on steerable pyramid
WEI Jin-jin, LI Su-mei, LIU Wen-juan, ZANG Yan-jun
2012, 32(03): 710-714. DOI: 10.3724/SP.J.1087.2012.00710
Abstract | PDF (797KB) | References | Related Articles | Metrics
By analyzing and simulating human visual perception of stereo images, an objective quality evaluation method for stereo images was proposed. The method combined characteristics of the Human Visual System (HVS) with structural similarity, using a steerable pyramid to simulate multi-channel effects, and used a stereo matching algorithm to assess the stereo sense. The experimental results show that the proposed objective method is consistent with subjective assessment of stereoscopic image quality and better reflects the levels of image quality and stereo sense.
Video quality evaluation based on temporal feature of HVS
WANG Hai-feng
2012, 32(03): 715-718. DOI: 10.3724/SP.J.1087.2012.00715
Abstract | PDF (647KB) | References | Related Articles | Metrics
For videos with fast-changing motion scenes, existing simulations of the human visual system are less effective in quality assessment: objective evaluation models ignore the bandpass and masking features of the Human Visual System (HVS) exhibited by subjective testers, which leads to deviation between subjective and objective evaluation results. To improve the evaluation of rapidly changing videos, the visual threshold was determined by a statistical learning method and the bandpass filtering of the HVS was modeled, while the masking of human eyes was emulated through a new attenuation-weight function. The experimental results demonstrate that the proposed method obtains the best performance when the packet loss rate is lower than 5 percent, compared with the Peak Signal-to-Noise Ratio (PSNR) method, the constant-weight evaluation model and the rule evaluation model; with the bandpass filter function, it also improves execution efficiency. In brief, the proposed method not only improves evaluation performance but also reduces computational complexity.
Fast collision detection method in virtual surgery
XIE Qian-ru GENG Guo-hua
2012, 32(03): 719-721. DOI: 10.3724/SP.J.1087.2012.00719
The paper proposed an efficient collision detection algorithm using Bounding Volume Hierarchy (BVH) in order to improve the real-time performance of virtual surgery. The main contribution of this work was to use mixed bounding volume hierarchies to represent different objects according to their topology structures. First, the surgical instruments and objects were represented as hierarchy trees. Then an intersection test between spheres and oriented bounding boxes was implemented to quickly eliminate disjoint parts. After that, a more accurate triangle collision test was used to determine the contact status of the overlapping parts. The experimental results show that the proposed algorithm achieves higher speed than the single bounding box algorithm.
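The sphere-versus-oriented-bounding-box overlap test used to eliminate disjoint parts quickly can be sketched as follows (a minimal illustration under assumed data layouts, not the paper's implementation):

```python
import numpy as np

def sphere_obb_overlap(center, radius, obb_center, obb_axes, obb_half):
    """Test whether a sphere intersects an oriented bounding box.

    obb_axes: 3x3 matrix whose rows are the box's unit axes.
    obb_half: half-extents of the box along each of its axes.
    """
    # Express the sphere center in the box's local frame.
    d = obb_axes @ (np.asarray(center, float) - np.asarray(obb_center, float))
    # Clamp to the box extents to find the closest point on the OBB.
    half = np.asarray(obb_half, float)
    closest = np.clip(d, -half, half)
    # Overlap iff that closest point lies within the sphere radius.
    return float(np.sum((d - closest) ** 2)) <= radius ** 2
```

For an axis-aligned unit box at the origin, a sphere at (1.5, 0, 0) with radius 1 overlaps, while one at (3, 0, 0) with radius 1.5 does not.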
Contour correspondence algorithm based on circumcircles of triangles
CHEN Min ZHANG Zhi-yi TIAN Su-lei ZHANG Xian
2012, 32(03): 722-724. DOI: 10.3724/SP.J.1087.2012.00722
Since the current contour correspondence algorithms are apt to cause erroneous correspondence relationship and their calculation efficiency is low, a method based on the circumcircles of triangles was proposed in this paper. Each contour in every cross-section was triangulated first, and then the circumcircles of triangles that had been legalized were extracted. By investigating the correspondence relationship among circumcircles, the correspondence relationship among contours, which lay on adjacent cross-sections, can be determined. The experimental results show that the method proposed in this paper can robustly and rapidly process objects of complicated shapes.
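The circumcircle of a triangle, on which the correspondence test relies, can be computed directly from the three vertices (a minimal sketch, not the paper's code):

```python
import numpy as np

def circumcircle(a, b, c):
    """Return (center, radius) of the circle through 2-D points a, b, c."""
    a, b, c = (np.asarray(p, float) for p in (a, b, c))
    # The perpendicular-bisector conditions |x-a|^2 = |x-b|^2 = |x-c|^2
    # reduce to a 2x2 linear system in the center coordinates.
    A = 2.0 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    center = np.linalg.solve(A, rhs)
    return center, float(np.linalg.norm(center - a))
```

For a right triangle such as (0,0), (4,0), (0,3), the center is the midpoint of the hypotenuse, (2, 1.5), with radius 2.5.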
Variational image zooming based on nonlocal total variation
JIANG Dong-huan XU Guang-bao DONGYE Chang-lei
2012, 32(03): 725-728. DOI: 10.3724/SP.J.1087.2012.00725
A regularized image zooming model based on nonlocal total variation was proposed to address the blocky effects of the Chambolle image zooming model. It consisted of a regular term and a fidelity term. The zoomed image was obtained by minimizing the variational functional, which used the nonlocal total variation norm to measure the regularity of the image. Unlike traditional interpolation-based zooming, the new algorithm incorporated the variational model, and the nonlocal operator allowed it to exploit the information of the image content widely, rather than only a single pixel or the gray and gradient information in a neighborhood, thereby avoiding the blocky effects of Chambolle's model. The experimental results show that the new algorithm better preserves edges and details, and achieves better effects than Chambolle's method and spline interpolation.
Mixed noise filtering via limited grayscale pulse coupled neural network
CHENG Yuan-yuan LI Hai-yan CHEN Hai-tao SHI Xin-ling
2012, 32(03): 729-731. DOI: 10.3724/SP.J.1087.2012.00729
A new method of filtering mixed noise based on limited grayscale and Pulse Coupled Neural Network (PCNN) was proposed for an image contaminated by salt and pepper noise and Gaussian noise. First, salt and pepper noise was identified according to the limited grayscale in a detecting window. Then the noise was filtered via mean filter in a filtering window. Subsequently, Gaussian noise was identified by using the time matrix of PCNN. Finally the Gaussian noise was filtered by some different filters based on variable step. The experimental results show that the proposed method has more advantages not only in filtering effects but also in objective evaluation indexes of Peak Signal-to-Noise Ratio (PSNR) and Improved Signal-to-Noise Ratio (ISNR) compared to some traditional methods.
Improved set partitioning in hierarchical trees algorithm based on adaptive coding order
HUANG Ke-kun
2012, 32(03): 732-735. DOI: 10.3724/SP.J.1087.2012.00732
In order to obtain better compression of image edges, an improved Set Partitioning In Hierarchical Trees (SPIHT) algorithm was proposed, which scans first the coefficients surrounded by more significant coefficients. The coefficients or sets were sorted according to the number of surrounding significant coefficients before being coded, and the previous significant coefficients were refined as soon as the sets with any surrounding significant coefficients had been scanned. The scanning order was determined adaptively and required no extra storage. The method can code more significant coefficients at a specified compression ratio. The experimental results show that the method improves PSNR and the subjective visual experience compared with SPIHT.
Adaptive median filtering algorithm based on slope
LIU Shu-juan ZHAO Ye DONG Rui WANG Zhi-wei YANG Fang-fang
2012, 32(03): 736-738. DOI: 10.3724/SP.J.1087.2012.00736
To estimate and remove salt-and-pepper noise points in an image accurately, a new adaptive median filtering algorithm was proposed. First, if the pixel in the center of an n×n (n is an odd integer not less than three) template was the extreme value of all the pixels in the window, it was regarded as a probable noise point. Then the differences of the sorted pixel gray values and the slope of the gray values within the template region were used to determine whether the candidate noise points were real noise points. Finally, mean filtering was applied to the noise pixels. Compared with the median filter, the noise detection condition of this method is largely enhanced, and the method can both effectively suppress noise and maintain details.
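The extreme-value detection step followed by mean filtering of the flagged pixels can be sketched as below; this is a minimal illustration that omits the slope-based confirmation, and the fallback to the window median is an assumption of this sketch:

```python
import numpy as np

def adaptive_mean_filter(img, n=3):
    """Replace window-extreme pixels (salt-and-pepper candidates)
    with the mean of the non-extreme pixels in the window."""
    assert n % 2 == 1 and n >= 3
    r = n // 2
    padded = np.pad(img.astype(float), r, mode='edge')
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + n, j:j + n]
            # A center pixel equal to the window extreme is a noise candidate.
            if img[i, j] in (win.min(), win.max()):
                good = win[(win != win.min()) & (win != win.max())]
                # Fall back to the window median when every pixel is extreme.
                out[i, j] = good.mean() if good.size else np.median(win)
    return out
```

On a uniform 100-gray image with a single 255 "salt" pixel, the noisy pixel is restored to 100 while the rest of the image is unchanged.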
Edge-preserving filter with similarity noise detection for impulse noise reduction
LIU Xin GE Hong-wei XU Bing-chun
2012, 32(03): 739-741. DOI: 10.3724/SP.J.1087.2012.00739
In order to improve the filtering effect on noisy images, this paper put forward a new filtering algorithm consisting of three stages. First, the similarities of the pixels in the image were used to detect the impulse noise. Then the filter window was divided into eight main directions to determine the directions of the edges. At last, the impulse noise pixels were restored using an edge-preserving method. The simulation results indicate that this algorithm can not only accurately detect the noise points, but also protect the noise-free pixels and the boundaries in the noisy image when the noise density is small.
Anisotropic diffusion denoising method based on image feature
KE Dan-dan CAI Guang-cheng CAO Qian-qian
2012, 32(03): 742-745. DOI: 10.3724/SP.J.1087.2012.00742
Concerning image denoising filter methods, the model proposed by J. Weickert does not distinguish smooth areas from other image features: the diffusion in smooth areas also follows the eigenvalues of the local structure characteristics, thus inevitably producing false edges in smooth areas. An improved anisotropic diffusion method was proposed. This method first used the Wiener filter to weaken the influence of noise on the image; then coherence was applied to judge the image features correctly, such as edge regions, smooth areas and T-shaped corners, and the diffusion tensor's eigenvalues in each region were set based on the image features. The experimental results show that the improved method can not only achieve better results in eliminating noise and protecting edges, but also remove false edges in smooth areas effectively and obtain a higher peak signal-to-noise ratio.
Speckle reduction of SAR image based on morphological Haar wavelet
LI Min ZHANG Zi-you LU Lin-ju
2012, 32(03): 746-748. DOI: 10.3724/SP.J.1087.2012.00746
The existing speckle reduction algorithms of Synthetic Aperture Radar (SAR) image can efficiently reduce the speckle effects but unfortunately smear edges and details. A new method, based on morphological Haar wavelet, was proposed. In this method, the SAR image was firstly decomposed by 2-D morphological Haar wavelet. Thus, the edges, details and textures were well preserved in low-frequency sub-band. The speckle noise was mainly distributed in high-frequency sub-bands. Then, the average filtering and median filtering were run on the corresponding high frequency sub-bands according to the noise features. Finally, 2-D morphological Haar wavelet inverse transform was carried on to low-frequency sub-band coefficients and filtered high-frequency sub-bands coefficients to reconstruct SAR image accurately. The experimental results show that the proposed method can not only filter the speckle noise efficiently, but well preserve the image textures and details of SAR image. The proposed method is better than the traditional Lee filtering, Frost filtering, Kuan filtering and wavelet soft-threshold overall.
New colorful images segmentation algorithm based on level set
CHEN Yuan-tao XU Wei-hong WU Jia-ying
2012, 32(03): 749-751. DOI: 10.3724/SP.J.1087.2012.00749
Since the functional under consideration is non-convex, the calculation results of the image segmentation model often fall into local minima. Based on the global vector-valued image segmentation of active contours, global vector-valued image segmentation and image denoising were integrated into a new variational form within the framework of global minimization. The new model is easy to construct and requires less computation. Compared to the classical level set method, tedious re-initialization of the level set can be avoided. Analyses on artificial images and real images verify that the new method gives better segmentation results.
Segmentation method for crop disease leaf images based on watershed algorithm
REN Yu-gang ZHANG Jian LI Miao YUAN Yuan
2012, 32(03): 752-755. DOI: 10.3724/SP.J.1087.2012.00752
A new method based on watershed algorithm was proposed to raise the segmentation accuracy of the crop disease leaf images. At first, distance transformation and watershed segmentation were conducted on the binary crop disease leaf images to get the background marker, and the preliminary foreground markers were generated by extracting the regional minimum from the reconstructed gradient images, and then some fake foreground markers were eliminated by the further filter. In the next step, both background markers and foreground markers were imposed on the gradient image by the compulsive minimum algorithm. At last, the watershed transformation was carried out on the modified gradient image. Lots of cucumber disease leaf images were segmented effectively using the method. The results of experiment indicate that disease spots can be separated precisely from the crop leaf images. Additionally, the segmentation results are not influenced by leaf texture and its accuracy is up to more than 90 percent, so the method has certain validity and practical value.
Application of PDE model based on K-SVD in millimeter wave image restoration
SHANG Li SU Pin-gang
2012, 32(03): 756-758. DOI: 10.3724/SP.J.1087.2012.00756
When an image contaminated by large noise or with lower resolution is processed by the traditional Partial Differential Equation (PDE) model, the stable solutions of the PDE can generate a distinct step effect and the restored image quality is relatively poor. Therefore, a new PDE image restoration method based on K-Singular Value Decomposition (K-SVD) was proposed and successfully used to restore MilliMeter Wave (MMW) images. K-SVD is a sparse representation method for images: an image can be denoised by sparse estimation with K-SVD, and for images with large noise variance in particular, K-SVD has better denoising robustness. First, the MMW image was denoised by K-SVD, and then a PDE method based on Total Variation (TV) was utilized to restore the denoised images obtained by K-SVD. In the test, a simulated MMW image and a real MMW image were used respectively to verify the proposed algorithm, and the results were compared with those of K-SVD and PDE, using the Peak Signal-to-Noise Ratio (PSNR) criterion to measure the restored images. In terms of PSNR values and the visual effect of images restored under different noise variances, the simulation results show that the proposed method can efficiently denoise MMW images.
Automatic image registration based on feature region
SHU Xiao-hua SHEN Zhen-kang
2012, 32(03): 759-761. DOI: 10.3724/SP.J.1087.2012.00759
In order to solve the problem of feature point definition and extraction in feature-based image registration, an approach using feature regions instead of feature points was proposed. The Moravec operator was applied to choose the preparatory feature regions, and rotation-invariant Zernike moments were used to characterize them. A two-step strategy was employed for matching the feature regions: an initial matching based on a self-organizing mapping network, followed by a fine matching. An automatic image registration framework was established and image registration was realized. The experiments show that this method can effectively extract the image features and match them accurately, and the registration process is completely automated.
Application of neighborhood feature in point cloud registration
HE Yong-xing OU Xin-liang KUANG Xiao-lan
2012, 32(03): 762-765. DOI: 10.3724/SP.J.1087.2012.00762
A new registration method of large-scale scattered point clouds based on invariant features of neighborhood was proposed, which consisted of preliminary registration and exact registration. Firstly, the target point set was weighted to reduce the amount of corresponding point-pairs efficiently. Secondly, on the basis of distance features between points and their neighborhood centroids, this paper added an additional geometric feature vector of included angle to eliminate bad point-pairs, and then the preliminary registration was completed. Finally, the Iterative Closest Point (ICP) algorithm with improved invariant feature was used to register accurately. The experimental results indicate the good results of the preliminary registration and the better results of the exact registration, which have met the requirement of registering point clouds from different viewpoints.
Breakage detection for grid images based on improved Harris corner
GAO Qing-ji XU Ping YANG Lu
2012, 32(03): 766-769. DOI: 10.3724/SP.J.1087.2012.00766
Concerning the breakage warning problem of grid fences, a grid breakage detection algorithm based on improved Harris corners was proposed. The traditional Harris corner extraction algorithm computes, for every image pixel, the first derivatives in the vertical and horizontal directions and the corner response function value, which makes the method time-consuming. Therefore, a gray "similarity" parameter was introduced to measure the gray similarity between a pixel and its surrounding pixels, through which pseudo corners were filtered out, reducing the Harris corner extraction time. Then, by analyzing the corner distribution, the breakage areas can be located. Breakage detection experiments were conducted on various fence images captured by a robot, and the results indicate that the Harris corner extraction time decreases largely and the proposed algorithm is effective, meeting the practical application requirements of fence breakage detection.
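The gray-"similarity" pre-filter idea can be sketched as follows: pixels whose 8-neighbourhood is almost entirely similar to the centre (flat regions) are excluded before the expensive corner response is computed. The thresholds `t` and `flat_count` are assumptions of this sketch, not values from the paper:

```python
import numpy as np

def similarity_candidates(img, t=10, flat_count=7):
    """Return a boolean mask of pixels worth testing for corners.

    A pixel with at least `flat_count` of its 8 neighbours within gray
    difference `t` is considered flat and skipped.
    """
    f = img.astype(float)
    similar = np.zeros(img.shape, int)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            # Count neighbours whose gray level is close to the centre.
            shifted = np.roll(np.roll(f, di, 0), dj, 1)
            similar += (np.abs(f - shifted) <= t).astype(int)
    return similar < flat_count
```

On a synthetic image of a bright square on a dark background, flat interior pixels are filtered out while the square's corners survive as candidates, so the Harris response only needs computing on a small fraction of pixels.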
Accurate estimation of blurred motion direction based on edge detection of spectrum
GUO Hong-wei
2012, 32(03): 770-772. DOI: 10.3724/SP.J.1087.2012.00770
With regard to the problem of estimating the blurred direction of motion-blurred image, the degradation model of blurred image in uniform linear motion was analyzed in detail, and a method which can estimate blurred motion direction accurately in frequency domain was proposed. Firstly, the spectrum of degraded images was calculated, Laplacian of Gaussian (LoG) edge detection operator was used to detect the contour of dark stripes in spectrum; then the Radon transform was employed to find the perpendicular angle to the dark stripes; finally, according to the aspect ratio of the image to determine the relationship between the spectrum dark stripes and blur direction, the blur direction was calculated. The simulation results show that estimated results are very accurate and the estimation error of blurred direction is no more than one degree when the blur scale of degraded image varies from 7 to 30 pixels.
Texture classification based on quaternion wavelet transform and multifractal characteristics
GAO Zhi ZHU Zhi-hao XU Yong-hong HONG Wen-xue
2012, 32(03): 773-776. DOI: 10.3724/SP.J.1087.2012.00773
The paper incorporated the multifractal analysis method into the Quaternion Wavelet Transform (QWT), taking advantage of the rotation-invariant and multifractal properties of texture images and making up for the wavelet transform's inability to decompose the input image into multiple orientations in texture classification. The texture classification experiment on images from UIUC shows that the method has higher classification accuracy, with an average correct classification rate of 96.69%, which proves that this texture classification method is reasonable and effective.
Network and communications
Low complexity sphere decoding algorithm in LTE system
LI Xiao-wen PENG De-yi TAN Bing WANG Zhen-yu
2012, 32(03): 777-779. DOI: 10.3724/SP.J.1087.2012.00777
The sphere decoding algorithm has an optimal Bit Error Ratio (BER) performance that approximates Maximum Likelihood (ML) detection in the Long Term Evolution (LTE) system. Since the computational complexity and required hardware resources of this algorithm increase significantly for the detection of 16-QAM and 64-QAM modulated signal streams, an improved sphere decoding algorithm with a changed symbol search strategy was proposed. The algorithm adopted a given symbol search scheme at each detection layer, combined with a new, dynamically modified definition of the sphere radius. Both the traditional and improved algorithms were simulated under Rayleigh fading channel conditions. The simulation results show that the improved algorithm has a small BER degradation, while effectively reducing both the computational complexity and the required hardware resources compared to the traditional sphere decoding algorithm.
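The basic depth-first sphere search that such improvements build on can be sketched for a real-valued system (a minimal illustration, not the paper's algorithm; the 4-PAM alphabet corresponds to the real part of 16-QAM):

```python
import numpy as np
from itertools import product

SYMBOLS = (-3.0, -1.0, 1.0, 3.0)  # 4-PAM alphabet (real part of 16-QAM)

def sphere_decode(H, y):
    """Depth-first sphere decoder for y = H x + n with x in SYMBOLS^n."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y                       # rotate into the triangular system
    n = H.shape[1]
    best = {'d2': np.inf, 'x': None}

    def search(layer, x, d2):
        if d2 >= best['d2']:
            return                    # prune: already outside the sphere
        if layer < 0:
            best['d2'], best['x'] = d2, x.copy()
            return
        for s in SYMBOLS:
            x[layer] = s
            # Partial residual uses only the upper-triangular tail of R.
            e = z[layer] - R[layer, layer:] @ x[layer:]
            search(layer - 1, x, d2 + e * e)

    search(n - 1, np.zeros(n), 0.0)
    return best['x']
```

With an unbounded initial radius the search returns exactly the ML solution, which can be checked against an exhaustive search over the alphabet.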
Adaptive temporal-spatial error concealment method based on AVS-P2
RUAN Ruo-lin HU Rui-min CHEN Hao YIN Li-ming
2012, 32(03): 780-782. DOI: 10.3724/SP.J.1087.2012.00780
The error concealment is an important technique in the video transmission, and it can ensure the reconstruction video quality and efficiently recover the data loss and the data errors in the transmission process caused by severe transmission environments. In order to enhance the error resilience of AVS-P2, the paper proposed a new adaptive temporal-spatial error concealment method based on the redundancy motion vectors. To conceal a lost block, the paper used the spatial error concealment for the I-frame macroblocks, and used the temporal error concealment for the non-I-frame macroblocks. At the same time, according to the motion intensity of the macroblocks, it used the default error concealment of AVS-P2 and error concealment method based on redundancy motion vectors, respectively. Lastly, the proposed algorithm was realized based on the platform of the AVS-P2 RM52_20080721. The simulation results show that the proposed method is significantly better than the existing techniques in terms of both objective and subjective quality of reconstruction video.
New mixed blind equalization algorithm
GENG Tian-yu SHU Qin YING DA-li
2012, 32(03): 783-786. DOI: 10.3724/SP.J.1087.2012.00783
The Constant Modulus Algorithm (CMA) is only suited to equalizing signals with a single modulus value, not high-order Quadrature Amplitude Modulation (QAM) signals. Combining the virtues of CMA and the Decision-Directed (DD) algorithm, an improved mixed blind equalization algorithm was presented. First, the improved algorithm opens the system eye using the CMA+DD parallel algorithm, and then switches to the DD algorithm to further reduce the residual error. The simulation results indicate that the improved algorithm has a very fast convergence speed as well as a very small residual error for high-order QAM signals, and the recovered constellation is very compact. Meanwhile, the improved algorithm can correct the phase and track the channel.
Uneven clustering routing algorithm based on minimum spanning tree
ZHANG Ming-cai XUE An-rong WANG Wei
2012, 32(03): 787-790. DOI: 10.3724/SP.J.1087.2012.00787
The existing uneven clustering routing algorithms do not consider the optimal path selection between cluster heads and base station, which leads to unbalanced energy consumption. In order to balance energy consumption of transmission paths, this paper proposed an uneven clustering routing algorithm based on minimum spanning tree. The algorithm utilized residual energy of nodes and the distance between nodes and base station to select cluster heads, and then generated minimum spanning tree to search the optimal transmission paths, which reduced energy consumption on the transmission paths and effectively solved unbalanced energy consumption. The theoretical analysis and experimental results show that the algorithm is better than the existing Energy Efficient Uneven Clustering (EEUC) and Energy Balancing Clustering Algorithm (EBCA) in terms of the number of live nodes and energy consumption.
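The minimum-spanning-tree construction used to pick the transmission paths can be sketched with Prim's algorithm over the cluster heads and the base station (illustrative coordinates and naming; link cost here is plain Euclidean distance, a simplification of a real energy model):

```python
import heapq
import math

def prim_mst(points):
    """Prim's algorithm on the complete Euclidean graph over `points`.

    points[0] is treated as the base station; returns a list of tree
    edges (i, j) giving the low-cost transmission backbone.
    """
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = {0}                          # grow the tree from the base station
    heap = [(dist(0, j), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while len(in_tree) < n:
        w, i, j = heapq.heappop(heap)
        if j in in_tree:
            continue                       # stale entry, node already attached
        in_tree.add(j)
        edges.append((i, j))
        for k in range(n):
            if k not in in_tree:
                heapq.heappush(heap, (dist(j, k), j, k))
    return edges
```

For cluster heads on a line, the MST chains them toward the base station instead of forcing each head to transmit directly, so the total path weight is never worse than the direct (star) topology.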
Performance of network coding protocol based epidemic routing
HAN Xu YANG Yu-wang WANG Lei
2012, 32(03): 791-794. DOI: 10.3724/SP.J.1087.2012.00791
In Epidemic Routing (ER) networks, nodes with different communication radii can easily cause unstable network performance. A network model that combines network coding and epidemic routing can solve this problem: compared with traditional epidemic routing, Network Coding Based Epidemic Routing (NCER) transmits packets with network coding. In order to compare the performance of ER and NCER, a probability model of the transmission delay of the network was built. The comparative results of the two protocols under this probability model show that NCER is more efficient and stable than ER, and the correctness of the probability model was proved in simulation. Finally, according to the model evaluation results, a scheme was given to reduce the network transmission delay.
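The random linear coding over GF(2) that NCER-style transmission relies on can be sketched as follows: each relayed packet is a random XOR combination of the source packets, and a destination decodes once it has collected k linearly independent combinations. This is a minimal sketch of the coding idea, not the paper's protocol:

```python
import numpy as np

def encode(packets, rng):
    """Return (coefficients, coded packet) for one random GF(2) combination."""
    k = len(packets)
    while True:
        c = rng.integers(0, 2, k)
        if c.any():                       # reject the all-zero combination
            break
    coded = np.bitwise_xor.reduce([p for p, b in zip(packets, c) if b])
    return c, coded

def decode(coeff_rows, coded_rows, k):
    """Gaussian elimination over GF(2); returns the packets, or None if rank < k."""
    A = np.array(coeff_rows, dtype=np.uint8) % 2
    B = np.array(coded_rows, dtype=np.uint8)
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, len(A)) if A[r, col]), None)
        if piv is None:
            return None                   # not enough independent combinations yet
        A[[row, piv]], B[[row, piv]] = A[[piv, row]], B[[piv, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]            # XOR-eliminate this column
                B[r] ^= B[row]
        row += 1
    return [B[r] for r in range(k)]
```

A receiver simply keeps collecting coded packets until `decode` succeeds, at which point the original packets are recovered exactly.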
Fault detection approach of network storage based on random packet dropout network
YANG Guang ZHOU Jing-li XIONG Ting JI Hou-ling
2012, 32(03): 795-799. DOI: 10.3724/SP.J.1087.2012.00795
Focusing on random packet loss, the high failure rate of fault detection for network storage systems with random packet loss was studied, and a Fault Detection (FD) approach for network storage with random packet dropout was presented. The approach used residual generation, residual evaluation and the False Alarm Rate (FAR). First, residual generation was carried out in the periodic system framework. Then, residual evaluation was obtained by making use of the stochastic properties of the random packet loss. Finally, the performance evaluation of the FAR computation was fulfilled with the assistance of Chebyshev's inequality, and the fault detection algorithms were given. The simulation results show that this approach can effectively detect faults and is sensitive to them.
Improvement on multi-hop performance of underground mine emergency communication system based on WMN
ZHU Quan JIANG Xin-hua ZOU Fu-min XU Shao-feng
2012, 32(03): 800-803. DOI: 10.3724/SP.J.1087.2012.00800
The multi-hop transmission of the multimedia emergency communication system based on Wireless Mesh Network (WMN) in underground mines has two problems: low basic bandwidth and high multi-hop transmission attenuation. This paper aimed to improve the multi-hop transmission performance of the system. A trunk-line network structure for the WMN-based multimedia emergency communication system in underground mines was proposed; its transmission model was established, and the main factors affecting the transmission performance were investigated. A multi-radio node structure for the multi-hop mesh backbone network based on 802.11n was then proposed, which solved the two multi-hop transmission problems. The experimental results show that it offers a basic bandwidth of more than 165Mbps, and in a limited 60Mbps environment the bandwidth attenuation per hop is less than 1%, basically satisfying the application requirements of multimedia transmission in underground mines.
Border node placement method in wireless sensor networks
ZHOU Yun ZHAN Hua-wei
2012, 32(03): 804-807. DOI: 10.3724/SP.J.1087.2012.00804
Because the base stations can only be placed at the border of the monitored area, the border placement problem was formally defined. With the goal of placing the minimum number of base stations to cover as much of the monitored area as possible, an improved placement algorithm with polynomial time complexity was proposed. The coverage percentage of the initial algorithm was analyzed first. When the initial coverage percentage is larger than the guaranteed coverage percentage, it is possible to reduce the size of the initial placement set. Finally, the placement set was gradually improved to achieve the minimum placement set. The results indicate that the coverage percentage and placement set of the proposed algorithm are superior to those of a random algorithm in different test environments.
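The two-phase idea (build an initial placement, then shrink it while the guaranteed coverage still holds) can be sketched with a greedy set-cover pass followed by redundancy removal. This is an illustrative sketch, not the paper's algorithm; the `cover` mapping of candidate border sites to covered targets is an assumed input:

```python
def place_stations(cover, n_targets, guarantee):
    """cover: {candidate_site: set of covered targets}.

    Greedily pick sites until `guarantee` fraction of targets is covered,
    then drop any site whose removal still keeps the guarantee.
    """
    need = guarantee * n_targets
    chosen, covered = [], set()
    sites = dict(cover)
    while len(covered) < need and sites:
        # Pick the site covering the most still-uncovered targets.
        best = max(sites, key=lambda s: len(sites[s] - covered))
        if not (sites[best] - covered):
            break
        chosen.append(best)
        covered |= sites.pop(best)
    # Improvement phase: remove redundant stations.
    for s in list(chosen):
        rest = (set().union(*(cover[t] for t in chosen if t != s))
                if len(chosen) > 1 else set())
        if len(rest) >= need:
            chosen.remove(s)
    return chosen
```

With ten targets, a 90% guarantee and three candidate sites, the sketch picks the two sites that together reach the guarantee and keeps both, since neither alone suffices.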
Impact of fading channel on decision fusion in wireless sensor networks
XIAO Lei ZHANG Zhi-feng
2012, 32(03): 808-811. DOI: 10.3724/SP.J.1087.2012.00808
The wireless channel for monitoring outdoor environments is very complex and is impacted by many factors such as multipath fading and noise, which severely deteriorate the quality of signal reception. Further research on the fading channel is helpful for better signal reception and improved system performance. The factors affecting the fading channel were analyzed in detail, and the transmission performance over the fading channel was researched. The impact of the fading channel on the probability of system detection was simulated, and multi-bit local decisions in decision fusion over the fading channel were researched. The simulation results suggest that the probability of system detection is lower than over a non-fading channel, and that a sensor transmitting a one-bit decision is optimal.
Weak signal acquisition method for GPS software receiver
LI Shan YI Qing-ming CHEN Qing SHI Min
2012, 32(03): 816-818. DOI: 10.3724/SP.J.1087.2012.00816
For high sensitivity and operation efficiency in weak signal acquisition of Global Positioning System (GPS) software receiver, a differential coherent accumulated acquisition algorithm based on Fast Fourier Transform (FFT) was proposed. The limitation of coherent integration time was overcome by block accumulation of demodulated GPS intermediate frequency data. Based on FFT frequency shift characteristics, a Doppler circular frequency search was used to achieve low computation instead of frequency compensation search. The loss in frequency was resolved by different down conversions. Compared to the original incoherent accumulation, Signal-to-Noise Ratio (SNR) was improved by differential coherent accumulation of coherent results. The weak signal in a -39dB poor SNR environment was successfully acquired in experiments. High sensitivity and operation efficiency of the proposed algorithm were confirmed by the experimental results.
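The FFT-based circular correlation underlying such acquisition can be sketched on synthetic data (a minimal sketch; the random ±1 code stands in for a real C/A code, and the delay and noise level are assumed for illustration):

```python
import numpy as np

def fft_correlate(signal, code):
    """Circular cross-correlation via FFT; the peak index gives the code phase."""
    return np.abs(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(code))))

# Synthetic received signal: the local code delayed by 200 chips plus noise.
rng = np.random.default_rng(2)
code = rng.choice([-1.0, 1.0], 1023)       # pseudo-random spreading code
received = np.roll(code, 200) + 0.5 * rng.standard_normal(1023)
phase = int(np.argmax(fft_correlate(received, code)))
```

The correlation peak lands at the 200-chip delay; a full acquisition would repeat this search over a grid of Doppler frequency hypotheses and, as in the proposed algorithm, accumulate differentially coherent results across blocks.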
Comparison and optimization of light source design schemes for indoor optical wireless communication based on light emitting diode
XU Chun
2012, 32(03): 819-822. DOI: 10.3724/SP.J.1087.2012.00819
The existing indoor optical wireless communication systems can not provide good wireless coverage uniformity and are not suitable for commercial applications. Two distributed light design schemes were proposed to solve the above problems, which increased the group number of LED array, and then increased the uniformity of light distribution of every group. The simulation results indicate that with the same number of LED chips, the distributed light design scheme is superior to the traditional one in uniformity of wireless signal coverage and complexity of commercial implementation, and can avoid the coverage valley.
Network and distributed technology
Just-in-time compilation for improving response speed of user interaction
LIU Li GU You-peng TANG De-bo
2012, 32(03): 823-826. DOI: 10.3724/SP.J.1087.2012.00823
For the bottleneck code that impacts the user interaction speed, the current Just-In-Time (JIT) compiler cannot select it accurately or accelerate it during program start-up phase. The code selection strategy and compiling mode of current JIT compiler were improved in this paper. According to the new code selection strategy, application could select the code to be compiled on its own initiative in a given situation, which ensured all the bottleneck codes to be selected and accelerated. As for the new compiling mode, the native code could be saved and be used for the next program running, which ensured bottleneck code to be accelerated even during program start-up phase. The experimental result shows that the response speed of user interaction by using the improved JIT compiler is about two times that by using the old JIT compiler.
Web software complexity metrics based on projection pursuit
ZENG Yi HU Xiao-wei LI Juan
2012, 32(03): 827-830. DOI: 10.3724/SP.J.1087.2012.00827
Web software complexity metrics play a very important role in software development. Traditional software complexity metrics mainly target non-Web applications written in languages like C/C++ and Ada. This paper took object-oriented Web software based on the Struts framework as the research subject and put forward three complexity metrics suitable for Web-Struts software. It also proposed a method for computing Web software complexity metrics based on an Artificial Fish Swarm Algorithm (AFSA) with a cross operator and the Projection Pursuit (PP) algorithm. After integrating multiple complexity metrics into a one-dimensional comprehensive projection value, the optimized projection direction could be acquired from sample data, and the comprehensive projection values of the evaluation grades could be determined. By comparing the comprehensive projection values of the testing samples with the level intervals, the comprehensive metrics result could finally be obtained. The example evaluation results prove the feasibility and effectiveness of the proposed method.
Dynamic configuration model of virtual machine execution environment based on cooperative VMM
LU Jian-ping GUO Yu-dong WANG Xiao-rui ZHAO Yu-chun
2012, 32(03): 831-834. DOI: 10.3724/SP.J.1087.2012.00831
Current Virtual Machine Monitors (VMM) cannot flexibly customize different virtual machine execution environments. This paper illustrated the feasibility of a dynamic configuration model that makes better use of the VMM, presented a model for a user-configured virtual machine execution environment, and implemented it on a cooperative VMM. The model makes full use of the control fields of the Virtual Machine Control Structure (VMCS) during the configuration process. With this model, users can dynamically reconfigure the features of the virtual machine execution environment at run time, and build virtual machines with different characteristics on one VMM at the same time. The test results show that the model improves the usability of the VMM.
Typical applications
Power saving scheme for supercomputing system based on unified resource management
TIAN Bao-hua JIANG Ju-ping LI Bao-feng ZHANG Xiao-ming QU Wan-xia
2012, 32(03): 835-838. DOI: 10.3724/SP.J.1087.2012.00835
This paper presented a power saving scheme based on system-level resource management for the TH-1A supercomputer. The scheme introduced a unified framework for the centralized management of various power-consuming resources, i.e., computing elements, communication components, power supplies and cooling devices, and applied several efficient management policies, such as LRU, within the framework.
Low-power scheduling scheme for wireless physiological monitoring system
LIU Hao LI Wei-min LI Xiao-li
2012, 32(03): 839-842. DOI: 10.3724/SP.J.1087.2012.00839
Based on an Ultra-WideBand (UWB) Body Area Network (BAN) and heterogeneous biosensors, a wireless physiological monitoring system was designed that can acquire and process multiple physiological signals. To reduce the total energy consumption and provide long-term continuous monitoring of the wearer's health and safety, a low-power scheduling scheme was further proposed. According to the physiological state machine of the monitoring system, a coordinator adaptively determines the biosensor set for the next monitoring cycle, and the selected biosensors cooperatively process the heterogeneous data. The simulation results show that, under the same monitoring conditions, the proposed scheme effectively reduces invalid biosensor operation and wireless data transmission, and thus extends the operational lifetime of the BAN-based monitoring system.
Short time traffic flow prediction based on Volterra model using multiplication-coupled configuration
ZHANG Yu-mei BAI Shu-lin
2012, 32(03): 843-846. DOI: 10.3724/SP.J.1087.2012.00843
An adaptive third-order Volterra filter for short-time traffic flow prediction was proposed, based on the intrinsic nonlinearity and deterministic mechanism of chaotic time series and the nonlinear expressive power of the Volterra series. Since the number of Volterra filter coefficients grows exponentially with the order, an approximate multiplication-coupled structure for the filter was studied. First, after choosing the time delay and embedding dimension with the mutual information method and the false-nearest-neighbor method respectively, the largest Lyapunov exponent was estimated with the small-data-sets method to verify that chaos exists in the traffic flow series. Then, the approximate multiplication-coupled third-order Volterra filter, whose coefficients were adaptively adjusted by an improved nonlinear Normalized Least Mean Square (NLMS) algorithm, was employed to reduce computational complexity. Finally, applying the method to real measured traffic flow data shows that chaos exists in the series and that the proposed scheme can effectively predict traffic flow while reducing model complexity.
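The adaptive core of such a predictor is the NLMS coefficient update. The sketch below is a plain linear NLMS one-step predictor over a tapped-delay input; the paper's third-order Volterra filter extends the input vector with second- and third-order product terms, which its multiplication-coupled structure approximates. Order and step size here are illustrative choices.

```python
def nlms_predict(series, order=3, mu=0.5, eps=1e-6):
    """One-step-ahead prediction with a normalized LMS adaptive filter.
    Returns the sequence of predictions and the final coefficients."""
    w = [0.0] * order
    preds = []
    for n in range(order, len(series)):
        x = series[n - order:n]                            # most recent inputs
        y = sum(wi * xi for wi, xi in zip(w, x))           # prediction
        e = series[n] - y                                  # prediction error
        power = sum(xi * xi for xi in x) + eps             # input power + guard
        w = [wi + mu * e * xi / power for wi, xi in zip(w, x)]  # NLMS update
        preds.append(y)
    return preds, w
```

On a stationary input the normalized step size makes the error shrink geometrically, independent of the input's scale, which is the practical advantage of NLMS over plain LMS.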
Prediction model for lightning nowcasting based on DBSCAN
HOU Rong-tao ZHU Bin FENG Min-xue SHI Xin-ming LU Yu
2012, 32(03): 847-851. DOI: 10.3724/SP.J.1087.2012.00847
Concerning the massive monitoring data of lightning location systems, a lightning nowcasting model based on an Improved Density-Based Spatial Clustering of Applications with Noise (IDBSCAN) algorithm was put forward. Based on the lightning location data in the real-time monitoring system, the method searches for regions whose ground flash density exceeds a threshold, builds clusters around the maximum ground flash density, and locates the core of each cluster. An adjacency-list search algorithm greatly reduces the time and space consumed in building the initial search set of lightning data. Furthermore, using regression fitting, the model predicts the movement path of a lightning cluster. The experimental results show that the IDBSCAN algorithm is effective for lightning nowcasting.
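For reference, the baseline density clustering that IDBSCAN improves on can be sketched in a few lines. This is textbook DBSCAN with brute-force neighbor search over 2D points (standing in for flash coordinates); the paper's improvement replaces that search with an adjacency list.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN sketch: returns one cluster label per point (-1 = noise).
    `min_pts` counts the point itself; neighbor search is brute force."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # noise (may become a border point)
            continue
        cluster += 1                       # i is a core point: start a cluster
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster        # noise reached from a core: border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:         # j is also core: expand from it
                seeds.extend(jn)
    return labels
```

Dense flash regions come out as clusters whose centroids can then be tracked over time for the regression-based path prediction the abstract mentions.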
Continuous layout optimization of urban fire station
LU Hou-qing YUAN Hui LIU Cheng
2012, 32(03): 852-854. DOI: 10.3724/SP.J.1087.2012.00852
With rapid urbanization and industrialization, the risk of urban disasters increases and the existing layout of city fire stations cannot meet fire safety requirements. To overcome the large error and low efficiency of the traditional polygon-coverage and edge-coverage location methods, the algorithm extends the discrete (node-edge) graph model to a continuous network structure so as to achieve continuous coverage of the road network. In the optimization process, simulated annealing was introduced and the annealing schedule was improved. A case study verified the location selection method, and the coverage indicators of the discrete and continuous optimizations were compared. The results show that continuous optimization performs better: it is feasible, general and rational, and is a preferable method for siting urban fire stations.
Hardware/software partitioning method of embedded system based on π-nets
GUO Rong-zuo HUANG Jun WANG Lin
2012, 32(03): 855-860. DOI: 10.3724/SP.J.1087.2012.00855
Concerning the hardware/software partitioning problem of embedded systems, a partitioning method based on π-nets was proposed. This paper briefly introduced the definition and rules of π-nets, described and defined the partitioning target, and established an Embedded-system Software and Hardware Partition Model (ESHPM) using π-nets. Finally, the consistency, deadlock freedom and compatibility of the model were analyzed, and the ESHPM was optimized. The established ESHPM satisfies consistency and freedom from deadlock among its processes, and the interaction between processes is compatible. The ESHPM effectively improves the accuracy of partitioning, yielding a more reasonable hardware/software partitioning method.
Design of development platform for hosted applications in integrated module avionics system
WANG Yun-sheng LEI Hang
2012, 32(03): 861-863. DOI: 10.3724/SP.J.1087.2012.00861
The common computing resources in an Integrated Modular Avionics (IMA) system provide hosted applications with a temporally and spatially partitioned platform. The development platform for such applications should comply with the ARINC 653 specification, which defines the application executive interfaces of the partitioning operating system. By porting and developing the Board Support Package (BSP) and the AFDX network driver, an IMA platform for hosted application development was achieved, for the first time, on the C2K, a Commercial Off-The-Shelf (COTS) single-board computer. The functionality and performance of the COTS-based platform are similar to the common computing resources in the IMA systems of modern civil transport aircraft, offering a platform for developing and debugging hosted applications at a much lower cost.
Reconfigurable Keccak algorithm and its implementation on FPGA platform
WU Wu-fei WANG Yi LI Ren-fa
2012, 32(03): 864-866. DOI: 10.3724/SP.J.1087.2012.00864
Based on an analysis of the Keccak algorithm, and concerning the fact that existing hardware implementations lack flexibility and support only one version, this paper proposed a new reconfigurable Keccak hardware implementation that supports four versions of the algorithm. Ported to a Xilinx Virtex-5 FPGA platform, the design achieves a 214MHz clock frequency using 1607 slices. The experimental results show that the proposed design offers high throughput (9131Mbps), good flexibility, and support for four versions.
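The four versions correspond to the four fixed output lengths of the Keccak family (224, 256, 384 and 512 bits). As a software illustration only: Python's hashlib ships the standardized SHA-3 variant of Keccak (which differs from the original submission in padding), so the behavior a reconfigurable core must cover can be sketched as:

```python
import hashlib

def digest_all_versions(data):
    """Hash `data` with the four fixed-length members of the SHA-3/Keccak
    family; a reconfigurable hardware core must support all four, whereas
    a fixed implementation supports only one."""
    return {bits: getattr(hashlib, "sha3_%d" % bits)(data).hexdigest()
            for bits in (224, 256, 384, 512)}
```

In hardware the four versions share the same 1600-bit permutation and differ only in rate/capacity, which is what makes a reconfigurable single-core design practical.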
Design and FPGA implementation of parallel high-efficiency BCH decoder
ZHANG Xiang-xian YANG Tao WEI Dong-mei XIANG Ling
2012, 32(03): 867-869. DOI: 10.3724/SP.J.1087.2012.00867
According to the characteristics of a parallel BCH decoder, constant-coefficient multiplication in the finite field was realized with XOR gates to reduce hardware complexity. Part of the error location polynomial was calculated directly, and the remaining part was obtained using the theory of affine polynomials and Gray code. The proposed algorithm reduces the system resources occupied. Timing simulation with the Field Programmable Gate Array (FPGA) development software ISE 10.1 verified the time and area efficiency of the algorithm.
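Constant-coefficient finite-field multiplication reduces to shifts and XORs. A small software sketch in GF(2^4) with the primitive polynomial x^4 + x + 1 (the paper's field and polynomial may differ); fixing the constant c and unrolling both loops leaves only the XOR network that the hardware implements:

```python
def gf16_mul_const(x, c, poly=0b10011):
    """Multiply x by a constant c in GF(2^4), field polynomial x^4 + x + 1.
    All arithmetic is shift-and-XOR: no carries, hence pure XOR gates in
    hardware once c is fixed."""
    result = 0
    for i in range(4):
        if (c >> i) & 1:
            result ^= x << i                 # XOR partial product
    for i in range(7, 3, -1):                # reduce modulo the field polynomial
        if (result >> i) & 1:
            result ^= poly << (i - 4)
    return result
```

For example, x^3 * x reduces to x + 1 in this field, and multiplication by the constant 1 is the identity.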
Performance evaluation of space information data processing system based on queuing network
WANG Jian-jiang QIU Di-shan PENG Li
2012, 32(03): 870-873. DOI: 10.3724/SP.J.1087.2012.00870
In order to scientifically evaluate the performance of a Space Information Data Processing System (SIDPS), this paper presented an evaluation method based on queuing networks. The processing patterns of space information data were analyzed, a core index system for performance evaluation was constructed, and a performance evaluation model of SIDPS based on a finite-waiting queuing network was established. The experimental results confirm the effectiveness of the approach.
Design and implementation of variable fertilization formula system for dispersive farmer
TAN Xu WANG Xiu TONG Ling
2012, 32(03): 874-876. DOI: 10.3724/SP.J.1087.2012.00874
In China, farmers currently manage cropland in a decentralized manner, and fertilization is usually not applied rationally and scientifically. This paper presented a variable-rate fertilization system for dispersive farmers, designed on the basis of a Geographic Information System (GIS). The system uses the relational database SQL Server 2008 as its built-in database to store, query and update information such as soil variability, crop yields over the years, and fertilization formulas. Statistical analysis of soil attributes is realized through spatial interpolation, raster-to-vector conversion, data fusion and overlay analysis. A fertilization formula for a plot can be generated automatically by invoking the yield model and the fertilization model, and then delivered to the SMC6480 to blend the fertilizer scientifically. The experimental results indicate that the system is stable and reliable, and that all modules run well.
Energy consumption of bus rapid transit system based on cellular automata theory
CHEN Yong WANG Xiao-ming DANG Jian-wu
2012, 32(03): 877-880. DOI: 10.3724/SP.J.1087.2012.00877
Transportation energy consumption has drawn great attention from decision-makers. An energy consumption model of Bus Rapid Transit (BRT) based on Cellular Automata (CA) was therefore designed, built on the NaSch traffic model and the kinetic energy theorem. Taking the Lanzhou BRT system as an example, the effects of random slowdown of BRT vehicles under different traffic densities, road conditions and driver behaviors were studied under periodic boundary conditions, and quantitative conclusions were obtained. The simulation results show that the longer a rapid transit vehicle stops, the greater the range of congestion and the smaller the energy loss of the road traffic flow; the corresponding flow rate is also smaller, so the system enters the congested phase earlier.
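The underlying NaSch model updates every vehicle with four rules per time step: accelerate, brake to the gap ahead, randomly slow down, and move. A minimal sketch on a ring road under periodic boundary conditions (the paper's BRT model adds stops and kinetic-energy bookkeeping on top of these rules):

```python
import random

def nasch_step(road, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg CA on a ring road.
    `road[i]` is the velocity of the car in cell i, or None if empty."""
    n = len(road)
    cars = [i for i, v in enumerate(road) if v is not None]
    new = [None] * n
    for idx, i in enumerate(cars):
        ahead = cars[(idx + 1) % len(cars)]      # next car (periodic boundary)
        gap = (ahead - i - 1) % n                # empty cells in between
        v = min(road[i] + 1, v_max)              # rule 1: accelerate
        v = min(v, gap)                          # rule 2: brake, avoid collision
        if v > 0 and rng.random() < p_slow:
            v -= 1                               # rule 3: random slowdown
        new[(i + v) % n] = v                     # rule 4: move
    return new
```

Because velocity is capped by the gap before moving, the parallel update can never place two cars in the same cell, and the random-slowdown probability `p_slow` is the knob whose effect on congestion and energy loss the paper studies.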
Application of support vector data description to detection of foreign bodies in tobacco
HUANG Shi-jian
2012, 32(03): 881-884. DOI: 10.3724/SP.J.1087.2012.00881
It is difficult to collect a complete set of foreign body samples when detecting foreign bodies in tobacco. A detection method based on Support Vector Data Description (SVDD) was therefore proposed, so that a one-class classifier can be trained using tobacco samples only. The RGB and HSV features of tobacco and several typical foreign bodies were first extracted, and the HV components were used as the eigenvector. The trained SVDD classifier then distinguishes foreign bodies from tobacco given the HV eigenvector. Finally, the SVDD classifier was compared with three other methods using the Receiver Operating Characteristic (ROC) curve. The experimental results show that feature extraction with the HV components reduces the data dimension and achieves higher computational efficiency, and that the SVDD classifier has stronger classification ability and higher efficiency, distinguishing foreign bodies from tobacco better.
New feature description based on feature relationships for gait recognition
XIANG Jun DA Bang-you LIANG Juan HOU Jian-hua
2012, 32(03): 885-888. DOI: 10.3724/SP.J.1087.2012.00885
For fast and efficient gait recognition, a new feature representation based on feature relationships was proposed, which exploits the non-stationarity in the distribution of those relationships. First, the relative direction between two adjacent edge pixels in an 8-neighborhood was taken as one relationship attribute, and the distance from the edge pixel to the shape centroid as the other; the joint probability function of the two attributes was estimated by the normalized histogram of observed values. Second, Principal Component Analysis (PCA) was adopted for feature reduction. Finally, a nearest-neighbor classifier was used for classification. Experiments on the CASIA gait database achieve a best recognition rate of more than 90%, while the feature dimension of the attributes' joint probability matrix is reduced from 900 to 240 with relatively low computational cost.
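The joint probability function of the two relationship attributes is simply a normalized 2D histogram over their quantized values. A minimal sketch, assuming both attributes have already been quantized to integer bins:

```python
def joint_histogram(pairs, bins_a, bins_b):
    """Normalized joint histogram of two quantized attributes (here standing
    in for the edge-pixel direction label and the centroid-distance bin),
    estimating their joint probability function."""
    hist = [[0.0] * bins_b for _ in range(bins_a)]
    for a, b in pairs:
        hist[a][b] += 1.0
    total = float(len(pairs)) or 1.0             # guard against an empty input
    return [[count / total for count in row] for row in hist]
```

Flattening this matrix gives the high-dimensional feature vector (900-dimensional in the paper, e.g. from a 30x30 binning) that PCA then reduces before nearest-neighbor classification.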
Image fire detection based on independent component analysis and support vector machine
HU Yan WANG Hui-qin MA Zong-fang LIANG Jun-shan
2012, 32(03): 889-892. DOI: 10.3724/SP.J.1087.2012.00889
Image-based fire detection can solve the problem of large-space fire detection contactlessly and rapidly, and is a new research direction in fire detection; its essential issue is the classification of flames and disruptors. Ordinary detection methods extract one or a few flame characteristics from the image as the basis for identification; their disadvantages are the need for many empirical thresholds and a low recognition rate caused by inappropriate feature selection. Considering the overall characteristics of fire flames, a flame detection method based on Independent Component Analysis (ICA) and Support Vector Machine (SVM) was proposed. First, a series of frames was pre-processed in RGB space, and suspected target areas were extracted according to the flickering feature and fuzzy clustering analysis. Then the flame image features were described with ICA. Finally, an SVM model was used to recognize flames. The experimental results show that the proposed method improves the accuracy and speed of image fire detection in a variety of fire detection environments.
Superintended by: Sichuan Associations for Science and Technology
Sponsored by: Sichuan Computer Federation; Chengdu Branch, Chinese Academy of Sciences
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao, XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address: No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803, 028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn