Table of Contents
10 October 2016, Volume 36 Issue 10
Data preprocessing based recovery model in wireless meteorological sensor network
WANG Jun, YANG Yang, CHENG Yong
2016, 36(10): 2647-2652. DOI: 10.11772/j.issn.1001-9081.2016.10.2647
To address the excessive communication energy consumption caused by the large number of sensor nodes and highly redundant sensor data in wireless meteorological sensor networks, a Data Preprocessing Model based on Joint Sparsity (DPMJS) was proposed. By combining the meteorological forecast value with the value of every cluster head in the Wireless Sensor Network (WSN), DPMJS computed a common portion used to preprocess sensor data. A data collection framework based on distributed compressed sensing was also applied to reduce data transmission and balance energy consumption in the clustered network; data measured at common nodes was recovered at the sink node, so as to reduce data communication radically. A method to sparsify abnormal data was also designed. In simulation, DPMJS enhances data sparsity by exploiting spatio-temporal correlation efficiently and improves the data recovery rate by 25%; compared with plain compressed sensing, the data recovery rate is improved by 46%; meanwhile, abnormal data can be recovered successfully with a high probability of 96%. Experimental results indicate that the proposed data preprocessing model can increase the efficiency of data recovery, reduce the amount of transmission significantly, and prolong the network lifetime.
MAC protocol for wireless sensor network based on fuzzy clustering in application of traffic monitoring
REN Xiuli, YAN Kun
2016, 36(10): 2653-2658. DOI: 10.11772/j.issn.1001-9081.2016.10.2653
To address the real-time transmission of burst data in traffic monitoring, a Medium Access Control (MAC) protocol based on fuzzy clustering, called FC-MAC, was proposed. The protocol alternates between Time Division Multiple Access (TDMA) and an improved Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), which not only ensures periodic data transmission but also enhances the real-time delivery of burst data. Fuzzy clustering was introduced into the CSMA/CA phase: the nodes in a cluster were grouped according to factor vectors and assigned different priority levels, and burst data with higher priority was transmitted earlier. In addition, according to the time slot allocation strategy of FC-MAC, a hierarchical random delay method was presented to reduce the number of nodes accessing the sink node simultaneously and decrease the data delay caused by the backoff mechanism. Simulation results show that the energy consumption of FC-MAC lies between that of Z-MAC and S-LMAC. While reducing the delay of burst data, FC-MAC increases the network throughput by 11.2% and 21.3% compared to Z-MAC and S-LMAC respectively, and it adapts better to changes in network traffic.
Node localization based on improved flooding broadcast and particle filtering in wireless sensor network
ZHAO Haijun, CUI Mengtian, LI Mingdong, LI Jia
2016, 36(10): 2659-2663. DOI: 10.11772/j.issn.1001-9081.2016.10.2659
Aiming at the shortcomings of current mobile Wireless Sensor Network (WSN) localization, a localization algorithm based on an improved flooding broadcast mechanism and particle filtering was proposed. For a given unknown node, the improved flooding broadcast mechanism first used the effective average hop distance from the unknown node to its closest anchor node to calculate the distances to all its neighbor nodes. A differential error correction scheme was then devised to reduce the measurement error accumulated over multiple hops for the average hop distance. Next, the particle filter and virtual anchor nodes were used to narrow the prediction area, obtaining a more effective particle prediction area and further decreasing the estimation error of the unknown node's position. The simulation results show that, compared with the DV-Hop, Monte Carlo Baggio (MCB) and Range-based Monte Carlo Localization (MCL) algorithms, the proposed algorithm effectively suppresses broadcast redundancy, reduces the message overhead of node localization, and achieves higher positioning accuracy with lower communication cost.
Throughput analysis and optimization of MAC protocol with multiple rates in mobile Ad Hoc network
ZHU Qingchao, CHEN Jing, GONG Shuiqing
2016, 36(10): 2664-2669. DOI: 10.11772/j.issn.1001-9081.2016.10.2664
Concerning the low throughput and poor fairness of multi-rate Medium Access Control (MAC) protocols in Mobile Ad Hoc Networks (MANET), saturation throughput expressions for nodes with different rates were first deduced, and the key factor restricting MAC protocol performance was quantitatively identified: the unfair channel occupancy time between Slow rate Nodes (SN) and Fast rate Nodes (FN). Second, to maximize time fairness among nodes of different rates, two optimization methods, tuning the Contention Window (CW) and the packet size, were presented; they maximize the throughput of FN without affecting the throughput of SN, which makes the saturation throughput of the MANET optimal. Experimental results show that, to maximize the Jain fairness index when the packet rates of the two node types are set to 1 Mb/s and 11 Mb/s, the optimal CW values in simulation and analysis are 320 and 350 respectively; similarly, the optimal packet sizes are 64 B and 60 B. Moreover, although the saturation throughput of SN remains essentially unchanged, the total throughput in analysis is still 0.2-0.5 Mb/s higher than that in simulation. Therefore, both throughput and fairness are improved.
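The Jain fairness index used in the evaluation above has a standard closed form, (Σx)² / (n·Σx²). As a quick illustration (not the paper's code; the occupancy times are invented), a minimal Python computation:

```python
def jain_fairness_index(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 means perfectly fair."""
    n = len(xs)
    total = sum(xs)
    return total * total / (n * sum(x * x for x in xs))

# Hypothetical per-node channel-occupancy times (s) for slow- and fast-rate nodes.
occupancy = [0.8, 0.9, 1.1, 1.2]
print(jain_fairness_index(occupancy))  # close to 1.0 -> near-fair time sharing
```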
Sensor network clustering algorithm with clustering time span optimization
LIANG Juan, ZHAO Kaixin, WU Yuan
2016, 36(10): 2670-2674. DOI: 10.11772/j.issn.1001-9081.2016.10.2670
Concerning the low energy efficiency of cluster heads and the energy imbalance of the network in Wireless Sensor Networks (WSN), a sensor network clustering algorithm with Clustering Time Span Optimization (CTSO) was proposed. Firstly, constraints on cluster membership and cluster head spacing were considered in cluster head election, to avoid overlap between clusters as much as possible and optimize the energy of cluster nodes. Secondly, the cluster head election cycle was optimized and divided into rounds by taking the task execution cycle as the time span; by minimizing the number of cluster head election rounds, the cost of selecting cluster heads and the energy for broadcasting messages were reduced, and the energy utilization of cluster nodes was improved. Simulation results show that, compared with a homogeneous-state data routing scheme based on multiple Agents and an adaptive data aggregation routing policy, the average energy efficiency of CTSO was increased by 62.0% and 138.4% respectively, and node lifetime was increased by 17% and 9% respectively. The CTSO algorithm is effective in promoting the energy efficiency of cluster head nodes and balancing node energy in WSN.
Adaptability of light source array simplification in wireless optical access network
XU Chun, GUO Wenqiang, GUNIMIRE Awudan
2016, 36(10): 2675-2679. DOI: 10.11772/j.issn.1001-9081.2016.10.2675
The adaptability of light source array simplification in channel modeling of wireless optical access networks was evaluated, and the applicability of this simplification to channel characterization under different transmitter configurations, Fields Of View (FOV) and source radiation patterns was discussed. Simulation results illustrate that the applicability depends strongly on the FOV: only when the FOV is no less than 60° are the induced deviations of optical path loss and root-mean-square delay spread limited to within 1.53 dBo and 0.77 ns, respectively.
Design and simulation of channel propagation model for vehicular Ad Hoc network under urban environment
LI Guisen, CHEN Ren, ZHU Shunzhi
2016, 36(10): 2680-2685. DOI: 10.11772/j.issn.1001-9081.2016.10.2680
Since existing channel propagation models for Vehicular Ad Hoc NETworks (VANET) are unrealistic in urban environments, a propagation model considering the impact of obstacles was proposed. Firstly, using vehicle positions in a digital map, signal propagation was classified into three types: Line Of Sight (LOS), Non-Line Of Sight with one turn (NLOS-1), and prohibited propagation. Secondly, expressions for the received signal power were introduced under the LOS and NLOS-1 conditions. Finally, the packet delivery ratio under the Nakagami distribution was deduced. Theoretical analysis and simulation results show that the model can reflect realistic propagation of signals hindered by roadside obstacles: reachability decreases by 31.4 percentage points under light load in a sparse network, while it increases by 13.32 percentage points under heavy load in a dense network. The rectilinear propagation of the high-frequency 5.9 GHz signal was simulated by the proposed model, which provides a basis for communication protocol design and realistic simulation of VANET.
Fairness-optimized resource allocation method in cloud environment
XUE Shengjun, HU Minda, XU Xiaolong
2016, 36(10): 2686-2691. DOI: 10.11772/j.issn.1001-9081.2016.10.2686
Concerning resource allocation problems such as uneven distribution, low efficiency and mismatch, a new algorithm named Global Dominant Resource Fair (GDRF) allocation, which adopts several rounds of allocation, was proposed to meet the needs of different users, achieve fairness across multiple resource types, and obtain high resource utilization. First, a qualification queue was determined by the amount of resources already allocated to users; then the specific user to receive resources was determined through the global dominant resource share and the global dominant resource weight. The matching condition of resources was taken into account in the allocation process, and the progressive filling of the max-min strategy was used. In addition, a universal fairness evaluation model for multi-resource allocation was applied to the algorithm. Comparison experiments were conducted based on a Google cluster. Experimental results show that, compared with maximizing multi-resource fairness based on the dominant resource, the number of allocated virtual machines is increased by 12%, resource utilization is increased by 0.5 percentage points, and the fairness evaluation value is increased by about 15%. The proposed algorithm adapts well to combined resource allocation, allowing the supply to better match users' demand.
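GDRF builds on dominant-resource fairness with progressive filling. A minimal sketch of the classic progressive-filling loop is below; the capacities and per-task demands are invented, and GDRF's global dominant share/weight refinements and resource-matching step are not modeled:

```python
# Minimal progressive-filling sketch of dominant-resource fairness (DRF).
capacity = {"cpu": 9.0, "mem": 18.0}
demands = {"A": {"cpu": 1.0, "mem": 4.0},   # A's dominant resource: mem
           "B": {"cpu": 3.0, "mem": 1.0}}   # B's dominant resource: cpu

allocated = {u: 0 for u in demands}          # tasks granted per user
used = {r: 0.0 for r in capacity}

def dominant_share(user):
    # Largest fraction of any resource this user currently holds.
    return max(allocated[user] * d / capacity[r]
               for r, d in demands[user].items())

while True:
    # Progressive filling: always serve the user with the lowest dominant share.
    user = min(demands, key=dominant_share)
    need = demands[user]
    if any(used[r] + need[r] > capacity[r] for r in capacity):
        break                                # no room for another task
    for r in capacity:
        used[r] += need[r]
    allocated[user] += 1

print(allocated, used)   # A gets 3 tasks, B gets 2: equal dominant shares
```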
Energy-aware fairness enhanced resource scheduling method in cloud environment
XUE Shengjun, QIU Shuang, XU Xiaolong
2016, 36(10): 2692-2697. DOI: 10.11772/j.issn.1001-9081.2016.10.2692
To address the problems of high energy consumption and illegal occupation of computing resources by users in the cloud environment, a new algorithm named Fair and Green Resource Scheduling Algorithm (FGRSA) was proposed to save resources and enhance the fairness of the system, so that all users can reasonably use all the resources in the data center. With the proposed method, various types of resources can be scheduled so that the use of all resources achieves relative fairness. Simulation experiments on the proposed scheduling strategy were conducted on CloudSim. Experimental results show that, compared with the Greedy and Round Robin algorithms, FGRSA can significantly reduce energy consumption while ensuring fair use of all types of resources.
Virtual machine dynamic consolidation method based on adaptive overloaded threshold selection
YAN Chengyu, LI Zhihua, YU Xinrong
2016, 36(10): 2698-2703. DOI: 10.11772/j.issn.1001-9081.2016.10.2698
Considering the uncertainty of dynamic workloads in cloud computing, a Virtual Machine (VM) dynamic consolidation method based on adaptive overloaded threshold selection was proposed. To trade off energy efficiency against Quality of Service (QoS) in data centers, an adaptive overloaded threshold selection problem model based on Markov decision processes was designed. The optimal decision was calculated by solving this model, and the overloaded threshold was dynamically adjusted using the optimal decision according to the energy efficiency and QoS of the data center. The overloaded threshold was used to predict overloaded hosts and trigger VM migrations. Following the principles of minimum migration time and minimum energy consumption growth, a VM migration strategy under the overloaded threshold constraint was given, and underloaded hosts were switched to sleep mode. Simulation results show that this method significantly avoids excessive VM migrations and decreases energy consumption while improving QoS effectively; it thus achieves an ideal balance between the QoS and energy consumption of the data center.
Resource allocation framework based on cloud computing package cluster mapping
LU Haoyang, CHEN Shiping
2016, 36(10): 2704-2709. DOI: 10.11772/j.issn.1001-9081.2016.10.2704
Concerning the complex structure and huge amount of data involved in resource management in cloud computing, a package-cluster mapping based resource management framework was proposed. In this framework, resources are allowed to be shared within a package among virtual machines, and resource scheduling becomes more flexible through a specific resource sharing model. Moreover, an improved package-based Genetic Algorithm (GA) was used, which encoded package groups with chromosome groups and resource patterns, and designed crossover and mutation operators according to changes in chromosome length. The number of clusters and the resources of the packages were considered jointly, and the problem scale was handled by using an abstract model. Experimental results showed that, compared with a genetic algorithm under the traditional virtual-machine-centered framework and an adaptive algorithm under the package-cluster framework, the CPU utilization of the proposed method was improved by 9% and 5% respectively, and the memory utilization was improved by 14% and 7% respectively. This proves that the proposed algorithm based on the package-cluster mapping framework can effectively reduce the number of cluster nodes used and increase resource utilization.
Distributed power iteration clustering based on GraphX
ZHAO Jun, XU Xiaoyan
2016, 36(10): 2710-2714. DOI: 10.11772/j.issn.1001-9081.2016.10.2710
Concerning the cumbersome programming and low efficiency of parallel power iteration clustering algorithms, a method for power iteration clustering in a distributed environment was put forward based on Spark, a general computation engine for large-scale data processing, and its graph component GraphX. Firstly, the raw data was transformed into an affinity matrix, which can be viewed as a graph, by using a similarity measurement method. Secondly, using vertex-cut technology, the row-normalized affinity matrix was divided into a number of subgraphs, which were stored on different machines of a cluster. Finally, using the in-memory computation framework Spark, several iterations were performed on the subgraphs stored in the cluster to get a cut of the original graph, with each subgraph corresponding to a cluster. The experiments were carried out on datasets of different sizes with different numbers of executors. Experimental results show that the proposed distributed power iteration clustering algorithm has good scalability: its running time is negatively correlated with the number of executors, and the speedup ranges from 2.09 to 3.77 in a cluster of 6 executors compared with a single executor. Meanwhile, compared with the Hadoop-based power iteration clustering version, the running time of the proposed algorithm decreased significantly, by 61%, when dealing with 40000 news items.
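For reference, the serial core of power iteration clustering (PIC) fits in a few lines of numpy; this sketch is not the GraphX implementation, only the underlying iteration v ← Wv on the row-normalized affinity matrix, truncated early so the cluster-indicating structure survives:

```python
import numpy as np

def pic(affinity, k, iters=15, seed=0):
    """Serial PIC sketch: truncated power iteration on the row-normalized
    affinity matrix, then 1-D k-means on the resulting embedding."""
    W = affinity / affinity.sum(axis=1, keepdims=True)  # row-normalize
    v = np.random.default_rng(seed).random(W.shape[0])
    for _ in range(iters):                 # early stopping is essential to PIC
        v = W @ v
        v /= np.abs(v).sum()               # keep the vector well-scaled
    centers = np.linspace(v.min(), v.max(), k)
    for _ in range(20):                    # crude 1-D k-means on v
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = v[labels == j].mean()
    return labels

# Two obvious blocks in the affinity matrix -> two clusters.
A = np.array([[1, .9, .1, .1], [.9, 1, .1, .1],
              [.1, .1, 1, .9], [.1, .1, .9, 1.]])
print(pic(A, k=2))
```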
Trust evaluation mechanism for distributed Hash table network nodes in cloud data secure self-destruction system
WANG Dong, XIONG Jinbo, ZHANG Xiaoying
2016, 36(10): 2715-2722. DOI: 10.11772/j.issn.1001-9081.2016.10.2715
Distributed Hash Table (DHT) networks are widely used in secure self-destruction schemes for private data in the cloud computing environment, but malicious and dishonest nodes in the DHT network easily lead to loss or leakage of key shares. To tackle these problems, a trust evaluation mechanism was proposed for the DHT network used in a cloud-data secure self-destruction system. In this mechanism, a trust cloud model was established for DHT nodes to describe their trust information qualitatively and quantitatively. By introducing an improved calculation method for the direct trust value together with the recommended trust value, and fully considering the internal and external factors of DHT nodes, node trust values were first calculated on two dimensions consisting of operating experience and interactive experience. The resulting data were used to build a trust evaluation sub-cloud for each index. After that, the trust evaluation sub-clouds were summed, weighted by their evaluation indexes, into a comprehensive trust cloud, which was then described as a one-dimensional normal cloud by means of a cloud generator algorithm. Finally, reliable and efficient nodes were selected using a trust decision algorithm. Experimental results show that the proposed mechanism helps the data self-destruction system make comprehensive trust decisions and find reliable DHT network nodes, further enhancing the disaster-tolerance capability and reducing the computational cost of the system.
Cloud outsourcing private set intersection protocol based on garbled Bloom filter
ZHANG En, LIU Yapeng
2016, 36(10): 2723-2727. DOI: 10.11772/j.issn.1001-9081.2016.10.2723
Focusing on the issue that the information acquired by different participants is unequal in Private Set Intersection (PSI) protocols based on the Garbled Bloom Filter (GBF), which prevents their effective application in the cloud environment, a cloud outsourcing PSI protocol combining the garbled Bloom filter algorithm with a proxy oblivious transfer protocol was proposed. Firstly, by introducing the garbled Bloom filter, the false-positive problem of the traditional standard Bloom filter was solved, achieving efficient storage and transmission of large data. Secondly, the complex, time-consuming computation could be outsourced to the cloud proxy server through the proxy oblivious transfer protocol, so that cloud tenants did not need to be online in real time and only needed a small amount of computation. Finally, in the cloud outsourcing private set intersection process, the comparison results could be obtained fairly without interaction among cloud tenants. Theoretical analysis and performance comparison show that the communication and computation complexities of the proposed protocol are linear, and the protocol is safe and effective.
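The garbled Bloom filter replaces each bit of a standard Bloom filter with a λ-bit share so that the K slots of an inserted element XOR back to the element, which removes false positives. A toy construction under assumed parameters (M, K, λ and the hashing scheme are illustrative, not the paper's):

```python
import hashlib, os

M, K, LAM = 1024, 3, 16      # slots, hash functions, share length (assumed)
table = [None] * M           # None marks an unset slot

def positions(elem: bytes):
    # K slot indices from salted SHA-256; assumed distinct (w.h.p. for M >> K).
    return [int.from_bytes(hashlib.sha256(bytes([i]) + elem).digest(), "big") % M
            for i in range(K)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def insert(elem: bytes):
    """Fill the K slots so their XOR equals elem; unset slots get random shares."""
    idx = positions(elem)
    free = [i for i in idx if table[i] is None]
    assert free, "toy version: at least one of the K slots must still be free"
    acc = elem
    for i in idx:
        if table[i] is not None:
            acc = xor(acc, table[i])      # fold in shares fixed by earlier inserts
    for i in free[:-1]:
        table[i] = os.urandom(LAM)
        acc = xor(acc, table[i])
    table[free[-1]] = acc                 # last share closes the XOR to elem

def query(elem: bytes) -> bool:
    shares = [table[i] for i in positions(elem)]
    if any(s is None for s in shares):
        return False
    acc = b"\x00" * LAM
    for s in shares:
        acc = xor(acc, s)
    return acc == elem                    # exact match, unlike a plain Bloom filter

insert(b"alice_item_00001")
print(query(b"alice_item_00001"), query(b"bob_item_0000002"))  # True False
```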
Cloud service behavior trust model based on non-interference theory
XIE Hong'an, LIU Dafu, SU Yang, ZHANG Yingnan
2016, 36(10): 2728-2732. DOI: 10.11772/j.issn.1001-9081.2016.10.2728
In order to address the security threats of resource sharing and privileged access in the cloud service environment, a new cloud trust model based on non-interference theory, namely NICTM, was proposed. The elements of a cloud service, such as domains, actions, situations, and outputs, were abstracted to formally define the trusted domain in cloud services. Besides, a theorem on trusted user-domain behavior was proved, and a user domain that follows the theorem can be proved trusted. Finally, a prototype system was built on the Xen virtualization platform, and the feasibility of the model was verified by experiments.
Revocable fuzzy identity based encryption scheme over ideal lattice
XIANG Wen, YANG Xiaoyuan, WU Liqiang
2016, 36(10): 2733-2737. DOI: 10.11772/j.issn.1001-9081.2016.10.2733
Since present Identity Based Encryption (IBE) schemes cannot support user revocation and fuzzy identity extraction at the same time, a Revocable Fuzzy IBE (RFIBE) scheme based on the hardness of the Learning With Errors (LWE) problem over ideal lattices was proposed, using a revocable binary tree and a threshold secret sharing algorithm. Firstly, the trapdoor generation function over ideal lattices and the threshold secret sharing algorithm were used to generate users' private keys. Then the RFIBE scheme was put forward by using a revocable binary tree. Finally, the scheme was proved to be INDistinguishable against selective IDentity and Chosen Plaintext Attack (IND-sID-CPA) secure. Compared with previous IBE schemes, RFIBE is more practical, with support for revocation and efficient fuzzy identity extraction.
Provably secure undeniable signature scheme based on identity
WANG Xiong, DENG Lunzhi
2016, 36(10): 2738-2741. DOI: 10.11772/j.issn.1001-9081.2016.10.2738
Concerning the low efficiency of identity-based undeniable signature schemes, a new identity-based undeniable signature scheme was proposed. Under the assumption that it is hard to solve the Computational Bilinear Diffie-Hellman (CBDH) problem and the Decisional Bilinear Diffie-Hellman (DBDH) problem, the proposed scheme was proven to be unforgeable and invisible in the random oracle model, and it reduced the number of bilinear pairing operations. Analysis shows that the proposed scheme is more efficient than undeniable signature schemes proposed by Libert, Duan and Behnia, and it is more suitable for the computation-constrained environment.
Adaptive asynchronous and anti-noise secure communication scheme based on hyper-chaotic Lorenz system
ABDURAHMAN Kadir, MIREGULI Aili, MUTALLIP Sattar
2016, 36(10): 2742-2746. DOI: 10.11772/j.issn.1001-9081.2016.10.2742
Aiming at the high security requirements of non-real-time communication and the noise existing in the channel, an adaptive asynchronous anti-noise secure communication scheme based on chaotic masking modulation was proposed. Four pseudo-random state-variable sequences were generated by a hyper-chaotic Lorenz system and adjusted to an identical interval by their signal gains. Two sequences generated by the PieceWise Linear Chaotic Map (PWLCM) were applied to randomly select among the four state variables and determine the step length of the dynamic delay, producing the carrier signal. Finally, the pre-encoded binary message was marked into the carrier signal in pairs and sent out after Gaussian noise was added. Experimental results reveal that the ratio of the minimum signal magnification factor to the noise coefficient lies within the small interval [0.08, 0.11]; if the ratio is set beyond the upper limit of this interval, the Bit Error Rate (BER) is guaranteed to reach zero, so the masked signal can be perfectly recovered by the receiver under channel noise. The nonlinear dynamics of the hyper-chaotic system were used to implement adaptive asynchronous secure communication over a noisy channel, and numerical simulation demonstrates the effectiveness of the scheme.
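One commonly used four-dimensional hyper-chaotic extension of the Lorenz system can generate the four pseudo-random state sequences; the equations, parameters and initial state below are an assumed variant, not necessarily the paper's:

```python
import numpy as np

def hyper_lorenz(n, dt=0.001, a=10.0, b=8 / 3, c=28.0, r=-1.0,
                 state=(1.0, 1.0, 1.0, 1.0)):
    """Euler integration of an assumed hyper-chaotic Lorenz variant:
       x' = a(y - x) + w,  y' = cx - y - xz,  z' = xy - bz,  w' = -yz + rw."""
    x, y, z, w = state
    out = np.empty((n, 4))
    for i in range(n):
        dx = a * (y - x) + w
        dy = c * x - y - x * z
        dz = x * y - b * z
        dw = -y * z + r * w
        x, y, z, w = x + dt * dx, y + dt * dy, z + dt * dz, w + dt * dw
        out[i] = (x, y, z, w)
    return out

seq = hyper_lorenz(10000)
# Rescale each state variable to a common interval, as the scheme does
# before carrier generation.
lo, hi = seq.min(axis=0), seq.max(axis=0)
carrier = (seq - lo) / (hi - lo)
print(carrier[:3])
```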
Outsourced attribute-based encryption for general circuit from multilinear maps
CHEN Fei, HAN Yiliang, LI Xiaoce, SUN Jiahao, YANG Xiaoyuan
2016, 36(10): 2747-2752. DOI: 10.11772/j.issn.1001-9081.2016.10.2747
Since attribute-based encryption schemes from multilinear maps have long ciphertexts, inefficient decryption and a key escrow problem, a key-policy attribute-based encryption scheme from multilinear maps was proposed using outsourcing technology and the user's secret value. The proposed scheme supports general polynomial-size circuits with arbitrary fan-out, and the private key is generated jointly by the key generation center and the user. The ciphertext length is fixed at |G|+|Z|; compared with the known scheme with the smallest ciphertext, the storage cost is decreased by 25% after setting reasonable parameters in accordance with standard elliptic curves. Users only need to compute the transformation ciphertext, and the ciphertext is verifiable. Only 3 multilinear operations are needed for decryption, which greatly reduces the computational cost. Selective security is proved in the standard model under the multilinear decisional Diffie-Hellman assumption. Additionally, the scheme can be applied in small mobile devices with limited computing capability.
Anonymized data privacy protection method based on differential privacy
SONG Jian, XU Guoyan, YAO Rongpeng
2016, 36(10): 2753-2757. DOI: 10.11772/j.issn.1001-9081.2016.10.2753
Existing data privacy protection technologies suffer from a security insufficiency: privacy leakage caused by homogeneity and background-knowledge attacks when computing equivalence classes in the anonymization process. To solve this problem, an anonymized data privacy protection method based on differential privacy was put forward and its model was constructed. An ε-MDAV (Maximum Distance to Average Vector) algorithm was presented, in which the micro-aggregation MDAV algorithm was used to partition similar equivalence classes and the SuLQ framework was introduced into the attribute anonymization process. The Laplace mechanism was used to reasonably control the privacy protection budget. A comparison of availability and security under different privacy protection budgets verifies that the proposed method effectively improves data security while guaranteeing high data availability.
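The Laplace mechanism referenced above has a standard form: a query with sensitivity Δf answered under budget ε receives additive Laplace(Δf/ε) noise. A minimal sketch (the count query and data are illustrative):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon,
                      rng=np.random.default_rng(1)):
    """Differentially private answer: true value + Laplace(sensitivity/epsilon)."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = [23, 35, 41, 29, 52, 38]
# A count query has sensitivity 1: adding or removing one record changes it by 1.
noisy_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```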
Improved certificate-based aggregate proxy signature scheme
ZUO Liming, GUO Hongli, ZHANG Tingting, CHEN Zuosong
2016, 36(10): 2758-2761. DOI: 10.11772/j.issn.1001-9081.2016.10.2758
Analysis of the aggregate proxy signature scheme proposed by Yu et al. (YU X Y, HE D K. A certificate-based aggregate proxy signature scheme. Journal of Central South University (Science and Technology), 2015, 46(12): 4535-4541.) showed that a valid signature could be forged for any message once one valid signature was known. Therefore, an improved certificate-based aggregate proxy signature scheme was proposed and a new attack model was given. The new scheme was proved to be existentially unforgeable against the new attacker in the random oracle model. The results show that the proposed scheme can resist conspiracy attacks and forgery attacks, and it is more suitable for computation-constrained and real-time tasks.
Efficient identity-based anonymous broadcast encryption scheme in standard model
MING Yang, YUAN Hongping, SUN Bian, QIAO Zhengyang
2016, 36(10): 2762-2766. DOI: 10.11772/j.issn.1001-9081.2016.10.2762
Concerning the security of broadcast encryption in practice, a new identity-based anonymous broadcast encryption scheme in the standard model was proposed. In an anonymous broadcast encryption scheme, the broadcaster sends encrypted data to users via a broadcast channel; only authorized users can decrypt and access the data, and no one knows to whom the encrypted data is sent, so the privacy of recipient users is protected. The scheme was built by combining dual system encryption with composite-order bilinear groups. Based on static assumptions, the proposed scheme is chosen-plaintext secure in the standard model, and the ciphertext and private key have fixed lengths. Compared with the contrast scheme, the private key is only two group elements long, and the proposed scheme satisfies anonymity.
Review helpfulness based on opinion support of user discussion
LI Xueming, ZHANG Chaoyang, SHE Weijun
2016, 36(10): 2767-2771. DOI: 10.11772/j.issn.1001-9081.2016.10.2767
Focusing on the issues that training datasets are difficult to construct for supervised review helpfulness prediction models and that unsupervised methods do not take sentiment information into account, an unsupervised model combining semantic and sentiment information was proposed. Firstly, an opinion helpfulness score was calculated based on the opinion support scores of reviews and replies, and then the review helpfulness score was calculated. In addition, a review summarization method combining syntactic analysis and an improved Latent Dirichlet Allocation (LDA) model was proposed to extract opinions for review helpfulness prediction; two kinds of constraints, must-link and cannot-link, were constructed from the result of syntactic analysis to guide topic learning, which improves the accuracy of the model while ensuring the recall rate. The F1 value of the proposed model is 70% and the sorting accuracy is nearly 90% on the experimental dataset, and an instance also shows that the proposed model has good explanatory ability.
Micro-blog new word discovery method based on improved mutual information and branch entropy
YAO Rongpeng, XU Guoyan, SONG Jian
2016, 36(10): 2772-2776. DOI: 10.11772/j.issn.1001-9081.2016.10.2772
Aiming at the problems of data sparsity, poor portability and the failure to recognize multi-character words (more than three characters) in micro-blog new word discovery algorithms, a new word discovery algorithm based on improved Mutual Information (MI) and Branch Entropy (BE), named MBN-Gram, was proposed. Firstly, N-grams were used to extract candidate new words, and frequency and stop-word rules were used to filter the candidates. Then the improved MI and BE were used to expand and filter the candidates again. Finally, the corresponding dictionaries were used for screening, so as to obtain the new words. Theoretical and experimental analysis shows that the accuracy rate, recall rate and F value of the MBN-Gram algorithm were all improved. Experimental results show that the MBN-Gram algorithm is effective and feasible.
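The two base statistics, (pointwise) mutual information and branch entropy, can be computed directly from corpus counts; a toy computation over a tiny invented text is shown below, without the MBN-Gram thresholds and refinements:

```python
import math
from collections import Counter

corpus = "小米手机很好用小米手机性价比高我爱小米手机"  # toy text

def pmi(text, a, b):
    """Pointwise mutual information of the bigram a+b from character counts."""
    n = len(text)
    p_ab = text.count(a + b) / n
    return math.log2(p_ab / ((text.count(a) / n) * (text.count(b) / n)))

def branch_entropy(text, word, right=True):
    """Entropy of the characters adjacent to `word`; high entropy on both
    sides suggests `word` is a free-standing unit."""
    neigh = Counter()
    i = text.find(word)
    while i != -1:
        j = i + len(word) if right else i - 1
        if 0 <= j < len(text):
            neigh[text[j]] += 1
        i = text.find(word, i + 1)
    total = sum(neigh.values())
    return -sum(c / total * math.log2(c / total) for c in neigh.values())

print(pmi(corpus, "小", "米"), branch_entropy(corpus, "小米手机"))
```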
Optimization algorithm for accurate theme-aware task assignment in crowd computing on big data
WANG Qing, TAN Liang
2016, 36(10): 2777-2783. DOI: 10.11772/j.issn.1001-9081.2016.10.2777
Aiming at the massive data analysis requirements and complex cognitive inference of big data tasks, the low efficiency of random assignment algorithms, and the virtual identity and uncertainty of Internet users, an optimization algorithm for accurate theme-aware task assignment in crowd computing on big data was proposed. Firstly, the themes in crowd computing were extracted by a method combining a theme extraction model with fuzzy k-means adaptation, and correlations were computed through the task model and the user model. Secondly, new users' real themes and initial accuracy were tested by historical tasks with high-quality answers. Lastly, the probability that a user can participate in a certain kind of task was calculated, a candidate user sequence was predicted by Logistic Regression (LR), and the appropriate workers were accurately assigned to the tasks. Compared with the random algorithm, the accuracy of the proposed algorithm was more than 20 percentage points higher, increasing with the amount of training data and approaching 100% for correlated tasks after full training. The simulation results show that the proposed algorithm achieves higher accuracy with better cost-effectiveness and performance in the big data environment.
Collaborative filtering algorithm based on trust and item preference
ZHENG Jie, QIAN Yurong, YANG Xingyao, HUANG Lan, MA Wanzhen
2016, 36(10): 2784-2788. DOI: 10.11772/j.issn.1001-9081.2016.10.2784
Aiming at the fact that traditional collaborative filtering algorithms cannot deeply mine user relationships or recommend new items to users, a Trust and Item Preference Collaborative Filtering (TIPCF) recommendation algorithm was proposed. Firstly, in order to mine the latent trust relationships of users, user reliability was obtained and the degree of trust between users was quantified by analyzing user ratings. Secondly, considering that differences in users' preferences for different target items affect user similarity, user preference was added to the traditional user similarity computation to improve it. Thirdly, the choice of the nearest-neighbor set was made more accurate by incorporating user reliability and the improved similarity. Finally, users' preferences on item attributes were used to recommend new items. Experimental results show that, compared with the traditional collaborative filtering algorithm, the Mean Absolute Error (MAE) of TIPCF was decreased by 6.7%, and by 10.7% when recommending new items, on the MovieLens dataset. TIPCF not only improves the accuracy of recommendation, but also increases the probability that new items are recommended.
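The general shape of a trust-weighted user similarity can be sketched as follows; the toy trust measure, the rating data and the blend factor alpha are assumptions for illustration, not TIPCF's exact formulas:

```python
import math

ratings = {  # user -> {item: rating}, toy data
    "u1": {"i1": 5, "i2": 3, "i3": 4},
    "u2": {"i1": 4, "i2": 2, "i3": 5},
}

def cosine_sim(a, b):
    common = set(a) & set(b)
    num = sum(a[i] * b[i] for i in common)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def trust(a, b, thresh=1):
    """Toy trust: fraction of co-rated items whose ratings differ by <= thresh."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    return sum(abs(a[i] - b[i]) <= thresh for i in common) / len(common)

def combined_sim(a, b, alpha=0.5):
    # alpha blends rating similarity and trust; 0.5 is an assumed value.
    return alpha * cosine_sim(a, b) + (1 - alpha) * trust(a, b)

print(combined_sim(ratings["u1"], ratings["u2"]))
```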
Micro-blog recommendation algorithm by combining tag and artificial bee colony
WANG Ningning, LU Ran, WANG Zhihao
2016, 36(10): 2789-2793. DOI: 10.11772/j.issn.1001-9081.2016.10.2789
Focusing on the cold-start problem in recommendation algorithms, a micro-blog recommendation algorithm combining Tags and the Artificial Bee Colony, namely TABC-R, was proposed. Firstly, tag information for the user was defined, and the tag set was used as the user's interest profile. Secondly, the fitness function of the Artificial Bee Colony (ABC) algorithm was established from three variables: tag weight, tag attribute weight, and the similarity between micro-blog words and tags. Finally, the micro-blog with the best fitness value was obtained and recommended to users according to the search strategy of the ABC algorithm. Compared with the Tag-based Recommendation (T-R) algorithm and the Recommendation algorithm based on ABC (ABC-R), TABC-R achieves a slight increase in precision and recall, which proves the effectiveness of TABC-R.
Chinese word segmentation based on character representation learning
LIU Chunli, LI Xiaoge, LIU Rui, FAN Xian, DU Liping
2016, 36(10): 2794-2798. DOI: 10.11772/j.issn.1001-9081.2016.10.2794
In order to improve the accuracy and the Out Of Vocabulary (OOV) recognition rate of Chinese word segmentation, a Chinese word segmentation system based on character representation learning was proposed. Firstly, the characters in the text were mapped to vectors in a high-dimensional vector space using the Skip-gram model; then the K-means clustering algorithm was used to cluster the character vectors, and the clustering results were used as features for training a Conditional Random Fields (CRF) model. Finally, the CRF model was used for word segmentation and OOV recognition. The influences of the vector dimension, the number of clusters and different clustering algorithms on segmentation were analyzed. Experiments were conducted on the 4th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC2015) corpus. Experimental results show that the proposed system can effectively improve Chinese short-text segmentation without using external knowledge; the F-value and the OOV recognition rate reach 95.67% and 94.78% respectively.
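The clustering step can be sketched with scikit-learn: cluster the character embeddings with K-means and emit each character's cluster id as a discrete CRF feature. The random vectors below are stand-ins for trained Skip-gram embeddings, and the feature template is a simplified assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for Skip-gram character embeddings: random 50-d vectors for a
# small vocabulary (real embeddings would come from word2vec training).
rng = np.random.default_rng(0)
vocab = list("中华人民共和国北京市")
vectors = {ch: rng.normal(size=50) for ch in vocab}

# Cluster the embedding space; each cluster id becomes a discrete feature.
km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(np.stack([vectors[ch] for ch in vocab]))
cluster_of = dict(zip(vocab, labels))

def crf_features(sentence, i):
    """Feature dict for position i, in the style of CRF feature templates:
    the character itself plus its cluster id (the extra learned feature)."""
    ch = sentence[i]
    return {"char": ch, "cluster": str(cluster_of.get(ch, -1))}

print(crf_features("北京市", 0))
```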
Adaptive tracking control and vibration suppression by fuzzy neural network for free-floating flexible space robot with limited torque
PANG Zhenan, ZHANG Guoliang, YANG Fan, JIA Xiao, LIN Zhilin
2016, 36(10): 2799-2805. DOI: 10.11772/j.issn.1001-9081.2016.10.2799
Joint trajectory tracking control and flexible vibration suppression for a Free-Floating Flexible Space Robot (FFFSR) were discussed under parameter uncertainty and limited torque. A composite controller containing a slow subsystem for joint trajectory tracking and a fast subsystem for flexible vibration suppression was proposed using the singular perturbation method. A model-free Fuzzy Radial Basis Function Neural Network (FRBFNN) adaptive tracking control strategy was applied in the slow subsystem: the FRBFNN supported the observer's estimation of velocity signals and approximated the unknown nonlinear functions of both the observer and the controller. The fast subsystem adopted an Extended State Observer (ESO) to estimate the coordinate derivatives of the flexible modes and the uncertain disturbance, which can hardly be measured, and used the Linear Quadratic Regulator (LQR) method to suppress the flexible vibration. Numerical simulation results show that the composite controller achieves stable joint trajectory tracking within 2.5 s, and the flexible vibration amplitude is restricted to ±1×10^-3 m when the control torque is limited within ±20 N·m and ±10 N·m.
Method of program path validation based on satisfiability modulo theory solver
REN Shengbing, WU Bin, ZHANG Jianwei, WANG Zhijian
2016, 36(10): 2806-2810. DOI: 10.11772/j.issn.1001-9081.2016.10.2806
In programs, the path search space can be too large because there are too many paths or complicated cyclic paths, which directly affects the efficiency and accuracy of path validation. To resolve this problem, a program path validation method based on a Satisfiability Modulo Theories (SMT) solver was proposed. Firstly, loop invariants were extracted from the complicated cyclic paths by a decision tree method, and the No-Loop Control Flow Graph (NLCFG) was constructed. Secondly, information on basic paths was extracted by traversing the Control Flow Graph (CFG) with the basic path method. Finally, the path validation problem was converted into a constraint solving problem handled by an SMT solver. Compared with CBMC and FSoft-SMT, which are also based on SMT solvers, the proposed method reduced validation time on test programs by more than 25% and 15% respectively, and showed significant improvement in verification accuracy. Experimental results show that this method can efficiently handle an overly large path search space and improve the efficiency and accuracy of path validation.
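The reduction of a path condition to constraint solving can be tried directly with an off-the-shelf SMT solver; a minimal sketch with the Z3 Python bindings (pip package z3-solver), using an invented path and variables:

```python
from z3 import Int, Solver, And, sat

# Path condition for a hypothetical program path:
#   if (x > 0) { y = x + 2; if (y < 5) { ... } }
x, y = Int("x"), Int("y")
path_condition = And(x > 0, y == x + 2, y < 5)

s = Solver()
s.add(path_condition)
if s.check() == sat:
    print("path is feasible, e.g.:", s.model())   # a concrete witness input
else:
    print("path is infeasible (dead path)")
```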
Android GUI traversal method based on static analysis
TANG Yang, ZENG Fanping, WANG Jiankang, HUANG Xinyi
2016, 36(10): 2811-2815. DOI: 10.11772/j.issn.1001-9081.2016.10.2811
Traditional security testing methods (such as symbolic execution, fuzz testing, and taint analysis) cannot obtain high Graphical User Interface (GUI) coverage for Android programs. To solve this problem, an Android program testing method combining static and dynamic analysis was proposed. Based on static data-flow analysis of Android applications, the activity transition graph and function call graph were constructed and the GUI elements of the program were parsed; scripts were then written to dynamically traverse the GUI elements of applications. The method was applied to testing applications including Booking Calendar, Wifi Master Key and 360 Weather, and the average activity coverage reached 76%, significantly higher than that of manual testing (30.08%) and GUI tree traversal (42.05%-61.29%). Experimental results demonstrate that the method can effectively traverse the GUI of Android applications.
Testing data generation method based on fireworks explosion optimization algorithm
DING Rui, DONG Hongbin, FENG Xianbin, ZHAO Jiahua
2016, 36(10): 2816-2821. DOI: 10.11772/j.issn.1001-9081.2016.10.2816
Aiming at the problem of test data generation for path coverage, a new test data generation method based on an improved Fireworks Explosion Optimization (FEO) algorithm was proposed. First, the key-point path method was used to represent program paths, and hard-to-cover paths were identified from the theoretical paths, easy-to-cover paths and infeasible paths; the easy-to-cover paths adjacent to the hard-to-cover paths, together with their test data, were recorded and used as part of the initial fireworks to improve convergence speed, and the remaining initial fireworks were created randomly. Then, according to the individuals' fitness values, an adaptive blast radius was designed to improve the convergence rate, and the idea of boundary-value testing was introduced to repair border-crossing sparks. Compared with seven other optimization algorithms for test data generation, including fireworks explosion optimization with adaptive radius and heuristic information (NFEO), FEO, the F-method and the NF-method, simulation results show that the proposed algorithm has lower computational complexity and better convergence.
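A generic fireworks-explosion loop with an adaptive blast radius, where fitter fireworks explode within a smaller radius, looks roughly as follows; the formulas are generic, not the paper's NFEO variant, and the fitness function is a stand-in for a path-coverage objective:

```python
import random

def blast_radius(fitness, fits, r_max=1.0, eps=1e-9):
    """Generic adaptive radius: fitter fireworks (lower fitness) get a
    smaller radius, so promising regions are searched more finely."""
    f_min, f_max = min(fits), max(fits)
    return r_max * (fitness - f_min + eps) / (f_max - f_min + eps)

def explode(x, radius, n_sparks=5, lo=-10.0, hi=10.0):
    """Sparks uniform inside the blast radius, clamped to the input bounds
    (standing in for the boundary-value repair of border-crossing sparks)."""
    return [min(hi, max(lo, x + random.uniform(-radius, radius)))
            for _ in range(n_sparks)]

def f(x):
    return (x - 3) ** 2                      # toy fitness: optimum at x = 3

fireworks = [random.uniform(-10, 10) for _ in range(4)]
for _ in range(30):
    fits = [f(x) for x in fireworks]
    sparks = [s for x, v in zip(fireworks, fits)
              for s in explode(x, blast_radius(v, fits))]
    fireworks = sorted(fireworks + sparks, key=f)[:4]   # elitist selection
print(fireworks[0])                          # converges near x = 3
```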
Bilinear image similarity matching algorithm based on deep feature analysis
LI Ming, ZHANG Hong
2016, 36(10): 2822-2825. DOI: 10.11772/j.issn.1001-9081.2016.10.2822
Content-based image retrieval has long faced the problem of the "semantic gap": feature selection directly influences semantic learning results, while traditional distance metrics often measure similarity from a single perspective and cannot fully express the similarity between images. To resolve this problem, a bilinear image similarity matching algorithm based on deep feature analysis was proposed. First, a Convolutional Neural Network (CNN) model was fine-tuned on the image dataset; then image features were extracted using the trained CNN. After obtaining the output features of the fully connected layer, image similarity was calculated by the bilinear similarity matching algorithm, and the most similar image instances were returned after sorting by similarity. Experimental results on the Caltech101 and Caltech256 datasets show that, compared with the contrast algorithms, the proposed algorithm achieves higher mean average precision, Top-K precision and recall, which demonstrates its effectiveness.
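Bilinear similarity scores a pair of feature vectors through a matrix W, s(x, y) = xᵀWy, which generalizes the dot product; in the paper's setting W would be learned. A minimal sketch with an identity stand-in for W and random stand-ins for CNN features:

```python
import numpy as np

def bilinear_similarity(x, y, W):
    """Bilinear matching score s(x, y) = x^T W y over deep features."""
    return float(x @ W @ y)

rng = np.random.default_rng(0)
dim = 512                        # reduced from a typical 4096-d fc-layer output
x, y = rng.normal(size=dim), rng.normal(size=dim)
W = np.eye(dim)                  # identity stand-in; learned in practice
print(bilinear_similarity(x, y, W))   # with W = I this reduces to a dot product

# Ranking: sort gallery images by descending score against the query x.
gallery = rng.normal(size=(10, dim))
scores = gallery @ W @ x
print(np.argsort(-scores)[:3])   # indices of the 3 most similar images
```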
Denoising algorithm for random-valued impulse noise based on weighted spatial local outlier measure
YANG Hao, CHEN Leiting, QIU Hang
2016, 36(10): 2826-2831. DOI: 10.11772/j.issn.1001-9081.2016.10.2826
To alleviate inaccurate noise identification and the blurring of image edges and details during restoration, a novel algorithm based on a weighted Spatial Local Outlier Measure (SLOM), named WSLOM-EPR, was proposed for removing random-valued impulse noise. Based on an optimized spatial distance difference, the mean and standard deviation of the neighborhood were introduced to set up a noise detection method reflecting the local characteristics of image edges, which improved the precision of noise identification at edges. Based on the detection results, the Edge-Preserving Regularization (EPR) function was optimized to improve computational efficiency and the preservation of edges and details. Simulation results at noise levels of 40% to 60% showed that the overall noise detection performance was better than that of the contrast detection algorithms, maintaining a good balance between false detections and missed detections. The Peak Signal-to-Noise Ratio (PSNR) of WSLOM-EPR was better than that of most of the contrast algorithms, and the restored images had clear and continuous edges. Experimental results show that WSLOM-EPR improves detection precision and preserves more edge and detail information.
Supervised active contour image segmentation by kernel self-organizing map
FAN Haiju, LIU Guoqi
2016, 36(10): 2832-2836. DOI: 10.11772/j.issn.1001-9081.2016.10.2832
To segment objects with inhomogeneous or multi-gray intensity using active contours, a supervised active contour algorithm named KSOAC, based on the Kernel Self-Organizing Map (KSOM), was proposed. Firstly, prior examples extracted from the foreground and background were input into the KSOM for training respectively, and two topographic maps of input patterns were obtained to characterize their distributions and obtain the synaptic weight vectors. Secondly, the per-pixel average training error of the two maps was computed and added to the energy function to modify the contour evolution; meanwhile, the controlling parameter of the energy term was obtained from the area ratio of foreground to background. Finally, the supervised active contour energy function and the iterative equation integrating the synaptic weight vectors were deduced, and simulation experiments were conducted on multiple images using Matlab 7.11.0. Experimental results and simulation data show that the map obtained by KSOM is closer to the prior example distribution than that of the Self-Organizing Map (SOM) active contour (SOAC), with a smaller fitting error. The Precision, Recall and F-measure metrics of KSOAC are higher than 0.9 and the segmentation results are closer to the target, while the time consumption of KSOAC is similar to that of SOAC. Theoretical analysis and simulation results show that KSOAC can improve segmentation effectiveness and reduce target leakage when segmenting images with inhomogeneous intensity and objects of many different intensities, especially images with unknown probability distributions.
Adaptive shadow removal based on superpixel and local color constancy
LAN Li, HE Xiaohai, WU Xiaohong, TENG Qizhi
2016, 36(10): 2837-2841. DOI: 10.11772/j.issn.1001-9081.2016.10.2837
In order to remove moving cast shadows in surveillance video quickly and efficiently, an adaptive shadow elimination method based on superpixels and the local color constancy of shaded areas was proposed. First, an improved simple linear iterative clustering algorithm was used to divide the moving areas in a video image into non-overlapping superpixels. Then, the luminance ratio of the background to the moving foreground was calculated in the RGB color space, and the local color constancy of the shaded area was analyzed. Finally, the standard deviation of the luminance ratio was computed with the superpixel as the basic processing unit, and an adaptive turning-point threshold algorithm, based on the characteristics and distribution of this standard deviation in shadowed regions, was proposed to detect and remove the shadow. Experimental results show that the proposed method can handle shadows in different scenarios, with shadow detection and discrimination rates both above 85%; meanwhile, the computational cost is greatly reduced by using superpixels, and the average processing time per frame is 20 ms. The proposed algorithm satisfies shadow removal requirements for high precision, real-time operation and robustness.
Realistic real-time rendering method for translucent three-dimensional objects
WEN Peizhi, ZHU Likun, HUANG Jia
2016, 36(10): 2842-2848. DOI: 10.11772/j.issn.1001-9081.2016.10.2842
A realistic rendering method for translucent jade was put forward by superposing specular highlight, diffuse and transmission components. Firstly, the subsurface scattering of translucent jade was simulated by combining a scattering layer with a diffuse profile, and a flexible diffuse profile method was proposed to simulate the diffuse profile characteristics of different types of jade. Secondly, by combining a pre-computed local thickness map with a linear sum of Gaussians, the light transmission effect of the transmission layer was realized based on surface thickness; this was superimposed, using energy conservation, with a microfacet-based specular reflection term, yielding a realistic translucent material representation based on a three-layer lighting model. Experimental results show that the proposed method achieves photorealistic rendering of different kinds of translucent jade and maintains real-time performance of 30 frames per second with 1.6 million triangles.
Multi-channel real-time video stitching based on circular region of interest
WANG Hanguang, WANG Xuguang, WANG Haoyuan
2016, 36(10): 2849-2853. DOI: 10.11772/j.issn.1001-9081.2016.10.2849
Aiming at the real-time requirements of video stitching and the elimination of ghosting produced by moving objects, a method based on image registration with a circular Region Of Interest (ROI) was proposed, using a simplified pipeline and Graphics Processing Unit (GPU) acceleration. Firstly, feature extraction occurred only in the ROI, which improved detection speed and feature matching accuracy. Secondly, two strategies were used to further reduce time cost and meet the real-time requirements of video processing: only the first frame was used for matching, with subsequent frames blended using the same homography matrix; and GPU hardware acceleration was adopted. Besides, when there are dynamic objects in the field of view, graph-cut and multi-band blending algorithms were used for image blending, which effectively eliminates ghosting. When stitching two 640×480 videos, the processing speed of the proposed method reached 27.8 frames per second, 26.27 times and 11.57 times more efficient than Speeded Up Robust Features (SURF) and Oriented FAST and Rotated BRIEF (ORB) respectively. Experimental results show that the proposed method can stitch multi-channel videos into a high-quality video.
Fast intra-depth decision algorithm for high efficiency video coding
LIU Ying, GAO Xueming, LIM Kengpang
2016, 36(10): 2854-2858. DOI: 10.11772/j.issn.1001-9081.2016.10.2854
To reduce the high computational complexity of intra coding in High Efficiency Video Coding (HEVC), a fast intra depth decision algorithm for Coding Units (CU) based on the spatial correlation of images was proposed. First, the depth of the current Coding Tree Unit (CTU) was estimated by linearly weighting the depths of adjacent CTUs. Then appropriate double thresholds were set to terminate the CTU splitting process early or skip some CTU depths, thereby avoiding unnecessary depth calculation. Experimental results show that, compared with HM12.0, the proposed algorithm significantly decreases the coding time of simple video sequences with only a negligible drop in quality: the encoding time is reduced by an average of 34.6% while the Y-PSNR drops by an average of only 0.02 dB. Besides, the proposed algorithm is easy to combine with other methods to further reduce the computational complexity of HEVC intra coding, ultimately serving the goal of real-time transmission of high-definition video.
Local motion blur detection based on energy estimation
ZHAO Senxiang, LI Shaobo, CHEN Bin, ZHAO Xuezhuan
2016, 36(10): 2859-2862. DOI: 10.11772/j.issn.1001-9081.2016.10.2859
In order to solve the problem of information loss caused by local motion blur in captured images or videos, a local motion blur detection algorithm based on region energy estimation was proposed. Firstly, the Harris feature points of the image were calculated, and candidate areas were screened out according to the distribution of feature points in each area. Secondly, exploiting the smooth gradient distribution of monochromatic areas, the gradient distribution of the candidate areas was calculated and an average amplitude threshold was used to filter out most easily misjudged areas. Finally, the blur direction of each candidate area was estimated according to the energy degradation characteristic of motion-blurred images, the energy along the blur direction and its perpendicular direction was calculated, and monochromatic regions and defocus-blurred areas were further removed according to the energy ratio of the two directions. Experimental results on image datasets show that the proposed method can detect motion-blurred areas in images containing monochromatic areas and defocus-blurred areas, effectively improving the robustness and adaptability of local motion blur detection.
K-nearest neighbor searching algorithm for laser scattered point cloud
ZHAO Jingdong, YANG Fenghua
2016, 36(10): 2863-2869. DOI: 10.11772/j.issn.1001-9081.2016.10.2863
Aiming at the large data volume and surface characteristics of laser scattered point clouds, a K-Nearest Neighbors (KNN) searching algorithm for laser scattered point clouds was put forward to reduce memory usage and improve processing efficiency. Firstly, only the numbers of non-empty subspaces were stored, using multistage classification and dynamic linked-list storage. Adjacent subspaces were coded in ternary, the pointer connections between adjacent subspaces were established through the dual relationship of the codes, and a generalized table containing all the information required for KNN searching was constructed; the K nearest neighbors were then searched. During the search, when calculating the distance from the query point to candidate points, candidates outside the inscribed sphere of the filtering cube were deleted directly, halving the number of candidate points participating in the sort by distance. Both dividing principles, whether relying on the K value or not, can be used to compute different K-neighborhoods. Experimental results prove that the proposed algorithm not only has low memory usage, but also high efficiency.
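The core idea, restricting KNN candidates to grid cells around the query point, can be sketched generically as below; the ternary coding, linked-list storage and inscribed-sphere filtering of the paper are omitted, and a production version would scan one extra shell to guarantee exactness:

```python
import numpy as np
from collections import defaultdict

def build_grid(points, cell):
    """Hash each point into its integer grid cell."""
    grid = defaultdict(list)
    for idx, p in enumerate(points):
        grid[tuple((p // cell).astype(int))].append(idx)
    return grid

def knn(points, grid, cell, q, k):
    """Expand shells of grid cells around the query until k candidates are
    found, then sort the small candidate set by actual distance."""
    key = tuple((q // cell).astype(int))
    ring, cand = 0, []
    while len(cand) < k and ring < 50:
        for dx in range(-ring, ring + 1):
            for dy in range(-ring, ring + 1):
                for dz in range(-ring, ring + 1):
                    if max(abs(dx), abs(dy), abs(dz)) == ring:  # new shell only
                        cand += grid.get((key[0] + dx, key[1] + dy, key[2] + dz), [])
        ring += 1
    cand.sort(key=lambda i: np.linalg.norm(points[i] - q))
    return cand[:k]

pts = np.random.default_rng(0).random((5000, 3))
g = build_grid(pts, cell=0.05)
print(knn(pts, g, 0.05, np.array([0.5, 0.5, 0.5]), k=8))
```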
Registration for multi-temporal high resolution remote sensing images based on abnormal region sensing
WU Wei, DING Xiangqian, YAN Ming
2016, 36(10): 2870-2874. DOI: 10.11772/j.issn.1001-9081.2016.10.2870
In the registration of multi-temporal high resolution remote sensing images, changes in surface features and relative parallax displacement caused by differences in acquisition conditions degrade registration accuracy. To resolve this issue, a registration algorithm for multi-temporal high resolution remote sensing images based on abnormal region sensing was proposed, consisting of coarse and fine registration. The Scale-Invariant Feature Transform (SIFT) algorithm performs well in scale space, where feature points from different scale-space levels indicate spots of various sizes. Points from high scale-space levels represent objects in a stable condition, and coarse registration was executed on those points. For fine registration, an intensity correlation measurement and spatial constraints were used to decide the regions from which effective points were extracted at low scale-space levels, and the areas for searching matching points were limited as well. Finally, the accuracy of the proposed method was evaluated subjectively and objectively. Experimental results demonstrate that the proposed method can effectively restrain the influence of abnormal regions and improve registration accuracy.
Two-person interaction recognition based on improved spatio-temporal interest points
WANG Peiyao, CAO Jiangtao, JI Xiaofei
2016, 36(10): 2875-2879. DOI: 10.11772/j.issn.1001-9081.2016.10.2875
Concerning unsatisfactory feature extraction and the low recognition rate caused by redundant words in the clustering dictionary when recognizing two-person interactions in practical surveillance video, a Bag Of Words (BOW) model based on an improved Spatio-Temporal Interest Point (STIP) feature was proposed. First of all, the foreground movement area of the interaction was detected in the image sequences using an information-entropy-based measure; STIPs were then extracted within the detected area and described with the 3-Dimensional Scale-Invariant Feature Transform (3D-SIFT) descriptor, improving the accuracy of interest-point detection. Second, the BOW model was built by obtaining the dictionary with an improved Fuzzy C-Means (FCM) clustering method, and the representation of each training video was obtained by projection onto the dictionary. Finally, nearest-neighbor classification was used for two-person interaction recognition. Experimental results showed that, compared with recent STIP feature algorithms, the improved method with entropy-based detection achieved a recognition rate of 91.7%. The simulation results demonstrate that the entropy-based detection combined with the improved BOW model greatly improves the accuracy of two-person interaction recognition and is suitable for dynamic backgrounds.
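For the dictionary-building step, a plain fuzzy C-means is sketched below as a stand-in for the paper's improved FCM; X is the STIP descriptor matrix, c the dictionary size, and m the usual fuzzifier.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Cluster descriptors X (n x d) into a c-word dictionary. Standard FCM
    updates: centers are membership-weighted means, memberships follow
    u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = Um @ X / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-10
        U_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
        if np.max(np.abs(U_new - U)) < tol:
            return centers, U_new
        U = U_new
    return centers, U
```

Each video is then represented by the histogram of its descriptors' maximum-membership words, and classified by the nearest-neighbor rule.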
Remote sensing image enhancement based on combination of non-subsampled shearlet transform and guided filtering
LYU Duliang, JIA Zhenhong, YANG Jie, Nikola KASABOV
2016, 36(10): 2880-2884. DOI: 10.11772/j.issn.1001-9081.2016.10.2880
Aiming at low contrast, lack of detail, and weak retention of edge gradients in remote sensing images, a new remote sensing image enhancement method combining Non-Subsampled Shearlet Transform (NSST) and guided filtering was proposed. Firstly, the input image was decomposed by NSST into a low-frequency component and several high-frequency components. A linear stretch was then applied to the low-frequency component to improve overall contrast, and an adaptive threshold method was used to suppress noise in the high-frequency components. After denoising, the high-frequency components were enhanced by guided filtering to improve detail and edge-gradient retention. Finally, the enhanced image was reconstructed by applying the inverse NSST to the processed low-frequency and high-frequency components. Experimental results show that, compared with Histogram Equalization (HE), image enhancement based on contourlet transform and fuzzy theory, remote sensing image enhancement based on non-subsampled contourlet transform and unsharp masking, and remote sensing image enhancement based on non-subsampled shearlet transform and parameterized logarithmic image processing, the proposed method effectively increases the information entropy, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measurement (SSIM), noticeably improving the visual effect of the image and making its texture clearer.
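Since NSST implementations vary, only the guided-filtering step is sketched here, as a standard guided filter built from box filters in SciPy; the window radius r and regularizer eps are illustrative rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Guided filter (local linear model q = a*I + b per window) applied to
    input p with guide I; here I could be the denoised high-frequency
    subband itself. Inputs are float arrays in a comparable range."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)              # edge-aware gain
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```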
Gesture feature extraction of depth image based on curvature and local binary pattern
SHANG Changjun, DING Rui
2016, 36(10): 2885-2889. DOI: 10.11772/j.issn.1001-9081.2016.10.2885
Focusing on the information redundancy and encoding instability of depth images for gesture feature extraction in complex environments, an improved gesture feature extraction algorithm for depth images based on curvature-LBP (Local Binary Pattern) was proposed. Firstly, the segmented gesture depth data was converted to point cloud data through coordinate conversion. Secondly, surface fitting was performed with the moving least squares method, and the Gaussian curvature was then calculated to describe the 3D surface geometry more accurately. Finally, the improved uniform LBP model was applied to encode the Gaussian curvature data and form a feature vector. On the American Sign Language (ASL) database, the average recognition rate of the proposed algorithm reached 91.20%, which is 18.5 and 13.7 percentage points higher than 3DLBP and gradient LBP respectively. Simulation results show that the proposed algorithm can recognize gestures with similar outlines but different shapes, and improves the precision of describing internal details in gesture depth images.
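The encoding step can be sketched compactly: below, a uniform LBP histogram is computed over a 2-D Gaussian-curvature map with scikit-image, assuming the curvature estimation has already been done upstream; P and R are illustrative neighborhood parameters.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def curvature_lbp_feature(curvature_map, P=8, R=1.0):
    """Encode a 2-D Gaussian-curvature map with the uniform LBP model and
    return a normalized histogram as the gesture feature vector."""
    codes = local_binary_pattern(curvature_map, P, R, method='uniform')
    # the 'uniform' mapping yields P + 2 distinct code values
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```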
Hemorrhagic feature extraction based on different color channels in fundus images
NI Sen, FU Dongmei, DING Ye
2016, 36(10): 2890-2894. DOI: 10.11772/j.issn.1001-9081.2016.10.2890
Fundus hemorrhage images are characterized by varying hemorrhage shapes and multiple interferences. Targeting these characteristics, a method based on three color channels of fundus hemorrhage images was proposed to improve the extraction accuracy of hemorrhage areas and reduce interference from non-hemorrhage regions. Firstly, the relevant features of pixels in the different color channels were analyzed through the statistical properties of hemorrhage areas, and extraction thresholds were set accordingly. Then, multi-scale top-hat transformation and a vascular density feature were applied to locate the vessels and the macula so that these disturbances could be removed. Finally, the logical relations among the hemorrhages, vessels and macula were computed to extract the hemorrhage areas while removing the interference of vessels and macula. The proposed method realizes automatic extraction of hemorrhagic areas, and simulation results show that it achieves good extraction accuracy with high computational efficiency.
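As an illustration of the vessel-locating step, the sketch below computes a multi-scale morphological response on the green channel with OpenCV; black-hat is used here because vessels are darker than the background, and the structuring-element scales are assumptions rather than the paper's values.

```python
import cv2
import numpy as np

def multiscale_tophat(green, scales=(7, 11, 15)):
    """Multi-scale morphological response on the fundus green channel,
    highlighting dark elongated structures (vessels) before the vascular
    density feature is built. Returns the per-pixel maximum over scales."""
    resp = np.zeros(green.shape, dtype=np.float32)
    for s in scales:
        k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (s, s))
        bh = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, k).astype(np.float32)
        resp = np.maximum(resp, bh)
    return resp
```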
Regularized robust coding for tumor cell image recognition based on dictionary learning
GAN Lan, ZHANG Yonghuan
2016, 36(10): 2895-2899. DOI: 10.11772/j.issn.1001-9081.2016.10.2895
Aiming at the high dimensionality and complexity of gastric mucosal tumor cell images, a new method based on Fisher Discrimination Dictionary Learning and Regularized Robust Coding (FDDL-RRC) was proposed for tumor cell image recognition, so as to improve the robustness of sparse-representation-based recognition. Firstly, all the original stained tumor cell images were converted to gray images; the Fisher discrimination dictionary learning method was then used to learn the global features of the training samples and obtain a structured dictionary with class labels; finally, the new discriminative dictionary was used to classify test samples with the RRC model. The RRC model is based on Maximum A Posteriori (MAP) estimation, with the sparse fidelity expressed by a MAP function of the residuals, so the recognition problem is converted into an optimal regularized weighted-norm approximation problem. The highest recognition accuracy of the proposed method on tumor cell images reaches 92.4%, indicating that the presented method can distinguish tumor cell images effectively and quickly.
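A minimal sketch of the coding stage follows: residuals are reweighted by a logistic function and each step solves a weighted ridge problem, one simple instance of iteratively reweighted regularized coding. The l2 regularizer and the logistic parameters are simplifying assumptions, not the paper's exact model.

```python
import numpy as np

def regularized_robust_coding(D, y, lam=0.01, n_iter=10):
    """Code signal y over dictionary D (n x k) with a robust, MAP-style
    fidelity: large residuals are down-weighted, and each iteration solves
    the weighted ridge problem x = (D^T W D + lam*I)^(-1) D^T W y."""
    n, k = D.shape
    x = np.zeros(k)
    for _ in range(n_iter):
        r = y - D @ x
        delta = np.median(r ** 2) + 1e-8           # residual scale estimate
        mu = 8.0 / delta
        w = 1.0 / (1.0 + np.exp(mu * (r ** 2 - delta)))   # outlier weights
        DtW = D.T * w                               # equals D^T diag(w)
        x = np.linalg.solve(DtW @ D + lam * np.eye(k), DtW @ y)
    return x, w

# Classification would code the test image over each class's sub-dictionary
# and pick the class with the smallest weighted residual.
```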
Motion parameter extraction algorithm for pigs under natural conditions
FENG Aijing, XIAO Deqin
2016, 36(10): 2900-2906. DOI: 10.11772/j.issn.1001-9081.2016.10.2900
The daily movement data of pigs, including exercise time, distance, speed and acceleration, is an important basis for analyzing and evaluating their health status, but collecting such data with wearable sensors makes the animals uncomfortable. Combining video surveillance with digital image technology, a methodology for detecting and tracking pigs under natural conditions was put forward. State parameters and the motion parameters of displacement, velocity, acceleration and angular velocity were then extracted with a shortest-distance matching algorithm. Eight experiments in real-time monitoring of farms were conducted and analyzed. Experimental results show that the proposed method performs well in real farm scenes, adapting to mild adhesion and illumination change. In addition, the accumulated displacement, together with the velocity, acceleration and angular velocity extracted from the videos, reflects the overall movement trend of the pigs and provides a data foundation for future research on pig behavior.
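The matching step reduces to pairing each centroid in the previous frame with its nearest unused centroid in the current frame. A minimal sketch follows, under the assumptions that the same animals appear in both frames and that px_per_m gives the pixel-to-metre scale; neither assumption comes from the paper.

```python
import numpy as np

def match_and_measure(prev_centroids, curr_centroids, dt, px_per_m):
    """Shortest-distance matching of pig centroids across consecutive
    frames, returning the (prev, curr) index pairs and per-animal speeds."""
    prev = np.asarray(prev_centroids, dtype=float)
    curr = np.asarray(curr_centroids, dtype=float)
    pairs, speeds, used = [], [], set()
    for i, p in enumerate(prev):
        d = np.linalg.norm(curr - p, axis=1)
        d[list(used)] = np.inf              # each current centroid used once
        j = int(np.argmin(d))
        used.add(j)
        pairs.append((i, j))
        speeds.append(d[j] / px_per_m / dt)  # displacement / time, in m/s
    return pairs, speeds
```

Acceleration and angular velocity follow by differencing the matched speeds and heading angles over successive frames.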
Fast flame recognition approach based on local feature filtering
MAO Wentao, WANG Wenpeng, JIANG Mengxue, OUYANG Jun
2016, 36(10): 2907-2911. DOI: 10.11772/j.issn.1001-9081.2016.10.2907
For the flame recognition problem, traditional methods based on physical signals are easily affected by the external environment, while most current methods based on flame image features discriminate poorly across scenes and flame types, giving lower recognition precision when the scene or type changes. To overcome this drawback, a new fast flame image recognition method was proposed by introducing color-space information into the Scale-Invariant Feature Transform (SIFT) algorithm. Firstly, SIFT feature descriptors were extracted from frame images taken from flame video. Secondly, local noisy feature points were filtered out using flame color-space information, and the feature descriptors were converted into feature vectors by means of Bag Of Keypoints (BOK). Finally, an Extreme Learning Machine (ELM) was used to establish a fast flame recognition model. Experiments were conducted on open flame datasets and real-life flame images. The results show that the accuracy of the proposed method exceeds 97% across different flame scenes and types, with a recognition time of only 2.19 s on a test set of 4301 images. In addition, compared with three other methods (a support vector machine based on entropy, texture and flame spread rate; a support vector machine based on SIFT and flame color-space features; and an ELM based on SIFT and flame color-space features), the proposed method is superior in both recognition accuracy and speed.
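The classifier itself is compact enough to sketch: a minimal ELM with random input weights, a sigmoid hidden layer, and output weights solved by least squares. The hidden-layer size is illustrative; X would be the BOK histogram matrix and Y one-hot labels.

```python
import numpy as np

def train_elm(X, Y, n_hidden=500, seed=0):
    """Extreme Learning Machine: the input-to-hidden weights W and biases b
    are random and fixed; only the output weights beta are learned, by a
    single least-squares solve on the hidden activations."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # sigmoid hidden layer
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)
```

The single non-iterative solve is what makes ELM training fast, consistent with the speed the abstract reports.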
Defect detection of small hole inner surface based on multi-focus image fusion
NIU Qunyao, YE Ming, LU Yonghua
2016, 36(10): 2912-2915. DOI: 10.11772/j.issn.1001-9081.2016.10.2912
Aiming at the problem of defect detection on the inner surface of small holes with diameters below 3 mm, a new detection method combining micro-optics with multi-focus image fusion technology was proposed. Firstly, under different illumination, images of the hole's inner surface were collected along its axial direction at oblique incidence. Secondly, the Region Of Interest (ROI) was extracted from the collected images based on a mask template obtained under foreground illumination, and the images were registered with the Speeded Up Robust Feature (SURF) algorithm. The ROI images were then fused by a multi-focus method based on regional definition and wavelet transform. Finally, the guide hole of a spinneret plate with a diameter of 2 mm was used as the experimental object, and the corrosion spots extracted by threshold-based segmentation were used for detection and analysis. The results indicate that the proposed method is feasible: it avoids the low efficiency of manual inspection and breaks the limitation of traditional detection methods for holes with diameters below 3 mm.
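A minimal sketch of the definition-based branch of the fusion follows: at each pixel the value is taken from the registered image with the highest local Laplacian energy, a common sharpness (definition) measure. The window size is an assumption, and the wavelet-domain part of the paper's fusion is not reproduced.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def multifocus_fuse(images, win=9):
    """Fuse registered, differently focused ROI images: per pixel, keep the
    value from the image whose local Laplacian energy (sharpness) is
    highest within a win x win window."""
    stack = np.stack([im.astype(np.float64) for im in images])
    energy = np.stack([uniform_filter(laplace(im) ** 2, win) for im in stack])
    best = np.argmax(energy, axis=0)             # index of sharpest image
    return np.take_along_axis(stack, best[None], axis=0)[0]
```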
Statistical iterative algorithm based on adaptive weighted total variation for low-dose CT
HE Lin, ZHANG Quan, SHANGGUAN Hong, ZHANG Wen, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
2016, 36(10): 2916-2921. DOI: 10.11772/j.issn.1001-9081.2016.10.2916
Concerning the streak artifacts and impulse noise in Low-Dose Computed Tomography (LDCT) reconstructed images, a statistical iterative reconstruction method based on adaptive weighted Total Variation (TV) was presented. Considering that traditional TV may introduce staircase effects while suppressing streak artifacts, an adaptive weighted TV model combining a weighting factor based on weighted variation with the TV model was proposed, and the new model was applied to Penalized Weighted Least Squares (PWLS). Different areas of the image were processed with different denoising intensities, achieving good noise suppression together with edge preservation. The Shepp-Logan phantom and a digital pelvis phantom were used to test the effectiveness of the proposed algorithm. Compared with the Filtered Back Projection (FBP), PWLS, PWLS-Median Prior (PWLS-MP) and PWLS-TV algorithms, the proposed method has smaller Normalized Mean Square Distance (NMSD) and Normalized Average Absolute Distance (NAAD) on the two test images, and achieves Peak Signal-to-Noise Ratios (PSNR) of 40.91 dB and 42.25 dB respectively. Experimental results show that the proposed algorithm preserves image details and edges well while effectively eliminating streak artifacts.
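To make the regularizer concrete, the sketch below performs one gradient step of an image-domain surrogate objective with a smoothed, spatially weighted TV term. The weight map (small near edges, large in flat areas) stands in for the paper's weighted-variation factor, and the full PWLS data term over projection data is deliberately simplified to an image-domain fit; both are assumptions.

```python
import numpy as np

def adaptive_tv_step(u, f, weights, lam=0.1, tau=0.1, eps=1e-6):
    """One gradient step on 0.5*||u - f||^2 + lam * sum(weights * |grad u|)
    with a smoothed TV magnitude. u is the current image, f the noisy
    image, weights the per-pixel adaptive factor."""
    ux = np.diff(u, axis=1, append=u[:, -1:])    # forward differences
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = weights * ux / mag, weights * uy / mag
    # divergence via backward differences: the (negative) TV gradient
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return u - tau * ((u - f) - lam * div)
```

With weights near zero at edges the step barely smooths there, which is the mechanism behind the edge preservation the abstract reports.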
Automatic positioning and detection method for jewelry based on principal component analysis
JIA Yulan, HUO Zhanqiang, HOU Zhanwei, WANG Zhiheng
2016, 36(10): 2922-2926. DOI: 10.11772/j.issn.1001-9081.2016.10.2922
Concerning the difficulty of estimating the shape characteristics of irregular objects, a new automatic detection method for irregular jewelry images was put forward by introducing Principal Component Analysis (PCA) to realize automatic measurement of jewelry. First, the principal axis of the target image was extracted by PCA. Then, the four vertices of the jewelry's external rectangle were computed along the optimized direction of the principal axis. Finally, the best-fitted rectangle of the irregular contour was positioned to measure the irregular shape of the jewelry. The proposed method was applied to real jewelry images, and experimental results illustrate that the algorithm can accurately locate the target in the image. Compared with the linear spectral frequency method and the projection rotation translation method, both subjective and objective evaluations confirm the superiority of the proposed algorithm.
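The geometric core is small enough to sketch directly: PCA on the foreground pixel coordinates gives the principal axis, and the extreme projections in the rotated frame give the four rectangle vertices. A minimal version, assuming a binary foreground mask, follows.

```python
import numpy as np

def pca_bounding_rectangle(mask):
    """Fit the best rectangle to an irregular object: eigenvectors of the
    coordinate covariance give the principal axes; the box is the min/max
    extent in that frame, mapped back to image coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float64)
    mean = pts.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((pts - mean).T))  # columns are the axes
    proj = (pts - mean) @ vecs
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                        [hi[0], hi[1]], [lo[0], hi[1]]])
    return corners @ vecs.T + mean          # four vertices, image coordinates
```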
Whole parameters estimation for linear frequency modulation pulse based on partial correlation
WANG Sixiu, XU Zhou, WANG Xiaojie, WANG Jianghua
2016, 36(10): 2927-2932. DOI: 10.11772/j.issn.1001-9081.2016.10.2927
Focusing on the reconnaissance of Linear Frequency Modulation (LFM) pulse signals, a method was proposed to estimate the whole parameter set: frequency modulation rate, center frequency, time of arrival and pulse width. Firstly, the frequency modulation rate and the time-frequency relation were estimated based on the Fractional Fourier Transform (FrFT); partial correlation pulses were then used for signal accumulation, and finally autocorrelation was used to estimate the center frequency, time of arrival and pulse width. The Cramér-Rao Lower Bounds (CRLB) of the parameters were derived, and the effect of the signal-to-noise ratio on the estimation error was analyzed. The effect of the partial accumulation pulse width on the estimation error was also analyzed, and advice was given on choosing the accumulation pulse width. Simulation results show that the estimation error of the frequency modulation rate is close to the CRLB. At a signal-to-noise ratio of 0 dB, without any knowledge of baseband or modulation parameters, the Root Mean Square Error (RMSE) of the center frequency is on the order of 10^-1 MHz, and the RMSEs of the time of arrival and the pulse width are on the order of 10^-1 μs. The estimation error first decreases and then increases with increasing correlation pulse width. The proposed method is especially applicable to the reconnaissance of new-system radars such as chirp radar and Synthetic Aperture Radar (SAR).
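As a rough illustration of the first step, the sketch below estimates the chirp rate by a dechirp search, which is closely related to the FrFT peak search but avoids assuming any particular FrFT routine; the candidate grid k_grid is assumed to bracket the true modulation rate.

```python
import numpy as np

def estimate_chirp_rate(s, fs, k_grid):
    """Chirp-rate search by dechirping: multiply the samples s (complex
    baseband, rate fs) by exp(-j*pi*k*t^2) for each candidate rate k and
    pick the k whose spectrum is most concentrated."""
    t = np.arange(len(s)) / fs
    best_k, best_peak = None, -np.inf
    for k in k_grid:
        spec = np.abs(np.fft.fft(s * np.exp(-1j * np.pi * k * t ** 2)))
        # at the true rate the LFM collapses to a single spectral tone,
        # maximizing the peak magnitude
        if spec.max() > best_peak:
            best_peak, best_k = spec.max(), k
    return best_k
```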
Single-channel vibration signal blind source separation by combining extreme-point symmetric mode decomposition with time-frequency analysis
YE Weidong, YANG Tao
2016, 36(10): 2933-2939. DOI: 10.11772/j.issn.1001-9081.2016.10.2933
Since the number of observed signals in single-channel vibration signal separation is smaller than the number of sources, and traditional Blind Source Separation (BSS) of vibration signals commonly ignores their non-stationarity, a BSS algorithm based on Extreme-point Symmetric Mode Decomposition (ESMD) and Time-Frequency Analysis (ESMD-TFA-BSS) was proposed. Firstly, the single observed signal was decomposed into modes by ESMD, the number of sources was estimated by the Bayesian Information Criterion (BIC), and the optimal observed signals were selected using the correlation-coefficient method; the original and optimal observed signals together formed the new observed signals. Secondly, the whitening matrix and whitened signals were computed from the new observed signals, the whitened signals were extended to the time-frequency domain using the smoothed pseudo Wigner-Ville distribution, and the unitary matrix was calculated by joint matrix diagonalization. Finally, the source signals were estimated from the whitening matrix and the unitary matrix. In the BSS simulation experiments, the similarity coefficients between the signals estimated by ESMD-TFA-BSS and the source signals were 0.9771, 0.9784 and 0.9660, versus 0.8697, 0.9706 and 0.8548 for the BSS algorithm based on Empirical Mode Decomposition and Time-Frequency Analysis (EMD-TFA-BSS); the coefficients of ESMD-TFA-BSS are thus higher by 12.53%, 0.08% and 13.00%. Experimental results indicate that ESMD-TFA-BSS can effectively improve the separation accuracy of source signals in practical applications.
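The whitening step is standard and can be sketched briefly: eigendecompose the covariance of the centered observation matrix and keep the dominant subspace. A minimal version follows; the joint diagonalization of the time-frequency matrices is not reproduced.

```python
import numpy as np

def whiten(X, n_sources):
    """Whitening for the BSS stage: X is the (channels x samples) matrix of
    new observed signals built from the ESMD modes. Returns the whitening
    matrix V and whitened signals Z = V X with identity covariance."""
    Xc = X - X.mean(axis=1, keepdims=True)
    C = Xc @ Xc.T / Xc.shape[1]                  # sample covariance
    vals, vecs = np.linalg.eigh(C)
    idx = np.argsort(vals)[::-1][:n_sources]     # keep dominant subspace
    V = np.diag(vals[idx] ** -0.5) @ vecs[:, idx].T
    return V, V @ Xc
```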
Aircraft stands assignment optimization based on variable tabu length
LI Yaling, LI Yi
2016, 36(10): 2940-2944. DOI: 10.11772/j.issn.1001-9081.2016.10.2940
Aiming at maximizing the utilization of aircraft stands and minimizing passengers' total walking distance in air transport, a new dynamic and flexible algorithm was proposed. Firstly, a simple, basic tabu search algorithm was introduced; a modified method called Dynamic Tabu Search (DTS) was then presented; finally, comparisons over several groups of data verified that a variable tabu length reduces the number of cycles in global optimization. Moreover, comparison with algorithms from the references showed that the total walking distance was decreased by 15.75% under sufficient resources and 22.84% under limited resources. Experimental results indicate that the dynamic tabu search algorithm can produce stand assignments with shorter passenger walking distance.
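A skeleton of tabu search with a variable tabu length is sketched below; the periodic re-draw of the tenure is an illustrative stand-in for DTS's adaptation rule, and neighbors/cost are assumed callables over stand assignments, not functions from the paper.

```python
import random

def dynamic_tabu_search(init, neighbors, cost, n_iter=1000,
                        tenure_min=5, tenure_max=20):
    """Tabu search whose tabu length (tenure) varies during the run.
    `neighbors(solution)` yields (move, new_solution) pairs; `cost` scores
    a solution (e.g. total passenger walking distance)."""
    best = cur = init
    tabu = {}                                   # move -> iteration it expires
    tenure = random.randint(tenure_min, tenure_max)
    for it in range(n_iter):
        if it % 50 == 0:                        # vary the tabu length
            tenure = random.randint(tenure_min, tenure_max)
        cands = [(move, s) for move, s in neighbors(cur)
                 if tabu.get(move, -1) < it or cost(s) < cost(best)]  # aspiration
        if not cands:
            break
        move, cur = min(cands, key=lambda ms: cost(ms[1]))
        tabu[move] = it + tenure                # forbid reversing this move
        if cost(cur) < cost(best):
            best = cur
    return best
```

A longer tenure diversifies the search while a shorter one intensifies it; re-drawing the tenure trades between the two, which is the mechanism behind the reduced cycling the abstract reports.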