Table of Content
01 August 2013, Volume 33 Issue 08
Network and communications
Influence maximization algorithm for micro-blog network
WU Kai, JI Xinsheng, GUO Jinshi, LIU Caixia
2013, 33(08): 2091-2094. DOI: 10.11772/j.issn.1001-9081.2013.08.2091
The influence maximization problem in micro-blog networks cannot be solved by a simple user ranking algorithm. To address this, a greedy algorithm based on an Extended Linear Threshold Model (ELTM) was proposed to solve the Top-K problem in micro-blog. A concept of influence rate and a WIR (Weibo Influence Rank) algorithm were established to determine a user's influence by summarizing its key factors. Then, based on WIR values, an influence propagation model was proposed, and the Top-K nodes were mined with a greedy algorithm. A simulation on Sina micro-blog data validates the effectiveness of the proposed method; the results show that it outperforms traditional ranking algorithms.
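For orientation, a minimal sketch of the greedy Top-K stage the abstract describes, assuming a caller-supplied spread() estimator (e.g., Monte Carlo simulation of the ELTM parameterized by WIR values); the names and interface are illustrative, not the paper's:

    def greedy_top_k(nodes, k, spread):
        """Greedy Top-K selection: repeatedly add the node with the largest
        marginal gain in estimated influence spread.
        spread(seed_set) -> float is an assumed influence estimator."""
        seeds = set()
        for _ in range(k):
            base = spread(seeds)
            # pick the node whose addition maximizes the marginal gain
            best = max((v for v in nodes if v not in seeds),
                       key=lambda v: spread(seeds | {v}) - base)
            seeds.add(best)
        return seeds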
Dynamic community discovery algorithm of Web services based on collaborative filtering
Zhong WU, Gui-hua NIE, CHEN Dong-lin, ZHANG Peilu
2013, 33(08): 2095-2099. DOI: 10.11772/j.issn.1001-9081.2013.08.2095
To cope with the low accuracy of mining results in existing community discovery algorithms and the low quality of intelligent recommendation for Web services resources, a dynamic community discovery algorithm based on node similarity was proposed on the basis of conventional collaborative filtering algorithms. Firstly, the central node with the most connected nodes was taken as the initial network community, and community contribution degree was used as the metric to iteratively form a number of communities with globally saturated contribution degree. Then, an overlap calculation was used to merge communities of high similarity. Finally, the results were sorted in descending order to form neighboring user sets, and the community recommendation object was obtained by calculating the dynamic similarity between the target user and the other users in the community. The experimental results show that the community classification produced by the proposed algorithm is consistent with the real community classification, and that the algorithm improves the accuracy of community mining and helps to achieve high-quality community recommendation.
Association user mining of location groups in mobile communication networks
Fen LIU, GE Guodong, ZHAO Yu, LIU Bingyang
2013, 33(08): 2100-2103. DOI: 10.11772/j.issn.1001-9081.2013.08.2100
Current network relationship analysis mainly studies association relationships or group relations between users. Because the characteristic relations between users in a mobile communication network are diverse, the relationships between users and groups are also diverse. For specific groups with certain communication correlation and location similarity in the mobile communication network, position prediction was introduced into the correlation measurement of position items, a location trajectory correlation measurement criterion was established, and an association user mining algorithm was proposed. The experimental results indicate that the proposed method can measure the relationship between users and groups, and discover potential users associated with specific groups.
Energy-efficient algorithm of strong k-barrier coverage in wireless sensor network
GUO Xinming
2013, 33(08): 2104-2107.
To further reduce the energy consumption of strong k-barrier coverage for crossing-behavior detection in Wireless Sensor Network (WSN), the minimum-energy strong k-barrier coverage problem was first proved to be NP-hard, and then a heuristic algorithm named HARPN, which can adjust the sensing power of nodes, was proposed. In HARPN, four rules for computing a node's sensing radius were put forward according to the distance between wireless nodes in barriers and the state of the preceding nodes, and the sensing power of each node was then determined by its sensing radius. On the premise that sensing barriers must remain connected, the overall energy consumption of the barriers was reduced as much as possible. Theoretical analysis and simulations show that the adaptability and stability of HARPN are stronger than those of comparable algorithms, and its average energy consumption is about 62% of Heuristic-2's under the same network conditions of barrier fluctuation, which prolongs the network lifetime.
Node scheduling scheme based on spatial resolution in wireless sensor networks
REN Xiuli, WANG Weiyong
2013, 33(08): 2108-2111.
Node scheduling is an effective approach to the problems of energy constraints and high coverage redundancy in Wireless Sensor Network (WSN); however, it must satisfy the coverage-rate requirement as well as save energy. To solve the unbalanced and inefficient energy consumption of nodes in random node scheduling schemes, a node scheduling scheme based on spatial resolution was proposed. The scheme maintained the coverage rate by controlling the number of active nodes in a deployed area and balanced the remaining energy of every node. Meanwhile, a neighbor-node protection mechanism was adopted so that dormant nodes could turn off real-time listening to reduce energy consumption, and the coverage-rate requirement was fulfilled effectively by relieving the coverage holes that may occur when nodes take turns to sleep. The simulation results indicate that the scheme is superior to other similar schemes in coverage rate, lifetime, and balance of energy consumption among nodes.
Energy-aware P2P data sharing mechanism for heterogeneous mobile terminals
DU Peng, BAI Guangwei, SHEN Hang, CAO Lei
2013, 33(08): 2112-2116.
In response to the heterogeneity of mobile terminals in existing Peer-to-Peer (P2P) data sharing networks, an energy-aware P2P data sharing mechanism for heterogeneous mobile terminals, named EADS, was proposed. It enabled terminals to determine the types of end-user devices and introduced an energy-aware module to predict their residual energy. On this basis, the mechanism adjusted the data sharing strategy dynamically in accordance with changes in the network environment. The simulation results demonstrate that EADS achieves significant improvement in energy utilization efficiency, load balance, and data sharing time, thus enhancing the success rate of data distribution. While maintaining high availability of files, it reduces average energy consumption by up to 15%.
Probabilistic forwarding algorithm of mobile Ad Hoc networks based on directional prediction
LI Shibao, LOU Linlin, CHEN Xiangrui, HONG Li
2013, 33(08): 2117-2120.
In Mobile Ad Hoc Network (MANET), traditional routing protocols based on flooding or expanding ring search require each node to forward a message, which results in heavy overhead and long routing latency. To improve routing performance, a probabilistic forwarding algorithm based on directional prediction was proposed. Information such as ID and time was extracted from data packets by monitoring the network, and a table of records hinting at the distance to the destination node was established. Based on these records, each node's forwarding probability was calculated and adaptively adjusted according to the network, and a node forwarded a packet only when its probability was high enough to sustain the routing process towards the destination. The simulation results show that the routing overhead declines by up to 70% compared with the flooding algorithm and by 20% compared with the classical probabilistic forwarding algorithm, significantly improving network performance.
Outage performance of multi-hop cooperative diversity system
TANG Jingmin, CAO Jinshi, LONG Hua, ZHANG Chunping
2013, 33(08): 2121-2123.
An approximate outage probability formula and the diversity order were derived for a multi-hop diversity system employing Decode-and-Forward (DF) relaying at high Signal-to-Noise Ratio (SNR). A modified multi-hop Selective Decode-and-Forward (SDF) protocol was then proposed to overcome the inability of plain DF relaying to achieve full spatial diversity: the source node resent the information whenever a relay could not decode it correctly. An approximate outage probability formula and the diversity order were also deduced for the SDF protocol at high SNR. The theoretical analysis and simulation results show that, compared with the DF protocol, the SDF protocol improves transmission performance and achieves full spatial diversity.
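For reference, the textbook high-SNR approximation for an N-hop DF chain over Rayleigh fading is consistent with the abstract's claims (the paper's exact expressions may differ). With target rate R and average per-hop SNR $\bar{\gamma}_i$,

$$P_i = 1 - \exp\!\left(-\frac{2^{R}-1}{\bar{\gamma}_i}\right) \approx \frac{2^{R}-1}{\bar{\gamma}_i}, \qquad P_{\mathrm{out}} = 1 - \prod_{i=1}^{N}(1-P_i) \approx (2^{R}-1)\sum_{i=1}^{N}\frac{1}{\bar{\gamma}_i},$$

so plain multi-hop DF has diversity order one; restoring full diversity is precisely what the SDF retransmission step targets.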
Analysis of the availability of partition-aware CORBA-based distributed systems
2013, 33(08): 2124-2127.
Fault-Tolerant Common Object Request Broker Architecture (FT-CORBA), released by the Object Management Group (OMG), provides fault tolerance by replication. However, one main drawback of FT-CORBA is that it does not support network partition, which severely limits system availability because the system must repeatedly access the states of adequate replicas. By adding an extension to the ORB middleware, classifying the replica-consistency requirement into two levels, and acting according to the level, a distributed system can perform better during partition failures. Two situations were considered, with the network partitioned into 3 and into 10 subnets. With three subnets, the improved replication mechanism performs well regardless of how many nodes each subnet contains. With ten subnets, the availability of the distributed system depends mostly on the number of nodes a single subnet contains; if no subnet contains most of the nodes of the distributed system, availability stays low, but still higher than before. Since the solution is applied to an off-the-shelf ORB, there is no need to modify the ORB's source code.
Estimation of RFID tags based on 0-1 distribution
QIAN Xiaojie, GUO Hongyuan, TIAN Yangguang
2013, 33(08): 2128-2131.
In large-scale Radio Frequency Identification (RFID) systems, the estimation time of current tag estimation algorithms increases linearly with the number of tags, and the deviation is large. To address these problems, a new estimation algorithm based on the 0-1 distribution was proposed. Using the feature of the 0-1 distribution, the algorithm set a specific frame length and a selected flag to choose the collection of tags that responded to the query command. In this way, the estimation time was reduced to the logarithm of the tag number, and the deviation was reduced by randomly taking numerous average values. The simulation results show that, compared with other algorithms, the proposed algorithm reduces the deviation by at least 0.9% and fluctuates less.
RFID anti-collision algorithm based on tags grouping
CHEN RongLing, WANG Yuhao, LIU Wei, CHEN Zhongping
2013, 33(08): 2132-2135.
A new tag anti-collision algorithm was proposed for the tag collision problem in Radio Frequency IDentification (RFID) technology. It divided tags into groups based on coset partition theory and restricted each group to respond in a fixed timeslot. According to the query codes and collision flag bits, the reader could identify a group of tags in one slot. The Matlab simulation results show that, compared with the binary search algorithm and the dynamic framed-slotted algorithm, the proposed algorithm improves slot utilization and throughput when the number of tags is large.
Shifted bit inverse sequential acquisition algorithm
REN Guofeng, JI Jiang, TIAN Zhumei
2013, 33(08): 2136-2139.
When the period of the objective sequence is long, traditional acquisition algorithms consume a lot of system resources. A shifted bit inverse sequential acquisition algorithm was therefore proposed, which deduces the trial-and-error result of the following sequence state from the previous one, so that the complicated sequence shifting calculation is avoided. The rule of the control state was then proved, which reduces the storage space of the bit inverse vector and the shifted bit inverse vector. Finally, an acquisition system based on this theory was designed, which acquires sequences with high efficiency; its complexity decreases from the conventional O(n²) to O(n).
Network and distributed technology
Task scheduling strategy based on load balance of cluster in heterogeneous cloud environment
LIU Weining, GAO Long
2013, 33(08): 2140-2142.
Load balancing is an important means to improve resource utilization and system stability. Based on the Adaptive Mutation Particle Swarm Optimization (AMPSO) algorithm, a new load-balancing task scheduling model and strategy for clusters in a heterogeneous cloud environment were proposed. To maximize customer satisfaction and reduce the total execution time of a collection of tasks while keeping the system load as balanced as possible, the scheduling policy incorporated a concept of user bias towards cluster-node properties such as safety and reliability, a method of capturing the degree of preference for the security and reliability of cluster nodes, and estimation of task load information. The simulation shows that the improved AMPSO algorithm outperforms the original AMPSO and the basic Particle Swarm Optimization (PSO) algorithm in convergence speed and in the ability to escape local optima. The results prove that the improved AMPSO can better improve the profit margins of the cloud service provider while ensuring the load balance of the cluster.
Replica allocation policy of cloud services based on social network properties
LUO Haoyu, CHEN Wanghu
2013, 33(08): 2143-2146.
To improve the running efficiency of business workflows in a cloud environment, a replica allocation policy for cloud services was proposed. Taking advantage of social network analysis, the policy identifies the central service nodes in a service network by mining social network properties, such as connectivity and centralization, of a service community. The host physical machine for the replica of a central service is chosen according to the logical sequence between the central service and its pre-services, and to the usage of the other physical machines. The analysis and simulation show that the policy can improve the running efficiency of data-intensive business workflows in a cloud environment by balancing the load across physical machines and reducing the time wasted on long-distance service interaction.
Task scheduling algorithm based on priority and cost constraint in cloud computing
WU Xiaonian, DENG Mengqin, ZHANG Mingling, ZENG Bin
2013, 33(08): 2147-2150.
Concerning service quality assurance in cloud computing, a task scheduling algorithm based on priority and cost constraints was proposed. Firstly, it computed the priority of tasks and the service ability of resources, sorted and grouped tasks and resources respectively, and set the scheduling constraint relationship between task groups and resource groups according to priority and service ability. Furthermore, the completion time and cost of each task on the different resources of the related resource group were calculated, and each task was finally scheduled, in order of priority, onto the resource with the minimum time-cost tradeoff value. The simulation results show that, compared with Min-Min and QoS-Guided-Min, the proposed algorithm achieves better performance and load balancing, and reduces the overall service cost.
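A minimal sketch of the final assignment step (the priority/grouping bookkeeping is omitted); the task and resource fields and the weight alpha are illustrative assumptions, not the paper's notation:

    def schedule(tasks, resources, alpha=0.5):
        """Assign each task, highest priority first, to the resource with the
        minimum weighted time-cost tradeoff value (alpha is a hypothetical
        weight between time and cost)."""
        plan = []
        for t in sorted(tasks, key=lambda t: t["priority"], reverse=True):
            def tradeoff(r):
                time = t["length"] / r["speed"]      # completion time on r
                cost = time * r["unit_cost"]         # service cost on r
                return alpha * time + (1 - alpha) * cost
            plan.append((t["id"], min(resources, key=tradeoff)["id"]))
        return plan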
Cloud computing resource scheduling based on improved quantum genetic algorithm
LIU Weining, JIN Hongbing, LIU Bo
2013, 33(08): 2151-2153.
Focusing on efficient resource scheduling in cloud computing environments, and noting that current research pays little attention to the service cost of the cloud service provider, an improved Quantum Genetic Algorithm (QGA) was proposed to minimize the provider's service cost. Because a chromosome represented by binary-coded quantum bits cannot describe the resource scheduling matrix, the algorithm converted binary-coded quantum bits to real-coded quantum bits, and used a rotation strategy and a mutation operator to guarantee convergence. Comparative experiments among the improved QGA, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) were conducted on a simulation platform with population sizes of 1 and 100 and 100 iterations. The experimental results show that the improved QGA obtains a smaller minimum service cost.
Workflow task scheduling algorithm based on resource clustering in cloud computing environment
GUO Fenguu, YU Long, TIAN Shengwei, YU Jiong, SUN Hua
2013, 33(08): 2154-2157.
Focusing on the large-scale, heterogeneous and dynamic character of resources in cloud computing, a workflow task scheduling algorithm based on fuzzy clustering of resources was proposed. After quantizing and normalizing the resource characteristics, the algorithm applied clustering theory to divide the resources based on pre-built workflow task and resource models. In the scheduling stage, the cluster with the best synthetic performance was chosen first, which shortened the matching time between tasks and resources and improved scheduling performance. Compared with HEFT (Heterogeneous Earliest Finish Time) and DLS (Dynamic Level Scheduling), the experimental results show that, as the number of tasks increases within [0,100], the average SLR (Schedule Length Ratio) of this algorithm is smaller than that of HEFT by 3.4% and that of DLS by 9.9%, and its average speedup is higher than that of HEFT by 59% and that of DLS by 10.2%; as the number of resources increases within [0,100], the average SLR is smaller than that of HEFT by 3.6% and that of DLS by 9.7%, and the average speedup is higher than that of HEFT by 4.5% and that of DLS by 10.8%. The results indicate that the proposed algorithm divides resources reasonably and surpasses the HEFT and DLS algorithms in makespan.
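SLR and speedup here are presumably the standard list-scheduling metrics from the HEFT literature; for reference (an assumption that the paper follows these definitions):

$$\mathrm{SLR} = \frac{\mathit{makespan}}{\sum_{n_i \in CP_{\min}} \min_{p_j \in P} w_{i,j}}, \qquad \mathit{speedup} = \frac{\min_{p_j \in P} \sum_{n_i \in V} w_{i,j}}{\mathit{makespan}},$$

where $w_{i,j}$ is the execution time of task $n_i$ on processor $p_j$ and $CP_{\min}$ is the critical path of the task graph using minimum task costs; lower SLR and higher speedup are better.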
Task assignment optimization in Hadoop
HUANG Chengzhen, WANG Lei, LIU Xiaolong, KUANG Yaping
2013, 33(08): 2158-2162.
Hadoop has been widely used in parallel processing of large data. Existing task assignment strategies are mostly oriented to a homogeneous environment; they either ignore the global cluster state or fail to take into account execution efficiency and algorithm complexity in a heterogeneous environment. To solve these problems, a new task assignment algorithm for heterogeneous environments, named λ-Flow, was proposed. In λ-Flow, task assignment is divided into several rounds: in each round, λ-Flow dynamically collects the cluster state and the execution result of the previous round, and assigns tasks accordingly. The comparative experimental results show that λ-Flow performs better in a dynamically changing cluster than existing algorithms, and effectively reduces job execution time.
Service architecture and service discovery oriented to service clusters
HU Qiang, DU Yuyue
2013, 33(08): 2163-2166.
In order to reduce the search space and improve the efficiency of service discovery, the concept of service cluster was put forward: Web services with the same or similar functions are encapsulated as a service cluster. The request/response schema based on service clusters was constructed, and the formal definition of a service cluster, the architecture, and an algorithm for discovering the optimal Web service under the service-cluster schema were presented. A simulation experiment was conducted on 10000 Web services. The discovery and rediscovery times under the service-cluster schema were less than 600 ms when the number of service clusters was no more than 1000, while the corresponding time under the current service response schema was more than 900 ms. Service discovery under the service-cluster schema is thus much more efficient, and the rediscovery time is also greatly decreased.
Web service composition approach based on service cluster and QoS
DENG Shiyang, DU Yuyue
2013, 33(08): 2167-2170.
To improve search speed and obtain optimal service compositions over large-scale semantic Web services, a fast composition approach based on service clusters and Quality of Service (QoS) was proposed. Using pre-built service clusters, it can quickly obtain the candidate service set while effectively reducing the search space and the complexity of semantic comparison. It obtains more nearly optimal compositions by filtering services with a threshold determined dynamically from the best composition QoS during the composition process. It adopts an effective redundancy processing method to ensure that a minimum of redundant services is used in a composition, and a filtering method inside each service cluster to limit the number of candidate services, which solves the timeout problem caused by too many compositions. The results of experiments performed on a large-scale service repository illustrate that the search is dozens of times faster than common methods and that service filtering and redundancy processing are remarkably effective, so the approach can quickly obtain multiple QoS-optimal, non-redundant service compositions and performs well on deep hierarchical composition in million-scale service repositories.
Pipelining granularity optimization algorithm based on loop tiling
LIU Xiaoxian, ZHAO Rongcai, DING Rui, LI Yanbing
2013, 33(08): 2171-2176.
When a pipelining loop has a great number of iterations or a large loop body but only a few available threads, the workload between two synchronizations of a thread is heavy, which yields a low degree of parallelism. The traditional trade-off approach based on loop tiling cannot handle this situation. To solve the problem, a pipelining granularity decreasing approach based on loop tiling was proposed: the optimal pipelining granularity was obtained by building a cost model for the pipelining loop, and a pipelining granularity optimization algorithm was implemented. Measurements on the wavefront loops of Finite Difference Relaxation (FDR) and representative loops of Finite Difference Time Domain (FDTD) show better performance improvement with the proposed algorithm than with the traditional one.
Parallel Delaunay algorithm design in lunar surface terrain reconstruction system
WANG Zhe, GAO Sanhong, ZHENG Huiying, LI Lichun
2013, 33(08): 2177-2183.
Triangulation is one of the time bottlenecks of 3D reconstruction systems. To speed it up, a parallel Delaunay algorithm was designed for a shared-memory multi-core computer. The algorithm employed the divide-and-conquer method and improved the conquer procedure and the Delaunay mesh optimization procedure to avoid data competition. Experiments were conducted on datasets ranging from 500000 to 5000000 points gathered on a lunar surface simulation ground, and the speedup of the algorithm reached 6.44. In addition, the algorithm's complexity and parallel efficiency were fully analyzed, and the algorithm was applied in the lunar surface terrain reconstruction system to realize fast virtual terrain reconstruction.
Database technology
Quantitative associative classification based on lazy method
LI Xueming, LI Binfei, YANG Tao, WU Haiyan
2013, 33(08): 2184-2187.
To avoid the blind discretization of the traditional "discretize first, learn second" classification paradigm, a new associative classification method based on the lazy approach was proposed. It first discretized the new training dataset obtained by determining the K-nearest neighbors of a test instance, then mined association rules from the discretized dataset and built a classifier to predict the class label of the test instance. Contrastive experiments with CBA (Classification Based on Associations), CMAR (Classification based on Multiple Class-Association Rules) and CPAR (Classification based on Predictive Association Rules) on seven commonly used quantitative UCI datasets show that the classification accuracy of the proposed method is increased by 0.66% to 1.65%, verifying its feasibility.
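A minimal sketch of the lazy preparation step, assuming Euclidean K-nearest neighbors and equal-width binning (the abstract does not specify the discretization method); rule mining would then run on the returned local dataset:

    import numpy as np

    def lazy_prepare(train_X, train_y, test_x, k=30, bins=5):
        """Keep the k training instances nearest to the test instance,
        then discretize that local subset feature by feature."""
        dist = np.linalg.norm(train_X - test_x, axis=1)   # Euclidean distances
        idx = np.argsort(dist)[:k]                        # indices of k nearest
        local_X, local_y = train_X[idx], train_y[idx]
        disc = np.empty_like(local_X, dtype=int)
        for j in range(local_X.shape[1]):
            col = local_X[:, j]
            edges = np.linspace(col.min(), col.max(), bins + 1)[1:-1]
            disc[:, j] = np.digitize(col, edges)          # bin index per value
        return disc, local_y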
Differential co-expression relative constant row bicluster mining algorithm
XIE Huabo, SHANG Xuequn, WANG Miao
2013, 33(08): 2188-2193.
In bioinformatics, mining differential co-expression biclusters is useful for studying biological change processes such as aging and canceration. Past definitions measured difference only from the perspective of the whole gene set, and thus contained much noise. Therefore, a new definition named MiSupport was put forward to measure difference at the gene level, and based on MiSupport, an algorithm named MiCluster was proposed to mine maximal differential co-expression biclusters across two real gene chips. Firstly, MiCluster constructed a differential weighted undirected sample-sample relational graph from the two real-valued gene expression datasets. Secondly, the maximal differential biclusters were generated from this graph by sample-growth and level-growth, using efficient pruning techniques and an accurate candidate-generation method. The experimental results show that MiCluster is more efficient than existing methods; evaluated by Mean Square Error (MSE) score and Gene Ontology (GO), it also finds biclusters with better statistical and biological significance.
Robust feature selection method in high-dimensional data mining
LI Zhean, CHEN Jianping, ZHANG Yajuan, ZHAO Weihua
2013, 33(08): 2194-2197.
In high-dimensional data, the number of variables is usually larger than the sample size and the data are often heterogeneous. For such data, a robust and effective feature selection method was proposed by combining the dimension-reduction technique of variable selection with a modal-regression-based estimation method. The estimation algorithm was given using the Local Quadratic Algorithm (LQA) and the Expectation-Maximization (EM) algorithm, and the selection of the tuning parameter was also discussed. Simulation results show that the proposed method is overall better than regularized methods based on least squares or median regression; compared with existing methods, it has higher prediction ability and stronger robustness, especially for non-normal error distributions.
Visualization of multi-valued attribute association rules based on concept lattice
GUO Xiaobo, ZHAO Shuliang, ZHAO Jiaojiao, LIU Jundan
2013, 33(08): 2198-2203.
Traditional association rule visualization approaches cannot display frequent patterns and the relationships of items, offer only a single form of expression, and in particular are ill-suited to representing multi-schema association rules. To address these problems, a new visualization algorithm for multi-valued association rule mining was proposed. It redefined and classified multi-valued attribute data using concept lattices, and presented the multi-valued attribute items of frequent itemsets and association rules with a concept lattice structure. The method achieves visualization of frequent itemsets and multi-schema visualization of association rules, covering one-to-one, one-to-many, many-to-one, many-to-many and concept-hierarchy rules. Finally, the advantages of the method were illustrated on demographic data of a province, where visualization of the source data and visual representation of frequent patterns and association relations were achieved. The practical analysis and experimental results show that the schema has excellent visual effects for frequent itemset display and for true multi-schema association rule visualization.
Enhanced clustering ensemble algorithm based on characteristics of data sets
HOU Yong, ZHENG Xuefeng
2013, 33(08): 2204-2207.
Popular clustering ensemble algorithms cannot provide an appropriate treatment scheme in light of the different characteristics of different data sets. A new clustering ensemble algorithm, the Enhanced Clustering Ensemble algorithm based on Characteristics of Data sets (ECECD), was proposed to overcome this defect. ECECD consists of base clustering generation, base clustering selection and a consensus function; it selects a particular range of ensemble members to form the final ensemble and produces the final clustering according to the characteristics of the data set. On three benchmark data sets (ecoli, leukaemia and Vehicle), the clustering errors of the proposed algorithm were 0.014, 0.489 and 0.361 respectively, always the minimum compared with other algorithms such as the Bagging based Structure Ensemble Approach (BSEA), Hybrid Cluster Ensemble (HCE) and Cluster-Oriented Ensemble Classifier (COES), and its Normalized Mutual Information (NMI) values remained the highest as the number of candidate base clusterings increased. The proposed algorithm therefore attains the highest clustering precision and the strongest scalability among these popular clustering ensemble algorithms.
Algorithm for detecting approximate duplicate records in massive data
ZHOU Dianrui, ZHOU Lianying
2013, 33(08): 2208-2211.
To address the low precision and low time efficiency of approximate duplicate record detection algorithms on massive data, an integrated weighting method and a filtering method based on string length were adopted. The integrated weighting method combines user experience and mathematical statistics to calculate the weight of each attribute, making weight calculation more scientific. The length-based filtering method uses the length difference between strings to terminate the edit distance algorithm early, which reduces the number of records to be matched during detection. The experimental results show that the weight vector calculated by the integrated weighting method reflects the importance of each field more comprehensively and accurately, and that the length-based filtering method reduces the comparison time among records, effectively solving approximate duplicate record detection on massive data.
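A minimal sketch of the two pruning ideas, assuming a plain Levenshtein DP (the paper's field weighting is omitted): the length test rejects pairs cheaply, and the row-minimum test stops the DP as soon as the bound max_dist can no longer be met.

    def within_edit_distance(a, b, max_dist=2):
        """Length filter plus early-terminating edit distance."""
        if abs(len(a) - len(b)) > max_dist:   # lengths alone rule the pair out
            return False
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            if min(cur) > max_dist:   # row minimum never decreases: safe stop
                return False
            prev = cur
        return prev[-1] <= max_dist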
Information security
Secure protection scheme for hierarchical OSPF network
KONG Lingjing, ZENG Huashen, LI Yao
2013, 33(08): 2212-2217.
As the most widely used intra-domain routing protocol for large-scale networks, the security of Open Shortest Path First (OSPF) affects not only the normal running of an autonomous system but also inter-domain routing and even the whole network. Based on asymmetric encryption, the traditional digital signature solution can realize end-to-end security validation; however, it ignores point-to-point validation, and its storage and extra overhead also need to be addressed urgently. Based on a symmetric encryption algorithm, a new solution named HS-OSPF was put forward. HS-OSPF extends the original two-level hierarchical structure and designs a reasonable, efficient key distribution and management scheme. The results show that the shortcomings of the traditional solution are overcome, key storage and system overhead are reduced, and the real-time performance of secure communication is improved.
Fuzzy risk evaluation on multi-role security mutual exclusion in mobile environment
WANG Jianjun, LI Jianjun
2013, 33(08): 2218-2221.
Since the traditional mechanism is inefficient at resolving multi-role security mutual exclusion in the mobile environment, a solution was proposed that determines the degree of security mutual exclusion using the comprehensive sensitivity of multiple roles. The system performs fuzzy evaluation of a role's sensitivity based on its internal security factors, then calculates the comprehensive sensitivity of multiple roles with a compensation competitive algorithm: Hamming-distance compensation is applied to each role sensitivity, and the compensated maximum is taken as the multi-role comprehensive sensitivity, enabling a multi-role system in the mobile environment to balance safety and efficiency. Finally, the complexity of the algorithm was analyzed, and an example demonstrates that it can increase the execution efficiency of the roles.
Colluding clique detector based on evaluation similarity in WSN reputation system mechanism
WANG Yong, YUAN Chaoyan, TANG Jing, HU Liangliang
2013, 33(08): 2218-2221.
To resolve the security issue in Wireless Sensor Network (WSN) reputation systems where multiple malicious nodes collude to attack network nodes and prevent their accurate identification, a detection mechanism for Bad-mouthing and Self-promoting (BS) collusion attack groups, called BSCD, was proposed, and its implementation method was given. It detects abnormal recommending nodes, analyzes the similarity of evaluation behavior between recommending nodes, and effectively detects the existence of a collusion attack group, thereby reducing the group's damage and impact on the reputation system. The simulation results show that BSCD is significantly effective in detecting and resisting BS collusion attack groups, and effectively improves the malicious node detection rate of the reputation system and the ability of the entire system to resist malicious nodes.
Linear collusion attack analysis of combined public key cryptosystem
MA Anjun, LI Fangwei, ZHU Jiang
2013, 33(08): 2225-2227.
Concerning the linear collusion attack problem in the Combined Public Key (CPK) cryptosystem, a new equation set was constructed from the nature of the linear collusion attack and the principle of key generation. Through linear transformation of the coefficient matrix, the rank of the equation set can be shown to be less than the number of seeds of the private key seed matrix. Meanwhile, analysis of the private key structure shows that the rank of the augmented matrix is not equal to the rank of the coefficient matrix. Together, these two facts prove that an attacker can never obtain a unique solution for the private key seed matrix, even if he obtains all the private keys. Therefore, the threat of linear collusion attack does not exist in the CPK cryptosystem.
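The argument can be restated compactly. In CPK, each private key is a known modular linear combination of the seed entries, so m pooled keys give a linear system (notation mine, a hedged restatement of the abstract rather than the paper's exact formulation):

$$A\,\mathbf{x} \equiv \mathbf{k} \pmod{n}, \qquad A \in \{0,1\}^{m \times N},$$

where $\mathbf{x}$ stacks the $N$ seed-matrix entries, each row of $A$ selects the seeds combined into one collected private key, and $n$ is the group order. The abstract's two claims are then: $\operatorname{rank}(A) < N$, so the system never determines $\mathbf{x}$ uniquely; and for the equation set the attacker actually constructs, $\operatorname{rank}([A \mid \mathbf{k}]) \neq \operatorname{rank}(A)$, so it is inconsistent. Either way, the seed matrix cannot be recovered.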
Detection of application-layer DDoS attack based on time series analysis
GU Xiaoqing, WANG Hongyuan, NI Tongguang, DING Hui
2013, 33(08): 2228-2231.
Based on the difference between the visiting patterns of normal users and abnormal ones, a new method to detect application-layer Distributed Denial of Service (DDoS) attacks was proposed based on the IP Service Request Entropy (SRE) time series. By approximating an Adaptive AutoRegressive (AAR) model, the SRE time series was transformed into a multidimensional vector series describing current user visiting patterns, and a Support Vector Machine (SVM) classifier was applied to classify the vector series and identify attacks. The simulation results show that this approach can distinguish normal traffic from DDoS attack traffic, and is suitable for detecting DDoS attacks against large-scale network traffic, even when the attack does not arouse sharp changes in overall traffic.
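A minimal sketch of one SRE value, reading the abstract's IP-based SRE as the Shannon entropy of the request distribution within a time window (the paper's exact definition may differ); the series of such values over sliding windows is what the AAR model fits:

    import math
    from collections import Counter

    def service_request_entropy(window_requests):
        """window_requests: iterable of (source_ip, requested_service) pairs
        observed in one time window; returns the Shannon entropy (bits) of
        the request distribution over source IPs."""
        counts = Counter(ip for ip, _ in window_requests)
        total = sum(counts.values())
        if total == 0:
            return 0.0
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values())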
Immunity digital watermarking algorithm based on reversible information hiding in wavelet domain
XIAO Di, ZHU Xinyi
2013, 33(08): 2232-2235.
To overcome the inability of existing immunity watermarking algorithms to recover the original image exactly, a new reversible wavelet-domain watermarking algorithm based on the immune watermarking framework was proposed. The algorithm has a large embedding capacity and can recover the original image exactly: the histogram shifting method of reversible watermarking is used to embed a recovery vector for precise restoration of the original image. Through the block size of the wavelet transform, the number of peak-zero point pairs and the number of rounds of histogram shifting, the control factor of the algorithm can be selected to decide the embedding depth. The algorithm then generates a publishing image that is heavily distorted yet keeps the main information of the original image. Scrambling and encryption are employed so that, without the authorization file, an attacker cannot recover the original image correctly by brute force. The experimental results show that the algorithm can both produce a publishing image that differs greatly from the original and recover the exact image when security permission is released.
Quantitative estimation of parameters in quantization modulation watermarking method of wavelet domain based on PSNR
JING Li, ZHANG Hongjun
2013, 33(08): 2232-2235.
The quantization step is a key parameter in quantization modulation watermarking, but there has been no theoretical method to decide its value. To solve this problem, a quantitative estimation method for the quantization step based on Peak Signal-to-Noise Ratio (PSNR) was proposed. Dither quantization modulation was chosen as the research object, with wavelet coefficients as the quantization coefficients. According to the distribution of the quantization error, a method of estimating the quantization error from the quantization step was first given; then the quantitative relationship among quantization step, watermark sequence length and PSNR was deduced from properties of the wavelet transform. The experimental results show that PSNR values calculated from the derived equations agree well with those obtained experimentally for the same quantization step, demonstrating that the deduced quantitative relationships are accurate.
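One way such a relationship can be derived, assuming an orthonormal wavelet transform and the usual uniform model of dither-quantization error (the paper's exact equations may differ): each marked coefficient contributes error uniform on $[-\Delta/2, \Delta/2]$ with per-coefficient MSE $\Delta^2/12$, so

$$\mathrm{MSE} = \frac{L}{MN}\cdot\frac{\Delta^2}{12}, \qquad \mathrm{PSNR} = 10\log_{10}\frac{255^2}{\mathrm{MSE}} \;\Longrightarrow\; \Delta = \sqrt{\frac{12\,MN\cdot 255^2}{L}\cdot 10^{-\mathrm{PSNR}/10}},$$

where $\Delta$ is the quantization step, $L$ the watermark sequence length (number of quantized coefficients) and $M \times N$ the image size; Parseval's relation carries the coefficient-domain MSE into the pixel domain.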
Scrambling algorithm based on layered Arnold transform
ZHANG Haitao, YAO Xue, CHEN Hongyu, ZHANG Ye
2013, 33(08): 2240-2243.
Concerning the security of digital image information hiding, a scrambling algorithm based on bitwise layered Arnold transform was proposed. The secret image is stratified into bit-planes; taking into account both location and pixel gray-value transformation, each bit-plane is scrambled a different number of times with the Arnold transform, pixels are cross-transposed, and adjacent pixels are XORed bitwise to produce the scrambled image. The experimental results show that the histogram of the secret image is more evenly distributed after layered scrambling, with a similarity to white noise of about 0.962, and that the scrambled image can be restored and extracted almost losslessly, which improves robustness. Compared with other scrambling algorithms, the proposed algorithm resists attacks more robustly and improves the security of spatial information hiding.
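For reference, the Arnold transform iterated on each $N \times N$ bit-plane is the standard area-preserving map

$$\begin{pmatrix}x'\\ y'\end{pmatrix} = \begin{pmatrix}1 & 1\\ 1 & 2\end{pmatrix} \begin{pmatrix}x\\ y\end{pmatrix} \bmod N,$$

which is invertible (the inverse matrix is $\begin{pmatrix}2 & -1\\ -1 & 1\end{pmatrix} \bmod N$), so scrambling bit-plane $i$ for $k_i$ iterations can be undone exactly by a receiver who knows each $k_i$; using a different iteration count per plane is what makes the scrambling "layered".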
Multiple-dimension process behavior evaluation model and its optimization
MAO Kun, DU Xuehui, SUN Yi
2013, 33(08): 2244-2249.
To solve the optimization and selection problems in process behavior evaluation models, process behavior was defined and described based on the Hidden Markov Model (HMM). The relation between precision rate and false positive rate was discussed, and a multi-dimension process behavior evaluation model based on Boolean functions was proposed, which overcomes the shortcomings of single-model evaluation and increases evaluation performance. Based on a cost decision tree, a target function was given to select the optimal process behavior under the proposed model. Finally, the model was tested and compared with the traditional Sequence TIme-Delay Embedding (STIDE) and HMM methods; the test results verify its efficiency and superiority.
Security analysis and improvement of certificateless signature scheme without bilinear pairing
WANG Yi, DU Weizhang
2013, 33(08): 2250-2252.
Analyzing the security of the certificateless signature scheme without bilinear pairing proposed by Wang Shengbao et al. (WANG S B, LIU W H, XIE Q. Certificateless signature scheme without bilinear pairings. Journal on Communications, 2012, 33(4): 93-98) shows that the scheme cannot resist the malicious attack of an actively dishonest Key Generation Center (KGC). A detailed attack method for this kind of attack was given, and an improved scheme was proposed. Security analysis of the improved scheme shows that it resists the malicious KGC attack, maintains the efficiency of the original scheme, and has higher security; meanwhile, the communication complexity is reduced because the secure channel is eliminated.
Artificial intelligence
Improved glowworm swarm optimization algorithm for high-dimensional functions
PENG Shuo, OUYANG Aijia, YUE Guangxue, HE Minghua, ZHOU Xu
2013, 33(08): 2253-2256.
Concerning the low accuracy and slow convergence of the Glowworm Swarm Optimization (GSO) algorithm on high-dimensional functions, an Improved GSO (IGSO) algorithm with a mutation operator and foraging behavior was proposed. The mutation operator guides the evolution of glowworms that cannot find peers within their visible range, enhancing the utilization of outliers and improving overall efficiency. The foraging-behavior operator substantially increases accuracy and convergence speed by searching precisely within the globally optimal region captured by the algorithm, and in the late stage it effectively avoids local optima and enlarges the global search range. The experimental results on eight benchmarks indicate that IGSO has better global optimization ability and a higher success ratio than GSO.
Mixed optimization strategy based on eating particle swarm and conjugate gradient
WANG Jianchen, QI Xiaohui, SHAN Ganlin
2013, 33(08): 2257-2260.
Particle Swarm Optimization (PSO) is an intelligent evolutionary approach widely used to search for the global optimal solution. However, fast flying of swarm particles to the current optimal solution at the early algorithm phase may result in premature convergence, and at the late phase, convergence of a majority of particles causes the degradation of swarm speed. To deal with those shortcomings, a new global algorithm named Eating Particle Swarm Optimization (EPSO) was put forward. In this algorithm, the concepts of eating process and second flight were introduced to guarantee particles flying quickly away from the current optimal solution, so that individual diversity was enhanced. Then the proposed EPSO was combined with Conjugate Gradient (CG) method to form a mixed optimization strategy, in which CG was applied to the local optimization of EPSO algorithm to improve the convergence speed and precision. High-dimensional Benchmark functions were used for optimization experiments, of which the results were compared with the methods in recent literature. The results show that the proposed approach can avoid local optimal phenomena, and obtains the merits of CG in terms of optimization accuracy and convergence speed.
Particle swarm optimization for two-echelon location-routing problem
CHEN Jiumei, GONG Ying
2013, 33(08): 2261-2264.
In order to solve the two-echelon location-routing problem of distribution networks, a particle swarm optimization algorithm with path relinking integrated into the particle update process was proposed. Three path relinking search modules, taking the transfer station, the path and the edge as objects, were designed according to the attributes of solutions of the two-echelon location-routing problem, and four path relinking strategies were put forward based on different combinations of these modules. The test results on examples of different scales show that the algorithm can solve the two-echelon location-routing problem effectively; the first path relinking strategy has higher efficiency, the second higher stability, the third no obvious advantage in any respect, and the fourth higher solution quality.
Particle swarm optimization algorithm with learning factor determined by inertia weight function
ZHAO Yuandong, FANG Zhenghua
2013, 33(08): 2265-2268.
Concerning the problem that independently adjusting the inertia weight and the learning factors reduces the evolutionary unity and intelligence of the Particle Swarm Optimization (PSO) algorithm and cannot adapt to complex nonlinear optimization problems, a new PSO algorithm whose learning factors are controlled by a function of the inertia weight was proposed. The learning factor was taken as a linear, nonlinear or trigonometric function of the inertia weight, increasing or decreasing progressively as the inertia weight decreased linearly or nonlinearly. This strategy effectively strengthens the interaction between inertia weight and learning factors, balances global exploration and local exploitation, and better guides particles toward the globally optimal solution. Linear and nonlinear inertia weight schedules were then used to analyze the combined performance of weight and learning factor, and the test-function experiments show that the nonlinear weight is better. Finally, simulation on benchmark test functions and comparison with PSO using asynchronous linear and trigonometric-function learning factors lead to the conclusion that adjusting the learning factor through the inertia weight balances the individual and population learning abilities of particles and improves the optimization precision of the algorithm.
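For reference, the standard PSO update that the strategy modifies is

$$v_{id}^{t+1} = w\,v_{id}^{t} + c_1 r_1\,(p_{id} - x_{id}^{t}) + c_2 r_2\,(g_{d} - x_{id}^{t}), \qquad x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1},$$

where $w$ is the inertia weight, $c_1, c_2$ the learning factors, $p_{id}$ and $g_d$ the personal and global best positions, and $r_1, r_2$ uniform random numbers in $[0,1]$. Here $c_1$ and $c_2$ are not tuned independently but set as functions of $w$, e.g. $c_1 = f(w)$ shrinking and $c_2 = g(w)$ growing as $w$ decays; the specific linear, nonlinear or trigonometric forms of $f$ and $g$ are the paper's design choices and are not given in the abstract.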
Improved particle swarm optimization based on adaptive rejection factor
CHEN Ming, LIU Yanming
2013, 33(08): 2269-2272.
Because multimodal complex problems have many local optima, the basic Particle Swarm Optimization (PSO) algorithm struggles to solve them effectively. To conquer this defect, the Monte Carlo method was first used to simulate the flight trajectory of a particle, and the reason for falling into local optima was identified. Then, by defining the distance, average distance and maximal distance between particles, an adaptive control factor named the Adaptive Rejection Factor (ARF), applied to the local and global optimum positions, was proposed to increase the ability to escape local optima. To test the proposed strategy, three benchmarks (Rosenbrock, Ackley and Griewank) were selected for analysis of convergence and statistical properties. Over 60 independent runs, the improved PSO based on ARF (ARFPSO) achieved best values of 53.82, 2.1203 and 5.32E-04, better than both contrast algorithms. The results show that ARFPSO can effectively avoid premature convergence, and the complexity analysis shows that the introduced strategy does not increase computational complexity.
Friend recommendation method based on common users and similar labels
ZHANG Yiwen YUE Lihua Yifei LI Qing CHENG Jiaxing
2013, 33(08): 2273-2275.
Concerning the problems of current social-network friend recommendation methods, such as unclear user interest and poor correlation between users, a collaborative filtering algorithm based on common users and similar labels was proposed. The commonly followed users were extracted as joint project data, and custom labels were added to reflect user interest; the semantic similarity of labels was then calculated to expand the sparse matrix and improve collaborative filtering recommendation. The experimental results show that, compared with the traditional single-index collaborative filtering algorithm, the proposed algorithm better reflects user interest and significantly improves recommendation accuracy and average precision.
Improved vocabulary semantic similarity calculation based on HowNet
ZHU Zhengyu, SUN Junhua
2013, 33(08): 2276-2279.
Existing HowNet-based vocabulary semantic similarity calculation methods fail to give due attention to the linear feature of concept descriptions in the knowledge database markup language. To resolve this shortcoming, an improved vocabulary semantic similarity calculation method was proposed. Firstly, fully considering the linear relationship between sememes in the concept description formula, a position-related weight distribution strategy was proposed; then concept similarity was calculated by combining this strategy with maximum-weight matching on a bipartite graph. The experimental results show that, compared with the contrast method, the F-measure of text clustering using the improved method increases by 5% on average, verifying its rationality and validity.
New feature weight calculation method for short text
MA Wenwen, DENG Yigui
2013, 33(08): 2280-2282.
The inherently sparse features and unbalanced samples of short text make it difficult to apply the traditional weighting schemes of long text mechanically. To resolve this problem, a short-text feature weighting approach named Integrated Category (IC) was proposed. The approach combines the concepts of inverse document frequency and relevance frequency, and integrates the distribution of samples over the positive and negative categories. The experimental results show that, compared with other feature weighting methods, the micro-average and macro-average of this method are above 90%; it enhances the ability to distinguish sample categories in the negative category, and improves the precision and recall of short text categorization.
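The abstract builds on inverse document frequency and relevance frequency; for reference, their standard definitions (the exact IC combination is not given in the abstract, so this is background rather than the paper's formula):

$$\mathrm{idf}(t) = \log\frac{N}{n_t}, \qquad \mathrm{rf}(t) = \log_2\!\left(2 + \frac{a}{\max(1,\,c)}\right),$$

where $N$ is the number of documents, $n_t$ the number containing term $t$, and $a$ and $c$ the counts of positive- and negative-category documents containing $t$; rf rewards terms concentrated in the positive category.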
Evolutionary operant behavior learning model and its application to mobile robot obstacle avoidance
GAO Yuanyuan, ZHU Fan, SONG Hongjun
2013, 33(08): 2283-2288.
To solve the problem of poor self-adaptive ability in robot obstacle avoidance, an Evolutionary Operant Behavior Learning Model (EOBLM), combining the evolutionary ideas of the Genetic Algorithm (GA) with Operant Conditioning (OC) and Adaptive Heuristic Critic (AHC) learning, was proposed for mobile robot obstacle avoidance in unknown environments. The model is a modified version of the AHC learning architecture. The Adaptive Critic Element (ACE) network is a multi-layer feedforward network whose learning is enhanced by the TD(λ) algorithm and gradient descent, with a tropism mechanism designed as intrinsic motivation to direct the orientation of agent learning. The Adaptive Selection Element (ASE) network optimizes operant behavior to achieve the best mapping from state to action in two stages: first, GA learning searches for the optimal individual, using the information entropy obtained by OC learning as the fitness; second, OC learning selects the optimal operant behavior within that individual and obtains new information entropy. The obstacle avoidance experiments show that the method endows the mobile robot with the ability to learn obstacle avoidance actively for path planning through constant interaction with the environment, and that, compared with the traditional AHC learning algorithm, the proposed model has better self-learning and self-adaptive abilities.
Improved RRT-Connect path planning algorithm for biped robot
MO Dongcheng, LIU Guodong
2013, 33(08): 2289-2292.
When the configuration space contains many narrow passages, the Rapidly-exploring Random Tree (RRT) algorithm can hardly find a path. To deal with this problem, an improved RRT-Connect algorithm was presented. An improved bridge test was employed to identify and sample narrow passages, making their connectivity easier to capture, and combining RRT-Connect with an anytime algorithm reduced the path cost markedly. Each algorithm was run 100 times: compared with RRT-Connect, the number of successes of the improved algorithm increased from 34 to 93, and the path planning time decreased from 9.3 s to 4.2 s. The biped robot simulation results demonstrate that the algorithm obtains the optimal path through narrow passages while improving efficiency.
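The abstract's improved bridge test is not detailed; for orientation, a minimal sketch of the classic 2D bridge test it builds on (the interface names are assumptions):

    import math, random

    def bridge_test_sample(in_collision, bounds, bridge_len=0.5, tries=1000):
        """Classic bridge test for narrow-passage sampling: if two nearby
        points are both in collision but their midpoint is free, the
        midpoint likely lies inside a narrow passage.
        in_collision(x, y) -> bool is the environment's collision checker."""
        (xmin, xmax), (ymin, ymax) = bounds
        for _ in range(tries):
            px = random.uniform(xmin, xmax)
            py = random.uniform(ymin, ymax)
            if not in_collision(px, py):
                continue                       # first endpoint must collide
            ang = random.uniform(0.0, 2.0 * math.pi)
            qx = px + bridge_len * math.cos(ang)
            qy = py + bridge_len * math.sin(ang)
            if not in_collision(qx, qy):
                continue                       # second endpoint must collide
            mx, my = (px + qx) / 2.0, (py + qy) / 2.0
            if not in_collision(mx, my):
                return (mx, my)                # midpoint of the "bridge"
        return None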
Multimedia processing technology
Web video classification based on bidirectional propagation of heterogeneous attributes
LI Qian DU Youtian XUE Jiao
2013, 33(08): 2293-2296.
Most Web video categorization research focuses only on a simple fusion of the information from the text and visual modalities. To address this, a Web video classification method based on bidirectional propagation of heterogeneous attributes was proposed. Firstly, the method adopted K-means clustering to divide key frames into multiple clusters and modeled videos at the frame level. For each cluster, a subset of key frames was randomly chosen to propagate their text information to the cluster; in the reverse direction, each key frame inherited the text explanation of its cluster. Finally, Web videos were classified with Support Vector Machine (SVM) classifiers on the text information extended by this dual propagation. The method integrates heterogeneous attributes well, and the experimental results demonstrate its effectiveness in Web video classification.
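As an illustration of the propagation scheme, here is a hedged scikit-learn sketch: key frames are clustered on visual features, the known per-frame text is pooled per cluster, and every frame inherits its cluster's pooled text, which then feeds a linear SVM. The names `frame_feats` and `frame_text`, the cluster count, and the tf-idf/LinearSVC pipeline are illustrative choices, not the paper's exact models.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def extend_text(frame_feats, frame_text, n_clusters=50):
    """frame -> cluster: pool the known text of each visual cluster;
    cluster -> frame: every key frame inherits its cluster's pooled text."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        np.asarray(frame_feats))
    pooled = {c: " ".join(t for t, l in zip(frame_text, labels)
                          if l == c and t)
              for c in range(n_clusters)}
    return [pooled[l] for l in labels]

# one extended-text document per video, then a linear SVM on tf-idf
video_clf = make_pipeline(TfidfVectorizer(), LinearSVC())
# video_clf.fit(video_docs, video_labels)
```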
New disparity estimation method for multiview video based on Mean Shift
HU Bo DAI Wanchang XIAO Zhijian WU Jianping HU Jie
2013, 33(08): 2297-2299.
Concerning the high computational complexity of disparity estimation in multiview video encoding, a new disparity estimation method based on Mean Shift was proposed. The relationship between disparity vectors and motion vectors in the spatio-temporal domain was analyzed to compute a predicted disparity vector. The initial search position for disparity matching served as the initial value of the Mean Shift iteration, yielding the optimal macroblock match in the reference frame. The experimental results show that the proposed method saves more than 94% of the encoding time with a negligible drop in rate distortion compared with the full search algorithm; compared with the fast search algorithm, it saves more than 10% of the encoding time while improving rate distortion.
Improved fundamental matrix computation and optimization method in parallel calibration of multi-camera system
LI Bin TAN Guanghua GAO Chunming
2013, 33(08): 2300-2305.
A multi-camera system has many cameras and a complex spatial distribution, so its calibration is inefficient. The calculation of the fundamental matrix and the nonlinear optimization are the key steps of camera calibration. Exploiting the independence of marked points in multi-camera calibration, the RANdom SAmple Consensus (RANSAC) based fundamental matrix estimation and the increment equation of the nonlinear optimization were improved, and a parallel calibration algorithm for multi-camera systems was proposed. By parallelizing the calibration process, the algorithm reduces the time complexity of calibration from O(n3) to O(n) compared with the traditional camera calibration algorithm. The experiments show that the parallel algorithm reduces the calibration time markedly without any loss in precision, thus realizing fast multi-camera system calibration.
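For the RANSAC-based fundamental matrix step, OpenCV exposes a ready-made routine; this usage sketch assumes matched marker points `pts1`/`pts2` and standard threshold/confidence settings, and does not include the paper's parallelization or improved increment equation.

```python
import cv2
import numpy as np

def fundamental_ransac(pts1, pts2):
    """Estimate the fundamental matrix from N x 2 arrays of matched marker
    positions in two cameras, rejecting outliers with RANSAC."""
    F, inlier_mask = cv2.findFundamentalMat(
        np.float32(pts1), np.float32(pts2),
        cv2.FM_RANSAC,
        ransacReprojThreshold=1.0,   # max epipolar distance for an inlier (px)
        confidence=0.99)
    if F is None:
        raise ValueError("fundamental matrix estimation failed")
    return F, inlier_mask.ravel().astype(bool)
```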
Method of no-reference quality assessment for blurred infrared image
DU Shaobo ZHANG Chong WANG Chao LIANG Xiaobin SUN Shibao
2013, 33(08): 2306-2309.
Image quality assessment aims to give a reasonable evaluation of the output of image processing algorithms, and No-Reference (NR) methods are needed in the many situations where the original reference image is unavailable. Structural analysis of infrared images shows that their uncertainty is fuzzy rather than random; therefore, the concept of fuzzy entropy was introduced into infrared image quality assessment, and a no-reference quality assessment method for blurred infrared images was proposed. The performance of the algorithm was compared and analyzed in terms of efficiency, consistency and accuracy. The simulation results show that the method has low computational complexity, fast operation and good consistency between subjective and objective evaluations, and that its overall performance is better than assessment based on Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
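A common way to realize the fuzzy-entropy idea is the De Luca-Termini measure with membership taken as the normalized gray level; the sketch below is one such assumption-laden instantiation, not the paper's exact definition.

```python
import numpy as np

def fuzzy_entropy(img):
    """De Luca-Termini fuzzy entropy of a grayscale image. Membership is
    the gray level normalized to [0, 1]; entropy is largest when pixels
    sit near 0.5 (maximally 'fuzzy'), as around blurred edges."""
    mu = img.astype(np.float64)
    mu = (mu - mu.min()) / (np.ptp(mu) + 1e-12)
    mu = np.clip(mu, 1e-12, 1.0 - 1e-12)
    s = -(mu * np.log2(mu) + (1.0 - mu) * np.log2(1.0 - mu))
    return float(s.mean())
```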
Image multilayer visual representation method based on latent Dirichlet allocation
LI Dongrui LI Mei
2013, 33(08): 2310-2312.
Multilayer visual representation is widely used in computer vision, but feed-forward multilayer representation methods have difficulty dealing with local ambiguities. An image multilayer visual representation method based on Latent Dirichlet Allocation (LDA), named LDA-IMVR, was proposed. It derives a recursive generative model of LDA by implementing a recursive probabilistic decomposition process. Meanwhile, it learns and infers all layers of the hierarchy together, and improves classification and learning performance through feedback. The approach was tested on the Caltech 101 dataset. The experimental results show that the proposed method improves object classification performance compared with related hierarchical approaches, and achieves better results in the visualization of learned components and image patches.
Corner detection algorithm using multi-chord curvature polynomial
WANG Junqing ZHANG Weichuan WANG Fuping CHEN Meirong
2013, 33(08): 2313-2316.
A multi-chord curvature polynomial algorithm for corner detection was proposed based on the Chord-to-Point Distance Accumulation (CPDA) technique and the curvature product. Firstly, the edge map is extracted with the Canny edge detector. Then, for each of several chord lengths, the multi-chord curvature polynomial combines the contour curvatures by summation or multiplication; this effectively enhances curvature extreme peaks while avoiding the smoothing-out of genuine corners. To reduce false and missed detections caused by experimentally tuned thresholds, a locally adaptive threshold was used to detect corners. Experiments compared the proposed detector with several recent corner detectors in terms of detection capability, localization accuracy and repeatability. The experimental results demonstrate that the proposed detector is highly robust: its detection accuracy increases by 14.5% and its average repeatability by 12.6%.
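The CPDA-style accumulation and the multi-chord product can be sketched as follows; the chord lengths `(10, 20, 30)` and the normalization are illustrative assumptions, and the locally adaptive thresholding step is omitted.

```python
import numpy as np

def cpda_curvature(contour, chord):
    """Chord-to-point distance accumulation (CPDA) for one chord length:
    for every point p on the closed contour, accumulate its perpendicular
    distance to each chord of the given length that spans it."""
    pts = np.asarray(contour, dtype=float)
    M = len(pts)
    h = np.zeros(M)
    for i in range(M):
        p = pts[i]
        for k in range(1, chord):
            a = pts[(i - k) % M]
            b = pts[(i - k + chord) % M]
            cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
            h[i] += abs(cross) / (np.hypot(*(b - a)) + 1e-12)
    return h

def multi_chord_response(contour, chords=(10, 20, 30)):
    """Normalize each chord's response, then multiply: true corners keep
    strong peaks while noise-induced peaks are attenuated."""
    resp = np.ones(len(contour))
    for L in chords:
        c = cpda_curvature(contour, L)
        resp *= c / (c.max() + 1e-12)
    return resp
```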
Texture-preserving shadow removal algorithm based on gradient domain
HUANG Wei FU Liqin WANG Chen
2013, 33(08): 2317-2319.
Accurate shadow boundary detection and texture preservation are two critical difficulties in shadow removal. To address them, a new shadow removal method based on the gradient domain was proposed. Firstly, the shadow boundary is detected approximately; then the gradients in the shadow interior and on the shadow boundary are modified separately to obtain a shadow-free gradient field, from which the information in the shadow region is recovered by solving the Poisson equation. Experimental results on several images indicate that the method removes shadows while preserving the textures in the shadow regions, and that it is not sensitive to the accuracy of the detected shadow boundary.
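Once the shadow-free gradient field (gx, gy) has been built, the recovery step amounts to solving a Poisson equation. The following sketch uses a plain Jacobi iteration with fixed borders as a simple stand-in for whatever solver the authors used, assuming gx and gy hold forward differences.

```python
import numpy as np

def poisson_reconstruct(gx, gy, init, iters=2000):
    """Recover an image whose gradient field approximates (gx, gy) by
    iteratively solving the Poisson equation  lap(I) = div(gx, gy)."""
    I = init.astype(np.float64).copy()
    # divergence of the target gradient field (backward differences)
    div = np.zeros_like(I)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    for _ in range(iters):
        # Jacobi update on interior pixels; border values stay fixed
        I[1:-1, 1:-1] = 0.25 * (I[:-2, 1:-1] + I[2:, 1:-1] +
                                I[1:-1, :-2] + I[1:-1, 2:] - div[1:-1, 1:-1])
    return I
```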
Non-rigid feature point matching algorithm using concave quadratic regularization term
LIAN Wei ZUO Junyi
2013, 33(08): 2320-2324.
Existing point matching algorithms that adopt l1-norm regularization terms lead to l1-norm optimization problems equivalent to linear programs, but the constraints do not satisfy the total unimodularity property, so the point correspondence solutions may be non-integral and post-processing is needed to round them; such processing introduces error and complicates the algorithms. To resolve this problem, based on the latest results on the robust point matching algorithm, a new regularization term was proposed. The new term is concave, and the objective function can be proved to have integral optimal solutions; therefore no post-processing is needed and the method is simpler to implement. The experimental results show that, compared with algorithms adopting l1-norm regularization terms, the proposed algorithm is more robust to various types of disturbances, particularly outliers, while its error is only half that of the compared algorithms.
Bi-level image sharpening method based on parallel computing
ZHANG Wei HE Xing HUO Yingxiang TENG Shaohua TENG Yi LI Rigui
2013, 33(08): 2325-2329.
A parallel bi-level image sharpening method in the Compute Unified Device Architecture (CUDA) environment was proposed to address the blurred boundaries and poor quality produced when enlarging low-resolution photos or images. A GPU-based parallel sharpening algorithm with two stages was designed and implemented: first, parallel linear interpolation was applied to the non-edge regions and to the sharpening of edge areas; second, an improved gradient method further optimized the image. The proposed method essentially eliminates the jagged edges of enlarged images, making them smoother, more natural and more legible. The experimental results show that the GPU-based parallel image sharpening algorithm is superior to currently popular algorithms in computational efficiency and image quality, and can be applied to image sharpening and photo enlargement.
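The first stage, linear interpolation, is embarrassingly parallel. The NumPy sketch below is a serial-language stand-in for the per-pixel CUDA kernel (every output pixel is computed independently), assuming a single-channel image; the gradient-based second stage is omitted.

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Vectorized bilinear interpolation of a grayscale image; each output
    pixel blends its four nearest input neighbors."""
    img = img.astype(np.float64)
    h, w = img.shape
    H, W = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```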
Image edge detection without threshold
HONG Liurong
2013, 33(08): 2330-2333.
Thresholds are often needed in image edge detection, but good threshold values are difficult to set for images with varying illumination. A new edge detection method without thresholds was proposed to solve this problem. Firstly, using the logarithm, an image is decomposed into high-frequency and low-frequency parts; the high-frequency image is obtained by subtracting the maximum-value-filtered image from the logarithmic image. Then, based on Stevens' power law from cognitive psychology, the high-frequency information is transformed into a visual psychological quantity. After the edges are thinned by non-minimum suppression, they are extracted by the Pillar K-means algorithm. The proposed method works well on images with varying illumination and needs no threshold. The experimental results prove its effectiveness and show that consistent edge values under varying illumination can be obtained by converting intensity to the psychological quantity.
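The logarithmic high-frequency extraction can be sketched in a few lines; the neighborhood `size=5` for the maximum-value filter is an assumed parameter, and the Stevens-law mapping and Pillar K-means stages are not shown.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def high_frequency_log(img, size=5):
    """Log-domain decomposition: the maximum-value-filtered log image
    serves as the slowly varying low-frequency part; subtracting it from
    the log image leaves the high-frequency component."""
    log_img = np.log1p(img.astype(np.float64))
    return log_img - maximum_filter(log_img, size=size)
```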
Image decomposition and edge detection based on wavelet transform and partial differential equation
ZHANG Lina LI Xiaolin
2013, 33(08): 2334-2336.
When decomposing a natural image containing texture, the edge information of the structure image is easily treated as texture, which blurs the edges of the structural image and makes the detected edges inaccurate. An image decomposition and edge detection method based on wavelet transform and Partial Differential Equations (PDE) was proposed. First, wavelet threshold transform is used to extract the texture information; then an improved PDE image decomposition model further decomposes the image and extracts the edges. The numerical experimental results show that this method improves the quality of image decomposition, makes the texture extraction thorough and the structure image piecewise smooth, and better protects the edges of the structure.
Learning optimal kernel mapping based on function space with dynamic parameters
TAN Zhiying CHEN Ying FENG Yong SONG Xiaobo
2013, 33(08): 2337-2340.
Kernel function methods can discover nonlinear distribution rules among images of high-precision prints, and the mining capacity is determined by the kernel function and its parameters, so selecting the kernel function is critical to the development and application of kernel function theory. For the intelligent inspection of prints, a new optimization-based kernel learning method was presented so that the kernel function method achieves optimal performance in high-precision printing. Unlike traditional calculation methods, the kernel parameter varies continuously in the kernel space, which expands the learning scope by one dimension. The experimental results show that the iterative algorithm based on this theoretical analysis needs only ten iterations to obtain the statistically optimal kernel function and its parameters, and the recovery error of the kernel function is statistically minimal.
Noise reduction of optimization weight based on energy of wavelet sub-band coefficients
WANG Kai LIU Jiajia YUAN Jianying JIANG Xiaoliang XIONG Ying LI Bailin
2013, 33(08): 2341-2345.
Concerning the key problem of selecting the threshold function in wavelet threshold denoising, and to address the discontinuity of conventional threshold functions and the large deviation of the estimated wavelet coefficients, a continuous adaptive threshold function over the whole wavelet domain was proposed. It fully considers the characteristics of the sub-band coefficients at different scales and takes the energy of each sub-band as the initial weight of the threshold function. The optimal weights are solved iteratively using the advance-and-retreat interval method and the golden section method, adaptively improving the approximation between the estimated and decomposed wavelet coefficients. The experimental results show that the proposed method efficiently reduces noise while preserving the edges and details of the image, and achieves a higher Peak Signal-to-Noise Ratio (PSNR) under different noise standard deviations.
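As a rough illustration of energy-aware sub-band thresholding with PyWavelets: each detail band gets a threshold scaled by its own energy. The fixed heuristic below replaces the paper's iterative weight optimization (advance-and-retreat plus golden section), so treat it as a sketch only; `k` and the energy scaling are assumptions.

```python
import numpy as np
import pywt

def subband_energy_denoise(img, wavelet="db4", levels=3, k=0.6):
    """Soft-threshold each detail sub-band with a threshold modulated by
    the sub-band's own energy (higher-energy bands are thresholded less)."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
    out = [coeffs[0]]                     # keep the approximation band
    for (cH, cV, cD) in coeffs[1:]:
        bands = []
        for c in (cH, cV, cD):
            sigma = np.median(np.abs(c)) / 0.6745        # noise estimate
            energy = np.mean(c ** 2)
            t = k * sigma * np.sqrt(2 * np.log(c.size)) / (1.0 + energy)
            bands.append(pywt.threshold(c, t, mode="soft"))
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```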
Color clustering method for high and low intensity stripes of color structured light system
LU Jun GAO Le ZHANG Xin
2013, 33(08): 2346-2350.
Color-coded structured light based on De Bruijn sequences is a spatial coding method characterized by rapid shape measurement, in which the recognition of the color stripes is a key issue. For a projected De Bruijn stripe pattern combining four colors with high and low intensities, the high- and low-intensity stripes were segmented by linearly filtering the second derivative of the L channel in the L*a*b* color space, and adaptive color clustering was designed using Principal Component Analysis (PCA) and K-means clustering, completing the recognition of the four colors at two intensities. The experimental results indicate that the proposed method is robust to factors such as ambient light and satisfies the demands of structured light vision measurement for high precision and simple information extraction.
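The clustering stage might look like the following scikit-learn sketch on (a*, b*) samples taken at stripe centers; PCA with two components and `n_colors=4` mirror the four-color code, but the segmentation by the L-channel second derivative is not shown, and the function names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cluster_stripe_colors(ab_samples, n_colors=4):
    """Cluster (a*, b*) chroma samples taken at stripe centers into the
    coded colors; PCA decorrelates the two channels before K-means."""
    feats = PCA(n_components=2).fit_transform(
        np.asarray(ab_samples, dtype=float))
    return KMeans(n_clusters=n_colors, n_init=10).fit_predict(feats)
```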
Classification of polarimetric SAR images based on quotient space granularity composition theory
HE Yin CHENG Jian
2013, 33(08): 2351-2354.
Incomplete utilization of polarimetric information is one of the important factors limiting polarimetric Synthetic Aperture Radar (SAR) image classification. To utilize the polarimetric information comprehensively, quotient space granularity composition theory, combined with multiple classifiers constructing different quotient spaces, was applied to polarimetric SAR classification. Firstly, different polarization decomposition methods provide different features, on which separate Support Vector Machine (SVM) classifiers are trained to classify the image. Secondly, the resulting quotient spaces are integrated by granularity composition theory into a finer-grained result so as to raise the classification accuracy. Finally, an experiment on an AIRSAR image was carried out; the result shows that the misclassification of targets is suppressed significantly and the classification accuracy of each class is improved.
Generalized fuzzy c-means algorithm with improved fuzzy partitions based on local membership and neighbor information
WANG Haijun LIU Ming
2013, 33(08): 2355-2358.
As an improved version of Fuzzy C-Means (FCM), the generalized fuzzy c-means algorithm with improved fuzzy partitions (GIFP-FCM) can reduce the influence of image noise on segmentation to some extent. However, since neighbor information is not taken into consideration, GIFP-FCM does not work well on heavily noised images. To solve this problem, a new objective function incorporating neighbor information and local membership was established, and every pixel's membership was recomputed with local membership and neighbor information to overcome the influence of noise. The experimental results on synthesized and brain images show that the proposed algorithm achieves the maximum partition coefficient and the minimum partition entropy, which demonstrates its effectiveness.
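In the spirit of neighbor-aware FCM variants (not the paper's exact objective), the sketch below mixes each pixel's gray value with its 3x3 local mean in both the distance and the center update; `alpha` weights the neighbor term and is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fcm_neighbor_segment(img, c=3, m=2.0, alpha=0.5, iters=50):
    """FCM on gray values with a neighbor term: distances to each center
    combine the pixel's own value and its local mean, pulling isolated
    noisy pixels toward their neighborhood."""
    x = img.astype(np.float64).ravel()
    xbar = uniform_filter(img.astype(np.float64), size=3).ravel()
    v = np.linspace(x.min(), x.max(), c)               # initial centers
    for _ in range(iters):
        d = ((x[:, None] - v[None, :]) ** 2
             + alpha * (xbar[:, None] - v[None, :]) ** 2)
        d = np.maximum(d, 1e-12)
        w = d ** (-1.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)           # membership update
        um = u ** m
        v = um.T @ (x + alpha * xbar) / ((1.0 + alpha) * um.sum(axis=0))
    return u.argmax(axis=1).reshape(img.shape)         # hard labels
```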
Modified 3D facial reconstruction algorithm based on a single photo
XIONG Ping LU Hua
2013, 33(08): 2359-2361.
The original 3D facial reconstruction algorithm cannot determine the facial contour and is also very complex. To resolve this, a 3D facial reconstruction algorithm from a single photo based on Shape From Shading (SFS) was proposed, which uses the level set method to determine the facial contour. Firstly, the initial contour of the face is located by the Active Shape Model (ASM) and used as the input of the level set method to segment the full face shape. Secondly, the facial image is converted to grayscale. Finally, a reference model is built by applying SFS to an image with known illumination; the input image is matched against the reference model, from which the illumination condition of the input image and the reconstruction result are computed. The experimental results show that, compared with the mesh-model-based algorithm, the proposed method preserves the face shape and obtains the reconstruction model in a short time.
Typical applications
Application of support vector regression in prediction of due date under uncertain assemble-to-order environment
SUN Dechang SHI Haibo LIU Chang
2013, 33(08): 2362-2365.
To quickly estimate an accurate and reliable due date from the order information and the features of the production system in an Assemble-To-Order (ATO) environment, a due date prediction model was constructed based on an analysis of how the uncertainty factors exert their influence. The model parameters comprise three parts: order release time, assembly cycle time and abnormal tardiness. The order release time is determined by the availability of materials and production capacity, while the assembly cycle time and abnormal tardiness are predicted with Support Vector Regression (SVR) on actual production history data. The case study shows that the predicted results are close to the actual due dates and can be used to guide delivery time negotiation for orders.
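The SVR part of the model maps directly onto scikit-learn; the feature set, kernel and hyperparameters below are placeholders, and the due date is assembled from the three parts named in the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def predict_due_date(X_hist, y_cycle, x_order, release_time, tardiness):
    """Due date = order release time + predicted assembly cycle time +
    abnormal-tardiness allowance. X_hist / y_cycle come from production
    history (e.g. order size, product mix, shop load vs. realized cycle
    times); hyperparameters are illustrative, not tuned."""
    model = make_pipeline(StandardScaler(),
                          SVR(kernel="rbf", C=10.0, epsilon=0.5))
    model.fit(X_hist, y_cycle)
    cycle = model.predict(np.atleast_2d(x_order))[0]
    return release_time + cycle + tardiness
```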
Weight modification accumulated epochs RAIM algorithm based on self-adaptive strategy
HUANG Guorong CHANG Cheng HAO Shunyi CHANG Yanan XU Gang
2013, 33(08): 2366-2369.
The conventional Receiver Autonomous Integrity Monitoring (RAIM) algorithm is limited when detecting slowly growing weak pseudo-range biases because of its long detection delay and high missed detection rate. A weight-modified accumulated-epochs parity vector RAIM algorithm based on a self-adaptive strategy was presented to solve this problem. In this algorithm, a weight factor derived from the fault degree of each single epoch adjusts the proportion of that epoch within the selected window so as to build more effective detection statistics, and the window size is determined from repeated simulation experiments. The simulation results show that the proposed method detects slowly growing weak pseudo-range biases better: compared with the accumulated-epochs and the conventional RAIM algorithms, the detection delay declines by 16.67% and 56.52% respectively.
Multi-character DFA-based high speed regular expression matching algorithm
HE Wei GUO Yunfei MO Han HU Hongchao
2013, 33(08): 2370-2374.
Traditional Deterministic Finite Automata (DFA) based regular expression matching can process only one character per cycle, which is the speed bottleneck. A new algorithm named Multi-Character DFA (MC-DFA) was proposed for high-throughput matching and precise positioning. It combines the one-character transitions of a traditional DFA to process multiple characters per cycle, and a new transition matrix compression algorithm was also proposed to reduce the redundancy MC-DFA introduces. The results demonstrate that MC-DFA improves throughput efficiently with acceptable memory usage: for a set of 300 regexes, the eight-character variant (8C-DFA) achieves a throughput of 7.88Gb/s with less than 6MB of memory and 19.24s of preprocessing time, better than traditional methods.
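Conceptually, MC-DFA lifts the one-character transition function to k-grams. This toy sketch shows the lifting and a k-characters-per-step scan; the |alphabet|^k table blow-up that motivates the paper's compression is left unaddressed, and trailing characters shorter than k are simply ignored here.

```python
from itertools import product

def k_step_dfa(delta, k):
    """Lift a one-character DFA, delta[state][char] -> state, to a DFA
    that consumes k characters per transition (each k-gram becomes one
    symbol). Table size grows as |alphabet|**k."""
    def run(q, gram):
        for ch in gram:
            q = delta[q][ch]
        return q
    alphabet = sorted(next(iter(delta.values())).keys())
    return {q: {gram: run(q, gram) for gram in product(alphabet, repeat=k)}
            for q in delta}

def match_k(delta_k, start, text, k):
    """Scan the input k characters at a time with the lifted DFA."""
    q = start
    for i in range(0, len(text) - len(text) % k, k):
        q = delta_k[q][tuple(text[i:i + k])]
    return q
```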
Similar string search algorithm based on Trie tree
LIU Lixia ZHANG Zhiqiang
2013, 33(08): 2375-2378.
Similar string search algorithms based on a Trie tree need to compute the active-node set of each node under an edit distance threshold, and the large amount of redundant computation leads to high time and space complexity. A new algorithm named New-Trie-Stack was proposed, which exploits the symmetry of active-node sets and dynamic programming to improve performance: it avoids the redundant cost of computing and storing active-node sets, and moreover prunes them. The experimental results show that the New-Trie-Stack algorithm has lower time and space complexity.
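The baseline trie traversal that New-Trie-Stack improves on can be written with one edit-distance DP row per trie node and threshold-based pruning; this sketch shows that baseline, not the paper's symmetry-based optimization.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.word = None            # set when a dictionary string ends here

def insert(root, s):
    node = root
    for ch in s:
        node = node.children.setdefault(ch, TrieNode())
    node.word = s

def similar_search(root, query, tau):
    """Depth-first walk keeping one edit-distance DP row per trie node; a
    node stays 'active' while min(row) <= tau, so whole subtrees whose
    prefixes already exceed the threshold are pruned."""
    results, n = [], len(query)
    stack = [(root, list(range(n + 1)))]
    while stack:
        node, prev = stack.pop()
        if node.word is not None and prev[n] <= tau:
            results.append((node.word, prev[n]))
        for ch, child in node.children.items():
            row = [prev[0] + 1]
            for j in range(1, n + 1):
                row.append(min(row[j - 1] + 1,                  # insertion
                               prev[j] + 1,                     # deletion
                               prev[j - 1] + (query[j - 1] != ch)))
            if min(row) <= tau:
                stack.append((child, row))
    return results
```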
Research on function shift in Boyer-Moore algorithm
HAN Guanghui ZENG Cheng
2013, 33(08): 2379-2382.
For the research and improvement of the Boyer-Moore (BM) algorithm and its variants, it is necessary to establish a strict formal theory of the shift function and its construction algorithm. A precise formal definition of shift was given; then the characteristic sets of the pattern suffixes and their minimum value function were introduced, and the construction of shift was described through these characteristic sets, thereby placing shift and its construction algorithm on a strict theoretical basis. Finally, a new construction algorithm for shift was presented based on the construction theorem and the iterative computation of the minimum value function, and it was proved to run in linear time and space. The theoretical analysis and computational results show that the algorithm is simpler and has lower computational complexity, making it more suitable for hardware implementation than several existing algorithms.
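For concreteness, the standard linear-time construction of the good-suffix shift table (the function the paper formalizes) looks like this; the paper's characteristic-set construction differs in derivation but targets the same table.

```python
def good_suffix_shift(pat):
    """Boyer-Moore good-suffix preprocessing: shift[j] is how far the
    pattern may slide when a mismatch occurs just before position j."""
    m = len(pat)
    shift = [0] * (m + 1)
    border = [0] * (m + 1)          # widest border of each suffix
    i, j = m, m + 1
    border[i] = j
    while i > 0:
        while j <= m and pat[i - 1] != pat[j - 1]:
            if shift[j] == 0:
                shift[j] = j - i    # case 1: the matched suffix reoccurs
            j = border[j]
        i -= 1
        j -= 1
        border[i] = j
    j = border[0]
    for i in range(m + 1):          # case 2: a prefix matches a suffix
        if shift[i] == 0:
            shift[i] = j
        if i == j:
            j = border[j]
    return shift
```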
Method for music soft-cutting and classification in rough-emotion area
LIN Jingdong WANG Wei LIAO Xiaoyong
2013, 33(08): 2383-2386.
Since music-light show control systems cannot automatically obtain the required music characteristic information, a rough-emotion model usable for music-light performances was presented, combined with the traditional Arousal-Valence model. In this model, the Mallat algorithm of wavelet analysis extracts the comparison items, and a strength-and-rhythm ratio judgment method performs two "soft-cutting" passes on the music; the classification of music within the rough-emotion model and the extraction of the music characteristic parameters are then achieved by the corresponding production rules of an expert system. The simulation results show that the method can effectively classify music clips according to musical emotion and extract the characteristic elements, which satisfy the time-domain precision requirements of the music-light show control system.
Iterative hard thresholding orthogonal projection under cosparsity analysis model
ZHANG Zongnian LI Jinhui HUANG Rentai YAN Jingwen
2013, 33(08): 2387-2389.
To reconstruct the original signal from a set of noisy linear measurements, the cosparse analysis model was analyzed and a hard thresholding orthogonal projection algorithm under this model was proposed. A cosparse orthogonal projection strategy improves the iterative process, and methods for selecting the iteration step size and the cosparsity level were given. A sufficient condition for convergence and the error bound between the reconstructed and original signals were provided. The experiments show that the CPU running time of the algorithm is only 19%, 11% and 10% of that of the AIHT, AL1 and GAP algorithms respectively, and that the average Peak Signal-to-Noise Ratio (PSNR) of the reconstructed signal is 0.89dB higher than that of AIHT, though slightly lower than that of AL1 and GAP. When the stated conditions are satisfied, the proposed algorithm reconstructs signals under Gaussian noise with high probability in much less running time, i.e., with faster convergence, than the current typical algorithms.
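A generic analysis-model IHT iteration with an orthogonal projection step might look as follows; the cosupport choice (the `ell` smallest analysis coefficients) and the fixed step size are simplifying assumptions rather than the paper's tuned selection rules.

```python
import numpy as np

def analysis_iht(y, M, Omega, ell, step=1.0, iters=100):
    """Sketch of analysis-model iterative hard thresholding with an
    orthogonal projection: after each gradient step the estimate is
    projected onto the subspace where the ell smallest analysis
    coefficients (rows of Omega @ x) are forced to zero."""
    x = M.T @ y                                   # simple initial estimate
    for _ in range(iters):
        x = x + step * M.T @ (y - M @ x)          # gradient step
        lam = np.argsort(np.abs(Omega @ x))[:ell] # tentative cosupport
        O = Omega[lam]
        # orthogonal projection onto {z : O @ z = 0}
        w = np.linalg.lstsq(O @ O.T, O @ x, rcond=None)[0]
        x = x - O.T @ w
    return x
```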
Inventory model for deteriorating items with time-dependent partial backlogging rate and inventory-level-dependent demand rate
HE Wei XU Fuyuan
2013, 33(08): 2390-2393.
A more general inventory model for deteriorating items with a partial backlogging rate was developed, in which the deterioration rate varies over time. When the inventory level is positive, the demand rate depends on the selling price; when it is negative, some customers are too impatient to wait, so sale opportunities are lost and a larger shortage leads to a larger loss. Through analysis, the necessary conditions for the existence and uniqueness of the optimal solution to the general model were derived. Finally, a case study was provided for the proposed model.
Extraction of fetal electrocardiogram signal utilizing blind source separation based on time-frequency distributions and wavelet packet denoising
HAN Liang PU Xiujuan
2013, 33(08): 2394-2396.
A new method utilizing Blind Source Separation based on Time-Frequency distributions (TFBSS) and wavelet packet denoising was proposed to extract the Fetal ElectroCardioGram (FECG) signal. The original eight ElectroCardioGram (ECG) signals recorded from the thoracic and abdominal areas of a pregnant woman were first separated into eight components by rearranged TFBSS. The Maternal ECG (MECG) and noise components were then set to zero and the remaining components were reconstructed with the mixing matrix; separating the reconstructed result again by rearranged TFBSS extracted the noisy FECG. Finally, baseline drift and noise in the FECG were suppressed by wavelet packet denoising. The FECG can be extracted even when the fetal QRS wave partly or entirely overlaps the maternal QRS wave in the abdominal composite signal. The experimental results show that a clear FECG can be extracted by the proposed method.
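The final clean-up stage is generic wavelet packet soft-thresholding. This PyWavelets sketch assumes a 1-D signal and a universal threshold estimated from the highest-frequency leaf, a common heuristic rather than the paper's exact setting; wavelet and level are placeholders.

```python
import numpy as np
import pywt

def wp_denoise(sig, wavelet="db6", level=4):
    """Wavelet-packet soft-threshold denoising of a 1-D ECG-like signal."""
    wp = pywt.WaveletPacket(sig, wavelet, maxlevel=level)
    leaves = wp.get_level(level, order="natural")
    # noise scale from the all-detail leaf, then the universal threshold
    sigma = np.median(np.abs(leaves[-1].data)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(sig)))
    for node in leaves:
        node.data = pywt.threshold(node.data, t, mode="soft")
    return wp.reconstruct(update=False)[:len(sig)]
```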
Delta operator sliding mode variable structure control system based on reaching law method
LIU Yunlong HAN Haixing TANG Shuhong
2013, 33(08): 2397-2400.
Concerning the chattering problem of Sliding Mode Variable Structure Control (SMVSC) for Delta operator systems with uncertainties, a sliding mode variable structure controller based on Delta operator discretization was designed. A modified discrete reaching law for Delta operator systems was proposed based on the arc tangent function; the system uncertainty was bounded by its least upper and greatest lower bounds, and the SMVSC law was then designed. The simulation results illustrate the feasibility and validity of the proposed method and show complete robustness to internal parameter perturbations and external disturbances. The SMVSC system based on the proposed reaching law not only reaches the region of the switching plane quickly in finite time, but also remains stable at the equilibrium state.
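The flavor of an arctan-smoothed discrete reaching law can be shown in a few lines; the gains and the exact form below are assumptions (a Gao-style reaching law with sign(s) replaced by (2/pi)·arctan(s)), not the paper's derived law.

```python
import numpy as np

def atan_reaching_step(s, eps=2.0, q=5.0, T=0.01):
    """One sampling step of a discrete reaching law in which the usual
    discontinuous sign(s) term is replaced by the smooth (2/pi)*arctan(s)
    to soften chattering:
        s_next = (1 - q*T) * s - eps * T * (2/pi) * arctan(s)"""
    return (1.0 - q * T) * s - eps * T * (2.0 / np.pi) * np.arctan(s)

# quick look at convergence of the sliding variable toward zero
s, trajectory = 5.0, []
for _ in range(200):
    trajectory.append(s)
    s = atan_reaching_step(s)
```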