Table of Contents
10 September 2015, Volume 35 Issue 9
Contract design for relay selection in cooperative communication
ZHAO Nan, WU Minghu, XIONG Wei, LIU Cong
2015, 35(9): 2415-2418. DOI: 10.11772/j.issn.1001-9081.2015.09.2415
Focusing on the selfishness of relay users and the asymmetry of network information in cognitive radio networks, a multi-user cooperative communication mechanism was proposed. Firstly, by modeling cooperative communication as a labor market, a modeling method for the multi-user contract-based relay selection framework was investigated under the symmetric network information scenario. Then, to avoid the adverse selection problem caused by the hidden information of the Secondary User (SU) before contract assignment, a contract-based relay selection model was proposed to incentivize secondary users' contributions and ensure cooperative communication. The experimental results show that, by hiring secondary users with better channel conditions or lower relay costs, the Primary User (PU) can obtain much more cooperative communication utility. The proposed multi-user contract-based cooperative communication framework offers new ideas for the efficient utilization of spectrum resources.
Cooperative relay selection method based on channel prediction
QIN Cailing, XIAO Kun
2015, 35(9): 2419-2423. DOI: 10.11772/j.issn.1001-9081.2015.09.2419
Common cooperative relay selection methods use outdated Channel State Information (CSI) to select relays, whereas accurate channel prediction can provide precise CSI for relay selection. Existing channel prediction methods often either cannot adapt to rapidly time-varying channels or suffer from high complexity. Therefore, a channel prediction method based on slope prediction was proposed and applied to relay selection in a cooperative communication system. The method combined first-order linear polynomial curve fitting with Finite Impulse Response (FIR) Wiener prediction: an FIR Wiener predictor was used to predict the slope of the channel's first-order linear function, the prediction range was cut into sufficiently small time slices, and continuous channel prediction was carried out within these small segments of time. On this basis, the Channel-Prediction based Relay Selection (CPRS) method was proposed and its performance was analyzed. Simulation results show that, compared with Outdated-Channel based Relay Selection (OCRS), CPRS reduces the system Bit Error Ratio (BER) by 13% to 63%, a significant performance improvement.
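As a concrete reading of the slope-prediction idea, the sketch below fits a first-order polynomial over each sliding window of past CSI samples, forecasts the next slope with an FIR Wiener predictor obtained from the Wiener-Hopf equations, and extrapolates the channel over one short time slice. Function names, window length and predictor order are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_fir_coeffs(x, order):
    """Solve the Wiener-Hopf equations R a = r for an FIR one-step predictor."""
    x = np.asarray(x, float)
    # biased autocorrelation estimates at lags 0..order
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order] / len(x)
    return solve_toeplitz(r[:order], r[1 : order + 1])

def predict_channel(csi_hist, t_hist, order=4, fit_len=8):
    """One-step channel prediction via slope prediction (illustrative sketch).

    csi_hist : past channel-gain samples; t_hist : their time stamps.
    """
    csi_hist = np.asarray(csi_hist, float)
    t_hist = np.asarray(t_hist, float)
    # slope sequence from first-order (linear) fits over sliding windows
    slopes = np.array([
        np.polyfit(t_hist[i : i + fit_len], csi_hist[i : i + fit_len], 1)[0]
        for i in range(len(csi_hist) - fit_len + 1)
    ])
    a = wiener_fir_coeffs(slopes, order)
    next_slope = a @ slopes[-order:][::-1]   # most recent slope first
    dt = t_hist[-1] - t_hist[-2]
    return csi_hist[-1] + next_slope * dt    # extrapolate within a short slice
```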
Nash bargaining based resource allocation in peer-to-peer network
ZHANG Qingfeng, WANG Sheng, LIAO Dan
2015, 35(9): 2424-2429. DOI: 10.11772/j.issn.1001-9081.2015.09.2424
To effectively overcome the free-rider problem in Peer-to-Peer (P2P) networks, a resource allocation scheme based on Nash bargaining that guarantees a minimum Quality of Service (QoS) was presented. Firstly, a system model with minimum QoS was built; the analysis indicated that a cooperative peer's bargaining power is positively related to its maximum contribution ability while a non-cooperative peer's bargaining power is negatively related to it, so cooperative peers can obtain more resources than non-cooperative peers. Secondly, it was demonstrated that a cooperative peer with larger relative bargaining power obtains more resources than the others. Lastly, simulations show that, under the minimum-QoS guarantee, the resource allocation of cooperative peers is related to the initial resource allocation, the Nash bargaining power and other factors: the initial resource allocation is positively related to the maximum contribution ability of the cooperative peers and decreases as the number of peers increases; the bargaining power decreases as the number of peers increases, and the resource allocation increases with the bargaining power. Compared with the classical average resource allocation mechanism, which also guarantees fairness, cooperators obtain more resources under the proposed mechanism. The simulation results verify that, within the minimum QoS, the greater a node's bargaining power, the more resources it obtains.
Congestion control algorithm in wireless sensor network based on fuzzy compressed sensing
GAN Fenghao, NIU Yugang, JIA Tinggang
2015, 35(9): 2430-2435. DOI: 10.11772/j.issn.1001-9081.2015.09.2430
To solve the congestion problem in Wireless Sensor Networks (WSN), a congestion control mechanism combining fuzzy control and Compressed Sensing (CS) was proposed. Firstly, compressed sensing was introduced into WSN congestion control and its congestion control effect was analyzed: it reduces redundant information and relieves network congestion. Secondly, since plain compressed sensing cannot adapt to the complex WSN environment, a fuzzy compressed sensing congestion control algorithm was designed, which dynamically adjusts the dimension of the observation matrix according to the congestion degree of the network, so that compressed sensing adapts better to the complex WSN environment. The mechanism can improve the network throughput by 10% to 50%, reduce the packet loss rate by 10% to 50%, and reduce the network delay by nearly 5 s. NS2 simulations show that the mechanism clearly alleviates WSN congestion.
Improvement on RaSMaLai in wireless sensor networks
SUN Xuemei, ZHANG Xinzhong, WANG Yaning, ZHANG Tianyuan
2015, 35(9): 2436-2439. DOI: 10.11772/j.issn.1001-9081.2015.09.2436
Two improvements were presented to avoid the ineffective-circulation and invalid-waiting-state problems of the Randomized Switching for Maximizing Lifetime (RaSMaLai) algorithm, and a new random switching algorithm, New Randomized Switching for Maximizing Lifetime (NRaSMaLai), was put forward. The first improvement performs an initialization inspection while traversing the tree nodes, preventing the tree from entering an invalid waiting state; the second inspects the states of the maximum-load node and all its descendants while updating the tree, avoiding ineffective circulation. NRaSMaLai balances the tree by increasing the load of the minimum-load node and its descendants. Simulation experiments show that these two methods can bring the tree to the balanced state, or at least closer to the presupposed state. When the sink node is located at the regional center, NRaSMaLai reduces the number of iterative steps needed to balance the tree to 1/5 of the original, with little oscillation. This is significant for the rapid convergence of the data collection tree and the extension of the network's lifetime.
Linked sensor data publishing system in semantic Web of things
CUAN Linna, SHI Yimin, LI Guanyu, WU Xuehua
2015, 35(9): 2440-2446. DOI: 10.11772/j.issn.1001-9081.2015.09.2440
To solve the problems that sensor network data differ in representation and transmission and that a single data source cannot satisfy application requirements, a method of publishing sensor network data as linked sensor data was proposed. Based on an analysis of linked sensor data publishing methods, firstly, ontology annotation was used to add semantic information to sensor network data; secondly, a method that queries linked data based on a set of concept groups with inheritance relations to find Related Web Datasets (RWD), and a heuristic-property-based graph similarity comparison method to build links between sensor network data and related datasets on the Web, were put forward; finally, a Linked Sensor Data Publishing System (LSDPS) was built. Compared with other classical linked sensor data publishing systems, LSDPS raises the accuracy of building Resource Description Framework (RDF) links among datasets by about 9%. By publishing sensor network data as linked sensor data, applications can not only understand and exploit sensor network data, but also obtain related resources through the RDF links among linked sensor datasets.
High-efficient community-based message transmission scheme in opportunistic network
YAO Yukun, YANG Jikai, LIU Wenhui
2015, 35(9): 2447-2452. DOI: 10.11772/j.issn.1001-9081.2015.09.2447
To deal with the problems in the Community-based Message Transmission Scheme in Opportunistic Social Network (OSNCMTS) that message-distribution tasks are backlogged at nodes inside a community and that active nodes are selected blindly for message transmission, a High-Efficient Community-based Message Transmission Scheme in opportunistic network (HECMTS) was proposed. In HECMTS, firstly, communities were divided by the Extremal Optimization (EO) algorithm and the corresponding community matrices were distributed to nodes; secondly, message copies were assigned based on the community matrices and the success rate of data packets reaching destination nodes; finally, active nodes' information was collected as they traveled back and forth between communities, and suitable nodes were then selected by querying this information to complete message transmission between communities. The simulation results show that, compared with OSNCMTS, HECMTS decreases routing overhead by at least 19% and average end-to-end delay by at least 16%.
Users' mobility analysis based on call detail record in two-dimensional space
SHI Lixing, HU Fangyu
2015, 35(9): 2453-2456. DOI: 10.11772/j.issn.1001-9081.2015.09.2453
Since recent studies on users' mobility based on Call Detail Records (CDR) mainly use one-dimensional metrics, such as the travel distance and the radius of gyration, which cannot exactly describe the scope of users' mobility, the Area of the Convex Hull Covering a user's daily Trajectory (ACHCT) was applied to investigate users' mobility scale in two-dimensional space, and the mobility vector was introduced to study the mobility of the crowd. Firstly, a method was designed to set up two-dimensional Cartesian coordinates based on latitude and longitude: the Mercator projection and the haversine formula were applied to calculate the bearing and distance between scattered points, from which the planar coordinates of the points were determined. Then, based on these coordinates, the convex hulls covering users' daily trajectories were calculated and the distribution of their areas was analyzed. Finally, the mobility vectors of agglomerated de-identified callers were accumulated in different time segments and their changes over a day were analyzed. The experimental results show that, within a scale of 180 km, the average deviations of bearing angle and distance calculated with the new coordinates are 0.037° and 0.102% compared with those calculated directly with the Mercator projection and the haversine formula; the new coordinates thus preserve the distances and bearings between points well. ACHCT follows a power-law distribution and is strongly correlated with the travel distance. The changes of the crowd's mobility vector reveal the tidal phenomenon of the crowd's travel and provide a new way to discover the correlation between the areas where users reside and those nearby.
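The coordinate construction and the ACHCT metric can be sketched as follows: each fix of a user's day is placed in a local plane via its haversine distance and initial bearing from the first fix, and the convex hull area of the resulting points is computed with SciPy. This is a minimal sketch with illustrative names, assuming at least three non-collinear fixes per day.

```python
import numpy as np
from scipy.spatial import ConvexHull

R_EARTH = 6371.0  # mean Earth radius, km

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * R_EARTH * np.arcsin(np.sqrt(a))

def bearing_rad(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dl = np.radians(lon2 - lon1)
    y = np.sin(dl) * np.cos(p2)
    x = np.cos(p1) * np.sin(p2) - np.sin(p1) * np.cos(p2) * np.cos(dl)
    return np.arctan2(y, x)

def daily_trajectory_area(lats, lons):
    """ACHCT: convex-hull area of one user's daily cell-tower fixes, in km^2."""
    d = haversine_km(lats[0], lons[0], np.asarray(lats), np.asarray(lons))
    b = bearing_rad(lats[0], lons[0], np.asarray(lats), np.asarray(lons))
    pts = np.column_stack([d * np.sin(b), d * np.cos(b)])  # east, north (km)
    if len(pts) < 3:
        return 0.0
    return ConvexHull(pts).volume  # in 2-D, .volume is the hull area
```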
Node mobility model based on user interest similarity
GAO Yuan, WANG Shumin, SUN Jianfei
2015, 35(9): 2457-2460. DOI: 10.11772/j.issn.1001-9081.2015.09.2457
According to the driving effect of people's social relations and interests on the social activities of nodes, a mobility model based on user interest similarity was presented. The interest degree of a node in the activities was described with an interest probability matrix, and the Pearson correlation coefficient was used to compute groups of nodes with similar interests. Simulation results show that the complementary cumulative density functions of inter-contact time and contact duration within a certain time approximately follow a power-law distribution, which is more consistent with the curves obtained from statistics of real data sets. Additionally, strong spatio-temporal regularity is observed when nodes take part in activities in the evening.
Channel estimation algorithm for orthogonal frequency division multiplexing based on wavelet de-noising and discrete cosine transform
XIE Bin, LE Honghao, CHEN Bo
2015, 35(9): 2461-2464. DOI: 10.11772/j.issn.1001-9081.2015.09.2461
In view of the problem that the traditional channel estimation algorithm based on Discrete Cosine Transform (DCT) does not eliminate the noise within the cyclic prefix length, a new Orthogonal Frequency Division Multiplexing (OFDM) channel estimation method based on wavelet de-noising and DCT interpolation was proposed. First, Least Squares (LS) was used to obtain a preliminary channel estimate from the received pilot signal; then the LS estimate was processed by discrete wavelet threshold de-noising; finally, the noise within the cyclic prefix length was handled again by DCT interpolation to further reduce its influence. In simulations on the Matlab 2012 platform, compared with the traditional DCT-based channel estimation algorithm, the proposed algorithm improved the Signal-to-Noise Ratio (SNR) performance by about 1 dB at the same Bit Error Rate (BER), and by about 2 dB at the same Mean Square Error (MSE). The simulation results show that the proposed algorithm not only reduces the influence of Additive White Gaussian Noise (AWGN) but also improves the accuracy of channel estimation effectively, outperforming the DCT-based channel estimation algorithm.
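The three-stage pipeline can be sketched as follows with PyWavelets and SciPy; the wavelet choice, the universal threshold and the rule of zeroing DCT coefficients beyond the cyclic-prefix length are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
import pywt
from scipy.fft import dct, idct

def _reim(f, x):
    """Apply a real-valued 1-D transform to a complex array, I/Q separately."""
    x = np.asarray(x)
    return f(x.real) + 1j * f(x.imag)

def wavelet_denoise(h_ls, wavelet="db4", level=2):
    """Discrete-wavelet soft-threshold de-noising of the LS estimate."""
    def dn(x):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(len(x)))          # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]
    return _reim(dn, h_ls)

def dct_interpolate(h_pilot, n_fft, n_cp):
    """DCT interpolation that also suppresses energy beyond the CP length."""
    def ip(x):
        c = dct(x, norm="ortho")
        c_full = np.zeros(n_fft)
        n_keep = min(n_cp, len(c))   # channel energy lives within the CP
        c_full[:n_keep] = c[:n_keep] * np.sqrt(n_fft / len(x))
        return idct(c_full, norm="ortho")
    return _reim(ip, h_pilot)

def estimate_channel(y_pilot, x_pilot, n_fft=1024, n_cp=64):
    """LS at pilots -> wavelet de-noising -> DCT interpolation to all subcarriers."""
    h_ls = np.asarray(y_pilot) / np.asarray(x_pilot)
    return dct_interpolate(wavelet_denoise(h_ls), n_fft, n_cp)
```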
Indoor positioning algorithm with dynamic environment attenuation based on particle filtering
LI Yinuo, XIAO Ruliang, NI Youcong, SU Xiaomin, DU Xin, CAI Shengzhen
2015, 35(9): 2465-2469. DOI: 10.11772/j.issn.1001-9081.2015.09.2465
Since nodes at the same distance but different positions in a complex environment degrade the accuracy and stability of indoor positioning, a new indoor positioning algorithm with a Dynamic Environment Attenuation Factor (DEAF) was proposed. The algorithm built a DEAF model and redefined how the factor's value is assigned. Firstly, particle filtering was used to smooth the Received Signal Strength Indication (RSSI); then the DEAF model was used to calculate the estimated distance to the node; finally, trilateration was used to obtain the position of the target node. Comparative experiments with several filtering models show that the DEAF model combined with particle filtering handles environmental differences very well. The algorithm reduces the mean error to about 0.68 m, achieving higher positioning accuracy and good stability.
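A sketch of the three stages under simplifying assumptions: a random-walk model drives the bootstrap particle filter, and the attenuation factor is a fixed parameter rather than dynamically assigned, since the DEAF assignment rule is the paper's own. All names and defaults are illustrative.

```python
import numpy as np

def pf_smooth_rssi(rssi_seq, n=500, q=1.0, r=4.0, seed=0):
    """Bootstrap particle filter smoothing a noisy RSSI stream (dBm)."""
    rng = np.random.default_rng(seed)
    parts = rssi_seq[0] + rng.normal(0.0, r, n)
    out = []
    for z in rssi_seq:
        parts = parts + rng.normal(0.0, q, n)        # random-walk propagation
        w = np.exp(-0.5 * ((z - parts) / r) ** 2)    # Gaussian likelihood
        w /= w.sum()
        parts = rng.choice(parts, size=n, p=w)       # multinomial resampling
        out.append(parts.mean())
    return np.array(out)

def rssi_to_distance(rssi, rssi_1m=-40.0, n_env=2.5):
    """Log-distance path loss: RSSI(d) = RSSI(1 m) - 10 * n * log10(d).

    n_env plays the role of the environment attenuation factor.
    """
    return 10.0 ** ((rssi_1m - rssi) / (10.0 * n_env))

def trilaterate(anchors, dists):
    """Linearized least-squares trilateration from >= 3 anchors."""
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    # subtract the first range equation from the others to linearize
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         + dists[0] ** 2 - dists[1:] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```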
Evolution of diffuse multipath modeling scheme for indoor wireless optical local area network
XU Chun
2015, 35(9): 2470-2475. DOI: 10.11772/j.issn.1001-9081.2015.09.2470
To overcome the inability of the conventional single-source-oriented modeling scheme to satisfy the needs of Light Emitting Diode (LED) array-based wireless optical access networks, two evolved editions of this scheme were proposed. The first sufficiently covers the diffuse contribution from all sources; building on it, the second more precisely includes the relative delays of all diffuse components. The two proposed schemes avoid the overestimation of baseband transmission characteristics made by the current evolution scheme. Numerical results indicate that, once the diffuse path components of all sources are sufficiently characterized, the overestimation in baseband transmission bandwidth and transmission gain can be up to 50 MHz and 15 dB respectively. Moreover, the two schemes can quantify the correlation between the transmission characteristics and the receiver position, reflectance or receiver Field Of View (FOV), overcoming the limitation of the current evolution edition in characterizing such correlations.
MapReduce performance optimization based on anomaly detection model in heterogeneous cloud environment
HOU Jialin, WANG Jiajun, NIE Hongyu
2015, 35(9): 2476-2481. DOI: 10.11772/j.issn.1001-9081.2015.09.2476
To effectively select straggler machines, an anomaly detection model commonly adopted in failure analysis was applied. Firstly, an anomaly detection algorithm was employed to detect slow nodes in the cluster. Secondly, the task assignment and speculative execution algorithms were improved so that no new tasks were assigned to slow nodes; these tasks were assigned instead to normal nodes with idle slots. In the improved speculative execution, tasks on slow nodes were, for the first time, migrated to normal nodes within the same network segment, since data transfer is physically faster inside one segment. The experimental results demonstrate that straggler machines are quickly detected by the anomaly detection algorithm. Compared with the algorithms in Hadoop-LATE, 17% of the processing time is saved for the same amount of tasks, which indicates that the proposed algorithm is better suited to improving overall cluster performance.
Parallel particle swarm optimization algorithm in multicore computing environment
HE Li, LIU Xiaodong, LI Songyang, ZHANG Qian
2015, 35(9): 2482-2485. DOI: 10.11772/j.issn.1001-9081.2015.09.2482
Aiming at the problem that serial Particle Swarm Optimization (PSO) algorithms are time-consuming on large tasks, a novel shared parallel PSO (Shared-PSO) algorithm was proposed, which exploits multi-core processing power to reduce the time needed to obtain a solution. To facilitate communication among particles, a shared area was set up and a random strategy was applied to exchange particles. Thanks to the generality of its algorithm flow, multiple serial PSO variants can be plugged in to update particle information. Shared-PSO was evaluated on the standard optimization test set of the 2014 Congress on Evolutionary Computation (CEC 2014). The experimental results show that the execution time of Shared-PSO is a quarter of that of serial PSO. The proposed algorithm effectively improves the execution efficiency of serial PSO and expands the range of applications of PSO.
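A sketch of the shared-area idea using Python's multiprocessing (an epoch-synchronous island model, which may differ from the paper's exact exchange scheme): sub-swarms evolve in parallel, deposit their best particles into a shared area after each epoch, and a random strategy redistributes them. The test function and all parameters are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def sphere(x):
    return float(np.sum(x * x))

def run_subswarm(args):
    """Evolve one sub-swarm for a few iterations with the standard PSO update."""
    pos, vel, seeds_in, iters, rng_seed = args
    rng = np.random.default_rng(rng_seed)
    if seeds_in is not None:                 # inject particles drawn from
        pos[: len(seeds_in)] = seeds_in      # the shared exchange area
    pbest = pos.copy()
    pbest_f = np.apply_along_axis(sphere, 1, pos)
    for _ in range(iters):
        g = pbest[np.argmin(pbest_f)]
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        f = np.apply_along_axis(sphere, 1, pos)
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
    return pos, vel, pbest[np.argmin(pbest_f)], pbest_f.min()

if __name__ == "__main__":
    n_swarms, n_particles, dim = 4, 20, 10
    rng = np.random.default_rng(1)
    swarms = [(rng.uniform(-5, 5, (n_particles, dim)),
               np.zeros((n_particles, dim)), None, 50, s)
              for s in range(n_swarms)]
    with Pool(n_swarms) as pool:
        for epoch in range(10):
            results = pool.map(run_subswarm, swarms)
            shared = np.array([r[2] for r in results])  # deposit each best
            rng.shuffle(shared)                          # random switching
            swarms = [(r[0], r[1], shared[:2], 20, 100 * epoch + i)
                      for i, r in enumerate(results)]
    print("best f:", min(r[3] for r in results))
```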
Development of medical image registration technology using GPU
ZHA Shanshan, WANG Yuanjun, NIE Shengdong
2015, 35(9): 2486-2491. DOI: 10.11772/j.issn.1001-9081.2015.09.2486
Current medical image registration technology cannot meet the real-time requirements of clinical diagnosis and treatment. Graphics Processing Unit (GPU) accelerated medical image registration was reviewed and discussed for this problem. The paper summarized GPU general-purpose computation, surveyed current GPU-accelerated medical image registration techniques along the essential framework of medical image registration, and implemented registration experiments between Positron Emission Tomography (PET) and Computed Tomography (CT) images on Central Processing Unit (CPU) and GPU platforms respectively. The Normalized Mutual Information (NMI) value of the GPU-accelerated registration based on Free-Form Deformation (FFD) and NMI was slightly smaller than that of the CPU method, but the registration was 12 times as fast. While maintaining high registration accuracy, GPU-accelerated medical image registration algorithms thus gain substantially in registration speed.
Internal model control based automatic tuning method for PID controller
XIA Hao, LI Liuliu
2015, 35(9): 2492-2496. DOI: 10.11772/j.issn.1001-9081.2015.09.2492
In order to solve the tuning problem of PID controller parameters, an automatic tuning method based on the Internal Model Control (IMC) algorithm and system identification was proposed. An identification method based on the open-loop unit step response was employed: the input/output data during the transient process were used to obtain a First Order Plus Dead Time (FOPDT) or Second Order Plus Dead Time (SOPDT) model, and the PID controller parameters were then determined by the IMC algorithm. To determine the IMC filter parameter λ, two parameters, γ and σ, were introduced, and λ was determined from the relationship between the squared output error and these two parameters. In simulation experiments, compared with the traditional IMC-based PID controller, the Integral Absolute Error (IAE) index was improved by about 20% without input disturbance and by about 10% with disturbance. The simulation results show that, while ensuring system robustness, the proposed algorithm not only speeds up the transient response but also effectively restrains the overshoot of the system output.
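The λ-selection rule via γ and σ is specific to the paper and not reproduced here; the sketch below shows the surrounding machinery under stated assumptions: a two-point (28.3%/63.2%) FOPDT fit of a monotone open-loop step response, followed by the widely used IMC/SIMC PI rule as a stand-in, with λ left as an input. Function and field names are illustrative.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class FOPDT:
    K: float      # process gain
    tau: float    # time constant
    theta: float  # dead time

def identify_fopdt(t, y, u_step):
    """Two-point (28.3% / 63.2%) fit of an open-loop step response.

    Assumes a monotone response; t, y are the recorded transient,
    u_step is the input step size.
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    dy = y[-1] - y[0]
    K = dy / u_step
    t28 = t[np.searchsorted(y - y[0], 0.283 * dy)]
    t63 = t[np.searchsorted(y - y[0], 0.632 * dy)]
    tau = 1.5 * (t63 - t28)
    theta = max(t63 - tau, 0.0)
    return FOPDT(K, tau, theta)

def imc_pi(m, lam):
    """IMC/SIMC PI tuning from the FOPDT model; lam is the IMC filter parameter."""
    Kc = m.tau / (m.K * (lam + m.theta))
    Ti = min(m.tau, 4.0 * (lam + m.theta))
    return Kc, Ti
```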
Cut-GAR: solution to determine cut-off point in cloud storage system
SHAO Tian, CHEN Guangsheng, JING Weipeng
2015, 35(9): 2497-2502. DOI: 10.11772/j.issn.1001-9081.2015.09.2497
Considering the poor performance caused by the vague definition of small files in the Hadoop Distributed File System (HDFS), Cut-off Point via Grey Relational Analysis (Cut-GAR) was presented to find the cut-off point between small and large files. The relationships between file size and the consumed NameNode memory (M), the speed in MB of Uploaded Files per Second (MUFS), and the speed in MB of Accessed Files per Second (MAFS) were analyzed, and the proper file sizes according to the three factors were set as FM, FMUFS and FMAFS respectively. Grey relational analysis was then used to weight the impact of the three factors on file size, with file size as the evaluated object and M, MUFS and MAFS as the evaluated indexes, yielding the weight of each evaluated index and the relational degree between indexes and object. The sum of FM, FMUFS and FMAFS, each multiplied by the corresponding index weight, was taken as the approximate optimal cut-off point. Experimental results demonstrate that Cut-GAR achieves a balance among M, MUFS and MAFS, improving the performance of small-file processing.
Improved Dijkstra algorithm with traffic rule constraints
REN Pengfei, QIN Guihe, DONG Jinnan, LI Bin, ZHENG Xiaotian
2015, 35(9): 2503-2507. DOI: 10.11772/j.issn.1001-9081.2015.09.2503
The traditional Dijkstra algorithm cannot handle transportation networks with traffic rule constraints during path planning. To solve this problem, based on previous network models and algorithms, an improved Dijkstra algorithm with traffic rule constraints was proposed. It added new "to-be-selected" and "renewable" states to the nodes to handle nodes with traffic rule constraints. Meanwhile, the grandfather node was introduced, so that a triple of information was generated for each node in the traffic network; using this as the backtracking basis, the shortest path from the starting node to the destination node can be obtained. The algorithm is applicable to transportation networks with traffic rule constraints while keeping low complexity. Its correctness was verified through theoretical analysis, and its effectiveness through experiments on the actual traffic network of Chaoyang District, Changchun, with randomly added traffic rule constraints.
Simultaneous iterative hard thresholding for joint sparse recovery based on redundant dictionaries
CHEN Peng, MENG Chen, WANG Cheng, CHEN Hua
2015, 35(9): 2508-2512. DOI: 10.11772/j.issn.1001-9081.2015.09.2508
For improving the recovery performance of signals sampled by a sub-Nyquist sampling system with Compressed Sensing (CS), a block Simultaneous Iterative Hard Thresholding (SIHT) recovery algorithm for the joint sparse model based on ε-closure was proposed. Firstly, the CS synthesis model for the Multiple Measurement Vectors (MMV) of the sampling system was analyzed, and the concepts of ε-coherence and the Restricted Isometry Property (RIP) were presented. Then, according to the block coherence of redundant dictionaries, the SIHT algorithm was improved by optimizing the support sets in the iterations. In addition, the iterative convergence constant was given and the convergence property of the algorithm was analyzed. Simulation experiments show that, compared with the traditional method, the new algorithm achieves a recovery success rate of 100% with enough sampling channels, improves the noise suppression ability by 7 dB to 9 dB, and cuts the total execution time by at least 37.9%, with a higher convergence speed.
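A sketch of the plain simultaneous-IHT baseline for the MMV model (the paper's block structure, redundant-dictionary coherence and ε-closure refinements are omitted): each iteration takes a gradient step and keeps the s rows of X with the largest l2 norms, enforcing a common support across all measurement vectors.

```python
import numpy as np

def siht_mmv(A, Y, s, mu=None, iters=200):
    """Simultaneous IHT for the MMV joint-sparse model Y = A @ X."""
    m, n = A.shape
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from spectral norm
    X = np.zeros((n, Y.shape[1]))
    for _ in range(iters):
        X = X + mu * A.T @ (Y - A @ X)           # gradient step
        row_norms = np.linalg.norm(X, axis=1)
        keep = np.argsort(row_norms)[-s:]        # joint (row-)support selection
        mask = np.zeros(n, bool)
        mask[keep] = True
        X[~mask] = 0.0
    return X

# toy check: recover a 5-row-sparse X from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
X_true = np.zeros((100, 3))
X_true[rng.choice(100, 5, False)] = rng.standard_normal((5, 3))
X_hat = siht_mmv(A, A @ X_true, s=5)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```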
Intrusion detection based on multiple layer extreme learning machine
KANG Songlin, LIU Le, LIU Chuchu, LIAO Qin
2015, 35(9): 2513-2518. DOI: 10.11772/j.issn.1001-9081.2015.09.2513
In view of the high dimensionality and large volume of data, the difficulty of obtaining labeled samples, and the problems of feature expression and training in applying neural networks to intrusion detection, an intrusion detection method based on the Multiple Layer Extreme Learning Machine (ML-ELM) was proposed. Firstly, the highest-level abstract features of the detection samples were extracted by a multi-layer network structure and deep learning, and the characteristics of the intrusion detection data were expressed by singular values. Secondly, the Extreme Learning Machine (ELM) was used to establish the classification model of the intrusion detection data. The difficulty of obtaining labeled samples was addressed by layer-by-layer unsupervised learning. Finally, the KDD 99 dataset was used to test the performance of ML-ELM. The experimental results show that the proposed model improves detection accuracy, with a false negative rate as low as 0.48%, and detects more than 6 times faster than other deep detection methods. Moreover, the detection accuracy remains above 85% even with few labeled samples, and the detection rates of U2R and R2L attacks are improved by the multi-layer network structure. The method combines the advantages of deep learning and unsupervised learning; it expresses high-dimensional, large-volume data well with fewer parameters and performs well in both intrusion detection rate and feature expression.
Anomaly detection model for distributed services based on danger theory
LI Jinmin, LI Tao, XU Kai
2015, 35(9): 2519-2521. DOI: 10.11772/j.issn.1001-9081.2015.09.2519
Concerning the problems that the massive behavior data of a large number of services makes anomaly detection inefficient and that dynamic service composition introduces uncertainty among services in a distributed environment, a new distributed service anomaly detection model based on danger theory was proposed. Firstly, inspired by the way artificial immunity recognizes abnormality in biological processes, differentiation was used to describe the variation of the massive service behavior data, and a characteristic triple was constructed to detect the abnormal source. Then, guided by the idea of the cloud model, the uncertainty among services was resolved by constructing service status clouds and computing the degree of membership between services, from which the danger zone was calculated. Finally, simulation experiments on a student course-selection service were carried out. The results show that the model not only detects abnormal services dynamically but also describes the dependencies between services accurately, improving anomaly detection efficiency and verifying the validity and effectiveness of the model.
Fully secure hierarchical identity-based online/offline encryption
WANG Zhanjun, MA Haiying, WANG Jinhua
2015, 35(9): 2522-2526. DOI: 10.11772/j.issn.1001-9081.2015.09.2522
Since the encryption algorithm of Hierarchical Identity-Based Encryption (HIBE) is unsuitable for lightweight devices, a fully secure Hierarchical Identity-Based Online/Offline Encryption (HIBOOE) scheme was proposed. The scheme introduced online/offline cryptography into HIBE and split the encryption algorithm into two stages: the offline stage preprocesses most of the heavy computation before the message and recipient are known, after which the online stage efficiently produces the ciphertext once the recipient's identity and the message are given. The experimental results show that the proposed scheme greatly improves encryption efficiency and suits power-constrained devices; moreover, it is proven fully secure.
Wavelet domain digital watermarking method based on fruit fly optimization algorithm
XIAO Zhenjiu, SUN Jian, WANG Yongbin, JIANG Zhengtao
2015, 35(9): 2527-2530. DOI: 10.11772/j.issn.1001-9081.2015.09.2527
To balance the transparency and robustness of watermarking, a wavelet-domain digital watermarking method based on the Fruit Fly Optimization Algorithm (FOA) was proposed. The method applied FOA to Discrete Wavelet Transform (DWT) based watermarking, using the swarm intelligence algorithm to resolve the contradiction between transparency and robustness. To protect the copyright information of digital images, the original image was decomposed by a two-dimensional DWT, and the watermark image, scrambled by the Arnold transform, was embedded into the wavelet coefficients of the vertical sub-band, which preserved image quality. During optimization, the scaling factor was continuously trained and updated by FOA. In addition, a new algorithmic framework was proposed, which evaluates the scaling factor by its prediction feasibility in the DWT domain. The experimental results show that the proposed algorithm achieves high transparency and robustness against attacks, with watermark similarity above 0.95, which is 10% higher under geometric attacks such as rotation and shearing than some existing swarm-intelligence-based watermarking methods.
Robust video watermarking algorithm for high efficiency video coding based on texture direction
ZHANG Minghui, FENG Gui
2015, 35(9): 2531-2534. DOI: 10.11772/j.issn.1001-9081.2015.09.2531
Considering the low robustness of existing video watermarking algorithms for High Efficiency Video Coding (HEVC), a robust video watermarking algorithm based on texture direction was proposed. Depending on the watermark bit value, the intra-frame angular prediction modes were divided into horizontal and vertical modes, and the texture direction of each prediction unit was calculated during compression coding when the splitting mode was N×N. When the texture direction was consistent with the direction represented by the watermark bit, the 33 angular prediction modes of the current prediction unit were truncated to the horizontal or vertical direction prediction modes, and the best prediction mode, which decides whether the watermark is embedded, was selected on the basis of the rate-distortion cost function. The locations of embedded watermark bits were recorded as a key for extraction at the decoding side. The experimental results show that the proposed algorithm causes little bitrate increase and little video distortion, and the Bit Error Rate (BER) remains low after noise, filtering and re-encoding attacks, so the proposed algorithm can be used to protect video copyright.
Public sensitive watermarking algorithm with weighted multilevel wavelet coefficient mean and quantization
ZHU Ying, SHAO Liping
2015, 35(9): 2535-2541. DOI: 10.11772/j.issn.1001-9081.2015.09.2535
Conventional watermarking algorithms usually pay more attention to the visual quality of the embedded carrier while ignoring the security of the watermark. Although some methods provide watermark encryption procedures, they usually embed watermarks at fixed positions, which are prone to attack, and watermarking algorithms based on the parameterized wavelet transform are too sensitive to be applied in practice. To address these problems, a public sensitive watermarking algorithm with weighted multilevel wavelet coefficient mean and quantization was proposed. Firstly, the Message Digest Algorithm 5 (MD5) value of the cover image, the user keys and the initial parameters were bound to the Logistic map, which was used to encrypt the watermark and select wavelet coefficients in different decomposition levels; secondly, the weights of wavelet coefficients in different levels were estimated from the mean absolute variation of the coefficients before and after Joint Photographic Experts Group (JPEG) compression, and the weighted multilevel wavelet coefficient mean was adjusted to embed the watermark; finally, an isolated-black-point filtering strategy was adopted to enhance the quality of the extracted watermark. The experiments show that the proposed method is sensitive to the plaintext image and the user keys while remaining robust to common image attacks such as clipping, white noise, JPEG compression, covering and graffiti. The Peak Signal-to-Noise Ratio (PSNR) of the watermarked image can reach 45 dB, and the embedded watermark is difficult to tamper with or extract even when the entire embedding procedure is published.
Construction of binary three-order cyclotomic sequences with 3-valued autocorrelation and large linear complexity
LI Shenghua, ZHAO Hannuo, LUO Lianfei
2015, 35(9): 2542-2545. DOI: 10.11772/j.issn.1001-9081.2015.09.2542
In order to obtain sequences with few autocorrelation values and large linear complexity, a new class of binary cyclotomic sequences of order 3 with period p was constructed, where p is a prime and p ≡ 1 (mod 3). The autocorrelation was computed based on cyclotomy, and the condition on p that assures a 3-valued autocorrelation was discussed: p should be of the form p = a²+12 for an integer a. The linear complexity is p-1 if p is of this form, and 2(p-1)/3 otherwise. By computer experiments, all p satisfying the form were found, the corresponding sequences were given, and the autocorrelation and linear complexity were confirmed. The linear complexity is the same as that of the known ternary cyclotomic sequences of order 3, and compared with the related known binary cyclotomic sequences of even order, it is the same or better in most cases. The method can be extended to construct other cyclotomic sequences of odd order with few autocorrelation values and large linear complexity. Since cyclotomic sequences of larger odd order also have better balance, they can be applied to stream ciphers and communication systems.
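To make the construction concrete, the sketch below builds the cyclotomic classes of order 3 modulo a prime p ≡ 1 (mod 3), forms a binary sequence from a characteristic set (here C1 ∪ C2, one plausible choice, not necessarily the paper's), and checks the autocorrelation values and the linear complexity with a Berlekamp-Massey routine; p = 13 satisfies p = a²+12 with a = 1.

```python
import numpy as np
from sympy.ntheory import primitive_root

def cyclotomic_classes(p, d=3):
    """Cyclotomic classes C_0..C_{d-1} of order d modulo a prime p = d*f + 1."""
    g = primitive_root(p)
    f = (p - 1) // d
    return [{pow(g, d * s + i, p) for s in range(f)} for i in range(d)]

def autocorrelation(seq):
    """Periodic autocorrelation of a {0,1} sequence mapped to {1,-1}."""
    x = 1 - 2 * np.asarray(seq)
    return np.array([int(np.dot(x, np.roll(x, -t))) for t in range(len(seq))])

def berlekamp_massey_gf2(seq):
    """Linear complexity of a binary sequence (feed two periods)."""
    s = list(seq) * 2
    C, B, L, m = [1], [1], 0, -1
    for n in range(len(s)):
        d = s[n]
        for i in range(1, L + 1):
            d ^= (C[i] if i < len(C) else 0) & s[n - i]
        if d:
            T = C[:]
            shift = n - m
            C = C + [0] * max(0, shift + len(B) - len(C))
            for i, b in enumerate(B):
                C[i + shift] ^= b
            if 2 * L <= n:
                L, m, B = n + 1 - L, n, T
    return L

p = 13                       # prime with p ≡ 1 (mod 3)
C = cyclotomic_classes(p)
seq = [1 if (t % p) in C[1] | C[2] else 0 for t in range(p)]
print(sorted(set(autocorrelation(seq)[1:])), berlekamp_massey_gf2(seq))
```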
Practical power analysis of smart card implementation of block cipher
FU Rong
2015, 35(9): 2546-2552. DOI: 10.11772/j.issn.1001-9081.2015.09.2546
Focusing on the security of hardware smart card implementations of the SM4 encryption algorithm, a fast and efficient correlation power analysis method was proposed. Theoretical analysis and experiments reveal that even a theoretically secure encryption algorithm may leak important sensitive information during its physical implementation. First, the mathematical model of power analysis was put forward, and the key-recovery process and its optimization were derived by analyzing the implementation process and encryption features of SM4. Second, combining the theoretical physical leakage points, a complete experimental system for smart card power analysis was set up, and the collection, analysis and optimization of smart card power consumption data were carried out on a real smart card. Finally, the experimental results were used to further optimize the power analysis, and the security of the SM4 algorithm in an embedded environment was explored. Compared with the attack on the Mifare DESFire MF3ICD40's 3DES (Triple Data Encryption Standard) algorithm, this work reduced the number of required power traces from 250 000 to fewer than 1000 and the time consumption from more than seven hours to a few minutes, while completely recovering the original SM4 key. The proposed method effectively improves the efficiency of power analysis in a hardware environment and reduces computational complexity.
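As an illustration of the attack machinery (not the paper's SM4-specific optimizations), the sketch below implements a generic first-order CPA on one key byte under a Hamming-weight leakage model: for every key guess, the Pearson correlation between hypothetical S-box-output weights and the measured traces is computed, and the guess with the highest correlation peak wins. All names are illustrative.

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

def cpa_best_key(traces, plain_bytes, sbox):
    """First-order CPA for one key byte (generic sketch, HW leakage model).

    traces      : (n_traces, n_samples) float array of measured power traces
    plain_bytes : (n_traces,) int array, the byte entering the attacked S-box
    sbox        : 256-entry numpy int array, the target cipher's S-box
    """
    t_c = traces - traces.mean(axis=0)            # centre each sample point
    t_norm = np.sqrt((t_c ** 2).sum(axis=0))
    best_peak, best_guess = -1.0, None
    for guess in range(256):
        # hypothetical leakage: HW of the S-box output under this key guess
        h = HW[sbox[plain_bytes ^ guess]].astype(float)
        h_c = h - h.mean()
        corr = (h_c @ t_c) / (np.sqrt((h_c ** 2).sum()) * t_norm)
        peak = np.max(np.abs(corr))
        if peak > best_peak:
            best_peak, best_guess = peak, guess
    return best_guess
```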
Deep Web resource selection using topic model
WANG Qiuyue, CAO Wei, SHI Shaochen
2015, 35(9): 2553-2559. DOI: 10.11772/j.issn.1001-9081.2015.09.2553
Federated search is a widely used technique to find information on the Deep Web. Given a user query, one challenge for a federated search system is to select the set of resources most likely to return relevant results for the query. Most existing resource selection methods are based on text matching between the sample documents of a resource and the query, and typically suffer from missing vocabulary or incomplete information. To alleviate the problem of incomplete information, a Latent Dirichlet Allocation (LDA) topic model approach for resource selection was proposed. First, topic probability distributions for the resources and the query were inferred using the LDA topic model. Then the similarities between the topic distributions of the resources and the query were calculated to rank the resources. By mapping both resources and query into a low-dimensional topic space, the information loss caused by the sparsity of the high-dimensional word space was alleviated. Experiments were conducted on the test sets of the TREC FedWeb 2013 and 2014 Tracks, and the results were compared with those of other participants. On the TREC FedWeb 2013 Track, the LDA-based approach outperforms the best result of other participants by 24%; on the TREC FedWeb 2014 Track, it outperforms the best results of the traditional text-matching-based resource selection methods by 22% for small-document methods and 43% for big-document methods. In addition, using sampled snippets rather than documents to build the big-document representation of resources significantly improves system efficiency, making the proposed approach more feasible and applicable in practice.
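A sketch of the two steps with gensim, using toy stand-ins for the sampled documents of each resource: an LDA model is trained on the resource samples, topic distributions are inferred for both resources and query, and resources are ranked by the Hellinger distance between distributions (one reasonable similarity choice; the paper's exact measure may differ). Resource names and texts are invented for illustration.

```python
from gensim import corpora, models
from gensim.matutils import hellinger

# toy stand-ins for sampled documents of three searchable resources
resources = {
    "medline": ["gene protein cell tumor therapy", "clinical trial dose patient"],
    "imdb":    ["movie actor director film award", "box office sequel studio"],
    "arxiv":   ["theorem proof lattice bound", "quantum qubit entanglement"],
}
docs = [" ".join(texts).split() for texts in resources.values()]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=3,
                      random_state=0, passes=20)

def rank_resources(query):
    """Rank resources by topic-distribution similarity to the query."""
    q_bow = dictionary.doc2bow(query.lower().split())
    q_topics = lda.get_document_topics(q_bow, minimum_probability=0.0)
    scored = []
    for name, bow in zip(resources, corpus):
        r_topics = lda.get_document_topics(bow, minimum_probability=0.0)
        scored.append((hellinger(q_topics, r_topics), name))  # smaller = closer
    return [name for _, name in sorted(scored)]

print(rank_resources("quantum entanglement bound"))
```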
Sentiment-aspect analysis method based on seed words
CHEN Yongheng, ZUO Wanli, LIN Yaojing
2015, 35(9): 2560-2564. DOI: 10.11772/j.issn.1001-9081.2015.09.2560
Sentiment-aspect analysis of products or services helps to extract aspect-level sentiment information from massive comment sets. A new sentiment-aspect analysis method based on aspect seed words was proposed. Firstly, aspect seed words and aspect documents were obtained automatically. Secondly, the Sentiment-Aspect Analysis model Supervised by Seed Words (SAA_SSW) was employed to find aspects and their associated sentiments. The experimental results show that, compared with the traditional Joint Sentiment/Topic model (JST) and the Aspect and Sentiment Unification Model (ASUM), SAA_SSW can find different sentiment labels for the same word under different topics and achieves higher relevance between sentiment words and topics. In addition, SAA_SSW improves the classification accuracy by at least 7.5% over JST and ASUM. Therefore, SAA_SSW extracts sentiment aspects well and improves classification accuracy.
Community detection model in large scale academic social networks
LI Chunying, TANG Yong, TANG Zhikang, HUANG Yonghang, YUAN Chengzhe, ZHAO Jiandong
2015, 35(9): 2565-2568. DOI: 10.11772/j.issn.1001-9081.2015.09.2565
Concerning the problems that community detection algorithms based on label propagation in complex networks require pre-set parameters in real networks and produce redundant labels, a community detection model for large-scale academic social networks was proposed. The model detects Utmost Maximal Cliques (UMC) in the academic social network, whose pairwise intersections are empty sets, and lets the nodes of each UMC share a unique label, reducing redundant labels and random factors and thus increasing the efficiency and stability of the algorithm. Meanwhile, labels are propagated outward from the core node groups (UMC) to UMC-adjacent nodes according to closeness, and non-UMC-adjacent nodes are updated according to the maximum weight of their neighbor nodes. In the post-processing stage, an adaptive threshold method removes useless labels, effectively overcoming the pre-set parameter limitation in real complex networks. The experimental results on the academic social networking platform SCHOLAT show that the model can assign nodes with certain commonality to the same community, providing support for future precise personalized services in academic social networks, such as latent friend recommendation and paper sharing.
Personalized book recommendation algorithm based on topic model
ZHENG Xiangyun, CHEN Zhigang, HUANG Rui, LI Bo
2015, 35(9): 2569-2573. DOI: 10.11772/j.issn.1001-9081.2015.09.2569
Concerning the high time complexity of traditional recommendation algorithms, a new recommendation model based on Latent Dirichlet Allocation (LDA) was proposed: the Book Recommendation_Latent Dirichlet Allocation (BR_LDA) model, a data mining model for Book Recommendation (BR) in library management systems. Through content similarity analysis between the target borrower's historical borrowing data and other books, books with high content similarity to the borrower's history were obtained. Through similarity analysis between the target borrower's historical borrowing data and that of other borrowers, the historical borrowing data of the nearest neighbors were obtained. Books the target borrower is interested in were finally obtained by calculating the probabilities of the recommended books. In particular, when the number of recommended books is 4000, the precision of the BR_LDA model is 6.2% higher than the multi-feature method and 4.5% higher than the association rule method; when the recommendation list has 500 items, its precision is 2.1% higher than nearest-neighbor collaborative filtering and 0.5% higher than matrix factorization based collaborative filtering. The experimental results show that the model mines book data efficiently and reasonably recommends to target borrowers both new books in historically interesting categories and new books in potentially interesting categories.
Probabilistic matrix factorization recommendation with explicit and implicit feedback
WANG Dong, CHEN Zhi, YUE Wenjing, GAO Xiang, WANG Feng
2015, 35(9): 2574-2578. DOI: 10.11772/j.issn.1001-9081.2015.09.2574
Focusing on the issue that recommender systems relying solely on explicit feedback suffer degraded accuracy, a probabilistic matrix factorization recommendation technique that fuses explicit and implicit feedback was proposed. Firstly, the user trust relationship matrix and the user-item matrix were factorized using probabilistic matrix factorization, mixing in the feedback of user rating records. Then the model was trained to provide users with accurate predictions. The experimental results show that the technique captures user preferences effectively and produces a large number of highly accurate recommendations.
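A minimal sketch of the shared-user-factor idea (in the spirit of SoRec-style joint factorization, which may differ from the paper's exact model): the user factor matrix U is updated by stochastic gradient descent from both the rating matrix and the trust matrix. Matrix names and hyperparameters are illustrative.

```python
import numpy as np

def fit_pmf_joint(R, T, k=8, alpha=0.5, lam=0.05, lr=0.01, epochs=50, seed=0):
    """Joint probabilistic matrix factorization (SGD sketch).

    R : (n_users, n_items) explicit ratings, 0 = unobserved
    T : (n_users, n_users) implicit/trust relations, 0 = unobserved
    The user factors U are shared between both factorizations.
    """
    rng = np.random.default_rng(seed)
    n_u, n_i = R.shape
    U = 0.1 * rng.standard_normal((n_u, k))
    V = 0.1 * rng.standard_normal((n_i, k))
    W = 0.1 * rng.standard_normal((n_u, k))   # factors for the trust network
    r_obs, t_obs = np.argwhere(R > 0), np.argwhere(T > 0)
    for _ in range(epochs):
        for u, i in r_obs:                    # explicit-feedback term
            e = R[u, i] - U[u] @ V[i]
            gu = e * V[i] - lam * U[u]
            gv = e * U[u] - lam * V[i]
            U[u] += lr * gu
            V[i] += lr * gv
        for u, v in t_obs:                    # implicit/trust term
            e = T[u, v] - U[u] @ W[v]
            gu = e * W[v] - lam * U[u]
            gw = e * U[u] - lam * W[v]
            U[u] += lr * alpha * gu
            W[v] += lr * alpha * gw
    return U, V

R = np.array([[5, 3, 0], [4, 0, 1], [0, 2, 5.]])
T = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0.]])
U, V = fit_pmf_joint(R, T)
print(np.round(U @ V.T, 1))   # predicted ratings
```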
Nonlinear AdaBoost algorithm based on statistics for K-nearest neighbors
GOU Fu, ZHENG Kai
2015, 35(9): 2579-2583. DOI: 10.11772/j.issn.1001-9081.2015.09.2579
AdaBoost is one of the most popular boosting algorithms in data mining. By analyzing the disadvantages of traditional AdaBoost's linear combination of base classifiers, a new algorithm was proposed that replaces the linear addition with a nonlinear combination, substituting the constant weights acquired in the training stage with dynamic, instance-dependent parameters based on statistics of the K-nearest neighbors in the prediction stage. In this way, the weight of each base classifier is closer to reality. The experimental results show that, compared with traditional AdaBoost, the new algorithm can increase prediction accuracy by up to nearly seven percentage points, achieving higher classification accuracy on most data sets.
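A sketch of the prediction-stage idea using scikit-learn: AdaBoost is trained as usual, but each weak learner's vote is weighted by its accuracy on the test instance's K nearest training neighbours instead of the fixed training-stage weight. The dataset, K and estimator settings are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         n_estimators=50, random_state=0).fit(Xtr, ytr)
nn = NearestNeighbors(n_neighbors=15).fit(Xtr)

def knn_weighted_predict(x):
    """Weight each weak learner by its accuracy on x's K nearest
    training neighbours (the 'dynamic parameters' of the abstract)."""
    idx = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
    score = 0.0
    for est in ada.estimators_:
        local_acc = (est.predict(Xtr[idx]) == ytr[idx]).mean()
        # map the weak learner's {0,1} output to {-1,+1} and vote
        score += local_acc * (2 * est.predict(x.reshape(1, -1))[0] - 1)
    return int(score > 0)

pred = np.array([knn_weighted_predict(x) for x in Xte])
print("accuracy:", (pred == yte).mean())
```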
Chromosomal translocation-based dynamic evolutionary algorithm
TAN Yang, NING Ke, CHEN Lin
2015, 35(9): 2584-2589. DOI: 10.11772/j.issn.1001-9081.2015.09.2584
When traditional binary-coded evolutionary algorithms are applied to function optimization, mutual interference between different dimensions prevents the effective recombination of some low-order modes. A new evolutionary algorithm, the Dynamic Chromosomal Translocation-based Evolutionary Algorithm (CTDEA), was proposed based on cytological findings. The algorithm simulates the structured organization of organic chromosomes inside cells by constructing gene matrices, and realizes modular translocations of homologous chromosomes on the basis of the gene matrix in order to maintain population diversity. Moreover, an individual fitness-based population-dividing method was adopted to safeguard elite populations, ensure competition among individuals and improve optimization speed. Experimental results show that, compared with existing Genetic Algorithms (GA) and estimation of distribution algorithms, CTDEA greatly improves population diversity, keeping it around 0.25, and shows clear advantages in optimization accuracy, stability and speed.
Improved grey wolf optimization algorithm for constrained optimization problem
LONG Wen, ZHAO Dongquan, XU Songjin
2015, 35(9): 2590-2595. DOI: 10.11772/j.issn.1001-9081.2015.09.2590
The standard Grey Wolf Optimization (GWO) algorithm suffers from low solution precision, slow convergence and poor local search ability. To overcome these disadvantages, an Improved GWO (IGWO) algorithm was proposed to solve constrained optimization problems. A non-stationary multi-stage assignment penalty function method was used to handle the constraints, converting the original constrained optimization problem into an unconstrained one, which the proposed IGWO algorithm then solved. In IGWO, good point set theory was used to initialize the population, strengthening the diversity of the global search, and the Powell search method was applied to the current best individual to improve local search ability and accelerate convergence. Simulation experiments on well-known benchmark constrained optimization problems show that the proposed algorithm not only overcomes the shortcomings of the original GWO algorithm but also outperforms differential evolution and particle swarm optimization algorithms.
Matrix-structural fast learning of cascaded classifier for negative sample inheritance
LIU Yang, YAN Shengye, LIU Qingshan
2015, 35(9): 2596-2601. DOI: 10.11772/j.issn.1001-9081.2015.09.2596
The negative-sample bootstrap process of matrix-structural learning of cascaded classifiers suffers from inefficiency in obtaining high-quality samples and from the bad impact of bootstrapping on overall learning efficiency and final classifier performance. A fast learning algorithm, matrix-structural fast learning of cascaded classifiers with negative sample inheritance, was therefore proposed. Its negative-sample bootstrap process combines sample inheritance with graded bootstrapping: useful samples are first inherited from the negative sample set of the previous training stage, and the remainder of the sample set is then bootstrapped from the negative image set. Sample inheritance narrows the bootstrap range for useful samples, accelerating bootstrapping, while sample pre-screening during bootstrapping increases sample complexity and improves final classifier performance. The experimental results show that the proposed algorithm saves 20 hours of training time and improves detection performance by 1 percentage point compared with the original matrix-structural learning algorithm, and it also performs well against 17 other human detection algorithms. The algorithm thus achieves substantial improvement in both training efficiency and detection performance.
Autonomous developmental algorithm for intelligent robot based on intrinsic motivation
REN Hongge, XIANG Yingfan, LI Fujin
2015, 35(9): 2602-2605. DOI: 10.11772/j.issn.1001-9081.2015.09.2602
The two-wheeled self-balancing robot shows poor initiative during learning. Inspired by the intrinsic motivation theory of psychology, an autonomous development algorithm for intelligent robots based on intrinsic motivation was put forward. Within the framework of reinforcement learning, the algorithm introduced curiosity, in the sense of intrinsic motivation theory, as the internal driving force, combined with the external reward signal throughout learning, and adopted a double internal regression neural network to store and accumulate learned knowledge, so that the robot gradually learns the skill of autonomous balancing. Finally, to counter the effect of measurement noise on the robot's two-wheel angular velocities, a Kalman filter was adopted for compensation, speeding up convergence and reducing the system error. Simulation experiments show that the algorithm enables the two-wheeled robot to gain cognition through interaction with the environment and successfully learn the balance control skill.
Bearing fault diagnosis method based on conditional local mean decomposition and variable predictive model
XU Youcai, WAN Zhou
2015, 35(9): 2606-2610. DOI: 10.11772/j.issn.1001-9081.2015.09.2606
Aiming at the problem that the modal aliasing of the Local Mean Decomposition (LMD) method when decomposing nonlinear, non-stationary vibration signals affects identification accuracy, a fault diagnosis method based on Conditional Local Mean Decomposition (CLMD) and Variable Predictive Model based Class Discrimination (VPMCD) was proposed. The method combines the frequency resolution idea from digital image processing with LMD. Firstly, the frequency resolutions of all local extreme points were calculated, dividing the vibration signal into low and high frequency resolution regions. Secondly, LMD was used to decompose the high frequency resolution regions into several Product Function (PF) components. Finally, the components were connected by broken lines and smoothed by moving average processing to obtain the final PF components; the skewness coefficients and energy coefficients of the PF components formed the fault feature vector, which VPMCD used to identify the fault types. Applied to bearing fault diagnosis, the method improved recognition efficiency by 8.33% compared with LMD, showing that it is feasible and effective.
Dynamic spatial index of mesh surface for supporting STL data source
GUO Hongshuai, SUN Dianzhu, LI Yanrui, LI Cong
2015, 35(9): 2611-2615. DOI: 10.11772/j.issn.1001-9081.2015.09.2611
Focusing on the defects of the STereo Lithography (STL) file format, namely vertex data redundancy and the lack of adjacency information among facets, an explicit surface topology reconstruction algorithm based on a multi-dimensional dynamic spatial index was presented. While eliminating duplicate mesh vertex data, a K-Dimensional Tree (KD-Tree) of the mesh surface vertices was built incrementally. The index improved the efficiency of removing duplicate vertices, and the surface topology was rapidly built thanks to the open storage of the data in the leaf node layer of the KD-Tree, into which the half-edge data structure could be integrated. Compared with methods using R*-Tree, arrays and hash tables as indexes, the proposed dynamic spatial index combining a KD-Tree with the half-edge data structure took 11.93 s to remove redundant vertices and 2.87 s to reconstruct the surface topology on a data file of nearly one million facets, significantly reducing both times. The index also supports fast queries of the mesh surface topology, with query times within 1 ms, far less than the comparison algorithms. The experimental results show that the proposed algorithm improves the efficiency of vertex data de-duplication and topological reconstruction, and achieves fast queries of mesh surface topology information.
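A sketch of the indexing idea with SciPy: duplicate STL vertices are collapsed through a KD-tree neighbourhood query (a static cKDTree stands in for the paper's incrementally built tree), and facet adjacency, which the STL format lacks, is then recovered from shared edges. The tolerance and all names are illustrative.

```python
import numpy as np
from collections import defaultdict
from scipy.spatial import cKDTree

def index_mesh(triangles, tol=1e-6):
    """Collapse duplicated STL vertices into an indexed mesh.

    triangles : (n, 3, 3) array, the raw per-facet vertices of an STL file.
    Returns (vertices, faces) where faces indexes into vertices.
    Assumes duplicates cluster within tol of each other.
    """
    pts = np.asarray(triangles, float).reshape(-1, 3)
    tree = cKDTree(pts)
    # map every point to the smallest index in its tol-neighbourhood
    groups = tree.query_ball_point(pts, r=tol)
    canon = np.array([min(g) for g in groups])
    uniq, faces_flat = np.unique(canon, return_inverse=True)
    vertices = pts[uniq]
    faces = faces_flat.reshape(-1, 3)
    return vertices, faces

def face_adjacency(faces):
    """Facet adjacency over shared edges -- the topology STL lacks."""
    edge_map = defaultdict(list)
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_map[tuple(sorted(e))].append(f)
    return {e: fs for e, fs in edge_map.items() if len(fs) == 2}
```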
Fast removal algorithm for trailing smear effect in CCD drift-scan star image
YANG Huiling, LIU Hongyan, LI Yan, SUN Huiting
2015, 35(9): 2616-2618. DOI: 10.11772/j.issn.1001-9081.2015.09.2616
When a drift-scan CCD images a sky region with bright stars in the field of view, the frame-transfer mechanism produces a trailing smear throughout the star image. A fast smear elimination algorithm was proposed based on an analysis of the imaging mechanism. The method first reduced the background non-uniformity by fitting the background, then located the smear trails by calculating the mean gray value of every column in the star image and comparing the column means before and after fitting, and finally eliminated the smear by setting the trailing pixels to the post-fitting mean gray value. The experimental results show that the smear trails are removed completely and the mean background deviation is clearly reduced; moreover, the method consumes only 20% of the time of the traditional smear elimination method, which proves its validity.
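A minimal sketch of the column-mean smear localization is given below; the median-based background estimate and the `k`-sigma threshold are illustrative stand-ins for the paper's background fitting.

```python
import numpy as np

def remove_smear(img, k=3.0):
    """Detect and suppress column-wise smear trails in a star image.

    Illustrative version: the background of each column is approximated
    by its median; columns whose mean exceeds that background by `k`
    standard deviations of the excess are treated as smear trails and
    replaced with the column background estimate.
    """
    img = img.astype(float)
    col_mean = img.mean(axis=0)
    col_bg = np.median(img, axis=0)           # robust per-column background
    excess = col_mean - col_bg
    smear_cols = excess > k * excess.std()    # columns dominated by smear
    cleaned = img.copy()
    cleaned[:, smear_cols] = col_bg[smear_cols]
    return cleaned, np.flatnonzero(smear_cols)

# Example: synthetic 100x100 frame with a smear trail in column 40.
frame = np.random.normal(100, 5, (100, 100))
frame[:, 40] += 80
clean, cols = remove_smear(frame)   # cols == [40]
```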
Orientation-invariant generalized Hough transform algorithm based on U-chord curvature
CHEN Binbin, DENG Xinpu, YANG Jungang
2015, 35(9): 2619-2623. DOI: 10.11772/j.issn.1001-9081.2015.09.2619
Focusing on the mismatches that occur in template matching when the Generalized Hough Transform (GHT) algorithm is used to extract a target shape from a rotated image, an improved orientation-invariant generalized Hough transform algorithm based on U-chord curvature was proposed. Firstly, a modified R-table with orientation invariance was constructed from the U-chord curvatures and displacement vectors of the edge points of the template shape; secondly, information such as the displacement vector was retrieved by using the curvature of each edge point as an index into the constructed R-table; finally, the possible locations of the reference point were calculated from this information, and the point with the maximum vote gave the location of the target shape in the image. When the target shape is rotated by 0°, 2°, 4°, 5° and 6°, sharp peaks appear at the target position for all rotated images with the proposed algorithm. The simulation results show that the Improved Generalized Hough Transform (I-GHT) algorithm is highly stable under rotation and noise.
New multi-object image dataset construction and evaluation of visual saliency analysis algorithm
ZHENG Bin, NIU Yuzhen, KE Lingling
2015, 35(9): 2624-2628. DOI: 10.11772/j.issn.1001-9081.2015.09.2624
Image visual saliency analysis algorithms have achieved satisfactory performance on existing datasets, but these datasets have two major problems: most images contain only one salient object, and users' cognition of multiple salient objects in the same image was ignored when the salient-object ground truth was built. As a result, evaluation on existing datasets cannot reflect how saliency analysis algorithms perform in real applications. This paper therefore proposed a novel method for labeling the ground truth of salient objects. Firstly, software was designed and implemented to collect users' cognition of the importance of the multiple salient objects in each image. Then, according to the data collected from each user, a ground truth map represented as a gray-scale image was created by manually labeling the regions covered by the salient objects, with the pixel value of each region equal to the saliency collected in the first step. Based on this improved labeling method, a salient object dataset containing 1000 multi-object images was built, with a ground truth map for each image recording users' cognition of the objects' saliencies. Ten state-of-the-art saliency analysis algorithms were then compared on the existing datasets and on the established dataset. The experimental results show that the performance of these algorithms drops considerably on the established dataset; for example, the largest drop of the Area Under the Receiver Operating Characteristic curve (ROC-AUC) exceeds 0.5. The results confirm the problems of existing datasets, demonstrate the need for a new dataset, and point out the insufficiency of saliency analysis algorithms on complex images with multiple salient objects.
Image classification method based on visual saliency detection
LIU Shangwang, LI Ming, HU Jianlan, CUI Yanmeng
2015, 35(9): 2629-2635. DOI: 10.11772/j.issn.1001-9081.2015.09.2629
To solve the problem that traditional image classification methods treat the whole image in a non-hierarchical way, an image classification method based on visual saliency detection was proposed. Firstly, a visual attention model was employed to generate the salient region. Secondly, the texture feature and time-signature feature of the image were extracted by a Gabor filter and a pulse-coupled neural network, respectively. Finally, a support vector machine was used to classify the image according to the features of the salient region. The experimental results show that the classification precision rates of the proposed method on the SIMPLIcity and Caltech datasets are 94.26% and 95.43%, respectively, indicating that saliency detection and effective image feature extraction are significant for image classification.
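The texture-feature and classification stages can be sketched with scikit-image's Gabor filter bank and an SVM, as below; the PCNN time-signature feature is omitted, and the frequencies, orientations, and stand-in data are illustrative choices.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(img, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Mean/variance of Gabor response magnitudes over several
    frequencies and orientations -- a common texture descriptor."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(img, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.var()]
    return np.array(feats)

# Example: train an SVM on Gabor features of salient-region crops.
rng = np.random.default_rng(1)
crops = [rng.random((32, 32)) for _ in range(20)]   # stand-in regions
X = np.stack([gabor_features(c) for c in crops])
y = np.arange(len(crops)) % 2                       # stand-in labels
clf = SVC(kernel="rbf").fit(X, y)
```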
Unsupervised deep learning method for color image recognition
KANG Xiaodong, WANG Hao, GUO Jun, YU Wenyong
2015, 35(9): 2636-2639. DOI: 10.11772/j.issn.1001-9081.2015.09.2636
In view of the significance of color image recognition, a method based on image feature data and Deep Belief Network (DBN) was presented. Firstly, the data field of the color image was constructed in accordance with human visual characteristics; secondly, wavelet transforms were applied to describe the multi-scale features of the image; finally, recognition was performed by training an unsupervised DBN. The experimental results show that, compared with Adaboost and Support Vector Machine (SVM), the classification accuracy is improved by 3.7% and 2.8% respectively, and better image recognition is achieved by the proposed method.
2D intra string copy for screen content coding
CHEN Xianyi, ZHAO Liping, CHEN Zhizhong, LIN Tao
2015, 35(9): 2640-2647. DOI: 10.11772/j.issn.1001-9081.2015.09.2640
Although Intra String Copy (ISC) improves screen content coding, it transforms the 2D image into a 1D sequence per Coding Unit (CU), segmenting adjacent regions of an image and discarding their spatial correlation. To solve this problem, a new algorithm called 2D Intra String Copy (2D ISC) was proposed. With almost no additional memory in the encoder and decoder, the algorithm performs arbitrary 2D shape searching and matching for pixels in the current CU, unconstrained by CU boundaries, by using the dictionary coding tool in the High Efficiency Video Coding (HEVC) reconstruction cache. Color quantization preprocessing and adaptive selection between horizontal and vertical search order were also adopted to enhance the coding effect. Experiments on typical screen content test sequences under common test conditions show that, compared with HEVC, 2D ISC achieves bit-rate savings of 46.5%, 34.8% and 25.4% for the All Intra (AI), Random Access (RA) and Low-delay B (LB) configurations respectively in lossless coding mode, and 34.0%, 37.2% and 23.9% respectively in lossy coding mode. Even compared with ISC, 2D ISC achieves bit-rate savings of up to 18.3%, 13.9% and 11.0% for AI, RA and LB in lossless mode, and 19.8%, 20.5% and 10.4% in lossy mode. The experimental results indicate that the proposed algorithm is feasible and efficient.
Lossless digital image coding method based on multi-directional hybrid differential
GAO Jian, YANG Ke, LIU Xingxing
2015, 35(9): 2648-2651. DOI: 10.11772/j.issn.1001-9081.2015.09.2648
For digital image coding, a multi-directional hybrid differential method was proposed based on an analysis of two-directional hybrid differential and the 3-parameter variable-length coding method. Firstly, the multi-directional hybrid differential method analyzed the local features of the current pixel from four nearby pixels. Then, according to the analysis results, an optimal differential direction for the current pixel was chosen from several primary differential directions (four in total). Unlike two-directional hybrid differential, the multi-directional method does not need to store direction flags. Compared with the results of two-directional hybrid differential, the entropy of the image processed by multi-directional hybrid differential was reduced by 8.2% and the bits per pixel by 11%. The experimental results show that the algorithm improves coding efficiency by reducing the entropy of the digital image.
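The direction-selection idea can be sketched as below: each pixel is predicted from one of four causal neighbors, chosen by a rule computable from already-decoded pixels, so no direction flags need to be transmitted. The specific selection rule here (neighbor closest to the local average) is an illustrative assumption, not the paper's criterion.

```python
import numpy as np

def multi_directional_diff(img):
    """Per-pixel differential along one of four causal directions.

    The predictor is chosen among the left, top, top-left and top-right
    neighbors. Because the choice depends only on causal (already decoded)
    pixels, a decoder can reproduce it without stored direction flags.
    """
    img = img.astype(np.int32)
    h, w = img.shape
    res = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            cands = []
            if x > 0:                 cands.append(img[y, x - 1])      # left
            if y > 0:                 cands.append(img[y - 1, x])      # top
            if y > 0 and x > 0:       cands.append(img[y - 1, x - 1])  # top-left
            if y > 0 and x < w - 1:   cands.append(img[y - 1, x + 1])  # top-right
            if not cands:
                res[y, x] = img[y, x]          # first pixel is stored raw
                continue
            local = np.mean(cands)
            pred = min(cands, key=lambda v: abs(v - local))
            res[y, x] = img[y, x] - pred
    return res

# A smooth gradient image yields a low-entropy residual.
grad = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
residual = multi_directional_diff(grad)
```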
Building-damage detection based on combination of multi-features
LIU Yu, CAO Guo, ZHOU Licun, QU Baozhu
2015, 35(9): 2652-2655. DOI: 10.11772/j.issn.1001-9081.2015.09.2652
To detect building-damage areas in post-seismic high-resolution remote sensing images, a building-damage detection method based on combined multi-features was proposed. Firstly, the Morphological Attribute Profile (MAP) and the Local Binary Pattern (LBP) operator were used to extract geometric and texture features. Then, a Random Forest (RF) classifier was applied to extract damaged building regions as preliminary results. Finally, for each segmented object, the ultimate building-damage area was obtained by computing its damage ratio. Experiments were carried out on Yushu post-seismic aerial remote sensing images with a spatial resolution of 0.1 m. The results show that the method improves overall accuracy by 12% compared with the Morphological Profile (MP)-based method and can effectively detect building-damage areas with high accuracy in post-seismic high-resolution images.
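The texture branch of such a pipeline can be sketched with an LBP histogram per patch fed to a random forest, as below; the MAP geometric features are omitted, and the patch data and labels are stand-ins.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(patch, P=8, R=1.0):
    """Uniform-LBP histogram of an image patch, used as its texture
    feature (the morphological attribute profile features of the paper
    are omitted in this sketch)."""
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Example: classify patches as damaged / intact with a random forest.
rng = np.random.default_rng(2)
patches = [(rng.random((16, 16)) * 255).astype(np.uint8) for _ in range(40)]
X = np.stack([lbp_histogram(p) for p in patches])
y = np.arange(len(patches)) % 2                  # stand-in labels
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```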
Tracking algorithm by template matching based on particle swarm optimization
LI Jie, ZHOU Hao, ZHANG Jin, GAO Yun
2015, 35(9): 2656-2660. DOI: 10.11772/j.issn.1001-9081.2015.09.2656
Focusing on the poor running speed and success rate of tracking algorithms based on template matching, a template matching tracking algorithm based on Particle Swarm Optimization (PSO) was proposed. The algorithm took PSO as the search strategy for candidate templates in template matching and updated the target template self-adaptively. Firstly, 30 candidate templates were selected within a search scope and the individual and global optimal candidates were determined; secondly, the best candidate template, taken as the target, was found through particle swarm optimization; finally, the target template was updated self-adaptively according to the matching rate of the best candidate template. Theoretical analysis and simulation experiments show that, compared with the tracking algorithm based on template matching and the template matching tracking algorithm based on rough search refined by fine search, the computation of the proposed algorithm is reduced by about 91.1% and 69.8% respectively, and its success rate is 2.02 times and 1.94 times that of those algorithms. The experiments show that the new algorithm achieves good real-time tracking with greatly improved robustness and accuracy.
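A compact sketch of PSO-driven template matching follows, with a swarm of 30 particles as in the abstract; the inertia and acceleration constants, the SSD cost, and the iteration count are illustrative assumptions.

```python
import numpy as np

def pso_match(frame, template, n_particles=30, iters=25):
    """Locate `template` in `frame` by minimizing the sum of squared
    differences over candidate top-left positions with a basic PSO."""
    th, tw = template.shape
    hi = np.array([frame.shape[0] - th, frame.shape[1] - tw], dtype=float)
    rng = np.random.default_rng(0)

    def cost(pos):
        y, x = int(pos[0]), int(pos[1])
        patch = frame[y:y + th, x:x + tw]
        return np.sum((patch - template) ** 2)

    pos = rng.uniform(0, hi, (n_particles, 2))        # candidate positions
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
    g = pbest[pcost.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, hi)
        c = np.array([cost(p) for p in pos])
        better = c < pcost
        pbest[better], pcost[better] = pos[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return tuple(g.astype(int))

# Example: find a 16x16 template inside a 128x128 frame.
frame = np.random.default_rng(3).random((128, 128))
tpl = frame[50:66, 70:86].copy()
print(pso_match(frame, tpl))   # approximately (50, 70)
```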
Weighted guided image filtering algorithm using Laplacian-of-Gaussian edge detector
LONG Peng, LU Huaxiang
2015, 35(9): 2661-2665. DOI: 10.11772/j.issn.1001-9081.2015.09.2661
The original guided image filter is not robust enough because it applies the same local linear model to all local patches while ignoring their texture differences. Based on the absolute magnitude of the LoG (Laplacian-of-Gaussian) strength, a locally adaptive weighting parameter was used to modulate the fixed regularization parameter, producing a more robust method that amplifies the gray-scale difference between flat patches and edge patches while avoiding the degraded denoising performance of the original method. The open medical database BrainWeb, including 6 T1-, 6 T2- and 6 PD-weighted images corrupted with Rician noise of 9% magnitude, was used as the test set, with the Structural Similarity Index Measurement (SSIM) and the Cumulative Probability of Blur Detection (CPBD) as quantitative indexes. In the best case, the proposed method improves SSIM and CPBD by 5% and 6% respectively over the original guided image filter. Furthermore, it outperforms both the original guided image filter and another improved guided image filter at every regularization parameter setting, and the original O(N) time complexity is unaffected. Compared with state-of-the-art methods, the proposed method obtains the best SSIM-CPBD compromise with the lowest time complexity, providing a fast and robust denoising method for medical and color images.
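The adaptive regularization can be sketched as follows: a per-pixel weight derived from the |LoG| strength scales the guided filter's regularization so that edges receive less smoothing. The weighting formula below is an assumption in the spirit of the method, not its exact definition.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def log_adaptive_eps(guide, eps0=0.01, sigma=1.5, delta=1e-3):
    """Per-pixel regularization scaled by LoG strength: edge pixels
    (large |LoG|) get a smaller effective epsilon, so edges are
    preserved; flat pixels keep a larger epsilon for stronger smoothing."""
    strength = np.abs(gaussian_laplace(guide.astype(float), sigma))
    w = (strength + delta) / (strength.mean() + delta)   # ~1 on average
    return eps0 / w                                      # small eps at edges

def guided_filter(guide, src, r=4, eps=0.01):
    """Standard O(N) guided filter with box windows of radius r;
    `eps` may be a scalar or the per-pixel map produced above."""
    def box(a):
        return uniform_filter(a, 2 * r + 1)
    mI, mp = box(guide), box(src)
    a = (box(guide * src) - mI * mp) / (box(guide * guide) - mI ** 2 + eps)
    b = mp - a * mI
    return box(a) * guide + box(b)

img = np.random.default_rng(4).random((64, 64))
eps_map = log_adaptive_eps(img)
out = guided_filter(img, img, eps=eps_map)
```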
Automated lung segmentation for chest CT images based on Random Walk algorithm
WANG Bing, GU Xiaomeng, YANG Ying, DONG Hua, TIAN Xuedong, GU Lixu
2015, 35(9): 2666-2672. DOI: 10.11772/j.issn.1001-9081.2015.09.2666
To deal with lung segmentation under complex conditions, the Random Walk algorithm was applied to automatic lung segmentation. Firstly, foreground and background seeds were selected according to the anatomical and imaging characteristics of chest Computed Tomography (CT) images. Then, the CT image was roughly segmented with the Random Walk algorithm and the approximate mask of the lung area was extracted. Next, mathematical morphology operations were applied to the mask to further adjust the foreground and background seeds to the actual, complicated situations. Finally, fine segmentation of the lung parenchyma was performed by running the Random Walk algorithm again. Compared with the gold standard, the Mean Absolute Distance (MAD) is 0.44±0.13 mm and the Dice Coefficient (DC) is 99.21%±0.38%; compared with other lung segmentation methods, the segmentation accuracy is significantly improved. The experimental results show that the proposed method handles difficult cases of lung segmentation while ensuring integrity, accuracy, real-time performance and robustness, and that both its results and running time can meet clinical needs.
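The rough-segmentation pass can be sketched with scikit-image's `random_walker`; the Hounsfield-unit threshold seeding below is an illustrative stand-in for the paper's anatomy-based seed selection.

```python
import numpy as np
from skimage.segmentation import random_walker

def lung_rough_segment(ct_slice):
    """Rough lung segmentation of one CT slice with the Random Walk
    algorithm, mirroring the first pass of the pipeline above.

    Illustrative seeding: voxels below -600 HU are confident lung/air
    seeds, voxels above -100 HU are confident tissue seeds; everything
    in between stays unlabeled and is resolved by the random walker.
    """
    labels = np.zeros(ct_slice.shape, dtype=np.int32)
    labels[ct_slice < -600] = 1        # lung/air seeds
    labels[ct_slice > -100] = 2        # soft-tissue seeds
    mask = random_walker(ct_slice, labels, beta=130, mode="bf")
    return mask == 1

# Synthetic slice: tissue at 40 HU, two "lungs" at -800 HU, and a
# partial-volume band at -300 HU left for the walker to decide.
ct = np.full((128, 128), 40.0)
ct[30:90, 20:55] = -800.0
ct[30:90, 70:105] = -800.0
ct[30:90, 55:70] = -300.0
ct += np.random.default_rng(5).normal(0, 20, ct.shape)
lungs = lung_rough_segment(ct)
```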
Scheduling optimization for cross-over twin automated stacking cranes in automated container terminal
ZHOU Jingxian, HU Zhihua
2015, 35(9): 2673-2677. DOI: 10.11772/j.issn.1001-9081.2015.09.2673
For the scheduling optimization problem of cross-over twin Automated Stacking Cranes (ASCs), a multi-objective mathematical model was proposed that considers the conflicts between two ASCs reaching the same bay. The operation sequences of the ASCs were optimized; the optimal task sequence, task finish times and the ASCs' idle time after conflict avoidance were obtained, and the practicality of the model was demonstrated. Four experiment scenarios were run to further analyze the efficiency difference between twin ASCs and a single ASC and the effects of parameter changes. The results illustrate that the equipment utilization rate of twin ASCs is 107% lower and the operation efficiency 35% higher than those of a single ASC. The ASCs' total operation time decreases as the number of containers decreases and the travel speed of the ASCs increases. When the ratio of container storages to retrievals is 1, the optimal total operation time and idle time are obtained. This shows that container terminals can improve their operation by adjusting the storage-to-retrieval ratio over a short period or by optimizing ASC travel speed.
Robot tool calibration method based on camera space point constraint
DU Shanshan, ZHOU Xiang
2015, 35(9): 2678-2681. DOI: 10.11772/j.issn.1001-9081.2015.09.2678
Tool calibration means calculating the transformation matrix of the tool coordinate system relative to the coordinate system of the robot end. The traditional solution realizes the point constraint by manual teaching. A calibration method based on camera-space positioning was proposed: a camera was used to build the relation between the 3D space of the robot and the 2D space of the camera, achieving point constraints on the centers of ring marks used as feature points and attached to the end effector. This visual positioning requires neither camera calibration nor other tedious procedures. The Tool Center Point (TCP) was derived from the forward kinematics of the robot together with the camera-space point constraint. The calibration error of repeated experiments was less than 0.05 mm, and the absolute positioning error was less than 0.1 mm. The experimental results verify that tool calibration based on camera-space positioning has high repeatability and reliability.
Power grid fault evolution model based on fuzzy cellular automata
YU Qun, ZHANG Min, CAO Na, HE Qing, SHI Liang
2015, 35(9): 2682-2686. DOI: 10.11772/j.issn.1001-9081.2015.09.2682
To build a power grid fault model closer to the actual grid, a new model named FCA was proposed that combines fuzzy theory with Cellular Automata (CA) to simulate the evolution of power grid failures, and fuzzy rule bases for cellular status, power status and the degree of fault transmission were defined in the model. Simulation with the FCA model was conducted on the IEEE39-node system. The simulation results further validate the Self-Organized Criticality (SOC) of the power grid, and show that the absolute value of the slope of the lost-load power-law curve of this model is 17% larger than that of the model without fuzzy rules; the grid is more stable, and the FCA model is closer to the actual operation of the grid.
Lane line recognition using region division on structured roads
WANG Yue, FAN Xianxing, LIU Jincheng, PANG Zhenying
2015, 35(9): 2687-2691. DOI: 10.11772/j.issn.1001-9081.2015.09.2687
It is difficult to balance the accuracy and real-time performance of lane line recognition, so a new lane line recognition method based on region division was proposed. Firstly, an improved OTSU algorithm was applied to segment the edge image; then, feature points in the edge image were extracted with the Progressive Probabilistic Hough Transform (PPHT) and fitted to lines by the Least Squares Method (LSM); finally, all fitted lines were screened and the plausible ones were retained by an anti-interference algorithm. Comparative experiments were conducted against three algorithms from the references, and an evaluation model was put forward to assess performance on 500 typical lane images. The response time was evaluated by the average per-frame processing time over a video of 1 min 26 s. The experimental results show that the precision, recall rate and F value of the proposed algorithm are all better than those of the comparison algorithms, and that the algorithm meets the requirement of real-time processing.
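The OTSU-then-PPHT-then-least-squares chain can be sketched with OpenCV as below; the region split, Hough parameters, and the slope-based screening (standing in for the paper's anti-interference algorithm) are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_lane_lines(bgr):
    """Lane-line candidates in the lower (road) half of a frame:
    OTSU threshold -> Canny edges -> Progressive Probabilistic Hough
    Transform -> least-squares line fit per segment."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    roi = gray[gray.shape[0] // 2:, :]           # lanes lie in the lower half
    _, binary = cv2.threshold(roi, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                           minLineLength=20, maxLineGap=10)
    lines = []
    if segs is not None:
        for x1, y1, x2, y2 in segs[:, 0]:
            if x1 == x2:
                continue                          # skip vertical segments
            slope = (y2 - y1) / (x2 - x1)
            if 0.3 < abs(slope) < 3.0:            # reject near-horizontal noise
                lines.append(np.polyfit([x1, x2], [y1, y2], 1))  # ROI coords
    return lines

# Example: one synthetic lane marking drawn into the lower half.
frame = np.zeros((240, 320, 3), np.uint8)
cv2.line(frame, (60, 235), (160, 135), (255, 255, 255), 3)
print(detect_lane_lines(frame))
```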
Formal description approach for software component in model-driven development
HOU Jinkui, WANG Chengduan
2015, 35(9): 2692-2700. DOI: 10.11772/j.issn.1001-9081.2015.09.2692
To resolve the problems of describing and proving semantic property preservation in Model-Driven Software Development (MDSD), a formal approach for software architecture models was proposed on the basis of typed category theory and process algebra. The semantic constraints of component specifications that should be kept through model transformation were analyzed and discussed in depth. The problem of property preservation was described from the viewpoints of diagram structure, port and configuration constraints, external behavior and component substitutability, and the corresponding criteria were established. The framework guides the definition of model transformation rules and provides a basis for verifying the correctness of model transformations and analyzing their effects. Application research shows that the approach enhances the semantic description capability of component models and can serve as an effective supplement to existing software modeling methods.
Epileptic EEG signals classification based on wavelet transform and AdaBoost extreme learning machine
HAN Min, SUN Zhuoran
2015, 35(9): 2701-2705. DOI: 10.11772/j.issn.1001-9081.2015.09.2701
To address the unstable predictions and poor generalization of a single Extreme Learning Machine (ELM) classifier in automatic epileptic ElectroEncephaloGram (EEG) signal classification, an AdaBoost ELM classification method based on Mutual Information (MI) was put forward. The algorithm embedded MI variable selection into AdaBoost ELM, took the final performance of the strong learner as the evaluation index, and thereby optimized both the input variables and the network model. The Wavelet Transform (WT) was used to extract EEG signal features, and the proposed algorithm was used to classify the UCI EEG datasets and the epileptic EEG datasets of the University of Bonn. The experimental results show that, compared with traditional methods and other similar studies, the proposed method significantly improves classification accuracy and stability and has better generalization performance.
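The ELM base learner at the core of this method can be sketched in a few lines: random hidden weights and a least-squares solve for the output weights. The MI variable selection and the AdaBoost loop are omitted; the sizes and data below are stand-ins.

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Single ELM: random input weights, sigmoid hidden layer, output
    weights solved in closed form by the Moore-Penrose pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # hidden activations
    T = np.eye(y.max() + 1)[y]                   # one-hot targets
    beta = np.linalg.pinv(H) @ T                 # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)

# Example on random two-class "EEG feature" vectors.
rng = np.random.default_rng(6)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W, b, beta = train_elm(X, y)
acc = (elm_predict(X, W, b, beta) == y).mean()
```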
Water body extraction method based on stacked autoencoder
WANG Zhiyin, YU Long, TIAN Shengwei, QIAN Yurong, DING Jianli, YANG Liu
2015, 35(9): 2706-2709. DOI: 10.11772/j.issn.1001-9081.2015.09.2706
To improve the accuracy and automation of water body extraction from remote sensing images, a method based on a Stacked AutoEncoder (SAE) was proposed. A deep network model was built by stacking sparse autoencoders, each layer being trained in turn with the greedy layer-wise approach. Features were learnt without supervision from the pixel level, avoiding the manual feature analysis and selection required by methods such as traditional neural networks. A Softmax classifier was then trained with supervision using the learnt features and corresponding labels, and the Back Propagation (BP) algorithm was used to fine-tune the whole model. In experiments on ETM+ data of the Tarim River, the accuracy of the SAE-based method reaches 94.73%, which is 3.28% and 4.04% higher than that of the Support Vector Machine (SVM) and the BP neural network respectively. The experimental results show that the proposed method can effectively improve the accuracy of water body extraction.
Anomaly detection and diagnosis of high sulfur natural gas purification process based on dynamic kernel independent component analysis
LI Jingzhe, LI Taifu, GU Xiaohua, QIU Kui
2015, 35(9): 2710-2714. DOI: 10.11772/j.issn.1001-9081.2015.09.2710
The parameters of the high-sulfur natural gas purification process exhibit temporal autocorrelation, so static multivariate statistical process monitoring performs poorly on abnormal conditions. An anomaly detection and diagnosis method called Dynamic Kernel Independent Component Analysis (DKICA) was proposed to take this autocorrelation into account. Firstly, an Auto-Regression (AR) model was introduced and its order was determined by parameter identification to describe the temporal autocorrelation in the monitored process. Secondly, the original variables were projected into a kernel independent space, and their T² and SPE statistics were monitored for anomaly detection by checking whether they exceeded the control limits of the normal condition. Finally, the first-order partial derivative of the T² statistic with respect to each original variable was calculated, and a contribution plot was drawn to achieve abnormality diagnosis. Analysis of data collected from a high-sulfur gas purification plant showed that the detection accuracy of DKICA was superior to that of Kernel Independent Component Analysis (KICA).
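The T²/SPE monitoring logic can be sketched as below with a plain (non-kernel, non-dynamic) ICA; the statistic definitions and the percentile-based control limits are illustrative assumptions, and the AR time-lag augmentation and kernel mapping of DKICA are omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

def fit_monitor(X_normal, n_components=5, alpha=0.99):
    """Fit an ICA-based monitor on normal-condition data and derive
    empirical control limits for the T^2 and SPE statistics."""
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(X_normal)                 # independent components
    s_std = S.std(axis=0)
    t2 = np.sum((S / s_std) ** 2, axis=1)           # Hotelling-style T^2
    resid = X_normal - ica.inverse_transform(S)
    spe = np.sum(resid ** 2, axis=1)                # squared residual (SPE)
    return ica, s_std, np.quantile(t2, alpha), np.quantile(spe, alpha)

def monitor(ica, s_std, X, t2_lim, spe_lim):
    """Flag samples whose T^2 or SPE statistic exceeds its control limit."""
    S = ica.transform(X)
    t2 = np.sum((S / s_std) ** 2, axis=1)
    spe = np.sum((X - ica.inverse_transform(S)) ** 2, axis=1)
    return (t2 > t2_lim) | (spe > spe_lim)          # True -> alarm

# Example: train on normal data, then inject a fault into half the test set.
rng = np.random.default_rng(7)
X_train = rng.normal(size=(500, 8))
ica, s_std, t2_lim, spe_lim = fit_monitor(X_train)
X_test = rng.normal(size=(50, 8))
X_test[25:] += 4.0                                  # injected fault
alarms = monitor(ica, s_std, X_test, t2_lim, spe_lim)
```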