Table of Contents
10 November 2016, Volume 36 Issue 11
Parameter optimization model of interval concept lattice based on compression theory
LI Mingxia, LIU Baoxiang, ZHANG Chunying
2016, 36(11): 2945-2949. DOI: 10.11772/j.issn.1001-9081.2016.11.2945
Before building an interval concept lattice from the formal context, the interval parameters [α, β] should be determined; they influence the concept extension, the lattice structure, and the quantity and precision of the extracted association rules. In order to obtain the α and β giving the biggest compression degree of the interval concept lattice, firstly the definitions of the similarity of binary relation pairs and of the covering-neighborhood-space from the formal context were proposed, the similarity matrix of binary relation pairs was obtained, and the neighborhood of binary relation pairs was calculated from the covering induced by the similar classes of γ. Secondly, an update algorithm for concept sets under changing parameters was proposed, in which the concept sets were obtained without reconstruction. Combined with the covering-neighborhood of binary relation pairs under changing interval parameters, the parameter optimization model of the interval concept lattice was then built on compression theory. According to the size of the compression degree and its changing trend, the optimal values of the interval parameters were found. Finally, the validity of the model was demonstrated by an example.
Constructing method of attribute subset sequence in multi-granulation rough set model
YAO Sheng, XU Feng, WANG Jie
2016, 36(11): 2950-2953. DOI: 10.11772/j.issn.1001-9081.2016.11.2950
Concerning the construction of the attribute subset sequence in the multi-granulation rough set model, a construction method based on the distance between attributes was proposed. Firstly, the concept of the distance between attributes in an information system was introduced. Secondly, a quantitative calculation formula was given and used to compute the distance between attributes. Finally, according to these distances, the neighborhood attribute set of each attribute was obtained, and the attribute subset sequence was then constructed. The experimental results show that the proposed method is more accurate for each object class of the experiment than a randomly constructed attribute subset sequence.
Regional attribute reductions and their structural heuristic algorithms for variable precision rough sets
XIONG Fang, ZHANG Xianyong
2016, 36(11): 2954-2957. DOI: 10.11772/j.issn.1001-9081.2016.11.2954
According to the two-category case and the three-way decision regions, two types of attribute reduction for Variable Precision Rough Sets (VPRS) and their structural heuristic algorithms were studied. First of all, classification-regions were constructed from the three-way decision regions, Classification-Region Preservation (CRP) reduction and Decision-Region Preservation (DRP) reduction were proposed as quantitative expansions of qualitative attribute reduction, and structural heuristic algorithms based on cores were designed. Furthermore, the strong-weak relationships between the two kinds of regional reductions were studied, and structural heuristic algorithms from strong to weak were designed, achieving an improvement from two-way to three-way decisions. Finally, the validity of the relevant reductions and algorithms was verified on a data table and UCI data sets.
Attribute reduction in incomplete information systems based on extended tolerance relation
LUO Hao, XU Xinying, XIE Jun, ZHANG Kuo, XIE Xinlin
2016, 36(11): 2958-2962. DOI: 10.11772/j.issn.1001-9081.2016.11.2958
Current neighborhood rough sets are usually applied to complete information systems rather than incomplete ones. To solve this problem, an extended tolerance relation was proposed to deal with incomplete mixed information systems, and the associated definitions were provided. The degree of complete tolerance and a neighborhood threshold were used as the constraint conditions to find the extended tolerance neighborhood. The attribute importance of the system was obtained from the decision positive region within the neighborhood, and an attribute reduction algorithm based on the extended tolerance relation was proposed, using this importance as the heuristic factor. Seven different types of data sets from the UCI database were used for simulation, and the proposed method was compared with the Extension Neighborhood relation (EN), Tolerance Neighborhood Entropy (TRE) and Neighborhood Rough set (NR) methods respectively. The experimental results show that the proposed algorithm can ensure classification accuracy while selecting fewer attributes by reduction. Finally, the influence of the neighborhood threshold in the extended tolerance relation on classification accuracy was discussed.
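The positive-region attribute importance used as the heuristic factor above can be illustrated with a small sketch. This is not the paper's algorithm (which handles incomplete mixed data via an extended tolerance relation); it is a minimal complete-data version of the underlying idea: the importance of an attribute is the gain in dependency degree (fraction of samples in the decision positive region) when the attribute is added to a subset. The function names, distance metric and threshold are illustrative assumptions.

```python
import numpy as np

def neighborhood(X, i, attrs, delta):
    """Indices of samples within distance delta of sample i on the given attributes."""
    d = np.linalg.norm(X[:, attrs] - X[i, attrs], axis=1)
    return np.where(d <= delta)[0]

def dependency(X, y, attrs, delta):
    """Fraction of samples whose neighborhood is label-pure (the decision positive region)."""
    pos = sum(1 for i in range(len(X))
              if np.all(y[neighborhood(X, i, attrs, delta)] == y[i]))
    return pos / len(X)

def attribute_importance(X, y, B, a, delta=0.3):
    """Importance of attribute a relative to subset B: gain in dependency degree."""
    return dependency(X, y, B + [a], delta) - dependency(X, y, B, delta)
```

A greedy reduction would repeatedly add the attribute with the largest importance until the dependency stops increasing.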
Text semantic classification algorithm based on risk decision
CHENG Yusheng, LIANG Hui, WANG Yibin, LI Kang
2016, 36(11): 2963-2968. DOI: 10.11772/j.issn.1001-9081.2016.11.2963
Most traditional text classification algorithms are based on the vector space model, with a hierarchical classification tree model used for statistical analysis; such models mostly do not incorporate the semantic information of the feature items, so they may produce a large number of frequent semantic modes and increase the number of classification paths. Combining the good discriminative power of essential Emerging Patterns (eEP) in classification with the rough set model based on minimum-expected-risk decisions, a Text Semantic Classification algorithm with Threshold Optimization (TSCTO) was presented. Firstly, after obtaining the document feature frequency distribution table, the minimum threshold value was calculated by the rough set combined with the distribution density matrix. Then the high-frequency words of the semantic intra-class document frequency were obtained by combining semantic analysis with the inverse document frequency method. To obtain the simplest model, the eEP pattern was used for classification. Finally, using a similarity formula and HowNet semantic relevance degree, the text similarity score was calculated, and some thresholds were optimized by three-way decision theory. The experimental results show that the TSCTO algorithm achieves a certain improvement in text classification performance.
Construction of gene regulatory network based on hybrid particle swarm optimization and genetic algorithm
MENG Jun, SHI Guanli
2016, 36(11): 2969-2973. DOI: 10.11772/j.issn.1001-9081.2016.11.2969
MicroRNA (miRNA) is endogenous small non-coding RiboNucleic Acid (RNA), approximately 21-25 nucleotides in length, which plays an important role in gene expression by binding to the 3'-UnTranslated Region (UTR) of its mRNA target genes for translational repression or degradation of the target messenger RNA. To improve the accuracy of gene regulatory network construction, a Rough Set based hybrid Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) method (PSO-GA-RS) was proposed. Firstly, features of the sequence information were extracted; then, using rough set dependency as the fitness function, an optimal feature subset was selected through hybrid PSO and GA. Finally, a Support Vector Machine (SVM) was used to establish a model to predict unknown regulatory relationships. The experimental results show that, compared with Feature Selection based on Rough Set and PSO (PSORSFS) and the Rosetta algorithm, the accuracy, F-measure and Receiver Operating Characteristic (ROC) curve area of PSO-GA-RS improve by at most 5% on the Arabidopsis thaliana dataset and by at most 8% on the Oryza sativa dataset. The proposed method achieves improved performance in identifying true connections between miRNAs and their target genes.
Target tracking algorithm based on speeded up robust features and multi-instance learning
BAI Xiaohong, WEN Jing, ZHAO Xue, CHEN Jinguang
2016, 36(11): 2974-2978. DOI: 10.11772/j.issn.1001-9081.2016.11.2974
Concerning the influence of changing illumination, shape, appearance and occlusion on target tracking, a target tracking algorithm based on Speeded Up Robust Features (SURF) and Multi-Instance Learning (MIL) was proposed. Firstly, the SURF features of the target and its surrounding image were extracted. Secondly, SURF descriptors were introduced into MIL as the examples in the positive and negative bags. Thirdly, all the extracted SURF features were clustered and a visual vocabulary was established. Fourthly, a "word-document" matrix was established by calculating the importance of the visual words in each bag, and the latent semantic features of the bag were obtained by Latent Semantic Analysis (LSA). Finally, a Support Vector Machine (SVM) was trained with the latent semantic features of the bags, so that the MIL problem could be handled as a supervised learning problem. The experimental results show the robustness and efficiency of the proposed algorithm under variation of scale, gesture and appearance, as well as short-term partial occlusion.
Action recognition based on depth images and skeleton data
LU Zhongqiu, HOU Zhenjie, CHEN Chen, LIANG Jiuzhen
2016, 36(11): 2979-2984. DOI: 10.11772/j.issn.1001-9081.2016.11.2979
In order to make full use of depth images and skeleton data for action detection, a multi-feature human action recognition method based on depth images and skeleton data was proposed. The multi-features included the Depth Motion Map (DMM) feature and the Quadruples skeletal feature (Quad). For the depth images, DMMs were captured by projecting the depth image onto the three planes of a Cartesian coordinate system. For the skeleton data, Quad is a calibration method for skeleton features whose results depend only on the skeleton posture. Meanwhile, a multi-model probabilistic voting strategy was proposed to reduce the influence of noisy data on classification. The proposed method was evaluated on the Microsoft Research Action3D dataset and the Depth-included Human Action (DHA) database. The results indicate that the method has high accuracy and good robustness.
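As a rough illustration of the DMM idea described above, the following sketch projects each depth frame onto the three Cartesian planes and accumulates the absolute inter-frame differences. The side and top projections are simplified here to max-projections along one axis; the paper's actual feature may differ in projection and normalization details, so treat this as an assumed minimal version.

```python
import numpy as np

def depth_motion_maps(frames):
    """Front/side/top Depth Motion Maps from a depth video of shape (T, H, W).

    front view: the depth map itself; side/top views: max-projection of
    depth along width/height (a simplification). Each DMM accumulates the
    absolute difference between consecutive projected frames, i.e. the
    motion energy seen from that viewpoint.
    """
    views = {
        "front": frames,             # (T, H, W)
        "side":  frames.max(axis=2), # (T, H): strongest depth per row
        "top":   frames.max(axis=1), # (T, W): strongest depth per column
    }
    return {name: np.abs(np.diff(proj, axis=0)).sum(axis=0)
            for name, proj in views.items()}
```

The three resulting maps would then be concatenated (or described by a further descriptor) to form the depth-channel feature vector.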
Super pixel segmentation algorithm based on Hadoop
WANG Chunbo, DONG Hongbin, YIN Guisheng, LIU Wenjie
2016, 36(11): 2985-2992. DOI: 10.11772/j.issn.1001-9081.2016.11.2985
In view of the high time complexity of pixel-level segmentation, a superpixel segmentation algorithm was proposed for high-resolution images, in which superpixels instead of the original pixels were used as the segmentation processing elements and the characteristics of Hadoop and superpixels were combined. Firstly, a static and dynamic adaptive algorithm for multiple tasks was proposed, which could reduce the coupling between blocks in HDFS (Hadoop Distributed File System) and task arrangement. Secondly, based on distance and gradient constraints on the superpixels formed by the boundaries of superpixel blocks, a parallel watershed segmentation algorithm was proposed for each Map node task; meanwhile, two merging strategies for superpixel block merging in the Shuffle process were proposed and compared. Finally, the combination of superpixels was optimized to complete the final segmentation in the Reduce node task. The experimental results show that the proposed algorithm is superior to the Simple Linear Iterative Clustering (SLIC) algorithm and the Normalized cut (Ncut) algorithm in Boundary Recall ratio (BR) and Under-segmentation Error (UE), and the segmentation time for high-resolution images is remarkably decreased.
Optimization algorithm based on R-λ model rate control in H.265/HEVC
LIAO Jundong, LIU Licheng, HAO Luguo, LIU Hui
2016, 36(11): 2993-2997. DOI: 10.11772/j.issn.1001-9081.2016.11.2993
In order to improve the bit-allocation effect of the Largest Coding Unit (LCU) and the update precision of the parameters (α, β) in the R-λ model based rate control algorithm of H.265/HEVC, an optimized rate control algorithm was proposed. Bit allocation was carried out on the existing basic coding unit, and the parameters (α, β) were updated using the coding distortion degree. The experimental results show that in the constant bit rate case, compared with the HM13.0 rate control algorithm, the PSNR gain of the three components improves by at least 0.76 dB, the transmitted coding bits are reduced by at least 0.46%, and the coding time is reduced by at least 0.54%.
Virtual data center management platform based on software defined network
ZUO Cheng, YU Hongfang
2016, 36(11): 2998-3005. DOI: 10.11772/j.issn.1001-9081.2016.11.2998
Aiming at the code rigidity and upgrade difficulty of existing Virtual Data Center (VDC) management platforms, a VDC management platform based on Software Defined Network (SDN) was proposed. The platform was composed of a VDC Management subsystem (VDCM), a VDC Computing Resources Control subsystem (VDCCRC) and a VDC Network Resources Control subsystem (VDCNRC), with a loosely coupled architecture built through RESTful API interaction between the subsystems. VDCNRC managed the data center network via an SDN controller, VDCCRC managed the computing resources of the data center via an open source cloud computing platform, and a VDC management algorithm framework was built into the VDC management subsystem to rapidly develop VDC management algorithms suitable for production environments. With a test environment set up using Mininet, Openstack and Floodlight, the results show the proposed platform can support running, migrating or deleting virtual machines via Openstack, implement bandwidth resource isolation between VDCs via an Openflow controller, and support creating, deleting and updating VDCs.
Link connectivity and restricted link connectivity of augmented bubble-sort networks
QIU Yana, YANG Yuxing
2016, 36(11): 3006-3009. DOI: 10.11772/j.issn.1001-9081.2016.11.3006
In view of the small link connectivity, small restricted edge connectivity and weak fault tolerance of bubble-sort networks, a kind of augmented bubble-sort network was designed by adding some links to the original bubble-sort network. By constructing a minimum link cut of the n-dimensional augmented bubble-sort network, its link connectivity was proved to be n, which implies that any two nodes remain connected even after deleting n-1 links. The restricted link connectivity of the n-dimensional augmented bubble-sort network was proved to be 2n-2; therefore, any two nodes remain connected even after deleting 2n-3 links, provided the removal of these links does not create singletons. Based on the above results, examples show that augmented bubble-sort networks are better than the original bubble-sort networks.
Routing protocol based on unequal partition area for wireless sensor network
LI Shuangshuang, YANG Wenzhong, WU Xiangqian
2016, 36(11): 3010-3015. DOI: 10.11772/j.issn.1001-9081.2016.11.3010
Responding to the unreasonable distribution of cluster head nodes and the "hot spots" caused by uneven energy load in Wireless Sensor Networks (WSN), an Unequal partition Area Uneven Clustering routing protocol (UAUC) was proposed. The network was divided into unequal partition areas, and appropriate cluster head nodes in each area were selected on the basis of an energy factor, a distance factor and an intensity factor. Meanwhile, a load-balancing path tree was built between cluster head nodes to solve the "hot spots" problem in data transmission. In comparison experiments with the LEACH (Low Energy Adaptive Clustering Hierarchy) protocol, the DEBUC (Distributed Energy-Balanced Unequal Clustering routing) protocol and the HRPNC (Hierarchical Routing Protocol based on Non-uniform Clustering) protocol, UAUC achieved a more reasonable distribution of cluster head nodes. The network lifetime of UAUC exceeded that of LEACH, DEBUC and HRPNC by 88%, 12% and 17.5% respectively. The average residual energy of UAUC was higher, and the variance of node residual energy lower, than those of LEACH, DEBUC and HRPNC. Moreover, the aggregate number of data packets of UAUC was higher than that of LEACH, DEBUC and HRPNC by 400%, 87.5% and 17.5% respectively. The experimental results show that UAUC can effectively improve energy efficiency and the aggregate number of data packets, balance energy consumption and prolong the network lifetime.
Distributed fault detection for wireless sensor network based on cumulative sum control chart
LIU Qiuyue, CHENG Yong, WANG Jun, ZHONG Shuiming, XU Liya
2016, 36(11): 3016-3020. DOI: 10.11772/j.issn.1001-9081.2016.11.3016
Given the stringent resources and distributed nature of wireless sensor networks, fault diagnosis of sensor nodes faces great challenges. In order to solve the problem that existing approaches to diagnosing sensor networks have a high false alarm ratio and considerable computational redundancy on nodes, a new fault detection mechanism based on the Cumulative Sum control chart (CUSUM) and neighbor coordination was proposed. Firstly, the historical data on a single node were analyzed by CUSUM to improve the sensitivity of fault diagnosis and locate the change point. Then, fault nodes were detected through judging the status of nodes by data exchange between neighbor nodes. The experimental results show that the detection accuracy is over 97.7% and the false alarm ratio is below 2% even when the sensor fault probability in the network is up to 35%. Hence, the proposed algorithm has high detection accuracy and a low false alarm ratio even under high fault probabilities, and clearly reduces the influence of the sensor fault probability.
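The per-node CUSUM analysis can be sketched with the standard one-sided cumulative sum chart; the abstract does not give the paper's exact parameterization, so the allowance k and decision threshold h below are illustrative assumptions.

```python
def cusum_detect(samples, mu0, k=0.5, h=4.0):
    """One-sided CUSUM: flag an upward mean shift away from baseline mu0.

    k is the allowance (roughly half the shift worth detecting) and h the
    decision threshold. The statistic accumulates deviations above mu0 + k
    and resets at zero; the change point is located just after the last reset.
    Returns (alarm_index, change_point), or (None, None) if no alarm fires.
    """
    s, last_zero = 0.0, 0
    for t, x in enumerate(samples):
        s = max(0.0, s + (x - mu0) - k)
        if s == 0.0:
            last_zero = t          # statistic reset: no shift up to here
        if s > h:
            return t, last_zero + 1
    return None, None
```

In the paper's setting, a node flagged by its own CUSUM would then be confirmed or cleared by exchanging readings with its neighbors.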
Cooperative delay tolerant network routing strategy based on urban public transport mobility model
KOU Lan, YANG Lina, LIU Kezheng, HU Min, MAO Yiding
2016, 36(11): 3021-3027. DOI: 10.11772/j.issn.1001-9081.2016.11.3021
How to use limited transmission opportunities to reliably transmit vehicle service-perception information is a "bottleneck" problem in the development of intelligent transportation. By utilizing the motion laws of vehicles in public transport, a hop-by-hop message forwarding mechanism based on opportunistic contact between nodes was put forward, and, in combination with the characteristics of the public transport system, a cooperative Delay Tolerant Network (DTN) routing strategy (TF) based on an urban public transport mobility model was designed. Firstly, according to the characteristics of the public transport mobility model, nodes with fixed motion paths, such as buses and intercity buses, were grouped by path, and a packet DTN routing algorithm based on fixed moving paths was proposed. Then taxi and pedestrian nodes were defined as free nodes, and a DTN routing strategy based on forward factor control was designed as a supplement to the packet routing mechanism. The simulation results show that compared with the Epidemic, Prophet and Spray And Wait (SAW) routing algorithms, the TF routing algorithm has a higher message delivery ratio and lower average delay.
Load balanced routing protocol in electric power communication networks
ZHAO Canming, LI Zhuhong, YAN Fan, ZHANG Xinming
2016, 36(11): 3028-3032. DOI: 10.11772/j.issn.1001-9081.2016.11.3028
In electric power communication networks, load balance can reduce overloading on bottleneck links and improve the reliability and utilization of network resources. According to the structural and flow characteristics of electric power communication networks, a load balanced routing protocol combining deterministic routing and opportunistic routing was proposed. Each node determined a candidate set to relay data packets from an area centered on it; each candidate was prioritized according to its precise local cost and estimated remaining cost, and the forwarding probability was determined based on this priority. Compared with the Load Balance Advanced-Open Shortest Path First (LBA-OSPF) protocol, the proposed routing protocol can reduce the average load by 32.3% and the end-to-end delay by 50.3%.
Improved adaptive linear minimum mean square error channel estimation algorithm in discrete wavelet transform domain based on empirical mode decomposition-singular value decomposition difference spectrum
XIE Bin, YANG Liqing, CHEN Qin
2016, 36(11): 3033-3038. DOI: 10.11772/j.issn.1001-9081.2016.11.3033
In view of the relatively large channel estimation error of the current Singular Value Decomposition-Linear Minimum Mean Square Error (SVD-LMMSE) algorithm, an improved adaptive Linear Minimum Mean Square Error (LMMSE) channel estimation algorithm in the Discrete Wavelet Transform (DWT) domain based on the Empirical Mode Decomposition-Singular Value Decomposition (EMD-SVD) difference spectrum was proposed. The DWT was used to threshold the high-frequency coefficients of the signal after Least Square (LS) channel estimation and pre-filtering. Then, combined with the adaptive algorithm based on the EMD-SVD difference spectrum, the weak signal was extracted from the strongly noisy wavelet coefficients and the signal was reconstructed. Finally, a corresponding threshold was set based on the mean noise variance inside and outside the Cyclic Prefix (CP), and the noise within the cyclic prefix length was handled to further reduce the influence of noise. The Bit Error Rate (BER) and Mean Squared Error (MSE) performances of the algorithm were simulated. The simulation results show that the improved algorithm outperforms the classical LS algorithm, the traditional LMMSE algorithm and the popular SVD-LMMSE algorithm, and can not only reduce the influence of noise but also effectively improve the accuracy of channel estimation.
Analysis of delay performance of hybrid automatic repeat request in meteor burst communication
XIA Bing, LI Linlin, ZHENG Yanshan
2016, 36(11): 3039-3043. DOI: 10.11772/j.issn.1001-9081.2016.11.3039
In the modeling and simulation of meteor burst communication systems, concerning the network delay caused by Hybrid Automatic Repeat reQuest (HARQ), an estimation model of transmission delay based on HARQ was proposed. Firstly, in consideration of the network structure and channel characteristics of meteor burst communication, a network delay model was constructed by analyzing the theory of HARQ. Then, based on queuing theory, the improvement mechanism of HARQ was introduced to establish estimation models of the transmission delay of Type-Ⅰ and Type-Ⅱ HARQ. Finally, simulations were carried out to compare and analyze the transmission delay performance of the two kinds of HARQ. When the packet transmission accuracy or packet transmission time changes independently, the transmission delay of Type-Ⅱ HARQ is less than that of Type-Ⅰ HARQ. The experimental results show that Type-Ⅱ HARQ has an advantage in network delay performance in meteor burst communication compared to Type-Ⅰ HARQ.
Data driven parallel incremental support vector machine learning algorithm based on Hadoop framework
PI Wenjun, GONG Xiujun
2016, 36(11): 3044-3049. DOI: 10.11772/j.issn.1001-9081.2016.11.3044
Since the traditional Support Vector Machine (SVM) algorithm has difficulty dealing with large-scale training data, an efficient data-driven Parallel Incremental Adaboost-SVM (PIASVM) learning algorithm based on Hadoop was proposed. An ensemble system was used to make each classifier process a partition of the data, and the classification results were then integrated to obtain the combined classifier. Weights were used to depict the spatial distribution properties of the samples, which were iteratively reweighted during the incremental training stage, and a forgetting factor was applied to select new samples and eliminate historical samples. Also, a controller component based on HBase was used to schedule the iterative procedure, persist intermediate results and reduce the bandwidth pressure of iterative MapReduce. The experimental results on multiple data sets demonstrate that the proposed algorithm has good speedup, sizeup and scaleup performance, and high processing capacity for large-scale data while guaranteeing high accuracy.
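The iterative sample reweighting with a forgetting factor can be sketched as a single AdaBoost-style update step. This is an assumed formulation rather than the paper's exact rule: the forgetting factor decays all current weights (diminishing historical samples) before the misclassified samples are boosted for the next incremental round.

```python
import numpy as np

def update_sample_weights(w, errors, forgetting=0.9):
    """One incremental reweighting step (AdaBoost-style, with forgetting).

    w:       current sample weights (sums to 1)
    errors:  boolean array, True where the current ensemble misclassifies
    The forgetting factor decays the influence of historical samples;
    misclassified samples are boosted so the next weak learner focuses on them.
    """
    eps = max(np.average(errors, weights=w), 1e-10)   # weighted error rate
    alpha = 0.5 * np.log((1 - eps) / eps)             # learner confidence
    w = w * forgetting * np.exp(alpha * np.where(errors, 1, -1))
    return w / w.sum()                                # renormalize
```

In PIASVM each Map task would apply such updates to its own data partition, with the HBase-backed controller persisting the weights between MapReduce iterations.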
Efficient file management system based on cloud instances in smartphone
MA Junfeng, WANG Yan
2016, 36(11): 3050-3054. DOI: 10.11772/j.issn.1001-9081.2016.11.3050
To overcome the high energy and bandwidth consumption of existing cloud storage technology when applied to smartphones, an efficient and safe File Management system based on Cloud Instances (FM-CI) was designed, with the Dropbox platform as the cloud service provider. FM-CI supported download, compress, encrypt and convert operations, as well as file transfer between the cloud storage spaces of two smartphone users. In addition, since frequently opening cloud instances may still increase user cost, a protocol for users to share their idle instances and a file transfer scheme for the shared instances were designed. Simulation results show that FM-CI can efficiently complete file operations with less time and bandwidth, and that its performance is better than that of the latest cloud storage schemes.
Hybrid firefly Memetic algorithm based on simulated annealing
LIU Ao, DENG Xudong, LI Weigang
2016, 36(11): 3055-3061. DOI: 10.11772/j.issn.1001-9081.2016.11.3055
A mathematical analysis was carried out to reveal that the Firefly Algorithm (FA) runs the risk of premature convergence and being trapped in local optima, and a hybrid Memetic algorithm based on simulated annealing was proposed. In the hybrid algorithm, the FA was employed to keep the diversity of the firefly population and the global exploration ability of the proposed algorithm. The simulated annealing operator was then incorporated to escape local optima: it carried out local search with part of the firefly individuals by accepting bad solutions with some probability, and the attracting process and the annealing process were conducted simultaneously to reduce complexity. Finally, the performance of the proposed algorithm and the comparison algorithms was tested on ten standard functions. The experimental results show that the proposed algorithm finds the optimal solutions on six functions, outperforms the firefly algorithm, particle swarm optimization and other algorithms in terms of optimal value, mean value and standard deviation, and finds better solutions than the firefly algorithm on four functions.
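A minimal sketch of the two ingredients the hybrid combines: the standard firefly attraction step and the Metropolis acceptance rule of simulated annealing. The parameter values (beta0, gamma, alpha, the temperature) are illustrative defaults, not those used in the paper.

```python
import math
import random

def sa_accept(delta, temperature):
    """Metropolis criterion: always accept improvements; accept a worse
    candidate (delta > 0) with probability exp(-delta / T), which lets the
    search escape local optima while T is high and turns greedy as T cools."""
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly xi toward the brighter firefly xj (1-D for brevity).
    Attractiveness beta decays exponentially with squared distance, and a
    small random perturbation of scale alpha keeps the population diverse."""
    r2 = (xi - xj) ** 2
    beta = beta0 * math.exp(-gamma * r2)
    return xi + beta * (xj - xi) + alpha * (random.random() - 0.5)
```

In the hybrid, a candidate produced by `firefly_move` would be kept or discarded via `sa_accept` on the fitness difference, with the temperature lowered every generation.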
Fast high average-utility itemset mining algorithm based on utility-list structure
WANG Jinghua, LUO Xiangzhou, WU Qian
2016, 36(11): 3062-3066. DOI: 10.11772/j.issn.1001-9081.2016.11.3062
In the field of data mining, high utility itemset mining has been widely studied; however, it does not consider the effect of itemset length. To address this issue, high average-utility itemset mining has been proposed, but the existing mining algorithms take a lot of time to dig out the high average-utility itemsets. To solve this problem, an improved high average-utility itemset mining algorithm, named FHAUI (Fast High Average-Utility Itemset), was proposed. FHAUI stored the utility information in utility-lists and mined all the high average-utility itemsets from the utility-list structure; at the same time, it adopted a two-dimensional matrix to effectively reduce the number of join operations. Finally, the experimental results on several classical datasets show that FHAUI greatly reduces the number of join operations and its time consumption.
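The length normalization that distinguishes average utility from plain utility can be sketched directly on a transaction database; a real miner such as FHAUI would work on utility-lists rather than rescanning transactions, so the brute-force function below is only a definitional illustration.

```python
def average_utility(itemset, transactions, profits):
    """Average utility of an itemset over a transaction database.

    transactions: list of {item: quantity}; profits: {item: unit profit}.
    The itemset's utility in every transaction containing it is summed,
    then divided by the itemset length - this normalization removes the
    bias of plain high-utility mining toward longer itemsets.
    """
    total = 0
    for t in transactions:
        if all(i in t for i in itemset):
            total += sum(t[i] * profits[i] for i in itemset)
    return total / len(itemset)
```

An itemset is "high average-utility" when this value meets a user-given minimum average-utility threshold.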
Overview on reversible data hiding in encrypted domain
KE Yan, ZHANG Minqing, LIU Jia, YANG Xiaoyuan
2016, 36(11): 3067-3076. DOI: 10.11772/j.issn.1001-9081.2016.11.3067
Reversible data hiding is a new research direction of information hiding technology. Reversible data hiding in the encrypted domain combines the technologies of signal processing in the encrypted domain and information hiding, and can play an important "double insurance" role for information security in data processing. In particular, with the adoption of cloud services, reversible data hiding in the encrypted domain has become a focal issue for achieving privacy protection in the cloud environment. Concerning the current technical requirements, the background and development of reversible data hiding in the encrypted domain were introduced, and the current technical difficulties were pointed out and analysed. By studying typical algorithms of various types, the reversible data hiding algorithms in the encrypted domain were systematically classified, and their technical frameworks, characteristics and limitations in different applications were analysed. Finally, focusing on the technology needs and difficulties, several future directions in this field were proposed.
Forward secure identity-based signcryption from lattice
XIANG Wen, YANG Xiaoyuan, WANG Xu'an, WU Liqiang
2016, 36(11): 3077-3081. DOI: 10.11772/j.issn.1001-9081.2016.11.3077
To solve the problem that current lattice-based signcryption schemes cannot achieve forward security, a new identity-based signcryption scheme with forward security was proposed. Firstly, a lattice basis delegation algorithm was used to update the users' public and private keys. Then, preimage sampleable functions based on Learning With Errors (LWE) over lattices were used to sign the message, and the signature was also used to encrypt the message. The scheme was proved to be adaptively INDistinguishable selective IDentity and Chosen-Ciphertext Attack (IND-sID-CCA2) secure, strongly UnForgeable against Chosen-Message Attack (sUF-CMA) and forward secure. Compared with pairing-based signcryption schemes, the proposed scheme has advantages in computational efficiency and ciphertext expansion rate.
Separable reversible Hexadecimal data hiding in encrypted domain
KE Yan, ZHANG Minqing, LIU Jia
2016, 36(11): 3082-3087. DOI: 10.11772/j.issn.1001-9081.2016.11.3082
Abstract | PDF (982KB) | References | Related Articles | Metrics
In view of the poor separability and the carrier recovery distortion of current reversible data hiding technology, a novel separable reversible data hiding scheme in the encrypted domain was proposed. Hexadecimal data was embedded by recoding the redundancy of the ciphertext produced by the Ring-Learning With Errors (R-LWE) encryption algorithm. From the embedded ciphertext, the additional data could be extracted by using the data-hiding key, and the original data could be recovered losslessly by using the encryption key; the processes of extraction and decryption were separable. By deriving the error probability of the scheme, the parameters directly related to the scheme's correctness were discussed, and reasonable values of the parameters were obtained by experiments. The experimental results demonstrate that the proposed scheme guarantees lossless reversibility, and that 1 bit of plaintext data can carry at most 4 bits of additional data in the encrypted domain.
Reversible data hiding in encrypted medical images based on bit plane compression
ZHENG Hongying, REN Wen, CHENG Huihui
2016, 36(11): 3088-3092. DOI: 10.11772/j.issn.1001-9081.2016.11.3088
Abstract | PDF (770KB) | References | Related Articles | Metrics
Present reversible data hiding algorithms for encrypted medical images have low embedding capacity and need to partition Regions Of Interest (ROI), so the receiver's operation is not flexible. Combining the characteristics of medical images, a novel separable reversible data hiding method for encrypted medical images was proposed. Firstly, a medical image with 256 gray levels was decomposed into 8 bit planes, and the highest four bit planes were compressed, which left space to be filled with the peak pixel values; then the image was reconstructed. The head, middle and tail of the reconstructed image were encrypted respectively. After that, information was embedded in the encrypted image by histogram shifting, with the embedding position in the tail chosen by the data-hiding key. For the receiver, information extraction and image restoration can be performed separately according to the keys. The experimental results demonstrate that information can be stored in the compressed image to avoid transmitting auxiliary information, which effectively improves the embedding capacity with higher security.
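The bit-plane decomposition step can be sketched in a few lines (a pure-Python illustration with our own function names, not the authors' implementation; the compression of the high planes and the histogram-shifting embedding are omitted):

```python
def to_bit_planes(pixels):
    """Decompose 8-bit gray values into 8 bit planes (plane 7 = most significant)."""
    return [[(p >> b) & 1 for p in pixels] for b in range(8)]

def from_bit_planes(planes):
    """Losslessly reassemble the pixels from their 8 bit planes."""
    return [sum(planes[b][i] << b for b in range(8)) for i in range(len(planes[0]))]

pixels = [0, 17, 128, 200, 255]   # a toy row of gray values
planes = to_bit_planes(pixels)
high_planes = planes[4:]          # the four highest planes, the ones compressed
restored = from_bit_planes(planes)
```

Any lossless coder applied to `high_planes` would then free space for the peak pixel values before the image is reassembled and encrypted.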
Advanced marking scheme algorithm based on multidimensional pseudo-random sequences
TANG Yan, LYU Guonian, ZHANG Hong
2016, 36(11): 3093-3097. DOI: 10.11772/j.issn.1001-9081.2016.11.3093
Abstract | PDF (946KB) | References | Related Articles | Metrics
The Advanced Marking Scheme (AMS) is a relatively efficient algorithm for tracing the IP addresses of Distributed Denial of Service (DDoS) attackers. However, because it uses hash functions to compress edge addresses, the AMS algorithm has defects such as high complexity, poor confidentiality and a high false-positive ratio. In order to improve its efficiency, an AMS algorithm based on multidimensional pseudo-random sequences was designed. On the one hand, replacing the original hash functions, an edge sampling matrix was constructed with a fully hardware device in the router to achieve compression coding of IP addresses. On the other hand, combining the compressed code of the edge address with the calculation of edge weights on the victim's side, the output of the DDoS attack path graph was realized. In simulation experiments, the performance of the AMS algorithm based on multidimensional pseudo-random sequences is basically the same as that of the original algorithm, while it can effectively reduce misjudgment and quickly identify forged paths. The experimental results show that the proposed algorithm has high security, fast computation and strong anti-attack ability.
Partially blind signature scheme with ID-based server-aided verification
REN Xiaokang, CHEN Peilin, CAO Yuan, LI Yanan, YANG Xiaodong
2016, 36(11): 3098-3102. DOI: 10.11772/j.issn.1001-9081.2016.11.3098
Abstract | PDF (704KB) | References | Related Articles | Metrics
Combining ID-based partially blind signatures with server-aided verification, a partially blind signature scheme with ID-based server-aided verification was presented to overcome the shortcomings of ID-based partially blind signature schemes such as strong security assumptions and high computation cost. Most of the computing tasks of signature verification were accomplished by a server, which greatly reduced the verifier's computational overhead. Based on bilinear mapping, a concrete partially blind signature scheme with ID-based server-aided verification was proposed and proven secure in the standard model. Analysis results show that the proposed scheme greatly reduces the computational complexity of signature verification. The proposed scheme is more efficient than Li's scheme (LI F, ZHANG M, TAKAGI T. Identity-based partially blind signature in the standard model for electronic cash. Mathematical and Computer Modelling, 2013, 58(1):196-203) and Zhang's scheme (ZHANG J, SUN Z. An ID-based server-aided verification short signature scheme avoid key escrow. Journal of Information Science and Engineering, 2013, 29(3):459-473).
4G indoor physical layer authentication algorithm based on support vector machine
YANG Jianxi, DAI Chuping, JIANG Tingting, DING Zhengguang
2016, 36(11): 3103-3107. DOI: 10.11772/j.issn.1001-9081.2016.11.3103
Abstract | PDF (934KB) | References | Related Articles | Metrics
Aiming at the problem that traditional physical layer security algorithms do not make full use of the channel, a new physical layer channel detection algorithm was proposed. In view of the essential properties of the 4G wireless channel, and combined with hypothesis testing, a Support Vector Machine (SVM) was used to analyse the metrics of the channel vector to decide whether counterfeit attackers exist. Simulation experiments show that the accuracy of the proposed algorithm with a linear kernel is more than 98%, and its accuracy with a Radial Basis Function (RBF) kernel is more than 99%. The proposed algorithm can make full use of the wireless channel characteristics of different spatial locations to authenticate information sources one by one, and hence enhances the security of the system.
Trusted access authentication protocol for mobile nodes in Internet of things
ZHANG Xin, YANG Xiaoyuan, ZHU Shuaishuai, YANG Haibing
2016, 36(11): 3108-3112. DOI: 10.11772/j.issn.1001-9081.2016.11.3108
Abstract | PDF (787KB) | References | Related Articles | Metrics
In view of the problem that mobile nodes lack trusted verification in Wireless Sensor Networks (WSN), a mobile node access authentication protocol for the Internet of Things (IoT) was proposed. Mutual authentication and key agreement between sensor nodes and mobile sink nodes were realized, and at the same time the trustworthiness of the mobile node platform was authenticated by the sensor nodes. The authentication scheme was based on trusted computing technology without using a base station, and its concrete steps were described in detail. Pseudonyms and the corresponding public/private keys were used in authentication to protect user privacy. The proposed scheme is provably secure in the CK (Canetti-Krawczyk) security model. Compared with similar mobile node schemes, the protocol is more suitable for fast authentication in the IoT, with less computation and communication overhead.
Hierarchical co-location pattern mining approach of unevenly distributed fuzzy spatial objects
YU Qingying, LUO Yonglong, WU Qian, CHEN Chuanming
2016, 36(11): 3113-3117. DOI: 10.11772/j.issn.1001-9081.2016.11.3113
Abstract | PDF (904KB) | References | Related Articles | Metrics
Focusing on the issue that existing co-location pattern mining algorithms fail to handle unevenly distributed spatial objects effectively, a hierarchical co-location pattern mining approach for unevenly distributed fuzzy spatial objects was proposed. Firstly, an unevenly distributed dataset generation method was put forward. Secondly, the unevenly distributed dataset was partitioned by a hierarchical mining method so that each region had an even spatial distribution. Finally, spatial data mining of the separated fuzzy objects was conducted by means of the improved PO_RI_PC algorithm: based on the distance variation coefficient, the neighborhood relationship graph of each sub-region was constructed to complete the regional fusion, and then the co-location pattern mining was realized. The experimental results show that, compared with the traditional method, the proposed method has higher execution efficiency; as the number of instances and the degree of unevenness change, more co-location sets are mined, with an average increase of about 25% under the same conditions, and more accurate mining results are obtained.
Fruit fly optimization algorithm based on simulated annealing
ZHANG Bin, ZHANG Damin, A Minghan
2016, 36(11): 3118-3122. DOI: 10.11772/j.issn.1001-9081.2016.11.3118
Abstract | PDF (876KB) | References | Related Articles | Metrics
Concerning the defects of low optimization precision and a tendency to fall into local optima in the Fruit Fly Optimization Algorithm (FOA), a Fruit Fly Optimization Algorithm based on Simulated Annealing (SA-FOA) was proposed. The solution-acceptance mechanism and the optimization step size were improved in SA-FOA: the acceptance probability was based on the generalized Gibbs distribution, acceptance of a solution met the Metropolis criterion, and the step length decreased with increasing iterations according to the idea of non-uniform mutation. Simulation results on several typical test functions show that the improved algorithm has a high global searching capability, while the optimization accuracy and convergence rate are also greatly improved. Therefore, it can be used to optimize the parameters of neural networks and service scheduling models.
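The two improvements named above, Metropolis acceptance and a shrinking step length, can be sketched as follows (an illustrative fragment under assumed parameter forms, not the authors' exact update rules):

```python
import math
import random

def metropolis_accept(delta, temperature, rng):
    """Metropolis criterion: always accept an improvement (delta <= 0);
    accept a worse solution with probability exp(-delta / T)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

def step_length(step0, iteration, max_iter, power=2.0):
    """Search step shrinking with iterations (non-uniform-mutation style decay)."""
    return step0 * (1.0 - iteration / max_iter) ** power

rng = random.Random(42)
always = metropolis_accept(-0.5, 1.0, rng)   # an improvement is always accepted
early = step_length(1.0, 0, 100)             # full step at the start
late = step_length(1.0, 100, 100)            # step vanishes at the end
```

In the full algorithm the temperature would also decrease over iterations according to an annealing schedule.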
Optimization of extreme learning machine parameters by adaptive chaotic particle swarm optimization algorithm
CHEN Xiaoqing, LU Huijuan, ZHENG Wenbin, YAN Ke
2016, 36(11): 3123-3126. DOI: 10.11772/j.issn.1001-9081.2016.11.3123
Abstract | PDF (595KB) | References | Related Articles | Metrics
Since Extreme Learning Machine (ELM) does not handle non-linear data ideally, and the randomization of ELM parameters is not conducive to model generalization, an improved ELM algorithm was proposed. The parameters of ELM were optimized by the Adaptive Chaotic Particle Swarm Optimization (ACPSO) algorithm to increase the stability of the algorithm and improve the accuracy of ELM for gene expression data classification. Simulation experiments were carried out on UCI gene data. The results show that Adaptive Chaotic Particle Swarm Optimization-Extreme Learning Machine (ACPSO-ELM) has good stability and reliability, and effectively improves the accuracy of gene classification over existing algorithms such as Detecting Particle Swarm Optimization-Extreme Learning Machine (DPSO-ELM) and Particle Swarm Optimization-Extreme Learning Machine (PSO-ELM).
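A minimal sketch of the two generic ingredients behind such a scheme, a chaotic logistic map (one common choice for the "chaotic" part of ACPSO) and a standard PSO update; the parameter values and names are our assumptions, not the authors':

```python
def logistic_map(x, mu=4.0):
    """Chaotic logistic map, often used to perturb PSO parameters adaptively."""
    return mu * x * (1.0 - x)

def pso_step(pos, vel, pbest, gbest, w, c1=2.0, c2=2.0, r1=0.5, r2=0.5):
    """One standard PSO velocity/position update for a one-dimensional particle.
    r1, r2 are the random factors (fixed here for a deterministic demo)."""
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

new_pos, new_vel = pso_step(0.0, 0.0, pbest=1.0, gbest=2.0, w=0.5)
trajectory = []
x = 0.3
for _ in range(10):          # the chaotic sequence stays inside (0, 1)
    x = logistic_map(x)
    trajectory.append(x)
```

In ACPSO-ELM, each particle would encode the ELM's random input weights and biases, with classification accuracy as the fitness.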
Task coordination and workload balance method for multi-robot based on trading-tree
SHEN Li, LI Jie, ZHU Huayong
2016, 36(11): 3127-3130. DOI: 10.11772/j.issn.1001-9081.2016.11.3127
Abstract | PDF (765KB) | References | Related Articles | Metrics
In task decomposition and coordination for multiple robots, workload imbalance still arises in tasks with partial order constraints. To overcome this problem, a task coordination and workload balance method for multiple robots based on a trading-tree was proposed. Firstly, the task decomposition problem satisfying partial order constraints was described as a constraint graph. Secondly, an initial task assignment strategy was proposed according to the directed weighted graph, and the problem of task coordination among multiple robots was solved by using an improved Dijkstra algorithm. Finally, a workload balance strategy was proposed to balance each robot's workload without violating any constraints, via a protocol based on the trading-tree. The experimental results show that, compared with the Dijkstra algorithm alone, after applying the workload balance strategy the efficiency significantly increases by 12% and the workload difference reduces by 30%.
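The shortest-path subproblem that the (improved) Dijkstra algorithm addresses can be illustrated with the textbook priority-queue version over a directed weighted graph:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a directed weighted graph,
    given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
dist = dijkstra(g, "a")
```

The paper's improvement and the trading-tree protocol layer on top of this basic routine; they are not reproduced here.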
Evaluation model based on dual-threshold constrained tolerance dominance relation
YU Shunkun, YAN Hongxu
2016, 36(11): 3131-3135. DOI: 10.11772/j.issn.1001-9081.2016.11.3131
Abstract | PDF (831KB) | References | Related Articles | Metrics
The classical dominance relation rough set is too strict about attribute values when solving dominance classes, which may lead to failure of the evaluation model, while the single-threshold constrained tolerance dominance relation rough set is too loose about the number of attributes, which may cause inconsistency between the evaluation results and human cognitive judgment in ordered information systems. Considering these problems, a rough evaluation model based on a dual-threshold constrained tolerance dominance relation was proposed. Firstly, the concept of the dual-threshold constrained tolerance dominance relation was proposed and its relevant properties were studied. Then, based on this extended dominance relation, the definition of dominance degree was proposed and a rough evaluation model was built by using statistical analysis. Finally, the model was applied to the comprehensive strength evaluation of a regional building industry, and the sorting results were verified against those of the classical dominance relation rough set. According to the results, the proposed model is more rational and efficient for multi-attribute decision issues.
Integrated berth and quay-crane scheduling based on improved genetic algorithm
YANG Jie, GAO Hong, LIU Tao, LIU Wei
2016, 36(11): 3136-3140. DOI: 10.11772/j.issn.1001-9081.2016.11.3136
Abstract | PDF (771KB) | References | Related Articles | Metrics
A strategy for integrated berth and quay-crane scheduling was proposed to cope with the unreasonable allocation of port resources in container terminals. First, a nonlinear mixed integer programming model aiming at minimizing the port operational cost was presented, with the loading and unloading cost of quay-cranes considered in the objective. To make the model more realistic, the handling time of a vessel was assumed to depend on the number of assigned quay-cranes. Second, an improved genetic algorithm based on the extenics dependent function was used to solve the model. In this algorithm, infeasible solutions play an important role: they were evaluated by their extenics dependent degrees, and some infeasible solutions were always retained in the population to maintain its diversity, which improved the local search ability of the traditional genetic algorithm. Finally, the effectiveness and efficiency of the proposed model and algorithm were verified on several test instances. Compared with the model that does not consider the loading and unloading cost of quay-cranes, the waste of resources is effectively reduced.
Clustering algorithm for split delivery vehicle routing problem
XIANG Ting, PAN Dazhi
2016, 36(11): 3141-3145. DOI: 10.11772/j.issn.1001-9081.2016.11.3141
Abstract | PDF (735KB) | References | Related Articles | Metrics
A clustering algorithm which arranges routes after grouping was proposed to solve the Split Delivery Vehicle Routing Problem (SDVRP). Considering the balance of vehicle loads and the characteristics of feasible solutions, first, the customers whose demand was greater than or equal to the vehicle capacity were served in advance. Then, combining the distance between customers with their demands, a split threshold was set to limit the load of each vehicle to a certain range. According to the nearest-neighbor principle, all customers were clustered into groups; if the load of a group did not reach the minimum vehicle load but exceeded the capacity when a new customer was added, the new customer's demand was split and adjusted. Finally, after all customers were divided into groups, the route for each group was arranged by the Ant Colony Optimization (ACO) algorithm. The experimental results show that the proposed algorithm has higher stability and obtains better results on SDVRP.
Domain-specific term recognition method based on word embedding and conditional random field
FENG Yanhong, YU Hong, SUN Geng, ZHAO Yujin
2016, 36(11): 3146-3151. DOI: 10.11772/j.issn.1001-9081.2016.11.3146
Abstract | PDF (982KB) | References | Related Articles | Metrics
Domain-specific term recognition methods based on statistical distribution characteristics neglect term semantics and domain features, and their recognition results are unsatisfactory. To resolve this problem, a domain-specific term recognition method based on word embeddings and Conditional Random Fields (CRF) was proposed. It fully utilized the strong semantic expression ability of word embeddings and the strong domain expression ability of the similarity between words and terms. On the basis of statistical features, the similarity between the word embeddings of candidate words and those of known terms was added to create the feature vector, and term recognition was realized by a CRF using this series of features. Finally, experiments were carried out on domain texts and the SogouCA corpus; the precision, recall and F-measure of the recognition results reached 0.9855, 0.9439 and 0.9643, respectively. The results show that the proposed method is more effective than current methods.
Target tracking based on improved sparse representation model
LIU Shangwang, GAO Liuyang
2016, 36(11): 3152-3160. DOI: 10.11772/j.issn.1001-9081.2016.11.3152
Abstract | PDF (1646KB) | References | Related Articles | Metrics
When the target's appearance is influenced by changes of illumination, occlusion or attitude, the robustness and accuracy of a target tracking system are usually fragile. In order to solve this problem, sparse representation was introduced into the particle filter framework for target tracking and a sparse collaborative model was proposed. Firstly, the target object was represented by intensity in the target motion positioning model. Secondly, the optimal classification features were extracted by training the positive and negative template sets in the discriminative classification model, and the target was then weighted by the histogram in the generative model. Subsequently, the discriminative classification model and the generative model were combined in a collaborative model, and the target was determined by the reconstruction error. Finally, every module was updated independently to mitigate the effects of changes in the target's appearance. The experimental results show that the average center location error of the proposed model is only 7.5 pixels, and the model performs well in noise resistance and real-time operation.
File type detection algorithm based on principal component analysis and K nearest neighbors
YAN Mengdi, QIN Linlin, WU Gang
2016, 36(11): 3161-3164. DOI: 10.11772/j.issn.1001-9081.2016.11.3161
Abstract | PDF (583KB) | References | Related Articles | Metrics
In order to solve the problem that identifying file types by the file suffix or file features may yield a low recognition accuracy, a new content-based file-type detection algorithm was proposed, based on Principal Component Analysis (PCA) and K Nearest Neighbors (KNN). Firstly, the PCA algorithm was used to reduce the dimension of the sample space. Then, by clustering the training samples, each file type was represented by cluster centroids. In order to reduce the error caused by unbalanced training samples, a KNN algorithm based on distance weighting was proposed. The experimental results show that, with a large number of training samples, the improved algorithm reduces computational complexity while maintaining a high recognition accuracy. The algorithm does not depend on the features of individual file types, so it can be applied more widely.
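The distance-weighted KNN voting described above can be sketched as follows (feature extraction and the PCA step are omitted; the function names and the 1/(d+eps) weighting are our assumptions):

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """Distance-weighted k-NN: each of the k nearest neighbors votes with
    weight 1/(distance + eps), which damps the effect of class imbalance."""
    eps = 1e-9
    nearest = sorted((math.dist(x, query), label) for x, label in train)[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += 1.0 / (d + eps)
    return max(votes, key=votes.get)

# Toy 2-D "content feature" vectors standing in for byte-frequency features.
train = [((0.0, 0.0), "txt"), ((0.1, 0.0), "txt"), ((1.0, 1.0), "pdf"),
         ((1.1, 1.0), "pdf"), ((0.9, 1.1), "pdf")]
label_a = weighted_knn(train, (0.05, 0.0), k=3)
label_b = weighted_knn(train, (1.0, 1.05), k=3)
```

In the paper's setting, the training points would be the per-type cluster centroids in the PCA-reduced space rather than raw samples.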
Selection of training data for cross-project defect prediction
WANG Xing, HE Peng, CHEN Dan, ZENG Cheng
2016, 36(11): 3165-3169. DOI: 10.11772/j.issn.1001-9081.2016.11.3165
Abstract | PDF (926KB) | References | Related Articles | Metrics
Cross-Project Defect Prediction (CPDP), which uses data from other projects to predict defects in the target project, provides a new perspective on the shortcoming of limited training data in traditional defect prediction. Because the quality of cross-project training data directly affects the performance of cross-project defect prediction, data more similar to the target project should be given priority. To analyze the impact of different similarity measures on the selection of training data for cross-project defect prediction, experiments were performed on 34 datasets from the PROMISE repository. The results show that the quality of training data selected by different similarity measures varies, and that cosine similarity and the correlation coefficient achieve better performance as a whole, with an improvement rate of up to 6.7%. With regard to the defect rate of the target project, cosine similarity seems more suitable when the defect rate is more than 0.25.
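Ranking candidate source projects by cosine similarity to the target, as discussed above, can be sketched like this (the feature vectors standing in for project metrics are hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two metric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_source_projects(target, sources):
    """Rank candidate source projects by similarity to the target project;
    sources is {name: feature_vector}, e.g. mean code metrics per project."""
    return sorted(sources, key=lambda n: cosine(target, sources[n]), reverse=True)

target = [1.0, 2.0, 3.0]
sources = {"p1": [2.0, 4.0, 6.0],    # same direction as the target
           "p2": [3.0, 2.0, 1.0],
           "p3": [-1.0, -2.0, -3.0]}
order = rank_source_projects(target, sources)
```

The top-ranked projects would then supply the training instances for the defect predictor.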
Mutation strategy based on concurrent program data racing fault
WU Yubo, GUO Junxia, LI Zheng, ZHAO Ruilian
2016, 36(11): 3170-3177. DOI: 10.11772/j.issn.1001-9081.2016.11.3170
Abstract | PDF (1458KB) | References | Related Articles | Metrics
Because existing mutation operators for concurrent programs have a low ability to trigger data race faults in mutation testing, new mutation strategies based on data race faults were proposed. From the viewpoint of mutation operator design, a Lock-oriented Mutation Strategy (LMS) and a Shared-variable-oriented Mutation Strategy (SMS) were introduced, and two new mutation operators named Synchronized Lock Resting Operator (SLRO) and Move Shared Variable Operator (MSVO) were designed. From the viewpoint of mutation point selection, a new strategy named Synchronized relationship pair Mutation Point Selection Strategy (SMPSS) was also proposed. The SLRO and MSVO operators were used to inject faults at the points generated by the SMPSS strategy in 12 Java concurrency libraries, and the ability of the mutants to trigger data race faults was then checked by using Java PathFinder (JPF). The results show that SLRO and MSVO generated 121 and 122 effective mutants respectively for the 12 Java libraries, with effectiveness rates of 95.28% and 99.19% respectively. In summary, the new concurrency mutation operators and mutation strategies can effectively trigger data race faults.
APK-based automatic method for GUI traversal of Android applications
ZHANG Shengqiao, YIN Qing, CHANG Rui, ZHU Xiaodong
2016, 36(11): 3178-3182. DOI: 10.11772/j.issn.1001-9081.2016.11.3178
Abstract | PDF (799KB) | References | Related Articles | Metrics
In order to improve the coverage and automation level of automatic Graphical User Interface (GUI) execution technology for Android applications, an Android Package (APK)-based automatic traversal method meeting the requirements of dynamic security analysis and GUI testing was proposed. The GUI of the target application was captured dynamically, and user actions were simulated while the application ran automatically. Based on the open-source project Appium, a cross-platform prototype tool was implemented that can automatically traverse the GUIs of lightweight Android applications. The experimental results show that the proposed method is feasible and effective and achieves high coverage.
Visually smooth Gregory patches interpolation of triangle mesh surface model
CHEN Ming, LI Jie
2016, 36(11): 3183-3187. DOI: 10.11772/j.issn.1001-9081.2016.11.3183
Abstract | PDF (660KB) | References | Related Articles | Metrics
The inconsistency of normal curvature at vertices when refining a coarse mesh model into a fine one has remained unsolved, and this inconsistency results in shading artifacts in rendering. In this paper, the geometric condition for normal curvature consistency at a vertex was derived, and a novel algorithm based on that condition was proposed to refine coarse mesh models into visually smooth parametric ones, represented collectively as triangular Gregory patches. The constructed parametric model is G1 continuous everywhere and free of the normal curvature inconsistency problem, so a good visual effect can be obtained. The experimental results show that the proposed algorithm can achieve a high-quality visual effect with only 1%-2% of the vertices of the original mesh model.
Multi-threshold MRI image segmentation algorithm based on Curvelet transformation and multi-objective particle swarm optimization
BIAN Le, HUO Guanying, LI Qingwu
2016, 36(11): 3188-3195. DOI: 10.11772/j.issn.1001-9081.2016.11.3188
Abstract | PDF (1337KB) | References | Related Articles | Metrics
To deal with the difficulties caused by noise disturbance, intensity inhomogeneity and edge blurring in Magnetic Resonance Imaging (MRI) image segmentation, a new multi-threshold MRI image segmentation algorithm based on mixed entropy, using the Curvelet transformation and Multi-Objective Particle Swarm Optimization (MOPSO), was proposed. First, the high-frequency and low-frequency subbands were obtained by Curvelet decomposition and used to construct a profile-detail gray level matrix model that represents edge details accurately. Then, considering both the inter-class similarity and the intra-class difference of the background and object regions, two-dimensional reciprocal entropy and reciprocal gray entropy were proposed and combined into a mixed entropy, which served as the objective function of MOPSO; the optimal multi-threshold was searched cooperatively to obtain an accurate segmentation. Finally, to speed up the segmentation, gradient-based multi-threshold estimation algorithms for the two-dimensional reciprocal entropy and reciprocal gray entropy were proposed. The experimental results show that the proposed method is more adaptive and accurate when applied to intensity-inhomogeneous and noisy MRI image segmentation, in comparison with the two-dimensional Tsallis entropy, Adaptive Bacterial Foraging (ABF) and improved Otsu multi-threshold segmentation algorithms.
Automatic nonrigid registration method for 3D skulls based on boundary correspondence
Reziwanguli XIAMXIDING, GENG Guohua, Gulisong NASIERDING, DENG Qingqiong, Dilinuer KEYIMU, Zulipiya MAIMAITIMING, ZHAO Wanrong, ZHENG Lei
2016, 36(11): 3196-3200. DOI: 10.11772/j.issn.1001-9081.2016.11.3196
Abstract | PDF (996KB) | References | Related Articles | Metrics
In order to automatically register skulls that differ greatly in pose from the reference skull, or that lack a large part of their bones, an automatic nonrigid 3D skull registration method based on boundary correspondence was proposed. First, all boundaries of the target skull were calculated; according to edge length and the shortest distance between edges, the boundary type was identified automatically, and the correspondence between the registered skull and the reference skull was established. On this basis, the initial position and attitude of the skull were adjusted to realize coarse registration. Finally, the Coherent Point Drift (CPD) algorithm was applied twice to realize accurate registration of the two skulls, from the boundary regions to all regions. The experimental results show that, compared with the automatic registration method based on Iterative Closest Point (ICP) and Thin Plate Spline (TPS), the proposed method is more robust to pose, position, resolution and defects, and is more practical.
Automatic segmentation of glomerular basement membrane based on image patch matching
LI Chuangquan, LU Yanmeng, LI Mu, LI Mingqiang, LI Ran, CAO Lei
2016, 36(11): 3201-3206. DOI: 10.11772/j.issn.1001-9081.2016.11.3201
Abstract | PDF (1089KB) | References | Related Articles | Metrics
An automatic segmentation method based on an image patch matching strategy was proposed to segment the glomerular basement membrane automatically. First of all, according to the characteristics of the glomerular basement membrane, the search range was extended from a single reference image to multiple reference images, and an improved search method was adopted to improve matching efficiency. Then, the optimal patches were found, and the label image patches corresponding to them were extracted and weighted by matching similarity. Finally, the weighted label patches were rearranged as the initial segmentation of the glomerular basement membrane, from which the final segmentation was obtained after morphological processing. On a glomerular Transmission Electron Microscopy (TEM) dataset, the Jaccard coefficient is between 83% and 95%. The experimental results show that the proposed method achieves high accuracy.
Blind restoration of blurred images based on tensorial total variation
LIU Hong, LIU Benyong
2016, 36(11): 3207-3211. DOI: 10.11772/j.issn.1001-9081.2016.11.3207
Abstract | PDF (837KB) | References | Related Articles | Metrics
In general blind restoration algorithms, only the gray information of a color image is utilized to estimate the blurring kernel, so the restored image may be unsatisfactory if the image is too small or contains too few salient edges. Focusing on this problem, a new blind image restoration algorithm was proposed under a tensorial framework, in which a color image is regarded as a third-order tensor. First, the blurring kernel was estimated utilizing the multi-scale edge information of the blurred color image, obtained by adjusting the regularization parameter in the tensorial total variation model. Then a deblurring algorithm based on tensorial total variation was adopted to recover the latent image. The experimental results show that the proposed algorithm achieves obvious improvement in Peak Signal-to-Noise Ratio (PSNR) and subjective visual quality.
Voice activity detection algorithm based on hidden Markov model
LI Qiang, CHEN Hao, CHEN Dingdang
2016, 36(11): 3212-3216. DOI: 10.11772/j.issn.1001-9081.2016.11.3212
Abstract | PDF (756KB) | References | Related Articles | Metrics
Concerning the problem that existing Voice Activity Detection (VAD) algorithms based on the Hidden Markov Model (HMM) are poor at tracking noise, a method using the Baum-Welch algorithm was proposed to train noises with different characteristics, generating the corresponding noise models to build a library. When voice activity is detected, depending on the measured background noise of the speech, the voice is dynamically matched to a noise model in the library. Meanwhile, in order to meet the real-time requirements of speech signal processing and reduce the complexity of speech parameter extraction, the decision threshold was improved to exploit the inter-frame correlation of the speech signal. Under different noise environments, the performance of the improved algorithm was tested and compared with Adaptive Multi-Rate (AMR) and ITU-T G.729B. The test results show that the improved algorithm can effectively improve the accuracy of detection and the noise tracking ability in real-time voice signal processing.
Significant visual attention method guided by object-level features
YANG Fan, CAI Chao
2016, 36(11): 3217-3221. DOI: 10.11772/j.issn.1001-9081.2016.11.3217
Concerning the defects of existing visual attention models in fusing object information, a new visual attention method combining high-level object features with low-level pixel features was proposed. Firstly, high-level feature maps were obtained using a Convolutional Neural Network (CNN) with strong multi-class object understanding. Then the object feature maps were combined with weights trained on eye-fixation data, and the final saliency map was obtained by fusing the pixel-level and object-level conspicuity maps. Finally, the proposed method was compared with popular visual attention methods on the OSIE and MIT datasets, where it achieved a higher Area Under Curve (AUC) score than the compared methods. Experimental results show that the proposed method makes full use of the object information in the image and increases saliency-prediction accuracy.
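The weight-training and fusion steps above can be sketched as a linear least-squares fit: given several conspicuity maps and a ground-truth fixation map, solve for the combination weights and fuse with them. This linear stand-in (and all names and shapes here) is an illustrative assumption, not the paper's training procedure:

```python
import numpy as np

def fit_fusion_weights(conspicuity_maps, fixation_map):
    """Least-squares weights so the weighted sum of conspicuity maps
    best matches a ground-truth fixation density."""
    X = np.stack([m.ravel() for m in conspicuity_maps], axis=1)
    w, *_ = np.linalg.lstsq(X, fixation_map.ravel(), rcond=None)
    return w

def fuse(conspicuity_maps, w):
    """Weighted fusion of pixel-level and object-level maps,
    clipped to keep the saliency map non-negative."""
    s = sum(wi * m for wi, m in zip(w, conspicuity_maps))
    return np.clip(s, 0, None)
```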
Chinese speech segmentation into syllables based on energies in different times and frequencies
ZHANG Yang, ZHAO Xiaoqun, WANG Digang
2016, 36(11): 3222-3228. DOI: 10.11772/j.issn.1001-9081.2016.11.3222
Precise speech segmentation methods, which can also greatly improve the efficiency of corpus annotation, are helpful for comparing speech against acoustic models in speech recognition. A new method for segmenting Chinese speech into syllables based on time-frequency energy features was proposed: firstly, silence frames were detected in the traditional way; secondly, unvoiced frames were found using the difference of energies across frequency bands; thirdly, voiced and speech frames were located with the help of 0-1 energies in specific frequency ranges; finally, syllable boundaries were determined from the above judgements. Experimental results show that the proposed method, with a syllable error of 0.0297 s and a syllable deviation of 7.93%, is superior to the Merging-Based Syllable Detection Automaton (MBSDA) and the Gaussian-fitting method.
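The band-energy comparison behind the frame classification above can be sketched as follows: compute the energy of each frame in a low and a high frequency band from its FFT, then make a rough silence / unvoiced / voiced decision. The band edges, thresholds, and frame length are illustrative assumptions, not the paper's exact ranges:

```python
import numpy as np

def band_energies(frame, sr=16000, low_band=(0, 1000), high_band=(3000, 8000)):
    """Energy of one frame in a low and a high frequency band."""
    spec = np.abs(np.fft.rfft(frame))**2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    def e(band):
        lo, hi = band
        return spec[(freqs >= lo) & (freqs < hi)].sum()
    return e(low_band), e(high_band)

def classify_frame(frame, silence_thresh=1e-4):
    """Rough silence / unvoiced / voiced decision from band energies:
    voiced speech concentrates energy in the low band, unvoiced
    fricatives in the high band."""
    low, high = band_energies(frame)
    if low + high < silence_thresh:
        return "silence"
    return "unvoiced" if high > low else "voiced"
```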
Adaptive residual error correction support vector regression prediction algorithm based on phase space reconstruction
LI Junshan, TONG Qi, YE Xia, XU Yuan
2016, 36(11): 3229-3233. DOI: 10.11772/j.issn.1001-9081.2016.11.3229
Focusing on nonlinear time-series prediction in analog-circuit fault prognostics and the error accumulation of traditional Support Vector Regression (SVR) multi-step prediction, a new adaptive SVR prediction algorithm based on phase-space reconstruction was proposed. Firstly, the significance of SVR multi-step prediction for time-series trend forecasting and the error accumulation it causes were analyzed. Secondly, phase-space reconstruction was introduced into SVR prediction: the phase space of the analog-circuit state time series was reconstructed, and SVR prediction was then carried out on it. Thirdly, a second SVR was trained on the accumulated-error sequence generated during multi-step prediction to adaptively correct the initial prediction error. Finally, the proposed algorithm was verified by simulation. The simulation results and the health-degree prediction experiments on analog circuits show that the proposed algorithm effectively reduces the error accumulation caused by multi-step prediction, significantly improves the accuracy of regression estimation, and better predicts the trend of the analog-circuit state.
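The phase-space reconstruction step above is a standard time-delay embedding, which can be sketched directly; the second helper builds (embedded state, future value) pairs such as an SVR would train on. Function names, and the choice of `dim` and `tau`, are illustrative assumptions:

```python
import numpy as np

def phase_space_reconstruct(series, dim=3, tau=2):
    """Time-delay embedding of a scalar series into a dim-dimensional
    phase space with delay tau (Takens-style reconstruction).
    Row j is (s[j], s[j+tau], ..., s[j+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this dim/tau")
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

def make_training_pairs(series, dim=3, tau=2, horizon=1):
    """Embedded states paired with the value `horizon` steps after the
    last coordinate of each state, ready for a regressor such as SVR."""
    X = phase_space_reconstruct(series[:-horizon], dim, tau)
    y = series[(dim - 1) * tau + horizon:]
    return X, y
```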
Improved design method for infinite impulse response digital filter based on structure evolution
GAO Ling, CHEN Lijia, LIU Mingguo, MAO Junyong
2016, 36(11): 3234-3238. DOI: 10.11772/j.issn.1001-9081.2016.11.3234
In order to further improve the performance of Infinite Impulse Response (IIR) digital filters, a design method based on structure evolution and parameter evolution was proposed. Firstly, an initial filter structure was obtained using a Genetic Algorithm (GA). Then, Differential Evolution (DE) was used to optimize the filter parameters. Finally, an improved optimization strategy using search-step adjustment and bidirectional heuristic search further refined the parameters. The proposed method was applied to the design of a low-pass and a high-pass filter. Compared with the GA-based design, the pass-band performance of the resulting low-pass filter is similar, while its transition-band width is reduced by 65% and its minimum stop-band attenuation is reduced by 36.48 dB; for the high-pass filter, the pass-band ripple is reduced by 75%, the transition-band width by 44%, and the minimum stop-band attenuation by 12.13 dB. Simulation results show that the proposed method yields effective filters with better performance and is therefore suitable for IIR digital filter design.
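The DE parameter-optimization stage above can be sketched on a deliberately tiny problem: fitting the two coefficients of a first-order IIR section to a target magnitude response with plain DE/rand/1/bin. This stripped-down stand-in (fixed structure, no GA stage, no step adjustment or bidirectional search) and all names and constants in it are illustrative assumptions:

```python
import numpy as np

def iir_response(b0, a1, n_points=64):
    """Magnitude response of the first-order IIR y[n] = b0*x[n] + a1*y[n-1]."""
    w = np.linspace(0, np.pi, n_points)
    z = np.exp(1j * w)
    return np.abs(b0 / (1 - a1 / z))

def de_fit(target, pop_size=20, gens=150, F=0.7, CR=0.9, seed=0):
    """Plain DE/rand/1/bin over (b0, a1), minimizing the mean squared
    error against the target magnitude response."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-0.99, 0.99, size=(pop_size, 2))
    def cost(p):
        return np.mean((iir_response(p[0], p[1]) - target)**2)
    costs = np.array([cost(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(pop_size, 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            cross = rng.random(2) < CR          # binomial crossover mask
            trial = np.clip(np.where(cross, mutant, pop[i]), -0.99, 0.99)
            c = cost(trial)
            if c < costs[i]:                    # greedy selection
                pop[i], costs[i] = trial, c
    return pop[np.argmin(costs)]
```

Clipping `a1` inside (-1, 1) keeps every candidate filter stable, a constraint any DE-based IIR design has to enforce in some form.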
Superintended by: Sichuan Associations for Science and Technology
Sponsored by: Sichuan Computer Federation; Chengdu Branch, Chinese Academy of Sciences
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editors: SHEN Hengtao, XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address: No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803, 028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn