Table of Contents
10 February 2018, Volume 38 Issue 2
Paging-measurement method for virtual machine process code based on hardware virtualization
CAI Mengjuan, CHEN Xingshu, JIN Xin, ZHAO Cheng, YIN Mingyong
2018, 38(2): 305-309. DOI: 10.11772/j.issn.1001-9081.2017082167
In cloud environments, the code of pivotal business processes in a Virtual Machine (VM) can be modified by malicious software in many ways, threatening stable operation. Traditional host-based measurement systems are liable to be bypassed or attacked. To address the difficulty of obtaining the complete running process code of a virtual machine and verifying its integrity at the Virtual Machine Monitor (VMM) layer, a paging-measurement method based on hardware virtualization was proposed. The Kernel-based Virtual Machine (KVM) was used as the VMM to capture the system calls of virtual machine processes in the VMM and regard them as the trigger points of the measurement process; the semantic differences between virtual machine versions were resolved by using relative address offsets, so that the paging-measurement method could transparently verify the code integrity of processes running in the virtual machine at the VMM layer. The implemented prototype system, VMPMS (Virtual Machine Paging-Measurement System), can effectively measure virtual machine process code with acceptable performance loss.
Information flow control model for cloud composite service supporting Chinese Wall policy
LIU Mingcong, WANG Na
2018, 38(2): 310-315. DOI: 10.11772/j.issn.1001-9081.2017081981
Due to conflicts of interest between component services of a cloud composite service from different service providers, it is necessary to control the information flow of cloud services to prevent sensitive information from flowing between conflicting component services. For the conflict of interest in cloud composite services, the information flow of complex composite structures was formally described with a weighted directed graph model of the cloud composite service, and the concepts of the data dependency relation and the cloud service alliance relation were defined. Moreover, the conflict of interest relation was extended to the composite service conflict of interest relation in the Chinese Wall policy. On this basis, the Chinese Wall-based Cloud Composite Service Information Flow Control (CW-CCSIFC) model was proposed, with a formal description and proofs of the relevant theorems. The analysis shows that the CW-CCSIFC model can block illegal information flow between cloud services with conflicting interests and protect the information flow security of cloud composite services.
Efficient cross-domain authentication scheme based on blockchain technology
ZHOU Zhicheng, LI Lixin, LI Zuohui
2018, 38(2): 316-320. DOI: 10.11772/j.issn.1001-9081.2017082170
To solve the efficiency problem of existing Public Key Infrastructure (PKI) cross-domain authentication schemes, an efficient cross-domain authentication scheme was proposed by using blockchain technology, which has the advantages of being distributed and multi-centered, collectively maintained, and hard to tamper with. The scheme covers the BlockChain Certificate Authority (BCCA) trust model and system architecture, the blockchain certificate format, and the user cross-domain authentication protocol, whose security and efficiency were analyzed. The results show that, in terms of security, the scheme achieves security attributes such as mutual entity authentication; in terms of efficiency, compared with existing cross-domain authentication schemes, the tamper-resistance of the blockchain and the use of hash algorithms reduce the number of public key signature and verification operations, improving the efficiency of cross-domain authentication.
Publicly verifiable outsourced computation scheme for multivariate polynomial based on two-server model
LUO Xiaoshuang, YANG Xiaoyuan, LI Cong, WANG Xu'an
2018, 38(2): 321-326. DOI: 10.11772/j.issn.1001-9081.2017082169
Addressing the privacy-preserving problem of secure outsourced computation in the cloud, for outsourcing arbitrary multivariate polynomials, a publicly verifiable outsourced computation scheme based on a two-server model was constructed from homomorphic encryption and multilinear maps. The scheme guarantees the privacy of the inputs and outputs of polynomial functions, and allows users or any third party to verify the correctness of the results, thus achieving public verifiability and availability. The results returned by the cloud are encrypted, and only users holding the decryption key can recover the final results, which ensures the security of the computation. Moreover, the scheme achieves Chosen Plaintext Attack (CPA) security for the inputs in the standard model, and the user's computational cost is much less than that of the server or of direct computation.
Trust chain model with waterfall characteristic based on trusted virtualization platform
QI Neng, TAN Liang
2018, 38(2): 327-336. DOI: 10.11772/j.issn.1001-9081.2017082159
The trusted virtual platform constructed by combining virtualization technology with trusted computing, together with its trust chain, has become a research hotspot. However, most existing work constructs the trust chain by extending the conventional trust chain model, so the resulting model is imprecise and its logic is not entirely sound. Moreover, there are two separate trust chains: one starting from the underlying virtual platform and the other from the top-level user Virtual Machine (VM). To solve this problem, a trust chain model with a waterfall characteristic, called TVP-QT, was proposed for the trusted virtual platform. The model starts with the physical Trusted Platform Module (TPM) and adds a Trusted-Joint Point (TJP) between the chain of the underlying virtual platform and the chain of the top-level user VM. After the trust chain is transmitted from the underlying virtual platform to the TJP, the TJP is in charge of measuring the virtualized TPM (vTPM) of the VM; the vTPM then takes control and measures the related components and applications of the top-level user VM during startup. The TJP acts as a connecting link with a waterfall characteristic between the underlying virtual platform and the top-level user VM; it satisfies the hierarchical and dynamic characteristics of the virtual platform and guarantees the trust of the whole platform. Finally, the correctness of the model was proved in theory, and the generality and feasibility of the proposed trust chain model in an instantiated system were analyzed and discussed. Simulation results on Xen show that the trust chain can ensure the trust and credibility of the trusted cloud platform during the whole running process.
Correspondence property-based platform configuration attestation
XU Mingdi, GAO Yang, GAO Xueyuan, ZHANG Fan
2018, 38(2): 337-342. DOI: 10.11772/j.issn.1001-9081.2017082168
Concerning the security problem of local and global attacks on the Integrity Report Protocol (IRP), the StatVerif syntax was extended by adding constructors and destructors associated with integrity measurement. The security of Platform Configuration Attestation (PCA) was analyzed, and local and global attacks were found, including tampering with the platform configuration register or revising the stored measurement log by running unauthorized commands. The abilities of attackers were modeled, and how attackers accumulate knowledge and tamper with the PCA protocol by using constructors and destructors was described. Finally, the existence of an attack sequence was proved theoretically when PCA does not satisfy the correspondence property, and several propositions under which PCA meets local and global reliability were given and proved with the formal verification tool ProVerif.
Multi-keyword ciphertext search method in cloud storage environment
YANG Hongyu, WANG Yue
2018, 38(2): 343-347. DOI: 10.11772/j.issn.1001-9081.2017071869
Aiming at the low efficiency and lack of adaptive ability of existing multi-keyword ciphertext search methods in cloud storage environments, a Multi-keyword Ranked Search over Encrypted cloud data based on Improved Quality Hierarchical Clustering (MRSE-IQHC) method was proposed. Firstly, document vectors were constructed by the Term Frequency-Inverse Document Frequency (TF-IDF) method and the Vector Space Model (VSM). Secondly, the Improved Quality Hierarchical Clustering (IQHC) algorithm was proposed to cluster the document vectors and to construct the document index and cluster index. Thirdly, the K-Nearest Neighbor (KNN) query algorithm was used to encrypt the indexes. Finally, user-defined keyword weights were used to construct the search request and retrieve the top-k relevant documents in ciphertext state. The experimental results show that, compared with the Multi-keyword Ranked Search over Encrypted cloud data (MRSE) method and the Multi-keyword Ranked Search over Encrypted data based on Hierarchical Clustering Index (MRSE-HCI) method, under the same numbers of searched documents, retrieved documents and search keywords, the search time was shortened by 44.3% and 34.2%, 32.4% and 13.2%, and 36.9% and 19.4% respectively, and the accuracy was increased by 10.8% and 8.6%. The proposed MRSE-IQHC method achieves high search efficiency and accuracy for multi-keyword ciphertext search in cloud storage environments.
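As a rough illustration of the document-vector step above (not the authors' code; the toy corpus and formula details are assumptions), a minimal TF-IDF/VSM sketch in Python:

    import math
    from collections import Counter

    def tfidf_vectors(docs):
        # Build TF-IDF document vectors over a shared vocabulary (the VSM).
        vocab = sorted({w for d in docs for w in d})
        n = len(docs)
        df = {w: sum(1 for d in docs if w in d) for w in vocab}  # document frequency
        vectors = []
        for d in docs:
            tf = Counter(d)
            vectors.append([tf[w] / len(d) * math.log(n / df[w]) for w in vocab])
        return vocab, vectors

    docs = [["cloud", "storage", "search"],
            ["cipher", "search", "search"],
            ["cloud", "cipher"]]
    vocab, vecs = tfidf_vectors(docs)

Each resulting vector is one row of the document index; the clustering and KNN-based encryption then operate on these vectors.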
Data updating method for cloud storage based on ciphertext-policy attribute-based encryption
LIU Rong, PAN Hongzhi, LIU Bo, ZU Ting, FANG Qun, HE Xin, WANG Yang
2018, 38(2): 348-351. DOI: 10.11772/j.issn.1001-9081.2017071856
Cloud computing data are vulnerable to illegal theft and malicious tampering. To solve these problems, a Dynamic Updating Ciphertext-Policy Attribute-Based Encryption (DU-CPABE) scheme which enables both dynamic data updating and security protection was proposed. Firstly, using a linear partitioning algorithm, data were divided into fixed-size blocks. Secondly, the data blocks were encrypted with the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) algorithm. Finally, based on the conventional Merkle Hash Tree (MHT), an Address-MHT (A-MHT) was proposed to support dynamic data updating in cloud computing. Theoretical analysis proved the security of the scheme, and simulation in an ideal channel showed that, for five updates, the average time overhead of data updating was decreased by 14.6% compared with the CP-ABE method. The experimental results show that the dynamic updating of the DU-CPABE scheme in cloud computing services can effectively reduce data update time and system overhead.
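The A-MHT structure is specific to the paper, but the conventional MHT it extends can be sketched briefly. A minimal Python sketch (the hash function and the odd-node duplication rule are assumptions), showing how block hashes roll up to a root that exposes tampering with any block:

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(blocks):
        # Hash each data block, then pair-wise hash upward to a single root.
        level = [h(b) for b in blocks]
        while len(level) > 1:
            if len(level) % 2:               # duplicate the last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    root = merkle_root([b"block-0", b"block-1", b"block-2"])

Changing any block changes the root, so a stored root suffices to detect tampering after an update.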
Design and implementation of log parsing system based on machine learning
ZHONG Ya, GUO Yuanbo
2018, 38(2): 352-356. DOI: 10.11772/j.issn.1001-9081.2017071786
Focusing on the problem that existing log classification methods are only applicable to well-formatted logs and that their performance depends closely on log structure, the existing log parsing algorithm LogSig (Log Signature) was extended and improved based on machine learning, and a log clustering analysis system integrating data processing and result analysis was designed, comprising raw data preprocessing, log parsing, clustering analysis and evaluation, and scatter-diagram display of results. The system was tested on the open-source firewall log dataset of the VAST 2011 challenge. The experimental results show that the average accuracy of the improved algorithm in classifying event logs reaches more than 85%; compared with the original LogSig algorithm, the log parsing accuracy is improved by 50%, and the parsing time is only 25% of that of the original algorithm. The proposed algorithm can analyze multi-source unstructured log data efficiently and accurately in big data environments.
Abnormal user detection in enterprise network based on graph analysis and support vector machine
XU Bing, GUO Yuanbo, YE Ziwei, HU Yongjin
2018, 38(2): 357-362. DOI: 10.11772/j.issn.1001-9081.2017081951
In an enterprise network, if an internal attacker obtains a user's identity authentication information, his behavior is very difficult to distinguish from that of the normal user. Current research on abnormal user detection in enterprise networks is relatively simple and suffers from low detection rates. A user's authentication activity directly reflects his interaction with various resources or personnel in the network. Based on this, a new abnormal user detection method using user authentication activity information was proposed. The user's authentication activity was used to generate a user authentication graph, and attributes such as the size of the largest connected component of the graph and the number of isolated certificates were then extracted based on graph analysis. These attributes reflect the user's authentication behavior characteristics in the enterprise network. Finally, a supervised Support Vector Machine (SVM) was used to model the extracted graph attributes to indirectly identify and detect abnormal users in the network. After extracting the user graph vectors, the training set, the test set, the penalty parameter and the kernel function were analyzed over different values. Through the adjustment of these parameters, the recall, accuracy and F1-Score of the proposed method all reached more than 80%. The experimental results show that the proposed method can effectively detect abnormal users in an enterprise network.
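As an illustrative sketch of the pipeline (not the authors' exact feature set; networkx, scikit-learn and the toy data are assumptions), each user's authentication graph is reduced to a few numeric attributes and fed to an SVM:

    import networkx as nx
    from sklearn.svm import SVC

    def graph_features(edges):
        # Simple attributes of one user's authentication graph.
        g = nx.Graph(edges)
        largest_cc = max((len(c) for c in nx.connected_components(g)), default=0)
        return [g.number_of_nodes(), g.number_of_edges(), largest_cc]

    # Toy training data: one (edge list, label) pair per user; labels are assumptions.
    users = [([("u1", "host1"), ("u1", "host2")], 0),
             ([("u2", h) for h in ("h1", "h2", "h3", "h4", "h5")], 1)]
    X = [graph_features(e) for e, _ in users]
    y = [label for _, label in users]
    clf = SVC(kernel="rbf", C=1.0).fit(X, y)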
Intrusion detection method in industrial control network combining white list filtering and neural network
CHEN Wanzhi, LI Dongzhe
2018, 38(2): 363-369. DOI: 10.11772/j.issn.1001-9081.2017061509
In industrial control networks, network communication exhibits both known and unknown anomalous behaviors. The white list method can effectively detect known abnormal behaviors covered by its rule library, but its detection rate for unknown anomalous behaviors is low. In order to improve the detection rate while fully mining valid information, an intrusion detection method combining white list filtering and a neural network unsupervised learning algorithm, named AMPSO-BP, was proposed, to be deployed on the routers between the management network servers and the industrial network. Firstly, white list technology was used for a first-pass filtering of communication behaviors that could not be matched against the white list rule library; then the results of offline unsupervised sample training in the neural network system were used for a second-pass filtering of abnormal communication behaviors among those trusted by the white list. The neural network was used to improve the detection rate under incomplete information, and the white list rule library was continuously refined according to the neural network detection results to further raise the detection rate of abnormal network communication. The Particle Swarm Optimization algorithm with Adaptive Mutation (AMPSO) was used as the training function of the BP (Back Propagation) neural network, where the adaptive mutation process was added to the Particle Swarm Optimization (PSO) algorithm to avoid premature convergence to local optima during training. Two groups of training and testing datasets were used in the experiments. The experimental results show that the detection accuracy of AMPSO-BP combined with the white list is higher than that of PSO-BP combined with the white list.
Plaintext recovery attack on RC4 with different length of seed key
YUAN Chao, XU Mixue, SI Xueming
2018, 38(2): 370-373. DOI: 10.11772/j.issn.1001-9081.2017071945
Aiming at plaintext recovery for plaintexts encrypted by the RC4 (Rivest Cipher 4) algorithm with different seed key lengths, a plaintext recovery attack on RC4 with seed keys of 8, 16 and 22 bytes was proposed. Firstly, using a statistical algorithm, the t-value distribution of each output byte of the RC4 keystream was calculated over 2^32 different seed keys, and biases were found. Then an attack on the first 256 bytes of the plaintext encrypted by RC4 was given by using single-byte and double-byte biases. The experimental results show that with 2^31 ciphertexts, the first 196 bytes of the plaintext can be recovered with a success probability of 100%, except for the 4th byte. Moreover, the first 256 bytes can be recovered with success probabilities over 91%, 87% and 81% for 8-byte, 16-byte and 22-byte seed keys, respectively. The proposed attack algorithm extends the scope of existing attacks beyond RC4 with a 16-byte seed key, and can recover plaintexts encrypted by the RC4 algorithm in practice.
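A minimal sketch of the bias-statistics step, scaled down from the paper's 2^32 seed keys to a toy trial count (the key length and trial count here are assumptions): generate RC4 keystreams under random seed keys and tally how often each byte value appears at a given output position:

    import os
    from collections import Counter

    def rc4_keystream(key: bytes, n: int):
        # Standard RC4: key-scheduling (KSA) followed by output generation (PRGA).
        s = list(range(256))
        j = 0
        for i in range(256):
            j = (j + s[i] + key[i % len(key)]) & 0xFF
            s[i], s[j] = s[j], s[i]
        i = j = 0
        out = []
        for _ in range(n):
            i = (i + 1) & 0xFF
            j = (j + s[i]) & 0xFF
            s[i], s[j] = s[j], s[i]
            out.append(s[(s[i] + s[j]) & 0xFF])
        return out

    # Tally the 2nd keystream byte (known to be biased towards 0) over random 16-byte keys.
    counts = Counter(rc4_keystream(os.urandom(16), 2)[1] for _ in range(100_000))

A clearly skewed counter at some position is exactly the kind of bias the attack exploits to vote on plaintext bytes.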
Efficient certificateless aggregate signcryption scheme without bilinear pairings
SU Jingfeng, LIU Juxia
2018, 38(2): 374-378. DOI: 10.11772/j.issn.1001-9081.2017081984
Most current aggregate signcryption schemes based on bilinear pairings have low computational efficiency and are thus unsuitable for application environments with limited computing resources and communication bandwidth. In order to improve the efficiency of aggregate signcryption, a new certificateless aggregate signcryption scheme without bilinear pairings was proposed. Based on the Diffie-Hellman problem and the Discrete Logarithm Problem (DLP), it was proven to be existentially unforgeable and confidential under the random oracle model. Compared with current typical aggregate signcryption schemes, the proposed scheme requires no bilinear pairing or exponentiation operations and needs only two point multiplications per single signcryption; it therefore has higher efficiency and a shorter ciphertext length. In the aggregate signcryption verification phase, no user's secret information needs to be provided, so the proposed scheme has the public verifiability property. In addition, the scheme does not need a secure channel in the partial private key generation phase, which reduces communication complexity.
Efficient bilinear-pairing-free certificate-based encryption scheme with keyword search
XU Hailin, LU Yang
2018, 38(2): 379-385. DOI: 10.11772/j.issn.1001-9081.2017071877
Concerning the problems of complex certificate management, key escrow and key distribution in existing public key encryption schemes with keyword search, a certificate-based encryption scheme with keyword search was proposed. Firstly, the framework of certificate-based encryption with keyword search and its security model were formally defined. Secondly, an efficient bilinear-pairing-free certificate-based encryption with keyword search scheme over an elliptic curve group was proposed, which was proved to be indistinguishable against adaptively chosen-keyword attacks under the hardness assumption of the Computational Diffie-Hellman Problem (CDHP) in the random oracle model. Finally, the proposed scheme was simulated and compared with several existing public key encryption schemes with keyword search in terms of properties and performance. The comparison and analysis results show that the proposed scheme not only has the merits of implicit authentication and freedom from key escrow and key distribution, but also outperforms the compared certificateless encryption schemes with keyword search in both computational efficiency and communication bandwidth.
Analysis and improvement of two electronic cash schemes
SHAO Dongyang, KANG Baoyuan, WANG Jiaqiang
2018, 38(2): 386-389. DOI: 10.11772/j.issn.1001-9081.2017082160
Aiming at the shortcomings of current electronic cash schemes in anonymity and fairness, two schemes based on elliptic curves and bilinear pairings were proposed, which not only guarantee the anonymity of customers but also trace double-spending to ensure the fairness of transactions. Firstly, the electronic cash scheme based on elliptic curve authenticated encryption proposed by Chaudhry et al. was analyzed, and it was found unable to guarantee the anonymity of consumption or to effectively resolve transaction disputes. Then the bank-delegated offline electronic cash scheme proposed by Liu et al. was analyzed, and it was found that users in it can forge electronic money. Two improved schemes were proposed to remedy these defects, and their security was analyzed. The analysis shows that the new schemes not only inherit the security and efficiency of the previous schemes and resist replay and impersonation attacks, but also ensure anonymity and fairness.
Variable intuitionistic fuzzy multi-granulation rough set model and its approximate distribution reduction algorithms
WAN Zhichao, SONG Jie, SHENG Yongliang
2018, 38(2): 390-398. DOI: 10.11772/j.issn.1001-9081.2017071894
In order to obtain a better approximation effect for target concepts in multi-granulation rough set models, an intuitionistic fuzzy rough set and a multi-granulation rough set were combined, and an intuitionistic fuzzy multi-granulation rough set model was proposed. To remedy the looseness of this model's target approximation, a variable intuitionistic fuzzy multi-granulation rough set model was proposed by introducing parameters, and its validity was proved. In addition, on the basis of this model, a corresponding approximate distribution reduction algorithm was proposed. The simulation results show that, compared with the existing fuzzy multi-granulation decision-theoretic rough set and the multi-granulation double-quantitative decision-theoretic rough set, the proposed lower approximation distribution reduction algorithm retains 2 to 4 more attributes, and the proposed upper approximation distribution reduction algorithm retains 1 to 5 fewer attributes; meanwhile, the approximation accuracy of the reduction results is more reasonable and superior. Theoretical analysis and experimental results verify that the proposed variable intuitionistic fuzzy multi-granulation rough set model is superior in terms of both approximation and dimension reduction.
Self-adaptive differential evolution algorithm based on opposition-based learning
LI Longshu, WENG Qingqing
2018, 38(2): 399-404. DOI: 10.11772/j.issn.1001-9081.2017071888
Concerning premature convergence and low search capability of the Differential Evolution (DE) algorithm, the dynamic adjustment of control parameters was discussed, and a self-adaptive differential evolution algorithm based on opposition-based learning was proposed. In the proposed algorithm, opposition-based elite learning was used to enhance the local search ability of the population and obtain more accurate optimal individuals; meanwhile, a Gaussian distribution was used to improve the exploitation ability of each individual and increase the diversity of the population, which avoids premature convergence and balances global exploration against local exploitation. Comparison experiments with other differential evolution algorithms were conducted on six test functions from CEC 2014. The experimental results show that the proposed algorithm outperforms the compared differential evolution algorithms in terms of convergence speed, solution accuracy and reliability.
Fast image background automatic replacement based on dilated convolution
ZHANG Hao, DOU Qiwei, LUAN Guikai, YAO Shaowen, ZHOU Wei
2018, 38(2): 405-409. DOI: 10.11772/j.issn.1001-9081.2017081966
Because of the complexity of background replacement, traditional methods are inefficient and their accuracy is difficult to improve. To solve these problems, a fast image background replacement method based on dilated convolution, called FABRNet, was proposed. First, the first three parts of the VGG (Visual Geometry Group network) model were used for convolution and pooling operations on input images. Secondly, a combination of multiple sets of dilated convolutions was embedded into the convolutional neural network so that the network has a sufficiently large and fine receptive field; meanwhile, a residual network structure was used to ensure the accuracy of information distribution through the convolution process. Finally, the image was scaled back to its original size and output via bilinear interpolation. Experimental comparisons with three classical methods, KNN (K-Nearest Neighbors) matting, Portrait matting and Deep matting, show that FABRNet can effectively perform automatic background replacement and has an advantage in running speed.
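A minimal numpy sketch of what dilation buys (the kernel and dilation rate are assumptions, and this is not the FABRNet architecture): a dilated convolution samples the input with gaps, so a 3x3 kernel at dilation 2 covers a 5x5 area with no extra parameters:

    import numpy as np

    def dilated_conv2d(img, kernel, dilation=2):
        # Valid-mode 2-D convolution with dilated (spaced-out) kernel taps.
        kh, kw = kernel.shape
        span_h = (kh - 1) * dilation + 1     # effective receptive-field height
        span_w = (kw - 1) * dilation + 1
        oh, ow = img.shape[0] - span_h + 1, img.shape[1] - span_w + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                patch = img[i:i + span_h:dilation, j:j + span_w:dilation]
                out[i, j] = (patch * kernel).sum()
        return out

    y = dilated_conv2d(np.random.rand(16, 16), np.ones((3, 3)) / 9.0)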
Micro-blog misinformation detection based on gradient boost decision tree
DUAN Dagao, GAI Xinxin, HAN Zhongming, LIU Bingxin
2018, 38(2): 410-414. DOI: 10.11772/j.issn.1001-9081.2017082368
Micro-blog has become an important platform for information sharing; meanwhile, it is also one of the main channels through which misinformation spreads. In order to detect micro-blog misinformation quickly and effectively, a method based on the Gradient Boost Decision Tree (GBDT) was proposed. Firstly, classification features covering content, user properties, information dissemination and time characteristics were extracted from micro-blog comments. Then an identification model based on the GBDT algorithm was proposed to detect misinformation. Finally, two real micro-blog datasets were used to verify the efficiency and effectiveness of the model. The experimental results show that the proposed model can effectively improve the accuracy of micro-blog misinformation detection.
Adaptive threshold algorithm based on statistical prediction under spatial crowdsourcing environment
LIU Hui, LI Sheng'en
2018, 38(2): 415-420. DOI: 10.11772/j.issn.1001-9081.2017071805
Focusing on the problem that task assignment is too random and the utility value is not ideal in spatial crowdsourcing environments, an adaptive threshold algorithm based on statistical prediction was proposed. Firstly, the numbers of free tasks, free workers and free positions in the crowdsourcing platform were counted in real time to set the threshold value. Secondly, according to historical statistical analysis, the distributions of tasks and workers were divided into two balanced parts, and the min-max normalization method was applied to match each task to a certain worker. Finally, the probability of the appearance of the matched workers was calculated to verify the effectiveness of the task distribution. The experimental results on real data show that, compared with the random threshold algorithm and the greedy algorithm, the utility value of the proposed algorithm was increased by 7% and 10% respectively, indicating that the proposed adaptive threshold algorithm can reduce randomness and improve the utility value in task assignment.
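A minimal sketch of the min-max step (the score features and their combination are assumptions): raw quantities on different scales are mapped into [0, 1] before matching a task to a worker:

    def min_max(values):
        # Scale a list of raw scores into [0, 1].
        lo, hi = min(values), max(values)
        return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

    # Toy example: choose the worker with the best combined normalized score.
    distances = [120.0, 40.0, 300.0]         # smaller is better
    rewards = [5.0, 2.0, 9.0]                # larger is better
    d_norm, r_norm = min_max(distances), min_max(rewards)
    scores = [r - d for d, r in zip(d_norm, r_norm)]
    best_worker = scores.index(max(scores))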
Mutual information maximum value filter criteria combined with particle swarm optimization algorithm in feature gene selection for tumor classification
YU Dekuang, YANG Yi
2018, 38(2): 421-426. DOI: 10.11772/j.issn.1001-9081.2017061609
Gene data are characterized by small samples, high dimensionality and high redundancy, which easily lead to the "curse of dimensionality" and "over-fitting" in feature gene selection. To overcome these obstacles, a feature gene selection algorithm named Mutual Information Maximum Value Filter Criteria-Inertia-Weight Particle Swarm Optimization (MIMVFC-IWPSO) was proposed. Firstly, the interaction between genes was calculated by newly defined gene-category and gene-gene feature entropies, and a Feature Gene Candidate Subset (FGCS) was obtained by MIMVFC (Mutual Information Maximum Value Filter Criteria), which reduces the scope of classification operations and improves the probability that feature genes are covered. Secondly, the Particle Swarm Optimization (PSO) algorithm was reconstructed into IWPSO (Inertia Weight Particle Swarm Optimization) by introducing a self-adjusting inertia weight, which gives the algorithm strong global optimization ability in the early iterations and strong local search ability in the later iterations. Lastly, the Core Feature Gene Subset (CFGS) was extracted from the FGCS by IWPSO and exploited to classify samples into tumor and normal classes. Experiments were carried out on three public tumor gene databases. Compared with four popular filter methods, MIMVFC achieved a higher correct classification rate than the methods based on Signal-to-Noise Ratio (SNR), t-statistic and Information Gain (IG), and ranked nearly the same as the Chi-Square method, while still offering an optimization step to further enhance the results. For the same FGCS, compared with BPSO-CGA (Binary Particle Swarm Optimization and Combat Genetic Algorithm), an algorithm with good performance, IWPSO obtained a smaller CFGS with slightly increased time consumption and higher accuracy; compared with classic PSO, IWPSO obtained a smaller CFGS with less time consumption and higher accuracy. The simulation results show that MIMVFC-IWPSO has comprehensive classification performance in terms of both accuracy and efficiency, proves to be feasible and effective in feature gene selection for multiple types of tumors, and can assist in the design and validation of molecular biology experiments.
Biological sequence classification algorithm based on density-aware patterns
HU Yaowei, DUAN Lei, LI Ling, HAN Chao
2018, 38(2): 427-432. DOI: 10.11772/j.issn.1001-9081.2017071767
Concerning the unsatisfactory classification accuracy and low model-training efficiency of existing pattern-based classification methods, the concept of density-aware patterns and a biological sequence classification algorithm based on them, namely BSC (Biological Sequence Classifier), were proposed. Firstly, frequent sequence patterns were mined based on the density-aware concept. Then, the mined frequent sequence patterns were filtered and sorted to design classification rules. Finally, sequences not yet classified were classified by the rules. In a number of experiments conducted on four real biological sequence datasets, the influence of BSC algorithm parameters on the results was analyzed and recommended parameter settings were provided. Meanwhile, the experimental results showed that the accuracy of the BSC algorithm was improved by at least 2.03 percentage points compared with four other pattern-based baseline algorithms. The results indicate that the BSC algorithm has high biological sequence classification accuracy and execution efficiency.
Differential crow search algorithm based on Lévy flight for solving discount {0-1} knapsack problem
LIU Xuejing, HE Yichao, LU Fengjia, WU Congcong, CAI Xiufeng
2018, 38(2): 433-442. DOI: 10.11772/j.issn.1001-9081.2017071852
Large-scale Discount {0-1} Knapsack Problems (D{0-1} KP) are difficult to solve with deterministic algorithms, so a differential crow search algorithm based on Lévy flight, named LDECSA, was proposed. Firstly, mixed coding was used to address the coding problem of the second mathematical model of D{0-1} KP. Secondly, a New greedy Repair and Optimization Algorithm (NROA) was used to deal with infeasible solutions. Thirdly, in order to avoid local optima and slow convergence, Lévy flight and a differential strategy were introduced. Finally, reasonable values of the awareness probability and flight length were determined through experiments, and the differential strategy was chosen accordingly. The experimental results on four types of large-scale D{0-1} KP show that LDECSA is well suited to solving large-scale D{0-1} KP, yielding very satisfactory approximate solutions.
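A minimal sketch of a Lévy-flight step via Mantegna's algorithm (the exponent beta and the step scale are assumptions, not the authors' exact update rule); the heavy-tailed steps mix many small moves with occasional long jumps, which is what helps escape local optima:

    import math
    import random

    def levy_step(beta=1.5):
        # Draw one Levy-distributed step using Mantegna's algorithm.
        num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
        sigma_u = (num / den) ** (1 / beta)
        u = random.gauss(0, sigma_u)
        v = random.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    # Perturb a candidate solution component-wise.
    position = [0.2, -1.3, 0.7]
    position = [x + 0.01 * levy_step() for x in position]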
Improved teaching-learning-based optimization algorithm based on self-learning mechanism
TONG Nan, FU Qiang, ZHONG Caiming
2018, 38(2): 443-447. DOI: 10.11772/j.issn.1001-9081.2017081953
Aiming at the problems of low convergence precision and premature convergence of Teaching-Learning-Based Optimization (TLBO) algorithms, an improved Self-Learning mechanism-based TLBO (SLTLBO) algorithm was proposed. A more complete learning framework was constructed for students in SLTLBO: after completing normal learning in the "teaching" and "learning" stages, students further compare their differences from the teacher and the worst students, and independently carry out various learning operations to enhance their knowledge level and improve the convergence accuracy of the algorithm. Meanwhile, students perform self-examination through Gaussian search to jump out of local optima and achieve better global search. The performance of SLTLBO was tested on 10 benchmark functions and compared with algorithms including Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC) and TLBO. The experimental results verify the effectiveness of the proposed SLTLBO algorithm.
Person re-identification method based on block sparse representation
SUN Jinyu, WANG Hongyuan, ZHANG Ji, ZHANG Wenwen
2018, 38(2): 448-453. DOI: 10.11772/j.issn.1001-9081.2017082491
Focusing on person re-identification across non-overlapping camera views and the high dimensionality of features extracted from images, a person re-identification method based on block sparse representation was proposed. Canonical Correlation Analysis (CCA) was used for feature projection transformation, improving feature matching ability while avoiding the curse of dimensionality caused by operating on high-dimensional features, so that the feature vectors of a probe image are approximately linearly related to the corresponding gallery feature vectors in the learned CCA projection space. A person re-identification model was built on the block structure of the pedestrian dataset, and the associated optimization problem was solved within the alternating direction framework. Finally, the residuals were used to identify persons in the probe set, with the index of the minimum residual taken as the person's identity. Experiments were conducted on the public datasets PRID 2011, iLIDS-VID and VIPeR. The experimental results show that the Rank-1 matching rate of the proposed method on the three datasets reaches 40.4%, 38.11% and 23.68% respectively, significantly higher than that of the Large Margin Nearest Neighbor (LMNN) method, and its overall performance is better than that of classical algorithms based on feature representation and metric learning. The results verify the effectiveness of the proposed method for person re-identification.
Human action recognition based on coupled multi-Hidden Markov model and depth image data
ZHANG Quangui, CAI Feng, LI Zhiqiang
2018, 38(2): 454-457. DOI: 10.11772/j.issn.1001-9081.2017081945
In order to solve the problems that feature extraction is easily affected by external factors and that computational complexity is high, depth data were used for human action recognition as a more effective solution. Using the joint data collected by Kinect, the human skeleton was divided into five regions. The vector angle of each region was discretized to describe different states, and the Baum-Welch algorithm was then used to learn a multi-Hidden Markov Model (multi-HMM); meanwhile, the forward algorithm was used to establish the region-action class probability matrix. On this basis, intra-coupling and inter-coupling analyses of the regions and action categories were performed to express the interaction between the joints. Finally, a coupling-based K-Nearest Neighbors (KNN) algorithm was used to complete the action recognition. The experimental results show that the recognition rates of the five actions reach above 90%, and the comprehensive recognition rate is higher than that of contrast methods such as 3D Trajectories, giving the proposed algorithm an obvious advantage.
Clustering ensemble algorithms based on improved genetic algorithm in cloud computing
XU Zhanyang, ZHENG Kezhang
2018, 38(2): 458-463. DOI: 10.11772/j.issn.1001-9081.2017071749
Considering that unsupervised clustering lacks prior information about data classification, that the accuracy of base clusterings is affected by the clustering algorithm, and that general clustering ensemble algorithms have high space complexity, a Clustering Ensemble algorithm based on an Improved Genetic Algorithm (CEIGA) was proposed. Focusing on the issue that traditional clustering ensemble algorithms cannot meet the time requirements of large-scale data processing, a Parallel Clustering Ensemble algorithm based on an Improved Genetic Algorithm (PCEIGA) using Hadoop for cloud computing was also proposed. Firstly, the base clustering partitions produced by the base clustering generation mechanism were encoded as the initial population of the improved Genetic Algorithm (GA) after relabeling the clusters. Secondly, the diversity of base clusterings was ensured by improving the selection operator of the GA. Crossover and mutation operations were applied to chromosomes according to the improved selection operator, and the next generation population was obtained by an elitist strategy to ensure the accuracy of base clusterings. In this way, the final clustering ensemble result reaches the global optimum and the accuracy of the algorithm is improved. To improve efficiency, two MapReduce processes were designed and one Combine process was added to reduce communication among nodes. Finally, CEIGA, PCEIGA and four state-of-the-art clustering ensemble algorithms were compared on UCI datasets. The experimental results show that CEIGA performs better than the other state-of-the-art clustering ensemble algorithms, and PCEIGA can significantly reduce running time and improve efficiency without decreasing the accuracy of the clustering results.
k-core filtered influence maximization algorithms in social networks
LI Yuezhi, ZHU Yuanyuan, ZHONG Ming
2018, 38(2): 464-470. DOI: 10.11772/j.issn.1001-9081.2017071820
Concerning the limited influence scope and high time complexity of existing influence maximization algorithms in social networks, a k-core filtered algorithm based on the independent cascade model was proposed. Firstly, an existing influence maximization algorithm whose node ranking does not depend on the entire network was introduced. Secondly, pre-training was carried out to find the value of k with the best optimization effect on existing algorithms, which is independent of the number of selected seeds. Finally, the nodes and edges that do not belong to the k-core subgraph were filtered out by computing the k-core of the graph, and the existing influence maximization algorithms were then applied on the k-core subgraph, thus reducing computational complexity. Several experiments were conducted on datasets of different scales to show that the k-core filtered algorithm has different optimization effects on different influence maximization algorithms. Combined with the k-core filtered algorithm, compared with the original Prefix excluding Maximum Influence Arborescence (PMIA) algorithm, the influence range is increased by 13.89% and the execution time is reduced by as much as 8.34%; compared with the original Core Covering Algorithm (CCA), the influence range shows no obvious difference and the execution time is reduced by as much as 28.5%; compared with the original OutDegree algorithm, the influence range is increased by 21.81% and the execution time is reduced by as much as 26.96%; compared with the original Random algorithm, the influence range is increased by 71.99% and the execution time is reduced by as much as 24.21%. Furthermore, a new influence maximization algorithm named GIMS (General Influence Maximization in Social network) was proposed. Compared with PMIA and Influence Rank Influence Estimation (IRIE), it has a wider influence range while keeping execution time at the second level, and its influence range and execution time do not change significantly when it is combined with the k-core filtered algorithm. The experimental results show that the k-core filtered algorithm can effectively increase the influence range of existing algorithms and reduce their execution times, and that the proposed GIMS algorithm has a wider influence range, better efficiency and greater robustness.
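A minimal sketch of the filtering step with networkx (the toy graph and the value of k are assumptions): compute the k-core, drop everything outside it, then hand the subgraph to any seed-selection routine:

    import networkx as nx

    def k_core_filter(g: nx.Graph, k: int) -> nx.Graph:
        # Keep only the k-core subgraph before running influence maximization.
        return nx.k_core(g, k)

    g = nx.karate_club_graph()
    sub = k_core_filter(g, 2)                # nodes/edges outside the 2-core are pruned
    print(g.number_of_nodes(), "->", sub.number_of_nodes())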
Dynamic Top-K interesting subgraph query on large-scale labeled graph
SONG Baoyan, JIA Chunjie, SHAN Xiaohuan, DING Linlin, DING Xingyan
2018, 38(2): 471-477. DOI: 10.11772/j.issn.1001-9081.2017082360
Traditional algorithms are difficult to apply to Top-K subgraph queries on large-scale dynamic labeled graphs due to their high time or space complexity. For this reason, a dynamic Top-K interesting subgraph query method named DISQtop-K was proposed. In this method, a Graph Topology Structure Feature (GTSF) index, which includes a Node Topology Feature (NTF) index and an Edge Feature (EF) index, was established to effectively prune invalid nodes and edges. A multi-factor candidate set filtering strategy based on the GTSF index was then put forward to further prune the query graph candidate sets. Considering that dynamic changes in the graph may affect the matching results, to ensure the real-time accuracy of the query results, a new matching-verification method for Top-K interesting subgraphs was also given, which has two stages: initial matching and dynamic correction. Experimental results show that, compared with RAM and RWM, the DISQtop-K method takes less time for index creation, occupies less space, and can effectively handle dynamic Top-K interesting subgraph queries on large-scale labeled graphs.
Query optimization based on Greenplum database
ZOU Chengming, XIE Yi, WU Pei
2018, 38(2): 478-482. DOI: 10.11772/j.issn.1001-9081.2017081916
In order to solve the problem that the query efficiency of distributed databases decreases as data scale increases, the Greenplum distributed database was taken as the research object, and a cost-based optimal query plan generation scheme was proposed from the perspective of optimizing the query path. Firstly, an effective cost model was designed to estimate query cost. A parallel max-min ant colony algorithm was then used to search for the join order with the minimum query cost, i.e. the optimal join order. Finally, the optimal query plan was obtained based on the Greenplum database's default optimal choices for the different operations in the query plan. Experiments were carried out on a self-generated dataset and the Transaction Processing Performance Council Benchmark H (TPC-H) standard dataset. The experimental results show that the proposed optimization scheme can effectively search out the optimal solution and obtain the optimal query plan, improving the query efficiency of the Greenplum database.
Design of mixed data clustering algorithm based on density peak
LI Ye, CHEN Yiyan, ZHANG Shufen
2018, 38(2): 483-490. DOI: 10.11772/j.issn.1001-9081.2017082053
Focusing on the issue that the k-prototypes algorithm can neither automatically identify the number of clusters nor discover clusters of arbitrary shape, a mixed data clustering algorithm based on density peak search was proposed. Firstly, the CFSFDP (Clustering by Fast Search and Find of Density Peaks) algorithm was extended to mixed datasets: the distances between mixed data objects were calculated and the cluster centers were determined using CFSFDP, so the number of clusters is determined automatically; the remaining points were then assigned to clusters in descending order of their density. Secondly, the selection of the threshold and weights in the proposed algorithm was introduced: in the density formula, the threshold (cutoff distance) is extracted automatically by calculating the potential entropy of the data field; in the distance formula, the weights are defined through statistics that measure the clustering tendency of the numeric and categorical attributes. Finally, experimental results on three real mixed datasets show that, compared with the k-prototypes algorithm, the proposed algorithm can effectively improve clustering accuracy.
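A minimal sketch of the two CFSFDP quantities on numeric data (the toy points and the cutoff distance are assumptions; the paper's mixed-data distance and entropy-based cutoff are not reproduced here): each point's local density rho and its distance delta to the nearest higher-density point, with cluster centers being the points where both are large:

    import numpy as np

    def density_peaks(dist, dc):
        # rho: neighbors within the cutoff; delta: distance to nearest denser point.
        n = dist.shape[0]
        rho = (dist < dc).sum(axis=1) - 1    # exclude the point itself
        delta = np.zeros(n)
        for i in range(n):
            higher = np.where(rho > rho[i])[0]
            delta[i] = dist[i].max() if higher.size == 0 else dist[i, higher].min()
        return rho, delta

    pts = np.random.rand(50, 2)
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    rho, delta = density_peaks(dist, dc=0.15)  # candidates: large rho and large delta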
Mining high gain rate co-location patterns with neighboring effects
ZENG Xin, LI Xiaowei, YANG Jian
2018, 38(2): 491-496. DOI: 10.11772/j.issn.1001-9081.2017081938
Most spatial co-location pattern mining methods use a distance threshold as the standard for measuring the neighboring relation among instances of different objects and then mine frequent co-location patterns, but they do not consider the interaction between instances with neighboring relations or the gain rate of patterns. By introducing the interaction rate between instances and the seasonal average income of objects into the spatial co-location pattern mining process, the concepts of object effect rate, suite total income and gain rate were defined, and a basic algorithm named NAGA and an efficient pruning algorithm named NAGA_JZ for mining high gain rate co-location patterns were put forward. Finally, a large number of experiments were carried out to verify the correctness and practicability of the basic algorithm, and the mining efficiencies of the basic algorithm and the pruning algorithm were compared. The experimental results prove the high efficiency of the pruning algorithm.
Simplified Slope One algorithm for online rating prediction
SUN Limei, LI Yue, Ejike Ifeanyi Michael, CAO Keyan
2018, 38(2): 497-502. DOI: 10.11772/j.issn.1001-9081.2017082493
In the era of big data, personalized recommendation systems are an effective means of information filtering, and data sparsity is one of the main factors affecting prediction accuracy. The Slope One online rating prediction algorithm uses a simple linear regression model to cope with data sparsity; it is easy to implement and predicts ratings quickly, but its training stage must run offline because generating the rating differences between items incurs high time and space costs. To solve these problems, a simplified Slope One algorithm was proposed, which simplifies the most time-consuming step of the training stage by using each item's historical average rating to obtain the rating difference between items. The simplified algorithm reduces the time and space complexity of the algorithm, effectively improves the utilization of the rating data, and adapts better to sparse data. In the experiments, the rating records in the MovieLens dataset were ordered by timestamp and then divided into a training set and a test set. The experimental results show that the accuracy of the simplified Slope One algorithm closely approximates that of the original Slope One algorithm, while its time and space complexity are lower, making it more suitable for large-scale recommendation system applications with rapidly growing data.
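A minimal sketch of the simplification (the toy ratings are assumptions): instead of maintaining a pairwise item-item deviation matrix, the deviation between two items is approximated by the difference of their historical average ratings:

    def item_averages(ratings):
        # ratings: dict user -> {item: score}; returns each item's mean rating.
        totals = {}
        for user_ratings in ratings.values():
            for item, r in user_ratings.items():
                s, c = totals.get(item, (0.0, 0))
                totals[item] = (s + r, c + 1)
        return {item: s / c for item, (s, c) in totals.items()}

    def predict(user_ratings, target, avg):
        # Simplified Slope One: dev(target, j) is approximated by avg[target] - avg[j].
        preds = [r + (avg[target] - avg[j]) for j, r in user_ratings.items() if j in avg]
        return sum(preds) / len(preds)

    ratings = {"u1": {"a": 4, "b": 3}, "u2": {"a": 5, "b": 4, "c": 2}}
    avg = item_averages(ratings)
    print(predict({"a": 4, "b": 3}, "c", avg))

Because only per-item averages are kept, the expensive offline pass over all item pairs disappears.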
Service function chain construction method based on node utility maximization
ZHANG Chuanhao, ZHOU Qiao
2018, 38(2): 503-508. DOI: 10.11772/j.issn.1001-9081.2017081971
Networks rely heavily on middleboxes to provide critical service functions. With the development of Software Defined Network (SDN) and Network Function Virtualization (NFV) technology, how to use these new technologies to deploy middleboxes and steer flows through a specific sequence of middleboxes to complete a service function chain remains an open problem. A construction method based on node utility maximization, namely NUM (Node Utility Maximization), was proposed for the service function chain construction problem, which considers the deployment of virtual middleboxes and traffic steering at the same time. Firstly, a collaborative service function chain construction mechanism was designed based on SDN+NFV technology. Secondly, a node selection model and a utility maximization model were introduced into this mechanism to solve the middlebox deployment and traffic steering problems. Finally, the model was solved by a Tabu-search-combined simulated annealing algorithm. The simulation results show that the proposed NUM method is superior to traditional algorithms in terms of construction time, success rate and network congestion rate, and that node utility is improved by about 20% with the proposed construction method.
Distributed quantized subgradient optimization algorithm for multi-agent switched networks
LI Jiadi, MA Chi, LI Dequan, WANG Junya
2018, 38(2): 509-515. DOI: 10.11772/j.issn.1001-9081.2017081927
Existing distributed subgradient optimization algorithms are mainly based on ideal assumptions: the network topology is balanced, and the agents exchange the exact values of their state variables. To relax these assumptions, a distributed subgradient optimization algorithm for switched networks was proposed based on limited quantized information communication. All information along each dynamical edge was quantized by a uniform quantizer with a limited quantization level before being sent over an unbalanced switching network, and the convergence of the multi-agent distributed quantized subgradient optimization algorithm was proved by the non-quadratic Lyapunov function method. Finally, simulation examples were given to demonstrate the effectiveness of the proposed algorithm. The simulation results show that, under the same bandwidth, the convergence rate of the proposed algorithm can be improved by adjusting the parameters of the quantizer. By weakening the assumptions on the adjacency matrix and the network bandwidth requirement, the proposed algorithm is more suitable for practical applications.
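A minimal sketch of a uniform quantizer with a limited number of levels (the range and level count are assumptions): each transmitted value is saturated to the quantizer range and snapped to the nearest level, which is what neighbors exchange instead of exact states:

    def uniform_quantize(x, lo=-10.0, hi=10.0, levels=16):
        # Quantize x onto `levels` evenly spaced values in [lo, hi].
        step = (hi - lo) / (levels - 1)
        x = min(max(x, lo), hi)              # saturate to the quantizer range
        return lo + round((x - lo) / step) * step

    sent = uniform_quantize(3.1415)          # the value actually put on the channel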
Indoor localization algorithm based on feedback correction of dynamic nearest neighbors
DANG Xiaochao, HEI Yili, HAO Zhanjun, LI Fenfang
2018, 38(2): 516-521. DOI: 10.11772/j.issn.1001-9081.2017071777
To address the problem that the accuracy of Received Signal Strength Indicator (RSSI)-based indoor localization algorithms for Wireless Sensor Networks (WSN) is easily affected by channel interference and the environment, a new indoor localization algorithm based on Feedback Correction of Dynamic Nearest Neighbors, namely FC-DNN, was proposed. Firstly, the minimum localization region was determined by partitioning the whole environment with a Voronoi diagram before positioning. Then the parameters of the path loss model of each region were calculated to obtain precise distances between nodes. Finally, the Spearman rank correlation coefficient was introduced to select neighbor anchor nodes dynamically, and the feedback of the neighbor anchor nodes was used to further improve localization accuracy. The simulation results confirm that the proposed FC-DNN algorithm has low time complexity, little computation and low energy consumption; furthermore, compared with the conventional RSSI-based Distance Difference Localization Algorithm (DDLA) and sensor network localization in COnstrained 3-D spaces (CO-3D), the average positioning error is reduced by about 15 percentage points, which can well meet the requirements of indoor localization.
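A minimal sketch of the neighbor-selection idea (scipy and the toy RSSI fingerprints are assumptions): the Spearman rank correlation between the target's RSSI fingerprint and each anchor's fingerprint can rank candidate neighbor anchors:

    from scipy.stats import spearmanr

    target = [-48, -61, -70, -55]            # RSSI fingerprint of the unknown node
    anchors = {"A1": [-50, -60, -72, -54],
               "A2": [-70, -45, -52, -66]}
    ranked = sorted(anchors,
                    key=lambda a: spearmanr(target, anchors[a]).correlation,
                    reverse=True)            # most rank-correlated anchors first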
Design of transform domain communication system for long distance aeronautical communication
WANG Guisheng, REN Qinghua, XU Bingzheng, LIU Yang
2018, 38(2): 522-527. DOI: 10.11772/j.issn.1001-9081.2017071733
In order to solve the problem of communication interference in the Transform Domain Communication System (TDCS) under complex electromagnetic environments, a TDCS model based on a store-and-forward mechanism was proposed. Based on an analysis of the electromagnetic spectrum environment, the spectrum sensing cases of the transmitter and receiver were classified according to their respective perceptions. Building on the analysis of the consistent-spectrum case, the TDCS model for long-distance aeronautical communication was established with store-and-forward modules for the inconsistent-spectrum case, and specific communication schemes were designed, which can effectively improve system performance. The simulation results show that the interference model and spectrum settings are reasonable, and that under consistent spectrum conditions the performance of TDCS is close to the ideal bit error rate. Under inconsistent spectrum conditions with more concentrated comb spectrum interference, the bit error rate is reduced by about 24.48%; moreover, as the Signal-to-Noise Ratio (SNR) increases, the performance improvement becomes more obvious, amounting to about 1 dB at the same bit error rate.
Moving target tracking algorithm based on improved Camshift for through-wall-radar imaging
LI Songlin, JIA Yong, GUO Yong, ZHONG Xiaoling, CUI Guolong
2018, 38(2): 528-532. DOI: 10.11772/j.issn.1001-9081.2017071787
In view of the flicker and jitter that characterize moving-target images formed by through-wall radar, a moving target tracking algorithm based on improved Camshift for through-wall-radar imaging was proposed. For the continuous multi-frame images formed by the through-wall radar and the corresponding color probability distribution maps, a target prediction step was first introduced to determine the wave gate for target searching in the image, removing clutter interference outside the gate. Then, to extract the target location effectively, an iterative adjustment strategy for the scale of the target search window, driven by the color probability distribution map, was designed to adaptively match a target image of changing shape and size. Finally, a continuous and smooth moving-target trajectory was obtained by applying α-β filtering to the extracted multi-frame target locations. Experimental results on Multi-Input Multi-Output (MIMO) through-wall radar show that, compared with the traditional Camshift and Meanshift algorithms, the tracking error of the improved algorithm is reduced by 40.99% and 43.09% respectively, yielding a more accurate and smoother moving target trajectory.
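The α-β filter used for trajectory smoothing is a standard fixed-gain tracker; a minimal one-dimensional sketch, with illustrative gains and measurements:

```python
def alpha_beta_filter(zs, dt=1.0, alpha=0.5, beta=0.1):
    """Smooth noisy position measurements zs with a fixed-gain alpha-beta filter."""
    x, v = zs[0], 0.0                 # initial position and velocity estimates
    out = []
    for z in zs[1:]:
        x_pred = x + v * dt           # predict position from current state
        r = z - x_pred                # innovation (measurement residual)
        x = x_pred + alpha * r        # correct position
        v = v + (beta / dt) * r       # correct velocity
        out.append(x)
    return out

# Example: jittery target locations extracted frame by frame
print(alpha_beta_filter([0.0, 1.2, 1.9, 3.4, 3.8, 5.1]))
```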
Image inpainting algorithm based on priori constraints and statistics
CAO Daming, ZHAI Donghai, MENG Hongyue, LI Mengxue, FENG Yan
2018, 38(2): 533-538. DOI: 10.11772/j.issn.1001-9081.2017071898
When inpainting images with large damaged regions, complex geometric structure and rich texture, the PatchMatch-based image inpainting algorithm suffers from texture extension and from incorrect sample patches being selected as candidates. To solve these problems, a new image inpainting algorithm was proposed to improve accuracy and efficiency. For exact matching of sample patches, the image was preprocessed to obtain prior information, which was used to initialize the constraint of the offset map, whereas the original PatchMatch algorithm uses global random initialization. In the pixel-patch matching process, to improve matching accuracy, a mean method and an angle method were introduced to compute the similarity of different categories of pixel patches. For efficiency, according to the statistical characteristics of similar patches in an image, a histogram statistics method was introduced to reduce the number of labels used for inpainting. The proposed algorithm was verified on several instances. The simulation results show that, compared with the original PatchMatch algorithm, the Peak Signal-to-Noise Ratio (PSNR) of the proposed algorithm is improved by 0.5 dB to 1 dB and the running time is reduced by 5 s to 10 s, indicating that the proposed algorithm can effectively improve the accuracy and efficiency of image inpainting.
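The abstract does not define the two similarity measures precisely; a plausible reading, sketched below under that assumption, is a mean-intensity comparison and a vector-angle (cosine) comparison between flattened patches:

```python
import numpy as np

def mean_similarity(p, q):
    """Mean method (assumed reading): compare average intensities of two patches."""
    return -abs(p.mean() - q.mean())

def angle_similarity(p, q):
    """Angle method (assumed reading): cosine of the angle between flattened patches."""
    a, b = p.ravel(), q.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(1)
p, q = rng.random((7, 7)), rng.random((7, 7))   # hypothetical 7x7 pixel patches
print(mean_similarity(p, q), angle_similarity(p, q))
```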
Encrypted image retrieval algorithm based on discrete wavelet transform and perceptual hash
ZHANG Chunyan, LI Jingbing, WANG Shuangshuang
2018, 38(2): 539-544. DOI: 10.11772/j.issn.1001-9081.2017071892
Focusing on the secure retrieval of medical images on cloud servers, an encrypted medical image retrieval algorithm based on Discrete Wavelet Transform (DWT) and perceptual hashing was proposed. Firstly, the image was encrypted in the frequency domain based on the characteristics of the Henon map. Secondly, the encrypted medical image was decomposed by wavelet transform to obtain a sub-image close to the original. Exploiting the characteristics of the Discrete Cosine Transform (DCT), the perceptual hash sequence of the image was obtained by comparing each DCT coefficient with the mean of the coefficients. Finally, encrypted medical image retrieval was achieved by comparing the normalized correlation coefficients between perceptual hash sequences. Compared with the hash algorithm based on Non-negative Matrix Factorization (NMF), the proposed algorithm improves retrieval accuracy by nearly 40% under Gaussian noise, and its accuracy does not change noticeably under JPEG compression, median filtering, scaling and ripple distortion attacks. Experimental results show that the proposed algorithm is strongly robust against both geometric and conventional attacks, and also reduces the time complexity of image encryption.
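A minimal sketch of a DCT-based perceptual hash of the kind described (bit = coefficient above the mean) together with the normalized correlation used for retrieval; the 8x8 low-frequency block size is an assumption for illustration:

```python
import numpy as np
from scipy.fftpack import dct

def perceptual_hash(img, k=8):
    """Hash bit i is 1 iff the i-th low-frequency DCT coefficient exceeds the mean."""
    c = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")
    low = c[:k, :k].ravel()              # keep the k*k low-frequency coefficients
    return (low > low.mean()).astype(np.uint8)

def normalized_correlation(h1, h2):
    """Normalized correlation between two hash sequences, used for retrieval."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(2)
img = rng.random((64, 64))               # stand-in for a wavelet sub-image
print(normalized_correlation(perceptual_hash(img), perceptual_hash(img + 0.01)))
```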
Left ventricle segmentation in transesophageal echocardiography based on supervised descent method
WEI Yuxi, WU Yueqing, TAO Pan, YAO Yu
2018, 38(2): 545-549. DOI: 10.11772/j.issn.1001-9081.2017071859
Appearance-model-based image segmentation methods have high computational complexity when iteratively locating feature points, and their nonlinear local features are difficult to optimize. To address these problems and locate the feature points of the left ventricular endocardium and epicardium, a gradient descent algorithm based on supervised learning was proposed: a four-level multi-resolution pyramid model was built, and a new feature extraction function based on the Bhattacharyya coefficient, namely B-SIFT, was used to replace the Scale Invariant Feature Transform (SIFT) feature of the original method. Firstly, the training set images were normalized to unify the size of each TransEsophageal Echocardiography (TEE) image. Then a supervised descent model based on B-SIFT and the multi-resolution pyramid was built to obtain a sequence of gradient descent directions approaching the ground truth. Finally, the learned direction sequence was applied to the test set to obtain the left ventricle segmentation results. The experimental results show that, compared with the traditional supervised gradient descent method, the average segmentation error of the proposed method is reduced by 47%, and its iterations approach the ground truth more closely than those of the single-scale method.
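For orientation, the supervised descent idea learns, at each cascade stage, a linear map from image features to landmark updates; a minimal training and inference sketch with a made-up feature function standing in for B-SIFT:

```python
import numpy as np

def train_sdm(X0, X_true, features, steps=4, lam=1e-3):
    """Learn a cascade of linear descent maps R_k: x <- x + phi(x) @ R_k."""
    X = X0.copy()
    cascade = []
    for _ in range(steps):
        Phi = np.stack([features(x) for x in X])    # per-sample feature vectors
        dX = X_true - X                             # residuals to regress
        # Ridge regression for this stage's descent map.
        R = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ dX)
        X = X + Phi @ R
        cascade.append(R)
    return cascade

def apply_sdm(x, cascade, features):
    for R in cascade:
        x = x + features(x) @ R
    return x

rng = np.random.default_rng(3)
feat = lambda x: np.concatenate([x, np.sin(x)])     # stand-in for B-SIFT features
X_true = rng.random((50, 6))                        # 3 landmark (x, y) pairs
X0 = X_true + 0.3 * rng.normal(size=X_true.shape)   # mean-shape initialization
cascade = train_sdm(X0, X_true, feat)
print(np.abs(apply_sdm(X0[0], cascade, feat) - X_true[0]).mean())
```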
Highly efficient virtual machine consolidation method in cloud data center
YU Xinrong, LI Zhihua, YAN Chengyu, LI Shuangli
2018, 38(2): 550-556. DOI: 10.11772/j.issn.1001-9081.2017061588
Concerning the problem that the workload of hosts in a data center cannot stay stable over the long term under traditional Virtual Machine Consolidation (VMC), a highly efficient Gaussian Mixture Model-based VMC (GMM-VMC) method was proposed. Firstly, to accurately predict the variation trend of host workloads, a Gaussian Mixture Model (GMM) was fitted to each host's workload history. Then, the overload probability of a host was calculated from the GMM of its workload and its resource capacity, and this probability was taken as the criterion for deciding whether the host is overloaded. From each overloaded host, the virtual machines whose migration most reduces the overload risk while requiring the least migration time were selected for migration, and they were placed on new hosts whose workload variation, as estimated by the GMM, is least affected by the placement. GMM-VMC was validated with the CloudSim toolkit and compared with other methods in terms of energy consumption, Quality of Service (QoS) and consolidation efficiency. The experimental results show that GMM-VMC can reduce data center energy consumption and improve QoS.
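A minimal sketch of the overload-probability step, fitting a GMM to a host's utilization history and estimating the mass above a capacity threshold; the synthetic history, two components, and 0.8 threshold are illustrative choices:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Hypothetical CPU utilization history of one host (fraction of capacity).
history = np.concatenate([rng.normal(0.35, 0.05, 300), rng.normal(0.7, 0.08, 100)])
gmm = GaussianMixture(n_components=2).fit(history.reshape(-1, 1))

# Overload probability = probability mass the GMM places above the threshold,
# estimated here by sampling from the fitted model.
samples, _ = gmm.sample(100_000)
p_overload = float((samples > 0.8).mean())
print(f"estimated overload probability: {p_overload:.3f}")
```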
Elastic scheduling strategy for cloud resource based on Docker
PENG Liping, LYU Xiaodan, JIANG Chaohui, PENG Chenghui
2018, 38(2): 557-562. DOI: 10.11772/j.issn.1001-9081.2017081943
Considering the problem of elastic scheduling of cloud resources and the characteristics of Ceph data storage, a cloud resource elastic scheduling strategy based on Docker containers was proposed. First, it was pointed out that Docker container data volumes cannot be used across different hosts, which makes online application migration difficult, so the data storage method of the Ceph cluster was improved. Furthermore, a resource scheduling optimization model based on the comprehensive load of nodes was established. Finally, combining the characteristics of the Ceph cluster and Docker containers, Docker Swarm orchestration was used to achieve container deployment and online application migration with both data storage and cluster load taken into account. The experimental results show that, compared with several other scheduling strategies, the proposed strategy achieves elastic scheduling of cloud platform resources by partitioning cluster resources at a finer granularity, utilizes cloud platform resources reasonably, and reduces data center operating costs while preserving application performance.
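The abstract does not give the form of the comprehensive load; a plausible sketch, under the assumption that it is a weighted sum of per-resource utilizations (the weights and node data below are made up):

```python
def comprehensive_load(cpu, mem, net, disk, weights=(0.4, 0.3, 0.2, 0.1)):
    """Weighted comprehensive load of a node; utilizations are in [0, 1]."""
    w_cpu, w_mem, w_net, w_disk = weights
    return w_cpu * cpu + w_mem * mem + w_net * net + w_disk * disk

nodes = {"node1": (0.82, 0.60, 0.30, 0.25), "node2": (0.40, 0.35, 0.20, 0.15)}
loads = {n: comprehensive_load(*u) for n, u in nodes.items()}
# Schedule the next container onto the least-loaded node.
print(min(loads, key=loads.get), loads)
```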
Inverse kinematics equation solving method for six degrees of freedom manipulator based on six dimensional linear interpolation
ZHOU Feng, LIN Nan, CHEN Xiaoping
2018, 38(2): 563-567. DOI: 10.11772/j.issn.1001-9081.2017061494
A six-dimensional linear interpolation theory was proposed to solve the difficult inverse kinematics problem of a six Degree Of Freedom (DOF) manipulator with a general structure. Firstly, seven adjacent, non-linearly correlated nodes were searched out of a large body of empirical data to form a hyper-body. Secondly, these seven nodes were used to derive a six-dimensional linear predictive function. Finally, the predictive function was used to interpolate and inversely interpolate, predicting poses and joint angles. In a Matlab simulation, one million groups of empirical data were generated from the forward kinematics equation, and the target pose was inversely interpolated iteratively to predict the six joint angles. The experimental results show that, compared with the Radial Basis Function Network (RBFN) and the plain six-dimensional linear inverse interpolation method, the proposed method approaches the target pose faster and more accurately. Being data-driven, it avoids complicated theory and can meet the requirements of everyday robot applications.
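Seven affinely independent nodes are exactly what is needed to determine an affine (linear plus offset) map on a six-dimensional space; a minimal sketch of building such a predictive function from the seven nodes, on toy data rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)
poses = rng.random((7, 6))        # 7 neighboring pose nodes (6-D)
angles = rng.random((7, 6))       # corresponding joint-angle vectors

# Solve for A (7x6) in [pose, 1] @ A = angles: an affine map on 6-D pose space.
P = np.hstack([poses, np.ones((7, 1))])
A = np.linalg.solve(P, angles)

target_pose = rng.random(6)
predicted_angles = np.append(target_pose, 1.0) @ A
print(predicted_angles)
```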
Observation matrix optimization algorithm in compressive sensing based on singular value decomposition
LI Zhou, CUI Chen
2018, 38(2): 568-572. DOI: 10.11772/j.issn.1001-9081.2017071854
In Compressive Sensing (CS), the correlation coefficients are large when the observation matrix is derived from an optimized Gram matrix. To solve this problem, starting from the optimized Gram matrix obtained by an existing algorithm, the row vectors of the observation matrix at which the objective function attains its extreme value were derived through an equivalent transformation of the objective function, and the analytic formula of the extremizing row vector was selected from these candidates by Singular Value Decomposition (SVD) of the error matrix. A new observation matrix optimization algorithm was then put forward, borrowing the row-by-row target matrix optimization idea of the K-SVD algorithm: the observation matrix is optimized iteratively row by row, and the difference between the correlations of the observation matrices produced by two adjacent iterations serves as the stopping criterion. Simulation results show that the correlation between the observation matrix and the sparse basis under the improved algorithm is better than under the original algorithm, thereby reducing the reconstruction error.
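For reference, the correlation being minimized is commonly measured by the mutual coherence of the equivalent dictionary; a minimal sketch of computing it for a given observation matrix and sparse basis (random toy matrices, not the paper's):

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Largest absolute normalized inner product between columns of the
    equivalent dictionary D = Phi @ Psi (observation matrix x sparse basis)."""
    D = Phi @ Psi
    D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
    G = np.abs(D.T @ D)            # Gram matrix of the normalized columns
    np.fill_diagonal(G, 0.0)       # ignore self-correlations
    return G.max()

rng = np.random.default_rng(6)
Phi = rng.normal(size=(20, 64))                      # observation matrix
Psi = np.linalg.qr(rng.normal(size=(64, 64)))[0]     # orthonormal sparse basis
print(mutual_coherence(Phi, Psi))
```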
Model and algorithm for split delivery vehicle routing problem with stochastic travel time
SHI Jianli, ZHANG Jin
2018, 38(2): 573-581. DOI: 10.11772/j.issn.1001-9081.2017071872
To evaluate the effect of split delivery and waiting time on the Vehicle Routing Problem (VRP) with stochastic travel time, a stochastic programming model with correction, accounting for waiting time under soft time windows, was formulated for the Split Delivery VRP (SDVRP) with stochastic travel time, and an improved Particle Swarm Optimization (PSO) was proposed for this problem. First, an improved relative position index method was used to decode the integer-type code, in which split customers may appear more than once. Then, an adaptive selection process was used for the velocity update, since split delivery makes the lengths of the vectors involved differ from one another. Finally, the path relinking method was used to update the positions of the swarm, compensating for the information loss caused by converting between the continuous and discrete spaces. The experimental results on modified Solomon instances show that the all-in cost increases by about 3% on average, and customers tend to choose split delivery when waiting time is considered. Split delivery effectively reduces the all-in cost (by 2%) and the number of vehicles used (by 0.6); in addition, in some instances, especially the R2 instances, the waiting time is reduced by about 0.78%.
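As a rough illustration of the decoding step, a relative position index method ranks the continuous particle components to obtain a visiting order, with a split customer simply listed more than once; the customers and particle below are made up:

```python
import numpy as np

def decode_relative_position(particle, customers):
    """Relative position index decoding: sort the continuous components and
    visit customers in the order of their component ranks."""
    order = np.argsort(particle)
    return [customers[i] for i in order]

# A split customer can be listed twice so it may appear in two routes.
customers = ["c1", "c2", "c3", "c2", "c4"]        # c2 is a split customer
particle = np.array([0.7, 0.1, 0.9, 0.4, 0.2])    # continuous particle position
print(decode_relative_position(particle, customers))
```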
Reception box locating-vehicle routing problems in urban distribution based on nested Logit model
QIU Hanguang, ZHOU Yufeng
2018, 38(2): 582-588. DOI: 10.11772/j.issn.1001-9081.2017071883
In order to analyze how the correlation between last-mile delivery mode and time slot in the customer choice process of urban distribution services affects operational decisions such as reception box location, time slot allocation and vehicle routing, a nested Logit model was used to quantify the customer's choice among delivery service options, and a two-tier nested Logit selection model for urban delivery was proposed. A multi-objective optimization model integrating reception box location, time slot allocation and vehicle routing was then constructed to maximize the delivery amount and minimize the delivery cost. Finally, a Multi-Objective Particle Swarm Optimization (MOPSO) algorithm based on non-dominated sorting, adaptive grids and crowding-distance sorting was constructed to solve the model. The analysis shows that, as the attended-home-delivery independence parameter increases, the substitutability of customer demand across time slots shrinks, and the optimal solutions tend to improve delivery punctuality while increasing the delivery amount, whether the objective is to minimize cost or to maximize deliveries; conversely, as the reception-box independence parameter increases, the optimal solutions reduce delivery punctuality along with the delivery amount.
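For reference, in a two-tier nested Logit model the probability of option i in nest m factors into a nest choice times a within-nest choice; a minimal sketch with made-up utilities and independence parameters:

```python
import numpy as np

def nested_logit_probs(utilities, lambdas):
    """utilities: {nest: {option: V}}, lambdas: {nest: independence parameter}.
    Returns P(option) = P(nest) * P(option | nest)."""
    iv = {m: np.log(np.exp(np.array(list(opts.values())) / lambdas[m]).sum())
          for m, opts in utilities.items()}              # inclusive values
    denom = sum(np.exp(lambdas[m] * iv[m]) for m in utilities)
    probs = {}
    for m, opts in utilities.items():
        p_nest = np.exp(lambdas[m] * iv[m]) / denom
        within = sum(np.exp(v / lambdas[m]) for v in opts.values())
        for o, v in opts.items():
            probs[o] = p_nest * np.exp(v / lambdas[m]) / within
    return probs

# Two nests: attended home delivery vs. reception box, each with two time slots.
U = {"home": {"home_am": 1.0, "home_pm": 0.8}, "box": {"box_am": 0.6, "box_pm": 0.9}}
print(nested_logit_probs(U, {"home": 0.7, "box": 0.5}))
```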
Stability analysis of interactive development between manufacturing enterprise and logistics enterprise based on Logistic-Volterra model
WANG Zhenzhen, WU Yingjie
2018, 38(2): 589-595. DOI: 10.11772/j.issn.1001-9081.2017082011
The traditional literature mainly considers the cooperative relationship between manufacturing and logistics enterprises during interactive development while neglecting the competitive relationship. An improved model, namely the Logistic-Volterra model, was proposed based on the traditional Logistic model; it considers contribution coefficients and competition coefficients at the same time. Firstly, the Logistic-Volterra model was built and its stable solution solved, and the mathematical conditions for achieving stability, together with their real-world interpretation, were discussed. Secondly, the factors affecting the interactive development of manufacturing and logistics enterprises were explored through Matlab numerical simulation, and the differences between the improved model and the traditional model were also discussed. Finally, manufacturing enterprise A and logistics enterprise B were taken as an example to analyze competitive behavior in the course of cooperation, as well as the impact of coopetition behavior on both parties' interests. The theoretical analysis and simulation results show that the stability of the system is strongly affected by the contribution coefficients, competition coefficients and environmental capacities, and that the results are more reasonable when competition is included in the model, meaning that manufacturing and logistics enterprises should fully account for the effects of competition on both sides.
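The abstract does not spell out the equations; a plausible Logistic-Volterra form, assumed here for illustration, adds contribution and competition terms to two coupled Logistic equations (all coefficients below are made up):

```python
from scipy.integrate import solve_ivp

# Assumed coupled form; c = contribution (cooperation), b = competition.
r1, r2 = 0.5, 0.4        # intrinsic growth rates
K1, K2 = 100.0, 80.0     # environmental capacities
c12, c21 = 0.3, 0.2      # contribution coefficients
b12, b21 = 0.1, 0.15     # competition coefficients

def rhs(t, y):
    x1, x2 = y
    dx1 = r1 * x1 * (1 - x1 / K1 + (c12 - b12) * x2 / K2)
    dx2 = r2 * x2 * (1 - x2 / K2 + (c21 - b21) * x1 / K1)
    return [dx1, dx2]

sol = solve_ivp(rhs, (0, 100), [10.0, 8.0])
print(sol.y[:, -1])      # scales approached by the two enterprises
```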
Intelligent detection method of click farming on E-commerce platform for users
KANG Haiyan, YANG Yue, YU Aimin
2018, 38(2): 596-601. DOI: 10.11772/j.issn.1001-9081.2017082166
Although click farming on e-commerce platforms improves store profits to some extent, it raises the platforms' promotion costs, creating a serious reputation security problem, and it misleads consumers into property losses. To solve these problems, an intelligent method named SVM-NB was proposed to detect click farming on e-commerce platforms for users, together with a method of constructing click-farming features. Firstly, the relevant commodity data were collected to create an eigenvalue database. Then a classifier was trained with the supervised Support Vector Machine (SVM) algorithm to judge whether click farming occurred. Finally, the click-farming probability of goods was calculated using Naive Bayes (NB), providing users with a shopping reference. The reasonableness and accuracy of the proposed SVM-NB method were validated by K-fold cross-validation, reaching an accuracy of 95.0536%.
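A minimal sketch of the SVM-plus-NB pipeline with K-fold validation; the features and labels below are synthetic stand-ins for the paper's eigenvalue database:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Hypothetical click-farming features per item, e.g. review burst rate,
# duplicate-text ratio, buyer-account age (all synthetic here).
X = rng.random((400, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400) > 0.8).astype(int)

svm = SVC()                              # supervised classifier: farmed or not
print("K-fold accuracy:", cross_val_score(svm, X, y, cv=5).mean())

nb = GaussianNB().fit(X, y)              # NB supplies a probability for users
print("click-farming probability:", nb.predict_proba(X[:1])[0, 1])
```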
Diagnosis of fault circuit by modularized BP neural network based on fault propagation
HE Chun, LI Qi, WU Ranghao, LIU Bangxin
2018, 38(2): 602-609. DOI: 10.11772/j.issn.1001-9081.2017061516
Large-scale digital-analog hybrid circuits are hard to diagnose because they have numerous fault modes and their failure states are complex and propagate easily. To solve these problems, a new fault diagnosis method, namely the Modularized Back Propagation (BP) neural network based on Fault Propagation (MBPFP), was proposed. Firstly, fault propagation between subcircuits was analyzed on the basis of circuit module division, and failure sources and transmission sources were modularized. Secondly, in first-order positioning, the set of candidate fault causes was narrowed and the faulty module determined by the subcircuit anomaly detection model. Finally, in second-order positioning, the fault was located and the fault mode identified by the BP neural network of the target module. The experimental results show that, compared with the traditional BP neural network method, MBPFP achieves high fault coverage and improves accuracy by at least 8 percentage points, outperforming the traditional BP-based approach.
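A minimal sketch of the two-stage idea (an anomaly score picks the faulty module, then that module's own BP network names the fault mode); the module responses, thresholds and labels are synthetic, not the paper's circuit data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)

# Stage 1 (first-order): flag the faulty module via a relative deviation
# from each subcircuit's nominal response (synthetic nominal means here).
nominal = {"m1": 1.0, "m2": 2.0, "m3": 0.5}
measured = {"m1": 1.02, "m2": 2.9, "m3": 0.49}
faulty = max(nominal, key=lambda m: abs(measured[m] - nominal[m]) / nominal[m])

# Stage 2 (second-order): the faulty module's BP network classifies the mode.
X = rng.random((300, 4))                 # module test-point features
y = rng.integers(0, 3, size=300)         # synthetic fault-mode labels
bp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
print(faulty, "fault mode:", bp.predict(rng.random((1, 4)))[0])
```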
Design and realization of low frequency radio astronomical signal acquisition circuit based on modulated wideband converter
WU Hailong, BAI Zhengyao, ZHANG Yu, HE Qian
2018, 38(2): 610-614. DOI: 10.11772/j.issn.1001-9081.2017071844
Aiming at the problem that a traditional signal acquisition circuit needs a large storage space when observing radio astronomical signals, a hardware circuit design method for low-frequency radio astronomical signal acquisition based on the Modulated Wideband Converter (MWC) was proposed. Firstly, the observed signal was multiplied by four pseudo-random periodic signals and the results divided into four channels. Secondly, each of the four channel signals was filtered by a second-order Butterworth low-pass filter. Then, the four filtered signals were sampled and the data transferred to a Field-Programmable Gate Array (FPGA) for storage. Finally, the signal was reconstructed by the Orthogonal Matching Pursuit (OMP) algorithm. The theoretical analysis and experimental results show that the Mean Square Error (MSE) between the reconstructed signal and the observed signal is 1.27×10^-2, and the compression rate of the data storage space is 20%. The proposed hardware circuit design method reduces both circuit design cost and storage space.
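A minimal sketch of the OMP reconstruction step on a toy compressive measurement (the matrix sizes and sparsity are illustrative, not the circuit's actual dimensions):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(9)
A = rng.normal(size=(32, 128)) / np.sqrt(32)    # compressive measurement matrix
x_true = np.zeros(128)
x_true[[5, 40, 77]] = [1.0, -0.7, 0.4]
x_hat = omp(A, A @ x_true, k=3)
print("MSE:", np.mean((x_hat - x_true) ** 2))
```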