Table of Contents

    10 December 2018, Volume 38 Issue 12
    Scene sparse recognition method via intra-class dictionary for visible and near-infrared HSV image fusion
    LIU Jixin, WEI Man
    2018, 38(12):  3355-3359.  DOI: 10.11772/j.issn.1001-9081.2018040806
    Abstract | PDF (981KB)
    To meet the need for intelligent observation of typical natural scenes and to improve the recognition accuracy of sparse classifiers on small-sample databases, a scene sparse recognition method via an intra-class dictionary for visible and Near-InfraRed (NIR) HSV (Hue, Saturation, Value) image fusion was proposed. Firstly, the near-infrared image and the visible image were fused by HSV pseudo-color processing, a technique drawn from computer vision display. Then, global Generalized Search Tree (GiST) features and local Pyramid Histogram of Oriented Gradients (PHOG) features were extracted and fused. Finally, the scene classification results were obtained by the proposed intra-class-dictionary sparse recognition method. The recognition accuracy of the proposed method on the RGB-NIR database is 74.75%. The experimental results show that the recognition accuracy on scene images fused with near-infrared information is higher than that on non-fused images, and that the proposed method effectively improves the information representation quality of scene targets within a sparse recognition framework.
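The abstract does not spell out the channel mapping used in the HSV pseudo-color fusion. A common convention, assumed here for illustration, keeps hue and saturation from the visible image and replaces the value channel with the co-registered NIR intensity; the function names are illustrative, not from the paper.

```python
import colorsys

def fuse_hsv(rgb_pixel, nir_intensity):
    """Fuse one visible RGB pixel with a co-registered NIR intensity.

    Keeps hue and saturation from the visible image and replaces the
    value channel with the NIR intensity (an assumed pseudo-color
    convention; the paper's exact mapping is not given in the abstract).
    All values are floats in [0, 1].
    """
    r, g, b = rgb_pixel
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, s, nir_intensity)

def fuse_image(rgb_image, nir_image):
    """Apply the per-pixel fusion to whole images stored as nested lists."""
    return [[fuse_hsv(px, nir) for px, nir in zip(row_rgb, row_nir)]
            for row_rgb, row_nir in zip(rgb_image, nir_image)]
```

A pure-red pixel fused with NIR intensity 0.5 keeps its hue but takes its brightness from the NIR channel, i.e. it becomes half-bright red.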
    Pedestrian heading particle filter correction method with indoor environment constraints
    LIU Pan, ZHANG Bang, HUANG Chao, YANG Weijun, XU Zhengyi
    2018, 38(12):  3360-3366.  DOI: 10.11772/j.issn.1001-9081.2018040883
    Abstract | PDF (1179KB)
    Traditional indoor pedestrian positioning algorithms based on dead reckoning and Kalman filtering suffer from cumulative error in the heading angle, which causes the position error to keep accumulating. To solve this problem, a pedestrian heading particle filter algorithm constrained by the indoor environment was proposed to correct the direction error. Firstly, the indoor map information was abstracted into a structure represented by line segments, and the map data was dynamically integrated into the particle compensation and weight allocation mechanism. Then, a heading self-correction mechanism was constructed from the relevant map data and the sample to be calibrated. Finally, a distance weighting mechanism was constructed from the relevant map data and the particle placement. In addition, the particle filter model was simplified, with heading used as the only state variable to be optimized; while improving the positioning accuracy, this reduces the dimension of the state vector and thereby the complexity of data analysis and processing. By integrating indoor environmental information, the proposed algorithm can effectively suppress the continuous accumulation of directional errors. The experimental results show that, compared with the traditional Kalman filter algorithm, the proposed algorithm significantly improves pedestrian positioning accuracy and stability: in a two-dimensional walking experiment over a distance of 435 m, the heading angle error is reduced from 15.3° to 0.9°, and the absolute error at the end position is reduced from 5.50 m to 0.87 m.
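A minimal sketch of the core idea: heading as the only particle-filter state, with particles weighted by their agreement with corridor directions taken from a line-segment map. The Gaussian weighting function and all parameters below are illustrative assumptions, not the paper's exact compensation and weighting mechanism.

```python
import math
import random

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def correct_heading(measured, corridor_headings, n=500, sigma=0.2):
    """One particle-filter step with heading as the only state variable.

    Particles are spread around the sensor heading `measured`; each is
    weighted by its angular closeness to the nearest corridor direction
    extracted from the line-segment map (`corridor_headings`). A minimal
    sketch of the idea, not the paper's full weighting scheme.
    """
    particles = [measured + random.gauss(0.0, sigma) for _ in range(n)]
    weights = []
    for p in particles:
        err = min(abs(wrap(p - c)) for c in corridor_headings)
        weights.append(math.exp(-(err / sigma) ** 2))
    total = sum(weights)
    # Weighted circular mean of the particles gives the corrected heading.
    x = sum(w * math.cos(p) for p, w in zip(particles, weights)) / total
    y = sum(w * math.sin(p) for p, w in zip(particles, weights)) / total
    return math.atan2(y, x)
```

With a measured heading of 0.2 rad in a corridor aligned to 0 rad, the estimate is pulled back toward the corridor direction, which is how the map constraint suppresses heading drift.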
    Zenithal pedestrian detection algorithm based on improved aggregate channel features and gray-level co-occurrence matrix
    LI Lin, ZHANG Tao
    2018, 38(12):  3367-3371.  DOI: 10.11772/j.issn.1001-9081.2018051066
    Abstract | PDF (988KB)
    Traditional zenithal (overhead-view) pedestrian detection methods rely on the head feature alone and suffer a high detection error rate. To address this, a multi-feature fusion zenithal pedestrian detection algorithm based on improved Aggregate Channel Features (ACF) and the Gray-Level Co-occurrence Matrix (GLCM) was proposed. Firstly, the extracted Hue, Saturation, Value (HSV) color features, gradient magnitude and improved Histogram of Oriented Gradients (HOG) features were combined into the ACF descriptor. Then, the improved GLCM parameter descriptor was calculated with a window method to extract texture features, and the co-occurrence matrix feature descriptor was obtained by concatenating the feature vectors of each window. Finally, the aggregate channel and co-occurrence matrix features were fed into AdaBoost for training to obtain the classifier, and the final results were obtained by detection. The experimental results show that the proposed algorithm can effectively detect targets against interfering backgrounds and improves detection precision and recall.
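The GLCM step can be illustrated with a plain co-occurrence computation. This is the textbook GLCM for one pixel offset plus two classic Haralick descriptors (contrast and energy), not the paper's improved parameter descriptor.

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one pixel offset (dx, dy).

    `image` is a 2-D list of integer gray levels in [0, levels).
    Returns a levels x levels count matrix M where M[i][j] is the number
    of pixel pairs (p, p + offset) with gray levels i and j.
    """
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def glcm_features(m):
    """Contrast and energy, two of the classic Haralick texture features."""
    total = sum(sum(row) for row in m)
    contrast = sum(m[i][j] * (i - j) ** 2
                   for i in range(len(m)) for j in range(len(m))) / total
    energy = sum((m[i][j] / total) ** 2
                 for i in range(len(m)) for j in range(len(m)))
    return contrast, energy
```

In the full algorithm such features would be computed per window and the per-window vectors concatenated into the co-occurrence descriptor.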
    Robust video object tracking algorithm based on self-adaptive compound kernel
    LIU Peiqiang, ZHANG Jiahui, WU Dawei, AN Zhiyong
    2018, 38(12):  3372-3379.  DOI: 10.11772/j.issn.1001-9081.2018051139
    Abstract | PDF (1351KB)
    To address the poor robustness of the Kernelized Correlation Filter (KCF) in complex scenes, an object tracking algorithm based on a Self-Adaptive Compound Kernel (SACK) was proposed. The tracking task was decomposed into two independent subtasks: position tracking and scale tracking. Firstly, the risk objective function of the SACK weights was constructed, using a self-adaptive compound of a linear kernel and a Gaussian kernel as the kernel of the tracking filter. The weights of the two kernels were adjusted adaptively according to the kernel response values; the constructed function considers not only the minimum empirical risk of the different kernel response outputs but also the risk of the maximum response value, combining the advantages of local and global kernels. Then, the exact position of the object was obtained from the output response of the SACK filter, and an adaptive update rate based on the maximum response value of the object was designed to update the position tracking filter. Finally, a scale tracker was used to estimate the object scale. The experimental results show that the success rate and distance precision of the proposed algorithm are the best on the OTB-50 database, 6.8 and 4.1 percentage points higher respectively than those of the KCF algorithm, and 2 and 3.2 percentage points higher respectively than those of the Bidirectional Scale Estimation Tracker (BSET) algorithm. The proposed algorithm adapts well to complex scenes such as deformation and occlusion.
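One way to sketch the self-adaptive compounding of the two kernels: weight each kernel's response map by its peak response so that the currently more confident kernel dominates, then locate the target at the maximum of the combined map. The peak-based weighting rule below is a simplified stand-in for the paper's risk objective function.

```python
def compound_response(resp_linear, resp_gauss):
    """Combine the response maps of a linear and a Gaussian kernel filter.

    The mixing weight is adapted from the peak value of each response
    map, so the kernel that is currently more confident dominates. This
    sketches the self-adaptive weighting idea only; the paper derives
    the weights from a risk objective function instead.
    """
    peak_l = max(max(row) for row in resp_linear)
    peak_g = max(max(row) for row in resp_gauss)
    w = peak_l / (peak_l + peak_g)        # weight of the linear kernel
    return [[w * a + (1 - w) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(resp_linear, resp_gauss)]

def locate(resp):
    """Row/column of the maximum response, i.e. the tracked position."""
    best = max((v, r, c) for r, row in enumerate(resp)
               for c, v in enumerate(row))
    return best[1], best[2]
```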
    Obstacle avoidance method for multi-agent formation based on artificial potential field method
    ZHENG Yanbin, XI Pengxue, WANG Linlin, FAN Wenxin, HAN Mengyun
    2018, 38(12):  3380-3384.  DOI: 10.11772/j.issn.1001-9081.2018051119
    Abstract | PDF (916KB)
    Formation obstacle avoidance is one of the key issues in multi-agent formation research. Concerning the obstacle avoidance problem of multi-agent formations in dynamic environments, a formation obstacle avoidance method based on the Artificial Potential Field (APF) method and the Cuckoo Search (CS) algorithm was proposed. Firstly, under the heterogeneous mode of a dynamic formation transformation strategy, APF was used to plan obstacle avoidance for each agent in the formation. Then, to overcome the limitations of APF in setting the attraction and repulsion increment coefficients, the Lévy flight mechanism of CS was used to randomly search for increment coefficients suited to the environment. MATLAB simulation results show that the proposed method can effectively solve the obstacle avoidance problem of multi-agent formations in complex environments; evaluating the experimental data with an efficiency function verifies the rationality and effectiveness of the proposed method.
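The APF part can be sketched with the standard attraction/repulsion force model. Here `k_att` and `k_rep` play the role of the attraction/repulsion increment coefficients that the paper tunes by Lévy-flight search; the values and the influence radius `d0` are placeholders.

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
    """Resultant artificial-potential-field force on one agent.

    Standard APF: attraction grows linearly with the distance to the
    goal; repulsion acts only within the influence radius d0 of an
    obstacle and blows up as the obstacle is approached.
    """
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d <= d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

An agent between its goal and a nearby obstacle is pushed away from the obstacle even though the goal attracts it, which is the mechanism each agent in the formation uses to plan its avoidance step.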
    Lag consensus tracking control for heterogeneous multi-agent systems
    LI Geng, QIN Wen, WANG Ting, WANG Hui, SHEN Mouquan
    2018, 38(12):  3385-3390.  DOI: 10.11772/j.issn.1001-9081.2018051051
    Abstract | PDF (998KB)
    Aiming at the lag consensus problem of hybrid heterogeneous multi-agent systems composed of first-order and second-order agents, a distributed lag consensus control protocol based on pinning control was proposed. Firstly, the lag consensus analysis was transformed into a stability verification problem. Then, the stability of the closed-loop system was analyzed using graph theory and Lyapunov stability theory. Finally, sufficient conditions for the solvability of lag consensus, expressed as Linear Matrix Inequalities (LMI), were given under fixed and switching topologies respectively, so that leader-follower lag consensus of the heterogeneous multi-agent system was achieved. Numerical simulation results show that the proposed lag consensus control method enables heterogeneous multi-agent systems to achieve leader-follower lag consensus.
    Semi-supervised adaptive multi-view embedding method for feature dimension reduction
    SUN Shengzi, WAN Yuan, ZENG Cheng
    2018, 38(12):  3391-3398.  DOI: 10.11772/j.issn.1001-9081.2018051050
    Abstract | PDF (1212KB)
    Most semi-supervised multi-view feature reduction methods do not take into account the differences in feature projections among different views, and, lacking sparse constraints on the low-dimensional matrix after dimension reduction, cannot avoid the effects of noise and other unrelated features. To solve these two problems, a Semi-Supervised Adaptive Multi-View Embedding method for feature dimension reduction (SS-AMVE) was proposed. Firstly, the projection was extended from a single embedding matrix shared across views to a different matrix per view, and a global structure preservation term was introduced. Then, the unlabeled data was embedded and projected by an unsupervised method, while the labeled data was linearly projected in combination with the class discrimination information. Finally, the two types of projections were mapped into a unified low-dimensional space, and a combined weight matrix was used to preserve the global structure, largely eliminating the effects of noise and unrelated factors. The experimental results show that the clustering accuracy of the proposed method is improved by about 9% on average, and that the method better preserves the correlation of features between views and captures more features with discriminative information.
    Image deep convolution classification method based on complex network description
    HONG Rui, KANG Xiaodong, GUO Jun, LI Bo, WANG Yage, ZHANG Xiufang
    2018, 38(12):  3399-3402.  DOI: 10.11772/j.issn.1001-9081.2018051041
    Abstract | PDF (692KB)
    To improve the accuracy of image classification with a convolutional network model without adding much computation, an image deep convolution classification method based on a complex network description was proposed. Firstly, degree matrices of the complex network model under different thresholds were obtained from the complex network description of the image. Then, feature vectors were obtained by a deep convolutional neural network operating on the degree-matrix description of the image. Finally, the obtained feature vectors were used for K-Nearest Neighbors (KNN) image classification. Verification experiments were carried out on the ImageNet Large Scale Visual Recognition Challenge 2014 (ILSVRC2014) database. The experimental results show that the proposed model achieves higher accuracy with fewer iterations.
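The abstract does not give the exact image-to-network construction, so the sketch below assumes one common rule: pixels are nodes, and two pixels within a small spatial radius are connected when their gray-level difference is below the threshold. The degree of every pixel then forms one per-threshold degree matrix of the kind fed to the CNN.

```python
def degree_matrix(image, threshold, radius=1):
    """Complex-network description of an image at one threshold.

    Pixels are graph nodes; two pixels within `radius` (Chebyshev
    distance) are connected when their gray-level difference is below
    `threshold`. Returns the degree of each pixel as a 2-D matrix. This
    is a common image-to-network construction assumed for illustration;
    the paper's exact rule is not given in the abstract.
    """
    rows, cols = len(image), len(image[0])
    deg = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    if dr == 0 and dc == 0:
                        continue
                    r2, c2 = r + dr, c + dc
                    if (0 <= r2 < rows and 0 <= c2 < cols
                            and abs(image[r][c] - image[r2][c2]) < threshold):
                        deg[r][c] += 1
    return deg
```

Sweeping the threshold yields a stack of degree matrices, one per threshold, which together describe the image's structure at several similarity scales.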
    Multi-source digit recognition algorithm based on improved convolutional neural network
    BU Lingzheng, WANG Hongdong, ZHU Meiqiang, DAI Wei
    2018, 38(12):  3403-3408.  DOI: 10.11772/j.issn.1001-9081.2018050974
    Abstract | PDF (955KB)
    Most existing digit recognition algorithms recognize a single type of digits and cannot handle multi-source digits. Aiming at character recognition scenarios involving both handwritten digits and digital tube (seven-segment display) digits, a multi-source digit recognition algorithm based on an improved Convolutional Neural Network (CNN) was proposed. Firstly, a mixed data set consisting of handwritten and digital tube digits was built from samples collected in the field at a digital display instrument manufacturer and from the MNIST data set. Then, for better robustness, an improved CNN was proposed and trained on the mixed data set, yielding a single network that recognizes multiple types of digits. Finally, the trained neural network model was successfully applied to the multi-source digit recognition scene of the RoboMaster robotics competition. The test results show that the overall recognition accuracy of the proposed algorithm is stable and high, with good robustness and generalization ability.
    Learning method of indoor scene semantic annotation based on texture information
    ZHANG Yuanyuan, HUANG Yijun, WANG Yuefei
    2018, 38(12):  3409-3413.  DOI: 10.11772/j.issn.1001-9081.2018040892
    Abstract | PDF (880KB)
    The detection, tracking and information editing of key objects in indoor scene video is mainly done manually, which is inefficient and imprecise. To solve these problems, a learning method of indoor scene semantic annotation based on texture information was proposed. Firstly, the optical flow method was used to obtain the motion information between video frames, and the key-frame annotations together with the inter-frame motion information were used to initialize the annotations of non-key frames. Then, an energy equation was constructed from the image texture information constraint of the non-key frames and their initialized annotations. Finally, the graph-cuts method was used to optimize the energy equation, whose solution is the non-key-frame semantic annotation. Evaluations of annotation accuracy and visual effect show that the proposed method performs better than the motion estimation method and the model-based learning method, and can serve as a reference for low-latency decision-making systems such as service robots, smart homes and emergency response.
    X-ray security inspection method using active vision based on Q-learning algorithm
    DING Jingwen, CHEN Shuyue, LU Guirong
    2018, 38(12):  3414-3418.  DOI: 10.11772/j.issn.1001-9081.2018050989
    Abstract | PDF (840KB)
    To address the poor detection performance and slow speed of active-vision security inspection, a Heuristically Accelerated State Backtracking Q-Learning (HASB-QL) algorithm based on Q-Learning (QL) was proposed to estimate the next best view. A cost function and a heuristic function were introduced to improve learning efficiency and speed up the convergence of QL. Firstly, single-view detection was performed on the X-ray image obtained by the security scanner. Secondly, the pose was estimated and the best rotation angle was obtained by comparing selection strategies for repeated actions in the state backtracking process, and single-view detection was performed again until the threat object was detected. Moreover, a geometric constraint was established to eliminate false alarms when more than one view was used in the detection process. X-ray images of handguns and razor blades in the GDXray data set were used for the experiments. The experimental results show that, compared with the active vision algorithm based on QL, the improved algorithm increases the F1 value, which combines precision and recall, by 9.60% for detecting handguns with a 12.45% increase in detection speed, and increases the F1 value for razor blades by 2.51% with a 17.39% increase in detection speed. The proposed algorithm improves both the performance and the speed of threat object detection.
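The heuristic acceleration of Q-learning can be sketched generically: a heuristic bonus is added at action selection only, biasing exploration toward promising views, while the Q-update itself is unchanged. This is the standard HAQL scheme, sketched under assumed parameters; the paper's state-backtracking step and cost function are omitted.

```python
import random

def ha_q_step(Q, H, state, actions, reward_fn, next_state_fn,
              alpha=0.5, gamma=0.9, xi=1.0, epsilon=0.1):
    """One step of heuristically accelerated Q-learning.

    Greedy selection maximizes Q[(s, a)] + xi * H[(s, a)] (missing
    entries count as 0), so the heuristic H steers exploration without
    entering the temporal-difference update below.
    """
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions,
                     key=lambda a: Q.get((state, a), 0.0)
                     + xi * H.get((state, a), 0.0))
    reward = reward_fn(state, action)
    nxt = next_state_fn(state, action)
    best_next = max(Q.get((nxt, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return action, nxt
```

In the inspection setting, a state would encode the current view, actions the candidate rotations, and the reward the detection confidence of the resulting view.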
    Influence maximization algorithm based on structure hole and degree discount
    LI Minjia, XU Guoyan, ZHU Shuai, ZHANG Wangjuan
    2018, 38(12):  3419-3424.  DOI: 10.11772/j.issn.1001-9081.2018040920
    Abstract | PDF (894KB)
    Existing Influence Maximization (IM) algorithms for social networks achieve only a limited influence range because they select merely locally optimal nodes. To solve this problem, considering the propagation advantages of core nodes and structure hole nodes, an influence maximization algorithm based on Structure Hole and Degree Discount (SHDD) was proposed. Firstly, the ideas of structure holes and degree centrality were integrated and applied to the influence maximization problem, and the factor α combining structure hole nodes and core nodes was searched for to maximize propagation, making information spread more widely and increasing the influence over the whole network. Then, to strengthen the integration of the two ideas, the influence of second-degree neighbors was added to the evaluation criterion used to select structure hole nodes. The experimental results on data sets of different scales show that, compared with the DegreeDiscount algorithm, SHDD increases the influence range without consuming much more time, and that, compared with the Structure-based Greedy (SG) algorithm, SHDD expands the influence range while reducing the time cost in networks with a large clustering coefficient. SHDD best exploits the fusion of structure hole nodes and core nodes when the factor α is 0.6, and expands the influence range more steadily in social networks with a large clustering coefficient.
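The degree-discount half of SHDD builds on the classic DegreeDiscount heuristic of Chen et al., sketched below; the structure-hole score and the blending factor α that SHDD adds on top are omitted here.

```python
def degree_discount(adj, k, p=0.1):
    """DegreeDiscount seed selection, the baseline that SHDD extends.

    `adj` maps each node to the set of its neighbours; `p` is the
    propagation probability of the independent-cascade model. After a
    seed is picked, each neighbour's effective degree is discounted,
    which avoids piling seeds into one already-covered neighbourhood.
    Returns k seed nodes.
    """
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    dd = dict(degree)                  # discounted degree
    t = {v: 0 for v in adj}            # number of selected neighbours
    seeds = []
    for _ in range(k):
        u = max((v for v in adj if v not in seeds), key=lambda v: dd[v])
        seeds.append(u)
        for v in adj[u]:
            if v not in seeds:
                t[v] += 1
                dd[v] = (degree[v] - 2 * t[v]
                         - (degree[v] - t[v]) * t[v] * p)
    return seeds
```

SHDD would replace the plain `dd[v]` ranking with a score that mixes the discounted degree and a structure-hole measure weighted by α.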
    Parallel multi-layer graph partitioning method for solving maximum clique problems
    GU Junhua, HUO Shijie, WU Junyan, YIN Jun, ZHANG Suqi
    2018, 38(12):  3425-3432.  DOI: 10.11772/j.issn.1001-9081.2018040934
    Abstract | PDF (1254KB)
    In the big data environment, the huge number of graph nodes and the complexity of analysis place higher demands on the speed and accuracy of solving maximum clique problems. To meet them, a Parallel Multi-layer Graph Partitioning method for Solving Maximum Clique (PMGP_SMC) was proposed. Firstly, a new Multi-layer Graph Partitioning method (MGP) was proposed, which partitions a large-scale graph into subgraphs while maintaining, rather than destroying, the clique structure of the original graph; large subgraphs are partitioned further across multiple levels to reduce their size. MGP was implemented on the GraphX graph computing framework to form a Parallel Multi-layer Graph Partitioning (PMGP) method. Then, according to the size of the partitioned subgraphs, the iteration number of the Local Search algorithm Based on Penalty value (PBLS) was reduced, and PBLS Based on Speed optimization (SPBLS) was proposed to solve the maximum clique of each subgraph. Finally, PMGP and SPBLS were combined to form PMGP_SMC. Running tests on the Stanford large-scale dataset show that the proposed PMGP reduces the maximum subgraph size by more than 100 times and the average subgraph size by 2 times compared with the Parallel Single Graph Partitioning method (PSGP). Compared with PSGP for Solving Maximum Clique (PSGP_SMC), PMGP_SMC reduces the overall running time by about 100 times, and its accuracy matches that of Parallel Multi-layer Graph Partitioning based on Maximal Clique Enumeration (PMGP_MCE). PMGP_SMC can solve the maximum clique of large-scale graphs quickly and accurately.
    Clustering algorithm of Gaussian mixture model based on density peaks
    TAO Zhiyong, LIU Xiaofang, WANG Hezhang
    2018, 38(12):  3433-3437.  DOI: 10.11772/j.issn.1001-9081.2018040739
    Abstract | PDF (944KB)
    The clustering algorithm based on the Gaussian Mixture Model (GMM) is sensitive to the initial value and falls easily into local minima. To solve these problems, the strong global search ability of the Density Peaks (DP) algorithm was exploited to optimize the initial cluster centers of GMM, giving a GMM clustering algorithm based on DP (DP-GMMC). Firstly, the cluster centers were found by the DP algorithm to obtain the initial parameters of the mixture model. Then, the Expectation Maximization (EM) algorithm was used to estimate the parameters of the mixture model iteratively. Finally, the data points were clustered according to the Bayesian posterior probability criterion. On the Iris data set, the dependence on the initial cluster centers is removed, and the clustering accuracy of DP-GMMC reaches 96.67%, 33.6 percentage points higher than that of the traditional GMM algorithm. The experimental results show that DP-GMMC yields a better clustering effect on low-dimensional datasets.
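The density-peaks seeding step can be sketched as follows: compute each point's local density ρ and its distance δ to the nearest denser point, then take the points with the largest ρ·δ products as initial centers. A Gaussian-kernel density is assumed here; the original DP paper also allows a simple cutoff count.

```python
import math

def density_peak_centers(points, dc, k):
    """Pick k initial cluster centers with the density-peaks criterion.

    rho[i] is a Gaussian-kernel local density with cutoff distance dc;
    delta[i] is the distance to the nearest point of higher density
    (or the largest pairwise distance for the global density peak).
    The k points with the largest rho * delta products are returned and
    can seed the means of the Gaussian mixture before EM refinement.
    """
    n = len(points)
    d = [[math.dist(points[i], points[j]) for j in range(n)]
         for i in range(n)]
    rho = [sum(math.exp(-(d[i][j] / dc) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [d[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(d[i]))
    order = sorted(range(n), key=lambda i: rho[i] * delta[i], reverse=True)
    return [points[i] for i in order[:k]]
```

On two well-separated blobs, the two returned points are the dense cores of the blobs, exactly the initialization that keeps EM away from poor local minima.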
    Maximal frequent itemset mining algorithm based on DiffNodeset structure
    YIN Yuan, ZHANG Chang, WEN Kai, ZHENG Yunjun
    2018, 38(12):  3438-3443.  DOI: 10.11772/j.issn.1001-9081.2018040913
    Abstract | PDF (916KB)
    In data mining, mining maximal frequent itemsets instead of all frequent itemsets can greatly improve the operating efficiency of a system, yet the running time of existing maximal frequent itemset mining algorithms remains large. To solve this problem, a DiffNodeset Maximal Frequent Itemset Mining (DNMFIM) algorithm was proposed. Firstly, a new data structure, the DiffNodeset, was adopted for fast computation of intersections and supports. Secondly, a connection method with linear time complexity was adopted to reduce the complexity of joining two DiffNodesets and to avoid repeated invalid computations. Then, the set-enumeration tree was adopted as the search space, together with a variety of pruning strategies to reduce it. Finally, the superset detection technique used in the MAximal Frequent Itemset Algorithm (MAFIA) was adopted to improve the accuracy of the algorithm effectively. The experimental results show that DNMFIM outperforms MAFIA and N-list based MAFIA (NB-MAFIA) in time efficiency, and performs well when mining maximal frequent itemsets from different types of datasets.
    Low rank non-linear feature selection algorithm
    ZHANG Leyuan, LI Jiaye, LI Pengqing
    2018, 38(12):  3444-3449.  DOI: 10.11772/j.issn.1001-9081.2018050954
    Abstract | PDF (836KB)
    Concerning the non-linearity, low-rank structure, and feature redundancy of high-dimensional data, an unsupervised kernel-based feature selection algorithm named Low Rank Non-linear Feature Selection (LRNFS) was proposed. Firstly, the features of each dimension were mapped into a high-dimensional kernel space, so that non-linear feature selection in the low-dimensional space was achieved through linear feature selection in the kernel space. Then, bias terms were introduced into the self-expression form, and low-rank and sparse processing of the coefficient matrix was achieved. Finally, a sparse regularization factor on the kernel matrix coefficient vector was introduced to perform feature selection. In the proposed algorithm, the kernel matrix represents the non-linear relationships, the low-rank constraint takes the global information of the data into account for subspace learning, and the self-expression form determines the importance of each feature. The experimental results show that, compared with the semi-supervised feature selection algorithm via Rescaled Linear Square Regression (RLSR), the proposed algorithm increases the classification accuracy after feature selection by 2.34%. The proposed algorithm can handle data that is linearly inseparable in the low-dimensional feature space and improves the accuracy of feature selection.
    Design and implementation of middleware system for ciphertext database
    SONG Tianyu, YANG Geng
    2018, 38(12):  3450-3454.  DOI: 10.11772/j.issn.1001-9081.2018051152
    Abstract | PDF (997KB)
    In traditional ciphertext databases, encryption and decryption are not transparent to the upper application, an independent key management mechanism is lacking, and multi-user security cannot be managed. To solve these problems, a middleware system for ciphertext databases was designed and implemented. Firstly, encryption and decryption of sensitive data were realized by parsing and rewriting the datagrams sent by the database client or server. Then, key management was realized by a dedicated key management module using two-level key management. Finally, an independent user management module managed the users of the ciphertext database through authority checks, dynamic identity authentication, and user identity revocation and update. The experimental results show that, compared with a traditional ciphertext database, the proposed middleware system offers better security, and its transmission efficiency improves as the data volume increases. The middleware system can effectively guarantee the security of a ciphertext database while maintaining high data transmission efficiency.
    Efficient and provably secure short proxy signature scheme
    ZUO Liming, CHEN Zuosong, XIA Pingping, YI Chuanjia
    2018, 38(12):  3455-3461.  DOI: 10.11772/j.issn.1001-9081.2018051159
    Abstract | PDF (1106KB)
    Proxy signatures are widely used in the large-scale wireless industrial-control Internet of Things, where they can greatly improve the efficiency of the master signature server. A short proxy signature scheme based on bilinear mapping was proposed for application environments with limited bandwidth and weak computing power. Firstly, the security of the proposed scheme was proved in the random oracle model under the Computational Diffie-Hellman (CDH) problem and the Collusion Attack Algorithm with k traitors (k-CAA) problem. Then, the performance advantages of the proposed scheme were analyzed against existing proxy signature and short proxy signature schemes, and the key code of the proposed scheme was given. The experimental results show that the proposed scheme performs one scalar multiplication and one hash operation in proxy signature generation, and two bilinear pairing operations, one scalar multiplication and two hash operations in signature verification. Compared with similar proxy signature schemes, the proposed scheme has advantages in computational performance and suits application scenarios with weak computing power and limited transmission capacity.
    Research summary of secure routing protocol for low-power and lossy networks
    LUO Yujie, ZHANG Jian, TANG Zhangguo, LI Huanzhou
    2018, 38(12):  3462-3470.  DOI: 10.11772/j.issn.1001-9081.2018051067
    Abstract | PDF (1423KB)
    With the rapid development of the Internet of Things (IoT), research on and application of Low-power and Lossy Networks (LLN) has become a trend. Firstly, the basic principles and structure of IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN) and the Routing Protocol for Low-power and lossy networks (RPL) were introduced. Secondly, the main security threats to RPL routing in LLN and the corresponding solutions were summarized, classified and compared according to the strategies the protocol adopts. Then, the research status of existing secure RPL work at home and abroad was introduced and analyzed, and the existing security threats and solutions were summarized. Finally, the security issues and development trends that require further study in large-scale, mobile, self-organizing, low-power RPL were discussed.
    Attribute-based access control scheme in smart health
    LI Qi, XIONG Jinbo, HUANG Lizhi, WANG Xuan, MAO Qiming, YAO Lanwu
    2018, 38(12):  3471-3475.  DOI: 10.11772/j.issn.1001-9081.2018071528
    Abstract | PDF (764KB)
    To preserve the privacy of Personal Health Records (PHR) in Smart health (S-health), an attribute-based access control scheme with verifiable outsourced decryption and delegation was proposed. Firstly, Ciphertext-Policy Attribute-Based Encryption (CP-ABE) was used to realize fine-grained access control of PHR. Secondly, the most expensive decryption operations were outsourced to the cloud server, and an authorized agency was used to verify the correctness of the Partial Decryption Ciphertext (PDC) returned by the cloud server. Then, based on the delegation method, restricted users could delegate outsourced decryption and verification to third-party users without revealing privacy. Finally, the adaptive security of the proposed scheme was proved in the standard model. The theoretical analysis shows that user-side decryption requires only one exponentiation, so the proposed scheme combines strong security with practicability.
    Security encryption of radio block center based on colored Petri net
    XIA Haonan, DAI Shenghua
    2018, 38(12):  3476-3480.  DOI: 10.11772/j.issn.1001-9081.2018050993
    Abstract | PDF (735KB)
    Concerning the problem of train-ground safety communication in the Chinese Train Control System (CTCS)-3 train control system, a model of the information interaction between the Radio Block Center (RBC) and the train was designed based on Petri net theory and a hierarchical modelling approach, and the Colored Petri Net (CPN) simulation tool CPN Tools was used to dynamically simulate the whole process of generating, encrypting and transmitting the information between train and RBC. The model has three parts: firstly, the Movement Authority (MA) was requested by the train; then, the MA under full supervision mode was generated by the RBC; finally, the MA was received by the train over the wireless network and the safety control of the train was performed according to the MA. Dynamic simulation and state-space analysis tools were used to simulate and analyze the proposed model. The simulation results show that the designed model meets the design requirements of train-ground information transmission, with boundedness, liveness, regression and fairness. The model can be used for the safe transmission of train-ground information, reducing software design flaws.
    Task scheduling strategy based on topology structure in Storm
    LIU Su, YU Jiong, LU Liang, LI Ziyang
    2018, 38(12):  3481-3489.  DOI: 10.11772/j.issn.1001-9081.2018040741
    Abstract ( )   PDF (1471KB) ( )  
    References | Related Articles | Metrics
    In order to solve the problems of high communication cost and unbalanced load in the default round-robin scheduling strategy of the Storm stream computing platform, a Task Scheduling Strategy based on Topology Structure (TS2) in Storm was proposed. Firstly, the work nodes with sufficient available Central Processing Unit (CPU) resources were selected, and only one process was allocated to each work node to eliminate the inter-process communication cost within nodes and optimize process deployment. Then, the topology structure was analyzed, the component with the largest degree in the topology was found, and the threads of this component were assigned the highest priority. Finally, under the constraint of the maximum number of threads a node can carry, associated tasks were deployed to the same node as far as possible to reduce the communication cost between nodes, improve the load balance of the cluster and optimize thread deployment. The experimental results show that, in terms of system latency, the average optimization rate of TS2 is 16.91% and 5.69% compared with the Storm default scheduling strategy and the offline scheduling strategy respectively, which effectively improves the real-time performance of the system. Additionally, compared with the Storm default scheduling strategy, TS2 reduces the inter-node communication cost by 15.75% and improves the average throughput by 14.21%.
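    The degree-priority and co-location idea described in this abstract can be sketched in a few lines of Python (a simplified illustration only; the component list, edge set and per-node thread capacity are hypothetical, not the paper's implementation):

```python
def schedule(components, edges, capacity):
    """Greedy sketch of TS2: schedule the highest-degree components
    first, and co-locate communicating tasks on the same node."""
    degree = {c: 0 for c in components}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    # highest-degree components get the highest scheduling priority
    order = sorted(components, key=lambda c: -degree[c])
    placement, nodes = {}, [[]]
    for comp in order:
        # prefer a node that already hosts a neighbour of this component
        target = None
        for node in nodes:
            if len(node) < capacity and any(
                    (comp, c) in edges or (c, comp) in edges for c in node):
                target = node
                break
        if target is None:  # otherwise the first node with spare capacity
            target = next((n for n in nodes if len(n) < capacity), None)
            if target is None:
                target = []
                nodes.append(target)
        target.append(comp)
        placement[comp] = nodes.index(target)
    return placement
```

    Communicating components end up on the same node while capacity allows, which is the source of the reduced inter-node communication cost reported above.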
    Dynamic random distribution particle swarm optimization strategy for cloud computing resources
    YU Dekuang, YANG Yi, QIAN Jun
    2018, 38(12):  3490-3495.  DOI: 10.11772/j.issn.1001-9081.2018040898
    Abstract ( )   PDF (1078KB) ( )  
    References | Related Articles | Metrics
    Resources in cloud computing environments are dynamic and heterogeneous. Resource allocation for large-scale tasks aims to minimize completion time and resource occupation while achieving the best load balancing, which is a Non-deterministic Polynomial (NP) problem. Drawing on the advantages of swarm intelligence optimization, a hybrid scheduling strategy named Dynamic Random Distribution Particle Swarm Optimization (DRDPSO) was proposed based on an improved PSO algorithm. Firstly, the inertia weight constant of PSO was changed to a variable to reasonably control the convergence speed of the solution process. Secondly, the search scope of each iteration was shrunk to reduce invalid search while retaining the candidate optimal set. Then, a selection operation was introduced to pass high-quality individuals on to the next generation. Finally, a random disturbance was designed to improve the diversity of candidate solutions and avoid local optima to some extent. Two kinds of simulation tests were carried out on the CloudSim platform. The experimental results show that the proposed DRDPSO outperforms Simulated Annealing Genetic Algorithm (SAGA) and Genetic Algorithm (GA)+PSO in most cases when dealing with isomorphic tasks: the total execution time is 13.7%-37.0% less than SAGA and 13.6%-31.6% less than GA+PSO, the resource consumption is 9.8%-17.1% less than SAGA and 0.6%-31.1% less than GA+PSO, the number of iterations is 15.7%-60.2% less than SAGA and 1.4%-54.7% less than GA+PSO, and the load balance degree is 8.1%-18.5% lower than SAGA and 2.7%-15.3% lower than GA+PSO with the smallest fluctuation amplitude. When dealing with heterogeneous tasks, the three algorithms behave similarly: in terms of total execution time, CPU-intensive tasks consume the most, mixed tasks the second most, and IO tasks the least. The comprehensive performance of DRDPSO is the best, making it the most suitable for multiple types of heterogeneous tasks; GA+PSO is suitable for hybrid tasks, and SAGA is suitable for solving IO tasks quickly. When dealing with large-scale isomorphic and heterogeneous tasks, the proposed DRDPSO can significantly shorten the total task execution time and improve resource utilization in varying degrees while keeping the computing nodes properly load-balanced.
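    The four modifications described above (variable inertia weight, shrinking search range, selection, random disturbance) can be illustrated on a toy one-dimensional objective; every constant below is an illustrative assumption, not the parameterization used in the paper:

```python
import random

def drdpso_sketch(f, lo, hi, n=20, iters=60, seed=1):
    """Illustrative PSO with a decaying inertia weight, a shrinking
    search range, survivor selection and a random disturbance."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                      # personal bests
    gbest = min(xs, key=f)             # global best
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters      # inertia weight decays over time
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = w * vs[i] + 2*r1*(pbest[i]-xs[i]) + 2*r2*(gbest-xs[i])
            xs[i] += vs[i]
            # shrink the feasible range toward the current best
            span = (hi - lo) * (1 - t / iters)
            xs[i] = max(gbest - span, min(gbest + span, xs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        # selection: replace the worst particle with the best one
        worst = max(range(n), key=lambda i: f(xs[i]))
        xs[worst] = gbest
        # random disturbance keeps diversity in the swarm
        xs[rng.randrange(n)] += rng.uniform(-0.1, 0.1)
        gbest = min(pbest + [gbest], key=f)
    return gbest
```

    With a fixed seed the sketch reliably settles near the minimizer of the toy objective, showing how the shrinking range and selection accelerate convergence.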
    MPI big data processing for high performance applications
    WANG Peng, ZHOU Yan
    2018, 38(12):  3496-3499.  DOI: 10.11772/j.issn.1001-9081.2018040771
    Abstract ( )   PDF (809KB) ( )  
    References | Related Articles | Metrics
    In view of the application scenarios of Message Passing Interface (MPI) in the field of high performance computing, in order to optimize the existing centralized data management model of MPI and enhance its big data processing capability, an MPI Data Storage Plug-in (MPI-DSP) for big data processing was designed and developed using ideas from parallel and distributed systems. Firstly, an interface function was created to achieve the design goal of "migrating computation to storage" while minimizing the impact on the MPI system. File allocation and computation were separated so that MPI could break through the network transmission bottleneck when reading large data files. Then, the design goal, operation mechanism and implementation strategy were analyzed and elaborated, and the design concept was verified by describing the application of the interface function MPI_Open in an MPI environment. By comparing the time performance of MPI with and without the MPI-DSP component in a Wordcount experiment on data file processing, the feasibility of the "migrating computation to storage" mode was preliminarily validated, giving MPI big data processing capability in high performance application scenarios. The applicable environment and limitations of MPI-DSP were also analyzed, and its application scope was defined.
    Fast video transcoding method based on Spark Streaming
    FU Mou, YANG Hekun, WU Tangmei, HE Run, FENG Chaosheng, KANG Sheng
    2018, 38(12):  3500-3508.  DOI: 10.11772/j.issn.1001-9081.2018040942
    Abstract ( )   PDF (1358KB) ( )  
    References | Related Articles | Metrics
    Aiming at the slow speed of single-machine video transcoding and the limited efficiency improvement of batch-oriented parallel transcoding, a fast stream-processing video transcoding method based on the Spark Streaming distributed stream processing framework was proposed. Firstly, an automated video slicing model was built using the open-source multimedia processing tool FFmpeg, and a programming algorithm was proposed. Then, in view of the characteristics of parallel video transcoding, a stream processing model of video transcoding was constructed by studying Resilient Distributed Datasets (RDD). Finally, a video merging scheme was designed to store the merged video files effectively. Based on the proposed method, a fast video transcoding system based on Spark Streaming was designed and implemented. The experimental results show that, compared with a batch-oriented Hadoop video transcoding method, the proposed method improves transcoding efficiency by 26.7%, and compared with parallel video transcoding on the Hadoop platform, it improves transcoding efficiency by 20.1%.
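    The slicing step can be illustrated with the ffmpeg segment muxer (the file names, segment length and codec settings below are hypothetical; the commands are only built, not executed):

```python
def build_slice_cmd(src, seg_seconds, pattern="seg_%03d.mp4"):
    """Build an ffmpeg command that cuts `src` into fixed-length
    segments without re-encoding (stream copy), so slicing is fast."""
    return ["ffmpeg", "-i", src, "-c", "copy", "-map", "0",
            "-f", "segment", "-segment_time", str(seg_seconds),
            "-reset_timestamps", "1", pattern]

def build_transcode_cmd(seg, out, vcodec="libx264", crf=23):
    """Per-segment transcode command; segments can be processed by
    parallel workers and concatenated afterwards."""
    return ["ffmpeg", "-i", seg, "-c:v", vcodec, "-crf", str(crf), out]
```

    In a stream-processing setting, each segment command would be dispatched to a worker (e.g. a Spark Streaming task) as soon as the segment is produced, rather than waiting for the whole file as in batch processing.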
    Cooperative caching strategy based on user preference for content-centric network
    XIONG Lian, LI Pengming, CHEN Xiang, ZHU Hongmei
    2018, 38(12):  3509-3513.  DOI: 10.11772/j.issn.1001-9081.2018051057
    Abstract ( )   PDF (815KB) ( )  
    References | Related Articles | Metrics
    Nodes in a Content-Centric Network (CCN) cache all passing content by default, without selective caching or optimal placement of the content. To solve these problems, a Cooperative Caching strategy based on User Preference (CCUP) was proposed. Firstly, the user's preference for content type and the content popularity were considered as local preference indexes to select the content to cache. Then, a differentiated caching strategy was executed on the content to be cached: globally active content was cached at the important central node, while inactive content was cached according to the match between local preference and the distance level between node and user. In this way, both nearby access to locally preferred content and quick distribution of globally active content were achieved. The simulation results show that, compared with typical caching strategies such as LCE (Leave Copy Everywhere), Prob(0.6) (Probabilistic caching with probability 0.6) and Betw (cache "less for more"), the proposed CCUP has obvious advantages in average cache hit rate and average request delay.
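    A minimal sketch of the caching decision, assuming an illustrative weighted preference score and placement thresholds (none of these constants come from the paper):

```python
def local_preference(type_pref, popularity, alpha=0.5):
    """Weighted combination of the user's preference for the content
    type and the content popularity (weights are illustrative)."""
    return alpha * type_pref + (1 - alpha) * popularity

def place(content, is_globally_active, pref_score, hops_to_user,
          pref_threshold=0.6):
    """Globally active content goes to the central node; other content
    is cached near the user only when local preference is high and the
    node is close enough."""
    if is_globally_active:
        return "central-node"
    if pref_score >= pref_threshold and hops_to_user <= 2:
        return "edge-node"
    return "no-cache"
```

    This is the selective-caching contrast with LCE: instead of every node caching everything, each node caches only what its local users are likely to request.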
    Energy efficiency optimization of heterogeneous cellular networks based on micro base station power allocation
    YANG Jie, GUO Lihong, CHEN Rui
    2018, 38(12):  3514-3517.  DOI: 10.11772/j.issn.1001-9081.2018051032
    Abstract ( )   PDF (724KB) ( )  
    References | Related Articles | Metrics
    Aiming at the tremendous escalation of energy consumption caused by the dense deployment of micro base stations in heterogeneous cellular networks, the energy efficiency of two-tier heterogeneous cellular networks was analyzed and a method for maximizing network energy efficiency by adjusting the transmit power of micro base stations was proposed. Firstly, the heterogeneous cellular network was modeled by a homogeneous Poisson point process, and the coverage probability of base stations at each tier was derived. Secondly, according to the definition of energy efficiency, the total power consumption and total throughput of the network were derived, and a closed-form expression of energy efficiency was given. Finally, the impact of the micro base station transmit power on network energy efficiency was analyzed, and a power optimization algorithm for micro base stations was proposed to maximize energy efficiency. The simulation results show that the transmit power of micro base stations has a significant impact on the energy efficiency of heterogeneous cellular networks, and that the energy efficiency can be effectively improved by optimizing this power.
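    The trade-off behind the optimization can be illustrated numerically with a simplified model, EE = total throughput / total power, using a Shannon-style rate and a linear power-consumption model (all constants are illustrative assumptions; this is not the paper's closed-form expression):

```python
import math

def energy_efficiency(p_micro, n_micro=10, p_macro=20.0,
                      p_static=5.0, noise=1e-3, gain=0.05):
    """Toy model: throughput grows only logarithmically with transmit
    power while consumed power grows linearly, so EE has an interior
    maximum in the micro base station power."""
    rate = n_micro * math.log2(1 + gain * p_micro / noise)
    power = p_macro + n_micro * (p_static + p_micro)
    return rate / power

def best_power(grid):
    """One-dimensional search over candidate transmit powers."""
    return max(grid, key=energy_efficiency)
```

    Because the rate saturates while power cost does not, blindly raising transmit power eventually lowers EE, which is why an optimization step is needed.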
    Network availability based on Markov chain and quality of service
    TANG Junyong, TIAN Penghui, WANG Hui
    2018, 38(12):  3518-3523.  DOI: 10.11772/j.issn.1001-9081.2018051165
    Abstract ( )   PDF (1144KB) ( )  
    References | Related Articles | Metrics
    Network availability differs with the Quality of Service (QoS) of different network services and trades off against performance expense. To address these problems, Markov chain theory was introduced, and a Markov Chain and QoS based Network Availability (MCQNA) evaluation model was constructed on the basis of defining the service capability matching degree with the minimum service expense. Firstly, starting from the QoS indicators that best reflect the characteristics of network availability and considering the performance overhead, the cost function was defined and the state transition matrix was given. Then, through analysis of the relationship between the stationary state and network availability, the stationary distribution was solved and used as the dynamic weight of the QoS operation cost, realizing a network availability evaluation characterized by the minimum service operation cost. The simulation results show that the ergodic transition matrix constructed by the proposed model has a stationary distribution and that evaluating network availability with it is feasible. According to the QoS standards of different services, the proposed model can effectively measure network availability for specific services.
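    The stationary distribution at the core of the model can be computed by power iteration; a minimal sketch (the transition matrix in the example is a toy, not one derived from real QoS data):

```python
def stationary(P, iters=200):
    """Power iteration: pi <- pi P converges to the stationary
    distribution for an ergodic (row-stochastic) transition matrix."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

    The resulting probabilities can then serve as the dynamic weights of the per-state QoS operation costs, as described above.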
    Distributed load balancing algorithm based on hop-by-hop computing
    GENG Haijun, LIU Jieqi
    2018, 38(12):  3524-3528.  DOI: 10.11772/j.issn.1001-9081.2018050962
    Abstract ( )   PDF (754KB) ( )  
    References | Related Articles | Metrics
    The continuous increase of network traffic can easily lead to unbalanced traffic and network congestion, degrading user experience. The Optimizing Open Shortest Path First (OSPF) Weights (OPW) algorithm is generally employed by Internet Service Providers (ISP) to deal with network congestion. However, this algorithm has three problems: 1) the real traffic matrix is needed; 2) network oscillation is easily caused; 3) OPW has been proven to be a Non-deterministic Polynomial (NP) problem and requires a centralized solution. Aiming at these problems, a Distributed Load Balancing algorithm based on Hop-by-hop computing (DLBH) was proposed. Firstly, virtual traffic was set for all nodes. Then, the cost of all links was calculated based on the virtual traffic. Finally, the optimal routing was computed by a distributed algorithm. DLBH solves the network congestion problem in a distributed way, while OPW can only use a centralized approach, so the scalability of DLBH is superior to that of OPW. Theoretical analysis shows that the time complexity of DLBH is much lower than that of OPW. The experimental results show that the maximum link utilization of DLBH is significantly lower than that of OPW, which greatly reduces network congestion.
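    The hop-by-hop idea can be sketched as follows: every node derives link costs from the virtual traffic and then runs a plain shortest-path computation locally, so no centralized solver is needed (the convex cost function and example graph are illustrative assumptions):

```python
import heapq

def link_cost(virtual_traffic, capacity):
    """Illustrative convex cost: links with high virtual traffic
    relative to capacity become expensive, steering flows away."""
    u = virtual_traffic / capacity
    return 1.0 + u * u

def shortest_paths(graph, src):
    """Plain Dijkstra over the virtual-traffic link costs; each node
    can run this independently to find its own next hops."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

    Since each node only needs the (virtual-traffic-derived) link costs, no real traffic matrix has to be measured, which addresses the first problem of OPW noted above.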
    Robust physical layer secure transmission scheme in two-way multi-relay system
    HUANG Rui, CHEN Jie
    2018, 38(12):  3529-3534.  DOI: 10.11772/j.issn.1001-9081.2018051070
    Abstract ( )   PDF (1024KB) ( )  
    References | Related Articles | Metrics
    Physical layer secure transmission in a two-way multi-relay system cannot obtain the accurate Channel State Information (CSI) of eavesdroppers. To solve this problem, a robust joint physical layer secure transmission scheme combining multi-relay cooperative beamforming and artificial noise was proposed to maximize the secrecy sum rate in the worst-case channel state under the total power constraint of the system. The problem to be solved is a complex non-convex optimization problem. Alternating iteration and Successive Convex Approximation (SCA) methods were used to alternately optimize the beamforming vector, the artificial noise covariance matrix and the source node transmit power, and the optimal solution of the problem was obtained. The simulation results verify the effectiveness of the proposed scheme and show that it achieves better security performance.
    Computation offloading scheme based on time switch policy for energy harvesting in device-to-device communication
    DONG Xinsong, ZHENG Jianchao, CAI Yueming, YIN Tinghui, ZHANG Xiaoyi
    2018, 38(12):  3535-3540.  DOI: 10.11772/j.issn.1001-9081.2018051171
    Abstract ( )   PDF (943KB) ( )  
    References | Related Articles | Metrics
    In order to improve the effectiveness of mobile cloud computing in a Device-to-Device (D2D) communication network, a computation offloading scheme based on a time switch policy for energy harvesting was proposed. Firstly, the computational tasks to be offloaded by a traffic-limited smart mobile terminal were sent to an energy-limited smart mobile terminal as Radio-Frequency (RF) signals through D2D communication, and the energy-limited terminal used a time switch policy to harvest energy from the received signals. Then, the energy-limited terminal paid extra traffic consumption to relay the traffic-limited terminal's tasks to the cloud server. Finally, the scheme was modeled as a non-convex optimization problem minimizing terminal energy and traffic consumption, and the optimal scheme was obtained by optimizing the time switch factor and the harvested energy allocation factor of the energy-limited terminal, as well as the transmission power of the traffic-limited terminal. The simulation results show that, compared with a non-cooperative scheme, the proposed scheme can effectively reduce the terminals' limited resource overhead through computation offloading with reciprocal cooperation.
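    The time switch policy splits each slot between energy harvesting and information decoding; a minimal numeric sketch, with an assumed RF-to-DC conversion efficiency (all values are illustrative):

```python
def harvested_energy(p_tx, channel_gain, rho, slot, eta=0.6):
    """Energy harvested during the switch fraction rho of a slot;
    eta is an assumed RF-to-DC conversion efficiency."""
    return eta * rho * p_tx * channel_gain * slot

def split_energy(e, beta):
    """Harvested-energy allocation: fraction beta for relaying the
    partner's task to the cloud, the rest for local processing."""
    return beta * e, (1 - beta) * e
```

    The optimization described above then searches over rho (the time switch factor) and beta (the allocation factor) jointly with the transmit power.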
    Ripple matrix permutation-based sparsity balanced block compressed sensing algorithm
    DU Xiuli, ZHANG Wei, CHEN Bo
    2018, 38(12):  3541-3546.  DOI: 10.11772/j.issn.1001-9081.2018051039
    Abstract ( )   PDF (1008KB) ( )  
    References | Related Articles | Metrics
    In matrix permutation-based Block Compressed Sensing (BCS), a matrix permutation strategy is introduced to bring complex sub-blocks and sparse sub-blocks to a middle level of sparsity and to reduce blocking artifacts when sampling at a single sampling rate; however, the sparsity balance among blocks remains poor. To obtain a better reconstruction effect, a Ripple Matrix Permutation-based sparsity balanced BCS (BCS-RMP) algorithm was proposed. Firstly, the image was pre-processed by matrix permutation before sampling, and the sparsity of its sub-blocks was equalized by a ripple permutation matrix. Then, the same measurement matrix was used to sample the sub-blocks, which were reconstructed on the decoding side. Finally, the final reconstructed image was obtained by applying the inverse ripple permutation matrix to the reconstruction results. The simulation results show that, compared with existing matrix permutation algorithms, the proposed ripple matrix permutation algorithm can effectively improve the quality of image reconstruction, and it reflects details more accurately when an appropriate sub-block size and sampling rate are chosen.
    Saliency detection method based on graph node centrality and spatial autocorrelation
    WANG Shasha, FENG Ziliang, FU Keren
    2018, 38(12):  3547-3556.  DOI: 10.11772/j.issn.1001-9081.2018050983
    Abstract ( )   PDF (1641KB) ( )  
    References | Related Articles | Metrics
    The salient regions detected by existing saliency detection methods suffer from inhomogeneous interiors and unclear, inaccurate boundaries. To solve these problems, a saliency detection method based on spatial autocorrelation and the node importance evaluation strategy of complex networks was proposed. Firstly, combining color information and spatial information, initial saliency maps under multiple criteria were generated by using the centrality rules of complex network nodes and the spatial autocorrelation indicator coefficient. Then, Dempster-Shafer (D-S) evidence theory was used to fuse the multiple initial maps, and the final salient region results were obtained by adding boundary strength information to a progressively optimized two-stage cellular automaton. The validity of each module in the main process was verified step by step on two public image datasets, and the proposed method was compared with other existing saliency detection methods in qualitative visual results, objective quantitative indexes and algorithmic efficiency. The experimental results show that the proposed method is effective at the single-step module level and is superior to other algorithms in the comprehensive results of visual effect, Precision-Recall (P-R) curve, F-measure, Mean Absolute Error (MAE) and running time, especially compared with the closely related BSCA (Background-based maps optimized by Single-layer Cellular Automata) algorithm. The visual contrast experiments also verify that the proposed method can effectively alleviate the inhomogeneous interiors and unclear boundaries caused by small differences between salient objects and the image background and by large color differences inside the salient objects.
    Image completion method of generative adversarial networks based on two discrimination networks
    LIU Boning, ZHAI Donghai
    2018, 38(12):  3557-3562.  DOI: 10.11772/j.issn.1001-9081.2018051097
    Abstract ( )   PDF (1246KB) ( )  
    References | Related Articles | Metrics
    Existing image completion methods suffer from structural distortions that break visual coherence and are prone to overfitting during training. To solve these problems, an image completion method based on Generative Adversarial Networks (GAN) with two discrimination networks was proposed. The completion model consists of one completion network, one global discrimination network and one local discrimination network. The broken area of the image to be completed was filled using a similar patch as input to the completion network, which greatly improved the speed and quality of the generated images. The global discrimination network comprehensively used global marginal structure information and feature information to ensure that the completed image conformed to visual coherence. While discriminating the output image, the local discrimination network used assistant feature patches found in multiple images to improve the generalization ability of discrimination, which solved the problem that the completion network easily overfits to overly concentrated or single features. The experimental results show that the proposed method achieves good completion effects on face images and applies well to other kinds of images, with Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) better than those of state-of-the-art deep learning based methods.
    Enhanced algorithm of image super-resolution based on dual-channel convolutional neural networks
    JIA Kai, DUAN Xintao, LI Baoxia, GUO Daidou
    2018, 38(12):  3563-3569.  DOI: 10.11772/j.issn.1001-9081.2018040820
    Abstract ( )   PDF (1211KB) ( )  
    References | Related Articles | Metrics
    Single-channel image super-resolution methods cannot achieve both fast convergence and high-quality texture detail restoration. To solve this problem, an Enhanced image Super-Resolution algorithm based on Dual-Channel convolutional neural networks (EDCSR) was proposed. Firstly, the network was divided into a deep channel and a shallow channel: the deep channel extracted detailed texture information, while the shallow channel mainly restored the overall contour of the image. Then, the deep channel used residual learning to deepen the network and reduce model parameters, eliminating the degradation problem caused by an over-deep network; long and short-term memory blocks were constructed to eliminate the artifacts and noise caused by the deconvolution layer, and texture information at different scales was extracted by a multi-scale method, while the shallow channel only needed to restore the main contour of the image. Finally, the losses of the two channels were integrated to continuously optimize the network and guide it to generate high-resolution images. The experimental results show that, compared with the End-to-End image super-resolution algorithm via Deep and Shallow convolutional networks (EEDS), the proposed algorithm converges faster, image edge and texture reconstruction are significantly improved, and the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) are improved by 0.15 dB and 0.0031 on average on the Set5 dataset, and by 0.18 dB and 0.0035 on average on the Set14 dataset.
    Deep belief network-based matching algorithm for abnormal point sets
    LI Fang, ZHANG Ting
    2018, 38(12):  3570-3573.  DOI: 10.11772/j.issn.1001-9081.2018051076
    Abstract ( )   PDF (802KB) ( )  
    References | Related Articles | Metrics
    In the presence of outliers, noise or missing points, it is difficult to distinguish abnormal points from normal points in a damaged point set, and the matching relationship between point sets is also affected by these abnormal points. Based on prior knowledge of the connections between normal points and the differences between normal and abnormal points, the estimation of the matching relationship between point sets was modeled as a machine learning process. Firstly, considering the error characteristics between two normal point sets, a learning method based on Deep Belief Networks (DBN) was proposed to train the network with normal point sets. Then, the damaged point set was tested with the trained DBN, and the outliers and mismatched points were identified at the network output according to a preset error threshold. In matching experiments on 2D and 3D point sets with noise and missing points, the matching performance was quantitatively evaluated by the model's predictions on the samples, and the matching precision reaches more than 94%. The experimental results show that the proposed algorithm can successfully detect the noise in the point set and identify almost all matching points even in the case of data loss.
    Local image intensity fitting model combining global image information
    CHEN Xing, WANG Yan, WU Xuan
    2018, 38(12):  3574-3579.  DOI: 10.11772/j.issn.1001-9081.2018040834
    Abstract ( )   PDF (1081KB) ( )  
    References | Related Articles | Metrics
    The Local Image Fitting (LIF) model is sensitive to the size, shape and position of the initial contour. To solve this problem, a local image intensity fitting model combined with global information was proposed. Firstly, a global term based on global image information was constructed. Secondly, the global term was linearly combined with the local term of the LIF model. Finally, an image segmentation model in the form of a partial differential equation was obtained. The finite difference method was used in the numerical implementation, and the level set function was regularized by a Gaussian filter to keep it smooth. In the segmentation experiments, the proposed model obtains correct segmentation results under different initial contours, and its segmentation time is only 20% to 50% of that of the LIF model. The experimental results show that the proposed model is insensitive to the size, shape and position of the initial contour of the evolutionary curve, can effectively segment images with intensity inhomogeneity, and segments faster. In addition, it can quickly segment some real and synthetic images without any initial contour.
    Backtracking-based conjugate gradient iterative hard thresholding reconstruction algorithm
    ZHANG Yanfeng, FAN Xi'an, YIN Zhiyi, JIANG Tiegang
    2018, 38(12):  3580-3583.  DOI: 10.11772/j.issn.1001-9081.2018040822
    Abstract ( )   PDF (696KB) ( )  
    References | Related Articles | Metrics
    The Backtracking-based Iterative Hard Thresholding (BIHT) algorithm suffers from a large number of iterations and an overly long reconstruction time. To address this, a Backtracking-based Conjugate Gradient Iterative Hard Thresholding (BCGIHT) algorithm was proposed. Firstly, the idea of backtracking was adopted in each iteration: the support set of the previous iteration was combined with the current support set to form a candidate set. Then, a new support set was selected in the space spanned by the matrix columns corresponding to the candidate set, reducing repeated selections of support sets and ensuring that the correct support set was found quickly. Finally, according to whether or not the support sets of consecutive iterations were equal, either the gradient descent method or the conjugate gradient method was used as the optimization method, so as to accelerate convergence. Reconstruction experiments on one-dimensional random Gaussian signals show that the reconstruction success rate of BCGIHT is higher than that of BIHT and similar algorithms, and its reconstruction time is at least 25% less than that of BIHT. Reconstruction experiments on the Pepper image show that the reconstruction accuracy and anti-noise performance of BCGIHT are comparable with BIHT and similar algorithms, while its reconstruction time is reduced by more than 50% compared with BIHT.
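    The two building blocks of the backtracking step, the hard thresholding operator and the support-set union, can be sketched as follows (a simplified illustration; the full algorithm additionally performs the least-squares and gradient/conjugate-gradient updates described above):

```python
def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero the rest."""
    keep = set(sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(x)]

def support(x):
    """Indices of the nonzero entries of x."""
    return [i for i, v in enumerate(x) if v != 0.0]

def merge_support(prev, current):
    """Backtracking step: the candidate set is the union of the
    previous and current supports, so a wrongly dropped index can
    still be recovered in the next optimization step."""
    return sorted(set(prev) | set(current))
```

    Comparing `support` of consecutive iterates is what decides between gradient descent (support changed) and conjugate gradient (support stable) in BCGIHT.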
    Stationary wavelet domain deep residual convolutional neural network for low-dose computed tomography image estimation
    GAO Jingzhi, LIU Yi, BAI Xu, ZHANG Quan, GUI Zhiguo
    2018, 38(12):  3584-3590.  DOI: 10.11772/j.issn.1001-9081.2018040833
    Abstract ( )   PDF (1168KB) ( )  
    References | Related Articles | Metrics
    Concerning the large amount of noise in Low-Dose Computed Tomography (LDCT) reconstructed images, a deep residual Convolutional Neural Network model in the Stationary Wavelet Transform domain (SWT-CNN) was proposed to estimate a Normal-Dose Computed Tomography (NDCT) image from an LDCT image. In the training phase, the high-frequency coefficients of LDCT images after three-level Stationary Wavelet Transform (SWT) decomposition were taken as inputs, the residual coefficients, obtained by subtracting the high-frequency coefficients of NDCT images from those of LDCT images, were taken as labels, and the mapping between inputs and labels was learned by a deep CNN. In the testing phase, the high-frequency coefficients of the NDCT image were predicted from those of the LDCT image by the learned mapping, and the predicted NDCT image was reconstructed by the Inverse Stationary Wavelet Transform (ISWT). The datasets comprised 50 pairs of 512×512 normal-dose chest and abdominal scan slices of the same phantom together with images reconstructed after adding noise in the projection domain, of which 45 pairs constituted the training set and the remaining 5 pairs the test set. The SWT-CNN model was compared with state-of-the-art methods such as Non-Local Means (NLM), the K-Singular Value Decomposition (K-SVD) algorithm, Block-Matching and 3D filtering (BM3D), and image-domain CNN (Image-CNN). The experimental results show that the NDCT images predicted by the SWT-CNN model have higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM), and smaller Root Mean Square Error (RMSE), than those of the other algorithms. The proposed model is feasible and effective in improving the quality of low-dose CT images.
    People counting method combined with feature map learning
    YI Guoxian, XIONG Shuhua, HE Xiaohai, WU Xiaohong, ZHENG Xinbo
    2018, 38(12):  3591-3595.  DOI: 10.11772/j.issn.1001-9081.2018051162
    Abstract ( )   PDF (841KB) ( )  
    References | Related Articles | Metrics
    To deal with background interference, illumination variation and occlusion between targets in people counting for real public scene videos, a people counting method combining feature map learning and first-order dynamic linear regression was proposed. Firstly, a mapping model between the Scale-Invariant Feature Transform (SIFT) features of the image and the true target density map was established, and the feature map containing target and background features was obtained using this mapping model and the SIFT features. Then, since background changes in surveillance video are small and the background features in the feature map are relatively stable, a people counting regression model was established by first-order dynamic linear regression from the integral of the feature map and the actual number of people. Finally, the estimated number of people was obtained through this regression model. Experiments on the MALL and PETS2009 datasets show that, compared with the cumulative attribute space method, the mean absolute error of the proposed method is reduced by 2.2%, while compared with the first-order dynamic linear regression method based on corner detection, the mean absolute error and mean relative error of the proposed method are reduced by 6.5% and 2.3% respectively.
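    The first-order model, count ≈ a·F + b where F is the integral of the feature map, reduces to ordinary least squares; a minimal sketch with toy data (the numbers are illustrative, not from MALL or PETS2009):

```python
def fit_line(xs, ys):
    """Ordinary least squares for the first-order model y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict_count(feature_sum, a, b):
    """Estimated number of people from the integrated feature map."""
    return a * feature_sum + b
```

    In the dynamic variant described above, the coefficients are refreshed as new labeled frames arrive, tracking slow changes in the scene.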
    High fidelity haze removal method for remote sensing images based on estimation of haze thickness map
    WANG Yueyun, HUANG Wei, WANG Rui
    2018, 38(12):  3596-3600.  DOI: 10.11772/j.issn.1001-9081.2018051149
    Abstract ( )   PDF (969KB) ( )  
    References | Related Articles | Metrics
    Haze removal from remote sensing images easily causes distortion of ground objects. To solve this problem, an improved haze removal algorithm was proposed on the basis of the traditional additive haze pollution model, called high fidelity haze removal based on estimation of the Haze Thickness Map (HTM). Firstly, the HTM was obtained by the traditional additive haze removal algorithm, and the mean value over the cloud-free areas was subtracted from the whole HTM so that the haze thickness of cloud-free areas was close to zero. Then, the haze thickness of blue ground objects in the degraded image was estimated separately. Finally, the cloud-free image was obtained by subtracting the finally optimized haze thickness map of each band from the degraded image. Experiments were carried out on multiple optical remote sensing images with different resolutions. The experimental results show that the proposed method can effectively solve the serious distortion of blue ground objects, improve the haze removal effect on degraded images, and preserve the data fidelity of cloud-free areas.
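    The core correction above, shifting the thickness map so that cloud-free pixels have near-zero thickness before subtracting it per band, can be sketched as follows. The function name, mask convention and band gain are assumptions for illustration, not the paper's notation.

```python
import numpy as np

def dehaze(degraded, htm, cloudfree_mask, band_gain=1.0):
    # Shift the haze thickness map so cloud-free pixels have ~zero
    # thickness, then subtract the (band-scaled) thickness from the image.
    htm0 = htm - htm[cloudfree_mask].mean()
    htm0 = np.clip(htm0, 0.0, None)          # thickness cannot be negative
    return degraded - band_gain * htm0

img = np.array([[100.0, 160.0], [100.0, 190.0]])   # one band, two hazy pixels
htm = np.array([[10.0, 70.0], [10.0, 100.0]])
mask = np.array([[True, False], [True, False]])     # left column is cloud-free
clear = dehaze(img, htm, mask)
```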
    Multi-modal process fault detection method based on improved partial least squares
    LI Yuan, WU Haoyu, ZHANG Cheng, FENG Liwei
    2018, 38(12):  3601-3606.  DOI: 10.11772/j.issn.1001-9081.2018051183
    Abstract ( )   PDF (908KB) ( )  
    References | Related Articles | Metrics
    As a traditional data-driven method, Partial Least Squares (PLS) performs poorly in fault detection on multi-modal data. To solve this problem, a new fault detection method called PLS based on Local Neighborhood Standardization (LNS-PLS) was proposed. Firstly, the original data was transformed towards a Gaussian distribution by the LNS method. On this basis, the PLS monitoring model was established, and the control limits of T2 and the Squared Prediction Error (SPE) were determined. Secondly, the test data was also standardized by LNS, and then its PLS monitoring indicators were calculated for process monitoring and fault detection, which overcomes the inability of PLS to deal with multi-modal data. The proposed method was applied to numerical examples and the penicillin production process, and its results were compared with those of Principal Component Analysis (PCA), K Nearest Neighbors (KNN) and PLS. The experimental results show that the proposed method is superior to PLS, KNN and PCA in fault detection, with high accuracy in classification and multi-modal process fault detection.
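    The LNS step admits a compact sketch: each sample is standardized by the mean and standard deviation of its k nearest neighbors in the training set, so separate operating modes are pulled to a common Gaussian-like scale. This is a minimal illustration under assumed names and toy data, not the paper's implementation.

```python
import numpy as np

def lns(train, x, k=3):
    # Local Neighborhood Standardization: standardize a sample with the
    # mean and standard deviation of its k nearest training neighbors.
    d = np.linalg.norm(train - x, axis=1)
    neighbors = train[np.argsort(d)[:k]]
    return (x - neighbors.mean(axis=0)) / (neighbors.std(axis=0) + 1e-12)

# Two operating modes far apart; after LNS both mode centers map near zero,
# so a single set of PLS control limits can cover both modes.
train = np.array([[0.0], [0.1], [-0.1], [10.0], [10.1], [9.9]])
z_low = lns(train, np.array([0.0]))
z_high = lns(train, np.array([10.0]))
```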
    Facial attractiveness evaluation method based on fusion of feature-level and decision-level
    LI Jinman, WANG Jianming, JIN Guanghao
    2018, 38(12):  3607-3611.  DOI: 10.11772/j.issn.1001-9081.2018051040
    Abstract ( )   PDF (818KB) ( )  
    References | Related Articles | Metrics
    In studies of personalized facial attractiveness, prediction of personal preferences cannot reach high accuracy due to the lack of features and insufficient consideration of the factors influencing public aesthetics. To improve prediction accuracy, a new personalized facial attractiveness prediction framework based on feature-level and decision-level information fusion was proposed. Firstly, objective measurements of different facial beauty characteristics were fused, representative facial attractiveness features were selected by a feature selection algorithm, and the local and global features of the face were fused with different information fusion strategies. Then, traditional facial features were fused with features extracted automatically by deep networks, and a variety of fusion strategies were proposed for comparison. The score information representing public aesthetic preferences and the personalized score information representing individual preferences were fused at the decision level. Finally, the personalized facial attractiveness prediction score was obtained. The experimental results show that, compared with existing personalized facial attractiveness evaluation algorithms, the proposed multi-level fusion method significantly improves prediction accuracy, achieving a Pearson correlation coefficient above 0.9. The proposed method can be applied to personalized recommendation and face beautification.
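    The decision-level stage above blends a public-aesthetics prediction with an individual-preference prediction. A minimal weighted-average sketch follows; the weight alpha is an assumption for illustration, not a value reported in the paper.

```python
def fuse_scores(public_score, personal_score, alpha=0.5):
    # Decision-level fusion: blend the public-aesthetics score with the
    # personalized score; alpha controls how much the public model counts.
    return alpha * public_score + (1.0 - alpha) * personal_score

# A rater who deviates from the crowd gets a prediction pulled toward
# their own preference model.
fused = fuse_scores(4.0, 2.0, alpha=0.25)
```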
    Ability dynamic measurement algorithm for software crowdsourcing workers
    YU Dunhui, WANG Yi, ZHANG Wanshan
    2018, 38(12):  3612-3617.  DOI: 10.11772/j.issn.1001-9081.2018040900
    Abstract ( )   PDF (968KB) ( )  
    References | Related Articles | Metrics
    Existing software crowdsourcing platforms do not adequately consider the ability of workers, which leads to low completion quality of the tasks assigned to them. To solve this problem, a new Ability Dynamic Measurement algorithm (ADM) for software crowdsourcing workers was proposed. Firstly, the initial ability of a worker was calculated based on the worker's static skill coverage rate. Secondly, for each task completed by the worker in the past, task complexity, task completion quality and development timeliness were integrated to calculate the development ability, and the decay of this ability over time was modeled by a time factor. Then, the ability measurement value was dynamically updated according to the time sequence of all historically completed tasks. Finally, the worker's development ability for a task to be assigned was calculated based on the skill coverage rates of historical tasks. The experimental results show that, compared with the user reliability measurement algorithm, the proposed algorithm has better rationality and effectiveness, and the average coincidence degree of ability measurement reaches 90.5%, which can effectively guide task assignment.
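    The time-factor idea above can be sketched as an exponential decay applied to each historical task's ability score, followed by a decay-weighted average. The half-life value and function names are assumptions for illustration.

```python
def decayed_ability(ability, days_since, half_life=180.0):
    # Time factor: a completed task's contribution to the measured
    # ability halves every `half_life` days (an assumed value).
    return ability * 0.5 ** (days_since / half_life)

def dynamic_ability(history, half_life=180.0):
    # history: list of (ability_on_task, days_since_completion).
    # The overall measure is a decay-weighted average over past tasks,
    # so recent work dominates the estimate.
    weights = [0.5 ** (d / half_life) for _, d in history]
    return sum(a * w for (a, _), w in zip(history, weights)) / sum(weights)
```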
    Optimization model of green multi-type vehicles routing problem
    HE Dongdong, LI Yinzhen
    2018, 38(12):  3618-3624.  DOI: 10.11772/j.issn.1001-9081.2018051085
    Abstract ( )   PDF (1146KB) ( )  
    References | Related Articles | Metrics
    In order to reduce the exhaust pollution generated by vehicles during logistics distribution, on the basis of the traditional Vehicle Routing Problem with Time Windows (VRPTW) model, an approximate calculation method for fuel consumption and carbon emission was introduced from the perspective of energy saving and emission reduction, and a Green Multi-type Vehicle Routing Problem with Time Windows (G-MVRPTW) model was established. The minimum total cost was taken as the optimization objective to find environment-friendly green routes, and an improved tabu search algorithm was designed to solve the problem. When generating the initial solution and neighborhood solutions, the order of customers within a subpath was set according to the ascending order of the latest service time and the time window size of each customer point. At the same time, the evaluation function of a solution was improved through three indexes, namely the minimum number of subpaths, the total cost of subpaths and the overload, and a mechanism for reducing the possibility of premature convergence was adopted. Finally, the effectiveness and feasibility of the proposed model and algorithm were verified by numerical experiments. The experimental results show that the ton-kilometer index can better measure fuel consumption and carbon emission cost, and that the entry of new energy vehicles into the transportation market is a new trend. The proposed approach can provide decision support and methodological guidance for low-carbon transportation and management.
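    A common ton-kilometre style approximation, in the spirit of the fuel and emission calculation mentioned above, treats fuel use as linear in carried load over the travelled distance. The coefficients below are illustrative placeholders, not values from the paper.

```python
def fuel_and_co2(distance_km, load_t, rho_empty=0.2, rho_star=0.05,
                 e_co2=2.63):
    # Fuel (litres) grows linearly with carried load over the distance:
    # an empty-vehicle rate plus a per-tonne increment (assumed values).
    fuel_l = (rho_empty + rho_star * load_t) * distance_km
    # ~2.63 kg of CO2 per litre of diesel is a commonly used factor.
    co2_kg = e_co2 * fuel_l
    return fuel_l, co2_kg

fuel, co2 = fuel_and_co2(100.0, 10.0)   # 100 km leg carrying 10 t
```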
    Radar-guided video linkage surveillance model and algorithm
    QU Licheng, GAO Fenfen, BAI Chao, LI Mengmeng, ZHAO Ming
    2018, 38(12):  3625-3630.  DOI: 10.11772/j.issn.1001-9081.2018040858
    Abstract ( )   PDF (990KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems of limited monitoring area and difficult target locating in video security surveillance systems, a radar-guided video linkage monitoring model was established, exploiting radar's wide monitoring range and insensitivity to optical conditions. On this basis, a target locating algorithm and a multi-target selection algorithm were proposed. Firstly, according to the target information detected by radar, the corresponding camera azimuth and pitch angles for a moving target in the linkage model were automatically calculated, so that the target could be accurately locked, monitored and tracked by the camera in real time. Then, when multiple targets appeared in the surveillance scene, the multi-target selection algorithm performed weighted data fusion of the dispersion of targets, the radial velocity of each target, and the distance between target and camera, to select the target with the highest priority for intensive monitoring. The experimental results show that the locating accuracy of the proposed algorithm reaches 0.94 for pedestrians and 0.84 for vehicles, achieving accurate target locating. Moreover, the proposed multi-target selection algorithm can effectively select the best monitoring target in complex environments, with good robustness and real-time performance.
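    The geometric core of the linkage model, converting a radar plane position into camera pan and tilt angles, can be sketched as below. The coordinate conventions and function name are assumptions; the paper's calibrated model is not reproduced.

```python
import math

def aim_camera(target_xy, cam_xy, cam_height):
    # Pan (azimuth) from the horizontal offset; tilt (pitch) from the
    # camera's mounting height over the ground distance to the target.
    dx = target_xy[0] - cam_xy[0]
    dy = target_xy[1] - cam_xy[1]
    ground = math.hypot(dx, dy)
    azimuth = math.degrees(math.atan2(dy, dx))
    pitch = -math.degrees(math.atan2(cam_height, ground))  # looking down
    return azimuth, pitch

# Target 10 m due "east" of a camera mounted 10 m up: pan 0°, tilt -45°.
az, pt = aim_camera((10.0, 0.0), (0.0, 0.0), 10.0)
```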
    Ship detection under complex sea and weather conditions based on deep learning
    XIONG Yongping, DING Sheng, DENG Chunhua, FANG Guokang, GONG Rui
    2018, 38(12):  3631-3637.  DOI: 10.11772/j.issn.1001-9081.2018040933
    Abstract ( )   PDF (1097KB) ( )  
    References | Related Articles | Metrics
    In order to detect ships of different types and sizes in complex marine environments, a real-time object detection algorithm based on deep learning was proposed. Firstly, a method for discriminating sharp images from blurred ones, such as those captured in rain or fog, was proposed. Then, a multi-scale object detection algorithm based on the You Only Look Once (YOLO) v2 deep learning framework was proposed. Finally, concerning the characteristics of remote sensing images of ships, an improved non-maximum suppression and saliency partitioning algorithm was proposed to optimize the final detection results. The experimental results show that, on the dataset of an open ship detection competition under complex sea and meteorological conditions, the precision of the proposed method is 16% higher than that of the original YOLO v2 algorithm.
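    For reference, the baseline that the improved post-processing above builds on is standard non-maximum suppression: keep the highest-scoring box and discard overlapping boxes above an IoU threshold. This sketch implements only the standard variant; the paper's improvements and saliency partitioning are not reproduced.

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    # boxes: (N, 4) arrays of [x1, y1, x2, y2]; returns kept indices.
    order = np.argsort(scores)[::-1]          # best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the kept box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]          # drop heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 9, 9], [20, 20, 30, 30]], float)
kept = nms(boxes, np.array([0.9, 0.8, 0.7]))
```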
    Application of convolution neural network in heart beat recognition
    YUAN Yongpeng, YOU Datao, QU Shenming, WU Xiangjun, WEI Mengfan, ZHU Mengbo, GENG Xudong, JIA Nairen
    2018, 38(12):  3638-3642.  DOI: 10.11772/j.issn.1001-9081.2018040843
    Abstract ( )   PDF (987KB) ( )  
    References | Related Articles | Metrics
    ElectroCardioGram (ECG) heart beat classification plays an important role in clinical diagnosis. However, there is a serious imbalance among the available data of the four ECG heart beat types, which restricts the improvement of classification performance. To solve this problem, a class information extraction method based on Convolutional Neural Network (CNN) was proposed. Firstly, a general CNN model was constructed from equal amounts of data of the four ECG types. Then, based on the general model, four CNN models that more effectively express the propensity information of the four heart beat categories were constructed. Finally, the outputs of the four category-specific CNN models were combined to discriminate the heart beat type. The experimental results show that the average sensitivity of the proposed method is 99.68%, the average positive detection rate is 98.58%, and the comprehensive index is 99.12%, which outperforms the two-stage cluster analysis method.
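    The final combination step admits a one-vs-rest reading: each category-specific model emits a propensity score, and the predicted beat type is the class whose dedicated model responds most strongly. A minimal sketch under that assumption (the paper's exact combination rule is not reproduced):

```python
def combine_heartbeat_models(propensities):
    # propensities: one score per category-specific CNN, in class order.
    # Predict the class whose dedicated model is most confident.
    return max(range(len(propensities)), key=lambda k: propensities[k])

# Four model outputs for one beat; the second model responds most strongly.
pred = combine_heartbeat_models([0.10, 0.70, 0.20, 0.05])
```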
    Indoor speech separation and sound source localization system based on dual-microphone
    CHEN Binjie, LU Zhihua, ZHOU Yu, YE Qingwei
    2018, 38(12):  3643-3648.  DOI: 10.11772/j.issn.1001-9081.2018040874
    Abstract ( )   PDF (866KB) ( )  
    References | Related Articles | Metrics
    To explore the possibility of separating and locating multiple sound sources in a two-dimensional plane with only two microphones, an indoor speech separation and sound source localization system based on dual microphones was proposed. According to the signals collected by the microphones, a dual-microphone delay-attenuation model was established. Then, the Degenerate Unmixing Estimation Technique (DUET) algorithm was used to estimate the delay and attenuation parameters of the model, and a parameter histogram was drawn. In the speech separation stage, Binary Time-Frequency Masking (BTFM) was established, and, according to the parameter histogram, the binary masking method was applied to separate the mixed speech. In the sound source localization stage, the mathematical equations determining the location of a sound source were obtained by deriving the relationship between the model's attenuation parameters and the signal energy ratio. The Roomsimove toolbox was used to simulate the indoor acoustic environment. Through Matlab simulation and geometric coordinate calculation, locating in the two-dimensional plane was completed while separating multiple sound source targets. The experimental results show that the locating errors of the proposed system for multiple sound source signals are less than 2%, which contributes to the research and development of small systems.
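    The parameter-estimation step of DUET can be sketched directly from the anechoic two-microphone model: per time-frequency bin, X2 = a·exp(-jωδ)·X1, so the complex ratio X2/X1 yields the attenuation a and (for |ωδ| < π) the relative delay δ. Names and the toy bin below are illustrative.

```python
import numpy as np

def duet_params(X1, X2, omega):
    # Per time-frequency bin: attenuation from the magnitude of X2/X1,
    # delay from its phase divided by the angular frequency (rad/s).
    R = X2 / X1
    a = np.abs(R)
    delta = -np.angle(R) / omega
    return a, delta

# One bin with known attenuation 0.5 and delay 1 ms at omega = 1000 rad/s.
omega = np.array([1000.0])
X1 = np.array([1.0 + 0.0j])
X2 = 0.5 * np.exp(-1j * omega * 0.001) * X1
a, delta = duet_params(X1, X2, omega)
```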
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn