Table of Contents

    10 May 2019, Volume 39 Issue 5
    Artificial intelligence
    Dynamic updating method of approximations in multigranulation rough sets based on tolerance relation
    XU Yi, XIAO Peng
    2019, 39(5):  1247-1251.  DOI: 10.11772/j.issn.1001-9081.2018102086
    Abstract   PDF (717KB)
    Focused on the issue that previously missing attribute values become known when an incomplete information system changes, and on the low time efficiency of updating the approximations in multigranulation rough sets, a dynamic update algorithm based on the tolerance relation was proposed. Firstly, the properties of approximation changes under the tolerance relation were discussed, and the change trends of the approximations of optimistic and pessimistic multigranulation rough sets were derived from these properties. Then, a theorem for dynamically updating tolerance classes was proposed to address the low efficiency of tolerance class updating, and on this basis a dynamic update algorithm based on the tolerance relation was developed. Simulation experiments were carried out on four datasets from the UCI repository. As the datasets grow larger, the computation time of the proposed update algorithm is much smaller than that of the static update algorithm. The experimental results show that the proposed dynamic update algorithm is more time-efficient than the static algorithm, which verifies its correctness and efficiency.
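As an illustrative sketch (not the authors' code), the tolerance relation underlying such methods can be expressed in a few lines of Python: two objects of an incomplete information system are tolerant when they agree on every attribute for which both values are known, with missing values (written `*` here) compatible with anything. The example system `U` is hypothetical.

```python
def tolerant(x, y, missing="*"):
    """Two objects are tolerant if, on every attribute,
    their values are equal or at least one is missing."""
    return all(a == b or a == missing or b == missing for a, b in zip(x, y))

def tolerance_class(universe, i, missing="*"):
    """Indices of all objects tolerant with object i."""
    return [j for j, y in enumerate(universe) if tolerant(universe[i], y, missing)]

# Example incomplete information system (rows: objects, columns: attributes).
U = [
    ["a", "*", "c"],
    ["a", "b", "c"],
    ["d", "b", "*"],
]
```

Dynamic update methods like the one above exploit the fact that when a missing value becomes known, only the tolerance classes of affected objects need recomputing, rather than the whole relation.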
    Positive region preservation reduction based on multi-specific decision classes in incomplete decision systems
    KONG Heqing, ZHANG Nan, YUE Xiaodong, TONG Xiangrong, YU Tianyou
    2019, 39(5):  1252-1260.  DOI: 10.11772/j.issn.1001-9081.2018091963
    Abstract   PDF (1396KB)
    Existing attribute reduction algorithms mostly focus on all decision classes in a decision system, but in actual decision processes, decision makers may only be concerned with one or several of them. To solve this problem, a theoretical framework of positive region preservation reduction based on multi-specific decision classes in incomplete decision systems was proposed. Firstly, positive region preservation reduction for a single specific decision class in incomplete decision systems was defined. Secondly, this reduction was extended to multiple specific decision classes, and the corresponding discernibility matrix and function were constructed. Thirdly, with the related theorems analyzed and proved, an algorithm of Positive region preservation Reduction for Multi-specific decision classes based on Discernibility Matrix in incomplete decision systems (PRMDM) was proposed. Finally, four UCI datasets were selected for experiments. On the Teaching-assistant-evaluation, House, Connectionist-bench and Cardiotocography datasets, the average reduction length of the Positive region preservation Reduction based on Discernibility Matrix in incomplete decision systems (PRDM) algorithm is 4.00, 13.00, 9.00 and 20.00 respectively, while that of the PRMDM algorithm (with the number of specific decision classes set to 2) is 3.00, 8.00, 8.00 and 18.00 respectively. The validity of the PRMDM algorithm is verified by the experimental results.
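A minimal sketch of the discernibility-matrix idea, under simplifying assumptions (not the paper's exact construction): for each pair of objects with different decisions, at least one of which lies in a target decision class, collect the condition attributes that discern them; in an incomplete table, an attribute discerns only when both values are known and differ. The toy table `X`, `d` is hypothetical.

```python
def discernibility_matrix(objects, decisions, target_classes, missing="*"):
    """Collect, per object pair, the attributes that discern them,
    restricted to pairs involving at least one target decision class."""
    n = len(objects)
    matrix = {}
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] == decisions[j]:
                continue  # same class: nothing to discern
            if decisions[i] not in target_classes and decisions[j] not in target_classes:
                continue  # pair irrelevant to the specific classes of interest
            attrs = {k for k, (a, b) in enumerate(zip(objects[i], objects[j]))
                     if a != missing and b != missing and a != b}
            matrix[(i, j)] = attrs
    return matrix

X = [["1", "0"], ["1", "1"], ["0", "1"]]
d = ["yes", "no", "no"]
M = discernibility_matrix(X, d, target_classes={"yes"})
```

A reduct then corresponds to a minimal attribute set hitting every non-empty entry of `M`.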
    Point-of-interest recommendation integrating social networks and image contents
    SHAO Changcheng, CHEN Pinghua
    2019, 39(5):  1261-1268.  DOI: 10.11772/j.issn.1001-9081.2018102084
    Abstract   PDF (1145KB)
    The rapid growth of Location-Based Social Networks (LBSN) provides a vast amount of Point-of-Interest (POI) data, which facilitates research on POI recommendation. To solve the low recommendation accuracy caused by the extreme sparseness of the user-POI matrix and the lack of POI features, a POI recommendation method integrating social networks and image contents, called SVPOI, was proposed, combining POI information such as tags, geography, social relations, scores and images. Firstly, by analyzing the POI dataset, a distance factor based on a power-law distribution and a tag factor based on term frequency were constructed, and merged with the existing historical score data to build a new user-POI matrix. Secondly, the VGG16 Deep Convolutional Neural Network (DCNN) was used to process POI images to construct the POI image content matrix. Thirdly, the user social matrix was constructed from the social network information of the POI data. Finally, using the Probabilistic Matrix Factorization (PMF) model, the POI recommendation list was obtained by integrating the user-POI matrix, image content matrix and user social matrix. On real-world datasets, the accuracy of SVPOI is improved significantly compared to PMF, SoRec (Social Recommendation using probabilistic matrix factorization), TrustMF (Social Collaborative Filtering by Trust) and TrustSVD (Social Collaborative Filtering by Trust with SVD), while its Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are decreased by 5.5% and 7.82% respectively compared to those of TrustMF, the best-performing comparison method. The experimental results demonstrate the recommendation effectiveness of the proposed method.
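The PMF backbone shared by several of these recommenders can be sketched as plain stochastic gradient descent on the observed entries of a sparse rating matrix, with L2 regularization. This is a generic illustration, not SVPOI itself; the toy rating list `R` and all hyperparameters are assumptions.

```python
import random

def pmf_sgd(ratings, n_users, n_items, k=4, lr=0.02, reg=0.05, epochs=500, seed=0):
    """Factorize a sparse (user, item, rating) list into latent factor
    matrices U (users x k) and V (items x k) by SGD on observed entries."""
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (err * vf - reg * uf)
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

def rmse(ratings, U, V):
    se = [(r - sum(uf * vf for uf, vf in zip(U[u], V[i]))) ** 2
          for u, i, r in ratings]
    return (sum(se) / len(se)) ** 0.5

R = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 4), (2, 2, 2)]
U, V = pmf_sgd(R, n_users=3, n_items=3)
```

Methods like SVPOI extend this objective with additional coupled matrices (image content, social ties) sharing the same latent factors.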
    Social recommendation method based on multi-dimensional trust and collective matrix factorization
    WANG Lei, REN Hang, GONG Kai
    2019, 39(5):  1269-1274.  DOI: 10.11772/j.issn.1001-9081.2018102110
    Abstract   PDF (859KB)
    Aiming at the shortcomings in trust analysis of existing social recommendation algorithms, a social recommendation algorithm based on multi-dimensional trust and collective matrix factorization was proposed, making full use of user trust relationships mined from auxiliary social information. Firstly, dynamic and static local trust relationships were extracted from users' social interaction behaviors and social circle features respectively, and the global trust relationship was extracted from the structural features of the trust network. Then, a social recommendation algorithm was obtained by collectively factorizing the enhanced following-relationship matrix and the social trust relationship matrix, and stochastic gradient descent was used to solve it. The experimental results on a Sina microblog dataset indicate that the proposed algorithm outperforms popular social recommendation algorithms such as socialMF, LOCABAL, contextMF and TBSVD (Trust Based Singular Value Decomposition) in terms of recommendation accuracy and Top-K performance.
    Incremental robust non-negative matrix factorization with sparseness constraints and its application
    YANG Liangdong, YANG Zhixia
    2019, 39(5):  1275-1281.  DOI: 10.11772/j.issn.1001-9081.2018092032
    Abstract   PDF (988KB)
    Aiming at the problem that the computational scale of Robust Non-negative Matrix Factorization (RNMF) grows with the number of training samples, an incremental robust non-negative matrix factorization algorithm with sparseness constraints was proposed. Firstly, robust non-negative matrix factorization was performed on the initial data. Then, the factorized result participated in the subsequent iterative computation. Finally, under sparseness constraints, the coefficient matrix was combined with incremental learning, which made the objective function value fall faster in the iterative solution, reducing the computational cost and improving the sparseness of the factorized data. In the numerical experiments, the proposed algorithm was compared with the RNMF algorithm and the RNMF with Sparseness Constraints (RNMFSC) algorithm. The experimental results on the ORL and YALE face databases show that the proposed algorithm is superior to the other two algorithms in terms of running time and sparseness of the factorized data, and has a better clustering effect; in particular, on the YALE face database with the number of clusters set to 3, the clustering accuracy of the proposed algorithm reaches 91.67%.
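For orientation, basic NMF (the starting point that robust and incremental variants extend) can be sketched with the standard multiplicative updates; this is the classical Lee-Seung scheme, not the authors' incremental robust algorithm, and the small test matrix `V` is an assumption.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k=2, iters=300, seed=0, eps=1e-9):
    """Basic multiplicative-update NMF: V (m x n) ~ W (m x k) @ H (k x n),
    all entries kept non-negative."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(m)]
    return W, H

def frob_err(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

V = [[1.0, 0.0, 2.0], [2.0, 0.0, 4.0], [0.0, 3.0, 0.0]]
W, H = nmf(V, k=2)
```

Incremental variants avoid re-running these updates from scratch when new columns of `V` arrive, which is the cost the abstract targets.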
    Micro-expression recognition based on local region method
    ZHANG Yanliang, LU Bing, HONG Xiaopeng, ZHAO Guoying, ZHANG Weitao
    2019, 39(5):  1282-1287.  DOI: 10.11772/j.issn.1001-9081.2018102090
    Abstract   PDF (917KB)
    Micro-Expression (ME) occurrence is related only to a local region of the face, with very short duration and subtle movement intensity, and some unrelated muscle movements also occur in the face while a micro-expression takes place. With existing global methods of micro-expression recognition, the spatio-temporal patterns of these unrelated changes are also extracted, reducing the representation capability of the feature vectors and thus degrading recognition performance. To solve this problem, a local region method for micro-expression recognition was proposed. Firstly, according to the regions associated with the Action Units (AU) related to micro-expressions, seven micro-expression-related local regions were partitioned using facial key-point coordinates. Then, the spatio-temporal patterns of these local regions were extracted and concatenated to form the feature vectors for micro-expression recognition. The experimental results of leave-one-subject-out cross validation show that the micro-expression recognition accuracy of the local region method is 9.878% higher than that of the global region method. The analysis of the confusion matrix of each region's recognition result shows that the proposed method makes full use of the structural information of each local facial region and effectively eliminates the influence of regions unrelated to the micro-expression, so its recognition performance is significantly improved compared with the global region method.
    Recognition model for French named entities based on deep neural network
    YAN Hong, CHEN Xingshu, WANG Wenxian, WANG Haizhou, YIN Mingyong
    2019, 39(5):  1288-1292.  DOI: 10.11772/j.issn.1001-9081.2018102155
    Abstract   PDF (796KB)
    In existing French Named Entity Recognition (NER) research, machine learning models mostly use the character morphological features of words, while multilingual generic named entity models use the semantic features represented by word embeddings; neither takes semantic, character-morphological and grammatical features into account comprehensively. Aiming at this shortcoming, a deep neural network based model, CGC-fr, was designed to recognize French named entities. Firstly, word embeddings, character embeddings and grammar feature vectors were extracted from the text. Then, character features were extracted from the character embedding sequence of each word by a Convolutional Neural Network (CNN). Finally, a Bi-directional Gated Recurrent Unit network (BiGRU) and a Conditional Random Field (CRF) were used to label named entities in French text according to the word embeddings, character features and grammar feature vectors. In the experiments, the F1 score of the CGC-fr model reaches 82.16% on the test set, which is 5.67, 1.79 and 1.06 percentage points higher than that of the NERC-fr, LSTM (Long Short-Term Memory network)-CRF and Char attention models respectively. The experimental results show that the CGC-fr model combining the three features is more advantageous than the others.
    Efficient judicial document classification based on knowledge block summarization and word mover’s distance
    MA Jiangang, ZHANG Peng, MA Yinglong
    2019, 39(5):  1293-1298.  DOI: 10.11772/j.issn.1001-9081.2018102085
    Abstract   PDF (1025KB)
    With the deepening of the informatization of national judicial organs, the massive judicial documents accumulated through years of information technology application provide a data basis for developing intelligent judicial services. Analyzing the similarity of judicial documents enables the push of similar cases, providing judicial officials with intelligent decision support and greatly improving the quality and efficiency of case handling. Since most document classification approaches for general domains perform inefficiently on judicial documents, as they do not consider the complex structure and knowledge semantics specific to such documents, an efficient judicial document classification approach based on knowledge block summarization and Word Mover's Distance (WMD) was proposed. Firstly, a domain ontology knowledge model was built for judicial documents. Secondly, based on the domain ontology, core knowledge block summaries of judicial documents were obtained by information extraction. Thirdly, the WMD algorithm was used to calculate judicial document similarity based on the knowledge block summaries. Finally, the K-Nearest Neighbors (KNN) algorithm was used to classify the judicial documents. With documents of two typical crimes used as experimental data, the experimental results show that the proposed approach improves the accuracy of judicial document classification by 5.5 and 9.9 percentage points respectively, while running 52.4 and 89.1 times faster than the traditional WMD similarity computation algorithm.
    Semantic judgement method of polysemous keywords in dynamic requirement traceability
    TANG Chen, LI Yonghua, RAO Mengni, HU Gangjun
    2019, 39(5):  1299-1304.  DOI: 10.11772/j.issn.1001-9081.2018102150
    Abstract   PDF (892KB)
    Although ontology-based dynamic requirement traceability methods can improve the accuracy of trace links compared with Information Retrieval (IR) methods, constructing a reasonable and effective ontology, especially a domain ontology, is complicated and tedious. To reduce the time and labor cost of domain ontology construction, a Modifier Ontology-based Keyword Semantic Judgment Method (MOKSJM), which combines modifiers with a general ontology, was proposed. Firstly, the collocation relationships between keywords and modifiers were analyzed. Then, the semantics of keywords were determined by combining modifier ontologies with rules, avoiding the bias in dynamic requirement traceability results caused by the polysemy of keywords. Finally, based on the above analysis, the semantics of keywords were adjusted and reflected in similarity scores. Since the number of modifiers in requirement documents, design documents and so on is small, the time and labor cost of establishing the modifier ontology is relatively low. The experimental results show that, compared to the domain ontology-based dynamic requirement traceability method, MOKSJM shows only a small gap in precision at the same recall rate, and compared to the Vector Space Model (VSM) method, MOKSJM can effectively improve the accuracy of requirement traceability results.
    Multi-label lazy learning approach based on firefly method
    CHENG Yusheng, QIAN Kun, WANG Yibing, ZHAO Dawei
    2019, 39(5):  1305-1311.  DOI: 10.11772/j.issn.1001-9081.2018109182
    Abstract   PDF (1074KB)
    The existing Improved Multi-label Lazy Learning Approach (IMLLA) considers only neighbor label correlation information and ignores similarity information when using neighbor labels, which may reduce the robustness of the approach. To solve this problem, a Multi-label Lazy Learning Approach based on the FireFly method (FF-MLLA), which introduces the firefly method and combines similarity information with label information, was proposed. Firstly, the Minkowski distance was used to measure the similarity between samples and find the neighbor points. Secondly, the label count vector was improved by combining the neighbor points with the firefly method. Finally, Singular Value Decomposition (SVD) and the kernel Extreme Learning Machine (ELM) were used to perform linear classification. The robustness of the approach is improved by considering both label information and similarity information. The experimental results demonstrate that the proposed approach improves classification performance considerably compared to other multi-label learning approaches, and statistical hypothesis testing and stability analysis further illustrate its rationality and effectiveness.
    Path planning of mobile robot based on improved asymptotically-optimal bidirectional rapidly-exploring random tree algorithm
    WANG Kun, ZENG Guohui, LU Dunke, HUANG Bo, LI Xiaobin
    2019, 39(5):  1312-1317.  DOI: 10.11772/j.issn.1001-9081.2018102213
    Abstract   PDF (910KB)
    To overcome the randomness of RRT-Connect and the slow convergence of B-RRT* (asymptotically-optimal Bidirectional Rapidly-exploring Random Tree) in path generation, an efficient path planning algorithm based on B-RRT*, abbreviated as EB-RRT*, was proposed. Firstly, an intelligent sampling function was introduced to achieve more directional expansion of the random tree, improving path smoothness and reducing search time. A rapid exploration strategy was also added to EB-RRT*, in which the RRT-Connect exploration mode was adopted to ensure rapid expansion in free space, and an improved asymptotically-optimal Rapidly-exploring Random Tree (RRT*) algorithm was adopted to prevent the search from being trapped in a local optimum in obstacle space. Finally, EB-RRT* was compared with the Rapidly-exploring Random Tree (RRT), RRT-Connect, RRT* and B-RRT* algorithms. The simulation results show that the improved algorithm is superior to the others in the efficiency and smoothness of path planning, reducing path planning time by 68.3% and the number of iterations by 48.6% compared with the B-RRT* algorithm.
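For readers unfamiliar with the RRT family, a minimal goal-biased RRT (the baseline all these variants improve on) can be sketched as follows. This is a simplified illustration in an obstacle-free 2D square, not EB-RRT*; the workspace bounds, step size and bias are assumptions.

```python
import math
import random

def rrt(start, goal, step=0.5, goal_bias=0.3, max_iter=2000, tol=0.5, seed=42):
    """Minimal goal-biased RRT in an obstacle-free square [0,10]^2.
    Returns the path from start to goal, or None on failure."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        # With probability goal_bias, sample the goal itself.
        sample = goal if rng.random() < goal_bias else (rng.uniform(0, 10),
                                                        rng.uniform(0, 10))
        # Nearest node in the tree, then steer one step toward the sample.
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        d = math.dist(nodes[i], sample)
        if d < 1e-12:
            continue
        t = min(1.0, step / d)
        new = (nodes[i][0] + t * (sample[0] - nodes[i][0]),
               nodes[i][1] + t * (sample[1] - nodes[i][1]))
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) <= tol:
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
```

B-RRT*-style algorithms grow two such trees (from start and goal), rewire nodes for asymptotic optimality, and, as in EB-RRT*, bias sampling to speed convergence.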
    Multiple extended target tracking algorithm for nonlinear systems
    HAN Yulan, HAN Chongzhao
    2019, 39(5):  1318-1324.  DOI: 10.11772/j.issn.1001-9081.2018092020
    Abstract   PDF (1131KB)
    Most current extended target tracking algorithms assume a linear Gaussian system. To track multiple extended targets in nonlinear Gaussian systems, a multiple extended target tracking algorithm using a particle filter to jointly estimate the target states and the association hypothesis was proposed. Firstly, the idea of jointly estimating the multiple extended target states and the association hypothesis was introduced, which avoids the mutual constraints between target state estimation and data association. Then, based on the extended target state evolution model and measurement model, a joint proposal distribution for the multiple extended targets and the association hypothesis was established, and the Bayesian framework for the joint estimation was implemented by particle filtering. Finally, to avoid the curse of dimensionality in the particle filter implementation, the generation and evolution of the joint state particles were decomposed into those of the individual target state particles, and the particle set of each target was resampled according to its associated weights, so that each target retains particles with better state estimates while suppressing the poorly estimated ones. Simulation results show that, compared with the Gaussian-mixture implementation of the extended target probability hypothesis density filter and its sequential Monte Carlo implementation, the estimation accuracy of the target states is improved, and the Jaccard distance of shape estimation is reduced by approximately 30% and 20% respectively. The proposed algorithm is more suitable for multiple extended target tracking in nonlinear systems.
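The per-target resampling step described above, keeping particles in proportion to their weights, is typically done with systematic resampling. A generic sketch (standard technique, not the paper's specific filter):

```python
import random

def systematic_resample(weights, seed=0):
    """Systematic resampling: return indices of particles to keep,
    drawn in proportion to their (normalized) weights."""
    n = len(weights)
    total = sum(weights)
    cum, c = [], 0.0
    for w in weights:
        c += w / total
        cum.append(c)
    # One random offset, then evenly spaced positions through [0, 1).
    u0 = random.Random(seed).uniform(0, 1.0 / n)
    indices, j = [], 0
    for i in range(n):
        u = u0 + i / n
        while cum[j] < u:
            j += 1
        indices.append(j)
    return indices

idx = systematic_resample([0.7, 0.1, 0.1, 0.1])
```

High-weight particles are duplicated, low-weight ones dropped, which is exactly the "retain the better state estimates" behavior the abstract relies on.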
    End-to-end speech synthesis based on WaveNet
    QIU Zeyu, QU Dan, ZHANG Lianhai
    2019, 39(5):  1325-1329.  DOI: 10.11772/j.issn.1001-9081.2018102131
    Abstract   PDF (819KB)
    The Griffin-Lim algorithm is widely used for phase estimation in end-to-end speech synthesis, but it tends to produce obviously artificial speech with low fidelity. Aiming at this problem, an end-to-end speech synthesis system based on the WaveNet network architecture was proposed. Based on the Seq2Seq (Sequence-to-Sequence) structure, the input text was first converted into one-hot vectors; then an attention mechanism was introduced to obtain a Mel spectrogram; finally, a WaveNet network was used to reconstruct phase information and generate time-domain waveform samples from the Mel spectrogram features. For English and Chinese, the proposed method achieves a Mean Opinion Score (MOS) of 3.31 on the LJSpeech-1.0 corpus and 3.02 on the THchs-30 corpus, outperforming the end-to-end systems based on the Griffin-Lim algorithm as well as parametric systems in terms of naturalness.
    Data science and technology
    Blockchain based decentralized item sharing and transaction service system
    FAN Jili, HE Pu, LI Xiaohua, NIE Tiezheng, YU Ge
    2019, 39(5):  1330-1335.  DOI: 10.11772/j.issn.1001-9081.2018112512
    Abstract   PDF (933KB)
    With the development of the sharing economy, there is an urgent need for highly trusted distributed transaction management, which traditional centralized information systems cannot meet. Blockchain technology provides a shared ledger mechanism, which lays the foundation for building trusted distributed transaction management services. Using Ethereum, a blockchain 2.0 platform supporting smart contracts, as the basic framework, the operation mechanism and implementation technology of a decentralized item sharing and transaction service system based on blockchain technology were studied in depth. A decentralized item sharing and transaction service system framework based on Ethereum was designed, and a transaction management process based on the smart contract mechanism was proposed. The system implementation technology, including the user interface, was described in detail, and the transaction processing performance of the system was tested. The experimental results indicate that the Ethereum-based transaction management system ensures the credibility of data and has high operational efficiency, with an average transaction processing speed of 21.7 items/s and an indexed average query speed of 117.6 items/s.
    Research on proof of work mining dilemma based on policy gradient algorithm
    WANG Tiantian, YU Shuangyuan, XU Baomin
    2019, 39(5):  1336-1342.  DOI: 10.11772/j.issn.1001-9081.2018102197
    Abstract   PDF (1022KB)
    In view of the mining dilemma caused by block withholding attacks under the Proof of Work (PoW) consensus mechanism in blockchains, the game between mining pools was modeled as an Iterated Prisoner's Dilemma (IPD), and the policy gradient algorithm of deep reinforcement learning was used to study the IPD strategy choices. Each mining pool was treated as an independent agent, and the miners' infiltration rate was quantified as a behavior distribution in reinforcement learning. The policy network of the policy gradient method was used to predict and optimize the agents' behavior in order to maximize the miners' average revenue, and the effectiveness of the policy gradient algorithm was validated through simulation experiments. Experimental results show that the mining pools attack each other at the beginning, with miners' average revenue less than 1, which corresponds to the Nash equilibrium problem. After self-adjustment by the policy gradient algorithm, the relationship between the mining pools transforms from mutual attack to mutual cooperation, with the infiltration rate of each mining pool tending to zero and miners' average revenue tending to 1. The results show that the policy gradient algorithm can solve the Nash equilibrium problem of the mining dilemma and maximize the miners' average revenue.
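The policy-gradient machinery can be illustrated with a stripped-down REINFORCE learner for a single pool choosing between "attack" and "cooperate". The stylized revenues (0.9 for attack, 1.0 for cooperation, echoing the abstract's "less than 1" vs "tending to 1") and all hyperparameters are assumptions for illustration, not the paper's revenue model.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train(rewards=(0.9, 1.0), lr=0.1, steps=3000, seed=0):
    """REINFORCE with a running-mean baseline on a two-action game:
    action 0 = attack (stylized revenue 0.9), action 1 = cooperate (1.0)."""
    rng = random.Random(seed)
    logits = [0.0, 0.0]
    baseline = 0.0
    for t in range(1, steps + 1):
        probs = softmax(logits)
        a = 0 if rng.random() < probs[0] else 1
        r = rewards[a]
        baseline += (r - baseline) / t      # running mean of revenue
        adv = r - baseline
        # grad of log pi(a) w.r.t. logits is one-hot(a) - probs
        for i in range(2):
            grad = (1.0 if i == a else 0.0) - probs[i]
            logits[i] += lr * adv * grad
    return softmax(logits)

probs = train()
```

With the higher-revenue cooperative action, the policy mass shifts toward cooperation, mirroring the transition the abstract reports.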
    Vehicle type mining and application analysis based on urban traffic big data
    JI Lina, CHEN Kai, YU Yanwei, SONG Peng, WANG Shuying, WANG Chenrui
    2019, 39(5):  1343-1350.  DOI: 10.11772/j.issn.1001-9081.2018109310
    Abstract   PDF (1387KB)
    Real-time urban traffic monitoring has become an important part of modern urban management, and the traffic big data collected by video monitoring is widely applied to urban management and traffic control. However, such huge citywide monitoring traffic big data is rarely used in urban traffic and urban computing research. Vehicle type mining and application analysis were carried out on the citywide monitoring traffic big data of a provincial capital city. Firstly, three types of vehicles with important influence on urban traffic were defined: periodic private cars, taxis and public commuter buses, and a corresponding mining method was proposed for each type. Experiments on 120 million vehicle records collected from 1704 video monitoring points in Jinan demonstrate the effectiveness of the proposed definitions and mining methods. Secondly, taking four communities as examples, the residents' traffic modes and the relationships between these modes and the distribution of surrounding Points of Interest (POI) were mined and analyzed. Moreover, the potential applications of urban traffic big data combined with POI in urban planning, demand forecasting and preference recommendation were explored.
    User opinion extraction based on adaptive crowd labeling with cost constraint
    ZHAO Wei, LIN Yuming, HUANG Taoyi, LI You
    2019, 39(5):  1351-1356.  DOI: 10.11772/j.issn.1001-9081.2018112496
    Abstract   PDF (1034KB)
    User reviews contain a wealth of user opinion information, which has great reference value for potential customers and merchants. Opinion targets and opinion words are the core objects of user reviews, so their automatic extraction is key to intelligent applications of user reviews. At present, the problem is mainly solved by supervised extraction methods, which depend on high-quality labeled samples to train the model, while traditional manual labeling is time-consuming, laborious and costly. Crowdsourcing computation provides an effective way to build a high-quality training sample set; however, the quality of labeling results is uneven due to factors such as the workers' knowledge background. To obtain high-quality labeled samples at a limited cost, an adaptive crowdsourcing labeling method based on evaluating workers' professional level was proposed to construct a reliable dataset of opinion target-opinion word pairs. Firstly, workers with high professional levels were identified at small cost. Then, a task distribution mechanism based on worker reliability was designed. Finally, an effective fusion algorithm for labeling results was designed using the dependency relationships between opinion targets and opinion words, and the final reliable results were generated by integrating the labeling results of different workers. A series of experiments on real datasets show that, under a low cost budget, the reliability of the high-quality opinion target-opinion word dataset built by the proposed method is improved by about 10% compared with the GLAD (Generative model of Labels, Abilities, and Difficulties) model and the MV (Majority Vote) method.
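The contrast between plain majority voting (MV) and reliability-aware fusion can be sketched in a few lines; this is a generic weighted-vote illustration, not the paper's fusion algorithm, and the workers, labels and reliability scores are hypothetical.

```python
from collections import defaultdict

def weighted_vote(labels, reliability):
    """Fuse workers' labels for one item: each worker's vote is weighted
    by an estimate of that worker's reliability (default 0.5 if unknown)."""
    scores = defaultdict(float)
    for worker, label in labels:
        scores[label] += reliability.get(worker, 0.5)
    return max(scores, key=scores.get)

def majority_vote(labels):
    return weighted_vote(labels, {})  # empty table: all workers weigh 0.5

votes = [("w1", "battery"), ("w2", "battery"), ("w3", "screen")]
rel = {"w1": 0.3, "w2": 0.3, "w3": 0.95}
```

Here MV follows the two low-reliability workers, while reliability weighting follows the single expert, which is the effect adaptive crowd labeling exploits.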
    Time utility balanced online task assignment algorithm under spatial crowdsourcing environment
    ZHANG Xingsheng, YU Dunhui, ZHANG Wanshan, WANG Chenxu
    2019, 39(5):  1357-1363.  DOI: 10.11772/j.issn.1001-9081.2018092027
    Abstract   PDF (1051KB)
    Focusing on the poor overall allocation effect caused by considering the total utility of task allocation and the task waiting time separately in studies of task allocation under spatial crowdsourcing environments, a dynamic threshold algorithm based on an allocation time factor was proposed. Firstly, the allocation time factor of a task was calculated from its estimated waiting time and the time it had already waited. Secondly, the task allocation order was obtained by comprehensively considering the return value of the task and its allocation time factor. Thirdly, a threshold was set for each task by adding a dynamic adjustment term to an initial value. Finally, a candidate matching set was built for each task according to the threshold condition, the candidate matching pair with the largest matching coefficient was selected from it and added to the result set, and the task allocation was completed. At a task allocation rate of 95.8%, the proposed algorithm increases the total allocation utility by 20.4% compared with the greedy algorithm; compared with the random threshold algorithm, it increases the total allocation utility by 17.8% and decreases the average task waiting time by 13.2%; compared with the Two phase based Global Online Allocation-Greedy (TGOA-Greedy) algorithm, it increases the total allocation utility by 13.9%. The experimental results show that the proposed algorithm can shorten the average waiting time of tasks while improving the total utility of task allocation, achieving a balance between the two.
    Over sampling ensemble algorithm based on margin theory
    ZHANG Zongtang, CHEN Zhe, DAI Weiguo
    2019, 39(5):  1364-1367.  DOI: 10.11772/j.issn.1001-9081.2018112346
    Abstract   PDF (597KB)
    In order to solve the problem that traditional ensemble algorithms are not suitable for imbalanced data classification, Over Sampling AdaBoost based on Margin theory (MOSBoost) was proposed. Firstly, the margins of the original samples were obtained by pre-training. Then, minority class samples were heuristically duplicated according to margin sorting, forming a new balanced sample set. Finally, the final ensemble classifier was obtained by training AdaBoost with the balanced sample set as input. In experiments on UCI datasets, F-measure and G-mean were used to evaluate MOSBoost, AdaBoost, Random OverSampling AdaBoost (ROSBoost) and Random UnderSampling AdaBoost (RDSBoost). The experimental results show that MOSBoost is superior to the other three algorithms; compared with AdaBoost, it improves the F-measure and G-mean criteria by 8.4% and 6.2% respectively.
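The margin-guided duplication step can be sketched as follows: duplicate minority samples, smallest margin first (i.e. the samples the pre-trained ensemble is least confident about), until the classes balance. This is an illustrative reading of the heuristic under stated assumptions, not the authors' exact procedure; the toy data and margins are hypothetical.

```python
def margin_oversample(X, y, margins, minority_label):
    """Duplicate minority-class samples, smallest margin first, until the
    minority class matches the majority class in size."""
    minority = [i for i, lab in enumerate(y) if lab == minority_label]
    majority_count = len(y) - len(minority)
    need = majority_count - len(minority)
    ranked = sorted(minority, key=lambda i: margins[i])  # hardest first
    X_new, y_new = list(X), list(y)
    for k in range(need):
        i = ranked[k % len(ranked)]  # cycle through low-margin samples
        X_new.append(X[i])
        y_new.append(y[i])
    return X_new, y_new

X = [[0], [1], [2], [3], [4], [5]]
y = [1, 1, 1, 1, 0, 0]            # two minority (label 0) samples
m = [0.9, 0.8, 0.7, 0.6, 0.1, 0.4]
X2, y2 = margin_oversample(X, y, m, minority_label=0)
```

The balanced `(X2, y2)` would then be fed to a standard AdaBoost trainer.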
    Cyber security
    Software defined network path security based on Hash chain
    LI Zhaobin, LIU Zeyi, WEI Zhanzhen, HAN Yu
    2019, 39(5):  1368-1373.  DOI: 10.11772/j.issn.1001-9081.2018091857
    Abstract   PDF (1058KB)
    For the security problem that a Software Defined Network (SDN) controller cannot guarantee that the network policies it issues are correctly executed on the forwarding devices, a new forwarding path monitoring security solution was proposed. Firstly, based on the controller's global view capability, a path credential interaction processing mechanism based on OpenFlow was designed. Secondly, hash chains and message authentication codes were introduced as the key technologies for generating and processing forwarding path credential information. Thirdly, on this basis, the Ryu controller and the Open vSwitch open-source switch were deeply optimized, with credential processing flows added, constructing a lightweight path security mechanism. The test results show that the proposed mechanism can effectively guarantee the security of the data forwarding path, and its throughput consumption is reduced by more than 20% compared with SDNsec, making it more suitable for network environments with complex routes; however, its latency and CPU usage fluctuations exceed 15% and need further optimization.
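The two building blocks, a hash chain over the hops of a path and a per-hop message authentication code, can be sketched with Python's standard library. This is a conceptual illustration only; the hop identifiers, key and construction details are assumptions, not the paper's protocol.

```python
import hashlib
import hmac

def hash_chain(seed, hops):
    """Each switch on the path extends the credential:
    c_i = SHA-256(c_{i-1} || hop_id)."""
    c = hashlib.sha256(seed).digest()
    for hop in hops:
        c = hashlib.sha256(c + hop.encode()).digest()
    return c

def credential_tag(key, credential):
    """Message authentication code over the credential, so a switch
    cannot forge a credential it did not legitimately extend."""
    return hmac.new(key, credential, hashlib.sha256).hexdigest()

seed = b"flow-1234"
path = ["s1", "s2", "s3"]
cred = hash_chain(seed, path)
tag = credential_tag(b"secret-key", cred)
```

Because each link of the chain depends on the previous one, the controller can recompute the expected credential for the path it installed and detect any detour or reordering.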
    Secure network coding scheme based on chaotic encryption against wiretapping
    XU Guangxian, WANG Dong
    2019, 39(5):  1374-1377.  DOI: 10.11772/j.issn.1001-9081.2018102128
    Focused on the problems of extra bandwidth overhead and high computational complexity in realizing secure network coding against wiretapping, a secure network coding scheme based on double chaotic sequences was proposed. Firstly, the first dimension of the source information was encrypted using a Cat-Logistic sequence. Then, a sparse pre-coding matrix was constructed from the encrypted data. Finally, the remaining vectors were linearly and randomly mixed with the pre-coding matrix to realize anti-wiretapping. Compared with the traditional Secure Practical netwOrk Coding (SPOC) scheme, the proposed scheme does not introduce extra source coding redundancy thanks to the sparse pre-coding matrix, reducing bandwidth overhead. Theoretical analysis and experimental results show that the proposed scheme not only has lower coding complexity but also improves network security and transmission efficiency.
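A minimal sketch of chaotic stream encryption, using a plain logistic map in place of the paper's Cat-Logistic sequence (an assumption made for brevity): the map generates a keystream that is XORed with the data, so decryption is the same operation with the same initial value.

```python
def logistic_keystream(x0: float, n: int, r: float = 3.99) -> bytes:
    """n keystream bytes from the logistic map x_{k+1} = r*x_k*(1-x_k),
    standing in for the Cat-Logistic sequence used in the paper."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_encrypt(data: bytes, x0: float) -> bytes:
    """XOR the data with the chaotic keystream; applying the function
    twice with the same x0 restores the plaintext."""
    ks = logistic_keystream(x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

The initial value x0 acts as the shared secret; small changes in x0 produce a completely different keystream due to the map's sensitivity to initial conditions.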
    Logical key hierarchy plus based key management program for wireless sensor network
    HAN Si, ZHENG Baokun, CAO Qimin
    2019, 39(5):  1378-1384.  DOI: 10.11772/j.issn.1001-9081.2018102175
    Aiming at the secure communication problem of Heterogeneous Sensor Network (HSN), a secure group key management scheme named W-LKH++ was proposed based on the LKH++ tree. Firstly, since wireless sensor nodes have low configurations, the group key initialization method of the LKH++ tree was modified to reduce the computational overhead on each sensor node. Then, the way keys are held was improved to reduce the storage overhead on each sensor node. Finally, a dynamic key updating method suitable for cluster head nodes was proposed to enhance the ability of cluster head nodes to resist node capture while keeping communication overhead low, improving the communication security of Wireless Sensor Network (WSN). Performance analysis and simulation results show that W-LKH++ improves WSN security with low computation, storage and communication overhead.
    Design and optimization of network anonymous scanning system
    HE Yunhua, NIU Tong, LIU Tianyi, XIAO Ke, LU Xiang
    2019, 39(5):  1385-1388.  DOI: 10.11772/j.issn.1001-9081.2018111960
    An anonymous network scanning system was proposed for the traceability problem faced by network scanning tools. Firstly, an anonymous system was combined with the network scanning tool to implement anonymous scanning. Then, local privatization of the system was implemented based on the existing anonymous system. Thirdly, through traffic analysis, it was found that Nmap's multi-process scanning degrades to single-process scanning when run through a proxy chain, resulting in lower scan performance. Finally, a performance optimization scheme based on the concurrency of multiple Nmap processes was proposed, which divided the overall scan task into multiple scan tasks and assigned them in parallel to multiple separate Nmap processes. The experimental results show that the scanning delay of the performance optimization scheme is close to that of the normal scanning system, achieving the purpose of improving the performance of the anonymous scanning system. Therefore, the optimized network anonymous scanning system hinders traceability while improving scanning efficiency.
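The task-splitting idea of the optimization scheme can be sketched as follows: partition the target hosts into chunks, one per concurrent Nmap process. The function below only performs the partitioning; actually launching the Nmap processes through the proxy chain is omitted from this sketch.

```python
from ipaddress import ip_network

def split_scan_tasks(cidr: str, n_procs: int):
    """Partition the hosts of the target network into up to n_procs
    interleaved chunks, one per concurrent Nmap process."""
    hosts = [str(h) for h in ip_network(cidr).hosts()]
    # interleave so each process gets a spread of addresses
    chunks = [hosts[i::n_procs] for i in range(n_procs)]
    return [c for c in chunks if c]
```

Each chunk would then be handed to one independent Nmap process, restoring the concurrency lost under the proxy chain.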
    Directed fuzzing method for binary programs
    ZHANG Hanfang, ZHOU Anmin, JIA Peng, LIU Luping, LIU Liang
    2019, 39(5):  1389-1393.  DOI: 10.11772/j.issn.1001-9081.2018102194
    In order to address the problem that mutation in current fuzzing is somewhat blind and the samples generated by mutation mostly pass through the same high-frequency paths, a binary fuzzing method based on lightweight program analysis was proposed and implemented. Firstly, the target binary program was statically analyzed to filter out the comparison instructions which hinder the sample files from penetrating deeply into the program during fuzzing. Secondly, the target binary program was instrumented to obtain the specific values of the operands in these comparison instructions, from which real-time comparison progress information was established for each comparison instruction, and the importance of each sample was measured according to this progress information. Thirdly, the real-time path coverage information gathered during fuzzing was used to increase the probability that samples passing through rare paths were selected for mutation. Finally, the input files were mutated in a directed manner by combining the comparison progress information with a heuristic strategy, improving the efficiency of generating valid inputs that can bypass the comparison checks in the program. The experimental results show that the proposed method is better than the current binary fuzzing tool AFL-Dyninst both in finding crashes and discovering new paths.
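One plausible way to quantify the "comparison progress" of a sample is sketched below, under the assumption that progress is measured as the fraction of leading operand bytes that already match; the paper's exact metric and aggregation are not specified here, so both functions are illustrative.

```python
def cmp_progress(op_a: bytes, op_b: bytes) -> float:
    """Progress of one comparison instruction: fraction of leading bytes
    of the two operands that already match (1.0 = comparison passes)."""
    n = max(len(op_a), len(op_b))
    if n == 0:
        return 1.0
    same = 0
    for x, y in zip(op_a, op_b):
        if x != y:
            break
        same += 1
    return same / n

def sample_importance(progresses) -> float:
    """Importance of a sample: its best progress over all tracked
    comparisons (an illustrative aggregation rule)."""
    return max(progresses) if progresses else 0.0
```

Samples that get closer to satisfying a hard comparison (e.g. a magic-byte check) score higher and would be mutated preferentially.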
    Privacy preserving for social network relational data based on Skyline computing
    ZHANG Shuxuan, KANG Haiyan, YAN Han
    2019, 39(5):  1394-1399.  DOI: 10.11772/j.issn.1001-9081.2018112556
    With the popularity and development of social software, more and more people join social networks, which produces a large amount of valuable information, including sensitive private information. Different users have different privacy requirements and therefore require different levels of privacy protection. The level of user privacy leak in a social network is affected by many factors, such as the structure of the social network graph and the threat level of the user himself. Aiming at the personalized differential privacy preserving problem and the user privacy leak level problem, a Personalized Differential Privacy based on Skyline (PDPS) algorithm was proposed to publish social network relational data. Firstly, the user's attribute vector was built. Secondly, the user privacy leak level was calculated by the Skyline computation method and the user dataset was segmented according to this level. Thirdly, with the sampling mechanism, users with different privacy requirements were protected at different levels to realize personalized differential privacy, and noise was added to the integrated data. Finally, the processed data were analyzed for security and availability and then published. The experimental results on real datasets demonstrate that, compared with the traditional Personalized Differential Privacy (PDP) method, the PDPS algorithm has better privacy protection quality and data availability.
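The Skyline computation at the core of the leak-level step can be sketched as follows: with each user represented by an attribute vector (oriented here so that larger values mean greater leak risk, an assumption for illustration), the skyline is the set of users not dominated by any other user.

```python
def dominates(a, b):
    """a dominates b if a is at least as large in every attribute and
    strictly larger in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(points):
    """Return the skyline set: points not dominated by any other point.
    O(n^2) brute force, fine as an illustrative sketch."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

Users on the skyline are the highest-risk group; removing them and recomputing yields successive layers, which is one way a dataset could be segmented into leak levels.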
    Network and communications
    Search tree detection algorithm based on shadow domain
    LI Xiaowen, FAN Yifang, HOU Ningning
    2019, 39(5):  1400-1404.  DOI: 10.11772/j.issn.1001-9081.2018102174
    In massive Multiple-Input Multiple-Output (MIMO) systems, as the number of antennas increases, traditional detection algorithms suffer from lower performance and higher complexity, and they are not suitable for high-order modulation. To solve this problem, based on the idea of the shadow domain, a search tree detection algorithm combining Quadratic Programming (QP) and Branch and Bound (BB) was proposed. Firstly, with a QP model constructed, the unreliable symbols were extracted from the solution vector of the first-order QP algorithm; then, the BB search tree algorithm was applied to the unreliable symbols to find the optimal solution; meanwhile, three pruning strategies were proposed to reach a compromise between complexity and performance. The simulation results show that the proposed algorithm achieves a 20 dB performance gain over the traditional QP algorithm under 64 Quadrature Amplitude Modulation (QAM) and a 21 dB gain under 256 QAM. Meanwhile, with the same pruning strategies applied, the complexity of the proposed algorithm is reduced by about 50 percentage points compared with the traditional search tree algorithm.
    Signal strength difference fingerprint localization algorithm based on principal component analysis and chi-square distance
    ZHOU Fei, XIA Pengcheng
    2019, 39(5):  1405-1410.  DOI: 10.11772/j.issn.1001-9081.2018102143
    Due to the significant difference in Received Signal Strength (RSS) acquired by different types of mobile terminals, the traditional indoor localization algorithm based on an RSS location fingerprint database has low localization stability and accuracy; existing solutions that use Signal Strength Difference (SSD) instead of RSS to construct the location fingerprint database have problems such as high data dimension and high correlation redundancy, and the K-Nearest Neighbors (KNN) algorithm has low positioning accuracy. Aiming at the above problems, an SSD fingerprint localization algorithm based on Principal Component Analysis (PCA) and Chi-Square Distance (CSD) was proposed. The PCA algorithm was used to reduce the dimension of the SSD data and eliminate correlation redundancy, and CSD was used to measure the relative distance between the dimension-reduced feature vectors for position matching. In the simulation experiments, the cumulative probability curve of positioning error of the SSD location fingerprint database using the proposed algorithm is higher than those of the original RSS and SSD fingerprint databases. Compared with the traditional KNN algorithm and the improved KNN algorithm based on Cosine Similarity (COS-KNN), the average positioning error and the positioning error variance of the proposed algorithm are both significantly reduced while the time cost is slightly increased. The experimental results show that the proposed algorithm can effectively improve the positioning stability and accuracy of the original SSD fingerprint localization algorithm, and meets the real-time needs of indoor localization.
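A minimal sketch of the PCA-plus-chi-square matching pipeline, assuming a plain SVD-based PCA and a chi-square distance that takes absolute values in the denominator to tolerate negative PCA coordinates (a pragmatic choice not specified by the paper):

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA via SVD on the centred fingerprint matrix X (one row per
    reference point); return the mean and the top-k components."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def chi_square_distance(x, y, eps=1e-12):
    """0.5 * sum((x-y)^2 / (|x|+|y|)); absolute values guard against
    negative PCA coordinates."""
    return 0.5 * np.sum((x - y) ** 2 / (np.abs(x) + np.abs(y) + eps))

def match(fingerprints, positions, query, k=2):
    """Project the SSD fingerprints and the query onto the top-k
    principal components, then return the position whose projected
    fingerprint is closest to the query in chi-square distance."""
    mean, comps = pca_fit(fingerprints, k)
    F = (fingerprints - mean) @ comps.T
    q = (query - mean) @ comps.T
    return positions[int(np.argmin([chi_square_distance(f, q) for f in F]))]
```

In a real deployment the fingerprints would be offline SSD survey vectors and the query an online SSD measurement from the terminal.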
    Node classification in signed networks based on latent space projection
    SHENG Jun, GU Shensheng, CHEN Ling
    2019, 39(5):  1411-1415.  DOI: 10.11772/j.issn.1001-9081.2018112559
    Social network node classification is widely used in solving practical problems. Most existing network node classification algorithms focus on unsigned social networks, while node classification algorithms for social networks with signs on edges are rare. Based on the fact that negative links contribute more to signed network analysis than positive links, the classification of nodes in signed networks was studied. Firstly, the positive and negative networks were projected to corresponding latent spaces, and a mathematical model based on the positive and negative links in the latent spaces was proposed. Then, an iterative algorithm was proposed to optimize the model, and the iterative optimization of the latent space matrix and the projection matrix was used to classify the nodes in the network. The experimental results on signed social network datasets show that the F1 value of the classification results of the proposed algorithm is higher than 11 on the Epinions dataset and higher than 23.8 on the Slashdot dataset, which indicates that the proposed algorithm has higher accuracy than the random algorithm.
    Two-level polling control system for distinguishing site status
    YANG Zhijun, SUN Yangyang
    2019, 39(5):  1416-1420.  DOI: 10.11772/j.issn.1001-9081.2018051122
    To improve the work efficiency of the polling control model and distinguish network priorities, an Exhaustive-Threshold Two-stage Polling control model based on Site Status (ETTPSS) was proposed. Based on two priority levels, parallel processing was used and queries were only sent to busy sites according to the busy and idle states of the sites. The model can not only distinguish the priorities of transmission services but also avoid querying idle sites that have no information packets, thereby improving resource utilization and work efficiency. The method of probability generating functions and Markov chains was used to analyze the model theoretically, and the important performance parameters of the model were derived accurately. The simulation results show that the simulated values and the theoretical values are approximately equal, indicating that the theoretical analysis is correct and reasonable. Compared with the normal polling model, the performance of the model is greatly improved.
    Ultra-wideband channel environment classification algorithm based on CNN
    YANG Yanan, XIA Bin, ZHAO Lei, YUAN Wenhao
    2019, 39(5):  1421-1424.  DOI: 10.11772/j.issn.1001-9081.2018071516
    To solve the problem that Non Line Of Sight (NLOS) state identification requires classification of known channel types, a channel environment classification algorithm based on Convolutional Neural Network (CNN) was proposed. Firstly, an Ultra-WideBand (UWB) channel was sampled, and a sample set was constructed. Then, a CNN was trained by the sample set to extract features of different channel scenes. Finally, the classification of UWB channel environment was realized. The experimental results show that the overall accuracy of the model using the proposed algorithm is about 93.40% and the algorithm can effectively realize the classification of channel environments.
    Performance analysis of multi-user orthogonal correlation delay keying scheme
    ZHANG Gang, HUANG Nanfei, ZHANG Tianqi
    2019, 39(5):  1425-1428.  DOI: 10.11772/j.issn.1001-9081.2018081760
    In order to improve the transmission performance of chaotic signals, a Multi-User Orthogonal Correlation Delay Shift Keying (MU-OCDSK) scheme was proposed based on the Correlation Delay Shift Keying (CDSK) and Multi-Carrier Correlation Delay Shift Keying (MC-CDSK) schemes. Multiple carriers were used to modulate the chaotic signals. Compared with CDSK, the proposed scheme not only has higher spectral efficiency but also improves the bit error rate performance. Theoretical and Monte Carlo simulations show that, compared with MC-CDSK, the proposed scheme not only doubles the transmission rate but also improves the bit error rate performance, and the results of the theoretical and Monte Carlo simulations are consistent.
    Downlink resource scheduling based on weighted average delay in long term evolution system
    WANG Yan, MA Xiurong, SHAN Yunlong
    2019, 39(5):  1429-1433.  DOI: 10.11772/j.issn.1001-9081.2018081734
    Aiming at the transmission performance requirements of Real-Time (RT) services and Non-Real-Time (NRT) services for multiple users in the downlink transmission of Long Term Evolution (LTE) mobile communication system, an improved Modified Largest Weighted Delay First (MLWDF) scheduling algorithm based on weighted average delay was proposed. On the basis of considering both channel perception and Quality of Service (QoS) perception, a weighted average delay factor reflecting the state of the user buffer was utilized, obtained by balancing the average delays of the data to be transmitted and the data already transmitted in the user buffer. RT services with large delay and traffic are prioritized, which improves the user experience. Theoretical analysis and link simulation show that the proposed algorithm improves the QoS performance of RT services while ensuring the delay and fairness of each service. Compared with the MLWDF algorithm, when the number of users reaches 50, the packet loss rate of RT services decreases by 53.2% and the average throughput of RT traffic increases by 44.7%. Although the throughput of NRT services is sacrificed, it is still better than that of the Virtual Token MLWDF (VT-MLWDF) algorithm. The theoretical analysis and simulation results show that the transmission performance and QoS are superior to those of the comparison algorithms.
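The priority computation of an MLWDF-style rule extended with a weighted-average-delay factor might look like the sketch below; the exact form of the factor and of the QoS weight in the paper is not reproduced here, so every parameter name is illustrative.

```python
import math

def mlwdf_priority(hol_delay, inst_rate, avg_rate, delta, max_delay, w_avg):
    """Per-user scheduling priority for an MLWDF-style rule: the QoS
    weight a = -log(delta)/max_delay (delta: target packet loss
    probability, max_delay: delay bound) scales the head-of-line delay
    and the channel term inst_rate/avg_rate; w_avg is the hypothetical
    weighted-average-delay factor reflecting the user's buffer state."""
    a = -math.log(delta) / max_delay
    return a * hol_delay * (inst_rate / avg_rate) * w_avg
```

At each transmission interval, the scheduler would allocate the resource block to the user with the largest priority value.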
    Virtual reality and multimedia computing
    Subjective and objective quality assessment for stereoscopic 3D retargeted images
    FU Zhenqi, SHAO Feng
    2019, 39(5):  1434-1439.  DOI: 10.11772/j.issn.1001-9081.2018102054
    Stereoscopic 3D (S3D) image retargeting aims to adjust the aspect ratio of S3D images. To objectively and accurately assess the quality of different retargeted S3D images, a retargeted S3D image quality assessment database was constructed. Firstly, 45 original images were retargeted by eight representative retargeting algorithms at two retargeting scales to generate 720 retargeted S3D images. Then, the subjective quality evaluation score of each retargeted image was obtained via subjective testing. Finally, the subjective scores were converted to Mean Opinion Score (MOS) values. On this basis, an objective quality assessment method was proposed for retargeted S3D images, in which three types of features, including depth perception, visual comfort and the image quality of the left and right views, were extracted to predict the retargeted S3D image quality with support vector regression. Experimental results on the proposed database show that the proposed method achieves a Pearson linear correlation coefficient and a Spearman rank-order correlation coefficient higher than 0.82 and 0.81 respectively, demonstrating its superiority in retargeted S3D image visual quality assessment.
    Single image super-resolution reconstruction method based on improved convolutional neural network
    LIU Yuefeng, YANG Hanxi, CAI Shuang, ZHANG Chenrong
    2019, 39(5):  1440-1447.  DOI: 10.11772/j.issn.1001-9081.2018091887
    Aiming at the problems of edge distortion and blurred texture details in reconstructed images, an image super-resolution reconstruction method based on an improved Convolutional Neural Network (CNN) was proposed. Firstly, in the low-level feature extraction layer, various preprocessing operations were performed using three interpolation methods and five sharpening methods, and the images subjected only to an interpolation operation together with the images sharpened after interpolation were arranged into a 3D matrix. Then, the 3D feature map formed by the preprocessing was used as the multi-channel input of a deep residual network in the nonlinear mapping layer to obtain deeper texture detail information. Finally, to reduce image reconstruction time, sub-pixel convolution was introduced into the reconstruction layer to complete the image reconstruction. Experimental results on several common datasets show that, compared with classical methods, the proposed method restores texture details and high-frequency information better in the reconstructed image; furthermore, the Peak Signal-to-Noise Ratio (PSNR) is increased by 0.23 dB on average, and the structural similarity is increased by 0.0066 on average. The proposed method can better maintain the texture details of the reconstructed image and reduce edge distortion while keeping the reconstruction time acceptable, improving the performance of image reconstruction.
    Detection method of non-standard deep squat posture based on human skeleton
    YU Lu, HU Jianfeng, YAO Leiyue
    2019, 39(5):  1448-1452.  DOI: 10.11772/j.issn.1001-9081.2018102137
    Concerning the problem that, due to the lack of supervision and guidance during bodybuilding, incorrect posture may even endanger the health of the body builder, a new method for real-time detection of deep squat posture was proposed. The most common deep squat behavior in bodybuilding was abstracted and modeled using the three-dimensional information of human joints extracted by a Kinect camera, addressing the difficulty computer vision techniques have in detecting small movements. Firstly, the Kinect camera was used to capture depth images and obtain the three-dimensional coordinates of human joints in real time. Then, the deep squat posture was abstracted into the torso angle, hip angle, knee angle and ankle angle, and digital modeling was carried out to record the angle changes frame by frame. Finally, after the deep squat was completed, a threshold comparison method was used to calculate the proportion of non-standard frames over a certain period; if this proportion was greater than the given threshold, the deep squat was judged non-standard, otherwise it was judged standard. The experimental results on six different types of deep squat show that the proposed method can detect the different types of non-standard deep squats with an average recognition rate of more than 90%, so it can remind and guide bodybuilders.
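The angle modeling and threshold-comparison steps can be sketched directly: compute each joint angle from three 3D joint coordinates, then flag the squat as non-standard when the proportion of out-of-range frames exceeds a threshold. The knee-angle range and ratio threshold below are illustrative values, not the paper's.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # clamp to [-1, 1] to guard against floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def squat_is_standard(knee_angles, lo=60.0, hi=100.0, max_bad_ratio=0.2):
    """Threshold-comparison step: the squat is non-standard when the
    share of frames whose knee angle falls outside [lo, hi] exceeds
    max_bad_ratio (thresholds are illustrative)."""
    bad = sum(1 for ang in knee_angles if not lo <= ang <= hi)
    return bad / len(knee_angles) <= max_bad_ratio
```

With Kinect skeleton data, `a`, `b`, `c` would be, for example, the hip, knee and ankle joint coordinates of one frame.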
    Robust multi-manifold discriminant local graph embedding based on maximum margin criterion
    YANG Yang, WANG Zhengqun, XU Chunlin, YAN Chen, JU Ling
    2019, 39(5):  1453-1458.  DOI: 10.11772/j.issn.1001-9081.2018102113
    Most existing multi-manifold face recognition algorithms process the original noisy data directly, but noisy data often have a negative impact on the accuracy of the algorithm. To solve this problem, a Robust Multi-Manifold Discriminant Local Graph Embedding algorithm based on the Maximum Margin Criterion (RMMDLGE/MMC) was proposed. Firstly, a denoising projection was introduced to process the original data with iterative noise reduction, extracting purer data. Secondly, the data image was divided into blocks and a multi-manifold model was established. Thirdly, combined with the idea of the maximum margin criterion, an optimal projection matrix was sought to maximize the sample distances across different manifolds while minimizing the sample distances within the same manifold. Finally, the distance from the test sample manifold to the training sample manifolds was calculated for classification and identification. The experimental results show that, compared with the well-performing Multi-Manifold Local Graph Embedding algorithm based on the Maximum Margin Criterion (MLGE/MMC), the classification recognition rate of the proposed algorithm is improved by 1.04, 1.28 and 2.13 percentage points respectively on the ORL, Yale and FERET databases with noise, and the classification effect is obviously improved.
    Adaptive window regression method for face feature point positioning
    WEI Jiawang, WANG Xiao, YUAN Yubo
    2019, 39(5):  1459-1465.  DOI: 10.11772/j.issn.1001-9081.2018102057
    Focused on the low positioning accuracy of Explicit Shape Regression (ESR) for samples with facial occlusion or exaggerated facial expressions, an adaptive window regression method was proposed. Firstly, prior information was used to generate an accurate face area box for each image, feature mapping of faces was performed using the center point of the face area box, and a similarity transformation was performed to obtain multiple initial shapes. Secondly, an adaptive window adjustment strategy was given, in which the feature window size was adaptively adjusted based on the mean square error of the previous regression. Finally, based on a feature selection strategy using Mutual Information (MI), a new correlation calculation method was proposed, and the most relevant features were selected from the candidate pixel set. On the three public datasets LFPW, HELEN and COFW, the positioning accuracy of the proposed method is increased by 7.52%, 5.72% and 5.89% respectively compared to the ESR algorithm. The experimental results show that the adaptive window regression method can effectively improve the positioning accuracy of face feature points.
    Fish image retrieval algorithm based on color four channels and spatial pyramid
    ZHANG Meiling, WU Junfeng, YU Hong, CUI Zhen, DONG Wanting
    2019, 39(5):  1466-1472.  DOI: 10.11772/j.issn.1001-9081.2018112522
    With the development of computer vision applications in the field of marine fisheries, fish image retrieval has played a huge role in fishery resource surveys and fish behavior analysis. It is found that the background information of fish images can greatly interfere with fish image retrieval, and retrieval results using only the color, texture, shape and other characteristics of fish images are not accurate due to the lack of spatial position information. To solve the above problems, a novel fish image retrieval algorithm based on the HSVG (Hue, Saturation, Value, Gray) four channels and the spatial pyramid was proposed. Firstly, a visual saliency map was extracted to separate the foreground and the background, thereby reducing the interference of the image background on retrieval. Then, in order to retain some spatial position information, the fish image was converted into an HSVG four-channel map, and on this basis, spatial pyramid theory was used to segment the image and extract SURF (Speeded Up Robust Features) descriptors. Finally, the search results were obtained. To verify the effectiveness of the proposed algorithm, its recall and precision were compared with the classic HSVG algorithm and the saliency block algorithm on the QUT_fish_data and DLOU_fish_data datasets. Compared with the traditional HSVG algorithm, the precision on the two datasets is increased by at most 12% and 5%, and the recall by at most 7% and 22%, respectively. Compared with the saliency block algorithm, the precision on the two datasets is increased by at most 15% and 5%, and the recall by at most 36% and 22%, respectively. Therefore, the proposed algorithm is effective and improves the retrieval results significantly.
    Video compression artifact removal algorithm based on adaptive separable convolution network
    NIE Kehui, LIU Wenzhe, TONG Tong, DU Min, GAO Qinquan
    2019, 39(5):  1473-1479.  DOI: 10.11772/j.issn.1001-9081.2018081801
    The optical flow estimation methods frequently used in video quality enhancement and super-resolution reconstruction tasks can only estimate the linear motion between pixels. In order to solve this problem, a new multi-frame compression artifact removal network architecture was proposed. The network consisted of a motion compensation module and a compression artifact removal module. With traditional optical flow estimation replaced by adaptive separable convolution, the motion compensation module was able to handle the curvilinear motion between pixels, which optical flow methods cannot solve well. For each video frame, a corresponding convolutional kernel was generated by the motion compensation module based on the image structure and the local displacement of pixels. After that, motion offsets were estimated and pixels in the next frame were compensated by means of local convolution. The obtained compensated frame and the original next frame were combined together as the input of the compression artifact removal module, which fused the pixel information of the two frames to remove the compression artifacts of the original frame. Compared with the state-of-the-art Multi-Frame Quality Enhancement (MFQE) algorithm on the same training and testing datasets, the proposed network improves the Peak Signal-to-Noise Ratio (ΔPSNR) by 0.44 dB at most and 0.32 dB on average. The experimental results demonstrate that the proposed network performs well in removing video compression artifacts.
    Application of matching model based on grayscale tower score in unmanned aerial vehicle network video stitching
    LI Nanyun, WANG Xuguang, WU Huaqiang, HE Qinglin
    2019, 39(5):  1480-1484.  DOI: 10.11772/j.issn.1001-9081.2018092034
    Concerning the problem that, in complex and non-cooperative situations, the number of matched feature pairs and the accuracy of feature matching in video stitching cannot simultaneously meet the requirements of subsequent image stabilization and stitching, a method was proposed that scores feature points with a grayscale tower and then builds a matching model for accurate feature matching. Firstly, the phenomenon that similar gray levels merge together after grayscale compression was used to build a grayscale tower and score the feature points. Then, feature points with high scores were selected to build a matching model based on position information. Finally, according to the positioning given by the matching model, regional block matching was performed to avoid the influence of global feature point interference and large-error noise matching, and the feature matching pairs with the smallest errors were selected as the final matching result. In addition, in a motion video stream, regional feature extraction can be performed by using the information of previous and subsequent frames to build a mask, and the matching model can be selectively passed on to the next frame to save computation time. The simulation results show that, with this matching model based on grayscale tower scores, the feature matching accuracy is about 95% and the number of matched feature pairs per frame is nearly 10 times that of the traditional method. The proposed method is robust to environment and illumination while guaranteeing the number of matches and the matching accuracy without large-error matching results.
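The grayscale-tower scoring idea can be sketched as follows: compress the 256 gray levels into progressively fewer bins and score a feature point by how many tower levels still keep its neighbourhood distinguishable; the scoring rule here is illustrative, not the paper's exact formula.

```python
def grayscale_tower(levels):
    """Quantisers for a grayscale tower: each level compresses the 256
    gray values into `b` bins, so similar grays merge together."""
    return [lambda g, b=b: g * b // 256 for b in levels]

def score_point(neigh_grays, levels=(128, 64, 32, 16)):
    """Score a feature point by counting the tower levels at which its
    neighbourhood still contains more than one distinct gray bin; points
    whose contrast survives heavy compression score higher."""
    score = 0
    for quantise in grayscale_tower(levels):
        if len({quantise(g) for g in neigh_grays}) > 1:
            score += 1
    return score
```

High-contrast points keep distinct bins even at coarse levels, while low-contrast points collapse early and receive low scores.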
    Segmentation of nasopharyngeal neoplasms based on random forest feature selection algorithm
    LI Xian, WANG Yan, LUO Yong, ZHOU Jiliu
    2019, 39(5):  1485-1489.  DOI: 10.11772/j.issn.1001-9081.2018102205
    Due to the low grey-level contrast and blurred organ boundaries in medical images, a Random Forest (RF) feature selection algorithm was proposed to segment nasopharyngeal neoplasm MR images. Firstly, gray-level, texture and geometry information was extracted from the nasopharyngeal neoplasm images to construct a random forest classifier. Then, feature importances were measured by the random forest, and the proposed feature selection method was applied to the original handcrafted feature set. Finally, the optimal feature subset obtained from the feature selection process was used to construct a new random forest classifier to produce the final segmentation of the images. Experimental results show that the performance of the proposed algorithm is: Dice coefficient 79.197%, accuracy 97.702%, sensitivity 72.191%, and specificity 99.502%. Comparison with the conventional random forest based and Deep Convolutional Neural Network (DCNN) based segmentation algorithms makes it clear that the proposed feature selection algorithm can effectively extract useful information from nasopharyngeal neoplasm MR images and improve the segmentation accuracy under small-sample circumstances.
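The importance-based selection step can be sketched generically: given the per-feature importances reported by a trained random forest, keep the smallest subset whose cumulative importance reaches a coverage threshold. The coverage criterion is an assumption for illustration; the paper's selection rule may differ.

```python
def select_features(importances, coverage=0.9):
    """Keep the smallest set of feature indices whose cumulative
    (normalised) importance reaches `coverage`; a generic stand-in for
    the paper's RF-importance-based selection."""
    total = sum(importances)
    order = sorted(range(len(importances)), key=lambda i: -importances[i])
    kept, acc = [], 0.0
    for i in order:
        kept.append(i)
        acc += importances[i] / total
        if acc >= coverage:
            break
    return sorted(kept)
```

The selected indices would then be used to re-extract the feature columns and train the second, final random forest.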
    Frontier & interdisciplinary applications
    Cloud monitoring system of tunnel groups for traffic safety
    MA Qinglu, ZOU Zheng
    2019, 39(5):  1490-1494.  DOI: 10.11772/j.issn.1001-9081.2018102121
    Abstract ( )   PDF (850KB) ( )  
    References | Related Articles | Metrics
    Traditional management systems of highway tunnels suffer from defects such as decentralized and independent management, inflexible monitoring of individual sections and a low degree of visualization of disease details. Aiming at these problems, a regional joint control and measurement concept of tunnel groups for traffic safety was proposed. Firstly, the scattered monitoring data of tunnels were integrated into a cloud database to realize tunnel group management. Secondly, a "division/combination" segmental monitoring method was designed to monitor diseases at any position in the tunnels. Thirdly, on the basis of GIS (Geographic Information System) maps, JSP (Java Server Pages) technology and CANVAS technology, a visualized disease monitoring platform of tunnel groups for road network traffic safety was established. Finally, the monitoring data were analyzed and processed in real time and graded early warning of tunnel safety status was realized, so that the traffic safety of the road network could be judged. Experiments were carried out on a dataset of tunnel diseases in a certain area. The results show that the proposed system realizes functions such as tunnel group management, disease detail visualization and tunnel safety classification; meanwhile, the system also has a certain early-warning ability for road network traffic safety.
    Urban traffic signal control based on deep reinforcement learning
    SHU Lingzhou, WU Jia, WANG Chen
    2019, 39(5):  1495-1499.  DOI: 10.11772/j.issn.1001-9081.2018092015
    Abstract ( )   PDF (850KB) ( )  
    References | Related Articles | Metrics
    To meet the requirements for adaptivity and robustness of algorithms that optimize urban traffic signal control, a traffic signal control algorithm based on Deep Reinforcement Learning (DRL) was proposed to control the whole regional traffic with a control Agent constructed by a deep learning network. Firstly, the Agent predicted the best possible traffic control strategy for the current state by continuously observing the state of the traffic environment through an abstract representation of a position matrix and a speed matrix, because this matrix representation effectively abstracts vital information and reduces redundant information about the traffic environment. Then, based on the impact of the selected strategy on the traffic environment, a reinforcement learning algorithm was employed to constantly correct the intrinsic parameters of the Agent in order to maximize the global speed over a period of time. Finally, after several iterations, the Agent learned how to control the traffic effectively. Experiments in the traffic simulation software Vissim show that compared with other DRL-based algorithms, the proposed algorithm is superior in average global speed, average queue length and stability; the average global speed increases by 9% and the average queue length decreases by 13.4% compared to the baseline. The experimental results verify that the proposed algorithm can adapt to a complex and dynamically changing traffic environment.
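    The position/speed matrix state representation described above can be sketched as follows. This is a simplified single-lane illustration under assumed inputs (a list of `(position_m, speed_mps)` tuples and a fixed cell grid); the paper's actual encoding covers a whole region.

    ```python
    import numpy as np

    def encode_state(vehicles, road_len=100.0, cells=20):
        """Encode a lane as a position matrix (cell occupancy) and a
        speed matrix (mean speed per cell) - the abstract state the
        control Agent observes."""
        pos = np.zeros(cells)
        spd = np.zeros(cells)
        cnt = np.zeros(cells)
        for p, v in vehicles:
            i = min(int(p / road_len * cells), cells - 1)
            pos[i] = 1.0          # cell is occupied
            spd[i] += v           # accumulate speeds for the mean
            cnt[i] += 1
        spd = np.divide(spd, cnt, out=np.zeros_like(spd), where=cnt > 0)
        return pos, spd
    ```

    Discretizing into cells is what lets the matrices stay fixed-size regardless of how many vehicles are present, which is why the abstract calls the representation effective at abstracting vital information.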
    Secure storage and access scheme for medical records based on blockchain
    XU Jian, CHEN Zhide, GONG Ping, WANG Keke
    2019, 39(5):  1500-1506.  DOI: 10.11772/j.issn.1001-9081.2018102241
    Abstract ( )   PDF (1119KB) ( )  
    References | Related Articles | Metrics
    To solve the problems of cumbersome authorization of medical records, low efficiency of record sharing and difficult identity authentication in current medical systems, a method combining asymmetric encryption with blockchain technology was proposed: by applying the characteristics of asymmetric encryption, such as high security and simple cooperation, to the peer-to-peer network constructed by blockchain technology, cross-domain sharing of medical records becomes traceable, data become tamper-resistant and identity authentication is simplified. Firstly, based on the tamper resistance of blockchain technology combined with asymmetric encryption, a file synchronization contract and an authorization contract were designed, whose distributed storage advantages secure the privacy of users' medical information. Secondly, cross-domain acquisition contracts were designed to validate the identities of both parties and improve authentication efficiency, so that illegitimate users can be securely filtered out without a third-party notary agency. The experimental and analysis results show that the proposed scheme has obvious advantages in preventing data theft, multi-party authentication and data access control compared with the traditional scheme of using cloud computing to solve the medical record sharing problem. The proposed method provides a good application demonstration for solving the security problems in data sharing across medical institutions, and a reference for cross-domain identity verification in data sharing by using the decentralization and auditability of blockchain technology.
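    The tamper-resistance property the scheme relies on can be demonstrated with a minimal hash-chain sketch (not the authors' contracts, which run on an actual blockchain with asymmetric encryption): each block stores the hash of the previous one, so editing any earlier record invalidates every later hash.

    ```python
    import hashlib
    import json

    def make_block(record, prev_hash):
        """Append-only block: the record plus the previous block's hash,
        so any later edit breaks the chain of hashes."""
        body = json.dumps({"record": record, "prev": prev_hash},
                          sort_keys=True)
        return {"record": record, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()}

    def verify_chain(chain, genesis_prev="0" * 64):
        """Recompute every hash and check the prev links."""
        prev = genesis_prev
        for blk in chain:
            body = json.dumps({"record": blk["record"],
                               "prev": blk["prev"]}, sort_keys=True)
            if blk["prev"] != prev or \
               blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = blk["hash"]
        return True
    ```

    A real deployment would additionally sign each record with the institution's private key so that the record's origin, not just its integrity, is verifiable.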
    Intelligent risk contagion mechanism of interbank market credit lending based on multi-layer network
    ZHANG Xi, ZHU Li, LIU Luhui, ZHAN Hanglong, LU Yanmin
    2019, 39(5):  1507-1511.  DOI: 10.11772/j.issn.1001-9081.2018110064
    Abstract ( )   PDF (878KB) ( )  
    References | Related Articles | Metrics
    Analysis and research of the interbank market based on a multi-layer network structure is conducive to avoiding or weakening the impact of risk on the financial market. Based on test data simulating credit lending business scenarios, and combined with the multi-layer network structure of the interbank market and complex network analysis methods, the important nodes in the interbank market were identified from different angles; meanwhile, the Jaccard similarity coefficient between layers and the Pearson similarity coefficient between institutions were calculated, and the infectiousness of risk contagion in the interbank market was measured from macroscopic and microscopic perspectives. The experimental results show that large state-owned financial institutions such as Bank of China and China Development Bank are of high importance in the system, and the greater the similarity between institutions, the greater the infectiousness of risk contagion. Therefore, calculating importance measures of nodes in each network layer and comprehensively analyzing the risk contagion of the entire system can help regulators achieve accurate monitoring of important institutions. At the same time, comprehensive measurement, from both inter-layer and intra-layer perspectives, of the degree of risk contagion between institutions after a financial shock provides policy advice for regulators.
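    The two similarity measures mentioned above have standard definitions, sketched here. The representation of a layer as a set of directed `(lender, borrower)` edges and of an institution as an exposure vector is an assumption for illustration.

    ```python
    def layer_jaccard(edges_a, edges_b):
        """Jaccard similarity between two network layers, each given as
        a set of (lender, borrower) edges: |A ∩ B| / |A ∪ B|."""
        a, b = set(edges_a), set(edges_b)
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    def institution_pearson(x, y):
        """Pearson correlation between two institutions' exposure
        vectors (e.g. lending amounts to the same counterparties)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0
    ```

    High inter-layer Jaccard values mean the same lending relationships recur across business layers, which is the overlap through which a shock in one layer propagates to another.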
    Abnormal flow monitoring of industrial control network based on convolutional neural network
    ZHANG Yansheng, LI Xiwang, LI Dan, YANG Hua
    2019, 39(5):  1512-1517.  DOI: 10.11772/j.issn.1001-9081.2018091928
    Abstract ( )   PDF (956KB) ( )  
    References | Related Articles | Metrics
    Aiming at the inaccuracy of traditional abnormal flow detection models in industrial control systems, an abnormal flow detection model based on Convolutional Neural Network (CNN) was proposed, consisting of a convolutional layer, a fully connected layer, a dropout layer and an output layer. Firstly, the actually collected network flow feature values were scaled to the range of grayscale pixel values, generating a network flow grayscale image. Secondly, the generated grayscale image was fed into the designed convolutional neural network for training and model tuning. Finally, the trained model was used for abnormal flow detection in the industrial control network. The experimental results show that the proposed model has a recognition accuracy of 97.88%, which is 5 percentage points higher than that of the Back Propagation (BP) neural network with the highest existing accuracy.
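    The first step, turning a flow-feature vector into a grayscale image, can be sketched as below. The min-max scaling to the 0-255 pixel range follows the abstract; the image side length and zero-padding of unused pixels are assumptions.

    ```python
    import numpy as np

    def flow_to_grayscale(features, side=8):
        """Min-max scale a flow-feature vector to 0-255 gray pixels and
        reshape it (zero-padded) into a side x side grayscale image."""
        f = np.asarray(features, dtype=float)
        span = f.max() - f.min()
        g = (f - f.min()) / span * 255 if span > 0 else np.zeros_like(f)
        img = np.zeros(side * side)
        img[:g.size] = g
        return img.reshape(side, side).astype(np.uint8)
    ```

    Once flows are images, off-the-shelf 2-D convolution layers can learn spatial patterns over the feature layout, which is the point of the conversion.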
    Detection of new ground buildings based on generative adversarial network
    WANG Yulong, PU Jun, ZHAO Jianghua, LI Jianhui
    2019, 39(5):  1518-1522.  DOI: 10.11772/j.issn.1001-9081.2018102083
    Abstract ( )   PDF (841KB) ( )  
    References | Related Articles | Metrics
    Aiming at the inaccuracy of methods based on ground textures and spatial features in detecting new ground buildings, a novel Change Detection model based on Generative Adversarial Networks (CDGAN) was proposed. Firstly, a traditional image segmentation network (U-net) was improved with the Focal loss function and used as the Generator (G) of the model to generate segmentation results of remote sensing images. Then, a convolutional neural network with 16 layers (VGG-net) was designed as the Discriminator (D) to discriminate between the generated results and the Ground Truth (GT) results. Finally, the Generator and Discriminator were trained in an adversarial way to obtain a Generator with segmentation capability. The experimental results show that the detection accuracy of CDGAN reaches 92%, and the IU (Intersection over Union) value of the model is 3.7 percentage points higher than that of the traditional U-net model, which proves that the proposed model effectively improves the detection accuracy of new ground buildings in remote sensing images.
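    The Focal loss used to improve the U-net generator has the standard form FL(pt) = -α(1-pt)^γ log(pt), which down-weights easy pixels so training concentrates on hard ones (such as small new buildings). A minimal NumPy version, with the usual default γ=2, α=0.25:

    ```python
    import numpy as np

    def focal_loss(p, y, gamma=2.0, alpha=0.25):
        """Binary focal loss: -alpha * (1 - pt)^gamma * log(pt),
        where pt is the predicted probability of the true class."""
        p = np.clip(np.asarray(p, dtype=float), 1e-7, 1 - 1e-7)
        y = np.asarray(y)
        pt = np.where(y == 1, p, 1 - p)          # prob. of true class
        a = np.where(y == 1, alpha, 1 - alpha)   # class weighting
        return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))
    ```

    With γ=0 and α=0.5 this reduces to (half) the ordinary cross-entropy; raising γ shrinks the loss contribution of well-classified pixels.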
    Computation offloading method for workflow management in mobile edge computing
    FU Shucun, FU Zhangjie, XING Guowen, LIU Qingxiang, XU Xiaolong
    2019, 39(5):  1523-1527.  DOI: 10.11772/j.issn.1001-9081.2018081753
    Abstract ( )   PDF (853KB) ( )  
    References | Related Articles | Metrics
    The problem of high energy consumption of mobile devices in mobile edge computing is becoming increasingly prominent. In order to reduce the energy consumption of mobile devices, an Energy-aware computation Offloading method for Workflows (EOW) was proposed. Technically, the average waiting time of computing tasks on edge devices was analyzed based on queuing theory, and time consumption and energy consumption models for mobile devices were established. Then a corresponding computation offloading method leveraging NSGA-Ⅲ (Non-dominated Sorting Genetic Algorithm Ⅲ) was designed to offload the computing tasks reasonably: some tasks were processed by the mobile devices themselves, while others were offloaded to the edge computing platform or the remote cloud, achieving the goal of energy saving for all mobile devices. Finally, comparison experiments were conducted on the CloudSim platform. The experimental results show that EOW can effectively reduce the energy consumption of all mobile devices and satisfy the deadlines of all workflows.
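    The queueing and energy building blocks behind such a model can be sketched as follows. These are textbook expressions (M/M/1 sojourn time, transmission energy as power times transfer time), not the paper's full workflow model, and the function signatures are assumptions.

    ```python
    def mm1_sojourn(arrival_rate, service_rate):
        """Average sojourn time (wait + service) of an M/M/1 queue,
        W = 1 / (mu - lambda); requires lambda < mu for stability.
        This is the kind of term used to model waiting at an edge
        device shared by many offloaded tasks."""
        if arrival_rate >= service_rate:
            raise ValueError("queue is unstable: lambda >= mu")
        return 1.0 / (service_rate - arrival_rate)

    def offload_energy(data_bits, tx_power_w, rate_bps):
        """Transmission energy of offloading a task's input data:
        E = P * (bits / rate)."""
        return tx_power_w * data_bits / rate_bps
    ```

    An NSGA-III search would then treat each task's placement (local / edge / cloud) as a decision variable and trade off total energy against workflow deadlines built from such terms.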
    Passive falling detection method based on wireless channel state information
    HUANG Mengmeng, LIU Jun, ZHANG Yifan, GU Yu, REN Fuji
    2019, 39(5):  1528-1533.  DOI: 10.11772/j.issn.1001-9081.2018091938
    Abstract ( )   PDF (931KB) ( )  
    References | Related Articles | Metrics
    Traditional vision-based or sensor-based falling detection systems have inherent shortcomings such as hardware dependence and coverage limitation; hence Fallsense, a passive falling detection method based on wireless Channel State Information (CSI), was proposed, built on low-cost, pervasive, commercial WiFi devices. Firstly, the wireless CSI data was collected and preprocessed. Then a motion-signal analysis model was built, in which a lightweight dynamic template matching algorithm was designed to detect fragments corresponding to real falling events from the time-series channel data in real time. Experiments in a large number of actual environments show that Fallsense achieves high accuracy and a low false positive rate: an accuracy of 95% and a false positive rate of 2.44%. Compared with the classic WiFall system, Fallsense reduces the time complexity from O(mN log N) to O(N) (where N is the number of samples and m is the number of features), increases the accuracy by 2.69% and decreases the false positive rate by 4.66%. The experimental results confirm that this passive falling detection method is fast and efficient.
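    Template matching over a CSI amplitude stream can be illustrated with a naive sliding-window sketch. Note this baseline is O(N·m); the paper's dynamic template matching achieves O(N), and the threshold and input format here are assumptions.

    ```python
    import numpy as np

    def match_template(stream, template, threshold=1.0):
        """Slide a falling-event template over a 1-D amplitude stream
        and return start indices where the mean squared distance to the
        template drops below `threshold` (naive O(N*m) scan)."""
        t = np.asarray(template, dtype=float)
        s = np.asarray(stream, dtype=float)
        hits = []
        for i in range(len(s) - len(t) + 1):
            if np.mean((s[i:i + len(t)] - t) ** 2) < threshold:
                hits.append(i)
        return hits
    ```

    A real detector would match against templates learned from labeled falls and update them dynamically as new confirmed events arrive.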
    Stability analysis of a drug abuse epidemic model
    LIU Feng
    2019, 39(5):  1534-1539.  DOI: 10.11772/j.issn.1001-9081.2018102215
    Abstract ( )   PDF (810KB) ( )  
    References | Related Articles | Metrics
    Recovered drug users may become susceptible to drugs again, but this possibility is neglected in existing drug abuse epidemic models, in which drug users are assumed to be permanently immune to drugs after recovery. Aiming at this problem, the evolution of the drug-abusing population was analyzed with both community treatment and isolation therapy considered, and a drug abuse epidemic model based on temporary immunity was proposed. Furthermore, the basic reproduction number of the proposed model was calculated, and the existence and stability of the model's equilibria were discussed. It is shown that the model has a locally asymptotically stable drug-free equilibrium when the basic reproduction number is less than unity, and a unique endemic equilibrium when it is greater than unity; the global stability of the endemic equilibrium was proved by using a geometric approach. Moreover, the model exhibits backward bifurcation under certain conditions when the basic reproduction number equals unity. These results were verified by numerical simulations. The results indicate that the prevalence of drug abuse can be effectively inhibited by increasing the rate of isolation therapy, improving the effect of community treatment and reducing the infection rate.
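    The temporary-immunity mechanism can be illustrated with a minimal SIRS-type simulation (not the paper's full model, which also distinguishes community treatment from isolation therapy; all parameter values here are illustrative). With β the initiation rate, γ the recovery rate and δ the rate of losing immunity, the basic reproduction number of this reduced model is R₀ = β/γ, and for R₀ > 1 the system settles at an endemic equilibrium with susceptible fraction s* = γ/β.

    ```python
    def simulate(beta=0.4, gamma=0.1, delta=0.05, days=2000, dt=0.1):
        """Euler integration of a minimal SIRS-type drug-abuse model:
        susceptible -> drug user -> recovered -> susceptible (immunity
        wanes at rate delta). Returns final (S, D, R) fractions."""
        s, d, r = 0.99, 0.01, 0.0
        for _ in range(int(days / dt)):
            new_d = beta * s * d      # new drug users
            new_r = gamma * d         # recoveries
            relapse = delta * r       # loss of immunity
            s += dt * (relapse - new_d)
            d += dt * (new_d - new_r)
            r += dt * (new_r - relapse)
        return s, d, r
    ```

    With the defaults, R₀ = 4 > 1, so the simulation converges to the endemic equilibrium s* = 0.25 rather than dying out, in line with the stability results stated in the abstract.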
    Optimal design of energy storage spring in circuit breaker based on improved particle swarm optimization algorithm
    SHI Lili, XIA Kewen, DAI Shuidong, JU Wenzhe
    2019, 39(5):  1540-1546.  DOI: 10.11772/j.issn.1001-9081.2018051080
    Abstract ( )   PDF (1098KB) ( )  
    References | Related Articles | Metrics
    In the traditional design of the energy storage spring of a circuit breaker, the method of empirical trial calculation is mainly adopted, which easily leads to unreasonable spring structure parameters, a large circuit breaker volume and poor breaking performance. Therefore, an improved cloud particle swarm optimization algorithm combined with the catfish effect was applied to optimize the parameters of the energy storage spring. Firstly, according to the working principle of energy storage springs, the mathematical optimization design model of the springs and the constraints on the spring parameters were derived. Then, the algorithm was improved on the basis of the traditional particle swarm optimization algorithm: a catfish-effect strategy was introduced to produce diverse candidate solutions and prevent the algorithm from falling into local optima, and the speed weighting factor was adjusted with the cloud model to speed up convergence and improve the global search ability. Finally, the improved algorithm was used to simulate the optimization model of the energy storage spring and calculate the corresponding spring parameters. The results show that the improved particle swarm optimization algorithm can achieve miniaturization and better breaking performance of circuit breakers.
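    For reference, the baseline the paper improves on can be sketched as a minimal global-best particle swarm optimizer with a linearly decreasing inertia weight. The catfish-effect restart and cloud-model weight adjustment described in the abstract are deliberately omitted; all constants here are conventional PSO defaults, not the paper's settings.

    ```python
    import random

    def pso(f, dim, bounds, n=20, iters=200, seed=1):
        """Minimal global-best PSO with inertia weight decreasing
        linearly from 0.9 to 0.4; positions clamped to bounds."""
        rnd = random.Random(seed)
        lo, hi = bounds
        xs = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        vs = [[0.0] * dim for _ in range(n)]
        pbest = [x[:] for x in xs]
        pval = [f(x) for x in xs]
        g = pval.index(min(pval))
        gbest, gval = pbest[g][:], pval[g]
        for t in range(iters):
            w = 0.9 - 0.5 * t / iters
            for i in range(n):
                for j in range(dim):
                    vs[i][j] = (w * vs[i][j]
                                + 2.0 * rnd.random() * (pbest[i][j] - xs[i][j])
                                + 2.0 * rnd.random() * (gbest[j] - xs[i][j]))
                    xs[i][j] = min(hi, max(lo, xs[i][j] + vs[i][j]))
                v = f(xs[i])
                if v < pval[i]:
                    pbest[i], pval[i] = xs[i][:], v
                    if v < gval:
                        gbest, gval = xs[i][:], v
        return gbest, gval
    ```

    In the spring application, `f` would be the spring volume (or a weighted objective) with penalty terms for the design constraints, and each dimension a spring parameter such as wire diameter or coil count.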
    Icing prediction of wind turbine blade based on stacked auto-encoder network
    LIU Juan, HUANG Xixia, LIU Xiaoli
    2019, 39(5):  1547-1550.  DOI: 10.11772/j.issn.1001-9081.2018102230
    Abstract ( )   PDF (630KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problem that wind turbine blade icing seriously affects the generating efficiency, safety and economy of wind turbines, a prediction model based on a Stacked AutoEncoder (SAE) network was proposed using SCADA (Supervisory Control And Data Acquisition) data. The unsupervised encoding-decoding method was utilized to pre-train on the unlabeled dataset, and then the back propagation algorithm was utilized to train and fine-tune on the labeled dataset, achieving adaptive fault feature extraction and fault state classification. The complexity of traditional prediction models was effectively reduced, and the influence of handcrafted feature extraction on model performance was avoided. The historical data of wind turbine No.15 collected by the SCADA system was used for training and testing, yielding a test accuracy of 97.28%. Compared with models based on Support Vector Machine (SVM) and Principal Component Analysis-Support Vector Machine (PCA-SVM), whose accuracies are 91% and 93% respectively, the proposed model is more accurate.
    Blood pressure prediction with multi-factor cue long short-term memory model
    LIU Jing, WU Yingfei, YUAN Zhenming, SUN Xiaoyan
    2019, 39(5):  1551-1556.  DOI: 10.11772/j.issn.1001-9081.2018110008
    Abstract ( )   PDF (866KB) ( )  
    References | Related Articles | Metrics
    Hypertension is an important hazard to health, and blood pressure prediction is of great importance for avoiding the grave consequences of a sudden increase in blood pressure. Based on the traditional Long Short-Term Memory (LSTM) network, a multi-factor cue LSTM model for both short-term prediction (predicting blood pressure for the next day) and long-term prediction (predicting blood pressure for the next several days) was proposed to provide early warning of undesirable changes in blood pressure. The multi-factor cues used in the model include time-series data cues (e.g. heart rate) and contextual information cues (e.g. age, BMI (Body Mass Index), gender, temperature). The change characteristics of the time-series data and the features of other associated attributes were extracted for blood pressure prediction. Environmental factors were considered in blood pressure prediction for the first time, and a multi-task learning method was used to help the model capture the relations among the data and improve its generalization ability. The experimental results show that compared with the traditional LSTM model and the LSTM with Contextual Layer (LSTM-CL) model, the proposed model decreases the prediction error and prediction bias by 2.5%, 3.8% and 1.9%, 3.2% respectively for diastolic blood pressure, and by 0.2%, 0.1% and 0.6%, 0.3% respectively for systolic blood pressure.
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
Website: www.joca.cn
E-mail: bjb@joca.cn