
Table of Contents

    10 April 2018, Volume 38 Issue 4
    Recommendation model of taxi passenger-finding locations based on weighted non-homogeneous Poisson model
    SHANG Jiandong, LI Panle, LIU Runjie, LI Runchuan
    2018, 38(4):  923-927.  DOI: 10.11772/j.issn.1001-9081.2017092339
    To solve the problem of the high empty-loading ratio of taxis and the difficulty of finding passengers, a new model called the Poisson-Kalman Combined Prediction Model (PKCPM) was proposed. Firstly, a weighted Non-Homogeneous Poisson Model (NHPM) was used to obtain an estimated value for the target time based on historical taxi data. Secondly, the mean passenger demand over the recent period, computed from real-time data, was taken as the predicted value. Finally, the predicted value and the estimated value were used as the inputs of a Kalman filtering model to predict the target variance; meanwhile, an error back-propagation mechanism was introduced to reduce the next prediction error. The experimental results on a taxi trajectory dataset from Zhengzhou show that PKCPM achieves a better optimization effect than NHPM, Weighted NHPM (WNHPM) and Support Vector Machine (SVM): its error is reduced by about 8.85 percentage points and 14.9 percentage points compared with WNHPM and SVM respectively. PKCPM can predict passenger demand within different time and spatial grids, and provides taxi drivers with a reliable solution for finding passengers.
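    The Kalman-filter fusion step at the heart of PKCPM can be illustrated with a scalar update that combines the two demand estimates. This is a minimal sketch of the general idea only; the function name, the variance bookkeeping and the numbers are illustrative assumptions, not the paper's exact formulation.

```python
def kalman_fuse(estimate, est_var, measurement, meas_var):
    """One scalar Kalman update: fuse the Poisson-model estimate
    (treated as the prior) with the recent-mean measurement,
    weighting by their variances."""
    gain = est_var / (est_var + meas_var)        # Kalman gain
    fused = estimate + gain * (measurement - estimate)
    fused_var = (1 - gain) * est_var             # fused uncertainty shrinks
    return fused, fused_var

# hypothetical demand values: model says 12 trips, recent mean says 16
demand, var = kalman_fuse(12.0, 4.0, 16.0, 4.0)
```

With equal variances the fused demand lands halfway between the two inputs, and the fused variance is smaller than either input variance, which is what makes iterating the filter worthwhile.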
    Objective equilibrium measurement based kernelized incremental learning method for fall detection
    HU Lisha, WANG Suzhen, CHEN Yiqiang, HU Chunyu, JIANG Xinlong, CHEN Zhenyu, GAO Xingyu
    2018, 38(4):  928-934.  DOI: 10.11772/j.issn.1001-9081.2017092315
    In view of the problem that conventional incremental learning models may suffer performance degradation during the update stage, a kernelized incremental learning method based on objective equilibrium measurement was proposed. By setting the optimization term of "empirical risk minimization", an optimization objective function fulfilling equilibrium measurement with respect to training data size was designed. The optimal solution was given under the condition of incremental learning training, and a lightweight incremental learning classification model was finally constructed based on an effective selection strategy for new data. Experimental results on a publicly available fall detection dataset show that, when the recognition accuracy of representative methods falls below 60%, the proposed method still maintains a recognition accuracy above 95%, while the computational cost of a model update is only 3 milliseconds. In conclusion, the proposed method achieves stable growth of recognition performance while efficiently reducing time consumption, and can effectively support wearable-device-based intelligent applications on cloud service platforms.
    Parking lot space detection method based on mini convolutional neural network
    AN Xuxiao, DENG Hongmin, SHI Xingyu
    2018, 38(4):  935-938.  DOI: 10.11772/j.issn.1001-9081.2017092362
    For the increasingly severe parking problem, a parking lot space detection method based on a modified convolutional neural network was proposed. Firstly, based on the characteristic that a parking space only needs to be denoted by two states, a Mini Convolutional Neural Network (MCNN) was proposed by improving the traditional CNN. Secondly, the number of network parameters was decreased to reduce training and recognition time, a local response normalization layer was added to the network to enhance brightness correction, and small convolution kernels were utilized to capture more image details. Finally, the video frame was manually masked and cut into separate parking spaces by edge detection, and the trained MCNN was used for parking space recognition. Experimental results show that the proposed method improves the recognition rate by 3-8 percentage points compared with traditional machine learning methods, while the network parameters of MCNN amount to only 1/1000 of those of commonly used convolutional models. In the several environments discussed in this paper, the recognition rate remains above 92%, indicating that MCNN can be deployed on a low-configuration camera to achieve automatic parking space detection.
    User location prediction model based on author topic model and radiation model
    LI Yan, LIU Jiayong
    2018, 38(4):  939-944.  DOI: 10.11772/j.issn.1001-9081.2017102539
    Due to the sparseness of users' historical location data collected by Global Positioning System (GPS) devices, the capability of location prediction models based on single-user data is limited. Therefore, a new user location prediction model based on the Author Topic Model (ATM) and the Radiation Model (RM) was proposed. In the time dimension, the user group similar to the target user was discovered by using ATM, and the target state of the user group at the prediction time was determined. In the spatial dimension, the RM algorithm was used to calculate the probability of each of the target user's candidate locations in the target state, and the target user's predicted location was obtained by comparing the probability values of the candidate locations. The experimental results show that the average prediction accuracy of the model is 61.49%, which is nearly 28 percentage points higher than that of the variable-order Markov model. The proposed model achieves higher prediction accuracy when only a small amount of single-user data is available.
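    The spatial step can be illustrated with the standard radiation model formula, in which the probability of a candidate destination depends on the source population m, the candidate population n and the intervening population s. The paper's exact weighting may differ, so the populations and location names below are illustrative assumptions:

```python
def radiation_prob(m, n, s):
    """Standard radiation-model probability that a trip from a source
    of population m ends at a candidate of population n, with
    intervening population s between them."""
    return (m * n) / ((m + s) * (m + n + s))

# two hypothetical candidate locations with equal population but
# different intervening populations
candidates = {"A": radiation_prob(100, 50, 0),
              "B": radiation_prob(100, 50, 200)}
best = max(candidates, key=candidates.get)   # most probable candidate
```

The comparison shows the model's key property: with populations equal, the candidate with less intervening population absorbs the trip with higher probability.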
    Local focus support vector machine algorithm
    ZHOU Yuhao, ZHANG Hongling, LI Fangfei, QI Peng
    2018, 38(4):  945-948.  DOI: 10.11772/j.issn.1001-9081.2017092228
    Aiming at the imbalance of training data sets, an integrated support vector machine classification algorithm was proposed by combining a sampling method with an ensemble method. Firstly, unsupervised clustering was performed on the imbalanced training set; then the underlying local focus support vector machines were used to partition the data set so as to precisely capture the local features of the data. Finally, a top-level support vector machine was used for classification prediction. Evaluation results on UCI datasets show that, compared with popular algorithms such as the sampling-based Kernelized Synthetic Minority Over-sampling TEchnique (K-SMOTE), the ensemble-based Gradient Tree Boosting (GTB) and the cost-sensitive ensemble algorithm AdaCost, the proposed algorithm significantly improves the classification effect and alleviates the problem of imbalanced data sets to a certain extent.
    Application of improved convolution neural network in remote sensing image classification
    LIU Yutong, LI Zhiqing, YANG Xiaoling
    2018, 38(4):  949-954.  DOI: 10.11772/j.issn.1001-9081.2017092158
    The sparse network structure of the traditional Convolutional Neural Network (CNN) cannot preserve the high efficiency of dense computation, and the activation function is usually selected empirically during experiments, which leads to inaccurate results or high computational complexity. To solve these problems, an improved CNN method was proposed and applied to remote sensing image classification. Firstly, the multi-scale features of an image were extracted by using convolution kernels of different scales in the Inception module; then the activation function of the hidden layer nodes was learned by using the Maxout model; finally, the image was classified by the Softmax method. Experiments were conducted on the US Land Use Classification Data Set (UCM_LandUse_21), and the experimental results show that, with the same number of convolution layers, the accuracy of the proposed method is about 3.66% and 2.11% higher than that of the traditional CNN method and a Multi-Scale Deep CNN (MS_DCNN) respectively, and more than 10% higher than that of visual dictionary methods based on low-level and middle-level features. The proposed method has high classification efficiency and is suitable for image classification.
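    The Maxout idea mentioned above — letting each hidden unit output the maximum over several learned linear pieces instead of fixing the activation function in advance — can be sketched as follows. The weights here are hand-picked for illustration, not learned:

```python
def maxout(x, weight_sets, biases):
    """Maxout activation: the unit outputs the max over several
    linear pieces (w . x + b), so the network learns the shape of
    its own activation function."""
    return max(sum(w * xi for w, xi in zip(ws, x)) + b
               for ws, b in zip(weight_sets, biases))

# with pieces (x) and (-x), a two-piece maxout unit realizes |x|
pos = maxout([1.0], [[1.0], [-1.0]], [0.0, 0.0])
neg = maxout([-2.0], [[1.0], [-1.0]], [0.0, 0.0])
```

With enough pieces a maxout unit can approximate arbitrary convex activations, which is why it replaces the empirically chosen fixed activation.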
    k-nearest neighbor classification method for class-imbalanced problem
    GUO Huaping, ZHOU Jun, WU Chang'an, FAN Ming
    2018, 38(4):  955-959.  DOI: 10.11772/j.issn.1001-9081.2017092181
    To improve the performance of the k-Nearest Neighbor (kNN) model on class-imbalanced data, a new kNN classification algorithm was proposed. Different from traditional kNN, in the learning process the majority set was partitioned into several clusters by a partitioning method (such as K-Means), then each cluster was merged with the minority set to form a new training set used to train a kNN model, so that a classifier library consisting of several kNN models was constructed. For prediction, a model was selected from the classifier library by the partitioning method (such as K-Means) to predict the class of a sample. In this way, the kNN model can efficiently discover local characteristics of the data and fully account for the effect of class imbalance on classifier performance, while the efficiency of kNN is also effectively promoted. To further enhance the performance of the proposed algorithm, the Synthetic Minority Over-sampling TEchnique (SMOTE) was applied to it. Experimental results on KEEL data sets show that, with the majority set partitioned by the random partition strategy, the proposed algorithm effectively enhances the generalization performance of kNN on the evaluation measures of recall, g-mean, f-measure and Area Under the ROC Curve (AUC), and it also shows great superiority over other state-of-the-art methods.
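    The cluster-then-ensemble construction above can be sketched in a few lines. This is a simplified assumption-laden version: cluster assignment is by nearest given center rather than a full K-Means run, labels 0/1 stand for majority/minority, and all data are hypothetical.

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """Plain kNN majority vote over (point, label) pairs."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def build_library(majority, minority, centers):
    """One kNN training set per majority cluster, each merged with the
    full minority set; the paper derives the clusters with K-Means."""
    library = []
    for c in centers:
        cluster = [p for p in majority
                   if min(centers, key=lambda cc: math.dist(p, cc)) == c]
        library.append((c, [(p, 0) for p in cluster]
                           + [(p, 1) for p in minority]))
    return library

def predict(library, x, k=3):
    # route the query to the model whose cluster center is nearest
    _, train = min(library, key=lambda m: math.dist(m[0], x))
    return knn_predict(train, x, k)

majority = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
minority = [(5, 5)]
library = build_library(majority, minority, centers=[(0, 0), (10, 10)])
```

Each small training set is far more balanced than the full data, which is the point of the construction.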
    Plant recognition algorithm based on AdaBoost.M2 and neural fuzzy system
    LEI Jianchun, HE Jinguo
    2018, 38(4):  960-964.  DOI: 10.11772/j.issn.1001-9081.2017092342
    An AdaBoost.M2-NFS model was presented to improve the recognition rate of the traditional Neural Fuzzy System (NFS) on similar plants. The traditional NFS was first improved for fusion, and the new NFS was then combined with AdaBoost.M2 to obtain the AdaBoost.M2-NFS model. Experimental results show that the new model increases the recognition rate by 3.33 percentage points compared with a single NFS, by 1.11 percentage points compared with a linear Support Vector Machine (SVM), and by 3.33 percentage points compared with Softmax. Sensitivity and specificity analysis shows that the proposed algorithm classifies non-linear data better than linear data. At the same time, owing to the improvement of AdaBoost.M2, the new algorithm has the advantages of fast modeling and high generalization ability in the field of plant recognition.
    Local outlier factor fault detection method based on statistical pattern and local nearest neighborhood standardization
    FENG Liwei, ZHANG Cheng, LI Yuan, XIE Yanhong
    2018, 38(4):  965-970.  DOI: 10.11772/j.issn.1001-9081.2017092310
    A Local Outlier Factor fault detection method based on Statistics Pattern and Local Nearest neighborhood Standardization (SP-LNS-LOF) was proposed to deal with the problems of unequal batch lengths, mean drift and different batch structures in multimode process data. Firstly, the statistics pattern of each training sample was calculated; secondly, each statistics pattern was standardized into a standard sample by using its set of local neighbor samples; finally, the local outlier factor of the standard sample was calculated and used as a detection index. A quantile of the local outlier factor was used as the detection control limit: when the local outlier factor of an online sample was greater than the control limit, the sample was identified as a fault sample, otherwise as a normal sample. The statistics pattern extracts the main information of the process and eliminates the impact of unequal batch lengths; the local nearest neighborhood standardization overcomes the difficulties of mean drift and different batch structures; the local outlier factor measures the similarity of samples and separates fault samples from normal samples. A simulation experiment on a semiconductor etching process was carried out. The experimental results show that SP-LNS-LOF detects all 21 faults, and has a higher detection rate than Principal Component Analysis (PCA), kernel PCA (kPCA), Fault Detection using the k Nearest Neighbor rule (FD-kNN) and Local Outlier Factor (LOF) methods. The theoretical analysis and simulation results show that SP-LNS-LOF is suitable for fault detection of multimode processes, has high fault detection efficiency and helps ensure the safety of the production process.
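    The detection index itself is the textbook Local Outlier Factor, which can be sketched as below. This computes plain LOF on raw points; the paper applies it only after statistics-pattern extraction and local nearest neighborhood standardization, which are omitted here.

```python
import math

def lof_scores(points, k=2):
    """Textbook LOF: k-distances, reachability distances, local
    reachability density (lrd), then LOF = avg neighbor lrd / own lrd.
    Scores near 1 mean inliers; large scores mean outliers."""
    n = len(points)
    neigh = []
    for i in range(n):
        d = sorted((math.dist(points[i], points[j]), j)
                   for j in range(n) if j != i)
        neigh.append(d[:k])                     # k nearest (dist, index)
    kdist = [neigh[i][-1][0] for i in range(n)]  # distance to k-th neighbor

    def lrd(i):
        # reachability distance to each neighbor, averaged and inverted
        reach = [max(kdist[j], d) for d, j in neigh[i]]
        return k / sum(reach)

    return [sum(lrd(j) for _, j in neigh[i]) / (k * lrd(i))
            for i in range(n)]

# four clustered points and one far-away point (hypothetical data)
scores = lof_scores([(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)], k=2)
```

The isolated point gets a score far above 1, so thresholding the score at a control limit separates faults from normal samples.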
    Russian phonetic transcription system based on TensorFlow
    FENG Wei, YI Mianzhu, MA Yanzhou
    2018, 38(4):  971-977.  DOI: 10.11772/j.issn.1001-9081.2017092149
    Focusing on the limited pronunciation dictionaries in Russian speech synthesis and speech recognition systems, a Russian grapheme-to-phoneme algorithm based on a Long Short-Term Memory (LSTM) sequence-to-sequence model was proposed, together with a phonetic transcription system. Firstly, a new Russian phoneme set based on the Speech Assessment Methods Phonetic Alphabet (SAMPA) was designed, so that transcription results can reflect the stress position and vowel reduction of Russian words, and a 20 000-word Russian pronunciation dictionary was constructed according to the new phoneme set. Then, the proposed algorithm was implemented using the TensorFlow framework, in which a Russian word was converted into a fixed-length vector by the encoder LSTM, and the vector was then converted into the target pronunciation sequence by the decoder LSTM. Finally, the Russian phonetic transcription system was designed and implemented. The experimental results on an out-of-vocabulary test set show that the word correct rate reaches 74.8% and the phoneme correct rate reaches 94.5%, both higher than those of the Phonetisaurus method. The system can effectively support the construction of Russian pronunciation dictionaries.
    Research progress in similarity join query of big data
    MA Youzhong, ZHANG Zhihui, LIN Chunjie
    2018, 38(4):  978-986.  DOI: 10.11772/j.issn.1001-9081.2017092202
    In order to deeply understand and fully grasp the research progress of similarity join query technology for big data and to promote its wide application in image clustering, entity resolution, similar document detection and similar trajectory retrieval, a comprehensive survey of this technology was conducted. Firstly, the basic concepts of similarity join query were introduced; then the research work on big data similarity join for different data types, such as sets, vectors, spatial data, probabilistic data, strings and graphs, was studied in depth, and the advantages and disadvantages of each approach were analyzed and summarized. Finally, some challenging research problems and future research priorities in big data similarity join query were pointed out.
    Effect of Web advertisement based on multi-modal features under the influence of multiple factors
    HU Xiaohong, WANG Hong, REN Yanju, ZHOU Ying
    2018, 38(4):  987-994.  DOI: 10.11772/j.issn.1001-9081.2017102425
    Although relevant research on Web advertisement effects has achieved good results, there is still a lack of thorough research on the interaction between an advertisement and the blue links in a Web page, as well as a lack of thorough analysis of the impact of user characteristics and advertising features, and advertising metrics are also inappropriate. Therefore, a method based on multi-modal feature fusion was proposed to study the effectiveness of Internet advertising and user behavior patterns under the influence of multiple factors. Through the quantitative analysis of multi-modal features, the attractiveness of advertising was verified, and the attention effects under different conditions were summarized. By mining frequent patterns in user behavior information and combining them with the characteristics of the data, the Directional Frequent Browsing Patterns (DFBP) algorithm was proposed to directionally mine users' most common fixed-length browsing patterns. Memory was used as a new index to measure the quality of advertising, the random forest algorithm was improved with frequent patterns, and a new advertising memory model was built by fusing multi-modal features. Experimental results show that the memory model has an accuracy of 91.64% and good robustness.
    Performance analysis of frequent itemset mining algorithms based on sparseness of dataset
    XIAO Wen, HU Juan
    2018, 38(4):  995-1000.  DOI: 10.11772/j.issn.1001-9081.2017092389
    Frequent Itemset Mining (FIM) is one of the most important data mining tasks, and the characteristics of the mined datasets have a significant effect on the performance of FIM algorithms. Sparseness is one of the attributes that characterize the essential properties of a dataset, and different types of FIM algorithms differ greatly in their scalability with respect to it. Aiming at measuring dataset sparseness and its influence on the performance of different types of FIM algorithms, the existing measurement methods were reviewed and discussed, and two methods were proposed to quantify the sparseness of datasets: a measurement based on transaction differences and a measurement based on the FP-Tree method. Both consider the influence of the minimum support threshold on dataset sparseness in the context of the FIM task, and reflect the differences between the frequent itemsets of transactions. The scalability of different types of FIM algorithms with respect to dataset sparseness was studied experimentally. The experimental results show that the sparseness of a dataset is inversely proportional to the minimum support, and that among the three typical classes of FIM algorithms, those based on the vertical format have the best scalability.
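    For intuition, a crude density-based sparseness measure can be computed in a few lines: the average transaction length divided by the number of distinct items, subtracted from 1. This is only an illustrative baseline; the paper's transaction-difference and FP-Tree measures are more refined and additionally account for the minimum support threshold.

```python
def sparseness(transactions):
    """Illustrative sparseness: 1 - (average transaction length /
    number of distinct items). Near 0 for dense data, near 1 for
    sparse data."""
    items = {i for t in transactions for i in t}
    avg_len = sum(len(t) for t in transactions) / len(transactions)
    return 1 - avg_len / len(items)

# hypothetical transaction databases
dense = sparseness([{1, 2, 3}, {1, 2, 3}, {2, 3}])   # items recur often
sparse = sparseness([{1}, {2}, {3}, {4}])             # items barely overlap
```

On these toy databases the measure behaves as expected: the database whose transactions share most items scores much lower than the one with disjoint singletons.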
    Collaborative filtering recommendation algorithm based on improved clustering and matrix factorization
    WANG Yonggui, SONG Zhenzhen, XIAO Chenglong
    2018, 38(4):  1001-1006.  DOI: 10.11772/j.issn.1001-9081.2017092314
    Concerning the data sparseness, low accuracy and poor real-time performance of traditional collaborative filtering recommendation algorithms in e-commerce systems in the context of big data, a new collaborative filtering recommendation algorithm based on improved clustering and matrix factorization was proposed. Firstly, dimensionality reduction and data filling of the original data were realized by matrix factorization. Then a time decay function was introduced to process user ratings, the attribute vector of an item was used to characterize the item and the interest vector of a user was used to characterize the user, and the items and users were clustered by the k-means clustering algorithm. By using the improved similarity measure, the nearest neighbors and the item recommendation candidate set within a cluster were searched, and the recommendation was made. Experimental results show that the proposed algorithm not only alleviates the problems of sparse data and the cold start caused by new items, but also reflects the change of user interests in multiple dimensions, and the accuracy of the recommendation algorithm is obviously improved.
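    A typical time decay function down-weights old ratings exponentially so that recent interests count more. The exact form and parameters used in the paper are not given here, so the half-life formulation below is an illustrative assumption:

```python
import math

def decayed_rating(rating, t_rated, t_now, half_life=90.0):
    """Exponential time decay of a rating: the weight halves every
    `half_life` time units (days, say) since the rating was given."""
    weight = math.exp(-math.log(2) * (t_now - t_rated) / half_life)
    return rating * weight

# hypothetical timestamps: one fresh rating, one 90 days old
recent = decayed_rating(4.0, t_rated=100, t_now=100)
older = decayed_rating(4.0, t_rated=10, t_now=100)
```

After exactly one half-life the same 4-star rating contributes only half its original weight, which is what lets the model track shifting user interests.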
    Collaborative filtering recommendation algorithm combined with item tag similarity
    LIAO Tianxing, WANG Ling
    2018, 38(4):  1007-1011.  DOI: 10.11772/j.issn.1001-9081.2017092238
    Aiming at the shortcomings of similarity calculation and rating prediction in traditional recommendation systems, and in order to further improve the accuracy and stability of the algorithm, a new recommendation algorithm was proposed. Firstly, according to the number of important tags of an item, the M2 similarity between the item and other items was calculated and used to constitute the item's nearest item set. Then, according to the Slope One weighting theory, a new rating prediction method was designed to predict users' ratings based on the nearest item set. To validate the accuracy and stability of the proposed algorithm, comparison experiments with traditional recommendation algorithms, including the K-Nearest Neighbor (KNN) algorithm based on Manhattan distance, were conducted on the MovieLens dataset. The experimental results show that, compared with the KNN algorithm, the mean absolute error and the root mean square error of the new algorithm are decreased by 7.6% and 7.1% respectively. Besides, the proposed algorithm performs better in stability and can provide more accurate and personalized recommendations.
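    The Slope One weighting that the prediction step builds on can be sketched as follows. This is the classic weighted Slope One over all of a user's rated items; the paper restricts the sum to the tag-based nearest item set, and the ratings below are hypothetical:

```python
def slope_one_predict(ratings, user, target):
    """Weighted Slope One: for each item j the user rated, shift that
    rating by the average deviation (target - j) observed across users
    who rated both, weighting by the number of co-raters."""
    num = den = 0.0
    for j, r_uj in ratings[user].items():
        if j == target:
            continue
        diffs = [ratings[v][target] - ratings[v][j]
                 for v in ratings
                 if target in ratings[v] and j in ratings[v]]
        if diffs:
            dev = sum(diffs) / len(diffs)      # average deviation
            num += (r_uj + dev) * len(diffs)   # weight by co-rating count
            den += len(diffs)
    return num / den if den else None

# hypothetical user-item ratings; predict user "c"'s rating of item "i"
ratings = {"a": {"i": 1.0, "j": 1.5},
           "b": {"i": 2.0, "j": 2.5},
           "c": {"j": 3.0}}
pred = slope_one_predict(ratings, "c", "i")
```

Both co-raters rate "i" half a star below "j", so the prediction for user "c" is 3.0 - 0.5 = 2.5.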
    The KYLIN-2 assembly databank based on HDF5 file format
    FENG Jintao, LU Wei, CHAI Xiaoming, TU Xiaolan, YIN Qiang, CHEN Dingyong, LIU Yuan
    2018, 38(4):  1012-1016.  DOI: 10.11772/j.issn.1001-9081.2017102543
    Nuclear Power Institute of China (NPIC) developed an advanced neutron transport lattice code named KYLIN-2. To solve the problem of mass data storage and processing in this software, a computing data storage solution based on Hierarchical Data Format v5 (HDF5) was proposed. Firstly, the HDF5 file format was studied. Secondly, according to the requirements of KYLIN-2, the KYlin-2 Main RESults databank (KYMRES) based on HDF5 was designed. Finally, KYMRES was realized with a self-developed HDF5 read/write tool. Performance tests show that KYMRES has higher I/O efficiency than conventional storage solutions: the efficiency of reading and writing is increased to 2.3 and 4.5 times that of the old solution on average. KYMRES has significant advantages in mass data storage and processing, and provides a new data storage and management solution for KYLIN-2.
    Information hiding algorithm for 3D models based on feature point labeling and clustering
    REN Shuai, ZHANG Tao, XU Zhenchao, WANG Zhen, HE Yuan, LIU Yunong
    2018, 38(4):  1017-1022.  DOI: 10.11772/j.issn.1001-9081.2017092348
    Aiming at the problem that some 3D model-based information hiding algorithms cannot resist combined attacks, a new strategy based on feature point labeling and clustering was proposed. Firstly, edge folding was adopted to achieve mesh simplification, and all the vertices were labeled in order of their energy level. Secondly, the ordered vertices were clustered and re-ordered by using local height theory and Mean Shift clustering analysis. Lastly, the hidden information and the cover model carrier information were optimized, matched and modified by Logistic chaos mapping scrambling and a genetic algorithm, completing the final hiding. The data in the hiding areas were labeled and screened locally and globally according to energy weight, which benefits the robustness and transparency of the algorithm. The experimental results show that, compared with 3D information hiding algorithms based on the inscribed sphere and the outer skeleton, the robustness of the proposed algorithm against single or joint attacks is significantly improved, while it achieves the same degree of invisibility.
    Double-level encryption reversible data hiding based on code division multiple access
    WANG Jianping, ZHANG Minqing, LI Tianxue, MA Shuangpeng
    2018, 38(4):  1023-1028.  DOI: 10.11772/j.issn.1001-9081.2017102493
    Aiming at enhancing the embedding capacity and enriching the available encryption algorithms for reversible data hiding in the encrypted domain, a new scheme was proposed that adopts double-level encryption and embeds secret information based on Code Division Multiple Access (CDMA). The image was first divided into blocks, which were scrambled by introducing multi-granularity encryption; then the 2 middle bits of each pixel in the blocks were encrypted by a stream cipher. Based on the idea of CDMA, k mutually orthogonal 4-bit matrices were selected to carry k levels of secret information; the orthogonality of the matrices guarantees multi-level embedding and improves the embedding capacity. Pseudo bits were embedded into blocks that could not meet the embedding condition. The secret data can be extracted by using the extraction key; the original image can be approximately recovered by using the image decryption key; with both keys, the original image can be recovered losslessly. Experimental results show that, when the Peak Signal-to-Noise Ratio (PSNR) of the 512×512-pixel gray Lena image is higher than 36 dB, the maximum embedding capacity of the proposed scheme is 133313 bits. The proposed scheme improves the security of encrypted images and greatly enhances the reversible embedding capacity in the ciphertext domain while ensuring reversibility.
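    The CDMA idea behind multi-level embedding can be demonstrated with length-4 Walsh codes: each secret bit stream is spread over its own orthogonal code, the spread signals are superposed, and orthogonality lets each level be recovered independently. This sketch shows only the spreading arithmetic, not the paper's pixel-level embedding:

```python
# four mutually orthogonal Walsh codes of length 4
CODES = [(1, 1, 1, 1), (1, -1, 1, -1), (1, 1, -1, -1), (1, -1, -1, 1)]

def embed(bits):
    """Superpose up to 4 bit values (0/1 mapped to -1/+1), one per
    orthogonal code, into a single length-4 carrier."""
    carrier = [0, 0, 0, 0]
    for bit, code in zip(bits, CODES):
        s = 1 if bit else -1
        carrier = [c + s * x for c, x in zip(carrier, code)]
    return carrier

def extract(carrier, k):
    # orthogonality: correlating with code i isolates level i's bit
    return [1 if sum(c * x for c, x in zip(carrier, CODES[i])) > 0 else 0
            for i in range(k)]

recovered = extract(embed([1, 0, 1]), 3)
```

Each extra level adds capacity without disturbing the others, which is the mechanism the scheme uses to raise the embedding capacity.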
    Evolutionary game considering node degree in social network
    LIU Yazhou, WANG Jing, PAN Xiaozhong, FU Wei
    2018, 38(4):  1029-1035.  DOI: 10.11772/j.issn.1001-9081.2017102431
    In the process of rumor spreading, nodes with different degrees have different recognition abilities. An evolution model of dynamic complex networks was proposed based on game theory, in which a new game gain was defined according to node degree. Considering the fact that rumor propagation is often related to node interests, the non-uniform propagation rates of different nodes and the propagation dynamics of rumors were described by introducing recognition ability, and two rumor suppression strategies were proposed. Simulations were conducted on two typical network models and verified on real Facebook network data. The research demonstrates that the fuzziness of a rumor has little effect on the rumor propagation rate and the time required to reach a steady state in the BA scale-free network and the Facebook network; as rumors become fuzzier, the scope of the rumor in the network expands. Compared with the Watts-Strogatz (WS) small-world network, rumors spread more easily in the BA scale-free network and the Facebook network. The study also finds that, with the same added value of immune benefits, immune nodes grow more rapidly in the WS small-world network than in the BA scale-free network and the Facebook network. In addition, suppressing the node hazard degree yields a better rumor suppression effect than suppressing the game gain.
    New voting protocol based on homomorphic threshold cryptography
    DAI Xiaokang, CHEN Changbo, WU Wenyuan
    2018, 38(4):  1036-1040.  DOI: 10.11772/j.issn.1001-9081.2017102400
    A new voting protocol was proposed to solve the problem that most existing voting protocols require a trusted management authority. The protocol comprehensively makes use of homomorphic encryption, threshold cryptography, blind signatures, ring signatures, zero-knowledge proofs and other techniques to resolve the conflict between robustness and the absence of a trusted third party, under the assumption that either no voter abstains, or, when a voter abstains, the authority does not conspire with other voters. At the same time, anonymity, eligibility, robustness, verifiability and the absence of a trusted third party are all satisfied.
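    The core "no single trusted authority" idea can be illustrated with additive secret sharing of votes: each vote is split into random shares handed to different authorities, each authority only ever sums its own column, and the partial sums combine to the tally. This toy stand-in uses plain modular addition instead of the protocol's homomorphic threshold cryptosystem; all names and parameters are illustrative.

```python
import random

MOD = 2**31 - 1  # any modulus much larger than the possible tally

def share_vote(vote, n_auth, rnd=random):
    """Additively secret-share one vote (0 or 1) among n authorities:
    no single share reveals anything about the vote."""
    shares = [rnd.randrange(MOD) for _ in range(n_auth - 1)]
    shares.append((vote - sum(shares)) % MOD)
    return shares

def tally(all_shares):
    # each authority sums its own column; only the combined partial
    # sums reveal the election result
    n_auth = len(all_shares[0])
    partial = [sum(s[i] for s in all_shares) % MOD for i in range(n_auth)]
    return sum(partial) % MOD

votes = [1, 0, 1, 1]                              # 1 = yes (hypothetical)
result = tally([share_vote(v, 3) for v in votes])
```

Because addition commutes with sharing, the tally is exact even though no authority ever sees an individual vote, which is the property the homomorphic construction provides with cryptographic strength.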
    Dynamic threshold signature scheme based on Chinese remainder theorem
    WANG Yan, HOU Zhengfeng, ZHANG Xueqi, HUANG Mengjie
    2018, 38(4):  1041-1045.  DOI: 10.11772/j.issn.1001-9081.2017092242
    To resist mobile attacks, a new dynamic threshold signature scheme based on the Chinese Remainder Theorem (CRT) was proposed. Firstly, members exchanged their shadows to generate their private keys and the group public key. Secondly, a partial signature was generated through cooperation. Finally, the partial signatures were used to synthesize the full signature. The scheme does not expose the group private key in the signing process, so the group private key can be reused. Members update their private keys periodically without changing the group public key, ensuring that signatures produced before an update remain valid. Besides, the scheme allows new members to join while keeping the old members' private keys and the group private key unexposed. The scheme has forward security and can resist mobile attacks effectively. Theoretical analysis and simulation results show that, compared with the proactive threshold scheme based on Lagrange interpolation, the key update time of the proposed scheme is constant, so the scheme is time-efficient.
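    The CRT machinery underlying such schemes can be sketched with Mignotte-style secret sharing: shares are the secret modulo pairwise-coprime moduli, and any subset whose moduli multiply past the secret reconstructs it by CRT. The moduli and secret below are tiny illustrative values; the paper's scheme adds signature logic and periodic share refresh on top of this core.

```python
from math import prod

def make_shares(secret, moduli):
    """Mignotte-style shares: the secret reduced modulo each of a set
    of pairwise-coprime moduli."""
    return [(m, secret % m) for m in moduli]

def crt_recover(shares):
    """Standard CRT reconstruction from (modulus, residue) pairs."""
    M = prod(m for m, _ in shares)
    x = 0
    for m, r in shares:
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

moduli = [11, 13, 17, 19]             # pairwise coprime
secret = 1000                          # below 11*13*17, above 17*19,
                                       # so any 3 shares suffice
shares = make_shares(secret, moduli)
recovered = crt_recover(shares[1:])    # reconstruct from 3 of 4 shares
```

The threshold falls out of the moduli: three shares always cover the secret, while fewer leave it ambiguous modulo a product smaller than the secret.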
    Online signature verification based on curve segment similarity matching
    LIU Li, ZHAN Enqi, ZHENG Jianbin, WANG Yang
    2018, 38(4):  1046-1050.  DOI: 10.11772/j.issn.1001-9081.2017092186
    Aiming at the problems of mismatching and excessive matching distance caused by curve scaling, shifting, rotation and non-uniform sampling in online signature verification, a curve segment similarity matching method was proposed. First, the two curves were partitioned into segments and coarsely matched, and a dynamic programming algorithm based on a cumulative difference matrix of windows was introduced to obtain the matching relationship. Then, the similarity distance of each matching pair and the weighted sum over all matching pairs were calculated: each curve of a matching pair was fitted, a similarity transformation within a certain range was applied, and the curves were resampled to obtain the Euclidean distance. Finally, the average similarity distance between the test signature and all template signatures was used as the authentication distance, which was compared with the training threshold to judge authenticity. The method was validated on the open databases SUSIG Visual and SUSIG Blind with Equal Error Rates (EER) of 3.56% and 2.44% respectively when using personalized thresholds, and the EER was reduced by about 14.4% on the Blind data set compared with the traditional Dynamic Time Warping (DTW) method. The experimental results show that the proposed method has certain advantages in verifying both skilled and random forgery signatures.
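    The DTW baseline that the proposed method is compared against can be sketched in its classic dynamic-programming form; the sequences below are hypothetical 1-D signals, whereas real signature curves are multi-dimensional:

```python
def dtw(a, b):
    """Classic dynamic time warping distance between two sequences:
    d[i][j] holds the minimal cumulative cost of aligning a[:i] with
    b[:j], allowing stretches and compressions in time."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

DTW absorbs non-uniform sampling by warping time, but it does not compensate for scaling or rotation, which is the gap the curve-segment similarity matching targets.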
    Privacy preserving attribute matching method based on CP-ABE in social networks
    CUI Weirong, DU Chenglie
    2018, 38(4):  1051-1057.  DOI: 10.11772/j.issn.1001-9081.2017102407
    Abstract ( )   PDF (1182KB) ( )
    References | Related Articles | Metrics
    Aiming at privacy protection for user attribute matching in social networks, a privacy-preserving user attribute matching method based on anonymous attribute-based encryption was proposed, which can be applied to centralized attribute matching scenarios. In this method, each user describes his own profile and dating preference with two attribute lists. These two lists were then encoded into an attribute secret key and a ciphertext access control policy respectively for privacy protection. Finally, the server made the matching decision by judging whether the ciphertext encoding the dating preference could be decrypted correctly by the secret key encoding the user profile. In this way, the server achieves bidirectional attribute matching without learning the specific attributes of either side. Analysis and experimental results show that the proposed method is highly practical, providing high computational efficiency while ensuring privacy.
    Android malware detection based on texture fingerprint and malware activity vector space
    LUO Shiqi, TIAN Shengwei, YU Long, YU Jiong, SUN Hua
    2018, 38(4):  1058-1063.  DOI: 10.11772/j.issn.1001-9081.2017102499
    Abstract ( )   PDF (862KB) ( )
    References | Related Articles | Metrics
    To improve the accuracy and automation of malware recognition, an Android malware analysis and detection method based on deep learning was proposed. Firstly, a malware texture fingerprint was proposed to reflect the content similarity of malicious code binaries, and a vector space of 33 types of malware activities was selected to reflect the potential dynamic behaviors of malicious code. Then, to improve classification accuracy, an AutoEncoder (AE) and a Softmax classifier were trained jointly on the above features. Test results on different data samples show that the average classification accuracy of the proposed method reaches 94.9% when using a Stacked AE (SAE), which is 1.1 percentage points higher than that of the Support Vector Machine (SVM). The proposed method can effectively improve the accuracy of malicious code recognition.
    Application of self-adaptive chaotic quantum particle swarm algorithm in coverage optimization of wireless sensor network
    ZHOU Haipeng, GAO Qin, JIANG Fengqian, YU Dawei, QIAO Yan, LI Yang
    2018, 38(4):  1064-1071.  DOI: 10.11772/j.issn.1001-9081.2017092372
    Abstract ( )   PDF (1197KB) ( )
    References | Related Articles | Metrics
    Concerning the problems of traditional Particle Swarm Optimization (PSO) such as slow convergence and easily falling into local extrema, a Dynamic self-Adaptive Chaotic Quantum-behaved PSO (DACQPSO) was proposed by studying the relationship between population diversity and PSO evolution. The population distribution entropy was introduced into the evolutionary control of the particle swarm, and a method of calculating the contraction-expansion coefficient of Quantum-behaved PSO (QPSO) based on the Sigmoid function was given. The average distance among points was taken as the criterion for triggering chaotic perturbation through chaotic search. The DACQPSO algorithm was applied to coverage optimization of Wireless Sensor Networks (WSN) and analyzed through simulation. Experimental results show that compared with Standard PSO (SPSO), QPSO and Chaotic Quantum-behaved PSO (CQPSO), the DACQPSO algorithm improves the coverage rate by 3.3501%, 2.6502% and 1.9000% respectively. DACQPSO improves the coverage performance of WSN and has a better coverage optimization effect than the other algorithms.
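    The QPSO position update with a sigmoid-scheduled contraction-expansion coefficient can be sketched as follows; the sigmoid schedule and its bounds are illustrative assumptions, since the abstract does not give the exact formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid_beta(t, T, b_max=1.0, b_min=0.5):
    """Contraction-expansion coefficient decaying along a sigmoid over iterations
    (illustrative schedule; the paper's exact form may differ)."""
    return b_min + (b_max - b_min) / (1.0 + np.exp(10.0 * (t / T - 0.5)))

def qpso_step(x, pbest, gbest, t, T):
    """One QPSO position update for a whole swarm of shape (particles, dims)."""
    beta = sigmoid_beta(t, T)
    mbest = pbest.mean(axis=0)                    # mean of personal bests
    phi = rng.random(x.shape)
    p = phi * pbest + (1 - phi) * gbest           # stochastic local attractor
    u = rng.random(x.shape) + 1e-12               # avoid log(1/0)
    sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```

Early iterations keep beta near its maximum for wide exploration; the sigmoid then shrinks it to tighten convergence, which is the behavior the adaptive control targets.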
    Parallel algorithm for explicit finite element analysis based on efficient parallel computational strategy
    FU Chaojiang, WANG Tianqi, LIN Yuerong
    2018, 38(4):  1072-1077.  DOI: 10.11772/j.issn.1001-9081.2017092384
    Abstract ( )   PDF (1072KB) ( )
    References | Related Articles | Metrics
    Concerning the time-consuming finite element analysis of nonlinear dynamic problems of large-scale structures, parallel computational strategies for explicit nonlinear finite element analysis were proposed in a Message Passing Interface (MPI) cluster environment. Based on domain decomposition with explicit message passing, and using overlapped and non-overlapped domain decomposition techniques together with Dynamic Task Allocation (DTA), five parallel algorithms were studied: overlapped domain decomposition, non-overlapped domain decomposition, clustering-based DTA, plain DTA, and Dynamic Load Balancing (DLB), with computation overlapped with communication to improve inter-processor communication performance. A parallel finite element analysis program was developed with MPI as the software development environment. Numerical examples were run on a workstation cluster to evaluate the performance of the parallel algorithms, and the computation performance was also compared with the conventional Newmark algorithm. The experimental results show that clustering-based dynamic task allocation outperforms plain dynamic task allocation, which in turn performs below the domain decomposition algorithms, and the dynamic load balancing algorithm performs best. For problems of the same size, the proposed algorithms are faster than the conventional Newmark algorithm. The proposed algorithms are efficient for parallel computation of nonlinear dynamic structural problems.
    Communication optimization for intermediate data of MapReduce computing model
    CAO Yunpeng, WANG Haifeng
    2018, 38(4):  1078-1083.  DOI: 10.11772/j.issn.1001-9081.2017092358
    Abstract ( )   PDF (1014KB) ( )
    References | Related Articles | Metrics
    Aiming at the cross-rack communication problem caused by the large amount of intermediate data generated after the Map phase in MapReduce, a new optimization method for map-intensive jobs was proposed. Firstly, features were extracted from pre-run scheduling information and the communication activity of intermediate data was quantified. Then, a naive Bayesian classification model, trained on historical job-run data, was used for classification prediction. Finally, jobs with active intermediate-data communication were mapped into the same rack to preserve communication locality. The experimental results show that the proposed communication optimization scheme works well on shuffle-intensive jobs, improving computation performance by 4%-5%; in a multi-user, multi-job environment, the cross-rack intermediate data can be reduced by 4.1%. The proposed method can effectively reduce communication latency in large-scale data processing and improve the performance of heterogeneous clusters.
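    The classification step can be sketched with a hand-rolled Gaussian naive Bayes classifier; the two features and labels below are hypothetical stand-ins for the quantified communication activity, not the paper's actual feature set:

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Fit per-class feature means, variances and priors (Gaussian naive Bayes)."""
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        model[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(y))
    return model

def predict_nb(model, x):
    """Return the class maximizing the naive-Bayes log posterior."""
    def log_post(c):
        mu, var, prior = model[c]
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        return ll + np.log(prior)
    return max(model, key=log_post)

# Hypothetical job features: [map output size (GB), shuffle ratio]
X = np.array([[0.1, 0.1], [0.2, 0.2], [5.0, 0.9], [6.0, 0.8]])
y = np.array([0, 0, 1, 1])          # 0 = inactive, 1 = communication-active
m = fit_gaussian_nb(X, y)
```

Jobs predicted as communication-active would then be scheduled within one rack to keep their shuffle traffic off the rack switches.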
    Overview of network coding for video streaming
    CUI Huali, SUN Qindong, ZHANG Xingjun, WU Weiguo
    2018, 38(4):  1084-1088.  DOI: 10.11772/j.issn.1001-9081.2017092262
    Abstract ( )   PDF (1034KB) ( )
    References | Related Articles | Metrics
    With the explosive growth of video streaming applications, using Network Coding (NC) to improve network performance and thereby provide better video streaming quality has become a hot topic. To efficiently exploit the benefits of NC for video delivery, transmission strategies should be adapted to the characteristics of video traffic, and the network environment should also be considered. Firstly, the basic concepts and methods of NC were presented. Secondly, a variety of NC-based techniques designed specifically for video streaming were analyzed and summarized into three main categories: unequal error protection that prioritizes important video packets, reduction of packet transmission delay to meet real-time streaming requirements, and enhanced error recovery strategies to improve transmission reliability. Thirdly, the applications of NC-based video streaming in P2P networks, multi-source cooperation and content-centric network scenarios were introduced respectively. Finally, based on this study, open issues and further research topics were elaborated.
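    The basic concept the survey opens with is illustrated by the classic butterfly-network example, where one XOR-coded packet on a shared bottleneck link serves two sinks at once:

```python
# Classic butterfly-network illustration of network coding:
# the bottleneck node forwards a XOR b; each sink recovers the
# missing packet by XOR-ing with the packet it received directly.
a, b = 0b1011, 0b0110        # two packets, shown here as bit patterns
coded = a ^ b                # single coded packet on the shared link
recovered_b = coded ^ a      # sink 1 already holds a, recovers b
recovered_a = coded ^ b      # sink 2 already holds b, recovers a
```

Without coding, the bottleneck would have to carry a and b in separate rounds; the coded packet halves that load, which is the throughput gain NC-based video delivery builds on.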
    Multi-attribute spatial node selection algorithm based on subjective and objective weighting
    DAI Cuiqin, WANG Wenhan
    2018, 38(4):  1089-1094.  DOI: 10.11772/j.issn.1001-9081.2017102534
    Abstract ( )   PDF (964KB) ( )
    References | Related Articles | Metrics
    Aiming at the problem that single-attribute cooperative node selection algorithms in spatial cooperative transmission cannot balance reliability and system survival time, a Subjective and Objective Weighting based multi-attribute Cooperative Node Selection (SOW-CNS) algorithm was proposed by introducing Multiple Attribute Decision Making (MADM), in which three attributes (channel fading level, residual energy of the cooperative nodes, and packet loss rate) were considered to implement multi-attribute evaluation of spatial cooperative nodes. Firstly, according to the influence of shadow fading, a two-state wireless channel model was established, comprising the shadow-free Loo channel fading model and the shadowed Corazza channel fading model. Secondly, considering the channel fading level, the residual energy of cooperative nodes and the system packet loss rate, a multi-attribute decision making strategy based on subjective and objective weighting was introduced: the subjective and objective attribute weight vectors of spatial cooperative nodes were established using the Analytic Hierarchy Process (AHP) and the information entropy method, and the maximum entropy principle and the deviation maximization method were then used to calculate the combined subjective-objective attribute weight vector. Finally, the evaluation value of each potential node was calculated from the weight vector and the attribute values of each node, and the best cooperative node was selected to participate in the cooperative transmission of spatial information. Simulation results show that the SOW-CNS algorithm attains a lower system packet loss rate and a longer system survival time than the traditional Best-Quality based (BQ-CNS), Energy-Fairness based (EF-CNS) and Random (R-CNS) cooperative node selection algorithms.
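    The objective (entropy) weighting half of the scheme can be sketched as follows; the candidate matrix and the simple weighted-sum scoring are hypothetical illustrations, not the paper's combined AHP-entropy procedure:

```python
import numpy as np

def entropy_weights(X):
    """Objective attribute weights from information entropy:
    attributes whose values vary more across candidates get more weight."""
    P = X / X.sum(axis=0)                                    # column-normalize
    n = X.shape[0]
    E = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(n)   # entropy per attribute
    d = 1.0 - E                                              # degree of diversification
    return d / d.sum()

# Hypothetical candidates x attributes: [channel gain, residual energy, 1 - loss rate]
X = np.array([[0.9, 0.5, 0.95],
              [0.8, 0.9, 0.90],
              [0.4, 0.8, 0.60]])
w = entropy_weights(X)
scores = X @ w            # simple weighted evaluation of each candidate node
best = int(np.argmax(scores))
```

Here the channel-gain column varies most across candidates, so it receives the largest objective weight; a subjective AHP vector would then be blended with this one before scoring.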
    Energy balancing routing protocol for low-power and lossy network
    HE Wangji, MA Xiaoyuan, LI Xin, TANG Weisheng
    2018, 38(4):  1095-1101.  DOI: 10.11772/j.issn.1001-9081.2017092151
    Abstract ( )   PDF (1071KB) ( )
    References | Related Articles | Metrics
    To deal with the unbalanced energy consumption of nodes, short network lifetime and slow updating of parent nodes' status information in the steady state of the IPv6 Routing Protocol for Low-power and lossy networks (RPL), an Energy Balancing RPL (EB-RPL) with a battery estimation strategy was proposed. Firstly, a new routing metric combining the expected transmission count and node residual energy was presented, with which nodes can adaptively adjust the network topology at different stages. Secondly, a battery estimation method based on the energy consumption rate of the parent node was designed, so that child nodes can estimate the parent node's power consumption and make correct routing decisions without additional control message overhead. Finally, the performance of EB-RPL was compared and analyzed through experiments. The simulation results show that compared with RPL, EB-RPL significantly reduces the standard deviation of power among nodes at the same level, and the network lifetime is prolonged by 29.4% and 39.4% on average under different inter-packet intervals and network sizes, respectively. EB-RPL can effectively achieve energy balance and significantly extend network lifetime.
    Multicast routing of power grid based on demand response constraints
    LONG Dan, LI Xiaohui, DING Yuemin
    2018, 38(4):  1102-1105.  DOI: 10.11772/j.issn.1001-9081.2017092295
    Abstract ( )   PDF (659KB) ( )
    References | Related Articles | Metrics
    In multicast routing communication of the smart grid, constructing the multicast tree with only a delay constraint, while ignoring the demands of the smart grid, leads to long communication delays when transmitting control messages to high-power load devices. To address this, a multicast tree construction method considering both load and communication delay, namely a multicast routing algorithm based on Demand Response (DR) capability constraints, was proposed. Firstly, a complete graph satisfying the constraints was generated from the grid network topology. Then, a low-cost multicast tree was constructed using the Prim algorithm. Finally, the multicast tree was mapped back to the original network. The simulation results show that the proposed algorithm can effectively reduce the demand response delay of high-power load devices, and can significantly reduce the power frequency deviation compared with the multicast routing algorithm that considers only the delay constraint. The algorithm can improve real-time demand response in the smart grid and help stabilize the grid frequency.
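    The tree-construction step uses the standard Prim algorithm, which can be sketched as follows; the delay-weighted graph is a hypothetical example, and the DR capability constraints are assumed to have been applied when building it:

```python
import heapq

def prim_mst(graph, root):
    """Prim's algorithm: grow a minimum spanning tree from `root`.
    `graph` maps node -> {neighbor: edge cost}; returns (parent map, total cost)."""
    parent, total = {root: None}, 0
    heap = [(w, root, v) for v, w in graph[root].items()]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)
        if v in parent:            # already in the tree
            continue
        parent[v], total = u, total + w
        for nxt, wn in graph[v].items():
            if nxt not in parent:
                heapq.heappush(heap, (wn, v, nxt))
    return parent, total

# Hypothetical delay-weighted communication graph rooted at source 's'
g = {'s': {'a': 1, 'b': 4},
     'a': {'s': 1, 'b': 2, 'c': 5},
     'b': {'s': 4, 'a': 2, 'c': 1},
     'c': {'a': 5, 'b': 1}}
tree, cost = prim_mst(g, 's')
```

The parent map directly encodes the multicast tree that is then restored onto the original network topology.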
    Spatially common sparsity channel estimation based on compressive sensing for massive multi-input multi-output system
    TANG Hu, LIU Ziyan, LIU Shimei, FENG Li
    2018, 38(4):  1106-1110.  DOI: 10.11772/j.issn.1001-9081.2017082027
    Abstract ( )   PDF (747KB) ( )
    References | Related Articles | Metrics
    Focusing on the low channel estimation accuracy in the virtual angular domain for Frequency Division Duplex (FDD) based massive Multi-Input Multi-Output (MIMO) systems, a new threshold-based Sparsity Adaptive Matching Pursuit algorithm (BT-SAMP) was proposed. The algorithm combines the atom selection of the Backtracking-based Adaptive Orthogonal Matching Pursuit (BAOMP) algorithm with the adaptivity of the Sparsity Adaptive Matching Pursuit (SAMP) algorithm. The "atom adding" rule of BAOMP was used as atom-selection preprocessing for SAMP, fixed atoms were added according to a reasonable threshold, and the step size of SAMP was then extended to find the maximum approximation coefficients of the channel matrix, which improves the accuracy of SAMP and accelerates its convergence. The simulation results show that the channel estimation accuracy is improved over SAMP at low Signal-to-Noise Ratio (SNR); in particular, when the SNR is 0 to 10 dB, the estimation accuracy is improved by 4 dB and the running time is reduced by about 61%.
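    SAMP and its variants build on greedy matching pursuit; a minimal Orthogonal Matching Pursuit (OMP) sketch over a random measurement matrix shows the core recover-a-sparse-vector step, assuming a known sparsity level rather than the adaptive step-size logic of the paper:

```python
import numpy as np

def omp(A, y, k, tol=1e-9):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y ~= A x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # best-matching atom
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit on support
        residual = y - A[:, support] @ sol
        if np.linalg.norm(residual) < tol:
            break
    x[support] = sol
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)           # unit-norm columns (dictionary atoms)
x_true = np.zeros(40)
x_true[[3, 17]] = [1.5, -2.0]            # 2-sparse virtual-angular-domain channel
x_hat = omp(A, A @ x_true, k=2)
```

BAOMP-style preprocessing would add several atoms per iteration under a threshold and allow backtracking removal, which is where the speed-up over plain SAMP comes from.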
    Image inpainting algorithm based on pruning samples referring to four-neighborhood
    MENG Hongyue, ZHAI Donghai, LI Mengxue, CAO Daming
    2018, 38(4):  1111-1116.  DOI: 10.11772/j.issn.1001-9081.2017082033
    Abstract ( )   PDF (1011KB) ( )
    References | Related Articles | Metrics
    To inpaint images with large damaged regions and complex structure and texture, a method based on four-neighborhood reference priority was proposed, which both maintains image features and improves inpainting speed by translating the inpainting problem into a best-sample search. Firstly, the structure information of the target image was extracted, and the sample region was divided into several sub-regions to reduce the sample size and the search scope. Secondly, since the Sum of Squared Differences (SSD) method ignores the matching of structure information, a structure symmetry matching constraint was introduced into the matching method, which effectively avoids wrong matches and improves sample matching precision and search efficiency. Then, a priority formula highlighting the effect of structure was obtained by introducing structure weight and confidence into the traditional priority calculation. Finally, the priority of the four-neighborhood was obtained by computing the overlapping information between the target block and its neighboring blocks; according to the reliable reference information provided by the four-neighborhood and the improved block matching method, the samples were pruned and the optimal sample retrieved, repeating until optimal samples for all target blocks were found. The experimental results demonstrate that the proposed method overcomes problems such as texture blurring and structure dislocation; its Peak Signal-to-Noise Ratio (PSNR) is 0.5 dB to 1 dB higher than that of the contrast methods while the inpainting process is sped up, and the recovered images appear more continuous to human vision. The method can effectively recover common damaged images and is widely applicable.
    Slices reconstruction method for single image dedusting
    WANG Yuanyu, ZHANG Yifan, WANG Yunfei
    2018, 38(4):  1117-1120.  DOI: 10.11772/j.issn.1001-9081.2017092388
    Abstract ( )   PDF (824KB) ( )
    References | Related Articles | Metrics
    In order to solve the image degradation in non-uniform dust environments with multiple light scattering, a slice reconstruction method for single image dedusting was proposed. Firstly, slices along the depth orientation were produced based on the McCartney model of the dust environment. Secondly, a joint dust detection method was used to detect dust patches in each slice; non-dust areas were reserved while dust zones were marked as the candidate detection areas of the next slice. Then, an image was reconstructed by combining the non-dust areas of each slice with the dust zone of the last slice. Finally, a restored image was obtained by applying a fast guided filter to the reconstructed area. The experimental results show that the proposed restoration method can effectively and quickly remove dust from an image, laying a foundation for computer-vision-based object detection and recognition in dust environments.
    Improved D-Nets algorithm with matching quality purification
    YE Feng, HONG Zheng, LAI Yizong, ZHAO Yuting, XIE Xianzhi
    2018, 38(4):  1121-1126.  DOI: 10.11772/j.issn.1001-9081.2017102394
    Abstract ( )   PDF (1072KB) ( )
    References | Related Articles | Metrics
    To address the underperformance of feature-based image registration under large affine deformation and similar targets, and to reduce time cost, an improved Descriptor-Nets (D-Nets) algorithm based on matching quality purification was proposed. Feature points were first detected by the Features from Accelerated Segment Test (FAST) algorithm and then filtered according to the Harris corner response function and meshing. Furthermore, on the basis of calculating the line descriptors, a hash table was constructed and voting performed to obtain rough matching pairs. Finally, mismatches were eliminated by a purification step based on matching quality. Experiments were carried out on the Mikolajczyk standard image dataset of Oxford University. The results show that the improved D-Nets algorithm achieves an average registration accuracy of 92.2% with an average time cost of 2.48 s under large variations of scale, viewpoint and illumination. Compared with the Scale-Invariant Feature Transform (SIFT), Affine-SIFT (ASIFT) and original D-Nets algorithms, the improved algorithm attains registration accuracy similar to the original D-Nets with up to an 80-fold speedup and the best robustness, significantly outperforming SIFT and ASIFT, which makes it practical for image registration applications.
    Non-rigid multi-modal medical image registration based on multi-channel sparse coding
    WANG Lifang, CHENG Xi, QIN Pinle, GAO Yuan
    2018, 38(4):  1127-1133.  DOI: 10.11772/j.issn.1001-9081.2017102392
    Abstract ( )   PDF (1067KB) ( )
    References | Related Articles | Metrics
    The sparse coding similarity measure is robust to the gray-scale offset field in non-rigid medical image registration, but it is only suitable for single-modal registration. A non-rigid multi-modal medical image registration method based on multi-channel sparse coding was proposed to solve this problem. In this method, multi-modal registration was treated as multi-channel registration, with each modality handled in a separate channel. First, the two images to be registered were synthesized and regularized separately, and then divided into channels and image blocks. The K-means-based Singular Value Decomposition (K-SVD) algorithm was used to train the image blocks in each channel to obtain the analytical dictionary and sparse coefficients, and the channels were weighted and summed. The multilayer P-spline free transform model was used to simulate non-rigid geometric deformation, and the gradient descent method was used to optimize the objective function. The experimental results show that compared with multi-modal similarity measures such as local mutual information, Multi-Channel Local Variance and Residual Complexity (MCLVRC), Multi-Channel Sparse-Induced Similarity Measure (MCSISM) and Multi-Channel Rank Induced Similarity Measure (MCRISM), the root mean square error of the proposed method is decreased by 30.86%, 22.24%, 26.84% and 16.49% respectively. The proposed method not only effectively overcomes the influence of the gray-scale offset field in multi-modal medical image registration, but also improves registration accuracy and robustness.
    Multi-modal brain image fusion method based on adaptive joint dictionary learning
    WANG Lifang, DONG Xia, QIN Pinle, GAO Yuan
    2018, 38(4):  1134-1140.  DOI: 10.11772/j.issn.1001-9081.2017092291
    Abstract ( )   PDF (1149KB) ( )
    References | Related Articles | Metrics
    Currently, globally trained dictionaries are not adaptive enough for brain medical images, and the "max-L1" fusion rule may cause gray-level inconsistency in the fused image, so satisfactory fusion results cannot be obtained. A multi-modal brain image fusion method based on adaptive joint dictionary learning was proposed to solve this problem. Firstly, an adaptive joint dictionary was obtained by combining sub-dictionaries adaptively learned from the registered source images using an improved K-means-based Singular Value Decomposition (K-SVD) algorithm. The sparse representation coefficients over the adaptive joint dictionary were computed by the Coefficient Reuse Orthogonal Matching Pursuit (CoefROMP) algorithm. Furthermore, the activity level of source image patches was measured by the "multi-norm" of the sparse representation coefficients, and an unbiased rule combining "adaptive weighted average" and "choose-max" was proposed to select the fusion rule according to the similarity of the "multi-norm" of the sparse representation coefficients: the coefficients were fused by the "adaptive weighted average" rule when the similarity was greater than a threshold, and by the "choose-max" rule otherwise. Finally, the fused image was reconstructed from the fused coefficients and the adaptive joint dictionary. The experimental results show that, compared with three methods based on multi-scale transform and five methods based on sparse representation, the fused images of the proposed method contain more detail information, with better contrast and sharpness and clearer lesion edges; the mean values of the objective metrics, namely standard deviation, spatial frequency, mutual information, the gradient-based index, the universal-image-quality-based index and the mean structural similarity index, over three groups of experiments are 71.0783, 21.9708, 3.6790, 0.6603, 0.7352 and 0.7339 respectively. The proposed method can be used for clinical diagnosis and assisted treatment.
    Face super-resolution via very deep convolutional neural network
    SUN Yitang, SONG Huihui, ZHANG Kaihua, YAN Fei
    2018, 38(4):  1141-1145.  DOI: 10.11772/j.issn.1001-9081.2017092378
    Abstract ( )   PDF (890KB) ( )
    References | Related Articles | Metrics
    For face super-resolution with multiple scale factors, a face super-resolution method based on a very deep convolutional neural network was proposed; experiments showed that increasing network depth effectively improves the accuracy of face reconstruction. Firstly, a network consisting of 20 convolutional layers was designed to learn an end-to-end mapping between low-resolution and high-resolution images, with many small cascaded filters extracting more textural information. Secondly, residual learning was introduced to address the loss of detail caused by increasing depth. In addition, low-resolution face images with multiple scale factors were merged into one training set, enabling the network to perform face super-resolution at multiple scale factors. The results on the CASPEAL test dataset show that the proposed method improves Peak Signal-to-Noise Ratio (PSNR) by 2.7 dB and structural similarity by 2% over Bicubic-based face reconstruction, and also shows clear gains in accuracy and visual quality over the SRCNN method. Deeper network structures thus achieve better reconstruction results.
    Real-time face detection for mobile devices with optical flow estimation
    WEI Zhenyu, WEN Chang, XIE Kai, HE Jianbiao
    2018, 38(4):  1146-1150.  DOI: 10.11772/j.issn.1001-9081.2017092154
    Abstract ( )   PDF (836KB) ( )
    References | Related Articles | Metrics
    To improve the face detection accuracy of mobile devices, a new real-time face detection algorithm for mobile devices was proposed. An improved Viola-Jones detector was used for quick region segmentation, improving segmentation precision without decreasing segmentation speed. Meanwhile, optical flow estimation was used to propagate the features of discrete keyframes, extracted by a convolutional neural sub-network, to the other non-keyframes, which increased the efficiency of the convolutional neural network. Experiments were conducted on the YouTube video face database, a self-built one-minute face video database of 20 people, and real test items at different resolutions. The results show that the running speed is between 2.35 and 22.25 frames per second, reaching the average face detection level; at a 10% false alarm rate, the recall of face detection increases from 65.93% to 82.5%-90.8%, approaching the detection accuracy of the convolutional neural network and satisfying the speed and accuracy requirements of real-time face detection on mobile devices.
    Image enlargement based on improved complex diffusion adaptively coupled nonlocal transform domain model
    HAI Tao, ZHANG Lei, LIU Xuyan, ZHANG Xingang
    2018, 38(4):  1151-1156.  DOI: 10.11772/j.issn.1001-9081.2017092273
    Abstract ( )   PDF (1032KB) ( )
    References | Related Articles | Metrics
    Concerning the loss of weak edges and texture details in second-order Partial Differential Equation (PDE) based amplification algorithms, an image enlargement algorithm based on an improved complex diffusion model adaptively coupled with a nonlocal transform domain model was proposed. Utilizing the accurate edge localization of the complex diffusion model, the improved complex diffusion was coupled with an impulse filter to better enhance strong edges. By modeling the sparsity of the transform coefficients obtained from the three-dimensional transformation of groups of similar image blocks, the nonlocal transform domain model made good use of the nonlocal information of similar image blocks and handled weak edges and texture details better. Finally, the second-order derivative of the image obtained by complex diffusion was used as the parameter realizing the adaptive coupling of the improved complex diffusion model and the nonlocal transform domain model. Compared with the PDE amplification algorithm, the nonlocal transform domain amplification algorithm, and the PDE coupled spatial-domain nonlocal model amplification algorithm, the proposed algorithm amplifies strong edges, weak edges and detail textures better; the mean structural similarity measures on weak-edge and texture-detail images are higher than those of the improved complex diffusion amplification algorithm and the nonlocal transform domain amplification algorithm. The results also confirm the validity of coupling a spatial-domain model with a transform-domain model, and a local model with a nonlocal model.
    Fast intra mode prediction decision and coding unit partition algorithm based on high efficiency video coding
    GUO Lei, WANG Xiaodong, XU Bowen, WANG Jian
    2018, 38(4):  1157-1163.  DOI: 10.11772/j.issn.1001-9081.2017092302
    Abstract ( )   PDF (1218KB) ( )
    References | Related Articles | Metrics
    Due to the high complexity of intra coding in High Efficiency Video Coding (HEVC), an efficient intra coding algorithm combining coding unit partition and intra mode selection based on texture features was proposed. The strength of the dominant direction of each depth layer was used to decide whether a Coding Unit (CU) needs further partition, and to reduce the number of candidate intra modes. Firstly, the variance of pixels in the coding unit and the pixel-level strength of the dominant direction were calculated to determine its texture direction complexity, and the final depth was derived by a threshold strategy. Secondly, the relation between vertical and horizontal complexity and the probability of each intra mode being selected were used to choose a subset of prediction modes, further reducing coding complexity. Compared with HM15.0, the proposed algorithm saves 51.997% of encoding time on average, while the Bjontegaard Delta Peak Signal-to-Noise Rate (BDPSNR) only decreases by 0.059 dB and the Bjontegaard Delta Bit Rate (BDBR) increases by 1.018%. The experimental results show that the method reduces encoding complexity with negligible rate-distortion performance loss, which benefits real-time video applications of the HEVC standard.
    Abnormal crowd behavior detection based on motion saliency map
    HU Xuemin, YI Chonghui, CHEN Qin, CHEN Xi, CHEN Long
    2018, 38(4):  1164-1169.  DOI: 10.11772/j.issn.1001-9081.2017092340
    Abstract ( )   PDF (1014KB) ( )
    References | Related Articles | Metrics
    To deal with the low accuracy and poor real-time performance of crowd surveillance in public places, an abnormal crowd behavior detection approach based on a motion saliency map was proposed. Firstly, the Lucas-Kanade method was used to calculate the optical flow field of sparse feature points, and the movement direction, velocity and acceleration of the feature points were computed after filtering the optical flow field in both time and space. To describe crowd behavior precisely, the velocity magnitude, direction change and acceleration magnitude were mapped to the R, G and B image channels respectively and fused into a motion saliency map describing the characteristics of crowd movement. Finally, a convolutional neural network model was designed and trained on the motion saliency maps, and the trained model was used to detect abnormal crowd behaviors. The experimental results show that the proposed approach can effectively detect abnormal crowd behaviors in real time, with detection rates above 97.9% on the UMN and PETS2009 datasets.
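    The channel-mapping step can be sketched as follows; the per-channel min-max scaling is an assumption, since the abstract only states that the three motion cues are mapped to R, G and B:

```python
import numpy as np

def motion_saliency_map(speed, direction_change, accel):
    """Fuse per-pixel motion cues into an RGB 'motion saliency map':
    speed -> R, direction change -> G, acceleration -> B (each scaled to 0..255)."""
    def to_channel(a):
        a = np.abs(a).astype(np.float64)
        span = a.max() - a.min()
        if span == 0:
            return np.zeros_like(a, dtype=np.uint8)
        return (255 * (a - a.min()) / span).astype(np.uint8)
    return np.dstack([to_channel(speed), to_channel(direction_change), to_channel(accel)])

rng = np.random.default_rng(0)
h, w = 4, 4
sal = motion_saliency_map(rng.random((h, w)), rng.random((h, w)), rng.random((h, w)))
```

The resulting image-like tensor is what lets an ordinary image-classification CNN consume motion information directly.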
    Object tracking algorithm based on static-adaptive appearance model correction
    WEI Baoguo, GE Ping, WU Hong, WANG Gaofeng, HAN Wenliang
    2018, 38(4):  1170-1175.  DOI: 10.11772/j.issn.1001-9081.2017092312
    For long-term robust tracking of a single target, a corrected tracking algorithm based on a static-adaptive appearance model was proposed. Firstly, the interference factors that may be encountered during tracking were divided into two categories, those from the environment and those from the target itself, and a static appearance model and an adaptive appearance model were proposed accordingly. The static appearance model was used for global matching while the adaptive appearance model was employed for local tracking, with the former correcting the tracking drift of the latter. A single-link hierarchical clustering algorithm was used to remove the noise introduced by fusing the two models. To recapture a re-appearing target, the static appearance model was applied for global search. Experimental results on standard video sequences show that the accuracy of tracking the target center reaches 0.9 at a processing speed of 26 frames per second. The proposed tracking framework achieves long-term stable tracking with good robustness and real-time performance.
    Speech enhancement method based on sparsity-regularized non-negative matrix factorization
    JIANG Maosong, WANG Dongxia, NIU Fanglin, CAO Yudong
    2018, 38(4):  1176-1180.  DOI: 10.11772/j.issn.1001-9081.2017092316
    To improve the robustness of Non-negative Matrix Factorization (NMF) based speech enhancement under different background noises, a speech enhancement algorithm based on Sparsity-regularized Robust NMF (SRNMF) was proposed, which takes the noise effect in data processing into account and imposes sparsity constraints on the coefficient matrix to obtain better speech characteristics from the decomposed data. First, prior dictionaries of the amplitude spectra of speech and noise were learned, and a joint speech-noise dictionary matrix was constructed. Then, the SRNMF algorithm was used to update the coefficient matrix of the noisy amplitude spectrum over the joint dictionary matrix. Finally, the original clean speech was reconstructed and enhanced. The speech enhancement performance of the SRNMF algorithm under different environmental noises was analyzed through simulation experiments. Experimental results show that the proposed algorithm can effectively weaken the influence of noise variation on performance in non-stationary environments at low Signal-to-Noise Ratio (SNR) (<0 dB): it not only improves Source-to-Distortion Ratio (SDR) scores by about 1-1.5, but is also faster than other algorithms, which makes NMF-based speech enhancement more practical.
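The coefficient-update step above can be illustrated with a standard sparsity-regularized multiplicative update: with the joint dictionary W fixed, minimize ||V - WH||_F^2 + lam * ||H||_1 over non-negative activations H. This is a generic sparse-NMF scheme and our own sketch, not necessarily the paper's exact SRNMF update; all parameter names are ours.

```python
import numpy as np

def sparse_nmf_activations(V, W, lam=0.1, n_iter=100, eps=1e-9):
    """Estimate the activation (coefficient) matrix H for a fixed joint
    dictionary W using L1-regularized multiplicative updates."""
    rng = np.random.RandomState(0)
    H = np.abs(rng.rand(W.shape[1], V.shape[1]))  # non-negative init
    WtV = W.T @ V
    for _ in range(n_iter):
        # multiplicative update keeps H non-negative; lam enforces sparsity
        H *= WtV / (W.T @ W @ H + lam + eps)
    return H
```

With a small `lam`, W @ H closely reconstructs the noisy amplitude spectrum V, and the speech-dictionary rows of H give the enhanced speech estimate.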
    Integrated scheduling of production and distribution for perishable products with freshness requirements
    WU Yao, MA Zujun, ZHENG Bin
    2018, 38(4):  1181-1188.  DOI: 10.11772/j.issn.1001-9081.2017092252
    To improve the production and distribution efficiency of perishable products with short lifetimes under the Make-To-Order (MTO) mode, considering the operational costs of the business and customer demand for the freshness of delivered products, a bi-objective model was established to coordinate production scheduling and vehicle routing under minimum freshness limitations, aiming to minimize the total distribution cost and maximize the total freshness of delivered products. An elitist non-dominated sorting genetic algorithm with chromosomes encoded by two substrings was devised to optimize the proposed model. Firstly, the customers' time windows were described, the freshness of delivered products was defined using the average freshness level over multiple kinds of products, and the bi-objective model was constructed to schedule production and delivery simultaneously. Then, the hard constraints and the two objective functions were transformed. Chromosomes were encoded by two substrings, and the computational framework of the elitist non-dominated sorting genetic algorithm with several key operators was adopted to solve the proposed model. Finally, the proposed algorithm was tested on a numerical example in comparison with Pareto-based simulated annealing. The simulation results show that the two objectives are in trade-off conflict and that the proposed algorithm can provide Pareto optimal solutions. Sensitivity analysis of the minimum freshness limitation demonstrates that both objectives are affected significantly when fewer vehicles are put into use.
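A freshness degree of the kind constrained above is commonly modeled as a decay from 1 (just produced) to 0 (end of shelf life). The linear form below is purely illustrative, an assumption of ours rather than the paper's definition, but it shows how a minimum-freshness limitation translates into a latest allowable delivery age.

```python
def freshness(age_hours: float, shelf_life_hours: float) -> float:
    """Freshness degree of a perishable product, modeled here as a
    linear decay with age (illustrative; the paper's form may differ)."""
    return max(0.0, 1.0 - age_hours / shelf_life_hours)
```

Under this form, a minimum freshness of 0.5 for a 10-hour shelf life means a product must be delivered within 5 hours of production.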
    Modeling of twin rail-mounted gantry scheduling and container slot selection in automated terminal
    WEI Yaru, ZHU Jin
    2018, 38(4):  1189-1194.  DOI: 10.11772/j.issn.1001-9081.2017082028
    For the scheduling problem of non-crossing twin Rail-Mounted Gantries (RMGs) with container slot selection, considering the safety distance between the two RMGs and the buffer capacity, a coupled model of twin-RMG scheduling and container slot selection was proposed with the goal of minimizing the completion time, taking twin-RMG scheduling as the main line and container slot selection as the auxiliary line. Its basic idea is to set decision variables describing the relationship between tasks. A Genetic Algorithm-Ant Algorithm (GAAA) was designed to solve the coupled model, and CPLEX was used for comparison by analyzing the efficiency in relay mode and mixed mode. The experimental results show that the efficiency of relay mode is better than that of mixed mode when dealing with 8 to 150 container tasks; in small and medium-large sized experiments, the minimum completion time obtained by GAAA is reduced by about 2.65% and 18.50% respectively, and the running time of GAAA is reduced by 88.6% and 99.19% respectively on average compared with CPLEX, which verifies the validity of the model.
    Heuristic packing algorithm based on forbearing stratified strategy
    LIANG Lidong, JIA Wenyou
    2018, 38(4):  1195-1200.  DOI: 10.11772/j.issn.1001-9081.2017092230
    Focusing on the balance between the best-fit method and the evaluation of packing layout, a new effective best-fit heuristic algorithm for the 2D packing problem was proposed based on a forbearing stratified strategy for multi-objective optimization. Firstly, the packing space and fit values were defined, and the width and height fit values between the current part and the packing space were calculated; a unified multi-objective optimization function model and packing priority rules were then built based on the objective function values. In particular, for the general placeable-fit situation, the best layout was achieved by setting and adjusting tolerance values. The computational results on 7 classes of benchmark data show that the average gap is reduced by 2% compared with Lowest-Level Left Align Best Fit (LLABF) and Lowest Skyline Best Fit (LSBF); the packing heights reach 24 and 339 respectively on the two sets of random data C1P1+C3P1 and C2-C7 (with 33 and 66 rectangles). The algorithm can also be applied to irregular part packing optimization.
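A best-fit priority rule of the kind described above can be sketched as: score each candidate packing space by how closely the part's width and height match it, and place the part in the highest-scoring space. The scoring function below is our own minimal illustration, not the paper's exact fit-value model.

```python
def best_fit_position(part_w, part_h, spaces):
    """Return the index of the packing space that best fits a part, or
    None if the part fits nowhere. `spaces` is a list of (w, h) tuples.
    Score = width fit + height fit, so 2.0 means a perfect fit."""
    best, best_score = None, -1.0
    for i, (sw, sh) in enumerate(spaces):
        if part_w <= sw and part_h <= sh:        # placeable at all?
            score = part_w / sw + part_h / sh    # closeness of fit
            if score > best_score:
                best, best_score = i, score
    return best
```

A forbearing (tolerance-based) variant would also accept spaces whose score is within a tolerance of the best, allowing the layout evaluation to break ties.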
    Dynamic model of public opinion and simulation analysis of complex network evolution
    WANG Jian, WANG Zhihong, ZHANG Lejun
    2018, 38(4):  1201-1206.  DOI: 10.11772/j.issn.1001-9081.2017081949
    To study the complex dynamics in the dissemination of public opinion, a dynamic evolution model was proposed based on transmission dynamics. Firstly, the models of public opinion and its evolution were constructed, and the static solution was obtained through equation transformation. Secondly, the Fokker-Planck equation was introduced to analyze the asymptotic behavior of public opinion evolution, and the steady-state solution was derived and solved. On this basis, the correlation between the complex network and the model was established, and the objective of the simulation study was put forward. Finally, through simulation analysis of the public opinion evolution model with the Fokker-Planck equation, and empirical analysis of real micro-blog public opinion data, the essence of the dissemination and evolution of public opinion in complex networks was studied. The results show that the asymptotic behavior of public opinion network evolution is consistent with the degree distribution, and that the connection pattern of network public opinion dissemination is influenced by nodes. The model can describe the dynamic behavior in the formation and evolution of micro-blog public opinion networks.
    Visual analytics on trajectories of pseudo base-stations based on SMS spam collected from mobile phone users
    PU Yuwen, HU Haibo, HE Lingjun
    2018, 38(4):  1207-1212.  DOI: 10.11772/j.issn.1001-9081.2017102414
    Due to critical security vulnerabilities in the protocols of the Short Message Service (SMS), SMS spam spreads through numerous malicious pseudo base-stations, carrying fraudulent messages or illegal advertisements. SMS spam negatively affects the daily lives of the public and even influences the stability of society. However, given the mobility and concealment of pseudo base-stations, exploring their trajectories and activities is a difficult task. To solve this problem, a visual analytics scheme was proposed to trace pseudo base-stations via SMS spam collected from multiple users by the mobile service provider. Multiple visualization views and a visual analytics system were designed based upon the proposed scheme. Moreover, a case study on the dataset provided by ChinaVis 2017 Challenge I was presented to validate the proposed method and system. The result verifies the feasibility and effectiveness of the proposed method.
    Application of fully online sequential extreme learning machine controller with PID compensation in input-disturbance system adaptive control
    ZHANG Liyou, MA Jun, JIA Huayu
    2018, 38(4):  1213-1217.  DOI: 10.11772/j.issn.1001-9081.2017092207
    To address the difficulty of achieving adaptive control in systems with input disturbance, a design method for a Fully Online Sequential Extreme Learning Machine (FOS-ELM) controller with Proportion-Integral-Derivative (PID) compensation was proposed. Firstly, a dynamic linear model of the system was established, and the FOS-ELM algorithm was used to design the controller and learn its parameters. Secondly, the PID compensation parameters of the system were designed by calculating the output error of the system in combination with the control error. Finally, the parameters of the PID-compensated FOS-ELM controller were adjusted online and used for system control. Experiments were carried out on an engine Air-Fuel Ratio (AFR) control system. The results show that the proposed method achieves adaptive control, reduces the disturbance caused by the disturbed system input, and obviously improves the effective control rate of the system: when the positive and negative interference coefficients are 0.2, the effective control rate is increased from less than 53% to over 93%. In addition, the proposed method is easy to implement and has strong robustness and practical value.
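The PID compensation term used above follows the standard discrete PID form: a correction proportional to the error, its accumulated integral, and its rate of change. The sketch below is a generic discrete PID compensator; the class name, gains, and sample time are illustrative, not values from the paper.

```python
class PID:
    """Discrete PID compensator added to the learned controller output."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float = 0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # accumulated error
        self.prev_err = 0.0   # error at previous step

    def update(self, err: float) -> float:
        """Return the compensation for the current error sample."""
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

At each control step, `update` would be called with the tracking error, and its output added to the FOS-ELM controller's command.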
    Improvement of on-line scoring system for Chinese films
    XIE Difan, DU Zifang
    2018, 38(4):  1218-1222.  DOI: 10.11772/j.issn.1001-9081.2017092254
    Aiming at the problem that the original on-line scoring system for Chinese films does not consider viewers who did not participate in the online survey, an improved scoring system based on the evaluation participation rate was proposed. Firstly, an evaluation criterion for scoring systems, centered on divergence and the divergent effect, was established based on the method of Regression Discontinuity Design (RDD). Secondly, as distinguished from the weighted average method, an improvement method using the participation rate was put forward. Lastly, an empirical study was conducted on films released in China from 2014 to 2016. The results show that the change rate of divergent effects of the improved score after normalization of deviation is less than or approximately 0, while that of the weighted average score is nearly 40%. Therefore, it is reasonable and feasible to analyze the differences between scoring systems by using the divergent point and divergent effects; the improved score, with a smaller divergent effect after normalization, is closer to the film's real reputation and can more intuitively reflect a film's ranking position.
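One simple way a participation rate can correct a raw score is to shrink the mean rating toward a neutral baseline in proportion to how few viewers actually rated the film, so that a high score backed by few raters counts for less. The formula and baseline below are our own illustration of this idea, not the paper's exact method.

```python
def adjusted_score(mean_score: float, participation_rate: float,
                   baseline: float = 5.0) -> float:
    """Shrink a film's raw mean score toward a neutral baseline in
    proportion to the fraction of viewers who did NOT rate it
    (illustrative participation-rate correction on a 0-10 scale)."""
    return participation_rate * mean_score + (1 - participation_rate) * baseline
```

A film rated 9.0 by every viewer keeps its 9.0; the same 9.0 from only half the viewers drops toward the baseline.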
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn