
Table of Contents

    10 August 2016, Volume 36 Issue 8
    Automatic three-way decision clustering algorithm based on k-means
    YU Hong, MAO Chuankai
    2016, 36(8):  2061-2065.  DOI: 10.11772/j.issn.1001-9081.2016.08.2061
    The widely used k-means algorithm produces a two-way decision result: each object either belongs to a cluster or it does not. Such two-way decisions are difficult to apply in situations with uncertainty. Therefore, a three-way decision clustering method was proposed to express the three possible relationships between an object and a cluster: the object definitely belongs to the cluster, the object may belong to the cluster, or the object does not belong to the cluster. Obviously, the two-way decision is a special case of the three-way decision. A new separation index and a clustering validity index were defined from two aspects: the compactness within a cluster and the separation among clusters, taking nearest neighbors into account. Then, an automatic three-way decision clustering algorithm was put forward. The method provides a new way to automatically determine the number of clusters for uncertain information within the framework of the k-means algorithm. Preliminary comparison experiments on artificial and real UCI data sets show that the proposed method is effective.
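    The core idea of a three-way assignment can be sketched on top of ordinary k-means centroids. The thresholds `alpha` and `beta` and the function name below are illustrative assumptions; the paper's separation and validity indices, and its automatic choice of the number of clusters, are not reproduced:

```python
import numpy as np

def three_way_assign(X, centers, alpha, beta):
    """Split each object's relation to its nearest cluster three ways,
    based on two distance thresholds alpha < beta (assumed parameters):
    core = definitely belongs, fringe = may belong, outside = does not."""
    # Pairwise distances: rows are objects, columns are cluster centers.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    dmin = d.min(axis=1)
    core = dmin <= alpha                       # definitely belongs
    fringe = (dmin > alpha) & (dmin <= beta)   # may belong
    outside = dmin > beta                      # does not belong
    return nearest, core, fringe, outside
```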
    K-means clustering algorithm based on adaptive cuckoo search and its application
    YANG Huihua, WANG Ke, LI Lingqiao, WEI Wen, HE Shengtao
    2016, 36(8):  2066-2070.  DOI: 10.11772/j.issn.1001-9081.2016.08.2066
    The original K-means clustering algorithm is strongly affected by the initial cluster centroids and prone to falling into local optima. To solve this problem, an improved K-means clustering algorithm based on Adaptive Cuckoo Search (ACS), namely ACS-K-means, was proposed, in which the cuckoo's search step was adjusted adaptively to improve solution quality and speed up convergence. The performance of ACS-K-means was first evaluated on UCI datasets; the results demonstrate that it surpasses K-means, GA-K-means (K-means based on Genetic Algorithm), CS-K-means (K-means based on Cuckoo Search) and PSO-K-means (K-means based on Particle Swarm Optimization) in clustering quality and convergence rate. Finally, ACS-K-means was applied to building a heat map of urban management cases in the Qingxiu district of Nanning city; the results again show that the proposed method achieves better clustering quality and faster convergence.
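    As a rough illustration of the adaptive-step idea, the sketch below pairs a linear step-size schedule (an assumption; the paper's exact adaptation rule may differ) with the standard Mantegna Levy-flight step used in cuckoo search. The best nest found this way would then seed the K-means centroids:

```python
import math
import numpy as np

def adaptive_step(t, T, s_max=1.0, s_min=0.01):
    """Shrink the Levy-flight step size over the iterations, trading
    early exploration for late exploitation. The linear schedule and
    its bounds are illustrative assumptions."""
    return s_max - (s_max - s_min) * t / T

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-distributed step via Mantegna's algorithm, as used
    in standard cuckoo search."""
    rng = rng or np.random.default_rng(0)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)
```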
    Credibility evaluating method of Chinese microblog based on information fusion
    GAO Mingxia, CHEN Furong
    2016, 36(8):  2071-2075.  DOI: 10.11772/j.issn.1001-9081.2016.08.2071
    To measure the credibility of Chinese microblogs, a framework named Credibility of Chinese Microblog based on Information Fusion (CCM-IF) was proposed after analyzing the impact factors of Chinese microblogs and their pedigree. Firstly, different evaluation methods were designed for three kinds of features: text message, user, and information propagation. Secondly, given the fuzzy nature of credibility, a method based on Dempster-Shafer (D-S) evidence theory was proposed to fuse these features. Thirdly, a series of experiments were conducted on two real datasets from Sina Weibo. Experimental results show that the accuracy of CCM-IF is 10%-20% higher than that of the classical ranking algorithm LMJM (Language Modeling with Jelinek-Mercer smoothing). Therefore, as a static quality indicator, CCM-IF can be used for microblog retrieval ranking and spam microblog filtering.
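    The D-S fusion step can be illustrated with Dempster's rule of combination. The mass assignments in the test are invented toy values, not the paper's feature-derived masses:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal
    elements to masses) with Dempster's rule; mass falling on empty
    intersections (conflict) is renormalized away."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    k = 1.0 - conflict  # normalization constant
    return {s: v / k for s, v in combined.items()}
```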
    Emotion classification for news readers based on multi-category semantic word clusters
    WEN Wen, WU Biao, CAI Ruichu, HAO Zhifeng, WANG Lijuan
    2016, 36(8):  2076-2081.  DOI: 10.11772/j.issn.1001-9081.2016.08.2076
    Analyzing readers' emotions helps to discover negative information on the Internet, and is an important part of public opinion monitoring. Since the main factor leading to different reader emotions is the semantic content of the text, how to extract semantic features of the text becomes an important issue. To solve this problem, initial features related to the semantic content of the text were expressed by the word2vec model. On that basis, representative semantic word clusters were established for each emotion category. Furthermore, a selection strategy was adopted to keep the word clusters that are helpful for emotion classification, so that the traditional text word vector was transformed into a vector over semantic word clusters. Finally, multi-label classification was employed for emotion label learning and prediction. Experimental results demonstrate that the proposed method achieves better accuracy and stability than state-of-the-art methods.
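    The transformation from word-level features to semantic-word-cluster features can be sketched as follows. The word-to-cluster assignment would come from clustering word2vec vectors; all names and values here are illustrative:

```python
import numpy as np

def to_cluster_features(word_counts, word_to_cluster, n_clusters):
    """Project a document's word counts onto semantic word clusters:
    each cluster feature sums the counts of the words assigned to it.
    A sketch of the feature transformation, not the paper's exact model."""
    x = np.zeros(n_clusters)
    for word, count in word_counts.items():
        if word in word_to_cluster:          # words outside all clusters are dropped
            x[word_to_cluster[word]] += count
    return x
```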
    Key information extraction algorithm of news Web pages
    XIANG Jingjing, GENG Guanggang, LI Xiaodong
    2016, 36(8):  2082-2086.  DOI: 10.11772/j.issn.1001-9081.2016.08.2082
    Since existing information extraction algorithms for Web pages lack generality and often miss the title, release time and source of news pages, a new extraction algorithm named newsExtractor was proposed. Firstly, the HTML code of a Web page was parsed into a set of texts paired with line numbers; then, exploiting the fact that the longest sentence belongs to the news content with extremely high probability, the extractor searched for the boundaries of the news content starting from the line containing the longest sentence. Meanwhile, the longest common substring algorithm was used to extract the title, regular expressions and line numbers were used to extract the release time, and the presentation characteristics of the source together with line numbers were used to extract the source. Finally, a data set was built for a comparison experiment with the open-source software newsPaper on extraction accuracy. Experimental results show that newsExtractor outperforms newsPaper in the average accuracy of content, title, release time and source, and has strong generality and robustness.
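    The title-extraction step relies on the classic longest-common-substring dynamic program; a minimal sketch follows (its use for comparing the HTML title text against candidate body lines is an assumption based on the description):

```python
def longest_common_substring(a, b):
    """Classic O(len(a)*len(b)) dynamic program returning the longest
    contiguous substring shared by a and b."""
    best, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1     # extend the common run
                if cur[j] > best:
                    best, best_end = cur[j], i
        prev = cur
    return a[best_end - best:best_end]
```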
    Improved adaptive collaborative filtering algorithm to change of user interest
    HU Weijian, TENG Fei, LI Lingfang, WANG Huan
    2016, 36(8):  2087-2091.  DOI: 10.11772/j.issn.1001-9081.2016.08.2087
    As a recommendation algorithm widely used in industry, collaborative filtering can predict items a user is likely to favor based on the user's historical behavior records. However, traditional collaborative filtering algorithms do not take the drifting of user interests into account, and also have deficiencies regarding the timeliness of recommendations. To solve these problems, the similarity measure was improved by incorporating the way user interests change over time. At the same time, an enhanced time-attenuation model was introduced into the prediction of ratings. By combining these two ideas, the concept drift problem of user interests was addressed while the timeliness of the recommendation algorithm was also taken into account. In simulation experiments, rating prediction accuracy and TopN recommendation accuracy were compared among the proposed algorithm, UserCF, TCNCF, PTCF and TimesSVD++ on different data sets. The experimental results show that the improved algorithm reduces the Root Mean Square Error (RMSE) of rating prediction and outperforms all the compared algorithms in TopN recommendation accuracy.
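    A minimal sketch of time attenuation applied to a historical rating: the exponential form and the half-life parameter are assumptions for illustration, not the paper's enhanced attenuation model:

```python
import math

def decayed_rating(rating, t_rated, t_now, half_life=30.0):
    """Weight a historical rating by an exponential time-decay factor so
    that older behavior contributes less to similarity and prediction.
    half_life is in the same time unit as the timestamps (assumed days)."""
    age = t_now - t_rated
    return rating * math.exp(-math.log(2) * age / half_life)
```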
    Mining Ceteris Paribus preference from preference database
    XIN Guanlin, LIU Jinglei
    2016, 36(8):  2092-2098.  DOI: 10.11772/j.issn.1001-9081.2016.08.2092
    Focusing on the issue that traditional recommendation systems require users to give an explicit preference matrix (U-I matrix) before automated techniques can capture user preferences, a method for mining an Agent's preference information from a preference database was introduced. From the perspective of knowledge discovery, a k-order preference mining algorithm named kPreM was proposed based on Ceteris Paribus rules (CP rules). The k-order CP rules were used to prune the information in the preference database, which reduced the number of database scans and increased the efficiency of preference mining. Then a general graph model, CP-nets (Conditional Preference networks), was used as a tool to reveal that user preferences can be approximated by corresponding CP-nets. Theoretical analysis and simulation results show that user preferences are conditional preferences. In addition, mining the CP-nets preference model provides a theoretical basis for designing personalized recommendation systems.
    Pseudo relevance feedback based on sorted retrieval result
    YAN Rong, GAO Guanglai
    2016, 36(8):  2099-2102.  DOI: 10.11772/j.issn.1001-9081.2016.08.2099
    Focusing on the low quality of the expansion source in traditional Pseudo Relevance Feedback (PRF) algorithms, which leads to low retrieval performance, a sorting model based on the retrieval result, namely REM, was proposed. Firstly, the first-pass retrieval result was taken as a pseudo relevant set. Secondly, the documents in the pseudo relevant set were re-ranked by maximizing their relevance to the user's query intention while minimizing the similarity between documents. Finally, the top-ranked documents after re-ranking were used as the expansion source for the second-pass retrieval. The experimental results show that, compared with two classical PRF methods, the proposed model improves retrieval performance and obtains feedback documents more relevant to the user's query intention.
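    The re-ranking rule (maximize relevance to the query intention, minimize inter-document similarity) is close in spirit to maximal marginal relevance; the greedy sketch below works under that assumption and does not reproduce REM's exact rules:

```python
def rerank(rel, sim, lam=0.7, k=3):
    """Greedily pick k documents, scoring each candidate by
    lam * relevance - (1 - lam) * max similarity to already-picked docs.
    rel: {doc: relevance}; sim: {(doc_i, doc_j): similarity}."""
    selected = []
    candidates = set(rel)
    while candidates and len(selected) < k:
        def score(d):
            # Redundancy = strongest similarity to any selected document.
            red = max((sim.get((d, s), sim.get((s, d), 0.0))
                       for s in selected), default=0.0)
            return lam * rel[d] - (1 - lam) * red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```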
    Knowledge mining and visualizing for scenic spots with probabilistic topic model
    XU Jie, FAN Yushun, BAI Bing
    2016, 36(8):  2103-2108.  DOI: 10.11772/j.issn.1001-9081.2016.08.2103
    Since tourism texts about destinations contain semantic noise, and the relations among different scenic spots cannot be displayed intuitively, a new scenic spot-topic model based on the probabilistic topic model was proposed. The model assumes that one document covers several correlated scenic spots, and a special scenic spot named the "global scenic spot" was introduced to filter semantic noise. Then the Gibbs sampling algorithm was employed to learn the maximum a posteriori estimates of the model and obtain a topic distribution vector for each scenic spot. A clustering experiment was conducted to indirectly evaluate the model and to analyze the impact of the "global scenic spot". The results show that the proposed model outperforms baseline models such as TF-IDF (Term Frequency-Inverse Document Frequency) and Latent Dirichlet Allocation (LDA), and that the "global scenic spot" improves the modeling effect significantly. Finally, a scenic spot association graph was employed to display the results visually.
    Taxi unified recommendation algorithm based on region partition
    LYU Hongjin, XIA Shixiong, YANG Xu, HUANG Dan
    2016, 36(8):  2109-2113.  DOI: 10.11772/j.issn.1001-9081.2016.08.2109
    In extreme weather or heavy traffic, passengers may be unable to get a taxi to their destination quickly; thus a unified taxi recommendation algorithm based on region partition was proposed to provide both ordinary taxi service and carpooling service. First of all, regions were used as the identifiers of a journey, making journey matching possible. Secondly, in the carpooling service, the similar routes of two passengers were matched in real time to enable ride sharing. Finally, the taxi with the minimum percentage of bypass time was recommended to the user. Global Positioning System (GPS) data of 14747 taxis was used to evaluate the proposed algorithm. Compared with the CallCab system, the total mileage of the proposed algorithm dropped by about 10% while the carpooling time rose by only 6% on average, and the total passenger mileage was reduced by 30%. Experimental results show that the proposed algorithm not only significantly reduces automotive exhaust emission, but also performs better in terms of time consumption.
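    Using a region as the identifier of a journey can be sketched with a simple grid partition, so that trips are matched by (origin-region, destination-region) pairs. The uniform cell size is an assumption, not the paper's partition method:

```python
def region_id(lat, lon, lat0, lon0, cell=0.01):
    """Map a GPS point to a grid-cell index relative to an origin
    (lat0, lon0); trips sharing both endpoint cells can be candidates
    for carpool matching. cell is the assumed cell width in degrees."""
    return (int((lat - lat0) // cell), int((lon - lon0) // cell))
```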
    Performance optimization of wireless network based on canonical causal inference algorithm
    HAO Zhifeng, CHEN Wei, CAI Ruichu, HUANG Ruihui, WEN Wen, WANG Lijuan
    2016, 36(8):  2114-2120.  DOI: 10.11772/j.issn.1001-9081.2016.08.2114
    Existing wireless network performance optimization methods are mainly based on correlation analysis between indicators, and cannot effectively guide the design of optimization strategies or other interventions. Thus, a Canonical Causal Inference (CCI) algorithm was proposed and applied to wireless network performance optimization. Firstly, since wireless network performance is usually described by numerous correlated indicators, the Canonical Correlation Analysis (CCA) method was employed to extract atomic events from the indicators. Then, causal inference was conducted on the extracted atomic events to find the causality among them. The two stages were iterated to determine the causal network of the atomic events, providing a robust and effective basis for wireless network performance optimization. The validity of CCI was demonstrated by simulation experiments, and some valuable causal relations among wireless network indicators were found in the data of more than 30000 mobile base stations of a city.
    Online incentive mechanism based on reputation for mobile crowdsourcing system
    WANG Yingjie, CAI Zhipeng, TONG Xiangrong, PAN Qingxian, GAO Yang, YIN Guisheng
    2016, 36(8):  2121-2127.  DOI: 10.11772/j.issn.1001-9081.2016.08.2121
    In the big data environment, mobile crowdsourcing systems have become a research hotspot in Mobile Social Networks (MSN). However, the selfishness of individuals in networks may cause distrust problems in a mobile crowdsourcing system. In order to motivate individuals to choose trustworthy strategies, an online incentive mechanism based on reputation for mobile crowdsourcing systems, named RMI, was proposed. Combining evolutionary game theory with the Wright-Fisher model from biology, the evolution trend of the mobile crowdsourcing system was studied. To solve the free-riding and false-reporting problems, reputation updating methods were established. On this basis, an online incentive mechanism was built that motivates workers and requesters to choose trustworthy strategies. Simulation results verify the effectiveness and adaptability of the proposed incentive mechanism. Compared with the traditional social norm-based reputation updating method, RMI improves the trust degree of the mobile crowdsourcing system effectively.
    Fair allocation of multi-dimensional resources based on intelligent optimization algorithm in heterogeneous cloud environment
    LIU Xi, ZHANG Xiaolu, ZHANG Xuejie
    2016, 36(8):  2128-2133.  DOI: 10.11772/j.issn.1001-9081.2016.08.2128
    Resource allocation strategy has been a hot and difficult research topic in cloud computing. Concerning the fair allocation of multi-dimensional resources in a heterogeneous cloud environment, two resource allocation strategies were proposed by combining the Genetic Algorithm (GA) and the Differential Evolution (DE) algorithm, taking into account both fairness and efficiency. The solution matrix was improved to convert the Dominant Resource Fairness allocation in Heterogeneous systems (DRFH) model into an Integer Linear Programming (ILP) model; a Max Task Match (MTM) based algorithm was used to generate initial solutions, and a revising operation was introduced to turn infeasible solutions into feasible ones, which accelerates convergence toward the optimal solution. Experimental results demonstrate that the multi-dimensional resource fair allocation strategies based on GA and DE can obtain near-optimal solutions; in terms of maximizing the minimum global dominant share and resource utilization, they are superior to Best-Fit DRFH and Distributed-DRFH, and adapt better to the resource requirements of different task types.
    Behavior representation of multi-athletes for football game video based on scale adaptive local spatial and temporal characteristics
    WANG Zhiwen, JIANG Lianyuan, WANG Yuhang, WANG Rifeng, ZHANG Canlong, HUANG Zhenjin, WANG Pengtao
    2016, 36(8):  2134-2138.  DOI: 10.11772/j.issn.1001-9081.2016.08.2134
    In order to improve the accuracy of behavior recognition of multiple athletes in football game video, a behavior representation method based on scale-adaptive local spatio-temporal characteristics was put forward, in which spatio-temporal interest points were used to represent multi-athlete behavior. Firstly, multi-athlete behavior in a football video sequence was regarded as a collection of spatio-temporal interest points in three-dimensional space. Secondly, the set of spatio-temporal interest points was quantified into a histogram of fixed dimension (i.e., spatio-temporal words) by histogram quantization. Finally, a spatio-temporal codebook was generated by using the K-means clustering algorithm; before codebook generation, each spatio-temporal interest point was normalized to ensure scaling and translation invariance. Experimental results show that the proposed method greatly reduces the computational cost of the algorithm while significantly improving recognition accuracy.
    Deep Web entity matching method based on twice-merging
    CHEN Lijun
    2016, 36(8):  2139-2143.  DOI: 10.11772/j.issn.1001-9081.2016.08.2139
    Concerning the limitations of the Weighted Edge Pruning (WEP) method in accuracy and matching efficiency, a Deep Web entity matching method based on twice-merging was proposed by introducing the concepts of self-matching and merging. Firstly, the attribute values of each object were extracted to regroup objects so that objects with the same attribute value are gathered together, dividing all objects into blocks efficiently. Secondly, the matching values between objects within the same block were calculated for pruning, self-matching detection and merging of explicit matches, generating preliminary clusters. Finally, based on these preliminary clusters, further matching relationships were discovered by using message passing between objects within a cluster together with objects' attribute similarity values, which triggered a new round of cluster merging and updating. Experimental results show that, compared with the WEP method, the proposed method improves matching accuracy by detecting self-matching to automatically distinguish matching relationships and choose a proper matching method, gradually refining the merging process; at the same time, blocking and pruning effectively reduce the matching space and improve system efficiency.
    Functional homogeneity analysis on topology module of human interaction network for disease classification
    GAO Panpan, WANG Ning, ZHOU Xuezhong, LIU Guangming, WANG Huixin
    2016, 36(8):  2144-2149.  DOI: 10.11772/j.issn.1001-9081.2016.08.2144
    Since there has been no research on the relationship between disease classification and the functional homogeneity of protein functional modules in network medicine, the following work was carried out. Firstly, a gene relationship network was constructed based on the MeSH database and the String9 database. Secondly, the gene relationship network was divided using optimized modularity-based module classification methods (such as the BGLL and Nonnegative Matrix Factorization (NMF) clustering algorithms). Thirdly, GO enrichment analysis was carried out on the divided modules; by comparing the GO enrichment of high- and low-pathogenicity topological modules, important biological suggestions for disease classification could be drawn from the characteristics of protein functional modules in terms of biological process, cellular component, molecular function and so on. Finally, the functional characteristics of topological modules for disease classification were analyzed; data about the functional features of each module were obtained by analyzing network topology properties such as average degree, density and average shortest path length, and the correlation between disease classification and functional modules was further revealed.
    Protein subcellular multi-localization prediction based on three-layer ensemble multi-label learning
    QIAO Shanping, YAN Baoqiang
    2016, 36(8):  2150-2156.  DOI: 10.11772/j.issn.1001-9081.2016.08.2150
    Since multi-label learning and ensemble learning have not been maturely applied to the problem of predicting protein subcellular multi-localization, an ensemble multi-label learning based method was studied. Firstly, from the view of combining multi-label learning and ensemble learning, a three-layer ensemble multi-label learning framework was proposed. Learning algorithms and classifiers were each categorized into three groups corresponding to the three layers of the framework. In this framework, binary classification learning, multi-class classification learning, multi-label learning and ensemble learning are all integrated effectively, forming a general-purpose ensemble multi-label learning model. Secondly, a learning system with good extensibility was designed using object-oriented technology and the Unified Modeling Language (UML), which enhances the functionality and performance of the system. Finally, by extending the model, a Java-based learning system was developed and successfully applied to predicting proteins' multiple subcellular localizations. Test results on a gram-positive bacteria dataset indicate that the system functions are operable and the prediction performance is good; the proposed system may become a useful tool for predicting protein multiple subcellular localizations.
    Evolutionary game theory based clustering algorithm for multi-target localization in wireless sensor network
    LIU Baojian, ZHANG Xiaoyi, LI Qing
    2016, 36(8):  2157-2162.  DOI: 10.11772/j.issn.1001-9081.2016.08.2157
    Aiming at the problem that network lifetime is reduced by the high energy consumption of nodes covered by multiple radiation sources in a large-scale Wireless Sensor Network (WSN), a new clustering algorithm based on Evolutionary Game Theory (EGT) was proposed. A non-cooperative game model was established by mapping the search space of the optimal node sets to the strategy space of the game and using the utility function of the game as the objective function; the optimization objective was then achieved through Nash equilibrium analysis and a perturb-recover process of equilibrium states. Furthermore, a detailed clustering algorithm was presented to group the optimal node sets into clusters for subsequent localization. The proposed algorithm was compared with the nearest-neighbor algorithm and a clustering algorithm based on Discrete Particle Swarm Optimization (DPSO) in localization accuracy and network lifetime under the RSSI (Received Signal Strength Indication)/TDOA (Time Difference of Arrival) two-round cooperative localization scheme. Simulation results show that the proposed algorithm decreases the energy consumption of the nodes covered by multiple radiation sources, prolongs the network lifetime and guarantees precise localization.
    Dynamic spectrum allocation algorithm of HF radio access network based on intelligent frequency hopping
    DUAN Ruijie, YAO Fuqiang, LI Yonggui, GUO Pengcheng
    2016, 36(8):  2163-2169.  DOI: 10.11772/j.issn.1001-9081.2016.08.2163
    Because the existing fixed spectrum allocation method for HF radio access networks cannot satisfy the new requirements of intelligent frequency hopping technology, the communication demands of an HF radio access network based on intelligent frequency hopping were analyzed, and a dynamic spectrum allocation strategy and algorithm for intelligent frequency-hopping HF radio access networks were proposed. Firstly, mobile users and their base station were regarded as a subnet; then the spectrum allocation among subnets was modeled using graph coloring theory. Finally, an allocation strategy and algorithm combined with the communication demand were put forward to complete the spectrum allocation. Simulation results show that the proposed strategy and algorithm can effectively solve the application problems of intelligent frequency hopping in HF radio access networks. Compared with fixed-frequency communication under fixed spectrum allocation, dynamic spectrum allocation obviously increases network benefit, subnet satisfaction, network fairness, the number of supported users and spectrum efficiency, while reducing the mutual interference ratio.
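    Modeling subnets as vertices and mutual interference as edges reduces allocation to graph coloring; a greedy coloring sketch (not the paper's full strategy and algorithm) looks like this:

```python
def greedy_spectrum_allocation(adjacency, channels):
    """Greedy graph-coloring sketch of the allocation idea: interfering
    subnets (adjacent vertices) must not share a channel.
    adjacency: {subnet: set of interfering subnets}.
    Returns {subnet: channel}, with None when no channel is free."""
    assignment = {}
    # Color highest-degree (most-constrained) subnets first.
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):
        used = {assignment[nb] for nb in adjacency[node] if nb in assignment}
        assignment[node] = next((c for c in channels if c not in used), None)
    return assignment
```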
    Beamforming based localization algorithm in 60GHz wireless local area networks
    LIU Xing, ZHANG Hao, XU Lingwei
    2016, 36(8):  2170-2174.  DOI: 10.11772/j.issn.1001-9081.2016.08.2170
    Concerning the difficulty of ranging with 60GHz signals under Non-Line-of-Sight (NLOS) conditions, a new positioning algorithm based on beamforming in Wireless Local Area Networks (WLAN) was proposed. Firstly, beamforming was applied to search for the strongest path by steering the receiving antennas along the channel path with the maximum power, which enhanced the robustness of the search and expanded the location coverage. Secondly, the time-delay bias under NLOS conditions was modeled as a Gaussian random variable to reconstruct the NLOS measurements. Finally, to further improve positioning accuracy, an outlier detection mechanism was introduced by setting a reasonable detection threshold. Localization simulations were conducted in Matlab using the STAs-STAs (STAtions-STAtions) channel model. Under NLOS conditions, the Time of Arrival (TOA) localization algorithm based on traditional coherent estimation achieved an average positioning error of about 2m and a probability of 1m localization accuracy of just 0.5%, while the proposed algorithm achieved an average positioning error of 1.02cm and a probability of 1m localization accuracy of 94%. Simulation results show that beamforming is an effective solution to 60GHz localization under NLOS conditions, effectively improving localization accuracy and the probability of successful localization.
    Power control mechanism for vehicle status message in VANET
    XU Zhexin, LI Shijie, LIN Xiao, WU Yi
    2016, 36(8):  2175-2180.  DOI: 10.11772/j.issn.1001-9081.2016.08.2175
    When packets are broadcast with fixed power in a Vehicular Ad-Hoc NETwork (VANET), the wireless channel may not be allocated reasonably. To solve this problem, a power control mechanism that adapts to variations in vehicle density was proposed. The direct neighbor list of each node is constructed and updated in each power control period, and the power used to transmit vehicle status messages is adjusted according to the locations of the direct neighbors so as to cover all of them; thus the wireless channel can be allocated more reasonably and routing performance can also be optimized. The validity of the proposed mechanism was proved by simulation results, which also show that the mechanism adjusts the transmission power according to vehicle density, reduces the channel busy ratio and enhances the packet delivery ratio among direct neighbors, thereby ensuring the effective transmission of safety information.
    Adaptive priority method of public bus under Internet of vehicles
    WANG Yongsheng, TAN Guozhen, LIU Mingjian, DING Nan
    2016, 36(8):  2181-2186.  DOI: 10.11772/j.issn.1001-9081.2016.08.2181
    Focusing on problems in bus priority systems such as hysteresis and the failure to fully exploit road capacity, an Adaptive Bus Priority (ABP) model based on the Internet of Vehicles (IOV) was proposed. First, exploiting the powerful communication capability of IOV and the idea of time division multiplexing, road multiplexing control rules for ordinary buses were designed, and "space priority" was achieved by setting up a virtual bus lane. Secondly, real-time data acquisition of arriving vehicles was used instead of historical data to solve the hysteresis problem. Finally, a bus priority signal control model was designed, in which bus priority is realized by inserting short phases that give public transit the right of way. VISSIM software was used to design simulation experiments comparing the ABP model with the traditional model. Simulation results indicate that the ABP model can improve bus operation efficiency and intersection capacity without greatly affecting other vehicles.
    Design and development of power line carrier communication system for variable refrigerant volume air-conditioning systems
    HONG Weiwei, XU Zheng
    2016, 36(8):  2187-2191.  DOI: 10.11772/j.issn.1001-9081.2016.08.2187
    In order to reduce the cost and simplify the installation of Variable Refrigerant Volume (VRV) air-conditioning systems, a communication system was constructed based on Power Line Carrier Communication (PLCC) technology. First, a solution based on narrowband PLCC was proposed according to the communication requirements of the system control. Then, a four-layer communication protocol comprising the physical layer, Medium Access Control (MAC) layer, network layer and application layer was designed, mainly from the two aspects of channel access control and networking control. Three channel access algorithms based on the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) design idea and a networking algorithm for a star network topology were put forward. Finally, networking tests, system communication tests and anti-disturbance tests were performed on an 11-node VRV system. The test results show that the proposed algorithms and solution satisfy the real-time control requirements of the VRV system and have strong anti-disturbance ability. In addition, thanks to the openness of the designed communication protocol, it can be modified for different requirements and applied to a variety of short-distance, relay-free real-time control areas such as smart homes.
    Using noise to improve information transmission in optimal matching array stochastic resonance system
    WANG Youguo, DONG Hongcheng, LIU Jian
    2016, 36(8):  2192-2196.  DOI: 10.11772/j.issn.1001-9081.2016.08.2192
    Focusing on the issue that noise degrades symbol transmission in digital communication systems, to improve system reliability and reduce the Bit Error Rate (BER) of the received signal, a new Stochastic Resonance (SR) system based on optimal matching and parallel array theory was proposed. Firstly, parallel array theory was used to improve the stochastic resonance effect of a single bistable system. Secondly, the optimal matching method for SR was also applied to array systems for weak signal detection. Finally, an analytical expression of the Signal-to-Noise Ratio (SNR) gain of the optimal matching stochastic resonance bistable system was derived, and the influence of the array size on the BER was analyzed. In comparison experiments with a single stochastic resonance system, the performance of the optimal matching array stochastic resonance system in detecting weak signals against a strong noise background was improved: the SNR gain of the output signal was significantly higher than 1, the BER was significantly reduced, and SR performed better as the number of units in the array increased. The theoretical analysis and simulation results show that the optimal matching array stochastic resonance system can effectively improve the reliability of digital communication systems in practical engineering.
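    The bistable dynamics behind the parallel-array idea can be illustrated with a toy simulation. This is a generic sketch, not the authors' optimally matched system: the drift parameters a and b, the noise level and the weak input signal are all hypothetical.

```python
import numpy as np

def bistable_array_sr(signal, n_units=8, a=1.0, b=1.0,
                      noise_std=0.5, dt=0.01, seed=0):
    """Drive n_units bistable systems dx/dt = a*x - b*x**3 + s(t) + noise
    (Euler-Maruyama integration) and return the array-averaged output."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_units)
    out = np.empty(len(signal))
    for t, s in enumerate(signal):
        noise = rng.normal(0.0, noise_std, n_units)
        x = x + dt * (a * x - b * x**3 + s) + np.sqrt(dt) * noise
        out[t] = x.mean()  # parallel-array averaging suppresses per-unit noise
    return out

t = np.arange(0, 20, 0.01)
weak = 0.3 * np.sin(0.5 * t)            # weak periodic input
y = bistable_array_sr(weak, n_units=16)
```

    Averaging the outputs of independently perturbed units is the parallel-array step; matching the noise intensity optimally to the signal is what the paper adds on top of this.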
    Two-dimensional direction-of-arrival estimation based on sparse representation of reduced covariance matrix
    LI Wenjie, YANG Tao, MEI Yanying
    2016, 36(8):  2197-2201.  DOI: 10.11772/j.issn.1001-9081.2016.08.2197
    Since the computational load of Two-Dimensional Direction-Of-Arrival (2D-DOA) estimation using sparse reconstruction is high, a 2D-DOA estimation algorithm based on sparse representation of the reduced covariance matrix was proposed. Firstly, a redundant dictionary of manifold vectors was constructed using the space angle, which mapped the azimuth and pitch angles from a two-dimensional space to a one-dimensional space; consequently, the length of the dictionary and the computational complexity were greatly reduced, and the pitch and azimuth angles could be automatically matched. Secondly, the sparse representation model of the sampled covariance matrix was improved to reduce its dimension. Then, the confidence interval of the constraint residual was obtained from the residual constraint characteristics of the sparse reconstruction of the covariance matrix, avoiding the choice of regularization parameters. Finally, 2D-DOA estimation was realized via a convex optimization package. Simulation results show that the incident angles can be accurately estimated when the number of selected covariance matrix columns reaches a threshold (3 in the presence of 2 incident signals). Compared with the feature vector method based on the space angle, the estimation accuracy of the proposed method is higher when the Signal-to-Noise Ratio (SNR) is relatively low (<5dB), and slightly lower under a small number of snapshots (<100); both methods have similar estimation accuracy for small angle intervals.
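    The paper's exact space-angle mapping is not given in the abstract; the sketch below only illustrates the dictionary-size argument for a hypothetical 8-sensor uniform linear array. Gridding a single angle with G points yields a G-column dictionary, whereas gridding azimuth and pitch separately would yield G×G columns.

```python
import numpy as np

def steering_dictionary(n_sensors, grid, spacing=0.5):
    """Far-field steering vectors a(beta) = exp(-j*2*pi*spacing*n*cos(beta))
    for a uniform linear array; one unit-modulus column per grid angle."""
    n = np.arange(n_sensors)[:, None]
    return np.exp(-2j * np.pi * spacing * n * np.cos(grid)[None, :])

grid = np.linspace(0, np.pi, 181)   # 1-D angle grid (radians)
A = steering_dictionary(8, grid)    # 181 columns, not 181*181
```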
    Parallel high utility pattern mining algorithm based on cluster partition
    XING Shuning, LIU Fang'ai, ZHAO Xiaohui
    2016, 36(8):  2202-2206.  DOI: 10.11772/j.issn.1001-9081.2016.08.2202
    The existing algorithms generate a large number of in-memory utility pattern trees when mining high utility patterns in large-scale databases, occupying much memory space and losing some high utility itemsets. Using the Hadoop platform, a parallel high utility pattern mining algorithm based on cluster partition, named PUCP, was proposed. Firstly, a clustering method was introduced to divide the transaction database into several sub-datasets. Secondly, the sub-datasets were allocated to the nodes of Hadoop to construct utility pattern trees. Finally, the conditional pattern bases of the same item generated from the utility pattern trees were allocated to the same node, reducing the number of crossover operations of each node. The theoretical analysis and experimental results show that, compared with the mainstream serial high utility pattern mining algorithm UP-Growth (Utility Pattern Growth) and the parallel algorithm HUI-Growth (Parallel mining High Utility Itemsets by pattern-Growth), the mining efficiency of PUCP is increased by 61.2% and 16.6% respectively without affecting the reliability of the mining results; moreover, the memory pressure of large-scale data mining can be effectively relieved by using the Hadoop platform.
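    As background for what PUCP mines (the Hadoop partitioning itself is omitted), a minimal definition of itemset utility: purchased quantity times unit profit, summed over the transactions containing the itemset. The transactions and profit table below are made up for illustration.

```python
def itemset_utility(itemset, transactions, profit):
    """Sum the utility (quantity * unit profit) of itemset over every
    transaction that contains all of its items."""
    total = 0
    for tx in transactions:
        if all(i in tx for i in itemset):
            total += sum(tx[i] * profit[i] for i in itemset)
    return total

# Each transaction maps item -> purchased quantity.
transactions = [
    {"a": 2, "b": 1},
    {"a": 1, "c": 3},
    {"a": 1, "b": 2, "c": 1},
]
profit = {"a": 5, "b": 3, "c": 1}
u_ab = itemset_utility(("a", "b"), transactions, profit)  # (10+3) + (5+6) = 24
```

    An itemset is "high utility" when this sum reaches a user-given minimum utility threshold; unlike frequency-based support, the measure is not anti-monotone, which is what makes the mining hard.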
    Optimal QoS-aware service selection approach considering semantics and transactional properties
    YANG Wanchun, ZHANG Chenxi, MU Bin
    2016, 36(8):  2207-2212.  DOI: 10.11772/j.issn.1001-9081.2016.08.2207
    Optimal service selection satisfying a Service Level Agreement (SLA) is an NP-hard problem. In order to address the dimension and granularity problems, a comprehensive service selection model was proposed which considered semantic link degree, Quality of Service (QoS) and transactional properties, and a coding strategy was adopted to solve the multi-granularity problem. To reduce the computational cost, a hybrid optimization algorithm based on clonal selection and Genetic Algorithm (GA) was proposed. Firstly, a dynamic fitness function was adopted to drive the evolution toward constraint satisfaction. Secondly, knowledge-oriented crossover and mutation operators based on priority were designed to ensure the transactional property of the composite service. Finally, clonal selection was combined with the GA to improve the search ability. In the simulation experiments, the proposed algorithm achieved better accuracy and success rate than GA; its time cost was slightly higher than that of GA but much lower than that of the exhaustive search algorithm. The experimental results show that the proposed algorithm can guarantee the QoS of service selection at low time cost.
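    The clonal-selection ingredient can be sketched in a few lines. This is a generic toy version, not the paper's operator set: the priority-based crossover/mutation and the SLA penalty terms of the dynamic fitness are omitted, and the fitness function below is hypothetical.

```python
import random

def clonal_selection(fitness, dim, pop=20, clones=5, gens=50,
                     sigma=0.3, seed=1):
    """Toy clonal selection: clone the best candidates, mutate the clones
    (hypermutation), and keep the fittest survivors each generation."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        offspring = []
        for ind in population[:clones]:          # clone the elites
            for _ in range(clones):
                offspring.append([x + rng.gauss(0, sigma) for x in ind])
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop]
    return population[0]

# Maximize a simple concave score; the optimum is x = (0, 0).
best = clonal_selection(lambda x: -sum(v * v for v in x), dim=2)
```

    In the paper's hybrid, these cloning and hypermutation steps are interleaved with the GA's knowledge-oriented crossover and mutation to sharpen local search.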
    Design and implementation of high performance floating-point multiply acculate for M-DSP
    CHE Wenbo, LIU Hengzhu, TIAN Tian
    2016, 36(8):  2213-2218.  DOI: 10.11772/j.issn.1001-9081.2016.08.2213
    In order to meet the performance, power and area requirements of floating-point computing in M-DSP, the architecture of an M-DSP and the characteristics of all instructions related to its floating-point computing were analyzed, and a high-performance, low-power Floating-point Multiply ACcumulate unit (FMAC) was proposed. The proposed FMAC adopts separate single-precision and double-precision datapaths organized as a 6-stage pipeline; its key modules, including the multiplier and the shifter, were designed for reuse, and it implements single- and double-precision floating-point multiplication, multiply-add and multiply-sub, floating-point complex multiplication, dot product, and other operations. The proposed FMAC was fully verified and synthesized using Synopsys Design Compiler with a 45 nm process. Experimental results show that the frequency of the proposed FMAC reaches 1GHz with an area of 36856μm2; compared with the FMAC of FT-XDSP, the area is reduced by 12.95% and the critical path is shortened by 2.17%.
    Broadcast authentication using cooperative sensor nodes
    ZENG Xiaofei, LU Jianzhu, WANG Jie
    2016, 36(8):  2219-2224.  DOI: 10.11772/j.issn.1001-9081.2016.08.2219
    Since broadcast authentication based on public-key digital signatures in Wireless Sensor Networks (WSN) costs large amounts of energy while sensor nodes have limited resources, a broadcast authentication scheme based on the mutual cooperation of sensor nodes was proposed to save the energy consumption of sensor nodes and speed up signature authentication. First of all, a user broadcast the signature information into the group network of the WSN, but omitted the y-coordinate of the point in the signature. Then, according to the x-coordinate of the point and the elliptic curve equation, the high-energy nodes in the group network computed the y-coordinate and broadcast it to the normal nodes in the group; at the same time, using the vBNN-IBS (variant of Bellare-Namprempre-Neven Identity-Based Signature) digital signature, the high-energy nodes authenticated the signature information broadcast by the user and rebroadcast the valid signature information. Finally, after receiving the y-coordinate, the normal nodes in the group network used the elliptic curve equation to verify the correctness and reliability of the y-coordinate, performed the same signature authentication as the high-energy nodes, and then rebroadcast the valid signature information. In addition, the proposed scheme minimized the length of the Authorization Revocation List (ARL) by integrating immediate revocation and automatic revocation. Simulation results show that, compared with another improved vBNN-IBS scheme accelerated by the mutual cooperation of sensor nodes, the energy consumption and the total authentication time of the proposed scheme decrease by 41% and 66% respectively when the number of data packets received by the authenticating node from its neighbour nodes reaches a certain amount.
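    The y-coordinate recovery step relies on the fact that y satisfies y² = x³ + ax + b (mod p); when p ≡ 3 (mod 4), a square root, if one exists, is rhs^((p+1)/4) mod p. Below is a toy sketch on a small hypothetical curve; real schemes use standardized curves and must disambiguate the two roots, e.g. with a transmitted sign bit.

```python
def recover_y(x, a, b, p):
    """Given x on y^2 = x^3 + a*x + b (mod p) with p % 4 == 3, return the
    two candidate y-coordinates, or None if x is not on the curve."""
    rhs = (x**3 + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)       # candidate square root
    if (y * y) % p != rhs:
        return None                     # rhs is not a quadratic residue
    return {y, (p - y) % p}

# Toy curve y^2 = x^3 + x + 1 over GF(23): x = 3 gives y in {10, 13}.
ys = recover_y(3, 1, 1, 23)
```

    Omitting y roughly halves the broadcast point size, and the normal nodes can cheaply check a received y by squaring it, which is the verification the abstract describes.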
    Asymmetric proxy re-encryption scheme of efficient access to outsourcing data for mobile users
    HAO Wei, YANG Xiaoyuan, WANG Xu'an, ZHANG Yingnan, WU Liqiang
    2016, 36(8):  2225-2230.  DOI: 10.11772/j.issn.1001-9081.2016.08.2225
    In order to enable mobile devices to decrypt outsourced data stored in the cloud more conveniently and quickly, on the basis of the Identity-Based Broadcast Encryption (IBBE) system and the Identity-Based Encryption (IBE) system, and using the outsourced decryption technique proposed by Green et al. (GREEN M, HOHENBERGER S, WATERS B. Outsourcing the decryption of ABE ciphertexts. Proceedings of the 20th USENIX Conference on Security. Berkeley: USENIX Association, 2011: 34), a Modified Asymmetric Cross-cryptosystem Proxy Re-Encryption (MACPRE) scheme across encryption systems was proposed. The proposed scheme is well suited to mobile devices with limited computing power securely sharing data stored in the cloud. When a mobile user decrypts re-encrypted data, the plaintext can be recovered by performing one exponentiation and one bilinear pairing, which greatly improves decryption efficiency and saves power on the mobile device. The security of the proposed scheme can be reduced to the security of the underlying IBE and IBBE schemes. The theoretical analysis and experimental results show that the proposed scheme allows mobile devices to decrypt data stored in the cloud in less time, easing the problem of their limited computing power, and is thus more practical.
    Trusted and anonymous authentication protocol for mobile networks
    ZHANG Xin, YANG Xiaoyuan, ZHU Shuaishuai
    2016, 36(8):  2231-2235.  DOI: 10.11772/j.issn.1001-9081.2016.08.2231
    The lack of trusted verification of mobile terminals may affect the security of mobile networks. A trusted anonymous authentication protocol for mobile networks was therefore proposed, in which both the user identity and the platform integrity are authenticated when a mobile terminal accesses the network. On the basis of the trusted network connection architecture, the concrete steps of trusted roaming authentication and trusted handover authentication were described in detail. The authentication uses pseudonyms and the corresponding public/private keys to protect user anonymity and privacy. The security analysis indicates that the proposed protocol provides mutual authentication, strong user anonymity, untraceability and conditional privacy preservation; moreover, the first roaming authentication requires two rounds of communication while the handover authentication needs only one. The analytic comparisons show that the proposed protocol is efficient in terms of terminal computation and rounds of message exchange.
    IMTP: a privacy protection mechanism for MIPv6 identity and moving trajectory
    WU Huiting, WANG Zhenxing, ZHANG Liancheng, KONG Yazhou
    2016, 36(8):  2236-2240.  DOI: 10.11772/j.issn.1001-9081.2016.08.2236
    Privacy protection for identity and trajectory has become a hot topic in the research and application of Mobile IPv6 (MIPv6). Targeting the problem that the mobility messages and application data of a mobile node may be subjected to malicious analysis that exposes its identity and allows it to be located and tracked, an MIPv6 address privacy protection mechanism named IMTP was proposed, which hides identity and prevents location tracking. In the first place, by applying a self-defined mobility message option, Encryptedword, and XOR-transforming it with the home address, IMTP achieved privacy protection of the MIPv6 node identity. In the second place, by means of mutual authentication between arbitrary nodes, the mechanism randomly appointed a location proxy and hid the care-of address of the mobile node, thereby achieving privacy protection of the MIPv6 node trajectory. Simulation results indicate that IMTP provides high-quality privacy protection at low resource cost. Meanwhile, it modifies the standard MIPv6 protocol only slightly, supports route optimization well, and possesses flexible deployment, strong scalability and other advantages. The dual privacy protection for identity and trajectory provided by IMTP helps reduce the probability that the communication data of a specific mobile node is intercepted, thus guaranteeing communication security among mobile nodes.
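    The exact option layout of Encryptedword is not given in the abstract; the sketch below shows only the XOR idea it relies on: masking a 128-bit home address with an equally long value is trivially reversible by any party holding that value. The address and mask are made up.

```python
import ipaddress

def xor_address(addr: str, mask: int) -> str:
    """XOR a 128-bit IPv6 address with a 128-bit mask; applying the same
    mask twice restores the original address."""
    a = int(ipaddress.IPv6Address(addr))
    return str(ipaddress.IPv6Address(a ^ mask))

home = "2001:db8::1"
mask = 0x0123_4567_89ab_cdef_0123_4567_89ab_cdef
hidden = xor_address(home, mask)     # what an eavesdropper sees
restored = xor_address(hidden, mask)  # what the legitimate peer recovers
```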
    Audit scheme for intranet behavior based on improved regular expression rule grouping
    YU Yihan, FU Yu, WU Xiaoping
    2016, 36(8):  2241-2245.  DOI: 10.11772/j.issn.1001-9081.2016.08.2241
    In view of the insufficient auditing ability for application layer protocols, an intranet behavior audit scheme based on improved Regular Expression (RE) rule grouping was proposed. First, the protocols to be audited were described by regular expressions, and the relevant parameters were set so that the states of high-frequency protocols and relatively important protocols in the intranet had high priority in the RE set. Then, under the premise of a small interaction value between regular expressions, the high-priority protocol state expressions were built into the same automaton group as far as possible to generate the audit engine. Finally, according to the audit requirements, the relevant parameters were changed to achieve security auditing of intranet behavior. Experimental results showed that, compared with the classic Nondeterministic Finite Automaton (NFA) construction algorithm of Thompson, the number of states produced by the proposed automaton construction algorithm was reduced to 10% to 20%, and the detection throughput was 8 to 12 times that of the traditional automaton grouping engine. The proposed audit scheme can satisfy the demand for security auditing of application layer protocols with high accuracy and efficiency.
    Collaborative filtering recommendation method based on improved heuristic similarity model
    ZHANG Nan, LIN Xiaoyong, SHI Shenghui
    2016, 36(8):  2246-2251.  DOI: 10.11772/j.issn.1001-9081.2016.08.2246
    In order to improve the accuracy and efficiency of collaborative filtering recommendation, a collaborative filtering recommendation method based on an improved heuristic similarity model, namely PSJ, was proposed, which considered the difference of user ratings, the global rating preferences of users, and the number of commonly rated items. The Proximity factor of PSJ uses an exponential function to reflect the influence of rating differences, which avoids the division-by-zero problem. The Significance factor of the NHSM (New Heuristic Similarity Model) method and the URP (User Rating Preference) factor were merged to build the Significance factor of PSJ, making its computational complexity lower than that of NHSM. To improve recommendation performance under data sparsity, both the variance of user ratings and the global rating preferences of users were considered in PSJ. In the experiments, the precision and recall of Top-k recommendation were used to evaluate the results. The results show that, compared with NHSM, the Jaccard algorithm, the Adjusted COSine similarity (ACOS) algorithm, the Jaccard Mean Squared Difference (JMSD) algorithm and the Sigmoid-function-based Pearson Correlation Coefficient method (SPCC), the precision and recall of PSJ are improved.
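    The abstract does not give PSJ's exact formulas; the sketch below shows only the two ingredients it names, with assumed concrete forms: an exponential proximity factor (no denominator, hence no zero divider) and a Jaccard-style weight on the number of commonly rated items.

```python
import math

def proximity(r_u, r_v):
    """Exponential proximity factor: equal ratings give 1.0, and the factor
    decays smoothly with the rating difference; no denominator can be zero."""
    return math.exp(-abs(r_u - r_v))

def significance(items_u, items_v):
    """Jaccard-style weight on the number of commonly rated items."""
    common = len(items_u & items_v)
    return common / (len(items_u) + len(items_v) - common)

# Two users rating item sets {1,2,3} and {2,3,4}, with ratings 4 and 5
# on a shared item, give one multiplicative similarity term.
sim = proximity(4, 5) * significance({1, 2, 3}, {2, 3, 4})
```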
    Multidimensional topic model for oriented sentiment analysis based on long short-term memory
    TENG Fei, ZHENG Chaomei, LI Wen
    2016, 36(8):  2252-2256.  DOI: 10.11772/j.issn.1001-9081.2016.08.2252
    Concerning the low accuracy of global Chinese microblog sentiment classification, a new model based on Multi-dimensional Topics and Long Short-Term Memory (MT-LSTM) was introduced. The proposed model performs hierarchical multidimensional sequence computation: it is composed of Long Short-Term Memory (LSTM) cell networks and is suitable for processing vectors, arrays and higher-dimensional data. Firstly, a microblog was divided into multiple levels for analysis: propagating upward, the sentiment tendencies of words and phrases were analyzed by a three-Dimensional Long Short-Term Memory (3D-LSTM); propagating rightward, the sentiment tendency of the whole microblog was analyzed by a Multi-Dimensional Long Short-Term Memory (MD-LSTM). Secondly, sentiment tendencies were analyzed by Gaussian distribution over topic signs. Finally, the classification result was obtained by weighting the above analyses. The experimental results show that the average precision of the proposed model reached 91%, up to 96.5%, and the recall of neutral microblogs reached 50%. In comparison experiments with a Recursive Neural Network (RNN) model, the F-measure of MT-LSTM was enhanced by more than 40%; compared with no topic division, the F-measure of MT-LSTM was enhanced by 11.9% owing to meticulous topic division. The proposed model has good overall performance; it can effectively improve the accuracy of analyzing Chinese microblog sentiment tendencies and reduce the amount of training data and the complexity of matching calculation.
    Microblog advertisement filtering method based on classification feature extension of latent Dirichlet allocation
    XING Jinbiao, CUI Chaoyuan, SUN Bingyu, SONG Liangtu
    2016, 36(8):  2257-2261.  DOI: 10.11772/j.issn.1001-9081.2016.08.2257
    Traditional microblog advertisement filtering methods neglect factors such as data sparseness, semantic information and advertisement background characteristics. Focusing on these issues, a new filtering method based on classification feature extension with Latent Dirichlet Allocation (LDA) was proposed. Firstly, microblogs were divided into normal microblogs and advertising microblogs, an LDA topic model was built for each class to infer the corresponding topic distribution, and the words in the topic models were taken as the basis of feature extension. Secondly, the background characteristics were extracted in conjunction with the text category information during extension to reduce their impact on text classification. Finally, the extended feature vectors served as the input of the classifier, and advertisements were filtered depending on the results of Support Vector Machine (SVM) classification. In comparison experiments with a method based only on short text classification, the precision of the proposed method was increased by 4 percentage points on average. The results indicate that the proposed method can effectively extend text features and reduce the influence of background characteristics, and it is well suited to filtering microblog advertisements at large data volumes.
    Association rules recommendation of microblog friend based on similarity and trust
    WANG Tao, QIN Xizhong, JIA Zhenhong, NIU Hongmei, CAO Chuanling
    2016, 36(8):  2262-2267.  DOI: 10.11772/j.issn.1001-9081.2016.08.2262
    Since the efficiency of rule mining and the validity of recommendation are not high in personalized friend recommendation based on association rules, an improved association rule algorithm based on bitmaps and hashing, namely BHA, was proposed. The mining time of frequent 2-itemsets was decreased by introducing a hashing technique, and irrelevant candidates were compressed using bitmaps and related properties to decrease data traversal. In addition, on the basis of BHA, a friend recommendation algorithm named STA based on similarity and trust was proposed. The problem that microblog has no explicit trust relationship was resolved effectively through trust defined by the similarity of out-degree and in-degree; meanwhile, the defect that similarity-based recommendation ignores users' hierarchy distance was remedied. Experiments were conducted on user data from Sina microblog. In the comparison experiment on mining efficiency, the average mining time of BHA was only 47% of that of the modified AprioriTid; in the comparison experiment on friend recommendation validity against SNFRBOAR (Social Network Friends Recommendation algorithm Based On Association Rules), the precision and recall of STA were increased by 15.2% and 9.8% respectively. The theoretical analysis and simulation results show that STA can effectively decrease the average rule mining time and improve the validity of friend recommendation.
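    The abstract does not detail BHA's hash step; it resembles the classic DHP idea, sketched below: while scanning transactions, every candidate 2-itemset is counted into a hash bucket, and a pair is later kept as a candidate only if its bucket total reaches the support threshold. The transactions are made up.

```python
from itertools import combinations

def hash_prune_pairs(transactions, min_support, n_buckets=16):
    """Count every 2-itemset of each transaction into a hash bucket; a pair
    can only be frequent if its bucket total reaches min_support."""
    buckets = [0] * n_buckets
    for tx in transactions:
        for pair in combinations(sorted(tx), 2):
            buckets[hash(pair) % n_buckets] += 1
    survivors = set()
    for tx in transactions:
        for pair in combinations(sorted(tx), 2):
            if buckets[hash(pair) % n_buckets] >= min_support:
                survivors.add(pair)
    return survivors

txs = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}]
candidates = hash_prune_pairs(txs, min_support=2)
```

    Bucket collisions can only over-count, so every truly frequent pair survives; the pruning is safe and cheap, which is what cuts the 2-itemset mining time.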
    Approach for hesitant fuzzy two-sided matching decision making under unknown attribute weights
    LIN Yang, LI Yuansheng, WANG Yingming
    2016, 36(8):  2268-2273.  DOI: 10.11772/j.issn.1001-9081.2016.08.2268
    To deal with the Two-Sided Matching (TSM) problem based on Hesitant Fuzzy Values (HFVs) with unknown weights, a multi-attribute matching decision making approach was proposed. To begin with, the weight information was determined by maximizing the sum of deviations of the multi-attribute HFV evaluations given by the agents on both sides. Then, the matching degrees were aggregated via an adjusted hesitant fuzzy weighted averaging operation using the obtained weights and the multi-attribute information. In addition, a multi-objective optimization model was established based on the matching degrees of the two sides, and the matching scheme was generated by converting this model into a single-objective optimization model with the min-max method. Finally, a numerical illustration and comparison were given: the objective values obtained by the proposed method were 1.689 and 1.575 respectively, and a unique matching scheme was obtained. The experimental results show that the proposed method can avoid the multiple solutions caused by subjective weighting of the goal functions.
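    The maximizing-deviation idea can be sketched generically (this is not the paper's exact operator): each attribute weight is made proportional to the total pairwise deviation of the evaluations under that attribute, so attributes that discriminate more between alternatives weigh more. Here an HFV is a list of membership degrees, compared by the mean absolute difference of sorted elements, padding the shorter list by repeating its largest element; the evaluation data are made up.

```python
def hfv_distance(h1, h2):
    """Mean absolute difference of two hesitant fuzzy values, the shorter
    one padded by repeating its largest element."""
    a, b = sorted(h1), sorted(h2)
    n = max(len(a), len(b))
    a += [a[-1]] * (n - len(a))
    b += [b[-1]] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b)) / n

def deviation_weights(evals):
    """evals[i][j] is the HFV of alternative i under attribute j; each
    weight is the normalized sum of pairwise deviations in its column."""
    m, n = len(evals), len(evals[0])
    dev = [sum(hfv_distance(evals[i][j], evals[k][j])
               for i in range(m) for k in range(m)) for j in range(n)]
    total = sum(dev)
    return [d / total for d in dev]

evals = [[[0.2, 0.4], [0.5]],
         [[0.8], [0.5, 0.6]],
         [[0.3, 0.5], [0.4, 0.5]]]
w = deviation_weights(evals)   # attribute 0 spreads more, so w[0] > w[1]
```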
    Advances in automatic image annotation
    LIU Mengdi, CHEN Yanli, CHEN Lei
    2016, 36(8):  2274-2281.  DOI: 10.11772/j.issn.1001-9081.2016.08.2274
    Existing image annotation algorithms can be roughly divided into four categories: semantics based methods, probability based methods, matrix decomposition based methods and graph learning based methods. Representative algorithms of each category were introduced, and the problem models and characteristics of these algorithms were analyzed. Then the main optimization methods of these algorithms were summarized, and the common image datasets and evaluation metrics were introduced. Finally, the main open problems of automatic image annotation were pointed out, together with possible solutions. The analysis shows that making full use of the complementary advantages of current algorithms, or drawing on multi-disciplinary advantages, may lead to more efficient algorithms for automatic image annotation.
    Multimodal multi-label transfer learning for early diagnosis of Alzheimer's disease
    CHENG Bo, ZHU Bingli, XIONG Jiang
    2016, 36(8):  2282-2286.  DOI: 10.11772/j.issn.1001-9081.2016.08.2282
    In machine-learning-based medical imaging analysis, training samples are often insufficient. To solve this problem, a multimodal multi-label transfer learning model was proposed and applied to the early diagnosis of Alzheimer's Disease (AD). Specifically, the model consists of two components: multi-label transfer learning feature selection, and a multimodal multi-label learning machine for joint classification and regression. Firstly, a multi-label transfer learning feature selection model was built on the conventional sparse multi-label Lasso (Least absolute shrinkage and selection operator) model for combined classification and regression tasks. Secondly, transfer learning was used to extend the conventional sparse multi-label Lasso model, creating a multi-label transfer learning feature selection model that can be trained on samples from different learning domains. Then, for the multimodal feature data in heterogeneous feature spaces, multi-kernel learning was used to combine the multimodal feature kernel matrices. Finally, the multimodal multi-label learning machine was built, consisting of multi-kernel learning for the combination of multimodal biomarkers and a multi-label classification and regression model. To evaluate the effectiveness of the proposed model, the Alzheimer's Disease Neuroimaging Initiative (ADNI) database was employed. The experimental results on the ADNI database show that the proposed model can distinguish Mild Cognitive Impairment Converters (MCI-C) from MCI Non-Converters (MCI-NC) with 79.1% accuracy and predict clinical scores with a correlation coefficient of 0.727, so it can significantly improve the performance of early AD diagnosis with the aid of related domain knowledge.
    Human interaction recognition based on statistical features of key frame feature library
    JI Xiaofei, ZUO Xinmeng
    2016, 36(8):  2287-2291.  DOI: 10.11772/j.issn.1001-9081.2016.08.2287
    Issues such as high computational complexity and low recognition accuracy still exist in human interaction recognition. To solve those problems, an effective method based on the statistical features of a key frame feature library was proposed. Firstly, global GIST and regional Histogram of Oriented Gradient (HOG) features were extracted from the pre-processed videos. Secondly, the training videos of each action class were clustered by the k-means algorithm to obtain the key frame features of each action, which constituted the key frame feature library; in addition, a similarity measure was used to calculate the frequency of the different key frames in every interactive video, yielding a statistical histogram representation of the interactive videos. Finally, decision-level fusion was achieved by using a Support Vector Machine (SVM) classifier based on the histogram intersection kernel, which obtained impressive results on the UT-interaction dataset. The experimental results on the standard database show that the correct recognition rate of the proposed method is 85%, which indicates that the proposed method is simple and effective.
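    The representation step can be sketched with numpy (toy centroids stand in for the clustered key frames; the 2-D features and sizes are hypothetical): each frame feature votes for its nearest key frame, and the resulting histograms are compared with the histogram intersection kernel used by the SVM.

```python
import numpy as np

def keyframe_histogram(frames, keyframes):
    """Assign each frame feature to its nearest key frame (Euclidean) and
    return the normalized frequency histogram over key frames."""
    d = np.linalg.norm(frames[:, None, :] - keyframes[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    hist = np.bincount(idx, minlength=len(keyframes)).astype(float)
    return hist / hist.sum()

def intersection_kernel(h1, h2):
    """Histogram intersection kernel: sum of element-wise minima."""
    return np.minimum(h1, h2).sum()

keyframes = np.array([[0.0, 0.0], [1.0, 1.0]])           # toy "key frames"
video = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.2, 0.1]])
h = keyframe_histogram(video, keyframes)
```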
    Retrieval method of images based on robust Cosine-Euclidean metric dimensionality reduction
    HUANG Xiaodong, SUN Liang
    2016, 36(8):  2292-2295.  DOI: 10.11772/j.issn.1001-9081.2016.08.2292
    Focusing on the issues that Principal Component Analysis (PCA) related dimensionality reduction methods are ill-suited to nonlinearly distributed datasets and have poor robustness, a new dimensionality reduction method named Robust Cosine-Euclidean Metric (RCEM) was proposed. Considering that the Cosine Metric (CM) handles outliers efficiently while the Euclidean distance preserves the variance information of samples well, CM was used to describe the geometric characteristics of neighborhoods and the Euclidean distance was used to depict the global distribution of the dataset. The proposed method retained the local information of the dataset while unifying local and global structure, thus increasing the robustness of local dimensionality reduction and helping avoid the small-sample-size problem. The experimental results on the Corel-1000 dataset showed that the average retrieval precision of RCEM was 5.61% higher than that of Angle Optimization Global Embedding (AOGE), and the retrieval time of RCEM was 42% lower than that of retrieval without dimensionality reduction. The results indicate that RCEM can improve the efficiency of image retrieval without decreasing retrieval accuracy, and it can be effectively applied to Content-Based Image Retrieval (CBIR).
    Super-pixel based pointwise mutual information boundary detection algorithm
    LIU Shengnan, NING Jifeng
    2016, 36(8):  2296-2300.  DOI: 10.11772/j.issn.1001-9081.2016.08.2296
    The Pointwise Mutual Information (PMI) boundary detection algorithm can extract the boundaries of an image accurately; however, its efficiency is restricted by the redundancy and randomness of the sampling process. To overcome this disadvantage, a new method based on the intermediate structure information provided by super-pixel segmentation was proposed. Firstly, the image was divided into approximately equal-sized super-pixels in pre-processing. Secondly, the sampling points were located in adjacent, different super-pixels, which made sample selection more orderly, so that the image information could still be extracted effectively and completely even though the total number of sampling points was sharply reduced. A comparison experiment between the proposed algorithm and the original PMI boundary detection algorithm was carried out on the Berkeley Segmentation Data Set (BSDS). The results show that the proposed algorithm achieves an AP (Average Precision) of 0.7917 under the PR (Precision/Recall) curve with 3500 sample pairs, while the original algorithm needs 6000 pairs. This confirms that the proposed algorithm can guarantee detection accuracy with fewer sample points, effectively improving real-time performance.
    Edge extraction method based on graph theory
    ZHANG Ningbo, LIU Zhenzhong, ZHANG Kun, WANG Lulu
    2016, 36(8):  2301-2305.  DOI: 10.11772/j.issn.1001-9081.2016.08.2301
    Abstract ( )   PDF (956KB) ( )
    Focusing on the issue that edges extracted by state-of-the-art methods suffer from deficiencies such as discontinuity, incompleteness, incline, jitter and notches, an edge extraction method based on graph theory was proposed, which considered the image as an undirected graph by regarding each pixel as a node and connecting adjacent nodes in the horizontal or vertical direction to form sides. The proposed method consists of three phases: in the pixel similarity calculation phase, weights representing pixel similarity were assigned to the sides of the undirected graph; in the threshold determination phase, the mean of all the weights (the similarity of the whole image) was taken as the threshold; in the edge determination phase, when the weight of a horizontal or vertical side was smaller than the threshold, the left node of the horizontal side or the upper node of the vertical side was retained to form the edges of the image. The experimental results show that the proposed method is suitable for images with distinct targets and backgrounds, can overcome the deficiencies of discontinuity, incompleteness, incline, jitter and notches, and is robust to speckle and Gaussian noise.
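The three phases above can be sketched directly in NumPy. The similarity measure used here, `1 / (1 + |difference|)`, is an illustrative assumption (the paper's exact similarity definition may differ); the thresholding and the left-node/upper-node retention rule follow the description:

```python
import numpy as np

def graph_edge_map(img):
    # View the image as an undirected graph: each pixel is a node, each
    # horizontal/vertical neighbor pair is a side weighted by similarity.
    img = img.astype(float)
    hsim = 1.0 / (1.0 + np.abs(img[:, :-1] - img[:, 1:]))  # horizontal sides
    vsim = 1.0 / (1.0 + np.abs(img[:-1, :] - img[1:, :]))  # vertical sides
    t = np.concatenate([hsim.ravel(), vsim.ravel()]).mean()  # mean weight
    edge = np.zeros(img.shape, dtype=bool)
    edge[:, :-1] |= hsim < t   # left node of a dissimilar horizontal side
    edge[:-1, :] |= vsim < t   # upper node of a dissimilar vertical side
    return edge
```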
    Moving object detection method based on multi-information dynamic fusion
    HE Wei, QI Qi, ZHANG Guoyun, WU Jianhui
    2016, 36(8):  2306-2310.  DOI: 10.11772/j.issn.1001-9081.2016.08.2306
    Abstract ( )   PDF (843KB) ( )
    Aiming at the problems that spatio-temporal information is fused simplistically and motion information is ignored in visual-saliency-based moving object detection, a moving object detection method based on the dynamic fusion of visual saliency and motion information was proposed. Firstly, the local and global saliencies of each pixel were computed from spatial features extracted from the image, and the spatial salient map was obtained by combining these saliencies with the Bayesian criterion. Secondly, with the help of a structured random forest, the motion boundaries were predicted to roughly locate the moving objects, from which the motion boundary map was built. Then, according to the changes of the spatial salient map and the motion boundary map, the optimal fusion weights were determined dynamically. Finally, the moving objects were computed and marked using the dynamic fusion weights. The proposed approach inherits the advantages of the saliency and motion boundary algorithms while overcoming their disadvantages. In comparison experiments with the traditional background subtraction method and the three-frame difference method, the detection rate and the false alarm rate of the proposed approach are improved by up to more than 40%. Experimental results show that the proposed method can detect moving objects accurately and completely, with better adaptation to the scene.
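A rough Python sketch of the two fusion steps follows. The Bayesian combination below treats the two saliency maps as independent evidence, and the scalar weight `w` stands in for the dynamically determined fusion weights; neither is the paper's exact formula:

```python
def bayesian_saliency(local, global_):
    # Combine local and global saliency values (in [0, 1]) as if they
    # were independent evidence that a pixel is salient.
    num = local * global_
    return num / (num + (1.0 - local) * (1.0 - global_) + 1e-12)

def dynamic_fusion(spatial_map, motion_map, w):
    # Weighted fusion of the spatial salient map and the motion boundary
    # map; in the paper, w is chosen dynamically from how the maps change.
    return w * spatial_map + (1.0 - w) * motion_map
```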
    Multi-target tracking algorithm based on improved Hough forest framework
    GAO Qingji, HUO Lu, NIU Guochen
    2016, 36(8):  2311-2315.  DOI: 10.11772/j.issn.1001-9081.2016.08.2311
    Abstract ( )   PDF (756KB) ( )
    To deal with the failure of tracking multiple similar targets with monocular vision caused by factors such as occlusion, a multi-target tracking algorithm based on an improved online Hough forest tracking framework was proposed. The tracking problem was formulated as a detection-based trajectory association process, and the association computation was cast as a Maximum A Posteriori (MAP) problem within an online-learning Hough forest framework. Through online collection of multi-target samples and extraction of appearance and motion information, a Hough forest was constructed and trained for the track association probability to associate multi-target trajectories. A low-rank approximation of the Hankel matrix was employed to correct the trajectories, which rectified association errors and improved the efficiency of the online update of the training set. Experimental results show that the trajectory mismatch ratio is significantly decreased by the proposed method, and the tracking accuracy and robustness of monocular vision are effectively improved for similar or mutually occluding targets.
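The low-rank Hankel correction step can be sketched in Python: stack one trajectory coordinate into a Hankel matrix, truncate its SVD to a low rank, and average the anti-diagonals back into a corrected trajectory. The `order` and `rank` values are illustrative, not the paper's settings:

```python
import numpy as np

def hankel_smooth(track, order=3, rank=2):
    # track: 1-D array of one trajectory coordinate over time.
    n = len(track)
    H = np.array([track[i:i + n - order + 1] for i in range(order)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0.0                    # keep only the dominant modes
    Hr = (U * s) @ Vt                 # best rank-`rank` approximation
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(order):            # average anti-diagonals back
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt
```

A noise-free linear trajectory yields a rank-2 Hankel matrix, so it passes through unchanged; noisy detections get projected toward the nearest low-order motion.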
    Accurate motion estimation algorithm based on upsampled phase correlation with kernel regression refining
    YU Yinghuai, XIE Shiyi, MEI Qixiang
    2016, 36(8):  2316-2321.  DOI: 10.11772/j.issn.1001-9081.2016.08.2316
    Abstract ( )   PDF (1098KB) ( )
    Concerning highly accurate sub-pixel motion vector estimation, an accurate motion estimation algorithm based on upsampled phase correlation with kernel regression refining was proposed. Firstly, an upsampled phase correlation was computed efficiently by means of a matrix-multiply discrete Fourier transform, and an initial sub-pixel estimate of the motion vector was obtained simply by locating its peak. Secondly, a kernel regression function was fitted to the upsampled phase correlation values in a neighborhood of the initial estimate. Finally, the initial estimate was refined with the peak location of the fitted kernel regression function, so as to obtain an accurate estimate at arbitrary precision. In comparison experiments with state-of-the-art algorithms such as Quadratic function Fitting (QuadFit), Linear Fitting (LinFit), Sinc Fitting (SincFit), Local Center of Mass (LCM) and Upsampling in the frequency domain (Upsamp), the proposed scheme achieved an average estimation error of 0.0070 in the noise-free case, increasing the accuracy of motion estimation by more than 64%; under noisy conditions, the average estimation error of the proposed scheme was 0.0204, improving the accuracy of motion estimation by more than 47%. Experimental results show that the proposed scheme not only improves the accuracy of motion estimation significantly, but also achieves good robustness to noise.
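The baseline that the scheme refines, integer-pixel phase correlation, can be sketched in a few lines of NumPy; the sub-pixel stages (matrix-multiply DFT upsampling and kernel regression around the peak) are omitted here:

```python
import numpy as np

def phase_correlation_shift(a, b):
    # Integer-pixel estimate of the shift of b relative to a from the
    # peak of the phase correlation surface.
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past N/2 correspond to negative shifts (circular wrap).
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
```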
    Rapid video background extraction algorithm based on nearest neighbor pixel gradient
    ZHAO Shuyan, LU Yanxue, HAN Xiaoxia
    2016, 36(8):  2322-2326.  DOI: 10.11772/j.issn.1001-9081.2016.08.2322
    Abstract ( )   PDF (849KB) ( )
    To meet the real-time requirement of video background extraction in embedded vision systems, a rapid algorithm based on the stability of the Nearest Neighbor Pixel Gradient (N2PG) was proposed. Firstly, the background was initialized with a single frame, and the N2PG matrix of that frame was calculated. Secondly, several subsequent video frames were used as reference images for background update, and their N2PG matrices were calculated in the same way. Then, whether each pixel of the background model was static or non-static was judged rapidly by subtracting the N2PG matrix of the reference image from that of the background image and comparing the difference against a gradient stability threshold estimated in real time. Finally, the current background was obtained by updating or replacing each background pixel. In simulation tests, compared with the Kalman filtering method and the Gaussian mixture model, the N2PG-based algorithm needed only 10 to 50 frames to obtain the background, and its average frame processing speed was increased by 36% and 75% respectively; compared with the modified Visual Background Extractor (ViBe) algorithm, the background update speed of the N2PG algorithm was doubled with the same number of required video frames and similar background quality. Experimental results show that the proposed algorithm has strong adaptability, high speed and a small storage footprint, with a background extraction accuracy above 90%, so it can satisfy real-time embedded vision applications in natural environments.
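The static/non-static judgment can be sketched in NumPy. The gradient definition below (sum of absolute differences to the right and lower neighbors) and the threshold `tau` are illustrative stand-ins for the paper's exact N2PG definition and its real-time threshold estimate:

```python
import numpy as np

def n2pg(img):
    # A simple nearest-neighbor pixel gradient: magnitude of the
    # difference to the right and lower neighbors (edge-padded).
    img = img.astype(float)
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    return gx + gy

def static_mask(bg, frame, tau=5.0):
    # A background pixel is judged static when its gradient is stable
    # between the background model and the reference frame.
    return np.abs(n2pg(bg) - n2pg(frame)) < tau
```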
    Camera calibration method of surgical navigation based on C-arm
    ZHANG Jianfa, ZHANG Fengfeng, SUN Lining, KUANG Shaolong
    2016, 36(8):  2327-2331.  DOI: 10.11772/j.issn.1001-9081.2016.08.2327
    Abstract ( )   PDF (756KB) ( )
    Concerning the problem that camera calibration for C-arm-based surgical navigation involves too many transitional links and a complex parameter solving process, a new method that dispenses with the camera model entirely was proposed. By ignoring the camera model, the transitional links in solving the mapping parameters were simplified, which increased efficiency. In addition, camera calibration was achieved by distinguishing the projection data of a calibration target carrying two layers of metal balls. In the calibration point verification experiment, the residual error of each test point was shown to be no more than 0.002 pixels; in the navigation validation experiment, probe pointing and perforation tests were successfully carried out on the established preliminary experimental platform. The experimental results verify that the proposed camera calibration method can meet the accuracy requirements of a surgical navigation system.
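In the same model-free spirit, a 3D-to-2D mapping can be fitted directly from calibration-ball correspondences with a standard Direct Linear Transform (DLT), never recovering intrinsic or extrinsic camera parameters. This is a generic sketch, not the paper's actual procedure:

```python
import numpy as np

def fit_projection(world, image):
    # Fit a 3x4 projection matrix from >= 6 (3D point, 2D projection)
    # correspondences via the DLT; no explicit camera model is recovered.
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)    # null vector = flattened projection

def project(P, point):
    x = P @ np.append(point, 1.0)
    return x[:2] / x[2]
```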
    Electric vehicle charging scheduling scheme oriented to traveling plan
    ZENG Ming, LENG Supeng, ZHANG Ke
    2016, 36(8):  2332-2334.  DOI: 10.11772/j.issn.1001-9081.2016.08.2332
    Abstract ( )   PDF (636KB) ( )
    Due to the scarcity of charging stations (or piles) and the short driving range of Electric Vehicles (EV), many people are hesitant to use EVs. To reduce users' anxiety about limited battery capacity and to lower the fees caused by frequent charging and charging detours, a matching-theoretic Traveling Plan-aware Charging Scheduling (TPCS) scheme was proposed. Firstly, the preference lists of EV users and charging stations were constructed according to the traveling plans of the EVs and their electricity demand at each charging station. Secondly, a many-to-one matching model was established between EV users and charging stations. Finally, the interfaces of the charging stations were allocated to optimize the total system utility. Compared with the Random Charging Scheduling (RCS) algorithm and the Only utility of Electric Vehicle concerned Scheduling (OEVS) algorithm, the total system utility of TPCS was increased by up to 39.3% and 5% respectively. In addition, TPCS kept the satisfaction ratio of EV users above 90% when their charging demand was light, which is higher than that of RCS. The proposed algorithm can effectively improve the total system utility and the satisfaction ratio of EV users while reducing the computational complexity.
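The many-to-one matching between EV users and stations can be illustrated with a classical deferred-acceptance procedure over the preference lists and station capacities the abstract describes; this is a simplified textbook sketch, not the paper's exact TPCS allocation:

```python
def deferred_acceptance(ev_prefs, station_prefs, capacity):
    # ev_prefs: ev -> ordered list of stations; station_prefs: station ->
    # ordered list of EVs; capacity: station -> number of interfaces.
    match = {}                          # ev -> station
    nxt = {ev: 0 for ev in ev_prefs}    # next station each EV proposes to
    free = list(ev_prefs)
    while free:
        ev = free.pop()
        if nxt[ev] >= len(ev_prefs[ev]):
            continue                    # list exhausted: stays unmatched
        st = ev_prefs[ev][nxt[ev]]
        nxt[ev] += 1
        held = [e for e, s in match.items() if s == st] + [ev]
        held.sort(key=station_prefs[st].index)
        for e in held[:capacity[st]]:   # station keeps its favorites
            match[e] = st
        for e in held[capacity[st]:]:   # the rest propose again later
            match.pop(e, None)
            free.append(e)
    return match
```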
    Coordinating emergency task allocation of logistics service supply chain under time-to-service
    ZHANG Guangsheng, LIU Wei
    2016, 36(8):  2335-2339.  DOI: 10.11772/j.issn.1001-9081.2016.08.2335
    Abstract ( )   PDF (825KB) ( )
    Aiming at the task allocation problem of a two-echelon logistics service supply chain in emergencies, a customer satisfaction model with time-to-service was put forward. First, considering the randomness of orders in emergencies, the customer satisfaction model with time-to-service was established. Secondly, a minimum logistics cost model was established to optimize the cost of the logistics service supply chain. Then, using the linear weighting method, the multi-objective model of maximizing customer satisfaction and minimizing service cost was transformed into a single-objective model. Finally, a Genetic Algorithm (GA) was used to solve the model, and sensitivity analysis was applied to the weight. Results of a calculation example show that, compared with the target values of 0.0501 and 0.0825 obtained by single-objective assignment, the comprehensive model obtains an optimal target value of 0.2716, which means the task allocation scheme of the established model can effectively solve the task allocation problem of customer satisfaction with time-to-service. The weight sensitivity analysis shows that when the weight lies between 0.1 and 0.5, the slope of the optimal solution varies more significantly than when the weight lies between 0.5 and 0.9, which indicates that the allocation weight should be chosen according to the service ability parameter when allocating tasks in emergencies, and that a trade-off (paradox) effect between customer satisfaction and logistics cost exists in emergency task allocation. The results indicate that the task allocation model with time-to-service can effectively solve the task allocation problem of logistics service supply chains in emergencies.
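The linear-weighting step and the weight sensitivity sweep can be sketched in Python. The cost normalization by a reference value `cost_ref` is an assumption for illustration, not the paper's exact scalarization:

```python
def weighted_objective(satisfaction, cost, w, cost_ref=1.0):
    # Linear weighting turns the bi-objective (maximize satisfaction,
    # minimize cost) into one objective to maximize.
    return w * satisfaction + (1.0 - w) * (1.0 - cost / cost_ref)

def weight_sensitivity(candidates, cost_ref, weights):
    # Sensitivity analysis: best (satisfaction, cost) candidate per weight.
    return {w: max(candidates,
                   key=lambda c: weighted_objective(c[0], c[1], w, cost_ref))
            for w in weights}
```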
    Dynamic shop scheduling problem of maintenance point prediction
    KUANG Peng, WU Jinzhao
    2016, 36(8):  2340-2345.  DOI: 10.11772/j.issn.1001-9081.2016.08.2340
    Abstract ( )   PDF (848KB) ( )
    Aiming at the uncertainty of production plans in the manufacturing industry, an optimal scheduling method combining maintenance point prediction with an adaptive hybrid genetic and simulated annealing algorithm was proposed. First, the Auto-Regressive Integrated Moving Average (ARIMA) model was used to predict the equipment failure rate; then a Weibull distribution model was used to derive the future equipment maintenance point from the failure rate; finally, taking the maintenance point as a constraint, the traditional production scheduling problem was solved by the adaptive hybrid genetic and simulated annealing algorithm. The random scheduling of equipment maintenance was analyzed in combination with the practical situation of the factory, and the minimum makespan was taken as the goal to obtain the scheduling plan of each task and the maintenance point of each piece of equipment, thereby determining the optimal scheduling scheme. Experimental results show that the adaptive genetic and simulated annealing algorithm performs well: in the production workshop of a factory in Hebei, the average failure rate of the equipment scheduled by the optimized method was relatively reduced by 3.46% compared with that before optimization.
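The Weibull step, deriving a maintenance point from a predicted failure rate, amounts to inverting the Weibull hazard function. A minimal sketch, assuming the standard hazard form and a wear-out regime (shape parameter k > 1); the threshold value is the caller's choice, not from the paper:

```python
def weibull_maintenance_point(shape, scale, rate_threshold):
    # Invert the Weibull hazard h(t) = (k / lam) * (t / lam)**(k - 1)
    # to find the time at which the predicted failure rate reaches the
    # threshold; that time then serves as the scheduling constraint.
    k, lam = shape, scale
    return lam * (rate_threshold * lam / k) ** (1.0 / (k - 1.0))
```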
    Cow recognition algorithm based on improved bag of feature model
    CHEN Juanjuan, LIU Caixing, GAO Yuefang, LIANG Yun
    2016, 36(8):  2346-2351.  DOI: 10.11772/j.issn.1001-9081.2016.08.2346
    Abstract ( )   PDF (1056KB) ( )
    Concerning the high time consumption and low recognition accuracy of the Bag of Feature (BOF) model, an improved BOF model was proposed to increase the accuracy and efficiency of target recognition, and it was applied to cow recognition. An optimized Histogram of Oriented Gradient (HOG) feature was introduced for feature extraction and description of the images; then the Spatial Pyramid Matching (SPM) principle was used to generate the histogram representation of images based on a visual dictionary; finally, the histogram intersection kernel defined in this paper was used as the kernel function of the classifier. The experimental results on the dataset in this paper (15 classes of cows with 7500 cow head images) showed that the recognition rate of the algorithm was improved by an average of 2 percentage points by using the SPM-based BOF model; compared with the Gaussian kernel, the recognition rate was increased by an average of 2.5 percentage points by using the histogram intersection kernel; compared with the traditional HOG feature, the recognition rate was improved by an average of 21.3 percentage points by using the optimized HOG feature, and the computational efficiency of the algorithm was improved by an average factor of 1.68; compared with the Scale Invariant Feature Transform (SIFT) feature, the computational efficiency was improved by an average factor of nearly 7.10 while the average recognition accuracy remained at 95.3%. The analysis indicates that the algorithm has good robustness and practicability for individual cow recognition.
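The histogram intersection kernel has a compact definition, K(x, y) = sum_i min(x_i, y_i), which can be computed as a Gram matrix for an SVM over the SPM histograms; a minimal NumPy version (the paper's kernel may carry extra weighting across pyramid levels):

```python
import numpy as np

def histogram_intersection_kernel(X, Y):
    # Gram matrix of K(x, y) = sum_i min(x_i, y_i) for row-wise
    # histograms X (n x d) and Y (m x d).
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=-1)
```

Such a callable can be passed to classifiers that accept custom kernels, e.g. scikit-learn's `SVC(kernel=histogram_intersection_kernel)`.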
    Monte Carlo localization algorithm based on particle filter with adaptive multi-proposal distribution
    LUO Yuan, PANG Dongxue, ZHANG Yi, SU Qin
    2016, 36(8):  2352-2356.  DOI: 10.11772/j.issn.1001-9081.2016.08.2352
    Abstract ( )   PDF (755KB) ( )
    Concerning the high computational complexity and poor real-time performance of Monte Carlo Localization based on the Cubature particle filter (CMCL), a new Monte Carlo localization algorithm based on a particle filter with Adaptive Multi-Proposal Distribution (AMPD-MCL) was proposed. The proposal distribution was improved by using the cubature Kalman filter and the extended Kalman filter, into which the most recent measurements were incorporated to weaken the particle set degeneracy phenomenon. According to the distribution of particles in the state space, Kullback-Leibler Distance (KLD) sampling was utilized in re-sampling to adjust the number of particles required for the next filter iteration, which reduced the amount of computation. Simulation results proved the effectiveness of the Particle Filter with Adaptive Multi-Proposal Distribution (AMPD-PF). Experiments carried out on the Robot Operating System (ROS) showed that the improved algorithm achieved an average localization accuracy of 19.891 cm with 60 particles and a localization time of 45.543 s; compared with the CMCL algorithm, the localization accuracy was increased by 71.03% and the localization time was shortened by 63.10%. The results demonstrate that the AMPD-MCL algorithm reduces localization error, adjusts the number of particles in real time, reduces computation cost, and enhances real-time performance.
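The KLD-sampling step adapts the particle count from the number of occupied histogram bins in the state space; the standard bound (due to Fox's KLD-sampling) can be sketched as follows, with default `epsilon` and quantile `z` chosen for illustration rather than taken from the paper:

```python
import math

def kld_particle_count(k, epsilon=0.05, z=2.326):
    # Number of particles needed so that the KL distance between the
    # sample-based and true distributions stays below epsilon with the
    # confidence matching normal quantile z, given k occupied bins.
    if k <= 1:
        return 1
    d = 2.0 / (9.0 * (k - 1))
    return math.ceil((k - 1) / (2.0 * epsilon) * (1.0 - d + math.sqrt(d) * z) ** 3)
```

As the particle cloud concentrates (fewer occupied bins), the required particle count drops, which is what lets the filter shed particles between iterations.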
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editors: SHEN Hengtao, XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address: No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803, 028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn