
Table of Contents

    10 June 2016, Volume 36 Issue 6
    Energy hole avoidance strategy based on multi-level energy heterogeneity for wireless sensor networks
    XIE Lin, PENG Jian, LIU Tang, LIU Huashan
    2016, 36(6):  1475-1479.  DOI: 10.11772/j.issn.1001-9081.2016.06.1475
    In order to alleviate the energy hole problem in Wireless Sensor Networks (WSNs), a Multi-level Energy Heterogeneous algorithm (MEH) was proposed. The energy-consumption characteristics of WSNs were analyzed, and nodes with different initial energies were deployed according to those characteristics. To balance the energy consumption rate of each region, alleviate the energy hole problem and prolong the network lifecycle, nodes in regions with heavy communication load were configured with higher initial energy. The simulation results show that, compared with Low-Energy Adaptive Clustering Hierarchy (LEACH), the Distributed Energy-Balanced Unequal Clustering routing protocol (DEBUC) and the Nonuniform Distributed Strategy (NDS), the network energy utilization rate, network lifecycle and period ratio of network energy of MEH were each increased by nearly 10 percentage points, and MEH also balanced energy consumption well. The experimental results show that the proposed MEH can effectively prolong the network lifecycle and ease the energy hole problem.
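The region-dependent initial-energy idea above can be illustrated with a small sketch. In a corona (ring) model where all traffic is relayed inward to the sink, inner rings carry more relay load, so they receive proportionally more initial energy. The ring-area load model and the proportional allocation rule here are illustrative assumptions, not the paper's exact MEH formulas.

```python
import math

def relay_load_per_node(num_rings, node_density=1.0):
    """Estimate per-node relay load in a corona (ring) model where all
    traffic flows inward to the sink: ring i forwards its own data plus
    the data of every ring outside it."""
    # Area of unit-width ring i: pi*(i^2 - (i-1)^2) = pi*(2i - 1)
    areas = [math.pi * (2 * i - 1) for i in range(1, num_rings + 1)]
    nodes = [node_density * a for a in areas]
    loads = []
    for i in range(num_rings):
        outer_traffic = sum(nodes[i:])          # data generated in rings i..outermost
        loads.append(outer_traffic / nodes[i])  # packets each node in ring i handles
    return loads

def initial_energies(num_rings, total_energy):
    """Assign initial node energy proportional to relay load, so every
    region depletes at roughly the same rate (hypothetical MEH-style rule)."""
    loads = relay_load_per_node(num_rings)
    scale = total_energy / sum(loads)
    return [l * scale for l in loads]

e = initial_energies(4, total_energy=100.0)
# Inner rings carry more relay traffic, so they get more initial energy.
```
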
    Effect of non-Lambertian emitter on indoor coverage of wireless optical local area network
    XU Chun, GUNIMIRE Awudan, ABDURAHMAN Kadir
    2016, 36(6):  1480-1485.  DOI: 10.11772/j.issn.1001-9081.2016.06.1480
    In engineering practice, owing to factors such as the manufacturing process, source design and packaging technique, most commercially available optical sources have unique radiation characteristics that fall into the range of non-Lambertian emitters. However, the physical channel characterization of current wireless optical local area networks is based on the Lambertian emitter as the standard source model. Aiming at this problem, the radiation characteristics of two typical non-Lambertian emitters were incorporated into the physical multipath channel characterization of the wireless optical local area network, focusing on their effect on the indoor coverage performance compared with the traditional Lambertian emitter. The numerical results indicate that the non-Lambertian emitters, especially the one with a bowl-shaped radiation pattern, can significantly improve the spatial uniformity of optical path loss, with an improvement of up to 0.5 dB. However, in terms of the time delay characteristic of the coverage domain, both non-Lambertian emitters elevate the Root Mean Square (RMS) delay spread to different extents, by 0.27 ns and 0.38 ns respectively.
    Establishment of virtual chain network in ZigBee network
    LIU Xiankai
    2016, 36(6):  1486-1491.  DOI: 10.11772/j.issn.1001-9081.2016.06.1486
    Because of the 6-layer depth limitation of the ZigBee network, a chain network cannot be established directly in an effective way, and even when a chain network is established by modifying the protocol stack, the information transmission delay increases, which causes reliability problems. In view of this, a virtual chain network was proposed and implemented based on a modified node composed of a ZigBee transparent transmission module and an Arduino controller. This remote virtual chain network can be established and managed online while retaining the features of the original ZigBee network. By exploiting the advantages of the ZigBee network, such as ad-hoc networking, routing forwarding and transparent transmission, the virtual chain network achieves higher transmission efficiency and lower time delay. At the same time, the chain network can be managed dynamically by lengthening and shortening the chain as required. The proposed method of establishing a ZigBee virtual chain network can be widely used in intelligent transportation, smart grid and intelligent lighting systems.
    Adaptive N-sigma amplitude spectrum shaping algorithm in transform domain communication system
    LIU Li, ZHANG Hengyang, MAO Yuquan, SUN Le, MA Lihua
    2016, 36(6):  1492-1495.  DOI: 10.11772/j.issn.1001-9081.2016.06.1492
    In order to reduce the relatively high probabilities of missed and false detection in the traditional hard threshold setting algorithm and improve the anti-interference performance of the Transform Domain Communication System (TDCS), an adaptive N-sigma amplitude spectrum shaping algorithm was proposed. The amplitude information of the environment power spectrum was obtained by spectrum sensing, and the mean and standard deviation of the environment power spectrum were calculated. According to the theory of the normal distribution, the threshold was then set adaptively: whenever the electromagnetic environment changed, the mean and standard deviation were recalculated and the threshold was updated. The simulation results show that, compared with the traditional hard threshold setting algorithm, the threshold setting of the adaptive N-sigma amplitude spectrum shaping algorithm is more flexible and accurate, which reduces the missed and false detection probabilities of interference and improves the overall anti-interference performance of the system.
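The threshold rule described above (mean plus N standard deviations of the sensed power spectrum) can be sketched as follows; the value of `n` and the binary keep/drop mask are illustrative assumptions, not the paper's calibrated settings.

```python
import statistics

def n_sigma_threshold(power_spectrum, n=2.0):
    """Adaptively set an amplitude threshold at mean + n*std of the
    sensed environment power spectrum (the N-sigma idea in sketch form)."""
    mu = statistics.fmean(power_spectrum)
    sigma = statistics.pstdev(power_spectrum)
    return mu + n * sigma

def shape_amplitude_spectrum(power_spectrum, n=2.0):
    """Mark bins above the adaptive threshold as interfered (0.0) and
    keep the rest (1.0) available for the TDCS basis function."""
    th = n_sigma_threshold(power_spectrum, n)
    return [0.0 if p > th else 1.0 for p in power_spectrum]

# Quiet band with one strong interferer: only the interfered bin is dropped.
spectrum = [1.0, 1.1, 0.9, 1.0, 9.0, 1.05, 0.95, 1.0]
mask = shape_amplitude_spectrum(spectrum, n=2.0)
```
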
    Unambiguous capture model for binary offset carrier modulated signals based on correlation function
    OU Zhengbao, GUO Chengjun
    2016, 36(6):  1496-1501.  DOI: 10.11772/j.issn.1001-9081.2016.06.1496
    The capture of Binary Offset Carrier (BOC) modulated signals is ambiguous. In order to solve the problem, a novel decompose-compose algorithm based on local BOC signals was proposed. Firstly, the local subcarrier signal was decomposed according to the order n of the local BOC signal. Secondly, 2n subfunctions of the BOC signal were obtained by multiplying the pseudo-random code with the decomposed functions from the first step. Then the 2n subfunctions were correlated with the received BOC signal to obtain 2n cross-correlation functions. Finally, these cross-correlation functions were further processed according to the decompose-compose algorithm. The theoretical analysis and simulation results show that, compared with the Offset Quadratic Cross-Correlation (OQCC) algorithm, the proposed decompose-compose algorithm improved the Amplitude Separation Degree of Main and Side Peaks (ASDMSP) by 21.51 dB and 3.4 dB when capturing BOC(1,1) and BOC(2,1) signals respectively. The experimental results show that the decompose-compose algorithm can effectively solve the ambiguity problem of BOC signals.
    Modeling and characteristic research for spatial activity network
    CHEN Chao, CHEN Qu, HAN Dingding
    2016, 36(6):  1502-1505.  DOI: 10.11772/j.issn.1001-9081.2016.06.1502
    Based on the time-varying characteristics of real network topologies, an online social network was constructed from a Twitter data set. The analysis found that the activity distribution of users was virtually independent of the time scale, and that the degree distribution and edge-length distribution were both heterogeneous. Combined with these characteristics, a spatial activity network model was proposed in which the network topology is shaped by node activity and the preferential attachment probability; the accuracy of the mechanism was verified through the analysis of statistical characteristics. In order to study dynamic processes in time-varying networks, a random walk was carried out in the spatial activity network, leading to the conclusion that the Mean First-Passage Time (MFPT) is shorter for more active nodes. Finally, the relationship between the preferential attachment power exponent and the average search time was discussed, and the power exponent with the highest search efficiency in the spatial activity network was found to be 2. The proposed activity network model can be applied to time-varying networks.
    Sampling algorithm for social networks based on Dijkstra algorithm
    DU Jinglin, HOU Dajun
    2016, 36(6):  1506-1509.  DOI: 10.11772/j.issn.1001-9081.2016.06.1506
    Sampling a social network with a random sampling algorithm cannot represent the original network well. In order to solve the problem, a new algorithm based on the Dijkstra shortest path algorithm was proposed. Firstly, the Dijkstra algorithm was used to repeatedly sample shortest paths between nodes in the social network. Then the frequencies of the edges appearing in the extracted paths were ranked, and the edges with the higher frequencies were selected to compose the sampling subgraph. The proposed algorithm solves some problems of the random sampling algorithm and extracts the social network well. The simulation results show that the proposed sampling algorithm has less error and reflects the original network better than the random sampling method.
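A minimal sketch of the sampling procedure: repeatedly compute shortest paths between random node pairs with Dijkstra's algorithm, count how often each edge appears, and keep the most frequent edges as the sampled subgraph. Unit edge weights, the number of path samples and the subgraph size are illustrative choices, not the paper's settings.

```python
import heapq, random
from collections import Counter

def dijkstra_path(adj, src, dst):
    """Shortest path by Dijkstra on a weighted adjacency dict."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in prev:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def sample_subgraph(adj, num_samples, num_edges, rng=random.Random(0)):
    """Sample shortest paths between random node pairs, rank edges by
    how often they appear, and keep the most frequent ones."""
    nodes = list(adj)
    freq = Counter()
    for _ in range(num_samples):
        s, t = rng.sample(nodes, 2)
        path = dijkstra_path(adj, s, t)
        if path:
            freq.update(frozenset(e) for e in zip(path, path[1:]))
    return [tuple(e) for e, _ in freq.most_common(num_edges)]

# Two triangles joined by a bridge: the bridge lies on every cross-side path.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = {i: {} for i in range(6)}
for u, v in edges:
    adj[u][v] = adj[v][u] = 1.0
top = sample_subgraph(adj, num_samples=200, num_edges=3)
```
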
    Research on replication consistency of cache in publish/subscribe systems
    WANG Feng, LI Lixin, CAO Jingyuan, PAN Cong
    2016, 36(6):  1510-1514.  DOI: 10.11772/j.issn.1001-9081.2016.06.1510
    Aiming at the replication consistency maintenance problem of caches in publish/subscribe systems, firstly, a new algorithm based on trajectory labels was proposed to improve the gossip-based consistency maintenance algorithm. The trajectory information of nodes was added to the update message, which avoided sending redundant update messages to already-updated nodes. Secondly, in order to improve the reliability of message propagation, a hierarchical feedback recovery mechanism based on trajectory labels was proposed, which combined the push/pull transmission mode of publish/subscribe systems, reduced the number of feedback messages, and avoided feedback implosion. The simulation results show that the improved consistency maintenance algorithm can reduce the message cost and time cost of consistency maintenance, and improve the system's reliability and scalability.
    Energy consumption optimization method for cloud storage content distribution network
    DENG Zhigang, ZENG Guosun, TAN Yunlan, XIONG Huanliang
    2016, 36(6):  1515-1519.  DOI: 10.11772/j.issn.1001-9081.2016.06.1515
    Concerning the high energy consumption of the Cloud storage Content Distribution Network (CCDN), an energy consumption optimization method for the CCDN was studied. Firstly, the operation principle of the CCDN was analyzed. Then, energy consumption formulas were given for each cloud server and each network link, and a weighted graph was used to describe the whole network. On this basis, an energy consumption optimization algorithm named Min-Energy-Graph (MEG) was designed to satisfy the Quality of Service (QoS) and data distribution requirements of the CCDN. MEG was compared with Greedy Site (GS) and Optimal Static Placement and Routing (OSPR) in simulation experiments: the energy consumption of MEG was lower than that of GS and OSPR by 6.6% and 30% respectively in the system extension experiment, by 28.9% and 60.2% respectively in the user QoS experiment, and by 32.2% and 89.3% respectively in the network topology density experiment. The experimental results show that the proposed energy management method can greatly reduce the energy consumption of the CCDN.
    Software defined network based anti-saturated grouping cloud load balance
    HE Qian, HU Qiwei, WANG Yong, YANG Xinlei, LIU Shuming
    2016, 36(6):  1520-1525.  DOI: 10.11772/j.issn.1001-9081.2016.06.1520
    In cloud computing, statistical multiplexing is a remarkable characteristic, and the utilization efficiency of physical resources can be improved through virtualization. Aiming at the load balancing problem of resource utilization in a cloud virtual machine cluster, a Software Defined Network (SDN) based Anti-Saturated Grouping Strategy (ASGS) was proposed for the OpenStack cloud platform. The cloud hosts were separated into groups according to their weights, and the load information of the cloud hosts in each group was collected periodically by the SDN controller using probes. When a request arrived, the load balancer selected a group at random according to the average weight of each group's cloud hosts, and then chose a proper backend within the group by polling. To avoid cloud host downtime caused by a sudden surge of requests consuming too many resources of one backend, a cloud host with higher weight was given a default parameter to adjust its weight so that it received fewer requests under high load. The experimental results show that, however the number of requests changes over time, the standard deviation of resource utilization of the proposed ASGS is always smaller than those of the random and round-robin methods, and is close to 0. The proposed ASGS achieves better load balance for the cloud host cluster.
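The two-level selection described above can be sketched as follows: a group is drawn at random in proportion to its average host weight, a host is chosen round-robin inside the group, and an anti-saturation rule damps the weight of a host that reports high load. The halving factor and load threshold are illustrative assumptions, not the paper's default parameter.

```python
import itertools, random

class GroupedBalancer:
    """Sketch of an ASGS-style grouped balancer over (host, weight) pairs."""

    def __init__(self, groups, rng=None):
        # groups: list of groups, each a list of (host_name, weight) pairs
        self.groups = [list(g) for g in groups]
        self.cursors = [itertools.cycle(range(len(g))) for g in self.groups]
        self.rng = rng or random.Random(42)

    def pick(self):
        # Random group selection proportional to average group weight,
        # then round robin within the chosen group.
        avg = [sum(w for _, w in g) / len(g) for g in self.groups]
        gi = self.rng.choices(range(len(self.groups)), weights=avg)[0]
        return self.groups[gi][next(self.cursors[gi])][0]

    def report_load(self, host, load, threshold=0.8):
        """Anti-saturation: a host running above the load threshold has
        its weight halved so it attracts fewer future requests."""
        for g in self.groups:
            for i, (h, w) in enumerate(g):
                if h == host and load > threshold:
                    g[i] = (h, w * 0.5)

lb = GroupedBalancer([[("a", 8.0), ("b", 8.0)], [("c", 2.0), ("d", 2.0)]])
picks = [lb.pick() for _ in range(1000)]
lb.report_load("a", load=0.95)  # "a" is hot: its weight drops from 8.0 to 4.0
```
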
    Parallel access strategy for big data objects based on RAMCloud
    CHU Zheng, YU Jiong, LU Liang, YING Changtian, BIAN Chen, WANG Yuefei
    2016, 36(6):  1526-1532.  DOI: 10.11772/j.issn.1001-9081.2016.06.1526
    RAMCloud only supports the storage of small objects no larger than 1 MB, so an object larger than 1 MB cannot be stored directly in a RAMCloud cluster. In order to overcome this storage limitation, a parallel access strategy for big data objects based on RAMCloud was proposed. Firstly, the big data object was divided into several small data objects within 1 MB, and a data summary was created at the client. The small data objects were then stored in the RAMCloud cluster through the parallel access strategy. On reading, the data summary was read first, then the small data objects were read in parallel from the RAMCloud cluster according to the summary and merged back into the big data object. The experimental results show that, without changing the architecture of the RAMCloud cluster, the storage time of the proposed parallel access strategy for big data objects can reach 16 to 18 μs and the reading time can reach 6 to 7 μs. Under the InfiniBand network framework, the speedup of the proposed parallel strategy increases almost linearly, which allows big data objects to be accessed rapidly and efficiently at the microsecond level, just like small data objects.
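The chunk-and-summary strategy can be sketched with an in-memory dictionary standing in for the RAMCloud cluster (the real RAMCloud client API is not used here); writes and reads of the sub-objects are issued in parallel through a thread pool.

```python
import concurrent.futures

CHUNK = 1 << 20  # RAMCloud's 1 MB object-size limit

class ChunkedStore:
    """Sketch of the chunk-and-summary strategy over a plain dict that
    simulates the key-value interface of a RAMCloud cluster."""

    def __init__(self, workers=8):
        self.kv = {}  # key -> bytes, simulating the cluster
        self.pool = concurrent.futures.ThreadPoolExecutor(workers)

    def put_big(self, key, data):
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        # Data summary: how many sub-objects make up the big object
        self.kv[key] = str(len(chunks)).encode()
        futures = [
            self.pool.submit(self.kv.__setitem__, f"{key}/{i}", c)
            for i, c in enumerate(chunks)
        ]
        concurrent.futures.wait(futures)

    def get_big(self, key):
        n = int(self.kv[key])  # read the summary first
        parts = list(self.pool.map(lambda i: self.kv[f"{key}/{i}"], range(n)))
        return b"".join(parts)  # merge the sub-objects back together

store = ChunkedStore()
blob = bytes(range(256)) * 16384  # 4 MiB, well above the 1 MB limit
store.put_big("video", blob)
```
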
    Network trust evaluation based on extension cloud
    MA Manfu, ZHANG Zhengfeng
    2016, 36(6):  1533-1537.  DOI: 10.11772/j.issn.1001-9081.2016.06.1533
    Aiming at the uncertain factors in network trust evaluation, against the background of secure trading in complex open networks, extension cloud theory was introduced. Using the matter-element theory of extenics, the uncertainty handling of the cloud model and the advantage of combining quantitative and qualitative analysis, an extension cloud-based network trust evaluation model was proposed, in which the transformation between qualitative and quantitative trust values was realized. On the basis of the proposed model, an evaluation method based on the extension cloud was put forward, so that the trust assessment of secure network trading can be achieved effectively and provide a sound basis for final trust decisions. The simulation results show that the proposed trust evaluation and scheduling algorithm improves the accuracy of trust evaluation and the transaction success rate in complex network environments, and alleviates network transaction entity fraud effectively. The proposed network trust extension method is effective and feasible.
    Real-time alert correlation approach based on attack planning graph
    ZHANG Jing, LI Xiaopeng, WANG Hengjun, LI Junquan, YU Bin
    2016, 36(6):  1538-1543.  DOI: 10.11772/j.issn.1001-9081.2016.06.1538
    Alert correlation approaches based on causal relationships cannot process massive alerts in time, and the attack scenario graphs they generate tend to split. In order to solve these problems, a novel real-time alert correlation approach based on the Attack Planning Graph (APG) was proposed. Firstly, the definitions of the APG and the Attack Planning Tree (APT) were presented. Then a real-time alert correlation algorithm based on the APG was proposed, which builds the APG model from prior knowledge to reconstruct attack scenarios. Finally, the attack scenario was completed and the attack was predicted by applying an alert inference mechanism. The experimental results show that the proposed approach is effective in processing massive alerts and rebuilding attack scenarios with better real-time performance. The proposed approach can be applied to analyzing intrusion intentions and guiding intrusion responses.
    Privacy protection algorithm based on trajectory shape diversity
    SUN Dandan, LUO Yonglong, FAN Guoting, GUO Liangmin, ZHENG Xiaoyao
    2016, 36(6):  1544-1551.  DOI: 10.11772/j.issn.1001-9081.2016.06.1544
    High similarity between the trajectories in an anonymity set may lead to trajectory privacy leaks. In order to solve the problem, a trajectory privacy preserving algorithm based on trajectory shape diversity was proposed. The existing pre-processing method was improved to reduce information loss through trajectory synchronization processing, and by l-diversity, trajectories with shape diversity were chosen as the members of the anonymity set during greedy clustering. Excessive shape similarity between the member trajectories of the set was thus prevented, defending against trajectory shape similarity attacks. The theoretical analysis and experimental results show that the proposed algorithm can realize k-anonymity and l-diversity of trajectories concurrently, reduce the running time and trajectory information loss, and increase trajectory data availability, achieving better privacy protection. The proposed algorithm can be effectively applied to privacy-preserving trajectory data publishing.
    Authentication scheme for mobile terminals based on user society relation
    HU Zhenyu, LI Zhihua, CHEN Chaoqun
    2016, 36(6):  1552-1557.  DOI: 10.11772/j.issn.1001-9081.2016.06.1552
    The existing authentication schemes based on user social relations have the problems that the user trust computation is not reasonable, the identity voucher lacks an authentication weight, and the authentication threshold cannot change with user familiarity. In order to solve these problems, a user social relation-based mobile terminal authentication scheme for the cloud computing environment was proposed. Firstly, user trust was calculated from two aspects: communication trust and attribute trust. Then, dynamic weights and dynamic authentication thresholds of identity vouchers were set according to user familiarity. Finally, the generation and certification processes of identity vouchers were improved. The experimental results show that the proposed scheme not only solves the problems of the existing authentication schemes based on user social relations, but also reduces the resource consumption of the mobile terminals to only a third of that of the existing methods. Therefore, the proposed scheme is more suitable for the mobile cloud computing environment.
    Certificateless aggregate signcryption scheme based on bilinear pairings
    LIU Jianhua, MAO Kefei, HU Junwei
    2016, 36(6):  1558-1562.  DOI: 10.11772/j.issn.1001-9081.2016.06.1558
    Signcryption is a cryptographic primitive that provides message confidentiality and sender authentication in a single logical step. In order to improve the computational efficiency of certificateless aggregate signcryption based on bilinear pairings, a new CertificateLess Aggregate SignCryption (CLASC) scheme based on bilinear pairings was proposed. In the proposed scheme, any user can act as an aggregator and initiate the signcryption protocol. After the signcryption ciphertexts are generated by the users, they are sent to the aggregator to be aggregated into a single ciphertext. The scheme was proved to be existentially unforgeable and confidential in the random oracle model through security analysis. The comparison results show that the proposed scheme requires only one pairing operation per signcrypting user, which improves the computational efficiency and makes the scheme suitable for applications with high real-time requirements.
    Construction of balanced Boolean functions using plateaued functions
    ZHANG Yiyi, MENG Fanrong, ZHANG Fengrong, SHI Jihong
    2016, 36(6):  1563-1566.  DOI: 10.11772/j.issn.1001-9081.2016.06.1563
    Boolean functions play an important role in the design and analysis of symmetric ciphers. Firstly, by studying the balancedness of the subfunctions of a disjoint spectra function set, some sufficient conditions were provided for a set of four plateaued functions to contain three balanced Boolean functions. Then, based on three balanced disjoint spectra plateaued functions, a special Boolean permutation and a balanced Boolean function with high nonlinearity, a method of constructing balanced Boolean functions with high nonlinearity on a small number of variables was proposed. The analysis shows that the proposed method can construct 2k-variable balanced Boolean functions with optimal algebraic degree and nonlinearity not less than 2^(2k-1) - 2^(k-1) - 2^(k/2) - 2^(⌈(k-1)/2⌉).
    Fine-grained data randomization technique based on field-sensitive pointer analysis
    MAN Yujia, YIN Qing, ZHU Xiaodong
    2016, 36(6):  1567-1572.  DOI: 10.11772/j.issn.1001-9081.2016.06.1567
    Concerning the low precision of static analysis in traditional data randomization techniques, a Fine-Grained Data Randomization (FGDR) technique based on field-sensitive pointer analysis was proposed. During static analysis, the syntax of the intermediate representation was first abstracted to obtain a formal statement representation. Then, a non-standard type inference system was established to describe the points-to relationships between variables. Finally, the field-sensitive points-to relationships were solved by implementing type inference based on the type rules. Guided by the points-to relationships, the intermediate representation was encrypted with randomization and translated into a randomized executable program. The experimental results indicate that, compared with existing data randomization techniques, the proposed technique improves the precision of the analysis: its processing time increased by 2% while the run time decreased by 3% on average. The proposed technique brings little overhead to the executable program and can effectively increase its defense capability through the field-sensitive pointer analysis algorithm.
    Semi-synchronous communities detection algorithm based on label influence
    WANG Yan, HUANG Faliang, YUAN Chang'an
    2016, 36(6):  1573-1578.  DOI: 10.11772/j.issn.1001-9081.2016.06.1573
    Discovering communities in fast-growing large-scale interactive social information networks such as Weibo is a great challenge. Although the Label Propagation Algorithm (LPA) has a great advantage in time complexity, its inherent multiple random strategies make the algorithm unstable. In order to solve the problem, a semi-synchronous label propagation algorithm named Influence-driven Semi-synchronous Label Propagation Algorithm (ISLPA) was proposed. By abandoning the original random strategies and integrating node influence into label initialization, neighbor node selection and update order determination, propagation oscillation was avoided effectively and synchronous updating between neighbor nodes was realized. The experimental results on real-world and artificial networks indicate that, in terms of the validity and stability of the generated communities, the proposed ISLPA outperforms the typical LPAs currently used in community detection.
    Web service recommendation for user group
    XIE Qi, CUI Mengtian
    2016, 36(6):  1579-1582.  DOI: 10.11772/j.issn.1001-9081.2016.06.1579
    The sparsity of the Web service Quality of Service (QoS) data generated by service users' invocations may lead to low recommendation quality in Web service recommendation. In order to solve the problem, a collaborative filtering based Web service Recommendation algorithm for User Groups (WRUG) was proposed. Firstly, a personalized similar-user group was constructed for each service user according to the user similarity matrix. Secondly, the center of each similar-user group, instead of the group itself, was employed to compute the user group similarity matrix. Finally, a Web service recommendation equation for user groups was defined and the missing QoS values of Web services were predicted for the target user. Experiments were conducted on a dataset of 1.97 million real-world Web QoS invocation records. Compared with the Traditional Collaborative Filtering algorithm (TCF) and the Collaborative Filtering recommendation algorithm Based on User Group Influence (CFBUGI), the mean absolute error of the proposed WRUG was decreased by 28.9% and 4.57% respectively, and its coverage rate was increased by 110% and 22.5% respectively. The experimental results show that the proposed WRUG can not only achieve better prediction accuracy but also noticeably increase the percentage of valuable predicted QoS values under the same experimental settings.
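A minimal sketch of the group-center idea: build the target user's similar-user group from a similarity measure, collapse the group to its center (mean QoS vector), and predict the target's missing QoS values from that center. Pearson similarity over co-invoked services and the group size `k` are illustrative assumptions, not the paper's exact WRUG equations.

```python
def pearson(u, v):
    """Similarity over the QoS values of services both users have invoked."""
    common = [s for s in u if s in v]
    if len(common) < 2:
        return 0.0
    mu = sum(u[s] for s in common) / len(common)
    mv = sum(v[s] for s in common) / len(common)
    num = sum((u[s] - mu) * (v[s] - mv) for s in common)
    du = sum((u[s] - mu) ** 2 for s in common) ** 0.5
    dv = sum((v[s] - mv) ** 2 for s in common) ** 0.5
    return num / (du * dv) if du and dv else 0.0

def group_center(group, qos):
    """Represent a similar-user group by its center: the mean QoS vector."""
    services = {s for u in group for s in qos[u]}
    return {s: sum(qos[u][s] for u in group if s in qos[u]) /
               max(1, sum(1 for u in group if s in qos[u]))
            for s in services}

def predict(target, qos, k=2):
    """Predict the target's missing QoS values from the group center
    instead of from every group member individually."""
    sims = sorted(((pearson(qos[target], qos[u]), u)
                   for u in qos if u != target), reverse=True)
    group = [u for _, u in sims[:k]]
    center = group_center(group, qos)
    return {s: v for s, v in center.items() if s not in qos[target]}

qos = {"u1": {"s1": 1.0, "s2": 2.0, "s3": 3.0},
       "u2": {"s1": 1.1, "s2": 2.1, "s3": 3.1, "s4": 4.0},
       "u3": {"s1": 1.2, "s2": 2.2, "s4": 4.2},
       "u4": {"s1": 9.0, "s2": 1.0}}
pred = predict("u1", qos)  # u1 never invoked s4; its QoS is predicted
```
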
    Novel survival of the fittest shuffled frog leaping algorithm with normal mutation
    ZHANG Mingming, DAI Yueming, WU Dinghui
    2016, 36(6):  1583-1587.  DOI: 10.11772/j.issn.1001-9081.2016.06.1583
    To overcome the demerits of the basic Shuffled Frog Leaping Algorithm (SFLA), such as slow convergence, low optimization precision and easily falling into local optima, a novel survival-of-the-fittest SFLA with normal mutation was proposed. In the local search strategy, a normal mutation was introduced into the update of the worst frog individuals in each subgroup, which effectively prevents premature local convergence, expands the search space and increases population diversity. Meanwhile, mutations were applied selectively to a small number of poor frog individuals in each subgroup so that useful mutations were inherited and bad mutations were discarded; this survival of the fittest improves the quality of the population, reduces the blindness of the optimization process and speeds up optimization. An elite mutation mechanism for the best frog individual in each subgroup was also introduced to obtain better individuals, further enhancing the global optimization ability, avoiding local convergence and leading the whole population to evolve toward better solutions. The results of 30 independent runs indicate that the proposed algorithm converges to the optimal solution of 0 on the Sphere, Rastrigin, Griewank, Ackley and Quadric functions, outperforming the other contrastive algorithms. The experimental results show that the proposed algorithm can avoid premature convergence effectively and improve the convergence speed and precision.
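The worst-frog update with normal mutation and survival of the fittest can be sketched as a single local-search step: move the worst frog toward the best frog, perturb the move with Gaussian noise, and keep the candidate only if it improves fitness. The step form and `sigma` are illustrative assumptions, not the paper's exact update rule.

```python
import random

def sfla_local_update(worst, best, fitness, sigma=0.1, rng=random.Random(1)):
    """One local-search step: leap the worst frog toward the best frog
    with a normal mutation, and accept the move only if it improves
    fitness (survival of the fittest, minimization)."""
    candidate = [
        w + rng.random() * (b - w) + rng.gauss(0.0, sigma)
        for w, b in zip(worst, best)
    ]
    return candidate if fitness(candidate) < fitness(worst) else worst

def sphere(x):
    """Sphere benchmark: global optimum 0 at the origin."""
    return sum(v * v for v in x)

frog = [2.0, -2.0]
for _ in range(200):
    frog = sfla_local_update(frog, [0.1, 0.1], sphere)
```

Because rejected candidates are discarded, the fitness of `frog` never gets worse across the 200 steps.
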
    Improved self-organized criticality optimized gray wolf optimizer metaheuristic algorithm
    XU Dayu, LIU Renping
    2016, 36(6):  1588-1593.  DOI: 10.11772/j.issn.1001-9081.2016.06.1588
    Focusing on the issue that the novel metaheuristic Gray Wolf Optimizer (GWO) easily falls into local optima when searching for the global optimal solution, its ability to obtain the global optimal solution was enhanced. The fundamental principles and modeling process of GWO were introduced first. On this basis, combined with the advantages of self-organized criticality theory, an Improved Extremes Optimization (IEO) algorithm was proposed, and IEO was integrated into the GWO model to construct a Self-Organized Criticality (SOC) optimization algorithm named IEO-GWO. A comprehensive comparison with traditional optimization algorithms on 23 benchmark test functions verified the superior ability of the IEO-GWO model in searching for global optimal values.
    Co-clustering recommendation algorithm based on parallel factorization decomposition
    DING Xiaohuan, PENG Furong, WANG Qiong, LU Jianfeng
    2016, 36(6):  1594-1598.  DOI: 10.11772/j.issn.1001-9081.2016.06.1594
    Aiming at the complexity of the internal relations of triple data, a co-clustering recommendation model based on PARAllel FACtor (PARAFAC) decomposition was proposed. PARAFAC tensor decomposition was used to mine the relations and latent topics between the entities of multidimensional data. Firstly, the triple tensor data was clustered using the PARAFAC decomposition algorithm. Secondly, three recommendation models with different schemes based on the co-clustering algorithm were proposed and compared experimentally to obtain the optimal recommendation model. Finally, the proposed co-clustering recommendation model was compared with the Higher Order Singular Value Decomposition (HOSVD) model. Compared with the HOSVD tensor decomposition algorithm, the PARAFAC co-clustering algorithm increased the recall rate and precision by 9.8 percentage points and 3.7 percentage points on average on the last.fm data set, and by 11.6 percentage points and 3.9 percentage points on average on the delicious data set. The experimental results show that the proposed algorithm can effectively mine the latent information and internal relations of tensors, and achieve recommendations with high precision and high recall.
    Similarity measure method for 3D CAD master model based on Web ontology language
    ZHONG Yanru, LIANG Yifang, XU Bensheng, ZENG Congwen, LU Hongcheng, WU Fan, ZHAO Zhengjun
    2016, 36(6):  1599-1604.  DOI: 10.11772/j.issn.1001-9081.2016.06.1599
    To promote the model reuse efficiency of 3D Computer Aided Design (CAD), and aiming at the weakness of semantic expression in previous 3D model retrieval systems, a similarity measure method based on the Web Ontology Language (OWL) semantic representation of models was presented. Firstly, the 3D CAD master model was transformed into a structured representation model whose basic semantic objects were class-property features. Then, the feature semantic information used to match two models was extracted from the OWL representation model as the quantitative similarity unit, and a total-weight similarity measure combining subgraph isomorphism and the Tversky algorithm was proposed. Finally, experiments verified the feasibility and effectiveness of the proposed method. The comprehensive quantitative assessments show that the proposed method switches the evaluation benchmark from the object itself to the set of semantic descriptions of the two objects' properties, and can objectively reflect the degree of similarity between the two compared models.
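The Tversky index mentioned above compares two feature sets asymmetrically. A minimal sketch (the parameter values are ours; the paper's weighting may differ):

```python
def tversky(a, b, alpha=0.5, beta=0.5):
    """Tversky similarity between two semantic feature sets.

    alpha and beta weight the features unique to a and to b respectively;
    alpha = beta = 0.5 reduces to the Dice coefficient,
    alpha = beta = 1.0 to the Jaccard index.
    """
    a, b = set(a), set(b)
    common = len(a & b)
    denom = common + alpha * len(a - b) + beta * len(b - a)
    return common / denom if denom else 1.0
```

For example, two CAD models sharing one of two labeled features each would score `tversky({'hole', 'slot'}, {'hole', 'boss'}, 1, 1)` = 1/3 under Jaccard weighting.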
    Burst-event evolution process expression based on short-text
    CHEN Xue, HU Xiaofeng, XU Hao
    2016, 36(6):  1605-1612.  DOI: 10.11772/j.issn.1001-9081.2016.06.1605
    Current short-text analysis methods cannot describe the evolution process of a burst event simply and accurately. To solve this problem, a new method for expressing the evolution of burst events based on short-text datasets was proposed. Firstly, a measure of event status was proposed to describe the state of an event at each moment, in order to analyze the development process of the event. Secondly, according to the structured information of short texts, the event status value was set from two aspects: text information and user information. Thirdly, considering the impact factor of text information, the weight of text information was calculated by constructing the related formulas. Fourthly, considering the impact factor of user information, a modified PageRank algorithm was proposed, and users were divided into different layers to calculate the weight of user information. Finally, the text information weight and user information weight were combined to calculate the event status value. The experimental results show that considering user information, the modified PageRank algorithm and the idea of dividing users into layers can each correct 1-2 points of the description and improve the accuracy of expressing the evolution process of an event.
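The user-weighting step builds on PageRank. For reference, here is the classic power-iteration form that the paper's modified variant starts from (a baseline sketch, not the paper's modification; the dangling-node handling is a common convention we assume):

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank over a user interaction graph.

    adj[i][j] = 1 means user i links to (e.g. retweets) user j.
    Dangling nodes distribute their rank uniformly.
    """
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    out = A.sum(axis=1)
    M = np.where(out[:, None] > 0, A / np.maximum(out, 1)[:, None], 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M.T @ r)   # teleport + link-following mass
    return r
```

The resulting rank vector orders users by influence, which the proposed method then coarsens into layers before computing the user-information weight.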
    Sentiment analysis research based on combination of naive Bayes and latent Dirichlet allocation
    SU Ying, ZHANG Yong, HU Po, TU Xinhui
    2016, 36(6):  1613-1618.  DOI: 10.11772/j.issn.1001-9081.2016.06.1613
    Manually labeled corpora are generally a critical resource for sentiment analysis. To avoid laborious annotation, an unsupervised hierarchical generative model for sentiment analysis was presented, based on the combination of Naive Bayes (NB) and Latent Dirichlet Allocation (LDA) and named NB-LDA. Requiring only a suitable sentiment dictionary, the model analyzed the emotional tendencies of online comments at sentence level and document level simultaneously, without sentence-level or document-level annotations. In particular, the proposed model assumed that each sentence, instead of each word, had a latent sentiment label, and that the sentiment label generated a series of features for the sentence independently in the NB manner. Thanks to the NB assumption, the model could incorporate advanced Natural Language Processing (NLP) technologies such as dependency parsing and syntactic parsing to improve unsupervised sentiment analysis. The experimental results on two sentiment corpora show that the proposed NB-LDA can automatically derive sentence-level and document-level emotional polarities, and significantly improves the accuracy of sentiment analysis compared with other unsupervised methods. Moreover, as an unsupervised model, NB-LDA achieves performance comparable to some supervised or semi-supervised methods.
    Multi-view kernel K-means algorithm based on entropy weighting
    QIU Baozhi, HE Yanfang, SHEN Xiangdong
    2016, 36(6):  1619-1623.  DOI: 10.11772/j.issn.1001-9081.2016.06.1619
    In multi-view clustering based on view weighting, the weight of each view greatly influences clustering accuracy. Aiming at this problem, a multi-view clustering algorithm named Entropy Weighting Multi-view Kernel K-means (EWKKM) was proposed, which assigned a reasonable weight to each view so as to reduce the influence of noisy or irrelevant views and thereby improve clustering accuracy. In EWKKM, each view was first represented by a kernel matrix and assigned a weight. Then, the weight of each view was calculated from its information entropy. Finally, the weights were optimized according to the defined objective function, and multi-view clustering was conducted using the kernel K-means method. Experiments were done on UCI datasets and a real-world dataset. The experimental results show that the proposed EWKKM can assign the optimal weight to each view, and achieves higher clustering accuracy and more stable clustering results than existing clustering algorithms.
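The entropy-based weighting idea can be sketched as follows. This is our own illustrative variant of the entropy-weight method (views whose feature distributions are nearly uniform carry little discriminative information and get small weights); the paper's exact formula, which operates on kernel matrices, may differ:

```python
import numpy as np

def entropy_weights(views):
    """Assign a weight to each view from the information entropy of its features.

    views: list of (n, d_v) feature matrices, one per view.
    Each column is normalized to a distribution; a view with low average
    normalized entropy (more concentrated features) receives a larger weight.
    """
    ents = []
    for X in views:
        P = np.abs(X) / np.maximum(np.abs(X).sum(axis=0, keepdims=True), 1e-12)
        col_ent = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(max(X.shape[0], 2))
        ents.append(col_ent.mean())           # average normalized entropy in [0, 1]
    w = 1.0 - np.array(ents)                  # low entropy -> high weight
    return w / w.sum()
```

The weights would then scale each view's kernel matrix before the kernel K-means step.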
    Regularized neighborhood preserving embedding algorithm based on QR decomposition
    ZHAI Dongling, WANG Zhengqun, XU Chunlin
    2016, 36(6):  1624-1629.  DOI: 10.11772/j.issn.1001-9081.2016.06.1624
    When training samples are insufficient, the estimation of the low-dimensional subspace may deviate seriously. To solve this problem, a novel regularized neighborhood preserving embedding algorithm based on QR decomposition was proposed. Firstly, a local Laplacian matrix was defined to preserve the local structure of the original data. Secondly, the eigenspectrum space of the within-class scatter matrix was divided into three subspaces, and a new eigenvector space was obtained through a weight function defined by the inverse spectrum model, thus accomplishing the preprocessing of the high-dimensional data. Finally, a neighborhood preserving adjacency matrix was defined, and the projection matrix obtained by QR decomposition together with the nearest neighbor classifier was used for face recognition. Compared with the Regularized Generalized Discriminant Locality Preserving Projection (RGDLPP) algorithm, the recognition accuracy of the proposed method was increased by 2, 1.5, 1.5 and 2 percentage points on the ORL, Yale, FERET and PIE databases respectively. The experimental results show that the proposed algorithm is easy to implement and achieves a relatively high recognition rate under the Small Sample Size (SSS) condition.
    Gobang game algorithm based on LabVIEW
    MAO Limin, ZHU Peiyi, LU Zhenli, PENG Weiwei
    2016, 36(6):  1630-1633.  DOI: 10.11772/j.issn.1001-9081.2016.06.1630
    Current research on Gobang man-machine games is mostly based on computers or mobile phones, lacking real physical environments. To solve this problem, a game algorithm based on the Laboratory Virtual Instrument Engineering Workbench (LabVIEW) was proposed and applied to a Gobang man-machine game in a real environment. Firstly, the state information of the board and the locations of both players' pieces in the current state were obtained by the image acquisition system, and then the game situation was analyzed. To improve the efficiency of play, chess patterns were classified, and the original game algorithm was improved by using two weights, attack and defense, to simplify the decision-making process. The experimental results of real game tests show that the proposed LabVIEW-based algorithm can realize Gobang man-machine play quickly and accurately.
    Optimized clustering algorithm based on density of hierarchical division
    PANG Lin, LIU Fang'ai
    2016, 36(6):  1634-1638.  DOI: 10.11772/j.issn.1001-9081.2016.06.1634
    Traditional clustering algorithms cluster a dataset repeatedly and have poor computational efficiency on large datasets. To solve this problem, a novel algorithm based on hierarchical partition, named Clusters Optimization based on Density of Hierarchical Division (CODHD), was proposed to determine the optimal number of clusters and the initial cluster centers; its computational process does not need to cluster the dataset repeatedly. First, all statistical values of clustering features were obtained by scanning the dataset. Second, data partitions of different levels were generated bottom-up: the density of each partition's data points was calculated, the maximum-density point of each partition was taken as its initial center, and the minimum distance from each center to a higher-density data point was calculated. The average of the sum of the products of each center's density and minimum distance was taken as the validity index, and a clustering quality curve over the different hierarchical divisions was built incrementally. Finally, the optimal number of clusters and the initial centers were estimated from the partition corresponding to the extreme points of the curve. The experimental results demonstrate that, compared with Clusters Optimization on Preprocessing Stage (COPS), the proposed CODHD improved clustering accuracy by 30% and efficiency by at least 14.24%. The proposed algorithm has strong feasibility and practicability.
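The density-times-minimum-distance score above resembles the density-peaks idea, which can be sketched as follows (our own minimal flat-data version with a Gaussian-kernel density, not CODHD's hierarchical procedure):

```python
import numpy as np

def pick_centers(X, k, bandwidth=1.0):
    """Choose k initial centers: points with high local density that are
    also far from any denser point, ranked by the product density * distance."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    rho = np.exp(-(D / bandwidth) ** 2).sum(axis=1)       # local density
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = rho > rho[i]
        # distance to the nearest denser point (global max for the densest point)
        delta[i] = D[i, higher].min() if higher.any() else D[i].max()
    gamma = rho * delta                                   # validity score
    return X[np.argsort(gamma)[-k:]]
```

A point scores highly only if it is both dense and isolated from denser points, so the top-k points naturally fall one per cluster.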
    Temporal similarity algorithm of coarse-granularity based dynamic time warping
    CHEN Mingwei, SUN Lihua, XU Jianfeng
    2016, 36(6):  1639-1644.  DOI: 10.11772/j.issn.1001-9081.2016.06.1639
    The Dynamic Time Warping (DTW) algorithm cannot maintain high classification accuracy while improving computation speed. To solve this problem, a Coarse-Granularity based Dynamic Time Warping (CG-DTW) algorithm based on the idea of naive granular computing was proposed. First, suitable temporal granularities were obtained by computing temporal variance features, and the original series were replaced by granularity features. Then, the relatively optimal corresponding temporal granularity was obtained by executing DTW while dynamically adjusting the inter-granular elasticity of the compared granularities. Finally, the DTW distance was calculated for the corresponding optimal granularity. During this process, an early termination strategy based on a lower-bound function was introduced to further improve the efficiency of CG-DTW. The experimental results show that the proposed algorithm runs about 21.4% faster than the classical algorithm and is about 32.3 percentage points more accurate than dimension-reduction strategy algorithms. Especially for long time series, CG-DTW balances high computing speed with good classification accuracy, and in practical applications it can adapt to the classification of long time series of uncertain length.
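For reference, the classic O(nm) dynamic-programming DTW that CG-DTW accelerates can be sketched as (a baseline sketch with absolute-difference cost; CG-DTW runs this on coarse granules plus lower-bound pruning):

```python
import numpy as np

def dtw(x, y):
    """Classic DTW distance between two 1-D series, using absolute
    difference as the local cost and full O(len(x)*len(y)) DP."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of: match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path may repeat elements, locally stretched series still match exactly, e.g. `dtw([1, 2, 3], [1, 2, 2, 3])` is 0.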
    Algorithm for lifting temporal consistency QoS improvement of real-time data objects based on deferrable scheduling
    YU Ge, FENG Shan
    2016, 36(6):  1645-1649.  DOI: 10.11772/j.issn.1001-9081.2016.06.1645
    Concerning the application problem of existing scheduling algorithms for guaranteeing the temporal consistency of real-time data objects in soft real-time database systems, a Statistical Deferrable Scheduling-OPTimization (SDS-OPT) algorithm was proposed. First, the characteristics and shortcomings of existing algorithms were analyzed and compared in terms of scheduling, Quality of Service (QoS) and workload, and the necessity of optimizing them was pointed out. Second, to maximize the QoS of temporal consistency for real-time data objects by increasing the number of schedulable jobs of real-time update transactions, the steepest descent method was used to increase the reference value of the screening benchmark for job execution time. Finally, the proposed algorithm was compared with existing algorithms in terms of workload and QoS. The experimental results show that, compared with the Deferrable Scheduling algorithm for Fixed Priority transactions (DS-FP) and the Deferring Scheduling-Probability Statistic algorithm (DS-PS), the proposed algorithm can effectively guarantee the temporal consistency of real-time data objects and reduce the workload, while significantly improving QoS.
    Tracking mechanism of service provenance based on graph
    LUO Bo, LI Tao, WANG Jie
    2016, 36(6):  1650-1653.  DOI: 10.11772/j.issn.1001-9081.2016.06.1650
    Service provenance data stored in relational or document databases cannot support effective service tracking operations, while graph database storage cannot execute rapid aggregation operations. To solve these problems, a new graph-based service provenance tracking mechanism was proposed. On the basis of graph database storage, the storage structure of service provenance in the graph database was defined, and an aggregation operation for this storage structure was provided. Then three different service provenance tracking models were constructed, based respectively on static weights, mixed operations and real-time tasks. The experimental results show that the proposed mechanism can meet the different query requirements, such as aggregation and tracking, of different types of service provenance data, reduces the time consumed by service tracking, and improves the tracking efficiency of service provenance.
    Image super-resolution reconstruction based on local regression model
    LI Xin, CUI Ziguan, SUN Linhui, ZHU Xiuchang
    2016, 36(6):  1654-1658.  DOI: 10.11772/j.issn.1001-9081.2016.06.1654
    Image Super-Resolution (SR) algorithms based on sparse reconstruction generally require external training samples, and their reconstruction quality depends on the similarity between the image to be reconstructed and the training samples. To solve this problem, an image super-resolution reconstruction algorithm based on a local regression model was proposed. Using the fact that local image structures repeat at corresponding positions across image scales, a first-order approximation of the nonlinear mapping function from low- to high-resolution image patches was built for super-resolution reconstruction. The prior model of the nonlinear mapping function was established by applying dictionary learning to the in-place example pairs formed from the input image and its low-frequency band image. During reconstruction, the non-local self-similarity of the image was exploited: the first-order regression model was applied to multiple non-local self-similar patches, and the high-resolution patch was obtained by weighted summation. The experimental results show that, compared with other super-resolution algorithms that also exploit image self-similarity, the proposed algorithm increases the average Peak Signal-to-Noise Ratio (PSNR) of the reconstructed images by 0.3-1.1 dB, and its subjective reconstruction quality is also significantly improved.
    Extraction algorithm of multi-view matching points using union find
    LU Jun, ZHANG Baoming, GUO Haitao, CHEN Xiaowei
    2016, 36(6):  1659-1663.  DOI: 10.11772/j.issn.1001-9081.2016.06.1659
    The extraction of multi-view matching points is one of the key problems in the 3D reconstruction of multi-view image scenes, and the extraction result directly affects the accuracy of the reconstruction. The extraction problem was converted into a dynamic connectivity problem, and a Union-Find (UF) method was designed. The nodes of the UF structure were organized in an efficient parent-link tree, so that when matching points are added only the addressing parameters of a single node need to be modified, which avoids recomputing addressing parameters by traversing the array and improves the efficiency of locating and modification. A weighting strategy was used to optimize the algorithm: weighted encoding replaced the conventional hard encoding, which balances the tree structure and reduces the average depth of the dendrogram. The experimental results on multiple image sets show that the proposed UF-based algorithm extracts more multi-view matching points and is more efficient than the conventional Breadth-First Search (BFS) algorithm.
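The weighted parent-link structure described above corresponds to the standard weighted quick-union with path compression, sketched below (a textbook sketch we supply; pairwise feature matches across views are merged into connected components, each component being one multi-view matching point):

```python
class UnionFind:
    """Weighted quick-union with path compression: merging two matched
    feature points touches only a few parent links instead of scanning
    and rewriting an id array."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path compression
            i = self.parent[i]
        return i

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:   # weighting: hang the smaller
            ra, rb = rb, ra                 # tree under the larger root
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
```

Attaching the smaller tree under the larger keeps the trees shallow, which is exactly the "reduce the average depth of the dendrogram" effect cited in the abstract.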
    Global point cloud registration algorithm based on translation domain estimating
    YANG Binhua, ZHAO Gaopeng, LIU Lujiang, BO Yuming
    2016, 36(6):  1664-1667.  DOI: 10.11772/j.issn.1001-9081.2016.06.1664
    The Iterative Closest Point (ICP) algorithm requires a good initialization of the two point clouds; otherwise it may easily get trapped in a local optimum. To solve this problem, a novel global point cloud registration algorithm based on translation domain estimation was proposed. The translation domain was estimated from the axis-aligned bounding boxes computed from the principal point sets of the data and model point clouds. With the estimated translation domain and the [-π, π]³ rotation domain, an improved globally optimal ICP was used for global search and registration. The proposed algorithm estimates the translation domain adaptively and registers globally according to the point clouds to be registered. The registration process does not need to calculate feature information of the point clouds, is efficient for any initialization, and requires few parameters. The experimental results show that the proposed algorithm obtains accurate, globally optimal registration results automatically and also improves the efficiency of global registration.
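A simple bounding-box-based translation-domain estimate can be sketched as follows (our own illustrative version: the centroid offset gives a nominal translation, and the combined box half-extents bound how far the true translation can deviate from it; the paper's estimator may be tighter):

```python
import numpy as np

def translation_domain(data, model):
    """Estimate a per-axis search interval [lo, hi] for the translation
    between two point clouds from centroids and axis-aligned bounding boxes."""
    t0 = model.mean(axis=0) - data.mean(axis=0)      # nominal translation
    half = 0.5 * ((model.max(axis=0) - model.min(axis=0)) +
                  (data.max(axis=0) - data.min(axis=0)))  # combined half extents
    return t0 - half, t0 + half
```

The branch-and-bound search of a globally optimal ICP then only needs to explore this interval instead of an unbounded translation space.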
    New image retrieval method based on weighted color stratification and texture unit
    ZHAI Minghan, GAO Ling
    2016, 36(6):  1668-1672.  DOI: 10.11772/j.issn.1001-9081.2016.06.1668
    Using a single color or texture feature cannot achieve a satisfactory image retrieval effect. To solve this problem, a new image retrieval method combining color and texture features was proposed. The color component was processed from both micro and macro perspectives. In the micro part, a color histogram was used to describe the proportion of pixels of each color in the whole image. In the macro part, color entropy and bit-plane entropy were used to process the image in order to exclude images obviously different from the target image. The first 4 bit-plane layers with distinct features were selected, and a different weight was given to the bit-plane entropy of each layer. Finally, according to the color value of each pixel and the angle values of the five defined basic texture elements, combined with the color feature, image retrieval was achieved. The experimental results on the Corel-1000 dataset show that, compared with unweighted bit-plane entropy, weighted bit-plane entropy increased the average precision and recall by 10.01 and 1.2 percentage points respectively. Moreover, compared with the texture-only Structure Elements' Descriptor (SED) method, the proposed method improved the average precision and recall by 4.3 and 2.1 percentage points respectively on the Corel-10000 dataset. The proposed method can effectively improve the effectiveness of image retrieval.
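The bit-plane entropy feature used in the macro part can be sketched as follows (a minimal sketch we supply: slice out one bit plane of an 8-bit image and compute the Shannon entropy of its binary distribution):

```python
import numpy as np

def bitplane_entropy(img, plane):
    """Shannon entropy (in bits) of one bit plane of an 8-bit integer image.

    plane 0 is the least significant bit, plane 7 the most significant.
    """
    bits = (img >> plane) & 1          # extract the requested bit plane
    p1 = bits.mean()                   # fraction of 1-bits
    ent = 0.0
    for p in (p1, 1 - p1):
        if p > 0:
            ent -= p * np.log2(p)
    return ent
```

A nearly uniform plane (half ones, half zeros) has entropy close to 1 bit, while a constant plane has entropy 0; weighting the first four planes emphasizes the structurally informative ones.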
    Quality evaluation method for color restoration image
    LI Na, ZHOU Pengbo, GENG Guohua, JIA Hui
    2016, 36(6):  1673-1676.  DOI: 10.11772/j.issn.1001-9081.2016.06.1673
    Aiming at the quality evaluation of color-restored images in the digital protection of faded cultural relics, objective quality evaluation methods were studied. Combining the computational advantages of Peak Signal-to-Noise Ratio (PSNR) with the structural characteristics of human visual feature information entropy, a color image quality evaluation method based on the information entropy of visual features was proposed. A weighted quality evaluation function and the corresponding evaluation procedure were established, with the weights determined by normalization. The function value comparing the similarity between the color-restored image and the reference color image was then calculated: the smaller the value, the higher the similarity and the better the quality of the color-restored image, which can be used to judge restoration methods objectively. The quality evaluation parameters of four restoration methods with different performance were compared. The experimental results show that the evaluation results are consistent with the subjective perception of human eyes, and the proposed method is effective.
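The PSNR building block of the evaluation function is standard and can be sketched as (the visual-feature entropy term of the paper's combined metric is not reproduced here):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and a
    restored image; higher means more similar."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Identical images give infinite PSNR; the worst case for 8-bit images (every pixel off by the full range) gives 0 dB.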
    Motion feature extraction of random-dot video sequences based on V1 model of visual cortex
    ZOU Hongzhong, XU Yuelei, MA Shiping, LI Shuai, ZHANG Wenda
    2016, 36(6):  1677-1681.  DOI: 10.11772/j.issn.1001-9081.2016.06.1677
    Focusing on target motion feature extraction from video sequences in complex scenes, and drawing on the motion perception of the biological visual system, the traditional primary Visual cortex (V1) cell model was improved and a novel method for random-dot motion feature extraction based on the mechanism of the biological visual cortex was proposed. Firstly, a spatio-temporal filter and a half-squaring operation combined with normalization were adopted to simulate the linear and nonlinear properties of the neuron's receptive field. Then, a universal V1 cell model was obtained by adding an adjustable direction-selectivity parameter to the output weights, which solved the traditional model's problems of single direction selectivity and inability to respond correctly to multi-directional motion. The simulation results show that the analog outputs of the proposed model are almost consistent with biological experimental data, indicating that the model can simulate V1 neurons of different direction selectivities and extract motion features well from random-dot video sequences with complex motion patterns. The proposed method provides a new idea for processing optical flow feature information, and can effectively extract the motion features of video sequences and track their objects.
    Optimized vector of locally aggregated descriptor algorithm in image retrieval based on minimized reconstruction error
    HUANG Xiujie, CHEN Jing, ZHANG Yunchao
    2016, 36(6):  1682-1687.  DOI: 10.11772/j.issn.1001-9081.2016.06.1682
    Aiming at the uncertain weight coefficients and large quantization errors in the soft assignment of feature quantization in the Vector of Locally Aggregated Descriptor (VLAD) model, an efficient soft-assignment weighting algorithm based on minimized reconstruction error was proposed. Taking the minimized reconstruction error as the criterion, the sparse coding coefficients that minimize the reconstruction error were used as the weights of the VLAD soft assignment. The image retrieval results on the test database show that, compared with mainstream VLAD feature coding algorithms, the retrieval accuracy of the proposed algorithm is improved by about 10%, and the proposed algorithm obtains a smaller feature reconstruction error.
    Application of scale-invariant feature transform algorithm in image feature extraction
    LIN Tao, HUANG Guorong, HAO Shunyi, SHEN Fei
    2016, 36(6):  1688-1691.  DOI: 10.11772/j.issn.1001-9081.2016.06.1688
    The high complexity and long computation time of the Scale-Invariant Feature Transform (SIFT) algorithm cannot meet the real-time requirements of stereo matching, and its mismatching rate is high when an image contains many similar regions. To solve these problems, an improved stereo matching algorithm was proposed, with improvements in two aspects. Firstly, since the circle has natural rotation invariance, the feature point was taken as the center and the rectangular region of the original algorithm was replaced by two concentric circular regions of similar size. The gradient accumulation values of 12 directions were calculated within the inner circle and the outer ring respectively, reducing the dimension of the local feature descriptor from 128 to 24. Then, a 12-dimensional global vector was added, so that the generated descriptor contained both the SIFT vector based on local information and the global vector based on global information, which improved the discriminative power of the algorithm for images with similar areas. The simulation results show that, compared with the original algorithm, the real-time performance of the proposed algorithm was improved by 59.5% and the mismatching rate was decreased by 9 percentage points when the image had many similar regions. The proposed algorithm is suitable for image processing with high real-time requirements.
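The 12-direction gradient accumulation at the core of the descriptor can be sketched as follows (our own minimal version over a square patch; the paper accumulates separately over the inner circle and the outer ring):

```python
import numpy as np

def direction_histogram(patch, bins=12):
    """Accumulate gradient magnitudes into `bins` orientation bins
    over an image patch."""
    gy, gx = np.gradient(patch.astype(float))            # per-axis gradients
    mag = np.hypot(gx, gy)                               # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)          # orientation in [0, 2*pi)
    idx = np.minimum((ang / (2 * np.pi) * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())            # magnitude-weighted bins
    return hist
```

Computing this 12-bin histogram once for the inner circle and once for the outer ring yields the 24-dimensional local descriptor mentioned above.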
    Moving object detection with moving camera based on motion saliency
    GAO Zhiyong, TANG Wenfeng, HE Liangjie
    2016, 36(6):  1692-1698.  DOI: 10.11772/j.issn.1001-9081.2016.06.1692
    Moving object detection with a moving camera suffers from difficult background modeling and usually high computational cost. To solve these problems, a motion-saliency-based method for detecting moving objects with a moving camera was proposed, which achieves accurate moving object detection while avoiding complex background modeling. Moving objects were detected according to the saliency of the video scene, computed by simulating the attention mechanism of the human visual system and exploiting the motion properties of background and foreground under camera translation. Firstly, the motion features of the object were extracted by the optical flow method, and the background motion texture was suppressed by 2-D Gaussian convolution. Then the global saliency of the motion features was measured by a histogram, and the color information of foreground and background was extracted from the temporal saliency map. Finally, a Bayesian model was applied to the temporal saliency map to extract salient moving objects. The experimental results on public video datasets show that the proposed method can suppress background motion noise and detect moving objects clearly and accurately in dynamic scenes with a moving camera.
    Improvement of adaptive generalized total variation model for image denoising
    GAO Leifu, LI Chao
    2016, 36(6):  1699-1703.  DOI: 10.11772/j.issn.1001-9081.2016.06.1699
    The Adaptive Generalized Total Variation (AGTV) model for image denoising cannot locate image edges accurately or extract enough edge information. To improve the effectiveness and Peak Signal-to-Noise Ratio (PSNR) of image denoising, an Improved AGTV (IAGTV) model was presented. On the one hand, a more accurate gradient calculation method was adopted to locate image edges more precisely than AGTV. On the other hand, to optimize the preprocessing filter, the combined Gauss-Laplace transform, which is good at detecting image edge information, replaced the Gaussian smoothing filter, preventing the loss of edge information during denoising. Numerical simulation experiments show that the PSNR of the image restored by IAGTV was approximately 1 dB higher than that of GTV with a fixed value of p, and at least 0.2 dB higher than that of AGTV. The experimental results show that IAGTV has a good image denoising capability.
    Local error progressive mesh simplification algorithm for keeping detailed features
    HUANG Jia, WEN Peizhi, LI Lifang, ZHU Likun
    2016, 36(6):  1704-1708.  DOI: 10.11772/j.issn.1001-9081.2016.06.1704
    To balance local accuracy and efficiency in progressive mesh generation for 3D model simplification, a new half-edge collapse progressive mesh simplification algorithm based on the change of vector angles between local-area rings was proposed. Firstly, the normal vector of the local neighborhood consisting of the first ring of points around a 3D data point was obtained, constrained by the center-of-gravity measurement distance. Secondly, the set of triangles intersecting the triangle assembly points of the first-ring neighborhood was selected as the second-ring neighborhood. The product of the two local normal vectors was then used as the edge collapse cost: the smaller the value, the flatter the region and the higher its simplification priority; otherwise the region was retained. Finally, judging the angles of the triangles was adopted as a constraint on half-edge collapse, to ensure the regularity of triangles in the simplified mesh and reduce errors caused by deformation. The experimental results show that the proposed algorithm better balances the preservation of local detail features and efficiency in the progressive mesh simplification of 3D models, and can meet the needs of practical applications.
    Adaptive regularization active contour model
    ZHANG Shaohua
    2016, 36(6):  1709-1713.  DOI: 10.11772/j.issn.1001-9081.2016.06.1709
    Abstract | PDF (763KB)
    The Chan-Vese model for image segmentation involves many parameters that must be tuned manually for images from different modalities, which is tedious, laborious, and time-consuming. To overcome this problem, an adaptive regularization active contour model was proposed. Firstly, the data term of the Chan-Vese model was simplified. Secondly, the length term was replaced by an improved edge-weighted H1 regularization term. Finally, a new active contour model without any free parameters was obtained. In segmentation experiments, the proposed model was insensitive to the size and location of the initial contour and showed strong noise resistance; the average segmentation time over 6 images was 1.5834 s with 19 iterations. The experimental results show that the proposed model handles images with intensity inhomogeneity and strong noise well without manual parameter adjustment, and segments faster than other active contour models.
    Object detection in remote sensing imagery based on strongly-supervised deformable part models
    ZHOU Fusong, HUO Hong, WAN Weibing, FANG Tao
    2016, 36(6):  1714-1718.  DOI: 10.11772/j.issn.1001-9081.2016.06.1714
    Abstract | PDF (973KB)
    Object detection in remote sensing imagery suffers from low accuracy owing to complex backgrounds, varied target appearance, and arbitrary orientation. To address this problem, a method based on strongly-supervised deformable part models was proposed, in which multiple sub-models were trained, one per object orientation range, and the object bounding rectangle together with the position and semantic information of every part was labeled. In the training stage, a multi-scale Histogram of Oriented Gradients (HOG) feature pyramid was first constructed for every training image, and the model structure was initialized according to the object-part labels and a Minimum Spanning Tree (MST). Then the sub-model for each orientation range was trained using a Latent Support Vector Machine (LSVM); each sub-model consisted of a root filter, several part filters at twice the resolution, and a part-position model. Finally, all sub-models were merged into a mixture model for detection. In the detection stage, the multi-scale feature pyramid was likewise constructed, and the matching response score over the pyramid was computed with the trained mixture model and a sliding window. Optimized detections were obtained by thresholding the response scores and applying the Non-Maximum Suppression (NMS) algorithm. The detection accuracy of the proposed method reaches 89.4% on a self-built remote sensing data set, about 4 percentage points higher than the best of the weakly-supervised Deformable Part Model (DPM), Exemplar Support Vector Machines (Exemplar-SVMs), and Histogram of Oriented Gradients-Support Vector Machine (HOG-SVM). The experimental results show that the proposed algorithm alleviates the problems of arbitrary orientation and complex background, and can be applied to detecting military airplanes at airports.
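The final pruning step above, thresholding response scores and applying NMS, is standard and can be sketched as a minimal greedy IoU-based suppression; the 0.5 overlap threshold and box format are conventional assumptions, not values from the paper.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(boxes, scores, thr=0.5):
    # Greedy non-maximum suppression: keep the highest-scoring box,
    # drop every remaining box that overlaps it by more than `thr`.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thr]
    return keep

boxes = [(10, 10, 60, 60), (12, 12, 62, 62), (100, 100, 150, 150)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)   # the near-duplicate of box 0 is suppressed
```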
    Real-time detection method of abnormal events in crowds
    PAN Lei, ZHOU Huan, WANG Minghui
    2016, 36(6):  1719-1723.  DOI: 10.11772/j.issn.1001-9081.2016.06.1719
    Abstract | PDF (735KB)
    To improve the real-time performance and applicability of existing anomaly detection methods for dense crowd scenes, a real-time method based on optical flow features and Kalman filtering was proposed. Firstly, the global optical flow value was extracted as the motion feature. Then a Kalman filter was applied to the global optical flow values, and the residual was analyzed under the assumption, validated by hypothesis testing, that it follows a Gaussian distribution under normal conditions. The parameters of the residual distribution were estimated by Maximum Likelihood (ML) estimation. Finally, at a given confidence level, the confidence interval of normal behavior and the decision rule for abnormal behavior were derived and used to detect abnormal events. The experimental results show that, for videos of size 320×240, the average detection time of the proposed method is as low as 0.023 s per frame with an accuracy above 95%, demonstrating high detection efficiency and good real-time performance.
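The filter-then-test pipeline above can be sketched in a few lines: a scalar Kalman filter tracks the global flow value, the residual distribution is fitted on normal frames, and a frame is flagged when its residual leaves the interval. The constant-level model, the noise parameters q and r, and the 3-sigma interval (in place of the paper's ML-estimated confidence bound) are illustrative assumptions.

```python
import random
import statistics

def kalman_residuals(z, q=1e-3, r=0.5):
    # Scalar Kalman filter tracking the global optical-flow value;
    # returns the innovation (residual) at every step.
    x, p = z[0], 1.0
    res = []
    for zk in z[1:]:
        p += q                      # predict (constant-level model)
        k = p / (p + r)             # Kalman gain
        res.append(zk - x)          # innovation before the update
        x += k * (zk - x)           # correct the state estimate
        p *= 1 - k
    return res

random.seed(0)
normal = [5.0 + random.gauss(0, 0.2) for _ in range(200)]  # calm crowd
res = kalman_residuals(normal)
mu, sigma = statistics.mean(res), statistics.stdev(res)
lo, hi = mu - 3 * sigma, mu + 3 * sigma   # ~99.7% normal interval

def is_abnormal(history, new_flow):
    # A frame is abnormal if its residual leaves the normal interval.
    return not lo <= kalman_residuals(history + [new_flow])[-1] <= hi
```

A sudden jump in global flow (e.g. a crowd scattering) produces a residual far outside the interval and is flagged immediately, which is what makes the method cheap enough for real time.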
    Abnormal behavior detection of small and medium crowd based on intelligent video surveillance
    HE Chuanyang, WANG Ping, ZHANG Xiaohua, SONG Danni
    2016, 36(6):  1724-1729.  DOI: 10.11772/j.issn.1001-9081.2016.06.1724
    Abstract | PDF (905KB)
    To address the poor real-time performance, low classification accuracy, and limited features of existing crowd anomaly detection, an abnormal behavior detection algorithm for small and medium crowds based on intelligent video surveillance was proposed. Firstly, a fast crowd density detection algorithm was employed to extract information about changes in crowd size. Secondly, an improved Lucas-Kanade optical flow method was used to extract the average kinetic energy, direction entropy, and distance potential energy of the crowd. Finally, crowd behaviors were classified with the Extreme Learning Machine (ELM) algorithm. On the public UMN data set, compared with an abnormal crowd behavior detection algorithm for high- and medium-density crowds and a detection algorithm based on Kinetic Orientation Distance (KOD) energy features, the recognition rate of the proposed algorithm for small and medium crowds increased by 7.13 and 5.89 percentage points respectively. For crowd density estimation, compared with the high- and medium-density crowd detection algorithm, the per-frame processing time was reduced by 106 ms, approximately one third. The experiments show that the proposed algorithm effectively improves both the recognition rate and the real-time performance of abnormal behavior detection.
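An ELM classifier of the kind used in the final step is small enough to sketch: a fixed random hidden layer with output weights solved in one least-squares step. The three-dimensional feature vectors below are synthetic stand-ins for the paper's kinetic energy, direction entropy, and distance potential energy; the hidden-layer size and tanh activation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

class ELM:
    """Minimal Extreme Learning Machine: a random hidden layer whose
    output weights are solved in closed form by least squares."""
    def __init__(self, n_in, n_hidden=40):
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # One pseudoinverse instead of iterative training, which is
        # what makes ELM attractive for real-time classification.
        self.beta = np.linalg.pinv(self._hidden(X)) @ y
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta > 0.5).astype(int)

# Synthetic stand-ins for the paper's three per-frame features.
calm = rng.normal([0.2, 0.3, 0.2], 0.05, size=(100, 3))
panic = rng.normal([0.8, 0.9, 0.7], 0.05, size=(100, 3))
X = np.vstack([calm, panic])
y = np.r_[np.zeros(100), np.ones(100)]
acc = (ELM(3).fit(X, y).predict(X) == y).mean()
```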
    Vehicle license plate localization algorithm based on multi-feature fusion
    YANG Shuo, ZHANG Bo, ZHANG Zhijie
    2016, 36(6):  1730-1734.  DOI: 10.11772/j.issn.1001-9081.2016.06.1730
    Abstract | PDF (865KB)
    Vehicle license plate localization algorithms based on a single feature are hard to adapt to complex environments. To solve this problem, a multi-feature fusion algorithm was proposed that exploits edge, color, and texture features. The localization process was divided into two phases: Hypothesis Generation (HG) and Hypothesis Verification (HV). In HG, feature point detection and mathematical morphology were used as the primary techniques, and the character texture and color information of the license plate were extracted as features to generate candidates. In HV, gray projection and invariant features of the license plate were used to verify the candidates from HG and locate the correct plate. The experimental results show that the proposed algorithm achieves a localization success rate of 96.6% and a precision of 95.4% on a test image set collected in real environments, verifying the rationality and validity of the multi-feature fusion algorithm.
    SMFCC: a novel feature extraction method for speech signal
    WANG Haibin, YU Zhengtao, MAO Cunli, GUO Jianyi
    2016, 36(6):  1735-1740.  DOI: 10.11772/j.issn.1001-9081.2016.06.1735
    Abstract | PDF (874KB)
    Aiming at the problems of effective feature extraction from speech signals and the influence of noise in speaker recognition, a feature extraction method called Mel Frequency Cepstral Coefficients based on the S-transform (SMFCC) was proposed. Building on traditional Mel Frequency Cepstral Coefficients (MFCC), the method exploits the two-dimensional time-frequency (TF) multiresolution property of the S-transform, denoises the two-dimensional TF matrix with the Singular Value Decomposition (SVD) algorithm, and combines related statistical methods to obtain the speech features. On the TIMIT corpus, the extracted features were compared with existing features. The Equal Error Rate (EER) and Minimum Detection Cost Function (MinDCF) of SMFCC were smaller than those of the Linear Prediction Cepstral Coefficient (LPCC), MFCC, and LMFCC; in particular, the EER and MinDCF08 of SMFCC were 3.6% and 17.9% lower respectively than those of MFCC. The experimental results show that the proposed method effectively suppresses noise in the speech signal and improves the resolution of local speech features.
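The SVD denoising step applied to the TF matrix can be sketched as a rank truncation: small singular values mostly carry noise, so discarding them cleans the matrix. The toy low-rank matrix below merely stands in for the S-transform magnitude matrix; the rank, signal components, and noise level are assumptions.

```python
import numpy as np

def svd_denoise(M, rank):
    # Keep only the `rank` largest singular components of the
    # time-frequency matrix; the discarded tail is mostly noise.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 128)
# Toy rank-2 "TF matrix" built from two modulated components.
clean = (np.outer(np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 3 * t))
         + 0.5 * np.outer(np.cos(2 * np.pi * 2 * t),
                          np.sin(2 * np.pi * 7 * t)))
noisy = clean + rng.normal(0, 0.3, clean.shape)
denoised = svd_denoise(noisy, rank=2)

err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
```

Because the signal energy concentrates in a few singular components while the noise spreads over all of them, the truncation removes most of the noise energy at little cost to the signal.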
    Numerical simulation of flight vehicle multi-body separation based on unstructured mesh
    LI Shaolei, ZHENG Jianjing, SHANG Mengmeng
    2016, 36(6):  1741-1744.  DOI: 10.11772/j.issn.1001-9081.2016.06.1741
    Abstract | PDF (676KB)
    To solve the local remeshing problem in numerical simulation of flight vehicle multi-body separation with unstructured meshes, a method for constructing local remeshing regions based on element adjacency indices was proposed. Firstly, mesh quality was checked by the element radius ratio and low-quality regions were marked for remeshing. Secondly, the remeshing regions were extended through the adjacency indices of the mesh elements. Finally, the elements around non-two-manifold sides were marked to ensure that the boundary of each remeshing region satisfied the two-manifold criterion. A separation simulation was conducted with the proposed method: local remeshing was performed successfully 16 times, and the average radius ratio of the remeshed elements was improved to above 0.71. Comparison of the computed results with wind tunnel experimental data shows that the separation trajectory and motion were calculated accurately, verifying the effectiveness of the proposed unstructured remeshing process.
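The first two steps, a radius-ratio quality check followed by region growing through element adjacency, can be sketched in 2D; the paper works on 3D unstructured meshes, so the triangle-only quality measure, the normalization 2r/R, and the single-layer growth below are simplifying assumptions.

```python
import math
from collections import deque

def radius_ratio(a, b, c):
    # Normalized radius ratio 2r/R (inradius over circumradius):
    # 1 for an equilateral triangle, near 0 for a sliver.
    la, lb, lc = math.dist(b, c), math.dist(a, c), math.dist(a, b)
    s = (la + lb + lc) / 2
    area = math.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))
    if area == 0:
        return 0.0
    r = area / s                    # inradius
    R = la * lb * lc / (4 * area)   # circumradius
    return 2 * r / R

def grow_region(bad, neighbours, layers=1):
    # Expand the marked remeshing region through element adjacency,
    # as in the abstract's adjacency-index extension step.
    region = set(bad)
    frontier = deque((e, 0) for e in bad)
    while frontier:
        e, d = frontier.popleft()
        if d == layers:
            continue
        for n in neighbours[e]:
            if n not in region:
                region.add(n)
                frontier.append((n, d + 1))
    return region
```

Marking a buffer of good elements around the bad ones gives the remesher room to recover quality, which is why the region is grown before remeshing rather than remeshing only the failing elements.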
    Cellular automaton model of vehicle-bicycle conflict at channelized islands based on VISSIM microscopic traffic simulation software
    LIAN Peikun, LI Zhenlong, RONG Jian, CHEN Ning
    2016, 36(6):  1745-1750.  DOI: 10.11772/j.issn.1001-9081.2016.06.1745
    Abstract | PDF (939KB)
    Because of the complex vehicle-bicycle conflict behavior in the conflict zones of channelized islands, the right-turn lane capacity calculated by traditional analytical methods differs from practical conditions. To solve this problem, a cellular automaton model of vehicle-bicycle conflict at channelized islands based on the VISSIM microscopic traffic simulation software was proposed. According to the proposed cellular automaton rules, the Component Object Model (COM) interface of VISSIM was programmed to control the speed variation of right-turn vehicles through a series of detectors that simulated cells, so that the blocking effect on right-turn vehicles in conflict with non-motorized vehicles or pedestrians could be simulated. Meanwhile, the crossing behavior of non-motorized vehicles and pedestrians was controlled by the priority rules of VISSIM. The simulation results show that the average relative error between the right-turn lane capacity given by the proposed model and the observed value is 5.45%. The experimental results show that the proposed model outperforms traditional analytical methods in reflecting the practical conditions of channelized island conflict zones, and can provide a theoretical basis for the planning, design, traffic management, and organization of channelized islands under mixed traffic flow.
    Detection probability research of regional staff density based on Wi-Fi devices
    ZHAO Feifei, JIN Yanliang, XIONG Yong
    2016, 36(6):  1751-1756.  DOI: 10.11772/j.issn.1001-9081.2016.06.1751
    Abstract | PDF (900KB)
    To overcome the weaknesses of traditional approaches to detecting regional staff density, and to better obtain density information from the Probe Request (PR) frames sent by mobile phones with Wireless-Fidelity (Wi-Fi) enabled, a detection probability model of regional staff density based on Wi-Fi devices was proposed. Firstly, the average PR frame intervals of common mobile phones were measured experimentally, providing guidance for setting the parameters of the probability model. Secondly, a mathematical model of a Wi-Fi detector was built according to the IEEE 802.11 standard and Wi-Fi channel attributes. Finally, reasonable parameter values were assigned on the basis of a specific environment, and the detection probability of the detector was simulated. The theoretical analysis and simulation results show that the proposed model can reflect the detection of staff density.
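The flavor of such a detection probability model can be conveyed with a much simpler toy: if a phone emits probe bursts periodically and each burst is caught independently with some probability, the chance of detecting the phone at least once grows with the observation window. This independent-burst model is an assumption for illustration and is far cruder than the paper's 802.11 channel-based model.

```python
def detection_prob(window_s, probe_interval_s, p_hit):
    # Probability of catching at least one probe burst during the
    # observation window, assuming the phone probes periodically and
    # each burst is detected independently with probability p_hit
    # (channel dwell overlap, frame loss, and so on).
    n_bursts = int(window_s // probe_interval_s)
    return 1 - (1 - p_hit) ** n_bursts

# A phone probing every 60 s, observed for 5 minutes, with a 40%
# chance of hearing any single burst:
p5 = detection_prob(5 * 60, 60, 0.4)
```

The measured PR intervals from the first step of the paper would set `probe_interval_s`, while the detector's channel-hopping schedule would determine `p_hit`.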
    Electricity customer arrears alert based on parallel classification algorithm
    CHEN Yuzhong, GUO Songrong, CHEN Hong, LI Wanhua, GUO Kun, HUANG Qicheng
    2016, 36(6):  1757-1761.  DOI: 10.11772/j.issn.1001-9081.2016.06.1757
    Abstract | PDF (755KB)
    The "consume first, pay afterward" operation model of power supply companies carries a risk of arrears due to the poor credit of some consumers. It is therefore necessary to analyze massive user data quickly and in real time before arrears occur, and to produce a list of potential customers in arrears. To this end, a method for arrears alerting of power consumers based on a parallel classification algorithm was proposed. Firstly, arrear behaviors were modeled with a parallel Random Forest (RF) classification algorithm based on the Spark framework. Secondly, future consumption and payment behavior characteristics were predicted from historical consumption and payment records by time series analysis. Finally, a list of potential high-risk customers in arrears was obtained by classifying users with the trained model. The proposed algorithm was compared with the parallel Support Vector Machine (SVM) algorithm and the Online Sequential Extreme Learning Machine (OSELM) algorithm. The experimental results demonstrate that its prediction accuracy outperforms the compared algorithms. The proposed method offers a convenient way for electricity billing management to remind customers to pay ahead of time, ensuring timely payment recovery, and also benefits the arrears risk management of power supply companies.
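The random-forest idea behind the classification step, bootstrap sampling plus random feature selection plus majority voting, can be shown on a single machine with depth-1 trees; the paper uses full trees parallelized on Spark, so the stump learner, the synthetic (payment ratio, average delay) features, and all thresholds below are illustrative assumptions.

```python
import random

def train_stump(X, y, feat):
    # Depth-1 tree: threshold halfway between the class means of one
    # feature, with a sign giving the "arrears" side of the split.
    pos = [x[feat] for x, t in zip(X, y) if t == 1]
    neg = [x[feat] for x, t in zip(X, y) if t == 0]
    thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    sign = 1 if sum(pos) / len(pos) > thr else -1
    return feat, thr, sign

def forest_fit(X, y, n_trees=25):
    # Bootstrap sample + random feature per tree: the core of the
    # random-forest idea, shrunk to stumps for this sketch.
    rng = random.Random(7)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]
        Xs, ys = [X[i] for i in idx], [y[i] for i in idx]
        if len(set(ys)) == 2:
            trees.append(train_stump(Xs, ys, rng.randrange(len(X[0]))))
    return trees

def forest_predict(trees, x):
    votes = sum((x[f] > t) == (s > 0) for f, t, s in trees)
    return 1 if votes > len(trees) / 2 else 0

# Synthetic customers: (payment ratio, average payment delay in days).
rng = random.Random(0)
good = [(rng.uniform(0.8, 1.0), rng.uniform(0, 5)) for _ in range(40)]
bad = [(rng.uniform(0.1, 0.5), rng.uniform(15, 40)) for _ in range(40)]
trees = forest_fit(good + bad, [0] * 40 + [1] * 40)
flag_risky = forest_predict(trees, (0.2, 30))   # low ratio, long delay
flag_safe = forest_predict(trees, (0.95, 1))
```

On Spark the per-tree loop is what gets distributed: each tree trains independently on its bootstrap sample, so the ensemble parallelizes trivially across the cluster.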
    Personalized trip itinerary recommendation based on user interests and points of interest popularity
    WU Qingxia, ZHOU Ya, WEN Diyao, HE Zhenghong
    2016, 36(6):  1762-1766.  DOI: 10.11772/j.issn.1001-9081.2016.06.1762
    Abstract | PDF (761KB)
    To solve the low recommendation precision of traditional trip itinerary recommendation algorithms, a Personalized Trip Itinerary Recommendation (PTIR) algorithm based on Points Of Interest (POI) popularity and user interests was proposed. Firstly, users' real-life travel histories were obtained by analyzing the data. Then time-based user interests were derived from the stay time at each scenic spot. Finally, a method for computing the optimal itinerary was designed under a given travel time budget and fixed start and end points. The experimental results on a Flickr data set show that, compared with the traditional algorithm considering only POI popularity, the precision and recall of the proposed PTIR algorithm are greatly improved; compared with the traditional algorithm considering only user interests, they are also improved. The results show that considering both POI popularity and user interests makes itinerary recommendation more precise.
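Itinerary construction under a time budget with fixed start and end points can be sketched greedily as below. The paper computes an optimal itinerary; the greedy strategy, the popularity-interest score blend, and the Euclidean travel times here are simplifying assumptions for illustration.

```python
import math

def score(poi, alpha=0.5):
    # Blend POI popularity with the user's time-based interest;
    # weighting the personal term by alpha is a modelling assumption.
    return poi["pop"] ** (1 - alpha) * poi["interest"] ** alpha

def plan(start, end, pois, budget, speed=1.0, alpha=0.5):
    # Greedy itinerary: keep adding the best-scoring POI that still
    # allows reaching the end point within the time budget.
    travel = lambda a, b: math.dist(a, b) / speed
    route, pos, used = [], start, 0.0
    for p in sorted(pois, key=lambda p: -score(p, alpha)):
        extra = travel(pos, p["xy"]) + p["visit"]
        if used + extra + travel(p["xy"], end) <= budget:
            route.append(p["name"])
            used += extra
            pos = p["xy"]
    return route

pois = [
    {"name": "A", "xy": (1, 0), "visit": 1, "pop": 4, "interest": 9},
    {"name": "B", "xy": (2, 0), "visit": 1, "pop": 9, "interest": 4},
    {"name": "C", "xy": (50, 0), "visit": 1, "pop": 9, "interest": 9},
]
short_trip = plan((0, 0), (3, 0), pois, budget=6)   # C is too far
```

With a tight budget the distant but attractive POI C is skipped in favor of the two reachable spots, showing how the time limit shapes the recommended route.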
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
Address:
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803
  028-85222239-803
Website: www.joca.cn
E-mail: bjb@joca.cn