Triplet deep hashing method for speech retrieval
Qiuyu ZHANG, Yongwang WEN
Journal of Computer Applications    2023, 43 (9): 2910-2918.   DOI: 10.11772/j.issn.1001-9081.2022081149

The existing deep hashing methods for content-based speech retrieval make insufficient use of supervised information, yielding suboptimal hash codes, low retrieval precision and low retrieval efficiency. To address these problems, a triplet deep hashing method for speech retrieval was proposed. Firstly, spectrogram image features were used as the input of the model in triplets to extract the effective information of the speech. Then, an Attentional mechanism-Residual Network (ARN) model was proposed, in which a spatial attention mechanism was embedded into the ResNet (Residual Network) and the salient region representation was improved by aggregating the energy-salient region information across the whole spectrogram. Finally, a novel triplet cross-entropy loss was introduced to map the classification information and the similarity between spectrogram image features into the learned hash codes, thereby achieving maximum class separability and maximum hash code discriminability during model training. Experimental results show that the efficient and compact binary hash codes generated by the proposed method achieve recall, precision and F1 score above 98.5% in speech retrieval. Compared with methods such as the single-label retrieval method, the average running time of the proposed method using Log-Mel spectra as features is shortened by 19.0% to 55.5%. Therefore, this method can significantly improve retrieval efficiency and precision while reducing the amount of computation.
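To make the loss construction concrete, here is a minimal PyTorch sketch of a triplet objective combined with a classification cross-entropy term on relaxed (tanh) hash codes; the class name, margin and weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: triplet-plus-cross-entropy hashing objective in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripletHashLoss(nn.Module):
    """Cross-entropy carries the class information; the triplet term pulls
    same-class codes together. `margin` and `alpha` are illustrative."""
    def __init__(self, margin=0.5, alpha=1.0):
        super().__init__()
        self.margin = margin
        self.alpha = alpha

    def forward(self, h_a, h_p, h_n, logits_a, labels):
        # Triplet term: anchor code closer to positive than to negative.
        d_ap = (h_a - h_p).pow(2).sum(dim=1)
        d_an = (h_a - h_n).pow(2).sum(dim=1)
        triplet = F.relu(d_ap - d_an + self.margin).mean()
        # Classification term maps supervised labels into the codes.
        ce = F.cross_entropy(logits_a, labels)
        return ce + self.alpha * triplet

# Toy usage: 48-bit relaxed codes for a batch of 8 spectrogram triplets.
h = lambda: torch.tanh(torch.randn(8, 48))
loss = TripletHashLoss()(h(), h(), h(), torch.randn(8, 10),
                         torch.randint(0, 10, (8,)))
print(loss.item())
```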

Label noise filtering method based on local probability sampling
ZHANG Zenghui, JIANG Gaoxia, WANG Wenjian
Journal of Computer Applications    2021, 41 (1): 67-73.   DOI: 10.11772/j.issn.1001-9081.2020060970
In classification learning tasks, noise is inevitably generated in the process of acquiring data. In particular, the existence of label noise not only makes the learning model more complex, but also leads to overfitting and reduced generalization ability of the classifier. Although some label noise filtering algorithms can alleviate these problems to some extent, limitations remain, such as poor noise recognition ability, unsatisfactory classification effect and low filtering efficiency. Focusing on these issues, a local probability sampling method based on the label confidence distribution was proposed for label noise filtering. Firstly, random forest classifiers were used to vote on the labels of the samples, so as to obtain the label confidence of each sample. Then the samples were divided into easy-to-recognize and hard-to-recognize ones according to their label confidence values. Finally, the samples were filtered with different filtering strategies respectively. Experimental results show that, in the presence of label noise, the proposed method maintains high noise recognition ability in most cases and has an obvious advantage in classification generalization performance.
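As an illustration of the confidence-voting step, the following sketch estimates each sample's label confidence from out-of-bag random-forest votes and splits the data into easy- and hard-to-recognize subsets; the 0.7 threshold and the probabilistic filter for hard samples are assumptions for demonstration.

```python
# Hedged sketch: label confidence from random-forest voting, then an
# easy/hard split; thresholds and the filtering rule are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
noisy = np.random.rand(len(y)) < 0.1            # inject 10% label noise
y_noisy = np.where(noisy, 1 - y, y)

forest = RandomForestClassifier(n_estimators=100, oob_score=True,
                                random_state=0).fit(X, y_noisy)
# Confidence of each sample's (possibly noisy) label = fraction of
# out-of-bag tree votes agreeing with it.
conf = forest.oob_decision_function_[np.arange(len(y)), y_noisy]

easy = conf >= 0.7          # confidently clean: keep directly
hard = conf < 0.7           # ambiguous: filter probabilistically
keep_hard = np.random.rand(hard.sum()) < conf[hard]   # sample by confidence
print(f"kept {easy.sum() + keep_hard.sum()} of {len(y)} samples")
```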
Pseudoinverse-based motion planning scheme for deviation correction of rail manipulator joint velocity
LI Kene, ZHANG Zeng, WANG Wenxin
Journal of Computer Applications    2020, 40 (12): 3695-3700.   DOI: 10.11772/j.issn.1001-9081.2020040560
Aiming at the problem that the joint velocity of a rail manipulator deviates from the expected value during task execution, a pseudoinverse-based motion planning scheme for correcting the joint velocity deviation of a rail manipulator was proposed. Firstly, according to the joint angle state of the manipulator and the motion state of the end-effector, the pseudoinverse algorithm was used to analyze the redundancy of the rail manipulator at the velocity level. Secondly, a time-varying function was designed to constrain and adjust the joint velocity, making the deviated joint velocity converge to the expected value. Thirdly, an error correction method was employed to reduce the position error of the end-effector and ensure the successful execution of the trajectory tracking task. Finally, the motion planning scheme was simulated in MATLAB, taking a four-link redundant manipulator on a linearly moving base and on a circularly moving base as examples. The simulation results show that the proposed motion planning scheme can correct the joint velocity of the rail manipulator when it deviates from the expected value during task execution, and enables the end-effector to achieve higher trajectory tracking accuracy.
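A minimal sketch of the velocity-level redundancy resolution underlying such schemes: joint velocities are obtained from the desired end-effector velocity via the Jacobian pseudoinverse. The planar four-link arm, link lengths and desired velocity are illustrative, and the paper's time-varying correction function is not reproduced.

```python
# Hedged sketch: pseudoinverse redundancy resolution for a planar arm.
import numpy as np

def jacobian(theta, lengths):
    """Planar Jacobian of an n-link arm: rows are (x_dot, y_dot)."""
    n = len(theta)
    J = np.zeros((2, n))
    cum = np.cumsum(theta)              # absolute angle of each link
    for i in range(n):
        # Joint i moves every link from i outward.
        J[0, i] = -np.sum(lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(cum[i:]))
    return J

theta = np.array([0.3, -0.2, 0.5, 0.1])     # four-link configuration
lengths = np.array([1.0, 0.8, 0.6, 0.4])
xdot_desired = np.array([0.05, -0.02])      # end-effector velocity

J = jacobian(theta, lengths)
qdot = np.linalg.pinv(J) @ xdot_desired     # minimum-norm joint velocity
print(qdot, J @ qdot)                       # J @ qdot matches xdot_desired
```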
Constraint iterative image reconstruction algorithm of adaptive step size non-local total variation
WANG Wenjie, QIAO Zhiwei, NIU Lei, XI Yarui
Journal of Computer Applications    2020, 40 (1): 245-251.   DOI: 10.11772/j.issn.1001-9081.2019061129
In order to solve the problem that the Total Variation (TV) iterative constraint model easily causes staircase artifacts and cannot preserve details in Computed Tomography (CT) images, an adaptive-step-size Non-Local Total Variation (NLTV) constrained iterative reconstruction algorithm was proposed. Since the NLTV model is able to preserve and restore the details and textures of an image, firstly, the CT model was cast as a constrained optimization model that searches, within the solution set satisfying the projection data fidelity term, for solutions satisfying a specific regularization term, namely NLTV minimization. Then, the Algebraic Reconstruction Technique (ART) and the Split Bregman (SB) algorithm were used to ensure that the reconstructed results were constrained by the data fidelity term and the regularization term. Finally, the Adaptive Steepest Descent-Projection Onto Convex Sets (ASD-POCS) algorithm was used as the basic iterative framework to reconstruct images. The experimental results show that the proposed algorithm can achieve accurate results using projection data from 30 views under the noise-free sparse reconstruction condition. In the noisy sparse-data reconstruction experiment, the algorithm obtains a result close to final convergence, with a Root Mean Squared Error (RMSE) as large as 2.5 times that of the ASD-POCS algorithm. The proposed algorithm can reconstruct accurate images from sparse projection data and suppress noise while improving the detail reconstruction ability of the TV iterative model.
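For the data-fidelity part of such a scheme, the sketch below runs plain ART (Kaczmarz) sweeps that drive the reconstruction toward consistency with the projection data; the random system matrix is a stand-in for a real CT geometry, and the NLTV/Split Bregman regularization step is omitted.

```python
# Hedged sketch: ART (Kaczmarz) sweeps enforcing projection fidelity.
import numpy as np

def art_sweep(A, b, x, relax=0.5):
    """Project x toward each hyperplane <a_i, x> = b_i in turn."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(0)
A = rng.random((30, 64))         # 30 "views" of a 64-pixel image
x_true = rng.random(64)
b = A @ x_true                   # noise-free projection data

x = np.zeros(64)
for _ in range(200):             # repeated sweeps drive Ax toward b
    x = art_sweep(A, b, x)
print(np.linalg.norm(A @ x - b))
```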
Link prediction model based on densely connected convolutional network
WANG Wentao, WU Lintao, HUANG Ye, ZHU Rongbo
Journal of Computer Applications    2019, 39 (6): 1632-1638.   DOI: 10.11772/j.issn.1001-9081.2018112279
Current link prediction algorithms based on network representation learning mainly construct feature vectors for link prediction by capturing the neighborhood topology information of network nodes. However, these algorithms usually only learn from the single neighborhood topology of network nodes, while ignoring the similarity between multiple nodes in the link structure. Aiming at these problems, a new Link Prediction model based on Densely connected convolutional Network (DenseNet-LP) was proposed. Firstly, node representation vectors were generated by the network representation learning algorithm node2vec, and the structure information of the network nodes was mapped into three-dimensional feature information by these vectors. Then, DenseNet was used to capture the features of the link structure, and a binary classification model was established to realize link prediction. The experimental results on four public datasets show that the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) value of the proposed model's predictions is increased by up to 18 percentage points compared with the results of the network representation learning algorithm.
Recognition model for French named entities based on deep neural network
YAN Hong, CHEN Xingshu, WANG Wenxian, WANG Haizhou, YIN Mingyong
Journal of Computer Applications    2019, 39 (5): 1288-1292.   DOI: 10.11772/j.issn.1001-9081.2018102155
In existing research on French Named Entity Recognition (NER), machine learning models mostly use the character morphological features of words, while multilingual generic named entity models use the semantic features represented by word embeddings; neither takes the semantic, character morphological and grammatical features into account comprehensively. Aiming at this shortcoming, a deep neural network based model, CGC-fr, was designed to recognize French named entities. Firstly, word embeddings, character embeddings and grammar feature vectors were extracted from the text. Then, character features were extracted from the character embedding sequence of each word by a Convolutional Neural Network (CNN). Finally, a Bi-directional Gated Recurrent Unit network (BiGRU) and a Conditional Random Field (CRF) were used to label named entities in French text according to the word embeddings, character features and grammar feature vectors. In the experiments, the F1 value of the CGC-fr model reaches 82.16% on the test set, which is 5.67 percentage points, 1.79 percentage points and 1.06 percentage points higher than that of the NERC-fr, LSTM (Long Short-Term Memory network)-CRF and Char attention models respectively. The experimental results show that the CGC-fr model combining the three features is more advantageous than the others.
Network representation learning algorithm based on improved random walk
WANG Wentao, HUANG Ye, WU Lintao, KE Xuan, TANG Wan
Journal of Computer Applications    2019, 39 (3): 651-655.   DOI: 10.11772/j.issn.1001-9081.2018071509
Existing Word2vec-based Network Representation Learning (NRL) algorithms use a Random Walk (RW) to generate node sequences. The RW tends to select nodes with larger degrees, so the node sequences cannot reflect the network structure information well, which decreases the performance of the algorithms. To solve this problem, a new network representation learning algorithm based on an improved random walk was proposed. Firstly, RLP-MHRW (Remove self-Loop Probability for Metropolis-Hastings Random Walk) was used to generate node sequences. This walk does not favor nodes with larger degrees when forming a node sequence, so the obtained sequences reflect the network structure information more faithfully. Then, the node sequences were fed into the Skip-gram model to obtain the node representation vectors. Finally, the performance of the network representation learning algorithm was measured by a link prediction task. Comparison experiments were performed on four real network datasets. Compared with LINE (Large-scale Information Network Embedding) and node2vec on arXiv ASTRO-PH, the AUC (Area Under Curve) value of link prediction increased by 8.9% and 3.5% respectively, with similar gains on the other datasets. Experimental results show that RLP-MHRW can effectively improve the performance of Word2vec-based network representation learning algorithms.
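The following sketch shows a classic Metropolis-Hastings random walk whose stationary distribution is uniform over nodes, so high-degree nodes are not over-sampled; RLP-MHRW additionally removes the self-loop (stay-in-place) probability, a refinement not reproduced here.

```python
# Hedged sketch: degree-unbiased Metropolis-Hastings random walk.
import random
import networkx as nx

def mh_walk(G, start, length):
    walk = [start]
    u = start
    for _ in range(length - 1):
        v = random.choice(list(G.neighbors(u)))
        # Accept with prob min(1, deg(u)/deg(v)): hubs are not favoured;
        # on rejection the walk stays at u (the self-loop RLP-MHRW removes).
        if random.random() < min(1.0, G.degree(u) / G.degree(v)):
            u = v
        walk.append(u)
    return walk

G = nx.karate_club_graph()
print(mh_walk(G, start=0, length=10))   # one sequence for Skip-gram
```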
Kernelized correlation filtering method based on fast discriminative scale estimation
XIONG Xiaoxuan, WANG Wenwei
Journal of Computer Applications    2019, 39 (2): 546-550.   DOI: 10.11772/j.issn.1001-9081.2018061360
Focusing on the issue that the Kernelized Correlation Filter (KCF) cannot respond to target scale changes, a KCF target tracking algorithm based on fast discriminative scale estimation was proposed. Firstly, the target position was estimated by KCF. Then, a fast discriminative scale filter was learned online from a set of target samples at different scales. Finally, an accurate estimate of the target size was obtained by applying the learned scale filter at the target position. Experiments were conducted on the Visual Tracker Benchmark video sequence sets, comparing the proposed algorithm with the KCF algorithm based on Discriminative Scale Space Tracking (DSST) and the traditional KCF algorithm. Experimental results show that the tracking accuracy of the proposed algorithm is 2.2% to 10.8% higher than that of the two contrast algorithms when the target scale changes, and its average frame rate is 19.1% to 68.5% higher than that of the KCF algorithm based on DSST. The proposed algorithm adapts well to target scale changes and has high real-time performance.
Component substitution-based fusion method for remote sensing images via improving spatial detail extraction scheme
WANG Wenqing, LIU Han, XIE Guo, LIU Wei
Journal of Computer Applications    2019, 39 (12): 3650-3658.   DOI: 10.11772/j.issn.1001-9081.2019061063
Concerning the spatial and spectral distortions caused by local spatial dissimilarity between multispectral and panchromatic images, a component substitution-based remote sensing image fusion method with an improved spatial detail extraction scheme was proposed. Different from classical spatial detail extraction methods, a high-resolution intensity image was synthesized by the proposed method to replace the panchromatic image in spatial detail extraction, with the aim of acquiring spatial detail information that matches the multispectral image. Firstly, according to the manifold consistency between the low-resolution intensity image and the high-resolution intensity image, a locally linear embedding-based reconstruction method was used to reconstruct the first high-resolution intensity image. Secondly, after decomposing the low-resolution intensity image and the panchromatic image with the wavelet technique, the low-frequency information of the low-resolution intensity image and the high-frequency information of the panchromatic image were retained, and the inverse wavelet transform was performed to reconstruct the second high-resolution intensity image. Thirdly, sparse fusion was performed on the two high-resolution intensity images to acquire a high-quality intensity image. Finally, the synthesized high-resolution intensity image was input into the component substitution-based fusion framework to obtain the fused image. The experimental results show that, compared with eleven other fusion methods, the proposed method produces fused images with higher spatial resolution and lower spectral distortion. For the proposed method, the mean values of the objective evaluation indexes, including the correlation coefficient, root mean squared error, erreur relative globale adimensionnelle de synthèse, spectral angle mapper and quaternion theory-based quality index, on three groups of GeoEye-1 fused images are 0.9439, 24.3479, 2.7643, 3.9376 and 0.9082 respectively, all better than those of the eleven other fusion methods. The proposed method efficiently reduces the effect of local spatial dissimilarity on the performance of the component substitution-based fusion framework.
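A small sketch of the wavelet recombination step, assuming PyWavelets: the low-frequency band of the low-resolution intensity image is combined with the high-frequency bands of the panchromatic image and then inverted. The images are random stand-ins, and the 'haar' wavelet at one level is an illustrative choice.

```python
# Hedged sketch: swap wavelet bands between intensity and pan images.
import numpy as np
import pywt

rng = np.random.default_rng(0)
intensity_lr = rng.random((128, 128))   # upsampled low-res intensity
pan = rng.random((128, 128))            # panchromatic image

lo_i, _ = pywt.dwt2(intensity_lr, 'haar')   # returns (cA, (cH, cV, cD))
_, hi_p = pywt.dwt2(pan, 'haar')

# Recombine: spectral content from intensity, spatial detail from pan.
intensity_hr = pywt.idwt2((lo_i, hi_p), 'haar')
print(intensity_hr.shape)
```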
Overlapping community detection algorithm for attributed networks
DU Hangyuan, PEI Xiya, WANG Wenjian
Journal of Computer Applications    2019, 39 (11): 3151-3157.   DOI: 10.11772/j.issn.1001-9081.2019051177
Real-world network nodes contain a large amount of attribute information, and communities overlap with each other. Aiming at these problems, an overlapping community detection algorithm for attributed networks was proposed. The network topology and node attributes were fused to define the intensity degree and interval degree of network nodes, designed to describe two characteristics of a community: dense interior connections and sparse exterior connections. Based on the idea of density peak clustering, local density centers were selected as community centers. On this basis, an iterative method for calculating the membership of non-central nodes in each community was proposed, realizing the division of overlapping communities. Simulation experiments were carried out on real datasets. The experimental results show that the proposed algorithm performs better in community detection than the LINK algorithm, COPRA algorithm and DPSCD (Density Peaks-based Clustering Method).
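The center-selection idea can be sketched in the density-peak style as follows: nodes with high local density that are far from any denser node become community centers. Plain degree and shortest-path distance stand in for the paper's attribute-fused intensity and interval degrees.

```python
# Hedged sketch: density-peak style centre selection on a graph.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
nodes = list(G.nodes)
rho = np.array([G.degree(v) for v in nodes], dtype=float)  # local density
dist = dict(nx.all_pairs_shortest_path_length(G))

# delta(v) = distance to the nearest node of higher density.
delta = np.empty(len(nodes))
for i, v in enumerate(nodes):
    higher = [dist[v][u] for j, u in enumerate(nodes) if rho[j] > rho[i]]
    delta[i] = min(higher) if higher else max(dist[v].values())

gamma = rho * delta                       # centre score, as in density peaks
centers = [nodes[i] for i in np.argsort(gamma)[-3:]]   # pick top 3
print(centers)
```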
Multi-attribute spatial node selection algorithm based on subjective and objective weighting
DAI Cuiqin, WANG Wenhan
Journal of Computer Applications    2018, 38 (4): 1089-1094.   DOI: 10.11772/j.issn.1001-9081.2017102534
Aiming at the problem that single-attribute cooperative node selection algorithms in spatial cooperative transmission cannot balance the reliability and the survival time of the system, a Subjective and Objective Weighting based multi-attribute Cooperative Node Selection (SOW-CNS) algorithm was proposed by introducing Multiple Attribute Decision Making (MADM); three attributes, namely the channel fading level, the residual energy of the cooperative nodes and the packet loss rate, were considered to implement multi-attribute evaluation of spatial cooperative nodes. Firstly, according to the influence of shadow fading, a two-state wireless channel model was established, comprising the shadow-free Loo channel fading model and the shadowed Corazza channel fading model. Secondly, considering the channel fading level, the residual energy of the cooperative nodes and the system packet loss rate, a multi-attribute decision making strategy based on subjective and objective weighting was introduced: the subjective and objective attribute weight vectors of the spatial cooperative nodes were established by using the Analytic Hierarchy Process (AHP) and the information entropy method, and the maximum entropy principle and the deviation maximization method were used to calculate the combined subjective and objective attribute weight vectors. Finally, the evaluation value of each potential node was calculated from the subjective and objective attribute weight vectors and the attribute values of each node, and the best cooperative node was selected to participate in the cooperative transmission of spatial information. Simulation results show that the SOW-CNS algorithm attains a lower system packet loss rate and a longer system survival time compared with the traditional Best Quality based Cooperative Node Selection (BQ-CNS), Energy Fairness based Cooperative Node Selection (EF-CNS) and Random based Cooperative Node Selection (R-CNS) algorithms.
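As a concrete illustration of the objective weighting step, this sketch computes entropy weights for a small decision matrix and combines them with assumed AHP (subjective) weights by a simple normalized product; the attribute values, the AHP weights and the combination rule are illustrative assumptions.

```python
# Hedged sketch: information-entropy weighting for a decision matrix
# (rows = candidate nodes, columns = attributes).
import numpy as np

X = np.array([[0.8, 0.6, 0.10],     # node 1: fading, energy, loss rate
              [0.5, 0.9, 0.20],
              [0.7, 0.4, 0.05]])

P = X / X.sum(axis=0)                       # column-normalise
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)        # entropy per attribute
w_obj = (1 - E) / (1 - E).sum()             # objective (entropy) weights

w_subj = np.array([0.5, 0.3, 0.2])          # e.g. from an AHP pairwise matrix
w = w_subj * w_obj / (w_subj * w_obj).sum() # one way to combine the two

scores = X @ w                              # evaluation value per node
print("best node:", scores.argmax() + 1)
```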
Dimension reduction method of brain network state observation matrix based on Spectral Embedding
DAI Zhaokun, LIU Hui, WANG Wenzhe, WANG Yanan
Journal of Computer Applications    2017, 37 (8): 2410-2415.   DOI: 10.11772/j.issn.1001-9081.2017.08.2410
As the brain network state observation matrix reconstructed from functional Magnetic Resonance Imaging (fMRI) is high-dimensional and lacks distinctive features, a dimensionality reduction method based on Spectral Embedding was presented. Firstly, the Laplacian matrix was constructed from the similarity measurements between samples. Secondly, in order to map the dataset from high dimension to low dimension, the first two main eigenvectors obtained by Laplacian matrix decomposition were selected to construct a two-dimensional eigenvector space. The method was applied to reduce the dimension of the matrix and visualize it in two-dimensional space, and the results were evaluated by category validity indicators. Compared with dimensionality reduction algorithms such as Principal Component Analysis (PCA), Locally Linear Embedding (LLE) and Isometric Mapping (Isomap), the mapping points in the low-dimensional space obtained by the proposed method have obvious category significance. According to the category validity indicators, compared with the Multi-Dimensional Scaling (MDS) and t-distributed Stochastic Neighbor Embedding (t-SNE) algorithms, the Di index (the average distance among within-class samples) of the proposed method was decreased by 87.1% and 65.2% respectively, and the Do index (the average distance among between-class samples) was increased by 351.3% and 25.5% respectively. Finally, the visualization results of dimensionality reduction on a number of samples show a certain regularity, validating the effectiveness and universality of the proposed method.
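A minimal sketch of the embedding step, using scikit-learn's SpectralEmbedding (Laplacian eigenmaps) and crude analogues of the Di/Do indices; the digits dataset stands in for the fMRI observation matrix.

```python
# Hedged sketch: spectral embedding to two dimensions plus Di/Do analogues.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import SpectralEmbedding

X, y = load_digits(return_X_y=True)          # (1797, 64) feature matrix
emb = SpectralEmbedding(n_components=2, affinity='nearest_neighbors',
                        random_state=0)
Z = emb.fit_transform(X)                     # Laplacian eigenvector space

# Mean within-class distance (Di) and between-class centroid distance (Do).
centroids = np.array([Z[y == c].mean(axis=0) for c in np.unique(y)])
Di = np.mean([np.linalg.norm(Z[y == c] - centroids[c], axis=1).mean()
              for c in np.unique(y)])
Do = np.mean([np.linalg.norm(ci - cj) for i, ci in enumerate(centroids)
              for j, cj in enumerate(centroids) if i < j])
print(Z.shape, Di, Do)
```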
Photogrammetric method for accurate tracking of 3D scanning probe
LIU Hong, WANG Wenxiang, LI Weishi
Journal of Computer Applications    2017, 37 (7): 2057-2061.   DOI: 10.11772/j.issn.1001-9081.2017.07.2057
For traditional 3D robot scanners, the measuring precision depends on the positioning precision of the robot, and it is difficult to achieve high measuring precision. A photogrammetric method was proposed to track and position the 3D scanning probe accurately. Firstly, a probe tracking system consisting of multiple industrial cameras was set up, and coded markers were pasted on the probe. Secondly, the cameras were calibrated with high precision, and their intrinsic and extrinsic parameters were obtained. Thirdly, all cameras were synchronized, the markers in the images were matched according to the coding principle, and the projection matrices were obtained. Finally, the 3D coordinates of the markers in space were computed to track and position the probe. The experimental results show that the mean position error of the markers is 0.293 mm and the mean angle error is 0.136°; the accuracy of the algorithm is within a reasonable range. The photogrammetric method can improve the positioning precision of the probe, so as to achieve high-precision measurement.
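The final triangulation step can be sketched with OpenCV as below, assuming two synchronized, calibrated cameras; the intrinsics, camera poses and pixel coordinates are made-up stand-ins for real calibration and matching output.

```python
# Hedged sketch: triangulating a marker's 3D position from two cameras.
import numpy as np
import cv2

K = np.array([[1000., 0., 320.],            # shared intrinsics (assumed)
              [0., 1000., 240.],
              [0., 0., 1.]])
# Camera 1 at the origin; camera 2 translated 0.5 m along x.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.], [0.]])])

X_true = np.array([[0.1], [0.05], [2.0]])   # marker 2 m in front

def project(P, X):
    x = P @ np.vstack([X, [[1.0]]])
    return (x[:2] / x[2]).astype(np.float64)

x1, x2 = project(P1, X_true), project(P2, X_true)
X_h = cv2.triangulatePoints(P1, P2, x1, x2)  # homogeneous 4-vector
print((X_h[:3] / X_h[3]).ravel())            # recovers [0.1, 0.05, 2.0]
```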
Smoke recognition based on deep transfer learning
WANG Wenpeng, MAO Wentao, HE Jianliang, DOU Zhi
Journal of Computer Applications    2017, 37 (11): 3176-3181.   DOI: 10.11772/j.issn.1001-9081.2017.11.3176
For the smoke recognition problem, traditional recognition methods based on sensors and image features are easily affected by the external environment, leading to low recognition precision when the flame scene and type change. Recognition methods based on deep learning require a large amount of data, so the model's recognition ability is weak when smoke data are missing or the data source is restricted. To overcome these drawbacks, a new smoke recognition method based on deep transfer learning was proposed. The main idea was to transfer smoke features by means of the VGG-16 (Visual Geometry Group) model, with the ImageNet dataset as source data. Firstly, all image data were pre-processed, including random rotation, cropping and flipping. Secondly, the VGG-16 network was introduced to transfer the features in the convolutional layers, which were connected to fully connected layers pre-trained on smoke data. Finally, the smoke recognition model was obtained. Experiments were conducted on open datasets and real-world smoke images. The experimental results show that the accuracy of the proposed method is higher than that of current smoke image recognition methods, exceeding 96%.
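A minimal PyTorch sketch of the transfer step: VGG-16 convolutional features pretrained on ImageNet are frozen and a new two-class (smoke / no-smoke) head is attached; the sizes of the new fully connected layers are illustrative.

```python
# Hedged sketch: VGG-16 feature transfer with a new classification head.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False           # keep transferred conv features fixed

vgg.classifier = nn.Sequential(       # new fully connected head
    nn.Linear(512 * 7 * 7, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 2))

x = torch.randn(4, 3, 224, 224)       # a batch of preprocessed images
print(vgg(x).shape)                   # torch.Size([4, 2])
```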
Weighted sparse representation based on self-paced learning for face recognition
WANG Xuejun, WANG Wenjian, CAO Feilong
Journal of Computer Applications    2017, 37 (11): 3145-3151.   DOI: 10.11772/j.issn.1001-9081.2017.11.3145
In recent years, the Sparse Representation based Classifier (SRC) has become a research hotspot and has achieved great success in face recognition. However, when SRC reconstructs a test sample, it may use training samples that differ greatly from the test sample; meanwhile, SRC tends to lose locality information and thus produces unstable classification results. A Self-Paced Learning Weighted Sparse Representation based Classifier (SPL-WSRC) was proposed. It can effectively eliminate training samples that differ greatly from the test sample; in addition, the locality information between samples is considered by weighting, improving the classification accuracy and stability. The experimental results on three classical face databases show that the proposed SPL-WSRC algorithm outperforms the original SRC algorithm, and the improvement is more obvious when the training samples are sufficient.
Domain partition and controller placement for large scale software defined network
LIU Bangzhou, WANG Binqiang, WANG Wenbo, WU Di
Journal of Computer Applications    2016, 36 (12): 3239-3243.   DOI: 10.11772/j.issn.1001-9081.2016.12.3239
Concerning the high complexity of the multiple controller placement models in existing works, several metrics to improve network service quality were defined, and an approach to partition the network domain and place controllers for large scale Software Defined Network (SDN) was proposed. The network was partitioned into several domains based on the Label Propagation Algorithm (LPA), and then the controllers were deployed separately in the resulting small domains, which makes the model complexity linear in the network size while accounting for control path average latency, reliability and load balance. Simulation results show that the proposed strategy improves load balance dramatically compared with the original LPA, and decreases the model complexity and enhances network service quality compared with CCP. In Internet2, the average control path latency decreases by 9% and the reliability increases by 10% at most.
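A small sketch of the partition-then-place idea, assuming NetworkX: label propagation splits the topology into domains, and one controller per domain is placed at the node minimizing the average shortest-path latency within its domain. The topology generator and placement rule are illustrative.

```python
# Hedged sketch: LPA domain partition, then per-domain controller placement.
import networkx as nx
from networkx.algorithms.community import label_propagation_communities

G = nx.random_internet_as_graph(100, seed=0)    # stand-in topology
domains = list(label_propagation_communities(G))

controllers = []
for dom in domains:
    sub = G.subgraph(dom)
    dist = dict(nx.all_pairs_shortest_path_length(sub))
    # Pick the node with minimum mean distance to its domain members.
    best = min(dom, key=lambda v: sum(dist[v].values()) / len(dom))
    controllers.append(best)
print(len(domains), controllers)
```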
Fast flame recognition approach based on local feature filtering
MAO Wentao, WANG Wenpeng, JIANG Mengxue, OUYANG Jun
Journal of Computer Applications    2016, 36 (10): 2907-2911.   DOI: 10.11772/j.issn.1001-9081.2016.10.2907
For the flame recognition problem, traditional recognition methods based on physical signals are easily affected by the external environment. Meanwhile, most current methods based on feature extraction from flame images are less discriminative for different scenes and flame types, and thus have lower recognition precision when the flame scene and type change. To overcome this drawback, a new fast recognition method for flame images was proposed by introducing color space information into the Scale Invariant Feature Transform (SIFT) algorithm. Firstly, feature descriptors were extracted by the SIFT algorithm from the frame images obtained from flame video. Secondly, local noisy feature points were filtered out by introducing the flame color space information, and the feature descriptors were transformed into feature vectors by means of Bag Of Keypoints (BOK). Finally, an Extreme Learning Machine (ELM) was utilized to establish a fast flame recognition model. Experiments were conducted on open flame datasets and real-life flame images. The results show that for different flame scenes and types the accuracy of the proposed method is above 97%, and the recognition time is just 2.19 s for a test set containing 4301 images. In addition, compared with three other methods, namely support vector machine based on entropy, texture and flame spread rate, support vector machine based on SIFT and fire specialty in color space, and ELM based on SIFT and fire specialty in color space, the proposed method outperforms them in terms of recognition accuracy and speed.
Supersonic-based parallel group-by aggregation
ZHANG Bing, SUN Hui, FAN Xu, LI Cuiping, CHEN Hong, WANG Wen
Journal of Computer Applications    2016, 36 (1): 13-20.   DOI: 10.11772/j.issn.1001-9081.2016.01.0013
To address the time-consuming group-by aggregation operation in data-intensive computation, a cache-friendly group-by aggregation method was proposed, optimizing the operation in two aspects. Firstly, a cache-friendly group-by aggregation algorithm was designed on Supersonic, an open-source column-oriented query execution engine, to take full advantage of column storage for in-memory computation. Secondly, the algorithm was rewritten with multiple threads to speed up the query. Four different parallel aggregation algorithms were put forward: the Shared-Nothing Parallel Group-by Aggregation (NSHPGA) algorithm, the Table-Lock Shared-Hash Parallel Group-by Aggregation (TLSHPGA) algorithm, the Bucket-Lock Shared-Hash Parallel Group-by Aggregation (BLSHPGA) algorithm and the Node-Lock Shared-Hash Parallel Group-by Aggregation (NLSHPGA) algorithm. Through a series of comparison experiments on different group power sets and different numbers of worker threads, the NLSHPGA algorithm was shown to have the best performance in both speed-up ratio and concurrency, achieving 10x speedups on part of the queries. In addition, considering cache misses and memory utilization, the results show that the NSHPGA algorithm is suitable for smaller group power sets (8 in the experiments), while for larger ones the NLSHPGA algorithm performs better than the NSHPGA algorithm.
Progressive auction based switch migration mechanism in software defined network
CHEN Feiyu, WANG Binqiang, WANG Wenbo, WANG Zhiming
Journal of Computer Applications    2015, 35 (8): 2118-2123.   DOI: 10.11772/j.issn.1001-9081.2015.08.2118

In multi-controller Software Defined Network (SDN), existing switch migration strategies consider only a single migration factor, and therefore have low efficiency and require many migrations. A Progressive Auction based Switches Migration Mechanism (PASMM) was proposed. To improve network benefit, the switch migration problem was optimized by auctioning the controllers' remaining resources: by progressively raising the trading price of over-demanded controller resources, PASMM completes the auction and redeploys the controllers and switches. The simulation results show that, compared with some typical switch migration policies, PASMM achieves good load balancing among controllers, reduces the response time of PACKET_IN messages by an average of 13.5%, and spends the least migration time as switch flow requests increase.

Terrain rendering for level of detail based on hardware tessellation
WANG Wenbo, YIN Hong, XIE Wenbin, WANG Jiateng
Journal of Computer Applications    2015, 35 (6): 1716-1719.   DOI: 10.11772/j.issn.1001-9081.2015.06.1716

When subdividing a terrain grid, the vertex shader needs an extra generation pass and the calculation of the subdivision level is complicated. A Level of Detail (LOD) terrain rendering algorithm using the tessellation shader was put forward to address this insufficiency. The proposed method used block quadtree organization to build a rough terrain grid hierarchy, and filtered the active terrain blocks with an LOD discrimination function. A subdivision factor calculation method based on the continuous three-dimensional distance to the viewpoint was proposed for the tessellation control shader, and cracks between blocks were eliminated through the edge subdivision factors. Displacement mapping was then performed in the tessellation evaluation shader, displacing the height component of the fine grid blocks. Meanwhile, the quadtree was saved to the vertex buffer, decreasing the resource exchange between the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU), and the subdivision process was accelerated by introducing a subdivision queue. The experimental results show that the proposed algorithm has smooth detail level transitions and good subdivision effects, and it can increase GPU utilization and terrain rendering efficiency.

Face recognition based on local binary pattern and deep learning
ZHANG Wen, WANG Wenwei
Journal of Computer Applications    2015, 35 (5): 1474-1478.   DOI: 10.11772/j.issn.1001-9081.2015.05.1474

In order to solve the problem that deep learning ignores the local structure features of faces when extracting face features in face recognition, a novel face recognition approach combining block Local Binary Pattern (LBP) and deep learning was presented. At first, LBP features were extracted from different blocks of a face image and concatenated to serve as the texture description of the whole face. Then, the LBP features were input to a Deep Belief Network (DBN), which was trained level by level to obtain classification capability. At last, the trained DBN was used to recognize unseen face samples. On the ORL, YALE and FERET face databases, the experimental results show that the proposed method has better recognition performance than Support Vector Machine (SVM) in small-sample face recognition.
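The block LBP descriptor can be sketched as follows with scikit-image: per-block uniform-LBP histograms are concatenated into one face feature vector that would feed the DBN; the 4x4 grid and LBP parameters are illustrative.

```python
# Hedged sketch: blockwise LBP texture descriptor for a face image.
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_feature(img, grid=(4, 4), P=8, R=1.0):
    lbp = local_binary_pattern(img, P, R, method='uniform')
    n_bins = P + 2                     # uniform patterns + "other" bin
    h, w = img.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            hist, _ = np.histogram(block, bins=n_bins,
                                   range=(0, n_bins), density=True)
            feats.append(hist)         # local structure of this block
    return np.concatenate(feats)       # input vector for the DBN

face = np.random.rand(64, 64)          # stand-in for a grey face image
print(block_lbp_feature(face).shape)   # (4*4*10,) = (160,)
```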

Evolving model of multi-local world based on supply chain network with core of manufacturers
SUN Junyan, FU Weiping, WANG Wen
Journal of Computer Applications    2015, 35 (2): 560-565.   DOI: 10.11772/j.issn.1001-9081.2015.02.0560

In order to reveal the evolution rules of supply chain networks with manufacturers at the core, a five-level local-world network model was put forward. The model took the BA model and multi-local-world theory as its foundation, combined with realistic node generation and exit mechanisms. First of all, the intrinsic characteristics and evolution mechanism of the network were studied. Secondly, the topology and evolution rules of the network were analyzed, and a simulation model was established. Finally, the changes of the network characteristic parameters, including the number of nodes, the clustering coefficient and the degree distribution, were simulated and analyzed under different time steps and different critical conditions, and the evolution law of the network was derived. The simulation results show that a supply chain network with manufacturers at the core is scale-free and highly clustered. With increasing time and network node growth rate, the degree distribution of the overall network approaches a power-law distribution with exponent three. The degree distributions at the various levels differ: sub-tier suppliers and retailers obey power-law distributions, suppliers and distributors obey exponential distributions, and manufacturers generally obey a Poisson distribution.

Robust tracking operator using augmented Lagrange multiplier
LI Feibin, CAO Tieyong, HUANG Hui, WANG Wen
Journal of Computer Applications    2015, 35 (12): 3555-3559.   DOI: 10.11772/j.issn.1001-9081.2015.12.3555
Focusing on the problem of robust video object tracking, a robust generative algorithm based on sparse representation was proposed. Firstly, object and background templates were constructed by extracting image features, and sufficient candidates were acquired at each frame using a random sampling method. Secondly, a sparse coefficient vector was obtained to structure a similarity map through a novel multitask reverse sparse representation formulation, which searched for multiple subsets from the whole candidate set to simultaneously reconstruct multiple templates with minimum error; a customized Augmented Lagrange Multiplier (ALM) method was derived to solve this L1-minimization problem within a few iterations. Finally, additive pooling was proposed to extract discriminative information from the similarity map, effectively selecting as the tracking result the best candidate, the one most similar to the object templates and most different from the background templates; the tracking was implemented within the Bayesian filtering framework. Moreover, a simple but effective update mechanism was designed to update the object and background templates so as to handle appearance variation caused by illumination change, occlusion, background clutter and motion blur. Both qualitative and quantitative evaluations on a variety of challenging sequences demonstrate that, compared with other tracking algorithms, the proposed algorithm improves tracking accuracy and stability, and can effectively handle tracking in scenes with illumination and scale changes, occlusion, complex background, and so on.
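For the L1-minimization core, here is an augmented-Lagrangian (ADMM-style) sketch for the generic problem min 0.5||Ax-b||^2 + lam*||x||_1; it illustrates the multiplier and shrinkage updates but is not the paper's customized multitask reverse-sparse solver, and the penalty parameters are illustrative.

```python
# Hedged sketch: augmented-Lagrangian / ADMM updates for sparse coding.
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alm_l1(A, b, lam=0.1, rho=1.0, iters=300):
    """Solve min 0.5*||Ax - b||^2 + lam*||x||_1 via splitting x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); y = np.zeros(n)   # y: multiplier
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * z - y)     # quadratic step
        z = soft(x + y / rho, lam / rho)                # shrinkage step
        y = y + rho * (x - z)                           # multiplier update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x0 = np.zeros(100); x0[[5, 40, 77]] = [1.0, -2.0, 0.5]  # sparse truth
print(np.round(alm_l1(A, A @ x0), 2).nonzero()[0])      # ~ sparse support
```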
Pedestrian texture extraction by fusing significant factor
MA Qiang, WANG Wenwei
Journal of Computer Applications    2015, 35 (11): 3293-3296.   DOI: 10.11772/j.issn.1001-9081.2015.11.3293
Algorithms that extract pedestrian features from texture information suffer from redundant feature information and fail to reflect human visual sensitivity. An algorithm named SF-LBP was therefore proposed to extract pedestrian texture features by a Significant-factor Local Binary Pattern that incorporates the characteristics of the human visual system. Firstly, the algorithm calculated the significant factor of each region by a saliency detection method. Then, it rebuilt the eigenvector of the image from the significant factor weights and the pedestrian texture features, and generated a feature histogram from the local features. Finally, it was integrated with an adaptive AdaBoost classifier to construct a pedestrian detection system. The experimental results on the INRIA database show that the SF-LBP feature achieves a detection rate of 97%, about 2%-3% higher than that of the HOG (Histogram of Oriented Gradients) feature and the Haar feature, and a recall rate of 90%, 2% higher than that of other features. This indicates that the SF-LBP feature can effectively describe the texture characteristics of pedestrians and improve the accuracy of the pedestrian detection system.
Intrusion detection based on dendritic cell algorithm and twin support vector machine
LIANG Hong, GE Yufei, CHEN Lin, WANG Wenjiao
Journal of Computer Applications    2015, 35 (11): 3087-3091.   DOI: 10.11772/j.issn.1001-9081.2015.11.3087
To address the slow training, poor real-time processing and high false positive rate of network intrusion detection when dealing with big data, a Dendritic Cell Twin Support Vector Machine (DCTWSVM) approach was proposed. The Dendritic Cell Algorithm (DCA) was first used for basic intrusion detection, and then a TWin Support Vector Machine (TWSVM) was applied to optimize the first-step detection results. Experiments were carried out to test the performance of the approach. The experimental results show that DCTWSVM improves the detection accuracy by 2.02%, 2.30% and 5.44% compared with DCA, Support Vector Machine (SVM) and Back Propagation (BP) neural network respectively, and reduces the false positive rate by 0.26%, 0.46% and 0.90%. The training speed is approximately twice that of SVM, and the short training time is a further advantage. The results indicate that DCTWSVM is suitable for comprehensive intrusion detection environments and facilitates real-time intrusion processing.
Prostate tumor CAD model based on neural network with feature-level fusion in magnetic resonance imaging
LU Huiling, ZHOU Tao, WANG Huiqun, WANG Wenwen
Journal of Computer Applications    2015, 35 (10): 2813-2818.   DOI: 10.11772/j.issn.1001-9081.2015.10.2813
Focusing on the feature relevancy and dimension disaster problems in the high-dimensional representation of Magnetic Resonance Imaging (MRI) prostate tumor Regions of Interest (ROI), a prostate tumor CAD model based on a Neural Network (NN) with Principal Component Analysis (PCA) feature-level fusion in MRI was proposed. Firstly, 102 dimensions of features were extracted from the MRI prostate tumor ROIs, including 6 geometry features, 6 statistical features, 7 Hu invariant moment features, 56 GLCM texture features, 3 Tamura texture features and 24 frequency features. Secondly, 8 features with a cumulative contribution rate of 89.62% were obtained by using PCA for feature-level fusion, reducing the dimension of the feature vectors. Thirdly, the classical NN, trained with the Broyden-Fletcher-Goldfarb-Shanno (BFGS), Back-Propagation (BP), Gradient Descent (GD) and Levenberg-Marquardt algorithms, was used as the classifier. Finally, 180 MRI images of prostate patients were used as the original data, and the prostate tumor CAD model based on NN with feature-level fusion was used for diagnosis. The experimental results illustrate that PCA feature-level fusion improves the ability of the neural network to distinguish benign from malignant prostate tumors by at least 10%, and that the feature-level fusion strategy is effective, increasing the independence between features to a certain extent.
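The PCA fusion step can be sketched as below with scikit-learn, keeping enough principal components to reach a target cumulative contribution rate; the random matrix stands in for the 102-dimensional ROI features, so the resulting number of components differs from the paper's 8.

```python
# Hedged sketch: PCA feature-level fusion by cumulative contribution rate.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((180, 102))          # 180 ROIs x 102 features

Xs = StandardScaler().fit_transform(X)       # PCA is scale-sensitive
pca = PCA().fit(Xs)
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.89) + 1)      # components for ~89% rate
X_fused = PCA(n_components=k).fit_transform(Xs)
print(k, X_fused.shape)                      # fused vectors feed the NN
```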
Design of positioning and attitude data acquisition system for geostress monitoring
GU Jingbo GUAN Guixia ZHAO Haimeng TAN Xiang YAN Lei WANG Wenxiang
Journal of Computer Applications    2014, 34 (9): 2752-2756.   DOI: 10.11772/j.issn.1001-9081.2014.09.2752

Aiming at the efficient data acquisition, real-time precise positioning and attitude measurement problems of geostress low-frequency electromagnetic monitoring, a real-time data acquisition system combined with a positioning and attitude measurement module was designed and implemented. The hardware system took an ARM microprocessor (S3C6410) as the control core, running embedded Linux, and the hardware and software architecture was introduced in detail. In addition, an algorithm for extracting characteristic positioning and attitude data was proposed. A monitoring terminal for data acquisition and processing was designed using the Qt/Embedded GUI programming technique on an LCD (Liquid Crystal Display), achieving human-computer interaction; meanwhile, the required data could be stored to an SD card in real time. The results of system debugging and actual field experiments indicate that the system can complete positioning and attitude data acquisition and processing, effectively solving the problem of real-time positioning for in-situ monitoring, and can realize geostress low-frequency electromagnetic monitoring with high speed, real-time performance and high reliability.

Regional blood supply system optimization under stochastic demand
YU Juan WANG Wenxian ZHONG Qinglun
Journal of Computer Applications    2014, 34 (9): 2585-2589.   DOI: 10.11772/j.issn.1001-9081.2014.09.2585

From the perspective of supply chain integration, a blood supply model was developed by the multi-objective programming method, aiming to minimize the blood acquisition risk, the system operation cost, and the penalties for both excessive and insufficient acquisition. Taking into account that the amount of expired blood is proportional to time, as well as the cost of processing expired blood, a regional supply and demand equilibrium model characterized by stochastic demand for the four blood types was built. The model was proved to be convex, and the variational inequality of the blood supply and demand network equilibrium was derived. Using a modified quasi-Newton method, the equilibrium solutions of the blood supply chain under stochastic demand were obtained. Finally, a case study in Chengdu verified the model's applicability.

Energy-aware virtual network embedding algorithm based on topology aggregation
WANG Bo CHEN Shuqiao WANG Zhiming WANG Wengao
Journal of Computer Applications    2014, 34 (6): 1537-1540.   DOI: 10.11772/j.issn.1001-9081.2014.06.1537

The key issue in network virtualization is Virtual Network Embedding (VNE), and the rapid growth of energy cost makes infrastructure providers increasingly concerned with energy conservation. An energy-aware VNE algorithm that consolidates the embedding onto a central part of the network topology was presented. The importance of nodes was characterized by closeness centrality together with node capability, and working nodes were preferentially reused to consolidate resources, reducing energy consumption and computation cost while ensuring that the substrate links would not become too long. The simulation results show that the proposed algorithm improves the revenue-energy ratio by more than 20% while the acceptance ratio reaches 70% and the revenue-cost ratio reaches 75%, and it has advantages compared with previous algorithms.
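A toy sketch of the placement heuristic, assuming NetworkX: substrate nodes are scored by closeness centrality scaled by free capacity, with a bonus for already-working nodes so that resources are consolidated; the capacities, bonus and scoring rule are illustrative assumptions.

```python
# Hedged sketch: centrality-and-capacity scoring for virtual node mapping.
import networkx as nx

G = nx.erdos_renyi_graph(30, 0.15, seed=1)        # substrate network
cpu_free = {v: 10 for v in G}                     # stand-in capacities
working = set()                                   # nodes already powered on

def score(v):
    bonus = 2.0 if v in working else 1.0          # prefer active nodes
    return nx.closeness_centrality(G, v) * cpu_free[v] * bonus

for demand in [3, 2, 4]:                          # virtual node requests
    host = max((v for v in G if cpu_free[v] >= demand), key=score)
    cpu_free[host] -= demand                      # consume capacity
    working.add(host)                             # keep this node powered
print(sorted(working))
```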

Mobile robot safety navigation based on time to contact
HAO Dapeng FU Weiping WANG Wen
Journal of Computer Applications    2014, 34 (4): 1209-1212.   DOI: 10.11772/j.issn.1001-9081.2014.04.1209

Navigation has potential safety hazards when an autonomous mobile robot moves in dynamic uncertain environments. In order to improve navigation safety, a representation of the navigation environment using time to contact, namely the time-to-contact space, was proposed. As the risk index of the navigation environment, the time to contact between any two points was computed from the linear and rotational velocities, and the configuration space was mapped into the time-to-contact space as the robot moved through the navigation environment. The time-to-contact space was applied to the classic behavior dynamics navigation method. Compared with the classical behavior dynamics method and behavior dynamics with velocity obstacles, the simulation results prove that the time-to-contact space can guarantee the safe navigation of the autonomous mobile robot.
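A minimal sketch of the risk index for the translational case: the time to contact between the robot and an obstacle point under a constant-velocity assumption (the paper also incorporates the rotational velocity, which is omitted here).

```python
# Hedged sketch: time to contact = distance / closing speed.
import numpy as np

def time_to_contact(p_robot, v_robot, p_obs, v_obs=np.zeros(2)):
    """Returns np.inf when the two points are not approaching."""
    r = p_obs - p_robot                  # relative position
    v = v_obs - v_robot                  # relative velocity of obstacle
    closing_speed = -(r @ v) / np.linalg.norm(r)
    if closing_speed <= 0:
        return np.inf                    # diverging: no contact risk
    return np.linalg.norm(r) / closing_speed

# Robot at origin moving 1 m/s along x; static obstacle 5 m ahead.
print(time_to_contact(np.array([0., 0.]), np.array([1., 0.]),
                      np.array([5., 0.])))   # -> 5.0 seconds
```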
