Two-echelon location-routing model and algorithm for waste recycling considering obnoxious effect
MA Yanfang, ZHANG Wen, LI Zongmin, YAN Fang, GUO Lingyun
Journal of Computer Applications    2023, 43 (1): 289-298.   DOI: 10.11772/j.issn.1001-9081.2021111969
For the Location-Routing Problem (LRP) of domestic waste transfer stations and incineration stations, considering both the economic objective and the obnoxious effect of waste facilities, a piecewise obnoxious-effect function related to wind direction and distance was designed, a Two-Echelon Multi-Objective LRP (2E-MOLRP) model was formulated, and a non-dominated algorithm combining the Whale Optimization Algorithm (WOA) and Simulated Annealing (SA), namely WOA-SA, was proposed. Firstly, the initial population was optimized by a random method together with the Clarke and Wright (CW) saving algorithm. Secondly, a nonlinear dynamic inertia weight coefficient was adopted to adjust the convergence speed of WOA-SA. Thirdly, a parallel structure of WOA-SA was designed to enhance the global optimization ability. Finally, the Pareto solution set was obtained by non-dominated sorting. The algorithm was evaluated on 35 benchmark cases, including the Prins and Barreto sets, as well as on a simulated case of Tianjin. The results show that WOA-SA finds the Best Known Solution (BKS) of 20 benchmark cases, and the mean gaps between its solutions and the BKSs are 0.37% on the Prins cases and 0.08% on the Barreto cases, demonstrating good convergence and stability. Applied to the Tianjin case, the model and algorithm provided three schemes with different obnoxious-effect values and economic costs for decision makers with different preferences, thereby reducing both the cost of waste recycling and the obnoxious effect of facilities on the environment.
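The paper's piecewise obnoxious-effect function is not reproduced in the abstract; a minimal sketch of how such a wind- and distance-dependent penalty might look, with all thresholds and weights hypothetical:

```python
import math

def obnoxious_effect(distance_km, angle_deg, d_max=3.0):
    """Illustrative piecewise penalty for a waste facility.

    distance_km: distance from the facility to a residential point.
    angle_deg:   angle between the facility-to-point direction and the
                 prevailing downwind direction (0 = directly downwind).
    d_max, the weights and the breakpoints are hypothetical values,
    not the ones derived in the paper.
    """
    if distance_km >= d_max:          # beyond the affected radius
        return 0.0
    # downwind points suffer more; influence decays with angular offset
    wind_factor = max(math.cos(math.radians(angle_deg)), 0.0)
    base = 1.0 - distance_km / d_max  # linear decay with distance
    return base * (0.5 + 0.5 * wind_factor)
```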
Hybrid particle swarm optimization with multi-region sampling strategy to solve multi-objective flexible job-shop scheduling problem
ZHANG Wenqiang, XING Zheng, YANG Weidong
Journal of Computer Applications    2021, 41 (8): 2249-2257.   DOI: 10.11772/j.issn.1001-9081.2020101675
Flexible Job-shop Scheduling Problem (FJSP) is a widely applied combinatorial optimization problem. Since solving the multi-objective FJSP is complex and algorithms easily fall into local optima, a Hybrid Particle Swarm Optimization algorithm with a Multi-Region Sampling strategy (HPSO-MRS) was proposed to optimize both the makespan and the total machine delay time. The multi-region sampling strategy reorganizes the Pareto-front positions that the particles belong to and, after sampling, guides their moving directions in multiple regions of the Pareto front, thereby tuning the particles' convergence ability in multiple directions and improving the uniformity of their distribution to a certain extent. In addition, for encoding and decoding, a decoding strategy with an interpolation mechanism was used to eliminate potential local left shifts; for particle updating, the update method of traditional Particle Swarm Optimization (PSO) was combined with the crossover and mutation operators of the Genetic Algorithm (GA), which diversified the search process and prevented the algorithm from falling into local optima. The proposed algorithm was tested on the benchmark instances Mk01-Mk10 and compared with Hybrid Particle Swarm Optimization (HPSO), Non-dominated Sorting Genetic Algorithm Ⅱ (NSGA-Ⅱ), Strength Pareto Evolutionary Algorithm 2 (SPEA2) and Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) in terms of effectiveness and running efficiency. Significance analysis shows that HPSO-MRS is significantly better than the comparison algorithms on the convergence indexes Hyper Volume (HV) and Inverted Generational Distance (IGD) in 85% and 77.5% of the control groups respectively, and on the distribution index Spacing in 35% of the control groups, while never being significantly worse on any of the three indexes. Compared with the others, the proposed algorithm thus has better convergence and distribution performance.
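For permutation-coded particles, replacing the continuous PSO velocity formula with GA operators usually means crossing a particle with its personal and global bests and then mutating it. A rough sketch of one such update, with order crossover and swap mutation as illustrative operator choices (the paper's exact operators may differ):

```python
import random

def update_particle(particle, pbest, gbest, pc=0.8, pm=0.1):
    """One hybrid update: GA-style crossover toward pbest/gbest plus
    swap mutation, standing in for the velocity update of continuous PSO."""
    def order_crossover(p1, p2):
        a, b = sorted(random.sample(range(len(p1)), 2))
        child = [None] * len(p1)
        child[a:b] = p1[a:b]                      # keep a slice of p1
        rest = [g for g in p2 if g not in child[a:b]]
        for i in range(len(child)):               # fill the rest from p2
            if child[i] is None:
                child[i] = rest.pop(0)
        return child

    if random.random() < pc:
        particle = order_crossover(particle, pbest)   # exploit own memory
    if random.random() < pc:
        particle = order_crossover(particle, gbest)   # learn from the swarm
    if random.random() < pm:                          # swap mutation
        i, j = random.sample(range(len(particle)), 2)
        particle[i], particle[j] = particle[j], particle[i]
    return particle
```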
Mixed precision neural network quantization method based on Octave convolution
ZHANG Wenye, SHANG Fangxin, GUO Hao
Journal of Computer Applications    2021, 41 (5): 1299-1304.   DOI: 10.11772/j.issn.1001-9081.2020071106
Deep neural networks with 32-bit weights require substantial computing resources, making large-scale deep neural networks hard to deploy in scenarios with limited computing power (such as edge computing). To solve this problem, a plug-and-play neural network quantization method was proposed to reduce the computational cost of large-scale neural networks without significant loss of model performance. Firstly, the high-frequency and low-frequency components of the input feature map were separated based on Octave convolution. Secondly, convolution kernels of different bit widths were applied to the high- and low-frequency components respectively. Thirdly, the high- and low-frequency convolution results were quantized to the corresponding bit widths using different activation functions. Finally, the feature maps of different precisions were mixed to obtain the layer's output. Experimental results verify the effectiveness of the proposed method for model compression. When the model was compressed to 1+8 bits, its accuracy dropped by less than 3 percentage points on the CIFAR-10/100 datasets; moreover, the method compressed a ResNet50-based model to 1+4 bits while keeping accuracy above 70% on the ImageNet dataset.
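A minimal PyTorch sketch of the split-convolve-quantize-merge idea, assuming shape-preserving conv modules (e.g. 3x3 with padding 1); the bit widths, ReLU activations and uniform fake quantization are illustrative stand-ins, not the paper's exact scheme:

```python
import torch
import torch.nn.functional as F

def fake_quant(x, bits):
    """Uniform fake quantization of a non-negative tensor (illustrative)."""
    qmax = 2 ** bits - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    return torch.round(torch.clamp(x / scale, 0, qmax)) * scale

def octave_mixed_precision(x, conv_h, conv_l, hi_bits=8, lo_bits=4):
    """Split a feature map into high/low frequency parts (Octave-style),
    convolve each with its own kernel, quantize the two results to
    different bit widths, then merge them back at full resolution."""
    x_l = F.avg_pool2d(x, 2)            # low frequency: downsampled copy
    x_h = x - F.interpolate(x_l, scale_factor=2, mode='nearest')
    y_h = fake_quant(F.relu(conv_h(x_h)), hi_bits)
    y_l = fake_quant(F.relu(conv_l(x_l)), lo_bits)
    return y_h + F.interpolate(y_l, scale_factor=2, mode='nearest')
```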
Sealed-bid auction scheme based on blockchain
LI Bei, ZHANG Wenyin, WANG Jiuru, ZHAO Wei, WANG Haifeng
Journal of Computer Applications    2021, 41 (4): 999-1004.   DOI: 10.11772/j.issn.1001-9081.2020081329
With the rapid development of Internet technology, many traditional auctions are gradually being replaced by electronic auctions, in which the protection of security and privacy draws ever more concern. Addressing problems of current electronic bidding and auction systems, such as the risk of leaking bidders' privacy, the expensive cost of a third-party auction center, and possible collusion between that center and bidders, a sealed-bid auction scheme based on blockchain smart contract technology was proposed. The scheme makes full use of the decentralization, tamper-proofing and trustworthiness of the blockchain to construct an auction environment without a third party, and uses a security-deposit strategy to constrain the behavior of bidders, which improves the security of electronic sealed-bid auctions. At the same time, Pedersen commitments were used to keep auction prices from being leaked, and the Bulletproofs zero-knowledge proof protocol was used to verify the correctness of the winning bid. Security analysis and experimental results show that the proposed scheme meets the security requirements, and the time consumption of every stage is within an acceptable range for daily auction requirements.
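A Pedersen commitment hides a bid behind a random blinding factor while binding the bidder to it. A toy sketch over a multiplicative group; the parameters below are far too small to be secure and real deployments (as in Bulletproofs) use elliptic-curve groups:

```python
import secrets

p = 2 ** 127 - 1   # a Mersenne prime; toy-sized, insecure
g, h = 3, 5        # h must be chosen so that log_g(h) is unknown

def pedersen_commit(bid, r=None):
    """Commit to a bid: C = g^bid * h^r mod p. Hiding comes from the
    random blinding factor r, binding from the discrete-log assumption."""
    r = secrets.randbelow(p - 1) if r is None else r
    return pow(g, bid, p) * pow(h, r, p) % p, r

def pedersen_open(commitment, bid, r):
    """Reveal phase: anyone can recompute and compare the commitment."""
    return commitment == pow(g, bid, p) * pow(h, r, p) % p

c, r = pedersen_commit(4200)       # sealed bid of 4200
assert pedersen_open(c, 4200, r)   # verifies at the reveal stage
```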
Formal verification of smart contract for access control in IoT applications
BAO Yulong, ZHU Xueyang, ZHANG Wenhui, SUN Pengfei, ZHAO Yingqi
Journal of Computer Applications    2021, 41 (4): 930-938.   DOI: 10.11772/j.issn.1001-9081.2020111732
The advancement of network technologies such as Bluetooth and Wi-Fi has promoted the development of the Internet of Things (IoT). IoT facilitates people's lives, but it also brings serious security issues: without secure access control, illegal access may cause users losses in many aspects. Traditional access control methods usually rely on a trusted central node, which is unsuitable for an IoT environment with distributed nodes. Blockchain technology and smart contracts provide a more effective solution for access control in IoT applications, but general testing methods can hardly ensure the correctness of the smart contracts used. To solve this problem, a method was proposed to formally verify the correctness of access-control smart contracts with the model checking tool Verds. In the method, a state transition system defines the semantics of the Solidity smart contract, Computation Tree Logic (CTL) formulas describe the properties to be verified, and the smart contract interactions and user behaviors are modeled to form the Verds input model together with the properties to be verified; Verds is then used to check whether the properties hold. The core of the method is the translation from a subset of Solidity to the Verds input model. Experimental results on two smart contracts for access control of IoT resources show that the proposed method can verify typical scenarios and expected properties of access control contracts, thereby improving the reliability of smart contracts.
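As an illustration (hypothetical, not one of the paper's verified properties), a CTL safety property for such a contract could be written as

    AG ( callerIsNotOwner -> NOT accessGranted )

read as: on all paths, globally, a state in which the caller is not the owner never has the access flag granted. Model checking explores every reachable contract state, so a violation yields a concrete counterexample trace of transactions.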
Group scanpath generation based on fixation regions of interest clustering and transferring
LIU Nanbo, XIAO Fen, ZHANG Wenlei, LI Wangxin, WENG Zun
Journal of Computer Applications    2021, 41 (1): 150-156.   DOI: 10.11772/j.issn.1001-9081.2020061147
To address the redundancy and chaos of group observers' scanpath data in natural scenes and its lack of a representative summary, a group scanpath generation method based on spatial-temporal clustering and transferring of fixation Regions Of Interest (ROI) was proposed by mining the latent characteristics of individual scanpaths. Firstly, the scanpaths of multiple observers under the same stimulus were analyzed, and multiple fixation ROIs were generated by clustering the fixation points with the affinity propagation algorithm. Then, fixation-intensity information such as the number of observers, fixation frequency and duration was computed and analyzed, and the ROIs were filtered accordingly. Afterwards, sub-regions of interest of different types were extracted by defining fixation behaviors within the ROIs. Finally, a transfer mode over regions and sub-regions of interest based on fixation priority was proposed to generate the group scanpath for natural scenes. Group scanpath generation experiments were conducted on two public datasets, MIT1003 and OSIE. The results show that compared with state-of-the-art methods such as eMine, Scanpath Trend Analysis (STA), Sequential Pattern Mining Algorithm (SPAM), the Candidate-constrained Dynamic time warping Barycenter Averaging method (CDBA) and Heuristic, the proposed method generates group scanpaths of higher overall similarity, with ScanMatch (w/o duration) reaching 0.426 and 0.467 and ScanMatch (w/ duration) reaching 0.404 and 0.439 on the two datasets respectively. The generated scanpaths are thus highly similar to the real ones and serve as a representative summary.
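A minimal sketch of the clustering step using scikit-learn's AffinityPropagation; combining pixel coordinates with onset time via a scaling weight w_t is an assumption of this sketch, not the paper's stated feature design:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# fixations: one row per fixation: x, y in pixels, onset time in ms
fixations = np.array([[512, 300, 120], [530, 310, 400],
                      [100, 650, 820], [115, 640, 1100]], dtype=float)
w_t = 0.1                                 # hypothetical time-vs-space weight
features = fixations * np.array([1.0, 1.0, w_t])

ap = AffinityPropagation(random_state=0).fit(features)
for roi in np.unique(ap.labels_):
    pts = fixations[ap.labels_ == roi]
    print(f"ROI {roi}: {len(pts)} fixations, "
          f"centroid=({pts[:, 0].mean():.0f},{pts[:, 1].mean():.0f})")
```

Each resulting cluster is a candidate ROI whose observer count and dwell time can then be tallied for the filtering step described above.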
3D point cloud classification and segmentation network based on Spider convolution
WANG Benjie, NONG Liping, ZHANG Wenhui, LIN Jiming, WANG Junyi
Journal of Computer Applications    2020, 40 (6): 1607-1612.   DOI: 10.11772/j.issn.1001-9081.2019101879

The traditional Convolutional Neural Network (CNN) cannot process point cloud data directly: the point cloud must first be converted into a multi-view or voxelized grid, which complicates the pipeline and lowers point cloud recognition accuracy. To address this problem, a new point cloud classification and segmentation network called Linked-Spider CNN was proposed. Firstly, deep point cloud features were extracted by adding more Spider convolution layers to Spider CNN. Secondly, borrowing the idea of residual networks, short links were added to every Spider convolution layer to form residual blocks. Thirdly, the output features of all residual blocks were spliced and fused to form the point cloud features. Finally, the features were classified by three fully connected layers or segmented by multiple convolution layers. The proposed network was compared with PointNet, PointNet++ and Spider CNN on the ModelNet40 and ShapeNet Parts datasets. The experimental results show that it improves the classification accuracy and segmentation effect of point clouds, with faster convergence and stronger robustness.
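A PyTorch sketch of the residual-link-plus-splicing pattern; a 1x1 Conv1d over per-point features stands in for the actual Spider convolution, which this sketch does not implement:

```python
import torch
import torch.nn as nn

class ResidualPointBlock(nn.Module):
    """One block with a short (residual) link; Conv1d is a stand-in
    for SpiderConv."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x):                 # x: (batch, channels, points)
        return torch.relu(x + self.bn(self.conv(x)))   # short link

class LinkedBackbone(nn.Module):
    """Chain several residual blocks and concatenate every block's
    output, mirroring the splice-and-fuse step described above."""
    def __init__(self, channels=64, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualPointBlock(channels)
                                    for _ in range(depth))

    def forward(self, x):
        outs = []
        for blk in self.blocks:
            x = blk(x)
            outs.append(x)
        return torch.cat(outs, dim=1)     # fused multi-level features
```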

Smart contract vulnerability detection scheme based on symbolic execution
ZHAO Wei, ZHANG Wenyin, WANG Jiuru, WANG Haifeng, WU Chuankun
Journal of Computer Applications    2020, 40 (4): 947-953.   DOI: 10.11772/j.issn.1001-9081.2019111919
Smart contracts are one of the core technologies of blockchain, so their security and reliability are very important. With the popularization of blockchain applications, the number of smart contracts has grown explosively, and their vulnerabilities can bring huge losses to users. However, current research focuses on the semantic analysis of Ethereum smart contracts and the modeling and optimization of symbolic execution; it does not specifically describe how symbolic execution detects smart contract vulnerabilities, nor how to detect the common ones. Based on an analysis of the operating mechanism and common vulnerabilities of Ethereum smart contracts, symbolic execution was used to detect contract vulnerabilities. Firstly, the smart contract control flow graph was constructed from the Ethereum bytecode; then constraint conditions were designed according to the characteristics of smart contract vulnerabilities, and a constraint solver was used to generate test cases detecting common vulnerabilities such as integer overflow, access control, call injection and reentrancy attacks. The experimental results show that the proposed scheme has a good detection effect, reaching an accuracy of 85% on 70 vulnerable smart contracts in Awesome-Buggy-ERC20-Tokens.
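The abstract does not name the solver used; as an illustration with z3 (an assumption of this sketch), the integer-overflow check on 256-bit EVM words reduces to asking whether an addition can wrap around:

```python
from z3 import BitVec, Solver, ULT, sat

# 256-bit EVM words; overflow of a + b wraps modulo 2**256,
# which for unsigned arithmetic is exactly the condition a + b < a.
a, b = BitVec('a', 256), BitVec('b', 256)
s = Solver()
s.add(ULT(a + b, a))          # unsigned wrap-around constraint
if s.check() == sat:
    m = s.model()
    print("overflow witness:", m[a], m[b])   # concrete test inputs
```

Along each symbolically executed path, constraints like this are conjoined with the path condition, and any satisfying model is a concrete transaction that triggers the bug.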
Speech enhancement algorithm based on MMSE spectral subtraction with Laplacian distribution
WANG Yongbiao, ZHANG Wenxi, WANG Yahui, KONG Xinxin, LYU Tong
Journal of Computer Applications    2020, 40 (3): 878-882.   DOI: 10.11772/j.issn.1001-9081.2019071152
A Minimum Mean Square Error (MMSE) spectral subtraction algorithm based on the Laplacian distribution was proposed to address the residual noise and speech distortion left by spectral subtraction based on the Gaussian distribution. Firstly, the noisy speech signal was framed and windowed, and each frame was Fourier-transformed to obtain the short-time Discrete Fourier Transform (DFT) coefficients. Secondly, noisy-frame detection based on the log-spectral energy and spectral flatness of each frame was performed to update the noise estimate. Thirdly, under the assumption that the speech DFT coefficients follow a Laplacian distribution, the optimal spectral subtraction coefficient was derived under the MMSE criterion, and spectral subtraction with this coefficient produced the enhanced signal spectrum. Finally, the enhanced spectrum was inverse-Fourier-transformed and reassembled from frames to obtain the enhanced speech. The experimental results show that the Signal-to-Noise Ratio (SNR) of the enhanced speech increases by 4.3 dB on average, a 2 dB improvement over the over-subtraction method, and the average Perceptual Evaluation of Speech Quality (PESQ) score improves by 10% over the over-subtraction method. The proposed algorithm achieves better noise suppression with less speech distortion, improving significantly on both SNR and PESQ.
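A NumPy sketch of the per-frame subtraction step; here alpha is a fixed stand-in for the MMSE-derived coefficient (which the paper derives from the Laplacian model), and beta is a spectral floor, both hypothetical values:

```python
import numpy as np

def spectral_subtract(frames, noise_psd, alpha=1.0, beta=0.01):
    """One subtraction pass over windowed frames (rows).
    noise_psd must match the rfft bin count of a frame."""
    spec = np.fft.rfft(frames, axis=-1)
    mag2 = np.abs(spec) ** 2
    # subtract the (scaled) noise power, flooring to limit musical noise
    clean_mag2 = np.maximum(mag2 - alpha * noise_psd, beta * mag2)
    # keep the noisy phase, as spectral subtraction conventionally does
    clean_spec = np.sqrt(clean_mag2) * np.exp(1j * np.angle(spec))
    return np.fft.irfft(clean_spec, n=frames.shape[-1], axis=-1)
```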
Collaborative filtering recommendation algorithm based on dual most relevant attention network
ZHANG Wenlong, QIAN Fulan, CHEN Jie, ZHAO Shu, ZHANG Yanping
Journal of Computer Applications    2020, 40 (12): 3445-3450.   DOI: 10.11772/j.issn.1001-9081.2020061023
Item-based collaborative filtering learns user preferences from historically interacted items and recommends similar new items. Existing collaborative filtering methods assume that all historical items a user has interacted with affect the user equally and contribute equally to the prediction of a target item, which limits their accuracy. To solve this problem, a collaborative filtering recommendation algorithm based on a dual most-relevant attention network was proposed, containing two attention network layers. Firstly, an item-level attention network assigns different weights to different historical items to capture the most relevant ones among the user's historical interactions. Then, an item-interaction-level attention network perceives the degree of correlation between each historical item's interaction and the target item. Finally, the two attention layers jointly capture the user's fine-grained preferences over the historical items and the target item, so as to make better recommendations. Experiments on two real datasets, MovieLens and Pinterest, show that the proposed algorithm improves the recommendation hit rate by 2.3 and 1.5 percentage points respectively over the benchmark Deep Item-based Collaborative Filtering (DeepICF) algorithm, verifying its effectiveness for personalized recommendation.
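A toy NumPy sketch of item-level attention over embeddings; dot-product scoring is an illustrative stand-in for the paper's learned attention networks:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # numerical stability
    return np.exp(z) / np.exp(z).sum()

def item_level_attention(hist, target):
    """Weight each historical item embedding by its relevance to the
    target item, then pool into a user profile vector."""
    scores = hist @ target               # (n_items,) relevance scores
    weights = softmax(scores)            # higher weight = more relevant
    return weights @ hist                # attention-pooled user profile

hist = np.random.rand(5, 16)             # 5 historical item embeddings
target = np.random.rand(16)              # target item embedding
profile = item_level_attention(hist, target)
prediction = profile @ target            # illustrative preference score
```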
Storage location assignment optimization of stereoscopic warehouse based on genetic simulated annealing algorithm
ZHU Jie, ZHANG Wenyi, XUE Fei
Journal of Computer Applications    2020, 40 (1): 284-291.   DOI: 10.11772/j.issn.1001-9081.2019061035
Concerning the storage location assignment problem in automated warehouses, a multi-objective model for automated stereoscopic warehouse storage location assignment was constructed by combining the operational characteristics and security requirements of the warehouse, and an adaptive improved Simulated Annealing Genetic Algorithm (SAGA) based on the Sigmoid curve was proposed to solve it. Firstly, a storage location optimization model was established aiming at reducing the loading and unloading time of items, the distance between items of the same group, and the height of the shelf's center of gravity. Then, to overcome the Genetic Algorithm (GA)'s poor local search ability and tendency to fall into local optima, adaptive crossover and mutation operations based on the Sigmoid curve, together with a reversal operation, were introduced and fused with simulated annealing. Finally, the optimization quality, stability and convergence of the improved SAGA were tested. The experimental results show that compared with the Simulated Annealing (SA) algorithm, the proposed algorithm improves the loading/unloading-time objective by 37.7949 percentage points, the same-group-distance objective by 58.4630 percentage points and the shelf-gravity-center objective by 25.9275 percentage points, with better stability and convergence, proving the effectiveness of the improved SAGA. The algorithm can support decision making for storage location assignment optimization in automated warehouses.
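A sketch of a Sigmoid-shaped adaptive crossover/mutation rate of the kind described, for a minimization problem; all constants are illustrative, not the paper's:

```python
import math

def adaptive_rate(f, f_avg, f_best, p_max=0.9, p_min=0.4, k=9.0):
    """Above-average individuals (f below f_avg) get smaller rates to
    preserve good genes; below-average ones get the maximum rate.
    The Sigmoid gives a smooth transition instead of a hard switch."""
    if f <= f_avg:
        # map position between f_avg and f_best into the steep region
        x = k * (f_avg - f) / max(f_avg - f_best, 1e-12) - k / 2
        return p_min + (p_max - p_min) / (1.0 + math.exp(x))
    return p_max

# near the best: rate -> p_min; near the average: rate -> p_max
print(adaptive_rate(f=1.0, f_avg=5.0, f_best=1.0))   # ~0.4
print(adaptive_rate(f=4.9, f_avg=5.0, f_best=1.0))   # ~0.9
```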
Plant image segmentation method under bias light based on convolutional neural network
ZHANG Wenbin, ZHU Min, ZHANG Ning, DONG Le
Journal of Computer Applications    2019, 39 (12): 3665-3672.   DOI: 10.11772/j.issn.1001-9081.2019040637
To solve the low precision and poor generalization of traditional image segmentation algorithms on plant images under bias light in plant factories, a method based on neural networks and deep learning was proposed for accurately segmenting plant images under artificial bias light. With this method, the segmentation accuracy on the original test set of bias-light plant images reaches 91.89%, far exceeding that of segmentation algorithms such as the Fully Convolutional Network (FCN), clustering, thresholding and region growing. The method also shows better segmentation and generalization than those methods on plant images under lights of different colors. The experimental results show that the proposed method significantly improves the accuracy of plant image segmentation under bias light and can be applied to practical plant factory projects.
Image classification learning via unsupervised mixed-order stacked sparse autoencoder
YANG Donghai, LIN Minmin, ZHANG Wenjie, YANG Jingmin
Journal of Computer Applications    2019, 39 (12): 3420-3425.   DOI: 10.11772/j.issn.1001-9081.2019061107
Most current image classification methods reduce image dimensionality via supervised or semi-supervised learning, both of which require images to carry label information. Aiming at the dimensionality reduction and classification of unlabeled images, a mixed-order-feature stacked sparse autoencoder was proposed to realize unsupervised dimensionality reduction and classification learning. Firstly, a serial stacked sparse autoencoder network with three hidden layers was constructed; each hidden layer was trained separately, with the output of the previous hidden layer used as the input of the next, to extract image features and reduce data dimensionality. Secondly, the features of the first and second hidden layers of the trained stacked autoencoder were spliced and fused into a matrix of mixed-order features. Finally, a support vector machine classified the dimension-reduced image features, and the accuracy was evaluated. The proposed method was compared with seven algorithms on four open image datasets. The experimental results show that it can extract features from unlabeled images, realize image classification learning, reduce classification time and improve image classification accuracy.
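A compact PyTorch sketch of greedy layer-wise training plus the mixed-order splice; the layer sizes are hypothetical and an L1 penalty stands in for the usual KL sparsity term:

```python
import torch
import torch.nn as nn

def train_layer(ae, data, epochs=50, lr=1e-3, sparsity_weight=1e-3):
    """Train one sparse autoencoder layer to reconstruct its input,
    then return the (detached) hidden code for the next layer."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        code = torch.sigmoid(ae[0](data))          # encoder
        recon = ae[1](code)                        # decoder
        loss = nn.functional.mse_loss(recon, data) \
               + sparsity_weight * code.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(ae[0](data)).detach()

x = torch.rand(256, 784)                       # unlabeled images
dims = [784, 256, 128, 64]
codes, h = [], x
for d_in, d_out in zip(dims, dims[1:]):        # three hidden layers
    ae = nn.Sequential(nn.Linear(d_in, d_out), nn.Linear(d_out, d_in))
    h = train_layer(ae, h)
    codes.append(h)
mixed = torch.cat(codes[:2], dim=1)            # splice layers 1 and 2
# 'mixed' then feeds an SVM classifier, as described above
```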
Nonlocal self-similarity based low-rank sparse image denoising
ZHANG Wenwen, HAN Yusheng
Journal of Computer Applications    2018, 38 (9): 2696-2700.   DOI: 10.11772/j.issn.1001-9081.2018020310
Focusing on the tendency of many image denoising methods to lose detail while removing noise, a nonlocal self-similarity based low-rank sparse image denoising method was proposed. Firstly, external natural clean image patches were grouped by block matching based on the Mahalanobis Distance (MD), and a patch-group Gaussian Mixture Model (GMM) was learned as the nonlocal self-similarity prior. Secondly, based on Stable Principal Component Pursuit (SPCP), the noisy image matrix was decomposed into low-rank, sparse and noise parts, with the sparse matrix containing the useful information. Finally, a global objective function was minimized to achieve denoising. The experimental results show that compared with previous denoising methods such as EPLL (Expected Patch Log Likelihood), NCSR (Non-locally Centralized Sparse Representation) and PCLR (external Patch prior guided internal CLusteRing), the proposed method achieves better Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM), speed, denoising effect and detail retention.
Person re-identification method based on block sparse representation
SUN Jinyu, WANG Hongyuan, ZHANG Ji, ZHANG Wenwen
Journal of Computer Applications    2018, 38 (2): 448-453.   DOI: 10.11772/j.issn.1001-9081.2017082491
Focusing on person re-identification across non-overlapping camera views and the high dimensionality of the features extracted from images, a person re-identification method based on block sparse representation was proposed. Canonical Correlation Analysis (CCA) was used for feature projection, improving feature matching while avoiding the curse of dimensionality caused by operating on high-dimensional features, so that in the learned CCA projection space the feature vector of a probe image is approximately a linear combination of the corresponding gallery feature vectors. A person re-identification model exploiting the block structure of the pedestrian dataset was then built, and the associated optimization problem was solved within the alternating direction framework. Finally, the residuals were used to identify each person in the probe set, taking the index of the minimum residual as the person's identity. Experiments on the public datasets PRID 2011, iLIDS-VID and VIPeR show that the Rank-1 matching rate of the proposed method reaches 40.4%, 38.11% and 23.68% respectively, significantly higher than that of the Large Margin Nearest Neighbor (LMNN) method, and its overall performance exceeds classical algorithms based on feature representation and metric learning. The experimental results verify the effectiveness of the proposed method for person re-identification.
Minimum access guaranteed bandwidth allocation mechanism in data center network
CAI Yueping, ZHANG Wenpeng, LUO Sen
Journal of Computer Applications    2017, 37 (7): 1825-1829.   DOI: 10.11772/j.issn.1001-9081.2017.07.1825
In a Data Center Network (DCN), multiple tenants may interfere with each other, making application performance unpredictable, while reserving bandwidth can hardly maintain high network utilization and may cost cloud providers revenue. To address these problems, a Minimum Access Guaranteed Bandwidth Allocation (MAGBA) mechanism for data center networks was proposed. To give tenants a minimum bandwidth guarantee while making full use of idle bandwidth, MAGBA schedules Virtual Machine (VM) traffic through weighted fair queuing at the sending side and adjusts TCP receive windows at the receiving side. In simulations on the NS2 (Network Simulator version 2) platform, compared with static resource reservation, MAGBA allocated bandwidth more flexibly and improved network throughput by 25%. When some tenants sent many TCP flows, the other tenants obtained higher bandwidth under MAGBA than under an existing bandwidth allocation mechanism based on TCP flows. The simulation results show that MAGBA provides VMs with a guaranteed minimum access bandwidth and avoids interference from other tenants.
Agent-based dynamic scheduling system for hybrid flow shop
WANG Qianbo, ZHANG Wenxin, WANG Bailin, WU Zixuan
Journal of Computer Applications    2017, 37 (10): 2991-2998.   DOI: 10.11772/j.issn.1001-9081.2017.10.2991
Aiming at the uncertainty and dynamism of agile manufacturing and the features of the Hybrid Flow Shop (HFS) scheduling problem, a multi-Agent based dynamic scheduling system for the hybrid flow shop was developed, consisting of a management Agent, a strategy Agent, job Agents and machine Agents. First, an HFS-oriented Interpolation Sorting (HIS) algorithm suitable for both static scheduling and dynamic scheduling under a variety of dynamic events was proposed and integrated into the strategy Agent. Then the coordination mechanism among the Agents was designed: during production, all Agents work independently and coordinate according to their behavioral logic; when a dynamic event occurs, the strategy Agent calls the HIS algorithm to regenerate the job sequence from the current workshop state, and the Agents continue to coordinate according to the new sequence until production finishes. Finally, dynamic scheduling simulations covering machine failure, rush orders and online scheduling were carried out. The experimental results show that the HIS algorithm produces better schedules than a variety of dispatching rules in these cases; in particular, in rescheduling after machine breakdown, the consistency of makespan before and after rescheduling reaches 97.6%, showing that the HFS dynamic scheduling system is effective and flexible.
3-D visualization and information management system design based on Open Scene Graph
ZHANG Wenying, HE Kunjin, ZHANG Rongli, LIU Yuxing
Journal of Computer Applications    2016, 36 (7): 2056-2060.   DOI: 10.11772/j.issn.1001-9081.2016.07.2056
Concerning the management of component information during the three-dimensional presentation of virtual assembly, a design integrating 3-D visualization and information management technology was proposed, taking the assembly and disassembly of electric vehicles as the application. Firstly, the 3D models and an information library were established according to the topology and supporting information (such as material and type) of electric vehicles. Secondly, a directory tree was created from the parent-child relationships between parts and sub-assemblies in the information library, and three-dimensional presentation of sub-assemblies was achieved following the principle that a sub-assembly and the scene tree share the same multi-tree structure; each node of a sub-assembly was animated before disassembly presentation. Finally, pick-based interactive query and location-retrieval query were achieved by combining information management with the visualization of the electric vehicle. The constructed system was verified with a Century Bird electric bicycle model, realizing the integration of 3-D visualization and virtual assembly and providing effective technical support for 3-D presentation and virtual assembly. The experimental results show that the system can effectively integrate the 3-D visualization and information management of components in virtual assembly.
Motion feature extraction of random-dot video sequences based on V1 model of visual cortex
ZOU Hongzhong, XU Yuelei, MA Shiping, LI Shuai, ZHANG Wenda
Journal of Computer Applications    2016, 36 (6): 1677-1681.   DOI: 10.11772/j.issn.1001-9081.2016.06.1677
Focusing on extracting the motion features of video targets in complex scenes, and drawing on the motion perception of the biological visual system, the traditional primary visual cortex (V1) cell model was improved and a new random-dot motion feature extraction method based on the mechanism of the biological visual cortex was proposed. Firstly, a spatial-temporal filter and a half-squaring operation combined with normalization were adopted to simulate the linear and nonlinear properties of a neuron's receptive field. Then, a universal V1 cell model was obtained by adding an adjustable direction-selectivity parameter to the output weights, solving the traditional model's problems of single direction selectivity and inability to respond correctly to multi-directional motion. The simulation results show that the model's outputs are almost consistent with biological experimental data, indicating that it can simulate V1 neurons of different direction selectivities and extract motion features well from random-dot video sequences with complex motion patterns. The method offers a new approach to processing optical flow feature information, extracting the motion features of video sequences and tracking their objects effectively.
Image target recognition method based on multi-scale block convolutional neural network
ZHANG Wenda, XU Yuelei, NI Jiacheng, MA Shiping, SHI Hehuan
Journal of Computer Applications    2016, 36 (4): 1033-1038.   DOI: 10.11772/j.issn.1001-9081.2016.04.1033
Local deformations of images such as translation, rotation and random scaling pose a complicated problem in image recognition tasks. An algorithm based on pre-trained convolutional filters and a Multi-Scale block Convolutional Neural Network (MS-CNN) was proposed to solve these problems. Firstly, an unlabeled training dataset was used to train a sparse autoencoder, yielding a collection of convolutional filters consistent with the dataset's characteristics and possessing good initial values. Then, to enhance robustness and reduce the pooling layer's impact on feature extraction, a new Convolutional Neural Network (CNN) structure with multiple channels was proposed: a multi-scale block operation applied to the input image formed several channels, each convolved with a filter of the corresponding size, followed by a convolutional layer, a local contrast normalization layer and a pooling layer to obtain invariance. The feature maps were fed into the fully connected layer, and the final features were exported for target recognition. The recognition rates on the STL-10 database and on remote sensing airplane images were both improved compared with the traditional CNN. The experimental results show that the proposed method is robust to deformations such as translation, rotation and scaling.
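A PyTorch sketch of the multi-channel idea: each branch sees the input at a different scale with a matching kernel size. The scales, kernel sizes and the 8x8 pooled output are illustrative choices, not the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleBlock(nn.Module):
    """Parallel channels over multi-scale views of the input image."""
    def __init__(self, in_ch=3, out_ch=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
            for k in (3, 5, 7)])          # kernel matched to each scale
        self.scales = (1.0, 0.5, 0.25)

    def forward(self, x):
        outs = []
        for conv, s in zip(self.branches, self.scales):
            xs = F.interpolate(x, scale_factor=s) if s != 1.0 else x
            y = F.max_pool2d(F.relu(conv(xs)), 2)       # conv + pooling
            outs.append(F.adaptive_avg_pool2d(y, (8, 8)))
        return torch.cat(outs, dim=1)     # fused multi-scale features
```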
Fast multi-objective hybrid evolutionary algorithm for flow shop scheduling problem
ZHANG Wenqiang, LU Jiaming, ZHANG Hongmei
Journal of Computer Applications    2016, 36 (4): 1015-1021.   DOI: 10.11772/j.issn.1001-9081.2016.04.1015
A fast multi-objective hybrid evolutionary algorithm was proposed for the bi-criteria Flow shop Scheduling Problem (FSP) minimizing makespan and total flow time. The sampling strategy of the Vector Evaluated Genetic Algorithm (VEGA) was integrated with a new sampling strategy based on a fitness function built on Pareto dominating and dominated relationships. The new strategy makes up for the shortcoming of VEGA's sampling: VEGA is good at searching the edge regions of the Pareto front but neglects its center, whereas the new strategy prefers the central region. The fusion of these two mechanisms ensures that the hybrid algorithm converges to the Pareto front quickly and smoothly, and the efficiency improves greatly because no distance calculation is needed. Simulation experiments on the Taillard benchmark sets show that, compared with the Non-dominated Sorting Genetic Algorithm Ⅱ (NSGA-Ⅱ) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), the proposed algorithm improves convergence and distribution performance as well as efficiency, and is well suited to the bi-criteria flow shop scheduling problem.
Statistical iterative algorithm based on adaptive weighted total variation for low-dose CT
HE Lin, ZHANG Quan, SHANGGUAN Hong, ZHANG Wen, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2016, 36 (10): 2916-2921.   DOI: 10.11772/j.issn.1001-9081.2016.10.2916
Concerning the streak artifacts and impulse noise of Low-Dose Computed Tomography (LDCT) reconstructed images, a statistical iterative reconstruction method based on adaptively weighted Total Variation (TV) was presented. Considering that traditional TV may introduce staircase effects while suppressing streak artifacts, an adaptively weighted TV model combining a weighted-variation-based weighting factor with the TV model was proposed and applied to Penalized Weighted Least Squares (PWLS). Different areas of the image were denoised with different intensities, achieving good noise suppression and edge preservation. The Shepp-Logan phantom and a digital pelvis phantom were used to test the effectiveness of the proposed algorithm. Experimental results show that on both test images the proposed method yields smaller Normalized Mean Square Distance (NMSD) and Normalized Average Absolute Distance (NAAD) than the Filtered Back Projection (FBP), PWLS, PWLS-Median Prior (PWLS-MP) and PWLS-TV algorithms, while achieving Peak Signal-to-Noise Ratios (PSNR) of 40.91 dB and 42.25 dB respectively. The algorithm preserves image details and edges well while effectively eliminating streak artifacts.
Adaptive moving object extraction algorithm based on visual background extractor
LYU Jiaqing, LIU Licheng, HAO Luguo, ZHANG Wenzhong
Journal of Computer Applications    2015, 35 (7): 2029-2032.   DOI: 10.11772/j.issn.1001-9081.2015.07.2029

Video foreground detection in complex scenes is the first step of video analysis. To solve the low accuracy of foreground moving target detection, an improved moving object extraction algorithm based on the Visual Background Extractor (ViBE), called ViBE+, was proposed. Firstly, in the model initialization stage, each background pixel was modeled by a collection of its diamond neighborhood to simplify the sample information. Secondly, in the moving object extraction stage, the segmentation threshold was obtained adaptively to extract moving objects in dynamic scenes. Finally, for sudden illumination changes, a method of background rebuilding and update-parameter adjustment during the background update process was proposed. The experimental results show that compared with the Gaussian Mixture Model (GMM) algorithm, the Codebook algorithm and the original ViBE algorithm, the improved algorithm's similarity metric on moving object extraction increases by 1.3 times, 1.9 times and 3.8 times respectively in the complex video scene LightSwitch. The proposed algorithm adapts better to complex scenes and outperforms the other algorithms.

Quantized distributed Kalman filtering based on dynamic weighting
CHEN Xiaolong, MA Lei, ZHANG Wenxu
Journal of Computer Applications    2015, 35 (7): 1824-1828.   DOI: 10.11772/j.issn.1001-9081.2015.07.1824
Focusing on state estimation in a Wireless Sensor Network (WSN) without a fusion center, a Quantized Distributed Kalman Filtering (QDKF) algorithm was proposed. Firstly, based on a weighting criterion of node estimation accuracy, the weight matrix in the Distributed Kalman Filtering (DKF) algorithm was chosen dynamically to minimize the global estimation Error Covariance Matrix (ECM). Then, considering the bandwidth constraint of the network, a uniform quantizer was added to the DKF algorithm; using quantized information during communication reduces the required network bandwidth. Simulations were conducted with the proposed QDKF algorithm using an 8-bit quantizer. Compared with Metropolis weighting and maximum-degree weighting, the dynamic weighting method decreased the estimation Root Mean Square Error (RMSE) by 25% and 27.33% respectively. The simulation results show that QDKF with dynamic weighting improves estimation accuracy while reducing the bandwidth requirement, making it suitable for communication-limited network applications.
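A minimal sketch of an 8-bit uniform quantizer of the kind added to the filter; the fixed dynamic range [x_min, x_max], which all nodes must agree on, is an assumption of this sketch:

```python
import numpy as np

def uniform_quantize(x, bits=8, x_min=-10.0, x_max=10.0):
    """Map a state estimate onto 2**bits uniform levels; the integer
    codes are what a node would actually transmit."""
    levels = 2 ** bits - 1
    step = (x_max - x_min) / levels
    q = np.round((np.clip(x, x_min, x_max) - x_min) / step)
    return q.astype(np.uint8)

def dequantize(q, bits=8, x_min=-10.0, x_max=10.0):
    """Receiver side: reconstruct the estimate from the integer codes."""
    step = (x_max - x_min) / (2 ** bits - 1)
    return x_min + q.astype(float) * step
```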
Face recognition based on local binary pattern and deep learning
ZHANG Wen, WANG Wenwei
Journal of Computer Applications    2015, 35 (5): 1474-1478.   DOI: 10.11772/j.issn.1001-9081.2015.05.1474

To address deep learning's neglect of local facial structure when extracting face features for recognition, a face recognition approach combining block Local Binary Pattern (LBP) features and deep learning was presented. First, LBP features were extracted from different blocks of a face image and concatenated into a texture description of the whole face. Then, the LBP features were input to a Deep Belief Network (DBN), trained layer by layer to obtain classification capability. Finally, the trained DBN was used to recognize unseen face samples. On the ORL, YALE and FERET face databases, the experimental results show that the proposed method achieves better recognition performance than the Support Vector Machine (SVM) in small-sample face recognition.
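A sketch of the block-LBP feature extraction using scikit-image; the 4x4 grid and the 'uniform' LBP variant are illustrative choices, not necessarily the paper's:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_feature(face, grid=(4, 4), P=8, R=1):
    """Concatenate per-block LBP histograms into one face descriptor;
    the resulting vector is what would feed the DBN."""
    lbp = local_binary_pattern(face, P, R, method='uniform')
    n_bins = P + 2                        # uniform patterns + 1 catch-all
    h_step = face.shape[0] // grid[0]
    w_step = face.shape[1] // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i*h_step:(i+1)*h_step, j*w_step:(j+1)*w_step]
            hist, _ = np.histogram(block, bins=n_bins,
                                   range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)          # e.g. 4*4*10 = 160 dimensions
```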

Naïve differential evolution algorithm
WANG Shenwen, ZHANG Wensheng, QIN Jin, XIE Chengwang, GUO Zhaolu
Journal of Computer Applications    2015, 35 (5): 1333-1335.   DOI: 10.11772/j.issn.1001-9081.2015.05.1333

To overcome the one-sidedness of single mutation strategies, a naïve mutation strategy was proposed that moves an individual toward the best individual and away from the worst one. A scale-factor self-adaptation mechanism was used: the factor is set to a small value when the dimension values of three randomly chosen individuals are very close to each other, and to a large value otherwise. The results show that Differential Evolution (DE) with the new mechanism exhibits robust convergence behavior as measured by the average number of fitness evaluations, successful run rate and acceleration rate.
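The abstract does not give the exact update formula; a rough NumPy sketch of one plausible "toward best, away from worst" mutation with the closeness-switched scale factor, where the combination and the thresholds are hypothetical:

```python
import numpy as np

def naive_mutation(pop, fitness, F_small=0.1, F_large=0.9, eps=1e-3):
    """Per-individual mutant pulled toward the current best and pushed
    away from the worst (minimization assumed)."""
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    mutants = np.empty_like(pop)
    for i in range(len(pop)):
        r1, r2, r3 = pop[np.random.choice(len(pop), 3, replace=False)]
        # small step when three random individuals nearly coincide,
        # large step otherwise (self-adaptive scale factor)
        F = F_small if np.max(np.abs(r1 - r2) + np.abs(r2 - r3)) < eps \
            else F_large
        mutants[i] = pop[i] + F * (best - pop[i]) + F * (pop[i] - worst)
    return mutants
```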

Derivation and spectrum analysis of a kind of low weight spectral annihilator
HU Jianyong, ZHANG Wenzheng
Journal of Computer Applications    2015, 35 (12): 3447-3449.   DOI: 10.11772/j.issn.1001-9081.2015.12.3447
To mount an effective fast discrete Fourier spectra attack on a stream cipher, a relation or annihilator of low spectral weight must be found. Using the discrete Fourier transform of periodic sequences, a necessary and sufficient condition for sequences satisfying a product relation was obtained. On this basis, by defining the spectral cycle difference, a kind of low-spectral-weight relation and annihilator was derived. The spectral properties of m-sequences were also studied, a method to calculate the spectral space quickly was proposed, and an example was given.
Cloud architecture intrusion detection system based on KKT condition and hyper-sphere incremental SVM algorithm
ZHANG Wenxing, FAN Jiejie
Journal of Computer Applications    2015, 35 (10): 2886-2890.   DOI: 10.11772/j.issn.1001-9081.2015.10.2886
In view of the overload, the lack of support for multi-machine joint analysis, and the maintenance burden of a huge rule database in traditional Intrusion Detection Systems (IDS), a cloud-architecture IDS with an Incremental Support Vector Machine (ISVM) algorithm based on the KKT condition and hyper-sphere, namely KS-ISVM, was proposed. Network data captured by the client are preprocessed and sent to the cloud as samples, where KS-ISVM analyzes them. Samples violating the KKT condition are selected as useful, and those satisfying it are removed; to ensure the removed samples are truly redundant, they are screened again by the hyper-sphere rule, keeping those that satisfy it as useful and deleting the rest. Finally, the SVM is trained and updated by merging the selected useful samples. Comparison experiments with SVM, Batch-SVM and the KKT-based Incremental SVM (K-ISVM) were carried out on KDDCUP 99. The results show that KS-ISVM performs well in prediction and sample selection, reaching an accuracy of 90.3%, while the accuracies of SVM, Batch-SVM and K-ISVM all stay below 89%. Analysis of parallel KS-ISVM shows an analysis time of 6351 s for a single process versus 146 s for 16 processes, proving the multi-process technique effective and able to meet the efficiency and accuracy requirements of IDS in cloud computing environments.
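The KKT screening step reduces to a margin test: for a trained SVM, a non-support-vector sample satisfies KKT iff y*f(x) >= 1, so only violators need to enter retraining. A sketch assuming a scikit-learn SVC and labels in {-1, +1}:

```python
import numpy as np

def violates_kkt(clf, X_new, y_new, tol=1e-3):
    """Return a mask of new samples that violate the KKT conditions of
    the current SVM; points with functional margin >= 1 (up to tol)
    satisfy KKT and can be discarded before incremental retraining."""
    g = y_new * clf.decision_function(X_new)   # y * f(x)
    return g < 1 - tol                          # violators are useful

# usage sketch:
# from sklearn.svm import SVC
# clf = SVC(kernel='linear').fit(X_old, y_old)
# mask = violates_kkt(clf, X_new, y_new)
# clf.fit(np.vstack([X_old[clf.support_], X_new[mask]]),
#         np.hstack([y_old[clf.support_], y_new[mask]]))
```

The subsequent hyper-sphere check described above would then re-examine the discarded samples before they are finally dropped.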
Design of aerial photography control system for unmanned aerial vehicle
ZHAO Haimeng, ZHANG Wenkai, GU Jingbo, WANG Qiang, SHEN Luning, YAN Lei
Journal of Computer Applications    2015, 35 (1): 270-275.   DOI: 10.11772/j.issn.1001-9081.2015.01.0270

Aiming at the automatic control of camera payload parameters and real-time tracking of the flight path in Unmanned Aerial Vehicle (UAV) remote sensing photography, this paper presented a design scheme that completes camera payload control and aerial photography control automatically. First, real-time geographic location information and environment forecasts are acquired by the system according to the experimental requirements, and parameter encoding is completed based on the table of camera control parameters. Second, the custom protocol instruction set is sent to the hardware control circuits through the communication port to set the camera payload parameters and complete photography; meanwhile, the geographic coordinates of the real-time flight path are recorded by the route planning software. The system combines the hardware control platform with software data processing to achieve collaborative control. The UAV experiment results show that, compared with single-parameter aerial control, the proposed system can automatically control camera parameters and track the flight path in real time under different photography conditions and scenes.

Brain tumor segmentation based on morphological multi-scale modification and fuzzy C-means clustering
LIU Yue, WANG Xiaopeng, YU Hui, ZHANG Wen
Journal of Computer Applications    2014, 34 (9): 2711-2715.   DOI: 10.11772/j.issn.1001-9081.2014.09.2711

Tumors in brain Magnetic Resonance Imaging (MRI) images are often difficult to segment accurately due to noise, gray-level inhomogeneity, complex structure, and fuzzy, discontinuous boundaries. To obtain precise segmentation with less position bias, a new method based on Fuzzy C-Means (FCM) clustering and morphological multi-scale modification was proposed. Firstly, a control parameter was introduced to distinguish noise points, edge points and region-interior points in a neighborhood, and a functional relationship between pixels and structure-element sizes was established by combining spatial information. Then, different pixels were modified with structure elements of different sizes using the morphological closing operation, removing most local minima caused by irregular details and noise while largely preserving the contour positions of the target region. Finally, the FCM clustering algorithm was applied to the multi-scale modified image, avoiding local optima, misclassification and contour position bias while retaining accurate contour localization. Compared with standard FCM, Kernel FCM (KFCM), Genetic FCM (GFCM), Fuzzy Local Information C-Means (FLICM) and expert hand sketches, the experimental results show that the proposed method achieves more accurate segmentation, with less over- and under-segmentation and a higher similarity index with respect to the reference segmentation.
