Section steel surface defect detection algorithm based on cascade neural network
YU Haitao, LI Jiansheng, LIU Yajiao, LI Fulong, WANG Jiang, ZHANG Chunhui, YU Lifeng
Journal of Computer Applications    2023, 43 (1): 232-241.   DOI: 10.11772/j.issn.1001-9081.2021111940
Deep learning has superior performance in defect detection; however, because defects occur with low probability, the detection of defect-free images occupies most of the computation time, which severely limits the overall effective detection speed. To solve this problem, a section steel surface defect detection algorithm based on a cascade network, named SDNet (Select and Detect Network), was proposed. The proposed algorithm was divided into two stages: a pre-inspection stage and a precise detection stage. In the pre-inspection stage, a lightweight ResNet pre-inspection network based on Depthwise Separable Convolution (DSC) and multi-scale parallel convolution was used to determine whether there were defects in the section steel surface image. In the precise detection stage, YOLOv3 was used as the baseline network to accurately classify and locate the defects in the image. In addition, an improved Atrous Spatial Pyramid Pooling (ASPP) module and a dual attention module were introduced into the backbone feature extraction network and the prediction branches to improve detection performance. Experimental results show that on 1 024 pixel×1 024 pixel images, the detection speed and accuracy of SDNet reach 120.63 frames per second and 92.1% respectively. Compared with the original YOLOv3 algorithm, the proposed algorithm runs about 3.7 times faster and improves detection precision by 10.4 percentage points. The proposed algorithm can be applied to the rapid detection of section steel surface defects.
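The cascade idea described above, where a cheap pre-inspection stage filters defect-free images so that the expensive detector runs only when needed, can be sketched as follows. This is a minimal illustration in which `pre_inspect` and `detect` are hypothetical stand-ins for the paper's ResNet pre-inspection network and improved YOLOv3:

```python
def cascade_detect(images, pre_inspect, detect):
    """Two-stage cascade: run a cheap binary pre-check on every image
    and invoke the expensive detector only on images flagged as defective."""
    results = []
    for img in images:
        if pre_inspect(img):             # stage 1: defect / no-defect decision
            results.append(detect(img))  # stage 2: classify and locate defects
        else:
            results.append([])           # defect-free: skip costly detection
    return results
```

Because most production images are defect-free, the average cost per image approaches that of the lightweight pre-check alone.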
Motion control method of two-link manipulator based on deep reinforcement learning
WANG Jianping, WANG Gang, MAO Xiaobin, MA Enqi
Journal of Computer Applications    2021, 41 (6): 1799-1804.   DOI: 10.11772/j.issn.1001-9081.2020091410
Aiming at the motion control problem of a two-link manipulator, a new control method based on deep reinforcement learning was proposed. Firstly, the simulation environment of the manipulator was built, including the two-link manipulator, the target and the obstacle. Then, according to the target setting, state variables, and reward and punishment mechanism of the environment model, three kinds of deep reinforcement learning models were established for training. Finally, the motion control of the two-link manipulator was realized. After comparing and analyzing the three proposed models, the Deep Deterministic Policy Gradient (DDPG) algorithm was selected for further research to improve its applicability, shorten the debugging time of the manipulator model, and enable the manipulator to avoid the obstacle and reach the target smoothly. Experimental results show that the proposed deep reinforcement learning method can effectively control the motion of the two-link manipulator, and that the improved DDPG control model has its convergence speed increased two-fold and enhanced stability after convergence. Compared with traditional control methods, the proposed deep reinforcement learning control method has higher efficiency and stronger applicability.
Vehicle number optimization approach of autonomous vehicle fleet driven by multi-spatio-temporal distribution task
ZHENG Liping, WANG Jianqiang, ZHANG Yuzhao, DONG Zuofan
Journal of Computer Applications    2021, 41 (5): 1406-1411.   DOI: 10.11772/j.issn.1001-9081.2020081183
A stochastic optimization method was proposed to solve the fleet sizing problem of the minimum autonomous vehicle fleet driven by the spatio-temporal multi-tasks of terminal delivery. Firstly, the influence of service time and waiting time on the route planning of the autonomous vehicle fleet was analyzed to build a shortest-route model, and the service sequence network was constructed based on a two-dimensional spatio-temporal network. Then, the fleet sizing problem was converted into a network maximum-flow problem through network transformation, and a minimum fleet model was established with the goal of minimizing the number of vehicles in the fleet. Finally, the Dijkstra-Dinic algorithm, combining the Dijkstra algorithm and the Dinic algorithm, was designed according to the model features to solve the problem. Simulation experiments were carried out on service networks of four different scales. The results show that: under different successful service rates, the minimum size of the autonomous vehicle fleet is positively correlated with the scale of the service network; it decreases with the increase of waiting time and gradually tends to be stable; the One-stop operator introduced into the proposed algorithm greatly improves the search efficiency; and the proposed model and algorithm are suitable for computing the minimum vehicle fleet in large-scale service networks.
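The conversion from fleet sizing to a maximum-flow problem can be illustrated with the closely related minimum-path-cover formulation: if one vehicle can serve task `j` after finishing task `i`, the minimum fleet equals the number of tasks minus a maximum bipartite matching over that relation. The sketch below uses a plain augmenting-path matching rather than the paper's Dijkstra-Dinic algorithm, and `can_chain` is a hypothetical predicate:

```python
def min_fleet(n_tasks, can_chain):
    """Minimum number of vehicles to cover all tasks, computed as
    n_tasks minus a maximum matching on the 'same vehicle can serve
    task j right after task i' relation (minimum path cover in a DAG)."""
    match = [-1] * n_tasks  # match[j] = task matched as predecessor of j

    def augment(i, seen):
        # Kuhn's augmenting-path search from task i.
        for j in range(n_tasks):
            if can_chain(i, j) and j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    matched = sum(augment(i, set()) for i in range(n_tasks))
    return n_tasks - matched
```

For example, three tasks that can all be chained in sequence need a single vehicle, while three mutually incompatible tasks need three.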
Overview of information extraction of free-text electronic medical records
CUI Bowen, JIN Tao, WANG Jianmin
Journal of Computer Applications    2021, 41 (4): 1055-1063.   DOI: 10.11772/j.issn.1001-9081.2020060796
Information extraction technology can extract key information from free-text electronic medical records, supporting hospital information management and subsequent information analysis. Therefore, the main process of free-text electronic medical record information extraction was briefly introduced, the research results of recent years on single extraction and joint extraction methods for the three most important types of information (named entities, entity assertions and entity relations) were studied, and the methods, datasets and final effects of these results were compared and summarized. In addition, the features, advantages and disadvantages of several popular new methods were analyzed, the datasets commonly used in the field of information extraction of free-text electronic medical records were summarized, and the current status and research directions of related fields in China were discussed.
Two-stage file compaction framework by log-structured merge-tree for time series data
ZHANG Lingzhe, HUANG Xiangdong, QIAO Jialin, GOU Wangminhao, WANG Jianmin
Journal of Computer Applications    2021, 41 (3): 618-622.   DOI: 10.11772/j.issn.1001-9081.2020122053
When the Log-Structured Merge-tree (LSM-tree) in a time series database is under high write load or resource constraints, untimely file compaction causes a large accumulation of data in the LSM C0 layer, increasing the latency of ad hoc queries on recently written data. To address this problem, a two-stage LSM compaction framework was proposed that achieves low-latency queries of newly written time series data while maintaining efficient queries over large blocks of data. Firstly, the file compaction process was divided into two stages: quick merging of a small number of out-of-order files, and merging of a large number of small files. Then, multiple file compaction strategies were provided in each stage. Finally, the compaction resources of the two stages were allocated according to the query load of the system. In tests of the traditional LSM compaction strategy and the two-stage LSM compaction framework on the time series database Apache IoTDB, the two-stage file compaction module greatly reduced the number of reads for ad hoc queries while improving the flexibility of the strategy, and improved historical data analysis and query performance by about 20%. Experimental results show that the two-stage LSM compaction framework can increase the ad hoc query efficiency of recently written data, and can improve the performance of historical data analysis and query as well as the flexibility of the compaction strategy.
Magnetic tile surface quality recognition based on multi-scale convolution neural network and within-class mixup operation
ZHANG Jing'ai, WANG Jiangtao
Journal of Computer Applications    2021, 41 (1): 275-279.   DOI: 10.11772/j.issn.1001-9081.2020060886
The various shapes of ferrite magnetic tiles and the wide variety of their surface defects pose great challenges to computer vision based surface defect quality recognition. To address this problem, deep learning was introduced into magnetic tile surface quality recognition, and a surface defect detection system for magnetic tiles based on convolutional neural networks was proposed. Firstly, the tile target was segmented from the collected image and rotated to obtain a standard image. After that, an improved multi-scale ResNet18 was used as the backbone network to design the recognition system. During training, a novel within-class mixup operation was designed to improve the generalization ability of the system. To approximate practical application scenarios, a surface defect dataset was built that takes illumination changes and posture differences into consideration. Experimental results on the self-built dataset indicate that the proposed system achieves a recognition accuracy of 97.9%, providing a feasible approach for the automatic recognition of magnetic tile surface defects.
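The within-class mixup operation can be sketched as ordinary mixup restricted to same-class pairs, so that interpolated samples keep a single clean label. This is an illustrative reading, with the Beta(α, α) mixing weight borrowed from standard mixup rather than taken from the paper:

```python
import random

def within_class_mixup(samples, labels, alpha=0.2):
    """Mix each sample with a random sample of the SAME class:
    x' = lam * x1 + (1 - lam) * x2, label unchanged."""
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    mixed = []
    for x, y in zip(samples, labels):
        partner = random.choice(by_class[y])       # same-class partner
        lam = random.betavariate(alpha, alpha)     # mixing coefficient
        mixed.append(([lam * a + (1 - lam) * b
                       for a, b in zip(x, partner)], y))
    return mixed
```

Keeping the pair within one class avoids the soft-label bookkeeping of standard mixup while still augmenting the data distribution.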
Rainfall cloud segmentation method in Tibet based on DeepLab v3
ZHANG Yonghong, LIU Hao, TIAN Wei, WANG Jiangeng
Journal of Computer Applications    2020, 40 (9): 2781-2788.   DOI: 10.11772/j.issn.1001-9081.2019122131
Concerning the problems that the numerical prediction method is complex in modeling, the radar echo extrapolation method easily accumulates errors, and model parameters are difficult to set in plateau areas, a method for segmenting rainfall clouds in Tibet was proposed based on improved DeepLab v3. Firstly, the convolutional layers and residual modules in the encoding network were used for down-sampling. Then, a multi-scale sampling module was constructed using dilated convolution, and an attention mechanism module was added to extract deep high-dimensional features. Finally, the deconvolutional layers in the decoding network were used to restore the feature map resolution. The proposed method was compared with Google's semantic segmentation network DeepLab v3 and other models on the validation set. The experimental results show that the method has better segmentation performance and generalization ability and segments rainfall clouds more accurately, with the Mean Intersection over Union (MIoU) reaching 0.95, which is 15.54 percentage points higher than that of the original DeepLab v3. On small targets and unbalanced datasets, rainfall clouds can be segmented more accurately by this method, so it can provide a reference for rain cloud monitoring and early warning.
Interactive water flow heating simulation based on smoothed particle hydrodynamics method
WANG Jiangkun, HE Kunjin, CAO Hongfei, WANG Jinqiang, ZHANG Yan
Journal of Computer Applications    2020, 40 (5): 1409-1414.   DOI: 10.11772/j.issn.1001-9081.2019101734

To solve the problems of difficult interaction and low efficiency in traditional water flow heating simulation, a thermal motion simulation method based on Smoothed Particle Hydrodynamics (SPH) was proposed to interactively control the process of water flow heating. Firstly, the continuous water flow was discretized into particles based on the SPH method, the particle group was used to simulate the movement of the water flow, and particle motion was confined to the container by collision detection. Then, the water particles were heated through a heat conduction model with Dirichlet boundary conditions, and the motion state of the particles was updated according to their temperature, so as to simulate the thermal motion of the water flow during heating. Finally, editable system parameters and constraint relationships were defined, and the heating and motion of water flow under multiple conditions were simulated through human-computer interaction. Taking the heating simulation of a solar water heater as an example, the interactivity and efficiency of the SPH method in solving the heat conduction problem were verified by modifying a few parameters to control the heating of the water heater, which facilitates the application of interactive water flow heating in other virtual scenes.
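The Dirichlet-boundary heat conduction step can be illustrated with a minimal 1-D explicit update, in which the boundary temperature is held fixed and each particle relaxes toward its neighbors. This is a simplified sketch, not the paper's SPH kernel formulation:

```python
def heat_step(temps, boundary_temp, k=0.1):
    """One explicit heat-conduction step on a 1-D chain of particles.
    Both ends are clamped to boundary_temp (Dirichlet condition);
    k is the conduction coefficient times the time step."""
    padded = [boundary_temp] + temps + [boundary_temp]
    # discrete Laplacian: left neighbor - 2*self + right neighbor
    return [t + k * (padded[i] - 2 * t + padded[i + 2])
            for i, t in enumerate(temps)]
```

Repeating the step drives every particle's temperature toward the boundary value, mirroring how the simulated water heats up over time.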

Chinese-Vietnamese bilingual multi-document news opinion sentence recognition based on sentence association graph
WANG Jian, TANG Shan, HUANG Yuxin, YU Zhengtao
Journal of Computer Applications    2020, 40 (10): 2845-2849.   DOI: 10.11772/j.issn.1001-9081.2020020280
Traditional opinion sentence recognition mainly realizes classification by emotional features inside the sentence. In the task of cross-lingual multi-document opinion sentence recognition, the associations between sentences in different languages and documents provide additional support for opinion sentence recognition. Therefore, a Chinese-Vietnamese bilingual multi-document news opinion sentence recognition method was proposed, combining a Bi-directional Long Short-Term Memory (Bi-LSTM) network framework with sentence association features. Firstly, emotional elements and event elements were extracted from the Chinese-Vietnamese bilingual sentences to construct a sentence association graph, and the sentence association features were obtained using the TextRank algorithm. Secondly, the Chinese and Vietnamese news texts were encoded in the same semantic space based on bilingual word embedding and Bi-LSTM. Finally, opinion sentence recognition was realized by jointly considering the sentence coding features and semantic features. Theoretical analysis and simulation results show that integrating the sentence association graph can effectively improve the precision of multi-document opinion sentence recognition.
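The TextRank step over the sentence association graph amounts to iterating PageRank-style scores on a weighted adjacency matrix. A minimal sketch follows; the damping factor 0.85 is the conventional TextRank choice, not necessarily the paper's:

```python
def textrank(adj, d=0.85, iters=50):
    """Iterative TextRank scores over a weighted sentence-association
    graph given as an adjacency matrix adj[i][j] (edge weight i -> j)."""
    n = len(adj)
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for j in range(n):
            s = 0.0
            for i in range(n):
                out = sum(adj[i])  # total outgoing weight of node i
                if adj[i][j] and out:
                    s += adj[i][j] / out * scores[i]
            new.append((1 - d) + d * s)
        scores = new
    return scores
```

Sentences connected to many well-connected sentences accumulate higher scores, which is what makes the association feature informative for opinion sentence recognition.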
Performance analysis of wireless key generation with multi-bit quantization under imperfect channel estimation condition
DING Ning, GUAN Xinrong, YANG Weiwei, LI Tongkai, WANG Jianshe
Journal of Computer Applications    2020, 40 (1): 143-147.   DOI: 10.11772/j.issn.1001-9081.2019061004
Since channel estimation error seriously affects the key generation consistency of the two communicating parties in wireless key generation, a multi-bit quantization wireless key generation scheme under imperfect channel estimation was proposed. Firstly, in order to investigate the influence of imperfect channel estimation on wireless key generation, a channel estimation error model was established. Then, a multi-bit key quantizer with guard band was designed, and the performance of the wireless key was improved by optimizing the quantization parameters. Closed-form expressions for the Key Disagreement Rate (KDR) and the Effective Key Generation Rate (EKGR) were derived, revealing the relationships between pilot signal power, quantization order, guard bands and these two key generation performance indicators. Simulation results show that increasing the transmit pilot power can effectively reduce the KDR; increasing the quantization order improves the key generation rate but also increases the KDR; and increasing the quantization order while choosing an appropriate guard band size can effectively reduce the KDR.
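A multi-bit quantizer with guard band can be sketched as follows: measurements that fall too close to a quantization boundary are discarded, trading key generation rate for a lower key disagreement rate. The normalization of measurements to [0, 1] and the guard parameterization here are illustrative assumptions:

```python
def quantize_key(samples, bits=2, guard=0.1):
    """Quantize channel measurements (normalized to [0, 1]) into key
    bits. Samples inside the guard band around a cell boundary are
    dropped, since estimation noise would flip them at the other party."""
    levels = 2 ** bits
    key = []
    for s in samples:
        cell = min(int(s * levels), levels - 1)
        lo, hi = cell / levels, (cell + 1) / levels
        # discard samples too close to either quantization boundary
        if s - lo < guard / levels or hi - s < guard / levels:
            continue
        key.extend(int(b) for b in format(cell, f'0{bits}b'))
    return key
```

Widening the guard band lowers the KDR but also discards more samples, which is exactly the rate-versus-agreement trade-off the closed-form analysis quantifies.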
Flexible job-shop green scheduling algorithm considering machine tool depreciation
WANG Jianhua, PAN Yujie, SUN Rui
Journal of Computer Applications    2020, 40 (1): 43-49.   DOI: 10.11772/j.issn.1001-9081.2019061058
For the Flexible Job-shop Scheduling Problem (FJSP) with machine flexibility and machine tool depreciation, in order to reduce the energy consumption of the production process, a mathematical model with the minimization of the weighted sum of maximum completion time and total energy consumption as the scheduling objective was established, and an Improved Genetic Algorithm (IGA) was proposed. Firstly, considering the strong randomness of the Genetic Algorithm (GA), the balanced dispersion principle of orthogonal experimental design was introduced to generate the initial population, improving global search performance. Secondly, to overcome genetic conflicts after the crossover operation, three-dimensional real-number coding and two-individual arithmetic crossover were used for chromosome crossover, which reduces the steps of conflict detection and improves the solving speed. Finally, a dynamic step length was adopted for the mutation operation, which guarantees local search ability within the global range. Tests on the 8 Brandimarte instances and comparisons with 3 improved heuristic algorithms of recent years show that the proposed algorithm is effective and feasible for solving the FJSP.
Handwritten numeral recognition under edge intelligence background
WANG Jianren, MA Xin, DUAN Ganglong, XUE Hongquan
Journal of Computer Applications    2019, 39 (12): 3548-3555.   DOI: 10.11772/j.issn.1001-9081.2019050869
With the rapid development of edge intelligence, existing convolutional network models for handwritten numeral recognition have become increasingly unsuited to the requirements of edge deployment and its limited computing power, and suffer from problems such as poor generalization on small samples and high network training costs. Drawing on the classic structure of the Convolutional Neural Network (CNN), the Leaky_ReLU activation, the dropout algorithm, the genetic algorithm, and the ideas of adaptive and mixed pooling, a handwritten numeral recognition model based on an improved convolutional neural network, LeNet-DL, was constructed. The proposed model was compared with LeNet, LeNet+sigmoid, AlexNet and other algorithms on the large-sample MNIST dataset and the small-sample REAL dataset. The improved network achieves a large-sample recognition accuracy of up to 99.34%, a performance improvement of about 0.83%, and a small-sample recognition accuracy of up to 78.89%, a performance improvement of about 8.34%. The experimental results show that, compared with traditional CNN, the LeNet-DL network has lower training cost, better performance and stronger generalization ability on both large-sample and small-sample datasets.
Human skeleton key point detection method based on OpenPose-slim model
WANG Jianbing, LI Jun
Journal of Computer Applications    2019, 39 (12): 3503-3509.   DOI: 10.11772/j.issn.1001-9081.2019050954
The OpenPose model, originally used for the detection of human skeleton key points, can greatly shorten the detection cycle while maintaining the accuracy of the Regional Multi-Person Pose Estimation (RMPE) model and the Mask Region-based Convolutional Neural Network (R-CNN) model, both proposed in 2017 with near-optimal detection effect at that time. However, the OpenPose model has problems such as a low parameter sharing rate, high redundancy, long time consumption and an overly large model scale. To solve these problems, a new OpenPose-slim model was proposed, in which the network width was reduced, the number of convolution block layers was decreased, the original parallel structure was changed into a sequential structure, and a dense connection mechanism was added to the inner modules. The processing is mainly divided into three modules: 1) the key point localization module, which detects the position coordinates of human skeleton key points; 2) the key point association module, which connects key point positions to limbs; 3) the limb matching module, which performs limb matching to obtain the contour of the human body. The processing stages are closely correlated. Experimental results on the MPII dataset, the Common Objects in COntext (COCO) dataset and the AI Challenger dataset show that using four localization modules and two association modules, with the dense connection mechanism inside each module, is the best structure for the proposed model. Compared with the OpenPose model, the test cycle of the proposed model is shortened to nearly 1/6, the parameter size is reduced by nearly 50%, and the model size is reduced to nearly 1/27.
Reversible data hiding algorithm based on pixel value order
LI Tianxue, ZHANG Minqing, WANG Jianping, MA Shuangpeng
Journal of Computer Applications    2018, 38 (8): 2311-2315.   DOI: 10.11772/j.issn.1001-9081.2018020297
To address the excessively obvious distortion of images after secret embedding, a new Reversible Data Hiding (RDH) algorithm based on Pixel Value Order (PVO) was proposed. Firstly, the pixels of a carrier image were divided into gray and white layers, the pixels of the gray layer were selected as target pixels, and the four white pixels at the cross positions of each target pixel were sorted. Secondly, according to the sorting result, the mean value of the two end pixels and the mean value of the median pixels were calculated, and a reversibility constraint was used to achieve dynamic prediction of pixels. Finally, a Prediction Error Histogram (PEH) was constructed according to the prediction results. Six images in the USC-SIPI standard image library were used for simulation experiments. The experimental results show that with an Embedding Capacity (EC) of 10 000 b, the average Peak Signal-to-Noise Ratio (PSNR) reaches 61.89 dB, indicating that the proposed algorithm can effectively reduce the distortion of the image carrying ciphertext.
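One plausible reading of the prediction step, averaging the two end values and the two median values of the four sorted cross-position neighbors, can be sketched as below. The exact combination rule and the reversibility constraint are specified in the paper, so this is only illustrative:

```python
def pvo_predict(neighbors):
    """Predict a gray-layer target pixel from its four cross-position
    white neighbors: sort them, average the two extremes and the two
    medians, then combine the two means (an illustrative assumption)."""
    s = sorted(neighbors)
    end_mean = (s[0] + s[3]) / 2   # mean of the two end pixels
    med_mean = (s[1] + s[2]) / 2   # mean of the two median pixels
    return (end_mean + med_mean) / 2
```

The prediction error (actual minus predicted value) then feeds the prediction error histogram whose near-zero bins are shifted to embed bits reversibly.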
Road extraction from multi-source high resolution remote sensing image based on fully convolutional neural network
ZHANG Yonghong, XIA Guanghao, KAN Xi, HE Jing, GE Taotao, WANG Jiangeng
Journal of Computer Applications    2018, 38 (7): 2070-2075.   DOI: 10.11772/j.issn.1001-9081.2017122923
Semi-automatic road extraction methods require substantial manual participation, are time-consuming, and have low road extraction accuracy. To solve these problems, a new method for road extraction from multi-source high resolution remote sensing images based on the Fully Convolutional neural Network (FCN) was proposed. Firstly, the GF-2 and WorldView high resolution remote sensing images were divided into small tiles, and the tiles containing roads were classified by a Convolutional Neural Network (CNN). Then, the Canny operator was used to extract the edge feature information of roads. Finally, RGB, grayscale and ground truth were combined and fed into the FCN model for training, extending the existing FCN model to a new FCN model with multi-satellite-source and multi-feature-source input. The Shigatse region of Tibet was chosen as the research area. The experimental results show that the proposed method can achieve an extraction precision of 99.2% for road extraction from high resolution remote sensing images, while effectively reducing the time needed for extraction.
Dynamic model of public opinion and simulation analysis of complex network evolution
WANG Jian, WANG Zhihong, ZHANG Lejun
Journal of Computer Applications    2018, 38 (4): 1201-1206.   DOI: 10.11772/j.issn.1001-9081.2017081949
Concerning the complex dynamic evolution in the dissemination of public opinion, a dynamic evolution model was proposed based on transmission dynamics. Firstly, the models of public opinion and its evolution were constructed, and the static solution was obtained through equation transformation. Secondly, the Fokker-Planck equation was introduced to analyze the asymptotic behavior of public opinion evolution, and the steady-state solution was obtained and solved. On this basis, the correlation between the complex network and the model was established, and the objective of the simulation research was put forward. Finally, through simulation analysis of the public opinion evolution model and the public opinion model with the Fokker-Planck equation, and empirical analysis of real micro-blog public opinion data, the essence of the dissemination and evolution of public opinion in complex networks was studied. The results show that the asymptotic behavior of public opinion network evolution is consistent with the degree distribution, and that the connection pattern of network public opinion dissemination is influenced by nodes. The model can describe the dynamic behavior in the formation and evolution of micro-blog public opinion networks.
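For reference, the one-dimensional Fokker-Planck equation governing a probability density P(x, t) has the generic form below, with drift coefficient A(x) and diffusion coefficient B(x); the specific coefficients used in the asymptotic analysis follow from the paper's opinion model:

```latex
\frac{\partial P(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\left[A(x)\,P(x,t)\right]
  + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\left[B(x)\,P(x,t)\right]
```

Setting the time derivative to zero yields the steady-state solution whose shape is compared against the empirical micro-blog opinion distribution.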
Fast intra mode prediction decision and coding unit partition algorithm based on high efficiency video coding
GUO Lei, WANG Xiaodong, XU Bowen, WANG Jian
Journal of Computer Applications    2018, 38 (4): 1157-1163.   DOI: 10.11772/j.issn.1001-9081.2017092302
Due to the high complexity of intra coding in High Efficiency Video Coding (HEVC), an efficient intra coding algorithm combining coding unit partition and intra mode selection based on texture features was proposed. The strength of the dominant direction of each depth layer was used to decide whether a Coding Unit (CU) needs to be split, and to reduce the number of candidate intra modes. Firstly, the variance of pixels in the coding unit was used, the strength of the dominant direction based on pixel units was calculated to determine the texture direction complexity, and the final depth was derived by means of a threshold strategy. Secondly, the relation between vertical and horizontal complexity and the probability of each intra mode being selected were used to choose a subset of prediction modes, further reducing the encoding complexity. Compared to HM15.0, the proposed algorithm saves 51.997% of encoding time on average, while the Bjontegaard Delta Peak Signal-to-Noise Rate (BDPSNR) only decreases by 0.059 dB and the Bjontegaard Delta Bit Rate (BDBR) increases by 1.018%. The experimental results show that the method can reduce encoding complexity with negligible rate-distortion performance loss, which is beneficial to real-time video applications of the HEVC standard.
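The variance-based texture test that drives the split decision can be sketched as follows; the threshold in the paper is depth-dependent and empirically derived, so the single `threshold` parameter here is a simplification:

```python
def cu_needs_split(block, threshold):
    """Decide whether a coding unit should be split further: a block
    whose pixel variance exceeds the threshold is assumed to be
    texture-complex and is split to a deeper level."""
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    return var > threshold
```

Flat blocks thus terminate early at a shallow depth, which is where most of the encoding-time savings over exhaustive rate-distortion search come from.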
Double-level encryption reversible data hiding based on code division multiple access
WANG Jianping, ZHANG Minqing, LI Tianxue, MA Shuangpeng
Journal of Computer Applications    2018, 38 (4): 1023-1028.   DOI: 10.11772/j.issn.1001-9081.2017102493
Aiming at enhancing the embedding capacity and enriching the available encryption algorithms of reversible data hiding in the encrypted domain, a new scheme was proposed that adopts double-level encryption and embeds the secret information based on Code Division Multiple Access (CDMA). The image was first divided into blocks, which were scrambled by a multi-granularity encryption; then 2 bits in the middle of each pixel in the blocks were encrypted by a stream cipher. Based on the idea of CDMA, k mutually orthogonal 4-bit matrices were selected to carry k levels of secret information; the orthogonal matrices guarantee multi-level embedding and improve the embedding capacity. Pseudo bits were embedded into blocks that cannot meet the embedding condition. The secret data can be extracted using the extraction key; the original image can be approximately recovered using the image decryption key; with both keys, the original image can be recovered losslessly. Experimental results show that when the Peak Signal-to-Noise Ratio (PSNR) of the gray Lena image of 512×512 pixels is higher than 36 dB, the maximum embedding capacity of the proposed scheme is 133 313 bit. The proposed scheme improves the security of encrypted images and greatly enhances the embedding capacity of reversible information in the ciphertext domain while ensuring reversibility.
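The CDMA-style embedding can be illustrated with orthogonal spreading codes: each secret bit adds or subtracts one code over a pixel block, and orthogonality lets every bit be recovered independently by correlation. The numeric carrier and ±1 codes below are illustrative, not the paper's 4-bit matrices:

```python
def cdma_embed(carrier, bits, codes):
    """Embed secret bits into a carrier block by adding mutually
    orthogonal spreading codes (+code for bit 1, -code for bit 0)."""
    out = list(carrier)
    for bit, code in zip(bits, codes):
        sign = 1 if bit else -1
        out = [v + sign * c for v, c in zip(out, code)]
    return out

def cdma_extract(stego, carrier, codes):
    """Recover each bit by correlating the embedding residual with
    its code; orthogonality cancels the other codes' contributions."""
    diff = [s - c for s, c in zip(stego, carrier)]
    return [1 if sum(d * c for d, c in zip(diff, code)) > 0 else 0
            for code in codes]
```

Because the codes are orthogonal, adding more levels (larger k) raises capacity without the levels interfering with one another at extraction time.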
Facial attractiveness evaluation method based on fusion of feature-level and decision-level
LI Jinman, WANG Jianming, JIN Guanghao
Journal of Computer Applications    2018, 38 (12): 3607-3611.   DOI: 10.11772/j.issn.1001-9081.2018051040
In the study of personalized facial attractiveness, the prediction of personal preferences cannot reach high accuracy due to the lack of features and insufficient consideration of the influence of public aesthetics. To improve the prediction accuracy, a new personalized facial attractiveness prediction framework based on feature-level and decision-level information fusion was proposed. Firstly, the objective characteristics of different facial beauty features were fused, representative facial attractiveness features were selected by a feature selection algorithm, and the local and global features of the face were fused by different information fusion strategies. Then, traditional facial features were fused with features extracted automatically by deep networks, and a variety of fusion strategies were proposed for comparison. The score information representing public aesthetic preferences and the personalized score information representing individual preferences were fused at the decision level. Finally, the personalized facial attractiveness prediction score was obtained. The experimental results show that, compared with existing algorithms for personalized facial attractiveness evaluation, the proposed multi-level fusion method significantly improves prediction accuracy and can achieve a Pearson correlation coefficient of more than 0.9. The proposed method can be used in fields such as personalized recommendation and face beautification.
Adaptive backstepping sliding mode control for robotic manipulator with the improved nonlinear disturbance observer
ZOU Sifan, WU Guoqing, MAO Jingfeng, ZHU Weinan, WANG Yurong, WANG Jian
Journal of Computer Applications    2018, 38 (10): 2827-2832.   DOI: 10.11772/j.issn.1001-9081.2018030525
To solve the problems of control input chattering in traditional sliding mode control, the need for an acceleration term, and the limited application models of traditional disturbance observers in manipulator joint position tracking, an adaptive backstepping sliding mode control algorithm for manipulators with an improved nonlinear disturbance observer was proposed. Firstly, an improved nonlinear disturbance observer was designed for online disturbance estimation; in the sliding mode control law, the disturbance estimate was added to compensate for the observable disturbance, and appropriate design parameters were selected to make the observation error converge exponentially. Then, an adaptive control law was used to estimate the unobservable disturbance and further improve the tracking performance of the control system. Finally, a Lyapunov function was used to verify the asymptotic stability of the closed-loop system, and the method was applied to the joint position tracking of a manipulator. The experimental results show that, compared with the traditional sliding mode algorithm, the improved control algorithm not only accelerates the response speed of the system, but also effectively suppresses chattering, avoids measuring the acceleration term and expands the scope of applicable models.
Evidence combination rule with similarity collision reduced
WANG Jian, ZHANG Zhiyong, QIAO Kuoyuan
Journal of Computer Applications    2018, 38 (10): 2794-2800.   DOI: 10.11772/j.issn.1001-9081.2018030532
Aiming at the problem of decision errors caused by similarity collision in evidence theory, a new combination rule for evidence theory was proposed. Firstly, the features of the focal-element sequence in each piece of evidence were extracted and converted into a sort matrix to reduce similarity collision. Secondly, the weight of each piece of evidence was determined based on the sort matrix and information entropy. Finally, the Modified Average Evidence (MAE) was generated from the evidence set and evidence weights, and the combination result was obtained by combining MAE n-1 times with the Dempster combination rule. The experimental results on the Iris dataset show that the F-Scores of the average-based, similarity-based, evidence-distance-based and evidence-credit-based combination rules and the proposed method are 0.84, 0.88, 0.88, 0.88 and 0.91 respectively. The proposed method achieves higher decision accuracy and more reliable combination results, and can provide an efficient solution for decision making based on evidence theory.
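The final step relies on the classical Dempster combination rule, which can be sketched directly; the masses below are toy values for illustration, not data from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions, each given as a
    {frozenset: mass} dictionary over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:                          # compatible focal elements
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:                              # conflicting mass K
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: evidences are not combinable")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.7, frozenset({"a", "b"}): 0.3}
m12 = dempster_combine(m1, m2)
```

In the proposed method this rule is applied n-1 times to the weighted average evidence MAE rather than directly to the raw (possibly colliding) evidences.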
Classical cipher model based on rough set
TANG Jianguo, WANG Jianghua
Journal of Computer Applications    2017, 37 (4): 993-998.   DOI: 10.11772/j.issn.1001-9081.2017.04.0993
Although classical ciphers are simple and efficient, they have a serious defect: they are easily cracked with modern computing power. A new classical cipher model based on rough sets was developed to solve this problem. Firstly, two features of rough sets were integrated into the model to weaken its statistical regularities: one is that certainty contains uncertainty in rough sets, and the other is that the scale of the approximation space tends to increase sharply with even a slight increase in the size of the universe. Secondly, the model's ability to produce random sequences was improved by using the mixed congruential method. Finally, part of the plaintext information was involved in the encryption process through self-defined arithmetic and congruence operations to enhance the model's resistance to attack. The analysis shows that the model not only has the same level of time and space complexity as traditional classical ciphers, but also has nearly ideal diffusion and confusion performance, which overcomes the defect that classical ciphers are easily cracked and can effectively resist attacks such as exhaustive search and statistical analysis.
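Two of the ingredients named above, a mixed (linear) congruential keystream and plaintext involvement via feedback, can be sketched in a toy stream cipher. This is only a didactic illustration of those two mechanisms, not the rough-set model itself, and the constants are the common textbook LCG parameters.

```python
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    """Mixed (linear) congruential generator used as the keystream source."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x & 0xFF)               # take the low byte as keystream
    return out

def encrypt(plaintext: bytes, seed: int) -> bytes:
    """Toy sketch: keystream XOR plus ciphertext feedback, so every byte also
    depends on the previous ciphertext byte (plaintext involvement)."""
    key = lcg(seed, len(plaintext))
    prev, out = 0, bytearray()
    for p, k in zip(plaintext, key):
        cbyte = p ^ k ^ prev
        out.append(cbyte)
        prev = cbyte
    return bytes(out)

def decrypt(ciphertext: bytes, seed: int) -> bytes:
    key = lcg(seed, len(ciphertext))
    prev, out = 0, bytearray()
    for cbyte, k in zip(ciphertext, key):
        out.append(cbyte ^ k ^ prev)
        prev = cbyte
    return bytes(out)

msg = b"ATTACKATDAWN"
ct = encrypt(msg, seed=2024)
pt = decrypt(ct, seed=2024)
```

The feedback term is what spreads a one-byte plaintext change over all following ciphertext bytes, the diffusion property the model aims for.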
Endpoint prediction method for steelmaking based on multi-task learning
CHENG Jin, WANG Jian
Journal of Computer Applications    2017, 37 (3): 889-895.   DOI: 10.11772/j.issn.1001-9081.2017.03.889
The quality of molten steel is usually judged by the hit rate of the endpoint. However, there are many influencing factors in the steelmaking process, and it is difficult to accurately predict the endpoint temperature and carbon content. In view of this, a data-driven Multi-Task Learning (MTL) method for steelmaking endpoint prediction was proposed. Firstly, the input and output factors of the steelmaking process were analyzed and extracted, and a number of sub-learning tasks were selected according to the two-stage blowing characteristics of steelmaking. Secondly, according to the correlation between the sub-tasks and the endpoint parameters, appropriate subtasks were selected to improve the accuracy of endpoint prediction, the multi-task learning model was constructed, and the model output was further optimized. Finally, the parameters of the multi-task learning model were obtained by training on the processed production data with a proximal gradient algorithm. In a case study of a steel plant, compared with a neural network, the prediction accuracy of the proposed method increased by 10% when the endpoint temperature error was less than 12℃ and the carbon content error was less than 0.01%, and increased by 11% and 7% respectively when the error ranges were less than 6℃ and 0.005%. The experimental results show that multi-task learning can improve the accuracy of endpoint prediction in practice.
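The proximal gradient training step can be sketched for a generic multi-task least-squares model with an L1 penalty; this is an assumed problem form (the paper's actual model and regularizer may differ), and the data below are synthetic.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm (the 'prox' step of proximal gradient)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(X, Y, lam=0.05, lr=0.1, iters=500):
    """ISTA sketch for a multi-task model  Y ≈ X W  with an L1 penalty on the
    shared weight matrix W (one column per task, e.g. temperature and carbon)."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) / X.shape[0]   # gradient of the smooth loss
        W = soft_threshold(W - lr * grad, lr * lam)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # synthetic process features
W_true = np.array([[1.0, 0.5], [0.0, 0.0], [2.0, 1.0], [0.0, 0.0], [0.5, 0.0]])
Y = X @ W_true + 0.01 * rng.normal(size=(200, 2))
W_hat = proximal_gradient(X, Y)
```

The alternation of a gradient step on the smooth loss with a proximal (shrinkage) step on the penalty is the defining pattern of proximal gradient methods.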
Modeling of high-density crowd emergency evacuation based on floor-field particle swarm optimization algorithm
WANG Chao, WANG Jian
Journal of Computer Applications    2017, 37 (12): 3597-3601.   DOI: 10.11772/j.issn.1001-9081.2017.12.3597
Aiming at the problems of congestion management and emergency evacuation of high-density crowds under unconventional emergencies, a four-layer crowd Evacuation Cyber-Physical System (E-CPS) framework was proposed, which contains a sensing layer, a transport layer, a calculation layer and an application layer. In the calculation layer, a Floor-Field Particle Swarm Optimization (FF-PSO) crowd evacuation model was proposed by introducing static floor-field modeling rules into classical PSO. The FF-PSO model combines the simple rules and fast computation of the static floor field with the fast search and convergence of PSO. In addition, a new fitness function was designed and introduced into the model to realize dynamic adjustment of the evacuation strategy. Numerical and instance simulations were carried out to verify the feasibility and effectiveness of the proposed model in congestion management. The instance simulation of the National Exhibition and Convention Center (Shanghai) shows that, on average, 66 more pedestrians per minute can be evacuated from the accident area by the proposed model with congestion management than by a model that only considers the shortest distance; the evacuation time is reduced by 19 min and the evacuation efficiency is improved by 13.4%.
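The static floor field that biases pedestrian motion is typically the grid distance to the nearest exit, which a breadth-first search computes in linear time; the sketch below illustrates that ingredient only (the PSO dynamics and fitness function are omitted), with a hypothetical 3×3 grid.

```python
from collections import deque

def static_floor_field(grid, exits):
    """BFS distance-to-nearest-exit on a 0/1 grid (1 = obstacle).  The result
    is the static floor field that steers particles toward the exits."""
    rows, cols = len(grid), len(grid[0])
    field = [[float("inf")] * cols for _ in range(rows)]
    q = deque()
    for r, c in exits:
        field[r][c] = 0
        q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                    and field[nr][nc] > field[r][c] + 1):
                field[nr][nc] = field[r][c] + 1
                q.append((nr, nc))
    return field

grid = [[0, 0, 0],
        [0, 1, 0],     # a single obstacle cell in the middle
        [0, 0, 0]]
field = static_floor_field(grid, exits=[(0, 0)])
```

In FF-PSO, each particle's velocity update would then mix this field gradient with the usual PSO personal-best and global-best terms.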
Active semi-supervised community detection method based on link model
CHAI Bianfang, WANG Jianling, XU Jiwei, LI Wenbin
Journal of Computer Applications    2017, 37 (11): 3090-3094.   DOI: 10.11772/j.issn.1001-9081.2017.11.3090
Link models are able to model the community detection problem on networks. Compared with similar models, including symmetric models and conditional models, the PPL (Popularity and Productivity Link) model handles more types of networks and detects communities more accurately. However, PPL is an unsupervised model and performs poorly when the network structure is unclear, and it cannot utilize prior information that is easily obtained. In order to improve its performance with as little prior information as possible, an Active Node Prior Learning (ANPL) algorithm was provided. ANPL selects the pairwise constraints with the highest utility that are easy to label, and automatically generates more informative labeled nodes from the labeled pairwise constraints. Based on the PPL model, a Semi-supervised PPL (SPPL) model was proposed for community detection, which combines the network topology with the node labels learned by the ANPL algorithm. Experiments on synthetic and real networks demonstrate that, using node priors from the ANPL algorithm and the network topology, the SPPL model outperforms the unsupervised PPL model and popular semi-supervised community detection models based on Non-negative Matrix Factorization (NMF).
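For context on the NMF baselines mentioned at the end, the core of NMF-based community detection can be sketched as symmetric NMF of the adjacency matrix with damped multiplicative updates; this illustrates the baseline family, not the PPL/SPPL models themselves, and the tiny two-community graph is synthetic.

```python
import numpy as np

def symmetric_nmf(A, k, iters=200, seed=0):
    """Symmetric NMF sketch:  A ≈ H H^T  with nonnegative H (n × k), using the
    damped multiplicative update; the row-wise argmax of H assigns communities."""
    rng = np.random.default_rng(seed)
    H = rng.uniform(0.1, 1.0, size=(A.shape[0], k))
    for _ in range(iters):
        num = A @ H
        den = H @ (H.T @ H) + 1e-9
        H *= 0.5 * (1.0 + num / den)       # damped update keeps H nonnegative
    return H

# synthetic graph with two obvious communities: {0,1,2} and {3,4,5}
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
H = symmetric_nmf(A, k=2)
labels = H.argmax(axis=1)
```

The semi-supervised variants in the comparison add labeled-node constraints to exactly this kind of factorization objective.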
Monitoring and analysis of operation status under architecture of stream computing and memory computing
ZHAO Yongbin, CHEN Shuo, LIU Ming, WANG Jianan, BEN Chi
Journal of Computer Applications    2017, 37 (10): 3029-3033.   DOI: 10.11772/j.issn.1001-9081.2017.10.3029
In the real-time analysis of power grid operation state, in order to meet the requirements of real-time analysis and processing of large-scale data such as real-time electricity consumption data, and to provide fast and accurate data analysis support for power grid operation decisions, a system architecture for large-scale data analysis and processing based on stream computing and memory computing was proposed. The Discrete Fourier Transform (DFT) was used to construct an abnormal electricity behavior index from users' real-time consumption data over time windows. The K-Means clustering algorithm was used to classify users' electricity consumption behavior based on behavior features constructed by sampling and statistical analysis. The accuracy of the proposed abnormal-behavior index and the behavior classification was verified on data extracted from an actual business system. Meanwhile, compared with the traditional data processing strategy, the architecture combining stream computing and memory computing shows good performance in large-scale data analysis and processing.
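The DFT feature construction over a consumption window can be sketched as follows; the sampling interval, window length, and synthetic daily-cycle profile are assumptions for illustration, not values from the paper.

```python
import numpy as np

def spectral_features(consumption, top=3):
    """DFT over one window of consumption readings: the indices of the
    dominant non-DC harmonics serve as behaviour features, and deviation of
    the spectrum from a reference profile could serve as an abnormality index."""
    spec = np.abs(np.fft.rfft(consumption - np.mean(consumption)))
    order = np.argsort(spec)[::-1]          # harmonics, strongest first
    return order[:top], spec

t = np.arange(96)                           # e.g. 96 readings = one day at 15 min
normal = 5 + 2 * np.sin(2 * np.pi * t / 96) # synthetic single daily cycle
dominant, _ = spectral_features(normal, top=1)
```

A user whose dominant harmonics drift away from such a reference spectrum would score high on the abnormality index; the resulting feature vectors are what K-Means then clusters.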
Analysis algorithm of electroencephalogram signals for epilepsy diagnosis based on power spectral density and limited penetrable visibility graph
WANG Ruofan, LIU Jing, WANG Jiang, YU Haitao, CAO Yibin
Journal of Computer Applications    2017, 37 (1): 175-182.   DOI: 10.11772/j.issn.1001-9081.2017.01.0175
Focusing on the poor noise robustness of the Visibility Graph (VG) algorithm, an improved Limited Penetrable Visibility Graph (LPVG) algorithm was proposed. The LPVG algorithm maps time series into networks by connecting points that satisfy the visibility criterion within a limited penetrable distance. Firstly, the performance of the LPVG algorithm was analyzed. Secondly, the LPVG algorithm was combined with Power Spectral Density (PSD) for automatic identification of epileptic ElectroEncephaloGram (EEG) signals before, during and after seizures. Finally, the characteristic parameters of the LPVG network in the three states were extracted to study the influence of epileptic seizures on network topology. The simulation results show that, compared with VG and Horizontal Visibility Graph (HVG), although LPVG has a higher time complexity, it is much more robust to noise in the signal: when mapping typical periodic, random, fractal and chaotic time series into networks, the fluctuation rates of the clustering coefficient of the LPVG network remained the lowest as the noise intensity increased, at 6.73%, 0.05%, 0.99% and 3.20% respectively. The PSD and LPVG analysis shows that epileptic seizures have a great influence on brain energy: PSD was obviously enhanced in the delta frequency band and significantly reduced in the theta frequency band, and the topology of the LPVG network changed during seizures, characterized by independent enhanced network modules, increased average path length and decreased graph index complexity. PSD and LPVG can thus serve as effective measures to characterize abnormalities in the energy distribution and topological structure of a single EEG channel, providing help for the pathological study and clinical diagnosis of epilepsy.
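The LPVG construction itself is compact enough to sketch: two points are linked when the straight line between them is blocked by at most a fixed number of intermediate points (zero blockers recovers the classical VG). The brute-force O(n³) version below is for illustration only, on a toy four-point series.

```python
def lpvg_edges(series, penetrable=0):
    """Limited Penetrable Visibility Graph: nodes a < b are linked when at
    most `penetrable` intermediate points rise above the visibility line
    from (a, y_a) to (b, y_b); penetrable=0 gives the classical VG."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            blocked = 0
            for c in range(a + 1, b):
                # height of the sight line from a to b at position c
                bound = series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                if series[c] >= bound:
                    blocked += 1
            if blocked <= penetrable:
                edges.add((a, b))
    return edges

y = [1.0, 3.0, 2.0, 4.0]
vg = lpvg_edges(y, penetrable=0)
lpvg = lpvg_edges(y, penetrable=1)
```

Allowing one penetration adds the long-range edges that a single noisy spike would otherwise sever, which is the source of LPVG's noise robustness.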
Single image super-resolution via independently adjustable sparse coefficients
NI Hao, RUAN Ruolin, LIU Fanghua, WANG Jianfeng
Journal of Computer Applications    2016, 36 (4): 1096-1099.   DOI: 10.11772/j.issn.1001-9081.2016.04.1096
Images recovered by example-based super-resolution have sharp edges, but obvious artifacts remain. An improved super-resolution algorithm with independently adjustable sparse coefficients was proposed to eliminate the artifacts. In the dictionary training phase, because both the high-resolution training images and the low-resolution ones are known, the sparse coefficients in the high-dimensional and low-dimensional spaces differ, so accurate high-resolution and low-resolution dictionaries were generated separately via an online dictionary learning algorithm. In the image reconstruction phase, because only the input low-resolution image is known while the target high-resolution image is unknown, the sparse coefficients in the two spaces are approximately the same. Different regularization parameters were set in the two phases to tune the corresponding sparse coefficients independently and obtain the best super-resolution results. According to the experimental results, the Peak Signal-to-Noise Ratio (PSNR) of the proposed method is on average 0.45 dB higher than that of sparse coding super-resolution, and the Structural SIMilarity (SSIM) is 0.011 higher. The proposed algorithm eliminates artifacts while effectively recovering edge sharpness and texture details.
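The role of the regularization parameter in sparse coding can be illustrated with a small ISTA solver: the same dictionary codes the same signal more or less sparsely depending on lam, which is the knob the two phases tune independently. The dictionary and signal below are synthetic, and ISTA is one standard solver, not necessarily the one used in the paper.

```python
import numpy as np

def sparse_code(D, x, lam, iters=1000):
    """ISTA sketch: approximately solve  min_a 0.5||x - D a||^2 + lam ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ a - x)              # gradient step
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # shrinkage step
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
x = 1.5 * D[:, 3] - 0.8 * D[:, 10]         # signal built from atoms 3 and 10
a_loose = sparse_code(D, x, lam=0.01)      # small lam: dense, faithful code
a_tight = sparse_code(D, x, lam=0.3)       # large lam: sparse code
```

A larger lam yields fewer active atoms at the cost of reconstruction accuracy, which is why training and reconstruction benefit from separately tuned values.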
Hadoop adaptive task scheduling algorithm based on computation capacity difference between node sets
ZHU Jie, LI Wenrui, WANG Jiangping, ZHAO Hong
Journal of Computer Applications    2016, 36 (4): 918-922.   DOI: 10.11772/j.issn.1001-9081.2016.04.0918
Aiming at the problems of fixed task progress proportions and passive selection of slow tasks in speculative task execution algorithms for heterogeneous clusters, an adaptive task scheduling algorithm based on the computation capacity difference between node sets was proposed. The computation capacity difference between node sets was quantified to schedule tasks across fast and slow node sets, and dynamic feedback on node and task speeds was used to update the slow node set in time, improving resource utilization and task parallelism. Within the two node sets, task progress proportions were adjusted dynamically to improve the accuracy of slow-task identification, and fast nodes were selected to execute backup tasks for slow tasks through substitute execution, improving task execution efficiency. The experimental results show that, compared with the Longest Approximate Time to End (LATE) algorithm, the proposed algorithm reduces the running time by 5.21%, 20.51% and 23.86% respectively on a short job set, a mixed-type job set, and a mixed-type job set with node performance degradation, and significantly reduces the number of initiated backup tasks. The proposed algorithm adapts tasks to node differences and effectively improves overall job execution efficiency while reducing slow backup tasks.
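The two decisions at the heart of such schedulers, partitioning nodes into fast/slow sets and flagging straggler tasks by progress rate, can be sketched as follows; the cutoff ratios and the rate values are hypothetical, not the paper's calibrated parameters.

```python
def split_node_sets(node_rates, ratio=0.75):
    """Split nodes into fast and slow sets by comparing each node's observed
    compute rate with the cluster mean (ratio is a hypothetical cutoff)."""
    mean_rate = sum(node_rates.values()) / len(node_rates)
    fast = {n for n, r in node_rates.items() if r >= ratio * mean_rate}
    return fast, set(node_rates) - fast

def is_slow_task(progress, elapsed, set_avg_rate, threshold=0.7):
    """Flag a task as a straggler when its progress rate falls well below the
    average rate of its node set (threshold is a hypothetical factor)."""
    return (progress / elapsed) < threshold * set_avg_rate

rates = {"n1": 10.0, "n2": 9.0, "n3": 3.0}   # tasks/min observed per node
fast, slow = split_node_sets(rates)
```

Recomputing `rates` from runtime feedback and re-running the split is what keeps the slow set current, and flagged stragglers get backup copies on nodes from the fast set.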
Blocked person relation recognition system based on multiple features
ZHANG Zhihua, WANG Jianxiang, TIAN Junfeng, WU Guoshun, LAN Man
Journal of Computer Applications    2016, 36 (3): 751-757.   DOI: 10.11772/j.issn.1001-9081.2016.03.751
With the rapid development of the Internet, a huge amount of textual information is accessible online, and reliable person-person relation extraction from Web pages has become an important research topic in the field of information extraction. To address this problem, a blocked person relation recognition system was implemented that adopts a rich set of features, i.e., bag-of-words, relevant frequency, Dependency Tree (DT) and Named Entity Recognition (NER) features. A series of experiments were conducted to select the optimal feature set and classification algorithm for each relation type to improve performance. The system was evaluated on two tasks of the 2015 China Conference on Machine Learning (CCML) competition: recognizing person relations from a single Chinese news title (Task 1) and from a set of news titles (Task 2). The system achieved MacroF1 scores of 75.68% and 76.58% respectively, ranking first on both tasks.
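The simplest of the listed features, bag-of-words over news titles, can be sketched as follows; the vocabulary and titles are invented examples, and a real pipeline would add the frequency, dependency-tree and NER features on top.

```python
from collections import Counter

def bow_features(titles, vocabulary):
    """Bag-of-words counts over a fixed vocabulary, one vector per title;
    a sketch of the lexical features fed to the per-relation classifiers."""
    vectors = []
    for title in titles:
        counts = Counter(title.split())
        vectors.append([counts.get(w, 0) for w in vocabulary])
    return vectors

vocab = ["married", "met", "sued"]
titles = ["Alice married Bob",
          "Alice met Bob",
          "Alice married Carol again married"]
X = bow_features(titles, vocab)
```

Per-relation feature selection then amounts to keeping, for each relation type, only the columns (and richer features) that help that type's classifier.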