Project Articles: China Conference on Data Mining 2020 (CCDM 2020)
Automatic summary generation of Chinese news text based on BERT-PGN model
TAN Jinyuan, DIAO Yufeng, QI Ruihua, LIN Hongfei
Journal of Computer Applications 2021, 41(1): 127-132. DOI: 10.11772/j.issn.1001-9081.2020060920
Abstract views: 1541 | PDF (857KB) downloads: 2876
To address the problems that abstractive summarization models in automatic text summarization do not fully understand sentence context and tend to generate duplicate content, an abstractive summarization model for Chinese news text, BERT-PGN (Bidirectional Encoder Representations from Transformers-Pointer Generator Network), was proposed based on BERT and the Pointer Generator Network (PGN). Firstly, the BERT pre-trained language model, combined with multi-dimensional semantic features, was used to obtain word vectors and thus a more fine-grained representation of the text context. Then, the PGN model extracted words from the vocabulary or the original text to form the summary. Finally, a coverage mechanism was incorporated to reduce duplicate content and obtain the final summary. Experimental results on the single-document Chinese news summarization dataset of the 2017 CCF International Conference on Natural Language Processing and Chinese Computing (NLPCC2017) show that, compared with models such as PGN and Long Short-Term Memory with attention mechanism (LSTM-attention), the BERT-PGN model combined with multi-dimensional semantic features understands the original text better, generates richer and more comprehensive summaries with noticeably less duplicate and redundant content, and improves the ROUGE-2 and ROUGE-4 scores by 1.5% and 1.2% respectively.
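The abstract describes the standard pointer-generator mixing step: a generation probability p_gen blends the vocabulary distribution with the copy (attention) distribution over source tokens, and a coverage vector penalizes repeated attention. A minimal NumPy sketch of that step is shown below; the array shapes and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pointer_generator_step(vocab_dist, attention, src_ids, p_gen, coverage):
    """One decoding step of a pointer-generator with coverage (See et al., 2017 style).

    vocab_dist : (V,) softmax over the fixed vocabulary
    attention  : (T,) attention weights over the T source tokens
    src_ids    : (T,) vocabulary ids of the source tokens
    p_gen      : scalar in [0, 1], probability of generating from the vocabulary
    coverage   : (T,) running sum of past attention weights
    """
    final_dist = p_gen * vocab_dist
    # Add the copy probability mass of each source token to its vocabulary id.
    np.add.at(final_dist, src_ids, (1.0 - p_gen) * attention)
    # Coverage loss penalises re-attending to already covered source positions.
    coverage_loss = np.minimum(attention, coverage).sum()
    new_coverage = coverage + attention
    return final_dist, new_coverage, coverage_loss

# Toy usage: vocabulary of 6 words, source sentence of 4 tokens.
V, T = 6, 4
vocab_dist = np.full(V, 1.0 / V)
attention = np.array([0.5, 0.2, 0.2, 0.1])
src_ids = np.array([2, 3, 3, 5])
dist, cov, cov_loss = pointer_generator_step(vocab_dist, attention, src_ids,
                                             p_gen=0.7, coverage=np.zeros(T))
print(dist.sum())  # ~1.0: the mixture is still a valid distribution
```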
Urban transportation path planning based on reinforcement learning
LIU Sijia, TONG Xiangrong
Journal of Computer Applications 2021, 41(1): 185-190. DOI: 10.11772/j.issn.1001-9081.2020060949
Abstract views: 1044 | PDF (1042KB) downloads: 778
In urban transportation path planning, both the speed of planning and the safety of vehicles on the planned path need to be considered, yet most existing reinforcement learning algorithms cannot account for both. To address this problem, the following steps were carried out. First, a Dyna framework combining model-based and model-free learning was adopted to speed up planning. Then, the classical Sarsa algorithm was used as the route selection strategy to improve safety. Finally, the two were combined into an improved Sarsa-based algorithm named Dyna-Sa. Experimental results show that the reinforcement learning algorithm converges faster when more planning steps are performed in advance. Comparisons with the Q-learning, Sarsa and Dyna-Q algorithms on metrics such as convergence speed and number of collisions show that Dyna-Sa reduces collisions in maps with obstacles, ensures vehicle safety in the urban traffic environment, and accelerates convergence.
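As a rough illustration of the Dyna-style combination the abstract describes, the sketch below performs a tabular Sarsa update on each real transition, stores the transition in a learned model, and then replays a few simulated transitions for planning. The environment interface, epsilon-greedy policy and hyperparameters are assumptions for illustration, not the paper's exact implementation.

```python
import random
from collections import defaultdict

def dyna_sarsa_step(Q, model, s, a, r, s2, alpha=0.1, gamma=0.95,
                    epsilon=0.1, n_actions=4, planning_steps=10):
    """One learning step of a Dyna-style Sarsa agent (illustrative sketch)."""
    def eps_greedy(state):
        if random.random() < epsilon:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda act: Q[(state, act)])

    # Direct Sarsa update from the real experience (s, a, r, s2, a2).
    a2 = eps_greedy(s2)
    Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])

    # Model learning: remember the observed transition.
    model[(s, a)] = (r, s2)

    # Planning: replay previously seen transitions from the learned model.
    for _ in range(planning_steps):
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        pa2 = eps_greedy(ps2)
        Q[(ps, pa)] += alpha * (pr + gamma * Q[(ps2, pa2)] - Q[(ps, pa)])
    return a2

# Q-table and learned model kept as dictionaries keyed by (state, action).
Q = defaultdict(float)
model = {}
```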
Entity relation extraction method for guidelines of cardiovascular disease based on bidirectional encoder representation from transformers
WU Xiaoping, ZHANG Qiang, ZHAO Fang, JIAO Lin
Journal of Computer Applications 2021, 41(1): 145-149. DOI: 10.11772/j.issn.1001-9081.2020061008
Abstract views: 876 | PDF (823KB) downloads: 1156
Entity relation extraction is a critical basic step for question answering, knowledge graph construction and information extraction in the medical field. Since no open dataset is available for building a knowledge graph specialized for cardiovascular disease, a professional training set for entity relation extraction was constructed by collecting medical guidelines for cardiovascular disease and annotating the entity and relation categories professionally. Based on this dataset, a Bidirectional Encoder Representation from Transformers and Convolutional Neural Network (BERT-CNN) model was first proposed to realize relation extraction on Chinese corpus. Then, considering that the word rather than the character is the fundamental semantic unit in Chinese, an improved model based on whole word masking, BERT(wwm)-CNN, was proposed to further improve relation extraction performance. Experimental results show that the improved BERT(wwm)-CNN model achieves an accuracy of 0.85, a recall of 0.80 and an F1 value of 0.83 on the constructed relation extraction dataset, outperforming the comparison models BERT-LSTM (Bidirectional Encoder Representation from Transformers and Long Short Term Memory) and BERT-CNN, which verifies the superiority of the improved BERT(wwm)-CNN.
Time series imputation model based on long-short term memory network with residual connection
QIAN Bin, ZHENG Kaihong, CHEN Zipeng, XIAO Yong, LI Sen, YE Chunzhuang, MA Qianli
Journal of Computer Applications 2021, 41(1): 243-248. DOI: 10.11772/j.issn.1001-9081.2020060928
Abstract views: 744 | PDF (942KB) downloads: 673
Traditional time series imputation methods typically assume that time series data are generated by a linear dynamic system, whereas real-world time series exhibit more non-linear characteristics. Therefore, a time series imputation model based on a Long Short-Term Memory (LSTM) network with residual connection, called RSI-LSTM (ReSidual Imputation Long Short-Term Memory), was proposed to capture the non-linear dynamics of time series effectively and to mine the potential relation between missing data and recent non-missing data. Specifically, the LSTM network was used to model the underlying non-linear dynamics of the time series, and a residual connection was introduced to mine the connection between historical values and the missing value, improving the imputation capability of the model. RSI-LSTM was first applied to impute missing data in a univariate daily power supply dataset; then, on the power load dataset of problem A of the 9th Electrical Engineering Mathematical Modeling Competition, meteorological factors were introduced as multivariate input to further improve the imputation performance on missing values. Furthermore, two general multivariate time series datasets were used to verify the imputation ability of the model. Experimental results show that, compared with LSTM, RSI-LSTM achieves better imputation performance, with a Mean Square Error (MSE) generally about 10% lower on both the univariate and multivariate datasets.
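A minimal PyTorch sketch of the idea described above: an LSTM models the recent history, and a residual (skip) connection from the last observed value is added to the network's output when predicting the missing value. The module structure, layer sizes and the use of the last observation as the residual path are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ResidualImputer(nn.Module):
    """Imputes the next value of a series from a window of recent observations.

    The LSTM captures non-linear dynamics; the residual connection adds the
    last observed value to an LSTM-based correction (illustrative sketch).
    """
    def __init__(self, n_features=1, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_features)

    def forward(self, window):               # window: (batch, time, features)
        out, _ = self.lstm(window)
        correction = self.head(out[:, -1])   # use the last hidden state
        return window[:, -1] + correction    # residual connection to last value

# Toy usage: batch of 8 windows, 24 time steps, 1 feature.
model = ResidualImputer()
x = torch.randn(8, 24, 1)
imputed = model(x)                           # (8, 1) predicted missing values
```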
Hybrid greedy genetic algorithm for solving 0-1 knapsack problem
CHEN Zhen, ZHONG Yiwen, LIN Juan
Journal of Computer Applications 2021, 41(1): 87-94. DOI: 10.11772/j.issn.1001-9081.2020060981
Abstract views: 671 | PDF (974KB) downloads: 796
When solving 0-1 Knapsack Problems (KPs), the traditional Genetic Algorithm (GA) has insufficient local refinement ability, while simple local search algorithms have limited global exploration ability. To address these problems, the two were integrated into a Hybrid Greedy Genetic Algorithm (HGGA). Within the global search framework of GA, a local search module was added, and the traditional repair operator based only on item value density was improved by adding a greedy hybrid option based on item value, so as to accelerate the optimization process. In HGGA, the population is guided to search finely in the space of good solutions found during evolution, while the classical GA operators are relied on to expand the global search space, achieving a good balance between the refinement and exploration abilities of the algorithm. HGGA was tested on three sets of data. The results show that on the first set of 15 test cases, HGGA finds the optimal solution in 12 cases, a success rate of 80%; on the second, small-scale dataset, HGGA clearly outperforms other similar GAs and other meta-heuristic algorithms; and on the third, large-scale dataset, HGGA is more stable and efficient than other meta-heuristic algorithms.
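The greedy repair idea mentioned above can be illustrated with a short sketch: an infeasible chromosome is repaired by dropping items in increasing value-density (or value) order until the capacity constraint holds, then refilled greedily. This is a generic knapsack repair sketch under assumed data structures, not the authors' exact operator.

```python
def greedy_repair(chromosome, values, weights, capacity, by_value=False):
    """Repair a 0-1 knapsack solution, then greedily refill remaining capacity.

    chromosome : list of 0/1 selection flags
    by_value   : if True, rank items by value instead of value density
                 (the 'greedy hybrid option' the abstract refers to).
    """
    n = len(chromosome)
    key = (lambda i: values[i]) if by_value else (lambda i: values[i] / weights[i])
    order = sorted(range(n), key=key)              # least attractive items first

    # Drop phase: remove the least attractive selected items until feasible.
    total_w = sum(w for w, g in zip(weights, chromosome) if g)
    for i in order:
        if total_w <= capacity:
            break
        if chromosome[i]:
            chromosome[i] = 0
            total_w -= weights[i]

    # Fill phase: add the most attractive unselected items that still fit.
    for i in reversed(order):
        if not chromosome[i] and total_w + weights[i] <= capacity:
            chromosome[i] = 1
            total_w += weights[i]
    return chromosome

# Toy usage with three items and capacity 10 (expected result: items 2 and 3).
print(greedy_repair([1, 1, 1], values=[10, 7, 5], weights=[8, 6, 4], capacity=10))
```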
Ultra-short-term wind power prediction based on empirical mode decomposition and multi-branch neural network
MENG Xinyu, WANG Ruihan, ZHANG Xiping, WANG Mingjie, QIU Gang, WANG Zhengxia
Journal of Computer Applications 2021, 41(1): 237-242. DOI: 10.11772/j.issn.1001-9081.2020060930
Abstract views: 614 | PDF (1078KB) downloads: 725
Wind power prediction is an important basis for the monitoring and information management of wind farms. Ultra-short-term wind power prediction, which is often used for load balancing and scheduling optimization, requires high prediction accuracy. Because of the complex environment of wind farms and the many uncertainties of wind speed, wind power time series are often non-stationary and random. Recurrent Neural Networks (RNNs) are suited to time series tasks, but non-periodic and non-stationary signals increase the difficulty of network learning. To overcome the interference of non-stationary signals in the prediction task and improve the prediction accuracy of wind power, an ultra-short-term wind power prediction method combining Empirical Mode Decomposition (EMD) and a multi-branch neural network was proposed. Firstly, the original wind power time series was decomposed by EMD to reconstruct a data tensor. Then, a convolution layer and a Gated Recurrent Unit (GRU) layer were used to extract local features and trend features respectively. Finally, the prediction result was obtained through feature fusion and a fully connected layer. Experimental results on the dataset of a wind farm in Inner Mongolia show that, compared with the AutoRegressive Integrated Moving Average (ARIMA) model, the proposed method improves the prediction accuracy by nearly 30%, verifying its effectiveness.
Short-term traffic flow prediction based on empirical mode decomposition and long short-term memory neural network
ZHANG Xiaohan, FENG Aimin
Journal of Computer Applications 2021, 41(1): 225-230. DOI: 10.11772/j.issn.1001-9081.2020060919
Abstract views: 594 | PDF (1687KB) downloads: 745
Traffic flow prediction is an important part of intelligent transportation. The traffic data to be processed are non-linear, periodic and random, so unstable traffic flow data depend on a long-term data range during prediction. At the same time, because of external factors, the original data often contain noise, which can further degrade prediction performance. To address these problems, a prediction algorithm named EMD-LSTM, which can both denoise and handle long-term dependence, was proposed. Firstly, Empirical Mode Decomposition (EMD) was used to gradually decompose the components of different scales in the traffic time series into a series of intrinsic mode functions with the same feature scale, thereby removing part of the noise. Then, a Long Short-Term Memory (LSTM) neural network was used to solve the problem of long-term dependence in the data, so that the algorithm performs better in long-horizon prediction. Short-term prediction results on real datasets show that EMD-LSTM achieves a Mean Absolute Error (MAE) 1.916 32 lower and a Mean Absolute Percentage Error (MAPE) 4.645 45 percentage points lower than LSTM. The proposed hybrid model therefore significantly improves prediction accuracy and can handle traffic data effectively.
Prediction method on financial time series data based on matrix profile
GAO Shile, WANG Ying, LI Hailin, WAN Xiaoji
Journal of Computer Applications 2021, 41(1): 199-207. DOI: 10.11772/j.issn.1001-9081.2020060877
Abstract views: 591 | PDF (1433KB) downloads: 1052
Given that institutional trading in the financial market is highly misleading to retail investors, a trend prediction method based on the impact of institutional trading behaviors was proposed. First, using the time series Matrix Profile (MP) algorithm and taking the stock turnover rate as the entry point, a knowledge base of turnover rate fluctuations under the influence of institutional trading behaviors was constructed for motifs of different lengths. Second, the motif length that yields the highest prediction accuracy for the stock to be predicted was determined. Finally, the fluctuation trend of a single stock under the influence of institutional trading behaviors was predicted using the knowledge base of that motif length. To verify the feasibility and accuracy of the new method, it was compared with the Auto-Regressive Moving Average (ARMA) model and the Long Short Term Memory (LSTM) network, and the Root-Mean-Square Error (RMSE) and Mean Absolute Percentage Error (MAPE) indicators were used to compare the prediction results of the three methods on 70 stocks. Analysis of the experimental results shows that, in predicting the price trends of the 70 stocks, the proposed method produces more accurate predictions than the ARMA model and the LSTM network on more than 80% of the stocks.
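For readers unfamiliar with the matrix profile used above: for each subsequence of length m it stores the z-normalized distance to its nearest neighbour elsewhere in the series, and low values mark repeated patterns (motifs). Below is a small brute-force NumPy sketch of that definition; it is illustrative only, and production code would use an optimized algorithm (e.g. STOMP, as implemented in libraries such as stumpy).

```python
import numpy as np

def znorm(x):
    """Z-normalize a subsequence; constant subsequences map to zeros."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else np.zeros_like(x)

def matrix_profile(series, m):
    """Brute-force matrix profile: nearest-neighbour distance per subsequence."""
    n = len(series) - m + 1
    subs = np.array([znorm(series[i:i + m]) for i in range(n)])
    profile = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if abs(i - j) >= m:                       # exclude trivial matches
                d = np.linalg.norm(subs[i] - subs[j])
                profile[i] = min(profile[i], d)
    return profile

# Toy usage: a noisy sine wave stands in for a turnover-rate series, motif length 16.
t = np.linspace(0, 8 * np.pi, 256)
turnover = np.sin(t) + 0.1 * np.random.randn(256)
mp = matrix_profile(turnover, m=16)
print(int(mp.argmin()))   # index of the subsequence with the closest repeat
```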
Instance selection algorithm for big data based on random forest and voting mechanism
ZHOU Xiang, ZHAI Junhai, HUANG Yajie, SHEN Ruicai, HOU Yingzhen
Journal of Computer Applications 2021, 41(1): 74-80. DOI: 10.11772/j.issn.1001-9081.2020060982
Abstract views: 584 | PDF (906KB) downloads: 608
To deal with the problem of instance selection for big data, an instance selection algorithm based on Random Forest (RF) and a voting mechanism was proposed. Firstly, the big dataset was divided into two subsets: a large first subset and a small or medium second subset. Then, the large first subset was further divided into q smaller subsets, which were deployed to q cloud computing nodes, while the small or medium second subset was broadcast to the q nodes. Next, the local data subset at each node was used to train a random forest, which was then used to select instances from the second subset; the instances selected at different nodes were merged to obtain one subset of selected instances. The above process was repeated p times to obtain p subsets of selected instances. Finally, these p subsets were combined by voting to obtain the final set of selected instances. The proposed algorithm was implemented on the two big data platforms Hadoop and Spark, and the implementation mechanisms of the two platforms were compared. In addition, the proposed algorithm was compared with the Condensed Nearest Neighbor (CNN) algorithm and the Reduced Nearest Neighbor (RNN) algorithm on 6 large datasets. Experimental results show that, compared with these two algorithms, the proposed algorithm achieves higher test accuracy and lower time consumption as the dataset grows, proving that it has good generalization ability and high operational efficiency in big data processing and can effectively solve the instance selection problem for big data.
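A single-machine sketch of the selection scheme described above (ignoring the Hadoop/Spark distribution): in each repetition, a random forest trained on each partition of the large subset scores the second subset, instances are kept by a simple confidence rule, and the repetitions are combined by majority voting. The keep rule, confidence threshold and partitioning are assumptions for illustration, not the authors' exact criteria.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_instances(X_large, y_large, X_small, y_small,
                     q=4, p=3, keep_threshold=0.6, seed=0):
    """Vote-based instance selection sketch: keep confidently, correctly
    classified instances of the small subset across p repetitions."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X_small), dtype=int)
    for rep in range(p):
        parts = np.array_split(rng.permutation(len(X_large)), q)
        selected = np.zeros(len(X_small), dtype=bool)
        for idx in parts:                        # one partition per "node"
            rf = RandomForestClassifier(n_estimators=50, random_state=rep)
            rf.fit(X_large[idx], y_large[idx])
            proba = rf.predict_proba(X_small).max(axis=1)
            pred = rf.predict(X_small)
            selected |= (pred == y_small) & (proba >= keep_threshold)
        votes += selected                        # merge the nodes' selections
    return np.flatnonzero(votes > p / 2)         # final majority vote

# Toy usage with random data (two classes, 20 features).
X_large, y_large = np.random.randn(2000, 20), np.random.randint(0, 2, 2000)
X_small, y_small = np.random.randn(200, 20), np.random.randint(0, 2, 200)
print(len(select_instances(X_large, y_large, X_small, y_small)))
```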
Reward highway network based global credit assignment algorithm in multi-agent reinforcement learning
YAO Xinghu, TAN Xiaoyang
Journal of Computer Applications 2021, 41(1): 1-7. DOI: 10.11772/j.issn.1001-9081.2020061009
Abstract views: 580 | PDF (1410KB) downloads: 1523
To address the exponential explosion of the joint action space as the number of agents increases in multi-agent systems, the "centralized training, decentralized execution" framework was adopted to deal with the curse of dimensionality of the joint action space and reduce the optimization cost of the algorithm. A new global credit assignment mechanism, Reward HighWay Network (RHWNet), was proposed for the common multi-agent reinforcement learning setting in which the environment provides only a global reward for the joint behavior of all agents. By introducing a reward highway connection into the global reward assignment mechanism of the original algorithm, the value function of each agent was directly connected with the global reward, so that each agent could consider both the global reward signal and its own actual reward value when selecting its strategy. Firstly, during training, the agents were coordinated through a centralized value function structure, which also played a role in global reward assignment. Then, the reward highway connection was introduced into this centralized value function structure to assist global reward assignment, establishing the reward highway network. Finally, in the execution phase, each agent's policy depended only on its own value function. Experimental results on StarCraft Multi-Agent Challenge (SMAC) micromanagement scenarios show that the proposed reward highway network improves the test win rate by more than 20% on four complex maps compared with the advanced COMA (Counterfactual Multi-Agent policy gradient) and QMIX algorithms. More importantly, on the 3s5z and 3s6z scenarios, which involve a large number of agents of different types, the proposed network achieves better results with only 30% of the samples required by algorithms such as COMA and QMIX.
Internet of vehicles system based on improved proof of vote consensus protocol
CHEN Jinyu, LIU Zhaowei
Journal of Computer Applications 2021, 41(1): 170-176. DOI: 10.11772/j.issn.1001-9081.2020060987
Abstract views: 542 | PDF (1142KB) downloads: 450
Aiming at the problems of information transmission efficiency and of user safety and privacy in the Internet of Vehicles (IoV), an IoV system based on an improved Proof of Vote (PoV) consensus protocol was proposed. First, according to the actual needs of IoV, blockchain technology was used to guarantee basic information transmission efficiency and user safety. Second, the structure and algorithm of the traditional PoV consensus protocol were optimized to further improve the transmission efficiency of the whole IoV system. Finally, a supervision and punishment mechanism was designed to ensure the reliability of the system and protect the safety and privacy of IoV users. The protocol does not rely on third-party intermediaries, so it can protect the privacy of vehicles and their owners while maintaining consensus efficiency, and it is closer to the actual needs of IoV. Theoretical analysis and simulation experiments show that, compared with the traditional PoV consensus protocol, the improved protocol reduces both the transaction confirmation time and the block interval time from 0.25 minutes to 0.2 minutes; in a reliability comparison with the improved consensus protocol without the supervision and punishment mechanism, the version with the mechanism improves accuracy by 29.4%. Experimental results prove that the improved consensus protocol achieves higher consensus efficiency and safety in IoV.
Subgraph isomorphism matching algorithm based on neighbor information aggregation
XU Zhoubo, LI Zhen, LIU Huadong, LI Ping
Journal of Computer Applications 2021, 41(1): 43-47. DOI: 10.11772/j.issn.1001-9081.2020060935
Abstract views: 536 | PDF (755KB) downloads: 492
Graph matching is widely used in practice, and subgraph isomorphism matching is a research hotspot of important scientific significance and practical value. Most existing subgraph isomorphism algorithms build constraints only from neighbor relationships and ignore the local neighborhood information of nodes. To solve this problem, a subgraph isomorphism matching algorithm based on neighbor information aggregation was proposed. Firstly, the aggregated local neighborhood information of the nodes was obtained by feeding the graph attributes and structure into an improved graph convolutional neural network to learn feature vector representations. Then, the efficiency of the algorithm was improved by optimizing the matching order according to characteristics of the graph such as labels and degrees. Finally, a Constraint Satisfaction Problem (CSP) model of subgraph isomorphism was established by combining the learned feature vectors and the optimized matching order with the search algorithm, and the model was solved using a CSP backtracking algorithm. Experimental results show that the proposed algorithm significantly improves the solving efficiency of subgraph isomorphism compared with the traditional tree search algorithm and constraint solving algorithms.
Joint extraction of entities and relations based on relation-adaptive decoding
DING Xiangguo, SANG Jitao
Journal of Computer Applications 2021, 41(1): 29-35. DOI: 10.11772/j.issn.1001-9081.2020060934
Abstract views: 535 | PDF (1053KB) downloads: 741
Encoder-decoder models for the joint extraction of entities and relations solve the error propagation problem of pipeline models. However, previous encoder-decoder models have two problems: first, entities and relations are generated at the same time in the decoding stage, so mapping these two different types of objects into the same semantic space reduces extraction performance; second, the interaction information between different relations is never considered. To address these two problems, a relation-adaptive decoding model for the joint extraction of entities and relations was proposed, in which the joint extraction task is converted into the task of generating the entity pairs corresponding to each relation. Firstly, on the basis of the encoder-decoder architecture, the different relations were divided and conquered, and for each relation the corresponding entity pairs were output adaptively, so that the decoding stage could focus on entity generation. Then, the parameters of one model were shared across relations, so that the correlation information between different relations could be exploited. In the experiments, the proposed model improves the F1 score by 2.5 percentage points and 2.2 percentage points over the state-of-the-art model on the two versions of the New York Times (NYT) public dataset. Experimental results show that the proposed model can effectively improve the joint extraction of entities and relations through relation-adaptive decoding.
Prediction of indoor thermal comfort level of high-speed railway station based on deep forest
CHEN Yanru, ZHANG Tujingwa, DU Qian, RAN Maoliang, WANG Hongjun
Journal of Computer Applications 2021, 41(1): 258-264. DOI: 10.11772/j.issn.1001-9081.2020060888
Abstract views: 527 | PDF (1166KB) downloads: 846
Since semi-closed and semi-open spaces such as high-speed railway stations have indoor thermal comfort levels that are difficult to predict, a Deep Forest (DF)-based deep learning method was proposed to predict the thermal comfort level scientifically. Firstly, the heat exchange environment of a high-speed railway station was modeled based on a field survey and the Energy Plus platform. Secondly, 8 influence factors, such as passenger density, the operating number of multi-evaporator air conditioners and the setting temperatures of multi-evaporator air conditioners, were presented, and 424 operating conditions were designed to obtain massive data. Finally, DF was used to learn the relationship between thermal comfort and the influence factors in order to predict the indoor thermal comfort level of the high-speed railway station. Deep Neural Network (DNN) and Support Vector Machine (SVM) were used as comparison algorithms for verification. Experimental results show that, among the three models, DF performs best in terms of prediction accuracy and weighted-F1, with a best prediction accuracy of 99.76% and a worst of 98.11%. Therefore, DF can effectively predict the indoor thermal comfort level of high-speed railway stations.
Graph trend filtering guided noise tolerant multi-label learning model
LIN Tengtao, ZHA Siming, CHEN Lei, LONG Xianzhong
Journal of Computer Applications 2021, 41(1): 8-14. DOI: 10.11772/j.issn.1001-9081.2020060971
Abstract views: 511 | PDF (972KB) downloads: 717
Focusing on the problem that feature noise and label noise often appear simultaneously in multi-label learning, a Graph trend filtering guided Noise Tolerant Multi-label Learning (GNTML) model was proposed. In the proposed model, feature noise and label noise are tolerated at the same time through a group sparsity constraint bridged by label enrichment, and the key is the learning of the label enhancement matrix. To learn a reasonable label enhancement matrix in a mixed-noise environment, the following steps were carried out. Firstly, the Graph Trend Filtering (GTF) mechanism was introduced to tolerate the inconsistency between noisy example features and labels, reducing the influence of feature noise on the learning of the enhancement matrix. Then, a group sparsity constrained label fidelity penalty was introduced to reduce the impact of label noise on the learning of the label enhancement matrix. At the same time, a sparsity constraint on the label correlation matrix was introduced to characterize the local correlation between labels, so that example labels could propagate better between similar examples. Finally, experiments were conducted on seven real multi-label datasets with five different evaluation criteria. Experimental results show that the proposed model achieves the optimal or suboptimal value in 66.67% of the cases, outperforms the other five multi-label learning algorithms, and can effectively improve the robustness of multi-label learning.
Defect detection of refrigerator metal surface in complex environment
YUAN Ye, TAN Xiaoyang
Journal of Computer Applications 2021, 41(1): 270-274. DOI: 10.11772/j.issn.1001-9081.2020060964
Abstract views: 508 | PDF (905KB) downloads: 616
In order to improve the efficiency of detecting defects on the metal surfaces of refrigerators and to cope with complex production situations, the Metal-YOLOv3 model was proposed. The defect data were expanded hundreds of times by random parameter transformation; the loss function of the original YOLOv3 (You Only Look Once version 3) model was replaced with the Complete Intersection-over-Union (CIoU) loss; the threshold of the non-maximum suppression algorithm was reduced by using the distribution characteristics of defects; and anchor values better suited to the data characteristics were calculated with the K-means clustering algorithm, so as to improve the detection accuracy. A series of experiments shows that the Metal-YOLOv3 model is far better than the mainstream Region-based Convolutional Neural Network (R-CNN) model in terms of detection speed, with the Frames Per Second (FPS) reaching 7.59, 14 times faster than Faster R-CNN, and achieves an Average Precision (AP) of 88.96%, 11.33 percentage points higher than Faster R-CNN, showing good robustness and generalization performance. It can be seen that this method is effective and can be applied in practice to the production of metal products.
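The anchor computation mentioned above is a standard YOLO-style step: cluster the width/height pairs of the labeled defect boxes and use the cluster centers as anchors. A small sketch using scikit-learn's KMeans is given below; clustering (w, h) pairs with Euclidean distance is a simplification (YOLO papers often use an IoU-based distance), and the box data here are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

def compute_anchors(box_wh, n_anchors=9, seed=0):
    """Cluster ground-truth box (width, height) pairs into anchor sizes.

    box_wh : (N, 2) array of box widths and heights (e.g. in pixels).
    Returns anchors sorted by area, as YOLO configs usually list them.
    """
    km = KMeans(n_clusters=n_anchors, n_init=10, random_state=seed).fit(box_wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors.prod(axis=1))]

# Toy usage: 500 synthetic defect boxes with varied aspect ratios.
rng = np.random.default_rng(0)
wh = np.abs(rng.normal(loc=(60, 30), scale=(25, 10), size=(500, 2)))
print(compute_anchors(wh, n_anchors=6).round(1))
```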
Multi-objective estimation of distribution algorithm with adaptive opposition-based learning
LI Erchao, YANG Rongrong
Journal of Computer Applications 2021, 41(1): 15-21. DOI: 10.11772/j.issn.1001-9081.2020060908
Abstract views: 508 | PDF (4435KB) downloads: 407
Aiming at the poor global convergence of the regularity model-based multi-objective estimation of distribution algorithm, a multi-objective estimation of distribution algorithm based on adaptive Opposition-Based Learning (OBL) was proposed. In the algorithm, whether to carry out OBL is decided according to the change rate of the function: when the change rate is small, the algorithm tends to fall into local optima, so OBL is performed to increase the diversity of individuals in the current population; when the change rate is large, the regularity model-based multi-objective estimation of distribution algorithm is run. With the timely introduction of the OBL strategy, the negative influence of population diversity and individual distribution on the overall convergence quality and speed of the optimization algorithm is reduced. To verify the performance of the improved algorithm, the Regularity Model-based Multi-objective Estimation of Distribution Algorithm (RM-MEDA), the Hybrid Wading across Stream Algorithm-Estimation Distribution Algorithm (HWSA-EDA) and the Inverse Modeling based multiObjective Evolutionary Algorithm (IM-MOEA) were selected as comparison algorithms and tested together with the proposed algorithm on the ZDT and DTLZ test functions. The test results show that the proposed algorithm not only has good global convergence, but also improves the distribution and uniformity of the solutions, except on the DTLZ2 function.
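Opposition-Based Learning itself is a simple operator: for a solution x in the box [a, b], its opposite is a + b - x, and the better of the two is kept. The sketch below shows that operator applied to a population; the fitness function is a placeholder, and the paper's adaptive trigger based on the change rate of the objective is not reproduced here.

```python
import numpy as np

def opposition_step(population, lower, upper, fitness):
    """Generate opposite solutions and keep the better one element-wise.

    population : (N, D) array of candidate solutions in the box [lower, upper].
    fitness    : callable mapping an (N, D) array to (N,) values (minimized).
    """
    opposite = lower + upper - population          # component-wise opposite
    keep_opposite = fitness(opposite) < fitness(population)
    return np.where(keep_opposite[:, None], opposite, population)

# Toy usage: minimize the sphere function on [-5, 5]^3.
lower, upper = -5.0, 5.0
pop = np.random.uniform(lower, upper, size=(10, 3))
sphere = lambda X: np.sum(X ** 2, axis=1)
pop = opposition_step(pop, lower, upper, sphere)
print(sphere(pop).min())
```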
Work location inference method with big data of urban traffic surveillance
CHEN Kai, YU Yanwei, ZHAO Jindong, SONG Peng
Journal of Computer Applications 2021, 41(1): 177-184. DOI: 10.11772/j.issn.1001-9081.2020060937
Abstract views: 507 | PDF (1377KB) downloads: 585
Inferring users' work locations from spatiotemporal data is important for real-world applications ranging from product recommendation, precision marketing and transportation scheduling to city planning, but location inference based on urban surveillance data has not been explored. Therefore, a work location inference method for vehicle owners was proposed based on traffic surveillance data from sparsely deployed cameras. First, urban traffic peripheral data such as road networks and Points Of Interest (POIs) were collected, and a road network matching preprocessing method was used to obtain a real road network enriched with semantic information such as POIs and cameras. Second, the important parking areas, i.e., the candidate work areas of the vehicles, were obtained by clustering the Origin-Destination (O-D) pairs extracted from vehicle trajectories. Third, the most likely work area was selected from the candidates under the constraint of the proposed in/out visiting time pattern. Finally, using the obtained road network and the distribution of POIs on it, the POIs reachable by each vehicle were extracted to further narrow the range of the work location. The effectiveness of the proposed method was demonstrated by comprehensive experimental evaluations and case studies on a real-world traffic surveillance dataset of a provincial capital city.
Hybrid population-based incremental learning algorithm for solving closed-loop layout problem
DENG Wenhan, ZHANG Ming, WANG Lijin, ZHONG Yiwen
Journal of Computer Applications 2021, 41(1): 95-102. DOI: 10.11772/j.issn.1001-9081.2020081218
Abstract views: 501 | PDF (992KB) downloads: 443
The Closed-Loop Layout Problem (CLLP) is an NP-hard mixed optimization problem in which an optimal placement order of facilities is sought along an adjustable rectangular loop, with the objective of minimizing the total transport cost of material flow between facilities. Most existing methods use a meta-heuristic algorithm to find the optimal placement order and an enumeration method to find the optimal size of the rectangular loop, which is extremely inefficient. To solve this problem, a Hybrid Population-Based Incremental Learning (HPBIL) algorithm was proposed for CLLP. In the algorithm, a Discrete Population-Based Incremental Learning (DPBIL) operator and a Continuous PBIL (CPBIL) operator are used to search simultaneously for the optimal placement order of facilities and the size of the rectangular loop, which improves search efficiency. Furthermore, a local search algorithm was designed to optimize some of the good solutions in each iteration, enhancing the refinement ability. Simulation experiments were carried out on 13 CLLP instances. The results show that the HPBIL algorithm finds new best layouts on 9 instances and is significantly superior to the compared algorithms in optimization ability for CLLP.
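The core of PBIL is the update of a probability model toward the best solutions of each generation. The sketch below shows the classic discrete PBIL update for a binary-encoded problem; extending it to a permutation encoding (as DPBIL does for the placement order) and adding a continuous operator for the loop size is more involved and is not shown. The learning rate and mutation settings are illustrative assumptions.

```python
import numpy as np

def pbil_update(prob, best_individual, lr=0.1, mut_prob=0.02, mut_shift=0.05,
                rng=np.random.default_rng(0)):
    """Classic discrete PBIL update of a Bernoulli probability vector.

    prob            : (D,) probability that each bit equals 1
    best_individual : (D,) best 0/1 solution of the current generation
    """
    prob = (1.0 - lr) * prob + lr * best_individual      # pull toward the best
    mutate = rng.random(len(prob)) < mut_prob            # occasional random drift
    prob[mutate] = (1.0 - mut_shift) * prob[mutate] + mut_shift * rng.random(mutate.sum())
    return np.clip(prob, 0.01, 0.99)                     # keep exploration alive

# Toy usage: learn to maximize the number of ones (OneMax) with D = 20 bits.
rng = np.random.default_rng(1)
prob = np.full(20, 0.5)
for _ in range(100):
    pop = (rng.random((30, 20)) < prob).astype(int)      # sample a population
    best = pop[pop.sum(axis=1).argmax()]                 # best by OneMax fitness
    prob = pbil_update(prob, best)
print(prob.round(2))   # probabilities drift toward 1.0
```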
Precise visual navigation method for agricultural robot based on virtual navigation line
LIANG Zhen, FANG Tiyu, LI Jinping
Journal of Computer Applications 2021, 41(1): 191-198. DOI: 10.11772/j.issn.1001-9081.2020060927
Abstract views: 477 | PDF (1980KB) downloads: 572
Aiming at the problem of navigation without artificial markers in farmland or wild environments, a precise visual navigation method for agricultural robots based on a virtual navigation line was proposed, which can guide the robot to walk in a straight line without laying navigation lines or road signs. Firstly, the target area to be tracked was determined according to the task requirements, and the robot was controlled to adjust its direction until the target moved to the center of the field of vision. Secondly, a reference target was determined according to the positions of the robot and the target, and the virtual navigation line was determined from the positions of the two targets. Thirdly, the navigation line was updated dynamically, and the offset angle and offset distance were obtained by combining the virtual calibration line and the virtual navigation line. Finally, a fuzzy control table was constructed from the offset parameters, and the rotation angle and walking speed of the robot were adjusted according to the table. Experimental results show that the proposed algorithm can accurately recognize the navigation route and, using the fuzzy control strategy, make the robot walk in a straight line toward the target with a navigation accuracy within 10 cm.
Comprehensive prediction of thermal comfort and energy consumption for high-speed railway stations
JIANG Yangsheng, WANG Shengnan, TU Jiaqi, LI Sha, WANG Hongjun
Journal of Computer Applications 2021, 41(1): 249-257. DOI: 10.11772/j.issn.1001-9081.2020060889
Abstract views: 477 | PDF (1132KB) downloads: 674
As many factors affect the thermal comfort of semi-enclosed buildings such as high-speed railway stations in complex ways, and there is a contradiction between thermal comfort and energy consumption, a comprehensive prediction method for the thermal comfort and energy consumption of high-speed railway stations based on machine learning was proposed. Firstly, with sensor data capture and the Energy Plus platform, the indoor and outdoor state, the control units such as multi-evaporator air conditioners and heat exchangers, and the thermal energy transmission environment of a high-speed railway station were modeled. Secondly, eight factors influencing the thermal comfort of the station were proposed, including the operating number and setting temperatures of the multi-evaporator air conditioners, the operating number of heat exchangers, passenger density, outdoor temperature, indoor temperature, indoor humidity and indoor carbon dioxide concentration, and 424 operating conditions with 3 714 240 instances were designed. Finally, to effectively predict the indoor thermal comfort and energy consumption of the station, six machine learning methods were designed: deep neural network, support vector regression, decision tree regression, linear regression, ridge regression and Bayesian ridge regression. Experimental results show that decision tree regression achieves the best prediction performance in a short time, with an average mean squared error of 0.002 2. The results can directly provide actively predicted environmental parameters and support real-time decision-making for the temperature control strategy of the next stage.
Classification algorithm based on undersampling and cost-sensitiveness for unbalanced data
WANG Junhong, YAN Jiarong
Journal of Computer Applications 2021, 41(1): 48-52. DOI: 10.11772/j.issn.1001-9081.2020060878
Abstract views: 476 | PDF (752KB) downloads: 827
Focusing on the problem that traditional classifiers give low prediction accuracy for the minority class in unbalanced datasets, an unbalanced data classification algorithm based on undersampling and cost-sensitivity, called USCBoost (UnderSamples and Cost-sensitive Boosting), was proposed. Firstly, before the base classifier is trained in each iteration of AdaBoost (Adaptive Boosting), the majority class samples are sorted by weight in descending order, majority class samples equal in number to the minority class samples are selected according to the sample weights, and the weights of the sampled majority class samples are normalized; these majority class samples together with the minority class samples form a temporary training set for training the base classifier. Secondly, in the weight update stage, a higher misclassification cost is assigned to the minority class, which makes the weights of minority class samples increase faster and the weights of majority class samples increase more slowly. USCBoost was compared with AdaBoost, AdaCost (Cost-sensitive AdaBoosting) and RUSBoost (Random Under-Sampling Boosting) on ten UCI datasets. Experimental results show that USCBoost obtains the highest evaluation indexes on six and nine of the datasets under the F1-measure and G-mean criteria respectively, indicating that the proposed algorithm has better classification performance on unbalanced data.
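A rough sketch of one boosting iteration in the spirit described above: the highest-weight majority samples are kept to match the minority count, a base learner is fit on the balanced temporary set, and the weight update applies a larger cost to misclassified minority samples. The cost values, base learner and normalization details are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cost_sensitive_undersample_round(X, y, w, minority_label=1, cost_min=2.0):
    """One illustrative boosting round with undersampling and cost-sensitive weights."""
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)

    # Undersample: keep the highest-weight majority samples, as many as minority.
    keep = majority[np.argsort(w[majority])[::-1][:len(minority)]]
    idx = np.concatenate([keep, minority])
    w_tmp = w[idx] / w[idx].sum()                      # normalized temporary weights

    stump = DecisionTreeClassifier(max_depth=1).fit(X[idx], y[idx], sample_weight=w_tmp)

    # AdaBoost-style update with a higher cost on misclassified minority samples.
    pred = stump.predict(X)
    err = max(np.sum(w[pred != y]) / w.sum(), 1e-10)
    alpha = 0.5 * np.log((1.0 - err) / err)
    cost = np.where(y == minority_label, cost_min, 1.0)
    w = w * np.exp(alpha * cost * (pred != y))         # misclassified samples gain weight
    return stump, alpha, w / w.sum()

# Toy usage: 1000 samples, roughly 5% minority class.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)); y = (rng.random(1000) < 0.05).astype(int)
w = np.full(1000, 1e-3)
stump, alpha, w = cost_sensitive_undersample_round(X, y, w)
```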
Sentiment classification of incomplete data based on bidirectional encoder representations from transformers
LUO Jun, CHEN Lifei
Journal of Computer Applications 2021, 41(1): 139-144. DOI: 10.11772/j.issn.1001-9081.2020061066
Abstract views: 471 | PDF (921KB) downloads: 1063
Incomplete data, such as interactive information on social platforms and review contents in Internet movie datasets, widely exist in real life. However, most existing sentiment classification models are built on complete data and do not consider the impact of incomplete data on classification performance. To address this problem, a stacked denoising neural network model based on BERT (Bidirectional Encoder Representations from Transformers) was proposed for the sentiment classification of incomplete data. The model consists of two components: a Stacked Denoising AutoEncoder (SDAE) and BERT. Firstly, the incomplete data processed by word embedding were fed into the SDAE for denoising training, in order to extract deep features and reconstruct the feature representation of missing and wrong words. Then, the obtained output was passed into the BERT pre-training model to further refine the feature vector representation of the words. Experimental results on two commonly used sentiment datasets demonstrate that the proposed method improves the F1 measure and classification accuracy on incomplete data by about 6% and 5% respectively, verifying the effectiveness of the proposed model.
Group scanpath generation based on fixation regions of interest clustering and transferring
LIU Nanbo, XIAO Fen, ZHANG Wenlei, LI Wangxin, WENG Zun
Journal of Computer Applications 2021, 41(1): 150-156. DOI: 10.11772/j.issn.1001-9081.2020061147
Abstract views: 463 | PDF (2048KB) downloads: 485
To address the redundancy and chaos of group observers' scanpath data in natural scenes and their lack of representativeness, a group scanpath generation method based on spatial-temporal clustering and transferring of fixation Regions Of Interest (ROIs) was proposed by mining the potential characteristics of individual scanpaths. Firstly, the scanpaths of multiple observers under the same stimulus were analyzed, and multiple fixation regions of interest were generated by clustering the fixation points with the affinity propagation clustering algorithm. Then, fixation-intensity information such as the number of observers, fixation frequency and duration was collected and analyzed, and the regions of interest were filtered. Afterwards, sub-regions of interest of different types were extracted by defining fixation behaviors within the regions of interest. Finally, a transfer mode between regions and sub-regions of interest was proposed on the basis of fixation priority, so as to generate the group scanpath for natural scenes. Group scanpath generation experiments were conducted on the two public datasets MIT1003 and OSIE. The results show that, compared with state-of-the-art methods such as eMine, Scanpath Trend Analysis (STA), Sequential Pattern Mining Algorithm (SPAM), Candidate-constrained Dynamic time warping Barycenter Averaging (CDBA) and Heuristic, the proposed method generates group scanpaths with higher overall similarity, with ScanMatch (w/o duration) reaching 0.426 and 0.467 and ScanMatch (w/ duration) reaching 0.404 and 0.439 on the two datasets respectively. The scanpaths generated by the proposed method therefore have high overall similarity to the real scanpaths and a certain representational capability.
Enhanced fireworks algorithm with adaptive merging strategy and guidance operator
LI Kewen, MA Xiangbo, HOU Wenyan
Journal of Computer Applications 2021, 41(1): 81-86. DOI: 10.11772/j.issn.1001-9081.2020060887
Abstract views: 454 | PDF (1056KB) downloads: 430
To overcome the shortcomings of the traditional FireWorks Algorithm (FWA) in the optimization process, such as the search range being limited by the explosion radius and the lack of effective interaction between particles, an Enhanced FireWorks Algorithm with adaptive Merging strategy and Guidance operator (EFWA-GM) was proposed. Firstly, according to the positional relationship between firework particles, overlapping explosion ranges in the search space were adaptively merged. Secondly, by layering the spark particles and making full use of the position information of high-quality particles, a guidance operator was designed to guide the evolution of suboptimal particles, improving the accuracy and convergence speed of the algorithm. Experimental results on 12 benchmark functions show that, compared with the Standard Particle Swarm Optimization (SPSO) algorithm, the Enhanced FireWorks Algorithm (EFWA), the Adaptive FireWorks Algorithm (AFWA), the dynamic FireWorks Algorithm (dynFWA) and the Guided FireWorks Algorithm (GFWA), the proposed EFWA-GM has better optimization performance in terms of accuracy and convergence speed, and obtains the most accurate solutions on 9 benchmark functions.
Decomposition based many-objective evolutionary algorithm based on minimum distance and aggregation strategy
LI Erchao, LI Kangwei
Journal of Computer Applications 2021, 41(1): 22-28. DOI: 10.11772/j.issn.1001-9081.2020060891
Abstract views: 453 | PDF (953KB) downloads: 549
Pareto dominance based many-objective evolutionary algorithms suffer from reduced selection pressure when solving high-dimensional problems, while decomposition based many-objective evolutionary algorithms tend to lose population diversity when improving convergence and distribution. To address these issues, a decomposition based many-objective evolutionary algorithm with minimum distance and aggregation strategy was proposed. Firstly, an angle-based decomposition technique was used to decompose the objective space into a specified number of subspaces to improve the diversity of the population. Then, an aggregation-based cross-neighborhood method was added to the generation of new solutions, making the generated solutions closer to their parent solutions. Finally, convergence and distribution were improved by selecting solutions in each subspace in two stages based on minimum distance and the aggregation strategy. To verify the feasibility of the algorithm, simulation experiments were conducted on the ZDT and DTLZ benchmark functions. The results show that the proposed algorithm outperforms the classical MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition), MOEA/D-DE (MOEA/D based on Differential Evolution), NSGA-Ⅲ (Nondominated Sorting Genetic Algorithm Ⅲ) and GrEA (Grid-based Evolutionary Algorithm), and can effectively balance convergence and diversity while improving diversity.
Multi-scale skip deep long short-term memory network for short-term multivariate load forecasting
XIAO Yong, ZHENG Kaihong, ZHENG Zhenjing, QIAN Bin, LI Sen, MA Qianli
Journal of Computer Applications 2021, 41(1): 231-236. DOI: 10.11772/j.issn.1001-9081.2020060929
Abstract views: 445 | PDF (862KB) downloads: 602
In recent years, short-term power load prediction models built around Recurrent Neural Networks (RNNs) have achieved excellent performance in short-term power load forecasting. However, RNNs cannot effectively capture the multi-scale temporal features in short-term power load data, making it difficult to further improve forecasting accuracy. To capture these multi-scale temporal features, a short-term power load prediction model based on a Multi-scale Skip Deep Long Short-Term Memory (MSD-LSTM) network was proposed. Specifically, a forecasting model was built with LSTM (Long Short-Term Memory) as its main component, which better captures long- and short-term temporal dependencies and thereby alleviates the loss of important information on long time series. Furthermore, a multi-layer LSTM architecture was adopted with different skip connection numbers set for different layers, so that different layers of MSD-LSTM capture features at different time scales. Finally, a fully connected layer was introduced to fuse the multi-scale temporal features extracted by the different layers, and the fused features were used for short-term power load prediction. Experimental results show that, compared with LSTM, MSD-LSTM achieves a lower Mean Square Error (MSE), with a reduction of about 10% in general. MSD-LSTM can therefore better capture multi-scale temporal features in short-term power load data and improve the accuracy of short-term power load forecasting.
Multi-focus image fusion method based on guided filtering and difference image
CHENG Yaling, BAI Zhi, TAN Aiping
Journal of Computer Applications 2021, 41(1): 220-224. DOI: 10.11772/j.issn.1001-9081.2020081456
Abstract views: 443 | PDF (1626KB) downloads: 475
To address the edge blurring problem in traditional spatial-domain fusion of multi-focus images, a multi-focus image fusion method based on Guided Filtering (GF) and difference images was proposed. Firstly, the source images were filtered by GF at different levels, and differencing was performed on the filtered images to obtain the focus feature maps. Secondly, the Energy Of Gradient (EOG) of the focus feature maps was used to obtain the initial decision map, and spatial consistency verification and morphological operations were applied to the initial decision map to remove noisy pixels caused by similar EOG values. Thirdly, to avoid sudden changes in image features, the initial decision map was optimized by GF. Finally, weighted fusion was performed on the source images according to the optimized decision map to obtain the fused image. Three sets of classic multi-focus images were selected as experimental images, and the results of the proposed method were compared with those of 9 other multi-focus image fusion methods. The subjective visual comparison shows that the proposed method preserves the detailed information of multi-focus images better, and four objective evaluation indicators of the fused images are significantly better than those of the comparison methods. Experimental results show that the proposed method can produce high-quality fused images, preserve the information of the source images well, and effectively alleviate the edge blurring problem of traditional multi-focus image fusion.
Label noise filtering method based on local probability sampling
ZHANG Zenghui, JIANG Gaoxia, WANG Wenjian
Journal of Computer Applications 2021, 41(1): 67-73. DOI: 10.11772/j.issn.1001-9081.2020060970
Abstract views: 440 | PDF (1462KB) downloads: 817
In classification learning tasks, noise is inevitably generated in the process of acquiring data. In particular, the existence of label noise not only makes the learning model more complex, but also leads to overfitting and reduces the generalization ability of the classifier. Although some label noise filtering algorithms can solve these problems to some extent, limitations such as poor noise recognition ability, unsatisfactory classification performance and low filtering efficiency remain. Focusing on these issues, a local probability sampling method based on the label confidence distribution was proposed for label noise filtering. Firstly, random forest classifiers were used to vote on the labels of the samples, yielding a label confidence for each sample. The samples were then divided into easy-to-recognize and hard-to-recognize ones according to their label confidence values. Finally, the two kinds of samples were filtered with different filtering strategies. Experimental results show that, in the presence of label noise, the proposed method maintains high noise recognition ability in most cases and has an obvious advantage in classification generalization performance.
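A small sketch of the confidence-estimation step described above: out-of-fold predictions of a random forest are used as vote fractions for each sample's given label, and a threshold splits the data into easy and hard samples. The out-of-fold scheme, threshold and split rule are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def label_confidence_split(X, y, threshold=0.7, seed=0):
    """Estimate each sample's label confidence and split into easy/hard sets."""
    rf = RandomForestClassifier(n_estimators=100, random_state=seed)
    # Out-of-fold class probabilities: each sample is scored by trees that
    # were not trained on it, which avoids optimistic confidences.
    proba = cross_val_predict(rf, X, y, cv=5, method="predict_proba")
    classes = np.unique(y)
    conf = proba[np.arange(len(y)), np.searchsorted(classes, y)]
    easy = np.flatnonzero(conf >= threshold)   # likely clean labels
    hard = np.flatnonzero(conf < threshold)    # candidates for stricter filtering
    return conf, easy, hard

# Toy usage: inject 10% label noise into a simple two-class problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] > 0).astype(int)
flip = rng.random(500) < 0.1
y[flip] ^= 1
conf, easy, hard = label_confidence_split(X, y)
print(conf[flip].mean(), conf[~flip].mean())   # noisy labels get lower confidence
```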
Incidence trend prediction of hand-foot-mouth disease based on long short-term memory neural network
MA Tingting, JI Tianjiao, YANG Guanyu, CHEN Yang, XU Wenbo, LIU Hongtu
Journal of Computer Applications 2021, 41(1): 265-269. DOI: 10.11772/j.issn.1001-9081.2020060936
Abstract views: 430 | PDF (892KB) downloads: 791
To address the problems of traditional Hand-Foot-Mouth Disease (HFMD) incidence trend prediction algorithms, such as low prediction accuracy, the lack of other influencing factors and short prediction horizons, a long-term prediction method using meteorological factors and a Long Short-Term Memory (LSTM) network was proposed. First, a sliding window was used to convert the incidence sequence into the inputs and outputs of the network. Then, the LSTM network was used for data modeling and prediction, with iterative prediction used to obtain long-term prediction results. Finally, temperature and humidity variables were added to the network to compare their impact on the prediction results. Experimental results show that adding meteorological factors improves the prediction accuracy of the model. The proposed model achieves a Mean Absolute Error (MAE) of 74.9 on the Jinan dataset and 427.7 on the Guangzhou dataset, which is more accurate than the commonly used Seasonal Autoregressive Integrated Moving Average (SARIMA) model and Support Vector Regression (SVR) model, proving that the model is effective for predicting the incidence trend of HFMD.
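The sliding-window conversion and iterative (recursive) forecasting mentioned above are generic steps that can be sketched independently of the exact network: windows of past incidence values become model inputs, and long-term predictions are obtained by feeding each prediction back as the newest input. The model below is a placeholder regressor standing in for the LSTM; the window length and horizon are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression   # placeholder for the LSTM

def make_windows(series, window=12):
    """Turn a 1-D series into (inputs, targets) pairs with a sliding window."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def iterative_forecast(model, history, horizon, window=12):
    """Recursive multi-step forecast: feed each prediction back as an input."""
    buf = list(history[-window:])
    preds = []
    for _ in range(horizon):
        nxt = float(model.predict(np.array(buf[-window:]).reshape(1, -1))[0])
        preds.append(nxt)
        buf.append(nxt)
    return np.array(preds)

# Toy usage on a synthetic seasonal incidence curve.
t = np.arange(200)
incidence = 100 + 50 * np.sin(2 * np.pi * t / 52) + np.random.randn(200) * 5
X, y = make_windows(incidence, window=12)
model = LinearRegression().fit(X, y)
print(iterative_forecast(model, incidence, horizon=8).round(1))
```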
Page 1 of 2 (42 records in total)