Multivariate long-term series forecasting model based on decomposition and frequency domain feature extraction
Yiyang FAN, Yang ZHANG, Shang ZENG, Yu ZENG, Maoli FU
Journal of Computer Applications    2024, 44 (11): 3442-3448.   DOI: 10.11772/j.issn.1001-9081.2023111684

Existing Transformer-based Multivariate Long-Term Series Forecasting (MLTSF) models extract features mainly in the time domain, where it is difficult to uncover reliable dependencies directly from the dispersed time points of a long series. To address these problems, a new model based on decomposition and frequency-domain feature extraction was proposed. Firstly, a frequency domain-based periodic term-trend term decomposition method was proposed, which reduced the time complexity of the decomposition process. Then, with the trend features extracted by this decomposition, a Transformer network performing frequency-domain feature extraction based on the Gabor transform was used to capture periodic dependencies, which enhanced the stability and robustness of forecasting. Experimental results on five benchmark datasets show that, compared with the current state-of-the-art methods, the proposed model reduces the Mean Squared Error (MSE) of MLTSF by 7.6% on average, with a maximum reduction of 18.9%, demonstrating that the proposed model improves forecasting accuracy effectively.
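The abstract does not detail the decomposition itself; the following is a minimal sketch, assuming the trend term is recovered by keeping only the lowest FFT frequencies (the cutoff num_low_freqs is a hypothetical parameter) and the periodic term is the residual:

```python
import numpy as np

def freq_domain_decompose(x, num_low_freqs=5):
    """Split a 1-D series into trend and periodic terms in the frequency domain.

    A low-pass mask keeps the num_low_freqs lowest frequencies as the trend;
    the residual carries the periodic component. The FFT-based split runs in
    O(n log n), versus O(n * k) for a moving-average decomposition with
    kernel size k.
    """
    spec = np.fft.rfft(x)                  # one-sided spectrum of the series
    mask = np.zeros_like(spec)
    mask[:num_low_freqs] = 1.0             # keep DC + lowest frequencies only
    trend = np.fft.irfft(spec * mask, n=len(x))
    periodic = x - trend                   # residual = periodic term
    return trend, periodic

# Usage: a noisy series with an upward trend and a daily-like cycle.
t = np.arange(512)
series = 0.01 * t + np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(len(t))
trend, periodic = freq_domain_decompose(series, num_low_freqs=3)
```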

Time series prediction algorithm based on multi-scale gated dilated convolutional network
Yu ZENG, Yang ZHANG, Shang ZENG, Maoli FU, Qixue HE, Linlong ZENG
Journal of Computer Applications    2024, 44 (11): 3427-3434.   DOI: 10.11772/j.issn.1001-9081.2023111583

To address challenges in time series prediction tasks, such as high-dimensional features, large-scale data, and the demand for high prediction accuracy, a multi-scale trend-period decomposition model based on a multi-head gated dilated convolutional network was proposed. A multi-scale decomposition approach was employed to decompose the original covariate sequence and the prediction variable sequence into their respective periodic terms and trend terms, enabling independent prediction of each. For the periodic terms, a multi-head gated dilated convolutional encoder was introduced to extract the periodic information of each sequence; in the decoder stage, channel information was exchanged and fused through a cross-attention mechanism, and after the periodic information of the prediction variables was sampled and aligned, periodic prediction was performed through temporal attention over the fused channel information. The trend terms were predicted with an autoregressive approach. Finally, the prediction sequence was obtained by combining the trend prediction results with the periodic prediction results. Compared with mainstream benchmark models such as the Long Short-Term Memory (LSTM) network and Informer on five datasets including ETTm1 and ETTh1, the proposed model reduces Mean Squared Error (MSE) by 19.2% to 52.8% on average and Mean Absolute Error (MAE) by 12.1% to 33.8% on average. Ablation experiments confirm that the proposed multi-scale decomposition module, multi-head gated dilated convolution, and temporal attention module each enhance the accuracy of time series prediction.
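The abstract does not specify the block structure; the sketch below shows one common way a gated dilated convolution can be built (a tanh filter branch gated by a sigmoid branch, as in WaveNet-style gating) and fused across dilation scales. The class names and dilation set are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class GatedDilatedConv(nn.Module):
    """One gated dilated temporal convolution branch (WaveNet-style gating)."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2   # keep sequence length fixed
        self.filt = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.gate = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        return torch.tanh(self.filt(x)) * torch.sigmoid(self.gate(x))

class MultiScaleGatedEncoder(nn.Module):
    """Run gated branches at several dilations and fuse them with a 1x1 conv."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            GatedDilatedConv(channels, dilation=d) for d in dilations)
        self.fuse = nn.Conv1d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(8, 16, 96)            # (batch, channels, time)
y = MultiScaleGatedEncoder(16)(x)     # same shape: (8, 16, 96)
```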

Improved extraction method on logic function optimization of mass data processing
Jing YE Lei YU Guang-yu ZENG Yan BAI
Journal of Computer Applications   
The extraction method is one of the classical methods for achieving minimum coverage in two-level logic synthesis. However, as the numbers of output variables and prime implicants grow, long processing time and high resource requirements become the major problems of the extraction method. To overcome these drawbacks, a new improved algorithm for coverage minimization, adapted to mass data processing, was presented in this paper on the basis of extraction method theory. Based on iterative intersection and local search theory, two major phases of the algorithm were improved: extremal selection and branch processing. Tests using existing computer resources show promising results, and the improved algorithm is superior to other multi-output logic function optimization methods.
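The paper's improved extremal-selection and branch-processing heuristics are not detailed in the abstract; as background, here is a minimal sketch of the covering problem itself, selecting essential prime implicants first and then covering the rest greedily. The example primes and minterms are made up for illustration:

```python
def minimum_cover(minterms, primes):
    """Cover all minterms with prime implicants (covering step of two-level
    minimization). `primes` maps an implicant's name to the set of minterms
    it covers. Essential primes (the sole cover of some minterm) are chosen
    first; the remainder is covered greedily.
    """
    uncovered = set(minterms)
    chosen = []
    # 1) Essential primes: any minterm covered by exactly one implicant.
    for m in list(uncovered):
        if m not in uncovered:
            continue                      # already covered by an earlier pick
        covering = [p for p, cov in primes.items() if m in cov]
        if len(covering) == 1 and covering[0] not in chosen:
            chosen.append(covering[0])
            uncovered -= primes[covering[0]]
    # 2) Greedy: repeatedly pick the implicant covering most uncovered minterms.
    while uncovered:
        best = max(primes, key=lambda p: len(primes[p] & uncovered))
        if not primes[best] & uncovered:
            break                         # remaining minterms cannot be covered
        chosen.append(best)
        uncovered -= primes[best]
    return chosen

# Example: primes of a 3-variable function and the minterms each covers.
primes = {"a'b'": {0, 1}, "b'c": {1, 5}, "ab": {6, 7}, "bc'": {2, 6}}
print(minimum_cover({0, 1, 2, 5, 6, 7}, primes))
```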
High-resolution real-time semantic segmentation algorithm for edge deployment
Linlong ZENG, Miao CHENG, Shaobing ZHANG, Yu ZENG
Journal of Computer Applications    0, (): 159-163.   DOI: 10.11772/j.issn.1001-9081.2024020218

Among classic machine vision tasks, semantic segmentation requires a large amount of computation, making it difficult to deploy Convolutional Neural Networks (CNNs) for segmentation in edge computing systems. The Field Programmable Gate Array (FPGA) is hardware widely used in industrial vision sensors for data stream processing, and methods for deploying CNNs on FPGA have been proposed in recent years. However, due to limited computing resources, current technology cannot achieve acceptable speed and accuracy when performing semantic segmentation of high-resolution images on FPGA. After analyzing the characteristics of deep learning accelerators on FPGA, a new segmentation network, Trilateral Segment Network (TriSeNet), was proposed to achieve end-to-end inference of high-resolution semantic segmentation tasks on edge accelerators. TriSeNet was deployed on a Xilinx Kria K26 SOM for Cityscapes semantic segmentation, achieving a mean Intersection over Union (mIoU) of 75% and an inference speed of 32 frames per second on images with a resolution of 512×1024. It utilized edge computing resources efficiently, reaching a compute unit utilization of 62.6%, which verifies that TriSeNet successfully adapts to the hardware characteristics of the accelerator.
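For reference, mIoU, the metric reported above, is standardly computed from a per-class confusion matrix; a minimal sketch follows, where the 19-class count and ignore label 255 follow the usual Cityscapes convention:

```python
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=255):
    """Mean Intersection over Union over pixel labels.

    Accumulates a confusion matrix, then averages per-class
    IoU = TP / (TP + FP + FN). `ignore_index` marks unlabeled pixels
    (255 in Cityscapes ground truth).
    """
    valid = target != ignore_index
    # Confusion matrix via bincount on (true_class * C + predicted_class).
    idx = target[valid].astype(np.int64) * num_classes + pred[valid].astype(np.int64)
    conf = np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp   # TP + FP + FN per class
    iou = tp / np.maximum(denom, 1)
    return iou[denom > 0].mean()                       # skip absent classes

# Usage on a 512x1024 prediction with 19 Cityscapes classes.
pred = np.random.randint(0, 19, size=(512, 1024))
target = np.random.randint(0, 19, size=(512, 1024))
print(mean_iou(pred, target, num_classes=19))
```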
