Journal of Computer Applications
WU Junheng1, WANG Xiaodong2, HE Qixue2
Abstract: Aiming at the prediction difficulties caused by periodic complexity and high-frequency noise in time series data, a time series prediction model based on statistical distribution sensing and frequency-domain dual-channel fusion was proposed. The model aimed to mitigate data drift, suppress noise interference, and improve prediction accuracy. First, the original time series data was processed through overlapping window slices, and the statistical distribution of the data in each slice was calculated and used for normalization. Then, a multi-layer perceptron was used to predict the statistical distribution of future data. Next, the normalized sequence underwent adaptive time-frequency conversion, and the correlation features within the frequency domain and between channels were strengthened through a channel-independent encoder and a channel-interaction learner to obtain a multi-scale frequency-domain representation. Finally, a linear prediction layer was used to complete the inverse transformation from the frequency domain to the time domain. In the output stage, the predicted statistical distribution of future data was used to perform an inverse normalization operation and generate the final prediction result. Comparative experiments with the current mainstream time series prediction model PatchTST showed that the Mean Square Error (MSE) on the Exchange, ETTm2, and Solar datasets was reduced by approximately 5.3%, and the Mean Absolute Error (MAE) was reduced by approximately 4.0%, demonstrating good noise suppression capability and prediction performance. Ablation experiments further showed that the statistical distribution sensing, adaptive frequency-domain processing, and dual-channel fusion modules each made significant contributions to improving prediction accuracy.
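The pipeline described in the abstract can be outlined in code. The sketch below is a minimal single-channel illustration, not the authors' implementation: `sliding_slices` stands in for the overlapping window slicing, per-slice statistics drive the normalization and inverse normalization, and a plain real FFT/inverse FFT stands in for the adaptive time-frequency conversion. The learned components (the MLP that predicts future statistics, the channel-independent encoder, the channel-interaction learner, and the linear prediction layer) are replaced by placeholders, since their architectures are not specified here.

```python
import numpy as np

def sliding_slices(x, win, stride):
    """Split a 1-D series into overlapping windows ("window overlapping slices")."""
    return np.stack([x[i:i + win] for i in range(0, len(x) - win + 1, stride)])

def predict_freq_domain(x, horizon):
    """Hypothetical single-channel sketch of the pipeline:
    normalize by the slice's statistics, transform to the frequency domain,
    transform back, and de-normalize with the (predicted) future statistics."""
    mu, sigma = x.mean(), x.std() + 1e-8            # statistical distribution of the slice
    z = (x - mu) / sigma                            # normalization
    spec = np.fft.rfft(z)                           # time-frequency conversion (plain rFFT here)
    # In the paper, encoders strengthen intra-frequency and inter-channel
    # features and a linear layer predicts the output spectrum; as a
    # placeholder, we invert the spectrum and crop to the horizon.
    z_hat = np.fft.irfft(spec, n=len(x))[:horizon]  # inverse transform to the time domain
    # The paper's MLP predicts the *future* distribution; here we simply
    # reuse the input slice's statistics as a stand-in.
    return z_hat * sigma + mu                       # inverse normalization
```

In the full model, reusing the input statistics would be replaced by the MLP's forecast of the future distribution, which is what lets the output stage compensate for data drift between the input window and the prediction horizon.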
Key words: time series prediction, time-frequency analysis, Transformer, channel independence, channel mixing
CLC Number: TP399
WU Junheng, WANG Xiaodong, HE Qixue. Time series prediction model based on statistical distribution sensing and frequency domain dual-channel fusion [J]. Journal of Computer Applications, DOI: 10.11772/j.issn.1001-9081.2024121750.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024121750