Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (1): 113-123. DOI: 10.11772/j.issn.1001-9081.2024121750
• Data science and technology •
Junheng WU1,2, Xiaodong WANG1,2, Qixue HE1,2
Received: 2024-12-12
Revised: 2025-03-17
Accepted: 2025-03-18
Online: 2026-01-10
Published: 2026-01-10
Contact: Qixue HE
About author: WU Junheng, born in 1998 in Chongqing, M. S. candidate. His research interests include time series prediction and machine learning.
Junheng WU, Xiaodong WANG, Qixue HE. Time series prediction model based on statistical distribution sensing and frequency domain dual-channel fusion[J]. Journal of Computer Applications, 2026, 46(1): 113-123.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024121750
| Model | Time complexity | Space complexity |
|---|---|---|
| Proposed model | | |
| iTransformer[28] | | |
| PatchTST[18] | | |
| Crossformer[16] | | |
| Autoformer[14] | | |
| Informer[13] | | |
| DLinear[17] | | |
| Pyraformer[15] | | |
Tab. 1 Time complexity and space complexity of different models
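The complexity comparison above hinges on how each model processes a length-L window; an FFT-based frequency channel, as the model's "frequency domain dual-channel" name suggests, costs O(L log L). The sketch below is only an illustration of that idea under assumptions — the `frequency_channel` function and its top-k frequency selection are invented here, not taken from the paper:

```python
import numpy as np

# Minimal sketch of a frequency-domain channel: transform the series with an
# O(L log L) real FFT, keep the k strongest frequency bins, and reconstruct
# a denoised view of the input. This is an assumption about the general idea,
# not the paper's actual architecture.
def frequency_channel(x: np.ndarray, k: int = 4) -> np.ndarray:
    spec = np.fft.rfft(x)                      # O(L log L) transform
    keep = np.argsort(np.abs(spec))[-k:]       # indices of k dominant bins
    filtered = np.zeros_like(spec)
    filtered[keep] = spec[keep]
    return np.fft.irfft(filtered, n=len(x))    # back to the time domain

t = np.arange(256)
clean = np.sin(2 * np.pi * t / 32)
x = clean + 0.1 * np.random.default_rng(0).normal(size=256)
smooth = frequency_channel(x, k=4)
```

Because the dominant bins carry the periodic component, the reconstructed series sits closer to the underlying signal than the noisy input does.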
| Dataset | Sampling interval/min | Dimensions | Time steps | Stationarity coefficient |
|---|---|---|---|---|
| Exchange | 1 440 | 8 | 7 588 | -1.90 |
| ETTm2 | 15 | 7 | 69 680 | -5.66 |
| Solar | 10 | 137 | 52 560 | -7.69 |
| Electricity | 60 | 321 | 26 304 | -8.44 |
| Weather | 10 | 21 | 52 696 | -26.68 |
Tab. 2 Basic information of datasets
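The stationarity coefficients in Tab. 2 are presumably Augmented Dickey-Fuller test statistics (ref [37]): more negative values indicate a more stationary series, which matches Weather (-26.68) being far more stationary than Exchange (-1.90). A minimal lag-0 Dickey-Fuller statistic can be sketched with NumPy — the lag order and regression terms used in the paper are assumptions:

```python
import numpy as np

def df_statistic(y: np.ndarray) -> float:
    """Lag-0 Dickey-Fuller t-statistic: regress Δy_t on y_{t-1} plus an
    intercept. More negative -> stronger evidence of stationarity, matching
    the sign convention of Tab. 2's stationarity coefficients."""
    dy = np.diff(y)                                   # Δy_t
    X = np.column_stack([y[:-1], np.ones(len(dy))])   # [y_{t-1}, 1]
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])       # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])   # std. error of slope
    return beta[0] / se

rng = np.random.default_rng(0)
white = rng.normal(size=2000)            # stationary: strongly negative stat
walk = np.cumsum(rng.normal(size=2000))  # random walk: stat near zero
```

A production computation would use `statsmodels.tsa.stattools.adfuller`, which adds lagged difference terms and proper critical values; this sketch only shows where the sign and magnitude of the coefficient come from.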
| Dataset | Prediction length | Proposed model | PatchTST | iTransformer | DLinear | Crossformer | Autoformer | Informer | |||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | ||
| Weather | 96 | 0.145 | 0.197 | 0.163 | 0.212 | 0.176 | 0.237 | 0.226 | 0.301 | 0.301 | 0.343 | 0.354 | 0.405 | ||
| 192 | 0.190 | 0.239 | 0.203 | 0.249 | 0.220 | 0.282 | 0.215 | 0.289 | 0.325 | 0.370 | 0.419 | 0.434 | |||
| 336 | 0.243 | 0.281 | 0.255 | 0.289 | 0.265 | 0.319 | 0.319 | 0.317 | 0.351 | 0.391 | 0.583 | 0.543 | |||
| 720 | 0.308 | 0.328 | 0.326 | 0.337 | 0.326 | 0.366 | 0.381 | 0.379 | 0.422 | 0.433 | 0.916 | 0.705 | |||
| Exchange | 96 | 0.079 | 0.199 | 0.093 | 0.215 | 0.122 | 0.264 | 0.135 | 0.278 | 0.182 | 0.312 | 0.836 | 0.773 | ||
| 192 | 0.166 | 0.275 | 0.189 | 0.310 | 0.205 | 0.347 | 0.263 | 0.309 | 0.316 | 0.414 | 0.927 | 0.816 | |||
| 336 | 0.312 | 0.401 | 0.343 | 0.425 | 0.332 | 0.440 | 0.442 | 0.519 | 0.519 | 0.535 | 1.078 | 0.874 | |||
| 720 | 0.748 | 0.650 | 0.870 | 0.707 | 0.869 | 0.705 | 1.089 | 0.824 | 1.209 | 0.854 | 1.153 | 0.892 | |||
| ETTm2 | 96 | 0.163 | 0.254 | 0.166 | | 0.179 | 0.272 | 0.239 | 0.298 | 0.280 | 0.366 | 0.355 | 0.462 |||
| 192 | 0.222 | 0.295 | 0.241 | 0.315 | 0.226 | 0.306 | 0.307 | 0.346 | 0.310 | 0.371 | 0.595 | 0.586 | |||
| 336 | 0.269 | 0.312 | 0.290 | 0.344 | 0.274 | 0.335 | 0.323 | 0.377 | 0.343 | 0.388 | 1.270 | 0.871 | |||
| 720 | 0.355 | 0.379 | 0.376 | 0.397 | 0.380 | 0.408 | 0.405 | 0.429 | 0.412 | 0.433 | 3.999 | 1.704 | |||
| Electricity | 96 | 0.132 | 0.129 | 0.222 | 0.228 | 0.140 | 0.237 | 0.166 | 0.293 | 0.196 | 0.313 | 0.304 | 0.393 | ||
| 192 | 0.148 | 0.240 | 0.155 | 0.249 | 0.153 | 0.249 | 0.187 | 0.302 | 0.211 | 0.324 | 0.327 | 0.417 | |||
| 336 | 0.160 | 0.251 | 0.170 | 0.266 | 0.169 | 0.267 | 0.205 | 0.324 | 0.214 | 0.327 | 0.333 | 0.422 | |||
| 720 | 0.192 | 0.283 | 0.210 | 0.298 | 0.207 | 0.301 | 0.211 | 0.338 | 0.236 | 0.342 | 0.351 | 0.427 | |||
| Solar | 96 | 0.174 | 0.214 | 0.201 | 0.260 | 0.221 | 0.289 | 0.241 | 0.299 | 0.266 | 0.311 | 0.208 | 0.237 | ||
| 192 | 0.192 | 0.239 | 0.228 | 0.266 | 0.249 | 0.285 | 0.268 | 0.314 | 0.271 | 0.315 | 0.229 | 0.259 | |||
| 336 | 0.210 | 0.248 | 0.221 | 0.266 | 0.263 | 0.291 | 0.288 | 0.311 | 0.281 | 0.317 | 0.235 | 0.272 | |||
| 720 | 0.213 | 0.250 | 0.230 | 0.279 | 0.244 | 0.296 | 0.271 | 0.315 | 0.295 | 0.319 | 0.233 | 0.275 | |||
| Improvement rate/% | 5.3 | 4.0 | 7.0 | 6.3 | 11.2 | 11.2 | 24.7 | 19.6 | 33.1 | 26.2 | 53.2 | 42.5 |||
Tab. 3 Comparison of prediction effects of different models on different datasets
Fig. 5 Comparison of performance on the 8th dimension (transformer oil temperature) of ETTm2 dataset with input length of 336 and output length of 720
| [1] | YANG W Y, WEI Y B, LUO C H. Short-term electricity load forecasting based on CVMD-TCN-BiLSTM[J]. Journal of Electrical Engineering, 2024, 19(2): 163-172. |
| [2] | KAUSHIK S, CHOUDHURY A, SHERON P K, et al. AI in healthcare: time-series forecasting using statistical, neural, and ensemble architectures[J]. Frontiers in Big Data, 2020, 3: No.4. |
| [3] | HOU M, XU C, LI Z, et al. Multi-granularity residual learning with confidence estimation for time series prediction [C]// Proceedings of the ACM Web Conference 2022. New York: ACM, 2022: 112-121. |
| [4] | WANG Y F, YU L, TENG F, et al. Resource load prediction model based on long-short time series feature fusion[J]. Journal of Computer Applications, 2022, 42(5): 1508-1515. |
| [5] | LIU Z, ZHU Z, GAO J, et al. Forecast methods for time series data: a survey[J]. IEEE Access, 2021, 9: 91896-91912. |
| [6] | WU H, HU T, LIU Y, et al. TimesNet: temporal 2D-variation modeling for general time series analysis[EB/OL]. [2024-11-18]. |
| [7] | DENG A, HOOI B. Graph neural network-based anomaly detection in multivariate time series [C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 4027-4035. |
| [8] | VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010. |
| [9] | KALYAN K S, RAJASEKHARAN A, SANGEETHA S. AMMUS: a survey of transformer-based pretrained models in natural language processing[EB/OL]. [2024-12-08]. |
| [10] | HAN K, WANG Y, CHEN H, et al. A survey on Vision Transformer[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 87-110. |
| [11] | WEN Q, ZHOU T, ZHANG C, et al. Transformers in time series: a survey [C]// Proceedings of the 32nd International Joint Conference on Artificial Intelligence. California: ijcai.org, 2023: 6778-6786. |
| [12] | ZHOU T, MA Z, WEN Q, et al. FEDformer: frequency enhanced decomposed transformer for long-term series forecasting [C]// Proceedings of the 39th International Conference on Machine Learning. New York: JMLR.org, 2022: 27268-27286. |
| [13] | ZHOU H, ZHANG S, PENG J, et al. Informer: beyond efficient transformer for long sequence time-series forecasting [C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 11106-11115. |
| [14] | WU H, XU J, WANG J, et al. Autoformer: decomposition transformers with auto-correlation for long-term series forecasting [C]// Proceedings of the 35th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2021: 22419-22430. |
| [15] | LIU S, YU H, LIAO C, et al. Pyraformer: low-complexity pyramidal attention for long-range time series modeling and forecasting[EB/OL]. [2024-11-18]. |
| [16] | ZHANG Y, YAN J. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting[EB/OL]. [2024-11-18]. |
| [17] | ZENG A, CHEN M, ZHANG L, et al. Are Transformers effective for time series forecasting? [C]// Proceedings of the 37th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2023: 11121-11128. |
| [18] | NIE Y, NGUYEN N H, SINTHONG P, et al. A time series is worth 64 words: long-term forecasting with Transformers[EB/OL]. [2024-07-25]. |
| [19] | ZHANG X, JIN X, GOPALSWAMY K, et al. First de-trend then attend: rethinking attention for time-series forecasting[EB/OL]. [2024-07-31]. |
| [20] | ZHANG X, ZHAO S, SONG Z, et al. Not all frequencies are created equal: towards a dynamic fusion of frequencies in time-series forecasting [C]// Proceedings of the 32nd ACM International Conference on Multimedia. New York: ACM, 2024: 4729-4737. |
| [21] | PIAO X, CHEN Z, MURAYAMA T, et al. Fredformer: frequency debiased transformer for time series forecasting [C]// Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: ACM, 2024: 2400-2410. |
| [22] | JIANG M, ZENG P, WANG K, et al. FECAM: frequency enhanced channel attention mechanism for time series forecasting[J]. Advanced Engineering Informatics, 2023, 58: No.102158. |
| [23] | DU Y, WANG J, FENG W, et al. AdaRNN: adaptive learning and forecasting of time series [C]// Proceedings of the 30th ACM International Conference on Information and Knowledge Management. New York: ACM, 2021: 402-411. |
| [24] | LIU Y, WU H, WANG J, et al. Non-stationary transformers: exploring the stationarity in time series forecasting [C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 9881-9893. |
| [25] | KIM T, KIM J, TAE Y, et al. Reversible instance normalization for accurate time-series forecasting against distribution shift[EB/OL]. [2024-11-18]. |
| [26] | FAN W, WANG P, WANG D, et al. Dish-TS: a general paradigm for alleviating distribution shift in time series forecasting [C]// Proceedings of the 37th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2023: 7522-7529. |
| [27] | QIU X, CHENG H, WU X, et al. A comprehensive survey of deep learning for multivariate time series forecasting: a channel strategy perspective[EB/OL]. [2025-03-28]. |
| [28] | LIU Y, HU T, ZHANG H, et al. iTransformer: inverted Transformers are effective for time series forecasting[EB/OL]. [2024-11-05]. |
| [29] | YI K, ZHANG Q, FAN W, et al. Frequency-domain MLPs are more effective learners in time series forecasting [C]// Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2024: 76656-76679. |
| [30] | QIU X, WU X, LIN Y, et al. DUET: dual clustering enhanced multivariate time series forecasting [C]// Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.1. New York: ACM, 2025: 1185-1196. |
| [31] | ZHOU L, WANG H. MST-GAT: a multi-perspective spatial-temporal graph attention network for multi-sensor equipment remaining useful life prediction[J]. Information Fusion, 2024, 110: No.102462. |
| [32] | LIU Z, CHENG M, LI Z, et al. Adaptive normalization for non-stationary time series forecasting: a temporal slice perspective [C]// Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2023: 14273-14292. |
| [33] | HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778. |
| [34] | WANG S, WU H, SHI X, et al. TimeMixer: decomposable multiscale mixing for time series forecasting[EB/OL]. [2024-11-18]. |
| [35] | CHEN Y, LIU S, YANG J, et al. A Joint Time-Frequency Domain Transformer for multivariate time series forecasting[J]. Neural Networks, 2024, 176: No.106334. |
| [36] | LI Z, RAO Z, PAN L, et al. MTS-Mixers: multivariate time series forecasting via factorized temporal and channel mixing[EB/OL]. [2024-10-24]. |
| [37] | MUSHTAQ R. Augmented Dickey-Fuller test[EB/OL]. [2024-12-25]. |