Journal of Computer Applications (《计算机应用》) ›› 2026, Vol. 46 ›› Issue (1): 113-123. DOI: 10.11772/j.issn.1001-9081.2024121750
Junheng WU1,2, Xiaodong WANG1,2, Qixue HE1,2 (corresponding author)
Received: 2024-12-12; Revised: 2025-03-17; Accepted: 2025-03-18; Online: 2026-01-10; Published: 2026-01-10
Contact: Qixue HE
About the author: WU Junheng, born in 1998, M.S. candidate. His research interests include time series prediction and machine learning.
Abstract: To address the prediction difficulties caused by complex periodicity and high-frequency noise in time series data, a time series prediction model based on statistical distribution sensing and frequency-domain dual-channel fusion was proposed, aiming to alleviate data drift, suppress noise interference, and improve prediction accuracy. First, the raw time series was processed by overlapping window slicing; the statistical distribution of each slice was computed and used for normalization, and a Multi-Layer Perceptron (MLP) predicted the statistical distribution of the future data. Second, the normalized sequence was passed through an adaptive time-frequency transform, and a channel-independent encoder together with a channel interaction learner strengthened the correlations within the frequency domain and across channels, yielding multi-scale frequency-domain representations. Finally, a linear prediction layer performed the inverse transform from the frequency domain back to the time domain, and at the output stage the model applied inverse normalization with the predicted future statistics to generate the final predictions. Comparison with the mainstream time series prediction model PatchTST (Patch Time Series Transformer) shows that the proposed model reduces the Mean Squared Error (MSE) by 5.3% and the Mean Absolute Error (MAE) by 4.0% on average on the Exchange, ETTm2, and Solar datasets, demonstrating good noise suppression and prediction performance. Ablation results further show that the statistical distribution sensing, adaptive frequency-domain, and dual-channel fusion modules each contribute significantly to prediction accuracy.
吴俊衡, 王晓东, 何启学. 基于统计分布感知与频域双通道融合的时序预测模型[J]. 计算机应用, 2026, 46(1): 113-123.
Junheng WU, Xiaodong WANG, Qixue HE. Time series prediction model based on statistical distribution sensing and frequency domain dual-channel fusion[J]. Journal of Computer Applications, 2026, 46(1): 113-123.
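The pipeline summarized in the abstract — overlapping-window slicing, per-slice statistical normalization, and inverse normalization with future statistics — can be sketched in plain Python. This is a minimal illustration under my own naming (`normalize_windows`, `denormalize` are not from the paper); in the actual model the future mean and standard deviation come from an MLP rather than from the input window:

```python
import statistics

def normalize_windows(series, win, stride):
    """Slice a series into overlapping windows and z-normalize each one,
    keeping the per-window statistics needed for de-normalization."""
    windows, stats = [], []
    for start in range(0, len(series) - win + 1, stride):
        w = series[start:start + win]
        mu = statistics.fmean(w)
        sigma = statistics.pstdev(w) or 1.0  # guard against flat windows
        windows.append([(v - mu) / sigma for v in w])
        stats.append((mu, sigma))
    return windows, stats

def denormalize(pred, mu, sigma):
    """Map a normalized prediction back to the original scale using the
    (in the model: predicted) future mean and standard deviation."""
    return [v * sigma + mu for v in pred]

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
wins, st = normalize_windows(series, win=4, stride=2)  # two overlapping slices
mu, sigma = st[0]
restored = denormalize(wins[0], mu, sigma)  # round-trips to series[:4]
```

Keeping the statistics outside the network is what lets the model learn on a drift-free normalized sequence while still producing outputs on the original scale.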
| Model | Time complexity | Space complexity |
|---|---|---|
| Proposed model | | |
| iTransformer[28] | | |
| PatchTST[18] | | |
| Crossformer[16] | | |
| Autoformer[14] | | |
| Informer[13] | | |
| DLinear[17] | | |
| Pyraformer[15] | | |
表1 不同模型的时间复杂度和空间复杂度
Tab. 1 Time complexity and space complexity of different models
| Dataset | Sampling interval/min | Dimensions | Time steps | Stationarity coefficient |
|---|---|---|---|---|
| Exchange | 1 440 | 8 | 7 588 | -1.90 |
| ETTm2 | 15 | 7 | 69 680 | -5.66 |
| Solar | 10 | 137 | 52 560 | -7.69 |
| Electricity | 60 | 321 | 26 304 | -8.44 |
| Weather | 10 | 21 | 52 696 | -26.68 |
表2 数据集基本信息
Tab. 2 Basic information of datasets
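The stationarity coefficient in Table 2 is the Augmented Dickey-Fuller (ADF) test statistic [37]: the more negative the value, the more stationary the dataset (Weather at -26.68 is far more stationary than Exchange at -1.90). The core idea can be sketched with a simplified Dickey-Fuller statistic in plain Python — no lag augmentation, constant, or trend terms, so this is an illustration, not the full ADF procedure used for the table:

```python
import random

def dickey_fuller_stat(y):
    """t-statistic of rho in the regression dy_t = rho * y_{t-1} + e_t.
    Strongly negative values indicate a stationary series."""
    x = y[:-1]
    dy = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    sxx = sum(v * v for v in x)
    rho = sum(a * b for a, b in zip(x, dy)) / sxx
    resid = [d - rho * v for d, v in zip(dy, x)]
    s2 = sum(r * r for r in resid) / (len(dy) - 1)  # residual variance
    return rho / (s2 / sxx) ** 0.5

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(500)]  # stationary white noise
walk = [0.0]                                      # non-stationary random walk
for _ in range(499):
    walk.append(walk[-1] + random.gauss(0, 1))

stat_noise = dickey_fuller_stat(noise)  # strongly negative
stat_walk = dickey_fuller_stat(walk)    # much closer to zero
```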
| Dataset | Prediction length | Proposed model | PatchTST | iTransformer | DLinear | Crossformer | Autoformer | Informer | |||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | ||
| Weather | 96 | 0.145 | 0.197 | 0.163 | 0.212 | 0.176 | 0.237 | 0.226 | 0.301 | 0.301 | 0.343 | 0.354 | 0.405 | ||
| 192 | 0.190 | 0.239 | 0.203 | 0.249 | 0.220 | 0.282 | 0.215 | 0.289 | 0.325 | 0.370 | 0.419 | 0.434 | |||
| 336 | 0.243 | 0.281 | 0.255 | 0.289 | 0.265 | 0.319 | 0.319 | 0.317 | 0.351 | 0.391 | 0.583 | 0.543 | |||
| 720 | 0.308 | 0.328 | 0.326 | 0.337 | 0.326 | 0.366 | 0.381 | 0.379 | 0.422 | 0.433 | 0.916 | 0.705 | |||
| Exchange | 96 | 0.079 | 0.199 | 0.093 | 0.215 | 0.122 | 0.264 | 0.135 | 0.278 | 0.182 | 0.312 | 0.836 | 0.773 | ||
| 192 | 0.166 | 0.275 | 0.189 | 0.310 | 0.205 | 0.347 | 0.263 | 0.309 | 0.316 | 0.414 | 0.927 | 0.816 | |||
| 336 | 0.312 | 0.401 | 0.343 | 0.425 | 0.332 | 0.440 | 0.442 | 0.519 | 0.519 | 0.535 | 1.078 | 0.874 | |||
| 720 | 0.748 | 0.650 | 0.870 | 0.707 | 0.869 | 0.705 | 1.089 | 0.824 | 1.209 | 0.854 | 1.153 | 0.892 | |||
| ETTm2 | 96 | 0.163 | 0.254 | 0.166 | 0.179 | 0.272 | 0.239 | 0.298 | 0.280 | 0.366 | 0.355 | 0.462 | |||
| 192 | 0.222 | 0.295 | 0.241 | 0.315 | 0.226 | 0.306 | 0.307 | 0.346 | 0.310 | 0.371 | 0.595 | 0.586 | |||
| 336 | 0.269 | 0.312 | 0.290 | 0.344 | 0.274 | 0.335 | 0.323 | 0.377 | 0.343 | 0.388 | 1.270 | 0.871 | |||
| 720 | 0.355 | 0.379 | 0.376 | 0.397 | 0.380 | 0.408 | 0.405 | 0.429 | 0.412 | 0.433 | 3.999 | 1.704 | |||
| Electricity | 96 | 0.132 | 0.129 | 0.222 | 0.228 | 0.140 | 0.237 | 0.166 | 0.293 | 0.196 | 0.313 | 0.304 | 0.393 | ||
| 192 | 0.148 | 0.240 | 0.155 | 0.249 | 0.153 | 0.249 | 0.187 | 0.302 | 0.211 | 0.324 | 0.327 | 0.417 | |||
| 336 | 0.160 | 0.251 | 0.170 | 0.266 | 0.169 | 0.267 | 0.205 | 0.324 | 0.214 | 0.327 | 0.333 | 0.422 | |||
| 720 | 0.192 | 0.283 | 0.210 | 0.298 | 0.207 | 0.301 | 0.211 | 0.338 | 0.236 | 0.342 | 0.351 | 0.427 | |||
| Solar | 96 | 0.174 | 0.214 | 0.201 | 0.260 | 0.221 | 0.289 | 0.241 | 0.299 | 0.266 | 0.311 | 0.208 | 0.237 | ||
| 192 | 0.192 | 0.239 | 0.228 | 0.266 | 0.249 | 0.285 | 0.268 | 0.314 | 0.271 | 0.315 | 0.229 | 0.259 | |||
| 336 | 0.210 | 0.248 | 0.221 | 0.266 | 0.263 | 0.291 | 0.288 | 0.311 | 0.281 | 0.317 | 0.235 | 0.272 | |||
| 720 | 0.213 | 0.250 | 0.230 | 0.279 | 0.244 | 0.296 | 0.271 | 0.315 | 0.295 | 0.319 | 0.233 | 0.275 | |||
| Improvement rate/% | 5.3 | 4.0 | 7.0 | 6.3 | 11.2 | 11.2 | 24.7 | 19.6 | 33.1 | 26.2 | 53.2 | 42.5 |||
表3 不同模型在不同数据集上的预测效果对比
Tab. 3 Comparison of prediction effects of different models on different datasets
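For clarity on how Table 3 is scored: MSE and MAE are per-point error averages over the prediction horizon, and each entry in the improvement-rate row is the relative error reduction of the proposed model against the corresponding baseline, averaged over datasets and horizons. A minimal sketch (function names are mine):

```python
def mse(y_true, y_pred):
    """Mean squared error over aligned target/prediction points."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error over aligned target/prediction points."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def improvement_pct(baseline_err, model_err):
    """Relative error reduction in percent, as in the improvement-rate row."""
    return 100.0 * (baseline_err - model_err) / baseline_err
```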
图4 Weather数据集第2个维度体感温度上输入长度为336且输出长度为96的效果对比
Fig. 4 Comparison of performance on the second dimension (perceived temperature) of Weather dataset with input length of 336 and output length of 96
图5 ETTm2数据集第8个维度变压器油温上输入长度为336且输出长度为720的效果对比
Fig. 5 Comparison of performance on the 8th dimension (transformer oil temperature) of ETTm2 dataset with input length of 336 and output length of 720
图6 Solar数据集第3个维度气压上输入长度为336且输出长度为720的效果对比
Fig. 6 Comparison of performance on the third dimension (barometric pressure) of Solar dataset with input length of 336 and output length of 720
| [1] | 杨汪洋,魏云冰,罗程浩. 基于CVMD-TCN-BiLSTM的短期电力负荷预测[J]. 电气工程学报,2024, 19(2): 163-172. |
| YANG W Y, WEI Y B, LUO C H. Short-term electricity load forecasting based on CVMD-TCN-BiLSTM[J]. Journal of Electrical Engineering, 2024, 19(2): 163-172. | |
| [2] | KAUSHIK S, CHOUDHURY A, SHERON P K, et al. AI in healthcare: time-series forecasting using statistical, neural, and ensemble architectures[J]. Frontiers in Big Data, 2020, 3: No.4. |
| [3] | HOU M, XU C, LI Z, et al. Multi-granularity residual learning with confidence estimation for time series prediction [C]// Proceedings of the ACM Web Conference 2022. New York: ACM, 2022: 112-121. |
| [4] | 王艺霏,于雷,滕飞,等. 基于长-短时序特征融合的资源负载预测模型[J]. 计算机应用,2022, 42(5): 1508-1515. |
| WANG Y F, YU L, TENG F, et al. Resource load prediction model based on long-short time series feature fusion[J]. Journal of Computer Applications, 2022, 42(5): 1508-1515. | |
| [5] | LIU Z, ZHU Z, GAO J, et al. Forecast methods for time series data: a survey[J]. IEEE Access, 2021, 9: 91896-91912. |
| [6] | WU H, HU T, LIU Y, et al. TimesNet: temporal 2D-variation modeling for general time series analysis[EB/OL]. [2024-11-18]. |
| [7] | DENG A, HOOI B. Graph neural network-based anomaly detection in multivariate time series [C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 4027-4035. |
| [8] | VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010. |
| [9] | KALYAN K S, RAJASEKHARAN A, SANGEETHA S. AMMUS: a survey of transformer-based pretrained models in natural language processing[EB/OL]. [2024-12-08]. |
| [10] | HAN K, WANG Y, CHEN H, et al. A survey on Vision Transformer[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 87-110. |
| [11] | WEN Q, ZHOU T, ZHANG C, et al. Transformers in time series: a survey [C]// Proceedings of the 32nd International Joint Conference on Artificial Intelligence. California: ijcai.org, 2023: 6778-6786. |
| [12] | ZHOU T, MA Z, WEN Q, et al. FEDformer: frequency enhanced decomposed transformer for long-term series forecasting [C]// Proceedings of the 39th International Conference on Machine Learning. New York: JMLR.org, 2022: 27268-27286. |
| [13] | ZHOU H, ZHANG S, PENG J, et al. Informer: beyond efficient transformer for long sequence time-series forecasting [C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 11106-11115. |
| [14] | WU H, XU J, WANG J, et al. Autoformer: decomposition transformers with auto-correlation for long-term series forecasting [C]// Proceedings of the 35th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2021: 22419-22430. |
| [15] | LIU S, YU H, LIAO C, et al. Pyraformer: low-complexity pyramidal attention for long-range time series modeling and forecasting[EB/OL]. [2024-11-18]. |
| [16] | ZHANG Y, YAN J. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting[EB/OL]. [2024-11-18]. |
| [17] | ZENG A, CHEN M, ZHANG L, et al. Are Transformers effective for time series forecasting? [C]// Proceedings of the 37th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2023: 11121-11128. |
| [18] | NIE Y, NGUYEN N H, SINTHONG P, et al. A time series is worth 64 words: long-term forecasting with Transformers[EB/OL]. [2024-07-25]. |
| [19] | ZHANG X, JIN X, GOPALSWAMY K, et al. First de-trend then attend: rethinking attention for time-series forecasting[EB/OL]. [2024-07-31]. |
| [20] | ZHANG X, ZHAO S, SONG Z, et al. Not all frequencies are created equal: towards a dynamic fusion of frequencies in time-series forecasting [C]// Proceedings of the 32nd ACM International Conference on Multimedia. New York: ACM, 2024: 4729-4737. |
| [21] | PIAO X, CHEN Z, MURAYAMA T, et al. Fredformer: frequency debiased transformer for time series forecasting [C]// Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: ACM, 2024: 2400-2410. |
| [22] | JIANG M, ZENG P, WANG K, et al. FECAM: frequency enhanced channel attention mechanism for time series forecasting[J]. Advanced Engineering Informatics, 2023, 58: No.102158. |
| [23] | DU Y, WANG J, FENG W, et al. AdaRNN: adaptive learning and forecasting of time series [C]// Proceedings of the 30th ACM International Conference on Information and Knowledge Management. New York: ACM, 2021: 402-411. |
| [24] | LIU Y, WU H, WANG J, et al. Non-stationary transformers: exploring the stationarity in time series forecasting [C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 9881-9893. |
| [25] | KIM T, KIM J, TAE Y, et al. Reversible instance normalization for accurate time-series forecasting against distribution shift[EB/OL]. [2024-11-18]. |
| [26] | FAN W, WANG P, WANG D, et al. Dish-TS: a general paradigm for alleviating distribution shift in time series forecasting [C]// Proceedings of the 37th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2023: 7522-7529. |
| [27] | QIU X, CHENG H, WU X, et al. A comprehensive survey of deep learning for multivariate time series forecasting: a channel strategy perspective[EB/OL]. [2025-03-28]. |
| [28] | LIU Y, HU T, ZHANG H, et al. iTransformer: inverted Transformers are effective for time series forecasting[EB/OL]. [2024-11-05]. |
| [29] | YI K, ZHANG Q, FAN W, et al. Frequency-domain MLPs are more effective learners in time series forecasting [C]// Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2024: 76656-76679. |
| [30] | QIU X, WU X, LIN Y, et al. DUET: dual clustering enhanced multivariate time series forecasting [C]// Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.1. New York: ACM, 2025: 1185-1196. |
| [31] | ZHOU L, WANG H. MST-GAT: a multi-perspective spatial-temporal graph attention network for multi-sensor equipment remaining useful life prediction[J]. Information Fusion, 2024, 110: No.102462. |
| [32] | LIU Z, CHENG M, LI Z, et al. Adaptive normalization for non-stationary time series forecasting: a temporal slice perspective [C]// Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2023: 14273-14292. |
| [33] | HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778. |
| [34] | WANG S, WU H, SHI X, et al. TimeMixer: decomposable multiscale mixing for time series forecasting[EB/OL]. [2024-11-18]. |
| [35] | CHEN Y, LIU S, YANG J, et al. A Joint Time-Frequency Domain Transformer for multivariate time series forecasting[J]. Neural Networks, 2024, 176: No.106334. |
| [36] | LI Z, RAO Z, PAN L, et al. MTS-Mixers: multivariate time series forecasting via factorized temporal and channel mixing[EB/OL]. [2024-10-24]. |
| [37] | MUSHTAQ R. Augmented Dickey-Fuller test[EB/OL]. [2024-12-25]. |