Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (9): 2674-2682. DOI: 10.11772/j.issn.1001-9081.2023091359
Received: 2023-10-09
Revised: 2023-12-08
Accepted: 2023-12-11
Online: 2024-03-21
Published: 2024-09-10
Contact: Guoming SANG
About author: LI Jinjin, born in 2000 in Luohe, Henan, is an M. S. candidate and CCF member. Her research interests include natural language processing and rumor detection.
Supported by:
Jinjin LI, Guoming SANG, Yijia ZHANG
Abstract:
To address domain shift and incomplete domain labeling in social media news, and to explore more efficient networks for extracting and fusing multi-domain news text features, a multi-domain fake news detection model, Transm3, enhanced by APK-CNN (Adaptive Pooling Kernel Convolutional Neural Network) and Transformer, was proposed. First, a three-channel network was designed to extract and represent the semantic, emotional, and stylistic information of the text, and a multi-granularity cross-domain interactor was used to combine these features into views. Second, the news domain labels were refined through an optimized soft shared memory network and a domain adapter. Third, the Transformer was combined with the multi-granularity cross-domain interactor so that a more advanced fusion network could dynamically weight and aggregate the interaction features of different domains. Finally, the fused features were fed into the classifier to discriminate real from fake news. Experimental results show that, compared with M3FEND (Memory-guided Multi-view Multi-domain FakE News Detection) and EANN (Event Adversarial Neural Networks for multi-modal fake news detection), Transm3 improves the overall F1 score by 3.68% and 6.46% on the Chinese dataset and by 6.75% and 11.93% on the English dataset, respectively, with clear F1 gains in every subdomain, fully validating the effectiveness of Transm3 for multi-domain fake news detection.
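As a reading aid, the fusion stage described in the abstract (three feature views, cross-view interaction, dynamically weighted aggregation, and a binary classifier) can be sketched with random, untrained weights. Every name, shape, and weight below is illustrative only and does not come from the paper's implementation; the real model uses trained APK-CNN and Transformer components.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d = 8  # toy feature dimension (the paper does not specify sizes here)

# Three channels: semantic, emotional, and stylistic features of one news item.
semantic, emotion, style = (rng.normal(size=d) for _ in range(3))
views = np.stack([semantic, emotion, style])            # (3, d)

# Single-head self-attention over the three views: a stand-in for the
# Transformer + multi-granularity cross-domain interactor.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = views @ Wq, views @ Wk, views @ Wv
attn = softmax(Q @ K.T / np.sqrt(d))                    # (3, 3) interaction weights
mixed = attn @ V                                        # (3, d) interacted views

# Dynamic weighted aggregation: a per-view gate, then a weighted sum.
gate = softmax(mixed @ rng.normal(size=d))              # (3,) view weights, sum to 1
fused = gate @ mixed                                    # (d,) fused representation

# Binary real/fake classifier head.
logit = fused @ rng.normal(size=d)
p_fake = 1.0 / (1.0 + np.exp(-logit))
print(f"view weights: {gate.round(3)}, P(fake) = {p_fake:.3f}")
```

With trained weights, the gate would learn how much the semantic, emotional, and stylistic views each contribute per news item; here the weights are random, so the printed values are arbitrary.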
CLC number:
Jinjin LI, Guoming SANG, Yijia ZHANG. Multi-domain fake news detection model enhanced by APK-CNN and Transformer[J]. Journal of Computer Applications, 2024, 44(9): 2674-2682.
| Domain | Real news | Fake news | Domain | Real news | Fake news |
|---|---|---|---|---|---|
| Science | 143 | 93 | Health | 485 | 515 |
| Military | 121 | 222 | Finance | 959 | 362 |
| Education | 243 | 248 | Entertainment | 1 000 | 440 |
| Disaster | 185 | 591 | Society | 1 198 | 1 471 |
| Politics | 306 | 546 | Total | 4 640 | 4 488 |

Tab. 1 Data distribution of Chinese dataset Ch-9
| Domain | Real news | Fake news |
|---|---|---|
| Gossipcop | 16 804 | 5 067 |
| Politifact | 447 | 379 |
| COVID | 4 750 | 1 317 |
| Total | 22 001 | 6 763 |

Tab. 2 Data distribution of English dataset En-3
| Group | Model | Gossipcop F1 | Politifact F1 | COVID F1 | Overall F1 | Acc | AUC |
|---|---|---|---|---|---|---|---|
| Single-domain | BIGRU | 76.66 | 77.22 | 88.85 | 79.58 | 86.68 | 88.40 |
| | TextCNN | 77.86 | 80.11 | 90.40 | 80.79 | 86.92 | 90.23 |
| | RoBERTa | 78.10 | 85.83 | 92.88 | 81.84 | 88.02 | 91.08 |
| Mixed-domain | BIGRU | 74.79 | 73.39 | 74.48 | 75.01 | 83.21 | 85.04 |
| | TextCNN | 75.19 | 70.40 | 83.22 | 76.79 | 83.62 | 86.74 |
| | RoBERTa | 78.23 | 79.67 | 90.14 | 81.01 | 87.44 | 90.58 |
| | StyleLSTM | 80.07 | 79.37 | 92.52 | 82.85 | 88.26 | 92.50 |
| | DualEmo | 80.56 | 78.68 | 90.19 | 82.70 | 88.18 | 92.51 |
| Multi-domain | EANN | 79.37 | 75.58 | 88.36 | 81.23 | 87.43 | 90.53 |
| | MMoE | 80.22 | 84.77 | 93.79 | 83.61 | 89.20 | 92.65 |
| | MoSE | 79.81 | | 93.26 | 83.18 | 88.85 | 92.52 |
| | EDDFN | 80.67 | 85.05 | 93.06 | 83.78 | 89.12 | 92.63 |
| | MDFEND | 80.80 | 84.73 | 93.31 | 83.90 | 89.36 | 92.37 |
| | M3FEND | 84.78 | | | | | |
| | Transm3 | 84.67 | 89.82 | 94.86 | 90.92 | 92.12 | 96.54 |

Tab. 3 Experimental results of different models on En-3 dataset (%)
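Tables 3 and 4 report F1, accuracy, and AUC in percent. As a reminder of how the first two are derived from a binary confusion matrix, here is a minimal sketch; the counts are invented for illustration and are not taken from the paper, and the paper does not specify its exact averaging convention for F1 (per-class versus macro), so both per-class scores are computed below.

```python
def precision_recall_f1(tp, fp, fn):
    """F1 for one class from confusion counts (tp, fp, fn)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Toy confusion counts for a fake-news classifier (illustrative only).
tp, tn, fp, fn = 40, 45, 10, 5
f1_fake = precision_recall_f1(tp, fp, fn)   # fake as the positive class
f1_real = precision_recall_f1(tn, fn, fp)   # real as the positive class
macro_f1 = (f1_fake + f1_real) / 2
print(f"F1(fake) = {100 * f1_fake:.2f}%, macro F1 = {100 * macro_f1:.2f}%, "
      f"Acc = {100 * accuracy(tp, tn, fp, fn):.2f}%")
```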
| Group | Model | Science | Military | Education | Disaster | Politics | Health | Finance | Entertainment | Society | F1 | Acc | AUC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-domain | BIGRU | 51.75 | 33.65 | 74.16 | 72.93 | 85.88 | 83.73 | 81.37 | 79.92 | 79.18 | 81.03 | 81.03 | 89.02 |
| | TextCNN | 40.74 | 33.65 | 80.59 | 43.88 | 84.82 | 88.19 | 82.15 | 79.73 | 86.15 | 83.69 | 83.70 | 90.94 |
| | RoBERTa | 74.63 | 73.69 | 81.46 | 75.47 | 80.44 | 88.73 | 83.61 | 85.13 | 83.00 | 84.77 | 84.77 | 92.26 |
| Mixed-domain | BIGRU | 72.69 | 87.24 | 81.38 | 79.35 | 83.56 | 88.68 | 82.91 | 86.29 | 84.85 | 85.95 | 85.98 | 93.09 |
| | TextCNN | 72.54 | 88.39 | 83.62 | 82.22 | 85.61 | 87.68 | 86.38 | 84.56 | 85.40 | 86.86 | 86.87 | 93.81 |
| | RoBERTa | 77.77 | 90.72 | 83.31 | 85.12 | 83.66 | 90.90 | 87.35 | 87.69 | 85.77 | 87.95 | 87.97 | 94.51 |
| | StyleLSTM | 77.29 | 91.87 | 83.41 | 85.32 | 84.87 | 90.84 | 88.02 | 88.46 | 85.52 | 88.20 | 88.21 | 94.71 |
| | DualEmo | 83.23 | 90.26 | 83.62 | 83.96 | 84.55 | 89.05 | | 89.44 | 85.69 | 88.46 | 88.46 | 95.41 |
| Multi-domain | EANN | 82.25 | 92.74 | 86.24 | 86.66 | 87.05 | 91.05 | 87.10 | 89.57 | 88.77 | 89.75 | 89.77 | 96.10 |
| | MMoE | | 91.12 | 87.06 | 87.70 | 86.20 | 93.64 | 85.67 | 88.86 | 87.50 | 89.47 | 89.48 | 95.47 |
| | MoSE | 85.02 | 88.58 | 88.15 | 86.72 | 88.08 | 91.79 | 86.72 | 89.13 | 87.29 | 89.39 | 89.40 | 95.43 |
| | EDDFN | 81.86 | 91.37 | 86.76 | 87.86 | 84.78 | 93.79 | 86.36 | 88.32 | 86.89 | 89.19 | 89.19 | 95.28 |
| | MDFEND | 83.01 | 93.89 | 89.17 | | | 94.00 | 89.51 | 90.66 | 89.80 | 91.37 | 91.38 | 97.08 |
| | M3FEND | 82.92 | 88.96 | 88.25 | 90.09 | | | | | | | | |
| | Transm3 | 89.43 | 98.07 | 91.11 | 92.36 | 90.74 | 96.90 | 92.56 | 94.90 | 91.95 | 95.55 | 95.57 | 98.95 |

Tab. 4 Experimental results of different models on Ch-9 dataset (%)
| Dataset | Heads | F1/% | Acc/% | Dataset | Heads | F1/% | Acc/% |
|---|---|---|---|---|---|---|---|
| En-3 | 1 | 87.45 | 88.01 | Ch-9 | 1 | 91.14 | 91.15 |
| | 2 | 89.66 | 88.97 | | 2 | 93.20 | 93.19 |
| | 4 | 90.92 | 92.12 | | 4 | 95.55 | 95.57 |
| | 8 | 88.12 | 89.63 | | 8 | 92.35 | 92.35 |

Tab. 5 Impact of head number of attention mechanism on model performance
| Dataset | Layers | F1/% | Acc/% | Dataset | Layers | F1/% | Acc/% |
|---|---|---|---|---|---|---|---|
| En-3 | 1 | 90.92 | 92.12 | Ch-9 | 1 | 95.55 | 95.57 |
| | 2 | 89.12 | 91.13 | | 2 | 92.12 | 92.13 |
| | 4 | 87.98 | 89.98 | | 4 | 90.98 | 90.98 |
| | 8 | 83.94 | 86.91 | | 8 | 89.94 | 89.99 |

Tab. 6 Impact of number of Encoder layers on model performance
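Tables 5 and 6 vary the two structural hyperparameters of the Transformer encoder: the number of attention heads (best: 4) and the number of stacked encoder layers (best: 1). The sketch below, with invented sizes and random weights not taken from the paper, shows what those two knobs control: the model dimension is split evenly across heads, and each extra layer reapplies attention to the previous layer's output.

```python
import numpy as np

def multi_head_self_attention(x, num_heads, rng):
    """Toy single-layer multi-head self-attention; weights are random and
    illustrative only (no trained parameters, no residual/FFN sublayers)."""
    n, d = x.shape
    assert d % num_heads == 0, "d_model must be divisible by the head count"
    dh = d // num_heads                     # per-head dimension
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.normal(size=(d, dh)) for _ in range(3))
        Q, K, V = x @ Wq, x @ Wk, x @ Wv
        scores = Q @ K.T / np.sqrt(dh)      # scaled dot-product attention
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)  # row-wise softmax
        heads.append(w @ V)                 # (n, dh) per head
    return np.concatenate(heads, axis=-1)   # (n, d): heads rejoined

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 8))            # 3 feature views, toy d_model = 8

num_layers = 1                              # best setting in Table 6
h = tokens
for _ in range(num_layers):                 # stacking reapplies attention
    h = multi_head_self_attention(h, num_heads=4, rng=rng)  # 4 heads, Table 5
print(h.shape)
```

Note the shape is unchanged regardless of head count or depth; the tables show that on these datasets, extra depth beyond one layer hurt rather than helped.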
References

[1] SILVA A, LUO L, KARUNASEKERA S, et al. Embracing domain differences in fake news: cross-domain fake news detection using multi-modal data [C]// Proceedings of the 2021 AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021, 35(1): 557-565.
[2] NAN Q, CAO J, ZHU Y, et al. MDFEND: multi-domain fake news detection [C]// Proceedings of the 30th ACM International Conference on Information & Knowledge Management. New York: ACM, 2021: 3343-3347.
[3] ZHU Y, SHENG Q, CAO J, et al. Memory-guided multi-view multi-domain fake news detection [J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(7): 7178-7191.
[4] SINGHAL S, SHAH R R, CHAKRABORTY T, et al. SpotFake: a multi-modal framework for fake news detection [C]// Proceedings of the 2019 IEEE 5th International Conference on Multimedia Big Data. Piscataway: IEEE, 2019: 39-47.
[5] MA J, GAO W, WONG K-F. Detect rumors on Twitter by promoting information campaigns with generative adversarial learning [C]// Proceedings of the 2019 World Wide Web Conference. New York: ACM, 2019: 3049-3055.
[6] GANIN Y, USTINOVA E, AJAKAN H, et al. Domain-adversarial training of neural networks [J]. The Journal of Machine Learning Research, 2016, 17(1): 2096-2030.
[7] MA J, ZHAO Z, YI X, et al. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts [C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York: ACM, 2018: 1930-1939.
[8] ZHU Y, ZHUANG F, WANG D. Aligning domain-specific distribution and classifier for cross-domain classification from multiple sources [C]// Proceedings of the 2019 AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2019, 33(1): 5989-5996.
[9] ZADEH A, LIANG P P, MAZUMDER N, et al. Memory fusion network for multi-view sequential learning [C]// Proceedings of the 2018 AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2018, 32(1): 5634-5641.
[10] ZHANG X, CAO J, LI X, et al. Mining dual emotion for fake news detection [C]// Proceedings of the Web Conference 2021. New York: ACM, 2021: 3465-3476.
[11] YANG Y, CAO J, LU M, et al. How to write high-quality news on social network? Predicting news quality by mining writing style [EB/OL]. [2022-08-17].
[12] CASTILLO C, MENDOZA M, POBLETE B. Information credibility on Twitter [C]// Proceedings of the 20th International Conference on World Wide Web. New York: ACM, 2011: 675-684.
[13] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010.
[14] MA J, GAO W, MITRA P, et al. Detecting rumors from microblogs with recurrent neural networks [C]// Proceedings of the 25th International Joint Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2016: 3818-3824.
[15] XIA H, WANG Y, ZHANG J Z, et al. COVID-19 fake news detection: a hybrid CNN-BiLSTM-AM model [J]. Technological Forecasting and Social Change, 2023, 195: 122746.
[16] SHAFIQ M, GU Z. Deep residual learning for image recognition: a survey [J]. Applied Sciences, 2022, 12(18): 8972.
[17] ZENG G, CHI J, MA R, et al. ADAPT: adversarial domain adaptation with purifier training for cross-domain credit risk forecasting [C]// Proceedings of the 27th International Conference on Database Systems for Advanced Applications. Cham: Springer, 2022: 353-369.
[18] RAZA S, DING C. Fake news detection based on news content and social contexts: a Transformer-based approach [J]. International Journal of Data Science and Analytics, 2022, 13(4): 335-362.
[19] DAVOUDI M, MOOSAVI M R, SADREDDINI M H. DSS: a hybrid deep model for fake news detection using propagation tree and stance network [J]. Expert Systems with Applications, 2022, 198: 116635.
[20] SHAHID W, JAMSHIDI B, HAKAK S, et al. Detecting and mitigating the dissemination of fake news: challenges and future research opportunities [J]. IEEE Transactions on Computational Social Systems, 2024, 11(4): 4649-4662.
[21] HUANG K-H, McKEOWN K, NAKOV P, et al. Faking fake news for real fake news detection: propaganda-loaded training data generation [EB/OL]. [2023-03-13].
[22] MOHAPATRA A, THOTA N, PRAKASAM P. Fake news detection and classification using hybrid BiLSTM and self-attention model [J]. Multimedia Tools and Applications, 2022, 81(13): 18503-18519.
[23] KIM Y. Convolutional neural networks for sentence classification [EB/OL]. [2022-12-02].
[24] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach [EB/OL]. [2023-02-12].
[25] CUI Y, CHE W, LIU T, et al. Pre-training with whole word masking for Chinese BERT [J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3504-3514.
[26] PRZYBYLA P. Capturing the style of fake news [C]// Proceedings of the 2020 AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2020, 34(1): 490-497.
[27] WANG Y, MA F, JIN Z, et al. EANN: event adversarial neural networks for multi-modal fake news detection [C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York: ACM, 2018: 849-857.
[28] QIN Z, CHENG Y, ZHAO Z, et al. Multitask mixture of sequential experts for user activity streams [C]// Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York: ACM, 2020: 3083-3091.