Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (7): 2034-2040. DOI: 10.11772/j.issn.1001-9081.2023071005
• Artificial intelligence •
Qianhui LU1, Yu ZHANG1, Mengling WANG1, Tingwei WU1, Yuzhong SHAN2
Received: 2023-07-25
Revised: 2023-10-08
Accepted: 2023-10-10
Online: 2023-10-26
Published: 2024-07-10
Contact: Mengling WANG
About author: LU Qianhui, born in 2001 in Nantong, Jiangsu, M. S. candidate. Her research interests include data mining.
Qianhui LU, Yu ZHANG, Mengling WANG, Tingwei WU, Yuzhong SHAN. Classification model of nuclear power equipment quality text based on improved recurrent pooling network[J]. Journal of Computer Applications, 2024, 44(7): 2034-2040.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2023071005
| No. | Nuclear power equipment quality text description (sample) | Defect stage | Samples in stage |
|---|---|---|---|
| 1 | The generator over-frequency protection is not affected by the generator *** the setting value of the generator over-frequency protection, so there is a risk of falsely tripping the main transformer. *** caused the generator over-frequency protection to mis-operate and falsely trip the high-voltage switch. During normal unit operation, *** may produce overspeed, *** causing the generator over-frequency protection to act and trip the high-voltage switch. | Commissioning | 6 380 |
| 2 | In the GSY system, *** the convenience of overhauling and maintaining the generator outlet circuit breaker was not considered. | Construction | 1 348 |
| 3 | *** on site, while performing the *** test, the pump *** lubricating-oil cooling-water return flow *** was below the "minimum required value of the lubricating-oil cooler cooling-water return flow" *** requirement. | Procurement | 5 698 |
| 4 | *** condensate intermittently flowed back from the pump outlet exhaust pipe to ***. Sampling and analysis of the lubricating oil showed *** the lubricating oil was severely emulsified. *** a large amount of foam formed in the lubricating oil during pump operation, *** causing the oil-tank level to drop. | Design | 904 |
Tab. 1 Text samples of nuclear power equipment quality
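Tab. 1 shows a pronounced class imbalance across defect stages (6 380 commissioning samples versus 904 design samples), which is the situation the loss functions compared in Tab. 2-4 are meant to handle. As a minimal, hypothetical illustration only (this is a generic baseline, not the RFFL loss proposed by the authors), inverse-frequency class weights derived from these counts could be fed to a weighted cross-entropy:

```python
import torch
import torch.nn as nn

# Stage counts taken from Tab. 1; the weighting scheme below is a common
# baseline for imbalanced data, NOT the RFFL loss proposed in the paper.
stage_counts = {"commissioning": 6380, "construction": 1348,
                "procurement": 5698, "design": 904}

counts = torch.tensor(list(stage_counts.values()), dtype=torch.float32)
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency class weights

criterion = nn.CrossEntropyLoss(weight=weights)  # weighted cross-entropy
```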
| Loss function | Stage | Precision/% | Recall/% | F1/% | Average F1/% |
|---|---|---|---|---|---|
| CE | Design | 70 | 97 | 81 | 88 |
| | Construction | 95 | 91 | 93 | |
| | Procurement | 100 | 80 | 89 | |
| | Commissioning | 100 | 81 | 90 | |
| FL | Design | 100 | 74 | 85 | 90 |
| | Construction | 100 | 85 | 92 | |
| | Procurement | 77 | 97 | 86 | |
| | Commissioning | 87 | 93 | 90 | |
| RFFL | Design | 100 | 82 | 90 | 91 |
| | Construction | 100 | 90 | 95 | |
| | Procurement | 82 | 93 | 88 | |
| | Commissioning | 85 | 94 | 89 | |
Tab. 2 Performance analysis of classification model based on Text_RNN_Att
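Tab. 2-4 compare three training objectives: standard cross-entropy (CE), focal loss (FL) of Lin et al. [12], and the authors' RFFL variant. For readers unfamiliar with FL, the sketch below shows only the standard multi-class focal loss; the RFFL modification is not reproduced here, and the focusing parameter gamma = 2 is an assumed default rather than a value reported by the paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Standard multi-class focal loss (Lin et al. [12]); gamma=2 is an assumed default.

    logits: (batch, num_classes) raw scores; targets: (batch,) integer class labels.
    """
    log_p = F.log_softmax(logits, dim=-1)                      # log-probabilities
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t of the true class
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()              # modulating factor (1 - p_t)^gamma
```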
| Loss function | Stage | Precision/% | Recall/% | F1/% | Average F1/% |
|---|---|---|---|---|---|
| CE | Design | 64 | 93 | 76 | 86 |
| | Construction | 95 | 89 | 92 | |
| | Procurement | 100 | 76 | 86 | |
| | Commissioning | 100 | 80 | 89 | |
| FL | Design | 100 | 74 | 85 | 88 |
| | Construction | 100 | 83 | 91 | |
| | Procurement | 78 | 96 | 86 | |
| | Commissioning | 88 | 92 | 90 | |
| RFFL | Design | 100 | 82 | 90 | 91 |
| | Construction | 100 | 90 | 95 | |
| | Procurement | 82 | 95 | 88 | |
| | Commissioning | 84 | 95 | 89 | |
Tab. 3 Performance analysis of classification model based on Text_DPCNN
| Loss function | Stage | Precision/% | Recall/% | F1/% | Average F1/% |
|---|---|---|---|---|---|
| CE | Design | 85 | 92 | 88 | 88 |
| | Construction | 74 | 95 | 83 | |
| | Procurement | 100 | 80 | 89 | |
| | Commissioning | 100 | 83 | 91 | |
| FL | Design | 100 | 80 | 89 | 89 |
| | Construction | 100 | 91 | 95 | |
| | Procurement | 81 | 94 | 87 | |
| | Commissioning | 81 | 92 | 86 | |
| RFFL | Design | 100 | 84 | 91 | 92 |
| | Construction | 100 | 93 | 96 | |
| | Procurement | 75 | 99 | 85 | |
| | Commissioning | 97 | 91 | 94 | |
Tab. 4 Performance analysis of classification model based on IRPN
| Model | Precision/% | Recall/% | F1/% |
|---|---|---|---|
| Fast_Text-RFFL | 93 | 88 | 90 |
| Text_RNN-RFFL | 92 | 90 | 90 |
| Text_RNN_Att-RFFL | 92 | 90 | 91 |
| Text_CNN-RFFL | 92 | 90 | 91 |
| Text_DPCNN-RFFL | 92 | 91 | 91 |
| IRPN_RFFL | 93 | 92 | 92 |
Tab. 5 Performance analysis of classification models for quality text
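The per-model scores in Tab. 5 and Tab. 6 aggregate the metrics over the four defect stages (respectively the THUCNews categories). Assuming macro-averaging — a guess, since the averaging scheme is not stated in this excerpt — such scores could be computed as in the sketch below; the labels in the example are hypothetical.

```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical labels for the four defect stages (0=design, 1=construction,
# 2=procurement, 3=commissioning); real predictions come from the trained model.
y_true = [0, 1, 2, 3, 0, 2, 3, 1]
y_pred = [0, 1, 2, 2, 0, 2, 3, 1]

# Macro-averaging is an assumption about how the tables' scores are aggregated.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"precision={precision:.2%}, recall={recall:.2%}, F1={f1:.2%}")
```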
| Model | Precision/% | Recall/% | F1/% |
|---|---|---|---|
| Fast_Text-RFFL | 94 | 93 | 93 |
| Text_RNN-RFFL | 94 | 93 | 93 |
| Text_RNN_Att-RFFL | 94 | 93 | 93 |
| Text_CNN-RFFL | 93 | 93 | 93 |
| Text_DPCNN-RFFL | 94 | 93 | 93 |
| IRPN_RFFL | 95 | 94 | 94 |
Tab. 6 Performance analysis of classification models for THUCNews text
[1] PANG B, LEE L, VAITHYANATHAN S. Thumbs up? Sentiment classification using machine learning techniques [C]// Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2002, 10: 79-86.
[2] ABBASI A, FRANCE S, ZHANG Z, et al. Selecting attributes for sentiment classification using feature relation networks [J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 23(3): 447-462.
[3] ZHOU Y, LI J L, CHI J, et al. Set-CNN: a text convolutional neural network based on semantic extension for short text classification [J]. Knowledge-Based Systems, 2022, 257: 109948.
[4] DENG J, CHENG L, WANG Z. Attention-based BiLSTM fused CNN with gating mechanism model for Chinese long text classification [J]. Computer Speech & Language, 2021, 68: 101182.
[5] AL-ADHAILEH M H, ALDHYANI T H H, ALGHAMDI A D. Online troll reviewer detection using deep learning techniques [J]. Applied Bionics and Biomechanics, 2022, 2022: 4637594.
[6] DENG Y, LEI H, LI X Y, et al. A multi-hop attention deep model for aspect-level sentiment classification [J]. Journal of University of Electronic Science and Technology of China, 2019, 48(5): 759-766. (in Chinese)
[7] GUAN Y Q. The research and application of short text classification based on machine learning in nuclear power quality management [D]. Shanghai: Shanghai Jiao Tong University, 2017. (in Chinese)
[8] XU X J, QIN X T, YANG Q, et al. Preliminary application of big data technology in defect analysis of nuclear power equipment [J]. Nuclear Power Engineering, 2020, 41(S1): 68-72. (in Chinese)
[9] BUDA M, MAKI A, MAZUROWSKI M A. A systematic study of the class imbalance problem in convolutional neural networks [J]. Neural Networks, 2018, 106: 249-259.
[10] YE F, JIANG Y S. Unbalanced classification method based on clustering ensemble and under-sampling [J]. Computer Applications and Software, 2020, 37(1): 292-297. (in Chinese)
[11] LI X, YU L, CHANG D, et al. Dual cross-entropy loss for small-sample fine-grained vehicle classification [J]. IEEE Transactions on Vehicular Technology, 2019, 68(5): 4204-4212.
[12] LIN T-Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection [C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2999-3007.
[13] KIM J, KIM T, KIM S, et al. Edge-labeling graph neural network for few-shot learning [C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 11-20.
[14] ZHOU Z, SHIN J, ZHANG L, et al. Fine-tuning convolutional neural networks for biomedical image analysis: actively and incrementally [C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 4761-4772.
[15] DEVLIN J, CHANG M-W, LEE K, et al. BERT: pre-training of deep bidirectional Transformers for language understanding [C]// Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg: ACL, 2019: 4171-4186.
[16] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010.
[17] LIU P, QIU X, HUANG X. Recurrent neural network for text classification with multi-task learning [C]// Proceedings of the 25th International Joint Conference on Artificial Intelligence. New York: AAAI Press, 2016: 2873-2879.
[18] MNIH V, HEESS N, GRAVES A. Recurrent models of visual attention [C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014: 2204-2212.
[19] KIM Y. Convolutional neural networks for sentence classification [C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2014: 1746-1751.
[20] JOHNSON R, ZHANG T. Deep pyramid convolutional neural networks for text categorization [C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1 (Long Papers). Stroudsburg: ACL, 2017: 562-570.
[21] JOULIN A, GRAVE E, BOJANOWSKI P, et al. Bag of tricks for efficient text classification [C]// Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Volume 2 (Short Papers). Stroudsburg: ACL, 2017: 427-431.