Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (3): 849-855. DOI: 10.11772/j.issn.1001-9081.2024091325
• Frontier research and typical applications of large models •
Can MA 1,2, Ruizhang HUANG 1,2, Lina REN 1,2,3, Ruina BAI 1,2, Yaoyao WU 1,2
Received: 2024-09-20
Revised: 2024-12-11
Accepted: 2024-12-13
Online: 2025-02-13
Published: 2025-03-10
Contact: Ruizhang HUANG
About author: MA Can, born in 1992 in Ezhou, Hubei, M.S. His research interests include text correction and text mining.
Can MA, Ruizhang HUANG, Lina REN, Ruina BAI, Yaoyao WU. Chinese spelling correction method based on LLM with multiple inputs[J]. Journal of Computer Applications, 2025, 45(3): 849-855.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024091325
| Prompt name | Prompt template |
|---|---|
| Prompt-GEN-1 | 请对以下句子进行拼写纠错: (Please correct the spelling errors in the following sentence:) |
| Prompt-GEN-2 | 对于待纠错句子:… 参考候选纠错结果,请对待纠错句子进行拼写纠错,输出纠错后的句子。 (For the sentence to be corrected: … referring to the candidate correction results, correct its spelling errors and output the corrected sentence.) |
| Prompt-SEL | 下列哪个句子没有拼写错误,输出对应句子。若所有候选句子均有错误,则输出"无拼写正确结果"。… (Which of the following sentences has no spelling errors? Output that sentence; if all candidates contain errors, output "no spelling-correct result". …) |

Tab. 1 Prompt templates
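The templates in Tab. 1 can be filled programmatically before being sent to the LLM. The sketch below only reproduces the template wording shown in the table; the helper names (`build_gen1_prompt`, `build_sel_prompt`) and the candidate numbering scheme are assumptions, not the paper's implementation.

```python
# Hypothetical helpers that instantiate the Tab. 1 prompt templates.
PROMPT_GEN_1 = "请对以下句子进行拼写纠错:{sentence}"

def build_gen1_prompt(sentence: str) -> str:
    """Fill the single-input generation template (Prompt-GEN-1)."""
    return PROMPT_GEN_1.format(sentence=sentence)

def build_sel_prompt(candidates: list[str]) -> str:
    """Fill the selection template (Prompt-SEL) with numbered candidates."""
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates, 1))
    return (
        "下列哪个句子没有拼写错误,输出对应句子。"
        "若所有候选句子均有错误,则输出无拼写正确结果。\n" + numbered
    )
```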
| Split | Dataset | Total sentences | Average sentence length | Number of errors |
|---|---|---|---|---|
| Training | SIGHAN13 | 700 | 41.8 | 343 |
| Training | SIGHAN14 | 3 437 | 49.6 | 5 122 |
| Training | SIGHAN15 | 2 338 | 31.3 | 3 037 |
| Training | Wang271K | 271 329 | 42.6 | 381 962 |
| Test | SIGHAN13 | 1 000 | 74.3 | 1 224 |
| Test | SIGHAN14 | 1 062 | 50.0 | 771 |
| Test | SIGHAN15 | 1 100 | 30.6 | 703 |
| Test | SIGHAN15-REVISED | 1 100 | 30.6 | 858 |

Tab. 2 Statistics of training and test datasets
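Statistics like those in Tab. 2 can be derived directly from a corpus of (source, target) sentence pairs. This is a minimal sketch under the assumption that an "error" is a character position where source and target differ (SIGHAN-style pairs keep both sides the same length); the paper's exact counting convention is not stated here.

```python
def dataset_stats(pairs):
    """Return (total sentences, average source length, error-character count)
    for a list of (source, target) sentence pairs of equal length."""
    total = len(pairs)
    avg_len = sum(len(src) for src, _ in pairs) / total
    # Count one error per character position where source and target differ.
    errors = sum(sum(a != b for a, b in zip(src, tgt)) for src, tgt in pairs)
    return total, round(avg_len, 1), errors
```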
| Dataset | Method | Det-Acc | Det-P | Det-R | Det-F1 | Cor-Acc | Cor-P | Cor-R | Cor-F1 |
|---|---|---|---|---|---|---|---|---|---|
| SIGHAN13 | DCN | 55.1 | 57.0 | 55.3 | 56.1 | 54.0 | 55.8 | 54.2 | 55.0 |
| SIGHAN13 | LEAD | 71.2 | 74.6 | 71.4 | 72.9 | 70.2 | 73.5 | 70.3 | 71.9 |
| SIGHAN13 | REALISE | 69.0 | 73.2 | 69.1 | 71.1 | 67.6 | 71.6 | 67.7 | 69.6 |
| SIGHAN13 | Prompt-GEN-1 | 63.8 | 71.1 | 63.1 | 66.9 | 60.5 | 67.3 | 59.7 | 63.3 |
| SIGHAN13 | Prompt-GEN-2 | 66.7 | 67.7 | 66.5 | 67.1 | 64.9 | 65.8 | 64.7 | 65.2 |
| SIGHAN13 | Prompt-SEL | 72.3 | 75.0 | 72.4 | 73.7 | 71.5 | 74.2 | 71.6 | 72.9 |
| SIGHAN14 | DCN | 69.6 | 54.3 | 61.0 | 57.4 | 69.2 | 53.6 | 60.2 | 56.7 |
| SIGHAN14 | LEAD | 75.9 | 63.9 | 67.5 | 65.7 | 75.0 | 62.3 | 65.8 | 64.0 |
| SIGHAN14 | REALISE | 76.2 | 64.8 | 70.2 | 67.4 | 75.0 | 62.7 | 67.9 | 65.2 |
| SIGHAN14 | Prompt-GEN-1 | 54.7 | 39.1 | 55.1 | 45.8 | 52.2 | 35.4 | 49.9 | 41.4 |
| SIGHAN14 | Prompt-GEN-2 | 58.7 | 43.6 | 62.7 | 51.4 | 57.2 | 41.4 | 59.6 | 48.9 |
| SIGHAN14 | Prompt-SEL | 76.4 | 64.7 | 72.3 | 68.3 | 75.3 | 62.8 | 70.2 | 66.3 |
| SIGHAN15 | DCN | 80.5 | 70.2 | 77.7 | 73.8 | 79.0 | 67.3 | 74.6 | 70.8 |
| SIGHAN15 | LEAD | 85.1 | 77.1 | 82.4 | 79.6 | 84.3 | 75.5 | 80.7 | 78.0 |
| SIGHAN15 | REALISE | 83.7 | 76.2 | 80.9 | 78.5 | 83.1 | 75.0 | 79.6 | 77.2 |
| SIGHAN15 | Prompt-GEN-1 | 67.2 | 52.7 | 64.2 | 57.9 | 63.2 | 46.0 | 56.0 | 50.5 |
| SIGHAN15 | Prompt-GEN-2 | 71.9 | 59.5 | 80.3 | 68.4 | 70.5 | 57.3 | 77.4 | 65.8 |
| SIGHAN15 | Prompt-SEL | 85.6 | 78.2 | 84.0 | 81.0 | 84.3 | 75.6 | 81.3 | 78.4 |
| SIGHAN15-REVISED | DCN | 76.4 | 70.5 | 68.3 | 69.4 | 75.2 | 68.6 | 66.5 | 67.5 |
| SIGHAN15-REVISED | LEAD | 76.1 | 70.1 | 67.0 | 68.5 | 75.5 | 68.9 | 65.9 | 67.3 |
| SIGHAN15-REVISED | REALISE | 77.0 | 71.0 | 68.8 | 69.8 | 76.4 | 69.8 | 67.6 | 68.7 |
| SIGHAN15-REVISED | Prompt-GEN-1 | 51.6 | 37.5 | 48.1 | 42.1 | 48.2 | 32.8 | 42.0 | 36.9 |
| SIGHAN15-REVISED | Prompt-GEN-2 | 71.5 | 61.5 | 74.1 | 67.2 | 70.0 | 59.5 | 71.7 | 65.0 |
| SIGHAN15-REVISED | Prompt-SEL | 77.7 | 75.3 | 70.1 | 72.6 | 76.9 | 73.7 | 68.6 | 71.1 |

Tab. 3 Spelling error correction results on SIGHAN datasets (values in %; Det = error detection, Cor = error correction; Acc/P/R/F1 = accuracy, precision, recall, F1 score)
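Sentence-level correction metrics of the kind reported in Tab. 3 can be computed from (source, prediction, target) triples. The paper's exact evaluation script is not shown, so this sketch follows the common SIGHAN sentence-level convention: a true positive is a changed sentence that exactly matches the reference.

```python
def correction_metrics(sources, predictions, targets):
    """Sentence-level accuracy, precision, recall, F1 for spelling correction."""
    tp = fp = fn = exact = 0
    for src, pred, tgt in zip(sources, predictions, targets):
        changed = pred != src
        if changed and pred == tgt:
            tp += 1      # corrected an erroneous sentence perfectly
        elif changed:
            fp += 1      # changed the sentence but got it wrong
        elif src != tgt:
            fn += 1      # missed a sentence that needed correction
        if pred == tgt:
            exact += 1
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return exact / len(sources), prec, rec, f1
```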
| Method | Det-Acc | Det-P | Det-R | Det-F1 | Cor-Acc | Cor-P | Cor-R | Cor-F1 |
|---|---|---|---|---|---|---|---|---|
| Ref. [ ] | — | — | — | — | — | 62.5 | 53.4 | 57.6 |
| Qwen1.5-14B (ZSP) | — | — | — | — | — | — | — | 28.1 |
| Qwen1.5-14B (FSP) | — | — | — | — | — | — | — | 31.6 |
| Prompt-GEN2 (ZSP) | 41.4 | 28.5 | 40.2 | 33.3 | 41.0 | 28.0 | 39.5 | 32.8 |
| Prompt-GEN2 (FSP) | 47.6 | 37.3 | 54.4 | 44.2 | 47.6 | 37.3 | 54.4 | 44.2 |
| Prompt-SEL (ZSP) | 68.7 | 69.1 | 55.3 | 61.5 | 68.0 | 67.7 | 54.2 | 60.2 |
| Prompt-SEL (FSP) | 78.5 | 73.6 | 71.8 | 72.7 | 77.8 | 72.3 | 70.6 | 71.4 |

Tab. 4 Spelling error correction results of zero-shot and few-shot learning on SIGHAN15-REVISED dataset (values in %; ZSP = zero-shot prompting, FSP = few-shot prompting)
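The ZSP/FSP distinction in Tab. 4 comes down to whether demonstration pairs are prepended to the prompt. A minimal sketch, assuming an invented input/output demonstration format (the paper's actual few-shot prompt layout is not reproduced here):

```python
def few_shot_prompt(demos, sentence):
    """Prepend (wrong, corrected) demonstrations before the query sentence."""
    lines = ["请对以下句子进行拼写纠错:"]
    for wrong, right in demos:
        lines.append(f"输入:{wrong}")
        lines.append(f"输出:{right}")
    # The query sentence is appended last, with an open "output" slot.
    lines.extend([f"输入:{sentence}", "输出:"])
    return "\n".join(lines)
```

With an empty `demos` list this degenerates to the zero-shot setting.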
| Statistic | Prompt-GEN-1 | Prompt-GEN-2 |
|---|---|---|
| Sentences with changed length | 88 | 73 |
| Characters modified | 801 | 887 |
| Characters modified incorrectly | 368 | 300 |

Tab. 5 Character-level statistical analysis of LLM error correction results
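The Tab. 5 statistics can be reproduced by aligning each generated sentence against its input and reference. This sketch assumes character-by-character comparison is only meaningful when the output keeps the input's length, which is why length-mismatched sentences are tallied separately.

```python
def char_level_stats(sources, outputs, targets):
    """Count length-mismatched sentences, modified characters, and
    incorrectly modified characters (judged against the reference)."""
    length_mismatch = changed = wrongly_changed = 0
    for src, out, tgt in zip(sources, outputs, targets):
        if len(out) != len(src):
            length_mismatch += 1
            continue  # per-character comparison needs equal lengths
        for s, o, t in zip(src, out, tgt):
            if o != s:
                changed += 1
                if o != t:
                    wrongly_changed += 1
    return length_mismatch, changed, wrongly_changed
```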
| LoRA rank r | Accuracy/% | Precision/% | Recall/% | F1/% |
|---|---|---|---|---|
| 8 | 76.2 | 63.1 | 60.1 | 61.6 |
| 16 | 76.5 | 63.5 | 60.2 | 61.8 |
| 32 | 76.6 | 63.5 | 60.3 | 61.8 |

Tab. 6 Influence of LoRA rank parameter on model performance on SIGHAN15 dataset
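The rank swept in Tab. 6 controls the size of the low-rank update in LoRA fine-tuning: the frozen weight W is adapted through a rank-r product B·A rather than a full dense delta. A tiny deterministic numeric sketch of that idea (not the paper's training code):

```python
import numpy as np

def lora_forward(x, W, A, B, scale=1.0):
    """y = x @ (W + scale * B @ A)^T, with low-rank factors
    A of shape (r, d_in) and B of shape (d_out, r)."""
    return x @ (W + scale * (B @ A)).T

W = np.arange(12.0).reshape(3, 4)  # frozen pre-trained weight (d_out=3, d_in=4)
A = np.ones((2, 4))                # trainable factor, rank r = 2
B = np.zeros((3, 2))               # zero-initialised, as in the LoRA paper
x = np.ones((1, 4))
# With B = 0 the adapted layer reproduces the frozen layer exactly,
# so training starts from the pre-trained model's behaviour.
```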
| Model | Candidate correction |
|---|---|
| Input sentence | 要求师公单位对项目进行垫资。 |
| DCN | 要求市公单位对项目进行垫资。 |
| LEAD | 要求施工单位对项目进行垫资。 |
| REALISE | 要求示工单位对项目进行垫资。 |
| Prompt-SEL | 要求施工单位对项目进行垫资。 |

Tab. 7 Correction case of the method based on LLM with multiple inputs

Here the correct word is 施工 (construction, as in "construction unit"); DCN and REALISE substitute the homophones 市公 and 示工, while Prompt-SEL selects LEAD's fully correct candidate.
| Model | Candidate correction |
|---|---|
| Input sentence | 受到你的邮件,我一遍高兴一边难过。 |
| DCN | 收到你的邮件,我一遍高兴一边难过。 |
| LEAD | 受到你的邮件,我一边高兴一边难过。 |
| REALISE | 受到你的邮件,我一遍高兴一边难过。 |
| Prompt-SEL | 无拼写正确结果 (no spelling-correct result) |

Tab. 8 Failure correction case of the method based on LLM with multiple inputs

The input contains two errors (受到→收到, 一遍→一边), but each base model fixes at most one of them, so no candidate is fully correct and Prompt-SEL rejects them all.
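The selection flow illustrated by Tab. 7 and Tab. 8 can be sketched end to end: several base correctors each propose a candidate, and an LLM (stubbed here as a plain callable, since the real model call is outside this sketch) either picks an error-free candidate or reports that none exists.

```python
NO_CORRECT = "无拼写正确结果"  # "no spelling-correct result", as output by Prompt-SEL

def select_correction(sentence, correctors, llm_select):
    """Collect candidates from the base correctors and let the LLM pick one."""
    candidates = [correct(sentence) for correct in correctors]
    choice = llm_select(sentence, candidates)
    # When the LLM rejects every candidate (the Tab. 8 case),
    # fall back to the unchanged input sentence.
    return sentence if choice == NO_CORRECT else choice
```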
[1] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional Transformers for language understanding [C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg: ACL, 2019: 4171-4186.
[2] CLARK K, LUONG M T, LE Q V, et al. ELECTRA: pre-training text encoders as discriminators rather than generators [EB/OL]. [2024-04-13].
[3] ZHANG S, HUANG H, LIU J, et al. Spelling error correction with Soft-Masked BERT [C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 882-890.
[4] ZHU C, YING Z, ZHANG B, et al. MDCSpell: a multi-task detector-corrector framework for Chinese spelling correction [C]// Findings of the Association for Computational Linguistics: ACL 2022. Stroudsburg: ACL, 2022: 1244-1253.
[5] CHENG X, XU W, CHEN K, et al. SpellGCN: incorporating phonological and visual similarities into language models for Chinese spelling check [C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 871-881.
[6] WANG B, CHE W, WU D, et al. Dynamic connected networks for Chinese spelling check [C]// Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg: ACL, 2021: 2437-2446.
[7] XU H D, LI Z, ZHOU Q, et al. Read, listen, and see: leveraging multimodal information helps Chinese spell checking [C]// Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg: ACL, 2021: 716-728.
[8] GANAIE M A, HU M, MALIK A K, et al. Ensemble deep learning: a review [J]. Engineering Applications of Artificial Intelligence, 2022, 115: No.105151.
[9] KILICOGLU H, FISZMAN M, ROBERTS K, et al. An ensemble method for spelling correction in consumer health questions [C]// Proceedings of the 2015 AMIA Annual Symposium. Bethesda, MD: AMIA, 2015: 727-736.
[10] TANG C, WU X, WU Y. Are pre-trained language models useful for model ensemble in Chinese grammatical error correction? [C]// Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg: ACL, 2023: 893-901.
[11] OUYANG L, WU J, JIANG X, et al. Training language models to follow instructions with human feedback [C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 27730-27744.
[12] FAN Y, JIANG F, LI P, et al. GrammarGPT: exploring open-source LLMs for native Chinese grammatical error correction with supervised fine-tuning [C]// Proceedings of the 2023 National CCF Conference on Natural Language Processing and Chinese Computing, LNCS 14304. Cham: Springer, 2023: 69-80.
[13] BOROS E, EHRMANN M, ROMANELLO M, et al. Post-correction of historical text transcripts with large language models: an exploratory study [C]// Proceedings of the 8th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature. Stroudsburg: ACL, 2024: 133-159.
[14] DONG M, CHEN Y, ZHANG M, et al. Rich semantic knowledge enhanced large language models for few-shot Chinese spell checking [C]// Findings of the Association for Computational Linguistics: ACL 2024. Stroudsburg: ACL, 2024: 7372-7383.
[15] ZHOU H, LI Z, ZHANG B, et al. A simple yet effective training-free prompt-free approach to Chinese spelling correction based on large language models [C]// Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2024: 17446-17467.
[16] QIAO S, OU Y, ZHANG N, et al. Reasoning with language model prompting: a survey [C]// Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2023: 5368-5393.
[17] HU E J, SHEN Y, WALLIS P, et al. LoRA: low-rank adaptation of large language models [EB/OL]. [2024-02-14].
[18] LIU C L, LAI M H, CHUANG Y H, et al. Visually and phonologically similar characters in incorrect simplified Chinese words [C]// Proceedings of the 23rd International Conference on Computational Linguistics: Posters Volume. [S.l.]: Coling 2010 Organizing Committee, 2010: 739-747.
[19] LIU S, YANG T, YUE T, et al. PLOME: pre-training with misspelled knowledge for Chinese spelling correction [C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg: ACL, 2021: 2991-3000.
[20] ZHANG R, PANG C, ZHANG C, et al. Correcting Chinese spelling errors with phonetic pre-training [C]// Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg: ACL, 2021: 2250-2261.
[21] MENG Y, WU W, WANG F, et al. Glyce: glyph-vectors for Chinese character representations [C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 2746-2757.
[22] LI Y, MA S, ZHOU Q, et al. Learning from the dictionary: heterogeneous knowledge guided fine-tuning for Chinese spell checking [C]// Findings of the Association for Computational Linguistics: EMNLP 2022. Stroudsburg: ACL, 2022: 238-249.
[23] WU Y Y, HUANG R Z, BAI R N, et al. Multi-input fusion spelling error correction model based on contrast optimization [J]. Pattern Recognition and Artificial Intelligence, 2024, 37(1): 85-94.
[24] WANG Y, WANG B, LIU Y, et al. LM-Combiner: a contextual rewriting model for Chinese grammatical error correction [C]// Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. [S.l.]: ELRA and ICCL, 2024: 10675-10685.
[25] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010.
[26] WU S H, LIU C L, LEE L H. Chinese spelling check evaluation at SIGHAN Bake-off 2013 [C]// Proceedings of the 7th SIGHAN Workshop on Chinese Language Processing. [S.l.]: Asian Federation of Natural Language Processing, 2013: 35-42.
[27] YU L C, LEE L H, TSENG Y H, et al. Overview of SIGHAN 2014 Bake-off for Chinese spelling check [C]// Proceedings of the 3rd CIPS-SIGHAN Joint Conference on Chinese Language Processing. Stroudsburg: ACL, 2014: 126-132.
[28] TSENG Y H, LEE L H, CHANG L P, et al. Introduction to SIGHAN 2015 Bake-off for Chinese spelling check [C]// Proceedings of the 8th SIGHAN Workshop on Chinese Language Processing. Stroudsburg: ACL, 2015: 32-37.
[29] YANG L, LIU X, LIAO T, et al. Is Chinese Spelling Check ready? Understanding the correction behavior in real-world scenarios [J]. AI Open, 2023, 4: 183-192.
[30] WANG D, SONG Y, LI J, et al. A hybrid approach to automatic corpus generation for Chinese spelling check [C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 2517-2527.