1 MINSKY M. Deep issues: commonsense-based interfaces[J]. Communications of the ACM, 2000, 43(8): 66-73. 10.1145/345124.345145
2 DAVIS E, MARCUS G. Commonsense reasoning and commonsense knowledge in artificial intelligence[J]. Communications of the ACM, 2015, 58(9): 92-103. 10.1145/2701413
3 BHAGAVATULA C, LE BRAS R, MALAVIYA C, et al. Abductive commonsense reasoning[EB/OL]. (2020-02-14) [2021-12-10].
4 BOWMAN S R, ANGELI G, POTTS C, et al. A large annotated corpus for learning natural language inference[C]// Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2015: 632-642. 10.18653/v1/d15-1075
5 WILLIAMS A, NANGIA N, BOWMAN S R. A broad-coverage challenge corpus for sentence understanding through inference[C]// Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg, PA: ACL, 2018: 1112-1122. 10.18653/v1/n18-1101
6 MacCARTNEY B, MANNING C D. Natural logic for textual inference[C]// Proceedings of the 2007 ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Stroudsburg, PA: ACL, 2007: 193-200. 10.3115/1654536.1654575
7 ZELLERS R, BISK Y, SCHWARTZ R, et al. SWAG: a large-scale adversarial dataset for grounded commonsense inference[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2018: 93-104. 10.18653/v1/d18-1009
8 ZHU Y C, PANG L, LAN Y Y, et al. L2R2: leveraging ranking for abductive reasoning[C]// Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 1961-1964. 10.1145/3397271.3401332
9 YU C L, ZHANG H M, SONG Y Q, et al. Enriching large-scale eventuality knowledge graph with entailment relations[C/OL]// Proceedings of the 2020 Conference on Automated Knowledge Base Construction. [2021-12-10].
10 BAUER L, BANSAL M. Identify, align, and integrate: matching knowledge graphs to commonsense reasoning tasks[C]// Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2021: 2259-2272. 10.18653/v1/2021.eacl-main.192
11 MA K X, ILIEVSKI F, FRANCIS J, et al. Knowledge-driven data construction for zero-shot evaluation in commonsense question answering[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2021: 13507-13515. 10.1609/aaai.v35i15.17593
12 HUANG Y C, ZHANG Y Z, ELACHQAR O, et al. INSET: sentence infilling with INter-SEntential Transformer[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 2502-2515. 10.18653/v1/2020.acl-main.226
13 ZHOU W C S, LEE D H, SELVAM R K, et al. Pre-training text-to-text transformers for concept-centric common sense[EB/OL]. (2022-02-10) [2022-11-10].
14 YU C W, CHEN J W, CHEN Y L. Enhanced LSTM framework for water-cooled chiller COP forecasting[C]// Proceedings of the 2021 IEEE International Conference on Consumer Electronics. Piscataway: IEEE, 2021: 1-3. 10.1109/icce50685.2021.9427706
15 YUE Z Y, YE X, LIU R H. A survey of language model based pre-training technology[J]. Journal of Chinese Information Processing, 2021, 35(9): 15-29. 10.3969/j.issn.1003-0077.2021.09.002
16 DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019: 4171-4186. 10.18653/v1/n19-1423
17 LIU Y H, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. (2019-07-26) [2021-12-10].
18 CHEN Q, ZHU X D, LING Z H, et al. Enhanced LSTM for natural language inference[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA: ACL, 2017: 1657-1668. 10.18653/v1/p17-1152
19 HERBRICH R, GRAEPEL T, OBERMAYER K. Large margin rank boundaries for ordinal regression[M]// SMOLA A J, BARTLETT P L, SCHÖLKOPF B, et al. Advances in Large Margin Classifiers. Cambridge: MIT Press, 2000: 115-132. 10.7551/mitpress/1113.003.0010
20 BURGES C, SHAKED T, RENSHAW E, et al. Learning to rank using gradient descent[C]// Proceedings of the 22nd International Conference on Machine Learning. New York: ACM, 2005: 89-96. 10.1145/1102351.1102363
21 BURGES C J C, RAGNO R, LE Q V. Learning to rank with nonsmooth cost functions[C]// Proceedings of the 19th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2006: 193-200. 10.7551/mitpress/7503.003.0029
22 CAO Z, QIN T, LIU T Y, et al. Learning to rank: from pairwise approach to listwise approach[C]// Proceedings of the 24th International Conference on Machine Learning. New York: ACM, 2007: 129-136. 10.1145/1273496.1273513
23 LI M H, LIU X L, VAN DE WEIJER J, et al. Learning to rank for active learning: a listwise approach[C]// Proceedings of the 25th International Conference on Pattern Recognition. Piscataway: IEEE, 2021: 5587-5594. 10.1109/icpr48806.2021.9412680
24 QIN T, LIU T Y, LI H. A general approximation framework for direct optimization of information retrieval measures[J]. Information Retrieval, 2010, 13(4): 375-397. 10.1007/s10791-009-9124-x
25 PAUL D, FRANK A. Social commonsense reasoning with multi-head knowledge attention[C]// Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA: ACL, 2020: 2969-2980. 10.18653/v1/2020.findings-emnlp.267
26 MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[EB/OL]. (2013-09-07) [2021-12-10].
27 PENNINGTON J, SOCHER R, MANNING C D. GloVe: global vectors for word representation[C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2014: 1532-1543. 10.3115/v1/d14-1162
28 PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]// Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Stroudsburg, PA: ACL, 2018: 2227-2237. 10.18653/v1/n18-1202
29 HE P C, LIU X D, GAO J F, et al. DeBERTa: decoding-enhanced BERT with disentangled attention[EB/OL]. (2021-10-06) [2021-12-10].
30 LI W, GAO C, NIU G C, et al. UNIMO: towards unified-modal understanding and generation via cross-modal contrastive learning[C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Stroudsburg, PA: ACL, 2021: 2592-2607. 10.18653/v1/2021.acl-long.202