[1] NOORALAHZADEH F, BEKOULIS G, BJERVA J, et al. Zero-shot cross-lingual transfer with meta learning [C]// Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 4547-4562.
[2] GAO T, YAO X, CHEN D. SimCSE: simple contrastive learning of sentence embeddings [C]// Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 6894-6910.
[3] BOWMAN S R, ANGELI G, POTTS C, et al. A large annotated corpus for learning natural language inference [C]// Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2015: 632-642.
[4] WILLIAMS A, NANGIA N, BOWMAN S. A broad-coverage challenge corpus for sentence understanding through inference [C]// Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg: ACL, 2018: 1112-1122.
[5] CONNEAU A, RINOTT R, LAMPLE G, et al. XNLI: evaluating cross-lingual sentence representations [C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 2475-2485.
[6] PIKULIAK M, ŠIMKO M, BIELIKOVÁ M. Cross-lingual learning for text processing: a survey [J]. Expert Systems with Applications, 2021, 165: No.113765.
[7] GOOGLE RESEARCH. Multilingual BERT [EB/OL]. [2024-01-31].
[8] CONNEAU A, KHANDELWAL K, GOYAL N, et al. Unsupervised cross-lingual representation learning at scale [C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 8440-8451.
[9] FENG X, FENG X, QIN B, et al. Improving low resource named entity recognition using cross-lingual knowledge transfer [C]// Proceedings of the 27th International Joint Conference on Artificial Intelligence. California: IJCAI.org, 2018: 4071-4077.
[10] REIMERS N, GUREVYCH I. Making monolingual sentence embeddings multilingual using knowledge distillation [C]// Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 4512-4525.
[11] PARK W, KIM D, LU Y, et al. Relational knowledge distillation [C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 3962-3971.
[12] CER D, DIAB M, AGIRRE E, et al. SemEval-2017 Task 1: semantic textual similarity multilingual and cross-lingual focused evaluation [C]// Proceedings of the 11th International Workshop on Semantic Evaluation. Stroudsburg: ACL, 2017: 1-14.
[13] GAO J, HE D, TAN X, et al. Representation degeneration problem in training natural language generation models [EB/OL]. [2024-01-31].
[14] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional Transformers for language understanding [C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg: ACL, 2019: 4171-4186.
[15] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010.
[16] XU L, XIE H, LI Z, et al. Contrastive learning models for sentence representations [J]. ACM Transactions on Intelligent Systems and Technology, 2023, 14(4): No.67.
[17] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting [J]. Journal of Machine Learning Research, 2014, 15: 1929-1958.
[18] YAN Y, LI R, WANG S, et al. ConSERT: a contrastive framework for self-supervised sentence representation transfer [C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg: ACL, 2021: 5065-5075.
[19] WU X, GAO C, ZANG L, et al. ESimCSE: enhanced sample building method for contrastive learning of unsupervised sentence embedding [C]// Proceedings of the 29th International Conference on Computational Linguistics. [S.l.]: International Committee on Computational Linguistics, 2022: 3898-3907.
[20] REIMERS N, GUREVYCH I. Sentence-BERT: sentence embeddings using Siamese BERT-networks [C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 3982-3992.
[21] HUANG J T, LI J, YU D, et al. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers [C]// Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2013: 7304-7308.
[22] ARORA V, LAHIRI A, REET H. Attribute based shared hidden layers for cross-language knowledge transfer [C]// Proceedings of the 2016 IEEE Spoken Language Technology Workshop. Piscataway: IEEE, 2016: 617-623.
[23] WANG Y, WU A, NEUBIG G. English contrastive learning can learn universal cross-lingual sentence embeddings [C]// Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2022: 9122-9133.
[24] CHOUSA K, NAGATA M, NISHINO M, et al. SpanAlign: sentence alignment method based on cross-language span prediction and ILP [C]// Proceedings of the 28th International Conference on Computational Linguistics. [S.l.]: International Committee on Computational Linguistics, 2020: 4750-4761.
[25] WANG L, ZHAO W, LIU J. Aligning cross-lingual sentence representations with dual momentum contrast [C]// Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 3807-3815.
[26] CONNEAU A, LAMPLE G. Cross-lingual language model pretraining [C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 7059-7069.
[27] CHI Z, DONG L, WEI F, et al. InfoXLM: an information-theoretic framework for cross-lingual language model pre-training [C]// Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2021: 3576-3588.
[28] ARTETXE M, SCHWENK H. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond [J]. Transactions of the Association for Computational Linguistics, 2019, 7: 597-610.
[29] HUANG H, LIANG Y, DUAN N, et al. Unicoder: a universal language encoder by pre-training with multiple cross-lingual tasks [C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 2485-2494.
[30] GIORGI J, NITSKI O, WANG B, et al. DeCLUTR: deep contrastive learning for unsupervised textual representations [C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg: ACL, 2021: 879-895.
[31] PANG B, LEE L. Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales [C]// Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2005: 115-124.
[32] WANG S, MANNING C D. Baselines and bigrams: simple, good sentiment and topic classification [C]// Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg: ACL, 2012: 90-94.
[33] VOORHEES E M, TICE D M. Building a question answering test collection [C]// Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2000: 200-207.
[34] CONNEAU A, KIELA D. SentEval: an evaluation toolkit for universal sentence representations [C]// Proceedings of the 11th International Conference on Language Resources and Evaluation. Paris: European Language Resources Association, 2018: 1699-1704.
[35] EL-KISHKY A, CHAUDHARY V, GUZMÁN F, et al. CCAligned: a massive collection of cross-lingual Web-document pairs [C]// Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 5960-5969.