[1] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners [C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 1877-1901.
[2] ZHU Y, YUAN H, WANG S, et al. Large language models for information retrieval: a survey [EB/OL]. [2024-06-20].
[3] ZHAO Z Y, LUO J, TU X H. Information retrieval method based on multi-granularity semantic fusion [J]. Journal of Computer Applications, 2024, 44(6): 1775-1780.
[4] LIN J, NOGUEIRA R, YATES A. Pretrained Transformers for text ranking: BERT and beyond [M]. San Rafael, CA: Morgan & Claypool Publishers, 2021.
[5] LIANG P, BOMMASANI R, LEE T, et al. Holistic evaluation of language models [EB/OL]. [2024-06-20].
[6] ZHUANG H, QIN Z, HUI K, et al. Beyond yes and no: improving zero-shot LLM rankers via scoring fine-grained relevance labels [C]// Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers). Stroudsburg: ACL, 2024: 358-370.
[7] SACHAN D S, LEWIS M, JOSHI M, et al. Improving passage retrieval with zero-shot question generation [C]// Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2022: 3781-3797.
[8] ZHUANG S, LIU B, KOOPMAN B, et al. Open-source large language models are strong zero-shot query likelihood models for document ranking [C]// Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg: ACL, 2023: 8807-8817.
[9] QIN Z, JAGERMAN R, HUI K, et al. Large language models are effective text rankers with pairwise ranking prompting [C]// Findings of the Association for Computational Linguistics: NAACL 2024. Stroudsburg: ACL, 2024: 1504-1518.
[10] LUO J, CHEN X, HE B, et al. PRP-Graph: pairwise ranking prompting to LLMs with graph aggregation for effective text re-ranking [C]// Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2024: 5766-5776.
[11] ZHUANG S, ZHUANG H, KOOPMAN B, et al. A Setwise approach for effective and highly efficient zero-shot ranking with large language models [C]// Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2024: 38-47.
[12] MA X, ZHANG X, PRADEEP R, et al. Zero-shot listwise document reranking with a large language model [EB/OL]. [2024-07-20].
[13] SUN W, YAN L, MA X, et al. Is ChatGPT good at search? Investigating large language models as re-ranking agents [C]// Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2023: 14918-14937.
[14] NOGUEIRA R, YANG W, CHO K, et al. Multi-stage document ranking with BERT [EB/OL]. [2024-07-20].
[15] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding [C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long and Short Papers). Stroudsburg: ACL, 2019: 4171-4186.
[16] NOGUEIRA R, JIANG Z, PRADEEP R, et al. Document ranking with a pretrained sequence-to-sequence model [C]// Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg: ACL, 2020: 708-718.
[17] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer [J]. Journal of Machine Learning Research, 2020, 21: 1-67.
[18] ZHUANG H, QIN Z, JAGERMAN R, et al. RankT5: fine-tuning T5 for text ranking with ranking losses [C]// Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2023: 2308-2313.
[19] MA X, WANG L, YANG N, et al. Fine-tuning LLaMA for multi-stage text retrieval [C]// Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2024: 2421-2425.
[20] SAHOO P, SINGH A K, SAHA S, et al. A systematic survey of prompt engineering in large language models: techniques and applications [EB/OL]. [2023-05-06].
[21] ROBERTSON S, ZARAGOZA H. The probabilistic relevance framework: BM25 and beyond [J]. Foundations and Trends in Information Retrieval, 2009, 3(4): 333-389.
[22] OpenAI. Hello GPT-4o [EB/OL]. [2024-06-20].
[23] HUANG C W, CHEN Y N. InstUPR: instruction-based unsupervised passage reranking with large language models [EB/OL]. [2024-07-01].
[24] CRASWELL N, MITRA B, YILMAZ E, et al. Overview of the TREC 2019 deep learning track [EB/OL]. [2024-06-02].
[25] CRASWELL N, MITRA B, YILMAZ E, et al. Overview of the TREC 2020 deep learning track [EB/OL]. [2024-05-12].
[26] THAKUR N, REIMERS N, RÜCKLÉ A, et al. BEIR: a heterogeneous benchmark for zero-shot evaluation of information retrieval models [EB/OL]. [2024-06-22].
[27] LIN J, MA X, LIN S C, et al. Pyserini: a Python toolkit for reproducible information retrieval research with sparse and dense representations [C]// Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2021: 2356-2362.
[28] CHUNG H W, HOU L, LONGPRE S, et al. Scaling instruction-finetuned language models [J]. Journal of Machine Learning Research, 2024, 25: 1-53.
[29] Llama Team. The Llama 3 herd of models [EB/OL]. [2024-11-23].
[30] Qwen Team. Qwen2 technical report [EB/OL]. [2024-09-10].