When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories
https://aclanthology.org/2023.acl-long.546/
The Web Can Be Your Oyster for Improving Language Models
https://aclanthology.org/2023.findings-acl.46/
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
https://openreview.net/forum?id=hSyW5go0v8
You can’t pick your neighbors, or can you? When and How to Rely on Retrieval in the kNN-LM
https://aclanthology.org/2022.findings-emnlp.218/
Corrective Retrieval Augmented Generation (further extension: if the retrieved information is of low quality, retrieve again; see the sketch after this list)
https://arxiv.org/pdf/2401.15884
Efficient Nearest Neighbor Language Models
https://aclanthology.org/2021.emnlp-main.461/
Self-Knowledge Guided Retrieval Augmentation for Large Language Models
https://arxiv.org/abs/2310.05002
RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation
https://arxiv.org/abs/2310.04408
Learning to Filter Context for Retrieval-Augmented Generation
https://arxiv.org/abs/2311.08377
Repoformer: Selective Retrieval for Repository-Level Code Completion
https://arxiv.org/abs/2403.10059
Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity
https://arxiv.org/abs/2403.14403
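
The note on the Corrective RAG entry above describes the core control flow: judge the quality of the first retrieval and, if it looks unreliable, retrieve again (e.g., from a fallback source). Below is a minimal sketch of that idea only, not the paper's actual pipeline; the `retrieve`, `fallback_retrieve`, and `score_relevance` callables and the threshold value are hypothetical placeholders supplied by the caller.

```python
from typing import Callable, List


def corrective_retrieve(
    query: str,
    retrieve: Callable[[str], List[str]],          # primary retriever (hypothetical)
    fallback_retrieve: Callable[[str], List[str]], # secondary retriever, e.g. web search (hypothetical)
    score_relevance: Callable[[str, str], float],  # relevance evaluator returning a score in [0, 1] (hypothetical)
    threshold: float = 0.5,                        # assumed cutoff, not from the paper
) -> List[str]:
    """Return retrieved passages, re-retrieving when the evaluator
    judges the first batch to be low quality."""
    docs = retrieve(query)
    scores = [score_relevance(query, d) for d in docs]
    # If even the best passage scores below the threshold, treat the
    # first retrieval as unreliable and fall back to a second retrieval.
    if not docs or max(scores) < threshold:
        docs = fallback_retrieve(query)
    return docs
```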