Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)

Retrieval-Augmented Generation: Advances, Applications, and Future Directions in Knowledge-Grounded Language Modeling

Authors
Hancheng Yu1, *
1Zhejiang University Edinburgh United College, Zhejiang University, Haining, China
*Corresponding author. Email: Hancheng.22@intl.zju.edu.cn
Available Online 24 April 2026.
DOI
10.2991/978-94-6239-648-7_48
Keywords
Retrieval-Augmented Generation; Natural language processing; Large language models; Information retrieval; Knowledge-intensive tasks
Abstract

Retrieval-Augmented Generation (RAG) has emerged as a pivotal innovation in natural language processing (NLP), integrating information retrieval with generative modeling to overcome the limitations of static, parameter-based large language models. By dynamically retrieving relevant external knowledge during text generation, RAG enhances factual accuracy, contextual relevance, and adaptability across knowledge-intensive tasks. This paper presents a detailed survey of RAG’s development, methods, and applications. We first trace the evolution of mainstream RAG architectures, including the Retrieval-Augmented Language Model (REALM), Dense Passage Retrieval (DPR), Fusion-in-Decoder (FiD), Hypothetical Document Embeddings (HyDE), Self-Reflective Retrieval-Augmented Generation (Self-RAG), Corrective Retrieval-Augmented Generation (CRAG), and Graph-based Retrieval-Augmented Generation (GraphRAG), and discuss their retrieval, reasoning, and error-correction mechanisms in turn. We then review the steadily widening applications of RAG, including open-domain question answering, summarization, and domain-specific settings such as medicine and multimodal learning. We also examine several challenges facing RAG, including retrieval quality, computational efficiency, hallucination suppression, and interpretability. Finally, we outline research directions for RAG, including adaptive retrieval, self-reflective generation, and structured and multimodal knowledge sources. We believe RAG is an important step toward more accurate, explainable, and knowledge-grounded AI, with significant potential for both theoretical research and practical applications in intelligent language understanding.
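The retrieve-then-generate pattern the abstract describes can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the retriever here is a bag-of-words cosine similarity (a real system would use a dense retriever such as DPR), the generator is replaced by prompt assembly, and the function names and sample corpus are hypothetical.

```python
# Minimal RAG pipeline sketch: retrieve relevant passages, then condition
# generation on them. Toy retrieval only; all names here are illustrative.
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a term-frequency bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the top-k passages most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, passages):
    """The 'augmented' step: prepend retrieved passages as context,
    which a generator (an LLM) would then answer from."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG combines a retriever with a generator.",
    "Dense Passage Retrieval encodes queries and passages into vectors.",
    "The weather today is sunny.",
]
query = "How does RAG work?"
prompt = build_prompt(query, retrieve(query, corpus))
```

In a full system, `prompt` would be passed to a language model; the survey's architectures differ mainly in how retrieval is learned (REALM, DPR), how retrieved evidence is fused (FiD), and how retrieval is triggered or corrected (Self-RAG, CRAG).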

Copyright
© 2026 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Series
Advances in Computer Science Research
Publication Date
24 April 2026
ISBN
978-94-6239-648-7
ISSN
2352-538X

Cite this article

TY  - CONF
AU  - Hancheng Yu
PY  - 2026
DA  - 2026/04/24
TI  - Retrieval-Augmented Generation: Advances, Applications, and Future Directions in Knowledge-Grounded Language Modeling
BT  - Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)
PB  - Atlantis Press
SP  - 439
EP  - 447
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6239-648-7_48
DO  - 10.2991/978-94-6239-648-7_48
ID  - Yu2026
ER  -