Improving Retrieval with RAG Fine-tuning
Learn how to improve Retrieval-Augmented Generation (RAG) systems with fine-tuning. This course will teach you to optimize retrievers, rerankers, and LLMs for accurate, domain-specific results.
What you'll learn
Retrieval-Augmented Generation (RAG) has become one of the most effective techniques for grounding large language models with external knowledge. In this course, Improving Retrieval with RAG Fine-tuning, you’ll learn to design and optimize RAG systems that are accurate, domain-aware, and production-ready.

First, you’ll explore the core components of a RAG system—retrievers, rerankers, and generators—and uncover the common performance challenges they face in practice. Next, you’ll discover how to fine-tune underperforming components, including embedding models for better retrieval, rerankers for improved relevance, and large language models through prompting and supervised training. Finally, you’ll learn how to adapt retrievers to domain-specific datasets, leveraging techniques such as RAFT, creating datasets with positive/negative pairs, and best practices for balancing fine-tuning with prompt engineering.

When you’re finished with this course, you’ll have the skills and knowledge of RAG fine-tuning needed to build retrieval-augmented systems that are not only more relevant and reliable, but also tailored to the unique demands of your domain—whether that’s finance, e-commerce, healthcare, or beyond.
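The retriever, reranker, and generator stages described above can be sketched as a toy pipeline. This is a minimal illustration only: a bag-of-words similarity stands in for a (possibly fine-tuned) embedding model, a term-overlap score stands in for a trained reranker, and string formatting stands in for the LLM. The function names and corpus are invented for the example.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a
    # fine-tuned sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=3):
    # Stage 1: cheap similarity search over the whole corpus.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def rerank(query, candidates, k=1):
    # Stage 2: a reranker rescores only the short candidate list;
    # here, exact term overlap stands in for a cross-encoder.
    q = set(query.lower().split())
    return sorted(candidates,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def generate(query, context):
    # Stage 3: the generator (an LLM in practice) answers from context.
    return f"Q: {query}\nContext: {context[0]}"

corpus = [
    "RAFT fine-tunes an LLM to ignore distractor documents.",
    "Rerankers reorder retrieved passages by relevance.",
    "Embedding models map text to dense vectors.",
]
query = "how do rerankers order passages"
answer = generate(query, rerank(query, retrieve(query, corpus)))
print(answer)
```

Fine-tuning targets exactly these seams: the embedding model so `retrieve` surfaces the right candidates, the reranker so the best passage wins, and the LLM so `generate` stays grounded in the retrieved context.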
About the author
Eduardo is a technology enthusiast, software architect, and customer success advocate. He has designed enterprise .NET solutions that extract, validate, and automate critical business processes such as Accounts Payable and Mailroom operations. He is a well-known specialist in the Enterprise Content Management market segment, focusing on data capture and extraction and document process automation.