Agentic AI: Conversational Agents

Course Summary

This course teaches developers how to build, evaluate, and deploy end-to-end conversational agents using LangChain, FAISS, and real-world APIs. Participants gain deep insight into prompt engineering, memory management, document retrieval, function calling, and deployment workflows. Through hands-on exercises, they build a fully functional web-based agent that answers internal questions, personalizes interactions, and performs real-world tasks via APIs.

Prerequisites
In order to succeed in this course, you will need:

  • Experience programming with Python
  • Familiarity with using external libraries and working with APIs

Purpose
Equip developers with the skills to build, evaluate, and deploy conversational agents using LangChain
Audience
Developers with Python experience interested in building LLM-powered applications
Role
Developers | Software Engineers | AI/ML Practitioners
Skill level
Intermediate
Style
Lecture | Hands-on Workshop
Duration
2 days
Related technologies
Agentic AI | LangChain | LLMs | Python

Course objectives

  • Understand the architecture of modern LLM-based conversational agents
  • Use LangChain components to manage prompts, chains, memory, and tools
  • Connect to internal documents using FAISS and embedding models
  • Build chatbots that also act as agents by integrating external APIs
  • Evaluate chatbot responses for latency, cost, and safety
  • Deploy working agents using Hugging Face Spaces and Streamlit/Gradio

What you'll learn:

Topics covered, with short illustrative code sketches following the outline:
  • Agentic AI Primer (Core Concepts & Terminology)
    • Defining agentic systems and how they differ from standard automation
    • Agentic components: memory, tools, and reasoning loops
    • Comparing agentic workflows with RAG and fine-tuning
    • Shared terms: planner, executor, environment, guardrails 
  • Anatomy of a Conversational Agent
    • Conversational agents vs. chatbots
    • Key LangChain components and agent workflows
    • Creating a minimal LLMChain
  • Prompt Engineering for Dialogue
    • System and user prompt design
    • Tone matching and safety phrasing
    • Few-shot prompting and verbosity control
  • Conversational Memory
    • Short-term vs. summary memory 
    • Privacy tradeoffs and memory configs
    • How memory impacts response quality
  • Vector Search & Document Embedding
    • Embeddings and vector search basics
    • Using FAISS and chunking strategies
    • Document retrieval with OpenAI models
  • Retrieval-Augmented Generation (RAG)
    • How RAG improves accuracy and grounding
    • Using LlamaIndex with LangChain
    • When to prefer RAG over direct embedding
  • External Tool Integration & Function Calling
    • Tools vs. APIs in LangChain
    • Function calling and controlled execution
    • Routing inputs and chaining results
  • Evaluations & Guardrails
    • Latency targets for user experience
    • Token usage monitoring and cost control
    • Content filtering and fallback design
  • Deploying via Hugging Face Spaces
    • Deployment steps and required files
    • Streamlit vs. Gradio UI basics
    • Secret and token management
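
To give a feel for the hands-on work, the sketches below illustrate the kinds of building blocks listed in the outline. They assume the classic LangChain Python API, the openai and faiss-cpu packages, and an OPENAI_API_KEY set in the environment; exact module paths differ between LangChain releases, and model choices, file names, and filter terms are placeholders.

A minimal LLMChain, the starting point in the "Anatomy of a Conversational Agent" module, ties one prompt template to one model call:

    # Minimal prompt -> model chain (classic LangChain API; module paths vary by version).
    from langchain.prompts import PromptTemplate
    from langchain.chat_models import ChatOpenAI
    from langchain.chains import LLMChain

    prompt = PromptTemplate(
        input_variables=["question"],
        template="You are a helpful internal assistant. Answer briefly.\n\nQuestion: {question}",
    )
    llm = ChatOpenAI(temperature=0)  # model choice is up to you
    chain = LLMChain(llm=llm, prompt=prompt)

    print(chain.run(question="What does the onboarding checklist include?"))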
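
Prompt engineering for dialogue mostly means structuring system, few-shot, and user messages deliberately. A sketch using ChatPromptTemplate (the tuple-style message syntax is an assumption about your LangChain version):

    # System prompt + one few-shot pair to control tone, safety, and verbosity.
    from langchain.prompts import ChatPromptTemplate

    prompt = ChatPromptTemplate.from_messages([
        ("system",
         "You are a concise support agent. Match the user's tone, avoid speculation, "
         "and decline requests for confidential data."),
        # Few-shot example steering answer length and format
        ("human", "How do I reset my password?"),
        ("ai", "Settings > Security > Reset password. Takes about a minute."),
        ("human", "{question}"),
    ])

    for message in prompt.format_messages(question="How do I update my billing email?"):
        print(f"[{message.type}] {message.content}")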
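
The memory module contrasts short-term (buffer) memory with summary memory. Both plug into the same conversation chain with different cost and fidelity tradeoffs (these are the classic LangChain memory classes; newer releases mark them as deprecated):

    # Buffer memory keeps the raw transcript (accurate, but token usage grows fast);
    # summary memory compresses history with the LLM (cheaper, but lossy).
    from langchain.chat_models import ChatOpenAI
    from langchain.chains import ConversationChain
    from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory

    llm = ChatOpenAI(temperature=0)
    buffer_chat = ConversationChain(llm=llm, memory=ConversationBufferMemory())
    summary_chat = ConversationChain(llm=llm, memory=ConversationSummaryMemory(llm=llm))

    buffer_chat.predict(input="My name is Priya and I sit with the finance team.")
    print(buffer_chat.predict(input="Which team do I sit with?"))  # recalled from memory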
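
Connecting the agent to internal documents starts with chunking and embedding them into a vector store. A FAISS sketch (the handbook file name is a placeholder):

    # Split a document into chunks, embed them, and run a similarity search.
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS

    handbook = open("employee_handbook.txt").read()  # placeholder internal document

    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = splitter.split_text(handbook)

    index = FAISS.from_texts(chunks, OpenAIEmbeddings())
    for doc in index.similarity_search("How many vacation days do we get?", k=3):
        print(doc.page_content[:80])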
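
Retrieval-augmented generation then grounds answers in those retrieved chunks. The course pairs LlamaIndex with LangChain; this sketch shows the same idea using LangChain's own RetrievalQA chain over the FAISS index from the previous snippet:

    # Retrieve relevant chunks, then answer with the retrieved text as context.
    from langchain.chat_models import ChatOpenAI
    from langchain.chains import RetrievalQA

    rag = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(temperature=0),
        retriever=index.as_retriever(search_kwargs={"k": 4}),  # index from the FAISS sketch
        return_source_documents=True,
    )

    result = rag({"query": "What is the remote-work policy?"})
    print(result["result"])
    print(len(result["source_documents"]), "chunks used for grounding")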
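
Function calling turns the chatbot into an agent that can act: an API is exposed as a tool, and the model decides when to invoke it. A sketch with a stubbed weather lookup standing in for a real API (using the classic initialize_agent helper; newer LangChain versions use a different agent constructor):

    # Wrap an external API as a tool and let the agent route requests to it.
    from langchain.agents import initialize_agent, AgentType
    from langchain.chat_models import ChatOpenAI
    from langchain.tools import tool

    @tool
    def get_weather(city: str) -> str:
        """Return the current weather for a city (stub; call a real API here)."""
        return f"It is 21 C and sunny in {city}."

    agent = initialize_agent(
        tools=[get_weather],
        llm=ChatOpenAI(temperature=0),
        agent=AgentType.OPENAI_FUNCTIONS,  # uses the model's native function calling
        verbose=True,
    )

    print(agent.run("Should I bring an umbrella to the Toronto office today?"))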
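
Evaluation here is largely instrumentation: measure latency, track tokens and cost, and fall back safely when a request should not be answered. A minimal wrapper around a chain like the first sketch (the blocklist is a placeholder for real content filtering):

    # Measure latency and token usage per answer, with a simple content-filter fallback.
    import time
    from langchain.callbacks import get_openai_callback

    BLOCKLIST = {"salary", "password"}  # placeholder filter terms

    def safe_answer(chain, question: str) -> str:
        if any(term in question.lower() for term in BLOCKLIST):
            return "I can't help with that here - please contact HR directly."
        start = time.perf_counter()
        with get_openai_callback() as cb:  # tracks OpenAI token usage and cost
            answer = chain.run(question=question)
        latency = time.perf_counter() - start
        print(f"latency={latency:.2f}s tokens={cb.total_tokens} cost=${cb.total_cost:.4f}")
        return answer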
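
Finally, deployment on Hugging Face Spaces is mostly packaging: an app.py like the one below, a requirements.txt, and the API key stored as a Space secret rather than in code. This version uses Gradio; Streamlit is the other UI option covered:

    # app.py -- minimal Gradio chat front end for a Hugging Face Space.
    import os
    import gradio as gr
    from langchain.chat_models import ChatOpenAI
    from langchain.chains import ConversationChain
    from langchain.memory import ConversationBufferMemory

    llm = ChatOpenAI(temperature=0, openai_api_key=os.environ["OPENAI_API_KEY"])
    chat = ConversationChain(llm=llm, memory=ConversationBufferMemory())

    def respond(message, history):
        # Gradio manages the visible history; the chain keeps its own memory too.
        return chat.predict(input=message)

    gr.ChatInterface(respond, title="Internal Q&A Agent").launch()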

Dive in and learn more

When transforming your workforce, it’s important to have expert advice and tailored solutions. We can help. Tell us your unique needs and we'll explore ways to address them.

Let's chat
