This course equips participants with the skills to design, build, and deploy modern Generative AI applications using Large Language Models (LLMs). Moving beyond basic prompt engineering and simple chatbot examples, participants will explore real-world engineering patterns such as Retrieval-Augmented Generation (RAG), tool integration, and orchestration.
Through hands-on labs and activities, participants will build intelligent systems that are reliable, scalable, and secure.
Prerequisites
To get the most out of this session, participants should have:
- Experience with Python
- Basic understanding of REST APIs
- Familiarity with JSON and data structures
- Basic knowledge of machine learning
Purpose
| Learn the skills needed to design, build, and deploy Generative AI applications using Large Language Models (LLMs) |
Audience
| IT professionals interested in creating Generative AI applications using LLMs |
Role
| Software Developers | Technical Managers | Data Professionals | DevOps Engineers |
Skill level
| Intermediate |
Style
| Lecture | Hands-on Activities | Labs |
Duration
| 2 days |
Related technologies
| AI/ML | Python | REST APIs |
Learning objectives
- Explain how large language models (LLMs) work
- Design effective prompts and structured outputs
- Implement modern AI application patterns, including Retrieval-Augmented Generation (RAG) and tool-based orchestration
- Evaluate, debug, and improve LLM outputs, including handling hallucinations and testing system reliability
- Design production-ready and responsible AI systems