This course transitions developers from building linear, single-prompt LLM applications to engineering dynamic, multi-agent swarms. Utilizing the OpenAI Agents SDK, participants will master the Handoff architecture, where specialized agents autonomously transfer control to one another based on task requirements. The course emphasizes Durable Execution (via Temporal) and Persistent Sessions, ensuring agents can handle long-running, complex workflows that survive system restarts. From building "Triage Agents" that route user intent to deploying Model Context Protocol (MCP) servers for universal tooling, this course provides a production-ready blueprint for high-reliability AI orchestration.
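The handoff pattern at the heart of the course can be sketched in plain Python, independent of the SDK (all names below are illustrative, not the OpenAI Agents SDK's actual API): a triage agent inspects the user's intent and, when a matching specialist exists, transfers control to it.

```python
# Minimal sketch of the triage/handoff pattern (illustrative names only;
# the real OpenAI Agents SDK provides its own Agent and handoff primitives).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]                 # work the agent does itself
    handoffs: dict[str, "Agent"] = field(default_factory=dict)  # intent -> specialist

    def run(self, message: str, intent: str) -> str:
        # Triage step: hand control to a specialist if one matches the intent.
        specialist = self.handoffs.get(intent)
        if specialist is not None:
            return specialist.run(message, intent)
        return self.handle(message)

billing = Agent("billing", lambda m: f"[billing] {m}")
support = Agent("support", lambda m: f"[support] {m}")
triage = Agent(
    "triage",
    lambda m: f"[triage] no specialist for: {m}",
    handoffs={"billing": billing, "support": support},
)

print(triage.run("Refund my order", "billing"))  # routed to the billing agent
```

In the SDK itself, routing is decided by the model rather than by an explicit intent key, but the control-transfer shape is the same.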
Prerequisites:
To succeed in this course, you will need:
- Intermediate to advanced Python
- API and web fundamentals, including experience with the OpenAI API and REST APIs
- Basic understanding of agentic workflows
- An understanding of CI/CD pipelines and containerization
Purpose
| Transition from building linear, single-prompt LLM applications to multi-agent swarms using OpenAI Agents SDK |
Audience
| Senior-level programmers and engineers looking to move beyond single-prompt LLM applications |
Role
| AI Engineers & LLM Developers | DevOps & Platform Engineers | Software Architects | Backend Developers |
Skill level
| Intermediate |
Style
| Lecture | Hands-on Activities | Labs |
Duration
| 3 days |
Related technologies
| OpenAI | Gen AI | Temporal |
Learning objectives
- Implement Agent Handoffs
- Manage Stateful Sessions
- Engineer Type-Safe Tools
- Architect Durable Workflows
- Standardize External Access using the Model Context Protocol (MCP)