
Guided: Building a Scalable Chatbot API with Python and GPT
In this Code Lab, you will build a functional and scalable chatbot API from scratch using Python's modern FastAPI framework. You'll learn how to define API endpoints, handle data with Pydantic models, and structure a clean, maintainable application. To simulate a real-world scenario, you will integrate with OpenAI, giving you a safe and cost-free environment in which to learn the fundamentals of connecting to external AI services.

Overview
Building a Scalable Chatbot API with Python and FastAPI
In this hands-on lab, you'll build a functional chatbot API using Python's modern FastAPI framework. You'll learn how to define API endpoints, handle data with Pydantic models, and structure a clean, maintainable application. You'll integrate with OpenAI to provide AI-powered responses.
What You'll Build
A REST API chatbot service with:
- FastAPI backend with automatic OpenAPI docs
- OpenAI integration for intelligent responses
- Request/response validation using Pydantic schemas
- Proper error handling and HTTP status codes
- Web UI for testing the chatbot
- Health check endpoint for monitoring
Lab Structure
This lab is broken into progressive steps:
- Health Check - Verify the health endpoint
- Schemas - Define request/response validation with Pydantic
- Service Layer - Integrate real OpenAI API calls
- Endpoint - Add proper error handling to the chat endpoint
- Testing - Verify the complete application works end-to-end
Each step builds on the previous one, transforming the mock chatbot into a real AI-powered service.
Prerequisites
All dependencies are pre-installed. You'll receive an API key displayed in the nav bar above the lab - you'll enter this in the first step.
Getting Started
1. Start the application
Click the Run button in the bottom right corner of the lab environment to start the FastAPI server.
Wait a few seconds for the server to start (you'll see "Application startup complete" in the Terminal).
2. Open the web UI
Click on the Browser tab at the top of the screen to view the web interface.
If the page is blank, click the refresh icon in the browser.
You'll see the Chatbot API Demo page with a prompt to enter your API key.
3. Save your API key
Copy the API key from the nav bar above (displayed at the top of your lab environment) and paste it into the input field.
Click "Save Key" to save it to your `.env` file. The page will reload and show the chat interface.
4. Test the mock chatbot
Try sending a message like "Hello" - you'll get a mock response from the starter code.
The chat interface is working with hardcoded mock responses right now.
Over the next steps, you'll replace this mock behavior with real AI!
How the App Currently Works
Right now, the chatbot returns hardcoded mock responses:
- Type "hello" - Get a greeting
- Type anything else - Get an echo response
You can see the mock logic in `app/services/chat_service.py:20`
Project Structure
```text
workspace/
  app/
    main.py              # FastAPI app setup
    config.py            # Settings and environment variables
    api/
      routes.py          # API endpoints
    schemas/
      chat.py            # Request/response models
    services/
      chat_service.py    # Business logic (currently mocked)
    templates/
      index.html         # Web UI
  instructions/          # Lab steps (you are here!)
  requirements.txt       # Python dependencies
  .env                   # Environment configuration
```
Key Concepts You'll Learn
- FastAPI - Modern Python web framework with automatic API docs
- Pydantic - Data validation using Python type hints
- OpenAI API - Integrating language models into your app
- Error handling - Graceful failure and helpful error messages
- HTTP status codes - Proper API semantics
- Service layer architecture - Separating business logic from API routes
Next Step
Head to the next step to verify your first endpoint!
Getting Help
- API Docs: Visit http://localhost:8000/docs while the app is running
- Health Check: Visit http://localhost:8000/health
- Stuck? Check the "Solved Code" sections in each instruction file
Note: This lab experience was developed by the Pluralsight team using Forge, an internally developed AI tool utilizing Gemini technology. All sections were verified by human experts for accuracy prior to publication. For issue reporting, please contact us.
Health Check Endpoint
In this step, you'll verify that the health check endpoint is working correctly. Health checks are essential for monitoring and ensuring your API is responsive.
Key Concepts
- Health endpoints - Simple endpoints that return service status
- Monitoring - Essential for production deployments (load balancers, Kubernetes, etc.)
- FastAPI routing - How to define endpoints with decorators
Your Task
The health endpoint is already implemented at `app/api/routes.py:25`. Your job is to:
- Test the endpoint by visiting http://localhost:8000/health
- Verify the response - You should see `{"status":"ok"}`
- Check the API docs - Visit http://localhost:8000/docs and find the `/health` endpoint
Understanding the Code
```python
# app/api/routes.py (lines 25-28)
@router.get("/health")
def health() -> dict:
    """Simple health check endpoint."""
    return {"status": "ok"}
```
What's happening:
- `@router.get("/health")` - Registers a GET endpoint at `/health`
- `-> dict` - Type hint indicating the return type
- Returns a simple JSON object with status
Testing
Option 1: Browser
Open http://localhost:8000/health in your browser
Option 2: curl
curl http://localhost:8000/health
Option 3: API Docs (Swagger UI)
- Visit http://localhost:8000/docs
- Find the `GET /health` endpoint
- Click "Try it out"
- Click "Execute"
- See the response
Expected Output
{ "status": "ok" }
Why Health Checks Matter
Health checks are used by:
- Load balancers - Route traffic only to healthy instances
- Container orchestration (Kubernetes, ECS) - Restart unhealthy containers
- Monitoring tools - Alert when services are down
- Developers - Quick sanity check that the service is running
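As a sketch of what a monitoring tool does with this endpoint, the response body can be parsed and checked before declaring the service healthy. The helper below is illustrative only (it is not part of the lab code) and assumes the monitor receives the raw response body as a string:

```python
import json

def is_healthy(raw_body: str) -> bool:
    """Interpret a health-check response body the way a monitor would."""
    try:
        # A healthy service returns a JSON object with "status": "ok"
        return json.loads(raw_body).get("status") == "ok"
    except (ValueError, AttributeError):
        # Non-JSON or non-object bodies count as unhealthy
        return False

print(is_healthy('{"status": "ok"}'))        # True
print(is_healthy("Internal Server Error"))   # False
```

A load balancer or Kubernetes probe applies essentially this logic (plus a timeout) on every poll.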
Challenge
Want to make it better? Try adding more information:
```python
@router.get("/health")
def health() -> dict:
    """Enhanced health check with timestamp and version."""
    from datetime import datetime
    return {
        "status": "ok",
        "timestamp": datetime.now().isoformat(),
        "version": "1.0.0"
    }
```
Success Criteria
- `/health` endpoint returns `{"status":"ok"}`
- Endpoint appears in the API docs at `/docs`
- You understand how FastAPI routing works
Next Step
Great! Your health endpoint is working.
Head to the next step to add proper validation to your chat requests and responses.
Request & Response Schemas
In this step, you'll add validation to your chat API using Pydantic. This ensures invalid data is rejected before it reaches your business logic.
Key Concepts
- Pydantic models - Type-safe data validation using Python classes
- Field validation - Enforce constraints like minimum length, ranges, etc.
- Type hints - Self-documenting code with automatic validation
- Contract-first API - Clear request/response structure
Current State
Look at `app/schemas/chat.py` - the schemas are minimal:

```python
class ChatRequest(BaseModel):
    user_id: Optional[str] = None
    message: str  # No validation - empty messages are accepted!

class ChatResponse(BaseModel):
    model: str
    reply: str  # No validation - empty replies could be returned!
```
Problem: Users can send empty messages, and your API could return empty replies.
Your Task
Add validation to prevent empty messages and provide helpful descriptions for the API docs.
Starter Code
Open `app/schemas/chat.py` and update it:

```python
from pydantic import BaseModel, Field
from typing import Optional


class ChatRequest(BaseModel):
    """Incoming chat request from the user."""
    user_id: Optional[str] = Field(
        default=None,
        description="Optional user identifier for tracking or personalization."
    )
    message: str = Field(
        min_length=1,
        description="User's message or question (must not be empty)."
    )


class ChatResponse(BaseModel):
    """Chatbot response returned to the user."""
    model: str = Field(
        description="AI model used to generate the response (e.g., gpt-4o-mini)."
    )
    reply: str = Field(
        min_length=1,
        description="The chatbot's response text (always non-empty)."
    )
```
What Changed?
ChatRequest
- `message` now requires `min_length=1` - rejects empty strings
- Added `description` for API documentation
- `user_id` is optional with a clear description
ChatResponse
- `reply` requires `min_length=1` - guarantees non-empty responses
- Added descriptions for auto-generated docs
Testing Your Changes
1. Restart the server
Stop the server with `Ctrl+C` in the Terminal, then click the Run button again to restart. Once restarted, refresh your Browser tab.
2. Check the updated docs
Visit http://localhost:8000/docs and click on `POST /chat`
You should see:
- Field descriptions in the schema
- Example values
- Required vs. optional fields marked clearly
3. Test validation (via Swagger UI)
Test Case 1: Empty message (should fail)
- Go to http://localhost:8000/docs
- Click `POST /chat`, then "Try it out"
- Send:

```json
{ "user_id": "test123", "message": "" }
```
- Expected: 422 Validation Error
Test Case 2: Valid message (should succeed)
{ "user_id": "test123", "message": "Hello, how are you?" }
- Expected: 200 OK with a mock response
Understanding Pydantic Fields
```python
message: str = Field(
    min_length=1,       # Reject empty strings
    description="..."   # Shows in /docs
)
```
Other useful Field options:
- `max_length=500` - Limit message length
- `ge=0, le=2` - Numeric ranges (greater/less than or equal)
- `default=None` - Optional with default value
- `pattern=r"^[a-z]+$"` - Regex validation
Why Validation Matters
- Security - Prevent injection attacks and malformed data
- User experience - Clear error messages when input is invalid
- Documentation - Auto-generated OpenAPI docs show constraints
- Type safety - Catch bugs early during development
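To see what `min_length=1` buys you, here is a hand-rolled equivalent of the check Pydantic performs on `message`. This is illustrative only; in the lab Pydantic runs this check for you and FastAPI turns failures into a 422 response automatically:

```python
def validate_message(message) -> str:
    """Roughly what Field(min_length=1) enforces on ChatRequest.message."""
    if not isinstance(message, str):
        raise ValueError("message must be a string")
    if len(message) < 1:
        raise ValueError("message must not be empty")
    return message

validate_message("Hello")  # passes and returns the value
try:
    validate_message("")
except ValueError as e:
    print(e)  # message must not be empty
```

Writing this by hand for every field is exactly the boilerplate Pydantic removes.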
Solved Code
```python
# app/schemas/chat.py
from pydantic import BaseModel, Field
from typing import Optional


class ChatRequest(BaseModel):
    """Incoming chat request from the user."""
    user_id: Optional[str] = Field(
        default=None,
        description="Optional user identifier for tracking or personalization."
    )
    message: str = Field(
        min_length=1,
        description="User's message or question (must not be empty)."
    )


class ChatResponse(BaseModel):
    """Chatbot response returned to the user."""
    model: str = Field(
        description="AI model used to generate the response (e.g., gpt-4o-mini)."
    )
    reply: str = Field(
        min_length=1,
        description="The chatbot's response text (always non-empty)."
    )
```
Success Criteria
- Updated `ChatRequest` with `Field(min_length=1)` on `message`
- Updated `ChatResponse` with validation and descriptions
- Tested empty message validation (should return 422 error)
- Tested valid message (should return mock response)
- Checked `/docs` to see updated schema documentation
Next Step
Excellent! Your API now validates inputs properly.
Head to the next step to replace the mock chatbot with real OpenAI integration.
Real Azure OpenAI Integration
In this step, you'll replace the mock chatbot with real Azure OpenAI API calls. This is where your chatbot comes to life with actual AI-powered responses!
Key Concepts
- Azure OpenAI SDK - Official Python library for Azure OpenAI API
- Chat completions - Modern format for conversational AI
- System prompts - Set the assistant's behavior and personality
- Error handling - Gracefully handle API failures
- Configuration - Use settings from `config.py`
Current State
Look at `app/services/chat_service.py` - it returns hardcoded responses:

```python
def generate_reply(self, user_message: str) -> dict:
    """Mock implementation - returns hardcoded response."""
    # Mock responses...
    return {
        "model": self.settings.openai_model,
        "reply": f"You said: '{user_message}'. (This is a mock response...)"
    }
```
Your Task
Replace the mock implementation in `chat_service.py` with real Azure OpenAI API calls.
Integrate Azure OpenAI
Open `app/services/chat_service.py` and replace the entire file with:

```python
"""
Chat service with Azure OpenAI integration.
"""
from typing import List, Dict, Any
from app.config import get_settings


class ChatService:
    """Chat service that calls Azure OpenAI API."""

    def __init__(self):
        self.settings = get_settings()

        # Check if API key is configured
        if not self.settings.openai_api_key:
            raise RuntimeError(
                "AZURE_OPENAI_API_KEY is not set. "
                "Please check your environment configuration."
            )

        # Import Azure OpenAI client (lazy import)
        from openai import AzureOpenAI
        self._client = AzureOpenAI(
            api_key=self.settings.openai_api_key,
            api_version=self.settings.api_version,
            azure_endpoint=self.settings.endpoint
        )

    def _messages(self, user_text: str) -> List[Dict[str, Any]]:
        """
        Build the messages array for OpenAI.
        Includes a system prompt to set the assistant's behavior.
        """
        return [
            {
                "role": "system",
                "content": (
                    "You are a helpful and concise assistant. "
                    "Answer accurately and briefly unless the user asks for detail."
                )
            },
            {"role": "user", "content": user_text}
        ]

    def generate_reply(self, user_message: str) -> dict:
        """
        Generate a reply using Azure OpenAI API.

        Returns:
            dict with 'model' and 'reply' keys
        """
        # Validate input
        text = (user_message or "").strip()
        if not text:
            return {
                "model": self.settings.openai_model,
                "reply": "Please enter a message and I'll do my best to help."
            }

        try:
            # Call Azure OpenAI API
            response = self._client.chat.completions.create(
                model=self.settings.openai_model,
                messages=self._messages(text),
                max_tokens=self.settings.max_tokens,
                temperature=self.settings.temperature
            )

            # Extract the reply
            content = ""
            if response and getattr(response, "choices", None):
                first_choice = response.choices[0]
                if getattr(first_choice, "message", None):
                    content = (first_choice.message.content or "").strip()

            # Fallback if no content
            if not content:
                content = "I couldn't generate a response. Please try rephrasing."

            return {
                "model": self.settings.openai_model,
                "reply": content
            }
        except Exception as e:
            # Handle API errors gracefully
            return {
                "model": self.settings.openai_model,
                "reply": f"Error calling Azure OpenAI API: {e}"
            }
```
What Changed?
1. API Key Validation
```python
if not self.settings.openai_api_key:
    raise RuntimeError("AZURE_OPENAI_API_KEY is not set...")
```
- Fails early if no key is configured
- Provides a helpful error message
2. Azure OpenAI Client Setup
```python
from openai import AzureOpenAI

self._client = AzureOpenAI(
    api_key=self.settings.openai_api_key,
    api_version=self.settings.api_version,
    azure_endpoint=self.settings.endpoint
)
```
- Creates an Azure OpenAI client with your API key and endpoint
- Uses lazy import (only imports when needed)
3. System Prompt
```python
{
    "role": "system",
    "content": "You are a helpful and concise assistant..."
}
```
- Sets the AI's personality and behavior
- Can be customized for different use cases
4. API Call
```python
response = self._client.chat.completions.create(
    model=self.settings.openai_model,       # e.g., "gpt-4o-mini"
    messages=self._messages(text),          # System + user message
    max_tokens=self.settings.max_tokens,    # Response length limit
    temperature=self.settings.temperature   # Randomness (0-2)
)
```
5. Error Handling
- Returns friendly error messages instead of crashing
- Validates input before calling the API
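The `max_tokens` and `temperature` parameters have natural bounds (a positive token budget, and a temperature between 0 and 2). A small sanity check like the one below makes those ranges explicit; the function name is illustrative and is not part of the lab's `config.py`:

```python
def validate_chat_params(max_tokens: int, temperature: float) -> None:
    """Reject out-of-range chat-completion parameters before calling the API."""
    if max_tokens <= 0:
        raise ValueError("max_tokens must be a positive integer")
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0 and 2")

validate_chat_params(256, 0.7)  # OK (no exception)
```

Catching a bad value locally gives a clearer error than waiting for the API to reject the request.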
Testing Your Changes
1. Test via the Web UI
Try these messages:
Test 1: Simple greeting
Hello, who are you?
Expected: Real AI response introducing itself
Test 2: Knowledge question
What is FastAPI?
Expected: Accurate explanation from the AI
Test 3: Math problem
What is 157 * 23?
Expected: Correct calculation
2. Test via the API Docs
- Go to http://localhost:8000/docs
- Click `POST /chat`, then "Try it out"
- Send:

```json
{ "message": "Tell me a fun fact about Python" }
```
Expected: Real AI-generated response
Understanding the Code Flow
User → `POST /chat` → `ChatRequest` (validated) → `ChatService.generate_reply()` → OpenAI API call → extract response content → `ChatResponse` (validated) → JSON response to user
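The flow above can be simulated without a server or an API key by stubbing the service layer. Everything here is illustrative (stub names are invented); in the real app FastAPI and Pydantic wire these steps together:

```python
def stub_generate_reply(message: str) -> dict:
    """Stand-in for ChatService.generate_reply - no network call."""
    return {"model": "gpt-4o-mini", "reply": f"Echo: {message}"}

def handle_chat(payload: dict) -> dict:
    """Mimic the endpoint: validate, call the service, shape the response."""
    message = (payload.get("message") or "").strip()
    if not message:
        # In the real app, Pydantic raises this as a 422 validation error
        raise ValueError("422: message must not be empty")
    result = stub_generate_reply(message)
    return {"model": result["model"], "reply": result["reply"]}

print(handle_chat({"message": "What is FastAPI?"}))
# {'model': 'gpt-4o-mini', 'reply': 'Echo: What is FastAPI?'}
```

Swapping the stub for the real service is the only change needed to go from this sketch to the live flow.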
Customizing the System Prompt
Want to change the AI's behavior? Update the system message:
```python
# Make it more formal
"You are a professional business assistant. Use formal language."

# Make it creative
"You are a creative writing assistant. Be imaginative and descriptive."

# Make it concise
"You are a concise assistant. Answer in one sentence unless asked for more."
```
Solved Code
```python
# app/services/chat_service.py
"""
Chat service with Azure OpenAI integration.
"""
from typing import List, Dict, Any
from app.config import get_settings


class ChatService:
    """Chat service that calls Azure OpenAI API."""

    def __init__(self):
        self.settings = get_settings()
        if not self.settings.openai_api_key:
            raise RuntimeError(
                "AZURE_OPENAI_API_KEY is not set. "
                "Please check your environment configuration."
            )
        from openai import AzureOpenAI
        self._client = AzureOpenAI(
            api_key=self.settings.openai_api_key,
            api_version=self.settings.api_version,
            azure_endpoint=self.settings.endpoint
        )

    def _messages(self, user_text: str) -> List[Dict[str, Any]]:
        """Build messages array for OpenAI."""
        return [
            {
                "role": "system",
                "content": (
                    "You are a helpful and concise assistant. "
                    "Answer accurately and briefly unless the user asks for detail."
                )
            },
            {"role": "user", "content": user_text}
        ]

    def generate_reply(self, user_message: str) -> dict:
        """Generate reply using Azure OpenAI API."""
        text = (user_message or "").strip()
        if not text:
            return {
                "model": self.settings.openai_model,
                "reply": "Please enter a message and I'll do my best to help."
            }
        try:
            response = self._client.chat.completions.create(
                model=self.settings.openai_model,
                messages=self._messages(text),
                max_tokens=self.settings.max_tokens,
                temperature=self.settings.temperature
            )
            content = ""
            if response and getattr(response, "choices", None):
                first_choice = response.choices[0]
                if getattr(first_choice, "message", None):
                    content = (first_choice.message.content or "").strip()
            if not content:
                content = "I couldn't generate a response. Please try rephrasing."
            return {"model": self.settings.openai_model, "reply": content}
        except Exception as e:
            return {
                "model": self.settings.openai_model,
                "reply": f"Error calling Azure OpenAI API: {e}"
            }
```
Success Criteria
- Replaced mock responses with OpenAI API calls
- Tested with real messages via web UI
- Confirmed AI-generated responses are working
- Understand how system prompts affect behavior
Next Step
Amazing! Your chatbot is now powered by real AI.
Head to the next step to add better error handling to your API routes.
Error Handling in Endpoints
In this step, you'll add proper error handling to your chat endpoint. This ensures users get helpful error messages when something goes wrong instead of cryptic server errors.
Key Concepts
- HTTPException - FastAPI's way of returning HTTP errors
- Try/catch blocks - Graceful error handling
- Status codes - Proper HTTP semantics (400, 500, etc.)
- Developer experience - Clear error messages help debugging
Current State
Look at `app/api/routes.py:32` - the chat endpoint has no error handling:

```python
@router.post("/chat", response_model=ChatResponse)
def chat(payload: ChatRequest) -> ChatResponse:
    """Chat endpoint - no error handling!"""
    svc = ChatService()  # What if initialization fails?
    result = svc.generate_reply(payload.message)
    return ChatResponse(**result)
```
Problems:
- If service initialization fails, server crashes with 500 error
- No user-friendly error messages
- Doesn't distinguish between client errors (400) and server errors (500)
Your Task
Add try/catch blocks to handle errors gracefully and return appropriate HTTP status codes.
Starter Code
Open `app/api/routes.py` and update the `chat` endpoint:

```python
from fastapi import APIRouter, Request, HTTPException
# ... other imports ...

@router.post("/chat", response_model=ChatResponse)
def chat(payload: ChatRequest) -> ChatResponse:
    """
    Chat endpoint with proper error handling.

    Returns:
        ChatResponse with model and reply

    Raises:
        HTTPException 400: Invalid request or configuration error
        HTTPException 500: Unexpected server error
    """
    try:
        # Try to create the service
        svc = ChatService()

        # Generate reply
        result = svc.generate_reply(payload.message)

        # Return response
        return ChatResponse(**result)
    except RuntimeError as e:
        # Handle missing API key or configuration errors
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        # Handle unexpected errors
        raise HTTPException(status_code=500, detail=f"Chat error: {e}")
```
What Changed?
1. Import HTTPException
```python
from fastapi import HTTPException
```
2. Try/Catch for Service Creation
```python
try:
    svc = ChatService()  # May raise RuntimeError on configuration errors
```
3. Handle Configuration Errors (400)
```python
except RuntimeError as e:
    raise HTTPException(status_code=400, detail=str(e))
```
- Catches configuration errors
- Returns 400 Bad Request (client error)
- Provides helpful error message
4. Handle Unexpected Errors (500)
```python
except Exception as e:
    raise HTTPException(status_code=500, detail=f"Chat error: {e}")
```
- Catches any other errors
- Returns 500 Internal Server Error
- Includes the error message in the response for debugging
Testing Your Changes
1. Restart the server
Stop the server with `Ctrl+C` in the Terminal, then click the Run button again to restart. Once restarted, refresh your Browser tab.
Test 1: Valid Request (200 Success)
```shell
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is 2+2?"}'
```
Expected: 200 success with AI response
Test 2: Via Swagger UI
- Go to http://localhost:8000/docs
- Click `POST /chat`, then "Try it out"
- Send various test cases
- Check the "Responses" section - you should see:
- 200: Successful response
- 400: Validation or configuration error
- 422: Validation error (from Pydantic)
- 500: Server error
HTTP Status Code Reference
| Code | Meaning | When to Use |
|------|---------|-------------|
| 200 | OK | Successful request |
| 400 | Bad Request | Client error (missing key, invalid config) |
| 422 | Unprocessable Entity | Validation error (automatic from Pydantic) |
| 500 | Internal Server Error | Unexpected server error |
Enhanced Error Handling (Optional)
Want even better error messages? Try this:
```python
@router.post("/chat", response_model=ChatResponse)
def chat(payload: ChatRequest) -> ChatResponse:
    """Chat endpoint with enhanced error handling."""
    try:
        svc = ChatService()
        result = svc.generate_reply(payload.message)
        return ChatResponse(**result)
    except RuntimeError as e:
        # Configuration errors
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        # Log the error for debugging
        import logging
        logging.error(f"Chat endpoint error: {e}", exc_info=True)
        raise HTTPException(
            status_code=500,
            detail={
                "error": "internal_error",
                "message": "An unexpected error occurred. Please try again.",
                "details": str(e) if __debug__ else None  # Only in dev mode
            }
        )
```
Better Environment Management
Update the `_write_env` helper to handle existing keys better:

```python
def _write_env(updates: dict) -> None:
    """Write or update keys in .env file."""
    existing: dict[str, str] = {}

    # Read existing .env if it exists
    if os.path.exists(ENV_PATH):
        with open(ENV_PATH, "r", encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                k, v = line.split("=", 1)
                existing[k] = v

    # Merge with updates
    existing.update({k: v for k, v in updates.items() if v})

    # Write back
    with open(ENV_PATH, "w", encoding="utf-8") as f:
        for k, v in existing.items():
            f.write(f"{k}={v}\n")

    # Reload environment
    load_dotenv(override=True)
```
This pattern helps manage environment configuration properly.
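A quick way to see the merge behavior is to run the same parse/merge logic against a throwaway file. This sketch is stdlib-only: the lab's `ENV_PATH` constant and `load_dotenv` call are replaced here with a temp file and a return value:

```python
import os
import tempfile

def merge_env(path: str, updates: dict) -> dict:
    """Parse a .env-style file, merge in non-empty updates, write it back."""
    existing = {}
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                # Skip blanks, comments, and malformed lines
                if not line or line.startswith("#") or "=" not in line:
                    continue
                k, v = line.split("=", 1)
                existing[k] = v
    # Empty-string updates are ignored, so keys are never blanked out
    existing.update({k: v for k, v in updates.items() if v})
    with open(path, "w", encoding="utf-8") as f:
        for k, v in existing.items():
            f.write(f"{k}={v}\n")
    return existing

with tempfile.TemporaryDirectory() as d:
    env = os.path.join(d, ".env")
    with open(env, "w", encoding="utf-8") as f:
        f.write("# comment\nAPI_KEY=old\nMODEL=gpt-4o-mini\n")
    merged = merge_env(env, {"API_KEY": "new", "EMPTY": ""})
    print(merged)  # {'API_KEY': 'new', 'MODEL': 'gpt-4o-mini'}
```

Note how the comment line survives parsing (it is skipped, not kept), existing keys are updated in place, and the empty value is dropped.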
Solved Code
```python
# app/api/routes.py (chat endpoint only)
@router.post("/chat", response_model=ChatResponse)
def chat(payload: ChatRequest) -> ChatResponse:
    """
    Chat endpoint with proper error handling.

    Raises:
        HTTPException 400: Invalid configuration
        HTTPException 500: Unexpected server error
    """
    try:
        # Initialize service (may raise RuntimeError)
        svc = ChatService()

        # Generate reply
        result = svc.generate_reply(payload.message)

        # Return validated response
        return ChatResponse(**result)
    except RuntimeError as e:
        # Client configuration error
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        # Unexpected server error
        raise HTTPException(status_code=500, detail=f"Chat error: {e}")
```
Success Criteria
- Added try/catch to chat endpoint
- Tested valid request (200 success)
- Understand difference between 400, 422, and 500 errors
- Error messages are helpful and actionable
Next Step
Great! Your API now handles errors gracefully.
Head to the next step to verify the complete end-to-end flow works perfectly.
Wrap-Up & Final Testing
Congratulations! You've built a complete chatbot API with FastAPI and Azure OpenAI. In this final step, you'll validate that everything works end-to-end, including the API, the docs, and the UI.
Key Concepts
- End-to-end validation – Confirming the whole system works
- OpenAPI documentation – Interactive API exploration
- Full-stack integration – Frontend and backend working together
1. Verify All Endpoints
Health Check
curl http://localhost:8000/health
Expected: `{"status":"ok"}`
Chat API
```shell
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is FastAPI?"}'
```
Expected: JSON with `model` and `reply` fields.
Home Page (UI)
Visit: http://localhost:8000
Expected: Chat interface that lets you send messages and receive replies.
API Documentation
Visit: http://localhost:8000/docs
Expected: Interactive Swagger UI.
2. Explore the OpenAPI Docs
Go to http://localhost:8000/docs and check:
- Schemas:
  - `ChatRequest` → shows request structure
  - `ChatResponse` → shows response structure
- Endpoints:
  - `GET /health` – Health check
  - `POST /chat` – Main chat endpoint

Try POST /chat directly from the docs:

```json
{ "user_id": "demo_user", "message": "Explain REST APIs in one sentence" }
```
3. Test the Web UI
Even though you don’t need to change the UI, you should verify it works:
- Open http://localhost:8000
- Confirm you see the default message ("Hello there"), optional user ID field, and Send button
- Send messages and check that responses appear
- Test error cases:
  - Empty message → should show client-side error
  - Valid message → should return an AI response
4. Test Error Cases via API
Empty Message (Validation Error)
```shell
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": ""}'
```
Expected: 422 Validation Error.
Missing API Key
1. Temporarily rename `.env`: `mv .env .env.backup`
2. Call the chat API → Expect 400 Bad Request.
3. Restore `.env`: `mv .env.backup .env`
Invalid JSON
```shell
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d 'not valid json'
```
Expected: 422 Unprocessable Entity.
What You’ve Learned
FastAPI Fundamentals
- Creating routes and serving templates
- Built-in OpenAPI docs
Data Validation
- Pydantic models for request/response
- Automatic validation error responses
Azure OpenAI Integration
- Calling the chat completions API
- Handling errors gracefully
Configuration Management
- Secure API key storage with `.env`
- Settings via `pydantic-settings`
Error Handling
- Using `HTTPException` and try/catch blocks
- Clear, user-friendly error responses
Full-Stack Integration
- AJAX frontend talking to your API
- End-to-end testing with UI and API
Additional Resources
- FastAPI Docs: https://fastapi.tiangolo.com
- Azure OpenAI Docs: https://learn.microsoft.com/en-us/azure/ai-services/openai/
- Pydantic Docs: https://docs.pydantic.dev
You Did It
You now have a production-ready chatbot API built with FastAPI and Azure OpenAI, plus the skills to:
- Build APIs with modern Python frameworks
- Validate and secure data with Pydantic
- Integrate AI into real applications
- Test and troubleshoot a full-stack app