- Lab
- Core Tech

Guided: Building a Full-stack AI-powered Application with React and Python
Build a full-stack AI-powered app in under an hour! In this hands-on Code Lab, you’ll create a React frontend and FastAPI backend that connects to a local AI model using LangChain. Learn how to send user input, process it with AI, and display intelligent results—all without needing deep AI expertise.

Path Info
Table of Contents
-
Challenge
Introduction
Welcome to the Guided: Building a Full-stack AI-powered Application with React and Python Lab
In this hands-on lab, you'll build a full-stack web application that integrates generative AI using tools like React, FastAPI, LangChain, and LiteLLM. The app you're building will serve as a Preschool Activity Generator—a chatbot designed to help users discover engaging preschool activities tailored for a specific age group and interest, such as "a 4-year-old interested in dinosaurs".
You'll create a system that allows a user to type in a natural language prompt, send it to a backend AI service, and receive a creative activity idea in return—all in real time.
You'll build the frontend and backend of the application from scaffolded starter files and progressively connect them to form a working full-stack system powered by a local AI model.
Lab Step Overview
### Step 1: Introduction to AI Integration in Full-stack AppsYou'll explore why and how AI is integrated into web applications. You'll get a high-level understanding of the architecture, including the roles of the React frontend, the FastAPI backend, and the AI model. You'll also learn how to start both services, and why the backend will error until Step 3 is complete.
### Step 2: Model InitializationIn this step, you'll import your LLM model using LangChain's
### Step 3: Python FastAPI AI EndpointChatOpenAI
class. You'll create an instance of the model and implement aget_ai_response
function to return generated responses for given prompts.You'll build the
### Step 4: Sending User Input From React UI to AI Model/analyze
endpoint using FastAPI. You'll configure CORS so your frontend can access it, implement the route to receive a prompt from the frontend, call the AI model, and return the response in JSON format.You'll complete the
### Step 5: Displaying AI-powered Results in the UIsendPrompt
function in yourapi.js
file using an axiosPOST
request. You’ll then update theApp
component to handle the form submission, send the prompt to the backend, and update the response state based on the returned result.You'll update the
### Step 6: Error Handling and Debugging Tips You’ll add error handling to both frontend and backend. In the backend, you’ll wrap the call to the model in a `try`/`except` block and return an error message if something fails. On the frontend, you’ll handle possible errors from Axios and show the user either the AI response or a meaningful error message.App
component to show a loading indicator while waiting for a response, and render the AI-generated content once it arrives.You'll also learn how to integrate and interact with modern AI services in full-stack applications by using LiteLLM, LangChain, and local chat models. Each technology plays a specific role:
- LiteLLM: A drop-in API for calling local or hosted LLMs with an OpenAI-compatible interface.
- Langchain: A framework that simplifies building AI-powered apps with LLMs.
- Chat Models: LLMs designed for conversational or instruction-following tasks, such as answering questions or generating responses based on prompts.
Scenario Overview
You are building a chatbot that helps parents, teachers, or caregivers discover fun and educational preschool activities. The app accepts prompts like:
Generate a fun preschool activity for a 4-year-old interested in dinosaurs.
This prompt is sent from the frontend to the backend, where it's processed by an AI model. The AI returns a suggested activity, which is then displayed to the user on the page.
How to Start the Project
To build and run this full-stack AI-powered application, you'll need to start both the frontend and backend services. You will use two Terminal tabs—one for each service—and a third optional tab to check that services are running.
1. Start the Backend API (Terminal Tab 1)
- Navigate to the backend directory:
cd backend
- Start the FastAPI server:
uvicorn main:app --reload
Note: You won’t be able to start the backend without errors until after you complete Task 2 in Step 3. This is because the backend requires an app instance and a model definition that you'll create during the lab.
2. Start the Frontend React App (Terminal Tab 2)
- Navigate to the frontend directory:
cd frontend
- Start the development server:
npm run dev
This will start the React frontend at: http://localhost:3000
You can open this URL in the Web Browser tab. When it's running, you'll see a page with the header:
Preschool Activity Generator
3. Check Supporting Services (Terminal Tab 3, optional)
In another Terminal, use these commands to check that your local AI infrastructure is running:
Check that the Ollama LLM is running:
curl http://localhost:11434
Check that LiteLLM is running and healthy:
curl http://localhost:4000/health
After completing Step 3 in the lab, your backend will be fully functional. At that point, you can test the AI pipeline end-to-end by sending a prompt using curl:
curl -X POST http://localhost:8000/analyze -H "Content-Type: application/json" -d '{"prompt": "Generate a fun preschool activity for a 4-year-old interested in dinosaurs."}'
This will return a JSON response from the AI with an activity suggestion.
What You’ll Learn
- How to structure a full-stack app using React and FastAPI
- How to integrate a generative AI model using LangChain and LiteLLM
- How to send and receive data between frontend and backend using JSON
- How to create an API endpoint that processes prompts and returns AI-generated responses
- How to build a React UI that sends user input and displays AI-powered suggestions
Technologies Used
- React – For building the frontend interface
- FastAPI – For building the backend API in Python
- Langchain – For abstracting AI model interactions
- LiteLLM – For serving and proxying local or hosted language models
- Chat Models – For generating intelligent responses based on user input
Prerequisites
You should have basic familiarity with:
- React components and state
- Making HTTP requests (e.g., using axios or
fetch
) - Python functions and modules
- JSON and REST APIs
No prior experience with LangChain or LiteLLM is required—this lab will introduce you to those tools step by step.
You're now ready to begin building your full-stack, AI-powered application!
Tip: If you need assistance at any point, you can refer to the
solution
directory. It contains subdirectories for each of the steps with example implementations. -
Challenge
Model Initialization in Python
Step 2: Model Initialization in Python
In this step, you'll begin connecting your backend service to an actual large language model (LLM). This forms the core of your AI-powered chatbot, which will generate preschool activity ideas based on user input.
You’ll be working in the
backend/model.py
file to initialize the model and implement the logic that handles AI responses.Why It Matters
This setup is crucial because it connects your backend to the AI that generates the preschool activity suggestions.
2.1 Initialize
get_ai_response
FunctionIn this task, you will start setting up the
get_ai_response
function insidebackend/model.py
.Currently, the function is just a placeholder:
def get_ai_response(prompt): return
Update the file by doing the following:
- Import
ChatOpenAI
fromlangchain_openai
.
Keep the
get_ai_response
function defined but leave its implementation empty for now.## 2.2 Create an Instance of the Chat ModelSolution
from langchain_openai import ChatOpenAI def get_ai_response(prompt): return
Now that you've imported
ChatOpenAI
, it's time to initialize the model you'll use to generate responses.In
backend/model.py
, make the following update:- Create a variable named
model
. - Set it equal to a new instance of
ChatOpenAI
. - Use the following parameters in the constructor:
model="gpt-4o"
base_url="http://0.0.0.0:4000"
api_key="test-key"
This sets up your connection to the local LiteLLM server.
Why This Matters
This model instance is your application's interface to the AI. You'll use it to send prompts and receive generated responses.## 2.3 Complete the AI Response FunctionSolution
from langchain_openai import ChatOpenAI model = ChatOpenAI(model="gpt-4o", base_url="http://0.0.0.0:4000", api_key="test-key") def get_ai_response(prompt): return
Now that your model is initialized, it’s time to finish the
get_ai_response
function so it actually sends a prompt to the AI and returns the response.In
backend/model.py
, update theget_ai_response
function:- Use the
model.invoke(prompt)
method to send the prompt to the model. - Store the result in a variable named
response
. - Return
response.content
—this contains the text generated by the AI.
Why This Matters
This function is now fully wired to interact with the LLM. Whenever you call `get_ai_response` with a prompt, the model will return a relevant response that can be displayed in your UI.Solution
from langchain_openai import ChatOpenAI model = ChatOpenAI(model="gpt-4o", base_url="http://0.0.0.0:4000", api_key="test-key") def get_ai_response(prompt): response = model.invoke(prompt) return response.content
- Import
-
Challenge
Python FastAPI AI Endpoint
Step 3: Python FastAPI AI Endpoint
In this step, you'll build a backend API using FastAPI that acts as a bridge between your frontend app and the AI model.
You'll start with an empty
main.py
file and incrementally build up a fully functional endpoint. This endpoint will accept a prompt from the frontend, pass it to the language model for analysis, and return the generated response.Here’s what you’ll accomplish across the next few tasks:
- Import
FastAPI
,CORS middleware
, and your AI function. - Initialize the FastAPI app.
- Configure CORS so your React frontend can talk to your backend.
- Create a
POST
route at/analyze
to accept prompts. - Parse the incoming request body.
- Pass the prompt to your AI model and return the result.
By the end of Step 3, you'll have a working backend that accepts requests and generates preschool activity ideas using your local LLM.
Note: In the Code Labs environment, your request to the AI model may sometimes take longer than expected or even timeout. If that happens, try again—your local model may just be running slowly.
3.1 Import Dependencies for the API
In this step, you'll begin building your FastAPI backend by importing the necessary libraries and functions.
In the empty
backend/main.py
file, add the following import statements:- Import
FastAPI
andRequest
fromfastapi
—these are needed to create the API and handle incoming requests. - Import
CORSMiddleware
fromfastapi.middleware.cors
—this will allow your React frontend to communicate with the backend. - Import the
get_ai_response
function frommodel
—this connects your API to the AI model you initialized earlier.
Why This Matters
These imports set up everything you need to start building the FastAPI app, connect it to your AI model, and allow communication with your frontend.## 3.2 Initialize the FastAPI ApplicationSolution
from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware from model import get_ai_response
Now that you've imported your dependencies, the next step is to create an instance of the FastAPI application.
- In
backend/main.py
, create an instance of theFastAPI
class and assign it to a variable namedapp
.
Why This Matters
This app instance is what FastAPI uses to register routes, middlewares, and other configurations. It becomes the entry point for your backend service.Solution
from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware from model import get_ai_response app = FastAPI()
Note: At this point your backend should compile without errors. ## 3.3 Add CORS Middleware to Allow Frontend Requests
Now that you’ve initialized the FastAPI app, the next step is to allow requests from the frontend to the backend. This is done by adding Cross-Origin Resource Sharing (CORS) middleware.
In
backend/main.py
, make the following changes:- After creating the
FastAPI
app instance, callapp.add_middleware(...)
. - Use
CORSMiddleware
to configure cross-origin access. - Set
allow_origins=["*"]
to accept requests from any domain (suitable for development). - Set
allow_methods=["*"]
andallow_headers=["*"]
to allow all HTTP methods and headers.
Why This Matters
This setup allows the frontend (which may be running on a different origin, such as `http://localhost:3000`) to make requests to your backend without being blocked by the browser's same-origin policy. This configuration is essential for development in a full-stack environment.## 3.4 Create the Analyze EndpointSolution
from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware from model import get_ai_response app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"], )
In this step, you’ll define your first backend endpoint. This endpoint will handle incoming
POST
requests from the frontend and return a basic response.In
backend/main.py
, do the following:- Add a new route using the
@app.post("/analyze")
decorator. - Define an async function called
analyze
that takes aRequest
object as input. - For now, just return a hardcoded JSON response with an empty
result
.
Why This Matters
This defines an HTTP `POST` endpoint at `/analyze`. When the frontend sends a prompt to this URL, FastAPI calls `analyze` function. For now, it just returns a dummy response, but you’ll add the actual AI logic next.## 3.5 Parse the Request BodySolution
from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware from model import get_ai_response app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"], ) @app.post("/analyze") async def analyze(request: Request): return {"result": ""}
Now that you’ve created the
/analyze
endpoint, it’s time to extract the JSON data sent from the frontend.In
backend/main.py
, do the following:- Inside the
analyze
function, useawait request.json()
to read and parse the incoming JSON request body. - Store the result in a variable named
body
. - Return the same hardcoded response for now.
Why This Matters
When the frontend sends a `POST` request, the prompt is included in the request body as JSON. Using `await request.json()` allows you to access that data and store it for later use.## 3.6 Call the AI ModelSolution
from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware from model import get_ai_response app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"], ) @app.post("/analyze") async def analyze(request: Request): body = await request.json() return {"result": ""}
Now that you're receiving the prompt from the frontend, it's time to pass it to the AI model and return the generated response.
In
backend/main.py
, do the following:- Extract the
"prompt"
field from the parsed request body. - Pass the
prompt
to theget_ai_response
function. - Store the result in a variable called
response
. - Return a JSON object with a
result
field set to theresponse
variable:return {"result": response}
Why This Matters
This change connects the frontend input to the AI model response, completing the flow: user input → backend → LLM → frontend response.Solution
from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware from model import get_ai_response app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"], ) @app.post("/analyze") async def analyze(request: Request): body = await request.json() prompt = body.get("prompt") response = get_ai_response(prompt) return {"result": response}
- Import
-
Challenge
Sending User Input From React UI to AI Model
Step 4: Sending User Input From React UI to AI Model
In this step, you will connect the frontend React application with your backend AI model. You will start by implementing the API call to send user input (the prompt) from the React UI to the FastAPI backend. Then, you will update the React component to handle user input through a form, send that input to the backend, and finally display the AI model’s response in the UI.
By the end of this step, you’ll have a functional interface where users can submit prompts and see generated results in real time.
4.1 Implement the
sendPrompt
Function infrontend/api.js
Update the
sendPrompt
function to send the user’s prompt to your backend API and return the response.In
frontend/src/api.js
, make the following changes:- Use
axios.post
to send aPOST
request to/api/analyze
. - Pass the
prompt
in the request body as JSON. - Include
'Content-Type': 'application/json'
header. - Await the response and return the response data.
Solution
import axios from "axios"; export async function sendPrompt(prompt) { const res = await axios.post( "/api/analyze", { prompt }, { headers: { 'Content-Type': 'application/json', }, } ); return res.data; }
Note: The
POST
request is made to/api/analyze
because the Code Labs environment requires all requests to use the same port. The/api
path is proxied tohttp://localhost:8000
, your backend server. This proxy setup is configured infrontend/vite.config.js
. ## 4.2 Add Response State to React ComponentUpdate
frontend/src/App.jsx
to add state for storing the AI response.Make the following changes:
- Add a new state variable
response
initialized as an empty string usinguseState
.
## 4.3 Update `handleSubmit` to Call API and Store ResponseSolution
import { useState } from 'react'; import { sendPrompt } from './api'; function App() { const [input, setInput] = useState(''); const [loading, setLoading] = useState(false); const [response, setResponse] = useState(''); const handleSubmit = async (e) => { e.preventDefault(); setLoading(true); setLoading(false); }; return ( <div style={{ padding: "2rem" }}> <h1>Preschool Activity Generator</h1> </div> ); } export default App;
Modify the
handleSubmit
function infrontend/src/App.jsx
to:- Leave
e.preventDefault();
at the start of the function. - Set
loading
totrue
at the start of the function. - Call
sendPrompt
with the currentinput
value andawait
the result. - Set
loading
tofalse
after the API call completes. - Set the
response
state to the returned result.
## 4.4 Add Input Form to the UISolution
import { useState } from 'react'; import { sendPrompt } from './api'; function App() { const [input, setInput] = useState(''); const [loading, setLoading] = useState(false); const [response, setResponse] = useState(''); const handleSubmit = async (e) => { e.preventDefault(); setLoading(true); const res = await sendPrompt(input); setLoading(false); setResponse(res.result); }; return ( <div style={{ padding: "2rem" }}> <h1>Preschool Activity Generator</h1> </div> ); } export default App;
Enhance the return statement in your
App
component by adding a form with:-
An
<input>
field to capture user input:- Set
type
to"text"
. - Set the
placeholder
to"Enter age and interest (e.g. 3 years old, animals)"
. - Bind
value
to theinput
state. - Use an
onChange
handler to update theinput
state.
- Set
-
A submit
<button>
to trigger the form submission. -
Attach the
handleSubmit
function to the form’sonSubmit
event.
Solution
import { useState } from 'react'; import { sendPrompt } from './api'; function App() { const [input, setInput] = useState(''); const [loading, setLoading] = useState(false); const [response, setResponse] = useState(''); const handleSubmit = async (e) => { e.preventDefault(); setLoading(true); const res = await sendPrompt(input); setLoading(false); setResponse(res.result); }; return ( <div style={{ padding: "2rem" }}> <h1>Preschool Activity Generator</h1> <form onSubmit={handleSubmit}> <input type="text" placeholder="Enter age and interest (e.g. 3 years old, animals)" value={input} onChange={(e) => setInput(e.target.value)} style={{ width: "60%" }} /> <button type="submit">Submit</button> </form> </div> ); } export default App;
Note: At this point, when you visit http://localhost:3000 in the Web Browser tab, you’ll see a form. Submitting the form will trigger a request to your backend, which you can verify in the browser’s network tab. In the next step, you’ll learn how to display the results from that API request.
- Use
-
Challenge
Displaying AI-powered Results in the UI
Step 5: Displaying AI-powered Results in the UI
In this step, you will enhance the React UI to provide feedback while waiting for the AI response and display the generated results once available. This improves user experience by showing a loading indicator during the request and presenting the AI-powered activity suggestions clearly.
5.1 Display Loading State and Response in the UI
Update the React component to provide feedback while waiting for the AI model's response and display the response once it's received.
Make the following changes to
frontend/src/App.jsx
:- Inside the
return
statement, below the form, add a conditional rendering that:- Shows the text
Loading...
when theloading
state istrue
- Otherwise, displays the
response
text
- Shows the text
This will improve the user experience by showing a loading indicator while the backend processes the prompt and then displaying the generated activity suggestion.
Solution
import { useState } from 'react'; import { sendPrompt } from './api'; function App() { const [input, setInput] = useState(''); const [loading, setLoading] = useState(false); const [response, setResponse] = useState(''); const handleSubmit = async (e) => { e.preventDefault(); setLoading(true); const res = await sendPrompt(input); setLoading(false); setResponse(res.result); }; return ( <div style={{ padding: "2rem" }}> <h1>Preschool Activity Generator</h1> <form onSubmit={handleSubmit}> <input type="text" placeholder="Enter age and interest (e.g. 3 years old, animals)" value={input} onChange={(e) => setInput(e.target.value)} style={{ width: "60%" }} /> <button type="submit">Submit</button> </form> {loading ? <p>Loading...</p> : <p>{response}</p>} </div> ); } export default App;
Note: Now, when you submit the form at http://localhost:3000, you’ll see a loading state while waiting for the API response. Once the API responds, the AI-generated content will be displayed.
- Inside the
-
Challenge
Error Handling and Debugging Tips
Step 6: Error Handling and Debugging Tips
In this step, you'll enhance the robustness of your application by adding proper error handling both in the backend and frontend. This ensures that if something goes wrong—whether due to network issues, unexpected inputs, or server errors—your app can gracefully handle the situation and inform the user appropriately.
You will complete the following:
- Update the backend API endpoint to catch exceptions and return meaningful error messages.
- Modify the frontend API call to handle request errors without crashing.
- Adjust the React UI to display any errors returned from the backend, improving user feedback and debugging experience.
These improvements will help make your AI-powered app more reliable and user-friendly. ## 6.1 Add Error Handling to the
/analyze
EndpointEnhance the
/analyze
endpoint inbackend/main.py
by adding basic error handling. This will help gracefully catch and return any errors that occur when calling the AI model.Make the following changes:
- Add a comment above the CORS middleware setup explaining it is for the React frontend.
- Wrap the call to
get_ai_response(prompt)
inside atry
block. - Return the AI response inside the
try
block as before. - Add an
except
block to catch exceptions and return a JSON with an"error"
key and the exception message as the value.
## 6.2 Add Error Handling to `sendPrompt` API CallSolution
from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware from model import get_ai_response app = FastAPI() # CORS for React frontend app.add_middleware( CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"], ) @app.post("/analyze") async def analyze(request: Request): body = await request.json() prompt = body.get("prompt") try: response = get_ai_response(prompt) return {"result": response} except Exception as e: return {"error": str(e)}
Update the
sendPrompt
function infrontend/src/api.js
to handle possible errors when making the POST request to the backend. This will ensure your frontend gracefully handles failures like network issues or server errors.Make the following changes:
- Wrap the
axios.post
call in atry
block. - Return the response data as before inside the
try
. - Add a
catch
block to catch any errors. - Return an object containing an
error
key with the error message.
## 6.3 Display Errors Returned from the Backend in the UISolution
import axios from "axios"; export async function sendPrompt(prompt) { try { const res = await axios.post( "/api/analyze", { prompt }, { headers: { 'Content-Type': 'application/json', }, } ); return res.data; } catch (error) { return { error: error.message }; } }
Update the
handleSubmit
function infrontend/src/App.jsx
so that the UI can display error messages returned from the backend.Make the following changes:
- When setting the response state, check if
res.result
exists. - If
res.result
is falsy, displayres.error
instead. - This allows error messages from the backend to be shown to the user.
Solution
import { useState } from 'react'; import { sendPrompt } from './api'; function App() { const [input, setInput] = useState(''); const [loading, setLoading] = useState(false); const [response, setResponse] = useState(''); const handleSubmit = async (e) => { e.preventDefault(); setLoading(true); const res = await sendPrompt(input); setLoading(false); setResponse(res.result || res.error); }; return ( <div style={{ padding: "2rem" }}> <h1>Preschool Activity Generator</h1> <form onSubmit={handleSubmit}> <input type="text" placeholder="Enter age and interest (e.g. 3 years old, animals)" value={input} onChange={(e) => setInput(e.target.value)} style={{ width: "60%" }} /> <button type="submit">Submit</button> </form> {loading ? <p>Loading...</p> : <p>{response}</p>} </div> ); } export default App;
-
Challenge
Conclusion
Conclusion
Congratulations on completing the lab!
You have successfully built a full-stack AI-powered application, integrating a Python FastAPI backend with a React frontend. Along the way, you:
- Initialized and used a local LLM model with LangChain’s
ChatOpenAI
- Created a FastAPI endpoint to serve AI responses with proper request handling
- Connected the React frontend to the backend API, sending user input dynamically
- Displayed AI-generated results in the UI with loading states and user feedback
- Implemented error handling and debugging strategies for a more robust app
Together, these skills provide a solid foundation for building modern, interactive AI applications that span backend and frontend technologies.
Now that you’ve seen how to connect user interfaces to AI models seamlessly, consider expanding your app with additional features like user authentication, more advanced prompts, or richer UI components. Happy coding!
- Initialized and used a local LLM model with LangChain’s
What's a lab?
Hands-on Labs are real environments created by industry experts to help you learn. These environments help you gain knowledge and experience, practice without compromising your system, test without risk, destroy without fear, and let you learn from your mistakes. Hands-on Labs: practice your skills before delivering in the real world.
Provided environment for hands-on practice
We will provide the credentials and environment necessary for you to practice right within your browser.
Guided walkthrough
Follow along with the author’s guided walkthrough and build something new in your provided environment!
Did you know?
On average, you retain 75% more of your learning if you get time for practice.