
Guided: Python Development with LangChain
In this lab you will learn to build a summarizer agent step by step. You will create prompts, connect them into a chain, wrap that chain in a tool, and give it to an agent. By the end, you will be able to summarize any text into clear bullets with a TL;DR.

Challenge: Overview
Introduction
Welcome to Python Development with LangChain.
In this lab, you’ll build a mini research assistant inside a Flask app — wiring it up step by step. You’ll start with a custom prompt, turn it into a chain, wrap it in a tool, and finally give an agent the ability to decide when to use that tool.
By the end, the app will do real work:
- Answer direct questions
- Summarize long text into 3–5 bullets with a TL;DR
- Let an agent intelligently choose when to summarize and when to just answer
Outcomes
- Content & learning: Summarize long readings or transcripts into tight bullet points and a TL;DR
- Productivity: Turn messy meeting notes or tickets into actionable items
- Compliance & QA: Extract key facts consistently using a governed style guide
- Pipelines: Reuse the same summarizer as a building block inside RAG or workflow systems
Mental Model
- Prompt: A script you hand to an actor (tone + structure)
- Chain: A repeatable recipe that runs the script with a chosen model
- Tool: A button that triggers that recipe
- Agent: A smart coordinator that reads the panel labels and decides which button to press — and when
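
To see these four roles side by side, here is a framework-free Python sketch. Everything in it is a toy stand-in, not a LangChain API: a format string plays the prompt, a stub function plays the model, a dict of labeled callables plays the tool panel, and a length check stands in for the agent's reasoning.

```python
PROMPT = "Summarize in bullets:\n{content}"            # Prompt: the "script"

def fake_model(prompt_text: str) -> str:               # stand-in for a real LLM
    return "- key point from the text\nTL;DR: one-line takeaway."

def summarizer_chain(content: str) -> str:             # Chain: prompt -> model
    return fake_model(PROMPT.format(content=content))

tools = {"smart_summarizer": summarizer_chain}         # Tool: a labeled button

def agent(user_input: str) -> str:                     # Agent: decides which button
    if len(user_input) > 40:                           # crude stand-in for reasoning
        return tools["smart_summarizer"](user_input)
    return "Direct answer, no tool needed."

print(agent("What is LangChain?"))                     # short input: answers directly
print(agent("A very long passage " * 10))              # long input: presses the button
```

The lab replaces each stand-in with the real thing: the format string becomes a `ChatPromptTemplate`, the stub becomes an LLM, and the length check becomes ReAct-style reasoning.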
Repo Layout
```
workspace/
├── app/
│   ├── base.py          # already provided
│   ├── prompts.py       # Step 1: your custom prompt
│   ├── chains.py        # Step 2: your chain
│   ├── tools.py         # Step 3: wrap chain in a tool
│   ├── agent.py         # Step 4: build the agent
│   └── templates/
│       └── index.html   # already provided
├── flask_app.py         # app entrypoint (already provided)
└── .env                 # created automatically when you paste your API key in the UI
```
Getting Started
- Click over to the Web Browser tab; you should see the UI.
- Paste your OpenAI API key into the UI field (it will be saved to `.env`).
- Follow the steps to build out your prompt → chain → tool → agent pipeline.
By the time you finish, you’ll have a working page that answers questions, summarizes text, and lets an agent make smart decisions about when to summarize.
Challenge: Prompts
In this step we’re going to learn about prompts—your app’s style guide for the model. You’ll see the basic structure, then finish a starter prompt that returns 3–5 bullets plus a one-line TL;DR.
Prompt structure (at a glance)
- System message = director’s notes (rules: tone, format, bullet count, labels)
- Human message = raw material (data for this request)
- Inputs = as few as possible → `{content}`
Your task
- Update the system message to enforce:
  - 3–5 concise bullets
  - Keep concrete facts (numbers, dates, names)
  - Include source names if present
  - Avoid fluff
  - End with a one-line takeaway starting with `TL;DR:`
  - Provide an explicit output template (see example in comments)
- Keep inputs minimal:
  - Only use the `{content}` placeholder in the human message.
- (Optional) Experiment:
  - Move one rule (e.g., “include source names”) from system to human, run it, and observe how behavior changes. Move it back.
Starter code
Modify the code below; you'll find it in `app/prompts.py`. If you get stuck, take a peek at the solved code at the bottom.

```python
# app/prompts.py
from __future__ import annotations

from langchain_core.prompts import ChatPromptTemplate


def build_summarizer_prompt() -> ChatPromptTemplate:
    """
    Starter prompt for summarization.

    TODOs for you:
    - Change bullet range from 2–3 to 3–5.
    - Add rules: keep concrete facts; include source names if present; avoid fluff.
    - Change the final line label from 'Summary:' to 'TL;DR:'.
    - Add an explicit output template showing 3 bullet lines and the final one-liner.
    """
    # SYSTEM = director's notes (structure, tone, global rules)
    system_msg = (
        "You are a helpful summarizer. "
        "Write a very short summary as 2–3 bullet points. "    # TODO: change to 3–5
        "Finish with a single line starting with 'Summary:'."  # TODO: change to 'TL;DR:'
        # TODO: Add rules:
        # - Keep concrete facts (numbers, dates, names).
        # - If sources (papers, orgs, authors) are mentioned, include their names.
        # - Avoid speculation and marketing fluff.
        # - Add an explicit output template, e.g.:
        #   Output format:
        #   - <bullet 1>
        #   - <bullet 2>
        #   - <bullet 3>
        #   TL;DR: <one sentence>
    )

    # HUMAN = raw material for this request (keep inputs minimal)
    human_msg = (
        "Summarize the following content.\n\n"
        "Content to summarize:\n{content}"  # <- only input key
    )

    return ChatPromptTemplate.from_messages([
        ("system", system_msg),
        ("human", human_msg),
    ])
```
Consider this
- Which input key must you supply when invoking this prompt? → `content`
- Why put structural rules (like bullet count) in the system message? They’re durable, high-priority instructions that should apply to every call, keeping output consistent regardless of the input data.
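
A framework-free way to convince yourself of both answers is to simulate the two messages with plain strings (illustrative only, not the LangChain classes). The system rules are baked in once when the prompt is built; every call supplies only the `content` key.

```python
# The system message is fixed at build time; the human message has exactly one
# slot, {content}, so callers only ever need to supply that single key.
SYSTEM = (
    "Write 3-5 concise bullets. Keep concrete facts (numbers, dates, names). "
    "End with a single line starting with 'TL;DR:'."
)
HUMAN_TEMPLATE = "Summarize the following content.\n\nContent to summarize:\n{content}"

def render_messages(content: str) -> list:
    # Mirrors the shape of what invoking a chat prompt produces: (role, text) pairs.
    return [("system", SYSTEM), ("human", HUMAN_TEMPLATE.format(content=content))]

msgs = render_messages("LangChain is a framework for LLM apps.")
# The system rules are identical on every call; only the human message varies.
```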
Show solution (finished `app/prompts.py`)

```python
# app/prompts.py
from __future__ import annotations

from langchain_core.prompts import ChatPromptTemplate


def build_summarizer_prompt() -> ChatPromptTemplate:
    """
    Returns a prompt that asks the model to:
    - Summarize input into 3–5 concise bullet points
    - Keep key facts; remove fluff
    - Include source names if present in the input
    - End with a one-sentence takeaway that starts with 'TL;DR:'

    The chain that uses this prompt must be invoked with {'content': <text>}.
    """
    system_msg = (
        "You are an expert technical summarizer. "
        "Write a concise summary as 3–5 bullet points. "
        "Keep concrete facts (numbers, dates, names). "
        "If the input mentions sources (e.g., papers, orgs, authors), include their names in the bullets. "
        "Avoid speculation and marketing fluff. "
        "After the bullets, add a single-sentence takeaway that begins with 'TL;DR:'. "
        "Output format:\n"
        "- <bullet 1>\n"
        "- <bullet 2>\n"
        "- <bullet 3>\n"
        "TL;DR: <one sentence>"
    )

    human_msg = (
        "Summarize the following content.\n\n"
        "Content to summarize:\n{content}"
    )

    return ChatPromptTemplate.from_messages([
        ("system", system_msg),
        ("human", human_msg),
    ])
```
Challenge: Chains
In this step we’re going to learn about chains: reusable “recipes” that connect your prompt to a model. Think of a chain as a named function that takes input, runs it through a prompt + model, and returns an `AIMessage` that you can read with `.content`.
Chain Structure (at a glance)
`Prompt | LLM → AIMessage`

- Prompt: The instructions & placeholders you built in the previous step.
- LLM: The model you’re using (e.g., `AzureChatOpenAI`).
- AIMessage: The output object you’ll read from.
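
The pipe itself is just operator overloading. Here is a toy sketch of the idea (a hypothetical `ToyRunnable`, not LangChain's `Runnable`) showing why `prompt | llm` yields something you can `.invoke()` and read with `.content`:

```python
class ToyRunnable:
    """Minimal stand-in for a Runnable, just to show the | shape."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` builds a new step: run a, then feed its output to b.
        return ToyRunnable(lambda value: other.invoke(self.invoke(value)))

class ToyAIMessage:
    """Stand-in for AIMessage: the final text lives on .content."""
    def __init__(self, content):
        self.content = content

prompt = ToyRunnable(lambda d: f"Summarize:\n{d['content']}")           # dict -> str
llm = ToyRunnable(lambda text: ToyAIMessage("- bullet\nTL;DR: done."))  # str -> message

chain = prompt | llm
result = chain.invoke({"content": "some long text"})
print(result.content)  # the final string is on .content
```

The real classes do much more (streaming, batching, async), but the composition shape is the same: each step's output type must match the next step's input type.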
Your task
- Build a chain that:
  - Uses your prompt from `app/prompts.py`
  - Pipes (`|`) it into the model
  - Returns the resulting chain so callers can do:
    `build_summarizer_chain(llm).invoke({"content": "..."}).content`
- Keep it small & predictable:
  - Accept only `{"content": <text>}` as input.
  - Return a single `AIMessage` as output.
- (Optional) Experiment:
  - Replace the real `chain = prompt | llm` with a stubbed response (like a fake summary) and see how the app behaves without hitting the model.
Starter code
Modify the code below; you'll find it in `app/chains.py`. If you get stuck, take a peek at the solved code at the bottom.

```python
# app/chains.py
from __future__ import annotations

from langchain_openai import AzureChatOpenAI

from .prompts import build_summarizer_prompt


def build_summarizer_chain(llm: AzureChatOpenAI):
    """
    Starter chain for summarization.

    TODOs for you:
    - Create a prompt using build_summarizer_prompt()
    - Pipe (|) the prompt into the llm to make a chain.
    - Return the chain so other code can call .invoke({'content': "..."})
    """
    # Build the prompt
    prompt = build_summarizer_prompt()

    # TODO: pipe the prompt into the llm to create the chain
    chain = prompt  # <- placeholder, replace with prompt | llm

    return chain
```
Consider this
- After `prompt | llm`, which attribute contains the final string? `AIMessage.content`
- Why limit the chain to one input key (`content`)? To keep the interface predictable and easy to invoke; no need to remember multiple keys.
Show solution (finished `app/chains.py`)

```python
# app/chains.py
from __future__ import annotations

from langchain_openai import AzureChatOpenAI

from .prompts import build_summarizer_prompt


def build_summarizer_chain(llm: AzureChatOpenAI):
    """
    Returns a Runnable that accepts {'content': <text>} and yields an AIMessage.

    Usage:
        build_summarizer_chain(llm).invoke({'content': "..."}).content
    """
    prompt = build_summarizer_prompt()
    chain = prompt | llm
    return chain
```
Challenge: Tools
In this step you’re going to learn about tools, the way agents “press buttons.” You’ll wrap your summarizer chain inside a `Tool` so the agent can call it whenever it sees text that should be summarized.
Tool Structure (at a glance)
```python
Tool(
    name="tool_name",                 # how the agent refers to it
    description="button label",       # how the agent decides to use it
    func=callable_that_returns_str,   # wraps chain.invoke()
)
```
- Name: Identifier the agent uses under the hood.
- Description: The label on the button — clear, action-oriented text helps the agent choose correctly.
- Func: A function that takes input, calls the chain, and returns a string.
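
As a framework-free illustration of how the three parts fit together, here is a hypothetical `MiniTool` dataclass (not the LangChain `Tool` class) with a stubbed summarizer in place of the real chain call:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MiniTool:
    name: str                   # identifier the agent uses under the hood
    description: str            # the "button label" the agent reads
    func: Callable[[str], str]  # text in, plain text out

def summarize_text(text: str) -> str:
    # In the lab, this body calls summarizer_chain.invoke({"content": text}).content
    return f"- {text[:30]}\nTL;DR: stub summary."

summarizer_tool = MiniTool(
    name="smart_summarizer",
    description="Summarize long text into 3-5 bullets ending with 'TL;DR:'.",
    func=summarize_text,
)

print(summarizer_tool.func("The Beatles were an English rock band."))
```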
Your task
- Build a tool that:
  - Calls your `build_summarizer_chain()` from `chains.py`
  - Defines a `summarize_text(text: str) -> str` function that:
    - Passes `{"content": text}` into the chain
    - Returns `.content` (a string)
- Create a `Tool` with:
  - `name="smart_summarizer"`
  - A clear, helpful `description` (agents use this to decide whether to call it)
  - `func=summarize_text`
- Return a list containing just that tool.
Starter code
Modify the code below; you'll find it in `app/tools.py`. If you get stuck, take a peek at the solved code at the bottom.

```python
# app/tools.py
from __future__ import annotations

from langchain.tools import Tool
from langchain_openai import AzureChatOpenAI

from .chains import build_summarizer_chain


def build_tools(llm: AzureChatOpenAI):
    """
    Starter tool setup for summarization.

    TODOs for you:
    - Build the summarizer chain with build_summarizer_chain().
    - Write summarize_text(text: str) -> str that invokes the chain and returns .content.
    - Wrap summarize_text in a Tool with a clear name and description.
    - Return a list containing this Tool.
    """
    summarizer_chain = build_summarizer_chain(llm)

    # TODO: define summarize_text that uses summarizer_chain.invoke()
    def summarize_text(text: str) -> str:
        return text  # <- placeholder; change to summarizer_chain.invoke(...).content

    # TODO: create the Tool
    summarizer_tool = Tool(
        name="placeholder_name",  # <- change to smart_summarizer
        description="TODO: write a clear label so the agent knows this summarizes text",
        func=summarize_text,
    )

    return [summarizer_tool]
```
Consider this
- What happens if your tool’s function returns something other than a string? The agent might fail or get confused; always return plain text.
- How could a vague description affect the agent? The agent might ignore the tool or use it in the wrong context.
Show solution (finished `app/tools.py`)

```python
# app/tools.py
from __future__ import annotations

from langchain.tools import Tool
from langchain_openai import AzureChatOpenAI

from .chains import build_summarizer_chain


def build_tools(llm: AzureChatOpenAI):
    """
    Return the lab's single tool: smart_summarizer.
    """
    summarizer_chain = build_summarizer_chain(llm)

    def summarize_text(text: str) -> str:
        return summarizer_chain.invoke({"content": text}).content

    summarizer_tool = Tool(
        name="smart_summarizer",
        description=(
            "Summarize long passages, notes, or content into 3–5 concise bullets "
            "and end with 'TL;DR: ...'. Input is raw text to summarize."
        ),
        func=summarize_text,
    )

    return [summarizer_tool]
```
Challenge: Agents
In this step, you’re going to learn about agents — think of them as your junior analyst: they read the available tool labels, reason step-by-step, and “press the right button” when it makes sense. Unlike a simple chain, which always runs the same prompt, an agent decides:
- Do I need to call a tool?
- Which tool should I use?
- What input should I send?
- What do I do with the result?
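
Those four questions can be mocked up as a toy decision loop (pure Python, not `AgentExecutor`; the "brain" here is a length heuristic standing in for the LLM's reasoning). Note how `max_iterations` caps the loop, just as it does in the real agent:

```python
from typing import Optional, Tuple

def stub_brain(question: str, observation: Optional[str]) -> Tuple[str, str]:
    # "Do I need a tool? Which one? What input?" -- stand-in for LLM reasoning.
    if observation is None and len(question) > 60:
        return ("smart_summarizer", question)       # long text: summarize first
    return ("final_answer", observation or "Short direct answer.")

def run_agent(question: str, tools: dict, max_iterations: int = 6) -> str:
    observation = None
    for _ in range(max_iterations):                 # hard cap on reasoning loops
        action, payload = stub_brain(question, observation)
        if action == "final_answer":                # "What do I do with the result?"
            return payload
        observation = tools[action](payload)        # "press the button"
    return observation or "Stopped at the iteration limit."

tools = {"smart_summarizer": lambda t: f"- {t[:25]}...\nTL;DR: summarized."}
print(run_agent("Who were the Beatles?", tools))                  # short: direct answer
print(run_agent("Summarize this: " + "long text " * 20, tools))   # long: tool call
```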
You’ll create an agent that knows how to call your `smart_summarizer` tool when it sees long text that should be summarized.
Agent Structure (at a glance)
```python
AgentExecutor(
    tools=[...],                       # The "buttons" the agent can press
    llm=llm,                           # The "brain" doing the reasoning
    agent=AgentType...,                # Reasoning style (e.g., ZERO_SHOT_REACT_DESCRIPTION)
    max_iterations=6,                  # Hard cap on reasoning loops before stopping
    early_stopping_method="generate",  # Still try to answer if loop limit hits
    handle_parsing_errors=True,        # Recover gracefully from formatting glitches
)
```
Your Task
- Build an agent that:
  - Accepts your tools list (from `tools.py`)
  - Uses the same `llm` you passed to your chain
  - Sets `AgentType.ZERO_SHOT_REACT_DESCRIPTION` as the reasoning strategy
  - Enables `verbose=True` (so you can see its thought process)
- Configure:
  - `max_iterations=6`: prevents infinite loops if the agent gets confused
  - `handle_parsing_errors=True`: prevents demos from crashing if the model outputs malformed reasoning steps
  - `early_stopping_method="generate"`: forces the agent to produce a best-effort answer even if it hits the step limit
- Return the `agent` so other parts of the app can call:
  `agent.invoke({"input": "Summarize this text ..."})`
Starter Code
Modify the code below; you'll find it in `app/agent.py`. If you get stuck, take a peek at the solved code at the bottom.

```python
# app/agent.py
from __future__ import annotations

import warnings

from langchain.agents import initialize_agent, AgentType
from langchain_openai import AzureChatOpenAI
from langchain_core._api.deprecation import LangChainDeprecationWarning

# Ignore noisy deprecation warnings in the notebook
try:
    warnings.filterwarnings("ignore", category=LangChainDeprecationWarning)
except Exception:
    pass


def build_agent(llm: AzureChatOpenAI, tools: list):
    """
    Starter agent for calling tools like smart_summarizer.

    TODOs for you:
    - Use initialize_agent() with your tools and llm.
    - Choose AgentType.ZERO_SHOT_REACT_DESCRIPTION.
    - Set verbose=True so you can see reasoning steps.
    - Limit max_iterations to 6.
    - Set handle_parsing_errors=True to make demos smooth.
    - Optionally use early_stopping_method="generate" so you still get output
      even if the agent hits the step limit.
    """
    # TODO: build and return the agent
    agent = None  # <- replace with call to initialize_agent(...)
    return agent
```
Consider This
- Which parameter caps the number of reasoning steps? `max_iterations`
- Why set `handle_parsing_errors=True`? To keep your app from crashing if the LLM outputs slightly malformed intermediate steps.
- Why use `verbose=True`? So you can see the "thoughts" → "actions" → "observations" flow, which makes learning how agents work much clearer.
Quick Usage Example
Once you implement `build_agent`, try running:

```python
from app.base import build_llm_from_env
from app.tools import build_tools
from app.agent import build_agent

llm = build_llm_from_env()
tools = build_tools(llm)
agent = build_agent(llm, tools)

result = agent.invoke({"input": "Summarize: LangChain is a framework for building LLM-powered apps..."})
print(result["output"])
```
When `verbose=True`, you’ll see logs like:

```
> Entering new AgentExecutor chain...
Thought: The user asked for a summary, I should call the smart_summarizer tool.
Action: smart_summarizer
Action Input: LangChain is a framework...
Observation: - LangChain helps you...
Final Answer: - LangChain helps developers build...
TL;DR: LangChain is a framework for building LLM apps.
> Finished chain.
```
This trace shows the reasoning process ("Thought"), the chosen tool ("Action"), the tool result ("Observation"), and finally the composed answer.
Show solution (finished `app/agent.py`)

```python
# app/agent.py
from __future__ import annotations

import warnings

from langchain.agents import initialize_agent, AgentType
from langchain_openai import AzureChatOpenAI
from langchain_core._api.deprecation import LangChainDeprecationWarning

# Ignore noisy deprecation warnings in the notebook
try:
    warnings.filterwarnings("ignore", category=LangChainDeprecationWarning)
except Exception:
    pass


def build_agent(llm: AzureChatOpenAI, tools: list):
    """
    Creates an AgentExecutor that can call our smart_summarizer tool when helpful.

    Parameters explained:
    - tools: The list of available tools (like buttons the agent can press).
    - llm: The language model "brain" that reasons step by step.
    - agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION:
        Uses the ReAct pattern: think → act (call tool) → observe → repeat.
    - verbose=True: Prints the reasoning steps so you can see how the agent thinks.
    - max_iterations=6: Hard limit to avoid infinite loops.
    - early_stopping_method="generate":
        Even if the step limit is hit, the agent will still try to produce a final answer.
    - handle_parsing_errors=True:
        Prevents the app from crashing if the model outputs slightly malformed reasoning steps.
    """
    agent = initialize_agent(
        tools=tools,
        llm=llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # reasoning strategy
        verbose=True,                                 # show thoughts/actions
        max_iterations=6,                             # max reasoning steps
        early_stopping_method="generate",             # still generate output if stuck
        handle_parsing_errors=True,                   # recover from bad output
    )
    return agent
```
Challenge: Wrap Up
Try It: Comparing QA, Summarize, and Agent
Paste this text into each mode and observe how the outputs differ:
Test Text

The Beatles were an English rock band formed in Liverpool in 1960. The core lineup of the band comprised John Lennon, Paul McCartney, George Harrison and Ringo Starr. They are widely regarded as the most influential band in Western popular music and were integral to the development of 1960s counterculture and the recognition of popular music as an art form. Rooted in skiffle, beat and 1950s rock 'n' roll, their sound incorporated elements of classical music and traditional pop in innovative ways. The band also explored music styles ranging from folk and Indian music to psychedelia and hard rock. As pioneers in recording, songwriting and artistic presentation, the Beatles revolutionised many aspects of the music industry and were often publicised as leaders of the era's youth and sociocultural movements.
Led by primary songwriters Lennon and McCartney, the Beatles evolved from Lennon's previous group, the Quarrymen, and built their reputation by playing clubs in Liverpool and Hamburg, Germany, starting in 1960, initially with Stuart Sutcliffe playing bass. The core trio of Lennon, McCartney and Harrison, together since 1958, went through a succession of drummers, including Pete Best, before inviting Starr to join them in 1962. Manager Brian Epstein moulded them into a professional act, and producer George Martin developed their recordings, greatly expanding their domestic success after they signed with EMI and achieved their first hit, "Love Me Do", in late 1962. As their popularity grew into the intense fan frenzy dubbed "Beatlemania", the band acquired the nickname "the Fab Four". Epstein, Martin or other members of the band's entourage were sometimes informally referred to as a "fifth Beatle".
QA Mode
- Select Simple QA from the dropdown.
- Type: Who were the Beatles?
- Click Run.
What to expect: A short, factual answer — usually one or two sentences — because QA mode just runs a direct prompt with no tool calls.
Summarize Mode
- Select Summarize text from the dropdown.
- Paste the entire Beatles text above.
- Click Run.
What to expect:
- 3–5 bullet points with key facts (dates, names, genres)
- A single TL;DR: line at the end
- The same structured format every time, because this mode always uses the summarizer prompt
Agent Mode
- Select Agent from the dropdown.
- Paste the same Beatles text again.
- Click Run.
What to expect:
- The agent may or may not call the summarizer tool
- You’ll likely see one short summary sentence rather than bullets, because the agent decides it can answer directly
- If you watch the logs with verbose mode on, you’ll see there’s no tool call in this case
Key Takeaways
- QA Mode: Direct, simple answer — great for quick questions.
- Summarize Mode: Always returns the strict bullet + TL;DR format — best for consistent output.
- Agent Mode: Decides dynamically whether to call the summarizer tool — flexible, but less predictable.