Guided: Building an AI-powered API with Python and LangChain
Generative AI (GenAI) has gained attention for its capabilities, but many find it difficult to see how it can enhance their own projects. This hands-on learning experience will guide you through building a Python API that seamlessly integrates with GenAI using LangChain. By the end of this lab, you'll have a solid understanding of how to leverage GenAI in your own projects.
Welcome to the Hands-on Lab:
Guided: Building an AI-Powered API with Python and LangChain
Generative AI (GenAI) has gained attention for its various capabilities in processing natural language requests and creating compelling responses.
This lab assumes you have experience with these capabilities and are interested in learning how to integrate them into your own application.
The following Steps will guide you through building a recipe suggestion API using the FastAPI library and the LangChain framework to communicate with an AI API.
By the end of this lab, you'll have a solid foundation to leverage GenAI in your own projects.
General Prerequisites
Python Development
While this lab will guide you through all relevant steps, having a foundational understanding of Python development will be beneficial.
Basic Prompt Engineering
GenAI is a powerful creative tool, but achieving the desired results depends heavily on how you frame your requests, which is where prompt engineering comes in. This skill is especially important when combining GenAI's creativity with the precision that code demands. This lab will explore techniques for bridging creative expression and predictable results, demonstrating how prompt engineering can enhance your development process.
Learning Environment
Before diving in, take a moment to familiarize yourself with the lab's interface.
README Tab
These instructions are accessible from the README tab, which is easily identifiable by the selected book icon on the left. You can return to these instructions anytime by clicking on this tab.
Explorer Tab
Access the filesystem quickly through the Explorer tab, marked by the stacked paper icon at the top left. Open this tab whenever you need to interact with the project files.
Toggle Fullscreen
In the upper right corner of this VSCode interface, look for an icon featuring two arrows pointing away from each other. This is the Toggle Fullscreen icon.
Clicking it switches the interface to fullscreen mode, offering an expanded view. To exit fullscreen mode, simply press the `esc` key.
When ready, press the double right arrow button to begin exploring how to incorporate AI into your application.
LangChain Framework
To integrate AI calls into your application, you have various options. One approach is to use Python's requests library to directly call APIs. However, this method requires you to handle requests and responses manually. Frameworks like LangChain simplify this process.
LangChain is an open-source framework that streamlines the development of AI-powered applications by providing abstractions and tools for seamlessly integrating large language models into Python projects.
Chat Models
LangChain offers wrappers around various LLM (Large Language Model) providers, referred to as models.
A key advantage of using these models is the ability to easily swap out LLMs without altering the underlying application logic built with other LangChain components.
This flexibility enables you to experiment with different language models and select the one that best fits your application's needs.
Model Initialization
To see this in action, open the `/demos/model.py` file.
At the top of the file, you'll see the `ChatOpenAI` model imported from the `langchain_openai` package. This model is specifically designed to interact with the OpenAI API. Each supported LLM is housed within its own package and is designed to interact with its specific API.
When initializing the model, you have several configuration options. For this lab, the model will utilize the `gpt-4o` variant, which will be accessed via the local proxy at `http://0.0.0.0:4000`, using the temporary key `sk-1234`.
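For reference, a minimal sketch of this initialization is shown below. The parameter names (`model`, `base_url`, `api_key`) come from the `langchain_openai` package; the values are the lab settings described above.
```python
# A minimal sketch of the model setup in /demos/model.py,
# assuming the langchain_openai package is installed.
from langchain_openai import ChatOpenAI

# Point the model at the lab's local proxy using the temporary key.
model = ChatOpenAI(
    model="gpt-4o",
    base_url="http://0.0.0.0:4000",
    api_key="sk-1234",
)
```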
Prompts
With your model instance ready, you can start making prompt requests. The simplest way to do this is by calling the `invoke()` function and passing in your desired prompt.
Modify the line `model.invoke("")` to include your prompt. Since this lab will build a recipe suggestion API, you might use a prompt like: `I would like a simple breakfast recipe`
Testing
To test this, you can run the following command in the Terminal:
```
python3 demos/model.py
```
When you run this script, you'll see a lot of content printed to the terminal. If you scroll to the top, you'll notice that it responds with an `AIMessage` object. For the purposes of this lab, you'll focus specifically on the `content` property of this response, which should contain a recipe recommendation.
In the next Step, you will be introduced to LangChain's components that help compose more complex prompts.
Chat Messages
In addition to single prompt requests, many LLMs provide a chat message API that accepts a collection of messages. Each message is tagged with a specific role, allowing for better control in engineering the desired results.
Roles
In the previous Step, you were introduced to LangChain's `AIMessage` object, which represents OpenAI's assistant role. While it is used here as a response, it can also be combined with other role-type messages.
`system` and `user` are two other role types used by OpenAI. The system message helps set the context and characteristics that the AI should assume when responding to the user's message.
LangChain encapsulates these roles in the `SystemMessage` and `HumanMessage` components.
Prompt Engineering
These different types of messages can be combined in various ways to achieve the desired output. This process is often referred to as prompt engineering.
You can see this in action by opening the `/demos/messages.py` file.
For the recipe suggestion API being developed, you can create a `SystemMessage` that helps the GenAI assume the characteristics of an expert in a specific cuisine style, such as `"You are an expert in French cuisine."`
Then the `HumanMessage` could request a specific mealtime, such as `"I would like an easy breakfast recipe."`
This collection of messages is then grouped together into a list to create a single prompt.
Testing
You can see this script in action by typing the following command into the Terminal:
```
python3 demos/messages.py
```
These various message roles can be combined in multiple ways to engineer powerful prompts that produce the desired outcomes. However, as prompts become more complex, they can be difficult to manage. The next Step will explore methods for effectively managing and combining these components.
LangChain Expression Language (LCEL)
To help manage these various components, LangChain has introduced the LangChain Expression Language (LCEL).
This language enables you to easily chain together different components, such as models and messages, to create more complex workflows.
Using LCEL
In the same `/demos/messages.py` file, you can refactor the line:
```python
result = model.invoke([system, human])
```
Instead, use LCEL to first combine the messages into a single `prompt`:
```python
prompt = (system + human)
```
This `prompt` can then be piped (`|`) into the `model` to create a `chain`:
```python
chain = prompt | model
```
You can `invoke()` this `chain` in a similar way as before, but this time you will pass in an empty dictionary (`{}`):
```python
result = chain.invoke({})
```
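Taken together, the refactored section of `demos/messages.py` might look like this sketch. As an aside, adding two messages produces a prompt template (a `ChatPromptTemplate` under the hood), which is why `invoke()` now takes a dictionary of variables, currently empty.
```python
# A sketch of the LCEL refactor, reusing the system/human messages
# and model defined earlier in the file.
prompt = system + human    # combine messages into a prompt template
chain = prompt | model     # pipe the prompt into the model
result = chain.invoke({})  # no variables yet, so pass an empty dict
print(result)
```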
Testing
You can rerun this script to verify the refactoring still works:
```
python3 demos/messages.py
```
Replacing one line with three may not seem like an improvement, but it pays off as workflows become more complex.
Refactor to One Line
To demonstrate the potential of LCEL, you could refactor the above three lines into a single line:
```python
result = ((system + human) | model).invoke({})
```
Feel free to test this by refactoring to this single-line version and rerunning the `messages.py` script.
The LangChain prompt is coming together, but a few more aspects need to be addressed before it becomes useful for a recipe suggestion API.
While a French breakfast sounds delicious, it wouldn't make for a very interesting API. In the next Step, you will modify these messages to accept variables.
Passing Variables
To create a more engaging recipe API, certain parts of the messages need to be easily modifiable. This can be accomplished using message templates.
You can achieve this by switching to the `SystemMessagePromptTemplate` and `HumanMessagePromptTemplate` components.
Prompt Templates
By opening the `/demos/variables.py` file, you will see that the message components have been replaced with their `PromptTemplate` versions.
You'll notice that these components are initialized using `from_template()`. The previous strings have been copied over, but you will need to refactor these prompts to accept variables for the type of cuisine and mealtime.
Declaring Variables
To declare a variable in a template, use the format `{variable}`.
You can replace the word `French` with the variable placeholder `{cuisine}`. Similarly, replace `breakfast` with `{mealtime}`.
Providing Values
Now that the prompts have variable sections, you can pass the desired values as a dictionary to the `invoke()` call. Each key in the dictionary corresponds to a variable name in the declared placeholders.
For example, you could call the `invoke()` method with the following dictionary:
```python
result = chain.invoke({"cuisine": "southern", "mealtime": "dinner"})
```
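A sketch of how the refactored `/demos/variables.py` might look, assuming the same chain structure and model setup as the earlier demos:
```python
# A sketch of /demos/variables.py with variable placeholders.
from langchain_core.prompts import (
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o", base_url="http://0.0.0.0:4000", api_key="sk-1234")

# {cuisine} and {mealtime} are filled in at invoke time.
system = SystemMessagePromptTemplate.from_template("You are an expert in {cuisine} cuisine.")
human = HumanMessagePromptTemplate.from_template("I would like an easy {mealtime} recipe.")

chain = (system + human) | model
result = chain.invoke({"cuisine": "southern", "mealtime": "dinner"})
print(result)
```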
Testing
You can test this new variable version by running the following command in the Terminal:
```
python3 demos/variables.py
```
You are almost ready to incorporate this functionality into the recipe suggestion API. As previously noted, the returned `AIMessage` object contains many interesting but unnecessary properties. In the next Step, you will learn how to parse the response into a more useful format.
Parsing Output
So far, a full `AIMessage` object has been returned by each request, which contains more information than necessary for your API application.
LangChain provides several types of output parsers to help structure the response more effectively.
String Output Parser
One of the simplest parsers to use is the `StrOutputParser`.
Continuing with the `/demos/variables.py` file, you can create an instance of this parser just before declaring the `chain`:
```python
parser = StrOutputParser()
```
Piping the Model's Output
Now that you have an instance of the parser, you can append it to the current `chain` using the pipe command:
```python
chain = prompt | model | parser
```
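In context, the parser addition might look like this sketch; `StrOutputParser` lives in `langchain_core.output_parsers`.
```python
# A sketch of adding StrOutputParser to the existing chain
# in /demos/variables.py.
from langchain_core.output_parsers import StrOutputParser

parser = StrOutputParser()

# The parser extracts the string content from the AIMessage,
# so invoke() now returns plain text instead of a message object.
chain = prompt | model | parser
result = chain.invoke({"cuisine": "southern", "mealtime": "dinner"})
print(result)  # prints just the recipe text
```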
Testing
You can see the results of this parser by running the script in the Terminal with the command:
```
python3 demos/variables.py
```
You should now see just the full recipe formatted in the Terminal. This response is much more suitable for the recipe suggestion API than returning the full `AIMessage` object.
In the next Step, you will integrate what you have learned into a simple API.
FastAPI Integration
Now that you have a solid understanding of how to combine the various LangChain components, you are ready to integrate GenAI into a Python FastAPI project.
Recipe Function
The logic explored in the `variables.py` file serves as a good foundation to build on. This logic has been copied over to the `/recipe_api/recipe.py` file.
The only modification is to wrap the `chain.invoke()` call within a function, allowing it to be called by the API endpoint. This function accepts the `cuisine` and `mealtime` variables to replace their corresponding placeholders in the templates.
You will need to replace the hardcoded values that were copied over. The refactored `invoke()` should now look like:
```python
result = chain.invoke({
    "cuisine": cuisine,
    "mealtime": mealtime
})
```
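As a reference, the wrapped function might look like this sketch. The `get_recipe()` name comes from the lab; the body simply reuses the chain built above.
```python
# A sketch of the wrapper in /recipe_api/recipe.py, assuming the
# chain (prompt | model | parser) is defined at module level above.
def get_recipe(cuisine: str, mealtime: str) -> str:
    # Fill the template placeholders with the caller's values.
    return chain.invoke({
        "cuisine": cuisine,
        "mealtime": mealtime,
    })

# The conditional script block described below might look like:
if __name__ == "__main__":
    print(get_recipe("southern", "dinner"))
```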
Testing
Before calling this function from the endpoint, you can verify that it is working correctly by using the conditional script block. Running this script directly will execute the conditional block, which requests a southern dinner recipe:
```
python3 recipe_api/recipe.py
```
You should receive a `result` containing a southern-style dinner recipe.
Now that the `get_recipe()` function is working as expected, in the final Step you will modify the recipe endpoint to call this function.
Declaring the Endpoint
Opening the `/recipe_api/main.py` file, you will first notice that the `get_recipe` function is imported, followed by a simple implementation of a FastAPI application that declares a single GET endpoint.
This endpoint accepts two query string parameters that match the `cuisine` and `mealtime` parameters of the `get_recipe()` function.
Forwarding the Parameters
You will need to modify the call to `get_recipe()` to use these parameters instead of the hard-coded values. The function call should be refactored to this:
```python
result = get_recipe(cuisine, mealtime)
```
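For orientation, the endpoint might be shaped roughly like this sketch. The route path, return shape, and import path here are assumptions for illustration; check `main.py` for the actual declarations.
```python
# A hypothetical sketch of /recipe_api/main.py; the actual route
# path and response shape in the lab may differ.
from fastapi import FastAPI
from recipe_api.recipe import get_recipe

app = FastAPI()

@app.get("/recipe")
def recipe(cuisine: str, mealtime: str):
    # Forward the query string parameters straight to the chain.
    result = get_recipe(cuisine, mealtime)
    return {"recipe": result}
```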
Running the API
Running a FastAPI application is slightly different, as it requires the use of the `uvicorn` server. You can start this server by running the following command in the other Server terminal window:
```
uvicorn recipe_api.main:app
```
Scroll down to the Server response section to see what recipe GenAI thinks you should have.
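Once the server is running, you could also try the endpoint directly from a terminal. This assumes the hypothetical `/recipe` route from the sketch above and uvicorn's default port of 8000:
```
curl "http://127.0.0.1:8000/recipe?cuisine=southern&mealtime=dinner"
```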
Congratulations
Congratulations, you have successfully integrated generative AI into an API.
Hopefully, you have recognized the benefits of utilizing a framework to help integrate AI into your application. The LangChain framework offers significant advantages in composing complex prompts to retrieve more meaningful responses.
Wishing you the best as you continue your generative AI journey.