Getting started with LangChain: How to run your first application

Get the lowdown on LangChain: What it is, what’s included in the framework, and step-by-step instructions on how to use LangChain to build AI applications.

Mar 6, 2024 • 8 Minute Read

  • AI & Machine Learning

If you’ve been paying attention to large language models (LLMs) at all (and how could you not be?), then you've probably stumbled across the name “LangChain” more than a few times. 

Maybe you’ve heard it’s a “revolutionary framework” or “ground-breaking platform” for developing AI applications. But let's face it: isn’t that the claim about everything these days? What makes LangChain any different from the rest of the tech hype floating around?

Well, that's exactly what we're here to uncover. In this article, we’ll go beyond the marketing buzzwords and talk about what LangChain actually is and what it’s used for. You’ll also learn how to set up and run your first LangChain application leveraging OpenAI’s GPT models.

What is LangChain, and why would you use it?

When I first started exploring LangChain, I'll admit, it was a bit of a head-scratcher. While wading through documentation and examples, I kept asking myself, "What exactly is this, and why should I be using it?" But after some trial-and-error and ChatGPT-ing (can we use that as a verb yet?), the pieces started falling into place.

The best way to understand the “what” and “why” behind LangChain is to think through an example application.

This particularly awesome app does financial market analysis and provides insights to clients. It works its AI magic, and then emails clients personalized financial advice. To do this, it needs to hook into the following:

  • OpenAI: The application uses GPT models for financial research and general language processing to write human-friendly emails.

  • Hugging Face: For international clients, the application uses a Hugging Face model for language translation (yes, GPT models can also handle translation, but let’s go with it).

  • Your data: This data is stored in a variety of databases and documents. It includes investment profiles, market preferences, and risk tolerance for clients.

  • AWS: The app integrates with AWS services. It uses AWS Glue for data preparation and Amazon Redshift for data warehousing.

  • Email: The email service sends the application’s final output to clients.

In summary, this application needs to use two LLMs, customer data, and third-party services.  And that, my friends, is the perfect job for LangChain.

With that background, let’s revisit the question, “What is LangChain?” In short, LangChain is a framework for developing applications that are powered by language models.

LangChain itself does not provide a language model, but it lets you leverage models like OpenAI’s GPT, Anthropic, Hugging Face, Azure OpenAI, and many others (though OpenAI’s GPT models have the most robust support at the moment). In addition, you can leverage your own data, as well as call other services needed by your application.

How is LangChain different from the OpenAI Assistants API?

If you've been keeping an eye on OpenAI’s latest shiny object, the Assistants API, you might be wondering, "Isn't this pretty much what LangChain does?" With an Assistant, you can leverage your own data and interact with functions and external APIs as well. Are Assistants the LangChain killer?

Not so fast. The functionalities are similar, but with the OpenAI Assistant, you’re limited to OpenAI’s GPT models. LangChain, on the other hand, lets you interact with lots of different models and generally gives you more control than the Assistants API.  So let’s not write the LangChain obituary just yet.

Why is it called LangChain?

The name “LangChain” comes from its core functionalities. “Lang” refers to “language,” highlighting its focus on applications that use large language models.  

“Chain” is a reference to chaining or linking things together. It’s all about connecting various pieces—like different language models, data sources, and tools—to create something bigger and better.

Aptly named, methinks.

What’s included in the LangChain framework?

LangChain isn't just another library; it's a full-fledged framework to help you build, deploy, and monitor your AI applications. And don't worry, it's not an all-or-nothing deal. You can pick and choose the components that make sense for your project.

  • LangChain libraries: The libraries are the backbone of the development process and are available in both Python and JavaScript.

  • LangChain templates: Rather than starting from scratch, grab a template. Some popular templates include “build a chatbot with your own data” or “extract structured data from unstructured data.”

  • LangSmith: This is the developer platform that allows you to debug, test, evaluate, and monitor chains.

  • LangServe: Once you’re ready to share your work with the world, this library helps you deploy your chains as a REST API, making your application accessible and interactive (see the sketch below).
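
To give you a taste of that last piece, here’s a minimal sketch of what serving a model with LangServe might look like (this assumes you’ve installed the langserve, fastapi, and uvicorn packages; consider it a preview rather than part of the tutorial below):

from fastapi import FastAPI
from langchain.chat_models import ChatOpenAI
from langserve import add_routes

# Expose a chat model as a REST API
app = FastAPI(title="My LangChain App")
chat = ChatOpenAI(openai_api_key="YOUR_OPENAI_API_KEY")

# add_routes wires up /chat/invoke, /chat/batch, and /chat/stream endpoints
add_routes(app, chat, path="/chat")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="localhost", port=8000)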

How to use LangChain: Building your first LangChain application

Now that we have some background and concepts under our belt, let’s dive in and write some code.

What you’ll need to follow along and how much this will cost

You’ll need an OpenAI API key (free and easy to get).

As far as costs go, there are no charges associated with LangChain, but OpenAI will charge you for the tokens you use. For this tutorial, it should amount to only cents, but check out the OpenAI pricing page for full details.

LangChain installation and setup

First things first. You’ll need to install LangChain from the terminal with the following command:

      pip install langchain
    

For this tutorial, we’ll be using OpenAI’s APIs, so you need to install the OpenAI package as well:

      pip install openai
    

Next, create a new Python file in your IDE of choice. I’ll be using VS Code. I’ll create a file called my-langchain-app.py and then add my import statements at the top of the file.
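
One note before we go further: in the examples that follow, I’ll pass the API key directly in code to keep things simple. In a real project, you’ll want to keep the key out of source control; for example, you can read it from an environment variable:

import os

# Read the key from an environment variable instead of hardcoding it
# (set it first in your terminal, e.g., export OPENAI_API_KEY="your-key-here")
openai_api_key = os.environ["OPENAI_API_KEY"]

As a bonus, LangChain’s OpenAI integrations will pick up the OPENAI_API_KEY environment variable automatically if you don’t pass a key explicitly.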

An overview of the LangChain modules

Before we get too far into the code, let’s review the modules available in the LangChain libraries.

  • Model I/O: The most common place to get started (and our focus in this tutorial). This module lets you interact with your LLM(s) of choice and includes building blocks like prompts, chat models, LLMs, and output parsers.

  • Retrieval: Work with your own data, otherwise known as Retrieval Augmented Generation (RAG).

  • Agents: Access other tools and chain together a sequence of actions.

The Model I/O module is core to everything else, so it will be the focus of this walk-through. The three components for this module are LLMs and Chat Models, prompts, and output parsers.

Let’s get into each of these components in more detail. See the documentation for more information on Retrieval and Agents.

1. LLMs and Chat Models (similar, but different)

Not all language models accept the same kind of input. To handle this, LangChain uses two different constructs.

  • LLMs: The model takes a string as input and returns a string. (Easy peasy.)

  • Chat Models: The model takes a list of messages as input and returns a message. (Huh?) A message contains the content of the message (usually a string) plus a role, which identifies the entity the message is coming from. For example, it could be a HumanMessage, an AIMessage, a SystemMessage, or a FunctionMessage/ToolMessage.

To call an LLM or a Chat Model, you’ll use the same .invoke() method; just be aware that you could be passing in a simple string or a list of messages.

Maybe some code will help. Let’s start by working with an LLM. Add this code to your Python file (be sure to replace the placeholder with your actual OpenAI API key).

from langchain.llms import OpenAI

llm = OpenAI(openai_api_key="YOUR_OPENAI_API_KEY")

Now invoke the LLM, passing in a simple string for your prompt.

response = llm.invoke("What is the elevation of Mount Kilimanjaro?")
print(response)

Run the code from the terminal:

python my-langchain-app.py

When you run it, the model should come back with the elevation of Mount Kilimanjaro (about 5,895 meters, or 19,341 feet); the exact wording will vary from run to run.

Now let’s see how to work with the Chat Model (the one that takes in a list of messages instead of a simple string). Update your code to this:

from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(openai_api_key="YOUR_OPENAI_API_KEY")

Next, we’ll assemble our messages using a SystemMessage and a HumanMessage:

from langchain.schema.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a personal math tutor that answers questions in the style of Gandalf from The Hobbit."),
    HumanMessage(content="I'm trying to understand calculus. Can you explain the basic idea?"),
]

And then invoke and print the output.

response = chat.invoke(messages)
print(response.content)

Run the code from the terminal:

python my-langchain-app.py

If all goes well, you’ll get a calculus explanation delivered in the style of Gandalf; as before, the exact response will vary from run to run.

Nicely done!

2. Prompts

The second component in the Model I/O module is the prompt. We usually think of a “prompt” as a simple piece of text that we send to a model. But prompts can include more than that. For example, they might include system instructions (like, “Act as an expert in Python programming”) or different parameters like temperature to control randomness. Or maybe you have a perfectly crafted prompt that you want to reuse and simply add placeholders for specific values.

The prompt template helps with all of this, giving us a structured starting point for our prompts to pass to the LLM.

from langchain.prompts import PromptTemplate

# Create a prompt with a placeholder for a value
template = PromptTemplate.from_template(
    "What is the capital city of {country}?"
)

# Provide the value for the placeholder to create the prompt
filled_prompt = template.format(country="Italy")
print(filled_prompt)

And here’s the filled-in prompt that gets printed: “What is the capital city of Italy?”
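
From there, you can hand the filled-in prompt to the model just like any other string (a quick sketch, assuming the llm object from earlier in this tutorial):

# Send the formatted prompt to the LLM, just like a plain string
response = llm.invoke(filled_prompt)
print(response)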

3. Output parsers

The third component of the Model I/O module is the output parser. When the model sends back a response, we use the parser to extract and format the information.

There are a few options here, like converting text from an LLM into JSON, parsing a comma-separated list, converting a ChatMessage into a string, or extracting extra information from a message. We’ll look at the CommaSeparatedListOutputParser here, but check out the documentation for more details.
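
Before the full example, here’s the parser on its own so you can see exactly what it does (a quick sketch):

from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()

# Splits a comma-separated string into a Python list
print(output_parser.parse("lion, tiger, bear"))  # ['lion', 'tiger', 'bear']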

The full example below also pulls together some of the concepts covered previously in this article.

from langchain.prompts import PromptTemplate
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.chat_models import ChatOpenAI

# Initialize the Chat Model
chat_model = ChatOpenAI(openai_api_key="YOUR_OPENAI_API_KEY")

# Initialize the parser and get instructions on how the LLM output should be formatted
output_parser = CommaSeparatedListOutputParser()
format_instructions = output_parser.get_format_instructions()

# Use a prompt template to get a list of items
# (note: the input variable is "query", matching the {query} placeholder in the template)
prompt = PromptTemplate(
    template="The user will pass in a category. Your job is to return a comma-separated list of 10 values.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": format_instructions},
)

# Define the category to pass to the model
category = "animals"

# Chain together the prompt and model, then invoke the model to get structured output
prompt_and_model = prompt | chat_model
output = prompt_and_model.invoke({"query": category})

# Invoke the parser to get the parsed output
parsed_result = output_parser.invoke(output)

# Print the parsed result
print(parsed_result)

And once again, run the final code from the terminal. The parsed result prints as a genuine Python list of 10 animal names, something like ['lion', 'tiger', 'bear', ...] (your list will vary).
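
One nice refinement: because the parser is itself a runnable, you can chain it into the pipeline with the same | syntax we used above and get the parsed list back from a single invoke call. A minimal sketch, building on the code above:

# Chain the prompt, model, and parser into a single pipeline
chain = prompt | chat_model | output_parser

# A single invoke call now returns the parsed Python list directly
print(chain.invoke({"query": "animals"}))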

Wrapping up

Well, there you have it! We've taken a whirlwind tour through the world of LangChain and peeled back the layers of this so-called “revolutionary framework” to see what all the fuss is about. 

Turns out, it's not just hype—you can use LangChain to build robust AI applications against a variety of language models, while also playing nice with your own data and third-party services. You also got a taste of how to work with the libraries, specifically the Model I/O module.

If you think this kind of development is for you, then check out these other resources from Pluralsight to dig deeper.

Amber Israelsen

Amber has been a software developer and technical trainer since the early 2000s. In recent years, she has focused on teaching AI, machine learning, AWS and Power Apps, teaching students around the world. She also works to bridge the gap between developers, designers and businesspeople with her expertise in visual communication, user experience and business/professional skills. She holds certifications in machine learning, AWS, a variety of Microsoft technologies, and is a former Microsoft Certified Trainer.
