How to add GenAI to your applications with Anthropic Claude API

Get an overview of common Claude use cases and then learn how to add generative AI to your applications with the Anthropic Claude API.

May 2, 2024 • 7 Minute Read

  • Software Development
  • Data
  • AI & Machine Learning

Claude is a powerful natural language processing model that can understand and communicate in human language with remarkable accuracy and fluency. Claude is extremely good at handling open-ended queries and creative tasks.

In this blog post, I’ll show you how to use Claude programmatically via Anthropic’s API.

Why use Claude? Sample use cases

Why would you even want to integrate your application with Anthropic’s Claude API?

With GenAI, users can provide prompts to the AI model and receive outputs in natural language. This flexibility gives applications previously unheard-of capabilities.

1. Enable open-ended conversation and exploration

One of Claude's greatest strengths is its ability to engage in open-ended dialogue. This capability allows you to add conversational powers to your applications.

2. Create intelligent chatbots and conversational agents

Similarly, Claude can understand natural language prompts and generate appropriate responses. If you integrate the API with your application, you can build advanced chatbots and virtual assistants that can hold natural, contextual conversations.

3. Perform research and analysis

Claude excels at creating, editing, and summarizing articles, stories, scripts, reports, and other content. It can analyze content your users provide and assist with research projects, literature reviews, and even data analysis.

4. Assist with creative ideation

Because Claude was trained on an extensive body of knowledge, it can brainstorm ideas for everything from product design to marketing campaigns and artistic projects. It may be able to provide fresh ideas and unique perspectives for your applications.

5. Improve coding and technical tasks

Because Claude can write and explain code in various programming languages, it can power code-related features in applications such as data platforms or integrated development environments.

A quick look at parameters

Before we dive into the actual walkthrough, let’s quickly cover the different parameters you’ll see in the Anthropic API.

In my case, let’s imagine I have an app that helps people be happy and find the meaning of life. I want it to respond to my user’s questions in a helpful manner. To do this, I'll need to configure certain parameters to tune the LLM’s response.


model

This refers to the Claude model that will be used to process your request. As of writing, Claude 3 is the latest version, which includes three models: Opus, Sonnet, and Haiku. Opus is the most advanced. Sonnet balances intelligence and speed, making it ideal for enterprise workloads and scaled AI deployments. Haiku is the fastest and most compact model.


max_tokens

This places a limit on the number of tokens that will be generated. The model may finish before this limit is reached. Each model has its own maximum, which means you cannot set max_tokens to a value higher than the model’s limit.


temperature

This indicates the amount of randomness in the response, with values near 0.0 being almost deterministic and values closer to 1.0 being more creative.


system

This corresponds to the system prompt. It’s a way of providing instructions and assigning goals or roles for Claude.


messages

This corresponds to the input messages: what you send to Claude and what it replies with. Each input message is made up of a role and content. The user role refers to what you send, and the assistant role refers to what the model replies with.

If you want to have a conversation, you need to include all of the messages in your request. Here’s an example of a conversation with alternating user and assistant messages.

      messages=[
    {"role": "user", "content": "Hi, I am Xavier."},
    {"role": "assistant", "content": "Hi, I'm Claude. How can I help you Xavier?"},
    {"role": "user", "content": "Can you tell me the meaning of life?"},
]
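Since the API is stateless, your application has to carry this history itself. Here’s a minimal sketch of managing it in Python; the reply string is illustrative, not a real API response.

```python
# Keep the conversation as a list and resend the whole list on every call.
history = [
    {"role": "user", "content": "Hi, I am Xavier."},
]

# Suppose the model replied with this text (illustrative, not a real API call):
reply = "Hi, I'm Claude. How can I help you Xavier?"
history.append({"role": "assistant", "content": reply})

# The next question goes on the end, and the full history is sent again.
history.append({"role": "user", "content": "Can you tell me the meaning of life?"})

print([m["role"] for m in history])  # ['user', 'assistant', 'user']
```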

Other parameters

There are other parameters you can use, like stream to receive the response in a continuous stream instead of all at once, or stop_sequences to specify a condition for the model to stop generating an answer.
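As a sketch, a request body using these optional parameters might look like the following. The model name and stop sequence are illustrative values, not recommendations.

```python
# Illustrative request parameters; stream and stop_sequences are optional.
payload = {
    "model": "claude-3-haiku-20240307",
    "max_tokens": 256,
    "stream": True,                       # deliver the response incrementally
    "stop_sequences": ["END_OF_ANSWER"],  # stop generating at this string
    "messages": [
        {"role": "user", "content": "Summarize the meaning of life in one line."}
    ],
}

print(sorted(payload))
```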

How to integrate your application with the Claude API

Now let me show you how to use the Claude API with Python, my language of choice.

To integrate your application with Claude, you need to be able to make calls to its API. There are several ways to connect your app with the Claude API: you can make direct HTTP calls, use the TypeScript SDK, or use the Python software development kit (SDK). I’ll be focusing on the Python SDK method.

Note: Right now, Claude is only available in certain regions. If your region isn’t supported yet, you’ll unfortunately have to wait.

Getting started with the Claude API

To access and use the Claude API, you need to set up a console account. Once you do, you’ll be taken to the Dashboard. You may receive some free credits to get started, but you’ll eventually need to pay to use it.

Next up, navigate to the API keys screen and create a key. Remember, you are responsible for how your keys are used, so don’t commit them to repositories or post them on any potentially public platform. Store them securely.

Once you’re set up with access to the Claude API and have a key, it’s time to integrate your application.

Use the Anthropic API Quickstart Colab Notebook

Now it’s time to decide where you’ll run your code. You can use the Anthropic API Quickstart Colab Notebook or run it locally in your machine with Jupyter Notebook. The choice is up to you—I’ll explain how to use both.

Colab is great because it’s a web application that provides a hosted version of Jupyter Notebook running in Google’s cloud. Anthropic provides a Colab notebook you can use to get started; just be sure to make a copy first!

The instructions in this notebook are self-explanatory.

Work locally in your machine with Jupyter Notebook

If you’d prefer not to use the Colab notebook, you can work locally in your development environment. This can give you a better understanding of the environment you’ll need to replicate in your deployment machines.

To get started, you’ll need:

  • A version of Python the SDK supports. Check the documentation in the prerequisites section for the minimum version.
  • The Anthropic Python client SDK, which is currently hosted on GitHub.

Create a virtual environment

When you’re ready to begin, I recommend creating a virtual environment and activating it from the terminal using the commands shown below. Replace {venvname} with whatever you want to name your virtual environment.

      $ python -m venv {venvname}
$ cd {venvname}
$ source bin/activate
$ cd ..

Install Jupyter Notebook

Next, I need to install and start Jupyter Notebook using the following commands.

      $ pip install notebook
$ jupyter notebook

Create a notebook

Jupyter will start and open a browser. Create the notebook you’re going to use. I called mine claude-api-get-started.ipynb.

Install the Anthropic Python SDK

Open the notebook and install the Anthropic Python SDK. Since I’m going to install it from within the notebook, I need to prefix the command with an exclamation mark, which tells Jupyter to run it in the shell.

      !pip install anthropic

You only need to do this once. You can also install the library from the terminal.

Install python-dotenv

There’s another library I’ll need to manage environment variables: python-dotenv. This step is not needed if you set the API key as a global environment variable. However, in my case I’ll store it in a file and load it when the code starts running.

      !pip install python-dotenv
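A .env file is just a text file of KEY=value pairs sitting next to your notebook. A minimal one for this walkthrough would contain a single line; the value below is a placeholder, not a real key.

```
ANTHROPIC_API_KEY=your-api-key-here
```

Remember to keep this file out of version control, for example by adding it to your .gitignore.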

Import os and anthropic

There are also a few imports I need, namely os for working with environment variables, load_dotenv from python-dotenv for reading the .env file into the environment, and anthropic. Let’s import those now.

      import os
from dotenv import load_dotenv
import anthropic

Create the client

Next, you need to create the client. This is what you’ll use to interact with the API.

      load_dotenv()  # loads ANTHROPIC_API_KEY from the .env file
client = anthropic.Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))

Interact with Claude via the API

At this point, you’re ready to start creating messages and interacting with Claude via the API. Use the client’s messages.create function, which receives the parameters that will be sent to the model.

Check out this sample call.

      message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    system="You are a useful assistant that responds in a formal tone.",
    messages=[{"role": "user", "content": "How are you today?"}],
)

Then make the call and inspect the content of the response.

      message.content
Receive the response

Once you make the call, you’ll receive a response. Here’s the response I got.

      [ContentBlock(text="I am doing well, thank you for asking. As an AI language model, I don't have feelings, but I'm functioning properly and ready to assist you with any questions or tasks you may have. How may I help you today?", type='text')]

If you receive an error, Anthropic’s API uses standard HTTP response codes to help you troubleshoot.

Include images in your request (optional)

Starting with Claude 3, you can also include images in your request. You specify type=image, and then include the encoded base64 string as shown below.

      {"role": "user", "content": [
    {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": "image/jpeg",
            "data": "/9j/4AAQSkZJRgABAQAAAQABAAD/...",
        },
    },
    {"type": "text", "text": "Who is this person?"},
]}
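The long data string is just the raw image bytes encoded as base64. As a sketch, here’s how you might produce it in Python; the filename is a placeholder. For illustration, the last lines encode the marker bytes that begin a typical JPEG (JFIF) file, which is why real encoded JPEGs start with "/9j/".

```python
import base64

def encode_image(path):
    """Read an image file and return the base64 string for the "data" field."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# For illustration, encode the marker bytes at the start of a JFIF-style JPEG:
sample = base64.b64encode(b"\xff\xd8\xff\xe0").decode("utf-8")
print(sample)  # /9j/4A== (note the /9j/ prefix, as in the example above)
```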

Other helpful Claude API tips

There are a few more things you need to know. First, the Claude API enforces rate limits to reduce misuse and manage capacity. Second, if your application strictly controls which IPs it can reach, Anthropic provides a range of IP addresses you can use to limit access.
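When you hit a rate limit, the usual remedy is to back off and retry. Here’s a generic sketch of that pattern; in real code you would catch anthropic.RateLimitError specifically, but a plain Exception is used so the example runs on its own, and the flaky_request function below simulates a call that fails twice before succeeding.

```python
import time

def call_with_retry(make_request, max_retries=3, backoff=1.0):
    """Retry a callable with exponential backoff between attempts."""
    for attempt in range(max_retries):
        try:
            return make_request()
        except Exception:  # in real code: except anthropic.RateLimitError
            if attempt == max_retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))

# Simulate a request that is rate limited twice, then succeeds.
attempts = {"count": 0}
def flaky_request():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("429: rate limited")
    return "ok"

print(call_with_retry(flaky_request, backoff=0.01))  # ok
```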

Concluding our look at the Claude API

You learned how to integrate your application with Anthropic’s Claude API—now feel free to continue exploring the API and test features that can add value to your applications. 

Ready to learn more? Check out my generative AI and machine learning Pluralsight courses.

Xavier Morera

Xavier is very passionate about teaching, helping others understand search and Big Data. He is also an entrepreneur, project manager, technical author, trainer, and holds a few certifications with Cloudera, Microsoft, and the Scrum Alliance, along with being a Microsoft MVP. He has spent a great deal of his career working on cutting-edge projects with a primary focus on .NET, Solr, and Hadoop among a few other interesting technologies. Throughout multiple projects, he has acquired skills to deal with complex enterprise software solutions, working with companies that range from startups to Microsoft. Xavier also worked as a worldwide v-trainer/evangelist for Microsoft.
