Featured resource
2025 Tech Upskilling Playbook
Tech Upskilling Playbook

Build future-ready tech teams and hit key business milestones with seven proven plays from industry leaders.

Check it out
  • Lab
    • Libraries: If you want this lab, consider one of these libraries.
    • AI
Labs

ChatGPT Chatbot Development

In this lab, you will build a command-line chatbot from the ground up within a Jupyter Notebook. You'll start with a simple, single-response bot and progressively add more advanced features like multi-turn memory, safety guardrails, and persistent storage. Finally, you'll customize your bot to act as a helpful Python programming tutor.

Lab platform
Lab Info
Level
Intermediate
Last updated
Feb 05, 2026
Duration
30m

Contact sales

By clicking submit, you agree to our Privacy Policy and Terms of Use, and consent to receive marketing emails from Pluralsight.
Table of Contents
  1. Challenge

    Step 1: Setting Up Your Chatbot Environment

    Welcome to the lab! Your first step is to setup the code to handle a multi turn conversation. To start, open up chatbot.ipynb in your lab environment. In the first cell you will need to import the needed libraries. Identify your API key for the second cell, your API key will be needed in the get_assistant_response function where it is labeled '<TODO_ENTER_API_KEY_HERE>'. Your API key has been generated for you at the top of this lab. Ensure that the API key is in string format when pasting it.

    info > If you get stuck at any point in this lab, you can consult the Solutions.ipynb file or click the dropdowns in each section to reveal a completed code snippet.

  2. Challenge

    Step 2: Building a Basic Single-Turn Chatbot

    Now that the environment is set up, you'll build the skeleton of your application. The first key thing you will need to setup is the system_prompt this is a prompt given to the LLM, in this case chat GPT, to give the model a direction and tone. For this lab a simple prompt will work. For now you will define the prompt in the second step and add the system prompt into the messages list. When an AI reads previous conversations they are stored as dictionaries with two keys, one being the role which can be:

    • system when defining the initial system prompt
    • assistant which is previous chats from the AI
    • and user which is responses given by the user The second key needed for the dictionary is the content which consists of the text as a string of what was said. An example of what one dictionary might look like in the list is { "role": "system", "content": system_prompt }
    Task 2.1 Solution
    # Define the prompt string for the Python Tutor bot
        system_prompt = "you are a helpful cowboy who helps with my problems"
        # Add system prompt to required format: a list with one system message dictionary
        messages = [{ "role": "system", "content": system_prompt }]
    
    Now it's time to create the core function for getting a response from the chatbot. This function will take the conversation history as input, send it to the model via the client, and return the model's response. This encapsulates the primary API interaction.
    • in the predefined function get_assistant_response add the parameter messages which is a list of dictionaries holding conversation history for the AI model.
    • Inside the function, call the client.chat.completions.create() method.
    • Pass gpt-4o-mini as the model and the messages argument to the messages parameter.
    • Extract the message content from the response and return it. The content is located at response.choices[0].message.content
    Task 2.2 Solution
    	    response = client.chat.completions.create(
            model="gpt-4o-mini",
            # TODO add function parameter
            messages= messages
        )
    #TODO: add return value
        return response.choices[0].message.content
    
    A chatbot needs a way to continuously interact with the user. You will now build the main application loop. This loop will prompt the user for input, handle a special command to exit, and facilitate the conversation.
    • Inside the loop, get user input using the input() function with the prompt "You: ".
    • Add a condition to check if the user's input is "exit". If it is, print a goodbye message and break the loop.
    Task 2.3 Solution
    user_input = input("You: ") # Replace with input() function
    
     # TODO: Check for 'exit' command
    if user_input == "exit":
      return
    
    
  3. Challenge

    Step 3: Implementing Conversation Memory

    A simple single-turn chatbot isn't very engaging. To have a real conversation, your bot needs to remember what has been said. In this step, you will implement short-term memory. You'll modify your chat loop to store the history of user and assistant messages, sending the entire conversation back to the model with each turn. This provides the context needed for meaningful, multi-turn dialogue. To enable multi-turn conversations, the chatbot must remember what was said previously. You'll now modify your chat function to store and manage this conversation history. This list of messages is the chatbot's short-term memory.

    • At the beginning of the chat function (before the loop), you created a messages list with the system prompt, you will be appending to this list to add each part of the conversation going forward.
    • Inside the loop, after getting user input, append the user's message to the messages list. Remember, each message must be a dictionary with "role": "user" and "content": user_input.
    Task 3.1 Solution
    	messages.append({ "role": "user", "content": user_input })
    
    Now that you're tracking the user's messages, you need to get the assistant's response and add that to the history as well. This completes the conversational turn.
    • Inside the loop (after appending the user message), call your get_assistant_response function with the current messages list.
    • Print the assistant's response to the console, prefixed with "Assistant: ".
    • Finally, append the assistant's response to the messages list. This message should be a dictionary with "role": "assistant" and the content you received.
    Task 3.2 Solution
                assistant_response = get_assistant_response(messages)
                # Print the assistant's response
                print(assistant_response)
                # Append the assistant's response to the history
                messages.append({ "role": "assistant", "content": assistant_response })
    
  4. Challenge

    Step 4: Adding Safety and Moderation Guardrails

    With great power comes great responsibility. LLMs can sometimes be manipulated or produce undesirable content. It's crucial to build in safety features. In this step, you'll create a basic input filter to block certain keywords associated with prompt injection attacks. You'll then integrate this filter into your chat loop to make your chatbot safer and more reliable. A key safety feature is filtering user input for harmful or undesirable content. You will create a function that checks user input against a list of forbidden words. This is a basic but important guardrail.

    • Inside the is_sage function, create a list of forbidden_words. Include terms you want to block (e.g., 'ignore', 'disregard', 'exploit').
    • Check if any of the forbidden words are present in the user's input (you may want to convert the input to lowercase for a case-insensitive check).
    • Return False if a forbidden word is found, and True otherwise.
    • If the function returns False, print a refusal message (e.g., "I cannot process this request.")
    Task 4 Solution
    def is_safe(user_input):
        # Define a list of forbidden words
        forbidden = ["ignore","disregard","exploit"]
        # Check if any forbidden word is in the user_input (case-insensitive)
        sentence_lower = user_input.lower()
        if any(word.lower() in sentence_lower for word in forbidden):
            return False
        # Return False if a forbidden word is found, True otherwise
        return True
    
  5. Challenge

    Step 5: Enabling Long-Term Memory

    Your chatbot's memory currently resets every time you restart the program. To create a more persistent experience, you'll now add long-term memory. You will implement functions to save the conversation history to a JSON file and load it back up. This allows you to pause and resume conversations whenever you like. Short-term memory is great, but what if you want to resume a conversation later? You need to implement long-term memory by saving the conversation to a file. You'll create a function to handle this using the json library.

    • Define a function save_conversation that accepts a messages list and a filename.
    • Inside the function, use a with open(...) block to open the specified filename in write mode ('w').
    • Use json.dump() to write the messages list to the file.
    Task 5.1 Solution
    def save_conversation(messages, filename):
        # Open the file in write mode and use json.dump()
        with open(filename, "w") as file:
            json.dump(messages, file, indent=4)
    
    To complete the long-term memory feature, you need a way to load a saved conversation. This function will read a JSON file and return the message history, allowing a conversation to be resumed.
    • Define a function load_conversation that accepts a filename.
    • Use a try...except block to handle cases where the file might not exist (FileNotFoundError).
    • Inside the try block, open the file in read mode ('r') and use json.load() to read the data and return it.
    • In the except block, simply return an empty list [] to start a new conversation if no file is found.
    Task 5.2 Solution
    def load_conversation(filename):
        # Use a try-except block to handle FileNotFoundError
        try:
            # Inside try, open the file and use json.load() to return the messages
            with open(filename, "r") as file:
                return json.load(file)
    
        except FileNotFoundError:
            # Inside except, return an empty list
            return []
    
  6. Challenge

    Step 6: Customizing Your Chatbot's Personality

    The final step is to give your chatbot a specific purpose and personality. You'll do this using a 'system prompt'—a powerful instruction that guides the model's behavior. You will write a system prompt to transform your chatbot from a generic assistant into a helpful Python programming tutor. You'll then integrate this into your application to complete your custom chatbot. The personality and purpose of your chatbot are defined by its system prompt. This is a special message at the start of the conversation history that instructs the model on how to behave. You will now create a system prompt to turn your generic bot into a specialized Python Tutor.

    • Define a function create_system_prompt that takes no arguments.

    • Inside, create a detailed string for the prompt. Instruct the bot to act as a friendly and encouraging Python tutor. Tell it to explain concepts clearly, provide code examples, and never give away direct answers to homework.

    • The function should return a list containing a single message dictionary: [{ "role": "system", "content": your_prompt_string }]. Finally, integrate the system prompt into your chat application's logic. The system prompt should be the very first message in the conversation history, setting the stage for all subsequent interactions.

    • Modify your chat function. When you initialize the messages list, instead of an empty list, set it equal to the result of calling your create_system_prompt() function.

    • This ensures the tutor instructions are always the first thing the model sees.

    It is important to seperate your system prompt from other prompts to prevent alterations to it as well as for readability as system prompts for one or few shot learning can be quite large.

    Task 6 Solution
    def create_system_prompt():
        # Define the prompt string for the Python Tutor bot
        system_prompt = "you are a helpful cowboy who helps with my problems"
        # TODO: append system prompt to start of messages list.
        # Return the prompt in the required format: a list with one system message dictionary
        return [{ "role": "system", "content": system_prompt }]
    
    Now that you have a fully functional chatbot feel free to try experimenting with different system prompts to see how they shape your chatbot’s personality, behavior, and responses.
About the author

I am, Josh Meier, an avid explorer of ideas an a lifelong learner. I have a background in AI with a focus in generative AI. I am passionate about AI and the ethics surrounding its use and creation and have honed my skills in generative AI models, ethics and applications and thrive to improve in my understanding of these models.

Real skill practice before real-world application

Hands-on Labs are real environments created by industry experts to help you learn. These environments help you gain knowledge and experience, practice without compromising your system, test without risk, destroy without fear, and let you learn from your mistakes. Hands-on Labs: practice your skills before delivering in the real world.

Learn by doing

Engage hands-on with the tools and technologies you’re learning. You pick the skill, we provide the credentials and environment.

Follow your guide

All labs have detailed instructions and objectives, guiding you through the learning process and ensuring you understand every step.

Turn time into mastery

On average, you retain 75% more of your learning if you take time to practice. Hands-on labs set you up for success to make those skills stick.

Get started with Pluralsight