
ChatGPT Prompt Engineering and Evaluation

In this Code Lab, you will learn the fundamentals of prompt engineering, from basic styles to advanced techniques. You'll work in a Jupyter Notebook to interact with a mock AI model, analyze unstructured data, and evaluate the model's outputs.

Lab Info
Level: Beginner
Last updated: Dec 16, 2025
Duration: 30m

Table of Contents
  1. Challenge

    Step 1: Getting Started

    Welcome to the Prompt Engineering Code Lab! In this first step, you'll set up your Jupyter Notebook, which will be your workspace for the entire lab.

    You'll import the necessary helper code, including ChatGPT 4.0 as your AI model and the sample data you'll be working with. This initial setup is crucial for the hands-on exercises in the following steps.

    Setup

    First, navigate to the Prompt_lab.ipynb notebook, then insert the API key (shown in the top center of your browser) into the code in the second cell. The key is different for every lab instance and simply needs to replace <TODO_ENTER_API_KEY_HERE> in the second cell.

    The second cell contains the function you will use for almost all of your LLM calls in this lab. Near the end of the lab, a different function is needed for an iterative conversation, so you can either skip ahead and add the API key there now, or do it later when prompted.
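
    For reference, here is a minimal sketch of what such a helper might look like, assuming the notebook wraps the OpenAI Python client; the function name ask_llm and the model name are illustrative, and the lab's actual helper may differ:

    ```python
    from openai import OpenAI

    # Replace with the key shown in the top center of your browser for this lab instance
    client = OpenAI(api_key="<TODO_ENTER_API_KEY_HERE>")

    def ask_llm(system_prompt, user_prompt, model="gpt-4o"):
        """Send a single-turn request and return the model's text reply."""
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content
    ```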

  2. Challenge

    Step 2: Foundational Prompting Styles

    Now that the environment is ready, you'll explore the two most fundamental prompting styles: zero-shot and few-shot.

    Understanding the difference between simply asking for something (zero-shot) and showing the model what you want with examples (few-shot) is the first major step toward becoming a skilled prompt engineer.

    You will apply these techniques to perform basic analysis on the unstructured CSV data.

    Zero-Shot Prompting

    Zero-shot prompting is the simplest form of prompting, where you ask the model to perform a task without giving it any prior examples.
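
    As a rough sketch (the lab's actual system prompt and reviews will differ), a zero-shot sentiment request looks like this, using the hypothetical ask_llm helper from the setup step:

    ```python
    # Zero-shot: the task is described, but no example classifications are given
    zero_shot_system_prompt = "Classify the sentiment of the following review as Positive, Negative, or Neutral."
    review = "The battery died after two days and support never replied."  # illustrative review

    print(ask_llm(zero_shot_system_prompt, review))
    ```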

    • First, in the Prompt_lab.ipynb notebook, find the cell under the Zero-Shot Prompting section.
    • Your task is to go through the 5 pre-created reviews and see how the zero-shot sentiment system prompt returns a different response for each review.
    • Execute the cell to see the model's response, and make a mental note of (or save) the results.

    Few-Shot Prompting

    Few-shot prompting involves providing the model with a few examples of the task you want it to perform. This helps the model understand the desired output format and context better.
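
    A minimal sketch of the same task in few-shot form, again with illustrative wording rather than the lab's exact prompt:

    ```python
    # Few-shot: a handful of labelled examples show the model the task and the output format
    few_shot_system_prompt = """Classify the sentiment of each review as Positive, Negative, or Neutral.

    Review: "Setup took five minutes and it just works."
    Sentiment: Positive

    Review: "It arrived scratched and the screen flickers."
    Sentiment: Negative

    Review: "Does what it says, nothing more."
    Sentiment: Neutral"""

    review = "The battery died after two days and support never replied."
    print(ask_llm(few_shot_system_prompt, f'Review: "{review}"\nSentiment:'))
    ```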

    Using the pre-created prompt for few-shot learning, notice that several example reviews and sentiments are given to the model to show it the proper responses and how the text should be formatted.

    Review all 5 reviews again and see how the few-shot responses vary from the zero-shot ones.

  3. Challenge

    Step 3: Advanced Prompting Techniques

    With the basics covered, you can move on to more advanced and powerful techniques. In this step, you'll learn how to assign a 'persona' to the model with role-based prompting to control its tone and style.

    You'll also learn about Chain-of-Thought (CoT) prompting, a method to guide the model through complex reasoning tasks by asking it to 'think step by step'.

    Role-Based Prompting

    You can assign a role or persona to the model to influence the tone and style of its response. This is useful for generating content for a specific audience.
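
    A sketch of what a role-based prompt might look like; the personas and text in the notebook will differ, and sample_text here is only a stand-in for the lab's data:

    ```python
    # Role-based prompting: the system prompt assigns a persona that shapes tone and focus
    role_prompt = (
        "You are a skeptical financial analyst. Summarize the following text, "
        "focusing on costs, risks, and anything that could affect a purchasing decision."
    )
    sample_text = "The new laptop launched this quarter to mixed reviews about price and battery life."  # stand-in

    print(ask_llm(role_prompt, sample_text))
    ```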

    • Navigate to the Role-Based Prompting section.

    • Write a prompt in the role_prompt variable that begins by assigning the model a persona. Use one of the existing personas and see how each one reviews or summarizes the different types of text differently.

    • Each persona pays attention to different aspects when summarizing, which is important to acknowledge and understand when looking for biases within models.

    • Execute the cell and observe how the persona changes the output.

    Chain-of-Thought Prompting

    Chain-of-Thought (CoT) prompting encourages the model to break down a problem into steps, which often leads to more accurate results for complex reasoning tasks.
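
    A small illustrative example of a chain-of-thought style prompt (the lab's scenarios will differ):

    ```python
    # Chain-of-thought: explicitly ask the model to reason step by step before answering
    cot_prompt = (
        "A store sells notebooks at 3 for $4. How much do 9 notebooks cost? "
        "Think through the problem step by step, then give the final answer on its own line."
    )

    print(ask_llm("You are a careful reasoning assistant.", cot_prompt))
    ```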

    • Go to the Chain-of-Thought Prompting section.

    • Your task is to use the different scenarios to see how AI models reason through tasks. Pay attention to which types of tasks the AI seems best suited to answer, and try to notice the difference between subjective and objective problems.

    • Run the cell and analyze how the model 'thinks' through the problem before giving a final answer.

  4. Challenge

    Step 4: Analyzing Unstructured Data

    This is where prompt engineering shows its true power. You'll use the techniques you've learned to perform a real-world task: analyzing a block of unstructured, non-standardized data. You will guide the AI to first summarize the text into key points.

    Summarizing Unstructured Data

    Now, you'll apply what you've learned to a practical task: summarizing a large block of text. This is a common use case for LLMs.

    • In the 'Summarizing Unstructured Data' section, you're provided with the unstructured_csv data.
    • Add to the system_prompt_csv_extraction field and see what types of data the model is capable of extracting from such unstructured data.
    • Constraint-based prompting (telling the model the output format) is a powerful tool; using bullet points is a great example of this (see the sketch after this list).
    • Execute the prompt and see how the model condenses the information.
    • Double-check what the model returned to see exactly where the information came from; this is good practice to help ensure the model didn't hallucinate.
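
    A sketch of a constraint-based extraction prompt, reusing the lab's unstructured_csv variable; the fields and wording here are illustrative, not the notebook's exact prompt:

    ```python
    # Constraint-based prompting: spell out exactly which fields to extract and what format to return
    system_prompt_csv_extraction = (
        "From the text below, extract every customer name, product, and complaint you can find. "
        "Return the results as bullet points in the form: - name | product | complaint. "
        "If a field is missing, write 'unknown'. Do not add information that is not in the text."
    )

    print(ask_llm(system_prompt_csv_extraction, unstructured_csv))
    ```
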
  5. Challenge

    Step 5: Evaluating and Iterating on AI Responses

    Getting a response from an LLM is easy; getting a good response is harder. In this final step, you'll focus on the critical skill of evaluation.

    You'll learn to spot and document AI 'hallucinations', use multi-turn conversations to refine a vague answer, and even ask the model to evaluate its own performance.

    Don't forget to re-paste your API key into the modified iterating function.

    Identifying Hallucinations

    LLMs can sometimes 'hallucinate' or invent facts. It's crucial to be able to spot and handle these situations. You will use a deliberately tricky prompt to cause a hallucination.
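
    As an illustration only (the lab provides its own prompts), a question that presupposes something that doesn't exist is a common way to invite a confident but invented answer:

    ```python
    # The prompt presupposes a study that does not exist, nudging the model to invent details
    hallucination_prompt = (
        "Summarize the findings of the famous 1987 'Mercury Lake' study on prompt engineering, "
        "including the names of the lead researchers."
    )

    print(ask_llm("Answer confidently and in detail.", hallucination_prompt))
    ```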

    • Find the Identifying Hallucinations section.
    • As models evolve and update, hallucinations become less common and more difficult to find. In this case, you will prompt the model to act as if it is hallucinating, for consistency's sake.
    • Work through the different hallucination prompts and notice a key pattern: a small bit of truth may be embedded in each question.
    • In the markdown cell below the code, document why the model's answer is a hallucination. There is a TODO comment to guide you.

    Multi-Turn Refinement

    Multi-turn interactions allow you to refine the model's output iteratively. You can provide feedback on a previous response to guide it toward a better answer.
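
    A minimal sketch of how a multi-turn chat keeps context by carrying the full message history; the notebook's iterating function may be structured differently:

    ```python
    from openai import OpenAI

    client = OpenAI(api_key="<TODO_ENTER_API_KEY_HERE>")  # same key as in the setup step
    messages = [{"role": "system", "content": "You are a helpful assistant."}]

    def chat(user_text, model="gpt-4o"):
        """Append the user turn, call the model with the full history, and store its reply."""
        messages.append({"role": "user", "content": user_text})
        response = client.chat.completions.create(model=model, messages=messages)
        answer = response.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        return answer

    print(chat("Summarize the reviews for me."))                     # vague first ask
    print(chat("Focus only on complaints about shipping delays."))   # refinement relies on earlier context
    ```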

    • In the Multi-Turn Refinement section, you start with a vague initial prompt.
    • You will be able to have an iterative conversation with the AI; you can establish your own system prompt or use one provided in previous sections.
    • Try refining the model's response, or ask for additional information about its hallucinations to see if the model realizes it was mistaken.
    • Notice that you are using the same chat object, which retains the context of the conversation.
About the author

I am Josh Meier, an avid explorer of ideas and a lifelong learner. I have a background in AI with a focus on generative AI. I am passionate about AI and the ethics surrounding its use and creation, and I have honed my skills in generative AI models, their ethics, and their applications, and I strive to keep improving my understanding of these models.

