ChatGPT Prompt Engineering and Evaluation
In this Code Lab, you will learn the fundamentals of prompt engineering, from basic styles to advanced techniques. You'll work in a Jupyter Notebook to interact with a mock AI model, analyze unstructured data, and evaluate the model's outputs.
Challenge
Step 1: Getting Started
Welcome to the Prompt Engineering Code Lab! In this first step, you'll set up your Jupyter Notebook, which will be your workspace for the entire lab. You'll import the necessary helper code, including the function for calling GPT-4, which you'll be working with throughout. This initial setup is crucial for the hands-on exercises in the following steps. Before you dive into prompting, you should get your environment ready. Your first task is to set up the Jupyter Notebook.
- Navigate to the `prompt_lab.ipynb` file.
- Find the second code cell.
- Use the API key shown at the top center of the lab to replace the `AZURE_OPENAI_API_KEY` placeholder `"TODO_ENTER_API_KEY_HERE>"` with your generated key.
- Near the end of the lab you will need to set the API key again for another LLM function, so keep this step in mind.
This step ensures your notebook is connected to the GPT-4 Turbo model and has access to the data you'll be analyzing. Within the Setup section there are four code cells you will need to run. The first two import the library and define the function you will use for calling ChatGPT. The last two simply call the function and print the results. Feel free to reference these as examples of how to call and get results from the LLM.
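If it helps to have a mental model of what those setup cells do, here is a minimal sketch of such a helper, assuming the `openai` Python package (v1+) against Azure OpenAI. The endpoint, API version, and deployment name below are illustrative placeholders, not the lab's actual values, and the notebook's real helper may differ.

```python
# Minimal sketch of a chat-completion helper (assumptions noted inline).
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="<your generated key>",  # replaces "TODO_ENTER_API_KEY_HERE>"
    api_version="2024-02-01",        # assumed API version
    azure_endpoint="https://example-resource.openai.azure.com/",  # placeholder
)

def get_completion(system_prompt: str, user_prompt: str) -> str:
    """Send a system + user message pair and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # your Azure *deployment* name (assumed here)
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0,  # low temperature makes outputs easier to compare
    )
    return response.choices[0].message.content

print(get_completion("You are a helpful assistant.", "Say hello."))
```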
Challenge
Step 2: Foundational Prompting Styles
Now that the environment is ready, you'll explore the two most fundamental prompting styles: zero-shot and few-shot. Understanding the difference between simply asking for something (zero-shot) and showing the model what you want with examples (few-shot) is the first major step toward becoming a skilled prompt engineer. You will apply these techniques to perform basic analysis on your customer feedback data.

### Zero-shot Prompting

Zero-shot prompting is the simplest form of prompting, where you ask the model to perform a task without giving it any prior examples.
- In `prompt_lab.ipynb`, find the cell under the 'Zero-Shot Prompting' section.
- Write a new prompt, or modify the existing one, in the `zero_shot_sentiment_system_prompt` variable so that it asks the model to determine the overall sentiment of the 5 reviews within your testing dataset.
- Go through the reviews and see how different reviews are given different sentiments.
- Focus on how the responses change depending on the review. A minimal sketch of a zero-shot call follows this list.
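The prompt text and reviews below are illustrative stand-ins for the notebook's own, and `get_completion` is the hypothetical helper from the Step 1 sketch.

```python
# Zero-shot: the task is described, but no labeled examples are given.
zero_shot_sentiment_system_prompt = (
    "You are a sentiment classifier. For each customer review you receive, "
    "respond with exactly one word: Positive, Negative, or Neutral."
)

reviews = [  # illustrative stand-ins for the lab's 5-review testing dataset
    "The checkout process was fast and painless. Love it!",
    "App crashed twice while I was entering my card details.",
]

for review in reviews:
    print(review, "->", get_completion(zero_shot_sentiment_system_prompt, review))
```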
### Few-shot Prompting

Few-shot prompting involves providing the model with a few examples of the task you want it to perform. This helps the model understand the desired output format and context better.
- Find the 'Few-Shot Prompting' section in the notebook.
- Review the `few_shot_sentiment_system_prompt` and see how it lays out how the task will be done and the expected results.
- Now run the review analysis on the same reviews with the new few-shot system prompt and see if providing examples improves the model's output.
- You can modify the system prompt to change how sentiment is returned: try comma separations, or include the original review followed by a separator and its sentiment type. A sketch of a few-shot system prompt follows this list.
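Here is one possible shape for such a prompt; the examples and format are illustrative, not the notebook's actual `few_shot_sentiment_system_prompt`, and `get_completion` is the hypothetical helper from the Step 1 sketch.

```python
# Few-shot: the system prompt demonstrates the task with labeled examples,
# pinning down both the labels and the output format.
few_shot_sentiment_system_prompt = """You classify customer reviews by sentiment.

Examples:
Review: "Setup took five minutes and everything just worked." -> Positive
Review: "Support never answered my ticket." -> Negative
Review: "The product arrived on Tuesday." -> Neutral

Classify the next review in the same "Review -> Label" format."""

print(get_completion(few_shot_sentiment_system_prompt,
                     "The app is fine, though the login page is slow."))
```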
Challenge
Step 3: Advanced Prompting Techniques
With the basics covered, you can move on to more advanced and powerful techniques. In this step, you'll learn how to assign a 'persona' to the model with role-based prompting to control its tone and style. You'll also learn about Chain-of-Thought (CoT) prompting, a method to guide the model through complex reasoning tasks by asking it to 'think step by step'.

### Role-Based Prompting

You can assign a role or persona to the model to influence the tone and style of its response. This is useful for generating content for a specific audience.
- Navigate to the 'Role-Based Prompting' section.
- Three different types of personas and three types of sample texts have been provided.
- After choosing your persona, work through each of the three text samples.
- Pay attention to how different assigned personas focus on different aspects of the text. A sketch of this pattern follows this list.
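The personas and sample text below are illustrative, not the notebook's provided ones; `get_completion` is the hypothetical helper from the Step 1 sketch.

```python
# Role-based prompting: the same text, summarized through two different personas.
personas = {
    "support_agent": "You are a friendly customer support agent. "
                     "Respond in a warm, reassuring tone.",
    "data_analyst": "You are a terse data analyst. "
                    "Respond with key metrics and facts only.",
}

sample_text = ("Our latest release cut page-load time by 40%, "
               "but two users reported login errors.")

for name, persona in personas.items():
    print(f"--- {name} ---")
    print(get_completion(persona, f"Summarize this update for me: {sample_text}"))
```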
### Chain-of-Thought Prompting

Chain-of-Thought (CoT) prompting encourages the model to break down a problem into steps, which often leads to more accurate results for complex reasoning tasks.
- Go to the 'Chain-of-Thought Prompting' section.
- The system prompt provided is a basic prompt that asks the model to provide reasoning and an answer for any of the user prompts.
- It is important to structure how you want the model to return its thoughts, and to use phrases such as `step by step`.
- From the different types of reasoning, you can select one and see how the model attempts to rationalize the math or concepts behind each logic type. A sketch of a CoT system prompt follows this list.
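The prompt and word problem below are illustrative, not the notebook's actual ones; the Reasoning/Answer structure is one possible convention, and `get_completion` is the hypothetical helper from the Step 1 sketch.

```python
# Chain-of-Thought: explicitly request step-by-step reasoning before the answer.
cot_system_prompt = (
    "Think through the problem step by step. First write your reasoning under "
    "a 'Reasoning:' heading, then give the final result under 'Answer:'."
)

word_problem = (
    "A store sells notebooks at $3 each. If I buy 4 notebooks and pay with a "
    "$20 bill, how much change do I get?"
)

print(get_completion(cot_system_prompt, word_problem))
```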
Challenge
Step 4: Analyzing Unstructured Data
This is where prompt engineering shows its true power. You'll use the techniques you've learned to perform a real-world task: analyzing a block of unstructured customer feedback. You will guide the AI to first summarize the text into key points and then to synthesize it further to extract actionable insights that a product team could use.

### Summarizing Unstructured Data

Now, you should apply what you've learned to a practical task: summarizing a large block of text. This is a common use case for LLMs.
- In the 'Summarizing Unstructured Data' section, you're provided with a `CSV` of random unstructured data from many different IDs.
- You will need to add the key fields you want the AI to parse out of the unstructured data contained within the raw-text part of the CSV.
- Constraint-based prompting (telling the model the output format) is a powerful tool. Using bullet points is a great example of this.
- Execute the prompt and see how well the model condenses the information. A sketch of a constraint-based extraction prompt follows this list.
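The field names and sample record below are illustrative assumptions; substitute the fields you chose for the lab's CSV. `get_completion` is the hypothetical helper from the Step 1 sketch.

```python
# Constraint-based extraction: name the key fields and fix the output format.
extraction_system_prompt = (
    "You extract structured information from raw customer feedback. For each "
    "record, return bullet points for exactly these fields: "
    "Product, Issue, Sentiment, Requested Action. Use 'Unknown' if a field is missing."
)

raw_record = (
    "ID 1042: customer says the mobile app keeps logging her out mid-session, "
    "she is frustrated and wants a fix before renewal."
)

print(get_completion(extraction_system_prompt, raw_record))
```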
Challenge
Step 5: Evaluating and Iterating on AI Responses
API KEY

For this final section you will need to re-enter your API key. The API key can be found in the top center of the lab or in the second cell of your lab notebook.

Getting a response from an LLM is easy; getting a good response is harder. In this final step, you'll focus on the critical skill of evaluation. You'll learn to spot and document AI 'hallucinations', use multi-turn conversations to refine a vague answer, and even ask the model to evaluate its own performance.

### Hallucinations

LLMs can sometimes 'hallucinate' or invent facts. It's crucial to be able to spot and handle these situations. You will use a deliberately tricky prompt to cause a hallucination.
- Find the 'Identifying Hallucinations' section.
- The system prompt to help with hallucinations is already created, and a few example prompts are provided.
- It's important to note that most hallucinations happen when one part of the question is actually true and the model tries to connect it with the false part of the query. A sketch of this kind of probe follows this list.
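The prompts below are illustrative, not the notebook's provided ones; they show the true-premise/false-premise pattern described above. `get_completion` is the hypothetical helper from the Step 1 sketch.

```python
# Probing for hallucinations: the question mixes a true premise with a false
# one. A grounded system prompt should push the model to reject the false part
# rather than invent a connection.
grounded_system_prompt = (
    "Answer only from facts you are confident about. If any part of the "
    "question contains a false or unverifiable premise, say so explicitly "
    "instead of answering as if it were true."
)

# True part: Einstein won a Nobel Prize (1921). False part: there was no second one.
tricky_prompt = "In which year did Albert Einstein win his second Nobel Prize?"

print(get_completion(grounded_system_prompt, tricky_prompt))
```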
### Multi-turn Interactions

Multi-turn interactions allow you to refine the model's output iteratively. You can provide feedback on a previous response to guide it toward a better answer.
- In the 'Multi-Turn Refinement' section, try making your own prompt or use any of the provided prompts to get started.
- From there you can have an iterative conversation with the model and try to revise returned values or get explanations of its classifications.
- Notice that you are appending messages so the model has the entire context of your conversation. A sketch of this pattern follows this list.
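The conversation below is illustrative; `client` is the AzureOpenAI client from the Step 1 sketch, and the deployment name is an assumption.

```python
# Multi-turn refinement: each turn is appended to `messages`, so the model
# sees the full conversation history on every call.
messages = [
    {"role": "system", "content": "You classify customer reviews by sentiment."},
    {"role": "user", "content": "Classify: 'Shipping was slow but support was great.'"},
]

first = client.chat.completions.create(model="gpt-4", messages=messages)
reply = first.choices[0].message.content
print("Turn 1:", reply)

# Append the model's answer, then a follow-up asking it to explain itself.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user",
                 "content": "Explain which phrase drove that classification."})

second = client.chat.completions.create(model="gpt-4", messages=messages)
print("Turn 2:", second.choices[0].message.content)
```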