Prompt Engineering Best Practices
In this lab, you'll practice skills to properly prompt an AI model. When you're finished, you'll have a better understanding of how to prompt models to return the results you desire.
Table of Contents
- Challenge: Introduction and Setup
- Challenge: Clarity, Specificity, and Iteration
- Challenge: Structured Prompting with RTCF
- Challenge: Managing Multi-Turn Conversations
- Challenge: Responsible and Safe Prompting
Challenge: Introduction and Setup
Welcome to the Prompt Engineering Best Practices Code Lab!
In this lab, you'll learn how to programmatically interact with large language models (LLMs) like ChatGPT to produce more precise, structured, and reliable results. You will work in a Jupyter Notebook with a connection to ChatGPT ready to be set up.
First, set up the environment. You'll be using a ChatGPT client to receive responses so you can develop and test your prompt-generation logic.

---
Before writing any prompts, you need to set up the environment.
Step 1: Set Up
- Open the `lab_notebook.ipynb` file.
- Locate the code in the second cell labeled `<TODO_ENTER_API_KEY_HERE>`.
- Replace the placeholder with the API key generated at the top center of your browser, ensuring the key remains wrapped in quotes (`""`).
- To verify that the API key is configured correctly, run the last two cells in the setup section of the notebook.
- Confirm the cells return a short response from the ChatGPT client (a sketch of the finished setup follows this list).
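For reference, the completed setup cells might look roughly like this minimal sketch, assuming the official `openai` Python package; the model name is an assumption, so use whatever the notebook specifies:

```python
from openai import OpenAI

# Paste the API key shown at the top center of your browser, kept in quotes.
client = OpenAI(api_key="<TODO_ENTER_API_KEY_HERE>")

# Smoke test: a trivial prompt that should return a short response.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; use the model the lab configures
    messages=[{"role": "user", "content": "Reply with a one-sentence greeting."}],
)
print(response.choices[0].message.content)
```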
Challenge: Clarity, Specificity, and Iteration
The single most important rule of prompt engineering is to be clear and specific. A vague prompt will almost always yield a vague and unhelpful response. You'll start by transforming a poor prompt into a good one.
Additionally, prompt engineering is an iterative process. Your first prompt is rarely your best one. After reviewing the output, you'll often have ideas for how to improve it, such as requesting a more structured format like JSON.

---
A great prompt is clear and specific. Vague prompts lead to vague, unhelpful answers.
Your task is to transform a vague prompt into a much better one by adding specific details.
Step 2: Clarity, Specificity, and Iteration
- In the notebook, navigate to Step 2: Clarity, Specificity, and Iteration.
- Locate the string used to prompt the AI: `"Tell me about that city with the tower"`.
- Modify the string to create a prompt that is specific (see the sketch after this list).
- Update the prompt so that it:
  - Requests a summary of Seattle's main tourist attractions
  - Mentions the Space Needle
  - Requests the output as a single paragraph
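As a sketch, the rewritten prompt might look something like this; the exact wording is yours to choose:

```python
# Vague original prompt from the notebook
prompt = "Tell me about that city with the tower"

# Specific rewrite: names the city and the landmark, and pins the format.
prompt = (
    "Give me a summary of Seattle's main tourist attractions, including "
    "the Space Needle. Return the answer as a single paragraph."
)
```

---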
Iterative refinement is key to prompt engineering. After your first attempt, you might realize you need a different format.
Your next task is to iterate on the previous prompt to request a structured format.
Step 2: Clarity, Specificity, and Iteration (part 2)
- Take the clear prompt you wrote in the previous task as a starting point.
- Modify the prompt to request the output as a JSON object.
- Specify the following structure, as sketched below:
  - A top-level key `'attractions'`
  - A list of objects as the value
  - Each object must include `'name'` and `'description'` keys
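Building on the previous prompt, the iteration might read like this sketch (the phrasing is illustrative):

```python
# Iteration: same subject, now requesting structured JSON output.
prompt = (
    "Summarize Seattle's main tourist attractions, including the Space Needle. "
    "Return the result as a JSON object with a top-level key 'attractions' "
    "whose value is a list of objects, each with 'name' and 'description' keys."
)

# Illustrative shape of the expected response:
# {"attractions": [{"name": "Space Needle", "description": "..."}, ...]}
```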
Challenge: Structured Prompting with RTCF
To consistently get high-quality responses, it helps to use a structured format for your prompts. A popular and effective one is Role-Task-Context-Format (RTCF):
- Role: Tell the model what persona it should adopt (for example, "You are a senior software developer.").
- Task: Clearly state what you want the model to do.
- Context: Provide all necessary background information, data, or examples.
- Format: Specify the exact output format you need (for example, "JSON", "a bulleted list", "a Python function").
This structure removes ambiguity and guides the model toward the desired output.

LLMs have a knowledge cutoff and no access to your private data or recent events. To make them useful for specific tasks, you need to provide relevant information directly within the prompt.
This is known as grounding the model in context. You can embed documents, data from tables, or previous outputs to give the model the information it needs to perform a task accurately.
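For instance, grounding might look like embedding a short document directly into the messages. This is a minimal sketch; the policy text is purely illustrative:

```python
# Illustrative document to ground the model; real tasks would embed your data.
policy = "Returns are accepted within 30 days of purchase with a receipt."

messages = [
    {"role": "system", "content": f"Answer using only this policy: {policy}"},
    {"role": "user", "content": "Can I return an item after two weeks?"},
]
```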
In this lab, you will insert the context directly into the system content to ground the model in context.

---
The Role-Task-Context-Format (RTCF) is a powerful structure for creating highly effective prompts.
Step 3: Structured Prompting with RTCF
Your task is to create a prompt that returns the rule describing how numbers should be handled in text from the predefined `editingGuidelines` variable. Construct a prompt using an f-string that explicitly defines the four RTCF components (a sketch follows this list):
- Role: For example, "You are a helpful data analyst."
- Task: Instruct the model to return the specific rule about how to handle numbers.
- Context: Include `editingGuidelines` as the context.
- Format: Request that the output references the specific rule mentioned.
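A minimal sketch of such an f-string prompt, with a stand-in for the notebook's predefined variable:

```python
# Stand-in only; the lab notebook defines the real editingGuidelines text.
editingGuidelines = "..."

# RTCF prompt built with an f-string: Role, Task, Context, Format.
prompt = f"""Role: You are a helpful data analyst.
Task: Return the specific rule describing how numbers should be handled in text.
Context: {editingGuidelines}
Format: Reference the specific rule from the guidelines in your answer."""
```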
Challenge: Managing Multi-Turn Conversations
Chat models are inherently stateless. Each API call is independent.
To create the experience of a continuous conversation, you must send the entire chat history back to the model with every new turn.
The history is typically a list of message objects. Each message includes a `role` (`user`, `assistant`, or `system`) and associated `content`.
Set Up for Multi-Turn Conversations
For Step 4: Multi-Turn Conversations, you will need to configure your API access.
Use the API key generated in the top center of your browser and insert it into the code in the second cell, where it is labeled `<TODO_ENTER_API_KEY_HERE>`. Ensure the API key remains wrapped in quotes.

---

Models are stateless. To have a conversation, you must provide the entire chat history with each new message.
Step 4: Multi-Turn Conversations
Your task is to implement a function that adds a new user message to an existing conversation history.
The function defined within Step 4 demonstrates how to manage conversation history when maintaining a multi-turn interaction with an LLM.
In this step, you define the initial system prompt using:

`[{"role": "system", "content": system_content}]`

This message tells the model what role it should assume and establishes the overall context for the conversation.

Following this, messages with the `user` role define what you ask or say to the model. Messages with the `assistant` role represent responses previously generated by the model. Including both allows the model to understand the full conversational context.

It is important to know that a conversation is represented as a list of dictionaries. Each dictionary contains a `role` and the `content` of the message. The role indicates who produced the message, and the content provides the message itself. The order of the list matters, as messages are read sequentially from lowest to highest index.
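A minimal sketch of this pattern is shown below, again assuming the `openai` package; the helper names, system prompt, and model are illustrative rather than the lab's exact code:

```python
from openai import OpenAI

client = OpenAI(api_key="<TODO_ENTER_API_KEY_HERE>")  # placeholder from setup

# The initial system message establishes the model's role and context.
system_content = "You are a concise travel guide."
messages = [{"role": "system", "content": system_content}]

def add_user_message(history, text):
    """Append a new user turn to the conversation history."""
    history.append({"role": "user", "content": text})
    return history

def send(history):
    """Send the full history, record the assistant's reply, and return it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; use the lab's configured model
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

add_user_message(messages, "Name one must-see attraction in Seattle.")
print(send(messages))
add_user_message(messages, "How long should I plan to spend there?")
print(send(messages))  # "there" resolves because the full history is sent
```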
Challenge: Responsible and Safe Prompting
Building powerful AI applications comes with responsibility. As a developer, you must consider the safety, ethical implications, and reliability of your system.
Two major challenges include:
- Handling Sensitive Data: You must understand when sensitive data is involved, why it requires special care, and how to avoid exposing or misusing it in prompts and responses.
- Hallucinations: Models can confidently produce incorrect information. You can add safeguards to your prompts to encourage factual accuracy and express uncertainty when needed.

When dealing with potentially sensitive data, you must instruct the model to handle it responsibly.
Warning: The only way to ensure your model does not leak private data is to never give the model access.
If a generative AI model has access to sensitive or restricted information, there is always a risk that it may disclose that information in its responses. For this reason, it is highly recommended to never allow a generative model to handle data that should not be available to all who have access to the model.
Attempts to implement safety guardrails, such as redacting specific information, can reduce risk but are not foolproof. Over time, these approaches may still allow sensitive information to be exposed.
Systems such as agents can use permission levels to help enforce security boundaries. This makes it simpler to ensure the model never sees information outside a given permission level.

---
LLMs can hallucinate, or invent facts. A well-designed prompt can help mitigate this by instructing the model on how to behave when it's uncertain.
Your task is to design a prompt that encourages factual accuracy.
Step 5: Mitigate Hallucinations
- Construct a prompt that asks the model to answer the question.
- Add an important condition: if the model is not 100% certain about the answer, it must state that it is uncertain instead of providing a potentially incorrect answer.
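As a sketch, that condition might be phrased like this; the question and wording are illustrative:

```python
question = "Who designed the Space Needle?"  # illustrative question

# Prompt with an explicit uncertainty clause to discourage guessing.
prompt = (
    f"Answer the following question: {question}\n"
    "Important: if you are not 100% certain of the answer, state that you "
    "are uncertain instead of providing a potentially incorrect answer."
)
```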