
Generative AI: Prompt Engineering for Software Developers

Course Summary

The Generative AI: Prompt Engineering for Software Developers course provides participants with hands-on experience in leveraging Large Language Models (LLMs) and Natural Language Processing (NLP) in software development.  

The course begins with an overview of working with LLMs and NLP using OpenAI, teaching participants how to craft and refine effective prompts for solving software engineering challenges. Participants then implement NLP techniques to build custom AI-powered applications, such as chatbots. The course concludes with best practices for integrating Generative AI into software development workflows.

Prerequisites

  • This course assumes students have prior experience programming in Python, including installing dependency libraries.

Purpose: Apply Large Language Models (LLMs) and Natural Language Processing (NLP) in software development workflows
Audience: Developers interested in implementing NLP techniques
Role: Software Developers | Data Engineers
Skill level: Intermediate
Style: Lecture | Demonstrations | Hands-on Activities/Labs
Duration: 4 days
Related technologies: Python


Course objectives
  • Understand how to utilize Large Language Models in software applications
  • Practice writing and iterating effective prompts for AI interactions
  • Produce AI-powered applications such as chatbots and automated assistants
  • Apply best practices for integrating LLMs into software development

What you'll learn:

In the Generative AI: Prompt Engineering for Software Developers course, you'll learn:

Using Large Language Models (LLMs)

  • Understand model capabilities and limitations
  • Format and preprocess input data for LLMs
  • Optimize prompt design for accuracy and efficiency
  • Apply retrieval-augmented generation (RAG) for contextual responses
  • Implement ethical safeguards and bias mitigation
  • Handle errors, manage API usage, and optimize costs
  • Monitor model performance and adjust strategies accordingly
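A minimal sketch of the kind of error handling covered here: retrying a transient API failure with exponential backoff. The `call_model` function is a hypothetical stand-in for a real API client, not part of any specific library.

```python
import random
import time

def call_with_retry(call_model, prompt, max_retries=3, base_delay=1.0):
    """Retry a model call with exponential backoff, a common pattern
    for handling transient API errors and rate limits."""
    for attempt in range(max_retries):
        try:
            return call_model(prompt)
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Exponential backoff with jitter to avoid synchronized retries
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

Production clients typically add cost tracking and logging around the same loop; the retry skeleton stays the same.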

Writing and Iterating on Prompts using OpenAI API

  • Define clear goals and expected outputs
  • Craft specific, structured prompts
  • Experiment with variations and control outputs
  • Leverage API parameters (temperature, max tokens, system messages)
  • Manage conversation context across multiple interactions
  • Review and refine prompts based on model behavior
  • Implement bias reduction techniques and responsible AI usage
  • Stay informed on evolving LLM capabilities
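As an illustration of these ideas, the sketch below assembles a chat-style request body: a system message sets behavior, prior turns carry conversation context, and sampling parameters control the output. The function name and dictionary shape are illustrative assumptions, loosely modeled on chat-completion APIs.

```python
def build_request(system_prompt, history, user_message,
                  temperature=0.2, max_tokens=256):
    """Assemble a chat-completion style request body from a system
    prompt, prior conversation turns, and the new user message."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # prior turns preserve multi-turn context
    messages.append({"role": "user", "content": user_message})
    return {
        "messages": messages,
        "temperature": temperature,  # lower values = more deterministic output
        "max_tokens": max_tokens,    # caps response length and cost
    }
```

Iterating on prompts then becomes a matter of varying the system message and parameters and comparing the model's behavior across runs.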

Building Custom Chatbots

  • Define purpose and audience
  • Choose a development platform
  • Design conversation flows with context retention and multi-turn interactions
  • Implement retrieval-augmented generation (RAG) for better responses
  • Integrate with communication channels
  • Monitor performance, optimize prompt engineering, and manage errors
  • Handle common queries effectively
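The retrieval-augmented generation step above can be sketched as follows. This is a deliberately naive version: keyword overlap stands in for a real vector store, and both function names are hypothetical.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by keyword overlap with the query; a naive
    stand-in for embedding-based retrieval from a vector store."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Inject the retrieved passages into the prompt so the model
    answers from supplied context rather than memory alone."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a real chatbot, the retrieved context would be refreshed on every turn before the prompt is sent to the model.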

Best Practices for LLM-Supported App Development

  • Optimize input data to improve LLM performance
  • Use prompt engineering and retrieval-based techniques before fine-tuning
  • Design prompts that balance creativity, accuracy, and efficiency
  • Implement security best practices for API-based LLMs
  • Optimize latency and cost efficiency for real-time applications
  • Set ethical guidelines to prevent bias or harm
  • Monitor and review LLM-generated content
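A minimal sketch of the review step above: checking generated text against a deny-list before it reaches users. The function and term list are illustrative assumptions; real deployments layer on moderation APIs and human review.

```python
def review_output(text, banned_terms=("password", "ssn")):
    """Flag generated text containing terms that should never be
    echoed back to users; a first line of defense before release."""
    flags = [term for term in banned_terms if term in text.lower()]
    return {"approved": not flags, "flags": flags}
```

Flagged outputs can be logged and routed to human review, giving a feedback loop for tightening prompts and guidelines over time.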
