Fine Tuning LLMs with Hugging Face

Course Summary

The Fine Tuning LLMs with Hugging Face course gives participants the knowledge to build LLMs for production applications end to end. The course covers when to reuse existing models, when to fine-tune them with advanced techniques, and when to train a model from scratch. Participants will gain a deep understanding of why these techniques work. Throughout the course, both the models and the theory are taught with real-world datasets and production examples.

Prerequisites:

To succeed in this course, participants must have experience with:

  • Machine Learning
  • NLP and using RNNs
  • Keras
  • scikit-learn

Software Required:

  • Python 3.9+
  • TensorFlow 2.7+
  • Keras 2.3+ within Google Colaboratory (Colab)

Purpose
Create LLMs for production applications.
Audience
Data Scientists and Engineers that want to expand their practical use of LLMs for production applications.
Role
Data Scientists | Data Engineers | Software Developers
Skill level
Advanced
Style
Lectures | Demonstrations | Hands-on Activities/Labs
Duration
2 days
Related technologies
Python | Keras | scikit-learn | TensorFlow


Course objectives
  • Understand how LLMs are composed, trained and used.
  • Understand the Generative AI lifecycle, including how it differs from the traditional ML lifecycle.
  • Discover the optimal balance between compute cost, dataset size, and model parameters.
  • Establish guidance on when to perform transfer learning, when to fine-tune a model, and when prompt engineering alone is enough.
  • Use parameter-efficient fine-tuning (PEFT) to train only a small percentage of parameters, reducing cost.
  • Apply state-of-the-art reinforcement learning from human feedback with PPO to create more helpful models that avoid harmful output.
  • Quantize models for optimal deployment.
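To give a flavour of the compute/data/parameter trade-off mentioned in the objectives, here is a back-of-the-envelope sketch. It uses the common approximation C ≈ 6·N·D FLOPs and the roughly 20-tokens-per-parameter rule of thumb from the Chinchilla scaling work; these rule-of-thumb values are illustrative assumptions, not course material:

```python
# Back-of-the-envelope scaling sketch (rule-of-thumb values, not course code).
# Training compute is commonly approximated as C = 6 * N * D FLOPs,
# where N = model parameters and D = training tokens.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute in FLOPs."""
    return 6 * params * tokens

def chinchilla_optimal_tokens(params: float, tokens_per_param: float = 20.0) -> float:
    """Rough 'compute-optimal' token budget (~20 tokens per parameter)."""
    return params * tokens_per_param

n = 7e9                                # a 7B-parameter model
d = chinchilla_optimal_tokens(n)       # ~140B training tokens
c = training_flops(n, d)               # ~5.9e21 FLOPs
print(f"tokens: {d:.2e}, compute: {c:.2e} FLOPs")
```

Sketches like this are how the course frames "when is fine-tuning cheaper than training from scratch": fine-tuning reuses the D tokens someone else already paid for.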

What you'll learn:

In this Fine Tuning LLMs with Hugging Face course, you'll learn:
  • Introducing LLMs
    • What lies behind ChatGPT?
    • LLMs as Transformers
    • Different types of transformers and tasks
    • Famous Transformers
    • How to use Transformers without training: Prompt Engineering
    • The Generative AI project lifecycle vs ML lifecycle
    • Summarization and grammatical corrections with Prompt Engineering
  • Introducing Attention
    • What did we do before? Remembering word2vec and seq2seq
    • Seq2Seq limitations
    • Attention à la Bahdanau
    • Dot Product and Scaled Dot Product Attention
    • Introducing Attention in Keras
    • Seq2Seq with Attention 
  • Transformers
    • Why seq2seq with Attention has some drawbacks
    • Multi-Headed Attention
    • The Transformer architecture: Deep analysis of its components
  • Hugging Face
    • Introducing Hugging Face
    • Introducing datasets
    • How to use a model from Hugging Face
    • How to upload a checkpoint to Hugging Face
  • Training LLMs: The easy part
    • When to train and when not to train your LLM
    • Computing difficulties of training LLMs
    • Full fine-tuning: costs and the risk of catastrophic forgetting
    • Perform full fine-tuning of Flan-T5 and observe catastrophic forgetting
    • Single-task vs. multi-task fine-tuning
    • Perform transfer learning to avoid full fine-tuning
    • Perform transfer learning with DistilBERT for sentiment analysis
  • Training LLMs: The complex part
    • Introducing PEFT
    • LoRA-based PEFT
    • Soft Prompt-based PEFT
    • Fine-tune Flan-T5 with PEFT and perform NER and summarization
  • Training LLMs: The more complex part
    • Introducing Reinforcement Learning from Human Feedback (RLHF)
    • The PPO algorithm
    • The Reward Model
    • Reward hacking, scaling issues and some tips
    • Using RLHF to avoid hateful responses from our LLM
  • Deploying LLMs: Quantization
    • Introducing Quantization
    • Weight-pruning-aware training in Keras
    • Quantization-aware training in Keras for LLMs with PEFT
    • Serving quantized models
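Several of the modules above (scaled dot-product attention, PEFT, quantization) are hands-on. As a taste of the level involved, here is a minimal NumPy sketch of scaled dot-product attention, the formula covered in the attention module; the shapes and names are illustrative, not course code:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d_k)  # (queries, keys)
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
k = rng.normal(size=(6, 8))   # 6 key positions
v = rng.normal(size=(6, 8))   # one value vector per key
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape, w.shape)     # (4, 8) (4, 6)
```

The 1/sqrt(d_k) scaling keeps the dot products from saturating the softmax as the key dimension grows, which is exactly the "scaled" part discussed in the attention module.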

Dive in and learn more

When transforming your workforce, it’s important to have expert advice and tailored solutions. We can help. Tell us your unique needs and we'll explore ways to address them.
