The Fine Tuning LLM with Hugging Face course equips participants to build LLMs for production applications end to end. The course covers when to reuse an existing model, when to fine-tune one with advanced techniques, and when to train a model from scratch, and participants will gain a deep understanding of why these techniques work. Throughout the course, both models and theory are taught using real-world datasets and production examples.
Prerequisites:
To succeed in this course, participants must have experience with:
- Machine Learning
- NLP and RNNs
- Keras
- scikit-learn
Software Required:
- Python >3.9
- TensorFlow >2.7
- Keras >2.3 within Google Colaboratory
Purpose
| Create LLMs for production applications. |
Audience
| Data Scientists and Engineers who want to expand their practical use of LLMs in production applications. |
Role
| Data Scientists | Data Engineers | Software Developers |
Skill level
| Advanced |
Style
| Lectures | Demonstrations | Hands-on Activities/Labs |
Duration
| 2 days |
Related technologies
| Python | Keras | scikit-learn | TensorFlow |
Course objectives
- Understand how LLMs are composed, trained and used.
- Understand the Generative AI lifecycle, including how it differs from the traditional ML lifecycle.
- Discover the optimal balance between compute cost, dataset size, and model parameters.
- Establish guidance on when to perform transfer learning, when to fine-tune models, and when prompt engineering alone is enough.
- Use parameter-efficient fine-tuning (PEFT) to train only a small fraction of a model's parameters, reducing cost.
- Apply reinforcement learning with PPO to create more helpful models while reducing harmful output.
- Quantize models for optimal deployment.
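To give a flavor of the PEFT objective above: the cost savings come from training small low-rank adapter matrices instead of a layer's full weight matrix (the LoRA approach). The sketch below is purely illustrative, using hypothetical layer sizes rather than anything from the course materials, and plain Python arithmetic rather than the Hugging Face PEFT API.

```python
# Illustrative sketch of the parameter savings behind LoRA-style PEFT.
# The frozen weight W of a linear layer has shape (d_out, d_in); instead of
# updating it, LoRA trains two low-rank factors A (rank x d_in) and
# B (d_out x rank), and the adapted weight is W + B @ A.

d_in, d_out, rank = 768, 768, 8   # hypothetical layer size and LoRA rank

full_params = d_out * d_in                 # parameters in the frozen weight W
lora_params = rank * d_in + d_out * rank   # parameters in the trainable A and B

fraction = lora_params / full_params
print(f"trainable fraction: {fraction:.4f}")  # -> 0.0208
```

Even at this modest rank, only about 2% of the layer's parameters are trained, which is why PEFT cuts fine-tuning cost so sharply.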