Scaling scikit-learn Solutions

This course covers the important considerations for improving prediction latency and throughput in scikit-learn models: feature representation and sparse data handling, as well as implementations of incremental learning, out-of-core learning, and multicore parallelism.
Course info
Rating
(14)
Level
Advanced
Updated
Oct 30, 2019
Duration
2h 56m
Table of contents
Course Overview
Understanding Strategies for Computational Scaling
Observing the Factors Affecting Prediction Latency
Implementing Scaling of Instances Using Out-of-core Learning
Implementing Multicore Parallelism in scikit-learn
Autoscaling of scikit-learn with Apache Spark
Description

Even as the number of machine learning frameworks and libraries increases rapidly, scikit-learn retains its popularity with ease. scikit-learn makes the common use cases in machine learning - clustering, classification, dimensionality reduction, and regression - incredibly easy.

In this course, Scaling scikit-learn Solutions, you will gain the ability to leverage out-of-core learning and multicore parallelism in scikit-learn.

First, you will learn considerations that affect latency and throughput in prediction, including the number of features, feature complexity, and model complexity.
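One of those considerations can be seen directly by timing predictions as the feature count grows. The sketch below (not taken from the course; dataset sizes and the choice of `Ridge` are illustrative assumptions) measures per-instance prediction latency for models trained on progressively wider inputs:

```python
# Illustrative sketch: how the number of features affects per-prediction
# latency for a simple linear model. Sizes and model choice are arbitrary.
import time

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

for n_features in (10, 100, 1000):
    X, y = make_regression(n_samples=500, n_features=n_features, random_state=0)
    model = Ridge().fit(X, y)

    one_row = X[:1]  # predict a single instance at a time, as a service would
    start = time.perf_counter()
    for _ in range(200):
        model.predict(one_row)
    elapsed = time.perf_counter() - start
    print(f"{n_features:5d} features: {elapsed / 200 * 1e6:.1f} µs per prediction")
```

Predicting in bulk (many rows per `predict` call) rather than one row at a time is the usual way to trade latency for throughput.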

Next, you will discover how smart choices in feature representation and in how you model sparse data can improve the scalability of your models. You will then understand what incremental learning is, and how to use scikit-learn estimators that support this key enabler of out-of-core learning.
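Both ideas can be sketched together: `scipy.sparse` matrices for the data representation, and an estimator's `partial_fit` method for incremental learning. This is a minimal illustration, not the course's own example; the synthetic dataset and batch size are assumptions, and the sparse conversion is purely to show that `partial_fit` accepts sparse input.

```python
# Illustrative sketch: incremental learning with partial_fit on an
# estimator that supports it (SGDClassifier), fed in small batches as
# you would when the full dataset cannot fit in memory.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X = csr_matrix(X)        # sparse representation; partial_fit accepts it
classes = np.unique(y)   # all class labels must be declared on the first call
clf = SGDClassifier(random_state=0)

batch_size = 100
for start in range(0, X.shape[0], batch_size):
    clf.partial_fit(X[start:start + batch_size],
                    y[start:start + batch_size],
                    classes=classes)

print(f"training accuracy: {clf.score(X, y):.2f}")
```

In a genuine out-of-core setting, each batch would be read from disk or a stream instead of sliced from an in-memory array.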

Finally, you will round out your knowledge by parallelizing key tasks such as cross-validation, hyperparameter tuning, and ensemble learning.
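A common entry point for this kind of parallelism is the `n_jobs` parameter, which many scikit-learn utilities expose. The sketch below (an illustration, not the course's code; the grid and dataset are assumptions) parallelizes a hyperparameter search, where the independent cross-validation fits fan out across cores:

```python
# Illustrative sketch: multicore parallelism via n_jobs. GridSearchCV
# runs its cross-validation fits in parallel; n_jobs=-1 uses all cores.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_grid = {"n_estimators": [25, 50], "max_depth": [3, 5]}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=3,
    n_jobs=-1,  # parallelize the 2 * 2 * 3 = 12 fits across cores
)
search.fit(X, y)
print("best parameters:", search.best_params_)
```

Ensemble estimators such as `RandomForestClassifier` also accept their own `n_jobs`, parallelizing the training of individual trees.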

When you’re finished with this course, you will have the skills and knowledge to identify key techniques to help make your model scalable and implement them appropriately for your use case.

About the author

A problem solver at heart, Janani has a master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.

More from the author
Summarizing Data and Deducing Probabilities
Intermediate
2h 50m
Jul 8, 2021
More courses by Janani Ravi
Section Introduction Transcripts

Course Overview
[Autogenerated] Hi, my name is Janani Ravi, and welcome to this course on Scaling scikit-learn Solutions. A little about myself: I have a master's degree in electrical engineering from Stanford and have worked at companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loonycorn, a studio for high-quality video content. Even as the number of machine learning frameworks and libraries increases rapidly, scikit-learn retains its popularity with ease. scikit-learn makes the common use cases in machine learning - clustering, classification, dimensionality reduction, and regression - incredibly easy. In this course, you will gain the ability to leverage out-of-core learning and multicore parallelism in scikit-learn. First, you will learn considerations that affect latency and throughput in prediction, including the number of features, feature complexity, and model complexity. Next, you will discover how smart choices in feature representation and in how you model sparse data can improve the scalability of your models. You will then understand what incremental learning is, and how to use scikit-learn estimators that support this key enabler of out-of-core learning. Finally, you will round out your knowledge by parallelizing key tasks such as cross-validation, hyperparameter tuning, and ensemble learning. When you're finished with this course, you will have the skills and knowledge to identify key techniques to help make your model scalable and implement them appropriately for your use case.