Reducing Complexity in Data

This course covers several techniques for simplifying the data used in supervised machine learning applications, ranging from relatively simple feature selection to complex applications of clustering using deep neural networks.
Course info
Rating
(12)
Level
Intermediate
Updated
Apr 11, 2019
Duration
3h 20m
Table of contents
Course Overview
Understanding the Need for Dimensionality Reduction
Using Statistical Techniques for Feature Selection
Reducing Complexity in Linear Data
Reducing Complexity in Nonlinear Data
Dimensionality Reduction Using Clustering and Autoencoding Techniques
Description

Machine learning techniques have grown significantly more powerful in recent years, but excessive complexity in data is still a major problem. There are several reasons for this: distinguishing signal from noise gets harder with more complex data, and the risk of overfitting goes up as well. Finally, as cloud-based machine learning becomes more and more popular, reducing complexity in data is crucial to keeping training affordable, since cloud-based ML solutions can be very expensive.

In this course, Reducing Complexity in Data, you will learn how to make the data fed into machine learning models more tractable and more manageable, without resorting to any hacks or shortcuts, and without compromising on quality or correctness.

First, you will learn the importance of parsimony in data and understand the pitfalls of working with data of excessively high dimensionality, often referred to as the curse of dimensionality.
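The following sketch, which is not part of the course materials, illustrates one facet of the curse of dimensionality: as the number of dimensions grows, the distances between randomly placed points become nearly indistinguishable, which undermines distance-based learning methods.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

for d in (2, 10, 100, 1000):
    points = rng.random((200, d))        # 200 random points in the d-dimensional unit cube
    dists = pdist(points)                # all unique pairwise Euclidean distances
    ratio = dists.min() / dists.max()    # approaches 1 as d grows: distances stop discriminating
    print(f"d={d:4d}  nearest/farthest distance ratio = {ratio:.3f}")
```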

Next, you will discover how and when to resort to feature selection, employing statistically sound techniques to find a subset of the input features based on their information content and their link to the output.
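As a concrete illustration (a minimal sketch, not the course's own notebooks), scikit-learn's SelectKBest can score each feature by its mutual information with the target and keep only the top-scoring subset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# A labeled dataset with 30 numeric input features.
X, y = load_breast_cancer(return_X_y=True)

# Score each feature by its mutual information with the label and keep the top 10.
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)

print("original feature count:", X.shape[1])          # 30
print("selected feature count:", X_reduced.shape[1])  # 10
```

Other score functions, such as f_classif or chi2, follow the same pattern with a different statistical test under the hood.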

Finally, you will explore how to use two advanced techniques: clustering and autoencoding. Both are applications of unsupervised learning used to simplify data as a precursor to a supervised learning algorithm, and each often relies on a sophisticated implementation such as deep learning with neural networks.
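As one possible realization of the clustering approach (a hypothetical sketch, not necessarily the course's workflow), scikit-learn's KMeans can act as a transformer that replaces the raw features with each sample's distances to a small set of learned cluster centers; an autoencoder plays an analogous role by compressing the inputs through a narrow hidden layer.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 64 raw pixel features per handwritten-digit image.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# KMeans.transform maps each sample to its distances from the 16 cluster centers,
# so the classifier trains on 16 derived features instead of 64 raw ones.
model = make_pipeline(
    KMeans(n_clusters=16, n_init=10, random_state=0),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("test accuracy on cluster-distance features:", model.score(X_test, y_test))
```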

When you’re finished with this course, you will have the skills and knowledge of conceptually sound complexity reduction needed to simplify the data used in supervised machine learning applications.

About the author

A problem solver at heart, Janani has a Master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.

Section Introduction Transcripts

Course Overview
Hi, my name is Janani Ravi, and welcome to this course on Reducing Complexity in Data. A little about myself. I have a Master's Degree in electrical engineering from Stanford and have worked at companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loonycorn, a studio for high-quality video content. In this course, you will learn how to make the data fed into machine learning models more tractable, more manageable, without resorting to any hacks or shortcuts and without compromising on quality or correctness. First, you'll learn the importance of parsimony in data and understand the pitfalls of working with data of excessively high dimensionality. You will discover how and when to resort to feature selection, employing statistically sound techniques to find a subset of the input features based on their information content and their link to the output. You will then learn important techniques for reducing dimensionality in linear data, such as principal components analysis, which seek to re-orient the original data by improving the axes used. You will also work with reducing the complexity in manifold data, which can be likened to data points scattered on a carpet that is then rolled up into a shape like a Swiss roll or an S-curve. You will round out the course by using two advanced techniques, clustering and autoencoding. Each of them often relies on a sophisticated implementation, such as deep learning using neural networks. When you're finished with this course, you will have the skills and knowledge of conceptually sound complexity reduction needed to reduce the complexity of data used in supervised machine learning applications.
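For reference, here is a brief sketch (not part of the course exercises) contrasting the two families of techniques the transcript mentions: PCA re-orients the axes linearly, while a manifold learner such as Isomap can unroll the Swiss-roll shape described above into a flat two-dimensional sheet.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# 3-D points lying on a rolled-up 2-D sheet (the "Swiss roll").
X, color = make_swiss_roll(n_samples=1000, random_state=0)

X_pca = PCA(n_components=2).fit_transform(X)                     # best linear axes
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)  # nonlinear unrolling

print("PCA embedding shape:   ", X_pca.shape)
print("Isomap embedding shape:", X_iso.shape)
```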