Reducing Complexity in Data

This course covers several techniques for simplifying the data used in supervised machine learning applications, ranging from relatively simple statistical feature selection to advanced clustering and autoencoding techniques built on deep neural networks.
Course info
Level
Intermediate
Updated
Apr 11, 2019
Duration
3h 20m
Table of contents
Course Overview
Understanding the Need for Dimensionality Reduction
Using Statistical Techniques for Feature Selection
Reducing Complexity in Linear Data
Reducing Complexity in Nonlinear Data
Dimensionality Reduction Using Clustering and Autoencoding Techniques
Description

Machine learning techniques have grown significantly more powerful in recent years, but excessive complexity in data is still a major problem. There are several reasons for this: distinguishing signal from noise gets harder with more complex data, and the risk of overfitting goes up as well. Finally, as cloud-based machine learning becomes more and more popular, reducing complexity in data is crucial to making training affordable, since cloud-based ML solutions can be very expensive indeed. In this course, Reducing Complexity in Data, you will learn how to make the data fed into machine learning models more tractable and more manageable, without resorting to any hacks or shortcuts, and without compromising on quality or correctness. First, you will learn the importance of parsimony in data and understand the pitfalls of working with data of excessively high dimensionality, often referred to as the curse of dimensionality. Next, you will discover how and when to resort to feature selection, employing statistically sound techniques to find a subset of the input features based on their information content and their link to the output. Finally, you will explore how to use two advanced techniques, clustering and autoencoding. Both are applications of unsupervised learning used to simplify data as a precursor to a supervised learning algorithm, and each often relies on a sophisticated implementation such as deep learning using neural networks. When you're finished with this course, you will have the skills and knowledge of conceptually sound complexity reduction needed to reduce the complexity of data used in supervised machine learning applications.

About the author

A problem solver at heart, Janani has a master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.

More from the author
Mining Data from Text
Intermediate
2h 21m
Jun 28, 2019
Building Regression Models with scikit-learn
Intermediate
2h 42m
Jun 28, 2019
Section Introduction Transcripts

Course Overview
Hi, my name is Janani Ravi, and welcome to this course on Reducing Complexity in Data. A little about myself: I have a master's degree in electrical engineering from Stanford and have worked at companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loonycorn, a studio for high-quality video content. In this course, you will learn how to make the data fed into machine learning models more tractable and more manageable, without resorting to any hacks or shortcuts and without compromising on quality or correctness. First, you'll learn the importance of parsimony in data and understand the pitfalls of working with data of excessively high dimensionality. You will discover how and when to resort to feature selection, employing statistically sound techniques to find a subset of the input features based on their information content and their link to the output. You will then learn important techniques for reducing dimensionality in linear data, such as principal components analysis, which seeks to re-orient the original data by improving the axes used. You will also work with reducing the complexity of manifold data, which can be likened to data points scattered on a carpet that is then rolled up into a shape like a Swiss roll or an S-curve. You will round out the course by using two advanced techniques, clustering and autoencoding, each of which often relies on a sophisticated implementation such as deep learning using neural networks. When you're finished with this course, you will have the skills and knowledge of conceptually sound complexity reduction needed to reduce the complexity of data used in supervised machine learning applications.
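
To make the PCA and manifold ideas mentioned above concrete, here is a minimal sketch in Python using scikit-learn; the datasets, component counts, and neighbor settings are illustrative assumptions, not the course's actual demo code.

# PCA re-orients linear data onto new axes ordered by variance captured.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)              # 4 features per instance
pca = PCA(n_components=2)                      # keep the 2 best new axes
X_reduced = pca.fit_transform(X)               # re-orient, then project
print(X.shape, "->", X_reduced.shape)          # (150, 4) -> (150, 2)
print(pca.explained_variance_ratio_)           # variance captured per axis

# Manifold data: "unroll" a Swiss roll back onto a flat 2-D carpet.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X_roll, _ = make_swiss_roll(n_samples=1000, random_state=0)
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=12)
X_flat = lle.fit_transform(X_roll)             # 3-D roll -> 2-D sheet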

Understanding the Need for Dimensionality Reduction
Hi, and welcome to this course on Reducing Complexity in Data and this first module, where we'll try to understand the need for dimensionality reduction. As a student of machine learning, you'll first work with toy datasets where there are not many features per instance. But out in the real world, it's quite possible that every instance has many, many features, and this can lead to a dimensionality explosion. We'll study the different problems associated with high-dimensionality training data; there are problems in training as well as in prediction. We'll then study the bias-variance trade-off in building machine learning models and understand how a model overfits its training data. Overfitting is significant here because the more features your training data has, the higher the risk of overfitting your model on it. We'll then discuss in detail the curse of dimensionality and the drawbacks of excessively complex models. Complex models tend to be overfitted, which means they might do well in training but perform poorly in the real world. And finally, we'll see how you can choose the right dimensionality reduction technique based on your use case.
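
The overfitting risk described here is easy to demonstrate. The following sketch (an illustration of the concept, not code from the course) trains a flexible model on purely random high-dimensional data: with 500 meaningless features, the model memorizes the training set yet performs at chance on held-out data.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))            # 500 features of pure noise
y = rng.integers(0, 2, size=200)           # random labels: no real signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier().fit(X_tr, y_tr)

print("train accuracy:", model.score(X_tr, y_tr))   # ~1.0 (memorized)
print("test accuracy:", model.score(X_te, y_te))    # ~0.5 (chance)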

Using Statistical Techniques for Feature Selection
Hi, and welcome to this module on Using Statistical Techniques for Feature Selection. We discussed in the last module that one way to reduce dimensionality in input data is by selecting significant features. In this module, we'll study different techniques for selecting and eliminating features, and we'll see hands-on demos of how to apply these techniques in practice. We'll first study and use variance thresholds, which eliminate input features whose variance falls below a certain threshold. We'll then study a number of different univariate statistical analyses that we'll use to select significant features. We'll understand statistical techniques such as ANOVA, chi-square, and mutual information at a high conceptual level, and we'll then apply these techniques in our demos to perform feature selection and build classification models. In addition to these statistical techniques, we'll also study how dictionary learning works. This is a representation learning method. Dictionary learning is used for atom extraction, that is, learning sparse representations of dense input data, such as images.
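
As a rough sketch of these techniques in scikit-learn (the dataset, threshold, and k values below are illustrative assumptions, not the module's exact demos):

from sklearn.datasets import load_iris
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_selection import (SelectKBest, VarianceThreshold,
                                       chi2, f_classif, mutual_info_classif)

X, y = load_iris(return_X_y=True)

# 1. Variance threshold: drop features whose variance falls below a cutoff.
X_var = VarianceThreshold(threshold=0.2).fit_transform(X)

# 2. Univariate selection: score each feature against the target and keep
#    the k best. f_classif is ANOVA; chi2 and mutual_info_classif are the
#    other scoring functions discussed in this module.
X_anova = SelectKBest(score_func=f_classif, k=2).fit_transform(X, y)
X_chi2 = SelectKBest(score_func=chi2, k=2).fit_transform(X, y)
X_mi = SelectKBest(score_func=mutual_info_classif, k=2).fit_transform(X, y)

# 3. Dictionary learning: learn a sparse "atom" representation of the data.
X_sparse = DictionaryLearning(n_components=3, random_state=0).fit_transform(X)

print(X.shape, X_var.shape, X_anova.shape, X_sparse.shape)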

Dimensionality Reduction Using Clustering and Autoencoding Techniques
Hi, and welcome to this module on Dimensionality Reduction Using Clustering and Autoencoding Techniques. Both clustering and autoencoding are classic unsupervised learning algorithms, which means the data you feed in to train these models does not contain any labels or correctly classified instances. Unsupervised learning techniques learn patterns and significant details from the data itself; there are no labels available to correct and train these models. Both techniques are inherently simple approaches to learning the significant features in the underlying data. In this module, you'll see how we can apply clustering to dimensionality reduction. We'll use the K-means clustering technique to find centroids in our data and re-express every data point in the original data in terms of the cluster centroids. We'll then feed this lower-dimensionality output of the clustering model into a classification model and see how our classifier performs with these lower dimensions. We'll then study autoencoders, which are machine learning models built using neural networks. Autoencoders learn efficient representations of complex input data by reconstructing the input that you feed in at the output.
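
Here is a minimal sketch of both steps described above; the dataset, cluster count, downstream classifier, and the use of Keras for the autoencoder are illustrative assumptions rather than the course's exact demos.

# Re-express each 64-feature digit image by its distances to 10 centroids.
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_tr)
X_tr_small = kmeans.transform(X_tr)            # (n, 64) -> (n, 10)
X_te_small = kmeans.transform(X_te)

# Feed the lower-dimensional output into a downstream classifier.
clf = LogisticRegression(max_iter=5000).fit(X_tr_small, y_tr)
print("accuracy on 10-D data:", clf.score(X_te_small, y_te))

# Autoencoder skeleton (Keras assumed): a narrow middle layer forces the
# network to learn a compact code that can still reconstruct its input.
from tensorflow import keras

inputs = keras.Input(shape=(64,))
code = keras.layers.Dense(10, activation="relu")(inputs)       # encoder
outputs = keras.layers.Dense(64, activation="sigmoid")(code)   # decoder
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X_tr, X_tr, ...) would train it to reconstruct X_tr.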