Building Features from Nominal Data

by Janani Ravi

This course covers various techniques for encoding categorical data, starting with the familiar forms of one-hot and label encoding, before moving to contrast coding schemes such as simple coding, Helmert coding, and orthogonal polynomial coding.

What you'll learn

The quality of the preprocessing that numeric data is subjected to is an important determinant of the results of machine learning models built using that data. In this course, Building Features from Nominal Data, you will gain the ability to encode categorical data in ways that increase the statistical power of models. First, you will learn the different types of continuous and categorical data, including the differences between ratio and interval scale data, and between nominal and ordinal data. Next, you will discover how to encode categorical data using one-hot and label encoding, and how to avoid the dummy variable trap in linear regression. Finally, you will explore how to implement different forms of contrast coding, such as simple, Helmert, and orthogonal polynomial coding, so that regression results closely mirror the hypotheses that you wish to test. When you’re finished with this course, you will have the skills and knowledge of encoding categorical data needed to increase the statistical power of linear regression models that include such data.
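As a taste of the techniques covered, here is a minimal sketch of one-hot encoding, label encoding, and avoiding the dummy variable trap using pandas. The `color` feature and its values are made up purely for illustration; they are not taken from the course materials.

```python
import pandas as pd

# Hypothetical nominal feature with three levels (illustrative only).
df = pd.DataFrame({"color": ["red", "green", "blue", "green", "red"]})

# One-hot encoding: one indicator column per category level.
one_hot = pd.get_dummies(df["color"], prefix="color")

# Dropping the first level avoids the dummy variable trap: a full set of
# indicator columns is perfectly collinear with the regression intercept,
# so one level is left out as the reference category.
dummies = pd.get_dummies(df["color"], prefix="color", drop_first=True)

# Label encoding: map each level to an integer code
# (levels are sorted alphabetically by default).
codes = df["color"].astype("category").cat.codes
```

Note that label encoding imposes an arbitrary ordering on nominal data, which is why one-hot or contrast coding is usually preferred for linear models.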

Table of contents

Course Overview
2 mins

About the author

Janani has a Master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework. After spending years working in tech in the Bay Area, New York, and Singapore at companies such as Microsoft, Google, and Flipkart, Janani finally decided to combine her love for technology with her passion for teaching. She is now the co-founder of Loonycorn, a content studio focused on providing…
