A Cloud Guru Lab

Feature Selection Before Training in Azure Machine Learning

If you are presented with a large number of distinct features to use to train your model, it is rarely a good idea to throw them all at the model. Many of the features will have no predictive power for your desired label. In the best case, these extraneous features only increase training time; at worst, they increase model complexity, training time, and prediction error rates. To avoid these costly increases, we can use feature selection to pick the most relevant features before we start training our model. In this lab, we will explore using the Pearson correlation and chi-squared statistics to pick the best features for our model.
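Outside the Designer, the same idea can be sketched in a few lines of scikit-learn (a hypothetical stand-in for the Azure ML modules used in this lab): score every candidate feature against the label, then keep only the top-ranked ones before training.

```python
# Hypothetical sketch of filter-based feature selection with scikit-learn,
# standing in for the Azure ML Designer modules used in this lab.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))  # 8 candidate features, most of them useless
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=200)

# Score each feature's relationship to the label and keep the top k.
selector = SelectKBest(score_func=f_regression, k=2).fit(X, y)
print(selector.get_support(indices=True))  # indices of the selected features
```

Only the two features that actually drive the label survive the filter; the six noise columns are dropped before any model ever sees them.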


Path Info

Advanced
1h 0m
Sep 24, 2020


Table of Contents

  1. Challenge

    Set Up the Workspace

    1. Log in and go to the Azure Machine Learning Studio workspace provided in the lab.

    2. Create a Training Cluster of D2 instances.

    3. Create a new blank Pipeline in the Azure Machine Learning Studio Designer.

  2. Challenge

    Create the Baseline Model

    1. Create a model using the Boosted Decision Tree Regression algorithm.

      Note: Since we are comparing multiple models, we want them to be initialized the same way. For good science, we want there to be only one difference between the control and experiment groups, which, in our case, is the set of features passed to the model. To accomplish this, set the random seed to any non-zero number.

    2. Train the model. Make sure to use the training data for this step.

    3. Generate predictions using the testing data.

    4. Generate statistics for the predictions.

    5. Submit the pipeline. This will take a couple minutes to run.

    6. When the pipeline completes, view the prediction statistics.

      Note: Since we're comparing models, we need a metric to compare them by. Root Mean Square Error (RMSE) measures how far off our model's predictions are, on average, from the true price. It is expressed in the same units as the label, which makes it easy to interpret. Lower values are better. This model produces the RMSE we will try to beat through feature selection.
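The baseline above can be sketched with scikit-learn's `GradientBoostingRegressor` as a hypothetical stand-in for the Designer's Boosted Decision Tree Regression module, on made-up data; the fixed `random_state` plays the role of the non-zero random seed.

```python
# Hypothetical sketch of the baseline: a boosted decision tree regressor
# with a fixed random seed, scored by RMSE on held-out data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
price = 20_000 + 5_000 * X[:, 0] + rng.normal(scale=500, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, price, train_size=0.7, random_state=1)  # fixed seed for repeatability

model = GradientBoostingRegressor(random_state=1)  # same seed => comparable runs
model.fit(X_train, y_train)

# RMSE is in the same units as the label (price); lower is better.
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"baseline RMSE: {rmse:.0f}")
```

This number is the bar that the feature-selected models in the next two challenges have to clear.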

  3. Challenge

    Select Features with Pearson's Correlation

    1. Create another Boosted Decision Tree Regression model. Use the same random seed as the first model to initialize it in the same way.
    2. Using Pearson's correlation, rank the features in the training data based on their correlation to the price data. Pass the top 5 to the model.

      Note: The algorithm for Pearson's correlation requires numbers, so only numeric columns will be considered.

    3. Select the same features from the testing data as you did in the training data.

      Note: You cannot reuse the same node for this, because it would run the selection algorithm again, this time on the testing data instead of the training data, which can produce different results.

    4. Train the model using the selected features from the training data.
    5. Generate predictions on the testing data filtered to the same set of selected features.
    6. Generate statistics for the predictions.
    7. Submit the pipeline. This will take a couple of minutes to run, but it should be faster this time because the unchanged upstream steps don't have to be reprocessed.
    8. When the pipeline completes, view the chosen features for both the training and test data sets to see if they line up.
    9. Find Pearson's r values and see how strongly the selected features correlate to the price.

      Note: The closer the value is to 1, the more strongly positively correlated the feature is with the label, meaning it helps predict the price. At 0, there is no linear correlation (non-numeric columns also score 0 here). The closer the value is to -1, the more strongly negatively correlated the feature is, meaning it moves in the opposite direction of the label.

    10. Check the RMSE (Root Mean Square Error). Did this model perform better or worse than our baseline?
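The ranking step in this challenge can be sketched with pandas on a small made-up frame (the column names are illustrative, loosely modeled on the automobile dataset): compute Pearson's r of every numeric column against the price, then keep the columns with the largest absolute r.

```python
# Hypothetical sketch of Pearson-based feature ranking, mirroring the
# Designer's filter-based feature selection with the Pearson option.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "engine-size": rng.uniform(60, 300, 100),
    "curb-weight": rng.uniform(1500, 4000, 100),
    "doors":       rng.integers(2, 5, 100).astype(float),  # uninformative
})
df["price"] = 80 * df["engine-size"] + 5 * df["curb-weight"] + rng.normal(0, 500, 100)

r = df.drop(columns="price").corrwith(df["price"])  # Pearson's r per column
top = r.abs().sort_values(ascending=False).head(2)  # k = 2 here; the lab uses 5
print(top)
```

Ranking by absolute value matters: a feature with r near -1 is just as predictive as one near 1, only in the opposite direction.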
  4. Challenge

    Select Features with Chi Squared

    1. Copy and paste the nodes we created in the previous challenge, then wire them up the same way as before.
    2. For this model, change the feature selection to use Chi Squared instead of Pearson's correlation. Also, since we learned that 5 features was not enough in the previous step, try increasing the number of features to 10.

      Note: The Chi Squared algorithm does not require numerical data, so all columns will be considered.

    3. Submit the pipeline. This will again be quick since we don't have to redo any of the previous steps.
    4. Once the pipeline completes, view the chosen features for both the training and testing data sets to see if they line up.
    5. Find the Chi Squared values for the chosen columns. Note that the top 5 fields chosen by Pearson's correlation are still considered predictive of price, but they are no longer ranked in the same order.
    6. Lastly, check the RMSE. This model performed better than our previous experiment, so this is a better feature-selected model of this data. How does it compare to the baseline?
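The chi-squared variant can be sketched the same way. One caveat for this stand-in: scikit-learn's `chi2` scorer requires non-negative features and a categorical target, so the continuous price is binned into quartiles first; the Designer module handles such details internally.

```python
# Hypothetical sketch of chi-squared feature scoring with scikit-learn.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(3)
X = rng.integers(0, 10, size=(200, 6)).astype(float)  # non-negative features
price = 1000 * X[:, 1] + 800 * X[:, 4] + rng.normal(0, 100, size=200)

# chi2 needs a categorical target, so bin the continuous price into quartiles.
price_bin = np.digitize(price, np.quantile(price, [0.25, 0.5, 0.75]))

selector = SelectKBest(score_func=chi2, k=2).fit(X, price_bin)
print(selector.get_support(indices=True))  # indices of the selected features
```

Because chi-squared works on category counts rather than linear correlation, it can also score non-numeric columns, which is why the Designer considers every column under this option.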
  5. Challenge

    Prepare the Data

    1. Use the data from the Automobile price data (Raw) dataset.
    2. Remove the normalized-losses column.
    3. Remove rows that are missing the price. We can't train the model using data missing the label.
    4. Replace all missing values with 0.
    5. Split the data into training and testing sets. Use 70% of the data for training. Be sure to set a random seed for repeatability.
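The five data-prep steps above can be sketched with pandas on a tiny stand-in frame (the real lab uses the full Automobile price data (Raw) dataset):

```python
# Hypothetical sketch of the data-prep steps, using a tiny made-up frame
# in place of the Automobile price data (Raw) dataset.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "normalized-losses": [115, np.nan, 104],
    "horsepower":        [111, np.nan, 154],
    "price":             [13495, np.nan, 16500],
})

df = df.drop(columns=["normalized-losses"])  # remove normalized-losses
df = df.dropna(subset=["price"])             # drop rows missing the label
df = df.fillna(0)                            # replace remaining missing values with 0

# 70/30 split with a fixed seed for repeatability
train = df.sample(frac=0.7, random_state=1)
test = df.drop(train.index)
print(len(df), len(train), len(test))
```

Dropping rows with a missing label (rather than filling it with 0) is the important choice here: a fabricated price would teach the model wrong answers, while a missing feature value can be imputed harmlessly.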

The Cloud Content team comprises subject matter experts hyper-focused on services offered by the leading cloud vendors (AWS, GCP, and Azure), as well as cloud-related technologies such as Linux and DevOps. The team is thrilled to share their knowledge to help you build modern tech solutions from the ground up, secure and optimize your environments, and much more!

