
Finding Relationships in Data with Python

High-performing machine learning algorithms depend on identifying relationships between variables. Learn how to find relationships in data with Python.

Nov 12, 2019 • 10 Minute Read

Introduction

Building high-performing machine learning algorithms depends on identifying relationships between variables. This helps with feature engineering as well as with choosing a machine learning algorithm. In this guide, you will learn techniques for finding relationships in data with Python.

Data

In this guide, we will use a fictitious dataset of loan applicants containing 200 observations and ten variables, as described below:

  1. Marital_status: Whether the applicant is married ("Yes") or not ("No")

  2. Is_graduate: Whether the applicant is a graduate ("Yes") or not ("No")

  3. Income: Annual income of the applicant (in USD)

  4. Loan_amount: Loan amount (in USD) for which the application was submitted

  5. Credit_score: Whether the applicant's credit score was good ("Good") or not ("Bad")

  6. Approval_status: Whether the loan application was approved ("Yes") or not ("No")

  7. Investment: Investments in stocks and mutual funds (in USD), as declared by the applicant

  8. Gender: Whether the applicant is "Female" or "Male"

  9. Age: The applicant’s age in years

  10. Work_exp: The applicant's work experience in years

Let’s start by loading the required libraries and the data.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import statsmodels.api as sm

# Load data
dat = pd.read_csv("data_test.csv")
print(dat.shape)
dat.head(5)

Output:

(200, 10)


|   | Marital_status | Is_graduate | Income | Loan_amount | Credit_score | approval_status | Investment | gender | age | work_exp |
|---|----------------|-------------|--------|-------------|--------------|-----------------|------------|--------|-----|----------|
| 0 | Yes            | No          | 72000  | 70500       | Bad          | Yes             | 117340     | Female | 34  | 8.10     |
| 1 | Yes            | No          | 64000  | 70000       | Bad          | Yes             | 85340      | Female | 34  | 7.20     |
| 2 | Yes            | No          | 80000  | 275000      | Bad          | Yes             | 147100     | Female | 33  | 9.00     |
| 3 | Yes            | No          | 76000  | 100500      | Bad          | Yes             | 65440      | Female | 34  | 8.55     |
| 4 | Yes            | No          | 72000  | 51500       | Bad          | Yes             | 48000      | Female | 33  | 8.10     |

Relationship Between Numerical Variables

Many machine learning algorithms assume that the continuous predictor variables are not strongly correlated with one another; when they are, the condition is called multicollinearity. Establishing the relationships between the numerical variables is therefore a common step in detecting and treating multicollinearity.
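
One widely used way to quantify multicollinearity is the variance inflation factor (VIF) from statsmodels; as a rule of thumb, VIF values above roughly 5 to 10 are flagged. Below is a minimal sketch, assuming the dat DataFrame and the imports loaded above.

from statsmodels.stats.outliers_influence import variance_inflation_factor

# Build the design matrix from the numeric columns, plus an intercept.
X = sm.add_constant(dat[['Income', 'Loan_amount', 'Investment', 'age', 'work_exp']])
# Compute the VIF for each column of the design matrix.
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
print(vif.drop('const'))  # the intercept's VIF is not meaningful on its own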

Correlation Matrix

Creating a correlation matrix is a technique to identify multicollinearity among numerical variables. In Python, this matrix is produced with the DataFrame's corr() method, as in the line of code below.

# Note: on newer pandas releases, pass numeric_only=True to skip the text columns
dat.corr()

Output:

|             | Income    | Loan_amount | Investment | age       | work_exp  |
|-------------|-----------|-------------|------------|-----------|-----------|
| Income      | 1.000000  | 0.020236    | 0.061687   | -0.200591 | 0.878455  |
| Loan_amount | 0.020236  | 1.000000    | 0.780407   | -0.033409 | 0.031837  |
| Investment  | 0.061687  | 0.780407    | 1.000000   | -0.022761 | 0.076532  |
| age         | -0.200591 | -0.033409   | -0.022761  | 1.000000  | -0.133685 |
| work_exp    | 0.878455  | 0.031837    | 0.076532   | -0.133685 | 1.000000  |

The output above shows a strong linear correlation between Income and work_exp (0.88) and between Investment and Loan_amount (0.78).
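
The matrix is easier to scan as a heatmap. Here is a minimal sketch using seaborn's heatmap function, reusing the imports from the setup above:

# Restrict to the numeric columns and render the coefficients as a heatmap.
corr = dat[['Income', 'Loan_amount', 'Investment', 'age', 'work_exp']].corr()
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)  # annot prints each coefficient
plt.show()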

Correlation Plot

The correlations can also be visualized as a pairwise scatter-plot matrix, created with the pairplot function in the seaborn package.

The first line of code below creates a new DataFrame, df, that contains only the numeric variables. The second line creates the plot, where the argument kind="scatter" draws the scatter plots without regression lines. The third line displays the chart.

df = dat[['Income','Loan_amount','Investment','age','work_exp']]

sns.pairplot(df, kind="scatter")
plt.show()

Output: a grid of pairwise scatter plots for the five numeric variables.
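
If you also want a fitted regression line in each panel, pairplot accepts kind="reg". A minimal sketch, reusing the df DataFrame created above:

sns.pairplot(df, kind="reg")  # same grid of scatter plots, plus a regression line per panel
plt.show()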

Correlation Test

A correlation test is another method to determine the presence and extent of a linear relationship between two quantitative variables. In our case, we would like to statistically test whether there is a correlation between the applicant’s investment and their work experience. The first step is to visualize the relationship with a scatter plot, which is done using the line of code below.

plt.scatter(dat['work_exp'], dat['Investment'])
plt.show()

Output: a scatter plot of Investment against work_exp with no discernible linear pattern.

The above plot suggests the absence of a linear relationship between the two variables. We can quantify this inference by calculating the correlation coefficient using the line of code below.

np.corrcoef(dat['work_exp'], dat['Investment'])

Output:

array([[1.        , 0.07653245],
       [0.07653245, 1.        ]])

The off-diagonal value of about 0.077 shows a positive but weak linear relationship between the two variables. Let's test whether this correlation is statistically significant with the linregress() function in the scipy.stats module, which fits a simple linear regression and reports a p-value for the slope.

from scipy.stats import linregress
linregress(dat['work_exp'], dat['Investment'])

Output:

LinregressResult(slope=15309.333089382928, intercept=57191.00212603336, rvalue=0.0765324479448039, pvalue=0.28142275240186065, stderr=14174.32722882554)

Since the p-value of 0.2814 is greater than 0.05, we fail to reject the null hypothesis of no linear relationship: the correlation between the applicants' investment and their work experience is not statistically significant.
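
The same coefficient and p-value can also be obtained directly with the pearsonr() function from scipy.stats; a minimal sketch:

from scipy.stats import pearsonr

# pearsonr returns the Pearson correlation coefficient and the two-sided
# p-value for the null hypothesis of zero correlation.
r, p = pearsonr(dat['work_exp'], dat['Investment'])
print(r, p)  # matches the rvalue and pvalue reported by linregress above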

Let's consider another example: the correlation between Income and work_exp, tested using the line of code below.

linregress(dat['work_exp'], dat['Income'])

Output:

LinregressResult(slope=6998.2868438531395, intercept=11322.214342089712, rvalue=0.8784545623577412, pvalue=2.0141691110555243e-65, stderr=270.52631667365495)

In this case, the p-value is far smaller than 0.05, so we reject the null hypothesis of no linear relationship: the applicants' income and their work experience are strongly and significantly correlated.

Relationship Between Categorical Variables

In the previous sections, we covered techniques for finding relationships between numerical variables. It is equally important to understand and estimate the relationship between categorical variables.

Frequency Table

A frequency table is a simple but effective way of summarizing the joint distribution of two categorical variables. The crosstab() function in pandas creates the two-way table. In the line of code below, we create a two-way table between the variables Marital_status and approval_status.

pd.crosstab(dat.Marital_status, dat.approval_status)

Output:

| approval_status | No | Yes |
|-----------------|----|-----|
| Marital_status  |    |     |
| Divorced        | 31 | 29  |
| No              | 66 | 10  |
| Yes             | 52 | 12  |

The output above shows that divorced applicants have a higher approval rate (29 of 60, or roughly 48 percent) than married applicants (12 of 64, or roughly 19 percent). To test whether this difference is statistically significant, we use the chi-square test of independence.
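
These approval rates can be read off directly by normalizing the two-way table by row; a minimal sketch using crosstab's normalize argument:

# normalize='index' converts each row to proportions, i.e., the approval
# rate within each marital-status group.
pd.crosstab(dat.Marital_status, dat.approval_status, normalize='index')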

Chi-square Test of Independence

The chi-square test of independence is used to determine whether there is an association between two categorical variables. In our case, we would like to test whether the marital status of the applicants has any association with their approval status.

This can be easily done in Python using the chi2_contingency() function from the scipy.stats module. The lines of code below perform the test.

from scipy.stats import chi2_contingency
chi2_contingency(pd.crosstab(dat.Marital_status, dat.approval_status))

Output:

(24.09504482353403, 5.859053936061414e-06, 2, array([[44.7 , 15.3 ],
       [56.62, 19.38],
       [47.68, 16.32]]))

The second value in the output above, 5.859053936061414e-06, is the p-value of the test. Since it is well below 0.05, we reject the null hypothesis that the applicants' marital status is independent of their approval status. To understand the parameters and the return values of the chi2_contingency function, you can run the help(chi2_contingency) command, which prints brief documentation for the function.
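
For readability, the four return values can be unpacked into named variables; a minimal sketch:

# chi2_contingency returns the test statistic, the p-value, the degrees of
# freedom, and the expected frequencies under the independence hypothesis.
chi2, p, dof, expected = chi2_contingency(pd.crosstab(dat.Marital_status, dat.approval_status))
print(chi2, p, dof)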

Conclusion

In this guide, you have learned techniques for finding relationships in data for both numerical and categorical variables. You also learned how to interpret the results of the tests and statistically validate the relationships between variables.

To learn more about data science using Python, please refer to the following guides:

  1. Scikit Machine Learning

  2. Linear, Lasso, and Ridge Regression with scikit-learn

  3. Non-Linear Regression Trees with scikit-learn

  4. Machine Learning with Neural Networks Using scikit-learn

  5. Validating Machine Learning Models with scikit-learn

  6. Ensemble Modeling with scikit-learn

  7. Preparing Data for Modeling with scikit-learn

  8. Data Science Beginners

Deepika Singh