Exploratory Data Analysis and Pre-processing in Python

Gaurav Singhal

  • Mar 5, 2020
  • 10 Min read
  • 6,999 Views
Introduction

The world is running on data. Data can be anything—numbers, documents, images, facts, etc. It can be in digital or physical form. The word "data" is the plural of "datum," which means "something given" and usually refers to a single piece of information.

Raw data is only useful after we analyze and interpret it to get the information we desire. This kind of information can help organizations design strategies based on facts and trends.

With recent advances in Python packages and their ability to perform higher-end analytical tasks, it has become a go-to language for data analysts.

By the end of Part 1, you will have hands-on experience with:

  • Important data analysis libraries
  • Data pre-processing
  • Exploratory data analysis

Part 2 will cover data visualization and building a predictive model.

Data scientists and analysts spend most of their time on data pre-processing and visualization. Model building is much easier. In these guides, we will use New York City Airbnb Open Data. We will predict the price of a rental and see how close our prediction is to the actual price. Download the data here.

Important Data Analysis Libraries

What makes Python useful for data analysis? It contains packages and libraries that are open-source and widely used to crunch data. Let's learn more about them.

Fundamental Scientific Computing

  1. Numpy: The name stands for Numeric Python. This library provides routines for random number generation, linear algebra, and Fourier transforms.

  2. SciPy: The name stands for Scientific Python. This library contains high-level science and engineering modules for linear algebra, optimization, and fast Fourier transforms. SciPy is built on NumPy (see the sketch after this list).
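To make this concrete, here is a minimal sketch of both libraries in action (the matrix, vector, and array sizes are made-up toy values):

import numpy as np
from scipy import linalg

# toy data: solve the linear system Ax = b with SciPy
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = linalg.solve(A, b)

noise = np.random.rand(5)     # NumPy random number generation
spectrum = np.fft.fft(noise)  # NumPy fast Fourier transform
print(x, spectrum.shape)
python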

Data Manipulation and Visualization

  1. pandas: In data analysis and machine learning, pandas is used mainly through its dataframes. The package can read data from different file formats, such as CSV, Excel, plain text, JSON, SQL, etc.

  2. Matplotlib: This library is used for plotting and visualizing data. You can plot histograms, graphs, line plots, heatmaps, and a lot more (see the sketch after this list). It can be embedded in GUI toolkits.
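A quick sketch of the two together; the file name example.csv and the price column are hypothetical stand-ins:

import pandas as pd
import matplotlib.pyplot as plt

# read a CSV into a dataframe and plot a histogram of one column
df = pd.read_csv('example.csv')          # hypothetical file
df['price'].plot(kind='hist', bins=30)   # assumes a numeric 'price' column
plt.show()
python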

Machine Learning

  1. Scikit-learn: This is a free machine learning library built on NumPy, SciPy, and Matplotlib. It contains efficient tools for statistical model building and can run various classification, regression, and clustering algorithms. It integrates well with pandas when working on dataframes.
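The typical scikit-learn pattern is fit, then predict. A minimal sketch on made-up numbers:

import numpy as np
from sklearn.linear_model import LinearRegression

# toy regression: y is roughly 2 * x
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 8.0])

model = LinearRegression().fit(X, y)
print(model.predict([[5.0]]))  # close to 10
python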

Importing Libraries and Loading the Data

from __future__ import division
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)

import os
for dirname, _, filenames in os.walk('nyc_airbnb'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

import warnings
warnings.filterwarnings('ignore')
import geopandas as gpd # pip install geopandas

from sklearn import preprocessing
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics

sns.set_style('darkgrid')
python

Exploratory Data Analysis (EDA)

In data analysis, EDA is used to get a better understanding of the data. Looking at the data, questions arise: How many rows and columns are there? Is the data numeric? What are the names of the features (columns)? Are there missing values, or text and numeric symbols that don't belong in the data?

The shape attribute and the info() method answer these questions. The head() function displays the first five rows of the dataframe, and tail() displays the last five. The describe() method gives a statistical summary of the dataset. To split the data into groups by specific criteria, we use the groupby() function (an example follows below).

First, let's read our data.

data = pd.read_csv(r'nyc_airbnb\AB_NYC_2019.csv')

print('Number of features: %s' % data.shape[1])
print('Number of examples: %s' % data.shape[0])
python


# first and last five rows (DataFrame.append was removed in pandas 2.0)
pd.concat([data.head(), data.tail()])
python


data.info()
python


data.describe()
python

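As a quick illustration of the groupby() function mentioned above (a minimal sketch; it uses the neighbourhood_group and price columns of this dataset):

# median listing price per neighbourhood group (borough)
data.groupby('neighbourhood_group')['price'].median()
python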

Evaluation of Data

Let's start by looking at which hosts and neighbourhood groups have the most listings.

# Evaluation_1-top_3_hosts

top_3_hosts = (pd.DataFrame(data.host_id.value_counts())).head(3)
top_3_hosts.columns = ['Listings']
top_3_hosts['host_id'] = top_3_hosts.index
top_3_hosts.reset_index(drop=True, inplace=True)
top_3_hosts
python


# Evaluation_2-top_3_neighbourhood_groups

top_3_neigh = pd.DataFrame(data['neighbourhood_group'].value_counts().head(3))
top_3_neigh.columns = ['Listings']
top_3_neigh['Neighbourhood Group'] = top_3_neigh.index
top_3_neigh.reset_index(drop=True, inplace=True)
top_3_neigh
python


A word cloud shows a collection of the most frequent words, in this case the neighbourhood names across all listings. The larger the word, the more frequently it occurs. Start by installing the wordcloud library.

from wordcloud import WordCloud, ImageColorGenerator # pip install wordcloud

wordcloud = WordCloud(background_color='white').generate(" ".join(data.neighbourhood))
plt.figure(figsize=(15, 10))
plt.imshow(wordcloud)
plt.axis('off')
plt.savefig('neighbourhood.png')
plt.show()
python


Data Cleaning

The code below performs data cleaning on our raw data. We have to prepare the data before visualizing and predicting; this is a significant step in the data analysis workflow. Here we will use the pandas library, specifically its drop(), isnull(), fillna(), and transform() methods.

data.drop(['id','host_id','host_name','last_review'], axis=1, inplace=True)
python

data.isnull().sum()
python


There are different ways of filling in missing values. The most common practice is to fill with either the mean or the median of the variable. We will compare the two and inspect the shape of the distribution to decide which fits better.

A skewed data distribution has a long tail to either the right (positively skewed) or the left (negatively skewed). For example, say we want to determine the income of a state, which is not distributed uniformly. A handful of people earning significantly more than the average will produce outliers ("lies outside") in the dataset. Outliers are a severe threat to any data analysis. In such cases, the median income will be closer than the mean to the middle-class (majority) income.

The mean is a handy summary when data is symmetrically distributed; the median is more robust to outliers.
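A tiny made-up example shows why:

import numpy as np

incomes = np.array([30, 32, 35, 40, 500])  # toy data: one extreme earner
print(np.mean(incomes))    # 127.4, pulled up by the outlier
print(np.median(incomes))  # 35.0, close to the typical income
python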

data_check_distrib = data.drop(data[pd.isnull(data.reviews_per_month)].index)

{"Mean": np.nanmean(data.reviews_per_month),
 "Median": np.nanmedian(data.reviews_per_month),
 "Standard Dev": np.nanstd(data.reviews_per_month)}
python

The mean is greater than the median, a first hint of right skew.
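We can also quantify the asymmetry directly (an optional check; a minimal sketch using scipy.stats.skew, which is not used elsewhere in this guide). A value near 0 means roughly symmetric; a large positive value means right-skewed:

from scipy.stats import skew

# a positive result confirms right (positive) skew, favoring the median as fill value
print(skew(data_check_distrib.reviews_per_month))
python

Now let's plot the distribution curve.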

# plot a histogram
plt.hist(data_check_distrib.reviews_per_month, bins=50)
plt.title("Distribution of reviews_per_month")
plt.xlim((min(data_check_distrib.reviews_per_month), max(data_check_distrib.reviews_per_month)))
python


It is right-skewed! Let's fill the missing values with the median.

def impute_median(series):
    return series.fillna(series.median())
python

data.reviews_per_month = data["reviews_per_month"].transform(impute_median)
python

Correlation Matrix Plot

For a given set of features, the correlation matrix shows the pairwise correlation coefficients: each variable is compared with every other variable. The diagonal elements are always 1 because the correlation of a variable with itself is always 100%. An excellent way to check correlations among features is to visualize the correlation matrix as a heatmap.
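For intuition, a single entry of that matrix can be computed directly with NumPy (a minimal sketch; the column names come from the dataset above):

# 2x2 correlation matrix for two columns; the diagonal entries are 1.0
np.corrcoef(data['number_of_reviews'], data['reviews_per_month'])
python

Now for the full heatmap: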

data['reviews_per_month'].fillna(value=0, inplace=True)

f, ax = plt.subplots(figsize=(10, 10))
# note: pandas >= 2.0 requires numeric_only=True in data.corr()
sns.heatmap(data.corr(), annot=True, linewidths=5, fmt='.1f', ax=ax, cmap='Reds')
plt.show()
python


Notice the shades: the darker the shade, the stronger the correlation. Accordingly, number_of_reviews is highly correlated with reviews_per_month, which is quite logical. We also see weaker correlations between availability_365 and price, number_of_reviews, and longitude.

Conclusion

In this guide, we've looked at exploratory data analysis and data pre-processing. In Part 2, we will move on to visualizing and building a machine learning model to predict the price of Airbnb rentals.

Feel free to contact me with any questions at Codealphabet.