The world runs on data. Data can be anything—numbers, documents, images, facts, etc.—in digital or physical form. The word "data" is the plural of "datum," which means "something given" and usually refers to a single piece of information.
Raw data is only useful after we analyze and interpret it to get the information we desire. This kind of information can help organizations design strategies based on facts and trends.
With recent advances in Python packages and their ability to perform higher-end analytical tasks, it has become a go-to language for data analysts.
By the end of Part 1, you will have hands-on experience with exploratory data analysis (EDA) and data pre-processing.
Part 2 will cover data visualization and building a predictive model.
Data scientists and analysts spend most of their time on data pre-processing and visualization; model building takes comparatively little of it. In these guides, we will use the New York City Airbnb Open Data. We will predict the price of a rental and see how close our prediction is to the actual price. Download the data here.
What makes Python useful for data analysis? It contains packages and libraries that are open-source and widely used to crunch data. Let's learn more about them.
Fundamental Scientific Computing
NumPy: The name stands for Numerical Python. This library can generate random numbers and perform linear algebra operations and Fourier transforms.
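As a quick, hypothetical illustration of those capabilities (a minimal sketch, not part of this guide's pipeline):

import numpy as np

rng = np.random.default_rng(0)               # random number generation
matrix = rng.normal(size=(3, 3))
inverse = np.linalg.inv(matrix)              # linear algebra: matrix inverse
spectrum = np.fft.fft(rng.normal(size=16))   # Fourier transform
print(inverse.shape, spectrum.shape)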
Data Manipulation and Visualization
pandas: In data analysis and machine learning, pandas is used mainly through its DataFrame structure. The package can read data from different file formats, such as CSV, Excel, plain text, JSON, and SQL.
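Each of these readers returns a DataFrame; for example (the file names below are placeholders, not files that ship with this guide):

import pandas as pd

df_csv = pd.read_csv('listings.csv')             # CSV
df_txt = pd.read_csv('listings.txt', sep='\t')   # plain text, tab-separated
df_xlsx = pd.read_excel('listings.xlsx')         # Excel (needs openpyxl installed)
df_json = pd.read_json('listings.json')          # JSON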
Machine Learning
scikit-learn: Provides tools for building and evaluating predictive models; we import its preprocessing, LinearRegression, train_test_split, and metrics utilities below.
from __future__ import division
import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)

import os
for dirname, _, filenames in os.walk('nyc_airbnb'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

import warnings
warnings.filterwarnings('ignore')
import geopandas as gpd  # pip install geopandas

from sklearn import preprocessing
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics

sns.set_style('darkgrid')
In data analysis, exploratory data analysis (EDA) is used to get a better understanding of the data. Looking at the data, questions arise, such as: How many rows and columns are there? Is the data numeric? What are the names of the features (columns)? Are there any missing values, or text and numeric symbols that don't belong in the data?
The shape attribute and the info method are the answers we are looking for. The head method displays the first five rows of the dataframe, and the tail method displays the last five. The describe method gives a statistical summary of the dataset. To split the data into groups by specific criteria, we will use the groupby() method; a quick sketch of it follows the summary below.
First, let's read our data.
data = pd.read_csv(r'nyc_airbnb\AB_NYC_2019.csv')

print('Number of features: %s' % data.shape[1])
print('Number of examples: %s' % data.shape[0])
pd.concat([data.head(), data.tail()])  # DataFrame.append was removed in recent pandas; concat shows the first and last five rows together
data.info()
data.describe()
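As promised above, here is a minimal sketch of groupby(), which is not exercised elsewhere in this guide: it groups the listings by neighbourhood_group and averages the price column (both columns exist in this dataset).

data.groupby('neighbourhood_group')['price'].mean()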
Let's start by looking at which hosts and neighbourhood groups have the most listings.
# Evaluation_1-top_3_hosts

top_3_hosts = (pd.DataFrame(data.host_id.value_counts())).head(3)
top_3_hosts.columns = ['Listings']
top_3_hosts['host_id'] = top_3_hosts.index
top_3_hosts.reset_index(drop=True, inplace=True)
top_3_hosts
# Evaluation_2-top_3_neighbourhood_groups

top_3_neigh = pd.DataFrame(data['neighbourhood_group'].value_counts().head(3))
top_3_neigh.columns = ['Listings']
top_3_neigh['Neighbourhood Group'] = top_3_neigh.index
top_3_neigh.reset_index(drop=True, inplace=True)
top_3_neigh
A word cloud shows a collection of the most frequent words in a text; here we build one from the neighbourhood names of the listings. The larger the word, the more frequently it occurs. Start by installing the wordcloud library.
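If the library is not installed yet, it can typically be added with pip:

pip install wordcloud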
from wordcloud import WordCloud, ImageColorGenerator

wordcloud = WordCloud(
    background_color='white'
).generate(" ".join(data.neighbourhood))
plt.figure(figsize=(15, 10))
plt.imshow(wordcloud)
plt.axis('off')
plt.savefig('neighbourhood.png')
plt.show()
Next, let's drop columns we won't need and count the missing values in each remaining column.

data.drop(['id', 'host_id', 'host_name', 'last_review'], axis=1, inplace=True)

data.isnull().sum()
There are different ways of filling missing values. The most common practice is to fill with either the mean or the median of the variable. We will examine the variable's distribution to decide which fits better.
A skewed distribution has a long tail to either the right (positively skewed) or the left (negatively skewed). For example, say we want to determine the typical income of a state, where income is not distributed evenly. A handful of people earning significantly more than the average will produce outliers (values that "lie outside" the bulk of the data). Outliers are a serious threat to any data analysis. In such cases, the median income will be closer than the mean to the middle-class (majority) income.
The mean is handy when the data is distributed symmetrically, without extreme outliers.
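A tiny illustration with made-up numbers shows how a single outlier drags the mean while barely moving the median:

import numpy as np

incomes = np.array([30, 32, 35, 38, 40, 500])  # hypothetical incomes; 500 is the outlier
print(np.mean(incomes))    # 112.5 -- pulled far above the typical value
print(np.median(incomes))  # 36.5  -- stays near the majority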
data_check_distrib = data.drop(data[pd.isnull(data.reviews_per_month)].index)

{"Mean": np.nanmean(data.reviews_per_month),
 "Median": np.nanmedian(data.reviews_per_month),
 "Standard Dev": np.nanstd(data.reviews_per_month)}
The mean is greater than the median, which hints at a right skew. Let's plot the distribution to confirm.
# plot a histogram
plt.hist(data_check_distrib.reviews_per_month, bins=50)
plt.title("Distribution of reviews_per_month")
plt.xlim((min(data_check_distrib.reviews_per_month), max(data_check_distrib.reviews_per_month)))
It is right-skewed! Let's fill the missing values with the median.
def impute_median(series):
    return series.fillna(series.median())

# calling the helper directly is clearer and more robust than routing it through Series.transform
data.reviews_per_month = impute_median(data["reviews_per_month"])
For a given set of features, the correlation matrix shows the pairwise correlation coefficients between them. The diagonal elements are always 1 because a variable is perfectly correlated with itself. An excellent way to check correlations among features is to visualize the correlation matrix as a heatmap.
# a safety net for any rows still missing reviews_per_month
data['reviews_per_month'] = data['reviews_per_month'].fillna(0)

f, ax = plt.subplots(figsize=(10, 10))
# numeric_only skips text columns; it is required on recent pandas versions
sns.heatmap(data.corr(numeric_only=True), annot=True, linewidths=5, fmt='.1f', ax=ax, cmap='Reds')
plt.show()
Notice the pastel shades. The darker the shade, the stronger the correlation. Accordingly, number_of_reviews is highly correlated with reviews_per_month, which is quite logical. We also find that price, number_of_reviews, and longitude correlate with availability_365.
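To read the exact coefficients rather than eyeballing shades, you can index the correlation matrix directly. A minimal sketch, reusing the matrix computed above:

corr = data.corr(numeric_only=True)
print(corr['availability_365'].sort_values(ascending=False))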
In this guide, we've looked at exploratory data analysis and data pre-processing. In Part 2, we will move on to visualizing and building a machine learning model to predict the price of Airbnb rentals.
Feel free to contact me with any questions at Codealphabet.