Coping with Missing, Invalid and Duplicate Data in R

Deepika Singh

  • Oct 24, 2019
  • 15 Min read
  • 9,088 Views

Introduction

A vital component of data science is cleaning the data and getting it ready for predictive modeling. The most common data cleaning problems are missing data, invalid records, and duplicate values.

In this guide, you will learn about techniques for dealing with missing, invalid and duplicate data in the statistical programming language R.

Data

In this guide, we will be using a fictitious dataset of loan applications containing 600 observations and 12 variables:

1. UID - Unique identifier for an applicant

2. Marital_status - Whether the applicant is married ("Yes") or not ("No")

3. Dependents - Number of dependents of the applicant

4. Is_graduate - Whether the applicant is a graduate ("Yes") or not ("No")

5. Income - Annual income of the applicant (in USD)

6. Loan_amount - Loan amount (in USD) for which the application was submitted

7. Term_months - Tenure of the loan in months

8. Credit_score - Whether the applicant's credit score is good ("Satisfactory") or not ("Not Satisfactory")

9. approval_status - Whether the loan application was approved ("1") or not ("0")

10. Age - The applicant's age in years

11. Sex - Whether the applicant is a male ("M") or a female ("F")

12. Purpose - Purpose of applying for the loan

Let’s start by loading the required libraries and the data.

library(readr)
library(dplyr)   # needed for glimpse()

dat <- read_csv("data_cleaning.csv")
glimpse(dat)

Output:

Observations: 600
Variables: 12
$ UID             <chr> "UIDA7", "UIDA354", "UIDA402", "UIDA467", "UIDA78", "U...
$ Marital_status  <chr> "No", "Yes", "No", "No", "Yes", "No", "Yes", "Yes", "Y...
$ Dependents      <int> NA, 0, 0, 0, NA, NA, NA, NA, 0, 2, 0, 2, 0, 0, NA, NA,...
$ Is_graduate     <chr> "Yes", "Yes", "Yes", "Yes", "Yes", "No", "Yes", "No", ...
$ Income          <int> 333300, 532400, 454700, 368500, 611100, 266700, 243700...
$ Loan_amount     <int> 53500, 115000, 49000, 61000, 62000, 70000, 55500, 6300...
$ Term_months     <int> 504, 384, 384, 384, 384, 384, 384, 204, 384, 384, 384,...
$ Credit_score    <chr> "Satisfactory", "Satisfactory", "Satisfactory", "Satis...
$ approval_status <int> 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ...
$ Age             <int> -3, -3, -10, -12, -12, -12, -12, 23, 23, 23, 23, 23, 2...
$ Sex             <chr> "M", "M", "F", "F", "F", "M", "M", "M", "M", "M", "M",...
$ Purpose         <chr> "Education", "Travel", "Personal", "Business", "Educat...

The output shows that the dataset has six numerical variables (labeled 'int'), while the remaining six are categorical variables (labeled 'chr'). We will convert the categorical variables, except 'UID', into 'factor' variables using the lines of code below.

# Categorical columns to convert (all except UID)
factor_cols <- c(2, 4, 8, 9, 11, 12)
dat[, factor_cols] <- lapply(dat[, factor_cols], factor)
glimpse(dat)

Output:

Observations: 600
Variables: 12
$ UID             <chr> "UIDA7", "UIDA354", "UIDA402", "UIDA467", "UIDA78", "U...
$ Marital_status  <fct> No, Yes, No, No, Yes, No, Yes, Yes, Yes, Yes, Yes, Yes...
$ Dependents      <int> NA, 0, 0, 0, NA, NA, NA, NA, 0, 2, 0, 2, 0, 0, NA, NA,...
$ Is_graduate     <fct> Yes, Yes, Yes, Yes, Yes, No, Yes, No, Yes, Yes, Yes, Y...
$ Income          <int> 333300, 532400, 454700, 368500, 611100, 266700, 243700...
$ Loan_amount     <int> 53500, 115000, 49000, 61000, 62000, 70000, 55500, 6300...
$ Term_months     <int> 504, 384, 384, 384, 384, 384, 384, 204, 384, 384, 384,...
$ Credit_score    <fct> Satisfactory, Satisfactory, Satisfactory, Satisfactory...
$ approval_status <fct> 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ...
$ Age             <int> -3, -3, -10, -12, -12, -12, -12, 23, 23, 23, 23, 23, 2...
$ Sex             <fct> M, M, F, F, F, M, M, M, M, M, M, M, M, F, F, M, M, M, ...
$ Purpose         <fct> Education, Travel, Personal, Business, Education, Educ...
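Equivalently, the conversion can name the columns rather than index them by position, which is more self-documenting. A minimal sketch using mutate() and across(), assuming dplyr version 1.0 or later:

library(dplyr)   # across() requires dplyr >= 1.0.0

# Convert the categorical columns (all except UID) to factors by name
dat <- dat %>%
  mutate(across(c(Marital_status, Is_graduate, Credit_score,
                  approval_status, Sex, Purpose), factor))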

We are now ready to carry out the data cleaning steps in the following sections.

Duplicate Values

The first step is to check for duplicate records, one of the most common errors in real-world data. Duplicate records increase computation time and decrease model accuracy, and hence must be removed. In our dataset, 'UID' is the unique identifier variable and will be used to drop the duplicate records. The first line of code below uses the duplicated() function to flag duplicates, while the second line prints their count.

# TRUE for every row whose UID has already appeared earlier in the data
dup_records <- duplicated(dat$UID)
sum(dup_records)

Output:

[1] 3

The output shows that there are three duplicate records. We will drop them using the first line of code below. The second line prints the dimensions of the resulting dataset: 597 observations and 12 variables.

dat <- dat[!duplicated(dat$UID), ]

dim(dat)

Output:

[1] 597  12
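If you prefer dplyr, distinct() expresses the same de-duplication in one step; .keep_all = TRUE retains all twelve columns rather than just 'UID':

library(dplyr)

# Keep the first occurrence of each UID
dat <- distinct(dat, UID, .keep_all = TRUE)
dim(dat)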

Invalid Values

When we looked at the data using the glimpse() function in the previous section, we saw that the 'Age' variable has invalid entries. Let’s look at the summary of this variable.

summary(dat$Age)

Output:

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's 
 -12.00   36.50   51.00   49.03   61.00   76.00       2

The output shows that the minimum value of the 'Age' variable is -12, which is impossible. This brings us to the next common problem in real-world datasets: the presence of inaccurate records. It is safe to assume that the minimum age for a loan application is 18 years, so we will remove the records of applicants below 18 years of age.

The first two lines of code below give us the number of records in the dataset for which the age is below 18 years. There are seven such records, and they are removed with the third line of code. The fourth line prints the dimensions of the new data: 590 observations and 12 variables.

Finally, we relook at the summary of the age variable. This shows that the range of age is now 23 to 76 years, indicating that the correction has been made.

# Records with age below 18
age_18 <- dat[which(dat$Age < 18), ]
dim(age_18)

# Drop those records (rows with missing ages are kept)
dat <- dat[-which(dat$Age < 18), ]
dim(dat)

summary(dat$Age)

Output:

[1]   7  12

[1] 590  12

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's 
  23.00   37.00   51.00   49.72   61.00   76.00       2 
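The same filter can also be written with subset(). The explicit is.na() guard in the sketch below keeps the two rows with missing ages, matching the behavior of the which()-based code above:

# Keep applicants aged 18 or older, plus rows where Age is missing
dat <- subset(dat, is.na(Age) | Age >= 18)
dim(dat)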

It was easy to detect the invalid entries in the 'Age' variable. In other cases, invalid values take the form of outliers. Consider the 'Income' variable. Its summary shows that the minimum and maximum income levels are USD 136700 and USD 3321001, respectively. This is a highly skewed range, indicating some extreme values. To better understand the distribution, we use the quantile() function, which gives us the percentile values of the variable from the 0th to the 100th percentile in steps of one percent.

summary(dat$Income)

quantile(dat$Income, seq(0, 1, 0.01))

Output:

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 136700  386700  512800  687874  775300 3321001


     0%      1%      2%      3%      4%      5%      6%      7%      8%      9% 
 136700  136700  210676  240000  244288  254540  263684  274664  286108  295100 
    10%     11%     12%     13%     14%     15%     16%     17%     18%     19% 
 301560  311100  317700  320000  329100  332220  333300  335288  344400  346700 
    20%     21%     22%     23%     24%     25%     26%     27%     28%     29% 
 352860  358800  363084  371396  383332  386700  391172  397980  401508  405556 
    30%     31%     32%     33%     34%     35%     36%     37%     38%     39% 
 410220  412724  421052  422244  424804  431960  437016  444400  448620  454972 
    40%     41%     42%     43%     44%     45%     46%     47%     48%     49% 
 458920  465068  468448  476468  479696  486180  491380  495548  500000  506956 
    50%     51%     52%     53%     54%     55%     56%     57%     58%     59% 
 512800  515552  523184  532012  536480  551820  555504  563080  572852  577700 
    60%     61%     62%     63%     64%     65%     66%     67%     68%     69% 
 585380  607584  611276  620300  625904  633300  648308  656708  666700  683156 
    70%     71%     72%     73%     74%     75%     76%     77%     78%     79% 
 700000  721040  733300  753968  761484  775300  788132  800000  807740  821696 
    80%     81%     82%     83%     84%     85%     86%     87%     88%     89% 
 834660  853300  880008  914712  963752 1010680 1058180 1111100 1149276 1219460 
    90%     91%     92%     93%     94%     95%     96%     97%     98%     99% 
1262060 1333300 1392412 1502676 1664032 1944400 2064768 2223884 2608396 3197268 
   100% 
3321001

We can remove the outliers using the method described in the previous section. Alternatively, we can address them by flooring and capping the extreme values. The first line of code below floors the lower outliers at the first percentile value, i.e., USD 136700. Similarly, the second line caps the higher outliers at USD 3321001; note that this is the maximum observed value (the 99th percentile is USD 3197268), so a stricter cap could use the 99th percentile instead.

The third line of code prints the new summary of the variable, indicating that the correction has been done.

# Floor the lower outliers
dat$Income[which(dat$Income < 136700)] <- 136700

# Cap the higher outliers
dat$Income[which(dat$Income > 3321001)] <- 3321001

summary(dat$Income)

Output:

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 136700  386175  508650  685301  772650 3321001 
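Rather than hard-coding the cutoffs, you can compute them from the data. The sketch below floors and caps at the 1st and 99th percentiles; cutoffs is our own variable name, and with these thresholds the capped maximum would be USD 3197268 rather than USD 3321001:

# Derive the flooring and capping thresholds from the data
cutoffs <- quantile(dat$Income, probs = c(0.01, 0.99), na.rm = TRUE)
dat$Income[dat$Income < cutoffs[1]] <- cutoffs[1]
dat$Income[dat$Income > cutoffs[2]] <- cutoffs[2]
summary(dat$Income)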

Missing Values

Missing value treatment is the most common data cleaning step performed in a data science project. The line of code below prints the number of missing values in each of the variables.

sapply(dat, function(x) sum(is.na(x)))

Output:

            UID  Marital_status      Dependents     Is_graduate          Income 
              0               0               3               6               0 
    Loan_amount     Term_months    Credit_score approval_status             Age 
              7               6               0               0               2 
            Sex         Purpose 
              0               0 
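Since is.na() applied to a data frame returns a logical matrix, the same counts can be obtained more tersely with colSums():

colSums(is.na(dat))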

The output above shows that five variables contain missing values. All of them are numerical except 'Is_graduate', which is categorical.

Missing Value Imputation for Numerical Variables

The most widely used technique for imputing missing values in a numerical variable is to replace them with the mean or the median. In the lines of code below, we replace the missing values in 'Loan_amount' with the median, while the missing values in 'Term_months' are replaced by the mean. The output confirms that the missing values have been imputed.

# Impute Loan_amount with its median
dat$Loan_amount[is.na(dat$Loan_amount)] <- median(dat$Loan_amount, na.rm = TRUE)
table(is.na(dat$Loan_amount))

# Impute Term_months with its mean
dat$Term_months[is.na(dat$Term_months)] <- mean(dat$Term_months, na.rm = TRUE)
table(is.na(dat$Term_months))

Output:

FALSE 
  590 

FALSE 
  590
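If several variables need this treatment, a small helper function avoids repeating the pattern. The impute_central() function below is our own illustrative helper, not part of any package:

# Replace NAs in x with a central value computed from the non-missing entries
impute_central <- function(x, fun = median) {
  x[is.na(x)] <- fun(x, na.rm = TRUE)
  x
}

dat$Loan_amount <- impute_central(dat$Loan_amount, median)
dat$Term_months <- impute_central(dat$Term_months, mean)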

Missing Value Imputation for Categorical Variables

For categorical variables, it is important to understand the frequency distribution, which can be printed with the line of code below.

table(dat$Is_graduate)

Output:

 No Yes 
127 457

The output shows that most applicants were graduates, identified by the label 'Yes'. The lines of code below replace the missing values with the most frequent label, 'Yes'.

dat$Is_graduate[is.na(dat$Is_graduate)] <- "Yes"

table(dat$Is_graduate)

Output:

 No Yes 
127 463 
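Hard-coding 'Yes' works here, but the most frequent level can also be computed programmatically, which is safer if the data changes. A minimal sketch (mode_level is our own name):

# Most frequent level of Is_graduate
mode_level <- names(which.max(table(dat$Is_graduate)))
dat$Is_graduate[is.na(dat$Is_graduate)] <- mode_level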

Removing Rows with Missing Values

We have imputed missing values using measures of central tendency: the mean, median, and mode. Another technique is to delete every row in which any variable has a missing value. This is performed with the na.omit() function, which removes all rows containing missing values.

dat <- na.omit(dat)

dim(dat)

Output:

[1] 585  12

The resulting data has 585 observations of 12 variables. We can do a final check to see if all the missing values have been removed using the command below.

sapply(dat, function(x) sum(is.na(x)))

Output:

            UID  Marital_status      Dependents     Is_graduate          Income 
              0               0               0               0               0 
    Loan_amount     Term_months    Credit_score approval_status             Age 
              0               0               0               0               0 
            Sex         Purpose 
              0               0 
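If you only want to remove rows that are missing particular variables, tidyr's drop_na() accepts column names. The sketch below would have removed only the rows missing 'Dependents' or 'Age', the two variables left unimputed before the na.omit() step:

library(tidyr)

# Alternative to na.omit(): drop rows with NA in the named columns only
dat <- drop_na(dat, Dependents, Age)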

Finally, we can look at the data after all these transformations using the glimpse() command below.

glimpse(dat)

Output:

Observations: 585
Variables: 12
$ UID             <chr> "UIDA209", "UIDA238", "UIDA256", "UIDA274", "UIDA430",...
$ Marital_status  <fct> Yes, Yes, Yes, Yes, No, Yes, No, Yes, Yes, No, No, Yes...
$ Dependents      <int> 0, 2, 0, 2, 0, 0, 0, 1, 3, 0, 0, 1, 0, 0, 1, 0, 0, 0, ...
$ Is_graduate     <fct> Yes, Yes, Yes, Yes, Yes, No, Yes, Yes, Yes, Yes, Yes, ...
$ Income          <dbl> 1984000, 1066700, 834100, 775900, 421100, 402300, 1111...
$ Loan_amount     <int> 1070000, 111000, 89000, 1330000, 49500, 56500, 104000,...
$ Term_months     <dbl> 384, 384, 384, 384, 384, 384, 384, 384, 384, 384, 384,...
$ Credit_score    <fct> Satisfactory, Satisfactory, Not _satisfactory, Satisfa...
$ approval_status <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, ...
$ Age             <int> 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 25, 25, 25, 25...
$ Sex             <fct> M, M, M, M, M, F, M, M, M, M, F, M, M, M, M, F, M, M, ...
$ Purpose         <fct> Personal, Personal, Personal, Travel, Personal, Person...

Conclusion

In this guide, you have learned methods for dealing with missing, invalid and duplicate data in R. These techniques will help you clean your data and get it ready for machine learning.