42.26. Data preparation#

42.26.1. Exploring DataFrame information#

Learning goal: By the end of this subsection, you should be comfortable finding general information about the data stored in pandas DataFrames.

Once you have loaded your data into pandas, it will more likely than not be in a DataFrame. However, if the data set in your DataFrame has 60,000 rows and 400 columns, how do you even begin to get a sense of what you’re working with? Fortunately, pandas provides some convenient tools to quickly look at overall information about a DataFrame in addition to the first few and last few rows.

In order to explore this functionality, we will import the Python scikit-learn library and use an iconic dataset that every data scientist has seen hundreds of times: British biologist Ronald Fisher’s Iris data set used in his 1936 paper “The use of multiple measurements in taxonomic problems”:

import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
iris_df = pd.DataFrame(data=iris['data'], columns=iris['feature_names'])

42.26.1.1. DataFrame.shape#

We have loaded the Iris dataset into the variable iris_df. Before diving into the data, it is valuable to know how many datapoints we have and the overall size of the dataset, i.e. the volume of data we are dealing with.

iris_df.shape

So, we are dealing with 150 rows and 4 columns of data. Each row represents one datapoint and each column represents a single feature of the DataFrame. In other words, there are 150 datapoints containing 4 features each.

shape here is an attribute of the dataframe and not a function, which is why it doesn’t end in a pair of parentheses.

42.26.1.2. DataFrame.columns#

Let us now look at the 4 columns of data. What exactly does each of them represent? The columns attribute gives us the names of the columns in the DataFrame.

iris_df.columns

As we can see, there are four (4) columns. The columns attribute tells us the names of the columns and essentially nothing else. It becomes important when we want to identify the features a dataset contains.

42.26.1.3. DataFrame.info#

The amount of data (given by the shape attribute) and the names of the features or columns (given by the columns attribute) tell us something about the dataset. Now we want to dive deeper into it. The DataFrame.info() method is quite useful for this.

iris_df.info()

From here, we can make a few observations:

  1. The data type of each column: in this dataset, all of the data is stored as 64-bit floating-point numbers.

  2. The number of non-null values: dealing with null values is an important step in data preparation, and it will be dealt with later in the notebook.

42.26.1.4. DataFrame.describe()#

Say we have a lot of numerical data in our dataset. Univariate statistical calculations such as the mean, median, quartiles etc. can be done on each of the columns individually. The DataFrame.describe() function provides us with a statistical summary of the numerical columns of a dataset.

iris_df.describe()

The output above shows the total number of data points, mean, standard deviation, minimum, lower quartile (25%), median (50%), upper quartile (75%) and the maximum value of each column.

42.26.1.5. DataFrame.head#

With the functions and attributes above, we have a top-level view of the dataset. We know how many datapoints there are, how many features there are, the data type of each feature, and the number of non-null values for each feature.

Now it's time to look at the data itself. Let's see what the first few rows (the first few datapoints) of our DataFrame look like:

iris_df.head()

In the output, we can see five (5) entries of the dataset. Looking at the index on the left, we can see that these are the first five rows.

42.26.1.6. Exercise:#

From the example given above, it is clear that, by default, DataFrame.head returns the first five rows of a DataFrame. In the code cell below, can you figure out a way to display more than five rows?

# Hint: Consult the documentation by using iris_df.head?

42.26.1.7. DataFrame.tail#

Another way of looking at the data is from the end (instead of the beginning). The flipside of DataFrame.head is DataFrame.tail, which returns the last five rows of a DataFrame:

iris_df.tail()

In practice, it is useful to be able to easily examine the first few rows or the last few rows of a DataFrame, particularly when you are looking for outliers in ordered datasets.

All of the functions and attributes shown above, illustrated with code examples, help us get a look and feel of the data.

Takeaway: Even just by looking at the metadata about the information in a DataFrame or the first and last few values in one, you can get an immediate idea about the size, shape, and content of the data you are dealing with.

42.26.1.8. Missing Data#

Let us dive into missing data. Missing data occurs when no value is stored for some field of a datapoint.

Let us take an example: say someone is conscious about their weight and doesn't fill in the weight field of a survey. Then the weight value for that person will be missing.

Missing values occur in most real-world datasets.

How pandas handles missing data

Pandas handles missing values in two ways. The first you’ve seen before in previous sections: NaN, or Not a Number. This is actually a special value that is part of the IEEE floating-point specification, and it is used only to indicate missing floating-point values.

For missing values apart from floats, pandas uses the Python None object. While it might seem confusing that you will encounter two different kinds of values that say essentially the same thing, there are sound programmatic reasons for this design choice and, in practice, going this route enables pandas to deliver a good compromise for the vast majority of cases. Notwithstanding this, both None and NaN carry restrictions that you need to be mindful of with regards to how they can be used.

42.26.1.9. None: non-float missing data#

Because None comes from Python, it cannot be used in NumPy and pandas arrays that are not of data type object. Remember, NumPy arrays (and the data structures in pandas) can contain only one type of data. This is what gives them their tremendous power for large-scale data and computational work, but it also limits their flexibility. Such arrays have to upcast to the “lowest common denominator,” the data type that will encompass everything in the array. When None is in the array, it means you are working with Python objects.

To see this in action, consider the following example array (note the dtype for it):

import numpy as np

example1 = np.array([2, None, 6, 8])
example1

The reality of upcast data types carries two side effects with it. First, operations will be carried out at the level of interpreted Python code rather than compiled NumPy code. Essentially, this means that any operations involving Series or DataFrames with None in them will be slower. While you would probably not notice this performance hit, for large datasets it might become an issue.
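To get a rough feel for this cost, here is a minimal, unscientific timing sketch (not part of the original lesson); the exact numbers depend on your machine, but the object-dtype sum typically comes out far slower than the float64 one:

import time

import numpy as np

float_array = np.arange(1_000_000, dtype=np.float64)
object_array = float_array.astype(object)  # forces Python-object storage

start = time.perf_counter()
float_array.sum()
fast = time.perf_counter() - start

start = time.perf_counter()
object_array.sum()
slow = time.perf_counter() - start

print(f"float64 sum: {fast:.6f}s, object sum: {slow:.6f}s")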

The second side effect stems from the first. Because None essentially drags Series or DataFrames back into the world of vanilla Python, using NumPy/pandas aggregations like sum() or min() on arrays that contain a None value will generally produce an error:

example1.sum()

Key takeaway: Addition (and other operations) between integers and None values is undefined, which can limit what you can do with datasets that contain them.

42.26.1.10. NaN: missing float values#

In contrast to None, NumPy (and therefore pandas) supports NaN for its fast, vectorized operations and ufuncs. The bad news is that any arithmetic performed on NaN always results in NaN. For example:

np.nan + 1
np.nan * 0

The good news: aggregations run on arrays with NaN in them don’t pop errors. The bad news: the results are not uniformly useful:

example2 = np.array([2, np.nan, 6, 8]) 
example2.sum(), example2.min(), example2.max()

42.26.1.11. Exercise:#

# What happens if you add np.nan and None together?

Remember: NaN is just for missing floating-point values; there is no NaN equivalent for integers, strings, or Booleans.
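As an aside (not needed for the rest of this section), newer versions of pandas also provide nullable extension dtypes such as Int64, boolean, and string, which use the pd.NA marker so that integer, Boolean, and string columns can hold missing values without being upcast. A quick sketch:

# Nullable integer dtype: the missing entry shows up as <NA> and the dtype stays Int64.
nullable_ints = pd.Series([1, None, 3], dtype='Int64')
nullable_ints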

42.26.1.12. NaN and None: null values in pandas#

Even though NaN and None can behave somewhat differently, pandas is nevertheless built to handle them interchangeably. To see what we mean, consider a Series of integers:

int_series = pd.Series([1, 2, 3], dtype=int)
int_series

42.26.1.13. Exercise:#

# Now set an element of int_series equal to None.
# How does that element show up in the Series?
# What is the dtype of the Series?

In the process of upcasting data types to establish data homogeneity in Series and DataFrames, pandas will willingly switch missing values between None and NaN. Because of this design feature, it can be helpful to think of None and NaN as two different flavors of “null” in pandas. Indeed, some of the core methods you will use to deal with missing values in pandas reflect this idea in their names:

  • isnull(): Generates a Boolean mask indicating missing values

  • notnull(): Opposite of isnull()

  • dropna(): Returns a filtered version of the data

  • fillna(): Returns a copy of the data with missing values filled or imputed

These are important methods to master and get comfortable with, so let’s go over them each in some depth.

42.26.1.14. Detecting null values#

Now that we have understood the importance of missing values, we need to detect them in our dataset before dealing with them. Both isnull() and notnull() are your primary methods for detecting null data. Both return Boolean masks over your data.

example3 = pd.Series([0, np.nan, '', None])
example3.isnull()

Look closely at the output. Does any of it surprise you? While 0 is an arithmetic null, it’s nevertheless a perfectly good integer and pandas treats it as such. '' is a little more subtle. While we used it in Section 1 to represent an empty string value, it is nevertheless a string object and not a representation of null as far as pandas is concerned.

Now, let’s turn this around and use these methods in a manner more like you will use them in practice. You can use Boolean masks directly as a Series or DataFrame index, which can be useful when trying to work with isolated missing (or present) values.
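For example, indexing with the mask returned by isnull() keeps only the entries that pandas considers missing:

# Keep only the entries of example3 that pandas treats as missing.
example3[example3.isnull()]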

If we want the total number of missing values, we can just do a sum over the mask produced by the isnull() method.

example3.isnull().sum()

42.26.1.15. Exercise:#

# Try running example3[example3.notnull()].
# Before you do so, what do you expect to see?

Key takeaway: Both the isnull() and notnull() methods produce similar results when you use them in DataFrames: they return Boolean masks aligned with your data’s index, which will help you enormously as you wrestle with your data.

42.26.1.16. Dealing with missing data#

Learning goal: By the end of this subsection, you should know how and when to replace or remove null values from DataFrames.

Most machine learning models can’t handle missing data on their own, so before passing the data into a model, we need to deal with these missing values.

How missing data is handled carries subtle tradeoffs that can affect your final analysis and real-world outcomes.

There are primarily two ways of dealing with missing data:

  1. Drop the row containing the missing value

  2. Replace the missing value with some other value

We will discuss both of these methods and their pros and cons in detail.

42.26.1.17. Dropping null values#

The amount of data we pass on to our model has a direct effect on its performance. Dropping null values means that we are reducing the number of datapoints, and hence reducing the size of the dataset. So, it is advisable to drop rows with null values when the dataset is quite large.

Another case may be that a certain row or column has a lot of missing values. Then it may be dropped, because it wouldn’t add much value to our analysis when most of its data is missing.

Beyond identifying missing values, pandas provides a convenient means to remove null values from Series and DataFrames. To see this in action, let’s return to example3. The dropna() method helps in dropping the entries (for a DataFrame, the rows) with null values.

example3 = example3.dropna()
example3

Note that this should look like your output from example3[example3.notnull()]. The difference here is that, rather than just indexing on the masked values, dropna has removed those missing values from the Series example3.

Because DataFrames have two dimensions, they afford more options for dropping data.

example4 = pd.DataFrame([[1,      np.nan, 7], 
                         [2,      5,      8], 
                         [np.nan, 6,      9]])
example4

(Did you notice that pandas upcast two of the columns to floats to accommodate the NaNs?)

You cannot drop a single value from a DataFrame, so you have to drop full rows or columns. Depending on what you are doing, you might want to do one or the other, and so pandas gives you options for both. Because in data science, columns generally represent variables and rows represent observations, you are more likely to drop rows of data; the default setting for dropna() is to drop all rows that contain any null values:

example4.dropna()

If necessary, you can drop NA values from columns. Use axis=1 to do so:

example4.dropna(axis='columns')

Notice that this can drop a lot of data that you might want to keep, particularly in smaller datasets. What if you just want to drop rows or columns that contain several or even just all null values? You specify those settings in dropna with the how and thresh parameters.

By default, how='any' (if you would like to check for yourself or see what other parameters the method has, run example4.dropna? in a code cell). You could alternatively specify how='all' so as to drop only rows or columns that contain all null values. Let’s expand our example DataFrame to see this in action in the next exercise.

example4[3] = np.nan
example4
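As a quick sketch of the difference, how='all' drops nothing here, because even after adding the all-NaN column no row is missing every one of its values:

# With how='all', a row is dropped only if every entry in it is null.
# No row of example4 is entirely null, so all three rows survive.
example4.dropna(how='all')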

Key takeaways:

  1. Dropping null values is a good idea only if the dataset is large enough.

  2. Full rows or columns can be dropped if they have most of their data missing.

  3. The DataFrame.dropna(axis=) method helps in dropping null values. The axis argument signifies whether rows are to be dropped or columns.

  4. The how argument can also be used. By default it is set to any. So, it drops only those rows/columns which contain any null values. It can be set to all to specify that we will drop only those rows/columns where all values are null.

42.26.1.18. Exercise:#

# How might you go about dropping just column 3?
# Hint: remember that you will need to supply both the axis parameter and the how parameter.

The thresh parameter gives you finer-grained control: you set the number of non-null values that a row or column needs to have in order to be kept:

example4.dropna(axis='rows', thresh=3)

Here, the first and last row have been dropped, because they contain only two non-null values.

42.26.1.19. Filling null values#

It sometimes makes sense to fill in missing values with ones that could plausibly be valid. There are a few techniques for filling null values. The first is using domain knowledge (knowledge of the subject the dataset is based on) to approximate the missing values.

You could use isnull to do this in place, but that can be laborious, particularly if you have a lot of values to fill. Because this is such a common task in data science, pandas provides fillna, which returns a copy of the Series or DataFrame with the missing values replaced with a value of your choosing. We will look at categorical and numeric filling strategies first, and then create another example Series to see how fillna works in practice.

42.26.1.20. Categorical Data (Non-numeric)#

First let us consider non-numeric data. Datasets often have columns containing categorical data, e.g. gender, or True/False values.

In most of these cases, we replace missing values with the mode of the column. Say we have 100 datapoints: 90 have answered True, 8 have answered False, and 2 have not answered. Then we can fill the 2 missing values with True, considering the column as a whole.

Again, we can use domain knowledge here. Let us consider an example of filling with the mode.

fill_with_mode = pd.DataFrame([[1,2,"True"],
                               [3,4,None],
                               [5,6,"False"],
                               [7,8,"True"],
                               [9,10,"True"]])

fill_with_mode

Now, let’s first find the mode before filling the None value with it.

fill_with_mode[2].value_counts()

So, we will replace None with True.

# Assign the filled column back (avoids relying on inplace=True on a column selection)
fill_with_mode[2] = fill_with_mode[2].fillna('True')
fill_with_mode

As we can see, the null value has been replaced. Needless to say, we could have written anything in place of 'True' and it would have been substituted.

42.26.1.21. Numeric data#

Now, coming to numeric data. Here we have two common ways of replacing missing values:

  1. Replace with the median of the column

  2. Replace with the mean of the column

We replace with the median when the data is skewed or contains outliers, because the median is robust to outliers.
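A quick illustration with made-up numbers: a single outlier drags the mean far more than the median.

# Hypothetical values: four typical points plus one outlier (100).
skewed = pd.Series([1, 2, 3, 4, 100])
skewed.mean(), skewed.median()   # the mean jumps to 22.0 while the median stays at 3.0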

When the data is approximately normally distributed, we can use the mean, as in that case the mean and median are pretty close.

First, let us take a column which is normally distributed and let us fill the missing value with the mean of the column.

fill_with_mean = pd.DataFrame([[-2,0,1],
                               [-1,2,3],
                               [np.nan,4,5],
                               [1,6,7],
                               [2,8,9]])

fill_with_mean

The mean of the column is

np.mean(fill_with_mean[0])

Filling with mean

fill_with_mean[0] = fill_with_mean[0].fillna(np.mean(fill_with_mean[0]))
fill_with_mean

As we can see, the missing value has been replaced with the column’s mean.

Now let us try another dataframe, and this time we will replace the None values with the median of the column.

fill_with_median = pd.DataFrame([[-2,0,1],
                               [-1,2,3],
                               [0,np.nan,5],
                               [1,6,7],
                               [2,8,9]])

fill_with_median

The median of the second column is

fill_with_median[1].median()

Filling with median

fill_with_median[1] = fill_with_median[1].fillna(fill_with_median[1].median())
fill_with_median

As we can see, the NaN value has been replaced by the median of the column. Now let’s create another example Series to see how fillna works more generally:

example5 = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
example5

You can fill all of the null entries with a single value, such as 0:

example5.fillna(0)

Key takeaways:

  1. Filling in missing values makes sense when there is not much data, or when there is a clear strategy for filling in the missing data.

  2. Domain knowledge can be used to fill in missing values by approximating them.

  3. For categorical data, missing values are most often substituted with the mode of the column.

  4. For numeric data, missing values are usually filled in with the mean (for approximately normally distributed data) or the median of the column.

42.26.1.22. Exercise:#

# What happens if you try to fill null values with a string, like ''?

You can forward-fill null values, which is to use the last valid value to fill a null:

example5.ffill()  # equivalent to the older, now-deprecated fillna(method='ffill')

You can also back-fill to propagate the next valid value backward to fill a null:

example5.bfill()

As you might guess, this works the same with DataFrames, but you can also specify an axis along which to fill null values:

example4
example4.ffill(axis=1)

Notice that when a previous value is not available for forward-filling, the null value remains.

42.26.1.23. Exercise:#

# What output does example4.bfill(axis=1) produce?
# What about example4.ffill() or example4.bfill()?
# Can you think of a longer code snippet to write that can fill all of the null values in example4?

You can be creative about how you use fillna. For example, let’s look at example4 again, but this time let’s fill the missing values with the average of all of the values in the DataFrame:

example4.fillna(example4.mean())

Notice that column 3 is still valueless: example4.mean() computes a per-column mean, and the mean of a column that is entirely NaN is itself NaN, so there is nothing to fill column 3 with.
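To see why, check the per-column means being used as fill values (column 3’s mean is NaN because every entry in that column is missing):

# Per-column means; the all-NaN column 3 yields NaN, so it has no usable fill value.
example4.mean()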

Takeaway: There are multiple ways to deal with missing values in your datasets. The specific strategy you use (removing them, replacing them, or even how you replace them) should be dictated by the particulars of that data. You will develop a better sense of how to deal with missing values the more you handle and interact with datasets.

42.26.1.24. Encoding categorical data#

Machine learning models deal only with numbers, or data in some numeric form. A model can’t tell the difference between a Yes and a No, but it can distinguish between 0 and 1. So, after filling in the missing values, we need to encode the categorical data in some numeric form for the model to understand.

Encoding can be done in two ways. We will be discussing them next.

LABEL ENCODING

Label encoding converts each category to a number. For example, say we have a dataset of airline passengers with a column containing their class, one of ['business class', 'economy class', 'first class']. If label encoding is applied, these would be transformed to [0, 1, 2]. Let us see an example in code. Since we will cover scikit-learn in upcoming notebooks, we won’t use it here.

label = pd.DataFrame([
                      [10,'business class'],
                      [20,'first class'],
                      [30, 'economy class'],
                      [40, 'economy class'],
                      [50, 'economy class'],
                      [60, 'business class']
],columns=['ID','class'])
label

To perform label encoding on the class column, we first have to define a mapping from each class to a number before replacing the values:

class_labels = {'business class':0,'economy class':1,'first class':2}
label['class'] = label['class'].replace(class_labels)
label

As we can see, the output matches what we expected. So, when do we use label encoding? Label encoding is used in either or both of the following cases (see the sketch after the list below):

  1. When the number of categories is large

  2. When the categories are in order.
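As an aside, pandas can also assign the codes itself via an ordered categorical instead of a hand-written mapping. This is a minimal sketch with hypothetical example values, not part of the original lesson; the explicit categories list fixes the order so it mirrors the manual mapping above.

# Let pandas assign integer codes via an ordered categorical.
classes = pd.Series(['business class', 'first class', 'economy class'])
codes = pd.Categorical(classes,
                       categories=['business class', 'economy class', 'first class'],
                       ordered=True).codes
codes  # array([0, 2, 1], dtype=int8)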

ONE HOT ENCODING

Another type of encoding is One Hot Encoding. In this type of encoding, each category of the column gets added as a separate column and each datapoint will get a 0 or a 1 based on whether it contains that category. So, if there are n different categories, n columns will be appended to the dataframe.

For example, let us take the same aeroplane class example. The categories were ['business class', 'economy class', 'first class']. So, if we perform one hot encoding, the following three columns will be added to the dataset: ['class_business class', 'class_economy class', 'class_first class'].

one_hot = pd.DataFrame([
                      [10,'business class'],
                      [20,'first class'],
                      [30, 'economy class'],
                      [40, 'economy class'],
                      [50, 'economy class'],
                      [60, 'business class']
],columns=['ID','class'])
one_hot

Let us perform one hot encoding on the class column:

one_hot_data = pd.get_dummies(one_hot,columns=['class'])
one_hot_data

Each one hot encoded column contains 0 or 1, which specifies whether that category exists for that datapoint.
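Note that recent pandas versions (2.0 and later) return boolean True/False columns from get_dummies by default. If you specifically want 0/1 integers as described above, you can pass dtype=int; a quick sketch:

# dtype=int forces the dummy columns to hold 0/1 integers instead of booleans.
pd.get_dummies(one_hot, columns=['class'], dtype=int)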

When do we use one hot encoding? One hot encoding is used in either or both of the following cases:

  1. When the number of categories and the size of the dataset are smaller.

  2. When the categories follow no particular order.

Key Takeaways:

  1. Encoding is done to convert non-numeric data to numeric data.

  2. There are two types of encoding: Label encoding and One Hot encoding, both of which can be performed based on the demands of the dataset.

42.26.2. Removing duplicate data#

Learning goal: By the end of this subsection, you should be comfortable identifying and removing duplicate values from DataFrames.

In addition to missing data, you will often encounter duplicated data in real-world datasets. Fortunately, pandas provides an easy means of detecting and removing duplicate entries.

42.26.2.1. Identifying duplicates: duplicated#

You can easily spot duplicate values using the duplicated method in pandas, which returns a Boolean mask indicating whether an entry in a DataFrame is a duplicate of an earlier one. Let’s create another example DataFrame to see this in action.

example6 = pd.DataFrame({'letters': ['A','B'] * 2 + ['B'],
                         'numbers': [1, 2, 1, 3, 3]})
example6
example6.duplicated()

42.26.2.2. Dropping duplicates: drop_duplicates#

drop_duplicates simply returns a copy of the data for which all of the duplicated values are False:

example6.drop_duplicates()

Both duplicated and drop_duplicates default to considering all columns, but you can specify that they examine only a subset of columns in your DataFrame:

example6.drop_duplicates(['letters'])

Takeaway: Removing duplicate data is an essential part of almost every data-science project. Duplicate data can change the results of your analyses and give you inaccurate results!

42.26.3. Acknowledgments#

Thanks to Microsoft for creating the open-source course Data Science for Beginners. It inspires the majority of the content in this chapter. Original Notebook source from Data Science: Introduction to Machine Learning for Data Science Python and Machine Learning Studio by Lee Stott