Open Machine Learning Course. Topic 1. Exploratory Data Analysis with Pandas

Yury Kashnitsky
Feb 5, 2018 · 14 min read

With this article, we, OpenDataScience, launch an open Machine Learning course. It is not our aim to develop yet another comprehensive introductory course on machine learning or data analysis (so this is not a substitute for fundamental education or for online/offline courses, specializations, and books). The purpose of this series of articles is to quickly refresh your knowledge and help you find topics for further advancement. Our approach is similar to that of the authors of the Deep Learning book, which starts off with a review of mathematics and the basics of machine learning: short, concise, and with many references to other resources.

UPD: YouTube playlist with video lectures

The course is designed to perfectly balance theory and practice; therefore, each topic is followed by an assignment with a deadline in a week. You can also take part in several Kaggle Inclass competitions held during the course.

All materials are available as a Kaggle Dataset and in a GitHub repo.


The course is going to be actively discussed in the OpenDataScience Slack team. Please fill in this form to be invited. The next session of the course will start on October 1, 2018. Invitations will be sent in September.

Article outline

1. About the course
2. Assignments
3. Demonstration of main Pandas methods
4. First attempt at predicting telecom churn
5. Assignment #1
6. Useful resources

1. About the course


  1. Exploratory Data Analysis with Pandas
  2. Visual Data Analysis with Python
  3. Classification, Decision Trees and k Nearest Neighbors
  4. Linear Classification and Regression
  5. Bagging and Random Forest
  6. Feature Engineering and Feature Selection
  7. Unsupervised Learning: Principal Component Analysis and Clustering
  8. Vowpal Wabbit: Fast Learning with Gigabytes of Data
  9. Time Series Analysis with Python, Predicting the future with Facebook Prophet
  10. Gradient Boosting


One of the greatest advantages of our course is its active community. If you join the OpenDataScience Slack team, you’ll find the authors of the articles and assignments right there in the same channel (#eng_mlcourse_open), eager to help you. This helps a lot when you are taking your first steps in any discipline. Fill in this form to be invited. The form will ask you several questions about your background and skills, including a few easy math questions.


We chat informally, and we like humor and emoji. Not every MOOC can boast such a lively community.


The prerequisites are the following: basic concepts from calculus, linear algebra, probability theory and statistics, and Python programming skills. If you need to catch up, a good resource is Part I of the “Deep Learning” book as well as various math and Python online courses (for Python, Codecademy will do). More info is available on the corresponding Wiki page.

What software you’ll need

For now, you’ll only need Anaconda (built with Python 3.6) to reproduce the code in the course. Later on, you’ll have to install other libraries, such as Xgboost and Vowpal Wabbit.

You can also resort to the Docker container with all necessary software already installed. More info is available on the corresponding Wiki page.

2. Assignments

  • Each article comes with an assignment in the form of a Jupyter notebook. The task will be to fill in the missing code snippets and to answer questions in a Google Quiz form;
  • Each assignment is due in a week with a hard deadline;
  • Please discuss the course content (articles and assignments) in the #eng_mlcourse_open channel of the OpenDataScience Slack team or here in the comments to articles on Medium;
  • The solutions to assignments will be sent to those who have submitted the corresponding Google form.

3. Demonstration of main Pandas methods

Well... There are dozens of cool tutorials on Pandas and visual data analysis. If you are familiar with these topics, just wait for the 3rd article in the series, where we get into machine learning.

The following material is better viewed as a Jupyter notebook and can be reproduced locally with Jupyter if you clone the course repository.

Pandas is a Python library that provides extensive means for data analysis. Data scientists often work with data stored in table formats like .csv, .tsv, or .xlsx. Pandas makes it very convenient to load, process, and analyze such tabular data using SQL-like queries. In conjunction with Matplotlib and Seaborn, Pandas provides a wide range of opportunities for visual analysis of tabular data.

The main data structures in Pandas are implemented with the Series and DataFrame classes. The former is a one-dimensional indexed array of some fixed data type. The latter is a two-dimensional data structure, a table, where each column contains data of the same type. You can see it as a dictionary of Series instances. DataFrames are great for representing real data: rows correspond to instances (objects, observations, etc.), and columns correspond to features of these instances.
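As a quick illustration (a toy example, not the course dataset), a Series and a DataFrame can be constructed like this:

```python
import pandas as pd

# A Series: a one-dimensional indexed array of a fixed data type
salaries = pd.Series([400, 300, 200, 250],
                     index=['Andrew', 'Bob', 'Charles', 'Ann'])

# A DataFrame: a table, which you can view as a dictionary of Series (columns)
df_toy = pd.DataFrame({'salary': salaries, 'age': [25, 30, 35, 40]})

print(salaries['Bob'])           # 300
print(df_toy.loc['Bob', 'age'])  # 30
```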

We’ll demonstrate the main methods in action by analyzing a dataset on the churn rate of telecom operator clients. Let’s read the data (using read_csv), and take a look at the first 5 lines using the head method:


Recall that each row corresponds to one client, the object of our research, and columns are features of the object.

Let’s have a look at data dimensionality, feature names, and feature types.


df.shape

(3333, 20)

From the output, we can see that the table contains 3333 rows and 20 columns. Now let’s try printing out the column names using columns:


df.columns

Index(['State', 'Account length', 'Area code', 'International plan',
       'Voice mail plan', 'Number vmail messages', 'Total day minutes',
       'Total day calls', 'Total day charge', 'Total eve minutes',
       'Total eve calls', 'Total eve charge', 'Total night minutes',
       'Total night calls', 'Total night charge', 'Total intl minutes',
       'Total intl calls', 'Total intl charge', 'Customer service calls',
       'Churn'],
      dtype='object')
We can use the info() method to output some general information about the dataframe:


df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3333 entries, 0 to 3332
Data columns (total 20 columns):
State                     3333 non-null object
Account length            3333 non-null int64
Area code                 3333 non-null int64
International plan        3333 non-null object
Voice mail plan           3333 non-null object
Number vmail messages     3333 non-null int64
Total day minutes         3333 non-null float64
Total day calls           3333 non-null int64
Total day charge          3333 non-null float64
Total eve minutes         3333 non-null float64
Total eve calls           3333 non-null int64
Total eve charge          3333 non-null float64
Total night minutes       3333 non-null float64
Total night calls         3333 non-null int64
Total night charge        3333 non-null float64
Total intl minutes        3333 non-null float64
Total intl calls          3333 non-null int64
Total intl charge         3333 non-null float64
Customer service calls    3333 non-null int64
Churn                     3333 non-null bool
dtypes: bool(1), float64(8), int64(8), object(3)
memory usage: 498.1+ KB

bool, int64, float64 and object are the data types of our features. We see that one feature is logical (bool), 3 features are of type object, and 16 features are numeric. With this same method, we can easily see if there are any missing values. Here, there are none because each column contains 3333 observations, the same number of rows we saw before with shape.

We can change the column type with the astype method. Let’s apply this method to the Churn feature to convert it into int64:

df['Churn'] = df['Churn'].astype('int64')

The describe method shows basic statistical characteristics of each numerical feature (int64 and float64 types): the number of non-missing values, mean, standard deviation, range, median, and the 0.25 and 0.75 quartiles.

df.describe()

In order to see statistics on non-numerical features, one has to explicitly indicate data types of interest in the include parameter.

df.describe(include=['object', 'bool'])


For categorical (type object) and boolean (type bool) features we can use the value_counts method. Let’s have a look at the distribution of Churn:


df['Churn'].value_counts()

0    2850
1     483
Name: Churn, dtype: int64

2850 users out of 3333 are loyal; their Churn value is 0. To calculate the proportion, pass normalize=True to the value_counts method:


df['Churn'].value_counts(normalize=True)

0    0.855086
1    0.144914
Name: Churn, dtype: float64


A DataFrame can be sorted by the value of one of its variables (i.e., columns). For example, we can sort by Total day charge (use ascending=False to sort in descending order):

df.sort_values(by='Total day charge', ascending=False).head()


Alternatively, we can also sort by multiple columns:

df.sort_values(by=['Churn', 'Total day charge'], ascending=[True, False]).head()


Indexing and retrieving data

A DataFrame can be indexed in several different ways.

To get a single column, you can use a DataFrame['Name'] construction. Let's use this to answer a question about that column alone: what is the proportion of churned users in our dataframe?



14.5% is actually quite bad for a company; such a churn rate can make the company go bankrupt.

Boolean indexing with one column is also very convenient. The syntax is df[P(df['Name'])], where P is some logical condition that is checked for each element of the Name column. The result of such indexing is the DataFrame consisting only of rows that satisfy the P condition on the Name column.

Let’s use it to answer the question:

What are average values of numerical variables for churned users?

df[df['Churn'] == 1].mean()

Account length            102.664596
Area code                 437.817805
Number vmail messages       5.115942
Total day minutes         206.914079
Total day calls           101.335404
Total day charge           35.175921
Total eve minutes         212.410145
Total eve calls           100.561077
Total eve charge           18.054969
Total night minutes       205.231677
Total night calls         100.399586
Total night charge          9.235528
Total intl minutes         10.700000
Total intl calls            4.163561
Total intl charge           2.889545
Customer service calls      2.229814
Churn                       1.000000
dtype: float64

How much time (on average) do churned users spend on the phone during the daytime?

df[df['Churn'] == 1]['Total day minutes'].mean()


What is the maximum length of international calls among loyal users (Churn == 0) who do not have an international plan?

df[(df['Churn'] == 0) & (df['International plan'] == 'No')]['Total intl minutes'].max()


DataFrames can be indexed by column name (label), by row name (index), or by the serial number of a row. The loc method is used for indexing by name, while iloc is used for indexing by number.

In the first case, we would say “give us the values of the rows with index from 0 to 5 (inclusive) and columns labeled from State to Area code (inclusive)”, and, in the second case, we would say “give us the values of the first five rows in the first three columns (as in typical Python slice: the maximal value is not included)”.

df.loc[0:5, 'State':'Area code']


df.iloc[0:5, 0:3]


If we need the first or last line of the data frame, we use the df[:1] or df[-1:] syntax.

Applying Functions to Cells, Columns and Rows

To apply functions to each column, use apply():


df.apply(np.max)

State                        WY
Account length              243
Area code                   510
International plan          Yes
Voice mail plan             Yes
Number vmail messages        51
Total day minutes         350.8
Total day calls             165
Total day charge          59.64
Total eve minutes         363.7
Total eve calls             170
Total eve charge          30.91
Total night minutes         395
Total night calls           175
Total night charge        17.77
Total intl minutes           20
Total intl calls             20
Total intl charge           5.4
Customer service calls        9
Churn                         1
dtype: object

The apply method can also be used to apply a function to each row. To do this, specify axis=1. Lambda functions are very convenient in such scenarios. For example, if we need to select all states starting with 'W', we can do it like this:

df[df['State'].apply(lambda state: state[0] == 'W')].head()


The map method can be used to replace values in a column by passing a dictionary of the form {old_value: new_value} as its argument:


The same thing can be done with the replace method:

df = df.replace({'Voice mail plan': d})



Grouping

In general, grouping data in Pandas works as follows:

df.groupby(by=grouping_columns)[columns_to_show].function()

  1. First, the groupby method divides the data into groups by the values of grouping_columns. These values become a new index in the resulting dataframe.
  2. Then, the columns of interest are selected (columns_to_show). If columns_to_show is not specified, all non-groupby columns are included.
  3. Finally, one or several functions are applied to the obtained groups, per selected column.

Here is an example where we group the data according to the values of the Churn variable and display statistics of three columns in each group:


Let’s do the same thing, but slightly differently by passing a list of functions to agg():


Summary tables

Suppose we want to see how the observations in our sample are distributed in the context of two variables — Churn and International plan. To do so, we can build a contingency table using the crosstab method:

pd.crosstab(df['Churn'], df['International plan'])


pd.crosstab(df['Churn'], df['Voice mail plan'], normalize=True)


We can see that most of the users are loyal and do not use additional services (International Plan/Voice mail).

This will resemble pivot tables to those familiar with Excel. And, of course, pivot tables are implemented in Pandas: the pivot_table method takes the following parameters:

  • values – a list of variables to calculate statistics for,
  • index – a list of variables to group the data by,
  • aggfunc – the statistics to calculate for each group, e.g., sum, mean, maximum, minimum or something else.

Let’s take a look at the average numbers of day, evening and night calls by area code:

df.pivot_table(['Total day calls', 'Total eve calls', 'Total night calls'], ['Area code'], aggfunc='mean')


DataFrame transformations

Like many other things in Pandas, adding columns to a DataFrame is doable in several ways.

For example, if we want to calculate the total number of calls for all users, let’s create the total_calls Series and paste it into the DataFrame:


It is possible to add a column more easily without creating an intermediate Series instance:


To delete columns or rows, use the drop method, passing the required indexes and the axis parameter (1 to delete columns, 0 or nothing to delete rows). The inplace argument tells whether to change the original DataFrame. With inplace=False, the drop method doesn't change the existing DataFrame and returns a new one with the rows or columns dropped. With inplace=True, it alters the DataFrame in place.


4. First attempt at predicting telecom churn

Let’s see how churn rate is related to the International plan variable. We’ll do this using a crosstab contingency table and also through visual analysis with Seaborn (however, visual analysis will be covered more thoroughly in the next article).

pd.crosstab(df['Churn'], df['International plan'], margins=True)

We see that, with International Plan, the churn rate is much higher, which is an interesting observation! Perhaps large and poorly controlled expenses on international calls are conflict-prone and lead to dissatisfaction among the telecom operator’s customers.

Next, let’s look at another important feature — Customer service calls. Let’s also make a summary table and a picture.

pd.crosstab(df['Churn'], df['Customer service calls'], margins=True)


sns.countplot(x='Customer service calls', hue='Churn', data=df);


Perhaps it is not so obvious from the summary table, but the plot clearly shows that the churn rate increases sharply starting from 4 calls to the service center.

Let’s now add a binary attribute to our DataFrame: Customer service calls > 3. And, once again, let's see how it relates to churn.


Let’s construct another contingency table that relates Churn with both International plan and freshly created Many_service_calls.

pd.crosstab(df['Many_service_calls'] & df['International plan'] , df['Churn'])


Therefore, predicting that a customer will churn (Churn=1) when the number of calls to the service center is greater than 3 and the International Plan is added (and predicting Churn=0 otherwise), we might expect an accuracy of 85.8% (we are mistaken only 464 + 9 times). This number, 85.8%, which we got with very simple reasoning, serves as a good starting point (baseline) for the machine learning models that we will build later.

As we move on in this course, recall that, before the advent of machine learning, the data analysis process looked something like this. Let’s recap what we’ve covered:

  • The share of loyal clients in the sample is 85.5%. The most naive model that always predicts a “loyal customer” on such data will guess right in about 85.5% of all cases. That is, the proportion of correct answers (accuracy) of subsequent models should be no less than this number, and will hopefully be significantly higher;
  • With the help of a simple forecast that can be expressed by the following formula: “(Customer Service calls > 3) & (International plan = True) => Churn = 1, else Churn = 0”, we can expect a guessing rate of 85.8%, which is just above 85.5%. Subsequently, we’ll talk about decision trees and figure out how to find such rules automatically based only on the input data;
  • We got these two baselines without applying machine learning, and they’ll serve as the starting point for our subsequent models. If it turns out that, with enormous effort, we only increase the share of correct answers by 0.5%, then perhaps we are doing something wrong, and it suffices to confine ourselves to a simple model with two conditions;
  • Before training complex models, it is recommended to manipulate the data a bit, make some plots, and check simple assumptions. Moreover, in business applications of machine learning, they usually start with simple solutions and then experiment with more complex ones.

5. Assignment #1

Full versions of assignments are announced each week in a new run of the course (October 1, 2018). Meanwhile, you can practice with a demo version: Kaggle Kernel, nbviewer.

6. Useful resources

Authors: Yury Kashnitskiy, and Katya Demidova. Translated and edited by Yuanyuan Pao, Christina Butsko, Anastasia Manokhina, Egor Polusmak, Sergey Isaev, and Artem Trunov.

Open Machine Learning Course

A series of articles on basics of Machine Learning.

Yury Kashnitsky

Written by

Data Scientist at KPN, Netherlands, leader of

