How to win a Kaggle competition in Data Science (via Coursera): part 1/5

Eric Perbos-Brinck
9 min read · Apr 25, 2018


Source: Coursera

These are my notes from the 5-week course on Coursera, taught by a team of data scientists and Kaggle Grandmasters.

## Week 1 ##

by Alexander Guschin (Kaggle GM #5, Yandex, lecturer at MIPT)
and Mikhail Trofimov (PhD student at CCAS)

Learning Objectives

  • Describe competition mechanics
  • Compare real life applications and competitions
  • Summarize reasons to participate in data science competitions
  • Describe main types of ML algorithms
  • Describe typical hardware and software requirements
  • Analyze decision boundaries of different classifiers
  • Use standard ML libraries

1. Introduction and course overview

Among all topics of data science, competitive data analysis is especially interesting.
For an experienced specialist, this is a great area to test their skills against other people and learn some new tricks; for a novice, it is a good way to quickly and playfully learn the basics of practical data science. For both, engaging in a competition is a good chance to expand their knowledge and get acquainted with new people.

  • Week #1:
    . Describe competition mechanics
    . Compare real life applications and competitions
    . Summarize reasons to participate in data science competitions
    . Describe main types of ML algorithms
    . Describe typical hardware and software requirements
    . Analyze decision boundaries of different classifiers
    . Feature preprocessing and generation with respect to models
    . Feature extractions from text and images
  • Week #2:
    . Exploratory Data Analysis (EDA)
    . EDA examples and visualizations
    . Inspect the data and find golden features
    . Validation: risk of overfitting, strategies and problems
    . Data leakages
  • Week #3:
    . Metrics optimization in a competition, new metrics
    . Advanced Feature Engineering I: mean encoding, regularization, generalizations
  • Week #4:
    . Hyperparameter Optimization
    . Tips and Tricks
    . Advanced Feature Engineering II: matrix factorization for feature extraction, t-SNE, feature interactions
    . Ensembling
  • Week #5:
    . Competition “walk-through” examples
    . Final project

2. Competition mechanics

2.1. There is a great variety of competitions: NLP, Time-Series, Computer Vision.

But they all share the same structure:
. Data is supplied with description
. An Evaluation function is given
. You build a model and use the Submission file
. Your submission is scored on a Leaderboard, with Public and Private Test sets
The Public set is used during the competition, the Private one for the final ranking
. You can typically submit between 2 and 5 entries per day.

Why participate in a competition?
. Great opportunity for learning and networking
. Interesting non-trivial tasks and state-of-the-art approaches
. A way to get recognition inside the Data Science community, and possible job offers

2.2. Kaggle overview

Walk-through of a Kaggle competition (Zillow home value prediction):
. Overview with description, evaluation, prizes and timeline
. Data provided by the organizer with description
. Public kernels created by participants, can be used as a starting point, especially the EDA.
. Discussion: the organizer can provide additional information and answer questions
. Leaderboard: shows the best score of each participant and their number of submissions. Calculated on the Public set during the competition.
. Rules
. Team: you can create a team with other participants; check the rules and beware of the maximum number of submissions allowed (individual participants vs. teams)

2.3. Real-World Applications vs Competitions

  • Real world ML pipeline is a complicated process, including:
    . Understanding the business problem
    . Formalize the problem (e.g. what counts as spam?)
    . Collect the data
    . Clean and preprocess the data
    . Choose a model
    . Define an evaluation of the model in real life
    . Inference speed
    . Deploy the model to users
  • Competitions focus only on:
    . Clean and preprocess the data
    . Choose a model

ML competitions are a great way to learn but they don’t address the questions of formalization, deployment and testing.

Don’t limit yourself: it’s OK to use Heuristics and Manual Data Analysis.
Don’t be afraid of complex solutions, advanced feature engineering, huge calculations, ensembling.
The ultimate goal is to achieve the highest score on the evaluation metric.

3. Recap of main ML algorithms

3.1. Main ML algorithms

  • Linear models: try to separate data points with a plane, into 2 subspaces
    ex: Logistic regression, Support Vector Machines (SVM)
    Available in Scikit-Learn or Vowpal Wabbit
  • Tree-based: use Decision Trees (DT) like Random Forest and Gradient Boosted Decision Trees (GBDT)
    Applies a “divide and conquer” approach by splitting the data into sub-spaces (boxes) based on the probabilities of the outcome
    In general, DT models are very powerful for tabular data, but rather weak at capturing linear dependencies, since approximating them requires many splits.
    Available in Scikit-Learn, XGBoost, LightGBM
  • kNN: K-Nearest-Neighbors, looks for nearest data points. Close objects are likely to have the same labels.
  • Neural Networks: often seen as a “black-box”, can be very efficient for Images, Sounds, Text and Sequences.
    Available in TensorFlow, PyTorch, Keras

No Free Lunch Theorem: there’s not a single method that outperforms all the others for all the tasks.
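
To make this recap concrete, here is a minimal sketch (mine, not from the course) that fits one model of each family on a toy scikit-learn dataset; the dataset and hyperparameters are arbitrary.

```python
# A toy comparison (not from the course): one model per family on non-linear data.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Two interleaving half-moons: a dataset a straight line cannot separate well.
X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Linear (LogisticRegression)": LogisticRegression(),
    "Tree-based (RandomForest)": RandomForestClassifier(n_estimators=100, random_state=0),
    "kNN (KNeighborsClassifier)": KNeighborsClassifier(n_neighbors=15),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))
```

On this particular data the linear model usually trails the tree-based and kNN models, but that says nothing in general: which family wins depends on the task, which is exactly the point of the No Free Lunch theorem.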

3.2. Disclaimer

If you don’t know much about basic ML algorithms, check the links below before taking the quiz.

3.3. Additional Materials and Links

Covers Scikit-Learn library with kNN, Linear Models, Decision Trees.
Plus H2O documentation on algorithms and parameters.
. Vowpal Wabbit
. XGBoost
. LightGBM
. Neural Nets with Keras, PyTorch, TensorFlow, MXNet & Lasagne
https://www.coursera.org/learn/competitive-data-science/supplement/AgAOD/additional-materials-and-links

4. Software and Hardware requirements

4.1. Hardware
Get a PC with a recent Nvidia GPU, a CPU with 6 cores and 32 GB of RAM.
Fast storage is critical, especially for Computer Vision, so an SSD is a must and an NVMe drive is even better.
Otherwise use cloud services like AWS but beware of the operating costs vs. a dedicated PC.

4.2. Software
Linux (Ubuntu with Anaconda) is best, some key libraries aren’t available on Windows.
. Python is today’s favorite as it supports a massive pool of libraries for ML.
. Numpy for linear algebra, Pandas for dataframes (like SQL), Scikit-Learn for classic ML algorithms.
. Matplotlib for plotting.
. Jupyter Notebook as an IDE (Integrated Development Environment).
. XGBoost and LightGBM for gradient-boosted decision trees.
. TensorFlow/Keras and PyTorch for Neural Networks.

4.3. Links for installation and documentations
https://www.coursera.org/learn/competitive-data-science/supplement/Djqi7/additional-material-and-links

5. Feature preprocessing and generation with respect to models

5.1. Overview with Titanic on Kaggle

  • Features: numeric, categorical (Red, Green, Blue), ordinal (old<renovated<new), datetime, coordinates, interval
    https://stats.idre.ucla.edu/other/mult-pkg/whatstat/what-is-the-difference-between-categorical-ordinal-and-interval-variables/
  • Feature preprocessing example: one-hot encoding (like “pclass” in Titanic)
    Decision Trees (DT) often do not require it, but linear models do when the feature has no clear linear dependency with the target (like survival rate vs. pclass).
    RF (Random Forests) can easily overcome this challenge.
  • Feature generation: in the case of daily sales forecasts (i.e. strong linear potential), it may help to add Week_number or Day_of_week.
    These can help both linear and DT models (see the sketch after this list).
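
A minimal sketch of both ideas with pandas; the DataFrame below is made up (Titanic has a Pclass column, but the Date column is purely hypothetical, added to illustrate the sales-forecast case):

```python
# A made-up DataFrame mixing the Titanic "Pclass" example and a hypothetical Date column.
import pandas as pd

df = pd.DataFrame({
    "Pclass": [1, 3, 2, 3],
    "Date": pd.to_datetime(["2018-04-02", "2018-04-03", "2018-04-07", "2018-04-08"]),
})

# Feature preprocessing: one-hot encode Pclass, mostly for the benefit of linear models.
df = pd.concat([df, pd.get_dummies(df["Pclass"], prefix="pclass")], axis=1)

# Feature generation: periodicity features that can help both linear and DT models.
df["week_number"] = df["Date"].dt.isocalendar().week
df["day_of_week"] = df["Date"].dt.dayofweek
print(df.head())
```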

Feature preprocessing is often necessary.
Feature generation is a powerful technique.
But both depend on the model type (DT vs. linear vs. NN).

5.2. Numeric features

5.2.1. Feature Preprocessing: Decision-Trees (DT) vs non-DT models

  • Scaling: DT try to find the best split for a feature, no matter the scale.
    kNN, Linear or NN are very sensitive to scaling differences.
  • MinMaxScaler
    Scale to [0, 1]: sklearn.preprocessing.MinMaxScaler
    X = (X - X.min) / (X.max - X.min)
  • StandardScaler
    Scale to mean=0, std=1: sklearn.preprocessing.StandardScaler
    X = (X - X.mean) / X.std

In the general case, for a non-DT model, we apply the chosen transformation to ALL numeric features.

  • Outliers: we can clip to the 1st and 99th percentiles, aka “winsorization” in financial data.
  • Rank: can be a better option than MinMaxScaler when outliers are present (and unclipped); good for non-DT models.
    scipy.stats.rankdata
    Important: must be applied to Train and Test together.
  • Log transform as np.log(1 + x), or raising to a power < 1 as in np.sqrt(x + 2/3):
    They bring large values closer together. Especially good for NN.
  • Advanced techniques for non-DT: concatenate dataframes produced by different preprocessings, or ensemble models trained on different preprocessings (see the sketch after this list).
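
A minimal sketch of these preprocessings with scikit-learn and scipy, on a made-up array with one outlier:

```python
# Toy numeric preprocessing: scaling, rank, log transform and winsorization.
import numpy as np
from scipy.stats import rankdata
from sklearn.preprocessing import MinMaxScaler, StandardScaler

x = np.array([[1.0], [5.0], [10.0], [1000.0]])  # note the outlier

print(MinMaxScaler().fit_transform(x).ravel())    # scaled to [0, 1]
print(StandardScaler().fit_transform(x).ravel())  # mean=0, std=1
print(rankdata(x.ravel()))                        # ranks are robust to the outlier
print(np.log1p(x.ravel()))                        # np.log(1 + x): compresses large values

# Winsorization: clip to the 1st and 99th percentiles to soften outliers.
lo, hi = np.percentile(x, [1, 99])
print(np.clip(x.ravel(), lo, hi))
```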

5.2.2. Feature Generation: based on EDA and business knowledge.

  • Easy one: with Sqm and Price features, we can generate a new feature “Price/Sqm”
    Or generate the fractional part of a value, like 1.99€ -> 0.99, 2.49€ -> 0.49 (see the sketch after this list).
  • Advanced one: generating the time a user takes to type a message (for spambot detection)
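
A minimal sketch of the two easy generation examples with pandas (the column names and values are made up):

```python
# Toy feature generation: a ratio feature and the fractional part of a price.
import pandas as pd

df = pd.DataFrame({"price": [199000.0, 249000.0], "sqm": [50.0, 80.0]})
df["price_per_sqm"] = df["price"] / df["sqm"]

prices = pd.Series([1.99, 2.49])
fractional_part = prices - prices.astype(int)   # 1.99 -> 0.99, 2.49 -> 0.49
print(df, fractional_part, sep="\n")
```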

Conclusion: DT models don’t depend on scaling, but non-DT models depend on it heavily.
Most used preprocessings: MinMaxScaler, StandardScaler, Rank, np.log(1+x) and np.sqrt(1+x)
Generation is powered by EDA and business knowledge.

5.3. Categorical and ordinal features

5.3.1. Feature Preprocessing:

There are three Categorical features in the Titanic dataset: Sex, Cabin, Embarked (Port’s name)
Reminder on ordinal feature examples:
Pclass (1, 2, 3) as an ordered categorical feature, or
Driver’s license type (A, B, C, D), or
Education level (kindergarten, school, college, bachelor, master, doctoral)

A. One technique is Label Encoding (replaces categories by numbers)
Good for DT models, less so for non-DT models.

For Embarked (S for Southampton, C for Cherbourg, Q for Queenstown)
- Alphabetical (sorted): [S,C,Q] -> [3,1,2] since C < Q < S, with sklearn.preprocessing.LabelEncoder
- Order of appearance: [S,C,Q] -> [1,2,3] with pandas.factorize

- Frequency encoding: [S,C,Q] -> [0.5, 0.3, 0.2], better for non-DT models as it preserves information about the value distribution, but still great for DT models.

B. Another technique is One-hot Encoding: each row gets a vector like (0,0,1) or (0,1,0)
pandas.get_dummies, sklearn.preprocessing.OneHotEncoder
Great for non-DT models, plus it is already scaled (min=0, max=1).

Warning: if a category has too many unique values, one-hot encoding generates many columns full of zeros.
To save RAM, consider sparse matrices that store only the non-zero elements (a good tip when non-zero values make up far less than 50% of the total). See the sketch below.
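
A minimal sketch of the encodings above on the Embarked example; the small Series below is made up for illustration:

```python
# Toy categorical encodings: label, frequency and one-hot encoding of Embarked.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

embarked = pd.Series(["S", "C", "Q", "S", "S", "C"])

# Label encoding (good for DT models)
alphabetical = LabelEncoder().fit_transform(embarked)   # sorted codes: C=0, Q=1, S=2
appearance = pd.factorize(embarked)[0]                  # order of appearance: S=0, C=1, Q=2

# Frequency encoding: each category replaced by its share of the rows
freq = embarked.map(embarked.value_counts(normalize=True))

# One-hot encoding (good for non-DT models); sparse output saves RAM for high-cardinality features
one_hot = pd.get_dummies(embarked, prefix="embarked", sparse=True)
print(alphabetical, appearance, freq.tolist(), one_hot, sep="\n")
```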

5.3.2. Feature Generation for categorical features:
(more in later lessons)

5.4. Datetime and Coordinates features

A. Date & Time:

  • ‘Periodicity’ (day number in week, month, year; season) is used to capture repetitive patterns.
  • ‘Time since’ a drug was taken, the last holidays, the number of days left before an event, etc.
    Can be row-independent (e.g. since 00:00:00 UTC, 1 January 1970) or row-dependent (since the last drug taken, the last holidays, the number of days left before an event, etc.)
  • ‘Difference between dates’ for churn prediction, like “Last_purchase_date - Last_call_date = Date_diff” (see the sketch after this list)
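
A minimal sketch of these datetime features with pandas; the column names and dates are made up:

```python
# Toy datetime features: periodicity, time since a fixed moment, difference between dates.
import pandas as pd

df = pd.DataFrame({
    "last_purchase_date": pd.to_datetime(["2018-04-01", "2018-04-10"]),
    "last_call_date": pd.to_datetime(["2018-03-25", "2018-04-09"]),
})

# Periodicity
df["purchase_day_of_week"] = df["last_purchase_date"].dt.dayofweek
df["purchase_month"] = df["last_purchase_date"].dt.month

# Row-independent "time since": days since the Unix epoch (1 January 1970)
df["days_since_epoch"] = (df["last_purchase_date"] - pd.Timestamp("1970-01-01")).dt.days

# Difference between dates, e.g. for churn prediction
df["date_diff"] = (df["last_purchase_date"] - df["last_call_date"]).dt.days
print(df)
```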

B. Coordinates:

  • Distance to the nearest POI (subway, school, hospital, police, etc.)
  • You can also cluster the data points and use “distance to the cluster center” as a feature.
  • Or create aggregate stats, such as “Number of flats in area” or “Mean realty price in area”
  • Advanced tip: add rotated versions of the coordinates; rotations can help tree-based models find better splits (see the sketch after this list)
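
A minimal sketch of these coordinate ideas with pandas and scikit-learn; the coordinates, the POI location and the number of clusters are all made up:

```python
# Toy coordinate features: distance to a POI, distance to a cluster center, rotation.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({"x": [0.1, 0.4, 0.8, 0.9], "y": [0.2, 0.5, 0.7, 0.1]})

# Distance to a (hypothetical) point of interest, e.g. a subway station
subway = np.array([0.5, 0.5])
df["dist_to_subway"] = np.sqrt((df["x"] - subway[0])**2 + (df["y"] - subway[1])**2)

# Distance to the center of the KMeans cluster each point belongs to
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(df[["x", "y"]])
centers = kmeans.cluster_centers_[kmeans.labels_]
df["dist_to_cluster_center"] = np.sqrt(((df[["x", "y"]].values - centers)**2).sum(axis=1))

# A rotated coordinate, which tree-based models can sometimes split on more easily
angle = np.pi / 4
df["x_rot"] = df["x"] * np.cos(angle) + df["y"] * np.sin(angle)
print(df)
```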

5.5. Handling missing values

  • Types of missing values: NaN, empty string, ‘-1’ (e.g. used to mark missing values in a feature bounded to [0,1]), a very large number, ‘-99999’, ‘999’, etc.
  • Fillna approaches:
    -999, -1 or
    Mean & median or
    “isnull” binary feature can be beneficial or
    Reconstruct the missing value if possible (best approach)
  • Do not fill NaNs before feature generation: this can pollute the data (e.g. “Time since” or Frequency/Label Encoding) and mislead the model.
  • XGBoost can handle NaN values natively; worth trying.
  • Handling Test values not present in the Train data: frequency encoding computed on Train and Test together can help, since unseen categories still receive a meaningful frequency value. A small sketch of the fillna approaches follows.
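
A minimal sketch of the fillna approaches with pandas, on a made-up Series:

```python
# Toy missing-value handling: isnull flag, constant fill, median fill.
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])

isnull_flag = s.isnull().astype(int)   # binary "isnull" feature
filled_constant = s.fillna(-999)       # out-of-range constant, fine for DT models
filled_median = s.fillna(s.median())   # mean/median, usually safer for non-DT models
print(pd.DataFrame({"raw": s, "isnull": isnull_flag,
                    "const": filled_constant, "median": filled_median}))
```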

6. Feature extraction from text and images

6.1. Bag of Words (BOW)

Source: Coursera

For Titanic, we can extract information/patterns from the passengers’ names such as their family members/siblings or their titles (Lord, Princess)

How-to: sklearn.feature_extraction.text.CountVectorizer
Creates 1 column per unique word, and counts its occurrence per row (document).

A. Text preprocessing

  • Lowercase: Very->very
  • Lemmatization: democracy, democratic, democratization -> democracy (requires a good dictionary/corpus)
  • Stemming: democracy, democratic, democratization -> democr
  • Stopwords: get rid of articles, prepositions and very common words, e.g. with NLTK (Natural Language ToolKit) stopword lists
    or ‘sklearn.feature_extraction.text.CountVectorizer’ with max_df

B. N-grams over sequences of words or characters can help capture local context
‘sklearn.feature_extraction.text.CountVectorizer’ with ngram_range and analyzer

C. TF-iDF as postprocessing (non-DT models require scaled features)

  • TF: Term Frequency (normalized per row so the counts sum to 1), followed by
  • iDF: Inverse Document Frequency (to boost rare words vs. frequent words)
    ‘sklearn.feature_extraction.text.TfidfVectorizer’ (see the sketch below)
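
A minimal sketch of BOW and TF-iDF with scikit-learn (assuming a recent version, where get_feature_names_out is available), on two made-up “documents”:

```python
# Toy Bag of Words and TF-iDF on two short documents (Titanic-style names).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["Braund, Mr. Owen Harris", "Cumings, Mrs. John Bradley"]

bow = CountVectorizer(lowercase=True, ngram_range=(1, 1))
print(bow.fit_transform(docs).toarray())     # word counts per document
print(bow.get_feature_names_out())           # one column per unique word

tfidf = TfidfVectorizer()                    # TF * iDF: scaled features for non-DT models
print(tfidf.fit_transform(docs).toarray())
```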

6.2. Using Word Vectors and ConvNets

A. Word Vectors

  • Word2vec converts each word to a vector in a space with hundreds of dimensions, creating embeddings in which words often used in the same context end up close together.
    King with Man, Queen with Woman.
    King - Queen = Man - Woman (in the vector space)
  • Other word vectors: GloVe, FastText
  • Sentences: Doc2vec

There are pretrained models available, e.g. trained on Wikipedia.
Note: text preprocessing can be applied BEFORE training or using Word2vec. A minimal sketch follows.
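
A minimal sketch with gensim (assuming gensim 4.x, where the parameter is named vector_size); the tiny corpus below only shows the API, real use relies on large corpora or pretrained vectors:

```python
# Toy Word2vec training with gensim 4.x on a tiny, made-up corpus.
from gensim.models import Word2Vec

sentences = [["king", "man", "crown"], ["queen", "woman", "crown"],
             ["king", "queen", "palace"]]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

vec_king = model.wv["king"]                  # a 50-dimensional embedding
print(model.wv.most_similar("king", topn=2)) # nearest words in the embedding space
```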

B. Comparing BOW vs w2v (Word2vec)

  • BOW: very large vectors; the meaning of each value in the vector is known
  • w2v: smaller vectors; individual values are rarely interpretable, but words with similar meaning often have similar embeddings

C. Quick intro on extracting features from Images with CNNs
(covered in detail in later lessons; a small sketch follows below)

  • Finetuning or transfer-learning
  • Data augmentation
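
A minimal sketch of both ideas with Keras (assuming a recent TensorFlow 2.x with the built-in preprocessing layers); the base model, image size and head are arbitrary choices, not the course’s recipe:

```python
# Toy transfer-learning setup: frozen pretrained base + new head, plus augmentation layers.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                      # freeze the pretrained convolutional base

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # new head for a binary task
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Data augmentation: random flips/rotations applied on the fly during training
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])
```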

Next week: Exploratory Data Analysis (EDA) and Data Leakages


Eric Perbos-Brinck

Deep Learning practitioner// Founder: BravoNestor, Comptoir-Hydroponique, Maison-Kokoon, My-Tesla-in-Paris// Carrefour Hypermarket executive. Insead MBA:1 PhD:0