Flood Probability Forecasting Using Advanced ML Models

Will A. Tran
6 min read · Jun 13, 2024


Photo by Kelly Sikkema on Unsplash

The project of predicting flood probabilities through machine learning was both challenging and rewarding. Participating in Kaggle’s “Playground Series — Season 4, Episode 5” provided me with a valuable learning experience and the opportunity to sharpen my data analysis and machine learning skills. In this blog post, I will walk you through my approach to the competition, share the insights I gained, and offer tips for others venturing into the world of machine learning.

Overview of the Competition

The competition aimed to predict flood probability based on various environmental and infrastructural factors. The dataset included numerous features such as monsoon intensity, topography drainage, river management, and deforestation, each playing a crucial role in determining the likelihood of floods.

Data Exploration and Preprocessing

My first step was to explore the dataset thoroughly. Understanding the distributions of the variables and the relationships between them is critical before diving into model building, so I used visualizations to examine each feature and its relationship with the target variable.

I observed that the features of both the training and test datasets exhibited right-skewed distributions. By overlaying the histograms of the training (blue) and test (orange) datasets, I could see that the two distributions matched closely: the overlapping bars blended into a single brown shape.

This check was crucial: when the training and test data follow similar distributions, a model that fits the training data well is more likely to generalize to unseen data rather than merely memorize the training set.
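The overlay check can be sketched as follows. This is a minimal illustration with synthetic right-skewed data standing in for the competition CSVs; `MonsoonIntensity` is one of the real column names, but the values here are generated, not the actual dataset:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Stand-in data: gamma draws mimic the right-skewed shape of the real features.
train = pd.DataFrame({"MonsoonIntensity": rng.gamma(2.0, 2.0, 5000)})
test = pd.DataFrame({"MonsoonIntensity": rng.gamma(2.0, 2.0, 5000)})

fig, ax = plt.subplots()
# Semi-transparent overlays: where the bars agree, blue + orange reads as brown.
ax.hist(train["MonsoonIntensity"], bins=40, alpha=0.5, color="tab:blue", label="train")
ax.hist(test["MonsoonIntensity"], bins=40, alpha=0.5, color="tab:orange", label="test")
ax.set_xlabel("MonsoonIntensity")
ax.legend()
fig.savefig("feature_overlay.png")
```

In practice you would loop this over every feature column and eyeball which ones diverge between train and test.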

Visualization from the competition notebook

Next, I examined the target variable, FloodProbability, using a kernel density plot to understand its distribution. This helped in identifying any potential skewness or irregularities in the data.
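A kernel density plot of the target can be produced with `scipy.stats.gaussian_kde`. The sketch below uses synthetic values centered near 0.5 as a stand-in for the real `FloodProbability` column; the mean and spread are assumptions, not the competition data:

```python
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Stand-in target: probabilities clipped to [0, 1], clustered around 0.5.
flood_prob = np.clip(rng.normal(0.5, 0.05, 10000), 0, 1)

# Fit a Gaussian kernel density estimate and evaluate it on a grid.
kde = gaussian_kde(flood_prob)
grid = np.linspace(0, 1, 200)
density = kde(grid)

fig, ax = plt.subplots()
ax.plot(grid, density)
ax.set_xlabel("FloodProbability")
ax.set_ylabel("density")
fig.savefig("target_kde.png")
```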

Visualization from the competition notebook

Correlation Analysis

Understanding the relationships between features is essential for effective feature engineering and model selection. I created a correlation matrix to visualize these relationships. This step helped me identify features that were highly correlated with each other and the target variable.
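The correlation analysis reduces to a call to `DataFrame.corr()` followed by a ranking against the target. The frame below is synthetic (two stand-in features and a target built from a shared latent factor) so the correlations are illustrative, not the competition's:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 2000
base = rng.normal(size=n)  # shared latent factor driving the correlation
# Stand-in frame; the real data has ~20 feature columns plus FloodProbability.
df = pd.DataFrame({
    "MonsoonIntensity": base + rng.normal(scale=0.5, size=n),
    "Deforestation": rng.normal(size=n),
    "FloodProbability": 0.5 + 0.1 * base + rng.normal(scale=0.05, size=n),
})

corr = df.corr()
# Features most correlated with the target, strongest first.
ranked = corr["FloodProbability"].drop("FloodProbability").abs().sort_values(ascending=False)
print(ranked)
```

Passing `corr` to a heatmap (e.g. `seaborn.heatmap`) reproduces the matrix visualization from the notebook.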

Visualization from the competition notebook

Model Selection and Training

Choosing the right model is a pivotal part of any machine learning project. For this competition, I experimented with various regression models, including CatBoost, Decision Trees, and Gradient Boosting Machines. Each model was evaluated based on its R2 score, as this was the evaluation metric used in the competition.

After a series of trials and validations, I found that XGBRegressor, Ridge Regression, and LGBMRegressor offered the best performance. These models provided a good balance between bias and variance, making them suitable choices for this regression task. The XGBRegressor and LGBMRegressor, in particular, stood out due to their robustness and ability to handle the right-skewed distributions in the data effectively. Ridge Regression was also included to ensure a linear model baseline and to take advantage of its regularization capabilities, which helped in minimizing overfitting.

To further enhance the model’s performance, I used a stacking approach. Stacking involves combining multiple models to create a stronger overall model. In this case, I used XGBRegressor, Ridge Regression, and LGBMRegressor as the base models in the stacking ensemble. This method allowed me to leverage the strengths of each individual model, resulting in improved accuracy and generalization.

Feature Engineering and Hyperparameter Tuning

Feature engineering was a crucial step in improving the model’s performance. The objective was to transform and enhance the dataset to create new features and prepare the data for model training. Here are the steps I followed:

Sorting and Summing Features: The features were sorted, and a new feature representing the sum of the sorted features was added. Sorting the features along the feature axis helps capture the cumulative effect of all features. This technique simplifies the feature space, helping models better understand the underlying data patterns.

Adding Statistical Features: Additional statistical features were computed to capture more information about the distribution of the original features:

  • Standard Deviation (std): Measures the dispersion of the features.
  • Special Indicator (special): Checks if the sum feature is within a specified range.
  • Skewness (skew): Measures the asymmetry of the feature distribution.
  • Kurtosis (kurtosis): Measures the tailedness of the feature distribution.

Splitting the Dataset: The enhanced dataset was split into training and validation sets to prepare for model training and evaluation.
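The row-wise feature engineering steps above can be sketched as a single helper. The data here is synthetic, and the `(5, 100)` range for the `special` flag is a hypothetical placeholder, not the threshold from my notebook:

```python
import numpy as np
import pandas as pd
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(3)
# Stand-in feature frame; the real data has ~20 environmental features.
X = pd.DataFrame(rng.gamma(2.0, 2.0, size=(1000, 5)),
                 columns=[f"f{i}" for i in range(5)])

def add_row_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    vals = np.sort(df.to_numpy(), axis=1)  # sort each row along the feature axis
    out["sum"] = vals.sum(axis=1)          # cumulative effect of all features
    out["std"] = vals.std(axis=1)          # dispersion of the row's features
    out["skew"] = skew(vals, axis=1)       # asymmetry of the row's distribution
    out["kurtosis"] = kurtosis(vals, axis=1)  # tailedness of the row's distribution
    # Hypothetical range for the "special" flag; tune it to the real data.
    out["special"] = out["sum"].between(5, 100).astype(int)
    return out

X_fe = add_row_features(X)
print(X_fe.columns.tolist())
```

After this, `train_test_split` on `X_fe` produces the training and validation sets used below.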

For hyperparameter tuning, I used Optuna, an automated optimization framework, to efficiently search for the best hyperparameters and enhance model performance. The process included:

  1. Defining the Objective Function: The objective function defined the hyperparameter search space for different models, including LightGBM, XGBoost, and Ridge Regression. Parameters such as lgm_params, xgb_params, and ridge_alpha specified the ranges and distributions of hyperparameters to be optimized.
  2. Base Models: A combination of Ridge Regression, XGBRegressor, and LightGBM models were used as base models in a stacking regressor framework. This ensemble approach leverages the strengths of each model type.
  3. Stacking Regressor: The stacking regressor combines predictions from the base models to produce a final prediction. Ridge Regression was used as the final estimator in the stacking process.
  4. Model Training and Evaluation: The stacking model was trained on the training dataset, and predictions were made on the validation dataset. The R2 score was used to evaluate model performance.

By using Optuna to optimize hyperparameters over multiple trials, I was able to identify the best combination of parameters, resulting in improved model accuracy and robustness.

Model Evaluation and Interpretation

After optimizing the models using Optuna, the final stacking model was trained and evaluated on the validation dataset.

  1. Evaluation Metric: The R2 score was used to evaluate the model’s performance, as it was the competition’s evaluation metric.
  2. Performance: The final stacking model achieved an R2 score of 0.873939, meaning it explains approximately 87.4% of the variance in the target variable, FloodProbability.

The high R2 score demonstrates the model’s strong predictive performance. The successful application of feature engineering and hyperparameter tuning significantly enhanced the model’s accuracy and robustness, confirming the effectiveness of the implemented machine learning pipeline.

Lessons Learned and Tips for Machine Learning

  1. Thorough Data Exploration: Spend a lot of time understanding your data. Visualizations and summary statistics can reveal hidden patterns and insights.
  2. Feature Engineering: Don’t underestimate the power of feature engineering. Creating meaningful features can drastically improve model performance.
  3. Model Experimentation: Try different models and algorithms. What works for one dataset may not work for another.
  4. Hyperparameter Tuning: Fine-tuning model parameters can make a significant difference in performance. Use techniques like grid search or random search for optimization.
  5. Validation and Cross-Validation: Always validate your model to ensure it generalizes well to unseen data. Cross-validation is a powerful tool to assess model stability.
  6. Iterative Process: Machine learning is an iterative process. Continuously refine your model based on feedback and results.
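Tips 4 and 5 fit together naturally in scikit-learn: a grid search picks the hyperparameters, and cross-validation checks how stable the chosen model is. A minimal sketch on synthetic data, tuning only Ridge's regularization strength:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_regression(n_samples=1000, n_features=10, noise=5.0, random_state=0)

# Tip 4: grid search over the regularization strength.
search = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]},
                      scoring="r2", cv=5)
search.fit(X, y)

# Tip 5: cross-validate the chosen model to assess stability, not just the mean.
scores = cross_val_score(search.best_estimator_, X, y, scoring="r2", cv=5)
print(f"mean R2: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A large spread across folds is a warning sign that the model's performance depends heavily on which rows it happened to train on.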

Conclusion

Participating in the Kaggle competition was an enriching experience that enhanced my understanding of machine learning and its practical applications. Through rigorous data exploration, feature engineering, and model tuning, I was able to build a robust model for flood probability prediction. I hope this walkthrough provides valuable insights and encourages you to dive deeper into the fascinating world of machine learning.

For a detailed look at my code and approach, feel free to check out my Kaggle notebook, where I have documented the entire process.

Happy learning!

This blog post showcases my analytical approach and ability to interpret data through machine learning, making it an excellent addition to my personal portfolio. If you have any questions or feedback, feel free to reach out at me@willatran.com. Visit my website at willatran.com to check out more projects like this.


Will A. Tran

Diving into data, sharing my discoveries, and connecting with fellow data scientists.