Pooling Layers in Neural Networks and Their Variants

Anyone who has written code for a deep learning architecture has heard of, or used, a pooling layer. Pooling is one of the most fundamental operations in neural network design: it down-samples the feature maps produced by the previous layer, yielding new feature maps at a condensed resolution. By drastically reducing the spatial dimensions of its input, the pooling layer serves two main purposes. The first is to reduce the number of parameters and the amount of computation in subsequent layers, lowering the computational cost. The second is to help control overfitting.
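As a minimal sketch of the down-sampling described above (NumPy, assuming a single-channel feature map; `max_pool2d` is a hypothetical helper written for illustration), a 2×2 max pool with stride 2 halves each spatial dimension:

```python
import numpy as np

def max_pool2d(x, pool=2, stride=2):
    """Max-pool a 2-D feature map. Hypothetical helper for illustration;
    frameworks like PyTorch provide vectorized equivalents."""
    h, w = x.shape
    out_h = (h - pool) // stride + 1
    out_w = (w - pool) // stride + 1
    out = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # Take the maximum over each pool x pool window.
            window = x[i * stride:i * stride + pool,
                       j * stride:j * stride + pool]
            out[i, j] = window.max()
    return out

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 5, 1],
                 [7, 2, 8, 3],
                 [0, 9, 4, 4]])
print(max_pool2d(fmap))
# → [[6 5]
#    [9 8]]
```

A 4×4 input becomes a 2×2 output: the spatial resolution is quartered while the strongest activation in each window is kept, and the layer itself has no learnable parameters.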




Vishal Rajput

AI researcher/developer | loves writing blogs and code | 2x Top Writer in AI
