What is Vectorization?

Rajath Bharadwaj
2 min read · Jun 25, 2020


Vectorization is basically the art of getting rid of explicit for loops in your code. In the deep learning era, you often find yourself training on relatively large data sets, because that’s when deep learning algorithms tend to shine. It’s also important that your code runs very quickly; otherwise, on a big data set, your code might take a long time to run, and you just find yourself waiting a very long time to get the result (which is kinda sorta a bummer). So in the deep learning era, I think the ability to perform vectorization has become a key skill.

Let’s start with an example.

Vectorization example from Andrew Ng’s course

In logistic regression you need to compute Z equals W transpose X plus B, where W is a column vector and X is also a vector. These may be very large vectors if you have a lot of features. So, W and X are both n_x-dimensional vectors.

So, to compute W transpose X with a non-vectorized implementation (on the left side of the line), you would set Z to zero and then loop:

Z = 0
for i in range(n_x):
    Z += W[i] * X[i]
Z += B

That’s a non-vectorized implementation, and you’ll find that it’s going to be really slow.

In contrast, with a vectorized implementation (on the right side of the line), you would just compute W transpose X directly.

In Python’s NumPy package (conventionally imported as np), the command you use for that is

Z = np.dot(W, X) + B

This computes W transpose X and adds B to it directly. With vectorization, you find that the code runs much faster than just using for loops.
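To see the speed difference for yourself, here is a minimal sketch that times both versions on the same data. The vector size (one million elements) and the random inputs are my own choices for illustration, not from the course:

```python
import time
import numpy as np

n_x = 1_000_000                 # illustrative size, large enough to show the gap
W = np.random.rand(n_x)
X = np.random.rand(n_x)
B = 0.5

# Non-vectorized: explicit for loop over every element
tic = time.time()
Z_loop = 0.0
for i in range(n_x):
    Z_loop += W[i] * X[i]
Z_loop += B
loop_time = time.time() - tic

# Vectorized: a single call to np.dot
tic = time.time()
Z_vec = np.dot(W, X) + B
vec_time = time.time() - tic

print(f"for loop:   {loop_time * 1000:.1f} ms")
print(f"vectorized: {vec_time * 1000:.1f} ms")

# Both compute the same quantity, up to floating-point rounding
assert np.isclose(Z_loop, Z_vec)
```

On a typical machine the vectorized version is orders of magnitude faster, because NumPy dispatches the dot product to optimized, compiled code instead of interpreting a Python loop element by element.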

So this was a very brief explanation of vectorization in deep learning, why it is so important, and how it can save a bunch of time for you. That’s it from me; if you’ve made it this far, consider clapping 😁.
