What is the dot product?
Let’s start with the standard definition of the dot product. We take two vectors of the same length and multiply each pair of coordinates. Summing up these products gives the dot product of the two vectors.
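As a minimal sketch, the coordinate definition can be written in a few lines of Python (the function name `dot` is just an illustrative choice):

```python
def dot(v, w):
    """Dot product by the coordinate definition: multiply each
    pair of coordinates, then sum the products."""
    assert len(v) == len(w), "vectors must have the same length"
    return sum(a * b for a, b in zip(v, w))

print(dot([1, 2], [3, 4]))  # 1*3 + 2*4 = 11
```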
Geometrically, the dot product means one of the vectors is projected onto the line that the other vector lies on. Multiplying the length of the projected vector by the length of the other vector gives the dot product. The dot product is negative when the vectors point in opposite directions, and it is zero when the vectors are perpendicular, because the length of the projection is zero.
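We can check numerically that the coordinate formula agrees with this geometric picture, |v||w|cos(θ), for a couple of example 2D vectors (the helper names here are illustrative, not from the video):

```python
import math

def dot(v, w):
    # Coordinate definition in 2D.
    return v[0] * w[0] + v[1] * w[1]

def geometric_dot(v, w):
    # |v| * |w| * cos(angle between v and w)
    angle = math.atan2(w[1], w[0]) - math.atan2(v[1], v[0])
    return math.hypot(*v) * math.hypot(*w) * math.cos(angle)

v, w = (3.0, 0.0), (1.0, 1.0)
print(dot(v, w), geometric_dot(v, w))  # both are 3.0

perp = (0.0, 2.0)
print(dot(v, perp))  # 0.0 — perpendicular vectors
```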
But why is this calculation related to the idea of projection? To understand that, let’s take a look at linear transformations from multiple dimensions to one dimension. They are like functions that take multi-dimensional vectors and output one-dimensional vectors, which are just numbers. In part 1, it is explained that transformations are called linear when the grid lines remain parallel and evenly spaced. Similarly, for a linear transformation to one dimension, a line of evenly spaced dots remains evenly spaced after the transformation.
We also saw in part 1 that a transformation is defined by the coordinates where the basis vectors land, in this case î and ĵ. When the transformation is from two dimensions to one dimension, the corresponding matrix is 1x2.
To apply this transformation to a vector, we multiply the 1x2 matrix by the 2D vector. This calculation is the same as a dot product.
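A quick sketch of that identity: applying a 1x2 matrix to a 2D vector is, term by term, the same arithmetic as dotting the matrix’s row with the vector (the function names are illustrative):

```python
def apply_1x2(matrix, vector):
    # [[a, b]] applied to [x, y] gives a*x + b*y — a single number.
    (a, b), (x, y) = matrix[0], vector
    return a * x + b * y

def dot(v, w):
    return v[0] * w[0] + v[1] * w[1]

m = [[2, 5]]   # where î and ĵ land, written as a row
v = (4, 3)
print(apply_1x2(m, v))   # 2*4 + 5*3 = 23
print(dot((2, 5), v))    # the same number
```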
Therefore, we can say that there is an association between 1x2 matrices and 2D vectors. But what does this association mean geometrically?
To understand the geometric meaning of the association between 1x2 matrices and 2D vectors, let’s imagine that we put a number line diagonally on top of the 2D coordinate space, with the number zero sitting at the origin. Let’s also define a 2D vector û whose tip sits where the number one is on the number line. After that, let’s define a projection transformation that sends every 2D vector to the number it lands on when projected onto the number line. This transformation is linear because evenly spaced dots remain evenly spaced after the transformation.
Since the transformation is linear, we can represent it as a 1x2 matrix. We need to find where î and ĵ land, and those numbers will be the columns of the matrix.
Since û, î and ĵ all have unit length, we can use a symmetry argument. The following images explain how î lands on ux and ĵ lands on uy. Therefore, the entries of the matrix are the coordinates of û.
Computing this projection transformation for an arbitrary vector is computationally identical to taking its dot product with û. This is why taking a dot product with a unit vector can be interpreted as projecting a vector onto the span of that unit vector and taking the length.
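A small numeric check of this interpretation, assuming a unit vector û at an arbitrary angle (the helper `signed_projection_length` is an illustrative name, not from the video): projecting a vector onto the line through û and reading off the signed length gives the same number as the dot product.

```python
import math

def dot(v, w):
    return v[0] * w[0] + v[1] * w[1]

def signed_projection_length(v, u_hat):
    # Orthogonal projection of v onto the line through u_hat,
    # then the length of that projection, signed by direction.
    d = dot(v, u_hat)
    proj = (d * u_hat[0], d * u_hat[1])
    length = math.hypot(*proj)
    return length if d >= 0 else -length

u_hat = (math.cos(math.pi / 6), math.sin(math.pi / 6))  # unit vector at 30°
v = (2.0, 1.0)
print(signed_projection_length(v, u_hat))  # same as dot(v, u_hat)
print(dot(v, u_hat))
```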
The lesson here is that any time you have one of these linear transformations whose output space is the number line, there is some unique vector v corresponding to that transformation, in the sense that applying the transformation is the same as taking a dot product with that vector. This is an example of duality.
This series is a summary of the video series called Essence of linear algebra. You can watch the videos here.
Part 4 will be here soon :)