Computational Linear Algebra: Scalars, Vectors, Matrices and Tensors

Computational Linear Algebra: Unleashing the Power of Matrices and Vectors

Monit Sharma
8 min read · May 18, 2023

Introduction:

Welcome to the exciting world of Computational Linear Algebra! In this blog series, we will embark on a journey to explore the fundamental concepts and applications of linear algebra in the realm of computer science. Whether you’re a budding programmer, a data scientist, or simply curious about the underlying mathematics behind many computational tasks, this series is designed to equip you with the essential knowledge to tackle complex problems using linear algebra techniques.

Linear algebra serves as the backbone of many computational fields, enabling us to model and solve a wide range of problems efficiently. From image processing and machine learning to computer graphics and cryptography, the principles of linear algebra find their applications in countless areas of computer science.

In this introductory blog, we will lay the foundation by understanding the fundamental building blocks of linear algebra: scalars, vectors, matrices, and tensors. We will also explore basic operations such as matrix addition and subtraction, which form the core of many linear algebra computations.

Scalars, Vectors, Matrices, and Tensors:

Let’s start by introducing the key elements of linear algebra:

  1. Scalars: Scalars are single numerical values that can be used to represent quantities such as temperature, time, or any other measurable quantity. They have magnitude but no direction. Examples of scalars include temperatures like 30 degrees Celsius or time intervals like 5 seconds.
  2. Vectors: Vectors are entities that consist of both magnitude and direction. In the context of linear algebra, vectors are represented as arrays of numbers. They can be visualized as arrows in space, where the length of the arrow represents the magnitude of the vector, and the direction represents its orientation. Vectors are widely used to represent physical quantities such as velocity, force, or position. For example, the velocity of a moving car or the position of an object in a three-dimensional space can be represented using vectors.
  3. Matrices: Matrices are rectangular arrays of numbers, consisting of rows and columns. They are a natural extension of vectors and enable us to organize and manipulate data in a structured manner. Matrices find extensive use in various computational tasks, including image processing, neural networks, and data analysis. For instance, an image can be represented as a matrix, with each element representing a pixel value.
  4. Tensors: Tensors generalize the concept of vectors and matrices to higher dimensions. While scalars are zero-dimensional, vectors are one-dimensional, matrices are two-dimensional, and tensors can have any number of dimensions. Tensors are often used to represent multi-dimensional data, such as colour images, volumetric data, or time series data.

Basic Linear Algebra Operations:

Now that we have a basic understanding of the building blocks, let’s delve into some of the fundamental operations in linear algebra:

  1. Matrix Multiplication: Matrix multiplication is a crucial operation that combines the rows of one matrix with the columns of another to produce a new matrix. It forms the foundation for various transformations and computations in linear algebra, such as solving systems of linear equations and performing geometric transformations.
  2. Addition and Subtraction: Matrices can be added or subtracted element-wise if they have the same dimensions. These operations are often used in tasks such as data manipulation, feature extraction, and data visualization. A short NumPy sketch of both operations follows this list.
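As a quick taste (a minimal sketch; the matrices A and B here are purely illustrative), Numpy expresses both operations directly: the @ operator performs matrix multiplication, while + adds element-wise.

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Matrix product: each entry combines a row of A with a column of B
A @ B

array([[19, 22],
       [43, 50]])

# Element-wise addition: requires identical shapes
A + B

array([[ 6,  8],
       [10, 12]])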

Jupyter Notebook

Introduction

This first chapter is quite light and concerns the basic elements used in linear algebra and their definitions. It also introduces important functions in Python/Numpy that we will use in this series. It will explain how to create and use vectors and matrices through examples.

Scalars, Vectors, Matrices and Tensors

Let’s start with some basic definitions.

Difference between a scalar, a vector, a matrix and a tensor

  • A scalar is a single number or a matrix with a single entry.
  • A vector is a 1-D array of numbers. Another way to think of vectors is as points in space, with each element giving the coordinate along a different axis.
  • A matrix is a 2-D array where each element is identified by two indices (row, then column).
  • A tensor is an n-dimensional array with n > 2.

# Import numpy
import numpy as np
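The notebook below creates vectors and matrices, but tensors never get their own example, so here is a minimal sketch (the array T is illustrative): nesting brackets one level deeper produces a 3-dimensional array, and ndim reports the number of dimensions.

# A 3-dimensional tensor: two 2x2 matrices stacked together
T = np.array([[[1, 2], [3, 4]],
              [[5, 6], [7, 8]]])

T.ndim

3

T.shape

(2, 2, 2)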

Notation conventions used throughout this series:

  1. Scalars are written in lowercase italics. For instance: n
  2. Vectors are written in lowercase, bold italics. For instance: x
  3. Matrices are written in uppercase, bold italics. For instance: X

Create a vector with Python and Numpy

Coding tip: Unlike the matrix() function, which necessarily creates 2-dimensional matrices, the array() function can create arrays of any number of dimensions, including 2-dimensional ones. The main advantage of matrix() is its convenience methods (conjugate transpose, inverse, ...), but recent Numpy releases discourage its use in favour of regular arrays, and we will use the array() function in this series.

We will start by creating a vector. This is just a 1-dimensional array:

x = np.array([1,2,3,4])
x

array([1, 2, 3, 4])

Create a 3×2 matrix with nested brackets.

The array() function can also create 2-dimensional arrays with nested brackets:

A = np.array([[1,2],[3,4],[5,6]])
A

array([[1, 2],
       [3, 4],
       [5, 6]])

Shape

The shape of an array (that is to say its dimensions) tells you the number of values for each dimension. For a 2-dimensional array, it will give you the number of rows and the number of columns. Let’s find the shape of our preceding 2-dimensional array A. Since A is a Numpy array (it was created with the array() function) you can access its shape with:

A.shape

(3, 2)

We can see that A has 3 rows and 2 columns.

Let’s see the shape of our first vector:

x.shape

(4,)

As expected, you can see that x has only one dimension. The number corresponds to the length of the array:

len(x)

4

Transposition

With transposition, you can convert a row vector to a column vector and vice versa.

The transpose A^T of the matrix A mirrors the matrix across its main diagonal: the rows of A become the columns of A^T. If the matrix is a square matrix (same number of rows and columns), its shape is unchanged by transposition; otherwise, an m×n matrix becomes an n×m matrix.

Create a matrix A and transpose it

A = np.array([[1, 2], [3, 4], [5, 6]])
A

array([[1, 2],
       [3, 4],
       [5, 6]])

A_t = A.T
A_t

array([[1, 3, 5],
       [2, 4, 6]])

Checking the dimensions

print(A.shape)
print(A_t.shape)

(3, 2)

(2, 3)

We can see that the number of columns becomes the number of rows with transposition and vice versa.
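One caveat worth noting (a small sketch, reusing the vector x created earlier): a 1-dimensional Numpy array has no row or column orientation, so transposing it changes nothing. To turn a row into a column with .T, the vector must be stored as a 2-dimensional array:

x.T            # a 1-D array is unchanged by transposition

array([1, 2, 3, 4])

x_row = np.array([[1, 2, 3, 4]])   # a 1x4 row vector (2-D array)
x_row.T                            # becomes a 4x1 column vector

array([[1],
       [2],
       [3],
       [4]])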

Addition

Matrices can be added if they have the same shape: A + B = C

Each cell of A is added to the corresponding cell of B: C_{i,j} = A_{i,j} + B_{i,j}

The shapes of A, B, and C are identical. Let’s check that in an example.

Create two matrices A and B and add them

With Numpy you can add matrices just as you would add vectors or scalars.

A = np.array([[1, 2], [3, 4], [5, 6]])
A

array([[1, 2],
       [3, 4],
       [5, 6]])

B = np.array([[2, 5], [7, 4], [4, 3]])
B

array([[2, 5],
       [7, 4],
       [4, 3]])

# Add matrices A and B
C = A + B
C

array([[ 3,  7],
       [10,  8],
       [ 9,  9]])

It is also possible to add a scalar to a matrix, which means adding the scalar to each cell of the matrix.

Add a scalar to a matrix

# Example: Add 4 to the matrix A
C = A + 4
C

array([[ 5,  6],
       [ 7,  8],
       [ 9, 10]])

Broadcasting

Numpy can handle operations on arrays of different shapes: the smaller array is extended to match the shape of the bigger one. In the preceding example, the scalar was effectively converted into an array of the same shape as A. The advantage is that this is done in C under the hood (like any vectorized operation in Numpy).

Here is another generic example:

    [[a, b],      [[x],
     [c, d],  +    [y],
     [e, f]]       [z]]

is equivalent to

    [[a, b],      [[x, x],
     [c, d],  +    [y, y],
     [e, f]]       [z, z]]

where the (3×1) matrix is converted to the right shape (3×2) by copying its single column. Numpy will do that automatically if the shapes are compatible.

Add two matrices of different shapes

A = np.array([[1, 2], [3, 4], [5, 6]])
A

array([[1, 2],
       [3, 4],
       [5, 6]])

B = np.array([[2], [4], [6]])
B

array([[2],
       [4],
       [6]])

# Broadcasting
C = A + B
C

array([[ 3,  4],
       [ 7,  8],
       [11, 12]])

Coding tip: Sometimes a row or column vector is not in the proper shape for broadcasting. We can use a trick (indexing with a numpy.newaxis object) to fix this.

x = np.arange(4)
x.shape

(4,)

# Adds a new dimension
x[:, np.newaxis]

array([[0],
       [1],
       [2],
       [3]])

A = np.random.randn(4,3)
A

array([[ 0.37898843,  0.42689999, -1.34790859],
       [ 1.59115004,  0.59600385,  0.25510038],
       [ 1.08659174, -1.6311077 , -0.78809825],
       [ 1.34425773,  0.07104051,  0.06759489]])

# This will throw an error: shape (4,) cannot broadcast with (4, 3)
try:
    A - x
except ValueError:
    print("Operation cannot be completed. Dimension mismatch")

Operation cannot be completed. Dimension mismatch


# But this works: subtract the column vector x (as a 4x1 array) from each column of A
A - x[:, np.newaxis]

array([[ 0.37898843,  0.42689999, -1.34790859],
       [ 0.59115004, -0.40399615, -0.74489962],
       [-0.91340826, -3.6311077 , -2.78809825],
       [-1.65574227, -2.92895949, -2.93240511]])

Conclusion:

Computational Linear Algebra provides a powerful toolkit for solving complex problems in computer science. By understanding the concepts of scalars, vectors, matrices, and tensors, as well as basic operations like matrix multiplication, we can harness the potential of linear algebra to analyze data, build algorithms, and develop innovative solutions.

In the upcoming blogs of this series, we will explore these concepts in greater detail, dive deeper into advanced linear algebra techniques, and discover how they can be applied to various domains within computer science. So, join me on this exciting journey, and let’s unlock the transformative power of Computational Linear Algebra together!

Stay tuned for the next blog, where we will delve into the intricacies of matrix multiplication and its applications.
