Python Basics with Numpy (Theory)

Sneja shah
4 min read · Feb 4, 2020


Building basic functions with numpy

Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.

sigmoid function, np.exp()

Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().

Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.

Reminder: sigmoid(x) = 1 / (1 + e^(−x)) is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
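
A minimal sketch follows, with a numpy version alongside to show why np.exp() is preferable (the names basic_sigmoid and sigmoid are illustrative choices): math.exp only accepts a single real number, while np.exp also works element-wise on numpy arrays.

```python
import math
import numpy as np

def basic_sigmoid(x):
    """Sigmoid of a real number x, using math.exp (scalars only)."""
    return 1 / (1 + math.exp(-x))

def sigmoid(x):
    """Sigmoid of a scalar or numpy array x, using np.exp (element-wise)."""
    return 1 / (1 + np.exp(-x))

print(basic_sigmoid(3))              # ~0.9525741268224334
print(sigmoid(np.array([1, 2, 3])))  # ~[0.73105858 0.88079708 0.95257413]
```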

Sigmoid gradient

As you’ve seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let’s code your first gradient function.

Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is:

sigmoid_derivative(x) = σ′(x) = σ(x)(1 − σ(x))

You often code this function in two steps:

  1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
  2. Compute σ′(x) = s(1 − s)
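
A minimal sketch of these two steps (computing the sigmoid inline rather than calling a separate helper):

```python
import numpy as np

def sigmoid_grad(x):
    """Gradient of the sigmoid with respect to x, computed element-wise."""
    s = 1 / (1 + np.exp(-x))  # step 1: s = sigmoid(x)
    return s * (1 - s)        # step 2: sigma'(x) = s * (1 - s)

print(sigmoid_grad(np.array([1, 2, 3])))  # ~[0.19661193 0.10499359 0.04517666]
```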

Reshaping arrays

Two common numpy functions used in deep learning are np.shape and np.reshape().

  • X.shape is used to get the shape (dimension) of a matrix/vector X.
  • X.reshape(…) is used to reshape X into some other dimension.

For example, in computer science, an image is represented by a 3D array of shape (length, height, depth = 3). However, when you read an image as the input of an algorithm you convert it to a vector of shape (length ∗ height ∗ 3, 1). In other words, you “unroll”, or reshape, the 3D array into a 1D vector.
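
A minimal sketch of this unrolling (the function name image2vector and the 3 × 2 × 3 dummy shape are illustrative, not prescribed by the text):

```python
import numpy as np

def image2vector(image):
    """Reshape a (length, height, depth) image array into a column vector."""
    length, height, depth = image.shape
    return image.reshape(length * height * depth, 1)

image = np.zeros((3, 2, 3))        # dummy "image" of shape (3, 2, 3)
print(image.shape)                 # (3, 2, 3)
print(image2vector(image).shape)   # (18, 1)
```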

Normalizing rows

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to x / ∥x∥ (dividing each row vector of x by its norm).

For example, if

x = [[0, 3, 4],
     [2, 6, 4]]

then

∥x∥ = np.linalg.norm(x, axis=1, keepdims=True) = [[5], [√56]]

and

x_normalized = x / ∥x∥ = [[0, 3/5, 4/5],
                          [2/√56, 6/√56, 4/√56]]

Note that you can divide matrices of different sizes and it works fine: this is called broadcasting, and you’ll learn more about it in the next section.
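
A minimal sketch of row normalization, assuming a function name normalizeRows (illustrative) and relying on broadcasting for the final division:

```python
import numpy as np

def normalizeRows(x):
    """Divide each row of x by its L2 norm."""
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)  # shape (m, 1)
    return x / x_norm                                   # broadcasting: (m, n) / (m, 1)

x = np.array([[0.0, 3.0, 4.0],
              [2.0, 6.0, 4.0]])
print(normalizeRows(x))
# [[0.         0.6        0.8       ]
#  [0.26726124 0.80178373 0.53452248]]
```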

Broadcasting and the softmax function

A very important concept to understand in numpy is “broadcasting”. It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.

Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.

Instructions:

  • for x ∈ ℝ^(1×n): softmax(x) = softmax([x_1 x_2 … x_n]) = [e^(x_1)/∑_j e^(x_j), e^(x_2)/∑_j e^(x_j), …, e^(x_n)/∑_j e^(x_j)]
  • for a matrix x ∈ ℝ^(m×n), x_ij maps to the element in the i-th row and j-th column of x, so the (i, j) entry of softmax(x) is e^(x_ij)/∑_j e^(x_ij). In other words, softmax is applied to each row of x independently:
  • softmax(x) = [softmax(first row of x); softmax(second row of x); …; softmax(last row of x)]
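
A minimal sketch of a row-wise softmax under these definitions, using np.exp and np.sum with axis=1 and keepdims=True so broadcasting handles the division:

```python
import numpy as np

def softmax(x):
    """Row-wise softmax of a numpy matrix x of shape (m, n)."""
    x_exp = np.exp(x)                             # element-wise exponential
    x_sum = np.sum(x_exp, axis=1, keepdims=True)  # row sums, shape (m, 1)
    return x_exp / x_sum                          # broadcasting: (m, n) / (m, 1)

x = np.array([[9, 2, 5, 0, 0],
              [7, 5, 0, 0, 0]])
print(softmax(x))  # each row of the result sums to 1
```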

Note

Note that later in the course, you’ll see “m” used to represent the “number of training examples”, and each training example is in its own column of the matrix.
Also, each feature will be in its own row (each row has data for the same feature).
Softmax should be performed for all features of each training example, so softmax would be performed on the columns (once we switch to that representation later in this course).

However, in this coding practice, we’re just focusing on getting familiar with Python, so we’re using the common math notation m×n, where m is the number of rows and n is the number of columns.

Vectorization

In deep learning, you deal with very large datasets. Hence, a computationally inefficient function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, compare loop-based and vectorized implementations of the dot product (the same idea applies to the outer and element-wise products), sketched below.
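
A minimal sketch of the dot-product comparison (the vector length of one million is arbitrary; exact timings depend on your machine, but the vectorized version is typically orders of magnitude faster):

```python
import time
import numpy as np

x1 = np.random.rand(1_000_000)
x2 = np.random.rand(1_000_000)

# Classic (loop-based) dot product
tic = time.time()
dot = 0.0
for i in range(len(x1)):
    dot += x1[i] * x2[i]
print(f"loop dot = {dot:.2f}, time = {1000 * (time.time() - tic):.1f} ms")

# Vectorized dot product
tic = time.time()
dot_vec = np.dot(x1, x2)
print(f"np.dot   = {dot_vec:.2f}, time = {1000 * (time.time() - tic):.1f} ms")
```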

Implement the L1 and L2 loss functions

Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.

Reminder:

  • The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions (ŷ) are from the true values (y). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
  • L1 loss is defined as:
  • L1(ŷ, y) = ∑_{i=0}^{m} |y^(i) − ŷ^(i)|
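
A minimal sketch of both losses (the L2 definition, the sum of squared differences, is assumed here since this post only spells out L1):

```python
import numpy as np

def L1(yhat, y):
    """L1 loss: sum of absolute differences between predictions and labels."""
    return np.sum(np.abs(y - yhat))

def L2(yhat, y):
    """L2 loss, assumed here to be the sum of squared differences."""
    return np.sum((y - yhat) ** 2)

yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y    = np.array([1, 0, 0, 1, 1])
print(L1(yhat, y))  # ~1.1
print(L2(yhat, y))  # ~0.43
```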

Practice examples for the above functions can be found in this notebook: http://localhost:8888/notebooks/Neural%20Network%20practice.ipynb
