Parameterized Learning

Ramji Balasubramanian
Published in Analytics Vidhya · Dec 18, 2020

Parameterized learning uses a model that can summarize data with a set of parameters of fixed size. As Russell and Norvig (2009) put it: "No matter how much data you throw at a parametric model, it won't change its mind about how many parameters it needs."

A parameterized model includes four major components: data, a scoring function, a loss function, and weights and biases. We are going to look at each component with a practical example.

Data

Data is the input component we use to train our model. It includes the input examples as well as their corresponding class labels (i.e., supervised learning data).

Our dataset includes three classes: cats, dogs, and pandas (1,000 images each). In this article we are not going to train a model on these images end to end; instead, we will walk through the main components of a parameterized ML model.

Cat, Dog and Panda
# import the necessary packages
import numpy as np
import cv2
# initialize the class labels and set the seed of the pseudorandom
# number generator so we can reproduce our results
labels = ["dog", "cat", "panda"]
np.random.seed(2020)
# load our example image, resize it, and then flatten it into our
# "feature vector" representation
orig = cv2.imread("beagle.png")
image = cv2.resize(orig, (32, 32)).flatten()

The above code snippet reads the image, resizes it to 32×32 pixels, and flattens the multi-dimensional array into a one-dimensional feature vector. In general, an image of shape (height, width, depth) is flattened into a vector of length height × width × depth; here that is 32 × 32 × 3 = 3072.
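As a quick sanity check, here is a minimal sketch (using a synthetic all-zeros image instead of a file on disk, and the np alias imported above) that confirms the flattened shape:

# synthetic 32x32 BGR image standing in for a resized photo
img = np.zeros((32, 32, 3), dtype="uint8")
flat = img.flatten()
print(flat.shape)  # (3072,) == 32 * 32 * 3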

Scoring

This component takes our data as input and maps it to class labels. It basically takes an input x, applies a defined function f(x), and returns a score for each class.

scores = W · x + b (weights times the input, plus the biases)

# randomly initialize our weight matrix and bias vector — in a
# *real* training and classification task, these parameters would
# be *learned* by our model, but for the sake of this example,
# let's use random values
W = np.random.randn(3, 3072)
b = np.random.randn(3)
# compute the output scores by taking the dot product between the
# weight matrix and image pixels, followed by adding in the bias
scores = W.dot(image) + b

Loss Function

A loss function quantifies how well our predicted class labels agree with the true labels: the lower the loss, the better the predictions. Common examples for classification are the multi-class hinge (SVM) loss and cross-entropy loss. (Note that ReLU, sigmoid, and tanh are activation functions, not loss functions.)
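As an illustration, here is a minimal sketch of the multi-class hinge (SVM) loss for a single example; the function name and the margin value delta=1.0 are my own choices for this sketch, and np is the NumPy alias imported earlier:

# multi-class hinge (SVM) loss for one example's score vector
def hinge_loss(scores, true_idx, delta=1.0):
    # margin of every class score relative to the correct class's score
    margins = np.maximum(0, scores - scores[true_idx] + delta)
    margins[true_idx] = 0  # the correct class contributes no loss
    return np.sum(margins)

# e.g., with "dog" (index 0) as the true label for the scores computed
# above, hinge_loss(scores, 0) returns 0.0: the dog score already wins
# by more than the required margin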

Weights and Biases

The weight matrix, denoted W, holds the value assigned to each input feature for each class, and the bias vector, denoted b, shifts the scores independently of the input. These two parameters are what the model actually learns: they can be trained and optimized based on the output of the loss function.
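As a rough sketch of what such an update could look like with vanilla gradient descent (the gradients dW and db are hypothetical placeholders here; in practice they would be computed by differentiating the loss):

# one vanilla gradient descent step: nudge the parameters a small
# step against the gradient of the loss
def sgd_step(W, b, dW, db, lr=0.01):
    return W - lr * dW, b - lr * db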

Once we have all these components set and ready, the model can additionally be tuned with a regularization parameter, which discourages overly large weights and helps prevent overfitting.
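For instance, a minimal sketch of L2 (weight decay) regularization added on top of the data loss; the names data_loss and lam are illustrative, not from the original code:

def regularized_loss(data_loss, W, lam=0.01):
    # add an L2 penalty that discourages large weight values
    return data_loss + lam * np.sum(W ** 2)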

# loop over the scores + labels and display them
for (label, score) in zip(labels, scores):
    print("[INFO] {}: {:.2f}".format(label, score))
# draw the label with the highest score on the image as our
# prediction
cv2.putText(orig, "Label: {}".format(labels[np.argmax(scores)]),
    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
# display our input image and wait for a keypress
cv2.imshow("Image", orig)
cv2.waitKey(0)

The randomly assigned values of W and b happened to classify this picture correctly (we fixed the random seed above, so the result is reproducible). In a real task, however, training would iteratively update the weights and biases based on the loss until the predictions are consistently accurate.

[INFO] dog: 7963.93
[INFO] cat: -2930.99
[INFO] panda: 3362.47

Reference:

Deep Learning for Computer Vision with Python by Adrian Rosebrock (Starter Bundle)

Artificial Intelligence: A Modern Approach (3rd ed.) by Stuart Russell and Peter Norvig (2009)
