Deep Learning — Fundamental Building Blocks [MP Neuron vs Perceptron]

Why you need to know about Deep Learning:

Deep Learning is a subset of Machine Learning that achieves great power and flexibility by learning to represent the world as a nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones.

A few important things before we start cooking:

(i) The major difference between Deep Learning and Machine Learning techniques is the problem-solving approach. Deep Learning techniques tend to solve the problem end to end, whereas Machine Learning techniques need the problem statement to be broken down into different parts that are solved first, with their results combined at the final stage.

(ii) Prerequisites: there are a few important points you must know before learning about DL and ML, namely:

(a) What is data? You must know what type of data you have. For example:

  1. Structured Data
  2. Text Data
  3. Image Data
  4. Video Data
  5. Audio Data

But all of these data types are ultimately encoded as numbers, and the result can be as simple as true or false (a Boolean value).
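
To make this concrete, here is a minimal sketch (in Python, with made-up values) of how an image and a piece of text end up as plain numbers:

```python
# A minimal sketch of how different data types end up as numbers.
# The values below are made up purely for illustration.

# A tiny 3x3 grayscale "image": each pixel is an intensity from 0 to 255.
image = [
    [0, 128, 255],
    [34, 90, 180],
    [255, 255, 0],
]

# A piece of text: each character can be mapped to its integer code.
text = "Mumbai"
encoded_text = [ord(ch) for ch in text]   # [77, 117, 109, 98, 97, 105]

print(encoded_text)
```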

Where can you get this data? There are many sources available, for example Google AI, data.gov.in, etc.

(b) What are the tasks or objectives of the research/problem?

This is the second part. In this task part, you should know whether the collected data (encoded as numbers) falls under the supervised or the unsupervised category.

So what is supervised? -> Such data comes under classification or regression tasks.

Then what is classification? -> The input variables are mapped to discrete classes rather than a numeric prediction.

Then what is regression? -> The input variables are mapped to a continuous value, i.e. a numeric prediction.

So what is unsupervised? -> Such data has no labels and comes under tasks like clustering.
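
As a rough illustration, here is a small sketch (with hypothetical values) of what the data looks like for each of these task types:

```python
# A minimal sketch of how the same kind of inputs look under different
# task types. All numbers and labels are hypothetical.

# Supervised, classification: each input is paired with a discrete label.
x_classification = [[5.1, 3.5], [6.2, 2.9], [4.7, 3.2]]
y_classification = ["spam", "not spam", "spam"]          # discrete classes

# Supervised, regression: each input is paired with a continuous value.
x_regression = [[1200], [850], [2000]]                    # e.g. house size
y_regression = [250000.0, 180000.0, 410000.0]             # e.g. price

# Unsupervised, clustering: only inputs, no labels; the algorithm must
# group similar points on its own.
x_unlabeled = [[5.1, 3.5], [6.2, 2.9], [4.7, 3.2]]
```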

Question:
Transliteration is the task of converting a name written using the script of one language (say Hindi) to the equivalent name written using the script of another language (for example, transliterating "Mumbai" written in Devanagari/Hindi script to English). Suppose you wanted to use ML for this task. What kind of data would you need: supervised or unsupervised?

(c) Models

[Figure: Models for approximation (source: One Fourth Labs)]

When given a real-world data set, a Machine Learning practitioner will not know the true relationship between the input (x) and the output (y); instead, they propose a set of functions that they think will accurately approximate the relationship between x and y.
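
For example, one commonly proposed family of functions is a straight line; the sketch below (with hypothetical parameter values) shows two different members of that family:

```python
# A minimal sketch: the practitioner does not know the true relationship
# between x and y, so they propose a family of functions and later pick
# the member (the values of a and b) that fits the data best.

def linear_model(x, a, b):
    """One candidate family: y is approximated as a straight line."""
    return a * x + b

# Two different members of the same family (different parameter values).
print(linear_model(2.0, a=1.5, b=0.0))   # 3.0
print(linear_model(2.0, a=0.5, b=1.0))   # 2.0
```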

(d) Loss functions: these are mainly used to determine how much better one model is than another.

[Figure: Models (source: One Fourth Labs)]

We need a quantifiable measure so that we can compare the accuracy of various models and different sets of function parameters (coefficients).

In short, the loss function tells us which function (model) is better suited to provide the solution.

[Figure: Loss function (source: One Fourth Labs)]

Using this formula, we can determine how well a given model fits the data for your problem.
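
The exact formula is on the course slide above; as an illustration, mean squared error is one common choice of loss function, sketched below with hypothetical numbers:

```python
# A minimal sketch of a loss function. Mean squared error is one common
# choice and is used here only for illustration.

def mean_squared_error(y_true, y_pred):
    """Average of squared differences between predictions and targets."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 7.0]
good_model = [2.9, 5.1, 6.8]
bad_model = [1.0, 9.0, 2.0]

print(mean_squared_error(y_true, good_model))  # small loss -> better fit
print(mean_squared_error(y_true, bad_model))   # large loss -> worse fit
```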

(e) Some algorithms: these focus on identifying the parameters of the model.

[Figure: Learning algorithm (source: One Fourth Labs)]

In the learning algorithm above, a, b and c are the parameters.

Why do we need to identify these parameters? -> In order to minimize the loss/error function.

Here we let the machine compute the parameters in a brute-force manner and arrive at a solution. But real-life data will have a large number of parameters to estimate, almost certainly more than three. Running a brute-force search is not an efficient way to solve this problem, as it is a resource-intensive task. We need an efficient way of computing the parameters.
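
To see why, here is a rough sketch of the brute-force idea for a simple linear model with just two parameters (all values are hypothetical):

```python
# A minimal sketch of the brute-force idea: try every combination of
# parameter values on a grid and keep the one with the lowest loss.
# With only two parameters this is feasible; with many parameters the
# number of combinations explodes.

import itertools

def loss(a, b, data):
    return sum((y - (a * x + b)) ** 2 for x, y in data)

data = [(1, 3.1), (2, 5.0), (3, 6.9)]   # hypothetical (x, y) pairs

candidates = [i * 0.5 for i in range(-10, 11)]           # -5.0 ... 5.0
best = min(itertools.product(candidates, candidates),
           key=lambda ab: loss(ab[0], ab[1], data))
print(best)   # the (a, b) pair with the lowest loss on the grid
```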

[Figure: Learning algorithm (source: One Fourth Labs)]

This optimization problem can be solved using various solvers such as gradient descent, Adagrad, RMSProp, and Adam.
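
As a minimal sketch, here is plain gradient descent applied to the same hypothetical linear model; Adagrad, RMSProp and Adam refine this basic update with adaptive step sizes:

```python
# A minimal sketch of gradient descent for the linear model y = a*x + b.
# The data and hyperparameters are hypothetical.

data = [(1, 3.1), (2, 5.0), (3, 6.9)]   # hypothetical (x, y) pairs
a, b = 0.0, 0.0                          # start from an arbitrary guess
learning_rate = 0.01

for _ in range(5000):
    # Gradients of the squared-error loss with respect to a and b.
    grad_a = sum(2 * (a * x + b - y) * x for x, y in data)
    grad_b = sum(2 * (a * x + b - y) for x, y in data)
    # Move the parameters a small step against the gradient.
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b

print(a, b)   # should end up close to a = 2, b = 1 for this data
```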

(f) The evaluation process

This is the final step: the evaluation process. Its purpose is to measure how well the model performs on the data by computing its accuracy.

[Figure: Evaluation (source: One Fourth Labs)]

Note: the accuracy of the model is calculated as how many predictions the model gets right out of all the predictions it made.
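
As a small illustration with hypothetical labels:

```python
# A minimal sketch of the accuracy calculation described above:
# correct predictions divided by total predictions.

predictions = [1, 0, 1, 1, 0, 1]
actuals     = [1, 0, 0, 1, 0, 1]

correct = sum(p == a for p, a in zip(predictions, actuals))
accuracy = correct / len(predictions)
print(accuracy)   # 5 correct out of 6 -> about 0.83
```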

Then what are test data and training data?

It is a general practice in Machine Learning to split your data into training and testing sets. The evaluation of the model should always be performed on the testing data. You have defined the model, defined a loss function and trained the model to get the right set of parameters that minimize the loss function using the training data. For you to know if your model is able to generalize well for the given problem, you need to perform the evaluation task on something other than the training data.
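
As a minimal sketch, assuming scikit-learn is available, such a split could look like this:

```python
# A minimal sketch of a train/test split. scikit-learn's train_test_split
# is one common way to do this; the feature values are hypothetical.

from sklearn.model_selection import train_test_split

x = [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]
y = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]

# Keep 20% of the data aside for evaluation; never train on it.
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42
)
print(len(x_train), len(x_test))   # 8 2
```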

So what is your conclusion?

First, you need data for your problem; define it and classify it, then find a model for that classified data. Next, choose the loss function used to judge your task, apply one of the learning algorithms, and finally evaluate the result.

Now coming to the main theme: the MP Neuron (McCulloch-Pitts Neuron) vs the Perceptron.

Namely, it is called a Neuron, and it has the following form:

[Figure: artificial neuron (source: One Fourth Labs)]

The artificial neuron was modeled based on inspiration from biological neurons.

In this artificial neuron, you can see x1, x2 and x3: these are the inputs; w denotes the weights, f is the function, and y is the output or result.
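
As a minimal sketch (with hypothetical weights, and a simple threshold function standing in for f), the neuron's computation looks like this:

```python
# A minimal sketch of the artificial neuron described above: inputs x1..x3
# are combined with weights w1..w3, a function f is applied to the weighted
# sum, and y is the output. All values are hypothetical.

def f(z, threshold=1.0):
    """A simple thresholding function: fire (1) if the sum is large enough."""
    return 1 if z >= threshold else 0

x = [1.0, 0.0, 1.0]        # inputs x1, x2, x3
w = [0.4, 0.9, 0.7]        # weights w1, w2, w3

z = sum(wi * xi for wi, xi in zip(w, x))   # weighted sum of the inputs
y = f(z)                                   # output of the neuron
print(z, y)                                # 1.1 1
```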

Here too, we are going to use the same approach to find the solution to your problem; refer to the prerequisites (a), (b), (c), (d), (e) and (f) above.

[Figure (source: One Fourth Labs)]

Summary of both approaches:

[Figure: summary of MP Neuron vs Perceptron (source: One Fourth Labs)]
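
As a rough sketch of the usual contrast between the two models (boolean inputs and a plain threshold for the MP Neuron; real inputs with learnable weights and a bias for the Perceptron), with hypothetical parameter values:

```python
# A minimal sketch contrasting the two models. The MP Neuron takes only
# boolean inputs, has no weights, and simply compares the sum of inputs
# to a threshold; the Perceptron accepts real inputs and uses learnable
# weights and a bias. Parameter values here are hypothetical.

def mp_neuron(x, threshold):
    """x is a list of 0/1 inputs; fires if enough inputs are on."""
    return 1 if sum(x) >= threshold else 0

def perceptron(x, w, b):
    """x is a list of real inputs; fires if the weighted sum plus bias is >= 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

print(mp_neuron([1, 0, 1], threshold=2))               # 1
print(perceptron([0.5, 1.2], w=[0.8, -0.3], b=-0.1))   # 0.04 - 0.1 < 0 -> 0
```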

Disclaimer: The contents and topics discussed in this post are based on the Deep Learning course offered by One Fourth Labs, and this is my first attempt. Thank you.
