A Neural Network is Just a Glorified Math Equation
We’ll discuss loss, weights, nodes, activation layers, saving a model, and a lot more in terms so simple that you’ll see a neural network is just a glorified math equation.
There’s a lot to unpack here. Let’s go through them one by one.
Let’s assume that you spied on your “friend” at work and created a dataset of the number of cups of coffee he drinks in a day, along with a few other parameters you think might affect that number.
The above dataset covers five days, and you want to know how many cups of coffee he’ll drink on a day when he checks in at 10, closes two tickets, and attends three calls. You can make an educated guess based on the five data points you have collected. But imagine you had a year’s worth of data (get ready to call a lawyer) and wanted to predict the same thing.
Humans are good at many things, but sifting through piles of data points and spotting a pattern is not one of them. That’s why we have computers.
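As a sketch of what that pattern-finding looks like, here is a tiny least-squares linear model in Python. The numbers below are made up for illustration, since the actual table isn’t reproduced here; only the shape of the problem (three input columns, one output column, five days) comes from the article.

```python
import numpy as np

# Made-up stand-in for the five-day table (the real values aren't shown here).
# Columns: check-in hour, tickets closed, calls/meetings attended.
X = np.array([
    [9.0,  4, 1],
    [10.0, 2, 3],
    [8.0,  5, 0],
    [11.0, 1, 2],
    [9.0,  3, 4],
])

# Cups of coffee drunk on each of those days (also made up).
y = np.array([3.0, 4.0, 2.0, 5.0, 3.0])

# Fit a linear model y ~ X @ w + b by least squares -- the simplest
# "glorified math equation" a computer can learn from data like this.
A = np.hstack([X, np.ones((len(X), 1))])  # append a column of 1s for the bias b
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the day in question: checks in at 10, closes 2 tickets,
# attends 3 calls (the trailing 1 multiplies the bias term).
pred = float(np.array([10, 2, 3, 1]) @ w)
print(round(pred, 2))  # → 4.0 for this made-up data
```

A neural network does the same kind of thing, only with many more weights and a non-linear twist between them, which is the part the rest of this article unpacks.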
So in the above dataset, Check-in time, Tickets closed, and Calls/Meetings are your input columns, also called X. And Cups of coffee…