Discussing a multiple regression model

In this part we shall consider a curious example, I must say. As our reference for this case study says [7, p. 74]:

“If we were the only ones in the world with access to this info, we could be the best Boston real-estate investors in 1978! Unless, somehow, someone were able to build an even more accurate estimate . . .” [highlight added]

Only if… 🤣😂😁 Nice movie, though, with the lovely Jennifer Love Hewitt!

This is the Boston Housing problem. Essentially, this problem is widely used as a benchmark for machine learning, often in competitions.

Basically, we want:

“to estimate the median value of the house prices in a neighborhood (MEDV) given all the input features from the neighborhood.” [7, p. 61]

This problem differs from the previous one only in that we have several inputs instead of just one. It is also closer to reality, since most problems, at least the useful ones, demand more than humans can do with simple models or in their heads; and machine learning is good at that! As long as you have the computing power, and the time to wait, computers solve it with one hand tied behind their backs, if they have any! 😂😂
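Just to make the difference concrete, here is a minimal sketch in TensorFlow.js of a multiple regression with 13 inputs, as in the Boston Housing data. The numbers below are made-up placeholders and the names are mine, not from our reference, so take it as an illustration, not as the book's official code.

```typescript
import * as tf from '@tensorflow/tfjs';

// Multiple regression: 13 neighborhood features in, one MEDV estimate out.
// The two rows below are tiny made-up placeholders, NOT the real dataset;
// in practice you would load and normalize all the Boston Housing rows.
const xs = tf.tensor2d(
  [
    [0.02, 18, 2.3, 0, 0.54, 6.5, 65, 4.1, 1, 296, 15.3, 396.9, 4.98],
    [0.03, 0, 7.1, 0, 0.47, 6.4, 78, 4.9, 2, 242, 17.8, 396.9, 9.14],
  ],
  [2, 13]
);
const ys = tf.tensor2d([[24.0], [21.6]], [2, 1]);

// A single dense unit with 13 inputs IS a multiple linear regression:
// MEDV ≈ w1*x1 + w2*x2 + ... + w13*x13 + b.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [13] }));
model.compile({ optimizer: tf.train.adam(0.1), loss: 'meanSquaredError' });

async function run(): Promise<void> {
  await model.fit(xs, ys, { epochs: 200 });
  // Estimate MEDV for the same (made-up) neighborhoods.
  (model.predict(xs) as tf.Tensor).print();
}

run();
```

The only structural change compared with the single-input case is `inputShape: [13]`; everything else is the same workflow.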

Computers doing a calculation that would be mentally heavy for a human

One interesting reflection we should make is about interpreting the models' inner workings, beyond just prediction.

Prediction is the process by which we want to know what comes next in time in a system (e.g., the stock market or the demand on a company).

Not sure, but I have heard that as Chinese wisdom

“Is there any way to peek inside the model to see how it understands the data?…. For the general case of large deep networks, model understanding — also known as model interpretability — is still an area of active research, filling many posters and talks at academic conferences.” [7, p. 74][highlight added]

“black boxes yes help us in decision-making but the problem is that we do not understand how the computer or the algorithm arrived at such a result? the human must first understand to make decisions but with black box models we do not understand what is going on inside” Translated from French using Google Translate; open public discussion of my letter on Academia.edu, Innovating with biomathematics [highlight added]

Coming back to our discussion.

As our reference says [7, p. 77]:

“Because of the way our minds like to tell stories, it is common to take this [thinking the model understands reality] too far and imagine these numbers say more than the evidence supports.”
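For the tiny linear model above, “peeking inside” is actually feasible: the whole model is just one coefficient per feature plus a bias. Below is a minimal sketch of reading those numbers in TensorFlow.js; again, this is my illustration, assuming a single dense layer, not code from the reference.

```typescript
import * as tf from '@tensorflow/tfjs';

// A one-layer linear model is fully inspectable: its inner workings are
// just 13 coefficients (one per feature) and one bias.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [13] }));

// getWeights() returns [kernel, bias]; after training, these are exactly
// the numbers people are tempted to read stories into.
const [kernel, bias] = model.getWeights();
console.log('coefficient per feature:', Array.from(kernel.dataSync()));
console.log('bias (intercept):', bias.dataSync()[0]);
```

For large deep networks, as the quote from [7] says, no such simple readout exists; interpretability there is still an active research area.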

During our live session with Prof. Kasabov, I asked him about brain-inspired AI [see our ebook for a section on that!], and whether we could avoid what we have already seen, say, with Google, which learnt prejudice (e.g., Google Images). His answer was that we could “edit” the weights. My only concern is that concepts such as prejudice may be embedded, if I may say so, in the weights, making them hard to pinpoint and eliminate; even in humans we cannot locate several abstract concepts, such as a spot in the brain for “racism”, or even more concrete ones such as “hunger”. A considerable amount of human abstraction, say our System 1 and System 2 [7], is “abstract”, located “somewhere in the brain”; if brain-inspired AI replicates the brain, at least from an information-encoding perspective, it may be hard to pinpoint and delete the undesirable side effects of learning from data “contaminated by humans”.
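On the “edit the weights” idea: mechanically it is easy, at least in TensorFlow.js, as the toy sketch below shows (my own hypothetical example with a single dense layer, not Prof. Kasabov's method). The hard part, which is exactly my concern, is knowing which weights to edit in a large network.

```typescript
import * as tf from '@tensorflow/tfjs';

// Mechanically "editing" weights: read them, change them, write them back.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [13] }));

const [kernel, bias] = model.getWeights();
const edited = kernel.arraySync() as number[][]; // kernel shape is [13, 1]
edited[0][0] = 0; // e.g., silence the contribution of feature 0 entirely
model.setWeights([tf.tensor2d(edited, [13, 1]), bias]);

// In a deep network the same calls work, but nothing tells you which of
// the millions of weights encode the concept you want to remove.
```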

What is interesting about this case, regarding the idea that with this model we would have been rich, is that machine learning as we see it today was only being born: backpropagation, used for training, was developed around the 1980s, and deep learning only took off around 2012. Thus, we could never have done that in the past, unless someone from the future had sent us either a machine learning tool, or McFly!

Photo by MARC RANGEL on Unsplash

You can follow all the simulations on our sandbox.

P.S. With our sandbox, please be patient. I am using a free server from Heroku, which means the first access is slow; after that, it should work properly.

Author’s note

Sample from our ebook

Part 3: Scientific Angular with TensorFlow.js

Computational Thinking: How computers think, decide and learn, when human limits start and computers champ, vol. 1
