Artificial Intelligence in a function

Giacomo Veneri
Published in digitalindustry
3 min read · Sep 30, 2018

Human intelligence (HI) is a very complex thing. After 150 years of neuroscience and 50 years of (disappointing) Artificial Intelligence (AI), the issue remains the same:

“When and how can we build a computer able to do the same things as a human?”

We know the famous Turing Test

“The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.”

(wikipedia)

but several chatbots have passed the Turing Test, and we cannot tag those experiments as “Artificial Intelligence”.

So what is “Intelligence”? What steps do we need to take before we can tag a machine as “smart”?

Let me propose this typical software stack for Human Intelligence (I know! It is an oversimplification).

We can model AI using functional programming (a lambda architecture). See also Future of Deep Learning.

Output = f ( Input )
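The functional view above can be sketched in Python; the mapping chosen here (classifying the sign of a number) is purely illustrative:

```python
# A minimal sketch of "intelligence as a function": a pure mapping
# from Input to Output, with no hidden state.
def f(inp):
    # a trivial example mapping: classify the sign of a number
    return "positive" if inp > 0 else "non-positive"

output = f(3)  # → "positive"
```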

Memory: the capability of a computer to maintain data is not under discussion

Input = f(Input, t) = f(Input, t+1) = f(Input, t+2)
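The time invariance of memory can be sketched as a lookup that ignores the time step; the dict-based store and the function names are hypothetical:

```python
# Memory as a time-invariant lookup: once stored, a key returns the
# same value at any later time step t.
store = {}

def remember(key, value):
    store[key] = value

def recall(key, t):
    # t is ignored: stored data is stable across time steps
    return store.get(key)

remember("x", 42)
```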

Control: the capability to move a robot’s limbs is well known

[position_t0, position_t1, …] = f (“action”, {t0…t1})
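A control function in this sense maps an action and a time interval to a trajectory of positions. A minimal sketch, assuming a linear motion model (the action names and the constant speed are illustrative):

```python
# Control: f("action", {t0…t1}) → a list of positions over the interval.
def control(action, t0, t1, speed=1.0):
    if action == "forward":
        return [speed * t for t in range(t0, t1 + 1)]
    # unknown actions hold the limb still
    return [0.0 for _ in range(t0, t1 + 1)]

positions = control("forward", 0, 3)  # → [0.0, 1.0, 2.0, 3.0]
```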

Perception: the capability to extract features from the environment

Feature1 = f( {Feature1 , Feature2 , Feature3 } )
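Perception reduces a set of raw features to a salient one. A sketch, where selecting the feature with the largest magnitude is an assumed (illustrative) notion of salience:

```python
# Perception: map a set of features to one extracted feature.
def perceive(features):
    # illustrative rule: the most salient feature is the largest in magnitude
    return max(features, key=abs)

feature = perceive({0.1, -3.0, 0.7})  # → -3.0
```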

Learning: the capability to reapply the same function from a finite set of inputs to a finite set of outputs

{…}n = f({…}m) : n , m ≠ ∞
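One minimal instance of learning a function from a finite set of input/output pairs is a nearest-neighbour lookup (the example data and names are hypothetical):

```python
# Learning: build f from a finite set of (input, output) examples,
# then reapply it to new inputs.
def learn(examples):
    def f(x):
        # answer with the output of the closest known input
        nearest = min(examples, key=lambda pair: abs(pair[0] - x))
        return nearest[1]
    return f

f = learn([(0, "low"), (10, "high")])
```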

Abstraction: the capability of defining a set of functions as specializations of a given one

F = g(f) : F = {f1, f2, …, fn}
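Here g acts as a generator of specialized functions: each parameter choice yields one member f_i of the family F. A sketch (multiplication by n is an illustrative specialization):

```python
# Abstraction: g produces a family F = {f1, f2, …, fn} of specialized
# functions from one general template.
def g(n):
    def f(x):
        return x * n
    return f

F = [g(n) for n in (1, 2, 3)]  # F = {f1, f2, f3}
```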

Generalization: the capability of building a new function from another function

f1 = g(f2)
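One way to read f1 = g(f2) is that g widens the domain where f2 applies. A sketch under that assumption (the tolerance-based relaxation is purely illustrative):

```python
# Generalization: derive a broader function f1 from a narrower one f2.
def f2(x):
    return x == 5  # very specific predicate

def g(f):
    # illustrative relaxation: also accept inputs within a tolerance of 1
    return lambda x: f(x) or abs(x - 5) <= 1

f1 = g(f2)
```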

Imagination: the capability to build a chain of functions to produce the given output

g (F) : {…}∞ = f1 ( f2 ( … fn ( Input ) ) )
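The chain f1(f2(…fn(Input)…)) is plain function composition, which can be sketched with `functools.reduce` (the three functions in the chain are arbitrary examples):

```python
# Imagination: compose a chain of functions f1 ∘ f2 ∘ … ∘ fn.
from functools import reduce

def compose(*fs):
    # fold the functions into a single chained callable
    return reduce(lambda outer, inner: lambda x: outer(inner(x)), fs)

chain = compose(lambda x: x + 1,   # f1
                lambda x: x * 2,   # f2
                lambda x: x - 3)   # fn

result = chain(5)  # f1(f2(fn(5))) = ((5 - 3) * 2) + 1 = 5
```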

Awareness: the capability of understanding what the function f means

{“tag 1”, “tag 2”, …, “tag n”} = g(f)

Consciousness: the capability of perceiving the actor who applies the function f

{“actor 1”, “actor 2”} = g(f , f)

Ethics: the capability of perceiving the good and bad of applying the function f

{“good”, “bad”, “neutral”} = g(f)

Given this stupid oversimplification, my question remains the same: “Where are we in building AI?” Still in the “blue” zone?

Very far from “Artificial Intelligence”?

Who is working on the other layers?

I know bibliography references are missing and this is an oversimplification.
