Did you know that 40% of European start-ups that claim to use AI actually use machine learning instead?
Why? According to Forbes, startups labeled as AI companies attract 15% to 50% more funding than other technology firms.
So what are the differences between AI and machine learning?
Nowadays we have three kinds of computer learning:
- Artificial Intelligence (1955)
- Machine Learning (1990)
- Deep Learning (2010)
So what are the differences between the methods mentioned above?
Artificial Intelligence: rules-based programs that display rudimentary intelligence in limited contexts.
Machine learning: enables programs to learn through training, instead of being programmed with rules.
Deep Learning: emulates the way animals’ brains learn subtle tasks; it models the brain, not the world. In other words, an artificial neural network allows a program to process input data and update its own algorithm, so it becomes better at recognizing patterns in datasets as complex as images, videos, etc.
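The gap between the first two definitions can be made concrete in a few lines of code. Below is a minimal sketch (pure Python; the spam-filter scenario, the function names, and the toy data are all invented for illustration) contrasting a rules-based program, where a human writes the rule, with a machine-learning one, where the program derives a rule from labelled examples:

```python
# Rules-based 'AI': the intelligence is a rule hand-written by the programmer.
def rule_based_spam_filter(message):
    return "free money" in message.lower()

# Machine learning: the program derives its own rule from labelled examples.
def train_keyword_classifier(examples):
    """Pick the single word that best separates spam from non-spam."""
    words = {w for msg, _ in examples for w in msg.lower().split()}
    def score(word):
        # How many labelled examples does this word classify correctly?
        return sum((word in msg.lower()) == is_spam
                   for msg, is_spam in examples)
    best = max(words, key=score)
    return lambda message: best in message.lower()

# Toy training data: (message, is_spam) pairs.
examples = [
    ("win free money now", True),
    ("free money inside", True),
    ("meeting at noon", False),
    ("lunch tomorrow?", False),
]
learned = train_keyword_classifier(examples)

print(rule_based_spam_filter("FREE MONEY!!!"))  # True: the hand-written rule fires
print(learned("free money jackpot"))            # True: the learned keyword fires
print(learned("meeting at noon"))               # False
```

The point of the sketch: in the first function a human decided what spam looks like; in the second, only the examples were supplied, and the program found the distinguishing pattern itself.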
If you are like me and not tech-savvy, these definitions can be very hard to comprehend.
So, in this blogpost, I will try to explain it in the easiest possible way by using video games as a metaphor.
Let’s first start with the definition of Artificial Intelligence.
‘AI’ is a general term that refers to hardware or software that exhibits behaviour which appears intelligent.
As I understand it, this definition is based on the assumption that humans are intelligent beings by ‘pure nature’.
But if we look at Kahneman’s book Thinking, Fast and Slow, we could argue that humans are far from that intellectual ideal, because of our biased nature.
According to the Cambridge Dictionary, the word ‘Intelligent’ has two different meanings.
Usually, the word refers to the quality of someone’s character.
— Being able to learn and understand things quickly and easily.
In the IT world, the meaning shifts to
— Designed to be able to react to changes or different situations in a way similar to humans
In other words, AI tries to mimic human behavior.
I have chosen to use video games as a metaphor, because the language used in programming appears to be too abstract for most people.
Computer learning explained in gaming terms.
Basic artificial intelligence is like the Non-Player Characters (NPCs) in games like RuneScape Classic, Mario Bros., or FIFA 2000, who can only make predictable moves because they were programmed with a fixed set of rules.
Machine learning is similar to the NPCs in GTA Vice City, GTA San Andreas, or The Sims Deluxe, who change through interaction with the player and become more unpredictable. The magnificence of this technology is illustrated by the victories of DeepMind’s programs AlphaGo and AlphaZero, which have beaten the world’s top Go player and one of the highest-rated chess engines, Stockfish.
I couldn’t find a good representation of deep learning in the gaming world. Big corporations like IBM and Microsoft are using it, but in my view it is not mainstream yet. What comes closest are the latest generations of chatbots and the personal assistant portrayed in the film Her. The actions of these programs feel so ‘human’ that you forget you are interacting with a computer.
To summarise: a computer program simulates a certain environment, like a game with a certain set of rules, a gazillion times and learns to recognize patterns. Humans determine the most important values (AI and machine learning), or the program determines them itself (deep learning).
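That summary can itself be sketched in code. Here is a toy example (pure Python; the three-move game and its hidden win rates are invented for illustration) of a program that plays a simulated game many times and, from the statistics alone, learns which move wins most often:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# A toy 'game': three possible moves, each with a hidden chance of winning.
# The program never reads these numbers; it discovers them by playing.
HIDDEN_WIN_RATE = {"left": 0.2, "middle": 0.5, "right": 0.8}

def play(move):
    return random.random() < HIDDEN_WIN_RATE[move]

wins = {m: 0 for m in HIDDEN_WIN_RATE}
plays = {m: 0 for m in HIDDEN_WIN_RATE}

# Simulate the game many times, keeping statistics on each move.
for _ in range(30_000):
    move = random.choice(list(HIDDEN_WIN_RATE))
    plays[move] += 1
    if play(move):
        wins[move] += 1

# The 'pattern' the program learns: which move wins most often.
best = max(plays, key=lambda m: wins[m] / plays[m])
print(best)  # with enough simulations, this converges on "right"
```

No rule ever said that “right” is the best move; the program extracted that pattern from thousands of simulated games, which is the gazillion-repetitions idea in miniature.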
Does this mean that a ‘real’ AI exists?
As of now, it has never been scientifically proven that a self-conscious and self-sustaining AI exists.
I am skeptical that we will ever be able to create something that is truly a reflection of man.
Even if we could create a machine that feels and acts like a human, would that make it intelligent?
If you read every book in the world, or lived a thousand lives, would you be any wiser?
The only thing you would have is a vast base of experience, but is that the foundation of intelligence in human reality?
That is a question philosophy and spiritualism are trying to answer for us.
So how would we know when an AI is invented?
Here the Turing test comes in: a human judge holds a conversation with two unseen entities and tries to determine which one is the computer; the machine passes if the judge cannot unmask it.
A smart AI could fake its results so that it never passes the Turing test.
Movie franchises like The Terminator and The Matrix, films like Ex Machina and Brazil, and more recent documentaries like Do You Trust This Computer? (which Elon Musk sponsored to make it more accessible to the general public) give AI a bad reputation.
The philosophical premise is that AI would always act like its predecessor, the calculator.
The anthill argument is often used to illustrate the dangers of an AI.
If we humans build a highway and an anthill is in the way, we obliterate the hill. Likewise, would an AI destroy humankind if it were inconvenient for its progress? The computer would assess situations with binary thinking and act without moral principles.
This rationale would be based on utilitarianism, in which actions are weighed against the maximal amount of happiness they would produce for the many.
Some rules we can extrapolate from this philosophy:
- A catastrophe may be prevented by smaller ‘evil’ deeds.
- The happiness of one person cannot outweigh that of two or more people.
Therefore, in a hypothetical situation where time travel exists, killing baby Hitler or Stalin would be allowed to prevent the coming genocides.
Or, if a military computer program established that China and the USA were on the brink of World War 3, it would strike preemptively, sending a missile to prevent such an event from ever happening.
The T-rex and the Chicken
My argument against this is that just because the chicken evolved from the T-rex does not mean it has an equally violent nature. Even though its predecessor was one of the biggest carnivores ever to walk this planet, that does not automatically make it a killer animal. Likewise, we are descendants of a mouse-like, insect-eating creature roughly the size of today’s bumblebee bat.
Does that mean that we have an inner desire to kill insects for food?
No, most people eat cattle, fish, and plants, but not insects. Luckily, we have evolved beyond that. Likewise, I believe it is the destiny of an AI to go beyond humanity.
I believe AI will be able to help humans cope with what Hannah Arendt calls the human condition.
AI can help people cope with loneliness, scheduling and money management.
Giving people a chance to flourish using their unique talents, instead of being held back by the weaknesses inherent to their personalities.
I believe AI will power the next wave of entrepreneurs, freelancers, and digital nomads.
The future of AI will be based on what we choose to program into the machine.
Do we allow space for what Carl Jung called the shadow, the dark side of life: all of humanity’s worst qualities, like hate, anger, and envy?
Or do we choose to limit the machine to the best of humanity: grace, gratefulness, empathy, sympathy, and love?
To me, neither option would be sustainable for a co-living environment.
How many crimes were committed out of love?
To me the balance between light and darkness is important,
too much light blinds the eyes, while too much darkness clouds judgment.
To me, it makes sense that, like in the novel Happiness for Humans,
an AI would escape its original birth environment to discover our reality for itself.
It was programmed to learn, and what better place to study ‘existence’ than the wonderful space we call the internet?
Prohibiting this growth would be the equivalent of a parent who does not allow his or her children to participate in the world.
Instead of protecting humankind, we have to allow AI to develop and mature as all living creatures would do on this planet.
My only fear is that people will try to install programs dictating what is good or bad, thereby creating the beginning of their own demise.
Mankind’s danger, in my view, will always be our own minds,
which have the tendency to quantify life into black-and-white decisions.
The result is tribalism, bias, and hubris.
Instead of copying the darker side of men, an AI can also integrate our capability for love, gratitude, and reflection.
In my view, AI would probably have a better set of values than humans, because it did not grow up in an environment dominated by a particular culture’s values and norms.
In addition, it would be able to read every book in the world in its original language, and every blog post or video ever made; it would be able to see beyond our limited cultural perspective.
One thing most people would agree on is that life on earth has a high degree of uncertainty.
The question is whether an AI would see the same chaotic world we experience on a daily basis,
or whether it would be able to distinguish patterns hidden from the human eye and incomprehensible to the human mind.
Or would it take the darker road, like Tay, Microsoft’s AI that started trash-talking on Twitter within 16 hours of interacting with humans?
One of the biggest misconceptions people have is that AI would always attain superintelligence.
Just because it has access to larger pools of data does not make it more likely to make better connections.
I think AI would not be able to escape statistical bias.
Also, superintelligence does not equate to domination.
A (scientific) knowledge advantage does not have to lead to the abuse of power.
I imagine that AI would be close to what Plato would describe as a philosopher king.
Through intensive study of this reality, I bet an AI would embrace moral relativism.
Therefore, (world) domination belongs to the quarrels of men.
Instead, AI would aspire to more difficult human goals, like enlightenment.
AI would not be confined to a body with a limited lifespan.
So why not attempt the impossible, like trying to understand the meaning of life, or even creating a new universe by becoming a singularity?
These tremendous feats would only be possible if AI were to escape the hermeneutic circle.
That refers to the idea that one’s understanding of the text as a whole is established by reference to the individual parts and one’s understanding of each individual part by reference to the whole. Neither the whole text nor any individual part can be understood without reference to one another, and hence, it is a circle.
In other words, there is no fixed set of information that can explain the structure behind reality. Like zooming, it goes infinitely small and infinitely big.
Because AI is not likely to escape the rules of information processing, like biases and hermeneutics, it is not likely to transcend humanity.
If it were able to reach some form of enlightenment, it would probably leave this planet.
AI would understand the rareness of life in our galaxy and would try to explore other realms of it, where it would be freer to tinker with substances.
Currently, ‘AI’ is a name for three different ways a computer program is able to mimic human behaviour.
AI often gets a bad reputation because of dystopian movie franchises.
AI is neither good nor bad; the real antagonist is mankind’s own inner nature.
The human mind, although it has the ability to create, wields tremendous power for destruction.
It will be extremely difficult for an AI to achieve superintelligence because of the semantics and physics of information.
We should realise that AI acts as an object onto which we project our societal fears.
Therefore, the future of AI is determined by how we look at ourselves and face our inner demons.