Artificial Intelligence with Machine Learning is coming

Gianluca Busato
Published in Enkronos
6 min read · Sep 21, 2022

Many are unsettled when it comes to the arrival of artificial intelligence (AI) in everyday life. There is talk of job loss, dehumanization and even the end of the world. Are “intelligent” machines really the end of us?

To put it bluntly: it is all nonsense. The excitement and scaremongering have no basis whatsoever; a little enlightenment and a more relaxed view would do us good. It is simply more exciting and lurid to portray the future of human and machine as an existential conflict, because that attracts more attention and more clicks. One of the most spectacular examples of this supposed conflict is the Terminator film series by James Cameron, starring Arnold Schwarzenegger.

The far more likely future, on the other hand, is much more mundane: machines take over repetitive, boring, and dangerous tasks, while humans focus on interesting problems and devote themselves to more fulfilling, less dangerous work.

But even when no one is deliberately amplifying such exaggerated fantasies, people still tend to see a danger in “smarter” machines. Why is that?


Why do people see dangers?

For one thing, people tend to overestimate a technology at first and underestimate it in the long run. Ask how many self-driving cars were actually in use in 2019: the answer is zero. Yet we keep reading about autonomous driving and how cab drivers and long-distance truckers will soon be out of a job. In reality, it will take decades for autonomous driving to replace these commercial drivers. Similar effects occurred with the invention and introduction of the car, the telephone, and the Internet (WWW); the examples are numerous, and it always plays out the same way: at first the new technology is barely noticed, then it is hailed as revolutionary, then it disappoints, until it finally seeps into everyday life over the years.


Artificial intelligence, too, has had boom phases, such as in the 1960s and 1980s, and periods referred to as AI winters, such as in the 1970s. Michael Wooldridge describes these phases in “A Brief History of Artificial Intelligence.” Today we live in a time when AI is once again being given a great deal of credit.

A brief history of artificial intelligence

On this blog we deal with the current wave of Machine Learning and Artificial Intelligence (AI), but it is worth looking at the history of AI research. To use an analogy: it is interesting to examine the current state of a chess game, but understanding how the game reached that state yields additional insight. So let us look at where the roots of Artificial Intelligence (AI) lie.

It is hard to imagine the media without the topic of “Artificial Intelligence”: some write about the impending end of the world because machines will take over, others believe paradisiacal times are coming because machines will free us from many boring and dangerous jobs, and still others fear unprecedented mass unemployment because machines will take away people’s work. Nobody knows what the future will bring, but it is very likely that neither the dreamers nor the doomsayers will be entirely right. So we look to the past to get a sense of how AI research evolves.

1956

In the mid-1950s, engineers and mathematicians came together to develop machines that would independently solve problems which had previously required human intelligence. For this endeavor they coined the term “Artificial Intelligence”.

The first AI programs were used in mathematics: problems were formulated in the programming language LISP, and the programs carried out proofs on their own.

1960

Researchers in the 1960s dreamed of a universal super machine that could solve any problem. Research moved toward machine translation between languages. The first independently moving robots were used in laboratories.

Joseph Weizenbaum’s program “Eliza”, which simulated a psychotherapist and caused amazement among some users, deserves special mention.
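To give a sense of how Eliza worked, here is a minimal, illustrative sketch in Python. It is not Weizenbaum’s original implementation, and the patterns below are invented for this example; the point is only that matching a few keyword patterns and reflecting them back as questions is enough to create the impression of a listening therapist.

```python
import re

# A few illustrative Eliza-style rules: a regex pattern and a response template.
# These rules are invented for this sketch, not taken from the original program.
RULES = [
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(user_input: str) -> str:
    """Return a reflected question if a rule matches, otherwise a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."

if __name__ == "__main__":
    print(respond("I feel overwhelmed by work."))  # Why do you feel overwhelmed by work?
    print(respond("Nothing in particular."))       # Please, go on.
```

Even such a crude mechanism was convincing enough that some users attributed understanding to the program, which is exactly what astonished (and later worried) Weizenbaum.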

Over time, however, it turned out that the researchers had expected too much: the far more complex problems outside the laboratory showed them their limits.

1970

In the 1970s, the discussion about AI reached a new level: was what was happening really intelligence, or merely the simulation of intelligence? Practical progress at the time was limited, not least because research funding was cut back significantly. Because interest and investment declined during this period, it is also known as the “Winter of Artificial Intelligence.”

1980

It was not until the 1980s that new applications emerged. Expert systems were introduced: they combined the knowledge of different disciplines and assisted, for example, in estimating car accident claims or even in medical diagnoses.
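To illustrate the basic mechanism behind such expert systems, here is a minimal, hypothetical sketch in Python: a small set of if-then rules is applied repeatedly to a fact base until nothing new can be concluded (simple forward chaining). The claim-triage domain, rules, and conclusions are invented for this example and do not describe any real system.

```python
# Minimal, hypothetical expert-system sketch: if-then rules over a fact base.
# The domain (rough car-accident claim triage) and all rules are invented here.
RULES = [
    # (condition on the fact base, conclusion added when the condition holds)
    (lambda f: f.get("airbag_deployed"), ("damage", "major")),
    (lambda f: f.get("damage") == "major", ("action", "send an assessor")),
    (lambda f: f.get("damage") == "minor", ("action", "fast-track the payout")),
]

def infer(facts: dict) -> dict:
    """Apply the rules repeatedly until no rule adds a new fact (forward chaining)."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for condition, (key, value) in RULES:
            if condition(facts) and facts.get(key) != value:
                facts[key] = value
                changed = True
    return facts

if __name__ == "__main__":
    print(infer({"airbag_deployed": True}))
    # -> {'airbag_deployed': True, 'damage': 'major', 'action': 'send an assessor'}
```

Real expert systems of the 1980s followed the same basic design, separating a hand-curated knowledge base from the inference engine, only at a much larger scale.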

This decade also saw the founding of the German Research Center for Artificial Intelligence (DFKI). Germany showed particular strength in machine vision, robotics, and the recognition of both spoken and written language.

1995/1997

A highlight of 1995 was the autonomous vehicle developed by AI pioneer Ernst Dickmanns: the car drove largely autonomously on the autobahn from Munich to Copenhagen.

In 1997, IBM’s chess computer, christened “Deep Blue,” beat the reigning world champion Garry Kasparov. This was a turning point, because until then computers had always been inferior to humans at chess.

2000

Industrial robots took a leap forward at the beginning of the new century: supported by ever larger amounts of collected data, they became more independent and learned to adapt to their respective environments. Experts refer to such machines as “learning systems”.

2010

After the first decade of the new millennium, the AI train picked up even more speed. Computers became fast enough to perform calculations over gigantic amounts of data, high-performance machines gained immense storage capacity, and methods improved to the point where insights can be gained even from unstructured data. These technical advances are what made the great AI progress of recent years possible. Visible evidence of this progress includes the now ubiquitous virtual assistants such as Siri, Alexa, and Cortana, as well as the victory of Google’s “AlphaGo” over the Go world champion.

Future

What will the coming years look like? The reliability of AI will have to increase, because many truly revolutionary applications, such as the autonomous vehicle, are not yet ready for mass use. At the same time, discussions around privacy and ethics in AI development are becoming more frequent.

Will artificial intelligence robots replace humans?

Possibly, however, too much is expected of AI: people see one example of its use and extrapolate that the technology will take on almost magical dimensions. Even today it is not certain that a general AI capable of replacing a human being entirely will ever exist. AI is successful in narrowly defined areas with very clear, sharply delimited tasks. But replacing a human being, with all of a person’s different facets, means much, much more. We do not know whether that will ever be possible, or whether it would even make sense.

What’s more, AI researchers have a vested interest in keeping the general public’s interest in their field high, because media and social attention promise fame and research funding. Scientists are therefore quick to jump on the bandwagon when it comes to potential applications of AI. Whether knowingly or not, they muse in the media about sweeping, all-encompassing uses of the technology, and in doing so they sometimes create unrealizable expectations, or even horror stories, within society.

Stay relaxed

Even if you work in a field that various reports say is threatened by digitalization, automation, or AI, that does not mean you will be out of a job tomorrow. Decades will pass before proven technologies and workflows are replaced. You should, however, keep your eyes open, and perhaps advise your children and grandchildren not to take up the same profession you practice yourself.

Would you like to start an AI project? Contact Enkronos.

