György Gáspár
9 min read · May 7, 2020


The State of Artificial Intelligence

Today is 31st August 2016. It has been many years since we started to think of computers as something personal, something we can do extraordinary things with, and computers really have transformed everything around us in the world. But let’s put aside the epic-saga style with its sweet nostalgic flavour for the prehistoric, computerless times, for the sake of convenience. Let’s stick with the term “artificial intelligence”, or AI for short, although it is vague. I will come back to why it is vague later.

On every occasion that touches computer science, this term has a very heartwarming tone. It has become a favourite buzzword, almost a cliché, for most of the top dogs in the high-tech industry; it is the one everybody wants to hear about or get involved with, just to be part of it at any level, because it feels cool. Does it really? The short answer is yes, but it is inevitably misleading. So here comes the trick: yes, we have built systems that can beat humans at board games; we have built systems that can simulate tons of cases based on predefined conditions distilled from big-data analytics, so that we can even ask that AI to make our morning toast; we have so-called genetic algorithms that can teach a robot to walk — which is no more than creating a weighted order over the possible cases and then discarding the least fit ones; and we have self-driving cars that can steer themselves and follow the curve of the road. But none of that is artificial intelligence on its own. It is only the evolution of programming and hardware skills. This might be dissatisfying for some, even disappointing, but hopefully it rings the bell just in time. Let’s face it.
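The genetic-algorithm idea above — keep a weighted order over candidates, discard the least fit, repeat — can be sketched in a few lines of Python. This is a deliberately tiny toy (a bitstring-matching problem, with names and parameters invented for illustration), not a walking robot:

```python
import random

def fitness(candidate, target):
    """The 'weight' of a candidate: how many positions match the target."""
    return sum(c == t for c, t in zip(candidate, target))

def evolve(target, pop_size=20, generations=200, seed=0):
    rng = random.Random(seed)
    length = len(target)
    # Start from random candidate solutions.
    population = [[rng.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Create the weighted order: rank by fitness, best first.
        population.sort(key=lambda c: fitness(c, target), reverse=True)
        if population[0] == target:
            break
        # Discard the least fit half, refill with mutated copies of survivors.
        survivors = population[:pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(length)
            child[i] = 1 - child[i]  # mutate: flip one random bit
            children.append(child)
        population = survivors + children
    return population[0]

target = [1, 0, 1, 1, 0, 0, 1, 0]
best = evolve(target)
print(best)
```

The point of the sketch is that nothing here "understands" anything: it is ranking and pruning, exactly the kind of programming skill the paragraph above distinguishes from intelligence.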

I am sure many people who know Isaac Asimov’s writings have something totally different in mind when it comes to AI, as do those who have watched Steven Spielberg’s movie A.I. There is one thing their pieces of science-fiction artwork have in common in how they portray AI: there may be a future where we can talk to robots just as we talk to humans. In reality, having a chat with robots is not so far away, but it won’t happen in the near future either. I think we have to redefine what the term artificial intelligence really means for us: just because we may have achieved things that seem to work autonomously to some extent does not mean we have built anything that deserves to be called artificial intelligence technology. And here is why I think AI is a vague term: it does not imply anything about why a computer or thing is intelligent, or how. We readily agree that a human being is intelligent when he or she has brilliant abilities to think, solve problems, learn, reason and understand things such as how black holes can be simulated in a laboratory. But computers ain’t got that thing. It don’t mean a thing if it ain’t got that swing. What I have in mind is this: I wish I could give a robot a book it has not read before, and then have a talk with the robot about the book once it has read it. That might sound like an achievement worthy of declaring I had a dream, just as when President Kennedy announced, way back then, that we would land on the Moon by the end of the decade. We don’t hear announcements like that anymore, by the way. At its simplest, though, it is no more than OCR-scanning the book, categorizing its content, creating context snippets at different scales that can be sorted into one or more categorization systems to widen the references, and retrieving hard subjects like names. For a head start, that would be good enough to go.

The even bigger problem is that AI has become a very marketable thing. You can sell it like gold, but it does more harm to the brand than progress. It has been used like a label, slapped like a sticker onto anything that has more than a dozen predefined cases the code can execute based on conditions. Because this is what is happening: we try to guess the main scenarios and some possible worst-case scenarios, and we prepare our “AI” to handle those situations. It has been wrong from the beginning.

But we want AI to work, and that is promising. I think that to have robots — or better said, systems — that can make decisions, reason on their own, or learn things, we must first go back to human intelligence. I have always been fascinated by languages: how people brought about so many different ones, yet are basically able to talk about the same things in very similar ways. Another example is the process by which a child grows up and learns to speak and communicate. I think it is important to make this comparison, because throughout history nothing has ever popped up instantly with full-blown, full-featured capabilities; on the contrary, people had to learn it and then adapt themselves to it. That is true for languages, for the economy, for literature and for computer science. That is evolution. It will take time, with a lot of failures, and we must not be spoiled by our successes.

In our times we have already achieved a lot of technological advancements — breakthroughs, whatever we may call them — that have been and will remain necessary. Some of them are key components: the storage of big data, the pace at which we access it, the channels we have to open to process it, and how we do so. I think this is the phase we may consider part of the learning process, i.e. establishing ways to supply data to our AI systems. When we look at how a human brain works, we find these elements as well: we have memories we can recall and experiences we can use to make decisions in the moment; but unlike computers, we can also change our minds based on any kind of new information. AIs can’t, yet. I don’t want to list everything that makes a human intelligent, except for one more key aspect: creativity. Creativity is not unique to humans — we find it in many animals as well — but it is one of the most striking abilities that make a living creature intelligent. The question pops up instantly: will computers ever have the ability to be creative? I think the answer is rather no, because we will always be too afraid of computers that might create things on their own without any human control. But that is not such bad news, because we still want our AI robots to do a lot of things on their own, just in a channeled way, and that does not strip the meaning of intelligence from the artificial whatever. For example, how good would it be if we could instruct AI robots to learn things the same way we do: “Hey, Robot, listen, now it’s learning time!” — and it prepares itself, and when we are done for a while, the robot stores what it has learned. That would mean it could recall activities later and then execute them properly. That would be supercool. For example, I would like a robot that could sometimes take my dog for a walk.
And I think that could be a huge business as well, especially in Beverly Hills. But what about speaking any language fluently, let alone more than one? A typical human has to learn a language for 20 years to use it proficiently. We don’t realize this, because a ten-year-old child can already speak; but if we recall what we could actually do at age 10 or 20, it is nothing out of the blue that, as we may remember, we really had to learn to speak well. Even today, language processing in AI is still in its infancy, and we are facing such basic challenges as context recognition — and achieving that is not a simple process. At this point I guess we can skip the question of how many languages there are on Earth, because if we can do everything we dream about in AI with one of them, we can do it with any of them. When I talk about context, I also mean that our AI can ask questions, and that should be something more than “How are you?”. It implies we need a pretty damn good engine that can ask back — for example, when the AI does not understand something in the input supplied, or needs more information — and then use the new information provided and inherit it throughout the system. It should also mean the AI can draw conclusions or summarize the meaning of a context or text snippet. These are huge tasks, because they reach well beyond computer science itself. But if we understand how our brain recognizes a context and how it recalls sets of memories, then we will be able to implement that in our AI engine. It is far more than asking what the weather is like at a certain location, which still has nothing to do with AI: that is simply supplying input for a command to execute, via voice recognition.
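The ask-back behaviour described above — an engine that requests missing information and then carries the answer forward through the system — can be sketched as a tiny slot-filling loop. The slots, phrasing and `ask` callback are all made up for illustration; real dialogue systems are vastly richer, which is rather the point.

```python
def handle_request(request, ask):
    """Fill the required slots; when something is missing, ask a clarifying
    question back, then inherit the new information through the dialogue."""
    required = ["action", "object"]   # assumed slots for this toy engine
    known = dict(request)             # what the input already supplied
    for slot in required:
        if slot not in known:
            # The engine does not understand enough: ask back.
            known[slot] = ask(f"What {slot} do you mean?")
    return f"Executing: {known['action']} the {known['object']}"

# Simulated user answers for the demo; in practice `ask` would be real I/O.
answers = iter(["walk", "dog"])
print(handle_request({}, lambda question: next(answers)))
print(handle_request({"action": "fetch", "object": "ball"},
                     lambda question: ""))  # complete input: no question asked
```

Even this toy shows the shape of the requirement: the engine must notice what it does not know, obtain it, and keep it — the inheriting of new information the paragraph asks for.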

Beyond language, AI can excel in many other fields, but the main principle will remain the same — or better said, the main needs we demand of it: we would like AI not just to reach a reliable decision on certain occasions, but to help our lives based on the almost infinite data sets we store everywhere. But how valid is our desire? Will it be able to forecast things better than humans? We may fall into the pit of forgetting that we may not know all parts of the equation. To this date I am not sure anybody has really nailed what it means… Again, I am not talking about automation, which is a superb thing for optimizing traffic and logistics. In my opinion it is really not far in the future that truck drivers will be replaced by autonomous chauffeurs driving trucks equipped with all kinds of sensors, their way paved by super-smart highways, because we have the technology for it. For example, whenever an accident happens on our super-smart highway, the automated trucks up to 60 km behind it will receive a warning signal to immediately start slowing down, and the whole line will then receive subsequent signals to adjust their speed and following distance, or come to a halt if necessary. Issuing detour instructions may be harder, because we need to check each truck’s destination against its position on the highway, the possible detours, and its fuel or battery capacity. To me it all seems doable; we just have to establish some convenient ways to assume human control if needed. Another matter is regular traffic within city limits. When we hear in the news that an accident has happened in hybrid traffic — self-driving cars mixed with human-driven cars — it can be very interesting which side to blame.
Maybe the self-driving car was abiding by all the traffic rules, and the other car was a human-driven one that kept changing lanes repeatedly ahead of it when the accident happened. We all know that rapid lane changes are common in large cities and that they rarely lead to accidents. The question poses itself: can the self-driving car adjust its speed quickly and reasonably enough on every occasion, or see far enough ahead, to recognize what it means when a hot-headed guy suddenly floors the gas pedal for the 50 meters to the next traffic light, crossing my lane? Or shall we blame the human from now on whenever an accident happens, because humans can make mistakes? I think humans can still adapt to new situations more smoothly than machines ever will, and a human driver can avoid traffic accidents in ways a self-driving car cannot. A self-driving car will be able to stop, that’s all, without even looking into the rear-view mirror. But we will only succeed if we find AI’s place in our lives without losing our humanity.
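To close with the easier of the two scenarios above: the smart-highway warning propagation is plain automation, and it really is within reach. A minimal simulation — every automated truck within 60 km behind the accident receives the signal and caps its speed — might look like this. The distances, speeds and structure are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Truck:
    truck_id: str
    position_km: float   # distance travelled along the highway
    speed_kph: float

def broadcast_warning(trucks, accident_km, warning_range_km=60.0,
                      slow_kph=40.0):
    """Signal every automated truck within `warning_range_km` behind the
    accident to slow down; trucks further back keep their speed for now."""
    warned = []
    for truck in trucks:
        distance_behind = accident_km - truck.position_km
        if 0 <= distance_behind <= warning_range_km:
            # Cap the speed; a truck already going slower is left alone.
            truck.speed_kph = min(truck.speed_kph, slow_kph)
            warned.append(truck.truck_id)
    return warned

fleet = [
    Truck("T1", position_km=120.0, speed_kph=85.0),  # 30 km behind
    Truck("T2", position_km=95.0,  speed_kph=90.0),  # 55 km behind
    Truck("T3", position_km=60.0,  speed_kph=90.0),  # 90 km behind: not yet
    Truck("T4", position_km=155.0, speed_kph=80.0),  # already past the accident
]
print(broadcast_warning(fleet, accident_km=150.0))
```

Note that nothing in this loop is intelligence in the sense argued throughout this piece: it is conditions and predefined cases, which is precisely why the truck scenario is tractable and the hybrid city-traffic scenario is not.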
