Should we be scared of Artificial Intelligence on real roads?

Many tech entrepreneurs, as well as some statesmen from various countries who pay close attention to public opinion, have been talking about the high risks involved in implementing artificial intelligence.

In mid-March Elon Musk, one of the most famous innovators and technology entrepreneurs, reiterated his concerns about the lack of regulation of artificial intelligence, noting that “AI is far more dangerous than nukes, by far. So why do we have no regulatory oversight? This is insane.”

Three weeks ago the House of Lords Artificial Intelligence Committee published a new report entitled “AI in the UK: Ready, Willing and Able?” The report outlined various restrictions on the spread of AI for both the British government and UK-based businesses as AI grows increasingly powerful.

In Singapore there are also concerns at the state level about the scope of AI and the need to control it. Dr Janil Puthucheary, Senior Minister of State at the Ministry of Education and the Ministry of Communications and Information, and Minister-in-charge of the Government Technology Agency, made a public statement in which he discussed AI in the following terms: “It means that from a policy perspective, we need to focus on the risks and issues that can be foreseen on the basis of what is happening today and perhaps accept that there will be unknown unknowns and we will deal with them when the time comes.” And in the latest episode of Westworld, the producers warn AI creators: ‘Stop’…

I don’t count myself among the naysayers. In fact, I don’t think there are any serious risks at all involving AI. In the end, there are always other solutions that can counter AI systems. And all restrictions are likely to simply lead to a slowdown in AI-related innovation and development.

Artificial intelligence is a kind of hardware and software with functions similar to those performed by a living brain. Artificial intelligence is divided into two categories in terms of scope: “general” and “narrow.” When government officials and entrepreneurs talk about the risks for mankind, they are probably referring to artificial general intelligence, or “AGI,” because AGI resembles the human brain in functionality and is capable of learning several tasks at once.

Narrow AI systems perform specific tasks that would require intelligence in a human being and may even surpass human abilities in these areas. However, such systems are limited in the range of tasks they can perform. 99.9% of today’s AI applications (including self-driving cars, DeepMind’s AlphaGo, and pretty much everything else you have seen or read about) fall into the category of what we call “narrow AI” (or “weak AI”). These are AI applications that operate at or above human level of intelligence, but in a narrow domain. A self-driving car application cannot be ported over to drive a motorcycle, for example, let alone trade stocks. Even DeepMind’s AlphaGo was trained to play Go on a 19x19 board; if we were to give it a board that is bigger, smaller, or even triangular, AlphaGo would need to be re-architected and re-trained.

Elon Musk predicts that self-driving systems will be able to perform essentially all driving-related tasks by the end of next year. However, this is narrow AI, and the risks it involves have no bearing on those commonly associated with strong AI.

Fear of AGI is simply not justified right now for a number of reasons.

First of all, these systems don’t have enough computing power

Reinforcement learning (RL) is currently a hot research topic in the world of artificial intelligence. Reinforcement learning uses trial and error to maximize a reward function. Some experts think that RL is the path to AGI because this approach allows a system to get feedback from its environment.

Machines can learn to solve a certain problem not because they are given labeled examples (“this is good, and this is bad”) but because they find themselves in an environment that rewards and punishes them, and they keep failing until they get the result they need. For example, a robot can learn to walk or to jump over obstacles. In theory, this kind of robot can train itself. In practice, however, such solutions have not been applied or tested at industrial scale. The problem with this kind of trial-and-error training is that it would be prohibitively expensive to provide 200,000 real cars to be destroyed by the technology. Realistic simulation is therefore almost impossible to implement, as is providing a quantity of data comparable to that stored in the human brain.
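To make the reward-and-punishment idea concrete, here is a minimal sketch of reinforcement learning: tabular Q-learning on a toy five-cell corridor, where the agent earns a reward for reaching the last cell and a small penalty for every wasted step. The environment, rewards, and hyperparameters are purely illustrative assumptions, not any production system.

```python
import random

# Toy environment: 5 cells in a row; the agent starts in cell 0
# and is rewarded only for reaching cell 4.
N_STATES, ACTIONS = 5, [-1, +1]          # action 0 = move left, action 1 = move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Trial and error: sometimes explore a random action, otherwise exploit.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        # Reward for reaching the goal, small "punishment" for every other step.
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Update the estimate of the action's long-term value.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # after training, "move right" has the higher value in every cell
```

Nothing in this loop resembles general intelligence: the agent learns one narrow behaviour inside one tiny, hand-built environment.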

Concerns about artificial intelligence are based on the assumption that AI can and will work as a human brain does, which means that, just like a human, it could also pose a threat. However, the quality and quantity of data available for its development are still negligible compared to the human brain. In 2014 IBM boasted about its work on modeling the neural networks that form the foundation of AI. IBM researchers claimed that they had developed a computer chip with the same power as the brain of a frog. Last year an IBM research staff member acknowledged that we should not expect to see AIs that can rival the depth of human consciousness anytime soon: “we’re only scratching the surface.”

No one knows how “it” works

Neural nets are very powerful when it comes to solving certain machine learning problems, but they cannot in and of themselves create machines that can think, or even reason, from sparse and ambiguous data, which is a core feature of AGI.

Nevertheless, neural networks are not a “black box.” It is possible to build many visualizations of the internal state of a neural network and use them to analyze what the network looks at when making specific decisions. Processes in neural networks can be analyzed, even if not as transparently as in linear regression. When people say that they do not know how a neural network works, they are merely playing to the crowd.
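As an illustration of the kind of analysis I mean, here is a minimal sketch of an input-gradient saliency map: it highlights which pixels most influenced a classifier’s decision. It assumes PyTorch with a recent torchvision; the untrained network and the random tensor are placeholders for a real trained model and a real photograph.

```python
import torch
import torchvision.models as models

# Untrained network used purely as a stand-in; in practice you would load
# pretrained weights and feed in a real image.
model = models.resnet18(weights=None)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for an input image

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Gradient of the winning class score with respect to the input pixels.
logits[0, top_class].backward()

# Per-pixel importance map: large values mark the regions that most
# influenced this particular decision.
saliency = image.grad.abs().max(dim=1).values             # shape: (1, 224, 224)
print(saliency.shape)
```

Techniques like this, along with activation visualizations and attribution methods, are exactly why “no one knows how it works” is an exaggeration.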

Being afraid of AI is like being afraid of software integration. All industries have embraced automation, and weak artificial intelligence can improve the quality of service. Take autopilot: the driver doesn’t need the car to drive itself entirely. If the car can simply assist the human, the number of accidents will decrease. You don’t have to entrust everything to a neural network, but it can help you out if you miss a thing or two, because a human can get tired or be influenced by external factors, while a neural network doesn’t care.

For example, let’s take a look at emergency braking systems for cars. They are based not on artificial intelligence but on simple, deterministic rules: when the connected car is in motion and gets too close to a pedestrian, the system brings it to an abrupt stop (a sketch of the idea follows below). Systems of this kind will continue to be developed, will continue to work, and will place limits on what robots and AI can do.
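In code, such a safeguard is nothing more exotic than a threshold check. The time-to-collision formula and the 1.5-second threshold below are my own illustrative assumptions, not a real vehicle specification.

```python
def should_emergency_brake(distance_m: float, closing_speed_mps: float,
                           ttc_threshold_s: float = 1.5) -> bool:
    """Brake if the time to collision with an obstacle drops below a fixed threshold."""
    if closing_speed_mps <= 0:
        # The obstacle is not getting closer, so no intervention is needed.
        return False
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s


# Example: a pedestrian 12 m ahead while closing at 10 m/s -> 1.2 s to impact -> brake.
print(should_emergency_brake(12.0, 10.0))  # True
```

The point is that this logic is fully auditable and sits outside any neural network, which is precisely what allows it to act as a hard limit on the AI components around it.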

The symbiosis of a neural network and a human can offer immense assistance to the human; it is like putting an exoskeleton on a person. And today, in the age of weak AI, there is not a single process in a person’s life where they might encounter an uncontrolled neural network. Humans control everything, all the time. There are currently no neural networks that could rebel against their masters, and they are unlikely to emerge, because a direct confrontation with humans would require computing power that private companies will not provide and governments will oppose.

--

Alexander Dimchenko
Bright Box — Driving to the future

Chief Strategy Officer at Bright Box, a global vendor of the Connected Vehicle Platform Remoto (www.remoto.com). Download our free white paper: https://goo.gl/K1E8NQ