The Long Path of AI: Destructive or Transformative?
To those on the outside, the field of AI (artificial intelligence) must appear to be an orderly gathering of intellectuals collaborating at the cutting edge of technology. The reality is rather different. Last year, Elon Musk made headlines by describing AI as a “fundamental risk to the existence of civilization.” More recently, former Google CEO Eric Schmidt suggested that the answer to fears about AI is to police its misuse, offering the telephone as an example: when some individuals with bad intentions misused the phone, society found ways to police that misuse rather than declining to invent the phone at all.
One of the biggest misconceptions surrounding AI is that humanity is close to achieving it; the reality is quite different. A great deal has been learned about engineering certain narrow problems, such as speech recognition, in ways that could not have been imagined five or ten years ago. But the idea of machines that can reason about the world the way human beings can remains implausible.
Moreover, discussions of the singularity often rest on the misperception that intelligence is a matter of speed — that machines will simply keep getting faster than the brain. This reduces a complicated problem to a single dimension, ignoring perception, the development of language, and the workings of memory. Making machines faster means they get the wrong answer more quickly; it does not make them more intelligent.
The other form of the singularity is the idea of machines redesigning themselves to become better. If we cannot ensure that a machine does not redesign itself in some physical form, we have no control over it at all. The real issue, though, is whether machines can be made capable enough to have an impact at a global scale. If a more capable AI regulates the electrical grid or the financial system, it can have a global impact regardless of whether it can redesign itself. The question is not the ability to redesign itself; it is the ability to change the world.
Many people think AI is a silver bullet that will solve everything. In reality, there is a great deal of high-level reasoning it cannot do. For example, although AI can extract relevant features, analyze images, and understand speech, it cannot look at a picture and project forward to what will happen next, or reason backward to what happened before and what causal relationships led to the current image. That requires a much deeper understanding of a situation. It is important not to overestimate the current capabilities of AI. On the other hand, as a glorified signal-processing tool it can be enormously beneficial: almost any scientist would profit from collaborating with machine learning specialists.
Discussions of AI usually revolve around the following two concepts:
- Human-like intelligence, referred to as general AI: this must be possible, since human intelligence already exists, yet it is unlikely to be built in the near future.
- A single algorithm that suddenly knows everything: cloning a human being would produce another biological human being, but an algorithm that suddenly knows everything — what people sometimes mean by the singularity — would be impossible.
Another common misconception is that AI has happened suddenly. In reality, it is a longstanding domain of science that has been evolving over time. Various forms of AI are already used in many systems in society, though they look different from what people may expect. For example, when AI in warfare is mentioned, people think of smart drones, but in reality it is more likely to appear in a logistics management system.
Research in the field of AI has been going on for some 60 years. Every so often it reaches a point where a product can be created that people will pay for, and from outside the field this can look like a sudden breakthrough. In fact, it is usually the payoff of 10 or 20 years of steady progress on a particular problem.
According to Simon DeDeo, an academic working on AI, machine learning is an amazing accomplishment of engineering, but it cannot be called science: it has given us, literally, no more insight than we had twenty years ago.
DeDeo’s point is that the global tech companies focused on AI for profit are not advancing science. In essence, their laboratories are no more advancing the field of cognitive science than Ford is advancing the cutting edge of physics.
No matter how impressive neural networks are, they operate on principles that date back decades. The big tech companies of Silicon Valley are not science companies; they are AI companies — and that means profits come first. It may be worth pondering whether famous AI scholars would be better off in an academic environment, unburdened by corporate decisions.
We shouldn’t read comments such as DeDeo’s as a condemnation of capitalism or an attack on big tech companies. At the same time, such criticism should not be ignored entirely. The functions machine learning performs today — things like classification and regression — were developed for neural nets in the 1980s and 90s. Perhaps what we should all do is recognize the work these major companies are doing as innovation rather than calling it science, while taking these basic ideas and finding value in them.
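To make concrete how old these core ideas are, here is a minimal, illustrative sketch of classification with a single perceptron — an algorithm dating to the late 1950s and a direct ancestor of today's neural networks. The toy dataset and learning rate are invented for illustration, not taken from any particular system.

```python
# Illustrative sketch: a single perceptron (Rosenblatt, 1958) trained on a
# tiny, linearly separable toy dataset. The data and hyperparameters are
# made up for demonstration purposes.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches labels (+1/-1)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict, then nudge the weights toward the label when wrong.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy data: points roughly above the line x1 + x2 = 1 are +1, below are -1.
samples = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8), (0.1, 0.2), (1.2, 0.5)]
labels = [-1, -1, 1, 1, -1, 1]

w, b = train_perceptron(samples, labels)
print(all(predict(w, b, x) == y for x, y in zip(samples, labels)))  # True
```

The learning rule here — adjust the weights a little whenever a prediction is wrong — is, at heart, the same principle behind the training of modern deep networks, which supports the point that today's systems refine decades-old ideas rather than introduce fundamentally new science.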
As these examples show, there is no question that AI has the potential to be destructive, and the potential to be transformative — though in neither case does it reach the extremes sometimes portrayed by the mass media and the entertainment industry.
Originally published at www.datadriveninvestor.com on July 30, 2018.