Scenarios for how artificial intelligence could develop as an industry over the next 7–10 years

Scalable Analysis
Open Source Futures
8 min read · Jul 4, 2016

First of all, I do not think that we will be able to build artificial general intelligence. The AI systems we see today are better understood as computational tools. That is a big deal nonetheless, and it alone warrants attention from policymakers about the impact on jobs and on economic structure. I used to think that today's artificial intelligence systems could replace jobs, and the Frey and Osborne paper from Oxford University did argue that about half of jobs are at risk of automation. But I now side with The Economist's view: automation will create demand for new things, that demand will create the need for more activity, and that activity will fill the jobs. This is still a position that needs monitoring, but the history of technological change has, on the whole, been a positive one for job creation.

There were a few posts on Wait But Why (part 1 and part 2) that looked at the development of artificial intelligence. I'm open to the possibilities of artificial general intelligence and artificial super-intelligence. These are certainly in the realm of possibility, but I'm focusing on something much more near-term: the current artificial intelligence scene. This is a very fast-moving field. My focus is on how AI companies as a field/industry might change, rather than on the long stretches of time over which AGI or ASI might occur. I'm therefore focused on the industrial arrangements around Artificial Narrow Intelligence, the kind we have today. When I use "AI" in this post, I mean ANI in the sense of the Wait But Why series.

People are going to disagree with me on the paragraphs above, and perhaps I will return to elaborate that view fully, but for now I will move on to another forward-looking topic: scenarios for how artificial intelligence systems will develop as an industry.

I'm going to look at a 7–10 year time range as a way to step back from current events and look at things afresh. If AI is going to be an impactful innovation, can we compare it against other historical innovations, such as the electronics, chemicals, pharmaceuticals, and automotive industries? Or perhaps against something closer in kind: computers and the hardware/software divide? If AI is going to develop like those other industries, what will it look like?

For me, there are a couple of cases to consider here. I view AI systems through the lens of how other technological systems have emerged in the past. In the short technological history around contemporary computation, there have always been periods where a new technological paradigm emerged and new fields opened up, before those fields entered a stable, mature state, or even declined as another field subsumed them. There were the contests around the platform, which Windows 'won' (I'm using "win" very loosely); the contest around browsers (still ongoing); smartphones (Samsung/Android won on market share; Apple wins on profitability); social networks (Facebook won over MySpace and Friendster); search engines (Google won); and so on. In each of these contests, we see a recurring cycle: pioneers get a lot of attention, competitors follow, the competitors eventually fail or are bought out by stronger players, and an equilibrium emerges. Then a new technological paradigm comes along and shakes things up again, and the cycle repeats. In these contests it is difficult to predict beforehand whether an individual company or entity is going to win. 'Victory' or 'survival' is often apparent only in hindsight. We could easily have seen Friendster become the dominant social networking website, but that did not happen. In these instances, contingent events matter.

Based on the view above, and on historical developments around older technologies, the 'end-state(s)' have more to do with the concentration of companies and the applicability of the technologies across a range of human and economic activities. In economic and corporate history, there are usually a few end-states in which industries stabilise. For a time, we see vertically integrated companies that want to control as much of their production chain as possible — Ford in the past and Apple today come to mind. We have also seen companies become diversified conglomerates, growing through acquisition or through family holding companies — historical American companies, the Japanese keiretsu, and the Korean chaebol are examples. For each of these end-states, the question is the extent to which AI systems can be applied to a whole range of purposes.

What has often decided outcomes between conglomerates and more streamlined companies has been regulation and market conditions. Regulators are often interested in ensuring that market power is not concentrated in a few actors, as with traditional anti-trust regulation, but looking at Google and Facebook this is not always the case. Of course, Peter Thiel wrote about how broadly these companies define their industries and argued that they might not be monopolists at all. That depends entirely on the perspective you take. The fact that other websites and other industries now have to take their cues from Facebook suggests tremendous influence on its part; I suggest that Google and Facebook are more monopoly-like than not. For all of these reasons, I take regulation as one critical uncertainty for the development of artificial intelligence across a wide range of activities.

The AI field is also developing quickly, and my own sense is that it will begin to be applied across a whole range of purposes as soon as people make it easy to do so. When that happens, AI could become a commodity technology, much as Arduino and Raspberry Pi made electronic hardware accessible. But it might not happen, either. Perhaps AI is more like a craft that takes years to master sufficiently for use.

Then there are the shock events that could alter entire trajectories. There could be a period of protracted suspicion of digital systems if there were widespread hacks, or if major powers disrupted the Internet in the pursuit of a cyber-based conflict. If any of these happens, applicability becomes narrowed. Because of these uncertainties, I use applicability as the second critical uncertainty.

I'm going to use scenario analysis and backcasting (imagining futures and working backwards) to characterise the situations that could emerge. I am thinking of two axes: the concentration of companies, and the applicability of AI systems across human/economic activities. This results in four possible situations:

  1. High concentration, wide applicability — “SuperCorps”: the creation of large companies that combine artificial intelligence with industrial capability;
  2. High concentration, low applicability — “Bonsai”: the creation of an industry with limited use (possibly due to shocks or regulation), dominated by a few players;
  3. Low concentration, wide applicability — “Ecosystems”: many companies provide AI services across many industries, tied together by clusters and system integrators;
  4. Low concentration, low applicability — “Brokenness”: in the aftermath of a system-wide attack or a period of cyber-conflict, digital systems retreat or stop developing for a time, leading to a fractured system.
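The two-axis framing above can be sketched as a simple lookup table. This is purely illustrative — the axis labels and thresholds are informal names for the scenario matrix, not a model of anything:

```python
# Illustrative 2x2 scenario matrix: company concentration vs. AI applicability.
# Labels come from the list above; the function is just a convenience for
# reading off the matrix.

SCENARIOS = {
    ("high", "wide"): "SuperCorps",   # few giant AI-industrial combines
    ("high", "low"):  "Bonsai",       # few players, limited range of uses
    ("low", "wide"):  "Ecosystems",   # many providers, system integrators
    ("low", "low"):   "Brokenness",   # fractured, post-shock landscape
}

def scenario(concentration: str, applicability: str) -> str:
    """Map the two critical uncertainties to a named scenario."""
    return SCENARIOS[(concentration, applicability)]

print(scenario("high", "wide"))  # SuperCorps
```

Reading the matrix this way makes the structure of the argument explicit: each scenario is just one corner of the two uncertainties.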

I chose the names for a reason, and I will explain them in further detail. As one can tell, these are all plausible states. How they come to be depends on a wide variety of factors, from regulation to the degree to which AI systems become a commodity. Today, AI systems are created by a small group of people, but eventually knowledge about working with them will become more accessible as the field becomes more widespread. Regulation could impede the growth of AI systems: some very smart people may design AI systems for nefarious purposes, and in response, regulations may become more restrictive about how AI systems can be applied. For these reasons, I thought a framework for looking at the eventual state of things would be useful, highlighting the salient drivers in each scenario.

SuperCorps

SuperCorps is a world where AI systems are concentrated in a few large companies operating across a wide range of activities and industries. The way to think about this is: what if Google were to merge with General Electric? How could that happen? It could start with a partnership, as GE contracts Google's computational services for GE's industrial facilities. At the same time, Google joins a partnership to build technologically-enabled urban centres. Eventually both entities decide there are sufficient synergies to form a larger entity. This GEG becomes a corporate behemoth, able to affect the lives of more people around the world. And it would not be the only such entity; other companies follow. Amazon partners with ABB; Microsoft could partner with, say, UTC.

This scenario assumes that the AI systems can be created for the various industrial systems, and that these AI systems can interconnect with each other with minimal interruptions.

Bonsai and Brokenness

Bonsai and Brokenness resemble each other, differing mainly in severity, so I will discuss them together. In Brokenness, one can imagine that serious disruptions have occurred in the information world, causing governments to take drastic precautions to slow the rate of information technology development. In this context — whether through cyberwar or a massive cyberattack by a large criminal network — the Internet becomes something else, a shadow of today. AI development slows and becomes restricted to a few areas where it is critical. In Brokenness, AI remains in a state of fragmentation as technological development stalls.

Bonsai is a less catastrophic view. Perhaps a combination of regulatory and technical difficulties makes it hard for AI to be used beyond a narrow range of activities. The scope of AI becomes drastically reduced as regulators grow cautious of its labour-saving potential and its ability to cause widespread job losses. In this view, the remaining AI efforts consolidate. The current companies with these capabilities — Google (DeepMind with AlphaGo, search), Facebook, Microsoft (Cortana, language translation), Amazon (Kiva, Alexa), and Baidu (language translation) — might then be the only players in the field.

Ecosystems

Ecosystems is a world with low consolidation of AI efforts and high applicability. In this world, there are few inter-industry linkages, perhaps owing to security concerns. Clusters of companies build ecosystems around themselves as they deal with different AI systems providers for particular services. Interconnections between different AI systems are unreliable as different approaches to AI emerge. Here, system integrators become highly prized, choosing among different AI companies for particular services.

Conclusion

There will certainly be different corporate responses depending on how AI systems development unfolds. For myself, SuperCorps appears to be where things are heading, as technology companies and industrial companies find synergies and begin to cooperate with each other. If that is the case, then governments will have to consider whether this is in consumers' interest — it would effectively create corporations of incredible power and influence over people's lives. Brokenness and Bonsai are also scenarios AI researchers should watch: they mean that cybersecurity and government regulation will be important things to pay attention to. Ecosystems assumes that there will be different approaches to the development of AI platforms; if so, the onus is on AI researchers to think about compatibility between those approaches.

I am generally excited about the direction AI is going. As I've said, I don't think artificial general intelligence will happen. The systems right now appear more like savants, narrowly specialising in particular fields. And they remain tools, not self-aware (yet).

I would definitely like to hear from you if you have comments on this piece or want to suggest something I could write about. If you liked this, share it and recommend it with the buttons below!

