Artificial intelligence will have a bigger impact than Moore’s Law, the dynamic that drove the tech industry to today’s gargantuan scale. Ultimately, the value AI creates will be greater than that of all previous information technologies.

In economic terms, some suggest AI could double the growth rates of some leading economies while increasing labor productivity by 40%. In the first piece of this series I covered the flywheel of computing: new applications beget new algorithmic solutions, which beget more demand for data, which begets more demand for compute.

The cycle reminds me of the Wintel partnership, that Escher-like loop of falling processing costs and rising computing performance. The Wintel cycle brought personal computers to millions of households around the world; the current AI cycle captures the explosion of data and redistributes intelligence closer to billions of end products: cars, phones, IoT devices. The revival of the edge, which I wrote about in the second part of this series, is crucial to bringing truly intelligent devices to consumers.

Broadly speaking, there are three main areas where I believe we’ll see significant innovation on the back of the AI wave:

* Hardware
* Software
* Data

Hardware

In hardware, I previously explained how the increasing demands of machine learning are raising the demands placed on silicon architecture. The specific, but broadly similar, demands of machine learning (and deep learning in particular) are prompting a return to similarly specific technical architectures.

The result is a plethora of new hardware, and with that new opportunities in the market.

Graphics processing units (GPUs), whose manufacture is dominated by Nvidia, were the first to bloom. But increasingly we are seeing new silicon architectures optimised for deep learning tasks.

In a previous installment, I touched on startups like Graphcore (still independent) and Nervana Systems (acquired by Intel). But they are just the start of this new hardware wave. Apple’s iPhone X uses the A11 Bionic processor, which includes the company’s first in-house GPU design. Baidu announced in July the deployment of Xilinx FPGA circuits to accelerate deep learning applications in its public cloud. Huawei is expected to launch an application processor that “combines CPU, GPU and AI functions” to bolster “smart computing.” Efinix, with major funding from Xilinx, plans to launch its new Quantum programmable technology in 2018: chips that will squeeze AI into much smaller, more efficient devices at the edge.

In other words, the winners in silicon have yet to be announced. In the previous cycle, there was one runaway winner. Intel’s generalised CPU (central processing unit) dominated the PC era. Its x86 architecture became the architecture of choice, seeing off rivals like Motorola’s 68000 and PowerPC series as well as Sun’s Sparc chips. And the firm’s structural advantage (owning the architecture while securing the market share that delivered large economies of scale) allowed it to see off x86 cloners like Nexgen, Cyrix and NEC. Only AMD remains in the now-declining PC chip market, with less than 25% market share.

The same dynamic was true in the mobile world. ARM’s CPU designs are present in more than nine in ten of our smartphones and tablets. Those chips are increasingly making their way into laptops and other devices. Intel didn’t succeed in gaining a foothold in this market at all, and as the market share numbers attest, neither did many others.

Could Intel or ARM dominate the AI space? Many analysts, such as James Wang at Ark Invest, reckon that Nvidia has a significant lead in this domain. Nvidia’s graphics processors (GPUs) are the tool of choice for deep learning. It’s argued that the academic breakthroughs made in deep learning back in 2008 were facilitated by Nvidia’s release of CUDA, a method of programming its GPU chips more easily, the fall before. The result is that most machine learning since then has run on Nvidia’s GPU chips, either alone or in large clusters.

Nor has Nvidia stood still since it became the darling of the machine learning community nearly a decade ago:

  1. Nvidia improved the architectural efficiency of its chips by 10x over four GPU generations;
  2. Nvidia supports all the major software frameworks for deep learning, whereas its competitors mostly support TensorFlow and Caffe;
  3. Nvidia open-sourced its Deep Learning Accelerator, a dedicated inference design, encouraging startups to build on top of the existing infrastructure.

The company just announced Pegasus, its new computing platform, which will deliver over 320 tera-operations per second, more than 10x its predecessor. The gauntlet is down.

So could Nvidia become the Intel of the AI wave? It is well positioned because, like Intel, it has the market share, it has the developers using it, it supports the tools, and it has increasing economies of scale. Those attributes have typically propelled platforms to very large market shares.

Is that enough to keep competition from radical new architectures at bay? When Google announced AlphaGo Zero, its new Go-playing system, it unveiled not only a new approach to Go (learning from self-play, without human game data) but also a new chip architecture. The first version of AlphaGo ran on 176 Nvidia GPUs. The new (and stronger) version ran on just four of Google’s own tensor processing units (TPUs).

From an investment perspective, Nvidia has risen nearly 14 times in the past four years. Investors have done extremely well. But early advantage aside, it isn’t clear that Nvidia has won this market for the long term, or that there aren’t opportunities for other companies to succeed.

Software

The majority of the key frameworks used to build or apply machine learning are open source. The market leader is Google’s TensorFlow, the most widely used on the collaborative software repository GitHub.

But several other toolkits are making headway, such as Paddle (from Baidu) and MXNet (from Amazon and CMU).

While popular discussion of the applications of deep learning tends to focus on these toolkits, they are but a small part of the full software stack required to make machine learning work.
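
To make that concrete, here is a minimal sketch of how little of the work the toolkit itself covers: defining and compiling a small model is a handful of lines in Python (here using TensorFlow’s Keras API; the layer sizes and the assumed `features` and `labels` data are purely illustrative). Everything around it is the rest of the stack.

```python
# A minimal, illustrative model definition using TensorFlow's Keras API.
# The toolkit's share of the work is roughly this much; data ingestion,
# validation, serving and monitoring all live elsewhere in the stack.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Training itself is one line; producing clean `features` and `labels`
# (assumed to exist here) is most of the real work:
# model.fit(features, labels, epochs=5)
```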

There is still a range of emergent opportunities beyond the AI toolkits that have captured so much attention. How do you turn these clever algorithms into something operational?

One way of thinking about how to make machine learning useful is to treat it like a factory operation. The inputs are data. The process is inferencing. And the outputs are predictions, such as when a delivery is likely to reach its destination or what else a customer might want to put in their shopping basket. But the outputs aren’t always correct (or perhaps the definition of correct changes), so the quality of the output needs to be measured. And the key is in the word learning: a good machine learning deployment will learn to reduce the deviation of its output from the desired level.

The way you operationalise algorithms is through a machine learning pipeline. And yes, it’s really like a factory. It has its loading bay (where data gets loaded), quality control of the inbound supply, a set of processes that chop up and then add value to the data, a product that is produced, quality control on the product and, most importantly, a feedback loop to improve the future quality of the product.
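
As a sketch (with entirely hypothetical names, not any particular product’s API), such a pipeline might look like this in Python:

```python
# A toy machine learning pipeline mirroring the factory analogy above.
# Raw records are assumed to be dicts with a "features" key. A real
# pipeline adds storage, scheduling, monitoring and retraining on top.

class Pipeline:
    def __init__(self, model):
        self.model = model    # anything with a predict(features) method
        self.errors = []      # deviations recorded by the feedback loop

    @staticmethod
    def validate(record):
        """Quality control on the inbound supply."""
        return record.get("features") is not None

    def load(self, raw_records):
        """The loading bay: ingest raw data, discard bad records."""
        return [r for r in raw_records if self.validate(r)]

    def run(self, raw_records):
        """Process the data and produce the 'product': predictions."""
        return [self.model.predict(r["features"])
                for r in self.load(raw_records)]

    def feedback(self, prediction, actual):
        """The feedback loop: measure deviation to improve future quality."""
        self.errors.append(abs(prediction - actual))
```

The point is less the code than the shape: every stage of the factory has a concrete counterpart, and it is the feedback stage that makes the factory learn.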

An impressive example of a robust machine learning pipeline (and one of the few that has been discussed in any real detail) is Uber’s Michelangelo.

[Before Michelangelo,] there were no systems in place to build reliable, uniform, and reproducible pipelines for creating and managing training and prediction data at scale. Prior to Michelangelo, it was not possible to train models larger than what would fit on data scientists’ desktop machines, and there was neither a standard place to store the results of training experiments nor an easy way to compare one experiment to another. Most importantly, there was no established path to deploying a model into production–in most cases, the relevant engineering team had to create a custom serving container specific to the project at hand.

Any firm (which means every firm) planning to use machine learning at any scale will need to deploy a machine learning pipeline. I would expect standardisation to emerge in this realm, with traditional ISVs like Oracle making a play, as well as novel startups, often based on open source technologies.

One place to look is UC Berkeley. Its AMP Lab is quite the hatchery for big data and machine learning tools. The Algorithms, Machines and People Lab (to give it its full name) has been responsible for projects like Spark (a very fast real-time engine for big data processing, which I have used in the past) and Mesos (a resource orchestration platform for cluster computing). Both Spark and Mesos have turned into full-scale Apache open-source projects. The AMP Lab has a cousin, the RISE Lab, which is following a similar playbook to meet the growing demand for software frameworks that can handle large-scale, real-time machine learning systems.

Open source has transformed the software industry. Reducing the cost of licensing software to zero has unleashed a wave of innovation. But there aren’t yet any huge commercial open source successes, in AI or, to be honest, in any other field.

Red Hat, the largest open source firm, has reached a market cap of around $21.3bn on revenues of $2.4bn, a mid-sized sprat in the world of public technology firms. Loss-making Cloudera, home to many big data services and a Hadoop distribution, is ten times less valuable. Mesosphere (a commercial business built around Apache Mesos) has raised nearly $125m in private capital and may yet break the mould.

There haven’t yet been great winners among software firms in the AI stack per se. Obviously Google and Facebook, which use machine learning to drive their businesses, have been great investments. And services businesses built on open source have done moderately well. But software tooling for the sake of software tooling has generally had moderate outcomes. I’m not sure that AI will necessarily change that equation.

Data

The Economist has deemed data the world’s most valuable resource, displacing oil, the defining resource of the petroleum age. Power, for companies in the information age, lies not only in accumulating abundant data but also in getting the most out of it to provide convenience, increased efficiency and insights to users. What makes their services intelligent is the capability to learn from users’ behaviour and loop it back into an ever more personalised service.

As devices from watches to cars connect to the internet, the volume of data is increasing: some estimate that a self-driving car will generate 100 gigabytes per second. Meanwhile, artificial-intelligence techniques such as machine learning extract more value from that data. Algorithms can predict when a customer is ready to buy, when a jet engine needs servicing, or when a person is at risk of a disease. It’s no surprise that industrial giants such as GE and Siemens now sell themselves as data firms.

The most powerful internet firms have a significant advantage. If you are Google or Amazon, you already have an enormous amount of data on consumer behaviour. If you aren’t, well, you’ll need a data strategy.

This in turn has created opportunities for entrepreneurs. Take Q Data and Tasko.Ai: Q Data is a marketplace for selling and buying raw and aggregated data, and Tasko offers on-demand data collection. Both are very early in their development, so it is too soon to say whether they will work. But it is clearly possible to make money selling data: take Thomson Reuters or Platts in financial services, both of which make large revenues from selling useful data.

One technology that might help companies monetise their data is the blockchain.

As Trent McConaghy says:

Here’s the issue. Many enterprises have plenty of data but don’t know how to make it available to the world. Latent value lurks everywhere. Conversely, many startups know how to turn data into value using AI, but they’re starving for data.

Trent’s excitement about the opportunities arising at the intersection of blockchain and AI is understandable. One factor holding back data sharing is the difficulty of aligning the incentives of profit-seeking firms with the nature of data. Data is not like physical widgets. If you show me a single widget, I can’t make infinite copies of it for free. If you show me data, I can. And by the way, if I copy your data and exploit it, that doesn’t prevent you from using it. That is at odds with how the modern economy works.

Blockchain-based approaches to handling data may change that. A chain makes it possible to establish who originated a chunk of data and who has used it. The decentralised control mechanism of blockchains allows participants to trust the whole system (without trusting anyone in particular). It might create the right kind of substrate for co-operation around data, with the right kind of economic incentives to encourage investment.
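
As a toy illustration of the provenance idea (not any particular blockchain protocol; all names here are invented), a hash-linked ledger can record who originated a chunk of data and who used it, in a way that is tamper-evident:

```python
# A toy, hash-linked provenance ledger. Each entry records who
# originated or used a chunk of data, and is chained to its
# predecessor so the history is tamper-evident. A real system
# would add digital signatures and decentralised consensus.
import hashlib
import json
import time

class ProvenanceLedger:
    def __init__(self):
        self.entries = []

    def record(self, actor, action, data_id):
        """Append an entry linked to the previous entry's hash."""
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "action": action, "data_id": data_id,
                "time": time.time(), "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Recompute each hash and check the chain is intact."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

ledger = ProvenanceLedger()
ledger.record("acme_corp", "originated", "sensor-batch-17")
ledger.record("ai_startup", "used", "sensor-batch-17")
assert ledger.verify()
```

In a real deployment, the decentralised consensus layer is what ensures no single participant controls the ledger, which is precisely the property that lets firms co-operate without trusting each other.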

In other words, supplying the factories of the future will require large amounts of different types of data. Opportunities exist for companies that can supply or generate that data. And further afield, the blockchain may create sharing networks where firms can bring (and benefit from) their own data while exploiting the contributions of others.

Applications

I haven’t covered applications in this analysis. There are excellent frameworks available, such as this one by David Kelnar. For investing in AI companies and products, the framework recommends paying attention to six competencies: strategy, technology, data, people, execution and capital. As many teams succumb to AI hype and self-tag their data-crunching product as artificial intelligence, the key question for investors and customers should remain: (how) does this piece of software add or create value? Does it improve the customer experience or help identify new market opportunities? David’s rule of thumb is to look for problems that are arduous, complex and inscrutable; in other words, direct your solution towards activities that are valuable to humans but difficult, impractical or impossible for them to complete.

Conclusion

It’s quite an exciting time, isn’t it? Let’s recognise the multiple exponentials we’re sitting on here:

  • Only a handful of firms currently have a grip on what they plan to do with machine learning and have implemented it. Those are mostly consumer internet firms (like Amazon or Facebook). The vast majority of large and small firms have yet to implement such systems.
  • Virtually none of these firms will have the in-house capabilities of a Google; they will depend on ready-made services, tools or training provided by third parties. (A good example here is Seldon, which I advise. Seldon makes it easier for companies to deploy machine learning pipelines.)
  • Large swathes of industry are working double time to increase the range and quality of the data inputs they can use to understand their business. While their sales data and inventory levels may be digitised, their footfall or in-factory behaviour is not. Deployments of machine vision will create new classes of data for these firms and, in turn, enable new applications.
  • The pace of academic and commercial research is quite incredible. We’ve been flogging the deep learning horse, with great results, for the past few years. Other approaches (like reinforcement learning, probabilistic models, neuroevolution) will start to reap rewards, kicking off new types of applications we can develop and deploy.

It is quite likely that the world won’t get any slower than its rapid pace today. From an investment perspective, there will clearly be a play in silicon chips. Intel, ARM and Nvidia have been stellar performers throughout their histories because we relentlessly demand more compute cycles.

There will be opportunities in software, with the caveat that these are likely to come from services firms implementing open source software rather than from the software itself.

Finally, companies with novel approaches to feeding the AI beast with data might get a look in. The most exciting area here is the combination of data and the blockchain to create sharing networks for AI applications. Exciting and emerging to be sure, but possibly too exotic for all but the bravest.