AI hardware: data was never the new oil


Artificial Intelligence (AI) technology has long been predicted to have a massive impact on the economy, international relations and society at large. In recent years, these promises have started to be fulfilled, and these changes are only set to accelerate in the upcoming years and decades.

While images of killer robots and other sci-fi-horror stories dominate the fictional landscape, these are not the technologies most worrying for our immediate future. Current AI technology such as facial recognition is already deployed at large scale, especially in China, for the tracking and control of dissidents and the general population. It also seems clear that access to and control over advanced AI technologies will be crucial for economic power in the unfolding century.

A natural question to ask is what forms of leverage might exist to influence the access and use of AI technologies. In this report, I will explain why some axes of control (data and algorithms) are not viable routes to regulation, while others (in particular, the access to computer hardware) are promising candidates for strategic intervention, if so desired.


Anatomy of AI technology

Modern AI technology is built on the principle of “Deep Learning” (DL). DL is a method in which a “neural network” (NN) is “trained” on a collection of data to perform some task (such as sorting images by category, autocompleting text or playing a video game). There are three fundamental ingredients needed for a DL system:

  • Algorithms: The technical knowledge of how to build neural networks and run their training algorithms, along with software implementations of these methods.
  • Data: DL tends to require rather large and specialized collections of data to train on.
  • Compute: The training process of a DL system requires very large computing resources to run.
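To make these ingredients concrete, the sketch below shows them in a minimal PyTorch training loop; the dataset, network size and hyperparameters are purely illustrative. The data is what the network learns from, the algorithm is the network architecture plus its training procedure, and the compute is whatever hardware the loop runs on. Real systems differ mainly in scale, not in structure.

```python
# Minimal, purely illustrative sketch of the three ingredients (PyTorch).
import torch
from torch import nn

# Data: a tiny synthetic dataset stands in for the large, task-specific
# collections real systems are trained on.
X = torch.randn(1024, 32)               # 1024 examples, 32 features each
y = (X.sum(dim=1) > 0).long()           # toy labels

# Algorithms: a neural network plus a training procedure (loss + optimizer).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Compute: real systems run this loop on clusters of accelerators for weeks;
# a few passes on a CPU are enough to show the structure.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)          # how wrong is the network right now?
    loss.backward()                      # compute gradients
    optimizer.step()                     # nudge the weights to reduce the loss
```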

The AI research world has a widespread culture of openness. Algorithms and other research progress are usually available to everyone shortly after their discovery. High quality, well tested software implementations of most state of the art methods are readily and freely available. Patents and corporate secrecy are much smaller factors in AI research than in other technical fields, and the software the public has access to is rarely significantly inferior to what the cutting edge academic and corporate labs have. As such, algorithms can be considered close to “free”.

Data was never the new oil

Over the last few years, data has been singled out as the bottleneck for most applications of DL. This has led to the common saying that “data is the new oil”, meaning that data is the new valuable economic resource everyone wants to have.

But this analogy is flawed: not all data is equal. A dataset of images of faces might be valuable for developing a facial recognition AI, but is of much less use for developing a document scanning AI. So, unlike oil, data is not fungible. One cannot exchange one chunk of data for any other; one needs exactly the right kind of data, in sufficient quantities, for the task one is tackling.

One of the most important developments in the AI world over the last few years has been that the reliance on data has become less acute as algorithms become more flexible and larger datasets become more readily available. It is now quite easy to acquire “big enough” datasets for most standard tasks (perhaps ~70% of the most common tasks one might want to solve with AI), and modern algorithms allow the extraction of more performance from the same amount of data, with some extra work. This is in large part thanks to recent breakthrough progress in “Unsupervised Learning” methods, which can learn useful tasks from large amounts of non-specific, unlabeled data. OpenAI’s GPT-3 model is a recent famous example of this. GPT-3 performs at (or close to) the state of the art on many tasks it was never explicitly trained on. Instead, it was simply trained on very large collections of text scraped from the internet.
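As a rough illustration of this idea, the sketch below prompts an openly available pretrained language model (GPT-2, a much smaller predecessor of GPT-3) with a task it was never explicitly trained on. It assumes the Hugging Face transformers library is installed; a model this small will often get the answer wrong, so it only demonstrates the interface, not GPT-3-level quality.

```python
# Rough sketch: prompting a pretrained language model on a task it was never
# explicitly trained for. GPT-2 is used as a small, freely available stand-in
# for GPT-3 (assumes the Hugging Face `transformers` package is installed).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The "task specification" is just a prompt with two worked examples of
# English-to-French translation, followed by a new word to translate.
prompt = (
    "English: cheese\nFrench: fromage\n"
    "English: house\nFrench: maison\n"
    "English: water\nFrench:"
)
result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])
```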

These developments point towards a different candidate as the new oil, something valuable and fungible: Computing hardware.

Computing power is the new bottleneck

DL training has very special demands on the hardware it is run on. Any modern computer is run primarily by a Central Processing Unit (CPU). These chips are extremely flexible in their capabilities, but lack the raw power of more specialized chips. Modern DL training is so computationally heavy that it is no longer feasible to run on anything but very modern, specialized AI chips. The most important class of such AI chips are Graphics Processing Units (GPUs), which were originally designed for graphics applications but turned out to have the exact properties DL applications needed. Other, even more specialized chips, from companies such as Google, Graphcore (UK), Cambricon (China) and others, exist as well and are likely to become even more important in the future.
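A rough sketch of why this matters in practice: the snippet below times the same large matrix multiplication (the core operation of DL training) on a CPU and, if one is available, on a CUDA GPU. The matrix size is arbitrary and exact speedups depend heavily on the hardware, but on typical machines the GPU is an order of magnitude or more faster.

```python
# Rough, illustrative benchmark: the same matrix multiplication on CPU vs GPU.
# Matrix size is arbitrary; requires PyTorch, and the GPU branch only runs
# if a CUDA device is present.
import time
import torch

def time_matmul(device: str, size: int = 4096) -> float:
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # finish setup before timing
    start = time.perf_counter()
    _ = a @ b                             # the core workload of DL training
    if device == "cuda":
        torch.cuda.synchronize()          # wait until the GPU has finished
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```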

CPUs, and even specialized chips that are a few years old, are generally unsuitable for cutting edge DL work. The most cutting edge DL systems are consistently built on top of the most cutting edge hardware. The supply of such chips is limited and they are expensive: building a high end DL supercomputer costs millions of dollars.

The life cycle of a computing chip

Simplifying the many steps required to produce a cutting edge chip, there are three main industries involved:

  • Chip design
  • Semiconductor Manufacturing Equipment (SME) production
  • Chip Fabrication (“fabs”)

Very few large companies dominate all phases of high end chip production, with the vast majority of them located in Western countries, along with Japan, South Korea and Taiwan.

Chip design, being a purely software driven process, can theoretically be performed anywhere, but much of the work is dominated by the United States. Major players in this space include NVIDIA, Intel, AMD and Google.

SME production is the most concentrated of these steps, with only a single company (the Dutch company ASML) currently capable of providing the most cutting edge equipment required to produce the highest quality chips. Other SME providers, located in the USA and Japan, have large market shares as well, but lack the capacity to produce the highest end equipment.

Chip Fabrication is also a very centralized business, with only three corporations having the capacity to manufacture cutting edge chips. By far the most dominant player in this field is the Taiwanese company TSMC, which is currently the most capable of delivering the most powerful chips (using ASML equipment). The two other noteworthy players in this field are Intel (USA) and Samsung (South Korea), though they currently lag behind TSMC in technical capacity.

Strategic Implications

Many aspects of the information technology sector are, by their very nature, hard or even impossible to cohesively assess and regulate. Exactly which CPUs one uses to power an internet startup has, so far, been of very little importance. The computational demands of cutting edge DL systems are changing this dynamic.

More and more, whoever has access to the best AI hardware will be able to build the best AI software. As such, access to cutting edge AI hardware is of interest to policy makers and strategic analysts.

The IP embedded in the design of cutting edge chips will always face the usual difficulties of protection from espionage or reverse engineering. Designing high end chips requires great skill and effort, but is in itself hard to track and regulate, and therefore presents an unfavorable target for intervention.

SME production and chip fabrication on the other hand are extremely capital intensive, centralized and almost impossible to hide on a large scale, making them a favorable target for policy intervention.

Summary and recommendations

The future of competitive AI research and development will crucially depend on access to high end AI accelerator chips. Ensuring access to an adequate supply of such chips is of crucial importance to any nation seeking economic relevance in the unfolding AI revolution.

Currently, the entire world’s capacity to construct high end chips is concentrated in a very small number of international corporations located in Europe-friendly nations. Despite what one may think, China and other countries without these corporations have extremely limited capabilities to produce high end chips and, despite intense efforts to build up such capabilities domestically, are unlikely to achieve them soon.

The EU must act now if it wishes to build the capacities needed for chip sovereignty. Currently, all chips used in Europe are designed in the US and built in Taiwan, South Korea or the United States. Whether this is a desirable state of affairs is left to the reader to decide.

If the EU wishes to seriously pursue chip sovereignty and ensure it is competitive in the developing AI economy, a massive investment and development effort is needed as soon as possible, as developing these capacities could take years or decades. The EU is in a good position to build such capacity, as it possesses a highly educated workforce, a large market for its products, and hosts the world’s leading SME manufacturer (ASML).

A heavy investment in an effort to create open hardware designs for AI accelerator chips, combined with work to develop the production capabilities needed for them, seems the most promising route for the EU. Developing these designs in an open manner (similar to the RISC-V Foundation) would allow the EU to leapfrog forward by drawing on the input of many stakeholders, and hopefully allow the effort to catch up, within a foreseeable timeframe, to private efforts that have decades of lead. As discussed in this report, such designs being open would not be a huge asset to hostile actors, as these would be unlikely to have access to the necessary production capacity, provided the EU and its allies work to control access to their leading SME and fabrication providers.

SME and fabrication capacity presents itself as an unusually tractable and controllable target for policy. If one wanted to limit the use of AI for purposes counter to European values and strategic interests, restricting these steps in the hardware production cycle would exert enormous pressure on any such hostile nation or actor, and effectively cut them off from high end AI capacity, potentially for decades.

This article was originally published on August 30, 2020 on https://www.aleph-alpha.com/research.
