Semiconductor Champions

Olivier Huez
C4 Ventures
Sep 7, 2017 · 5 min read

Or the quest for performance

New use cases require new semiconductor architectures

Internet users generate 1.5GB of data every day; every minute, 2.4 million Google searches are sent and answered and 2.78 million YouTube videos are watched. We are entering the Data era and the complexity of handling this data is growing exponentially. Intel actually declared at their 2017 analyst day: “We are a data company”.

Faced with this new complexity, it’s no longer just the raw speed at which a processing unit can perform calculations that matters. The environment in which the calculation needs to happen is changing, and new architectures are emerging to suit particular algorithms’ needs.

The semiconductor industry has built all the pieces needed for AI to rise, and over the next decade these pieces will come together into the most powerful architectures yet, tackling the “Big Data” challenge and opening what industry leaders see as the AI decade.

At the beginning, there was the CPU…

The first microprocessor was created in 1971 by Federico Faggin for Intel. A few years later, Intel introduced the x86 family of processors which, to this day, remains the de facto industry standard for CPUs (Central Processing Units) used in PCs and servers. A CPU is the brain of a computing device; it is the most well-known type of chip and is optimised for sequential calculation. The CPU is at the core of Moore’s law, which drove the semiconductor industry and enabled the Fourth Industrial Revolution.

All semiconductor architectures present extremely strong network effects. Once a critical market penetration is achieved, developers are not willing to rewrite their code for a new architecture unless it delivers a dramatic increase in performance (usually an order of magnitude, i.e. 10x). On the other hand, barriers to entry are high and the investment required is substantial: today, developing a new chipset architecture costs about as much as putting a dozen satellites into orbit around the Earth. As long as the market leader continues to innovate, it will maintain its leadership until the next architectural disruption.

These two factors are enough to create, each time, a worldwide champion.

Intel’s x86 architecture made them the world’s largest producer of microprocessors and they have maintained ultra-dominance in that sector ever since. They have actually enjoyed a near-monopoly in the server chip market in recent years, with a market share of roughly 99% and no real competition from AMD, the only other x86 chip maker. In the PC market, Intel’s market share is also very high (c. 80% as of Q1 2017).

Nvidia, Champion of the GPU

But towards the end of the 90s, this dominance was challenged by the rise of video games. I was a teenager and a gamer in those years: among my friends, it was all about the Graphics Processing Unit (GPU) in your machine, so that you could crank up the game settings and enjoy the highest level of detail for the monsters you’d shoot playing Doom or Quake.

Whereas CPUs are best at handling single-threaded, complex calculations extremely quickly, a GPU’s architecture, built around vector operations, is better at handling many simple calculations in parallel. This is perfect for the millions of pixels on a screen.
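To make the contrast concrete, here is a minimal Python sketch (illustrative only, and not how the original games were written): NumPy runs on a CPU, but its whole-array style mimics the “same simple operation applied to millions of pixels at once” pattern that GPUs are built for. The frame size and gamma value are arbitrary choices for the example.

```python
import numpy as np

# One 1080p RGB frame with values in [0, 1]; size and gamma are arbitrary.
frame = np.random.rand(1080, 1920, 3)

def brighten_sequential(img, gamma=0.8):
    """CPU-style thinking: visit every pixel one after another."""
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = img[y, x] ** gamma
    return out

def brighten_parallel(img, gamma=0.8):
    """GPU-style thinking: one bulk operation over every pixel at once."""
    return img ** gamma

# Both functions produce the same image, but the second expresses the work
# as a single data-parallel operation that hardware with thousands of simple
# cores (a GPU) can spread across all pixels simultaneously.
```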

The term GPU was popularized by Nvidia in 1999, and because they were able to provide this massive uplift in performance for a new use case, they rapidly came to dominate the segment. Nvidia keeps innovating and held a market share of around 75% as of Q1 2017.

ARM and mobility

A third wave of disruption gave rise to a new champion: mobility. The popularity of smartphones from the mid-2000s onwards required computing chipsets that consumed less energy in order to reduce heat and improve battery life.

Intel, AMD and Nvidia all missed the mobile revolution, and the booming smartphone market was taken over by ARM. ARM’s microprocessor technology is used in close to 100 percent of smartphones: it is at the heart of the iPhone, the Apple Watch and nearly every modern smartphone down to the cheapest Nokia. ARM proposed a simpler architecture, RISC (Reduced Instruction Set Computer), compared with the well-established CISC (Complex Instruction Set Computer) approach. The two differ in the number of instructions implemented in the silicon, which is far smaller in RISC, reducing complexity and power consumption.
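The sketch below is a toy Python model of that difference (not real ARM or x86 instructions): a CISC-style machine can offer a single rich “add memory to memory” instruction, while a RISC-style machine only provides simple load/add/store operations and composes them, which keeps the decoding hardware much simpler.

```python
# Toy machine state: a tiny "memory" and a few "registers".
memory = {"a": 5, "b": 7}
registers = {}

def add_mem_to_mem(dst, src):
    """CISC-style: one complex instruction that reads and writes memory directly."""
    memory[dst] = memory[dst] + memory[src]

def load(reg, addr):
    """RISC-style primitive: move a value from memory into a register."""
    registers[reg] = memory[addr]

def add(dst, r1, r2):
    """RISC-style primitive: arithmetic only ever happens between registers."""
    registers[dst] = registers[r1] + registers[r2]

def store(addr, reg):
    """RISC-style primitive: move a value from a register back to memory."""
    memory[addr] = registers[reg]

# CISC: "a = a + b" in a single instruction.
add_mem_to_mem("a", "b")
print(memory["a"])  # 12

# RISC: the same "a = a + b" takes four simple instructions,
# each of which is cheap to implement in silicon.
memory = {"a": 5, "b": 7}  # reset so both versions start from the same state
load("r1", "a")
load("r2", "b")
add("r1", "r1", "r2")
store("a", "r1")
print(memory["a"])  # 12
```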

New Champions

In the coming years, with the rise of Data and AI, we anticipate at least two new use cases that will give rise to new champions in processor architectures: Machine Learning and Big Data.

Machine Learning

Today, most Machine Learning algorithms run on GPU-based architectures, but just as the CPU wasn’t the best architecture for video and display, GPUs are not fully suited to the Machine Learning algorithms powering Artificial Intelligence, which rely on neural networks.

In order to address this need, Graphcore is developing the IPU (Intelligence Processing Unit), a graph-based architecture designed to improve machine learning, an essential tool in AI and data analysis. Their IPU-Appliance will power machine learning applications in the cloud and in enterprise datacentres, delivering a 10x to 100x performance increase.

IPUs are a new breed of processors that will play a fundamental role in future developments in data analysis and in AI applications such as marketing personalisation, fraud detection or self-driving cars.

Solving the Memory Wall

The other wave to monitor carefully is Big Data, with applications such as genomics, autonomous cars and massive-scale analytics.

In genomics, scientists estimate that by 2025 the data generated will outpace that of astronomy and social media. Genomics produces huge volumes of data: each human genome has 20,000 to 25,000 genes and roughly 3 billion base pairs. This amounts to about 100 gigabytes of data, equivalent to 102,400 photos. Processing such volumes of data efficiently requires innovative solutions. Autonomous cars will add even more complexity, with real-time requirements and around 4,000GB of data generated per car per day, which will need both local and cloud processing.
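As a quick sanity check on those figures, here is a back-of-envelope calculation in Python. The 1MB-per-photo figure and the one-million-car fleet are assumptions made for the example, not numbers from the article.

```python
# 100 GB per genome expressed as "photos", assuming ~1 MB per photo.
genome_gb = 100
photo_mb = 1                      # assumed average photo size
photos = genome_gb * 1024 // photo_mb
print(f"{photos:,} photos per genome")          # 102,400 -- matches the figure above

# Scale of autonomous-car data, assuming a hypothetical fleet of one million cars.
gb_per_car_per_day = 4000
fleet_size = 1_000_000            # hypothetical fleet size, not from the article
petabytes_per_day = gb_per_car_per_day * fleet_size / 1_000_000
print(f"{petabytes_per_day:,.0f} PB of data per day for the whole fleet")
```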

UPMEM is the only company to have cracked the “processor in memory” architecture. As digital capacities grow, companies face increasing challenges of scale. Applications in the data centre are processing more and more data, and the movement of data between RAM (where the data sits while being processed) and the CPU (where calculations are made) represents the largest energy cost and the main performance bottleneck, known as the Von Neumann bottleneck.
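A rough way to feel this bottleneck from Python (an illustrative experiment, not UPMEM’s technology): perform the same number of additions on an array that has to be streamed from RAM and on a small array that stays in the CPU’s cache. On most machines the cached version is noticeably faster even though the arithmetic is identical, because far less data crosses the memory bus. The array sizes are arbitrary.

```python
import time
import numpy as np

big = np.random.rand(50_000_000)   # ~400 MB: must be streamed from main memory
small = np.random.rand(500_000)    # ~4 MB: small enough to sit in the CPU cache

# Same total number of additions in both cases: 1 x 50M vs 100 x 500k.
t0 = time.perf_counter()
big.sum()
t_ram = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(100):
    small.sum()
t_cache = time.perf_counter() - t0

print(f"streamed from RAM : {t_ram:.3f}s")
print(f"kept in cache     : {t_cache:.3f}s")
# The arithmetic is the same; the difference is the cost of moving the data,
# which is exactly what processing-in-memory architectures aim to eliminate.
```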

Founded in 2015, UPMEM provides a solution to the “memory wall” problem by integrating innovative processing units within DRAM to increase effective memory bandwidth. Their technology is unique because the speed-up does not require compromising the current architecture of servers, and this is one of the reasons why UPMEM has already captured the attention of global leaders in the chip industry.

What’s next?

At C4 Ventures, we’re extremely proud to be investors in Graphcore and UPMEM. While we’re busy supporting them to become the champions they are poised to be, we’ll keep monitoring trends and needs in terms of performance computing.

Quantum computing is on the horizon, but what will be the next frontier? Will it be a meshed architecture that blends biology and electronics to fix disabilities? Will we see the creation of “connected intelligence” mixing AI and human intelligence? Any of these initiatives will require a new champion, and we’ll be on the lookout for it!

OH
