The one thing you missed about Apple’s new GPU

Stuart Russell
You.i TV
3 min read · Apr 13, 2017

Apple’s new GPU is going to change the game for app development. Here’s why.

So Apple is building their own GPU.

Or should I say, Apple just unveiled how they think apps should be built for the foreseeable future.

Whenever Apple does something like this, it signals a significant, market-moving shift in how things are done. Unfortunately, the narrative has focused on Imagination Technologies, the GPU supplier Apple is leaving behind.

I think we need to start the right conversation about what this means for the rest of us.

Let’s start with the basics.

In the old app paradigm, the GPU was always a second-class citizen to the almighty CPU.

The CPU, the quarterback of the processing world, took center stage. Its very large instruction set allows for more complex operations, and it relies on a small, fixed number of cores for multiprocessing: 8 cores, or all the way up to 12 if you're really serious.


The GPU architecture, on the other hand, is built from highly parallel RISC (reduced instruction set) cores. It was designed around pictures and graphical data rather than a more general compute model. However, a GPU can have upwards of 1,000 cores at its disposal: each far less powerful than a CPU core, but there are a lot of them.
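To make that concrete, here is a minimal sketch of what the parallel model looks like in practice: an OpenCL C kernel (the open standard we come back to below) where each work-item handles exactly one element. The kernel name and the brightness-scaling task are invented for illustration; nothing here is specific to Apple's hardware.

```c
// Illustrative OpenCL C kernel: scale the brightness of an image buffer.
// Each work-item touches exactly one pixel, so the runtime can fan the
// job out across however many cores the GPU happens to have.
__kernel void scale_luma(__global const float *in,
                         __global float *out,
                         const float gain)
{
    size_t i = get_global_id(0); // this work-item's slot in the 1-D range
    out[i] = in[i] * gain;       // one tiny task; thousands run in flight
}
```

Each core does almost nothing on its own; the win comes entirely from how many of them run at once.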

Gone are the days of the GPU being used to render simple polygons for third-person shooter video games. Today, puzzle games use GPUs, camera filters use GPUs, and VR and AR experiences dump all the hard work of object detection and facial recognition onto the GPU. Why? Today's users are conditioning the market to build better apps: apps they expect to be as fun and as intuitive as video games.

So what if you created a custom architecture based on the parallel computing RISC model but with a more mathematical compute focus?

In other words, what if you could divide and conquer?

Technology companies could exploit this approach for applications far beyond rendering, balancing the workload across the system's other processors and improving overall efficiency and performance.

Over 7 years ago, we did just that.

Starting from our game-engine philosophy of highly immersive, highly performant cross-platform experiences, we built our products on this divide-and-conquer approach. We worked directly with GPU vendors to bypass driver limitations and tap the processing power lying in wait.

Several years later, we welcomed the introduction of OpenCL from the Khronos Group, an open-standard framework for writing programs that execute across heterogeneous platforms made up of CPUs, GPUs, and other processors and hardware accelerators.
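To give a feel for what that heterogeneity means in code, here is a minimal, hypothetical OpenCL host program in C. It asks the platform for a GPU, falls back to the CPU if none is exposed, and runs the same kernel source unchanged either way (an in-place variant of the kernel sketched earlier). The data, kernel, and gain value are all invented for the example, and error checks are elided for brevity.

```c
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>   /* Apple ships the headers under a different path */
#else
#include <CL/cl.h>
#endif

/* The same kernel source runs on a CPU or a GPU device, unchanged. */
static const char *src =
    "__kernel void scale_luma(__global float *buf, const float gain) {\n"
    "    buf[get_global_id(0)] *= gain;\n"
    "}\n";

int main(void)
{
    float data[1024];
    for (int i = 0; i < 1024; i++) data[i] = (float)i;

    cl_platform_id plat;
    clGetPlatformIDs(1, &plat, NULL);

    /* Prefer a GPU; fall back to the CPU. Heterogeneous by design. */
    cl_device_id dev;
    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale_luma", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float gain = 2.0f;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(gain), &gain);

    /* One work-item per element; the runtime spreads them over every core. */
    size_t global = 1024;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[10] = %.1f\n", data[10]); /* expect 20.0 */
    return 0;
}
```

Whether that device ends up being a CPU, a GPU, or a dedicated accelerator, the application code does not change; that is the divide-and-conquer model in a nutshell.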

So this week, we see another major milestone for app development.

With this news from Apple, just think about the possibilities if the GPU were fully optimized for applications beyond rendering.

Beyond the obvious implications, like longer battery life and more processing power for more complex applications, we can't wait to see what other innovations come from this. Apple has not yet entered the mobile AR and VR realm with a GearVR competitor; perhaps this is the crucible for launching that offering.

Given the advantages we’ve found just through open APIs like OpenCL, think about what this could mean for the user, not to mention the mobile experience, if Apple releases this power to its developer community.

We applaud Apple. If anyone has the ability to pave the way and push the boundaries on this front, it's them.

All we have to say is — what took you so long?
