Whatever happened with Apple’s PrimeSense acquisition?

Figuring out where 350 million dollars went

Matt Sayward
Jun 30, 2015


In November 2013, Apple acquired an Israeli 3D-sensor company named PrimeSense for a reported $350–360 million. As Apple acquisitions go, that’s a biggie. Only Beats (the foundation of Apple Music, at $3bn), NeXT (the deal that brought Steve Jobs back, for $400m), and AuthenTec (the $390m that manifested itself in Touch ID) were certifiably bigger buys.

And yet, a year and a half on, we still can’t say with any certainty what happened to PrimeSense’s technology.

The first version of PrimeSense’s technology powered Microsoft’s Kinect, so the obvious lines were drawn for what Apple would do with it: something based in the living room that would breathe fresh life into the Apple TV.

That’s certainly the obvious conclusion, but what if it’s not the correct one?

On this week’s excellent episode of The Talk Show, John Gruber and Horace Dediu discussed mapping software, and the different incentives for companies like Apple, Google and Uber to get maps right. It crossed my mind that Apple could actually be using the depth detection in PrimeSense’s sensor system to build a much more immersive rival to Google’s Street View.

All told, though, that seemed like small potatoes. So I kept thinking about it.

Did you know that Apple is the world’s biggest camera seller?

The camera on the iPhone is not the best in the world, but it’s the most popular, it’s always with you, and it’s increasingly consistent in a multitude of settings.

Because of Apple’s tight control of their entire stack (hardware, software, and services), they’re able to create unified experiences that other camera companies simply aren’t capable of.

Apple’s capital enables their R&D spend. Coupled with their expertise in industrial design and semiconductors, that lets them push their mobile processors beyond the capabilities of their competitors while, crucially, manufacturing their inventions at a large enough scale to bring them to market feasibly and affordably for their customers.

The likes of Canon and Nikon are simply not capable of creating a 64-bit chip with the kind of raw processing power found in the iPhone 6 Plus; even their most expensive DSLR offerings don’t ship with one.

Take this from Minimal Mac:

First, instead of packing in more megapixels they packed in a sensor that delivered bigger pixels. Because, as Phil Schiller so pointedly stated, Bigger pixels = better picture. Bigger pixels mean more light, better range of color, and less noise.

Second, before you even take the picture it automatically sets the white balance, exposure, creates a dynamic local tone map, and matrix metering autofocus for fifteen focus zones (a feature not even all dSLRs have). Then, once you take the shot it actually takes three and then analyzes each in real time for which is the sharpest and that is the one you see.

Third, the new True Tone Flash. Now, I want you to understand something, there are photographers who spend thousands of dollars on flash and lighting equipment alone to achieve what this flash can do. It combines both a cool white and warm amber LED and, in real time analyzes the color of the surrounding and fires the flash to suit, thus giving you the best possible flash for that environment (over 1000 possible color variations). No other flash in any camera ever produced can do this. Let that sink in.

Next, auto image stabilization that, in real time, analyzes those multiple photos it takes with each shot and then — if they are all a bit blurry from movement or shaking, selects the sharpest portions of each image and combines them into the best possible picture.

Throw in burst mode at ten frames per second with the added bonus of allowing the camera to select the best of the shots based on a dozen variables, slow motion ability in the video shots (which captures at HD, 720p, 120 fps), and the fact you won’t have to spend a thousand dollars on some dSLR that would only get you half of these features because the rest are world first and not available in any other camera, and you know what you have?

Disruption. Apple just put the point and shoot camera industry (and some of the “Pro-sumer” dSLR ones) out of business.

The above citation actually describes the iPhone 5S, not the iPhone 6 or 6 Plus — Apple are even further ahead at this point.

So how do you move the needle even further from here?

Last November, on another episode of The Talk Show, Gruber dropped an unusually heavy hint about what he’d heard about the upcoming set of iPhones due to debut in Q3 of 2015:

The specific thing I heard is that next year’s camera might be the biggest camera jump ever. I don’t even know what sense this makes, but I’ve heard that it’s some kind of weird two-lens system where the back camera uses two lenses and it somehow takes it up into DSLR quality imagery.

Well, I had a think about this. And I might have something feasible.

Two lenses, two purposes

Lens 1:

The next iteration of the lens already found in the iPhone. The goal of the lens? Put everything into focus. Expect this new lens to have a staggeringly large depth of field for its size.

Lens 2:

The purpose of the second lens would be to map the actual distance of what is in front of the camera. This is where PrimeSense’s technology and sensors come in. Imagine a heatmap, where things that are cold are blue and green, and things that are hot are red and orange. Now imagine a depthmap, where things that are close to the camera lens are lighter and things that are further away are darker.
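To make the depthmap analogy concrete, here’s a minimal sketch in Swift of what that second layer of data might look like: a plain grid of per-pixel distances, normalised so that near points come out light and far points come out dark. The DepthMap type and the sample values are hypothetical, purely to illustrate the idea; this is nothing Apple has shipped.

```swift
// A toy stand-in for the depthmap: a grid of per-pixel distances.
struct DepthMap {
    let width: Int
    let height: Int
    let distances: [Double]   // row-major, one distance per pixel (metres)

    // Normalised "lightness" for a pixel: 1.0 = nearest point in the
    // scene (lightest), 0.0 = farthest point (darkest).
    func lightness(x: Int, y: Int) -> Double {
        guard let near = distances.min(), let far = distances.max(), far > near else {
            return 1.0   // flat scene: everything equally close
        }
        let d = distances[y * width + x]
        return 1.0 - (d - near) / (far - near)
    }
}

// A tiny 2x2 scene: the top row is a nearby subject, the bottom row is background.
let map = DepthMap(width: 2, height: 2, distances: [0.5, 0.6, 3.0, 4.0])
print(map.lightness(x: 0, y: 0))   // 1.0 (nearest pixel, rendered light)
print(map.lightness(x: 1, y: 1))   // 0.0 (farthest pixel, rendered dark)
```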

The secret sauce:

Controlling the stack.

First up, hardware. Thanks to the A8 and presumably the A9 to come this September, Apple can process images staggeringly fast. They can manipulate images and videos so efficiently that they can render in real time.

Secondly, crucially, software. A picture taken with this two-lens system would essentially have two layers of information. The first layer is the photograph itself, which, thanks to its large depth of field, shows almost everything within range in focus. The second layer is the depthmap. Because everything starts out in focus, you could simulate defocusing simply by tapping on different areas of the photograph.
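As a rough sketch of that tap interaction, assuming the hypothetical DepthMap type from the earlier example: tapping a point in the photograph would simply look up the distance stored at that pixel and treat it as the new focal plane.

```swift
// Hypothetical tap-to-refocus: the tapped pixel's distance becomes the
// focal plane; everything at that depth stays sharp, and everything else
// gets progressively defocused (see the lens simulator sketch below).
extension DepthMap {
    func focalDepth(tapX: Int, tapY: Int) -> Double {
        return distances[tapY * width + tapX]
    }
}

// Tap the near subject in the 2x2 example scene from before.
let tappedFocalDepth = map.focalDepth(tapX: 0, tapY: 0)   // 0.5
```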

By combining the information in the depthmap with their software engineering expertise, Apple could easily create a ‘lens simulator’ that mimics the look of lenses of different sizes and depths of field, all by capturing as much clarity as possible in the source input and applying the computing power to simulate the changes in focus in extraordinary quality.
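Here’s a minimal sketch of what such a ‘lens simulator’ could boil down to, using the hypothetical types from the earlier sketches: the blur applied to each pixel grows with its distance from the chosen focal plane, and a larger simulated aperture exaggerates that falloff, giving the shallower apparent depth of field of a big, fast lens. Again, this is purely an illustration of the idea, not Apple’s implementation.

```swift
// Hypothetical lens simulator: per-pixel blur driven by the depthmap.
struct SimulatedLens {
    var aperture: Double   // larger value = shallower apparent depth of field

    // Blur radius for a pixel sitting `depth` away from the camera when
    // the focal plane is at `focalDepth`. A real renderer would feed this
    // into a blur filter; here we just report the number.
    func blurRadius(depth: Double, focalDepth: Double) -> Double {
        return aperture * abs(depth - focalDepth)
    }
}

// Compare a "wide open" fast lens with a "stopped down" one, focusing on
// the near subject (depth 0.5) and looking at the far background (depth 4.0).
let wideOpen = SimulatedLens(aperture: 4.0)
let stoppedDown = SimulatedLens(aperture: 0.5)
print(wideOpen.blurRadius(depth: 4.0, focalDepth: tappedFocalDepth))     // 14.0 (heavy background blur)
print(stoppedDown.blurRadius(depth: 4.0, focalDepth: tappedFocalDepth))  // 1.75 (most of the scene stays sharp)
```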

Taking good photographs, especially with an SLR, is not easy. The learning curve is steep. But in truth, it’s actually kind of difficult to take a truly bad picture with an iPhone, and deliberately so. A progression like this would make it even harder.

Being able to get an amazing shot with that oh-so-desirable blown-out-blurred-background, with no photography lessons or bag full of camera gear necessary, all enabled by letting cutting-edge hardware and software do things that purpose-built cameras don’t have the opportunity or power to even try?

Doesn’t that sound like the Apple thing to do?

Want help with the next iteration of your product or app? I can help you take a look at things from a fresh perspective. You can reach me on email at hello@mattsayward.com, or find me on Twitter at @mattsayward.
