The Privacy and Security Threat of the iPhone 12 Pro LIDAR Sensor

The ability to capture a 3D image of anything has unforeseen consequences

Rob Sturgeon
Jan 19 · 7 min read
Image by PIRO4D from Pixabay

Face ID: A Security Upgrade but a Privacy Risk

Because we use smartphone touchscreens without a stylus, our phones end up covered in copies of our fingerprints. The iPhone 5S, the first iPhone to use Touch ID, was hacked within days of its release by lifting fingerprints left on its surface. Face ID avoids this problem: a bad actor who gets hold of your phone has no copy of your face to work from, and without your passcode they would need a convincing 3D likeness of your face to get in. Face ID does not just recognise your face as an image; it recognises its actual shape, including the unique combination of distances between your facial features. It does this by projecting a pattern of infrared dots onto your face with a dot projector, lighting it with a flood illuminator, and reading the result with an infrared camera to build a 3D depth map of the face.

During the current pandemic, people have been unable to use Face ID to unlock their iPhones because face masks cover too much of the face. This problem will persist unless users can get products like Facial ID Masks to produce a likeness that their phones will accept. The company promises to let you upload a picture of your face so it can print a mask that looks like you, but it seems it may never actually ship. Months later, the site still asks interested customers to join a mailing list to be notified when the product launches.

Those masks only look like you in a very superficial sense. The Facial ID Masks, if they work at all, probably require you to enrol your face again with the mask on: your face doesn’t normally have that shape, so the data stored in the Secure Enclave would need to be updated to match it. The Secure Enclave is an isolated area of Apple’s chip that holds this biometric data, out of reach of apps and even the main operating system. In August 2020, an ‘unpatchable’ Secure Enclave exploit was reported on Apple devices with the A11 chip and older (up to the iPhone X), but newer iPhones are not affected.
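It’s worth noting that third-party apps never touch the stored face data at all: they can only ask the system for a yes-or-no answer through the LocalAuthentication framework. A minimal sketch (the function name and reason string are just examples):

```swift
import LocalAuthentication

// Apps never receive the depth map or face model from the Secure Enclave.
// They only learn whether the scan matched the enrolled face.
func unlockWithFaceID() {
    let context = LAContext()
    var error: NSError?

    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        print("Biometrics unavailable: \(error?.localizedDescription ?? "unknown")")
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, evaluationError in
        print(success ? "Face matched" : "Failed: \(evaluationError?.localizedDescription ?? "unknown")")
    }
}
```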

Tricking Face ID

Since the iPhone X there have been apps like Bellus3D that are capable of creating a 3D model of your face. It has been possible to do this with any camera since at least 2011, but the TrueDepth front camera used for Face ID has made it far easier and more accurate. The catch is that there is a limit to the distance at which the 3D model can be made: scanning a person’s face requires the camera to be about as close to their face as it would be if they were enrolling their own face for Face ID. But once you have a 3D scan of someone’s face, 3D printing a mask that might trick their iPhone into unlocking becomes a realistic possibility.
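The TrueDepth data these scanning apps rely on is available to any app the user grants camera access to, through ARKit’s face tracking. A minimal sketch of capturing the raw face mesh (a plain UIKit view controller is assumed; exporting the mesh to a printable model is left out):

```swift
import UIKit
import ARKit

// Capture the TrueDepth face mesh that 3D scanning apps build their models from.
final class FaceCaptureViewController: UIViewController, ARSessionDelegate {
    private let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Face tracking requires a TrueDepth camera (iPhone X or later).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // ARFaceGeometry is a full 3D mesh of the user's face,
            // over a thousand vertices that could be exported for printing.
            print("Face mesh updated: \(faceAnchor.geometry.vertices.count) vertices")
        }
    }
}
```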

There are amazing examples of how the LIDAR sensor in the 2020 iPad Pro can create a 3D model of its surroundings. Simply walking around with the device lets the lasers it sends out measure distances, producing a photorealistic recreation of objects, rooms and buildings. The LIDAR sensor clearly has a far greater range than the TrueDepth camera, and that increases the distance at which a face can be scanned. In my tests I was able to scan a human face at a distance of 6 feet, with far greater accuracy within 1 foot. The overall shape of the head can be scanned at these distances, though the device struggles to measure the size of the nose. Now that LIDAR looks set to be a feature on iPhones for the foreseeable future, the accuracy of the sensor is likely to increase dramatically.
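Getting at that reconstruction takes surprisingly little code. A rough sketch using ARKit’s scene reconstruction (view setup is simplified, and exporting the mesh is omitted):

```swift
import UIKit
import ARKit
import RealityKit

// Build a 3D mesh of the surroundings using the LIDAR scanner.
final class RoomScanViewController: UIViewController, ARSessionDelegate {
    private let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        let config = ARWorldTrackingConfiguration()
        // Mesh reconstruction is only offered on devices with a LIDAR scanner.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        arView.session.delegate = self
        arView.session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        // Each ARMeshAnchor is a chunk of reconstructed geometry: walls, floors, furniture.
        let meshAnchors = anchors.compactMap { $0 as? ARMeshAnchor }
        print("Added \(meshAnchors.count) mesh anchors")
    }
}
```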

The Promise Of Blood Vessel Scanning

The problem holding back blood vessel scanning on phones is that current methods involve shining a near-infrared light from one side of the finger and detecting the illuminated blood vessels on the other side. Unless a phone has a hole to stick your finger into, so that the scanner can sit both below and above your finger, this approach simply doesn’t fit the form factor. But the advantages of the technology are obvious: no trace of your finger veins is left on the scanner or anywhere else, much like with Face ID.

Apple has registered patents that point to a possible solution in the future. There may be methods that can similarly scan the veins of the face, which would make it difficult if not impossible to produce a mask capable of tricking the device. It would also make it impossible for siblings to unlock the same device, as two brothers on Reddit found they were able to do. As the brothers themselves note, however, this was only possible because the passcode was known to both of them. When the device is unlocked with the passcode after a failed match, Face ID still uses the captured scan to refine the stored data.

If a face is sufficiently similar, a sibling’s face data could be incorporated and unlock the device later. The complete uniqueness of blood vessels would obviously eliminate this possibility. As the iPhone 12 has already been released without any mention of the technology, it will be the iPhone 13 generation or later before such a feature is announced. We will therefore have at least a year in which LIDAR is widely available on phones, which are much less conspicuous than iPads when it comes to secretly scanning someone in public.

This gap may present a problem for biometrics, as the very device that relies on Face ID now ships with a sensor that could serve as a long-range Face ID bypass tool.

The Dangers Of Capturing Real World Locations In 3D

Apple has its own Measure app, which admittedly works on older devices but is more accurate with LIDAR, so the value of calculating the size of real-world objects is clear. It’s inevitable that apps will be released that can measure a human body well enough to estimate clothing sizes. This is information people may not want to reveal, and it could lead to body-shaming and bullying in schools.
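Measuring is not a hard problem for developers, either. A rough sketch of the Measure-style approach, raycasting two screen points against the detected geometry and taking the distance between the hits (the function name is mine, and results are in metres):

```swift
import ARKit
import simd

// Estimate the real-world distance between two points tapped on screen.
// With a LIDAR device the raycasts hit actual scanned geometry, so the
// measurement is considerably more accurate than on older hardware.
func measuredDistance(in view: ARSCNView, from pointA: CGPoint, to pointB: CGPoint) -> Float? {
    guard
        let queryA = view.raycastQuery(from: pointA, allowing: .estimatedPlane, alignment: .any),
        let queryB = view.raycastQuery(from: pointB, allowing: .estimatedPlane, alignment: .any),
        let hitA = view.session.raycast(queryA).first,
        let hitB = view.session.raycast(queryB).first
    else { return nil }

    // The last column of each transform is the hit's position in world space.
    let a = hitA.worldTransform.columns.3
    let b = hitB.worldTransform.columns.3
    return simd_distance(SIMD3(a.x, a.y, a.z), SIMD3(b.x, b.y, b.z))
}
```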

There were concerns when Google Maps launched its Street View feature, as people worried that a burglar could see and assess any home as a potential target. While that fear was probably exaggerated, it will hardly reassure anxious residents to know how easily a 3D map of the exterior of their homes can be made by anyone with an iPhone 12 Pro (Max or otherwise).

Because this scanning can be done without crossing onto someone’s property, it sidesteps the restrictions that apply to flying a drone over someone else’s land. You can simply scan the area and then move a virtual camera around the 3D model as much as you want. It’s not inconceivable that a criminal would pose as a plumber or an electrician and make a 3D model of parts of someone’s house when the resident isn’t looking. This would provide a way to calculate the size of doors and windows, for instance, or to identify areas where valuables might be stored.

LIDAR can (and no doubt will) be used with CoreML, Apple’s machine learning framework, for more accurate detection of objects.

Source: Machine Learning — Models at developer.apple.com

The CoreML website suggests many ML models, three of which (MobileNetV2, Resnet50 and SqueezeNet) are specifically for classifying the dominant object in the camera frame. DeeplabV3 provides image segmentation, which means the camera frame is broken down into individually detected elements, such as a piece of furniture being a separate entity from the floor or walls. The horribly named YOLOv3 can detect and locate 80 types of objects in the camera frame, which is impressive.

Note, however, that MobileNetV2, the image classification model mentioned above, can distinguish roughly 1,000 object types when one of them dominates the frame.
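Running one of these models over an image takes only a few lines with the Vision framework. A hedged sketch, assuming MobileNetV2 has been downloaded from Apple’s models page and added to the project (the generated MobileNetV2 class comes from that download):

```swift
import Vision
import CoreML

// Classify the dominant object in an image using the MobileNetV2 model.
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: MobileNetV2(configuration: MLModelConfiguration()).model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let observations = request.results as? [VNClassificationObservation],
              let top = observations.first else { return }
        // Prints something like "laptop 0.92" for the dominant object.
        print("\(top.identifier) \(top.confidence)")
    }

    try VNImageRequestHandler(cgImage: image).perform([request])
}
```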

Now obviously these models are trained on 2D images, so they can’t take advantage of data from the LIDAR sensor. But it’s easy to imagine the general shape of objects helping subsequent models tell them apart. Considering that a significant number of iPhone users will soon be able to take a 3D scan of any object and upload it to social media, possibly even tagged with what the object is, the datasets needed for future machine learning models will grow quickly.
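The raw material for such depth-aware models is already exposed to developers: on LIDAR devices, ARKit pairs every camera frame with a depth map. A minimal sketch of reading it (the class is mine; training a model on this data is well out of scope here):

```swift
import ARKit

// Read the LIDAR depth map that ARKit delivers alongside each camera frame.
final class DepthCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        // Scene depth is only available on devices with a LIDAR scanner.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            config.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // frame.capturedImage is the RGB image; sceneDepth is the matching depth map.
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        print("Depth map: \(CVPixelBufferGetWidth(depthMap)) x \(CVPixelBufferGetHeight(depthMap)) distances")
    }
}
```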

This would make it easier than ever to discover valuables: combined with price estimates, CoreML could let criminals instantly identify and value the items in someone’s home, given just a few seconds to scan it.

It’s really hard to predict the potential dangers of being able to instantly create a 3D model of any object or location.

As with many new technologies, it’s likely that we will only find out the ways LIDAR can be misused after it’s already widespread.

Mac O’Clock

The best stories for Apple owners and enthusiasts

Thanks to Anupam Chugh

Written by Rob Sturgeon

An iOS developer who writes about gadgets, startups and cybersecurity. Swift programming tutorials and SwiftUI documentation too. robsturgeon.com
