Biometrics in Motion

I have always found the term “contactless” to be misleading because it only makes sense for fingerprints, which have traditionally required a user to place their hand or finger on a sensor. Having to touch a sensor reduces throughput and also presents potential hygiene issues. However, iris and face images have always been contactless, so calling this new technology “contactless” doesn’t really paint the full picture. The term “in motion” seems to bring out the key differences and challenges in this aspect of biometric technology.

Biometrics in Motion (BIM) is usually pitched as a technology that can increase the throughput of a biometric system because it doesn’t require users to stop and engage directly with a sensor. Stopping tends to create queues of people and delays. Queues in the commercial world are annoying, but in a war zone a queue of people might be a terrorist target. Our goal is to allow people to walk normally into a secure area and be identified using biometrics without stopping or slowing down.

Biometrics or identity in motion technology presents several challenges centered on usability. With a static sensor such as a fingerprint pad, the system can notify the user that their finger isn’t positioned correctly, offer corrective guidance, and then wait until a high-quality biometric is collected. With BIM we don’t have that luxury: we have to take what we can get and hope for the best, because we can’t tell people to stop. This is why biometrics in motion will typically involve multiple biometric modalities. If one is missed, we can use one of the others. The system we have been working on uses all three: fingerprint, face and iris. Any one of those is more than sufficient to uniquely identify a person in a 1M record database.
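The claim about a 1M record database comes down to simple arithmetic: the expected number of false matches in a one-to-many search is roughly the per-comparison false match rate (FMR) times the gallery size. A minimal sketch, with illustrative (assumed) FMR values rather than measured figures for any real matcher:

```python
# Back-of-the-envelope check: expected false matches when searching a
# gallery of 1,000,000 records. The per-comparison false match rates
# below are illustrative assumptions, not measured performance figures.
GALLERY_SIZE = 1_000_000

assumed_fmr = {
    "face": 1e-4,         # assumed: one false match per 10,000 comparisons
    "fingerprint": 1e-5,  # assumed: one per 100,000
    "iris": 1e-6,         # assumed: one per 1,000,000
}

for modality, fmr in assumed_fmr.items():
    expected_false_matches = fmr * GALLERY_SIZE
    print(f"{modality}: ~{expected_false_matches:g} expected false matches per search")
```

The point of the exercise: a modality whose FMR is well below one over the gallery size will, on average, return the right person alone, which is why any single strong modality can carry an identification when the others are missed.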

A second challenge is the problem of motion itself. When we take images of people while they are moving, we need much more processing power than we would if they were still. If a person walks up to a static sensor, we can take several seconds to wait for a good image, and if the person is relatively still we should get one quickly. However, if they are walking at a normal pace we might need to process 20–30 “frames” of video per second just to get a few frames that are usable for matching. Examining those frames, finding the features, and measuring quality in order to generate a template takes an enormous amount of processing power, many times what a static sensor requires. This problem is magnified with irises, because a good iris image needs a lot of pixels; capturing that from a distance requires heavy processing as well as advanced optics and infrared lighting to find a high-quality iris for matching. Capturing faces is relatively easy because faces don’t need as much resolution.
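The core of the frame-processing problem can be sketched as a quality gate over a video stream: score every frame, keep only the few that clear the matching threshold. Here `quality_score` is a hypothetical stand-in for a real sharpness/pose/resolution check, and the sample data is invented to show the typical yield:

```python
# Sketch: gate a ~30 fps stream on per-frame quality, keeping only the
# frames good enough for template generation.

def quality_score(frame):
    # Hypothetical placeholder: a real system would measure focus,
    # pose, and resolution of the detected face/iris region.
    return frame["sharpness"]

def usable_frames(frames, threshold=0.8):
    """Keep only frames whose quality clears the matching threshold."""
    return [f for f in frames if quality_score(f) >= threshold]

# One second of video at 30 fps: most frames are motion-blurred and
# only a handful clear the bar. (Invented scores for illustration.)
second_of_video = [{"sharpness": s} for s in [0.3] * 27 + [0.85, 0.9, 0.88]]
good = usable_frames(second_of_video)
print(f"{len(good)} of {len(second_of_video)} frames usable")
```

Every one of those 30 frames still has to be scored, which is where the processing cost lives: the work scales with the frame rate, not with the handful of frames that end up being matched.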

The systems we work on are “cooperative” in that they require the users to know the system and know how to use it. For example, they must pass their hand closely over the fingerprint sensor and must look at the iris sensor as they pass through the system. These systems are by definition voluntary because they require users to perform actions or they just won’t work. However, as this technology advances it will become more “passive” in that it won’t require as much cooperation from the user. Eventually it will reach a point where biometrics can be collected without the user’s knowledge or consent. We are already close to this point with facial biometrics, but within a few years passive collection of irises and fingerprints will also be possible. As with any new technology, there are issues around privacy that need to be addressed.

Originally published at