BT/ Biometric digital ID providers partner with Microsoft on decentralized ID, passwordless pilot

Paradigm · Mar 15, 2021 · 41 min read

Biometrics biweekly vol. 7, 1st March — 15th March

TL;DR

  • Biometric digital ID providers partner with Microsoft on decentralized ID, passwordless pilot: AuthenTrend, Acuant, AU10TIX, Idemia, Jumio, Socure, Onfido, Vu Security involved in Ignite announcements.
  • LG Innotek and Microsoft have announced a new 3D sensing collaboration to integrate LG Innotek’s Time of Flight (ToF) cameras into Microsoft’s Azure Depth 3D sensing platform.
  • Facebook has announced the development of a new computer vision model, named SEER (SElf-supERvised). SEER has been pre-trained on a billion public (non-EU) Instagram images, and is able to make inferences between the data’s parts, unlike most CV models, which learn from pre-labelled datasets.
  • Facebook’s upcoming smart glasses are due to launch this year, and the company is assessing whether face biometrics could be an added feature to the devices.
  • Fingerprint Cards turns to mirrors in a new biometric in-display sensor patent.
  • The Biometrics Institute has released a high-level overview of how biometrics intersects with digital identity onboarding to guide decision-makers considering, or already implementing, the use of biometrics in online sign-ups.
  • Researchers show deepfakes can beat face biometric web services, propose defense strategy.
  • Scientists developed a clever way to detect Deepfakes by analyzing light reflections in the eyes.
  • Corning report suggests iris biometrics improvements with new liquid lens technology.
  • A Europe-wide study of students with Special Educational Needs and Disabilities (SEND), conducted to explore whether biometric and online authentication methods for online education portals affected students’ willingness to use the resource, indicates that offering a range of authentication methods matters more than choosing or avoiding any particular one.
  • Biometrics providers must adopt new models to leverage IoT monetization opportunities: ABI Research.
  • What information is stored in face biometric templates? EAB explores.
  • AnyVision facial recognition scores among FRVT 1:1 leaders, CloudWalk revealed as MoonTime developer.
  • IDmission and ForgeRock protect digital identity data with security certifications.
  • Mastercard and International Chamber of Commerce to foster interoperable health passes.
  • NEC America explains top biometric accuracy finish for masked faces in DHS Rally.
  • Isorg fingerprint biometrics module based on organic photodetectors PIV-certified by FBI.
  • Paravision and Innovatrics each score among biometric accuracy leaders in US federal agency testing.
  • Smart Engines and Promobot partner on next-gen data scanning technology for digital ID documents.
  • Idemia joins a UK pilot with IDway biometrics for digital ID checks against passport data.
  • Third-party test shows ID R&D leap forward in voice biometrics accuracy.
  • UN economist makes case for biometric vaccine passport adoption by developing nations.
  • Iris biometrics facilitate change in the aid sector worth hundreds of billions: IrisGuard says tech to go mainstream.
  • A market for emotion recognition grows without tackling deep concerns by the public.
  • Biometrics industry events. And more!

Biometrics market

The biometric system market is projected to grow from USD 36.6 billion in 2020 to USD 68.6 billion by 2025, at a CAGR of 13.4% during the forecast period. The increasing use of biometrics in consumer electronic devices for authentication and identification, the growing need for surveillance and security amid the heightened threat of terrorist attacks, and the surging adoption of biometric technology in automotive applications are the major factors propelling the growth of the biometric system market.

Biometric Research & Development

Latest Research:

Am I a Real or Fake Celebrity?

by Shahroz Tariq, Sowon Jeon, Simon S. Woo

Researchers show deepfakes can beat face biometric web services, propose defense strategy

Commonly used methods for generating deepfakes can result in images that regularly defeat face biometric algorithms, according to a new report by researchers at Sungkyunkwan University Suwon in South Korea.

The three researchers pit deepfake impersonation attacks against commercial facial recognition web services from Microsoft, Amazon and Naver for identifying celebrities. Shahroz Tariq, Sowon Jeon and Simon S. Woo state that the attacks can easily be generalized to non-celebrities.

They attempted targeted attacks, intending to trick the algorithm into misidentifying the submission as a particular celebrity, and non-targeted attacks, to trick the algorithm into mistakenly identifying the image as any celebrity, the latter of which were consistently successful.

When making mistakes, the biometric algorithms returned high confidence scores, in some cases higher than the real image, which the study authors attribute to the deepfakes retaining key identity data.

The researchers used three publicly available datasets and two custom ones they created to generate a total of 8,119 deepfakes, then extracted faces from the frames to submit to the web APIs.

They found that some methods of attack are more successful than others, and each biometric matching system responds differently to deepfakes.

With images taken from the VoxCelebTH dataset, Microsoft’s Azure Cognitive Services API identified 78 percent of deepfakes the researchers submitted to it as the targeted celebrity, while Amazon mismatched 68.7 percent of submitted images. Overall attack success rates across the five datasets used in the test were 28 percent for Amazon, 33.1 percent for Microsoft, and 4.7 percent for Naver, but fell to less than 4 percent, 5 percent, and 1 percent respectively when the researchers employed a proposed defense method. The researchers declared “no clear winner among the three APIs” in terms of resistance to deepfake impersonation.

The researchers’ proposed defense against the deepfake impersonation attacks applies off-the-shelf deepfake detectors in front of the biometric API. They plan to build a REST API to screen incoming requests to the celebrity facial recognition APIs.

“The proposed defense method can provide excellent results. And, to some extent, it can be an effective defense mechanism,” the researchers write. “However, these off-the-shelf models may not be optimal against each DI attack, and false positives can play a vital role in increasing the attack’s success rate. In addition, due to the rise of new deepfakes, existing detection models are not guaranteed to work well against them. Therefore, a more generic and effective defense method against different types of existing and new DI attacks is urgently required. And more research is needed in that direction, exploring transfer learning, domain adaptation, and meta transfer learning to better cope with new DI attacks.”
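As a rough sketch of that screening idea, a gateway can run an off-the-shelf detector on every incoming image before forwarding it to the recognition service. The `detector` and `recognition_api` callables below are hypothetical stand-ins, not the authors’ models or any vendor’s real endpoint:

```python
# Sketch of the proposed defense: screen each image with an off-the-shelf
# deepfake detector before it ever reaches the celebrity-recognition API.
def make_screened_api(detector, recognition_api, threshold=0.5):
    """Wrap a recognition API so suspected deepfakes are rejected up front.

    detector(image) -> estimated probability the image is a deepfake.
    recognition_api(image) -> the downstream service's normal response.
    """
    def screened(image):
        fake_score = detector(image)
        if fake_score >= threshold:
            return {"status": "rejected",
                    "reason": "suspected deepfake",
                    "fake_score": fake_score}
        return {"status": "ok", "result": recognition_api(image)}
    return screened

# Toy usage with stub models (hypothetical, for demonstration only):
stub_detector = lambda img: 0.9 if img.get("synthetic") else 0.1
stub_api = lambda img: {"celebrity": "unknown"}
screened = make_screened_api(stub_detector, stub_api)
```

The detection threshold would be tuned to trade off false rejections against attack success, which is where the false positives the authors mention come into play.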

A paper presented earlier this year showed a troubling new deepfake method capable of defeating deepfake detectors.

Scientists developed a clever way to detect Deepfakes by analyzing light reflections in the eyes

A new AI tool provides a surprisingly simple way of spotting them: looking at the light reflected in the eyes.

The system was created by computer scientists from the University at Buffalo. In tests on portrait-style photos, the tool was 94% effective at detecting Deepfake images.

The system exposes the fakes by analyzing the corneas, which have a mirror-like surface that generates reflective patterns when illuminated by light.

In a photo of a real face taken by a camera, the reflection on the two eyes will be similar because they’re seeing the same thing. But Deepfake images synthesized by GANs typically fail to accurately capture this resemblance.

Instead, they often exhibit inconsistencies, such as different geometric shapes or mismatched locations of the reflections.

The AI system searches for these discrepancies by mapping out a face and analyzing the light reflected in each eyeball.

It generates a score that serves as a similarity metric. The smaller the score, the more likely the face is a Deepfake.
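A minimal version of such a similarity metric can be sketched as the intersection-over-union of the bright specular regions in the two (aligned) cornea crops. The fixed brightness threshold and the alignment assumption here are ours, not necessarily the authors’ exact pipeline:

```python
import numpy as np

def reflection_similarity(left_cornea, right_cornea, thresh=200):
    """IoU of the bright specular regions in two aligned cornea crops.

    Both inputs are grayscale arrays of the same shape. A low score means
    the two eyes' reflections disagree, i.e. a likely GAN-generated face.
    """
    left_bright = left_cornea >= thresh
    right_bright = right_cornea >= thresh
    union = np.logical_or(left_bright, right_bright).sum()
    if union == 0:
        return 1.0  # no reflections detected in either eye
    return np.logical_and(left_bright, right_bright).sum() / union
```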

The system proved highly effective at detecting Deepfakes taken from This Person Does Not Exist, a repository of images created with the StyleGAN2 architecture. However, the study authors acknowledge that it has several limitations.

The tool’s most obvious shortcoming is that it relies on a reflected source of light in both eyes. The inconsistencies in these patterns can be fixed with manual post-processing, and if one eye isn’t visible in the image, the method won’t work.

It’s also only proven effective on portrait images. If the face in the picture isn’t looking at the camera, the system would likely produce false positives.

The researchers plan to investigate these issues to improve the effectiveness of their method. In its current form, it’s not going to detect the most sophisticated Deepfakes, but it could still spot many of the cruder ones.

The study paper is available on the arXiv pre-print server.

Improving iris recognition with liquid lens technology

Materials science innovation firm Corning has released a new report describing a technique to improve iris biometrics with liquid lens technology.

The publication comes specifically from one of Corning’s business teams, Corning Varioptic Lenses, which focuses on the development of adjustable lens solutions for industrial applications.

According to the new paper, iris recognition and biometric verification systems are becoming increasingly widespread, thanks to the high level of security that derives from the iris’s unique and distinguishable features.

However, for these systems to be effective, they need to capture high-resolution photos while also keeping the image in focus.

The study notes that many of today’s biometric requirements for iris recognition were first established in a 2007 paper from the University of Cambridge, “New Methods in Iris Recognition.” In it, John Daugman described various “disciplined methods” for detecting and accurately modeling the iris’s inner and outer boundaries with active contours.

That paper also examined Fourier-based methods for solving problems with iris trigonometry and projective geometry, as well as statistical inference methods for excluding eyelashes.

Xavier Berthelon, an engineer in optics and imaging at Corning and author of the liquid lenses study, argues that systems capable of delivering Daugman’s high-precision results are traditionally bulky due to the optical constraints of mechanical lens-based cameras. This also makes them considerably expensive.

The new study aims at reducing the footprint of these systems, increasing their efficiency, and reducing their cost by introducing liquid lens devices.

The technology behind liquid lenses is called electrowetting, and it works by emulating the fluid, adaptable characteristics of the human eye to create a rapid response in variable light and movement scenarios.

It is important to note, however, that while the integration of a liquid lens within the optical system improves the final resolution and focus of the image, it does not directly increase the system’s depth of field. Instead, the technology allows the system to automatically adjust focus and maximize the sharpness on the user’s eye region with a fast response time of approximately 10 ms.
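A hedged sketch of that autofocus loop: sweep the lens drive setting, score each captured eye-region frame with a simple Laplacian sharpness measure, and keep the sharpest. The capture callback and drive-value range are illustrative; a real system would search coarse-to-fine to stay within the ~10 ms response budget:

```python
import numpy as np

def sharpness(image):
    """Variance of a simple Laplacian response; higher means sharper."""
    lap = (-4 * image
           + np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1))
    return lap.var()

def autofocus(capture, drive_values):
    """Pick the liquid-lens drive value that yields the sharpest frame.

    `capture(v)` is a hypothetical callback returning an eye-region frame
    at drive value v.
    """
    return max(drive_values, key=lambda v: sharpness(capture(v)))

# Toy capture model: a step edge that gets blurrier away from v = 42.
def toy_capture(v):
    img = np.zeros((16, 16))
    img[:, 8:] = 1.0  # sharp vertical edge
    for _ in range(abs(v - 42)):
        img = (img + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 3  # blur
    return img
```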

Liquid lenses also eliminate the need for the mechanical parts that are a staple of traditional camera systems and that wear down during use, according to the report. Thanks to these features, liquid lenses can reportedly endure hundreds of millions of cycles with low power consumption and at speeds comparable to conventional actuators.

Corning specified that liquid lenses can be integrated into an existing optical biometric system either as an add-on, being placed either at the front or at the rear of the optical system, or as an add-in component, in which case they have to be integrated within the optical stack.

As far as real-world applications are concerned, Berthelon’s study describes a number of commercial, governmental, and forensic uses for the new biometric recognition technology.

These range from verifying financial transactions at ATMs, in healthcare, and for access control, to national ID cards, border and passport control, and criminal investigation and suspect identification.

Biometrics providers must adopt new models to leverage IoT monetization opportunities: ABI Research

Players in the global biometrics industry must break away from certain traditional and inflexible models so as to better align with monetization opportunities in the Internet of Things (IoT) ecosystem, a recent study by technology advisory body ABI Research suggests.

The findings of ABI Research’s Transformative Horizons: Biometrics in the IoT report indicate that biometric players currently face significant obstacles breaking into the multifaceted and use-case-specific IoT environment, even as IoT connections are on track to reach 23.7 billion by 2026. The report, however, acknowledges that requirements for connectivity and digitization, along with higher demand for intelligence, will help advance the biometrics market in a wider range of IoT applications.

According to highlights of the study, the current misalignment is caused by a number of factors including a lack of IT security investments in the IoT ecosystem, difficulties demonstrating monetization opportunities for biometric and security service providers, complications faced by biometric vendors to break the hardware-based revenue model, as well as resistance against BaaS (biometrics-as-a-service).

“Biometric service providers do not need to re-invent the wheel to get a bigger slice of the IoT pie, but they also must not cling to traditional, inflexible models. They must prove to IoT players that biometrics can offer intelligent solutions for targeted applications, demonstrate the ease of device deployment, system integration and interoperability with centralized platforms, and showcase pricing model modularity and versatility,” Dimitrios Pavlakis, security analyst at ABI Research, said of the situation.

Pavlakis said it would make little sense for ‘IoT’ to be considered in its entirety by most biometric vendors, just like most IoT players are not drawn to biometric authentication. IoT applications in smart homes, connected vehicles, government, surveillance, casinos and other areas have different requirements, regulatory responsibilities, and monetization strategies.

He thus suggested that “Biometric players need to look past standard user authentication service offerings to penetrate the highly complex and multifaceted IoT ecosystem,” and “…should not look into IoT applications to solve problems that do not exist, but rather to add intelligence that IoT players did not know they could use.”

ABI forecast a 22 percent drop in biometric device revenues in 2020, followed by a bounce-back in 2021, in a report last year.

New guidance on digital onboarding with biometrics

The Biometrics Institute has released a high-level overview of how biometrics intersects with digital identity onboarding to guide decision-makers considering, or already implementing, the use of biometrics in online sign-ups.

Online sign-up processes — or onboarding — have been part of the digital transformation landscape for some time. In low-security contexts, like social media, onboarding requires minimal identity proof from the new customer. In more sensitive contexts, like banking and government services, greater identity proof is required to link the digital service to a specific person. These contexts have until recently typically used in-person, rather than remote, sign-up processes.

Pressure to streamline onboarding experiences in these more sensitive contexts is building from different directions. Customers have raised expectations based on simple sign-up experiences elsewhere. And as COVID-19 has curtailed our ability for face-to-face engagement, a remote option has become imperative for many organisations.

Business functions which rely on in-person implementation have been impaired by the pandemic, while remote onboarding and service delivery have experienced dramatic growth in many areas.

The new paper from the international membership organisation is aimed at bodies considering the attachment of a digital identity to a human identity using biometric technology.

The paper covers:

  • The re-use of an existing digital identity
  • Considerations in the process of attaching a digital identity to a person
  • De-duplication — the process of ensuring a unique representation of a person
  • Watchlists for screening the opening of accounts
  • Guidance in formulating strategies
  • Making ethical and responsible decisions in biometric applications

What information is stored in face biometric templates? EAB explores

Deeply-learned face representations underpin the success of today’s facial recognition systems. Although these representations are meant to encode an individual’s identity, recent works have shown that much additional information is stored within them.

A webinar organized by the European Association for Biometrics (EAB) and led by Philipp Terhörst, Research Scientist at Fraunhofer IGD, examined these issues last Tuesday.

The talk showed how many soft-biometric attributes are embedded in face biometric templates and that these attributes often have a strong correlation to face verification performance.

The event was divided into three parts, each answering a different question.

  1. What information is stored in face templates?

According to Terhörst, the main information stored in face biometric templates includes demographics, image characteristics, and social traits.

The research conducted by the Fraunhofer IGD scientist analyzed face templates from two models with respect to 113 attributes, and concluded that 74 of them could easily be predicted, particularly non-permanent ones.

Information stored in FaceNet (2015) templates. Credit: Terhörst et al.

Terhörst achieved this by training a massive attribute classifier (MAC) to jointly predict multiple attributes, such as face shape, beard type, and whether the individual is wearing lipstick or not.

The datasets used in the experiments were LFW and CelebA. Terhörst’s team analyzed 13 thousand images from over five thousand individuals, with up to 73 attribute annotations, from LFW. The CelebA testing took into consideration 200 thousand images from over ten thousand celebrities and 40 binary attributes.

Information stored in ArcFace (2019) templates. Credit: Terhörst et al.

The tests were carried out via FaceNet and ArcFace embeddings, and showed that head pose and social traits were the easiest to predict.

Face geometry, nose, and image quality, among others, were also predictable, while skin, mouth and environment were the hardest traits to predict.
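The MAC idea can be illustrated with a toy forward pass: a shared hidden layer over the face template feeding one independent sigmoid head per soft-biometric attribute. The shapes, weights, and attribute names below are assumptions for illustration, not the architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mac_predict(template, W_shared, heads):
    """Toy forward pass of a massive attribute classifier (MAC):
    one shared hidden layer over the face template, then an independent
    sigmoid head per soft-biometric attribute, predicted jointly.
    """
    hidden = np.tanh(template @ W_shared)            # shared features
    return {name: 1 / (1 + np.exp(-(hidden @ w)))    # P(attribute present)
            for name, w in heads.items()}

# A 512-d template (FaceNet/ArcFace-sized) and three hypothetical heads:
template = rng.standard_normal(512)
W_shared = rng.standard_normal((512, 64)) * 0.05
heads = {name: rng.standard_normal(64) * 0.1
         for name in ("beard", "lipstick", "round_face")}
probs = mac_predict(template, W_shared, heads)
```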

2. How does it relate to fairness in facial recognition?

In order to assess these biases, Terhörst then proceeded to analyze the influence of soft-biometric attributes on the performance of facial recognition algorithms, particularly ArcFace and FaceNet.

To do so, the scientist’s team used a database named MAAD-Face, which holds a large number of face images with many high-quality attribute annotations.

These included positive and negative control groups for each attribute, created by randomly selecting samples from the database. These synthetic groups had the same number of samples as their positive/negative counterparts.

“For example,” Terhörst explained, “if we had ten thousand sample images with eyeglasses, and ninety thousand without, for the positive control group we would just look for randomly selected ten thousand samples, and for the negative control group randomly selected ninety thousand.”
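That control-group construction is simple to sketch: split the database by the attribute, then draw random samples of matching sizes from the whole database, so any accuracy gap can be attributed to the attribute itself rather than to group size. The function below is an illustrative reading of the procedure:

```python
import random

def control_groups(samples, has_attribute, seed=0):
    """Split a database by an attribute and build size-matched random
    control groups drawn from the whole database."""
    pos = [s for s in samples if has_attribute(s)]
    neg = [s for s in samples if not has_attribute(s)]
    rnd = random.Random(seed)  # fixed seed for reproducibility
    pos_control = rnd.sample(samples, len(pos))
    neg_control = rnd.sample(samples, len(neg))
    return pos, neg, pos_control, neg_control
```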

The results of analyzing both facial recognition algorithms through the control groups showed that, in terms of demographics, middle-aged, senior, white, and male individuals showed higher recognition accuracy rates than young, Asian, Black, and female individuals.

Where visibility-related attributes were concerned, individuals with a fully visible forehead, a receding hairline, baldness, or no eyeglasses scored higher accuracy rates than those with an obstructed forehead, bangs, eyeglasses or wavy hair.

Temporary attributes like hats, earrings, lipstick and eyeglasses also reduced the precision of the facial recognition algorithms, while arched eyebrows, big or pointy nose, bushy eyebrows, double chin and high cheekbones were responsible for higher accuracy rates.

Both algorithms also scored better with smiling faces and closed mouths, as opposed to individuals’ non-neutral expressions. According to Terhörst, this might be due to the fact that a vast number of images from the database were from smiling celebrities.

Additional biases involved users’ hair and eye color, and whether or not they had a beard.

3. How can biases in facial recognition be mitigated?

Knowing encoded information in face templates might help to develop bias-mitigating solutions, Terhörst explained, proceeding to the third part of the webinar.

According to the scientist, however, previous works in this field required labels of the bias-related attributes beforehand, and could only mitigate specific biases. These actions have also reportedly been known to degrade the overall performance of facial recognition algorithms, as well as present difficulties in integration into existing systems.

A possible alternative to traditional systems would be fair score normalization (FNS). The technique, Terhörst explained, can operate on unlabelled data, and effectively mitigate biases of unknown origin.

FNS can allegedly also improve the performance of facial recognition systems considerably, and can be integrated easily into existing systems.
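A simplified reading of the FNS idea can be sketched as follows: embeddings are grouped by unsupervised clustering (no demographic labels needed), each cluster gets its own decision threshold at a fixed false-match rate, and comparison scores are then shifted so every cluster effectively operates at the global threshold. This is our paraphrase for illustration, not a faithful port of Terhörst’s method:

```python
def fair_score_normalization(score, cluster_a, cluster_b,
                             cluster_thresholds, global_threshold):
    """Shift a comparison score between samples from clusters a and b so
    that each cluster's local decision threshold is mapped onto the
    single global threshold, reducing cluster-dependent bias."""
    local = 0.5 * (cluster_thresholds[cluster_a]
                   + cluster_thresholds[cluster_b])
    return score + (global_threshold - local)
```

Because the clusters come from unlabelled embeddings, the normalization requires no knowledge of which demographic groups the clusters correspond to, which is what lets it mitigate biases of unknown origin.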

How FNS works. Credit: Terhörst et al.

The webinar was part of the EAB virtual events series on Demographic Fairness in Biometric Systems.

University students with special educational needs highlight the benefits of e-assessment

More than 250 participants in the study recognized the specific benefits provided by e-authentication but expressed concerns about the risk of technical problems

A Europe-wide study of students with Special Educational Needs and Disabilities (SEND), conducted to explore whether biometric and online authentication methods for online education portals affected students’ willingness to use the resource, indicates that offering a range of authentication methods matters more than choosing or avoiding any particular one.

The digitization of learning resources in higher-education (particularly as a result of COVID-19) has allowed for greater learning flexibility and availability, but inevitably demands greater use of personal identity data for authentication while logging onto online educational and resource portals. Last year, CourseKey released a biometric facial and fingerprint scanning solution to keep track of student attendance and curtail online education fraud.

David Bañeres, a professor at the Faculty of Computer Science at the Universitat Oberta de Catalunya (UOC) in Spain, ran a research study of 14,000 students at 8 European universities to evaluate the needs of those with special needs and disabilities. UOC utilises an e-assessment platform called TeSLA (Adaptive Trust-based E-assessment System for Learning), a system that facilitates access to online resources for SEND students.

“We evaluated the perceptions of students with special educational needs and disabilities in relation to the use of an authentication and authorship validation system and the sharing of personal biometric data, which, for some students, may include highly sensitive information related to their special need or disability,” Bañeres says of the study.

This type of sensitive data is protected under Europe’s privacy regulations (GDPR).

Personal student information could be compromised in a number of ways, so reliable digital authentication systems and trust in new educational technologies are paramount to student use.

At the start of the outbreak, TypingDNA offered free biometric authentication for educational institutes and students, to support remote learning.

The study ruled out special educational needs or disabilities as a variable impacting a student’s willingness to use a given biometric or authentication system, yet Bañeres suggests Higher Education Institutes should offer alternative authentication log-on methods to improve accessibility for everyone.

Age, gender, and previous experience with technology are more likely to affect the acceptance of an authentication system, the report concludes.

Main Development News:

Biometric digital ID providers partner with Microsoft on decentralized ID, passwordless pilot

Microsoft is launching its decentralized identity credentials for public preview this spring, and partnered up with many leading device-based biometrics providers to do so, according to an announcement at its Ignite conference and in a blog post:

Azure Active Directory verifiable credentials (AAD VCs) are intended to provide secure, user-controlled, revocable credentials that support Zero Trust security strategies. They will do this in part through a partnership with leading biometric digital ID providers Acuant, AU10TIX, Idemia, Jumio, Socure, Onfido, and Vu Security, which unveiled a multi-modal biometric video conferencing solution in December. The partners will work on improving verifiability and data security.

In a blog post, AU10TIX CEO Carey O’Connor Kolaja calls the collaboration “a critical milestone for our industry,” adding that the aim “is to improve verifiability while protecting privacy for businesses, employees, contractors, vendors, and customers.”

Microsoft will release its SDK in the next few weeks to allow developers to build the applications that issue and use the credentials.

Users will be able to use the Microsoft Authenticator App to share university transcripts, diplomas, and professional credentials at first, with plans to expand it to other credentials as new applications are developed.

The system is already being piloted at Japan’s Keio University, by the government of Flanders, Belgium, and the UK’s National Health Service. The NHS also implemented Yoti’s digital ID with face biometrics for workers for contactless credential proofing last year.

AAD VCs are built on the W3C’s WebAuthn open authentication standard, the Bitcoin blockchain and open protocol Sidetree, which is used to add new blocks. The Identity Overlay Network (ION) Sidetree implementation is customized but open-source, with organizations each verifying and storing identifiers on their own node.
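For orientation, the W3C verifiable-credential data model these credentials build on looks roughly like the JSON below. The issuer DID, claim names, and proof value are illustrative placeholders, not Microsoft’s actual schema:

```python
import json

# Minimal shape of a W3C Verifiable Credential. All values here are
# hypothetical stand-ins for illustration.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "did:ion:EXAMPLE_ISSUER",          # issuer's decentralized ID
    "issuanceDate": "2021-03-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:ion:EXAMPLE_HOLDER",          # holder's decentralized ID
        "degree": {"type": "BachelorDegree", "name": "BSc Computer Science"},
    },
    "proof": {
        "type": "EcdsaSecp256k1Signature2019",   # signature by issuer's key
        "jws": "EXAMPLE_SIGNATURE",              # placeholder signature value
    },
}

serialized = json.dumps(credential)
```

A verifier checks the proof against the issuer’s public key, which it resolves from the issuer’s DID on the ION network, so no central credential database is needed.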

Wired points out that the SolarWinds hack took advantage of flaws in organizations’ implementations of Active Directory, but the decentralized platform means that should an attack succeed in accessing stored data, it will be impossible to decrypt it without the private key held by the user.

Microsoft also announced the general availability of passwordless authentication for Azure Active Directory at Ignite, and its Passwordless Pilot Program launched last November in collaboration with AuthenTrend has been extended to cover the growing list of passwordless features supported by Microsoft.

LG Innotek and Microsoft partner up on 3D sensing smart camera project

LG Innotek and Microsoft have announced a new 3D sensing collaboration to integrate LG Innotek’s Time of Flight (ToF) cameras into Microsoft’s Azure Depth 3D sensing platform.

This will allow cloud-based 3D sensing for position and objects in a variety of applications from healthcare to logistics, including 3D face biometrics. LG Innotek will supply its ultra-slim camera modules with Microsoft’s 3D sensing technology and know-how to aid scalable smart monitoring systems based in the cloud.

LG Innotek’s depth-sensing camera technology powers Face ID biometrics on iPhones.

The partnership is aimed at adding value to Azure’s edge AI platforms used for computer vision applications. For example, Azure clients in the fitness industry will soon be able to track movements more accurately to provide better workout guidance. Similarly, the technology can assist health professionals in sensing body shapes and postures for more accurate robot-assisted surgeries.

Microsoft Silicon and Sensor Group Business Incubation Head Daniel Bar said, “LG Innotek brings world-class manufacturing expertise in complex optoelectronic systems. We are excited to welcome LG Innotek to our ecosystem and accelerate time to market for 3D cameras. This is a key step towards providing easy access for computer vision developers to create 3D vision applications.”

Retail and logistics clients can also utilize the cloud-based smart cameras to better monitor customer traffic and inventories, as well as tracking production lines.

Facebook self-supervised computer vision model promises object recognition breakthrough

Facebook has announced the development of a new computer vision model, named SEER (SElf-supERvised). SEER has been pre-trained on a billion public (non-EU) Instagram images, and is able to make inferences between the data’s parts, unlike most CV models, which learn from pre-labelled datasets, reports Venture Beat.

“Self-supervised learning has incredible ramifications for the future of computer vision, just as it does in other research fields. Eliminating the need for human annotations and metadata enables the computer vision community to work with larger and more diverse datasets, learn from random public images, and potentially mitigate some of the biases that come into play with data curation. Self-supervised learning can also help specialize models in domains where we have limited images or metadata, like medical imaging. And with no labor required up front for labeling, models can be created and deployed quicker, enabling faster and more accurate responses to rapidly evolving situations,” Facebook wrote in a blog post.

Self-supervision is believed to be key in the step away from machine learning, and towards human level intelligence. It could improve speech and object recognition, among other AI applications. A range of issues related to dataset collection and annotation have also plagued biometrics development, particularly in facial recognition.

Instagram’s terms of service allow the company to use data uploaded to it in almost any way, but as OneZero notes, the avoidance of images from European users is likely an attempt to avoid falling afoul of GDPR.

Images do not incorporate semantic concepts the way words do, so designing a model able to make these inferences required Facebook researchers to use a convolutional network (ConvNet) big enough to learn every visual concept from the images. Because the dataset is not labeled, Facebook plans to automatically populate it with new images every 90 days.

Development of SEER included several architectural components: an ultra-fast algorithm called SwAV, and RegNets (a ConvNet family) capable of scaling to billions of parameters without compromising run-time or accuracy. According to Facebook, the model outperformed the most advanced state-of-the-art self-supervised systems.
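For a flavor of self-supervision, the sketch below implements a SimCLR-style InfoNCE contrastive objective, in which two augmented views of the same unlabeled image must match each other rather than any human label. Note this is only an illustration of the general idea: SEER’s actual objective is SwAV’s online clustering, which avoids explicit pairwise negatives:

```python
import numpy as np

def info_nce_loss(view_a, view_b, temperature=0.1):
    """Contrastive self-supervised loss over a batch of embedding pairs.

    view_a[i] and view_b[i] are embeddings of two random augmentations of
    the same unlabeled image; every other row in the batch acts as a
    negative. Low loss means matched views are most similar to each other.
    """
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (N, N) cosine-similarity matrix
    # Cross-entropy with the diagonal (the matching pair) as the target:
    log_softmax = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -np.mean(np.diag(log_softmax))
```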

Facebook software engineer Priya Goyal says that the use of individuals’ Instagram pictures for research is covered by Instagram’s data policy, so there was no way for people to opt out of this data use. However, Goyal notes that Facebook does not plan to share the images or the SEER model itself due to potential biases, which are further described in the full research paper.

Fingerprint Cards turns to mirrors in a new biometric in-display sensor patent

Fingerprint Cards has received a U.S. patent for an optical biometric sensor with angled mirrors embedded in an active portion of a display.

The design captures more biometric information from fingerprints, in the form of greater resolution, through a display that has a protective glass or transparent epoxy layer. Such layers are a common feature on portable devices like phones and watches, as well as on access control systems.

Most of those devices have capacitive biometric sensors, which have been adequate for most identification tasks to date. The problem is that capacitive sensors require close proximity to the surface of a fingerprint to record its ridges and whorls with the higher fidelity that is increasingly required.

Even the thin protective layer used on electronic devices reduces a capacitive sensor’s effectiveness.

Fingerprint Cards’ patent gets around the problem of excess depth between a fingerprint and the sensor by collapsing that distance using mirrors.

Light illuminating a fingerprint bounces off the skin’s surface down through apertures to two sets of extremely small, angled mirrors. In some configurations of the patented device, LCD backlight is enough to do the illumination. A secondary source, though, is also described in the patent. The apertures can be set among LCD components, but in at least one option, the apertures are more or less free floating. In this case, the biometric sensors are set among the LCD components.

The first mirrors reflect light rays at a 90-degree angle across the layer in which the sensors are set to the second set of mirrors, which bounce the rays at a 90-degree angle down to sensors. This design gives a sensor a longer focal length without having to increase the distance between the sensor and the subject — the fingerprint.
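The folded-path geometry described above can be illustrated with a rough sketch in Python. All dimensions below are invented for illustration and do not come from the patent; the point is only that two 90-degree folds let light travel laterally inside a thin layer, so the optical path exceeds the module’s physical depth.

```python
# Sketch of the folded optical path: light drops to the first mirror,
# runs laterally across the sensor layer, then drops to the sensor.
# Two 90-degree folds mean the total optical path can be much longer
# than the stack is thick. All dimensions are hypothetical, in mm.

def folded_path_length(drop_to_mirror: float, lateral_run: float,
                       drop_to_sensor: float) -> float:
    """Total distance light travels along the folded path."""
    return drop_to_mirror + lateral_run + drop_to_sensor

stack_thickness = 0.3 + 0.1                       # cover layer + sensor layer
optical_path = folded_path_length(0.3, 1.5, 0.1)  # down, across, down

print(f"stack thickness: {stack_thickness:.1f} mm")
print(f"optical path:    {optical_path:.1f} mm")
assert optical_path > stack_thickness  # longer focal length, no extra depth
```

Even with these toy numbers, the light path comes out several times longer than the stack is deep, which is exactly the effect that gives the sensor a longer focal length without added thickness.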

Two other biometric patent applications from Fingerprint Cards, for ‘single-feature fingerprint recognition’ and an ‘authentication method for an electronic device’, have also been published by the USPTO.

The first is an attempt to match features beyond minutiae for less complex fingerprint matching algorithms, while the second describes a system for using finger-position as a means of enhancing protection against biometric spoof attacks.

AnyVision facial recognition scores among FRVT 1:1 leaders

AnyVision has earned top rankings for biometric accuracy across all five category groups in the latest Face Recognition Vendor Test (FRVT) 1:1 conducted by the National Institute of Standards and Technology (NIST).

The March FRVT 1:1 evaluated 177 individual biometric algorithms and reported the performance of their facial recognition technologies in one-to-one (1:1) scenarios. The test compared various face images against input images, measuring the algorithms’ accuracy, speed, storage and memory consumption, and resilience.

The ‘anyvision-005’ algorithm was not entered in the child category, but finished between 7th and 13th among all algorithms worldwide in the visa, visa border, border, wild, and both mugshot categories. The companies ahead of AnyVision on the overall 1:1 leaderboard all come from China, except VisionLabs, which is based in Russia.

“AnyVision technology is optimized for interpreting real-world video sequences on low power embedded devices, which is actually much harder to solve than what NIST focuses on, which is static images and photos, processed using Intel hardware,” explained AnyVision CTO Dieter Joecker.

NIST’s FRVT measures the performance of automated facial recognition technologies in order to assess their validity in a variety of different scenarios, including civil, law enforcement, and homeland security applications.

“The fact that we do not design our algorithms for NIST and still achieved very balanced, high rankings across all categories — including two that were not included in the initial ranking — is truly impressive,” Joecker added. “It’s a testament to how advanced our technology really is.”

The point was echoed by AnyVision CEO Avi Golan, who said that the results in the FRVT reflect the significant investments AnyVision has made to make its algorithms and AI engine among the most efficient in the market.

AnyVision has been quite active in 2021, recently filing a patent for delivery drones with facial recognition.

CloudWalk revealed as MoonTime developer

CloudWalk has been revealed as the developer behind the MoonTime and Hengrui AI Technology facial recognition algorithms, both notable for their outstanding performance in NIST’s recent FRVT tests across 1:1, 1:N, and Face Mask Effects tests.

The MoonTime ‘mt-003’ algorithm, which was first benchmarked by NIST earlier this year, tops the NIST 1:1 leaderboard after the March update, with top 5 finishes in the visa, mugshot, visa border and border categories. The same algorithm also finished second in the March update of the leaderboard for biometric accuracy with Face Mask Effects.

Hengrui’s ‘hr-000’ algorithm sits first among 1:N leaders, and places between 1st and 5th in each category except the long-duration mugshot match, which it was not tested for.

IDmission and ForgeRock protect digital identity data with security certifications

IDmission has announced it is now certified for ISO 27001:2013, assuring customers and partners that the company’s information security management system now officially complies with the global standard for best practices in information security management. At its core, the certification is a win for IDmission’s efforts to secure sensitive data like the four biometric modalities its technology supports.

“As a global leader in identity verification, we are committed to the protection of consumer data and ongoing risk management,” said Ashim Banerjee, CEO of IDmission. “We are proud that IDmission is now one of only a handful of organizations in the identity verification space to achieve this certification.”

IDmission demonstrated its success in providing end-to-end AI-powered digital identity solutions used in verification and authentication services. The ISO certification validated the efficacy of these measures in the company’s information security management system serving clients ranging from India to the United States.

ForgeRock announced the certification of its Identity Cloud platform to meet the Service Organization 2 (SOC 2) Type I requirements set forth by the American Institute of Certified Public Accountants (AICPA). The third-party certification validates the company’s security for enterprise-grade customer data services in the cloud.

ForgeRock highlights the comprehensive, extensible, and customizable nature of its SaaS-delivered digital identity platform as the key strengths that set it apart in the industry. Identity Cloud allows clients to run their full IAM platforms in the cloud, further strengthening seamless integration with on-site applications. Earning the AICPA’s SOC 2 Type 1 certification further reinforces Identity Cloud’s security, availability, and processing integrity. The certification follows ForgeRock’s recent recognition as the overall leader in a platform analyst report published by the KuppingerCole Leadership Compass for Customer Identity and Access Management (CIAM).

ForgeRock Chief Information Security Officer Russ Kirby said, “ForgeRock is committed to ensuring our customers’ data is managed with the highest standard of security and compliance. Our SOC 2 Type 1 certification is another significant achievement building on our ISO 27001 program. It also reinforces our leadership position in the industry as a trusted partner to our customers and the nearly three billion identities under management globally.”

Isorg fingerprint biometrics module based on organic photodetectors PIV-certified by FBI

A FAP10 optical fingerprint biometric module from Isorg made with organic photodetectors (OPDs) has been certified by the FBI, which makes it the first in a new organic photodiode sensor category to be approved, according to the company announcement.

The Isorg-Bio11 single finger livescan capture device at 500 ppi has been certified to the PIV (personal identity verification) specification 071006, according to the FBI website. The FAP 10 module is now approved for identity security applications with mobile devices, such as for access control at airports.

The scanner module is manufactured by printing organic photodiodes onto a Thin Film Transistor (TFT) backplane. Isorg says it is the only company in the world able to mass-produce this technology, and that its Limoges, France plant is ready to ramp up production to industrial quantities.

The module also offers high tolerance to bright light, whether indoors or outdoors, the company says, for consistent biometric data quality. It includes an image sensor, dedicated light source, optical filters and related electronics, all in less than 2mm thickness.

Isorg also plans to provide customers with a reference design with its Read Out Integrated Circuit (ROIC) and image enhancement software optimized for its OPD technology. The company also says anti-spoofing capabilities can be easily integrated into the hardware and software.

“This FBI certification confirms Isorg’s capacity to deliver biometrics modules based on organic electronics that rise to the challenges of the security market and meet its stringent requirements,” says Jean-Yves Gomez, CEO of Isorg. “We are the very first to gain security approval of an OPD fingerprint sensor that assures the high-level image quality, accuracy and robustness that customers need in border control, access control, voter identification, etc. The security market will continue to benefit from our ongoing developments to achieve certification on higher form factors (up to FAP 60) based on the same scalable OPD technology.”

NEC America explains top biometric accuracy finish for masked faces in DHS Rally

NEC Corporation of America’s face biometrics outperformed the median competition by 19 percent for identifying subjects wearing masks in the latest Biometric Technology Rally held by the U.S. Department of Homeland Security (DHS), the company has calculated.

The NEC-submitted algorithm, codenamed ‘Alan’ for the anonymized test, scored a 98.7 percent true identification rate (TIR) when used with the top-performing acquisition system, the best result in the Rally.
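True identification rate is a simple proportion: the share of genuine (mated) attempts in which the system correctly identifies the subject. A minimal sketch with made-up attempt data follows; the 987-of-1,000 split is chosen only to reproduce a 98.7 percent rate, not taken from the Rally’s raw data.

```python
# True identification rate (TIR): the fraction of genuine attempts in
# which the subject is correctly identified. Toy data only; the real
# Rally scoring also separates acquisition errors from matching errors.

def true_identification_rate(results):
    """results: iterable of booleans, True = correctly identified."""
    results = list(results)
    return sum(results) / len(results)

attempts = [True] * 987 + [False] * 13  # 987 of 1,000 attempts succeed
tir = true_identification_rate(attempts)
print(f"TIR: {tir:.1%}")
```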

“Face masks are not a new challenge to NEC. They have been quite commonly used to protect public health throughout Japan and parts of Asia ever since H1N1 in 2009, or even before that,” explains Kris Ranganath, CTO of NEC Corporation of America. “We’re pleased that this deep experience allows us to respond to the COVID-19 pandemic quickly to address the hygiene issues with effective contactless technology.”

The 2020 DHS Biometric Technology Rally was held at the agency’s Maryland Test Facility (MdTF), where the annual event takes place each year. Part two of the test was held in January.

NIST testing has suggested significant gains in biometric accuracy with masked faces by algorithms from many companies over the course of the pandemic.

In the DHS Rally, NEC America’s algorithm outperformed its competition in combination with most of the image acquisition systems tested.

The best test of pure algorithm performance is considered to be the ‘focused’ category, which excludes errors from the front-end capture system, according to the announcement. The company says that including these external errors, its algorithm achieved a 95.9 percent TIR, 4 percent better than the second-place algorithm.

The algorithm also exceeded DHS’ criteria of 99 percent or higher matching rate for people without masks. NEC America’s ‘Pearl’ entry topped biometric accuracy in that test, at 99.8 percent TIR without image acquisition errors, and 99.3 percent with errors included.

Paravision and Innovatrics each score among biometric accuracy leaders in US federal agency testing

Paravision has scored the top rank among biometrics vendors from the U.S., UK and EU in matching accuracy for mugshot and webcam images in both “Identification” and “Investigation” modes of the National Institute of Standards and Technology’s (NIST’s) 1:N Face Recognition Vendor Test (FRVT), and fourth among all vendors.

The company also topped U.S., UK and EU biometrics vendors in identification mode for visa and border images, investigation mode for databases of 6 million or 12 million images, and investigation mode for aging faces, trailing only one non-Western vendor in the latter category. Paravision was also first among U.S., UK and EU vendors in investigation mode accuracy for profile images, and second globally.

Taken together, Paravision says the results show its performance for different image types, travel use cases, at scale, with older source images and in challenging conditions, as well as overall top performance among Western companies.

The latest NIST benchmark evaluates the fourth generation of Paravision’s face biometrics algorithm, and shows a reduction in error rates of between 25 and 30 percent from its previous submission.

“NIST FRVT is the gold-standard for benchmarking face recognition performance across a range of scenarios and meaningful datasets, and 1:N is the premier test within FRVT,” comments Paravision Chief Technology Officer Charlie Rice. “As a U.S.-based AI company, we’re proud to deliver world-class accuracy and raise the bar for western technology providers.”

Smart Engines and Promobot partner on next-gen data scanning technology for digital ID documents

Robotics manufacturer Promobot and computer vision solutions provider Smart Engines (SE) have partnered on a new high-end document recognition device.

Dubbed Promobot Scanner, the new solution is capable of automatically filling forms with data from ID documents like biometric passports via SE’s ID-scanning software, based on GreenOCR technology, and its Smart ID Engine.

Since the scanning happens within Smart Engines’ platform, users’ personal identity information and sensitive data can be scanned and then used to fill forms autonomously, without the need to transfer images to third-party sources.

The hardware was designed and developed by Promobot. Once a digital ID document is placed on the scanning surface, images and data are transmitted to the artificial intelligence-powered recognition module for processing.

“The Promobot Scanner release is a story about cooperation between hardware and software developers,” explained Smart Engines Director of Special Projects Nikita Arlazarov. “Such synergy will allow organizations to implement the best practices of automatic document processing and improve the level of customer service in the offices.”

According to initial tests, the ID scanning process from document presentation to filling out the form is about five seconds, with the estimated net document recognition time on a single frame taking less than a second.

This, in turn, could potentially cut clients’ waiting time in the queue and document data entry time by roughly nine times compared to current standards.

Third-party test shows ID R&D leap forward in voice biometrics accuracy

ID R&D has announced major gains in the accuracy of its voice biometrics, with a 0.01 percent false acceptance rate (FAR) at 5 percent false rejection rate (FRR) for device unlocking through biometric authentication in third-party testing.

The company says that up until now, the voice modality could not meet the security standard for mobile device or laptop unlocking, relegating voice to the position of a useful convenience for a limited range of applications. The increased accuracy level, however, now rivals a PIN, according to ID R&D, opening up new practical applications for voice.

Enabling voice biometric authentication for device unlocking, perhaps along with a wake word in the style of voice assistants, gives users the ability to carry out hands-free logins. Voice could then also be used to log into mobile and web applications.

The advance is described, along with the possible applications it can power, in a new white paper from ID R&D titled “Voice Biometric Revolution: Why Voice ID Is Now Secure Enough for Device Unlock”. In a test combining wake-word authentication using text-dependent speaker recognition, a random command using text-independent speaker recognition, and anti-spoofing for both, the third-party evaluation found an FAR of 1 in 50,000 in indoor and driving environments, with an average FRR of 9.9 percent. This meets the Android Compatibility Definition Document (CDD) threshold.
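Both rates fall out of where the decision threshold is placed on the match-score distributions. A minimal sketch with invented similarity scores (the scores and threshold below are illustrative, not ID R&D’s):

```python
# False acceptance rate (FAR): impostor attempts wrongly accepted.
# False rejection rate (FRR): genuine attempts wrongly rejected.
# Both depend on the decision threshold; raising it trades FRR for FAR.

def far_frr(genuine_scores, impostor_scores, threshold):
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.91, 0.88, 0.95, 0.79, 0.93, 0.85, 0.74, 0.90]   # same speaker
impostor = [0.30, 0.45, 0.62, 0.28, 0.81, 0.33, 0.40, 0.52]  # different speaker

far, frr = far_frr(genuine, impostor, threshold=0.80)
print(f"FAR: {far:.1%}, FRR: {frr:.1%}")
```

Raising the threshold pushes FAR down at the cost of FRR, which is why figures like “0.01 percent FAR at 5 percent FRR” always quote the two rates together.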

“ID R&D is laser-focused on R&D efforts that move the market forward and enable new and exciting use cases for biometrics,” says ID R&D Chief Scientific Officer Konstantin Simonchik. “Our modern voice biometric algorithms consistently push the limits of what’s possible on voice-enabled devices in terms of size, speed, performance, and convenience. As voice becomes the de facto standard for interacting with everything from our televisions to our cars, biometrics emerge as the most convenient way to quickly identify users for security and personalization.”

A market for emotion recognition grows without tackling deep concerns by the public

Private and government researchers see few unassailable obstacles to — and plenty of money in — making emotion recognition a commonplace AI analysis tool. But one essential building block continues to be studiously overlooked by insiders.

There is little new in concerns raised in recent news coverage about algorithms that go beyond trying to biometrically identify people to try discerning emotions.

Indeed, the fact that the emotion recognition industry pushes ahead with invasive and covert-friendly monitoring products without addressing a growing public trust deficit is itself becoming the news.

Plain vanilla facial recognition technology is raising hackles in the United States and the European Union. Increasingly uncertain citizens fear real, personal harm due to abuse by government and commercial deployment.

This is unlikely to be another anti-vaccination movement, in which some people get worked up over fact-free alarmist messaging casting doubt on decades-old laboratory science. Simply walking out one’s front door can make someone an unwitting participant in the surveillance infrastructure. The pool of outrage about privacy intrusions could dwarf today’s loud minority of anti-vaxxers.

OneZero, an online publication covering how technology impacts and will impact people, published an article this month finding that little more than junk science and overheated marketing supports the growing accuracy claims made by startups. Another publication, Global Government Forum, analyzed a paper by UK-based human rights organization Article 19 that finds AI emotion readers illegal under existing international law.

Development and doubts go back years. A psychologist named Paul Ekman in the 1970s began research on spotting what he called micro expressions.

In 2017, an American Civil Liberties Union report written up in The Guardian claimed that “dubious behavioral science” underpinned a U.S. Transportation Security Administration emotion recognition system and could “easily give way to implicit or explicit bias.”

Manual efforts to read emotions in a person’s face and mannerisms were rolled out in U.S. airports in 2007. Psychologists have been poking holes in the notion of digital or human face-reading ever since.

Government officials in the West keep the faith, if for no other reason than that they do not want to be explaining, after a terror attack, why a potentially effective program, however unrealistic, had been zero-funded.

China’s authoritarian regime is deploying applications with abandon because they might work and there is no meaningful public resistance to government surveillance.

And while businesses are far less aggressive in investing in emotion recognition today, consumer-facing firms never stop doing the math that leads to a sale. Cautious advertising and marketing executives are biding their time until the risk-reward equation shifts.

It would not be the first dodgy concept that they have embraced. See subliminal messaging.

In fact, technology analyst firm Market Research Engine in January published an extensive paper on emotion recognition, predicting that the market will have grown from $5 billion in 2015 to $85 billion in 2025.
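The implied growth rate behind that forecast is easy to check. The dollar figures are the analyst firm’s; the compound annual growth rate is our own back-of-the-envelope derivation, not a number from the report.

```python
# Implied compound annual growth rate (CAGR) of the forecast:
# $5B (2015) -> $85B (2025), i.e. a 17x increase over 10 years.
start_value, end_value, years = 5.0, 85.0, 10
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 33% per year
```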

UV sanitizer device for fingerprint biometric scanners launched by TechnoBravo

New Jersey-based startup TechnoBravo is launching a novel device for sanitizing contact biometric scanners with Ultraviolet C (UVC) light.

The new BioSan device kills up to 99.9 percent of harmful viruses and bacteria to enable the safe use of fingerprint scanners without compromising their biometric performance. The device’s effectiveness has been verified by the University of Siena, according to a company announcement.

TechnoBravo points out that fingerprint recognition remains the most often-used biometric modality, and has been a trusted method of identifying individuals for decades. Fingerprint scanners continue to be widely deployed, despite the increasing maturity of other biometric modalities, reliably supporting comparisons against large databases.

The COVID-19 pandemic has compounded growing resistance to the use of fingerprint scanners, inspiring TechnoBravo to develop an automated method of cleaning fingerprint sensors after each use. ABI Research forecast in October that fingerprint revenues would drop 22 percent in 2020, before bouncing back in 2021.

Alestra selects 1Kosmos’ biometric and blockchain authentication for new innovation program

1Kosmos’ biometric passwordless authentication technology has been selected by Mexican IT Services company Alestra to be part of its 5th generation NAVE open innovation program.

Alestra selected 1Kosmos as one of five winners of this program from over 1,000 entrants for its innovative use of biometric and blockchain technologies in cybersecurity.

“We are very excited to be a winner of the NAVE program out of a rigorous evaluation competing against a thousand other innovative companies,” said Michael Engle, 1Kosmos’s chief strategy officer. “This partnership with Alestra is another strong validation of our vision and platform to disrupt the cybersecurity space.”

The digital identity proofing firm’s flagship product is BlockID, a solution that enables user onboarding through passwordless biometric authentication and advanced document verification.

The W3C- and GDPR-compliant biometric solution was released on the Auth0 Marketplace in January, and features a level of identity assurance (IAL3) and authentication assurance (AAL3) per the NIST 800–63–3 guidelines. 1Kosmos, which recently announced a $15 million funding round, will now work with Alestra for 16 weeks to bring BlockID to Alestra’s 18,000 clients.

“At Alestra, we are very excited to have 1Kosmos as part of the latest edition of our NAVE Open Innovation Program,” commented Jenaro Martínez, director of innovation and strategic alliances with Alestra. “They were selected after analyzing more than a thousand scaleups around the world, and we are sure that their unique passwordless identity solution based on blockchain will help companies like ours to enable our digital transformation.”

These Weeks’ News by Categories

Access Control:

Financial Services:

Civil / National ID:

Government Services & Elections:

Facial Recognition:

Fingerprint Recognition:

Iris / Eye Recognition:

Voice Biometrics:

Behavioral Biometrics:

Wearables:

Mobile Biometrics:

Biometrics Industry Events

The FIDO Alliance has announced the speaker lineup for the first edition of the 2021 Authenticate Virtual Summit, taking place on Thursday, March 25th, from 9:00 a.m. to 12:00 p.m. PDT:

2nd Annual Facial Recognition Summit: Apr 7, 2021 — Apr 8, 2021

Secure Identification 2021: Apr 14, 2021 — Apr 16, 2021

Identity Management Symposium: Apr 21, 2021 — Apr 22, 2021

Critical Infrastructure Protection & Resilience Europe: May 11, 2021 — May 13, 2021

5th India Homeland Security: May 13, 2021 — May 14, 2021

The Biometrics Institute has announced its calendar of events for 2021 with a focus on educational events.

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Biometric Update

Science Daily

Find Biometrics

Planet biometrics
