BT/ Google puts multimodal biometrics unlock on Pixels, Samsung gets a faster fingerprint sensor


Biometrics biweekly vol. 46, 15th August — 29th August

TL;DR

  • Google intends to use the under-display fingerprint sensor of its Pixel models in conjunction with face unlock
  • Samsung will give the Galaxy S23 Ultra faster fingerprint sensors with a larger active area
  • The new Xiaomi flagship smartphones carry Goodix’s fingerprint biometrics
  • Mastercard follows Visa, approves Zwipe Pay platform for biometric cards
  • Google Wallet enters 6 new countries, hints at digital ID feature for US airport checkpoints
  • NIST is advancing towards the publication of a standard for the quality of images used in face biometrics
  • SES RFID shows off a biometric smart card with the sensor under PVC
  • GBT Technologies nears patent grant for real-time 3D biometric security for devices
  • Socure praises digital identity focus, overdue standards project in CHIPS and Science Act
  • Nixu joins with Ioxio on digital identity for smart cities
  • NIST to launch ‘socio-technical approach’ to enable firms to understand and tackle their AI biases
  • trinamiX unveils new liveness detection tech and complete biometric authentication solution
  • Passwordless authentication partnerships signed by ForgeRock with SDO, 1Kosmos with Simeio
  • SecureAuth IAM authentication platform certified for FIDO2
  • TypingDNA integrates behavioral biometrics with WSO2 IAM for customer and employee authentication
  • Hitachi ID and Hypr partner on passwordless biometrics to tackle phishing, account takeovers
  • BIO-key adding more biometric modalities to mobile authenticator for IAM
  • Transmit Security updates biometric passwordless authentication suite, reports record growth
  • Scylla deploys face biometrics for Dronedek autonomous delivery security
  • Ondato partners for modular, biometric compliance solutions with transaction monitoring
  • Plurilock expands reach of behavioral biometrics with subsidiary acquisition, fundraising
  • Sentry appoints blockchain, crypto expert CTO to develop solutions for biometric ID card
  • Humanode public sale details and roadmap are out
  • Daon digital health pass VeriFLY reaches 1M users on Carnival Cruises
  • Kardome expands speech recognition enhancement portfolio with voice biometrics
  • Estonia’s AI Govstack to help other countries digitalize, offer virtual admin assistant
  • Russia’s biggest bank gets patents for detecting deepfakes
  • Philippines toolkit will allow digitization, ID verification across local, national govt
  • SmilePay launches face biometrics payments in Azerbaijan following Mastercard partnership
  • Canada’s DIACC has lessons for trusted digital identity in Australia
  • Biometric ID, prison programs questioned at federal and state levels in Pakistan
  • Nigeria ID4D continues partnership spree to meet digital ID ambitions
  • Biometric audit finds thousands of ‘ghost workers’ on Ghana’s public service payroll
  • Singapore introduces parents’ dialects to digital birth certificates after pushback
  • Mexican soccer removes face biometrics from Fan ID security scheme
  • UN expects 350M more people worldwide to get legal identity by 2025
  • Facial recognition getting better at handling modest angles and big smiles
  • Dividing face images makes for better biometric presentation attack detection
  • A team of researchers says they used a convolutional neural network to extract characteristics from close-up photographs of veiled people’s faces to obtain high facial recognition accuracy
  • Experimental research by a team from the Department of Computer Science at the University of Durham, UK, demonstrates that lossy image compression negatively affects how facial recognition algorithms perform in both training and testing
  • Some people really do have doppelgangers and that could be a biometric problem, researchers say
  • A live biometric deepfake of Binance spokesman Patrick Hillmann was used on Zoom calls to convince would-be investors to list their tokens on what is the world’s largest crypto spot exchange
  • Biometric industry events. And more!

Biometrics Market

The Biometric system market size is projected to grow from USD 36.6 billion in 2020 to USD 68.6 billion by 2025; it is estimated to grow at a CAGR of 13.4% during the forecast period. Increasing use of biometrics in consumer electronic devices for authentication and identification purposes, the growing need for surveillance and security with the heightened threat of terrorist attacks, and the surging adoption of biometric technology in automotive applications are the major factors propelling the growth of the biometric system market.
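
Those two figures imply the stated growth rate; a quick back-of-the-envelope check in Python, using only the numbers quoted above, confirms it:

```python
# Sanity check of the forecast: USD 36.6B (2020) growing to USD 68.6B (2025).
start, end, years = 36.6, 68.6, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~13.4%, matching the report
```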

Biometric Research & Development

Latest Research:

Deep-learning advance improves facial recognition for people wearing veils

A team of researchers spanning Asian and U.S. universities says it used a convolutional neural network to extract characteristics from close-up photographs of veiled people’s faces to obtain high facial recognition accuracy, reports Marktechpost.

The researchers say their DeepVeil technique, used in a demonstration involving photographs of 150 subjects, obtained 99.95 percent overall facial recognition accuracy, including for faces veiled behind a niqab, a garment worn mostly by Muslim women that covers the face except the eyes.

Using DeepVeil as a proof of concept, the researchers analyzed photos of 109 female and 41 male participants, comparing them to photos in an internal database. The method, the study shows, can also help determine with up to 80.9 percent accuracy a person’s facial expression by just examining their eyes.

The study was published in the International Journal of Biometrics. In it, the researchers say DeepVeil was 99.9 percent accurate at age estimation and gender recognition for women wearing the niqab.

The researchers’ main objective was to use deep learning-based systems to identify not only people with obscured faces, but also to identify gender, age, and facial expressions such as an “eye smile.”
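
The published DeepVeil architecture is not reproduced here, but the general approach, a convolutional network classifying identity from eye-region crops, can be sketched as follows. The layer sizes, the 112-pixel input and the 150-class output head are illustrative assumptions, not the authors’ configuration.

```python
# Illustrative sketch only: a small CNN that maps eye-region crops to identity
# logits, in the spirit of the DeepVeil approach. Layer sizes and the 150-class
# head are assumptions for demonstration, not the published architecture.
import torch
import torch.nn as nn

class EyeRegionCNN(nn.Module):
    def __init__(self, num_identities: int = 150):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_identities)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 4 RGB eye-region crops at 112x112 pixels.
model = EyeRegionCNN()
logits = model(torch.randn(4, 3, 112, 112))
print(logits.shape)  # torch.Size([4, 150])
```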

Identifying people when only part of their face is visible (or hiding one’s identity from such systems) continues to be a popular endeavor for researchers and companies thanks to Covid.

Idiap researchers propose biometric fairness metrics to decouple bias from accuracy

Several new measures for quantifying demographic differentials, or bias, in biometric identity verification systems are suggested in a paper which has been accepted for publication. The new metrics represent an effort to move beyond whether a sample has been matched, to include consideration of how well, by considering matching scores.

The paper ‘Fairness Index Measures to Evaluate Bias in Biometric Recognition’ was authored by Sébastien Marcel and Ketan Kotwal of the Idiap Research Institute. It has been accepted by the International Conference on Pattern Recognition Workshops.

While biometric bias is often associated with facial recognition, the researchers say their metrics are agnostic to the modality used.

The recently-proposed Fairness Discrepancy Rate (FDR) is considered, along with the use of “the ROC (Receiver Operating Characteristic) curve as a proxy to measure demographic differentials.”

“While few, existing fairness measures are based on post-decision data (such as verification accuracy) of biometric systems, we discuss how pre-decision data (score distributions) provide useful insights towards demographic fairness,” the paper’s authors write in the abstract of their paper.

Marcel and Kotwal propose methods based on weighted fusion of results for each of the three measures, and three variants for each measure to allow assessment from multiple perspectives.

Separation Fairness Index (SFI) measures how far genuine and impostor matching scores for different demographic groups depart from expected values. Compactness Fairness Index (CFI) measures the spread of scores across different groups. Distribution Fairness Index (DFI) measures equitability across overall score distributions. In each case, similar scores, and therefore a fair system, are indicated by closeness to a value of 1.0.
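
The exact SFI, CFI and DFI formulas are in the paper; as a rough illustration of the underlying idea of a pre-decision, score-based fairness check, the hypothetical sketch below compares genuine/impostor score separation across demographic groups and reports a ratio that approaches 1.0 when groups behave alike.

```python
# Illustrative sketch of a pre-decision (score-based) fairness check.
# This is NOT the published SFI/CFI/DFI formulas, just the general idea:
# compare genuine/impostor score separation across demographic groups and
# report a ratio that approaches 1.0 when the groups behave alike.
import numpy as np

def separation(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Gap between mean genuine and mean impostor matching scores."""
    return float(genuine.mean() - impostor.mean())

def simple_fairness_index(scores_by_group: dict) -> float:
    """Ratio of the smallest to the largest per-group separation (1.0 = fair)."""
    seps = [separation(g, i) for g, i in scores_by_group.values()]
    return min(seps) / max(seps)

rng = np.random.default_rng(0)
scores = {
    "group_a": (rng.normal(0.80, 0.05, 1000), rng.normal(0.20, 0.05, 1000)),
    "group_b": (rng.normal(0.72, 0.05, 1000), rng.normal(0.25, 0.05, 1000)),
}
print(f"fairness index: {simple_fairness_index(scores):.3f}")
```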

If successful, these metrics indicate how fair a biometric verification system is, separately from its accuracy, but they are meant to complement, not replace, outcome-based fairness measures.

NIST is also working on how to measure biometric bias, and currently seeking feedback on the state of the art.

Two concerning deepfake developments but new hope for robust detection

TechCrunch is reporting on a newly released open-source AI image generator that is being adopted “stunningly” fast. Stable Diffusion, a model created by Stability AI, creates realistic image content from simple text prompts, and can do so on consumer-grade computers.

Stable Diffusion is being used by organizations that generate digital art or allow people to create their own art using their machine learning software, according to TechCrunch, a publisher of tech-business news.

But the model was leaked on 4chan, a dingy corner of the internet where poor digital choices often are made and celebrated. The assumption is that realistic, unethical and harmful deepfakes will flourish in the democracy of inexpensive computing.

A good example of this, although it is not known if Stable Diffusion or 4chan are remotely involved, is a cryptocurrency exchange executive who reports that he is the unwitting model for a holographic deepfake.

According to reporting by tech news publication The Register, a live biometric deepfake of Binance spokesman Patrick Hillmann was used on Zoom calls to convince would-be investors to list their tokens on what is the world’s largest crypto spot exchange.

PCMag has a good description of how the caper was pulled off and how it was discovered. Here are Hillmann’s thoughts on the incident. There are no reports on who pulled the stunt.

There is room for hope that reliable, long-lived detection is possible, though the record of putting deepfakes in their place is one of repeated failure as creation technology evolves.

An article in tech publication Unite.AI (based on a new paper) is encouraging, at least for the moment. It is possible that deepfake creators’ drive for frame-level precision could itself provide the lasting detection clue researchers have sought.

Creators want every video frame to be perfect, at the expense of temporal context and even of reproducing the peculiar signatures of compressed video.

A condition called regularity disruption occurs in deepfakes alone. A graphic in the research paper illustrates the disruption: it looks like jagged horizontal lines representing a facial feature over time. The trace from a real subject is smoother, like an extruded material.
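
The paper’s detector is not reproduced here, but the intuition can be illustrated with a toy measure of frame-to-frame jaggedness in a per-frame facial measurement; the traces and the second-difference metric below are illustrative assumptions, not the researchers’ method.

```python
# Toy illustration (not the paper's method): quantify how "jagged" a per-frame
# facial measurement is across video frames. Real footage tends to vary
# smoothly; a deepfake that optimises each frame independently can show abrupt
# frame-to-frame jitter ("regularity disruption").
import numpy as np

def jitter_score(trace: np.ndarray) -> float:
    """Mean absolute second difference of a per-frame measurement."""
    return float(np.abs(np.diff(trace, n=2)).mean())

frames = np.linspace(0, 4 * np.pi, 200)
smooth_trace = np.sin(frames)  # stand-in for natural, smoothly varying motion
jittery_trace = smooth_trace + np.random.default_rng(1).normal(0, 0.05, 200)

print(f"real-like trace : {jitter_score(smooth_trace):.4f}")
print(f"fake-like trace : {jitter_score(jittery_trace):.4f}")  # noticeably higher
```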

Lossy image compression negatively affects facial recognition algorithms, research shows

Experimental research by a team from the Department of Computer Science at the University of Durham, UK, demonstrates that image compression techniques that discard data negatively affect how facial recognition algorithms perform in both training and testing. These effects may contribute to biometric bias, the researchers say.

Lossy compression is the process by which some data is removed from an image file in order to reduce its size or other original properties. The data removed from such a file using lossy techniques cannot be restored.

The study, carried out by a quartet of academics, investigates the impact of the lossy JPEG compression algorithm on contemporary facial recognition performance, according to an abstract of the research findings.

The researchers say there is a gap in research on how this impact varies across different racial groups. Their experiment finds that common compression methods can degrade facial recognition performance by up to 34.55 percent for racial phenotype categories like darker skin tones.

Further, they found that removing chroma subsampling (a type of compression that reduces color information in the image) during compression improves a system’s false match rate by up to 15.95 percent across all affected groups, “including darker skin tones, wide noses, big lips, and monolid eye categories,” the abstract explains.
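
For readers who want to reproduce the two compression settings being contrasted, a minimal Pillow sketch follows; the quality value and file names are arbitrary choices, not the study’s protocol.

```python
# Minimal Pillow sketch of the two settings contrasted in the study:
# standard lossy JPEG with 4:2:0 chroma subsampling vs. the same quality level
# with subsampling disabled (4:4:4). File names and quality are arbitrary.
from PIL import Image

img = Image.open("face_sample.png").convert("RGB")  # hypothetical input image

# Default-style lossy compression: colour information is downsampled (4:2:0).
img.save("face_q75_subsampled.jpg", "JPEG", quality=75, subsampling=2)

# Same quality, but chroma subsampling removed (4:4:4), the variant the
# researchers found reduced false matches for the affected groups.
img.save("face_q75_full_chroma.jpg", "JPEG", quality=75, subsampling=0)
```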

Overall, the evaluation finds that using lossy compressed facial image samples for matching decreases performance more significantly on specific phenotypes.

However, the use of compressed imagery during training does make the resulting models more resilient and limits the performance degradation encountered, although lower performance amongst specific racially-aligned subgroups remains, the report notes in its conclusion.

The paper also considers the impact of factors such as balanced and unbalanced training datasets and compression levels.

The authors add that their work looking at the impact of lossy compression algorithms on phenotype-based racial groups is part of stakeholder efforts in making available additional evidence-based insights and understanding to guide the mitigation of bias in the development of future face biometrics algorithms and systems.

NIST’s Patrick Grother noted the benefit of low image compression to address demographic issues in a presentation in late 2020.

Other research efforts published recently have tried to find better ways to quantify demographic disparities in the effectiveness of facial recognition systems.

Dividing face images makes for better biometric presentation attack detection

Seeing the big picture is useful in completing some tasks, but it introduces extraneous details that can confuse matters on the ground. Apparently, the same is true in presentation attack detection for biometric verification.

Industry-funded research in Turkey indicates that it can be more effective to train deep learning-based presentation attack detection models using only small square patches from live and spoof facial images.

That means tightly cropping facial images, real or fake, to remove as much non-face data as possible, and then breaking each image into 32 x 32-pixel patches. The squares are stitched into larger image sets that include genuine articles as well as patches from manufactured faces.

In experiments, patches were assembled randomly or in a design, and random patterns worked better.
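
A minimal numpy sketch of that preprocessing idea, splitting a tight face crop into 32 x 32 patches and stitching a random selection back into a mosaic, might look like this; the crop size and grid size are assumptions for illustration.

```python
# Illustrative numpy sketch of the patch-based preprocessing described above:
# a tightly cropped face image is cut into 32x32 patches and stitched back
# together in random order. Grid and image sizes are assumptions.
import numpy as np

PATCH = 32

def to_patches(face: np.ndarray) -> np.ndarray:
    """Split an HxWx3 face crop (H, W divisible by 32) into 32x32 patches."""
    h, w, c = face.shape
    patches = face.reshape(h // PATCH, PATCH, w // PATCH, PATCH, c)
    return patches.transpose(0, 2, 1, 3, 4).reshape(-1, PATCH, PATCH, c)

def random_mosaic(patches: np.ndarray, grid: int, rng) -> np.ndarray:
    """Reassemble randomly ordered patches into a grid x grid training image."""
    chosen = patches[rng.permutation(len(patches))[: grid * grid]]
    rows = [np.concatenate(chosen[r * grid:(r + 1) * grid], axis=1) for r in range(grid)]
    return np.concatenate(rows, axis=0)

rng = np.random.default_rng(0)
face_crop = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)  # stand-in for a real crop
mosaic = random_mosaic(to_patches(face_crop), grid=4, rng=rng)
print(mosaic.shape)  # (128, 128, 3)
```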

Two of the three researchers working on the project were from Istanbul Technical University‘s Computer Engineering Department. The third works for Sodec Technologies, an Istanbul-based KYC software firm.

The team used four data sets in its research, one of which was Real-World, developed by Sodec, which provided a research grant and helped collect images. The Technological Research Council of Turkey also supported the work.

Convolutional neural networks are commonly used in training models for presentation attack detection, the researchers write. Still, in this area at least, they really only work well in intra-dataset evaluations.

They take subtle cues from collective background information in a data set, which can create a dynamic like that of a horse that appears to be able to count but is really only reading subtle signals from its trainer.

Add a new biometric trainer, or in this case, live data, and results are less interesting.

Cropping face images as tightly as possible to minimize other data and then breaking them up forces the model to focus entirely on the most important bits. It also means researchers can use data sets containing fewer subjects overall.

Some patches, such as those showing only forehead, have too little information to train on and were struck from the collections.

Facial recognition getting better at handling modest angles and big smiles

The state of the art of facial recognition has improved over recent years to the point of handling certain kinds of occlusion well, along with illumination from different directions. The accuracy of implementations on mobile phones has also dramatically improved.

These are just a few of the noteworthy findings from a paper reviewing the last eight years of progress in the field of face biometrics.

‘Eight Years of Face Recognition Research: Reproducibility, Achievements and Open Issues’ has been released as an open-access paper and shared in a LinkedIn post by Sébastien Marcel.

The time-frame is selected because it was roughly in 2015 that deep learning models became the dominant approach to facial recognition development. Further, the researchers completed a similar review of the state-of-the-art in 2014, providing a convenient reference point for follow-up.

Researchers from the Idiap Research Institute, Université de Lausanne (ESC) and the University of Zurich collaborated on the research. They found significant gaps in the existing research corpus and identified several problems in the field that are yet to be solved.

Angles below 60 degrees are handled well by most facial recognition networks, according to the paper, but beyond that false non-matches rise quickly.

Recognition at a distance with low-quality images is identified as a problem, though this surely comes as a relief to some privacy advocates.

Facial expressions also pose much less of a challenge to biometric matching systems than they did before the adoption of deep learning, according to the research.

The paper does not analyze demographic disparities, though Marcel and fellow co-author Tiago de Freitas Pereira acknowledge in the LinkedIn thread that algorithmic and system bias are among ongoing challenges that persist, but fell outside of the scope of the research. Security aspects, such as morphing and other presentation attacks, also fell outside of that scope.

The research was conducted using six datasets and five facial recognition algorithms, all publicly-available, with the intention of enabling reproducibility.

Some people really do have doppelgangers and that could be a biometric problem

If everyone really does have an unrelated twin somewhere on the planet, that is bad news for facial recognition-based security systems.

It is not a situation keeping biometrics and AI practitioners awake at night, but neither is it an entirely academic question. A physical “copy” is no different than a divulged password. Actual twins are enough of a concern.

A large Barcelona-based team of biology and IT researchers — plus one from biometric firm Herta Security — found 16 unrelated look-alikes around the world using algorithmic analysis. Nine of that set were considered “‘ultra’ look-alikes.”

Writing in the journal Cell Reports, the researchers say their look-alike dataset came from a unique photographic collection shot by Canadian photographer François Brunelle. Brunelle has for years looked for unrelated people who share uncannily similar faces. He has found 32 pairs.

The researchers fed the faces of the 32 pairs to three facial recognition algorithms: Custom-Net, a custom deep convolutional neural net created by Herta; MatConvNet; and Microsoft’s Oxford Project face API. All three found 16 pairs “objectively similar,” and of those, nine were counted as ultra-similar.

(Thirteen pairs were of European ancestry. One pair each was Hispanic, East Asian and Central South Asian.)

It appears that the grouping of 16 objectively similar pairs (including the nine ultra look-alikes) shares more than face biometrics.

Researchers found that, compared to people not judged to be similar by all three algorithms, the look-alike pairs “share a more comprehensive physical, and probably behavioral, phenotype.” They did not delve too deeply into the implications of this discovery, but it is at least plausible that other traits, like personality and gait, could match.

Main News:

Google puts multimodal biometrics unlock on Pixels, Samsung gets faster fingerprint sensor

Several big names in smartphones have unveiled updates (or have been the object of rumors) regarding new biometrics within their devices.

9to5Google said Google intends to use the under-display fingerprint sensor of its Pixel models in conjunction with face unlock. SamMobile said Samsung will give the Galaxy S23 Ultra faster fingerprint sensors with a larger active area. Also, a report by The Korea Herald suggests the temperature sensor in Galaxy Watch 5 devices will not be functional when the devices launch. And finally, Goodix announced the new Xiaomi flagship smartphones carry the company’s fingerprint biometrics.

  • Multimodal biometrics for Google Pixel devices

Google is working on a version of Pixel face unlock that supports the under-display fingerprint sensor, reports 9to5Google.

According to the trade publication, the multimodal technology will lower the recognition threshold needed for fingerprint unlock when a face is at least partially recognized.
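
Google has not published how the fusion works, but the behaviour 9to5Google describes can be sketched as a simple threshold adjustment; all scores and thresholds below are made-up values, not Google’s implementation.

```python
# Speculative sketch of the reported behaviour: relax the fingerprint match
# threshold when the face is at least partially recognised. All thresholds
# and score ranges below are made-up values, not Google's implementation.
STRICT_FINGERPRINT_THRESHOLD = 0.90   # required when face gives no support
RELAXED_FINGERPRINT_THRESHOLD = 0.75  # allowed when face partially matches
PARTIAL_FACE_THRESHOLD = 0.50

def should_unlock(fingerprint_score: float, face_score: float) -> bool:
    """Scores are assumed to be similarity values in [0, 1]."""
    threshold = (RELAXED_FINGERPRINT_THRESHOLD
                 if face_score >= PARTIAL_FACE_THRESHOLD
                 else STRICT_FINGERPRINT_THRESHOLD)
    return fingerprint_score >= threshold

print(should_unlock(fingerprint_score=0.80, face_score=0.60))  # True  (face helps)
print(should_unlock(fingerprint_score=0.80, face_score=0.30))  # False (strict threshold)
```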

The technology seen by 9to5Google reportedly does not require additional hardware and is undergoing testing on both Pixel 6 and Pixel 6 Pro devices. Because of this, it could also potentially be integrated within the Pixel 6a and other a-series devices.

The approach is supposedly more battery efficient while also addressing Google’s earlier face unlock issues.

The technology giant has not yet confirmed native face unlock will come to Pixel 7 devices.

  • Galaxy S23 Ultra to sport improved fingerprint biometrics

Samsung’s upcoming Galaxy S23 Ultra may integrate Qualcomm’s 3D Sonic Max fingerprint sensor, suggests a rumor passed by SamMobile.

From a technical standpoint, the 3D Sonic Max is a larger and more accurate fingerprint sensor than is found in Galaxy devices. Integration within Galaxy S23 Ultra devices would mean faster finger authentication and fewer false negatives.

It is important to note that, while Galaxy biometric software was hacked last March, the fingerprint sensor on the Galaxy S22 was praised by experts for its precision, which reportedly outstripped fingerprint biometrics on the Pixel 6 line.

As for the Galaxy S23 series, Samsung has not announced when it will introduce the devices, but chances are that the company’s next Unpacked event will take place in the first couple of months of 2023.

  • Galaxy Watch 5’s temperature sensor is yet to receive approval

The Watch 5’s recently announced temperature sensor has not yet received Korea’s Ministry of Food and Drug Safety’s approval.

The news comes from a new report by The Korea Herald, which suggests the Watch 5 line may not have a working sensor enabled upon launch next week (despite supporting the technology to do so).

The delay in getting approval from the ministry may mean the Apple Watch 8 will be the first commercial watch to feature a temperature sensor.

Apple also beat Samsung on sleep tracking features, which the Cupertino, Calif.-based firm has supported for a couple of years.

  • Xiaomi’s flagships integrate Goodix’s fingerprint biometrics

The latest generation of Xiaomi’s phones carries fingerprint biometrics by Chinese consumer electronics supplier Goodix.

The company made the announcement in a video on Twitter, showcasing a number of devices with biometric technologies.

These include a side-key capacitive fingerprint sensor for the MIX Fold 2 and an optical under-display finger sensor for the Redmi K50 Extreme Edition.

The video also shows that Goodix is providing Xiaomi with a health sensor able to measure heart rate and blood oxygen levels for the Xiaomi Watch S1 Pro, and a VersaSensor with in-ear detection and Force Touch for the Xiaomi Buds 4 Pro.

The announcement comes months after Goodix reported a sharp decrease in revenues due to a number of factors.

Mastercard follows Visa, approves Zwipe Pay platform for biometric cards

Norway-based Zwipe’s biometric payment card platform has passed a raft of Mastercard tests to win a key certification from the financial services firm.

The Component Conformity Statement received by Zwipe means that its biometric platform, Pay, meets Mastercard’s security, reliability, functionality and performance standards.

For Zwipe’s customers — smartcard makers — the certification means they can apply for Mastercard‘s approval for their own Mastercard-branded biometric payment cards using the Pay platform.

This follows Visa’s similar certification for the Pay platform in March.

Being “certified by Mastercard is a significant milestone and major step forward for Zwipe and our customers,” says Robert Puskaric, CEO of Zwipe. “This development will further accelerate issuer pilots and planned launches based on Mastercard’s network. The pathway is now open for Smart Card Manufacturers and issuers all over the world to certify, produce and deploy Mastercard biometric cards based on Zwipe Pay.”

Mastercard has previously approved Idemia’s F.CODE biometric payment card platform, which integrates Zwipe Pay ONE.

trinamiX unveils new liveness detection tech and complete biometric authentication solution

Germany-based trinamiX has collaborated with Qualcomm Technologies to launch a pair of new biometric technologies for implementation on mobile devices; one a reference design for high-resolution scans of people’s skin, and another a full hardware and software solution to provide all-in-one facial authentication.

The skin-sensing technology, which can be used in liveness detection, is a near-infrared spectroscopy module for smartphone integration. TrinamiX pitched this technology when it was accepted into Qualcomm’s software accelerator in 2020, referring to it as “beam profile analysis.”

The all-in-one trinamiX Face Authentication solution includes the skin detection capability for presentation attack detection, as well as 2D facial recognition. The software components run within the Qualcomm Trusted Execution Environment to protect users’ personal data, according to the announcement. The biometric hardware is intended for integration beneath an OLED screen.

“During the development of our solution, we have seen and unveiled so many crucial security gaps in available biometric solutions,” says trinamiX Head of Smartphone Business Asia Stefan Metz. “Qualcomm Technologies has helped us pave the way for a face authentication solution that finally closes these gaps.”

The company says the solution meets the highest requirements for security from the FIDO Alliance, the International Internet Finance Authentication Alliance (IIFAA), and Android, and is approved for use in digital payment processes on Android devices. TrinamiX further claims the solution is the first under-display biometric technology with these certifications, and that it has low technical requirements allowing easy integration.

NIST’s image standards could become win/lose factor for govt biometrics contractors

The U.S. government is developing a vendor-neutral standard for assessing how useful an image might be for biometric identification. The face image quality standard could be set by the start of 2024.

According to reporting by Bloomberg Law, the goal is to show facial images to Transportation Security Administration and Customs and Border Protection agents that are more accurate and useful and less biased.

The Department of Homeland Security (which oversees the TSA and CBP) and the National Institute of Standards and Technology reportedly are working together on the tool.

Machine vision vendors right now find it difficult to judge how their image capture technology for use with biometric systems will satisfy the government’s needs.

As basic as it sounds, demonstrating proper lighting through the standard could, according to Bloomberg, eliminate at least some of the image bias against darker skin tones.
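
The draft standard’s actual metrics are not public; as a rough illustration of the kind of capture-side lighting check such a standard might formalize, consider a simple brightness and contrast screen (the thresholds and file name below are arbitrary):

```python
# Rough illustration only: a capture-side lighting check of the kind such a
# standard might formalise. The thresholds here are arbitrary, not NIST's.
import numpy as np
from PIL import Image

def lighting_ok(path: str, min_mean=60, max_mean=200, min_std=25) -> bool:
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    mean, std = gray.mean(), gray.std()
    return min_mean <= mean <= max_mean and std >= min_std

print(lighting_ok("capture.png"))  # hypothetical captured face image
```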

Idemia North America CEO Donnie Scott is quoted in the article saying the government has used NIST well as a technology evaluator. Scott is referring in part to the well-known and -respected Face Recognition Vendor Test, an exhaustive and ongoing program of testing biometric algorithms for strengths and faults — among them being demographic bias.

Scott said NIST’s work needs to be carried forward, though. Adherence to NIST standards should be a box to tick for contractors bidding on government work.

And while not everyone thinks this idea is comprehensive enough to celebrate, NIST is planning its first full AI Risk Management Framework, according to Bloomberg.

Its goal is to show coders how best to address five critical factors in deploying facial recognition or other AI techniques: bias, security, explainability, reliability and accuracy.

Sentry appoints blockchain, crypto expert CTO to develop solutions for biometric ID card

Sentry Enterprises has appointed Mikhail Friedland as its new chief technology officer to lead the development of decentralized digital identity solutions based on the company’s biometric ID card.

Friedland comes to Sentry with expertise in blockchain and cryptocurrency applications on resource-constrained devices, and in developing and deploying scalable, secure, high-assurance operating systems. In 2001 he founded jNet ThingX, now known as jNet Secure, which has entered into a strategic partnership with Sentry and will continue operating to serve its legacy clients.

“Mikhail’s software expertise is arguably unrivaled, and when combined with Sentry’s market-leading biometric hardware, the potential for creating industry-changing solutions is unmatched,” comments Sentry CEO and Co-founder Mark Bennett.

Estonia’s AI Govstack to help other countries digitalize, offer virtual admin assistant

Estonia has one of the most advanced digital identity schemes in the world, allowing its citizens to access government services and keep tabs on how their data is used. The Baltic state is developing the artificial intelligence components on which digital services are based and making them available via its AI Govstack, reports Computer Weekly.

Estonia has a site collating all the government use cases for AI and ML, whether in the works or already in use, from tools that determine the probability of a person getting a job and analysis of calls to the Social Insurance Board, through to a chatbot that can talk about Estonia in any known language.

Computer Weekly notes that 26 open-source building blocks for digital government are available to countries and companies, with more to come.

The country is also calling on data science, machine learning and language technology experts to answer a procurement call worth €5 million (US$5 million) to “build the next generation AI-led country in Estonia together,” according to the E-Estonia site. The framework will be developed over three years to 2025.

Participants will have the chance to work with Bürokratt, the country’s AI virtual assistant, and there is a clear emphasis on privacy-enhancing technologies. Bürokratt works via voice activation and is proactive, contacting individuals ahead of them needing to perform an admin task.

“Having already built a highly successful foundation through our AI strategy and with Bürokratt we’re keen to raise and improve the AI service offering even more, so we can continue to provide our citizens and companies with the very best government services,” writes Ott Velsberg, Estonia’s government chief data officer.

“This initiative allows to build the next stage of data-driven e-Estonia more efficiently together with you!”

The site states under its Cooperation Principles for working on government AI that the virtual assistant is a public/private hybrid:

“Bürokratt is the vision of how digital public services should work in the age of artificial intelligence (AI). Bürokratt is thus an interoperable network of public and private sector AI solutions, which from the user’s point of view, act as a single channel for public services and information.”

There are plans for Bürokratt to be implemented across ten government departments by the end of the year, according to Computer Weekly. Estonia is working to make it available to other EU countries.

The government is considering developing databases to build data on its citizens that would not be identifiable, but would help in the delivery of services via AI.

SES RFID shows off biometric smart card with sensor under PVC

The cold lamination technology acquired by SES RFID Solutions to put fingerprint biometric sensors under the PVC layer of smart cards appears to have been fully integrated and implemented.

SES RFID acquired Jinco unit Cold Lamination Technologies in a deal earlier this year to allow SES to add the manufacturing capability to its BIOMTX biometric smart card solutions and services. SES RFID, with offices in Germany, Taiwan and the U.S., welcomed “key members” of Jinco Universal at the time.

BIOMTX cards provide “Hidden, integrated fingerprint sensors for smart cards and highly secure devices,” according to a LinkedIn post.

The BIOMTX cards are demonstrated in a YouTube video, with the sensor area indicated with a printed image of a fingerprint, rather than a visible sensor. The demonstration is of a physical access control scenario with a proximity reader. The card operates without a battery, harvesting power from the reader like most fingerprint-enabled cards.

GBT Technologies nears patent grant for real-time 3D biometric security for devices

GBT Technologies has received a notice of allowance from the U.S. Patent and Trademark Office (USPTO) for a patent on 3D biometric computer vision technology for securing electronics and mobile devices.

‘Systems and methods of facial and body recognition, identification and analysis’ describes a method for transforming data from images and videos into three-dimensional figures with advanced point detection. These figures would then be used for biometric comparisons to secure access control to the mobile device or computer.

The technique uses machine learning and three-dimensional modeling to match complete or occluded images in real-time, according to the company announcement. The patent document makes clear that the inclusion of occluded biometric data is a reference to the face masks that have become much more common around the world since the onset of the COVID-19 pandemic.

The document also suggests that the method is capable of monitoring and adjusting to “bodily changes like weight gain.”

“Our smartphones hold sensitive and personal information like passwords, medical information, account numbers, emails, photos, messages, and videos,” states Danny Rittman, GBT’s CTO. “Losing a mobile device can be worse, leading to identity theft or hacking into an email or social media accounts.

“Biometric security has become a standard to protect phones and computer data,” Rittman adds. “We believe our technology will further expand this horizon through more secure digital identification. Our facial and body recognition patent application covers comprehensive AI algorithms with the goal of learning a human’s features and identifying them with or without cover. It is our goal to continue researching this technology, combining it with our similar concepts to offer superior security and privacy capabilities for individuals and businesses.”

GBT says the same technology could also be implemented for identifying persons of interest in crowded settings like airports.

The company also received a notice of allowance for radio-based object detection and imaging technology earlier this year.

Socure praises digital identity focus, overdue standards project in CHIPS and Science Act

The CHIPS and Science Act, signed into U.S. law last week, is a huge win for the global technology industry, both for its investment in microchip manufacturing and for the support it provides for research and development in digital identity technologies, according to a new blog post from Socure.

The Act comes after more than three years of intense lobbying for domestic investment to lower reliance on Chinese suppliers for semiconductors, the post says, but it also supports other areas that Socure believes are just as vital to domestic interests.

A new technology directorate being created at the National Science Foundation and new authorities for NIST can help boost the state of the art in machine learning and identity verification, respectively. Socure says it contributed to the inclusion of new requirements by NIST to help cut fraud against government programs.

The NIST standards for trusted online transactions with digital identity are long-awaited, and the focus on attribute validation services is likely to help move beyond legacy models that can exclude up to 20 percent of people in the U.S., Socure’s VP of Government Relations Brendan Peter writes.

“Identity proofing and verification mechanisms are specifically called out in the legislation, and Congress recognized that proofing systems must be risk-based and adaptive to continuously evolving fraud schemes to ensure trust and security,” explains Peter. “One-size-fits-all approaches that leverage static rules have not met the market’s needs for years. The new requirement for NIST will accelerate the adoption of novel risk and identity verification approaches like Socure’s graph-defined identity verification, which is built to address rising synthetic and third-party fraud vectors that networked fraud rings use to steal money and take over legitimate users’ identities.”

Russia’s biggest bank gets patents for detecting deepfakes

Add deepfake detector to the list of things Russia’s Sber is. The Kremlin-controlled bank/new-economy conglomerate has been awarded two domestic patents for improvements to the task of spotting a deepfake, including when a synthetic person has been placed in a scene with actual humans.

One patent is for a way to use AI to spot and analyze micro changes in an object’s color from frame to frame. The other involves “ensembles of neural network models of the EfficientNet class.”

Sber, formerly Sberbank, still holds about a third of all bank assets in Russia. The detection is reportedly 98 percent effective, according to an article posted by Russian tech and business publisher Rusbase. A translation of the patents was not available at deadline, but it is apparent that the tools will find and analyze small color changes in a piece of video. Presumably, AI can be inconsistent at some scales in rendering video images.

It is known that deepfakes are as yet incapable of simulating the faint red flushes that cross human faces as the flesh is washed through with circulating blood. Though unnoticeable to the human eye, the effect shows up vividly in video analysis of natural people.
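
Sber’s patented method is not available in translation, but the general idea of looking for that blood-flow signal can be sketched with a toy remote-photoplethysmography-style check; the synthetic video and parameters below are illustrative assumptions.

```python
# Toy sketch of the general idea (not Sber's patented method): track the mean
# red-channel value of a face region frame by frame. Real skin carries a faint
# periodic variation from blood flow; a synthetic face may not.
import numpy as np

def mean_red_signal(frames: np.ndarray) -> np.ndarray:
    """frames: (n_frames, H, W, 3) uint8 video of a cropped face region."""
    return frames[..., 0].reshape(len(frames), -1).mean(axis=1)

def dominant_frequency(signal: np.ndarray, fps: float) -> float:
    """Frequency (Hz) with the most energy, ignoring the DC component."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return float(freqs[spectrum.argmax()])

# Synthetic stand-in: 10 s of 30 fps video with a 1.2 Hz (72 bpm) pulse.
t = np.arange(300) / 30.0
frames = np.full((300, 64, 64, 3), 128.0)
frames[..., 0] += 2.0 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(dominant_frequency(mean_red_signal(frames.astype(np.uint8)), fps=30))  # ~1.2 Hz
```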

According to a machine translation of the story, the product, which will be called Sbera (although that term is sometimes used to refer to the company itself), will be used to thwart deepfake attacks that get past face biometrics validation and liveness detection.

Nixu joins with Ioxio on digital identity for smart cities

Cybersecurity services company Nixu is partnering with Ioxio, a digital services and solutions provider.

The collaboration, focused on the creation of an interoperable digital identity for smart cities, saw its first practical application in June in a trusted data-sharing pilot in Jyväskylä, Finland.

The smart city pilot results were unveiled at the ‘City in the Pocket’ event as part of Jyväskylä’s Business Rally.

The pilot brought together Nixu’s identity access management software and Ioxio’s Dataspace service, enabling fully remote and secure access control to digital services and physical locations in the city.

“We are proud to be a part of a project like this,” says Veera Relander, business unit lead for digital identity at Nixu.

“It is in line with Nixu’s mission to keep the digital society running, and benefits all parties in the society — citizens and the public and private sectors alike,” according to Relander. “We are able to build and provide platforms that speed up digitalization, removing barriers.”

According to Mika Kataikko, leader of the innovation project at Business Jyväskylä, the pilot has been well-received by residents with its benefits being adopted quickly across Kangas, a major development in the city.

“Many people contacted us about joining the pilot to get access to the parking facilities with their home key,” Kataikko says. “With this pilot, we truly managed to create the ‘experienced smartness’ for people that we aim at when developing digital services in Jyväskylä.”

More generally, the development of the project is now moving on to the commercialization phase, in which interested stakeholders are invited to join.

Two additional use cases using the same interoperable digital identity technology have been deployed around home elderly care and sports center services.

NIST to launch ‘socio-technical approach’ to enable firms to understand and tackle their AI biases

Enterprises will need the tools, skills and human oversight to detect and remove bias in their artificial intelligence applications to maintain a safe online world, writes Steve Durbin, chief executive of security risk firm Information Security Forum, in a think piece for the World Economic Forum. Meanwhile, NIST wants entities to try a “socio-technical” approach to AI to tackle bias.

“AI-led discrimination can be abstract, un-intuitive, subtle, intangible and difficult to detect. The source code may likely be restricted from the public or auditors may not know how an algorithm is deployed,” writes Durbin, setting out the issue.

“The complexity of getting inside an AI algorithm to see how it’s been written and responding cannot be underestimated.”

Durbin uses privacy laws as a comparison and warning. Privacy laws rely on giving notice and choice such as when disclaimers pop up on websites. “If such notices were applied to AI, it would have serious consequences for the security and privacy of consumers and society,” he notes.

AI could accelerate malware attacks by detecting vulnerabilities or poisoning security AI systems by feeding them with incorrect information. Durbin offers some solutions.

Durbin recommends five general ways to approach issues with discrimination in AI.

“Because AI decisions increasingly influence and impact people’s lives at scale, enterprises have a moral, social and fiduciary responsibility to manage AI adoption ethically,” he notes, urging ethics to be treated as metrics for firms and organizations.

Firms must adopt tools and methods to help them understand and find the biases in any systems they use. The autonomy of algorithms must be balanced with the creation of an ethics committee. Employees must be empowered to promote responsible AI.

Finally, running AI algorithms alongside human decision processes, then comparing the outcomes and examining the reasons for the AI decision, can help improve traditional methods of assessing the fairness of human decisions, writes Durbin.

“AI models must be trustworthy, fair and explainable by design,” he concludes. “As AI becomes more democratized and new governance models take shape, it follows that more AI-enabled innovations are on the horizon.”

Experts at the U.S. National Institute of Standards and Technology (NIST) are set to launch a new playbook for approaching AI biases and other risks, reports Nextgov.

Written for both public and private entities, the recommendations, expected in the next few days, will be adaptable and flexible and cover areas such as human management of AI systems. ‘Socio-technical’ means having an awareness of the human impact on technology, according to Nextgov, so as to prevent it from being used in ways its designers had not intended.

The playbook is intended to help entities prevent human biases from entering their AI technologies.

Like Durbin’s recommendations, the playbook is expected to encourage governance of the technology and clear roles and responsibilities.

NIST has also been working on both assessing the extent of bias in face biometrics, and how to better measure disparities in performance between subjects from different demographics, with its ongoing Face Recognition Vendor Test series.

Kardome expands speech recognition enhancement portfolio with voice biometrics

Israeli startup Kardome has added voice biometrics and wake-up word detection to its speech recognition product enhancement portfolio.

The first of the two features is designed to enable developers and original equipment makers to build voice interfaces that recognize individuals and respond only to them.

“Our voice biometric technology works with embedded systems, offering secure and fast response times, and does not require Internet connectivity,” Kardome executives have written on their website.

Furthermore, the company says it offers voice profiles that can be securely stored online so people speaking can be identified across different systems regardless of location.

The wake word detection feature reportedly relies on Kardome’s deep learning and noise reduction algorithms to provide highly accurate detection and integrate with the company’s echo-cancellation and noise-reduction products.

According to Kardome, its wake word technology offers a false response rate of less than 10 percent at a signal-to-noise ratio of -20 dB. The software is compatible with Qualcomm, HIFI4 and ARM ADSP-SC58x/59x devices, among others.
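
For context on that figure, SNR in decibels is 10 * log10(P_signal / P_noise), so -20 dB means the surrounding noise carries one hundred times the power of the speech signal:

```python
# -20 dB SNR means the noise power is 100x the signal power.
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(signal_power=1.0, noise_power=100.0))  # -20.0
```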

More generally, Kardome’s 3D Audio Front End software is designed to enhance speech signals from multiple speakers and in real time.

According to CEO Dani Cherkassky, adding wake words and voice biometrics will give developers of speech recognition systems a one-stop solution for their voice technology.

“Our goal is to help developers and manufacturers of voice-controlled devices to create [automatic speech recognition] systems that are accurate, fast and secure,” Cherkassky says.

Humanode public sale updates

After receiving more than 11k sign-ups from various groups in the Humanode community, the team has finally settled upon the dates! Note that changes might occur and these dates are preliminary.

  • Wave 1 : Sep 14, 2022
  • Wave 2 : Sep 20, 2022
  • Wave 3 : Sep 27, 2022

Those who’d like to get into Wave 1 still have time to deploy a node, or to reserve a non-validator spot in Wave 1 by writing an in-depth article or doing a video review.

The whitelist will be open for all the waves right until their respective sales are conducted.

Whitelist details can be accessed here.

The Humanode team approached price formation by balancing the risks posed by differences in price, unlock amounts, and cliff-vesting timings. All options will be available in all waves.

  • Option 1 : 0.1725 | 100% unlocked at TGE
  • Option 2 : 0.15 | 30% unlocked at TGE | 70% — 3-month cliff and 3-month vesting
  • Option 3 : 0.13 | 20% unlocked at TGE | 80% — 6-month cliff and 6-month vesting

You can find more details here.
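
As a rough illustration of how the cliff-and-vesting options above translate into unlocked tokens over time, here is a sketch assuming linear monthly vesting after the cliff (the announcement does not spell out the exact curve):

```python
# Rough sketch of the cliff-and-vesting options above, assuming linear monthly
# vesting after the cliff (the announcement does not spell out the exact curve).
def unlocked_fraction(months_since_tge: int, tge_pct: float,
                      cliff_months: int, vesting_months: int) -> float:
    unlocked = tge_pct
    vested_months = max(0, months_since_tge - cliff_months)
    unlocked += (1 - tge_pct) * min(1.0, vested_months / vesting_months)
    return unlocked

# Option 3: 20% at TGE, 6-month cliff, then 6-month vesting.
for month in (0, 3, 6, 9, 12):
    print(month, f"{unlocked_fraction(month, 0.20, 6, 6):.0%}")
# 0 -> 20%, 3 -> 20%, 6 -> 20%, 9 -> 60%, 12 -> 100%
```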

To make sure that Humanode’s path stays aligned with the vision, the team has also released a roadmap highlighting the milestones after mainnet.

Check it out here.

These Weeks’ News by Categories

Access Control:

Consumer Electronics:

Mobile Biometrics:

Financial Services:

Civil / National ID:

Government Services & Elections:

Facial Recognition:

Fingerprint Recognition:

Iris / Eye Recognition:

Voice Biometrics:

Liveness Detection:

Behavioral Biometrics:

Biometrics Industry Events

SPIE Remote Sensing: Sep 5, 2022 — Sep 8, 2022

Identity Week Asia: Sep 6, 2022 — Sep 7, 2022

Future Tech Expo & Summit: Sep 12, 2022 — Sep 13, 2022

BIOSIG 2022 (21st International Conference of the Biometrics Special Interest Group): Sep 14, 2022 — Sep 16, 2022

Border Management & Technologies Summit Asia: Sep 20, 2022 — Sep 22, 2022

Biometrics India Expo 2022 co-located with RFID India Expo / SmartCards Expo: Sep 21, 2022 — Sep 23, 2022

Identity Week America: Oct 4, 2022 — Oct 5, 2022

Authenticate 2022: Oct 17, 2022 — Oct 19, 2022

IFINTEC Finance Technologies Conference: Oct 18, 2022 — Oct 19, 2022

Digital Identity and Digital Onboarding for Banking 3rd Annual: Oct 20, 2022 — Oct 21, 2022

Money 20/20 USA: Oct 23, 2022 — Oct 26, 2022

Biometrics Institute Annual Congress: Oct 26, 2022 — Oct 27, 2022

International Face Performance Conference (IFPC) 2022: Nov 15, 2022 — Nov 17, 2022

6th Border Management and Identity Conference (6th BMIC): Dec 7, 2022 — Dec 9, 2022

MISC

  • A good bartender is able to anticipate the kind of interaction a customer is looking for from cues provided by their facial expressions and patterns of speech. So too is a new bartending robot developed by artificial intelligence researchers and featuring facial recognition capabilities, CNBC reports.

Subscribe to Paradigm!

Medium, Twitter, Telegram, Telegram Chat, LinkedIn, and Reddit.

Main sources

Research articles

Biometric Update

Science Daily

Identity Week

Find Biometrics
