Biometric spoofing, also known as biometric spoof attacks or biometric presentation attacks, refers to the manipulation or falsification of biometric data to deceive a biometric authentication system.

What Is Biometric Spoofing?

Biometric spoofing is an attempt to deceive a biometric authentication system by presenting a fake sample, such as a forged fingerprint, facial image, or iris scan, in what is known as a “presentation attack.” While biometric identification methods are significantly stronger than password-based systems, they are not invulnerable, and modern threats continue to evolve to bypass such protections. As a result, many compliance and regulatory standards for strong authentication require some form of liveness detection that can determine whether the credentials presented come from a real, present person.

In general, liveness detection methods can be classified as active or passive:

  • Active methods require user interaction: the system prompts the user to perform a specific action, such as smiling, blinking, or speaking, and verifies that the action actually occurred.
  • Passive methods work in the background and require no direct interaction; the system analyzes the captured sample itself, often without the user’s awareness.

Broadly speaking, active methods are typically harder to spoof but demand more engagement from the user, which can hurt usability. Passive methods provide a smoother, more seamless experience but leave more room for fraud.
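
To make the distinction concrete, the following minimal sketch shows how a verifier might route a capture through either an active or a passive liveness check. The data structures, helper functions, and thresholds are illustrative assumptions made for this article, not 1Kosmos APIs or any production algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Capture:
    frames: List[List[float]]        # per-frame features (e.g., landmark or pixel signals)
    challenge: Optional[str] = None  # action the user was asked to perform, if any

def passive_liveness_score(frames: List[List[float]]) -> float:
    # Placeholder signal: real passive systems score texture, depth, or reflection
    # cues; here we simply use frame-to-frame variation as a stand-in.
    if len(frames) < 2:
        return 0.0
    diffs = [abs(a - b) for prev, cur in zip(frames, frames[1:]) for a, b in zip(prev, cur)]
    return min(1.0, sum(diffs) / max(len(diffs), 1))

def active_challenge_passed(frames: List[List[float]], challenge: str) -> bool:
    # Placeholder: an active system would verify that the prompted action
    # (smile, blink, head turn, spoken phrase) actually occurred in the frames.
    return passive_liveness_score(frames) > 0.2

def is_live(capture: Capture, passive_threshold: float = 0.5) -> bool:
    if capture.challenge is not None:   # active path: the user was prompted to act
        return active_challenge_passed(capture.frames, capture.challenge)
    # passive path: no interaction, decide from the captured signal alone
    return passive_liveness_score(capture.frames) >= passive_threshold

# Two nearly identical frames (e.g., a static photo) fail the passive check.
print(is_live(Capture(frames=[[0.10, 0.20], [0.11, 0.20]])))  # False
```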

What Are Some Types of Biometric Spoofing?

Because each form of biometric spoofing targets a particular authentication method, each attack is tailored to the specific weaknesses and opportunities of that method.
As such, presentation attacks can target various biometric modalities, including:

Facial Recognition Spoofing Attacks

Attackers may use different techniques to deceive facial recognition systems. Some of the common methods include:

  • Print Attack: The attacker uses a printed photograph of the target person’s face to trick the facial recognition system. This is one of the simplest methods and can be effective against less sophisticated systems, which is a concern wherever those systems protect important information.
  • Replay Attack: Hackers record a video of the target person’s face and play it back in front of the camera. This approach is often more successful than a print attack since it incorporates motion, which some facial recognition systems may require.
  • 3D Mask Attack: The attacker creates a realistic 3D mask of the target person’s face and wears it during authentication. This method can be more challenging to detect, but it is equally challenging to execute effectively without specialized skills and equipment.
  • Deepfake Attack: The attacker uses a machine learning/AI program to generate a video of the target’s face. Deepfake technology can create convincing facial movements and expressions, making it difficult for some facial recognition systems to differentiate between real and fake.

Facial recognition liveness detection techniques can include analyzing facial movements like blinking or verifying 3D depth information. Additionally, while there is no consensus on how convincing deepfakes have become, new detection technology from Intel can look for artifacts that signal a video is artificial, which keeps deepfake attacks relatively niche.
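
One common blink-based liveness heuristic uses the eye aspect ratio (EAR) computed from six eye landmarks: the ratio drops sharply when the eye closes and recovers when it reopens, a pattern a static photo cannot produce. The sketch below assumes the landmark coordinates have already been extracted by a face-landmark model; the threshold and helper names are assumptions, not any vendor’s implementation.

```python
import math
from typing import Sequence, Tuple

Point = Tuple[float, float]

def eye_aspect_ratio(eye: Sequence[Point]) -> float:
    """EAR over the six standard eye landmarks: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    def dist(a: Point, b: Point) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])   # upper/lower lid pairs
    horizontal = dist(eye[0], eye[3])                        # eye corners
    return vertical / (2.0 * horizontal)

def blink_detected(ear_series: Sequence[float], closed_threshold: float = 0.2) -> bool:
    # A blink appears as the EAR dipping below the threshold and then recovering.
    dipped = False
    for ear in ear_series:
        if ear < closed_threshold:
            dipped = True
        elif dipped:
            return True
    return False

# EAR per frame, as produced by a landmark model (e.g., dlib or MediaPipe):
print(blink_detected([0.31, 0.30, 0.12, 0.09, 0.28, 0.32]))  # True: eyes closed, then reopened
print(blink_detected([0.30, 0.31, 0.30, 0.29, 0.31, 0.30]))  # False: no blink, possibly a photo
```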

Fingerprint Recognition Spoofing Attacks

Fingerprint verification systems, while generally secure, can still be vulnerable to spoofing if appropriate countermeasures are not in place. Some of the common fingerprint spoofing methods include:

  • Fake Fingerprints: Hackers create artificial fingerprints using materials like gelatin that replicate the target user’s fingerprint pattern, often molded directly from a lifted print. The fake fingerprint can then be placed over the attacker’s finger or on a dummy finger to deceive the fingerprint scanner.
  • Latent Fingerprints: An attacker lifts a target user’s latent fingerprint from a surface using adhesive tape or other methods and then transfers it onto a material that can deceive the fingerprint scanner.
  • 3D-Printed Fingerprints: A sophisticated attack that involves someone creating a 3D model of the target user’s fingerprint using digital techniques and then 3D printing it with materials that mimic human skin properties. This method can create realistic replicas that can deceive some fingerprint scanners.

Countermeasures against these attacks include measuring finger skin temperature, moisture, or electrical properties to ensure the presented fingerprint comes from a live person.
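
As a rough illustration of how such countermeasures combine, the sketch below folds a few hypothetical sensor readings into a single liveness decision. The field names and threshold ranges are assumptions made for this example, not values from any real sensor specification or the 1Kosmos platform.

```python
from dataclasses import dataclass

@dataclass
class FingerReading:
    temperature_c: float   # skin-surface temperature reported by the sensor
    moisture: float        # normalized moisture/conductivity reading, 0..1
    capacitance_ok: bool   # whether the capacitive response resembles live skin

def finger_is_live(r: FingerReading) -> bool:
    # Crude plausibility bands; production systems fuse many more signals.
    temperature_plausible = 25.0 <= r.temperature_c <= 40.0
    moisture_plausible = 0.1 <= r.moisture <= 0.9
    return temperature_plausible and moisture_plausible and r.capacitance_ok

# A cold, dry overlay (e.g., a gelatin fake) would typically fail at least one check.
print(finger_is_live(FingerReading(temperature_c=21.0, moisture=0.02, capacitance_ok=False)))  # False
print(finger_is_live(FingerReading(temperature_c=33.5, moisture=0.45, capacitance_ok=True)))   # True
```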

Iris Recognition Spoofing Attacks

While iris recognition is generally considered to be a highly secure biometric modality, it can still be vulnerable to spoofing attacks if appropriate countermeasures are not in place.
Some of the common iris presentation attacks include:

  • Digital Iris Images: Displaying a digital image or video of the target user’s iris on a device screen, such as a smartphone or tablet, and presenting it to the iris scanner. By adjusting the display’s lighting and sharpness settings, an attacker can sometimes fool less sophisticated biometric scanners.
  • Artificial Eyes or Contact Lenses: Creating an artificial eye or a custom contact lens with the target user’s iris pattern imprinted. These can be harder to detect if the contacts are created well.
  • Physical Eyes: Although rare and extreme, using a preserved cadaver eye bearing the target user’s iris pattern can also deceive an iris recognition system. The attacker would have to obtain the eye of a deceased subject and use it relatively quickly for the attack to work, which itself acts as a deterrent.

To defend against iris presentation attacks, defenders may use techniques like examining the natural movement and contraction of the iris, verifying light reflection patterns, or analyzing the unique texture of the iris surface.
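
For example, a pupillary-response check exploits the fact that a live pupil constricts after a brief light stimulus, while a printed or displayed iris does not. The sketch below assumes pupil diameters have already been measured per frame; the stimulus timing and constriction threshold are illustrative assumptions only.

```python
from typing import Sequence

def pupil_responds_to_light(diameters_mm: Sequence[float],
                            stimulus_frame: int,
                            min_constriction: float = 0.15) -> bool:
    """Return True if the pupil shrinks by at least `min_constriction` (as a
    fraction of its pre-stimulus size) after the light stimulus."""
    before = diameters_mm[:stimulus_frame]
    after = diameters_mm[stimulus_frame:]
    if not before or not after:
        return False
    baseline = sum(before) / len(before)
    return (baseline - min(after)) / baseline >= min_constriction

# Live eye: diameter drops from about 4.0 mm to 3.0 mm after a flash at frame 3.
print(pupil_responds_to_light([4.0, 4.1, 4.0, 3.6, 3.1, 3.0], stimulus_frame=3))  # True
# Printed or on-screen iris: diameter stays constant.
print(pupil_responds_to_light([4.0, 4.0, 4.0, 4.0, 4.0, 4.0], stimulus_frame=3))  # False
```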

How Is Liveness Detection Used in Identity Assurance Level (IAL) Verification?

Identity Assurance Level (IAL) is a classification defined in the National Institute of Standards and Technology (NIST) Special Publication 800-63-3, “Digital Identity Guidelines,” to categorize the level of confidence in an individual’s asserted identity. These levels are often used to add layers of identity verification to processes involving sensitive government systems or classified data.

IAL2 is an intermediate level of assurance, the second of three levels. At IAL2, an individual’s identity must be verified through remote or in-person proofing processes, which involve validating and verifying identity information against trusted records and sources. Liveness detection plays a role in IAL2 by ensuring the integrity and authenticity of biometric data collected during the identity-proofing process.

Combating Biometric Spoofing with 1Kosmos

Biometrics can improve security by replacing passwords, but they can be subject to theft, spoofing, and decision bias. As discussed, bad actors are implementing approaches to bypass biometric assurances with increasing success.

The 1Kosmos platform performs a series of checks to prevent biometric-based attacks. For instance, 1Kosmos LiveID can perform both “active” liveness (requiring the user to perform randomized expressions) and “passive” liveness, which requires no user involvement. Additionally, 1Kosmos utilizes true-depth camera functionality to prevent presentation attacks and offers an SDK that protects against camera manipulation to prevent injection attacks. Alongside these advances, 1Kosmos BlockID also offers the following features:

  • Anti-Spoofing Algorithms: 1Kosmos anti-spoofing algorithms detect and differentiate between genuine biometric data and spoofed data. Our algorithms analyze factors like texture, temperature, color, and movement to determine the authenticity of the biometric sample, catching virtual/hardware camera and JavaScript injections and ensuring the validity of the transmitted identity.
  • Data Encryption: 1Kosmos ensures that biometric data is encrypted both during transmission and storage to prevent unauthorized access. Implementing strict access controls and encryption protocols prevents man-in-the-middle attacks and protocol injections, ensuring the validity of the transmitted identity.
  • Regular Audits and Penetration Testing: 1Kosmos conducts regular audits and penetration testing to identify and address vulnerabilities, including access to a user’s biometric data. This helps ensure that security measures are effective and up to date.
  • Regulatory Compliance: 1Kosmos complies with regulations and standards related to biometric data protection and security, such as the National Institute of Standards and Technology (NIST 800-63-3), iBeta DEA EPCS, UK DIATF, General Data Protection Regulation (GDPR) and Know Your Customer/Employee (KYC/KYE). For a list of certifications, click here.
  • Human “Failover”: 1Kosmos offers 24×7 staffed call centers to assist when an attack is detected or if a user has trouble completing a verification process.

Learn how 1Kosmos can help your organization modernize Identity and Access Management and prevent biometric-based attacks—visit our Architectural Advantage page and schedule a demo today.
