Researchers Show Vulnerabilities in Facial Recognition

The algorithms that check for a user's 'liveness' have blind spots that can lead to vulnerabilities.

BLACK HAT USA 2019 – Las Vegas – The multifactor authentication that some have touted as the future of secure authentication is itself vulnerable to hacks as complex as injected video streams and as simple as tape on a pair of eyeglasses. That was the message delivered by a researcher at Black Hat USA earlier today.


Researchers Yu Chen, Bin Ma, and Zhuo (HC) Ma of Tencent Security's Xuanwu Lab were scheduled to speak here at Black Hat USA, but visa denials left HC Ma alone on the stage. He said his colleagues had begun the research to find out how biometric authentication was being implemented and, specifically, how the routines designed to separate a living human from a photo or other fake were put into practice.


"Previous studies focused on how to generate fake audio or video, but bypassing 'liveness detection' is necessary for a real attack," Ma said, citing some of the techniques researchers and fiction authors have used to do so.


Most liveness detection relies on a variety of signals, from body temperature (for fingerprint scans) and playback reverberation (for voice recognition) to focus blur and frequency-response distortion (for facial recognition).
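The signals above can be combined into a single pass/fail decision. As a rough sketch only — the signal names, thresholds, and fusion rule here are invented for illustration, not drawn from any vendor's implementation — a liveness gate might look like this:

```python
# Illustrative sketch of multi-signal liveness fusion. All signal names
# and thresholds are hypothetical; real systems use tuned, modality-
# specific models rather than fixed cutoffs like these.

def is_live(signals: dict) -> bool:
    """Return True only if every available anti-spoofing signal passes."""
    checks = {
        # Fingerprint spoofs (photos, molds) often lack body heat.
        "finger_temp_c": lambda v: 30.0 <= v <= 40.0,
        # Replayed audio picks up extra reverberation from the playback room.
        "voice_reverb_score": lambda v: v < 0.5,
        # A flat photo held up to a camera shows uniform focus blur.
        "focus_blur_variance": lambda v: v > 0.2,
    }
    return all(passes(signals[name])
               for name, passes in checks.items()
               if name in signals)

print(is_live({"finger_temp_c": 36.5, "focus_blur_variance": 0.4}))  # True
print(is_live({"voice_reverb_score": 0.9}))                          # False
```

The design choice worth noting is the conjunction: a spoof that defeats one signal (say, warming a fake fingerprint) still fails if any other available check trips.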


During his presentation, Ma focused on facial recognition as the most complex of the techniques. In the first demonstration, he showed a method the team developed for injecting a video stream into an authentication device between the optical sensor (camera) and processor. This technique, he said, had to manage latency – too much will trigger the system's defense mechanisms – minimize information loss, and remain sufficiently "transparent" to avoid detection by the system's defenses.
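The latency constraint Ma mentioned can be made concrete with a small sketch. Note this is a hypothetical illustration of that class of defense — the 50 ms budget and the timestamp plumbing are assumptions, not details from the talk:

```python
# Hypothetical sketch of a latency-based defense against stream injection:
# the authenticator flags any frame whose camera-to-processor delay
# exceeds a budget. The 50 ms figure is assumed for illustration.

MAX_FRAME_LATENCY_S = 0.050  # assumed budget between capture and arrival

def frame_is_suspicious(capture_ts: float, arrival_ts: float) -> bool:
    """Flag a frame whose sensor-to-processor delay exceeds the budget.

    An attacker splicing a fake stream between sensor and processor must
    intercept, substitute, and forward frames fast enough to stay under
    this check - the "too much latency" tripwire Ma described.
    """
    return (arrival_ts - capture_ts) > MAX_FRAME_LATENCY_S

print(frame_is_suspicious(0.000, 0.030))  # False: within budget
print(frame_is_suspicious(0.000, 0.120))  # True: interception added delay
```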


While this injection is certainly possible, Ma said …
