AimBrain, the Biometric Identity as-a-Service platform, has today raised the bar in user authentication by introducing optional audio and lip synchronisation into its facial authentication module to create AimFace//LipSync. Designed to prove liveness and counter even the most sophisticated spoofing technologies, AimFace//LipSync provides AimBrain customers with stronger user authentication by combining facial recognition with a spoken challenge and lip movement analysis.
Traditionally, defences against facial biometric fraud – confirming that the user is genuine and not a photo, video or computer-generated simulation – have relied on training algorithms to process images at a low level, searching for fraud signals such as chromatic anomalies, textural differences, <a href="https://en.wikipedia.org/wiki/Moir%C3%A9_pattern" target="_blank" rel="noopener">Moiré patterns</a> or screen exposures. This low-level processing has, however, made the algorithms sensitive to subtle changes in camera, projection means and external environments, resulting in high accuracy but only within the limited parameters in which they were trained.
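AimBrain has not published its detection code, but the kind of low-level frequency analysis described above can be illustrated with a toy sketch. A recaptured screen often superimposes a periodic interference (Moiré) pattern on the image, which shows up as extra spectral energy away from the low-frequency centre. The function name, the 5% radius cut-off, and the synthetic demo images below are all assumptions for illustration, not AimBrain's method:

```python
import numpy as np

def moire_score(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency core.

    Periodic screen-recapture artefacts concentrate energy at
    off-centre frequency peaks, pushing this ratio up.
    Toy heuristic only, not a production anti-spoofing check.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    cutoff = min(h, w) * 0.05            # assumed low-frequency radius
    low = spectrum[radius < cutoff].sum()   # coarse image content
    high = spectrum[radius >= cutoff].sum() # fine periodic detail
    return high / (low + high)

# Synthetic demo: a smooth gradient vs. the same gradient with a
# superimposed periodic stripe pattern, mimicking screen recapture.
base = np.linspace(0.0, 1.0, 128)[None, :] * np.ones((128, 1))
grid = base + 0.2 * np.sin(2 * np.pi * 30 * np.arange(128) / 128)[None, :]
```

As the article notes, heuristics of this kind are brittle: the cut-off and threshold that work for one camera and screen combination may fail for another, which is the sensitivity AimBrain's audio-visual approach is designed to avoid.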
“We decided to take a different approach to the problem,” said Efstathios Vafeias, Lead Scientist at AimBrain, who led the project. “We have developed an algorithm that uses both visual and audio data to detect a real person, not a presentation attack. By asking a user to say a randomised number to the camera, our technology now not only authenticates their face against a template, but verifies that the numbers match the prompt and analyses the synchronisation between the voice and lip movement. So as well as providing a step-change in security, this method maintains accuracy while being less susceptible to hardware or environmental changes.”
AimBrain is first to market with this unique combination of visual and audio syncing, which can be used across any industry in place of any process that uses passwords or two-factor authentication. An enterprise’s user or customer authenticates themselves by saying a randomised number to camera when prompted, whereupon the algorithm behind AimBrain’s new feature (AimFace//LipSync) assesses three components:
- Facial recognition: Does the face match the template?
- Liveness detection: Do the lips move in response to the voice challenge?
- Anti-spoofing technology: Is the sound synchronised to the lip movement?
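The three checks above amount to a simple all-or-nothing gate: a session is accepted only if every component passes. The sketch below shows that structure; the score names, thresholds and the `authenticate` function are hypothetical placeholders, not AimBrain's API:

```python
from dataclasses import dataclass

@dataclass
class ChallengeResult:
    face_match: float    # similarity of the live face to the enrolled template
    digits_match: bool   # did the spoken digits match the random prompt?
    sync_score: float    # audio-visual lip synchronisation confidence

def authenticate(result: ChallengeResult,
                 face_threshold: float = 0.8,
                 sync_threshold: float = 0.7) -> bool:
    """Accept only if all three checks pass (thresholds are assumed)."""
    return (result.face_match >= face_threshold
            and result.digits_match
            and result.sync_score >= sync_threshold)
```

Because the number is randomised per session, a replayed video fails the digit check even if the face and lip movement are genuine recordings of the user.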
“Many of today’s anti-spoofing technologies can be fooled using the simplest of measures,” said Andrius Sutas, CEO and co-founder at AimBrain. “We have all seen the high-profile hacks that have beaten new smartphone biometric security systems within days of their release. Our lip sync technology means that to beat it, an attacker must be human, look exactly like the user and correctly say a random number within a limited timeframe, while we analyse the lip movements.”
“The level of sophistication required for an attack goes far beyond ordinary capabilities.”
Alesis Novik, CTO and co-founder at AimBrain, believes that the fight against fraud is one that will never end, and his product roadmap does not stop with lip syncing. “We are in the late stages of developing an integrated solution that combines facial authentication and voice authentication, and assigns an automatic weighting to the two, depending on the environment and context. If a user is in a noisy environment, the weighting will be on the visual side. In a dark environment, for example, the audio authentication plays a stronger part. We are nearly there and expect to launch later this year.”
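The adaptive weighting Novik describes could, in a much simplified form, look like the sketch below: noise shifts weight toward the face score, darkness toward the voice score. The weighting formula and parameter names are assumptions for illustration, not AimBrain's implementation:

```python
def fuse_scores(face_score: float, voice_score: float,
                noise_level: float, darkness: float) -> float:
    """Context-weighted fusion of face and voice scores (illustrative).

    noise_level and darkness are assumed to lie in [0, 1].
    Loud surroundings favour the face score; dark surroundings
    favour the voice score.
    """
    face_weight = (1.0 - darkness) + noise_level    # face dominates in noise
    voice_weight = (1.0 - noise_level) + darkness   # voice dominates in the dark
    total = face_weight + voice_weight
    return (face_weight * face_score + voice_weight * voice_score) / total
```

For example, with a strong face score and a weak voice score, a noisy-but-bright context yields a fused score near the face score, while a quiet-but-dark context pulls it toward the voice score.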