When there is zero room for error, AimFace//LipSync offers unbeatable (literally) liveness detection and anti-spoofing technology. It evaluates three factors simultaneously, combining visual, audio and lip-movement synchronisation to defeat today's presentation-attack techniques:
Does the face match the template?
Do the numbers spoken match the randomised number challenge?
Does the audio synchronise exactly with the lip movement?
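The three checks above can be thought of as feeding a single pass/fail decision. The sketch below is purely illustrative: AimBrain does not publish its scoring model, and the function names, weights and threshold here are hypothetical assumptions, not the real API.

```python
# Illustrative sketch only; AimBrain's actual scoring model is not public.
# We assume each factor yields a confidence score in [0, 1]:
#   face_match      - does the face match the enrolled template?
#   challenge_match - were the randomised numbers spoken correctly?
#   lipsync_score   - does the audio align with the lip movement?

def combined_assessment(face_match: float,
                        challenge_match: float,
                        lipsync_score: float,
                        threshold: float = 0.8) -> bool:
    """Return True if the combined score clears the acceptance threshold."""
    # A spoof that fails any single factor should fail overall, so take
    # the minimum rather than the average (a hypothetical design choice).
    score = min(face_match, challenge_match, lipsync_score)
    return score >= threshold

# A replayed video would typically pass the face check but fail the
# randomised challenge or the lip-sync check, and so fail overall.
print(combined_assessment(0.95, 1.0, 0.9))  # all three factors strong
print(combined_assessment(0.95, 1.0, 0.3))  # lip-sync fails: likely replay
```

Taking the minimum rather than a weighted average reflects the point of the three-factor design: each check closes a different spoofing avenue, so a strong score on two factors cannot compensate for a failure on the third.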
By combining voice and face data within an artificial neural network designed for audio-visual synchronisation detection, AimBrain can provide you with a single risk-based assessment that a user is who they say they are, stopping spoofing fraud before it happens.
Use as part of a step-up/step-down security sequence in conjunction with AimBehaviour (passive, continuous) or an existing authentication step.
Wherever you need to check a user's authenticity, from account Access to Banking to Catfishing, all the way through to Treasury and Workflow automation, AimFace//LipSync makes it easy to tell the genuine from the fraudulent.
Use cases are virtually unlimited.
Contact us today to talk AimFace//LipSync.
Download the AimFace//LipSync fact sheet to find out more about our groundbreaking response to synthetic voice and video fraud.
AimFace//LipSync is part of AimBrain’s Biometric Identity as-a-Service, or BIDaaS, platform.
Integrate our SDK into your mobile app and start using AimFace//LipSync today. All voice and image data is captured within your architecture; our server-side authentication model converts the raw voice and image data as it hits our server wall into an encrypted, pseudonymised template, and the original data is deleted.
Any future requests are simply sent to our server, where the new encrypted, pseudonymised data is compared to the original template. We then return an in-session, risk-based assessment that the user is who they say they are.
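The enrol-then-verify flow described above can be sketched as follows. Every name here (TemplateStore, extract_template, similarity) is a hypothetical stand-in, not AimBrain's SDK: real biometric templates are compared with a graded similarity model, not exact matching, so the placeholder functions only mark where those steps sit in the flow.

```python
# Conceptual sketch of the server-side enrol/verify flow; all names are
# hypothetical and do not reflect the actual AimBrain SDK or API.
import secrets

def extract_template(raw_sample: bytes) -> bytes:
    # Placeholder: a real system derives biometric features and encrypts
    # them here; the raw sample never persists past this point.
    return raw_sample

def similarity(a: bytes, b: bytes) -> float:
    # Placeholder: real comparison yields a graded score in [0, 1],
    # not an equality test, because live samples never match exactly.
    return 1.0 if a == b else 0.0

class TemplateStore:
    """Server-side store of encrypted, pseudonymised templates."""

    def __init__(self) -> None:
        self._templates: dict[str, bytes] = {}  # pseudonymous ref -> template

    def enrol(self, raw_sample: bytes) -> str:
        user_ref = secrets.token_hex(16)  # pseudonymous reference, no PII
        self._templates[user_ref] = extract_template(raw_sample)
        # raw_sample is now discarded; only the template is retained.
        return user_ref

    def verify(self, user_ref: str, raw_sample: bytes) -> float:
        """Compare a fresh sample against the enrolled template and
        return a risk-based score for the in-session assessment."""
        template = self._templates[user_ref]
        return similarity(template, extract_template(raw_sample))
```

The design point the sketch illustrates is that the server only ever holds the pseudonymised template and a random reference: the raw voice and image data exist transiently, and future verifications compare derived data to derived data.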