Generative AI is being used in a growing number of industries, including scam enterprises. Deepfakes-as-a-service businesses are now available to established and aspiring cybercriminals from technology developers that sell via end-to-end encrypted chats on the Telegram messaging app or on the dark web.
Criminals can buy deepfake images, cloned voices, biometric datasets and synthetic identities. A custom deepfake image costs between $10 and $50; a ready-to-use synthetic identity can be purchased for around $15.
Deepfakes that spoof biometric data are used to create synthetic identities designed to beat the defenses of businesses that deploy KYC (know your customer) checks; these identities are commonly used to make fraudulent loan applications. (In recent years, the definition of synthetic identity has expanded beyond documents alone to include biometric data.)
Also available from deepfakes-as-a-service businesses are voice impersonation services and AI-enhanced malware that leverages dark web-based large language models to create more effective bots. Voice cloning software is available off the shelf for less than $10.
LexisNexis Risk Solutions’ Dynamic Decision Platform delivers, through a single API, the integrated layers of defense needed to combat deepfakes and synthetic identity fraud.
Integral to the company’s fraud-fighting capability is IDVerse, which provides document and biometric verification and is used by clients worldwide in financial services, gambling, retail, telecom and other industries.
IDVerse, which is available through the Dynamic Decision Platform, verifies the authenticity of government-issued photo IDs from more than 200 countries and territories to confirm that a user is who they say they are.
When a selfie is used as part of the process of opening a new account, IDVerse improves liveness detection and reduces false positives.
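The layered decision logic described above can be illustrated with a short sketch. This is not the actual LexisNexis or IDVerse API; the function names, fields and thresholds are all hypothetical, and real systems weigh many more signals. It simply shows how document authenticity, selfie liveness and face-match scores might combine into a single approve/review/reject outcome.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    document_authentic: bool   # did the government-issued ID pass authenticity checks?
    liveness_score: float      # 0.0-1.0: confidence the selfie came from a live person
    face_match_score: float    # 0.0-1.0: similarity between the selfie and the ID photo

def decide(result: VerificationResult,
           liveness_threshold: float = 0.8,
           match_threshold: float = 0.85) -> str:
    # Reject immediately if the ID document itself fails authenticity checks.
    if not result.document_authentic:
        return "reject"
    # A low liveness score suggests a replayed, printed or injected (deepfake) selfie.
    if result.liveness_score < liveness_threshold:
        return "review"
    # The live selfie must also match the photo on the document.
    if result.face_match_score < match_threshold:
        return "review"
    return "approve"
```

Routing borderline cases to "review" rather than rejecting outright is one common way such systems reduce false positives for legitimate applicants.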
Generative AI’s ability to create legitimate-looking faces is not only impressive; through a phenomenon referred to as “AI hyperrealism,” AI-generated fake faces are actually perceived by human fraud fighters as more human than the faces of real people. This works to cybercriminals’ advantage.
IDVerse’s video-based ID document capture feature stops injection of AI-generated or manipulated images.
Dynamic Decision Platform’s other features include device risk detection, digital identity profiling, risk pattern analysis, suspicious behavior monitoring and identity fraud controls.
LexisNexis Risk Solutions’ adaptive models are self-learning and evolve in real time. By comparison, purely rules-based fraud-fighting technologies that use templates are proving slower to adapt to criminal use of generative AI.
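The contrast between self-learning models and static rules can be made concrete with a minimal sketch. This is a generic toy, not LexisNexis’ technology: it pairs a tiny online logistic scorer, which nudges its weights after every labeled event, against a hand-written rule that never changes unless a human rewrites it.

```python
import math

class OnlineFraudScorer:
    """Tiny online logistic model: one SGD step per labeled event,
    so the decision boundary drifts as fraud patterns change."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def score(self, x: list[float]) -> float:
        # Probability (0-1) that this event is fraudulent.
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x: list[float], label: int) -> None:
        # Gradient step on log-loss: nudge weights toward the observed outcome.
        err = self.score(x) - label
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

def static_rule(x: list[float]) -> bool:
    # A template rule: fixed threshold on one signal, e.g. document tampering.
    return x[0] > 0.5
```

Feed the scorer a stream of labeled events and its scores separate the two classes on their own; the static rule keeps flagging the same pattern even after fraudsters move on to a new one.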
All fraud-fighting and risk management technology from LexisNexis Risk Solutions is available through the Dynamic Decision Platform and from the company’s risk portal. IDVerse can also be accessed directly via an API.
Prior issues: 1277, 1223, 1173, 1158, 970, 868