
Gen AI could make KYC effectively useless


KYC, or “Know Your Customer,” is a process intended to help financial institutions, fintech startups and banks verify the identity of their customers. KYC authentication often involves “ID images” — cross-checked selfies used to confirm a person is who they say they are. Wise, Revolut and the cryptocurrency platforms Gemini and LiteBit are among those relying on ID images for security onboarding.

But generative AI could sow doubt into these checks.

Viral posts on X (formerly Twitter) and Reddit show how, leveraging open source and off-the-shelf software, an attacker could download a selfie of a person, edit it with generative AI tools and use the manipulated ID image to pass a KYC test. There’s no evidence that gen AI tools have been used to fool a real KYC system — yet. But the ease with which relatively convincing deepfaked ID images can be created is cause for alarm.

Fooling KYC

In a typical KYC ID image authentication, a customer uploads a picture of themselves holding an ID document — a passport or driver’s license, for example — that only they could possess. A person — or an algorithm — cross-references the image with documents and selfies on file to (hopefully) foil impersonation attempts.

ID image authentication has never been foolproof. Fraudsters have been selling forged IDs and selfies for years. But gen AI opens up a range of new possibilities.

Tutorials online show how Stable Diffusion, a free, open source image generator, can be used to create synthetic renderings of a person against any desired backdrop (e.g. a living room). With a little trial and error, an attacker can tweak the renderings to show the target appearing to hold an ID document. At that point, the attacker can use any image editor to insert a real or fake document into the deepfaked person’s hands.

Granted, getting the best results with Stable Diffusion requires installing additional tools and extensions and procuring around a dozen images of the target. A Reddit user going by the username _harsh_, who’s published a workflow for creating deepfake ID selfies, told TechCrunch that it takes around 1-2 days to make a convincing image.

But the barrier to entry is certainly lower than it used to be. Creating ID images with realistic lighting, shadows and environments used to require somewhat advanced knowledge of photo editing software. Now, that’s not necessarily the case.

Feeding deepfaked KYC images to an app is even easier than creating them. Android apps running on a desktop emulator like Bluestacks can be tricked into accepting deepfaked images instead of a live camera feed, while apps on the web can be foiled by software that lets users turn any image or video source into a virtual webcam.

Growing threat

Some apps and platforms implement “liveness” checks as additional security to verify identity. Typically, they involve having a user take a short video of themselves turning their head, blinking their eyes or demonstrating in some other way that they’re indeed a real person.

But liveness checks can be bypassed using gen AI, too.

Early last year, Jimmy Su, the chief security officer for cryptocurrency exchange Binance, told Cointelegraph that deepfake tools today are sufficient to pass liveness checks, even those that require users to perform actions like head turns in real time.

The takeaway is that KYC, which was already hit-or-miss, could soon become effectively useless as a security measure. Su, for one, doesn’t believe deepfaked images and video have reached the point where they can fool human reviewers. But it might only be a matter of time before that changes.
