Cybercriminals are increasingly using sophisticated artificial intelligence to fake human identities, creating what experts describe as an “arms race” in digital security. This technology allows thieves to impersonate individuals using stolen likenesses and audio to bypass security systems and intercept funds across the globe.
The surge in AI-driven crime has transformed traditional identity theft, moving beyond stolen passwords to the creation and sale of deepfakes on the dark web. With just a single photograph or a short voice recording, criminals can now generate convincing digital replicas of individuals to manipulate others or bypass security protocols without the victim’s knowledge.
Artificial intelligence allows cybercriminals to launch attacks against millions of identity records in less than a second. These large-scale schemes target banking and government services to intercept funds and disrupt livelihoods. Jordan Burris, head of public sector at the technology company Socure and a former White House technology officer, works with organizations to combat these growing risks.
“The reality here is that we are what I would call in a crisis moment, we are in an arms race with the way AI is being used to launch these attacks,” Burris said. He noted that the speed of the technology is a primary factor, stating that “AI in seconds, less than a second, can launch an attack where you have millions of identity records that are being used across banking, across government services, to try to intercept funds, to try to disrupt livelihoods.”
The process of identity theft has evolved beyond stealing usernames and passwords. Criminals can now use AI to scour the web and, from just one picture or a few seconds of recorded voice, build a convincing replica of an individual. This likeness is often captured without the person’s knowledge, sometimes even from video recorded in public spaces. Once obtained, the material is distributed among criminal rings.
“They will repackage it, they will sell it, they will find ways to utilize it for different attacks, different schemes in particular,” Burris said regarding the handling of stolen identities.
On the dark web, criminals use specialized tools such as “Fraud GPT” to facilitate their activities. Unlike standard AI assistants, this software is designed to answer questions about how to conduct cybercrimes and is fueled by insights shared among global fraud networks. Burris described the coordination among these groups as a structured business operation.
“It will spit out a result, and it’s curated based on information that the entire network of fraudsters that operate on the dark web share insights into,” Burris said. “So they’re networked, they’re coordinated, it is a business, and they’re using this to attack everyday Americans and many folks across the globe.”
The technology also supercharges common frauds, such as romance scams. Instead of text-based chats, fraudsters can now hold live video conversations using AI-generated personas to manipulate victims. Deepfakes are also being used to defeat legacy security systems that rely on identity verification questions.
Burris says the most effective way for organizations to protect consumers is to adopt a preventative strategy that uses technology to counter criminal software.
“And so, with organizations, the best way that they can work to help protect consumers is to take more of a preventative approach in front, adopting what I would say is AI to fight AI, that’s the only way you’re actually going to stop it these days,” Burris said.
©2026 Cox Media Group



