Sophistication of AI-backed operation targeting senator points to future of deepfake schemes

U.S. Sen. Ben Cardin, D-Md., addresses the audience during the Teha (The European House Ambrosetti) economic forum in Cernobbio, Como Lake, Italy, Saturday, Sept. 7, 2024. (AP Photo/Luca Bruno) (Luca Bruno, Copyright 2024 The Associated Press. All rights reserved)

WASHINGTON – An advanced deepfake operation targeted Sen. Ben Cardin, the Democratic chair of the Senate Foreign Relations Committee, this month, according to the Office of Senate Security, the latest sign that nefarious actors are turning to artificial intelligence in efforts to dupe top political figures in the United States.

Experts believe schemes like this will become more common now that the technical barriers that once surrounded generative artificial intelligence have fallen. The notice Senate Security sent to Senate offices on Monday said the attempt "stands out due to its technical sophistication and believability."

The scheme centered on Dmytro Kuleba, the former Ukrainian Minister of Foreign Affairs. Cardin's office received an email from someone staffers believed to be Kuleba, an official Cardin knew from a past meeting, according to the notice.

When the two met for a video call, the connection "was consistent in appearance and sound to past encounters." It wasn't until the caller posing as Kuleba began asking questions like "Do you support long-range missiles into Russian territory? I need to know your answer," that Cardin and his staff suspected "something was off," the Senate notice said.

"The speaker continued, asking the Senator politically charged questions in relation to the upcoming election," likely an attempt to bait him into commenting on a political candidate, according to the notice from Nicolette Llewellyn, the director of Senate Security. "The Senator and their staff ended the call, and quickly reached out to the Department of State who verified it was not Kuleba."

Cardin on Wednesday described the encounter as "a malign actor engaged in a deceptive attempt to have a conversation with me by posing as a known individual."

"After immediately becoming clear that the individual I was engaging with was not who they claimed to be, I ended the call and my office took swift action, alerting the relevant authorities," Cardin said. "This matter is now in the hands of law enforcement, and a comprehensive investigation is underway."

Cardin's office did not respond to a request for additional information.

Generative artificial intelligence can use massive computing power to digitally alter what appears on a video, sometimes changing the background or the subject in real time. The same technology can also be used to alter audio or images.

Technology like this has been used in nefarious schemes before.

A finance worker in Hong Kong paid $25 million to a scammer who used artificial intelligence to pose as the company's chief financial officer. A political consultant used artificial intelligence to mimic President Joe Biden's voice and urge voters not to vote in New Hampshire's presidential primary, leading the consultant to face more than two dozen criminal charges and millions of dollars in fines. And experts on caring for older Americans have long worried artificial intelligence-powered deepfakes will supercharge financial scams targeting seniors.

Both security officials in the Senate and artificial intelligence experts believe this could be just the beginning, given that recent leaps in the technology have made schemes like the one against Cardin not only more believable, but easier to conduct.

"In the past few months, the technology to be able to pipe in a live video deepfake along with a live audio deepfake has been easier and easier to integrate together," said Rachel Tobac, a cybersecurity expert and the CEO of SocialProof Security, who added that earlier iterations of this technology had obvious tells that they were fake, from awkward lip movement to people blinking in reverse.

"I am expecting more of these kinds of incidents to happen in the future," said Siwei Lyu, an artificial intelligence expert and professor at the University at Buffalo. "Anyone with some kind of malicious intent in their mind now has the ability to conduct this kind of attack. These could come from the political angle, but it could also come from the financial angle like fraud or identity theft."

The memo to Senate staff echoed this sentiment, telling staffers to make sure meeting requests are authentic and cautioning that "other attempts will be made in the coming weeks."

R. David Edelman, an expert on artificial intelligence and national security who led cybersecurity policy for years in the White House, described the scheme as a "sophisticated intelligence operation" that "feels quite close to the cutting edge" in how it combined artificial intelligence technology with more traditional intelligence work that recognized the connection between Cardin and the Ukrainian official.

"They recognized the existing relationship between these two parties. They knew how they might interact – timing, mode, and how they communicate," he said. "There is a sophistication to the intelligence operation."

