AI becomes a weapon for deception

Washington is finally moving to punish AI-powered fraud, but it still may not be enough


The scam has evolved. The law is playing catch-up.

For decades, fraud has been a human crime amplified by technology: emails instead of letters, robocalls instead of cold calls. Artificial intelligence has changed that equation entirely.

Today’s scams do not just reach more people. They sound real, look real, and increasingly feel personal. A familiar voice on the phone. A trusted face on a video call. A message that appears to come from the highest levels of government.

This week, a new bipartisan bill in the US House suggests Washington is finally acknowledging what victims already know: AI has transformed deception from a nuisance into a national security risk.

The AI Fraud Deterrence Act, introduced by Representatives Ted Lieu (D-California) and Neal Dunn (R-Florida), would significantly increase criminal penalties for fraud and impersonation schemes that rely on AI-generated audio, video, or text. It is one of the clearest signals yet that lawmakers see AI-enabled fraud not as a future threat but as a present danger.

Why Existing Fraud Laws No Longer Fit the Crime

Under current law, fraud is largely punished based on outcomes: money lost, victims harmed, institutions deceived. The method, whether handwritten, digital, or automated, often matters less than the result.

That framework made sense in a pre-AI era. It makes far less sense when a single actor can deploy synthetic voices, deepfake videos, and automated messaging at near-zero cost and massive scale.

The proposed legislation recognizes this shift. It would raise the maximum fines for crimes such as mail fraud, wire fraud, bank fraud, and money laundering to $1–2 million when AI tools are used. Prison sentences would also increase dramatically, with AI-assisted fraud carrying penalties of up to 20–30 years.

In cases involving the impersonation of government officials, the bill allows fines of up to $1 million and prison terms of up to three years, reflecting the broader public harm such deception can cause.

The message is explicit: using AI to deceive is not a clever workaround. It is an aggravating factor.

When Deepfakes Reach the Highest Offices

The urgency behind the bill is not theoretical. Over the past year, AI-powered impersonation has breached levels of trust once considered untouchable.

In May, federal authorities investigated fraudulent calls and texts sent to senators, governors, and business leaders—messages that appeared to come from White House Chief of Staff Susie Wiles. The voice sounded authentic. The number looked real. President Donald Trump later confirmed that Wiles’ phone had been breached and used in an impersonation attempt.

Weeks later, the State Department warned diplomats that someone was impersonating Secretary of State Marco Rubio, sending voice messages, texts, and Signal communications to foreign ministers, a U.S. senator, and a governor. Earlier this year, Rubio was also targeted in a deepfake video falsely portraying him making statements on CNN about Ukraine and Elon Musk.

These incidents cross a critical threshold. They are not just scams targeting individuals—they are information attacks that can destabilize diplomacy, markets, and public trust.

From Celebrities to Voters: No One Is Immune

Public officials are not the only targets. High-profile figures such as Taylor Swift have seen their likeness used in scams, non-consensual imagery, and political manipulation. Ordinary citizens face even greater exposure, often without the resources or public platforms to push back.

Perhaps most troubling is how AI impersonation has entered the democratic process itself. Ahead of the 2024 New Hampshire presidential primary, an AI-cloned voice of President Joe Biden was used in an attempt to mislead voters, an operation reportedly linked to a political consultant.

That episode underscored a stark reality: AI-generated deception is no longer just about stealing money. It can distort elections, manipulate public opinion, and undermine democratic legitimacy.

Deterrence in the Age of Synthetic Reality

Supporters of the AI Fraud Deterrence Act argue that harsher penalties are necessary to restore deterrence. When the cost of committing fraud drops to near zero, punishment must rise to compensate.

“Both everyday Americans and government officials have been victims of fraud and scams using AI,” Rep. Lieu said, warning that the consequences can be financially ruinous for individuals and disastrous for national security.

The bill reflects a growing recognition in Washington that intent matters. Using AI to convincingly impersonate another human being is not accidental misuse—it is deliberate exploitation of trust.

Yet deterrence alone may not be enough.

The Limits of Criminal Law

Criminal penalties operate after harm has occurred. They punish perpetrators, but they do not necessarily prevent first contact: the phone call, the video message, the synthetic voice that convinces someone to act.

AI-driven fraud thrives in the gap between capability and awareness. Many victims do not realize what has happened until after the damage is done. Some never report it at all.

That raises a larger question: should responsibility stop with the scammer, or extend to the platforms and tools that enable deception at scale?

The AI Fraud Deterrence Act does not directly address model developers, distribution platforms, or watermarking requirements. Those debates are unfolding elsewhere in Congress. But without parallel efforts on transparency, authentication, and public education, tougher sentences risk becoming symbolic rather than systemic.

A Signal to the Courts and the Tech Industry

Even so, the bill carries significant symbolic weight. It tells judges, prosecutors, and regulators that AI-assisted deception is not morally neutral or technologically inevitable. It is culpable.

It also sends a message to the tech industry: claims of neutrality will not shield misuse indefinitely. As AI tools become more powerful and accessible, lawmakers are increasingly willing to draw legal distinctions based on how technology is used, not just whether it exists.

That shift mirrors earlier moments in regulatory history, when new technologies, from automobiles to financial derivatives, forced lawmakers to rethink liability in the face of scale and speed.

A Necessary First Step, Not a Final Answer

The AI Fraud Deterrence Act is unlikely to be the last word on AI and deception. But it may be a necessary first step in reasserting legal boundaries in a world where seeing is no longer believing.

If passed, it would acknowledge a basic truth of the AI era: deception has been industrialized. And when deception scales, the law must scale with it.

The challenge for policymakers now is to ensure that punishment is paired with prevention—and that innovation does not become an alibi for inaction.