Chatbots, Deepfakes, and Digital Personas: AI Laws Take Effect Across United States

After years of warnings and debates, 2026 marks a turning point: from Silicon Valley’s boardrooms to state capitols, AI is now governed by real law. From California’s safety mandates to New York’s synthetic performer disclosures and Texas’s governance act, this is not incremental regulation, it is structural reform for the AI era.


As of January 2026, a sweeping set of AI laws, targeting everything from companion chatbots to synthetic advertising and government AI use, has taken full effect across multiple U.S. states. These laws represent some of the most consequential frameworks yet enacted in the AI era, creating both protections for citizens and compliance burdens for developers and businesses alike.

Chief among them are:

•          California’s SB 243, the first of its kind in the U.S. to impose safety protocols and transparency requirements on AI “companion chatbot” operators, especially to protect minors and vulnerable users.

•          New York’s “AI Transparency in Advertising” law, requiring explicit disclosure when synthetic performers (AI-created human-like characters) are used in commercial content.

•          Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA), a comprehensive law that focuses on governmental AI use and civil liberties guardrails, with important implications for social scoring and biometric identification.

Rather than waiting for federal action, states across the U.S. have moved decisively to regulate AI with tangible legal consequences, a significant shift from industry self-policing to public accountability.

The U.S. AI Regulation Landscape Has Changed

For years, policymakers warned that artificial intelligence, from large language models to generative systems, was advancing faster than the law. In the absence of comprehensive federal regulation, state governments took the lead in crafting statutes to protect citizens, preserve fairness, and hold developers accountable.

By January 2026, key pieces of this regulatory framework took effect, touching core aspects of everyday life: how AI interacts with children, how AI is used in commercials, and how governments can deploy automated systems without violating privacy and civil liberties.

This movement is not isolated. According to the National Conference of State Legislatures, 38 states adopted more than 100 AI-related laws by late 2025, addressing issues from deepfakes to government transparency.

Taken together, these laws form the most substantial body of AI regulation in the United States to date, even as debate continues around federal preemption and consistency.

California: Setting the Nation’s AI Safety Standard

Senate Bill 243: Regulating “Companion Chatbots”

California’s Senate Bill 243, signed into law on October 13, 2025, and effective January 1, 2026, is a landmark statute. It is the first comprehensive state law in the U.S. specifically regulating AI companion chatbots, systems designed to simulate sustained human-like interaction.

Core Requirements of SB 243

•          Clear disclosure that users are interacting with AI rather than a human.

•          Implementation of safety protocols to prevent harmful output, including content relating to suicidal ideation, self-harm, or sexually explicit material.

•          Break reminders for minors every three hours of continued use, telling them they are engaging with AI and should take a break (a minimal implementation sketch follows this list).

•          Annual reporting of safety protocol effectiveness to California authorities.
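
For operators, the disclosure and reminder duties map naturally onto session logic. Below is a minimal sketch in Python of how a platform might track them; the three-hour interval comes from the statute as described above, but the class, constants, and method names are illustrative assumptions, not drawn from any official compliance toolkit.

```python
from datetime import datetime, timedelta

# Hedged sketch of SB 243-style duties; all names here are illustrative.
AI_DISCLOSURE = "You are chatting with an AI, not a human."
BREAK_REMINDER = "Reminder: you are talking to an AI. Consider taking a break."
REMINDER_INTERVAL = timedelta(hours=3)  # the statute's three-hour cadence for minors

class CompanionSession:
    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_reminder_at = datetime.now()

    def opening_messages(self) -> list[str]:
        # Clear up-front disclosure that the user is interacting with AI.
        return [AI_DISCLOSURE]

    def maybe_remind(self, now: datetime) -> str | None:
        # For minors, resurface the reminder every three hours of continued use.
        if self.user_is_minor and now - self.last_reminder_at >= REMINDER_INTERVAL:
            self.last_reminder_at = now
            return BREAK_REMINDER
        return None
```

A real deployment would persist timestamps server-side and localize the messages; this sketch only shows where the statutory duties attach.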

This law applies to all operators offering companion chatbot platforms in California, from global tech giants to innovative startups.

The statute’s enactment stemmed from growing concern about reports of emotional dependency and harm associated with AI systems marketed as social companions, including instances of teenagers engaging in self-harm in response to chatbot interactions.

SB 243 sharply signals that user safety, especially for minors, must be integral to AI design and deployment. It prioritizes proactive protection over after-the-fact damage control.

New York’s Synthetic Performer and AI Advertising Law

Parallel to California’s chatbot safety law, New York State enacted a law targeting AI’s use in commercial media, requiring explicit labeling when “synthetic performers” (AI-generated human likenesses) appear in advertisements.

This statute recognizes that AI’s ability to create photorealistic digital humans blurs the line between genuine human performers and algorithmic creations. Without clear disclosure, consumers can be misled about what is real, who is paid or credited, and what rights individuals have regarding their likenesses.

Key Provisions

•          Businesses must conspicuously disclose when AI-generated or materially altered digital content appears in ads.

•          Civil penalties apply for failure to comply (e.g., fines start at $1,000 per violation, increasing on repeat offenses); a minimal pre-publication check is sketched after this list.

•          There are narrow exceptions, e.g., audio-only ads or use of synthetic performers in entertainment works where such characters are consistently present.
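
For an advertiser or platform, the practical question is which creatives trigger the labeling duty before they ship. The following is a hedged sketch of such a check: the $1,000 base figure comes from the provision above, while the field names and the audio-only test are illustrative assumptions, not the statute’s actual schema.

```python
# Hedged sketch of a pre-publication check for synthetic-performer disclosure.
# The $1,000 base penalty is described above; every field name is an assumption.

BASE_PENALTY_USD = 1_000  # first violation, per the provision described above

def requires_disclosure(ad: dict) -> bool:
    # Audio-only ads fall under the statute's narrow exceptions.
    return ad.get("uses_synthetic_performer", False) and ad.get("format") != "audio_only"

def undisclosed(ads: list[dict]) -> list[dict]:
    """Return ads that use a synthetic performer but carry no conspicuous label."""
    return [ad for ad in ads if requires_disclosure(ad) and not ad.get("disclosure_label")]
```

Flagging at review time, rather than after publication, keeps the escalating repeat-offense penalties out of reach.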

This law tackles consumer-deception risks and preserves transparency in the burgeoning AI-driven advertising ecosystem.

Texas: Responsible AI Governance

Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA), also known as HB 149, took effect on January 1, 2026. It is one of the most comprehensive state AI laws, especially focused on government use of artificial intelligence systems.

Focus Areas of TRAIGA

•          Restrictions on governmental use of AI for activities such as social scoring and biometric identification without appropriate consent.

•          Clarification of biometric data consent standards, defining when an individual has truly agreed to the collection or use of scans, facial data, or identifiable information (a minimal consent-gate sketch follows this list).

•          Balance between innovation and civil liberties, allowing some beneficial uses of AI while preventing discriminatory or opaque practices.
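
Read as an engineering requirement, the consent provision amounts to a hard gate in front of any biometric pipeline. Here is a minimal sketch of that gate, assuming a hypothetical consent registry and a stubbed matcher; none of these names come from the statute.

```python
# Illustrative consent gate for biometric identification under a TRAIGA-style
# rule; the registry interface and the matcher stub are assumptions, not law.

class ConsentError(Exception):
    """Raised when biometric processing is attempted without recorded consent."""

class ConsentRegistry:
    def __init__(self) -> None:
        self._grants: set[tuple[str, str]] = set()

    def record(self, subject_id: str, purpose: str) -> None:
        self._grants.add((subject_id, purpose))

    def has_affirmative_consent(self, subject_id: str, purpose: str) -> bool:
        return (subject_id, purpose) in self._grants

def identify_with_biometrics(subject_id: str, scan: bytes, registry: ConsentRegistry) -> str:
    # Hard gate: identification never runs unless affirmative consent is on record.
    if not registry.has_affirmative_consent(subject_id, purpose="biometric_id"):
        raise ConsentError(f"no recorded consent for subject {subject_id}")
    return f"match_result_for_{subject_id}"  # stand-in for a real matcher
```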

Unlike California’s consumer-centric approach, Texas’s law shines a spotlight on how AI is wielded by government entities, ensuring public authority does not exploit automation to invade privacy or impose novel forms of surveillance.

The Patchwork Effect: Why State Laws Matter

Because comprehensive federal AI policy is still under development, states have created a patchwork of rules: some overlap, others diverge.

This mosaic of regulations now touches:

•          AI transparency requirements, including training data disclosures in some jurisdictions.

•          Deepfake and intimate imagery regulation, such as the federal Take It Down Act aimed at non-consensual explicit images (effective May 2025).

•          Labor and employment AI audits, algorithmic equity standards, and more (in other states beyond California, New York, and Texas).

In some cases, states have exercised legislative creativity in areas federal law has yet to touch, from mandating algorithmic fairness to ensuring AI does not undermine consumer rights or personal autonomy.

Legal and Industry Implications

For Businesses and Developers

These legal developments mean:

•          AI firms must invest substantially in compliance, auditing, and user safety tools.

•          Platform design can no longer defer accountability solely to internal policies.

•          Legal risk now includes potential fines and civil liability.

Developers working across multiple states face the challenge of divergent requirements; for example, what constitutes appropriate user disclosure in California may differ from what Texas or New York demand. One practical response, sketched below, is to centralize each jurisdiction’s duties as configuration. This dynamic will pressure industry coalitions and legal standards bodies to harmonize practices.
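
The sketch below paraphrases the three laws discussed in this article as a per-jurisdiction map; the schema and keys are assumptions for illustration, not a legal reference.

```python
# Illustrative per-jurisdiction compliance map; entries paraphrase the state
# laws discussed above, and the schema itself is an assumption for clarity.

STATE_REQUIREMENTS: dict[str, dict[str, object]] = {
    "CA": {  # SB 243: companion chatbot safety
        "ai_disclosure": True,
        "minor_break_reminder_hours": 3,
        "annual_safety_report": True,
    },
    "NY": {  # synthetic performer advertising disclosure
        "synthetic_performer_ad_label": True,
        "base_penalty_usd": 1_000,
    },
    "TX": {  # TRAIGA: chiefly governmental AI use
        "gov_social_scoring_restricted": True,
        "biometric_consent_required": True,
    },
}

def duties_for(state: str) -> dict[str, object]:
    # Unknown jurisdictions default to an empty duty set in this sketch.
    return STATE_REQUIREMENTS.get(state, {})
```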

For Users

Ultimately, citizens gain new protections that promote clarity and accountability:

•          Personal interactions with AI come with explicit labeling and safety warnings.

•          Advertising transparency safeguards protect consumers from synthetic deception.

•          Government use of AI is constrained to safeguard civil liberties.

This new legal ethos emphasizes that AI’s societal impacts, from mental health to democratic discourse, merit public interest protections as robust as those governing other powerful technologies.

Conclusion: A Defining Moment in AI Governance

The AI laws now in effect across the U.S. are more than bureaucratic artifacts; they are a cultural and political statement about how society views this moment in technological history.

In a global context where some nations lag in AI governance or approach it heavy-handedly, the United States, through a decentralized but substantive regulatory thrust, is charting a distinctive path: one that balances innovation with human dignity, economic potential with ethical oversight.

If 2023 and 2024 were the years of generative AI enthusiasm, then 2025–2026 marks the era of AI accountability, when law and technology must learn to coexist.

The coming decade will test the resilience of these frameworks, their scalability, and their influence on global norms. But for now, America’s AI rulebook is no longer hypothetical; it is law.