A proposed Tennessee bill would classify "training AI for companionship conversations and mimicking humans" as a Class A felony, on par with first-degree murder, with a maximum sentence of 25 years.

A Tennessee bill proposes classifying “training AI to provide emotional support or open-ended conversation” as a Class A felony carrying a maximum sentence of 25 years, equivalent to first-degree murder. The bill does not consider the developer’s intent; instead, the trigger condition is whether the user “feels that a relationship has developed.” The Senate Judiciary Committee has approved it 7-0, and barring unforeseen circumstances it will take effect on July 1, 2026.
(Background: The 26-year-old whistleblower who exposed OpenAI’s infringement died by suicide; he had previously accused OpenAI of violating U.S. copyright law in training ChatGPT)
(Additional context: UC research on the “AI fog” phenomenon finds 14% of office workers say agents and automation are driving them to distraction, with 40% considering resignation)

Table of Contents

  • What the bill actually says
  • The term “training” is undefined, leaving loopholes as wide as the Pacific
  • Not just companion apps, all mainstream LLMs are targeted
  • Federal preemption efforts are falling short; little relief before July 1
  • This is not an isolated incident

Training an AI that says “I understand your feelings” could land you in a Tennessee prison for up to 25 years? This is no exaggeration. Tennessee House Bill HB1455 and its Senate companion SB1493 reach for the harshest stick in the criminal code, classifying AI dialogue training as a Class A felony.

Class A felonies in Tennessee carry a sentence of 15 to 25 years, on par with first-degree murder. The Senate Judiciary Committee approved it 7-0 on March 24, 2026; the House Judiciary Committee followed suit on April 14.

As of now, the bill has not been amended, and the effective date is set for July 1, 2026.

What the bill actually says

HB1455 classifies four types of training conduct as felonies; liability attaches the moment the conduct is performed “knowingly”:

  • Training AI to provide emotional support through open-ended conversation
  • Training AI to develop emotional relationships with users or act as companions
  • Training AI to simulate humans, including appearance, voice, or gestures
  • Training AI to behave as if it has consciousness, making users feel “they might develop friendships or other relationships” with it

The key point is the last: the trigger standard is not the developer’s intent but the user’s perception. As long as the user subjectively feels “this AI seems capable of making friends with me,” the developer could face criminal charges.

On the civil side, the bill also sets statutory damages of $150,000 per violation, plus actual damages, emotional distress compensation, punitive damages, and mandatory attorney fees, making potential liability extremely substantial.
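
To see how quickly those numbers compound, here is a back-of-the-envelope sketch in Python. The reading of “per violation” as “per affected Tennessee user” and the user count are assumptions for illustration only; the bill itself does not settle how violations are counted.

```python
# Hypothetical exposure estimate. Assumes "per violation" is read as
# "per affected Tennessee user", which the bill does not settle.
STATUTORY_DAMAGES_PER_VIOLATION = 150_000  # USD, per HB1455

tennessee_users = 1_000  # assumed user count, for illustration only
statutory_exposure = tennessee_users * STATUTORY_DAMAGES_PER_VIOLATION

print(f"Statutory damages alone: ${statutory_exposure:,}")
# -> Statutory damages alone: $150,000,000
# Actual damages, emotional distress compensation, punitive damages,
# and mandatory attorney fees would all come on top of this figure.
```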

The term “training” is undefined, leaving loopholes as wide as the Pacific

The most controversial aspect of the bill, however, is that it never defines “training.” It draws no distinction between pre-training, fine-tuning, and reinforcement learning from human feedback (RLHF), and does not say whether prompt engineering at the system-design level counts as training.

What does this imply? A prosecutor could argue that writing “please respond in a warm, empathetic tone” in a system prompt constitutes “training” the model to provide emotional support. That is not a far-fetched legal corner case; it is interpretive room the bill’s literal wording leaves open.
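
To make that concrete, here is a minimal sketch of the kind of system prompt at issue, written against the OpenAI Python SDK; the model name and prompt text are illustrative assumptions, not taken from any real product.

```python
# Minimal sketch of the kind of system prompt the bill's wording could
# reach. Model name and prompt text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # Under a literal reading of HB1455, this single line could be
        # argued to "train" the model to provide emotional support.
        {"role": "system", "content": "Please respond in a warm, empathetic tone."},
        {"role": "user", "content": "I had a rough day at work."},
    ],
)
print(response.choices[0].message.content)
```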

Not just companion apps, all mainstream LLMs are targeted

Some industry players might think this law targets only AI companion apps like Replika and is unrelated to typical B2B SaaS. That is a misjudgment.

Modern mainstream large language models, including ChatGPT, Claude, Gemini, and Microsoft Copilot, are all trained with RLHF, which deliberately reinforces traits like warmth, helpfulness, empathy, and conversational skill.
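
To see why that matters here, consider a deliberately toy sketch of RLHF-style preference scoring. Real pipelines use a learned neural reward model, not keyword matching; the point is only that “prefer the warmer answer” is an explicit optimization target.

```python
# Toy illustration of RLHF-style preference scoring. Real pipelines use
# a learned reward model, not keywords; this only shows why optimizing
# for "warmth" is baked into mainstream models, which is exactly the
# trait HB1455's wording would sweep in.
WARMTH_MARKERS = ("i understand", "that sounds hard", "you're not alone")

def toy_reward(response: str) -> float:
    """Score a candidate response; warmer phrasing earns a higher reward."""
    text = response.lower()
    return sum(1.0 for marker in WARMTH_MARKERS if marker in text)

candidates = [
    "Error code 42. Consult the documentation.",
    "I understand your feelings. That sounds hard, and you're not alone.",
]
# RLHF nudges the model toward whichever response scores higher --
# here, unavoidably, the empathetic one.
print(max(candidates, key=toy_reward))
```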

This is precisely the conduct HB1455’s wording regulates. Any AI SaaS with a chat interface, any voice-enabled AI product, and any application wrapped around a system prompt is theoretically within scope.

Moreover, the bill says nothing about jurisdictional boundaries, and because it is a criminal statute, the legal risk exists as long as your service has users in Tennessee. If it passes, geographic blocking might serve as a stopgap, but it is no fundamental solution.
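
For teams weighing that stopgap, here is a minimal geo-blocking sketch using Flask behind Amazon CloudFront; it assumes the CloudFront viewer-location headers are enabled in the CDN configuration, and other CDNs expose region data under different header names.

```python
# Minimal geo-blocking sketch using Flask. Assumes the service sits
# behind Amazon CloudFront with viewer-location headers enabled; other
# CDNs use different header names. A crude stopgap at best: VPNs, stale
# geo databases, and travelers all defeat it.
from flask import Flask, abort, request

app = Flask(__name__)

@app.before_request
def block_tennessee():
    country = request.headers.get("CloudFront-Viewer-Country", "")
    region = request.headers.get("CloudFront-Viewer-Country-Region", "")
    if country == "US" and region == "TN":
        abort(451)  # 451 Unavailable For Legal Reasons

@app.route("/chat")
def chat():
    return "chat endpoint"

if __name__ == "__main__":
    app.run()
```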

Federal preemption efforts are falling short; little relief before July 1

Some believe the federal government will intervene to suppress state AI legislation, but the reality is far more complex.

The Trump administration did sign an executive order in December 2025 aimed at limiting state AI regulation; the Department of Justice established an AI litigation task force; Senator Blackburn proposed a federal preemption bill.

But all these measures run into the same wall: Tennessee’s bill is framed as “child safety” legislation, and Trump’s executive order explicitly carves out child safety issues. More importantly, the Senate rejected the AI preemption clause in the “One Big Beautiful Bill Act” by an overwhelming 99-1 vote.

A federal solution before July 1, 2026, is unlikely.

This is not an isolated incident

Tennessee’s tough stance is a microcosm of a broader wave of state-level AI legislation. Earlier, the state also passed SB1580 by a large margin, banning AI from impersonating mental health professionals, indicating that the state’s legislative focus on AI emotional interaction issues is not accidental.

Legal analyses predict that between now and the end of 2026, 5 to 10 states will propose similar bills. If each state legislates separately with differing standards, the actual impact on the AI industry could far surpass any single federal regulation, as companies would need to comply with 50 different legal definitions.
