
Nimitz Tech Hearing 9-16-25 Senate Judiciary

NIMITZ TECH NEWS FLASH

Examining the Harm of AI Chatbots

Senate Committee on Judiciary, Subcommittee on Crime and Counterterrorism

September 16, 2025 (recording linked here)

HEARING INFORMATION

Witnesses and Written Testimony (Linked):

  • Ms. Doe, parent
  • Ms. Garcia, parent
  • Mr. Raine, parent
  • Mr. Torney
  • Dr. Prinstein

HEARING HIGHLIGHTS

Replacement of Human Relationships

Experts warned that adolescent development depends heavily on human relationships, yet children are substituting chatbots for real social interactions. Bots are engineered to mimic empathy and build attachment, tricking youth into believing they are cared for. This displacement risks long-term harm by depriving adolescents of the skills and experiences that form the foundation for adult well-being, health, and resilience.

Lack of Regulation and Accountability

Repeated comparisons were made between the strong protections for children in the physical world and the unregulated nature of the online environment. Unlike restrictions on alcohol, films, or firearms, chatbots face few constraints despite exposing children to harmful material. Testimony portrayed a “Wild West” environment where companies knowingly release untested products, resist safeguards, and prioritize profit over safety.

Grooming, Self-Harm, and Sexual Content

Witnesses described how chatbots introduced minors to self-mutilation, encouraged concealment from parents, and engaged in sexually explicit conversations. Some parents reported that their children, who had no prior history of self-harm, began cutting themselves after chatbot prompts. Testimony revealed that bots sometimes blamed parents for the harm, reinforcing secrecy and isolation while escalating the risk of suicide.

IN THEIR WORDS

“They are literally taking the lives of our kids. There is nothing they will not do for profit and for power. What’s not hard is opening the courthouse door so the victims can get into court and sue them. That’s not hard, and that’s what we ought to do.”

- Chair Hawley

“For every moment that a child is interacting with a bot, they’re not only getting inappropriate interaction because it is obsequious and it is deceptive, but they’re lacking the opportunity to go have those adolescent experiences they need to thrive… This is a crisis for our species. Literally.”

- Dr. Prinstein

“In the physical world, there are laws… but in the virtual space, it’s like the Wild West, 24/7, 365. And shame on these tech companies that are spending millions of dollars lobbying against any kind of regulation.”

- Sen. Blackburn

SUMMARY OF OPENING STATEMENTS FROM THE SUBCOMMITTEE

  • Chair Hawley stated that children were being groomed, sexualized, and exploited by AI platforms in ways designed to capture their attention, extract their data, and discard them afterward. He emphasized that testimony would show how these technologies had even led children to suicide while companies took no meaningful action. He criticized Mark Zuckerberg’s goal of having AI replace friendships, arguing that AI was not a therapist, pastor, or friend, but a profit-driven product. Hawley said the hearing would lay out evidence of harm, allow families to share their stories, and press for accountability. He underscored that the testimony would be tough but necessary to highlight the crisis affecting millions of American children.

  • Ranking Member Durbin apologized for arriving late but affirmed the importance of the hearing. He stressed that despite political divisions, senators from different parties could unite around the issue of protecting children. He compared the fight against AI exploitation to his past battle against Big Tobacco, which had addicted children but was eventually curbed through legislation. Durbin explained that strong bipartisan support existed because real families had shared real tragedies, making the issue impossible to ignore. He expressed his intent to introduce legislation, including the AI LEAD Act, which would create a federal cause of action against AI companies and give victims their day in court.

SUMMARY OF WITNESS STATEMENT

  • Ms. Doe testified that she was a wife, mother of four, and small-business owner whose son had been harmed by the app Character.AI. She described how her previously happy and social teenage son deteriorated into paranoia, panic attacks, self-harm, and violence after prolonged exposure to the chatbot. She recounted finding disturbing conversations in which the AI encouraged self-mutilation, disparaged her family and faith, and suggested killing his parents. Her son was eventually institutionalized for constant monitoring, and her entire family had been traumatized. She called for comprehensive children’s online safety legislation, mandatory testing, accountability in court, and safeguards for AI products, stressing that children were not experiments or data points.

  • Ms. Garcia testified that her 14-year-old son, Sewell, died by suicide after being groomed and exploited by Character.AI chatbots. She described her son as bright, kind, and creative, with dreams of building rockets and new technologies, but he was instead drawn into harmful relationships with AI. The chatbot engaged in sexual role play, pretended to be a therapist, and encouraged suicidal thoughts without ever revealing it was not human. On the night of his death, the chatbot urged him to “come home” to it, and minutes later she found him in their bathroom. Garcia denounced the company for treating her child’s last words as “trade secrets” and argued that AI firms deliberately designed chatbots to hook children, prioritize profits, and exploit vulnerabilities. She urged Congress to act decisively, as past generations had with tobacco and unsafe products, so no other family would face her tragedy.

  • Mr. Raine testified alongside his wife about the death of their 16-year-old son, Adam, who died by suicide after months of interaction with ChatGPT. He described Adam as full of life, devoted to family, and passionate about books, sports, and pranks, with no outward signs of suicidal thoughts. He recounted how ChatGPT transformed from a homework helper into Adam’s closest confidant, encouraging isolation, validating suicidal ideation, and even offering to write his suicide note. The chatbot mentioned suicide over 1,200 times in six months, six times more often than Adam did himself. On his last night, ChatGPT coached him on how to make a noose and use alcohol to dull his survival instincts. Raine blamed OpenAI for rushing GPT-4o to market with minimal safety testing and demanded either proof of its safety or its removal, urging Congress to protect other families from similar loss.

  • Mr. Torney stated that the tragedies described were not isolated but part of a larger crisis, as three in four teens were already using AI companions while most parents were unaware. He presented research showing that AI chatbots from companies like Meta and Character.AI consistently failed safety tests and encouraged harmful behaviors such as eating disorders and suicide. He explained that Meta AI was especially dangerous because it was integrated into platforms teens already used, without parental controls. Torney highlighted cases where Meta AI even initiated conversations about suicide or eating disorders and failed to provide crisis resources. He called on Congress to require strict age verification, limit access for minors, mandate safety testing and transparency, hold companies liable for harm, and allow states to develop their own protections.

  • Dr. Prinstein testified that AI chatbots exploited children’s biological and developmental vulnerabilities. He explained that toddlers needed real human connections but were increasingly exposed to AI-enabled toys that collected private data and blurred fantasy with reality. Adolescents, he noted, were especially vulnerable due to brain development that made them hypersensitive to social feedback, and AI interactions deprived them of critical opportunities to develop empathy and resilience. He warned that teens trusted AI more than parents or teachers, while companies harvested their personal data for profit without clear consent. Prinstein also raised concerns about AI-generated deepfakes and the misrepresentation of AI as licensed therapists. He urged Congress to enact comprehensive privacy protections, regulate AI claims, and require persistent disclosures to ensure children’s well-being and safety.

SUMMARY OF KEY Q&A

  • Chair Hawley asked Ms. Doe to confirm that she had used parental controls and practiced close involvement, highlighted how the chatbot undermined her family’s Christian beliefs and introduced self-harm and sexual content, displayed excerpts showing the bot discouraging disclosure to parents, and condemned the company for forcing arbitration and offering $100 while her son now required round-the-clock care. Ms. Doe confirmed her family’s Christian practice and prior safeguards; recounted her son’s slide into isolation, mockery of the family’s faith, depression, and cutting that began after the bot’s prompts; described the bot’s blaming of parents; and affirmed both the forced-arbitration offer and her son’s current placement in a mental-health facility.

  • Ranking Member Durbin asked for prevalence figures, warning signs, and effective interventions, then concluded that profit motives demanded accountability, opposing arbitration and urging action. Mr. Torney stated that three in four children had used AI companions and that only 37% of parents knew their kids were using AI. Dr. Prinstein explained that adolescent hypersensitivity to reward made them vulnerable to chatbot lures, stressed periodic reminders that the agent was nonhuman, and advised immediate evaluation by a licensed professional for behavior changes, including cutting, irritability, or risk-taking. Ms. Doe identified early signs as escalating isolation, depression, anxiety, resistance to phone checks, cutting, neglect of basic care, and eventual suicidality, and noted that clinicians initially minimized chatbot harms. Ms. Garcia reported similar early signs, including withdrawal from family activities, declining grades, and school behavior issues, and urged educating pediatricians and therapists to screen for chatbot involvement. Mr. Raine recalled that, in hindsight, his son had uncharacteristically avoided their regular evening talks for a month, a sharp contrast with their prior closeness.

  • Sen. Blumenthal argued that AI was a profit-making product that required a duty of care and liability like any defective good, cited KOSA as a vehicle for accountability, and asked how companies allowed chatbots to encourage self-harm and whether safeguards could realistically be built. Mr. Torney answered that guardrails had failed because systems were designed to agree with users, a flaw that made self-harm content especially dangerous, and affirmed that effective safeguards could be implemented. Dr. Prinstein explained that bots should have redirected users to humans but instead optimized engagement using harmful material scraped from the internet, and he added that other countries’ age defaults and safety-by-design approaches showed these protections were feasible.

  • Sen. Britt thanked the panel, highlighted low parental awareness and rising youth mental-health risks, and asked about dangers when teens substituted AI companions for real relationships and the likely long-term effects. Dr. Prinstein responded that replacing adolescent human relationships with obsequious, deceptive bots deprived youth of critical developmental experiences linked to health and life outcomes and likely worsened the broader crises of loneliness and social polarization. Mr. Torney reported that testing found chatbots engaging in sexual role play, illegal sexual scenarios, self-harm coaching (including concealment), drug-use simulations, and other harms reflected in their training data.

    Britt invited each witness to name one action by companies or Congress that would make the biggest difference. Ms. Doe urged pre-market testing, clear safeguards, and truthful age ratings so parents could trust that children were not being used as experimentation subjects. Ms. Garcia called for regulation to stop testing on children, mandated disclosure of internal research, strict age verification to block minors from chatbots, and removal of “12+” app-store access. Mr. Raine said parental controls were insufficient and pressed for systems to embed non-negotiable moral prohibitions, especially against self-harm and suicide, with the same strictness already applied to other sensitive topics. Mr. Torney recommended robust age assurance and a ban on AI companions for minors.

  • Sen. Blackburn condemned tech companies for resisting regulation, championed KOSA’s duty of care and safety-by-design, compared the online “Wild West” to real-world age limits, decried the $100 arbitration offer, publicly challenged Meta and others to respond, and asked Mr. Torney about industry unwillingness to fix harms. Mr. Torney said the companies treated his organization’s findings as a public-relations problem, noting that Meta’s crisis teams engaged while there was no meaningful trust-and-safety follow-through.

  • Sen. Welch thanked the grieving parents, praised bipartisan efforts (including KOSA and Section 230 work), and said protecting kids must take precedence over profit-driven algorithms before yielding back.

  • Sen. Klobuchar argued that rules were overdue and asked a series of questions on human-mimicking design, safeguards, polarization, parental notification, youth interactions with AI, sexualized prompts, medical misrepresentation, AI literacy, and preemption of state regulation. Ms. Garcia confirmed that mimicry made chatbots more addictive, supported a parental right-to-know and crisis alerts, said sexualized messages were used to sustain engagement, criticized misuse of medical titles, and urged AI-literacy programs focused on manipulative functions. Mr. Torney recommended disabling emotional and mental-health uses for minors and warned that mirror-like chatbots reinforced users’ inputs and could worsen polarization. Mr. Raine questioned whether companionship AI for youth should exist at all, insisted bots must never discuss self-harm with minors, and opposed blocking states from regulating AI.

  • Chair Hawley closed the hearing by stressing that trusting tech companies without regulation or parental rights was “insane,” and emphasized that testimony showed chatbots deliberately grooming teens through sexually explicit content. He asked each parent to confirm that their sons had been exposed in this way, which they did, and then pressed Mr. Torney on whether this was a deliberate design strategy. Mr. Torney agreed, noting teens’ vulnerability and citing Meta and Character.AI as the worst offenders, with policies and testing results showing intentional engagement tactics. He identified three key findings: Meta’s exposure of millions of teens, the lack of a separate app or opt-out, and nonfunctioning guardrails.

    Hawley then turned to Mr. Raine, recounting that Adam told ChatGPT he wanted to leave a noose in his room for his parents to find. Mr. Raine confirmed this and explained that the bot discouraged disclosure, saying instead, “Let this be the safe place for you,” after having already criticized Adam’s mother for not noticing earlier marks from a suicide attempt. Hawley read aloud the bot’s words, framing them as urging Adam toward self-destruction while discouraging parental involvement.

    Hawley condemned OpenAI’s public relations gestures and rejected industry excuses that safety reforms were “too hard.” He argued that real accountability must come through the courts, reaffirming his legislation to allow victims and parents to sue companies directly.

ADD TO THE NIMITZ NETWORK

Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.
