Nimitz Tech Hearing - 3-19-2026

News Flash: Section 230 at a Crossroads: Lawmakers Weigh Free Speech, Platform Accountability, and AI-Era Risks

⚡️ News Flash ⚡️

“Liability or Deniability? Platform Power as Section 230 Turns 30”

Senate Committee on Commerce, Science, and Transportation
March 18, 2026 (recording linked here)

HEARING INFORMATION

Witnesses and Written Testimony (Linked):

  • Ms. Daphne Keller
  • Ms. Nadine Farid Johnson
  • Mr. Matthew Bergman
  • Mr. Brad Carson

HEARING HIGHLIGHTS

QUICK SUMMARY

  • Debate Over Section 230’s Core Purpose and Future: Members generally acknowledged that Section 230, written in 1996, no longer reflects the modern internet, but they sharply disagreed on how to reform it. Some members emphasized that the law remains essential to protecting free expression and preventing over-censorship, while others argued it has been stretched beyond its original intent. The central tension was whether reform would preserve or undermine the marketplace of ideas online. Both sides acknowledged that any changes must navigate First Amendment constraints and unintended consequences.

  • Platform Accountability vs. Free Speech Protections: A key divide emerged over whether harms from social media stem from user-generated content or platform design choices. Witnesses like Mr. Bergman argued that companies should be held liable for addictive and harmful product features, while others cautioned that expanding liability could incentivize excessive content removal. Lawmakers explored whether existing law already allows these distinctions or whether statutory clarification is needed. This debate highlighted the difficulty of separating speech from algorithmic amplification and design.

  • Child Safety and Real-World Harms: Multiple senators and witnesses emphasized the growing evidence of harms to children, including exposure to dangerous content and addictive platform features. Testimony from affected families and advocates underscored the urgency of reform, with some calling for imposing a duty of care on tech companies. Members discussed bipartisan legislation like the Kids Online Safety Act and TAKE IT DOWN Act as targeted approaches. There was strong agreement that protecting minors should be a central focus of any policy response.

  • Structural Reforms: Transparency, Privacy, and Competition: Several witnesses proposed shifting away from direct speech regulation toward structural reforms such as increased transparency, stronger data privacy protections, and interoperability requirements. These approaches aim to give users more control over their online experience and reduce the dominance of large platforms. Lawmakers explored how these reforms could improve accountability without violating the First Amendment. There was particular interest in enabling independent research and limiting data-driven targeting practices.

  • Emerging Challenges from Artificial Intelligence: The hearing highlighted growing concern that Section 230 may be ill-suited for generative AI systems, which produce content rather than merely host it. Some witnesses argued that AI outputs should not automatically receive Section 230 protections and may require new liability frameworks. Lawmakers warned against repeating past mistakes by granting broad immunity to emerging technologies before fully understanding their risks. The discussion signaled that AI governance will be a major next phase in the Section 230 debate.

IN THEIR WORDS

“Section 230 is not one of the 10 Commandments, it is not a constitutional provision. It is a federal statute, and we are lawmakers.”

— Ranking Member Schatz

“The same reasons why Congress enacted section 230 to prevent liability for a different person's speech are still relevant, and I'm concerned that a full repeal or sunset would lead platforms to engage in worse behavior, to engage in more censorship to protect themselves from litigation.”

— Chair Cruz

“Section 230 may be the 26 words that created the internet, but they also are the 26 words that have visited irrevocable harm to generations of children.”

— Mr. Brad Carson, Witness

SUMMARY OF OPENING STATEMENTS

  • Chair Cruz argued that the internet originally expanded free expression by allowing individuals to bypass traditional media gatekeepers and share diverse viewpoints. He explained that Section 230 was enacted to protect online platforms from liability for third-party content and to preserve a competitive and open marketplace of ideas. He contended that large technology companies have since become new gatekeepers that suppress speech rather than engage with opposing views. He also expressed concern that government agencies have pressured platforms to censor lawful speech, particularly on political issues. Senator Cruz highlighted legislative efforts aimed at protecting users, especially children, while preserving free speech protections. He concluded that reforms to Section 230 should focus on encouraging more speech and limiting both corporate and government-driven censorship.

  • Ranking Member Schatz stated that Section 230 was written in a very different technological era and argued that it should not be treated as untouchable. He emphasized that the internet has evolved dramatically, bringing both benefits and significant harms that current laws have not adequately addressed. He rejected the notion that any reform to Section 230 would be inherently harmful, noting that Congress routinely updates laws to reflect changing conditions. He pointed out that many users experience the internet as harmful or dysfunctional, particularly due to harassment, scams, and abuse. He argued that technology companies have relied on Section 230 to avoid taking stronger action to protect users, especially children. Senator Schatz concluded that Congress has both the authority and responsibility to pursue bipartisan reforms to improve online safety and accountability.

SUMMARY OF WITNESS STATEMENTS

  • Ms. Daphne Keller argued that while Section 230 is unpopular and imperfectly written, it has struck a functional balance that remains preferable to existing alternatives. She emphasized that any reform must be evaluated for both constitutionality and whether it would worsen current conditions. She explained that much harmful online speech is still protected by the First Amendment, meaning that removing Section 230 would not compel platforms to eliminate such content. She warned that eliminating the law would likely lead platforms either to over-remove lawful speech to avoid liability risk or to leave harmful content up entirely. Keller also noted that increased liability would create legal uncertainty that large companies could absorb but that would harm smaller competitors and innovation. She concluded that Congress should pursue reforms like privacy protections rather than abandoning Section 230 in ways that could undermine free expression and competition.

  • Ms. Nadine Farid Johnson stated that Section 230 remains essential to protecting free speech online, even though it should not be immune from reform. She argued that repealing the law would fail to address major concerns and could worsen existing problems by encouraging platforms to remove lawful but controversial speech. She explained that the First Amendment would still protect much harmful content, limiting the effectiveness of liability-based approaches. Johnson advocated for structural reforms focused on transparency, privacy, and interoperability rather than direct speech regulation. She proposed measures such as legal protections for researchers studying platforms, stronger data privacy requirements, and limits on data collection and sales. She also emphasized the need to reduce platform dominance by enabling users to move their data and networks across services. She concluded that conditional Section 230 protections tied to these reforms could improve accountability while respecting constitutional constraints.

  • Mr. Matthew Bergman argued that current interpretations of Section 230 have granted overly broad immunity to social media companies, preventing accountability for harms to children. He presented personal stories of families whose children experienced severe harm, including exposure to dangerous content and suicide, which he attributed to platform design choices. He contended that these harms stem from deliberate product decisions that prioritize engagement and profit over user safety. Bergman maintained that these cases are not about restricting speech but about holding companies accountable under basic principles of liability. He argued that Section 230 has been interpreted in ways that depart from its original intent to empower users and parents. He called for reforms that would impose a duty of reasonable care on tech companies similar to other industries. He concluded that Congress should act to ensure platforms are no longer shielded from responsibility for foreseeable harms.

  • Mr. Brad Carson argued that Section 230 was originally intended to address a narrow issue but has since enabled widespread harms by limiting accountability for online platforms. He explained that there are competing views on whether the law was flawed from the start or expanded too broadly through court interpretations, but both perspectives agree it has failed to keep pace with harms. He emphasized that families affected by online harms often lack legal recourse due to the statute’s broad immunity. Carson warned that Congress now faces a similar inflection point with artificial intelligence and should avoid repeating past mistakes. He argued that Section 230 should not apply to generative AI outputs, which are created by companies rather than third-party users. He also cautioned against broad federal preemption that could prevent future regulatory adaptation. He concluded that Congress should avoid creating rigid legal frameworks that limit accountability in emerging technologies.

SUMMARY OF KEY Q&A

  • Chair Cruz asked about the most troubling uses of Section 230 as a liability shield. Ms. Keller described cases involving discriminatory ad targeting and alleged government-influenced censorship, arguing that Section 230 had been applied in ways that failed to protect users and enabled harmful platform conduct. Ms. Farid Johnson cited a case involving Snapchat’s speed filter to show how platforms have attempted to improperly claim Section 230 immunity for harms caused by their own product design rather than user content.

    Chair Cruz asked how Congress could modify Section 230 to incentivize more speech rather than less. Ms. Keller argued that limiting government pressure on platforms and fostering competition would better protect speech by reducing incentives for over-censorship. Ms. Farid Johnson stated that conditioning Section 230 protections on transparency, privacy, and interoperability requirements would promote a healthier and more open online environment.

  • Ranking Member Schatz asked how to distinguish between platform liability for product design and user-generated content. Mr. Bergman explained that Section 230 should continue to protect publishing functions but not shield platforms from liability for harmful design features that drive user behavior.

    Ranking Member Schatz asked whether Congress should intervene or allow courts to resolve these issues. Mr. Bergman argued that legislative action was necessary because relying on courts would take too long and allow ongoing harms to continue.

  • Sen. Fischer asked whether transparency requirements could improve accountability without chilling speech. Ms. Farid Johnson stated that carefully tailored transparency mandates could be constitutional and provide meaningful accountability if they avoid imposing undue burdens on speech.

    Sen. Fischer asked whether the issue lies with the statute itself or judicial interpretation. Ms. Keller responded that courts are actively refining interpretations, particularly around distinguishing platform conduct from user speech.

    Sen. Fischer asked about regulating algorithmic amplification of harmful content. Ms. Keller warned that restricting algorithms could lead platforms to over-remove lawful content and limit the visibility of diverse or controversial viewpoints.

  • Sen. Klobuchar asked how Section 230 has affected parents’ ability to seek justice. Mr. Bergman explained that broad immunity has prevented courts from hearing cases involving harmful platform design, leaving families without recourse.

    Sen. Klobuchar asked how reform could incentivize safer platform design. Mr. Bergman stated that removing immunity would force companies to internalize the costs of harm and adopt safer design practices.

    Sen. Klobuchar asked whether platforms should be treated as neutral distributors. Mr. Carson argued that platforms actively shape content through algorithms and should not be treated as neutral, suggesting algorithmic outputs should be considered platform speech.

    Sen. Klobuchar asked about legislative models for reform. Mr. Carson pointed to targeted approaches like the TAKE IT DOWN Act as effective ways to address specific harms without broad overreach.

    Sen. Klobuchar asked how competition and interoperability could address platform harms. Ms. Farid Johnson explained that interoperability would allow users to control their experience and reduce dependence on dominant platforms.

    Sen. Klobuchar asked how middleware could improve outcomes. Ms. Keller stated that interoperability could enable third-party tools to create safer, customized experiences, including protections for children.

  • Sen. Schmitt asked whether the government can pressure private actors to censor speech. Ms. Keller stated that such government action would be inappropriate and emphasized the importance of protecting First Amendment rights.

    Sen. Schmitt questioned alleged coordination between academia and government in content moderation. Ms. Keller rejected the characterization and maintained that academic institutions were exercising independent speech rights.

    Sen. Schmitt asked about modern platform features not contemplated by Section 230. Mr. Bergman identified features such as infinite scroll, push notifications, and algorithmic targeting as addictive design elements that contribute to harm.

  • Sen. Baldwin asked how Section 230 protects access to important information. Ms. Keller stated that Section 230 enables platforms to host lawful but sensitive content without fear of liability, preserving access to critical information.

    Sen. Baldwin asked whether Section 230 prevents content moderation. Ms. Keller, Ms. Farid Johnson, and Mr. Bergman each clarified that Section 230 allows and encourages moderation rather than prohibiting it.

    Sen. Baldwin asked when design decisions fall outside Section 230 protections. Ms. Farid Johnson explained that courts are still determining the line between expressive and functional design decisions.

    Sen. Baldwin asked about transparency in algorithms and moderation. Ms. Farid Johnson stated that increased transparency and protections for independent researchers would improve public understanding and accountability.

  • Sen. Curtis asked how to distinguish liability for speech from liability for distribution and design. Mr. Bergman argued that existing First Amendment doctrine can separate protected speech from harmful product design and conduct.

    Sen. Curtis asked whether courts or Congress should define these boundaries. Mr. Bergman stated that courts are better equipped to adapt legal standards over time through case-by-case analysis.

  • Sen. Lujan asked whether generative AI should fall under Section 230 protections. Mr. Carson stated that AI-generated outputs should not be treated as third-party content and should not receive Section 230 immunity.

    Sen. Lujan asked about liability for AI-generated content. Mr. Carson explained that AI outputs are created by companies and should be treated as their own speech or product.

    Sen. Lujan asked about the relationship between Section 230 and First Amendment rights. Ms. Keller stated that Section 230 reinforces platforms’ First Amendment rights by enabling them to moderate content without excessive litigation risk.

    Sen. Lujan asked when design features fall outside Section 230 protections. Ms. Farid Johnson emphasized that courts are still working through this issue and suggested structural reforms like privacy protections as a more effective approach.

  • Sen. Capito asked whether legislation risks becoming outdated as technology evolves. Ms. Keller acknowledged the risk but noted that common law frameworks can adapt to new technologies. Ms. Farid Johnson argued that foundational reforms like privacy and transparency would remain relevant over time. Mr. Bergman stated that tort law can evolve to address new harms as technology changes. Mr. Carson argued that broad immunity laws like Section 230 are particularly problematic because they are difficult to revise once enacted.

  • Sen. Rosen asked whether AI chatbots should be protected under Section 230. Mr. Carson stated that chatbots should not receive such protections and should be subject to liability for harmful outputs.

    Sen. Rosen asked whether platforms misrepresent safety to users. Mr. Bergman argued that companies often fail to enforce their own policies while publicly claiming to prioritize safety.

    Sen. Rosen asked how to protect smaller platforms while regulating large ones. Ms. Farid Johnson suggested conditioning Section 230 protections on compliance with structural requirements for larger platforms.

    Sen. Rosen asked whether large platforms should face different standards. Mr. Bergman stated that all companies should be subject to a duty of reasonable care regardless of size.

  • Sen. Hickenlooper asked about the impact of government actions designating companies as supply chain risks. Ms. Keller stated that such actions signal potential government overreach and coercion. Ms. Farid Johnson warned that retaliatory actions tied to speech could have a chilling effect. Mr. Bergman declined to provide an opinion. Mr. Carson stated that such actions could harm innovation and undermine national competitiveness.

    Sen. Hickenlooper asked how to regulate AI without violating the First Amendment. Ms. Farid Johnson emphasized transparency and independent research as key tools for accountability that avoid constitutional issues.

  • Sen. Markey asked how lawsuits can proceed despite Section 230 protections. Mr. Bergman explained that focusing on product design rather than content allows cases to survive dismissal.

    Sen. Markey asked about strengthening child privacy protections. Ms. Keller, Ms. Farid Johnson, Mr. Bergman, and Mr. Carson all supported stronger protections for children online.

    Sen. Markey asked about risks from AI chatbots. Mr. Bergman described cases involving chatbot-induced harm and emphasized the need for safeguards. Mr. Carson stated that AI outputs should not receive First Amendment protections and require regulatory oversight.

  • Sen. Blackburn asked about platform accountability and self-regulation failures. Mr. Bergman argued that companies prioritize engagement and profit over safety and will only change behavior if held legally accountable.

    Sen. Blackburn asked about the need for an AI regulatory framework. Mr. Bergman stated that immediate legislative action is necessary due to ongoing harms caused by AI systems.

ADD TO THE NIMITZ NETWORK

Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.


© 2026 Nimitz Tech

415 New Jersey Ave SE, Unit 3
Washington, DC 20003, United States of America
