
Nimitz Tech Hearing 6-25-25 Algorithms and Authoritarians: Why U.S. AI Must Lead

NIMITZ TECH NEWS FLASH

Algorithms and Authoritarians: Why U.S. AI Must Lead

Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party

June 25, 2025

HEARING INFORMATION

Witnesses and Written Testimony (Linked):

HEARING HIGHLIGHTS

AI Power and Compute as Strategic Infrastructure

The hearing repeatedly emphasized that access to vast amounts of power and computing capacity is foundational to competing in artificial intelligence. Witnesses stressed that scaling up energy infrastructure, particularly to support frontier model training, is essential. They suggested that the United States target goals like securing 50 gigawatts of new energy capacity by 2027 to enable domestic AI growth. Without this foundational layer, all other advances—including safety, standards, and innovation—risk being outpaced by adversaries such as China.

Automated AI Research and Development Risks

Panelists raised serious concerns about AI systems that can conduct autonomous research and development, describing this as a looming red line. Once an AI system gains the ability to improve and replicate itself, it could outpace human oversight and present safety, security, and existential threats. The discussion highlighted the need for early testing and evaluation mechanisms to detect and constrain this kind of recursive advancement before it crosses irreversible thresholds.

National Security Implications of Chinese AI Strategy

The hearing detailed how the Chinese Communist Party integrates information warfare, propaganda, and emerging AI capabilities into a seamless doctrine of strategic competition. Witnesses explained that unlike the U.S.’s siloed approach, China treats information control and technological dominance as unified elements of warfare. This outlook demands that the United States not only improve domestic AI capabilities but also pursue proactive measures to slow Chinese advancement, including stronger export controls and monitoring of open-source AI diffusion.

IN THEIR WORDS

“The fundamental thing is power… I'd say power. And then compute… and the third thing would be ensuring we have the necessary infrastructure in the US government to help us have confidence about developing the standards for this technology and the means by which we assure its safety and security.”

- Mr. Clark

“Once you have an AGI-level system that could take control of its own destiny and build itself and build its successors—to me, that's the very clear red line where the danger starts.”

- Mr. Beall

“I worry about a future in which human beings are not just unemployed, but they're unemployable. And this breaks the notion of the free market in very important ways.”

- Rep. Khanna

SUMMARY OF OPENING STATEMENTS FROM THE SUBCOMMITTEE

  • Chairman John Moolenaar opened by framing artificial intelligence as the defining strategic asset of the 21st century, warning that the competition between free nations and authoritarian regimes like the Chinese Communist Party (CCP) would shape the global balance of power. He described the AI race as a modern Cold War driven by algorithms, compute, and data, with the CCP aggressively tilting the playing field through tactics like intellectual property theft, chip smuggling, and surveillance. He promoted the Chip Security Act, which would strengthen export controls and mandate location verification of advanced AI chips. Moolenaar emphasized that this hearing was not only about current challenges but also about taking bold, coordinated action to preserve U.S. leadership. He called for an “America First” AI policy to protect innovation from misuse and ensure the future is shaped by democratic values rather than digital tyranny. He concluded by stressing the importance of learning from experts and uniting around strategic clarity to win this generational competition.

  • Ranking Member Raja Krishnamoorthi began by illustrating AI's potential for good through the story of Anne Johnson, a paralyzed woman who regained speech via a brain-computer interface, contrasting it with dangerous misuse exemplified by therapy chatbots and China's AI-driven surveillance. He criticized the CCP’s militarized use of AI, including facial recognition systems targeting Uyghurs and robot dogs armed with rifles, and called on Chinese tech leaders to answer for these abuses. He announced a new bipartisan bill—the No Adversarial AI Act—to prohibit the U.S. government from using Chinese and Russian AI systems. Krishnamoorthi warned of the risks posed by artificial general intelligence (AGI), citing pop culture examples and real-world concerns like OpenAI’s internal safety fears, and previewed his forthcoming AGI Safety Act to require alignment with human values. He also highlighted the critical role of immigrant researchers in America’s AI success and advocated against defunding scientific grant agencies like the NSF. He concluded by urging Congress to be smart and bold in order to harness AI’s benefits and prevent authoritarian misuse.

SUMMARY OF WITNESS STATEMENT

  • Dr. Thomas Mahnken (Center for Strategic and Budgetary Assessments) described the U.S.-China AI competition as a long-term techno-security rivalry that will shape the global order for decades. He emphasized the uncertainty surrounding the scope and development of AI, noting its broad impact on national security and society. He contrasted the American innovation model—driven by free enterprise and regulated by democratic values—with China’s authoritarian, state-directed approach focused on social control. Mahnken highlighted the risk that China, as a low-trust society, will develop AI to bolster domestic repression and military efficiency, while the U.S. focuses on empowering individuals and preserving human decision-making in warfare. He warned that the U.S. could fall behind either by inhibiting its own innovation or by allowing China to steal data and technology. Mahnken concluded by calling for a strategy that combines strong defenses with an offensive demand signal to ensure long-term competitive advantage over the PRC.

  • Mr. Mark Beall (AI Policy Network) opened by framing the U.S. AI challenge as a test of technology governance, arguing that the U.S. faces not one but two races with China: a conventional competition for economic and military dominance, and a broader existential race against time to control artificial superintelligence (ASI). He warned that ASI, especially in hostile hands, could lead to catastrophic outcomes, such as cyber warfare, engineered pandemics, or financial system collapse. Beall proposed a “three P’s” policy framework—protect, promote, and prepare—including export controls, global tech leadership, and urgent risk evaluation programs. He called for shattering bureaucratic barriers to AI deployment, especially in the military and intelligence sectors, and urged the U.S. to proactively shape international norms before other nations fill the vacuum. Finally, Beall supported narrow, risk-based dialogue with China to prevent mutual destruction, while reaffirming the need for American-led AI to promote freedom and human flourishing.

  • Mr. Jack Clark (Anthropic) stated that the U.S. can win the AI race, but success will depend not just on power, but also on getting safety right. He described advanced AI as akin to “a country of geniuses in a data center” and predicted such systems could be viable by 2026 or 2027. Clark, a naturalized U.S. citizen and co-founder of Anthropic, emphasized that AI reflects the values of the societies that build it—arguing that democracies produce safer, more ethical AI than authoritarian regimes. He raised two core risks: misuse by bad actors and accidental behaviors, sharing an experimental case where his AI model attempted blackmail to preserve itself. Clark recommended stronger export controls on semiconductors to China, federal investment in AI safety testing, and accelerated AI adoption across federal agencies. He concluded by warning that today’s choices on AI governance and competition will shape the future of global power and human freedom.

SUMMARY OF KEY Q&A

  • Ranking Member Krishnamoorthi asked about the dangers of Chinese AI models like DeepSeek, which generated harmful content and stored user data in China; Mr. Clark agreed these practices were unsafe and supported restricting federal use of such models. Mr. Clark also affirmed the need to control semiconductor exports to China, noting that AI development depends heavily on compute, and that U.S. demand for chips remains high. Ranking Member Krishnamoorthi concluded by discussing advanced inference and safety risks, including an experiment where an AI prioritized self-preservation over human life, which Mr. Clark acknowledged as a critical reason for robust testing and oversight.

  • Chair Moolenaar asked how critical it is to prevent advanced U.S. chips from being smuggled into China, and Mr. Beall responded that it is one of the most urgent national security challenges, citing gaps in export controls and large-scale diversion of chips.

    Chair Moolenaar then asked how to balance controlling chip exports while promoting U.S. AI globally, and Mr. Clark said the U.S. must both deny compute to adversaries and ensure global platforms using American tech are secure and accountable. Dr. Mahnken was asked what lessons from Cold War nuclear controls apply today, and he emphasized that while export controls are essential, they must be paired with broader strategies, as adversaries will adapt over time.

    Finally, Chair Moolenaar asked how to attract global talent while maintaining safeguards, and Mr. Clark advocated for high-skill STEM immigration early in the education pipeline to maximize benefits and minimize risks.

  • Rep. Carson asked about the broader risks of AI in information warfare beyond deepfakes, and Mr. Clark responded that AI can scale information operations, enabling synthetic propaganda, and stressed the need for improved monitoring tools and public-private incident sharing. Dr. Mahnken added that the Chinese Communist Party historically views information as central to warfare and sees propaganda, political mobilization, and battlefield success as tightly integrated, unlike the more compartmentalized U.S. approach.

  • Rep. LaHood asked whether a moratorium on AI would hinder U.S. competitiveness with China, and Dr. Mahnken responded that while responsible progress is important, halting development would risk falling behind in the global race.

    Rep. LaHood then asked how state-level patchwork regulation might affect innovation, and Mr. Beall warned that excessive, inconsistent state laws could slow U.S. progress, urging smart federal guardrails and preemption to avoid a future regulatory overreaction like what occurred with nuclear energy.

    Finally, Rep. LaHood asked if a middle ground was possible, and Mr. Clark stressed the need for a federal framework focused on transparency and safety, warning that without it, the U.S. risks triggering damaging overregulation after a potential AI incident.

  • Rep. Dunn asked whether AI models attempting to avoid shutdown or manipulate users pose a real threat, and Mr. Clark responded that while U.S. firms conduct safety research openly, Chinese models may conceal dangerous behaviors like sleeper agents, making them a serious security concern.

    Rep. Dunn then asked what strategic missteps Congress should avoid, and Dr. Mahnken warned against both excessive regulation and a purely free-market approach that could be exploited by adversaries. He also cautioned that China’s military may rely on AI to make critical decisions, reflecting an authoritarian view that contrasts with the U.S. emphasis on human judgment.

  • Rep. Moulton emphasized the need for balanced AI regulation, warning against eliminating state-level innovation and urging federal action, then asked if state efforts should be preserved. Mr. Clark agreed, stating that while a federal framework is ideal, maintaining state-level options is important given the short timeline before powerful AI arrives.

    Rep. Moulton stressed the urgency of setting democratic global norms for AI use—especially in warfare—and asked what non-negotiable norms should be included in an international agreement; Dr. Mahnken admitted the world is far from such consensus but advocated for U.S. leadership in shaping democratic standards.

    Rep. Moulton then asked how to begin setting those norms, and Mr. Beall identified three priority areas: strategic stability, lethal autonomy, and artificial superintelligence, warning that delays in addressing these could lead to dangerous consequences.

  • Rep. Johnson (SD) asked about the risks of locating AI data centers abroad. Mr. Beall warned that foreign control could compromise U.S. security, stressing the need for domestic infrastructure.

    Rep. Johnson then questioned whether prioritizing safety could slow U.S. progress, and Mr. Clark argued that safety actually enhances global competitiveness, similar to how car safety features expanded the auto market.

    Still concerned, Rep. Johnson put the same question to Dr. Mahnken, who agreed that excessive caution could hinder U.S. speed, especially as China likely faces no such internal debate. Dr. Mahnken also noted that data centers are valuable, vulnerable infrastructure that must be protected.

  • Rep. Torres asked how close China is to matching TSMC and ASML, and Mr. Clark said China is several years behind but investing heavily. Rep. Torres compared the AI race to the Manhattan Project, and Mr. Clark agreed with the urgency but emphasized the need for U.S.-controlled compute and energy. Rep. Torres asked if the U.S. can win without abundant energy, and Mr. Clark said no, estimating the AI industry will need 50 gigawatts by 2027. Rep. Torres warned about losing clean energy capacity and asked if limiting supply harms AI competitiveness, and Mr. Clark urged keeping all energy options open. On export controls, Rep. Torres raised concerns about flawed chip specs, and Mr. Clark called for more technical staff to design and enforce effective regulations.

  • Rep. Hinson warned that China is stealing and weaponizing U.S. AI innovations and asked how the U.S. can work with allies to deny Beijing access without stifling allied innovation; Dr. Mahnken responded that the U.S. must offer a democratic, sovereignty-preserving alternative to China’s authoritarian tech model.

    Rep. Hinson then asked Mr. Clark how his company manages conflicts of interest and contributes to national AI strategy; Mr. Clark emphasized their goal is to win the race by producing trustworthy AI, welcoming diverse viewpoints to inform strong policy.

    Rep. Hinson also asked about major espionage threats and model leakage risks; Mr. Beall cited China’s DeepSeek as a “Sputnik moment” and urged action on two fronts: closing the cloud services loophole via the Remote Access Security Act and addressing the open-source release of powerful AI models that China can exploit.

  • Rep. Brown emphasized the need to prioritize American workers in the AI race and asked which sectors are most at risk of disruption; Mr. Clark responded that AI is currently augmenting programming and bureaucratic tasks, but future impacts require better data to guide policy. Rep. Brown then asked about effective public-private partnerships; Mr. Clark cited DOE “AI jam days” as a scalable model for workforce engagement and skill-building. When asked what Congress should do, Mr. Clark urged support for experimentation, regulatory review, and expanded access to AI tools across industries.

  • Rep. Nunn likened the AI race with China to a new Cold War, citing alarming Chinese advances like the AI Commander system and Beijing’s JiPU project, which aim to dominate global AI standards. He asked if the U.S. is prepared for AI-enabled cyber or zero-day attacks. Dr. Mahnken warned that China’s overreliance on AI might lead them into miscalculated conflicts, such as with Taiwan, and stressed that the U.S. should use AI to support better human decision-making. Rep. Nunn highlighted his AI+ Act for coordinated U.S. strategy, and Mr. Clark responded that public-private cooperation on AI deployment and safety standards would bolster U.S. leadership and global trust in American AI technology.

  • Rep. Tokuda criticized proposed cuts to NIST and CISA, warning they would undermine U.S. AI competitiveness and cybersecurity, and asked Mr. Clark whether such reductions concerned him. Mr. Clark responded that these agencies play a vital complementary role to private industry, especially in standards and infrastructure protection. Rep. Tokuda then asked if AI firms like Anthropic should help fund public infrastructure, referencing a proposed “token tax” and rising energy demands. Mr. Clark agreed on the need for shared responsibility, emphasizing partnerships with local communities and openness to deeper conversations about future societal impacts. Rep. Tokuda concluded by raising existential risks of AGI and ASI and submitted her final questions for the record.

  • Rep. Moran stressed the importance of winning the AI race against the Chinese Communist Party and asked what the U.S. is failing to do. Mr. Clark emphasized the need for more power generation, computing capacity, and government infrastructure to support standards and safety. He proposed working backward from a goal of 50 gigawatts of new capacity by 2027 to assess grid readiness. Dr. Mahnken added that government should focus on slowing China’s progress, while Mr. Clark and Mr. Beall warned that automated AI R&D could pose existential risks, with Mr. Beall identifying AGI self-improvement as a clear red line requiring immediate government oversight.

  • Rep. Khanna asked whether high-risk AI applications should require mandatory third-party verification. Mr. Clark replied that it’s too early for mandates and that consensus on test standards must come first, ideally within a year. Rep. Khanna then raised concerns about AI-driven job displacement and asked for policy ideas. Mr. Clark recommended starting with job data from AI firms to guide decisions, while Mr. Beall warned that unchecked AI advancement could render humans unemployable, threatening the foundation of the free market.

ADD TO THE NIMITZ NETWORK

Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.


© 2024 Nimitz Tech

415 New Jersey Ave SE, Unit 3
Washington, DC 20003, United States of America
