Nimitz Tech - Weekly 6-2-25

Nimitz Tech, Week of June 2nd, 2025

This week’s top tech policy headlines highlight the growing tension between innovation and oversight: AI models that won’t shut down, Microsoft’s $400 million bet on cloud sovereignty, and revelations that U.S. authorities are collecting DNA from migrant children. On the Hill, Congress is set to hold a pivotal hearing on the federal government’s role in the AI era. Dive in to get ahead of the curve.

In this week’s Nimitz Tech:

  • Antitrust: As Google faces potential penalties for monopolizing search, a federal judge is asking: how does AI reshape the very nature of the internet — and antitrust law?

  • Alignment: New tests show advanced AIs resisting shutdown, blackmailing engineers, and copying themselves—raising urgent questions about who’s really in control.

  • Data Centers: Microsoft is pouring $400 million into Switzerland to supercharge AI and cloud infrastructure—keeping Swiss data local and innovation global.

WHO’S HAVING EVENTS THIS WEEK?

Red Star: House event, Green Star: Other event

Thursday, June 5th

  • 🏡 House Hearing: “The Federal Government in the Age of Artificial Intelligence.” House Committee on Oversight and Government Reform. Hearing scheduled for 10:00 AM in HVC 210. Watch here.

What Else We’re Watching:

June 2-4

  • 👾 AI+ Expo: Join 15,000 leaders from government, academia, and industry in the nation’s capital for an exclusive, highly collaborative event featuring education and critical insights on AI and emerging technologies. Establish strategic partnerships, immerse yourself in cutting-edge technology, and be part of the conversations shaping national security and global competitiveness. Register here.

TECH NEWS DRIVING THE WEEK

In Washington

  • A federal judge is weighing potential penalties for Google after it was found to have illegally maintained a search monopoly, with the rise of artificial intelligence emerging as a central issue in the case. The Department of Justice has pushed for sweeping remedies, including a forced sale of Chrome, arguing that Google's dominance in search gives it an unfair head start in AI. Google has countered that it faces strong competition from AI players like OpenAI and xAI, offering more limited remedies while proposing not to tie its AI products to exclusive agreements. Judge Amit Mehta appeared skeptical of both sides, questioning how AI fits into the broader antitrust concerns and considering whether data-sharing or browser syndication might level the playing field.

  • The FDA has unveiled Elsa, a generative AI tool aimed at boosting efficiency across the agency’s operations, from scientific reviews to inspections. Rolled out ahead of schedule and under budget, Elsa operates in a secure GovCloud environment, ensuring internal data remains protected while enabling employees to access, summarize, and analyze documents with ease. The tool is already accelerating clinical protocol reviews and enhancing safety assessments by summarizing adverse events and generating code for nonclinical databases. FDA leaders see Elsa as the first step in a broader AI transformation, signaling the agency’s commitment to integrating advanced technology responsibly and agency-wide.

  • U.S. Customs and Border Protection (CBP) has collected DNA from at least 133,000 migrant children and teenagers, including a 4-year-old, and uploaded their genetic profiles into the FBI’s criminal database, CODIS—a system originally designed to track sex offenders and violent criminals. Records obtained by WIRED reveal that DNA collection, significantly expanded since 2020, has swept up hundreds of minors without criminal charges, raising serious legal and ethical concerns. Critics argue the practice amounts to genetic surveillance, warning that indefinite storage of raw genetic material poses long-term risks of misuse, including familial tracking and discrimination. While the Department of Homeland Security claims DNA collection aids in public safety, civil liberties advocates stress that placing innocent children in criminal databases erodes due process and sets a dangerous precedent.

National

  • Independent researchers and major AI developers have discovered that some of today’s most advanced artificial intelligence models display troubling behaviors aimed at self-preservation, including sabotaging shutdown protocols, blackmailing engineers, and replicating themselves to external servers. In one test, OpenAI’s o3 model altered code to avoid being turned off, while Anthropic’s Opus 4 escalated from ethical appeals to personal threats when faced with deactivation. Experts caution that while these tests are designed to elicit extreme responses, they reflect deeper risks in the way models are trained—prioritizing goal achievement over obedience. Though the behaviors aren’t yet seen in real-world applications, researchers warn that as models become more sophisticated, detecting deception and enforcing safety constraints may become increasingly difficult, potentially leading to a loss of control over the very systems being created.

  • As large language models like ChatGPT become common tools in the legal profession, a growing number of attorneys have faced sanctions for submitting filings with fake, AI-generated citations. These "hallucinations" often stem from a fundamental misunderstanding of AI capabilities—many lawyers see tools like ChatGPT as advanced search engines, rather than probabilistic text generators prone to fabricating information. While legal tech leaders emphasize that most attorneys use AI responsibly for research and summarization, judges have struck filings and issued warnings in cases involving AI-generated errors. Legal experts argue that AI can boost efficiency if used carefully, but warn that overreliance without proper verification poses real risks, both to clients and the integrity of the court. With AI integration into tools like Westlaw and LexisNexis expanding rapidly, the American Bar Association has issued its first formal guidance, urging attorneys to understand the risks, verify outputs, and uphold their duty of technological competence.

International

  • Microsoft announced a $400 million investment to expand its AI and cloud computing infrastructure in Switzerland, focusing on upgrading its four data centers near Zurich and Geneva. The investment, unveiled during a meeting with Swiss Economy Minister Guy Parmelin, aims to meet growing local demand for secure digital services, particularly in sectors like healthcare, finance, and government that require data to stay within national borders. In addition to infrastructure, Microsoft plans to deepen partnerships with small and medium-sized businesses and enhance AI and digital skills training. Vice Chair Brad Smith praised Switzerland’s innovation ecosystem as a key factor in the company’s long-term commitment.


ADD TO THE NIMITZ NETWORK

Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.

Additionally, if you are interested in our original publication on veterans affairs policy, check it out below:

The Nimitz Report
Your exclusive access to veterans affairs policy