Nimitz Tech - Weekly 3-3-25

Nimitz Tech, Week of March 3rd 2025

From the latest congressional hearings on AI-driven immigration enforcement and deterring Beijing’s cyber threats to groundbreaking developments in quantum computing and child online safety, this week’s tech policy landscape is more dynamic than ever. As the U.S. and China agree to keep AI out of nuclear decision-making, federal scientists are testing next-gen AI models, and Apple introduces new privacy protections for kids, policymakers face critical questions about the role of technology in governance, security, and society. Dive into this week’s must-read stories and stay ahead of the policy curve shaping Washington’s digital future.

In this week’s Nimitz Tech:

  • DOGE: A late-night purge at the General Services Administration has gutted a key tech unit, raising alarms about the future of critical government services like Login.gov.

  • Quantum: Amazon Web Services unveils Ocelot, a breakthrough quantum computing chip that slashes error correction costs by 90%, accelerating the path to practical quantum computing.

  • War: AI has no place in deciding nuclear war—history proves that human judgment, not machine logic, has saved the world from catastrophe.

WHO’S HAVING EVENTS THIS WEEK?

Red Star: House Event, Blue Star: Senate Event, Purple Star: Other Event

Tuesday, March 4th

  • 🪨 House Hearing: “Leveraging Technology to Strengthen Immigration Enforcement.” House Committee on Oversight, Subcommittee on Cybersecurity, Information Technology, and Government Innovation. Hearing scheduled for 10:00 AM in 2247 Rayburn HOB. Watch here.

Wednesday, March 5th

  • 🚀 House Hearing: “End the Typhoons: How to Deter Beijing’s Cyber Actions and Enhance America’s Lackluster Cyber Defenses.” House Select Committee on the Chinese Communist Party. Hearing scheduled for 9:15 AM in 390 Cannon HOB. Watch here.

WHAT ELSE WE’RE WATCHING 👀

March 5th

🤖 The AI Accelerator Institute, Generative AI Summit: Discover what's next in GenAI and network with local AI professionals. Register here.

TECH NEWS DRIVING THE WEEK

Source: DALL-E

In Washington

  • In a dramatic late-night move, the General Services Administration (GSA) slashed a significant portion of its 18F technology unit, cutting approximately 70 employees across various roles, on top of previous layoffs that hit the team in February. The cuts, driven by the Trump administration’s broader push to shrink the federal workforce and reduce spending, could have widespread consequences for agencies reliant on 18F-built services, such as Login.gov, which supports programs like Medicare and Social Security. GSA officials attributed the decision to top-level directives from both the administration and the agency’s leadership, signaling a broader shift toward a Silicon Valley-inspired cost-cutting approach. Federal employees are now grappling with a culture of heightened scrutiny and job insecurity, as the administration enforces new productivity mandates and threatens further reductions.

  • Engineers within Elon Musk’s Department of Government Efficiency (DOGE) are modifying AutoRIF, a Defense Department-developed software tool designed to automate reductions in force, potentially accelerating mass firings of federal employees. Internal documents suggest DOGE operatives have accessed and altered AutoRIF’s code through an Office of Personnel Management (OPM) GitHub repository, raising fears of AI-driven terminations at an unprecedented scale. While probationary employees have been the primary targets so far, recent developments indicate that an unspecified large language model may soon be used to assess worker necessity across agencies. This comes as government workers received new demands to justify their productivity, with agencies like the FBI advising employees to ignore the directives, signaling internal resistance to Musk and Trump’s aggressive downsizing agenda.

  • Nearly 1,000 scientists from nine U.S. National Laboratories participated in the “1,000 Scientist AI Jam Session” on Friday, an unprecedented event organized by OpenAI in collaboration with the Department of Energy (DOE). The session tested cutting-edge AI models, including OpenAI’s o3-mini and Anthropic’s newly released Claude 3.7 Sonnet, to explore applications in experiment planning, data analysis, and scientific problem-solving. Energy Secretary Chris Wright likened the AI-driven initiative to the Manhattan Project, emphasizing the national security stakes in AI development. The event follows the launch of ChatGPT Gov, OpenAI’s tailored AI model for federal agencies, highlighting the deepening partnership between AI firms and the U.S. government.

National

  • AWS has introduced Ocelot, a prototype quantum computing chip designed to significantly reduce the cost and complexity of quantum error correction, a major hurdle in developing fault-tolerant quantum computers. Developed at the AWS Center for Quantum Computing, Ocelot integrates “cat qubits”—which inherently suppress certain errors—into a scalable microchip architecture, potentially cutting error correction resource requirements by up to 90%. This innovation could fast-track the development of practical quantum computers by as much as five years, according to Oskar Painter, AWS director of Quantum Hardware. With potential applications spanning from drug discovery to financial modeling, Ocelot represents a major step toward turning quantum computing from theory into reality.

  • Apple has introduced “age assurance” technology, allowing parents to select an age range for their children instead of providing exact birthdates when setting up child accounts, aiming to enhance online safety while preserving privacy. The new system enables third-party app developers to access this age range via a "Declared Age Range API," but only if parents consent, with the option to disable sharing at any time. The announcement comes amid growing legislative efforts to require tech companies to verify users' ages, with Apple maintaining that such responsibility should fall on app developers rather than app stores. Additional child safety updates include an improved parental setup process, default safety settings for unfinished account setups, and the ability to correct a child’s age if entered incorrectly.

International

  • In a landmark decision, U.S. President Joe Biden and Chinese President Xi Jinping agreed in late 2024 that artificial intelligence should never be entrusted with launching nuclear weapons. The agreement follows years of discussions and historical analysis of Cold War-era crises, such as the Cuban Missile Crisis, the 1983 Soviet false-alarm incident, and the Able Archer exercise—each of which could have escalated to nuclear war if left to AI trained on prevailing doctrines. In each case, human intuition and restraint prevented disaster, while AI might have misinterpreted the threat and triggered catastrophic retaliation. While AI may serve as a tool to support human decision-making, the risks of automating nuclear launch authority are too great, reinforcing the wisdom of Biden and Xi’s commitment to keeping the final decision in human hands.

FOR FUN

ADD TO THE NIMITZ NETWORK

Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.

Additionally, if you are interested in our original publication on veterans affairs policy, check it out below:

The Nimitz Report: Your exclusive access to veterans affairs policy