Nimitz Tech - Weekly 11-18-24

Nimitz Tech, Week of November 18th, 2024

Welcome to this week’s Nimitz Tech. This week in Washington, the intersection of technology, policy, and national security takes center stage. From Congress tackling AI-enabled scams and China’s cybersecurity threat to closed-door discussions on emerging military tech, the stakes are high for consumer protection, critical infrastructure, and global digital competition. Dive into exclusive updates on Elon Musk’s antitrust battle with OpenAI, ICE’s surveillance expansion under the new administration, and the EU’s latest draft for AI regulation. Plus, don’t miss insights from upcoming hearings addressing the evolving risks of AI, cybersecurity, and fraud.

In this week’s Nimitz Tech:

  • Military AI: Meta's Llama fuels the global AI arms race—who should set the rules?

  • Antitrust: Elon Musk accuses OpenAI and Microsoft of antitrust violations—will this reshape the AI industry?

  • Surveillance: Trump’s presidency signals a massive expansion of ICE’s surveillance and deportation plans—private companies stand ready to profit.

WHO’S HAVING EVENTS THIS WEEK?

Red Star: House Event, Blue Star: Senate Event, Purple Star: Other Event

Tuesday, November 19th

  • 🚅 HOUSE HEARING: “Impacts of Emergency Authority Cybersecurity Regulations on the Transportation Sector,” House Committee on Homeland Security, Subcommittee on Transportation and Maritime Security. Hearing scheduled for 10:00 AM in 310 Cannon HOB. Watch here.

  • 🇨🇳 SENATE HEARING: “Big Hacks & Big Tech: China’s Cybersecurity Threat,” Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law. Hearing scheduled for 2:00 PM in 226 Dirksen SOB. Watch here.

  • ❌ SENATE HEARING: “Protecting Consumers from Artificial Intelligence Enabled Fraud and Scams,” Senate Committee on Commerce, Science, and Transportation, Subcommittee on Consumer Protection, Product Safety, and Data Security. Hearing scheduled for 2:30 PM in 253 Russell SOB. Watch here.

  • 👽️ SENATE HEARING: “The Activities of the All-domain Anomaly Resolution Office,” Senate Committee on Armed Services, Subcommittee on Emerging Threats and Capabilities. Hearing scheduled for 4:30 PM in G50 Dirksen SOB. Watch here.

WHAT ELSE WE’RE WATCHING 👀

November 20th

  • 🤖 Tech + Startups Social: An electric evening bringing together startup founders, tech professionals, and investors to connect and meet others in the DC tech scene. Register here.

TECH NEWS DRIVING THE WEEK

In Washington

  • The Biden administration’s approach to AI governance began with the 2022 Blueprint for an AI Bill of Rights and culminated in the landmark Executive Order on Artificial Intelligence (EO) in 2023, aimed at fostering safe, secure, and ethical AI development. This comprehensive EO directed federal agencies to address AI risks, civil rights protections, and transparency, while promoting public trust. Key initiatives include pre-deployment testing led by the U.S. AI Safety Institute, the release of the NIST AI Risk Management Framework, and new guidance from the Office of Management and Budget (OMB) on rights- and safety-impacting AI. One year on, the White House marked the EO’s anniversary with a scorecard highlighting progress, including inter-agency collaborations and industry agreements. However, critics note gaps in reining in Big Tech, safeguarding civil rights, and addressing ethical concerns around AI use in national security, such as lethal autonomous weapons.

  • The Software Alliance (BSA), representing major tech firms like OpenAI, Microsoft, and Adobe, has urged President-elect Trump and Vice President-elect Vance to maintain U.S. leadership in AI by fostering innovation through clear and enforceable policies. In a letter, BSA recommended updating existing AI risk management guidelines while retaining some of the Biden administration’s policies, such as the AI Safety Institute. The group also emphasized the need for laws that ensure responsible AI deployment, particularly in critical areas like credit, housing, and employment, while encouraging international collaboration on AI standards.

  • Just hours after Donald Trump’s election victory, Immigration and Customs Enforcement (ICE) issued a call for proposals to expand its surveillance infrastructure for non-citizens awaiting immigration court hearings or deportation. The plan could grow ICE’s monitored population from under 200,000 to over 5 million, leveraging tools like ankle monitors, GPS trackers, and biometric check-ins. This expansion aligns with Trump’s campaign promise of mass deportations and could significantly rely on private contractors like GEO Group, which already operates ICE’s Intensive Supervision Appearance Program (ISAP). GEO Group has indicated its readiness to scale operations dramatically, seeing Trump’s victory as an opportunity to expand its role.

National

  • Elon Musk has expanded his legal battle against OpenAI by filing new antitrust claims, accusing the AI company and Microsoft of collusion to stifle competition in the AI industry. Musk alleges that OpenAI, originally founded as a nonprofit for safe and transparent AI development, has become a for-profit entity heavily influenced by Microsoft, which has invested billions into the company. He claims this partnership gives OpenAI unfair advantages, such as cheaper computing power and restricted licensing access, making it harder for competitors like Musk's own AI venture, xAI, to secure funding and talent.

  • Meta’s open Llama AI model has sparked controversy over its use in defense applications, highlighting the complexities of open-source AI governance. A Reuters investigation revealed Chinese military researchers adapted Llama for intelligence purposes, violating Meta’s policy prohibiting military use. Days later, however, Meta announced it would waive this restriction for U.S. defense projects, joining other AI giants like Anthropic and OpenAI in supporting national security initiatives. Critics argue that Meta’s decision undermines global security, with David Evan Harris likening it to handing advanced military technology to adversaries. Conversely, experts like Ben Brooks view open AI models as transparent and adaptable tools that can be effectively regulated.

International

  • The European Union has released its first draft of a Code of Practice for general-purpose AI (GPAI) models, aiming to clarify compliance requirements under the EU’s AI Act, which took effect in August. This draft focuses on transparency, copyright compliance, risk assessment, and governance for advanced AI models, such as those developed by OpenAI, Google, and Meta. Key provisions include disclosure of web crawlers used for training to address copyright concerns, implementing a Safety and Security Framework for managing risks, and ensuring accountability through internal and external audits. The draft sets steep penalties for non-compliance, with fines reaching up to €35 million or 7% of global annual turnover, whichever is higher.

QUOTE OF THE WEEK

“By choosing not to secure their cutting-edge technology, Meta is single-handedly fueling a global AI arms race.”

David Evan Harris (link)

ADD TO THE NIMITZ NETWORK

Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.

Additionally, if you are interested in our original publication on veterans affairs policy, check it out below:

The Nimitz Report: Your exclusive access to veterans affairs policy