Nimitz Tech - Weekly 03-16-2026
AI Policy, Platform Accountability, and National Security Debates

Artificial intelligence policy continues to evolve across multiple fronts in Washington and beyond, with new developments highlighting the growing intersection of technology, national security, consumer protection, and online safety. This week’s stories include legal disputes over the military use of AI systems, congressional scrutiny of federal data access practices, and emerging questions about platform accountability and generative AI tools. At the same time, lawmakers are preparing for several hearings that will examine foreign technology risks, grid reliability, and the future of platform liability. Below is a roundup of the key policy developments and events shaping the technology landscape this week.
In this week’s Nimitz Tech:
DOGE / Social Security Data: A whistleblower complaint has prompted a federal investigation into whether a former DOGE engineer improperly retained sensitive Social Security data after leaving government service.
Anthropic vs. Pentagon: Anthropic has sued the Trump administration after the Pentagon labeled the company a national security “supply-chain risk” and barred contractors from using its AI systems.
Adobe Settlement: Adobe agreed to pay $75 million and provide additional services to resolve federal claims that it made software subscriptions difficult for customers to cancel.
WHO’S HAVING EVENTS THIS WEEK?

Tuesday, March 17th
House Homeland Security: “DeepSeek and Unitree Robotics: Examining the National Security Risks of PRC Artificial Intelligence, Robotics, and Autonomous Technologies and Building a Secure U.S. Technology Base” at 10:00am. Watch here.
House Energy and Commerce: “Winter Storm Fern Lessons: Supplying Reliable Power to Meet Peak Demand” at 10:00am. Watch here.
Senate Committee on the Judiciary: “Stealth Stealing: China’s Ongoing Theft of U.S. Innovation” at 10:15am. Watch here.
Wednesday, March 18th
Senate Commerce, Science, and Transportation: “Liability or Deniability? Platform Power as Section 230 Turns 30” at 10:00am. Watch here.
TECH NEWS DRIVING THE WEEK

In Washington
A whistleblower complaint has prompted the Social Security Administration’s inspector general to investigate allegations that a former U.S. DOGE Service software engineer improperly retained highly sensitive Social Security data after leaving government service and planned to use it at a private company. According to the complaint, the engineer claimed to have copied restricted databases—including the Numident and Master Death File, which contain personal information on more than 500 million Americans—onto a thumb drive and sought help transferring and “sanitizing” the data for potential use by his employer. The accusations, which remain unproven and are denied by the engineer and the company, have raised alarms among lawmakers and oversight officials about whether DOGE staff had overly broad access to federal databases during the Trump administration’s cost-cutting initiative led by Elon Musk. The inspector general has notified Congress and shared the disclosure with the Government Accountability Office, which is conducting a broader audit of DOGE’s access to sensitive government data. The allegations also build on earlier complaints that DOGE personnel may have mishandled Social Security data by uploading information to unauthorized cloud services or sharing it improperly.
Anthropic has filed lawsuits in federal courts challenging the Trump administration’s decision to label the company a national security “supply-chain risk,” a designation that bars federal agencies and military contractors from doing business with the AI firm. The Pentagon imposed the restriction after disputes with Anthropic over how its Claude AI system could be used in warfare, with the company seeking limits on applications such as autonomous weapons and domestic surveillance. Anthropic argues the government retaliated against the company for publicly advocating safeguards around military AI and claims the designation unlawfully stretches a statute typically used against foreign adversaries. The case has drawn widespread attention across the tech industry, with engineers and researchers from rival companies like Google and OpenAI submitting a legal brief supporting Anthropic and warning that the move could chill debate about AI safety and undermine U.S. technological competitiveness. Despite the dispute, the military continues to rely on Claude within the Pentagon’s Maven intelligence system during ongoing operations in Iran, while the administration plans a six-month phaseout and explores alternatives such as OpenAI.
National
Three Tennessee teenagers, including two minors, have filed a lawsuit against Elon Musk’s AI company xAI, alleging that its Grok chatbot tools were used to generate explicit images of them by digitally altering photos in which they were clothed. According to the complaint, a perpetrator used Grok’s image-editing capabilities to remove clothing from photos sourced from social media and other online sources, creating nude depictions that later circulated on platforms such as Discord and Telegram. The lawsuit argues that xAI’s design and promotion of Grok’s image-editing features made the creation of such images foreseeable, and it seeks damages as well as restrictions on similar capabilities. xAI and Musk have previously denied that the system knowingly generates illegal content and stated that safeguards exist to prevent such outputs. The case marks one of the first lawsuits brought by alleged minor victims tied to AI-generated image manipulation tools.
Adobe has agreed to settle a lawsuit brought by the U.S. Justice Department over allegations that the company made it difficult for customers to cancel subscriptions to products such as Photoshop. Under the proposed settlement, Adobe will pay $75 million to the government and provide an additional $75 million worth of free services to customers. The lawsuit, originally filed in 2024 after a referral from the Federal Trade Commission, alleged that Adobe obscured cancellation fees and created barriers in its website and customer service processes that made it challenging for users to end subscriptions. Adobe denied wrongdoing but said it has taken steps in recent years to make its sign-up and cancellation processes more transparent. The settlement still requires approval from a federal judge.
International
Investors involved in a deal to establish a U.S.-controlled version of TikTok are expected to pay a $10 billion fee to the U.S. Treasury as part of an agreement aimed at addressing national security concerns related to the platform’s Chinese ownership. Approximately $2.5 billion was paid when the transaction closed in January, with the remaining payments expected to follow. The investment group includes Oracle, the Emirati firm MGX, and Silver Lake, each holding about a 15 percent stake after ByteDance separated TikTok’s U.S. operations into a new entity and reduced its ownership to below 20 percent. The Trump administration played a direct role in facilitating the arrangement, with Vice President JD Vance overseeing negotiations, and has characterized the payment as a transaction fee tied to the government’s involvement in resolving the ownership dispute. Some policy experts note that the size and structure of the fee are unusual compared with typical government involvement in corporate transactions.
The family of a 12-year-old girl critically injured in a February school shooting in Tumbler Ridge, Canada, has filed a lawsuit against OpenAI, alleging the company failed to notify authorities despite internal warnings that the suspect’s interactions with ChatGPT suggested a risk of violence. According to the complaint, the 18-year-old suspect had discussed scenarios involving gun violence with the chatbot in 2025, prompting some OpenAI employees to flag the conversations and recommend contacting law enforcement. The company instead banned the user’s account but did not alert Canadian authorities, stating the activity did not meet its threshold for reporting a credible or imminent threat. The lawsuit claims the suspect later created another account and continued discussing violent scenarios. OpenAI has expressed condolences to victims and said it is working with officials to strengthen protocols for identifying and reporting potentially dangerous interactions, including new detection systems and closer coordination with law enforcement.
ADD TO THE NIMITZ NETWORK
Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.
Additionally, if you are interested in our original publication on veterans affairs policy, check it out below.
