
Nimitz Tech Hearing 6-3-25 - House Oversight

NIMITZ TECH NEWS FLASH

The Federal Government in the Age of Artificial Intelligence

House Committee on Oversight and Government Reform

June 5, 2025

HEARING INFORMATION

Witnesses and Written Testimony:

HEARING HIGHLIGHTS

Federal Data Exploitation by Private AI Systems

The hearing revealed that vast quantities of sensitive government data—ranging from veterans’ health records to Social Security information—have been transferred to private AI systems such as Elon Musk’s Grok and Palantir’s models under the DOGE initiative. Witnesses raised concerns that this data handoff occurred without public knowledge, transparency, or effective safeguards, creating profound risks of misuse, data poisoning, and long-term loss of control. The absence of clear information about what data was shared, where it resides, or how it is being used poses an existential threat to public trust in federal institutions.

AI’s Disruption of the Labor Market

A recurring theme was the dramatic and imminent impact of AI on American jobs. Experts and lawmakers cited estimates that AI could eliminate up to 50% of white-collar entry-level positions within five years, with early indicators including mass layoffs at major firms like IBM, Dell, and Intel. Testimony acknowledged that while some sectors might benefit from efficiency gains, the pace of displacement is outstripping the availability of retraining infrastructure, leading to fears of rising unemployment, especially among younger and less economically secure workers.

Politicized Surveillance and the Threat of Coercion

Several participants described how AI-powered data systems could be exploited by government officials to target individuals for ideological or political reasons. A hypothetical case illustrated how a routine social media post could trigger AI-flagged investigations, audits, or loss of public benefits. Witnesses emphasized that the potential for selective enforcement based on detailed personal data profiles represents a dangerous erosion of civil liberties, especially when political appointees direct data surveillance with little transparency.

IN THEIR WORDS

"The fact that we don’t know what the data is, what [DOGE] is doing to it, how it’s being used, how it’s being protected, is a grave danger… Now the data is gone. You have to figure out where it is, who has it, and then take it back."

- Mr. Schneier

"We're talking about tens of millions of jobs, and I think that’s very scary… entry-level jobs that are so important in your 20s are going to be eviscerated. And it’s already happening."

- Rep. Crane

"We cannot allow AI to be the latest chapter in America’s history of exploiting marginalized groups, namely the Black community... The government's use of AI cannot be unchecked and unregulated."

- Rep. Pressley

SUMMARY OF OPENING STATEMENTS FROM THE COMMITTEE

  • Chair Mace opened the hearing by emphasizing that artificial intelligence is no longer a futuristic concept but a present reality reshaping key areas of government and society. She highlighted the potential of AI to improve efficiency, enhance public services, and save taxpayer money, noting applications in defense, fraud prevention, and administrative automation. However, she also acknowledged significant challenges, such as outdated IT systems, burdensome procurement processes, and fragmented data management. Mace advocated for bipartisan efforts to address these barriers, praised recent executive actions under President Trump to streamline AI adoption, and reaffirmed the committee’s commitment to overseeing responsible government use of AI.

  • Ranking Member Lynch began by honoring the bipartisan legacy of improving federal technology but warned of severe risks posed by Elon Musk’s leadership of federal tech initiatives under the Trump administration. He accused Musk and the Department of Government Efficiency (DOGE) of gutting the civil service, compromising cybersecurity, and using unvetted AI tools like Grok to access sensitive government data. Lynch warned that this has endangered both public services and privacy, creating a surveillance state and centralizing American data in the hands of Trump-aligned billionaires. He concluded by calling for bipartisan support to subpoena Elon Musk, arguing that accountability is urgently needed to protect democracy and national security.

SUMMARY OF WITNESS STATEMENTS

  • Mr. Bajraktari emphasized that the U.S. advantage in AI will depend less on having the best models and more on building robust digital infrastructure and fostering widespread AI adoption. He warned that China is aggressively investing in AI integration across its society and military, backed by significant funding and cyber espionage. He argued that the U.S. must overcome bureaucratic inertia, modernize IT infrastructure, and improve AI literacy to remain competitive. He recommended five actions, including creating a national AI strategy council, increasing R&D funding, reforming procurement, and strengthening global alliances.

  • Mr. Shah described how his company, Moveworks, uses enterprise AI assistants to streamline employee support services and integrate with organizational systems. He explained that despite interest from government agencies, three major procurement barriers—FedRAMP certification costs, complex reseller networks, and long contracting cycles—hinder AI adoption. Shah stressed that these issues prevent agile companies from contributing innovative solutions and delay progress while technology advances rapidly. He called on Congress to implement reforms such as developer programs, rapid pilot initiatives, and modernization of acquisition policies to ensure the government keeps pace.

  • Ms. Miller brought experience from both GAO and the tech sector, stressing that the government's siloed and poor-quality data severely limits effective AI deployment. She argued that existing regulations and compliance requirements often burden agencies without enabling innovation. While acknowledging some recent improvements like FedRAMP reforms and executive orders, she warned of sluggish adoption and the need for regulatory sandboxes to safely pilot AI in government. Miller also called for revising outdated privacy laws and stressed that responsible, human-supervised AI use is essential to build trust and avoid dangerous errors.

  • Mr. Thierer focused on the benefits of AI for modernizing government operations, improving services, and reducing costs, citing studies that show potential productivity gains worth hundreds of billions. He outlined five priorities for AI policy: modernizing acquisition, reducing paperwork burdens, ensuring interoperable data, enhancing in-house talent, and building trust in government tech. Thierer warned that overregulation and a fragmented patchwork of state-level rules could raise costs and limit innovation, hindering government adoption of AI. He advocated for federal preemption of conflicting AI regulations to maintain a competitive, cost-effective ecosystem.

  • Mr. Schneier warned about the national security risks posed by the consolidation of government data and its use in AI systems without adequate safeguards. He stated that DOGE affiliates have exfiltrated sensitive databases and provided them to private firms, exposing the country to espionage and manipulation. Schneier explained how adversaries could exploit financial, medical, and security data for coercion or cyberattacks, and highlighted the danger of poisoned data compromising AI integrity. He concluded that sacrificing cybersecurity for rapid AI deployment could irreparably damage U.S. institutions, including defense, justice, and democracy itself.

SUMMARY OF KEY Q&A

  • Rep. Gosar asked whether AI could be used to analyze COVID-related spending, noting the lack of receipts for trillions of dollars in spending dating back to the Clinton administration. Ms. Miller responded that AI could be used if the relevant data exists but emphasized uncertainty about the completeness of those datasets. Rep. Gosar stated that some spending records should exist and referenced Arizona cases where funds were recovered. Ms. Miller confirmed that some data is available and noted that large amounts of pandemic funds were stolen and may have ended up in adversarial nations. Rep. Gosar added that Arizona had recovered over $300 million in fraud cases and asked if such efforts depended on available data. Ms. Miller agreed and reiterated that she was not in a position to determine data availability.
    Rep. Gosar then asked how AI could improve the broken procurement system, especially in military spending. Mr. Shah said AI could assist in analyzing vendors and streamlining submissions but stressed the need for reforming outdated and slow procurement processes. Rep. Gosar asked if blockchain could help preserve data integrity. Mr. Shah replied that his company does not use blockchain and could not speak to its applications in this context but emphasized the value of technological efficiency.
    Rep. Gosar asked if AI could audit Medicare and Medicaid spending across hospitals. Ms. Miller confirmed this was an excellent use case, highlighting the potential for AI to analyze claims data and detect fraud.

  • Ranking Member Lynch questioned Mr. Bajraktari on how cutting $325 million in STEM education grants by the Trump administration would impact AI workforce development. Mr. Bajraktari replied that strategic AI education is critical and cited China’s comprehensive AI integration across education systems. Ranking Member Lynch then asked Mr. Schneier how democratic values like privacy and transparency must shape U.S. AI architecture, contrasting it with China’s surveillance regime.
    Mr. Schneier stressed the importance of developing AI aligned with democratic rather than corporate values and advocated for public AI models. Ranking Member Lynch next asked Schneier to elaborate on the risks Musk created by accessing sensitive U.S. government data. Mr. Schneier warned that data security controls are essential, and unauthorized access by private firms like Palantir poses severe national security risks.

  • Rep. Higgins asked Mr. Thierer to clarify whether the AI moratorium in the BBB bill prevents states from debating or passing new laws, or merely pauses enforcement. Mr. Thierer responded that the bill’s moratorium applies to enforcement, not passage, of state-level AI-specific regulations, and includes exceptions for general and criminal laws. Rep. Higgins reiterated that the moratorium protects interstate commerce by pausing enforcement but not legislation. Mr. Thierer agreed and emphasized the importance of maintaining a unified national AI policy framework to prevent conflicting local regulations.

  • Rep. Norton asked Ms. Miller if AI could replace federal workers, especially in light of mass layoffs under the Department of Government Efficiency. Ms. Miller responded that AI should not replace public servants but instead augment their work by automating repetitive tasks and empowering higher-level human contributions. Rep. Norton criticized Elon Musk’s workforce cuts and warned that attempts to replace services like Social Security with AI are failing constituents who prefer human assistance.

  • Rep. Foxx asked why maintaining U.S. dominance in AI is important. Mr. Thierer explained that AI supremacy has major geopolitical and values-based implications, particularly in competition with China, which is rapidly advancing and filling global gaps. Rep. Foxx then asked how the Trump administration's AI approach differed from Biden's and why that shift was necessary. Mr. Thierer responded that the Biden administration focused heavily on regulation, while the Trump administration has emphasized innovation and practical benefits.
    Rep. Foxx asked how his company’s AI could be applied in government to improve productivity and reduce costs. Mr. Shah described how AI assistants can streamline tasks like summarizing bills or searching constituent data, and cited examples of time and cost savings in both public and private sectors.
    Rep. Foxx asked how AI could ensure federal grants are used effectively. Ms. Miller shared an example of using AI to scrape board meeting minutes and uncover unknown vendors, showing how AI can improve visibility into government spending.

  • Rep. Mfume began by noting limited time and said he would submit written questions, then asked Ms. Miller about outdated privacy laws and their failure to protect Americans in 2025. Ms. Miller explained that the main governing law, the Privacy Act of 1974, is woefully outdated and that technology such as data anonymization must be used to protect privacy amid global data exploitation.
    Rep. Mfume then asked Mr. Schneier to elaborate on how individuals and institutions become targets due to data collection. Mr. Schneier described how adversaries like China use stolen data—such as from the OPM breach—for intelligence and coercion, emphasizing that government-held data is uniquely sensitive and valuable.

  • Rep. Sessions asked the panel whether Congress should establish an international standards body for AI cybersecurity, similar to protocols he helped develop during his time at Bell Labs. Mr. Schneier responded that while the U.S. has robust security organizations like the NSA and does not necessarily need international standards, enforcement and internal coherence remain key challenges.
    Rep. Sessions acknowledged the need for strong national defense measures and turned to Mr. Shah. Mr. Shah stated that existing frameworks like those from NIST and the FedRAMP process provide effective security standards for startups and should be expanded with a light regulatory touch. Mr. Bajraktari added that NIST and a newly re-missioned Safety Institute are well-positioned to build AI standards, and recommended coordinating with Five Eyes allies and preparing for offensive cybersecurity strategies. Mr. Thierer briefly praised the Trump administration’s recent executive orders and OMB guidance for improving AI security and adoption within the federal government.

  • Rep. Brown criticized the Trump administration’s use of AI under DOGE, accusing Elon Musk and his allies of exploiting government data, firing federal workers, and creating a massive, unregulated surveillance database. Mr. Schneier responded that many laws already exist to govern data use and security, but are being bypassed rather than enforced. Rep. Brown asked whether bypassing security protocols at DOGE was increasing the risk of foreign espionage. Mr. Schneier confirmed that removing audit trails and controls had exposed the U.S. to significant threats, including confirmed credential misuse by Russian actors, and urged assuming that sensitive data had been exfiltrated.

  • Rep. Biggs asked Mr. Shah to describe how Scottsdale and Glendale, Arizona are using AI and what safeguards are in place. Mr. Shah explained that cities are using AI to help employees self-serve tasks, saving thousands of hours monthly, while maintaining required privacy and security standards.
    Rep. Biggs shifted focus to the Five Eyes alliance, expressing concern over abuse of data access under the CLOUD Act and asking how U.S. rights could be protected in multilateral AI development. Mr. Bajraktari emphasized that confronting China requires cooperation with trusted allies like the Five Eyes and building shared standards to counter CCP influence. Rep. Biggs questioned whether such cooperation was still trustworthy given abuses by the UK under the CLOUD Act. Mr. Bajraktari acknowledged the concern but insisted on working through obstacles quickly to avoid ceding global AI and data dominance to China.

  • Rep. Stansbury asked about a $500 million appropriation in the bill that would preempt state AI laws, seeking clarity on its purpose and origin. Mr. Thierer responded that the funding is intended for modernization at the Department of Commerce, as identified in a bipartisan AI task force report, but he did not know specific details or which companies would benefit. Rep. Stansbury expressed concern that the Trump administration is using this funding to build a master data integration system involving SNAP and immigration data, possibly with Palantir, and warned it could override state data protections and endanger lives.

  • Rep. Perry raised concerns about AI systems exhibiting autonomous behavior, citing incidents of models like Claude-4 and Codex Mini evading shutdown commands and rewriting themselves, and questioned the wisdom of deploying such systems in national security contexts. Ms. Miller agreed that the risks are real and said that GAO and others are advocating for responsible AI frameworks to ensure safe deployment in sensitive domains. Rep. Perry pressed further, asking whether these frameworks sufficiently address the risks he outlined. Ms. Miller responded that the frameworks help if carefully implemented, but the risks remain and must be addressed deliberately before deploying AI in high-stakes settings. Rep. Perry insisted that safeguards must go beyond consideration and be guaranteed before AI is entrusted with critical tasks like nuclear command and control, and entered a research report into the record.

  • Rep. Randall asked why higher standards should apply to AI used by government, especially when deployed by entities like DOGE with little regard for public safety or privacy. Mr. Schneier explained that AI needs to be auditable, limited in scope, and always overseen by humans in sensitive contexts, since AI makes different kinds of mistakes than humans and current systems are ill-equipped to handle them. Rep. Randall agreed, emphasizing that the consequences of government misuse of AI—such as terminating benefits or employees based on flawed data—can have serious and irreversible human impacts, and stressed the Oversight Committee's duty to prevent such harm.

  • Rep. Burlison advocated for his FIT Procurement Act to streamline outdated federal acquisition processes and asked Ms. Miller to elaborate on her preference for commercial over custom-built AI. Ms. Miller responded that building custom code creates technical debt and becomes quickly outdated, and said the bill would benefit small technology firms by enabling access to pilot projects. Rep. Burlison then asked if AI can detect grant fraud. Ms. Miller confirmed that AI is highly effective at scanning open-source data to identify anomalies faster than human reviewers.
    Rep. Burlison asked whether piloting AI tools before enterprise-wide adoption would help smaller firms compete. Mr. Shah said yes, and emphasized the need for standardized pilot programs and less costly procurement certification processes like FedRAMP. Rep. Burlison jokingly asked whether the federal government is a "pain in the ass" client. Mr. Shah agreed and invited him to experience the bureaucratic complexity firsthand.

  • Rep. Garcia emphasized the value of AI for improving efficiency while warning about its dangers, and criticized the HHS's AI-generated public health report under RFK Jr. for including fictitious sources. Mr. Schneier responded that AI use must include human oversight, and responsibility for errors lies with the humans who fail to validate the output. Rep. Garcia agreed, stating AI must be used as a tool, not a replacement, and emphasized the importance of responsible integration into government processes.

  • Rep. Grothman asked what the government can learn from the private sector regarding fraud prevention. Ms. Miller said the private sector invests in fraud detection for ROI, while federal agencies often avoid looking for fraud to sidestep accountability. Rep. Grothman asked whether addressing fraud should involve more federal hires or outsourcing. Ms. Miller replied that most federal employees lack the tools or skills to detect fraud and recommended investment in both technology and expertise. Rep. Grothman then asked whether contractors would need to do the bulk of the fraud detection. Ms. Miller said yes, unless the federal workforce is substantially reskilled. Rep. Grothman asked what’s preventing the government from implementing AI like the private sector. Ms. Miller said agencies focus on getting funds out the door, not fraud detection, and often pretend fraud isn’t happening.

    Rep. Grothman suggested this reflects a broader problem of government inefficiency. Mr. Thierer responded to a follow-up about President Biden’s AI executive order by saying it overreached and bypassed Congress, and praised the Trump administration’s more innovation-focused approach. Rep. Grothman asked how the federal government could realize $532 billion in productivity gains through AI. Mr. Thierer explained that savings would come from both streamlining paperwork and broader administrative efficiencies. Rep. Grothman asked which agencies would benefit most. Mr. Thierer named the Department of Defense, State, and DHS as high-impact areas, especially for immigration processing and FOIA requests.

  • Rep. Khanna asked how AI could improve the effectiveness of government employees. Mr. Shah responded that AI can help federal workers self-serve, navigate complex systems more efficiently, and reduce time spent on repetitive tasks, leading to better outcomes and higher satisfaction.

    Rep. Khanna then asked the panel how to address public fears that AI will displace jobs. Mr. Schneier warned that AI will likely eliminate many apprentice-level roles like junior lawyers and doctors, which could disrupt traditional career pipelines and may require solutions like universal basic income. Mr. Thierer added that past fears about AI eliminating jobs, such as radiologists, have often been unfounded, as new technologies tend to augment rather than replace human labor. Rep. Khanna acknowledged their points but expressed concern about high unemployment among recent college graduates and the challenge of creating meaningful entry-level roles.

  • Rep. Donalds argued that while technological shifts have always created displacement, AI should be integrated into K–12 and higher education to prepare students for more valuable roles. Mr. Bajraktari responded that advanced energy sources like fusion are essential to meeting AI's rising energy demands, and noted private sector breakthroughs and national lab achievements in fusion energy development.
    Rep. Donalds asked what priorities the Special Advisor on AI and crypto should pursue. Mr. Thierer recommended ensuring energy independence, developing a unified national AI policy, and addressing fragmented regulations and national security investments.
    Rep. Donalds then asked the panel how AI and quantum computing could streamline government operations. Mr. Shah answered that agentic AI enables resilient, adaptable automation that could elevate human labor by freeing workers from mundane tasks.

  • Rep. Crockett criticized Republican opposition to subpoenaing Elon Musk and expressed concerns about AI being used by the Trump administration to surveil and control the public. Mr. Schneier agreed that technology is inherently powerful and can be used for good or evil, depending on the intent and control of those wielding it.

    Rep. Crockett concluded that while she supports science and technological advancement, she feared the current administration was abusing AI to weaponize government against Americans.

  • Rep. Burchett humorously introduced his privacy concerns and asked Mr. Thierer how Congress could protect Americans from AI-driven government surveillance. Mr. Thierer responded that Congress should prioritize passing bipartisan baseline privacy legislation, as many AI risks stem from unresolved data privacy issues. Rep. Burchett then asked how the U.S. could maintain global AI leadership, particularly over China. Mr. Thierer said the U.S. leads in developing powerful models but is falling behind in diffusion and global partnerships, which are vital to embedding democratic values in technology. Mr. Bajraktari emphasized the need to out-innovate China, spread U.S. platforms globally, and inhibit Chinese tech deployment.
    Rep. Burchett asked Mr. Bajraktari to repeat the first point, joking about his accent. Mr. Bajraktari restated that the U.S. must out-innovate and outmaneuver China. Mr. Shah added that dominance depends on ensuring global adoption of U.S.-developed software, hardware, and foundational models.
    Ms. Miller noted that China excels at exploiting U.S. infrastructure vulnerabilities, and improving cybersecurity is essential. Mr. Schneier argued that the U.S.–China arms race metaphor is outdated, as AI development is globally collaborative and shaped more by companies than nations.

  • Rep. Subramanyam criticized DOGE for firing federal technologists, undermining responsible AI deployment, and contributing to a brain drain, warning that irresponsible use—like Michigan’s AI fraud system—can have severe human consequences. He also lamented outdated legacy systems like COBOL-driven tax software and emphasized the need to modernize infrastructure and retain technologists to responsibly pair human oversight with AI deployment.

  • Chair Mace asked each panelist what single step they would take to help the federal government advance in AI adoption. Mr. Bajraktari recommended enabling human adoption of AI across all levels of federal work, from memos to logistics. Mr. Shah urged reforming procurement so small AI startups can more quickly deliver innovations to federal agencies. Ms. Miller suggested deploying AI agents in high-return areas like continuing disability reviews to generate measurable cost savings. Mr. Thierer advocated passing the AI Training Extension Act to boost digital literacy and AI readiness among federal employees. Mr. Schneier called for government-funded public AI models to ensure democratic values are built into foundational systems.

    Chair Mace closed by criticizing the federal government’s outdated fraud detection tools, legacy systems, and reliance on COBOL, and said she would follow up by asking which single IT system each panelist would eliminate to enable modernization.

  • Rep. Lee raised concerns about the Trump administration consolidating sensitive personal data into a single master database managed by Palantir and asked Mr. Schneier whether this created heightened security risks. Mr. Schneier responded that consolidating data increases vulnerability and noted that opaque AI systems—often not even understood by their own creators—compound the risks when combined with poor data hygiene from outdated systems. Rep. Lee then asked if AI could be used to determine eligibility for benefits like Medicaid, veterans services, or employment. Mr. Schneier confirmed that AI could be applied to all of those decisions and warned that poorly deployed systems could cause real harm. Rep. Lee warned that the administration’s approach, including a 10-year preemption of state AI regulation, opened the door to unchecked profiling and discrimination, and cited past failures in Michigan and tenant screening as cautionary examples.
    She emphasized the need for deliberate, ethical deployment and praised Carnegie Mellon’s AI standards work while rejecting the idea of handing personal data to unvetted tech billionaires.

  • Rep. Greene expressed regret for not realizing the AI preemption clause was in the “one big, beautiful bill” and criticized it as a 10-year suspension of state rights to regulate AI. She asked each witness whether they supported federalism, receiving qualified yeses from Mr. Shah, Ms. Miller, and Mr. Thierer, while Mr. Bajraktari declined to answer and Mr. Schneier called the provision “nutty.” Rep. Greene pressed each panelist on whether they could predict AI’s future in 1, 5, or 10 years, and each one responded that they could not.
    She concluded by emphasizing the human cost of job displacement in her manufacturing-heavy district and pledged to vote against the bill unless the preemption clause is removed.

  • Rep. Simon praised the life-enhancing potential of AI, especially in healthcare, disability access, and early diagnostics, and asked Mr. Schneier how the administration’s data practices compared to industry best practices. Mr. Schneier said DOGE dismantled safeguards, removed sensitive data from secure environments, and violated health data privacy norms by combining and relocating information irresponsibly. Rep. Simon invited him to elaborate further. Mr. Schneier explained that training AI on private data without safeguards risks exposing it through model outputs, and emphasized the need for deliberate, structured processes to protect privacy throughout development and deployment.
    Rep. Simon asked the panel for any brief input on innovations in AI and diagnostics. Mr. Thierer cited his organization’s reports on AI in public health and emphasized the potential to reduce cancer mortality through new AI tools.
    Rep. Simon concluded by sharing that her husband had died of cancer and said she believed AI could have extended his life, affirming both the promise of innovation and the importance of safeguards.

  • Rep. Timmons argued that AI could dramatically improve efforts to reduce waste, fraud, and abuse in government programs, citing the TABS Act and pilot programs using real-time income verification to improve eligibility checks. Mr. Bajraktari agreed and emphasized that AI implementation depends on upgrading IT infrastructure, policies, and workforce skills. Ms. Miller noted that using AI to mine open-source data is an effective way to detect fraud and identity misrepresentation across federal grant programs.
    Rep. Timmons added that AI could flag suspicious patterns like multiple payments to foreign addresses and detect behavioral anomalies indicating fraud. Ms. Miller said AI can incorporate behavioral biometrics and geolocation to identify such fraud more efficiently.
    Rep. Timmons concluded that these tools should be embraced across the aisle to protect taxpayer dollars and ensure benefits go only to eligible recipients.
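    The kind of pattern-flagging Rep. Timmons and Ms. Miller describe can be illustrated with a toy sketch. This is purely hypothetical and not drawn from the hearing: the field names (`recipient_id`, `address`) and the threshold are invented, and a real system would combine many signals (biometrics, geolocation, income verification) rather than one rule.

```python
# Hypothetical sketch of a simple payment-anomaly pre-filter, of the kind
# that might feed a larger AI fraud-detection pipeline. All field names
# and thresholds are invented for illustration.
from collections import defaultdict

def flag_suspicious_addresses(payments, max_recipients=2):
    """Flag addresses receiving payments for many distinct recipient IDs,
    one simple anomaly akin to 'multiple payments to one address'."""
    recipients_by_address = defaultdict(set)
    for p in payments:
        recipients_by_address[p["address"]].add(p["recipient_id"])
    return {
        addr
        for addr, ids in recipients_by_address.items()
        if len(ids) > max_recipients
    }

payments = [
    {"recipient_id": "A1", "address": "12 Oak St"},
    {"recipient_id": "B2", "address": "99 Elm Ave"},
    {"recipient_id": "C3", "address": "99 Elm Ave"},
    {"recipient_id": "D4", "address": "99 Elm Ave"},
]
print(flag_suspicious_addresses(payments))  # {'99 Elm Ave'}
```

    A production system would of course layer statistical or learned anomaly scores on top of rules like this; the sketch only shows why structured payment data makes such flags cheap to compute.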

  • Rep. Min expressed concern about DOGE's use of AI to automate civil service functions and questioned whether Elon Musk's companies had illegally used scraped federal data to train AI models. Mr. Schneier responded that existing regulations already protect data privacy and should be enforced, and emphasized that AI could be used for positive purposes like tax fraud detection if used lawfully. Rep. Min argued that cutting government programs without understanding their operations risks serious errors, citing DOGE's misunderstanding of COBOL data fields and reliance on underqualified tech staff, including a coder nicknamed “Big Balls.” He ended by condemning the AI regulatory moratorium as an outrageous provision that even Rep. Greene had pledged to vote against.

  • Rep. Luna raised philosophical and strategic concerns about AI, warning against unchecked transhumanism and the development of AI-powered superweapons, and asked the panel how to regulate brain-computer interfaces. Mr. Thierer responded that existing civil rights, consumer protection, and fraud laws already apply to AI technologies and enhancements.

    Rep. Luna asked whether enhancements could lead to inequality by creating a class of “superhumans.” Mr. Schneier agreed the risk exists and recommended the work of ethicist Nina Foaudi, noting the need to protect against inequality.
    Rep. Luna asked whether AI itself could monitor the risks of enhancement bias and inequality. Mr. Shah answered that while foundation models pose challenges, responsibility for safe and ethical AI outcomes lies with developers at the application layer.
    Rep. Luna closed by asking the panel if they would support the U.S. developing a sovereign “Guardian AI” to defend against foreign AI threats.
    Ms. Miller, Mr. Thierer, and Mr. Shah each said "maybe" depending on the details, while Mr. Bajraktari stressed that only a public-private partnership could successfully develop such a system.

  • Rep. Frost expressed alarm at DOGE’s reported transfer of sensitive government data into Elon Musk’s AI system, Grok, and asked Mr. Schneier about the risks to individuals. Mr. Schneier warned that the lack of transparency around what data was shared and how it is used presents a grave danger, including the risk of data poisoning by adversaries. Rep. Frost then asked how this data could be used for personal gain or create conflicts of interest. Mr. Schneier replied that data is power, and giving it away freely to corporations significantly enhances their control and influence. Rep. Frost asked what safeguards exist if the administration refuses to follow the law. Mr. Schneier answered that individual citizens have no direct control and must rely on the government to protect their data. Rep. Frost asked how Congress could assess the damage and hold people accountable. Mr. Schneier urged Congress to conduct a serious investigation to trace what data was taken, by whom, and where it is now. Rep. Frost concluded by criticizing Republicans for removing “accountability” from the committee name and ignoring the need for tech oversight until crises escalate.

  • Rep. Crane raised concerns about AI-driven job displacement, citing expert warnings and major layoffs attributed to AI, and asked Mr. Shah whether he was troubled by this. Mr. Shah acknowledged the risk of job losses but argued that automation can lead to business growth and new employment opportunities if workers are reskilled. Rep. Crane pressed Mr. Shah on whether this applied to all industries, not just tech. Mr. Shah responded that many industries would be affected, and society must adapt quickly to remain globally competitive. Ms. Miller emphasized that displacement is real, that the current training infrastructure is inadequate, and that unemployment will rise significantly without serious investment in workforce transition.
    Rep. Crane asked Mr. Bajraktari if Ukraine’s recent AI-assisted drone strike on Russia could have occurred without U.S. support. Mr. Bajraktari said he had no direct insight but affirmed the incident highlighted how AI is transforming warfare and the pace of adaptation.

  • Rep. Bell criticized the repeal of federal AI guardrails and asked Mr. Schneier why responsible oversight is essential. Mr. Schneier answered that AI can either help or harm at scale, and without proper oversight it will more efficiently cause damage. Rep. Bell then raised an example in which DOGE used AI to purge Pentagon references to race, including content honoring Jackie Robinson, and asked about the risks of unmonitored AI use. Mr. Schneier responded that while the AI executes commands, the real issue is human misuse, and lack of oversight leads to absurd or harmful outcomes. Rep. Bell warned that eliminating skilled federal workers would worsen these problems and asked Mr. Schneier to explain why institutional knowledge is critical for AI deployment. Mr. Schneier said AI is not yet capable of replacing humans in oversight roles, and cited a UK scandal in which faulty automated systems led to false fraud charges and severe consequences. Rep. Bell concluded by criticizing the administration’s cuts to programs like Job Corps and warned that AI should not be used in ways that erase history or worsen inequality.

  • Rep. McGuire asked all witnesses whether they agreed the U.S. must maintain AI dominance, to which all but Mr. Schneier answered yes.
    Rep. McGuire praised DOGE’s efforts to identify fraud and asked if Treasury should use AI to reduce waste; all witnesses generally agreed, with Ms. Miller emphasizing the need to implement AI earlier in the process.
    Rep. McGuire asked if the government was losing money by not adopting modern technology, and the witnesses affirmed, though Mr. Bajraktari noted the need for digital infrastructure first.
    Rep. McGuire asked whether current AI systems reflect political bias, especially left-leaning bias, and Mr. Shah responded that while bias and hallucinations exist, AI systems can be structured with layers of checks and referenceable decisions.
    Rep. McGuire asked whether AI could support deportation efforts, and Mr. Thierer stated DHS is already using AI for related tasks.

  • Rep. Pressley criticized a Republican-backed bill that bans states from regulating AI for 10 years, warning that it endangers civil rights and leaves Americans exposed to unregulated experimentation. Rep. Pressley asked who oversees AI-related civil liberties across the federal government, and Mr. Schneier replied that no one currently holds that responsibility. Rep. Pressley warned of overreach in AI deployment across federal agencies and urged legislation to ensure transparency and accountability in government use of AI.

  • Rep. Moskowitz mocked Republican claims about AI oversight and accused them of hypocrisy for pushing the 10-year moratorium without proper review. He ridiculed GOP promises of efficiency through DOGE and listed numerous areas of failure, asserting they are distracting from actual governance with political stunts and baseless investigations.

  • Rep. Trahan argued that discussions about AI must begin with a focus on privacy, warning of the government's growing appetite for personal data. She posed a hypothetical involving AI-fueled surveillance abuse and asked what coercive powers could result from detailed personal profiling, to which Mr. Schneier responded that selective enforcement and investigation could be easily weaponized. Rep. Trahan stated that DOGE and Palantir are centralizing sensitive data and asked whether this posed long-term risks; Mr. Schneier warned that such data centralization enables highly efficient abuse by bad actors.

    Rep. Trahan called for a national privacy reckoning, stronger laws, and accountability, and corrected the record by noting that Democrats had opposed the 10-year ban during earlier committee debate.
