Nimitz Tech Hearing 9-17-24 - Senate Committee on the Judiciary
⚡NIMITZ TECH NEWS FLASH⚡
“Oversight of AI: Insiders’ Perspectives”
Senate Committee on the Judiciary; Subcommittee on Privacy, Technology, and the Law
September 17, 2024 (recording linked here)
HEARING INFORMATION
Witnesses and Written Testimony (linked):
Helen Toner: Director of Strategy and Foundational Research Grants, Center for Security and Emerging Technology, Georgetown University.
Margaret Mitchell: Former Staff Research Scientist, Google AI.
William Saunders: Former Member of Technical Staff, OpenAI.
David Evan Harris: Senior Policy Advisor, California Initiative for Technology and Democracy; Chancellor’s Public Scholar, UC Berkeley.
Key words:
AI, technology, systems, models, regulation, AGI, safety, harm, law, government, research, policy.

IN THEIR WORDS
"Every AI system today is vulnerable to something called jailbreaking, where people can come up with some way to convince the system to provide advice and assistance on anything they want, no matter what the companies have tried to do so far."
"We are already seeing the consequences; generative AI tools are being used by Russia, China, and Iran to interfere in our democracy. Those tools are being used to mislead voters about elections and spread falsehoods about candidates."
"The biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence or AGI. This term, AGI isn't well defined, but many top AI companies, including OpenAI, Google, and Anthropic, are treating building AGI as an entirely serious goal, which could lead to literal human extinction."
OPENING STATEMENTS FROM THE SUBCOMMITTEE AND FULL COMMITTEE
Chairman Blumenthal noted previous hearings with industry leaders who expressed excitement but also significant fears about AI's potential harm. He mentioned that current witnesses have firsthand experience with safety issues in major AI companies like Meta, Google, and OpenAI. Blumenthal stressed the urgency of creating enforceable rules to hold these companies accountable, warning that the rapid development of AI without sufficient safeguards could lead to the same mistakes made with social media, where regulation was "too little, too late."
Ranking Member Hawley remarked on previous hearings where AI executives made optimistic claims about AI's benefits, often influenced by their financial interests. Hawley expressed hope that today’s witnesses, who have worked inside the industry without the same vested interests, would provide a more realistic perspective on the challenges posed by AI. He emphasized the importance of this hearing in helping Congress legislate effectively to protect the American people.
WITNESS HIGHLIGHTS
Ms. Helen Toner highlighted the disconnect between public perceptions and insider views on Artificial General Intelligence (AGI), noting that many companies see building AGI as a serious goal that could be achieved soon, with potentially catastrophic outcomes, including human extinction. She argued against a "wait and see" policy approach and recommended initial steps such as transparency requirements, research investments, third-party audits, whistleblower protections, and increased government expertise to manage AI risks.
Mr. William Saunders discussed his concerns about the rapid progress toward Artificial General Intelligence (AGI). He shared that recent developments at OpenAI, like the newly announced o1 system, demonstrate AI's accelerating capabilities, raising serious risks, including cyberattacks and biological threats. Mr. Saunders emphasized that while companies like OpenAI claim to prioritize safety, internal security and rigorous testing often take a back seat to speed and profit. He called for regulatory measures, including whistleblower protections and third-party testing requirements, to ensure that AI companies act responsibly in the development of AGI.
Mr. David Harris highlighted the urgent need for regulation, stressing that self-regulation in the tech industry is ineffective. He noted the decline of trust and safety teams and the rise of secrecy within tech companies, underscoring the failures of voluntary commitments from these firms. Harris advocated for binding legislation, such as the framework proposed by the Subcommittee, which includes provisions for liability and transparency in AI-generated content. He warned that while the pace of AI development is fast, there is still time to implement meaningful oversight to prevent future harms.
Dr. Margaret Mitchell described her efforts to address biases and risks associated with AI systems. She discussed the challenges of incorporating foresight and critical thinking into AI development due to the industry's pressure to launch quickly. Mitchell emphasized the importance of government involvement in shaping AI through research funding, documentation requirements, and transparency measures. She supported increased whistleblower protections and stressed the need for rigorous analysis and foresight in AI practices to ensure the technology benefits the public.
SUMMARY OF Q&A
Chairman Blumenthal asked Mr. Harris if the reduction of trust and safety teams and the increase in secrecy were true across all tech companies. Mr. Harris responded that this trend was widespread across the industry, highlighting significant layoffs, including Elon Musk's reduction of 81% of Twitter staff. He noted that TikTok was the exception, as it had been hiring staff from other companies. Mr. Harris emphasized that despite tech companies’ claims of commitment to safety, their actions suggested otherwise.
Chairman Blumenthal observed that companies' investment in trust and safety teams often did not match their stated commitment to oversight. Mr. Harris agreed, using a metaphor from the tech industry where regulators are compared to a bear chasing companies, with the goal of companies being to avoid being the slowest or worst performer, rather than genuinely improving safety.
Chairman Blumenthal asked Ms. Toner about her statement on the fragility of internal guardrails within tech companies when profits were at stake, seeking examples of what she meant. Ms. Toner provided instances, such as Microsoft launching GPT-4 to tens of thousands of users in India without approval from OpenAI’s Deployment Safety Board. She also described internal concerns at OpenAI about prioritizing competitive positioning over safety commitments, such as rushing the launch of a voice assistant model ahead of a major Google event.
Ranking Member Hawley asked Ms. Toner if her departure from the OpenAI board was due to her inability to oversee Mr. Altman’s safety decisions effectively, noting her statement that Mr. Altman provided inaccurate information about safety processes. Ms. Toner confirmed that safety processes were publicly announced but often faced challenges in execution, particularly when weighed against profit-driven pressures. She emphasized that many tech companies, including OpenAI, face internal conflicts between safety efforts and market demands, often resulting in insufficient resources and influence for safety teams.
The Ranking Member asked if Mr. Altman’s claims that OpenAI conducts extensive safety testing and independent audits were accurate based on her experience. Ms. Toner acknowledged that these efforts exist but questioned their adequacy, emphasizing that the real issue is who decides what is sufficient and under what incentives. She noted that safety teams are often brought into the process too late, undermining their effectiveness, and stressed that relying solely on companies to make these trade-offs is problematic.
Ranking Member Hawley asked if OpenAI was doing enough to ensure the safety of its products. Ms. Toner responded that the adequacy of OpenAI’s safety measures depends on the pace of their research. If OpenAI’s most aggressive predictions about AI advancements are accurate, she expressed serious concerns about their current safety procedures. However, if their progress is slower than expected, her level of concern would be reduced.
The Ranking Member asked Ms. Toner to elaborate on her statement that competition with China should not be used as an excuse to avoid regulation. Ms. Toner argued that China is already regulating its AI sector heavily and faces its own significant challenges, including economic issues and access to semiconductors. She pointed out that regulation and innovation are not mutually exclusive and that thoughtful, light-touch regulation could actually increase consumer trust and help the government stay informed, ultimately benefiting the AI industry without stifling innovation.
Sen. Durbin asked about the government’s role in regulating AI, drawing comparisons to historical projects like the Manhattan Project and the race to the moon, which were heavily influenced by government involvement. He questioned whether the government could effectively regulate AI without expertise and resources comparable to those of private companies. Dr. Mitchell responded that the government could play a valuable role by incentivizing good work and filling gaps in AI development, such as rigorous data analysis. She pointed to successful examples, like DARPA’s contributions to machine translation, as models for government involvement in advancing AI in beneficial ways.
Sen. Durbin asked Dr. Mitchell if the government had the expertise necessary to regulate the private sector effectively. Dr. Mitchell expressed confidence that the government could hire the necessary talent, though she noted the challenge of competing with high compensation packages offered by big tech companies. She suggested that government employees could work closely with private companies under non-disclosure agreements (NDAs) to better understand and regulate AI technologies.
Sen. Durbin asked Mr. Harris about the government's capability to regulate AI and whether it had the right talent. Mr. Harris affirmed the possibility of effective regulation, citing the Blumenthal-Hawley framework, which includes key elements such as licensing, liability, and provenance of AI-generated content. He highlighted the ongoing challenges in attracting qualified talent to the federal government, pointing to past issues like the healthcare.gov launch as an example of the difficulties in hiring technical experts. Mr. Harris mentioned that the AI Safety Institute within NIST has begun hiring for these roles but stressed the need for increased funding to attract and retain top talent.
Sen. Kennedy asked Mr. Harris about the concept of provenance in AI regulation, specifically whether it included notifying consumers when they are interacting with AI. Mr. Harris confirmed that provenance includes disclosure, where companies should inform consumers if they are dealing with AI. He elaborated on other aspects of provenance, including watermarking, which can be direct (visible notices on content) or indirect (invisible signals embedded in AI-generated content).
Sen. Kennedy asked if any companies were currently providing proper notice to consumers. Mr. Harris mentioned that Google’s DeepMind was attempting to implement a technology called SynthID for this purpose, although he had not personally tested it.
Sen. Kennedy questioned the effectiveness of licensing for AI companies, comparing it to other professions like law and medicine where practitioners need licenses to operate. Mr. Harris responded that licensing could establish a code of behavior that companies must follow, and violations could lead to the loss of their license, similar to other professional fields.
Sen. Kennedy asked what should be included in the code of behavior for AI companies. Mr. Harris suggested that AI should be designed to avoid harming humans, prevent discrimination, and not give harmful or incorrect advice. Sen. Kennedy then noted that this would require a government agency to enforce such a code, to which Mr. Harris responded that various proposals exist, including a malpractice regime for AI engineers.
Sen. Kennedy asked why current tort and contract laws were not sufficient to address AI-related liabilities. Ms. Toner explained that the complexity of AI and its distinct harms make it difficult to allocate responsibility clearly within existing legal frameworks, unlike traditional software or cybersecurity issues. She emphasized that establishing clear liability in AI could help set appropriate incentives for companies to manage risks responsibly.
Sen. Klobuchar asked Dr. Mitchell about the impact of AI on journalism, expressing concerns that AI models could be trained on journalistic content without compensation or attribution. Dr. Mitchell highlighted several harms, including that AI-generated content only reflects past events, thus lacking the real-time reporting crucial to journalism. She also noted that AI was displacing journalists, leading to job losses, and that biases in AI-generated content could spread through the media, diluting diverse perspectives and making information less reliable.
Sen. Klobuchar asked Mr. Harris if he agreed that AI poses significant risks to elections and how it could amplify election-related disinformation. Mr. Harris confirmed the risks, citing his work with the Brennan Center for Justice and a guide he co-authored on preparing for AI threats in elections. He discussed how AI could create deepfakes of candidates and election officials, as well as fabricated election-tampering scenarios, all of which could mislead the public. Mr. Harris mentioned California legislation that would require platforms to remove such harmful content, reflecting the need for regulatory action at the federal level.
Sen. Klobuchar expressed frustration with the lack of federal action on bills addressing AI-generated deepfakes in elections, noting that while state laws provide some protection, they are inadequate for federal elections. She criticized the inaction of Senate leadership, specifically mentioning Senator McConnell’s reluctance to advance relevant bills, and emphasized the urgency of enacting these protections to prevent public confusion and ensure the integrity of federal elections.
Sen. Padilla asked Ms. Toner and Dr. Mitchell for their thoughts on the recent agreement between the U.S. AI Safety Institute and companies like OpenAI and Anthropic to share their models for research and evaluation purposes. Ms. Toner expressed optimism about the agreement, noting that its success would depend on specifics such as timing, access, and resources. Dr. Mitchell echoed this sentiment, emphasizing that independent evaluations could help hold companies accountable for their claims about AI safety and effectiveness. However, she cautioned that safeguards must be in place to prevent misuse of access by malicious actors posing as researchers.
Sen. Padilla asked how policymakers should assess the success of these agreements. Dr. Mitchell suggested that success would involve rigorous and transparent evaluations of AI systems to better understand their performance in various scenarios, beyond what profit-driven companies might report. Ms. Toner added that Congress should closely monitor the effectiveness of these testing arrangements by staying engaged with the AI Safety Institute, ensuring it has the necessary resources, staffing, and access to conduct meaningful evaluations.
Sen. Padilla asked Dr. Mitchell to explain the importance of understanding how inputs affect outputs in AI models. Dr. Mitchell used a cooking analogy, comparing data to ingredients and the training process to cooking. She explained that unlike cooking, where the effects of ingredients are well understood, AI lacks a similarly developed science to predict how different data inputs impact the model’s final outputs, highlighting a critical gap in current AI development.
Chairman Blumenthal asked the panel if they agreed with the concerns expressed by Dr. Jan Leike, former head of OpenAI's superalignment team, who left the company due to disagreements about its safety priorities. Mr. Saunders, who worked closely with Dr. Leike, supported his concerns, noting that OpenAI was not adequately prepared to handle models with catastrophic risks, such as those that could assist in creating biological weapons or conducting sophisticated cyberattacks. He emphasized the need for rigorous security measures to prevent these models from being misused and pointed out the problem of "jailbreaking," where users can bypass safety measures to exploit AI systems.
Dr. Mitchell agreed with Dr. Leike’s statement, highlighting the broader issue of responsible AI teams being disempowered within tech companies. She noted that while companies may maintain these teams to claim they are addressing safety concerns, these teams often lack the authority to influence critical decisions about how AI technologies are developed and deployed.
Sen. Blackburn asked Ms. Toner about Meta’s recent announcement to use public content from Facebook and Instagram in the UK to train their generative AI models. Ms. Toner explained that this practice was already occurring in the United States due to a lack of privacy protections and that Meta had only delayed similar actions in the UK because of stricter privacy laws there. She suggested this as an example of the effectiveness of UK privacy protections, highlighting the need for similar measures in the U.S.
Mr. Harris added that while European and UK users had options to opt out of having their data used for AI training, those options were not available to U.S. users. He shared his personal experience of trying to find the opt-out feature and discovering it was unavailable in the U.S., illustrating how Americans lack privacy protections compared to Europeans. He expressed support for federal privacy legislation to ensure that Americans have the same rights as those in other countries to control how their data is used.
Sen. Blackburn asked Ms. Toner about the difference between intelligence and agency in AI systems, questioning whether these concepts carry different threats and should be approached separately. Ms. Toner responded that intelligence and agency, while distinct, are closely related, especially as AI models are increasingly being developed to take autonomous actions. She noted that this trend is actively pursued by major AI companies, with potential applications ranging from personal assistants to complex systems that could operate businesses independently, highlighting the connection to AGI and advanced AI.
Chairman Blumenthal asked Mr. Saunders about the need for whistleblower protections in the AI industry. Mr. Saunders shared his experience at OpenAI, where departing employees were often required to sign restrictive non-disparagement agreements that made it difficult to speak out about company practices without losing financial benefits. He emphasized the need for clear points of contact within the government for whistleblowers and legal protections that extend beyond violations of law to include situations where societal harm is at risk.
Dr. Mitchell highlighted the lack of knowledge and resources available to employees who consider whistleblowing. She described how potential whistleblowers often face the daunting challenge of acting alone against powerful companies with vast legal resources. Dr. Mitchell suggested companies should provide employees with clearer information and support about when and how to report issues, including mandatory whistleblowing orientations.
Ms. Toner explained that the absence of regulation in tech means many concerning practices are not illegal, making existing whistleblower protections ambiguous. She stressed that whistleblowers are often hesitant to come forward because they cannot be sure whether their concerns fall under current legal protections, highlighting the need for more specific rules.
Chairman Blumenthal questioned whether tech companies incentivize employees to prioritize safety. Mr. Harris recounted an experience at Facebook where safety teams struggled to demonstrate impact because their job was to prevent crises, which are inherently difficult to quantify. Mr. Saunders added that companies often prioritize rapid development and market competition over safety, and standards are needed to ensure that companies do not cut corners.
Chairman Blumenthal highlighted reports of foreign adversaries using AI to interfere in U.S. democracy, citing examples of AI-generated disinformation from Iran, China, and Russia. He asked Mr. Harris about the effectiveness of California’s law aimed at safeguarding elections. Mr. Harris responded that while California has made progress, the scope and scale of AI threats require federal action, particularly given the limitations of state-level resources.
Chairman Blumenthal raised concerns about AI systems trained on child sexual abuse material (CSAM), which can generate new abusive content. Dr. Mitchell suggested that data analysis in AI is underdeveloped and that governmental assistance is needed to identify and prevent the use of such harmful data. Mr. Harris added that voluntary self-regulation by companies has proven ineffective, with companies slow to remove offending AI models despite prior commitments.
In closing, Chairman Blumenthal addressed the argument that regulation stifles innovation. All panelists generally agreed that thoughtful, well-designed regulations could actually spur innovation by setting clear rules and boosting public trust in AI. Dr. Mitchell noted that regulation can set high-level goals, like privacy and safety, which can drive technological advances, while Ms. Toner emphasized that poorly designed regulations, not regulation itself, pose a risk to innovation.
SPECIAL TOPICS
Threats to Democracy and National Security from AI:
A significant topic discussed was the use of AI by foreign adversaries such as Russia, China, and Iran to interfere in U.S. elections and spread disinformation. Witnesses and lawmakers highlighted how AI tools are being used to create misleading content, deepfakes, and disinformation campaigns that threaten the integrity of democratic processes. This issue underscores the need for robust regulatory frameworks to safeguard national security and prevent electoral manipulation.
Development and Regulation of Artificial General Intelligence (AGI):
The hearing extensively covered the concept of AGI, with witnesses stressing that some top AI companies are seriously pursuing AGI development, which could lead to catastrophic outcomes, including human extinction. The discussion emphasized the need for proactive regulation and oversight to manage the potential risks of AGI, which is a critical concern for policymakers who must balance innovation with safety.
Lack of Accountability and Safety Measures in AI Companies:
Witnesses highlighted the internal failings of AI companies regarding safety practices, accountability, and the prioritization of profit over precaution. There was specific focus on how companies like OpenAI have inadequate internal safety measures, are prone to rushing products to market without proper safeguards, and do not adequately incentivize safety-related work. This topic is crucial for policymakers considering how to enforce transparency, oversight, and accountability within the AI industry.