Nimitz Tech Hearing 9-18-24 - Senate Intel

NIMITZ TECH NEWS FLASH

“Foreign Threats to Elections in 2024 – Roles and Responsibilities of U.S. Tech Providers”

Senate Select Committee on Intelligence

September 18, 2024 (recording linked here)

HEARING INFORMATION

Witnesses and Written Testimony (linked):

  • Mr. Kent Walker, President of Global Affairs, Alphabet
  • Mr. Brad Smith, Vice Chair and President, Microsoft
  • Mr. Nick Clegg, President of Global Affairs, Meta

HEARING HIGHLIGHTS

Challenges of Content Moderation and Algorithmic Influence:

  • The difficulty in distinguishing between legitimate political speech and foreign-influenced disinformation was a major theme. The role of algorithms in amplifying certain content, as highlighted by Sen. Rubio and others, raises questions about transparency and accountability for tech platforms. Policymakers must decide how to balance free speech with the need to prevent harmful influence operations, particularly in politically charged environments.

Transparency and Data Sharing from Tech Companies:

  • Several members pushed for greater transparency from tech companies on how many Americans have been exposed to foreign disinformation. They requested specific data on manipulated content and its reach. Policymakers should ensure that tech platforms cooperate fully and share relevant information to better protect the public.

Accountability and Regulatory Frameworks for Social Media and AI:

  • The lack of regulatory progress on social media oversight and AI governance was a recurring theme. Chairman Warner and others expressed frustration with Congress’s failure to pass meaningful legislation in these areas. They pointed out that while tech companies claim to support regulation in theory, actual legislative action has been slow. Policymakers are grappling with how to create effective regulations without stifling innovation, particularly around emerging technologies like AI.

IN THEIR WORDS

"It seems to me, what's happening here is that foreign governments are engaged in a kind of geopolitical judo, where they're using our own strength against us. Our strength is our democracy and our regular elections, plus freedom of expression, and that's what they are taking advantage of in order to try to manipulate our fundamental way of making decisions."

- Senator King

"It is a constant cat and mouse game. The adversaries are always moving forward, and we have to continuously improve our capabilities to stay ahead. We remain humble in knowing that we likely don’t know everything, but we are using AI and various techniques to do our best in tracking and mitigating disinformation."

- Mr. Kent Walker (Google)

"The truth is we're 48 days away from an election. And the final point I want to make clear is that we need to do all we can before the election… One of my gravest concerns is that the level of misinformation, disinformation that may come from our adversaries after the polls close could actually be as significant as anything that happens up to the closing of the polls on election night."

- Chairman Warner

SUMMARY OF OPENING STATEMENTS FROM THE COMMITTEE

  • Chairman Warner opened by explaining the context of the hearing, which built on the committee’s efforts to expose foreign adversaries’ manipulation of the U.S. electoral process. He recounted the discoveries since 2017 regarding Russia’s extensive use of social media platforms to sow division and influence voters. The Chairman highlighted the progress made in response, including recommendations for greater collaboration between government and the private sector and transparency from tech platforms. He raised concerns about the continuing and evolving threats from foreign actors, including Russia, Iran, and China, particularly with new technologies like AI.

  • Vice Chairman Rubio acknowledged the complexity of dealing with foreign disinformation, particularly when it overlaps with preexisting political beliefs in the U.S. He emphasized the challenge of distinguishing between foreign-generated disinformation and legitimate domestic viewpoints that may be amplified by adversaries. Rubio expressed concern about labeling individuals as collaborators for holding views that happen to align with foreign interests. He highlighted specific examples, such as the Hunter Biden laptop story and censorship during the COVID-19 pandemic, which demonstrated the difficulties in navigating disinformation without infringing on free speech.

SUMMARY OF WITNESS STATEMENTS

  • Mr. Kent Walker (Alphabet) began by emphasizing Google's commitment to protecting free expression while ensuring responsible policies to safeguard the integrity of democratic processes. He explained how Google has invested in tools and capabilities to prevent foreign actors from misusing its platforms, such as the creation of the Google Threat Intelligence Group. This group actively monitors and disrupts malicious activities, including cyberattacks and influence operations, particularly from countries like Russia, China, and Iran. Walker also highlighted the use of AI by foreign actors in disinformation campaigns and detailed Google's efforts to detect and counter AI-generated content, such as its introduction of SynthID watermarking and content credentials (a toy sketch of the provenance idea follows the witness summaries).

  • Mr. Brad Smith (Microsoft) expressed that while tech companies may compete in many areas, they are united in their commitment to protecting American elections. He underscored the reality of foreign adversaries, like Russia, Iran, and China, seeking to undermine democracy and emphasized the responsibility of the tech sector to counter these efforts. Mr. Smith outlined three key roles: preventing foreign exploitation of platforms, protecting political candidates and election officials through technology and training, and preparing the American public for potential disinformation risks. He noted the critical moment before elections when foreign interference is most likely and called for unity across political and industry lines to protect the integrity of the electoral process.

  • Mr. Nick Clegg (Meta) stated that Meta is committed to protecting free expression and the integrity of elections worldwide, with more than 40,000 people focused on safety and security. He described Meta's comprehensive approach, which includes strong policies against voter interference, connecting people to reliable information, and combating foreign influence and misinformation. Mr. Clegg highlighted Meta's transparency in political advertising and collaboration with other platforms to address cross-industry threats. He acknowledged the emerging challenge of AI but noted that generative AI tactics have not yet significantly impacted elections. Meta continues to work internally and externally to address AI risks, including labeling AI-generated content and signing onto industry-wide AI standards.
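Both Mr. Walker's SynthID reference and the content-credentials work mentioned across the testimony rest on the same underlying idea: binding tamper-evident provenance claims to a media asset so downstream platforms can check whether it is AI-generated. The minimal Python sketch below illustrates only that concept, using an HMAC over the asset plus its claims; the key, field names, and helper functions are hypothetical, and real systems (SynthID's in-pixel watermarks, C2PA's certificate-signed manifests) are far more sophisticated than a shared-secret tag.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for the toy example. Real content credentials
# (C2PA) use X.509 certificate chains and signed manifests, not a shared key.
SIGNING_KEY = b"demo-key-not-for-production"

def attach_provenance(asset: bytes, claims: dict) -> dict:
    """Return a sidecar record binding claims (e.g. generator name) to the asset."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, asset + payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def verify_provenance(asset: bytes, record: dict) -> bool:
    """Recompute the tag; any edit to the asset or its claims invalidates it."""
    payload = json.dumps(record["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, asset + payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

if __name__ == "__main__":
    image = b"\x89PNG...example image bytes..."
    record = attach_provenance(image, {"generator": "example-model", "ai_generated": True})
    print(verify_provenance(image, record))              # True: intact asset
    print(verify_provenance(image + b"tamper", record))  # False: asset was altered
```

The design point matches the hearing discussion: provenance metadata lets a platform flag or label AI-generated content without having to judge the content itself, though (as the witnesses note) metadata can be stripped, which is why in-signal watermarks like SynthID complement it.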

SUMMARY OF Q&A

  • Chairman Warner asked about tech companies' responsibility in detecting AI-generated disinformation resembling real news outlets, particularly in the 48 hours after elections. He highlighted the issue of millions of Americans viewing fake sites in 2016 and asked how such disinformation could be prevented. Mr. Clegg responded that Meta had banned the organization responsible for such disinformation, confirming it was part of a Russian effort to undermine American democracy. He emphasized that Meta was actively addressing these threats.

    The Chairman then asked how many Americans had viewed Russian-generated content and raised concerns about targeted ads aimed at specific groups, such as Jewish and Latino Americans, in key states. He questioned why foreign-paid advertising was still allowed on platforms. Mr. Walker explained that Google had implemented checks requiring election ad registration and, after the 2016 election, found Russian actors had spent less than $4,000 on ads. Chairman Warner countered with evidence that Russian actors were still using ad tools and pressed for more detailed data on content purchased by these actors. Mr. Walker responded that Google had removed around 11,000 Russian-linked disinformation efforts.

    The Chairman stressed the need for quick access to data on how many Americans had viewed disinformation, particularly false ads posing as legitimate news, to better inform the public.

  • Vice Chairman Rubio asked about Meta’s content moderation policies regarding political speech, specifically how misinformation is handled. He referenced the COVID-19 lab leak theory, which was initially considered false but later seen as plausible. The Vice Chair wanted to know how Meta would prevent similar mistakes today, where fact checkers might wrongly suppress information based on changing knowledge. Mr. Clegg responded that Meta relies on independent fact-checkers who are not employed by the company but are vetted by third-party organizations. He acknowledged that during the pandemic, mistakes were made under government pressure, and Meta now strives to act independently, resisting temporary pressures on content.

    The Vice Chair pressed further, asking whether the same fact-checking system would again demote content, like the Hunter Biden laptop story, which was initially labeled as Russian disinformation by experts. He wanted to know if this would lead to accounts being blocked or diminished. Mr. Clegg explained that in the Hunter Biden case, the story was demoted temporarily while fact checkers reviewed it. However, the content was never removed, and once the review concluded, it was fully restored. He confirmed that the story was always available and circulated widely.

    Vice Chairman Rubio then asked whether the fact-checkers themselves would face consequences for promoting the narrative, advanced in a letter signed by 51 former intelligence officials, that the laptop story was Russian disinformation, a claim that turned out to be untrue. Mr. Clegg did not specifically address the question but reiterated Meta’s general approach of demoting content temporarily for review.

  • Sen. Heinrich asked whether the companies had a policy of removing fraudulent news sites that mimic legitimate sources once identified. He also questioned why it often takes a long time to remove these sites and whether AI is used to proactively identify them. Mr. Smith responded that the companies do remove such counterfeit sites, which violate terms of use. He acknowledged that while the industry is improving in detecting these sites using AI, there is always a race to remove them quickly. Mr. Smith gave an example of a recent AI-enhanced video of Vice President Harris that was identified and addressed.

    Sen. Heinrich then asked about improving platforms' ability to detect and remove content that threatens or harasses election workers, given the rise in such behavior due to false election claims. Mr. Walker explained that companies have policies against incitements to violence and harassment and are working to safeguard election officials through cybersecurity tools and advanced protection programs to prevent their personal information from being exposed. Mr. Clegg added that Meta encourages local election officials to use platforms like Facebook to communicate directly with voters through their voting alert system, which has issued millions of alerts since 2020 to ensure voters receive accurate information.

  • Sen. Collins asked about the risks of Chinese influence in down-ballot races, particularly since local officials are less likely to receive intelligence briefings. She expressed concern about China's attempts to build relationships with state and local officials and inquired about how platforms safeguard these lower-profile elections. Mr. Clegg agreed with the concern and emphasized that vigilance needs to be constant across all levels of elections, not just presidential races. He explained that Meta focuses on behavioral patterns, like networks of fake accounts, rather than individual content, citing a recent example where they disabled accounts targeting the Sikh community in the U.S.

    Sen. Collins then asked why platforms don’t watermark posts to show their country of origin, such as marking content from Russia, allowing users to decide for themselves. Mr. Smith responded that it was an interesting idea, and that some organizations already use metadata to indicate the source of their content. He mentioned that this approach has been used by the Republican National Convention to protect its content. He suggested that the focus should be on content where protections are removed and noted ongoing discussions in the industry about identifying foreign content more clearly for the public.

  • Sen. Kelly asked what the companies are doing to address Russian-made fake websites that mimic trusted American news outlets like Fox News and The Washington Post. He expressed concern about how well-crafted these sites are and how they specifically target swing state voters, including his constituents in Arizona. He asked whether these fake sites are still accessible through platforms like Google. Mr. Walker responded by explaining that Google has tools like "About this image" and "About this result" to help identify when images first appeared online, which aids in detecting disinformation. He also mentioned ongoing efforts to watermark AI-generated content and ensure transparency through cross-industry initiatives.

    Sen. Kelly further inquired whether Google can prevent users from accessing these fake sites once they are identified as false. Mr. Walker clarified that Google's actions differ depending on whether they are hosting the content. On platforms like YouTube, they remove demonstrably false and harmful content, including AI-generated or manipulated media. He emphasized that if it involves trademark or copyright infringement, such as fake news sites impersonating real ones, the content would be removed once complaints are received.

    Sen. Kelly closed by underscoring the importance of removing content that co-opts legitimate news sites.

  • Sen. Cotton opened by putting Russian election interference in perspective, noting that American users produce more content in a matter of hours than foreign actors do altogether. He pointed to a larger concern of tech platforms suppressing domestic stories, such as the Hunter Biden laptop case, which Facebook demoted in 2020. He asked Mr. Clegg if Meta regrets that decision. Mr. Clegg said that Meta no longer demotes stories like the Hunter Biden case and expressed regret over the action.

    Sen. Cotton asked Mr. Walker if Google suppressed results about the Hunter Biden laptop. Mr. Walker responded that Google did not suppress the story as it did not meet their standards for action.

    Sen. Cotton humorously asked Mr. Smith if AI-generated memes of Donald Trump saving ducks and geese posed a serious election threat. Mr. Smith confirmed that such content was not a major concern.

    Sen. Cotton inquired about a past issue where Google didn’t autocomplete searches for an assassination attempt on Donald Trump, asking Mr. Walker why this happened. Mr. Walker explained that Google’s policy avoids associating search suggestions with violence against political figures unless the event is historic; the assassination attempt fell between policy updates, and the gap has since been corrected.

    Sen. Cotton then asked how tech companies plan to comply with new California laws criminalizing deep fakes before elections. Mr. Walker and Mr. Clegg both noted that they were still reviewing the laws and hadn’t yet determined how to comply but acknowledged the challenge of distinguishing between playful and harmful AI content. Sen. Cotton expressed skepticism about who would draw the line between innocuous and harmful content, questioning the neutrality of organizations like PolitiFact and the Southern Poverty Law Center.

  • Sen. King emphasized that the focus should be on foreign influence when addressing disinformation, noting that adversaries like Russia are using democratic freedoms and social media against the U.S. He asked whether it’s technically possible to determine when content originates from foreign sources.

    Mr. Smith responded that while it’s not always possible, it often is. He provided an example from the Slovakian election, where Russia used a sophisticated strategy to release a fake audio recording and amplify it via social media and official channels.

    Sen. King highlighted the sophistication and determination of adversaries like Russia and asked about the best way to respond. Mr. Smith agreed that educating the public, as seen in Estonia, is crucial. He stressed that people need to be aware that not everything they see online is true, especially when it may be foreign disinformation.

    Sen. King suggested educating the public about these threats, joking about a sign in his kitchen warning against trusting quotes found on the internet. He asked if alerting users to manipulated content is a reasonable approach. Mr. Walker added that AI is helping detect disinformation patterns and improving policy enforcement. He also mentioned the importance of providing accurate information as a countermeasure to disinformation, ensuring people know essential facts, such as voting details.

    Sen. King concluded by thanking the witnesses and affirming the importance of providing good information to counter false content.

  • Chairman Warner recalled how Russian interference in 2016, such as inciting violence between opposing groups in Texas, was exposed by the committee. He questioned how the average American could detect fake sites that look identical to trusted news sources.

  • Sen. Cornyn asked whether the panel believed that TikTok should be required to divest from ByteDance to operate in the U.S. and if foreign-owned social media companies from adversarial nations should be allowed to operate freely. Mr. Walker deferred to Congress on this issue but noted that Google has acted to remove companies distributing malware. Mr. Smith stated that Congress had already legislated on this issue, and the law would be followed after the courts adjudicate it. Mr. Clegg pointed out that there isn’t a global level playing field, as American apps are banned in China while Chinese apps are available in the U.S.

    Sen. Cornyn explored the idea of reciprocity and questioned how historical legal frameworks that governed communications (like newspapers and radio) could guide social media regulation today, particularly when dealing with censorship and information warfare by foreign adversaries. Mr. Smith emphasized the importance of focusing on areas of agreement, such as addressing foreign adversaries, as a starting point for effective action. He noted that building consensus on foreign threats could lay a foundation for future regulation.

  • Sen. Bennet expressed concerns about the sheer scale of tech companies and the lack of negotiation between the American people and platforms regarding privacy, data, and the impact on democracy. He highlighted the need for social media companies to take responsibility for safeguarding elections and questioned whether the massive capital expenditure on AI reflects a true commitment to protecting American democracy from foreign adversaries. Mr. Clegg responded by stating that Meta has around 40,000 people working on security and integrity, with $5 billion invested in capital expenditures over the last year. He acknowledged the unprecedented scale of tech operations and emphasized the importance of cooperation between global democracies to address regulatory challenges.

    Sen. Bennet remained skeptical of the numbers, emphasizing that the real investment in protecting elections is what matters, not just the size of the workforce. Mr. Smith added that the American tech sector is an engine of growth, but companies have a high responsibility to protect elections and uphold the law. He argued that while there has been ample debate on issues like privacy, what’s needed is decision-making and action from Congress to pass necessary laws, with tech companies providing support.

  • Sen. Lankford raised concerns about foreign-owned platforms like TikTok selectively filtering content to influence the national conversation, such as suppressing Uyghur and Tibet-related content while amplifying pro-Palestinian content. He questioned how tech companies could build trust with the American public by ensuring transparency in the algorithms delivering content and maintaining neutrality in content delivery. Mr. Clegg responded by emphasizing the importance of user controls and transparency. He noted that Meta allows users to turn off algorithms, prioritize certain content, and see why they are seeing particular posts. He also mentioned that Meta publishes transparency reports, audited by EY, to show how they act on content that violates policies. Mr. Walker added that Google builds trust by using content raters from across the U.S. to ensure grounded results. He mentioned that YouTube promotes not just the most popular content but the most valuable, based on user feedback, and ensures transparency in enforcing its policies.

    Sen. Lankford then emphasized the need for better attribution of content origin, particularly foreign disinformation. He noted that it’s crucial for Americans to know where content comes from to combat misinformation effectively, and highlighted the challenge of tracking content after it has been shared multiple times. He called for faster attribution of disinformation to better inform the public.

  • Sen. Ossoff asked Mr. Walker about Google’s independent ability to detect foreign government influence on its platforms without government notification. Mr. Walker responded that while it’s challenging due to increasingly sophisticated tactics, Google has over 500 analysts working to track hundreds of foreign state actor groups. However, he acknowledged that it’s a constant "cat and mouse" game.

    Sen. Ossoff asked if Google is fully aware of the scale of the problem or if it is still missing significant amounts of foreign influence. Mr. Walker admitted that they likely lack full knowledge due to constantly evolving adversaries.

    Sen. Ossoff inquired about the use of machine learning to detect foreign influence and whether it relies on content or behavioral patterns to avoid suppressing legitimate speech. Mr. Walker explained that detection is based on a mix of content metadata, network activity, and behavioral patterns (a simplified sketch of behavior-based scoring follows this exchange).

    Sen. Ossoff shifted to the issue of deep fake content in U.S. elections, asking Mr. Smith how tech companies would handle a fake, defamatory video targeting a candidate just before an election. Mr. Smith emphasized that action should only be taken with a high level of confidence. He noted that AI and crowdsourcing can help identify such fakes, and the priority would be to alert the public quickly.

    Sen. Ossoff asked how each company would handle proven deep fake content that can't be attributed to a foreign actor. Mr. Clegg stated that Meta would label such content to question its veracity and could demote it in the algorithm to limit its spread. Mr. Smith added that while Microsoft's platform isn't a consumer platform, public notification and labeling would be key actions. Mr. Walker mentioned that Google would also notify the Foreign Influence Task Force to ensure government awareness of the situation.
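To make Mr. Walker's point about behavioral rather than content-based detection concrete, here is a minimal Python sketch that scores an account on behavioral signals alone. Every feature name, threshold, and weight is invented for illustration; production systems rely on large-scale machine-learned models over far richer signals than these four.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_day: float       # average posting volume
    cadence_stddev_min: float  # variation in minutes between posts (low = machine-like)
    account_age_days: int      # how recently the account was created
    shared_ip_accounts: int    # other accounts seen on the same infrastructure

def coordination_score(acct: AccountActivity) -> float:
    """Toy behavioral score in [0, 1]; higher suggests coordinated inauthentic
    behavior. Thresholds and weights are hypothetical, for illustration only."""
    score = 0.0
    if acct.posts_per_day > 50:        # unusually high volume
        score += 0.3
    if acct.cadence_stddev_min < 2:    # machine-regular posting rhythm
        score += 0.3
    if acct.account_age_days < 30:     # freshly created account
        score += 0.2
    if acct.shared_ip_accounts > 10:   # clustered on shared infrastructure
        score += 0.2
    return min(score, 1.0)

# Example: a new, high-volume account posting on a fixed schedule from
# infrastructure shared with many other accounts scores near the maximum.
print(coordination_score(AccountActivity(120, 0.5, 7, 40)))  # 1.0
```

The design choice mirrors the testimony: because none of these signals depend on what the account says, flagging coordinated networks this way sidesteps judgments about the content of legitimate political speech.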

  • Chairman Warner expressed frustration about the failure of some companies, like X, to adhere to agreements made regarding disinformation. He questioned why platforms like Meta, Google, and YouTube didn’t catch Russian-manipulated content mimicking trusted news sources and asked if companies should take more responsibility rather than leaving it to the government. Mr. Clegg responded that the key challenge is removing the networks of fake accounts that generate this content. He mentioned progress made since the Munich agreement (the AI Elections Accord signed at the Munich Security Conference), including using visible and invisible watermarking and metadata to detect fake content.

    Chairman Warner pressed further, asking why consumers would not notice fake URLs and whether companies should do more to help, rather than relying solely on government action. Mr. Clegg agreed that companies need to take more responsibility in this area.

    Chairman Warner then requested data from the companies on how many Americans had seen Russian-manipulated images or ads that sow division and undermine campaigns. He noted that, with only 48 days until the election, the problem is likely to worsen, and having this information would help alert the public. He expressed concern that, while some progress has been made, too much disinformation is still slipping through and called for better data from the companies, especially around manipulated ads. He criticized X and other platforms like TikTok, Telegram, and Discord for exacerbating the problem, rather than helping to solve it, and reiterated the need for urgent action.

  • Sen. Ossoff highlighted a recent Russian influence campaign that targeted swing states, including Georgia, and stressed the importance of fostering a skeptical, resilient society that critically evaluates information. He asked how tech companies and public leaders can help build that resilience to prevent manipulation by foreign and domestic actors. Mr. Clegg suggested that countries like the Baltics and Taiwan demonstrate that voter skepticism is an effective antidote to disinformation. He emphasized the importance of transparency, including sharing findings with researchers and government. He mentioned Meta’s efforts, such as publishing adversarial threat reports and placing signals on GitHub for public scrutiny. Mr. Walker added that YouTube has launched a program called "Hit Pause," offering short videos to help users recognize signs of misinformation. Research showed that exposure to these videos helped users become more resistant to fake news. Mr. Smith discussed similar efforts by Microsoft, including a media campaign in Europe urging people to double-check information before voting, which reached millions of people. He said they are bringing similar initiatives to the U.S.

    Sen. Ossoff asked about the distinction between the role of social media platforms in labeling content and the editorial judgment of traditional news organizations. Mr. Clegg responded that social media platforms do not generate content but rely on user-generated content. Unlike traditional news editors, they do not select content for publication but use algorithms to ensure personalized feeds for users. He noted that most users engage with non-political content, such as family and social activities.

  • Chairman Warner expressed concerns about foreign actors like China and Russia using platforms such as TikTok to manipulate content and influence U.S. elections. He highlighted the risks posed by algorithms controlled by foreign governments and criticized platforms for not taking enough responsibility. Warner emphasized the importance of transparency and accountability, pointing out that many independent academic reviewers who could offer oversight have been marginalized. He requested data on how many Americans have seen Russian-manipulated content and how many ads have bypassed safeguards, stressing the need for companies to prepare for potential foreign influence in the critical 48-hour period after the election.

    The Chairman also criticized Congress’s lack of action on regulating social media and AI, referencing bipartisan efforts that have stalled. He urged the tech companies to act swiftly and provide preliminary information by the middle of the following week, given the urgency of the upcoming election. While acknowledging some progress, he stressed that foreign manipulation of U.S. elections remains a significant threat, and more needs to be done to protect democracy.

ADD TO THE NIMITZ NETWORK

Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.