Nimitz Tech Hearing 11-19-24 - Senate Commerce
⚡NIMITZ TECH NEWS FLASH⚡
“Protecting Consumers from Artificial Intelligence Enabled Fraud and Scams”
Senate Committee on Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety, and Data Security
November 19, 2024 (recording linked here)
HEARING INFORMATION
Witnesses and Written Testimony (linked):
Dr. Hany Farid: Professor, University of California, Berkeley, School of Information
Mr. Justin Brookman: Director, Technology Policy, Consumer Reports
Mr. Mounir Ibrahim: Chief Communications Officer & Head of Public Affairs, Truepic
Ms. Dorota Mani: Mother of Deepfake Pornography Victim
HEARING HIGHLIGHTS
AI-Enabled Fraud and Consumer Scams
The hearing highlighted the growing use of artificial intelligence in enabling sophisticated scams and fraud, such as voice cloning and deepfakes. Criminals are using these tools to impersonate loved ones, create convincing fake content, and prey on vulnerable populations, including seniors and military families. These scams often leverage personal data readily available online, raising concerns about consumer protection and the need for stronger privacy laws.
Deepfakes and Non-Consensual Content
Witnesses and lawmakers addressed the dangers posed by deepfakes, particularly in the creation and distribution of non-consensual intimate images. With 96% of circulating deepfake videos reported to be non-consensual pornography, this issue disproportionately impacts women and minors. Discussions emphasized the need for legislation, like the Take It Down Act, to empower victims by requiring platforms to remove harmful content swiftly.
Transparency and Accountability in AI Systems
Witnesses underscored the importance of holding AI tool creators and platforms accountable for misuse. Technologies such as generative AI and voice cloning often lack sufficient safeguards, making them easily exploitable. Proposals included mandating content provenance through watermarks and metadata and ensuring AI developers take responsibility for the misuse of their products.
IN THEIR WORDS
"We stand today at a crossroads where American leadership in AI is going to depend on which of many courses Congress takes going forward. This has been, can, and should remain a bipartisan effort that focuses on promoting transparency in how developers build new models, adopting evidence-based standards to deliver solutions, and building Americans' trust in what has been a very disruptive technology."
"Today's predictive AI is not much more than a sophisticated parrot. If the historical data is biased, then your AI-powered future will be equally biased. When it comes to generative AI, the harms we see today were completely predictable and baked into the very DNA of this technology."
"We cannot let AI stand for accelerating inequality. As Congress considers AI legislation, we just can't ignore how these algorithms will impact marginalized communities, which are often the greatest targets of scams and biases."
SUMMARY OF OPENING STATEMENTS FROM THE SUBCOMMITTEE
Chairman Hickenlooper began the hearing by emphasizing the critical importance of American leadership in artificial intelligence (AI) and the need for bipartisan efforts to address the associated challenges. He highlighted three core priorities: promoting transparency in AI development, adopting evidence-based standards to address known problems, and building public trust in the technology. He provided examples of the risks posed by AI, such as deepfake scams and non-consensual imagery, while acknowledging its benefits in areas like clean energy and medicine. The Chairman detailed ongoing legislative efforts, including bipartisan bills aimed at AI innovation, accountability, and consumer protection, and stressed the need for a consistent federal approach to address AI risks comprehensively.
Ranking Member Blackburn described AI as having a "good, bad, and ugly" impact on society, citing its benefits in industries like healthcare and logistics but also its role in increasing fraud and scams, particularly targeting seniors. She expressed concern about the rapid advancement of AI outpacing legislation and highlighted the need for comprehensive strategies to combat AI-enabled scams, such as improving consumer education and digital literacy. Blackburn pointed out the rise in financial losses due to AI-driven scams and stressed the importance of creating online privacy standards to protect individuals' digital identities. She emphasized bipartisan efforts and legislative priorities to enhance privacy and consumer protections in the AI age.
SUMMARY OF WITNESS STATEMENTS
Dr. Farid provided an overview of the AI landscape, dividing it into predictive and generative AI, and highlighted the risks and biases inherent in predictive algorithms. He discussed the ethical concerns surrounding generative AI, particularly its use in creating harmful content like deepfakes and non-consensual imagery. He argued for a risk-based approach to predictive AI and emphasized the need for broader regulation of the technology ecosystem to mitigate harms. Dr. Farid criticized claims that guardrails hinder innovation, advocating instead for proactive measures to prevent the misuse of AI, learning from past failures in regulating social media.
Mr. Brookman acknowledged the consumer benefits of AI while cautioning against the harms it enables, such as fraud, personalized pricing manipulation, and non-consensual imagery. He highlighted the role of AI in facilitating large-scale scams, such as spear phishing and fake reviews, and stressed the need for stronger enforcement, platform accountability, transparency measures, and privacy laws. Brookman called for educating consumers on recognizing AI-enabled fraud and improving tools to detect and combat such activities. He emphasized that companies offering AI tools must implement safeguards to prevent misuse and supported the development of robust regulatory frameworks.
Mr. Ibrahim focused on the challenges posed by AI-generated content and the importance of digital content provenance to verify authenticity. He described the alarming rise of non-consensual imagery and phishing scams, highlighting their impact on individuals and businesses. He outlined the role of content credentials in ensuring transparency and authenticity in digital media, sharing examples of industry adoption. Ibrahim called for widespread adoption of content provenance standards, increased education on their significance, and Congressional support for initiatives to scale these technologies and foster a healthier digital environment.
Ms. Mani shared her personal experience as the mother of a victim of AI-enabled deepfake misuse, underscoring the devastating emotional and reputational impacts. She emphasized the importance of education to promote ethical AI use among youth and prepare them to navigate AI's risks and benefits. Mani advocated for stronger laws and school policies to hold perpetrators accountable and protect victims, highlighting the urgent need for legislation like the Take It Down Act. She also called for AI labeling tools, reforms to Section 230, and collaboration with major tech companies to address harmful content and enhance accountability.
SUMMARY OF Q&A
Chairman Hickenlooper asked Mr. Justin Brookman how data privacy protections could help mitigate consumer fraud. Mr. Brookman explained that social engineering scams become more effective when they exploit detailed personal data, much of which is collected and sold by hundreds of data brokers. He noted that while state-level privacy laws, such as those in California, have made progress, they rely on individuals actively opting out, which is time-consuming and burdensome. He recommended stronger federal privacy protections that apply by default and require less effort from individuals to safeguard their information. Such measures, he argued, would reduce the availability of personal data, thereby diminishing the success rate of AI-enabled scams.
Chairman Hickenlooper then asked Dr. Hany Farid about the effectiveness of tools and techniques for detecting AI-generated media, such as deepfakes.
Dr. Farid highlighted the growing challenge of distinguishing AI-generated content from reality, even for experts in the field. He described two primary approaches: proactive techniques, such as cryptographic provenance tools that tag content at the moment of creation, and reactive techniques that analyze and flag harmful content after it is uploaded. While proactive tools are more scalable, reactive techniques are less efficient and highly time-sensitive, given the rapid spread of content on social media. He emphasized the importance of combining these approaches and educating consumers on digital hygiene, noting that expecting individuals to detect deepfakes unaided is unrealistic.
Chairman Hickenlooper followed by asking Ms. Dorota Mani why it is important to include the perspectives of individuals impacted by AI technology when shaping policy responses. Ms. Mani underscored the importance of incorporating victims' experiences to better understand the real-world harms of AI misuse. She argued for proactive education in schools to teach responsible use of AI and prepare students to navigate its risks. She called for stronger accountability measures to deter bad actors while empowering victims to regain control of their lives. She stressed the urgency of addressing the lack of educational guidelines and legislation on deepfake-related harms, pointing to the FBI's acknowledgment that such content is illegal but remains underregulated.
Sen. Luján asked Dr. Hany Farid about the accuracy of deepfake detection tools for audio and video in Spanish compared to English. Dr. Farid explained that proactive tools like cryptographic provenance are language-agnostic and maintain consistent accuracy regardless of language. Reactive tools, however, can differ based on the language-specific training data they use. He noted that while accuracy rates for both Spanish and English detection are generally high—around 90-95%—biases in training data persist due to the predominance of English content online. He emphasized that while this bias exists, it is not the most pressing issue compared to other challenges in the AI landscape.
Sen. Luján then asked Mr. Mounir Ibrahim if the C2PA standards include displaying content origin information in Spanish for Spanish-speaking consumers. Mr. Ibrahim explained that the technical provenance mechanism is universal across languages, enabling digital content credentials to function independently of linguistic differences. However, he emphasized the importance of internationalizing the specification to ensure accessibility for Spanish-speaking regions and other non-English-speaking populations. He highlighted efforts by C2PA to collaborate with international stakeholders, including the French government and G20 nations, to raise awareness and promote global adoption of the standards.
Ranking Member Blackburn asked Dr. Hany Farid about the added protections a federal privacy standard could provide in the context of AI-driven content. Dr. Farid acknowledged that while a federal privacy standard is long overdue, it would have been far more impactful if implemented years ago. He explained that current AI technologies require minimal personal data—such as 20 seconds of audio or a single image—to create deepfakes, making comprehensive protection difficult even with privacy laws in place. While such a standard would provide better safeguards for future generations, he noted that the personal data of most Americans has already been widely disseminated and is challenging to secure retroactively.
Sen. Klobuchar asked Ms. Dorota Mani how a federal law like the Take It Down Act would have changed her family’s experience when dealing with her daughter’s deepfake case. Ms. Mani explained that a federal law would have empowered schools to act more decisively by providing them with the authority to address deepfake incidents. She emphasized that laws serve as educational tools for society, setting clear boundaries on acceptable behavior. The Take It Down Act’s 48-hour takedown provision, she noted, is particularly critical, as it allows victims to regain control of their images without the need for expensive civil lawsuits or complex criminal processes. Such measures, she argued, would provide immediate relief and deter future misuse.
Sen. Markey concluded by asking Mr. Justin Brookman to elaborate on Consumer Reports’ research showing that Black and Hispanic Americans are disproportionately impacted by digital scams and fraud. Mr. Brookman referenced findings that Black and Hispanic Americans experience financial losses from scams at twice the rate of white Americans, with nearly a third of affected individuals losing money. These patterns align with research by the FTC, which shows marginalized communities are more frequently targeted. He emphasized the need for focused outreach and education campaigns to help these groups better prepare for AI-powered scams, which build on longstanding vulnerabilities in the digital landscape.
ADD TO THE NIMITZ NETWORK
Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.
© 2024 Nimitz Tech 415 New Jersey Ave SE, Unit 3 |