
Artificial Intelligence and Criminal Exploitation: A New Era of Risk

NIMITZ TECH NEWS FLASH


House Judiciary Subcommittee on Crime and Federal Government Surveillance

July 16, 2025 (recording linked here)

HEARING INFORMATION

Witnesses and Written Testimony (linked):

  • Dr. Andrew Bowne: Former Counsel, Department of the Air Force Artificial Intelligence Accelerator at the Massachusetts Institute of Technology

  • Mr. Ari Redbord: Global Head of Policy, TRM Labs; former Assistant United States Attorney

  • Ms. Zara Perumal: Co-Founder, Overwatch Data; former member, Threat Analysis Department, Google

  • Mr. Cody Venzke: Senior Policy Counsel, National Political Advocacy Division, American Civil Liberties Union

HEARING HIGHLIGHTS

Deepfakes and Non-Consensual AI-Generated Exploitation:

The hearing spotlighted the alarming rise in deepfake technology used to create sexually explicit or harmful content, particularly involving children and unwitting adults. AI now enables the creation of synthetic child sexual abuse material (CSAM) and fake pornography with increasing realism and reach, often escaping current legal definitions and enforcement. Witnesses emphasized the emotional and reputational harm caused to victims, as well as the legislative vacuum in both criminal and civil law for addressing this fast-moving threat.

Law Enforcement Resource and Capability Gaps:

Experts testified to the widening gap between criminal use of AI and law enforcement’s ability to respond effectively. While a few federal agencies are adopting tools like blockchain analytics and AI-based media authentication, the vast majority of state and local agencies lack access, training, or funding. The hearing underscored the urgent need for national investment in investigative technology and public-private coordination to prevent AI-enabled crimes from outpacing law enforcement.

Gaps in Legal Authority and Constitutional Tensions:

A recurring theme was the inadequacy of current legal frameworks to address AI-facilitated harm. Witnesses discussed the need to amend statutes like 18 U.S.C. § 2252A to include AI-generated CSAM, while also grappling with constitutional challenges such as First Amendment protections. On the civil side, there are few, if any, actionable paths for victims harmed by AI-generated impersonations or identity theft. The balance between innovation, individual rights, and public safety remains a central and unresolved tension.

IN THEIR WORDS

"You're expressing to us that there are gaps, there are gaps in legislation, there are gaps in things that we need to do here in Congress to make sure that there are protections that are enforcing and preventing these kinds of deep fakes and all that we're talking about today."

- Ranking Member McBath

"As we move from crime on city streets to crime on blockchains and in cyberspace, we're going to need every agent and investigator, not just federal but state and local, to also have access to these types of tools and training."

- Ari Redbord (TRM Labs)

"The criminal ecosystem is using [AI] to scale their offense. We have to use it to scale our defense and to make that more effective if we're going to keep up."

- Zara Perumal (Overwatch Data)

SUMMARY OF OPENING STATEMENTS FROM THE SUBCOMMITTEE

  • Chairman Biggs opened the hearing by emphasizing the dual nature of artificial intelligence, noting its rapid evolution and the serious threat it poses when leveraged by criminals. He detailed how bad actors are using AI for deepfake scams, synthetic identity fraud, financial crimes, and child sexual abuse material (CSAM), with tactics increasingly targeting vulnerable populations such as the elderly and minors. He cited specific examples, including voice cloning scams and AI-generated sextortion, to illustrate how generative models exploit emotional and psychological vulnerabilities. Biggs also warned of terrorist groups using AI to anonymize recruitment efforts and produce tailored propaganda. While acknowledging AI’s growing utility in law enforcement—especially in handling large data sets and improving investigative efficiency—he cautioned that these benefits come with privacy, bias, and ethical concerns. He concluded by calling for a collaborative approach between law enforcement, legal experts, and the public to both curb misuse and encourage responsible AI adoption.

  • Ranking Member McBath underscored the importance of examining both the opportunities and risks of AI in the criminal justice system. She acknowledged AI’s potential to help law enforcement identify patterns and solve crimes but warned that in the wrong hands, it could facilitate fraud, harm children, and even breach national security. McBath recounted the wrongful arrest of Portia Woodruff, a pregnant Black woman misidentified by facial recognition software, to illustrate the dangers of biased or inaccurate AI tools. She emphasized that AI systems are only as good as the data they are trained on and that biased data worsens existing racial disparities. McBath criticized Republican efforts to pass a moratorium on state and local AI regulation, citing bipartisan opposition from governors and attorneys general who advocated for local control. She urged Congress to follow the states' lead in implementing sensible guardrails and expressed her commitment to protecting civil rights from AI-related abuses.

SUMMARY OF WITNESS STATEMENTS

  • Dr. Andrew Bowne emphasized that artificial intelligence is a transformative enabler that multiplies both the speed and scale of criminal activity. He identified three high-risk AI technologies: computer vision, generative adversarial networks (GANs), and large language models (LLMs), all of which are actively used to commit crimes like identity theft, deepfake creation, phishing, and CSAM generation. He warned that generative AI enables emotional manipulation and fraud, with low barriers to entry and minimal need for technical expertise. Dr. Bowne stated that gaps in federal law leave emerging threats—like AI-driven autonomous crime and algorithmic market manipulation—unaddressed. He proposed criminal law reform, including new offenses and sentencing enhancements for AI-enabled crimes, as well as transparency and safety standards. He concluded that AI does not self-regulate, and the legal system must adapt quickly to meet the threat.

  • Ms. Zara Perumal described how AI is reshaping the cybercrime landscape by making criminal tools more accessible, more targeted, and harder to detect. She explained that AI reduces barriers to entry by teaching users how to commit crimes, subverts identity verification systems with synthetic media, and enables personalized scams like voice cloning, job fraud, and "nudifying" apps targeting children. These tools are now being used to harass and extort victims, often leading to devastating consequences such as suicide. She advocated for a dual strategy of public education and technological innovation to combat these threats. According to Ms. Perumal, AI can also be harnessed defensively to detect scams and malware. She concluded with optimism, stressing that investment in education, innovation, and partnerships could help build a safer digital future.

  • Mr. Cody Venzke urged lawmakers to ensure that any response to criminal AI use fully protects constitutional rights, including free speech and privacy. He warned against policies that may undermine civil liberties, such as blanket surveillance mandates, encryption restrictions, or data-sharing schemes that bypass the Fourth Amendment. He strongly opposed the proposed federal moratorium on state and local AI regulations, arguing that it would prevent communities from addressing harms like deepfake exploitation or voice theft. Mr. Venzke noted that dozens of states have enacted targeted AI laws, such as the Tennessee “Elvis Act,” and that federal overreach could nullify these protections. He expressed deep concern over federal data consolidation, warning that it could create vast surveillance systems vulnerable to misuse by AI-powered tools. He concluded by emphasizing that efficiency must never come at the cost of liberty and that Congress must act to prevent the creation of centralized government dossiers.

  • Mr. Ari Redbord highlighted how criminals rapidly adopt new technologies like AI to commit fraud, extortion, laundering, and other crimes at scale. He described how AI is removing human limitations from criminal activity, making scams cheaper, more frequent, and more sophisticated, including autonomous laundering and synthetic CSAM. He argued that this evolution poses systemic national security threats and undermines public trust in law enforcement's ability to protect the public. Mr. Redbord emphasized that the solution is not to restrict AI, but to use it strategically to detect, trace, and disrupt illicit activity. At TRM, he explained, AI is integrated into platforms that help global law enforcement monitor cryptocurrency transactions, identify fraud patterns, and stop threats in real time. He closed by stating that with the right tools and collaboration, enforcement agencies can stay ahead of AI-enabled criminal networks.

SUMMARY OF KEY Q&A

  • Representative Knott asked how the rise of AI and blockchain technologies is impacting criminal infrastructure, particularly across international lines. Mr. Redbord explained that while these tools are now commonly used in crimes like ransomware and fraud, blockchain’s transparency allows investigators to trace illicit transactions more effectively than ever before. Rep. Knott followed up by asking whether specific criminal actors can be identified, and Mr. Redbord confirmed that TRM regularly links crypto wallets to real-world individuals such as terrorists and cartel members. Rep. Knott then raised concerns about bad actors exploiting legitimate blockchain systems developed in the U.S., and Mr. Redbord acknowledged the risk but emphasized targeting malicious users rather than restricting lawful services. Rep. Knott warned that overzealous enforcement could stifle innovation, and Mr. Redbord agreed, urging a balanced approach. In closing, Mr. Redbord recommended that Congress expand resources and training for federal agents to help law enforcement keep pace with AI and blockchain-enabled crime.

  • Ranking Member McBath asked how AI-enabled facial recognition technology aligns with Fourth Amendment and equal protection principles, citing data showing disproportionate targeting of Black individuals. Mr. Venzke responded that while there is no definitive court ruling on the matter, the technology raises serious concerns due to its demonstrated bias against protected classes, particularly Black men, and stressed the need for due process safeguards and a moratorium on its use by law enforcement. Rep. McBath then asked what warrant requirements and limitations should apply to facial recognition tools. Mr. Venzke replied that although the Fourth Amendment may not explicitly prohibit such use, the risks of perpetual surveillance without oversight present urgent policy questions for lawmakers.

    Rep. McBath asked whether AI tools used by law enforcement are tested for safety and effectiveness before deployment. Dr. Bowne explained that testing varies by jurisdiction and that while the Air Force requires rigorous evaluation, many agencies lack uniform standards, especially for edge cases, leading to potential reliability gaps.

  • Rep. Lee asked what Congress should prioritize to better equip law enforcement to counter AI-enabled threats, especially those targeting children. Ms. Perumal recommended strengthening public-private partnerships and easing the process for technology sharing, particularly for small businesses developing innovative detection tools. Rep. Lee then asked about specific legislative or funding priorities. Ms. Perumal cited the growing challenge of online identity fraud and emphasized the need for support to help industry adapt and collaborate more effectively.

    Rep. Lee asked about the most urgent national security risks posed by AI and what role Congress should play. Mr. Redbord responded that AI is accelerating cybercrime by enabling autonomous malware deployment, large-scale scams, and rapid laundering of stolen funds, particularly by foreign adversaries like North Korea. He stressed that law enforcement must be given the same tools and training as cybercriminals, along with the necessary funding, to effectively respond in real time.

  • Chair Biggs asked whether current law provides adequate protection against malicious deepfakes that portray public figures saying inflammatory or false statements, and whether such acts fall under existing defamation or libel statutes. Mr. Redbord responded that while some laws apply, additional measures are likely needed to address AI-generated content. Mr. Venzke explained that although such speech can resemble political commentary protected by the First Amendment, exceptions like defamation still apply if malice is shown. Ms. Perumal deferred on the legal specifics but noted that AI significantly amplifies existing threats like fraud. Dr. Bowne stated that while some civil remedies exist, such as defamation or wire fraud statutes, there are still serious gaps in legal protections when AI-generated content causes realistic and harmful deception.

  • Rep. Kiley asked how public and private sectors can work together to equip law enforcement with the tools and expertise needed to keep pace with rapidly evolving AI threats. Mr. Redbord stressed that while some federal agents already use advanced tools like blockchain analytics, broader training and access across all levels of law enforcement are essential, along with deeper collaboration between agencies and the private sector. Ms. Perumal agreed, emphasizing the need to scale defensive capabilities using AI to detect threats, analyze trends, and respond more quickly to the criminal ecosystem’s advancements. Mr. Redbord added that AI allows law enforcement to move beyond isolated investigations to mapping broader criminal networks, identifying typologies, and proactively disrupting national security threats. He cited a recent $225 million civil forfeiture case related to pig butchering scams as an example of this evolving approach.

    Rep. Kiley noted that Congress should play a role in supporting public-private partnerships and training initiatives for law enforcement nationwide.

  • Rep. Knott asked how close society is to witnessing autonomous criminal behavior driven by AI, and all four panelists agreed that such activity is already occurring, particularly in the form of scams, fraud, and decision-making systems that influence key aspects of life.

    Rep. Knott then asked what would be required to respond effectively, and Mr. Redbord emphasized the need for robust public-private partnerships to ensure the government has access to tools developed by the private sector.

    Rep. Knott asked if current laws are sufficient, and Mr. Redbord and Mr. Venzke noted that while some existing statutes apply, new AI-specific criminal laws and complementary civil penalties will be necessary to keep pace with evolving threats. Rep. Knott also raised jurisdictional concerns about foreign-developed AI tools infiltrating the U.S. market, to which Mr. Redbord and Mr. Venzke stressed the need for coordinated multi-jurisdictional and international frameworks that protect innovation while curbing misuse.

    Finally, Rep. Knott asked what parents can do to protect their children, and all four witnesses emphasized education as key—both to warn children of risks like CSAM and to empower them to engage with AI safely and productively.

  • Ranking Member McBath asked what specific legislative actions Congress should prioritize to close the gaps in protecting against deepfakes and AI-related harms. Dr. Bowne highlighted H.R. 1283, which would amend federal criminal statutes to explicitly cover AI-generated child sexual abuse material (CSAM), and noted additional gaps remain, especially in civil remedies. Mr. Redbord emphasized the need for dedicated congressional funding for investigative tools like blockchain analytics and AI-powered media authentication systems. Mr. Venzke agreed, citing existing prosecutions under Section 2258A for AI-generated CSAM involving identifiable children, and called for broader education and cybersecurity infrastructure for vulnerable communities.

    Ranking Member McBath then asked what principles should guide Congress as it revisits AI regulatory preemption, and Mr. Venzke recommended the House AI Task Force’s final report as a nuanced framework that supports flexible state-level responses while federal rules develop. Finally, Ranking Member McBath asked how procurement processes could help ensure AI system safety, and Mr. Venzke responded that public agencies can drive private sector innovation by clearly defining needs and enforcing standards through their purchasing decisions.

  • Chair Biggs asked how soon society might reach a point where AI, rather than humans, becomes the first mover in decision-making—particularly in legal contexts such as determining probable cause—and how Congress should prepare for the rise of artificial superintelligence. Dr. Bowne responded that while true artificial general intelligence (AGI) or superintelligence remains theoretical, agentic AI is already making decisions at speeds and scales that challenge human oversight, making it essential to establish limits and guardrails. Ms. Perumal noted that AI already makes simple decisions, and more complex reasoning is on the horizon, so the degree of human oversight should depend on the risk and significance of each decision. Mr. Venzke emphasized that the role of AI is ultimately a policy choice, and certain foundational functions like determining probable cause should remain strictly human-led to protect due process. Mr. Redbord agreed, noting the unprecedented speed of AI development and urging Congress to act quickly to establish legal frameworks while continuing to rely on human adjudication for critical legal decisions.
