Senator Challenges AI Voice-Cloning Firms on Safeguards Amid Rising Scam Concerns

April 19, 2026
  • Sen. Maggie Hassan (D-N.H.) queried four AI voice-cloning firms (ElevenLabs, LOVO, Speechify, and VEED), seeking details on safeguards against scam use, consent verification for cloning, detection of impersonations of public figures or minors, and whether generated audio carries provenance watermarks.

  • The piece argues that detecting misuse after the fact isn’t enough; it urges forensic digital provenance—caller IDs, routing records, device data, and financial traces—to support legal action.

  • The next moves hinge on how the companies respond and on new FBI data from 2026, which will indicate whether voluntary safeguards are working and how lawmakers should proceed.

  • Hassan highlighted the FBI's IC3 report, which showed AI-driven scams caused nearly $893 million in losses across more than 22,000 complaints in 2025, underscoring the urgency.

  • Consumer Reports’ 2025 investigation, which found that four of six voice-cloning products allowed cloning without the speaker’s consent, illustrates the safeguard gaps driving regulatory scrutiny.

  • The article maps a policy path, including the AI Fraud Accountability Act, which would criminalize digital impersonation fraud with potential prison time, and notes Hassan’s deadline for company replies, whose answers are expected to shape forthcoming legislation.

  • ElevenLabs said it blocks cloning of celebrities and public figures and uses a mix of automated and human review; the other firms had not publicly responded at the time of writing.

  • The letters request specifics on how misuse is monitored, consent is verified, impersonations are detected, and how watermarking and provenance tracking could assist forensic and prosecutorial work.

  • A June 2025 grandparent scam in New Hampshire, in which an AI-cloned voice was used to defraud a victim, serves as a concrete example of the real-world harm these gaps enable.

Summary based on 1 source
