
The Federal Communications Commission (FCC) finalized its reforms targeting automated calls, or "robocalls," by banning the use of artificial intelligence (AI)-generated voices for marketing calls.

"Although voice cloning and other uses of AI on calls are still evolving, we have already seen their use in ways that can uniquely harm consumers and those whose voice is cloned," the FCC wrote in a Feb. 8 decision.

"Voice cloning can convince a called party that a trusted person, or someone they care about such as a family member, wants or needs them to take some action that they would not otherwise take," the agency wrote. "Requiring consent for such calls arms consumers with the right not to receive such calls or, if they do, the knowledge that they should be cautious about them." 

A new – and final – amendment to the Telephone Consumer Protection Act added a provision that allows consumers to withdraw consent to receive robocalls. The decision follows an inquiry launched in November 2023 to examine the impact of AI on robocalls and robotexts. The findings pushed the agency to ban the practice outright.


Federal Communications Commission

The Federal Communications Commission seal hangs inside a meeting room at the headquarters ahead of an open commission meeting in Washington, D.C., on Thursday, Dec. 14, 2017. (Andrew Harrer/Bloomberg via Getty Images)

The FCC insisted that its new guidance will cut off any potential negative uses of AI for marketing campaigns and automated calls, stressing that the agency understands "not all AI-generated calls may be deceptive or annoy all customers." However, the decision to ban the technology outright ensures that the agency has the power to go after any use of it.

In the Feb. 8 guidance, the FCC addressed AI scams that have increased in prevalence over the past few months: now-famous incidents include deepfake videos of celebrities such as Gordon Ramsay, Taylor Swift, Jennifer Garner and Selena Gomez selling Le Creuset and other cookware from counterfeit websites.


Artificial intelligence robocalls

The FCC has added a new provision to existing rules that allows users to revoke consent for automated marketing material, robocalls or robotexts, some of which companies have used AI to generate. (iStock)

The FCC warned that more elaborate scams use AI to clone voices, often to "spread misinformation, endorse products or steal money and personal information." The agency has already received reports of scammers using AI in the all-too-common con in which they call an unsuspecting grandparent claiming that a grandchild is in trouble and needs financial help.

Allowing consumers to revoke permission for robocalls or robotexts provides another layer of protection, making it easier to go after companies or individuals that use AI-generated calls without prior consent or that disregard consumers who opt out.


End of annoying robocalls? FTC cracks down on deceptive practices

A new – and final – amendment to the Telephone Consumer Protection Act added a provision that allows for consumers to withdraw consent to receive robocalls. (Kurt "CyberGuy" Knutsson)

The FCC has decided that any revocation applies to all forms of communication from the same company. This will place a greater burden on companies to ensure they have cleared every hurdle in marketing – especially when using AI, which has the potential to put operations on autopilot – or face significant consequences.


The new provisions will not stop scammers who use deepfake ads on social media platforms, but they will create the foundation for crafting new rules as AI use develops and the FCC observes novel uses and scams.