
YouTube is supporting the ‘No Fakes Act’ targeting unauthorized AI replicas

The legislation that would establish rules around the use of AI copies of someone’s likeness is once again up for consideration.

Richard Lawler is a senior editor following news across tech, culture, policy, and entertainment. He joined The Verge in 2021 after several years covering news at Engadget.

Senators Chris Coons (D-DE) and Marsha Blackburn (R-TN) are again introducing their Nurture Originals, Foster Art, and Keep Entertainment Safe, or NO FAKES, Act, which would standardize rules around making AI copies of a person’s face, name, and voice. This time, the bill, previously introduced in 2023 and 2024, has the backing of a major web platform: YouTube.

In a statement announcing its support, YouTube says the act “focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down.” It joins a list of supporters that already included SAG-AFTRA and the Recording Industry Association of America, despite opposition from civil liberties groups like the Electronic Frontier Foundation (EFF), which have criticized previous drafts as too broad.

The 2024 version of the bill said that online services (like YouTube) can’t be held liable for storing a third-party-provided “unauthorized digital replica” if they remove the material in response to claims of unauthorized use and notify the uploader that it has been removed. That exemption doesn’t apply if a service is “primarily designed” to produce deepfakes or is marketed for that ability.

During a press conference announcing the bill, Senator Coons said the “2.0” refresh includes changes addressing free speech concerns and caps on liability.

YouTube has also expressed support for the Take It Down Act, which would make it a crime to publish non-consensual intimate images (NCII), even if they’re AI-generated deepfakes, and would require social media sites to have processes in place to quickly remove those images when reported. The latter provision has been strongly opposed by civil liberties groups, and even by some groups that advocate against NCII; despite this, the bill passed the Senate and advanced out of a House committee earlier this week.

Today, YouTube is also announcing the expansion of a pilot of the “likeness management technology” it debuted last year in partnership with CAA. YouTube pitches the program as a way for celebrities and creators to detect AI copies of themselves and submit requests to have the content removed. According to YouTube, top creators now participating in the pilot include MrBeast, Mark Rober, and Marques Brownlee.
