Federal AI deepfake law should have safe harbor for online platforms, US copyright regulator says

31 July 2024
By Xu Yuan

Federal legislation targeting the spread of illegal deepfakes made with artificial intelligence should provide a safe harbor to encourage social media and other online platforms to take them down, the US Copyright Office suggested.

“The Copyright Office concludes that new federal legislation is urgently needed,” the office said in the first part of its report on legal and policy issues related to copyright and AI, released today.

“The widespread availability of generative AI tools that make it easy to create digital replicas of individuals’ images and voices has highlighted gaps in existing laws and raised concerns about the harms that can be inflicted by unauthorized uses,” the agency said.

The report defines a digital replica as “a video, image, or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual,” such as a song featuring the voice of a famous artist.

The proposed safe harbors for hosting or linking to infringing content are conditioned upon, among other things, a requirement that the online service provider, or OSP, act expeditiously to remove such content upon receiving a valid notification or otherwise becoming aware of the infringing activity.

The report recommends that traditional rules of secondary liability should apply, but with an appropriately conditioned safe harbor.

Under secondary liability principles, a defendant may be contributorily liable if it, with knowledge of the infringing activity, induces, causes or materially contributes to the infringing conduct of another. Liability can also arise from distributing a device with the object of promoting its use to infringe copyright.

The role of OSPs has attracted most of the comments on the issue of secondary liability, according to the regulator, which noted in the report the “most far-reaching” example of safe harbors established by Congress for OSPs: Section 230 of the Communications Decency Act, which shields online platforms from liability for third-party content. That shield permits OSPs to exercise judgment in removing content without creating a legal responsibility to do so.

Intellectual property laws are currently carved out from Section 230 immunity, and the regulator recommends a similar “exclusion from section 230” for digital replica protection. It is “advisable to encourage prompt removal of unauthorized digital replicas from online platforms,” it said.

The office has received differing views on whether a federal digital replica law would constitute a law pertaining to intellectual property, a question that determines whether the existing carve-out would apply.

A carve-out is appropriate, the office said, because OSPs are best positioned to prevent the continuing harm from the availability of deepfakes. “OSPs should be incentivized to assist in removing the replicas once they know they are unauthorized and protected from liability when they do so,” the report said.

The office also “agrees that a notice and takedown system, combined with an appropriate safe harbor, could be an effective approach,” but such a system should be conditioned on “the OSP expeditiously removing the digital replicas when it has actual knowledge or has received a sufficiently reliable notification that the replica is infringing.”

A federal law

A federal law is needed because “existing laws do not provide sufficient legal redress” for harms caused by unauthorized deepfakes, according to the report.

State laws targeting deepfakes are “inconsistent and insufficient in various respects,” and existing federal laws that touch on this issue “are too narrowly drawn to fully address the harm from today’s sophisticated digital replicas,” the report said.

The proposed law should target “digital replicas, whether generated by AI or otherwise, that are so realistic that they are difficult to distinguish from authentic depictions,” the report said, but the office did not recommend that it cover the use of AI to imitate artistic styles.

Under the recommended legislation, a person would be held responsible for creating, distributing or making available an unauthorized digital replica, and that responsibility would not be limited to commercial uses. But liability should attach, the report recommends, only where the distributor, publisher or displayer acted with actual knowledge both that the representation in question was a digital replica of a real person and that it was unauthorized.

In addition to monetary and injunctive relief, the regulator recommends the “inclusion of statutory damages, enabling recovery by those who may not be able to show economic harm or afford the cost of an attorney.”

The federal law should not preempt state laws, the report said, to avoid reducing existing protections in some states or imposing a one-size-fits-all solution.

To address First Amendment concerns, the office called for “a balancing framework rather than categorical exemptions” to “avoid overbreadth and allow greater flexibility.”
