House Judiciary Members Eye AI Deepfake Solutions
Existing law needs updating to protect artists and individuals from fake AI-generated content, House Intellectual Property Subcommittee Chairman Darrell Issa, R-Calif., said Friday during a hearing in Los Angeles.
House Judiciary Committee members voiced varying degrees of support for legislation holding individual abusers and social media platforms accountable.
Last month, fake pornographic images of musician Taylor Swift went viral on X. Millions viewed the content before X acted, and reports suggest the material remains on X. The Swift incident represents the “first wave,” and it should worry all prominent figures, said Rep. Matt Gaetz, R-Fla. Social media companies shield themselves from liability using the First Amendment and Communications Decency Act Section 230, while profiting off the material, Gaetz said. “Robots should not be subject to free speech,” he added. “I can’t believe I even have to say that out loud.”
In addition, Gaetz said the No AI Fake Replicas and Unauthorized Duplications (No AI Fraud) Act is a good start for protecting individuals and holding platforms accountable. Reps. María Elvira Salazar, R-Fla., and Madeleine Dean, D-Pa., introduced the No AI Fraud Act last month. It establishes that individuals have a personal property right in the use of their image and voice in AI-generated material.
Congress should avoid making the situation worse by creating conflicts with existing laws that apply to AI-related harms, testified University of Pennsylvania Law School professor Jennifer Rothman. Existing protections include federal and state right-of-publicity laws and the Lanham Act, she said. Her biggest concern is that the No AI Fraud Act would let artists transfer ownership of their image and likeness to individuals or companies, potentially allowing the new owners to create AI-based content in perpetuity and even to block those artists from creating new, original work, she said.
The Recording Academy supports imposing liability on tech platforms for disseminating AI-generated content that relies on copyrighted material, said Recording Academy CEO Harvey Mason Jr. The patchwork of state right-of-publicity laws is inconsistent, outdated and doesn't address AI issues, he said: Only Congress can solve the problem.
The Software and Information Industry Association believes courts are the best venues for deciding AI-related copyright and trademark issues, said President Chris Mohr. Using copyrighted works without permission to train generative AI models is the subject of litigation, and SIIA members are confident the courts will sort out fair use, Mohr said. He agreed with Rothman that many existing statutes can address non-copyright harms like malicious AI-generated content. Mohr conceded that deepfake porn is an issue Congress hasn't fully addressed.
It’s a “gut punch” having your name and likeness used in ways you could never imagine, said country singer Lainey Wilson. AI tools can harvest an artist’s body of work in a matter of seconds, she said. Some artists are OK with that, but it should require consent, she said.
“We have a lot of work to do,” said Issa. “We must do no harm, but in fact, there is harm being done to people every day by the use of AI to their detriment.” Existing laws apply in some cases, but in many instances, enforcement needs to be streamlined to protect creators, he said. Issa drew attention to Rep. Adam Schiff, D-Calif., who is drafting legislation on transparency requirements for AI model ingestion of material. The bill would require disclosure when artistic material is used in AI models so artists can protect their copyrights, said Schiff.
Rep. Ted Lieu, D-Calif., expressed interest in AI technology providers compensating artists when their models ingest original material during training. Mason said that as soon as individuals or companies try to commercialize AI-generated material that relies on original work, they should pay the artists.