Rosenworcel Circulates NPRM on AI Political Ad Disclosures
FCC Chairwoman Jessica Rosenworcel circulated a draft NPRM Wednesday that would seek comment on requiring disclosures when a political ad on TV or radio contains AI-generated content. The item proposes requiring on-air disclosures and written disclosures in broadcasters’ online public files, and it would also require disclosures by cable operators and satellite TV providers, an FCC news release said. “As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said in the release.
The draft NPRM is on circulation and isn’t associated with an open meeting agenda, so draft language wasn’t made available. Wednesday’s release said the draft NPRM would seek comment on applying the disclosure rules to candidate and issue ads, and also on a specific definition of AI-generated content. “The proposed FCC proceeding does NOT propose any prohibition of such content, only the disclosure of any AI-generated content within political ads,” the release said. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue,” said Rosenworcel. NCTA didn’t comment.
“The use of AI is expected to play a substantial role in the creation of political ads in 2024 and beyond, but the use of AI-generated content in political ads also creates a potential for providing deceptive information to voters,” the FCC release said. There is “a clear public interest obligation for Commission licensees, regulatees, and permittees to protect the public from false, misleading, or deceptive programming and to promote an informed public,” the release said. The Bipartisan Campaign Reform Act gives the agency authority to make such requirements for political advertising, the FCC said. The release describes the proposal as the agency’s “first step” in a “new AI transparency effort.”
AI in political ads is a “timely issue,” and broadcasters go to great lengths to ensure accuracy in the information they air, an NAB spokesperson said in response to the FCC announcement. “We look forward to working with the Commission to ensure our viewers and listeners can continue to depend on local stations for news and information they can trust.”
Broadcast attorneys told us broadcasters’ reaction to the proposal will depend on the details of what it requires of them. Stations don’t have a reliable way of knowing when AI is or isn’t used in an ad, said Wilkinson Barker broadcast attorney David Oxenford in an interview. Accordingly, broadcasters shouldn’t be required to investigate its potential use, he said. With “fake news” a common refrain during political campaigns, it could “seemingly become routine for candidates, every time an attack ad is run, to claim that the ad should be pulled as it contains AI, leaving broadcasters with the impossible task of determining whether to pull it from the airwaves,” Oxenford wrote in a blog post.
An FCC AI disclosure requirement would be more palatable to broadcasters if it could be satisfied with a relatively simple certification obtained from the advertiser, Oxenford said. Broadcasters would also likely be concerned that AI ad rules could open another avenue for FCC enforcement against them. Commissioner Nathan Simington warned broadcasters in an April speech at the NAB Show that the agency could come after them over AI ads.
A patchwork of state laws already governs AI-generated content in political ads, said Pillsbury broadcast attorney Scott Flick in an interview. Many of them don’t require that broadcasters police ad content, he said. “Every one is worded differently,” Oxenford said. A federal rule from Congress or the Federal Election Commission that puts the burden on advertisers would be a more efficient way to stop deepfake political ads, he said.