Intended for 2024?

FCC Approves AI Political Ads NPRM on 3-2 Party Line Vote

On a 3-2 party-line vote, the FCC approved an NPRM seeking comment on requiring disclosures for political ads that use AI-generated content. The proposal, which was approved July 10 on circulation but not released until Thursday, doesn’t specify the timing of an eventual order. Commissioner Brendan Carr and Federal Election Commission Chair Sean Cooksey condemned it as an attempt to interfere with the 2024 election. The FCC declined to comment on the intended timing of a final rule, or the delay in the item’s release.

The agency “can only muddy the waters and thumb the scales by interjecting at the last moment before an election,” Carr wrote in a statement released with the NPRM. “Is the government really worried that voters will find these political ads misleading in the absence of a regulator’s guiding hand? Or is the government worried that voters might find these ads effective?” This isn't “about telling the public what is true and what is false,” wrote Chairwoman Jessica Rosenworcel. “It is about empowering every voter, viewer, and listener to make their own choices.”

The NPRM seeks comment on requiring broadcasters, cable operators, direct broadcast satellite providers and satellite radio licensees to provide on-air disclosures for political ads that use content generated with AI. It also proposes requiring entities to include disclosures in their political files. The NPRM proposes to define AI-generated content as “an image, audio, or video that has been generated using computational technology or other machine-based system that depicts an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.”

Entities would determine if an ad contained AI content by asking advertisers at the time of sale, the NPRM said. The item also seeks comment on the source of the FCC’s authority to require such disclosures, and whether the rules would raise First Amendment concerns.

The NPRM doesn’t specify timing of a final rule, or say whether the item is aimed at the 2024 election. Comments are due 30 days after Federal Register publication, replies 45 days after. A circulated order would likely require weeks for approval, making it unlikely a rule could be ready for the November election. However, Carr said the timing is uncomfortably close, and that the FCC reportedly intended for the policy to be in place for 2024. “We are in the home stretch of a national election,” Carr wrote. “We are so close, in fact, that the comment cycle in this proceeding will still be open in September -- the same month when early voting starts in states across the country.” The FCC declined to respond to Carr’s comment on the intended timing.

“This is a recipe for chaos,” Carr added, noting that the FCC rules wouldn't apply to the same ads when they run online. “Suddenly, Americans will see disclosures for 'AI-generated content' on some screens but not others, for some political ads but not others, with no context for why they see these disclosures or which part of the political advertisement contains AI.” The FCC’s proposed rules “would mire voters in confusion, create a patchwork of inconsistent rules, and encourage monied, partisan interests to weaponize the law for electoral advantage,” Carr wrote. He connected the proposal with what he said is a Democratic National Committee effort to have “the administrative state impose new controls on political speech before voters hit ballot boxes this fall.”

The FEC's Cooksey, a Republican, also said the proposal was aimed at 2024. “Every American should be disturbed that the Democrat-controlled FCC is pushing ahead with its radical plan to change the rules on political ads mere weeks before the general election,” he said in an email. “Not only would these vague rules intrude on the Federal Election Commission’s jurisdiction, but they would sow chaos among political campaigns and confuse voters before they head to the polls.” Democratic FEC Commissioner Ellen Weintraub supports the FCC proposal (see 2406060051).

The FEC has said it planned to act on AI regulation before the election, Rosenworcel pointed out. “I welcome that upcoming announcement,” she said. “With our complementary authorities, the FEC can regulate AI use in online advertisements for federal candidates while the FCC can focus on the areas where the FEC is powerless to act.” The FEC’s authority “over campaigns is limited to federal political candidates and does not extend to independent issue campaigns or State and local elections. These gaping loopholes can be addressed by the FCC,” Rosenworcel said.

Carr and FCC Commissioner Nathan Simington said the proposal is beyond the agency’s authority. “None of the laws cited in the FCC’s proposal vest this agency with the sweeping authority it claims over political speech, and the FCC will receive no deference for its novel interpretation of the Communications Act following the Supreme Court’s decision in Loper Bright v. Raimondo,” Carr said. Similarly, Simington said, “Our authority to accomplish this regulation doesn’t exist.”

Simington and Carr said the lack of clarity in the proposed rules would motivate companies to slap such disclosures on all their ads to avoid accidentally running afoul of the FCC, making it harder to know when an ad used deepfakes. Campaigns might also opt to run fewer ads on broadcast outlets, Simington said. The rules would “provide yet another small reason to shift programming to online and streaming platforms and away from the regulated space of broadcast,” he said. That trend is happening anyway “and I don’t think the occasion warrants additional burden.”

“Anyone seeking to make this an ideological issue has lost their mooring,” Public Citizen co-President Robert Weissman said in a release. Public Citizen supports the FCC and FEC AI proposals, and pointed out that all state laws on AI in political ads have stemmed from bipartisan efforts. “There’s no political benefit to any party, cause, issue or candidate in nontransparent AI ads or deepfakes, no ideology or principle served by permitting deepfakes to go undisclosed. The only winner is chaos.”

“Any meaningful effort to address misinformation” must also focus on large tech platforms, an NAB spokesperson said. “We look forward to continue working with the commission to ensure our viewers and listeners can continue to depend on local stations for news and information they can trust.”