House Democrats Reinforce Backing for FCC's AI Political Ad Disclosures NPRM but Want More
House Administration Committee ranking member Joe Morelle of New York, Communications Subcommittee ranking member Doris Matsui of California and other Democrats voiced continued support Wednesday for FCC Chairwoman Jessica Rosenworcel’s embattled AI political ad disclosures NPRM (see 2407250046). However, they suggested the agency should take further steps if Congress can agree on relevant legislation. Congressional Republicans have repeatedly criticized FCC action on the matter so near the November elections, including during a July House Communications agency oversight hearing (see 2407090049).
Rosenworcel strongly defended the AI NPRM during a Wednesday Capitol Hill roundtable with Morelle, who hosted the event, and other House Democrats, saying the “best place to start” addressing the technology’s impact on advertising is to institute “simple” disclosure rules. Democratic Federal Election Commission Vice Chair Ellen Weintraub noted the FEC’s own “narrow” rulemaking plans but appeared to support the FCC NPRM as a “first essential step” toward AI ad transparency.
“We think the public has the right to know” whether an ad includes AI-generated content, and putting that information “in the hands of the viewer” and U.S. voters would fall squarely within the FCC’s existing mandate under the Communications Act, Rosenworcel said. Critics on and off Capitol Hill who believe the FCC lacks authority to require those disclosures are “wrong,” she said. Republican FCC Commissioners Brendan Carr and Nathan Simington both oppose the NPRM.
Rosenworcel pointed to the FCC’s unanimous August vote to advance an NPRM aimed at reducing unwanted AI robocalls (see 2408070037) as proof the commission and other federal entities can reach a bipartisan consensus on AI regulation. She also pointed to the FCC’s work with New Hampshire Attorney General John Formella (R) in prosecuting political consultant Steve Kramer, broadband provider Lingo Telecom and others for distributing robocalls to likely Democratic voters. Those calls included deepfake simulations of President Joe Biden's voice and were sent just two days before the state’s Jan. 23 presidential primary (see 2403150034).
Morelle believes “simple disclosure” rules like those the FCC is proposing could have the “effect of hopefully policing in some degree” the prevalence of AI content in political advertising. But he and other House Democrats signaled interest in the FCC going further on AI regulation if Congress granted it additional authority. Morelle pointed to his AI Transparency in Elections Act (HR-8668) as one potential vehicle. Disclosure rules may not be “enough” over time because generative AI has already “become embedded to a certain degree” so quickly, Matsui said.
Rep. Hillary Scholten of Michigan suggested the FCC or other agencies could bar certain “categories” of AI content while only requiring disclosure of others. “If you want me” as FCC chair “deciding what’s real and what’s fake … you’ve got to be comfortable” with a Republican FCC chairman during a potential second Trump administration “doing that too,” Rosenworcel said. “I’m nervous” about giving that authority to federal actors generally, she said, but that could change if initial disclosure rules make clear which classes of AI content “are beyond the pale.” Morelle cautioned that it would be “hard to figure out” which classes of AI content Congress should task the FCC with prohibiting. Scholten suggested lawmakers could come up with such rules “over time.”