The authoritative news source for communications regulation
‘Light Touch Is Best’

Conservatives Say Heavy AI Rules Will Stifle Speech and Innovation

Federal and state legislators should take a light-touch regulatory approach to AI, given unsettled questions about free speech and the technology’s innovation potential, a Trump-appointed trade judge, a religious group and tech-minded scholars said Tuesday.


Cato Institute fellow Jennifer Huddleston said during a Federalist Society panel that legislators are considering reactionary AI proposals that could end up looking as absurd as calls to ban the camera were in the early 1900s. Prohibiting the use of AI in election campaign content, for example, could eliminate useful political parody, she said: It’s impossible to predict all future applications of AI, and overly burdensome restrictions could preempt new art forms and other beneficial uses of the technology.

The government’s decision to allow the tech industry freedom to innovate through Communications Decency Act Section 230 helped spur the growth of companies like Microsoft, Google and Apple, said U.S. Court of International Trade Judge Stephen Vaden, a Trump appointee. He urged legislators to remember the “birth” of the internet when regulating AI: “I strongly think that what was true 30 years ago remains true today. A light touch is best.”

While Congress has done “next to nothing” on AI regulation, executive branch agencies want to prove they can handle enforcement using existing statutes, Vaden argued.

A DOJ prosecutor recently said current laws are sufficient to prosecute child sexual abuse material (CSAM) crimes, including AI-generated deepfakes (see 2410020051). Other agencies are creatively applying existing authorities, Vaden said, mainly in response to President Joe Biden’s AI executive order (see 2310300056), which includes directives for the Commerce Department, the FCC, the FTC and other agencies. Vaden noted the judicial branch has also been instrumental in shaping the conversation: As of Oct. 1, 24 federal courts had issued specific orders on AI.

Alliance Defending Freedom, a Christian group focused on religious liberty, argued for “judicial modesty” when setting AI precedent. Jeremy Tedesco, an attorney with the organization, said the alliance supports the recommendation from U.S. Supreme Court Justice Samuel Alito in the NetChoice case (see 2407010053). Alito, Tedesco said, rightly noted that even researchers and developers aren’t sure why AI technology makes certain decisions. The jurist disagreed with the majority’s opinion that platforms create “expressive products” through content moderation and therefore are entitled to First Amendment protection.

University of Notre Dame professor Paolo Carozza said the tech industry can’t have it both ways when it comes to free speech rights and liability protection under Section 230: While defending its First Amendment right to moderate content, it’s also absolving itself of editorial responsibility using Section 230. These contradictory positions played out in SCOTUS’ Taamneh and Gonzalez cases (see 2409030044) and the recent Section 230-related 3rd U.S. Circuit Court of Appeals decision against TikTok, he said (see 2408280014). Tech associations told the 3rd Circuit Tuesday that platforms can, in fact, have it both ways (see 2410080043). The implications for the rule of law are “profound” when humans are distanced from responsibility for the real-world impacts of algorithms they create, Carozza said.

Alliance Defending Freedom attorney Ryan Bangert conceded there have been attempts to “exempt the entire enterprise of artificial intelligence” under the First Amendment. However, the amendment shouldn’t be used as a “blanket” to prevent all AI regulation, he said.

Abundance Institute AI Policy Head Neil Chilson said state and federal legislators have tried regulating AI using broad terms that could “sweep in most of computing.” He noted states have introduced more than 700 AI-related bills, which could have “enormous” consequences for a rapidly evolving technology.