Expect the U.S. Supreme Court to issue a major interpretation of Section 230 as lower courts continue to reach conflicting rulings about social media platforms’ free speech rights, legal experts told us in interviews.
Section 230
TikTok engages in “expressive activity” when its algorithm curates content. Accordingly, the platform can’t claim Section 230 immunity from liability when that content harms users, the Third U.S. Circuit Court of Appeals ruled Tuesday (docket 22-3061). A three-judge panel revived a lawsuit from the mother of a 10-year-old TikTok user who unintentionally hanged herself after watching a “Blackout Challenge” video on the platform. The U.S. District Court for the Eastern District of Pennsylvania had dismissed the case, holding TikTok was immune under Communications Decency Act Section 230. The Third Circuit reversed in part, vacated in part and remanded the case to the district court. Judge Patty Shwartz wrote the opinion, citing U.S. Supreme Court findings about “expressive activity” in Moody v. NetChoice (see 2402270072). SCOTUS found that “expressive activity includes presenting a curated compilation of speech originally created by others.” Shwartz noted the court’s holding that a platform algorithm reflects “editorial judgments” about compiling third-party speech and amounts to an “expressive product” that the First Amendment protects. This protected speech can also be considered “first-party” speech under Section 230, Shwartz said. According to the opinion, 10-year-old Nylah Anderson viewed the Blackout Challenge on her “For You Page,” which TikTok curates for each individual user. Had Anderson found the challenge through TikTok’s search function, the platform could have been viewed as more of a “repository of third-party content than an affirmative promoter of such content,” Shwartz wrote. Tawainna Anderson, the child’s mother, who filed the suit, claims the company knew about the challenge, allowed users to post videos of themselves participating in it and promoted the videos to children via an algorithm. The company didn’t comment.
A state court allowed an Iowa lawsuit against TikTok to proceed; the suit claims the social media company duped parents about children’s access to inappropriate content. The Iowa District Court for Polk County in Des Moines denied TikTok’s motion to dismiss the state’s Jan. 17 petition in a ruling this week. While the court also denied Iowa’s motion for a preliminary injunction, Iowa Attorney General Brenna Bird (R) said in a Wednesday statement that the decision is “a big victory in our ongoing battle to defend Iowa’s children and parents against the Chinese Communist Party running TikTok. Parents deserve to know the truth about the severity and frequency of dangerous videos TikTok recommends to kids on the app.” Bird claimed TikTok violated the Iowa Consumer Fraud Act through misrepresentations, deceptions, false promises and unfair practices, which allowed it to get a 12+ rating in the Apple App Store despite containing content inappropriate for kids aged 13-17. “Considering the petition as a whole, the State has submitted a cognizable claim under the CFA,” wrote Judge Jeffrey Farrell. TikTok doesn’t get immunity under Communications Decency Act Section 230 because the state’s petition “addresses only the age ratings, not the content created by third parties,” the judge added. However, Farrell declined to preliminarily enjoin TikTok because the state hasn’t produced evidence that any Iowan viewed and was harmed by videos with offensive language or topics. The judge said, “The State presented no evidence of any form to show irreparable harm.” TikTok didn’t comment Wednesday.
Democratic National Convention delegates were expected to vote Monday night on the Democratic National Committee’s 2024 platform, which includes a pledge that the party will “keep fighting to reinstate” the FCC’s lapsed affordable connectivity program. The draft platform repeatedly references President Joe Biden and his now-ended reelection bid because the DNC Platform Committee adopted it July 16, before the incumbent stepped aside in favor of new nominee Vice President Kamala Harris, the committee said when it released the document Sunday night. “23 million households received free or monthly discounts” via ACP, “saving $30 to $75 per month on high-speed broadband through the largest internet affordability program in history,” the Democrats’ proposed platform said: The program lapsed “because Republicans refused to act.” ACP’s supporters are tempering their expectations that Congress will act to restore the subsidy this year, despite the Senate Commerce Committee advancing a surprise amendment July 31 to the Proper Leadership to Align Networks for Broadband Act (S-2238) that would allocate $7 billion to the program for FY 2024 (see 2408090041). The DNC platform references the Biden administration’s implementation of the 2021 Infrastructure Investment and Jobs Act, which included $65 billion for connectivity. “We’re bringing affordable, reliable, high-speed internet to every American household,” the platform said. “But a full 45 million of us still live in areas where there is no high-speed internet. Democrats are closing that divide.” Democrats are also “determined to strengthen data privacy” through passage of a revamped “Consumer Privacy Bill of Rights” and an update of the Electronic Communications Privacy Act “to protect personal electronic information and safeguard location information.” The document notes Democrats’ continued push to “fundamentally reform” Communications Decency Act Section 230 and “ensure that platforms take responsibility for the content they share.” It also mentions Democrats’ interest in “promoting interoperability between tech services and platforms, allowing users to control and transfer their data, and preventing large platforms from giving their own products and services an unfair advantage in the marketplace.”
The FTC unanimously finalized a rule that will allow it to seek civil penalties against companies that share fake online reviews, the agency announced Wednesday. Approved 5-0, the rule will help promote “fair, honest, and competitive” markets, Chair Lina Khan said. Amazon, the Computer & Communications Industry Association and the U.S. Chamber of Commerce previously warned the FTC about First Amendment and Section 230 risks associated with the draft proposal (see 2310030064). The rule takes effect 60 days after Federal Register publication. It allows the agency to seek civil penalties via its unfair and deceptive practices authority under the FTC Act. It bans the sale and purchase of fake social media followers and views and prohibits fake, AI-generated testimonials. The rule includes transparency requirements for reviews written by people with material connections to the businesses they review. Moreover, it bans companies from misrepresenting the independence of reviews. Businesses are also banned from “using unfounded or groundless legal threats, physical threats, intimidation, or certain false public accusations to prevent or remove a negative consumer review,” the agency said.
New Mexico Attorney General Raul Torrez (D) is working with state lawmakers on legislation aimed at holding social media platforms more accountable for disseminating deepfake porn, he told us Wednesday.
Companies like Meta intentionally target children and must be held more accountable for social media-related harm, attorneys general from New Mexico and Virginia said Wednesday. New Mexico AG Raul Torrez (D) and Virginia AG Jason Miyares (R) discussed potential solutions to online child exploitation during the Coalition to End Sexual Exploitation Global Summit that the National Center on Sexual Exploitation and Phase Alliance hosted. Torrez said the tech industry received an “extraordinary grant” through Communications Decency Act Section 230, which Congress passed in 1996 to promote internet innovation. Section 230 has been a hurdle to holding companies accountable, even when they knowingly host illegal activity that’s harmful to children, Torrez added. Miyares said AGs won't wait for legislators in Washington to solve the problem, noting state enforcers' success in the courts. Tech companies shouldn’t be able to use Section 230 as a shield from liability while also acting as publishers and removing political content they disfavor, Miyares added. Torrez acknowledged he and Miyares disagree on many things, but they agree on the need to increase liability and accountability of tech platforms when it comes to children.
Vermont’s lawsuit alleging Meta designed Instagram with the intention of addicting young users can proceed, a superior court judge ruled last week (docket 23-CV-4453). Superior Court Judge Helen Toor denied Meta’s motion to dismiss, saying the company’s First Amendment and Communications Decency Act Section 230 arguments didn’t persuade her. Vermont alleges Meta violated the Vermont Consumer Protection Act by intentionally seeking to addict young users through methods it knows are harmful to mental and physical health. The company misrepresented its intentions and the harm it’s “knowingly causing,” the state argued. Vermont is seeking injunctive relief and civil damages. Meta argued in its motion to dismiss that the state lacks jurisdiction, that the First Amendment and Section 230 bar the claims, and that state enforcers failed to offer a valid claim under state law. The court heard oral argument July 3. The state noted more than 40,000 Vermont teens use Instagram and about 30,000 use it daily, and it said the company uses targeted advertising and other features to maximize the amount of time teens spend on the app. Toor said the First Amendment protects companies’ speech, but it doesn’t protect against allegations that a company is manipulating younger users. She noted Section 230 protects a company against liability for hosting third-party content, but it doesn’t shield the company from liability when it engages in illegal conduct. Vermont isn’t seeking to hold Meta liable for content it hosts, she said: “Instead, it seeks to hold the company liable for intentionally leading Young Users to spend too much time on-line. Whether they are watching porn or puppies, the claim is that they are harmed by the time spent, not by what they are seeing.” Attorney General Charity Clark filed the lawsuit in October.
FCC Commissioner Brendan Carr’s Project 2025 ties likely won’t damage his chances of becoming the agency's chair if Donald Trump is elected president in November, even though the Trump campaign has distanced itself from the project (see 2407110054). Commissioner Nathan Simington is listed as a project adviser but didn’t write a chapter, as Carr did, or play a more public role.
A bipartisan group of senators on Wednesday formally filed legislation that would establish liability for sharing AI-generated content without the original creator’s consent. Sens. Chris Coons, D-Del.; Marsha Blackburn, R-Tenn.; Amy Klobuchar, D-Minn.; and Thom Tillis, R-N.C., introduced the Nurture Originals, Foster Art and Keep Entertainment Safe (No Fakes) Act (see 2310120036). The measure would hold individuals, companies and platforms liable for creating and hosting such content. “Generative AI can be used as a tool to foster creativity, but that can’t come at the expense of the unauthorized exploitation of anyone’s voice or likeness,” Coons said in a statement. The Computer & Communications Industry Association said the bill is “well-intentioned,” but as written it would “undermine Section 230, place limits on freedom of expression, and shrink fair use.” The bill lacks provisions protecting fair use and free expression, said Brian McMillan, vice president-federal affairs: “We understand the risks of false information that appears real, as our members deploy many algorithmic tools to identify and respond to deepfakes. This legislation emphasizes liability over support for these efforts.”