Amazon, Meta, Google, TikTok and other companies should change their data “surveillance” practices to improve user privacy, the FTC said Thursday, concluding a probe the Trump administration started. The FTC issued a staff report with recommendations for nine companies that received orders in December 2020 (see 2012140054). Republican commissioners said in statements that some of the recommendations exceed the FTC’s authority. The report details how companies “harvest an enormous amount” of data and monetize it for billions of dollars annually, Chair Lina Khan said. “These surveillance practices can endanger people’s privacy, threaten their freedoms, and expose them to a host of harms.” The agency issued Section 6(b) orders to Amazon, Facebook, YouTube, X, Snap, ByteDance, Discord, Reddit and WhatsApp. Staff recommended data minimization practices, targeted advertising limits and more stringent restrictions for children. The commission voted 5-0 to issue the report, but Commissioners Andrew Ferguson and Melissa Holyoak dissented in part. Ferguson argued that some of the FTC’s recommended actions for companies exceed agency authority: “We are not moral philosophers, business ethicists, or social commentators. ... [A]s Beltway bureaucrats, our opinion on these matters is probably worth less than the average American’s.” Some of the recommendations are “thinly-veiled threats,” he said. Ferguson cited the recommendation that companies not willfully ignore user age, saying compliance with it won’t “help companies avoid liability under” the Children’s Online Privacy Protection Act. Holyoak said some of the agency’s recommendations could chill online speech. For example, a company that follows recommendations to redesign algorithms around classes the agency deems “protected” could undermine the speech rights of certain populations. The report “fails to robustly explore the full consequences of its conclusions and recommendations,” she added. Khan in her statement denied that the report “somehow endorses or encourages the platforms to disfavor certain viewpoints.” The report directly states that it doesn’t “address or endorse any attempt to censor or moderate content based on political views,” said Khan.
A Texas anti-porn law violates the First Amendment by requiring websites to verify users’ ages, the Free Speech Coalition said at the U.S. Supreme Court. The American Civil Liberties Union filed a brief Monday on behalf of the FSC, a pornography industry trade association. The U.S. District Court in Austin agreed in August 2023 to block the law (HB-1181), one day before its Sept. 1 effective date. U.S. District Court Judge David Ezra found the law likely violates the First Amendment rights of adults trying to access constitutionally protected speech. But the 5th U.S. Circuit Court of Appeals partially vacated the injunction, finding the age-verification requirements constitutional. SCOTUS in July agreed to hear the case, Free Speech Coalition v. Paxton (docket 23-1122) (see 2407020033). “Under strict scrutiny, this is a straightforward case,” said the coalition’s brief: The Texas law “is both overinclusive and underinclusive, and it fails to pursue its objective with the means least restrictive of adults’ protected speech.” The coalition added, “Restoring the preliminary injunction … would not undermine genuine efforts to limit minors’ access to sexually inappropriate material.” Adults “have a First Amendment right to read about sexual health, see R-rated movies, watch porn, and otherwise access information about sex if they want to,” said Vera Eidelman, ACLU Speech, Privacy and Technology Project staff attorney. “They should be allowed to exercise that right as they see fit, without having to worry about exposing their personal identifying information in the process.”
Instagram launched Teen Accounts, with built-in protections that limit who can contact teen users and what content they see, it said Tuesday. People younger than 16 will need parental permission to make any of the built-in protections less strict. "This new experience is designed to better support parents, and give them peace of mind that their teens are safe with the right protections in place," Instagram said. Teen Accounts are private by default, so Instagram users who don't follow a teen can't see that teen's content or interact with it, the company said. Only people a teen follows or is already connected to can message a Teen Accounts user, it added. Instagram will automatically place teens in the most restrictive setting of its sensitive-content control. After 60 minutes of daily use, teens will receive a notice telling them to leave the app.
Despite numerous competing social media platforms, X has maintained its user base because of its personalized algorithms and the way people engage with one another on the platform, Midia analyst Hanna Kahlert blogged Tuesday. However, that changed with X's recent ban in Brazil, as Bluesky has seen a surge in user downloads and usage, she said. It's unknown whether that translates into consistent, long-term usage. But with Bluesky also seeing growth in Portugal and in South American nations such as Chile and Argentina, it appears a competitor can finally draw users from X, she said.
Forty-two attorneys general supported U.S. Surgeon General Vivek Murthy’s recommendation that social media carry warnings like the labels on cigarette packages. Murthy suggested last June that social media companies display warnings about mental health risks associated with their platforms (see 2406170059). The 42 bipartisan AGs, writing Monday under National Association of Attorneys General letterhead and representing states including California, New York and Indiana, supported the idea in a letter to House Speaker Mike Johnson, R-La., and Senate leaders Chuck Schumer, D-N.Y., and Mitch McConnell, R-Ky. “Young people are facing a mental health crisis, which is fueled in large part by social media,” wrote the AGs from 39 states, the District of Columbia, American Samoa and the U.S. Virgin Islands. “This generational harm demands immediate action. By mandating a surgeon general’s warning on algorithm-driven social media platforms, Congress can help abate this growing crisis and protect future generations of Americans.” New York AG Letitia James hopes “warning labels will be implemented swiftly to raise more awareness about this issue,” the Democrat said in a news release Tuesday. Arkansas AG Tim Griffin (R) said, “A Surgeon General’s warning on social media platforms isn’t a cure-all, but it’s a step in the right direction toward keeping our kids safe in digital spaces.”
Snapchat’s design and algorithms help sexual predators target children, a lawsuit that New Mexico Attorney General Raul Torrez (D) filed last week argues. The complaint was filed at the New Mexico First Judicial District Court in Santa Fe County (case D-101-CV-202402131). “Our undercover investigation revealed that Snapchat’s harmful design features create an environment where predators can easily target children through sextortion schemes and other forms of sexual abuse,” said Torrez. “Snap has misled users into believing that photos and videos sent on their platform will disappear, but predators can permanently capture this content and they have created a virtual yearbook of child sexual images that are traded, sold, and stored indefinitely.” Snap is reviewing the complaint “carefully, and will respond to these claims in court,” the company said in a statement. “We have been working diligently to find, remove and report bad actors, educate our community, and give teens, as well as parents and guardians, tools to help them be safe online. We understand that online threats continue to evolve and we will continue to work diligently to address these critical issues.”
Meeting the demand created by AI-driven data center growth is critical to U.S. “competition, leadership and security,” NTIA Administrator Alan Davidson said Wednesday. NTIA issued a request for comment to better understand data center issues related to the power grid, supply chain, workforce development and cybersecurity. Demand for data centers is growing rapidly alongside demand for machine learning systems, he said: There are more than 5,000 data centers in the U.S., and demand is projected to grow 9% annually through 2030. AI and machine learning systems will have “enormous benefits” for the economy, he said. “Right now, we know we do not have enough data centers in the U.S.,” and NTIA wants to “chart a path forward to meet that demand in a sustainable way, and realize the full potential of the AI revolution to the benefit of all.”
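Treated purely as arithmetic, the cited figures compound quickly; the sketch below is a hypothetical illustration of 9% annual growth applied to the 5,000-facility base, not an NTIA projection of how many data centers will exist.

```python
# Hypothetical compounding illustration only: the 9% figure in the item above
# describes demand growth, and applying it to the facility count is an assumption.

base = 5_000          # data centers in the U.S. today, per the item above
annual_growth = 0.09  # projected 9% annual growth through 2030
years = 6             # e.g., 2024 through 2030

projected = base * (1 + annual_growth) ** years
print(round(projected))  # roughly 8,385
```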
X agreed to permanently stop using personal data in public posts of EU and European Economic Area users to train its AI tool, Grok, the Irish Data Protection Commission (DPC) said Wednesday. Accordingly, the Irish High Court dismissed a proceeding against the social media site, the DPC said (see 2408080020). The regulator is now more generally "addressing issues arising from the use of personal data in AI models across industry" and asked the European Data Protection Board for an opinion under the EU general data protection regulation to "trigger discussion" on several core issues.
Texas’ social media age-restriction law likely violates the First Amendment, a federal judge ruled Friday, partially blocking the measure and marking a victory for the tech industry (see 2408230014). The Computer & Communications Industry Association and NetChoice sued to block HB-18, which was set to take effect Sunday. The trade associations, which requested a preliminary injunction, met their burden in showing HB-18’s speech restrictions “fail strict scrutiny, are unconstitutionally vague, and are preempted by Section 230,” wrote Judge Robert Pitman of the U.S. District Court for the Western District of Texas (docket 1:24-cv-00849). The decision enjoins HB-18’s monitoring and filtering provisions, but Pitman found the law’s remaining provisions can take effect because they don’t “unconstitutionally regulate a meaningful amount of constitutionally protected speech.” The court “recognized that this Texas law restricts protected speech in a way that likely violates the First Amendment and that it deserves the most stringent constitutional scrutiny,” said CCIA Chief of Staff Stephanie Joyce. “This ruling will ensure that internet users can continue accessing information and content online while we further prove that this law is unlawful and unconstitutional.”
Canada’s digital services tax (DST) appears to violate the country’s trade commitments with the U.S., the Office of the U.S. Trade Representative said Friday, requesting a review under the United States-Mexico-Canada Agreement. The DST, reflected in a budget passed in June, seems discriminatory toward U.S. companies and is inconsistent with chapters 14 and 15 of the USMCA, the USTR said. The DST imposes a 3% tax on “the sum of revenues deemed connected to Canada from online marketplaces, online targeted advertising, social media platforms, and user data,” according to the filing. It applies to companies with global revenue of €750 million or more and Canadian digital services revenue of more than CAD 20 million. The measure violates Canada’s commitments under the USMCA, which requires equal treatment for U.S. and Canadian services, service suppliers and investors, said USTR. The Computer & Communications Industry Association welcomed the filing, citing Canadian Parliamentary Budget Office figures showing American companies will be responsible for the “vast bulk” of the $3 billion estimated for the first payment in June. “We expect that under USMCA, the facts and the law will demonstrate that Canada should remove this measure expeditiously. And, absent compliance, we look to USTR to follow through on its pledge to use all tools available to remedy this trade-distortive measure,” said CCIA Vice President-Digital Trade Jonathan McHale. CCIA, CTA and TechNet joined more than 10 associations in writing a letter to USTR in June opposing the DST.
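As a rough sketch of how the rate and thresholds described above fit together, using hypothetical revenue figures: the filing summary doesn’t specify whether the 3% applies to all in-scope Canadian revenue or only the amount above the CAD 20 million threshold, so the example below assumes the former.

```python
# Hypothetical arithmetic sketch of the DST described above; figures are
# illustrative, not from the USTR filing. Assumes the 3% rate applies to all
# revenue "deemed connected to Canada" once both thresholds are met.

GLOBAL_REVENUE_THRESHOLD_EUR = 750_000_000   # €750 million global revenue
CANADIAN_REVENUE_THRESHOLD_CAD = 20_000_000  # CAD 20 million in-scope revenue
DST_RATE = 0.03                              # 3% rate

def dst_liability(global_revenue_eur: float, canadian_digital_revenue_cad: float) -> float:
    """Return the estimated DST owed in CAD, or 0 if the company is out of scope."""
    in_scope = (
        global_revenue_eur >= GLOBAL_REVENUE_THRESHOLD_EUR
        and canadian_digital_revenue_cad > CANADIAN_REVENUE_THRESHOLD_CAD
    )
    return DST_RATE * canadian_digital_revenue_cad if in_scope else 0.0

# Example: a firm with €2 billion global revenue and CAD 500 million of
# in-scope Canadian digital services revenue would owe roughly CAD 15 million.
print(dst_liability(2_000_000_000, 500_000_000))  # 15000000.0
```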