State Enforcers Analyze AI

Pa. Governor Flags Social Media, Privacy as 'Pressing Challenges' for AGs

PHILADELPHIA -- Pennsylvania Gov. Josh Shapiro (D) challenged state enforcers Tuesday to collaboratively address privacy and social media issues, speaking at a National Association of Attorneys General meeting. North Carolina AG Josh Stein (D) asked an algorithms panel later for suggestions on what states can do amid a rise of AI chatbots like ChatGPT.


“It will be this group, if it harnesses the will to work together again as we did on opioids, that will ultimately come up with the answers to those pressing challenges” of social media and privacy, said Shapiro, who participated in NAAG as Pennsylvania’s AG before he was elected governor Nov. 8. “I don’t oftentimes look to Washington, D.C., for answers because oftentimes they don’t come from there. But I do look to this group, and I hope that you will come together to address those big challenges … to protect our children and address the very serious privacy and security issues that exist in our country.”

Preserve NAAG and its bipartisan work, said Shapiro, apparently responding to reported controversy about the association’s use of public money. “Please don’t let the toxicity of the politics in Washington, D.C., seep into the important work that needs to be done by this group.”

Stein asked how state policymakers can help, after listening to a panel on AI’s possible risks and benefits. “What should we be asking our state legislatures to do to ensure that these are positive things, not harmful, and what should we be doing as enforcers?” the North Carolina AG asked. “Are there rogue AI companies out there that we don’t know about that we should be trying to go after in order to send the signal to the industry that they need to get in line?” If Section 230 of the Communications Decency Act is an impediment, as one panelist suggested, “should we be focusing our energy on trying to get Congress to change that law?” Stein asked.

“Generative AI presents the same problems and the same questions -- but just on a magnified scale -- that we’ve been dealing with since social media was introduced,” said NTIA Senior Adviser-Algorithmic Justice Ellen Goodman earlier in the panel. “Because Section 230 has been there to sort of body block the … natural development of theories of harm and liability and regulations, we kind of don’t have the muscles that we need as a society” or as enforcement authorities or lawyers to “deal with the new harms and the new questions that generative AI is presenting.” As a result, the law is “way behind,” she said.

Section 230 might not cover generative AI since “much of this is not third-party content,” clarified Goodman in response to Stein’s question: “It’d be really interesting to see some litigation … that probes that frontier.”

“We are now in a rapidly changing era of generative AI and generative machine learning,” said University of Pennsylvania professor Michael Kearns: Predictive models trained by machine learning have become huge, complex “neural networks” with “hundreds of billions of parameters.” Kearns has “yet to see any regulation that would tell someone like me what I should do differently,” said the technologist: “Until algorithmic regulation looks more algorithmic,” with specific, quantified definitions for terms like fairness, it won’t have teeth.
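Kearns didn’t say what such a rule would look like. As an illustration only, here is a minimal sketch assuming demographic parity (the gap in positive-outcome rates between groups) as the quantified fairness definition, with a made-up 0.10 cap; neither the metric nor the threshold comes from Kearns or any regulator:

```python
# Illustrative only: one way a "quantified" fairness definition could be
# expressed in code. The metric (demographic parity difference) and the
# 0.10 cap are assumptions for this example, not a regulatory standard.

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels, parallel to predictions
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + pred)
    positive_rates = [p / t for t, p in rates.values()]
    return max(positive_rates) - min(positive_rates)

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_difference(preds, grps)
    print(f"demographic parity difference: {gap:.2f}")
    # A hypothetical rule might cap the gap at 0.10.
    print("within hypothetical cap" if gap <= 0.10 else "exceeds hypothetical cap")
```

The point of the sketch is Kearns’ premise: once “fairness” is pinned to a number and a threshold, compliance becomes something an engineer can actually check.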

State agencies will need legal advice as they acquire AI-based tools, said New Jersey Chief Innovation Officer Beth Noveck. AI may hold many benefits for government, including the ability to sort and deduplicate large amounts of information, she said. For example, the FCC net neutrality rulemaking brought in more than 22 million comments, “6% of which were original,” Noveck said. It would take AI about two seconds to find “which of those comments are actually unique and which are just cut and paste.” State governments can also use AI to answer people’s questions via chat, translate websites from English to other languages or from “legalese” to plain English, she said.
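Noveck didn’t name a specific tool. As an illustration only, a minimal sketch of that kind of deduplication, assuming exact matching on normalized text (production systems would more likely use fuzzy or embedding-based matching), might look like:

```python
# Illustrative sketch of comment deduplication, not any agency's actual
# pipeline: normalize each comment and keep one copy per normalized form.

import re
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase and collapse punctuation/whitespace so trivially edited
    copies of the same form letter map to the same key."""
    return re.sub(r"\W+", " ", text.lower()).strip()


def unique_comments(comments):
    """Return one representative per distinct normalized comment,
    with a count of how many copies were filed."""
    counts = Counter()
    representatives = {}
    for comment in comments:
        key = normalize(comment)
        counts[key] += 1
        representatives.setdefault(key, comment)
    return [(representatives[k], counts[k]) for k in counts]


if __name__ == "__main__":
    filings = [
        "I support net neutrality.",
        "I SUPPORT net   neutrality!",
        "Please preserve the open internet.",
    ]
    for text, n in unique_comments(filings):
        print(f"{n}x  {text}")
```

On a real comment corpus, near-duplicate detection (shingling or MinHash, for example) would also catch lightly edited form letters that exact normalization misses.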

AI can be as biased as humans, but it can also uncover and expose bias, Noveck said. “There are some tremendously powerful applications,” including in health, transportation and democracy “that we need to take account of at the same time as we’re assessing the risks.” Risks may be mitigated by including transparency, accountability and human intervention, said Noveck. Governments shouldn’t substitute AI for human engagement in contexts like community outreach, she added.

AGs should start by trying AI apps themselves, said Noveck. “You have to know what they do and how they work to be able to have meaningful conversations about what’s possible and what’s a tolerable level of risk.”