Blumenthal, Microsoft President Agreed on Need for AI Licensing
Sen. Richard Blumenthal, D-Conn., agreed with Microsoft President Brad Smith Tuesday on the need for a federal agency to license high-risk AI systems.
Smith told the Senate Privacy Subcommittee that Congress should pass a bill creating a licensing regime for AI systems in high-risk scenarios. There should be a central agency to oversee the regulation and coordinate with other agencies that have authority over AI technology, he said. The FCC and the FTC are among the agencies exploring how their existing statutory authority applies to AI.
AI will be used in every sector, and every agency from the FTC to the FDA will be relevant, said Blumenthal, subcommittee chairman. The idea is to prevent harms through a licensing regime focused on risk, he said. Blumenthal said the subcommittee’s ultimate goal is to produce legislation as AI enters a “new era.” There needs to be regulation that allows for innovation with enforceable guardrails, he said.
The hearing took place at the same time the Senate Consumer Protection Subcommittee hosted a hearing on AI legislation. Ranking member Marsha Blackburn, R-Tenn., said this is “AI week on the Hill,” noting the AI Insight Forum that Senate Majority Leader Chuck Schumer, D-N.Y., is hosting Wednesday (see 2309060062).
Senate Commerce Committee Chair Maria Cantwell, D-Wash., told reporters while leaving the hearing that she has had “several conversations” with Schumer about AI. “I think we all want to see people thinking about this issue in the right way and ... all the other stuff that we have to get done,” she said. “What is it that we need to do right now versus two years from now or some other time?” She compared it to negotiations on chips legislation, which covered many sectors of the economy.
Sen. Todd Young, R-Ind., who worked closely with Schumer on chips legislation and who is part of his AI working group, said he has heard from experts that existing laws can address most but not all issues related to AI. Young said individual agencies and the White House need better tech expertise to understand how to apply existing statutes and to flag when new regulatory authorities are needed. Carnegie Mellon University professor Ramayya Krishnan told Young that high-risk AI uses include national security, autonomous vehicles, healthcare, recruiting and housing.
Sen. Amy Klobuchar, D-Minn., announced at both hearings that she has introduced bipartisan legislation to ban the use of deceptive, AI-generated content in political ads. She introduced the bill with Sens. Josh Hawley, R-Mo.; Chris Coons, D-Del.; and Susan Collins, R-Maine. It would work in concert with watermarking systems to identify AI-generated content and would include exemptions for satire, she said. She noted she’s working on a separate bill with Sen. John Thune, R-S.D., that would require companies to identify risks and deploy mitigation measures before releasing AI products on the market.
Congress should pass a bill requiring companies that deploy AI systems in high-risk situations to conduct impact assessments and maintain internal risk management programs, testified BSA | The Software Alliance CEO Victoria Espinel. It’s “very important” to regulate high-risk systems with impact assessments, she said.
Too many questions remain about what rights people have when it comes to how their data is collected and shared, which is why Congress needs to pass privacy legislation, said Senate Consumer Protection Subcommittee Chairman John Hickenlooper, D-Colo. Blackburn agreed passing federal privacy legislation should be the first step. Strong privacy protections will help prevent AI-related data abuse, said Cantwell in her opening remarks. Sen. Jerry Moran, R-Kan., said it’s “annoying” the committee is discussing an item of “huge significance” like AI when Congress has been “unsuccessful” in reaching conclusions on data privacy legislation. Rob Strayer, Information Technology Industry Council executive vice president-policy, told the panel that passing a comprehensive privacy law is “absolutely critical,” but it doesn’t necessarily need to come before AI regulation. Both are needed, he said.