The Cybersecurity and Infrastructure Security Agency named its first chief artificial intelligence officer Thursday, promoting Lisa Einstein, who has been a senior adviser on AI at the agency for the past year. Einstein also served as executive director of CISA’s Cybersecurity Advisory Committee in 2022.
The private sector is largely responsible for the U.S. maintaining its lead over China in R&D investment, particularly in AI technology, White House Office of Science and Technology Policy Director Arati Prabhakar said Tuesday. Speaking at a Brookings Institution event, she said China is seeing unprecedented increases in R&D spending, but the U.S. remains ahead. She cited the most recent statistics, from 2021, when the U.S. spent $800 billion on R&D across the public and private sectors. The U.S. is spending 3.5% of its gross domestic product on R&D, “which is terrific,” Prabhakar said. She noted the federal government spends about $3 billion to $4 billion annually on AI R&D, which is “pretty modest” compared with the private sector.
The federal government shouldn’t impose immediate restrictions on the “wide availability of open model weights in the largest AI systems,” the NTIA said Tuesday (see 2402210041 and 2404010067). Model weights are the numerical parameters an AI system learns during training; open-weight models make those parameters publicly available to download and modify, while closed models keep them proprietary. NTIA gathered public comment on the benefits and risks of open and closed models in response to President Joe Biden’s executive order on AI. Current evidence isn’t “sufficient to definitively determine either that restrictions on such open-weight models are warranted, or that restrictions will never be appropriate in the future,” NTIA said in its Report on Dual-Use Foundation Models with Widely Available Model Weights. The agency recommended the federal government “actively monitor a portfolio of risks that could arise from dual-use foundation models with widely available model weights and take steps to ensure that the government is prepared to act if heightened risks emerge.” However, NTIA laid out possible restrictions on the technology, including a ban on “wide distribution” of model weights, “controls on the exports of widely available model weights,” a licensing framework for access to models and limits on access to application programming interfaces and web interfaces. NTIA noted that restrictions on widely available model weights “would impede transparency into advanced AI models.” Model weight restrictions could also limit “collaborative efforts to understand and improve AI systems and slow progress in critical areas of research,” the agency said. Open Technology Institute Policy Director Prem Trivedi said in a statement Tuesday that NTIA is correct in recommending the rigorous collection and evaluation of empirical evidence, calling it the right starting point for policymaking on the issue.
Apple signed President Joe Biden’s voluntary commitment to ensure AI develops safely and securely, the White House announced Friday. Apple joins Amazon, Google, Meta, Microsoft, OpenAI, Adobe, IBM, Nvidia and several other companies in supporting the plan. Companies initially signed in July 2023 (see 2307210043). They agreed to conduct internal and external security testing and to share information with industry, government and researchers to ensure products are safe before they’re released to the public.
AI poses potential competition challenges and countries must work together to address them, DOJ, the FTC, the European Commission and the U.K. Competition and Markets Authority said in a joint statement Tuesday. “We are working to share an understanding of the issues as appropriate and are committed to using our respective powers where appropriate,” they said. There are risks companies “may attempt to restrict key inputs for the development of AI technologies” and those “with existing market power in digital markets could entrench or extend that power in adjacent AI markets or across ecosystems,” the entities said. Lack of choice for content creators among buyers “could enable the exercise of monopsony power,” they said: AI also “may be developed or wielded in ways that harm consumers, entrepreneurs, or other market participants.” FCC commissioners are expected to vote at their Aug. 7 open meeting on an NPRM examining consumer protections against AI-generated robocalls (see 2407170055).
The generative AI marketplace is “diverse and vibrant,” and there are no “immediate signs” of competition issues related to market entry, the Computer & Communications Industry Association told DOJ in comments that were due Monday. DOJ requested comments on AI marketplace competition. Previously, the department declined to release AI-related comments publicly (see 2405310039). “There are several new entrants present with diversified business models and products, with more entering the market every week, showing how there are no evident signs of competitive problems,” said CCIA. If competition concerns fall outside the scope of antitrust law in the future, it “would then be appropriate to consider new laws or regulations that focus on addressing real problems that the current framework cannot reach.”
AI was the “hottest topic” on earnings calls that LightShed monitored last quarter, analyst Walter Piecyk said Tuesday. AI was covered on more than 55% of the calls, with mentions up 9% over the previous quarter, “exceeding 500 for the first time,” the firm said: “Big Tech (Amazon, Apple, Google, Meta, Netflix) still accounted for ~40% of mentions.” Inflation was mentioned on only about 15% of calls compared with 20% for the prior three quarters, he said: “There has not been a pick-up in mentions of a potential recession, which was only discussed on two calls, matching the lowest level last seen in 4Q21.”
Principles For Music Creation With AI now has backing from more than 50 music industry organizations, according to a news release. The principles, which urge responsible use of AI in music creation, were initially published by Roland Corp. and Universal Music Group in March. Endorsing organizations include Virgin Music Group, the National Association of Music Merchants, the University of Sydney and music creation software company Landr.
The FTC wants to make “clear” that sharing certain types of sensitive data is “off-limits,” and the agency is paying close attention to AI-driven business models, Consumer Protection Director Samuel Levine said Wednesday. Speaking at the Future of Privacy Forum’s D.C. Privacy Forum, Levine highlighted instances where the FTC has reached settlements with data privacy violators that include prohibitions on sharing certain types of data. He noted five cases in which the FTC banned sharing of health information for advertising, another case banning sharing of browsing data for advertising and at least two other cases in litigation in which the agency wants to ban sharing of sensitive geolocation data. “We have made clear in our words, in our cases, complaints that certain uses of sensitive data can be off-limits,” he said. FTC Chair Lina Khan has made similar remarks in the past (see 2401090081 and 2208290052). Levine said banning those practices will depend on the agency’s three-part FTC Act test for unfairness. Data sharing practices violate the FTC Act if they cause or are likely to cause substantial consumer injury, the injury can’t be reasonably avoided by consumers and the harm isn’t outweighed by “countervailing” benefits to consumers or competition. So much of how “people experience” social media platforms and how their data is handled is driven by behavioral advertising business models, Levine said. Some companies are clear about the business model incentives for AI, while others are “not being as clear,” he said. “It’s not illegal to want to make money. We want that in this country, but we do want to think about how these business models shape development of the technology and contribute to some of the harms we’ve seen.” It makes sense that Levine has a “strong view” there’s a “wide range” of statutory authority for the FTC when it comes to AI-driven data practices, said FPF CEO Jules Polonetsky. The FTC already has a “substantial ability” to enforce against AI-related abuse under its consumer protection regulations, Polonetsky told us. However, hard societal questions surround the technology that only Congress can answer, and that starts with a federal data privacy law, he said.
Lenovo and Cisco Thursday said they are collaborating on AI solutions for businesses. The companies announced a memorandum of understanding “to jointly establish design, engineering, and execution plans for accelerating digital transformation with turnkey solutions that extend world-class networking and purpose-built AI infrastructure solutions from edge to cloud for customers worldwide.”