The Irish Data Protection Commission is investigating whether Google performed a required assessment before it started processing personal data of EU and European Economic Area (EEA) citizens for its AI model Pathways Language Model 2 (PaLM 2). Under the country's data protection act, assessments can be required to ensure that people's rights are protected when data processing will likely result in a high risk, the DPC said. The cross-border inquiry is part of a wider effort by the DPC and its EU counterparts to regulate personal data processing as AI models and systems develop, it said. A Google spokesperson, in an email, said the company takes "seriously our obligations under the [EU general data protection regulation] and will work constructively with the DPC to answer their questions." Earlier this month, the privacy watchdog announced that X agreed to permanently stop using personal data in public posts of EU/EEA users to train its AI tool Grok (see 2409040001).
The U.S. is among the first 11 signers of a Council of Europe treaty on AI, said the 46-member organization that promotes democracy, the rule of law and human rights. The agreement is a legal framework covering the entire lifecycle of AI systems and applies to public authorities and private actors, the CoE said. Among other things, it requires signers to ensure that AI systems comply with fundamental principles such as respect for privacy and personal data protection. It requires risk and impact management assessments to ensure that AI systems protect rights, along with prevention and mitigation measures. Moreover, it gives authorities power to introduce bans on some AI applications. Signers must also ensure that remedies, safeguards and procedures are in place for challenging AI systems. The treaty will take effect three months after the date on which five signatories, including at least three CoE members, have ratified it. Signers so far comprise seven CoE members, among them the U.K.; two nonmembers (the U.S. and Israel); and one international organization (the EU).
The Cybersecurity and Infrastructure Security Agency named its first chief artificial intelligence officer Thursday. The agency promoted Lisa Einstein, a senior adviser on AI for the past year. In addition, Einstein served as CISA Cybersecurity Advisory Committee executive director in 2022.
The private sector is largely responsible for the U.S. maintaining a lead over China in R&D investment, particularly in AI technology, White House Office of Science and Technology Policy Director Arati Prabhakar said Tuesday. Speaking at a Brookings Institution event, she said China is seeing unprecedented increases in R&D spending, but the U.S. remains ahead. She cited the most recent statistics, from 2021, a year in which the U.S. spent $800 billion on R&D across the public and private sectors. The U.S. is spending 3.5% of its gross domestic product on R&D, “which is terrific,” Prabhakar said. She noted the federal government spends about $3 billion to $4 billion on AI R&D annually, which is “pretty modest” compared with the private sector.
The federal government shouldn’t impose immediate restrictions on the “wide availability of open model weights in the largest AI systems,” the NTIA said Tuesday (see 2402210041 and 2404010067). Model weights are the core numeric parameters of AI systems that machine learning produces. Open-weight models make those weights publicly available, while closed models keep them private. NTIA gathered public comment on the benefits and risks of open and closed models in response to President Joe Biden’s executive order on AI. Current evidence isn’t “sufficient to definitively determine either that restrictions on such open-weight models are warranted, or that restrictions will never be appropriate in the future,” NTIA said in its Report on Dual-Use Foundation Models with Widely Available Model Weights. The agency recommended the federal government “actively monitor a portfolio of risks that could arise from dual-use foundation models with widely available model weights and take steps to ensure that the government is prepared to act if heightened risks emerge.” However, NTIA laid out possible restrictions for the technology, including a ban on “wide distribution” of model weights, “controls on the exports of widely available model weights,” a licensing framework for access to models and limits on access to application programming interfaces and web interfaces. NTIA noted that restrictions on open public model weights “would impede transparency into advanced AI models.” Model weight restrictions could limit “collaborative efforts to understand and improve AI systems and slow progress in critical areas of research,” the agency said. Open Technology Institute Policy Director Prem Trivedi said in a statement Tuesday that NTIA is correct in recommending the rigorous collection and evaluation of empirical evidence, calling it the right starting point for policymaking on the issue.
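For readers unfamiliar with the term, the open-versus-closed distinction can be illustrated with a toy sketch (hypothetical and not drawn from the NTIA report; the function and values are invented for illustration). A model's "weights" are just learned numbers; publishing them lets anyone run, inspect or fine-tune the model, while a closed model exposes only its outputs:

```python
def predict(x, weights):
    """Apply a tiny one-feature linear model: its entire behavior
    is determined by two learned numbers (the 'weights')."""
    w, b = weights
    return w * x + b

# Hypothetical "open-weight" release: the numbers themselves are public,
# so anyone can reproduce, audit or modify the model.
published_weights = (2.0, 1.0)
print(predict(3.0, published_weights))  # prints 7.0
```

A closed model, by contrast, would let users call something like `predict` through an API without ever revealing `published_weights`.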
Apple signed President Joe Biden’s voluntary commitment to ensure AI develops safely and securely, the White House announced Friday. Apple joins Amazon, Google, Meta, Microsoft, OpenAI, Adobe, IBM, Nvidia and several other companies in supporting the plan. Companies initially signed in July 2023 (see 2307210043). They agreed to internal and external security testing and sharing information with industry, government and researchers to ensure products are safe before they’re released to the public.
AI poses potential competition challenges and countries must work together to address them, DOJ, the FTC, the European Commission and the U.K. Competition and Markets Authority said in a joint statement Tuesday. “We are working to share an understanding of the issues as appropriate and are committed to using our respective powers where appropriate,” they said. There are risks companies “may attempt to restrict key inputs for the development of AI technologies” and those “with existing market power in digital markets could entrench or extend that power in adjacent AI markets or across ecosystems,” the entities said. Lack of choice for content creators among buyers “could enable the exercise of monopsony power,” they said: AI also “may be developed or wielded in ways that harm consumers, entrepreneurs, or other market participants.” FCC commissioners are expected to vote at their Aug. 7 open meeting on an NPRM examining consumer protections against AI-generated robocalls (see 2407170055).
The generative AI marketplace is “diverse and vibrant” and there are no “immediate signs” of competition issues related to market entry, the Computer & Communications Industry Association told DOJ in comments that were due Monday. DOJ requested comments on AI marketplace competition. The department previously declined to release AI-related comments publicly (see 2405310039). “There are several new entrants present with diversified business models and products, with more entering the market every week, showing how there are no evident signs of competitive problems,” said CCIA. If competition concerns fall outside the scope of antitrust law in the future, it “would then be appropriate to consider new laws or regulations that focus on addressing real problems that the current framework cannot reach.”
AI was the “hottest topic” on earnings calls that LightShed monitored last quarter, analyst Walter Piecyk said Tuesday. AI was covered on more than 55% of the calls, with mentions up 9% over the previous quarter, “exceeding 500 for the first time,” the firm said: “Big Tech (Amazon, Apple, Google, Meta, Netflix) still accounted for ~40% of mentions.” Inflation was mentioned on only about 15% of calls compared with 20% for the prior three quarters, he said: “There has not been a pick-up in mentions of a potential recession, which was only discussed on two calls, matching the lowest level last seen in 4Q21.”
Principles For Music Creation With AI now has backing from more than 50 music industry organizations, according to a news release. The statements, which urge responsible use of AI in music creation, were initially published by Roland Corp. and Universal Music Group in March. Organizations that have endorsed the principles include the Virgin Music Group, the National Association of Music Merchants, the University of Sydney and music creation software company Landr.