EU Proposes Risk-Based Approach in New Legal Framework for AI, Law Firm Says
The European Union's proposal for a new legal framework for artificial intelligence distinguishes among different types of risk and highlights which applications would be barred under the new regulation, Mayer Brown said May 5 in an analysis. The proposal classifies AI uses as posing unacceptable, high or low risk to users of the technology and recommends differing levels of restriction accordingly.
For instance, an unacceptable-risk application would be one that exploits the vulnerabilities of specific groups of people or assigns individuals a "social score" based on personal evaluations. High-risk AI could include uses related to "critical infrastructure, educational training and employee selection." High-risk applications would face strict requirements rather than the outright ban, with limited exceptions, that applies to unacceptable-risk uses. Under the proposal, high-risk AI applications would have to complete conformity assessments before entering the market. As for low-risk systems, transparency obligations would apply to applications that interact with humans, detect emotions or determine social category association based on biometric data. "Users must be notified of the circumstances surrounding their interaction with the AI system so as to allow those users to make an informed choice in continuing to use the technology," the analysis said.
Noncompliance with the proposed regulations would result in penalties of up to 30 million euros or "6% of total worldwide annual turnover." A European Artificial Intelligence Board made up of representatives from the member states would oversee the emerging technology, according to the proposal.