As Compliance Date for Banned AI Systems Nears, EU AI Act Needs Clarity, Lawyer Says
The EU AI Act's rules concerning prohibited AI uses take effect Feb. 2, and several need more clarity, data protection lawyer and Digiphile Managing Director Phil Lee said in an interview Thursday.
The prohibition giving companies the biggest headache is the ban on inferring people's emotions in the workplace and educational institutions, except where use of AI is intended for medical or safety reasons, he said. Emotion-recognition systems sometimes use biometric data obtained by scanning people's faces. Many companies use language-scanning tools, for example to monitor how their customer service teams interact with customers, to gauge how workers are feeling. But the act is unclear about whether businesses can lawfully use biometrics in these situations.

Another ambiguity arises from the prohibition on deploying subliminal, manipulative or deceptive techniques to cause people to make decisions they otherwise wouldn't have made, Lee said. Some would argue this covers targeted advertising, but the act exempts legitimate commercial advertising practices. A European Commission consultation launched Wednesday on the act's prohibitions and the definition of an AI system could help clarify things, he said.

Most of the barred AI systems involve activities companies shouldn't engage in anyway, but for some AI use cases businesses are uncertain how to proceed and where they could face the threat of litigation, Lee said.

In addition to the prohibitions above, Article 5 of the regulation forbids:

(1) Exploiting vulnerabilities of a natural person or specific group of people based on age, disability or specific social or economic situations in order to distort their behavior in a way that causes or is reasonably likely to cause them or another person significant harm.

(2) Classifying people or groups over certain time periods based on their social behavior or inferred or predicted personal or personality characteristics, where the resulting social score leads to detrimental or unfavorable treatment.

(3) Assessing people to predict their risk of committing a criminal offense based solely on profiling.

(4) Using biometric categorization systems to deduce someone's race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. This doesn't apply to labeling or filtering lawfully acquired biometric data in the area of law enforcement.

(5) Creating facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

(6) Deploying ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes unless strictly necessary for one of several stated objectives.

Comments on the EC consultation are due Dec. 11.