
EU 2024/1689 - Europe Tries to Rein in AI

This week EU Regulation 2024/1689, "laying down harmonised rules on artificial intelligence," became effective. The European Artificial Intelligence Act will regulate the use of artificial intelligence with the aim of protecting the rights and safety of EU citizens.


The regulation does not seek to restrict spam filters or AI that suggests products to consumers. It will require chatbots to disclose to the people they are communicating with that they are in fact AI and not human beings. Generative AI output, such as images or video created by AI, will need to be flagged as content created by artificial intelligence.
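The Act prescribes the outcome (disclosure and machine-readable labelling), not any particular implementation. As a purely illustrative sketch, a chatbot provider might handle both obligations along these lines; every name below is hypothetical and generate_text() merely stands in for a real model call:

# Illustrative sketch only -- not prescribed by the regulation.
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI system, not a human."

@dataclass
class LabelledOutput:
    text: str
    ai_generated: bool = True          # machine-readable flag for downstream tools
    generator: str = "example-model"   # hypothetical model identifier

def generate_text(prompt: str) -> str:
    # Stand-in for a real generative model.
    return f"(model output for: {prompt})"

def reply(user_message: str) -> LabelledOutput:
    """Prepend the required disclosure and tag the output as AI-generated."""
    model_text = generate_text(user_message)
    return LabelledOutput(text=f"{AI_DISCLOSURE}\n\n{model_text}")

if __name__ == "__main__":
    print(reply("What does the EU AI Act require?").text)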



Paragraph 30 of EU Regulation 2024/1689 forbids using biometric data to predict a person's sexual orientation, religion, race, sexual behavior, or political opinions. It does provide an exception for filtering biometric data to comply with other EU and member-state laws, specifically noting that police forces can sort images by hair or eye color to identify suspects.


Paragraph 31 addresses the prohibition of social scoring systems, which use AI to evaluate the trustworthiness of an individual:


AI systems providing social scoring of natural persons by public or private actors may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural persons or groups thereof on the basis of multiple data points related to their social behaviour in multiple contexts or known, inferred or predicted personal or personality characteristics over certain periods of time. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. AI systems entailing such unacceptable scoring practices and leading to such detrimental or unfavourable outcomes should therefore be prohibited. That prohibition should not affect lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law.


By comparison, the People's Republic of China's Social Credit System has been used to place individual debtors on blacklists, but it is more often used to enforce regulations against companies.


The Act requires AI systems used for healthcare or employee recruitment to be monitored by humans and to use high-quality data. High-risk AI systems will need to be registered in a database maintained by the EU and to receive a declaration of conformity.


High-risk AI systems will also have to carry a CE marking (physical or digital) to show that they conform with the Act. The CE ('conformité européenne') marking is already used widely to show that a product conforms with health and safety regulations.


AI developers will not have to fully comply with the EU AI Act until August 2, 2027.
