
The EU Proposal for a new Artificial Intelligence regulation establishes a new legal framework for innovation

Publications | Privacy, IT & Digital Business

The European Commission frames the AI regulation within the European digital strategy "Shaping Europe's Digital Future"

On 21 April 2021, the European Commission (EC) published its Proposal for a Regulation on Artificial Intelligence ("Proposed AI Regulation"), which forms part of the European digital strategy "Shaping Europe's Digital Future". The strategy's main objective is to drive digital transformation, creating new opportunities for businesses while ensuring maximum respect for citizens' fundamental rights.

With the advent of 5G networks, artificial intelligence-based technologies ("AI") have become pervasive across all sectors, from virtual assistants, facial recognition and predictive online applications to the incorporation of bots into everyday electronic devices and appliances.

The use of AI and 5G networks will make big data techniques (the processing and analysis of huge volumes of data) standard and more affordable across all industries, which will undoubtedly lead to a significant reduction in production costs and, in turn, to the increasing development of personalised products and services for users.

The impact of AI on people's privacy is evident and has given rise to important social debates, both regarding its use by private companies and by public administrations themselves (see, for example, the debate around the use of AI for video surveillance purposes in countries such as China).

The Proposed AI Regulation provides a broad definition of AI, understanding it as software that, using mathematical and programming techniques known as "algorithms", produces outputs such as predictions, behavioural forecasts and recommendations for future decisions, among other purposes.

Aware of AI's many applications and of the risks its use may entail for users, the European Commission has sought, through the Proposed AI Regulation, to establish a clear legal framework on the implications of the use of AI.

It should be noted that, if the Proposed AI Regulation is finally approved, simply developing an online app that integrates a user-assistance bot (i.e., robot software) will mean that the operator must first assess whether the AI-based solution it wishes to implement requires any additional safeguards.

Therefore, the Proposed AI Regulation establishes the following classification of the possible uses of AI:

  • Prohibited AI applications: the use of AI systems to manipulate human behaviour, to exploit information about individuals or groups of individuals, to perform social scoring or evaluation of individuals, or for indiscriminate monitoring by means of video surveillance systems is prohibited.
  • AI applications subject to authorisation: remote biometric identification in public spaces (e.g., via video surveillance) will be subject to prior administrative authorisation. This authorisation will only be granted when there is an enabling regulation or for the fight against serious crimes (e.g., terrorism) and, in any case, it will be subject to limits and safeguards.
  • AI applications with specific requirements: specific rules are established for certain uses of AI, such as in the case of the use of a chatbot or for the use of deep fake systems.
  • "High-risk" AI applications: certain AI applications are considered to pose serious risks to citizens, and the Regulation therefore establishes specific requirements in these cases. For example, the use of AI for biometric identification or for the operation of critical infrastructure will require prior verification by an independent third party.

Additionally, for other uses of AI, such as predictive applications (e.g., admission of students to universities, granting of credit, etc.) or applications for risk assessment, a kind of responsible declaration (self-assessment) will be required.

On the other hand, it should also be noted that the Proposed AI Regulation establishes certain transparency obligations towards users and consumers, as well as the obligation to ensure adequate training and knowledge for the persons in charge of managing and supervising applications involving the use of AI.

Finally, it should be stressed that the Proposed AI Regulation, which may still undergo modifications, establishes administrative fines of up to 6% of annual global turnover, or up to 30 million euros, for companies that fail to comply with their obligations arising from the use of AI.

Therefore, although the use of AI already requires measuring its impact on users' fundamental rights, such as privacy, through data protection impact assessments (introduced by the GDPR), the Proposed AI Regulation incorporates new legal obligations that must be considered by any operator, regardless of its industry sector, that is interested in using AI in its solutions or applications.

In any case, compliance with the new obligations should not be interpreted as an obstacle to the digital transformation of the different economic sectors. On the contrary, the Proposed AI Regulation establishes a stable framework that will safeguard the uses of AI and facilitate the interpretation of what constitutes "diligent" conduct, which is essential for companies' internal and external risk management.

In other words, the new regulatory framework is an opportunity to develop new AI applications with legal certainty, combining innovation with high security standards, all in favour of the rights of users and consumers.

 

For more information, please contact:

Isabel Martínez Moriel

isabel.martinez@es.Andersen.com

