Artificial intelligence and legal certainty: an unavoidable necessity

Vicente Moret analyses how AI-based systems are massively affecting our lives and generating many uncertainties that must be addressed within an adequate regulatory and ethical framework

Systems based on Artificial Intelligence (AI) are already massively affecting our lives in ways that are sometimes not evident. Increasingly, citizens and legal entities are the object of actions and decisions taken through these systems. In this context of accelerated digital transformation, this technology, built on huge amounts of data and algorithms, is increasingly used by companies and public administrations because of the obvious advantages it offers as a means of enhancing any economic or management activity. It is the essential component of autonomous driving systems, facial recognition and digital medicine, to mention just a few examples, alongside the applications that have already been used massively in the financial sector for some years.

Obviously, AI can bring innumerable benefits, making companies and government agencies much more effective and efficient in the context of the change we are experiencing. For this reason, all the documents recently published by the European Commission on digital strategy for the coming years insist on the need to promote the research, development and implementation of these systems, since in this area Europe has lagged behind the two great digital powers, the USA and China. AI has become a strategic priority for Europe in every respect; last February the Commission published a White Paper on Artificial Intelligence that lays the foundations for the future development of this technology and its regulation.

Alongside the great expectations that AI raises, the truth is that this technology entails many uncertainties and potential dangers if an appropriate regulatory and ethical framework is not established to govern it. The first statement that can be made is that, from a legal point of view, the use of these systems generates questions that cannot be resolved with the regulations currently in force. That is why the Commission is clearly committed to creating a coherent regulatory framework across the EU. In this connection, consider the serious doubts raised by the mass deployment of facial recognition systems. These systems are now at the very heart of the public debate, not because of their effectiveness or their necessity, but because of the legal and ethical limits to their use from the point of view of respect for fundamental rights and public freedoms. In our legal-political systems, it is not possible to use mass surveillance indiscriminately without affecting the fundamental rights that are the very basis of our way of life.

The same could be said of the European and national legal systems for the protection of consumers and users, which are not prepared to attribute responsibility for the malfunctioning of AI systems based on algorithms, which are truly opaque black boxes. The legal effects may involve the violation of several fundamental rights, such as the principle of equality and non-discrimination, freedom of expression, the right to privacy and intimacy, consumer and user rights, or data protection rights. Therefore, the best approach to avoiding this damage would be one based on the risk associated with each specific system, so that a legal assessment is made on a case-by-case basis before its use. However, in order not to burden research, innovation and the development of economic activity with legal obligations, this legal and ethical assessment should only be mandatory for systems that pose a high risk from the point of view of individual rights and of data handling. On the other hand, companies using or planning to use these AI systems should bear in mind that the complexity of this technology in terms of development, implementation and supply chains makes it very likely that they will be held liable in case of malfunctioning or error. This attribution of liability has been identified by the European Commission as one of the central aspects of the forthcoming AI regulation.

For all the above reasons, and while awaiting the final configuration of the new regulatory framework for AI on which intensive work is being carried out, it is essential from a legal point of view that the companies and administrations that are going to use these systems carry out an evaluation of the legal and ethical impact of their use and their characteristics. Parameters such as transparency, human supervision, cybersecurity, the origin of the data used, the biases involved in the programming or the justification of the purpose can mean the difference between the adequate use of these systems and the generation of serious negative legal and reputational consequences. Given that the European Commission has already established the principles and bases of the future AI regulation, these same parameters can serve to give companies, agencies, citizens, consumers and society in general greater legal certainty in a context as changeable as that of digital disruption.

The article can be read in Expansion.