Aleksandra Cywińska_meta Contact

By Aleksandra Cywińska, associate in the commercial practice at Bird & Bird Poland

“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know” – so said Prof Stephen Hawking during the opening night of the Web Summit conference in Lisbon in 2017.

Six years have since passed; the captivating power of AI has been unleashed, and the first steps have been taken to prevent its invention from becoming the worst event in human history. One of them is the EU’s initiative to establish a regulatory framework for AI, addressing the practical and ethical challenges posed by the popularisation of this technology.

What is an AI black box?

The name ‘black box’ evokes something impenetrable, even mysterious. This brings us to the core of the problem and one of the reasons why people tend to mistrust certain AI systems.

An AI black box is a system based on AI models so complex and sophisticated that its operations cannot be explained. Users entering data into such a system are unable to track and understand the process that takes place inside it and results in an output – a piece of information, a conclusion or a decision.

One reason for using black box technology is architectural: these very powerful AI systems are composed of layers of algorithms that form so-called deep neural networks, designed to simulate the functioning of a human brain. Tracing how data is processed through such abstract mathematical relationships becomes increasingly difficult as the layers multiply. The other reason is the protection of intellectual property. This is, of course, understandable – the system architecture may be a valuable trade secret.
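To make the opacity concrete, the following sketch (illustrative only – the network, its weights and the input are invented) passes a single input through a small feed-forward network. Every intermediate value is a perfectly well-defined number, yet none of them maps onto a human-readable reason for the final output; real systems compound this over millions or billions of parameters.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A toy 'deep' network: three dense layers with non-linear activations.
layer_shapes = [(4, 16), (16, 16), (16, 1)]
weights = [rng.normal(size=shape) for shape in layer_shapes]

def forward(x):
    """Run the input through every layer, keeping the intermediate activations."""
    activations = []
    for w in weights:
        x = np.tanh(x @ w)  # each layer mixes all features non-linearly
        activations.append(x)
    return x, activations

# Four hypothetical input features for a single case.
case = np.array([0.2, -1.3, 0.7, 0.05])
output, activations = forward(case)

print("final output:", output)
for i, a in enumerate(activations, start=1):
    # Well-defined numbers, but not human-readable reasons.
    print(f"layer {i} activations:", np.round(a, 2))
```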

Why the EU considers it important to open AI black boxes

The EU recognises the variety of uses to which AI can be put across all sectors of the economy and areas of social life. The uniform legal framework it intends to establish across the Member States is aimed at fostering the development and free cross-border movement of AI-based goods and services, as well as protecting the health, safety and fundamental rights of EU citizens. The EU therefore considers it vital to ensure that the ‘reasoning’ behind decisions made or supported by AI systems is transparent, unbiased and non-discriminatory, especially where such decisions may significantly affect the lives of individuals.

The risk posed by AI black boxes was recognised by the EU only recently, following the public release and rapid success of ChatGPT in autumn last year, as well as of other generative AI models. The term ‘black box’ was introduced into the Commission’s proposal for the AI Act (i.e. a Regulation laying down harmonised rules on AI) by the European Parliament in June 2023 and is now being discussed in trilogue negotiations between the European Commission, the Council and the Parliament.

How the EU intends to open AI black boxes

‘Opening’ AI black boxes means making such AI systems sufficiently transparent, explainable and documented. The AI Act – both the original proposal published by the Commission in April 2021 and the amendments presented by the Parliament and the Council – stipulates a number of obligations in this respect.

In June this year, the European Parliament proposed establishing general principles applicable to all AI systems. One of them is ‘transparency’: the obligation to develop and use AI systems in a way that allows appropriate traceability and explainability, and to inform users of the capabilities and limitations of those systems (Article 4a of the proposal for the AI Act, as amended by the European Parliament).

The transparency obligations become more detailed and stricter for AI systems that the EU identifies as high-risk. These include, inter alia, certain systems used in critical infrastructure (e.g. transport), in the provision of essential services (e.g. credit scoring) and in law enforcement (the final definition of high-risk AI systems in the AI Act has not yet been agreed).

Article 13 of the AI Act proposal, as amended by the European Parliament, stipulates that all available technical means should be used to ensure that the AI system’s output is interpretable by both the provider and the user. The user should be able to understand how the AI system works, what data it processes, and what its characteristics, capabilities and limitations are, so as to be able to explain the decisions taken by the system to the persons affected.
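The AI Act does not prescribe any particular technique, but as an illustration of one widely used technical means, the sketch below applies permutation feature importance to a hypothetical credit-scoring model (all feature names and data are invented). Shuffling one feature at a time and measuring how much the model’s accuracy drops gives a coarse, human-readable account of which inputs the model actually relies on.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=0)
features = ["income", "debt_ratio", "years_employed", "late_payments"]

# Synthetic applicant data; the underlying rule is hidden from the model.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn: the bigger the accuracy drop,
# the more the model depends on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(features, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Such summaries do not open the box entirely, but they give providers and users a documented, reproducible basis for explaining a system’s behaviour.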

Although the discussion about who will be directly responsible for fulfilling the obligations mentioned above, and to what extent, is still ongoing (we must wait for the final text to be adopted), there is no doubt that various actors will be affected, including producers of AI software and entities using it in a professional capacity.

The gravity of these obligations is underlined by high administrative fines for non-compliance, especially where high-risk AI systems are concerned – up to €20m or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher (Article 71 sec. 3a of the AI Act proposal, as amended by the European Parliament).

Challenges ahead

It is likely that the text of the AI Act will be finalised and adopted by the end of 2023. It would then enter into force 20 days after its publication and begin to apply within the following two years.

Both the original text of the proposal for the AI Act published in 2021 and the amendments proposed this year indicate the direction in which the EU is heading in its intention to open AI black boxes.

It is vital that producers and prospective users of AI systems monitor the legislative process, assess the technical means of complying with the future regulation, and gradually implement policies that will allow them to meet the transparency obligations.