
By Paweł Gruszecki, attorney-at-law, counsel in Dentons’ Warsaw office, member of the Intellectual Property and New Technologies team

Preface

As indicated in The AI Index 2023 Annual Report published by Stanford University’s Institute for Human-Centered AI, the words ‘artificial intelligence’[i] appeared in the body of legislation adopted in 37 countries around the world between 2016 and 2022. For this reason, ensuring relative consistency in the regulation of the cross-border offering and operation of AI-based tools and systems in the broadest sense is becoming one of the more interesting issues, revealing how different the approaches to public regulation of this new technology can be in different regions of the world.

This is all the more important because it is easy to identify situations in which the person or entity concerned is based in a country other than that of the provider of the tool or system in question. It is quite possible that, as far as private claims are concerned, only the law of the country indicated by the service provider in the tool’s terms of use will apply. But how should the legal status of such a service provider be qualified in terms of public regulation?

From the perspective of EU residents, this problem will be acute, as the majority of AI-based solutions will come from the US, China, India, the UK, Canada or South Korea (the most advanced EU countries in the development of AI are Germany, Italy and the Netherlands, none of which, however, ranks in the global top five). This will be particularly important for multinational groups that not only offer tools and systems in many countries around the world simultaneously, but also use them within their globally dispersed subsidiaries.

Regulatory divergence

The issue of regulatory divergence – different national approaches to how artificial intelligence should be understood, what should be regulated and to what extent – will play a particularly important role. Countries may also differ on the standards that should apply to the design, evaluation or security of AI-based tools and systems.

In an extreme (if so far theoretical) case, a service provider that would be subject to the proposed EU Artificial Intelligence Act might not be regulated at all in the country in which it carries out its activities.

In search of an international consensus – whose scope will be limited in any case

At least some of the negative effects of this state of affairs may therefore be mitigated by, for example, the work and projects of initiatives, bodies and organisations such as the G7 Hiroshima AI Process, the OECD, the Global Partnership on AI (GPAI) and the International Telecommunication Union (ITU). These organisations, however, are concerned with building an international consensus on the opportunities and risks associated with AI technology.[ii] It is much more difficult to build international consensus on – and thus to engage international organisations in developing – more concrete arrangements, i.e. a common framework for areas such as norms and standards, security, compliance and export controls.

A more pragmatic and detailed approach is only possible within a smaller group of countries, e.g. the US-EU Trade and Technology Council (TTC)

All of this is much easier at the level of bilateral international relations. In the case of the EU and the US, such issues are to be resolved (through consistent definitions and agreement on the standards to be applied) by the US-EU Trade and Technology Council (TTC), established in 2021. In December 2022, the Council announced a joint roadmap (the TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management), which “aims to guide the development of tools, methodologies and approaches for AI and trustworthy AI risk management by and for the EU and the United States.”

Standardisation is key

Importantly, according to that document, the EU and the US, “as like-minded partners”, seek to support and provide leadership in international standardisation efforts (presumably as a counterweight to certain other countries). This can be achieved by contributing to and collaborating on the development of technical AI standards, work that is currently underway in international standards organisations. Critically, these standards will affect the design, operation, evaluation and measurement of trustworthy AI and risk management.

Against all odds

Moreover, attempts to converge the approaches and regulatory positions of the EU and the US (“like-minded partners”) are all the more important because both are engaged, at various levels, in advanced legislative work on AI, collaborate closely with each other, and are involved in an increasingly visible rivalry of a geotechnical nature (i.e. one combining geopolitical and technological issues) with other actors on the international stage, such as China. Reducing the level of regulatory divergence is not made any easier, however, by the fact that in the US the government discusses the future of AI and its self-regulation directly with the CEOs of the major market players[iii], while in the EU lawmakers legislate taking into account a number of factors, such as the voices of the pro-consumer community. So, despite mobilising to counter the policies of systemic rivals and even hostile countries, “like-minded partners” also face many obstacles when they work together.

Summary

In summary, despite the efforts of various international bodies and bilateral initiatives such as the US-EU Trade and Technology Council, it is very likely that regulatory divergence will make it difficult to pursue claims against AI solution providers. This, in turn, may prompt national regulatory and supervisory authorities in individual EU countries to act more assertively, though only on issues within their competence, such as personal data protection, competition and consumer protection, or the protection provided for in the proposed EU Artificial Intelligence Act, the Directive on Liability for Artificial Intelligence, or the amendments to the Directive on Liability for Defective Products. An example of such action is this year’s intervention by Italy’s Data Protection Authority (Garante per la protezione dei dati personali) against the US-based company OpenAI, the service provider of ChatGPT.

The launch of ChatGPT and hundreds of other AI-based tools ahead of the entry into force of major AI regulations around the world has once again put reality ahead of lawmakers. To what extent (if at all) the timing of ChatGPT’s launch was driven by a desire to get ahead of those regulations will remain a matter of speculation for the time being. This does not change the fact that more new regulations in this area are coming. Yet the likelihood that the EU will once again become an exporter of standards in this case (as it did with data protection) is greatly diminished.


[i] The AI Index 2023 Annual Report, Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023.

[ii] International Institutions for Advanced AI, Lewis Ho, Joslyn Barnhart, Robert Trager, Yoshua Bengio, Miles Brundage, Allison Carnegie, Rumman Chowdhury, Allan Dafoe, Gillian Hadfield, Margaret Levi, Duncan Snidal, https://arxiv.org/abs/2307.04699

[iii] https://www.nytimes.com/2023/07/21/us/politics/ai-regulation-biden.html