By Paulina Perkowska, associate, and Piotr Kaniewski, counsel, Osborne Clarke

Why generative AI?

Generative AI seems to be on everyone's lips at the moment. What began as a curiosity in the technology sector is fast making an important contribution to business processes. With the capacity to generate sharp insights and respond to personalised needs, generative AI is emerging as a technology with vast transformative potential, one that lets us save time and focus on more important aspects of the business.

What makes an AI model 'generative'? We tend to call an AI solution 'generative' if it is capable of producing new, realistic content from its training data. Creativity is really the only limit: generative models can produce texts tailored to a particular purpose, images, audio, or even source code. Generative AI tools are built on top of foundation models trained on enormous amounts of data. Rather than, for example, simply classifying photos of cats, such a model can create an actual image or text description of a cat on demand.

Generative AI solutions are usually made available to customers as a service. An organisation can use them in a general way, through publicly available websites or applications, or deploy them as a personalised tool. A personalised deployment usually means that the company's internal data is not exposed to everyone. However, even if we manage to keep the AI training cycle within our own organisation, regulatory issues related to cloud computing may still arise.

When considering the use of AI tools in the organisation, it is essential to determine whether they will be used for internal purposes only or whether you will also apply them in relations with your customers (that is, where so-called 'gen-AI' creates outputs that directly or indirectly affect customers, in particular where the outputs form part of the products provided to them). If you anticipate potential interaction with customers, you need to be aware of the regulatory requirements you must comply with. This will be the case, for example, if you operate in the financial or medical sector or target consumers. You will certainly also face challenges related to intellectual property. The EU's AI Act, which we can expect to come into force soon, will undoubtedly widen the scope of these obligations. In the meantime, we highlight below some important legal aspects to bear in mind when implementing AI in an organisation.

What challenges do we identify?

Firstly, be careful when formulating prompts (the instructions we enter into AI systems). While this is already a fairly standard point in organisations' cybersecurity policies, it is worth emphasising in the context of AI: take particular care over the protection of personal data and confidential information, especially business and professional secrets, and be diligent about anonymising the data you use. Many widely available generative AI services let users operate them freely but, at the same time, take the data entered into the system to train the underlying model.
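
By way of illustration only, the short Python sketch below shows one way to strip obvious personal data from a prompt before it leaves the organisation. The patterns and the anonymise helper are our own hypothetical example, and simple pattern matching is only a crude first line of defence; in practice, dedicated data-loss-prevention or entity-recognition tooling should also be considered.

```python
import re

# Hypothetical, illustrative patterns for common personal data.
# Real deployments need more robust detection (for example NER-based
# or dedicated data-loss-prevention tooling).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"(?<!\w)\+?\d[\d\s-]{7,}\d"),
}

def anonymise(prompt: str) -> str:
    """Replace likely personal data with placeholders before the
    prompt leaves the organisation."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(anonymise(
    "Draft a reply to jan.kowalski@example.com, phone +48 601 234 567."
))
# Prints: Draft a reply to [EMAIL], phone [PHONE].
```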

Secondly, there are important copyright issues to note. Although it is now fairly settled that AI cannot itself be an author (in copyright law, an author must be a human), the legal uncertainty around how outputs are classified exposes AI users to certain risks. Content generated purely by AI will not be protected by copyright, although protection may arise where someone incorporates such unprotected content into their own creative work. How do existing AI systems address this? Their terms of use generally purport to grant users rights to use the AI outputs, though it remains unclear whether this is really a matter of copyright at all. Note that these terms usually do not specify what rights they actually grant you.

Once you are using generative AI in your work, remember that AI systems sometimes 'hallucinate': they produce content that is inaccurate or completely made up, yet looks entirely convincing. It is easy to perceive AI-generated content as the ideal answer to the question posed in a prompt, but in fact these systems still make many errors, and their 'knowledge' is limited to the data they have been fed. Be careful with the results: treat AI as a creative friend, but do not trust it uncritically.

AI-generated works can also be unlawful, not only because of inaccurate or harmful content but also because the output may infringe the copyright of third parties, and it is usually the user who will be held responsible for such a violation. Note that many generative AI providers exclude or strictly limit their liability towards users. This is all the more dangerous given the practical lack of tools to verify what data the AI model used to create the output and whether it was entitled to use that data.

Finally, whenever you use AI outputs, be fair: do not pass off as your own what the AI prepared. Firstly, it is unethical; secondly, it is usually contrary to the generative AI provider's terms of use. OpenAI, for example, requires ChatGPT users who publish AI-generated content to attribute it to their name or their company's name, and to make clear to the average person that the content was generated by AI.

With the entry into force of the EU's AI Act, additional obligations related to the use of generative AI will soon apply. The AI Act divides AI solutions into three categories of use (prohibited, high risk and no significant risk) and imposes obligations on those using AI depending on the classification adopted. It also appears that the AI Act will impose some specific obligations regarding generative AI, in particular concerning the data used by the model.

Generative AI policy

As generative AI tools are widely available, it is safe to assume that your employees are already using them in their everyday tasks, or will soon start to. Do not let this become a problem: prepare a policy that clearly states whether and how you permit the use of generative AI solutions in your organisation. In particular, you should analyse potential AI use cases and your risk tolerance. Such an AI policy should, among other things, include sections on record-keeping and use logs, security, privacy, and IP rights. It should also set standards for verifying AI-generated output for accuracy and legal conformity.
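
Purely as a sketch of what 'record-keeping and use logs' might mean in practice, the hypothetical Python structure below records who used which tool, for what purpose, and whether the output was verified. The field names are our own assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUseLogEntry:
    """One record in an internal generative-AI use log (hypothetical schema)."""
    user: str                      # employee identifier
    tool: str                      # name of the approved AI service
    purpose: str                   # business purpose of the use
    contains_personal_data: bool   # was personal data included in the prompt?
    output_verified: bool          # was the output checked for accuracy and IP issues?
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example entry an employee (or a wrapper tool) might record.
entry = AIUseLogEntry(
    user="j.kowalski",
    tool="internal-gpt",
    purpose="first draft of marketing copy",
    contains_personal_data=False,
    output_verified=True,
)
print(entry)
```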

Do not forget the upcoming AI Act, which also provides guidance on what should be included in such a policy. Even if an AI system you intend to use is not classified as high risk, including some of the requirements applicable to high-risk systems in your AI code of conduct will be the right thing to do.

Generative AI tools have user interfaces that make them quite intuitive and easy to use. Still, there is a lot to learn to get the most out of them within an organisation: formulating effective prompts, understanding the technology's limitations and knowing where the tools fit into existing workflows. Companies also need to adapt AI to their culture and values, which may require considerable expertise. Organisations implementing AI systems should provide ongoing education and training to keep employees informed of the systems' advantages as well as their risks. If AI is to be implemented wisely and legal risks avoided, it is not only the AI that needs constant training, but also the employees and the organisation itself.