What does this phenomenon entail? In its broadest definition, it is the incorporation of computer-based technologies into an organisation's strategies, processes, and products. In the context of modern business, this definition frequently encompasses its intended purpose as well: increased profits.
Overall, the digital transformation of an enterprise should be understood as the remodelling of organisational culture and business systems to take advantage of emerging opportunities.
The pandemic significantly accelerated both the development of existing projects and the emergence of new ones that digitalise business. Examples of highly successful digital transformation include Walmart, whose investment in e-commerce allowed it to overtake eBay in sales for the first time, and Nike's SNKRS app, which offers limited-edition shoes only to its most engaged users.
Nevertheless, the examples cited above, though spectacular in their effects, should be considered fairly classic cases of digital transformation. The most ambitious among such projects is the introduction of artificial intelligence (AI) into a business.
So far, no definition of AI has emerged that everyone agrees on. In essence, however, AI can be described as the ability of a computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.
Digital transformation and AI
Undoubtedly, technologies such as the Internet of Things (IoT), applications, cloud computing, and AI give organisations the opportunity to completely transform their operations and business processes digitally.
The use of AI methods will change the way companies use technology. Digital transformation took a big step forward when AI and machine learning became part of business development strategy. This allows companies to grow, improve their current products and services, and create innovative strategies.
EU actions to ensure secure AI
AI hubs such as the US and China have chosen not to create complex standards systems that could end up limiting technological development. The EU, however, with the welfare of citizens and the ethical development of AI in mind, took up the subject last year.
On 21 April 2021, the European Commission presented a proposal for a regulation defining harmonised rules on AI, the aim of which is to regulate the development, marketing, and use of AI systems in a uniform manner throughout the EU. Then, at the end of November 2021, the Council presented the first compromise version of the draft.
The AI Regulation aims to make the EU a global hub for trustworthy AI and will play a central role in its development.
The proposed legislation is expected to make AI systems in the EU safe, transparent, ethical, impartial, and human-controlled. In the current version of the draft, an AI system is defined as a system that receives data or input from a machine or a human; infers how to achieve a given set of human-defined objectives through learning, reasoning, or modelling implemented with the techniques and approaches listed in Annex I; and generates output in the form of content, forecasts, recommendations, or decisions which affect the environments with which it interacts.
To whom will the new rules apply? The proposed Regulation will have an impact on the activities of a wide range of entities. According to Article 2 of the Artificial Intelligence Regulation, the project concerns:
- Suppliers placing on the market or putting into service AI systems in the EU, regardless of where they are established
- Users of AI systems located in the EU
- Suppliers and users of AI systems located in a third country, where the output produced by the system is used in the EU.
Providers and users of these systems, and not only those established in the EU, will bear the broadest responsibilities for providing secure and reliable AI. The Regulation will also apply to all non-EU suppliers that market or put into use AI systems within the EU, as well as to suppliers and users located outside the EU if the results of the system's operation are used in the EU.
In line with the content of the Regulation, a risk-based approach to AI has been adopted. The EU legislature assumed that systems entailing a higher level of risk should be subject to wider and stricter requirements than those whose use involves only limited or low risk. For this purpose, AI systems were divided into four categories of risk:
1) Unacceptable – AI systems which are prohibited because they contradict the EU's values
2) High – AI systems that carry a significant risk to security; these should be subject to strict regulations and requirements
3) Limited – AI systems whose use does not involve significant risk, such as chatbots. These systems are still subject to certain minimum obligations: users must be made aware that they are interacting with AI
4) Minimal – all other AI systems, whose use involves minimal or zero security risk, such as AI-based computer games. These systems will not be subject to the provisions and requirements of the Regulation.
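As a rough illustration, the four tiers above can be sketched as a simple lookup. The mapping below is purely illustrative (the example system names and the obligation summaries are this sketch's assumptions, except for the chatbot and computer-game examples taken from the draft); the actual classification follows the Regulation's annexes, not a lookup table:

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories of the draft AI Regulation."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict regulations and requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no obligations under the Regulation"


# Illustrative examples only; chatbots and AI-based games come from the draft,
# the other entries are hypothetical placeholders for this sketch.
EXAMPLE_SYSTEMS = {
    "system contradicting EU values": RiskTier.UNACCEPTABLE,
    "safety-critical system": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "ai-based computer game": RiskTier.MINIMAL,
}


def obligations(system: str) -> str:
    """Summarise the tier and resulting obligations for an example system."""
    tier = EXAMPLE_SYSTEMS[system]
    return f"{system}: {tier.name} risk -> {tier.value}"
```

For instance, `obligations("chatbot")` summarises the transparency duty: users must know they are talking to AI.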
Interestingly, AI systems which were developed and put into service exclusively for R&D purposes are to be excluded from the scope of the Regulation.
The Regulation also includes a system of penalties for non-compliance with its provisions and standards. The highest penalty amounts to up to €30,000,000 or up to 6% of total annual worldwide turnover for the preceding financial year. It addresses non-compliance with the prohibition of 'unacceptable' systems and failure to meet the requirements for using high-quality data in the training and testing of AI systems. Failure to comply with other requirements may result in penalties of up to €20,000,000 or up to 4% of turnover. Meanwhile, providing incorrect, incomplete, or misleading information on AI systems may result in a fine of up to €10,000,000 or up to 2% of turnover.
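Each ceiling pairs a fixed amount with a share of worldwide annual turnover; under the draft (as under the GDPR), the higher of the two applies. A minimal sketch of that arithmetic — the `max_fine` helper and `TIERS` table are this sketch's naming, not terms from the Regulation:

```python
def max_fine(turnover_eur: int, fixed_cap_eur: int, pct: int) -> int:
    """Upper bound of a fine: the higher of the fixed cap and pct% of turnover."""
    return max(fixed_cap_eur, turnover_eur * pct // 100)


# (fixed cap in EUR, percentage of worldwide annual turnover) per tier:
TIERS = {
    "prohibited systems / data requirements": (30_000_000, 6),
    "other requirements": (20_000_000, 4),
    "incorrect or misleading information": (10_000_000, 2),
}

# For a company with €1bn turnover, the top tier's ceiling is 6% of turnover
# (€60m), because that exceeds the €30m fixed cap; for a company with €100m
# turnover, the €30m fixed cap is the higher figure and so sets the ceiling.
```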
The Commission also plans to set up a European Artificial Intelligence Board, composed of one representative from each member state, a representative of the Commission, and the European Data Protection Supervisor. This body will have the task of 'issuing appropriate recommendations and opinions to the Commission on the list of prohibited AI practices and the list of high-risk AI systems'.
Digital transformation continues while we wait for regulation
The EU has repeatedly stressed the need to couple the development of AI with the guiding principles of human rights and the protection of personal data.
Time has passed and the EU has still not issued a binding regulation. The proposed regulation will be of great importance for the entire AI market. Although this is a draft of future legislation, it can be assumed that the target version will maintain basic principles such as a risk-based approach or a system of penalties for non-compliance.
One of the most debated points is the question of balancing the restrictions necessary from a human-rights perspective versus the freedom to invest and develop technology.
While we cannot say with certainty that the planned regulation will remain in its current form, we can already draw practical conclusions about the use of AI.
It also seems worthwhile to maintain appropriate documentation from the very beginning of developing an AI system. Even though the documentation that will ultimately be required by the EU legislature has not yet been precisely defined, it seems reasonable to already prepare basic documentation setting out elementary rules and instructions.
The Regulation will apply in the member states two years after its entry into force (the twentieth day following its publication in the Official Journal of the European Union).