AI Regulation: Challenges and the European Union's Approach

by İhsan Tekineş, published on 31 July 2023

With the public release of OpenAI's ChatGPT, a new phase of the artificial intelligence (AI) race has begun, not only for technology companies but for regulators as well. For some time, lawmakers have been grappling with the challenges of transparency, risk, bias, privacy, and intellectual property rights. While some countries, such as Italy, have temporarily banned ChatGPT, others, including the US, China, and the EU, have drafted proposals for broad regulation. On 14 June 2023, the European Parliament adopted its negotiating position on the AI Act, and negotiations began with the Council of the EU on the final form of the legislation. The act is expected to be passed into law by the end of 2023, establishing the EU as a leader in creating laws and regulations to address AI-related concerns and challenges.

Goals of the new regulation

The main objective of the AI Act is to define AI in a way that strikes a balance between avoiding excessive restrictions that could stifle innovation and ensuring that certain AI systems are not left unregulated by an overly narrow definition. The definition must also be technology-neutral, so that the legislation remains relevant as new technological developments are made in the sector. The act classifies AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk. This tiered approach aims to avoid hindering innovation in the AI sector while still overseeing sensitive AI systems.

Unacceptable use cases are banned outright. These include real-time remote biometric identification in publicly accessible spaces, cognitive behavioral manipulation of people, and social scoring. High-risk systems are advanced systems operating in specific areas that might negatively affect fundamental rights. Eight such areas are specified in the legislation: biometric identification, operation of critical infrastructure, education, employment, law enforcement, border control, administration of justice, and access to and enjoyment of essential public and private services. Every high-risk AI system must be assessed before being placed on the market and at various points during its lifecycle. Lastly, there are additional obligations for generative AI models such as ChatGPT: their developers must publish summaries of the copyrighted data used for training, design the model to prevent it from generating illegal content, and disclose that content was generated by AI.
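To make the tiered structure concrete, below is a minimal Python sketch of how a compliance checklist might encode the four risk tiers and their consequences. The enum values, the example use cases, and their tier assignments are illustrative assumptions, not text from the act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative encoding of the AI Act's four risk tiers."""
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment before market release and during lifecycle"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of example use cases to tiers (illustrative only).
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "exam scoring in education": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligation_for(use_case: str) -> str:
    """Return the (illustrative) regulatory consequence for a use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligation_for(case))
```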

Defining AI: shooting a moving target

One of the primary issues lies in establishing a legal definition of AI. There is not a single universally accepted definition. Various definitions focus on different aspects, such as the level of autonomy of a program, similarities to human intelligence, AI as a field of research, or the applications of AI.

One definition takes the defining feature of AI to be a program's ability to act independently of humans. This approach is already used in some areas, such as autonomous cars, which are graded on six levels ranging from level 0 (no driving automation) to level 5 (full driving automation). The problem is that this definition is not technology-neutral: what counts as an autonomous decision by an advanced system may change over time, which would render the legislation irrelevant as technological progress is made.

Another definition compares AI to human intelligence. However, this raises the problem of deciding which computational procedures we want to call intelligent or comparable to human capabilities. Simply put, we may refer to an advanced system as AI when it produces outputs on par with human intelligence. But this definition is too subjective and inadequate for legislation, because it essentially means AI is whatever we call AI. Other definitions focus on AI as a field of research or on the ability of systems to learn. These various definitions demonstrate that AI does not yet have a universal legal definition, and any legislation that attempts to provide one risks being too narrow, leaving some use cases unregulated, or too broad, sweeping in ordinary software so that an AI act becomes a software act.

In the AI Act, the EU defines artificial intelligence as "[…] the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions that influence the environment with which the system interacts, be it in a physical or digital dimension."

The European Union's definition of AI notably focuses on the outcomes and objectives of advanced systems rather than solely emphasizing the underlying technological aspects of such systems. The specifics of how advanced systems carry out their tasks are only significant to the degree that they affect how difficult it is to ensure accountability and maintain transparency.

Transparency

Another critical issue in AI regulation is transparency. Transparency matters for detecting bias, ensuring accountability, protecting privacy, and maintaining public trust in AI. However, enforcing it may be difficult because of privacy laws or technological constraints. Transparency can be approached in three ways.

The first approach requires transparency of the algorithm's decision-making process, essentially making the decision model of the AI public. However, this approach is ill-suited to self-learning AI, because it only captures the system's decision-making at a single point in time. The second approach requires transparency of the input data used for training and testing the system, enabling third parties to review the neutrality of a system or identify biases in its dataset. The third approach is transparency of the system's outcomes: pronounced statistical biases can be read off directly, and subtler ones can be uncovered by systematically varying the inputs and comparing the results.
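As a rough illustration of the third approach, the sketch below probes a black-box model by varying a single sensitive attribute while holding the other inputs fixed and comparing approval rates. The `model` function, the `group` attribute, and the scoring threshold are hypothetical stand-ins, not any real system.

```python
import random

def model(applicant: dict) -> bool:
    """Hypothetical black-box scoring model (stand-in for a real system).
    This toy version deliberately leaks a bias on the 'group' attribute."""
    score = applicant["income"] / 1000 + (5 if applicant["group"] == "A" else 0)
    return score > 50

def outcome_probe(model, applicants, attribute: str, values) -> dict:
    """Measure the approval rate for each value of one input attribute,
    holding all other inputs constant (a simple counterfactual test)."""
    rates = {}
    for value in values:
        variants = [{**a, attribute: value} for a in applicants]
        approvals = sum(model(v) for v in variants)
        rates[value] = approvals / len(variants)
    return rates

if __name__ == "__main__":
    random.seed(0)
    applicants = [{"income": random.uniform(30000, 70000), "group": "A"}
                  for _ in range(1000)]
    # A large gap between the two rates suggests the attribute,
    # rather than the applicant's merits, is driving the outcome.
    print(outcome_probe(model, applicants, "group", ["A", "B"]))
```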

The European Union's AI Act focuses on the first two approaches: transparency of the AI's decision model and transparency of the input data used to train it. It requires companies to be transparent about how the AI functions, so that users can interpret how the system reached its conclusions, and to disclose the data used to train the system as well as the system's accuracy. However, the legislation does not specify the level of transparency required for the model itself. Furthermore, systems classified as high-risk will be listed in a public database maintained by the European Commission.
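One way to picture these disclosure obligations is as a machine-readable summary that a provider might file with the public database. The field names and values below are illustrative assumptions; the act does not prescribe this schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TransparencySummary:
    """Hypothetical disclosure record for a high-risk AI system.
    Field names are illustrative, not the act's actual schema."""
    system_name: str
    provider: str
    intended_purpose: str
    training_data_summary: str
    reported_accuracy: float            # e.g. validation accuracy
    copyrighted_data_disclosed: bool    # generative-model obligation
    known_limitations: list = field(default_factory=list)

summary = TransparencySummary(
    system_name="ExampleHire v1",
    provider="Example Corp",
    intended_purpose="Ranking job applications",
    training_data_summary="2M anonymised applications, 2015-2022",
    reported_accuracy=0.91,
    copyrighted_data_disclosed=True,
    known_limitations=["Lower accuracy for applicants under 25"],
)

print(json.dumps(asdict(summary), indent=2))
```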

The advent of ChatGPT and the rapid progress of AI have propelled the world into an era where robust regulation is imperative. The challenges of defining AI, ensuring transparency, mitigating risks, and safeguarding fundamental rights have been at the forefront of lawmakers' and regulators' efforts. The European Union's AI Act stands as a significant step forward in addressing these issues. By focusing on outcomes and objectives rather than solely technological aspects, the EU strikes a balance between fostering innovation and overseeing high-risk AI systems.
