Artificial Intelligence (AI) is increasingly becoming a part of our lives. From facial recognition systems to self-driving cars, AI technologies are changing the way we live and work. But with this increasing presence of AI in our lives comes a need to ensure that it is used in a safe, ethical, and responsible way.
In response to this need, the European Union has proposed the Artificial Intelligence Act (AI Act). This proposed regulation seeks to ensure that AI is developed and used in a way that protects the fundamental rights and freedoms of individuals and society. It sets out a number of requirements for AI systems, including human oversight, fairness, non-discrimination, privacy, data protection, safety, and robustness.
This blog post will look at the AI Act in more detail, exploring its purpose, its risk categories, and the main concerns raised about it. We will also look at how the Act is designed to protect individuals’ fundamental rights and how it can be implemented in a way that ensures AI is used for good.
What is the AI Act?
The Artificial Intelligence Act (AI Act) is a proposed regulation of the European Union that aims to introduce a common regulatory and legal framework for artificial intelligence. It was proposed by the European Commission on 21 April 2021 and is currently being negotiated by the European Parliament and the Council of the European Union.
The purpose of the AI Act is to ensure that AI is developed and used in a way that is safe, ethical, and responsible. The Act sets out a number of requirements for AI systems, including requirements for human oversight, fairness, non-discrimination, privacy, data protection, safety, and robustness.
The AI Act is a complex piece of legislation, but it has the potential to ensure that AI is used in a way that benefits society. The Act is still under negotiation, but it is expected to come into force in 2026.
Categories of the AI Act
The AI Act takes a risk-based approach, grouping AI systems into three broad categories (see the sketch after this list):
- Unacceptable risk: These systems are banned, such as those that use AI for social scoring or for mass surveillance.
- High risk: These systems are subject to specific legal requirements, such as those that use AI for facial recognition or for hiring decisions.
- Minimal risk: These systems are largely unregulated, but they must still comply with general EU law, such as the General Data Protection Regulation (GDPR).
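To make this tiering concrete, here is a minimal Python sketch of how a compliance team might encode the categories and the headline obligations attached to each. The obligation strings are illustrative summaries of the Act’s requirements, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the proposed AI Act (simplified)."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # allowed, subject to strict requirements
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping from tier to headline obligations.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited in the EU"],
    RiskTier.HIGH: [
        "human oversight",
        "fairness and non-discrimination",
        "privacy and data protection (GDPR)",
        "safety and robustness",
        "registration in the EU database",
    ],
    RiskTier.MINIMAL: ["general EU law only (e.g. GDPR)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```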
Let’s look at each of these categories in more detail.
Unacceptable risk
The AI Act defines unacceptable risk systems as those that pose a serious threat to the fundamental rights and freedoms of natural persons, such as their right to privacy, non-discrimination, or physical integrity.
Some examples of unacceptable risk systems include:
- Social scoring systems: These systems use AI to assign a score to individuals based on their behavior, such as their spending habits or their social media activity. These systems can be used to discriminate against individuals or to restrict their access to services.
- Mass surveillance systems: These systems use AI to collect and analyze large amounts of data about individuals, such as their location, their communications, and their online activity. These systems can be used to violate individuals’ privacy and to target them with discrimination or violence.
- Real-time remote biometric identification systems: These systems use AI to identify individuals in publicly accessible spaces based on their biometric data, such as their facial features, their gait, or their voice. These systems can be used to track individuals without their consent.
The AI Act prohibits the development and use of unacceptable risk systems. This means that companies and organizations cannot develop or use these systems in the European Union.
There are a few narrow exceptions to the prohibition on unacceptable risk systems. For example, law enforcement agencies may use real-time remote biometric identification in a limited set of situations, such as searching for victims of serious crime or preventing an imminent terrorist threat, subject to strict safeguards and prior authorization. Even in these cases, the systems must be used in a way that complies with the law and does not violate individuals’ fundamental rights.
The prohibition on unacceptable risk systems is an important part of the AI Act. It is designed to protect individuals’ fundamental rights and to ensure that AI is used in a way that is safe and ethical.
High Risk
The AI Act defines high-risk systems as those that pose a significant threat to the safety or fundamental rights of natural persons, such as their right to life, health, or property.
Some examples of high-risk systems include:
- Facial recognition systems: These systems use AI to identify individuals based on their facial features. These systems can be used to track individuals without their consent, to deny them access to services, or to target them with discrimination or violence.
- Hiring decision systems: These systems use AI to make hiring decisions. These systems can be used to discriminate against individuals on the basis of their race, gender, or other protected characteristics.
- Credit scoring systems: These systems use AI to assess the creditworthiness of individuals. These systems can be used to deny individuals access to credit or to charge them higher interest rates.
- Medical diagnosis systems: These systems use AI to diagnose medical conditions. Errors in these systems can have serious consequences for patients’ health.
The AI Act sets out specific requirements for high-risk AI systems. These requirements include:
- Human oversight: High-risk AI systems must be designed in a way that allows for human oversight. This means that there must be a way for humans to understand how the system works and to intervene if necessary (see the sketch after this list).
- Fairness and non-discrimination: High-risk AI systems must not be used in a way that discriminates against individuals or groups of people.
- Privacy and data protection: High-risk AI systems must comply with the GDPR and other EU data protection laws.
- Safety and robustness: High-risk AI systems must be designed in a way that minimizes the risk of harm to individuals or society.
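To illustrate what the human-oversight requirement might look like in practice, here is a minimal sketch of a hiring-decision pipeline with a human-in-the-loop checkpoint. The scoring model and the review band are hypothetical; the Act requires that effective oversight exists but does not prescribe any particular mechanism.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    score: float   # model output in [0, 1]
    outcome: str   # "advance", "reject", or "needs_human_review"

def score_candidate(features: dict) -> float:
    """Hypothetical stand-in for a trained hiring model."""
    return 0.5  # placeholder: a real model would compute this from features

def decide(candidate_id: str, features: dict,
           review_band: tuple[float, float] = (0.3, 0.7)) -> Decision:
    """Route borderline scores to a human reviewer instead of deciding automatically.

    Scores inside review_band are never auto-decided; a human must make
    the final call and can override the model entirely.
    """
    score = score_candidate(features)
    low, high = review_band
    if low <= score <= high:
        outcome = "needs_human_review"
    else:
        outcome = "advance" if score > high else "reject"
    return Decision(candidate_id, score, outcome)

print(decide("c-001", {"years_experience": 4}))
# Decision(candidate_id='c-001', score=0.5, outcome='needs_human_review')
```

Logging every score, outcome, and reviewer decision alongside this would also support the record-keeping and traceability that high-risk systems must provide.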
The AI Act also requires providers of high-risk AI systems to register their systems with a central EU database. This will allow the authorities to monitor the use of these systems and to take action if they are used in a way that violates the law.
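The information to be registered is set out in an annex to the proposal. As a rough, hypothetical illustration, the record a provider submits might resemble the following; all field names here are invented for the example.

```python
# Hypothetical registration record for the EU database.
# Field names are illustrative, not the Act's official schema.
registration = {
    "provider": "Example Corp",
    "system_name": "CandidateRank",
    "intended_purpose": "Ranking job applicants for interview selection",
    "risk_category": "high",
    "member_states": ["DE", "FR"],
    "status": "on the market",
}
```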
The requirements for high-risk AI systems are designed to ensure that these systems are used in a safe and ethical way. They will help to protect individuals’ fundamental rights and to ensure that AI is used for good.
Minimal Risk
The AI Act defines minimal risk systems as those that do not pose any significant threat to the safety or fundamental rights of natural persons. This means that they are considered to be relatively safe and ethical.
Some examples of minimal risk systems include:
- Chatbots: These systems use AI to simulate conversation with humans. They are often used in customer service applications.
- Online recommendation systems: These systems use AI to recommend products or services to users. They are often used in e-commerce applications.
- Spam filters: These systems use AI to identify and filter out spam emails (a toy version is sketched after this list).
- Fraud detection systems: These systems use AI to identify and prevent fraudulent transactions.
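To give a sense of how lightweight a minimal-risk system can be, here is a toy spam filter built with scikit-learn. The tiny training set is purely illustrative; real filters train on millions of messages.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set.
emails = [
    "win a free prize now", "cheap loans click here",
    "meeting at 10 tomorrow", "please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features plus a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize meeting"]))  # e.g. ['spam']
```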
The AI Act does not impose any specific requirements on minimal risk systems. However, they must still comply with general EU law, such as the GDPR.
For certain systems in this category, such as chatbots, the AI Act does impose transparency obligations: users must be informed that they are interacting with an AI system. This will allow users to make informed decisions about whether or not to continue using these systems.
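A minimal sketch of how a chatbot might meet that disclosure duty follows; the notice text and the placeholder bot logic are invented for the example.

```python
AI_DISCLOSURE = "Note: you are chatting with an automated AI assistant."

def reply(user_message: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    answer = "Thanks for your message! How can I help?"  # placeholder bot logic
    return f"{AI_DISCLOSURE}\n{answer}" if first_turn else answer

print(reply("Hi", first_turn=True))
```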
The minimal risk category is designed to ensure that AI systems that are considered to be relatively safe and ethical are not overregulated. This will help to promote the development and use of these systems, which can benefit society in a number of ways.
Main concerns of the AI Act
The AI Act is a complex piece of legislation that has been met with mixed reactions from the AI community. Some people have praised the Act for its ambitious approach to regulating AI, while others have criticized it for being too complex and burdensome.
Here are some of the main concerns that have been raised about the AI Act:
- The definition of AI is too broad. The AI Act defines AI as “a system that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” This definition is so broad that it could include a wide range of systems, from simple chatbots to complex self-driving cars. This has led to concerns that the Act could be overreaching and could stifle innovation.
- The requirements for high-risk AI systems are too burdensome. The AI Act requires providers of high-risk AI systems to register their systems with a central EU database, to have human oversight, and to carry out impact assessments. These requirements are seen by some as being too burdensome, especially for small businesses.
- The penalties for non-compliance are too weak. The AI Act provides for fines of up to €30 million or 6% of global annual turnover, whichever is higher, for the most serious violations (such as using prohibited systems), and up to €20 million or 4% for other breaches. However, some people have argued that these penalties are not enough to deter large companies from breaking the law.
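On that last point, note that because the fine is the higher of a flat cap and a share of turnover, the effective ceiling scales with company size. A quick illustrative calculation, using the proposal’s top tier:

```python
def max_fine(turnover: float, flat_cap: float = 30_000_000,
             pct: float = 0.06) -> float:
    """Maximum fine under the top tier: the higher of the flat cap
    or a percentage of worldwide annual turnover (figures from the proposal)."""
    return max(flat_cap, pct * turnover)

# A company with EUR 2 billion in annual turnover faces up to EUR 120 million:
print(f"EUR {max_fine(2_000_000_000):,.0f}")
```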
These are just some of the main concerns that have been raised about the AI Act. It is important to note that the Act is still under negotiation, so it is possible that some of these concerns will be addressed before it comes into force. However, the Act is a complex piece of legislation, and it is likely that there will be further debate about it in the coming months and years.
Conclusions
The AI Act is a proposed regulation from the European Union that aims to ensure that AI is developed and used in a safe, ethical, and responsible way. It sets out a number of requirements for AI systems, including human oversight, fairness, non-discrimination, privacy, data protection, safety, and robustness. The Act is still under negotiation, but it is expected to come into force in 2026.
The AI Act is an ambitious attempt to regulate AI in the European Union. It sets out a number of requirements that are designed to ensure that AI is used in a way that benefits society. However, the Act has also been met with mixed reactions, and there are still a number of concerns that need to be addressed before it comes into force.
For example, some people have expressed concern that the definition of AI is too broad, and that the requirements for high-risk AI systems are too burdensome. Others have argued that the penalties for non-compliance are too weak.
Ultimately, the AI Act is a complex piece of legislation that will have a significant impact on the development and use of AI in the European Union. It is important that these concerns are addressed, and that the Act is implemented in a way that ensures that AI is used in a safe, ethical, and responsible way.
Marina Mele has experience in artificial intelligence implementation and has led tech teams for over a decade. On her personal blog (marinamele.com), she writes about personal growth, family values, AI, and other topics she’s passionate about. Marina also publishes a weekly AI newsletter featuring the latest advancements and innovations in the field (marinamele.substack.com).