Over the past few years, Artificial Intelligence (AI) has drastically transformed the world with its advanced capabilities. AI is omnipresent in our day-to-day lives, from healthcare and finance to automobiles and education.
However, the rising use of AI in almost every sector has raised concerns about its impact on individual health, fundamental rights, safety, and autonomy.
At the end of 2023, the European Union (EU) took a major step toward regulating AI, approving the first comprehensive framework designed to restrain harmful uses of the technology.
For the complete details of the European Union Artificial Intelligence Act, read on.
Background of the European Union AI Act
After roughly 37 hours of negotiation between the three EU institutions (the Commission, the Council, and the Parliament), agreement on the act was reached.
Finally, on December 8, 2023, the first comprehensive law regulating the use of AI in the European Union was approved.
However, some European countries, including France, Germany, and Italy, suggested replacing the AI Act with a lighter set of rules to reduce the regulatory burden on European companies, arguing this would help them compete globally. European legislators disagreed, reasoning that the AI Act would instead oblige foreign companies to follow the same rules, creating a level playing field and fairer competition.
The Purpose of The EU AI Act
The main objective of the EU AI Act is to establish a regulatory framework for AI technology, to reduce the risks associated with advancing AI, and to set clear guidelines for everyone who develops or deploys it.
The AI Act divides AI systems into four groups based on how much risk they pose. Here’s what each group means:
· Unacceptable risk: These AI systems are banned outright because they can harm people. Examples include systems that manipulate people's cognitive behavior, such as voice-enabled toys that might encourage dangerous behavior in children.
The Act also forbids AI systems that perform social scoring, judging or categorizing people based on their social behavior or personal characteristics, as well as certain uses of biometric identification (such as facial recognition).
· High risk: This category includes AI systems used in critical areas such as medical devices, vehicles, lifts, and aviation. These systems are closely supervised because they could cause serious harm if something goes wrong.
The rules for these systems are strict and focus on ensuring that they meet certain safety standards, are transparent, and operate under human oversight.
· Generative AI and general purpose (GP): Generative AI systems such as ChatGPT must also meet transparency requirements: they must be designed to prevent the generation of illegal content and must disclose when content is AI-generated.
· Limited risk: These AI systems have less potential to cause harm, but they must still meet minimal transparency obligations: users must be informed in advance that they are interacting with AI. This category includes systems that generate or manipulate content (audio, video, or images).
Understanding these categories is important for companies using AI. It helps them follow the rules mentioned in the Act and ensure their AI systems are safe and legal.
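As a rough illustration only, the four-tier scheme above can be sketched as a simple lookup table. The tier names and obligation summaries below are a simplified, non-authoritative paraphrase of this article, not the Act's legal text:

```python
# Illustrative sketch: a simplified summary of the EU AI Act's four risk
# tiers as described above. Not legal text and not legal advice.
RISK_TIERS = {
    "unacceptable": "Banned outright (e.g. social scoring, manipulative systems).",
    "high": "Strict requirements: safety standards, transparency, human oversight.",
    "general_purpose": "Transparency: prevent illegal content, disclose AI-generated output.",
    "limited": "Minimal transparency: inform users they are interacting with AI.",
}

def obligations(tier: str) -> str:
    """Return the summarized obligation text for a risk tier (illustrative)."""
    return RISK_TIERS[tier]

print(obligations("high"))
```

A company could use a mapping like this as a first-pass checklist when triaging which of its AI systems need the most compliance attention.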
Scope of the AI Act
- The EU AI Act regulates providers, deployers, importers, distributors, and manufacturers of AI systems within the EU.
- Special regulations exist for high-risk AI systems under the Act.
- The act doesn’t cover military, defense, or national security uses.
- Liability laws and scientific research remain unaffected by the Act.
- The Act enforces data protection laws for AI systems.
- Certain research, testing, and personal uses are exempt from the Act’s regulations.
- Consumer protection laws and worker rights are not superseded by the Act.
- Free and open-source AI may have exemptions under the Act’s provisions.
Impact of AI Act on Businesses and Industries
The EU AI Act applies to all businesses involved in the development, deployment, and use of AI. The EU requires every such business to comply with the Act, and authorities will penalize companies for non-compliance. The penalties for violating the Act are as follows:
- Non-compliance with the prohibited-AI provisions carries fines of up to €35 million or 7% of the company's total annual turnover, whichever is higher; breaches of the obligations for general-purpose AI (GPAI) carry fines of up to €15 million or 3% of annual turnover.
- Supplying incomplete or misleading information to authorities is punishable by a fine of up to €7.5 million or 1.5% of the company's annual turnover.
- However, the Act provides for lighter, proportionate penalties for start-ups and small and medium-sized enterprises (SMEs).
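The "fixed cap or percentage of turnover, whichever is higher" arithmetic above can be made concrete with a short sketch. The figures are the caps reported in this article; the function name and example turnover are hypothetical:

```python
# Illustrative sketch of the penalty rule described above: the maximum
# fine is the higher of a fixed euro cap and a percentage of annual
# turnover. Figures are the caps cited in this article, not legal advice.

def max_fine(annual_turnover_eur: float, cap_eur: float, cap_pct: float) -> float:
    """Maximum possible fine: the higher of the fixed cap and the turnover share."""
    return max(cap_eur, annual_turnover_eur * cap_pct / 100)

# A hypothetical company with EUR 1 billion annual turnover, for a
# prohibited-AI violation (up to EUR 35 million or 7% of turnover):
fine = max_fine(1_000_000_000, 35_000_000, 7)
print(fine)  # 7% of turnover (EUR 70 million) exceeds the EUR 35 million cap
```

For smaller companies the fixed cap dominates: at €100 million turnover, 7% is only €7 million, so the €35 million cap applies instead.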
The Future of the EU AI Act
The AI Act marks a major milestone in regulating the use of AI technology in Europe. Its implications extend far beyond the EU, prompting other jurisdictions to consider rules that balance technological innovation with ethics. The AI Act highlights how the EU can shape AI development ethically and responsibly.
The future that the AI Act holds:
- As AI technology keeps evolving, the rules in the AI Act may need to evolve too. Updating the Act to match new technology and address new problems will be essential.
- The Act is designed to stay relevant by using flexible definitions and a risk-based approach, which helps it adapt to future changes.
- As the AI Act is put into action, it is vital to hear from the different stakeholders involved, such as AI developers, users, and consumers. Their feedback helps improve the rules over time.
- As AI becomes an integral part of our lives, ensuring its fair and ethical use is vital. That means closely monitoring where AI infringes on human rights and democratic values, and adjusting regulations accordingly.
The Final Note
To sum up, this act can shape the future of AI not just in the EU but globally. It doesn't just address the current issues and dangers linked to AI; it is also designed to adapt to whatever comes next.