
The introduction of the EU AI Act reflects the European Union’s efforts to regulate the development and application of artificial intelligence (AI). This endeavor is driven by the desire to find a balance between promoting technological innovation and protecting citizens from potential risks that may be associated with the use of AI technologies. At the heart of this regulatory framework are specific key objectives, which can be summarized as follows:

  • Introduction of a risk-based approach to the regulation of AI that distinguishes between different risk levels and sets corresponding requirements.
  • Ensuring the transparency of AI systems, especially if they are used in sensitive areas or have a direct impact on individuals.
  • Strengthening data protection and data security in connection with the use of AI.
  • Creating a favorable environment for research, development and use of AI technologies in the EU.
  • Promoting international cooperation in the regulation of AI in order to create global standards and norms.

Key points and classification of AI systems

The EU AI Act divides AI systems into four risk categories, ranging from “unacceptable risk” to “minimal or no risk”, and sets out specific requirements for each category.

AI systems with unacceptable risk

Systems that pose an unacceptable risk, such as those that enable biometric categorization or social scoring, will be banned completely just six months after official adoption.

AI systems with high risk

High-risk systems remain permitted but are subject to strict testing and transparency requirements. The regulations cover the following areas:

  • Critical infrastructures that can endanger the lives and health of citizens
  • Education and training, which can determine a person’s access to education and career progression
  • Safety elements of products
  • Employment, personnel management and access to self-employment
  • Key private and public services
  • Law enforcement that can interfere with people’s fundamental rights
  • Migration, asylum and border controls
  • Administration of justice and democratic processes

AI systems with limited risk

Limited-risk systems, such as ChatGPT, are not classified as high-risk but must comply with certain transparency obligations and EU copyright law. For example, providers must ensure that AI-generated content is identifiable as such. In addition, such AI systems must not generate any illegal content. Developers must also document how their AI systems work and ensure that the training data used is free of bias. This is intended not only to ensure the fairness and accuracy of AI applications, but also to strengthen users’ trust in these technologies.

AI systems with minimal or no risk

Systems with minimal or no risk will remain freely usable under the EU AI Act. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently in use in the EU fall into this category.
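The four-tier classification above can be sketched as a simple data structure. The tier names come from the article; the mapping of example use cases to tiers is purely illustrative and not a legal classification.

```python
from enum import Enum


class RiskCategory(Enum):
    """The four risk tiers described in the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # permitted, with strict testing/transparency duties
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # free use (e.g. spam filters, video games)


# Hypothetical mapping of the article's examples to tiers -- illustrative only.
EXAMPLE_USE_CASES = {
    "social scoring": RiskCategory.UNACCEPTABLE,
    "biometric categorization": RiskCategory.UNACCEPTABLE,
    "hiring and personnel management": RiskCategory.HIGH,
    "general-purpose chatbot": RiskCategory.LIMITED,
    "spam filter": RiskCategory.MINIMAL,
}


def is_banned(use_case: str) -> bool:
    """Only the unacceptable-risk tier is prohibited outright."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskCategory.MINIMAL)
    return tier is RiskCategory.UNACCEPTABLE
```

Modeling the tiers as an enum makes the key point of the Act explicit: obligations attach to the risk category of the use case, not to the underlying technology.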

Effects and challenges

The implementation of the EU AI Act will have far-reaching effects on companies and research institutions. SMEs and start-ups in particular could incur significant compliance costs. They are likely to find it difficult to make the necessary investments, which could impair their competitiveness. Larger organizations are more likely to have the resources to ensure compliance with the guidelines.

In general, critics fear that the EU AI Act could lead to over-regulation that slows down innovation and puts companies in the EU at a disadvantage in global competition. They argue that overly strict rules could slow technological progress and prevent start-ups from developing innovative solutions quickly.

The challenges in implementing the law also include checking compliance and enforcing the regulations. To this end, the EU AI Act provides for the creation of an EU Committee on Artificial Intelligence. Enforcement itself is delegated to the national authorities, which can impose fines of up to 7 percent of the previous year’s global turnover or 35 million euros in the event of violations. This is significantly more than for violations of the GDPR.
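The fine ceiling mentioned above can be illustrated with a short calculation. The sketch assumes the cap is the higher of the two amounts (7 percent of global annual turnover or 35 million euros), which is how the penalty clause is commonly read; the function name and figures are illustrative.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious violations:
    7% of the previous year's global turnover or EUR 35 million,
    assumed here to be whichever amount is higher."""
    return max(0.07 * global_annual_turnover_eur, 35_000_000.0)


# For a company with EUR 1 billion in turnover, the 7% prong dominates
# (EUR 70 million); for a company with EUR 100 million in turnover,
# the flat EUR 35 million floor applies.
```

The effect of the "whichever is higher" construction is that the fine scales with company size for large firms but never drops below the fixed floor for small ones.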

It remains to be seen how effectively the EU and its member states can monitor compliance and punish violations. Despite these challenges, the EU AI Act can provide an opportunity to steer the development of AI in a way that takes ethical considerations into account while not overly restricting innovation in this area.

Next steps and timetable

The agreed legislative text is expected to be formally adopted in April 2024. It will be fully applicable 24 months after its entry into force, but some parts will be applicable earlier. This applies, for example, to the ban on AI systems with unacceptable risk, which will come into force just six months after adoption.

As already explained in the article, the EU AI Act underlines the growing importance of artificial intelligence. Are you facing a specific project and looking for an experienced partner in the field of AI-supported software development and data engineering? We offer you the expertise and support you need to successfully implement your project.