

AI systems, high-risk systems, and the AI Act in practice for companies and authorities

Published on: February 13, 2025 / Last updated: February 13, 2025 - Author: Konrad Wolfenstein



EU AI Act: New guidelines from the EU Commission and what companies need to know now


On February 11, 2025, the European Commission published extensive guidelines on the practical implementation of the EU AI Act. They are intended to help companies and authorities better understand the law's requirements and implement them in a compliant manner. The focus lies on prohibited AI practices, high-risk systems, and measures to comply with the regulations.

Important aspects of the guidelines

Forbidden AI practices

The AI Act explicitly prohibits certain AI applications that are classified as posing an unacceptable risk. These bans have been in force since February 2, 2025. They include:

  • AI systems that use manipulative or deceptive techniques
  • Systems that use vulnerabilities of certain people or groups in a targeted manner
  • Social evaluation systems (social scoring)
  • AI to predict potential criminal acts without clear evidence
  • Uncontrolled scraping of facial images from the Internet for biometric identification
  • Systems for emotion detection at work or in educational institutions
  • Biometric real-time identification systems in public spaces (with a few exceptions for law enforcement authorities)

These prohibitions are intended to ensure that AI technologies are used ethically and responsibly and do not violate fundamental rights.

Practical application of the guidelines

The EU Commission's 140-page guidelines contain numerous practical case studies to help companies and authorities classify their AI systems correctly. Although the guidelines are not legally binding, supervisory authorities use them as a reference when monitoring and enforcing the regulations.

Meaning for companies and authorities

Companies and authorities must actively engage with the guidelines in order to:

  1. Check their existing AI systems for possible violations
  2. Make necessary adjustments at an early stage
  3. Build internal compliance structures to avoid penalties

Non-compliance with the regulations can have serious consequences: fines can reach up to 35 million euros or 7% of a company's global annual turnover, whichever amount is higher.
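
The "whichever is higher" rule can be made concrete with a short calculation. Below is a minimal sketch in Python; the function name and structure are illustrative, not taken from the Act's text:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine under the AI Act's strictest penalty tier:
    up to EUR 35 million or 7% of global annual turnover,
    whichever amount is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 2 billion in global turnover, the 7% figure
# (EUR 140 million) exceeds the EUR 35 million floor:
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```

For smaller companies whose 7% figure falls below 35 million euros, the fixed amount forms the upper bound instead.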

Next Steps

Before the guidelines take full effect, they must still be translated into all official EU languages and formally adopted. Nevertheless, companies and authorities should proactively take measures to prepare for the gradual introduction of the AI Act. Full application of the law is scheduled for August 2, 2026.

The risk categorization of AI systems

The EU AI Act divides AI systems into four risk classes, each of which has different regulatory requirements:

1. Unacceptable risk - prohibited AI systems

These systems are completely prohibited in the EU because they pose a significant threat to citizens' rights and freedoms. Examples include:

  • AI systems for social evaluation (social scoring)
  • Manipulative AI that subliminally influences user behavior
  • Biometric real-time identification in public spaces for law enforcement purposes (with a few exceptions)
  • AI systems that use vulnerabilities due to age, disability or socio-economic status

2. High risk - strict regulation required

These systems must meet strict requirements and undergo a conformity assessment before being placed on the market. They include:

  • AI as a safety component in critical products such as medical devices, vehicles, or machinery
  • Independent AI systems with effects on fundamental rights (e.g. credit check, application screening, criminal prosecution, judicial administration)

Extensive requirements for transparency, risk management, data quality and human supervision apply to these applications.

3. Limited risk - transparency obligations

These systems must inform users that they are interacting with an AI. Examples include:

  • Chatbots
  • Deepfakes used to create or manipulate media content

4. Minimal or no risk - free use

Such systems are not subject to special legal obligations, but a voluntary code of conduct is recommended. Examples are:

  • AI-based video games
  • Spam filters
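
The four-tier scheme above can be summarized as a small lookup structure, for example for an internal triage exercise. This is a hypothetical sketch, not an official classification tool; the example mappings simply restate the ones given in the text:

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations; voluntary code of conduct"

# Example systems from the text, mapped to their risk class.
EXAMPLES = {
    "social scoring system": RiskClass.UNACCEPTABLE,
    "AI-based credit check": RiskClass.HIGH,
    "customer service chatbot": RiskClass.LIMITED,
    "spam filter": RiskClass.MINIMAL,
}

print(EXAMPLES["spam filter"].value)  # no specific obligations; voluntary code of conduct
```

In practice, classification depends on the concrete use case and the criteria in the Act itself, not on a static lookup like this one.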

High-risk AI systems and their regulation

The AI Act defines high-risk AI systems as those that have a significant impact on the safety, health, or fundamental rights of individuals. They fall into two main categories:

1. AI as a safety component or a regulated product

An AI system is classified as high-risk if it either:

  • Acts as a safety component of a product that falls under EU harmonization legislation, or
  • Is itself such a product and must undergo a conformity assessment because of its potential risks.

Examples of such products are:

  • AI in medical devices (e.g. diagnostic systems)
  • AI-based driving assistance systems
  • AI in industrial production for risk assessment and quality assurance

2. Standalone high-risk AI systems with social relevance

These systems are listed in Annex III of the AI Act and concern critical social areas such as:

a) critical infrastructures
  • AI for the control and monitoring of electricity networks or traffic networks
b) Education and employment
  • AI for the automated assessment of exams
  • AI to select applicants or performance assessment of employees
c) Access to financial and social benefits
  • AI-supported creditworthiness checks
  • Systems for assessing eligibility for social benefits
d) law enforcement and judiciary
  • AI for evidence analysis and investigative support
  • AI-based systems for border controls and migration management
e) Biometric identification
  • Biometric remote identification systems
  • Emotion detection systems in security-critical environments

Strict requirements for risk management, transparency, data processing, technical documentation, and human oversight apply to all of these high-risk AI applications.
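
The five obligation areas named above lend themselves to a simple internal checklist. The following is a hypothetical sketch; the field names are illustrative shorthand, not legal terms from the Act:

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    """Tracks the obligation areas for one high-risk AI system."""
    risk_management: bool = False
    transparency: bool = False
    data_governance: bool = False
    technical_documentation: bool = False
    human_oversight: bool = False

    def open_items(self) -> list[str]:
        """Names of the obligation areas not yet addressed."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

status = HighRiskChecklist(risk_management=True, human_oversight=True)
print(status.open_items())  # ['transparency', 'data_governance', 'technical_documentation']
```

A structure like this only tracks progress; whether each area is actually satisfied is a substantive legal and technical question.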

The EU AI Act: How companies can prepare for the strict AI regulations

The EU AI Act sets a clear framework for the use of AI technologies and attaches particular importance to the protection of fundamental rights. Companies and authorities must engage intensively with the new regulations and adapt their AI applications accordingly to avoid sanctions. Strict requirements apply especially to high-risk systems and should be integrated into development and implementation processes at an early stage.

Continuous monitoring of the legislation and proactive compliance measures are essential to use AI responsibly while promoting innovation within the legal framework. The coming years will show how the AI Act proves itself in practice and which further adjustments may become necessary.
