
AI systems, high-risk systems, and the AI Act for practical application in companies and public authorities

Image: Xpert.Digital

EU AI Law: New guidelines from the EU Commission – What companies need to know now

On February 11, 2025, the European Commission published comprehensive guidelines on the practical implementation of the EU AI Act. These guidelines are intended to help companies and public authorities better understand and comply with the requirements of the Act. Particular emphasis is placed on prohibited AI practices, high-risk systems, and measures to ensure compliance.

Key aspects of the guidelines

Prohibited AI practices

The AI Act explicitly prohibits certain AI applications deemed to pose unacceptably high risks. These prohibitions have been in effect since February 2, 2025. They include, among others:

  • AI systems that use manipulative or deceptive techniques
  • Systems that specifically exploit the vulnerabilities of certain individuals or groups
  • Social scoring systems
  • AI to predict potential criminal acts without clear evidence
  • Uncontrolled scraping of facial images from the internet for biometric identification
  • Emotion recognition systems in the workplace or in educational institutions
  • Biometric real-time identification systems in public spaces (with a few exceptions for law enforcement agencies)

These prohibitions are intended to ensure that AI technologies are used ethically and responsibly and do not violate fundamental rights.

Practical application of the guidelines

The European Commission's 140-page guidelines contain numerous practical case studies to help companies and public authorities correctly classify their AI systems. Although these guidelines are not legally binding, they serve as a reference for supervisory authorities when monitoring and enforcing regulations.

Importance for companies and authorities

Companies and authorities must actively engage with the guidelines in order to:

  1. review their existing AI systems for potential violations
  2. make necessary adjustments early on
  3. build internal compliance structures to avoid penalties

Failure to comply with the regulations can have serious consequences. Sanctions can reach up to €35 million or 7% of a company's global annual turnover, whichever is higher.
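
The "whichever is higher" rule can be expressed as a short calculation. The following sketch is purely illustrative (the function name is ours, and this is not legal advice); it only encodes the two figures stated above:

```python
def max_fine_prohibited_practices(global_annual_turnover_eur: float) -> float:
    """Illustrative sketch: maximum fine for prohibited AI practices
    under the AI Act is EUR 35 million or 7% of global annual turnover,
    whichever is higher."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the fixed cap, so the turnover-based amount applies.
print(max_fine_prohibited_practices(1_000_000_000))
```

For smaller companies, the fixed EUR 35 million cap is the binding figure, since 7% of their turnover falls below it.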

Next Steps

Before the guidelines can be fully implemented, they still need to be translated into all official EU languages and formally adopted. Nevertheless, businesses and public authorities should proactively prepare for the phased introduction of the AI Act. Full application of the law is scheduled for August 2, 2026.

Risk categorization of AI systems

The EU AI Act divides AI systems into four risk classes, each with different regulatory requirements:

1. Unacceptable Risk – Prohibited AI Systems

These systems are completely banned in the EU because they pose a significant threat to the rights and freedoms of citizens. Examples include:

  • AI systems for social scoring
  • Manipulative AI that subliminally influences user behavior
  • Real-time biometric identification in public spaces for law enforcement purposes (with a few exceptions)
  • AI systems that exploit vulnerabilities due to age, disability, or socioeconomic status

2. High risk – strict regulation required

These systems must meet strict requirements and undergo a conformity assessment before they can be placed on the market. They include:

  • AI as a safety component in critical products such as medical devices, vehicles or machines
  • Standalone AI systems with implications for fundamental rights (e.g., creditworthiness checks, application screening, law enforcement, judicial administration)

These applications are subject to extensive requirements regarding transparency, risk management, data quality, and human oversight.

3. Limited risk – transparency obligations

These systems must inform users that they are interacting with AI. Examples include:

  • Chatbots
  • Deepfakes (AI-generated or AI-manipulated media content)

4. Minimal or no risk – Free use

Such systems are not subject to any specific legal obligations; however, a voluntary code of conduct is recommended. Examples include:

  • AI-powered video games
  • Spam filters

High-risk AI systems and their regulation

The AI Act defines high-risk AI systems as those that have a significant impact on the safety, health, or fundamental rights of people. These can be divided into two main categories:

1. AI as a safety component or standalone product

An AI system is classified as high-risk if it either:

  • functions as a safety component of a product that falls under EU harmonization regulations, or
  • is subject to a conformity assessment because it poses potential hazards.

Examples of such products include:

  • AI in medical devices (e.g., diagnostic systems)
  • AI-supported driver assistance systems
  • AI in industrial production for risk assessment and quality assurance

2. Standalone high-risk AI systems with societal relevance

These systems are listed in Annex III of the AI Act and affect critical areas of society such as:

a) Critical infrastructures
  • AI for controlling and monitoring power grids or transport networks
b) Education and employment
  • AI for automated exam assessment
  • AI for selecting applicants or evaluating employee performance
c) Access to financial and social benefits
  • AI-powered creditworthiness checks
  • Systems for assessing eligibility for social benefits
d) Law enforcement and justice
  • AI for evidence analysis and investigation support
  • AI-supported systems for border control and migration management
e) Biometric identification
  • Systems for biometric remote identification
  • Emotion recognition systems in safety-critical environments

All these high-risk AI applications are subject to strict requirements regarding risk management, transparency, data processing, technical documentation, and human oversight.

EU AI Act: How companies can prepare for the strict AI regulations

The EU's AI Act sets a clear framework for the use of AI technologies and places particular emphasis on the protection of fundamental rights. Companies and public authorities must thoroughly familiarize themselves with the new regulations and adapt their AI applications accordingly to avoid sanctions. Particularly stringent requirements apply to high-risk systems; compliance measures should therefore be built into development and implementation processes at an early stage.

Continuous monitoring of legislation and proactive compliance measures are essential to ensure the responsible use of AI while simultaneously fostering innovation within the legal framework. The coming years will reveal how the AI Act performs in practice and what further adjustments might be necessary.


 

Your global marketing and business development partner

☑️ Our business language is English or German

☑️ NEW: Correspondence in your national language!

 

Konrad Wolfenstein

My team and I would be happy to serve as your personal advisors.

You can contact me by filling out the contact form or simply call me on +49 89 89 674 804 (Munich). My email address is: wolfenstein xpert.digital

I'm looking forward to our joint project.

 

 

☑️ SME support in strategy, consulting, planning and implementation

☑️ Creation or realignment of the digital strategy and digitalization

☑️ Expansion and optimization of international sales processes

☑️ Global & Digital B2B trading platforms

☑️ Pioneer Business Development / Marketing / PR / Trade Fairs
