
AI bans and mandatory competence: The EU AI Act – A new era in dealing with artificial intelligence – Image: Xpert.Digital
Consumer protection and fundamental rights: What the EU AI Act changes
The EU AI Act: New rules for artificial intelligence from February 2025
The European Union's AI Act entered into force on August 1, 2024; its first provisions apply from February 2, 2025, and introduce far-reaching changes to the use of artificial intelligence (AI) in Europe. Companies, public authorities, and developers that use or offer AI systems in the EU must comply with strict regulations. The aim of the AI Act is to ensure the safety and transparency of AI systems, protect fundamental human rights, and strengthen consumer protection.
The new rules include, among other things, clear bans on AI practices deemed to pose an unacceptable risk, an AI literacy requirement for employees who work with AI, and high penalties for violations.
Prohibited AI practices (from February 2, 2025)
Some applications of artificial intelligence are considered unacceptably risky and will therefore be prohibited from February 2025. These include:
1. Social Scoring
Evaluation of individuals based on their social behavior or personal characteristics, such as:
- Analysis of social media data for creditworthiness assessment,
- Evaluation of citizens based on their political opinions or religious beliefs,
- Automated credit ratings based on circle of friends or place of residence.
2. Emotion recognition in sensitive areas
AI systems that analyze emotions or psychological states in certain environments are prohibited:
- In the workplace (e.g., systems that measure stress or frustration based on facial expressions),
- In educational institutions (e.g., AI that monitors students' concentration).
3. Biometric real-time surveillance in public spaces
The use of facial recognition and other real-time biometric systems is prohibited, for example:
- Cameras in train stations or squares for mass surveillance,
- Automatic facial recognition for the identification of individuals without their consent.
Exception: Use is permitted if it is for combating serious crimes (e.g. terrorism) and a court order has been issued.
4. Manipulative AI
Systems that deliberately exploit people's psychological vulnerabilities to manipulate them are prohibited. These include:
- Voice-controlled toys that encourage children to engage in risky behavior,
- AI-powered advertising that manipulates people into making unwanted purchases.
5. Predictive Policing
AI systems that classify people as potential criminals based on personality traits or social factors are prohibited.
The use of AI remains permitted if it is based on objective facts such as criminal records.
6. Biometric categorization
The automatic categorization of people based on sensitive characteristics is prohibited, including:
- ethnic origin,
- sexual orientation,
- political opinion.
Mandatory AI skills training for employees
In addition to bans on high-risk AI, the AI Act also mandates training for employees who work with AI systems. Companies and government agencies must ensure that their employees possess sufficient expertise.
Training content:
- Technical understanding of the AI tools used,
- Awareness of risks such as discrimination or data privacy issues,
- Critical reflection on AI decisions.
Affected groups:
- Developers of AI systems (e.g., start-ups in the field of generative AI),
- HR departments that use AI in recruiting processes,
- Security authorities with AI-supported surveillance systems,
- Universities and public administrations using AI-supported data analysis.
Companies are required to document training measures and update them regularly.
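To illustrate what such documentation might look like in practice, here is a minimal Python sketch of a training record a company could keep. The AI Act does not prescribe any format or tooling, so the class name, fields, and refresh check below are purely illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AITrainingRecord:
    """Illustrative record of one AI literacy training session.

    The AI Act does not prescribe a format; these fields are merely
    examples of information an auditor might reasonably ask for.
    """
    employee_id: str
    ai_system: str                                   # e.g. "recruiting screening tool"
    topics: list[str] = field(default_factory=list)  # e.g. ["bias", "data privacy"]
    completed_on: date | None = None
    refresh_due: date | None = None

def needs_refresh(record: AITrainingRecord, today: date) -> bool:
    """Flag records whose scheduled refresh date has already passed."""
    return record.refresh_due is not None and record.refresh_due < today
```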
Consequences of violating the AI Act
Failure to comply with the new regulations will result in severe penalties:
- Fines of up to 35 million euros or 7% of worldwide annual turnover, whichever is higher,
- Liability risks if damages arise from faulty AI applications,
- Operating bans if a company repeatedly violates AI guidelines.
National regulatory authorities will be responsible for monitoring compliance, starting their work in August 2025. In Germany, the Federal Network Agency is expected to be responsible.
Loopholes and exceptions
Although the AI Act prohibits many risky applications, there are exceptions:
1. Law enforcement
- The use of biometric surveillance remains permitted in cases of serious crimes (e.g., combating terrorism).
- The police are allowed to use AI for facial recognition if a judicial authorization is obtained.
2. Border controls
- AI can be used to analyze the emotional state of refugees.
- Certain AI-supported risk assessments remain permitted.
3. Research & Development
- Certain high-risk AI systems may be developed for scientific purposes, as long as they are not used in practice.
Action required for companies
The EU AI Act sets new global standards for the ethical use of artificial intelligence. Companies must prepare for the new rules early on, in particular by:
- Testing their AI systems for compliance,
- Implementing internal training programs,
- Documenting AI decisions and risk assessments (a minimal sketch of such a log follows below).
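As a purely illustrative sketch of the documentation point above, the following Python snippet appends AI-assisted decisions to a simple audit log. The AI Act does not mandate any particular format or tooling, so the function name, fields, and JSON-lines file used here are assumptions for illustration only.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(path: str, system: str, inputs_summary: str,
                    outcome: str, human_reviewer: str | None = None) -> None:
    """Append one AI-assisted decision to a JSON-lines audit log.

    Illustrative only: the AI Act requires traceability and documentation
    for high-risk systems but does not prescribe this format.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                  # e.g. "CV screening tool v2"
        "inputs_summary": inputs_summary,  # summarize; avoid logging raw personal data
        "outcome": outcome,
        "human_reviewer": human_reviewer,  # record human oversight, if any
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```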
Those who disregard the strict regulations risk not only heavy fines but also a massive loss of trust among customers and partners. It is therefore advisable to begin adapting processes and guidelines now to comply with the requirements of the AI Act.
The EU AI Act: A paradigm shift in dealing with artificial intelligence - background analysis
Artificial intelligence under scrutiny: The impact of the AI Act on Europe
Today, the first prohibitions and competence obligations of the European Union's AI Act take effect, putting into practice a groundbreaking law that fundamentally restructures the handling of artificial intelligence (AI). The Act marks a crucial turning point: for the first time, it establishes concrete prohibitions on certain AI applications and simultaneously sets high standards for the competence of those working with these technologies. The AI Act aims to harness the immense opportunities of AI without jeopardizing the fundamental rights of citizens or tolerating unacceptable risks.
The scope of the AI Act is broad and affects companies, public authorities, and developers alike that use or offer AI systems within the EU. This means that virtually all areas of our society, from business and public administration to the education sector, will be affected by the new regulations. The implications of this law are immense and will bring about profound changes in how we develop, use, and regulate AI.
Prohibited AI practices: A shield for civil liberties
The core of the AI Act consists of clearly defined prohibitions on certain AI applications that are classified as particularly risky or harmful. These prohibitions are not intended to stifle innovation, but rather as a necessary protective mechanism to safeguard fundamental rights and human dignity in the digital world.
The ban on social scoring
One of the most prominent bans concerns so-called "social scoring." This involves evaluating individuals based on social characteristics such as political views, religious affiliation, or purchasing behavior. "People must not be reduced to mere data sets," warned one of the EU Commissioners during the negotiations. Systems that socially evaluate individuals in this way and place them in a kind of ranking are considered incompatible with European values. Experiences with similar systems in other parts of the world, which have led to social exclusion and discrimination, have contributed to this strict stance.
The ban on emotion recognition in the workplace and in educational institutions
The AI Act prohibits the use of emotion recognition technologies in the workplace and in educational institutions. "The world of work and education must not become surveillance arenas," emphasized a Member of the European Parliament. The recording of stress, frustration, or fatigue by AI systems is considered an infringement on the privacy and personal autonomy of those affected. The concern is that such technologies could lead to an atmosphere of mistrust and fear, and could also contribute to unfair performance evaluation.
The ban on real-time biometric surveillance in public spaces
The use of real-time biometric surveillance in public spaces, such as cameras at train stations or in public squares, is also prohibited. This surveillance, which often involves facial recognition, is considered a massive intrusion into privacy. The constant gaze of the surveillance state, as it is called by critics, is incompatible with the fundamental principles of a free and open society. However, there is an important exception for law enforcement in cases of serious crimes, such as terrorism. In these cases, the use of such technologies may be justified under strict conditions and within a limited scope.
The ban on manipulative AI
Another crucial regulation concerns the use of manipulative AI systems. These systems, which deliberately exploit the vulnerabilities of vulnerable individuals, are prohibited by the AI Act. Examples include voice-controlled toys that entice children into risky behavior, or AI-powered fraudulent calls that lead to financial hardship for elderly people. The legislature aims to ensure that AI systems are not misused to impair or harm people's freedom of choice.
The prohibition of predictive policing
Finally, the AI Act prohibits the use of predictive policing, in which individuals are categorized as potential offenders based on personality traits. This practice is considered discriminatory and unfair, as it can be based on prejudice and stereotypes. However, it is important to emphasize that the use of objective facts, such as criminal records, remains permitted.
AI competence mandate: The basis for responsible AI use
In addition to prohibitions, the AI Act also contains a crucial component for strengthening AI competence. Companies and public authorities must ensure that employees working with AI systems possess sufficient expertise. This competence requirement is intended to ensure that AI systems are used not only efficiently, but also ethically and responsibly.
The required skills include a technical understanding of the AI tools used, an awareness of risks such as discrimination or data breaches, and the ability to critically evaluate AI decisions. Companies must offer training for employees who work with AI-based chatbots, recruiting tools, or analytics systems. This training must be documented and take into account the specific context of use. Employees must be able to understand how the AI systems work, recognize their limitations, and identify potential errors or biases. They must consider the ethical implications of their work and understand the impact of their decisions on those affected.
The obligation to demonstrate competence applies not only to the direct users of AI systems, but also to the developers of AI technologies. They must ensure that their systems are not only technically sound, but also comply with ethical and legal requirements. They must consider the principles of "AI by Design" and strive to minimize risks and potential harm from the outset.
Consequences of violations: An incentive for compliance
The consequences of violating the AI Act are significant. Companies and government agencies can face fines of up to €35 million or 7% of their global annual turnover, whichever is higher. They may also face liability risks if damages result from inadequate employee competence. The prospect of hefty fines and reputational damage is intended to push companies and government agencies to adhere strictly to the AI Act's provisions.
It is important to emphasize that the AI Act is not only a criminal law, but also an instrument for promoting the responsible use of AI. With this law, the EU wants to send a signal that AI technologies should be used to serve humanity and not to its detriment.
Challenges and open questions
Although the AI Act is an important step forward, some challenges and open questions remain. The precise training standards and the responsible regulatory authorities still need further clarification. It is expected that it will take some time before the new regulations are fully implemented and take full effect.
Monitoring compliance with regulations will be a major challenge. It must be ensured that companies and authorities are compliant not only on paper, but also in practice. Supervisory authorities must be equipped with the necessary resources and powers to effectively fulfill their task.
Another important aspect is international cooperation. The EU is not the only actor addressing the regulation of AI. It is crucial to reach a global consensus on the ethical and legal framework for AI. Fragmented regulation could lead to competitive disadvantages and an unequal distribution of the benefits and risks of AI.
The AI Act: Europe's vision for a human-centered AI future
The AI Act is more than just a law. It is an expression of European values and a vision for responsible and human-centered AI. It is a call to society to actively engage with the opportunities and risks of AI and to shape a future in which technology is used for the benefit of all.
The AI Act will undoubtedly bring about a profound change in how we interact with AI. It will influence the development of new technologies and alter how we integrate them into our daily lives. It will compel companies and government agencies to rethink their practices and adopt a more responsible approach to AI.
The AI Act is an important step towards a digital future that serves humanity, not the other way around. It demonstrates the European Union's readiness to take a leading role in shaping the AI revolution while prioritizing fundamental rights and human dignity. This is a law that will be significant not only for Europe, but for the entire world. It represents an attempt to strike a balance between innovation and the protection of the individual.
Ethics and AI: The AI Act as a guidepost for a responsible future
The role of ethics in AI development
The AI Act is not only a legal project, but also an ethical one. Integrating ethical principles into AI development is crucial to ensuring that AI systems are fair, transparent, and responsible. There needs to be a discussion about the ethical issues surrounding AI, both in society and within companies.
The importance of transparency
Transparency is a key principle of the AI Act. The way AI systems work must be understandable so that those affected can comprehend how decisions are made. This is particularly important for AI systems used in sensitive areas such as healthcare or the justice system.
The impact on the labor market
The use of AI will impact the labor market. New jobs will be created, but others will be lost. It is important that society prepares for these changes and takes the necessary measures to support affected workers.
The role of education
Education plays a crucial role in fostering AI competence. It is essential that education systems adapt to the challenges of the AI revolution and impart the necessary skills. This includes not only technical skills but also ethical and social competencies.
Privacy protection
Protecting privacy is a key concern of the AI Act. The collection and processing of data by AI systems must be responsible. Data subjects must retain control over their data and have the right to request its deletion.
Promoting innovation
The AI Act should not be misunderstood as a brake on innovation. Rather, it is intended to provide a framework for the development of responsible and ethically sound AI technologies. It is important that companies and researchers continue to have the opportunity to drive innovation in the field of AI.

