EU intensifies regulation of AI: The most important questions and answers on the regulation from August 2025

Published on: July 30, 2025 / updated on: July 30, 2025 – Author: Konrad Wolfenstein

EU intensifies regulation of AI: The most important questions and answers on the regulation from August 2025 – Image: Xpert.digital

EU AI Office starts: This is how the EU monitors artificial intelligence

What is the EU AI Regulation and why was it introduced?

The EU AI Regulation, also known as the EU AI Act (Regulation (EU) 2024/1689), is the world's first comprehensive law regulating artificial intelligence. It was adopted by the European Parliament on March 13, 2024, and entered into force on August 1, 2024.

The main objective of the regulation is to create a harmonized legal framework for the use of AI in the European Union. On the one hand, it is intended to promote innovation and strengthen trust in AI; on the other hand, it is meant to ensure that AI is used only in ways that preserve the fundamental rights and security of EU citizens. The EU pursues a risk-based approach: the higher the risk posed by an AI system, the more comprehensive the obligations.

The regulation is a direct response to the rapid development of AI technology and its far-reaching social effects. With its adoption, the EU aims to offer protection against potential dangers and to establish Europe as a leading location for trustworthy AI.

Suitable for:

  • AI systems, high-risk systems and the AI Act for practice in companies and authorities

What are general-purpose AI models and why are they the focus of regulation?

According to the definition in the EU AI Regulation, general-purpose AI models (GPAI) are AI models that have considerable generality and are able to competently perform a wide range of distinct tasks. These models can be integrated into a variety of downstream systems or applications.

This versatility is characteristic of GPAI models: they were not developed for one specific application but offer broad functionality. In contrast to specialized AI models, a GPAI model is not trained from the outset for a single, narrowly defined task.

Well-known examples of GPAI models are GPT-4 from OpenAI (used in Microsoft Copilot and ChatGPT), Gemini from Google DeepMind, Claude from Anthropic and Llama from Meta. Due to their technical reach and training architecture, these systems meet the criteria of a GPAI model under the EU regulation.

The special regulatory focus on GPAI models is based on their high risk potential: since they can be used in many areas, there is a particular risk that they end up in applications with far-reaching social impact – for example in healthcare, lending or human resources.

Which obligations come into force on August 2, 2025?

From August 2, 2025, binding obligations for providers of GPAI models come into force. These new rules mark a decisive milestone of the EU AI Regulation and affect several areas:

Transparency obligations

Providers must provide technical documentation and disclose information about training methods, training data and model architecture. This documentation must contain a general description of the AI model, including the tasks the model is intended to perform and the types of AI systems into which it can be integrated.

Copyright compliance

Providers must ensure that their models comply with EU copyright law. This includes respecting websites' opt-out signals and setting up complaint mechanisms for rights holders.

Governance obligations

Appropriate governance structures must be implemented to ensure compliance with the rules. This also includes extended testing and reporting requirements for particularly risky GPAI models.

Documentation obligations

A sufficiently detailed summary of the content used for training must be created and published. This is intended to make it easier for parties with a legitimate interest, including copyright holders, to exercise and enforce their rights.

How does supervision by the European AI Office work?

The European AI Office was established by the European Commission on January 24, 2024, and plays a central role in the implementation of the AI Regulation. Organizationally, it is part of the Commission's Directorate-General for Communications Networks, Content and Technology (DG CONNECT).

The AI Office has far-reaching powers to monitor compliance with the GPAI rules. It can request information from companies, evaluate their models and demand that defects and problems be remedied. In the event of violations or refusal to provide information, it can have AI models removed from the market or impose fines.

The institution takes on various central tasks: it develops the AI Code of Practice, coordinates the European supervisory network and monitors the development of the AI market. In addition, the office is to develop tools and methods for evaluating large generative AI models with regard to risks.

The AI Office consists of five specialized units: Regulation and Compliance, AI Safety, Excellence in AI and Robotics, AI for Societal Good, and AI Innovation and Policy Coordination. This structure enables comprehensive support for all aspects of AI regulation.

What are the penalties for violations of the GPAI requirements?

The EU AI Regulation provides for substantial penalties for violations of the GPAI requirements. Fines can amount to up to 15 million euros or 3 percent of global annual turnover – whichever amount is higher.

For particularly serious violations, such as non-compliance with the prohibition of certain AI practices under Article 5, penalties of up to 35 million euros or 7 percent of global annual turnover can be imposed. This underlines the EU's determination to enforce the new rules.

In addition to these fines, there are further legal consequences: providing false or incomplete information to authorities can be punished with fines of up to 7.5 million euros or 1.5 percent of global annual turnover. Inadequate documentation and lack of cooperation with the supervisory authorities are subject to penalties of the same magnitude.
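To illustrate the "whichever is higher" rule, the following minimal Python sketch computes the fine ceilings described above; the turnover figure is purely hypothetical, the thresholds are the ones named in this article, and none of this is legal advice.

```python
def gpai_fine_ceiling(annual_turnover_eur: float) -> float:
    """Upper limit for GPAI violations as described above:
    EUR 15 million or 3% of global annual turnover, whichever is higher."""
    return max(15_000_000, 0.03 * annual_turnover_eur)

def prohibited_practice_fine_ceiling(annual_turnover_eur: float) -> float:
    """Upper limit for violations of prohibited AI practices (Article 5):
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical example: a provider with EUR 2 billion global annual turnover
turnover = 2_000_000_000
print(gpai_fine_ceiling(turnover))                # 60,000,000 -> the 3% rule applies
print(prohibited_practice_fine_ceiling(turnover)) # 140,000,000 -> the 7% rule applies
```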

The level of these penalties illustrates the seriousness with which the EU pursues the enforcement of its AI regulation. A recent example is the fine of 15 million euros that the Italian data protection authority imposed on OpenAI, even though it was issued under the GDPR and not under the AI Regulation.

Suitable for:

  • AI bans and competence obligation: the EU AI Act – a new era in dealing with artificial intelligence

What is the AI Code of Practice and why is it controversial?

The AI Code of Practice is a voluntary instrument that was developed by 13 independent experts and incorporates contributions from more than 1,000 stakeholders. It was published by the European Commission on July 10, 2025, and is intended to help providers of GPAI models meet the requirements of the AI Regulation.

The code is divided into three main chapters: transparency, copyright, and safety and security. The chapters on transparency and copyright are aimed at all providers of GPAI models, while the chapter on safety and security is only relevant for providers of the most advanced models with systemic risks.

Companies that voluntarily sign the code benefit from less bureaucracy and greater legal certainty. Anyone who does not sign must expect more inquiries from the Commission. This creates indirect pressure to participate, even though the code is officially voluntary.

The controversy arises from the companies' differing reactions: while OpenAI was willing to cooperate and intends to sign the code, Meta rejected participation. Joel Kaplan, Chief Global Affairs Officer at Meta, criticized the code as legally uncertain and argued that it goes far beyond the scope of the AI Act.

Why does Meta refuse to sign the code of conduct?

Meta has decided not to sign the EU Code of Practice for GPAI models, which represents a remarkable confrontation with European regulation. Joel Kaplan, Chief Global Affairs Officer at Meta, justified this decision in a LinkedIn post with several critical points.

The main point of criticism is legal uncertainty: Meta argues that the code brings "a number of legal uncertainties for model developers and provides for measures that go far beyond the scope of the AI Act". The company fears that the unclear wording could lead to misinterpretations and unnecessary bureaucratic hurdles.

Another aspect is the concern about stifling innovation: Meta warns that the code could slow down the development of advanced AI models in Europe and hold back European companies. The company also points to the effects on start-ups and smaller companies, which could be put at a competitive disadvantage by the rules.

This refusal is remarkable, since Meta also wants to offer its own AI services, such as the Llama 3 language model, in the EU. The company plans to deploy its AI both through its own platforms and in cooperation with cloud and hardware providers, for example in Qualcomm smartphones and Ray-Ban glasses.

Which companies support the code of conduct and which reject it?

The technology companies' reactions to the EU Code of Practice differ widely and reflect the industry's division over European AI regulation.

Supporters of the code

OpenAI has announced that it intends to sign the code and describes it as a practical framework for implementing the EU AI Act. The company sees the code as a basis for expanding its own infrastructure and partnerships in Europe. Microsoft is also willing to cooperate: President Brad Smith said it was "likely that we will sign" and praised the AI Office's cooperation with industry.

Critics and opponents

Meta is the most prominent critic and explicitly refuses to sign. The company joins the ranks of critics who see Europe's regulatory plans for AI as hostile to innovation. More than 40 CEOs of European companies, including ASML, Philips, Siemens and the AI startup Mistral, have called for a two-year postponement of the AI Act in an open letter.

Undecided

Google and Anthropic have not yet publicly commented on their position, which suggests that internal evaluation processes are still underway. This reluctance could have strategic reasons, since both companies have to weigh the advantages of legal certainty against the disadvantages of additional compliance costs.

Despite the industry criticism, the EU Commission remains steadfast and is sticking to its schedule: the requested postponement of the GPAI rules is ruled out.

 

Integration of an independent and cross-data-source AI platform for all company matters

Integration of an independent and cross-data-source AI platform for all company matters – Image: Xpert.digital

AI game changer: the most flexible AI platform – tailor-made solutions that reduce costs, improve your decisions and increase efficiency

Independent AI platform: Integrates all relevant company data sources

  • This AI platform interacts with all specific data sources
    • From SAP, Microsoft, Jira, Confluence, Salesforce, Zoom, Dropbox and many other data management systems
  • Fast AI integration: tailor-made AI solutions for companies in hours or days instead of months
  • Flexible infrastructure: cloud-based or hosting in your own data center (Germany, Europe, free choice of location)
  • Highest data security: its use in law firms is the clear proof
  • Use across a wide variety of company data sources
  • Choice of your own or various AI models (DE, EU, USA, CN)

Challenges that our AI platform solves

  • Lack of accuracy in conventional AI solutions
  • Data protection and secure management of sensitive data
  • High costs and complexity of individual AI development
  • Shortage of qualified AI specialists
  • Integration of AI into existing IT systems

More about it here:

  • AI integration of an independent and cross-data-source AI platform for all company matters

 

EU AI Office starts: This is how the EU monitors artificial intelligence

How do normal GPAI models differ from those with systemic risks?

The EU AI Regulation provides for a graduated risk classification for GPAI models, distinguishing between normal GPAI models and those with systemic risks. This distinction is decisive for the applicable obligations.

Normal GPAI models

These must meet basic requirements: detailed technical documentation of the model, information on the data used for training, and compliance with copyright. Such models are subject to the standard obligations under Article 53 of the AI Regulation.

GPAI models with systemic risks

These are classified under Article 51 of the AI Regulation. A model is considered to pose systemic risk if it has "high-impact capabilities". As a benchmark, a GPAI model is presumed to pose systemic risk if "the cumulative amount of computation used for its training, measured in floating-point operations, is greater than 10^25".
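For illustration only: the threshold refers to cumulative training compute. A common rule of thumb estimates training FLOPs for dense transformer models as roughly 6 × parameters × training tokens; the sketch below uses this approximation with purely hypothetical model sizes and token counts to show how a provider might gauge its position relative to the 10^25 benchmark.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute benchmark per Article 51

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of training compute using the common
    6 * N * D rule of thumb for dense transformer models."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical models (parameter and token counts are illustrative only)
models = {
    "small-model":    estimate_training_flops(7e9, 2e12),     # 7B params, 2T tokens
    "frontier-model": estimate_training_flops(1.8e12, 13e12), # 1.8T params, 13T tokens
}

for name, flops in models.items():
    status = ("presumed systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
              else "standard GPAI obligations")
    print(f"{name}: ~{flops:.2e} FLOPs -> {status}")
```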

Additional obligations apply to models with systemic risks: performing model evaluations including adversarial testing, assessing and mitigating possible systemic risks, tracking and reporting serious incidents to the AI Office and the responsible national authorities, and ensuring an adequate level of cybersecurity protection.

This graduation takes into account that particularly powerful models can also harbor particularly high risks and therefore require stricter monitoring and control.

Suitable for:

  • AI Action Summit in Paris: Awakening of the European strategy for AI – "Stargate AI Europe" also for startups?

What do companies have to document and disclose specifically?

The documentation and transparency obligations for GPAI providers are comprehensive and detailed. The technical documentation under Article 53 and Annex XI of the AI Regulation must cover several core areas.

General model description

The documentation must contain a comprehensive description of the GPAI model, including the tasks the model is intended to perform, the types of AI systems into which it can be integrated, the applicable acceptable-use policies, the release date and the distribution modalities.

Technical specifications

Detailed information on the architecture and number of parameters, the modality and format of inputs and outputs, the license used, and the technical means required for integration into AI systems.

Development process and training data

Design specifications of the model and the training process, training methods and techniques, key design decisions and assumptions made, and information on the data sets used, including their type, origin and curation methods.

Public summary of the training content

The AI Office has provided a standardized template that gives an overview of the data used to train the models. This also covers the sources from which the data was obtained, the main data sets and the top domain names.

The documentation must be made available on request both to downstream providers and to the AI Office, and it should be updated regularly.
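The documentation areas listed above lend themselves to a structured record. The following Python sketch is purely illustrative; the field names are assumptions derived from this article's list, not the official template provided by the AI Office.

```python
from dataclasses import dataclass, field

@dataclass
class GPAIModelDocumentation:
    """Illustrative structure mirroring the documentation areas described above
    (general description, technical specifications, training process,
    public training-content summary). Not an official EU template."""
    # General model description
    model_name: str
    intended_tasks: list[str]
    integrable_system_types: list[str]
    acceptable_use_policy: str
    release_date: str
    distribution_channels: list[str]
    # Technical specifications
    architecture: str
    parameter_count: int
    input_output_modalities: list[str]
    license: str
    # Development process and training data
    training_methods: str
    design_decisions: str
    dataset_descriptions: list[str] = field(default_factory=list)
    # Public summary of training content
    main_data_sources: list[str] = field(default_factory=list)
    top_domains: list[str] = field(default_factory=list)
```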

What role do copyrights play in the new regulation?

Copyright occupies a central position in the EU AI Regulation, since many GPAI models were trained with copyrighted content. The Code of Practice dedicates a separate chapter to this topic.

Compliance obligations

Providers must implement practical measures to comply with EU copyright law. This includes observing the Robots Exclusion Protocol (robots.txt) and identifying and respecting rights reservations when crawling the web.
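As a minimal illustration of the first of these measures, the sketch below uses Python's standard-library robots.txt parser to check whether a page may be crawled; the crawler name and URLs are hypothetical, and real compliance would also require honoring other machine-readable rights reservations beyond robots.txt.

```python
from urllib import robotparser

# Hypothetical crawler identity and target URL (illustrative only)
CRAWLER_USER_AGENT = "ExampleAIDataCrawler"
TARGET_URL = "https://example.com/articles/some-page"

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

if parser.can_fetch(CRAWLER_USER_AGENT, TARGET_URL):
    print("Crawling permitted by robots.txt for this user agent.")
else:
    print("Opt-out signal detected - skip this URL when collecting training data.")
```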

Technical protective measures

Companies must implement technical safeguards to prevent their models from generating copyright-infringing content. This could be a high hurdle, especially for image generators such as Midjourney.

Complaint procedures

Providers must name a contact person for rights holders and set up a complaint procedure. If rights holders demand that their works not be used, companies must comply.

Transparency about training data

The new template for the public summary of the training content is intended to make it easier for copyright owners to assert and enforce their rights. It allows them to better determine whether their protected works were used for training.

These rules aim to strike a balance between innovation in the AI field and the protection of intellectual property.

How does regulation affect smaller companies and start-ups?

The effects of the EU AI Regulation on smaller companies and start-ups are a critical aspect of the debate about the new rules. Concerns that smaller players could be disadvantaged are being voiced from various sides.

Bureaucratic burden

Meta argues that the strict requirements of the Code of Practice could particularly restrict the opportunities of start-ups. The extensive documentation and compliance requirements can cause disproportionately high costs for smaller companies.

Relief for SMEs

However, the AI Regulation also provides relief: small and medium-sized companies, including young start-ups, may present the technical documentation in a simplified form. The Commission is creating a simplified template geared towards the needs of small and micro enterprises.

Open source exceptions

Providers of GPAI models released under an open-source license – that is, openly and freely licensed models without systemic risks – do not have to meet the detailed documentation standards. This rule can relieve start-ups and smaller developers.

Equality

The Code of Practice was developed with the participation of more than 1,000 stakeholders, including small and medium-sized companies. This is intended to ensure that the needs of companies of different sizes are taken into account.

The EU is trying to limit the burden on smaller companies while maintaining high security and transparency standards.

Suitable for:

  • 200 billion euros for the promotion of AI gigafactories and AI-related projects in Europe

What is the importance of August 2, 2025 for existing AI models?

August 2, 2025, marks an important turning point in European AI regulation, with a distinction being made between new and existing models. This distinction is crucial for the practical implementation of the regulation.

New GPAI models

For new GPAI models placed on the market after August 2, 2025, the full obligations of the AI Regulation apply immediately. This means that all transparency, documentation and governance obligations are binding from this key date.

Existing models

For systems already on the market, such as GPT-4 used in ChatGPT, the rules only apply from August 2026. This two-year transition period is intended to give providers time to adapt their existing systems accordingly and to implement the new requirements.
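To make these dates concrete, here is a minimal sketch of the decision logic, using the deadlines as stated in this article; it merely illustrates the described distinction between new and existing models and is not legal advice.

```python
from datetime import date

GPAI_OBLIGATIONS_START = date(2025, 8, 2)   # new GPAI models: obligations apply immediately
EXISTING_MODEL_DEADLINE = date(2026, 8, 2)  # existing models: rules apply from August 2026 (as stated above)

def obligations_apply_from(market_entry: date) -> date:
    """Return the date from which the GPAI obligations apply to a model,
    based on the new-vs-existing distinction described in this section."""
    if market_entry >= GPAI_OBLIGATIONS_START:
        return market_entry          # new model: full obligations from market entry
    return EXISTING_MODEL_DEADLINE   # existing model: transition period applies

# Hypothetical examples
print(obligations_apply_from(date(2025, 10, 1)))  # new model    -> 2025-10-01
print(obligations_apply_from(date(2023, 3, 14)))  # existing one -> 2026-08-02
```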

Enforcement by the AI Office

The European AI Office will enforce the rules one year after they take effect (i.e., from August 2026) for new models and two years later for existing models. This gives the authorities time to build up their capacities and develop uniform enforcement practices.

Transitional bridge via the Code of Practice

Companies can already sign the voluntary AI Code of Practice as a transitional aid. These companies then benefit from less bureaucracy and greater legal certainty compared to companies that choose other compliance routes.

What are the long-term effects on the European AI market?

The EU AI Regulation is expected to have far-reaching and lasting effects on the European AI market. These changes concern both Europe's competitive position and the development of the AI industry as a whole.

Legal certainty as a competitive advantage

Companies that focus early on transparency, governance and compliance will remain viable in the European market. Uniform rules can make Europe an attractive location for trustworthy AI and strengthen the trust of investors and customers.

Global standard setter

Similar to the GDPR, the EU AI Regulation could have an international ripple effect. Experts expect the EU AI Act to encourage the development of AI governance and ethics standards worldwide. Europe could establish itself as a global reference point for responsible AI development.

Innovation ecosystem

The EU is promoting several measures to support AI research and to facilitate the transfer of research results into practice. The network of more than 200 European Digital Innovation Hubs (EDIH) is intended to promote the broad adoption of AI.

Market consolidation

The strict regulatory requirements could lead to a consolidation of the market, since smaller providers may struggle to bear the compliance costs. At the same time, new business models around compliance solutions could emerge.

Competitive risks

Companies that do not act in time risk legal uncertainty and competitive disadvantages. The high penalties and strict enforcement can have a significant impact on business models.

The long-term effect depends on whether Europe succeeds in balancing innovation and regulation and establishing itself as a leading location for trustworthy AI.

 

Your AI transformation, AI integration and AI platform industry expert

☑️ Our business language is English or German

☑️ NEW: Correspondence in your national language!

 

Digital Pioneer – Konrad Wolfenstein


My team and I would be happy to serve as your personal advisors.

You can contact me by filling out the contact form or simply calling me on +49 89 89 674 804 (Munich). My email address is: wolfenstein ∂ xpert.digital

I'm looking forward to our joint project.

 

 

☑️ SME support in strategy, consulting, planning and implementation

☑️ Creation or realignment of the AI ​​strategy

☑️ Pioneer Business Development

© July 2025 Xpert.digital / Xpert.plus – Konrad Wolfenstein – Business Development