EU vs. US: An end to data theft? How the new EU law is set to change AI training forever
Published on: August 4, 2025 / Updated on: August 4, 2025 – Author: Konrad Wolfenstein

EU vs. US: An end to data theft? How the new EU law is set to change AI training forever – Image: Xpert.Digital
More transparency, stricter rules: What the new EU law really means for your AI security
Stricter rules for ChatGPT, Gemini and Co. – The new EU rules for artificial intelligence
Starting August 2, 2025, stricter rules will apply in the European Union to large-scale artificial intelligence systems such as ChatGPT, Gemini, and Claude. These rules are part of the EU AI Regulation, also known as the AI Act, which is gradually taking effect. The new regulations specifically affect so-called general-purpose AI models, or GPAI for short. These include systems that are versatile and can be used for various tasks – from text generation to translation to programming.
Providers of these systems will be required to comply with comprehensive transparency obligations in the future. They must disclose how their systems work, the data they were trained with, and the measures they have taken to protect copyrights. Particularly powerful models that could potentially pose systemic risks are subject to additional security measures and must conduct regular risk assessments.
Suitable for:
- EU intensifies regulation of AI: The most important questions and answers on the rules applying from August 2025
Why is the EU introducing this regulation?
The European Union pursues several goals with the AI Regulation. On the one hand, it is intended to protect citizens from the potential risks of artificial intelligence, but on the other, it is also intended to promote innovation and create legal certainty for companies. The EU aims to be a global pioneer in AI regulation and set standards that could potentially be adopted internationally.
A key concern is the protection of fundamental rights. The regulation aims to ensure that AI systems are transparent, accountable, non-discriminatory, and environmentally friendly. At the same time, it aims to prevent AI systems from being used for purposes incompatible with EU values, such as Chinese-style social scoring or manipulative practices.
What specific obligations do providers have from August 2025?
GPAI model providers must fulfill a number of obligations starting August 2, 2025. These include, first, comprehensive technical documentation containing details of the model architecture, training methodology, training data sources, energy consumption, and the computing resources used. This documentation must be continuously updated and made available to authorities upon request.
A particularly important aspect is copyright compliance. Providers must develop and implement a strategy for complying with EU copyright law. They must ensure that they do not use any content for training for which rights holders have declared a reservation of rights (opt-out). They must also prepare and publish a sufficiently detailed summary of the content used for training. The EU Commission has developed a binding template for this, which becomes mandatory for new models from August 2025.
What about copyright and training AI models?
The issue of copyright in the training of AI models is a central point of contention. Many authors, artists, and media producers complain that their works have been used to train AI systems without permission, and that AI is now competing with them. The new EU rules address this problem by requiring providers to disclose which websites they use to access copyrighted works.
According to Article 53 of the AI Regulation, providers must demonstrate that they have a functioning system in place to protect European copyright. They must implement a copyright compliance policy, including technologies to detect and honor potential author opt-outs. The text and data mining exception of the DSM Directive remains applicable, but if rights holders have reserved their rights, providers must obtain permission for use.
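In practice, one widely used channel for signaling such a reservation is a website's robots.txt file, which AI crawlers can be instructed to respect. The following Python sketch illustrates only this one mechanism; the crawler name is hypothetical, and a real compliance system would also need to handle other signals such as TDM reservation metadata or license terms.

```python
# A minimal sketch, assuming robots.txt is the opt-out channel: before a
# page is crawled for training data, check whether the site disallows the
# crawler. The user-agent name is hypothetical; other opt-out signals
# (e.g., TDM reservation metadata) are not covered here.
from urllib.robotparser import RobotFileParser

AI_CRAWLER_USER_AGENT = "ExampleTrainingBot"  # hypothetical crawler name

def may_crawl_for_training(page_url: str, robots_url: str) -> bool:
    """Return False if robots.txt disallows our crawler for this URL."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(AI_CRAWLER_USER_AGENT, page_url)

if __name__ == "__main__":
    allowed = may_crawl_for_training(
        "https://example.com/articles/some-text",
        "https://example.com/robots.txt",
    )
    print("Allowed for training crawl:", allowed)
```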
What about existing AI models?
There is a longer transition period for AI models that were already on the market before August 2, 2025. Providers such as OpenAI, Google, or Anthropic, whose models were already available before then, do not have to fulfill the obligations under the AI Regulation until August 2, 2027. This means that ChatGPT, Gemini, and similar existing systems have two more years to adapt to the new rules.
This phased introduction is intended to give companies time to adapt their systems and processes. However, new models launched after August 2025 must meet the requirements from the outset.
What happens if the new rules are violated?
The EU has established a graduated sanction system that provides for severe penalties for violations. The amount of the fines depends on the severity of the violation. Violations of GPAI obligations can result in fines of up to €15 million or 3 percent of global annual turnover, whichever is higher. Providing false or misleading information to authorities can result in fines of up to €7.5 million or 1.5 percent of annual turnover.
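Expressed as a simple calculation, the applicable cap is the higher of the fixed amount and the turnover-based percentage. A minimal illustration, with the figures taken from the text above:

```python
# Sketch of the fine caps described above: the applicable maximum is the
# higher of the fixed amount and the turnover-based percentage. Figures
# come from the text; the functions are illustrative, not legal advice.
def max_fine_gpai(annual_turnover_eur: float) -> float:
    """Cap for GPAI-obligation violations: max(EUR 15M, 3% of turnover)."""
    return max(15_000_000.0, 0.03 * annual_turnover_eur)

def max_fine_misleading_info(annual_turnover_eur: float) -> float:
    """Cap for false or misleading information: max(EUR 7.5M, 1.5% of turnover)."""
    return max(7_500_000.0, 0.015 * annual_turnover_eur)

# Example: a provider with EUR 2 billion in global annual turnover
print(max_fine_gpai(2_000_000_000))             # 60000000.0
print(max_fine_misleading_info(2_000_000_000))  # 30000000.0
```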
It's important to note, however, that the EU Commission's enforcement powers will only take effect from August 2, 2026. This means there will be a one-year transition period during which the rules will apply but will not yet be actively enforced. However, affected citizens or competitors can already file lawsuits during this period if they discover violations.
What role does the voluntary code of conduct play?
In parallel with the binding rules, the EU has developed a voluntary code of conduct, the GPAI Code of Practice. This was developed by 13 independent experts and is intended to help companies meet the requirements of the AI Regulation. The code is divided into three areas: transparency, copyright, and safety and security.
Companies that sign the Code can benefit from reduced administrative burdens and greater legal certainty. By the end of July 2025, 26 companies had already signed the Code, including Aleph Alpha, Amazon, Anthropic, Google, IBM, Microsoft, Mistral AI, and OpenAI. However, Meta has explicitly decided against signing, criticizing the Code for creating legal uncertainty and going beyond the requirements of the AI Act.
How do the approaches in the EU and the USA differ?
Regulatory approaches in the EU and the US are increasingly diverging. While the EU relies on strict regulation and clear guidelines, the US, under President Trump, is pursuing a path of deregulation. Shortly after taking office, Trump repealed the AI regulations of his predecessor, Biden, and his AI plan is entirely focused on promoting innovation without regulatory hurdles.
A particularly controversial issue is the question of copyright. Trump argues that AI models should be allowed to use content for free, without having to respect copyright laws. He compares this to how people who read a book also acquire knowledge without violating copyright. This position stands in stark contrast to EU regulations, which explicitly call for copyright protection.
What does this mean for users of AI systems?
For end users of AI systems like ChatGPT or Gemini, the new rules primarily bring greater transparency. Providers will be required to communicate more clearly how their systems work, their limitations, and potential errors. AI-generated content must be clearly marked as such, for example, with watermarks for images or corresponding notices for text.
In addition, the systems should become more secure. The mandatory risk assessments and security measures are intended to prevent AI systems from being misused for harmful purposes or producing discriminatory results. Users should be able to trust that the AI systems available in the EU comply with certain standards.
Which AI practices are already banned in the EU?
Since February 2, 2025, certain AI applications have been completely banned in the EU. These include so-called social scoring, i.e., the evaluation of people based on their social behavior, as practiced in China. Emotion recognition in the workplace and in educational institutions is likewise banned, as are systems that manipulate people or exploit their vulnerabilities to their detriment.
Real-time facial recognition in publicly accessible spaces is generally prohibited, with exceptions for law enforcement authorities investigating serious crimes such as terrorism or human trafficking. These practices are classified as posing an "unacceptable risk," and the bans are intended to protect the fundamental rights of EU citizens.
How is compliance with the rules monitored?
Monitoring of the AI Regulation takes place at various levels. At the EU level, the European Commission's newly created AI Office is responsible for monitoring GPAI models. Member States must also designate their own competent authorities. In Germany, the Federal Network Agency, in cooperation with other specialist authorities, assumes this task.
For certain high-risk AI systems, so-called notified bodies are involved to carry out conformity assessments. These bodies must be independent and have the necessary expertise to assess AI systems. The requirements for these bodies are specified in detail in the regulation.
What impact does this have on innovation and competition?
Opinions differ on the impact of the AI Regulation on innovation. Proponents argue that clear rules create legal certainty and thus promote investment. The EU Commission emphasizes that the regulation leaves room for innovation while ensuring that AI is developed responsibly.
Critics, including many technology companies and industry associations, warn of a "sudden halt to innovation." They fear that the extensive documentation and compliance requirements could particularly disadvantage smaller companies and startups. Meta argues that overregulation will slow the development and spread of AI models in Europe.
Suitable for:
- The five-point plan: How Germany wants to become a world leader in AI – data gigafactories and public contracts for AI startups
What are the next important dates?
The timeline for implementing the AI Regulation includes several important milestones. After August 2, 2025, when the GPAI rules enter into force, the next major phase will take place on August 2, 2026. Then, the full rules for high-risk AI systems will take effect, and the EU Commission will be granted full enforcement powers. Member States must also have implemented their sanction rules and established at least one AI sandbox by then.
Finally, on August 2, 2027, the rules for high-risk AI systems regulated through sectoral harmonization legislation, as well as the rules for GPAI models launched before August 2025, will come into effect. There are further transition periods until 2030 for specific areas such as AI systems in large-scale EU IT systems.
How do the big tech companies position themselves?
The reactions of major technology companies to the new EU rules vary. While companies like Microsoft and OpenAI have signaled a general willingness to cooperate and have signed the voluntary code of conduct, Meta is much more critical. Joel Kaplan, Meta's Chief Global Affairs Officer, stated that Europe is taking the wrong approach to AI regulation.
Google has announced that it will sign the Code of Practice, but has also expressed concerns that the AI law could stifle innovation. Anthropic, which has been sued for alleged copyright infringement, has also expressed support for the Code. The differing positions reflect the companies' different business models and strategic directions.
What practical challenges are there in implementation?
Implementing the AI Regulation presents numerous practical challenges. A key difficulty is defining which systems qualify as "artificial intelligence" and thus fall under the regulation. The EU Commission has announced corresponding guidelines, but has not yet published them in full.
Another problem is the complexity of the documentation requirements. Companies must compile detailed information about their training data, which is particularly difficult when large amounts of data from different sources have been used. The question of how exactly rights holders' opt-outs should be technically implemented has also not yet been fully resolved.
What does this mean for European AI companies?
For European AI companies, the regulation presents both opportunities and challenges. On the one hand, it creates a uniform legal framework within the EU, facilitating cross-border business. Companies that meet the standards can use this as a mark of quality and build trust with customers.
On the other hand, many fear that the strict rules could put European companies at a disadvantage in global competition. European providers could be at a disadvantage, particularly compared to US or Chinese competitors, who are subject to less stringent regulations. The EU, however, argues that the regulation will lead to safer and more trustworthy AI systems in the long term, which could represent a competitive advantage.
🎯📊 Integration of an independent, cross-data-source AI platform 🤖🌐 for all company matters
Integration of an independent, cross-data-source AI platform for all company matters – Image: Xpert.digital
AI game changer: the most flexible AI platform – tailor-made solutions that reduce costs, improve decisions, and increase efficiency
Independent AI platform: Integrates all relevant company data sources
- This AI platform interacts with all specific data sources
- From SAP, Microsoft, Jira, Confluence, Salesforce, Zoom, Dropbox and many other data management systems
- Fast AI integration: tailor-made AI solutions for companies in hours or days instead of months
- Flexible infrastructure: cloud-based or hosting in your own data center (Germany, Europe, free choice of location)
- Highest data security: its use in law firms is clear proof of this
- Use across a wide variety of company data sources
- Choice of your own or various AI models (DE, EU, USA, CN)
Challenges that our AI platform solves
- Lack of accuracy in conventional AI solutions
- Data protection and secure management of sensitive data
- High costs and complexity of individual AI development
- Shortage of qualified AI specialists
- Integration of AI into existing IT systems
More about it here:
Innovation vs. Regulation: Europe's Balancing Act in the AI Sector
How do other countries deal with AI regulation?
The EU is a global pioneer with its comprehensive AI regulation, but other countries are also developing their own approaches. There is currently no comparable federal regulation in the US, but individual states have passed their own laws. Under the Trump administration, the US is moving more toward deregulation.
China is pursuing a different approach, with specific rules for certain AI applications, while simultaneously promoting technologies like social scoring through state support. Other countries, such as Canada, the United Kingdom, and Japan, are developing their own frameworks, which are often less comprehensive than the EU regulation. These different approaches could lead to regulatory fragmentation, posing challenges for international companies.
What role do courts play in enforcement?
Courts will play an important role in interpreting and enforcing the AI Regulation. Several lawsuits are already underway in the US alleging copyright infringement in AI training. For example, one court sided with authors who sued Anthropic over the use of pirated copies of their books in the training of Claude.
In the EU, private individuals and companies can now sue if they detect violations of the AI Regulation. This also applies during the transition period before the authorities' official enforcement powers take effect. However, the final interpretation of the regulation will be up to the European Court of Justice, which is likely to issue groundbreaking rulings in the coming years.
Suitable for:
- AI bans and AI literacy obligations: the EU AI Act – a new era in dealing with artificial intelligence
What are the long-term prospects?
The long-term effects of the EU AI Regulation are still difficult to estimate. Proponents hope that EU standards could become a global benchmark, similar to the General Data Protection Regulation (GDPR). Companies developing for the European market could then apply these standards worldwide.
Critics, however, warn of a technological disconnection in Europe. They fear that strict regulation could lead to innovative AI developments taking place primarily outside of Europe. Time will tell whether the EU has struck the right balance between protection and innovation.
What does all this mean in summary?
The new EU rules on artificial intelligence mark a turning point in the regulation of this technology. Starting in August 2025, providers of large-scale AI systems such as ChatGPT and Gemini will be required to meet comprehensive transparency and security requirements. The regulation aims to protect citizens' rights while enabling innovation.
Practical implementation will show whether this balancing act will succeed. While some companies view the rules as necessary and sensible, others criticize them as inhibiting innovation. The differing approaches in the EU and the US could lead to a fragmentation of the global AI landscape. For users, the rules mean greater transparency and security, while for companies, they impose additional compliance requirements. The coming years will be crucial for determining whether Europe can successfully pursue its self-chosen path of AI regulation.
How does technical documentation work in practice?
The technical documentation that GPAI model providers must create is a complex undertaking. It includes not only technical specifications but also detailed information about the entire development process. Providers must document which design decisions were made, how the model architecture is structured, and which optimizations were made.
Documenting training data is particularly demanding. Providers must not only disclose which data sources were used, but also how the data was prepared and filtered. This includes information on cleaning processes, the removal of duplicates, and the treatment of potentially problematic content. The EU also requires information on the scope of the data, its key characteristics, and how it was obtained and selected.
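What such documentation might look like as a data structure is sketched below. This is purely illustrative: the field names are our own, and the EU Commission's binding template defines the format that actually counts.

```python
# A hypothetical sketch of how a provider might structure the training-data
# documentation described above. All field names are our own illustration;
# the EU Commission's binding template defines the actual required format.
from dataclasses import dataclass, field

@dataclass
class DataSourceRecord:
    name: str                 # e.g., a corpus or crawl identifier
    acquisition_method: str   # how the data was obtained and selected
    size_description: str     # scope and key characteristics
    copyright_notes: str      # opt-outs honored, licenses, exclusions

@dataclass
class TrainingDataDocumentation:
    sources: list[DataSourceRecord] = field(default_factory=list)
    cleaning_steps: list[str] = field(default_factory=list)  # filtering, cleaning
    deduplication_method: str = ""                           # duplicate removal
    problematic_content_handling: str = ""                   # e.g., toxicity filters
```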
What special requirements apply to systemically risky models?
AI models classified as systemically risky are subject to particularly strict requirements. This classification occurs when training requires a cumulative computational effort of more than 10^25 floating-point operations, or when the EU Commission classifies the model as particularly risky due to its capabilities.
These models are subject to additional obligations, such as conducting risk assessments, adversarial testing to identify vulnerabilities, and implementing risk mitigation measures. Providers must also establish an incident reporting system and promptly report serious incidents to the supervisory authorities. These measures are intended to ensure that particularly powerful AI systems cannot be misused for malicious purposes.
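To get a feel for the 10^25 FLOP threshold, a rough estimate can be made with the common rule of thumb of about six floating-point operations per parameter per training token for dense transformer models. This heuristic and the model figures below are illustrative assumptions, not part of the regulation.

```python
# Back-of-the-envelope check against the 10^25 FLOP threshold named above,
# using the common "6 x parameters x training tokens" rule of thumb for
# dense transformer training compute. The heuristic and the model figures
# are illustrative assumptions, not part of the regulation.
def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6.0 * parameters * training_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute per the AI Act

# Example: a hypothetical 70-billion-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")   # ~6.30e+24
print("Above systemic-risk threshold:", flops > SYSTEMIC_RISK_THRESHOLD)  # False
```

By this estimate, even a very large present-day model can land just below the threshold, which is why the Commission's separate power to designate models based on their capabilities matters in practice.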
What does cooperation between the EU and its member states look like?
The enforcement of the AI Regulation takes place in a complex interplay between EU institutions and national authorities. While the EU AI Office is responsible for monitoring GPAI models, national authorities play an important role in monitoring other AI systems and enforcing the rules locally.
Member States were required to designate at least one competent authority by November 2024 and establish national notification authorities by August 2025. These authorities are responsible for the accreditation and monitoring of conformity assessment bodies that assess high-risk AI systems. Coordination between the different levels is challenging but necessary to ensure consistent application of the regulation across the EU.
What is the importance of harmonized standards?
An important aspect of the AI Regulation is the development of harmonized standards. These technical standards are intended to specify how the abstract requirements of the regulation can be implemented in practice. The European standards organizations CEN, CENELEC, and ETSI are working on developing these standards, which cover areas such as data quality, robustness, cybersecurity, and transparency.
While the harmonized standards are not mandatory, they provide a presumption of conformity. This means that companies that comply with these standards can assume that they meet the relevant requirements of the regulation. This creates legal certainty and significantly simplifies practical implementation.
How do smaller companies deal with these requirements?
The extensive requirements of the AI Regulation pose a particular challenge for smaller companies and start-ups. The documentation requirements, conformity assessments, and compliance measures require considerable resources that not all companies can afford.
The EU has attempted to address this problem by explicitly requiring the interests of SMEs to be taken into account in the regulation. Notified bodies are expected to avoid unnecessary burdens and minimize administrative costs for small businesses. Furthermore, AI regulatory sandboxes are intended to offer small businesses the opportunity to test their innovations in a controlled environment.
What are AI regulatory sandboxes and how do they work?
AI regulatory sandboxes (also called real-world laboratories) are controlled environments in which companies can test AI systems under real-world conditions without having to comply with all regulatory requirements from the outset. Member states are required to establish at least one such sandbox by August 2026. These sandboxes are intended to promote innovation while providing insights into risks and best practices.
In the sandboxes, companies can test new approaches and benefit from regulatory flexibility. Authorities oversee the tests and gain valuable insights into the practical challenges of AI regulation. This is intended to contribute to the evidence-based further development of the legal framework.
How does the AI Regulation relate to other EU laws?
The AI Regulation does not exist in a vacuum; it must be harmonized with other EU laws. Its relationship with the General Data Protection Regulation (GDPR) is particularly relevant, as AI systems frequently process personal data. The AI Regulation supplements the GDPR and creates additional requirements specifically for AI systems.
The AI Regulation must also be coordinated with sector-specific regulations such as the Medical Devices Regulation or the Machinery Regulation. In many cases, both sets of rules apply in parallel, increasing compliance requirements for companies. The EU is working on guidelines to clarify the interplay between the various legal acts.
What role does cybersecurity play in the AI Regulation?
Cybersecurity is a central aspect of the AI Regulation. Providers must ensure that their systems are robust against cyberattacks and cannot be manipulated. This includes measures to protect against adversarial attacks, in which specially crafted inputs are intended to trick the AI system into making errors.
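The kind of manipulation meant here can be illustrated with the classic fast gradient sign method (FGSM): a small, targeted change to the input shifts the model's output. The toy classifier below is entirely made up for illustration and does not model any specific provider's system.

```python
# Toy illustration of an adversarial ("evasion") attack of the kind the
# robustness requirements target: the fast gradient sign method (FGSM)
# against a small logistic-regression classifier. All weights and inputs
# are made up; no real provider's system is modeled here.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)  # toy model weights
b = 0.1
x = 0.3 * w             # a benign input the model assigns to class 1 with confidence

def predict(x: np.ndarray) -> float:
    """Sigmoid probability of class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the cross-entropy loss w.r.t. the input: (p - y) * w
y = 1.0
grad = (predict(x) - y) * w

# FGSM: nudge every feature by epsilon in the direction of the gradient's sign
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad)

print(f"clean prediction:       {predict(x):.3f}")     # high confidence in class 1
print(f"adversarial prediction: {predict(x_adv):.3f}")  # confidence collapses
```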
Cybersecurity requirements vary depending on the risk level of the AI system. High-risk systems and systemically risky GPAI models must meet particularly high standards. Providers must conduct regular security assessments and promptly remediate vulnerabilities. Security incidents must be reported to the authorities.
How are cross-border issues dealt with?
The global nature of AI systems raises complex cross-border issues. Many AI providers are based outside the EU but offer their services to European users. The AI Regulation applies to all AI systems placed on the market or used in the EU, regardless of the provider's location.
This creates practical challenges for enforcement. The EU must cooperate with third countries and potentially negotiate agreements on the mutual recognition of standards. At the same time, European companies operating internationally may have to comply with different regulatory requirements in different markets.
What support is available for affected companies?
To assist companies in implementing the AI Regulation, the EU and its member states have established various support measures. The EU AI Office regularly publishes guidelines and explanatory notes on key aspects of the regulation. These documents are intended to provide practical assistance in interpreting and applying the rules.
National authorities also offer advice and support. In Germany, for example, the Federal Network Agency has developed an AI Compliance Compass to guide companies through regulatory requirements. Industry associations and consulting firms offer additional resources and training.
How will the international discussion develop further?
The international discussion on AI regulation is dynamic and complex. While the EU is moving forward with its comprehensive regulation, other countries are closely monitoring developments. Some are considering similar approaches, while others are deliberately pursuing alternative paths.
International organizations such as the OECD, the G7, and the UN are working on global principles for responsible AI. These efforts aim to create a common framework that can bridge differing regulatory approaches. The challenge lies in finding consensus among countries with very different values and priorities.
What does this mean for the future of AI development?
The EU AI Regulation will undoubtedly shape the landscape of AI development. Some experts see it as a necessary measure to strengthen trust in AI systems and ensure their responsible development. They argue that clear rules will lead to better and safer AI systems in the long run.
Others fear that regulation could weaken Europe's innovative strength. They point out that compliance costs pose a hurdle, especially for smaller companies, and that talented developers could potentially migrate to less regulated markets. The coming years will show which of these predictions come true.
Europe's regulatory path: protection and progress in artificial intelligence
The introduction of stricter rules for AI systems in the EU marks a historic moment in technology regulation. With the gradual implementation of the AI Regulation, Europe is breaking new ground and setting standards that may be replicated worldwide. Balancing protection and innovation, between security and progress, is becoming a key challenge.
For all involved – from large tech companies to startups and individual users – this represents a time of change and adaptation. Successful implementation will depend on how well the regulation's abstract principles can be translated into practical solutions. Collaboration among all stakeholders will be crucial: regulators, businesses, academia, and civil society must work together to ensure that AI can realize its positive potential while minimizing risks.
The coming years will show whether the EU has created a model for the world with its regulatory approach, or whether alternative approaches will prove superior. The only thing that is certain is that the debate about the right balance between AI innovation and regulation will continue for a long time to come. The rules that will come into force on August 2, 2025, are only the beginning of a longer development that will shape the digital future of Europe and possibly the world.
We are there for you – advice – planning – implementation – project management
☑️ SME support in strategy, consulting, planning and implementation
☑️ Creation or realignment of the AI strategy
☑️ Pioneer Business Development
I would be happy to serve as your personal advisor.
You can contact me by filling out the contact form below or simply call me on +49 89 89 674 804 (Munich).
I'm looking forward to our joint project.
Xpert.digital – Konrad Wolfenstein
Xpert.Digital is a hub for industry with a focus on digitalization, mechanical engineering, logistics/intralogistics and photovoltaics.
With our 360° business development solution, we support well-known companies from new business to after sales.
Market intelligence, smarketing, marketing automation, content development, PR, mail campaigns, personalized social media and lead nurturing are part of our digital tools.
You can find more at: www.xpert.digital – www.xpert.solar – www.xpert.plus