Digital EU Omnibus and AI: How much special legislation can Europe's data order tolerate?
Xpert pre-release
Published on: December 22, 2025 / Updated on: December 22, 2025 – Author: Konrad Wolfenstein

Digital EU Omnibus and AI: How much special legislation can Europe's data order tolerate? – Image: Xpert.Digital
Brussels preaches deregulation – and opens the back door for Big Tech to access Europe's data resources
What the digital EU Omnibus would actually change
The planned EU digital omnibus is far more than a mere "clean-up" of European digital law. Behind the rhetoric of simplification and bureaucracy reduction lies a profound intervention in the fundamental logic of the European data order. Instead of simply harmonizing forms or streamlining reporting obligations, the Commission is tampering with core principles of the General Data Protection Regulation (GDPR) and other digital regimes. At the same time, it is attempting to adapt the legal framework for artificial intelligence (AI) and the data economy so that European and international companies can work more extensively and easily with personal data.
Economically, this signifies a strategic shift: away from strictly fundamental rights-oriented, technology-neutral regulation, towards a more technology-policy-driven approach that treats AI as a privileged future industry. The omnibus thus not only creates clarity but also an asymmetric advantage for certain business models – especially those companies that benefit from economies of scale in data collection and the training of large models. This restructures incentives and power dynamics in the data markets.
At its core is the proposed new Article 88c of the GDPR, flanked by amendments concerning sensitive data, information obligations, end-device data protection, and cookie rules. The omnibus is thus a political-economic project: it defines who may develop AI, at what legal risks and costs, who has access to which data resources, and whose business model is facilitated or hindered by regulation. The debate about whether this constitutes an "unbounded special legal zone" for AI is therefore not merely a legal one, but also directly relevant to industrial and competition policy.
Technology neutrality versus AI privilege: Erosion of a core principle of the GDPR
The GDPR was deliberately designed to be technology-neutral. It does not refer to specific technologies, but rather to the processing of personal data, regardless of whether this is carried out by simple algorithms, classic software, or highly complex AI systems. This principle ensures that similar risks to fundamental rights are regulated similarly. The Omnibus is gradually undermining this principle.
Article 88c aims to explicitly qualify the development and operation of AI systems as a legitimate interest within the meaning of Article 6(1)(f) GDPR. This grants the AI context its own, technology-specific special treatment. From an economic perspective, this means that a specific technology – AI – is legally privileged, even though its risks are often higher than those of conventional data processing methods. Compliance with the AI Act only partially resolves this issue: the levels of protection are not identical, and the AI Act itself is risk-based rather than a comprehensive regime for personal data.
Furthermore, the definition of AI is extremely broad. If virtually any advanced form of automated data analysis can be interpreted as an AI system within the meaning of the AI Act, Article 88c extends the scope of the privilege far beyond classic "GenAI" or deep learning applications. In practice, companies could declare almost any data-intensive, automated processing to be AI in order to benefit from more favorable legal treatment. The dividing line between "normal" data processing and "AI processing" becomes blurred, and this very ambiguity is economically attractive: it reduces compliance costs and legal vulnerability for appropriately positioned actors.
The result would be a de facto technological advantage that undermines the neutral, fundamental rights-oriented design of the GDPR. This has far-reaching consequences for the market order in the digital single market: those who are "AI" and can credibly substantiate this legally would gain easier access to data, less legal uncertainty, and potentially lower enforcement costs.
Data minimization under pressure: When mass becomes legitimacy
A particularly critical point of the omnibus concerns the handling of sensitive data – such as information on health, political opinions, ethnic origin, or sexual orientation. These data categories are subject to a strict processing ban under the GDPR, with only a few narrowly defined exceptions. The omnibus now introduces additional exceptions by citing the training and operation of AI systems as specific justifications.
The economically explosive aspect is not so much the mere opening up of data, but rather the underlying supply logic: the more data-intensive and massive the processing, the easier it is to justify it as necessary for the development of high-performance AI models. The principle of data minimization – targeted, minimal data use – is turned on its head. Data abundance becomes a justification, not a threat.
For data-hungry business models, especially global platforms with gigantic user bases, this is a structural advantage. Those who possess billions of data points and the technical means to comprehensively absorb and process them in models can more easily exploit the narrative of necessity than small or medium-sized enterprises with limited data sets. What is sold as an innovation-friendly simplification, therefore, in practice reinforces economies of scale and network externalities in favor of companies that already dominate the market.
At the same time, collective vulnerabilities arise on the risk side. AI systems trained on widely collected sensitive data are structurally susceptible to data leaks, re-identification, and discriminatory patterns. Even though the omnibus requires "appropriate technical and organizational measures," these requirements are deliberately formulated in broad terms. This openness has a twofold economic effect: On the one hand, it enables flexible, innovative approaches to technical data protection; on the other hand, it shifts liability and proof risks to smaller providers who have fewer resources to credibly implement complex protection concepts.
Digital EU Omnibus: Regulatory clarity or a carte blanche for data-hungry AI corporations?
Bureaucracy reduction as a pretext for a tectonic shift in the data protection regime – Why the “digital omnibus” is far more than a technical streamlining law
The planned "digital EU omnibus" is being sold by the European Commission as a pragmatic cleanup project: less bureaucracy, more coherence, better competitiveness in the digital single market. Political communication is dominated by the narrative of "simplification"—a word that almost inevitably evokes positive associations in European politics. In reality, however, this is not merely an editorial overhaul, but a profound intervention in the fundamental logic of European data protection and digital regulation as a whole.
The focus is on the role of artificial intelligence and data-driven business models. The omnibus proposal links several legal acts – in particular the GDPR, the AI Act, the Data Act, and the ePrivacy Directive – in a new way, shifting the balance in favor of expansive data use. Under the guise of creating legal certainty and facilitating innovation, a new regime is outlined in which large-scale data processing for AI is privileged rather than restricted. This is precisely where the massive criticism from data protection lawyers, consumer associations, and parts of the academic community begins.
The analysis of Spirit Legal's report for the German Federation of Consumer Organizations (vzbv) sheds light on a core conflict in European digital policy: Can Europe simultaneously be a global AI hub, a true guardian of fundamental rights, and a protector of consumers – or will data protection be silently sacrificed to geopolitical and industrial policy logic? The omnibus draft suggests that Brussels is prepared to relax the current strict interpretation of the GDPR, at least partially, in favor of an AI-friendly exception regime. The crucial question, therefore, is: Is this a necessary modernization or the beginning of an "unbounded special legal zone" for AI?
Article 88c and the logic of preferential treatment: How technological neutrality becomes special technology law
At the heart of the conflict is the planned new Article 88c of the GDPR. It aims to explicitly classify the development, training, and operation of AI systems as a "legitimate interest" within the meaning of Article 6(1)(f) GDPR. At first glance, this sounds like a mere clarification: AI companies should be able to rely on an established legal basis without having to stumble over consent or special provisions in every single case. However, a paradigm shift is taking place at the core of the legal architecture.
Up to now, the GDPR has been designed to be technology-neutral. It does not distinguish between "AI" and other data processing methods, but rather links rights and obligations to the type of data, the context, and the risk to data subjects. Article 88c would break with this principle: Artificial intelligence would be granted its own privileged access to personal data. This is precisely where Hense and Wagner's warning against a "boundless special legal zone" comes in.
The problem is exacerbated by the AI Act's extremely broad definition of AI. Under the Act, virtually any software that uses certain techniques—from machine learning to rule-based systems—to recognize patterns, make predictions, or support decision-making is considered an AI system. Combined with Article 88c, this could allow almost any sophisticated data processing to be declared AI-relevant. This creates a strong incentive for companies to "label" their infrastructure as AI systems for regulatory purposes in order to access the privileged legal framework.
This transforms a seemingly narrow, special case of AI into a gateway for a systematic relaxation of data protection requirements. The GDPR's technological neutrality—until now an important safeguard against special legislation for specific technologies—would be undermined. Legally, a technology category whose boundaries are already difficult to define in practice would gain a structural advantage over other forms of data processing. In an environment where more and more processes are algorithmically optimized, this is nothing less than a regulatory turning point for the entire future of data capitalism in Europe.
How the principle "the more data, the more likely it is to be allowed" creates a dangerous incentive structure for Big Tech
The omnibus draft becomes particularly controversial where it interferes with the existing logic of data minimization and purpose limitation. The GDPR is based on the idea that only as much personal data may be collected and processed as is absolutely necessary for a specific purpose. This principle was explicitly designed as a counter-model to unlimited data collection and profiling.
The omnibus approach, at least in practice, reverses this logic in the context of AI. Its rationale suggests that large datasets carry particular weight in justifying processing when used to train AI models. The reviewers interpret this as a perverse incentive structure: the more extensive, diverse, and massive the data collected, the easier it is to justify its use for AI. Mass scraping, profiling, and the merging of diverse sources could thus be legitimized under the guise of AI optimization.
Economically, this structure systematically favors those players who already possess gigantic datasets and are capable of aggregating further data on a large scale – primarily US-based platform companies. The more users, the more interaction data, the more connection points, the stronger the alleged "legitimate interest" in pushing this data into AI pipelines. Small and medium-sized enterprises (SMEs) that lack both similar data volumes and comparable infrastructure remain at a disadvantage. The omnibus architecture thus acts as a scaling multiplier for already dominant players.
Furthermore, there is another critical aspect: The argument that large datasets increase the accuracy and fairness of AI systems is sometimes used uncritically as justification. From an economic perspective, it is true that the performance and robustness of models often increase with more data. However, this efficiency gain comes at the cost of increased information asymmetries, concentration of power, and the risk of reproducing personal and social patterns. The proposal largely ignores the fact that data minimization and purpose limitation were not enshrined in the GDPR by chance, but rather as a response to precisely such power imbalances.
Why weakening the protection of special categories of personal data creates a systemic risk
Special categories of personal data – such as data concerning health, ethnic origin, political opinions, religious beliefs, or sexual orientation – are subject to a strict processing ban under the GDPR, with narrowly defined exceptions. The omnibus proposal expands the possibility of using such data in the context of AI development and operation by introducing a new exception. This is justified by the need for comprehensive data to prevent bias and discrimination.
In practice, however, this amounts to a normalization of the use of highly sensitive data without a corresponding strengthening of the control options available to those affected. The construct that sensitive characteristics sometimes appear "unproblematic" as long as they cannot be directly traced back to individual identifiable persons or primarily function as statistical variables in a training dataset is particularly problematic. But even seemingly anonymous or pseudonymized datasets can allow inferences to be drawn about groups, social milieus, or minorities and reinforce discriminatory patterns.
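How quickly supposedly anonymous data becomes identifying can be illustrated with a small, hypothetical example. The following Python sketch (all column names and records are invented for illustration) computes the k-anonymity of a pseudonymized dataset: if any combination of quasi-identifiers occurs only once, the sensitive attribute attached to that record is effectively attributable to a single person, even though no name or ID is stored.

```python
from collections import Counter

# Hypothetical pseudonymized training records: no names or IDs,
# only quasi-identifiers that individually look harmless.
records = [
    {"zip": "10115", "birth_year": 1987, "gender": "f", "diagnosis": "asthma"},
    {"zip": "10115", "birth_year": 1987, "gender": "f", "diagnosis": "diabetes"},
    {"zip": "80331", "birth_year": 1969, "gender": "m", "diagnosis": "depression"},
    {"zip": "50667", "birth_year": 2001, "gender": "d", "diagnosis": "hiv"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

def k_anonymity(rows, keys):
    """Smallest group size over all quasi-identifier combinations (the k in k-anonymity)."""
    groups = Counter(tuple(r[k] for k in keys) for r in rows)
    return min(groups.values())

k = k_anonymity(records, QUASI_IDENTIFIERS)
print(f"k-anonymity of this dataset: {k}")
# k == 1 means at least one record is uniquely identified by ZIP code,
# birth year and gender alone -- the sensitive diagnosis attached to it
# is then effectively attributable to a single individual.
```

In real training corpora with dozens of attributes, unique combinations are the rule rather than the exception – which is precisely the group-level inference risk described above.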
From an economic perspective, such a regulation expands the pool of raw materials for AI models by adding particularly valuable, because profound, information. Health data, political preferences, psychological profiles – all of this data has enormous monetary relevance in the advertising, insurance, financial, and labor market sectors. Whoever gains access to such data on a large scale can develop significantly more granular and therefore more profitable models. The combination of the sensitive nature of the data and its economic potential creates a twofold risk: for individual autonomy and for the collective structure of democracy and social cohesion.
Especially in the context of AI, the risk of systemic biases is high. Models trained on sensitive data not only reproduce information but also implicit value judgments and stereotypes. The proposed "appropriate technical and organizational measures" intended to limit negative effects remain vague in the draft. This creates a gray area: On the one hand, highly sensitive data is opened up for AI training, while on the other hand, clear, enforceable standards for safeguards and controls are lacking. In such an architecture, those actors with technological superiority and a high risk tolerance benefit most.
Erosion through the back door: Recitals instead of standard texts and the weakening of enforcement
Another key criticism from the experts concerns the methodological shift of important protective mechanisms from the legally binding text of the law to the non-binding explanatory notes. What appears to be a technical detail at the level of legal technique has massive practical consequences for the enforceability of the law.
The recitals primarily serve as interpretive guidelines; they are not directly enforceable legal norms. If essential safeguards—such as opt-out procedures, information obligations, or restrictions on web scraping—are primarily enshrined there, rather than in clearly formulated articles, this significantly limits the options available to data protection authorities. Violations become more difficult to prosecute, fines and orders are based on less clear grounds, and companies can argue that these are merely "interpretive aids".
For AI-related mass data processing, this construct acts as an invitation to read the rules expansively. Particularly with web scraping of publicly accessible information—for example, from social networks, forums, or news sites—there is a significant risk that those affected will neither be informed nor have a realistic opportunity to exercise their rights. If the central barrier against such practices is only hinted at in the recitals but not enshrined in the legal text itself, data protection in practice is reduced to a mixture of soft law and the goodwill of corporations.
From an economic perspective, this shifts the cost structure: Companies that aggressively collect data and train AI models benefit from legal ambiguity because regulatory authorities tend to refrain from taking action or must await lengthy court rulings. Legal risks are thus postponed and diminished; in the short term, this creates competitive advantages for particularly risk-tolerant providers. In the competitive landscape, integrity and compliance tend to be penalized, while pushing boundaries appears rewarding – a classic case of regulatory perverse incentives.
Why a separate, narrowly defined standard for AI training data could better balance the conflicting objectives
As an alternative to the blanket legitimation based on "legitimate interest," the experts propose a targeted, independent legal basis for the training of AI systems. From an economic perspective, this would be an attempt to resolve the conflict between promoting innovation and protecting privacy not through a general weakening of data protection, but through specific, strict conditions.
Such a special legal basis could contain several protective barriers:
First, it could enshrine a strict verification requirement stipulating that companies may only access personal data if it can be proven that an equivalent result cannot be achieved with anonymized, pseudonymized, or synthetic data. This would incentivize investment in data anonymization methods, synthetic data generation, and privacy by design. The direction of innovation would shift away from unchecked data collection and toward technical creativity in managing data minimization.
Secondly, such a standard could mandate minimum technical standards to prevent data leakage. AI models must not reproduce or make reconstructible any personally identifiable information from their training data in their outputs. This requires not just simple filters, but robust architectural decisions, such as differential privacy, output control mechanisms, and strict evaluation pipelines. The economic logic here would be clear: investing in model architectures that protect personal data reduces liability risks in the long run and strengthens trust.
Thirdly, the standard could stipulate strict purpose limitation for AI training data. Data that has been collected or used for a specific AI training purpose could not be readily used in other contexts or for new models. This would restrict the widespread practice of treating collected datasets as a permanent resource for various developments. Companies would then need to maintain clearly segmented data pools and transparently document usage paths.
Such a specialized legal framework is not a carte blanche, but rather a qualified authorization. It could structure the tension between AI innovation and the protection of fundamental rights, instead of obscuring it with a general clause. While this might be less "lean" politically, it would be significantly more sound from a rule-of-law perspective, because the conflict would be openly codified and not hidden behind layers of interpretation.
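To make these barriers less abstract, the following Python sketch illustrates – under purely illustrative assumptions, not as a reading of the draft text – how a training-data pipeline could operationalize them: keyed pseudonymization instead of raw identifiers, a purpose-limitation gate enforced at access time, and a differentially private release of aggregate statistics via the Laplace mechanism as one example of leakage protection. All names, parameters, and the privacy budget are hypothetical.

```python
import hashlib
import hmac
import random

# Barrier 1: prefer pseudonymized data over raw identifiers.
SECRET_SALT = b"illustrative-salt"  # in practice: a managed secret, rotated via a key vault

def pseudonymize(identifier: str) -> str:
    """Keyed hash: the same person maps to a stable token without exposing the identity."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# Barrier 3: purpose limitation enforced at access time, not just documented.
class PurposeBoundPool:
    """Data pool that only releases records for the purpose it was collected for."""

    def __init__(self, purpose: str, records: list):
        self.purpose = purpose
        self._records = records

    def fetch(self, requested_purpose: str) -> list:
        if requested_purpose != self.purpose:
            raise PermissionError(
                f"pool is bound to purpose '{self.purpose}'; "
                f"request for '{requested_purpose}' rejected"
            )
        return self._records

# Barrier 2 (one building block): differentially private aggregates via Laplace noise.
def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records: list, epsilon: float = 1.0) -> float:
    """Noisy record count; a counting query has sensitivity 1, so scale = 1/epsilon."""
    return len(records) + laplace_noise(1.0 / epsilon)

if __name__ == "__main__":
    raw = [
        {"user": pseudonymize("alice@example.com"), "clicks": 12},
        {"user": pseudonymize("bob@example.com"), "clicks": 3},
    ]
    pool = PurposeBoundPool(purpose="model-training:v1", records=raw)

    training_data = pool.fetch("model-training:v1")      # permitted purpose
    print("noisy record count:", round(dp_count(training_data, epsilon=0.5), 2))

    try:
        pool.fetch("ad-targeting")                        # out-of-purpose access
    except PermissionError as err:
        print("blocked:", err)
```

The point of the sketch is the structure, not the parameters: necessity, purpose binding and leakage protection become testable properties of the pipeline rather than assertions in a privacy notice.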
A new dimension of digital transformation with 'Managed AI' (Artificial Intelligence) - Platform & B2B Solution | Xpert Consulting

A new dimension of digital transformation with 'Managed AI' (Artificial Intelligence) – Platform & B2B Solution | Xpert Consulting - Image: Xpert.Digital
Here you will learn how your company can implement customized AI solutions quickly, securely, and without high entry barriers.
A Managed AI Platform is your all-round, worry-free package for artificial intelligence. Instead of dealing with complex technology, expensive infrastructure, and lengthy development processes, you receive a turnkey solution tailored to your needs from a specialized partner – often within a few days.
The key benefits at a glance:
⚡ Fast implementation: From idea to operational application in days, not months. We deliver practical solutions that create immediate value.
🔒 Maximum data security: Your sensitive data remains with you. We guarantee secure and compliant processing without sharing data with third parties.
💸 No financial risk: You only pay for results. High upfront investments in hardware, software, or personnel are completely eliminated.
🎯 Focus on your core business: Concentrate on what you do best. We handle the entire technical implementation, operation, and maintenance of your AI solution.
📈 Future-proof & Scalable: Your AI grows with you. We ensure ongoing optimization and scalability, and flexibly adapt the models to new requirements.
More about it here:
AI needs a lot of electricity, not just chips: Why energy is becoming the new currency of the global AI economy
Vulnerable groups and the digital biography: Why children and young people are in danger of becoming the testing ground for AI capitalism
A particularly sensitive aspect concerns the protection of minors and other vulnerable groups. Children and young people already generate enormous amounts of digital traces – on social media, in gaming environments, on educational platforms, and in health apps. This data paints a highly detailed, often lifelong digital biography. In the context of AI training and personalization, the question arises as to what extent this data may be incorporated into models without specific, informed, and reversible consent.
The experts advocate for explicit parental consent whenever data from minors is to be used for AI training purposes. Furthermore, they propose that young adults, upon reaching the age of majority, should have an unconditional right to prohibit the further use of their data in existing models. This would mean that not only future data processing, but also the previous use of data in trained models would have to be corrected – to the extent technically possible.
From an economic perspective, this is inconvenient but crucial. Data from minors is particularly attractive for AI applications because it enables early pattern recognition, long-term profiling, and targeted advertising over years (or even decades). In consumer, education, and advertising markets, such long time horizons are enormously valuable. If this data is used unregulated as a training basis, corporations will gain a data advantage that is virtually impossible to overcome. The younger generation would thus become a systematic resource for a long-term AI business model without ever having made a conscious, informed decision.
At the same time, there is a risk that errors, prejudices, or unfortunate periods in digital life will remain permanently present in the models—for example, if previous online activities indirectly influence careers, loans, or insurance terms. Even if the models officially operate "anonymously," correlations at the group level can have long-term effects on the educational and employment opportunities of certain social groups. Those who grow up in a problematic social environment are statistically more likely to find themselves in negative risk profiles. Therefore, the lack of robust safeguards for minors perpetuates social inequality in an algorithmic form.
The political rhetoric of "digital sovereignty for the next generation" remains hollow when the very group that will be exposed to the future digital ecosystem is currently being fed into AI data streams largely unprotected. From an economic perspective, the short-term convenience for AI providers—unfettered access to valuable data—comes with long-term societal costs that extend far beyond individual data breaches. The question is whether democratic societies are prepared to make the life stories of their young citizens a primary raw material for the AI industry.
Trust as a production factor: Why weakened data protection is an economic risk for Europe's digital economy
In public debate, data protection is often portrayed as an obstacle to innovation. Empirical data paints a different picture. Representative surveys conducted by the German Federation of Consumer Organizations (vzbv) show that trust is a key prerequisite for the use of digital services for an overwhelming majority of consumers. When 87 percent of respondents state that trust is a fundamental requirement for their digital use, it becomes clear: without a credible legal framework and effective means of control, a viable market for complex, data-intensive applications cannot emerge.
The GDPR currently plays a dual role. On the one hand, it limits certain business models in the short term or forces companies to incur additional costs. On the other hand, it acts as an institutional anchor of trust: Over 60 percent of consumers say they are more likely to trust companies that demonstrably comply with European data protection regulations. This trust is not a vague "feeling," but a real economic factor. It determines whether users are willing to disclose sensitive information, test new services, or trust data-driven systems in everyday situations—for example, in the healthcare or financial sectors.
If this anchor is weakened because the impression arises that data protection is being gradually diluted and fundamental principles sacrificed in favor of AI interests, there will be consequences. In the short term, data usage may be made easier for some companies. In the medium term, however, skepticism towards the entire ecosystem grows. Users react with avoidance behavior, evasive strategies, conscious data reduction, or by resorting to particularly restrictive tools. Trust, once lost, is difficult to regain – and the costs of doing so are higher than the effort required to adhere to a robust, consistent legal framework from the outset.
This has a strategic implication for the European digital economy: competitive advantages over US platforms cannot be gained primarily through sheer volume of data and aggressive data collection – others are already far ahead in this regard. The realistic path to differentiation lies in trustworthiness, transparency, accountability, and the credible integration of data-intensive services into a values-based regulatory framework. The omnibus approach, which effectively signals the opposite, thus undermines precisely the strength that Europe could have developed in global competition.
Asymmetric effects: Why the omnibus strengthens Big Tech and weakens European SMEs
A key criticism is that the planned regulatory relief measures structurally benefit primarily large, data-rich platform companies – those commonly referred to as "Big Tech." The underlying economic logic is simple: companies that already possess vast amounts of data, operate a global infrastructure for data collection and processing, and maintain specialized compliance teams can strategically exploit regulatory loopholes and exceptions without facing existential risks. For small and medium-sized enterprises (SMEs), the calculation is quite different.
Recognizing AI training and operation as a "legitimate interest" requires complex balancing processes: the company's interests must be weighed against the rights and freedoms of those affected. Large corporations have the legal departments to substantiate such considerations with elaborate documentation and the market power to absorb potential fines as a calculated risk in the long term. Smaller companies, on the other hand, face the choice of either cautiously refraining from riskier, but potentially competitively relevant, data uses or venturing into gray areas without sufficient legal expertise.
Furthermore, there is the network effect: If large-scale data use for AI training is facilitated, naturally those who already possess massive amounts of data will derive the greatest benefit. Every additional data package improves their models, increases the attractiveness of their services, and in turn amplifies the influx of more users and data. As a result, the market equilibrium shifts further in favor of fewer global platforms. European providers attempting to compete with less data-intensive but more privacy-friendly approaches find themselves in an increasingly defensive position.
The politically communicated objective of strengthening European companies and expanding digital sovereignty thus contradicts the actual effects of the regulations. Deregulation that primarily benefits those already at the top increases the concentration of power instead of limiting it. For European industrial and location policy, this means that what is sold as "relief" can turn into structural dependence on foreign data and AI infrastructures. Sovereignty is not achieved through lax rules, but through the ability to build one's own trustworthy and competitive alternatives.
As the Omnibus debate shows, European digital policy is being caught between industrial interests and fundamental rights
The suspicion that the Digital Omnibus was largely created under the influence of the US government and American technology companies points to the geopolitical dimension of the debate. In the global AI race, data flows, model access, and cloud infrastructures are strategic resources. For the US, whose digital economy benefits greatly from the exploitation of European user data, a more flexible European legal framework is of great interest.
An omnibus package that weakens European data protection standards indirectly lowers the barriers to data transfers, training collaborations, and the integration of European data into global AI models. Even if formal transfer rules—for example, within the framework of transatlantic data agreements—remain in place, a relaxation of intra-European safeguards reduces the political and regulatory pressure to actually handle such transfers restrictively.
At the same time, Europe is sending an ambivalent signal to other regions of the world. The GDPR has often been regarded as a global benchmark; numerous countries have based their data protection laws on it. If it now becomes apparent that the EU itself is prepared to relax key principles in favor of AI industry interests, this weakens its normative leadership. Other countries could conclude that strict data protection frameworks are ultimately being sacrificed to economic realities – with the consequence that global protection standards as a whole are eroding.
From a power-political perspective, Europe thus faces a dilemma: If it adheres to a strict framework of fundamental rights, it risks short-term competitive disadvantages in the AI race. If it gradually abandons this strictness, it might gain somewhat more flexibility, but loses its identity as a protector of digital self-determination. The Digital Omnibus, as it is currently conceived, attempts to bridge this dilemma through ambivalence: Outwardly, it upholds fundamental values, but in detail, it creates loopholes and exceptions that effectively allow for widespread data use. Economically, however, this does not lead to clarity, but rather to a hybrid system in which uncertainty becomes the norm.
Two paths for Europe's digital economy and their medium- to long-term consequences
To assess the economic impact of the digital Omnibus, it is worth outlining two rough scenarios: one in which the draft is adopted largely as it stands, and one in which key criticisms are addressed and the course is noticeably corrected.
In the first scenario, AI training and operation would be widely recognized as a legitimate interest, sensitive data would be more frequently incorporated into training pipelines under vague safeguards, and essential safeguards would only be mentioned in the explanatory notes. In the short term, some European companies—especially those with already extensive datasets—could benefit because legal risks would be perceived as mitigated. Investors would see new growth opportunities in certain segments, particularly in the areas of generative models, personalized advertising, healthcare, and FinTech applications.
In the medium term, however, the side effects described at the outset would intensify: concentration effects favoring global platform companies, declining user trust, increasing social conflicts over discretionary data use, and growing pressure on policymakers and regulators to retrospectively correct problematic developments. Legal uncertainty would not disappear, but merely shift: instead of individual, clear prohibitions, there would be countless disputes over borderline cases, in which courts would have to establish precedents for years. For companies, this would create a risk exposure shaped by volatile interpretation – the supposed relief would prove illusory.
In the alternative scenario, the omnibus would still aim for simplification and harmonization, but would be refined in key areas. Article 88c would be reduced to a narrowly defined, specific legal basis for AI training, explicitly reaffirming data minimization, purpose limitation, and data subject rights. Sensitive data would only be usable under clear, stringent conditions, and essential safeguards would be enshrined in the text of the regulation rather than hidden in recitals. At the same time, the legislator would create targeted instruments to support SMEs in using data in compliance with the GDPR – for example, through standardized guidelines, certifications, or technical reference architectures.
In the short term, this scenario would be more inconvenient for some business models; certain data-intensive AI projects would need to be redesigned or equipped with different data architectures. In the long term, however, a more stable, trust-based ecosystem could develop, in which innovation does not thrive in the shadow of legal gray areas, but rather along clear, reliable guidelines. For European providers, this would present an opportunity to develop a profile as a provider of "trusted AI" with verifiable guarantees – a profile that is increasingly in demand in both consumer and B2B markets.
Why an open debate on the core conflict between innovation and fundamental rights is now necessary
With the Digital Omnibus now being debated in the EU Council and the European Parliament, the responsibility for making corrections no longer rests solely with the Commission. Civil society actors, consumer protection groups, and data protection advocates have made it clear that they see the draft as a systemic threat to the European data protection model. Policymakers face the choice of whether to take these objections seriously or marginalize them under pressure from lobbying interests.
Economically, the temptation is great to send short-term relief signals to companies – especially at a time when the EU is criticized in the global AI race for being too cumbersome and overly focused on regulation. However, it would be a strategic error to sacrifice the core of the European success model in the digital sphere because of this criticism: the combination of market liberalization, protection of fundamental rights, and normative leadership. A digital single market that is formally harmonized but demonstrably deregulated in substance would not secure either investment or public acceptance in the long run.
Instead, what is needed is an explicit political debate about the permissible framework for data use in AI. This includes recognizing that innovation in data-intensive sectors cannot be limitless without eroding fundamental freedoms. It also requires the understanding that data protection can be not only a cost factor but also a competitive advantage when combined with sound industrial and innovation policies. This approach demands more than cosmetic clarifications in the omnibus draft; it requires a conscious decision for a European AI model that differs from the logic of unbridled data capitalism.
Europe's digital future will not be decided by the question of whether AI is "enabled" – but how
Why the digital Omnibus in its current form is riskier than the courage to adopt a stricter, clearer AI data framework
The EU's digital omnibus is more than just a package of technical simplifications. It is a litmus test of whether Europe is prepared to weaken its own data protection commitments in favor of supposedly faster AI progress. The planned preferential treatment of AI data processing via Article 88c, the relative devaluation of the principles of data minimization and purpose limitation, the weakening of the protection of sensitive data, and the relocation of important safeguards to recitals are not minor details, but rather expressions of a fundamental policy decision.
Economically, there is strong evidence that such a course of action primarily strengthens those who already possess power, data, and infrastructure, while weakening European SMEs, consumers, and democratic institutions. Trust is underestimated as a factor of production, regulation is misunderstood as a burden, and the real competitive advantages of a values-based digital ecosystem are squandered. Short-term concessions for AI corporations are thus bought at the price of long-term risks to social stability, the competitive order, and Europe's digital sovereignty.
An alternative, more ambitious strategy would not focus on accelerating AI at any cost, but rather on clear, rigorous, and yet innovation-compatible rules for data use, training processes, and the rights of individuals. It would provide special protection for minors and other vulnerable groups, avoid favoring Big Tech through loopholes, and treat public trust as a strategic resource. Above all, it would recognize that in a digitized economy, fundamental rights are not negotiable parameters, but rather the infrastructure upon which every form of legitimate value creation is built.
The Digital Omnibus, in its current form, is moving in the opposite direction. If Parliament and the Council approve it unchanged, this would be not only a legal but also an economic and political turning point: Europe would relinquish some of its role as a global pacesetter for responsible, fundamental rights-based data management – and move closer to a model in which AI development primarily serves to legitimize ever-expanding data exploitation. The debate surrounding the Omnibus is therefore not a technical detail, but a crucial arena in which the digital order Europe wants to represent in the 21st century will be decided.
Your global marketing and business development partner
☑️ Our business language is English or German
☑️ NEW: Correspondence in your national language!
My team and I would be happy to serve as your personal advisors.
You can contact me by filling out the contact form or simply call me at +49 89 89 674 804 (Munich). My email address is: wolfenstein ∂ xpert.digital
I'm looking forward to our joint project.
☑️ SME support in strategy, consulting, planning and implementation
☑️ Creation or realignment of the digital strategy and digitalization
☑️ Expansion and optimization of international sales processes
☑️ Global & Digital B2B trading platforms
☑️ Pioneer Business Development / Marketing / PR / Trade Fairs
🎯🎯🎯 Benefit from Xpert.Digital's extensive, five-fold expertise in a comprehensive service package | BD, R&D, XR, PR & Digital Visibility Optimization

Benefit from Xpert.Digital's extensive, fivefold expertise in a comprehensive service package | R&D, XR, PR & Digital Visibility Optimization - Image: Xpert.Digital
Xpert.Digital has in-depth knowledge of various industries. This allows us to develop tailor-made strategies that are precisely aligned with the requirements and challenges of your specific market segment. By continually analyzing market trends and following industry developments, we can act with foresight and offer innovative solutions. Through the combination of experience and knowledge, we generate added value and give our customers a decisive competitive advantage.