
Data sovereignty, self-determination and the reality of AI-driven platform moderation – Creative image: Xpert.Digital
The Enderman case: How a bizarre AI error shows how vulnerable we really are online.
One click, everything gone: The silent chaos of AI-driven platform moderation
Life's work destroyed by AI: Why your social media account could simply disappear tomorrow
The great illusion of data sovereignty: How algorithms secretly rule over us
Despite new EU laws: Why tech companies are still allowed to delete arbitrarily
Judge Algorithm: When an AI ends your digital life – and nobody is responsible
We live in a time where terms like "data sovereignty" and "digital self-determination" are not just political slogans, but represent the aspirations of an entire society. With laws like the Digital Services Act, Europe is attempting to build a bulwark against the arbitrary actions of global tech companies and to protect the fundamental rights of its citizens in the digital sphere. But while we debate legal clauses and regulations, a reality is unfolding right before our eyes that makes a mockery of these lofty goals. A reality in which people's digital existence is destroyed at the push of a button – not by a person, but by an opaque algorithm.
Every day, accounts are suspended and channels deleted on platforms like YouTube, TikTok, and Instagram—channels that users have painstakingly built up over years. Their digital life's work vanishes, often without clear justification, without a fair hearing, and without an effective way to appeal the decision. Increasingly, this is due to AI-driven moderation, which is error-prone, opaque, and yet possesses the ultimate power to judge visibility and digital existence. The case of tech YouTuber Enderman, whose channels with hundreds of thousands of subscribers were deleted based on an absurd connection allegedly made by AI, is just the tip of the iceberg. This article explores the deep chasm between our desire for control and the unchecked power of algorithms, which have long since become judges and executioners in our digital public sphere.
Where is the contradiction between our aspiration and our reality?
We constantly talk about data sovereignty and digital self-determination. These terms have become hallmarks of a culture that sees itself as self-confident and independent, and that wants its handling of artificial intelligence to be read as a sign of maturity. The European Union has set out to protect its citizens from the arbitrary actions of global technology corporations with laws such as the Digital Services Act and the Digital Markets Act. Regulations have been enacted to enforce transparency and safeguard fundamental rights. But amid all this regulatory construction, we are overlooking something fundamental: we have not addressed the existential threat that unfolds daily before our eyes and undermines the credibility of all these efforts.
The reality that the major social media channels present to us daily tells a very different story than that of data sovereignty and self-determination. People lose their digital life's work every day, without justification and without any mechanism to counteract it. Channels painstakingly built up over years are deleted. Not after careful review, not after transparent processes, not after the possibility of a fair hearing. Simply deleted. And this happens in a way unworthy of a democracy, because there are no effective appeal mechanisms and those affected often never even learn why years of their time and creativity have been wiped out.
What specific examples demonstrate this arbitrariness?
The most recent and striking example is the case of tech YouTuber Enderman. The Russian content creator had built a main YouTube channel with over 350,000 subscribers, where he explored technological topics. His content had documentary value – he covered older versions of Windows and other technical subjects. This channel was deleted without warning. Shortly before, his secondary channel, Andrew, also with hundreds of thousands of subscribers, had disappeared. The stated reason for this drastic measure was bizarre: YouTube claimed that Enderman's channels were connected to a Japanese channel that had received its third copyright strike. A channel Enderman doesn't know, in whose language he doesn't communicate, and with which he has no connection.
What's remarkable about this case isn't just the injustice of the decision itself, but the way it was made. Enderman suggested that an AI system was behind it, having established a faulty connection between his channels and an unknown Japanese account. The tech YouTuber's hope that a human YouTube employee would review his complaint was dashed. Months passed without a response. Enderman now seems to have resigned himself to the fact that his time on YouTube is over. Another YouTuber reported identical problems in the same Twitter thread – his channel, too, was deleted with reference to the same Japanese channel. This points to systemic failure, not an isolated incident of human error, but rather the shortcomings of an automated system operating unchecked.
YouTube is not an isolated case. Various platforms have exhibited similar patterns. TikTok, Instagram, Facebook, and other services delete content and suspend accounts daily, often without sufficient justification. The transparency organization Freiheitsrechte.org has documented that social media platforms often provide inadequate explanations for their moderation decisions to those affected. In some cases, justifications only refer generally to a violation of the terms of service, without explaining which specific violation led to the action.
Are tech companies living up to their social responsibility?
This is the crucial point where we need to correct our cognitive biases. The major tech companies demonstrably profit from our data, our economic activity, and our society. They use our shared internet as their business foundation. They earn billions from advertising revenue generated by our attention and our personal data. At the same time, these corporations are de facto assuming public and societal responsibilities.
YouTube is not simply a technical service like a mere hosting provider. The platform has become the infrastructure of public communication. It determines visibility, reach, and access for millions of people. It has entrenched itself in the position of gatekeeper of information and knowledge. Facebook and Instagram are similar – these services have become central hubs for social discourse. For many people, these platforms are the primary place to raise their voices, build their communities, and spread their messages.
But while these tech companies profit economically from their role as intermediaries of social communication, they shirk the responsibilities inherent in that role. A private organization entrusted by the state to perform public tasks for a fee cannot simply exclude dissenting voices because it dislikes someone. A public broadcaster cannot simply silence individuals without having heard their side of the story. A court cannot simply convict someone without giving them the opportunity to defend themselves.
Yet this is precisely what happens on these platforms every day. People are excluded without any real justification. Their work is deleted. Their livelihoods are destroyed online. And the platforms' only response is a reference to their terms of service and, at best, an automated complaint system that hardly resolves any issues. This is not only unjust; it is structurally dangerous for an open society.
🤖🚀 Managed AI Platform: Get to AI solutions faster, safer & smarter with UNFRAME.AI
Here you will learn how your company can implement customized AI solutions quickly, securely, and without high entry barriers.
A Managed AI Platform is your all-round, worry-free package for artificial intelligence. Instead of dealing with complex technology, expensive infrastructure, and lengthy development processes, you receive a turnkey solution tailored to your needs from a specialized partner – often within a few days.
The key benefits at a glance:
⚡ Fast implementation: From idea to operational application in days, not months. We deliver practical solutions that create immediate value.
🔒 Maximum data security: Your sensitive data remains with you. We guarantee secure and compliant processing without sharing data with third parties.
💸 No financial risk: You only pay for results. High upfront investments in hardware, software, or personnel are completely eliminated.
🎯 Focus on your core business: Concentrate on what you do best. We handle the entire technical implementation, operation, and maintenance of your AI solution.
📈 Future-proof & Scalable: Your AI grows with you. We ensure ongoing optimization and scalability, and flexibly adapt the models to new requirements.
More about it here:
Automated moderation as a threat to fundamental rights: When AI decides on deletion
How does the use of AI change the problem?
Here, the situation is dramatically worsening. Tech companies are increasingly using automated systems to moderate content and make decisions. These AI systems are not transparent. They are not regularly reviewed. And above all: they also make mistakes with massive consequences. The Enderman case is just one of many examples of how AI-driven moderation leads to absurd or harmful results.
This became particularly evident during the COVID-19 pandemic. When human reviewers were unavailable, social media platforms massively shifted their content moderation to automated systems. The result was a wave of bad decisions. Videos that didn't violate guidelines were deleted. Legitimate content disappeared. Users became frustrated because the platforms couldn't keep their promises.
The limitations of AI-based content moderation are fundamental. Artificial intelligence only functions reliably when sufficient training data is available. Many situations are nuanced and cannot be easily categorized. A phrase like "I had pasta tonight" had a double meaning on TikTok—literally, it referred to food consumption, but in the context of a trend, it signaled suicidal thoughts. The TikTok algorithm failed to grasp this nuance and instead fueled the trend.
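To make the limitation concrete, here is a minimal, purely illustrative sketch of keyword-based filtering, the simplest form of automated moderation. It is not TikTok's or YouTube's actual system; the phrase list, function name, and labels are hypothetical. It shows why matching surface text cannot capture what a phrase means in context.

```python
# Purely illustrative sketch of naive, keyword-based moderation.
# Not any platform's real system; phrases and labels are hypothetical.

FLAGGED_PHRASES = {"i had pasta tonight"}  # phrase whose harmful meaning exists only in a trend context

def naive_moderation(text: str) -> str:
    """Classify a post by literal phrase matching only."""
    normalized = text.lower().strip()
    if normalized in FLAGGED_PHRASES:
        # The rule cannot tell whether this is a harmless dinner update
        # or part of a harmful trend - it sees only the characters.
        return "flagged"
    return "allowed"

print(naive_moderation("I had pasta tonight"))             # flagged, even if meant literally
print(naive_moderation("I had pasta tonight, it was ok"))  # allowed, even if part of the trend
```

The same blindness cuts both ways: harmless posts are removed while trivially reworded harmful ones pass, which is exactly the pattern of over- and under-moderation described above.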
Furthermore, the error rate is systematic. A study by the European Broadcasting Union showed that, in 45 percent of all answers to questions about current events, AI chatbots produced at least one significant problem that could mislead readers. In 81 percent of the results, some kind of error was found. This is not an exception; it is the rule.
Yet these very error-prone and opaque systems are used to decide the fate of millions of people's digital lives. A video is deleted. A channel is deactivated. A company is removed from the platform. And the decision was made by a system that users cannot understand, that is not accountable, and that is allowed to make wrong decisions with impunity.
Where does the state's responsibility lie?
The state isn't simply turning a blind eye. Worse still, the state, which has the power to correct this situation, is instead retreating into bureaucracy and getting bogged down in minute details. There are rules – that's true. The European Union's Digital Services Act stipulates that platforms must be transparent. It gives users the right to complain. It stipulates that very large platforms must disclose their systems and their decisions. All of this sounds good and right on paper.
However, the enforcement of these rules is fragmented. The Federal Network Agency in Germany has taken on the role of Digital Services Coordinator and is now tasked with enforcing these rules. But does this agency have sufficient resources? Does it have enough power? Can individual national authorities truly take action against global tech companies that evade their responsibilities through lawyers and lobbying?
Furthermore, there is a deeper problem. For too long, the state has allowed private corporations to simultaneously play the roles of gatekeeper, judge, and jury. These corporations decide what is right and wrong on their platforms. They deliver verdicts. They enforce sentences. And they are not accountable to anyone. This is not just a regulatory flaw. It is a fundamental failure of democracy.
For a long time, the assumption was that markets regulate themselves, that platforms would act out of reputation and self-interest. This assumption has proven fundamentally wrong. The platforms optimize for engagement and advertising revenue, not for fairness. They run AI systems that are cheaper than human moderation, even though these systems are prone to error. And when an error occurs, they can shift the blame to an algorithm that supposedly made an autonomous decision.
What would be required to change this situation?
First, it must be clarified that the major platforms are not simply private companies over which the state has no say. These companies perform public functions. They are intermediaries of public discourse. They have assumed a societal task, certainly with economic profit, but nonetheless with social responsibility.
This means that fundamental principles of the rule of law must apply to moderation decisions, especially drastic measures such as suspensions or deletions. This means full transparency regarding the reasons for a decision. This means the right to a fair hearing before drastic measures are taken. This means a genuine right to appeal, not an automated complaint system that is ineffective in practice. And it means human review, especially when the original decision was made by an algorithm.
Furthermore, there need to be limits to AI-driven moderation. If a system is fallible and can affect millions of people, a human must always be involved. EU regulations point in this direction, but enforcement is lacking. Platforms constantly find ways to circumvent or undermine these rules.
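What such a human-in-the-loop requirement could look like in practice is sketched below. This is an assumption-laden illustration, not a description of any platform's real pipeline: the action names, the confidence threshold, and the routing function are all hypothetical. The point is the structure – irreversible measures never execute automatically, and low-confidence verdicts always go to a person.

```python
# Minimal sketch of a human-in-the-loop gate for moderation decisions.
# All names, thresholds, and actions are hypothetical and for illustration only.

from dataclasses import dataclass

@dataclass
class ModerationVerdict:
    action: str        # e.g. "delete_channel", "remove_video", "no_action"
    confidence: float  # model confidence between 0.0 and 1.0

REVIEW_THRESHOLD = 0.95
IRREVERSIBLE_ACTIONS = {"delete_channel", "suspend_account"}

def route(verdict: ModerationVerdict) -> str:
    """Decide whether an automated verdict may be executed or must go to a human."""
    # Drastic, hard-to-reverse measures always require a human reviewer,
    # no matter how confident the model claims to be.
    if verdict.action in IRREVERSIBLE_ACTIONS:
        return "human_review"
    # Lower-stakes actions run automatically only above a confidence bar.
    if verdict.confidence >= REVIEW_THRESHOLD:
        return "execute_automatically"
    return "human_review"

print(route(ModerationVerdict("delete_channel", 0.99)))  # human_review
print(route(ModerationVerdict("remove_video", 0.80)))    # human_review
print(route(ModerationVerdict("remove_video", 0.99)))    # execute_automatically
```

The design choice that matters is the first branch: no confidence score, however high, lets the system delete a channel on its own.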
A structural change in accountability is also needed. Platforms must be held liable for the decisions of their systems. Not metaphorically liable, but legally liable. If a channel is wrongfully deleted, the platform should be obligated to pay damages. This would change the incentives. Suddenly, it would no longer be cheaper to use a faulty automated system. Suddenly, there would be a price to unjustly harm people.
For Enderman, this would have meant that YouTube couldn't simply delete his channel because an AI system made a faulty connection to a Japanese account. There should have been a review. There should have been an opportunity to respond. And if the error went unnoticed, YouTube could have been held liable.
What will happen if these problems are not solved?
The answer is devastating. If we allow AI systems to arbitrarily decide on people's digital existence, then chaos won't arrive with AI—chaos is already here. It will only intensify. Because the more intelligent these systems become, the less we understand them. And the less we understand them, the less we can control them.
Even worse: The problem will grow exponentially. The use of AI in content moderation will intensify. The systems will become more complex. Error rates may decrease or increase—no one knows for sure. But what is guaranteed is that millions, and soon billions, of people will be affected by decisions they don't understand, can't challenge, and for which there is no accountability.
And while this is happening, the state looks the other way. The Federal Network Agency outlines its responsibilities. The EU enacts laws. But enforcement is half-hearted. The authorities are under-resourced. The platforms pay fines that are mere pocket change for them and don't really change their practices. The status quo persists: tech companies act as unchecked rulers of the digital public sphere.
What's remarkable about this situation is that it's avoidable. Solutions exist. There are ways to make data sovereignty and digital self-determination a reality, not just normative goals. But for that to happen, the state would have to abandon its indifference. It would have to recognize that this isn't just a regulatory issue, but a power imbalance. Tech companies have power. They must harness that power for the benefit of society, or it must be taken away from them.
Until then, cases like Enderman's remain symptomatic of a system that isn't working. A man loses his life's work. No one can help him. And the machine that destroyed his life's work continues to run undisturbed, reviewing new cases, making new judgments, and the state documents it all in administrative files while the smoke rises.
Advice - planning - implementation
I would be happy to serve as your personal advisor.
Contact me at Wolfenstein ∂ Xpert.digital
Call me at +49 89 674 804 (Munich)
Download Unframe's Enterprise AI Trends Report 2025
Click here to download:

