Published on: January 31, 2025 / Last updated: January 31, 2025 - Author: Konrad Wolfenstein
Software giant urges the EU to act: a multi-billion-euro AI project is possible
Europe in focus: a revolutionary AI investment is at stake
The announcement by a leading European software company that it would invest up to 40 billion euros in a joint AI project, provided the European framework is improved, has caused a considerable stir. Many interpret this statement as a strong commitment to the European market and as an indication that Europe has considerable potential in the field of artificial intelligence (AI). Nevertheless, numerous companies and investors still hesitate to gain a foothold in Europe or to realize AI projects here. One of the main reasons is the current set of legal and bureaucratic requirements, which is often perceived as strict or inhibiting compared to the USA and China. At the same time, it is clear that balanced rules are necessary to build trust in AI technologies and to minimize risks.
The following text illuminates the background of this situation, looks at the different strategies of the EU, the USA, and China, and presents concrete recommendations for how the European Union can improve its framework conditions in order to remain competitive while ensuring responsible, ethically justifiable AI applications. It is not just about legal aspects, but also about investment in research and development, the expansion of digital infrastructure, the promotion of talent, and Europe's role on the way to global AI governance.
"Ai act": Europe's answer to the challenges of AI
To meet the growing influence of AI technologies, the EU is working at full speed on uniform regulation. An essential component of this is the so-called "AI Act", the first comprehensive legal framework for artificial intelligence in Europe. The aim is to create clear rules that promote innovation on the one hand and, on the other, limit the abuse of AI systems and their potential risks to security and fundamental rights. This balancing act is not easy: companies and research institutions should find an attractive environment, while consumers, citizens, and society as a whole should be protected by strict requirements.
In essence, the "AI Act" provides for a classification of different AI applications according to risk categories. Systems that only represent minimal risks, such as simple chatbots or programs for automated spam filtering, are said to be subject to as few bureaucratic hurdles as possible. On the other hand, AI solutions that are used for safety-related applications are used, for example, in sensitive areas such as medicine, law enforcement, traffic or robotics. For this "high risk" systems, the "AI Act" provides strict requirements for transparency, security and reliability. Systems that are considered "unacceptable risky", for example if they could be used for socially undesirable surveillance purposes or for manipulation, should generally be banned.
In simplified terms, the four risk categories can be pictured as follows:
- First, there are systems with "minimal or no risk" that are not subject to any specific obligations. Examples include video games or filters for unwanted emails.
- Second, there is "limited risk" in which transparency requirements take effect. This includes, for example, that users need to know if they communicate with a AI. Simple chatbots or automated information systems fall into this category.
- Third, "high -risk systems" are defined, which are either security -critical or are used for significant decisions, for example in medicine. These must meet strict criteria in terms of accuracy, responsibility and traceability.
- Finally, there are "unacceptable risks", which are to be banned entirely from the European market, for example systems that manipulate human behavior, socially score people, or threaten fundamental rights.
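The tiered logic described above can be sketched in code. The following Python snippet is purely illustrative: the tier names follow the AI Act's categories, but the example use cases, the mapping, and the `obligations` helper are simplified assumptions for illustration, not the Act's actual legal test.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (simplified illustration)."""
    MINIMAL = "minimal or no risk"        # e.g. spam filters, video games
    LIMITED = "limited risk"              # transparency duties, e.g. chatbots
    HIGH = "high risk"                    # strict requirements, e.g. medical AI
    UNACCEPTABLE = "unacceptable risk"    # banned, e.g. social scoring

# Hypothetical mapping of example use cases to tiers; the real legal
# classification depends on detailed criteria in the Act, not on keywords.
EXAMPLE_CLASSIFICATION = {
    "spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "medical diagnosis support": RiskTier.HIGH,
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Return the (simplified) regulatory consequence for a tier."""
    return {
        RiskTier.MINIMAL: "no specific obligations",
        RiskTier.LIMITED: "transparency requirements (disclose AI use)",
        RiskTier.HIGH: "strict requirements on accuracy, oversight, traceability",
        RiskTier.UNACCEPTABLE: "prohibited on the European market",
    }[tier]

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

The design point the sketch makes is that obligations scale with risk: moving a use case one tier up changes the regulatory consequence, not the technology itself.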
Proponents of the "Ai Act" welcome this approach because he focuses on people and specifies clear ethical guidelines. Critics, on the other hand, object that a regulation that is too restrictive could make development and innovation process in Europe difficult. In fact, it is a challenge to master the tightrope walk between security and freedom of innovation.
USA and China: differences in the AI strategy
While Europe is trying to protect ethical standards and fundamental rights through a comprehensive legal framework, a more market-oriented approach prevails in the United States, in which competition and freedom to innovate have top priority. China, on the other hand, relies on a centrally controlled strategy in which the state not only coordinates research funding but also exercises control over the social effects of AI.
Market orientation in the USA
So far there has been no comprehensive federal law in the United States regulating AI. Instead, the country relies on a flexible approach made up of individual initiatives at the federal level and varying rules from state to state. Numerous funding programs support research and development, especially in the military, medical, and university sectors. At the same time, a growing number of specific regulations is coming into force at the level of individual states, relating, for example, to protection against discrimination, data protection, and the transparency of AI applications.
Colorado has adopted a law intended to regulate the use of so-called "high-risk" AI systems by obliging developers and operators to actively avoid discrimination and report any cases that occur. Other states, such as California, focus on citizens' informational self-determination and give them the right to object to automated decision-making by companies. In addition, there are guidelines from the US Patent and Trademark Office clarifying that AI-assisted inventions are not fundamentally excluded from patenting. However, it must remain clear which "essential contributions" come from the human side, since patent law is geared toward the recognition of human inventiveness.
This coexistence of federal guidelines, state laws, and industry-specific recommendations reflects the mix of deregulation, promotion of competition, and selective regulation that is typical for the USA. The result is a dynamic, sometimes confusing landscape in which start-ups, large corporations, and universities try to advance innovation under loosely defined framework conditions. An American AI researcher explains: "The greatest possible scope for experiments and technologies ensures a rapid pace, but also carries new risks that we control only inadequately in some areas."
China's centrally controlled strategy
China has set itself ambitious goals and wants to become the world's leading AI location by 2030. To achieve this, the Chinese government is investing massively in AI research, infrastructure, and training. The state not only bears responsibility for developing high-tech parks and large research facilities, but also regulates the content that AI systems can access. At the same time, a system has been built up that enables and specifically steers a large number of social applications of AI.
This is accompanied by strict regulation that goes far beyond pure technology. There are guidelines intended to ensure that AI systems do not generate "harmful" content. Developers and operators are obliged to build in mechanisms that filter unauthorized or politically sensitive content before it reaches end users. At the same time, AI developers must always take care not to produce discriminatory or illegal results. Content classified as socially questionable can be legally sanctioned.
The labeling obligation for AI-generated content also plays an important role. For texts, images, or videos created using AI, users must be able to recognize that they are not dealing with human authors. This obligation serves not only consumer protection but also state control over media content. Chinese regulations also emphasize the avoidance of bias in algorithms, so that social inequalities are not further cemented. The specifications state: "Every form of algorithmic discrimination is to be refrained from."
The centralized approach in China enables rapid implementation of large-scale programs, but raises questions about the freedom of research and innovation. Critical voices emphasize that controls and censorship could restrict creativity. Nevertheless, it is undeniable that China has made considerable progress, especially in the practical application of AI systems, from image recognition to facial recognition to voice assistants.
Comparison: EU vs. USA vs. China
If one compares the European "AI Act" with the strategies of the United States and China, an interesting picture emerges. Europe follows the principle of "innovation in harmony with fundamental rights and ethical norms", though there is concern that strict regulation could inhibit innovation. The United States pursues a model that focuses on competition and flexibility. This can lead to extremely rapid progress, but also to weaker consumer protection if local regulations fall short. China, in turn, combines tight control from above with high investments in technology, which leads to fast and far-reaching developments but raises questions about the freedom of individual and economic actors.
An industry expert describes the situation as follows: "In Europe, great importance is placed on AI systems being transparent, safe, and fair. In the United States, the focus is on the speed of innovation, while in China strong top-down control prevails, in which technology is seen as a central instrument of economic and social development."
At the same time, a discourse is taking place in Europe about how much regulation is necessary so that neither entrepreneurs nor investors have to fear deterrent bureaucracy. The basic idea behind the "AI Act" is: "It is better to regulate AI clearly in order to create legal certainty than to have a patchwork of individual laws in which start-ups could fail."
The starting point in the EU: strengths and weaknesses
Europe undoubtedly has a very strong research landscape. The continent's universities and research institutions are among the best in the world, and many high-ranking publications and groundbreaking studies come from EU countries. At the same time, European countries are leaders in areas such as robotics, engineering, and industrial supply, which is extremely important for AI applications that rely not only on software but also on hardware.
However, many companies complain that Europe is slowed down by excessive bureaucratic hurdles, lengthy approval processes, and complex data protection rules. While the General Data Protection Regulation (GDPR) is considered a showcase project for the protection of personal data, some AI developers perceive it as an obstacle to data collection and use. In addition, companies in Europe often have difficulty accessing venture capital, because most investors are based in the United States or Asia.
A start-up founder sums up the dilemma as follows: "We have extremely well-trained talent in Europe and a high degree of scientific expertise. At the same time, it is more difficult than in America to mobilize large sums of money for risky projects. If you want to grow quickly in Europe, you struggle with bureaucratic effort and funding gaps."
In order to catch up in the AI race, the EU must therefore adjust several levers. On the one hand, it is important to design regulation in such a way that projects can start as smoothly as possible without compromising fundamental rights or ethical principles. On the other hand, more financial resources must be provided so that European AI companies and research teams are not forced to look abroad for investment.
Recommendations for action for the EU
Against this background, it is becoming ever clearer that Europe has to act. Anyone who relies on technological progress emerging from the research landscape alone, without creating suitable framework conditions, will fall behind in the long term. "The EU has to develop reliable structures so that start-ups, universities, and large corporations drive their AI projects forward within Europe and do not migrate," says a political advisor.
1. Bureaucracy reduction and faster approval procedures
Europe should reduce bureaucratic obstacles so that AI projects can be implemented without excessive delays. Many innovators report that approval to test new technologies is granted faster in the USA or Asia. Smoother communication with authorities, clearly defined responsibilities, and uniform procedures could help strengthen Europe's competitive position in high tech. "If we wait months for permits for every prototype, we will never progress as quickly as the competition," notes an AI entrepreneur from Berlin.
2. Promotion of research and development
Research is the heart of every AI innovation. Here Europe has enormous potential that should be tapped even further. Funding can be intensified by expanding scholarships, research collaborations, and targeted investment programs. This is not just about basic research in areas such as machine learning or language processing, but also about applied research in key industries: from the automotive industry to healthcare to agriculture. In addition, common European platforms could be created on which data can be shared for research purposes in a secure, GDPR-compliant way. In this way, researchers can gain access to the large, diverse data sets that are decisive in many AI projects.
3. Adjustment of the "AI Act"
The "AI Act" represents a milestone for Europe, but it makes sense to critically evaluate some of its provisions regarding their practical effects. Small and medium-sized companies in particular are often unable to meet extensive compliance guidelines that are easier to implement for international groups. Therefore, Europe should find paths to adapt bureaucratic duties to the size and financial possibilities of companies. Great Britain provides an example of more flexible handling, where there is deliberately no new regulatory authority for AI in order to keep the bureaucratic procedures slim. A graded system could also be used in the EU that promotes innovations and at the same time maintains fundamental rights.
4. Strengthening the digital infrastructure
A powerful digital infrastructure is a prerequisite for developing and deploying AI applications on a large scale. This includes, on the one hand, broadband and fiber-optic networks and, on the other, powerful cloud and server environments. In the long term, Europe also needs its own high-performance data centers and supercomputers to train large AI models and process data at significant scale. Initiatives to develop European cloud environments that ensure high security and data protection standards are an important step toward greater digital sovereignty. "Without sufficient computing capacity, it is difficult to keep complex AI applications in Europe," emphasizes a scientist from France who works on large-scale projects in the area of language-processing systems.
5. Education and further education
So that Europe is not left behind in the AI race, the training of new talent must also be promoted. Universities should focus more on future fields such as machine learning, data science, and robotics. At the same time, it is important to offer working professionals further training so that they can acquire new skills and keep up with the latest developments. Only if Europe produces enough qualified AI specialists can it cover the needs of domestic industry and claim top positions. A German industry association says: "We need specialists who understand technology and ethics equally and apply them responsibly."
6. Ethical guidelines and standards
Alongside technology, values and ethics must not be neglected. The EU has traditionally taken care to put people at the center of politics and business. So that this remains true in the digital transformation, clear guidelines must be defined for how AI systems can be designed in a human-centered way. This is about transparency, data protection, fairness, and accountability. The goal should not be an abundance of bureaucratic processes, but rather simple, clear standards that make orientation easier. Examples include obligations to explain AI algorithms or requirements for companies to actively address the question of how potential biases in data sets are avoided. "We want to use technology, but in such a way that nobody is discriminated against and there is clear responsibility," summarizes a political decision-maker.
7. International cooperation
Europe cannot consider the question of AI governance in isolation. Since AI applications have global effects, global exchange is also required. For example, the EU should discuss with the United States what common standards in data protection, data use, and data security could look like. A dialogue with China is also conceivable, to define certain ethical minimum standards or technical interfaces. In addition, Europe can expand cooperation with countries such as Japan, Canada, or South Korea, which are also considered top locations for AI research. Joint programs and workshops could help exploit synergies and broaden perspectives beyond Europe's own borders.
The way to a self-determined AI future
If Europe consistently uses its strengths and focuses on well-thought-out regulation, the continent can continue to play a crucial role in AI in the future. It helps that the EU has already launched large-scale programs to support digital technologies. But as a Member of the European Parliament notes: "We must not lose ourselves in the structures, but have to use them to achieve concrete results."
It is conceivable that Europe will take on a leadership role, especially in the areas of medical technology, mobility, production, and sustainability. The EU is already considered a pioneer in "green" technologies, and it is natural to use AI systems, for example, in energy optimization, emissions reduction, and sustainable agriculture. Europe can show here that high tech and environmental protection need not be opposites, but can reinforce each other. "The development of AI applications for climate research or for ecological agriculture is an example of how we can distinguish ourselves internationally," explains a scientific consultant in Brussels.
The AI sector could also provide a strong boost to the European healthcare industry. Intelligent diagnostic tools, personalized medicine, and robots that support doctors could increase the quality of healthcare without replacing people. Instead, it is conceivable that AI and robotics will support staff by taking on routine tasks or providing diagnostic proposals, while the final decision is still made by medical professionals.
"In terms of security and ethical principles, we have a long tradition in Europe," says a medical ethicist from Austria. "If we do it correctly, we can set worldwide recognized standards and establish our AI systems as trustworthy products."
Financing models and innovation culture
However, financing remains a key factor. European banks and venture capital providers are often more cautious than their counterparts in the USA or China. To promote risk-taking, state-supported innovation funds could help by initially providing seed financing for AI start-ups. Especially where large sums are needed, for example in the development of complex algorithms that process huge amounts of data, reliable sources of capital are required. Many young companies give up or emigrate because they do not receive sufficient venture capital.
In addition, Europe should promote a culture of cooperation. Linking large corporations, research institutes, and young start-ups in innovation clusters could help pool expertise and reduce entrepreneurial risks. "We have to learn that innovation is not an isolated process, but a collective project from which everyone can benefit if we organize it correctly," says a professor of computer science from Italy.
Furthermore, an open attitude toward new ideas, innovative business models, and interdisciplinary approaches must be developed. AI is not solely the domain of computer science. Psychology, linguistics, sociology, law, and business administration also play a role in developing AI systems that are positively anchored in society. Broad networking of experts from various disciplines could contribute to a more holistic perspective that strengthens trust in AI.
"We need AI experts who exchange ideas with social scientists and think together how to make algorithms transparently and socially tolerated," emphasizes an industry analyst. "This is the only way we gain acceptance among people so that AI is not seen as a threat, but as an opportunity."
Between the superpowers: Can Europe develop its AI potential?
Europe has the potential to play a leading role in the global race for artificial intelligence. A strong research landscape, highly qualified talents and the will to put technology in the service of society are good prerequisites. The biggest task is to create an environment that promotes innovation and investments without neglecting the protection of fundamental rights and ethical guidelines.
The "Ai Act" is an important step in this way. It creates uniform rules for AI systems and defines clear risk classes. In this way, both consumers and the development of new technologies are to be supported. Nevertheless, the set of rules must be designed in such a way that it does not become a hind -up shoe for small and medium -sized companies. Bureaucracy reduction, targeted support programs, the structure of strong digital infrastructures and the training of specialists are other central building blocks that should urgently advance Europe.
In addition, Europe should not be afraid to learn from others. The United States relies on competition and flexibility, which fuels innovation but can also bring weaknesses in consumer protection and social security. China, on the other hand, pursues a comprehensive top-down strategy with government investment and strict control mechanisms. Europe has the chance to take a third way, characterized by a sense of responsibility, openness, and broad social discourse.
"The future of AI in Europe depends on whether we can bravely develop and guarantee freedom as protection," says a political decision -maker. “Artificial intelligence will gain importance in all areas of life. If we act wisely now, we create the basis for Europe not only keep up in this epochal transformation, but can actively help shape it. ”
In view of the rapid progress in the USA and China, urgency is required. If Europe combines its strengths (scientific excellence, industrial competence, cultural diversity, and ethical principles), it could become a benchmark for quality: for AI products that are in demand globally because they create trust and stand on solid technological and ethical foundations. Not least, Europe could set an example: "We believe that technology should be in the service of people and not the other way around."
This presents the opportunity to use digitalization to build a sustainable economy that at the same time respects social values and the protection of privacy. Such an approach is received positively not only in Europe itself, but increasingly in other parts of the world as well. In the end, trust in AI is not only a question of technological progress, but also one of credibility and integrity. And this is exactly where Europe's great opportunity lies: shaping an AI world in which technology and values are in healthy balance.
We are there for you - advice - planning - implementation - project management
☑️ Our business language is English or German
☑️ NEW: Correspondence in your national language!
My team and I would be happy to serve as your personal advisors.
You can contact me by filling out the contact form or simply call me on +49 89 89 674 804 (Munich). My email address is: wolfenstein ∂ xpert.digital
I'm looking forward to our joint project.