Can SAP AI be Europe's answer? What the EU must do in the global race for artificial intelligence

Can SAP AI be Europe's answer? What the EU must do in the global race for artificial intelligence – Image: Xpert.Digital

Europe's AI future: SAP calls for regulatory adjustments - Research

SAP and AI: Investment plans at the center of the regulatory debate

SAP's announcement that it will invest up to €40 billion in European AI projects, provided the regulatory framework is adjusted accordingly, is a clear signal of Europe's enormous potential in the field of artificial intelligence (AI). However, many companies remain hesitant, particularly because of stringent regulations that are perceived as hindering innovation. Compared to the US and China, the EU faces the challenge of aligning its AI strategy so that it upholds ethical standards while simultaneously fostering investment and innovation.

This article analyzes the current hurdles, highlights international approaches, and provides concrete recommendations on how the EU can strengthen its competitiveness in the global AI race.

The “AI Act”: Europe’s answer to the challenges of AI

Europe boasts a strong research landscape and a well-developed digital infrastructure, but the regulatory framework will play a crucial role in the future of AI development. The EU's “AI Act” is the world's first comprehensive legal framework for artificial intelligence. Its aim is to foster innovation without compromising the safety and security of citizens.

The “AI Act” divides AI systems into four risk categories:

  • Minimal or no risk: These AI systems are not subject to any specific regulations. Examples include video games and spam filters.
  • Limited risk: AI systems in this category, such as chatbots, must be transparent so that users can recognize that they are interacting with an AI.
  • High risk: This includes applications in sensitive areas such as medicine, transportation, or law enforcement. These applications are subject to strict requirements regarding security, transparency, and reliability.
  • Unacceptable risk: AI systems used for purposes such as behavioral manipulation or social scoring are banned in the EU.
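To make these four tiers more tangible, the following is a minimal, illustrative Python sketch that maps a few example applications to the Act's risk categories and summarizes the obligations attached to each. The tier names and obligations follow the list above; the example use cases, the EXAMPLE_TIERS mapping, and the obligations helper are simplifications for illustration only, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal or no risk"       # e.g. video games, spam filters
    LIMITED = "limited risk"             # e.g. chatbots (transparency duties)
    HIGH = "high risk"                   # e.g. medicine, transport, law enforcement
    UNACCEPTABLE = "unacceptable risk"   # e.g. social scoring (prohibited)

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "medical_diagnosis": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Return a one-line summary of the obligations attached to a tier."""
    return {
        RiskTier.MINIMAL: "no specific requirements",
        RiskTier.LIMITED: "transparency: users must know they are interacting with an AI",
        RiskTier.HIGH: "strict requirements on safety, transparency, and reliability",
        RiskTier.UNACCEPTABLE: "prohibited in the EU",
    }[tier]

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_TIERS.items():
        print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```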

While the “AI Act” is considered an important step towards the responsible use of AI, it has also faced criticism. Many companies fear that the strict regulations will slow down Europe's progress and competitiveness compared to other markets.

USA and China: The pioneers in AI development

Both the US and China have developed ambitious AI strategies and are investing heavily in research and development. However, they are pursuing different approaches:

USA: Innovation-driven and market-oriented

The US is pursuing a flexible, market-based approach to fostering AI innovation. Instead of uniform federal legislation, there are various federal and state initiatives:

  • The Colorado AI Act requires companies to avoid algorithmic discrimination and ensure transparency.
  • The California Consumer Privacy Act (CCPA) regulates the use of automated decision-making systems and gives consumers the right to object to the use of these systems.
  • The US Patent and Trademark Office has issued guidelines on the patentability of AI-powered inventions. These guidelines state that the human contribution must be clearly identifiable in order for an invention to be patentable.

This relatively liberal approach allows companies to bring new AI technologies to market quickly. However, there are also challenges: The lack of central regulation can lead to companies facing inconsistent requirements in different states.

China: Centrally controlled with ambitious goals

China is pursuing a strongly state-controlled AI strategy with the goal of becoming the world's leading AI nation by 2030.

  • The “Preliminary Measures for Generative AI” establish strict rules for data protection, content control, and ethical standards. AI-generated content must be clearly labelled.
  • The Chinese government is investing heavily in national AI infrastructures and promoting strategically relevant technologies.
  • In China, developers and operators of AI systems are responsible for the content of their AI and must ensure that no socially or politically undesirable content is generated.

This strict control enables rapid and coordinated development, but also places high demands on companies, especially regarding censorship and political influence.

Recommendations for action for the EU

To ensure that Europe does not fall behind in the global AI competition, targeted measures are needed:

  1. Reducing bureaucracy and speeding up approval processes: Companies should face fewer regulatory hurdles to bring AI innovations to market faster.
  2. Promoting research and development: By increasing investment in AI research, Europe can enhance its competitiveness.
  3. Adapting the “AI Act”: More flexible regulations that support start-ups and SMEs would be crucial.
  4. Expanding digital infrastructure: A high-performance IT infrastructure is essential for AI development and applications.
  5. Promoting AI training: More training programs for AI specialists and attractive job markets could alleviate the shortage of skilled workers.
  6. International cooperation: Working more closely with the USA and China could help to establish globally uniform AI standards.

Europe has the potential to take a leading role in artificial intelligence. However, to compete with the US and China, the EU must optimize its regulatory framework. A well-balanced regulatory approach that fosters rather than hinders innovation is crucial. The coming years will show whether Europe adapts its AI strategy accordingly and establishes itself as a strong player in the global AI competition.

 


Europe's path to AI leadership: A balancing act between innovation and responsibility - background analysis

The “AI Act”: A regulatory milestone and its challenges

Europe is a hotbed of talent and ideas in AI research and development. To fully realize this potential, however, the EU must cultivate an environment that fosters investment and innovation. A crucial step in this direction is the “AI Act”, the first comprehensive piece of legislation on artificial intelligence in the EU. The primary objective of this law is to promote the development and use of AI while ensuring that these systems are safe, trustworthy, and ethical. It aims to strike a balance between protecting citizens and promoting technological progress.

The “AI Act” follows a risk-based approach. It categorizes AI systems according to their potential risk and sets corresponding requirements:

Risk categories at a glance (description, requirements, examples):

  • Minimal or no risk: AI systems that pose no or only minimal risk to users. No specific requirements. Examples: video games, spam filters, simple recommendation systems.
  • Limited risk: AI systems that pose a limited risk to the safety or fundamental rights of users. Transparency requirements apply: users must be informed that they are interacting with an AI system, and how the algorithms work must be disclosed. Examples: chatbots, AI-powered text generators, simple image editing software.
  • High risk: AI systems used in sensitive areas that pose a high risk to the safety or fundamental rights of users. They are subject to strict requirements regarding safety, reliability, transparency, and traceability and must be monitored throughout their entire lifecycle. Examples: AI in medicine (diagnosis, treatment), transportation (autonomous driving), law enforcement (facial recognition), and education (automated assessment).
  • Unacceptable risk: AI systems that pose an unacceptable risk to the safety or fundamental rights of users. Prohibited in the EU. Examples: systems for manipulating human behavior (e.g., subliminal advertising), social scoring systems.
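To make the transparency requirement for limited-risk systems more concrete, here is a minimal sketch that wraps a hypothetical chatbot reply function so that every session begins with an AI disclosure. The generate_reply stub and the wording of the notice are assumptions made for this illustration; actual compliance depends on the Act's detailed provisions, not on this snippet.

```python
AI_DISCLOSURE = "Note: you are chatting with an AI system, not a human."

def generate_reply(user_message: str) -> str:
    """Hypothetical stand-in for whatever model produces the chatbot's answer."""
    return f"Here is an automated answer to: {user_message}"

def chat_session(messages: list[str]) -> list[str]:
    """Prefix the conversation with a disclosure so users know they are talking to an AI."""
    transcript = [AI_DISCLOSURE]
    for msg in messages:
        transcript.append(generate_reply(msg))
    return transcript

if __name__ == "__main__":
    for line in chat_session(["What does the AI Act require for chatbots?"]):
        print(line)
```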

The “AI Act” is a crucial step towards promoting responsible AI development and protecting citizens from the potential risks of this technology. However, it has also drawn criticism. Some warn that strict regulation could stifle innovation and jeopardize Europe's competitiveness in the global AI race. It's a balancing act between protecting civil liberties and fostering technological progress.

Global AI arenas: A comparison of the USA and China

While Europe is striving to establish a consistent regulatory framework, the US and China are pursuing different but equally ambitious strategies in the field of AI. Both countries have already taken concrete steps to promote the development and application of AI. The US is focusing on a market-oriented approach, while China is pursuing a more centralized strategy.

USA: A mosaic of innovation and competition

In the US, there is no single federal law regulating AI. The US government has opted for a more flexible approach designed to foster innovation and competition. This approach is based on the premise that competition is the best driver of technological progress. However, individual states have increasingly launched initiatives to regulate AI.

Some states, such as Colorado, have enacted laws that prioritize transparency and protection against discrimination by AI systems. The Colorado AI Act, for example, requires developers and operators of high-risk AI systems to prevent algorithmic discrimination and report cases of discrimination to the Attorney General. California has also established regulations for the use of automated decision-making systems with the California Consumer Privacy Act (CCPA), granting consumers the right to object to companies' use of such technologies.

In addition to regulatory efforts at the state level, the U.S. Patent and Trademark Office has issued guidance on inventions supported by artificial intelligence. This guidance states that AI-powered inventions are not inherently unpatentable, but a substantial contribution from the human inventor must be evident, as patents are intended to reward human creativity.

This mix of state-level regulation and an overall more flexible approach at the federal level allows the US to foster innovation while also addressing concerns about the ethical use of AI.

China: Centralized control and national ambitions

China is pursuing a different model. The country has set itself the goal of becoming a global leader in AI by 2030 and is investing heavily in building a national AI infrastructure. The Chinese government promotes the development and application of AI in all sectors of the economy and society. This strategic direction is enshrined in national development plans.

China has already enacted a number of laws and guidelines to regulate AI development. The “Preliminary Measures for the Management of Generative Artificial Intelligence Services” establish strict rules for data protection, content control, and the ethical use of AI. A key aspect is the mandatory labeling of AI-generated content, for example, through watermarking. Furthermore, content that could be considered harmful, misleading, or disruptive to social order is prohibited. This also includes politically sensitive topics.
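To illustrate what labeling AI-generated content can mean in practice, the following is a minimal sketch that attaches a simple provenance record (model name, timestamp, content hash) to a generated text. This is an illustrative metadata label only: the label_ai_content function and its fields are assumptions made for this example and do not implement any official Chinese or European watermarking standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> dict:
    """Attach a simple provenance record to AI-generated text.

    Illustrative only; not an implementation of any official labeling
    or watermarking standard required by regulation.
    """
    return {
        "content": text,
        "ai_generated": True,
        "model": model_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_hash": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

if __name__ == "__main__":
    record = label_ai_content("Example AI-generated summary.", "demo-model")
    print(json.dumps(record, indent=2))
```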

Chinese regulations place great emphasis on preventing bias in AI algorithms and on ensuring that AI-generated content does not discriminate against individuals or groups. AI developers and operators in China are responsible for the results of their systems and must guarantee that they do not produce harmful or illegal content. They are also obligated to respect intellectual property rights and business ethics.

China's regulatory approach exemplifies centralized control of AI development, aiming to foster a responsible and ethical AI ecosystem and strengthen public trust in AI technologies. This strategy could also set a precedent for global AI governance.

Comparative analysis: EU, USA and China in the race for AI

The differing strategies of the EU, the US, and China in the field of AI regulation reflect their differing priorities and economic models. While the EU has created a comprehensive legal framework with the “AI Act,” which focuses on risk minimization and ethical standards, the US pursues a more flexible and market-oriented strategy with regulations at the state level. China follows a centralized approach with strict regulations aimed at promoting AI development and application under state control.

A key difference lies in the degree of regulation. The EU pursues a comparatively strict approach with its “AI Act,” which could stifle innovation. The US relies on deregulation and competition as means of promoting innovation. China combines state control with targeted innovation support. The ethical aspects of AI development are weighted differently. The EU places great emphasis on protecting fundamental rights and preventing discrimination. In the US, ethical issues are increasingly addressed at the state level. China emphasizes adherence to “public order and morality” and the avoidance of bias in AI algorithms.

Recommendations for the EU: Paving the way to AI leadership

To remain competitive in the global AI race, the EU must adapt its framework and create an environment that fosters investment and innovation. The following recommendations can help pave the way for this:

Reducing bureaucracy and accelerating approval processes

The EU should reduce bureaucratic hurdles and accelerate approval processes for AI projects. This would encourage companies to invest in AI and bring innovative products and services to market faster. Streamlined regulations can increase the competitiveness of European companies.

Promoting research and development

The EU should increase its support for research and development in the field of AI. This could be achieved through targeted funding programs, the expansion of research infrastructure, and the support of collaborations between science and industry. An ecosystem should be created that promotes knowledge exchange and cooperation.

Amendment of the “AI Act”

The “AI Act” should be adapted to promote, rather than hinder, innovation. The specific needs of startups and small and medium-sized enterprises (SMEs) should be taken into account. A comparison with the UK's approach, which deliberately refrained from establishing a new regulatory authority to avoid impeding innovation, can provide valuable insights. It is crucial that the regulation is flexible and can adapt to the rapid developments in the field of AI.

Strengthening the digital infrastructure

The EU should accelerate the expansion of digital infrastructure to support the development and application of AI. This includes expanding high-speed networks, promoting cloud computing, and providing data infrastructure. A robust and reliable digital infrastructure is the foundation for successful AI development.

Initial and continuing education

The EU should invest in the education and training of AI professionals. This includes promoting AI degree programs, supporting continuing education initiatives, and creating incentives for training AI talent. It is crucial that Europe has a sufficient number of skilled workers to drive AI development forward.

Ethical guidelines and standards

The EU should promote the development of ethical guidelines and standards for AI development and application. This would help ensure that AI systems are used responsibly and in accordance with European values. Adherence to ethical standards is crucial for public trust in AI technologies.

International cooperation

The EU should promote international cooperation in the field of AI and advocate for the harmonization of AI standards at a global level. In doing so, it should take into account the different regulatory approaches of the EU, the US, and China, and leverage synergies. Global cooperation is essential to creating a unified and transparent regulatory framework for AI.

Funding for “AI made in Europe”

The EU should create targeted funding programs specifically for European AI companies. This could take the form of direct grants, venture capital, or tax incentives. Supporting European AI champions is crucial for strengthening the European AI landscape.

Data sovereignty

The EU must strengthen its control over its citizens' data. This means that European data should be stored and processed in Europe, and that European companies should have the opportunity to access this data. Data protection and the safeguarding of data sovereignty are crucial factors for the EU's digital independence.

Public awareness

The EU should promote a public debate on the opportunities and risks of AI. It is important that citizens are informed about the impact of AI and can participate in shaping the future of this technology. Comprehensive public education can help reduce fears and foster acceptance of AI.

Europe's strategic course for the AI future

The EU has the potential to play a leading role in the global AI race. To realize this potential, however, it must create the right framework. By specifically promoting research and development, implementing flexible regulations, strengthening digital infrastructure, and investing in education, the EU can create an attractive environment for AI investment and innovation.

The differing approaches of the EU, the US, and China to AI regulation highlight the challenges of global AI governance. The EU should learn from the experiences of other countries and continuously refine its own approach to foster innovation while upholding ethical standards. It is crucial that the EU acts now to secure its competitiveness in the global AI race and actively shape the future of artificial intelligence in Europe. The European Union is at a crossroads, poised to play a decisive role in shaping the future of AI and thus securing its position in the global economy. The path ahead is demanding, but with clear objectives and decisive action, the EU can become a leader in AI development.

 
