
Not OpenAI, not Amazon: This is the real winner of the $38 billion deal: Nvidia


Bigger than the dot-com bubble? The AI hype is reaching a new level of irrationality.

Burning money for the future: Why OpenAI is losing even more billions despite billions in revenue

The $38 billion deal between OpenAI and Amazon Web Services is far more than just a gigantic infrastructure acquisition – it's a strategic turning point that ruthlessly exposes the tectonic shifts and profound contradictions of the global AI revolution. Behind the enormous sum lies the story of a company that, despite an astronomical valuation of up to $500 billion, is trapped in an economic paradox: maximum market valuation with minimal operational profitability. This deal is OpenAI's calculated attempt to break free from its precarious dependence on its main partner, Microsoft, and simultaneously a desperate effort to satisfy the exponentially growing demand for computing power that threatens to engulf its entire business model.

The agreement reveals a complex power structure in which each player pursues its own agenda: Amazon is launching a strategic catch-up in the cloud computing race, while the real beneficiary of this arms race appears to be chip giant Nvidia, whose technology forms the foundation for everything. At the heart of it all, however, lies a fundamental question reminiscent of the excesses of past tech bubbles: Can these gigantic investments—OpenAI alone plans expenditures of $1.4 trillion—ever be recouped through real revenues? Analyzing this deal is therefore a glimpse into the engine room of AI economics, a world caught between visionary bets on the future, existential risks, and a financing logic that seems to be testing the limits of rationality.


The strategic reorganization of the cloud infrastructure economy – When dependency becomes strategy: The 38 billion dollar gamble on the future of artificial intelligence

The $38 billion agreement between OpenAI and Amazon Web Services signals far more than a typical procurement contract. It marks a fundamental shift in the power architecture of the global technology industry and reveals the precarious dependencies upon which the entire artificial intelligence revolution rests. While on the surface OpenAI appears to be merely securing access to hundreds of thousands of Nvidia graphics processors, a closer look reveals a complex web of strategic calculations, existential risks, and a financing logic reminiscent of the excesses of past tech bubbles.

The deal reveals the fragile position of a company that, despite its valuation of $300 to $500 billion and annualized revenue of approximately $12 billion, operates at a structural loss. With projected capital burn of $8 billion in 2025 alone and cumulative losses that could reach an estimated $44 billion by 2028, OpenAI finds itself in a paradox: maximum market valuation with minimal operating profitability.

The economic anatomy of an infrastructure crisis

The fundamental problem of modern artificial intelligence manifests itself in a simple but stark imbalance: the resource requirements for training and operating large language models are growing exponentially, while monetization opportunities grow only linearly or even stagnate. OpenAI requires computing power for its current and planned model generations on a scale that defies historical analogy. The company's management plans to spend a total of $1.4 trillion on processors and data center infrastructure over the coming years.

To put this scale into context: The planned investments exceed the gross domestic product of numerous developed economies. The industry estimates the cost of a single one-gigawatt data center at around $50 billion, with 60 to 70 percent of that figure attributable to specialized semiconductors. With a target of ten gigawatts of total capacity, OpenAI is operating on a scale that dwarfs even the infrastructure investments of established cloud giants like Microsoft and Google.

The cost structure reveals the structural Achilles' heel of the business model: OpenAI spends an estimated 60 to 80 percent of its revenue on computing power alone. With revenue of $13 billion, this translates to infrastructure costs of $10 billion, in addition to substantial further expenses for personnel, research, development, and operational processes. Even with optimistic growth forecasts, it remains questionable whether and when this cost structure will enable sustainable profitability.
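As a rough cross-check, the ratios quoted above can be turned into a back-of-envelope calculation. This is only a sketch using the numbers cited in the text (a ~$13 billion revenue base and a 60 to 80 percent compute share), not a company disclosure:

```python
# Back-of-envelope check of the compute cost share quoted in the text:
# revenue of ~$13B and 60-80 percent of revenue spent on computing power.
revenue_bn = 13.0
share_low, share_high = 0.60, 0.80

compute_low = revenue_bn * share_low    # lower-bound compute spend, $bn
compute_high = revenue_bn * share_high  # upper-bound compute spend, $bn

print(f"Estimated compute spend: ${compute_low:.1f}B to ${compute_high:.1f}B")
```

The ~$10 billion infrastructure figure in the text sits at the top of this $7.8 to $10.4 billion range, implying a compute share of roughly 77 percent of revenue.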


The diversification strategy as an existential necessity

In this context, the partnership with Amazon Web Services appears not as an expansion, but as a survival strategy. Until recently, OpenAI was trapped in an unprecedented dependence on Microsoft. The Redmond-based software giant had invested a total of 13 billion US dollars in OpenAI since 2019 and, in return, received not only substantial revenue shares but also de facto exclusive rights to the cloud infrastructure.

This situation presented OpenAI with a double vulnerability: Technologically, the company was dependent on a single infrastructure source, causing bottlenecks in scaling. Economically, significant portions of revenue flowed directly back to Microsoft—initially 75 percent until the investment was fully recouped, and subsequently 49 percent of profits. This arrangement proved increasingly unsustainable as OpenAI's growth plans became more ambitious.

The renegotiation of the Microsoft partnership in October 2025 did lift cloud exclusivity, but it also highlights the strained relationship between the two companies. Media reports about antitrust complaints and differences regarding intellectual property, computing power, and governance structures underscore the fragility of this symbiotic relationship.

The new strategy relies on radical diversification. In addition to Amazon as a new partner, OpenAI now has agreements with Microsoft for $250 billion, Oracle for $300 billion, the specialized provider CoreWeave for $22.4 billion, as well as collaborations with Google Cloud, Nvidia, AMD, and Broadcom. While this diversification reduces individual dependencies, it also creates new complexities in the orchestration of different infrastructures and technology stacks.

The Amazon Perspective: Strategic Catch-Up in the Cloud Competition

For Amazon Web Services, the deal represents a strategic breakthrough in an increasingly competitive market. Although AWS remains the global leader in cloud computing with a market share of 29 to 32 percent, its growth dynamics in recent years have shown worrying trends. While AWS grew by 17 percent in the second quarter of 2025, Microsoft Azure increased by 39 percent and Google Cloud by 34 percent. The major AI deals in recent years have primarily gone to competitors.

AWS's market share fell from 50 percent in 2018 to currently below 30 percent. This gradual decline in importance paradoxically resulted from Amazon's early dominance: As an established infrastructure provider, AWS lacked the close integration with leading AI developers that Microsoft possessed through its billion-dollar investment in OpenAI and Google through its own language models. The partnership with the less well-positioned Anthropic only partially compensated for this disadvantage, even though Amazon had already invested eight billion US dollars there.

The announcement of the OpenAI deal boosted Amazon's market capitalization by over $100 billion, underscoring its significance for investors. For AWS, the agreement means not only substantial revenue but, more importantly, a powerful signal: the world's largest cloud provider is now also a serious infrastructure partner of the leading AI company. The $38 billion may seem modest compared to OpenAI's total commitments of $1.4 trillion, but it marks the beginning of a potentially long-term relationship with significant expansion options through 2027 and beyond.

Amazon promises to provide all the computing capacity agreed upon in the agreement by the end of 2026, giving OpenAI immediate access to hundreds of thousands of Nvidia chips in Amazon's data centers. This rapid availability addresses a key problem for OpenAI: the extremely long lead time required to build its own infrastructure. While the Stargate project with SoftBank and Oracle aims to build ten gigawatts of capacity in the long term, OpenAI needs resources available in the short term to train new models and scale existing services.

The technological dimension: Nvidia as the real beneficiary

On closer inspection, a third party emerges as perhaps the biggest winner in this situation: Nvidia. The semiconductor company dominates the market for AI accelerators with an estimated 80 percent market share and has established a near-monopolistic position. The GB200 and GB300 chips that Amazon provides for OpenAI represent Nvidia's latest Blackwell generation and offer drastically increased performance for AI training and inference.

The GB300 NVL72 platform combines 72 Blackwell Ultra GPUs and 36 ARM-based Grace CPUs in a liquid-cooled rack design that operates like a single, massive GPU. Compared to the previous Hopper generation, Nvidia promises a 50-fold performance increase for AI reasoning tasks and a tenfold improvement in user responsiveness. These technological advancements are crucial for OpenAI's ambitious plans for so-called agentic AI systems, which aim to enable autonomous, multi-stage problem-solving.

Agentic AI workloads differ fundamentally from classical inference tasks. While conventional language models respond to individual queries with individual answers, agentic systems are designed to break down complex tasks into sub-steps, make independent decisions, and iteratively pursue solution paths. These capabilities require significantly greater computing power and longer processing times, further driving the demand for more powerful processors.

The cost of this cutting-edge technology is astronomical. A single GB300 superchip is estimated at $60,000 to $70,000. With hundreds of thousands of chips needed, the acquisition costs add up to tens of billions of dollars. Nvidia benefits from a self-reinforcing cycle: the more that is invested in AI infrastructure, the higher the demand for Nvidia chips, which in turn increases the company's valuation and financial strength, enabling new investments in AI startups that then require even more Nvidia chips.
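The "tens of billions" figure follows directly from the unit economics. A minimal sketch, using the per-chip range from the text and a hypothetical round count of 400,000 chips standing in for "hundreds of thousands":

```python
# Illustrative roll-up of GB300 acquisition costs. The per-chip range
# ($60k-$70k) is from the text; the chip count is an assumed round
# number for illustration, not a figure from the deal.
unit_low, unit_high = 60_000, 70_000
chip_count = 400_000  # assumption standing in for "hundreds of thousands"

total_low_bn = chip_count * unit_low / 1e9
total_high_bn = chip_count * unit_high / 1e9

print(f"Chip cost alone: ${total_low_bn:.0f}B to ${total_high_bn:.0f}B")
```

At this assumed scale, the chips alone would cost $24 to $28 billion, before data center construction, power, and cooling.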

This dynamic is manifest in Nvidia's recent announcement of a $100 billion investment in OpenAI. The deal follows a remarkable logic: Nvidia provides capital that OpenAI uses to build data centers, which are then equipped with Nvidia chips. The money essentially moves from one pocket to another, with Nvidia simultaneously financing demand for its own products. Analysts at Bank of America point to some accounting issues, but the strategy is paying off: Nvidia has achieved a market capitalization of over $5 trillion and is among the most valuable companies in the world.

The financing architecture: Between innovation and irrationality

The entire wave of investment in AI infrastructure is on a scale that leaves even experienced market observers baffled. The major technology companies Meta, Microsoft, Google, and Amazon alone are planning capital expenditures of an estimated $320 billion for 2025, primarily for AI data centers. This sum exceeds Finland's gross domestic product and is almost equivalent to ExxonMobil's total revenue in 2024.

Analysts at Bain & Company predict that the AI industry would need to generate $2 trillion in annual revenue by 2030 to justify planned infrastructure investments. Their calculations identify a funding gap of $800 billion between necessary revenue and realistic expectations. Morgan Stanley sees a funding gap of $15 trillion over the next three years. These figures raise fundamental questions about the sustainability of the current investment cycle.

The problem is exacerbated by the speed at which capital is being consumed. OpenAI generated $4.3 billion in revenue in the first half of 2025, while burning $2.5 billion in cash in six months. This equates to a burn rate of over $8 billion annually, which is projected to increase further through 2028. Even with optimistic revenue projections of $29.4 billion for 2026 and $125 billion by 2029, OpenAI anticipates continued high losses and significant capital requirements.

These deficits are financed through continuous funding rounds at escalating valuations. A funding round in March 2025 valued OpenAI at $300 billion; just seven months later, a secondary stock sale brought the valuation to $500 billion. This valuation implies a price-to-sales ratio of approximately 38 based on projected revenue of $13 billion for 2025, whereas typical software companies are valued at two to four times their annual revenue.
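The multiple quoted above can be reproduced directly from the two figures in the text:

```python
# Price-to-sales sketch: OpenAI's implied multiple versus the 2-4x
# range the text cites for typical software companies.
valuation_bn = 500.0         # October 2025 secondary-sale valuation
revenue_2025_bn = 13.0       # projected 2025 revenue

ps_ratio = valuation_bn / revenue_2025_bn
print(f"Implied price-to-sales ratio: {ps_ratio:.1f}x (typical software: 2x-4x)")
```

The result, roughly 38.5x, is an order of magnitude above the valuation range the text cites for typical software companies.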

OpenAI is deliberately working to circumvent traditional profitability metrics. The company communicates a creative metric to investors called “AI-adjusted earnings,” which excludes major cost items such as the billions spent on training large language models. By this fictitious metric, OpenAI is supposed to become profitable in 2026, while the unadjusted figures predict losses of $14 billion for 2026, projected to accumulate to $44 billion by 2028.

 


 

Monetization stress: Why billion-dollar investments threaten profits

The Stargate Project: A monumental undertaking between vision and hubris

The most ambitious manifestation of this investment logic is the Stargate project, a joint venture between OpenAI, SoftBank, and Oracle with planned investments of up to $500 billion over four years. The project envisions the construction of up to 20 state-of-the-art data centers with a total capacity of ten gigawatts, equivalent to the energy consumption of approximately ten nuclear power plants or the power supply of four million households.

The partner structure reveals the complexity of the financing: SoftBank is acting as the main investor with approximately a 40 percent stake, OpenAI is also contributing 40 percent, and Oracle and the Emirati tech investor MGX are jointly providing 20 percent. The first $100 billion for the first year has already been largely committed; for the remaining $400 billion, the partners are seeking project-specific external investors such as Apollo Global Management and Brookfield Asset Management.

The first data centers are already under construction. Oracle installed the first GB200 racks on its main campus in Abilene, Texas. Additional locations have been identified in Lordstown, Ohio; Milam County and Shackelford, Texas; and Doña Ana County, New Mexico. SoftBank plans to establish 1.5-gigawatt facilities in Ohio and Texas, which are expected to be operational within 18 months.

The financing structure combines equity, project-related debt financing, and innovative leasing models. According to media reports, OpenAI and its partners are negotiating leasing arrangements for the necessary chips, which would reduce the capital requirement but further bind OpenAI to Nvidia. The future users of the data centers are expected to contribute approximately ten percent of the project costs.

Critics like Tesla CEO Elon Musk doubt the feasibility of these plans, arguing that SoftBank could realistically raise "well under $10 billion." So far, the actual commitments made have refuted this skepticism, but the fundamental question remains: How will these gigantic investments ever be recouped if even optimistic revenue projections don't cover the cost of capital?


The macroeconomic implications: Scaling laws at the limit of their capacity

The entire investment logic is based on a fundamental assumption: the so-called scaling laws of artificial intelligence. These state that larger models with more parameters, trained on more data with more computing power, lead to better results. This relationship has proven remarkably stable in recent years, enabling predictable performance improvements simply by scaling up resources.

However, there are increasing signs that this linear approach is reaching its limits. The latest OpenAI model, Orion, disappointed expectations and failed to deliver the hoped-for performance leaps despite significantly increased resource expenditure. Gary Marcus, Professor of Psychology and Neuroscience at New York University and a prominent critic of the Silicon Valley approach, argues that the fundamental theory behind the "bigger is better" strategy is flawed.

Alternative approaches, such as the techniques demonstrated by DeepSeek, show that dramatic efficiency gains are possible through improved algorithms without massive scaling. Should such approaches prevail, the enormous investments in traditional scaling would lose considerable value. OpenAI and others would have to fundamentally rethink their strategies and could lose their current advantages in the process.

Energy demand represents another fundamental constraint. The International Energy Agency estimates that data centers accounted for approximately two percent of global energy consumption in 2022. This share could more than double to 4.6 percent by 2026. The planned ten gigawatts for OpenAI's Stargate project alone are equivalent to roughly five million specialized chips or the output of ten nuclear power plants. These magnitudes raise existential questions about sustainability and social acceptance.

Capacity bottlenecks are already emerging. Germany, for example, is forecast to increase the IT connection capacity of its data centers only from 2.4 to 3.7 gigawatts by 2030, while demand from business is estimated at a minimum of twelve gigawatts. The USA already has twenty times Germany's capacity, but even there bottlenecks are becoming apparent.

Brookfield Asset Management forecasts that global AI data center capacity will increase from approximately seven gigawatts at the end of 2024 to 15 gigawatts at the end of 2025 and to 82 gigawatts by 2034. This more than tenfold increase within a decade will require investments exceeding seven trillion US dollars, two trillion of which are specifically earmarked for building AI data centers. Financing these sums would fundamentally transform capital markets and potentially crowd out other investment areas.
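Expressed as a growth rate, the Brookfield forecast implies the following compound annual growth, sketched here over the 2024-2034 endpoints quoted above:

```python
# Implied compound annual growth rate (CAGR) of global AI data center
# capacity per the Brookfield forecast: ~7 GW (end of 2024) to 82 GW (2034).
start_gw, end_gw, years = 7.0, 82.0, 10

cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied capacity CAGR: {cagr:.1%}")
```

This works out to roughly 28 percent per year, sustained for a full decade.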


 

The geopolitical dimension: Technological sovereignty as a competitive factor

The dependency structures in cloud infrastructure are increasingly taking on geopolitical dimensions. In Germany and Europe, concerns are growing about excessive reliance on US cloud providers. According to a Bitkom survey, 78 percent of German companies believe Germany is too dependent on US cloud providers, while 82 percent want European hyperscalers that can compete with the non-European market leaders.

The three major US hyperscalers, Amazon, Microsoft, and Google, control 65 percent of the global cloud market. In the area of cloud computing, nearly 40 percent of German companies report being highly dependent on non-European cloud providers, while less than a quarter use European cloud services. In the field of artificial intelligence, although one-fifth of companies are aware of European AI offerings, only about ten percent actually use them.

This dependency is increasingly perceived as a strategic risk. Half of all companies using cloud computing feel compelled to rethink their cloud strategy due to US government policies. Deutsche Telekom is responding by building an “Industrial AI Cloud” in Munich, a multi-billion-euro project in cooperation with Nvidia, which will comprise over 10,000 high-performance chips and is expected to increase German AI computing capacity by 50 percent.

The European Union is planning a €200 billion program with up to five AI gigafactories, each capable of producing over 100,000 chips. The EU will cover up to 35 percent of the estimated costs of €3 to €5 billion per factory. These initiatives represent attempts to regain technological sovereignty, but their scale remains far below US investments.

The challenges for European alternative solutions are immense. Hyperscalers like AWS, Azure, and Google Cloud offer simple, scalable solutions with mature ecosystems that European providers cannot replicate in the short term. Small and medium-sized enterprises (SMEs) are particularly affected by vendor lock-in and vendor dependency, as they are often tied to specific formats and proprietary systems.

Market dynamics: Concentration as a systemic risk

Analysis of market structures reveals an increasing concentration on a few dominant players, creating systemic risks. In the cloud market, the "Big Three"—AWS, Azure, and Google Cloud—capture over 60 percent of the market, with the remainder distributed among numerous smaller providers. Nvidia dominates the AI chip market with an estimated 80 percent market share.

This concentration is amplified by network effects and self-reinforcing cycles. Companies with larger data centers can negotiate better terms with hardware suppliers, further increasing their cost advantages. Developers tend to build for the platforms with the largest installed base, which in turn increases those platforms' attractiveness. Investors favor established players with proven business models, easing their access to capital.

Vertical integration intensifies these dynamics. Google is developing its own AI accelerators with TPUs, enabling it to build AI infrastructure at a third of the cost of Nvidia-based systems. Amazon is developing its own chips with Trainium, which are already being used by Anthropic and could potentially become relevant for OpenAI as well. Microsoft is investing heavily in its own semiconductor development. This vertical integration dramatically increases the barriers to entry for new competitors.

The valuations of the companies involved reflect the expectation of continued dominance. Nvidia achieved a market capitalization of over five trillion US dollars, and Microsoft and Google are among the most valuable companies in the world. Amazon saw its value increase by 100 billion US dollars after the announcement of the OpenAI deal. These valuations are based on the assumption that the current market leaders will not only maintain their positions but also expand them.

The governance question: Structures caught between innovation and control

OpenAI's corporate structure reflects the inherent tensions between non-profit goals and commercial necessities. Originally founded as a non-profit organization with the mission of developing artificial intelligence for the benefit of humanity, OpenAI gradually transformed into a hybrid construct with a for-profit subsidiary that enabled significant capital inflows.

The current restructuring plans aim for a complete transformation into a for-profit organization, which is a prerequisite for the planned funding rounds. Regulators in California and Delaware have approved these steps, but they raise fundamental questions: How does the original mission align with the return expectations of investors who are putting hundreds of billions of dollars on the line?

Microsoft's stake illustrates this complexity. Microsoft initially receives 75 percent of revenues until its investment is fully recouped, and subsequently 49 percent of profits. At the same time, Microsoft holds exclusive intellectual property rights to certain technologies and preferential access to new models until artificial general intelligence is achieved. This structure tightly binds OpenAI to Microsoft, even after the cloud exclusivity is lifted.

The governance structure must also manage growing tensions between strategic partners. Microsoft and Amazon compete directly in the cloud business, while OpenAI navigates between the two. Oracle, Google, and other partners pursue their own strategic interests. Coordinating these diverse demands requires diplomatic skill and can lead to conflicts of interest that impair operational efficiency.

The competitive dynamics: Anthropic as a strategic counterweight

The Amazon-Anthropic partnership forms an interesting counterweight to the Microsoft-OpenAI constellation. Amazon has already invested eight billion US dollars in Anthropic, the competitor founded by former OpenAI employees. This investment positions Amazon with one foot in both camps: infrastructure partner of OpenAI and main investor in Anthropic.

Anthropic primarily uses Amazon's own Trainium chips, while OpenAI relies on Nvidia hardware. This technological differentiation allows Amazon to pursue different approaches in parallel and gain insights into the efficiency and performance of different architectures. Should Amazon's own chips offer comparable performance at lower costs, this could reduce its long-term dependence on Nvidia.

Anthropic's Claude models are among the most powerful chatbots available and directly compete with OpenAI's GPT models. Anthropic is already used by tens of thousands of companies via Amazon's AI cloud service, Bedrock. Anthropic's current market value is $61.5 billion, significantly lower than OpenAI's $500 billion, but still a considerable valuation for a company founded in 2021.

The competitive landscape poses risks for all involved. Amazon is developing its own AI models and could become a long-term competitor for Anthropic, on which it depends to acquire enterprise customers. OpenAI competes with Anthropic for developer talent, enterprise customers, and media attention. Microsoft is navigating between its investment in OpenAI and expanding its own AI capabilities. These multilateral competitive relationships create strategic uncertainty.

The profitability problem: Structural deficits despite revenue growth

The fundamental challenge for all AI companies remains monetization. OpenAI generated $4.3 billion in revenue in the first half of 2025, 16 percent more than its total revenue for the previous year. Annualized revenue reached approximately $12 billion with 700 million weekly users. However, about 75 percent of revenue comes from consumer products, primarily ChatGPT subscriptions, while the enterprise customer business is still relatively small.

User conversion remains problematic. With 700 million weekly users, only about five percent pay for premium subscriptions. ChatGPT's growth rates show signs of market saturation, creating pressure to find new monetization methods. OpenAI is testing advertising and monetizing its Sora video generation app, but it remains questionable whether these measures will be sufficient to cover the enormous expenses.

Despite technological advances, the cost structure remains challenging. The marginal cost per million AI tokens that OpenAI charges developers fell by 99 percent in just 18 months. However, this dramatic cost reduction paradoxically leads to higher overall demand for computing power, a phenomenon known as the Jevons Paradox. As AI models become more efficient and cheaper, their use increases disproportionately, raising overall costs rather than lowering them.
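The Jevons Paradox can be made concrete with a toy model: if demand for tokens grows superlinearly as the price falls, total spend rises even as unit costs collapse. The elasticity value below is a made-up assumption for illustration, not an empirical estimate:

```python
# Toy model of the Jevons Paradox: a 99% price drop per million tokens
# (as cited in the text) combined with an assumed demand elasticity > 1.
price_before, price_after = 1.00, 0.01   # normalized token prices
elasticity = 1.2                         # assumed value, illustration only

demand_multiplier = (price_before / price_after) ** elasticity
spend_ratio = (price_after * demand_multiplier) / (price_before * 1.0)

print(f"Demand grows ~{demand_multiplier:.0f}x; total spend grows ~{spend_ratio:.1f}x")
```

With elasticity exactly 1, total spend would stay flat; any value above 1 means cheaper tokens raise overall costs, which is precisely the dynamic the paragraph describes.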

The payback periods for infrastructure investments are unclear. McKinsey warns that both overinvestment and underinvestment in infrastructure carry significant risks. Overinvestment leads to lost assets if demand falls short of expectations. Underinvestment means falling behind the competition and losing market share. Optimizing this trade-off requires accurate forecasting in an extremely volatile environment.

 


 

How realistic are the revenue forecasts? Who wins, who loses? The power struggles surrounding AI infrastructure.

Investor expectations: Between rational analysis and speculative excess

The valuations of AI companies reflect extreme expectations for future growth. OpenAI's valuation of $500 billion implies that the company will become one of the most valuable in the world, comparable to Apple or Saudi Aramco. This valuation rests on the assumption that OpenAI will increase its revenue from $13 billion in 2025 to $100 billion by 2028 and subsequently operate at a sustainable profit.

To reach $100 billion in revenue, OpenAI would have to meet several conditions: The number of paying users would have to increase to 200 to 300 million, from the current figure of approximately 35 million. New revenue streams such as advertising, e-commerce, and high-priced enterprise products would have to be successfully developed. Inference costs would have to decrease significantly through technological advancements and scaling. Each of these assumptions is highly uncertain.
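The subscriber arithmetic behind the first condition can be sketched as follows. Treating ChatGPT Plus's $20-per-month list price as the blended revenue per paying user is a simplifying assumption, but it shows why the 200 to 300 million figure presumes substantial revenue beyond subscriptions:

```python
# Rough subscriber math for a $100B revenue target, assuming all revenue
# came from $20/month subscriptions (a deliberate simplification).
revenue_target = 100e9       # dollars per year
annual_arpu = 20.0 * 12      # $240 per paying user per year, assumed

users_needed_m = revenue_target / annual_arpu / 1e6
print(f"Paying users needed at subscription-only economics: ~{users_needed_m:.0f} million")
```

The result, over 400 million paying users, exceeds the 200 to 300 million cited above, which is consistent with the text's point that advertising, e-commerce, and enterprise products would have to carry part of the load.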

Analysts at Epoch AI are critical of OpenAI's likelihood of meeting its revenue targets. In a moderate scenario, OpenAI might reach $40 billion to $60 billion in revenue by 2028 instead of $100 billion, which would still represent exceptional growth. However, profitability would remain difficult to achieve, as costs would keep pace with growth. In this scenario, the current valuation of $500 billion would be significantly inflated.

In a pessimistic scenario, growth stagnates earlier than expected, new competitors erode margins, and technological breakthroughs fail to materialize. OpenAI would have to significantly revise its valuation, which could trigger a chain reaction among investors. The high debt and dependence on constant capital inflows would make the company vulnerable.

The tech-heavy Nasdaq rose by 19 percent in 2025, Nvidia gained over 25 percent, and Oracle 75 percent. These valuations reflect the hope that the AI revolution will indeed deliver the promised productivity gains and new business models. But they also recall past tech bubbles, in which inflated expectations led to massive value destruction when reality fell short of forecasts.

Industrial Transformation: Use Cases Between Promise and Reality

The justification for these massive investments ultimately depends on concrete use cases and measurable productivity gains. Agentic AI systems promise to automate complex workflows that previously required human expertise. In logistics platforms, agents could detect shipping delays, reroute deliveries, notify customers, and automatically update inventory levels. In enterprise software, they could understand queries, make decisions, and execute multi-stage plans.

Current applications show mixed results. Microsoft reports over one million AI agents built by customers using the Azure AI Foundry Agent Service, and over 14,000 customers use Azure AI Foundry for complex automation tasks. These figures demonstrate growing adoption, but the actual productivity gains and cost savings often remain anecdotal.

Commerzbank, working with Microsoft, spent two years developing its AI customer advisor Ava and praises the collaboration. Such success stories illustrate the potential, but they represent complex implementations that require significant time, resources, and expertise. Whether such solutions can be scaled across industries and company sizes remains an open question.

Critics point to the discrepancy between hype and reality. Bain & Company argues that the planned investments may not be matched by sufficient revenue. The consulting firm estimates that AI providers would need to reach annual revenue of two trillion US dollars by 2030, but sees a gap of 800 billion US dollars relative to realistic expectations. Such a shortfall would mean that significant amounts of capital have been misallocated and that investors will suffer substantial losses.
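Restating Bain's estimate numerically makes the scale of the shortfall concrete. The sketch below uses only the two figures given above (the $2 trillion revenue requirement and the $800 billion gap) and derives the implied realistic revenue and the shortfall as a share of the requirement.

```python
# Bain & Company's revenue-gap estimate, restated numerically.
# The two input figures come from the text; the derived values follow from them.

required_revenue_2030 = 2.0e12  # $2 trillion in annual AI revenue needed by 2030
revenue_gap = 0.8e12            # $800 billion gap vs. realistic expectations

# Revenue that Bain considers realistically achievable
expected_revenue = required_revenue_2030 - revenue_gap
# Share of the requirement that is expected to go unmet
shortfall_share = revenue_gap / required_revenue_2030

print(f"Realistically expected revenue: ${expected_revenue / 1e12:.1f} trillion")  # $1.2 trillion
print(f"Shortfall: {shortfall_share:.0%} of the required total")                   # 40%
```

In other words, by this estimate roughly 40 percent of the revenue needed to justify the planned investments has no realistic counterpart in expected demand.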

Bubble risks: Parallels to historical technology cycles

Current developments show remarkable parallels to previous technology bubbles. In the late 1990s, inflated expectations surrounding the internet drove the valuations of dot-com companies to astronomical heights before reality forced a brutal correction. Many investors lost their entire capital; established companies survived, but with significant losses in value.

The railway mania of the 19th century offers another historical analogy. Massive investments in railway infrastructure led to overcapacity, bankruptcies, and financial crises. While the railway did transform the economy and society in the long run, early investors often suffered devastating losses. The parallel is obvious: infrastructure investments can be socially valuable without the investors profiting.

Several warning signs point to bubble dynamics. The circular money flows, in which Nvidia funds OpenAI, which then buys Nvidia chips, are reminiscent of Ponzi-like structures. The creative valuation metrics, such as “AI-adjusted earnings,” resemble the pro forma profits of the dot-com era. The constantly rising valuations despite structural losses replicate patterns of previous bubbles.

The question is not whether, but when a correction will occur. Triggers could include: the high-profile failure of an AI project, technological breakthroughs in alternative approaches, regulatory interventions, energy shortages, or simply the failure to deliver the promised productivity gains. Such a correction would likely involve significant value destruction, but could also give rise to healthier, more sustainable business models.

The strategic implications: Positioning in a volatile environment

This raises complex strategic questions for companies, investors, and policymakers. Companies must decide how much to invest in AI infrastructure and which providers they want to become dependent on. The lock-in effects of proprietary cloud platforms make switching difficult later and create long-term commitments.

Hybrid approaches that combine on-premises infrastructure with cloud services offer greater flexibility at the cost of increased complexity. Organizations retain control over critical workloads while leveraging cloud scalability for variable loads. Optimizing this balance requires nuanced analyses of workload characteristics, costs, security requirements, and strategic priorities.

Investors must choose between different exposures in the AI value chain. Infrastructure providers like AWS, Azure, and Google Cloud offer relatively stable business models with established cash flows. Semiconductor manufacturers like Nvidia benefit from the investment cycle regardless of the ultimate success of specific AI companies. AI startups like OpenAI or Anthropic offer higher upside potential but also significantly higher risk.

Policymakers must create frameworks that enable innovation without generating systemic risks. Antitrust issues become increasingly important when a few dominant actors control critical infrastructure. Energy policy must address the massively increasing electricity demand of AI data centers. Questions of digital sovereignty require strategic investments in European alternatives without creating protectionist inefficiencies.

Technological evolution: Efficiency as a potential game changer

A key uncertainty remains technological development. Should drastic efficiency gains be achieved, the entire investment logic could fundamentally change. Google demonstrates that AI infrastructure can be built with its own TPU chips at a third of the cost of Nvidia systems. Should such approaches prevail, cost structures would fall considerably and profitability would be achieved more quickly.

The shift from GPU-based training to CPU-based inference workloads could also be transformative. GPUs are valued for their AI training capabilities but are not optimal for inference. Switching to CPUs for inference could reduce power consumption, improve performance, and offer a more cost-effective solution. Brookfield's prediction that inference will account for roughly 75 percent of AI computing needs by 2030 underscores this shift.

New semiconductor architectures specifically designed for AI workloads could enable further leaps in efficiency. OpenAI is developing its own chips with Broadcom and expects cost savings of 20 to 30 percent compared to Nvidia technology. Amazon, Google, and other tech giants are pursuing similar strategies. Should these efforts prove successful, Nvidia's dominance would erode, and the dependency structures would fundamentally shift.

Algorithmic innovations could have a similarly disruptive effect. The techniques demonstrated by DeepSeek show that smarter architectures enable drastic resource savings. Machine learning models that learn more efficient representations or better filter out irrelevant information could achieve comparable performance with a fraction of the computing power. Such breakthroughs would render massive infrastructure investments partially obsolete.

Future scenarios: Between consolidation and disruption

Further development could take several paths. In the consolidation scenario, the current market leaders prevail and expand their dominance. AWS, Azure, and Google Cloud control the cloud infrastructure, Nvidia dominates semiconductors, and OpenAI and a few competitors share the AI application market. The massive investments pay off over long periods, and profitability is achieved, albeit later than originally hoped.

In this scenario, oligopolistic structures would become established with high barriers to entry for new competitors. The societal benefits of AI would materialize, but value creation would be concentrated in the hands of a few companies. Regulatory intervention would likely increase to prevent abuse of market power. Early investors would achieve substantial, though perhaps not the hoped-for, returns.

In the disruption scenario, alternative technologies or business models emerge that render current approaches obsolete. Open-source models could offer sufficient performance and undermine the monetization of proprietary systems. More efficient architectures could devalue massive infrastructure investments. New application paradigms beyond large language models could emerge. In this scenario, many current investments would suffer losses, but the democratization of AI would accelerate.

A likely middle scenario combines elements of both extremes. Current market leaders retain substantial positions, but margins erode due to competition. New, specialized providers capture niche markets. Technological advances reduce costs, but not as dramatically as hoped. Profitability is delayed, but the business becomes sustainable. Societal benefits gradually materialize in improved productivity metrics and new applications.

Betting on the future in a time of uncertainty

The $38 billion deal between OpenAI and Amazon Web Services embodies the ambivalences of the current AI revolution. On the one hand, it documents the impressive dynamism of an industry willing to invest hundreds of billions of US dollars in a technological vision. The players involved are pursuing seemingly rational strategies to diversify dependencies, secure competitive positions, and participate in potentially transformative technologies.

On the other hand, the agreement reveals the precarious foundations on which these investments rest. The discrepancy between gigantic valuations and structural losses, the circular money flows between investors and recipients, the creative valuation metrics, and the sheer scale of the capital allocation are reminiscent of historical bubbles. The fundamental question remains unanswered: Can the promised applications and productivity gains ever justify the massive investments?

The coming years will show whether the current wave of infrastructure investment will go down in history as a far-sighted positioning for the AI age or as an irrational waste of capital. Regardless of the outcome, the deal marks a turning point in the power architecture of the technology industry and illustrates that the future of artificial intelligence will be determined not only by algorithmic breakthroughs, but also by economic realities, strategic partnerships, and ultimately, by the markets' willingness to gamble on an uncertain future.

 
