
The trillion-dollar phone call: Nvidia's strategic emergency call and its bet on the future of OpenAI
The power games in Silicon Valley: When a phone call laid the foundation for a trillion-dollar bet
When panic becomes a strategy and failure becomes the biggest threat to the tech industry
Modern business history has seen few moments when a single phone call set the stage for investments worth hundreds of billions. Late summer 2025 provided such a moment, when Jensen Huang, the longtime CEO of chip giant Nvidia, picked up the phone and called Sam Altman, the head of the artificial intelligence company OpenAI. What followed wasn't just a business deal, but rather a lesson in the fragile nature of strategic partnerships in an industry increasingly characterized by interdependence, where the lines between customer, supplier, and investor are becoming increasingly blurred.
The conversation between Huang and Altman took place at a critical time. Although Nvidia and OpenAI had already worked together for years, negotiations on a new infrastructure project had stalled. OpenAI was actively seeking alternatives to reduce its heavy dependence on Nvidia. The irony of the situation: The company found what it was looking for at Google, a direct competitor in the field of artificial intelligence. OpenAI had reportedly signed a cloud contract with Google in the spring and begun using its proprietary Tensor Processing Units. At the same time, the AI company was working with semiconductor manufacturer Broadcom to develop its own customized chips.
When reports of Google's use of TPU chips reached the public, Nvidia interpreted this as an unmistakable warning signal. The message was clear: Either a quick agreement would be reached, or OpenAI would increasingly turn to the competition. The panic at Nvidia must have been significant, because it prompted the CEO to take personal action. Huang's call to Altman initially served to clarify the rumors, but during the course of the conversation, the Nvidia CEO signaled his willingness to restart the stalled negotiations. A source familiar with the situation described this call as the birth of the idea of a direct investment in OpenAI.
One hundred billion dollars and a web of obligations
The result of this intervention was an agreement of breathtaking proportions. In September, Nvidia and OpenAI announced a strategic partnership in which the chip company is prepared to invest up to one hundred billion US dollars. The agreement envisages the construction of AI data centers with a planned capacity of at least ten gigawatts, which translates into millions of graphics processors for OpenAI's next-generation infrastructure. By comparison, a typical nuclear reactor generates about one gigawatt of power. The first phase of the project is scheduled to go live in the second half of 2026 using Nvidia's upcoming Vera Rubin platform.
The structure of the agreement is quite remarkable. Nvidia is not only committing to supplying up to five million chips, but is also considering providing guarantees for loans that OpenAI intends to take out to build its own data centers. This financial interdependence goes far beyond a traditional customer-supplier relationship. Nvidia effectively becomes its own customer's financier, a constellation reminiscent of the practices of the dot-com era, when equipment suppliers supported their customers through loans and equity investments.
But the Nvidia agreement is just one element in a much larger web of deals that OpenAI has forged in recent months. The company has maneuvered itself into a position that can rightly be described as too big to fail. The list of agreements reads like a who's who of the technology and semiconductor industry. Oracle secured a contract worth three hundred billion dollars over five years to build data center capacity as part of the so-called Stargate project. Broadcom announced a partnership to develop custom chips targeting ten gigawatts of computing capacity. AMD signed an agreement for six gigawatts of computing capacity, which also gives OpenAI the option to acquire up to ten percent of the company.
Sales versus liabilities: A calculation that doesn't add up
The sheer magnitude of these commitments raises fundamental questions about their economic viability. OpenAI is expected to generate approximately thirteen billion dollars in revenue this year. At the same time, the company has committed to computing costs of six hundred and fifty billion dollars through its agreements with Nvidia and Oracle alone. Including the agreements with AMD, Broadcom, and other cloud providers like Microsoft, the total commitments approach the trillion mark.
These figures are blatantly disproportionate to current business results. In the first half of 2025, OpenAI generated revenue of approximately $4.3 billion, a 16 percent increase over the previous year. At the same time, the company burned through $2.5 billion in cash, primarily on research and development and the operation of ChatGPT. R&D expenses amounted to $6.7 billion in the first half of the year. OpenAI had approximately $17.5 billion in cash and marketable securities at the end of the first half of the year.
The discrepancy between revenue and commitments is staggering. Calculations suggest that building just one gigawatt of data center capacity costs approximately fifty billion dollars, including hardware, power infrastructure, and construction costs. OpenAI has committed to a total of thirty-three gigawatts, which would theoretically require investments of over $1.6 trillion. The company would therefore have to increase its revenue a hundredfold to even come close to funding this infrastructure.
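The back-of-envelope arithmetic behind this discrepancy can be laid out explicitly. A minimal sketch, using only the approximate figures cited above (all numbers are the article's estimates, not audited data):

```python
# Back-of-envelope check of OpenAI's build-out commitments vs. current revenue,
# using the approximate figures cited in the text.

COST_PER_GW_USD = 50e9     # estimated cost per gigawatt (hardware, power, construction)
COMMITTED_GW = 33          # total capacity OpenAI has reportedly committed to
ANNUAL_REVENUE_USD = 13e9  # OpenAI's expected revenue this year

total_capex = COST_PER_GW_USD * COMMITTED_GW
revenue_multiple = total_capex / ANNUAL_REVENUE_USD

print(f"Implied build-out cost: ${total_capex / 1e12:.2f} trillion")
print(f"Commitments as a multiple of current annual revenue: {revenue_multiple:.0f}x")
```

The ratio of roughly 127x is what lies behind the "hundredfold" framing above: even under generous assumptions about margins, the infrastructure cannot be self-financed from today's revenue base.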
How will this gap be closed? OpenAI is pursuing an aggressive diversification strategy. The company's five-year plan includes government contracts, e-commerce tools, video services, consumer hardware, and even a role as a computing provider through the Stargate data center project. The company's valuation has risen rapidly: from $157 billion in October 2024 to $300 billion in March 2025, to the current $500 billion following a secondary stock sale in which employees sold shares valued at $6.6 billion.
The money carousel: How the AI industry finances itself
The structure of these agreements has raised concerns in the financial world, as it recalls a phenomenon prevalent during the dot-com bubble of the late 1990s: circular finance. The pattern is disturbingly familiar. A supply chain company invests in a downstream company, which then uses the capital received to purchase products from the investor. Nvidia buys shares in OpenAI, and OpenAI buys GPUs from Nvidia. Oracle invests in Stargate, and OpenAI leases computing power from Oracle. AMD gives OpenAI warrants for up to 10 percent of the company, and OpenAI commits to buying tens of billions of dollars worth of AMD chips.
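The circularity of this pattern can be made concrete with a toy ledger. A deliberately simplified sketch (the amounts and the two-step loop are illustrative, not the actual deal terms) showing how a single pool of capital can surface as reported revenue:

```python
# Toy model of circular (vendor) financing: one $10B injection of capital
# cycles through the ecosystem and reappears as booked sales.
# Amounts are illustrative, not actual deal terms.

ledger = []

def transfer(payer, payee, amount, purpose):
    """Record a cash flow between two parties."""
    ledger.append((payer, payee, amount, purpose))

transfer("Nvidia", "OpenAI", 10e9, "equity investment")
transfer("OpenAI", "Nvidia", 10e9, "GPU purchases")

# Nvidia books the GPU purchases as revenue...
nvidia_reported_revenue = sum(
    amount for payer, payee, amount, purpose in ledger
    if payee == "Nvidia" and purpose == "GPU purchases"
)
# ...yet only one injection of capital ever entered the loop.
external_capital = 10e9

print(f"Nvidia books ${nvidia_reported_revenue / 1e9:.0f}B in sales, "
      f"but only ${external_capital / 1e9:.0f}B of money moved in total")
```

The point of the sketch is not that such deals are fraudulent, but that reported revenue in a tightly coupled loop can overstate independent, external demand.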
These cycles create the appearance of thriving businesses, while in reality, largely the same money flows back and forth between the same players. The problem is not new. In the late 1990s, equipment suppliers for internet infrastructure practiced similar vendor financing. Companies like Lucent, Nortel, and Cisco extended generous loans to telecommunications providers and internet service providers, who then used the money to purchase equipment from these same suppliers. This created artificially inflated sales and concealed actual demand. When the bubble burst, not only the heavily indebted buyers collapsed, but also the suppliers, whose sales turned out to be a mirage.
The parallels to today's situation are unmistakable, even though important differences exist. Unlike many dot-com companies that never made a profit, the major players in today's AI boom are profitable companies with established business models. Nvidia, for example, posts profit margins of around 53 percent and dominates the AI chip market with a market share of around 80 percent. Microsoft, Google, and Amazon are among the most profitable companies in the world. Nevertheless, there are legitimate concerns.
A survey of global fund managers in October 2025 found that 54 percent believed AI-related stocks were in bubble territory. Sixty percent considered stocks overall to be overvalued. This skepticism is not unfounded. The commitments to build massive quantities of chips and data centers before OpenAI can afford them are fueling fears that the enthusiasm for AI is turning into a bubble similar to the infamous dot-com bubble.
The Curse of Success: Why Nvidia's Customers Are Becoming Competitors
At the center of this web is Nvidia, a company that has transformed itself over the past two years from a major but specialized chip manufacturer into the world's most valuable publicly traded company. With a market capitalization of over $4 trillion, Nvidia now surpasses even the heavyweights of the technology industry. This rise is closely linked to the AI boom that began with the launch of ChatGPT in late 2022. Since then, Nvidia's revenue has nearly tripled, while profits have exploded.
Jensen Huang, who has run the company since co-founding it in 1993, has steered Nvidia through a remarkable transformation. Originally focused on graphics cards for video games, Huang recognized early on the potential of graphics processors for scientific computing and artificial intelligence. The development of CUDA, a parallel computing platform, made it possible to use Nvidia's GPUs for deep learning and AI models that require massively parallel processing. This strategic foresight positioned Nvidia as an indispensable partner for virtually every major AI project worldwide.
Huang's leadership style is unconventional. He eschews long-term plans and instead emphasizes a focus on the present. His definition of long-term planning is: What are we doing today? This philosophy has given Nvidia remarkable agility. The company pursues an aggressive innovation strategy with the goal of launching a new generation of advanced AI chips annually. Hopper and Blackwell are followed by Vera Rubin and Rubin Ultra, each generation offering significantly increased performance and efficiency.
But this very strategy carries risks. For customers investing tens of billions of dollars in Nvidia hardware, the rapid obsolescence of their investments poses a serious problem. If a new chip generation significantly surpasses the previous one within twelve to eighteen months, the investment rapidly loses value. No company can afford to spend ten or twenty billion dollars every two years on the latest hardware. This dynamic explains why major customers like Meta, Google, Microsoft, and Amazon are simultaneously pursuing their own chip development programs. OpenAI's collaboration with Broadcom on developing its own chips follows the same logic.
Nvidia faces a paradox: The companies that are its biggest customers today could become its fiercest competitors tomorrow. Approximately 40 percent of Nvidia's revenue comes from just four companies: Microsoft, Meta, Amazon, and Alphabet. All possess the resources and technical expertise to develop their own AI chips. While Nvidia's technological lead and comprehensive CUDA software ecosystem create significant barriers to entry, technology industry history shows that dominance is rarely permanent.
Many users, hardly any payers: The economic problem of ChatGPT
Between hype and reality: The economic logic of the AI boom
Despite all the legitimate concerns, there are arguments supporting the economic viability of massive AI investments. The demand for AI applications is real and growing exponentially. ChatGPT was the fastest application in history to reach 100 million users, doing so within two months. OpenAI now has approximately 800 million weekly users, of which only about 5 percent are paying subscribers. This split of roughly 95 percent free to 5 percent paying users represents both a massive opportunity and a precarious foundation.
The integration of AI into business processes is progressing. Studies show that over 70 percent of companies worldwide now use some form of artificial intelligence. In contrast to the dot-com era, when many business models were purely speculative and internet penetration was still low, there is a real and growing demand for AI. Large companies are deploying advanced models for specific tasks, creating a feedback loop of revenue and productivity gains.
Analysts argue that the falling cost per unit of intelligence justifies the investment. As computing power becomes cheaper, more applications can be developed economically, which in turn increases demand. Nvidia emphasizes that its systems should be evaluated not only by chip price, but by total operating costs. The energy efficiency of the latest generations has increased significantly. The GB300-NVL72 platform offers a fifty-fold increase in energy efficiency per token compared to the previous Hopper generation. A $3 million investment in GB200 infrastructure could theoretically generate $30 million in token sales, representing a tenfold return.
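Nvidia's total-cost-of-ownership argument rests on simple ratios. A sketch of the cited figures (the fifty-fold efficiency gain and the tenfold return are the vendor's claims as reported above, not independent measurements):

```python
# Sketch of the token-economics claim cited in the text: a $3M GB200
# investment generating $30M in token sales implies a 10x return.
# All figures are vendor claims, not independent measurements.

infrastructure_cost = 3e6        # claimed GB200 infrastructure investment
token_revenue = 30e6             # claimed token sales over the system's lifetime
efficiency_gain_vs_hopper = 50   # claimed energy efficiency per token, GB300-NVL72 vs. Hopper

roi_multiple = token_revenue / infrastructure_cost
print(f"Claimed return multiple: {roi_multiple:.0f}x")

# Generating the same token volume at Hopper-era efficiency would consume
# roughly 50x the energy per token, so the return hinges on the efficiency
# claim holding up in production, and on token demand materializing at scale.
```

Both assumptions in the comment are exactly where the doubts in the following paragraph bite.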
Nevertheless, fundamental doubts remain. The assumption of a linear scaling of computing power to AI capabilities is increasingly being questioned. Research suggests that diminishing returns may be occurring. The Stanford AI Index 2024 shows that computing usage has grown exponentially, while performance improvements in key benchmarks are leveling off. More servers don't automatically lead to better AI, but OpenAI's strategy treats computing power as a guaranteed path to dominance.
A house of cards made of chips? The domino risk in the AI ecosystem
The close interdependence between chip manufacturers, cloud providers, and AI developers creates systemic risks. If OpenAI fails or misses its growth targets, the repercussions would ripple through the entire supply chain. Nvidia would be sitting on investments in an overvalued company. Oracle would have built data center capacity that no one is using. AMD would have created production capacity for chips that are no longer in demand. The fates of these companies are intertwined in a way reminiscent of the interdependencies that contributed to the 2008 financial crisis.
Critics like well-known short seller Jim Chanos draw explicit parallels to the dot-com bubble. Chanos points out that the capital requirements for AI infrastructure far exceed the approximately $100 billion in vendor financing during the internet boom. He raises concerns that leading technology companies like Nvidia and Microsoft will do anything to keep the actual equipment off their balance sheets through creative financing structures. The concern is that these companies are afraid of the depreciation schedules and accounting implications, as well as the enormous capital requirements that they don't want to directly report on their balance sheets.
But there are also voices warning against hasty bubble diagnoses. Some analysts argue that the current agreements do not reach the necessary scale to be overwhelming. For example, the OpenAI-Nvidia agreement would represent approximately thirteen percent of Nvidia's projected revenue for 2026. If a one-gigawatt implementation occurs in the second half of 2026, it would trigger a total capital investment of approximately fifty to sixty billion dollars, of which Nvidia would receive approximately thirty-five billion dollars. Of this, ten billion dollars would be reinvested in OpenAI, with further investments depending on the actual progress in AI monetization. This performance-based approach differs from the fixed, often speculative commitments of the telecom bubble.
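The proportionality argument can likewise be checked with the figures given above. A sketch, taking the midpoint of the $50-60 billion capex estimate (the split of that capex is the analysts' estimate as cited, not disclosed deal terms):

```python
# Sketch of the "not yet bubble-scale" argument for the first 1 GW tranche,
# using the analyst estimates cited in the text.

capex_per_gw = 55e9           # midpoint of the $50-60B estimate per gigawatt
nvidia_share = 35e9           # portion of that capex estimated to flow to Nvidia
reinvested_in_openai = 10e9   # Nvidia's first performance-linked investment tranche

net_cash_to_nvidia = nvidia_share - reinvested_in_openai
nvidia_capture_rate = nvidia_share / capex_per_gw

print(f"Net cash to Nvidia per 1 GW tranche: ${net_cash_to_nvidia / 1e9:.0f}B")
print(f"Share of tranche capex captured by Nvidia: {nvidia_capture_rate:.0%}")
```

On these numbers, Nvidia nets roughly $25 billion per tranche while risking $10 billion, and further tranches are gated on OpenAI's actual progress, which is the performance-based feature that distinguishes the deal from the fixed commitments of the telecom era.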
The real bottleneck: Why the AI boom could run out of steam
An often overlooked but potentially crucial bottleneck is energy supply. Operating AI data centers requires massive amounts of electricity. Ten gigawatts is equivalent to powering over eight million American households, or five times the capacity of the Hoover Dam. The 33 gigawatts to which OpenAI has committed would roughly equal the entire electricity demand of New York State.
The power grids in the United States are already under severe strain. Data centers accounted for approximately four percent of total American electricity consumption in 2024, equivalent to approximately 183 terawatt-hours. By 2030, this figure is expected to more than double to 426 terawatt-hours. In some states the concentration is already extreme: in Virginia, data centers consumed 26 percent of the total electricity supply in 2023; in North Dakota the figure was 15 percent, in Nebraska 12 percent, and in Iowa and Oregon 11 percent each.
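These energy figures can be cross-checked against each other. A short sketch using only the numbers cited above (the household conversion assumes the article's "10 GW powers over eight million households" ratio):

```python
# Back-of-envelope check of the energy figures cited in the text.

dc_demand_2024_twh = 183        # US data-center consumption, 2024
dc_demand_2030_twh = 426        # projected US data-center consumption, 2030
households_per_10_gw = 8e6      # article's ratio: ~10 GW powers 8M+ US households
openai_committed_gw = 33        # OpenAI's total committed capacity

growth_factor = dc_demand_2030_twh / dc_demand_2024_twh
household_equivalent = openai_committed_gw / 10 * households_per_10_gw

print(f"Projected growth in data-center demand by 2030: {growth_factor:.1f}x")
print(f"33 GW is roughly {household_equivalent / 1e6:.0f} million households' worth of power")
```

The 2.3x growth figure is consistent with the "more than double" projection, and the roughly 26 million households implied by 33 GW gives a sense of why grid connection, not chip supply, may be the binding constraint.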
Building new data centers with adequate energy supplies takes years. Estimates suggest that developing a data center in the US typically takes about seven years from initial development to full operation, including 4.8 years for pre-development and 2.4 years for construction. This creates a fundamental bottleneck for OpenAI's ambitious expansion plans. The company can sign contracts as much as it wants, but if the physical infrastructure isn't ready in time, the commitments remain empty promises.
The energy issue also raises sustainability concerns. A single ChatGPT query consumes about ten times as much energy as a typical Google search. With millions of queries per day at OpenAI alone, not to mention competitors like Anthropic, Google, and Microsoft, this places an enormous strain on power grids and the environment. Cooling these data centers also requires massive amounts of water. Hyperscale data centers directly consumed about fourteen billion gallons of water in 2023, with expectations that this number will double or triple by 2028.
The global playing field: AI between national interests and export controls
AI infrastructure has become a national security issue. Both the Trump and Biden administrations have emphasized industrial policy, framing AI not only as an economic opportunity but also as a security imperative. The implicit message to companies is that speed is more important than caution. The Stargate project was announced at the White House with President Trump portraying the technology as a driver of economic leadership and technological independence.
China is pursuing a state-led model that channels capital into AI to build domestic champions and reduce dependence on American technology. Europe initially focused on risk management, but fears of lost competitiveness prompted Brussels to launch the AI Continent Action Plan and a €1 billion initiative to accelerate adoption.
For Nvidia, this geopolitical dimension represents both opportunity and risk. The company has attempted to pursue a strategy that would allow it to continue exporting chips to China, arguing that exclusion from the Chinese market would only strengthen Chinese competitors. However, export controls have reduced Nvidia's market share in China from 95 percent to virtually zero. Huang has publicly stated that he can't imagine any policymaker considering this a good idea. The Chinese market represents an opportunity worth approximately 50 billion dollars that Nvidia is missing out on due to regulatory restrictions.
Bubble or revolution? A conclusion with an open ending
The question of whether we are in the midst of an AI bubble cannot be definitively answered while we are still inside it. Bubbles often become clear only in hindsight. Alan Greenspan's famous warning against irrational exuberance came in December 1996, but the Nasdaq didn't peak until more than three years later. A bubble can keep inflating far longer than seems logical.
Some facts, however, are undeniable. The valuations of AI companies are based on assumptions of future growth that would be historically unprecedented. No company has ever grown from ten billion to one hundred billion dollars in revenue as quickly as OpenAI projects. The commitments to build trillions of dollars in infrastructure with current revenue of thirteen billion dollars require a revenue explosion for which there are no historical precedents.
At the same time, AI is not mere speculation. The technology is already transforming industries and ways of working. Companies are achieving measurable productivity gains through AI integration. The question is not whether AI will be transformative, but how quickly this transformation will occur and whether current valuations and investments are in line with this pace.
What happens if OpenAI misses its projections? In the best-case scenario, the company would have to scale back its infrastructure plans. In the worst-case scenario, the second-round effects could be significant, as investors and other companies increasingly place large bets on OpenAI's value creation. These bets depend not only on that value being realized, but on it being realized quickly enough to cover the debt used to finance those bets. The failure to deliver value as quickly as investors expected has been enough to turn several historic tech booms into busts.
The central lesson of the dot-com bubble was that transformative technologies often succeed for decades, but the first wave of companies and their investors rarely capture the full promise implied in their stock prices. The internet did indeed change the world, but most of the highly valued internet companies in 2000 no longer exist. The winners were often companies that entered the market later or survived the darkest days of the crisis.
Whether this will also apply to AI remains to be seen. What is clear, however, is that the phone call between Jensen Huang and Sam Altman in late summer 2025 could prove to be one of those turning points at which panic became strategy, dependence transformed into mutual obligation, and an industry set the course for one of the biggest economic bets in modern history. Whether that bet pays off or becomes the biggest misinvestment since the dot-com era will become clear over the coming decade.