Clash of Strategies | Why IBM CEO Arvind Krishna doesn't believe in Sam Altman's trillion-dollar vision – AGI at zero to one percent?
Published on: December 4, 2025 / Updated on: December 4, 2025 – Author: Konrad Wolfenstein

Clash of Strategies | Why IBM CEO Arvind Krishna doesn't believe in Sam Altman's trillion-dollar vision – AGI at zero to one percent? – Image: Xpert.Digital
Artificial General Intelligence (AGI) and unforgiving mathematics: Why the data center boom can never pay off.
The 5-year death cycle: The underestimated risk for Nvidia, Microsoft and others.
While Silicon Valley is engulfed in an unprecedented investment frenzy, with trillions pouring into the race for artificial superintelligence, one of the world's most experienced tech CEOs is pulling the emergency brake. IBM CEO Arvind Krishna warns: The gamble isn't paying off.
A gold rush mentality grips the global technology sector. Corporations like Microsoft, Google, and Meta are outbidding each other with investments in new data centers, driven by the fear of being left behind in the next major technological revolution. The vision is clear: the development of artificial general intelligence (AGI) that is equal to or superior to human intelligence. But amidst this euphoria, a powerful voice rises, not from the ranks of technology critics, but from the very center of power: Arvind Krishna, CEO of IBM.
In a sober analysis based on pure arithmetic, Krishna dismantles the prevailing Silicon Valley narrative. His warning is as simple as it is alarming: infrastructure costs are exploding while hardware becomes obsolete faster than it can be depreciated. Krishna speaks of investments of up to eight trillion US dollars that would be needed to stay on the current trajectory toward AGI, a sum that could bankrupt even the world's richest companies if the promised astronomical profits fail to materialize.
But Krishna's criticism isn't limited to financial figures. He questions the technological basis of the hype itself. While Sam Altman and OpenAI portray the arrival of superintelligence as almost inevitable, Krishna puts the probability of reaching this goal with today's large language model technology at a sobering zero to one percent.
Are we facing the biggest misinvestment in economic history? Is the AI boom a bubble about to burst, or are skeptics overlooking the transformative potential that lies beyond the balance sheets? The following article examines the arguments, the unforgiving mathematics of data center economics, and the fundamental conflict between the visionaries of an "all-or-nothing" approach and the proponents of pragmatic realism.
Why IBM's CEO predicts the end of the most expensive experiment in tech history
The global technology sector may be facing one of the biggest misinvestments in economic history. While corporations like Microsoft, Amazon, Meta, and Google pump hundreds of billions of dollars into building artificial intelligence infrastructure, a warning voice is rising from the heart of the IT industry. Arvind Krishna, CEO of IBM, who has been with the company since 1990, laid out a fundamental economic analysis in an interview on The Verge's Decoder podcast in late November 2025 that could shatter the euphoria surrounding artificial general intelligence.
His statements, published on November 30 and December 1, 2025, get to the heart of a debate that is gaining increasing momentum in boardrooms and analyst circles. Krishna isn't talking about theoretical risks or philosophical concerns, but about concrete financial impossibilities that call into question the current investment model in the AI sector. His calculations are giving even optimistic industry observers pause, as they are based on simple arithmetic and sound business principles.
The merciless mathematics of data center economics
Krishna begins his analysis with a sober assessment of current costs. By today's standards, a data center with a capacity of one gigawatt requires capital expenditures of roughly 80 billion US dollars. This figure covers not only the physical infrastructure and buildings but also all the technical equipment, from servers and network components to the highly specialized graphics processors required for AI workloads.
The tech industry has committed to massive expansion in recent months. Several companies have publicly announced plans to build between 20 and 30 gigawatts of additional computing capacity. At current costs per gigawatt, this would result in total investments of at least $1.5 trillion. This sum is roughly equivalent to Tesla's current market capitalization and illustrates the sheer scale of the undertaking.
The calculation becomes even more drastic when measured against the ambition of artificial general intelligence. Krishna estimates that the path to true AGI would require approximately 100 gigawatts of computing capacity. This estimate extrapolates from current training requirements for large language models and accounts for the exponentially increasing complexity of each development step. At 80 billion US dollars per gigawatt, the investment would amount to a staggering eight trillion US dollars.
This investment figure, however, is only half the story. Krishna points to a factor often overlooked in public discourse: the cost of capital. With an investment of eight trillion US dollars, companies would need to generate approximately 800 billion US dollars in profit annually just to cover the interest on the invested capital. This figure assumes a conservative interest rate of ten percent, which reflects the cost of capital, risk premiums, and investor expectations.
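Krishna's core argument reduces to a few multiplications. The following minimal sketch reproduces the arithmetic using only the figures quoted above; it is an illustration of the reasoning, not a financial model.

```python
# Back-of-the-envelope sketch of Krishna's data center arithmetic.
# All inputs are the estimates quoted above, not precise industry data.

COST_PER_GW_USD = 80e9      # ~$80 billion of capital expenditure per gigawatt
ANNOUNCED_GW = 20           # lower end of publicly announced build-out plans
AGI_PATH_GW = 100           # Krishna's rough estimate for an AGI-scale build-out
COST_OF_CAPITAL = 0.10      # conservative blended cost of capital / hurdle rate

announced_capex = ANNOUNCED_GW * COST_PER_GW_USD      # ≈ $1.6 trillion
agi_capex = AGI_PATH_GW * COST_PER_GW_USD             # ≈ $8 trillion
annual_profit_needed = agi_capex * COST_OF_CAPITAL    # ≈ $800 billion per year

print(f"Announced build-out:  ${announced_capex / 1e12:.1f} trillion")
print(f"AGI-scale build-out:  ${agi_capex / 1e12:.1f} trillion")
print(f"Profit needed per year to cover capital costs: ${annual_profit_needed / 1e9:.0f} billion")
```

Under these inputs, the announced 20 to 30 gigawatts alone imply roughly 1.6 to 2.4 trillion US dollars of capital expenditure, and the 100-gigawatt AGI scenario demands about 800 billion US dollars of annual profit before a single dollar of the principal is recovered.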
The five-year death cycle of AI hardware
A crucial point in Krishna's argument concerns the lifespan of the installed hardware. The investment has to earn its return within roughly five years, because by then the installed hardware must be retired and replaced. This assessment aligns with observations from the industry and is the subject of intense debate in financial circles.
The well-known investor Michael Burry, famous for his accurate predictions of the 2008 financial crisis, raised similar concerns in November 2025. Burry argues that large technology companies are overestimating the actual lifespan of their AI hardware, thus artificially keeping their depreciation low. He anticipates that graphics processors and specialized AI chips will, in practice, only remain economically viable for two to three years before being rendered obsolete by newer, more powerful generations.
The rapid development in the semiconductor sector supports this view. Nvidia, the dominant provider of AI chips, releases new processor generations roughly every 12 to 18 months. Each generation offers significant performance improvements, quickly rendering older models uneconomical. While a conventional server in a data center can easily be used for six years or more, different rules apply to AI-specific hardware.
In practice, the picture is more nuanced. Some companies have adjusted their depreciation periods. At the beginning of 2025, Amazon shortened the estimated useful life of some servers from six to five years, citing the accelerated development in the field of AI. This adjustment will reduce the company's operating income by approximately $700 million in 2026. Meta, on the other hand, extended the depreciation period for servers and network equipment to 5.5 years, which reduced depreciation costs by $2.9 billion in 2025.
These differing strategies illustrate that even companies investing billions in AI hardware are uncertain about how long their investments will remain economically viable. The five-year scenario Krishna describes falls within the optimistic range of these estimates. If the actual useful life is closer to the two to three years predicted by Burry, depreciation costs, and thus the pressure on profitability, would increase significantly.
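To make "significantly" concrete, here is a minimal straight-line depreciation sketch for a single one-gigawatt facility at the per-gigawatt cost quoted above; the useful-life values are the scenarios discussed by the hyperscalers, Krishna, and Burry.

```python
# Straight-line depreciation sensitivity for one 1 GW, ~$80B data center.
# The capex figure is the per-gigawatt estimate quoted above; the useful
# lives are the scenarios discussed (hyperscalers ~5-6 years, Burry 2-3).

CAPEX_USD = 80e9

for useful_life_years in (6, 5.5, 5, 3, 2):
    annual_depreciation = CAPEX_USD / useful_life_years
    print(f"{useful_life_years:>4} years -> ${annual_depreciation / 1e9:5.1f} billion of depreciation per year")

# Moving from a 5-year to a 3-year assumption raises the annual charge
# from $16.0B to roughly $26.7B per site, about two-thirds more cost
# hitting the income statement every year.
```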
The impossibility of profitable returns
The connection between these two factors leads Krishna to his central argument. He believes that the combination of enormous capital costs and short lifecycles makes it impossible to achieve a reasonable return on investment. With investment costs of eight trillion US dollars and the need to generate 800 billion US dollars in annual profit just to cover capital costs, an AI system would have to generate revenue on a scale far exceeding what currently seems realistic.
For comparison, Alphabet, Google's parent company, had total revenues of approximately $350 billion in 2024. Even assuming aggressive growth of 12 percent per year, revenues would rise to around $577 billion by 2029. The total revenue required to justify AI investments would far exceed this figure.
OpenAI, the company behind ChatGPT, projects annualized revenue of over $20 billion for 2025 and expects to reach hundreds of billions of dollars by 2030. The company has signed agreements worth approximately $1.4 trillion over the next eight years. But even these ambitious figures raise questions. Analysts at HSBC model that OpenAI will incur $792 billion in cloud and AI infrastructure costs between the end of 2025 and 2030, with total computing capacity commitments potentially reaching around $1.4 trillion by 2033.
HSBC analysts predict that OpenAI's cumulative free cash flow will remain negative through 2030, resulting in a funding shortfall of $207 billion. This gap would need to be filled through additional debt, equity, or more aggressive revenue generation. The question is not only whether OpenAI can become profitable, but whether its entire business model, which relies on massive data center investments, is even viable.
The vanishingly small probability of AGI
Krishna adds a technological dimension to his economic critique that is even more fundamental. He estimates the probability that current technologies will lead to artificial general intelligence at between zero and one percent. This assessment is remarkable because it is not based on philosophical considerations, but rather on a sober evaluation of the technical capabilities and limitations of large language models.
While the definition of AGI is controversial, at its core it refers to AI systems that can achieve or surpass human cognitive abilities across the entire spectrum. This would mean that a system not only demonstrates expert knowledge in specific areas, but is also capable of transferring knowledge from one area to another, understanding new situations, creatively solving problems, and continuously improving without needing to be retrained for each new task.
Krishna argues that large language models, which form the core of the current AI revolution, have fundamental limitations. These models are based on statistical patterns in massive text datasets and can perform impressively in language-based tasks. They can generate coherent texts, answer questions, and even write program code. But they don't truly understand what they are doing. They lack a world model, a concept of causality, and a genuine capacity for abstraction.
These limitations manifest in several ways. Language models regularly hallucinate, inventing facts that sound plausible but are false. They struggle with multi-step logical reasoning and often fail at tasks that are trivial for humans when those tasks were not represented in their training data. They lack episodic memory and cannot learn from their own mistakes without retraining.
Researchers and industry leaders from various fields increasingly share this skepticism. Marc Benioff, CEO of Salesforce, voiced similar doubts about AGI in November 2025. In a podcast, he described the term AGI as potentially misleading and criticized the technology industry for being under a kind of hypnosis about the imminent capabilities of AI. Benioff emphasized that while current systems are impressive, they possess neither consciousness nor true understanding.
Yann LeCun, chief AI scientist at Meta, argues that large language models will never lead to AGI, no matter how far they are scaled. He advocates alternative approaches that go beyond pure text prediction, including multimodal world models that process not only text but also integrate visual and other sensory information to build internal representations of the world.
AI bubble or engine of the future? The dangerous gap between investments, energy consumption, and real profits.
The necessary technological breakthrough
Krishna believes that achieving AGI will require technologies beyond what the current path of scaling large language models can provide. He suggests that integrating hard knowledge with language models could be a viable approach. By hard knowledge, he means structured, explicit knowledge about causal relationships, physical laws, mathematical principles, and other forms of knowledge that go beyond statistical correlations.
This perspective aligns with research in the field of neuro-symbolic AI, which seeks to combine the pattern recognition strengths of neural networks with the logical capabilities of symbolic AI systems. Symbolic AI, based on rules and logical inference, was dominant in the early decades of AI research but has been overtaken by neural approaches in recent years. Hybridizing both approaches could theoretically produce systems capable of both learning and logical reasoning.
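As a purely illustrative toy example of this division of labor (not IBM's or any vendor's actual system), one can imagine a pipeline in which a statistical model proposes scored candidate answers and a symbolic layer rejects those that violate explicitly encoded hard knowledge:

```python
# Toy neuro-symbolic pipeline: a statistical component proposes scored
# candidates, a symbolic component enforces hard constraints. Entirely
# hypothetical; it only illustrates the division of labor described above.

from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    score: float  # plausibility score, standing in for a language model's output

def neural_propose(question: str) -> list[Candidate]:
    # Stand-in for a language model: returns fluent, plausible-sounding candidates.
    return [
        Candidate("2 + 2 = 4", 0.62),
        Candidate("2 + 2 = 5", 0.38),  # plausible-sounding but false
    ]

def symbolic_check(candidate: Candidate) -> bool:
    # Stand-in for hard knowledge: verify the arithmetic with exact rules.
    lhs, rhs = candidate.answer.split("=")
    return eval(lhs) == int(rhs)  # fine for this toy; never eval untrusted input

def answer(question: str) -> str:
    candidates = neural_propose(question)
    valid = [c for c in candidates if symbolic_check(c)]
    # Fall back to the best statistical guess only if nothing passes the rules.
    best = max(valid or candidates, key=lambda c: c.score)
    return best.answer

print(answer("What is 2 + 2?"))  # -> "2 + 2 = 4"
```

The point of the sketch is the architecture, not the arithmetic: statistical pattern matching generates options, while explicit rules provide the guarantees that pure text prediction lacks.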
Other promising research directions include embodied AI, where systems learn through interaction with a physical or simulated environment; continuous learning, where systems can expand their capabilities without losing previous knowledge; and intrinsically motivated systems that explore and learn on their own.
Even with these additional technologies, Krishna remains cautious. Asked whether this expanded approach could lead to AGI, he answers only with "maybe." This caution underscores the uncertainty that exists even among experts who have worked with AI for decades. Developing AGI is not simply a matter of computing power or data volume; it may require fundamentally new insights into the nature of intelligence itself.
The paradox of productive AI today
Despite his skepticism regarding AGI and the economics of massive data center investments, Krishna is by no means an AI pessimist. On the contrary, he speaks enthusiastically about current AI tools and their impact on the business world. He is convinced that these technologies will unlock trillions of dollars in productivity potential within companies.
This distinction is central to understanding his position. Krishna does not doubt the value of AI per se, but rather the economic viability of the specific path the industry has taken. Today's AI systems, particularly large language models, can already enable significant productivity gains in many areas without requiring eight trillion US dollars in infrastructure.
IBM itself provides a striking example of these productivity gains. Since January 2023, the company has comprehensively implemented AI and automation within its own operations and expects to achieve productivity gains of $4.5 billion by the end of 2025. This initiative, which IBM calls Client Zero, encompassed the deployment of hybrid cloud infrastructure, AI and automation technologies, and consulting expertise across various business units.
The concrete results of this transformation are impressive. IBM has implemented AI-powered tools in customer service that resolve 70 percent of inquiries and improve resolution time by 26 percent. Across all business units, approximately 270,000 employees have been equipped with agentic AI systems that orchestrate complex workflows and support human workers.
This type of AI application doesn't require massive new data centers but can build on existing infrastructure. It focuses on specific use cases where AI delivers demonstrable improvements, rather than the hypothetical development of general intelligence. This is the core of Krishna's argument: The technology is valuable and transformative, but the current approach of investing trillions in the pursuit of AGI is not economically sustainable.
Studies by McKinsey estimate that generative AI has the potential to create between $2.6 trillion and $4.4 trillion in economic value annually across 63 analyzed use cases. When considering the impact of embedding generative AI in software currently used for other tasks, this estimate could roughly double. These productivity gains could boost annual labor productivity growth by 0.1 to 0.6 percentage points through 2040.
The diverging strategies of the technology giants
While Krishna expresses his concerns, other tech giants are doubling down on their bets on AI infrastructure. The spending of the Big Four illustrates the scale of this investment cycle. Microsoft plans to spend roughly $80 billion building AI-enabled data centers in fiscal year 2025, with more than half of that investment earmarked for the United States.
Amazon has announced capital expenditures of approximately $125 billion for 2025, with the majority earmarked for AI and related infrastructure for Amazon Web Services. The company has already signaled that spending will be even higher in 2026. Meta Platforms expects capital expenditures of between $70 billion and $72 billion for 2025, an increase from its previous estimate of $66 billion to $72 billion. For 2026, the company indicated that spending would be significantly higher.
Alphabet, Google's parent company, expects capital expenditures of between $91 billion and $93 billion for 2025, up from a previous forecast of $85 billion. Combined, these four companies plan to spend between $350 billion and $400 billion in 2025, more than double what was spent two years ago.
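Taking the midpoints of the announced ranges gives a quick tally of that combined 2025 outlay; all inputs are the figures quoted above, expressed in billions of US dollars.

```python
# Rough tally of announced 2025 capital expenditures, in billions of USD.
# All inputs are the figures quoted above; midpoints are used for ranges.
capex_2025_billion = {
    "Microsoft": 80,            # fiscal 2025, AI-enabled data centers
    "Amazon": 125,              # majority earmarked for AWS and AI infrastructure
    "Meta": (70 + 72) / 2,      # midpoint of the $70-72B guidance
    "Alphabet": (91 + 93) / 2,  # midpoint of the $91-93B guidance
}

total = sum(capex_2025_billion.values())
print(f"Combined announced 2025 capex: ~${total:.0f} billion")  # ~$368 billion
```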
These massive investments are taking place in an environment where actual revenues from AI services are still far below expectations. OpenAI reports annualized revenues of over $20 billion but remains unprofitable. Microsoft generates approximately $13 billion in annual AI revenue, with year-over-year growth of 175 percent, while Meta cannot report a single dollar of direct AI revenue.
The discrepancy between investment and revenue is striking. Morgan Stanley estimates that the AI industry will spend approximately three trillion US dollars on data centers by 2028. In comparison, current revenues are negligible. An MIT study from July 2025 found that roughly 95 percent of companies that invested in AI did not make any money from the technology. The combined total expenditure of these companies is estimated at approximately 40 billion US dollars.
The growing voices of skepticism
Krishna's warning is part of a growing chorus of skeptical voices from various sectors of the technology and financial worlds. These concerns focus not only on immediate economic benefits but also on systemic risks arising from current investment dynamics.
Economists point out that the AI sector accounted for roughly two-thirds of US GDP growth in the first half of 2025. An analysis by JPMorgan Asset Management shows that AI spending on data centers contributed more to economic growth than the combined consumption of hundreds of millions of American consumers. Harvard economist Jason Furman calculated that without data centers, GDP growth in the first half of 2025 would have been only 0.1 percent.
This concentration of growth on a single sector carries risks. Daron Acemoglu, an economist at MIT and the 2024 Nobel laureate in Economics, argues that the actual impact of AI could be significantly smaller than industry forecasts suggest. He estimates that perhaps only five percent of jobs will be replaced by AI in the next ten years, far less than the enthusiastic predictions of some technology leaders.
Concerns about a bubble are heightened by several factors. Technology companies are increasingly using financial instruments known as special purpose vehicles (SPVs) to keep billions of dollars in expenses off their balance sheets. These Wall Street-funded SPVs serve as shell companies for building data centers. This practice raises questions about transparency and the actual risk borne by the companies.
Sundar Pichai, CEO of Alphabet, described the AI investment surge as an extraordinary moment in a November 2025 BBC interview, but also acknowledged a certain irrationality accompanying the current AI boom. He warned that every company would be affected if the AI bubble were to burst. Even Sam Altman, CEO of OpenAI and one of the most prominent AI advocates, admitted in August 2025 that AI might be in a bubble, comparing market conditions to those of the dot-com boom and emphasizing that many intelligent people were getting too excited about a kernel of truth.
The energy issue as a limiting factor
Another fundamental problem, which Krishna does not address explicitly but which is implicit in his cost calculations, concerns energy supply. One hundred gigawatts of data center capacity would require approximately 20 percent of the United States' total electricity generation. This is not a trivial challenge but a potential bottleneck that could jeopardize the entire vision.
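A rough sanity check of that 20 percent figure, assuming annual US electricity generation of about 4,200 terawatt-hours (an approximate value assumed here, not taken from the article):

```python
# Sanity check: what share of US electricity generation would 100 GW of
# continuous data center load represent? Assumes ~4,200 TWh of annual US
# generation (an approximate figure, not taken from the article).

US_ANNUAL_GENERATION_TWH = 4200
HOURS_PER_YEAR = 8760
AI_LOAD_GW = 100

avg_us_generation_gw = US_ANNUAL_GENERATION_TWH * 1000 / HOURS_PER_YEAR  # TWh -> GWh -> GW
share = AI_LOAD_GW / avg_us_generation_gw

print(f"Average US generation: ~{avg_us_generation_gw:.0f} GW")
print(f"100 GW of AI load: ~{share:.0%} of it")  # roughly 20 percent
```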
The International Energy Agency forecasts that global electricity demand from data centers could more than double by 2030, from approximately 415 terawatt-hours in 2024 to between 900 and 1,000 terawatt-hours. AI could account for 35 to 50 percent of data center electricity consumption by 2030. In the United States, data center electricity demand is expected to increase from 35 gigawatts to 78 gigawatts by 2035, representing 8.6 percent of the nation's electricity consumption.
This demand comes at a time when many countries are trying to decarbonize their power grids and increase the share of renewable energy. The challenge is that data centers require a constant power supply, 24 hours a day, 365 days a year. This makes the transition to renewable energy more complex, as wind and solar power are intermittent and require storage solutions or backup capacity.
Carbon emissions from data centers are projected to rise from 212 million tons in 2023 to potentially 355 million tons by 2030, though this figure varies considerably depending on how quickly clean energy and efficiency improvements are deployed. Generating a single AI image consumes roughly as much electricity as fully charging a smartphone, and processing one million tokens produces as much carbon dioxide as a gasoline-powered car driving 8 to 32 kilometers.
Generative AI requires roughly seven to eight times more energy than traditional computing loads. Training large AI models can consume as much electricity as hundreds of households over several months. This energy intensity means that even if the financial resources to build massive data centers were available, the physical infrastructure to power these facilities might not be ready in time.
Alternative technological pathways and their significance
The debate surrounding the limitations of large language models has spurred research in alternative fields. Quantum computing is seen by some as a potential breakthrough that could overcome current limitations. In October 2025, Google announced that its Willow quantum chip had achieved a verifiable quantum advantage, a milestone beyond the reach of classical computation that opens up new possibilities in fields such as medicine, energy, and AI.
Quantum computers operate on entirely different principles than classical computers. They utilize quantum bits, or qubits, which can exist in multiple states simultaneously, enabling parallel computations on a scale impossible with conventional systems. However, quantum computers face significant challenges, particularly decoherence, which affects the stability of qubits.
Recent breakthroughs in qubit stabilization suggest that scalable quantum computers may be a reality within the next few years. Companies like PsiQuantum plan to put quantum computers 10,000 times larger than Willow into operation before the end of this decade—computers large enough to tackle important questions about materials, medicines, and the quantum aspects of nature.
The convergence of quantum computing and artificial intelligence could theoretically open up new possibilities. Quantum algorithms have improved more than 200-fold in the simulation of important drugs and materials. Some speculate that the combination of AGI and quantum computing could be possible within one to two years, followed by artificial superintelligence within five years.
Other promising research directions include optical computing architectures that use light instead of electricity to power chips. An architecture called Parallel Optical Matrix-Matrix Multiplication, unveiled in November 2025, could eliminate one of the biggest bottlenecks in current AI development. Unlike previous optical methods, it performs multiple tensor operations simultaneously with a single laser pulse, which could significantly increase processing speed.
IBM's strategic positioning
Krishna's position is particularly interesting when viewed in the context of IBM's strategy. In recent years, IBM has consciously shifted its focus away from a pure hardware and infrastructure business and towards enterprise software, cloud services, and consulting. The company sold off large parts of its traditional IT business and concentrated instead on hybrid cloud solutions and AI applications for businesses.
This strategic direction differs fundamentally from the approaches of Microsoft, Amazon, Google, and Meta, all of which are investing heavily in building their own infrastructure. IBM, instead, focuses on helping companies deploy AI on their own terms, with transparency, choice, and flexibility. This philosophy reflects a belief that not every company will use a single public cloud and that, in particular, regulated industries and companies outside the United States will prefer hybrid approaches.
Krishna's criticism of the massive infrastructure investments can therefore also be understood as an implicit defense of IBM's approach. If the pursuit of AGI through trillions of dollars in data center investments is indeed not economically viable, then this would confirm IBM's strategy of focusing on specific, value-creating use cases that can build on existing or moderately expanded infrastructure.
At the same time, IBM is heavily involved in areas such as quantum computing, which could potentially represent the next technological wave. The company is investing significantly in the development of quantum computers and is working on partnerships with other technology companies to advance this technology. This suggests that Krishna is not against innovation or ambitious technological goals, but rather against a specific approach that he considers economically unviable.
Productivity yes, AGI no: Why targeted AI projects could be more profitable than mega-models
The perspective of OpenAI leadership
Krishna's skepticism stands in direct contrast to the public statements of Sam Altman, the CEO of OpenAI. Altman has repeatedly emphasized that OpenAI is prepared to make massive investments to achieve AGI. The company has entered into agreements totaling approximately $1.4 trillion over the next eight years, including significant deals with Oracle, Broadcom, and other partners.
Altman predicts that OpenAI will achieve annualized revenues in the hundreds of billions of US dollars by 2030. This projection is based on the assumption that demand for AI services will grow exponentially as the systems become more powerful. OpenAI's business model depends on companies and individuals being willing to pay substantial sums for access to advanced AI capabilities.
Krishna stated in the podcast that he understands Altman's perspective but does not share it. This is a remarkably diplomatic way of putting it, suggesting that he respects OpenAI's vision but makes fundamentally different assumptions about its technological feasibility and economic viability. Krishna answers the question of whether OpenAI can generate a return on its investments with a clear "no."
This disagreement represents a fundamental conflict in the technology industry between those who believe in an imminent transformative AGI and are prepared to invest astronomical sums, and those who are more skeptical and prefer an incremental, more economically sustainable approach.
The role of depreciation policy and accounting standards
The debate surrounding the actual useful life of AI hardware raises fundamental questions about accounting and transparency. The way companies depreciate their assets directly impacts their reported profits and, consequently, share prices and valuations.
Michael Burry argues that large technology companies overestimate the useful life of their AI chips in order to keep depreciation low and inflate profits. If Meta, for example, spends $5 billion on a new Nvidia Blackwell server rack in 2025 and depreciates it over 5.5 years, the annual depreciation charge comes to approximately $909 million. If the actual useful life is only three years, however, the annual charge should be around $1.67 billion, a significant discrepancy.
Burry estimates that these extended lifespans could boost the profits of several large companies by a total of $176 billion between 2026 and 2028. Nvidia disputed these claims in an internal memo in November 2025, arguing that hyperscalers depreciate GPUs over a four- to six-year period based on actual longevity and usage trends. The company pointed out that older GPUs, such as the A100 released in 2020, continue to be used at high utilization rates and retain significant economic value.
The reality likely lies somewhere in between. GPUs can certainly function physically for more than three years, but their economic value can decline rapidly as newer, more efficient models enter the market. A key factor is the cascading of value: older GPUs, no longer optimal for training the latest models, can still be useful for inference tasks and running already trained models. They can also be used for less demanding applications or sold on secondary markets.
These nuances make a clear assessment difficult. CoreWeave, an AI-focused cloud provider, extended the depreciation period for its GPUs from four to six years in January 2023. Critics see this decision as an attempt to artificially improve profitability. Proponents, on the other hand, argue that the actual usage of the hardware justifies longer periods.
The social and political dimensions
The debate surrounding AI investments also has a political and social dimension. David Sacks, a venture capitalist and White House advisor on cryptocurrencies and AI, warned in November 2025 that a reversal of the AI investment boom would risk a recession. His wording suggests that the economy has become so dependent on AI investments that a halt or significant slowdown would have substantial macroeconomic consequences.
This dependency raises the question of whether society has maneuvered itself into a situation where it is forced to continue investing, regardless of its economic viability, simply to avoid a sudden shock. This would be a classic bubble dynamic, where rational economic considerations are overshadowed by the fear of the consequences of a bursting bubble.
The concentration of investments and resources on AI also raises questions about opportunity costs. The trillions flowing into AI data centers could theoretically be used for other societal priorities, from improving education systems and expanding renewable energy to addressing infrastructure deficits. The justification for this massive resource allocation depends on whether the promised benefits actually materialize.
At the same time, AI is already having demonstrably positive effects. In Germany, according to an IBM study from November 2025, two-thirds of companies report significant productivity gains through AI. The areas with the greatest AI-related productivity increases include software development and IT, customer service, and business process automation. Approximately one-fifth of companies in Germany have already achieved their ROI targets through AI-driven productivity initiatives, and almost half expect a return on investment within twelve months.
These figures show that AI does indeed create economic value, but they also support Krishna's argument that this value does not necessarily result from the pursuit of AGI with trillions of dollars in investment, but rather from more targeted, specific applications.
The historical perspective of technological transformations
To put the current situation into perspective, it is helpful to consider historical parallels. The dot-com boom of the late 1990s is often cited as a cautionary tale. At that time, enormous sums of money flowed into internet companies, based on the justified belief that the internet would be transformative. Many of those investments proved to be misguided, and when the bubble burst in 2000, trillions in market value were wiped out.
Nevertheless, the underlying technology proved to be truly transformative. Companies like Amazon and Google, which survived the crisis, became the dominant forces in the global economy. The infrastructure built during the boom, including that of failed companies, formed the foundation for the digital economy of the following decades. In this sense, one could argue that even excessive investment in AI infrastructure could be beneficial in the long run, even if many of the current players fail.
However, a key difference lies in capital intensity. First-generation internet companies could scale with relatively low investment once the basic infrastructure was in place. A website or online service, once developed, could reach millions of users with minimal additional costs. AI, especially as it is currently practiced, does not follow this pattern. Every query to a large language model incurs significant computational costs. Scaling AI services requires proportional increases in infrastructure, fundamentally altering the economics.
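A stylized comparison of the two cost structures makes the point. Every number below is a hypothetical placeholder chosen only to illustrate the difference in marginal cost, not real pricing from any provider.

```python
# Stylized unit economics: conventional web service vs. LLM inference.
# All figures are hypothetical placeholders, purely for illustration.

def web_service_cost(requests: int, fixed_infra: float = 1e6,
                     cost_per_request: float = 0.00001) -> float:
    # Once built, a conventional web service serves extra requests almost for free.
    return fixed_infra + requests * cost_per_request

def llm_service_cost(requests: int, fixed_infra: float = 1e6,
                     gpu_seconds_per_request: float = 2.0,
                     cost_per_gpu_second: float = 0.005) -> float:
    # Each LLM query consumes meaningful GPU time, so costs grow with usage.
    return fixed_infra + requests * gpu_seconds_per_request * cost_per_gpu_second

for n in (1_000_000, 1_000_000_000):
    print(f"{n:>13,} requests: web ~${web_service_cost(n):>13,.0f} | LLM ~${llm_service_cost(n):>13,.0f}")
```

Under these placeholder numbers, serving a billion requests costs the conventional service barely more than serving a million, while the LLM service's bill grows roughly in proportion to usage.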
Another historical comparison is the development of electricity. When electrical energy first became available, it took decades for companies to learn how to redesign their production processes to fully exploit the new possibilities. Initially, factories simply replaced steam engines with electric motors, but otherwise retained their old layouts and processes. The real productivity gains only came when engineers and managers learned to design factories from the ground up, taking advantage of the flexibility of electrical energy.
The same could be true for AI. Current applications may only scratch the surface of what's possible, and real transformations might not come until organizations learn to fundamentally reorganize themselves to leverage AI capabilities. This would take time, possibly years or decades, and it's unclear whether current investment dynamics can afford that patience.
The future of AI development
Despite all the skepticism and warnings, AI development will continue. The question is not whether AI is important, but which path is the most promising and economically sustainable. Krishna's intervention can be understood as a plea for a reassessment of the strategy, not as a call to halt AI research.
The most likely development is a diversification of approaches. While some companies will continue to invest heavily in scaling large language models, others will explore alternative paths. Neuro-symbolic approaches, multimodal systems, embodied intelligence, continuous learning, and other research directions will be pursued in parallel. Breakthroughs in hardware, from quantum computing to optical computing architectures and neuromorphic chips, could change the equation.
A key factor will be actual market acceptance. If businesses and consumers are willing to pay substantial sums for AI services, even the high infrastructure costs could be justified. So far, however, this remains largely an open question. ChatGPT and similar services have attracted millions of users, but the willingness to pay substantial amounts for them is limited. Most users utilize free or heavily subsidized versions.
In the enterprise sector, the situation is somewhat different. Here, there is a demonstrable willingness to pay for AI solutions that solve specific business problems. Microsoft reports strong growth in its AI services for businesses. The question is whether these revenue streams can grow quickly enough to justify the massive investments.
Findings from a multidimensional analysis
The concerns raised by Arvind Krishna on the Decoder podcast touch the core of one of the most significant economic and technological gambles in history. His argument is grounded in sound economic principles and technical understanding. The combination of enormous capital costs, short hardware lifecycles, and the low probability that current technologies will lead to artificial general intelligence makes a compelling case against the current investment strategy.
At the same time, Krishna's position is not without counterarguments. Proponents of massive AI investments would argue that transformative technologies often require enormous upfront investments, that the cost per computing unit is continuously decreasing, that new business models will emerge that are not yet foreseeable, and that the risk of falling behind in a potentially world-changing technology is greater than the financial risk of excessive investment.
The truth likely lies somewhere between these extreme positions. AI is undoubtedly an important and transformative technology that will create significant economic value. Current language models and AI applications already demonstrate impressive capabilities and are driving measurable productivity gains in many areas. At the same time, the idea that simply scaling up current approaches will lead to artificial general intelligence is increasingly controversial, even among leading AI researchers.
The economic analysis speaks volumes. The sheer size of the required investments and the need to generate enormous profits in a short period of time present an unprecedented challenge. If Krishna's calculations are even remotely accurate, it is difficult to imagine how the current investment strategy can be sustainable.
However, this does not necessarily mean that disaster is imminent. Markets have the capacity to adapt. Investment flows can shift, business models can evolve, and technological breakthroughs can fundamentally alter economics. The history of technology is full of examples where initial skepticism was disproven and seemingly impossible challenges were overcome.
What seems likely is a period of consolidation and reassessment. Current growth rates in AI investments cannot continue indefinitely. At some point, investors and business leaders will want to see evidence of actual returns. Companies that can deliver compelling use cases and demonstrable economic value will thrive. Others may have to adjust their strategies or exit the market.
Krishna's intervention serves as an important warning to exercise caution in an environment characterized by euphoria and the urge to keep up. His decades of experience in the technology sector and his position at the helm of one of the world's oldest and most established IT companies lend weight to his words. Time will tell if he is right. What is certain, however, is that the questions he raises must be taken seriously and thoroughly discussed before trillions more are poured into a strategy whose success is far from guaranteed.