
From ridiculed visions to reality: Why artificial intelligence and service robots outpaced their critics


When the impossible becomes commonplace: A warning to all technology skeptics

Between Euphoria and Contempt – A Technological Journey Through Time

The history of technological innovations often follows a predictable pattern: a phase of exaggerated euphoria is inevitably followed by a period of disappointment and contempt, before the technology finally quietly conquers everyday life. This phenomenon can be observed particularly strikingly in two areas of technology that are now considered key technologies of the 21st century: artificial intelligence and service robots.

At the end of the 1980s, AI research found itself in one of the deepest crises in its history. The so-called second AI winter had set in, research funding was cut, and many experts declared the vision of thinking machines a failure. A similar fate befell service robots two decades later: at the turn of the millennium, the shortage of skilled workers was not yet a socially relevant issue, and robots for the service sector were dismissed as expensive gimmicks and unrealistic science fiction.

This analysis examines the parallel development paths of both technologies and reveals the mechanisms that lead to the systematic underestimation of revolutionary innovations. It demonstrates that both the initial euphoria and the subsequent disdain were equally flawed—and what lessons can be learned from this for the evaluation of future technologies.


Looking back to yesterday: The story of a misunderstood revolution

The roots of modern AI research date back to the 1950s, when pioneers like Alan Turing and John McCarthy laid the theoretical foundations for thinking machines. The famous Dartmouth Conference of 1956 is generally considered the birth of artificial intelligence as a research discipline. The early researchers were inspired by boundless optimism: They firmly believed that machines would achieve human intelligence within a few years.

The 1960s brought the first spectacular successes. Programs like the Logic Theorist were able to prove mathematical theorems, and in 1966, Joseph Weizenbaum developed ELIZA, the first chatbot in history. ELIZA simulated a psychotherapist and could mimic human conversation so convincingly that even Weizenbaum's own secretary asked to be left alone with the program. Paradoxically, Weizenbaum was appalled by this success—he had wanted to prove that people couldn't be fooled by machines.

But the first major disillusionment came in the early 1970s. The infamous Lighthill Report of 1973 declared AI research a fundamental failure and led to drastic cuts in research funding in the UK. In the US, DARPA followed suit with similar measures. The first AI winter had begun.

A crucial turning point was the criticism of perceptrons—early neural networks—by Marvin Minsky and Seymour Papert in 1969. They mathematically demonstrated that simple perceptrons couldn't even learn the XOR function and were thus unusable for practical applications. This criticism led to a standstill in research on neural networks for almost two decades.
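Their argument can be sketched in a few lines (the notation below is ours, chosen for illustration; it is not taken from Minsky and Papert's book). A single-layer perceptron draws one straight line through the input space, and no such line separates the XOR cases:

```latex
% A perceptron outputs 1 exactly when  w_1 x_1 + w_2 x_2 + b > 0.
% Representing XOR would require all four conditions at once:
\begin{aligned}
(0,0) \mapsto 0 &:\quad b \le 0 \\
(1,1) \mapsto 0 &:\quad w_1 + w_2 + b \le 0 \\
(1,0) \mapsto 1 &:\quad w_1 + b > 0 \\
(0,1) \mapsto 1 &:\quad w_2 + b > 0
\end{aligned}
% Summing the first two lines gives  w_1 + w_2 + 2b <= 0,
% summing the last two gives         w_1 + w_2 + 2b > 0:
% a contradiction, so no choice of weights can work.
```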

The 1980s initially marked a renaissance of AI with the rise of expert systems. These rule-based systems, such as MYCIN, which was used in the diagnosis of infectious diseases, finally seemed to offer a breakthrough. Companies invested millions in specialized Lisp machines optimally designed to run AI programs.

But this euphoria didn't last long. By the end of the 1980s, it became clear that expert systems were fundamentally limited: They could only function in narrowly defined areas, were extremely maintenance-intensive, and failed completely as soon as they were confronted with unforeseen situations. The Lisp machine industry collapsed spectacularly—companies like LMI went bankrupt as early as 1986. The second AI winter began, even harsher and more lasting than the first.

At the same time, robotics initially developed almost exclusively in the industrial sector. Japan took a leading role in robot technology as early as the 1980s, but also focused on industrial applications. Honda began developing humanoid robots in 1986, but kept this research strictly secret.

The Hidden Foundation: How Breakthroughs Emerged in the Shadows

While AI research was publicly considered a failure at the end of the 1980s, groundbreaking developments were occurring at the same time, albeit largely unnoticed. The most important breakthrough was the rediscovery and perfection of backpropagation by Geoffrey Hinton, David Rumelhart, and Ronald Williams in 1986.

This technique solved the fundamental problem of learning in multilayer neural networks and thus overcame the limitation that Minsky and Papert had identified. However, the AI community initially barely responded to this revolution. Available computers were too slow, training data was too scarce, and general interest in neural networks had been lastingly damaged by the devastating criticism of the 1960s.
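What backpropagation changed can be made concrete with a minimal sketch (our own illustration in present-day NumPy, not the researchers' original code; network size, learning rate, and iteration count are arbitrary choices). A two-layer network trained this way learns exactly the XOR function that a single perceptron cannot represent:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer (2 -> 4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer (4 -> 1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error layer by layer via the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [0, 1, 1, 0]; an unlucky seed may need more steps
```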

Only a few visionary researchers like Yann LeCun recognized the transformative potential of backpropagation. They worked for years in the shadow of established symbolic AI, laying the foundations for what would later conquer the world as deep learning. This parallel development demonstrates a characteristic pattern of technological innovation: breakthroughs often occur precisely when a technology is publicly considered a failure.

A similar phenomenon can be observed in robotics. While public attention in the 1990s was focused on spectacular but ultimately superficial successes like Deep Blue's victory over Garry Kasparov in 1997, Japanese companies like Honda and Sony were quietly developing the foundations for modern service robots.

While Deep Blue was a milestone in computing power, it was still based entirely on traditional programming techniques and had no real learning capability. Kasparov himself later acknowledged that the true breakthrough lay not in raw computing power, but in self-learning systems capable of improving themselves.

Robotics development in Japan benefited from a culturally different attitude toward automation and robots. While in Western countries, robots were primarily perceived as a threat to jobs, Japan viewed them as necessary partners in an aging society. This cultural acceptance enabled Japanese companies to continuously invest in robot technologies, even when the short-term commercial benefits were not apparent.

The gradual improvement of basic technologies was also crucial: sensors became smaller and more precise, processors more powerful and energy-efficient, and software algorithms more sophisticated. Over the years, these incremental advances accumulated into qualitative leaps that were, however, difficult to detect for outsiders.

Present and breakthrough: When the impossible becomes everyday reality

The dramatic shift in the perception of AI and service robots paradoxically began just as both technologies were facing their harshest criticism. The AI winter of the early 1990s ended abruptly with a series of breakthroughs that had their roots in the supposedly failed approaches of the 1980s.

The first turning point was Deep Blue's victory over Kasparov in 1997, which, although still based on traditional programming, permanently changed the public perception of computing capabilities. More important, however, was the renaissance of neural networks starting in the 2000s, driven by exponentially growing computing power and the availability of large amounts of data.

Geoffrey Hinton's decades-long work on neural networks finally bore fruit. Deep learning systems achieved feats in image recognition, natural language processing, and other areas that had been considered impossible just a few years earlier. AlphaGo defeated the Go world champion in 2016, and ChatGPT revolutionized human-computer interaction in 2022—both were based on techniques that had their origins in the 1980s.

At the same time, service robots evolved from a science-fiction vision into practical solutions for real-world problems. Demographic change and the growing shortage of skilled workers suddenly created an urgent need for automated assistance. Robots like Pepper were used in nursing homes, while logistics robots revolutionized warehouses.

Crucial to this was not only technological progress, but also a change in the social framework. The shortage of skilled workers, which hadn't been an issue at the turn of the millennium, developed into one of the central challenges facing developed economies. Suddenly, robots were no longer perceived as job killers, but as necessary helpers.

The COVID-19 pandemic further accelerated this development. Contactless services and automated processes gained importance, while at the same time, staffing shortages in critical areas like healthcare became dramatically apparent. Technologies that had been considered impractical for decades suddenly proved indispensable.

Today, both AI and service robots have become everyday reality. Voice assistants like Siri and Alexa stand in a direct line of descent from ELIZA, but have been vastly improved by modern AI techniques. Care robots already routinely support staff in Japanese nursing homes, while humanoid robots are on the verge of a breakthrough into other service areas.

Practical examples: When theory meets reality

The transformation from derided concepts to indispensable tools is best illustrated by concrete examples that trace the path from laboratory curiosity to market readiness.

The first striking example is the Pepper robot from SoftBank Robotics. Pepper is based on decades of research in human-robot interaction and was initially conceived as a sales robot. Today it is successfully used in German nursing homes to engage patients with dementia. The robot can conduct simple conversations, offer memory training, and promote social interaction simply through its presence. What was considered an expensive gimmick in the 2000s is now proving to be valuable support for overworked nursing staff.

Particularly remarkable is the acceptance among patients: older people who did not grow up with computers interact naturally and without reservation with the humanoid robot. This confirms a theory that was controversial for decades: humans have a natural tendency to anthropomorphize machines – a phenomenon already observed with ELIZA in the 1960s.

The second example comes from logistics: the use of autonomous robots in warehouses and distribution centers. Companies like Amazon now employ tens of thousands of robots to sort, transport, and pack goods. These robots handle tasks that were considered too complex for machines just a few years ago: They navigate autonomously through dynamic environments, recognize and manipulate a wide variety of objects, and coordinate their actions with human colleagues.

The breakthrough didn't come from a single technological leap, but from the integration of various technologies: Improvements in sensor technology enabled precise environmental perception, powerful processors enabled real-time decision-making, and AI algorithms optimized coordination between hundreds of robots. At the same time, economic factors—staff shortages, rising labor costs, and increased quality requirements—suddenly made investing in robot technology profitable.

A third example can be found in medical diagnostics, where AI systems now assist doctors in detecting diseases. Modern image recognition algorithms can diagnose skin cancer, eye diseases, or breast cancer with an accuracy equal to or even exceeding that of medical specialists. These systems are directly based on neural networks, which were developed in the 1980s but dismissed as impractical for decades.

The continuity of this development is particularly impressive: today's deep learning algorithms essentially use the same mathematical principles as backpropagation did in 1986. The crucial difference lies in the available computing power and data volumes. What Hinton and his colleagues demonstrated on small toy problems now works on medical images with millions of pixels and training sets of hundreds of thousands of examples.
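To make that continuity tangible, here is the generic update rule in textbook notation (our illustration, not a formula from the article): each layer's weights are still adjusted by gradient descent, with the error signal (delta) passed backwards through the network exactly as in 1986; only the size of the networks and of the data has changed.

```latex
W^{(l)} \leftarrow W^{(l)} - \eta \,\frac{\partial L}{\partial W^{(l)}},
\qquad
\frac{\partial L}{\partial W^{(l)}} = \delta^{(l)}\,\bigl(a^{(l-1)}\bigr)^{\top},
\qquad
\delta^{(l)} = \bigl(W^{(l+1)\top}\,\delta^{(l+1)}\bigr)\odot f'\bigl(z^{(l)}\bigr)
```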

These examples demonstrate a characteristic pattern: The enabling technologies often emerge decades before their practical application. Between the scientific feasibility study and market readiness, there is typically a long phase of incremental improvements, during which the technology appears stagnant to outsiders. The breakthrough then often occurs suddenly when several factors—technological maturity, economic necessity, social acceptance—align simultaneously.

 


Hype, valley of disappointment, breakthrough: The rules of technological development

Shadows and Contradictions: The Downside of Progress

However, the success story of AI and service robots is not without its dark sides and unresolved contradictions. The initial disdain for these technologies had, in part, entirely legitimate reasons that remain relevant today.

A central problem is the so-called "black box" problem of modern AI systems. While the expert systems of the 1980s had, at least theoretically, comprehensible decision-making processes, today's deep learning systems are completely opaque. Even their developers cannot explain why a neural network makes a particular decision. This leads to significant problems in critical application areas such as medicine or autonomous driving, where traceability and accountability are crucial.

Joseph Weizenbaum, the creator of ELIZA, became one of the harshest critics of AI development for a reason. His warning that people tend to attribute human characteristics to machines and place undue trust in them has proven prophetic. The ELIZA effect—the tendency to mistake primitive chatbots for being more intelligent than they are—is more relevant today than ever, as millions of people interact with voice assistants and chatbots every day.

Robotics faces similar challenges. Studies show that skepticism toward robots in Europe increased significantly between 2012 and 2017, particularly regarding their use in the workplace. This skepticism is not irrational: Automation is indeed leading to the loss of certain jobs, even as new ones are created. The claim that robots only take on "dirty, dangerous, and boring" tasks is misleading—they are increasingly taking over skilled jobs as well.

The development in nursing is particularly problematic. While nursing robots are being hailed as a solution to staff shortages, there is a risk of further dehumanizing an already strained sector. Robots cannot replace human attention and care, even if they can take over certain functional tasks. The temptation lies in prioritizing efficiency gains over human needs.

Another fundamental problem is the concentration of power. The development of advanced AI systems requires enormous resources—computing power, data, capital—that only a few global corporations can muster. This leads to an unprecedented concentration of power in the hands of a few technology companies, with unforeseeable consequences for democracy and social participation.

The history of the Lisp machines of the 1980s offers an instructive parallel here. These highly specialized computers were technically brilliant but commercially doomed, because only a small elite could operate them and they were incompatible with standard technologies. Today, there is a danger that similar isolated solutions will develop in AI – with the difference that this time the power lies with a few global corporations rather than with specialized niche companies.

Finally, the question of long-term societal impacts remains. The optimistic predictions of the 1950s that automation would lead to more leisure time and prosperity for all have not come true. Instead, technological advances have often led to greater inequality and new forms of exploitation. There is little reason to believe that AI and robotics will have a different impact this time unless deliberate countermeasures are taken.


Future Horizons: What the Past Reveals About Tomorrow

The parallel development histories of AI and service robots offer valuable insights for assessing future technology trends. Several patterns can be identified that are highly likely to recur with future innovations.

The most important pattern is the characteristic hype cycle: New technologies typically go through a phase of inflated expectations, followed by a period of disappointment, before finally reaching practical maturity. This cycle is not random but reflects the different timescales of scientific breakthroughs, technological development, and societal adoption.

Crucial here is the realization that groundbreaking innovations often emerge precisely when a technology is publicly considered a failure. Backpropagation was developed in 1986, in the midst of the second AI winter. The foundations for modern service robots emerged in the 1990s and 2000s, when robots were still considered science fiction. This is because patient basic research takes place away from the public spotlight, only bearing fruit years later.

For the future, this means that particularly promising technologies will often be found in areas currently considered problematic or failed. Quantum computing today is roughly where AI was in the 1980s: theoretically promising, but not yet practically viable. Fusion energy is in a similar situation: for decades it has been declared twenty years away from market readiness, yet progress continues steadily in the background.

A second important pattern is the role of economic and social conditions. Technologies prevail not only because of their technical superiority, but because they address specific problems. Demographic change created the need for service robots, the shortage of skilled workers made automation a necessity, and digitalization generated the data volumes that made deep learning possible in the first place.

Similar drivers for the future can already be identified today: Climate change will promote technologies that contribute to decarbonization. An aging society will drive medical and care innovations. The increasing complexity of global systems will require better analysis and control tools.

A third pattern concerns the convergence of different technology strands. In both AI and service robots, the breakthrough was not the result of a single innovation, but rather the integration of several lines of development. In AI, improved algorithms, greater computing power, and more extensive data sets all came together. In service robots, advances in sensor technology, mechanics, energy storage, and software converged.

Future breakthroughs will most likely arise at the interfaces of different disciplines. Combining AI with biotechnology could revolutionize personalized medicine. Integrating robotics with nanotechnology could open up entirely new areas of application. Combining quantum computing with machine learning could solve optimization problems that are currently considered intractable.

At the same time, history warns against excessive short-term expectations. Most revolutionary technologies require 20-30 years from scientific discovery to widespread societal adoption. This period is necessary to overcome technical teething problems, reduce costs, build infrastructure, and gain social acceptance.

A particularly important lesson is that technologies often develop completely differently than originally predicted. ELIZA was intended to demonstrate the limits of computer communication, but it became a model for modern chatbots. Deep Blue defeated Kasparov with raw computing power, but the real revolution came with self-learning systems. Service robots were originally intended to replace human workers, but they are proving to be a valuable addition in situations of staff shortages.

This unpredictability should serve as a reminder of humility when evaluating emerging technologies. Neither excessive euphoria nor blanket disdain do justice to the complexity of technological development. Instead, a nuanced approach is required that takes both the potential and the risks of new technologies seriously and is willing to revise assessments based on new insights.

Lessons from a misunderstood era: What remains of the knowledge

The parallel histories of artificial intelligence and service robots reveal fundamental truths about the nature of technological change that extend far beyond these specific areas. They demonstrate that both blind technological euphoria and blanket technophobia are equally misleading.

The most important insight is the recognition of the time gap between scientific breakthrough and practical application. What appears today as a revolutionary innovation often has its roots in decades of basic research. Geoffrey Hinton's backpropagation of 1986 shapes ChatGPT and autonomous vehicles today. Joseph Weizenbaum's ELIZA of 1966 lives on in modern voice assistants. This long latency between invention and application explains why technology assessments so often fail.

The so-called "valley of disappointment" plays a crucial role here. Every significant technology goes through a phase in which its initial promises cannot be fulfilled and it is deemed a failure. This phase is not only inevitable, but even necessary: it filters out dubious approaches and forces a focus on truly viable concepts. The two AI winters of the 1970s and 1980s eliminated unrealistic expectations and created space for the patient groundwork that later led to real breakthroughs.

Another key insight concerns the role of social conditions. Technologies prevail not solely because of their technical superiority, but because they respond to concrete social needs. Demographic change transformed service robots from a curiosity to a necessity. The shortage of skilled workers transformed automation from a threat to a rescue. This contextual dependency explains why the same technology is evaluated completely differently at different times.

The importance of cultural factors is particularly noteworthy. Japan's positive attitude toward robots enabled continued investment in this technology, even when it was considered impractical in the West. This cultural openness paid off when robots suddenly became in demand worldwide. Conversely, growing skepticism toward automation in Europe led to the continent falling behind in key future technologies.

History also warns of the dangers of technological monoculture. The Lisp machines of the 1980s were technically brilliant, but failed because they represented incompatible isolated solutions. Today, the opposite danger exists: The dominance of a few global technology companies in AI and robotics could lead to a problematic concentration of power, inhibiting innovation and complicating democratic control.

Finally, the analysis shows that technological criticism is often justified, but made for the wrong reasons. Joseph Weizenbaum's warning about the humanization of computers was prophetic, but his conclusion that AI should not be developed because of this proved to be wrong. Skepticism about service robots was based on legitimate concerns about jobs, but overlooked their potential to address labor shortages.

This insight is particularly important for the evaluation of emerging technologies. Criticism should not be directed against the technology itself, but rather against problematic applications or inadequate regulation. The task is to harness the potential of new technologies while simultaneously minimizing their risks.

The history of AI and service robots teaches us humility: Neither the enthusiastic prophecies of the 1950s nor the pessimistic forecasts of the 1980s came true. Reality was more complex, slower, and more surprising than expected. This lesson should always be kept in mind when evaluating today's future technologies—from quantum computing to genetic engineering to fusion energy.

At the same time, history shows that patient, continuous research can lead to revolutionary breakthroughs even under adverse circumstances. Geoffrey Hinton's decades-long work on neural networks was long ridiculed, but today shapes all of our lives. This should encourage us not to give up, even in seemingly hopeless areas of research.

But perhaps the greatest lesson is this: technological progress is neither automatically good nor automatically bad. It is a tool whose effects depend on how we use it. The task is not to demonize or idolize technology, but to shape it consciously and responsibly. Only in this way can we ensure that the next generation of underappreciated technologies truly contributes to the well-being of humanity.

 
