How AI learns like a brain: Learning with time as a new approach for AI systems - Sakana AI and the Continuous Thought Machine

Published on: May 19, 2025 / Last updated: May 19, 2025 - Author: Konrad Wolfenstein


How AI learns like a brain: Learning with time as a new approach for AI systems - Sakana AI and the Continuous Thought Machine - Image: Xpert.digital

Rethinking human thought: the innovative CTM from Sakana AI

Machine thinking 2.0: Why the CTM is a milestone

The new “Continuous Thought Machine” (CTM) from the Japanese start-up Sakana AI marks a paradigm shift in AI research by establishing the temporal dynamics of neural activity as a central mechanism for machine thinking. In contrast to conventional AI models that process information in a single pass, the CTM simulates a multi-stage thinking process that is more closely modeled on the workings of the human brain.


The revolution of time-based thinking

While traditional AI models such as GPT-4 or Llama 3 work sequentially (an input comes in, an output goes out), the CTM breaks with this principle. The system operates with an internal concept of time, so-called “ticks” or discrete time steps, through which the internal state of the model evolves gradually. This approach enables iterative adaptation and creates a process that resembles natural thinking more than a mere reaction.

"The CTM works with an internal concept of time, the so-called 'internal ticks', which are decoupled from the data input," explains Sakana AI. "This enables the model to 'think' through several steps when solving tasks instead of making a decision immediately in a single pass."
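The idea of internal ticks that are decoupled from the input can be sketched in a few lines of NumPy. Everything here (the weight shapes, the tanh update, the tick count) is an illustrative assumption for exposition, not Sakana AI's actual implementation:

```python
import numpy as np

def think(x, n_ticks=10, state_dim=8, rng=np.random.default_rng(0)):
    """Illustrative tick loop: the internal state evolves over several
    'internal ticks' while the input x stays fixed."""
    W_in = rng.normal(size=(state_dim, x.size))                  # hypothetical input weights
    W_rec = rng.normal(size=(state_dim, state_dim)) / state_dim  # hypothetical recurrent weights
    state = np.zeros(state_dim)
    trajectory = []
    for _ in range(n_ticks):                       # ticks are decoupled from the data input:
        state = np.tanh(W_in @ x + W_rec @ state)  # the same x is reconsidered every tick
        trajectory.append(state.copy())
    return np.stack(trajectory)                    # (n_ticks, state_dim) state history

traj = think(np.array([1.0, -0.5, 0.3]))
print(traj.shape)  # (10, 8)
```

The point of the sketch is only that the state keeps changing across ticks for one and the same input, which is what allows a multi-step "thinking" process.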

The core of this approach lies in the use of neural synchronization as a fundamental mechanism of representation. Sakana AI was inspired by the way biological brains work, in which temporal coordination between neurons plays a crucial role. This biological inspiration goes beyond mere metaphor and forms the foundation of the company's AI development philosophy.

Neuron-level models: The technical foundations

The CTM introduces a complex neural architecture referred to as “neuron-level models” (NLMs). Each neuron has its own weight parameters and maintains a history of its past activations. These histories influence the neurons' behavior over time and enable more dynamic processing than conventional artificial neural networks.

The thinking process runs over several internal steps. First, a “synapse model” processes the current neuron states and external input data to produce the first signals, the so-called pre-activations. Subsequently, individual “neuron models” use the histories of these signals to calculate their next states.
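The two-stage update described above, a shared synapse model producing pre-activations and per-neuron models reading their own histories, can be sketched roughly as follows. All names and dimensions are hypothetical; this is a toy illustration of the data flow, not Sakana AI's code:

```python
import numpy as np

rng = np.random.default_rng(42)
N, D, H = 4, 3, 5                      # neurons, input dim, history length (all illustrative)

W_syn = rng.normal(size=(N, N + D))    # shared "synapse model" (here: one linear map)
W_nlm = rng.normal(size=(N, H))        # per-neuron weights over that neuron's own history

def run(x, n_ticks=3):
    states = np.zeros(N)
    history = np.zeros((N, H))         # rolling pre-activation history per neuron
    for _ in range(n_ticks):
        # Stage 1: the synapse model mixes current states and external input
        pre = np.tanh(W_syn @ np.concatenate([states, x]))   # pre-activations
        # Stage 2: each neuron-level model reads its own pre-activation history
        history = np.roll(history, -1, axis=1)
        history[:, -1] = pre
        states = np.tanh((W_nlm * history).sum(axis=1))
    return states

s = run(np.array([0.2, -0.1, 0.5]))
print(s.shape)  # (4,)
```

Note the key structural choice this mirrors: the synapse model is shared, while each neuron applies its own private weights to its own history.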

The neuron states are recorded over time in order to analyze the synchronization strength between neurons. This synchronization forms the central internal representation of the model. An additional attention mechanism enables the system to select and process relevant parts of the input data.
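One simple way to read synchronization out of recorded neuron states is a pairwise correlation matrix over ticks. This is a hedged sketch of the idea; the CTM's actual synchronization measure may be defined differently:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ticks, n_neurons = 20, 6
activity = rng.normal(size=(n_ticks, n_neurons))  # stand-in for recorded neuron states

# Pairwise synchronization: how strongly each pair of neurons co-varies over ticks.
sync = np.corrcoef(activity.T)                    # (n_neurons, n_neurons), symmetric

# Flatten the upper triangle into a vector: this plays the role of the internal
# representation that later readout layers would consume.
iu = np.triu_indices(n_neurons, k=1)
representation = sync[iu]
print(representation.shape)  # (15,)
```

The representation thus lives in the *relationships between neurons over time*, not in a single activation snapshot, which is the conceptual departure from conventional networks.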

Performance and practical tests

In a series of experiments, Sakana AI compared the performance of the CTM with established architectures. The results show promising progress in various areas of application:

Image classification and visual processing

On the well-known ImageNet-1K data set, the CTM achieves a top-1 accuracy of 72.47% and a top-5 accuracy of 89.89%. Although these values are not state of the art by today's standards, Sakana AI emphasizes that this was not the primary goal of the project. Notably, this is the first attempt to use neural dynamics as a form of representation for ImageNet classification.

In tests with the CIFAR-10 data set, the CTM also performed slightly better than conventional models, and its predictions were more similar to human decision-making behavior. On CIFAR-10H, the CTM achieves a calibration error of only 0.15, thereby surpassing both humans (0.22) and LSTMs (0.28).
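Calibration error of this kind is commonly measured as the expected calibration error (ECE): predictions are binned by confidence, and the gap between average confidence and accuracy is averaged across bins. A minimal sketch, assuming this standard metric is what is meant here:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin-size-weighted average of |accuracy - confidence| per confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Perfectly calibrated toy case: 80% confidence, 80% accuracy -> ECE of 0
conf = np.full(10, 0.8)
hits = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(round(expected_calibration_error(conf, hits), 4))  # 0.0
```

A lower ECE means the model's stated confidence tracks how often it is actually right, which is what makes the CTM's 0.15 versus the human 0.22 noteworthy.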

Complex problem solving

On parity tasks of length 64, the CTM achieves an impressive accuracy of 100% with over 75 ticks, while LSTMs, with at most 10 effective ticks, stall below 60%. In a maze experiment, the model demonstrated behavior resembling the step-by-step planning of a route, with a success rate of 80%, compared to 45% for LSTMs and only 20% for feedforward networks.

Particularly interesting is the model's ability to dynamically adapt its processing depth: it stops earlier on simple tasks and computes longer on more complex ones. This works without additional loss functions and is an inherent property of the architecture.
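A plausible, purely illustrative halting rule of this kind stops ticking as soon as the prediction becomes sufficiently certain. Note that in the CTM this behavior is reported as emergent rather than imposed by an explicit threshold like the one below:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adaptive_ticks(logits_per_tick, threshold=0.9):
    """Hypothetical halting rule: stop once the top class exceeds a certainty
    threshold; otherwise run through all available ticks."""
    for t, logits in enumerate(logits_per_tick, start=1):
        probs = softmax(logits)
        if probs.max() >= threshold:
            return t, int(probs.argmax())       # halted early
    return t, int(probs.argmax())               # used every tick

# Easy input: confidence rises quickly, so the loop halts at tick 2
easy = [np.array([0.5, 0.4, 0.1]),
        np.array([4.0, 0.2, 0.1]),
        np.array([5.0, 0.1, 0.0])]
t, cls = adaptive_ticks(easy)
print(t, cls)  # 2 0
```

Harder inputs, whose logits stay flat for longer, would consume more ticks under the same rule, mirroring the adaptive compute depth described above.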

Interpretability and transparency

An outstanding feature of the CTM is its interpretability. During image processing, the attention heads systematically scan relevant features, which offers insight into the model's “thinking process”. In maze experiments, the system showed behavior resembling the step-by-step planning of a route, behavior that, according to the developers, is emergent and was not explicitly programmed.

Sakana AI even provides an interactive demo in which a CTM finds its way out of a maze in the browser in up to 150 steps. This transparency is an important advantage over many modern AI systems, whose decision-making processes are often perceived as a “black box”.


Challenges and limitations

Despite the promising results, the CTM still faces considerable challenges:

  1. Computing effort: every internal tick requires a complete forward pass, which roughly triples training costs compared to LSTMs.
  2. Scalability: current implementations process a maximum of 1,000 neurons, and scaling to transformer size (≥1 billion parameters) has not yet been tested.
  3. Areas of application: while the CTM shows good results in specific tests, it remains to be seen whether these advantages carry over to broad practical applications.

The researchers also experimented with different model sizes and found that more neurons led to more diverse activity patterns, but did not automatically improve the results. This indicates complex relationships between model architecture, size and performance.

Sakana AI: A new approach to artificial intelligence

Sakana AI was founded in July 2023 by the AI visionaries David Ha and Llion Jones, both former Google researchers, together with Ren Ito, a former employee of Mercari and official in the Japanese Foreign Ministry. The company pursues a fundamentally different approach than many established AI developers.

Instead of following the conventional path of ever more massive, resource-intensive AI models, Sakana AI takes inspiration from nature, in particular from the collective intelligence of schools of fish and flocks of birds. In contrast to companies such as OpenAI, which develop large, powerful models such as ChatGPT, Sakana AI relies on a decentralized approach with smaller, collaborative AI models that work together efficiently.

This philosophy is also reflected in the CTM. Instead of simply building larger models with more parameters, Sakana AI focuses on fundamental architectural innovations that could change the way AI systems process information.

A paradigm shift in AI development?

The Continuous Thought Machine could mark a significant step in AI development. By reintroducing temporal dynamics as a central element of artificial neural networks, Sakana AI extends the repertoire of tools and concepts for AI research.

The CTM's biological inspiration, interpretability, and adaptive computation depth could be particularly valuable in application areas that require complex reasoning and problem solving. In addition, this approach could lead to more efficient AI systems that get by with fewer computing resources.

It remains to be seen whether the CTM actually represents a breakthrough. The biggest challenge will be to translate the promising laboratory results into practical applications and to scale the architecture to larger models.

Regardless of this, the CTM represents a bold and innovative approach which shows that, despite the impressive successes of current AI systems, there is still plenty of room for fundamental innovation in the architecture of artificial neural networks. Sakana AI's Continuous Thought Machine reminds us that we may only be at the beginning of a long journey toward truly human-like artificial intelligence.


 
