

Artificial intelligence with EXAONE Deep: LG AI Research introduces a new reasoning AI model and agentic AI from South Korea

Published on: March 24, 2025 / Updated on: March 24, 2025 - Author: Konrad Wolfenstein


Image: Xpert.digital

South Korea's AI offensive: EXAONE Deep sets global standards

LG presents EXAONE Deep: revolutionary agentic AI on an open-source basis

With EXAONE Deep, LG AI Research has published a further reasoning AI model as open source, bringing South Korea's AI efforts to the global stage. The model, presented in March 2025 during Nvidia's GTC developer conference, is characterized by its ability to formulate hypotheses, verify them, and make autonomous decisions based on them. This innovative AI solution marks the transition to the era of "agentic AI" and positions LG among the few global companies driving this technology forward. With impressive results in mathematical, scientific, and coding benchmarks at an efficient model size, EXAONE Deep represents significant progress in AI development.

The EXAONE model family and its development

From the beginnings to EXAONE Deep

The foundation for EXAONE Deep was laid in December 2020 with the establishment of LG AI Research. Under the leadership of LG Corp. Chairman Koo Kwang-mo, the research unit was launched with the aim of securing LG's long-term future through AI technology. At a management meeting, Koo emphasized: "We have to develop AI with foresight to maintain growth engines for the 2030s."

The development of the EXAONE model family began with EXAONE 1.0 in December 2021, a "supergiant AI" model with around 300 billion parameters. This was followed by EXAONE 2.0 in July 2023 and EXAONE 3.0 in August 2024; the latter, as South Korea's first open-source AI model, was an important milestone. At the end of 2024, EXAONE 3.5 followed with improved instruction following and understanding of longer contexts. EXAONE Deep builds on this development and focuses specifically on reasoning skills.

Technical architecture and model variants

EXAONE Deep is based on a decoder-only transformer architecture and is available in three size variants:

  1. EXAONE Deep-32B: The flagship model with 32 billion parameters and 64 layers, optimized for maximum reasoning performance.
  2. EXAONE Deep-7.8B: A lightweight version with 7.8 billion parameters and 32 layers that offers 95% of the 32B model's performance at only 24% of its size.
  3. EXAONE Deep-2.4B: An on-device model with 2.4 billion parameters and 30 layers that, despite its small size (7.5% of the 32B model), still reaches 86% of its performance.

All models have a maximum context length of 32,768 tokens, a significant improvement over previous models. The models were mainly trained on reasoning-specialized datasets that take long chains of thought into account, enabling them to understand more complicated relationships and draw logical conclusions. A minimal loading sketch is shown below.
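Because the models are released as open source, they can be tried out directly. The following is a minimal sketch of how the 2.4B on-device variant could be loaded with the Hugging Face Transformers library; the repository ID, precision, and generation settings are illustrative assumptions and are not taken from the article.

```python
# Minimal sketch: loading the small EXAONE Deep variant with Hugging Face Transformers.
# The repository ID and generation parameters are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-Deep-2.4B"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # reduced precision so the model fits on modest hardware
    device_map="auto",
    trust_remote_code=True,       # EXAONE releases ship custom modeling code
)

# A simple reasoning-style prompt; the chat template comes from the model repository.
messages = [{"role": "user", "content": "How many prime numbers are there below 30?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```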


Performance features and benchmark results

Mathematical reasoning and scientific problem solving

EXAONE Deep shows particularly impressive results in mathematical and scientific reasoning tasks. The 32B model scored 94.5 points on the mathematics section of the South Korean college entrance exam (CSAT) and 90.0 points on the 2024 American Invitational Mathematics Examination (AIME), surpassing competing models.

On MATH-500, a benchmark for evaluating mathematical problem-solving skills, it achieved 95.7 points. It is particularly noteworthy that the model achieves this performance at only about 5% of the size of "giant" models such as DeepSeek-R1 (671 billion parameters).

In the field of scientific reasoning, the 32B model scored 66.1 points on the GPQA Diamond test, which evaluates doctorate-level problem-solving skills in physics, chemistry, and biology. These results underline the model's ability to understand and apply complex scientific concepts.

Coding skills and general language understanding

EXAONE Deep also proves its strength in coding and problem solving. On LiveCodeBench, which evaluates coding skills, the 32B model reached a score of 59.5. This underlines its potential for applications in software development, automation, and other technical areas that require a high degree of computational reasoning.

In general language understanding, the model secured the highest MMLU (Massive Multitask Language Understanding) score among Korean models, with 83.0 points. This shows that EXAONE Deep is efficient not only in specialized reasoning tasks but also in general language understanding.

Performance efficiency of the smaller models

The performance of the smaller model variants is particularly noteworthy. The 7.8B model scored 94.8 points on MATH-500 and 59.6 points on AIME 2025, while the 2.4B model scored 92.3 points on MATH-500 and 47.9 points on AIME 2024. These results position the smaller versions of EXAONE Deep at the top of their respective categories across all important benchmarks.

The community is particularly surprised by the performance of the 2.4B model. A Reddit post notes that this small model even exceeds the significantly larger Gemma 3 27B model in certain benchmarks. One user wrote: "I mean, you're telling me that a 2.4B model (46.6) beats Gemma 3 27B (29.7) on LiveCodeBench?"

Application potential and significance in the AI market

Areas of application in industry, research and education

LG AI Research expects EXAONE Deep to be used in various areas. The press release states: "EXAONE Deep will be used not only in professional fields that industries will need in the future, but also in scientific research and educational areas such as physics and chemistry, as it shows high performance on evaluation indicators of specialized fields such as mathematics, science, and coding."

A special focus is on the on-device model (2.4B), which, thanks to its small size, can be used on devices such as smartphones, in automobiles, and in robotics. Since data can be processed securely on the device without requiring a connection to external servers, this model offers advantages for data security and the protection of personal data.

Positioning in the global AI competition

With the publication of EXAONE Deep, LG positions itself in the increasingly competitive global AI market. The South Korean tech company thus enters direct competition with large technology companies such as OpenAI and Google DeepMind, as well as Chinese AI developers such as DeepSeek.

A representative of LG AI Research said: "We announced EXAONE Deep about a month after participating in the domestic AI industry competitiveness diagnosis and review meeting held by the National Artificial Intelligence Committee in February, where the open-source release of a DeepSeek-R1-level model was announced as forthcoming." The representative added: "The core of LG's AI technology is maintaining performance while significantly reducing model size."

At a time when cost-efficient models are receiving great attention following the rise of China's DeepSeek in the field of reasoning capabilities, LG's approach of developing smaller but powerful models could be a strategic advantage.

The significance of reasoning AI and "agentic AI"

From knowledge AI to reasoning AI

With EXAONE Deep, LG AI Research marks the transition from "knowledge AI" to "reasoning AI". While traditional AI models are mainly geared towards retrieving and providing information, reasoning AIs such as EXAONE Deep can formulate hypotheses independently, verify them, and make autonomous decisions based on them.

This ability marks the entry into the era of "agentic AI": active AI that is able to "think" and act independently. LG AI Research explains: "Agentic AI refers to an active AI that is able to make autonomous decisions by formulating hypotheses independently and drawing conclusions to verify them."

The open source strategy

An important aspect of the EXAONE Deep release is the decision to provide the model as open source. This continues the strategy that started with EXAONE 3.0, the first open-source AI model in South Korea.

The open-source strategy enables developers to use and build on the model for research purposes without restrictions. This could lead to broader adoption and further development of the technology and strengthen LG's position in the global AI ecosystem.

Kyung-Hoon Bae, President of LG AI Research, said: "We plan to provide this highly versatile and lightweight model as open source so that universities and research institutions can use the latest generative AI technology, which contributes to the AI research ecosystem and further improves AI competitiveness."


Future prospects and ongoing developments

ChatEXAONE: The new standard for AI-based productivity in enterprises

LG plans to work with LG subsidiaries in the second half of the year to integrate EXAONE Deep into various products and services. Depending on the application, EXAONE will be available in different model sizes, from the ultra-lightweight model for on-device AI services to the high-performance model for specialized applications.

A concrete example of the practical application of EXAONE technology is ChatEXAONE, an AI agent for enterprises based on EXAONE 3.0 that is already available as an open beta to employees of the LG Group. ChatEXAONE offers various functions for increasing productivity, including real-time web-based question answering, document- and image-based question answering, coding support, and database management.

Further development of AI expertise within the LG Group

The development of EXAONE Deep is part of a larger AI strategy within the LG Group. LG has already set up an internal AI graduate school to train specialized engineers through a nine-month master's program and an 18-month doctorate program.

Employees who take these courses work on projects that are difficult for individual subsidiaries to develop on their own. As part of a pilot project, LG Display developed a design technology to fit more pixels on the same screen, while LG Electronics and LG Innotek developed AI-based methods for precise demand forecasting that are expected to significantly reduce inventory costs.

Why smaller AI models could be the better choice: a look at EXAONE Deep

With the introduction of EXAONE Deep, LG AI Research has achieved an important milestone in AI development. As South Korea's first reasoning AI model based on a foundation model, it places LG among the ranks of leading global technology companies developing this advanced AI technology. The impressive performance in mathematical, scientific, and coding benchmarks at an efficient model size underlines the model's potential for a wide range of applications.

LG's approach of developing high-performance AI models at a relatively small size is particularly noteworthy. While many AI companies rely on ever larger models, EXAONE Deep shows that, with intelligent optimization and specialized training, smaller models can achieve top performance. This could not only offer economic advantages but also enable the use of powerful AI models on edge devices.

With the open-source release of EXAONE Deep, LG AI Research contributes to the global AI research ecosystem and at the same time strengthens South Korea's position in the international AI competition. It remains to be seen how this technology will be implemented in the various products and services of the LG Group and what innovations it will enable in various industries.


 
