
Dragon LLM: CEO Olivier Debeugny on Europe’s First Frugal AI Architecture (and His Dream of a Generative AI Airbus)

In this interview, Olivier Debeugny, founder of Dragon LLM and a former finance professional turned AI entrepreneur, explains how Dragon LLM achieved its scientific milestone, why AI sovereignty and frugality matter, and what it will take for Europe to build its own generative AI powerhouse.

A few weeks ago, Dragon LLM, a French startup born from Lingua Custodia, stunned the AI community by unveiling the first European-designed large language model (LLM) architecture. Unlike the ubiquitous Transformer framework that underpins ChatGPT and most modern AI systems, Dragon LLM’s new approach is leaner, faster, and far less energy-hungry. This scientific achievement could eventually allow AI models to run locally, on SME servers or even on a simple smartphone. We met with Olivier Debeugny, founder of Dragon LLM, who envisions the creation of a true “Airbus of European generative AI.” We also discussed the AI bubble, which the whole ecosystem is watching closely.

Dragon LLM, formerly known as Lingua Custodia, launched on the market in 2015 and originally focused on models for the finance sector, providing a secure, finance-oriented alternative to Google Translate. In mid-October, the company announced a major scientific breakthrough: a new European LLM architecture that is frugal while delivering the same performance as the Transformer-based architectures behind the best-known AI models, from Mistral to OpenAI and Anthropic.

From the company’s base in France, Debeugny envisions a coordinated industrial effort, an “Airbus of generative AI” capable of competing with the United States and China on its own terms.

You announced that Dragon LLM is the first startup to have developed a 100% European AI architecture. What exactly does that mean? How is that different from what Mistral is doing?

Olivier Debeugny: “The LLMs that form the basis of most generative AI use cases today are built using a certain architecture, the same one used by roughly 95% of all model providers, including MistralAI. That technical architecture is called the Transformer architecture.

Until now, there have been companies in the U.S. exploring hybrid approaches to Transformers, some in Asia, and a few in the Middle East as well. But in Europe, no one has proposed a new technical architecture for LLMs since 2022 or 2023, which in our field already feels like ancient history. So, for the first time in a long time, a European company has created a new, modern type of architecture for building these LLMs. And the goal of our new architecture is to be more efficient and more frugal than the traditional Transformer architecture. We called this architecture Dragon, and then we renamed ourselves Dragon LLM.”

So Mistral doesn’t use a 100% European architecture?

Olivier Debeugny: “No, they use the Transformer architecture, which is a worldwide standard that was originally created in the United States. Of course, their LLMs are built by them using data they’ve collected.

When you create an LLM, there are several components: the architecture you use, the data you use, and then the training you apply for specific types of tasks. What we’re talking about here is the foundation, the base of the pyramid. So it’s quite a scientific approach. Our achievement is a scientific achievement. But we didn’t throw Transformers away completely. We combined the Transformer approach with other technological components inside the model.”


What do you mean by “scientific achievement”?

Olivier Debeugny: “We reviewed all the research papers published over the past two years to propose a new way of building LLMs. And we proved, by training a demonstration model, that this architecture works and delivers what we claimed: in particular, better efficiency, speed, and the ability to run on smaller machines. And it has been reviewed and validated by the scientific community.”

How did you achieve this performance?

Olivier Debeugny: “At the end of 2023, the European Commission organized a competition across the European ecosystem to reward an original idea aimed at creating foundation LLMs, in exchange for access to European computing power to do so. Hundreds of companies applied, and in the end, four were selected. We were the only French company chosen. The other companies mainly proposed to create foundation LLMs based on data, such as one that focused on covering Eastern European languages, which are often underrepresented. We were the only team that pitched a project focused on architecture, to make LLMs more energy-efficient and lightweight, and that’s how we were selected.

To do this, you first need access to a lot of computing power to run tests, and then to train models to prove that the new architecture works. So we created the architecture and built a demonstration model to prove its effectiveness. We’re one of the first companies to use the German supercomputer Jupiter. And we’re continuing from there: we’re now developing foundation models of various sizes based on this architecture, which we’ll be releasing to the market soon.”

Can you tell us more concretely what you’re bringing in terms of greater efficiency, frugality, and lower energy consumption?

Olivier Debeugny: “The goal is this: when you ask a Transformer-based LLM a question, it’s as if I had to think through everything I’ve ever said in my life before answering you. Our hybrid architecture introduces small dynamic memory boxes, so I can answer much faster, without reprocessing everything and without consuming as much energy. That’s why it can run on smaller machines.

Take a model equivalent in size to 70 billion parameters, for example: with this Dragon architecture, it can either handle many more users on the same machine or run at the same performance level on a smaller machine. That’s what we mean by frugality. For instance, a small 7-billion-parameter model built on this architecture and configured so that only 1 billion parameters are active could potentially run on a simple CPU server.”
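The interview doesn’t spell out how these “memory boxes” work internally, but the family of techniques it evokes is well documented: instead of re-scoring every past token for each new one, as full attention does, a linear-attention-style layer folds the history into a fixed-size state. Here is a minimal sketch of that general idea, assuming a standard linear-attention formulation rather than Dragon’s actual published design:

```python
# Illustrative sketch of the general idea behind hybrid architectures --
# NOT Dragon LLM's published code. Full attention revisits every past
# token for each new one; a fixed-size recurrent state ("memory box")
# is updated once per token and read back in constant time.
import numpy as np

def phi(x):
    # Positive feature map (elu(x) + 1), common in linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def full_attention(q, K, V):
    # O(n) work per generated token: scores over the entire context.
    s = K @ q
    w = np.exp(s - s.max())
    w /= w.sum()
    return w @ V

class MemoryBox:
    # O(1) work per token: the state size never grows with history.
    def __init__(self, d):
        self.S = np.zeros((d, d))   # running sum of phi(k) v^T
        self.z = np.zeros(d)        # running sum of phi(k)

    def write(self, k, v):
        fk = phi(k)
        self.S += np.outer(fk, v)
        self.z += fk

    def read(self, q):
        fq = phi(q)
        return (fq @ self.S) / (fq @ self.z + 1e-9)

d = 8
rng = np.random.default_rng(0)
K, V = rng.normal(size=(100, d)), rng.normal(size=(100, d))
q = rng.normal(size=d)

box = MemoryBox(d)
for k, v in zip(K, V):
    box.write(k, v)              # state stays (d, d), however long the past

print(full_attention(q, K, V))  # must re-read all 100 past tokens
print(box.read(q))              # constant-time read from the fixed state
```

The same logic explains the CPU-server claim. As a rough, illustrative calculation (our assumptions, not Dragon LLM’s figures): a 7-billion-parameter model quantized to 8 bits occupies about 7 GB of RAM, which fits on an ordinary server, and if only 1 billion parameters are active per token, the per-token compute drops to roughly 2 GFLOPs, within reach of a multi-core CPU at interactive speeds.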

So clearly, in the long term, we could have a kind of local AI that is hosted directly within companies, without needing a massive cloud infrastructure?

Olivier Debeugny: “Yes. One of our objectives is to make it run on a phone in airplane mode. The idea is to make it so frugal that it could run in such an environment. We’re still working on it, though.”

Would that be a general-purpose model or a specialized one?

Olivier Debeugny: “It would be general-purpose. We’re not yet sure we’ll achieve that, but it’s one of the goals. In any case, it will be able to run on a CPU — that’s certain. The phone aspect is being considered particularly for defense industries or similar use cases. Currently, we operate in the regular enterprise space. We come from the financial sector, so by extension, we work in regulated environments where data security is critical.”

Are you positioning yourselves as a kind of response to the growing energy concerns around AI?

Olivier Debeugny: “It’s both an energy issue and an economic one for companies that want to use these systems. Very often, companies like banks come to me saying they need a solution that can analyze and respond to SWIFT messages, for example. You don’t necessarily need a model that knows the entire history of Uruguay for that. So, is there a way to have something more specialized, smaller, and capable of running on smaller machines? Yes. And while that’s great for the environment, at its core, it’s really an economic issue.”


Your initial goal is to democratize AI. Are SMEs your primary target?

Olivier Debeugny: “Not necessarily. Historically, we’ve worked with large banks. We’ve been in the market for over ten years, initially developing hundreds of models dedicated to a specific task: machine translation in the financial sector. So, we already work with many major banks including Crédit Agricole group, as well as with Rothschild, HSBC, Natixis, and AXA. We also published a fundamental research paper with BNP Paribas’ AI teams earlier this year on controlling hallucinations in RAG systems.

These organizations are very interested in models that can run on small servers, including CPU-based servers. Why? Because their production infrastructure relies on CPUs. Integrating new GPU servers in a bank involves complex legacy systems; it could take two to three years. It’s a massive undertaking. Yet they want to integrate generative AI into their processes today.

Another major challenge, even for large companies in finance, industry, and insurance, is that many have experimented with AI models via APIs or token-based systems. But moving to production is much harder.

Our goal is to release the architecture, then the models, and finally to develop specialized versions of these models for our clients, and deploy them both in large and small companies.”

So your business will be fine-tuning your main model for clients?

Olivier Debeugny: “Exactly. Once the open-source foundation models are out, we’ll begin creating specialized versions of them based on specific client use cases. The idea is then to develop a range of commercial, fine-tuned models for targeted applications — for instance, financial text sentiment analysis for investment, or other domain-specific tasks.

And we’ll be all the more credible with clients. Because the hardest thing for companies like ours is credibility. There are thousands of startups claiming to “do AI” today. But for us, doing AI means building the model from the ground up.”

Olivier Debeugny, CEO; Raheel Qader, Head of R&D; and Jean-Gabriel Barthélémy, AI Engineer, Dragon LLM

Why did you choose to go open source? 

Olivier Debeugny: “The architecture, the demonstration models, and the foundation models that we will release later are indeed all open source. Publishing them as open source means that other companies with the right expertise to create LLMs can reuse this architecture to build models that are more efficient, can run on smaller machines, and are therefore more accessible. It’s a way for us to encourage broader adoption of this new model architecture we’ve developed. And it would make it much easier to deploy generative AI across enterprises.”

Your solution has been on the market since October 15, is that right?

Olivier Debeugny: “The architecture was published on October 15, and yes, it’s available. I’m talking only about the architecture here. We also published the demonstration model on Hugging Face the same day. We were able to show that this architecture can achieve performance equivalent to other models trained with far more data, which is one of the interesting aspects of our work.

October 15 was just the first step. The next step is the release of the models themselves. We’ve already published a small demonstrator model with 3.8 billion parameters, and next we’ll release 7-billion and 70-billion parameter foundation models based on this architecture, all open source. We hope to release everything by the end of the year or early January.”
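For readers who want to try the demonstrator, models published on Hugging Face are typically loadable with the transformers library. A hedged sketch, since the interview does not give the exact repository name (the id below is a placeholder, and a custom non-Transformer architecture usually ships its own modeling code, hence trust_remote_code):

```python
# Hypothetical usage sketch: "dragon-llm/demo-3.8b" is a PLACEHOLDER id,
# not the real repository name, which the interview does not give.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "dragon-llm/demo-3.8b"  # placeholder -- check Hugging Face for the real id
tok = AutoTokenizer.from_pretrained(repo)
# Custom architectures ship their own modeling code on the Hub,
# so loading them generally requires trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tok("The Dragon architecture is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```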


In the long term, do you envision creating models to compete with OpenAI, Anthropic, and others?

Olivier Debeugny: “It’s our ambition. We hope that when our 70-billion-parameter foundation model is released, it will be good enough to provide a credible alternative to certain general-purpose models, or at least enable others to use it.

In the creation of LLMs, there are several key steps: first comes the model architecture, then the base model, and finally the instruct model. The base model, at its simplest, predicts a word given a few others. For this stage, we’ve worked with high-quality open-source corpora that we carefully cleaned. We know we’ll be very strong at the base model level.

The instruct model can perform tasks such as sentiment analysis, summarization, rewriting, and more. This phase is more complex: it requires not only computing power and expertise, which we have, but also large, high-quality instruct datasets, which are challenging to obtain.

There’s also the multimodal aspect. We’re currently focusing heavily on text. We won’t compete at this stage with models handling images, audio, etc. Doing that would require additional development and resources.”
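To make the base/instruct distinction he describes concrete, here is an illustrative pair of prompts; the formats are generic examples, not Dragon LLM’s actual templates:

```python
# Illustrative only: generic prompt styles, not Dragon LLM's templates.

# A base model is a pure next-token predictor: it simply continues text.
base_prompt = "The capital of France is"
# Typical completion: " Paris, a city of more than two million people..."

# An instruct model has been further trained on (instruction, response)
# pairs, so it performs the requested task instead of continuing the prose.
instruct_prompt = (
    "### Instruction:\n"
    "Summarize the following SWIFT message in one line.\n"
    "### Input:\n"
    "{swift_message}\n"
    "### Response:\n"
)
```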


Europe has strong computing resources, but what’s still missing for you to compete with the U.S. and China?

Olivier Debeugny: “We do have significant computing power in Europe. AI factories are being established and offer companies — including SMEs — access to high-performance resources for tasks like fine-tuning. This is a major advantage.

What Europe lacks, however, is direct financing. Even though institutions like the European Investment Bank invest in AI, access to funding for companies to grow and remain European is limited. So far, we’ve been able to achieve all this thanks to the expertise of our team, and especially thanks to the computing power provided by the European Commission. Since we could rely on these resources, valued at around €12–15 million, we didn’t need to raise private capital for our project.

What we also lack is visibility and distribution. The challenge for us is that there’s a strong tendency toward Microsoft, Google, and other American companies. Over the past 10 years, I’ve often been told, “Your product is excellent, we’ve tested it, it’s better than the American alternatives, but personally, if I choose something more expensive that works slightly less well, but is made by Microsoft, it’s zero risk. Working with you is exposing me.” That’s reality. So yes, the path is to grow, gain credibility, and show we’ve been around for a long time, and gradually we’re getting there.”

But do you believe Europe can compete technologically in generative AI with the U.S. and China?

Olivier Debeugny: “We’re capable on the model side. Where we may be behind is hardware and components. There’s an even bigger challenge: creating a sustainable European AI ecosystem. The key question is: how can we build a European “Airbus of generative AI”? How can companies of similar size and expertise gradually collaborate or merge, as happened with Airbus, to create a strong, independent European AI player? There is an ecosystem of European companies with the expertise to create models, such as LightOn, Pleias, and H in France. I’m a convinced European. I find the idea of an Airbus of generative AI for Europe exciting; it’s a kind of dream I’d like to pursue.”

How are you financed? 

Olivier Debeugny: “We are structured differently from typical AI companies. I’ve always prioritized having talent in the company who publish research papers every year. That also allows us to access public research funding. For example, this year we received the France 2030 label for a project with AGEFI to fine-tune models for the financial industry. That provides some public revenue. Additionally, we have our clients and recurring revenues from our business, which has been active for over ten years. In terms of initial financing, we have relied only on business angels, not professional investors.”

European business angels?

Olivier Debeugny: “Yes, that’s important. We’ve raised very little. The company was founded in 2011, launched on the market in 2015, and since 2011, we’ve raised about €2.9 million. We are really the antithesis of the heavily funded AI startups. Last year, our balance sheet was break-even. Personally, I aim to fund the company through client revenue, not just investment capital.”


In the long term, is your goal to target the European market or the global market?

Olivier Debeugny: “First, there’s a lot to do in Europe. There’s significant complexity, including linguistic diversity. When we create LLMs, it’s like building an encyclopedia. For me, it’s crucial to create models that reflect our culture. When we trained our foundation models, we included Albanian, Hungarian, etc. Do U.S. or Chinese models bother including Albanian corpora? No. So there’s work to do. The question is whether we can scale sufficiently, achieve a decent size, and whether having only a European ambition is enough. For me, focusing on Europe is already meaningful and contributes to a broader project.”

Last question about the AI bubble. Is there one? When will it burst, and how?

Olivier Debeugny: “That’s the usual question. Everyone agrees there’s a bubble, but no one knows when it will pop — me included. Activity is booming, with so many models being released that it’s hard to keep up. We rushed to launch our architecture simply to avoid being overtaken! But when you compare this pace with U.S. valuation levels, it’s extreme. Valuations are soaring while adoption remains slow — just like during the dot-com bubble. Eventually, generative AI will be everywhere, but adoption and revenue growth are lagging behind expectations.

For me, the real issue is the gap between valuations and actual business returns. Token-based API models require massive usage to stay profitable, and verbosity only drives up costs. That’s why I prefer a licensing model — letting companies run models internally instead of paying per token, which creates downward price pressure.

Sooner or later, valuations will outpace reality, funding will tighten, and a correction will come — as we saw in 2022 with tech companies. That’s why I believe in building a confederation of smaller European AI players rather than relying on a few fast-growing, overfunded giants. It’s a more sustainable path, with greater long-term control and stability.”
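On the token-economics point Debeugny raises, a back-of-envelope comparison makes the pressure he describes visible. Every figure below is an illustrative assumption, not a number from Dragon LLM or any vendor:

```python
# Illustrative economics only: all numbers are assumptions made for the
# sake of the argument, not quotes from Dragon LLM or any provider.
price_per_1k_tokens = 0.01      # assumed API price, $ per 1,000 tokens
tokens_per_query = 2_000        # verbose answers inflate this directly
queries_per_day = 50_000        # assumed enterprise-wide usage

api_cost_per_year = (price_per_1k_tokens * tokens_per_query / 1_000
                     * queries_per_day * 365)
print(f"Token-based API: ${api_cost_per_year:,.0f} per year")  # ~$365,000

flat_license = 100_000          # assumed annual on-prem license fee
print(f"Flat license:    ${flat_license:,.0f} per year, independent of verbosity")
```

Under these assumptions, usage-based pricing scales linearly with both query volume and verbosity, while a license plus in-house hardware is a fixed cost, which is the trade-off Debeugny is pointing at.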

