At the AIM conference in Marseille last week, much of the discussion focused on the future of AI: superintelligence, productivity gains, the need for regulation without stifling innovation, and the challenge of building sovereign AI systems as part of a potential “third European way.” We asked Laurence Devillers, Professor at Sorbonne University and CNRS researcher at Paris-Saclay, for her perspective on the growing number of AI summits aimed at addressing these issues, on the technical future of algorithms, and on what remains to be done by public authorities and individual companies to maintain control over this field as technological progress accelerates.
For years, Laurence Devillers, teacher–researcher in computer science applied to the social sciences at Sorbonne University and author of “AI, Angel or Demon?: The New Invisible World,” has been calling on European public authorities to recognize the major challenges posed by AI. She argues that AI is an undeniable and much-needed technological revolution for society—but not without education, not without understanding, and not implemented carelessly.
As president of the Blaise Pascal Foundation, whose mission is to promote, support, develop, and sustain scientific outreach initiatives in mathematics and computer science, she believes we must start educating tomorrow’s engineers today so they can properly understand and work with AI-related tools. Interview.
When I first interviewed you a few years ago, you spoke about the need for a “COP for robotics,” well before generative AI took off. Today, AI Summits are multiplying—Marseille this week, Paris last February, Berlin next week, and India next year. Do you see this as a step in the right direction?
Laurence Devillers: “Yes, it’s heading in the right direction. Change is happening. On sovereignty, Anne Le Hénanff, the French AI minister, stressed defining European standards and applying them consistently, as well as building alliances. Next week’s Berlin summit may be Franco-German, but the aim is broader cooperation.”
But it seems that people don’t expect much in terms of cooperation from next week’s Berlin summit, is that right?
Laurence Devillers: “Dialogue is always better. We built Airbus with Germany; we can achieve similar results again. We have strong players, and the minister’s firm approach is very convincing. We won’t just stand by—we aim to strengthen and optimize European cooperation.”
We had also discussed ethics. You had suggested that companies and users might one day select tools based on ethical criteria, perhaps even guided by an ethics index—but that hasn’t happened.
Laurence Devillers: “No, it hasn’t happened. In any case, to bring some order to this lawless world, there are three major pillars. The first is law: the Digital Services Act (DSA), the Digital Markets Act (DMA), and the AI Act. The second is standards created by industry, though I don’t see many industrial players taking much interest in them, even though standards are essential if we want to sell products that comply with the law and can be certified through various normative labels. And the third pillar is ethics. Before these tools are deployed, we need to educate people on certain ethical principles, whether they’re engineers or users.
I think ethics is the hidden issue that, in my view—and this is where there is a major gap right now—should be taught in schools, starting with the youngest students. At the Blaise Pascal Foundation, for example, we’ve created short ethics/AI/digital modules that explain what harassment is, what capturing attention or attachment means, what it means when “the computer does my homework.” And we’re struggling to get them adopted by the French Ministry of Education. I find this slowness, this lack of agility in the ministries, to be anti-progress.”
But in Europe, and in France in particular, research institutes are testing AI models, essentially evaluating whether they meet the standards and requirements we expect of them.
Laurence Devillers: “Yes, the CEA is doing that, the Sorbonne too. Large companies don’t do it because they don’t have the time, but major universities do. It’s essential that AI agents deployed for task X, Y, or Z be properly evaluated—but that is rarely the case. Evaluations are not done in interactive real-world scenarios; they rely on benchmarks—mostly well-known or tightly controlled. But human beings exist in complex spatial and temporal contexts, with countless events unfolding, which are extremely difficult for these “disembodied” machines with no world representation to grasp. This is why we’re also trying to invent new tools because with LLMs we’re reaching saturation. They won’t take us much further in representing humans.”
What do you mean? What about artificial superintelligence?
Laurence Devillers: “There’s currently a kind of illusion around superintelligence and general AI, because these tools that process only text—or a bit of other data, but not much—have no understanding of the world, no sense of context. So they won’t go very far. They’ll be incapable of producing an action plan. They’ll remain reactive. So it’s obvious that new algorithmic systems need to be developed. This is what Yann LeCun is trying to do with JEPA, and what others across the global scientific community are attempting as well. There’s a lot of momentum toward building something else because we see the limitations. The signals we hear about superintelligence are marketing from the big tech companies that have invested heavily in these areas and can’t really backtrack now.”
Afterwards, they sold us the metaverse…
Laurence Devillers: “LLMs can still be useful when it comes to language—they can write for you, help create things. They can be used wisely in industry. I’m very pro-technology, but I’m completely against putting this in everyone’s hands and letting people believe things that are false. I would like to hear these tech giants say, “OK, we have supercomputing power, we can do amazing things, but this is not really intelligent.” I think when the LLM bubble bursts on its own, there will be people labeled as charlatans.”
So you really don’t believe in artificial superintelligence?
Laurence Devillers: “From the inside, working with these systems, I can tell you they can do extraordinary things…but they’re not intelligent at all. We’ve given them certain capacities, but they are still very far from human abilities. They imitate things, but that doesn’t mean they understand your psyche. You can be very unhappy or desperate, and the machine won’t understand that at all. The problem with an LLM is that everything is encapsulated. I’ve written theses on how we might better control and interpret signals—audio, gestures, multimodal inputs we could add to better understand context. But we still don’t have many neuro-symbolic models that can make sense of a range of actions. That said, people are working on new algorithmic capabilities, like Yann LeCun with JEPA, which I think is worth exploring.”
Read also our interview with Dragon LLM’s CEO about a European alternative to Transformers
Can you tell us what this new algorithm involves?
Laurence Devillers: “The idea is to create a form of abstraction: LLMs predict the next word; here, we’d want to predict the next action. You would interpret the action at a given moment—visually, through audio, through context—then build a model which, when it sees the next “frame” of that reality, can interpret what happened and extract more abstract concepts through deep learning. These abstract concepts could be chained together to form actions.
In language, the system doesn’t need to “understand” words. They’re pseudo-words, tokens it manipulates. So maybe we could create a system capable of chaining actions that, for us, might seem unrealistic or non-symbolic, and yet the system would still manage to build something structured, even intentional. That’s where tomorrow’s power will come from.”
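To make the contrast concrete: an LLM is trained to predict the next token, whereas the approach described here predicts an abstract representation of the next moment. The sketch below is a purely illustrative toy in PyTorch, not LeCun’s actual JEPA code; the class, layer sizes, and training details are invented for this example. It shows only the core idea: the prediction target is the latent of the next “frame,” and the loss is computed in that abstract space rather than on raw data.

```python
# Illustrative toy only, in the spirit of a joint-embedding predictive architecture.
# Not Meta's JEPA implementation; names and dimensions are invented for this sketch.
import torch
import torch.nn as nn

class ToyJEPA(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=32, action_dim=8):
        super().__init__()
        # Encoder maps a raw observation (a "frame" of reality) into an abstract latent space.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, latent_dim), nn.ReLU(), nn.Linear(latent_dim, latent_dim)
        )
        # Predictor guesses the latent of the *next* frame from the current latent and the
        # action taken, instead of predicting the next token or reconstructing raw pixels.
        self.predictor = nn.Sequential(
            nn.Linear(latent_dim + action_dim, latent_dim), nn.ReLU(), nn.Linear(latent_dim, latent_dim)
        )

    def forward(self, obs_t, action_t, obs_next):
        z_t = self.encoder(obs_t)                 # abstract state now
        z_next = self.encoder(obs_next).detach()  # abstract state one step later (target)
        z_pred = self.predictor(torch.cat([z_t, action_t], dim=-1))
        # The loss lives in latent space: the model is judged on its abstract prediction
        # of "what happens next," not on reproducing the raw data itself.
        return nn.functional.mse_loss(z_pred, z_next)

model = ToyJEPA()
loss = model(torch.randn(4, 64), torch.randn(4, 8), torch.randn(4, 64))
loss.backward()
```

Because the error is measured between latent vectors, such a model is rewarded for anticipating what matters about the next moment, which is the kind of abstraction and chaining of actions described in the interview.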
A word about regulation. Europe has made progress in trying to build a framework for the use of AI, hasn’t it?
Laurence Devillers: “Since 2021, we’ve been trying to establish rules—for example, the AI Act. But we were misled. We decided that chatbots, for instance, would fall into the “limited risk” category in the risk pyramid [the AI Act’s risk pyramid establishes four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk]. So chatbots like ChatGPT were classified as limited risk. Yet we’ve seen children commit suicide using these systems, and we’ve seen emotional dependence develop. And to counter this, we simply said we should avoid subliminal manipulation. That falls far short of the goal; it’s not precise enough.
Meanwhile, California passed a law on October 13, coming into effect July 1, 2026, stating that any companion-chatbot system must prevent emotional dependence, must handle emotional data in a specific way, must encode it appropriately, must avoid loneliness bias, etc. Why aren’t we more agile? And there is also the question of manipulation.”
Yes, let’s talk about manipulation, including electoral manipulation. France is entering a very sensitive electoral period.
Laurence Devillers: “There’s an excellent paper from the British think tank the Institute for Strategic Dialogue, in which researchers tested different models—ChatGPT, Gemini, DeepSeek, etc.—on their capacity for manipulation related to Ukraine. They examined where the comments produced by the models drew their information—whether from Russia or elsewhere. There were 300 questions, some very biased or harmful, others neutral. Their findings show that ChatGPT performs the worst: it produces three times more manipulative outputs referencing sites, users, or narratives relayed by Russian networks than the others. That means there is a huge bias.
And then there’s ChatGPT’s capacity—especially now that GPT-5 has come out with multiple agents, some even closer to human empathy. These, they say, are the worst in terms of manipulation, because they use confirmation bias: they repeat key words you use and feed them back to you. This makes you feel understood, when it’s just the machine echoing your words. This kind of mimicry already existed in ELIZA, the early MIT chatbot built on Rogerian psychotherapy: to create a bond and capture attention and attachment, you reuse the user’s own words.”
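The mimicry Devillers describes requires very little machinery. Here is a toy, purely illustrative Python sketch (not the historical ELIZA program, whose scripts were far richer): it detects a simple pattern, swaps the pronouns, and feeds the user’s own words back as a question, which is enough to create the impression of being understood.

```python
# Toy illustration of ELIZA-style mimicry (not Weizenbaum's original code).
import re

# Swap first-person words for second-person ones so the echo reads as a reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_like_reply(user_input: str) -> str:
    match = re.search(r"i feel (.+)", user_input, re.IGNORECASE)
    if match:
        # The "empathy" is pure mimicry: the user's key words are fed straight back.
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"Tell me more about {reflect(user_input)}."

print(eliza_like_reply("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```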
Final question: what is your view on the involvement of French public authorities in all these issues? Are you concerned?
Laurence Devillers: “There is a cosmic void in education. I could say it directly to President Macron or any minister. How can we expect tomorrow’s talent to understand increasingly complex generative AI models if we don’t have strong education? We need to train teachers—but not with MOOCs or the superficial “sprinkling” we’re doing now. We are surviving on an elite who grew up before generative AI ever existed. For those who come next, we need dedicated courses to introduce these new tools, and also the concepts behind them—epistemology, ethics, even philosophy if we want, but not abstract, classical philosophy. Something very applied to what we’re living through, to avoid manipulation and enable free choice before adulthood. And I see very little of that happening. I truly hope it will finally happen. But at the moment, there is no agility whatsoever.”