Amid widening geopolitical, economic, and technological divides between Europe and the United States, European companies are accelerating their efforts to develop sovereign, made-in-Europe alternatives to American technologies (see our report Europe Tech Transfer: Is It Time to Go Local?). In the customer service sector, Europe may have found its alternative to OpenAI-powered agents in the French solution Yampa. We spoke with its founder, Marin Huet.
Yampa is a French startup that offers an enterprise-grade agentic platform for customer care. The company recently announced the opening of Y.core to enterprises — its sovereign orchestration platform designed to create, deploy, and supervise fleets of autonomous AI agents.
Yampa has raised a €3 million seed round led by Partech, alongside BPO experts and tech entrepreneurs, to turn that vision of customer care run by fleets of autonomous AI agents into an industrial reality. Marin Huet, founder of Yampa, tells us more.
Can you present Yampa, its mission and key features?
Marin Huet: “Our mission is to give companies the ability to create, deploy and operate their own fleet of autonomous AI agents, fully adapted to their processes and constraints. These agents don’t just answer questions; they actually do the work: they gather information, make decisions within predefined policies, trigger actions in back-office systems, and escalate to human teams when necessary. They operate across voice, email, chat, SMS and APIs, in multiple languages, all orchestrated by our platform, Y.core.
Y.core is the command center for this fleet of agents. It connects to CRMs, helpdesks, telephony and internal tools (Salesforce, Zendesk, Freshdesk, Intercom, Microsoft Dynamics, GLPI, custom APIs, etc.) and is compatible with several LLM providers such as OpenAI, Anthropic and Mistral. On top of orchestration, it brings the “enterprise layer” that large organizations need: monitoring, versioning, access control, governance, and quality and compliance workflows.”
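To make this concrete, here is a minimal sketch of what declaring a single agent on such a platform could look like. Y.core’s API is not public, so every name below (AgentConfig, Channel, deploy, the provider strings) is hypothetical; the sketch only illustrates the kind of declarative, LLM-agnostic configuration described above.

```python
# Hypothetical sketch only: Y.core's actual API is not public, so every name
# below is illustrative, not real.
from dataclasses import dataclass, field
from enum import Enum


class Channel(Enum):
    VOICE = "voice"
    EMAIL = "email"
    CHAT = "chat"
    SMS = "sms"
    API = "api"


@dataclass
class AgentConfig:
    """Declarative description of one agent in a fleet (invented schema)."""
    name: str
    channels: list[Channel]
    integrations: list[str]            # e.g. "salesforce", "zendesk", a custom API
    model_provider: str                # e.g. "openai", "anthropic", "mistral"
    languages: list[str] = field(default_factory=lambda: ["fr", "en"])
    escalate_to_human: bool = True     # hand off when outside predefined policies


def deploy(agent: AgentConfig) -> None:
    # A real orchestrator would register the agent, wire up connectors and start
    # monitoring and versioning; this stub only prints the deployment plan.
    print(f"Deploying '{agent.name}' on {[c.value for c in agent.channels]} "
          f"via {agent.model_provider}, integrations={agent.integrations}")


if __name__ == "__main__":
    deploy(AgentConfig(
        name="email-triage",
        channels=[Channel.EMAIL],
        integrations=["zendesk"],
        model_provider="mistral",
    ))
```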
What is your market positioning?
Marin Huet: “In terms of positioning, we focus on mid-market and enterprise customers with complex operations: multiple channels, legacy tools, regulatory requirements, and often a mix of in-house and outsourced teams. This is precisely where traditional chatbots or simple “AI add-ons” struggle, and where a structured fleet of AI agents brings the most value.
For example, we have strong use cases in IT and tech services, where we’ve automated a large share of technical hotline tickets, and in e-commerce and retail, where multilingual email and ticket triage is a major pain point.”
What are the advantages of this solution over competitors?
Marin Huet: “Our main advantage is that we don’t sell a generic bot: we provide a tailor-made agentic platform that is configured and industrialized for each client. We combine deep AI expertise with decades of BPO and call-center experience to help organizations design the right agents, connect them safely to their systems, and run them in production as a real operational layer – not just as an experiment.”
How does agentic AI fundamentally transform the customer relationship compared with traditional automation approaches?
Marin Huet: “Traditional automation (IVRs, form-based workflows, basic chatbots) pushes work onto the customer. You navigate menus, repeat your story across channels, and hope the system eventually lands you in the right queue. It’s “self-service” in theory, but it often feels like abandonment.
Agentic AI is the opposite: instead of making the customer adapt to the system, an AI agent takes ownership of the request. It understands natural language, asks clarifying questions, pulls data from internal tools, fills in forms, updates tickets, and follows up, like a digital employee whose job is to get the problem solved, not to recite information.
For the customer, the experience becomes: “I explain once, in my own words, and the system handles the rest.” For the company, it’s a shift from managing channels and scripts to managing outcomes: resolution rate, time-to-resolution, and satisfaction. In our deployments, this reduces what I call “customer silence”, those long stretches where no one knows what’s happening with a case, which is one of the biggest drivers of frustration.”
In your view, why should companies move from isolated chatbots to fully autonomous fleets of AI agents?
Marin Huet: “Most companies today have “islands” of automation: a chatbot on the website, an IVR on the phone, maybe an AI assistant for agents. Each solves a slice of the problem; none owns the full journey. That fragmentation shows up in KPIs: high transfer rates, repeated contacts, and inconsistent experiences across channels.
Moving to a fleet of AI agents means you structure automation the way you structure teams. For example, you might have an email triage agent that classifies and routes every incoming message in your CRM. Or a voice agent that answers calls 24/7 and manages after-hours incidents. Or even a billing agent that can negotiate payment plans within defined guardrails.
Each agent is specialized, but they share the same platform, data and governance. That’s what Y.core provides: a way to design, deploy and supervise dozens of agents as if they were a single team, rather than a collection of disconnected bots.
The practical impact is higher autonomous resolution on real-world processes. For instance, we’ve seen 50% of technical hotline tickets resolved end-to-end by AI, significant productivity gains on multilingual email management, and 24/7 instant pickup on all calls in emergency scenarios.”
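The “defined guardrails” mentioned for the billing agent can be pictured as ordinary deterministic policy code wrapped around the agent. The snippet below is an illustrative sketch, not Yampa’s implementation; the policy thresholds and function names are invented for the example.

```python
# Illustrative sketch, not Yampa's code: a billing agent may negotiate payment
# plans only within a predefined policy, and otherwise escalates to a human.
from dataclasses import dataclass


@dataclass
class PaymentPlanPolicy:
    max_installments: int = 6          # invented thresholds for the example
    max_amount_eur: float = 2_000.0


def propose_payment_plan(amount_eur: float, installments: int,
                         policy: PaymentPlanPolicy) -> str:
    """Return the agent's decision for a requested payment plan."""
    if amount_eur > policy.max_amount_eur or installments > policy.max_installments:
        # Outside the predefined policy: the agent does not improvise,
        # it hands the case over to a human team.
        return "escalate_to_human"
    monthly = round(amount_eur / installments, 2)
    return f"accept: {installments} x {monthly} EUR"


if __name__ == "__main__":
    policy = PaymentPlanPolicy()
    print(propose_payment_plan(900.0, 3, policy))     # accept: 3 x 300.0 EUR
    print(propose_payment_plan(5_000.0, 12, policy))  # escalate_to_human
```

The design point is that the language model proposes, but a deterministic policy decides, and anything outside that policy is escalated to a human.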
You claim that Y.core can handle thousands of interactions in real time. What is the technological secret behind this scalability?
Marin Huet: “There is no single “secret algorithm”; it’s an architecture choice. Y.core separates three concerns:
1. Conversation orchestration: stateless services that manage turns in the dialogue and context retrieval, built to scale horizontally.
2. Business logic & workflows: deterministic workflows that decide which tools to call, which policies apply, and how to route and escalate.
3. Integration layer: adapters to CRMs, ticketing systems, telephony and internal APIs, with their own scaling and resiliency rules.
On top of that, Y.core is LLM-agnostic: we can route different use cases to different models (OpenAI, Anthropic, Mistral, or private deployments), and we aggressively pre- and post-process to reduce unnecessary calls. That’s what allows us to support thousands of concurrent conversations without collapsing your infrastructure or your budget.
The benefits are concrete: you can absorb peaks (product launches, billing cycles, seasonal spikes) without hiring waves of temporary agents; you can open 24/7 coverage or new languages with minimal marginal cost; and you maintain consistent quality even when volumes are unpredictable. In some emergency and property management cases, this has meant 100% of calls answered instantly, day and night.”
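The three-layer split described here (stateless conversation orchestration, deterministic business logic, an integration layer) plus LLM-agnostic routing can be sketched in a few lines of code. Everything below is a toy illustration under those assumptions; the model names and the routing table are examples, not Y.core’s actual configuration.

```python
# Toy sketch of the three concerns described above; not Y.core's real code.
# The model names and routing table are invented examples of LLM-agnostic routing.

# LLM-agnostic routing: different use cases can be sent to different providers.
MODEL_ROUTES = {
    "technical_hotline": "mistral-large",   # e.g. keep sensitive traffic on an EU model
    "email_triage": "gpt-4o-mini",          # e.g. cheap model for high-volume triage
}


def route_model(use_case: str) -> str:
    return MODEL_ROUTES.get(use_case, "default-model")


# 2. Business logic & workflows: deterministic rules decide tools, routing, escalation.
def decide_action(use_case: str, message: str) -> str:
    if "urgent" in message.lower():
        return "escalate"
    return "create_ticket" if use_case == "email_triage" else "answer"


# 3. Integration layer: adapters to CRM / ticketing / telephony, scaled separately.
def execute(action: str, model: str) -> str:
    if action == "escalate":
        return "handed off to the on-call human team"
    if action == "create_ticket":
        return f"ticket created in the CRM (triaged by {model})"
    return f"answered directly by {model}"


# 1. Conversation orchestration: a stateless turn handler that scales horizontally.
def handle_turn(use_case: str, message: str) -> str:
    model = route_model(use_case)
    action = decide_action(use_case, message)
    return execute(action, model)


if __name__ == "__main__":
    print(handle_turn("email_triage", "Question about the invoice on my last order"))
    print(handle_turn("technical_hotline", "URGENT: our production server is down"))
```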
You also report impressive results: 65% of calls handled, 73% of emails automated. What real productivity gains can your clients expect, and in what timeframe? So far, many companies implementing AI tools have been left unsatisfied and have not achieved significant productivity gains (see the MIT study). How do you respond to this?
Marin Huet: “These figures are in line with what we see on mature, well-scoped use cases: clear processes, clean data, and strong sponsorship on the client side.
To give a concrete example, in one deployment we publicly highlight, our voice agent now resolves about 65% of after-hours calls autonomously, with on-call costs divided by three and a 40% drop in escalations in just two months. On the email side, we’ve seen 50% of messages automatically classified and routed in CRM for a proptech client, freeing teams from low-value triage.
In terms of timeframe, a typical pattern is:
• 0–2 months: first agent live in production on a tightly scoped use case; 20–40% of volume handled or accelerated by AI.
• 3–6 months: automation rates climb as we expand coverage and iterate on edge cases; productivity gains of 30–50% on the targeted process are common.
• 6–12 months: additional agents go live on adjacent processes and channels; overall, a significant portion of Tier 1 and repetitive Tier 2 workload is absorbed by AI.
Regarding the MIT / Stanford research: the best-known study on generative AI in customer support found around 14–15% higher productivity for human agents using an AI assistant, with the biggest gains for less-experienced staff. That’s real, but incremental, and it matches what we see when AI is used only as a “copilot” that suggests answers.”
Sovereignty is a key issue. How does Yampa represent a credible European alternative to American solutions?
Marin Huet: “First, we’re a European company, founded in France, financed by European and international investors, and built from day one with European regulatory and cultural constraints in mind. On the technical side, Y.core is hosted in Europe and designed so that customer data doesn’t need to leave the EU. The platform is LLM-agnostic: we can work with American models where appropriate, but we also integrate European providers such as Mistral, and can support private or on-premise deployments for clients with strict sovereignty requirements.
Our security, governance, and auditability approach is aligned with GDPR and the upcoming AI Act, not adapted after the fact. More broadly, I think sovereignty is not just about where the servers live, but about control and expertise. If Europe wants strategic autonomy, it needs not only foundational models, but also industrial platforms and teams capable of deploying them safely at scale in critical business processes. That’s exactly the layer Yampa aims to represent.”