European AI Research Council and the Sovereignty Race
Brussels wants a new AI council, but Europe’s real test is whether it can link research, compute, procurement, and control into power.
I’ve read enough Brussels AI announcements to recognize the genre by the second paragraph. Big words. Nice logo. Somebody says “strategic autonomy” with a straight face. Then you ask the only question that matters — who gets the GPUs, who signs the procurement, and whose cloud this thing actually runs on — and the room suddenly gets very philosophical.
That’s why the whole “Commission’s new European AI Research Council sparks sovereignty race” story actually got my attention. Not because Europe needs another council. Madonna, we already have enough councils, task forces, dialogues, and roundtables to open a furniture store. It got my attention because, paired with AI gigafactories, this is one of the first Brussels moves in a while that hints at the real issue: tying research, compute, procurement, and sovereignty together before Europe becomes a permanent tenant in somebody else’s AI empire.
My view is simple. Europe does not have an AI research problem. Europe has a wiring problem. We know how to fund science. We know how to write rules. We’re even getting better at saying “compute” without sounding like we’re ordering extra parmesan. What we still can’t do consistently is turn public money, legal architecture, cloud capacity, and procurement into actual European power.
If this new council doesn’t fix that, it’s just another elegant Brussels layer cake. Beautiful. Clever. Useless when the bill arrives and AWS still owns the restaurant.
Europe has the brains. It still rents the machine
The lazy story is that Europe is behind in AI because it lacks talent. I don’t buy it for a second. I’ve met too many absurdly good researchers in Paris, Zurich, Milan, Amsterdam, and Berlin to take that line seriously. Europe keeps producing brains. What it keeps failing to produce is the machine those brains need to matter at scale.
The numbers are not even that depressing anymore. In the European Commission’s April 9, 2026 update on the AI Continent Action Plan, the EU says it now has 19 AI Factories operational and 13 AI Factory antennas. The same update says the call for expressions of interest on AI Gigafactories drew 76 responses across 60 sites in 16 Member States. That’s not fake momentum. That’s the beginning of actual industrial capacity.
But “beginning” is doing a lot of work here.
Because on the cloud side, Europe is still massively dependent. TechRadar, reporting on the CISPE push around the upcoming Cloud and AI Development Act, says AWS, Azure, and Google Cloud account for around 70% of the EU cloud market. Seventy percent. So yes, Europe can talk all day about sovereign AI while paying the infrastructure toll to American hyperscalers. Very on brand. Like insisting on artisanal pasta and then heating up supermarket sauce.
As a founder, this is the part Europeans hate saying out loud. We love values. I love values. I’m Italian; give me one glass of Barolo and I can absolutely hold forth on the European civilisational model like I’m auditioning for a minor role in The Economist. But power is not vibes. Power is capex, compute allocation, procurement, and institutions that can pick priorities and stick to them longer than one news cycle.
Economist Impact put it more bluntly than most EU people usually dare: on its current path, Europe may have “little say over how this technology is built or governed.” That should scare anyone who uses the word sovereignty without irony. Because this is not about startup ego or whether Europe gets its own ChatGPT moment with a better accent. It’s about whether a continent that gave the world ASML, SAP, Mistral, DeepMind’s talent pipeline, and a ridiculous amount of serious industrial engineering ends up as a highly regulated customer of other people’s systems.
And once those systems scale, the game changes. Economist Impact notes that leading AI firms now have revenues of more than $10bn. At that point they’re not just selling models. They’re locking in enterprise contracts, developers, cloud dependencies, and standards. That’s the race. Not benchmark screenshots on X. Not another founder posting “we are humbled and excited” under a graph nobody understands.
I had coffee in Milan recently with a founder building AI tools for insurance compliance. Great team. Sharp product. Their real problem wasn’t the model. It was legal review, procurement cycles, data residency, and whether enterprise buyers trusted the deployment setup. That’s Europe in one cappuccino: excellent minds, weak path to power.
If this becomes ERC-with-GPUs, Europe is cooked
I love the ERC. Genuinely. The European Research Council has funded serious work. But if the European AI Research Council turns into ERC-with-more-compute, we’re done.
Europe already knows how to fund excellent research. The missing piece is mission coordination. Somebody has to decide which compute gets allocated where, which strategic sectors matter, how evaluation works, where safety testing sits, and how deployment priorities line up across Member States. If nobody owns that chain, everybody gets to make speeches and nobody gets sovereignty.
The Commission’s own framing points in the right direction. In that same AI Continent Action Plan stocktake, Brussels says the strategy rests on five pillars: computing infrastructure, data, skills, AI adoption, and simplifying AI rules. Good. That means at least some people in the building understand this is not just a science-policy conversation anymore. It’s a full-stack problem.
Other pieces are showing up too. The Commission says it delivered its Data Union Strategy in November to unlock data for AI development. It launched the European Legal Gateway Office in February during the AI Impact Summit in New Delhi as part of its talent agenda. The branding is a little chaotic, admittedly. Sounds like a startup accelerator and a visa office had a baby. Still, the point is real: Brussels is trying to build an ecosystem, not just spray grants into the void.
Then there’s the Apply AI Strategy, which the Commission says has already produced dozens of calls worth up to €1 billion across strategic sectors. That’s the kind of detail I want. Not abstract “leadership.” Actual money tied to actual adoption. If the new council sits on top of this stack, it should act like an operator. Less salon, more switchboard.
Because if it turns into a prestige machine for white papers, panels, and conferences in rooms with suspiciously good pastries, it’ll become classic Europe: brilliant, tasteful, and five years late.
That’s where the headline, “Commission’s new European AI Research Council sparks sovereignty race,” stops being media fluff and becomes an institutional design question. Is this body meant to coordinate the stack, or just bless excellence while someone else captures deployment?
I’m pro-European enough to say the unfashionable thing clearly: only federal scale makes sense here. Germany alone can’t outspend the US. France alone can’t out-subsidize China. Italy alone can barely digitize half its municipal paperwork without requiring group therapy and a priest. Together, though? Different story.
Sovereignty washing is the new greenwashing
Europe has developed a truly elite skill for saying “sovereign” when it really means “hosted nearby by someone else.” It’s the digital-policy version of ordering a salad with fries. Technically, yes. Spiritually, absolutely not.
That’s why the recent CISPE-backed intervention mattered. As reported by TechRadar, 24 European cloud CEOs sent a letter to Executive Vice-President Henna Virkkunen warning against “sovereignty washing” ahead of the Cloud and AI Development Act. Finally. Someone said it in plain language.
Their argument was straightforward: “This first comprehensive European cloud policy should strengthen Europe’s digital capacity by prioritising procurement and investment in sovereign European solutions that foster a competitive cloud ecosystem.” Exactly. Print that out. Tape it to every office in Brussels where people still think sovereignty is mostly a branding exercise.
The asks in that letter were not radical. They were basic statecraft: sovereignty by control, resilience where full sovereignty isn’t possible, reserved procurement shares for European providers in sensitive areas, interoperability, and strategic investment in European companies. If that sounds controversial, it’s only because Europe spent years pretending industrial policy was something other countries did while we held tasteful conferences about openness.
And then there’s the part nobody can really dance around. TechRadar also reports that Microsoft has said it cannot fully guarantee EU data sovereignty because it must comply with US legal orders. There it is. Clean and brutal. If your “European” AI stack folds the second a foreign legal order arrives, that is not sovereignty. That is vibes in a trench coat.
I’m not anti-American. I live in America half the time. I use American tools. I admire the speed, the ambition, the almost deranged confidence. But dependency is still dependency, even when the UX is beautiful. Europe needs partnership with the US, not digital tenancy with nicer marketing.
And because AWS, Azure, and Google Cloud still hold around 70% of the EU cloud market, procurement becomes the whole game. You cannot lecture founders about strategic autonomy while letting every public tender drift toward the same three non-European defaults. That’s not neutrality. That’s surrender through paperwork.
Francisco Mingorance, CISPE’s secretary general, put it well in the same reporting: “CADA is a once-in-a-lifetime opportunity to put Europe back on the front foot in the digital economy, and we must not squander it by legitimising ‘sovereignty-washing’.” He’s right. If Europe misses this moment, we’ll spend the next decade pretending local hosting is the same thing as local control, which is like calling a rental car family property because you parked it in Naples.
Deployment beats demos. Every time.
Here’s the least sexy and most important part of this whole AI sovereignty debate: Europe probably does not win by having the flashiest model. Europe wins, if it wins at all, by becoming the first place that makes advanced AI deployable in serious institutions.
That’s why I paid attention to a Commission-hosted Futurium concept note called “Europe’s Next Sovereignty Frontier: Governed High-Risk Deployment of Sovereign AI Models.” Horrendous title. Pure Brussels. But the substance is actually solid. The note argues that the challenge is not just building European models. It’s making them usable inside real institutional environments like healthcare, finance, insurance, public administration, telecom, and legal operations.
Then it lands the line that matters: “the bottleneck is often not model capability, but deployment trust.” Exactly. Finally. Somebody wrote down the thing every founder selling into regulated Europe already knows in their bones.
The proposal, stripped of Commission dialect, is pretty simple. Don’t just open the gates and hope for the best. Build a governed deployment layer where sovereign AI models are used through pre-approved workflow classes, bounded logic templates, technical policy controls, and auditable human oversight. In normal-person language: don’t just ask whether the model is smart. Ask whether a hospital, ministry, insurer, or bank can buy it, run it, audit it, and survive the compliance meeting afterward.
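If you want the normal-person version as code, here is a toy sketch of what a governed deployment gate could look like. Purely illustrative: the workflow classes, policy fields, and audit log below are my own inventions for this article, not anything taken from the Futurium note.

```python
# Toy sketch of a "governed deployment layer": a model call is only allowed
# through a pre-approved workflow class, checked against policy controls,
# and every decision lands in an audit trail a compliance team can read.
# All names here are illustrative, not from the Commission note.
from dataclasses import dataclass, field

@dataclass
class WorkflowClass:
    name: str                      # e.g. "radiology_triage"
    allowed_data: set[str]         # data categories this workflow may touch
    requires_human_signoff: bool   # auditable human oversight

@dataclass
class GovernedDeployment:
    approved: dict[str, WorkflowClass]
    audit_log: list[dict] = field(default_factory=list)

    def invoke(self, workflow: str, data_categories: set[str], model_call) -> str:
        wf = self.approved.get(workflow)
        if wf is None:
            self.audit_log.append({"workflow": workflow, "decision": "rejected",
                                   "reason": "workflow class not pre-approved"})
            return "REJECTED: workflow not pre-approved"
        if not data_categories <= wf.allowed_data:
            self.audit_log.append({"workflow": workflow, "decision": "rejected",
                                   "reason": "data outside approved categories"})
            return "REJECTED: data categories out of scope"
        result = model_call()  # the actual (sovereign) model runs only here
        status = ("pending human sign-off" if wf.requires_human_signoff
                  else "released")
        self.audit_log.append({"workflow": workflow, "decision": "allowed",
                               "oversight": status})
        return f"{status}: {result}"
```

The point of the sketch: a hospital registers “radiology_triage” as a bounded workflow with mandatory sign-off, anything outside that bounces automatically, and the compliance meeting gets a log instead of a shrug.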
That is deeply European in the best sense. Not anti-innovation. Not timid. Just serious about the difference between a demo and a system.
The Commission’s own report on public administration adoption says AI uptake in government should be built around anchoring AI adoption in EU policies, regulations, and principles; adapting the capabilities of public administrations; and applying AI in high-impact domains. Which sounds boring. It is boring. It’s also how real markets get built.
Silicon Valley sells magic. Europe should sell systems a hospital CIO and a finance regulator can both sign off on without needing beta blockers.
I learned this the annoying way. A couple of years ago, I helped a team pitch an AI workflow tool to a public-sector buyer. We spent weeks polishing the product, sharpening the narrative, making the whole thing look elegant. The meeting turned on three questions: audit trails, liability, and procurement compatibility. Not the model. Not the benchmark. Not the TED Talk part. The plumbing. I left that room equal parts humbled and irritated, which is basically the founder lifestyle.
If a European AI Research Council is smart, it won’t just fund labs. It will define deployment missions. Radiology triage in public hospitals. Fraud detection in cross-border payments. Case management in courts. Language tooling for ministries. Industrial compliance in manufacturing. Real domains. Real workflows. Real tenders.
That’s how sovereign AI models become sovereign markets.

You can’t preach trust and then leave liability in a ditch
This is where I’m going to annoy both the libertarian tech bros and the performative anti-tech crowd. Europe does need rules. But rules without a usable liability framework are just moral theatre with PDFs.
A CEPS analysis published on April 15, 2026 said the Commission’s 2025 work programme effectively scrapped the AI Liability Directive, leaving what it called a “gaping hole” in the EU’s AI framework. That’s not dramatic language. It’s accurate. If Europe wants “AI made in the EU” to mean anything in the market, it needs a clear answer on harm, accountability, and legal certainty.
CEPS gives examples that are painfully practical: an AI hiring tool that discriminates, an automated medical diagnosis system that makes a fatal error, a generative AI tool that defames someone. Who pays? Who proves what? Under which rules? The analysis argues that neither the updated Product Liability Directive nor existing national regimes fully solve this.
And this is where founders get whiplash. People assume startups fear regulation most. Not really. What kills momentum is uncertainty. If I know the rules, I can price the rules. If I don’t know which of 27 national liability regimes and 27 procedural systems might apply, and how courts will interpret AI harms across borders, I’m not moving faster. I’m paying lawyers to explain Schrödinger’s compliance.
CEPS is right on the bigger point too: liability itself is not what kills innovation. Fragmentation kills innovation. Especially for SMEs. The biggest firms can absorb legal complexity. Smaller European companies can’t. So when Europe drops harmonisation in the name of simplicity, it often ends up helping the incumbents with the fattest legal departments. Bellissimo. Exactly what we didn’t need.
I’ll admit something mildly embarrassing. When I first started building in regulated tech, I used to roll my eyes at legal architecture. Very founder-brain. Very “we’ll figure it out later.” Then I watched deals stall for months because nobody could answer basic accountability questions, and I realized trust is not a marketing layer. It’s part of the product.
So yes, I want Europe to move faster. But speed without liability clarity is fake speed. A serious European AI Research Council without a serious liability framework is like building a Ferrari and forgetting the brakes. Gorgeous right up until the first corner.
This isn’t about becoming America with better bread
Europe makes a mistake when it frames AI as a race to become a slightly more regulated version of the US. That’s not the job. The job is to avoid permanent dependency in a world that is getting more geopolitical, less forgiving, and a lot less naive.
Economist Impact put the timeline starkly: AI systems capable of matching humans on nearly all economically valuable cognitive tasks may arrive “within just a few years.” Not in some distant sci-fi future. In the planning horizon of this Commission, this Parliament, this funding cycle, this startup generation.
The same analysis says AI is already reshaping laboratories, security agencies, and military capabilities. It points to the war in Ukraine, where AI-powered drone operations and automated intelligence analysis are already giving us an early look at military transformation. If anyone still thinks AI policy is just startup drama plus regulation discourse, they are asleep at the wheel.
The compute side makes it even clearer. Economist Impact’s AI Compute framing says the next phase is a contest over specialist chips, inexpensive energy, scarce data-centre capacity, cooling, and infrastructure. That’s not just a tech story. That’s industrial policy. Security policy. Sovereignty with a giant electricity bill.
To Europe’s credit, the conversation is finally getting less childish. It’s widening from pure regulation talk to supercomputing, cooling, energy reuse, and industrial-scale collaboration. Good. Because that’s where actual power lives. Not in another glossy declaration about trust, but in whether Europe can line up grid capacity, capital, procurement, and institutional coordination fast enough to matter.
This is why the “Commission’s new European AI Research Council sparks sovereignty race” framing matters beyond the headline. If designed well, this council is not just a science body. It’s a sovereignty institution. It should sit at the junction of compute access, strategic missions, evaluation, deployment, and public-interest outcomes. If designed badly, it becomes another place where Europe explains the future while importing it.
I’m aggressively pro-European on this. Not in the sentimental Eurovision way — though obviously I love that too — but in the hard-power way. No single Member State can pull this off alone. Not France. Not Germany. Not Italy, despite our national belief that we are secretly the center of civilization. Federal Europe is the only level where this becomes plausible.
And there is a very European win condition here, if we stop being weird about it. Not “beat OpenAI on every benchmark by Q4.” Relax. The real question is much simpler: in three years, do we have more homegrown models actually deployed in critical sectors on European-controlled infrastructure? Are hospitals, ministries, banks, insurers, and industrial groups buying European AI systems because they are trustworthy, procurable, accountable, and strategically safe?
That’s the scoreboard.
Not conferences. Not PDFs. Not another 24-language microsite with suspicious gradients.
Europe does not need to become Silicon Valley with better bread. First of all, impossible. Second, boring. It needs to do something much harder and much more European: build the first AI ecosystem where research, compute, law, procurement, and public trust actually line up.
The proposed European AI Research Council could help make that real. Or it could become one more elegant mechanism for avoiding hard choices.
So here’s the question I keep coming back to: when Europe says sovereign AI, do we mean we control the stack?
Or do we just mean the data center is nearby and the press release comes in 24 languages?
Sources
- Commission marks one year of the AI Continent Action Plan with two new reports on AI adoption and policymaking
- Europe’s Next Sovereignty Frontier: Governed High-Risk Deployment of Sovereign AI Models
- An AI Liability Regulation would complete the EU’s AI strategy
- Europe wants tech sovereignty but is this realistic?
- Dozens of European cloud CEOs call for real tech sovereignty ahead of Cloud and AI Development Act