On March 5, 2026, the U.S. Department of Defense officially designated Anthropic, maker of the AI model Claude, a supply chain risk. It was the first time in history that this instrument had been used against an American company. Until that day, the designation existed for one purpose: to protect the United States from foreign adversaries. Huawei got it. ZTE got it. Now an American AI company got it, not for espionage or technical failure, but for refusing to remove safeguards against mass domestic surveillance and fully autonomous weapons.

In the same week, something unexpected happened on the other side of the equation. Millions of users canceled their ChatGPT subscriptions and migrated to Claude, siding with the company that had drawn an ethical line. The irony was painful: Anthropic's infrastructure buckled not under political pressure, but under the weight of public support. For a few hours on March 2, a service that millions of professionals use daily was unreachable.

Two disruptions, two different causes, one conclusion. The AI infrastructure that companies worldwide are building their processes on is fragile. And the sources of that fragility are not technical.

What this means for Europe

Europe's dependency on American technology infrastructure is not a new story. Cloud computing, operating systems, productivity software: for decades, the digital backbone of European business has been American. This was always a theoretical risk, but it was manageable because one assumption held: the U.S. government operates within predictable rules. Treaties are honored. Contracts are respected. Institutions function. Strategic planning, both corporate and governmental, rested on that predictability.

That assumption no longer holds. And it erodes at the precise moment when the nature of the dependency itself is changing.

From tools to capabilities

There is a meaningful distinction between depending on a tool and depending on a capability. Understanding this distinction is essential to grasping why AI dependency is qualitatively different from anything Europe has faced before.

Cloud infrastructure is a tool dependency. If AWS goes down or becomes unavailable, it is disruptive, expensive, and painful, but the path back exists. You can migrate data, set up your own servers, switch providers. It takes weeks or months, but the knowledge to do the work remains inside your organization. Your employees can still think, analyze, and decide. They just need different machines to do it on.

AI integration creates something else entirely. When a company eliminates positions because an AI system now handles the work, and then loses access to that AI system, what is lost is not a tool. It is a competence. The people who used to draft legal memos, analyze financial data, write code, or triage customer requests are gone. They have moved to other companies, other careers, other countries. They do not come back overnight.

We already know this from a different context. The pandemic disrupted the entire economy, but one pattern was especially visible in the service industry: entire sectors lost skilled workers who never returned. Hotels, restaurants, logistics companies spent years trying to rebuild teams. Some are still trying. In knowledge work, where AI cuts deepest, rebuilding would be harder still, because the skills are more specialized and the market for them is shrinking as more companies automate.

And unlike cloud servers, you cannot replace a foundation model with your own hardware. Training a model at the level of Claude or GPT-5 requires billions in investment, years of research, and access to computational resources that perhaps two dozen companies on earth possess. There is no Plan B you can spin up in a datacenter.

To be clear: we are not there yet. Most European enterprises are still in the early stages of AI integration. Large corporations, in particular, tend to move slowly. Today, if Claude or ChatGPT disappeared overnight, most companies could revert to manual processes. It would be painful and inefficient, but it would be possible. The capability vacuum described above is forming, not fully formed.

That is precisely what makes this moment critical. The decisions being made now about how deeply to integrate AI, which provider to build on, and whether to maintain human fallback capacity will determine how vulnerable European businesses are in three to five years. By the time the dependency is deep enough to be dangerous, it will be too late to build alternatives. The architecture of the dependency is being laid right now, largely without strategic deliberation.

The outage was a symptom

The Claude disruption on March 2 deserves precise framing. It primarily affected the consumer-facing web interface. The API remained largely stable. Enterprise customers with direct integrations were barely impacted. This was not a collapse of AI infrastructure. It was a stress test that the infrastructure did not fully pass.

What it revealed was narrow but important. When millions of users attempt to switch AI providers simultaneously, the receiving end cannot absorb the load. The theoretical portability between AI providers, the comforting idea that you can always just switch, does not survive contact with reality. Workflows optimized for one model do not transfer cleanly to another. The models reason differently, respond differently, fail differently. A multi-provider strategy sounds prudent in a planning document. In practice, it is far harder than multi-cloud.

But these are solvable engineering problems. More servers, better architecture, smarter load balancing. What cannot be engineered away is the reason those millions of users migrated in the first place.
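To make the engineering side concrete: a multi-provider fallback layer is easy to sketch, and the sketch itself shows where the difficulty lives. The few lines of routing logic are trivial; the per-provider prompt rendering is where workflows fail to transfer, because each model expects the same task framed differently. Everything below is illustrative: the provider clients are stubs standing in for real SDK calls, and the names and prompt templates are invented for the example.

```python
# Minimal sketch of a multi-provider fallback layer. The provider "call"
# functions are stubs simulating real SDK calls; names, templates, and the
# simulated outage are all hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Provider:
    name: str
    # Each model expects a differently framed prompt: a shared abstract task
    # must be re-rendered per provider, not merely re-sent verbatim. This
    # re-rendering, not the routing, is the hard part of "just switching".
    render: Callable[[str], str]
    call: Callable[[str], str]


def primary_call(prompt: str) -> str:
    raise TimeoutError("primary unavailable")  # simulate an outage


def fallback_call(prompt: str) -> str:
    return f"handled: {prompt[:40]}"  # stub response


providers = [
    Provider("primary", lambda task: f"<task>{task}</task>", primary_call),
    Provider("fallback", lambda task: f"### Task\n{task}", fallback_call),
]


def run(task: str) -> str:
    """Try each provider in order, re-rendering the task for each one."""
    for p in providers:
        try:
            return p.call(p.render(task))
        except Exception:
            continue  # real code would log, back off, and alert here
    raise RuntimeError("all providers failed")


print(run("triage this customer request"))
```

The routing loop is the "solvable engineering problem" of the paragraph above. What the sketch cannot solve is that the two `render` functions only paper over surface formatting: the models behind them reason differently, fail differently, and need prompts, evaluations, and guardrails tuned separately.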

The end of predictability

The Anthropic crisis is not an isolated incident. It is a signal of a regime change in how the United States relates to its own technology sector, and by extension, to every country that depends on it.

Consider what happened. The U.S. government applied a designation created to counter foreign adversaries against a domestic company, because that company insisted on contractual protections against mass surveillance. The President said he had "fired" Anthropic "like dogs." The company's CEO reported internally that the administration's hostility stemmed in part from Anthropic's refusal to donate or offer what he called "dictator-style praise." Defense contractors like Lockheed Martin immediately complied with the blacklist. Palantir, which derives 60% of its U.S. revenue from government contracts and partners with Anthropic, found itself caught in the crossfire.

The reaction from the American security establishment itself was telling. Former CIA directors, retired military leaders, and a former AI adviser to the Trump White House called the designation an abuse of authority. Hundreds of employees at OpenAI and Google — Anthropic's competitors — signed an open letter urging the Pentagon to reverse course. Dean Ball, who had advised the Trump administration on AI, called it a "death rattle of the American republic."

OpenAI, which stepped into the breach and signed its own Pentagon contract hours after Anthropic was blacklisted, later tried to add contractual language prohibiting domestic surveillance. Its CEO admitted the deal had been "opportunistic and sloppy." But here is the question that European decision-makers need to sit with: what is a contractual clause worth when the government has just demonstrated that it will designate a company a national security risk for insisting on contractual clauses?

The chilling effect is already at work. Every AI provider watched what happened to Anthropic. The designation does not need to hit everyone to be effective. It only needs to hit one company for every other company to learn the lesson: this is what it costs to say no.

For Europe, the implication is direct. The AI infrastructure that European businesses are increasingly integrating into their operations is subject to the decisions of an unpredictable administration. An administration that renames the Department of Defense to the Department of War. That announces policy via social media before the legal basis exists. That treats a company's ethical position as grounds for economic punishment.

And the horizon extends beyond AI. If a government is willing to weaponize supply chain designations against its own AI companies, it can do the same to any technology provider. Not because it is likely that the U.S. will shut down AWS for European customers tomorrow, but because the predictability that used to make such scenarios unthinkable is no longer there.

The question Europe is not asking

Europe's policy conversation about AI has focused, understandably, on regulation. The AI Act establishes risk categories, mandates transparency, restricts certain applications. This matters. But it addresses how AI may be used. It does not address who controls the infrastructure that makes AI possible.

Europe is regulating the use of a technology it does not own. And the dependency deepens daily, with every AI integration, every process optimization, every position that is not refilled because the model handles it now. The capability vacuum grows, quietly and incrementally, until the day it becomes visible. By then, it cannot be closed quickly.

Part of the problem may be experiential. Most European policymakers do not use AI in any meaningful way beyond having a draft memo polished or a speech summarized. They have not experienced what it means to integrate AI into the core of an operation, to restructure a team around it, to depend on it for decisions that used to require three analysts and a week of work. If you have never felt that dependency, you cannot grasp what it means to lose it. The AI Act was written by people regulating a technology they observe from the outside, not one they rely on from the inside. That distance shows.

The reflexive response is to call for European AI sovereignty: homegrown foundation models, strategic investment, a European champion. This has been a talking point for years, and given the scale of the investment gap (European AI funding is a fraction of what flows into American labs), that gap will not close quickly. But "not quickly" is not the same as "not at all." Europe needs its own capabilities, under its own rules, even if they start smaller. The alternative is permanent dependency on providers whose behavior is dictated by a government that has stopped being predictable.

The harder question is about rules themselves. The international order that European policy relies on is eroding — and Europe is not the one eroding it. China builds competitive models through distillation of American frontier systems, in open violation of licensing terms. The United States punishes its own companies for ethical positions and wages trade wars against allies. Both have decided that the rules are optional when they conflict with national interest. Europe, meanwhile, still plays by the book. The AI Act took years of careful deliberation. GDPR is enforced meticulously. Intellectual property is respected.

This is admirable. It may also be a strategic trap. When the rules-based order breaks down, the last player still following the rules does not win a prize for integrity. They fall behind. That does not mean Europe should abandon its principles. But it does mean that European policymakers need to acknowledge an uncomfortable reality: the framework they are operating in no longer exists in the form they assume. Building AI strategy on the assumption of international cooperation, stable trade relationships, and enforceable agreements is building on sand.

What Europe needs is not one answer but several, pursued simultaneously. Own capabilities, even if initially inferior. Infrastructure resilience requirements for AI adoption, the way we require them for energy and finance. And a long-overdue conversation about what it means to compete in a world where the two largest AI powers have decided that rules are for everyone else.