Rogue State Models and Democratic Resilience
Since the start of 2025, global headlines have focused on DeepSeek - China’s open-weight AI challenger that has quickly climbed the ranks of generative model performance. As its models spread through developer communities and enterprise tools across borders, governments have scrambled to understand what exactly they’re dealing with.
In Europe, the dominant concern has been data protection. Multiple Data Protection Authorities (DPAs), including those in Italy, Germany, and France, have launched investigations or issued warnings about potential violations of the General Data Protection Regulation. The core questions have centered on where data goes, who has access, and whether EU citizens’ data rights are being upheld.
But data protection is not actually the key problem.
What happens when an “open” AI model emerges from a closed political system marked by state capture, where control lies with an unelected few who face no open competition or accountability? The concern isn’t merely technical. It’s strategic, political, and existential. For Europe, DeepSeek is a litmus test for how democracies handle AI built abroad under very different rules, on top of the myriad ethical challenges AI already poses to democratic governance.
What Is DeepSeek? A New Generation of Chinese AI
DeepSeek is a Chinese artificial intelligence start-up launched in late 2023. Its founding vision was to build open large language models (LLMs) that could rival closed Western counterparts like GPT-4 and Claude, while addressing the inefficiencies and high costs associated with developing advanced AI.
In early 2025, DeepSeek released DeepSeek-R1: a reasoning model with capabilities approaching OpenAI’s o1 (or, according to some, outperforming its competitors altogether). DeepSeek claimed it had been able to do this cheaply – with the researchers behind it allegedly spending just $6m (£4.8m) to train it; a fraction of the “over $100m” spend alluded to by OpenAI’s Sam Altman when discussing GPT-4.
Beyond funding figures and performance metrics, DeepSeek’s launch invited a more targeted layer of inquiry in Europe: Where was the training data sourced, and what rights do users have? Is the AI trained to respect those rights? Note, for example, that Claude’s Constitution is said to be inspired by global ethical standards, such as the United Nations Universal Declaration of Human Rights.
Data Protection as a Fundamental Right
In the European Union, data protection is a fundamental right, enshrined in the Charter of Fundamental Rights. This right is enforced through the General Data Protection Regulation (GDPR), which imposes strict requirements on how data is collected, processed, and transferred, including when it is transferred to non-EU countries.
Crucially, the GDPR restricts data transfers to third countries that do not offer “adequate” data protection. China has not been deemed adequate by the European Commission, meaning any data shared with or processed by Chinese-based entities must be subject to heightened scrutiny and additional safeguards.
As DeepSeek’s models began gaining traction in European developer and academic communities, several national Data Protection Authorities raised red flags:
- The Italian DPA questioned DeepSeek about its use of personal data (what personal data is collected, from which sources, for what purposes, on what legal basis and whether it is stored in China). It then ordered DeepSeek to block its chatbot in Italy after the company failed to address concerns over its privacy practices.
- Several German state DPAs announced that they had initiated coordinated investigations into DeepSeek.
- France’s DPA confirmed that it is requesting additional information from DeepSeek and analysing R1’s functionality.
Privacy Policies: Surface Similarities, Deeper Questions
Interestingly, a comparison of DeepSeek’s and OpenAI’s privacy policies reveals that, at a glance, they appear broadly similar.
Both companies:
- collect user inputs and metadata for performance optimization;
- retain rights to use data for training models (unless enterprise customers opt out);
- do not offer full, GDPR-aligned default protections to casual users; and
- offer only partial transparency around how data is stored, shared, or deleted.
Source: CSD based on company data.
So why is DeepSeek drawing disproportionate regulatory attention?
Because the concern is not really about corporate privacy policy. It’s about the legal and political system behind the company - and what that system could do with access to foreign data and users.
The Rogue State Theory of AI: When Authoritarian Systems Code Misaligned Agents
This is where the Rogue State Theory offers a compelling frame. In its simplest form, a “rogue state” can be defined as an aggressive state that seeks to upset the balance of power in the international system.
The theory, reframed in the context of AI by Anthropic’s Head of Policy, Jack Clark, treats AI systems as quasi-sovereign agents arriving on the world stage. Like nation states, they act in the world, influence others, and may become misaligned with widely accepted human rights norms. A rogue AI model, in this context, is one whose objectives, actions, or feedback loops are misaligned with the stability or values of the international system.
Now add a geopolitical twist: What happens when an adversarial, or even rogue, political state is the one building a potentially rogue AI system?
China, with its opaque governance and sweeping surveillance infrastructure, is increasingly seen in democratic Europe as a geopolitical adversary, an economic competitor, and a regulatory black box; something particularly troubling when paired with the black-box nature of artificial intelligence itself. Governance principles can shift rapidly at the discretion of the Chinese Communist Party, making it difficult to assess long-term reliability or accountability.
Compounding this is the fact that AI model developers have significant control over what kinds of content their systems will engage with. In a system without transparency or independent oversight, there is a credible risk that the training process could be shaped to reflect the ideological preferences of the ultimate owner - in China’s case, the Party - embedding bias or selective narratives at a foundational level.
While many democracies, including those in Europe, also have legal frameworks requiring companies to cooperate with law enforcement, China’s context is fundamentally different. Laws like the 2017 National Intelligence Law and the 2016 Cybersecurity Law obligate Chinese companies to share data and support state intelligence operations without meaningful judicial oversight or checks and balances. This makes it difficult for any Chinese AI company, including DeepSeek, to credibly claim independence from state influence.[1]
So what we are witnessing is a kind of recursive risk:
An adversarial state, which has repeatedly gone rogue on human rights, is encoding its worldview into an AI system — a digital entity that itself may evolve into a rogue actor.
This layered threat of authoritarian values built into potentially misaligned systems creates a profound dilemma for democracies. The threat isn’t just data misuse. It’s the slow and silent infusion of illiberal logic into the systems we rely on for education, communication, productivity, and decision-making.
Conclusion: What Openness Conceals, and What Democracies Must Confront
DeepSeek is not just a test case for European data protection - it’s a mirror reflecting the limits of openness in a shifting geopolitical landscape. It shows that principles like transparency and institutional trust cannot be taken for granted: openness without alignment or accountable institutions can become a vector for manipulation.
We are entering an era in which AI systems are no longer neutral tools. They are actors shaped by the values of their creators and by how those creators choose to train them. And in some cases, they could turn into tools for manipulating democracies. The China and DeepSeek experience demonstrates how geopolitics and AI governance collide to become a potent threat to Europe’s human rights-based democracy. Yet it is also a warning of the pitfalls and serious risks of power concentration, opaque training data, and misaligned incentives across all LLMs.
As DeepSeek reveals, the deepest challenge in AI governance isn’t merely navigating regulatory divergence but grappling with the reality that even fundamental human values are not universally shared or understood.
[1] Recently, the US House Select Committee on China alleged in a report that DeepSeek’s ties to Chinese government interests are “significant”. Lawmakers claimed that DeepSeek’s founder, Liang Wenfeng, controls the firm alongside the High-Flyer Quant hedge fund in an “integrated ecosystem” with ties to state-linked hardware distributors and the Chinese research institute Zhejiang Lab.