OpenAI report links Chinese use of ChatGPT to Uyghur surveillance
OpenAI findings show Beijing weaponizing AI tools like ChatGPT to expand its repression of Uyghurs and online critics
SAN FRANCISCO, United States (MNTV) — Suspected Chinese state-linked actors have used OpenAI’s language models, including ChatGPT, to aid efforts to surveil Uyghurs and monitor online dissent, according to a new report released by the US-based artificial intelligence firm.
The company said it identified activity linked to accounts “likely connected to a Chinese government entity” that sought to build systems for tracking the movements of Uyghurs and other individuals deemed “high-risk.”
One user reportedly asked ChatGPT to draft a proposal for such a surveillance tool, while another requested help creating promotional content for a program designed to scan social media platforms for political and religious expression.
According to CNN, both accounts were permanently banned after internal investigations confirmed violations of OpenAI’s usage policies.
Ben Nimmo, OpenAI’s principal investigator, said the case demonstrates how Beijing is expanding its surveillance infrastructure using AI-driven tools. “It’s not new that the Chinese Communist Party monitors its own population,” he told CNN. “But now they’ve heard of AI and they’re thinking, maybe we can use this to get a little bit better.”
The report comes amid escalating US–China competition over artificial intelligence dominance, as both governments pour billions of dollars into developing advanced systems. While the rivalry has largely centered on innovation and national security, the OpenAI findings suggest a more immediate concern: the use of AI for political control, information censorship, and digital repression.
China’s embassy in Washington rejected the report, calling it “baseless.” Spokesperson Liu Pengyu said Beijing was “building an AI governance system with distinct national characteristics,” emphasizing what it calls a “balance between innovation and security.”
Rights organizations have long accused China of deploying advanced technologies to target Uyghurs and other Muslim minorities in Xinjiang through facial recognition, data collection, and predictive policing systems.
The new findings suggest those practices are evolving beyond China’s borders, reflecting what analysts describe as transnational repression — efforts to monitor or intimidate diaspora communities overseas.
OpenAI’s report also identified similar misuse attempts by suspected actors in Russia and North Korea, who allegedly sought to refine phishing schemes and develop malicious code using generative AI.
The revelations have renewed calls for international frameworks to prevent the political weaponization of artificial intelligence, as governments grapple with balancing innovation, human rights, and digital accountability.