OpenAI recently revealed that it has taken decisive action, banning several ChatGPT accounts suspected of being linked to Chinese entities. These accounts were reportedly attempting to misuse the powerful AI chatbot to develop tools for mass surveillance, social media monitoring, and profiling of specific individuals or groups. In a new report, the San Francisco-based artificial intelligence leader confirmed that all such nefarious attempts have been thwarted and the associated accounts permanently blocked. The report also shed light on similar efforts from Russia, where accounts were trying to leverage AI for phishing-related activities.
China-Linked Users Attempting to Create Monitoring Tools with OpenAI’s AI
OpenAI’s “Disrupting Malicious Uses of AI: October 2025” report details how a group of accounts, believed to be connected to the Chinese government, used ChatGPT to gather intelligence and build tools with authoritarian applications. Describing these cases as a “rare snapshot into the broader world of authoritarian abuses of AI,” the company emphasized that the accounts made repeated attempts throughout 2025 — not in a single incident — to build specialized tools for widespread surveillance, individual profiling, and online tracking.
One specific user, for example, requested ChatGPT’s assistance in creating project plans and promotional content for a “social media listening tool.” This tool was ostensibly for a government client and aimed to scan platforms such as X (formerly Twitter), Facebook, Instagram, Reddit, TikTok, and YouTube for extremist or politically sensitive content. OpenAI confirmed there is no evidence that this proposed “social media probe” was ever fully developed or deployed.
Another banned account had sought help in formulating a proposal for a “High-Risk Uyghur-Related Inflow Warning Model.” This alarming system was designed to analyze transport bookings and cross-reference them with police records to identify and track individuals deemed “high-risk.” As in the previous case, OpenAI clarified that its models were not used to build or run such a tool, and that the tool’s existence could not be independently verified.
Beyond these, other accounts were observed using ChatGPT for general profiling and online information gathering. In one instance, a user asked the AI to pinpoint funding sources for an X (Twitter) account critical of the Chinese government. Another query involved seeking details about the organizers of a petition in Mongolia. In both scenarios, ChatGPT only provided information that was already publicly accessible.
Furthermore, OpenAI noted that some accounts simply used ChatGPT as an advanced open-source research tool, much like a sophisticated search engine. These users prompted the chatbot to identify and summarize breaking news relevant to China, and also requested information on highly sensitive subjects, including the 1989 Tiananmen Square massacre and the birthday of the Dalai Lama.