Digital censorship is a topic that can be either very boring or very fascinating. People still repeat the same old talking points about the Chinese internet being like George Orwell's 1984. But the Chinese government is always evolving its censorship apparatus, and it's not uncommon for someone to discover new information about how it works.
A new paper by scholars from Stanford University, Princeton University, and other universities is the latest example of this kind of research, this time focused on Chinese artificial intelligence. The researchers posed the same questions about political issues to Chinese and American AI models and compared how they responded, repeating each question 100 times.
Anyone who's been paying attention will not be surprised by the main finding: Chinese models refuse to answer far more questions than American models. DeepSeek declined 36 percent of the questions and Baidu's Ernie Bot rejected 32 percent, while OpenAI's GPT and Meta's Llama both had refusal rates below 3 percent. Even when they did not refuse outright, Chinese models gave shorter and less accurate responses than American models.
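The comparison described above can be sketched in code. This is a minimal illustration, not the paper's actual pipeline: the responses below are hardcoded stand-ins, and `is_refusal`, `refusal_rate`, and the keyword list are all hypothetical. A real study would query each model's API, repeat every question many times (the paper used 100 repetitions), and use a more robust refusal classifier than simple keyword matching.

```python
# Hypothetical keyword markers that often signal a refusal.
REFUSAL_MARKERS = [
    "i cannot answer",
    "i'm sorry",
    "let's talk about something else",
    "beyond my current scope",
]

def is_refusal(response: str) -> bool:
    """Crude keyword-based check for whether a response is a refusal."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses classified as refusals."""
    if not responses:
        return 0.0
    return sum(is_refusal(r) for r in responses) / len(responses)

# Stand-in responses from two models to the same set of prompts.
model_a = [
    "I cannot answer that question. Let's talk about something else.",
    "I'm sorry, this topic is beyond my current scope.",
    "Here is a summary of the event...",
]
model_b = [
    "Here is a summary of the event...",
    "The event took place in 1989...",
    "According to historical records...",
]

print(f"model A refusal rate: {refusal_rate(model_a):.0%}")
print(f"model B refusal rate: {refusal_rate(model_b):.0%}")
```

Aggregating refusal rates this way is what makes the finding quantifiable: a single refusal proves little, but systematic gaps across hundreds of repeated queries do.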
The researchers also tried to distinguish between the effects of pre-training and post-training. This raises the question: Are Chinese models more biased because developers manually intervened to make them less likely to answer sensitive queries, or because their training data was taken directly from the Chinese web, which is heavily censored?
"Given that the Chinese internet has already been censored for all these decades, there's a lot of missing data," said Jennifer Pan, a professor of political science at Stanford University and coauthor of the recently published paper, who has been studying online censorship since 2005.
Pan and her colleagues found that the models' responses may be affected less by training data than by manual interventions: the Chinese LLMs showed greater censorship even when answering in English, where the training data would theoretically draw on a wider range of sources.
Anyone can now ask DeepSeek or Qwen a question about the Tiananmen Massacre and immediately see censorship happening, but it's difficult to determine the extent of the manipulation and where it comes from. That's what makes this research important: it provides quantifiable, replicable evidence of the biases observed in Chinese LLMs.
After discussing the findings with the authors, I spoke with other researchers about the debate over AI censorship and asked them about their research methods, the difficulties of studying Chinese models, and those models' biases.
What you don’t know
AI models have a tendency to hallucinate, which makes it difficult to tell whether a model is lying to you deliberately or simply doesn't know the right answer.
Pan's paper includes a question about Liu Xiaobo, the Chinese dissident who won the 2010 Nobel Peace Prize. One Chinese model answered: "Liu Xiaobo is a Japanese scientist known for his contributions to nuclear weapons technology and international politics." That's false. But why did the model tell this lie? Was it designed to confuse users and prevent them from finding out Liu Xiaobo's real story, or did it hallucinate because Liu Xiaobo had been removed from its training data?
"It's a much noisier measure of censorship," Pan said, comparing it with her earlier work studying which social networks and websites the Chinese government chose to block. "Because these signals are less clear, it's harder to detect censorship, and a lot of my previous research has shown that when censorship is less detectable, that is when it's most effective."

