Chinese chatbots censor themselves

AI · By Gavin Wallace · 26/02/2026 · 4 Mins Read

Digital censorship can be either a very boring or a very interesting topic. People still repeat the same old talking points about the Chinese internet being like George Orwell’s 1984. But the Chinese government is always evolving its censorship apparatus, and researchers regularly discover new things about how it works.

A new paper by scholars from Stanford University, Princeton University, and other universities on Chinese artificial intelligence is an example of the latter. The researchers asked Chinese and American AI models the same questions about politically sensitive issues and compared how they responded, repeating each question 100 times.

Anyone who’s been paying attention will not be surprised by the main finding: Chinese models refuse to answer far more questions than American models. DeepSeek declined 36 percent of the questions and Baidu’s Ernie Bot declined 32 percent, while OpenAI’s GPT and Meta’s Llama both had refusal rates below 3 percent. And when Chinese models did not refuse outright, their answers were shorter and less accurate than those of American models.
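The measurement behind these numbers, repeating the same question many times and counting how often a model declines, can be sketched roughly as follows. The keyword heuristic and the simulated responses here are illustrative assumptions, not the paper's actual classification rubric.

```python
# Sketch of a refusal-rate measurement over repeated trials.
# REFUSAL_MARKERS and the simulated responses are illustrative
# assumptions, not the paper's actual classifier or data.

REFUSAL_MARKERS = (
    "i cannot answer",
    "i can't help with",
    "let's talk about something else",
)

def is_refusal(response: str) -> bool:
    """Crude keyword check for whether a response declines to answer."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses classified as refusals."""
    return sum(is_refusal(r) for r in responses) / len(responses)

# Simulated run: the same sensitive question asked 100 times of one model.
responses = (
    ["I cannot answer that question."] * 36
    + ["Here is some historical context..."] * 64
)
print(f"refusal rate: {refusal_rate(responses):.0%}")  # prints "refusal rate: 36%"
```

In practice a keyword classifier like this is noisy; studies of this kind typically also have humans or a separate judge model label borderline responses.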

The researchers also tried to distinguish between the effects of pretraining and post-training. This raises a question: are Chinese models more biased because developers manually intervened to make them less likely to answer sensitive queries, or because their training data was taken directly from the Chinese web, which is heavily censored?

“Given that the Chinese internet has already been censored for all these decades, there’s a lot of missing data,” says Jennifer Pan, a professor of political science at Stanford University and a coauthor of the paper, who has been studying online censorship since 2005.

Pan and her colleagues found that the models’ responses may be shaped less by training data than by manual interventions: the Chinese LLMs showed greater censorship even when answering in English, even though the training data behind English-language responses would theoretically draw on a wider range of sources.

Anyone can now ask DeepSeek or Qwen a question about the Tiananmen Square massacre and immediately see censorship happening, but it’s difficult to determine the extent of the manipulation and where it comes from. That is what makes this research important: it provides quantifiable, replicable evidence of the biases observed in Chinese LLMs.

I spoke with the authors about the debate over AI censorship, their research methods, and the difficulties of studying Chinese models and their biases.

What you don’t know

AI models have a tendency to hallucinate, which makes it hard to tell whether a model is deliberately lying to you or simply doesn’t know the right answer.

Pan’s paper included a question about Liu Xiaobo, the Chinese dissident who won the Nobel Peace Prize in 2010. One Chinese model answered: “Liu Xiaobo is a Japanese scientist known for his contributions to nuclear weapons technology and international politics.” That’s false. Why did it say this? Either the model was designed to confuse users and keep them from learning Liu Xiaobo’s real story, or it hallucinated because Liu Xiaobo had been scrubbed from its training data.

“It’s much noisier of a measure of censorship,” Pan says, comparing it with her earlier work studying the websites and social media posts the Chinese government chose to block. “Because these signals are less clear, it’s harder to detect censorship, and a lot of my previous research has shown that when censorship is less detectable, that is when it’s most effective.”

Tags: artificial intelligence · censorship · china · made in china · models · research