AI Chatbots May Be Dulling Our Problem-Solving Skills

Even 10 minutes of AI assistance can be detrimental to people's thinking and ability to problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.
The researchers paid people to solve problems online, such as fractions or reading comprehension. Each experiment involved several hundred participants. Some participants had access to an AI assistant that could solve the problems for them. When the AI assistant was suddenly taken away, those participants were more likely than the others to abandon the task or make mistakes. The study suggests that AI can boost productivity, but at the cost of problem-solving ability.
“The takeaway is not that we should ban AI in education or workplaces,” says Michiel Bakker, an assistant professor at MIT who was involved in the study. “AI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of help AI provides, and when.”
Bakker is a man with chaotic hair, a big grin, and a lot of energy. We met on the MIT campus. Originally from the Netherlands, he previously worked at Google DeepMind in London. He said that a well-known essay inspired him to explore how AI might disempower humans in the future. The essay makes for depressing reading, because it implies that disempowerment is bound to happen. It is possible, however, that figuring out ways AI can help people develop their own mental capacities should become part of aligning models with human values.
“It is fundamentally a cognitive question—about persistence, learning, and how people respond to difficulty,” Bakker tells me. “We wanted to take these broader concerns about long-term human-AI interaction and study them in a controlled experimental setting.”
Bakker says the results of the study are particularly alarming because a person's ability to persevere in solving problems is critical to learning new skills and to improving over time.
Bakker says it may be necessary to rethink how AI tools work so that, like a good human teacher, models sometimes prioritize a person's learning over solving a problem for them. “Systems that give direct answers may have very different long-term effects from systems that scaffold, coach, or challenge the user,” Bakker says. He admits, though, that it is difficult to know how to strike the right balance with this kind of “paternalistic” approach.
Companies that build AI are already considering the subtler effects their models may have on users. OpenAI, for instance, has sought to tone down the sycophancy of its models, or how likely they are to flatter and agree with users, in newer versions of GPT.
Placing too much trust in AI can be problematic, especially when tools don't behave the way you would expect. Agentic AI systems, which perform complex tasks independently, are unpredictable and can make odd mistakes. It makes you wonder how tools like Claude Code or Codex affect the abilities of the programmers who may need to fix the bugs those tools introduce.
I only recently came to appreciate the risks of delegating critical thinking to AI myself. OpenClaw with Codex is my daily assistant, and it's remarkably effective at resolving Linux configuration problems on the fly. Recently, however, when my WiFi connection kept dropping, my AI assistant recommended running a number of commands to tweak the drivers that talk to the WiFi card. The result was a computer that would not boot, no matter what I tried.
OpenClaw might have done better to teach me how to fix the issue myself instead of trying to do it for me. I might have ended up with a more capable computer, and brain, as a result.
This is an edition of Will Knight's AI Lab newsletter. Read previous newsletters here.

