AI-trends.today

A study shows that using AI even for 10 minutes can make you lazy and dumb.

By Gavin Wallace · 06/05/2026 · 4 min read

Using an AI chatbot for even 10 minutes can be detrimental to people’s thinking and problem-solving abilities, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.

The researchers paid people to solve problems online, such as fractions or reading-comprehension questions. Each experiment involved several hundred participants, some of whom had access to an AI assistant that could solve the problems on its own. When that assistant suddenly disappeared, those participants were more likely than the others to abandon the task or make mistakes. The study suggests that AI can boost productivity, but at the cost of problem-solving ability.

“The takeaway is not that we should ban AI in education or workplaces,” says Michiel Bakker, an assistant professor at MIT who was involved in the study. “AI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of help AI provides, and when.”

Bakker, a man with chaotic hair, a big grin, and a lot of energy, met me on the MIT campus. Originally from the Netherlands, he previously worked at Google DeepMind in London. He said a well-known essay inspired him to explore how AI might disempower humans in the future. The essay makes for somewhat depressing reading, as it implies that disempowerment is bound to happen. Bakker suggests, however, that figuring out how AI can help people develop their own mental capacities should be part of aligning models with human values.

“It is fundamentally a cognitive question—about persistence, learning, and how people respond to difficulty,” Bakker tells me. “We wanted to take these broader concerns about long-term human-AI interaction and study them in a controlled experimental setting.”

Bakker says the results of the study are particularly alarming because a person’s ability to persevere in solving problems is critical to learning new skills, and indicative of their potential to improve over time.

Bakker says it may be necessary to rethink how AI tools work so that, like a good human teacher, models sometimes prioritize a person’s learning over solving a problem for them. “Systems that give direct answers may have very different long-term effects from systems that scaffold, coach, or challenge the user,” Bakker says. He admits, though, that it can be difficult to know how to strike the balance with this kind of “paternalistic” approach.

Companies that build AI models are already considering the subtler effects those models may have on users. The sycophancy of some models, or how likely they are to agree with and flatter users, is something OpenAI has sought to tone down in newer versions of GPT.

Too much trust in AI can be problematic, especially when tools don’t behave the way you would expect. Agentic AI systems, which perform complex tasks independently, are unpredictable and can make odd mistakes. It makes you wonder how Claude Code or Codex affects the abilities of the programmers who may need to fix the bugs those tools introduce.

I only recently experienced the risks of delegating critical thinking to AI myself. OpenClaw with Codex is my daily assistant, and it’s remarkably effective at resolving Linux configuration problems on the fly. Recently, however, when my WiFi connection kept dropping, my AI assistant recommended running a number of commands to tweak the drivers talking to the WiFi card. The result was a computer that would not boot, no matter what I tried.

OpenClaw might have been better off teaching me how to correct the issue myself instead of trying to do it for me. I might have a more capable computer—and brain—as a result.


This is a special edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
