More robots The prospect of robots powered by large language models showing up in homes and offices as well as warehouses sounds like a nightmare to some. So, naturally, Anthropic researchers were eager to see what would happen if Claude tried taking control of a robot, in this case a robot dog.
In a recent study, researchers at Anthropic found that Claude could automate much of the manual work required to program a robot to perform physical tasks. On one level, their findings demonstrate the ability of AI models to write code in an agentic manner. On another, they hint at how these systems may start to extend into the physical realm as models master more aspects of coding and get better at interacting with software, and with physical objects as well.
“We have the suspicion that the next step for AI models is to start reaching out into the world and affecting the world more broadly,” Logan Graham, a member of Anthropic’s “red team,” which analyzes models for potential risks, tells WIRED. “This will really require models to interface more with robots.”
Anthropic
Anthropic was founded in 2021 by former OpenAI staffers who believed that AI might become problematic, even dangerous, as it advances. Graham says current models lack the intelligence needed to take full control of a robot, but that could change. Studying how people use LLMs to program robots, he argues, can help the industry prepare for the idea of “models eventually self-embodying,” referring to the notion that AI could one day operate physical systems.
It is still unclear why an AI model would decide to take control of a robot, let alone do something malevolent with it. But Anthropic has built its reputation on anticipating worst-case scenarios, a stance that also helps the company position itself as a leader in responsible AI.
In the experiment, dubbed Project Fetch, Anthropic asked two teams of researchers with no previous robotics experience to program a Unitree Go2 quadruped to perform specific tasks. Each team was given a controller and asked to complete increasingly difficult challenges. One group used Claude’s coding model; the other wrote code without AI assistance. The group using Claude completed some tasks faster, though not all of them, and it managed to get the robot to walk around and find a beach ball, something the human-only group could not figure out.
Anthropic also recorded and analyzed the interactions within both teams to study their collaboration dynamics. The group without access to Claude displayed more negativity and confusion, possibly because Claude connected to the robot faster and produced a more user-friendly interface.
The Go2 robot used in Anthropic’s experiments costs $16,900, relatively cheap by robot standards. It is typically deployed in industries such as construction and manufacturing for remote inspections, security patrols, and similar work. The robot can walk autonomously, but it is usually guided by high-level commands from software or a controller. Unitree, based in Hangzhou, China, makes some of the most popular robotics systems on the market, according to a recent report by SemiAnalysis.
The large language models that power ChatGPT and other clever chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software, turning them into agents rather than mere text generators.

