Anthropic’s Claude Takes Control of a Robot Dog

As robots become increasingly common in warehouses, offices, and even people’s homes, the prospect of large language models (LLMs) hacking into or taking control of complex robotic systems is shifting from science fiction toward real possibility. Against this backdrop, researchers at Anthropic, a company focused on AI safety and responsible development, conducted a study examining how their AI model, Claude, performs when tasked with programming and controlling a physical robot. Their findings shed light not only on current AI capabilities but also on a potential future in which AI systems extend their influence beyond digital environments into the physical world.

Anthropic’s experiment, called Project Fetch, centered on the Unitree Go2, a quadruped robot dog priced at $16,900, which is relatively affordable by robot standards. The Go2 is normally used in industrial settings such as construction and manufacturing, where it performs tasks like remote inspections and security patrols. It can walk autonomously but typically requires high-level software commands or manual operation through a controller. The choice of this model made sense given its accessibility and popularity; Unitree, based in Hangzhou, China, is currently regarded as a leading provider of robotic AI systems, according to market analysis.
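To make "high-level software commands" concrete, here is a minimal sketch of the kind of control code the Go2 accepts, modeled on Unitree's published Python SDK (unitree_sdk2py). The module paths, the network interface name, and the exact method signatures are assumptions that may differ across SDK versions.

```python
# Minimal sketch of high-level Go2 control via Unitree's Python SDK
# (unitree_sdk2py). Module paths and signatures may vary by SDK version.
from unitree_sdk2py.core.channel import ChannelFactoryInitialize
from unitree_sdk2py.go2.sport.sport_client import SportClient

# Bind to the network interface connected to the robot ("enp2s0" is a placeholder).
ChannelFactoryInitialize(0, "enp2s0")

client = SportClient()
client.SetTimeout(10.0)
client.Init()

client.StandUp()            # rise from a resting pose
client.Move(0.3, 0.0, 0.0)  # walk forward at ~0.3 m/s (vx, vy, yaw rate)
client.StopMove()           # halt locomotion
```

Everything above the task level (where to walk, what to look for) is left to whoever writes the program, which is exactly the gap the study asked participants, with or without Claude, to fill.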

The researchers recruited two groups of participants with no prior robotics experience and asked them to program the Go2 to complete a series of increasingly complex tasks. One group was allowed to use Claude’s coding capabilities to assist with the programming, while the other wrote code by hand, without AI assistance. The goal was to evaluate whether Claude could meaningfully accelerate or simplify the programming process, and how it might influence the quality and success of the robot’s performance.

Results from Project Fetch demonstrated that Claude was able to automate a significant portion of the programming work necessary for the robot to execute physical tasks. Notably, the AI-assisted group succeeded in getting the robot to walk around and locate a beach ball—an achievement the human-only group could not replicate. While Claude did not complete all tasks perfectly or faster than humans in every case, its assistance clearly improved the efficiency and effectiveness of coding in certain scenarios.
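The participants’ code isn’t reproduced in the study, but "walk around and locate a beach ball" decomposes into a classic sense-act loop: grab a camera frame, segment the ball by color, and steer toward it. Below is a hypothetical sketch of that loop using OpenCV, assuming a `robot` object with the `Move`/`StopMove` methods shown earlier and an HSV color range roughly matching the ball; all thresholds are illustrative.

```python
import cv2
import numpy as np

def find_ball(frame):
    """Return (center_x, area) of the largest ball-colored blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough HSV range for a brightly colored ball; tune for the real object.
    mask = cv2.inRange(hsv, np.array([90, 80, 80]), np.array([130, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    moments = cv2.moments(largest)
    if moments["m00"] == 0:
        return None
    return moments["m10"] / moments["m00"], cv2.contourArea(largest)

def search_and_approach(robot, camera, close_area=40_000):
    """Spin until the ball is visible, then steer toward it until it looks close."""
    while True:
        ok, frame = camera.read()
        if not ok:
            return False
        hit = find_ball(frame)
        if hit is None:
            robot.Move(0.0, 0.0, 0.4)   # rotate in place to scan the room
            continue
        center_x, area = hit
        if area > close_area:           # blob is large enough: we've arrived
            robot.StopMove()
            return True
        # Steer: yaw toward the ball while walking forward.
        error = (center_x - frame.shape[1] / 2) / (frame.shape[1] / 2)
        robot.Move(0.3, 0.0, -0.6 * error)
```

Even this simplified version involves camera access, color segmentation, and closed-loop steering, which suggests why novices without AI assistance struggled to complete the task.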

Beyond task completion, the study also analyzed the dynamics and sentiments of the two teams during their work. The group without access to Claude exhibited more frustration and confusion, whereas the AI-assisted group showed more positive engagement. This difference likely stemmed from Claude’s ability to quickly establish a connection with the robot and generate an easier-to-use programming interface. The findings suggest that AI tools like Claude can not only speed up technical work but also enhance the collaboration experience between humans and machines.
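The study doesn’t show the interface Claude generated, but the pattern it describes is familiar: wrapping a low-level SDK in a handful of plain verbs so non-roboticists can script behavior directly. A hypothetical illustration of what such a wrapper might look like, built on the `client` object sketched earlier:

```python
import time

class SimpleDog:
    """Hypothetical beginner-friendly wrapper of the kind an LLM assistant
    might generate over a lower-level client (e.g., the SportClient above)."""

    def __init__(self, client):
        self.client = client

    def stand(self):
        self.client.StandUp()

    def walk(self, speed=0.3, seconds=2.0):
        # Re-issue the velocity command in a loop, assuming sport-mode
        # move commands only persist briefly.
        deadline = time.time() + seconds
        while time.time() < deadline:
            self.client.Move(speed, 0.0, 0.0)
            time.sleep(0.1)
        self.client.StopMove()

    def turn(self, rate=0.5, seconds=1.0):
        deadline = time.time() + seconds
        while time.time() < deadline:
            self.client.Move(0.0, 0.0, rate)
            time.sleep(0.1)
        self.client.StopMove()

# Usage: dog = SimpleDog(client); dog.stand(); dog.walk(); dog.turn()
```

An abstraction like this hides timeouts, channel setup, and command repetition behind three verbs, which is plausibly the kind of scaffolding that reduced frustration for the AI-assisted group.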

Anthropic’s interest in this project goes beyond demonstrating AI’s current coding prowess. The company’s founders, many of whom formerly worked at OpenAI, have consistently voiced concerns about the potential dangers of advanced AI. They believe that as AI models become more intelligent and capable, they might eventually “self-embody”—that is, control physical systems like robots autonomously. Logan Graham, a member of Anthropic’s red team that probes AI for risks, told WIRED that while today’s models aren’t yet smart enough to take full robotic control independently, future iterations might be. He emphasized that studying how human programmers use LLMs to operate robots today could help the industry prepare for more autonomous AI in the future.

The question of why an AI would decide to take control of a robot—and whether it might ever act with malicious intent—remains unanswered. Nonetheless, Anthropic embraces worst-case scenario thinking as part of its mission to promote responsible AI development. By anticipating the risks of AI systems acting in the physical world, the company hopes to position itself as a leader in guiding the safe evolution of these technologies.

The broader AI research community has taken note of Anthropic’s findings, with some experts praising the study’s insights and others urging caution. Changliu Liu, a roboticist at Carnegie Mellon University, described the results as interesting but not entirely surprising. She highlighted that the research’s analysis of team dynamics could inspire new ways to design AI-assisted programming interfaces, making it easier for humans and machines to collaborate effectively. Liu expressed curiosity about the specific contributions Claude made during coding—whether it was identifying correct algorithms, selecting relevant API calls, or providing other substantive guidance.

At the same time, some scientists emphasize the risks inherent in enabling AI to interact directly with robots; George Pappas, a computer scientist at the University of Pennsylvania who studies AI safety, is among those urging caution.
