Anthropic is pushing the boundaries of what AI assistants can do with the introduction of a new “computer use” feature for Claude, one that moves beyond simple chat and into full task execution.
In a research preview now available to select subscribers, Claude can interact directly with your computer, performing actions on your behalf in a way that feels closer to a human assistant than ever before.
At its core, this update represents a major step toward what is often called "agentic AI": systems that do not just respond to prompts but take initiative and complete multi-step tasks independently. With this feature enabled, Claude can browse the web, open files, interact with applications, and even move your cursor or type as if it were physically using your keyboard and mouse. In practical terms, this means you can ask Claude to do things like locate a document on your computer and send it to your phone, check your calendar and summarize your day, or run workflows across multiple apps without needing to manually guide every step.
What makes this particularly interesting is how Claude decides what actions to take. According to Anthropic, the system first looks for available integrations, called connectors, with apps like Google Calendar or Slack. If a connector exists, Claude uses it to complete tasks more efficiently and securely. If no such integration is available, it does not give up. Instead, it falls back to a more human-like approach by navigating your system directly, opening programs, clicking through menus, and typing commands just as you would.
This hybrid model, combining structured integrations with direct interface control, gives Claude a high level of flexibility. It is not limited to pre-defined APIs or narrow use cases. Whether it is interacting with a web browser, accessing developer tools, or opening local files, Claude can adapt its approach depending on what is needed to get the job done.
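The connector-first logic described above can be sketched in a few lines. This is a minimal illustration, not Anthropic's actual implementation; every name here (`Connector`, `gui_fallback`, `perform_task`) is hypothetical.

```python
# Hypothetical sketch of the hybrid model: prefer a structured
# connector when one exists, otherwise fall back to driving the
# interface directly. Not Anthropic's real API.

class Connector:
    """Structured integration for a specific app (e.g. a calendar API)."""
    def __init__(self, app: str):
        self.app = app

    def execute(self, task: str) -> str:
        return f"[{self.app} connector] completed: {task}"

def gui_fallback(app: str, task: str) -> str:
    """Stand-in for direct control: open the app, click menus, type."""
    return f"[GUI] opened {app}, performed: {task}"

def perform_task(app: str, task: str, connectors: dict) -> str:
    connector = connectors.get(app)
    if connector is not None:
        return connector.execute(task)   # preferred: scoped and auditable
    return gui_fallback(app, task)       # fallback: human-like navigation

connectors = {"google_calendar": Connector("google_calendar")}
perform_task("google_calendar", "summarize my day", connectors)  # connector path
perform_task("local_files", "find report.pdf", connectors)       # GUI path
```

The design choice worth noting is that the structured path is both faster and easier to constrain, which is presumably why it is tried first; the GUI path exists so that coverage is not limited to apps with integrations.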
Of course, handing over that level of control to an AI raises immediate concerns. Giving any system access to your computer, especially one capable of acting autonomously, introduces real security risks. Experts have already warned that agentic AI systems can act quickly and at scale, sometimes in unexpected ways. If something goes wrong, the consequences could escalate rapidly. There is also the risk of malicious interference, where attackers could potentially hijack or manipulate the AI to gain access to sensitive data or systems.
Anthropic appears to be aware of these concerns and has built in several safeguards. One key feature is that Claude must request permission before carrying out actions, ensuring the user remains in control. You can also interrupt or stop a task at any time, providing a safety net if something does not look right. Additionally, the system is designed to detect and defend against prompt injection attacks, where hidden instructions attempt to trick the AI into performing unintended actions.
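Two of the safeguards above, explicit approval before each action and the ability to interrupt at any time, amount to a gated execution loop. The sketch below is an assumed shape for illustration only; the function and parameter names are invented, not drawn from Anthropic's implementation.

```python
# Hypothetical sketch of a permission-gated, interruptible task loop.
# `approve` models the user granting or denying each action;
# `stop_requested` models the user halting the whole task mid-run.

def run_with_safeguards(actions, approve, stop_requested):
    """Execute actions only with approval, checking for a stop signal."""
    completed = []
    for action in actions:
        if stop_requested():
            break                    # user interrupted the task
        if not approve(action):
            continue                 # user denied this specific action
        completed.append(action)     # stand-in for actually executing it
    return completed

# Example: the user approves everything except the risky step.
plan = ["open file", "send email", "delete folder"]
run_with_safeguards(plan,
                    approve=lambda a: a != "delete folder",
                    stop_requested=lambda: False)
```

Checking for interruption before every action, rather than only between tasks, is what keeps the user's "stop" meaningful even in the middle of a long workflow.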
Even with these protections, Anthropic is taking a cautious approach. The company explicitly warns that the feature is still experimental and may contain errors. As a result, certain high-risk applications, particularly those involving sensitive personal or financial data, are disabled by default. Users are encouraged to be mindful of what tasks they delegate and to treat the feature as a work in progress rather than a fully polished tool.
The rollout itself reflects this experimental mindset. The computer use feature is currently available as a research preview for Claude Pro and Claude Max subscribers on both macOS and Windows. By limiting access, Anthropic can gather real-world feedback and better understand how users interact with the system, where it performs well, and where improvements are needed.
Another key piece of this ecosystem is Dispatch, a companion feature that allows users to assign tasks to Claude remotely, such as from a smartphone. When combined with computer use, Dispatch unlocks a new level of convenience. You could, for example, ask Claude to prepare a morning briefing while you are still in bed, check your emails and summarize important messages, or run tests on a project while you are away from your desk. In effect, Claude becomes a kind of digital coworker operating in the background.
That said, the experience is not flawless. Because both computer use and Dispatch are still in early stages, complex workflows may not always work perfectly on the first attempt. Tasks that require nuanced judgment, intricate navigation, or coordination across multiple systems can still challenge the AI. That is part of the purpose of releasing a research preview: to identify these limitations early and refine the system over time.
What is clear is that this feature signals a broader shift in how AI tools are evolving. Instead of being passive systems that wait for instructions, they are becoming active participants in digital workflows. The idea of an AI that can use a computer like a human (clicking, typing, and navigating) blurs the line between software and user in a way that could redefine productivity.
If Anthropic can successfully address the security challenges and improve reliability, Claude’s computer use feature could become one of the most transformative developments in everyday computing. For now, it offers an intriguing glimpse into a future where your AI assistant does not just help you think, but actually helps you get things done.