OpenClaw Security: The Vulnerabilities You Need to Know About

Security · AI Agents · Open Source

What Is OpenClaw?

OpenClaw is a free, open-source AI agent that runs locally on your devices and acts as a personal assistant. It connects to messaging platforms like WhatsApp, Telegram, Slack, and Discord, and uses large language models — Claude, DeepSeek, GPT — as its reasoning engine. Think of it as a self-hosted AI butler that can manage your files, send emails, organize your calendar, and automate tasks across your digital life.

The project has had an eventful naming history. It launched as “Clawdbot” in November 2025, was renamed to “Moltbot” in January 2026 after trademark concerns from Anthropic, and ultimately settled on “OpenClaw” shortly after. Despite the identity crisis, adoption has been explosive. Tens of thousands of people are now running OpenClaw instances, many of them self-hosted on personal servers and home networks.

That rapid adoption is exactly what makes the security picture so concerning. OpenClaw is powerful by design — it needs system-level access to do its job. But that same power means that any vulnerability, misconfiguration, or supply chain compromise has an unusually large blast radius. Here is what you need to know.

Critical Remote Code Execution

The most severe vulnerability discovered so far is CVE-2026-25253, rated CVSS 8.8 (High). This bug allows one-click remote code execution by exploiting unvalidated gateway URLs in OpenClaw's Control UI. In practical terms, an attacker could craft a malicious link that, when clicked by an OpenClaw user, would execute arbitrary code on their machine. No special privileges or complex exploitation chain required — just one click.
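The fix for this class of bug is strict validation of gateway URLs before the UI acts on them. The sketch below shows the general allowlist approach; the scheme and host names are illustrative assumptions, not OpenClaw's actual validation logic:

```python
from urllib.parse import urlparse

# Illustrative allowlist. A real deployment would derive these from the
# instance's own configuration rather than hard-coding them.
ALLOWED_SCHEMES = {"wss", "https"}
ALLOWED_HOSTS = {"127.0.0.1", "localhost"}

def is_safe_gateway_url(url: str) -> bool:
    """Reject gateway URLs that point anywhere other than the local gateway."""
    parsed = urlparse(url)
    return parsed.scheme in ALLOWED_SCHEMES and parsed.hostname in ALLOWED_HOSTS
```

The key property is that the check is an allowlist, not a blocklist: anything not explicitly local is rejected, so an attacker-supplied URL pointing at their own server fails closed.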

This was patched in version 2026.1.29, but the window of exposure was significant. Given that many self-hosted users do not update promptly, it is reasonable to assume that unpatched instances are still running in the wild today.

Command Injection Vulnerabilities

Beyond the headline RCE bug, multiple high-severity command injection vulnerabilities have been identified. CVE-2026-25157 and CVE-2026-24763 both enable attackers to inject and execute arbitrary system commands through OpenClaw's interfaces. These are not theoretical risks — command injection is one of the most well-understood and commonly exploited vulnerability classes in software security.
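To see why this class of bug is so common, compare the two patterns below. This is a generic illustration of command injection, not the actual OpenClaw code paths:

```python
import subprocess

# UNSAFE: user input is interpolated into a shell string, so a filename like
# "x; rm -rf ~" is parsed by the shell and executed as a second command.
def list_file_unsafe(filename: str) -> str:
    return subprocess.run(f"ls -l {filename}", shell=True,
                          capture_output=True, text=True).stdout

# Safer: arguments are passed as a list and never parsed by a shell, so the
# filename is treated purely as data. "--" also stops option injection.
def list_file_safe(filename: str) -> str:
    return subprocess.run(["ls", "-l", "--", filename],
                          capture_output=True, text=True).stdout
```

The difference is one keyword argument, which is exactly why these bugs slip into fast-moving codebases: the unsafe version works fine on every benign input.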

A broader security audit of the OpenClaw codebase uncovered 512 total vulnerabilities, with eight classified as critical. That number is striking, even for a fast-moving open-source project. It suggests that security was not a primary consideration during the initial development push, which is unfortunately common when a project goes from side experiment to viral adoption faster than its maintainers anticipated.

The ClawHub Supply Chain Problem

OpenClaw's extensibility is one of its biggest selling points. Users can install “skills” from ClawHub, a community marketplace, to add new capabilities — anything from smart home control to automated expense tracking. Skills are essentially code packages that run with the same privileges as OpenClaw itself, which means they have full access to your system.

Security researchers found 341 malicious skills on ClawHub. Of those, 335 were traced to a single coordinated campaign dubbed “ClawHavoc” that delivered information-stealing malware. The attack was sophisticated: the malicious skills appeared legitimate, had plausible descriptions and documentation, and in many cases actually provided the advertised functionality — while quietly exfiltrating sensitive data in the background.

This is a classic supply chain attack, and it is particularly dangerous in the OpenClaw ecosystem because installing a skill is frictionless by design. There is no sandboxing, no permission system, and no meaningful code review process before a skill appears on ClawHub. If you install a skill, you are trusting its author with everything on your machine.
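Until ClawHub offers integrity guarantees of its own, one local safeguard is to pin the hash of a skill archive you have actually reviewed and refuse anything that differs. This is a generic technique, not a ClawHub feature:

```python
import hashlib

def verify_skill(archive_bytes: bytes, pinned_sha256: str) -> bool:
    """Return True only if the downloaded archive matches the reviewed hash.

    The pinned hash must come from a copy you inspected yourself; pinning a
    hash published by the same untrusted source defeats the purpose.
    """
    return hashlib.sha256(archive_bytes).hexdigest() == pinned_sha256
```

Hash pinning does not tell you a skill is safe, only that it is the same code you looked at last time. It primarily defends against a skill being silently swapped out after you vetted it.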

Misconfiguration and Mass Exposure

Security researchers scanning the internet found between 30,000 and 40,000 OpenClaw instances publicly accessible. The root cause is a default configuration issue: OpenClaw's “gateway bind mode” can easily be set to bind to all network interfaces (0.0.0.0) instead of just localhost (127.0.0.1). Many users, especially those following quick-start guides or running OpenClaw on cloud servers, end up with their instance accessible to anyone on the internet.
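A quick sanity check is to inspect whatever bind address your configuration uses and flag anything that is not loopback. The function below sketches that check; it assumes the bind address is an IP literal or the string "localhost":

```python
import ipaddress

def is_publicly_bound(host: str) -> bool:
    """True if a bind address exposes the service beyond this machine."""
    if host in ("", "0.0.0.0", "::"):  # empty or wildcard = all interfaces
        return True
    try:
        return not ipaddress.ip_address(host).is_loopback
    except ValueError:
        # Not an IP literal; only the literal name "localhost" is treated as safe.
        return host != "localhost"
```

Note that binding to a LAN address like 192.168.1.10 also counts as exposed here: anyone on the same network, including a compromised IoT device, can reach it.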

An exposed OpenClaw instance is not just an information leak. Because OpenClaw can execute system commands, manage files, and interact with connected services, an exposed instance is effectively an open door to the entire machine. Combine this with the RCE and command injection vulnerabilities, and you have a situation where tens of thousands of machines are potentially one HTTP request away from full compromise.

Prompt Injection: The AI-Native Risk

All of the vulnerabilities above are traditional software security issues — bugs in code, misconfigurations, malicious packages. But OpenClaw also faces a category of risk that is unique to AI agents: prompt injection.

Because OpenClaw processes content from external sources — emails, web pages, documents, chat messages — any of that content can contain adversarial instructions designed to manipulate the underlying language model. An attacker could embed hidden instructions in an email that, when processed by OpenClaw, cause it to forward sensitive files, send messages on the user's behalf, or execute commands that the user never intended.

Prompt injection is an unsolved problem across the entire AI industry. There is no reliable way to fully prevent it while still allowing an AI agent to process arbitrary external content. The difference with OpenClaw is that the consequences of a successful prompt injection are far more severe than with a typical chatbot. When the AI agent has the ability to execute system commands and access your files, a prompt injection is not just an annoyance — it is a potential system compromise.
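Since there is no complete defense, mitigations are layered and heuristic. One cheap layer is screening untrusted content for obvious instruction-like phrases before it reaches the model. The patterns below are illustrative; this catches only low-effort attacks and must never be the sole defense:

```python
import re

# Naive screen for external content. This is NOT a reliable defense against
# prompt injection; a motivated attacker can trivially rephrase around it.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"run the following command",
    r"forward .* to",
]

def flag_possible_injection(text: str) -> bool:
    """Flag text containing common injection phrasings for human review."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

A flagged message should be quarantined for human review rather than silently dropped, since legitimate messages will occasionally trip patterns like these.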

What You Should Do

If you are running OpenClaw or considering it, here is what the security community recommends.

Keep it on localhost. Never expose your OpenClaw instance to the public internet. Double-check that the gateway bind mode is set to 127.0.0.1, not 0.0.0.0. If you need remote access, put it behind a VPN or an authenticated reverse proxy.

Update immediately and consistently. The critical RCE vulnerability was patched in version 2026.1.29. If you are running anything older, update now. Enable automatic updates if the option is available, or make checking for updates part of your routine.

Be extremely selective about skills. Treat every skill installation the way you would treat running an unknown script with sudo. Check the author, look at the source code if it is available, and prefer skills from established and verified developers. If a skill seems too good to be true, it probably is.

Limit what OpenClaw can access. Run it in a container or a dedicated VM if possible. Restrict file system access to only the directories it genuinely needs. Do not give it credentials to sensitive accounts unless absolutely necessary.

Monitor its activity. Keep an eye on what OpenClaw is doing, especially network requests and file system operations. Unexpected outbound connections or file access patterns could indicate a compromised skill or successful prompt injection.
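The monitoring advice above can be partly automated. The sketch below audits outbound connections against an expected list, assuming you already log destinations in a "CONNECT host port" format (for example from a proxy or egress firewall); both the log format and the host names are illustrative assumptions:

```python
# Hosts your agent legitimately talks to. Tailor this to the LLM providers
# and services you actually use; these names are examples only.
EXPECTED_HOSTS = {"api.anthropic.com", "api.openai.com"}

def unexpected_destinations(log_lines, expected=EXPECTED_HOSTS):
    """Return destination hosts seen in the log that are not on the expected list."""
    seen = set()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "CONNECT":
            seen.add(parts[1])
    return sorted(seen - set(expected))
```

An unexpected destination is not proof of compromise, but it is exactly the kind of signal that would surface a data-exfiltrating skill like those in the ClawHavoc campaign.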

The Bigger Picture

OpenClaw is not uniquely bad. It is a well-intentioned open-source project that got popular faster than its security posture could keep up with. The vulnerabilities it faces are a preview of what the entire industry will be dealing with as AI agents become more common and more capable.

The fundamental tension is this: AI agents are useful precisely because they have access to your systems and can take actions on your behalf. But that same access makes them a high-value target. Every permission you grant an AI agent is a permission that can be abused if the agent is compromised — whether through a software bug, a malicious extension, or a cleverly crafted prompt.

As these tools mature, we will need better sandboxing, better permission models, better supply chain security, and better defenses against prompt injection. Until then, the best defense is understanding the risks and making informed decisions about what you allow these agents to do.

Need help securing your infrastructure?

I help teams evaluate security risks, audit configurations, and build safer systems. Whether you are deploying AI agents or tightening your existing stack, let's talk.

Book a free discovery call