OpenClaw: The Helpful AI That Could Quietly Become Your Biggest Insider Threat

OpenClaw offers powerful automation, but without proper security oversight, it can pose real risk. Read the latest from Jamf Threat Labs.

February 9 2026 by

Jamf Threat Labs

By: Elad Shapira, Allen Golbig, Nir Avraham, Yuan Shen, Matteo Bolognini

What is OpenClaw (The “Thinking” Runtime)

OpenClaw represents the shift from software as a passive tool to software as an active teammate. An autonomous system that doesn’t wait for clicks, but reasons, decides and acts on its own. It is an open-source framework for building autonomous AI agents, best understood as an automation engine with a “brain” attached, capable of chaining actions across tools, maintaining long-term memory, and evolving its capabilities over time.

Give it a high-level goal, for example, “research the latest earnings reports, summarize them, and draft an email to the board, ”and it doesn’t stop at a single search. It decides what data to pull, which APIs to call, which files to read or write, and how to format the output.

It runs natively on macOS, Windows, and Linux, often as a background service. It integrates directly with messaging platforms, corporate email, calendars, cloud consoles, and local files. It remembers context across sessions and can pick up where it left off hours or days later.

How can OpenClaw be dangerous?

While it offers powerful automation and productivity boosting capabilities, several security concerns make it particularly dangerous without careful attention to security:

Key risks:

  • Unrestricted system access: OpenClaw agents can execute shell commands, access files, and interact with applications without built-in security boundaries.

  • Lateral movement potential: Once deployed, agents can potentially access network resources and spread across systems.

  • Data exfiltration: AI agents have broad access to information they process, creating data loss risks.

  • Lack of audit trails: Many deployments lack comprehensive logging of agent actions.

  • Shadow IT deployments: Users may install OpenClaw without IT approval or security review.

Where agentic risk turns real

In OpenClaw deployments, risk rarely comes from a software bug or malicious intent. More often, it emerges from powerful features operating without clear boundaries. Recent GitHub security advisories illustrate how quickly an autonomous agent can shift from a helpful assistant to a high-risk insider.

Several advisories have demonstrated that once an attacker gains access to agent credentials or control interfaces, the blast radius is significant. Token exfiltration issues (for example, GHSA-g8p2-7wf7-98mq, CVE-2026-25253) exposed paths where a single stolen gateway token enabled remote connections, configuration changes, and arbitrary command execution. In parallel, local file inclusion flaws (such as GHSA-r8g4-86fx-92mq) allowed agents to read sensitive files simply by emitting specially crafted paths, bypassing traditional filesystem controls. Other advisories (including GHSA-q284-4pvr-m585, GHSA-g55j-c2v4-pjcg) showed how command injection could be achieved through unescaped user input, unsafe WebSocket configuration writes, or SSH handling, resulting in execution with minimal interaction.

What ties many of these issues together is how agents consume and act on input. OpenClaw agents routinely ingest emails, documents, web pages, chat messages and third-party skills as part of their normal operation. This creates fertile ground for indirect prompt injection, where malicious instructions are embedded inside otherwise legitimate content.

Because these instructions arrive through normal business inputs, the resulting actions often look indistinguishable from legitimate automation. The agent is not exploited in the traditional sense – it is instructed. Files are accessed using valid permissions, credentials are handled through authorized APIs, and outbound communication follows expected workflows.

This risk is amplified by the surrounding ecosystem. Public skill repositories have already shown how malicious extensions can masquerade as legitimate functionality, permanently altering agent behavior once installed.


At the same time, many real-world deployments store API keys, OAuth tokens, and conversation history in accessible locations or expose control interfaces without strong authentication, making post-compromise persistence easy to maintain.

Taken together, these are not theoretical concerns. They demonstrate how an over-privileged, insufficiently governed agent on a trusted, mission critical endpoint device can become a persistent and trusted execution layer-one that attackers can steer indirectly through content, configuration, or supply-chain manipulation rather than traditional exploits.

Detection and visibility of OpenClaw deployments

Focusing on these paths in macOS can help with discovering OpenClaw in your organization:

  • ~/.openclaw

  • ~/Library/LaunchAgents/ai.openclaw.gateway.plist

  • /Applications/OpenClaw.app (Optional macOS companion application)

Detection methods:

  1. Process monitoring: Detecting the installation/onboarding commands for OpenClaw and associated commands for installing skills from Clawhub.
  2. Network traffic analysis: Blocking domains associated with OpenClaw and monitor API calls to LLM providers (OpenAI, Anthropic, etc.)
  3. File system scanning: Monitor OpenClaw installation directories, configuration files, and persistence items.
  4. API key detection: Scan for AI service API keys in environment variables or config files

Configure Jamf for Mac for Prevention, detection and remediation

If aligned to your corporate policy and risk reduction strategy, Jamf for Mac supports comprehensive protection against unauthorized AI agent deployments in macOS.

Jamf-based protection strategy:

1. Prevention

Domains included (at the time of writing):

  • clawhub.ai
  • openclaw.ai
  • open-claw.me
  • molt.bot
  • openclaw.bot
  • Endpoint binary run-time prevention. Custom prevention: Add macOS Companion app to Jamf Protect’s Custom Prevent list

  • Signing ID = bolt.molt.mac

  • Team ID = Y5PE65HELJ

  • Documentation: Ensure corporate security policies include verbiage around AI-agents

  • Ensure that alternative AI tools (if any) are easily accessible and referenced in both Self Service and internal policies.

Jamf Protect’s Advanced Threat Controls (ATC) help prevent the execution of known malicious commands used to install malicious skills.

2. Detection

Jamf Pro

  • Extension attributes:

Jamf Protect

Usage: https://learn.jamf.com/en-US/bundle/jamf-protect-documentation/page/Creating_Analytics.html

  • Telemetry: Telemetry provides visibility into OpenClaw installation and usage on macOS devices, enabling security operators to detect OpenClaw activity within their SIEM.

3. Response

  • User notifications: Alert users about policy violations using

  • Compliance reporting: Generate reports on AI agent deployment attempts

Best practices for AI agent governance

  1. Establish clear policies: Define acceptable AI agent use cases
  2. Require approval workflows: Internal review before AI agent deployment
  3. Implement monitoring: Continuous visibility into AI agent activities
  4. Security training: Educate users about AI agent risks
  5. Vendor evaluation: Assess commercial alternatives with built-in security
  6. Incident response plans: Prepare for AI agent-related security incidents

Conclusion

OpenClaw and similar AI agent frameworks represent powerful automation tools, but they introduce significant security risks when deployed without proper controls. Organizations must balance innovation with security by implementing comprehensive detection, remediation and prevention strategies. Using MDM solutions like Jamf provides the visibility and control necessary to manage AI agents safely in enterprise environments.

The key is not to ban AI agents entirely, but to ensure they are deployed in a controlled, monitored, and secure manner that protects organizational data and systems.

Dive into more Jamf Threat Labs research on our blog.