
Network Isolation

Warden enforces network policies inside containers using iptables, giving you control over what Claude Code can reach on the internet. Choose between unrestricted access, a domain allowlist, or full air-gapping.

Network isolation is configured per-project when creating or updating a container.

Full:

Unrestricted outbound access. The container can reach any host on the internet. No iptables rules are applied.

Use this when Claude needs general internet access — installing packages, cloning repos, calling APIs.

Restricted:

Outbound traffic is limited to a configurable list of allowed domains. All other traffic is dropped.

Use this when you want Claude to have internet access but only to specific services — your GitHub org, npm registry, internal APIs, etc.

Configuring allowed domains:

When selecting Restricted mode, you specify a list of domains. Both exact domains and wildcards are supported:

Entry              What it matches
github.com         github.com and *.github.com
npmjs.org          npmjs.org and *.npmjs.org
api.example.com    api.example.com and *.api.example.com

Each domain entry automatically includes all subdomains.
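The matching rule can be sketched as a small shell helper (illustrative only; `domain_matches` is not part of Warden): an entry matches the hostname itself and any dot-separated subdomain of it, but never a partial suffix.

```shell
# Illustrative sketch of the allowlist matching rule: an entry matches
# the entry itself and any subdomain of it.
domain_matches() {
  host="$1" entry="$2"
  case "$host" in
    "$entry"|*".$entry") return 0 ;;  # exact match or true subdomain
    *) return 1 ;;
  esac
}

domain_matches github.com github.com     && echo "match"     # exact
domain_matches api.github.com github.com && echo "match"     # subdomain
domain_matches evilgithub.com github.com || echo "no match"  # partial suffix rejected
```

The dot-prefixed pattern is what prevents `evilgithub.com` from sneaking past a `github.com` entry.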

Warden pre-populates the domain list based on the selected agent type: Claude Code gets *.anthropic.com, Codex gets *.openai.com, and both include shared infrastructure domains (GitHub, Ubuntu apt repos). Runtime-specific domains (npm, PyPI, Go modules, etc.) are added automatically based on the runtimes enabled for the project. You can customize this list at creation time or edit it later.

Live domain updates:

Allowed domains can be changed on a running container without restarting it. When you update domains in the edit dialog, Warden hot-reloads the network policy: the dnsmasq config and ipset are updated and dnsmasq is signaled to reload. Active connections to previously-allowed domains remain alive until they close naturally, while new connections to removed domains are blocked immediately.
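Under stated assumptions (the config path, ipset name, and `render_allowlist_conf` helper below are illustrative, not Warden's actual layout), the hot-reload sequence looks roughly like:

```shell
# Sketch of the hot-reload sequence: regenerate dnsmasq's ipset=
# directives for the current allowlist, then flush the ipset and
# signal dnsmasq. Names and paths are assumptions.
render_allowlist_conf() {
  # dnsmasq's ipset= option adds each domain's resolved IPs to an ipset
  for d in "$@"; do
    printf 'ipset=/%s/warden-allowed\n' "$d"
  done
}

render_allowlist_conf github.com npmjs.org > /tmp/warden-allowlist.conf
cat /tmp/warden-allowlist.conf
# ipset=/github.com/warden-allowed
# ipset=/npmjs.org/warden-allowed

# Warden would then (roughly, with NET_ADMIN on its side):
#   ipset flush warden-allowed    # drop IPs of removed domains
#   kill -HUP "$(pidof dnsmasq)"  # clear dnsmasq's cache
```

Flushing the ipset is what makes removed domains blocked immediately; IPs for still-allowed domains repopulate as new DNS lookups come in.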

None:

All outbound traffic is blocked. Only loopback (localhost) and established connections (responses to already-open connections) are allowed.

Use this for air-gapped operation — when Claude should work entirely with local files and tools, with no internet access.
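A minimal sketch of what rules of this shape look like in iptables terms (illustrative; Warden's real rules are applied from outside the container and may differ). Emitting the commands rather than running them keeps the sketch runnable without NET_ADMIN:

```shell
# Sketch of air-gap rules: default-deny outbound, allow only loopback
# and reply traffic for already-established connections.
emit_airgap_rules() {
  echo "iptables -P OUTPUT DROP"             # default-deny outbound
  echo "iptables -A OUTPUT -o lo -j ACCEPT"  # loopback stays open
  echo "iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT"
}
emit_airgap_rules
# Apply with: emit_airgap_rules | sh   (requires NET_ADMIN, which
# Warden holds outside the container)
```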

Accessing host services:

Containers can reach services running on the host machine (e.g. a local dev server, database, or API) using the special hostname host.docker.internal.

For example, if you’re running a dev server on port 3000 on the host:

# Inside the container
curl http://host.docker.internal:3000

This works in all network modes — host.docker.internal resolves to the host’s IP via Docker’s host-gateway mapping. It does not count as outbound internet traffic, so it is not affected by domain allowlists in Restricted mode or blocked in None mode.

Port forwarding:

When an agent starts a web server inside the container (e.g. Vite on port 5173), you can access it from the host via Warden’s built-in reverse proxy.

Declaring ports:

Add the ports you want to forward in your project’s container settings. Ports can also be declared in .warden.json:

{
"forwardedPorts": [5173, 3000]
}

Each declared port is accessible via a subdomain URL:

http://{projectId}-{agentType}-{port}.localhost:8090/

For example, if your project ID is a1b2c3d4e5f6 and you’re running Claude Code with Vite on port 5173:

http://a1b2c3d4e5f6-claude-code-5173.localhost:8090/

Warden’s web UI shows clickable port chips on each project card that open the proxy URL in a new tab.
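Assembling the URL from its parts can be sketched as follows (the `proxy_url` helper is hypothetical, not a Warden command):

```shell
# Hypothetical helper that assembles the proxy URL from the pieces
# described above: project ID, agent type, and container port.
proxy_url() {
  project_id="$1" agent_type="$2" port="$3"
  printf 'http://%s-%s-%s.localhost:8090/\n' "$project_id" "$agent_type" "$port"
}

proxy_url a1b2c3d4e5f6 claude-code 5173
# → http://a1b2c3d4e5f6-claude-code-5173.localhost:8090/
```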

How it works:

Warden’s Go backend routes requests based on the Host header — when a request arrives at {projectId}-{agentType}-{port}.localhost, it reverse-proxies it to the container’s internal IP. Browsers resolve *.localhost to 127.0.0.1 automatically (RFC 6761), so no DNS or hosts file configuration is needed.

  • HMR (hot module replacement) works — WebSocket upgrade is supported
  • Root-relative asset paths work correctly (no path prefix to break them)
  • Multiple containers can each use the same port internally without conflicts
  • No Docker port bindings are needed, so no container recreation is required
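For illustration, the subdomain parse can be mirrored in shell (Warden's backend is Go; `parse_host` is a hypothetical stand-in). The port is the last hyphen-separated segment, the project ID is the first, and everything in between is the agent type, which may itself contain hyphens:

```shell
# Sketch of the Host-header routing decision: split
# {projectId}-{agentType}-{port}.localhost into its components.
parse_host() {
  name="${1%%.localhost*}"   # drop ".localhost" and any trailing port
  port="${name##*-}"         # last segment: the container port
  rest="${name%-*}"
  project_id="${rest%%-*}"   # first segment: the project ID
  agent_type="${rest#*-}"    # middle segments: the agent type
  echo "$project_id $agent_type $port"
}

parse_host a1b2c3d4e5f6-claude-code-5173.localhost:8090
# → a1b2c3d4e5f6 claude-code 5173
```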

Live updates:

Forwarded ports can be added or removed on a running container without restarting it. The proxy validates each request against the current declared port list — undeclared ports are rejected.

Enforcement:

Network isolation is enforced from outside the container using Docker’s privileged exec mechanism. The container itself does not have the NET_ADMIN capability, which means:

  • Users cannot disable network rules using iptables — even with sudo, the command fails with EPERM because the capability bounding set does not include NET_ADMIN.
  • The dnsmasq DNS resolver runs as root and cannot be killed by the warden user.
  • Package installation via sudo apt-get install works normally — sudo has access to standard capabilities (file ownership, process management) but not network administration.
  • Domain IPs are resolved dynamically, but if a domain’s IP changes and DNS caching hasn’t refreshed, there may be a brief interruption. Editing the allowed domains list triggers a full re-resolution; otherwise restart the container.
  • Network mode changes (e.g. full → restricted) still require container recreation since they involve different iptables rule sets.
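The EPERM behavior in the first point can be checked from inside a container by inspecting the capability bounding set in /proc (a sketch; `has_net_admin` and the printed messages are illustrative). CAP_NET_ADMIN is capability bit 12 on Linux:

```shell
# Test whether a hex capability mask (as printed in the CapBnd line of
# /proc/*/status) includes CAP_NET_ADMIN (bit 12).
has_net_admin() {
  [ $(( (0x$1 >> 12) & 1 )) -ne 0 ]
}

capbnd="$(awk '/^CapBnd:/ {print $2}' /proc/self/status)"
if has_net_admin "$capbnd"; then
  echo "NET_ADMIN present"
else
  echo "NET_ADMIN absent: iptables fails with EPERM even under sudo"
fi
```

Because sudo cannot add a capability back into the bounding set, dropping NET_ADMIN at container creation is sufficient to make the rules tamper-proof from inside.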