# Remote & Split Mode

## Architecture
rdc-cli uses a three-tier architecture that cleanly separates the CLI frontend, the daemon (replay engine), and the GPU. Each tier can run on a different machine, enabling flexible cross-platform workflows.
```
+--------------------------------------------------+
|  Tier 1: CLI (thin client)                       |
|  Pure JSON-RPC client, no renderdoc / GPU needed |
|  Runs on any platform: macOS / Windows / Linux   |
+-------------------------+------------------------+
                          | JSON-RPC over TCP
                          | (localhost or network)
+-------------------------v------------------------+
|  Tier 2: Daemon (replay engine)                  |
|  Loads capture, holds replay state               |
|  Requires renderdoc module                       |
+-------------------------+------------------------+
                          | Optional: RenderDoc remote protocol
                          | (only when GPU is on another machine)
+-------------------------v------------------------+
|  Tier 3: GPU                                     |
|  Local GPU or remote renderdoccmd server         |
+--------------------------------------------------+
```
Core principle: the CLI does not care where the daemon is; the daemon does not care where the CLI is. The session file is the only connection contract between them.
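The contract can be pictured as a small sketch. The field names (`host`, `port`, `token`, `pid`) are assumptions for illustration, not the actual session file schema:

```python
import json
from dataclasses import dataclass

@dataclass
class Session:
    """Connection contract between CLI and daemon (field names illustrative)."""
    host: str
    port: int
    token: str
    pid: int   # > 0: daemon spawned by this CLI; 0: external daemon (split mode)

def load_session(text: str) -> Session:
    # The CLI resolves the daemon purely from this file.
    return Session(**json.loads(text))

local = load_session('{"host": "127.0.0.1", "port": 54321, "token": "abc", "pid": 4242}')
split = load_session('{"host": "replay-host", "port": 54321, "token": "abc", "pid": 0}')
print(local.pid, split.pid)  # → 4242 0
```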
## Three Deployment Modes

| Mode | Daemon runs on | GPU access | Client needs renderdoc? | Server needs |
|---|---|---|---|---|
| Local (default) | Same machine as CLI | Local GPU | Yes | — |
| Proxy (`--proxy`) | Same machine as CLI | Remote renderdoccmd server | Yes | renderdoccmd |
| Split (`--listen` / `--connect`) | Remote machine (server) | Server-local GPU | No | renderdoc module + rdc-cli |
The modes compose as follows (Split and Proxy can be combined; Local is simply everything on one machine):

```
Local:       [CLI + daemon + GPU]
Proxy:       [CLI + daemon] --RenderDoc protocol--> [remoteserver + GPU]
Split:       [CLI] --JSON-RPC--> [daemon + GPU]
Split+Proxy: [CLI] --JSON-RPC--> [daemon] --RenderDoc protocol--> [remoteserver + GPU]
```

## When to Use Which
| Scenario | Mode | Why |
|---|---|---|
| Single machine, local GPU | Local | Simplest setup, no network involved |
| macOS client, Linux/Windows GPU server | Split | macOS has no renderdoc — split client needs nothing |
| CI runner analyzing captures | Split | CI runner connects to a pre-started daemon, zero GPU dependency on CI |
| Existing RenderDoc remoteserver fleet | Proxy | Reuse existing infrastructure, no rdc-cli needed on GPU servers |
| Client has renderdoc but no GPU | Proxy | Daemon runs locally, GPU work forwarded to remote server |
| Cross-vendor replay (capture needs original GPU) | Split or Proxy | RenderDoc captures are GPU-vendor-bound; use remote replay on the original GPU |
| Client without GPU or renderdoc, server only has renderdoccmd | Split+Proxy | CLI → daemon (on a third machine) → remoteserver (GPU machine) |
## Split Mode (recommended for cross-platform)
Split mode is the recommended way to use rdc-cli across machines. The daemon runs on the machine with RenderDoc and a GPU. The CLI connects over TCP and needs no local RenderDoc installation. All commands work transparently — the user experience is identical to local mode.
### How it works
- The server runs `rdc open capture.rdc --listen ADDR:PORT`, which starts a daemon that binds to the network and prints a connection token.
- The client runs `rdc open --connect HOST:PORT --token TOKEN`, which creates a local session file with `pid=0` (marking it as an external daemon).
- All subsequent commands (`rdc draws`, `rdc rt`, etc.) send JSON-RPC requests to the remote daemon and receive results. Binary exports (textures, render targets) are transferred inline.
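The request/response exchange can be sketched as one JSON-RPC 2.0 call over a TCP socket. The newline-delimited framing and the `ping` method here are illustrative assumptions, not rdc-cli's actual wire format:

```python
import json
import socket
import threading

def serve_once(server_sock: socket.socket) -> None:
    """Toy daemon: answer a single JSON-RPC request, then close."""
    conn, _ = server_sock.accept()
    with conn:
        request = json.loads(conn.makefile("r").readline())
        response = {"jsonrpc": "2.0", "id": request["id"],
                    "result": {"pong": True, "method_seen": request["method"]}}
        conn.sendall((json.dumps(response) + "\n").encode())

def rpc_call(host: str, port: int, method: str, params: dict) -> dict:
    """Send one newline-delimited JSON-RPC request and read the reply."""
    with socket.create_connection((host, port)) as sock:
        payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        sock.sendall((json.dumps(payload) + "\n").encode())
        return json.loads(sock.makefile("r").readline())

server = socket.socket()
server.bind(("127.0.0.1", 0))   # auto-assign a port, like --listen :0
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

reply = rpc_call("127.0.0.1", port, "ping", {"token": "TOKEN"})
print(reply["result"])  # → {'pong': True, 'method_seen': 'ping'}
```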
### Server side
```shell
# Bind to a specific LAN interface (recommended)
rdc open frame.rdc --listen 192.168.1.10:54321

# Or auto-assign port on all interfaces
# WARNING: 0.0.0.0 exposes the daemon to the entire network — use only on trusted LANs
rdc open frame.rdc --listen 0.0.0.0:0

# Output:
#   opened: frame.rdc (listening)
#   host:  0.0.0.0
#   port:  54321
#   token: <TOKEN>
#   connect with: rdc open --connect <server-ip>:54321 --token <TOKEN>

# Bind to a specific interface and port (useful for CI/automation)
rdc open frame.rdc --listen 10.0.0.1:8080
```

### Client side
```shell
# Connect from any machine (macOS, Windows, CI runner, etc.)
rdc open --connect replay-host:54321 --token TOKEN

# Everything works exactly like local mode
rdc status                     # shows remote capture path and daemon info
rdc info                       # capture metadata
rdc goto 142                   # navigate to a draw call
rdc draws                      # list all draw calls
rdc pipeline                   # pipeline state
rdc rt 142 -o out.png          # export render target
rdc debug pixel 142 400 300    # shader debugging
rdc counters --eid 142         # GPU performance counters
rdc close                      # disconnect (daemon keeps running)
rdc close --shutdown           # disconnect AND stop the remote daemon
```

### Session transparency
The session file is the only thing that differs between local and split mode. When `pid > 0`, the daemon is local and health-checked via `is_pid_alive()`. When `pid == 0`, the daemon is external and health-checked via a ping RPC. All other behavior is identical.
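A POSIX-flavored sketch of that dispatch. `daemon_healthy` and the session dict layout are hypothetical, and `is_pid_alive` is shown only in the signal-0 form commonly used on Unix:

```python
import os

def is_pid_alive(pid: int) -> bool:
    """Signal 0 checks process existence without delivering a real signal (POSIX)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True   # process exists but belongs to another user
    return True

def daemon_healthy(session: dict, ping_rpc) -> bool:
    """pid > 0: local daemon, check the process; pid == 0: external, ping it."""
    if session["pid"] > 0:
        return is_pid_alive(session["pid"])
    return ping_rpc(session["host"], session["port"])

print(daemon_healthy({"pid": os.getpid()}, None))  # → True (this process is alive)
```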
### Security

- Token authentication: a 128-bit random token (`secrets.token_hex(16)`), verified with `secrets.compare_digest()` for timing-safe comparison.
- Default bind: `0.0.0.0` (all interfaces). Prefer binding to a specific LAN IP or `127.0.0.1` on untrusted networks.
- SSH tunnel (recommended for untrusted networks):

```shell
# On the client machine
ssh -L 54321:localhost:54321 user@replay-host
rdc open --connect localhost:54321 --token TOKEN
```

## Proxy Mode
Proxy mode runs the daemon on the client machine, but forwards all GPU
operations to a remote renderdoccmd remoteserver via
RenderDoc's native remote protocol. Use this when you have existing
RenderDoc remote server infrastructure and don't want to install rdc-cli
on every GPU server.
### How it works
- A `renderdoccmd remoteserver` runs on the GPU machine (port 39920 by default).
- The client runs `rdc open capture.rdc --proxy HOST:PORT`. The local daemon uploads the capture to the remote server and opens it there.
- The local daemon holds a remote controller proxy. Metadata queries (events, draws, pipeline state) work through this proxy. The daemon maintains a background keepalive ping every 3 seconds.
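The background keepalive can be sketched with a worker thread and an `Event` used as an interruptible sleep. The class and parameter names are illustrative; only the 3-second default comes from the text above:

```python
import threading
import time

class Keepalive:
    """Ping the remote server on a fixed interval from a background thread."""

    def __init__(self, ping, interval: float = 3.0):
        self._ping = ping
        self._interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        # Event.wait doubles as a sleep that stop() can interrupt immediately.
        while not self._stop.wait(self._interval):
            self._ping()

# Demo with a short interval so the effect is visible immediately.
pings = []
ka = Keepalive(lambda: pings.append(time.time()), interval=0.02)
ka.start()
time.sleep(0.2)
ka.stop()
print(len(pings) >= 2)  # → True
```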
### Start a remote server
```shell
# Using rdc-cli (wraps renderdoccmd) — restrict to trusted subnet
rdc serve --allow-ips 10.0.0.0/24
rdc serve --daemon --port 39920   # detach after printing PID

# Or use renderdoccmd directly
renderdoccmd remoteserver
```

### Connect via proxy
```shell
# Replay a local capture on the remote GPU
rdc open frame.rdc --proxy gpu-server:39920
rdc status      # shows "remote: gpu-server:39920"
rdc draws       # metadata queries work normally
rdc pipeline    # pipeline state via remote controller
```

### Proxy mode limitations
- Requires renderdoc on the client: the daemon needs the renderdoc module to create the remote controller proxy.
- Requires `renderdoccmd` on the server: needed to run the remote server process.
- Binary export limitations: commands that export files (`rt`, `texture`) may not work reliably in proxy mode. The remote controller's `SaveTexture()` writes to the remote server's filesystem, not the client's. For full export support, use Split mode instead.
## Android Remote Replay
Android captures can only be replayed on the original device GPU (desktop GPUs
are incompatible with mobile GLES/Vulkan captures). The --android
flag on rdc open automates this: it resolves the device's
adb-forwarded port from saved state and opens a proxy replay session.
### Workflow
```shell
# 1. Setup: start RenderDoc remote server on the device
rdc android setup

# 2. Capture a frame
rdc android capture com.example.app/.MainActivity -o frame.rdc

# 3. Remote replay on the device GPU
rdc open frame.rdc --android                 # auto-resolves device
rdc open frame.rdc --android --serial XYZ    # explicit device selection

# 4. All commands work transparently
rdc info                          # shows "API: OpenGL, machine_ident: Android ARM 64-bit"
rdc draws                         # draw call list
rdc pick-pixel 540 1170 --json    # pixel queries
rdc snapshot 11 -o ./snap/        # pipeline + shader export
rdc close

# 5. Cleanup
rdc android stop
```

### How it works
`--android` reads the RemoteServerState saved by `rdc android setup`, finds the adb-forwarded TCP port via `adb forward --list`, and passes `localhost:PORT` as the proxy URL. The daemon then uploads the capture to the device via `CopyCaptureToRemote` and opens it for replay on the device GPU.
### Prerequisites

- `rdc android setup` must be run first (starts the remote server)
- Device must be connected via USB with `adb` access
- Android 10+ for GPU debug layer capture
## Remote Capture Commands
The rdc remote command group connects to a remote RenderDoc
target control server to list running applications and trigger captures.
This is independent of proxy/split mode — it uses RenderDoc's
target control protocol.
```shell
# Connect to a remote RenderDoc server and save state
rdc remote connect gpu-server:39920

# List capturable applications on the remote host
rdc remote list

# Trigger capture of a specific application
rdc remote capture /usr/bin/game -o /tmp/remote.rdc
```

## Proxy vs Split: Key Differences
| | Proxy (`--proxy`) | Split (`--listen` / `--connect`) |
|---|---|---|
| Daemon location | Client machine | Server machine |
| Communication protocol | RenderDoc internal binary protocol | JSON-RPC (rdc-cli's own protocol) |
| Client dependency | renderdoc module required | Only rdc-cli (no renderdoc) |
| Server dependency | renderdoccmd | renderdoc module + rdc-cli |
| GPU operations | Proxied through remote controller | Executed locally on daemon machine |
| Binary export | Limited (path goes to remote filesystem) | Full support (daemon writes locally, transfers via JSON-RPC) |
| Use case | Reuse existing renderdoccmd servers | Cross-platform, zero-dependency clients |
In short: Proxy means "local daemon, remote GPU"; Split means "remote daemon (with GPU), local thin client". Split is simpler, more reliable, and recommended for most cross-platform workflows. Proxy is useful when you already have RenderDoc remote server infrastructure.
## Recipes

### macOS development with Linux GPU server
```shell
# Linux server (has GPU + renderdoc)
rdc capture -o /tmp/scene.rdc -- ./my_game
rdc open /tmp/scene_frame0.rdc --listen 0.0.0.0:0
# note the port and token from output

# macOS laptop (no GPU, no renderdoc)
pip install rdc-cli   # or: uv tool install rdc-cli
rdc open --connect linux-server:PORT --token TOKEN
rdc draws | sort -t$'\t' -k3 -rn | head -10    # top draws by triangle count
rdc shader 142 ps | grep shadow                # search shaders
rdc rt 142 -o ~/Desktop/render.png             # export render target
rdc close
```

### CI pipeline: regression testing on shared GPU server
```shell
# GPU server (always running)
rdc open golden_frame.rdc --listen 0.0.0.0:9000

# CI runner (no GPU needed)
rdc open --connect gpu-server:9000 --token $RDC_TOKEN
rdc assert-pixel 142 400 300 --expect "0.5 0.0 0.0 1.0" --tolerance 0.01
rdc assert-state 142 topology --expect TriangleList
rdc assert-count draws --expect 50 --op ge
rdc assert-clean --min-severity HIGH
rdc close
```

### Cross-vendor debugging (NVIDIA capture, AMD client)
```shell
# RenderDoc captures are GPU-vendor-bound.
# An NVIDIA capture cannot replay on AMD hardware (different memory types).
# Solution: replay on the original GPU via Split mode.

# NVIDIA server (where the capture was made)
rdc open nvidia_capture.rdc --listen 0.0.0.0:0

# AMD workstation (cannot replay this capture locally)
rdc open --connect nvidia-server:PORT --token TOKEN
rdc draws                     # inspect the frame
rdc debug pixel 42 300 200    # debug shader on NVIDIA GPU
rdc close
```

### Proxy + Split combined: three-machine setup
```shell
# Machine A: GPU server with renderdoccmd only (no rdc-cli)
renderdoccmd remoteserver

# Machine B: daemon host with renderdoc module + rdc-cli (no GPU)
rdc open frame.rdc --listen 0.0.0.0:5000 --proxy machine-a:39920

# Machine C: thin client (no renderdoc, no GPU)
rdc open --connect machine-b:5000 --token TOKEN
rdc draws
```

### Git Bash on Windows
```shell
# Git Bash (MSYS2) converts "/" arguments to Windows paths.
# Set MSYS_NO_PATHCONV=1 for VFS path commands:
export MSYS_NO_PATHCONV=1
rdc ls /
rdc cat /current/pipeline/summary

# Commands without VFS paths work without the workaround:
rdc draws
rdc events
rdc pipeline
```

## Command Reference
```shell
# Local mode (default)
rdc open capture.rdc

# Proxy mode
rdc open capture.rdc --proxy HOST[:PORT]|adb://SERIAL

# Android remote replay
rdc open capture.rdc --android [--serial SERIAL]

# Split server
rdc open capture.rdc --listen [ADDR]:PORT    # :0 for auto-port

# Split client
rdc open --connect HOST:PORT --token TOKEN

# Remote server management
rdc serve [--daemon] [--port PORT] [--allow-ips CIDR,...]
rdc remote connect HOST:PORT
rdc remote list [--url HOST:PORT]
rdc remote capture APP [-o OUTPUT]
```

### Notes
- `--connect` is mutually exclusive with `CAPTURE`, `--proxy`, `--android`, and `--listen`.
- `--android` is mutually exclusive with `--proxy`, `--connect`, and `--listen`.
- `--listen` can be combined with `--proxy` for Split+Proxy setups.
- For `rdc open`, `--remote` is a deprecated alias for `--proxy` (hidden, shows deprecation warning).
- The daemon has a 30-minute idle timeout by default. Any command resets the timer.
- IPv6 is not currently supported (`HOST:PORT` parsing uses `rsplit(":", 1)`).
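The IPv6 note follows directly from that parsing scheme; a minimal reconstruction (the function name is hypothetical):

```python
def parse_endpoint(value: str) -> tuple[str, int]:
    """Split HOST:PORT the way the note above describes: rsplit(":", 1)."""
    host, port = value.rsplit(":", 1)
    return host, int(port)

print(parse_endpoint("replay-host:54321"))  # → ('replay-host', 54321)
# IPv6 literals are ambiguous under this scheme: "2001:db8::1:54321" is
# itself a complete IPv6 address, and the bracketed form "[::1]:54321"
# yields host "[::1]" with the brackets still attached.
```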