IX Bond Documentation
Complete reference for IX Bond, the cloud-managed enterprise mesh VPN built on WireGuard. This documentation covers every feature, configuration option, API endpoint, and CLI command.
Version: This documentation covers IX Bond v0.8.x. For older versions, see the changelog.
Quick Start
Get a fully encrypted mesh network running in under 5 minutes. This guide walks you through installing the control server, joining your first node, and verifying connectivity.
# 1. Install the IX Bond agent
$ curl -fsSL https://get.ixbond.com | sh

# 2. Start the control server
$ ixbond-server --data-dir /var/lib/ixbond --listen :443

# 3. Join a node to the mesh
$ ixbond-agent --server https://mgmt.ixbond.com \
    --auth-token "your-token" \
    --node-name "server-01"

# 4. Verify the mesh is running
$ ixbondctl status
Tip: The install script detects your platform automatically and downloads the correct binary for Linux (amd64/arm64/armv7), macOS, or Windows.
After running ixbondctl status, you should see your node listed with a mesh IP in the 100.120.x.x range. You can now ping any other enrolled node by its mesh IP or DNS name.
Installation
IX Bond supports every major platform. Choose the installation method that fits your environment.
Linux (Recommended)
The fastest way to install on any Linux distribution. The install script adds the binary to /usr/local/bin and creates a systemd service.
# One-line install (auto-detects architecture)
$ curl -fsSL https://get.ixbond.com | sh

# Or download a specific release from GitHub
$ wget https://github.com/ixbond/ixbond/releases/download/v0.8.0/ixbond-linux-amd64.tar.gz
$ tar -xzf ixbond-linux-amd64.tar.gz
$ sudo mv ixbond-agent ixbondctl /usr/local/bin/

# Enable and start the service
$ sudo systemctl enable --now ixbond-agent
Windows
Download the Windows .exe installer from the releases page. The installer registers IX Bond as a Windows service that starts automatically on boot.
# Download and install
PS> Invoke-WebRequest -Uri https://get.ixbond.com/windows -OutFile ixbond-setup.exe
PS> .\ixbond-setup.exe /S

# Or register and start the service manually (sc.exe requires a space after binPath=)
PS> sc.exe create IXBondAgent binPath= "C:\Program Files\IXBond\ixbond-agent.exe"
PS> sc.exe start IXBondAgent
Raspberry Pi
Both ARM64 and ARMv7 binaries are available. Ideal for running as a dedicated gateway or subnet router at branch offices.
# ARM64 (Raspberry Pi 4/5 with 64-bit OS)
$ curl -fsSL https://get.ixbond.com | sh

# ARMv7 (older Pis or 32-bit OS)
$ wget https://github.com/ixbond/ixbond/releases/download/v0.8.0/ixbond-linux-armv7.tar.gz
$ tar -xzf ixbond-linux-armv7.tar.gz
$ sudo mv ixbond-agent ixbondctl /usr/local/bin/
Docker
Run the agent as a Docker container. Requires --net=host for WireGuard interface access.
$ docker run -d --net=host \
    --cap-add=NET_ADMIN \
    --cap-add=NET_RAW \
    -v /var/lib/ixbond:/data \
    -e IXBOND_SERVER=https://mgmt.ixbond.com \
    -e IXBOND_TOKEN=your-token \
    ixbond/agent
Kubernetes
Deploy the agent as a DaemonSet across your cluster with a single kubectl apply.
$ kubectl apply -f https://get.ixbond.com/k8s.yaml
Configuration
The agent is configured via a JSON config file at /etc/ixbond/agent.json, environment variables, or CLI flags. Precedence runs lowest to highest: config file values, then environment variables, then CLI flags, which override everything else.
Full Agent Configuration
{
"server_url": "https://mgmt.ixbond.com",
"auth_token": "your-token",
"node_name": "office-gateway",
"wg_interface": "ixbond0",
"wg_listen_port": 51820,
"enable_socks5": true,
"enable_exit": true,
"enable_ssh": true,
"ssh_port": 2222,
"enable_taildrop": true,
"tags": ["tag:office", "tag:gateway"],
"advertise_routes": ["10.20.6.0/24"],
"serve": [
{
"proto": "http",
"port": 8080,
"target": "http://127.0.0.1:3000"
}
]
}
Configuration Reference
| Key | Type | Default | Description |
|---|---|---|---|
| server_url | string | required | URL of the IX Bond control server |
| auth_token | string | required | Enrollment or node authentication token |
| node_name | string | hostname | Human-readable node name |
| wg_interface | string | ixbond0 | WireGuard interface name |
| wg_listen_port | int | 51820 | WireGuard UDP listen port |
| enable_socks5 | bool | true | Enable SOCKS5 proxy on mesh IP:1080 |
| enable_exit | bool | false | Act as an exit node for internet traffic |
| enable_ssh | bool | false | Start mesh SSH server |
| ssh_port | int | 2222 | Mesh SSH server port |
| enable_taildrop | bool | false | Enable peer-to-peer file transfers |
| tags | []string | [] | Node tags for ACL, posture, ZTNA |
| advertise_routes | []string | [] | LAN subnets to advertise to the mesh |
| serve | []object | [] | Local services to expose on the mesh |
| ephemeral | bool | false | Auto-remove node when offline for 2h |
Environment Variables
Every config key can be set via an environment variable prefixed with IXBOND_. For example:
$ IXBOND_SERVER="https://mgmt.ixbond.com" \
    IXBOND_TOKEN="your-token" \
    IXBOND_NODE_NAME="server-01" \
    ixbond-agent
Architecture Overview
IX Bond consists of four core components that work together to create a secure, cloud-managed mesh network.
Control Server
The central management plane. It stores all mesh state (nodes, peers, ACLs, DNS records), distributes peer configurations to agents on each heartbeat, runs the REST API, and serves the web dashboard. Typically deployed behind a reverse proxy with TLS termination.
Node Agent
Runs on every machine that joins the mesh. The agent manages the WireGuard interface, configures local networking (iptables rules, routes, DNS), starts optional services (SOCKS5 proxy, SSH server, taildrop receiver), and maintains a persistent heartbeat connection with the control server.
Relay Server
A DERP-like packet forwarding server for nodes that cannot establish direct WireGuard connections due to restrictive NAT or firewall rules. IX Bond uses a relay-first architecture: all traffic initially routes through the relay, and direct peer-to-peer connections are established opportunistically in the background.
Mesh Addressing
All nodes receive an IP from the global mesh CIDR 100.120.0.0/16. Each tenant is assigned a /24 subnet (e.g., 100.120.1.0/24), supporting up to 254 tenants with 254 nodes each.
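The tenant/node addressing scheme can be sketched with a couple of illustrative helpers (hypothetical names, written in Python for clarity; they are not part of the IX Bond CLI or API):

```python
import ipaddress

def tenant_subnet(prefix: int) -> ipaddress.IPv4Network:
    """The /24 carved out of 100.120.0.0/16 for tenant prefix 1-254."""
    if not 1 <= prefix <= 254:
        raise ValueError("tenant prefix must be in 1-254")
    return ipaddress.ip_network(f"100.120.{prefix}.0/24")

def node_ip(prefix: int, host: int) -> ipaddress.IPv4Address:
    """Mesh IP of the host-th node (1-254) in a tenant's subnet."""
    if not 1 <= host <= 254:
        raise ValueError("host index must be in 1-254")
    return tenant_subnet(prefix)[host]
```

For example, the fifth node enrolled in tenant 1 would land at `100.120.1.5`, consistent with the addresses used in the examples throughout this document.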
Heartbeat Protocol
Every 30 seconds, each agent sends a heartbeat to the control server containing:
- Current WireGuard endpoints (public IP:port)
- Connection metrics (latency, bytes, handshake age)
- Device posture data (OS, firewall state, disk encryption)
The server responds with the updated peer list, ACL rules, and DNS configuration. If peers have changed since the last heartbeat, the agent reconfigures WireGuard automatically.
Self-hosted: Although IX Bond is cloud-managed in its operating model, all components run on your own infrastructure. No traffic ever touches servers operated by IX Bond; the control server, relay, and all agents are binaries you deploy yourself.
Mesh Networking
IX Bond creates a WireGuard-based mesh network where every enrolled node can communicate securely with every other node in the same tenant. All traffic between nodes is encrypted end-to-end using WireGuard's ChaCha20-Poly1305 cipher suite.
How It Works
- Each node generates a WireGuard keypair on first run (Curve25519)
- The node registers with the control server, which assigns a mesh IP from the tenant's CIDR block (e.g., 100.120.1.5)
- The control server distributes peer information to all nodes via the heartbeat protocol (every 30s)
- WireGuard handles all encryption at the kernel level -- every packet between nodes is encrypted with ChaCha20-Poly1305
Mesh Addressing
| Scope | CIDR | Capacity |
|---|---|---|
| Global mesh | 100.120.0.0/16 | 65,534 addresses total |
| Per tenant | 100.120.{1-254}.0/24 | 254 nodes per tenant |
| Max tenants | 254 | Prefix range 1-254 |
Heartbeat Protocol
Every 30 seconds, each agent sends a heartbeat to the control server. The heartbeat includes current endpoints, connection metrics, and device posture. The server responds with the updated peer list, ACL rules, and DNS configuration. If peers have changed, the agent reconfigures WireGuard automatically -- no restarts needed.
// Heartbeat request (agent -> server)
{
  "node_id": "node_abc123",
  "endpoints": ["203.0.113.5:51820", "[fd00::1]:51820"],
  "metrics": {
    "peers": [
      {"id": "node_def456", "latency_ms": 12, "tx_bytes": 485920, "rx_bytes": 302144}
    ]
  },
  "posture": {
    "os": "Ubuntu 22.04.3 LTS",
    "firewall_enabled": true,
    "disk_encrypted": true
  }
}
Node Agent
The agent (ixbond-agent) runs on every machine that joins the mesh. It handles all local networking, WireGuard interface management, and optional services.
Initialization Sequence (15 Steps)
When the agent starts, it executes this exact sequence:
- Load or generate encrypted WireGuard keypair
- Discover public endpoint via STUN (Google + Cloudflare STUN servers)
- Collect device posture (OS version, firewall state, disk encryption)
- Register with control server (sends public key, endpoint, posture, tags)
- Set up WireGuard interface (ixbond0) with assigned mesh IP
- Sync peers from server response, configure WireGuard peer entries
- Start SOCKS5 proxy (binds to mesh IP:1080)
- Enable exit node or subnet routing (if configured)
- Start MagicDNS resolver (127.0.0.53:53)
- Initialize ACL engine and apply iptables rules
- Start mesh SSH server (if enabled, default port 2222)
- Start taildrop file receiver (if enabled)
- Start serve/funnel proxy (if configured)
- Initialize kill switch (if enabled)
- Begin heartbeat loop (30s) + key rotation loop (30 days)
Supported Platforms
| Platform | Features | Notes |
|---|---|---|
| Linux (amd64) | Full feature set | Recommended for servers & gateways |
| Linux (arm64) | Full feature set | Raspberry Pi 4/5, ARM servers |
| Linux (armv7) | Full feature set | Older Raspberry Pi, embedded |
| Windows | WireGuard + SOCKS5 | Runs as Windows service |
| Docker | Full feature set | Requires --net=host, NET_ADMIN |
| Kubernetes | Full feature set | Sidecar or DaemonSet |
SOCKS5 Proxy
Every node runs a SOCKS5 proxy server, allowing any application to route traffic through the mesh without VPN-level routing changes.
How It Works
- Binds to mesh IP only (not 0.0.0.0) -- only mesh peers can connect
- Outbound traffic is sourced from the mesh IP, routing through WireGuard
- Optional username/password authentication for additional security
- Supports CONNECT (TCP) and UDP ASSOCIATE methods
Usage Examples
# Route curl through a remote node's SOCKS5 proxy
$ curl --socks5 100.120.1.5:1080 https://example.com

# With authentication
$ curl --socks5 user:pass@100.120.1.5:1080 https://example.com

# Use with SSH (proxy through a mesh node)
$ ssh -o "ProxyCommand nc -X 5 -x 100.120.1.5:1080 %h %p" user@remote-host

# Browser: set proxy to any mesh node's IP:1080
The SOCKS5 proxy is enabled by default. To disable it, set "enable_socks5": false in the agent config.
Exit Node Routing
Any Linux node can act as an exit node, routing all internet traffic from other mesh nodes through itself. This is useful for centralized internet breakout, geographic IP selection, or ensuring all traffic passes through a security appliance.
Server Side (The Exit Node)
The exit node enables IP forwarding and configures NAT masquerading so mesh traffic appears to originate from the exit node's public IP.
# Kernel IP forwarding (set automatically by agent)
sysctl -w net.ipv4.ip_forward=1

# NAT masquerade rule (created automatically)
iptables -t nat -A POSTROUTING -s 100.120.0.0/16 -o eth0 -j MASQUERADE

# Forward rules for mesh-to-WAN and stateful return traffic
iptables -A FORWARD -i ixbond0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o ixbond0 -m state --state RELATED,ESTABLISHED -j ACCEPT
Client Side (Routing Through an Exit)
On the client node, the agent adds two routes that capture all internet traffic without conflicting with WireGuard's own routes:
- 0.0.0.0/1 via the exit node's mesh IP
- 128.0.0.0/1 via the exit node's mesh IP
# Join mesh with exit routing through a specific node
$ ixbond-agent --server https://mgmt.ixbond.com \
    --auth-token "token" \
    --use-exit 100.120.1.1
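The two /1 routes work because together they cover the entire IPv4 space while being more specific than the default route, so longest-prefix matching prefers them without the agent ever touching the existing default. A quick check with Python's ipaddress module:

```python
import ipaddress

# The two halves installed by the client agent for exit routing.
halves = [ipaddress.ip_network("0.0.0.0/1"), ipaddress.ip_network("128.0.0.0/1")]
default = ipaddress.ip_network("0.0.0.0/0")

def covered(addr: str) -> bool:
    """Every IPv4 address falls into exactly one of the two halves."""
    return sum(ipaddress.ip_address(addr) in net for net in halves) == 1

# A /1 beats the /0 default route under longest-prefix match, so the
# original default route can stay in place untouched.
more_specific = all(net.prefixlen > default.prefixlen for net in halves)
```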
Subnet Router
A node can advertise LAN subnets to the mesh, allowing other mesh nodes to access devices on that LAN. For example, a node at the office can advertise 10.20.6.0/24, giving every mesh node access to printers, NAS devices, and servers on that subnet.
How It Works
- Agent sends advertise_routes in its registration request
- Control server creates a route approval request (admin must approve)
- Once approved, the subnet appears in peer AllowedIPs on all mesh nodes
- The advertising node sets up iptables NAT + forwarding for the subnet
# Advertise a LAN subnet
$ ixbond-agent --advertise-routes "10.20.6.0/24" --subnet-lan-iface eth0
Admin approval required: Advertised routes do not take effect until a tenant admin approves them. See Route Approvals for details.
NAT Traversal
IX Bond uses multiple techniques to establish direct WireGuard connections between nodes behind NAT. The system tries each method in sequence, falling back to relay if all direct methods fail.
STUN Discovery
On startup, the agent sends STUN binding requests to Google and Cloudflare STUN servers to discover its public IP:port (reflexive address). This is used for WireGuard endpoint configuration and shared with peers.
NAT Type Detection
The agent sends STUN requests from the same local port to multiple servers and compares the reflexive addresses to classify the NAT type:
| NAT Type | Direct Connection | Detection Method |
|---|---|---|
| Full Cone | Always possible | Same reflexive address from all servers |
| Restricted Cone | Usually possible | Same IP, same port from all servers |
| Port Restricted | Possible with hole punching | Same IP, different ports |
| Symmetric | Rarely possible | Different IPs or ports from each server |
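The classification logic in the table can be sketched in Python. This is a deliberately simplified model: full cone and restricted cone cannot actually be distinguished from reflexive addresses alone (that requires probing filtering behavior), so both collapse to one bucket here, and the function name is illustrative rather than part of the agent:

```python
def classify_nat(reflexive: list[tuple[str, int]]) -> str:
    """Rough NAT classification from the (ip, port) pairs that several
    STUN servers reported for probes sent from the same local port."""
    ips = {ip for ip, _ in reflexive}
    ports = {port for _, port in reflexive}
    if len(ips) > 1:
        # Different public IPs per destination: classic symmetric NAT.
        return "symmetric"
    if len(ports) > 1:
        # Same IP but per-destination ports: port-restricted behavior.
        return "port-restricted"
    # Same IP and port everywhere: full or restricted cone.
    return "cone"
```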
UDP Hole Punching
Both peers simultaneously send UDP probes to each other's STUN-discovered endpoints:
- Probe packet: IXBOND_PUNCH magic + port (16 bytes)
- 5 probes at 200ms intervals per candidate endpoint
- First successful probe establishes the direct path
Port Mapping (UPnP / NAT-PMP)
- UPnP: Discovers gateway via SSDP multicast, sends AddPortMapping SOAP request
- NAT-PMP: Sends mapping request to default gateway on port 5351
- Both create a public-to-private port mapping for the WireGuard port
Relay Fallback
If all direct methods fail, traffic routes through the DERP relay. IX Bond uses a relay-first architecture: by default, all traffic goes through the relay. Direct connections are attempted opportunistically in the background and used once established.
Relay Server
The relay (ixbond-relay) is a DERP-like packet forwarding server for nodes that cannot establish direct WireGuard connections.
Protocol
TCP-based binary frame protocol with three frame types:
| Frame | Type Byte | Payload |
|---|---|---|
| Register | 0x02 | WireGuard public key (32 bytes) |
| Data | 0x01 | 44-byte destination key prefix + encrypted WG packet |
| KeepAlive | 0x03 | Empty (header only) |
Frame header: 1 byte type + 4 bytes length (big-endian). Maximum frame size: 65,536 bytes.
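The frame header layout (1 type byte + 4-byte big-endian length) can be sketched as encode/decode helpers. This is an illustrative Python model of the wire format described above, not relay source code:

```python
import struct

FRAME_DATA, FRAME_REGISTER, FRAME_KEEPALIVE = 0x01, 0x02, 0x03
MAX_FRAME = 65_536  # maximum frame size per the spec above

def encode_frame(ftype: int, payload: bytes = b"") -> bytes:
    """Serialize one relay frame: type byte + big-endian u32 length + payload."""
    if len(payload) > MAX_FRAME:
        raise ValueError("frame exceeds maximum size")
    return struct.pack(">BI", ftype, len(payload)) + payload

def decode_header(buf: bytes) -> tuple[int, int]:
    """Parse the 5-byte header, returning (type, payload_length)."""
    return struct.unpack(">BI", buf[:5])
```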
Architecture
- Relay-first: all non-relay nodes route 100.120.0.0/16 through the relay peer
- The relay node adds all peers with direct endpoints
- Non-relay nodes only have the relay as a WireGuard peer
- Persistent keepalive: 25 seconds
# Start a relay server
$ ixbond-relay --listen :3340 --server https://mgmt.ixbond.com

# Register the relay with the control server
$ curl -X POST https://mgmt.ixbond.com/api/v1/relays \
    -H "Authorization: Bearer $TOKEN" \
    -d '{"name":"us-east","hostname":"relay-us.ixbond.com","port":3340,"region":"us-east-1","lat":39.0,"lon":-77.5}'
Multi-Region Relay
For global deployments, IX Bond supports multiple relay servers with geographic routing. Nodes automatically connect to the lowest-latency relay.
Relay Selection
- FindNearest(): Haversine distance calculation using relay GPS coordinates to find the geographically closest relay
- FindBest(): Concurrent TCP latency probes to all relays to find the fastest one (accounts for network topology, not just distance)
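The FindNearest() calculation can be sketched in Python with the standard haversine formula over the relay GPS coordinates (the agent itself is not Python; this only illustrates the math):

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def find_nearest(node_lat, node_lon, relays):
    """Pick the relay with the smallest great-circle distance to the node."""
    return min(relays, key=lambda r: haversine_km(node_lat, node_lon, r["lat"], r["lon"]))
```

FindBest() trades this purely geometric answer for live TCP latency probes, which is why it can prefer a geographically farther relay with a better network path.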
Health Monitoring
The control server monitors relay health with periodic TCP probes. Relays that fail health checks are marked offline and excluded from assignment. The relay registry tracks online/offline status and connected client count.
MagicDNS
Every node runs a DNS resolver on 127.0.0.53:53 that provides mesh name resolution with enterprise DNS features including split DNS, search domains, and upstream forwarding.
Mesh Resolution
- nodename.ixbond.local resolves to the node's mesh IP
- Records automatically updated on every heartbeat
- Case-insensitive lookups
Split DNS
Route specific domains to specific nameservers. For example, *.corp.com queries go to 10.0.0.53, everything else goes to Cloudflare. Configured per-tenant via the dashboard or API.
{
"split_dns": [
{"domain": "corp.com", "nameservers": ["10.0.0.53"]},
{"domain": "internal.dev", "nameservers": ["10.0.1.53"]}
],
"search_domains": ["ixbond.local"],
"upstream_nameservers": ["1.1.1.1", "8.8.8.8"],
"manage_resolv_conf": true
}
Search Domains
Resolve short names without the full domain. With search domain ixbond.local, typing ping mynode resolves mynode.ixbond.local.
resolv.conf Management
Optionally manages /etc/resolv.conf to point at the local resolver and adds search domains automatically. Disable with "manage_resolv_conf": false if you use systemd-resolved or another DNS manager.
Split Tunneling
Control which traffic goes through the mesh and which bypasses it. Useful for routing only corporate traffic through the VPN while streaming and personal browsing go direct.
Modes
- Exclude mode: Everything goes through mesh EXCEPT listed targets (bypass specific CIDRs/domains)
- Include mode: ONLY listed targets go through mesh (everything else goes direct)
Target Types
- cidr -- IP ranges (e.g., 192.168.0.0/16)
- domain -- DNS names (resolved once, then routed by IP)
Implementation
Uses ip rule and ip route with a dedicated routing table (table 200) on Linux. Excluded/included CIDRs get routing rules that either bypass or use the WireGuard interface.
{
"mode": "exclude",
"targets": [
{"type": "cidr", "value": "192.168.0.0/16"},
{"type": "domain", "value": "zoom.us"},
{"type": "domain", "value": "*.netflix.com"}
]
}
QoS / Bandwidth Limits
Enforce bandwidth limits per node or per tenant using Linux traffic control. Prevents any single node from saturating the mesh.
How It Works
- Creates an HTB (Hierarchical Token Bucket) qdisc on the WireGuard interface
- Root class limits total bandwidth
- Default class handles unclassified traffic
- iptables mangle rules mark packets for TC classification
{
"max_up_mbps": 100,
"max_down_mbps": 50,
"priority": 4,
"node_id": "node_abc123"
}
Network Kill Switch
Prevents data leaks by blocking all internet traffic if the VPN connection drops. Ensures no unencrypted traffic leaves the machine.
How It Works
Creates an IXBOND_KILLSWITCH iptables chain with these rules (in order):
- Allow established/related connections
- Allow loopback traffic
- Allow WireGuard interface traffic
- (Optional) Allow LAN traffic (192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12)
- (Optional) Allow DNS (UDP port 53)
- Allow explicitly allowed IPs (e.g., control server)
- DROP everything else on OUTPUT
{
"enabled": true,
"allow_lan": true,
"allow_dns": true,
"allowed_ips": ["75.58.119.159"]
}
Caution: With the kill switch enabled, if the agent crashes or is stopped, all internet traffic will be blocked until the agent restarts or the iptables rules are manually flushed.
Wake-on-LAN
Wake sleeping or powered-off devices on remote LANs via the mesh. The WoL command can be sent from any mesh node and relayed through a subnet router to reach the target device.
How It Works
- Constructs a 102-byte magic packet: 6 bytes of 0xFF + 16 repetitions of the target MAC address
- Sends via UDP broadcast on port 9
- Can send directly (local LAN) or relay through a mesh node (for remote LANs via subnet routers)
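The magic-packet construction above is a standard Wake-on-LAN format and can be reproduced in a few lines of Python (an illustrative sketch of the local-LAN case, not the agent's implementation):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """102-byte WoL payload: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str) -> None:
    """Broadcast the magic packet on UDP port 9 (direct local-LAN case)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), ("255.255.255.255", 9))
```

The `--via` relay case simply has a subnet router perform this broadcast on its own LAN instead.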
# Wake a device by MAC address
$ ixbondctl wol AA:BB:CC:DD:EE:FF

# Wake via a specific mesh node (subnet router)
$ ixbondctl wol AA:BB:CC:DD:EE:FF --via 100.120.1.1
ACL Policy Engine
Tag-based access control rules that are enforced via iptables on every node. ACLs control which nodes can communicate with which other nodes, on which ports and protocols.
Rule Structure
{
"action": "allow",
"src": ["tag:engineering"],
"dst": ["tag:prod-db"],
"ports": ["tcp:5432"],
"proto": "tcp",
"priority": 10,
"enabled": true
}
Evaluation
- Rules sorted by priority (lower number = higher priority = checked first)
- Each rule checked: does src match? does dst match? does proto/port match?
- Matching supports: node IDs, tags (tag:web), wildcard (*)
- First match wins; default action if no match (configurable: allow or deny)
Enforcement
On each heartbeat, the agent receives updated ACL rules from the control server. It creates an IXBOND_ACL iptables chain with rules translated to mesh IPs, with jumps from the FORWARD and INPUT chains.
# List ACL rules
$ ixbondctl acl

# Add a rule: allow web servers to reach API servers on port 8080
$ ixbondctl acl-add allow "tag:web" "tag:api" "tcp:8080,tcp:443"
Node Tags & Groups
Tags are labels attached to nodes that drive ACL rules, posture policies, and ZTNA rules. They are the foundation of IX Bond's policy model.
Format
Tags follow the format tag:name (e.g., tag:web, tag:prod, tag:office-nyc). Names can contain lowercase letters, numbers, and hyphens.
Assignment Methods
| Method | Example |
|---|---|
| Agent config | "tags": ["tag:web", "tag:prod"] |
| API (single node) | PUT /api/v1/nodes/{id}/tags with {"tags": ["tag:web"]} |
| CLI | ixbondctl tags node_abc123 tag:web,tag:prod |
| Bulk API | POST /api/v1/bulk/tags with {"node_ids": [...], "tags": [...]} |
Used By
Tags are referenced by ACL rules, ZTNA rules, posture policies, SSH policies, and webhook filters. Changing a node's tags immediately affects all policies that reference those tags (applied on next heartbeat).
Zero Trust (ZTNA)
Per-application access control that goes beyond network-level ACLs. While ACLs control network access (IP + port), ZTNA controls application access (HTTP path, specific service, with posture requirements).
Rule Structure
{
"name": "Engineering to Prod API",
"src_tags": ["tag:engineering"],
"dst_node_id": "node_abc123",
"app_proto": "https",
"app_port": 443,
"app_path": "/api/v2/",
"require_posture_tags": ["tag:compliant"],
"enabled": true
}
Evaluation Checks
- Source node has required tags
- Destination matches (node ID or tag)
- Protocol and port match
- URL path prefix matches (for HTTP/HTTPS)
- Source node meets posture requirements
ZTNA vs ACL: ACLs are enforced at the network layer via iptables. ZTNA rules are enforced at the application layer and can inspect HTTP headers, paths, and require device posture. Use both together for defense in depth.
Data Loss Prevention
Scans content transferred through the mesh for sensitive data patterns. DLP rules can block, warn, or silently log transfers that match predefined or custom regex patterns.
Built-in Patterns
| Pattern | Regex | Example Match |
|---|---|---|
| SSN | \b\d{3}-\d{2}-\d{4}\b | 123-45-6789 |
| Credit Card | \b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b | 4111-1111-1111-1111 |
| AWS Access Key | AKIA[0-9A-Z]{16} | AKIAIOSFODNN7EXAMPLE |
| Private Key | -----BEGIN.*PRIVATE KEY----- | PEM private keys |
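The built-in patterns in the table are plain regular expressions, so their behavior is easy to verify. A Python sketch of a content scanner using those exact patterns (the function name is illustrative):

```python
import re

# Regexes copied from the built-in pattern table above.
BUILTIN_PATTERNS = {
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "credit_card": r"\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b",
    "aws_access_key": r"AKIA[0-9A-Z]{16}",
    "private_key": r"-----BEGIN.*PRIVATE KEY-----",
}

def scan(content: str) -> list[str]:
    """Return the names of every built-in pattern that matches the content."""
    return [name for name, pat in BUILTIN_PATTERNS.items() if re.search(pat, content)]
```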
Actions
- block -- reject the transfer and notify the sender
- warn -- allow the transfer but log a warning in the audit log
- log -- allow the transfer with silent logging
Applies To
taildrop (file transfers), ssh (session content), or all (both).
{
"name": "Block AWS Keys",
"pattern": "AKIA[0-9A-Z]{16}",
"action": "block",
"applies_to": "all"
}
Device Posture
Collects and enforces security posture of every node in the mesh. Posture data is gathered on every heartbeat and evaluated against policies to ensure compliance.
Data Collected
| Check | Linux | Windows |
|---|---|---|
| OS Version | /etc/os-release PRETTY_NAME | ver command |
| Firewall | iptables chain count | netsh advfirewall state |
| Disk Encryption | lsblk crypt type, /sys/block/*/dm/uuid | manage-bde -status C: |
| Auto Updates | unattended-upgrades, apt-daily.timer | wuauserv service state |
| Hostname | os.Hostname() | os.Hostname() |
| Architecture | runtime.GOARCH | runtime.GOARCH |
Posture Policies
{
"require_tags": ["tag:prod"],
"min_agent_version": "0.8.0",
"require_firewall": true,
"require_disk_encryption": true,
"action": "block"
}
Actions: warn (allow but flag in dashboard) or block (deny mesh access until compliant).
Certificate Auth
Mutual TLS as an alternative to token-based authentication. The control server's internal CA issues client certificates to nodes, which are then used for authentication on every API request.
How It Works
- Control server's internal CA issues client certificates to nodes
- Nodes present their client cert during TLS handshake
- Server verifies cert against CA, extracts CN as node identity
- No tokens needed -- the certificate IS the identity
Middleware wraps HTTP handlers to extract and verify client certs automatically. Certificate-authenticated requests bypass token validation entirely.
Key Management
Comprehensive key lifecycle management including generation, encryption at rest, trust-on-first-use pinning, and automatic rotation.
Key Generation
WireGuard keypair generated via wg genkey + wg pubkey (Curve25519).
Encryption at Rest
- Private keys encrypted with AES-256-GCM
- Encryption key derived from machine ID via SHA-256
- Linux: /etc/machine-id
- Windows: HKLM\SOFTWARE\Microsoft\Cryptography\MachineGuid
- Stored as private.key.enc (base64) -- the plaintext key never touches disk
- Automatic migration from plaintext to encrypted keys on first run
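The key-derivation step described above is a straight SHA-256 of the machine ID. A Python sketch of just that step (the actual AES-256-GCM sealing would use a crypto library; whether the agent strips trailing whitespace from /etc/machine-id is an assumption made here):

```python
import hashlib

def derive_key(machine_id: str) -> bytes:
    """Derive the 32-byte AES-256 key from the machine ID via SHA-256.
    Stripping surrounding whitespace (the /etc/machine-id trailing
    newline) is an illustrative assumption."""
    return hashlib.sha256(machine_id.strip().encode()).digest()
```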
Trust-on-First-Use (TOFU)
- First public key a node presents is pinned in the pinned_keys table
- Subsequent registrations must use a pinned key
- Prevents node impersonation attacks
Key Rotation
- Automatic every 30 days
- Agent generates a new keypair, notifies the server via /api/v1/rotate-key
- Server pins the new key, revokes the old key, logs the rotation
- All operations recorded in the key_rotations audit table
Zero downtime: Key rotation is seamless. The agent generates the new key, registers it with the server, and only switches to it after confirmation. Peers are updated on the next heartbeat cycle.
Mesh SSH
SSH into any mesh node using WireGuard identity for authentication -- no SSH keys or passwords needed. If you can reach a node on the mesh, you are already authenticated by WireGuard.
How It Works
- Agent starts an SSH server on the mesh IP (default port 2222)
- Authentication is mesh-based: if a connection arrives on the WireGuard interface, the peer is already authenticated by WG
- Authorization callback checks ACL/SSH policies to determine if the peer is allowed to SSH
- Supports interactive shells, exec commands, and PTY
# SSH to a mesh node by IP
$ ssh -p 2222 root@100.120.1.5

# SSH by mesh DNS name
$ ssh -p 2222 root@mynode.ixbond.local
SSH Policies
Control who can SSH where with tag-based policies:
{
"action": "accept",
"src": ["tag:admin"],
"dst": ["tag:prod"],
"ssh_users": ["root", "ubuntu"]
}
Session Tracking
Every SSH session is recorded in the database with: from/to nodes, user, status, duration, and bytes transferred.
SSH Session Recording
Captures SSH session output for audit and compliance. Recordings use the asciicast v2 format, compatible with the asciinema player for browser-based playback.
Recording Format
// asciicast v2 header
{"version": 2, "width": 80, "height": 24, "timestamp": 1712345678}

// Event entries: [time_offset, event_type, data]
[0.5, "o", "$ ls\r\n"]
[0.8, "o", "file1.txt file2.txt\r\n"]
[1.2, "o", "$ "]
Storage
Recording files are stored on disk in the data directory. Metadata (session ID, duration, from/to nodes) is stored in the database. Recordings can be streamed via the API.
File Transfer (Taildrop)
Peer-to-peer encrypted file transfer between mesh nodes. Files travel over the WireGuard tunnel, so they are always encrypted in transit.
Transfer Protocol
- Sender computes SHA-256 checksum of the file
- Connects to receiver's mesh IP on a random TCP port
- Sends JSON header: {"file_name": "doc.pdf", "file_size": 12345, "checksum": "sha256..."}
- Receiver accepts or rejects
- Sender streams raw bytes
- Receiver verifies SHA-256 checksum
- Receiver confirms: {"status": "ok"}
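The sender's side of steps 1 and 3 can be sketched in Python. The exact checksum string format is elided in the protocol description above ("sha256..."), so the hex encoding used here is an assumption, and the helper name is illustrative:

```python
import hashlib
import json
import pathlib

def transfer_header(path: str) -> str:
    """Build the JSON header sent before streaming the file bytes.
    Checksum encoding (plain hex SHA-256) is assumed for illustration."""
    p = pathlib.Path(path)
    data = p.read_bytes()
    return json.dumps({
        "file_name": p.name,
        "file_size": len(data),
        "checksum": hashlib.sha256(data).hexdigest(),
    })
```

On the receiving side, the same SHA-256 is recomputed over the streamed bytes and compared against this header before the `{"status": "ok"}` confirmation.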
File Collision Handling
If the destination file already exists, the receiver saves with incrementing suffixes: doc.pdf.1, doc.pdf.2, etc.
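The collision rule above amounts to probing for the first free suffix. A minimal Python sketch (illustrative helper, not agent code):

```python
import os

def collision_path(path: str) -> str:
    """Return path unchanged if free, else path.1, path.2, ... for the
    first suffix that doesn't already exist on disk."""
    if not os.path.exists(path):
        return path
    n = 1
    while os.path.exists(f"{path}.{n}"):
        n += 1
    return f"{path}.{n}"
```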
// Agent config for taildrop
{
  "enable_taildrop": true,
  "taildrop_dir": "/home/user/Downloads/ixbond"
}
# Send a file to another mesh node
$ ixbondctl send 100.120.1.5 ./report.pdf

# Send by node name
$ ixbondctl send office-server ./backup.tar.gz
Serve & Funnel
Serve proxies local services so they are accessible to mesh peers. Funnel exposes mesh services to the public internet.
Configuration
{
"serve": [
{
"proto": "http",
"port": 8080,
"target": "http://127.0.0.1:3000",
"funnel": false
},
{
"proto": "tcp",
"port": 5432,
"target": "tcp://127.0.0.1:5432",
"funnel": false
},
{
"proto": "http",
"port": 443,
"target": "http://127.0.0.1:8080",
"funnel": true
}
]
}
How Serve Works
- HTTP rules: httputil.ReverseProxy listens on meshIP:port, forwards to the local target
- TCP rules: raw TCP proxy with bidirectional io.Copy
How Funnel Works
- Control server runs a FunnelProxy that inspects the Host header
- Routes mynode.tenant.ixbond.com to the correct mesh backend
- Requires a funnel-enabled serve rule on the node
HTTPS Certificates
The control server acts as an internal Certificate Authority (CA), issuing TLS certificates for mesh nodes so they can serve HTTPS without manual certificate management.
CA Setup
- Ed25519 root key + self-signed root cert (CN="IX Bond Mesh CA", 10-year validity)
- Stored at {data_dir}/ca.key and {data_dir}/ca.crt
Per-Node Certificates
- RSA-2048 key + cert signed by the CA
- SAN includes: DNS name (nodename.ixbond.local) + mesh IP
- 1-year validity, auto-renewal when 30 days from expiry
Multi-Tenant
Each tenant (organization) gets a completely isolated mesh network. Designed for MSPs, enterprises with multiple business units, or SaaS deployments.
Isolation Guarantees
| Layer | Isolation |
|---|---|
| Network | Separate CIDR block (100.120.{prefix}.0/24) |
| Database | All queries include tenant_id filter |
| API | Auth middleware enforces tenant scoping |
| Peers | Nodes only receive peers from the same tenant |
| Tokens | Enrollment tokens scoped per tenant |
| Admin | Tenant admin token separate from super-admin |
Tenant Creation
$ curl -X POST https://mgmt.ixbond.com/api/v1/tenants \
    -H "Authorization: Bearer $SUPER_ADMIN_TOKEN" \
    -d '{"name": "Acme Corp", "slug": "acme"}'
Returns: tenant ID, admin token, mesh CIDR, and install commands.
Maximum of 254 tenants (prefix range 1-254), each with up to 254 nodes.
RBAC
5 built-in roles with granular permissions. Custom roles can be created via the API.
| Role | Permissions |
|---|---|
| admin | * (full access to all resources) |
| network-admin | nodes.*, acl.*, routes.*, dns.*, relay.* |
| security-admin | acl.*, posture.*, ssh.*, dlp.*, ztna.*, audit.read |
| viewer | nodes.read, acl.read, routes.read, metrics.read, audit.read |
| helpdesk | nodes.read, nodes.write, tokens.write, ssh.read |
Wildcard matching: nodes.* matches nodes.read, nodes.write, nodes.delete, etc.
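Wildcard permission matching can be sketched as a small prefix check. This is an illustrative Python model of the matching rule described above, not the server's implementation:

```python
def permits(granted: list[str], needed: str) -> bool:
    """True if any granted permission covers the needed one:
    exact match, the global '*', or a 'prefix.*' wildcard."""
    for perm in granted:
        if perm == "*" or perm == needed:
            return True
        if perm.endswith(".*") and needed.startswith(perm[:-1]):
            # 'nodes.*' -> prefix 'nodes.' covers nodes.read, nodes.write, ...
            return True
    return False
```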
Audit Log
Every state-changing operation is recorded with full context. The audit log provides a complete history of who did what, when, and from where.
Event Structure
{
"id": "evt_a1b2c3d4e5f6",
"tenant_id": "tenant_abc123",
"actor_id": "user_def456",
"actor_name": "Justin M",
"actor_type": "user",
"action": "acl.create",
"resource": "acl",
"resource_id": "acl_ghi789",
"detail": {"src": ["tag:web"], "dst": ["*"]},
"ip": "100.120.1.5",
"timestamp": "2026-04-08T14:30:00Z"
}
# View recent audit events
$ ixbondctl audit

# Filter by action type
$ curl "https://mgmt.ixbond.com/api/v1/audit?action=node&limit=50&offset=0" \
    -H "Authorization: Bearer $TOKEN"
Webhooks
HTTP POST notifications fired on mesh events. Webhooks enable real-time integration with monitoring systems, Slack, PagerDuty, and custom automation.
Supported Events
| Event | Description |
|---|---|
| node.online | Node came online |
| node.offline | Node went offline |
| node.register | New node registered |
| node.remove | Node was removed |
| acl.deny | ACL blocked traffic |
| posture.fail | Device failed posture check |
| ssh.session.start | SSH session opened |
| ssh.session.end | SSH session closed |
| transfer.complete | File transfer completed |
| * | All events |
Delivery
- Async (non-blocking) -- fires in background goroutines
- 5-second timeout per request
- X-Webhook-Signature header: HMAC-SHA256(secret, body), set if a secret is configured
// Webhook payload
{
  "event": "node.offline",
  "timestamp": "2026-04-08T14:30:00Z",
  "tenant_id": "tenant_abc123",
  "data": {
    "node_id": "node_def456",
    "node_name": "office-gateway",
    "mesh_ip": "100.120.1.1"
  }
}
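A receiver can verify the signature by recomputing the HMAC over the raw request body. A minimal sketch, assuming the header carries a hex-encoded digest (the encoding is not specified here):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Recompute HMAC-SHA256(secret, body) and compare against the
    X-Webhook-Signature header value in constant time."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Always compare with a constant-time function such as `hmac.compare_digest` rather than `==`, and always hash the raw bytes as received — re-serializing the JSON can change the byte sequence and break verification.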
Email Notifications
SMTP-based email alerts for mesh events. Receive immediate notifications when nodes go offline, posture checks fail, or SSH sessions start.
Configuration
{
"smtp_host": "smtp.gmail.com",
"smtp_port": 587,
"smtp_user": "alerts@company.com",
"smtp_pass": "app-password",
"from_address": "IX Bond <alerts@company.com>",
"enabled": true,
"events": ["node.offline", "posture.fail"]
}
Email format: HTML template with the event name, timestamp, and a key-value detail table. Configured per tenant.
SCIM Provisioning
SCIM 2.0 endpoint for automated user lifecycle management from identity providers (Azure AD, Okta, OneLogin, etc.).
How It Works
- Configure SCIM provisioning in Azure AD / Okta
- Point it at https://mgmt.ixbond.com/scim/v2/
- Users are automatically created/updated/deactivated as they change in the directory
- Each SCIM user maps to an IX Bond user linked to a tenant
Endpoints
Schema: urn:ietf:params:scim:schemas:core:2.0:User
SSO
Support for multiple identity providers. Users authenticate via their organization's SSO and are automatically mapped to IX Bond tenant users.
Supported Providers
| Type | Provider | Protocol |
|---|---|---|
| microsoft | Azure AD / Microsoft Entra | OAuth 2.0 |
| google | Google Workspace | OAuth 2.0 |
| oidc | Okta, Auth0, Keycloak, etc. | OpenID Connect |
Configuration Example
{
"name": "Company Okta",
"type": "oidc",
"client_id": "your-client-id",
"client_secret": "your-secret",
"issuer_url": "https://company.okta.com",
"scopes": ["openid", "profile", "email"]
}
White-Label
Customize the dashboard appearance per tenant for MSPs and enterprise deployments. Replace all IX Bond branding with your own.
Configurable Properties
| Property | Description |
|---|---|
| Company name | Replaces "IX Bond" throughout the UI |
| Logo URL | Custom logo image |
| Primary color | Hex color for buttons, links, accents |
| Accent color | Hex color for secondary elements |
| Favicon URL | Custom browser tab icon |
| Custom CSS | Arbitrary CSS injected into all pages |
| Hide branding | Remove "Powered by IX Bond" footer |
Implementation: CSS variable injection + HTML string replacement applied server-side before serving dashboard pages.
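A simplified sketch of that approach, using hypothetical branding field names that mirror the table above (the real templating pipeline will differ):

```python
def brand_page(html: str, branding: dict) -> str:
    """Apply white-label branding server-side: inject tenant colors as CSS
    variables into <head>, then replace the product name in the markup.
    Field names here are illustrative, not the actual schema."""
    css_vars = (
        "<style>:root{"
        f"--primary:{branding['primary_color']};"
        f"--accent:{branding['accent_color']};"
        "}</style>"
    )
    html = html.replace("</head>", css_vars + "</head>")
    return html.replace("IX Bond", branding["company_name"])
```

Because the substitution happens before the page is served, the browser only ever sees the tenant's branding — no client-side flash of the default theme.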
Route Approvals
When a node advertises subnets via advertise_routes, an admin must approve them before traffic flows. This prevents unauthorized network exposure.
Workflow
- Agent registers with advertise_routes: ["10.20.6.0/24"]
- Server creates a route approval request (status: pending)
- Admin reviews in dashboard or CLI
- Admin approves or denies
- If approved, the subnet appears in peer AllowedIPs on the next heartbeat
# List pending route approvals
$ ixbondctl routes

# Approve a route
$ ixbondctl approve-route ra_abc123

# Deny a route
$ ixbondctl deny-route ra_abc123
Ephemeral Nodes
Nodes that are automatically removed when they disconnect -- ideal for CI runners, containers, serverless functions, and auto-scaling groups.
Usage
Set "ephemeral": true in agent config or during registration.
$ ixbond-agent --server https://mgmt.ixbond.com \
    --auth-token "token" \
    --node-name "ci-runner-${BUILD_ID}" \
    --ephemeral
Cleanup
The server runs a goroutine every 5 minutes that deletes ephemeral nodes with last_seen older than 2 hours. This includes removing the node from all peer configurations, cleaning up iptables rules, and freeing the mesh IP.
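The selection logic for that cleanup pass can be sketched as a simple filter over node records (illustrative only — the server implements this in Go):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=2)

def stale_ephemeral(nodes: list[dict], now: datetime) -> list[dict]:
    """Return the ephemeral nodes whose last_seen is older than 2 hours;
    the cleanup pass deletes these and frees their mesh IPs."""
    return [
        n for n in nodes
        if n["ephemeral"] and now - n["last_seen"] > STALE_AFTER
    ]
```

Non-ephemeral nodes are never touched by this pass, no matter how long they have been offline — they must be removed explicitly.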
Expiring Access
Grant time-limited access to a node for contractors, auditors, or temporary staff. Once the time expires, access is automatically revoked.
// Grant 24-hour access
{
  "node_id": "node_abc123",
  "reason": "Contractor access for migration",
  "hours": 24
}
Expired access is automatically cleaned up by the server's background cleanup goroutine.
Web Dashboard
User-facing dashboard at /dashboard/ for managing your tenant's mesh network. Authenticated via Microsoft OAuth or token-based login.
Dashboard Pages
| Page | Purpose |
|---|---|
| Dashboard | Node grid, stats (total, online, exit nodes), mesh CIDR |
| Nodes | Table with status, tags, mesh IP, OS, version, actions |
| Enrollment Tokens | Create/manage one-time or limited-use tokens |
| ACL Rules | Create/manage access control rules |
| Posture | View device posture for all nodes |
| SSH Sessions | Monitor active and historical SSH sessions |
| Route Approvals | Approve/deny advertised subnet routes |
| DNS Config | Configure split DNS, search domains, nameservers |
| Metrics | Per-node connection quality data |
| Split Tunnel | Configure include/exclude routing rules |
| QoS | Set bandwidth limits per node |
| Kill Switch | Toggle network kill switch |
| Audit Log | Search and browse admin actions |
| Webhooks | Configure event notifications |
| Branding | Customize dashboard appearance |
| Onboarding | Guided setup wizard |
| Install Agent | One-click install commands (Linux, Windows, Pi) |
| Settings | Server info, version, TLS status |
Admin Console
Super-admin dashboard at /admin/ with cross-tenant management capabilities. Only accessible with a super-admin token.
Admin Pages
| Page | Purpose |
|---|---|
| Overview | Tenant count, total nodes, online count |
| Tenants | Create/manage/delete tenants |
| All Nodes | Cross-tenant node view with tenant filter |
| SSO Providers | Configure identity providers |
| Posture Policies | Create compliance requirements |
| SSH Policies | Control SSH access rules |
| ZTNA Rules | Per-app zero trust rules |
| DLP Rules | Content scanning patterns |
| QoS Policies | Tenant-wide bandwidth policies |
| Relay Servers | Manage multi-region relays |
| Plans | View available plan tiers |
| Licenses | Generate and manage license keys |
| Email Config | SMTP notification setup |
| Session Recordings | Browse SSH session recordings |
| Expiring Access | Grant/view time-limited access |
CLI Tool (ixbondctl)
21 commands for all management operations. The CLI connects to the control server API and requires a server URL and authentication token.
ixbondctl [--server URL] [--token TOKEN] <command>

# ─── Node Management ───
nodes / ls                            # List nodes
node <id>                             # Show node details
rm <id>                               # Remove node
tags <id> <tag1,tag2>                 # Set tags

# ─── Security ───
acl                                   # List ACL rules
acl-add <action> <src> <dst> <ports>  # Add rule
posture                               # List posture data
dlp                                   # List DLP rules
ztna                                  # List ZTNA rules
ssh-sessions                          # List SSH sessions

# ─── Network ───
routes                                # List route approvals
approve-route <id>                    # Approve route
deny-route <id>                       # Deny route
metrics <node-id>                     # Connection metrics
network-map                           # Mesh topology
wol <mac>                             # Wake-on-LAN

# ─── Administration ───
audit                                 # Audit log
webhooks                              # List webhooks
roles                                 # List RBAC roles
plans                                 # List plans
relays                                # List relays
version                               # Show version
Global Flags
| Flag | Env Var | Description |
|---|---|---|
| --server | IXBOND_SERVER | Control server URL |
| --token | IXBOND_TOKEN | Authentication token |
| --json | - | Output as JSON |
| --quiet | - | Suppress non-essential output |
REST API
98 endpoints organized into 27 groups. Full OpenAPI 3.0 specification available at /api/openapi.yaml.
Authentication
All API requests must include one of these authentication methods:
- Authorization: Bearer <token> header (most common)
- OAuth session cookie (ixbond_session) -- used by the web dashboard
- Mutual TLS client certificate -- see Certificate Auth
Token Types
| Type | Scope | Use Case |
|---|---|---|
| Super-admin | All tenants, all resources | Multi-tenant management |
| Tenant admin | Single tenant, all resources | Tenant administration |
| Enrollment | Single tenant, node registration only | Node enrollment |
| Node-specific | Single node, heartbeat + metrics | Agent authentication |
Base URL
https://mgmt.ixbond.com/api/v1/
Key Endpoint Groups
For the complete 98-endpoint listing, download the OpenAPI specification or visit /api/openapi.yaml on your control server.
Network Map
API endpoint returning full mesh topology for visualization. Powers the network map view in the dashboard.
Response
{
"nodes": [
{
"id": "node_abc123",
"name": "office-gw",
"mesh_ip": "100.120.1.1",
"tags": ["tag:office", "tag:gateway"],
"status": "online",
"os": "Ubuntu 22.04"
}
],
"relays": [
{
"id": "relay_def456",
"name": "us-east",
"region": "us-east-1",
"status": "online"
}
],
"mesh_cidr": "100.120.0.0/16"
}
$ ixbondctl network-map

# Output:
Mesh: 100.120.0.0/16 (6 nodes)

[online]  office-gw   (100.120.1.1) - tag:office, tag:gateway
[online]  dev-server  (100.120.1.2) - tag:dev
[offline] laptop-jm   (100.120.1.3) - tag:mobile
Kubernetes Operator
Auto-join pods to the IX Bond mesh with a Kubernetes operator. Supports custom resources and automatic sidecar injection.
Custom Resource Definitions
| CRD | Scope | Description |
|---|---|---|
| IXBondMesh | Cluster | Defines mesh connection (server URL, token) |
| IXBondNode | Namespace | Registers a pod as a mesh node |
Sidecar Injection
Label a namespace with ixbond.com/inject: "true" and the operator automatically adds the IX Bond agent as a sidecar container with NET_ADMIN + NET_RAW capabilities. An init container configures networking before the main container starts.
# Install the operator
$ kubectl apply -f https://get.ixbond.com/k8s.yaml

# Example IXBondMesh resource
apiVersion: ixbond.com/v1
kind: IXBondMesh
metadata:
  name: production
spec:
  serverUrl: https://mgmt.ixbond.com
  authTokenSecret: ixbond-token
  tags:
    - tag:k8s
    - tag:prod

# Enable sidecar injection for a namespace
$ kubectl label namespace default ixbond.com/inject=true
Terraform Provider
Infrastructure-as-code support for IX Bond. Manage tenants, nodes, ACL rules, and enrollment tokens declaratively.
Resources
| Resource | Description |
|---|---|
| ixbond_tenant | Create/manage tenants |
| ixbond_node | Register nodes |
| ixbond_acl_rule | Manage ACL rules |
| ixbond_enrollment_token | Create enrollment tokens |
Example Configuration
terraform {
  required_providers {
    ixbond = {
      source  = "ixbond/ixbond"
      version = "~> 0.1"
    }
  }
}

provider "ixbond" {
  server_url = "https://mgmt.ixbond.com"
  token      = var.ixbond_token
}

resource "ixbond_tenant" "prod" {
  name = "Production"
  slug = "prod"
}

resource "ixbond_acl_rule" "allow_web" {
  action = "allow"
  src    = ["tag:frontend"]
  dst    = ["tag:backend"]
  ports  = ["tcp:8080"]
}

resource "ixbond_enrollment_token" "servers" {
  tenant_id = ixbond_tenant.prod.id
  max_uses  = 10
  tags      = ["tag:server", "tag:prod"]
}
Go SDK
Programmatic access to the IX Bond API from Go applications.
Installation
$ go get github.com/ixbond/ixbond-go
Usage Example
package main

import (
    "fmt"

    "github.com/ixbond/ixbond-go"
)

func main() {
    // Initialize client
    client := ixbond.NewClient("https://mgmt.ixbond.com", "your-token")

    // List all nodes
    nodes, err := client.ListNodes()
    if err != nil {
        panic(err)
    }
    for _, n := range nodes {
        fmt.Printf("%s (%s) - %s\n", n.Name, n.MeshIP, n.Status)
    }

    // Set tags on a node
    client.SetTags("node_abc123", []string{"tag:web", "tag:prod"})

    // Create an ACL rule
    client.CreateACLRule(ixbond.ACLRule{
        Action: "allow",
        Src:    []string{"tag:web"},
        Dst:    []string{"tag:api"},
        Ports:  []string{"tcp:8080"},
    })
}
Python SDK
Programmatic access to the IX Bond API from Python applications.
Installation
$ pip install ixbond
Usage Example
from ixbond import IXBondClient

# Initialize client
client = IXBondClient("https://mgmt.ixbond.com", token="your-token")

# List all nodes
nodes = client.list_nodes()
for node in nodes:
    print(f"{node.name} ({node.mesh_ip}) - {node.status}")

# Set tags
client.set_tags("node_abc123", ["tag:web", "tag:prod"])

# Create ACL rule
client.create_acl_rule(
    action="allow",
    src=["tag:web"],
    dst=["tag:api"],
    ports=["tcp:8080"],
)

# Get connection metrics
metrics = client.get_metrics("node_abc123")
for peer in metrics.peers:
    print(f"  Peer {peer.id}: {peer.latency_ms}ms, {peer.tx_bytes}B tx")
CI/CD Pipeline
GitLab CI pipeline with two stages for building and deploying IX Bond.
Build Stage
- golang:1.22-alpine image
- Compiles all 4 binaries (ixbond-server, ixbond-agent, ixbond-relay, ixbondctl) with version injection
- Build artifacts stored for 7 days
Deploy Stage
- Uploads binaries + dashboards via SSH
- Sets permissions, restarts systemd services
- Verifies health after restart
- Auto-triggers on version tags (e.g., v0.8.0)
- Manual trigger from main branch
# .gitlab-ci.yml
stages:
  - build
  - deploy

build:
  stage: build
  image: golang:1.22-alpine
  script:
    - CGO_ENABLED=0 go build -ldflags "-X main.version=$CI_COMMIT_TAG" -o ixbond-server ./cmd/server
    - CGO_ENABLED=0 go build -ldflags "-X main.version=$CI_COMMIT_TAG" -o ixbond-agent ./cmd/agent
    - CGO_ENABLED=0 go build -o ixbond-relay ./cmd/relay
    - CGO_ENABLED=0 go build -o ixbondctl ./cmd/ctl
  artifacts:
    paths: [ixbond-server, ixbond-agent, ixbond-relay, ixbondctl]
    expire_in: 7 days

deploy:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_TAG =~ /^v/'
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual
Connection Metrics
Per-peer connection quality data collected on every heartbeat. Metrics are stored for 7 days and automatically cleaned up.
Metrics Collected
| Metric | Unit | Description |
|---|---|---|
| Peer latency | ms | Round-trip time to each peer |
| Packet loss | % | Percentage of lost packets |
| TX bytes | bytes | Total bytes transmitted |
| RX bytes | bytes | Total bytes received |
| Uptime | seconds | Time since node joined mesh |
| Handshake age | seconds | Seconds since last WireGuard handshake |
Retention: 7 days (automatic cleanup via hourly goroutine).
$ ixbondctl metrics node_abc123

# Output:
Node: office-gw (100.120.1.1)
Uptime: 14d 6h 32m

Peer         Latency   TX       RX       Handshake
dev-server   4ms       1.2 GB   890 MB   12s ago
laptop-jm    28ms      340 MB   156 MB   45s ago
cloud-proxy  62ms      5.6 GB   3.2 GB   8s ago
Usage Metering
Per-tenant usage tracking for billing and capacity planning. Metrics are aggregated monthly.
Tracked per Month
| Metric | Description |
|---|---|
| Node count | Number of active nodes in the tenant |
| Total TX bytes | Total bytes transmitted across all nodes |
| Total RX bytes | Total bytes received across all nodes |
| SSH sessions | Number of SSH sessions initiated |
| File transfers | Number of taildrop transfers |
Plan Tiers & Licensing
Three built-in plans with escalating features. License keys control plan activation per tenant.
Plans
| Plan | Max Nodes | Features | Price |
|---|---|---|---|
| FREE | 5 | Mesh, SOCKS5, DNS | $0/mo |
| PRO | 50 | + ACL, SSH, taildrop, posture, funnel, split tunnel | $29.99/mo |
| ENTERPRISE | Unlimited | + ZTNA, DLP, SCIM, audit, webhooks, white-label, session recording, multi-relay | $99.99/mo |
License Keys
Format: IXBOND-XXXX-XXXX-XXXX-XXXX (cryptographically random). Generated by super-admins, activated by tenant admins.
# Generate a license key (super-admin)
$ curl -X POST https://mgmt.ixbond.com/api/v1/licenses/generate \
    -H "Authorization: Bearer $SUPER_ADMIN_TOKEN" \
    -d '{"plan": "enterprise"}'

# Activate a license (tenant admin)
$ curl -X POST https://mgmt.ixbond.com/api/v1/licenses/activate \
    -H "Authorization: Bearer $TENANT_TOKEN" \
    -d '{"key": "IXBOND-A1B2-C3D4-E5F6-G7H8"}'
The server validates the license on every request, checking node limits and feature flags against the active plan.
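Generating keys in the documented format with a CSPRNG can be sketched as follows. This is illustrative only — the server's actual key-generation scheme may embed additional structure:

```python
import re
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits
KEY_RE = re.compile(r"^IXBOND(-[A-Z0-9]{4}){4}$")

def generate_key() -> str:
    """Produce a key in the IXBOND-XXXX-XXXX-XXXX-XXXX format using
    cryptographically secure randomness (secrets, not random)."""
    groups = ["".join(secrets.choice(ALPHABET) for _ in range(4))
              for _ in range(4)]
    return "IXBOND-" + "-".join(groups)
```

The regex doubles as a cheap client-side format check before an activation request is sent.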
Onboarding Wizard
Guided setup for new tenants, shown in the dashboard after first login. Walks through the essential configuration steps to get a mesh network running.
Wizard Steps
- Create tenant (automatic on first OAuth login)
- Add first node -- shows install commands for your platform
- Create enrollment token -- for adding more nodes
- Configure ACL rules -- set up basic access policies
- Set up DNS -- configure split DNS and search domains
- Enable exit node -- optionally configure internet breakout
Tracks completion state per tenant. Can be dismissed at any time and resumed from the dashboard.
Skip ahead: If you are familiar with IX Bond, you can skip the wizard entirely by clicking "Skip Setup" and configuring everything manually via the CLI or API.