
Spuff Technical Architecture

This document provides a deep dive into spuff's architecture, protocols, data flows, and internal workings. It is intended for engineers who want to understand or contribute to the project.

System Overview

Spuff is a CLI tool that orchestrates ephemeral development VMs across cloud providers. The system consists of three main runtime components: the spuff CLI on the developer's machine, the spuff-agent daemon on the VM, and the cloud-init configuration that bootstraps the VM.


Components

CLI (spuff)

The main binary that users interact with. Built with:

  • clap for argument parsing

  • tokio for async runtime

  • ratatui for terminal UI

  • reqwest for HTTP client (provider APIs)

  • chrondb for local state (Git-backed document store)

Key modules:

  • src/cli/commands/ - Command implementations (up, down, ssh, status, volume, etc.)

  • src/provider/ - Cloud provider abstraction layer:

    • mod.rs - Provider trait and common types (ProviderInstance, InstanceStatus, Snapshot)

    • config.rs - Provider-agnostic configuration (InstanceRequest, ImageSpec, ProviderTimeouts, ProviderType)

    • error.rs - Structured error types (ProviderError, ProviderResult)

    • registry.rs - Provider factory registry (ProviderFactory, ProviderRegistry)

    • digitalocean.rs - DigitalOcean implementation

  • src/connector/ssh.rs - SSH/SCP operations

  • src/environment/cloud_init.rs - Cloud-init template generation

  • src/state.rs - ChronDB state management (LocalInstance, StateDb)

  • src/volume/ - SSHFS-based volume mounting:

    • config.rs - Volume configuration and path resolution

    • drivers/sshfs.rs - SSHFS mount/unmount operations

    • state.rs - Local mount state tracking

  • src/tui/ - Terminal UI components

Agent (spuff-agent)

A lightweight daemon running on the VM that provides:

  • System metrics collection (CPU, memory, disk)

  • Idle time tracking for auto-destruction

  • Bootstrap status reporting

  • Remote command execution (experimental)

Built with:

  • axum for HTTP server

  • sysinfo for system metrics

  • tokio for async runtime

Key modules:

  • src/agent/main.rs - Entry point and server setup

  • src/agent/routes.rs - HTTP API endpoints

  • src/agent/metrics.rs - System metrics collection

Cloud-Init

YAML configuration that bootstraps the VM:

  • User creation and SSH key injection

  • Package installation

  • Tool installation (Docker, devbox, etc.)

  • Agent installation and startup

Generated from Tera templates in src/environment/cloud_init.rs.
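To make the generation step concrete, here is a minimal sketch of rendering a cloud-init document with Tera. The template string, context keys (username, ssh_public_key, packages), and the helper name are hypothetical; the real templates in src/environment/cloud_init.rs are considerably larger.

```rust
use tera::{Context, Tera};

// Hypothetical, heavily trimmed template; the real templates also cover
// Docker, the agent service, and the two-phase bootstrap scripts.
const CLOUD_INIT_TEMPLATE: &str = r#"#cloud-config
users:
  - name: {{ username }}
    ssh_authorized_keys:
      - {{ ssh_public_key }}
packages:
{% for pkg in packages %}  - {{ pkg }}
{% endfor %}"#;

fn render_cloud_init(username: &str, ssh_public_key: &str, packages: &[&str]) -> tera::Result<String> {
    let mut ctx = Context::new();
    ctx.insert("username", username);
    ctx.insert("ssh_public_key", ssh_public_key);
    ctx.insert("packages", packages);
    // One-off render without building a persistent Tera instance; no HTML escaping.
    Tera::one_off(CLOUD_INIT_TEMPLATE, &ctx, false)
}
```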


Protocol Stack

Spuff uses four distinct communication protocols:

| Protocol | Use Case | Port | Encryption |
|----------|----------|------|------------|
| HTTPS | Cloud Provider API | 443 | TLS 1.2+ |
| SSH | Remote shell & SCP | 22 | SSH protocol |
| Mosh | Interactive shell (preferred) | UDP 60000-61000 | AES-128 |
| HTTP | Agent API | 7575 | None (localhost) |

Protocol Flow Diagram


Data Flow

Instance Creation (spuff up)

Instance Destruction (spuff down)


Cloud Provider Integration

DigitalOcean API

Located in src/provider/digitalocean.rs.

Base URL: https://api.digitalocean.com/v2

Authentication: every request sends the API token as a bearer token in the Authorization header.
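As a rough sketch of such a call using reqwest (the endpoint shown is the SSH-key listing used below; client setup and error handling are simplified):

```rust
use reqwest::Client;

// Illustrative: list the account's SSH keys with bearer-token authentication.
async fn list_ssh_keys(token: &str) -> reqwest::Result<serde_json::Value> {
    Client::new()
        .get("https://api.digitalocean.com/v2/account/keys")
        .bearer_auth(token) // sends `Authorization: Bearer <token>`
        .send()
        .await?
        .error_for_status()? // turn 4xx/5xx responses into errors
        .json()
        .await
}
```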

Endpoints Used:

| Method | Endpoint | Purpose |
|--------|----------|---------|
| POST | /droplets | Create instance |
| GET | /droplets/{id} | Get instance status |
| DELETE | /droplets/{id} | Destroy instance |
| GET | /droplets?tag_name=spuff | List spuff instances |
| GET | /account/keys | Get SSH key IDs |
| POST | /droplets/{id}/actions | Create snapshot |
| GET | /snapshots?resource_type=droplet | List snapshots |
| DELETE | /snapshots/{id} | Delete snapshot |
| GET | /actions/{id} | Poll action status |

Create Droplet Request:
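A sketch of the request body sent to POST /droplets; the concrete name, region, size, and image values are placeholders, while user_data carries the generated cloud-init document and the spuff tag enables the tag-filtered listing above.

```rust
use serde_json::json;

// Illustrative body for POST /v2/droplets; field values are placeholders.
fn create_droplet_body(cloud_init: &str, ssh_key_ids: &[u64]) -> serde_json::Value {
    json!({
        "name": "spuff-dev",
        "region": "nyc3",
        "size": "s-2vcpu-4gb",
        "image": "ubuntu-24-04-x64",
        "ssh_keys": ssh_key_ids,   // IDs resolved via GET /account/keys
        "tags": ["spuff"],         // lets GET /droplets?tag_name=spuff find it later
        "user_data": cloud_init,   // cloud-init YAML from the Tera templates
    })
}
```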

Create Droplet Response:

Instance Status Polling: The CLI polls GET /droplets/{id} every 5 seconds (see the sketch after this list) until:

  • status changes from "new" to "active"

  • networks.v4 contains a public IP address
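A sketch of that loop, treating the droplet as raw JSON; the status, networks.v4, type, and ip_address fields follow DigitalOcean's droplet schema, while the helper itself is hypothetical.

```rust
use std::time::Duration;
use serde_json::Value;

// Hypothetical: poll GET /droplets/{id} until the droplet is active and has a public IPv4.
async fn wait_for_active(client: &reqwest::Client, token: &str, id: u64) -> reqwest::Result<String> {
    loop {
        let body: Value = client
            .get(format!("https://api.digitalocean.com/v2/droplets/{id}"))
            .bearer_auth(token)
            .send()
            .await?
            .json()
            .await?;

        let droplet = &body["droplet"];
        if droplet["status"] == "active" {
            // Look for the first public IPv4 address in networks.v4.
            if let Some(ip) = droplet["networks"]["v4"]
                .as_array()
                .into_iter()
                .flatten()
                .find(|n| n["type"] == "public")
                .and_then(|n| n["ip_address"].as_str())
            {
                return Ok(ip.to_string());
            }
        }
        tokio::time::sleep(Duration::from_secs(5)).await; // poll interval
    }
}
```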


SSH/Mosh/SCP Communication

Located in src/connector/ssh.rs.

Mosh Support

Spuff automatically uses mosh (Mobile Shell) for interactive connections when available locally. Mosh provides:

  • Better responsiveness over high-latency connections

  • Seamless roaming between networks

  • Connection resilience (survives sleep/wake, network changes)

How it works:

  1. CLI checks if mosh is installed locally (which mosh)

  2. If available, uses mosh for interactive sessions

  3. Falls back to SSH if mosh is not installed locally

The remote server always has mosh-server installed via cloud-init.

Note: Non-interactive operations (SCP, remote commands) always use SSH regardless of mosh availability.
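A sketch of that decision, assuming the availability check shells out to which mosh; the actual logic in src/connector/ssh.rs may differ in detail.

```rust
use std::process::Command;

// Returns true if a `mosh` binary is on the local PATH (i.e. `which mosh` succeeds).
fn mosh_available() -> bool {
    Command::new("which")
        .arg("mosh")
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}

// Hypothetical: choose the interactive command, falling back to plain SSH.
fn interactive_command(user: &str, host: &str) -> Command {
    let target = format!("{user}@{host}");
    if mosh_available() {
        let mut cmd = Command::new("mosh");
        cmd.arg(target);
        cmd
    } else {
        let mut cmd = Command::new("ssh");
        cmd.arg("-A"); // agent forwarding, described below
        cmd.arg(target);
        cmd
    }
}
```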

SSH Operations

All SSH operations use the system's ssh and scp binaries with consistent options:
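The exact option set lives in src/connector/ssh.rs; the sketch below shows a plausible shared builder (identity file, port, agent forwarding, relaxed host-key prompts for ephemeral hosts) and should be read as an assumption, not a verbatim copy.

```rust
use std::path::Path;
use std::process::Command;

// Hypothetical: every ssh invocation is built from one shared option set.
fn ssh_command(host: &str, user: &str, key: &Path, port: u16) -> Command {
    let mut cmd = Command::new("ssh");
    cmd.arg("-i").arg(key);               // private key
    cmd.arg("-p").arg(port.to_string());  // SSH port
    cmd.arg("-A");                        // agent forwarding
    // Assumed policy: avoid interactive host-key prompts for freshly created VMs.
    cmd.arg("-o").arg("StrictHostKeyChecking=accept-new");
    cmd.arg("-o").arg("UserKnownHostsFile=/dev/null");
    cmd.arg(format!("{user}@{host}"));
    cmd
}
```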

SSH Functions

  • wait_for_ssh(host, port, timeout) - waits until the SSH port accepts TCP connections

  • wait_for_ssh_login(host, config, timeout) - waits until an actual SSH login succeeds

  • run_command(host, config, command) - runs a one-off remote command over SSH

  • scp_upload(host, config, local_path, remote_path) - copies a local file to the VM via SCP

  • connect(host, config) - opens an interactive shell (mosh when available, SSH otherwise)

SSH Agent Forwarding

The -A flag enables SSH agent forwarding, allowing:

  • Git operations with SSH URLs on the VM

  • Access to private repositories without copying keys

  • Chaining SSH connections through the VM


Agent HTTP API

Located in src/agent/routes.rs.

Server: Axum on 127.0.0.1:7575 (localhost only)

Authentication: authenticated endpoints expect the agent token in the X-Spuff-Token header (see Security Model).

If SPUFF_AGENT_TOKEN env var is not set, authentication is disabled.
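A minimal sketch of that check in axum terms, comparing the X-Spuff-Token header against the token read from the environment at startup; names and structure are illustrative.

```rust
use axum::http::{HeaderMap, StatusCode};

// Illustrative: authenticated handlers call this before doing any work.
// `expected` is the value of SPUFF_AGENT_TOKEN captured at startup, if any.
fn authorize(headers: &HeaderMap, expected: Option<&str>) -> Result<(), StatusCode> {
    let Some(expected) = expected else {
        return Ok(()); // no token configured: authentication disabled
    };
    match headers.get("x-spuff-token").and_then(|v| v.to_str().ok()) {
        Some(token) if token == expected => Ok(()),
        _ => Err(StatusCode::UNAUTHORIZED),
    }
}
```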

Endpoints

GET /health (public)

GET /status (authenticated)

Bootstrap status values:

  • "unknown" - Status file doesn't exist

  • "running" - Bootstrap in progress

  • "ready" - Bootstrap complete

  • "failed" - Bootstrap encountered errors

GET /metrics (authenticated)

GET /processes (authenticated)

Returns top 10 processes by CPU usage.

POST /exec (authenticated)

Execute a command on the remote environment. Used by spuff exec for non-interactive commands.

GET /exec-log?lines=50 (authenticated)

Returns persistent log of all commands executed via /exec. Useful for auditing and debugging.

The stdout and stderr fields are truncated to 500 characters and have newlines escaped as \n.

POST /heartbeat (authenticated)

Resets idle timer. Returns current timestamp.

GET /logs?file=/var/log/syslog&lines=100 (authenticated)

Returns last N lines from log files in /var/log/.

GET /cloud-init (authenticated)
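For orientation, the route set above is wired into a single axum Router; the sketch below shows a representative subset with placeholder handlers, not the actual code in src/agent/routes.rs.

```rust
use axum::{routing::{get, post}, Router};

// Placeholder handlers so the sketch is self-contained; the real handlers
// return the JSON bodies described above.
async fn health() -> &'static str { "ok" }
async fn status() -> &'static str { "{}" }
async fn heartbeat() -> &'static str { "{}" }

// Hypothetical wiring of a subset of the agent's endpoints.
fn router() -> Router {
    Router::new()
        .route("/health", get(health))        // public
        .route("/status", get(status))        // authenticated
        .route("/heartbeat", post(heartbeat)) // authenticated
}
```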


Cloud-Init Provisioning

Template Structure

Cloud-init YAML is generated from Tera templates in src/environment/cloud_init.rs.

Two-Phase Bootstrap

To minimize time to first SSH login, bootstrap is split into two phases:

Phase 1: Synchronous (bootstrap-sync.sh)

  • Runs during cloud-init

  • Installs critical components:

    • Docker

    • Basic shell tools (fzf, bat, eza)

    • Creates directory structure

  • SSH login is blocked until this completes

Phase 2: Asynchronous (bootstrap-async.sh)

  • Runs in background via nohup

  • Installs heavier components:

    • devbox/nix

    • Node.js

    • Claude Code CLI

    • spuff-agent download

    • Dotfiles clone

  • Progress tracked via /opt/spuff/bootstrap.status

Status File

The async bootstrap writes its status to /opt/spuff/bootstrap.status:

This file is read by the agent's /status endpoint.
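The file's exact format isn't reproduced here; as a rough sketch, and purely as an assumption about the parsing, the agent could map its contents onto the four status values like this:

```rust
use std::fs;

// Illustrative: derive the bootstrap status reported by GET /status.
// The real file written by bootstrap-async.sh may carry additional detail.
fn bootstrap_status() -> String {
    match fs::read_to_string("/opt/spuff/bootstrap.status") {
        Err(_) => "unknown".to_string(), // file doesn't exist yet
        Ok(contents) => {
            let first = contents.lines().next().unwrap_or("").trim().to_lowercase();
            match first.as_str() {
                "running" | "ready" | "failed" => first,
                _ => "unknown".to_string(),
            }
        }
    }
}
```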


State Management

Located in src/state.rs.

Database: ChronDB at ~/.spuff/chrondb/ (Git-backed document store)

Storage Structure

Documents are stored as JSON with key-based addressing:

  • instance:{id} — Instance documents (one per provisioned VM)

  • meta:active — Pointer to the currently active instance ({"instance_id": "..."})

Types

LocalInstance - Instance information stored locally (different from ProviderInstance which represents the provider's view):

Operations

Instance Lifecycle


Volume Management

Located in src/volume/.

Spuff provides SSHFS-based volume mounting for bidirectional file synchronization between the local machine and the remote VM.

Architecture

Components

VolumeConfig (src/volume/config.rs)

Handles volume configuration parsing and path resolution:

SshfsDriver (src/volume/drivers/sshfs.rs)

Manages SSHFS mount/unmount operations:
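A sketch of what a mount invocation might look like; the option set (IdentityFile, reconnect, ServerAliveInterval) is a plausible assumption rather than the driver's exact flags.

```rust
use std::path::Path;
use std::process::Command;

// Hypothetical: mount `remote_dir` from the VM onto `local_dir` via sshfs.
fn sshfs_mount(
    user: &str,
    host: &str,
    remote_dir: &str,
    local_dir: &Path,
    key: &Path,
) -> std::io::Result<std::process::ExitStatus> {
    Command::new("sshfs")
        .arg(format!("{user}@{host}:{remote_dir}"))
        .arg(local_dir)
        .arg("-o").arg(format!("IdentityFile={}", key.display()))
        .arg("-o").arg("reconnect")              // survive transient network drops
        .arg("-o").arg("ServerAliveInterval=15") // keep the SSH transport alive
        .status()
}
```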

VolumeState (src/volume/state.rs)

Tracks mounted volumes locally:

Data Flow

Mount Flow (spuff up / spuff volume mount):

Unmount Flow (spuff down / spuff volume unmount):

Platform-Specific Handling

macOS:

  • Requires macFUSE installation

  • Force unmount sequence: umount -f, then diskutil unmount force

  • SSHFS installed via Homebrew: brew install macfuse sshfs

Linux:

  • Uses native FUSE support

  • Force unmount sequence: fusermount -uz, then umount -l (lazy unmount)

  • SSHFS installed via package manager: apt install sshfs

SSH Wrapper for Paths with Spaces

SSHFS requires special handling for SSH key paths containing spaces. The driver creates a temporary wrapper script:

This wrapper is passed to SSHFS via the -o ssh_command= option.
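A sketch of creating such a wrapper, assuming it simply execs ssh with the quoted key path and forwards the remaining arguments; the file location and permission handling are illustrative.

```rust
use std::fs;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;
use std::path::{Path, PathBuf};

// Hypothetical: write a tiny shell wrapper so the key path survives word splitting,
// then point SSHFS at it with `-o ssh_command=<wrapper>`.
fn write_ssh_wrapper(key: &Path) -> std::io::Result<PathBuf> {
    let path = std::env::temp_dir().join("spuff-ssh-wrapper.sh");
    let mut file = fs::File::create(&path)?;
    writeln!(file, "#!/bin/sh")?;
    writeln!(file, "exec ssh -i \"{}\" \"$@\"", key.display())?;
    fs::set_permissions(&path, fs::Permissions::from_mode(0o700))?; // make it executable
    Ok(path)
}
```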


Security Model

Authentication Layers

  1. Cloud Provider API

    • Bearer token authentication

    • Token stored in env var or config file

    • Config file permissions: 0600

  2. SSH

    • Ed25519 key pair (or RSA)

    • Public key registered with provider

    • Private key protected by filesystem permissions

    • Optional passphrase (requires ssh-agent)

  3. Agent API

    • Token-based authentication via X-Spuff-Token header

    • Server binds to localhost only (127.0.0.1)

    • Token passed via env var to agent service

Network Security

VM Security Hardening

From cloud-init:

  • Root SSH login disabled (disable_root: true)

  • Password authentication disabled (ssh_pwauth: false)

  • User password locked (lock_passwd: true)

  • Non-root user with sudo access

  • Only SSH key authentication allowed

Sensitive Data Handling

| Data | Storage | Protection |
|------|---------|------------|
| API Token | env var or config.yaml | File permissions (0600) |
| SSH Private Key | ~/.ssh/id_* | File permissions (0600) |
| Agent Token | env var | Process environment |
| State DB | ~/.spuff/chrondb/ | File permissions |


Error Handling

Structured Provider Errors

Provider operations use structured ProviderError types for proper error handling and recovery strategies:

SSH Errors

The system provides clear error messages for common SSH issues:

Provider API Errors

Provider API calls use structured errors that can be matched for specific handling:

Timeout Handling

Operations have configurable timeouts:

| Operation | Default Timeout |
|-----------|-----------------|
| Provider API calls | 30s |
| SSH port wait | 300s (5 min) |
| SSH login wait | 120s (2 min) |
| Cloud-init wait | 600s (10 min) |
| Agent exec command | 30s |
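These defaults map naturally onto the ProviderTimeouts type mentioned earlier; the field names in this sketch are assumptions, only the values mirror the table above.

```rust
use std::time::Duration;

// Illustrative shape for ProviderTimeouts (src/provider/config.rs).
pub struct ProviderTimeouts {
    pub api_call: Duration,
    pub ssh_port: Duration,
    pub ssh_login: Duration,
    pub cloud_init: Duration,
}

impl Default for ProviderTimeouts {
    fn default() -> Self {
        Self {
            api_call: Duration::from_secs(30),
            ssh_port: Duration::from_secs(300),
            ssh_login: Duration::from_secs(120),
            cloud_init: Duration::from_secs(600),
        }
    }
}
```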


Extending Spuff

Adding a New Provider

Spuff uses a Registry Pattern for providers, making it easy to add new cloud providers without modifying existing code.

Step 1: Create Provider Module

Create src/provider/<name>.rs (e.g., src/provider/hetzner.rs).

Step 2: Implement the Provider Trait
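A skeleton of what this step might look like, using Hetzner as the running example. The method names and signatures below are assumptions; the real Provider trait lives in src/provider/mod.rs, and the referenced types are described under "Key Types for Provider Implementation" below.

```rust
use async_trait::async_trait;

use crate::provider::{Provider, ProviderInstance, ProviderResult};
use crate::provider::config::InstanceRequest;

pub struct HetznerProvider {
    client: reqwest::Client,
    token: String,
}

// Skeleton only: method names are assumed and must be matched to the real trait.
#[async_trait]
impl Provider for HetznerProvider {
    async fn create_instance(&self, request: &InstanceRequest) -> ProviderResult<ProviderInstance> {
        // Translate the provider-agnostic request into Hetzner API calls here.
        todo!()
    }

    async fn get_instance(&self, id: &str) -> ProviderResult<ProviderInstance> {
        todo!()
    }

    async fn destroy_instance(&self, id: &str) -> ProviderResult<()> {
        todo!()
    }
}
```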

Step 3: Implement the ProviderFactory Trait

Step 4: Register the Provider

In src/provider/registry.rs, add your factory to the default registration:

Step 5: Add Provider Type

In src/provider/config.rs, add your provider to the enum:
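Sketched below with Hetzner as the running example; the variant naming is assumed.

```rust
// src/provider/config.rs (sketch): extend the provider enum.
pub enum ProviderType {
    DigitalOcean,
    Hetzner, // new variant for the provider added above
}
```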

Key Types for Provider Implementation

InstanceRequest - Provider-agnostic instance configuration:

ImageSpec - Provider-agnostic image specification:

ProviderInstance - Instance returned by provider operations:

ProviderError - Structured error types for proper handling:
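The authoritative definitions live under src/provider/; as a rough guide only, their shapes might resemble the following, with every field and variant name here being an assumption based on the descriptions above.

```rust
use std::time::Duration;

// Illustrative shapes; consult src/provider/ for the real definitions.

pub struct InstanceRequest {
    pub name: String,
    pub region: String,
    pub size: String,
    pub image: ImageSpec,
    pub ssh_keys: Vec<String>,
    pub user_data: String, // rendered cloud-init document
}

pub enum ImageSpec {
    Slug(String),     // e.g. a distribution image identifier
    Snapshot(String), // restore from a provider snapshot
}

pub struct ProviderInstance {
    pub id: String,
    pub name: String,
    pub status: InstanceStatus,
    pub public_ip: Option<String>,
}

pub enum InstanceStatus {
    Provisioning,
    Active,
    Destroyed,
}

pub enum ProviderError {
    Auth(String),
    NotFound(String),
    RateLimited { retry_after: Option<Duration> },
    Api { status: u16, message: String },
    Network(String),
}

pub type ProviderResult<T> = Result<T, ProviderError>;
```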

Adding Agent Endpoints

  1. Add route in src/agent/routes.rs:
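A sketch of what a new route and handler could look like; the /uptime endpoint and its response shape are invented purely for illustration.

```rust
use axum::{routing::get, Json, Router};
use serde::Serialize;

#[derive(Serialize)]
struct Uptime {
    seconds: u64,
}

// Hypothetical new handler returning JSON, in the style of the existing endpoints.
async fn uptime() -> Json<Uptime> {
    Json(Uptime { seconds: 0 }) // a real implementation would read system uptime
}

// Register it alongside the existing routes.
fn with_uptime(router: Router) -> Router {
    router.route("/uptime", get(uptime))
}
```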


Debugging

Enable Debug Logging

Inspect Cloud-Init

Agent Status

Local State
