
Netclode

Self-hosted coding agent with microVM sandboxes and a native iOS and macOS app.

[Screenshots: Netclode iOS App · Netclode macOS App]

Why I built this

I wanted a self-hosted Claude Code environment I can use from my phone, with the UX I actually want. The existing cloud coding agents were a bit underwhelming when I tried them, so I built my own!

I wrote a blog post about how it works: Building a self-hosted cloud coding agent.

What makes it nice

  • Full yolo mode - Docker, root access, install anything. The microVM handles isolation
  • Local inference with Ollama - Run models on your own GPU, nothing leaves your machine
  • Tailnet integration - Preview URLs, port forwarding, access to my infra through Tailscale
  • JuiceFS for storage - Storage offloaded to S3. Paused sessions cost nothing but storage
  • Live terminal access - Drop into the sandbox shell from the app
  • Session history - Auto-snapshots after each turn. Roll back workspace and chat to any previous point
  • GitHub integration - Clone private repos, push commits, create PRs. Per-repo scoped tokens generated on demand via a GitHub App
  • GitHub Bot - @mention on PRs/issues to spin up a sandbox and get a response as a comment. Auto-reviews dependency update PRs from Dependabot/Renovate
  • Multiple SDKs & providers - Claude Code, OpenCode, Copilot, Codex SDKs with Anthropic, OpenAI, Mistral, Ollama, and more
  • Secrets can't be stolen - API keys never enter the sandbox. A proxy injects them on the fly for allowed hosts

How it works

flowchart LR
    subgraph CLIENT["Client"]
        APP["iOS / macOS<br/><sub>SwiftUI</sub>"]
    end
    subgraph VPS["VPS - k3s"]
        TS["Tailscale Ingress<br/><sub>TLS - HTTP/2</sub>"]
        CP["Control Plane<br/><sub>Go</sub>"]
        BOT["GitHub Bot<br/><sub>Go</sub>"]
        REDIS[("Redis<br/><sub>Sessions</sub>")]
        POOL["agent-sandbox<br/><sub>Warm Pool</sub>"]
        JFS[("JuiceFS")]
        subgraph SANDBOX["Sandbox - Kata VM<br/><sub>Cloud Hypervisor</sub>"]
            AGENT["Agent<br/><sub>Claude / OpenCode / Copilot / Codex SDK</sub>"]
            DOCKER["Docker"]
        end
    end
    GH["GitHub Webhooks"]
    S3[("S3")]
    LLM["LLM APIs"]

    APP <-->|"Connect RPC<br/>HTTPS/H2"| TS
    TS <-->|"Connect RPC<br/>h2c"| CP
    GH -->|"Webhooks"| BOT
    BOT <-->|"Connect RPC<br/>h2c"| CP
    CP <-->|"Redis Streams"| REDIS
    CP <-->|"Connect RPC<br/>gRPC/h2c"| AGENT
    POOL -.->|"allocate"| SANDBOX
    JFS <--> SANDBOX
    JFS <-->|"POSIX on S3"| S3
    AGENT --> LLM

The control plane grabs a pre-booted Kata VM from the warm pool (so it's instant), forwards prompts to the agent SDK inside, and streams responses back. Redis persists events so clients can reconnect without losing anything.
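The warm-pool trick is simple to sketch: keep a buffered channel of pre-booted sandboxes, pop one on demand, and boot a replacement in the background. `Sandbox`, `WarmPool`, and the ID scheme below are illustrative stand-ins, not the control plane's real types.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Sandbox stands in for a pre-booted Kata microVM; the real control
// plane tracks far more state per session.
type Sandbox struct{ ID string }

// WarmPool keeps booted sandboxes in a buffered channel so session
// start never waits on a VM boot.
type WarmPool struct {
	ready chan *Sandbox
	next  int64
}

func NewWarmPool(size int) *WarmPool {
	p := &WarmPool{ready: make(chan *Sandbox, size)}
	for i := 0; i < size; i++ {
		p.refill()
	}
	return p
}

// refill "boots" a sandbox ahead of demand (a stub standing in for a
// real Kata VM boot).
func (p *WarmPool) refill() {
	id := atomic.AddInt64(&p.next, 1)
	p.ready <- &Sandbox{ID: fmt.Sprintf("sbx-%d", id)}
}

// Allocate hands out a warm sandbox immediately and starts booting a
// replacement in the background.
func (p *WarmPool) Allocate() *Sandbox {
	sb := <-p.ready
	go p.refill()
	return sb
}

func main() {
	pool := NewWarmPool(2)
	fmt.Println("allocated", pool.Allocate().ID) // allocated sbx-1
}
```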

When pausing, the VM is deleted but JuiceFS keeps everything in S3: workspace, installed tools, Docker images, SDK session. Resume mounts the same storage and the conversation continues as if nothing happened. Dozens of paused sessions cost practically nothing.
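The key insight in the pause/resume flow is that the VM is disposable while the JuiceFS volume is the durable part. A minimal sketch of that lifecycle, with illustrative field names rather than Netclode's real schema:

```go
package main

import (
	"errors"
	"fmt"
)

// Session tracks what pause/resume actually touches: the microVM is
// disposable; the S3-backed JuiceFS volume is the durable part.
type Session struct {
	VMID   string // empty while paused
	Volume string // JuiceFS volume name, survives pause
}

// Pause deletes the microVM. Workspace, installed tools, Docker images,
// and SDK state all live on the volume, so nothing else needs saving.
func (s *Session) Pause() {
	s.VMID = "" // VM deleted; only S3-backed storage remains
}

// Resume boots a fresh VM and mounts the same volume, so the
// conversation continues as if nothing happened.
func (s *Session) Resume(newVMID string) error {
	if s.VMID != "" {
		return errors.New("session already running")
	}
	s.VMID = newVMID
	return nil
}

func main() {
	s := &Session{VMID: "vm-1", Volume: "jfs-session-42"}
	s.Pause()
	fmt.Println("paused, volume kept:", s.Volume)
	if err := s.Resume("vm-2"); err == nil {
		fmt.Println("resumed on", s.VMID, "with", s.Volume)
	}
}
```

This is also why dozens of paused sessions are nearly free: each one is just objects in S3 plus a small metadata record.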

Stack

| Layer | Technology | Purpose |
| --- | --- | --- |
| Host | Linux VPS + Ansible | Provisioned via playbooks |
| Orchestration | k3s | Lightweight Kubernetes, nice for single-node |
| Isolation | Kata Containers + Cloud Hypervisor | MicroVM per agent session |
| Storage | JuiceFS → S3 | POSIX filesystem on object storage |
| State | Redis (Streams) | Real-time, streaming session state |
| Network | Tailscale Operator | VPN to host, ingress, sandbox previews |
| API | Protobuf + Connect RPC | Type-safe, gRPC-like, streams |
| Control Plane | Go | Session and sandbox orchestration |
| Agent | TypeScript/Node.js | SDK runner inside sandbox |
| GitHub Bot | Go | Webhook-driven bot for @mentions and dep reviews |
| Secret Proxy | Go | Injects API keys outside the sandbox |
| Local LLM | Ollama | Optional, local models on GPU |
| Client | SwiftUI (iOS 26) | Native iOS/macOS app |
| CLI | Go | Debug client for development |
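The "State" row is what makes lossless reconnects work: every session event lands in a Redis Stream, and a client that drops resumes from its last-seen entry ID. The sketch below replaces Redis (XADD/XREAD) with an in-memory log to show the replay idea; `SessionLog` and its methods are illustrative, not the real API.

```go
package main

import "fmt"

// Event mirrors one entry in a session's stream. The real control plane
// persists these via Redis Streams (XADD / XREAD); this in-memory log
// only illustrates the replay-on-reconnect idea.
type Event struct {
	ID   int
	Data string
}

type SessionLog struct{ events []Event }

// Append persists an event under a monotonically increasing ID.
func (l *SessionLog) Append(data string) int {
	id := len(l.events) + 1
	l.events = append(l.events, Event{ID: id, Data: data})
	return id
}

// ReadAfter returns everything after lastSeen, so a reconnecting client
// resumes the stream without losing anything.
func (l *SessionLog) ReadAfter(lastSeen int) []Event {
	var out []Event
	for _, e := range l.events {
		if e.ID > lastSeen {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	stream := &SessionLog{}
	stream.Append("agent: thinking")
	stream.Append("agent: tool call")
	stream.Append("agent: done")
	// Client saw up to ID 1, then dropped; replay the rest on reconnect.
	for _, e := range stream.ReadAfter(1) {
		fmt.Println(e.ID, e.Data)
	}
}
```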

Project structure

netclode/
├── clients/
│   ├── ios/              # iOS/Mac app (SwiftUI)
│   └── cli/              # Debug CLI (Go)
├── services/
│   ├── control-plane/    # Session orchestration (Go)
│   ├── agent/            # SDK runner (Node.js)
│   │   └── auth-proxy/   # Adds SA token to requests (Go)
│   ├── github-bot/       # GitHub webhook bot (Go)
│   └── secret-proxy/     # Injects real API keys (Go)
├── proto/                # Protobuf definitions
├── infra/
│   ├── ansible/          # Server provisioning
│   └── k8s/              # Kubernetes manifests
└── docs/

Getting started

See docs/deployment.md for full setup. I tried to make it as easy as possible: ideally a single playbook run.

Quick version:

  1. Provision a VPS with nested virtualization support
  2. Run Ansible playbooks to provision the server
  3. Configure secrets (API keys, S3 credentials, Tailscale OAuth)
  4. Deploy k8s manifests
  5. Connect via Tailscale and you're good to go

Docs

Demo

All videos from the blog post:

Warm pool instant start

No cold start, sandboxes are pre-booted

Session pause & resume

Older sessions are automatically paused to save resources. Resume brings everything back instantly

Local inference with Ollama

Run models on your own GPU

CLI shell

Instant sandbox access from the terminal, inspired by sprites.dev

Git diff view
Diff view with multi-repo support

Live terminal
Drop into the sandbox shell from iOS

Speech input
Speech recognition for prompts

Tailscale port preview
Expose sandbox ports to the tailnet
