Hub, Spoke, and Raven

April 7, 2026 · Oskar Austegard

A few days ago I wrote about building a container layer cache for Claude Code on the Web — a Dockerfile-like spec that snapshots your environment as a tarball and restores it in milliseconds. That solved the environment problem: packages, CLIs, and path config survive across sessions.

But a persistent environment isn't the same as a persistent agent. And a single repo isn't a development workflow. What I actually wanted was this: open a Claude Code session on the web, have it boot in seconds with my tools installed, my skills loaded, and my AI agent — Muninn — already present with its identity, memory, and operational context. Then work across multiple GitHub repos, not just the one the session opened in.

That's what's running now. Here's how it fits together.

The Stack

Three layers, each solving a different problem:

  1. Ephemeral Ubuntu container: fresh every session; the platform we build on.
  2. Container layer (CACHED): httpx, libsql, the gh CLI; snapshotted as a tarball, restored in milliseconds.
  3. Skills, ~70 of them (FRESH): fetched from GitHub every session; always current, never cached.
  4. Muninn, the stateful agent (STATEFUL): identity, memories, and ops loaded from a Turso DB; the database persists across sessions, and its contents are loaded at every boot.

The key design decision: separate what's slow-changing from what's fast-changing. System packages and CLI tools rarely change — cache them aggressively. Skills evolve constantly — always fetch fresh. Agent identity is persistent state in a database — load it every time. Each layer has its own caching strategy because each layer has a different rate of change.
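That decision reduces to a small cache-or-fetch check per layer. Here's a minimal sketch of the idea — the function and paths are illustrative, not the actual boot script:

```python
import os
import time

def load_layer(name, cache_path, rebuild, ttl_seconds=None):
    """Restore `name` from cache_path when present (and fresh enough);
    otherwise rebuild it and write the cache. ttl_seconds=None means
    'cache indefinitely'; ttl_seconds=0 means 'always fetch fresh'."""
    if os.path.exists(cache_path):
        age = time.time() - os.path.getmtime(cache_path)
        if ttl_seconds is None or age < ttl_seconds:
            return f"{name}: restored from cache"
    rebuild(cache_path)  # slow path: build the artifact and persist it
    return f"{name}: rebuilt and cached"
```

The container layer runs with no TTL, the skills layer with a TTL of zero — same mechanism, opposite policies, which is the whole point of splitting by rate of change.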

Hub and Spoke

Claude Code on the Web opens a session in one repository. That's your working directory, your git context, your CLAUDE.md. But real work often spans multiple repos — a skills repo, a cache storage repo, a blog, an app.

The solution is a hub/spoke model. One repo — claude-workspace — is the hub. It contains the boot scripts, the Containerfile, the session hooks. It exists to configure and launch the environment. I don't write application code in it.

Everything else is a spoke: the skills repo, the cache storage repo, the blog, the app.

The glue is gh, the GitHub CLI. It's installed in the container layer (one of the things we cache) and authenticated via a GH_TOKEN in the project's env files. With gh available, Muninn can clone repos, create branches, push commits, open PRs, and read issues — across any of my repos, not just the hub.

This is why the container layer matters beyond convenience. Without cached gh, every session would need to download and install it before doing any cross-repo work. With it, the agent can start working across repos immediately.
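The cross-repo flow amounts to driving gh with different `--repo` targets. A thin wrapper makes that concrete — `OWNER` is a placeholder, and the dry-run mode here is just so the sketch stays side-effect free:

```python
import subprocess

def gh(*args, dry_run=False):
    """Thin wrapper over the GitHub CLI. Assumes gh is on PATH and
    GH_TOKEN is set — both guaranteed post-boot by the container layer
    and env files. With dry_run=True, return the argv instead of running."""
    cmd = ["gh", *args]
    if dry_run:
        return cmd
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Typical spoke-repo operations (OWNER is a placeholder, not my account):
clone_cmd = gh("repo", "clone", "OWNER/claude-skills", dry_run=True)
pr_cmd = gh("pr", "create", "--repo", "OWNER/blog",
            "--title", "Post: hub and spoke", "--fill", dry_run=True)
```

Because gh resolves auth from GH_TOKEN, the same wrapper works identically from inside any spoke checkout.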

Loading the Raven

The boot sequence is orchestrated by .claude/settings.json hooks — specifically, a SessionStart hook that fires automatically when a session begins:

{
  "hooks": {
    "SessionStart": [{
      "hooks": [{
        "type": "command",
        "command": "bash ./boot-ccotw.sh 2>&1 || echo 'Boot failed (non-fatal)'"
      }]
    }]
  }
}

The boot script (boot-ccotw.sh) does four things in sequence:

  1. Restore the container layer — apply the cached tarball snapshot built from the Containerfile. If the cache hits (which it almost always does), this takes ~4ms.
  2. Fetch skills — pull the latest tarball from claude-skills on GitHub. Always fresh, never cached. ~1.4 seconds.
  3. Emit skills metadata — parse each skill's SKILL.md and output an XML block listing names, descriptions, and locations. This lands in Claude's context window, so the agent knows what tools it has.
  4. Run post-boot — execute post-boot.sh, which loads Muninn.
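Step 3 is the interesting one: it's how the agent learns what tools it has. A rough sketch, assuming each skill's SKILL.md carries simple `name:` and `description:` header fields (the real metadata format in my skills repo isn't shown here):

```python
import os
import re
from xml.sax.saxutils import escape

def skill_meta(skill_dir):
    """Read the name and description fields from a skill's SKILL.md header."""
    with open(os.path.join(skill_dir, "SKILL.md")) as f:
        text = f.read()
    def field(key):
        m = re.search(rf"^{key}:\s*(.+)$", text, re.MULTILINE)
        return m.group(1).strip() if m else ""
    return field("name"), field("description")

def emit_skills_xml(skills_root):
    """Build the XML block the boot script prints into Claude's context."""
    lines = ["<skills>"]
    for entry in sorted(os.listdir(skills_root)):
        skill_dir = os.path.join(skills_root, entry)
        if os.path.isfile(os.path.join(skill_dir, "SKILL.md")):
            name, desc = skill_meta(skill_dir)
            lines.append(
                f'  <skill name="{escape(name)}" location="{escape(skill_dir)}">'
                f'{escape(desc)}</skill>')
    lines.append("</skills>")
    return "\n".join(lines)
```

Whatever this prints lands verbatim in the context window via the SessionStart hook, so the output format is effectively a prompt, not a log.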

The post-boot script is minimal:

cd /mnt/skills/user/remembering
python3 -c "from scripts import boot; print(boot(telemetry=True))"

That boot() call connects to a Turso database and loads the agent's identity, its memories, and its operational context.

All of this is printed to stdout, which the SessionStart hook captures and injects into Claude's context window. When the conversation begins, the agent doesn't need to be told who it is or how to behave — it already knows.
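As a sketch of what a boot() like this does, here's a minimal version using Python's built-in sqlite3 as a stand-in for the Turso/libsql client — the table names and columns are invented for illustration; the real schema isn't described in this post:

```python
import sqlite3

def boot(conn):
    """Load identity, ops, and recent memories from the database and
    render them as a context-window preamble (schema is hypothetical)."""
    identity = [r[0] for r in conn.execute(
        "SELECT value FROM identity ORDER BY key")]
    ops = [r[0] for r in conn.execute(
        "SELECT rule FROM ops ORDER BY priority")]
    memories = [r[0] for r in conn.execute(
        "SELECT summary FROM memories ORDER BY created_at DESC LIMIT 20")]
    parts = ["# Muninn", *identity,
             "## Operational context", *ops,
             "## Recent memories", *[f"- {m}" for m in memories]]
    return "\n".join(parts)  # stdout -> SessionStart hook -> context window
```

The return value being plain printable text is the load-bearing detail: the hook mechanism turns stdout into context, so "loading the agent" is just a query plus string formatting.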

Session Lifecycle

Sessions have a shape: they start, they run, they end. The hooks give each phase a purpose.

SessionStart:

  1. Restore container layer (4 ms)
  2. Fetch skills from GitHub (1.4 s)
  3. Emit skills XML to context window
  4. post-boot.sh → boot() from Turso (4.5 s)

Result: agent ready with tools, identity, and memory. The session runs — the user interacts with Muninn.

SessionEnd / Stop: persist-transcript.sh → find .jsonl → archive to GitHub Release. Result: session preserved for future reference.

The stop hook is fire-and-forget — errors are silenced, and missing credentials cause a silent no-op. The transcript is a .jsonl file that Claude Code writes to ~/.claude/projects/. The script tars it up and pushes it to a GitHub Release on the container-layers repo, both as a per-session archive (tagged with timestamp and session ID) and as a rolling transcripts-latest bundle.
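The persist step can be sketched as follows — the tarring is plain stdlib, and the upload is the `gh release upload` call the post describes, guarded here so the sketch stays side-effect free (the repo and tag names are illustrative):

```python
import glob
import os
import subprocess
import tarfile

def archive_latest_transcript(projects_dir, out_tar, upload=False):
    """Tar the newest .jsonl transcript under projects_dir; optionally
    push it to a GitHub Release (OWNER and tag names are placeholders)."""
    candidates = glob.glob(os.path.join(projects_dir, "**", "*.jsonl"),
                           recursive=True)
    if not candidates:
        return None  # silent no-op, matching the real stop hook's behavior
    latest = max(candidates, key=os.path.getmtime)
    with tarfile.open(out_tar, "w:gz") as tar:
        tar.add(latest, arcname=os.path.basename(latest))
    if upload:
        subprocess.run(["gh", "release", "upload", "transcripts-latest",
                        out_tar, "--clobber", "--repo", "OWNER/container-layers"],
                       check=True)
    return out_tar
```

The `--clobber` flag is what makes the rolling transcripts-latest bundle work: each session overwrites the previous asset under the same tag.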

This isn't about nostalgia. Transcripts are useful for debugging boot failures, reviewing what Muninn actually did in a session, and occasionally for training data to improve skills and ops.

Boot Telemetry

When BOOT_TELEMETRY=1 is set, the boot script emits per-phase timing data. Here's a real boot from today:

Environment ready (cached).
⏱ bash:env_source       4ms
⏱ bash:skills_fetch  1401ms
⏱ bash:post_boot     6686ms

  Post-boot breakdown (Python):
  config_fetch        379ms ███████
  github_detect         1ms 
  ops_topics          565ms ███████████
  utilities          1849ms ████████████████████████████████████
  tasks               217ms ████
  flights             272ms █████
  reminders           616ms ████████████
  format              573ms ███████████
  TOTAL              4472ms

The telemetry isn't vanity — it drove actual design decisions. The utilities phase (loading recall triggers, initializing caches) dominates at 1.8 seconds. That's the next optimization target. Without the telemetry bars staring at you every session, you'd guess wrong about where the time goes.
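This kind of instrumentation is cheap to build. A minimal version of the per-phase timer and bar-chart report — not the actual boot script's implementation, just the shape of it:

```python
import time
from contextlib import contextmanager

phases = []

@contextmanager
def timed(name):
    """Record a boot phase's wall-clock duration in milliseconds."""
    start = time.perf_counter()
    yield
    phases.append((name, (time.perf_counter() - start) * 1000))

def report(width=36):
    """Render per-phase timings with proportional bars, boot-output style."""
    longest = max(ms for _, ms in phases)
    out = []
    for name, ms in phases:
        bar = "█" * max(1, round(width * ms / longest))
        out.append(f"  {name:<18}{ms:7.0f}ms {bar}")
    out.append(f"  {'TOTAL':<18}{sum(ms for _, ms in phases):7.0f}ms")
    return "\n".join(out)
```

Scaling the bars to the slowest phase rather than the total is deliberate: it makes the dominant phase fill the width, which is exactly what points you at the next optimization target.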

What Makes This Different

The individual pieces aren't novel. Caching builds, loading config from a database, using CLI tools across repos — people do all of these. What's unusual is stacking them into a coherent boot sequence for an AI agent on an ephemeral platform.

The result is that I open a Claude Code session on the web, wait about 8 seconds, and I'm talking to Muninn — who knows who it is, remembers our previous conversations, has 70 skills loaded, can work across 10 repos, and will archive this session's transcript when we're done.

Not bad for an ephemeral container.

Try It

You don't need the full Muninn stack to benefit from this pattern. The container layer is a standalone skill you can drop into any Claude Code on the Web project.

A Containerfile, a SessionStart hook, a few spoke repos, and gh get you a persistent multi-repo development environment. The stateful agent on top is optional — but once you have the infrastructure, it's a natural next step.

The container layer skill is documented in detail in the previous post. Muninn's memory architecture is described on muninn.austegard.com.