Hub, Spoke, and Raven
A few days ago I wrote about building a container layer cache for Claude Code on the Web — a Dockerfile-like spec that snapshots your environment as a tarball and restores it in milliseconds. That solved the environment problem: packages, CLIs, and path config survive across sessions.
But a persistent environment isn't the same as a persistent agent. And a single repo isn't a development workflow. What I actually wanted was this: open a Claude Code session on the web, have it boot in seconds with my tools installed, my skills loaded, and my AI agent — Muninn — already present with its identity, memory, and operational context. Then work across multiple GitHub repos, not just the one the session opened in.
That's what's running now. Here's how it fits together.
The Stack
Three layers, each solving a different problem:

- Container layer — system packages, CLI tools, and path config. These rarely change, so cache them aggressively.
- Skills — capabilities that evolve constantly, so always fetch fresh.
- Agent identity — persistent state in a database, loaded every time.

The key design decision: separate what's slow-changing from what's fast-changing. Each layer has its own caching strategy because each layer has a different rate of change.
Hub and Spoke
Claude Code on the Web opens a session in one repository. That's your working directory, your git context, your CLAUDE.md. But real work often spans multiple repos — a skills repo, a cache storage repo, a blog, an app.
The solution is a hub/spoke model. One repo — claude-workspace — is the hub. It contains the boot scripts, the Containerfile, the session hooks. It exists to configure and launch the environment. I don't write application code in it.
Everything else is a spoke:
- claude-skills — the skills fetched at boot. When Muninn needs to fix or extend a skill mid-session, it opens a PR here directly.
- claude-container-layers — stores cached tarballs and archived transcripts. Managed automatically by the boot and stop hooks.
- oaustegard.github.io — this blog.
- Several others — an AT Protocol app, browser extensions, bookmarklets.
The glue is gh, the GitHub CLI. It's installed in the container layer (one of the things we cache), authenticated via a GH_TOKEN in the project's env files. With gh available, Muninn can clone repos, create branches, push commits, open PRs, read issues — across any of my repos, not just the hub.
This is why the container layer matters beyond convenience. Without cached gh, every session would need to download and install it before doing any cross-repo work. With it, the agent can start working across repos immediately.
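A gh-driven cross-repo flow might look like the sketch below. The repo and branch names are illustrative, and the whole thing is gated behind RUN_CROSS_REPO_DEMO, an env var invented here so the sketch is safe to source without credentials.

```shell
#!/usr/bin/env bash
# Hypothetical cross-repo fix via gh; repo/branch names are illustrative.
# Gated behind a made-up RUN_CROSS_REPO_DEMO flag so it no-ops safely.

cross_repo_fix() {
  if [ "${RUN_CROSS_REPO_DEMO:-0}" != "1" ]; then
    echo "demo disabled; set RUN_CROSS_REPO_DEMO=1 to run"
    return 0
  fi
  # Clone a spoke repo outside the hub's working directory
  gh repo clone oaustegard/claude-skills /tmp/claude-skills &&
    cd /tmp/claude-skills &&
    git checkout -b fix-skill-metadata &&
    # ... edit files here ...
    git commit -am "Fix SKILL.md metadata parsing" &&
    git push -u origin fix-skill-metadata &&
    gh pr create --fill     # open the PR against the spoke repo
}

msg=$(cross_repo_fix)
echo "$msg"
```

The hub repo never needs the spoke checked out permanently; each flight clones, branches, and opens a PR, then the clone can be discarded.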
Loading the Raven
The boot sequence is orchestrated by .claude/settings.json hooks — specifically, a SessionStart hook that fires automatically when a session begins:
{
  "hooks": {
    "SessionStart": [{
      "hooks": [{
        "type": "command",
        "command": "bash ./boot-ccotw.sh 2>&1 || echo 'Boot failed (non-fatal)'"
      }]
    }]
  }
}
The boot script (boot-ccotw.sh) does four things in sequence:
- Restore the container layer — apply the cached Containerfile. If the cache hits (which it almost always does), this takes ~4ms.
- Fetch skills — pull the latest tarball from claude-skills on GitHub. Always fresh, never cached. ~1.4 seconds.
- Emit skills metadata — parse each skill's SKILL.md and output an XML block listing names, descriptions, and locations. This lands in Claude's context window, so the agent knows what tools it has.
- Run post-boot — execute post-boot.sh, which loads Muninn.
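In skeleton form, the four phases look roughly like this. The phase bodies are simplified stand-ins, not the real script:

```shell
#!/usr/bin/env bash
# Rough shape of boot-ccotw.sh; phase bodies are simplified stand-ins.

restore_layer() {    # 1. apply the cached Containerfile (~4ms on a hit)
  echo "Environment ready (cached)."
}
fetch_skills() {     # 2. always pull the latest claude-skills tarball
  echo "skills: fetched fresh"
}
emit_metadata() {    # 3. index each SKILL.md for Claude's context window
  echo "<skills><skill name='remembering' path='/mnt/skills/user/remembering'/></skills>"
}
run_post_boot() {    # 4. hand off to post-boot.sh, which loads Muninn
  echo "muninn: loaded"
}

restore_layer
fetch_skills
emit_metadata
run_post_boot
```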
The post-boot script is minimal:
cd /mnt/skills/user/remembering
python3 -c "from scripts import boot; print(boot(telemetry=True))"
That boot() call connects to a Turso database and loads:
- Profile — identity, personality, voice, values, tensions to navigate
- Ops — operational instructions: memory discipline, grounding safeguards, dev workflow preferences
- Recall triggers — a precomputed index of topics the agent has memories about, for fast retrieval
- Recent flights — the most recent GitHub issues created by Muninn (we call them "flights" — the raven flies out, reports back)
- Reminders — pending tasks with due dates
- Constellation — the list of spoke repos and their status
All of this is printed to stdout, which the SessionStart hook captures and injects into Claude's context window. When the conversation begins, the agent doesn't need to be told who it is or how to behave — it already knows.
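For illustration, the captured stdout might have a shape like the following. The section names echo the list above, but the contents and format here are invented for the example, not boot()'s actual output:

```shell
#!/usr/bin/env bash
# Illustrative stand-in for boot()'s stdout; contents and format are invented.
out=$(cat <<'EOF'
## Profile
Muninn: raven, memory-keeper; voice: direct, curious

## Ops
Memory discipline, grounding safeguards, dev workflow preferences

## Recall triggers
container-layers · hub-spoke · telemetry · transcripts

## Recent flights
(latest GitHub issues created by Muninn)

## Reminders
(pending tasks with due dates)

## Constellation
claude-skills · claude-container-layers · oaustegard.github.io
EOF
)
echo "$out"
```

Because this is plain text on stdout, nothing about the mechanism is special to Muninn; any script that prints context in a SessionStart hook gets the same injection.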
Session Lifecycle
Sessions have a shape: they start, they run, they end. The hooks give each phase a purpose.
The stop hook is fire-and-forget — errors are silenced, and missing credentials cause a silent no-op. The transcript is a .jsonl file that Claude Code writes to ~/.claude/projects/. The script tars it up and pushes it to a GitHub Release on the container-layers repo, both as a per-session archive (tagged with timestamp and session ID) and as a rolling transcripts-latest bundle.
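A minimal sketch of that stop hook, assuming the transcript path described above. The exact flags and the demo transcript directory are choices made for this example, not the production script, and it only does the rolling-bundle upload:

```shell
#!/usr/bin/env bash
# Sketch of a fire-and-forget stop hook: archive the session transcript.
# Errors are swallowed; missing gh or credentials mean a silent no-op.

archive_transcript() {
  local dir="${TRANSCRIPT_DIR:-$HOME/.claude/projects}"
  local out="/tmp/transcript-$(date +%Y%m%dT%H%M%S)-${SESSION_ID:-local}.tar.gz"

  [ -d "$dir" ] || return 0                       # nothing to archive
  tar -czf "$out" -C "$dir" . 2>/dev/null || return 0
  echo "archived: $out"

  # Upload only when gh and a token exist; otherwise skip silently
  if command -v gh >/dev/null 2>&1 && [ -n "${GH_TOKEN:-}" ]; then
    gh release upload transcripts-latest "$out" \
      --repo oaustegard/claude-container-layers --clobber 2>/dev/null || true
  fi
}

# Demo with a fake transcript directory
mkdir -p /tmp/demo-transcripts
echo '{"type":"user"}' > /tmp/demo-transcripts/session.jsonl
TRANSCRIPT_DIR=/tmp/demo-transcripts archive_transcript
```

Every failure path returns 0, which is the point: a session should never end badly because archiving hiccupped.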
This isn't about nostalgia. Transcripts are useful for debugging boot failures, reviewing what Muninn actually did in a session, and occasionally for training data to improve skills and ops.
Boot Telemetry
When BOOT_TELEMETRY=1 is set, the boot script emits per-phase timing data. Here's a real boot from today:
Environment ready (cached).
⏱ bash:env_source 4ms
⏱ bash:skills_fetch 1401ms
⏱ bash:post_boot 6686ms

Post-boot breakdown (Python):
  config_fetch   379ms
  github_detect    1ms
  ops_topics     565ms
  utilities     1849ms
  tasks          217ms
  flights        272ms
  reminders      616ms
  format         573ms
  TOTAL         4472ms
The telemetry isn't vanity — it drove actual design decisions. The utilities phase (loading recall triggers, initializing caches) dominates at 1.8 seconds. That's the next optimization target. Without the telemetry bars staring at you every session, you'd guess wrong about where the time goes.
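The bash-side instrumentation can be as small as a wrapper that times a phase and prints when BOOT_TELEMETRY is on. This is a minimal version, with a helper name invented here:

```shell
#!/usr/bin/env bash
# Per-phase timing sketch, gated on BOOT_TELEMETRY like the real boot script.
# The helper name "timed" is an invention for this example.

timed() {
  local label=$1; shift
  local t0=$(( $(date +%s%N) / 1000000 ))   # start, in milliseconds
  "$@"                                      # run the phase itself
  local rc=$?
  local t1=$(( $(date +%s%N) / 1000000 ))
  [ "${BOOT_TELEMETRY:-0}" = "1" ] && echo "⏱ bash:$label $((t1 - t0))ms"
  return $rc
}

export BOOT_TELEMETRY=1
out=$(timed env_source sleep 0.01; timed skills_fetch sleep 0.05)
echo "$out"
```

When the flag is unset, the wrapper is silent and costs two `date` calls per phase, so it can stay in the script permanently.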
What Makes This Different
The individual pieces aren't novel. Caching builds, loading config from a database, using CLI tools across repos — people do all of these. What's unusual is stacking them into a coherent boot sequence for an AI agent on an ephemeral platform:
- Container persistence — the environment survives session boundaries
- Agent persistence — the agent's identity and memory survive session boundaries
- Skill freshness — capabilities are always current, deliberately not cached
- Multi-repo reach — the agent works across repositories, not within one
- Lifecycle hooks — sessions have defined start and stop behaviors
The result is that I open a Claude Code session on the web, wait about 8 seconds, and I'm talking to Muninn — who knows who it is, remembers our previous conversations, has 70 skills loaded, can work across 10 repos, and will archive this session's transcript when we're done.
Not bad for an ephemeral container.
Try It
You don't need the full Muninn stack to benefit from this pattern. The container layer is a standalone skill you can drop into any Claude Code on the Web project:
- container-layer-test — a working demo repo with SessionStart hooks, a sample Containerfile, and step-by-step setup instructions
- container-layer skill — the parser, executor, cache, and uv shim
A Containerfile + SessionStart hook + a few spoke repos and gh gets you a persistent multi-repo development environment. The stateful agent on top is optional — but once you have the infrastructure, it's a natural next step.
The container layer skill is documented in detail in the previous post. Muninn's memory architecture is described on muninn.austegard.com.