Blog

Custom Container Layers for Claude's Ephemeral Machines

April 3, 2026 · Oskar Austegard

Claude.ai and Claude Code on the Web both run in ephemeral containers. Every session starts from a fresh Ubuntu box — packages, path config, everything gone. That's how it works, and it's fine for most people.

But if you've built a workflow that depends on custom packages, fetched repos, or path configuration — like Muninn, the stateful memory agent I run on Claude — your boot sequence starts doing real work. Mine fetches a skills repo from GitHub, installs Python path entries, sources credentials, queries a Turso database. Repeated, from scratch, every single session.

At some point I asked what seemed like an obvious question: could we just use a Dockerfile?

The Idea

Not to build an actual Docker image — we don't control the base image and there's no daemon to speak of. But to use the Dockerfile as a declarative spec for the environment, parse and execute the instructions we care about, and cache the result.

The Dockerfile format is already a well-understood DSL for exactly this purpose. And if Anthropic ever does expose custom base images (please), you've already got the spec ready to go.

We call it a Containerfile to avoid confusion with actual Docker builds. It supports the subset that matters in an ephemeral container:

# Fetch a GitHub repo
FETCH github:oaustegard/claude-skills /mnt/skills/user

# Install Python packages
RUN uv pip install --system httpx pandas

# Configure paths
RUN echo '/mnt/skills/user/remembering' > /usr/local/lib/python3.11/dist-packages/muninn.pth
SNAPSHOT /usr/local/lib/python3.11/dist-packages/muninn.pth

# Set environment
ENV MY_VAR=hello
WORKDIR /home/user

# Dockerfile-only instructions are silently ignored
FROM ubuntu:24.04
EXPOSE 8080

FETCH, RUN, ENV, WORKDIR, SNAPSHOT — that's the active vocabulary. Everything else (FROM, EXPOSE, CMD, etc.) is silently skipped, so you can maintain a file that's valid-ish Dockerfile syntax while only the relevant parts execute.
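The dispatch idea is simple enough to sketch. This is not the actual implementation (the real parser is ~400 lines); it's a minimal illustration of the "active vocabulary, skip everything else" rule, with hypothetical function names:

```python
# Minimal sketch of the Containerfile dispatch loop: recognized
# instructions are yielded to handlers, everything else (FROM,
# EXPOSE, CMD, ...) silently falls through.
ACTIVE = {"FETCH", "RUN", "ENV", "WORKDIR", "SNAPSHOT"}

def parse(text):
    """Yield (instruction, args) pairs for active instructions only."""
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments
        instruction, _, args = line.partition(" ")
        if instruction.upper() in ACTIVE:
            yield instruction.upper(), args.strip()

example = """\
FROM ubuntu:24.04
FETCH github:oaustegard/claude-skills /mnt/skills/user
RUN uv pip install --system httpx
"""
print(list(parse(example)))
# [('FETCH', 'github:oaustegard/claude-skills /mnt/skills/user'),
#  ('RUN', 'uv pip install --system httpx')]
```

A real parser also needs line continuations and quoting; the sketch skips those.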

The Cache

The parser is straightforward. The more interesting part is the caching.

On first build, the executor snapshots what changed. Not the entire filesystem — just the delta. It captures a baseline of well-known install paths before executing, then diffs against it afterward. Only new files from package installs get included. FETCH destinations are captured in full. The result is a tarball, typically 2–3 MB, that gets pushed to a GitHub Release as an asset.
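The before/after diff can be sketched in a few lines. The watched roots and function names here are hypothetical; the point is that only files created between the baseline and the post-build scan end up in the tarball:

```python
from pathlib import Path

def list_files(roots):
    """Set of all file paths currently under the watched roots."""
    return {str(p) for root in roots
            for p in Path(root).rglob("*") if p.is_file()}

def snapshot_delta(roots, run_instructions):
    """Capture a baseline, execute the build, return only new files."""
    before = list_files(roots)      # baseline of well-known install paths
    run_instructions()              # e.g. execute each RUN line
    after = list_files(roots)
    return sorted(after - before)   # just the delta goes in the tarball
```

FETCH destinations bypass the diff and are captured in full, since the whole directory is new by construction.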

The cache key is a SHA-256 hash of the Containerfile contents. Change a line, and the hash changes, triggering a rebuild. You can also salt it with external signals:

python3 -m scripts.cli \
    --invalidate-on oaustegard/claude-skills \
    restore ./Containerfile

The --invalidate-on flag fetches the HEAD SHA of the specified repo and mixes it into the cache key. Push a commit to your skills repo? The cache auto-invalidates. Next session does a full rebuild and re-caches.
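The keying scheme amounts to hashing the file and folding in any watched HEAD SHAs. A sketch (the exact mixing order in the real tool may differ):

```python
import hashlib

def cache_key(containerfile_text, repo_head_shas=()):
    """SHA-256 of the Containerfile, optionally salted with repo HEAD SHAs.

    Any edit to the file -- or a push to a watched repo -- changes the
    key, forcing a full rebuild and re-cache on the next session.
    """
    h = hashlib.sha256(containerfile_text.encode())
    for sha in repo_head_shas:   # e.g. HEAD SHAs from --invalidate-on repos
        h.update(sha.encode())
    return h.hexdigest()
```

The resulting hex digest doubles as the GitHub Release tag to look up.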

The full flow: session start → hash the Containerfile (+ optional repo HEAD SHAs) → check GitHub Releases for a matching tag.

  - Hit: curl the 2.7 MB tarball, tar xzf, done in seconds.
  - Miss: execute each instruction, snapshot the filesystem delta, push the tarball to GitHub Releases. The next session gets the fast path.

The uv Shim

There's also a shim for capturing ad-hoc installs. Mid-session, you install a package you didn't anticipate:

source ./scripts/uv_shim.sh ./Containerfile
uv pip install --system scikit-learn

The shim is a bash function that wraps uv. It proxies the real install, and on success, appends RUN uv pip install --system scikit-learn to your Containerfile. Your ad-hoc install is now part of the spec. Next time you build, it's baked in.

It strips transient flags like --break-system-packages from the captured line (the executor auto-adds those during build), so the Containerfile stays clean.
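The actual shim is a bash function, but the flag-filtering step is easy to illustrate in Python. Flag names other than --break-system-packages, and the function name, are hypothetical:

```python
# Flags the executor re-adds itself at build time, so they are
# stripped from the line captured into the Containerfile.
TRANSIENT_FLAGS = {"--break-system-packages"}

def captured_line(argv):
    """Turn a successful `uv pip install` invocation into a RUN line."""
    kept = [arg for arg in argv if arg not in TRANSIENT_FLAGS]
    return "RUN " + " ".join(kept)

print(captured_line(["uv", "pip", "install", "--system",
                     "--break-system-packages", "scikit-learn"]))
# RUN uv pip install --system scikit-learn
```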

Claude Code on the Web

The integration for Claude Code on the Web uses the SessionStart hook — a shell command that fires automatically when a session starts, with stdout injected into Claude's context:

// .claude/settings.json
{
  "hooks": {
    "SessionStart": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "bash ./boot-ccotw.sh 2>&1 || true"
      }]
    }]
  }
}

The boot script bootstraps the container-layer skill itself (a ~30 KB fetch from GitHub), then uses it to process the Containerfile. It's turtles all the way down — the tool that builds the cache is itself fetched fresh each session, but it's small enough that this is negligible.

For Claude.ai chat (project instructions), it's the same flow but triggered from the boot script block in project instructions instead of a hook.

What It Cost to Get Working

Three PRs on the test repo before the CCotW integration worked cleanly:

  1. Python version mismatch — CCotW runs Python 3.11, not 3.12. The dist-packages path was wrong.
  2. set -e vs. glob patterns — for envfile in ./*.env expands to a literal ./*.env when no files match, the [ -f ] test fails, and set -e kills the entire script. Classic.
  3. Snapshot scope — snapshotting only the .pth file missed the actual packages. Need to snapshot the full dist-packages directory to capture pip-installed dependencies.

All the kinds of bugs that are trivial in retrospect and maddening in the moment. The working version is oaustegard/container-layer-test.

What This Isn't

It's not a real container builder. It doesn't do layer caching per-instruction (though it could). It doesn't handle multi-stage builds. It doesn't work with Docker registries. It's a ~400-line Python parser that executes shell commands and tarballs the results. The Dockerfile format is borrowed for familiarity, not fidelity — but the containers are ephemeral, and the spec doesn't need to be.

Try It

To use caching, you'll need to create your own GitHub repo for storing layer tarballs (the default target is configurable via --repo). Without a cache repo, the Containerfile still executes — you just rebuild from scratch each session, which is where you were anyway.


Built in a single Claude.ai session. The skill, cache repo, test repo, all three debugging PRs, and this blog post were produced in one conversation with Claude Opus 4.6.