SDK: Storage Snapshots

See also: SDK Overview, VFS Providers, Custom Images, Snapshots

VFS Providers

Gondolin can mount host-backed paths into the guest via programmable VFS providers.

See VFS Providers for the full provider reference and common recipes (blocking /.env, hiding node_modules, read-only mounts, hooks, and more).

Minimal example:

import { VM, RealFSProvider, MemoryProvider } from "@earendil-works/gondolin";

const vm = await VM.create({
  vfs: {
    mounts: {
      "/workspace": new RealFSProvider("/host/workspace"),
      "/scratch": new MemoryProvider(),
    },
  },
});

Image Management

Guest images (kernel, initramfs, rootfs, and optional krun boot artifacts) are resolved automatically from local overrides/store first, then from builtin-image-registry.json when needed. The default cache location is ~/.cache/gondolin/images/.

Override image selection / source:

# Use explicit local assets
export GONDOLIN_GUEST_DIR=/path/to/assets

# Change default image selector
export GONDOLIN_DEFAULT_IMAGE=alpine-base:1.0

# Override builtin registry URL
export GONDOLIN_IMAGE_REGISTRY_URL=https://example.invalid/my-registry.json

Build-id selectors (UUIDs) are resolved locally first and are only downloaded from the builtin registry when that registry has an explicit builds[buildId] mapping.

Builtin registry entries are normalized: refs[name:tag][arch] stores a build id, and builds[buildId] stores the downloadable source metadata (url, optional sha256, optional arch).
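Under that normalization, a registry file might look roughly like the following sketch. All identifiers, the architecture key, and the URL are illustrative placeholders; the real builtin-image-registry.json will differ:

```json
{
  "refs": {
    "alpine-base:1.0": {
      "x86_64": "0f8c2c1e-example-build-id"
    }
  },
  "builds": {
    "0f8c2c1e-example-build-id": {
      "url": "https://example.invalid/images/alpine-base-1.0-x86_64.tar",
      "sha256": "<sha256 of the downloaded artifact>",
      "arch": "x86_64"
    }
  }
}
```

Resolving a name:tag selector thus takes two hops: refs maps the tag and architecture to a build id, and builds maps that build id to where (and how) the assets can be downloaded.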

Check asset status programmatically:

import {
  hasGuestAssets,
  ensureGuestAssets,
  getAssetDirectory,
} from "@earendil-works/gondolin";

console.log("Assets available:", hasGuestAssets());
console.log("Asset directory:", getAssetDirectory());

// Download if needed
const assets = await ensureGuestAssets();
console.log("Kernel:", assets.kernelPath);

To build custom images, see: Building Custom Images.

Use custom assets programmatically by pointing sandbox.imagePath at the asset directory:

import { VM } from "@earendil-works/gondolin";

const vm = await VM.create({
  sandbox: {
    imagePath: "./my-assets",
  },
});

const result = await vm.exec("uname -a");
console.log("exitCode:", result.exitCode);
console.log("stdout:\n", result.stdout);
console.log("stderr:\n", result.stderr);

await vm.close();

Rootfs Modes

You can control rootfs write behavior per VM:

  • readonly: rootfs is read-only (EROFS on writes)
  • memory: writable throwaway rootfs
    • on qemu, this uses backend-native snapshot mode
    • on krun, this is not RAM-backed; Gondolin creates a temporary qcow2 overlay on disk and deletes it on close
  • cow: writable qcow2 copy-on-write overlay (default)
    • this does not write back into the original rootfs image
    • by default it is a throwaway qcow2 overlay file that is deleted on close
    • because it is a real qcow2 layer, it can be checkpointed

const vm = await VM.create({
  rootfs: { mode: "readonly" },
});

If the guest assets' manifest.json contains runtimeDefaults.rootfsMode, that value is used as the default when rootfs.mode is not provided.
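Putting together the manifest fields mentioned on this page (buildId, runtimeDefaults.rootfsMode, and assets.krunKernel), a manifest.json might look roughly like this sketch. All values, and any structure beyond those three fields, are illustrative assumptions; real manifests contain additional entries:

```json
{
  "buildId": "0f8c2c1e-example-build-id",
  "runtimeDefaults": {
    "rootfsMode": "cow"
  },
  "assets": {
    "krunKernel": "vmlinux-krun"
  }
}
```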

Disk Checkpoints (qcow2)

Gondolin supports disk-only checkpoints of the VM root filesystem.

A checkpoint captures the VM's writable disk state and can be resumed cheaply using qcow2 backing files.

Backend support: checkpoints work with both qemu and krun. On resume, the backend-compatibility metadata stored in the checkpoint is enforced. See VM Backends (QEMU vs krun).

See also: Snapshots.

import path from "node:path";

import { VM } from "@earendil-works/gondolin";

const base = await VM.create();

// Install packages / write to the root filesystem...
await base.exec("apk add git");
await base.exec("echo hello > /etc/my-base-marker");

// Note: must be an absolute path
const checkpointPath = path.resolve("./dev-base.qcow2");
const checkpoint = await base.checkpoint(checkpointPath);

const task1 = await checkpoint.resume();
const task2 = await checkpoint.resume();

// Both VMs start from the same disk state and diverge independently
await task1.close();
await task2.close();

checkpoint.delete();

Notes:

  • This is disk-only (no in-VM RAM/process restore)
  • The checkpoint is a single .qcow2 file; metadata is stored as a JSON trailer (reload with VmCheckpoint.load(checkpointPath))
  • Checkpoints require guest assets with a manifest.json that includes a deterministic buildId (older assets without buildId cannot be snapshotted)
  • QEMU rootfs.mode="memory" uses backend snapshot mode and is not checkpointable; use rootfs.mode="cow" when you need a writable qcow2 layer
  • Cross-backend resume (qemu → krun) requires guest assets with krun boot artifacts (manifest.assets.krunKernel)
  • Some guest paths are tmpfs-backed by design (e.g. /root, /tmp, /var/log); writes under those paths are not part of disk checkpoints
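The JSON-trailer idea mentioned above (metadata stored at the end of the .qcow2 file) can be sketched in isolation. The snippet below is a toy illustration of the general pattern, appending metadata plus a fixed-width length footer to a file and recovering it from the end; it is not Gondolin's actual on-disk format, and all names and values are made up:

```typescript
import { appendFileSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import path from "node:path";

const file = path.join(tmpdir(), "trailer-demo.bin");

// Payload first (stands in for the qcow2 disk data).
writeFileSync(file, Buffer.from("qcow2 payload bytes..."));

// Append the metadata as JSON, followed by a fixed-width (10-digit)
// length footer so a reader can locate the JSON from the end of file.
const meta = JSON.stringify({ buildId: "example-build-id", backend: "qemu" });
appendFileSync(file, meta + String(meta.length).padStart(10, "0"));

// Reading back: the last 10 bytes give the metadata length, and the
// JSON sits immediately before that footer.
const bytes = readFileSync(file);
const metaLen = Number(bytes.subarray(bytes.length - 10).toString());
const parsed = JSON.parse(
  bytes.subarray(bytes.length - 10 - metaLen, bytes.length - 10).toString(),
);
console.log(parsed.buildId); // → example-build-id
```

Because the trailer lives past the end of the disk data, tools that read the file as a plain qcow2 image keep working, while VmCheckpoint.load can still recover the checkpoint metadata.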

Debug Logging

See Debug Logging.