/changelog reverse chronological · deploy dates · receipts

Every hardening choice,
with the date it landed.

This page lists every security decision that shipped on jenni.noschmarrn.dev — the live reference instance — in reverse chronological order. Dates are deploy dates on production, not merge dates: until it ran in the wild, it doesn't count. Phase numbers match the internal plan/ structure that ships with the open-source release.

If you are considering trusting Jenni with your releases, read this before you upload your first ZIP. The cross-cutting picture is on the specs page; the trust-layer protocol is on the trust page; the why-this-exists narrative is on the story page.

2026-05-13 · phase 8

Trust layer with an offline root key.

The structural answer to the open question Phase 7 had documented: what stops a compromised server from signing forged updates? A second key, kept off the server entirely, signs the list of which signing keys are currently valid. Clients pin one 32-byte root pubkey for years and verify both layers on every refresh. The full protocol lives on the trust page.

Hardening added

  • Two-stage trust hierarchy. A long-lived root pubkey is pinned in the client application; the server delivers a trust.json signed by the root, listing the currently valid signing keys. The root privkey lives only on the operator's laptop, never on the server. A client-side sketch follows this list.
  • Compromise recovery without a client code update. If a signing key is pulled from server memory, the operator moves it to revoked_keys on the laptop and issues a new trust.json. Clients refuse every signature by the revoked key from the next trust refresh onward.
  • Signing-key rotation without a client code update. Scheduled rotation (e.g. annual) is handled entirely through a new offline-signed trust.json. Open-source clients pick it up on the next cron run.
  • Trust-list replay protection. trust_version is monotonically increasing. The client refuses any trust list with a version smaller than the one it has already accepted.
  • Trust-list freeze protection. trust.expires_at (default two years) forces the operator to re-sign regularly. The server shows a traffic-light status (90 / 30 / 0 days) and writes audit events at every threshold crossing.
  • Fail-fast on configuration drift. The TRUST_ROOT_PUBKEY environment variable is compared to the persisted settings row at container start. Mismatch = non-zero exit. Prevents accidental root rotation by an env-var typo.
  • Operator-laptop CLI (tools/trust-tool/): a stand-alone Python tool, Argon2id + Fernet for encrypted-at-rest root privkey, no app.* imports. Cross-platform (Linux / macOS / Windows).
  • Audit trail for the trust lifecycle: trust_root_configured, trust_imported, trust_root_rotated, trust_expiry_warning_90, trust_expiry_warning_30, trust_expired, signing_key_rotated_without_trust_update.
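
For a feel of the client side, a minimal sketch of the two-stage check — assuming the pyca/cryptography library, hex-encoded signatures, an epoch-second expires_at, and the canonical-JSON convention from Phase 7. The envelope layout and any field names beyond the ones quoted above are assumptions, not the wire spec; the trust page has the real protocol.

# Two-stage trust check (sketch, assumptions as stated above).
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

PINNED_ROOT_PUBKEY = bytes.fromhex("00" * 32)   # placeholder; the real 32-byte
                                                # key ships with the client

def canonical(obj) -> bytes:
    # Sorted keys, no whitespace, UTF-8 direct — byte-exact across languages.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def accept_trust_list(doc: dict, last_version: int) -> dict:
    trust, sig = doc["trust"], bytes.fromhex(doc["signature"])
    root = Ed25519PublicKey.from_public_bytes(PINNED_ROOT_PUBKEY)
    root.verify(sig, canonical(trust))            # stage 1: root signature
    if trust["trust_version"] < last_version:     # replay protection
        raise ValueError("trust-list rollback")
    if time.time() > trust["expires_at"]:         # freeze protection
        raise ValueError("trust list expired")
    return trust   # stage 2: verify manifests only under these signing keys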

Internal references: trust-layer design spec, root-key plan, Phase 8 plan, end-to-end test with the raw cryptography library on the client side.

2026-05-13 · phase 7

Ed25519-signed release manifests.

Before Phase 7, every auto-updater that pointed at Jenni was implicitly trusting TLS for everything: the version number, the SHA-256, the scan status. Phase 7 added the per-release signature layer so a client can verify what the server claims, not just that it reached the server.

Hardening added

  • One Ed25519-signed manifest per release with counter, key_id, project, sha256, signed_at, size_bytes, url, version. Canonical JSON (sorted keys, no whitespace, UTF-8 direct) so byte-exact verification is reproducible across languages — see the sketch after this list.
  • Rollback protection. A monotonic signing_counter per project. The client stores max_counter locally; any manifest with counter ≤ max_counter is a rollback attempt and is refused.
  • MITM protection. The ZIP's sha256 is part of the signed payload. The client verifies after download → any byte change in flight (CDN, proxy, malicious MITM) is detected.
  • Freeze protection. signed_at is part of the signed payload. The client can apply its own freshness policy (e.g. "the update server has been returning the same manifest for over 30 days — alert").
  • Mirror verification. /api/signing/pubkey is offline-verifiable. The client can accept manifests from a third-party mirror as long as the signature verifies under the pinned pubkey.
  • Encrypted-at-rest signing privkey. The signing key sits in the DB under the same Argon2id + Fernet stack as the SVN credentials from Phase 5.1. Decryption happens once at container start (via env-var or admin UI unlock); in-memory state is kept to the minimum needed.
  • Partial unique index on signing_keys enforces exactly one active key row at a time — no accidental multi-active states.
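
A minimal verifier sketch for one release, again assuming pyca/cryptography; the envelope shape and hex encodings are assumptions, the field names are the ones listed above. The real reference verifier ships under tests/integration/.

# Manifest verification (sketch): signature, rollback counter, sha256.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def canonical(obj) -> bytes:
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def verify_release(envelope: dict, pubkey: bytes, max_counter: int,
                   zip_bytes: bytes) -> dict:
    m, sig = envelope["manifest"], bytes.fromhex(envelope["signature"])
    Ed25519PublicKey.from_public_bytes(pubkey).verify(sig, canonical(m))
    if m["counter"] <= max_counter:               # rollback protection
        raise ValueError("rollback: counter not monotonic")
    if hashlib.sha256(zip_bytes).hexdigest() != m["sha256"]:
        raise ValueError("sha256 mismatch: ZIP altered in flight")   # MITM
    return m   # caller persists m["counter"] as the new max_counter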

Internal references: signed-manifest design spec, Phase 7 plan, reference verifier in Python under tests/integration/.

2026-05-10 · phase 5.3

WordPress plugin asset workflow.

wp.org has two SVN paths per plugin: trunk/ for the code, assets/ for banners, icons, and screenshots. Phase 5.3 added the second, with its own concurrency lane and image-validation pipeline.

Hardening added

  • Separate concurrency domain for asset commits: wp_asset_deploy_status parallel to wp_deploy_status from Phase 5.2. Plugin code commits and asset commits can run concurrently because they touch disjoint SVN paths (trunk/ vs assets/) — no cross-locking.
  • Stage-and-batch pattern for asset changes: a local staging layer with an explicit diff preview before anything is committed against wp.org. The operator sees every byte change in the browser before clicking "commit".
  • Pillow + nh3 for image validation. Every uploaded asset file is magic-byte-checked with Pillow (no PHP-with-PNG-header tricks). SVGs pass through nh3 as an HTML sanitiser (active content stripped).
  • Atomic tmp + replace when persisting: write to /tmp first, confirm sha256, then os.replace() onto the final path. No half-writes on crash — as sketched below.
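
A sketch of the validate-then-persist flow — Pillow and nh3 as named above; the helper name and the by-extension SVG detection are illustrative simplifications.

# Validate, then persist atomically (sketch).
import hashlib, io, os, tempfile
import nh3
from PIL import Image

def persist_asset(data: bytes, filename: str, final_path: str) -> None:
    if filename.lower().endswith(".svg"):
        # nh3 strips active content (scripts, event handlers) from the markup.
        data = nh3.clean(data.decode("utf-8")).encode("utf-8")
    else:
        Image.open(io.BytesIO(data)).verify()    # magic-byte + structure check
    fd, tmp = tempfile.mkstemp(dir="/tmp")       # atomic tmp + replace
    with os.fdopen(fd, "wb") as fh:
        fh.write(data)
    with open(tmp, "rb") as fh:                  # confirm sha256 before swap
        if hashlib.sha256(fh.read()).digest() != hashlib.sha256(data).digest():
            raise OSError("half-write detected")
    os.replace(tmp, final_path)                  # no partial file ever visible
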
2026-05-09 · phase 6

Module collections + public JSON catalog.

New project type (module_collection) plus its children (module), exposed through a JSON catalog endpoint for apps that ship a plug-in ecosystem to their users. The hardening pass extended the upload-validation surface from ZIPs to icon images.

Hardening added

  • Strict image validation for module icons via Pillow (magic-byte check) + nh3 SVG sanitise. Same pipeline as Phase 5.3, one call per upload.
  • FS-vs-DB drift detection. /<module-slug>/icon answers with two distinct 404 paths: DB row missing or file on disk missing. The operator sees in the audit log which variant was hit.
  • Cascade protection. ON DELETE RESTRICT between module collection and module. A container cannot be deleted as long as modules still reference it.
  • Slug allow-list tightened: 3-50 characters, allowed pattern [a-z0-9][a-z0-9-]{1,48}[a-z0-9], plus a reserved set (see the sketch after this list).
  • Filename validation as the first gate on the icon endpoint: URL-encoded .. is rejected by regex before the DB is even queried.
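
Both gates fit in a few lines. A sketch — the reserved set shown is illustrative, and plain substring checks stand in for the production regex on the filename side:

# Slug and filename gates (sketch).
import re
from urllib.parse import unquote

SLUG_RE = re.compile(r"[a-z0-9][a-z0-9-]{1,48}[a-z0-9]")
RESERVED = frozenset({"admin", "api", "static"})   # illustrative reserved set

def valid_slug(slug: str) -> bool:
    return bool(SLUG_RE.fullmatch(slug)) and slug not in RESERVED

def valid_icon_filename(raw: str) -> bool:
    # First gate on the icon endpoint — reject URL-encoded traversal
    # before the DB is even queried.
    name = unquote(raw)
    return ".." not in name and "/" not in name and "\\" not in name
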
2026-05-03 · phase 5.2

WordPress.org SVN deploy.

Jenni's first end-to-end push against an external system. wp.org's SVN repository is the canonical home for free WordPress plugins; Phase 5.2 turned it into a one-click deploy with pre-flight gates and atomic commits.

Hardening added

  • Atomic tag commit against wp.org. svn co --depth=immediates, svn update trunk --set-depth=infinity, modify trunk, svn copy trunk tags/X.Y.Z, one svn ci. A single SVN revision covers both the trunk update and the tag creation — no half-state with a published version missing its tag.
  • --non-interactive --no-auth-cache on every svn call. Credentials never persist to the SVN auth cache and the subprocess never hangs on prompts.
  • Stderr redaction of the --password=… argument in subprocess logs (see the sketch after this list).
  • Pre-flight checks before commit. PHPCS with WordPress Coding Standards (soft gate for stylistic warnings), PHPCompatibility against a configurable PHP target (default 7.4), wp plugin check. The operator can only commit if all three produce acceptable output.
  • Diff preview before commit. SVN diff between the current trunk and the new ZIP content is rendered in the browser before anything is committed.
  • Partial unique index per_in_flight_per_project: only one in-flight deploy per project at a time.
  • Container surface tightened. After Phase 5.2 the WordPress toolchain (svn, php, composer, wp-cli, phpcs / phpcbf) was baked into the image as fixed layers instead of being installed at runtime.
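
The credential-handling rules compress into one helper. A sketch, stdlib subprocess only; the helper name and log format are illustrative:

# Hardened svn call (sketch): no prompts, no auth cache, no secrets in logs.
import re, subprocess

def run_svn(args: list[str], username: str, password: str) -> str:
    cmd = ["svn", *args, "--non-interactive", "--no-auth-cache",
           "--username", username, f"--password={password}"]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    # Redact the secret before anything reaches the logs.
    logged = [re.sub(r"^--password=.*", "--password=***", c) for c in cmd]
    if proc.returncode != 0:
        raise RuntimeError(f"{' '.join(logged)} failed: "
                           f"{proc.stderr.replace(password, '***')}")
    return proc.stdout
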
2026-05-02 · phase 5.1

Master password & encrypted SVN credentials.

Once Jenni had to hold wp.org credentials to deploy on behalf of the operator, those credentials needed to be encrypted at rest. Phase 5.1 added the master-password layer that later phases (Phase 7's signing privkey) reused.

Hardening added

  • Argon2id-derived Fernet key for encrypted-at-rest storage of SVN credentials. Argon2id parameters: time_cost=3, memory_cost=65536 (KiB, i.e. 64 MiB), parallelism=4, hash_len=32 — the OWASP 2024 recommendation.
  • Verifier pattern. Setup encrypts a fixed marker with the derived key. Verify re-derives the key and attempts to decrypt the marker — a Fernet HMAC failure means wrong password. No separate Argon2 hash storage; one source of truth. See the sketch after this list.
  • Master-password reset makes everything encrypted under the old key mathematically unreachable: the random salt is overwritten, so the old key can never be re-derived, even with the old passphrase. The operator must re-enter every credential. Consequence: no "undo" of a master-password reset.
  • Single master password per server, stored encrypted in its own table master_password_state. No email recovery flow → no phishing surface; recovery runs through the host-shell CLI.
  • Locked-recovery banner on the per-plugin settings page when the master password has been rotated but the plugin's SVN credentials are still encrypted under the old key — the operator sees immediately where to re-enter.
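
A sketch of the derive-and-verify pattern — argon2-cffi and cryptography's Fernet, as above; the marker value and the storage layer around it are illustrative:

# Argon2id -> Fernet key derivation plus the verifier marker (sketch).
import base64, os
from argon2.low_level import Type, hash_secret_raw
from cryptography.fernet import Fernet, InvalidToken

MARKER = b"master-password-marker"                # illustrative fixed marker

def derive_key(passphrase: str, salt: bytes) -> Fernet:
    raw = hash_secret_raw(passphrase.encode(), salt,
                          time_cost=3, memory_cost=65536, parallelism=4,
                          hash_len=32, type=Type.ID)
    return Fernet(base64.urlsafe_b64encode(raw))

def setup(passphrase: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                         # overwritten on reset
    return salt, derive_key(passphrase, salt).encrypt(MARKER)

def verify(passphrase: str, salt: bytes, token: bytes) -> bool:
    try:
        return derive_key(passphrase, salt).decrypt(token) == MARKER
    except InvalidToken:                          # HMAC failure: wrong password
        return False
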
2026-04-25 · phase 4

ClamAV scan + trust badge.

The "no, really, every upload is virus-scanned" promise on the homepage stops being just a promise the day this phase lands. Strict mode is the default: a scanner error aborts the upload rather than waving it through.

Hardening added

  • ClamAV scan on every upload via a Unix-socket mount into the container. The scanner daemon runs host-side; the container has a read-only mount on the socket directory. AF_UNIX socket I/O is unaffected by :ro. See the sketch after this list.
  • Strict-mode default. A scanner error aborts the upload instead of "just let it through". The operator has to explicitly set CLAMAV_STRICT_MODE=false to degrade the scan to a soft gate.
  • Scan status is part of the public API and the info page. Endpoints return scan_status, scanner_name, scanner_version, scanned_at, scanner_signature_date — so external reviewers can verify the claims themselves.
  • EICAR test in the deploy smoke test. Every production deploy is cross-checked with an EICAR probe upload to confirm that strict mode actually blocks.
  • rescan-all CLI command for nightly re-scans against updated signatures — old releases aren't forgotten when new definitions land.
  • Trust badge (/api/projects/<slug>/badge.svg): a small SVG snippet for external sites that shows the current scan status.
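
The scan call itself is small. A sketch speaking clamd's documented INSTREAM wire protocol directly over the Unix socket — the socket path is illustrative:

# INSTREAM scan over the mounted clamd socket (sketch).
import socket, struct

def scan_bytes(data: bytes, sock_path: str = "/run/clamav/clamd.ctl") -> str:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"zINSTREAM\0")
        for i in range(0, len(data), 65536):      # length-prefixed chunks
            chunk = data[i:i + 65536]
            s.sendall(struct.pack("!I", len(chunk)) + chunk)
        s.sendall(struct.pack("!I", 0))           # zero-length terminator
        reply = s.recv(4096).decode().rstrip("\0\n")
    if reply.endswith("OK"):                      # e.g. "stream: OK"
        return "clean"
    if reply.endswith("FOUND"):                   # e.g. "stream: Eicar... FOUND"
        return "infected"
    raise RuntimeError(f"scanner error: {reply}") # strict mode: abort upload
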
2026-04-19 · host & container

Host- and container-level hardening pass.

Not a numbered phase — a deliberate sweep across the host and the container's runtime context. Live on production since 2026-04-19. Most of these are boring infrastructure choices; the boringness is the point.

Hardening added

  • UFW active: limit 22/tcp, allow 80/tcp, allow 443/tcp, allow 443/udp (HTTP/3). Default-deny incoming.
  • fail2ban with sshd jail: maxretry=5, bantime=1h, backend=systemd, ignoreip=loopback.
  • unattended-upgrades for Debian security patches.
  • Container read_only: true: writes to the container filesystem fail with EROFS. Only the explicitly configured mounts (./data, ./downloads, clamav-socket) are writable.
  • tmpfs for /tmp (64 MB, mode 1777): scratch storage for ZIP extracts and pre-flight reports, without making the container itself writable.
  • no-new-privileges: true: setuid / setgid escalation from inside the container is impossible.
  • cap_drop: [ALL]: no Linux capabilities, not even the usual default set.
  • Container runs as UID 10001, not root. data/ and downloads/ are bind-mounted to this UID's ownership.
  • Caddy as the reverse proxy, with automatic TLS certificates via ACME. The HTTPS-only cookies from Phase 1 depend on exactly this setup.

2026-04-18 · phase 1

MVP foundation security.

The starting baseline. Everything since has only been additive on top of this — no destructive migrations, no "we deprecated that defence", no "we moved off Argon2id". The list below is the floor every release Jenni has ever served stood on.

Hardening added

  • Argon2id for user passwords (not bcrypt, not scrypt, not PBKDF2). argon2-cffi library.
  • Login rate limit: 5 attempts per minute per IP. Counts successful logins too (defends against distributed brute force that interleaves successful attempts).
  • CSRF tokens on every admin form. Session-scoped, not rotated mid-flow.
  • Cookie hardening: HttpOnly, Secure, SameSite=Lax.
  • HTTPS-only cookies + Uvicorn --proxy-headers for the Caddy-in-front-of-FastAPI setup.
  • ZIP-slip protection: path components are validated against .. and absolute paths. No writes outside the extract root.
  • ZIP-bomb cap: 500 MB uncompressed default, configurable via MAX_UNCOMPRESSED_MB. Measured while streaming, not after a full extract — see the sketch after this list.
  • Magic-byte check on every upload (PK\x03\x04 signature for ZIPs).
  • Soft-delete for releases: no accidental data loss from operator misclicks. A restore path exists.
  • Atomic activation via symlinks: no read/write race on the active release.
  • Audit log for every admin action (uploads, activations, status changes, restores). Append-only, in its own table.
  • Rate limit on public GET routes (60/min/IP) as an in-memory sliding window.
  • Vocabulary-pin test for audit actions: every new audit.record(action=…) must be entered in a frozenset allow-list AND pinned as a literal string in a test. Prevents vocabulary drift during refactors.
  • /healthz without auth and without DB probe (for the container healthcheck). /api/health WITH a DB probe (for external monitoring hooks).
  • No IP or User-Agent logging in the downloads table — GDPR-strict by design.
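
The three ZIP gates from above, as one sketch — stdlib only, with the cap shown as a constant instead of the MAX_UNCOMPRESSED_MB env var, and the actual write step omitted:

# Magic bytes, zip-slip, and streaming bomb cap (sketch).
import os, zipfile

MAX_UNCOMPRESSED = 500 * 1024 * 1024              # 500 MB default

def validate_zip(zip_path: str, dest: str) -> None:
    with open(zip_path, "rb") as fh:
        if fh.read(4) != b"PK\x03\x04":           # magic-byte gate
            raise ValueError("not a ZIP")
    total = 0
    root = os.path.realpath(dest)
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            target = os.path.realpath(os.path.join(dest, info.filename))
            if target != root and not target.startswith(root + os.sep):
                raise ValueError("zip-slip: path escapes extract root")
            with zf.open(info) as src:            # measure while streaming,
                while chunk := src.read(65536):   # never trust declared sizes
                    total += len(chunk)
                    if total > MAX_UNCOMPRESSED:
                        raise ValueError("zip bomb: cap exceeded mid-stream")
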
verify · live checks

Don't take the dates on trust — check them.

Every claim above is reachable on the live reference instance at jenni.noschmarrn.dev. A few quick ways to confirm without reading the source:

# Root-signed trust list (Phase 8)
$ curl -s https://jenni.noschmarrn.dev/api/signing/trust | jq

# Active server signing pubkey (Phase 7)
$ curl -s https://jenni.noschmarrn.dev/api/signing/pubkey | jq

# A signed release manifest (Phase 7) — pick any public project slug
$ curl -s https://jenni.noschmarrn.dev/api/projects/<slug>/manifest | jq

# Healthcheck (Phase 1) — DB probe included
$ curl -s https://jenni.noschmarrn.dev/api/health | jq

# Trust badge SVG (Phase 4) — embeds anywhere as <img>
$ curl -s https://jenni.noschmarrn.dev/api/projects/<slug>/badge.svg

The reference verifier — a Python script that re-implements the full client-side trust-list and manifest verification using only the raw cryptography library — ships in the repo as tests/integration/test_trust_e2e.py and will be public the day Jenni's open-source release lands. The protocol it implements is small enough to port to another language in an afternoon; the trust page walks through every step.

audit · receipts & questions

If you found something on this page that looks wrong, weak, or sloppy — please say so. Security receipts are only useful if the people reading them push back when receipts don't add up.

info@noschmarrn.dev