
PROJECT

Taproot

ACTIVE

Self-hosted homelab on repurposed hardware

Proxmox VE · Docker · LXC · ZFS · Ubuntu

ABOUT

Taproot is an old Windows PC converted into a Proxmox VE 9.1.1 homelab server. It runs LXC containers on a ZFS storage pool, with Docker inside each container for service isolation. The name comes from the idea of a root system — infrastructure that supports everything growing above it.

The design principle is progressive self-sufficiency. The build starts with fundamentals: monitoring, password management, and remote access. Each step is a real service running on real hardware, not a tutorial exercise. The constraint of limited hardware (single-disk ZFS, no redundancy) forces clarity about what actually needs protection.

Uptime Kuma is live on docker-host (CT 100) — health monitoring for all services. Vaultwarden deployment is in progress on CT 101 — self-hosted password management. Tailscale and a Claude-backed service are next.

CHANGELOG

TAPROOT · infrastructure · feature

PI-4 deployed — ingest, chunking, and retrieval live on Taproot

Features

  • PI-4 fully deployed — /ingest, /chunks/:projectId, and /retrieve endpoints live at research.rootstack.dev
  • Tuff Shed PDF ingested end-to-end — 13 pages, 4 chunks, BM25 retrieval returning correct assembly steps
  • Research service upgraded to v1.1.0 — Brave Search + ManualsLib + manufacturer direct + full ingest pipeline in a single container

Bug Fixes

  • Node 18 → Node 20 in Dockerfile — axios 1.7+ pulls in undici, which requires the File global introduced in Node 20; the container was crash-looping
  • ZFS cache permissions set on Proxmox host — chmod 777 /taproot-data/research-cache required at the host level; LXC bind mount is read-only from inside the container
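
The Node bump above amounts to a base-image change. A sketch of the Dockerfile, assuming a typical npm layout and a `server.js` entrypoint (both assumptions, not the actual file):

```dockerfile
# axios 1.7+ depends on undici, which needs the File global (Node 20+);
# on node:18 the container crash-loops at startup
FROM node:20-alpine

WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# the research service listens on 3002
EXPOSE 3002
CMD ["node", "server.js"]
```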

Infrastructure

  • homelab/services/research/ is now the canonical source — Brave Search scrapers added, PI-4 files (ingest, cache, retrieve) merged, port locked to 3002
  • ZFS bind-mount added to CT 100 via pct set 100 -mp0 — dataset taproot-data/research-cache (already existed from Step 1) mounted at /taproot-data/research-cache
  • Old /opt/research-api (unused TypeScript code) removed from docker-host
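
The bind-mount step, sketched as the host-side commands (paths from the entries above; restarting the container to pick up the mount is an assumption about this setup):

```shell
# On the Proxmox host: expose the ZFS dataset to CT 100 as mount point 0
pct set 100 -mp0 /taproot-data/research-cache,mp=/taproot-data/research-cache

# Permissions are enforced host-side; open them here, not inside the
# container, where the bind mount is effectively read-only
chmod 777 /taproot-data/research-cache

# Restart the container so the new mount point appears
pct reboot 100
```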

Lessons

  • Docker images survive source directory cleanup — container keeps running from the cached image even if the build directory is gone; only matters for future rebuilds
  • Merging two diverged implementations requires picking one as the base and grafting from the other — keeping the deployed service's search providers and adding the PI-4 endpoints was cleaner than replacing everything

TODO

  • PI-5: build api/_lib/taproot.ts helper and inject approved chunks into api/project-intake.ts draft prompt

TAPROOT · infrastructure · feature

Lost session recovery — research service confirmed live, PI-1 through PI-4 discovered complete

Features

  • Research service verified end-to-end — https://research.rootstack.dev/health returns 200 from cellular, Tuff Shed query returns manufacturer direct + Brave + fallback results
  • PI-1 through PI-4 discovered complete from retrospective entry — product detection, resource approval UI, ingest pipeline, and per-resource status badges all built in the lost session

Infrastructure

  • Deployed service differs from repo: JS/Express, port 3002, Brave Search + ManualsLib + manufacturer direct — TypeScript research-api archived to _archive/
  • config.yml synced with server — research.rootstack.dev → localhost:3002 entry added
  • Research service source lives in ~/Projects/homelab/services/research/ — deployed version is PI-2 only; PI-4 endpoints written but not deployed (ZFS bind-mount + service rebuild pending)
  • homelab CLAUDE.md, global CLAUDE.md, and product intelligence plan updated to reflect actual state

Lessons

  • Lost session state is recoverable — docker ps and ls /opt reconstruct what was deployed; retrospective Save State entries reconstruct what was built
  • cut -d= -f2 silently truncates base64 keys with trailing =; grep -oP '(?<=KEY=).*' handles them correctly
  • The repo and the server diverged during development — implementation language, port, and architecture all changed; the repo was never updated
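
The base64-truncation pitfall from the Lessons above, reproduced with a hypothetical key (the name `API_KEY` and the value are illustrative):

```shell
line='API_KEY=c2VjcmV0a2V5=='

# cut splits on every '=': field 2 stops at the padding and silently drops it
echo "$line" | cut -d= -f2                  # c2VjcmV0a2V5

# a PCRE lookbehind keeps everything after the key name, padding included
echo "$line" | grep -oP '(?<=API_KEY=).*'   # c2VjcmV0a2V5==
```

`cut -d= -f2-` (note the trailing dash) also works: it keeps fields 2 onward and rejoins them with the delimiter, so the padding survives.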

TODO

  • Deploy PI-4: ZFS bind-mount into CT 100 → rebuild research service → validate /ingest and /retrieve against a real PDF
  • PI-5: inject approved chunks into api/project-intake.ts draft prompt

TAPROOT · infrastructure

Steps 6 + 8 complete — Vaultwarden live, rootstack.dev tunnel operational

Features

  • Vaultwarden deployed and operational — password manager live at vault.rootstack.dev with real HTTPS
  • Bitwarden extension connected to self-hosted vault — all Taproot credentials transferred and accessible
  • Cloudflare Tunnel established (UUID 5f21212a-2895-42f2-9b77-ffd6056af6cf) — Taproot services reachable externally without port forwarding or exposing home IP
  • rootstack.dev registered via Cloudflare Registrar — infrastructure domain live
  • DNS routes configured: status.rootstack.dev → Uptime Kuma, vault.rootstack.dev → Vaultwarden, research.rootstack.dev pre-configured for research service
  • ISSUE-017 resolved — Vercel serverless can now reach Taproot, unblocking the product intelligence feature

Bug Fixes

  • Ubuntu 24.04 SSH blocks root via three separate mechanisms (PermitRootLogin, PasswordAuthentication, and drop-in sshd_config.d overrides) — sshd_config on CT 100 also had immutable bit set, requiring chattr -i before sed could edit it
  • docker-compose-v2 conflicts with Docker's built-in compose plugin — removed; docker compose used directly
  • cloudflared service install failed with "cannot determine default configuration path" — fixed with explicit --config /etc/cloudflared/config.yml flag
  • GPG dearmor command truncated when piped in SSH terminal — split into two steps: curl to temp file, then gpg separately
  • Vaultwarden enforces HTTPS for all operations — HTTP access non-functional by design; Cloudflare Tunnel resolves this with real TLS rather than fighting self-signed cert workarounds
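
The three SSH layers from the first bug fix above can be audited in one pass before editing anything; a sketch, assuming stock Ubuntu 24.04 paths:

```shell
# Layers 1 and 2: directives in the main config
grep -En 'PermitRootLogin|PasswordAuthentication' /etc/ssh/sshd_config

# Layer 3: drop-in overrides win over the main file, so check them too
grep -REn 'PermitRootLogin|PasswordAuthentication' /etc/ssh/sshd_config.d/

# The immutable bit makes sed edits fail; look for 'i' in the flags
lsattr /etc/ssh/sshd_config
chattr -i /etc/ssh/sshd_config   # only if the flag is set

systemctl restart ssh            # apply after editing
```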

Infrastructure

  • CT 101 recreated fresh (Ubuntu 24.04, nesting enabled) after SSH lockdown and package conflicts proved unresolvable on the original container
  • cloudflared installed on docker-host (CT 100) via official Cloudflare apt repo; tunnel config at /etc/cloudflared/config.yml
  • Config written locally, scp'd to server — heredoc-in-remote-terminal pattern retired; local-write + scp is now standard for any multi-line file creation on remote hosts
  • Product Intelligence plan confirmed (PI-1 through PI-6) — manufacturer site recon complete, hybrid scraping strategy validated (HTTP scraper + aggregators first, Firecrawl as last resort)
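
The local-write + scp pattern, sketched against the tunnel config from this entry (tunnel UUID and hostnames from the entries above; the backend addresses and ports are assumptions):

```shell
# Write the multi-line file locally; heredocs are reliable here,
# just not inside a remote terminal session
cat > config.yml <<'EOF'
tunnel: 5f21212a-2895-42f2-9b77-ffd6056af6cf
credentials-file: /etc/cloudflared/5f21212a-2895-42f2-9b77-ffd6056af6cf.json
ingress:
  - hostname: status.rootstack.dev
    service: http://localhost:3001        # Uptime Kuma on CT 100
  - hostname: vault.rootstack.dev
    service: http://192.168.1.165:80      # Vaultwarden on CT 101; port assumed
  - service: http_status:404
EOF

# Then copy it into place in one step
scp config.yml root@docker-host:/etc/cloudflared/config.yml
```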

Lessons

  • Ubuntu 24.04 SSH lockdown is multi-layered — fixing one mechanism while others remain active wastes an entire session; audit all three before starting
  • The right solution is less work than the wrong one — Vaultwarden's HTTPS requirement wasn't a blocker; it was a nudge toward finishing Step 8, and fighting it would have cost more time than the tunnel took
  • Heredoc in any remote terminal is unreliable — write multi-line configs locally and scp; this eliminates an entire class of paste-corruption errors
  • Manufacturer content is mostly accessible via simple HTTP scraper — Firecrawl is last-resort, not primary; most products have direct PDF URLs or are covered by aggregators (ManualsLib, Manualzz)

TODO

  • Change Vaultwarden ADMIN_TOKEN from placeholder to a strong credential
  • Tailscale (Step 8d) — private device-to-device access separate from public tunnel
  • Step 7: ClaudeVault
  • Step 9: Research Service deployment (needed for PI-2)
  • Execute product intelligence plan — PI-1 through PI-6

TAPROOT · NEW PROJECT · launch · infrastructure

Taproot — debut

Features

  • Taproot is an old Windows PC converted to a Proxmox VE 9.1.1 homelab server — purpose-built for self-hosting services and learning infrastructure hands-on
  • The design principle is progressive self-sufficiency: start with monitoring and password management, build toward hosting AI services and running production workloads off the cloud
  • Foundation complete — Proxmox installed, ZFS single-disk pool on 2TB HDD, LXC containers running Ubuntu 24.04 with Docker, Uptime Kuma live on port 3001

Lessons

  • A single-disk ZFS pool has no redundancy, but that's acceptable for a learning server — the constraint forces clarity about what data actually needs protection
  • Proxmox's browser console has a display glitch that drops output; SSH into containers is the reliable path for any real terminal work
  • ISP-level DNS blocking (outbound port 53 to 8.8.8.8) surfaces early — router-as-DNS is the practical workaround, not a configuration mistake to fix later

TAPROOT · infrastructure

Steps 5–6 — Uptime Kuma live, Vaultwarden container staged

Features

  • Uptime Kuma deployed and running — monitoring dashboard live at port 3001 on docker-host (CT 100)
  • Vaultwarden container created (CT 101, IP 192.168.1.165) — Ubuntu 24.04, Docker installed, compose deploy staged
  • Credential hygiene workflow established — Notepad scratch pad pattern now the standard for all multi-step build sessions

Infrastructure

  • docker-host (CT 100) confirmed fully operational — Docker CE, Compose plugin, hello-world verified
  • Vaultwarden container mirrors docker-host setup: same LXC config, same Docker install sequence
  • Uptime Kuma docker-compose.yml deployed to /opt/uptime-kuma with restart: unless-stopped
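
A minimal sketch of what that compose file likely contains; the port and restart policy are from the entry above, while the image tag and volume name are assumptions:

```yaml
# /opt/uptime-kuma/docker-compose.yml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
    restart: unless-stopped

volumes:
  uptime-kuma-data:
```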

Bug Fixes

  • Docker apt sources malformed — command substitution in Proxmox console split across lines; fixed by hardcoding arch=amd64 and codename=noble directly in sources entry
  • Ubuntu 24.04 blocks root SSH by default — fixed with sed replace on PermitRootLogin in sshd_config
  • Gateway misconfigured to 192.168.100.1 during Proxmox install — corrected to 192.168.1.1 in network UI
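
The hardcoded sources entry, sketched for an Ubuntu 24.04 (noble) container; the keyring path follows the current Docker install docs and is an assumption about this setup:

```shell
# No $(dpkg --print-architecture) or $(lsb_release -cs): the Proxmox
# console splits command substitutions across lines, corrupting the entry
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu noble stable" \
  > /etc/apt/sources.list.d/docker.list
apt-get update
```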

Lessons

  • The Proxmox web console is unreliable for anything interactive — SSH first, console only as fallback
  • Interactive commands (passwd) produce no visible output in the console; non-interactive alternatives (echo 'root:pass' | chpasswd) are the only reliable path
  • Credential amnesia is a real session hazard — the Notepad rule exists for a reason; enforce it at session start, not after the first forgotten password

TODO

  • Complete Vaultwarden compose deploy (generate ADMIN_TOKEN, write docker-compose.yml, docker compose up -d)
  • SSH still failing on CT 101 — reset root password via chpasswd, retry
  • Transfer all session credentials from Notepad into Vaultwarden once live
  • Step 7: ClaudeVault (CT 102)
  • Step 8: Tailscale on Proxmox host
  • Step 9: git init homelab, initial commit