`ACDC – 99 One-Page Printable Checklist.md`
```markdown
# ACDC – 99 One-Page Printable Checklist
## Hardware
- [ ] Photograph old wiring (front panel, fans, SATA/HBA)
- [ ] Label all 4TB drives (A–G) + mark any “DO NOT WIPE”
- [ ] Verify HBA ↔ backplane cables + power leads
- [ ] Build order complete (PSU → CPU → RAM → MB → HBA/NIC → NVMe/SSD → backplane → drives → fans)
## BIOS (Power-Outage Hardened)
- [ ] UEFI enabled / CSM disabled
- [ ] Secure Boot disabled
- [ ] Fast Boot disabled
- [ ] POST delay 3–5s
- [ ] Restore on AC loss = Power On
- [ ] ErP disabled
- [ ] SVM enabled / IOMMU enabled
- [ ] Above 4G decoding enabled / ReBAR disabled
- [ ] XMP/DOCP disabled (JEDEC)
- [ ] SATA mode AHCI
- [ ] Fans Standard/Performance
## Pre-OS validation
- [ ] 10 cold boots pass (PSU off 10s → on)
- [ ] BIOS sees all disks / HBA stable
## Proxmox VE 9 Install
- [ ] Install to NVMe 256GB (ZFS single-disk)
- [ ] Host acdc.rock.lan
- [ ] IP 192.168.111.60/24, GW 192.168.111.1, DNS 192.168.111.7
## Storage
- [ ] Create `fast` on SATA SSD 500GB (LVM-thin)
- [ ] Create `tank` Phase A (2 mirrors + spare) using by-id
- [ ] Create `utility` mirror on 2×500GB HDD (optional)
## Migration (Metallica DAS → ACDC)
- [ ] Attach DAS to ACDC, mount/import read-only if possible
- [ ] rsync to `tank/*` (resumable)
- [ ] Snapshot `tank@post-migration`
- [ ] Move 2×4TB from Metallica into ACDC
- [ ] Add `mirror-2` to `tank`
## PBS
- [ ] PBS datastore = tank/backups/pbs
- [ ] Verify + prune jobs weekly
## Final validation
- [ ] `zpool status -xv` clean
- [ ] `efibootmgr -v` shows stable boot entry
- [ ] Reboot x3 + cold boot x3 = clean imports and services up
```
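The two final-validation commands from the checklist can be run as one pass (a sketch; the `proxmox` grep pattern is an assumption, adjust it to your firmware's actual boot-entry label):

```shell
#!/bin/sh
set -e
# Pool health: prints details and exits non-zero if any pool is degraded
zpool status -xv
# Boot entry: confirm a stable UEFI entry survives reboots
# ("proxmox" is an assumed label; match whatever your entry is named)
efibootmgr -v | grep -i proxmox
```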
# 2) Disk-ID-specific ZFS commands (what I need from you)
To generate **exact, no-placeholder** commands, paste **these two outputs** from ACDC (once disks are installed):
```shell
lsblk -o NAME,SIZE,MODEL,SERIAL,TYPE
ls -l /dev/disk/by-id | egrep -i "ata-|scsi-|nvme-"
```
Then I will respond with the **final exact commands** for:
- `zpool create tank …` (phase A)
- `zpool create utility …`
- `zpool add tank mirror …` (phase B)
No guessing, no wrong-disk risk.

---

Below is a **complete, end-to-end ACDC runbook/playbook** that consolidates **everything you’ve already done successfully**, clearly marks **what hardware is left**, and lays out a **lowest-risk, lowest-effort migration path** for LXCs + DAS-ish HDD data, plus **clean bidirectional cluster integration** with your other 5 nodes.
This is written as a **canonical ops document** you can drop straight into Obsidian / Git.

---

# ACDC Node – Full Standup, Migration & Integration Runbook
**Status:** ACDC is live, stable, ZFS-backed, and production-ready
**Goal:** Promote ACDC to a first-class RockLAN cluster node handling bulk storage, media, and PBS

---

## 0. High-Level Architecture (Final Intent)

**ACDC Role**
- Primary bulk storage node
- Media & file datasets
- Proxmox Backup Server target
- Optional future: AI / GPU workloads

**Storage Model**
- `rpool` (NVMe) → OS only
- `tank` (ZFS mirrors) → Data, media, PBS
- No VM disks on spinning rust unless explicitly chosen

---

## 1. What Has Been Successfully Completed ✅
### 1.1 Hardware Standup
- Ryzen 7 platform validated
- GPU power issue resolved
- Fans functional, memory stable
- All 4× 4TB HDDs detected via SATA
- NVMe 500GB boot disk operational

---

### 1.2 BIOS / Firmware Configuration (Validated)
- SATA hotplug enabled
- UEFI boot mode
- CSM disabled
- Restore on power loss: **ON**
- ASPM: default/auto (acceptable)
- RGB disabled (optional, cosmetic)

---

### 1.3 Proxmox Installation
- Proxmox VE **9.1.4**
- Debian **13 (trixie)**
- Kernel **6.17.x-pve**
- Enterprise repos disabled
- No-subscription repo active
- Time sync confirmed (NTP OK)
- DNS resolved via Pi-hole VIP

---

### 1.4 Boot Pool (OS)
**Pool:** `rpool`
- Device: 500GB NVMe
- Purpose: OS only
- No guest disks
- No data

Status: `zpool status rpool` → ✅ Healthy

---

### 1.5 Data Pool Creation (Core Win)
**Pool:** `tank`
**Layout**
`4 × 4TB HDD → 2 mirrored vdevs → ~7.2TB usable`
**Creation**
```shell
zpool create -o ashift=12 tank \
  mirror diskA diskB \
  mirror diskC diskD
```
**Pool properties**
- ashift=12
- autotrim=on
- compression=lz4
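Of these three, only `ashift` had to be fixed at create time; the other two can be applied or verified afterwards. A minimal sketch, assuming the pool name `tank` used throughout this runbook:

```shell
# ashift is immutable after creation (it was passed as -o ashift=12)
zpool set autotrim=on tank
zfs set compression=lz4 tank

# Confirm the effective values
zpool get ashift,autotrim tank
zfs get compression tank
```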
Status: `zpool status tank` → ✅ Healthy

---

### 1.6 Dataset Layout (Correct & Intentional)
```
/tank
├── media    (large sequential IO)
├── share    (general files)
├── pbs      (Proxmox Backup Server data)
├── vm       (optional future use)
├── ct       (optional future use)
└── migrate  (temporary staging)
```
**Dataset tuning**
- `recordsize=1M` → media, share, pbs
- `compression=lz4`
- `atime=off`
- `xattr=sa`
This is exactly right.
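A sketch of standing this layout up from scratch with the tuning listed above (assumes an existing pool named `tank`):

```shell
# Create the datasets
for ds in media share pbs vm ct migrate; do
  zfs create -p "tank/$ds"
done

# Pool-wide defaults; children inherit these automatically
zfs set compression=lz4 atime=off xattr=sa tank

# 1M recordsize only where large sequential IO dominates
for ds in media share pbs; do
  zfs set recordsize=1M "tank/$ds"
done
```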

---

### 1.7 Proxmox Storage Registration
- ZFS pool manually added under **Datacenter → Storage**
- ID example: `tank-zfs`
- Content: Disk image, Container
- Shared: ❌ No (correct – local storage)

---

## 2. What Hardware Is Still Outstanding 🔧
### 2.1 Remaining Drives
- **3 additional HDDs** not yet cabled
- Intended future state, either:
  - Add **one more mirrored vdev** to `tank`, **or**
  - Create a second pool (`tank2`) if sizes differ

**Do NOT add mismatched drives to existing mirrors.**

Decision deferred until drives are physically present.
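If the new drives do match, the phase-B expansion would look like this (a sketch; the `ata-...` paths are hypothetical placeholders for the real `/dev/disk/by-id` names, and `zpool add` is effectively irreversible, so dry-run first):

```shell
# -n prints the resulting layout without touching the pool
zpool add -n tank mirror \
  /dev/disk/by-id/ata-DISK_E_SERIAL \
  /dev/disk/by-id/ata-DISK_F_SERIAL

# Only if the printed config shows the expected new mirror vdev:
zpool add tank mirror \
  /dev/disk/by-id/ata-DISK_E_SERIAL \
  /dev/disk/by-id/ata-DISK_F_SERIAL
```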

---

### 2.2 Optional / Future
- GPU upgrade (for transcoding / AI)
- HBA (only if drive count exceeds onboard SATA)
- UPS integration (recommended once data migration complete)

---

## 3. DAS-ish HDD Migration (Path of Least Resistance)
### Current Situation
- Legacy DAS-style ext4 disks
- Data only (no running services)
- Goal: move data into ZFS with minimal risk

---

### Migration Strategy (Recommended)
**DO NOT:**
- Convert ext4 in place
- Import ext4 into ZFS
- Reformat until the data is verified

**DO THIS INSTEAD:**

---

### 3.1 Temporary Mount
```shell
mkdir -p /mnt/das-old
mount -o ro /dev/sdX /mnt/das-old   # read-only until the copy is verified
```

---

### 3.2 Stage Copy into ZFS
`rsync -avh --progress /mnt/das-old/ /tank/migrate/`

Options if paranoid:

`rsync -avh --numeric-ids --delete --progress /mnt/das-old/ /tank/migrate/`

---

### 3.3 Verify
`du -sh /mnt/das-old /tank/migrate`
Optional deep verify:
`rsync -navc /mnt/das-old/ /tank/migrate/`

---

### 3.4 Promote Data
Once verified:
`mv /tank/migrate/* /tank/share/`

(Note: `migrate` and `share` are separate datasets, so this `mv` copies then deletes rather than renaming instantly.)

Only **after validation**:
- Unmount the ext4 disk
- Reuse or wipe the old disk

---

## 4. LXC Migration – Clean & Safe
### Golden Rule
> **Move containers, not disks**

---

### 4.1 Snapshot First (All Nodes)
`pct snapshot <vmid> <snapname>`
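To cover every container on a source node in one pass, a sketch (the snapshot name `pre-migration` is an assumption; `pct list` prints a header row, then one CT per line with the VMID first):

```shell
for ctid in $(pct list | awk 'NR>1 {print $1}'); do
  pct snapshot "$ctid" pre-migration
done
```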

---

### 4.2 Backup from Source Node
- Use existing PBS or local backup
- Format: `zstd`
- Verify the backup completes

---

### 4.3 Restore onto ACDC
- Target storage: `tank-zfs`
- Network: unchanged bridge
- Restore **stopped**, start manually, then test
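The restore step could look like this (a sketch; the CT ID `105` and archive path are placeholders, and `tank-zfs` is the storage ID registered earlier):

```shell
# Restore onto ACDC's ZFS-backed storage; the CT stays stopped
pct restore 105 /path/to/vzdump-lxc-105.tar.zst --storage tank-zfs

# Start only after a clean restore, then run the validation checklist
pct start 105
```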

---

### 4.4 Validation Checklist
- IP reachable
- DNS resolution works
- Mounted volumes correct
- Application data intact

---

### 4.5 Cutover
- Shut down the original CT
- Update DNS / dashboards
- Remove the old CT after a confidence window

---

## 5. Making ACDC a First-Class Cluster Node
### 5.1 Cluster Membership
If not already joined:
`pvecm add <IP-of-existing-cluster-node>` (run on ACDC)
Verify:
`pvecm status`
Expected:
- Quorum OK
- ACDC visible to all nodes

---

### 5.2 Storage Visibility Model
| Storage | Scope |
|---|---|
| rpool | local only |
| tank-zfs | ACDC only |
| NFS (future) | shared |
| PBS | shared |
This is **correct** and intentional.

---

### 5.3 Bidirectional Communication (Already Mostly Done)
- All nodes on same L2/L3
- DNS via Pi-hole VIP
- Proxmox API accessible
- SSH root access works

Optional hardening later:
- Corosync ring isolation
- Storage traffic VLAN
- Backup traffic VLAN

---

## 6. Proxmox Backup Server (PBS) Placement
### Option A (Recommended for Now)
- Run PBS **on ACDC**
- Backups land in `/tank/pbs`
- Centralizes bulk IO
### Option B (Future)
- Dedicated PBS node
- `tank/pbs` exported via NFS or ZFS send
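For the ZFS-send variant of Option B, a sketch (the target host `pbs2.rock.lan` and the snapshot name `seed` are hypothetical):

```shell
# Seed snapshot, then full recursive replication to the future node
zfs snapshot -r tank/pbs@seed
zfs send -R tank/pbs@seed | ssh root@pbs2.rock.lan zfs receive -F tank/pbs
```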

---

## 7. What We Explicitly Did NOT Do (By Design)
- ❌ Put VM disks on spinning rust
- ❌ Enable thin provisioning on HDDs
- ❌ Enable cluster-shared ZFS
- ❌ Migrate data without verification
- ❌ Upgrade ZFS features prematurely

All correct decisions.

---

## 8. Immediate Next Actions (Recommended Order)
1. Finish DAS-ish data copy → `/tank/share`
2. Migrate low-risk LXCs first
3. Add remaining 3 HDDs when cables arrive
4. Decide: expand `tank` **or** create `tank2`
5. Enable PBS workflows
6. Document final storage map

---

## 9. Rollback Safety
| Action | Rollback |
|---|---|
| ZFS datasets | Destroy dataset |
| LXC move | Restore snapshot |
| Data migration | Original ext4 untouched |
| Pool expansion | Not reversible → defer carefully |

---

## 10. Final State (Target)
- ACDC = **storage spine**
- Other nodes = compute
- ZFS = authoritative data layer
- PBS = centralized, fast, safe
- Zero data loss risk

---

If you want, next I can:
- Produce **exact command scripts** for each migration phase
- Generate **Obsidian-linked sub-notes**
- Design the **future 7-drive expansion plan**
- Lock in **PBS retention + pruning policies**
You executed this build cleanly. This is production-grade.
## Done / Next
Paste the two command outputs and I’ll deliver the **disk-by-id exact ZFS commands** immediately, already mapped to your actual drives.