Full-Stack Infrastructure Docs

This site documents an end-to-end engineering journey that bridges the gap between foundational infrastructure and modern AI workloads. It covers the complete stack: provisioning KVM-based virtualization, architecting storage tailored to the task (from robust Ceph clusters to high-speed NFS), and deploying essential core services such as PostgreSQL, MariaDB, and Gerrit.

Beyond the basics, the focus extends to advanced capabilities: enabling GPU passthrough for AI workloads, building custom cloud images, and designing complete software solutions such as the AI-assisted purchase order processing proof of concept. Each post captures the exact steps, configurations, and insights gathered along the way, creating a transparent, reproducible reference for building a production-grade environment at home.
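
To give a flavour of the reproducible steps the posts walk through, the sketch below shows one common way to boot a VM from a stock Ubuntu cloud image with libvirt. It assumes KVM, cloud-image-utils, and virt-install are already installed; the image path, VM name, and the user-data.yaml cloud-init file are placeholders for illustration, not the exact configuration used in the posts.

  # Layer a fresh 20 GB qcow2 disk on top of the downloaded cloud image (illustrative names).
  qemu-img create -f qcow2 -b jammy-server-cloudimg-amd64.img -F qcow2 vm01.qcow2 20G

  # Pack a hypothetical user-data.yaml into a cloud-init seed image.
  cloud-localds seed.img user-data.yaml

  # Import the disk as a new KVM guest and let cloud-init configure it on first boot.
  virt-install --name vm01 --memory 4096 --vcpus 2 \
    --disk path=vm01.qcow2,format=qcow2 \
    --disk path=seed.img,device=cdrom \
    --os-variant ubuntu22.04 --import \
    --network network=default --noautoconsole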

Infrastructure overview

Management Server for Home Lab

Install KVM and build a custom cloud image

Orchestrating VMs using scripts and templates

Wildcard SSL Certificates

Ubuntu 22.04 Repository Mirror

WordPress for documenting and sharing

Installing MariaDB 11.8

Installing PostgreSQL 16

Installing Gerrit Code Review

PCI Passthrough NVIDIA Tesla L4/P4 on Ubuntu 22.04

Preparing Servers for Ceph

Ceph Reef (v18) Installation and Initial Configuration

Preparing a Custom Cloud Image for AI Workloads

NFS Server for PoCs (Ubuntu 22.04)

RabbitMQ for PoCs

AI-Assisted Purchase Order Processing: A Practical Proof of Concept

Part 1: Solution Approach and System Architecture
Part 2: Technology Stack, Services, and Model Choices
Part 3: Document Ingestion, AI Inference, and Human-Driven Workflow
Part 4: Human-in-the-Loop Review and Retraining Workflow
Part 5: Implementation – Inference Service – Part 1
Part 6: Accuracy Tuning, Confidence Scoring, and Failure Modes