> [!IMPORTANT]
> The project is currently transitioning from a single-node Debian server running k0s to a multi-node, highly available setup using Proxmox, Talos, and Rook.io.
This project utilizes Infrastructure as Code and GitOps to automate the provisioning, operation, and updating of self-hosted services in my homelab.
- Automated deployment of all services using ArgoCD
- Automated updates of all services using Renovate
- Automated record updates for local DNS (AdGuard Home) using external-dns
- Automated record updates for public DNS (Cloudflare) using cert-manager.io
- Automated certificate creation and renewal using cert-manager.io
- Automated backups using restic
- Media Server setup using Plex
- Media Automation using Radarr, Sonarr, Lidarr
- Password Management with Vaultwarden
- Kubernetes Native Storage using Rook.io
- Automated Kubernetes backups using Velero
- Automated Database Setup and Backups using CloudNativePG
- Monitoring setup using Grafana, Grafana Loki, Grafana Mimir, and Grafana Alloy
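As a sketch of how the ArgoCD piece above ties the services to Git, an `Application` manifest for one of them might look like the following. The repository URL, path, and namespaces are placeholders, not values taken from this repo:

```yaml
# Hypothetical ArgoCD Application for one self-hosted service (Vaultwarden).
# repoURL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vaultwarden
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab.git  # placeholder repository
    targetRevision: main
    path: apps/vaultwarden
  destination:
    server: https://kubernetes.default.svc
    namespace: vaultwarden
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift back to the Git state
```

With `syncPolicy.automated` set like this, ArgoCD keeps the cluster converged on whatever is in Git, which is what makes the fully automated deployment and update flow (together with Renovate bumping versions in Git) work.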
The cluster consists of three nodes. Node 1 is the primary storage server and holds all of the spinning-rust HDDs. Nodes 2 and 3 provide fast NVMe storage and fast networking for Rook.io/CephFS storage.
The hardware was specifically selected to reach the low-power C10 CPU package state, resulting in a power draw of around 14 watts each for Nodes 2 and 3. Node 1 draws more power due to its HDDs.
```mermaid
flowchart TD
    1[Node 1]
    2[Node 2]
    3[Node 3]
    sw[10 Gbit/s Switch]
    sw <--> 1 & 2 & 3
    1 --> p1[Proxmox]
    2 --> p2[Proxmox]
    3 --> p3[Proxmox]
    p1 --> nas[TrueNAS]
    nas --> nfs[NFS Server\nfor Kubernetes]
    p1 ---> cp1[Talos\nKubernetes\nControl-Plane]
    p2 ---> cp2[Talos\nKubernetes\nControl-Plane]
    p3 ---> cp3[Talos\nKubernetes\nControl-Plane]
    p1 ---> w1[Talos\nKubernetes\nWorker]
    p2 ---> w2[Talos\nKubernetes\nWorker]
    p3 ---> w3[Talos\nKubernetes\nWorker]
```
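Each Talos VM in the diagram is configured declaratively. A minimal machine-config patch for one of the control-plane VMs might look like the following sketch; the hostname, disk, and endpoint address are placeholders, not the real values used in this cluster:

```yaml
# Hypothetical Talos machine-config patch for one control-plane VM.
# Hostname, install disk, and endpoint are placeholder values.
machine:
  network:
    hostname: talos-cp-1
  install:
    disk: /dev/vda  # virtual disk presented by Proxmox
cluster:
  controlPlane:
    endpoint: https://10.0.0.10:6443  # placeholder shared API endpoint
```

Patches like this are applied per node on top of the generated base config, so all nine Talos VMs can share one base while differing only in hostname and role.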