My Entire Home Lab is Just 4 Docker Containers
Introduction: The Philosophy of a Minimalist Docker Home Lab
In the ever-evolving landscape of self-hosting and personal cloud infrastructure, complexity often masquerades as capability. We observe many enthusiasts deploying massive Kubernetes clusters or intricate virtual machine setups that demand significant overhead, resource allocation, and constant maintenance. However, true efficiency lies in simplification. We posit that a robust, secure, and highly functional home lab can be achieved with a remarkably lean architecture. Specifically, we have distilled our entire digital ecosystem into just four essential Docker containers. This approach minimizes the attack surface, reduces power consumption, and simplifies management without sacrificing functionality.
This article details the architecture, deployment, and optimization of a home lab comprising four distinct services: a reverse proxy for secure access, a media server for content management, a file synchronization tool for data continuity, and a lightweight automation engine. By leveraging Docker’s portability and isolation, we create a self-sustaining environment that operates efficiently on modest hardware, such as a Raspberry Pi 4 or an Intel NUC. We will explore the technical implementation of each container, the networking configuration that binds them, and the strategies for ensuring persistent data and robust security.
The Core Architecture: Why Four Containers are Sufficient
The decision to limit our home lab to four containers is strategic. It addresses the most critical needs of a modern household: secure remote access, centralized media consumption, file synchronization across devices, and basic automation. This minimalist stack avoids “container bloat,” a common issue where dozens of microservices consume more resources than they provide utility.
The Four Pillars of Our Infrastructure
- Reverse Proxy (Nginx Proxy Manager): The gatekeeper that manages secure HTTPS connections and routes traffic.
- Media Server (Jellyfin): The entertainment hub that organizes and streams video and audio content.
- File Synchronization (Syncthing): The private cloud alternative for syncing data without third-party providers.
- Automation (Node-RED): The logic layer that orchestrates interactions between devices and services.
This stack ensures that we cover networking, storage, media, and automation. Any additional services are integrated via these core pillars rather than adding new standalone containers.
Prerequisites and Hardware Requirements
Before deploying the stack, we must establish the hardware foundation. While Docker is versatile, performance depends on the underlying resources. We recommend a dedicated machine running a Linux distribution, preferably Ubuntu Server or Debian, due to their stability and extensive community support.
Hardware Specifications:
- CPU: A modern 64-bit processor (Intel i3/i5 or ARM Cortex-A72 for Raspberry Pi 4). Hardware transcoding support (Intel Quick Sync or NVIDIA NVENC) is recommended if media transcoding is required.
- RAM: Minimum 4GB, though 8GB is preferred to accommodate the JVM used by some applications and file caching.
- Storage: A reliable SSD for the operating system and Docker configuration (at least 64GB). For media and file storage, high-capacity HDDs configured in a RAID array or mergerfs/snapraid setup are ideal for data redundancy and capacity.
- Network: Gigabit Ethernet is non-negotiable for high-throughput file transfers and streaming. Wi-Fi is insufficient for a serious home lab.
Software Prerequisites:
- Docker Engine: The core runtime.
- Docker Compose: The tool we use to define and run multi-container applications.
- Terminal Access: SSH access to the host machine for configuration.
We assume a basic familiarity with the Linux command line. All commands presented should be run as a user with sudo privileges.
The First Container: Nginx Proxy Manager (The Gateway)
The most critical component of any exposed home lab is security. Directly exposing internal ports to the internet is a significant risk. We solve this by implementing Nginx Proxy Manager (NPM), a containerized web-based interface for managing Nginx proxy hosts with a simple, elegant UI.
Why Nginx Proxy Manager?
While other reverse proxies like Traefik exist, NPM offers a balance of power and simplicity. It handles SSL termination automatically using Let’s Encrypt, managing wildcard certificates or specific subdomains with ease. It provides a visual dashboard to monitor proxy status and access logs, which is invaluable for troubleshooting.
Docker Compose Configuration
We define the NPM service in our docker-compose.yml file. We map the necessary ports for HTTP (80), HTTPS (443), and the admin interface (81). Crucially, we mount a volume for persistent storage of SSL certificates and configuration data.
version: '3'
services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: npm
    restart: unless-stopped
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    volumes:
      - ./data/npm/data:/data
      - ./data/npm/letsencrypt:/etc/letsencrypt
    networks:
      - home_lab_net
Security and Initial Setup
Upon first launch, we access the admin panel at http://<host-ip>:81. The default credentials (admin@example.com / changeme) must be changed immediately. We then configure the first proxy host, pointing a domain like media.example.com to the internal IP and port of our Jellyfin container. NPM automatically requests a valid Let's Encrypt certificate, ensuring that all traffic is encrypted. This layer becomes the sole controlled entry point, terminating external connections before they reach internal services.
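Beyond the UI, NPM's admin panel is backed by a JSON API, which is handy for provisioning proxy hosts reproducibly. The sketch below builds the two requests involved; the endpoint paths and payload field names are assumptions based on NPM's API and may need adjusting for your version, and the host IP is a placeholder.

```python
"""Sketch: scripting Nginx Proxy Manager's REST API (field names assumed)."""
import json
import urllib.request

NPM_URL = "http://192.168.1.10:81"  # placeholder host IP


def login_request(identity: str, secret: str) -> urllib.request.Request:
    """Build the POST /api/tokens request NPM uses for authentication."""
    body = json.dumps({"identity": identity, "secret": secret}).encode()
    return urllib.request.Request(
        f"{NPM_URL}/api/tokens", data=body,
        headers={"Content-Type": "application/json"}, method="POST")


def proxy_host_payload(domain: str, host: str, port: int) -> dict:
    """Payload for creating a proxy host (assumed schema)."""
    return {
        "domain_names": [domain],
        "forward_scheme": "http",
        "forward_host": host,   # container name on the bridge network
        "forward_port": port,
        "ssl_forced": True,
    }
```

The returned token would then be sent as a `Authorization: Bearer` header when POSTing the payload to the proxy-hosts endpoint.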
The Second Container: Jellyfin (The Media Hub)
For media management, we eschew proprietary solutions like Plex or Emby in favor of Jellyfin. Jellyfin is a fully open-source media server that imposes no paywalls on hardware transcoding or mobile applications. It offers a Netflix-like interface for local media, aggregating metadata, artwork, and subtitles automatically.
Performance Optimization
Running Jellyfin efficiently requires careful configuration, particularly regarding hardware acceleration. If the host supports it, we pass through the GPU device to the container. For Intel Quick Sync, we add the device flag in the Docker Compose file.
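Before wiring devices into the compose file, it is worth confirming the host actually exposes a DRI render node. A small sketch (the directory path is parameterized only so the logic is testable):

```python
# List the DRI render nodes (renderD*) the host exposes; an empty result
# means hardware transcoding is unavailable and the device mappings below
# should be omitted.
from pathlib import Path


def render_nodes(dri_path: str = "/dev/dri") -> list[str]:
    """Return render-node device names under the DRI directory."""
    p = Path(dri_path)
    if not p.is_dir():
        return []
    return sorted(e.name for e in p.iterdir() if e.name.startswith("renderD"))


if __name__ == "__main__":
    nodes = render_nodes()
    print(nodes or "No render nodes; hardware transcoding unavailable")
```

On a typical Intel Quick Sync host this prints `['renderD128']`, which matches the device mapping used in the Jellyfin service definition.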
Docker Compose Configuration
We organize our media into distinct folders: one for configuration (to keep metadata and user settings safe) and one for media content (movies, TV shows, music).
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    user: "1000:1000"
    group_add:
      - "108" # Render group for hardware acceleration
    restart: unless-stopped
    # Note: network_mode: host simplifies casting and DLNA discovery, but it
    # is mutually exclusive with custom networks. We keep the bridge network
    # so other containers can reach Jellyfin by service name.
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128 # Intel Quick Sync
      - /dev/dri/card0:/dev/dri/card0
    volumes:
      - ./data/jellyfin/config:/config
      - /media/movies:/media/movies
      - /media/tv:/media/tv
      - /media/music:/media/music
    environment:
      - TZ=America/New_York
    networks:
      - home_lab_net
Library Management and Metadata
Once deployed, Jellyfin scans the mounted media directories. We configure libraries for Movies and TV Shows, pointing to the respective folders. Jellyfin connects to metadata providers like TheMovieDB and MusicBrainz to fetch artwork, cast details, and synopses. The result is a polished, searchable catalog accessible via web browsers, smart TV apps, and mobile clients. By keeping the configuration folder separate, we ensure that library indexes and user watch history are preserved even if the container is rebuilt.
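Metadata matching works best when the files themselves follow Jellyfin's recommended "Title (Year)" folder layout. A small helper to compute that layout; the `/media/movies` root matches the volume mount above, and the example title is purely illustrative:

```python
# Compute Jellyfin-friendly movie paths: each film in its own
# "Title (Year)" folder under the library root.
from pathlib import PurePosixPath


def movie_dest(title: str, year: int, ext: str,
               root: str = "/media/movies") -> PurePosixPath:
    """Return the destination path for a movie file, e.g.
    /media/movies/Inception (2010)/Inception (2010).mkv"""
    name = f"{title} ({year})"
    return PurePosixPath(root) / name / f"{name}.{ext}"
```

Running new acquisitions through a helper like this before they land in the library keeps scan results deterministic.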
The Third Container: Syncthing (The Private Cloud)
Cloud storage solutions like Dropbox or Google Drive offer convenience but compromise privacy. Syncthing is a continuous file synchronization program that transfers files between two or more computers in real time. It is decentralized, peer-to-peer, and end-to-end encrypted.
The Role in the Home Lab
In our four-container setup, Syncthing acts as the central node. We configure it to sync files from our desktops, laptops, and mobile devices to the server. This creates a unified “source of truth” for documents, photos, and configuration backups. Unlike Nextcloud, Syncthing does not require a database or web server overhead; it simply manages files and block-level updates.
Docker Compose Configuration
Syncthing requires persistent storage for its database and synchronized files. We expose the web GUI on its default port 8384, along with the data-transfer and local-discovery ports.
  syncthing:
    image: syncthing/syncthing:latest
    container_name: syncthing
    hostname: home-server-sync
    restart: unless-stopped
    ports:
      - "8384:8384" # Web GUI
      - "22000:22000/tcp" # Data transfers
      - "22000:22000/udp" # Data transfers
      - "21027:21027/udp" # Discovery
    volumes:
      - ./data/syncthing/config:/var/syncthing
      - /srv/sync-data:/var/syncthing/data # The actual files
    environment:
      - PUID=1000
      - PGID=1000
    networks:
      - home_lab_net
Synchronization Strategy
Upon accessing the web GUI, we create folders for "Documents," "Photos," and "Backups." We then add trusted remote devices (laptops, phones) by scanning QR codes or exchanging device IDs. Syncthing supports per-folder types (Send Only, Receive Only, and Send & Receive), allowing us to control data flow. For instance, our phone's camera folder can be Send Only on the phone and Receive Only on the server; if the phone later deletes uploads to free space, the server's copy can be preserved via the folder's ignoreDelete advanced setting. This flexibility makes it more adaptable than rigid cloud sync solutions.
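Syncthing also exposes a REST API (authenticated with an `X-API-Key` header, generated under Settings), which is useful for scripting maintenance tasks such as forcing a rescan after a bulk file move. A minimal sketch; the folder ID `camera-upload` is a hypothetical example:

```python
# Build a request for Syncthing's documented POST /rest/db/scan endpoint,
# which forces a rescan of one folder.
import urllib.request


def scan_request(base_url: str, api_key: str,
                 folder_id: str) -> urllib.request.Request:
    """Request object for forcing a rescan of the given folder ID."""
    return urllib.request.Request(
        f"{base_url}/rest/db/scan?folder={folder_id}",
        headers={"X-API-Key": api_key},
        method="POST",
    )

# Usage against the container from the compose file:
# urllib.request.urlopen(scan_request("http://localhost:8384", "KEY", "camera-upload"))
```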
The Fourth Container: Node-RED (The Automation Engine)
The final piece of the puzzle is automation. While full-fledged platforms like Home Assistant offer extensive features, they can be resource-heavy. Node-RED is a flow-based development tool for visual programming, originally built by IBM for IoT. It provides a browser-based editor to wire together hardware devices, APIs, and online services.
Why Node-RED?
Node-RED is lightweight and incredibly versatile. It can handle tasks ranging from simple file renaming based on triggers to complex logic involving multiple services. In our stack, it monitors file changes in Syncthing, triggers media scans in Jellyfin, or sends notifications via Telegram or Discord.
Docker Compose Configuration
Node-RED requires a persistent volume for the "flows" file, which contains the automation logic. The same /data volume also holds the node_modules directory, so any extra nodes installed through the Palette Manager survive container rebuilds.
  nodered:
    image: nodered/node-red:latest
    container_name: nodered
    restart: unless-stopped
    ports:
      - "1880:1880"
    volumes:
      - ./data/nodered/data:/data
    environment:
      - TZ=America/New_York
    user: "1000:1000"
    networks:
      - home_lab_net
Building Flows
Accessing the editor at http://<host-ip>:1880, we can install nodes from the Palette Manager. A practical use case is a flow that listens for new file events in a specific Syncthing folder. When a new file is detected, Node-RED sends an HTTP request to Jellyfin’s API to trigger a library scan, ensuring new media appears immediately. Another flow could monitor system health (CPU/RAM usage) and send an alert if thresholds are exceeded. This container acts as the glue, automating interactions between the other three containers.
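To make the watch-then-scan flow concrete, here is the same logic sketched in plain Python. `/Library/Refresh` is Jellyfin's library-scan endpoint, authenticated here with an `X-Emby-Token` header; the watch directory and API key are placeholders you would substitute for your own setup.

```python
# Poll a Syncthing-synced folder and trigger a Jellyfin library scan when
# new files appear: the logic the Node-RED flow implements.
import time
import urllib.request
from pathlib import Path

JELLYFIN_URL = "http://jellyfin:8096"      # resolvable on home_lab_net
API_KEY = "YOUR_API_KEY"                   # placeholder: Dashboard -> API Keys
WATCH_DIR = "/srv/sync-data/media-inbox"   # hypothetical Syncthing folder


def snapshot(path: str) -> set[str]:
    """Names of the regular files currently in the watched directory."""
    return {p.name for p in Path(path).iterdir() if p.is_file()}


def new_files(before: set[str], current: set[str]) -> set[str]:
    """Files present now that were not in the previous snapshot."""
    return current - before


def trigger_scan() -> None:
    """POST /Library/Refresh so new media appears without waiting."""
    req = urllib.request.Request(
        f"{JELLYFIN_URL}/Library/Refresh",
        headers={"X-Emby-Token": API_KEY}, method="POST")
    urllib.request.urlopen(req)


def watch_loop(poll_seconds: int = 30) -> None:
    seen = snapshot(WATCH_DIR)
    while True:
        time.sleep(poll_seconds)
        current = snapshot(WATCH_DIR)
        if new_files(seen, current):
            trigger_scan()
        seen = current

# watch_loop()  # uncomment to run against a live stack
```

In Node-RED the same pipeline is three nodes: a watch/inject node, a function node computing the diff, and an http request node posting to Jellyfin.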
Networking: The Docker Bridge
To ensure these four containers can communicate securely, we define a custom Docker bridge network. This isolates the home lab traffic from the host’s default bridge network.
networks:
  home_lab_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
By using this network, we can reference containers by their service names (e.g., jellyfin, syncthing) from within other containers. For example, Node-RED can access Jellyfin’s API at http://jellyfin:8096 without exposing the port to the host machine, adding an extra layer of security. Only Nginx Proxy Manager needs its ports exposed to the outside world, acting as the sole entry point.
Deployment and Management
With the docker-compose.yml file fully constructed with the four services and the custom network, deployment is a single command:
sudo docker-compose up -d
The -d flag runs the containers in detached mode. Updates can be automated with a tool like Watchtower, though in a minimalist lab manual updates are safer. We update the containers by pulling the latest images and recreating the stack:
sudo docker-compose pull
sudo docker-compose up -d --force-recreate
Data Persistence and Backups
Data persistence is handled via volume mounts. In our configuration, data resides in the ./data directory relative to the compose file. To back up the entire lab, we simply archive this directory and the compose file itself. Because the state is stored in volumes, rebuilding the containers does not result in data loss. For critical data, we can extend Syncthing to sync the backup archives to an off-site location or a cloud storage provider, creating a hybrid backup strategy.
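The backup step itself is simple enough to script. This sketch archives the ./data tree and the compose file into a dated tarball that Syncthing (or any off-site tool) can then replicate; for consistency, stop the stack first so databases are not archived mid-write.

```python
# Archive the home lab state (data/ and docker-compose.yml) into a
# dated tarball, e.g. homelab-20250101.tar.gz.
import tarfile
import time
from pathlib import Path


def backup_lab(lab_dir: str, out_dir: str) -> Path:
    """Create homelab-YYYYMMDD.tar.gz from the compose directory."""
    lab = Path(lab_dir)
    dest = Path(out_dir) / f"homelab-{time.strftime('%Y%m%d')}.tar.gz"
    with tarfile.open(dest, "w:gz") as tar:
        for item in ("data", "docker-compose.yml"):
            src = lab / item
            if src.exists():
                tar.add(src, arcname=item)
    return dest
```

Restoring is the inverse: extract the archive next to a fresh compose file and run `docker-compose up -d`.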
Security Hardening and Maintenance
Running a home lab requires a proactive approach to security. Since we expose services to the internet via Nginx Proxy Manager, we must ensure the environment is hardened.
Network Isolation
As mentioned, the custom Docker network isolates services. We ensure that only NPM listens on the public interface. The host firewall (UFW on Ubuntu) should be configured to deny all incoming traffic except for ports 80, 443, and SSH.
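A baseline ruleset, assuming UFW on Ubuntu, might look like the following. Note that with a default-deny policy the NPM admin port 81 is also blocked from the LAN, so we add an explicit LAN-only rule for it (adjust the subnet to match your network):

```shell
# Baseline host firewall: deny inbound by default, allow only SSH and web traffic
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp     # SSH
sudo ufw allow 80/tcp     # HTTP (Let's Encrypt challenges, redirects)
sudo ufw allow 443/tcp    # HTTPS via Nginx Proxy Manager
sudo ufw allow from 192.168.0.0/16 to any port 81 proto tcp  # NPM admin, LAN only
sudo ufw enable
```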
Authentication and Access Control
Nginx Proxy Manager supports HTTP Basic Authentication and access lists. We utilize these to add a secondary password layer to sensitive applications like Node-RED or the Syncthing GUI. Additionally, we enable Two-Factor Authentication (2FA) on NPM itself.
Update Strategy
We maintain a strict update schedule. While Docker containers are ephemeral, the images can contain vulnerabilities. We regularly pull the latest security patches for the base images. We also monitor the official GitHub repositories of the projects (Jellyfin, Syncthing, etc.) for announcements regarding critical updates.
Advanced Considerations: Resource Limits and Health Checks
To prevent a single container from consuming all host resources (e.g., a transcoding spike in Jellyfin starving Node-RED), we implement resource limits in the Docker Compose file.
    # Added per service (e.g. under jellyfin). Honored by Docker Compose v2;
    # legacy docker-compose v1 requires the --compatibility flag.
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
We also implement health checks so Docker can detect a hung service. A failing health check marks the container as unhealthy, which is visible in docker ps and can drive automated action, such as a restart by a watchdog container or an alert fired from a Node-RED flow.
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8096/health"]
      interval: 30s
      timeout: 10s
      retries: 3
Conclusion: The Power of Simplicity
By curating a home lab consisting of just four Docker containers—Nginx Proxy Manager, Jellyfin, Syncthing, and Node-RED—we achieve a high degree of functionality with minimal complexity. This stack covers secure access, media management, file synchronization, and automation. It is resource-efficient, easy to maintain, and highly scalable; if we need additional services, they can be integrated into the existing network or managed by Node-RED.
We have demonstrated that a minimalist approach does not equate to a lack of capability. Instead, it focuses on core utilities that provide the most value. Whether you are a developer, a media enthusiast, or a privacy advocate, this four-container architecture offers a robust foundation for your digital life. The simplicity of the setup ensures that the home lab remains a tool for productivity rather than a source of constant technical debt. As we continue to refine our digital environments, this Docker stack stands as a testament to the elegance of streamlined infrastructure.