How we migrated from a bundled Caddy reverse proxy to caddy-docker-proxy for label-based, zero-config routing across multiple Docker Compose stacks.
We run iMORPHr — an Elixir/Phoenix LiveView application — on a Hetzner VPS. The setup is straightforward: a single docker-compose.yml bundles the Phoenix app with a Caddy reverse proxy. Caddy handles TLS termination via Let’s Encrypt and proxies traffic to the app on port 4000.
The VPS directory structure looks like this:
/root/imorphr/
├── docker-compose.yml # app + caddy
├── Caddyfile # routing rules
└── .env # secrets
The docker-compose.yml:
services:
  app:
    image: ghcr.io/imorphr/imorphr-site:latest
    restart: always
    env_file: .env
    expose:
      - "4000"

  caddy:
    image: caddy:2-alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    depends_on:
      - app

volumes:
  caddy_data:
  caddy_config:
And the Caddyfile:
www.imorphr.com {
    redir https://imorphr.com{uri}
}

imorphr.com {
    reverse_proxy app:4000
}
Simple. Caddy and the app live in the same Compose stack, share the default Docker network, and Caddy reaches the app by its service name app. GitHub Actions builds the Docker image, pushes to GHCR, SCPs the compose file and Caddyfile to the VPS, and runs docker compose up -d.
This worked perfectly — until we needed to deploy a second service.
We’re building ShoppeX — an API backend (also Elixir/Phoenix) that helps independent UK shopkeepers compare wholesale prices. It needs to be deployed to the same VPS as api.imorphr.com.
ShoppeX’s docker-compose.prod.yml was already written with this in mind:
services:
  shoppex:
    image: ghcr.io/imorphr/shoppex:latest
    restart: unless-stopped
    ports:
      - "127.0.0.1:4000:4000"
    environment:
      DATABASE_URL: "${DATABASE_URL}"
      SECRET_KEY_BASE: "${SECRET_KEY_BASE}"
      PHX_HOST: "api.imorphr.com"
      # ... more env vars
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - caddy

networks:
  caddy:
    external: true
It referenced an external caddy network and included a comment saying “add this snippet to your shared Caddyfile”:
api.imorphr.com {
    reverse_proxy shoppex:4000
}
But here’s the rub: the Caddyfile is owned by the iMORPHr website repo. Every time iMORPHr deploys, it SCPs its own Caddyfile to the VPS. To add ShoppeX’s route, we’d have to modify the iMORPHr repo — creating a cross-repo dependency. The website would need to know about every other service on the VPS. That doesn’t scale, and it couples things that shouldn’t be coupled.
We also had a networking problem. The iMORPHr Compose stack uses Docker’s default network — app and caddy can see each other because they’re in the same stack. But ShoppeX is in a separate Compose stack. For Caddy to route to ShoppeX, both containers need to be on the same Docker network.
### Option 1: Patch the shared Caddyfile

The simplest approach: add api.imorphr.com { reverse_proxy shoppex:4000 } to the iMORPHr repo’s Caddyfile. Create a shared external Docker network. Connect Caddy to both the default network (for the iMORPHr app) and the external network (for ShoppeX).
The iMORPHr compose would look like:
services:
  app:
    # ... unchanged
  caddy:
    # ... unchanged except:
    networks:
      - default
      - caddy

networks:
  caddy:
    external: true
Pros:

- Minimal changes; the existing setup keeps working.

Cons:

- Every new service requires editing the iMORPHr repo’s Caddyfile.
- The website repo becomes coupled to every other service on the VPS.

We rejected this one quickly. It works today but creates a maintenance headache.
### Option 2: A standalone Caddy stack

Pull Caddy out of the iMORPHr repo entirely. Give it its own directory on the VPS and its own Compose file. The Caddyfile lives on the VPS, managed independently of any app.
/root/imorphr/
├── caddy/
│ ├── docker-compose.yml # Just Caddy
│ └── Caddyfile # All routes, managed on VPS
├── imorphr/
│ ├── docker-compose.yml # Just the app
│ └── .env
└── shoppex/
├── docker-compose.prod.yml
└── .env
Pros:

- Caddy is decoupled from the iMORPHr repo; app deploys no longer touch routing.
- All services connect through one shared caddy network.

Cons:

- The Caddyfile still needs a manual edit on the VPS for every new service.

This was better, but still required centralized knowledge of all routes.
### Option 3: caddy-docker-proxy

Replace standard Caddy with caddy-docker-proxy. This is a Caddy build that watches the Docker socket and auto-generates routing configuration from container labels. There is no Caddyfile at all.
Each service declares its own routing:
# iMORPHr website
labels:
  caddy_0: imorphr.com
  caddy_0.reverse_proxy: "{{upstreams 4000}}"
  caddy_1: www.imorphr.com
  caddy_1.redir: "https://imorphr.com{uri} permanent"

# ShoppeX API
labels:
  caddy: api.imorphr.com
  caddy.reverse_proxy: "{{upstreams 4000}}"
Pros:

- Each service declares its own routing; no central config file to edit.
- New containers are discovered and routed automatically.

Cons:

- Relies on a third-party Caddy build (lucaslorentz/caddy-docker-proxy).
- Caddy needs access to the Docker socket.
We chose this approach. The self-service nature of label-based routing is exactly what a multi-service VPS needs. Each team (or repo) is autonomous.
caddy-docker-proxy is a custom Caddy build that includes a module for watching Docker. Here’s what happens:
1. It connects to the Docker socket (/var/run/docker.sock) and watches container events.
2. It scans running containers for labels prefixed with caddy.
3. It translates those labels into Caddyfile site blocks:

caddy: example.com                      →  example.com {
caddy.reverse_proxy: "{{upstreams}}"    →      reverse_proxy <container-ip>:80
                                        →  }
Basic reverse proxy:
labels:
  caddy: example.com
  caddy.reverse_proxy: "{{upstreams 4000}}"
Multiple site blocks from one container (using _0, _1 suffixes):
labels:
  caddy_0: example.com
  caddy_0.reverse_proxy: "{{upstreams 4000}}"
  caddy_1: www.example.com
  caddy_1.redir: "https://example.com{uri} permanent"
This is needed because YAML doesn’t allow duplicate keys. The _0 and _1 suffixes create separate, isolated Caddy site blocks.
The {{upstreams}} template resolves to the container’s IP address on the shared network. You can specify a port: {{upstreams 4000}}. caddy-docker-proxy handles the DNS resolution — you never hardcode IPs.
The CADDY_INGRESS_NETWORKS environment variable is critical. It tells caddy-docker-proxy which Docker network to watch. Without it, auto-detection can be unreliable:
environment:
  - CADDY_INGRESS_NETWORKS=caddy
The generated Caddyfile is saved inside the container. To inspect what caddy-docker-proxy built:
docker exec caddy cat /config/caddy/Caddyfile.autosave
This is invaluable for debugging routing issues.
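For the label examples above, the autosaved file comes out looking roughly like this (a hedged illustration — the upstream address is whatever IP Docker assigned the container on the shared network, not a value you configure):

```
example.com {
	reverse_proxy 172.18.0.3:4000
}

www.example.com {
	redir https://example.com{uri} permanent
}
```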
Here’s what we ended up with — three independent Compose stacks sharing a single external Docker network:
┌──────────────────────────────────────────────────┐
│ Hetzner VPS (Ubuntu 24.04) │
│ │
Internet │ ┌────────────────────────────────────────────┐ │
──────▶ :443 ────▶│ │ caddy-docker-proxy │ │
│ │ (watches Docker labels, auto-configures) │ │
│ │ Ports: 80, 443 │ │
│ └─────────┬──────────────────┬───────────────┘ │
│ │ │ │
│ ┌─────────▼──────────┐ ┌───▼────────────────┐ │
│ │ iMORPHr site │ │ ShoppeX API │ │
│ │ imorphr.com │ │ api.imorphr.com │ │
│ │ Port 4000 │ │ Port 4000 │ │
│ └────────────────────┘ └────────┬───────────┘ │
│ │ │
│ ┌────────▼───────────┐ │
│ │ PostgreSQL 16 │ │
│ │ (native, :5432) │ │
│ └────────────────────┘ │
│ │
│ All app containers on shared "caddy" network │
└──────────────────────────────────────────────────┘
/root/imorphr/
├── caddy/
│ └── docker-compose.yml # caddy-docker-proxy (shared infrastructure)
├── docker-compose.yml # iMORPHr website (app only, with labels)
├── .env # iMORPHr secrets
└── shoppex/
├── docker-compose.prod.yml # ShoppeX API (app only, with labels)
└── .env # ShoppeX secrets
### Caddy (`/root/imorphr/caddy/docker-compose.yml`)

This is shared infrastructure — deployed once, used by all services:
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:2.12-alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443/tcp"
      - "443:443/udp" # HTTP/3 (QUIC) support
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
    networks:
      - caddy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data     # TLS certificates — must persist
      - caddy_config:/config # debug: docker exec caddy cat /config/caddy/Caddyfile.autosave

networks:
  caddy:
    external: true

volumes:
  caddy_data:
  caddy_config:
Key points:

- lucaslorentz/caddy-docker-proxy:2.12-alpine — pinned to a minor version for stability; the Alpine variant gives shell access during debugging.
- /var/run/docker.sock is mounted so Caddy can watch container events.
- The caddy_data volume persists TLS certificates across restarts (losing this means re-issuing certs and potentially hitting Let’s Encrypt rate limits).
- CADDY_INGRESS_NETWORKS=caddy explicitly tells it which network to watch.

### iMORPHr Website (`docker-compose.yml`)

Caddy is gone. The app declares its own routing:
services:
  app:
    image: ghcr.io/imorphr/imorphr-site:latest
    restart: always
    env_file: .env
    expose:
      - "4000"
    networks:
      - caddy
    labels:
      caddy_0: imorphr.com
      caddy_0.reverse_proxy: "{{upstreams 4000}}"
      caddy_1: www.imorphr.com
      caddy_1.redir: "https://imorphr.com{uri} permanent"

networks:
  caddy:
    external: true
The caddy_0 / caddy_1 pattern creates two separate Caddy site blocks: one for the main domain with a reverse proxy, and one for the www subdomain with a permanent redirect.
No ports: mapping to the host — Caddy reaches the app entirely through the Docker network. The app is not directly accessible from the internet.
### ShoppeX API (`docker-compose.prod.yml`)

services:
  shoppex:
    image: ghcr.io/imorphr/shoppex:latest
    restart: unless-stopped
    expose:
      - "4000"
    ports:
      - "127.0.0.1:4001:4000"
    environment:
      DATABASE_URL: "${DATABASE_URL}"
      SECRET_KEY_BASE: "${SECRET_KEY_BASE}"
      PHX_HOST: "api.imorphr.com"
      PHX_SERVER: "true"
      PORT: "4000"
      ECTO_SSL: "false"
      CLOAK_KEY: "${CLOAK_KEY}"
      PARFETTS_JWT: "${PARFETTS_JWT}"
      PARFETTS_BRANCH_ID: "${PARFETTS_BRANCH_ID}"
      PARFETTS_CUSTOMER: "${PARFETTS_CUSTOMER}"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - caddy
    labels:
      caddy: api.imorphr.com
      caddy.reverse_proxy: "{{upstreams 4000}}"

networks:
  caddy:
    external: true
Note the ports: "127.0.0.1:4001:4000" — this binds to localhost on port 4001 for health checks during deployment. It’s not accessible from the internet (loopback only). Caddy routes public traffic through the Docker network.
The extra_hosts entry maps host.docker.internal to the host machine’s IP, allowing the container to reach PostgreSQL which runs natively (not in Docker) on the VPS.
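Concretely, DATABASE_URL points at host.docker.internal rather than localhost, and the host’s PostgreSQL must accept connections from Docker’s bridge subnet. A hedged sketch (hypothetical credentials and paths; the gateway address and subnet depend on your Docker configuration):

```
# shoppex/.env (hypothetical values)
DATABASE_URL=ecto://shoppex:secret@host.docker.internal:5432/shoppex_prod

# postgresql.conf — also listen on the Docker bridge gateway
listen_addresses = 'localhost,172.17.0.1'

# pg_hba.conf — allow connections from the container subnet
host  shoppex_prod  shoppex  172.16.0.0/12  scram-sha-256
```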
The caddy_infra repo has a simple deploy workflow — no image build, just SCP the compose file and restart:
name: Deploy Caddy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Tailscale
        uses: tailscale/github-action@v2
        with:
          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
          tags: tag:ci

      - name: Deploy to VPS
        run: |
          ssh -o StrictHostKeyChecking=accept-new ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }} \
            "mkdir -p /root/imorphr/caddy"
          scp docker-compose.yml \
            ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}:/root/imorphr/caddy/docker-compose.yml
          ssh ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }} << 'EOF'
          docker network create caddy 2>/dev/null || true
          cd /root/imorphr/caddy
          docker compose pull
          docker compose up -d
          EOF
The docker network create caddy 2>/dev/null || true is idempotent — it creates the network if it doesn’t exist, silently succeeds if it does.
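The same `||  true` idiom applies to any create-if-missing setup step. A minimal, self-contained sketch of the pattern (using mkdir as a stand-in for docker network create, since the behavior is identical):

```shell
set -e                             # deploy scripts usually abort on any failure
dir="${TMPDIR:-/tmp}/caddy-net-demo"
mkdir "$dir" 2>/dev/null || true   # first run: creates the resource
mkdir "$dir" 2>/dev/null || true   # later runs: "File exists" is swallowed, exit 0
echo "network ensured"
```

Under set -e, a bare failing mkdir (or docker network create) would kill the whole deploy; the `|| true` makes the step safe to re-run on every push.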
ShoppeX’s deploy workflow builds the Docker image, pushes to GHCR, and deploys. The only change from the original is the health check port:
# Before
if curl -sf http://localhost:4000/api/v1/health > /dev/null 2>&1; then
# After
if curl -sf http://localhost:4001/api/v1/health > /dev/null 2>&1; then
The deploy workflow removes the Caddyfile SCP and the caddy-related concerns. It just deploys the app container.
This is a one-time procedure with ~30 seconds of downtime.
Prerequisites:

- DNS for api.imorphr.com pointing to the VPS IP (needed for Let’s Encrypt domain validation)
- caddy_infra repo created and pushed to GitHub with secrets configured

# 1. Deploy caddy_infra (creates the network and starts caddy-docker-proxy)
# Trigger via GitHub Actions push to main, or manually:
ssh vps "docker network create caddy"
scp caddy_infra/docker-compose.yml vps:/root/imorphr/caddy/docker-compose.yml
ssh vps "cd /root/imorphr/caddy && docker compose up -d"
# 2. Stop the old iMORPHr stack (brief downtime starts)
ssh vps "cd /root/imorphr && docker compose down"
# 3. Deploy updated iMORPHr compose (with labels, no caddy service)
scp deploy/docker-compose.yml vps:/root/imorphr/docker-compose.yml
ssh vps "cd /root/imorphr && docker compose up -d"
# caddy-docker-proxy detects the new container and configures routing
# Let's Encrypt issues certs for imorphr.com — downtime ends
# 4. Verify iMORPHr is working
curl -I https://imorphr.com
curl -I https://www.imorphr.com # should 301 redirect
# 5. Deploy ShoppeX (whenever ready)
# Standard deploy via GitHub Actions — caddy-docker-proxy auto-discovers it
# 6. Verify ShoppeX
curl -I https://api.imorphr.com/api/v1/health
# 7. Clean up old volumes
ssh vps "docker volume rm imorphr_caddy_data imorphr_caddy_config"
If anything goes wrong, the old setup can be restored:
# Stop caddy-docker-proxy
ssh vps "cd /root/imorphr/caddy && docker compose down"
# Restore original compose + Caddyfile
scp original/docker-compose.yml vps:/root/imorphr/docker-compose.yml
scp original/Caddyfile vps:/root/imorphr/Caddyfile
ssh vps "cd /root/imorphr && docker compose up -d"
### The caddy_data Volume Is Sacred

This volume stores TLS certificates. If you lose it, Caddy re-issues certificates on restart. Let’s Encrypt has rate limits: 5 duplicate certificates per domain per week. During testing, use the Let’s Encrypt staging environment:
labels:
  caddy.tls.ca: https://acme-staging-v02.api.letsencrypt.org/directory
Remove this label once everything works to get real production certificates.
Only web-facing containers need to join the caddy network. Database containers, Redis, internal workers — these should stay on their own isolated network. It’s both a security concern (unnecessary exposure) and can cause routing confusion.
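As a sketch (hypothetical service names and image), a stack with a web app and a database would put only the app on the caddy network, with both on a private internal network:

```yaml
services:
  app:
    image: ghcr.io/imorphr/some-app:latest   # hypothetical image
    networks:
      - caddy      # reachable by caddy-docker-proxy
      - internal   # reaches the db
    labels:
      caddy: app.example.com
      caddy.reverse_proxy: "{{upstreams 4000}}"

  db:
    image: postgres:16
    networks:
      - internal   # never joins caddy — not routable from the proxy

networks:
  caddy:
    external: true
  internal: {}
```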
This won’t work:
labels:
  caddy: imorphr.com
  caddy: www.imorphr.com # silently overwrites the first one!
Use caddy_0, caddy_1 suffixes for multiple site blocks from a single container.
Both the iMORPHr app and ShoppeX listen on port 4000 inside their containers. This isn’t a conflict because Docker containers have isolated network namespaces. Caddy reaches each container by its IP on the shared Docker network, not by a host port. The host port binding (127.0.0.1:4001:4000 for ShoppeX) is only for health checks.
### CADDY_INGRESS_NETWORKS Must Be Set Explicitly

Auto-detection of the ingress network is documented as unreliable. Always set it:
environment:
  - CADDY_INGRESS_NETWORKS=caddy
When things aren’t routing correctly, inspect what caddy-docker-proxy actually generated:
docker exec caddy cat /config/caddy/Caddyfile.autosave
This shows the exact Caddyfile that Caddy is running — including resolved IP addresses.
This is where the approach pays off. Say we want to deploy a new service at dashboard.imorphr.com. All we need is:
services:
  dashboard:
    image: ghcr.io/imorphr/dashboard:latest
    networks:
      - caddy
    labels:
      caddy: dashboard.imorphr.com
      caddy.reverse_proxy: "{{upstreams 3000}}"

networks:
  caddy:
    external: true
No changes to Caddy. No changes to any other service. Deploy, and caddy-docker-proxy picks it up automatically. TLS certificates are provisioned within seconds.
| | Before | After |
|---|---|---|
| Caddy | Bundled in iMORPHr’s compose | Standalone caddy-docker-proxy |
| Routing config | Caddyfile in one repo | Docker labels on each container |
| Adding a service | Edit another repo’s Caddyfile | Add labels to your own compose |
| TLS | Automatic (Caddy) | Automatic (still Caddy) |
| Cross-repo coupling | Yes — website knows about all services | None — each service is self-contained |
| Network | Default compose network | Shared external caddy network |
The one-time migration cost is modest (creating a new repo, updating two compose files, ~30 seconds of downtime). The ongoing benefit is that every future service deployment is fully autonomous — no coordination, no shared config files, no cross-repo PRs.
For a small team running multiple services on a single VPS, this is the right level of infrastructure. It’s simpler than Kubernetes, more capable than a single Caddyfile, and stays out of the way.