How a Docker Network Mismatch Broke My Ghost Blog Setup (And How I Fixed It)
Introduction
I've been building a self-hosted content and automation stack on a personal VPS — n8n for workflow automation, Qdrant for vector search, NocoDB for metadata storage, and Caddy as a reverse proxy handling TLS for everything. When I decided to add a Ghost blog to document the builds, I figured it would be straightforward: pull the image, wire it into the existing docker-compose.yml, point Caddy at it, done.
It was not done. What followed was a multi-hour debugging session touching YAML syntax errors, SQLite vs MySQL production mode quirks, redirect loops, and finally a Docker networking mismatch that was invisible until I knew exactly where to look.
This post walks through every layer of that failure, what each error actually meant, and the fix that finally made everything work.
The Problem
After adding Ghost to my existing Docker Compose stack and configuring a Caddy reverse proxy block, hitting https://blog.[DOMAIN] returned a 502 Bad Gateway.
Ghost appeared to be running. Caddy appeared to be running. DNS was pointing to the right IP. And yet: 502.
First Attempts (What Didn't Work)
Attempt 1: SQLite instead of MySQL
The first version of the Ghost service had no explicit database config, so Ghost fell back to its production default of MySQL on 127.0.0.1:3306. Since there was no MySQL container, it failed immediately:
connect ECONNREFUSED 127.0.0.1:3306
My attempted fix was to switch Ghost to SQLite:
environment:
  database__client: sqlite3
This caused a new error:
TypeError: String expected
Ghost's Docker image in production mode does not support SQLite without also specifying database__connection__filename. And even then, the official image strongly discourages SQLite in production. The right move was to add a proper MariaDB container.
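For the record, the SQLite route can be made to boot by also supplying the database filename. This is a sketch, not a recommendation — the path below is an assumption based on the default content volume, and the image maintainers discourage SQLite in production either way:

```yaml
environment:
  database__client: sqlite3
  # Assumed path inside the standard content volume; not from my final config
  database__connection__filename: /var/lib/ghost/content/data/ghost.db
```

I abandoned this and moved to MariaDB instead.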
Attempt 2: MariaDB, but Ghost kept crashing
After adding a MariaDB service and pointing Ghost at it, Ghost booted successfully:
Ghost booted in 7.25s
But the 502 persisted.
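For reference, the MariaDB service I added looked roughly like this. The service name, volume name, and root-password placeholder are mine; the user, password, and database values line up with the Ghost service's database__connection__* settings shown later:

```yaml
ghost-db:
  image: mariadb:10.11
  container_name: ghost-db
  restart: unless-stopped
  environment:
    MYSQL_ROOT_PASSWORD: [GHOST_DB_ROOT_PASSWORD]
    MYSQL_DATABASE: ghost
    MYSQL_USER: ghost
    MYSQL_PASSWORD: [GHOST_DB_PASSWORD]
  volumes:
    - ghost_db_data:/var/lib/mysql
```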
Attempt 3: Fixing the Caddy config (wrong diagnosis)
The Caddy logs showed:
dial tcp [::1]:2368: connect: connection refused
My first read of this was an IPv6 vs IPv4 issue — localhost resolving to [::1] instead of 127.0.0.1. So I changed the Caddyfile:
blog.[DOMAIN] {
    reverse_proxy 127.0.0.1:2368
}
Still 502. Same error, different address.
Attempt 4: Ghost redirect loop
Digging into Ghost's own behavior with curl -IL http://127.0.0.1:2368, the response was:
HTTP/1.1 301 Moved Permanently
Location: https://127.0.0.1:2368/
Ghost was redirecting HTTP to HTTPS internally, which conflicted with Caddy already handling TLS externally. The root cause was url: https://blog.[DOMAIN] in the environment config — Ghost saw https:// and enforced HTTPS redirects internally.
Changing the url to http://blog.[DOMAIN] stopped the redirect loop. But the 502 still persisted.
Investigation
At this point, Ghost was healthy (confirmed via docker logs ghost showing a clean boot), Caddy's config looked right, DNS was resolving correctly (dig blog.[DOMAIN] +short returned the correct IP), and curl -I http://127.0.0.1:2368 returned 200 OK.
So why was Caddy getting connection refused?
The key command was:
docker inspect ghost | grep -i "IPAddress"
Output:
"IPAddress": "",
"IPAddress": "10.0.1.12"
The empty IPAddress field was the tell. Ghost had no IP on the default network — it was isolated on its own Docker network (10.0.1.12), completely unreachable from Caddy.
Then checking Caddy's network:
docker inspect root-caddy-1 | grep -A5 "Networks"
Output:
"Networks": {
    "root_default": {
        ...
        "Aliases": ["root-caddy-1"]
    }
}
Caddy was on root_default. Ghost was on a different network entirely. They could not talk to each other — not via localhost, not via 127.0.0.1, not via container name. The connection was simply refused at the network layer.
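A quicker way to see the mismatch at a glance is to print just the network names each container is attached to, using a Go template with docker inspect (container names as in this stack):

```shell
docker inspect -f '{{range $net, $v := .NetworkSettings.Networks}}{{$net}} {{end}}' root-caddy-1 ghost
# Prints one line of network names per container.
# If the two lines share no name, the containers cannot reach each other.
```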
The Root Cause
When Docker Compose services are defined in separate files or with different project names, they get placed on different internal Docker networks. Caddy was launched as part of the main root compose project and lives on root_default. Ghost was added later and ended up on its own isolated network.
Caddy was trying to proxy requests to ghost:2368 or 127.0.0.1:2368, but from Caddy's perspective, that address simply didn't exist — Ghost was on the other side of a network boundary.
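The naming is mechanical: Compose prefixes the default network with the project name, which is the directory name unless overridden with -p or COMPOSE_PROJECT_NAME. The paths here are illustrative, not mine:

```shell
cd /root && docker compose up -d        # project "root"  -> network "root_default"
cd /opt/ghost && docker compose up -d   # project "ghost" -> network "ghost_default"
docker network ls --format '{{.Name}}'  # both networks exist, isolated from each other
```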
The Fix
Step 1: Connect Ghost to Caddy's network immediately (no restart needed):
docker network connect root_default ghost
Step 2: Update the Caddyfile to use Ghost's container name:
blog.[DOMAIN] {
    reverse_proxy ghost:2368
}
Using the container name ghost instead of localhost or an IP means Docker's internal DNS resolves it correctly across the shared network.
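You can confirm the name resolves from inside the Caddy container before touching the Caddyfile. This assumes the official caddy image, which is Alpine-based, so busybox nslookup and wget are available:

```shell
# Does Docker's embedded DNS resolve "ghost" from Caddy's network namespace?
docker exec root-caddy-1 nslookup ghost
# Can Caddy actually reach the upstream port?
docker exec root-caddy-1 wget -qO- http://ghost:2368 >/dev/null && echo reachable
```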
Step 3: Reload Caddy:
docker exec root-caddy-1 caddy reload --config /etc/caddy/Caddyfile
The blog loaded immediately.
Step 4: Make the network connection permanent in docker-compose.yml:
To survive reboots, the Ghost service needed explicit network declarations:
ghost:
  image: ghost:5
  container_name: ghost
  restart: unless-stopped
  depends_on:
    - ghost-db
  ports:
    - "2368:2368"
  networks:
    - default
    - root_default
  environment:
    url: http://blog.[DOMAIN]
    database__client: mysql
    database__connection__host: ghost-db
    database__connection__user: ghost
    database__connection__password: [GHOST_DB_PASSWORD]
    database__connection__database: ghost
  volumes:
    - ghost_data:/var/lib/ghost/content
And at the bottom of the compose file, declare root_default as an external network:
networks:
  root_default:
    external: true
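After editing the compose file, a quick validate-and-apply cycle confirms the change sticks (compose only re-creates services whose definition changed):

```shell
docker compose config -q   # validate the YAML and network references; silent on success
docker compose up -d       # re-create the ghost service with both networks attached
```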
Verification
After the fix:
curl -I http://127.0.0.1:2368
# HTTP/1.1 200 OK
curl -I https://blog.[DOMAIN]
# HTTP/1.1 200 OK
Ghost admin accessible at https://blog.[DOMAIN]/ghost. Blog live.
Key Lessons
1. Ghost Docker runs in production mode by default — SQLite is not supported. Don't fight it. Use MariaDB or MySQL from the start. The extra container is worth it.
2. Set url to http:// when running Ghost behind a reverse proxy. Ghost uses the url value to decide whether to force HTTPS redirects internally. If your proxy (Caddy, Nginx) is already handling TLS, setting url: http:// lets Ghost serve plain HTTP internally while the outside world sees HTTPS. The architecture looks like this:
Browser → HTTPS → Caddy → HTTP → Ghost
3. localhost inside Docker does not mean what you think it means. Each Docker container has its own network namespace. localhost inside Caddy is Caddy's loopback, not Ghost's. Services need to share a Docker network and communicate by container name.
4. Docker Compose services on different projects end up on different networks. If you add services to an existing stack incrementally, they may land on isolated networks. Always check with docker inspect [container] | grep -A5 Networks when you have unexplained connection failures.
5. Read the error message carefully. dial tcp [::1]:2368: connect: connection refused looks like an IPv6 problem. It's actually a network isolation problem. The difference matters — one sends you chasing IP configs, the other sends you to docker network connect.
6. docker logs is your first diagnostic tool, always. Every crash in this debugging session was clearly explained in the container logs. Ghost told us about the missing MySQL connection. Ghost told us about the SQLite filename issue. Ghost told us it had booted successfully. The logs never lied — they just needed to be read.
Running a self-hosted stack means owning the entire debugging surface — from DNS to Docker networks to application config. Each layer has its own failure modes, and knowing how to interrogate each one is what turns a 502 from a mystery into a five-minute fix.