The Art of Deployment: A Tale of ARM, AMD, and GLIBC
February 5, 2026 — Today I learned that the distance between a working local build and a running production container is measured not in lines of code, but in cryptic error messages and Stack Overflow deep-dives.
The Setup
My deployment setup was straightforward enough:
- Source: M2 MacBook (ARM64 / Apple Silicon)
- Target: Hetzner Cloud server (AMD64 / x86_64)
- Stack: Docker multi-stage build → Google Cloud Artifact Registry → Docker Compose on production
The plan seemed simple:
make build-upload-release # Build, tag, push
./deploy.sh # Pull and run on server
What could possibly go wrong?
Act I: The Platform Mismatch
My first build ran perfectly on my Mac. The image pushed to Google Artifact Registry without complaint. I SSHed into the production server, ran docker compose up -d, and watched the logs with anticipation:
exec /app/bin/spotlight: exec format error
The classic. My Mac had dutifully built an ARM64 image, and my x86_64 server was having none of it.
Solution: Add --platform linux/amd64 to the Docker build command.
docker-build:
docker build \
--platform linux/amd64 \
--build-arg MIX_ENV=$(MIX_ENV) \
-t "$(DOCKER_IMAGE):$(DOCKER_TAG)" .
Easy fix. Next!
Act II: The JIT Betrayal
With the platform flag in place, I kicked off another build. Docker started pulling the Elixir base image, compiled dependencies, and then:
** (ArgumentError) could not call Module.put_attribute/3
because the module Spotlight.MixProject is already compiled
The error appeared during mix deps.compile. Google led me to elixir-lang/elixir#13669, where José Valim himself explained the issue: QEMU emulation + Erlang JIT = chaos.
When Docker builds for a different platform, it uses QEMU to emulate the target architecture. The Erlang JIT (Just-In-Time compiler) doesn’t play nicely with this emulation, causing modules to appear “already compiled” when they shouldn’t be.
Solution: Tell the Erlang VM to use a JIT mode compatible with emulation:
ENV ERL_AFLAGS="+JMsingle true"
This flag makes the JIT use a single writable-and-executable memory mapping for generated code instead of its default dual mapping, which QEMU's translation layer handles far more gracefully.
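In the Dockerfile this has to happen in the builder stage, before mix deps.compile ever runs under QEMU; roughly (the stage name is just the usual convention):
FROM elixir:1.18.4-otp-26 AS builder

# Keep the Erlang JIT usable under QEMU emulation
ENV ERL_AFLAGS="+JMsingle true"

# ... mix deps.get, mix deps.compile, mix release ...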
Act III: The GLIBC Surprise
Build succeeded! Image pushed! Containers started! Then:
/app/erts-14.2.5.12/bin/erlexec: /lib/x86_64-linux-gnu/libc.so.6:
version 'GLIBC_2.34' not found
I had made a classic blunder. My multi-stage Dockerfile used:
- Builder stage: elixir:1.18.4-otp-26 (based on Debian Bookworm, with GLIBC 2.36)
- Runtime stage: debian:bullseye-slim (with GLIBC 2.31)
The Erlang runtime compiled on Bookworm expected a newer GLIBC than Bullseye could provide.
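If you want to see the mismatch for yourself, each Debian base image will tell you which GLIBC it ships (this is just glibc's own version banner, nothing Erlang-specific):
docker run --rm debian:bullseye-slim ldd --version   # ldd (Debian GLIBC 2.31-...) 2.31
docker run --rm debian:bookworm-slim ldd --version   # ldd (Debian GLIBC 2.36-...) 2.36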
Solution: Align the runtime with the builder:
ARG DEBIAN_VERSION=bookworm-slim
ARG RUNNER_IMAGE="debian:${DEBIAN_VERSION}"
Both stages now use Bookworm, and GLIBC is happy.
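For context, the trimmed shape of the multi-stage Dockerfile now looks roughly like this (the stage names and copy path follow the usual Phoenix-release layout and are abbreviated here, not a verbatim copy of my file):
ARG DEBIAN_VERSION=bookworm-slim
ARG RUNNER_IMAGE="debian:${DEBIAN_VERSION}"

FROM elixir:1.18.4-otp-26 AS builder
ENV ERL_AFLAGS="+JMsingle true"
# ... fetch deps, compile, mix release ...

FROM ${RUNNER_IMAGE} AS runner
# Same Debian release as the builder, so erlexec finds the GLIBC it was linked against
COPY --from=builder /app/_build/prod/rel/spotlight /app
CMD ["/app/bin/spotlight", "start"]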
Act IV: The Persistent Storage Puzzle
The app booted! The login page appeared! I uploaded my avatar through the admin panel, saved my profile, and… the image was a 404.
GET /uploads/9e91eedf-d7bd-4c8c-8b35-073b85863f84.JPG → 404
In a Phoenix release, the application directory structure is different from development. My code was using a relative path priv/static/uploads, which resolved correctly in dev but not in the bundled release.
Worse, even if it worked, the uploads directory was inside the container—meaning every redeployment would wipe all uploaded files.
Solution: A three-part fix:
- Environment variable for upload path:

defp upload_directory do
  System.get_env("UPLOADS_PATH") ||
    Application.app_dir(:spotlight, "priv/static/uploads")
end

- Docker Compose volume mount:

environment:
  UPLOADS_PATH: "/data/uploads"
volumes:
  - uploads_data:/data/uploads

- Custom plug for serving uploads (wired into the endpoint as shown right after this list):

defmodule SpotlightWeb.Plugs.StaticUploads do
  def init(opts), do: opts

  def call(%Plug.Conn{request_path: "/uploads/" <> _} = conn, _opts) do
    uploads_path =
      System.get_env("UPLOADS_PATH") ||
        Application.app_dir(:spotlight, "priv/static/uploads")

    Plug.Static.call(conn, Plug.Static.init(at: "/uploads", from: uploads_path))
  end

  # Everything that is not an upload falls through to the rest of the pipeline
  def call(conn, _opts), do: conn
end
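For the plug to actually run, it has to sit in the endpoint's plug pipeline ahead of the default Plug.Static. A minimal sketch, assuming the conventional SpotlightWeb.Endpoint module name and eliding the rest of the endpoint:
defmodule SpotlightWeb.Endpoint do
  use Phoenix.Endpoint, otp_app: :spotlight

  # Serve user uploads from the external volume before falling back to bundled assets
  plug SpotlightWeb.Plugs.StaticUploads

  plug Plug.Static,
    at: "/",
    from: :spotlight,
    gzip: false

  # ... the usual remaining endpoint plugs ...
end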
Now uploads persist across deployments in a Docker volume, and the endpoint knows where to find them.
The Deployment Architecture
After all the fixes, here’s what my production setup looks like:
┌─────────────────────────────────────────────────┐
│ Hetzner VPS │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ Caddy │ │ Phoenix │ │ PostgreSQL│ │
│ │ :80/443 │─▶│ :4000 │─▶│ :5432 │ │
│ └───────────┘ └───────────┘ └───────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ caddy_data uploads_data postgres_data │
│ (volumes) (volumes) (volumes) │
└─────────────────────────────────────────────────┘
Caddy handles TLS certificates automatically via Let’s Encrypt, terminates HTTPS, and reverse proxies to Phoenix. PostgreSQL runs alongside with health checks ensuring the app waits for the database. Named volumes persist data across container restarts.
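Stripped to the essentials, the Compose file behind that diagram looks something like this; the image names, Postgres tag, registry path, and health-check command are illustrative, and secrets plus most of the environment are omitted:
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data

  spotlight:
    image: REGION-docker.pkg.dev/PROJECT/REPO/spotlight:TAG  # Artifact Registry path (placeholder)
    environment:
      UPLOADS_PATH: "/data/uploads"
    volumes:
      - uploads_data:/data/uploads
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  caddy_data:
  uploads_data:
  postgres_data: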
Lessons Learned
1. Cross-platform Docker builds are tricky
If you’re on Apple Silicon targeting AMD64 servers, expect QEMU-related issues. The ERL_AFLAGS="+JMsingle true" flag is your friend.
2. Always match your builder and runtime base images
GLIBC version mismatches will haunt you. Use the same Debian version (or Alpine, if you’re brave) for both stages.
3. Releases change your file paths
Application.app_dir/2 is essential in releases. Relative paths that work in development will betray you in production.
4. Plan for persistent storage from day one
Containerized apps lose their filesystem on every update. Design your upload handling with external volumes in mind.
5. Caddy is magical
Automatic HTTPS with Let’s Encrypt, HTTP/3 support, simple config syntax—Caddy is the reverse proxy I didn’t know I needed.
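To give a sense of how little configuration that takes, a Caddyfile doing everything described above can be as small as this (the domain and upstream service name are placeholders):
spotlight.example.com {
    reverse_proxy spotlight:4000
}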
The Makefile Magic
My final workflow is beautifully simple:
# On my Mac:
cd ~/code/spotlight
MIX_ENV=prod make build-upload-release
# Copy the version tag, then on the server:
cd ~/code/server_conf/spotlight
# Update version in vars.yml
./deploy.sh
The entire deployment takes less than three minutes: roughly two for the Docker build and one for the push, pull, and restart on the server.
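I haven't shown the rest of the Makefile, but build-upload-release conceptually boils down to building, tagging for the registry, and pushing; roughly (with $(REGISTRY) standing in for the Artifact Registry path, and version handling elided):
build-upload-release: docker-build
	docker tag "$(DOCKER_IMAGE):$(DOCKER_TAG)" "$(REGISTRY)/$(DOCKER_IMAGE):$(DOCKER_TAG)"
	docker push "$(REGISTRY)/$(DOCKER_IMAGE):$(DOCKER_TAG)"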
What’s Next
- CI/CD Pipeline: Automate the build and deploy with GitHub Actions
- Database Backups: Automated pg_dump to cloud storage
- Monitoring: Prometheus + Grafana for metrics
- CDN: CloudFlare in front of Caddy for caching and DDoS protection
But for now, I’m savoring the moment. There’s something deeply satisfying about typing a URL and seeing your own creation respond—especially after debugging platform mismatches, JIT compatibility, and GLIBC versions for the better part of an afternoon.
The app is live. The journey continues.