Introduction
A major container optimization question is whether to ship statically linked binaries or rely on dynamic libraries in the runtime image.
This decision affects image size, compatibility, security updates, and operational simplicity.
Static vs Dynamic Linking
Dynamic linking
Binary depends on shared libraries at runtime.
Pros:
- Smaller binary.
- Shared libs reused across programs.
Cons:
- Runtime image must include matching libraries.
- Dependency mismatch can break startup.
Static linking
Binary embeds required libraries into executable.
Pros:
- Portable runtime behavior.
- Easier minimal runtime images.
Cons:
- Larger binary.
- Rebuild needed for library CVE fixes.
Inspect Dependencies with ldd
Use ldd to inspect dynamic dependencies:
ldd ./app
If output says:
not a dynamic executable
the binary is statically linked.
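This check can be wrapped in a small helper script. The sketch below uses a hypothetical function name, is_static, and tests /bin/sh only as a stand-in for your own binary; note that ldd's exact wording can vary between libc implementations (the "not a dynamic executable" message is glibc's):

```shell
# Classify a binary as static or dynamic based on ldd output.
# glibc's ldd prints "not a dynamic executable" for static binaries;
# other libc implementations may word this differently.
is_static() {
  if ldd "$1" 2>&1 | grep -q "not a dynamic executable"; then
    echo "static"
  else
    echo "dynamic"
  fi
}

is_static /bin/sh
```

On most glibc-based distros this reports "dynamic" for /bin/sh; a CGO_ENABLED=0 Go binary would report "static".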
Go and Linking Behavior
Binaries built from pure Go code are statically linked when CGO is disabled. With CGO enabled (the default when a C toolchain is present), packages such as net and os/user can introduce dynamic libc dependencies.
Typical static-style build:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app ./cmd/server
Always verify with ldd after build.
C/C++ Static Linking Notes
For C/C++ static builds on glibc-based systems, you may need static libc packages (for example glibc-static on some distros).
But fully static glibc has caveats: glibc features such as NSS-based name resolution can still dlopen shared plugins at runtime, so a "static" glibc binary is not fully self-contained. Many teams prefer:
- musl-based builds for true static linking, or
- dynamic linking with slim runtime image that includes required libs.
Docker Layers and Storage Efficiency
Container filesystems use layered storage. A common misconception is that seeing the same files inside multiple containers means disk usage grows linearly with each container.
Reality:
- Read-only layers are shared across images when digest matches.
- Writable container layers are separate per container.
- Disk usage still grows with many near-duplicate layers.
Use these commands to inspect real usage:
docker system df
docker image ls
docker history <image>
Practical Image Optimization Strategy
- Multi-stage builds.
- Minimal runtime base images.
- Avoid package managers in final stage.
- Copy only required artifact(s).
- Pin base image digests.
Example multi-stage Dockerfile:
FROM golang:1.24 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o app ./cmd/server
FROM gcr.io/distroless/static
COPY --from=build /src/app /app
ENTRYPOINT ["/app"]
Security Trade-Offs
Static binaries are simple operationally, but CVE response differs:
- Dynamic: patch shared library in base image and redeploy.
- Static: rebuild binary with patched toolchain/libs and redeploy.
Both can be secure if your patch pipeline is mature.
Debugging Runtime Failures
If container starts locally but fails in production:
- Check architecture mismatch (amd64 vs arm64).
- Check libc expectations.
- Check missing CA certificates/timezone data in minimal images.
- Check executable permissions and entrypoint.
Useful checks:
file ./app
ldd ./app
docker run --rm -it --entrypoint sh <image>
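The architecture check can be scripted. The sketch below uses /bin/sh as a stand-in for your binary and assumes a Linux system where uname and file are available:

```shell
# Compare host architecture against a binary's target architecture.
host=$(uname -m)           # e.g. x86_64 or aarch64
bin=$(file -b /bin/sh)     # describes the ELF target of the binary

echo "host arch:   $host"
echo "binary info: $bin"

case "$bin" in
  *x86-64*)  echo "binary targets amd64" ;;
  *aarch64*) echo "binary targets arm64" ;;
  *)         echo "binary arch unclear; inspect 'file' output" ;;
esac
```

A mismatch here (for example an amd64 binary on an arm64 node) commonly produces "exec format error" at container startup.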
When to Prefer Static Builds
- Single binary services.
- Minimal container runtime requirements.
- Environments with dependency drift risk.
When Dynamic Builds Are Fine
- Complex native dependency stacks.
- Existing distro-based runtime images.
- Teams with strong base-image patch workflows.
Conclusion
Static linking is a powerful container strategy, but it is not automatically better in every case. The right choice depends on dependency profile, security patch workflow, and operational constraints.
Use ldd and image inspection tools to make decisions based on evidence, not assumptions.