Most Docker caching advice feels contradictory because “Docker cache” is not one thing. There is layer cache, there are cache mounts, and there are external cache backends. If you mix those concepts together, builds feel random and CI stays slow even after you add --mount=type=cache everywhere.
This guide is the mental model I wish I had earlier.
If you only remember one thing, make it this:
- Layer cache lets BuildKit skip work entirely.
- Cache mounts make repeated work cheaper when a step has to run again.
- External cache makes cached build results portable across machines and CI runs.
They work together, but they are not substitutes for one another.
Start with BuildKit
Most of the useful caching features live in BuildKit, not the legacy builder.
Use docker buildx build when possible, or enable BuildKit explicitly:
```shell
DOCKER_BUILDKIT=1 docker build -t myapp .
```
In Dockerfiles that rely on modern mount syntax, declare the syntax version near the top:
```dockerfile
# syntax=docker/dockerfile:1.7
```
How Docker cache actually works
At a high level, BuildKit evaluates each instruction using:
- the instruction itself
- the filesystem state produced by previous steps
- the files that instruction depends on
If those inputs are unchanged, BuildKit can reuse the cached result. If they change, that step is invalidated, and later steps often have to rebuild too.
That is why this pattern is so expensive:
```dockerfile
FROM node:20
WORKDIR /app
COPY . .
RUN npm ci
RUN npm run build
```
Any source change invalidates COPY . ., which invalidates npm ci, which invalidates the build.
The better pattern is to separate dependency metadata from frequently changing source files:
```dockerfile
FROM node:20
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build
```
Now dependency installation is tied to package.json and package-lock.json, not every file in the repository.
That one change usually matters more than any fancy flag.
Keep the build context small
Your build context is everything sent to the builder. A large context is bad for both speed and cache stability.
Use .dockerignore aggressively for things that do not belong in the build:
- `node_modules`
- `.venv`
- test artifacts
- logs
- local caches
- generated build output
But be careful not to exclude files that define dependency state, such as:
- `package-lock.json`
- `poetry.lock`
- `uv.lock`
- `Cargo.lock`
- `go.sum`
Those lockfiles are often exactly what you want in the cache boundary.
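As a starting point, a `.dockerignore` for a Node project might look like this (the entries are illustrative; note that the lockfile is deliberately not excluded):

```
# .dockerignore — keep the context small, but keep lockfiles
node_modules
.git
dist
coverage
*.log
.cache
```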
The three cache types people confuse
| Mechanism | What it reuses | Best for | Shared across builders? |
|---|---|---|---|
| Layer cache | Completed build steps | Skipping work entirely | Only if exported |
| Cache mounts | Files inside a RUN step | Package manager downloads and compiler caches | No, usually builder-local |
| External cache | Exported BuildKit cache | CI and multi-machine reuse | Yes |
That table explains most confusion around Docker build performance.
1. Layer cache: the first and biggest win
Layer cache is what people usually mean when they say “Docker cache.” It is also the biggest lever.
Good layer cache strategy looks like this:
- Put expensive and stable steps early.
- Put frequently changing steps late.
- Copy manifests first, source later.
- Split unrelated build stages so one change does not invalidate everything.
- Use lockfiles so dependency resolution stays deterministic.
Multi-stage builds help here too. If frontend assets, documentation, and backend code are unrelated outputs, they should not all sit in one giant invalidation chain.
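As a sketch, unrelated outputs can live in separate stages so a change to one does not invalidate the others (the stage names, directories, and binary paths here are illustrative):

```dockerfile
# Each stage has its own invalidation chain.
FROM node:20 AS frontend
WORKDIR /app
COPY frontend/package.json frontend/package-lock.json ./
RUN npm ci
COPY frontend/ .
RUN npm run build

FROM golang:1.22 AS backend
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o /out/server ./cmd/server

# The final image only copies the finished artifacts.
FROM debian:bookworm-slim
COPY --from=frontend /app/dist /srv/static
COPY --from=backend /out/server /usr/local/bin/server
CMD ["server"]
```

Editing frontend source reruns only the frontend stage; the backend stage stays cached.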
2. Cache mounts: fast reruns for package managers
Cache mounts are a different mechanism:
```dockerfile
RUN --mount=type=cache,target=/root/.npm npm ci
```
This does not cache the layer result the way `--cache-from` does. Instead, it gives the RUN step a persistent directory that can survive across builds on the same builder.
That is useful because package managers spend a lot of time downloading artifacts:
- `npm` downloads tarballs
- `pip` downloads wheels and sdists
- `uv` downloads wheels and Python distributions
- `cargo` downloads crates
- `go` downloads modules
- `apt` downloads package metadata and archives
If a step must run again, a cache mount lets the tool reuse those downloads instead of starting from zero.
Important properties of cache mounts
- They are attached to a single `RUN` step.
- They are not copied into the final image.
- They are usually local to the BuildKit builder.
- They can be garbage-collected.
- If BuildKit skips the step entirely via layer cache, the cache mount is irrelevant because the step never runs.
That last point is key:
- layer cache helps you avoid running `npm ci`
- cache mounts help `npm ci` be cheaper if it does run
Those are different problems.
Cache mount examples
npm
```dockerfile
FROM node:20
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build
```
pip
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```
uv
uv works especially well with BuildKit because it benefits from both cache mounts and clean dependency boundaries.
```dockerfile
# syntax=docker/dockerfile:1.7
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:0.10.9 /uv /uvx /bin/
WORKDIR /app
ENV UV_LINK_MODE=copy
RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync --locked --no-install-project
COPY . .
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --locked
```
Why this works well:
- `uv.lock` and `pyproject.toml` define the dependency boundary.
- `--no-install-project` separates transitive dependencies from application source.
- `UV_LINK_MODE=copy` avoids cross-filesystem linking warnings when the cache mount and environment live on different filesystems.
For workspace builds, the first sync often uses --frozen --no-install-workspace before the full source tree is copied, and the final sync can use --locked.
apt
apt is slightly special because parallel access to the same cache can cause issues. sharing=locked is often the right choice.
```dockerfile
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y \
    curl \
    git
```
Other useful targets
- Go: `/go/pkg/mod` and `/root/.cache/go-build`
- Cargo: `/usr/local/cargo/registry` and the build `target` directory
- Maven: `/root/.m2/repository`
- pnpm: the pnpm store directory
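For example, a Rust build using the Cargo targets above might look like this (a sketch; the binary name `myapp` is illustrative). Because the `target` directory is a cache mount, its contents do not persist into the image, so the finished binary has to be copied out within the same `RUN` step:

```dockerfile
FROM rust:1.79
WORKDIR /src
COPY Cargo.toml Cargo.lock ./
COPY src ./src
# Reuse downloaded crates and intermediate compilation artifacts across builds;
# copy the binary out because the cache mount vanishes after this step.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/src/target \
    cargo build --release && \
    cp target/release/myapp /usr/local/bin/myapp
```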
Cache mount options that matter
The full syntax is flexible:
```dockerfile
RUN --mount=type=cache,target=/path,id=my-cache,sharing=shared command
```
The most useful options are:
- `target`: where the cache appears in the container
- `id`: stable identifier for the cache
- `sharing`: `shared`, `private`, or `locked`
- `uid`, `gid`, `mode`: ownership and permissions
Two practical notes:
- Changing `id`, `uid`, `gid`, or `mode` can effectively give you a fresh cache.
- If multiple builds write to the same cache and the tool expects exclusive access, use `sharing=locked`.
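Putting those options together: when a step runs as a non-root user, the cache directory must be owned by that user or the tool will hit permission errors. A sketch, assuming the `node` user from the official Node image (uid and gid 1000):

```dockerfile
# The explicit id keeps the cache identity stable even if the target
# path changes; uid/gid make the mount writable by the node user.
USER node
RUN --mount=type=cache,id=npm-cache,target=/home/node/.npm,uid=1000,gid=1000 \
    npm ci
```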
3. External cache: what makes CI fast
By default, BuildKit cache lives inside the builder. That is fine on your laptop or on a long-lived self-hosted runner. It is much less useful on ephemeral CI runners where every build starts from a fresh machine.
That is where external cache comes in.
With docker buildx build, you can export cache to a remote location and import it in later builds:
```shell
docker buildx build \
  --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
  --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
  -t registry.example.com/myapp:latest \
  --push .
```
This lets future builders skip already-completed steps, even on different machines.
Common cache backends
- `type=registry`: best when you want a portable cache shared across CI and local builds
- `type=gha`: convenient for GitHub Actions-only workflows
- `type=local`: useful for local experiments or self-hosted runners
- inline cache: stores cache metadata in the pushed image; simpler but less flexible than a dedicated cache image
If your CI builders are ephemeral, external cache is often the difference between “sometimes warm” and “predictably fast.”
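In GitHub Actions, for example, the same idea uses the `gha` backend instead of a registry ref (a sketch; `mode=max` also exports layers from intermediate stages, not just the final image):

```shell
docker buildx build \
  --cache-from type=gha \
  --cache-to type=gha,mode=max \
  -t myapp:latest .
```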
Cache mounts vs external cache
This is the distinction that causes the most confusion:
- `--cache-from`/`--cache-to` is about reusing completed build results
- `--mount=type=cache` is about reusing a directory during a `RUN` step
Another way to say it:
- external cache helps you skip work
- cache mounts help you repeat work more cheaply
That is why adding cache mounts alone often disappoints people in CI. If every run gets a new builder, the mount cache may not exist yet. You still need external cache if you want reuse across CI runs.
Use bind mounts when you do not want a COPY layer
Bind mounts are another underused optimization.
Instead of copying source into the image just to produce an artifact, you can mount it temporarily during a RUN instruction:
```dockerfile
FROM golang:1.22
WORKDIR /src
RUN --mount=type=bind,target=. \
    --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build -o /out/app ./cmd/app
```
This is useful when:
- the source is only needed to generate an artifact
- you do not want every input file copied into a cached layer
- the final image only needs the built output
Remember that bind mounts are temporary. Their contents are visible only for that instruction, and output should be written outside the mount target.
Why cache mounts seem to “randomly” disappear
This usually has one of a few causes:
- you are on a different builder
- the builder was recreated
- BuildKit garbage-collected old cache data
- the cache identity changed because mount options changed
- you ran with
--no-cache
This is expected behavior, not usually a Docker bug.
Cache mounts are builder-local state. If the builder disappears, the cache mount usually disappears with it.
Common misconceptions
“If I use `--mount=type=cache`, will my next GitHub Actions run reuse it?”
Not reliably on hosted runners. That cache usually lives in the local BuildKit storage of that runner’s builder.
“Does `cache-to` export my cache mount contents?”
Treat the answer as no. External cache exports BuildKit build results, not a portable copy of every builder-local cache directory.
“Why is my dependency step still running even though I use cache mounts?”
Because cache mounts do not make the step disappear. They only make that step cheaper to execute once it is invalidated.
“Do I still need good layer ordering if I export cache to a registry?”
Yes. External cache cannot rescue a Dockerfile that invalidates expensive steps on every source change.
“Can I use cache mounts and external cache together?”
Yes, and that is usually the best setup.
A practical strategy for fast builds
If I want predictable Docker build performance, this is the order I think in:
- Enable BuildKit and use `buildx`.
- Keep the build context small with `.dockerignore`.
- Copy dependency manifests and lockfiles before source code.
- Split unrelated build outputs into separate stages.
- Add cache mounts for package manager and compiler caches.
- Export external cache for CI, especially on ephemeral runners.
- Remove duplicated work inside build scripts so later stages do not rebuild artifacts unnecessarily.
- Measure and inspect cache behavior instead of guessing.
Debugging and inspection
When cache behavior is unclear, use plain progress output:
```shell
BUILDKIT_PROGRESS=plain docker buildx build .
```
To inspect disk usage:
```shell
docker buildx du
```
To inspect cache mounts specifically:
```shell
docker buildx du --filter type=exec.cachemount --verbose
```
To prune old cache:
```shell
docker builder prune -a
```
Those commands make BuildKit much less mysterious.
One security note
While modernizing builds, use secret mounts and SSH mounts for credentials instead of baking secrets into ARG or ENV. Build speed and build safety usually improve together when you adopt BuildKit features intentionally.
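As a sketch, a secret mount exposes a credential only for the duration of a single `RUN` step (the secret id `npm_token` is illustrative):

```dockerfile
# The secret appears at /run/secrets/<id> during this step only;
# it is never written into an image layer or the image config.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

The value is supplied at build time, for example with `docker buildx build --secret id=npm_token,src=./npm-token.txt .`.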
The takeaway
Fast Docker builds are usually not about one magic setting. They come from combining a few boring but high-leverage rules:
- small context
- stable dependency boundaries
- good layer ordering
- package-manager cache mounts
- exported cache for CI reuse
- clear separation between source, dependencies, and final artifacts
Once those pieces are in place, Docker cache stops feeling unpredictable and starts feeling mechanical. That is the real goal: not just faster builds, but builds whose performance you can actually reason about.