Olha Stefanishyna

Minimal Size, Maximum Safety: Hardening Docker Images

A minimalist illustration of a container wrapped in a shield, representing a hardened image.

Not so long ago, many React developers had to update their projects to patch CVE-2025-55182, a critical vulnerability in React Server Components rated CVSS 10.0. It is one of the most severe flaws of its kind, allowing unauthenticated remote code execution (RCE) on the server. The vulnerability affected multiple releases of frameworks built on the React library; for example, default configurations of Next.js 15.x and 16.x (including some 14.3.0-canary builds) were impacted.

This React CVE is just one example of how vulnerabilities can persist across framework versions. CVEs affecting containers follow the same logic: the attack surface and the vulnerability already exist before they are named and disclosed.

In modern development, speed and convenience are often prioritized over security. At the same time, as attacks become more sophisticated, providing a safe environment takes more time. To accelerate development, engineers often rely on default images, which commonly include tools like package managers and shells. If an attacker gains access to such a container, they have a pre-installed toolkit for exploring your network or escalating privileges.

Hardening removes these tools, but it can take time and requires additional knowledge of infrastructure.

Key Principles of Hardening

  • Minimize runtime packages: If your app doesn't need it to run, it shouldn't be in the image.
  • Separate layers: Keep your build tools in the build stage and your runtime artifacts in the run stage.
  • Identity matters: Run your process as a non-root user with a specific UID/GID.
  • Manage tokens safely: Never hardcode secrets in the Dockerfile.

Following a series of high-impact CVEs, Docker made Docker Hardened Images (DHI) available to all developers for free and open source under the Apache 2.0 license. These images are minimal and secure by design.

Docker Hardened Images let developers simplify the hardening process and provide a safe environment to run applications. Many DHI variants run as a non-root user by default and omit a shell, curl, and git. This makes the attack surface smaller and helps meet stricter security standards. An additional bonus: a smaller image size leads to faster deployments.

However, these advantages can turn into disadvantages when you need to debug things in production (no shell, no curl), and hardening requires multi-stage builds, which makes the setup more complex and demands more initial effort. For such cases Docker provides the Docker Debug utility, which lets you debug containers without keeping a shell in production images. Learn more about it in the documentation.
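
A minimal sketch of that workflow, assuming Docker Debug is available in your Docker subscription and the production container is named my-app (a placeholder):

bash
# Attach an ephemeral debug shell with common tools to the running container;
# nothing is added to the production image itself
docker debug my-app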

The Hardened Image Approach

Let's look at a secure configuration based on a Node.js environment. This setup uses multi-stage builds and the principle of least privilege.

dockerfile
# --- Stage 1: Build Stage ---
FROM node:25-alpine3.22 AS builder
WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build
# Remove dev deps after build if you need node_modules in the run stage
RUN npm prune --omit=dev
# Fix permissions before copying
RUN chown -R 1000:1000 /app

# --- Stage 2: Run Stage ---
# Switch to a hardened, trusted registry (e.g., dhi.io)
FROM dhi.io/node:25-alpine3.22
WORKDIR /app

COPY --from=builder /app/build ./build
COPY --from=builder /app/package*.json ./
COPY server ./server

EXPOSE 3005
CMD ["node", "server/index.js"]

How this works:

Alpine Linux in the build stage lets you install dependencies with npm while remaining significantly smaller (~5 MB) than Debian-based images. This reduces the attack surface of the build stage, resource usage, and the amount of data to download and transfer, and it can speed up builds. Still, if you need tools or libraries that are only available in larger images (like Debian or Ubuntu), you can use them in the build stage; it will not affect the final image size.
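
A sketch of that variation, swapping only the build stage to a Debian-based image (the node:25 tag and the condensed build steps are assumptions, reusing the example above) while keeping the hardened runtime stage:

dockerfile
# Build stage: a larger Debian-based image with full build tooling
FROM node:25 AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Run stage: still the minimal hardened image, so the extra build-stage size is discarded
FROM dhi.io/node:25-alpine3.22
WORKDIR /app
COPY --from=builder /app/build ./build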

**Removing development dependencies** with npm prune --omit=dev ensures that testing frameworks (like Jest) and compilers (like TypeScript) never make it into production.

Fix permissions before copying: By default, normal Node.js Docker containers run as root, but DHI variants run as a non-root user. To make files that were created as root accessible to that non-root user, you should fix the ownership. RUN chown -R 1000:1000 /app recursively changes the ownership of all files and directories under /app to the user and group with UID and GID 1000, ensuring that the application files are owned by a non-root user.
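
If you prefer to avoid the extra chown layer, a possible alternative is to set ownership at copy time in the run stage; a sketch, assuming the same UID/GID 1000:

dockerfile
# Set ownership while copying, instead of a separate RUN chown layer in the builder
COPY --chown=1000:1000 --from=builder /app/build ./build
COPY --chown=1000:1000 --from=builder /app/package*.json ./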

Copy only what your app needs to run with the COPY directive. If you only need node modules to run a simple Node.js server, you can bundle the server at build time to avoid copying the full node_modules into the runtime image (see the sketch below).
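
As a sketch of that idea, the build stage could bundle the server entry point into a single file. The esbuild bundler and the server/index.js entry point are assumptions here, and this only works if your server has no native dependencies:

dockerfile
# Hypothetical: bundle the server into one self-contained file,
# so the run stage does not need node_modules at all
RUN npx esbuild server/index.js --bundle --platform=node --outfile=build/server.js

The run stage then copies only the build output and starts node build/server.js.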

Secure Setup: Using Tokens for Authentication

To pull from a hardened registry or to fetch private packages during the build, you need to log in to the service first. To keep things safe, don't hardcode credentials; use Personal Access Tokens (PAT) or Build Secrets and store them in your CI secret store.

Let’s do it step by step using Docker (or dhi.io) and GitHub Actions as an example:

Step 1: Generate a token in Docker settings

Go to your Docker account settings and open Personal Access Tokens. If you just need a regular DHI, don't elevate privileges; the Public Repo Read-only scope is enough. Treat the token like a password and set an expiration if possible.

Step 2: Set the credentials as secrets

In your repository settings, add secrets, say IMAGE_USERNAME (your Docker username) and IMAGE_TOKEN (the token from Docker settings). Don't expose these in logs or print them during the workflow.
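
If you prefer the terminal over the repository settings page, a sketch using the GitHub CLI (assuming gh is installed and authenticated):

bash
# Store the Docker username as a repository secret
gh secret set IMAGE_USERNAME --body "your-docker-username"
# Prompts for the token value so it never lands in your shell history
gh secret set IMAGE_TOKEN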

Step 3: Authenticate in CI before build/pull

If you use CI/CD, configure your runner to authenticate with the token stored as a secret:

yml
- name: Log in to DHI registry
  uses: docker/login-action@v3
  with:
    registry: dhi.io
    username: ${{ secrets.IMAGE_USERNAME }}
    password: ${{ secrets.IMAGE_TOKEN }}

Use the official Docker login action in your workflow; docker/login-action@v3 expects a username and a token.
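
Once the login step succeeds, later steps in the same job can pull from dhi.io and build as usual; a sketch of a follow-up build step, with my-app:latest as a placeholder image name:

yml
- name: Build image
  uses: docker/build-push-action@v6
  with:
    context: .
    tags: my-app:latest
    push: false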

Other Security Concerns

If your build stage requires access to private dependencies, don't bake secrets into Docker images; use Docker BuildKit secret mounts instead. Secrets passed with --secret (or RUN --mount=type=secret,...) are not stored in the final image or the build cache: they are only available for the duration of the build step and are not persisted afterwards. Alternatively, use Docker swarm mode, which lets you manage secrets and grant access only to the services that need them.
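
A sketch of a BuildKit secret mount for a private npm registry; the npmrc secret id and the .npmrc path are assumptions:

dockerfile
# The token is mounted only for this RUN step and never written to an image layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci

The secret is then supplied at build time, for example with docker build --secret id=npmrc,src=$HOME/.npmrc . (or via the secrets input of docker/build-push-action in CI).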

Gate images in CI with automated scanning and policy checks. Integrate vulnerability scanners (Trivy, etc.) to fail the build when an image violates your CVE severity threshold or policy.
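
For example, a workflow step that fails the job on high-severity findings; this assumes Trivy is available on the runner (for instance, installed in a previous step) and reuses the placeholder image name my-app:latest:

yml
- name: Scan image for vulnerabilities
  run: trivy image --exit-code 1 --severity CRITICAL,HIGH my-app:latest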


That’s it: with DHI, a small amount of upfront work usually makes the environment more resilient, reduces your applications' attack surface, and helps protect them from both known and emerging threats, keeping your deployments clean and maintainable.
