Dockerfile Requirements

This document walks users through constructing a Dockerfile that complies with Iron Bank policies, and describes best practices and accepted methods.

Background Information

The build process defined in GitLab runs in a disconnected environment where no connection to the internet is allowed. Any dependencies that cannot currently be pulled from Iron Bank, including binaries and builder/intermediate images, must be declared in the hardening_manifest.yaml file under the resources section. During the prebuild stage of the pipeline, the required dependencies are downloaded and prestaged as import artifacts in the Nexus repository. See Hardening Manifest.

Requirements

The following requirements must be met when building your container. No other methods of building a container are accepted (e.g. using Ansible/Terraform, or calling a script that builds the container). Our pipeline accepts exactly one way of building containers: a Dockerfile in the build context, built with commands similar to docker build --build-arg BASE_REGISTRY ... -t container:tag .. Ensure your Dockerfile meets the following requirements.

Your repository should contain only one Dockerfile, named 'Dockerfile'. Your Dockerfile should only call Iron Bank built base images with the correct variables:

Note on base images for the Dockerfile: These defaults will be replaced by the build/hardening pipeline when a build command is issued. However, the defaults should also lead to a successful container build. The FROM instruction pulling the base image must point to hardened Iron Bank containers.

For images that use UBI, we request that you use UBI9 unless you specifically require UBI8 for technical reasons. If using UBI8, we will ask for a justification and a roadmap for converting to UBI9.

a. These three variables MUST be present to call Iron Bank built base images.

Do (example):

ARG BASE_REGISTRY=registry1.dso.mil
ARG BASE_IMAGE=redhat/ubi/ubi9
ARG BASE_TAG=9.3

FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG}

Do not (example):

FROM registry.access.redhat.com/ubi9/ubi:9.3

b. No calls to the internet are permitted in the Dockerfile or in any script it uses. Iron Bank utilizes a secure satellite server for Red Hat UBI base image repositories, accessed via yum/dnf commands.

c. Only one Dockerfile is allowed per container. The Dockerfile should be named 'Dockerfile' and nothing else (e.g. no Dockerfile.ubi, project.Dockerfile, etc.).

d. Do you have any scripts used in the Dockerfile (e.g. an entrypoint script)? If so, you may NOT include them in a tarball pulled via the hardening_manifest.yaml file. Instead, create a scripts directory at the root of the project, place the entrypoint script (and any other scripts) there, and copy them from the build context into the image.

Additionally, any scripts placed in this directory must not contain any external dependencies. Commands such as wget or curl will cause the scripts to be rejected, regardless of whether that section of the code is ever executed. Absolutely no reference to an external dependency may exist inside your scripts.

e. Do not include labels within the Dockerfile. These should be placed in the hardening_manifest.yaml.

Documentation guidelines

  • No documentation you provide (including code, README, license, etc.) may contain profanity, defamation, or insulting/unprofessional content. Any container containing such content will be rejected; it must be corrected or it will be subject to removal.

Container execution guidelines

  • The container must be started as a non-root user. Use the USER instruction in the Dockerfile accordingly, and grant whatever privileges are needed for that USER to work properly. --privileged is not allowed, and you must run as a non-root user. If root is required, you must provide a proper justification in the README.md as well as in your justifications file.

  • A bootstrap script that switches to a non-root user is not sufficient.

  • This includes initContainers that chown volume mounts.

  • Using tools such as gosu is prohibited.

  • Use a numeric UID (>= 1000) for USER to support random UIDs, as in the sketch after this list.

  • If a Helm chart exists, use the same default UID as runAsUser.
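
A minimal sketch of creating and using a fixed numeric UID (the user name and numeric IDs are illustrative):

# create a dedicated unprivileged user and group with fixed numeric IDs
RUN groupadd -g 1001 app && \
    useradd -u 1001 -g app -s /sbin/nologin app

# reference the numeric UID so the image also supports random UIDs
USER 1001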

Update packages and remove cache

If using UBI as your base image, you must include the following to ensure that all packages are updated at time of build. If using a base image other than UBI, you must provide an equivalent command within your Dockerfile.

#the `--nodocs` flag is optional.
RUN dnf update -y --nodocs && \
    dnf clean all && \
    rm -rf /var/cache/dnf

Install packages and clean the cache within the same RUN layer to reduce the image size.
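
For example (the package name is illustrative):

RUN dnf install -y --nodocs httpd && \
    dnf clean all && \
    rm -rf /var/cache/dnf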


Remove unnecessary software

To reduce the attack surface and the number of findings, be sure to remove any dependencies you installed that are not required for a production image. These include compilers, devel packages, examples, and debug tools. Such tools should be added and removed within the same RUN command to reduce the image size, as in the sketch below.
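
A minimal sketch of this pattern (the package names and build step are illustrative):

# install build tools, build the application, then remove the tools in one layer
RUN dnf install -y gcc make && \
    make -C /opt/application/src install && \
    dnf remove -y gcc make && \
    dnf clean all && \
    rm -rf /var/cache/dnf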

Do not remove packages that were installed by a previous base image or that are not part of your Dockerfile; doing so will cause the container to be rejected.

Consider using a multi-stage build in which the binaries are copied into a final image. It is difficult to track which dependencies are build-time vs. runtime, so it is easier to copy what is required than to remove what is unnecessary.


Remove certificates and keys

Do not include private keys or certificates within the image. You must also remove any certificates or keys even if they are only intended for testing.

Specifying --nodocs when installing packages may prevent test keys from being added.
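
If a package still leaves test material behind, remove it explicitly; a minimal sketch (the paths are hypothetical):

RUN rm -f /usr/share/doc/mypackage/test-key.pem /etc/pki/tls/private/test.key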


Ensure proper file permissions

  • System and user binaries should be owned by root and read/execute only (a sketch follows this list).
  • Avoid chmod 777 on files and directories.
  • Remove any unnecessary SUID or SGID bits.
  • Modifying the permissions of /etc/passwd is unacceptable. See here for a detailed explanation.
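
A minimal sketch of tightening ownership and permissions (the application path is illustrative):

# make application files root-owned, remove group/other write access,
# and clear any unnecessary SUID/SGID bits
RUN chown -R root:root /opt/application && \
    chmod -R go-w /opt/application && \
    find /opt/application -type f -perm /6000 -exec chmod a-s {} \;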

Import GPG keys for signed packages

Include any GPG keys in a gpg folder within repo1 and import them. Do NOT use --nogpgcheck with dnf.

COPY gpg/myapplication-gpg.asc myapplication.rpm /
RUN rpm --import /myapplication-gpg.asc && \
    dnf install -y /myapplication.rpm && \
    dnf clean all && \
    rm -f /myapplication-gpg.asc /myapplication.rpm

ADD is prohibited; use COPY

The use of ADD is prohibited to prevent the automatic extraction of remote archives. The use of ADD for local archives is also forbidden. Consider using multi-stage builds as an alternative to avoid doubling the image size:

FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG} AS build

COPY application.tar.gz /
RUN mkdir -p /opt/application && \
    tar -zxf /application.tar.gz -C /opt/application

FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG}

COPY --from=build /opt/application /opt/application

Include ENTRYPOINT and other scripts in the scripts directory

docker-entrypoint.sh and other scripts must be included within repo1 in the scripts directory. Scripts must also be copied from repo1 to the final image. This allows reviewers to easily see the contents of any bootstrap scripts.

COPY scripts/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
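
The copied script can then serve as the image's entrypoint; a minimal sketch (the permissions and path are illustrative):

RUN chmod 755 /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]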

Include configuration files in the conf directory

Default configuration files for an application should also reside within repo1 in the conf directory. This allows reviewers to easily see the contents of any configuration files. The default configuration should enable any security-relevant settings.
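
A minimal sketch, assuming a hypothetical configuration file and destination path:

COPY conf/application.conf /opt/application/conf/application.conf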

Do not embed any credentials in the configuration files.


Do not extract archives to the root directory

Extracting an archive to the root / filesystem prevents a reviewer from determining whether system files are modified and requires them to download and inspect the archive. Extract applications to non-system directories instead.

RUN mkdir -p /opt/application && \
    tar -zxf application.tar.gz -C /opt/application

Do not RUN scripts coming from external sources in the pipeline

Any script RUN during the build must exist within repo1. Downloading an external archive or shell script via hardening_manifest.yaml and executing it within the build stage is unacceptable.

It is preferred to RUN steps directly in the Dockerfile instead of using a script.


Use one process per container

It is recommended to include only a single process per container. For applications that spawn child processes or do not handle SIGTERM, consider using tini to properly reap zombie processes.
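
A minimal sketch of wrapping the application with tini, assuming the tini binary has already been declared in hardening_manifest.yaml and prestaged in the build context (the paths are illustrative):

COPY tini /usr/local/bin/tini
RUN chmod 755 /usr/local/bin/tini
ENTRYPOINT ["/usr/local/bin/tini", "--"]
CMD ["/opt/application/bin/application"]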


Use specific base images

Use the most specific base image available for an application. For example, a Java application should use the Java base image instead of starting from UBI and installing Java.
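
For instance, a hedged sketch of a Java application pulling a Java base image instead of installing Java on UBI (the image path and tag are assumptions; check Registry1 for the current values):

ARG BASE_REGISTRY=registry1.dso.mil
ARG BASE_IMAGE=redhat/openjdk/openjdk17
ARG BASE_TAG=1.17

FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG}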


Use EXPOSE for defining ports

The EXPOSE instruction is informational only, but it helps reviewers understand which ports your application intends to bind.

  • Expose ports as necessary to facilitate correct operation of your application, and avoid privileged ports (below 1024) to support running as a non-root user, as shown below.
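
For example (the port is illustrative):

# informational only: the application binds to 8080 rather than a privileged port
EXPOSE 8080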

Multi-Stage builds in your Dockerfile

Multi-stage builds are a powerful way to produce leaner, more efficient Docker images. The approach uses multiple FROM statements within your Dockerfile; each FROM instruction begins a new build stage. Artifacts can then be selectively copied from one stage to another, leaving behind any components that are unnecessary for the final image.

Using multi-stage builds can help with:

  • Smaller Image Sizes: Discard build-time components and end up with an image containing only what it needs to operate.

  • Minimizing the Attack Surface: Every extraneous package and component eliminated during the build process removes a potential source of vulnerabilities.

Putting Theory into Practice

An example of a multi-stage build can be found in the Kubernetes-1.27 Multi-Stage Build. The first build stage installs the kubeadm prerequisites and brings in the source code; the kubeadm package itself is then built. Line 31 contains the final FROM statement, which begins the final image.

This approach is particularly valuable when building kubeadm from source. Many packages are essential during the build phase, but not all of them are required in the final product. With the multi-stage build technique, the "builder" packages are no longer present in your final image, resulting in a container that is both leaner and less exposed to potential vulnerabilities.

Another example of multi-stage builds can be found in the Apache-Extended Multi-Stage Build. This configuration produces a Docker image hosting multiple services under a single roof. Multiple "builder" images streamline the assembly of the distinct services: rather than piecing together each service individually, each "builder" image extracts exactly the packages essential to its specific service.

For more information on multi-stage builds, visit https://docs.docker.com/build/building/multi-stage/