Understanding the Pipeline

Every container in Iron Bank utilizes the Container Hardening pipeline. At a high level, the pipeline is responsible for the following:

  • Ensure compliance with CHT project structure guidelines and check for the presence of required files.
  • Retrieve the resources used in the container build process and build the hardened container.
  • Perform scans of the container in order to evaluate for security vulnerabilities.
  • Publish scan findings to the VAT (Vulnerability Assessment Tracker) and the hardened container to Registry1.

Pipeline Documentation

This page provides an overview of the pipeline and includes information that may be helpful when onboarding. For in-depth information on the pipeline, including specific stage writeups, and to view the code, please access the Iron Bank Pipeline Documentation.

Getting Started

The first thing you need to know when onboarding is which Iron Bank base image you want to use. The Container Hardening pipeline behaves slightly differently for each base image, although it builds the image regardless of which Iron Bank base image you use. Check out the guide for Choosing a Base Image.

Early in the pipeline, a linting stage enforces the correct repository structure and verifies the presence of required files. This stage runs to ensure compliance with Container Hardening team requirements. Please ensure that your repository structure meets the guidelines provided by the Container Hardening team and that all the necessary files exist; otherwise, the Container Hardening pipeline will fail.
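
As a rough illustration, a hardened container repository typically contains files along these lines (the exact required files and structure are defined by the Container Hardening team guidelines):

# illustrative top-level layout of a hardened container repository
Dockerfile                # build instructions for the hardened image
hardening_manifest.yaml   # image metadata, external resources, and maintainers
LICENSE                   # license for the repository content
README.md                 # documentation displayed on the Iron Bank website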

Artifacts

The Container Hardening pipeline utilizes GitLab CI/CD artifacts in order to run stages which depend on resources from previous stages, and to provide Repo1 contributors with access to the pipeline artifacts which are relevant to them. For example, these artifacts can include a tar file of the built image, or a scan results file which contributors can use to enter justifications for any security findings.
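
Artifacts can be downloaded from the job page in the GitLab UI, or via the GitLab API, as in the hedged sketch below (the project ID, job ID, and access token are placeholders):

# download the artifacts archive for a specific job via the GitLab API
curl --header "PRIVATE-TOKEN: <your-access-token>" \
  --output artifacts.zip \
  "https://repo1.dso.mil/api/v4/projects/<project-id>/jobs/<job-id>/artifacts"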

For more information regarding specific artifacts for each stage, please refer to the following documentation from the Container Hardening pipeline project.

Project Branches

The Container Hardening pipeline runs slightly differently depending on the hardened container project branch it is run on. This ensures a proper review of the repository content before the code is merged into a development or master branch; the master branch pipeline publishes items to production systems.

  • Feature branches: Does not publish images to public Registry1 or findings results to the Iron Bank.
  • development: Does not publish images to public Registry1 or findings results to the Iron Bank. Merging must be performed by the Container Hardening Team, but you can open the merge request.
  • master: Publishes images to public Registry1 and findings results to the Iron Bank. Merging must be performed by the Container Hardening Team, but you can open the merge request.

Pipeline Stages

[Pipeline overview diagram]

setup / setup

This preliminary stage sets up the workspace and verifies the presence and validity of required files and directories in the repository.

pre-build / import-artifacts

This stage imports any external resources (resources from the internet) listed in the hardening_manifest.yaml file for use during the hardened image build. It downloads the external resources and validates that the checksums calculated upon download match the checksums provided in the hardening_manifest.yaml file.
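
As a rough local approximation of this check, you can verify a downloaded resource against the sha256 checksum declared in your hardening_manifest.yaml (the URL, file name, and checksum below are placeholders):

# download a resource and verify it against its declared sha256 checksum
curl -LO https://<upstream-host>/<resource-archive>.tar.gz
echo "<sha256-from-hardening_manifest>  <resource-archive>.tar.gz" | sha256sum --check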

build / build-amd64

The container build process takes place in an offline environment. The only resources the build environment has access to are the resources listed in the project's hardening_manifest.yaml, images from Registry1, and the following proxied package managers:

  • PyPI, for Python-based containers using the pip package manager.
  • The Go module proxy, for Go-based containers.
  • npm, for Node.js images.
  • RubyGems, for Ruby projects that use gem and bundler.

Any attempts to reach out to the internet will be rejected by the cluster's egress policy and the build stage will fail.
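
For example, a build step that tries to reach an arbitrary internet host will fail, while installs that go through the proxied package managers succeed. The commands below are an illustrative sketch, not the pipeline's actual configuration:

# fails: arbitrary egress is blocked by the cluster's egress policy
curl https://example.com/some-tool.tar.gz

# works: pip resolves packages through the proxied PyPI mirror available to the build environment
pip install requests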

For more insight into the build stage, please refer to the following documentation from the Container Hardening pipeline project.

build / build-arm64 (optional)

This job builds an arm64 image. It is only generated by the pipeline when a Dockerfile.arm64 is present in the project. It is configured to work the same way as the build-amd64 job and builds in the same offline context.

post-build / create-tar

The create-tar job outputs a Docker archive of the image as a tarball. This tarball can be downloaded and loaded locally to test the built image:

docker load -i <path-to-archive>
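
Once loaded, the image can be run like any other local image. The image name below is a placeholder; docker load prints the actual name when it completes:

# run the image that was just loaded (replace with the name printed by docker load)
docker run --rm -it <loaded-image-name>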

post-build / create-sbom

The create-sbom job makes use of Anchore's Syft to generate SPDX, CycloneDX, and Syft JSON reports.
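
If you want to produce similar reports locally, Syft can generate the same formats against an image. The image name is a placeholder, and the exact flags the pipeline passes may differ:

# generate SBOMs in the three report formats mentioned above
syft <image>:<tag> -o spdx-json > sbom-spdx.json
syft <image>:<tag> -o cyclonedx-json > sbom-cyclonedx.json
syft <image>:<tag> -o syft-json > sbom-syft.json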

pre-scan / scan-logic

This stage parses the access log generated by the build job for each built architecture, along with the corresponding Syft JSON report. The pipeline diffs the parsed outputs to determine whether a new build resulted in any changes to the software in the image.
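
Conceptually, this is similar to diffing the package lists from two Syft JSON reports, as in the rough sketch below (not the pipeline's actual implementation; the field names follow the Syft JSON schema):

# extract name/version pairs from two Syft JSON reports and diff them
jq -r '.artifacts[] | "\(.name) \(.version)"' old-sbom.json | sort > old-packages.txt
jq -r '.artifacts[] | "\(.name) \(.version)"' new-sbom.json | sort > new-packages.txt
diff old-packages.txt new-packages.txt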

scan stage overview

These scans run to ensure any vulnerabilities in the image are accounted for (such as dependency security flags). The scan results are generated so an auditor can manually review potential security flags raised in the scan.

Should the auditor determine the scan results are satisfactory, the container(s) will be published to Iron Bank.

The pipeline begins by pulling the built container image from Registry1, and proceeds to run a series of scans on the container:

scan / openscap-compliance

The OpenSCAP compliance scan enables us to provide configuration scans of container images. The security profiles we use to scan container images conform to the DISA STIGs for Red Hat Enterprise Linux 8 and 9.
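
To run a comparable scan locally against a RHEL-based image, you can use oscap-podman with the SCAP Security Guide content. The image name is a placeholder, the data stream path assumes the scap-security-guide package is installed, and the pipeline's exact invocation may differ:

# evaluate the STIG profile from the SCAP Security Guide against a local container image
oscap-podman <image>:<tag> xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_stig \
  --report openscap-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml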

scan / twistlock-scan-amd64/arm64

The Twistlock scan checks container images for vulnerabilities. Please visit Palo Alto Networks for more information.

scan / anchore-scan

The anchore-scan job runs vulnerability, compliance, and malware scans.

The anti-virus/anti-malware scanning is performed by the open source tool ClamAV. ClamAV has a database of viruses and malware which is updated daily.
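
For a rough local equivalent of the anti-virus portion, ClamAV can be run directly against an unpacked image filesystem. The path below is a placeholder, and the pipeline drives ClamAV through the anchore-scan job rather than the standalone CLI:

# update the ClamAV signature database, then recursively scan a directory, printing only infected files
freshclam
clamscan --recursive --infected <path-to-unpacked-image-filesystem>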

Please contact Anchore directly for an enterprise license or more information. You can visit the Anchore documentation to get started and learn more.

pre-publish / vat

The VAT stage uses pipeline artifacts from the scan stage jobs to populate the Vulnerability Assessment Tracker (VAT) at vat.dso.mil. Users who maintain the hardening of an image use VAT to track the known findings for an image, and provide justifications for findings that are not able to be mitigated. The VAT also calculates the Acceptance Baseline Criteria (ABC) status and Overall Risk Assessment (ORA) score.

publish / check-cves

This stage utilizes the API response artifact from the VAT stage to log all findings which have not been justified or reviewed. With the release of ABC/ORA, this job is now for informational purposes, alerting project maintainers to new or unjustified vulnerabilities.

Updating Staging Attestations

Images built from the development and feature branches do not have their VAT predicate attestations uploaded by default. However, there are cases where you want to use an image from staging (built from the development branch or a feature branch) as your base (parent) image. In that case the pipeline will fail, because the VAT attestation predicates from the parent image are needed for an accurate attestation chain of the child image.

To enable using a staging parent image:

Run the parent image pipeline on the protected non-master branch with the PUBLISH_VAT_STAGING_PREDICATES CI variable set to "True".

Note that only maintainers of a project can make a branch protected.

When that pipeline completes, run the child image pipeline with the STAGING_BASE_IMAGE CI variable set to "True". Your child pipeline will now be able to pass the VAT stage because it can download the VAT attestations of the parent image.
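
One way to set these variables is when triggering the pipelines, for example through GitLab's pipeline trigger API as sketched below. The project IDs, trigger token, and branch names are placeholders; the variables can also be set in the Run pipeline form in the GitLab UI:

# trigger the parent image pipeline on its protected branch with the staging-predicates variable set
curl --request POST \
  --form "token=<trigger-token>" \
  --form "ref=<protected-branch>" \
  --form "variables[PUBLISH_VAT_STAGING_PREDICATES]=True" \
  "https://repo1.dso.mil/api/v4/projects/<parent-project-id>/trigger/pipeline"

# after the parent pipeline completes, trigger the child image pipeline with STAGING_BASE_IMAGE set
curl --request POST \
  --form "token=<trigger-token>" \
  --form "ref=<child-branch>" \
  --form "variables[STAGING_BASE_IMAGE]=True" \
  "https://repo1.dso.mil/api/v4/projects/<child-project-id>/trigger/pipeline"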

Master branch only jobs

Info

As mentioned above, the following stages of the pipeline will only run on master branches.

publish / generate-documentation

The generate-documentation job generates CSV files for the various scans, as well as the justifications spreadsheet. The generated files can be found in the artifacts for this job. Members of the Container Hardening team, vendors, and contributors use the justifications spreadsheet as the list of findings for the container they are working on. This job also generates the JSON documentation and scan results documentation, which are made available through the job's artifacts.

publish / harbor

Pushes the built images, SBOM files, and the VAT response file (as an attestation) to registry1.dso.mil/ironbank. The image and SBOM attachment are signed using Cosign. See the ironbank-pipeline cosign documentation for instructions on how to perform image validation and how to download the .sbom, .att, and .sig artifacts.
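
As a sketch of what consumer-side validation can look like, Cosign can verify an image signature against the Iron Bank public key. The key file and image name below are placeholders; see the cosign documentation referenced above for the authoritative steps:

# verify the image signature with cosign using the Iron Bank public key
cosign verify --key <ironbank-cosign-public-key.pub> registry1.dso.mil/ironbank/<image>:<tag>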

post-publish / manifest

The manifest job creates a signed manifest list in Harbor, which allows the pipeline to support multi-arch images.
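
You can inspect the per-architecture entries of a published multi-arch image with standard tooling; the image name below is a placeholder:

# show the per-architecture entries in the published manifest list
docker manifest inspect registry1.dso.mil/ironbank/<image>:<tag>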

post-publish / upload-to-s3

This job uploads documentation artifacts to S3, which are displayed/utilized by the Iron Bank website (e.g. scan reports, project README, project LICENSE).