# Testing Manifest

## Overview
Iron Bank can test images within CI/CD pipelines. A `testing_manifest.yaml` file defines the tests. There are three types of tests: docker, kubernetes, and bb (Big Bang). To run tests:

- request that the `ENABLE_FUNCTIONAL_TEST*` and/or `ENABLE_BB_TEST` CI/CD variable(s) be set
- create `testing_manifest.yaml` at the top (root) of the project
## Full Testing Manifest Example
```yaml
docker:
  - name: File Permissions
    description: File Permissions
    commands:
      - command: ls -l /etc/passwd
        timeout_seconds: 30
      - command: ls -l /etc/shadow
        timeout_seconds: 30
  - name: User Accounts
    description: User Accounts
    commands:
      - command: cat /etc/passwd
        timeout_seconds: 30
  - name: Check file contents
    description: file contents sanity check
    commands:
      - command: /usr/bin/important_command --test
        timeout_seconds: 30
        expected_output: "correct output"
kubernetes:
  livenessProbe:
    tcpSocket:
      port: 80
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    exec:
      command:
        - cat
        - /tmp/healthy
    initialDelaySeconds: 30
    timeoutSeconds: 5
    periodSeconds: 10
    failureThreshold: 5
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "512Mi"
      cpu: "200m"
  ports:
    - name: https
      containerPort: 443
      protocol: TCP
  env:
    - name: VAR_TO_SET
      value: "value"
  command: ["/bin/sh", "-c"]
  args: ["ls"]
bb:
  - repo: big-bang/product/packages/harbor
    project_id: 3964
    values_overrides:
      portal:
        image:
          repository: "[[registry]]/[[repo]]"
          tag: "[[tag]]"
```
## Tests

### docker
The `docker` section in `testing_manifest.yaml` runs commands inside the image. A test succeeds when each command completes within `timeout_seconds` and, if `expected_output` is set, the command output contains that string.

Example `testing_manifest.yaml` docker tests:
```yaml
docker:
  - name: Check important_command output
    description: Runs important_command --test and validates that the output contains 'correct output'
    commands:
      - command: /usr/bin/important_command --test
        timeout_seconds: 30
        expected_output: "correct output"
```
The above runs a single test with a single command: `/usr/bin/important_command --test`. The test validates that the log output of the command contains the string "correct output" and that the command completes within 30 seconds.
#### docker fields
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | yes | A name for the test. |
| `description` | string | yes | A description for the test. |
| `commands` | list | yes | A list of commands. |
#### command fields
| Field | Type | Required | Description |
|---|---|---|---|
| `command` | string | yes | The command to run via "kubectl run". |
| `timeout_seconds` | number | no | The maximum number of seconds to wait for the command to complete. Default is 45 seconds. If the timeout is exceeded, the test fails. |
| `expected_output` | string | no | A string that must exist inside the kubectl log output. If the string does not exist, the test fails. |
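The pipeline's internal test runner is not shown in this document, but as a rough illustration of the semantics described above, a docker test can be modeled as running the command with a timeout and checking its output for the expected substring (the function and default value here are illustrative, not the actual implementation):

```python
import subprocess

def run_docker_test(command, timeout_seconds=45, expected_output=None):
    """Illustrative model of a docker test: the test passes when the
    command finishes within timeout_seconds and, if expected_output is
    set, the combined output contains that substring."""
    try:
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True,
            timeout=timeout_seconds,  # exceeding the timeout fails the test
        )
    except subprocess.TimeoutExpired:
        return False
    if expected_output is not None:
        return expected_output in (result.stdout + result.stderr)
    return True

# Passes: the output contains the expected substring within the timeout.
print(run_docker_test("echo correct output", 30, "correct output"))
```

Note that, as documented above, `expected_output` is a substring match against the log output, not an exact-match comparison.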
### FIPS testing
Iron Bank now supports automated FIPS readiness testing for Repo1 containers. The tests run automatically in the container pipeline; any failure stops the pipeline and highlights the issue. The tests run on hosts whose kernels have FIPS mode enabled (see the section below for adding the FIPS variables to the CI/CD settings). Repositories with 'fips' in the name must implement these tests.

Using FIPS-ready containers is the first step toward building fully FIPS-compliant systems. Once a full cluster is running, customers can perform their own FIPS compliance testing.
#### Enabling FIPS testing
**STEP 1:**

Create or update `testing_manifest.yaml` in the repo root (next to `hardening_manifest.yaml`). This file defines the commands that verify FIPS is enabled in the container. Below is an example file that performs two checks. You should use these two checks, and add any others needed to verify FIPS is configured and running as desired in the container.

In the example below, the first test reads `/proc/cmdline` and checks for `fips=1` in the output. The string `fips=1` must appear somewhere in the output, or the test fails (the expected output must appear somewhere in the output of the command being run).

The second test checks `/proc/sys/crypto/fips_enabled` for a `1`.

You can add other similar tests to verify FIPS is configured and enabled for your container. If you create a test script or program that checks FIPS, you can include it in the container and run it from here as well.

NOTE: The output from these tests appears in the pipeline logs under the pipeline's "scan" stage, in the "functional-testing-amd64-fips" and/or "functional-testing-arm64-fips" jobs.
```yaml
docker:
  - name: cat /proc/cmdline
    description: See kernel cmdline parameters. Should see fips=1
    commands:
      - command: cat /proc/cmdline
        timeout_seconds: 60
        expected_output: "fips=1"
  - name: /proc/sys/crypto/fips_enabled
    description: checking for fips enabled. file should exist and contain a "1"
    commands:
      - command: cat /proc/sys/crypto/fips_enabled
        timeout_seconds: 60
        expected_output: "1"
```
**STEP 2:**

Add pipeline variables to the repository. These variables instruct the Iron Bank pipeline to use a FIPS-enabled kernel for testing.

- With the appropriate permissions, go to the repository's project -> Settings -> CI/CD -> Variables, and select Add Variable (once for each variable)
- Uncheck "Protect variable" so that testing happens on all branches in the repo
- Key: use a key from the table below
- Value: `true`
- Select Add variable (repeat these steps for the second variable)
| Variable | Value |
|---|---|
| `ENABLE_FUNCTIONAL_TEST_AMD64_FIPS` | true |
| `ENABLE_FUNCTIONAL_TEST_ARM64_FIPS` | true |
Remove these existing variables if present:
| Variable | Value |
|---|---|
| `ENABLE_FUNCTIONAL_TEST_AMD64` | true |
| `ENABLE_FUNCTIONAL_TEST_ARM64` | true |
With the above successfully in place and committed to Repo1, every pipeline run will show results in the scan stage. As stated previously, the tests must pass, or all further parts of the pipeline will be skipped. This ensures that only images which pass the FIPS tests are uploaded during the 'publish' stage.
### kubernetes
The `kubernetes` section in `testing_manifest.yaml` tests the image as a Kubernetes pod. Kubernetes tests include options such as probes and resources.

Example `testing_manifest.yaml` kubernetes tests:
```yaml
kubernetes:
  livenessProbe:
    tcpSocket:
      port: 80
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    exec:
      command:
        - cat
        - /tmp/healthy
    initialDelaySeconds: 30
    timeoutSeconds: 5
    periodSeconds: 10
    failureThreshold: 5
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "512Mi"
      cpu: "200m"
  ports:
    - name: https
      containerPort: 443
      protocol: TCP
  env:
    - name: VAR_TO_SET
      value: "value"
  command: ["/bin/sh", "-c"]
  args: ["ls"]
```
The above will start the image with the command specified (`/bin/sh -c ls`) and ensure that the pod starts and completes successfully.
#### kubernetes fields
| Field | Type | Required | Description |
|---|---|---|---|
| `command` | list | no | Pod spec container command config. |
| `args` | list | no | Pod spec container args config. |
| `ports` | list | no | Pod spec container ports config. |
| `livenessProbe` | dict | no | Pod spec container livenessProbe config. |
| `readinessProbe` | dict | no | Pod spec container readinessProbe config. |
| `resources` | dict | no | Pod spec container resources config. |
| `env` | list | no | Pod spec container env config. |
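The fields above mirror the fields of the same names in a Kubernetes container spec. The exact pod template the pipeline generates is not shown in this document, but as an illustration, the section can be thought of as being merged into a container definition roughly like this (the image reference and container name are placeholders):

```python
# Illustrative only: how the kubernetes section of testing_manifest.yaml
# maps onto the container entry of a pod spec.
manifest_kubernetes = {
    "command": ["/bin/sh", "-c"],
    "args": ["ls"],
    "env": [{"name": "VAR_TO_SET", "value": "value"}],
    "resources": {
        "requests": {"memory": "256Mi", "cpu": "100m"},
        "limits": {"memory": "512Mi", "cpu": "200m"},
    },
}

def build_container_spec(image, k8s_section):
    """Copy the supported testing_manifest fields into a container spec."""
    container = {"name": "test", "image": image}
    for field in ("command", "args", "ports", "env",
                  "livenessProbe", "readinessProbe", "resources"):
        if field in k8s_section:
            container[field] = k8s_section[field]
    return container

spec = build_container_spec("registry1.dso.mil/ironbank/example:1.0",
                            manifest_kubernetes)
print(spec["command"])  # ['/bin/sh', '-c']
```

Because each field is optional, omitted fields simply fall back to the pod defaults.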
### bb
The `bb` section in the `testing_manifest.yaml` file specifies a set of Big Bang system integration tests to run.

These tests trigger a pipeline against the specified big-bang product repository, running the Big Bang system integration tests with the newly built images against the development branch.

Big Bang tests install a helm chart for the product being tested and then run through a series of tests to ensure that the product installed and started successfully. In some cases they run additional tests, such as Cypress UI tests, to perform a deeper validation of the product.

Example `testing_manifest.yaml` Big Bang tests:
```yaml
bb:
  - repo: big-bang/product/packages/harbor
    project_id: 3964
    values_overrides:
      portal:
        image:
          repository: "[[registry]]/[[repo]]"
          tag: "[[tag]]"
```
The above will run the harbor repo (3964) pipeline to kick off the system integration tests for this product. This validates that the product helm chart installs successfully and, in the case of the Harbor product, also launches Cypress tests that attempt to log in to the Harbor web interface and perform tasks such as creating and deleting projects. Each Big Bang product has a unique set of tests that run.
The magic string variable `[[tag]]` is required to exist in the YAML above. The other magic strings, `[[registry]]` and `[[repo]]`, are optional, depending on what needs to be overridden. The tag must always be overridden, and sometimes the repo must be overridden to point at ironbank-staging instead of ironbank.

The Iron Bank pipeline substitutes these magic strings with data specific to the image being tested.

The `values_overrides` field must be set correctly so that the image being tested is correctly pulled into the deployment during the helm upgrade.
#### magic variables
The following magic string variables are currently supported.
| Variable | Required | Substitution Example |
|---|---|---|
| `[[tag]]` | yes | registry1.dso.mil/ironbank/opensource/istio/operator:v1.2.3 -> v1.2.3 |
| `[[registry]]` | no | registry1.dso.mil/ironbank/opensource/istio/operator:v1.2.3 -> registry1.dso.mil |
| `[[repo]]` | no | registry1.dso.mil/ironbank/opensource/istio/operator:v1.2.3 -> ironbank/opensource/istio/operator |
| `[[repo_namespace]]` | no | registry1.dso.mil/ironbank/opensource/istio/operator:v1.2.3 -> ironbank/opensource/istio |
| `[[repo_parent_namespace]]` | no | registry1.dso.mil/ironbank/opensource/istio/operator:v1.2.3 -> ironbank/opensource |
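The substitutions in the table can be derived mechanically from the full image reference. A small sketch of that parsing (the pipeline's actual implementation is not shown here):

```python
def magic_values(image_ref):
    """Derive the magic-string substitutions from a full image reference,
    e.g. registry1.dso.mil/ironbank/opensource/istio/operator:v1.2.3."""
    rest, _, tag = image_ref.rpartition(":")         # split off the tag
    registry, _, repo = rest.partition("/")          # first segment is the registry
    namespace = repo.rsplit("/", 1)[0]               # drop the image name
    parent_namespace = namespace.rsplit("/", 1)[0]   # drop one more level
    return {
        "[[tag]]": tag,
        "[[registry]]": registry,
        "[[repo]]": repo,
        "[[repo_namespace]]": namespace,
        "[[repo_parent_namespace]]": parent_namespace,
    }

values = magic_values("registry1.dso.mil/ironbank/opensource/istio/operator:v1.2.3")
print(values["[[repo]]"])  # ironbank/opensource/istio/operator
```

Replacing each magic string in the `values_overrides` YAML with its derived value yields the final helm chart overrides.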
#### bb fields
| Field | Type | Required | Description |
|---|---|---|---|
| `repo` | string | yes | The repo path of the Big Bang repo containing the product that you are running the test against. |
| `project_id` | number | yes | The repo ID of the Big Bang repo containing the product that you are running the test against. |
| `values_overrides` | dict | yes | An arbitrary yaml dict defining what repository and tag to supply as helm chart value overrides so that the correct image gets installed into the product during the test. |
#### Finding a Big Bang Product
It only makes sense to configure a Big Bang test if there is a product within Big Bang that utilizes the image you are building.

Big Bang products that you can test against can be found at https://repo1.dso.mil/big-bang/product/packages

Taking the Harbor Portal image as an example: this image is utilized within the Harbor product. You can validate this by confirming that it exists in the list of images in the Harbor Helm chart annotations, along with other images such as nginx, postgresql, and redis.
#### Configuring the Big Bang Test
Use the repo path and repo ID of the Big Bang product when defining a Big Bang test in your testing manifest. In this case the path is big-bang/product/packages/harbor, with an ID of 3964.

Review the Big Bang product values yaml to determine what `values_overrides` to set in the `testing_manifest.yaml` file.

In the case of our example Harbor Portal values yaml, we can see that the harbor-portal image repository and tag are specified via:
```yaml
portal:
  image:
    repository: registry1.dso.mil/ironbank/opensource/goharbor/harbor-portal
    tag: v2.12.1
```
This means that, in order for the Iron Bank pipeline to properly override the repository and tag in the Harbor product with our own image, the testing manifest must have a `values_overrides` value matching what we see there, except that the repository value is replaced with `[[registry]]/[[repo]]` (because the registry and repo are combined in one value) and the tag value is replaced with `[[tag]]`.
```yaml
values_overrides:
  portal:
    image:
      repository: "[[registry]]/[[repo]]"
      tag: "[[tag]]"
```
#### configuring and enabling your bb tests
Once the `testing_manifest.yaml` file with a configured `bb` section has been checked in to the root of the repository, the repository must be flagged to enable Big Bang tests by adding the `ENABLE_BB_TEST` variable to the repo with a value of `true`.
#### Test Results
The test runs in a job called bb-test-amd64. The job output contains a link to the Big Bang product pipeline that actually installs the product and runs the test.

Follow the link over to the pipeline. The relevant job is called clean install. Drill into this job to view test result output.

The Package Install section is where the product helm chart gets installed and validated, and the Package Test section is where any tests (if any) are run.

You can view relevant test artifacts, such as Cypress videos and screenshots (if Cypress tests are configured for the product), by exploring the job artifacts of the clean install job.

If any tests fail, the clean install job fails, which causes the bb-test-amd64 job to exit with a failure.