Build and Push to GHCR
This topic explains how to use the Build and Push an image to Docker Registry step to build and push an image to GitHub Container Registry.
You need:
- Access to GHCR.
- A Harness CI pipeline with a Build stage.
- A Docker connector.
Kubernetes cluster build infrastructures require root access
With Kubernetes cluster build infrastructures, Build and Push steps use kaniko. Other build infrastructures use drone-docker. Kaniko requires root access to build the Docker image. It doesn't support non-root users.
If your build runs as non-root (`runAsNonRoot: true`) and you want to run the Build and Push step as root, you can set Run as User to `0` on the Build and Push step to use the root user for that individual step only.
If your security policy doesn't allow running as root, go to Build and push with non-root users.
Build and push to GitHub Container Registry
In your pipeline's Build stage, add a Build and Push an image to Docker Registry step and configure the settings for GHCR.
Here is a YAML example of a Build and Push an image to Docker Registry step configured for GHCR:
```yaml
- step:
    type: BuildAndPushDockerRegistry
    name: Build and push to GHCR
    identifier: Build_and_push_to_GHCR
    spec:
      connectorRef: YOUR_DOCKER_CONNECTOR_ID
      repo: ghcr.io/NAMESPACE/IMAGE
      tags:
        - <+pipeline.sequenceId>
```
When you run a pipeline, you can observe the step logs on the build details page. If the Build and Push step succeeds, you can find the uploaded image in GHCR.
Build and Push to Docker step settings for GHCR
These sections explain how to configure the Build and Push an image to Docker Registry step settings for GHCR. Depending on the build infrastructure, some settings might be unavailable or optional. Settings specific to containers, such as Set Container Resources, are not applicable when using a VM or Harness Cloud build infrastructure.
Name
Enter a name summarizing the step's purpose. Harness automatically assigns an Id (Entity Identifier) based on the Name. You can change the Id until the step is saved. Once saved, the Id can't be changed.
Docker Connector
Specify a Harness Docker Registry connector configured for GHCR.
To create this connector:
- Go to Connectors in your Harness project, organization, or account resources, and select New Connector.
- Select Docker Registry under Artifact Repositories.
- Enter a Name for the connector. The Description and Tags are optional.
- For Provider Type, select Other.
- In Docker Registry URL, enter your GHCR hostname and namespace, such as `https://ghcr.io/NAMESPACE`. The namespace is the name of a GitHub personal account or organization.
- In the Authentication settings, you must use Username and Password authentication.
  - Username: Enter your GitHub username.
  - Password: Select a Harness text secret containing a classic personal access token with permission to publish, install, and delete private, internal, and public packages. For more information, go to Authenticating to the Container Registry.
- Complete any other settings and save the connector. For information about all Docker Registry connector settings, go to the Docker connector settings reference.
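If you work in YAML, a Docker Registry connector configured for GHCR might look like the following sketch. The name, identifier, username, and secret reference are placeholders; `YOUR_GHCR_PAT_SECRET` is assumed to be a Harness text secret containing your classic personal access token:

```yaml
connector:
  name: GHCR
  identifier: ghcr_connector
  type: DockerRegistry
  spec:
    # GHCR hostname plus your GitHub account or organization namespace
    dockerRegistryUrl: https://ghcr.io/NAMESPACE
    providerType: Other
    auth:
      type: UsernamePassword
      spec:
        username: YOUR_GITHUB_USERNAME
        passwordRef: YOUR_GHCR_PAT_SECRET
```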
Docker Repository
The namespace where you want to store the image and the image name, for example, `ghcr.io/NAMESPACE/IMAGE_NAME`. For more information, go to the GitHub documentation on Pushing container images.
Tags
Add Docker build tags. This is equivalent to the `-t` flag. For more information, go to the GitHub documentation on Pushing container images.
Add each tag separately.
When you push an image to a repo, you tag the image so you can identify it later. For example, in one pipeline stage, you push the image, and, in a later stage, you use the image name and tag to pull it and run integration tests on it.
Harness expressions are a useful way to define tags. For example, you can use the expression `<+pipeline.sequenceId>` as a tag. This expression represents the incremental build identifier, such as `9`. By using a variable expression, rather than a fixed value, you don't have to use the same image tag every time.

For example, if you use `<+pipeline.sequenceId>` as a tag, after the pipeline runs, you can see the Build Id in the output, and you can see where the Build Id is used to tag your image in the container registry.

You can use the same expression to pull the tagged image, such as `namespace/myimage:<+pipeline.sequenceId>`.
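For example, here is a sketch of a `tags` list that applies both a fixed tag and the incremental build identifier to the same image:

```yaml
tags:
  - latest
  - <+pipeline.sequenceId>
```

When the pipeline runs, you can then pull the image by either tag.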
Optimize
With Kubernetes cluster build infrastructures, select this option to enable `--snapshotMode=redo`. This setting causes file metadata to be considered when creating snapshots, and it can reduce the time it takes to create snapshots. For more information, go to the kaniko documentation for the snapshotMode flag.
For information about setting other kaniko runtime flags, go to Environment variables.
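In YAML, this checkbox corresponds to an `optimize` flag in the step spec. A minimal sketch showing only the relevant field:

```yaml
spec:
  # Enables kaniko's --snapshotMode=redo on Kubernetes cluster build infrastructures
  optimize: true
```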
Dockerfile
The name of the Dockerfile. If you don't provide a name, Harness assumes that the Dockerfile is in the root folder of the codebase.
Context
Enter a path to a directory containing files that make up the build's context. When the pipeline runs, the build process can refer to any files found in the context. For example, a Dockerfile can use a `COPY` instruction to reference a file in the context.

Kaniko, which is used by the Build and Push step with Kubernetes cluster build infrastructures, requires root access to build the Docker image. If you have not already enabled root access, you will receive the following error:

```
failed to create docker config file: open /kaniko/.docker/config.json: permission denied
```
If your security policy doesn't allow running as root, go to Build and push with non-root users.
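The Dockerfile and Context settings map to the `dockerfile` and `context` fields in the step spec. A hedged sketch, where the paths are placeholders relative to the codebase root:

```yaml
spec:
  # Use a Dockerfile that isn't at the root of the codebase
  dockerfile: docker/Dockerfile.prod
  # Directory whose files are available to COPY instructions during the build
  context: docker
```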
Labels
Specify Docker object labels to add metadata to the Docker image.
Build Arguments
The Docker build-time variables. This is equivalent to the `--build-arg` flag.
Target
The Docker target build stage, equivalent to the `--target` flag, such as `build-env`.
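The Labels, Build Arguments, and Target settings above map to key-value maps and a string in the step spec. A hedged sketch with placeholder values:

```yaml
spec:
  labels:
    maintainer: YOUR_TEAM
  buildArgs:
    # Equivalent to --build-arg APP_VERSION=<build number>
    APP_VERSION: <+pipeline.sequenceId>
  # Equivalent to --target build-env
  target: build-env
```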
Docker layer caching and Remote cache image
There are two ways to leverage Docker layer caching: enable Docker layer caching (the `caching` property) or use a remote cache image (the `remoteCacheRepo` property). Refer to Enable Docker layer caching for your build to learn more.
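A hedged sketch of the two properties named above, with a placeholder cache repository:

```yaml
spec:
  # Option 1: enable Docker layer caching
  caching: true
  # Option 2: store cached layers in a dedicated remote cache image
  # remoteCacheRepo: ghcr.io/NAMESPACE/IMAGE-cache
```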
Environment Variables (plugin runtime flags)
Build and Push steps use plugins to complete build and push operations. With Kubernetes cluster build infrastructures, these steps use kaniko, and, with other build infrastructures, these steps use drone-docker.
These plugins have a number of additional runtime flags that you might need for certain use cases. For information about the flags, go to the kaniko plugin documentation and the drone-docker plugin documentation.
How you configure plugin runtime flags depends on your build infrastructure. For details, go to:

- Set plugin runtime flags with Kubernetes cluster build infrastructure
- Set plugin runtime flags with other build infrastructures
- Mounting Docker Secrets
- Using Local Tar Output
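With a Kubernetes cluster build infrastructure, plugin runtime flags are typically supplied as stage variables named with the `PLUGIN_` prefix, which Harness exposes to the step as environment variables. A hedged sketch (the flag name comes from the kaniko plugin documentation; the value is only an example):

```yaml
- stage:
    name: Build
    identifier: Build
    type: CI
    variables:
      # Becomes the kaniko plugin's custom-dns runtime flag
      - name: PLUGIN_CUSTOM_DNS
        type: String
        value: "8.8.8.8"
```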
Run as User
With Kubernetes cluster build infrastructures, you can specify the user ID used to run all processes in the pod if running in containers. For more information, go to Set the security context for a pod.

This step requires root access. If your build runs as non-root (`runAsNonRoot: true`), you can use the Run as User setting to run the Build and Push step as root: set Run as User to `0` to use the root user for this individual step only.
If your security policy doesn't allow running as root, go to Build and push with non-root users.
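In YAML, this setting corresponds to `runAsUser` in the step spec. A minimal sketch showing only the relevant field (note that the value is quoted):

```yaml
- step:
    type: BuildAndPushDockerRegistry
    name: Build and push to GHCR
    identifier: Build_and_push_to_GHCR
    spec:
      # Run only this step as the root user
      runAsUser: "0"
```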
Set Container Resources
Set maximum resource limits for the resources used by the container at runtime:
- Limit Memory: The maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number using the suffixes `G` or `M`. You can also use the power-of-two equivalents `Gi` and `Mi`. The default is `500Mi`.
- Limit CPU: The maximum number of cores that the container can use. CPU limits are measured in CPU units. Fractional requests are allowed; for example, you can specify one hundred millicpu as `0.1` or `100m`. The default is `400m`.

For more information, go to Resource units in Kubernetes.
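In YAML, these limits appear under `resources.limits` in the step spec. A hedged sketch (the values are examples, not recommendations):

```yaml
spec:
  resources:
    limits:
      memory: 1Gi
      cpu: "1"
```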
Timeout
Set the timeout limit for the step. Once the timeout limit is reached, the step fails and pipeline execution continues. To set skip conditions or failure handling for steps, use the Advanced tab settings described below.

Conditions, looping, and failure strategies
You can find the following settings on the Advanced tab in the step settings pane:
- Conditional Execution: Set conditions to determine when/if the step should run.
- Failure Strategy: Control what happens to your pipeline when a step fails.
- Use looping strategies: Define a matrix, repeat, or parallelism strategy for an individual step.
Troubleshoot Build and Push steps
Go to the CI Knowledge Base for questions and issues related to building and pushing images, such as:
- What drives the Build and Push steps? What is kaniko?
- Does a kaniko build use images cached locally on the node? Can I enable caching for kaniko?
- Can I run Build and Push steps as root if my build infrastructure runs as non-root? What if my security policy doesn't allow running as root?
- Can I set kaniko and drone-docker runtime flags, such as skip-tls-verify or custom-dns?
- Can I push without building?
- Can I build without pushing?
- Is remote caching supported in Build and Push steps?
- Why doesn't the Build and Push step include the content of VOLUMES from my Dockerfile in the final image?
- Can I use a specific version of kaniko or drone-docker?
- How do I fix this kaniko container runtime error: kaniko should only be run inside of a container?
- Can I push and pull from two different Docker registries that have the same prefix for the registry URL?
- Why does the parallel execution of build and push steps fail when using Buildx on Kubernetes?
- Why do Build and Push steps fail with "Error while loading buildkit image: exit status 1" when /var/lib/docker is included in shared paths during DIND execution?