Build and Push to ECR

Amazon ECR is a fully managed service from AWS that you can use to store and manage Docker images securely and reliably. In addition, ECR provides a simple web-based interface for creating, managing, and sharing Docker images and integrating them with other AWS services. For more information, go to the AWS documentation on Pushing a Docker image.

In Harness CI, you can use a Build and Push to ECR step to build an image from your codebase and push it to your Amazon ECR container registry repo. This is one of several options for building and pushing artifacts in Harness CI.

You need:

  • A Harness CI pipeline with a Build stage.
  • A Harness AWS connector with permission to push to ECR.
  • An ECR repo to push the image to.

Kubernetes cluster build infrastructures require root access

With Kubernetes cluster build infrastructures, Build and Push steps use kaniko. Other build infrastructures use drone-docker. Kaniko requires root access to build the Docker image. It doesn't support non-root users.

If your build runs as non-root (runAsNonRoot: true), and you want to run the Build and Push step as root, you can set Run as User to 0 on the Build and Push step to use the root user for that individual step only.

If your security policy doesn't allow running as root, go to Build and push with non-root users.
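
For example, here is a minimal sketch of a Build and Push to ECR step that runs as root for this individual step only. In YAML, the Run as User setting corresponds to the runAsUser field under step.spec; connector, account, and image values are placeholders:

- step:
    type: BuildAndPushECR
    name: build_and_push_ecr
    identifier: build_and_push_ecr
    spec:
      connectorRef: YOUR_AWS_CONNECTOR_ID
      region: us-east-1
      account: "12345"
      imageName: test-image
      tags:
        - latest
      runAsUser: "0" # Run this individual step as the root user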

Add a Build and Push to ECR step

In your pipeline's Build stage, add a Build and Push to ECR step and configure the settings accordingly.

Here is a YAML example of a minimal Build and Push to ECR step.

- step:
    type: BuildAndPushECR
    name: BuildAndPushECR_1
    identifier: BuildAndPushECR_1
    spec:
      connectorRef: YOUR_AWS_CONNECTOR_ID
      region: us-east-1
      account: "12345"
      imageName: test-image
      tags:
        - latest

When you run a pipeline, you can observe the step logs on the build details page. If the Build and Push to ECR step succeeds, you can find the uploaded image in your ECR repo.

Handling Immutable ECR Repositories in the "Build and Push to ECR" Step

If you use immutable ECR repositories (a best practice for enhanced security), the Build and Push to ECR step fails with an AWS error when the image tag already exists in the repo. You can resolve this with the following approach.

Conditional Execution Using Crane

You can use Crane to check whether an image tag already exists in your ECR repository. By storing the result of this check in an output variable, you can use it as a condition for the Build and Push to ECR step.

Here's an example partial pipeline YAML demonstrating this approach:

- step:
    identifier: Run_2
    type: Run
    name: Run_2
    spec:
      connectorRef: account.harnessImage
      image: amazon/aws-cli
      shell: Sh
      command: |-
        #!/bin/bash
        export AWS_REGION="ap-southeast-2"

        # Set AWS credentials if not using the default profile
        export AWS_ACCESS_KEY_ID=<+secrets.getValue("AwsAccess")>
        export AWS_SECRET_ACCESS_KEY=<+secrets.getValue("AwsSecret")>

        # Get login password and authenticate crane with ECR
        PASSWD=$(aws ecr get-login-password --region $AWS_REGION)
        export LOGIN_PASSWD=$PASSWD
      outputVariables:
        - name: LOGIN_PASSWD
          type: String
          value: LOGIN_PASSWD
- step:
    identifier: Run_1
    type: Run
    name: Run_1
    spec:
      connectorRef: account.harnessImage
      image: alpine/crane:0.19.0
      shell: Sh
      command: |-
        #!/bin/bash
        export AWS_REGION="ap-southeast-2"
        export ECR_REPO="test-repo"
        export AWS_ACCOUNT_ID="012345678910"
        export IMAGE_TAG="test-tag"

        # Authenticate crane with ECR
        crane auth login $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com -u AWS -p <+execution.steps.Run_2.output.outputVariables.LOGIN_PASSWD>

        # Check if the image tag exists
        if crane digest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ECR_REPO:$IMAGE_TAG > /dev/null 2>&1; then
          echo "Image tag exists."
          export SKIP_BUILD=True
        else
          echo "Image tag does not exist."
          export SKIP_BUILD=False
        fi
      outputVariables:
        - name: SKIP_BUILD
          type: String
          value: SKIP_BUILD
- step:
    identifier: BuildAndPushECR_1
    type: BuildAndPushECR
    name: BuildAndPushECR_1
    spec:
      connectorRef: your_aws_connector
      region: ap-southeast-2
      account: "012345678910"
      imageName: test-repo
      tags:
        - test-tag
      caching: true
    when:
      stageStatus: Success
      condition: <+execution.steps.Run_1.output.outputVariables.SKIP_BUILD> == "False"

Build and Push to ECR step settings

The Build and Push to ECR step has the following settings. Depending on the stage's build infrastructure, some settings might be unavailable or optional. Settings specific to containers, such as Set Container Resources, are not applicable when using the step in a stage with VM or Harness Cloud build infrastructure.

Name

Enter a name summarizing the step's purpose. Harness automatically assigns an Id (Entity Identifier) based on the Name. You can change the Id.

AWS Connector

Select the Harness AWS connector to use to connect to ECR.

This step supports all AWS connector authentication methods (AWS access key, delegate IAM role assumption, IRSA, and cross-account access), but an additional stage variable might be required to assume IAM roles or use ARNs.

The AWS IAM roles and policies associated with the AWS account for your Harness AWS connector must allow pushing to ECR. For more information, go to the AWS connector settings reference.

If you're using Harness Cloud build infrastructure, the Connectivity Mode must be Connect through Harness Platform.

Stage variable required to assume IAM role or use ARNs

Stages with Build and Push to ECR steps must have a PLUGIN_USER_ROLE_ARN stage variable if:

  • Your AWS connector uses cross-account access (a cross-account role ARN), or
  • Your AWS connector inherits credentials from the delegate's IAM role and you want the build to assume a specific role.

To add the PLUGIN_USER_ROLE_ARN stage variable:

  1. In the Pipeline Studio, select the stage with the Build and Push to ECR step, and then select the Overview tab.
  2. In the Advanced section, add a stage variable.
  3. Enter PLUGIN_USER_ROLE_ARN as the Variable Name, set the Type to String, and then select Save.
  4. For the Value, enter the full ARN value.
    • For cross-account roles, this ARN value must correspond with the AWS connector's ARN.
    • For connectors that use the delegate's IAM role, the ARN value must identify the role you want the build pod/machine to use.
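
In pipeline YAML, the resulting stage variable looks like this sketch, which follows the same format as the stage variable examples later in this topic (the ARN value is a placeholder):

variables:
  - name: PLUGIN_USER_ROLE_ARN
    type: String
    description: ""
    required: false
    value: arn:aws:iam::012345678910:role/example-role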

Region

Define the AWS region to use when pushing the image.

The registry format for ECR is AWS_ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com, and a region is required. For more information, go to the AWS documentation on Pushing a Docker image.

Account Id

The AWS account ID to use when pushing the image. This is required.

The registry format for ECR is aws_account_id.dkr.ecr.region.amazonaws.com. For more information, go to the AWS documentation for Pushing a Docker image.

Image Name

The name of the image you are pushing. It can be any name.

Tags

Add Docker build tags. This is equivalent to the -t flag.

Add each tag separately.

tip

When you push an image to a repo, you tag the image so you can identify it later. For example, in one pipeline stage, you push the image, and, in a later stage, you use the image name and tag to pull it and run integration tests on it.

Harness expressions are a useful way to define tags. For example, you can use the expression <+pipeline.sequenceId> as a tag. This expression represents the incremental build identifier, such as 9. By using a variable expression, rather than a fixed value, you don't have to use the same image name every time.

For example, if you use <+pipeline.sequenceId> as a tag, then, after the pipeline runs, you can find the Build Id in the execution output and see that it was used to tag your image.

Later in the pipeline, you can use the same expression to pull the tagged image, such as myrepo/myimage:<+pipeline.sequenceId>.
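
For instance, here is a minimal sketch of the Tags setting in a step's spec, combining a fixed tag with the expression (the image name is a placeholder):

      imageName: test-image
      tags:
        - latest
        - <+pipeline.sequenceId>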

Base Image Connector

Select an authenticated connector to download base images from a Docker-compliant registry. If you do not specify a Base Image Connector, the step downloads base images without authentication. Specifying a Base Image Connector is recommended because unauthenticated downloads generally have a lower rate limit than authenticated downloads.

tip

When using a base image connector, pushing to or pulling from multiple Docker registries with the same URL prefix (for example, https://index.docker.io) is not supported, because the second registry's credentials overwrite the first in the Docker config file. This issue doesn't affect registries with completely unique URLs, such as separate JFrog instances. On Kubernetes build infrastructure only, this limitation does not apply to the ACR, GAR, and ECR Build and Push steps.

This setting is enabled by the feature flag CI_ENABLE_BASE_IMAGE_DOCKER_CONNECTOR. To use this flag, your delegate version must be higher than 24.07.83503.
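
In step YAML, this setting is a sketch along these lines, assuming the baseImageConnectorRefs field under step.spec (the connector ID is a placeholder):

      baseImageConnectorRefs:
        - account.YOUR_DOCKER_CONNECTOR_ID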

Optimize

With Kubernetes cluster build infrastructures, select this option to enable --snapshotMode=redo. This setting causes file metadata to be considered when creating snapshots, and it can reduce the time it takes to create snapshots. For more information, go to the kaniko documentation for the snapshotMode flag.
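
In step YAML, this option is a boolean under step.spec; a minimal sketch, assuming the optimize field:

    spec:
      optimize: true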

For information about setting other kaniko runtime flags, go to Environment variables.

Dockerfile

The name of the Dockerfile. If you don't provide a name, Harness assumes the Dockerfile is in the root folder of the codebase.

Context

Enter a path to a directory containing files that make up the build's context. When the pipeline runs, the build process can refer to any files found in the context. For example, a Dockerfile can use a COPY instruction to reference a file in the context.

Labels

Specify Docker object labels to add metadata to the Docker image.

Build Arguments

The Docker build-time variables. This is equivalent to the --build-arg flag.

Target

The Docker target build stage, equivalent to the --target flag, such as build-env.
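
Taken together, these settings map to fields under step.spec, as also shown in the larger YAML example later in this topic. A minimal sketch (paths, labels, and values are placeholders):

    spec:
      dockerfile: docker/Dockerfile
      context: .
      labels:
        maintainer: dev-team
      buildArgs:
        NODE_VERSION: "20"
      target: build-env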

Docker layer caching and Remote cache image

There are two ways in which you can leverage Docker layer caching: Enable Docker layer caching (the caching property) or Remote cache image (the remoteCacheImage property). Refer to Enable Docker layer caching for your build to learn more.
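
For example, a minimal sketch of both properties in a step's spec, using the same fields that appear in the examples elsewhere in this topic (the cache image name is a placeholder):

    spec:
      caching: true
      remoteCacheImage: app/cache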

Environment Variables (plugin runtime flags)

Build and Push steps use plugins to complete build and push operations. With Kubernetes cluster build infrastructures, these steps use kaniko, and, with other build infrastructures, these steps use drone-docker.

These plugins have a number of additional runtime flags that you might need for certain use cases. For information about the flags, go to the kaniko plugin documentation and the drone-docker plugin documentation.

How you configure plugin runtime flags depends on your build infrastructure.

Set plugin runtime flags with Kubernetes cluster build infrastructure

When using the built-in Build and Push steps with a Kubernetes cluster build infrastructure, you can use the Environment Variables setting to set kaniko plugin runtime flags.

warning

Unlike in other Harness CI steps, the Environment Variables setting in Build and Push steps only accepts the known kaniko plugin runtime flags. You must set other types of environment variables in your Dockerfile, build arguments, or as stage variables, depending on their usage and purpose in your build.

In Environment Variables, you must input a Name and Value for each variable. Format the name as PLUGIN_FLAG_NAME.

For example, to set --skip-tls-verify, add an environment variable named PLUGIN_SKIP_TLS_VERIFY and set the variable value to true.

- step:
    identifier: buildandpush
    name: buildandpush
    type: BuildAndPush---
    spec:
      ...
      envVariables:
        PLUGIN_SKIP_TLS_VERIFY: true

To build without pushing, use the no-push kaniko flag.
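
For example, a minimal sketch that sets this flag as a plugin runtime environment variable, matching the PLUGIN_NO_PUSH usage shown later in this topic:

      envVariables:
        PLUGIN_NO_PUSH: "true"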

YAML example: Build and Push step with multiple environment variables

This YAML example shows a Build and Push to GAR step with several PLUGIN environment variables.

- step:
    identifier: pushGCR
    name: push GCR
    type: BuildAndPushGAR ## Type depends on the selected Build and Push step, such as Docker, GAR, ACR, and so on.
    spec: ## Some parts of 'step.spec' vary by Build and Push step type (Docker, GAR, ACR, etc).
      connectorRef: GCR_CONNECTOR
      host: "us.gcr.io"
      projectID: "some-gcp-project"
      imageName: "some-image-name"
      tags:
        - "1.0"
        - "1.2"
      buildArgs:
        foo: bar
        hello: world
      labels:
        foo: bar
        hello: world
      target: dev-env
      context: "."
      dockerfile: "harnessDockerfile"
      remoteCacheImage: "test/cache"
      envVariables: ## Specify plugin runtime flags as environment variables under 'step.spec'.
        PLUGIN_TAR_PATH: ./harnesstarpath
        PLUGIN_IMAGE_DOWNLOAD_RETRY: "2"
        PLUGIN_COMPRESSED_CACHING: "false"
        PLUGIN_USE_NEW_RUN: "true"
        PLUGIN_GARBAGE: yoyo

Stage variables

Previously, you could set some kaniko runtime flags as stage variables. If you had done this and you are using Kubernetes cluster build infrastructure, then Harness recommends moving these kaniko plugin stage variables to the Environment Variables in your Build and Push step. Don't change non-kaniko plugin variables, such as PLUGIN_USER_ROLE_ARN.

For other types of environment variables (that aren't Build and Push plugin runtime flags), stage variables are still inherently available to steps as environment variables. However, where you declare environment variables depends on their usage and purpose in your build. You might need to set them in your Dockerfile, build args, or otherwise.

Set plugin runtime flags with other build infrastructures

With Harness Cloud, self-managed VM, or local runner build infrastructures, you can set some drone-docker plugin runtime flags as stage variables.

Currently, Harness supports the following drone-docker flags:

  • auto_tag: Enable auto-generated build tags.
  • auto_tag_suffix: Auto-generated build tag suffix.
  • custom_labels: Additional arbitrary key-value labels.
  • artifact_file: Harness uses this to show links to uploaded artifacts on the Artifacts tab.
  • dry_run: Disables pushing to the registry. Used to build without pushing.
  • custom_dns: Provide your custom DNS address.

To set these flags in your Build and Push steps, add stage variables formatted as PLUGIN_FLAG_NAME.

For example, to set custom_dns, add a stage variable named PLUGIN_CUSTOM_DNS and set the variable value to your custom DNS address.

variables:
  - name: PLUGIN_CUSTOM_DNS
    type: String
    description: ""
    required: false
    value: "vvv.xxx.yyy.zzz"

Mounting Docker Secrets

Harness now allows mounting Docker build secrets securely in 'Build and Push' steps. This feature enables you to pass sensitive data such as credentials or configuration files during Docker builds, either as environment variables or file-based secrets. It ensures secure handling of secrets, reducing the risk of exposing sensitive information.

note
  • This feature is currently configurable only through YAML.
  • In Kubernetes, unlike other build infrastructures (e.g., Harness Cloud), "Build and Push" steps default to Kaniko rather than Buildx. To enable this feature in Kubernetes, you must enable the feature flag CI_USE_BUILDX_ON_K8. Additionally, note that Kubernetes build infrastructure using Buildx requires privileged access.
YAML example: Mounting Docker secrets

This example demonstrates how to configure a Build and Push step with Docker secrets passed as both environment variables and file-based secrets:

- step:
    identifier: buildAndPush
    type: BuildAndPushDockerRegistry
    name: Build and Push Docker Image
    spec:
      connectorRef: dockerConnector
      repo: dockerRepo/imageName
      tags:
        - ci-<+pipeline.executionId>
      envDockerSecrets:
        a_user: USERNAME # Environment variable in format of key:value
        a_pass: PASSWORD
      fileDockerSecrets:
        docker_user2: <+secrets.getValue("myusername")> # File secret defined in Harness
        docker_pass2: <+secrets.getValue("mydockerpass")>
        docker_user3: /harness/test.txt # Path to a local file in the workspace containing the secret
      caching: true

The envDockerSecrets field allows you to define environment variables to securely pass sensitive information to the Docker build process.

  • Key: The name of the environment variable that will be exposed to the Docker build process.
  • Value: The secret value associated with the key. This can either be a plain text string or a reference to a secret managed securely in Harness.

The fileDockerSecrets field allows you to mount secrets as files into the Docker build process. This is useful for passing configuration files, certificates, or other file-based sensitive data.

  • Key: The name of the secret as it will be referenced during the Docker build.
  • Value: The path to the file or a dynamic reference to a secret in Harness that will be mounted as a file.
Using Local Tar Output

In scenarios where pushing a Docker image to a registry is not feasible, you can generate a local tarball of the built image instead. This approach is particularly useful for situations like local testing or when registry access is unavailable during the build process.

Once the tarball is generated, you can use a Security Testing Orchestration (STO) step, such as Aqua Trivy, to scan the image for vulnerabilities. This workflow ensures that images are built and scanned effectively, even without access to a remote registry.

Here’s a sample partial pipeline that demonstrates how to build the image, generate the tarball, and push it to the registry:

- step:
    type: BuildAndPushDockerRegistry
    name: BuildAndPushDockerRegistry_1
    identifier: BuildAndPushDockerRegistry_1
    spec:
      connectorRef: docker_connector
      repo: dockerhub/image_name
      tags:
        - linux-amd64
      caching: false
      dockerfile: ./docker/Dockerfile
      envVariables:
        PLUGIN_TAR_PATH: /harness/image_name.tar
- step:
    type: Run
    name: Run_2
    identifier: Run_2
    spec:
      shell: Sh
      command: ls /harness

The PLUGIN_NO_PUSH: "true" environment variable prevents the image from being pushed to the registry. Here’s a sample partial pipeline that demonstrates how to build the image and generate the tarball, but skip pushing it to the registry:

- step:
    type: BuildAndPushDockerRegistry
    name: BuildAndPushDockerRegistry_1
    identifier: BuildAndPushDockerRegistry_1
    spec:
      connectorRef: docker_connector
      repo: dockerhub/image_name
      tags:
        - linux-amd64
      caching: false
      dockerfile: ./docker/Dockerfile
      envVariables:
        PLUGIN_TAR_PATH: /harness/image_name.tar
        PLUGIN_NO_PUSH: "true"
- step:
    type: Run
    name: Run_2
    identifier: Run_2
    spec:
      shell: Sh
      command: ls /harness

note
  • The local tar output feature is available only when using Kaniko as the build tool, which is commonly used in Kubernetes environments.

  • While the above examples show a push to a Docker registry, you can easily repurpose them for other registries by updating the step type, connector, and other relevant fields.

Run as User

With Kubernetes cluster build infrastructures, you can specify the user ID to use to run all processes in the pod if running in containers. For more information, go to Set the security context for a pod.

This step requires root access. If your build runs as non-root (runAsNonRoot: true) and you want to run the Build and Push step as root, you can set Run as User to 0 to use the root user for this individual step only.

If your security policy doesn't allow running as root, go to Build and push with non-root users.

Set Container Resources

Set maximum resource limits for the resources used by the container at runtime:

  • Limit Memory: The maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number using the suffixes G or M. You can also use the power-of-two equivalents Gi and Mi. The default is 500Mi.
  • Limit CPU: The maximum number of cores that the container can use. CPU limits are measured in CPU units. Fractional requests are allowed; for example, you can specify one hundred millicpu as 0.1 or 100m. The default is 400m. For more information, go to Resource units in Kubernetes.
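
In step YAML, these limits appear under step.spec; a minimal sketch, assuming the resources.limits fields:

    spec:
      resources:
        limits:
          memory: 1Gi
          cpu: "1"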

Timeout

Set the timeout limit for the step. Once the timeout limit is reached, the step fails and pipeline execution continues.

Conditions, looping, and failure strategies

To set skip conditions or failure handling for steps, use the settings on the Advanced tab in the step settings pane.

Troubleshoot Build and Push steps

Go to the CI Knowledge Base for questions and issues related to building and pushing images.