Upload Artifacts to S3

To upload artifacts to AWS or other S3 providers, such as MinIO, you can either:

  • Use the Upload Artifacts to S3 step.
  • Use the S3 Upload and Publish plugin.

To upload artifacts to S3, you need:

  • Access to an S3 instance.
  • A CI pipeline with a Build stage.
  • Steps in your pipeline that generate artifacts to upload, such as by running tests or building code. The steps you use depend on what artifacts you ultimately want to upload.
  • An AWS connector, if you want to use the Upload Artifacts to S3 step.

You can also upload artifacts to GCS, upload artifacts to JFrog, and upload artifacts to Sonatype Nexus.

Use the Upload Artifacts to S3 step

Add the Upload Artifacts to S3 step to your pipeline's Build stage, and configure the settings accordingly.

Here is a YAML example of a minimum Upload Artifacts to S3 step.

              - step:
                  type: S3Upload
                  name: S3Upload
                  identifier: S3Upload
                  spec:
                    connectorRef: YOUR_AWS_CONNECTOR_ID
                    region: YOUR_AWS_REGION
                    bucket: YOUR_S3_BUCKET_NAME
                    sourcePath: path/to/artifact.tar.gz
                    target: <+pipeline.name>/<+pipeline.sequenceId>

Upload Artifacts to S3 step settings

The Upload Artifacts to S3 step has the following settings. Depending on the stage's build infrastructure, some settings might be unavailable or optional. Settings specific to containers, such as Set Container Resources, are not applicable when using a VM or Harness Cloud build infrastructure.

Name

Enter a name summarizing the step's purpose. Harness generates an Id (Entity Identifier) based on the Name. You can edit the Id.

AWS Connector

Select the Harness AWS connector to use when connecting to AWS S3.

info
Stage variable required for non-default ACLs

S3 buckets use private ACLs by default. Your pipeline must have a PLUGIN_ACL stage variable if you want to use a different ACL.

  1. In the Pipeline Studio, select the stage with the Upload Artifacts to S3 step, and then select the Overview tab.
  2. In the Advanced section, add a stage variable.
  3. Enter PLUGIN_ACL as the Variable Name, set the Type to String, and then select Save.
  4. For the Value, enter the relevant ACL.
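In the stage YAML, this appears as a stage variable similar to the following sketch (public-read is only an example ACL; use the ACL your bucket requires):

        variables:
          - name: PLUGIN_ACL
            type: String
            value: public-read
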
Stage variable required to assume IAM roles or use ARNs

Stages with Upload Artifacts to S3 steps must have a PLUGIN_USER_ROLE_ARN stage variable if:

  • Your AWS connector uses cross-account access (a cross-account role ARN).
  • Your AWS connector inherits credentials from the Harness Delegate, and you want the build pod/machine to assume a specific IAM role.

To add the PLUGIN_USER_ROLE_ARN stage variable:

  1. In the Pipeline Studio, select the stage with the Upload Artifacts to S3 step, and then select the Overview tab.
  2. In the Advanced section, add a stage variable.
  3. Enter PLUGIN_USER_ROLE_ARN as the Variable Name, set the Type to String, and then select Save.
  4. For the Value, enter the full ARN value.
    • For cross-account roles, this ARN value must correspond with the AWS connector's ARN.
    • For connectors that use the delegate's IAM role, the ARN value must identify the role you want the build pod/machine to use.
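In the stage YAML, this appears as a stage variable similar to the following sketch (the ARN is a placeholder):

        variables:
          - name: PLUGIN_USER_ROLE_ARN
            type: String
            value: arn:aws:iam::123456789012:role/YOUR_ROLE_NAME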

Region

Define the AWS region to use when uploading the artifact.

Bucket

The name of the S3 bucket where you want to upload the artifact.

Source Path

Path to the file or directory that you want to upload.

If you want to upload a compressed file, you must use a Run step to compress the artifact before uploading it.
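For example, a Run step like the following sketch (file paths are placeholders) can create a .tar.gz archive to reference in Source Path:

              - step:
                  type: Run
                  name: compress artifact
                  identifier: compress_artifact
                  spec:
                    shell: Bash
                    command: |-
                      # Create a compressed archive to reference in the upload step's Source Path.
                      tar -czf artifact.tar.gz path/to/files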

Endpoint URL

Endpoint URL for S3-compatible providers. This setting is not needed for AWS.
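For example, when uploading to a MinIO instance, the step could look like the following sketch (the URL is a placeholder, and this assumes the YAML key for this setting is endpoint):

              - step:
                  type: S3Upload
                  name: upload to MinIO
                  identifier: upload_to_minio
                  spec:
                    connectorRef: YOUR_AWS_CONNECTOR_ID
                    region: us-east-1
                    bucket: YOUR_BUCKET_NAME
                    sourcePath: path/to/artifact.tar.gz
                    endpoint: https://minio.example.com ## Hypothetical S3-compatible endpoint URL.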

Target

Provide a path, relative to the S3 Bucket, where you want to store the artifact. Do not include the bucket name; you specified this in Bucket.

If you don't specify a Target, Harness uploads the artifact to the bucket's main directory.

Run as User

Specify the user ID to use to run all processes in the pod if running in containers. For more information, go to Set the security context for a pod.

Set Container Resources

Maximum resource limits for the container at runtime (see the YAML sketch after this list):

  • Limit Memory: Maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number with the suffixes G or M. You can also use the power-of-two equivalents, Gi or Mi. Do not include spaces when entering a fixed value. The default is 500Mi.
  • Limit CPU: The maximum number of cores that the container can use. CPU limits are measured in CPU units. Fractional requests are allowed. For example, you can specify one hundred millicpu as 0.1 or 100m. The default is 400m. For more information, go to Resource units in Kubernetes.
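In the step YAML, these settings map to a resources block similar to the following sketch (the values shown are the defaults mentioned above):

                  spec:
                    resources:
                      limits:
                        memory: 500Mi
                        cpu: 400m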

Timeout

Set the timeout limit for the step. Once the timeout limit is reached, the step fails and pipeline execution continues. To set skip conditions or failure handling for steps, go to Step Skip Condition settings and Step Failure Strategy settings.
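In YAML, the timeout is a field on the step itself, alongside type and spec, as in this sketch (10m is an example value):

              - step:
                  type: S3Upload
                  name: S3Upload
                  identifier: S3Upload
                  timeout: 10m ## The step fails if it runs longer than this.
                  spec:
                    connectorRef: YOUR_AWS_CONNECTOR_ID
                    region: YOUR_AWS_REGION
                    bucket: YOUR_S3_BUCKET_NAME
                    sourcePath: path/to/artifact.tar.gz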

View artifacts on the Artifacts tab

You can use the Artifact Metadata Publisher plugin to view artifacts on the Artifacts tab on the Build details page.

After the Upload Artifacts to S3 step, add a Plugin step that uses the artifact-metadata-publisher plugin.

              - step:
                  type: Plugin
                  name: publish artifact metadata
                  identifier: publish_artifact_metadata
                  spec:
                    connectorRef: account.harnessImage
                    image: plugins/artifact-metadata-publisher
                    settings:
                      file_urls: https://BUCKET.s3.REGION.amazonaws.com/TARGET/ARTIFACT_NAME_WITH_EXTENSION
                      artifact_file: artifact.txt
  • connectorRef: Use the built-in Docker connector (account.harnessImage) or specify your own Docker connector.
  • image: Must be plugins/artifact-metadata-publisher.
  • file_urls: Provide the URL to the target artifact that was uploaded in the Upload Artifacts to S3 step, such as https://BUCKET.s3.REGION.amazonaws.com/TARGET/ARTIFACT_NAME_WITH_EXTENSION. If you uploaded multiple artifacts, you can provide a list of URLs. If your S3 bucket is private, use the console view URL, such as https://s3.console.aws.amazon.com/s3/object/BUCKET?region=REGION&prefix=TARGET/ARTIFACT_NAME_WITH_EXTENSION.
  • artifact_file: Provide any .txt file name, such as artifact.txt or url.txt. This is a required setting that Harness uses to store the artifact URL and display it on the Artifacts tab. This value is not the name of your uploaded artifact, and it has no relationship to the artifact object itself.

Use the S3 Upload and Publish plugin

You can use the S3 Upload and Publish plugin to upload an artifact to S3 and publish the artifact URL on the Artifacts tab.

If you use this plugin, you do not need an Upload Artifacts to S3 step in your pipeline. This plugin provides the same functionality as the Upload Artifacts to S3 step combined with the Artifact Metadata Publisher plugin; however, it may not be appropriate for use cases that require advanced configuration.

In your pipeline's CI stage, add a Plugin step that uses the drone-s3-upload-publish plugin, for example:

              - step:
                  type: Plugin
                  name: s3-upload-publish
                  identifier: custom_plugin
                  spec:
                    connectorRef: account.harnessImage ## Use the built-in Docker connector or specify your own connector.
                    image: harnesscommunity/drone-s3-upload-publish ## Required.
                    settings:
                      aws_access_key_id: <+pipeline.variables.AWS_ACCESS> ## Reference to your AWS access ID.
                      aws_secret_access_key: <+pipeline.variables.AWS_SECRET> ## Reference to your AWS access key.
                      aws_default_region: ap-southeast-2 ## Set to your default AWS region.
                      aws_bucket: bucket-name ## The target S3 bucket.
                      artifact_file: artifact.txt ## Provide any '.txt' file name. Harness uses this to store the artifact URL and display it on the Artifacts tab. This value is not the name of your uploaded artifact, and it has no relationship to the artifact object itself.
                      source: path/to/target/artifact.tar.gz ## Provide the path to the file or directory that you want to upload.
                      target: <+pipeline.name>/<+pipeline.sequenceId> ## Optional. Provide a path, relative to the 'aws_bucket', where you want to store the artifact. Do not include the bucket name. If unspecified, Harness uploads the artifact to the bucket's main directory.
                    imagePullPolicy: IfNotPresent
tip

For aws_access_key_id and aws_secret_access_key, use expressions to reference Harness secrets or pipeline variables containing your AWS access ID and key. You could also use expressions for target, such as <+pipeline.name>/<+pipeline.sequenceId>, which would automatically organize your artifacts into directories based on the pipeline name and incremental build ID.
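For example, if your AWS credentials are stored in Harness text secrets (the secret names below are hypothetical), the plugin settings could reference them like this:

                    settings:
                      aws_access_key_id: <+secrets.getValue("aws_access_key_id")> ## Hypothetical secret name.
                      aws_secret_access_key: <+secrets.getValue("aws_secret_access_key")> ## Hypothetical secret name.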

If you want to upload a compressed file, you must use a Run step to compress the artifact before uploading it.

Build logs and artifact files

When you run the pipeline, you can observe the step logs on the build details page.

If the build succeeds, you can find the artifact on S3.

If you used the Artifact Metadata Publisher or S3 Upload and Publish plugin, you can find the artifact URL on the Artifacts tab.

tip

On the Artifacts tab, select the step name to expand the list of artifact links associated with that step.

If your pipeline has multiple steps that upload artifacts, use the dropdown menu on the Artifacts tab to switch between lists of artifacts uploaded by different steps.

Pipeline YAML examples

The following pipeline examples use the Upload Artifacts to S3 step and the Artifact Metadata Publisher plugin.

This example pipeline uses Harness Cloud build infrastructure. It produces a text file, uploads the file to S3, and uses the Artifact Metadata Publisher to publish the artifact URL on the Artifacts tab.

pipeline:
  name: default
  identifier: default
  projectIdentifier: default
  orgIdentifier: default
  tags: {}
  properties:
    ci:
      codebase:
        connectorRef: YOUR_CODEBASE_CONNECTOR_ID
        repoName: YOUR_CODE_REPO_NAME
        build: <+input>
  stages:
    - stage:
        name: upload artifact
        identifier: upload_artifact
        description: ""
        type: CI
        spec:
          cloneCodebase: true
          platform:
            os: Linux
            arch: Amd64
          runtime:
            type: Cloud
            spec: {}
          execution:
            steps:
              - step:
                  type: Run
                  name: write file
                  identifier: write_file
                  spec:
                    shell: Bash
                    command: |-
                      echo "some file" > myfile.txt
                      date >> myfile.txt
              - step:
                  type: S3Upload
                  name: S3Upload
                  identifier: S3Upload
                  spec:
                    connectorRef: YOUR_AWS_CONNECTOR_ID
                    region: YOUR_AWS_REGION
                    bucket: YOUR_S3_BUCKET
                    sourcePath: path/to/myfile.txt
                    target: <+pipeline.name>/<+pipeline.sequenceId>
              - step:
                  type: Plugin
                  name: artifact metadata
                  identifier: artifact_metadata
                  spec:
                    connectorRef: account.harnessImage
                    image: plugins/artifact-metadata-publisher
                    settings:
                      file_urls: https://BUCKET.s3.REGION.amazonaws.com/TARGET/SOURCE_PATH/myfile.txt
                      artifact_file: artifact.txt

Download Artifacts from S3

You can use the S3 Drone plugin to download artifacts from S3. This is the same plugin image that Harness CI uses to run the Upload Artifacts to S3 step. To do this, add a Plugin step to your CI pipeline. For example:

              - step:
                  type: Plugin
                  name: download
                  identifier: download
                  spec:
                    connectorRef: YOUR_DOCKER_CONNECTOR
                    image: plugins/s3
                    settings:
                      access_key: <+secrets.getValue("awsaccesskeyid")>
                      secret_key: <+secrets.getValue("awssecretaccesskey")>
                      region: YOUR_BUCKET_REGION
                      bucket: YOUR_BUCKET_NAME
                      source: path/to/directory/to/download
                      target: download/destination
                      download: "true"

Configure the Plugin step settings as follows:

  • connectorRef (String): Select a Docker connector. Harness uses this connector to pull the plugin image. Example: account.harnessImage
  • image (String): Enter plugins/s3. Example: plugins/s3
  • access_key (String): Reference to a Harness text secret containing your AWS access key ID. Example: <+secrets.getValue("awsaccesskeyid")>
  • secret_key (String): Reference to a Harness text secret containing your AWS secret access key. Example: <+secrets.getValue("awssecretaccesskey")>
  • region (String): The S3 bucket region. Example: us-east-2
  • bucket (String): The S3 bucket name. Example: my-cool-bucket
  • source (String): The path to the directory to download from your S3 bucket. Example: path/to/artifact/directory
  • target (String): Path to the location where you want to store the downloaded artifacts, relative to the build workspace. Example: artifacts (downloads to /harness/artifacts)
  • download (Boolean): Must be true to enable downloading. If omitted or false, the plugin attempts to upload artifacts instead. Example: "true"

Mount an S3 bucket using s3fs-fuse

s3fs-fuse allows files and directories in an S3 bucket to act like a local file system. Harness Cloud supports using s3fs-fuse on Linux infrastructure.

The s3fs command supports the standard AWS credentials file or the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

Here is an example pipeline step that installs s3fs and mounts an S3 bucket using aws_access_key_id and aws_secret_access_key text secrets.

              - step:
                  type: Run
                  name: Setup s3fs
                  identifier: Setup_s3fs
                  spec:
                    shell: Sh
                    envVariables:
                      AWS_ACCESS_KEY_ID: <+secrets.getValue("aws_access_key_id")>
                      AWS_SECRET_ACCESS_KEY: <+secrets.getValue("aws_secret_access_key")>
                      AWS_REGION: <+input>
                      S3_BUCKET_NAME: <+input>
                      S3FS_MOUNT_DIR: <+input>
                    command: |-
                      apt-get update
                      apt-get install -y s3fs

                      s3fs $S3_BUCKET_NAME $S3FS_MOUNT_DIR \
                        -o use_cache=/tmp \
                        -o allow_other \
                        -o uid=1000 \
                        -o gid=1000 \
                        -o umask=0022 \
                        -o url=https://s3.${AWS_REGION}.amazonaws.com

In the above example, AWS_REGION, S3_BUCKET_NAME and S3FS_MOUNT_DIR are input parameters.

Any following steps in the stage can access the directory mounted at S3FS_MOUNT_DIR to read and write files in the S3 bucket.

              - step:
                  type: Run
                  name: Write file to bucket
                  identifier: Write_file_to_bucket
                  spec:
                    shell: Sh
                    envVariables:
                      S3FS_MOUNT_DIR: <+input>
                    command: |-
                      echo "Write file" > $S3FS_MOUNT_DIR/example.txt
              - step:
                  type: Run
                  name: Read file from bucket
                  identifier: Read_file_from_bucket
                  spec:
                    shell: Sh
                    envVariables:
                      S3FS_MOUNT_DIR: <+input>
                    command: |-
                      cat $S3FS_MOUNT_DIR/example.txt
note

When using Docker in a Run step, S3FS_MOUNT_DIR must be added as a shared path.
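For example, assuming the mount directory is /tmp/s3 (a placeholder), the stage spec could declare it as a shared path:

        spec:
          sharedPaths:
            - /tmp/s3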

Troubleshoot uploading artifacts

Go to the CI Knowledge Base for questions and issues related to uploading artifacts.