Upload Artifacts to GCS
This topic provides settings for the Upload Artifacts to GCS step, which uploads artifacts to Google Cloud Storage. For more information, go to the Google Cloud documentation on Uploads and downloads.
Depending on the stage's build infrastructure, some settings may be unavailable.
Name
Enter a name summarizing the step's purpose. Harness automatically assigns an Id (Entity Identifier) based on the Name. You can change the Id.
GCP Connector
The Harness connector for the GCP account where you want to upload the artifact. For more information, go to Google Cloud Platform (GCP) connector settings reference. This step supports GCP connectors that use access key authentication. It does not support GCP connectors that inherit delegate credentials.
Bucket
The GCS destination bucket name.
Source Path
Path to the artifact file or folder that you want to upload. Harness creates the compressed file automatically.
Optional Configuration
Use the following settings to add additional configuration to the step. Settings specific to containers, such as Set Container Resources, are not applicable when using the step in a stage with VM or Harness Cloud build infrastructure.
Target
The path, relative to the Bucket, where you want to store the artifact. If no target path is provided, the artifact is saved to the bucket root.
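Put together, a step configured with these settings might look like the following in pipeline YAML. This is a sketch: the connector ID, bucket name, and paths are placeholders you would replace with your own values.

```yaml
- step:
    type: GCSUpload
    name: Upload Artifacts to GCS
    identifier: Upload_Artifacts_to_GCS
    timeout: 10m
    spec:
      connectorRef: my_gcp_connector    # placeholder: ID of a GCP connector using access key auth
      bucket: my-artifact-bucket        # placeholder: destination GCS bucket name
      sourcePath: /harness/dist         # placeholder: file or folder to upload
      target: builds/<+pipeline.sequenceId>   # optional path within the bucket
```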
Run as User
Specify the user ID to use to run all processes in the pod, if running in containers. For more information, go to Set the security context for a pod.
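In the step's YAML, this corresponds to the runAsUser field under spec. The value 1000 here is illustrative:

```yaml
- step:
    type: GCSUpload
    identifier: Upload_Artifacts_to_GCS
    spec:
      # ...connector, bucket, and path settings...
      runAsUser: "1000"   # illustrative user ID for all processes in the pod
```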
Set container resources
Set maximum resource limits for the container at runtime:
- Limit Memory: The maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number using the suffixes G or M. You can also use the power-of-two equivalents Gi or Mi. The default is 500Mi.
- Limit CPU: The maximum number of cores that the container can use. CPU limits are measured in CPU units. Fractional requests are allowed; for example, you can specify one hundred millicpu as 100m. The default is 400m. For more information, go to Resource units in Kubernetes.
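In YAML, these limits sit under spec.resources.limits. The values below are illustrative:

```yaml
- step:
    type: GCSUpload
    identifier: Upload_Artifacts_to_GCS
    spec:
      # ...other settings...
      resources:
        limits:
          memory: 500Mi   # maximum memory the container can use
          cpu: 400m       # maximum CPU (400 millicpu)
```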
Timeout
Set the timeout limit for the step. Once the timeout limit is reached, the step fails and pipeline execution continues. To set skip conditions or failure handling for steps, go to the Step Skip Condition and Step Failure Strategy settings.