
Use Harness expressions

For most settings in Harness pipelines, you can use fixed values, runtime inputs, or expressions.

You can use expressions (also called Harness expressions, variable expressions, or sometimes Harness variables) to reference Harness input, output, and execution variables. These variables represent settings and values that exist in the pipeline before and during execution. These can include environment variables, secrets, pipeline/stage/step identifiers, and more. You can also add your own variables and use expressions to reference them.

This topic explains how Harness expressions work, how to write and use expressions, some built-in Harness expressions, and some prefixes used to identify user-defined variables.

Expressions are powerful and offer many options for modification and interaction, as described in the following sections.

What is a Harness variable expression?

Harness variable expressions refer to variable values in Harness, such as entity names, configuration settings, custom variables, inputs/outputs, and more. At pipeline runtime, Harness evaluates any expressions present in the pipeline and replaces them with their resolved values.

For example, the expression <+pipeline.name> resolves to the name of the pipeline where you're using that expression.

Harness variables are powerful because they enable templatizing of configuration information, pipeline settings, values in scripts, and more. They also enable your pipelines to pass information between stages and settings.

Harness has many built-in variables and expressions, and you can define custom variables that you can reference with expressions. Custom variables can be broadly scoped (such as account-wide or project-wide variables) or narrowly scoped (such as a variable for a specific pipeline or stage). Custom variables can store values that you need to reuse across many entities, or they can help you configure a specific build, such as runtime flags and environment variables for certain build tools.

Additionally, certain step types, such as Run steps and Shell Script steps, can utilize and produce input, environment, and output variables that you define in the step settings. You can also reference these with expressions.

Expression usage

To reference Harness variables, you use expressions consisting of the expression delimiter <+...> and a path to the referenced value, such as <+pipeline.name> or <+secrets.getValue("SECRET_ID")>.

In addition to Fixed Values and Runtime Inputs, you can use Harness expressions in many settings in pipelines and other entities, like connectors and triggers.

When writing pipelines in YAML, enter the expression as the value for a field.

For example, this connectorRef setting takes its value from an expression referencing a pipeline variable named myConnector.

          connectorRef: <+pipeline.variables.myConnector>

When you type <+, Harness provides suggestions for built-in expressions as you type. You can manually trigger the suggestions by placing your cursor after <+ and pressing ctrl + space.

You can continue typing or select the expression from the list of suggestions.

info

Automated suggestions don't represent all possible expressions.

When Harness automatically suggests expressions, it suggests expressions based on the current context (meaning the setting or place in the YAML where you are entering the expression). While it doesn't suggest expressions that aren't valid in the current context, this doesn't prevent you from entering invalid expressions.

For guidance on building valid expressions, go to Expression paths and Debugging expressions.

Expression paths

Harness expressions are references to variables, settings, and other values in Harness. When you use a Harness expression, you provide a path for Harness to follow to resolve the expression's value. Expressions can use relative paths or full paths (also referred to as FQNs).

Use FQNs

The format and length of an expression's Fully Qualified Name (FQN) depends on the type of entity the expression is referencing and where it is located.

Some expressions have short FQNs, such as <+account.name>, which references your Harness account name, or <+variable.account.accountVariableName>, which references a custom account variable. In contrast, expressions referencing specific settings embedded in a pipeline's YAML can have much longer FQNs, like this expression for a service variable in a CD stage: <+pipeline.stages.stageID.spec.serviceConfig.serviceDefinition.spec.variables.serviceVariableName>.

When referencing values within a pipeline, an expression's FQN is the path to the value you are referencing in the context of your pipeline's YAML. For values in pipelines, the expression's full path/FQN, starting from pipeline, always works as a reference point.

For example, the FQN for an expression referencing a CD service definition's imagePath setting is <+pipeline.stages.stageID.spec.serviceConfig.serviceDefinition.spec.artifacts.primary.spec.imagePath>. This reflects the path you would step through in the YAML to locate that value.

To see how an FQN like this maps onto the pipeline definition, compare the expression to the structure of a CD pipeline's YAML with a service definition.
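
The following sketch is hypothetical and heavily trimmed (identifiers and values are placeholders, and most required fields are omitted); it only illustrates how each segment of the FQN corresponds to a level in the YAML:

pipeline:
  stages:
    - stage:
        identifier: stageID
        spec:
          serviceConfig:
            serviceDefinition:
              spec:
                artifacts:
                  primary:
                    spec:
                      # <+pipeline.stages.stageID.spec.serviceConfig.serviceDefinition.spec.artifacts.primary.spec.imagePath>
                      imagePath: library/nginx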

Use relative paths

To use a relative path, identify the common parent between the step/stage you're referring to and the step/stage where you're using the expression (making the reference), and then use the common parent to start the expression.

For example, to reference a CD service definition's imagePath setting in the same stage where that service definition is defined, you could use the relative expression <+stage.spec.serviceConfig.serviceDefinition.spec.artifacts.primary.spec.imagePath>.

Use expressions only after they can be resolved

When Harness encounters an expression during pipeline execution, it tries to resolve the expression with the information it has at that point in the execution. This means that if you try to use an expression before Harness has the necessary information to resolve the expression's value, the expression resolves to null, and the pipeline can fail or execute incorrectly.

This requirement applies regardless of how you are using the expression, such as with operators or in scripts. If Harness can't resolve the target value at the point when the pipeline requests the expression, the expression fails to resolve and the pipeline can fail.

warning

Your pipelines must use expressions only after Harness has the required information to resolve the expression's value.

For example, assume you want to use this expression: <+pipeline.stages.Stage_2.spec.execution.steps.Step_C.executionUrl>. This expression calls the executionUrl from a step named Step_C that is in Stage_2. Since this step is in Stage_2, you could not use this expression in a previous stage, because the stage containing this expression's value hasn't run yet. Additionally, if there are steps before Step_C in Stage_2, those steps can't use this expression either, because they run before Step_C.

Here are some guidelines to help you successfully use expressions in pipelines:

  • Don't refer to a step's expressions within that same step.
  • Don't refer to values from a subsequent step/stage in a step/stage that runs before the referenced step/stage.
  • Don't refer to step inputs/outputs in a CD stage's Service or Environment configuration.
    • In a CD stage, steps run after Harness evaluates the service and environment configuration. Consequently, Harness can't get values for expressions referencing step inputs/outputs while it is evaluating the service and environment configuration.
    • Similarly, don't refer to step inputs/outputs in a CI stage's Infrastructure or Codebase configuration. These are evaluated before steps run, so they can't use expressions referencing steps.
Example: Harness expression resolution throughout a CD stage

This example demonstrates when and where certain expressions (by prefix) are resolved over the duration of a CD stage, so that you can determine which events need to occur before you can safely reference a certain expression and ensure that it is successfully resolved when the pipeline runs.

Different expressions originate from different parts of a stage.

Here's when you can reference expressions resolved from information in each of these stage sections:

  • Service expressions can be resolved only after Harness has progressed through the Service section of the pipeline. Consequently, you can use service expressions in the Infrastructure and Execution sections of the stage.
  • Infrastructure expressions can be resolved only after Harness has progressed through the Infrastructure section of the pipeline.
    • In the Infrastructure section, you can reference Service settings.
    • Since Execution follows Infrastructure, you can reference Infrastructure expressions in Execution.
  • Execution expressions apply to steps in Execution.
    • Each step's Execution expressions can be referenced only after Harness has progressed through that step in the Execution section.

Hyphens require escaping

Harness recommends not using hyphens/dashes (-) in variable and property names, because these characters can cause issues with headers and they aren't allowed in some Linux distributions and deployment-related software.

For example, this expression won't work: <+execution.steps.httpstep.spec.headers.x-auth>

If you must include a hyphen in an expression, such as with x-auth, you can wrap the property name in double quotes (""), such as <+execution.steps.httpstep.spec.headers["x-auth"]>.

This also applies to nested usage, such as:

<+execution.steps.httpstep.spec.newHeaders["x-auth"]["nested-hyphen-key"]>
<+execution.steps.httpstep.spec.newHeaders["x-auth"].nonhyphenkey>

When referencing custom variables or matrix dimensions with hyphenated names, you must use the get() method.
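
For example, assuming a pipeline variable with the hypothetical name build-id, you could reference it like this:

<+pipeline.variables.get("build-id")>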

Expression evaluation

Mechanically, Harness passes the content within the delimiter (<+...>) to the Java Expression Language (JEXL) for evaluation at runtime.

For example, <+pipeline.name> evaluates to the name of the pipeline where that expression is used.

Here's an example of a shell script echoing some expressions, along with examples of output from those expressions.

echo <+pipeline.executionId>          # Output example: 1234-5678-abcd
echo <+pipeline.sequenceId>           # Output example: 16
echo <+stage.name>                    # Output example: dev
echo <+service.name>                  # Output example: nginx
echo <+artifacts.primary.image>       # Output example: index.docker.io/library/nginx:stable
echo <+artifacts.primary.imagePath>   # Output example: library/nginx
echo <+env.name>                      # Output example: demo
echo <+infra.namespace>               # Output example: default
echo <+infra.releaseName>             # Output example: demo
warning

Expressions can't resolve correctly if the target value isn't available at the time that Harness evaluates the expression. For more information, go to Use expressions only after they can be resolved.

Additionally, variable values (after evaluation) are limited to 256 KB. Expressions producing evaluated values larger than this can have truncated values or fail to resolve.

Expression manipulation

In addition to standard evaluation, expressions can be evaluated and manipulated with Java string methods, JSON parsing, JEXL, interpolation, concatenation, and more.

Compound expressions and operators require nested delimiters

When forming complex expressions, such as when using operators or methods with expressions, wrap the entire compound expression statement in the expression delimiter (<+...>).

For example, with the equals == and not equals != operators, wrap the entire operation in the expression delimiter <+...>:

<+<+pipeline.name> == "pipeline1">
<+<+stage.variables.v1> != "dev">

Complex usage can have multiple levels of nesting. For example, the following compound expression concatenates values from two variables into one list, and then uses the split() method on the concatenated list. The original expressions, the concatenated list expression, and the method manipulation are all wrapped in expression delimiters:

<+ <+<+pipeline.variables.listVar1> + "," + <+pipeline.variables.listVar2>>.split(",")>

Java string methods

You can use any Java string method on Harness expressions.
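
For example, the following sketches (using a hypothetical pipeline variable named var1) apply common Java string methods. Wrap the whole operation in the expression delimiter, as described above:

<+<+pipeline.variables.var1>.toUpperCase()>
<+<+pipeline.variables.var1>.replace("-", "_")>
<+<+pipeline.variables.var1>.substring(0, 4)>
<+<+pipeline.variables.var1>.contains("prod")>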

JEXL

You can use JEXL to build complex expressions.

For example, the following complex expression uses information from a webhook trigger payload:

<+ <+<+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo")> || <+trigger.payload.repository.owner.name> == "wings-software" >

Notice the use of methods, operators, and nested delimiters (<+...>) forming a compound expression. Harness evaluates each individual expression and produces a final value by evaluating the entire JEXL expression.

Ternary operators

When using ternary conditional operators (?:), wrap the entire expression in the expression delimiter <+...>.

Ternary operators in Harness follow the standard format, but you can't use spaces between the operators and values.

For example:

  • Incorrect: <+condition ? IF_TRUE : IF_FALSE>
  • Correct: <+condition?IF_TRUE:IF_FALSE>
Pipeline YAML example manipulating expressions with ternary operators

This pipeline uses a ternary operator to evaluate a stage variable named myvar with a value of 1.1.

In the second ShellScript step, named ternary, the stage variable is referenced by the expression <+stage.variables.myvar> and evaluated with the ternary expression == "1.1"?"pass":"fail". The entire compound expression is <+ <+stage.variables.myvar> == "1.1"?"pass":"fail" >.

pipeline:
  name: exp
  identifier: exp
  projectIdentifier: CD_Docs
  orgIdentifier: default
  tags: {}
  stages:
    - stage:
        name: ternarydemo
        identifier: ternarydemo
        description: ""
        type: Custom
        spec:
          execution:
            steps:
              - step:
                  type: ShellScript
                  name: ShellScript_1
                  identifier: ShellScript_1
                  spec:
                    shell: Bash
                    onDelegate: true
                    source:
                      type: Inline
                      spec:
                        script: echo <+stage.variables.myvar>
                    environmentVariables: []
                    outputVariables: []
                  timeout: 10m
              - step:
                  type: ShellScript
                  name: ternary
                  identifier: ternary
                  spec:
                    shell: Bash
                    onDelegate: true
                    source:
                      type: Inline
                      spec:
                        script: echo <+ <+stage.variables.myvar> == "1.1"?"pass":"fail" >
                    environmentVariables: []
                    outputVariables: []
                  timeout: 10m
        tags: {}
        variables:
          - name: myvar
            type: String
            description: ""
            required: true
            value: "1.1"

For more information about using ternary operators in Harness, go to Using Ternary Operators with Triggers.

Expressions as strings

If you want to treat an expression as a string, you wrap it in double quotes, with the exception of secrets expressions (such as <+secrets.getValue()>) and some JSON usage.

For example, the following command has the expression <+stage.name> wrapped in double quotes because it is an element in an array of strings.

<+<+pipeline.variables.changeType> =~ ["<+stage.name>","All"]>

When using expressions as strings in JSON, the entire expression must be wrapped in double quotes, if that is required to make the JSON valid.

For example, in the following JSON, the expression <+pipeline.variables.version> must be wrapped in quotation marks because it resolves as a string in that part of the JSON. However, the expression <+<+pipeline.variables.hosts>.split(\",\")> isn't wrapped in quotation marks because it resolves as a list.

"{\"a\":[ { \"name\": \"svc1\", \"version\": \"<+pipeline.variables.version>\", \"hosts\": <+<+pipeline.variables.hosts>.split(\",\")> } ]}"

Secrets as strings

Do not wrap <+secrets.getValue()> expressions in double quotes. While the secret ID within the getValue() method must be wrapped in double quotes, do not wrap the entire expression in double quotes, even when you want to treat it as a string.

This is because these expressions are resolved by an internal Secret Manager function. The value is not a primitive type string, and it must not be wrapped in double quotes.

For example, in the following complex expression, the <+secrets.getValue()> expression is not wrapped in double quotes, despite being used in an operation where another expression would be wrapped in double quotes.

<+<+<+pipeline.variables.var1>=="secret1">?<+secrets.getValue("secret1")>:<+secrets.getValue("defaultSecret")>>

Concatenation and interpolation

Harness supports complex usages of string interpolation, such as:

  • Substituting an expression value within a path: us-west-2/nonprod/eks/eks123/<+env.name>/chat/
  • Using an expression to supply the value of an identifier within another expression:
    • This example uses the index of the looped execution to pick the desired step by ID: <+stage.spec.execution.steps.s1<+strategy.identifierPostFix>.steps.ShellScript_1.output.outputVariables.v1>
    • This example would print the status of a stage where the stage name is defined as a stage variable: <+pipeline.stages.<+pipeline.variables.stageName>.status>

Harness string variables can be concatenated by default. Each expression can be evaluated and substituted in the string.

Previously, you always used + or concat() to join multiple expressions together. Now, you can simply list the expressions with spaces between, for example:

<+pipeline.name> <+pipeline.executionId>

The concatenation operator (+) and the concat() method also work. Note that these options require you to wrap the entire operation in the expression delimiter (<+...>). For example, the following syntax is valid:

<+<+pipeline.variables.var1> + "_suffix">
<+<+pipeline.variables.var1>.concat("_suffix")>
info

When concatenating expressions as strings, each expression must evaluate to a string.

If an expression does not satisfy this condition, use the toString() method to convert it to a string.

For example, in /tmp/spe/<+pipeline.sequenceId> the variable sequenceId evaluates to an integer. When concatenating this with other string expressions, it must be converted to a string, such as: /tmp/spe/<+pipeline.sequenceId.toString()>.

Input and output variables

Your pipelines, stages, and steps can ingest inputs and produce outputs. In general, input variables represent a pipeline's configuration — the settings and values defining how and where an execution runs. Output variables are the results of an execution — such as release numbers, artifact IDs, image tags, user-defined output variables, and so on.

You can use expressions to reference inputs and outputs. For example, you could reference a previous step's output in a subsequent step's command.

The expression to reference an input or output depends on the scope where it was defined and the scope where you're referencing it. Usually, the expression follows the YAML path to the setting, and the full path (starting from pipeline) is always a valid reference. For example, to reference the command setting for a Run step in a Build stage, you could use an expression like <+pipeline.stages.BUILD_STAGE_ID.spec.execution.steps.RUN_STEP_ID.spec.command>. If you were referencing this setting in another step in the same stage, you could use a relative path like <+execution.steps.RUN_STEP_ID.spec.command>.
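
As an illustrative sketch (step names, variable names, and values here are hypothetical, and some fields are trimmed), a Shell Script step could export an output variable that a later step in the same stage references by a relative path:

- step:
    type: ShellScript
    name: build_info
    identifier: build_info
    spec:
      shell: Bash
      onDelegate: true
      source:
        type: Inline
        spec:
          script: export BUILD_TAG=v1.0.<+pipeline.sequenceId>
      environmentVariables: []
      outputVariables:
        - name: BUILD_TAG
          type: String
          value: BUILD_TAG
    timeout: 10m
- step:
    type: ShellScript
    name: print_tag
    identifier: print_tag
    spec:
      shell: Bash
      onDelegate: true
      source:
        type: Inline
        spec:
          script: echo <+execution.steps.build_info.output.outputVariables.BUILD_TAG>
      environmentVariables: []
      outputVariables: []
    timeout: 10m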

Get input/output expressions from execution details

In a pipeline's execution details, you can explore inputs and outputs for the pipeline as a whole, as well as for individual steps.

From the execution details, you can quickly copy the expression to reference any step-level input or output.

This is useful for determining expression paths, when debugging expressions, or when you're not sure which expression to use for a particular setting or value.

To do this:

  1. Go to the execution details page. You can get there by going to your Executions, Builds, or Deployments history and selecting the execution you want to inspect.

  2. To inspect step-level inputs and outputs, select a step in the execution tree, and then select the Input and Output tabs.

    For example, these are some inputs and outputs for a Kubernetes rollout deployment step:

  3. To get the expression referencing a particular input/output, hover over the Input/Output Name and select the Copy icon.

    For example, if you want to reference a Run step's Command setting, navigate to the Run step's Input on the execution details page, locate command under Input Name, and select the Copy icon. Your clipboard now has the expression for this Run step's command, such as <+pipeline.stages.stageID.spec.execution.steps.RunStepID.spec.command>.

    This example copies the podIP setting for a Kubernetes rollout deployment step, resulting in an expression such as <+pipeline.stages.STAGE_ID.spec.execution.steps.STEP_ID.deploymentInfoOutcome.serverInstanceInfoList[0].podIP>.

  4. In the same way, you can copy values tied to specific inputs and outputs. Copying a value copies the literal value, not the expression. To reference this value by an expression, you need to use the Copy option for the Input/Output Name.

Get custom variable and input expressions from the Variables list

You can get expressions for custom pipeline variables and execution inputs from the Variables list in the Pipeline Studio.

If the variable is local to a scope within the pipeline, such as a stage or step group, you can copy either the local, relative-path expression (to use the expression in the origin scope) or the full path/FQN expression (to use the expression outside the origin scope, such as in another stage).

Get input/output expressions used in a step from the Execution Context

The Execution Context provides information about resolved expressions and their values for each step in the Step Detail pane. Its purpose is to aid in debugging previous executions of the pipeline, serving as an additional tool alongside the Input/Output variables listed for each step.

Let's consider the following example:

pipeline:
  name: pipeline
  identifier: pipeline
  projectIdentifier: docs_play
  orgIdentifier: default
  tags: {}
  stages:
    - stage:
        name: custom
        identifier: custom
        description: ""
        type: Custom
        spec:
          execution:
            steps:
              - step:
                  type: ShellScript
                  name: ShellScript_1
                  identifier: ShellScript_1
                  spec:
                    shell: Bash
                    executionTarget: {}
                    source:
                      type: Inline
                      spec:
                        script: echo ${executionUrl}
                    environmentVariables:
                      - name: executionUrl
                        type: String
                        value: <+pipeline.executionUrl>
                    outputVariables: []
                  timeout: 10m
        tags: {}

The Execution Context is a table of keys and values, where the keys are the expressions referenced within the step. In the above example, the ShellScript_1 step defines an input/environment variable, executionUrl, whose value is an expression. After you run the pipeline, select the ShellScript_1 step, and then select Execution Context in the Step Details pane.

You can also get the expressions for step-level variables and execution inputs from the Execution Context.

note

Some important points to note:

  1. Secrets are not displayed in the Execution Context.
  2. The Execution Context is non-editable, meaning you won't be able to add or remove items from it.

Expressions reference

The following sections describe some Harness expressions. This information is not exhaustive.

Account, org, and project expressions

  • <+account.identifier>: The identifier for your Harness account.
  • <+account.name>: Your Harness account name.
  • <+account.companyName>: The company name associated with your Harness account.
  • <+org.identifier>: The identifier of an organization in your Harness account. The referenced organization depends on the context where you use the expression.
  • <+org.name>: The name of the organization.
  • <+org.description>: The description of the organization.
  • <+project.identifier>: The identifier of a Harness project in your Harness account. The referenced project depends on the context where you use the expression.
  • <+project.name>: The name of the Harness project.
  • <+project.description>: The description of the Harness project.
  • <+project.tags>: All Harness tags attached to the project.

Approval expressions

Whenever a user grants an approval in a Harness Manual Approval step, the pipeline maintains the user information of the approver for the rest of the pipeline execution. You can use these variables in notifications after an approval is granted.

info

These expressions apply to Harness Manual Approval steps only. They are not applicable to Approval stages or third-party approval steps (such as Jira or ServiceNow approval steps).

  • <+approval.approvalActivities[0].user.name>: The Harness username of the approver.
  • <+approval.approvalActivities[0].user.email>: The email address of the approver.
  • <+approval.approvalActivities[0].comments>: User comments from the approval, formatted as a single string. This variable is populated from the comment output variable generated by the Approval step.

Use the index value to get information from different Approval steps in the same pipeline. For example, if you have a Deploy stage with two Approval steps, the pipeline maintains a separate set of approval variable values for each Approval step. Use the array index in the expressions to access the values for a specific approval.

CI codebase and environment variables

For information about variables and expressions relevant to Harness CI, go to the Harness CI documentation.

Custom variables

For information about user-defined variables, including naming conventions, special handling, and other usage specifications, go to Define variables.

Deployment environment expressions

In Harness CD, environments represent your deployment targets (such as QA, Prod, and so on). Each environment contains one or more Infrastructure Definitions that list your target clusters, hosts, namespaces, and so on. You can use expressions to reference environment values, such as <+env.name>, in a service's Values YAML file, specs, and config files, for example.

  • <+env.name>: The name of the environment used in the current stage.
  • <+env.identifier>: The entity identifier of the environment used in the current stage.
  • <+env.description>: The description of the environment.
  • <+env.type>: The environment type, such as Production or PreProduction.
  • <+env.envGroupName>: The name of the environment group to which the environment belongs, if defined. This expression resolves only if the deployment is done on an environment group.
  • <+env.envGroupRef>: The environment group reference. This expression resolves only if the deployment is done on an environment group.
tip

Environment expressions are useful. For example, you can use them in Service steps, or you can use JEXL to evaluate them in conditional execution settings, such as <+env.type> != "Production".

Custom environment-level variables

You can define custom variables in your environment and service definitions, and you can use expressions to reference those custom variables.

Currently, there are two versions of services and environments, v1 and v2. Services and environments v1 are being replaced by services and environments v2.

To reference custom environment-level variables, use the expression syntax <+env.variables.variableName>.
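
For example, if the environment used in a stage defines a variable with the hypothetical name DB_HOST, a Shell Script step in that stage could reference it like this:

echo <+env.variables.DB_HOST>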

Deployment infrastructure expressions

These expressions refer to deployment infrastructure configurations. Infrastructure definitions are associated with a deployment environment and they represent the actual clusters, hosts, etc., where Harness deploys a service.

  • <+infra.name>: The name of the infrastructure definition used in a pipeline stage.

  • <+infra.infraIdentifier>: The ID of the infrastructure definition used in a pipeline stage.

  • <+infra.tags>: The tags on a CD stage's infrastructure definition. To reference a specific tag use <+infra.tags.TAG_KEY>.

  • <+infra.connectorRef>: The ID of the Harness connector used in the Deploy stage's infrastructure definition.

  • <+infra.connector.name>: The name of the Harness connector used in the infrastructure definition.

  • <+infra.namespace>: The namespace used in the infrastructure definition.

  • <+infra.releaseName>: The release name used in the infrastructure definition.

INFRA_KEY and INFRA_KEY_SHORT_ID

<+INFRA_KEY> references the infrastructure key, which is a unique string that identifies a deployment target infrastructure.

The infrastructure key is a combination of the serviceIdentifier, the environmentIdentifier, and a set of values unique to each infrastructure definition implementation (Kubernetes cluster, etc.), hashed using SHA-1. For example, in the case of a Kubernetes infrastructure, the infrastructure key is a hash of serviceIdentifier-environmentIdentifier-connectorRef-namespace. The format is sha-1(service.id-env.id-[set of unique infra values]).

<+INFRA_KEY_SHORT_ID> is a shortened form of <+INFRA_KEY>. The shortened form is obtained by removing all but the first six characters of the hash of the infrastructure key.
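
As a rough shell illustration of the described format only (the inputs are hypothetical, and this is not the exact code Harness runs), a Kubernetes-style key and its shortened form could be approximated like this:

# Hypothetical inputs: service ID, environment ID, connector ref, and namespace
KEY_INPUT="myservice-myenv-K8s_Connector-default"

# Full infrastructure key: SHA-1 hash of the combined values
printf '%s' "$KEY_INPUT" | sha1sum | awk '{print $1}'

# INFRA_KEY_SHORT_ID keeps only the first six characters of that hash
printf '%s' "$KEY_INPUT" | sha1sum | awk '{print $1}' | cut -c1-6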

warning

These expressions are written literally as <+INFRA_KEY> and <+INFRA_KEY_SHORT_ID>, in capital letters.

Infrastructure keys are typically used in the Release Name setting to add labels to releases for tracking. For example, in the infrastructure definition of a Deploy stage, <+INFRA_KEY> is used in the Release Name to give the release a unique name, such as release-<+INFRA_KEY>.

When the deployment runs, Harness adds the release name as a label. For example, in a Kubernetes deployment, release-<+INFRA_KEY> might resolve as harness.io/release-name=release-2f9eadcc06e2c2225265ab3cbb1160bc5eacfd4f.

...
Pod Template:
  Labels:  app=hello
           deployment=hello
           harness.io/release-name=release-2f9eadcc06e2c2225265ab3cbb1160bc5eacfd4f
  Containers:
    the-container:
      Image:  monopole/hello:1
...

With the INFRA_KEY, Harness can track the release for comparisons and rollback.

info

To resolve issues experienced with Kubernetes and Native Helm deployments when using the long form release-<+INFRA_KEY>, Harness now uses <+INFRA_KEY_SHORT_ID> in the default expression it uses to generate a release name for the resources in Kubernetes and Native Helm deployments. This means that the Release Name field, in the Advanced section of Cluster Details in the infrastructure definition, is now pre-populated with release-<+INFRA_KEY_SHORT_ID>.

Deployment instance expressions

The following instance expressions are supported in Secure Shell (SSH) deployments, WinRM deployments, and Custom deployments using Deployment Templates.

These deployments can be done on physical data centers, AWS, and Azure. The deployment target determines which expressions you can use.

  • For Microsoft Azure, AWS, or any platform-agnostic Physical Data Center (PDC):
    • <+instance.hostName>: The host/container/pod name where the microservice/application is deployed.
    • <+instance.host.instanceName>: Same as <+instance.hostName>.
    • <+instance.name>: The name of the instance on which the service is deployed.
  • For Microsoft Azure or AWS:
    • <+instance.host.privateIp>: The private IP of the host where the service is deployed.
    • <+instance.host.publicIp>: The public IP of the host where the service is deployed.

To use instance expressions in pipelines, you must use a repeat looping strategy and identify all the hosts for the stage as the target.

repeat:
  items: <+stage.output.hosts>

When you use an instance expression in your pipeline, such as in a Shell Script step, Harness applies the script to all target instances. You do not have to loop through instances in your script.

For examples, go to Run a script on multiple target instances.

Instance attributes in deployment templates

For Deployment Templates, you can use instance expressions to reference host properties defined in the Instance Attributes in the deployment template.

Instances collected by the mandatory instancename field can be referenced by the expressions <+instance.hostName>, <+instance.host.instanceName>, or <+instance.name>.

To reference the other properties added to Instance Attributes, use the expression syntax <+instance.host.properties.PROPERTY_NAME>. For example, if you added a property named artifact, you could reference it with the expression <+instance.host.properties.artifact>.
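
For example, with a repeat looping strategy over the target hosts, a Shell Script step could reference the built-in host name and the hypothetical artifact property mentioned above:

echo <+instance.hostName>
echo <+instance.host.properties.artifact>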

Kubernetes deployment expressions

HARNESS_KUBE_CONFIG_PATH

While this doesn't follow the typical Harness expression syntax, ${HARNESS_KUBE_CONFIG_PATH} is an expression referencing the path to a Harness-generated kubeconfig file containing the credentials you provided to Harness. The credentials can be used by kubectl commands by exporting their value to the KUBECONFIG environment variable.

Harness only generates this kubeconfig file when a delegate is outside of the target cluster and is making a remote connection. When you set up the Kubernetes cluster connector to connect to the cluster, you select the Specify master URL and credentials option. The master URL and credentials you supply in the connector are put in the kubeconfig file and used by the remote delegate to connect to the target cluster.

Consequently, you can only use ${HARNESS_KUBE_CONFIG_PATH} when you are using a delegate outside the target cluster and a Kubernetes cluster connector with the Specify master URL and credentials option.

If you are running the script using an in-cluster delegate with the Use the credentials of a specific Harness Delegate credentials option, then there are no credentials to store in a kubeconfig file since the delegate is already an in-cluster process.

You can use the ${HARNESS_KUBE_CONFIG_PATH} expression in a Shell script step to set the environment variable at the beginning of your kubectl script, such as export KUBECONFIG=${HARNESS_KUBE_CONFIG_PATH}. It cannot be used in other scripts such as a Terraform script.

For example:

## Get the pods in the default namespace
export KUBECONFIG=${HARNESS_KUBE_CONFIG_PATH}
kubectl get pods -n default

## Restart a deployment object in the Kubernetes cluster
export KUBECONFIG=${HARNESS_KUBE_CONFIG_PATH}
kubectl rollout restart deployment/mysql-deployment

kubernetes.release.revision

You can use the expression <+kubernetes.release.revision> to get the deployment revision number.

This expression requires delegate version 23.04.79106 or later.

You can use this expression:

  • In the values.yaml file, OpenShift Params, and Kustomize Patches.
  • To reference the current Harness release number as part of your manifest.
  • To reference versioned ConfigMaps and Secrets in custom resources and fields unknown by Harness.

Important: You must update your delegate to version 23.04.79106 or later to use this expression.
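
For example, a hypothetical values.yaml fragment (the key names are placeholders) could append the release revision to a ConfigMap name so that each deployment references its own versioned ConfigMap:

configMapName: app-config-<+kubernetes.release.revision>
releaseRevision: <+kubernetes.release.revision>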

Manifest expressions

Harness has generic manifest expressions and manifest expressions for specific deployment types.

Manifest settings are referenced by the manifest ID, which is located at service.serviceDefinition.spec.manifests.manifest.identifier in the Harness Service YAML.

Use Service YAML to get manifest expression paths

Reviewing the Service YAML can help you determine the expressions you can use. For example, the expression <+manifests.mymanifest.valuesPaths> can be created by using the manifest identifier and the valuesPaths in the following YAML:

...
manifests:
  - manifest:
      identifier: mymanifest      # <+manifests.MANIFEST_ID.identifier>
      type: K8sManifest           # <+manifests.MANIFEST_ID.type>
      spec:
        store:                    # <+manifests.MANIFEST_ID.store>
          type: Harness
          spec:
            files:
              - account:/Templates
        valuesPaths:
          - account:/values.yaml
        skipResourceVersioning: false
...

Here are some generic manifest expressions:

  • <+manifest.MANIFEST_ID.commitId>: The commit Id of the manifests used in a service. This is captured in the Deployment step execution output.

  • <+manifests.MANIFEST_ID.identifier>: Resolves to the manifest identifier in Harness. The MANIFEST_ID is the same as the resolved value of this expression; however, you could use this to use the manifest ID in a script, for example.

  • <+manifests.MANIFEST_ID.type>: Resolves to the manifest type.

  • <+manifests.MANIFEST_ID.store>: Resolves to where the manifest is stored. In the following example, the manifest is stored in the Harness File Store:

    ...
    manifests:
      - manifest:
          identifier: mymanifest
          type: K8sManifest
          spec:
            store:
              type: Harness
              spec:
                files:
                  - account:/Templates
    ...

Helm chart expressions

For Kubernetes Helm and Native Helm deployments, you can use the following built-in expressions in your pipeline stage steps to reference chart details.

  • <+manifests.MANIFEST_ID.helm.name>: Helm chart name.
  • <+manifests.MANIFEST_ID.helm.description>: Helm chart description.
  • <+manifests.MANIFEST_ID.helm.version>: Helm chart version.
  • <+manifests.MANIFEST_ID.helm.apiVersion>: Chart.yaml API version.
  • <+manifests.MANIFEST_ID.helm.appVersion>: The app version.
  • <+manifests.MANIFEST_ID.helm.kubeVersion>: Kubernetes version constraint.
  • <+manifests.MANIFEST_ID.helm.metadata.url>: Helm chart repository URL.
  • <+manifests.MANIFEST_ID.helm.metadata.basePath>: Helm chart base path, available only for OCI, GCS, and S3.
  • <+manifests.MANIFEST_ID.helm.metadata.bucketName>: Helm chart bucket name, available only for GCS and S3.
  • <+manifests.MANIFEST_ID.helm.metadata.commitId>: Store commit Id, available only when the manifest is stored in a Git repo and Harness is configured to use the latest commit.
  • <+manifests.MANIFEST_ID.helm.metadata.branch>: Store branch name, available only when the manifest is stored in a Git repo and Harness is configured to use a branch.

The MANIFEST_ID is located in service.serviceDefinition.spec.manifests.manifest.identifier in the Harness service YAML. In the following example, it is nginx:

service:
  name: Helm Chart
  identifier: Helm_Chart
  tags: {}
  serviceDefinition:
    spec:
      manifests:
        - manifest:
            identifier: nginx
            type: HelmChart
            spec:
              store:
                type: Http
                spec:
                  connectorRef: Bitnami
              chartName: nginx
              helmVersion: V3
              skipResourceVersioning: false
              commandFlags:
                - commandType: Template
                  flag: mychart -x templates/deployment.yaml
    type: Kubernetes
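
Based on the nginx manifest ID in the preceding YAML, a Shell Script step could reference chart details like this:

echo <+manifests.nginx.helm.name>
echo <+manifests.nginx.helm.version>
echo <+manifests.nginx.helm.metadata.url>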

Pipeline expressions

The following expressions reference information about a pipeline run, such as the execution ID or start time. For expressions referencing custom pipeline variables, go to Custom variables. For expressions referencing pipeline triggers, go to Trigger expressions.

  • <+pipeline.identifier>: The identifier of the pipeline.

  • <+pipeline.name>: The name of the current pipeline.

  • <+pipeline.tags>: The tags for a pipeline. To reference a specific tag, use <+pipeline.tags.TAG_NAME>.

  • <+pipeline.executionId>: Every pipeline run (execution) is given a universally unique identifier (UUID). The UUID can be referenced anywhere. The UUID forms the unique execution URL, for example: https://app.harness.io/ng/#/account/:accountId/cd/orgs/default/projects/:projectId/pipelines/:pipelineId/executions/:executionId/pipeline.

  • <+pipeline.resumedExecutionId>: The execution ID of the root or original execution. This value is different from the executionId when it is a retry.

  • <+pipeline.sequenceId>: The incremental sequential Id for the execution of a pipeline.

    While the <+pipeline.executionId> is randomly generated for each execution, the <+pipeline.sequenceId> increments with each run of the pipeline. The first run of a pipeline receives a sequence Id of 1 and each subsequent execution is incremented by 1.

    For CD pipelines, the Id is named execution. For CI pipelines, the Id is named builds.

    You can use <+pipeline.sequenceId> to tag a CI build when you push it to a repository, and then use <+pipeline.sequenceId> to pull the same build and tag in a subsequent stage. For examples, go to Build and test on a Kubernetes cluster build infrastructure tutorial and Integrating CD with other Harness modules.

  • <+pipeline.executionUrl>: The execution URL of the pipeline. This is the same URL you see in your browser when you are viewing the pipeline execution.

    warning

    Harness has deprecated the version of this expression with an additional period, <+pipeline.execution.Url>.

  • <+pipeline.executionMode>: This expression describes the pipeline's execution mode.

    This expression is useful in conditional executions. For example, you can create a conditional execution to ensure that a step runs only when a post-deployment rollback happens.

  • <+pipeline.startTs>: The start time of a pipeline execution in Unix Epoch format.

  • <+pipeline.selectedStages>: The list of stages selected for execution.

  • <+pipeline.delegateSelectors>: The pipeline-level delegate selectors selected via runtime input.

  • <+pipeline.storeType>: If the pipeline is stored in Harness, the expression resolves to inline. If the pipeline is stored in a Git repository, the expression resolves to remote.

  • <+pipeline.repo>: For remote pipelines, the expression resolves to the Git repository name. For inline pipelines, the expression resolves to null.

  • <+pipeline.branch>: For remote pipelines, the expression resolves to the Git branch where the pipeline exists. For inline pipelines, the expression resolves to null.

  • <+pipeline.orgIdentifier>: The identifier of an organization in your Harness account. The referenced organization is the pipeline's organization.

Secrets expressions

The primary way to reference secrets is with expressions like <+secrets.getValue("SECRET_ID")>.

For information about referencing secrets, go to the Secrets documentation.

Service expressions

Services represent your microservices and other workloads. Each service contains a Service Definition that defines your deployment artifacts, manifests or specifications, configuration files, and service-specific variables.

  • <+service.name>: The name of the service defined in the stage where you use this expression.

  • <+service.description>: The description of the service.

  • <+service.tags>: The tags on the service. To reference a specific tag use <+service.tags.TAG_KEY>.

  • <+service.identifier>: The identifier of the service.

  • <+service.type>: Resolves to stage service type, such as Kubernetes.

  • <+service.gitOpsEnabled>: Resolves to a Boolean value to indicate whether the GitOps option is enabled (true) or not (false).

Custom service-level variables

You can define custom variables in your environment and service definitions, and you can use expressions to reference those custom variables.

Currently, there are two versions of services and environments, v1 and v2. Services and environments v1 are being replaced by services and environments v2.

To reference custom v2 service-level variables, use the expression syntax <+serviceVariables.VARIABLE_NAME>.

To reference custom v1 service-level variables, use the expression syntax <+serviceConfig.serviceDefinition.spec.variables.VARIABLE_NAME>.
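
For example, for a v2 service variable with the hypothetical name connectionString, a Shell Script step could reference it like this:

echo <+serviceVariables.connectionString>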

Override service variables

To override a service variable during the execution of a step group, use <+serviceVariableOverrides.VARIABLE_NAME>. This provides significant flexibility and control over your pipelines.

Service artifacts expressions

You can use artifact expressions if you have selected an artifact in the service definition of a service you are deploying. If you have not selected an artifact, or your artifact is configured as runtime input (<+input>), you must select an artifact at pipeline runtime.

For more information and artifact expression usage examples, go to CD artifact sources.

Example: Kubernetes artifacts expressions

Here are example values for common artifact expressions for a Kubernetes deployment of a Docker image on Docker Hub:

<+artifacts.primary.tag>                     # Example value: stable
<+artifacts.primary.image>                   # Example value: index.docker.io/library/nginx:stable
<+artifacts.primary.imagePath>               # Example value: library/nginx
<+artifacts.primary.imagePullSecret>         # Example value: ****
<+artifacts.primary.dockerConfigJsonSecret>  # Example value: ****
<+artifacts.primary.type>                    # Example value: DockerRegistry
<+artifacts.primary.connectorRef>            # Example value: DockerHub

You can also have rollback artifacts and sidecar artifacts.

For a detailed example, go to Add container images as artifacts for Kubernetes deployments and Add a Kubernetes sidecar container.

Primary artifact names and paths

Use <+artifacts.primary.image> or <+artifacts.primary.imagePath> in your values YAML file when you want to deploy an artifact you have added to the Artifacts section of a CD stage service definition.

  • <+artifacts.primary.image>: The full location path to the Docker image, such as docker.io/bitnami/nginx:1.22.0-debian-11-r0.
    • For non-containerized artifacts, use <+artifacts.primary.path> instead.
    • To get the image name only, use <+artifacts.primary.imagePath>.
  • <+artifacts.primary.imagePath>: The image name, such as nginx. To get the entire image location path use <+artifacts.primary.image>.
  • <+artifacts.primary.path>: The full path to the non-containerized artifact. This expression is used in non-containerized deployments.
  • <+artifacts.primary.filePath>: The file name of the non-containerized artifact. This expression is used in non-containerized deployments, such as a .zip file in AWS S3.

For more information, go to Harness Kubernetes services and Example Kubernetes Manifests using Go Templating.

Primary artifact ID, tags, and labels

Use these expressions to get artifact identifiers, tags, and labels.

  • <+artifacts.primary.identifier>: The Id of the Primary artifact added in a Service's Artifacts section.

  • <+artifacts.primary.tag>: The tags on the pushed, pulled, or deployed artifact, such as AMI tags. For example, if you deployed the Docker image nginx:stable-perl, the <+artifacts.primary.tag> is stable-perl. This expression has no relationship to Harness tags.

  • <+<+artifacts.primary.label>.get("")>: This expression uses the get() method to extract Docker labels from a Docker image artifact. Specify the label key in get(). For example <+<+artifacts.primary.label>.get("maintainer")> pulls the maintainer tag, such as maintainer=dev@someproject.org.

Example: Reference artifact labels

You can reference labels in Shell Script steps or elsewhere, for example:

echo <+<+artifacts.primary.label>.get("maintainer")>
echo <+<+artifacts.primary.label>.get("build_date")>
echo <+<+artifacts.primary.label>.get("multi.author")>
echo <+<+artifacts.primary.label>.get("key-value")>
echo <+<+artifacts.primary.label>.get("multi.key.value")>

When you run the pipeline, the expressions resolve to their respective label values in the execution logs.

Primary artifact repo type and connector

  • <+artifacts.primary.type>: The type of repository used to add this artifact in the service's Artifacts section. For example, Docker Hub, ECR, or GCR.
  • <+artifacts.primary.connectorRef>: The ID of the Harness connector used to connect to the artifact repository.

Primary artifact metadata

  • <+artifacts.primary.metadata.SHA> or <+artifacts.primary.metadata.SHAV2>: Digest/SHA256 hash of the Docker image.

    Since Docker image manifest API supports two schema versions, schemaVersion1 and schemaVersion2, there could be SHA values corresponding to each version. For the SHA value of schemaVersion1, use <+artifacts.primary.metadata.SHA>. For the SHA value of schemaVersion2, use <+artifacts.primary.metadata.SHAV2>.

  • <+artifact.metadata.fileName> and <+artifact.metadata.url>: The artifact metadata file name and metadata file URL. Not applicable to all artifact types. If populated, you can find these values in the execution details for the Service step, under the Output tab. For more information, go to CD artifact sources.

Artifacts with dockercfg or dockerconfigjson

  • <+artifacts.primary.imagePullSecret>: If your Kubernetes cluster doesn't have permission to access a private Docker registry, the values.yaml or manifest file in the service definition's Manifests section must use the dockercfg parameter. Then, if you add the Docker image in the service definition's Artifacts section, you can reference it with dockercfg: <+artifacts.primary.imagePullSecret>.
  • <+artifacts.primary.dockerConfigJsonSecret>: If your Kubernetes cluster doesn't have permission to access a private Docker registry, the values.yaml or manifest files in the service definition's Manifests section must use the dockerconfigjson parameter. Then, if you add the Docker image in the service definition's Artifacts section, you can reference it with dockerconfigjson: <+artifact.dockerConfigJsonSecret>.

For more information and examples, go to Pull an Image from a Private Registry for Kubernetes and Harness Kubernetes services.

Rollback artifacts

You can use the syntax <+rollbackArtifact.ARTIFACT_DEFINITION_ID> to pull rollback artifact information. For example, use <+rollbackArtifact.metadata.image> to pull the metadata of the artifact image used in the last successful deployment.

Harness pulls rollback artifact information from the last successful deployment. If there's no previous successful deployment, then rollback artifact expressions resolve to null.

Sidecar artifacts

Sidecar artifact expressions include:

  • <+artifacts.sidecars.SIDECAR_IDENTIFIER.imagePath>
  • <+artifacts.sidecars.SIDECAR_IDENTIFIER.image>
  • <+artifacts.sidecars.SIDECAR_IDENTIFIER.type>
  • <+artifacts.sidecars.SIDECAR_IDENTIFIER.tag>
  • <+artifacts.sidecars.SIDECAR_IDENTIFIER.connectorRef>

Replace SIDECAR_IDENTIFIER with the Sidecar Identifier/ID assigned when you added the artifact to Harness.

Service config files expressions

You can use these expressions to reference files added in a service's Config Files section.

  • <+configFile.getAsString("CONFIG_FILE_ID")>: Get config file contents as plain text.
  • <+configFile.getAsBase64("CONFIG_FILE_ID")>: Get config file contents with Base64-encoding.
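
For example, assuming a config file with the hypothetical ID appConfig, a Shell Script step could print its plain-text contents or decode the Base64-encoded contents to a file:

# Print the plain-text contents of the config file
echo "<+configFile.getAsString("appConfig")>"

# Decode the Base64-encoded contents and write them to a file
echo "<+configFile.getAsBase64("appConfig")>" | base64 -d > /tmp/appConfig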

For more information, go to Use config files in your deployments.

Stage expressions

The following expressions reference information for a pipeline stage.

  • <+stage.name>: The name of the stage. The resolved value depends on the context where you use the expression.

  • <+stage.description>: The description of the stage.

  • <+stage.tags>: The tags on the stage. To reference a specific tag, use <+stage.tags.TAG_NAME>. To reference tags from a stage outside the stage where you use the expression, use <+pipeline.stages.STAGE_ID.tags.TAG_NAME>.

  • <+stage.identifier>: The identifier of the stage.

  • <+stage.output.hosts>: Lists all of the target hosts when deploying to multiple hosts.

    When you are deploying to multiple hosts, such as with an SSH, WinRM, or deployment template stage, you can run the same step on all of the target hosts. To run the step on all hosts, use a repeat looping strategy and identify all the hosts for the stage as the target. For more information and examples go to Deployment instance expressions and Secure Shell (SSH) deployments.

  • <+stage.executionUrl>: The execution URL of the stage. This is the same URL you see in your browser when you are viewing the pipeline execution. To get the execution URL for a specific stage in a pipeline, use <+pipeline.stages.STAGE_ID.executionUrl>.

  • <+stage.delegateSelectors>: The stage-level delegate selectors selected via runtime input.

Custom stage variables

For information about custom stage variables, go to Define variables.

Status expressions

Pipeline, stage, and step status values are a Java enum. You can see the list of values in the Status filter on the Executions, Builds, or Deployments page:

You can use any status value in a JEXL condition. For example, <+pipeline.stages.stage1.status> == "FAILED".

  • <+pipeline.stages.STAGE_ID.status>: The status of a stage. You must use the expression after the target stage has executed.
  • <+pipeline.stages.STAGE_ID.spec.execution.steps.STEP_ID.status>: The status of a step. You must use the expression after the target step has executed.
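
For example, here is a trimmed, hypothetical sketch (stage and step identifiers are placeholders) of a step whose conditional execution uses a status expression so that it runs only when an earlier stage failed:

- step:
    type: ShellScript
    name: notify_on_failure
    identifier: notify_on_failure
    when:
      stageStatus: All
      condition: <+pipeline.stages.stage1.status> == "FAILED"
    spec:
      shell: Bash
      onDelegate: true
      source:
        type: Inline
        spec:
          script: echo "stage1 failed"
      environmentVariables: []
      outputVariables: []
    timeout: 10m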

Looping strategy statuses

The statuses of the nodes (stages/steps) using a matrix/repeat looping strategy can be RUNNING, FAILED, or SUCCESS.

Harness provides the following expressions to retrieve the current status of the node (stage/step) using a looping strategy. The expressions are available in pipelines during execution and rollback.

  • <+strategy.currentStatus>: The current status of the looping strategy for the node with maximum depth.
    • When this expression is used in a step, Harness will resolve it to the looping strategy current status of the first parent node (stage/step) of the step.
    • In cases where both the step and the stage have the looping strategy configured, the expression will resolve to the looping strategy status of the current step.
    • If the step (or step group) does not have the looping strategy configured, the expression will instead resolve to the looping strategy status of the current stage.
  • <+strategy.node.STRATEGY_NODE_IDENTIFIER.currentStatus>: The current status of the looping strategy for the node with a specific stage/step identifier, STRATEGY_NODE_IDENTIFIER. For example, echo <+strategy.node.cs1.currentStatus>.
  • <+<+strategy.node>.get("STRATEGY_NODE_IDENTIFIER").currentStatus>: The current status of the looping strategy for the node with a specific stage/step identifier, STRATEGY_NODE_IDENTIFIER. For example, echo <+<+strategy.node>.get("ShellScript_1").currentStatus>.

Status, currentStatus, and liveStatus

Pipeline and stage status expressions can reference the status, currentStatus, or liveStatus. These variables track different statuses, and they can resolve differently depending on the success or failure of specific steps or stages.

status refers to the running status of a single node. currentStatus and liveStatus provide the combined statuses of all running steps within a pipeline or stage. The difference between status types is based on how they handle step failures and whether the status of steps running in a matrix or looping strategy is included in the overall status calculation.

  • Status: status expressions (such as <+pipeline.stages.STAGE_ID.status>) refer to the current running status of a single node, such as a pipeline, stage, or step. It provides information about the state of that specific node without considering the status of any parent, child, or sibling nodes. It reports the direct status of the target node.

  • Current Status: currentStatus expressions (such as <+pipeline.stages.STAGE_ID.currentStatus>) represent the combined status of all the running steps within a pipeline or stage, except steps generated from matrix/repeat looping strategies.

    currentStatus uses the statuses of all non-matrix steps to determine the overall status. If any non-matrix step fails, regardless of the progress or status of other steps, the currentStatus of both the pipeline and the stage resolves as Failed. This means that the failure of one step can affect the status of the entire pipeline or stage.

    info

    currentStatus ignores steps generated from matrix/repeat looping strategies. This means that if a pipeline includes a step generated from a matrix, and the matrix step fails while all other steps succeed, then the currentStatus is Success because currentStatus ignores the matrix step.

  • Live Status: Like currentStatus, liveStatus expressions (such as <+pipeline.stages.stage1.liveStatus>) also provide the combined status of all the running steps within a pipeline or stage; however, they also consider the status of steps generated from matrix/repeat looping strategies.

    liveStatus considers the statuses of all steps to determine the overall status. If any step fails, the liveStatus of both the pipeline and the stage resolves as Failed, regardless of the individual status of running or completed steps.

    info

    liveStatus includes steps generated by matrix/repeat looping strategies. This means that if a pipeline includes a step generated from a matrix, and the matrix step fails while all other steps succeed, then the liveStatus is Failed because liveStatus includes the matrix step.

Example: Status determination

The following example describes an ongoing execution with three steps named step1, step2, and step3 within a stage called stage1.

step3 is executed using a matrix strategy, specifically with two values: "john" and "doe".

Assume this pipeline is running and the stage, steps, and matrix instances of step3 have the following statuses:

  • stage1: Running
  • step1: Success
  • step2: Success
  • step3 (matrix): Running
    • "john": Failed
    • "doe": Success

In this example, the status values for stage1 are as follows:

  • The status of stage1 is Running. This is taken directly from the execution status of stage1.
  • The currentStatus of stage1 is Success. This is determined from the statuses of all steps in the stage, excluding the matrix steps generated by step3.
  • The liveStatus of stage1 is Failed. This is determined by considering the statuses of all steps in the stage, including the matrix steps generated by step3.

Step expressions

The following expressions are for steps in pipeline stages.

  • <+step.name>: The step name. The resolved value is relative to the context where you use the expression.
  • <+step.identifier>: The step identifier.
  • <+step.executionUrl>: The execution URL of the step. This is the same URL you see in your browser when you are viewing the pipeline execution. To get the execution URL for a specific step in a pipeline, use <+pipeline.stages.STAGE_ID.spec.execution.steps.STEP_ID.executionUrl>.
  • <+steps.STEP_ID.retryCount> or <+execution.steps.STEP_ID.retryCount>: When you set a failure strategy to Retry Step, you can specify the retry count for a step or all steps in the stage. The retryCount expressions resolve to the total number of times a step was retried.
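
For example, a Build (CI) Run step could echo these values at runtime. This is a minimal sketch that assumes an infrastructure where the Run step doesn't require a container image; the names and identifiers are hypothetical:

- step:
    type: Run
    name: Print step details
    identifier: print_step_details
    spec:
      shell: Sh
      command: |-
        echo "Step name: <+step.name>"
        echo "Step identifier: <+step.identifier>"
        echo "Step execution URL: <+step.executionUrl>"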

Strategy expressions

You can use Harness expressions to retrieve the current execution status or identifiers for iterations of a matrix or repeat looping strategy.
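
For example, a step with a matrix strategy could print the iteration counters and the current axis value. This is a minimal sketch; the step and the osname axis are hypothetical:

- step:
    type: ShellScript
    name: Print iteration
    identifier: print_iteration
    strategy:
      matrix:
        osname:
          - ubuntu
          - alpine
    spec:
      shell: Bash
      onDelegate: true
      source:
        type: Inline
        spec:
          script: |-
            echo "Iteration <+strategy.iteration> of <+strategy.iterations>"
            echo "Current osname value: <+matrix.osname>"
      environmentVariables: []
      outputVariables: []
    timeout: 10m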

Trigger expressions

  • General Git trigger and payload expressions: Harness includes built-in expressions for referencing trigger details such as the <+trigger.type> or <+trigger.event>. For a complete list, go to the Triggers Reference.

  • <+trigger.artifact.build>: Resolves to the artifact version (such as a Docker Tag) that initiated an On New Artifact Trigger.

    When you add a new artifact trigger, you select the artifact to listen on, and its Tag setting is automatically populated with <+trigger.artifact.build>.

    Adding a new tag to the artifact fires the trigger and executes the pipeline. Harness resolves <+trigger.artifact.build> to the tag that fired the trigger, so the new tag is used when pulling the artifact and the new artifact version is the one deployed.

  • <+trigger.artifact.source.connectorRef>: Resolves to the Harness connector Id for the connector used to monitor the artifact registry that fired the trigger.

  • <+trigger.artifact.source.imagePath>: Resolves to the image path for the artifact that fired the trigger.

  • <+pipeline.triggeredBy.name>: The name of the user or the trigger name if the pipeline is triggered using a webhook.

    • For more information, go to Trigger Pipelines using Git Events.
    • If a user name is not present in the event payload, the <+pipeline.triggeredBy.name> expression will resolve as empty. For example, in the SaaS edition of Bitbucket, a user name is not present.
  • <+pipeline.triggeredBy.email>: The email of the user who triggered the pipeline. This returns null if the pipeline is triggered using a webhook. For more information, go to Trigger How-tos.

  • <+pipeline.triggerType>: The type of trigger. Similar to <+trigger.type>.

Here are the possible <+pipeline.triggerType> and <+trigger.type> values.

<+pipeline.triggerType> | <+trigger.type> | Description
ARTIFACT | Artifact | New artifact trigger. For example, new Docker Hub image tag
SCHEDULER_CRON | Scheduled | Scheduled Cron trigger
MANUAL | null | Pipeline triggered using the RUN button in the user interface
WEBHOOK_CUSTOM | Custom | Custom webhook trigger
WEBHOOK | Webhook | SCM webhook trigger. For example, GitHub pull request
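
For example, you could use one of these values in a step's conditional execution so that the step runs only for manual executions. This is a minimal sketch; the step and its identifier are hypothetical:

- step:
    type: ShellScript
    name: Manual runs only
    identifier: manual_runs_only
    when:
      stageStatus: Success
      condition: '<+pipeline.triggerType> == "MANUAL"'
    spec:
      shell: Bash
      onDelegate: true
      source:
        type: Inline
        spec:
          script: echo "This execution was started manually."
      environmentVariables: []
      outputVariables: []
    timeout: 10m
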
Triggers and RBAC

Harness RBAC is applied to triggers in Harness, but it is not applied to the repositories used by the triggers.

For example, you might have an On New Artifact Trigger that is started when a new artifact is added to the artifact repo. Or a Webhook Trigger that is started when a PR is merged.

You can select who can create and use these triggers within Harness. However, you must use your repository's RBAC to control who can add the artifacts or initiate events that start the Harness trigger.

Troubleshooting expressions

The following sections describe some common issues or troubleshooting scenarios for expressions. For more troubleshooting information, go to the Harness Knowledge Base.

Debugging expressions

In addition to getting inputs and outputs from execution details, you can debug expressions with Compiled Mode in the pipeline's Variables list.

Open Variables in the Pipeline Studio, and enable View in Compiled Mode.

With Compiled Mode enabled, all variables in the panel are compiled to display their values based on the most recent execution. You can use the dropdown menu to use inputs from a different run.

Harness highlights expressions that are incorrect or can't be evaluated using the selected execution data. Once you identify the expressions that aren't evaluating correctly, you can disable Compiled Mode and correct the expressions or variables as needed.

To test expressions that aren't stored in pipeline variables, such as expressions in scripts, you can create a pipeline variable for debugging purposes, set the expression as the variable's value, and then use Compiled Mode to debug it.
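
For example, you could add a temporary pipeline variable that holds the expression you want to inspect. This is a minimal sketch; the variable name and the referenced output variable are hypothetical:

  variables:
    - name: debug_expression
      type: String
      description: Temporary variable for checking an expression in Compiled Mode
      value: <+pipeline.stages.build.spec.execution.steps.run_tests.output.outputVariables.TEST_RESULT>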

String expressions can't use greater than or less than

Greater than and less than operators aren't supported for string type expressions.

String expressions only support equal to and not equal to operators.
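
For example, equality checks like the following work for strings (the variable name is hypothetical):

<+pipeline.variables.environment> == "prod"
<+pipeline.variables.environment> != "prod"

A relational comparison such as <+pipeline.variables.environment> > "prod" isn't supported for string values.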

Don't embed scripts in expressions

You can use expressions in scripts, and you can manipulate expressions with methods and operators; however, you can't write scripts within expressions.

For example, the following is not valid:

if ((x * 2) == 5) { <+pipeline.name = abc>; } else { <+pipeline.name = def>; }

If your script requires this type of value manipulation, assign the expression to a script variable, and then manipulate that variable in your script. For example, in Bash:

NAME="<+pipeline.name>"

if (( x * 2 == 5 )); then NAME="abc"; else NAME="def"; fi

Limit or remove expressions in comments

Harness attempts to resolve all expressions, including expressions in script comments.

Harness recommends removing unneeded expressions from comments so they don't cause unexpected failures or add build time through unnecessary processing.

Expressions aren't valid in comments in Values YAML and Kustomize patches

Regardless of their validity, you can't use Harness expressions in comments in:

  • Values YAML files (values.yaml) in Kubernetes, Helm chart, or Native Helm deployments.
  • Kustomize patches files.

For example, the following values.yaml file won't process correctly because it has an expression in the comment.

name: test
replicas: 4
image: <+artifacts.primary.image>
dockercfg: <+artifacts.primary.imagePullSecret>
createNamespace: true
namespace: <+infra.namespace>
# using expression <+infra.namespace>
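
Removing the expression from the comment (or removing the comment entirely) lets the same file process correctly:

name: test
replicas: 4
image: <+artifacts.primary.image>
dockercfg: <+artifacts.primary.imagePullSecret>
createNamespace: true
namespace: <+infra.namespace>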

CI stage initialization fails with a "null value" error or timeout

If a Build (CI) stage fails at initialization with a "null value" error or timeout, this can indicate that an expression was called before its value could be resolved or that the expression references a nonexistent value. For more information, go to Initialize step fails with a "null value" error or timeout.

Default values can't start with an asterisk

Pipelines fail if a variable's default value starts with *. To avoid this, wrap the asterisk or value in quotes, such as "*".
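
For example, if a runtime input should default to an asterisk, quote the default value. This is a minimal sketch; the variable name is hypothetical and it assumes the <+input>.default() runtime input syntax:

  variables:
    - name: file_pattern
      type: String
      description: File pattern with a quoted default so the pipeline doesn't fail
      value: <+input>.default("*")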

Migrate FirstGen expressions to NextGen

Use this information if you need to migrate expressions from Harness FirstGen to Harness NextGen.

warning

All FirstGen expressions use the delimiter ${...}, such as ${approvedBy.name}.

In NextGen, the delimiter is <+...>, such as <+approvedBy.name>.

For more information about migrating to NextGen, go to:

AMI expressions
FirstGen | NextGen
ami.newAsgName | Rolling: pipeline.stages.STAGE_ID.spec.execution.steps.AsgRollingDeployStep.output.asg.autoScalingGroupName. Blue Green: pipeline.stages.STAGE_ID.spec.execution.steps.AsgRollingDeployStep.output.prodAsg.autoScalingGroupName
ami.oldAsgName | Rolling: pipeline.stages.STAGE_ID.spec.execution.steps.AsgRollingDeployStep.output.asg.autoScalingGroupName. Blue Green: pipeline.stages.STAGE_ID.spec.execution.steps.AsgRollingDeployStep.output.stageAsg.autoScalingGroupName
Approvals expressions
FirstGen | NextGen
approvedBy.name | pipeline.stages.STAGE_ID.spec.execution.steps.HarnessApproval.output.approvalActivities[0].user.name
approvedBy.email | pipeline.stages.STAGE_ID.spec.execution.steps.HarnessApproval.output.approvalActivities[0].user.email
Artifacts expressions
FirstGen | NextGen
artifact.metadata.image | artifact.image
artifact.source.dockerconfig | artifact.imagePullSecret
artifact.metadata.tag, artifact.buildNo, artifact.revision | artifact.tag
artifact.url | artifact.metadata.url
artifact.metadata.image | artifact.image. Path for sidecar artifact: artifacts.sidecars.sidecarId.PROPERTY
artifact.metadata.KEY | artifact.metadata.KEY
artifact.key | artifact.metadata.key
artifact.source.registryUrl | Depends on artifact source type. Check the output of the Service step.
artifact.source.repositoryName | Depends on artifact source type. Check the output of the Service step.
artifact.metadata.artifactId | artifact.metadata.artifactId
artifact.bucketName | artifact.metadata.bucketName
artifact.artifactPath | artifact.metadata.artifactPath
artifact.metadata.repositoryName | artifact.metadata.repositoryName
artifact.metadata.harness | artifact.metadata.harness
artifact.metadata.groupId | artifact.metadata.groupId
artifact.fileName | artifact.metadata.fileName
artifact.metadata.getSHA() | artifact.metadata.SHA
Application | Application (account, org, project)
app.name | account.name, account.companyName, org.name, project.name, project.identifier
app.description | project.description, org.description
app.accountId | account.identifier
app.defaults.[variable_name] | variable.[VARIABLE_ID]
artifact.displayName, artifact.label.label-key, artifact.buildFullDisplayName, artifact.label.get("[label-key]") |
artifact.serviceIds | Not applicable in NextGen
artifact.description | Not applicable in NextGen
artifact.source.username | Not applicable in NextGen
CloudFormation expressions
FirstGen | NextGen
cloudformation.OUTPUT_NAME | pipeline.stages.STAGE_ID.spec.execution.steps.CreateStack.output.OUTPUT_NAME
cloudformation.region | pipeline.stages.stage1.spec.execution.steps.CreateStack.output.region
Config file and config path expressions

Harness NextGen has expressions for config files. These expressions, listed below, have no equivalent FirstGen expressions. This is not an exhaustive list of all NextGen expressions or of NextGen expressions without a FirstGen equivalent.

  • configFile.getAsString("cf_file")
  • configFile.getAsBase64("cf_file")
  • configFile.getAsString("cf_secret")
  • configFile.getAsBase64("cf_secret")
  • fileStore.getAsString("/folder1/configFileProject")
  • fileStore.getAsBase64("account:/folder1/folder2/ConfigFile")

For information about the replacement for the FirstGen KUBE_CONFIG_PATH expression infra.kubernetes.infraId, go to HARNESS_KUBE_CONFIG_PATH.

Email step expressions
FirstGen | NextGen
toAddress | pipeline.stages.STAGE_ID.spec.execution.steps.STEP_ID.spec.to
ccAddress | pipeline.stages.STAGE_ID.spec.execution.steps.STEP_ID.spec.cc
subject | pipeline.stages.STAGE_ID.spec.execution.steps.STEP_ID.spec.subject
body | pipeline.stages.STAGE_ID.spec.execution.steps.STEP_ID.spec.body
Environment expressions
FirstGen | NextGen
env.description | FQN: stages.STAGE_ID.spec.infrastructure.environment.name (Alias: env.description), FQN: stages.STAGE_ID.spec.infrastructure.environment.description
env.environmentType | env.type
env.name | env.name
env.accountId | account.identifier
env.keywords |
environmentVariable.variable_name | env.variables.var_name
HTTP step expressions
FirstGen | NextGen
httpResponseCode | httpResponseCode
httpResponseBody | httpResponseBody
httpMethod | httpMethod
httpUrl | httpUrl
httpResponseMethod | pipeline.stages.HTTP.spec.execution.steps.STEP_ID.output.httpMethod
httpResponseCode | pipeline.stages.HTTP.spec.execution.steps.STEP_ID.output.httpResponseCode
httpResponseBody | pipeline.stages.HTTP.spec.execution.steps.STEP_ID.output.httpResponseBody
Infrastructure expressions
FirstGen | NextGen
infra.kubernetes.namespace | infra.namespace. FQN: stages.STAGE_ID.spec.infrastructure.infrastructureDefinition.spec.namespace
infra.releaseName |
infra.name | infra.name
infra.cloudProvider.name | infra.connectorRef
infra.route |
infra.tempRoute |
Instance and host expressions

All FirstGen host expressions are deprecated. Host properties are available using instance expressions.

FirstGen | NextGen
instance.name | instance.name
instance.hostName | instance.hostName
instance.host.hostName | instance.host.hostName
instance.host.ip | instance.host.privateIp, instance.host.publicIp (privateIp and publicIp are supported for Azure, AWS, and SSH/WinRM deployments)
instance.EcsContainerDetails.completeDockerId, instance.EcsContainerDetails.dockerId | pipeline.stages.STAGE_IDENTIFIER.spec.execution.steps.STEP_IDENTIFIER.steps.STEP_IDENTIFIER.deploymentInfoOutcome.serverInstanceInfoList[x].containers[x].runtimeId
instance.ecsContainerDetails.taskId, instance.ecsContainerDetails.taskArn | pipeline.stages.STAGE_IDENTIFIER.spec.execution.steps.STEP_IDENTIFIER.steps.STEP_IDENTIFIER.deploymentInfoOutcome.serverInstanceInfoList[x].taskArn
ECSServiceSetup.serviceName | service.name (This expression works only if you use it in the service definition manifest as well), pipeline.stages.ecs.spec.execution.steps.STEP_ID.output.serviceName
ECSServiceSetup.clusterName | infra.cluster
instance.dockerId | TBD
[step__name].serviceName | Not applicable in NextGen
instance.host.publicDns | Not applicable in NextGen

Deprecated host expressions (In NextGen, host properties are available using instance expressions):

  • host.name
  • host.ip
  • host.publicDns
  • host.ec2Instance.instanceId
  • host.ec2Instance.instanceType
  • host.ec2Instance.imageId
  • host.ec2Instance.architecture
  • host.ec2Instance.kernelId
  • host.ec2Instance.keyName
  • host.ec2Instance.privateDnsName
  • host.ec2Instance.privateIpAddress
  • host.ec2Instance.publicDnsName
  • host.ec2Instance.publicIpAddress
  • host.ec2Instance.subnetId
  • host.ec2Instance.vpcId
  • host.hostName
Pipeline variables
FirstGen | NextGen
pipeline.name | pipeline.name
deploymentUrl | pipeline.executionUrl
deploymentTriggeredBy | pipeline.triggeredBy.name, pipeline.triggeredBy.email
Rollback artifact variables
FirstGen | NextGen
rollbackArtifact.buildNo | artifact.tag (rollback), artifact.image (rollback), artifact.imagePath (rollback), artifact.type (rollback), artifact.connectorRef. For sidecar artifact: rollbackArtifact.sidecars.sidecar_Id[property]
rollbackArtifact.metadata.image | rollbackArtifact.image
rollbackArtifact.metadata.tag | rollbackArtifact.tag
rollbackArtifact.buildFullDisplayName, rollbackArtifact.ArtifactPath, rollbackArtifact.description, rollbackArtifact.displayName, rollbackArtifact.fileName, rollbackArtifact.key, rollbackArtifact.source.registryUrl |
rollbackArtifact.url | Not applicable in NextGen
Service expressions
FirstGen | NextGen
service.name | service.name
service.description | service.description
serviceVariable.VAR_NAME | serviceVariables.VAR_NAME
service.manifest | manifest.name
service.manifest.repoRoot | manifest.repoName
Tanzu application services expressions
FirstGen | NextGen
pcf.finalRoutes | pcf.finalRoutes
pcf.oldAppRoutes | pcf.oldAppRoutes
pcf.oldAppRoutes[0] | pcf.oldAppRoutes[0]
pcf.tempRoutes | pcf.tempRoutes
pcf.newAppRoutes | pcf.newAppRoutes
pcf.newAppRoutes[0] | pcf.newAppRoutes[0]
pcf.newAppName | pcf.newAppName
pcf.newAppGuid, host.pcfElement.applicationId | pcf.newAppGuid
pcf.oldAppName | pcf.oldAppName
pcf.activeAppName | pcf.activeAppName
pcf.inActiveAppName | pcf.inActiveAppName
pcf.oldAppGuid | pcf.oldAppGuid
infra.pcf.cloudProvider.name | infra.connector.name
infra.pcf.organization | infra.organization
infra.pcf.space | infra.space
host.pcfElement.displayName | Basic or Canary deployment: pcf.newAppName. Blue Green deployment: pcf.inActiveAppName
host.pcfElement.instanceIndex |
Terraform and Helm expressions
FirstGen | NextGen
terraform.clusterName | STEP_ID.output.OUTPUT_NAME. For example: pipeline.stages.stage1.spec.execution.steps.TerraformApply.output.clusterName
terraformPlan.jsonFilePath(), terraformPlan.destroy.jsonFilePath() | execution.steps.TERRAFORM_PLAN_STEP_ID.plan.jsonFilePath. For example: execution.steps.terraformPlan.plan.jsonFilePath
terraformApply.tfplanHumanReadable, terraformDestroy.tfplanHumanReadable | execution.steps.TERRAFORM_PLAN_STEP_ID.plan.humanReadableFilePath. For example: execution.steps.terraformPlan.plan.humanReadableFilePath
terraform.OUTPUT_NAME | pipeline.stages.STAGE_ID.spec.execution.steps.TerraformApply.output.OUTPUT_NAME
terraformApply.tfplan, terraformDestroy.tfplan, terraformApply.add, terraformApply.change, terraformApply.destroy, terraformDestroy.add, terraformDestroy.change, terraformDestroy.destroy |
infra.helm.releaseName.service.name-env.name-infra.helm.shortId | pipeline.stages.STAGE_ID.spec.infrastructure.infrastructureDefinition.spec.output.releaseName, pipeline.stages.STAGE_ID.spec.execution.steps.rolloutDeployment.deploymentInfoOutcome.serverInstanceInfoList[2].releaseName
helmChart.description | service.description
helmChart.displayName | pipeline.stages.STAGE_ID.spec.serviceConfig.output.manifestResults.SERVICE_ID.chartName
helmChart.name | pipeline.stages.STAGE_ID.spec.execution.steps.rolloutDeployment.output.releaseName
helmChart.version | pipeline.stages.STAGE_ID.spec.serviceConfig.output.manifestResults.SERVICE_ID.helmVersion
infra.helm.shortId | Not applicable in NextGen
helmChart.metadata.basePath | Not applicable in NextGen
helmChart.metadata.bucketName | Not applicable in NextGen
helmChart.metadata.repositoryName | Not applicable in NextGen
helmChart.metadata.url | Not applicable in NextGen
Nested expressions

The way you declare nested expressions has changed in NextGen.

For example, this is a nested FirstGen expression: ${secrets.getValue("terraform-aws-${env_name}-id")}.

To achieve this same result in NextGen, you must declare each expression with separate expression delimiters and concatenate them together, such as:

<+secrets.getValue("test_secret_" + <+pipeline.variables.envVar>)>
<+<+secrets.getValue("test_secret")>.concat(<+pipeline.variables.envVar>)>
Workflow expressions
FirstGen | NextGen
workflow.releaseNo | stage.identifier
workflow.displayName | stage.name, pipeline.name
workflow.description | stage.description, pipeline.description
workflow.pipelineDeploymentUuid | pipeline.executionId, pipeline.sequenceId
workflow.startTs | pipeline.startTs
workflow.variables.VAR_NAME | pipeline.variables.VAR_NAME, stage.variables.VAR_NAME
deploymentUrl | pipeline.executionUrl
deploymentTriggeredBy | pipeline.triggeredBy.name, pipeline.triggeredBy.email
currentStep.name | step.name
timestampId | In FirstGen, ${timestampId} is the time when the constant is set on the target host. NextGen doesn't use setup variables, because Harness has an internal step that creates a temp dir for the execution. Harness creates a working directory in the Command Init unit on this %USERPROFILE% location.
context.published_name.var_name |
workflow.pipelineResumeUuid | Not applicable in NextGen
workflow.lastGoodReleaseNo | Not applicable in NextGen
workflow.lastGoodDeploymentDisplayName | Not applicable in NextGen
regex.extract("v[0-9]+.[0-9]+", artifact.fileName) | Not applicable in NextGen
currentStep.type | Not applicable in NextGen