Tanzu Application Services deployments overview

This topic shows you how to deploy a publicly available application to your Tanzu Application Service (TAS, formerly PCF) space by using any deployment strategy in Harness.

Objectives

You'll learn how to:

  • Install and launch a Harness Delegate in your target cluster.
  • Connect Harness with your TAS account.
  • Connect Harness with a public image hosted on Artifactory.
  • Specify the manifest to use for the application.
  • Set up a TAS pipeline in Harness to deploy the application.

Important notes

  • For TAS deployments, Harness supports the following artifact sources. You connect Harness to these registries by using your registry account credentials.
    • Artifactory
    • Nexus
    • Docker Registry
    • Amazon S3
    • Google Container Registry (GCR)
    • Amazon Elastic Container Registry (ECR)
    • Azure Container Registry (ACR)
    • Google Artifact Registry (GAR)
    • Google Cloud Storage (GCS)
    • GitHub Package Registry
    • Azure Artifacts
    • Jenkins
  • Before you create a TAS pipeline in Harness, make sure that you have the Continuous Delivery module in your Harness account. For more information, go to create organizations and projects.
  • Your Harness Delegate profile must have CF CLI v7, autoscaler, and Create-Service-Push plugins added to it.
  • For the test connection in the connector, Harness uses the CF SDK to get the list of organizations. If the credentials are correct, you get a list of organizations. Otherwise, the connection fails. For more information, see the Cloud Foundry documentation.

Connect to a TAS provider

You can connect Harness to a TAS space by adding a TAS connector. Perform the following steps to add a TAS connector.

  1. Open a Harness project and select the Deployments module.

  2. In Project Setup, select Connectors, then select New Connector.

  3. In Cloud Providers, select Tanzu Application Service. The TAS connector settings appear.

  4. Enter a connector name and select Continue.

  5. Enter the TAS Endpoint URL. For example, https://api.system.tas-mycompany.com.

  6. In Authentication, select one of the following options.

    1. Plaintext - Enter the username and password. For password, you can either create a new secret or use an existing one.
    2. Encrypted - Enter the username and password. You can create a new secret for your username and password or use existing ones.
  7. Select Continue.

  8. In Connect to the provider, select Connect through a Harness Delegate, and then select Continue. We don't recommend using the Connect through Harness Platform option here because you'll need a delegate later for connecting to your TAS environment. Typically, the Connect through Harness Platform option is a quick way to make connections without having to use delegates.

    Expand the sections below to learn more about installing delegates.

Use the delegate installation wizard
  1. In your Harness project, select Project Setup.
  2. Select Delegates.
  3. Select Install a Delegate.
  4. Follow the delegate installation wizard.


Install a delegate using the terminal

The Harness Delegate is a lightweight worker process that is installed on your infrastructure and communicates only via outbound HTTP/HTTPS to the Harness Platform. This enables the Harness Platform to leverage the delegate to execute the CI/CD and other tasks on your behalf, without any of your secrets leaving your network.

You can install the Harness Delegate on either Docker or Kubernetes.

note

You might need additional permissions to execute commands in delegate scripts and create Harness users.

Install the default Harness Delegate

Create a new delegate token

You can install delegates from the Account, Project, or Org scope. In this example, we'll create a new token in the Account scope.

To create a new delegate token, do the following:

  1. In Harness, select Account Settings, then select Account Resources. The Account Resources page opens.

  2. Select Delegates. The Delegates list page opens.

  3. Select the Tokens tab, then select +New Token. The New Token dialog opens.

  4. Enter a token name, for example firstdeltoken.

  5. Select Apply. Harness generates a new token for you.

  6. Select Copy to copy and store the token in a temporary file.

    You will provide this token as an input parameter in the next installation step. The delegate will use this token to authenticate with the Harness Platform.

Get your Harness account ID

Along with the delegate token, you will also need to provide your Harness accountId as an input parameter during delegate installation. This accountId is present in every Harness URL. For example, in the following URL:

https://app.harness.io/ng/#/account/6_vVHzo9Qeu9fXvj-AcQCb/settings/overview

6_vVHzo9Qeu9fXvj-AcQCb is the accountId.

note

When you install a delegate via the Harness UI, several dependencies in this topic are prefilled for your convenience. This topic explains where to find the required information for CLI-based installation.

For more information, go to View account info and subscribe to downtime alerts.

Prerequisite

Ensure that you have access to a Kubernetes cluster. For the purposes of this tutorial, we will use minikube.

info

Harness supports Kubernetes versions 1.25.16, 1.26.10, and 1.27.8 for delegate installation.

Install minikube

  • On Windows:

    choco install minikube
    info

    For Chocolatey installation instructions, go to Installing Chocolatey in the Chocolatey documentation.

    For additional options to install minikube on Windows, go to minikube start in the minikube documentation.

  • On macOS:

    brew install minikube
    info

    For Homebrew installation instructions, go to Installation in the Homebrew documentation.

Now start minikube with the following config.

minikube start --memory 4g --cpus 4

Validate that you have kubectl access to your cluster.

kubectl get pods -A

Now that you have access to a Kubernetes cluster, you can install the delegate using any of the options below.

Install the Helm chart

As a prerequisite, you must have Helm v3 installed on the machine from which you connect to your Kubernetes cluster.

You can now install the delegate using the delegate Helm chart. First, add the harness-delegate Helm chart repo to your local Helm registry.

helm repo add harness-delegate https://app.harness.io/storage/harness-download/delegate-helm-chart/
helm repo update
helm search repo harness-delegate

We will use the harness-delegate/harness-delegate-ng chart in this tutorial.

NAME                                   CHART VERSION   APP VERSION   DESCRIPTION
harness-delegate/harness-delegate-ng   1.0.8           1.16.0        A Helm chart for deploying harness-delegate

Now we are ready to install the delegate. The following example installs/upgrades firstk8sdel delegate (which is a Kubernetes workload) in the harness-delegate-ng namespace using the harness-delegate/harness-delegate-ng Helm chart.

You can install delegates from the Account, Project, or Org scope. In this example, we'll install a delegate in the Account scope.

To install a delegate, do the following:

  1. In Harness, select Account Settings, then select Account Resources. The Account Resources page opens.

  2. Select Delegates. The Delegates list page opens.

  3. Select New Delegate. The New Delegate dialog opens.

  4. Under Select where you want to install your Delegate, select Kubernetes.

  5. Under Install your Delegate, select Helm Chart.

  6. Copy the helm upgrade command.

    The command uses the default values.yaml file located in the delegate Helm chart GitHub repo. To make persistent changes to one or more values, you can download and update the values.yaml file according to your requirements. Once you have updated the file, you can use it by running the upgrade command below.

    helm upgrade -i firstk8sdel --namespace harness-delegate-ng --create-namespace \
      harness-delegate/harness-delegate-ng \
      -f values.yaml \
      --set delegateName=firstk8sdel \
      --set accountId=PUT_YOUR_HARNESS_ACCOUNTID_HERE \
      --set delegateToken=PUT_YOUR_DELEGATE_TOKEN_HERE \
      --set managerEndpoint=PUT_YOUR_MANAGER_HOST_AND_PORT_HERE \
      --set delegateDockerImage=harness/delegate:yy.mm.verno \
      --set replicas=1 --set upgrader.enabled=true
note

To install a Helm delegate for Harness Self-Managed Enterprise Edition in an air-gapped environment, you must pass your certificate when you add the Helm repo.

helm repo add harness-delegate --ca-file <.PEM_FILE_PATH> <HELM_CHART_URL_FROM_UI>

For more information on requirements for air-gapped environments, go to Install in an air-gapped environment.

  7. Run the command.
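If you prefer to keep these settings in a file instead of passing --set flags, the same overrides can live in values.yaml. A minimal sketch, assuming the key names used by the chart's default values.yaml (verify them against your chart version):

# values.yaml — keys mirror the --set flags shown above
delegateName: firstk8sdel
accountId: PUT_YOUR_HARNESS_ACCOUNTID_HERE
delegateToken: PUT_YOUR_DELEGATE_TOKEN_HERE
managerEndpoint: PUT_YOUR_MANAGER_HOST_AND_PORT_HERE
delegateDockerImage: harness/delegate:yy.mm.verno
replicas: 1
upgrader:
  enabled: true

You can then run the helm upgrade command with only -f values.yaml and drop the corresponding --set flags.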

Deploy using a custom role

During delegate installation, you have the option to deploy using a custom role. To use a custom role, you must edit the delegate YAML file.

Harness supports the following custom roles:

  • cluster-admin
  • cluster-viewer
  • namespace-admin
  • custom cluster roles

To deploy using a custom cluster role, do the following:

  1. Open the delegate YAML file in your text editor.

  2. Add the custom cluster role to the roleRef field in the delegate YAML.

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: harness-delegate-cluster-admin
    subjects:
      - kind: ServiceAccount
        name: default
        namespace: harness-delegate-ng
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    ---

    In this example, the cluster-admin role is defined.

  3. Save the delegate YAML file.

Verify delegate connectivity

Select Continue. After the health checks pass, your delegate is available for you to use. Select Done and verify your new delegate is listed.

Whether you installed the delegate with the Helm chart, the Terraform Helm provider, a Kubernetes manifest, or Docker, the new delegate appears in the delegate list with a connected status.

You can now route communication to external systems in Harness connectors and pipelines by selecting this delegate via a delegate selector.

Delegate selectors do not override service infrastructure connectors. Delegate selectors only determine the delegate that executes the operations of your pipeline.

Troubleshooting

The delegate installer provides troubleshooting information for each installation process. If the delegate cannot be verified, select Troubleshoot for steps you can use to resolve the problem. This section includes the same information.

Harness asks for feedback after the troubleshooting steps. You are asked, Did the delegate come up?

If the steps did not resolve the problem, select No, and use the form to describe the issue. You'll also find links to Harness Support and to Delegate docs.

Use the following steps to troubleshoot your installation of the delegate using Helm.

  1. Verify that Helm is correctly installed:

    Check for Helm:

    helm

    And then check for the installed version of Helm:

    helm version

    If you receive the message Error: rendered manifests contain a resource that already exists..., delete the existing namespace, and retry the Helm upgrade command to deploy the delegate.

    For further instructions on troubleshooting your Helm installation, go to Helm troubleshooting guide.

  2. Check the status of the delegate on your cluster:

    kubectl describe pods -n <NAMESPACE>
  3. If the pod did not start, check the delegate logs:

    kubectl logs -f <DELEGATE_NAME> -n <NAMESPACE>

    If the state of the delegate pod is CrashLoopBackOff, check your allocation of compute resources (CPU and memory) to the cluster. A state of CrashLoopBackOff indicates insufficient Kubernetes cluster resources (see the resource sketch after this list).

  4. If the delegate pod is not healthy, use the kubectl describe command to get more information:

    kubectl describe pod <POD_NAME> -n <NAMESPACE>
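If the describe output shows the pod being evicted or OOMKilled, give the delegate more compute. A minimal sketch using the standard Kubernetes resources block; where exactly this lands in the delegate manifest or Helm values depends on your chart version, so treat the placement as an assumption:

# Standard Kubernetes container resources (placement in the delegate spec is chart-dependent)
resources:
  requests:
    cpu: "1"
    memory: 2048Mi
  limits:
    memory: 4096Mi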


  9. Back in the connector's Set Up Delegates step, select the Connect using Delegates with the following Tags option and enter your delegate name.
  10. Select Save and Continue.
  11. Once the test connection succeeds, select Finish. The connector now appears in the Connectors list.
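For reference, the finished connector can also be expressed in Harness YAML. This is a sketch only; the exact type name and credential fields are assumptions, so confirm them in the connector's YAML view:

# Hypothetical TAS connector YAML — confirm field names in the Harness YAML editor
connector:
  name: TAS Connector
  identifier: tas_connector
  type: Tas                          # type name assumed
  spec:
    credential:
      type: ManualConfig
      spec:
        endpointUrl: https://api.system.tas-mycompany.com
        username: tas-user
        passwordRef: tas_password    # reference to the Harness secret you created
    delegateSelectors:
      - firstk8sdel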

Install Cloud Foundry Command Line Interface (CF CLI) on your Harness Delegate

After the delegate pods are created, you must edit your Harness Delegate YAML to install CF CLI v7, autoscaler, and Create-Service-Push plugins.

  1. Open the delegate.yaml in a text editor.

  2. Locate the environment variable INIT_SCRIPT in the Deployment object.

    - name: INIT_SCRIPT
      value: ""
  3. Replace value: "" with the following script to install CF CLI, autoscaler, and Create-Service-Push plugins.

    info

    Harness Delegate uses Red Hat based distributions like Red Hat Enterprise Linux (RHEL) or Red Hat Universal Base Image (UBI). Hence, we recommend that you use microdnf commands to install CF CLI on your delegate. If you are using a package manager in Debian based distributions like Ubuntu, use apt-get commands to install CF CLI on your delegate.

    info

    Make sure to use your API token for pivnet login in the following script.

- name: INIT_SCRIPT
  value: |
    # Update the package manager, install the necessary packages, and install CF CLI v7
    microdnf update
    microdnf install yum
    microdnf install --nodocs unzip yum-utils
    microdnf install -y yum-utils
    echo y | yum install wget
    wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
    echo y | yum install cf7-cli -y

    # Autoscaler plugin
    # Download and install pivnet
    wget -O pivnet https://github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.55/pivnet-linux-amd64-0.0.55 && chmod +x pivnet && mv pivnet /usr/local/bin;
    pivnet login --api-token=<replace with api token>

    # Download and install the autoscaler plugin with pivnet
    pivnet download-product-files --product-slug='pcf-app-autoscaler' --release-version='2.0.295' --product-file-id=912441
    cf install-plugin -f autoscaler-for-pcf-cliplugin-linux64-binary-2.0.295

    # Install the Create-Service-Push plugin from the CF community repository
    cf install-plugin -r CF-Community "Create-Service-Push"

    # Verify the cf version
    cf --version

    # Verify the installed plugins
    cf plugins
  4. Apply the updated YAML to the delegate and check the logs.

    The output for cf --version is cf version 7.2.0+be4a5ce2b.2020-12-10.

    Here is the output for cf plugins.

    App Autoscaler        2.0.295   autoscaling-apps              Displays apps bound to the autoscaler
    App Autoscaler        2.0.295   autoscaling-events            Displays previous autoscaling events for the app
    App Autoscaler        2.0.295   autoscaling-rules             Displays rules for an autoscaled app
    App Autoscaler        2.0.295   autoscaling-slcs              Displays scheduled limit changes for the app
    App Autoscaler        2.0.295   configure-autoscaling         Configures autoscaling using a manifest file
    App Autoscaler        2.0.295   create-autoscaling-rule       Create rule for an autoscaled app
    App Autoscaler        2.0.295   create-autoscaling-slc        Create scheduled instance limit change for an autoscaled app
    App Autoscaler        2.0.295   delete-autoscaling-rule       Delete rule for an autoscaled app
    App Autoscaler        2.0.295   delete-autoscaling-rules      Delete all rules for an autoscaled app
    App Autoscaler        2.0.295   delete-autoscaling-slc        Delete scheduled limit change for an autoscaled app
    App Autoscaler        2.0.295   disable-autoscaling           Disables autoscaling for the app
    App Autoscaler        2.0.295   enable-autoscaling            Enables autoscaling for the app
    App Autoscaler        2.0.295   update-autoscaling-limits     Updates autoscaling instance limits for the app
    Create-Service-Push   1.3.2     create-service-push, cspush   Works in the same manner as cf push, except that it will create services defined in a services-manifest.yml file first before performing a cf push.
note

The CF Command script does not require cf login. Harness logs in using the credentials in the TAS cloud provider set up in the infrastructure definition for the workflow executing the CF Command.

Create the deploy stage

Pipelines are collections of stages. For this tutorial, we'll create a new pipeline and add a single stage.

  1. In your Harness project, select Pipelines, select Deployments, then select Create a Pipeline.

    Your pipeline appears.

  2. Enter the name TAS Quickstart and click Start.

  3. Click Add Stage and select Deploy.

  4. Enter the stage name Deploy TAS Service, select the Tanzu Application Services deployment type, and select Set Up Stage.

    The new stage settings appear.
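The stage you just created maps to a Deployment stage in the pipeline YAML. A minimal sketch, with identifiers assumed for illustration; the authoritative version is in the pipeline's YAML editor:

pipeline:
  name: TAS Quickstart
  identifier: TAS_Quickstart
  stages:
    - stage:
        name: Deploy TAS Service
        identifier: Deploy_TAS_Service
        type: Deployment
        spec:
          deploymentType: TAS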

Create the Harness TAS service

Harness services represent your microservices or applications. You can add the same service to as many stages as you need. Services contain your artifacts, manifests, config files, and variables. For more information, go to services and environments overview.

Create a new service

  1. Select the Service tab, then select Add Service.

  2. Enter a service name. For example, TAS.

    Services are persistent and can be used throughout the stages of this pipeline or any other pipeline in the project.

  3. In Service Definition, in Deployment Type, verify that Tanzu Application Services is selected.

Add the manifest

  1. In Manifests, select Add Manifest.
    Harness uses TAS Manifest, Vars, and AutoScaler manifest types for defining TAS applications, instances, and routes.
    You can use one TAS manifest and one autoscaler manifest only. You can use unlimited vars file manifests.

  2. Select TAS Manifest and select Continue.

  3. In Specify TAS Manifest Store, select Harness and select Continue.

  4. In Manifest Details, enter a manifest name. For example, nginx.

  5. Select File/Folder Path.

  6. In Create or Select an Existing Config file, select Project. This is where we will create the manifest.

    1. Select New, select New Folder, enter a folder name, and then select Create.

    2. Select the new folder, select New, select New File, and then enter a file name. For example, enter manifest.

    3. Enter the following in the manifest file, and then click Save.

      applications:
        - name: ((NAME))
          health-check-type: process
          timeout: 5
          instances: ((INSTANCE))
          memory: 750M
          routes:
            - route: ((ROUTE))
  7. Select Apply Selected.

    You can add only one manifest.yaml file.

  8. Select Vars.yaml path and repeat steps 6.1 and 6.2 to create a vars file. Then, enter the following information:

    NAME: harness_<+service.name>
    INSTANCE: 1
    ROUTE: harness_<+service.name>_<+infra.name>.apps.tas-harness.com
  9. Select Apply Selected.

You can add any number of vars.yaml files.

  10. Select AutoScaler.yaml and repeat steps 6.1 and 6.2 to create an autoscaler file. Then, enter the following information:

    instance_limits:
      min: 1
      max: 2
    rules:
      - rule_type: "http_latency"
        rule_sub_type: "avg_99th"
        threshold:
          min: 100
          max: 200
    scheduled_limit_changes:
      - recurrence: 10
        executes_at: "2032-01-01T00:00:00Z"
        instance_limits:
          min: 1
          max: 2
  11. Select Apply Selected.

    You can add only one autoscaler.yaml file.

  12. Select Submit.
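To see how the manifest and vars file fit together: at deployment time, the ((NAME)), ((INSTANCE)), and ((ROUTE)) placeholders in the TAS manifest are resolved from the vars file. Assuming the service name TAS from this tutorial and an illustrative infrastructure name, the resolved manifest would look roughly like this:

applications:
  - name: harness_TAS
    health-check-type: process
    timeout: 5
    instances: 1
    memory: 750M
    routes:
      - route: harness_TAS_tas-tutorial.apps.tas-harness.com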

Add the artifact for deployment

  1. In Artifacts, select Add Artifact Source.

  2. In Specify Artifact Repository Type, select Artifactory, and select Continue.

    important

    For TAS deployments, Harness supports the artifact sources listed in Important notes earlier in this topic. You connect Harness to these registries by using your registry account credentials.

    For this tutorial, we will use Artifactory.

  3. In Artifactory Repository, click New Artifactory Connector.

  4. Enter a name for the connector, such as JFrog, then select Continue.

  5. In Details, in Artifactory Repository URL, enter https://harness.jfrog.io/artifactory/.

  6. In Authentication, select Anonymous, and select Continue.

  7. In Delegates Setup, select Only use Delegate with all of the following tags and enter the name of the delegate created in connect to a TAS provider (step 8).

  8. Select Save and Continue.

  9. After the test connection succeeds, select Continue.

  10. In Artifact Details, enter the following details:

    1. Enter an Artifact Source Name.
    2. Select Generic or Docker repository format.
    3. Select a Repository where the artifact is located.
    4. Enter the name of the folder or repository where the artifact is located.
    5. Select Value to enter a specific artifact name. You can also select Regex and enter a tag regex to filter the artifact.
  11. Select Submit.

Add the manifest and artifact as an artifact bundle


You can add both the manifest and artifact at the same time as an artifact bundle.

In the Harness service, when you add a manifest, select Artifact Bundle in Specify TAS Manifest Store.

When you use an artifact bundle, you do not need to add an individual artifact in the service's Artifacts section. Instead, you add a compressed file (ZIP, TAR, Tar.gz, Tgz) in Manifests that contains both the manifest and artifact.

Here's an example of a file structure that you would compress for the artifact bundle.

artifactBundle/
  manifest/
    manifest.yaml
    vars.yaml
    autoscaler.yaml
  artifact-1.0.war

When you add the artifact bundle to your Harness service, you provide the paths to the manifest, artifact, and any vars.yaml and AutoScaler.yaml files.

Configure the following artifact bundle settings:

  • Artifact Bundle Type: Select the type of compressed file. Currently, Zip, Tar, and Tar.gz are supported.
  • Deployable Artifact Path: The relative path to the artifact from the artifact bundle root after extraction.
  • Manifest Path: The relative path to the manifest from the artifact bundle root after extraction.
  • Vars.yaml path: The relative path to the vars.yaml file from the artifact bundle root after extraction. You can add multiple files.
  • AutoScaler.yaml: The relative path to the autoscaler.yaml file from the artifact bundle root after extraction. You can add multiple files.
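To illustrate how these settings fit the example file structure above, here is a hypothetical manifest entry for an artifact bundle store. The store type and field names are illustrative guesses, not the confirmed schema; check the service's YAML view for the real field names:

# Hypothetical service manifest entry for an artifact bundle (field names assumed)
manifests:
  - manifest:
      identifier: tas_bundle
      type: TasManifest
      spec:
        store:
          type: ArtifactBundle               # store type name assumed
          spec:
            artifactBundleType: ZIP
            deployableArtifactPath: artifact-1.0.war
            manifestPath: manifest/manifest.yaml
            varsPaths:
              - manifest/vars.yaml
            autoScalerPath: manifest/autoscaler.yaml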
Overrides

The standard override rules apply to an artifact bundle with these exceptions:

  1. If the artifact bundle store type is selected in the service, it can only be overridden by the artifact bundle store type in Overrides. A different store type cannot be used in Overrides to override an artifact bundle store type.
  2. If a different store type is selected in the service, it cannot be overridden by the artifact bundle store type in Overrides. The artifact bundle store type cannot be used in Overrides to override a different store type.

Define the TAS target infrastructure

You define the target infrastructure for your deployment in the Environment settings of the pipeline stage. You can define an environment separately and select it in the stage, or create the environment within the stage Environment tab.

There are two methods of specifying the deployment target infrastructure:

  • Pre-existing: the target infrastructure already exists and you simply need to provide the required settings.
  • Dynamically provisioned: the target infrastructure will be dynamically provisioned on-the-fly as part of the deployment process.

For details on Harness provisioning, go to Provisioning overview.

Pre-existing TAS infrastructure

The target space is your TAS space. This is where you will deploy your application.

  1. In Specify Environment, select New Environment.

  2. Enter the name TAS tutorial and select Pre-Production.

  3. Select Save.

  4. In Specify Infrastructure, select New Infrastructure.

  5. Enter a name, and then verify that the selected deployment type is Tanzu Application Services.

  6. Select the TAS connector you created earlier.

  7. In Organization, select the TAS org in which you want to deploy.

  8. In Space, select the TAS space in which you want to deploy.

  9. Select Save.
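For reference, the resulting infrastructure definition can also be viewed as YAML. A minimal sketch with illustrative names; treat field names such as type: TAS as assumptions to confirm in the infrastructure's YAML view:

infrastructureDefinition:
  name: tas-tutorial-infra
  identifier: tas_tutorial_infra
  environmentRef: TAS_tutorial
  deploymentType: TAS
  type: TAS
  spec:
    connectorRef: tas_connector      # the TAS connector you created earlier
    organization: my-tas-org
    space: my-tas-space
  allowSimultaneousDeployments: false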

Dynamically provisioned TAS infrastructure

Here is a summary of the steps to dynamically provision the target infrastructure for a deployment:

  1. Add dynamic provisioning to the CD stage:
    1. In a Harness Deploy stage, in Environment, enable the option Provision your target infrastructure dynamically during the execution of your Pipeline.

    2. Select the type of provisioner that you want to use.

      Harness automatically adds the provisioner steps for the provisioner type you selected.

    3. Configure the provisioner steps to run your provisioning scripts.

    4. Select or create a Harness infrastructure in Environment.

  2. Map the provisioner outputs to the Infrastructure Definition:
    1. In the Harness infrastructure, enable the option Map Dynamically Provisioned Infrastructure.
    2. Map the provisioning script/template outputs to the required infrastructure settings.

Supported provisioners

The following provisioners are supported for TAS deployments:

  • Terraform
  • Terragrunt
  • Terraform Cloud
  • CloudFormation
  • Azure Resource Manager (ARM)
  • Azure Blueprint
  • Shell Script

Adding dynamic provisioning to the stage

To add dynamic provisioning to a Harness pipeline Deploy stage, do the following:

  1. In a Harness Deploy stage, in Environment, enable the option Provision your target infrastructure dynamically during the execution of your Pipeline.

  2. Select the type of provisioner that you want to use.

    Harness automatically adds the necessary provisioner steps.

  3. Set up the provisioner steps to run your provisioning scripts.

For documentation on the required steps for the provisioner you selected, go to the Harness documentation for that provisioner (Terraform, Terragrunt, Terraform Cloud, CloudFormation, Azure Resource Manager, Azure Blueprint, or Shell Script provisioning).

Mapping provisioner output

Once you set up dynamic provisioning in the stage, you must map outputs from your provisioning script/template to specific settings in the Harness Infrastructure Definition used in the stage.

  1. In the same CD Deploy stage where you enabled dynamic provisioning, select or create (New Infrastructure) a Harness infrastructure.

  2. In the Harness infrastructure, in Select Infrastructure Type, select Tanzu Application Services if it is not already selected.

  3. In Tanzu Application Service Infrastructure Details, enable the option Map Dynamically Provisioned Infrastructure.

    A Provisioner setting is added and configured as a runtime input.

  4. Map the provisioning script/template outputs to the required infrastructure settings.

To provision the target deployment infrastructure, Harness needs specific infrastructure information from your provisioning script. You provide this information by mapping specific Infrastructure Definition settings in Harness to outputs from your template/script.

For TAS, Harness needs the following settings mapped to outputs:

  • Organization
  • Space
note

Ensure the Organization and Space settings are set to the Expression option.

For example, here's a snippet of a Terraform script that provisions the infrastructure for a Tanzu Application Services deployment and includes the required outputs:


provider "aws" {
region = "us-east-1"
}

resource "aws_opsworks_org" "pcf_org" {
name = "my-pcf-org"
}

resource "aws_opsworks_space" "pcf_space" {
name = "my-pcf-space"
organization_id = aws_opsworks_org.pcf_org.id
}

output "organization_name" {
value = aws_opsworks_org.pcf_org.name
}

output "space_name" {
value = aws_opsworks_space.pcf_space.name
}

In the Harness Infrastructure Definition, you map outputs to their corresponding settings using expressions in the format <+provisioner.OUTPUT_NAME>, such as <+provisioner.organization_name>.
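Applied to the Terraform example above, the Organization and Space settings in the Infrastructure Definition would reference the two outputs like this (a sketch; the surrounding field names are assumptions):

spec:
  connectorRef: tas_connector
  organization: <+provisioner.organization_name>
  space: <+provisioner.space_name>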

Figure: Mapped outputs.

TAS execution strategies

Now you can select the deployment strategy for this stage of the pipeline.

The TAS workflow for performing a basic deployment takes your Harness TAS service and deploys it on your TAS infrastructure definition.

  1. In Execution Strategies, select Basic, then select Use Strategy.

  2. Review the basic execution steps that are added.

  3. Select the Basic App Setup step to define Step Parameters.

    The basic app setup configuration uses your manifest in Harness TAS to set up your application.

    1. Name - Edit the deployment step name.
    2. Timeout - Set how long you want the Harness Delegate to wait for the TAS cloud to respond to API requests before timing out.
    3. Instance Count - Select whether to Read from Manifest or Match Running Instances.
      The Match Running Instances setting can be used after your first deployment to override the instances in your manifest.
    4. Existing Versions to Keep - Enter the number of existing versions you want to keep. This is to roll back to a stable version if the deployment fails.
    5. Additional Routes - Enter additional routes if you want to add routes other than the ones defined in the manifests.
    6. Select Apply Changes.
  4. Select the App Resize step to define Step Parameters.

    1. Name - Edit the deployment step name.
    2. Timeout - Set how long you want the Harness Delegate to wait for the TAS cloud to respond to API requests before timing out.
    3. Ignore instance count in Manifest - Select this option to override the instance count defined in the manifest.yaml file with the values specified in the App Resize step.
    4. Total Instances - Set the number or percentage of running instances you want to keep.
    5. Desired Instances - Old Version - Set the number or percentage of instances for the previous version of the application you want to keep. If this field is left empty, the desired instance count will be the difference between the maximum possible instance count (from the manifest or match running instances count) and the number of new application instances. For example, if the maximum possible instance count is 4 and the new version is resized to 3 instances, the old version keeps 1 instance.
    6. Select Apply Changes.
  5. Add a Tanzu Command step to your stage if you want to execute custom Tanzu commands in this step.

    1. Timeout - Set how long you want the Harness Delegate to wait for the TAS cloud to respond to API requests before timing out.
    2. Script - Select one of the following options.
      • File Store - Select this option to choose a script from Project, Organization, or Account.
      • Inline - Select this option to enter a script inline.
    3. Select Apply Changes.
  6. Add an App Rollback step to your stage if you want to roll back to an older version of the application in case of deployment failure.

  7. In Advanced configure the following options.

    • Delegate Selector - Select the delegate(s) you want to use to execute this step. You can select one or more delegates for each pipeline step. You only need to select one of a delegate's tags to select it. All delegates with the tag are selected.
    • Conditional Execution - Use the conditions to determine when this step is executed. For more information, go to conditional execution settings.
    • Failure Strategy - Define the failure strategies to control the behavior of your pipeline when there is an error in execution. For more information, go to failure strategy references and define a failure strategy.


    • Looping Strategy - Select Matrix, Repeat, or Parallelism looping strategy. For more information, go to Use looping strategies.
    • Policy Enforcement - Add or modify a policy set to be evaluated after the step is complete. For more information, go to CD governance.

  8. Select Save.

Now the pipeline stage is complete and you can deploy.

Deploy and review

  1. Click Save > Save Pipeline, then select Run. Now you can select the specific artifact to deploy.

  2. Select a Primary Artifact.

  3. Select a Tag.

  4. Select the following Infrastructure parameters.

    1. Connector
    2. Organization
    3. Space
  5. Click Run Pipeline. Harness verifies the pipeline and then runs it. You can see the status of the deployment, and pause or abort it.

  6. Toggle Console View to watch the deployment with more detailed logging.

After the pipeline runs, verify that the deployment was successful.

In your project's Deployments, you can see the deployment listed.

Blue Green deployment support with a configurable number of Tanzu application versions to maintain

note

Currently, TAS Blue Green Deployment Support with a configurable amount of application versions to keep is behind the feature flag CDS_PCF_SUPPORT_BG_WITH_2_APPS_NG. Contact Harness Support to enable the feature.

By default, Harness keeps three versions of a Tanzu application for Blue Green deployments: the active version, the inactive version, and the previous deployment as a backup. This behavior is configurable with the Existing Versions to Keep option. If you set it to 0, Harness maintains only two applications: the active and inactive ones. If you want to maintain three, set it to 1, and Harness maintains the active, the inactive, and the previous successfully deployed version of the application.

In the BG App Setup step, Harness has removed the backend validation that required Existing Versions to Keep to be greater than 0. That validation previously forced Harness to keep more than two versions of the application available for rollback.

Configuration cases

Case 1: If Existing Versions to Keep is greater than 0, deployment behavior remains the same.

Case 2: If Existing Versions to Keep is 0, the BG App Setup delegate task skips renaming the old inactive app to app__0 and directly runs cf push, keeping the new app name as app__inactive. This deploys the manifest to the same old application, so no additional versions are maintained. Old routes are not left behind: Harness first detaches all old routes from the inactive app and then runs cf push.

The App Resize and Swap Routes steps remain the same when Existing Versions to Keep is 0.

In the Swap Rollback step, Harness ignores the value of the Upsize inactive service step parameter. If the Swap Rollback step executes, Harness does not delete the newly created application as it did previously, because no new application is created; Harness modifies the old application only. The rollback behavior depends on whether the Swap Routes step was successful:

  • If the Swap Routes step was successful, Harness only switches the names and routes of the active and inactive (new) applications.
  • If the Swap Routes step wasn't successful, nothing needs to change.

- step:
    name: BG App Setup
    identifier: BGAppSetup
    type: BGAppSetup
    timeout: 10m
    spec:
      tasInstanceCountType: FromManifest
      existingVersionToKeep: 0 # New field that specifies how many additional versions to keep
      additionalRoutes: []
      tempRoutes: []


Next steps

See CD tutorials for other deployment features.