Set up disaster recovery

This topic explains how to set up a disaster recovery (DR) cluster with external data storage and covers best practices for DR setup in Harness Self-Managed Enterprise Edition.

Harness recommends that you create a multi-node cluster spread across different availability zones of a data center for better node failure tolerance.
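
If you want workloads to actually benefit from a multi-zone node pool, you can add a topology spread constraint. The sketch below is illustrative only: it assumes your nodes carry the standard topology.kubernetes.io/zone label, and the Deployment name, labels, and image are hypothetical placeholders, not part of the Harness charts.

    # Illustrative only: spread replicas of a workload evenly across availability zones.
    # Assumes nodes expose the standard topology.kubernetes.io/zone label.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: zone-spread-example        # hypothetical name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: zone-spread-example
      template:
        metadata:
          labels:
            app: zone-spread-example
        spec:
          topologySpreadConstraints:
            - maxSkew: 1                                  # keep per-zone counts within one of each other
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway           # prefer, but do not block, scheduling
              labelSelector:
                matchLabels:
                  app: zone-spread-example
          containers:
            - name: app
              image: nginx:1.25                           # placeholder image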

Disaster recovery with external database

Prerequisites

You need the following prerequisites.

Each provisioned cluster should have:

  • Enough resources allocated for both the primary and the DR cluster to support installation of the Harness Helm charts.
  • Access to persistent storage.

External data storage must support:

  • Data replication.
  • A primary database that is reachable from both the primary and the DR cluster.
  • SSL between the primary and secondary database nodes.
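
If you run your own MongoDB replica set instead of a hosted service such as Atlas, a minimal mongod.conf covering the replication and SSL requirements above might look like the sketch below. The file paths, bind address, and replica set name are illustrative assumptions, not values from the Harness charts.

    # Illustrative mongod.conf fragment; adjust paths and names for your environment.
    net:
      port: 27017
      bindIp: 0.0.0.0                       # reachable from both clusters; restrict as needed
      tls:
        mode: requireTLS                    # enforce TLS between members and clients
        certificateKeyFile: /etc/ssl/mongodb.pem
        CAFile: /etc/ssl/ca.pem
    replication:
      replSetName: rs0                      # every member of the replica set uses the same name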

Set up an external database

To set up an external MongoDB, do the following:

  1. Deploy a replica set. For more information, go to Deploy a Replica Set in the MongoDB documentation.

  2. Get the MongoDB credentials by accessing the MongoDB Cloud → Database Access section.

  3. Encode the credentials using the commands below.

    echo -n 'YOUR_MONGODB_USERNAME' | base64
    echo -n 'YOUR_MONGODB_PASSWORD' | base64
  4. Create a mongo-secret.yaml file.

  5. Paste the encoded credentials in your mongo-secret.yaml file.

    apiVersion: v1
    kind: Secret
    metadata:
      name: mongo-secret
    type: Opaque
    data:
      # Key names must match userKey and passwordKey in your override file
      user: <YOUR_BASE64_ENCODED_USERNAME>
      password: <YOUR_BASE64_ENCODED_PASSWORD>
  6. Apply your mongo-secret.yaml file.

    kubectl apply -f mongo-secret.yaml -n <namespace>
  7. Add the following external MongoDB-specific changes to your override file. An optional verification command follows these steps.

    global:
      database:
        mongo:
          # -- set this to false if you want to use external mongo
          installed: false
          # -- provide default values if mongo.installed is set to false
          # -- generates mongo uri protocol://hosts?extraArgs
          protocol: mongodb+srv
          hosts:
            # mongo host names from Atlas MongoDB Cloud
            - smp-xx-yy-0-zzz.u2poo.mongodb.net
          secretName: mongo-secret
          userKey: user
          passwordKey: password
          extraArgs: ""
    platform:
      access-control:
        mongoSSL:
          enabled: true
          mongoHosts:
            - smp-xx-yy-0-shard-00-00-zzz.xyz1.mongodb.net:27017
            - smp-xx-yy-0-shard-00-00-zzz.xyz2.mongodb.net:27017
            - smp-xx-yy-0-shard-00-00-zzz.xyz3.mongodb.net:27017
            - smp-xx-yy-0-shard-00-00-zzz.xyz4.mongodb.net:27017
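
Optionally, before you deploy, you can confirm that the secret referenced by secretName exists and that its key names match userKey and passwordKey. A quick check with standard kubectl (the key name user follows the secret created earlier):

    # Decode the username stored under the key referenced by userKey
    kubectl get secret mongo-secret -n <namespace> -o jsonpath='{.data.user}' | base64 -d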

Set up the DR cluster

To use the Harness Self-Managed Enterprise Edition Helm chart with DR configuration, the following requirements must be met:

  • Set the DR variable to true in the override.yaml file before you deploy the Helm chart to the DR cluster. A combined example follows these steps.

    global:
      dr:
        createCluster: true
  • Use the command below to create the DR cluster.

    helm install <releaseName> harness/ -n <namespace> -f override.yaml
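
For reference, the override for the DR cluster typically carries the same external database settings as the primary cluster plus the DR flag. The trimmed sketch below is illustrative and reuses the placeholder host and secret names from the earlier steps:

    # Illustrative DR-cluster override: external MongoDB settings plus the DR flag.
    global:
      dr:
        createCluster: true
      database:
        mongo:
          installed: false
          protocol: mongodb+srv
          hosts:
            - smp-xx-yy-0-zzz.u2poo.mongodb.net
          secretName: mongo-secret
          userKey: user
          passwordKey: password
          extraArgs: ""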

Switch to the DR cluster

To ensure business continuity after an unplanned primary cluster failure, you can switch to the DR cluster.

To switch to the DR cluster, do the following:

Disable ingress/load balancer

Before activating the DR cluster, ensure that you disable the ingress/load balancer to prevent concurrent writes to the datastore.

After traffic to the primary Harness cluster is cut off, you can take it down before starting the DR cluster.
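
How you cut off traffic depends on how you expose Harness. One hedged example, assuming an ingress controller runs as a Deployment in the same namespace (the controller's name varies by installation, so list the Deployments first):

    # Find the ingress controller deployment, then scale it to zero to stop serving traffic.
    kubectl get deployments -n <namespace>
    kubectl scale deployment <ingress-controller-deployment> --replicas=0 -n <namespace>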

  1. Run the following Helm command to uninstall the primary cluster's release.

    helm uninstall <release name> -n <namespace>
  2. Update the following flags in the override.yaml file before you upgrade the DR cluster.

    global:
      dr:
        activateCluster: true
        runConfigChecks: true
  3. Run the Helm upgrade command to connect to the DR cluster.

    helm upgrade <release name> harness/ -n <namespace> -f override.yaml
  4. Make sure the DR cluster is in a healthy state and all pods are running before you route traffic to it.
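
For example, you can wait for every pod in the namespace to report Ready before routing traffic, using standard kubectl (adjust the timeout to your environment):

    # Block until all pods are Ready (or the timeout expires), then review their status.
    kubectl wait --for=condition=Ready pods --all -n <namespace> --timeout=600s
    kubectl get pods -n <namespace>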