Install Delegate on Kubernetes or Docker
What is a Delegate?
Harness Delegate is a lightweight worker process that is installed on your infrastructure and communicates only via outbound HTTP/HTTPS to the Harness Platform. This enables the Harness Platform to leverage the delegate to execute CI/CD and other tasks on your behalf, without any of your secrets leaving your network.
You can install the Harness Delegate on either Docker or Kubernetes.
Install Delegate
Create New Delegate Token
Log in to the Harness Platform and go to Account Settings -> Account Resources -> Delegates. Click on the Tokens tab. Click +New Token and give your token a name, for example `firstdeltoken`. When you click Apply, a new token is generated for you. Click on the copy button to copy and store the token in a temporary file for now. You will provide this token as an input parameter in the delegate installation step that follows. The delegate will use this token to authenticate with the Harness Platform.
Get Your Harness Account ID
Along with the delegate token, you will also need to provide your Harness accountId as an input parameter to the delegate installation. This accountId is present in every Harness URL. For example, in the following URL
https://app.harness.io/ng/#/account/6_vVHzo9Qeu9fXvj-AcQCb/settings/overview
`6_vVHzo9Qeu9fXvj-AcQCb` is the accountId.
Now you are ready to install the delegate on either Docker or Kubernetes.
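Optionally, if you are running the install commands from a terminal, you can export the accountId and token as environment variables so they are easy to paste into the commands that follow. This is just a convenience sketch; the variable names are arbitrary and the values shown are placeholders, not real credentials.
# Optional convenience: keep the values handy for the install commands below.
# Replace the placeholders with your own accountId and delegate token.
export HARNESS_ACCOUNT_ID=PUT_YOUR_HARNESS_ACCOUNTID_HERE
export HARNESS_DELEGATE_TOKEN=PUT_YOUR_DELEGATE_TOKEN_HERE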
- Kubernetes
- Docker
Prerequisite
Ensure that you have access to a Kubernetes cluster. For the purposes of this tutorial, we will use minikube.
Install minikube
- On Windows:
choco install minikube
- On macOS:
brew install minikube
Now start minikube with the following config.
minikube start --memory 4g --cpus 4
Validate that you have kubectl access to your cluster.
kubectl get pods -A
Now that you have access to a Kubernetes cluster, you can install the delegate using any of the options below.
- Helm Chart
- Terraform Helm Provider
- Kubernetes Manifest
Install Helm Chart
As a prerequisite, you should have Helm v3 installed on the machine from which you connect to your Kubernetes cluster.
You can now install the delegate using the Delegate Helm Chart. Let us first add the `harness-delegate` helm chart repo to your local helm registry.
helm repo add harness-delegate https://app.harness.io/storage/harness-download/delegate-helm-chart/
helm repo update
helm search repo harness-delegate
You can see that there are two helm charts available. We will use the `harness-delegate/harness-delegate-ng` chart in this tutorial.
NAME CHART VERSION APP VERSION DESCRIPTION
harness-delegate/harness-delegate-ng 1.0.8 1.16.0 A Helm chart for deploying harness-delegate
Now we are ready to install the delegate. The following command installs/upgrades the `firstk8sdel` delegate (which is a Kubernetes workload) in the `harness-delegate-ng` namespace by using the `harness-delegate/harness-delegate-ng` helm chart.
helm upgrade -i firstk8sdel --namespace harness-delegate-ng --create-namespace \
harness-delegate/harness-delegate-ng \
--set delegateName=firstk8sdel \
--set accountId=PUT_YOUR_HARNESS_ACCOUNTID_HERE \
--set delegateToken=PUT_YOUR_DELEGATE_TOKEN_HERE \
--set managerEndpoint=PUT_YOUR_MANAGER_HOST_AND_PORT_HERE \
--set delegateDockerImage=harness/delegate:23.02.78306 \
--set replicas=1 --set upgrader.enabled=false
`PUT_YOUR_MANAGER_HOST_AND_PORT_HERE` should be replaced by the Harness Manager Endpoint noted below. For Harness SaaS accounts, you can find your Harness Cluster Location on the Account Overview page under the Account Settings section of the left navigation. For Harness CDCE, the endpoint varies based on the Docker vs. Helm installation options.
| Harness Cluster Location | Harness Manager Endpoint on Harness Cluster |
| --- | --- |
| SaaS prod-1 | https://app.harness.io |
| SaaS prod-2 | https://app.harness.io/gratis |
| SaaS prod-3 | https://app3.harness.io |
| CDCE Docker | http://<HARNESS_HOST> if Docker Delegate is remote to CDCE or http://host.docker.internal if Docker Delegate is on same host as CDCE |
| CDCE Helm | http://<HARNESS_HOST>:7143 where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running |
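For example, a fully substituted command for a SaaS prod-1 account could look like the following. This is only an illustration: the accountId below is the one from the sample URL earlier, and the token value is a placeholder you must replace with your own.
helm upgrade -i firstk8sdel --namespace harness-delegate-ng --create-namespace \
  harness-delegate/harness-delegate-ng \
  --set delegateName=firstk8sdel \
  --set accountId=6_vVHzo9Qeu9fXvj-AcQCb \
  --set delegateToken=REPLACE_WITH_YOUR_TOKEN \
  --set managerEndpoint=https://app.harness.io \
  --set delegateDockerImage=harness/delegate:23.02.78306 \
  --set replicas=1 --set upgrader.enabled=false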
Create main.tf file
Harness has created a terraform module for the Kubernetes delegate. This module uses the standard terraform Helm provider to install the helm chart onto a Kubernetes cluster whose config, by default, is stored on the same machine at the `~/.kube/config` path. Copy the following into a `main.tf` file stored on a machine from which you want to install your delegate.
module "delegate" {
source = "harness/harness-delegate/kubernetes"
version = "0.1.5"
account_id = "PUT_YOUR_HARNESS_ACCOUNTID_HERE"
delegate_token = "PUT_YOUR_DELEGATE_TOKEN_HERE"
delegate_name = "firstk8sdel"
namespace = "harness-delegate-ng"
manager_endpoint = "PUT_YOUR_MANAGER_HOST_AND_PORT_HERE"
delegate_image = "harness/delegate:23.02.78306"
replicas = 1
upgrader_enabled = false
# Additional optional values to pass to the helm chart
values = yamlencode({
javaOpts: "-Xms64M"
})
}
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
Now replace the variables in the file with your Harness Account ID and Delegate Token values. `PUT_YOUR_MANAGER_HOST_AND_PORT_HERE` should be replaced by the Harness Manager Endpoint noted below. For Harness SaaS accounts, you can find your Harness Cluster Location on the Account Overview page under the Account Settings section of the left navigation. For Harness CDCE, the endpoint varies based on the Docker vs. Helm installation options.
| Harness Cluster Location | Harness Manager Endpoint on Harness Cluster |
| --- | --- |
| SaaS prod-1 | https://app.harness.io |
| SaaS prod-2 | https://app.harness.io/gratis |
| SaaS prod-3 | https://app3.harness.io |
| CDCE Docker | http://<HARNESS_HOST> if Docker Delegate is remote to CDCE or http://host.docker.internal if Docker Delegate is on same host as CDCE |
| CDCE Helm | http://<HARNESS_HOST>:7143 where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running |
Run terraform init, plan and apply
Initialize terraform. This will download the terraform helm provider onto your machine.
terraform init
Run the following step to see exactly the changes terraform is going to make on your behalf.
terraform plan
Finally, run this step to make terraform install the Kubernetes delegate using the Helm provider.
terraform apply
When prompted by terraform if you want to continue with the apply step, type `yes` and then you will see output similar to the following.
helm_release.delegate: Creating...
helm_release.delegate: Still creating... [10s elapsed]
helm_release.delegate: Still creating... [20s elapsed]
helm_release.delegate: Still creating... [30s elapsed]
helm_release.delegate: Still creating... [40s elapsed]
helm_release.delegate: Still creating... [50s elapsed]
helm_release.delegate: Still creating... [1m0s elapsed]
helm_release.delegate: Creation complete after 1m0s [id=firstk8sdel]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
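Before moving on, you can optionally confirm that the Helm release created the delegate pod and that it is running. This assumes kubectl is pointed at the same cluster; the exact pod name will differ in your cluster.
kubectl get pods -n harness-delegate-ng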
Download Kubernetes Manifest Template
curl -LO https://raw.githubusercontent.com/harness/delegate-kubernetes-manifest/main/harness-delegate.yaml
Replace Variables in the Template
Open the harness-delegate.yaml file in a text editor and replace `PUT_YOUR_DELEGATE_NAME_HERE`, `PUT_YOUR_HARNESS_ACCOUNTID_HERE`, and `PUT_YOUR_DELEGATE_TOKEN_HERE` with your delegate name (say `firstk8sdel`), Harness accountId, and delegate token value, respectively.
`PUT_YOUR_MANAGER_HOST_AND_PORT_HERE` should be replaced by the Harness Manager Endpoint noted below. For Harness SaaS accounts, you can find your Harness Cluster Location on the Account Overview page under the Account Settings section of the left navigation. For Harness CDCE, the endpoint varies based on the Docker vs. Helm installation options.
| Harness Cluster Location | Harness Manager Endpoint on Harness Cluster |
| --- | --- |
| SaaS prod-1 | https://app.harness.io |
| SaaS prod-2 | https://app.harness.io/gratis |
| SaaS prod-3 | https://app3.harness.io |
| CDCE Docker | http://<HARNESS_HOST> if Docker Delegate is remote to CDCE or http://host.docker.internal if Docker Delegate is on same host as CDCE |
| CDCE Helm | http://<HARNESS_HOST>:7143 where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running |
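If you prefer to substitute the placeholders from the command line rather than a text editor, a sed one-liner like the following can do it. This is an illustrative sketch using GNU sed syntax; the accountId is the sample value from earlier and the token is a placeholder you must replace.
sed -i \
  -e "s/PUT_YOUR_DELEGATE_NAME_HERE/firstk8sdel/g" \
  -e "s/PUT_YOUR_HARNESS_ACCOUNTID_HERE/6_vVHzo9Qeu9fXvj-AcQCb/g" \
  -e "s/PUT_YOUR_DELEGATE_TOKEN_HERE/REPLACE_WITH_YOUR_TOKEN/g" \
  -e "s#PUT_YOUR_MANAGER_HOST_AND_PORT_HERE#https://app.harness.io#g" \
  harness-delegate.yaml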
Apply Kubernetes Manifest
kubectl apply -f harness-delegate.yaml
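After the manifest is applied, you can optionally check that the delegate pod comes up. This assumes the manifest uses the harness-delegate-ng namespace, as the Helm and Terraform options in this tutorial do; adjust the namespace if your manifest differs.
kubectl get pods -n harness-delegate-ng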
Prerequisite
Ensure that you have the Docker runtime installed on your host. If not, install Docker before proceeding.
Install on Docker
Now you can install the delegate using the following command.
docker run -d --name="firstdockerdel" --cpus="0.5" --memory="2g" \
-e DELEGATE_NAME=firstdockerdel \
-e NEXT_GEN=true \
-e DELEGATE_TYPE=DOCKER \
-e ACCOUNT_ID=PUT_YOUR_HARNESS_ACCOUNTID_HERE \
-e DELEGATE_TOKEN=PUT_YOUR_DELEGATE_TOKEN_HERE \
-e MANAGER_HOST_AND_PORT=PUT_YOUR_MANAGER_HOST_AND_PORT_HERE \
harness/delegate:22.11.77436
`PUT_YOUR_MANAGER_HOST_AND_PORT_HERE` should be replaced by the Harness Manager Endpoint noted below. For Harness SaaS accounts, you can find your Harness Cluster Location on the Account Overview page under the Account Settings section of the left navigation. For Harness CDCE, the endpoint varies based on the Docker vs. Helm installation options.
| Harness Cluster Location | Harness Manager Endpoint on Harness Cluster |
| --- | --- |
| SaaS prod-1 | https://app.harness.io |
| SaaS prod-2 | https://app.harness.io/gratis |
| SaaS prod-3 | https://app3.harness.io |
| CDCE Docker | http://<HARNESS_HOST> if Docker Delegate is remote to CDCE or http://host.docker.internal if Docker Delegate is on same host as CDCE |
| CDCE Helm | http://<HARNESS_HOST>:7143 where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running |
To use local runner build infrastructure, modify the delegate command using the instructions in Use local runner build infrastructure.
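Once the container starts, you can optionally tail its logs to confirm that it registers with your Harness account. The container name matches the one passed to docker run above.
docker container logs -f firstdockerdel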
Verify Delegate Connectivity
Click Continue. In a few moments, after the health checks pass, your Delegate will be available for you to use. Click Done and verify that your new Delegate appears in the list.
Helm Chart & Terraform Helm Provider
Kubernetes Manifest
Docker
You can now route communication to external systems in Harness connectors and pipelines by simply selecting this delegate via a delegate selector.
Troubleshooting
The delegate installer provides troubleshooting information for each installation process. If the delegate cannot be verified, click Troubleshoot for steps you can use to resolve the problem. This section includes the same information.
Harness asks for feedback after the troubleshooting steps. You are asked, Did the delegate come up?
If the steps did not resolve the problem, click No and use the form to describe the issue. You'll also find links to Harness Support and to Harness Documentation.
- Helm Chart
- Terraform Helm Provider
- Kubernetes Manifest
- Docker
Use the following steps to troubleshoot your installation of the delegate using Helm.
Verify that Helm is correctly installed:
Check for Helm:
helm
And then check for the installed version of Helm:
helm version
If you receive the message `Error: rendered manifests contain a resource that already exists...`, delete the existing namespace and retry the Helm upgrade command to deploy the delegate. For further instructions on troubleshooting your Helm installation, go to the Helm troubleshooting guide.
Check the status of the delegate on your cluster:
kubectl describe pods -n <namespace>
If the pod did not start, check the delegate logs:
kubectl logs -f <harnessDelegateName> -n <namespace>
If the state of the delegate pod is `CrashLoopBackOff`, check your allocation of compute resources (CPU and memory) to the cluster. A state of `CrashLoopBackOff` indicates insufficient Kubernetes cluster resources.
If the delegate pod is not healthy, use the `kubectl describe` command to get more information:
kubectl describe <pod_name> -n <namespace>
Use the following steps to troubleshoot your installation of the delegate using Terraform.
Verify that Terraform is correctly installed:
terraform -version
For further instructions on troubleshooting your installation of Terraform, see the Terraform troubleshooting guide.
Check the status of the delegate on your cluster:
kubectl describe pods -n <namespace>
If the pod did not start, check the delegate logs:
kubectl logs -f <harnessDelegateName> -n <namespace>
If the state of the delegate pod is `CrashLoopBackOff`, check your allocation of compute resources (CPU and memory) to the cluster. A state of `CrashLoopBackOff` indicates insufficient Kubernetes cluster resources.
If the delegate pod is not healthy, use the `kubectl describe` command to get more information:
kubectl describe <pod_name> -n <namespace>
Use the following steps to troubleshoot your installation of the delegate using Kubernetes.
Check the status of the delegate on your cluster:
kubectl describe pods -n <namespace>
If the pod did not start, check the delegate logs:
kubectl logs -f <harnessDelegateName> -n <namespace>
If the state of the delegate pod is `CrashLoopBackOff`, check your allocation of compute resources (CPU and memory) to the cluster. A state of `CrashLoopBackOff` indicates insufficient Kubernetes cluster resources.
If the delegate pod is not healthy, use the `kubectl describe` command to get more information:
kubectl describe <pod_name> -n <namespace>
Use the following steps to troubleshoot your installation of the delegate using Docker:
Check the status of the delegate container on your host:
docker container ls -a
If the container is not running, check the delegate logs:
docker container logs <delegatename> -f
Restart the delegate container. To stop the container:
docker container stop <delegatename>
To start the container:
docker container start <delegatename>
Make sure the container has sufficient CPU and memory resources. If not, remove the older containers:
docker container rm [container id]