Deploy a Helm Chart using CD Community Edition
CD Community Edition Basics
Harness CD Community Edition (CDCE) is the 100% free, source-available, self-managed edition of Harness CD that is designed for developers to deploy cloud-native services at the fastest velocity possible. A comparison of the features between CDCE and the Harness CD SaaS plans is available on the Harness CD Pricing page. Developers can run CDCE on Docker or Kubernetes and then use CDCE to automate deployments to Kubernetes, Serverless Functions and many more deployment platforms. The harness-cd-community repo houses these docker-compose and helm chart installers.
We will perform the following steps in this tutorial.
- Install CDCE on Docker or Kubernetes.
- Create a helm CD pipeline using the Terraform Provider.
- Install a delegate on the Kubernetes cluster where the helm chart will be deployed to.
- Deploy the helm chart by running the pipeline.
- Verify the health of your deployed application.
Install CD Community Edition
- Docker
- Kubernetes
Prerequisite
Ensure that you have the Docker runtime installed on your host. If not, install Docker for your platform first.
You also need to make sure that you have enough resources to run the CD Community Edition.
- 2+ CPUs
- 3GB+ of free memory
If you are also running the Kubernetes cluster embedded in the same Docker Desktop, then additional resources are needed since the embedded Kubernetes also consumes resources.
- 3+ CPUs
- 5GB+ of free memory
Finally, since you will be installing delegates at locations different from the machine hosting the CDCE Docker app, you need to make sure that your CDCE Docker app can correctly generate URLs for these remote delegates to talk to. You can do so by setting the environment variable HARNESS_HOST to the public IP of the laptop/VM where the CDCE Docker cluster is installed. For example:
export HARNESS_HOST="192.168.0.1"
If this variable is not set, then the default value of host.docker.internal is used.
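How you determine this public IP depends on your OS. For example (a quick sketch; the interface name en0 is an assumption for a typical macOS setup):
# macOS: print the IP of the primary interface (adjust en0 to your interface)
ipconfig getifaddr en0
# Linux: print the first IP reported for this host
hostname -I | awk '{print $1}'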
Install CDCE on Docker
Now you can install the CDCE by executing the following commands.
git clone https://github.com/harness/harness-cd-community.git
cd harness-cd-community/docker-compose/harness
We can explicitly pull the docker images first.
docker compose pull
Now let's start the CD Community Edition.
docker compose up -d
Since multiple microservices need to come up, let's wait for 300 seconds before checking the health of the install.
docker compose run --rm proxy wait-for-it.sh ng-manager:7090 -t 300
Now we can see that all the various microservices are up and running.
docker compose ps
Open http://localhost/#/signup (if your browser is running inside the host VM) or http://HARNESS_HOST/#/signup (if your browser is running outside the host VM) and complete the registration form. Now your Harness CDCE account along with the first (admin) user is created. If you have already completed this step, then login to CDCE at http://localhost/#/signin or http://HARNESS_HOST/#/signin.
Note that you can temporarily shut down CDCE if needed and bring it back up using the previous up command.
docker compose down
Prerequisite
The CDCE helm chart is currently designed to run only on a single-node Kubernetes cluster. For the purposes of this tutorial, we will use minikube.
Install minikube
- On Windows:
choco install minikube
- On macOS:
brew install minikube
Now start minikube with the following config, assuming Harness CDCE and a Harness Delegate (to be installed later in this tutorial) are the only two workloads you will run on this minikube. If you have other workloads running, then you will have to allocate more memory and CPU resources.
minikube start --memory 4g --cpus 4
Validate that you have kubectl access to your cluster.
kubectl get pods -A
Install helm
You should have Helm v3 installed on the machine from which you connect to your Kubernetes cluster.
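If you do not have Helm yet, one way to install it is via a package manager (a sketch, assuming Homebrew on macOS or Chocolatey on Windows; see the Helm docs for other options):
# macOS
brew install helm
# Windows
choco install kubernetes-helm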
Install CDCE Helm Chart
You can now install the CDCE using a Helm Chart. The first step is to clone the git repo.
git clone https://github.com/harness/harness-cd-community.git
cd harness-cd-community/helm
Since you will be installing delegates at locations different from the Kubernetes node hosting the CDCE Helm app, you need to make sure that your CDCE Kubernetes app can correctly generate URLs for these remote delegates to talk to. You can do so by setting the variable HARNESS_HOST to the public IP of the Kubernetes node in the values.yaml of the helm chart prior to install. Note that the default listen_port is set to 7143.
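If you want to locate these settings before installing, a quick grep over the chart can help (a sketch; the exact key names in values.yaml may differ, so treat this as a starting point):
grep -rn -i "harness_host\|listen_port" ./harness/values.yaml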
Now you are ready to install the helm chart.
helm install harness ./harness --create-namespace --namespace harness
Since multiple microservices need to come up, let's wait for 900 seconds before checking the health of the install.
kubectl wait --namespace harness --timeout 900s --selector app=proxy --for condition=Ready pods
We can check the health of the application we just installed.
kubectl get pods -n harness
kubectl get services -n harness
We need to forward the Kubernetes port to localhost to allow access from outside the cluster.
kubectl port-forward --namespace harness --address localhost svc/proxy 7143:80 9879:9879
Open http://localhost:7143/#/signup (if your browser is running inside the host VM) or http://HARNESS_HOST:7143/#/signup (if your browser is running outside the host VM) and complete the registration form. Now your Harness CDCE account along with the first (admin) user is created. If you have already completed this step, then login to CDCE at http://localhost:7143/#/signin or http://HARNESS_HOST:7143/#/signin.
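Since the port-forward command above blocks its terminal, you can sanity-check from a second terminal that the proxy is answering before opening the browser (a minimal check, assuming the default 7143 port mapping):
curl -sI http://localhost:7143 | head -n 1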
Create Helm CD Pipeline with Terraform Provider
Now that Harness CDCE is up and running, we can create a CD pipeline that will deploy a helm chart onto a different Kubernetes cluster (usually known as the deployment target or infrastructure). You can always set this up in the pipeline studio, which will also include creating other Harness resources like service, environment and connectors. Instead, we will use the popular Harness Terraform Provider to automate this process for us. Instructions for onboarding a new application into Harness using Terraform are available below.
Onboard with Terraform for Harness CDCE
Onboard with Terraform Provider
The Harness Terraform Provider enables automated lifecycle management of the Harness Platform using Terraform. You can onboard onto Harness on day 1 and also make day-2 changes using this Provider. Currently, the following Harness resources can be managed via the Provider.
- Organizations
- Projects
- Connectors
- Services
- Environments
- Infrastructure Definitions
- Pipelines
- Permissions
- Secrets
This tutorial shows you how to manage all the above resources except Permissions and Secrets.
Prerequisite
Install terraform CLI
Install the terraform CLI v1.3.x on your host machine. Run the following command to verify.
terraform -version
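If the terraform CLI is not installed yet, one option is a package manager (a sketch, assuming Homebrew on macOS or Chocolatey on Windows; any install method from the HashiCorp docs works):
# macOS
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
# Windows
choco install terraform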
Get Your Harness Account ID
You will also need to provide your Harness accountId as an input parameter to the Provider. This accountId is present in every Harness URL. For example, in the following URL:
https://<harness-mgr-port>/ng/#/account/6_vVHzo9Qeu9fXvj-AcQCb/settings/overview
6_vVHzo9Qeu9fXvj-AcQCb is the accountId.
Get Harness API Key Token
Login to your Harness Account and click My Profile in the left navigation. Click +API Key to create a new API key. Now click +Token to create an API key token. Give the token a name and save the token value that is autogenerated when you click Apply.
Create your main.tf file
You can create your own main.tf file. For this tutorial, we will use a sample main.tf file
curl -LO https://raw.githubusercontent.com/harness-apps/developer-hub-apps/main/terraform/main.tf
Configure the Harness Provider
Open the main.tf file in a text editor and replace PUT_YOUR_HARNESS_ACCOUNTID_HERE and PUT_YOUR_API_KEY_TOKEN_HERE with your Harness accountId and API key token values respectively.
The PUT_YOUR_MANAGER_ENDPOINT_HERE value can be determined as follows:
- For Harness SaaS, it is https://app.harness.io/gateway.
- For Harness CDCE Docker, it is http://HARNESS_HOST, where HARNESS_HOST is the host IP where Harness CDCE is running. You can use localhost for HARNESS_HOST if terraform is running on the same machine.
- For Harness CDCE Helm, it is http://HARNESS_HOST:7143, where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running. You can use localhost for HARNESS_HOST if terraform is running on the same machine.
Note that if the terraform CLI is running local to CDCE (either docker compose or minikube port-forward onto the VM), then you can use localhost and localhost:7143 for the Docker and Kubernetes install options respectively.
terraform {
required_providers {
harness = {
source = "harness/harness"
}
}
}
provider "harness" {
endpoint = "PUT_YOUR_MANAGER_ENDPOINT_HERE"
account_id = "PUT_YOUR_HARNESS_ACCOUNTID_HERE"
platform_api_key = "PUT_YOUR_API_KEY_TOKEN_HERE"
}
...
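If you prefer to script the substitution instead of hand-editing, a sed one-liner works too (a sketch with hypothetical values; substitute your real accountId, token, and endpoint):
# Replace the three placeholders in main.tf; the token shown is a made-up example
sed -i.bak \
  -e 's|PUT_YOUR_MANAGER_ENDPOINT_HERE|http://localhost|' \
  -e 's|PUT_YOUR_HARNESS_ACCOUNTID_HERE|6_vVHzo9Qeu9fXvj-AcQCb|' \
  -e 's|PUT_YOUR_API_KEY_TOKEN_HERE|pat.example.token.value|' \
  main.tf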
Create the terraform resources
Review the main.tf file for the operations that terraform is going to execute for us. As you can see, the sample first creates a new organization as well as a new project inside that organization. It then creates a helm chart service that will be deployed to Kubernetes infrastructure running in a pre-production environment. A new pipeline is then created to make this service deployment happen.
At every step, you can use existing organizations, projects, services, and environments if you so desire. We will now review the terraform resource definition for each of these resources.
Create a new organization
resource "harness_platform_organization" "org" {
name = "OrgByTF"
identifier = "orgbytf"
description = "Created through Terraform"
}
Create a new project in the organization
resource "harness_platform_project" "project" {
name = "ProjByTF"
identifier = "projbytf"
org_id = "orgbytf"
description = "Created through Terraform"
depends_on = [
harness_platform_organization.org
]
}
Create a helm connector at account level
Account-level connectors are available to all projects in all organizations. We create such a connector by not setting the org_id and project_id attributes in the terraform resource. We will use this connector to retrieve the Kubernetes manifests for the sample service we will deploy.
resource "harness_platform_connector_helm" "helmconn" {
name = "HelmConnByTF"
identifier = "helmconnbytf"
description = "Created through Terraform"
url = "https://charts.bitnami.com/bitnami"
}
Create a service inside the project
This service will use the above Helm connector to deploy a Helm Chart. We are using the wildfly helm chart available at https://charts.bitnami.com/bitnami as the sample service for this tutorial.
resource "harness_platform_service" "service" {
name = "ServiceByTF"
identifier = "servicebytf"
description = "Created through Terraform"
org_id = "orgbytf"
project_id = "projbytf"
yaml = <<-EOT
...
EOT
depends_on = [
harness_platform_project.project
]
}
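If you want to inspect the sample chart before deploying it through Harness, you can browse it directly with helm (an optional check; the bitnami repo alias below is an assumption, any alias works):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/wildfly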
Create a Kubernetes connector at account level
We will deploy the service defined earlier to the Kubernetes cluster associated with this connector. This connector will inherit the permissions of a delegate named firstk8sdel, which you can install using the Kubernetes delegate instructions from Install Delegate.
resource "harness_platform_connector_kubernetes" "k8sconn" {
name = "K8SConnByTF"
identifier = "k8sconnbytf"
description = "Created through Terraform"
inherit_from_delegate {
delegate_selectors = ["firstk8sdel"]
}
}
Create an environment in the project
Create an environment inside the project that refers to the Kubernetes connector we defined in the previous step. Environments can be of PreProduction or Production types; we will use the former for this example.
resource "harness_platform_environment" "env" {
name = "EnvByTF"
identifier = "envbytf"
org_id = "orgbytf"
project_id = "projbytf"
tags = []
type = "PreProduction"
yaml = <<-EOT
...
EOT
depends_on = [
harness_platform_project.project
]
}
Create an infrastructure definition in the environment
Create an infrastructure definition in the environment we just created.
resource "harness_platform_infrastructure" "infra" {
name = "InfraByTF"
identifier = "infrabytf"
org_id = "orgbytf"
project_id = "projbytf"
env_id = "envbytf"
type = "KubernetesDirect"
deployment_type = "Kubernetes"
yaml = <<-EOT
...
EOT
depends_on = [
harness_platform_environment.env
]
}
Create a pipeline inside the project
Every run of this pipeline will deploy a particular version of the service onto the Kubernetes cluster.
resource "harness_platform_pipeline" "pipeline" {
name = "PipelineByTF"
identifier = "pipelinebytf"
org_id = "orgbytf"
project_id = "projbytf"
yaml = <<-EOT
...
EOT
depends_on = [
harness_platform_infrastructure.infra
]
}
Run terraform init, plan and apply
Initialize terraform. This will download the Harness Provider onto your machine.
terraform init
Run the following step to see exactly the changes terraform is going to make on your behalf.
terraform plan
Finally, run this step to make terraform onboard your resources onto the Harness Platform.
terraform apply
When prompted by terraform if you want to continue with the apply step, type yes, and then you will see output similar to the following.
harness_platform_connector_kubernetes.k8sconn: Creating...
harness_platform_organization.org: Creating...
harness_platform_connector_helm.helmconn: Creating...
harness_platform_connector_helm.helmconn: Creation complete after 2s [id=helmconnbytf]
harness_platform_organization.org: Creation complete after 2s [id=orgbytf]
harness_platform_connector_kubernetes.k8sconn: Creation complete after 2s [id=k8sconnbytf]
harness_platform_project.project: Creating...
harness_platform_project.project: Creation complete after 2s [id=projbytf]
harness_platform_service.service: Creating...
harness_platform_environment.env: Creating...
harness_platform_environment.env: Creation complete after 1s [id=envbytf]
harness_platform_service.service: Creation complete after 1s [id=servicebytf]
harness_platform_infrastructure.infra: Creating...
harness_platform_infrastructure.infra: Creation complete after 3s [id=infrabytf]
harness_platform_pipeline.pipeline: Creating...
harness_platform_pipeline.pipeline: Creation complete after 5s [id=pipelinebytf]
Apply complete! Resources: 8 added, 0 changed, 0 destroyed.
Verify and run pipeline on Harness UI
On the Harness UI, you can go to Account Settings --> Organizations to see the new organization. Click View Projects to see the new project. Click the project and go into the Continuous Delivery module. When you click Pipelines, you can see that the new pipeline has been created. You can run this pipeline as long as you have previously installed a delegate with the name firstk8sdel using the Kubernetes delegate instructions from Install Delegate. As previously shown, you can change the delegate name to any other delegate you may have installed by editing the Kubernetes connector resource of the main.tf file.
Run terraform destroy
This is as simple as running the command below.
terraform destroy
Install Kubernetes Delegate
We now need to install a delegate named firstk8sdel on the Kubernetes cluster that is the deployment target. Note that if you installed CDCE on a Kubernetes cluster, then you can reuse the same cluster as this deployment cluster. However, the cluster should have enough underlying resources to run CDCE (in the harness namespace), a delegate (in the harness-delegate-ng namespace), and the helm chart you will be deploying (in the namespace of your choice) via CDCE.
Install Delegate for Harness CDCE
Install Delegate on Kubernetes or Docker
What is a Delegate?
Harness Delegate is a lightweight worker process that is installed on your infrastructure and communicates only via outbound HTTP/HTTPS to the Harness Platform. This enables the Harness Platform to leverage the delegate for executing the CI/CD and other tasks on your behalf, without any of your secrets leaving your network.
You can install the Harness Delegate on either Docker or Kubernetes.
Install Delegate
Create New Delegate Token
Login to the Harness Platform and go to Account Settings -> Account Resources -> Delegates. Click on the Tokens tab. Click +New Token and give your token the name `firstdeltoken`. When you click Apply, a new token is generated for you. Click on the copy button to copy and store the token in a temporary file for now. You will provide this token as an input parameter in the next delegate installation step. The delegate will use this token to authenticate with the Harness Platform.
Get Your Harness Account ID
Along with the delegate token, you will also need to provide your Harness accountId as an input parameter to the delegate installation. This accountId is present in every Harness URL. For example, in the following URL:
https://app.harness.io/ng/#/account/6_vVHzo9Qeu9fXvj-AcQCb/settings/overview
6_vVHzo9Qeu9fXvj-AcQCb is the accountId.
Now you are ready to install the delegate on either Docker or Kubernetes.
- Kubernetes
- Docker
Prerequisite
Ensure that you have access to a Kubernetes cluster. For the purposes of this tutorial, we will use minikube.
Install minikube
- On Windows:
choco install minikube
- On macOS:
brew install minikube
Now start minikube with the following config.
minikube start --memory 4g --cpus 4
Validate that you have kubectl access to your cluster.
kubectl get pods -A
Now that you have access to a Kubernetes cluster, you can install the delegate using any of the options below.
- Helm Chart
- Terraform Helm Provider
- Kubernetes Manifest
Install Helm Chart
As a prerequisite, you should have Helm v3 installed on the machine from which you connect to your Kubernetes cluster.
You can now install the delegate using the Delegate Helm Chart. Let us first add the harness-delegate helm chart repo to your local helm registry.
helm repo add harness-delegate https://app.harness.io/storage/harness-download/delegate-helm-chart/
helm repo update
helm search repo harness-delegate
You can see that there are two helm charts available. We will use the harness-delegate/harness-delegate-ng chart in this tutorial.
NAME CHART VERSION APP VERSION DESCRIPTION
harness-delegate/harness-delegate-ng 1.0.8 1.16.0 A Helm chart for deploying harness-delegate
Now we are ready to install the delegate. The following command installs/upgrades the firstk8sdel delegate (which is a Kubernetes workload) in the harness-delegate-ng namespace using the harness-delegate/harness-delegate-ng helm chart.
helm upgrade -i firstk8sdel --namespace harness-delegate-ng --create-namespace \
harness-delegate/harness-delegate-ng \
--set delegateName=firstk8sdel \
--set accountId=PUT_YOUR_HARNESS_ACCOUNTID_HERE \
--set delegateToken=PUT_YOUR_DELEGATE_TOKEN_HERE \
--set managerEndpoint=PUT_YOUR_MANAGER_HOST_AND_PORT_HERE \
--set delegateDockerImage=harness/delegate:23.02.78306 \
--set replicas=1 --set upgrader.enabled=false
PUT_YOUR_MANAGER_HOST_AND_PORT_HERE should be replaced by the Harness Manager Endpoint noted below. For Harness SaaS accounts, you can find your Harness Cluster Location on the Account Overview page under the Account Settings section of the left navigation. For Harness CDCE, the endpoint varies based on the Docker vs. Helm installation options.
| Harness Cluster Location | Harness Manager Endpoint on Harness Cluster |
|---|---|
| SaaS prod-1 | https://app.harness.io |
| SaaS prod-2 | https://app.harness.io/gratis |
| SaaS prod-3 | https://app3.harness.io |
| CDCE Docker | http://<HARNESS_HOST> if the Docker Delegate is remote to CDCE, or http://host.docker.internal if the Docker Delegate is on the same host as CDCE |
| CDCE Helm | http://<HARNESS_HOST>:7143 where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running |
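Once the helm install completes, you can confirm the delegate pod is coming up (a quick check against the harness-delegate-ng namespace used in the command above):
kubectl get pods -n harness-delegate-ng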
Create main.tf file
Harness has created a terraform module for the Kubernetes delegate. This module uses the standard terraform Helm provider to install the helm chart onto a Kubernetes cluster whose config, by default, is stored on the same machine at the ~/.kube/config path. Copy the following into a main.tf file stored on a machine from which you want to install your delegate.
module "delegate" {
source = "harness/harness-delegate/kubernetes"
version = "0.1.5"
account_id = "PUT_YOUR_HARNESS_ACCOUNTID_HERE"
delegate_token = "PUT_YOUR_DELEGATE_TOKEN_HERE"
delegate_name = "firstk8sdel"
namespace = "harness-delegate-ng"
manager_endpoint = "PUT_YOUR_MANAGER_HOST_AND_PORT_HERE"
delegate_image = "harness/delegate:23.02.78306"
replicas = 1
upgrader_enabled = false
# Additional optional values to pass to the helm chart
values = yamlencode({
javaOpts: "-Xms64M"
})
}
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
Now replace the variables in the file with your Harness Account ID and Delegate Token values. PUT_YOUR_MANAGER_HOST_AND_PORT_HERE should be replaced by the Harness Manager Endpoint noted below. For Harness SaaS accounts, you can find your Harness Cluster Location on the Account Overview page under the Account Settings section of the left navigation. For Harness CDCE, the endpoint varies based on the Docker vs. Helm installation options.
| Harness Cluster Location | Harness Manager Endpoint on Harness Cluster |
|---|---|
| SaaS prod-1 | https://app.harness.io |
| SaaS prod-2 | https://app.harness.io/gratis |
| SaaS prod-3 | https://app3.harness.io |
| CDCE Docker | http://<HARNESS_HOST> if the Docker Delegate is remote to CDCE, or http://host.docker.internal if the Docker Delegate is on the same host as CDCE |
| CDCE Helm | http://<HARNESS_HOST>:7143 where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running |
Run terraform init, plan and apply
Initialize terraform. This will download the terraform helm provider onto your machine.
terraform init
Run the following step to see exactly the changes terraform is going to make on your behalf.
terraform plan
Finally, run this step to make terraform install the Kubernetes delegate using the Helm provider.
terraform apply
When prompted by terraform if you want to continue with the apply step, type yes
and then you will see output similar to the following.
helm_release.delegate: Creating...
helm_release.delegate: Still creating... [10s elapsed]
helm_release.delegate: Still creating... [20s elapsed]
helm_release.delegate: Still creating... [30s elapsed]
helm_release.delegate: Still creating... [40s elapsed]
helm_release.delegate: Still creating... [50s elapsed]
helm_release.delegate: Still creating... [1m0s elapsed]
helm_release.delegate: Creation complete after 1m0s [id=firstk8sdel]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
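You can then confirm the release and its pod the same way as with a manual helm install (a quick check against the harness-delegate-ng namespace set in main.tf):
helm list -n harness-delegate-ng
kubectl get pods -n harness-delegate-ng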
Download Kubernetes Manifest Template
curl -LO https://raw.githubusercontent.com/harness/delegate-kubernetes-manifest/main/harness-delegate.yaml
Replace Variables in the Template
Open the harness-delegate.yaml file in a text editor and replace PUT_YOUR_DELEGATE_NAME_HERE, PUT_YOUR_HARNESS_ACCOUNTID_HERE, and PUT_YOUR_DELEGATE_TOKEN_HERE with your delegate name (say firstk8sdel), Harness accountId, and delegate token values respectively.
PUT_YOUR_MANAGER_HOST_AND_PORT_HERE should be replaced by the Harness Manager Endpoint noted below. For Harness SaaS accounts, you can find your Harness Cluster Location on the Account Overview page under the Account Settings section of the left navigation. For Harness CDCE, the endpoint varies based on the Docker vs. Helm installation options.
| Harness Cluster Location | Harness Manager Endpoint on Harness Cluster |
|---|---|
| SaaS prod-1 | https://app.harness.io |
| SaaS prod-2 | https://app.harness.io/gratis |
| SaaS prod-3 | https://app3.harness.io |
| CDCE Docker | http://<HARNESS_HOST> if the Docker Delegate is remote to CDCE, or http://host.docker.internal if the Docker Delegate is on the same host as CDCE |
| CDCE Helm | http://<HARNESS_HOST>:7143 where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running |
Apply Kubernetes Manifest
kubectl apply -f harness-delegate.yaml
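You can watch the delegate pod start up (a quick check; harness-delegate-ng is the namespace used by the manifest template):
kubectl get pods -n harness-delegate-ng -w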
Prerequisite
Ensure that you have the Docker runtime installed on your host. If not, install Docker for your platform first.
Install on Docker
Now you can install the delegate using the following command.
docker run -d --name="firstdockerdel" --cpus="0.5" --memory="2g" \
-e DELEGATE_NAME=firstdockerdel \
-e NEXT_GEN=true \
-e DELEGATE_TYPE=DOCKER \
-e ACCOUNT_ID=PUT_YOUR_HARNESS_ACCOUNTID_HERE \
-e DELEGATE_TOKEN=PUT_YOUR_DELEGATE_TOKEN_HERE \
-e MANAGER_HOST_AND_PORT=PUT_YOUR_MANAGER_HOST_AND_PORT_HERE \
harness/delegate:22.11.77436
PUT_YOUR_MANAGER_HOST_AND_PORT_HERE should be replaced by the Harness Manager Endpoint noted below. For Harness SaaS accounts, you can find your Harness Cluster Location on the Account Overview page under the Account Settings section of the left navigation. For Harness CDCE, the endpoint varies based on the Docker vs. Helm installation options.
| Harness Cluster Location | Harness Manager Endpoint on Harness Cluster |
|---|---|
| SaaS prod-1 | https://app.harness.io |
| SaaS prod-2 | https://app.harness.io/gratis |
| SaaS prod-3 | https://app3.harness.io |
| CDCE Docker | http://<HARNESS_HOST> if the Docker Delegate is remote to CDCE, or http://host.docker.internal if the Docker Delegate is on the same host as CDCE |
| CDCE Helm | http://<HARNESS_HOST>:7143 where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running |
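After the container starts, you can tail its logs to confirm it registers with the Harness Platform (a quick check using the container name from the run command above):
docker ps --filter name=firstdockerdel
docker logs -f firstdockerdel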
To use local runner build infrastructure, modify the delegate command using the instructions in Use local runner build infrastructure.
Verify Delegate Connectivity
Click Continue and, in a few moments after the health checks pass, your Delegate will be available for you to use. Click Done and verify that your new Delegate appears in the list.
You can now route communication to external systems in Harness connectors and pipelines by simply selecting this delegate via a delegate selector.
Run Pipeline to Deploy Helm Chart
Login to the Harness CDCE UI and click into the Project. Click on the pipeline and then click Run. You will see that the wildfly helm chart from https://charts.bitnami.com/bitnami is pulled by the delegate you installed and deployed into the default namespace of the Kubernetes cluster. You can always change the helm chart and its deployment namespace to your own application.