Deploy a Private Image in Amazon ECR to Kubernetes
Private Registry Basics
Artifact repositories are the home for the application binaries that need to be deployed. They can also house the building blocks that applications need, e.g., the dependencies that a final build bundles up as a deployable artifact. The Docker ecosystem has its own flavor of artifact repository, the Docker Registry, designed to store and distribute Docker images.
There are a multitude of choices when picking a Docker Registry: Docker Hub itself, cloud vendors, and artifact repository vendors all have Docker Registry solutions. Sometimes what you are working on is internal to your organization, or you want a layer of security so that the public cannot pull the Docker distribution of your application or service. In those cases, having a private Docker Registry is a prudent move. Deploying an image from a private registry is similar to deploying from a public one, though additional authentication steps are needed depending on the provider. In this example, we will deploy an image from a private Amazon Elastic Container Registry (ECR) to a Kubernetes cluster.
You can leverage the example image and manifests in this tutorial or bring your own. The first step is to access your private registry.
Accessing Your Private Docker Registry - Amazon ECR
Head to the AWS Console and then to the ECR module. You can either create a new private ECR Repository or leverage an existing one. Once you have access to a private ECR Repository, they will be listed under Repositories on the Private tab. In this example, a private ECR Repository called “my-private-repo” has been created.
Amazon does list out authentication commands by clicking into the Repository and clicking “View push commands”. This will require the AWS CLI and Docker on your machine.
Execute the authentication command on your local machine. If you have a private image and a corresponding manifest to deploy, you can skip to the deployment section. If you do not, you can use the section below to seed your private registry.
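The authentication command AWS shows you follows a standard pattern. A sketch, assuming the AWS CLI v2 and Docker are installed, with `<ecr-id>` and `<region>` as placeholders for your registry ID and AWS region:

```shell
# Retrieve a temporary auth token from ECR and pipe it to docker login.
# <ecr-id> and <region> are placeholders for your registry ID and AWS region.
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <ecr-id>.dkr.ecr.<region>.amazonaws.com
```

A successful login prints "Login Succeeded"; the token is valid for 12 hours.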
Seeding Your Private Registry
If you do not have an image that you want to deploy in your private registry, you can take an image that is publicly accessible that you want to deploy and push that into your private ECR Repository.
Let’s say you want to deploy Grafana as an example, which will require an image. You can pull the image down locally, then re-tag and push it to match your ECR format.
docker pull grafana/grafana
docker tag grafana/grafana <ecr-id>.dkr.ecr.<region>.amazonaws.com/my-private-repo:latest
docker push <ecr-id>.dkr.ecr.<region>.amazonaws.com/my-private-repo:latest
With the push out of the way, you will see your newly seeded image in the ECR Repository.
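You can also confirm the seed from the CLI. A sketch, assuming the AWS CLI is configured for the same account, with `<region>` as a placeholder:

```shell
# List the image tags now stored in the private repository.
aws ecr describe-images --repository-name my-private-repo --region <region> \
  --query 'imageDetails[].imageTags' --output text
```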
With an artifact in your private ECR Repository, you are now ready to deploy this with Harness.
Deploying a Private Image with Harness
If you do not have a Harness Account, sign up for a Harness Account for access to the Continuous Delivery Module. A default Harness Project will be created for you automatically. Projects are logical groupings of resources. The generated default project is perfect for the first time deployment.
When navigating back to Deployments, you can set the project context to the Default Project by clicking on the blue chevrons >> and selecting Default Project.
With the Default Project selected, clicking on Overview will bring up a wizard to create your first Pipeline/Deployment. A few Harness entities will need to be created along the way: wirings to Amazon ECR for private image access and to GitHub for the manifests.
Install Delegate on Kubernetes or Docker
What is a Delegate?
Harness Delegate is a lightweight worker process that is installed on your infrastructure and communicates only via outbound HTTP/HTTPS to the Harness Platform. This enables the Harness Platform to leverage the delegate for executing the CI/CD and other tasks on your behalf, without any of your secrets leaving your network.
You can install the Harness Delegate on either Docker or Kubernetes.
Install Delegate
Create New Delegate Token
Login to the Harness Platform and go to Account Settings -> Account Resources -> Delegates. Click on the Tokens tab. Click +New Token and give your token the name `firstdeltoken`. When you click Apply, a new token is generated for you. Click the copy button and store the token in a temporary file for now. You will provide this token as an input parameter in the next delegate installation step. The delegate will use this token to authenticate with the Harness Platform.
Get Your Harness Account ID
Along with the delegate token, you will also need to provide your Harness accountId as an input parameter to the delegate installation. This accountId is present in every Harness URL. For example, in the following URL
https://app.harness.io/ng/#/account/6_vVHzo9Qeu9fXvj-AcQCb/settings/overview
`6_vVHzo9Qeu9fXvj-AcQCb` is the accountId.
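If you prefer the command line, you can cut the accountId out of the URL; a small sketch using the example URL above:

```shell
# Extract the accountId path segment from a Harness URL.
URL="https://app.harness.io/ng/#/account/6_vVHzo9Qeu9fXvj-AcQCb/settings/overview"
ACCOUNT_ID=$(echo "$URL" | sed -E 's|.*/account/([^/]+)/.*|\1|')
echo "$ACCOUNT_ID"   # prints 6_vVHzo9Qeu9fXvj-AcQCb
```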
Now you are ready to install the delegate on either Docker or Kubernetes.
- Kubernetes
- Docker
Prerequisite
Ensure that you have access to a Kubernetes cluster. For the purposes of this tutorial, we will use minikube.
Install minikube
- On Windows:
choco install minikube
- On macOS:
brew install minikube
Now start minikube with the following config.
minikube start --memory 4g --cpus 4
Validate that you have kubectl access to your cluster.
kubectl get pods -A
Now that you have access to a Kubernetes cluster, you can install the delegate using any of the options below.
- Helm Chart
- Terraform Helm Provider
- Kubernetes Manifest
Install Helm Chart
As a prerequisite, you should have Helm v3 installed on the machine from which you connect to your Kubernetes cluster.
You can now install the delegate using the Delegate Helm Chart. Let us first add the harness-delegate helm chart repo to your local helm registry.
helm repo add harness-delegate https://app.harness.io/storage/harness-download/delegate-helm-chart/
helm repo update
helm search repo harness-delegate
You can see that there are two helm charts available. We will use the harness-delegate/harness-delegate-ng chart in this tutorial.
NAME CHART VERSION APP VERSION DESCRIPTION
harness-delegate/harness-delegate-ng 1.0.8 1.16.0 A Helm chart for deploying harness-delegate
Now we are ready to install the delegate. The following command installs/upgrades the firstk8sdel delegate (which is a Kubernetes workload) in the harness-delegate-ng namespace using the harness-delegate/harness-delegate-ng helm chart.
helm upgrade -i firstk8sdel --namespace harness-delegate-ng --create-namespace \
harness-delegate/harness-delegate-ng \
--set delegateName=firstk8sdel \
--set accountId=PUT_YOUR_HARNESS_ACCOUNTID_HERE \
--set delegateToken=PUT_YOUR_DELEGATE_TOKEN_HERE \
--set managerEndpoint=PUT_YOUR_MANAGER_HOST_AND_PORT_HERE \
--set delegateDockerImage=harness/delegate:23.02.78306 \
--set replicas=1 --set upgrader.enabled=false
PUT_YOUR_MANAGER_HOST_AND_PORT_HERE should be replaced by the Harness Manager Endpoint noted below. For Harness SaaS accounts, you can find your Harness Cluster Location on the Account Overview page under the Account Settings section of the left navigation. For Harness CDCE, the endpoint varies based on the Docker vs. Helm installation options.

| Harness Cluster Location | Harness Manager Endpoint on Harness Cluster |
|---|---|
| SaaS prod-1 | https://app.harness.io |
| SaaS prod-2 | https://app.harness.io/gratis |
| SaaS prod-3 | https://app3.harness.io |
| CDCE Docker | http://<HARNESS_HOST> if the Docker Delegate is remote to CDCE, or http://host.docker.internal if the Docker Delegate is on the same host as CDCE |
| CDCE Helm | http://<HARNESS_HOST>:7143, where HARNESS_HOST is the public IP of the Kubernetes node where CDCE Helm is running |
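Once the chart installs, you can check that the delegate pod came up. A sketch, assuming kubectl points at the same cluster and that the chart names the Deployment after delegateName:

```shell
# The delegate runs in the harness-delegate-ng namespace.
kubectl get pods -n harness-delegate-ng
# Wait for the rollout to finish (deployment name typically matches delegateName).
kubectl rollout status deployment/firstk8sdel -n harness-delegate-ng
```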
Create main.tf file
Harness has created a terraform module for the Kubernetes delegate. This module uses the standard terraform Helm provider to install the helm chart onto a Kubernetes cluster whose config, by default, is stored on the same machine at the ~/.kube/config path. Copy the following into a main.tf file stored on a machine from which you want to install your delegate.
module "delegate" {
  source  = "harness/harness-delegate/kubernetes"
  version = "0.1.5"

  account_id       = "PUT_YOUR_HARNESS_ACCOUNTID_HERE"
  delegate_token   = "PUT_YOUR_DELEGATE_TOKEN_HERE"
  delegate_name    = "firstk8sdel"
  namespace        = "harness-delegate-ng"
  manager_endpoint = "PUT_YOUR_MANAGER_HOST_AND_PORT_HERE"
  delegate_image   = "harness/delegate:23.02.78306"
  replicas         = 1
  upgrader_enabled = false

  # Additional optional values to pass to the helm chart
  values = yamlencode({
    javaOpts : "-Xms64M"
  })
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}
Now replace the variables in the file with your Harness Account ID and Delegate Token values. PUT_YOUR_MANAGER_HOST_AND_PORT_HERE should be replaced by the Harness Manager Endpoint listed in the table in the Helm Chart section above.
Run terraform init, plan and apply
Initialize terraform. This will download the terraform helm provider onto your machine.
terraform init
Run the following step to see exactly the changes terraform is going to make on your behalf.
terraform plan
Finally, run this step to make terraform install the Kubernetes delegate using the Helm provider.
terraform apply
When prompted by terraform if you want to continue with the apply step, type yes, and then you will see output similar to the following.
helm_release.delegate: Creating...
helm_release.delegate: Still creating... [10s elapsed]
helm_release.delegate: Still creating... [20s elapsed]
helm_release.delegate: Still creating... [30s elapsed]
helm_release.delegate: Still creating... [40s elapsed]
helm_release.delegate: Still creating... [50s elapsed]
helm_release.delegate: Still creating... [1m0s elapsed]
helm_release.delegate: Creation complete after 1m0s [id=firstk8sdel]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Download Kubernetes Manifest Template
curl -LO https://raw.githubusercontent.com/harness/delegate-kubernetes-manifest/main/harness-delegate.yaml
Replace Variables in the Template
Open the harness-delegate.yaml file in a text editor and replace PUT_YOUR_DELEGATE_NAME_HERE, PUT_YOUR_HARNESS_ACCOUNTID_HERE and PUT_YOUR_DELEGATE_TOKEN_HERE with your delegate name (say firstk8sdel), your Harness accountId, and your delegate token value, respectively.
PUT_YOUR_MANAGER_HOST_AND_PORT_HERE should be replaced by the Harness Manager Endpoint listed in the table in the Helm Chart section above.
Apply Kubernetes Manifest
kubectl apply -f harness-delegate.yaml
Prerequisite
Ensure that you have the Docker runtime installed on your host. If not, use one of the following options to install Docker:
Install Docker
Now you can install the delegate using the following command.
docker run -d --name="firstdockerdel" --cpus="0.5" --memory="2g" \
-e DELEGATE_NAME=firstdockerdel \
-e NEXT_GEN=true \
-e DELEGATE_TYPE=DOCKER \
-e ACCOUNT_ID=PUT_YOUR_HARNESS_ACCOUNTID_HERE \
-e DELEGATE_TOKEN=PUT_YOUR_DELEGATE_TOKEN_HERE \
-e MANAGER_HOST_AND_PORT=PUT_YOUR_MANAGER_HOST_AND_PORT_HERE \
harness/delegate:22.11.77436
PUT_YOUR_MANAGER_HOST_AND_PORT_HERE should be replaced by the Harness Manager Endpoint listed in the table in the Helm Chart section above.
To use local runner build infrastructure, modify the delegate command using the instructions in Use local runner build infrastructure.
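After the container starts, you can confirm the Docker delegate registered by tailing its logs. A sketch; the container name matches the --name flag in the docker run command above:

```shell
# Check the container is running.
docker ps --filter name=firstdockerdel
# Watch for a successful registration message (Ctrl+C to stop following).
docker logs -f firstdockerdel
```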
Verify Delegate Connectivity
Click Continue, and in a few moments, after the health checks pass, your Delegate will be available for you to use. Click Done and verify your new Delegate appears on the list.
You can now route communication to external systems in Harness connectors and pipelines by simply selecting this delegate via a delegate selector.
With the delegate installation out of the way, you can now wire your AWS credentials to Harness.
Wiring Your AWS Credentials to Harness
Especially with a private registry/repository, credentials for the repository are usually scattered around a few scripts or entered into a system right before deployment. Harness has a concept of a Cloud Connector, which manages the persistence of your cloud provider credentials and uses them on your behalf.
To get started with creating an AWS Cloud Provider connector, head to Account Settings -> Account Resources -> Connectors -> + New Connector -> AWS.
Name: my_aws_connector
Click Continue and pick your authentication mechanism. If you have an AWS Access Key and Secret, you can enter those as encrypted credentials, which get stored in the Harness Secrets Manager.
For example, you can create “my_aws_access” for your Access Key and “my_aws_secret” for the Secret.
Click Continue and select how you would like to connect. Connecting through the Harness Delegate created in the steps above works fine. Select the Harness Delegate you created, or let Harness select the best available delegate [though if you have only one, this is the same operation]. Once selected, test the credentials.
With your credentials wired, you are now ready to create a Pipeline to deploy your private image.
Creating a Harness CD Pipeline For Your Private Image
With the Delegate and AWS credentials out of the way, you are now ready to create your first Pipeline. You will be deploying a Docker Image with a Kubernetes Manifest coming from Docker Hub and GitHub respectively. The following steps will walk you through how to create a Pipeline with those resources.
- Deployments -> Pipelines -> + Create new Pipeline
- Name: my-first-pipeline
- Setup: in-line
Click Start and add a Pipeline Stage by clicking the +Add Stage icon.
Select Deploy as the Stage. Next, name the stage “Deploy Grafana” with Service as the type.
Then click Set Up Stage.
The first step is to define the Service by clicking + New Service. You can name the Service “my-grafana-instance”.
Once Saved, the next step is to point to a Grafana Kubernetes Manifest. In the Service Definition section, select Kubernetes as the Deployment Type. Then you can add a Manifest from GitHub.
Select + Add Manifest and, in the Manifest Wizard, select K8s Manifest.
Click Continue and select GitHub as the Manifest Source/Store.
Now you are ready to create a GitHub Connector. GitHub requires Personal Access Tokens [PATs] for git operations. See below if you do not have one set up.
Wiring GitHub into Harness
Harness will also need access to GitHub to grab the Kubernetes manifests.
GitHub Wiring
As of 2021, GitHub requires token authentication, i.e., no more passwords for git operations. If you have not created a Personal Access Token before:
- GitHub -> Settings -> Developer Settings -> Personal Access Tokens
- Name: harness
- Scopes: repo
- Expiration: 30 days
Make sure to copy down the token that is generated.
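You can sanity-check the token before wiring it into Harness. A sketch; GITHUB_PAT is a placeholder environment variable holding the token you just copied:

```shell
# A valid token returns your GitHub user record; an invalid one returns a 401.
curl -s -H "Authorization: token $GITHUB_PAT" https://api.github.com/user
```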
In the GitHub Connector Wizard, there are a few steps to wire in your GitHub credentials. For this example, authenticate against the repository housing the manifest.
Name: my-gh-connector
Click Next. Now you can set up authentication against the repository.
- URL Type: Repository
- Connection Type: HTTP
- GitHub URL: https://github.com/harness-apps/developer-hub-apps
Click Next and provide your GitHub Username and Personal Access Token which can be stored securely in the Harness Secrets Manager.
Click on the Personal Access Token to configure your PAT.
- Secrets Manager: Harness Built-in Secret Manager
- Secret Name: github_pat
Once you hit Save and then Continue, select a Harness Delegate to run the operation on. If you have more than one Harness Delegate, you can narrow the scope; for this example, “Use any available delegate” works since there is only one.
Click Save and Continue to validate the GitHub Connection.
Next, you will need to wire in the Manifest Details which are being pulled from https://github.com/harness-apps/developer-hub-apps/tree/main/applications/grafana.
Looking at the GitHub structure, there are two files to leverage: the deployment manifest and a values.yaml.
You can wire those two files into Harness.
- Manifest Name: grafana
- Branch: main
- File/Folder Path: /applications/grafana/grafana.yaml
- Values.yaml: /applications/grafana/grafana_values.yaml
Harness has the ability to read input variables in your Pipeline. In a deployment manifest, you can wire in variables to be picked up by Harness. Later, when executing the Pipeline, Harness can prompt you for which tag of the image to deploy via {{.Values.image}}.
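As an illustration of that wiring (a hypothetical sketch, not the exact contents of the example repo), the values file exposes an image key and the manifest consumes it with Go templating:

```yaml
# values.yaml (hypothetical sketch): the image key Harness resolves at runtime
image: <+artifact.image>
---
# deployment manifest (hypothetical sketch): consumes the value via Go templating
spec:
  containers:
    - name: grafana
      image: {{.Values.image}}
```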
Click Submit, and now your Grafana Manifests will be wired to the Pipeline.
Now you are ready to wire in the private artifact.
Wiring In Private Registry / Artifact
In the Artifacts section, click + Add Primary Artifact. Select ECR as the Artifact Repository Type.
Click Continue and select the AWS Connector you set up before as the credentials to connect with. For the next step, you will need to know the AWS Region your ECR instance is running in.
- Region: Your ECR Region
- Image Path: my-private-repo [if using the example ECR repo]
Click Submit and your private image will be wired to the Pipeline.
Click Continue, and now you are ready to wire in where and how you want to deploy.
Where Your Pipeline Deploys To
The next step is to define the infrastructure, or where your Pipeline will deploy to. The “where” is defined as a Harness Environment.
A Harness Environment is your deployment target. You can create a new Harness Environment via the wizard by clicking on + New Environment.
- Name: my-k8s-environment
- Environment Type: Pre-Production
Click Save and now you are ready to wire in your Kubernetes cluster. Since your Delegate should be running in a Kubernetes cluster, you can create a reference to this cluster with a Cluster Connector.
Select “Direct Connection” Kubernetes, then fill out the Cluster Details with a New Connector by clicking on the drop-down.
Click on Select Connector and then + New Connector. You can then give a name to your Kubernetes cluster.
- Name: my-k8s-cluster
Click Continue and select “Use the credentials of a specific Harness Delegate” to connect.
Click Continue and select the Harness Delegate you installed into your Kubernetes cluster, e.g., firstk8sdel.
Click Finish and you can enter a namespace that is available on the Kubernetes cluster.
- Namespace: default
Click Continue and now you are ready to configure how you want your deployment to execute.
How Your Pipeline Deploys
Clicking Continue, you are now ready to configure the Execution Strategy, or the “how” your Pipeline executes. Harness can guide you through several deployment strategies, such as a Rolling Deployment or a Canary Deployment. For this example, a Rolling Deployment is simplest.
Select “Rolling Kubernetes” then click on Use Strategy. Now you are ready to save this Pipeline and execute the Pipeline to create a deployment.
Running Your Harness Pipeline
After the setup steps, you are on your way to a repeatable deployment process. Click Run in the Pipeline window.
Here you will see only one tag [or the private tags you have available for your image(s)]. Select “latest” if leveraging the example pushed image and click Run Pipeline.
After a few moments, your deployment is complete!
Head back to your terminal and run a kubectl command to get the address [External-IP] of what you just deployed. If you are using minikube, you might have to run minikube tunnel to expose a Kubernetes Service.
kubectl get services -A
Head to the External-IP on port 3000 to see Grafana, e.g., http://34.132.72.143:3000/login. By default, the Grafana username and password are admin/admin.
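To verify without a browser, Grafana exposes a health endpoint; a sketch, with `<EXTERNAL-IP>` as a placeholder for the address from the previous step:

```shell
# Returns a small JSON payload, including "database": "ok", when Grafana is healthy.
curl http://<EXTERNAL-IP>:3000/api/health
```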
Congratulations on deploying a private image in a Continuous Delivery Pipeline!