
Google Cloud Functions deployments

This topic explains how to deploy new Cloud Functions to Google Cloud using Harness.

Supported versions

Harness integrates with Google Cloud Functions, Google's serverless offering. Support differs between 1st gen and 2nd gen functions.

Harness Cloud Functions 1st gen support

Harness supports the following:

  • Basic deployments.
    • Harness deploys the new function and terminates the old one by sending 100% of traffic to the new function.
  • For rollback, Harness does not perform revision-based rollback. Instead, if a deployment fails, Harness takes a snapshot of the last known good state of the function and reapplies it.

Harness Cloud Functions 2nd gen support

Harness supports the following:

  • Basic, blue green, and canary deployments.
  • Harness leverages the Cloud Run revisions capability that Google Cloud offers to configure rollback.

Cloud Functions limitations

  • For Google Cloud Functions 2nd gen, Harness does not support Google Cloud Source Repository at this time. Only Google Cloud Storage is supported.
  • For Google Cloud Functions 1st gen, Harness supports both Google Cloud Storage and Google Cloud Source.
  • Currently, the OIDC connectivity mode is not compatible with Google Cloud Functions. You can't deploy Google Cloud Functions with OIDC-enabled GCP connectors.

Deployment summary

Here's a high-level summary of the setup steps.

Harness setup summary
  1. Create a Harness CD pipeline.
  2. Add a Deploy stage.
  3. Select the deployment type Google Cloud Functions, and then select Set Up Stage.
  4. Select Add Service.
    1. Add the function definition to the new Cloud Function service. You can paste in the YAML or link to a Git repo hosting the YAML.
    2. Save the new service.
  5. Select New Environment, name the new environment and select Save.
  6. In Infrastructure Definition, select New Infrastructure.
    1. In Google Cloud Provider Details, create or select the Harness GCP connector, GCP project, and GCP region, and select Save.
  7. Select Configure and select the deployment strategy: basic, canary, or blue green.
  8. Harness will automatically add the Deploy Cloud Function step. No further configuration is needed.
    1. For canary and blue green strategies, the Cloud Function Traffic Shift step is also added. In this step's Traffic Percent setting, enter the percentage of traffic to switch to the new revision of the function.
    2. For canary deployments, you can add multiple Cloud Function Traffic Shift steps to roll out the shift gradually.
    3. For blue green deployments, you can simply use 100 in the Traffic Percent setting.
  9. Select Save, and then run the pipeline.
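The steps above produce a Deploy stage similar to the following sketch. The service, environment, and infrastructure identifiers (Google_Function, GCF, dev) are illustrative placeholders; this example uses the basic strategy.

```yaml
# Sketch of the Deploy stage created by the setup steps above (basic strategy).
# serviceRef, environmentRef, and infrastructure identifier are placeholders.
- stage:
    name: Deploy Cloud Function
    identifier: deploy_cloud_function
    type: Deployment
    spec:
      deploymentType: GoogleCloudFunctions
      service:
        serviceRef: Google_Function
      environment:
        environmentRef: GCF
        infrastructureDefinitions:
          - identifier: dev
      execution:
        steps:
          - step:
              name: Deploy Cloud Function
              identifier: deployCloudFunction
              type: DeployCloudFunction
              timeout: 10m
              spec: {}
        rollbackSteps: []
```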

Cloud Functions permission requirements

note

Harness supports Google Cloud Functions 1st and 2nd gen. There are minor differences in the permissions required by each generation. For a detailed breakdown, go to Access control with IAM from Google.

The permissions listed below work for both 1st and 2nd gen.

When you set up a Harness GCP connector to connect Harness with your GCP account, the GCP IAM user or service account must have the appropriate permissions assigned to their account.

Cloud Functions minimum permissions

Cloud Functions supports the basic roles of Editor, Admin, Developer, and Viewer. For details, go to Cloud Functions IAM Roles.

You can use the Admin role or the permissions below.

Here are the minimum permissions required for deploying Cloud Functions.

  • Cloud Functions API:
    • cloudfunctions.functions.create: Allows the user to create new functions.
    • cloudfunctions.functions.update: Allows the user to update existing functions.
    • cloudfunctions.functions.list: Allows the user to list existing functions.
    • cloudfunctions.operations.get: Allows the user to get the status of a function deployment.
  • Cloud Run API:
    • run.services.create: Allows the user to create new Cloud Run services.
    • run.services.update: Allows the user to update existing Cloud Run services.
    • run.services.get: Allows the user to get information about a Cloud Run service.
    • run.revisions.create: Allows the user to create new revisions for a Cloud Run service.
    • run.revisions.get: Allows the user to get information about a revision of a Cloud Run service.
    • run.routes.create: Allows the user to create new routes for a Cloud Run service.
    • run.routes.get: Allows the user to get information about a route of a Cloud Run service.

Note that these are the minimum set of permissions required for deploying Cloud Functions and running them on Cloud Run via API. Depending on your use case, you may need additional permissions for other GCP services such as Cloud Storage, Pub/Sub, or Cloud Logging.
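If you prefer not to grant the broad Admin role, the minimum permissions above can be bundled into a custom IAM role. The following is a sketch; the role ID, title, and file name are illustrative:

```yaml
# role.yaml -- a hypothetical custom role bundling the minimum permissions.
# Create it with:
#   gcloud iam roles create harnessCloudFunctionsDeployer \
#     --project=<YOUR_PROJECT_ID> --file=role.yaml
title: Harness Cloud Functions Deployer
description: Minimum permissions for Harness Cloud Functions deployments
stage: GA
includedPermissions:
  - cloudfunctions.functions.create
  - cloudfunctions.functions.update
  - cloudfunctions.functions.list
  - cloudfunctions.operations.get
  - run.services.create
  - run.services.update
  - run.services.get
  - run.revisions.create
  - run.revisions.get
  - run.routes.create
  - run.routes.get
```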

Also, note that in order to call the Cloud Functions and Cloud Run APIs, the user or service account must have appropriate permissions assigned for calling the APIs themselves. This may include permissions like cloudfunctions.functions.getIamPolicy, run.services.getIamPolicy, cloudfunctions.functions.testIamPermissions, run.services.testIamPermissions, and iam.serviceAccounts.actAs.

For details, go to Cloud Functions API IAM permissions and Cloud Functions Access control with IAM.

Harness also pulls the function ZIP file from your Google Cloud Storage bucket.

For Google Cloud Storage (GCS), the following roles are required:

  • Storage Object Viewer (roles/storage.objectViewer)
  • Storage Object Admin (roles/storage.objectAdmin)

For more information, go to the GCP documentation about Cloud IAM roles for Cloud Storage.

Ensure the Harness Delegate you have installed can reach storage.cloud.google.com and your GCR registry host name, for example gcr.io.

For Google Cloud Source, the following roles are required:

  • Source Repository Reader (roles/source.reader)
  • Source Repository Writer (roles/source.writer)

For more information, go to Cloud Source Repositories Access control with IAM.

Cloud Functions services

The Harness Cloud Functions service contains the following:

  • Function Definition:
    • You can enter the function manifest YAML in Harness or use a remote function manifest YAML in a Git repo.
  • Artifacts:
    • You add a connection to the function ZIP file in Google Cloud Storage (1st and 2nd gen) or Google Cloud Source (1st gen).
How the Function Definition and Artifact work together

The function manifest YAML in Function Definition and the function zip file in Artifacts work together to define and deploy a Google Cloud Function.

The function manifest YAML file is a configuration file that defines the function's name, runtime environment, entry point, and other configuration options. The YAML file is used to specify the function's metadata, but it does not include the function's code.

The function code is packaged in a zip file that contains the function's source code, dependencies, and any other necessary files. The zip file must be created based on the manifest file, which specifies the entry point and runtime environment for the function.

When you deploy a function using the manifest file, the Cloud Functions service reads the configuration options from the YAML file and uses them to create the function's deployment package. The deployment package includes the function code (from the zip file), the runtime environment, and any dependencies or configuration files specified in the manifest file.

Function Definition

The Function Definition is the parameter file you use to define your Google Function. The parameter file you add here maps to one Google Function.

To use Google Cloud Functions, in the Harness service, in Deployment Type, you select Google Cloud Functions.

Next, in Google Cloud Function Environment Version, you select 1st gen or 2nd gen.

note

The option you select in Google Cloud Function Environment Version determines the Google Cloud Function options displayed in the UI. You cannot use an expression in this setting.

1st gen function definition example

Here is a Function Definition example.

function:
  name: my-function
  runtime: python37
  entryPoint: my_function
  httpsTrigger:
    securityLevel: SECURE_OPTIONAL

In httpsTrigger, you do not need to specify url.

2nd gen function definition examples

Here are some Function Definition examples.

Sample Google Function definition included in Harness
# The following are the minimum set of parameters required to create a Google Cloud Function.
# If you use a remote manifest, please ensure it includes all of these parameters.

function:
  name: <functionName>
  build_config:
    runtime: nodejs18
    entry_point: helloGET
  environment: GEN_2
function_id: <functionName>
Example 1: A basic Cloud Function that triggers on an HTTP request
function:
  name: my-http-function
  build_config:
    runtime: nodejs14
    entry_point: myFunction
  environment: GEN_2
function_id: my-http-function
Example 2: A Cloud Function that uses environment variables
function:
  name: my-env-function
  build_config:
    runtime: python38
    entry_point: my_function
    environment_variables:
      MY_VAR: my-value
  environment: GEN_2
function_id: my-env-function

Example 3: A Cloud Function that uses Cloud Storage as a trigger
function:
  name: my-storage-function
  build_config:
    runtime: go111
    entry_point: MyFunction
  service_config:
    available_memory: 512
    timeout: 180s
  environment: GEN_2
  event_trigger:
    trigger: projects/_/buckets/my-bucket
    event_type: google.storage.object.finalize
function_id: my-storage-function

Example 4: A Cloud Function that uses service config and environment variables
function:
  name: canaryDemo-<+env.name>
  service_config:
    environment_variables:
      MY_ENV_VAR: "True"
      GCF_FF_KEY: "<+env.variables.GCF_FF_KEY>"
  build_config:
    runtime: python39
    entry_point: hello_world
  environment: GEN_2
function_id: canaryDemo-<+env.name>
Example 5: A Cloud Function that uses secret environment variables

This example uses the secret_environment_variables parameter. It has the information necessary to fetch the secret value from secret manager and expose it as an environment variable. For more information, go to SecretEnvVar in Google docs.

# The following are the minimum set of parameters required to create a Google Cloud Function.
# If you use a remote manifest, please ensure it includes all of these parameters.

function:
  name: "my-secret-env-func"
  description: "Using Secret Environment Variables"
  region: "us-east1"
  runtime: "nodejs16"
  build_config:
    entry_point: myFunction
  max_instances: 1
  event_trigger:
    event_type: "providers/cloud.pubsub/eventTypes/topic.publish"
    trigger: "projects/<project>/topics/<topic>"
  service_config:
    environment_variables:
      MY_ENV_VAR1: value1
      MY_ENV_VAR2: value2
    secret_environment_variables:
      - key: "MY_SECRET_ENV_VAR1"
        project_id: <project_id>
        secret: "<secretName1>"
        version: "<secretVersion1>"
      - key: "MY_SECRET_ENV_VAR2"
        project_id: <project_id>
        secret: "<secretName2>"
        version: "<secretVersion2>"

Artifacts

In Artifacts, you add the location of the function ZIP file in Google Cloud Storage that corresponds to the YAML manifest file you added in Function Definition.

You use a Harness GCP connector to connect to your Cloud Storage bucket. The GCP connector credentials should meet the requirements in Cloud Functions permission requirements.

Adding a Cloud Function service

Here's a Cloud Functions service YAML example showing how a Harness Cloud Function service is configured.

service:
  name: helloworld
  identifier: Google_Function
  serviceDefinition:
    type: GoogleCloudFunctions
    spec:
      manifests:
        - manifest:
            identifier: GoogleFunction
            type: GoogleCloudFunctionDefinition
            spec:
              store:
                type: Harness
                spec:
                  files:
                    - /GoogleFunctionDefinition.yaml
      artifacts:
        primary:
          primaryArtifactRef: <+input>
          sources:
            - spec:
                connectorRef: gcp_connector
                project: cd-play
                bucket: cloud-functions-automation-bucket
                artifactPath: helloworld
              identifier: helloworld
              type: GoogleCloudStorage

Using service variables in manifest YAML

Service variables are a powerful way to template your services or make them more dynamic.

In the Variables section of the service, you can add service variables and then reference them in any of the manifest YAML files you added to the service.

For example, you could create a variable named entryPoint for the manifest entryPoint setting and set its value as a fixed value, runtime input, or expression.

Next, in your manifest YAML file, you can reference the variable using the expression <+serviceVariables.entryPoint>.
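For example, a 2nd gen manifest that takes its entry point from the service variable might look like the following sketch (the function name and runtime are illustrative):

```yaml
function:
  name: my-function
  build_config:
    runtime: nodejs18
    # Resolved at runtime from the service variable named entryPoint.
    entry_point: <+serviceVariables.entryPoint>
  environment: GEN_2
function_id: my-function
```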

Now, when you add that service to a pipeline, you will be prompted to enter a value for this variable in the pipeline Services tab. The value you provide is then used as the entryPoint in your manifest YAML.

Cloud Functions environments

The Cloud Function environment contains an Infrastructure Definition that identifies the GCP account, project, and region to use.

Here's an example of a Cloud Functions environment.

environment:
  name: GCF
  identifier: GCF
  description: "GCF environment"
  tags: {}
  type: PreProduction
  orgIdentifier: default
  projectIdentifier: serverlesstest
  variables: []

Define the infrastructure

You define the target infrastructure for your deployment in the Environment settings of the pipeline stage. You can define an environment separately and select it in the stage, or create the environment within the stage Environment tab.

There are two methods of specifying the deployment target infrastructure:

  • Pre-existing: the target infrastructure already exists and you simply need to provide the required settings.
  • Dynamically provisioned: the target infrastructure will be dynamically provisioned on-the-fly as part of the deployment process.

For details on Harness provisioning, go to Provisioning overview.

GCP connector

You will need a Harness GCP Connector with correct permissions to deploy Cloud Functions in GCP.

You can pick the same GCP connector you used in the Harness service to connect to Google Cloud Storage for the artifact, or create a new connector.
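For reference, a GCP connector that authenticates with a service account key stored as a Harness secret might look like the following sketch (the identifier and secret reference are illustrative):

```yaml
connector:
  name: gcp_connector
  identifier: gcp_connector
  type: Gcp
  spec:
    credential:
      type: ManualConfig
      spec:
        # Harness secret holding the GCP service account key JSON.
        secretKeyRef: gcp_service_account_key
    delegateSelectors: []
```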

Pre-existing Functions infrastructure

Here's a YAML example of a Cloud Function infrastructure definition.

infrastructureDefinition:
  name: dev
  identifier: dev
  description: "dev google cloud infrastructure"
  tags: {}
  orgIdentifier: default
  projectIdentifier: serverlesstest
  environmentRef: dev
  deploymentType: GoogleCloudFunctions
  type: GoogleCloudFunctions
  spec:
    connectorRef: gcp_connector
    project: cd-play
    region: us-central1
    allowSimultaneousDeployments: false

Dynamically provisioned Functions infrastructure

Here is a summary of the steps to dynamically provision the target infrastructure for a deployment:

  1. Add dynamic provisioning to the CD stage:

    1. In a Harness Deploy stage, in Environment, enable the option Provision your target infrastructure dynamically during the execution of your Pipeline.

    2. Select the type of provisioner that you want to use.

      Harness automatically adds the provisioner steps for the provisioner type you selected.

    3. Configure the provisioner steps to run your provisioning scripts.

    4. Select or create a Harness infrastructure in Environment.

  2. Map the provisioner outputs to the Infrastructure Definition:

    1. In the Harness infrastructure, enable the option Map Dynamically Provisioned Infrastructure.
    2. Map the provisioning script/template outputs to the required infrastructure settings.

Supported provisioners

The following provisioners are supported for Google Functions deployments:

  • Terraform
  • Terragrunt
  • Terraform Cloud
  • Shell Script

Adding dynamic provisioning to the stage

To add dynamic provisioning to a Harness pipeline Deploy stage, do the following:

  1. In a Harness Deploy stage, in Environment, enable the option Provision your target infrastructure dynamically during the execution of your Pipeline.

  2. Select the type of provisioner that you want to use.

    Harness automatically adds the necessary provisioner steps.

  3. Set up the provisioner steps to run your provisioning scripts.

For documentation on each of the required steps for the provisioner you selected, go to the following topics:

Mapping provisioner output

Once you set up dynamic provisioning in the stage, you must map outputs from your provisioning script/template to specific settings in the Harness Infrastructure Definition used in the stage.

  1. In the same CD Deploy stage where you enabled dynamic provisioning, select or create (New Infrastructure) a Harness infrastructure.

  2. In the Harness infrastructure, in Select Infrastructure Type, select Google Cloud Platform if it is not already selected.

  3. In Google Cloud Provider Details, enable the option Map Dynamically Provisioned Infrastructure.

    A Provisioner setting is added and configured as a runtime input.

  4. Map the provisioning script/template outputs to the required infrastructure settings.

To provision the target deployment infrastructure, Harness needs specific infrastructure information from your provisioning script. You provide this information by mapping specific Infrastructure Definition settings in Harness to outputs from your template/script.

For Google Cloud Functions, Harness needs the following settings mapped to outputs:

  • Project
  • Region
note

Ensure the Project and Region settings are set to the Expression option.

For example, here's a snippet of a Terraform script that provisions the infrastructure for a Cloud Function deployment and includes the required outputs:


provider "google" {
  credentials = file("<PATH_TO_YOUR_GCP_SERVICE_ACCOUNT_KEY_JSON>")
  project     = "<YOUR_GCP_PROJECT_ID>"
  region      = "us-central1" # Replace with your desired region
}

resource "google_project_service" "cloudfunctions" {
  project = "<YOUR_GCP_PROJECT_ID>"
  service = "cloudfunctions.googleapis.com"
}

resource "google_cloudfunctions_function" "my_function" {
  name         = "my-cloud-function"
  description  = "My Cloud Function"
  runtime      = "nodejs14"
  trigger_http = true

  available_memory_mb = 256
  timeout             = 60 # seconds

  # Optionally, you can define environment variables for your function
  environment_variables = {
    VAR1 = "value1"
    VAR2 = "value2"
  }
}

output "project_name" {
  value = google_cloudfunctions_function.my_function.project
}

output "region_name" {
  value = google_cloudfunctions_function.my_function.region
}

In the Harness Infrastructure Definition, you map outputs to their corresponding settings using expressions in the format <+provisioner.OUTPUT_NAME>, such as <+provisioner.project_name>.
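Putting it together, a mapped Infrastructure Definition might look like the following sketch, with Project and Region set to expressions that consume the provisioner outputs (names and identifiers are illustrative):

```yaml
infrastructureDefinition:
  name: dev-provisioned
  identifier: dev_provisioned
  environmentRef: dev
  deploymentType: GoogleCloudFunctions
  type: GoogleCloudFunctions
  spec:
    connectorRef: gcp_connector
    # Mapped from the Terraform outputs project_name and region_name.
    project: <+provisioner.project_name>
    region: <+provisioner.region_name>
    allowSimultaneousDeployments: false
```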

Figure: Mapped outputs

Cloud Functions execution strategies

Harness Cloud Functions deployments support the following deployment strategies:

  • Basic
  • Canary
  • Blue Green

Basic

The basic deployment execution strategy uses the Deploy Cloud Function step. The Deploy Cloud Function step deploys the new function version and routes 100% of traffic over to the new function version.

YAML example of Deploy Cloud Function step
execution:
  steps:
    - step:
        name: Deploy Cloud Function
        identifier: deployCloudFunction
        type: DeployCloudFunction
        timeout: 10m
        spec: {}

Canary

Harness provides a step group to perform the canary deployment.

The step group consists of:

  • Deploy Cloud Function With No Traffic step: deploys the new function version but does not route any traffic over to the new function version.
  • First Cloud Function Traffic Shift step: routes 10% of traffic over to the new function version.
  • Second Cloud Function Traffic Shift step: routes 100% of traffic over to the new function version.
YAML example of canary deployment step group
execution:
  steps:
    - stepGroup:
        name: Canary Deployment
        identifier: canaryDeployment
        steps:
          - step:
              name: Deploy Cloud Function With No Traffic
              identifier: deployCloudFunctionWithNoTraffic
              type: DeployCloudFunctionWithNoTraffic
              timeout: 10m
              spec: {}
          - step:
              name: Cloud Function Traffic Shift
              identifier: cloudFunctionTrafficShiftFirst
              type: CloudFunctionTrafficShift
              timeout: 10m
              spec:
                trafficPercent: 10
          - step:
              name: Cloud Function Traffic Shift
              identifier: cloudFunctionTrafficShiftSecond
              type: CloudFunctionTrafficShift
              timeout: 10m
              spec:
                trafficPercent: 100

Blue Green

Harness provides a step group to perform the blue green deployment.

The step group consists of:

  • Deploy Cloud Function With No Traffic step: deploys the new function version but does not route any traffic over to the new function version.
  • Cloud Function Traffic Shift step: routes 100% of traffic over to the new function version.

You can also route traffic incrementally using multiple Cloud Function Traffic Shift steps with gradually increasing routing percentages.

YAML example of blue green deployment step group
execution:
  steps:
    - stepGroup:
        name: Blue Green Deployment
        identifier: blueGreenDeployment
        steps:
          - step:
              name: Deploy Cloud Function With No Traffic
              identifier: deployCloudFunctionWithNoTraffic
              type: DeployCloudFunctionWithNoTraffic
              timeout: 10m
              spec: {}
          - step:
              name: Cloud Function Traffic Shift
              identifier: cloudFunctionTrafficShift
              type: CloudFunctionTrafficShift
              timeout: 10m
              spec:
                trafficPercent: 100

Rollbacks

If deployment failure occurs, the stage or step failure strategy is initiated. Typically, this runs the Rollback Cloud Function step in the Rollback section of Execution. Harness adds the Rollback Cloud Function step automatically.

The Harness rollback capabilities are based on the Google Cloud Function revisions available in Google Cloud.

If the function already exists (for example, at revision 10) and a new revision (revision 11) is deployed, but a step after the deployment fails and triggers rollback, Harness deploys a new function revision (revision 12) that contains the artifact and metadata of revision 10.