This topic contains troubleshooting information for error messages and other issues that can arise with Harness CI. For more Harness troubleshooting guidance, go to Troubleshooting Harness.
Git connector fails to connect to the SCM service
The following SCM service errors can occur with Git connectors.
SCM request failed with: UNKNOWN
This error may occur if your Git connector uses SSH authentication. To resolve this error, make sure HTTPS is enabled on port 443. This is the protocol and port used by the Harness connection test for Git connectors.
SCM connection errors when using self-signed certificates
If you have configured your build infrastructure to use self-signed certificates, your builds may fail when the Git connector attempts to connect to the SCM service. Build logs may contain the following error messages:
Connectivity Error while communicating with the scm service
Unable to connect to Git Provider, error while connecting to scm service
To resolve this issue, add SCM_SKIP_SSL=true to the environment section of the delegate YAML.
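For example, the environment section of a docker-compose.yml delegate configuration might look like the following sketch. The image tag, account ID, token, and the other variables shown are assumptions based on a typical Docker delegate setup; only SCM_SKIP_SSL=true is the setting this fix requires.

```yaml
# Sketch of a Docker delegate service with SCM_SKIP_SSL enabled.
# ACCOUNT_ID, DELEGATE_TOKEN, and the image tag are placeholders.
services:
  harness-delegate:
    image: harness/delegate:latest
    environment:
      - ACCOUNT_ID=<your-account-id>
      - DELEGATE_TOKEN=<your-delegate-token>
      - MANAGER_HOST_AND_PORT=https://app.harness.io
      - SCM_SKIP_SSL=true
```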
For more information about self-signed certificates, delegates, and delegate environment variables, go to:
- Delegate environment variables
- Docker delegate environment variables
- Set up a local runner build infrastructure
- Install delegates
- Configure a Kubernetes build farm to use self-signed certificates
Truncated execution logs
Each CI step supports a maximum log size of 5MB. Harness truncates logs larger than 5MB.
Furthermore, there is a single-line limit of 70KB. If an individual line exceeds this limit, it is truncated and ends with
(log line truncated).
Step logs disappear
If step logs disappear from pipelines that are using a Kubernetes cluster build infrastructure, you must either allow outbound communication with
storage.googleapis.com or contact Harness Support to enable the
CI_INDIRECT_LOG_UPLOAD feature flag.
For more information, refer to the Harness documentation on configuring delegate connectivity.
AKS builds timeout
Azure Kubernetes Service (AKS) security group restrictions can cause builds running on an AKS build infrastructure to timeout.
If you have a custom network security group, it must allow inbound traffic on port 8080, which the Delegate service uses.
For more information, refer to the following Microsoft Azure troubleshooting documentation: A custom network security group blocks traffic.
CI pods appear to be evicted by Kubernetes autoscaling
Harness CI pods shouldn't be evicted by Kubernetes node autoscaling, because Kubernetes doesn't evict pods that aren't backed by a controller object. However, if you notice sporadic pod evictions or failures in the Initialize step in your Build logs, add an annotation to your Kubernetes cluster build infrastructure settings that prevents the autoscaler from evicting the pods.
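As a sketch, assuming the standard Kubernetes cluster autoscaler annotation is used for this purpose, the annotation setting might look like:

```yaml
# Tells the Kubernetes cluster autoscaler not to evict pods
# that carry this annotation.
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```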
Delegate is not able to connect to the created build farm
If you get this error when using a Kubernetes cluster build infrastructure, and you have confirmed that the delegate is installed in the same cluster where the build is running, you may need to allow port 20001 in your network policy to allow pod-to-pod communication.
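To illustrate, a minimal NetworkPolicy that permits pod-to-pod ingress on port 20001 might look like the following sketch. The policy name and namespace are assumptions; adjust the selectors to match your cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-pod-to-pod-20001    # hypothetical name
  namespace: harness-delegate-ng  # adjust to your delegate namespace
spec:
  podSelector: {}                 # applies to all pods in the namespace
  ingress:
    - from:
        - podSelector: {}         # allow traffic from pods in the same namespace
      ports:
        - protocol: TCP
          port: 20001
```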
For more delegate and Kubernetes troubleshooting guidance, go to Troubleshooting Harness.
Docker Hub rate limiting
By default, Harness pulls Harness images from Docker Hub using anonymous access. If you experience rate limiting issues when pulling images, use a Docker connector to connect to the Harness container image registry and provide login information in the connector's authentication settings.
Out of memory errors with Gradle
If a build that uses Gradle experiences out of memory errors, add -XX:+UseContainerSupport to the build's Java options. Your Java options must use UseContainerSupport instead of
UseCGroupMemoryLimitForHeap, which was removed in JDK 11.
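For example, one way to apply this option to a Gradle build is through org.gradle.jvmargs in gradle.properties. The file location and heap size below are assumptions; tune them for your project.

```properties
# gradle.properties — sketch; adjust the heap size for your build
org.gradle.jvmargs=-Xmx2g -XX:+UseContainerSupport
```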
Can't use the built-in Harness Docker Connector with Harness Cloud build infrastructure
Depending on when your account was created, the built-in Harness Docker Connector (account.harnessImage) might be configured to connect through a Harness Delegate instead of the Harness Platform. In this case, attempting to use this connector with Harness Cloud build infrastructure generates the following error:
While using hosted infrastructure, all connectors should be configured to go via the Harness platform instead of via the delegate. Please update the connectors: [harnessImage] to connect via the Harness platform instead. This can be done by editing the connector and updating the connectivity to go via the Harness platform.
To resolve this error, you can either modify the Harness Docker Connector or use another Docker connector that you have already configured to connect through the Harness Platform.
To change the connector's connectivity settings:
- Go to Account Settings and select Account Resources.
- Select Connectors and select the Harness Docker Connector (ID: harnessImage).
- Select Edit Details.
- Select Continue until you reach Select Connectivity Mode.
- Select Change and select Connect through Harness Platform.
- Select Save and Continue and select Finish.
Can't connect to Docker daemon
Error messages like
cannot connect to the Docker daemon indicate that you might have multiple steps attempting to run Docker at the same time. This can occur when running GitHub Actions in stages that have Docker-in-Docker (DinD) Background steps.
Actions that launch DinD: You can't use GitHub Actions that launch DinD in the same stage where DinD is already running in a Background step. If possible, run the GitHub Action in a separate stage or try to find a GitHub Action that doesn't use DinD.
Actions that launch the Docker daemon: If your Action attempts to launch the Docker daemon, and you have a DinD Background step in the same stage, you must add
PLUGIN_DAEMON_OFF: true as a stage variable. For example:
- name: PLUGIN_DAEMON_OFF
  type: String
  value: "true"
Harness Cloud: You don't need DinD Background steps with Harness Cloud build infrastructure, and you can run GitHub Actions in Action steps instead of Plugin steps.