Python
You can build and test a Python application using a Linux platform on Harness Cloud or a self-managed Kubernetes cluster build infrastructure.
This guide assumes you've created a Harness CI pipeline.
Install dependencies
Use Run steps to install dependencies in the build environment.
- Harness Cloud
- Self-managed
```yaml
- step:
    type: Run
    identifier: dependencies
    name: Dependencies
    spec:
      shell: Sh
      command: |-
        python -m pip install --upgrade pip
        pip install -r requirements.txt
      envVariables:
        PIP_CACHE_DIR: "/root/.cache"
```
```yaml
- step:
    type: Run
    identifier: dependencies
    name: Dependencies
    spec:
      connectorRef: account.harnessImage
      image: python:latest
      command: |-
        python -m pip install --upgrade pip
        pip install -r requirements.txt
```
Background steps can be used to run dependent services that are needed by steps in the same stage.
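For example, here's a minimal sketch of a Background step that starts a PostgreSQL service for tests to connect to. It assumes your tests need PostgreSQL; the image tag, step names, and credentials are illustrative.

```yaml
- step:
    type: Background
    name: Postgres
    identifier: postgres
    spec:
      connectorRef: account.harnessImage # built-in Docker connector
      image: postgres:14                 # illustrative service image
      envVariables:
        POSTGRES_USER: postgres
        POSTGRES_PASSWORD: password      # illustrative; use a Harness secret in practice
        POSTGRES_DB: test
```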
Cache dependencies
Add caching to your stage.
- Cache Intelligence
- Save and Restore Cache steps
Cache your Python module dependencies with Cache Intelligence.
Add caching to your `stage.spec`:
```yaml
- stage:
    spec:
      caching:
        enabled: true
        key: cache-{{ checksum "requirements.txt" }}
        paths:
          - "/root/.cache"
      sharedPaths:
        - /root/.cache
```
With a self-managed build infrastructure, you can use built-in Save Cache and Restore Cache steps, such as Save Cache to S3 and Restore Cache from S3, to cache dependencies.
Python cache key and path requirements
Python pipelines typically reference `requirements.txt` in the cache key for Save Cache and Restore Cache steps, for example:
```yaml
spec:
  key: cache-{{ checksum "requirements.txt" }}
```
Additionally, `spec.sourcePaths` must include the Python cache directory (typically `/root/.cache`) in the Save Cache step, for example:
```yaml
spec:
  sourcePaths:
    - "/root/.cache"
```
YAML example: Save and restore cache steps
Here's an example of a pipeline with Save Cache to S3 and Restore Cache from S3 steps.
```yaml
steps:
  - step:
      type: RestoreCacheS3
      name: Restore Cache From S3
      identifier: Restore_Cache_From_S3
      spec:
        connectorRef: AWS_Connector
        region: us-east-1
        bucket: your-s3-bucket
        key: cache-{{ checksum "requirements.txt" }}
        archiveFormat: Tar
  - step:
      type: Run
      spec:
        envVariables:
          PIP_CACHE_DIR: "/root/.cache"
        ...
  - step:
      type: SaveCacheS3
      name: Save Cache to S3
      identifier: Save_Cache_to_S3
      spec:
        connectorRef: AWS_Connector
        region: us-east-1
        bucket: your-s3-bucket
        key: cache-{{ checksum "requirements.txt" }}
        sourcePaths:
          - "/root/.cache"
        archiveFormat: Tar
```
Run tests
You can use Run and Test steps to run tests in Harness CI.
These examples run tests in a Run step.
- Harness Cloud
- Self-managed
```yaml
- step:
    type: Run
    name: Test
    identifier: test
    spec:
      shell: Sh
      command: |-
        pip install pytest
        pytest tests.py --junit-xml=report.xml
      envVariables:
        PIP_CACHE_DIR: /root/.cache
```
```yaml
- step:
    type: Run
    name: Test
    identifier: test
    spec:
      connectorRef: account.harnessImage
      image: python:latest
      shell: Sh
      command: |-
        python -m pip install --upgrade pip
        pip install -r requirements.txt
        pip install pytest
        pytest tests.py --junit-xml=report.xml
```
Visualize test results
If you want to view test results in Harness, make sure your test commands produce reports in JUnit XML format.
If you run tests in a Run step, your Run step must include the `reports` specification. The `reports` specification is not required for Test steps (Test Intelligence).
```yaml
reports:
  type: JUnit
  spec:
    paths:
      - report.xml
```
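For context, here's roughly where the `reports` block sits inside a Run step (a minimal sketch; the step and report file names are illustrative):

```yaml
- step:
    type: Run
    name: Test
    identifier: test
    spec:
      shell: Sh
      command: |-
        pip install pytest
        pytest tests.py --junit-xml=report.xml # produce a JUnit XML report
      reports:                                 # nested under the step's spec
        type: JUnit
        spec:
          paths:
            - report.xml
```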
Run tests with Test Intelligence
Test Intelligence is available for Python.
```yaml
- step:
    type: Test
    name: runTestsWithIntelligence
    identifier: runTestsWithIntelligence
    spec:
      connectorRef: account.harnessImage
      image: python:latest
      command: |-
        python3 -m venv .venv
        . .venv/bin/activate
        python3 -m pip install -r requirements/test.txt
        python3 -m pip install -e .
        pytest
      shell: Python
      intelligenceMode: true
```
Test splitting
Harness CI supports test splitting (parallelism) for both Run and Test steps.
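As a sketch, here's one way to shard pytest runs across parallel copies of a Run step. It assumes the Harness `split_tests` tool is available in the build environment and that your test files match `test_*.py`; the parallelism count, glob, and report names are illustrative.

```yaml
- step:
    type: Run
    name: Test
    identifier: test
    strategy:
      parallelism: 4 # run four parallel copies of this step
    spec:
      shell: Sh
      command: |-
        pip install pytest
        # Select this shard's share of test files; file_timing weights shards by prior run times.
        FILES=$(split_tests --glob "**/test_*.py" --split-by file_timing)
        pytest $FILES --junit-xml="result_<+strategy.iteration>.xml"
      reports:
        type: JUnit
        spec:
          paths:
            - "result_<+strategy.iteration>.xml" # one report per shard
```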
Specify version
- Harness Cloud
- Self-managed
Python is pre-installed on Harness Cloud runners. For details about all available tools and versions, go to Platforms and image specifications.
If your application requires a specific Python version, add a Run or GitHub Action step to install it.
Use the setup-python action in a GitHub Action step to install the required Python version.
You will need a personal access token, stored as a secret, with read-only access for GitHub authentication.

Install one Python version
```yaml
- step:
    type: Action
    name: Install python
    identifier: installpython
    spec:
      uses: actions/setup-python@v4
      with:
        python-version: 3.10.10
        token: <+ secrets.getValue("github_token") >
```
Install multiple Python versions
```yaml
- stage:
    strategy:
      matrix:
        pythonVersion:
          - 3.11.2
          - 3.10.10
- step:
    type: Action
    name: Install python
    identifier: installpython
    spec:
      uses: actions/setup-python@v4
      with:
        python-version: <+ stage.matrix.pythonVersion >
        token: <+ secrets.getValue("github_token") >
```
Specify the desired Python Docker image tag in your steps. There is no need for a separate install step when using Docker.

Use a specific Python version
```yaml
- step:
    type: Run
    name: Python Version
    identifier: pythonversion
    spec:
      connectorRef: account.harnessImage
      image: python:3.10.10
      shell: Sh
      command: |-
        python --version
```
Use multiple Python versions
Add a matrix looping strategy to your stage, and then reference the matrix variable in the `image` field of your steps.

```yaml
- stage:
    strategy:
      matrix:
        pythonVersion:
          - 3.11.2
          - 3.10.10
- step:
    type: Run
    name: Python Version
    identifier: pythonversion
    spec:
      connectorRef: account.harnessImage
      image: python:<+ stage.matrix.pythonVersion >
      shell: Sh
      command: |-
        python --version
```
Full pipeline examples
The following full pipeline examples are based on the partial examples above.
- Harness Cloud
- Self-managed
These pipelines use Harness Cloud build infrastructure and Cache Intelligence.
If you copy these examples, replace the placeholder values with appropriate values for your code repo connector and repository name. Depending on your project and organization, you may also need to replace `projectIdentifier` and `orgIdentifier`.

Pipeline with default Python version
```yaml
pipeline:
  name: Test a Python app
  identifier: Test_a_Python_app
  projectIdentifier: default
  orgIdentifier: default
  stages:
    - stage:
        name: Test
        identifier: test
        description: ""
        type: CI
        spec:
          cloneCodebase: true
          caching:
            enabled: true
            key: cache-{{ checksum "requirements.txt" }}
            paths:
              - "/root/.cache"
          execution:
            steps:
              - step:
                  type: Run
                  identifier: dependencies
                  name: Dependencies
                  spec:
                    shell: Sh
                    command: |-
                      python -m pip install --upgrade pip
                      pip install -r requirements.txt
                    envVariables:
                      PIP_CACHE_DIR: "/root/.cache"
              - step:
                  type: Run
                  name: Test
                  identifier: test
                  spec:
                    shell: Sh
                    command: |-
                      pip install pytest
                      pytest tests.py --junit-xml=report.xml
                    envVariables:
                      PIP_CACHE_DIR: /root/.cache
                    reports:
                      type: JUnit
                      spec:
                        paths:
                          - report.xml
          platform:
            os: Linux
            arch: Amd64
          runtime:
            type: Cloud
            spec: {}
          sharedPaths:
            - /root/.cache
  properties:
    ci:
      codebase:
        connectorRef: YOUR_CODE_REPO_CONNECTOR_ID
        repoName: YOUR_REPO_NAME
        build: <+input>
```
Pipeline with multiple Python versions
```yaml
pipeline:
  name: Test a Python app
  identifier: Test_a_Python_app
  projectIdentifier: default
  orgIdentifier: default
  stages:
    - stage:
        name: Test
        identifier: test
        description: ""
        type: CI
        strategy:
          matrix:
            pythonVersion:
              - 3.11.2
              - 3.10.10
        spec:
          cloneCodebase: true
          caching:
            enabled: true
            key: cache-{{ checksum "requirements.txt" }}
            paths:
              - "/root/.cache"
          execution:
            steps:
              - step:
                  type: Action
                  name: Install python
                  identifier: installpython
                  spec:
                    uses: actions/setup-python@v4
                    with:
                      python-version: <+ stage.matrix.pythonVersion >
                      token: <+ secrets.getValue("github_token") >
              - step:
                  type: Run
                  identifier: dependencies
                  name: Dependencies
                  spec:
                    shell: Sh
                    command: |-
                      python -m pip install --upgrade pip
                      pip install -r requirements.txt
                    envVariables:
                      PIP_CACHE_DIR: "/root/.cache"
              - step:
                  type: Run
                  name: Test
                  identifier: test
                  spec:
                    shell: Sh
                    command: |-
                      pip install pytest
                      pytest tests.py --junit-xml=report.xml
                    envVariables:
                      PIP_CACHE_DIR: /root/.cache
                    reports:
                      type: JUnit
                      spec:
                        paths:
                          - report.xml
          platform:
            os: Linux
            arch: Amd64
          runtime:
            type: Cloud
            spec: {}
          sharedPaths:
            - /root/.cache
  properties:
    ci:
      codebase:
        connectorRef: YOUR_CODE_REPO_CONNECTOR_ID
        repoName: YOUR_REPO_NAME
        build: <+input>
```
These pipelines use a Kubernetes cluster build infrastructure.
If you copy these examples, replace the placeholder values with appropriate values for your code repo connector, Kubernetes cluster connector, Kubernetes namespace, and repository name. Depending on your project and organization, you may also need to replace `projectIdentifier` and `orgIdentifier`.

Pipeline with one specific Python version

Here is a single-stage pipeline with steps that use Python 3.10.10.
```yaml
pipeline:
  name: Test a Python app
  identifier: Test_a_Python_app
  orgIdentifier: default
  projectIdentifier: default
  stages:
    - stage:
        identifier: default
        name: default
        type: CI
        spec:
          cloneCodebase: true
          execution:
            steps:
              - step:
                  type: Run
                  name: Test
                  identifier: test
                  spec:
                    connectorRef: account.harnessImage
                    image: python:3.10.10
                    shell: Sh
                    command: |-
                      pip install pytest
                      pytest tests.py --junit-xml=report.xml
                    reports:
                      type: JUnit
                      spec:
                        paths:
                          - "report.xml"
          infrastructure:
            type: KubernetesDirect
            spec:
              connectorRef: YOUR_KUBERNETES_CLUSTER_CONNECTOR_ID
              namespace: YOUR_KUBERNETES_NAMESPACE
              automountServiceAccountToken: true
              nodeSelector: {}
              os: Linux
  properties:
    ci:
      codebase:
        connectorRef: YOUR_CODE_REPO_CONNECTOR_ID
        repoName: YOUR_REPO_NAME
        build: <+input>
```
Pipeline with multiple Python versions

Here is a single-stage pipeline with a matrix looping strategy for Python versions 3.11.2 and 3.10.10.
```yaml
pipeline:
  name: Test a Python app
  identifier: Test_a_Python_app
  orgIdentifier: default
  projectIdentifier: default
  stages:
    - stage:
        identifier: default
        name: default
        type: CI
        strategy:
          matrix:
            pythonVersion:
              - 3.11.2
              - 3.10.10
        spec:
          cloneCodebase: true
          execution:
            steps:
              - step:
                  type: Run
                  name: Test
                  identifier: test
                  spec:
                    connectorRef: account.harnessImage
                    image: python:<+ stage.matrix.pythonVersion >
                    shell: Sh
                    command: |-
                      python -m pip install --upgrade pip
                      pip install -r requirements.txt
                      pip install pytest
                      pytest tests.py --junit-xml=report.xml
                    reports:
                      type: JUnit
                      spec:
                        paths:
                          - "report.xml"
          infrastructure:
            type: KubernetesDirect
            spec:
              connectorRef: YOUR_KUBERNETES_CLUSTER_CONNECTOR_ID
              namespace: YOUR_KUBERNETES_NAMESPACE
              automountServiceAccountToken: true
              nodeSelector: {}
              os: Linux
  properties:
    ci:
      codebase:
        connectorRef: YOUR_CODE_REPO_CONNECTOR_ID
        repoName: YOUR_REPO_NAME
        build: <+input>
```
Next steps
Now that you have created a pipeline that builds and tests a Python app, you could:
- Create triggers to automatically run your pipeline.
- Add steps to build and upload artifacts.
- Add a step to build and push an image to a Docker registry.