Increase Persistent Volume (PV) size for StatefulSets
You can manually increase the Persistent Volume (PV) size associated with a StatefulSet. This topic describes how to increase the PV size for StatefulSets in your Kubernetes clusters during a Helm upgrade.
For more information on Helm upgrades, go to Upgrade the Helm chart.
This is only applicable to storage file systems that support dynamic provisioning and volume expansion.
This document is a general guide to increasing the PVC size of Harness StatefulSets. The actual volume is managed by the Kubernetes volume driver provided by your cloud provider.
For more information, go to Resizing Persistent Volumes using Kubernetes.
Prerequisites
- Ensure your storage class supports volume expansion. Refer to the documentation for your volume driver:
  - AWS: `ebs.csi.aws.com` supports volume expansion.
  - GCP: `pd.csi.storage.gke.io` supports volume expansion.
  - Other drivers: Check the specific documentation for your storage class.
- Install `yq` if you plan to use Method 1.
- Ensure you have the necessary permissions to delete StatefulSets and perform Helm upgrades.
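To confirm expansion support before you begin, you can query the `allowVolumeExpansion` field on your storage classes. This is a generic `kubectl` query, not specific to Harness:

```shell
# List storage classes with their provisioner and whether expansion is allowed
kubectl get storageclass -o custom-columns='NAME:.metadata.name,PROVISIONER:.provisioner,EXPANSION:.allowVolumeExpansion'
```

If `EXPANSION` is not `true` for the storage class backing your PVCs, a cluster administrator must set `allowVolumeExpansion: true` on that StorageClass before resizing.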
Recommendation
- Take a Backup: Before proceeding, create a backup to ensure data can be restored in case of failures. Refer to Back up and restore Harness for guidance.
Method 1: Using the shell script
1. Download the shell script.

2. Make the script executable:

   ```
   chmod +x pvc-update.sh
   ```

3. Run the script and provide the required arguments:

   ```
   ./pvc-update.sh
   ```

   Example:

   ```
   ./pvc-update.sh
   Enter Namespace: harness
   Enter Override file: ./override-values.yaml
   Enter new pvc size in Gi (eg: 30Gi): 20Gi
   Enter database to increase pvc size (mongodb, timescaledb, minio, postgresql, timescaledb-wal): minio
   Enter release name: harness
   Enter chart path/name: harness/harness
   ```

4. Wait for the script to complete successfully.
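After the script finishes, you can confirm the new size from the cluster. For example, using the namespace `harness` from the example prompts above:

```shell
# List PVCs in the install namespace and check the CAPACITY column
kubectl get pvc -n harness
```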
Method 2: Manually update the PV size for StatefulSets
Follow these steps to manually increase the PV size associated with a StatefulSet in your Kubernetes cluster:
1. List all the Persistent Volume Claims (PVCs) in your Kubernetes cluster:

   ```
   kubectl get pvc -n <namespace>
   ```

2. Identify the PVC that corresponds to the StatefulSet you are working with.

3. Patch the PVC to update the storage size. Replace `<YOUR_PVC_NAME>` with the name of your PVC and `<YOUR_UPDATED_SIZE>` with the desired storage size.

   ```
   kubectl patch pvc <YOUR_PVC_NAME> -p '{"spec":{"resources":{"requests":{"storage":"<YOUR_UPDATED_SIZE>"}}}}' -n <namespace>
   ```

4. Verify that the PV and PVC have been updated with the new size. Replace `<YOUR_PV_NAME>` and `<YOUR_PVC_NAME>` with your applicable names.

   ```
   kubectl get pv <YOUR_PV_NAME> -o=jsonpath='{.spec.capacity.storage}'
   kubectl get pvc <YOUR_PVC_NAME> -o=jsonpath='{.spec.resources.requests.storage}' -n <namespace>
   ```
5. Edit the storage values in the `override.yaml` file you use to deploy Helm to reflect the new requirements. When upgrading storage for TimescaleDB, the values will look similar to the example below:

   ```yaml
   platform:
     bootstrap:
       database:
         timescaledb:
           persistentVolumes:
             data:
               size: 120Gi
             wal:
               size: 5Gi
   ```
6. Recreate the StatefulSet so it picks up the changes. Replace `<YOUR_STATEFULSET_NAME>`, `<YOUR_RELEASE_NAME>`, and `<YOUR_CHART_NAME>` with your StatefulSet name, Helm release name, and Helm chart name, and change the `override.yaml` file name if yours differs.

   ```
   kubectl delete statefulset <YOUR_STATEFULSET_NAME> -n <namespace>
   helm upgrade <YOUR_RELEASE_NAME> <YOUR_CHART_NAME> -f override.yaml
   ```

   The `volumeClaimTemplates` field is immutable in a StatefulSet, which means that you must recreate the StatefulSet for any changes to take effect.
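If you prefer to script the override-file edit rather than editing it by hand, `yq` (already listed as a prerequisite for Method 1) can update the same keys in place. This is a sketch using yq v4 syntax, with the TimescaleDB paths and sizes from the example override above:

```shell
# In-place edits of the Helm override file; paths mirror the TimescaleDB example
yq -i '.platform.bootstrap.database.timescaledb.persistentVolumes.data.size = "120Gi"' override.yaml
yq -i '.platform.bootstrap.database.timescaledb.persistentVolumes.wal.size = "5Gi"' override.yaml
```

Adjust the key path for other databases (for example, `minio` or `postgresql`) to match the structure of your override file.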
Troubleshoot
If database pods fail to come online or restart frequently, try these steps:
- Adjust Probes: Increase the readiness/liveness probe timeout values for the StatefulSet.
- Scale Down and Up: Scale down the database StatefulSet to zero pods, then scale it back to one pod. After the master pod is stable, scale it further as needed.
- Restore Data: If you took a backup earlier, restore the database from it.
- Contact Support: If issues persist, reach out to Harness support for assistance.
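The scale-down/scale-up step can be performed with `kubectl scale`. Replace the placeholder names and namespace with your own values:

```shell
# Scale the database StatefulSet down to zero pods, then back to one
kubectl scale statefulset <YOUR_STATEFULSET_NAME> --replicas=0 -n <namespace>
kubectl scale statefulset <YOUR_STATEFULSET_NAME> --replicas=1 -n <namespace>

# Once the first (master) pod is stable, scale further as needed
kubectl scale statefulset <YOUR_STATEFULSET_NAME> --replicas=<DESIRED_REPLICAS> -n <namespace>
```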