Keep Storage and Backups when Dropping Cluster with PGO

Brian Pace

9 min read

As a Solutions Architect at Crunchy Data, I work directly with customers testing and deploying Crunchy Postgres for Kubernetes. I often see customers fully remove a PGO cluster during testing or migrations while still needing to keep their storage and backups intact. In this post, I will dig into the steps to remove your PGO cluster while keeping your storage and backups.

With the move to a declarative approach in PGO 5.0, storage and backup retention can be accomplished by following a native Kubernetes methodology. In this article I'll demonstrate the steps to preserve the persistent volumes for the cluster and the pgBackRest repository. Note that, due to the risk involved in performing these actions, it is advisable to view these steps as educational only and not as a formula to be followed in all cases. And as always, we recommend you test a procedure before you run it against your production cluster.

There is a common misconception that a Persistent Volume Claim (PVC) contains and retains data. That is not actually true. The Persistent Volume (PV) is the Kubernetes object that persists the data. A PVC is a reference to the PV that lays ‘claim’ to the data; in other words, it is the object that records who has access to a particular PV.
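You can see this two-way reference for yourself; the names below are placeholders, so substitute a PVC and PV from your own cluster.

# The PVC merely records the name of the PV it has claimed
$ kubectl get pvc <pvc-name> -o jsonpath='{.spec.volumeName}{"\n"}'

# The PV, which actually holds the data, records its claimant in spec.claimRef
$ kubectl get pv <pv-name> -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}'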

A PV has something called a ‘reclaim policy’, which tells Kubernetes what to do with the PV when any claims against it are dropped or removed. The reclaim policy can be set to Delete or Retain. When the policy is set to Delete, the PV is dropped whenever its PVC is dropped in order to free up storage. At that point the data is unrecoverable. When the policy is set to Retain, the data on the PV remains persisted to disk even though there is no PVC referencing it.

The Kubernetes standard way to retain data is to set your PV reclaim policy to ‘Retain’. Therefore, the steps below adhere to this standard. There is still some risk associated with this, so great caution and coordination are required.
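The steps in this post flip the policy on PVs that already exist. For context, new PVs inherit their reclaim policy from the StorageClass that provisions them, so a class created with reclaimPolicy: Retain would produce retained volumes from the start. A minimal sketch; the provisioner name here is a stand-in, so use the CSI driver that backs your own storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-nfs
# Placeholder provisioner; replace with the CSI driver used in your environment
provisioner: example.csi.driver
# PVs created from this class keep their data when their PVC is deleted
reclaimPolicy: Retain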

The example uses the ‘kustomize/postgres’ manifest that is found in the Postgres Operator Examples GitHub repository. If you are just getting started with the Operator, you can fork this repository and follow along. The steps below assume that the Operator has already been deployed; if you're just getting started, see our Quick Start.
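For example, cloning the examples repository (or your fork of it) puts the kustomize/postgres manifest used throughout this post at hand:

# Clone the examples repository (or your fork) and work from its root directory
$ git clone https://github.com/CrunchyData/postgres-operator-examples.git
$ cd postgres-operator-examples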

Deploy Hippo Postgres Cluster

To start, modify the kustomize/postgres/postgres.yml manifest and add the spec.instances.replicas key with a value of two as shown below.

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-13.4-1
  postgresVersion: 13
  instances:
    - name: instance1
      replicas: 2
      dataVolumeClaimSpec:
        accessModes:
          - 'ReadWriteOnce'
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.35-0
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - 'ReadWriteOnce'
              resources:
                requests:
                  storage: 1Gi

Save the changes to the manifest and apply using kubectl’s built-in Kustomize feature.

$ kubectl apply -k kustomize/postgres
postgrescluster.postgres-operator.crunchydata.com/hippo created

Monitor the deployment until all pods are in full ready state and the initial backup completes.

$ kubectl get pods
NAME                      READY   STATUS      RESTARTS   AGE
hippo-backup-2hb4-2d4k5   0/1     Completed   0          37s
hippo-instance1-2qjv-0    3/3     Running     0          74s
hippo-instance1-tb78-0    3/3     Running     0          74s
hippo-repo-host-0         1/1     Running     0          74s
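Rather than polling kubectl get pods by hand, kubectl wait can block until the primary reports Ready. A small convenience, using the same role label referenced elsewhere in this post:

# Block until the primary Postgres pod reports Ready (up to five minutes)
$ kubectl wait pod \
    --selector=postgres-operator.crunchydata.com/role=master \
    --for=condition=Ready \
    --timeout=300s

The initial backup runs as a Job, so a quick kubectl get jobs confirms when it has completed.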

Create some Sample Data

To verify later that the storage was preserved successfully, create an example table with some dummy data.

$ kubectl exec -it -c database `kubectl get pod -l postgres-operator.crunchydata.com/role=master -o name` -- psql
psql (13.4)
Type "help" for help.

postgres=# create table testtable (cola int, colb timestamp);
CREATE TABLE

postgres=# insert into testtable values (1, current_timestamp);
INSERT 0 1

postgres=# \q

Change Reclaim Policy on PV

Identify the PVs used for the cluster in question and modify the reclaim policy from Delete to Retain.

Identify PVs

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                           STORAGECLASS      REASON   AGE
pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e   1Gi        RWO            Delete           Bound      postgres-operator/hippo-instance1-2qjv-pgdata   freenas-nfs-csi            18h
pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745   1Gi        RWO            Delete           Bound      postgres-operator/hippo-repo1                   freenas-nfs-csi            18h
pvc-cecfdf54-0ec2-4598-945d-a67949375b7c   1Gi        RWO            Delete           Bound      postgres-operator/hippo-instance1-tb78-pgdata   freenas-nfs-csi            18h

Modify Reclaim Policy

$ kubectl patch pv pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
persistentvolume/pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e patched

$ kubectl patch pv pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
persistentvolume/pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745 patched

$ kubectl patch pv pvc-cecfdf54-0ec2-4598-945d-a67949375b7c -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
persistentvolume/pvc-cecfdf54-0ec2-4598-945d-a67949375b7c patched
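Patching each PV by hand is fine for three volumes. For a cluster with many volumes, a small loop that filters on the PV's claimRef can apply the same patch in one pass; note that this sketch matches every PV claimed from the postgres-operator namespace, so review the list first if other clusters share that namespace:

# Set the reclaim policy to Retain on every PV whose claim lives in the postgres-operator namespace
$ for pv in $(kubectl get pv -o jsonpath='{range .items[?(@.spec.claimRef.namespace=="postgres-operator")]}{.metadata.name}{"\n"}{end}'); do
    kubectl patch pv "$pv" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
  done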

Verify Reclaim Policy

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                           STORAGECLASS      REASON   AGE
pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e   1Gi        RWO            Retain           Bound      postgres-operator/hippo-instance1-2qjv-pgdata   freenas-nfs-csi            18h
pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745   1Gi        RWO            Retain           Bound      postgres-operator/hippo-repo1                   freenas-nfs-csi            18h
pvc-cecfdf54-0ec2-4598-945d-a67949375b7c   1Gi        RWO            Retain           Bound      postgres-operator/hippo-instance1-tb78-pgdata   freenas-nfs-csi            18h

Drop the Postgres Cluster

Before dropping the cluster, identify the pod that is currently the primary. This will be important when the cluster is later recreated and the PVs are made available for reuse. The Postgres cluster can now be dropped. After dropping the cluster, verify that each PV's status is Released.

Identify Primary

$ kubectl get pod -L postgres-operator.crunchydata.com/role
NAME                      READY   STATUS      RESTARTS   AGE     ROLE
hippo-instance1-2qjv-0    3/3     Running     0          12m58s  master
hippo-instance1-tb78-0    3/3     Running     0          12m58s  replica
hippo-repo-host-0         1/1     Running     0          12m58s

Drop the Cluster

$ kubectl delete -k kustomize/postgres
postgrescluster.postgres-operator.crunchydata.com "hippo" deleted

Verify the PVs’ Status is Released

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                           STORAGECLASS      REASON   AGE
pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e   1Gi        RWO            Retain           Released   postgres-operator/hippo-instance1-2qjv-pgdata   freenas-nfs-csi            18h
pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745   1Gi        RWO            Retain           Released   postgres-operator/hippo-repo1                   freenas-nfs-csi            18h
pvc-cecfdf54-0ec2-4598-945d-a67949375b7c   1Gi        RWO            Retain           Released   postgres-operator/hippo-instance1-tb78-pgdata   freenas-nfs-csi            18h

Label PVs

If the Postgres cluster had only a single instance (replicas=1), then specifying the volumeName in the manifest would be sufficient (see the sketch below). Since there are two instances of Postgres in this example, a label selector will be used to identify the PVs instead. Distinct labels will be used for the Postgres PVs and for the pgBackRest repository PV. It is critical that these labels be unique across the Kubernetes cluster.
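For reference, this is roughly what that single-instance shortcut would look like: the dataVolumeClaimSpec pins the retained PV directly by name, using the pgdata PV name from the listing above.

      dataVolumeClaimSpec:
        # Bind directly to the retained pgdata PV instead of using a label selector
        volumeName: pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e
        accessModes:
          - 'ReadWriteOnce'
        resources:
          requests:
            storage: 1Gi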

Identify and Label PVs

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                           STORAGECLASS      REASON   AGE
pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e   1Gi        RWO            Retain           Released   postgres-operator/hippo-instance1-2qjv-pgdata   freenas-nfs-csi            18h
pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745   1Gi        RWO            Retain           Released   postgres-operator/hippo-repo1                   freenas-nfs-csi            18h
pvc-cecfdf54-0ec2-4598-945d-a67949375b7c   1Gi        RWO            Retain           Released   postgres-operator/hippo-instance1-tb78-pgdata   freenas-nfs-csi            18h

$ kubectl label pv pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e postgres-cluster=hippo
persistentvolume/pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e labeled

$ kubectl label pv pvc-cecfdf54-0ec2-4598-945d-a67949375b7c postgres-cluster=hippo
persistentvolume/pvc-cecfdf54-0ec2-4598-945d-a67949375b7c labeled

$ kubectl label pv pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745 postgres-pgbackrest=hippo
persistentvolume/pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745 labeled

Modify Postgres Manifest

With the PVs labeled, the next step is to modify the Postgres manifest and add the label selector to spec.instances.dataVolumeClaimSpec and to spec.backups.pgbackrest.repos.volume.volumeClaimSpec as seen below. After modification, apply the manifest to recreate the cluster.

Note that when the manifest is applied, the pods and PVCs will be in a Pending state. The PVCs are Pending because the claimRef on each PV is still associated with the uid of the previous PVC. This is by design, in order to protect the PVs from being claimed by another PVC. Once the new PVCs are in the Pending state, the previous claimRef can be removed and everything will come to full ready state.
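If you want to see that stale reference before removing it, the claimRef is visible right on the PV; the PV name below is the pgBackRest repository volume from this example:

# Show the PVC name and uid still recorded on the released PV
$ kubectl get pv pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745 \
    -o jsonpath='{.spec.claimRef.name}{" "}{.spec.claimRef.uid}{"\n"}'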

Modified Manifest

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-13.4-1
  postgresVersion: 13
  instances:
    - name: instance1
      replicas: 2
      dataVolumeClaimSpec:
        selector:
          matchLabels:
            postgres-cluster: hippo
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.35-0
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            selector:
              matchLabels:
                postgres-pgbackrest: hippo
            accessModes:
            - "ReadWriteOnce"
            resources:
              requests:
                storage: 1Gi

Apply the manifest, wait for the PVCs to enter the Pending state, and then clear the claimRef, starting with the pgBackRest PV. Once pgBackRest is in full ready state, clear the claimRef on the remaining PVs. The best option is to release the PV of the previous primary instance first and allow the primary to come to full ready state before releasing the remaining replica PVs. If a replica PV is released first and there was a lag in replication, data loss could occur.

Apply Modified Manifest

$ kubectl apply -k kustomize/postgres
postgrescluster.postgres-operator.crunchydata.com/hippo created

Wait for PVC Status of Pending

$ kubectl get pvc
NAME                          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
hippo-instance1-hfvt-pgdata   Pending                                      freenas-nfs-csi   7s
hippo-instance1-r7lj-pgdata   Pending                                      freenas-nfs-csi   7s
hippo-repo1                   Pending                                      freenas-nfs-csi   7s

Clear Claim for pgBackRest PV

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                           STORAGECLASS      REASON   AGE
pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e   1Gi        RWO            Retain           Released   postgres-operator/hippo-instance1-2qjv-pgdata   freenas-nfs-csi            19h
pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745   1Gi        RWO            Retain           Released   postgres-operator/hippo-repo1                   freenas-nfs-csi            19h
pvc-cecfdf54-0ec2-4598-945d-a67949375b7c   1Gi        RWO            Retain           Released   postgres-operator/hippo-instance1-tb78-pgdata   freenas-nfs-csi            19h


$ kubectl patch pv pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745 -p '{"spec":{"claimRef": null}}'
persistentvolume/pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745 patched

Wait for pgBackRest PVC Bound and Pod Running

$ kubectl get pvc
NAME                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
hippo-instance1-hfvt-pgdata   Pending                                                                        freenas-nfs-csi   3m45s
hippo-instance1-r7lj-pgdata   Pending                                                                        freenas-nfs-csi   3m45s
hippo-repo1                   Bound     pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745   1Gi        RWO            freenas-nfs-csi   3m45s

$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
hippo-instance1-hfvt-0   0/3     Pending   0          3m59s
hippo-instance1-r7lj-0   0/3     Pending   0          3m59s
hippo-repo-host-0        1/1     Running   0          3m59s
pgo-b95d7bbd-462kb       1/1     Running   0          23h

Clear Claim on Previous Primary PV

$ kubectl patch pv pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e -p '{"spec":{"claimRef": null}}'
persistentvolume/pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e patched

Wait for PVC Bound and Primary Pod Running

$ kubectl get pvc
NAME                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
hippo-instance1-hfvt-pgdata   Bound     pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e   1Gi        RWO            freenas-nfs-csi   5m32s
hippo-instance1-r7lj-pgdata   Pending                                                                        freenas-nfs-csi   5m32s
hippo-repo1                   Bound     pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745   1Gi        RWO            freenas-nfs-csi   5m32s

$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
hippo-instance1-hfvt-0   3/3     Running   0          5m46s
hippo-instance1-r7lj-0   0/3     Pending   0          5m46s
hippo-repo-host-0        1/1     Running   0          5m46s
pgo-b95d7bbd-462kb       1/1     Running   0          23h

Clear Claim on Remaining Replicas

$ kubectl patch pv pvc-cecfdf54-0ec2-4598-945d-a67949375b7c -p '{"spec":{"claimRef": null}}'
persistentvolume/pvc-cecfdf54-0ec2-4598-945d-a67949375b7c patched

Wait for PVC Bound and Replica Pods Running

$ kubectl get pvc
NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
hippo-instance1-hfvt-pgdata   Bound    pvc-347e3dde-e3b8-4c97-a902-c1433250fb4e   1Gi        RWO            freenas-nfs-csi   7m10s
hippo-instance1-r7lj-pgdata   Bound    pvc-cecfdf54-0ec2-4598-945d-a67949375b7c   1Gi        RWO            freenas-nfs-csi   7m10s
hippo-repo1                   Bound    pvc-5259ad9b-c86d-46f8-9536-8d13dbff8745   1Gi        RWO            freenas-nfs-csi   7m10s

$ kubectl get pod -L postgres-operator.crunchydata.com/role
NAME                      READY   STATUS      RESTARTS   AGE     ROLE
hippo-backup-9c5h-f9j2x   0/1     Completed   0          117s
hippo-instance1-hfvt-0    3/3     Running     0          8m58s   master
hippo-instance1-r7lj-0    3/3     Running     0          8m58s   replica
hippo-repo-host-0         1/1     Running     0          8m58s

Verify

As the last step, verify the data.

$ kubectl exec -it -c database `kubectl get pod -l postgres-operator.crunchydata.com/role=master -o name` -- psql
psql (13.4)
Type "help" for help.

postgres=# select * from testtable;
 cola |            colb
------+----------------------------
    1 | 2021-11-16 21:03:57.810616
(1 row)

postgres=# \q

Final Thoughts

By leveraging the reclaim policy, we were able to retain the database data and backups even though the Postgres cluster itself was deleted. This is useful when there is a need to migrate a database cluster from one namespace to another, or during certain maintenance activities. Always use caution when dropping a database cluster due to the risk of data loss.

Brian Pace

December 9, 2021