Helm, GitOps and the Postgres Operator

Jonathan S. Katz


This post provides guidance for PGO v4.x. For the latest on PGO, GitOps, and the Helm installer, please see: https://github.com/CrunchyData/postgres-operator-examples/tree/main/helm

In the previous article, we explored GitOps and how to apply GitOps concepts to PostgreSQL in a Kubernetes environment with the Postgres Operator and custom resources. The article went on to mention additional tooling that has been created to help employ GitOps principles within an environment, including Helm.

While the previous blog post showed how direct creation and manipulation of the PostgreSQL custom resources allows for GitOps workflows, tools like Helm can take this a step further by encapsulating those resources behind a simpler interface, making it easier both to get started and to refine a PostgreSQL cluster over time.

Let's look at using Helm with the PostgreSQL Operator. To run through this example, you will need to install both the Postgres Operator and Helm. This example assumes that you used the quickstart to install the Postgres Operator and will reference the pgo namespace as the namespace where both the Postgres Operator and the PostgreSQL clusters are deployed.
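As a quick sanity check before you begin, you can confirm that Helm is available (this walkthrough assumes Helm v3) and that the operator is running in the pgo namespace:

helm version

kubectl get deployments -n pgo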

The Postgres Operator repository provides an example Helm chart that can be used to create a PostgreSQL cluster and let you select the following attributes:

  • Memory / CPU resources
  • Disk size
  • High availability
  • Monitoring

You can see all the attributes listed out in both the README and the values file.

Deploying a Postgres Cluster With Helm

To use the Helm chart, you must first clone the Postgres Operator repository to your host machine. A shallow clone is enough; clone the repository and change into the example directory:

git clone --depth 1 https://github.com/CrunchyData/postgres-operator.git
cd postgres-operator/examples/helm
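The chart follows the standard Helm layout. The directory structure looks roughly like this (treat the annotations as a guide; the authoritative details are in the chart itself):

postgres/
├── Chart.yaml      # chart metadata
├── values.yaml     # the attributes you can customize
└── templates/      # templates that render the PostgreSQL cluster custom resource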

From there, you can modify the postgres/values.yaml file to match your environment. Note that the defaults are set up to get you started quickly. For the purposes of this example, let's enable high availability. Have your values.yaml file contain the following values:

name: hippo
namespace: pgo
password: W4tch0ut4hippo$
ha: true
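Before creating anything, you can optionally have Helm validate the chart and render the manifests locally, which is a handy way to catch typos in your values file:

helm lint postgres

helm install postgres-cluster postgres -n pgo --dry-run --debug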

After filling out the values file, creating a high availability PostgreSQL cluster is as easy as:

helm install postgres-cluster postgres -n pgo

If your configuration is valid, this will deploy an HA PostgreSQL cluster in your Kubernetes environment. The Helm chart will return a notice with information about your cluster, including how to connect to it.
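If you need to see that notice again later, Helm keeps it with the release:

# list the releases in the namespace
helm list -n pgo

# re-display the release status, including the notes
helm status postgres-cluster -n pgo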

It may take a few moments for your PostgreSQL cluster to deploy. To see when it is ready to accept connections, you can execute the pgo test command:

pgo test hippo

When it is ready, you should see output similar to this:

cluster : hippo
Services
primary (10.101.57.126:5432): UP
replica (10.104.254.232:5432): UP
Instances
primary (hippo-764f79669c-bjsxn): UP
replica (hippo-qqzl-7b4d98bb4d-k9nxk): UP
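If you prefer plain kubectl to the pgo client, you can watch the cluster's Pods directly using the pg-cluster label that the operator applies (the same label appears again below):

kubectl get pods -n pgo --selector=pg-cluster=hippo --watch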

Try connecting to your new cluster! In a different terminal window, set up a port forward to your host machine:

kubectl port-forward -n pgo svc/hippo 5432:5432

If you used the settings from this example, you can connect to your Postgres cluster with the following command:

PGPASSWORD='W4tch0ut4hippo$' psql -h localhost -U hippo hippo
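You can also verify connectivity non-interactively with a one-line query:

PGPASSWORD='W4tch0ut4hippo$' psql -h localhost -U hippo hippo -c 'SELECT version();'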

Updating a Postgres Cluster With Helm

You're ready to move your application into production, but since you follow good practices, you know that you want to monitor your PostgreSQL cluster. While PostgreSQL monitoring visualization on Kubernetes is outside the scope of this blog post, you can refer to this guide to help you get started.

In your values file, add the following line to enable monitoring for the PostgreSQL cluster:

monitoring: true

Save, and run the "helm upgrade" command:

helm upgrade postgres-cluster postgres -n pgo

The Postgres Operator will perform a rolling update to minimize downtime for your PostgreSQL cluster and, as part of this, will enable monitoring for each PostgreSQL instance. You will also notice that Helm declares this to be "revision 2" of your PostgreSQL cluster: if you want to roll back and remove monitoring from the cluster, you can do so with Helm!
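Helm tracks each upgrade as a numbered revision, so rolling back is a single command. A minimal sketch (this would revert the cluster spec to revision 1 and let the operator reconcile the change):

# show the release's revision history
helm history postgres-cluster -n pgo

# roll back to the pre-monitoring revision, if desired
helm rollback postgres-cluster 1 -n pgo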

To validate that the monitoring sidecar is now deployed with your Postgres instance, you can run the following kubectl command:

kubectl -n pgo get pods --selector=pg-cluster=hippo,pgo-pg-database
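Because the exporter runs as a sidecar container in each database Pod, another way to confirm it is present is to list the container names; the exact exporter container name depends on the operator version:

kubectl get pods -n pgo --selector=pg-cluster=hippo,pgo-pg-database \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'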

Deleting a Postgres Cluster With Helm

Deleting a PostgreSQL cluster with Helm and the PostgreSQL Operator can be done simply with a "helm delete":

helm delete postgres-cluster -n pgo

The Postgres Operator will remove any associated Kubernetes objects for that PostgreSQL cluster.
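You can confirm the teardown the same way you confirmed the deployment; both commands should come back empty once the cleanup finishes:

helm list -n pgo

kubectl get pods -n pgo --selector=pg-cluster=hippo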

Looking Ahead

The process of testing Helm and other tools is immensely helpful in determining how to combine the best of both worlds: running PostgreSQL on Kubernetes in a GitOps-friendly way. A Helm chart can capture the complexities of a custom resource and provide a much simpler interface, where features that require a more complex setup, e.g., high availability, can be toggled with a boolean.
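To make that boolean-toggle idea concrete, here is a minimal, hypothetical sketch of how a chart template might wire an ha flag into the custom resource; the field names are illustrative and are not the actual fields the example chart sets:

# templates/cluster.yaml (hypothetical excerpt)
spec:
  {{- if .Values.ha }}
  # when ha is true, ask the operator for a replica
  replicas: 1
  {{- end }}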

Applying a GitOps mindset to managing PostgreSQL workloads makes it easier to run production-grade Postgres instances on Kubernetes, and it simplifies deploying a range of PostgreSQL topologies, from single instances to multi-zone, fault-tolerant clusters.

February 24, 2021