How to deploy PostgreSQL to Kubernetes

MuleSoft Runtime Fabric can provide an Object Store to our Mule apps by using a PostgreSQL instance.
In this post, we’re going to see the cheapest way to provide a PostgreSQL database for the Persistence Gateway of Runtime Fabric. The method consists of deploying PostgreSQL directly on the same K8s cluster as Runtime Fabric. This is NOT something we’d do in a production environment; in production, we’d rather provide a PostgreSQL database running as close as possible to the RTF cluster but outside of it. For testing purposes, though, this is a great approach.

If you already have a PostgreSQL instance for your Runtime Fabric and want to know how to use it as the Persistence Gateway, you can skip this post and go to:

How to set up Persistence Gateway for Mulesoft Runtime Fabric

To deploy PostgreSQL (or postgres) in K8s we are going to create the following resources:

  • A namespace - We’ll create a dedicated namespace for all the resources needed for the postgres installation. It’s not 100% necessary, but it helps us isolate these resources from the rest of the resources in the cluster.
  • A ConfigMap - With a ConfigMap we can parametrize, centralize and externalize the configuration of a deployment. In this case, it will mainly contain the details of the connection to the database (user, password and database name). Decoupling the configuration from the deployment lets us change the configuration of postgres more dynamically and keep it better organized.
  • For Storage: The simplest and cheapest storage we can provide for testing is the local storage of the worker nodes. We’ll be creating a PersistentVolume and a PersistentVolumeClaim to map a directory on the host (the worker node) to a path in the containers running postgres. In Kubernetes, PersistentVolume (PV) and PersistentVolumeClaim (PVC) are two key concepts used for managing storage. They work together to provide a way for pods to request and use persistent storage.
    • A PersistentVolume is a piece of storage in the cluster that has been provisioned by an administrator or dynamically by the cluster itself.
    • A PersistentVolumeClaim is a request for storage by a user. It specifies the desired size, access mode, and storage class (if any).
  • A Deployment - The Deployment resource will contain the configuration of the pods that will run postgres - image, version, number of replicas. The deployment will also link the configuration in the ConfigMap and the PV/PVC to the replicas.
  • A Service - In K8s, a Service allows us to expose the replicas running postgres so they can be accessed internally or externally.
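For reference, this is the full sequence of commands and files we’ll end up with by the end of the post (each one is explained in its own section below):

kubectl create ns postgres
kubectl apply -f mule-postgres-config.yaml
kubectl apply -f mule-postgres-volume.yaml
kubectl apply -f mule-postgres-volume-claim.yaml
kubectl apply -f mule-postgres.yaml
kubectl apply -f mule-postgres-service.yaml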

Let’s see how to create each resource in detail:


Namespace

  • Run the following to create the namespace for the postgres installation
kubectl create ns postgres
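If we want to double-check that the namespace was created, we can list it:

kubectl get ns postgres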


ConfigMap

  • Create a yaml file called mule-postgres-config.yaml for the ConfigMap
vi mule-postgres-config.yaml
  • Paste the following content and save the file
apiVersion: v1
kind: ConfigMap
metadata:
  name: mule-postgres-config
  namespace: postgres
  labels:
    app: postgres
data:
  POSTGRES_DB: mule_ps_db
  POSTGRES_USER: mule
  POSTGRES_PASSWORD: Mule1234
  • Where:
    • apiVersion - version of the K8s API
    • kind - The resource type to be created - ConfigMap
    • name - the name of the configMap
    • namespace - The name of the namespace where we’ll create the ConfigMap
    • labels - app: Additional metadata that helps us better identify what this ConfigMap refers to. Not 100% necessary, but always useful for clarity and organization of resources.
    • data - this is the section that contains the configuration we want to externalize outside the postgres deployment. This helps us to change the configuration of postgres more dynamically
      • POSTGRES_DB - Default database for PostgreSQL
      • POSTGRES_USER - Default user for PostgreSQL
      • POSTGRES_PASSWORD - Password for the default user
  • Create the ConfigMap resource
kubectl apply -f mule-postgres-config.yaml
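We can verify the ConfigMap was created and inspect its values with:

kubectl get configmap mule-postgres-config -n postgres -o yaml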


PersistentVolume

  • Create a yaml file called mule-postgres-volume.yaml for the PersistentVolume
vi mule-postgres-volume.yaml
  • Paste the following content and save the file:
# PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mule-ps-volume
  namespace: postgres
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /data/postgresql
  • Where:
    • apiVersion - version of the K8s API
    • kind - The resource type to be created - PersistentVolume
    • name - the name of the PersistentVolume
    • namespace - PersistentVolumes are actually cluster-scoped, so Kubernetes ignores this field; we include it only for consistency with the other resources
    • labels
      • app: As mentioned earlier, additional metadata that helps us better identify what this PersistentVolume refers to.
      • type: Additional metadata to show this PersistentVolume is using the local storage of the worker node
    • storageClassName - manual: This specifies that the provisioning of the storage is done manually, and not automatically with a storage provider as it would be in a production environment (this is only for testing purposes)
    • capacity/storage: Specifies the amount of storage we want to provide with this PersistentVolume
    • accessModes - This defines how the pods can use this PV. In this case we’re allowing the pods to read and write to this PV
    • hostPath: This field specifies that the volume we’re defining will be created directly in the worker node’s filesystem. The path defines the specific directory where data will be stored on the worker node.
  • Create the PV resource
kubectl apply -f mule-postgres-volume.yaml
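At this point the PersistentVolume should show up with status Available (it will change to Bound once the claim from the next section is created):

kubectl get pv mule-ps-volume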

PersistentVolumeClaim

  • Create a yaml file called mule-postgres-volume-claim.yaml for the PVC
vi mule-postgres-volume-claim.yaml
  • Paste the following content and save the file
# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mule-ps-volume-claim
  namespace: postgres
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  • Where:
    • apiVersion - version of the K8s API
    • kind - The resource type - PersistentVolumeClaim
    • name - the name of the PVC
    • namespace - The name of the namespace where we’ll create the PersistentVolumeClaim
    • labels - app: Additional metadata that helps us identify this PVC as part of the postgres installation
    • storageClassName - Specifies the storageClassName of the request
    • accessModes - Specifies the access modes of the request
    • resources/requests/storage - Defines the amount of storage of the request
  • Create the PVC resource
kubectl apply -f mule-postgres-volume-claim.yaml
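We can check that the claim was matched to our PersistentVolume (both should now report status Bound):

kubectl get pvc -n postgres
kubectl get pv mule-ps-volume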


Deployment

  • Create a yaml file called mule-postgres.yaml for the Deployment
vi mule-postgres.yaml
  • Paste the following content and save the file
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mule-postgres
  namespace: postgres
spec:
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      namespace: postgres
      labels:
        app: postgres
    spec:
      containers:
        - name: mule-postgres
          image: 'postgres:14'
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: mule-postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: mule-postgresdata
      volumes:
        - name: mule-postgresdata
          persistentVolumeClaim:
            claimName: mule-ps-volume-claim

  • Where:
    • apiVersion - version of the K8s API
    • kind - The resource type - Deployment
    • name - the name of the Deployment
    • namespace - The name of the namespace where we’ll create the Deployment
    • replicas - the number of pods that will be running postgres. In the example above we use 3, but since this is only for testing purposes, a single replica would also suffice.
    • selector - This is how the deployment identifies the pods that belong to it. In this case all the pods with the label app=postgres
    • template - Defines the pod template. Every replica of this deployment that gets (re)created will use this template
    • containers - the specification of the list of containers to be included in this pod. In this case only one, the one that runs the postgres image
    • name - name of the container
    • image - the image that we’ll use to create the container. This is where we define the postgres image and version to be used
    • ports/containerPort: 5432 This is the port that will be exposed by the container and where postgresql will be listening
    • envFrom - This is how we create environment variables within the container, taking their names and values from the ConfigMap we defined earlier. Typically, the code running in the image expects to find these variables; in this case, the PostgreSQL startup process in the image reads the database name, user and password from environment variables (we’ll verify this after creating the deployment).
    • volumeMounts - In this section we specify where the volumes will be mounted in the container’s filesystem. Everything the container stores under the mountPath will be mapped to the external volume we defined, in our case the filesystem of the host.
    • volumes - This section maps the volumeMount to the PVC we created earlier, which in turn points to a directory on the worker node.
  • Create the Deployment resource
kubectl apply -f mule-postgres.yaml
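As mentioned in the envFrom description above, we can verify that the rollout finished and that the ConfigMap values were injected as environment variables into the containers (POSTGRES_POD_NAME is a placeholder for any of the pod names returned by the first command):

kubectl get pods -n postgres
kubectl rollout status deployment/mule-postgres -n postgres
kubectl exec [POSTGRES_POD_NAME] -n postgres -- env | grep POSTGRES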


Service

  • Create a yaml file called mule-postgres-service.yaml for the Service
vi mule-postgres-service.yaml
  • Paste the following content and save the file
# Service
apiVersion: v1
kind: Service
metadata:
  name: mule-ps
  namespace: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
  • Where:
    • apiVersion - version of the K8s API
    • kind - The resource type - Service
    • name - the name of the Service
    • namespace - The name of the namespace where we’ll create the Service
    • labels - app: Additional metadata that helps us identify this Service as part of the postgres installation
    • type - NodePort: Specifies the type of Kubernetes Service we are creating. A NodePort service exposes the pods on a port of every worker node and, like any Service, also gives them a stable internal endpoint: inside the cluster, the service name and port 5432 can be used to reach the postgres pods.
    • selector - This is how the service identifies the pods to which it redirects the traffic. This is the same label we used for the pods in the deployment.
  • Create the Service resource
kubectl apply -f mule-postgres-service.yaml
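We can confirm the service was created and see which node port was assigned with:

kubectl get svc mule-ps -n postgres

Inside the cluster, other pods can also reach postgres through the standard Kubernetes DNS name of the service, mule-ps.postgres.svc.cluster.local, on port 5432 - for example, when configuring the Persistence Gateway in the next post.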


How to connect to PostgreSQL and Test it

  • To verify the PostgreSQL instance is working properly, we’re going to open a postgres CLI session within the container running postgres and run some queries. For that, we first need to identify the name of the pod by running:
kubectl get pods -n postgres
  • Copy the name of one of the pods and then run the following:
kubectl exec -it [POSTGRES_POD_NAME] -n postgres -- psql -h localhost -U [POSTGRES_USER] --password -p 5432 [POSTGRES_DB]
Where POSTGRES_USER and POSTGRES_DB are the values we specified in our ConfigMap. We’ll be prompted to provide the POSTGRES_PASSWORD.
That will open a postgres shell session within the container where we can run some commands. If that happens, this means postgres is up & running. 
From here we can run some useful commands to test our installation
  • Test the connection to the database:
mule_ps_db=# \conninfo
  • Get all the databases
mule_ps_db=# \l
  • List the relations (tables, views, sequences) of the current database (we’ll use it in our next post, when the persistence gateway is working and storing info from the mule apps)
mule_ps_db=# \d


  • Get all the records of a table (we’ll use it in our next post, when the persistence gateway is working and storing info from the mule apps)
mule_ps_db=# SELECT * FROM [TABLE_NAME];
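If we prefer to test from our own machine instead of from inside the pod (assuming a local psql client is installed), we can temporarily forward the service port and connect to localhost with the user and database from our ConfigMap:

kubectl port-forward svc/mule-ps 5432:5432 -n postgres
psql -h localhost -U mule -p 5432 mule_ps_db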
In the next post we'll see how to set up Runtime Fabric to use this PostgreSQL instance as the Persistence Gateway and provide the traditional Object Store functionality to our Mule apps.