In previous posts, we’ve seen what the Persistence Gateway is in Runtime Fabric and how to set it up. If your use of the Object Store and its persistence DB is light, not critical, and has no tight performance requirements, you can probably leave the default configuration. But in those scenarios where you expect a high volume of traffic, and reliability, performance and availability are critical for your apps, you need to find an optimal configuration for your Persistence Gateway setup.
To understand what you can and cannot change in the Persistence Gateway, in this post we’ll have a look at its Custom Resource Definition (CRD) and break it down to see what settings are available to fine-tune the Persistence Gateway for our RTF deployments.
To start off, we can get the CRD definition from our K8s cluster with the following command (this assumes that you’ve got a running RTF cluster and already created the Persistence Gateway Custom Resource):
kubectl get crd persistencegateways.rtf.mulesoft.com -o yaml > pg-crd.yaml

This will export the CRD to the pg-crd.yaml file. Have a look at the file.
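If you prefer to inspect it interactively, kubectl explain can render the same schema field by field (assuming your cluster publishes the CRD’s openAPIV3Schema, which recent K8s versions do):

kubectl explain persistencegateway.spec.objectStore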
What we can change in the Persistence Gateway
Now let’s break it down and analyze the fields that define how your Persistence Gateway will behave. All configurable fields live under the spec.objectStore top-level field.
1. backendDriver (string)
This field specifies the backend database that we’ll use for our Persistence Gateway. It only supports postgresql, so we must not change the default value. This might change in the future to add support for other database engines, but for now (August 2025), only PostgreSQL can be used.
backendDriver: postgresql
2. disableAlwaysOverwriteKeyFix and disableDecodeKeysFix (boolean)
These are fields for the internal functioning of the Persistence Gateway. I can only guess that they might be related to fixes that enable/disable the overwriting and decoding of the Object Store keys. By default they’re set to false, and we should not change them.
disableAlwaysOverwriteKeyFix: false
disableDecodeKeysFix: false
3. maxBackendConnectionPool (integer)
This is one of the most important fields. It controls the database connection pool of the Persistence Gateway pod. Each pod will open connections to the PostgreSQL backend. To improve performance, every pod creates a connection pool so that not every query needs to open a new connection to the database; the pool allows the pod to reuse an existing connection instead of continuously opening new ones. The maxBackendConnectionPool field specifies the maximum number of connections that can be created for each pod in the Persistence Gateway deployment. The default value is 20.
The connection pool can really optimize the performance of the Persistence Gateway, but we need to set it up correctly. First, you need to consider the number of replicas you’re deploying for the Persistence Gateway. The maxBackendConnectionPool value is per replica. This means that if you set this value to 20 and use 3 replicas, you’ll end up with a global pool of up to 60 connections. You need to verify this value against the PostgreSQL database that will sit in the backend and make sure the DB server can handle the total number of connections (by default, a PostgreSQL database supports around 100 connections).
maxBackendConnectionPool: 20
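To sanity-check the total, compare replicas × maxBackendConnectionPool against the server’s limit. A quick way to see that limit, assuming you can reach the database with psql using the same connection details stored in the secret (the host, user and database below are placeholders):

# 3 replicas x 20 connections per pod = up to 60 backend connections;
# make sure this stays below the server's max_connections
psql "postgres://pguser:secret@my-db-host:5432/pgdb" -c "SHOW max_connections;"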
4. replicas (integer)
This is the number of replicas that will run the Persistence Gateway. When you create a Persistence Gateway, under the hood K8s will create a Deployment with the number of replicas you specify in this field.
Here you need to consider:
- For High Availability, you need to set up at least 2 replicas
- If you also want to provide Multi-Zone support (at least 1 replica per AZ), you’ll need, at minimum, as many replicas as the number of AZs in your region.
replicas: 3
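To decide on a number, it helps to know how many zones your cluster actually spans. A quick way to check, assuming your nodes carry the standard zone label:

kubectl get nodes -L topology.kubernetes.io/zone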
5. resources (object)
The resources object specifies the limits and requests for the Persistence Gateway pods. With limits we enforce the maximum CPU/Memory resources that a PG pod can consume. With requests, the minimum guaranteed resources.
These values will have an impact on the PG pods’ performance and allow K8s to make scheduling decisions. There’s no magic number for all RTF deployments; my recommendation is to do a proper performance test using your real or expected volume of incoming traffic to the PG pods and see what the optimal configuration is for your deployment.
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 128Mi
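To ground that performance test, you can watch the pods’ real consumption while you drive load. A minimal check, assuming the Metrics Server is installed and the Persistence Gateway runs in the rtf namespace:

# compare actual CPU/memory usage against the requests/limits above
kubectl top pods -n rtf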
6. secretRef (object)
The Persistence Gateway pods are the components of our architecture that will be connecting to the PostgreSQL database. For that, we need to tell them how to reach the DB. The connection details (host, port, credentials and database) are sensitive information, so in a previous step we put all of them into a K8s secret. The secretRef field references the secret that contains the DB connection string. This way:
- DB credentials are protected with a K8s secret
- We detach these connection parameters from the PG definition, so we can dynamically change the DB connection in the event of rotating DB credentials or migrating to another DB without impacting the RTF deployment.
secretRef:
  name: rtf-pg-secret
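As a reminder of what the referenced secret looks like, here is a sketch of how it could be created; the secret name matches the example above, but the key name (persistence-gateway-creds) and the connection string are assumptions, so use the values from your own setup:

kubectl create secret generic rtf-pg-secret -n rtf \
  --from-literal=persistence-gateway-creds='postgres://pguser:secret@my-db-host:5432/pgdb'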
7. topologySpreadConstraints (array of objects)
The Persistence Gateway does not use Affinity or AntiAffinity rules. Instead, it uses topologySpreadConstraints to distribute replicas across nodes/zones.
Each item can contain:
maxSkew (integer) - Controls the maximum difference in the number of pods between any two topology domains (e.g., nodes, zones, etc.) that match your criteria. The scheduler tries to balance pods evenly across the specified topologyKey; maxSkew defines how uneven that distribution can be. The recommendation is to use maxSkew: 1 to enforce even distribution across failure domains.
topologyKey (string) - Specifies the domain (or dimension) across which the pods should be distributed. This is typically a label on nodes. Common values:
| topologyKey | Meaning | Applies To |
|---|---|---|
| kubernetes.io/hostname | Each node | Ensures pods land on different nodes |
| topology.kubernetes.io/zone | Each zone (e.g. AZ-1, AZ-2) | Ensures pods are spread across zones |
| topology.kubernetes.io/region | Each region (e.g. eu-west-1) | Ensures geographic HA (multi-region) |
whenUnsatisfiable (string: ScheduleAnyway or DoNotSchedule) - Defines what the scheduler should do if it can’t meet the spread constraint (i.e., if spreading the pods according to maxSkew and topologyKey is impossible). Possible values:
| Value | Behavior | Use Case |
|---|---|---|
| ScheduleAnyway | Try to spread pods as defined, but allow violation if necessary | Best-effort availability |
| DoNotSchedule | Enforce the rule strictly. If it can’t be met, don’t schedule the pod | High availability, strict constraints |
The default values are:
topologySpreadConstraints:
  default:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway

This means the default behaviour of the Persistence Gateway is to spread replicas evenly across zones and, when an even distribution is not possible, to schedule all the replicas regardless.
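If best-effort spreading is not enough and you would rather leave a replica unscheduled than break the zone balance, you could override the default in your Persistence Gateway resource. A sketch, assuming a multi-zone cluster (surrounding fields omitted):

spec:
  objectStore:
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule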
What we cannot change in the Persistence Gateway
- We cannot configure affinity, tolerations, or nodeSelector
- We cannot add custom annotations/labels to pods directly
- We cannot enable autoscaling via HPA settings inside this CRD. HPA can only autoscale objects that expose the /scale subresource, which includes Deployment, StatefulSet and ReplicaSet. A Custom Resource does not expose the /scale endpoint unless the CRD is explicitly configured with:
subresources:
  scale:
    specReplicasPath: .spec.replicas
    statusReplicasPath: .status.replicas
    labelSelectorPath: .status.selector
The Persistence Gateway custom resource does not expose the /scale subresource on its definition, thus it can’t be used with HPA.
If you need autoscaling for the Persistence Gateway (PG), a possible workaround might be to scale manually via scripts or CronJobs. You could use a scheduled job that:
- Monitors metrics (e.g., via Prometheus)
- Updates the CR (spec.objectStore.replicas) via kubectl patch, as shown below
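A minimal sketch of that patch step; the CR name (persistence-gateway) and the rtf namespace are assumptions, check yours with kubectl get pg -A:

# scale the Persistence Gateway CR to 3 replicas
kubectl patch persistencegateway persistence-gateway -n rtf \
  --type merge -p '{"spec":{"objectStore":{"replicas":3}}}'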
The full CRD schema for the Persistence Gateway:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    helm.sh/resource-policy: keep
    meta.helm.sh/release-name: runtime-fabric
    meta.helm.sh/release-namespace: rtf
  creationTimestamp: "2025-08-04T23:17:17Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: runtime-fabric
    app.kubernetes.io/managed-by: Helm
    rtf.mulesoft.com/agentNamespace: rtf
    rtf.mulesoft.com/component: agent
  name: persistencegateways.rtf.mulesoft.com
  resourceVersion: "1142"
  uid: e5de66e4-bfd4-489a-a8dc-236ad6062a57
spec:
  conversion:
    strategy: None
  group: rtf.mulesoft.com
  names:
    kind: PersistenceGateway
    listKind: PersistenceGatewayList
    plural: persistencegateways
    shortNames:
    - pg
    singular: persistencegateway
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        properties:
          spec:
            properties:
              objectStore:
                properties:
                  backendDriver:
                    default: postgresql
                    type: string
                  disableAlwaysOverwriteKeyFix:
                    default: false
                    type: boolean
                  disableDecodeKeysFix:
                    default: false
                    type: boolean
                  maxBackendConnectionPool:
                    default: 20
                    type: integer
                  replicas:
                    default: 2
                    type: integer
                  resources:
                    properties:
                      limits:
                        properties:
                          cpu:
                            default: 250m
                            type: string
                          memory:
                            default: 150Mi
                            type: string
                        type: object
                      requests:
                        properties:
                          cpu:
                            default: 200m
                            type: string
                          memory:
                            default: 75Mi
                            type: string
                        type: object
                    type: object
                  secretRef:
                    properties:
                      name:
                        default: persistence-gateway-creds
                        type: string
                    type: object
                  topologySpreadConstraints:
                    default:
                    - maxSkew: 1
                      topologyKey: topology.kubernetes.io/zone
                      whenUnsatisfiable: ScheduleAnyway
                    items:
                      properties:
                        maxSkew:
                          default: 1
                          type: integer
                        topologyKey:
                          default: topology.kubernetes.io/zone
                          type: string
                        whenUnsatisfiable:
                          default: ScheduleAnyway
                          type: string
                      type: object
                    type: array
                type: object
            type: object
        type: object
    served: true
    storage: true
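Putting it all together, a tuned PersistenceGateway resource could look like this; the apiVersion and kind follow from the CRD above, while the metadata name and namespace are assumptions for illustration:

apiVersion: rtf.mulesoft.com/v1
kind: PersistenceGateway
metadata:
  name: persistence-gateway  # assumed name; use your own CR's name
  namespace: rtf
spec:
  objectStore:
    backendDriver: postgresql
    maxBackendConnectionPool: 20
    replicas: 3
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 250m
        memory: 128Mi
    secretRef:
      name: rtf-pg-secret
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway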