Kubernetes deployment

Helm chart, health probes, scaling, and production deployment patterns.

Install with Helm

helm install pgagroal helm/pgagroal/ \
  --set postgresql.host=your-postgres-service \
  --set credentials.username=app \
  --set credentials.password=secret \
  -n pgagroal --create-namespace

Replace your-postgres-service with the Kubernetes Service name or hostname of your PostgreSQL backend.

Minimal production values

Most deployments need only a few overrides. The chart ships with production-ready defaults for security contexts, resource limits, probes, and pod disruption budgets.

# values-production.yaml
replicaCount: 2

image:
  repository: elevarq/pgagroal
  tag: "0.2.0"

postgresql:
  host: "pg-primary.database.svc.cluster.local"
  port: 5432

pgagroal:
  maxConnections: 50
  logLevel: warn

credentials:
  existingSecret: "pgagroal-credentials"

service:
  type: ClusterIP
  port: 6432

Store credentials in a Kubernetes Secret, not in values files. The chart reads PG_USERNAME and PG_PASSWORD from the Secret named in credentials.existingSecret.
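For reference, a Secret carrying those two keys might look like the sketch below. The Secret name matches the credentials.existingSecret value from the example above; the namespace, username, and password are placeholders for illustration.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pgagroal-credentials
  namespace: pgagroal
type: Opaque
stringData:
  # Key names the chart expects, per the note above
  PG_USERNAME: app
  PG_PASSWORD: change-me
```

Using stringData lets you write the values in plain text; Kubernetes base64-encodes them on admission.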

Health probes

The chart configures liveness and readiness probes using the same command as the container's built-in health check:

pgagroal-cli -c /etc/pgagroal/pgagroal.conf ping

This checks that the pgagroal daemon is running and responsive. It does not verify backend connectivity — a healthy pooler with an unreachable backend will still pass the probe. This is intentional: the pooler should stay running so it can recover when the backend returns.

Probe       Delay   Interval   Failure threshold
Liveness    5s      10s        3 (restart after 30s of failure)
Readiness   3s      5s         2 (stop traffic after 10s of failure)
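Assuming standard Kubernetes exec probes, the command and timings above translate to a container spec along these lines. This is a sketch of what the chart renders, not a verbatim copy; the exact manifest depends on the chart version.

```yaml
livenessProbe:
  exec:
    command: ["pgagroal-cli", "-c", "/etc/pgagroal/pgagroal.conf", "ping"]
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  exec:
    command: ["pgagroal-cli", "-c", "/etc/pgagroal/pgagroal.conf", "ping"]
  initialDelaySeconds: 3
  periodSeconds: 5
  failureThreshold: 2
```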

Security context

The chart enforces a hardened security posture by default. These settings are applied out of the box — you do not need to configure them.

  • Non-root — runs as UID/GID 1000
  • No privilege escalation — allowPrivilegeEscalation: false
  • All capabilities dropped — capabilities.drop: [ALL]
  • Read-only root filesystem — writable paths use emptyDir volumes
  • Seccomp — RuntimeDefault profile

Scaling

The chart defaults to 2 replicas with a PodDisruptionBudget of minAvailable: 1. This ensures at least one pooler remains available during rolling updates and node drains.

Replica count

Each replica maintains its own connection pool to the backend. Two replicas with maxConnections: 50 each means up to 100 total backend connections. Plan your replica count and pool size together so the total does not exceed PostgreSQL's max_connections.
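As an illustrative sizing override (the keys reuse those from the values file above; the numbers are an assumed example, not chart defaults): three replicas at 30 connections each leave headroom under a backend configured with max_connections = 100.

```yaml
# values-scaling.yaml (example)
replicaCount: 3

pgagroal:
  # 3 replicas x 30 connections = 90 backend connections,
  # staying below an assumed PostgreSQL max_connections of 100
  maxConnections: 30
```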

Resource limits

The chart defaults to 100m CPU request / 1 CPU limit and 64Mi memory request / 256Mi limit. pgagroal is lightweight — these defaults are generous for most workloads. Increase CPU if you run more than 100 pooled connections per replica.

Common patterns

Sidecar vs dedicated deployment

The Helm chart deploys pgagroal as a standalone Deployment with its own Service. This is the recommended pattern — it lets multiple application Deployments share one pool and makes scaling and monitoring independent.

A sidecar pattern (pooler in each application pod) is possible but rarely useful. It defeats the purpose of connection pooling because each pod maintains a separate pool that cannot share connections.

Connecting applications

Point your application's database connection string at the pgagroal Service:

postgresql://app:secret@pgagroal.pgagroal.svc.cluster.local:6432/appdb

The Service name and namespace depend on your Helm release name and -n flag.
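One way to wire this up is to inject the connection string into the application pod as an environment variable. DATABASE_URL is an arbitrary name your application would read; the inline credentials here are placeholders for illustration, and in practice the password belongs in a Secret as recommended earlier.

```yaml
# Fragment of an application Deployment's container spec (illustrative)
env:
  - name: DATABASE_URL
    value: "postgresql://app:secret@pgagroal.pgagroal.svc.cluster.local:6432/appdb"
```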

See also: Configuration for environment variables and pool sizing.