Configuration
Everything is configured via environment variables. No configuration files needed.
How it works
The container generates its configuration at startup from environment variables. You never mount a config file — pass the variables you need, and the container handles the rest. Anything you don't set uses a production-tuned default.
Environment variables
| Variable | Default | Description |
|---|---|---|
| PG_BACKEND_HOST | postgres | PostgreSQL server hostname or IP address |
| PG_BACKEND_PORT | 5432 | PostgreSQL server port |
| PGAGROAL_HOST | * | Address the pooler listens on. * means all interfaces. |
| PGAGROAL_PORT | 6432 | Port the pooler listens on |
| MAX_CONNECTIONS | 100 | Maximum number of pooled backend connections |
| PGAGROAL_LOG_LEVEL | info | Log verbosity: fatal, error, warn, info, debug1–debug5, trace |
| PG_USERNAME | — | Optional. Register a user at startup for explicit authentication. |
| PG_PASSWORD | — | Optional. Password for the registered user. Requires PG_USERNAME. |
Connection pool sizing
MAX_CONNECTIONS controls how many backend connections pgagroal maintains to PostgreSQL. This is the most important setting to get right.
How to choose a value
Start with the number of concurrent database connections your application actually needs — not the number of application instances or threads. Pooling works by reusing connections, so you typically need far fewer backend connections than application connections.
A reasonable starting point for most workloads:
```bash
# Small workload (single service, low concurrency)
MAX_CONNECTIONS=25

# Medium workload (multiple services, moderate concurrency)
MAX_CONNECTIONS=100

# Large workload (high concurrency, many services)
MAX_CONNECTIONS=200
```

Do not set this higher than PostgreSQL's own max_connections. The pooler cannot open more connections than the backend allows.
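The sizing guidance above can be expressed as a small rule-of-thumb calculation. The helper below is purely illustrative (the function name and its 25% headroom factor are assumptions, not part of pgagroal): size the pool from concurrent query demand, then cap it at the backend's own max_connections.

```python
def suggest_max_connections(concurrent_queries: int,
                            backend_max_connections: int = 100,
                            headroom: float = 1.25) -> int:
    """Rule of thumb: concurrent queries plus ~25% headroom,
    never exceeding what the PostgreSQL backend allows."""
    wanted = int(concurrent_queries * headroom)
    return max(1, min(wanted, backend_max_connections))

# A service that runs ~20 queries at once:
print(suggest_max_connections(20))  # 25
# A demand spike is capped by the backend's own limit:
print(suggest_max_connections(500, backend_max_connections=100))  # 100
```

Note how the second call is clamped: raising MAX_CONNECTIONS past the backend limit would only produce connection errors, not more capacity.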
Timeouts and limits
These are set in the container's built-in configuration and tuned for production use. You do not need to change them for most deployments.
| Setting | Value | What it does |
|---|---|---|
| idle_timeout | 600 | Seconds before an idle backend connection is closed. Frees resources on the PostgreSQL side. |
| blocking_timeout | 30 | Seconds a client waits for a backend connection before giving up. Prevents indefinite hangs. |
| keep_alive | on | TCP keepalive. Detects dead connections through firewalls and load balancers. |
| nodelay | on | Disables Nagle's algorithm. Reduces latency for small packets (typical of database traffic). |
| pipeline | auto | Pooling mode. Auto-detects the best strategy based on query patterns. |
| ev_backend | epoll | Event loop implementation. Epoll is the standard choice on Linux. |
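To make blocking_timeout concrete, here is a minimal Python sketch of the semantics: a fixed-size pool modeled as a semaphore, where a client waits a bounded time for a free connection instead of hanging forever. This models the behavior only; it is not how pgagroal itself is implemented.

```python
import threading

# A pool of 2 "backend connections" modeled as a semaphore.
pool = threading.BoundedSemaphore(2)
BLOCKING_TIMEOUT = 0.1  # seconds; pgagroal's built-in value is 30

def acquire_connection() -> bool:
    """Wait up to BLOCKING_TIMEOUT for a free connection.
    Returns False when the client gives up, mirroring blocking_timeout."""
    return pool.acquire(timeout=BLOCKING_TIMEOUT)

# Two clients take both connections; a third fails fast instead of hanging.
assert acquire_connection() and acquire_connection()
print(acquire_connection())  # False: pool exhausted, bounded wait expired
```

The point of the bounded wait is backpressure: when the pool is saturated, clients get a prompt error they can handle, rather than queueing indefinitely.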
Authentication
Passthrough mode (default)
By default, the container accepts any username and forwards credentials to PostgreSQL for validation. This is the simplest mode — pgagroal does not manage users, and authentication is handled entirely by the backend.
```bash
# Passthrough — no extra env vars needed
docker run -d --name pgagroal \
  -p 6432:6432 \
  -e PG_BACKEND_HOST=db.example.com \
  elevarq/pgagroal:1.0.0
```

Registered user mode
Set PG_USERNAME and PG_PASSWORD to register a user at startup. The container writes a hashed credential file that pgagroal uses for initial authentication before forwarding to the backend.
```bash
docker run -d --name pgagroal \
  -p 6432:6432 \
  -e PG_BACKEND_HOST=db.example.com \
  -e PG_USERNAME=app_user \
  -e PG_PASSWORD=app_password \
  elevarq/pgagroal:1.0.0
```

In Kubernetes, pass credentials via a Secret rather than plain environment variables. The Helm chart supports this with credentials.existingSecret.
Recommended production setup
This is the setup we recommend for production. It covers what most deployments need: a backend connection, a bounded pool size, restart policy, and quiet logging. Start here and adjust only when you have a specific reason to.
```bash
docker run -d --name pgagroal \
  --restart unless-stopped \
  -p 6432:6432 \
  -e PG_BACKEND_HOST=db.internal.example.com \
  -e PG_BACKEND_PORT=5432 \
  -e MAX_CONNECTIONS=50 \
  -e PGAGROAL_LOG_LEVEL=warn \
  elevarq/pgagroal:1.0.0
```

What this gives you
- 50 pooled backend connections (set to match your workload)
- Automatic restart on failure
- Warn-level logging (less noise, still catches problems)
- Built-in health check (pgagroal-cli ping, every 10s)
- Idle connections cleaned up after 10 minutes
- TCP keepalive and nodelay enabled by default
Connection limits and CPU
pgagroal uses one thread per connection. On a single-core container (1 CPU), 50–100 connections is a practical ceiling. If you need more, allocate more CPU to the container. A 2-CPU container comfortably handles 200 pooled connections.
The right number is always lower than you think. Most applications that claim to need 500 connections actually need 20 connections and a pooler — that is exactly what this container provides. If your pool is consistently full, the problem is almost never the pool size. It is long-running transactions, leaked connections, or missing connection close calls in application code.
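The "500 claimed, 20 needed" gap follows from Little's law: the number of connections in use at any moment is roughly throughput multiplied by how long each connection is held. The helper below is a hypothetical back-of-the-envelope calculation, not a pgagroal feature.

```python
def connections_needed(queries_per_second: float, avg_txn_ms: float) -> float:
    """Little's law: connections in use ~= arrival rate * time each is held."""
    return queries_per_second * avg_txn_ms / 1000.0

# 1000 qps of 10 ms transactions occupy only ~10 connections on average:
print(connections_needed(1000, 10))   # 10.0
# The same rate with 500 ms transactions needs 500. The fix is the slow
# transactions, not a bigger pool:
print(connections_needed(1000, 500))  # 500.0
```

This is why a consistently full pool usually points at transaction duration, not pool size: cutting the hold time by 10x frees 10x the capacity without touching MAX_CONNECTIONS.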
When pgagroal may be unnecessary
An external pooler adds value when multiple application instances share a backend, or when you need to decouple connection lifecycle from application deployment. It may be unnecessary in simpler topologies.
- Single instance with a well-tuned in-process pool. If you run one application instance with a properly configured connection library (HikariCP, SQLAlchemy pool, node-postgres) and have no operational need for a separate connection layer, pgagroal adds overhead with little benefit. This changes as soon as you scale to multiple instances or need independent connection management.
- Trying to fix slow queries. A pooler manages connections, not query performance. If queries are slow, fix the queries first.
Workloads that need extra validation
These considerations apply to any connection pooler, not just pgagroal. Some application patterns depend on session affinity — the assumption that a database connection persists across multiple operations. Test these carefully before deploying with any external pooler.
- Prepared statements — created with PREPARE and valid only for the session that created them. If the pooler reassigns the backend connection, the prepared statement will not exist on the new connection.
- Temporary tables — exist only within the session that created them. Dropped automatically when the session ends or the connection is returned to the pool.
- Session-level settings — changes made with SET (e.g. search_path, work_mem, statement_timeout) apply to the session, not the transaction. They may not persist if the pooler assigns a different backend connection.
- Advisory locks held across transactions — session-level advisory locks (pg_advisory_lock) are tied to the backend connection. If the connection is reassigned, the lock is released silently. Transaction-level advisory locks (pg_advisory_xact_lock) are safe.
If your application uses any of these patterns, test with pgagroal in a staging environment before deploying to production. In many cases, the application can be adjusted to avoid session-bound behavior.
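A toy model makes the failure mode concrete. The BackendConnection class below is purely illustrative (it is not pgagroal code): session state lives on a specific backend connection, so a client handed a different connection on its next checkout simply does not see the state it set earlier.

```python
# Toy model: each backend connection carries its own session state.
class BackendConnection:
    def __init__(self, name: str):
        self.name = name
        self.session_settings: dict = {}
        self.prepared: set = set()

conn_a = BackendConnection("backend-1")
conn_b = BackendConnection("backend-2")

# The client prepares a statement and sets search_path on whichever
# backend connection the pooler handed it first:
conn_a.prepared.add("get_user")
conn_a.session_settings["search_path"] = "app"

# On the next checkout the pooler may hand back a different backend
# connection, where none of that session state exists:
print("get_user" in conn_b.prepared)               # False
print(conn_b.session_settings.get("search_path"))  # None
```

Transaction-scoped alternatives (SET LOCAL, pg_advisory_xact_lock) avoid the problem because their effects end at commit, before the connection can be handed to another client.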
See also: Troubleshooting for common configuration mistakes, or Kubernetes for Helm chart deployment.