Choosing your connection path
Not every deployment needs a pooler. Here is how the options compare.
Every proxy adds latency to every statement. In a typical API request with 10–30 database calls, that per-statement cost compounds into measurable response time. Every database request pays this cost — whether you measure it or not.
Choose the path with the lowest overhead that still solves your connection problem.
There are three common options:

- Direct connection: 0 ms/stmt
- pgagroal: ~0.1–0.3 ms/stmt
- RDS Proxy: ~1–3 ms/stmt
Comparison
| | Direct | pgagroal | RDS Proxy |
|---|---|---|---|
| Per-statement overhead | 0 | ~0.1–0.3 ms | ~1–3 ms |
| Connection setup | Full TCP + auth | Reuses backend conn | Reuses backend conn |
| Runs where | N/A | Your infrastructure | AWS-managed |
| Best for | Single app, few connections | Default for most production workloads with multiple services | IAM auth, serverless, Lambda |
| Session state | Full support | Depends on pipeline mode | Limited (pinning) |
| Cost | None | Container resources | Per-vCPU/hour |
Per-statement latency
Every proxy between your application and PostgreSQL adds latency to each statement. This is unavoidable — the proxy must receive the packet, forward it, receive the response, and return it.
pgagroal operates at the PostgreSQL wire protocol level in C, which keeps overhead low. RDS Proxy adds more because it runs as a separate managed service with its own network hop and internal processing.
For a single query, the difference is negligible. For transactions with many statements, it compounds.
Compounded latency in real API requests
A single API request typically executes 10–30 database statements: read user, check permissions, validate input, write data, update counters, insert audit log. Each statement pays the proxy overhead.
| Path | Per-statement | 10 statements | 20 statements | 30 statements |
|---|---|---|---|---|
| Direct | 0 ms | 0 ms | 0 ms | 0 ms |
| pgagroal | ~0.2 ms | ~2 ms | ~4 ms | ~6 ms |
| RDS Proxy | ~2 ms | ~20 ms | ~40 ms | ~60 ms |
These are typical overhead numbers, not benchmarks. Actual values depend on network topology and instance size. The point: a 2 ms per-statement overhead that seems harmless on one query adds 60 ms to a 30-statement API request. At 100 requests/second, that is 6 full seconds of cumulative proxy wait per second of traffic.
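The compounding is simple arithmetic, and a short script makes it explicit. This sketch reuses the illustrative overhead figures from the table above; they are not measurements:

```python
# Sketch: compound per-statement proxy overhead into per-request and
# per-second totals. Overhead values are the illustrative numbers from
# the table above, not benchmarks.

def request_overhead_ms(per_stmt_ms: float, statements: int) -> float:
    """Total proxy overhead added to a single API request."""
    return per_stmt_ms * statements

def cumulative_wait_s(per_stmt_ms: float, statements: int, rps: int) -> float:
    """Proxy wait accumulated across all requests in one second of traffic."""
    return request_overhead_ms(per_stmt_ms, statements) * rps / 1000.0

# A 30-statement request through a ~2 ms/stmt proxy:
print(request_overhead_ms(2.0, 30))     # 60.0 ms added to the request
print(cumulative_wait_s(2.0, 30, 100))  # 6.0 s of wait per second at 100 req/s
```

The same functions applied to the pgagroal row (~0.2 ms/stmt) yield 6 ms per request, which is why the per-statement figure, not the proxy's existence, is the number to watch.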
When to use what
Direct connection
Use when you have a single application with a well-configured connection pool (HikariCP, SQLAlchemy, etc.) and your connection count stays within PostgreSQL's limits. No pooler needed.
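Whether the direct path fits comes down to arithmetic: the sum of client-side pool sizes across all replicas must stay under the server's `max_connections`, with headroom for superuser and maintenance sessions. A minimal check, with made-up deployment numbers for illustration:

```python
# Sketch: does a direct-connection deployment fit within max_connections?
# The replica/pool/limit values below are hypothetical examples.

def fits_direct(replicas: int, pool_size: int,
                max_connections: int, reserved: int = 10) -> bool:
    """True if every replica's pool can open fully without exhausting
    PostgreSQL's max_connections, keeping `reserved` slots free for
    superuser and maintenance sessions."""
    return replicas * pool_size + reserved <= max_connections

# 4 replicas with a HikariCP pool of 20 against the default max_connections=100:
print(fits_direct(4, 20, 100))  # True  (80 + 10 <= 100)
# Scale to 8 replicas and the direct path no longer fits:
print(fits_direct(8, 20, 100))  # False (160 + 10 > 100) -> consider a pooler
```

When the check starts failing as replicas scale out, that is the signal to move to a pooler rather than raise `max_connections` indefinitely.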
pgagroal
Use when multiple services or many application replicas share one PostgreSQL backend and you need to keep backend connection count low without adding significant latency. Runs in your infrastructure — no vendor dependency.
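As a rough sketch of what running pgagroal involves, a minimal `pgagroal.conf` looks along these lines. The values are illustrative; consult the pgagroal documentation for the required keys and defaults on your version:

```ini
; Illustrative pgagroal.conf sketch -- values are examples, not recommendations
[pgagroal]
host = *
port = 2345
unix_socket_dir = /tmp/pgagroal
log_type = console
log_level = info
max_connections = 100
idle_timeout = 600
pipeline = auto        ; see the docs: pipeline mode affects session-state support

[primary]
host = localhost
port = 5432
```

The `pipeline` setting is the knob behind the "Depends on pipeline mode" entry in the comparison table: which mode you run determines how much session state the pooler can preserve across clients.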
RDS Proxy
Use when you need IAM-based database authentication, are running Lambda or serverless functions that create many short-lived connections, or want a fully managed proxy without operational overhead. Accept the higher per-statement latency as the trade-off.
The decision
Use pgagroal if you need connection pooling and care about per-statement latency. This covers most production deployments: multiple services, Kubernetes workloads, any environment where you control the infrastructure.
Use RDS Proxy if you specifically need IAM database authentication or are running Lambda functions that cannot maintain persistent connections. RDS Proxy is designed for connection management, not low-latency workloads — accept the per-statement cost as the trade-off for managed infrastructure.
Connect directly if you have a single application with its own connection pool and your total connection count stays within PostgreSQL's limits.
If your application executes multiple SQL statements per request, connection path latency will often dominate query execution time. The pgagroal container provides a minimal, production-ready setup with predictable latency and verifiable builds.