Maintaining Throughput With Fewer Physical Connections

In the previous post, I demonstrated the YugabyteDB Connection Manager with two connections so that we could easily track the physical process identifiers. To highlight its main advantage, I'll now use PgBench to run 50 connections on a tiny cluster.

I initialize the PgBench tables, using -I to run only the drop, create-table, primary-key, foreign-key, and server-side data generation steps:

yugabyte=# \! PGOPTIONS="-c client_min_messages=error" pgbench -iIdtpfG
dropping old tables...
creating tables...
creating primary keys...
creating foreign keys...
generating data (server-side)...
done in 5.83 s (drop tables 0.02 s, create tables 1.03 s, primary keys 1.41 s, foreign keys 0.95 s, server-side generate 2.41 s).

The following PgBench command runs the select-only workload (-S) from fifty clients (-c 50), displaying the throughput every ten seconds (-P 10) for one minute (-T 60). I limit the throughput to 2000 transactions per second (-R 2000). I run it in the background, wait thirty seconds, and then list the sessions in pg_stat_activity that serve this application.

\! pgbench -R 2000 -c 50 -nS -T 60 -P 10 & sleep 30
select datname, pid, state, wait_event 
 from pg_stat_activity where application_name='pgbench';

Without Connection Manager

Here is the output when the Connection Manager is not enabled (enable_ysql_conn_mgr=false):

yugabyte=# \! pgbench -R2000 -c 50 -nS -T 60 -P 10 & sleep 30
pgbench (16.2, server 11.2-YB-2024.1.2.0-b0)
progress: 10.0 s, 1344.0 tps, lat 38.601 ms stddev 56.346, 0 failed, lag 19.237 ms
progress: 20.0 s, 2000.0 tps, lat 18.373 ms stddev 0.956, 0 failed, lag 0.087 ms

yugabyte=# select datname, pid, state, wait_event
  from pg_stat_activity where application_name='pgbench';

 datname  |   pid   | state  |   wait_event
----------+---------+--------+-----------------
 yugabyte | 4066426 | idle   | ClientRead
 yugabyte | 4066436 | idle   | ClientRead
 yugabyte | 4066445 | idle   | StorageRead
 yugabyte | 4066455 | idle   | ClientRead
 yugabyte | 4066464 | idle   | ClientRead
 yugabyte | 4066473 | idle   | ClientRead
 yugabyte | 4066482 | idle   | ClientRead
 yugabyte | 4066491 | idle   | QueryProcessing
 yugabyte | 4066500 | idle   | ClientRead
 yugabyte | 4066528 | idle   | ClientRead
 yugabyte | 4066597 | idle   | QueryProcessing
 yugabyte | 4066606 | idle   | ClientRead
 yugabyte | 4066615 | idle   | ClientRead
 yugabyte | 4066624 | idle   | StorageRead
 yugabyte | 4066633 | idle   | ClientRead
 yugabyte | 4066642 | idle   | ClientRead
 yugabyte | 4066652 | idle   | ClientRead
 yugabyte | 4066661 | idle   | ClientRead
 yugabyte | 4066671 | active | StorageRead
 yugabyte | 4066680 | idle   | ClientRead
 yugabyte | 4066689 | idle   | ClientRead
 yugabyte | 4066698 | idle   | ClientRead
 yugabyte | 4066707 | idle   | ClientRead
 yugabyte | 4066716 | idle   | ClientRead
 yugabyte | 4066725 | active | StorageRead
 yugabyte | 4066734 | idle   | ClientRead
 yugabyte | 4066743 | idle   | ClientRead
 yugabyte | 4066752 | idle   | ClientRead
 yugabyte | 4066773 | idle   | ClientRead
 yugabyte | 4066782 | idle   | ClientRead
 yugabyte | 4066791 | active | StorageRead
 yugabyte | 4066800 | idle   | ClientRead
 yugabyte | 4066810 | idle   | StorageRead
 yugabyte | 4066820 | idle   | ClientRead
 yugabyte | 4066829 | idle   | ClientRead
 yugabyte | 4066838 | active | ClientRead
 yugabyte | 4066847 | idle   | ClientRead
 yugabyte | 4066856 | idle   | ClientRead
 yugabyte | 4066865 | idle   | ClientRead
 yugabyte | 4066874 | idle   | ClientRead
 yugabyte | 4066893 | active | StorageRead
 yugabyte | 4066902 | idle   | ClientRead
 yugabyte | 4066911 | idle   | ClientRead
 yugabyte | 4066920 | idle   | ClientRead
 yugabyte | 4066929 | idle   | StorageRead
 yugabyte | 4066938 | active | StorageRead
 yugabyte | 4066947 | idle   | ClientRead
 yugabyte | 4066957 | idle   | QueryProcessing
 yugabyte | 4066966 | idle   | ClientRead
 yugabyte | 4066976 | idle   | ClientRead
(50 rows)

progress: 30.0 s, 1976.3 tps, lat 18.361 ms stddev 0.802, 0 failed, lag 0.085 ms
progress: 40.0 s, 1973.8 tps, lat 18.331 ms stddev 0.894, 0 failed, lag 0.085 ms
progress: 50.0 s, 1993.6 tps, lat 18.352 ms stddev 1.431, 0 failed, lag 0.136 ms
progress: 60.0 s, 1979.9 tps, lat 18.474 ms stddev 0.832, 0 failed, lag 0.075 ms
transaction type: <builtin: select only>
scaling factor: 1
query mode: simple
number of clients: 50
number of threads: 1
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 112716
number of failed transactions: 0 (0.000%)
latency average = 20.792 ms
latency stddev = 20.554 ms
rate limit schedule lag: avg 2.376 (max 249.781) ms
initial connection time = 3281.084 ms
tps = 1985.270189 (without initial connection time)

There are two observations here. First, the throughput stays slightly below the goal of 2000 transactions per second. I'm running this on a tiny cluster: the three YugabyteDB nodes run on a 4vCPU VM with 20GB RAM. Second, there are 50 sessions in pg_stat_activity, and most of them are idle, waiting for the next application call (ClientRead).
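
To quantify this, the snapshot can be aggregated with a quick sketch of a query, using the same filter as above:

select state, wait_event, count(*)
  from pg_stat_activity
 where application_name='pgbench'
 group by state, wait_event
 order by count(*) desc;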

With Connection Manager Enabled
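
For reference, the Connection Manager is enabled with the enable_ysql_conn_mgr flag on the yb-tserver. Here is a sketch of how it can be set when starting a node with yugabyted; depending on the release, it may still be a preview flag that has to be allow-listed first:

yugabyted start \
  --tserver_flags "allowed_preview_flags_csv=enable_ysql_conn_mgr,enable_ysql_conn_mgr=true"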

Here is the output when running the same workload with the Connection Manager enabled (enable_ysql_conn_mgr=true):

yugabyte=# \! pgbench -R2000 -c 50 -nS -T 60 -P 10 & sleep 30
pgbench (16.2, server 11.2-YB-2.23.0.0-b0)
progress: 10.0 s, 1622.7 tps, lat 332.494 ms stddev 260.110, 0 failed, lag 310.399 ms
progress: 20.0 s, 1995.2 tps, lat 15.052 ms stddev 4.248, 0 failed, lag 0.241 ms

yugabyte=# select datname, pid, state, wait_event
  from pg_stat_activity where application_name='pgbench';

 datname  |   pid   | state  | wait_event
----------+---------+--------+-------------
 yugabyte | 3084857 | idle   | ClientRead
 yugabyte | 3084961 | idle   | StorageRead
 yugabyte | 3084988 | active | StorageRead
(3 rows)

progress: 30.0 s, 1991.7 tps, lat 15.597 ms stddev 3.686, 0 failed, lag 0.160 ms
progress: 40.0 s, 2005.0 tps, lat 17.587 ms stddev 7.656, 0 failed, lag 0.920 ms
progress: 50.0 s, 2012.9 tps, lat 15.461 ms stddev 3.289, 0 failed, lag 0.188 ms
progress: 60.0 s, 2013.5 tps, lat 16.817 ms stddev 8.673, 0 failed, lag 1.229 ms
transaction type: <builtin: select only>
scaling factor: 1
query mode: simple
number of clients: 50
number of threads: 1
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 116441
number of failed transactions: 0 (0.000%)
latency average = 60.196 ms
latency stddev = 146.512 ms
rate limit schedule lag: avg 43.729 (max 744.211) ms
initial connection time = 1947.742 ms
tps = 2005.197661 (without initial connection time)

The throughput was slightly higher, reaching the limit set in PgBench. Thanks to the Connection Manager's connection pool, only three physical connections were used, and they were rarely idle because they served the fifty logical connections.

I've queried the Connection Manager statistics endpoint (http://yb-tserver-1:13000/connections) during the run, at a time when five physical connections were active:
[Screenshot: Connection Manager statistics from the /connections page, showing five active physical connections]
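
The same endpoint can also be queried from a shell during the run (assuming port 13000 is reachable from the client host, as in the URL above):

curl -s http://yb-tserver-1:13000/connections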

Prepared Statements

Managing prepared statements can be challenging when using external connection pools because a prepared statement's scope is the session, and it must survive while the session is idle. However, YugabyteDB handles this seamlessly. When an explicit PREPARE is called, the YugabyteDB Connection Manager marks the session as sticky, and its physical connection cannot be reused until the DEALLOCATE call. With protocol-level prepared statements, the YugabyteDB Connection Manager maintains a map of prepared statements shared among all clients. This allows transparent usage and lets a physical connection be reused by multiple logical connections.
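
As an illustration of the explicit case, here is a sketch against the pgbench_accounts table created earlier; between the PREPARE and the DEALLOCATE, the logical session stays pinned to its physical connection:

-- explicit PREPARE: the session becomes sticky and keeps its physical connection
PREPARE get_balance (int) AS
  SELECT abalance FROM pgbench_accounts WHERE aid = $1;
EXECUTE get_balance(1);
-- after DEALLOCATE, the physical connection can serve other logical connections again
DEALLOCATE get_balance;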

Here is the same run with -M prepared so that PgBench uses protocol-level prepared statements:

yugabyte=# \! pgbench -M prepared -R2000 -c 50 -nS -T 60 -P 10 & sleep 30
pgbench (16.2, server 11.2-YB-2.23.0.0-b0)
progress: 10.0 s, 1598.3 tps, lat 657.067 ms stddev 375.417, 0 failed, lag 633.709 ms
progress: 20.0 s, 1999.2 tps, lat 16.997 ms stddev 8.943, 0 failed, lag 1.230 ms

yugabyte=# select datname, pid, state, wait_event from pg_stat_activity where application_name='pgbench';

 datname  |   pid   | state  | wait_event
----------+---------+--------+------------
 yugabyte | 3093328 | active | ClientRead
 yugabyte | 3093332 | active | ClientRead
 yugabyte | 3093354 | active | ClientRead
 yugabyte | 3093494 | idle   | ClientRead
 yugabyte | 3093548 | active | ClientRead
(5 rows)

progress: 30.0 s, 1983.2 tps, lat 21.069 ms stddev 16.779, 0 failed, lag 4.537 ms
progress: 40.0 s, 2027.0 tps, lat 16.070 ms stddev 4.464, 0 failed, lag 0.402 ms
progress: 50.0 s, 1975.1 tps, lat 17.465 ms stddev 8.936, 0 failed, lag 1.610 ms
progress: 60.0 s, 2012.7 tps, lat 16.296 ms stddev 4.748, 0 failed, lag 0.376 ms
transaction type: <builtin: select only>
scaling factor: 1
query mode: prepared
number of clients: 50
number of threads: 1
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 115998
number of failed transactions: 0 (0.000%)
latency average = 105.683 ms
latency stddev = 260.940 ms
rate limit schedule lag: avg 88.714 (max 1153.510) ms
initial connection time = 1977.256 ms
tps = 1998.509569 (without initial connection time)

With the prepared statement protocol, as with the simple protocol, the throughput is higher than without the Connection Manager, while using far fewer physical connections.

Throughput and Latency

I've conducted my tests using pgbench and shared the complete output, which includes latency measurements for the initial connection time and at ten-second intervals during the run (-P 10). The average latency appears to be higher with the Connection Manager (60.196 ms and 105.683 ms) than without it (20.792 ms). However, it's essential to look at the details to properly understand these numbers, as they are influenced by the short duration of the test (duration: 60 s).
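
As a rough check against the progress lines of the Connection Manager run above: the first ten-second interval processed about 16,000 transactions at ~332 ms, while the remaining ~100,000 transactions averaged ~16 ms, so the overall average is dominated by the warm-up:

(16,227 × 332.5 ms + 100,214 × 16.1 ms) / 116,441 ≈ 60 ms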

When using a connection pool without a minimum size, the pool initially has no physical connections; they are only established when needed. With the Connection Manager, the initial connection time is shorter (about two seconds instead of three). However, the first transactions may experience higher latency (hundreds of milliseconds) because the cost of establishing the physical connections is deferred until they are executed. In real deployments, the pool can be started with a minimum number of physical connections.
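
For example, I believe a yb-tserver flag along the lines of ysql_conn_mgr_min_conns_per_db keeps a minimum number of server connections open per database; treat the exact name as an assumption and check the flag reference for your release:

# assumption: ysql_conn_mgr_min_conns_per_db is the per-database minimum pool size
yugabyted start \
  --tserver_flags "enable_ysql_conn_mgr=true,ysql_conn_mgr_min_conns_per_db=5"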

The YugabyteDB Connection Manager lets a minimal number of physical connections serve the logical connections while they are active, without impacting latency or throughput. It is integrated with the database, operates transparently, and is compatible with prepared statements.
