Franck Pachot for AWS Heroes

Aurora DSQL - Simple Inserts Workload from an AWS CloudShell

My first test of Amazon Aurora DSQL focuses on read and write latency, using an existing Java application designed for PostgreSQL-compatible databases: the yb-sample-apps workload generator, built to micro-benchmark YugabyteDB and other PostgreSQL-compatible databases. I'll run the simplest CRUD operation: single-row inserts with no secondary indexes.

To simplify, I run this from a CloudShell in one of the regions where my Amazon Aurora DSQL cluster has an endpoint. I use the multi-region cluster set up in the previous post of this series. Here are the regions and their RTT in milliseconds:

| P99 latency (from cloudping.co) | Role | us-east-1 | us-east-2 | us-west-2 |
|---|---|---|---|---|
| us-east-1 (N. Virginia) | Linked cluster | 9.04 ms | 20.55 ms | 71.46 ms |
| us-east-2 (Ohio) | Linked cluster | 19.76 ms | 9.52 ms | 58.04 ms |
| us-west-2 (Oregon) | Witness | 64.12 ms | 55.42 ms | 5.81 ms |

I'll connect in us-east-1 (N. Virginia) to the endpoint in the same region.

CloudShell in us-east-1

I upgrade the AWS CLI to get the recently released `dsql` commands, install Java, and download the yb-sample-apps JAR from the YugabyteDB repository:


# install what is needed (wget and unzip for the CLI upgrade, java for the workload)
sudo dnf install -y wget unzip java

# upgrade AWS CLI (the dsql commands require a recent version)
(
 cd /var/tmp &&
 wget -c https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip &&
 unzip -u awscli-exe-linux-x86_64.zip &&
 sudo ./aws/install --update &&
 rm -rf ./aws
)

# download the YugabyteDB workload generator
wget -c https://github.com/yugabyte/yb-sample-apps/releases/download/v1.4.1/yb-sample-apps.jar &&
 java -jar yb-sample-apps.jar --help



The AWS_REGION variable is already set to the CloudShell's region. I connect to the first DSQL cluster returned by the list-clusters command (I have only one). For convenience, I set the libpq environment variables; the PostgreSQL JDBC driver doesn't read them, so I pass them explicitly on my Java command line.


# set environment variables with connection info

PGDATABASE=postgres
PGUSER=admin
PGHOST=$(
 aws dsql list-clusters --output text |
  awk '{print $NF ; exit }').dsql.${AWS_REGION}.on.aws
PGPORT=5432
PGPASSWORD=$(
 aws dsql generate-db-connect-admin-auth-token --hostname $PGHOST --expires-in 3600 
)
PGSSLMODE=require
export PGDATABASE PGUSER PGHOST PGPORT PGPASSWORD PGSSLMODE
set | grep ^PG


I can test the connection with psql, which is already installed in the CloudShell and uses the libpq environment variables:

[cloudshell-user@ip-10-136-53-158 ~]$ timeout 5 psql <<'SQL'
 \timing on
 explain analyze select
 \watch
SQL

Timing is on.
                     Sun 08 Dec 2024 07:56:05 PM UTC (every 2s)

                                     QUERY PLAN                                     
------------------------------------------------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.003 rows=1 loops=1)
 Planning Time: 0.382 ms
 Execution Time: 0.056 ms
(3 rows)

Time: 13.732 ms
                     Sun 08 Dec 2024 07:56:07 PM UTC (every 2s)

                                     QUERY PLAN                                     
------------------------------------------------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1)
 Planning Time: 0.016 ms
 Execution Time: 0.015 ms
(3 rows)

Time: 2.753 ms
                     Sun 08 Dec 2024 07:56:09 PM UTC (every 2s)

                                     QUERY PLAN                                     
------------------------------------------------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1)
 Planning Time: 0.017 ms
 Execution Time: 0.015 ms
(3 rows)

Time: 2.812 ms
[cloudshell-user@ip-10-136-53-158 ~]$ 

The latency from the CloudShell in the same region as the cluster is less than 3 milliseconds.

In another CloudShell, I tested the connection to the other region by setting the AWS_REGION variable before running the script that configures the PG environment variables. The latency increased by about 10 milliseconds, which suggests the round-trip time (RTT) I observe is slightly lower than the P99 values measured by CloudPing.

SqlInserts on Amazon Aurora DSQL

With this all set, I can run the SqlInserts workload of yb-sample-apps.jar - a sample key-value application with concurrent readers and writers that inserts rows with unique string keys:


java -jar yb-sample-apps.jar \
 --workload SqlInserts \
 --nodes $PGHOST:$PGPORT --default_postgres_database $PGDATABASE \
 --username $PGUSER --password $PGPASSWORD --load_balance false


Here is the output in text:

[cloudshell-user@ip-10-136-53-158 ~]$ PGPASSWORD=$(                                                                                                                                                         
 aws dsql generate-db-connect-admin-auth-token --hostname $PGHOST --expires-in 3600
)
[cloudshell-user@ip-10-136-53-158 ~]$ java -jar yb-sample-apps.jar \
>  --workload SqlInserts \
>  --nodes $PGHOST:$PGPORT --default_postgres_database $PGDATABASE \
>  --username $PGUSER --password $PGPASSWORD --load_balance false

0 [main] INFO com.yugabyte.sample.Main  - Starting sample app...
95 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Using a randomly generated UUID : 8ae7137a-aa0c-4c1f-95fc-f71c43e5547c
103 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - App: SqlInserts
104 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Run time (seconds): -1
104 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Adding node: mqabtu7ic5sdfgwwf3sjstb6sy.dsql.us-east-1.on.aws:5432
104 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Num reader threads: 2, num writer threads: 2
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Default postgres database: postgres
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Num unique keys to insert: 2000000
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Num keys to update: 0
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Num keys to read: 1500000
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Value size: 0
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Restrict values to ASCII strings: false
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Perform sanity check at end of app run: false
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Table TTL (secs): -1
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Local reads: false
105 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Read only load: false
106 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - SqlInserts workload: using driver set for load-balance = false
1895 [main] INFO com.yugabyte.sample.apps.SqlInserts  - Created table: postgresqlkeyvalue
6969 [Thread-0] INFO .MetricsTracker  - Read: 596.62 ops/sec (2.92 ms/op), 3022 total ops  |  Write: 73.15 ops/sec (27.02 ms/op), 366 total ops  |  Uptime: 5069 ms | 
11970 [Thread-0] INFO .MetricsTracker  - Read: 796.59 ops/sec (2.51 ms/op), 7006 total ops  |  Write: 84.39 ops/sec (23.71 ms/op), 788 total ops  |  Uptime: 10070 ms | 
16974 [Thread-0] INFO .MetricsTracker  - Read: 830.45 ops/sec (2.41 ms/op), 11159 total ops  |  Write: 90.13 ops/sec (22.20 ms/op), 1239 total ops  |  Uptime: 15074 ms | 
21975 [Thread-0] INFO .MetricsTracker  - Read: 817.42 ops/sec (2.44 ms/op), 15249 total ops  |  Write: 89.59 ops/sec (22.29 ms/op), 1687 total ops  |  Uptime: 20075 ms | 
26975 [Thread-0] INFO .MetricsTracker  - Read: 805.31 ops/sec (2.48 ms/op), 19276 total ops  |  Write: 89.59 ops/sec (22.33 ms/op), 2135 total ops  |  Uptime: 25075 ms | 
31976 [Thread-0] INFO .MetricsTracker  - Read: 823.53 ops/sec (2.42 ms/op), 23394 total ops  |  Write: 89.99 ops/sec (22.19 ms/op), 2585 total ops  |  Uptime: 30076 ms | 
36976 [Thread-0] INFO .MetricsTracker  - Read: 838.30 ops/sec (2.38 ms/op), 27586 total ops  |  Write: 90.39 ops/sec (22.14 ms/op), 3037 total ops  |  Uptime: 35076 ms | 
41977 [Thread-0] INFO .MetricsTracker  - Read: 825.08 ops/sec (2.42 ms/op), 31712 total ops  |  Write: 90.39 ops/sec (22.15 ms/op), 3489 total ops  |  Uptime: 40077 ms | 
46978 [Thread-0] INFO .MetricsTracker  - Read: 825.70 ops/sec (2.42 ms/op), 35841 total ops  |  Write: 89.99 ops/sec (22.18 ms/op), 3939 total ops  |  Uptime: 45078 ms | 
51978 [Thread-0] INFO .MetricsTracker  - Read: 826.31 ops/sec (2.42 ms/op), 39973 total ops  |  Write: 90.19 ops/sec (22.16 ms/op), 4390 total ops  |  Uptime: 50078 ms | 
56979 [Thread-0] INFO .MetricsTracker  - Read: 817.31 ops/sec (2.45 ms/op), 44060 total ops  |  Write: 89.99 ops/sec (22.26 ms/op), 4840 total ops  |  Uptime: 55079 ms | 
61979 [Thread-0] INFO .MetricsTracker  - Read: 832.29 ops/sec (2.40 ms/op), 48222 total ops  |  Write: 90.19 ops/sec (22.13 ms/op), 5291 total ops  |  Uptime: 60079 ms | 
66981 [Thread-0] INFO .MetricsTracker  - Read: 830.77 ops/sec (2.41 ms/op), 52377 total ops  |  Write: 90.37 ops/sec (22.11 ms/op), 5743 total ops  |  Uptime: 65081 ms | 
71981 [Thread-0] INFO .MetricsTracker  - Read: 839.89 ops/sec (2.38 ms/op), 56577 total ops  |  Write: 90.79 ops/sec (22.10 ms/op), 6197 total ops  |  Uptime: 70081 ms | 
76982 [Thread-0] INFO .MetricsTracker  - Read: 832.10 ops/sec (2.40 ms/op), 60738 total ops  |  Write: 90.39 ops/sec (22.11 ms/op), 6649 total ops  |  Uptime: 75082 ms | 
81984 [Thread-0] INFO .MetricsTracker  - Read: 771.20 ops/sec (2.59 ms/op), 64595 total ops  |  Write: 88.98 ops/sec (22.43 ms/op), 7094 total ops  |  Uptime: 80084 ms | 
86984 [Thread-0] INFO .MetricsTracker  - Read: 769.26 ops/sec (2.59 ms/op), 68442 total ops  |  Write: 89.18 ops/sec (22.39 ms/op), 7540 total ops  |  Uptime: 85084 ms | 
91985 [Thread-0] INFO .MetricsTracker  - Read: 795.10 ops/sec (2.51 ms/op), 72418 total ops  |  Write: 90.39 ops/sec (22.17 ms/op), 7992 total ops  |  Uptime: 90085 ms | 
96987 [Thread-0] INFO .MetricsTracker  - Read: 826.64 ops/sec (2.42 ms/op), 76552 total ops  |  Write: 90.56 ops/sec (22.06 ms/op), 8445 total ops  |  Uptime: 95087 ms | 
101988 [Thread-0] INFO .MetricsTracker  - Read: 822.47 ops/sec (2.43 ms/op), 80666 total ops  |  Write: 90.59 ops/sec (22.05 ms/op), 8898 total ops  |  Uptime: 100088 ms | 
106988 [Thread-0] INFO .MetricsTracker  - Read: 823.71 ops/sec (2.43 ms/op), 84785 total ops  |  Write: 90.99 ops/sec (21.99 ms/op), 9353 total ops  |  Uptime: 105088 ms | 
111989 [Thread-0] INFO .MetricsTracker  - Read: 829.12 ops/sec (2.41 ms/op), 88931 total ops  |  Write: 90.99 ops/sec (21.97 ms/op), 9808 total ops  |  Uptime: 110089 ms | 
116989 [Thread-0] INFO .MetricsTracker  - Read: 827.30 ops/sec (2.41 ms/op), 93068 total ops  |  Write: 90.79 ops/sec (22.04 ms/op), 10262 total ops  |  Uptime: 115089 ms | 
121991 [Thread-0] INFO .MetricsTracker  - Read: 819.83 ops/sec (2.44 ms/op), 97168 total ops  |  Write: 90.78 ops/sec (22.04 ms/op), 10716 total ops  |  Uptime: 120091 ms | 
126991 [Thread-0] INFO .MetricsTracker  - Read: 831.48 ops/sec (2.40 ms/op), 101326 total ops  |  Write: 90.79 ops/sec (21.97 ms/op), 11170 total ops  |  Uptime: 125091 ms | 
131992 [Thread-0] INFO .MetricsTracker  - Read: 839.91 ops/sec (2.38 ms/op), 105526 total ops  |  Write: 91.59 ops/sec (21.88 ms/op), 11628 total ops  |  Uptime: 130092 ms | 
136992 [Thread-0] INFO .MetricsTracker  - Read: 837.29 ops/sec (2.39 ms/op), 109713 total ops  |  Write: 90.99 ops/sec (21.92 ms/op), 12083 total ops  |  Uptime: 135092 ms | 
141993 [Thread-0] INFO .MetricsTracker  - Read: 812.71 ops/sec (2.46 ms/op), 113777 total ops  |  Write: 90.99 ops/sec (22.01 ms/op), 12538 total ops  |  Uptime: 140093 ms | 
146993 [Thread-0] INFO .MetricsTracker  - Read: 802.92 ops/sec (2.49 ms/op), 117792 total ops  |  Write: 90.79 ops/sec (22.02 ms/op), 12992 total ops  |  Uptime: 145093 ms | 
151994 [Thread-0] INFO .MetricsTracker  - Read: 828.31 ops/sec (2.41 ms/op), 121934 total ops  |  Write: 91.19 ops/sec (21.91 ms/op), 13448 total ops  |  Uptime: 150094 ms | 
156995 [Thread-0] INFO .MetricsTracker  - Read: 820.49 ops/sec (2.44 ms/op), 126037 total ops  |  Write: 91.59 ops/sec (21.84 ms/op), 13906 total ops  |  Uptime: 155095 ms | 
161996 [Thread-0] INFO .MetricsTracker  - Read: 818.89 ops/sec (2.44 ms/op), 130132 total ops  |  Write: 91.39 ops/sec (21.89 ms/op), 14363 total ops  |  Uptime: 160096 ms | 
166997 [Thread-0] INFO .MetricsTracker  - Read: 822.17 ops/sec (2.43 ms/op), 134244 total ops  |  Write: 91.17 ops/sec (21.89 ms/op), 14819 total ops  |  Uptime: 165097 ms | 
171997 [Thread-0] INFO .MetricsTracker  - Read: 822.32 ops/sec (2.43 ms/op), 138356 total ops  |  Write: 91.59 ops/sec (21.86 ms/op), 15277 total ops  |  Uptime: 170097 ms | 
176998 [Thread-0] INFO .MetricsTracker  - Read: 821.70 ops/sec (2.43 ms/op), 142465 total ops  |  Write: 91.39 ops/sec (21.85 ms/op), 15734 total ops  |  Uptime: 175098 ms | 
181998 [Thread-0] INFO .MetricsTracker  - Read: 836.72 ops/sec (2.39 ms/op), 146649 total ops  |  Write: 92.19 ops/sec (21.73 ms/op), 16195 total ops  |  Uptime: 180098 ms | 
186999 [Thread-0] INFO .MetricsTracker  - Read: 823.50 ops/sec (2.43 ms/op), 150767 total ops  |  Write: 91.39 ops/sec (21.84 ms/op), 16652 total ops  |  Uptime: 185099 ms | 
192000 [Thread-0] INFO .MetricsTracker  - Read: 826.37 ops/sec (2.42 ms/op), 154900 total ops  |  Write: 91.78 ops/sec (21.78 ms/op), 17111 total ops  |  Uptime: 190100 ms | 
197001 [Thread-0] INFO .MetricsTracker  - Read: 820.52 ops/sec (2.44 ms/op), 159003 total ops  |  Write: 91.79 ops/sec (21.80 ms/op), 17570 total ops  |  Uptime: 195101 ms | 
202002 [Thread-0] INFO .MetricsTracker  - Read: 795.91 ops/sec (2.51 ms/op), 162983 total ops  |  Write: 91.58 ops/sec (21.85 ms/op), 18028 total ops  |  Uptime: 200102 ms | 
207002 [Thread-0] INFO .MetricsTracker  - Read: 828.25 ops/sec (2.41 ms/op), 167125 total ops  |  Write: 91.99 ops/sec (21.73 ms/op), 18488 total ops  |  Uptime: 205102 ms | 
212004 [Thread-0] INFO .MetricsTracker  - Read: 829.47 ops/sec (2.41 ms/op), 171273 total ops  |  Write: 92.39 ops/sec (21.63 ms/op), 18950 total ops  |  Uptime: 210104 ms | 
217006 [Thread-0] INFO .MetricsTracker  - Read: 810.48 ops/sec (2.47 ms/op), 175327 total ops  |  Write: 92.35 ops/sec (21.66 ms/op), 19412 total ops  |  Uptime: 215106 ms | 
222006 [Thread-0] INFO .MetricsTracker  - Read: 823.96 ops/sec (2.43 ms/op), 179448 total ops  |  Write: 92.38 ops/sec (21.60 ms/op), 19874 total ops  |  Uptime: 220106 ms | 
227007 [Thread-0] INFO .MetricsTracker  - Read: 824.51 ops/sec (2.42 ms/op), 183571 total ops  |  Write: 91.99 ops/sec (21.73 ms/op), 20334 total ops  |  Uptime: 225107 ms | 

Here are my observations:

  • I expected reads to take around 3 milliseconds since all reads are local, and the observed ~2.4 ms/op confirms it. This is a significant advantage of Optimistic Concurrency Control (OCC) combined with Multi-Version Concurrency Control (MVCC): reads are served from a local MVCC snapshot and don't require checking the current global state.
  • The ~22 milliseconds for writes is more than twice the RTT to the closest region. To synchronize the journal, the RTT should only be incurred once, at commit.
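To put numbers on this, the latencies can be extracted directly from a MetricsTracker line and compared to the RTT. A quick awk sketch (the log line is copied from the last iteration above, and 9.04 ms is the us-east-1 to us-east-2 RTT from the table at the top; this is a rough illustration, not part of the workload generator):

```shell
# Extract read/write ms/op from a yb-sample-apps MetricsTracker line
# and compare the write latency to the RTT to the closest linked region.
line='227007 [Thread-0] INFO .MetricsTracker  - Read: 824.51 ops/sec (2.42 ms/op), 183571 total ops  |  Write: 91.99 ops/sec (21.73 ms/op), 20334 total ops  |  Uptime: 225107 ms | '
echo "$line" | awk -F'[()]' '{
  split($2, r, " "); split($4, w, " ")   # $2 = "2.42 ms/op", $4 = "21.73 ms/op"
  rtt = 9.04                             # P99 RTT us-east-1 -> us-east-2 (ms)
  printf "read %.2f ms, write %.2f ms, write/RTT = %.1f\n", r[1], w[1], w[1]/rtt
}'
```

This reports a write latency of about 2.4 RTTs, which is what the observation above questions.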

Comparing SqlInserts on YugabyteDB

To compare, I created a YugabyteDB cluster spanning the same regions.

The main architectural differences are:

| Operation | YugabyteDB | Aurora DSQL |
|---|---|---|
| Read | from Raft leader (batched) | from local region |
| Write | to Raft leader (buffered) | to local region |
| Flush | when required by read after write | - |
| Commit | to transaction table Raft leader | multi-region sync + wait |

This is a simple case where the Raft leader is in the region I connect to (leader preference), no command requires a write flush before commit (no PL/pgSQL), and each transaction is single-shard (no secondary index, no foreign key).

I ran the same workload, connected to the node where the tablet leaders are:

yugabyte=> \! java -jar yb-sample-apps.jar --workload SqlInserts --nodes $PGHOST:$PGPORT --default_postgres_database $PGDATABASE --username $PGUSER --password $PGPASSWORD --load_balance false

0 [main] INFO com.yugabyte.sample.Main  - Starting sample app...
99 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Using a randomly generated UUID : 91638708-a81f-4912-bc68-6b62f9011f59
107 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - App: SqlInserts
108 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Run time (seconds): -1
109 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Adding node: pub-us-east-1.ef0a6b90-cc2c-43f4-a2c4-aa8083a9eb3c.cloudportal.yugabyte.com:5433
110 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Num reader threads: 2, num writer threads: 2
111 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Default postgres database: yugabyte
111 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Num unique keys to insert: 2000000
112 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Num keys to update: 0
112 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Num keys to read: 1500000
113 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Value size: 0
113 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Restrict values to ASCII strings: false
113 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Perform sanity check at end of app run: false
114 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Table TTL (secs): -1
114 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Local reads: false
114 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - Read only load: false
116 [main] INFO com.yugabyte.sample.common.CmdLineOpts  - SqlInserts workload: using driver set for load-balance = false
1547 [main] INFO com.yugabyte.sample.apps.SqlInserts  - Created table: postgresqlkeyvalue
6637 [Thread-0] INFO .MetricsTracker  - Read: 932.17 ops/sec (2.03 ms/op), 4735 total ops  |  Write: 96.16 ops/sec (20.49 ms/op), 481 total ops  |  Uptime: 5084 ms | 
11638 [Thread-0] INFO .MetricsTracker  - Read: 1140.09 ops/sec (1.75 ms/op), 10438 total ops  |  Write: 100.97 ops/sec (19.75 ms/op), 986 total ops  |  Uptime: 10085 ms | 
16638 [Thread-0] INFO .MetricsTracker  - Read: 1171.65 ops/sec (1.70 ms/op), 16297 total ops  |  Write: 101.59 ops/sec (19.72 ms/op), 1494 total ops  |  Uptime: 15085 ms | 
21639 [Thread-0] INFO .MetricsTracker  - Read: 1178.69 ops/sec (1.69 ms/op), 22191 total ops  |  Write: 101.59 ops/sec (19.67 ms/op), 2002 total ops  |  Uptime: 20086 ms | 
26639 [Thread-0] INFO .MetricsTracker  - Read: 1175.48 ops/sec (1.70 ms/op), 28069 total ops  |  Write: 101.19 ops/sec (19.74 ms/op), 2508 total ops  |  Uptime: 25086 ms | 
31640 [Thread-0] INFO .MetricsTracker  - Read: 1120.66 ops/sec (1.78 ms/op), 33673 total ops  |  Write: 101.99 ops/sec (19.60 ms/op), 3018 total ops  |  Uptime: 30087 ms | 
36641 [Thread-0] INFO .MetricsTracker  - Read: 1146.84 ops/sec (1.74 ms/op), 39408 total ops  |  Write: 101.99 ops/sec (19.59 ms/op), 3528 total ops  |  Uptime: 35088 ms | 
41641 [Thread-0] INFO .MetricsTracker  - Read: 1165.69 ops/sec (1.71 ms/op), 45237 total ops  |  Write: 101.79 ops/sec (19.64 ms/op), 4037 total ops  |  Uptime: 40088 ms | 
46645 [Thread-0] INFO .MetricsTracker  - Read: 1154.05 ops/sec (1.73 ms/op), 51010 total ops  |  Write: 101.15 ops/sec (19.73 ms/op), 4543 total ops  |  Uptime: 45092 ms | 
51647 [Thread-0] INFO .MetricsTracker  - Read: 1176.62 ops/sec (1.70 ms/op), 56897 total ops  |  Write: 101.53 ops/sec (19.69 ms/op), 5051 total ops  |  Uptime: 50094 ms | 
56647 [Thread-0] INFO .MetricsTracker  - Read: 1196.48 ops/sec (1.67 ms/op), 62880 total ops  |  Write: 101.59 ops/sec (19.71 ms/op), 5559 total ops  |  Uptime: 55094 ms | 
61648 [Thread-0] INFO .MetricsTracker  - Read: 1166.65 ops/sec (1.71 ms/op), 68714 total ops  |  Write: 101.79 ops/sec (19.64 ms/op), 6068 total ops  |  Uptime: 60095 ms | 
66649 [Thread-0] INFO .MetricsTracker  - Read: 1154.48 ops/sec (1.73 ms/op), 74487 total ops  |  Write: 101.99 ops/sec (19.64 ms/op), 6578 total ops  |  Uptime: 65096 ms | 
71649 [Thread-0] INFO .MetricsTracker  - Read: 1166.65 ops/sec (1.71 ms/op), 80321 total ops  |  Write: 101.19 ops/sec (19.70 ms/op), 7084 total ops  |  Uptime: 70096 ms | 
76650 [Thread-0] INFO .MetricsTracker  - Read: 1154.47 ops/sec (1.73 ms/op), 86094 total ops  |  Write: 101.59 ops/sec (19.71 ms/op), 7592 total ops  |  Uptime: 75097 ms | 
81651 [Thread-0] INFO .MetricsTracker  - Read: 1141.67 ops/sec (1.75 ms/op), 91803 total ops  |  Write: 101.38 ops/sec (19.70 ms/op), 8099 total ops  |  Uptime: 80098 ms | 
86651 [Thread-0] INFO .MetricsTracker  - Read: 1129.17 ops/sec (1.77 ms/op), 97450 total ops  |  Write: 101.18 ops/sec (19.78 ms/op), 8605 total ops  |  Uptime: 85098 ms | 
91653 [Thread-0] INFO .MetricsTracker  - Read: 1117.13 ops/sec (1.79 ms/op), 103037 total ops  |  Write: 101.18 ops/sec (19.76 ms/op), 9111 total ops  |  Uptime: 90099 ms | 
96657 [Thread-0] INFO .MetricsTracker  - Read: 1153.56 ops/sec (1.73 ms/op), 108810 total ops  |  Write: 101.51 ops/sec (19.71 ms/op), 9619 total ops  |  Uptime: 95104 ms | 
101658 [Thread-0] INFO .MetricsTracker  - Read: 1163.38 ops/sec (1.72 ms/op), 114628 total ops  |  Write: 101.58 ops/sec (19.66 ms/op), 10127 total ops  |  Uptime: 100105 ms | 
106664 [Thread-0] INFO .MetricsTracker  - Read: 1164.87 ops/sec (1.71 ms/op), 120453 total ops  |  Write: 102.07 ops/sec (19.61 ms/op), 10638 total ops  |  Uptime: 105111 ms | 
111665 [Thread-0] INFO .MetricsTracker  - Read: 1150.93 ops/sec (1.74 ms/op), 126215 total ops  |  Write: 101.79 ops/sec (19.61 ms/op), 11147 total ops  |  Uptime: 110112 ms | 
116667 [Thread-0] INFO .MetricsTracker  - Read: 1150.36 ops/sec (1.74 ms/op), 131969 total ops  |  Write: 101.36 ops/sec (19.70 ms/op), 11654 total ops  |  Uptime: 115114 ms | 
121668 [Thread-0] INFO .MetricsTracker  - Read: 1143.93 ops/sec (1.75 ms/op), 137690 total ops  |  Write: 101.78 ops/sec (19.67 ms/op), 12163 total ops  |  Uptime: 120115 ms | 
126669 [Thread-0] INFO .MetricsTracker  - Read: 1154.86 ops/sec (1.73 ms/op), 143465 total ops  |  Write: 101.79 ops/sec (19.66 ms/op), 12672 total ops  |  Uptime: 125116 ms | 
131669 [Thread-0] INFO .MetricsTracker  - Read: 1155.07 ops/sec (1.73 ms/op), 149241 total ops  |  Write: 101.59 ops/sec (19.66 ms/op), 13180 total ops  |  Uptime: 130116 ms | 

Reads take less than two milliseconds because they are served by the Raft leader, and writes add the latency of reaching the Raft quorum, for both the write and the commit.
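The write-latency gap between the two systems can be quantified from the steady-state numbers in the two runs (21.73 ms/op on Aurora DSQL vs. 19.66 ms/op on YugabyteDB at similar elapsed times). A quick calculation, as a rough comparison rather than a controlled benchmark:

```shell
# Compare steady-state write latency from the two runs above
# (values copied from the MetricsTracker output)
awk 'BEGIN {
  dsql = 21.73   # ms/op, Aurora DSQL SqlInserts writes
  yb   = 19.66   # ms/op, YugabyteDB SqlInserts writes
  printf "Aurora DSQL writes add %.1f ms (%.0f%%) over YugabyteDB\n",
         dsql - yb, 100 * (dsql - yb) / yb
}'
```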

I also looked at the per-statement statistics from pg_stat_statements.

This scenario describes the simplest workload: inserting data from the node where the leader is located, without utilizing any secondary indexes. There is one Raft consensus for flushing the write intents and another for persisting the transaction status. Transactions in YugabyteDB are resilient to region failures, unlike Aurora DSQL, where transactions are rolled back in the event of a region failure.

In this series of blog posts, I will run various workloads and compare them to YugabyteDB out of curiosity, as it helps me understand the architecture. The core principle is similar: a distributed, horizontally scalable SQL database that delivers high performance, even in multi-region scenarios. However, several areas have trade-offs, including performance, availability, PostgreSQL compatibility, and multi-cloud capabilities.

PSQL and EXPLAIN ANALYZE with Aurora DSQL

To make it easier to reproduce, I run a similar insert from psql connected to the endpoint in the same region:

postgres=> \d postgresqlkeyvalue
       Table "public.postgresqlkeyvalue"
 Column | Type | Collation | Nullable | Default 
--------+------+-----------+----------+---------
 k      | text |           | not null | 
 v      | text |           |          | 
Indexes:
    "postgresqlkeyvalue_pkey" PRIMARY KEY, btree_index (k) INCLUDE (v)

postgres=> \timing on
Timing is on.

postgres=> select;
--
(1 row)

Time: 70.224 ms

postgres=> select;
--
(1 row)

Time: 2.505 ms

postgres=> explain (analyze, costs off) 
 insert into postgresqlkeyvalue values (1e6*random(),0)
\watch

               Sun 08 Dec 2024 08:57:23 PM UTC (every 2s)

                               QUERY PLAN                               
------------------------------------------------------------------------
 Insert on postgresqlkeyvalue (actual time=0.034..0.034 rows=0 loops=1)
   ->  Result (actual time=0.003..0.004 rows=1 loops=1)
 Planning Time: 0.054 ms
 Execution Time: 1.687 ms
(4 rows)

Time: 23.914 ms
               Sun 08 Dec 2024 08:57:25 PM UTC (every 2s)

                               QUERY PLAN                               
------------------------------------------------------------------------
 Insert on postgresqlkeyvalue (actual time=0.035..0.035 rows=0 loops=1)
   ->  Result (actual time=0.003..0.004 rows=1 loops=1)
 Planning Time: 0.053 ms
 Execution Time: 0.962 ms
(4 rows)

Time: 22.328 ms
               Sun 08 Dec 2024 08:57:27 PM UTC (every 2s)

                               QUERY PLAN                               
------------------------------------------------------------------------
 Insert on postgresqlkeyvalue (actual time=0.034..0.035 rows=0 loops=1)
   ->  Result (actual time=0.004..0.004 rows=1 loops=1)
 Planning Time: 0.053 ms
 Execution Time: 0.993 ms
(4 rows)

Time: 28.290 ms

As expected, the statement's execution time in Aurora DSQL is quick due to local reading and writing. The total elapsed time includes the commit, which synchronizes the Aurora DSQL journal with other regions (requiring one round-trip time, the RTT to the closest region), and a brief wait to ensure that subsequent reads can recognize the committed changes.
PostgreSQL doesn't provide EXPLAIN ANALYZE output for the commit phase. Still, given that Aurora DSQL does most of its work at commit, I hope AWS will offer more insight into what happens there (time to sync + time to wait) once more instrumentation arrives. Amazon Aurora DSQL is currently in preview, and many features are on the roadmap.
In this simple test, the read latency is in single-digit milliseconds, and the write latency is about two RTTs to the nearest region, paid at commit time.
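As a back-of-envelope check of that "about two RTTs" estimate (assuming the commit is one journal-sync RTT plus a post-commit wait, and using the numbers from the first \watch iteration above; the overhead also includes the client-to-endpoint network time):

```shell
# total \watch time 23.914 ms, server-side execution 1.687 ms,
# RTT to the closest linked region (us-east-2) 9.04 ms
awk 'BEGIN {
  total = 23.914; exec_ms = 1.687; rtt = 9.04
  printf "commit-time overhead: %.1f ms = %.1f RTT\n",
         total - exec_ms, (total - exec_ms) / rtt
}'
```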

Stay tuned for more posts in this series about Amazon Aurora DSQL. You can follow me on Twitter, Bluesky, or LinkedIn to receive updates on new posts.
