Dev Team for Timescale

Originally published at timescale.com

13 Tips to Improve PostgreSQL Insert Performance

Ingest performance is critical for many common PostgreSQL use cases, including application monitoring, application analytics, IoT monitoring, and more. These use cases have something in common: unlike standard relational "business" data, changes are treated as inserts, not overwrites. In other words, every new value becomes a new row in the database instead of replacing the row's prior value with the latest one.

If you're operating in a scenario where you need to retain all data vs. overwriting past values, optimizing the speed at which your database can ingest new data becomes essential.

At Timescale, we have a lot of experience optimizing database performance, so in this article, we will look at PostgreSQL inserts and how to improve their performance. We'll cover the following:

  • Useful tips for improving PostgreSQL insert performance, in general, such as moderating your use of indexes, reconsidering foreign key constraints, avoiding unnecessary UNIQUE keys, using separate disks for WAL (Write-Ahead Logging) and data, and deploying on performant disks. Each of these strategies can help optimize the speed at which your database ingests new data.

  • TimescaleDB-specific insert performance tips (Timescale works like PostgreSQL under the hood).

PostgreSQL Insert Overview

One of PostgreSQL's fundamental commands, the INSERT operation plays a crucial role in adding new data to a database. It adds one or more rows to a table, filling each column with specified data. When certain columns are not specified in the insert query, PostgreSQL automatically fills these columns with their default values, if any are defined. This feature ensures that the database maintains integrity and consistency, even when not all column values are provided.

The INSERT operation is fundamental for data ingest processes, where new data is continually added to the database. It allows for the efficient and organized storage of new information, making it accessible for querying and analysis.

Here’s a simple example of an INSERT query:


INSERT INTO employees (name, position, department)
VALUES ('John Doe', 'Software Engineer', 'Development');

In this example, the INSERT INTO statement specifies the table employees to which the row will be added. The columns name, position, and department are explicitly mentioned, indicating where the provided data should be inserted.

Following the VALUES keyword, the actual data to be inserted into these columns is provided in parentheses. If the employees table contains other columns for which default values are defined and are not included in the INSERT statement, PostgreSQL will automatically fill those columns with the default values.
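
For instance, if the employees table had a column with a default value (hire_date here is hypothetical, added only for illustration), you could omit it from the INSERT and PostgreSQL would fill it in:

-- Hypothetical column with a default value
ALTER TABLE employees ADD COLUMN hire_date DATE DEFAULT CURRENT_DATE;

-- hire_date is omitted, so PostgreSQL fills it with CURRENT_DATE
INSERT INTO employees (name, position, department)
VALUES ('Jane Doe', 'Data Engineer', 'Analytics');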

When Insert Performance Matters

The speed at which data can be ingested into a database directly impacts its utility and responsiveness, especially when reaction speed in real-time or near-real-time data processing is essential.

One prominent example of such a use case is time-series data management. Time-series data, characterized by its sequential nature, accumulates moment by moment, often originating from sensors, financial transactions, or user activity logs.

The value of time-series data lies in its timeliness and the insights that can be gleaned from analyzing patterns over time. To maintain the integrity and relevance of these insights, insert performance must be optimized to ensure data is updated consistently and without delay. High insert performance allows for the seamless integration of new data, preserving the chronological order and enabling accurate real-time analysis.

Application monitoring represents another critical area where insert performance is paramount. Effective monitoring systems rely on continuously ingesting application metrics and logs to provide an up-to-date view of the application's health and performance. Any lag in data ingest can lead to delays in detecting and responding to issues, potentially affecting user experience and system stability. Strong insert performance ensures that monitoring systems remain current, allowing for immediate action in response to any anomalies detected.

Event detection applications, such as fraud detection systems, also underscore the importance of fast insert speeds. In these scenarios, the ability to rapidly ingest and process data can mean the difference between catching a fraudulent transaction as it happens or missing it entirely.

Fast data ingest enables these systems to analyze events in real time, applying algorithms to detect suspicious patterns and react promptly. The reaction speed is crucial in minimizing risk and protecting assets, highlighting the critical role of insert performance in maintaining system efficacy.

Improving Insert Performance

The previous use cases stress the critical role of ingest speed in real-time or high-volume databases, such as those handling time series. These use cases make up most of our customer base here at Timescale, so we're confident in recommending these five best practices for improving ingest performance in vanilla PostgreSQL:

1. Use indexes in moderation

Having the right indexes can speed up your queries, but they’re not a silver bullet. Incrementally maintaining indexes with each new row requires additional work. Check the number of indexes you’ve defined on your table (use the psql command \d table_name), and determine whether their potential query benefits outweigh the storage and insert overhead. Since every system is different, there aren’t any hard and fast rules or “magic number” of indexes—just be reasonable.
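
Beyond \d table_name, a quick way to gauge that trade-off is to check each index's size and how often it is actually scanned (this sketch uses the employees example table from above; any table name works):

SELECT indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan AS times_used
FROM pg_stat_user_indexes
WHERE relname = 'employees';

Indexes that are large but rarely scanned are good candidates for removal.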

2. Reconsider foreign key constraints

Sometimes, it's necessary to build foreign keys (FK) from one table to other relational tables. When you have an FK constraint, every INSERT will typically need to read from your referenced table, which can degrade performance. Consider denormalizing your data—we sometimes see pretty extreme use of FK constraints from a sense of “elegance” rather than engineering trade-offs.
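
As a rough sketch (the tables and columns here are hypothetical, and devices is assumed to exist), the first schema forces a lookup against devices on every insert, while the denormalized one does not:

-- With a foreign key: every INSERT checks the referenced devices table
CREATE TABLE readings_fk (
  time      TIMESTAMPTZ NOT NULL,
  device_id INTEGER     NOT NULL REFERENCES devices (id),
  value     DOUBLE PRECISION
);

-- Denormalized: store the identifier directly and skip the reference check
CREATE TABLE readings (
  time        TIMESTAMPTZ NOT NULL,
  device_name TEXT        NOT NULL,
  value       DOUBLE PRECISION
);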

3. Avoid unnecessary UNIQUE keys

Developers are often trained to specify primary keys in database tables, and many ORMs love them. Yet, many use cases—including common monitoring or time-series applications—don't require them, as each event or sensor reading can simply be logged by inserting it at the tail of the table (or, in TimescaleDB, the hypertable's current chunk) at write time.

If a UNIQUE constraint is otherwise defined, that insert can necessitate an index lookup to determine if the row already exists, which will adversely impact the speed of your INSERT.
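
For example, a raw readings table like this hypothetical one has no primary key or UNIQUE constraint at all, so inserts never pay for a uniqueness check:

-- No PRIMARY KEY or UNIQUE constraint: each reading is simply appended
CREATE TABLE sensor_readings (
  time      TIMESTAMPTZ NOT NULL,
  sensor_id INTEGER     NOT NULL,
  reading   DOUBLE PRECISION
);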

4. Use separate disks for WAL and data

While this is a more advanced optimization that isn't always needed, if your disk becomes a bottleneck, you can further increase throughput by using a separate disk (tablespace) for the database's WAL and data.
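
The data side of this is typically handled with a tablespace on the second disk; the WAL location is set when the cluster is initialized (for example, with initdb --waldir) or by relocating the pg_wal directory. A minimal sketch, assuming a hypothetical second disk mounted at /mnt/disk2:

-- Create a tablespace on the second disk and place a table there
CREATE TABLESPACE data2 LOCATION '/mnt/disk2/pgdata';

CREATE TABLE app_metrics (
  time   TIMESTAMPTZ NOT NULL,
  metric TEXT        NOT NULL,
  value  DOUBLE PRECISION
) TABLESPACE data2;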

5. Use performant disks

Sometimes developers deploy their database in environments with slower disks, whether due to poorly performing HDDs, remote storage area networks (SANs), or other configurations. Because inserted rows are durably written to the WAL before the transaction completes, slow disks can directly limit insert performance. One thing to do is check your disk IOPS using the ioping command.

Read test:

$ ioping -q -c 10 -s 8k .
--- . (hfs /dev/disk1 930.7 GiB) ioping statistics ---
9 requests completed in 208 us, 72 KiB read, 43.3 k iops, 338.0 MiB/s
generated 10 requests in 9.00 s, 80 KiB, 1 iops, 8.88 KiB/s
min/avg/max/mdev = 18 us / 23.1 us / 35 us / 6.17 us

Write test:

$ ioping -q -c 10 -s 8k -W .
--- . (hfs /dev/disk1 930.7 GiB) ioping statistics ---
9 requests completed in 10.8 ms, 72 KiB written, 830 iops, 6.49 MiB/s
generated 10 requests in 9.00 s, 80 KiB, 1 iops, 8.89 KiB/s
min/avg/max/mdev = 99 us / 1.20 ms / 2.23 ms / 919.3 us

You should see at least thousands of read IOPS and many hundreds of write IOPS. If you are seeing far fewer, your disk hardware is likely affecting your INSERT performance. See if alternative storage configurations are feasible.

Using TimescaleDB to Improve Ingest Performance

TimescaleDB is built to improve query and ingest performance in PostgreSQL.

The most common uses for TimescaleDB involve storing massive amounts of data for cloud infrastructure metrics, product analytics, web analytics, IoT devices, and many use cases involving large PostgreSQL tables. The ideal Timescale scenarios are time-centric, almost solely append-only (lots of INSERTs), and require fast ingestion of large amounts of data within small time windows.

Here are eight more techniques for improving ingest performance with TimescaleDB:

6. Use parallel writes

Each INSERT or COPY command to TimescaleDB (as in PostgreSQL) is executed as a single transaction and thus runs in a single-threaded fashion. To achieve higher ingest, you should execute multiple INSERT or COPY commands in parallel.

For help with bulk loading large CSV files in parallel, check out TimescaleDB's parallel copy command.

Pro tip: make sure your client machine has enough cores to execute this parallelism (running 32 client workers on a 2 vCPU machine doesn’t help much — the workers won’t actually be executed in parallel).

7. Insert rows in batches

To achieve higher ingest rates, you should insert your data with many rows in each INSERT call (or else use some bulk insert command, like COPY or our parallel copy tool).

Don't insert your data row-by-row—instead, try at least hundreds (or thousands) of rows per insert. This allows the database to spend less time on connection management, transaction overhead, SQL parsing, etc., and more time on data processing.
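
As a sketch, reusing the hypothetical sensor_readings table from tip 3, a batched INSERT and a COPY look like this (the file path is just a placeholder):

-- One statement, many rows (aim for hundreds or thousands per statement)
INSERT INTO sensor_readings (time, sensor_id, reading) VALUES
  ('2024-01-01 00:00:00+00', 1, 21.4),
  ('2024-01-01 00:00:01+00', 2, 19.8),
  ('2024-01-01 00:00:02+00', 3, 22.1);

-- Or bulk load from a CSV file (server-side path; use \copy in psql for a client-side file)
COPY sensor_readings (time, sensor_id, reading)
FROM '/path/to/readings.csv' WITH (FORMAT csv);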

8. Properly configure shared_buffers

We typically recommend setting shared_buffers to 25% of available RAM. If you install TimescaleDB via a method that runs timescaledb-tune, it should automatically configure shared_buffers to something well suited to your hardware specs.

Note: in some cases, typically with virtualization and constrained cgroups memory allocation, these automatically configured settings may not be ideal. To check that shared_buffers is set within the 25% range, run SHOW shared_buffers from your psql connection.
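
For example, to check the current value and adjust it yourself (the 8GB figure assumes a hypothetical machine with 32 GB of RAM):

SHOW shared_buffers;

-- Roughly 25% of RAM on a 32 GB machine; takes effect after a server restart
ALTER SYSTEM SET shared_buffers = '8GB';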

9. Run our Docker images on Linux hosts

If you are running a TimescaleDB Docker container (which runs Linux) on top of another Linux operating system, you're in great shape. The container is basically providing process isolation, and the overhead is extremely minimal.

If you're running the container on a Mac or Windows machine, you'll see some performance hits for the OS virtualization, including for I/O.

If you need to run on Mac or Windows, we recommend installing TimescaleDB directly rather than using a Docker image.

10. Avoid too many or too small chunks

We don't currently recommend using space partitioning. But if you do, remember that one chunk is created per space partition for every time interval.

So, if you create 64 space partitions and daily chunks, you'll have 23,360 chunks per year. This may lead to a bigger performance hit at query time (due to planning overhead) than at insert time, but it's something to consider nonetheless.

Another thing to avoid is using an incorrect integer value when you specify the time interval range in create_hypertable.

Pro tip:

  • If your time column uses a native timestamp type, then any integer value should be in terms of microseconds (so one day = 86400000000). We recommend using interval types ('1 day') to avoid the potential for any confusion (see the example after the query output below).

  • If your time column is an integer or bigint itself, use the appropriate range: if the integer timestamp is in seconds, use 86400; if the bigint timestamp is in nanoseconds, use 86400000000000.

  • In both cases, you can use chunk_relation_size_pretty to make sure your chunk sizes or partition ranges seem reasonable:

=> SELECT chunk_table, ranges, total_size
FROM chunk_relation_size_pretty('hypertable_name')
ORDER BY ranges DESC LIMIT 4;
chunk_table               |                         ranges                          | total_size
-----------------------------------------+---------------------------------------------------------+------------
_timescaledb_internal._hyper_1_96_chunk | {"['2020-02-13 23:00:00+00','2020-02-14 00:00:00+00')"} | 272 MB
_timescaledb_internal._hyper_1_95_chunk | {"['2020-02-13 22:00:00+00','2020-02-13 23:00:00+00')"} | 500 MB
_timescaledb_internal._hyper_1_94_chunk | {"['2020-02-13 21:30:00+00','2020-02-13 22:00:00+00')"} | 500 MB
_timescaledb_internal._hyper_1_93_chunk | {"['2020-02-13 20:00:00+00','2020-02-13 21:00:00+00')"} | 500 MB
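
As the pro tip above recommends, here is a minimal sketch of creating daily chunks with an interval type rather than a raw integer, using the classic create_hypertable interface (table and column names are just examples):

-- One chunk per day, expressed as an interval instead of 86400000000 microseconds
SELECT create_hypertable('conditions', 'time', chunk_time_interval => INTERVAL '1 day');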

11. Avoid “too large” chunks

To maintain higher ingest rates, you want your latest chunk and all its associated indexes to stay in memory so that writes to the chunk and index updates merely update memory. (The write is still durable, as inserts are written to the WAL on disk before the database pages are updated.)

If your chunks are too large, then writes to even the latest chunk will start swapping to disk.

As a rule of thumb, we recommend that the latest chunks and all their indexes fit comfortably within the database's shared_buffers. You can check your chunk sizes via the chunk_relation_size_pretty SQL command.

=> SELECT chunk_table, table_size, index_size, toast_size, total_size
FROM chunk_relation_size_pretty('hypertable_name')
ORDER BY ranges DESC LIMIT 4;
chunk_table               | table_size | index_size | toast_size | total_size
-----------------------------------------+------------+------------+------------+------------
_timescaledb_internal._hyper_1_96_chunk | 200 MB     | 64 MB      | 8192 bytes | 272 MB
_timescaledb_internal._hyper_1_95_chunk | 388 MB     | 108 MB     | 8192 bytes | 500 MB
_timescaledb_internal._hyper_1_94_chunk | 388 MB     | 108 MB     | 8192 bytes | 500 MB
_timescaledb_internal._hyper_1_93_chunk | 388 MB     | 108 MB     | 8192 bytes | 500 MB

If your chunks are too large, you can update the range for future chunks via the set_chunk_time_interval command. However, this does not modify the range of existing chunks (e.g., by rewriting large chunks into multiple small chunks).
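
For example (the interval value here is only illustrative):

-- Applies to chunks created from this point on; existing chunks are unchanged
SELECT set_chunk_time_interval('hypertable_name', INTERVAL '12 hours');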

For configurations where individual chunks are much larger than your available memory, we recommend dumping and reloading your hypertable data to properly sized chunks.

Keeping the latest chunks in memory applies to all active hypertables: if you are actively writing to two hypertables, the latest chunks from both should fit within shared_buffers.

12. Write data in loose time order

When chunks are sized appropriately (see #10 and #11), the latest chunk(s) and their associated indexes are naturally maintained in memory. New rows inserted with recent timestamps will be written to these chunks and indexes already in memory.

If a row with a sufficiently older timestamp is inserted—i.e., it's an out-of-order or backfilled write—the disk pages corresponding to the older chunk (and its indexes) will need to be read in from disk. This will significantly increase write latency and lower insert throughput.

Particularly, when you are loading data for the first time, try to load data in sorted, increasing timestamp order.
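
A minimal sketch of a sorted initial load, assuming a hypothetical staging table holding the unsorted raw data:

-- Backfill the hypertable in increasing timestamp order across all servers
INSERT INTO metrics (time, server_id, value)
SELECT time, server_id, value
FROM staging_metrics
ORDER BY time;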

Be careful if you're bulk-loading data about many different servers, devices, and so forth:

  • Do not bulk insert data sequentially by server (i.e., all data for server A, then server B, then C, and so forth). This will cause disk thrashing as loading each server will walk through all chunks before starting anew.
  • Instead, arrange your bulk load so that data from all servers are inserted in loose timestamp order (e.g., day 1 across all servers in parallel, then day 2 across all servers in parallel, etc.)

13. Watch row width

The overhead of inserting a wide row (say, 50, 100, or 250 columns) is much higher than inserting a narrower row: more network I/O, more parsing and data processing, larger writes to the WAL, and so on. Most of our published benchmarks use TSBS, which has 12 columns per row, so you'll correspondingly see lower insert rates if you have very wide rows.

If you are considering very wide rows because you have different types of records, and each type has a disjoint set of columns, you might want to try using multiple hypertables (one per record type)—particularly if you don't often query across these types.

Additionally, JSONB records are another good option if virtually all columns are sparse. That said, if you're using sparse wide rows, use NULLs for missing values whenever possible rather than default values, as NULLs are much cheaper to store and query.
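
A sketch of the JSONB approach for sparse attributes (the table is hypothetical):

-- A few fixed, narrow columns plus a JSONB column for sparse per-event attributes
CREATE TABLE events (
  time       TIMESTAMPTZ NOT NULL,
  event_type TEXT        NOT NULL,
  attributes JSONB
);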

Finally, the cost of wide rows is actually much less once you compress rows using TimescaleDB’s native compression. Rows are converted into more columnar compressed form, sparse columns compress extremely well, and compressed columns aren’t read from disk for queries that don’t fetch individual columns.

Summary

If ingest performance is critical to your use case, consider using TimescaleDB. You can get started with Timescale for free today or download TimescaleDB to your own hardware.

Our approach to support is to address your whole solution, so we're here to help you achieve your desired performance results (see more details about our Support team and ethos).

Lastly, our Slack community is a great place to connect with 8,000+ other developers with similar use cases, as well as Timescale engineers, product team members, and developer advocates.

timescale / timescaledb on GitHub

TimescaleDB is an open-source PostgreSQL extension for high-performance real-time analytics on time-series and event data, optimized for fast ingest and complex queries.

Install TimescaleDB

Install from a Docker container:

  1. Run the TimescaleDB container:

    docker run -d --name timescaledb -p 5432:5432 -e POSTGRES_PASSWORD=password timescale/timescaledb:latest-pg17
  2. Connect to a database:

    docker exec -it timescaledb psql -d "postgres://postgres:password@localhost/postgres"

See other installation options or try Timescale Cloud for free.

Create a hypertable

You create a regular table and then convert it into a hypertable. A hypertable automatically partitions data into chunks based on your configuration.

-- Create timescaledb extension
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Create a regular SQL table
CREATE TABLE conditions (
  time        TIMESTAMPTZ       NOT NULL,
  location    TEXT              NOT NULL,
  temperature DOUBLE PRECISION  NULL,
  humidity    DOUBLE PRECISION  NULL
);

-- Convert the table into a hypertable that is partitioned by time
SELECT create_hypertable('conditions', by_range('time'));

