
Aliaksei Kirkouski

Basic PostgreSQL Configuration on Linux

From time to time I hear system administrators say that installing, configuring and supporting PostgreSQL on Linux is very complicated, and that it is much cheaper to buy Windows and Microsoft SQL Server licenses than to hire highly qualified administrators to run all these open-source systems.

Each of our business applications that uses PostgreSQL as its DBMS serves between 500 and 2,500 concurrent users. The applications implement almost all the main processes of retail chains. The databases currently range from 2 to 4TB in size. And all of them run with practically standard PostgreSQL settings on single servers without any clustering. Even the most heavily loaded servers still have a significant reserve of resources for further growth in workload without the need for clustering.

Yes, of course, much depends on the queries sent to the DBMS, and a few bad queries can bring down the whole server. However, the same is just as true of Oracle and MS SQL Server.

In this article, I will describe all the PostgreSQL (and a few OS) settings that we apply on our systems. In addition, we deliberately avoid changing settings that do not give a visible performance improvement, so that we never have to wonder why one environment has a problem and another does not.

Installation

I will not focus on the installation itself, as there are a million instructions on it on the Internet, but I will briefly describe it to show how easy it is.

Right now we use Debian 11 as the operating system for most of our customers.

The only recommendation we try to give to the client's system administrators when installing the operating system is to have a separate disk for the database (preferably even without LVM). This is convenient because then you can easily change the OS if necessary by simply connecting the disk with the database to another virtual machine.

After the OS is installed and SSH access is obtained, the installation is done as described on the official PostgreSQL site. Run the following commands in the console:

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install postgresql postgresql-contrib

These commands add the apt repository to the list, install the PostgreSQL program files, and create the database cluster.

By default, the database will be installed under the path /var/lib/postgresql/15/main, and the configuration files will be in the /etc/postgresql/15/main directory.
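
To check that the cluster was actually created and is running, Debian's postgresql-common package provides the pg_lsclusters utility, which lists every cluster on the machine with its version, status, port and data directory. A quick check (the cluster name 15-main assumes a default PostgreSQL 15 installation):

# list all clusters and their status
pg_lsclusters
# the Debian service for the default cluster
sudo systemctl status postgresql@15-main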

Configuration

All basic PostgreSQL settings are located in two files: postgresql.conf and pg_hba.conf. The first stores the settings of the database server itself, and the second controls access to it. You can change the settings by editing these files in any text editor.

After any parameter change, you need to tell the DBMS to re-read its configuration. Only a small portion of the parameters require restarting the PostgreSQL service (systemctl restart postgresql); most can be changed on the fly in several ways. I most often use psql for this. First, execute in bash:

su postgres
psql

And then, inside psql, run:

SELECT pg_reload_conf();
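
If you are not sure whether a particular parameter can be changed with a reload or needs a full restart, the pg_settings view shows this in its context column, and after a reload the pending_restart column marks values that will only take effect after a restart. A small query to run in psql:

-- context = 'postmaster' means the parameter only changes at server start
SELECT name, setting, context, pending_restart
FROM pg_settings
WHERE name IN ('shared_buffers', 'max_connections', 'work_mem');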

Main settings

The settings that can most significantly affect performance, and which should therefore be adjusted first, are the memory settings. By default, PostgreSQL is configured so that it can start on almost any machine without changing anything.

The first thing you should probably do is increase the max_connections parameter. The default is 100, which may not be enough if you have a large number of users. However, you should not set it too high either, as each additional connection slot has a cost. Since our platform creates a dedicated connection for each user, we usually set:

max_connections = <number of users> * 2
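
For example, for roughly 1000 concurrent users this rule gives the following value in postgresql.conf (the number itself is only an illustration):

max_connections = 2000

The actual number of open connections can be checked at any time in psql with SELECT count(*) FROM pg_stat_activity;.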

PostgreSQL does not work with data on disk directly. When it needs to read or write something, it loads the corresponding pages from disk into a block of memory called shared buffers. This is shared memory used by all connections at the same time. The larger these buffers are, the lower the load on the disk will be. For fine tuning you can analyze at runtime how exactly these buffers are used, but in practice we usually set them to 30-50% of all available memory on the server:

shared_buffers = 128GB
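
If you do want to analyze how the buffers are actually used, the pg_buffercache extension from postgresql-contrib can help. A rough sketch of a query showing which relations of the current database occupy the most space in shared buffers (it assumes the default 8kB page size):

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- top relations by space occupied in shared buffers
SELECT c.relname,
       count(*) * 8 / 1024 AS buffered_mb
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffered_mb DESC
LIMIT 10;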

In addition to this parameter, we usually configure the following ones right away:

temp_buffers = 32MB
work_mem = 32MB

These parameters specify how much memory can be used by each connection for internal operations when running queries and working with temporary tables. Since they determine how much memory each connection will consume, these parameters are best adjusted empirically at runtime depending on available memory. If memory has been sufficient for the last week, you can increase both parameters (and vice versa).
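
Both parameters can also be raised for a single session or transaction rather than globally, which is convenient for an occasional heavy report. A minimal example in psql:

-- only for the current session
SET work_mem = '256MB';
-- or only for the current transaction (must be used inside BEGIN ... COMMIT)
SET LOCAL work_mem = '256MB';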

There is one technique used automatically by the lsFusion platform to reduce memory consumption. Each connection to PostgreSQL is a separate process in the OS. As queries are executed, these processes are not always quick to give the used memory back to the OS. To deal with this, the platform closes active connections from time to time and reopens them. In this way, the process that consumes a lot of memory is terminated and a new one is created in its place. This significantly reduces the amount of private memory consumed by all user connections.

maintenance_work_mem = 2GB

This parameter is too small by default, and it is better to increase it to speed up maintenance operations such as VACUUM, CREATE INDEX and ALTER TABLE ADD FOREIGN KEY.

Additional settings

The settings described above are already sufficient for PostgreSQL to work well enough. Next, I will describe the settings we make to improve performance further. None of them gives a dramatic improvement on its own, but they can be useful in certain cases.

wal_level = minimal
synchronous_commit = off

If we don't plan to configure asynchronous replication, we usually lower wal_level. synchronous_commit is also disabled, since we are not writing banking systems. If the server goes down (which happens very rarely), it makes no real difference whether the user got a successful save message or not, but all transactions will run a bit faster.
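
One caveat: starting with PostgreSQL 10 the default max_wal_senders is greater than zero, and the server refuses to start with wal_level = minimal while WAL senders are still allowed, so both parameters have to be changed together:

wal_level = minimal
max_wal_senders = 0      # required when wal_level = minimal
synchronous_commit = off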

checkpoint_timeout = 20min
max_wal_size = 16GB

Under heavy load, some of our clients' databases manage to write 1GB of WAL per minute. With the default max_wal_size of 1GB, checkpoints would occur about once a minute, which is not good (especially with full_page_writes enabled). That is why we usually increase the value so that checkpoints occur once every 20 minutes. Accordingly, the disk load decreases a bit. Yes, recovery after a crash will take longer, but crashes happen very rarely.
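
Whether checkpoints are really triggered by the timeout rather than by max_wal_size being exceeded can be seen in the pg_stat_bgwriter view (on PostgreSQL 15; in newer versions these counters moved to pg_stat_checkpointer):

-- checkpoints_req growing faster than checkpoints_timed means max_wal_size is still too small
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;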

seq_page_cost = 0.1
random_page_cost = 0.1

cpu_tuple_cost = 0.05
cpu_index_tuple_cost = 0.05
cpu_operator_cost = 0.01

Usually we significantly lower the disk costs (compared to the defaults) and in turn increase the cost of CPU operations. This is done because PostgreSQL's defaults were originally tuned for slow HDDs. We always use SSDs in RAID arrays, where reads are much cheaper and random read/write is not much slower than sequential.

The only thing is that we try to use identical cost settings everywhere so that query execution plans are the same. Otherwise, everything may work fast in the test environment, while the production environment picks a different plan that is much slower.

It should be noted here that changes to PostgreSQL parameters do not always lead to the expected result in query plans. We had a situation where simply increasing work_mem made a query run for 2 hours instead of 20 minutes: the plan switched to a hash join with a preliminary seq scan of the entire table, which had to be read from disk. Here lies one of the main problems of query planning in PostgreSQL: plans do not take into account which data is in shared buffers and which is not. And often it is much cheaper to scan the data that is already in the cache (even if there is much more of it) than to read a smaller volume from disk.
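
For this reason, when a plan suddenly degrades it is worth looking not only at the estimated costs but also at the actual buffer usage. EXPLAIN with the ANALYZE and BUFFERS options shows how many pages were found in shared buffers and how many had to be read from disk; the table name here is just an illustration:

-- 'shared hit' = pages found in shared buffers, 'read' = pages fetched from the OS/disk
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM orders;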

External access

If the application server is not located on the same machine as PostgreSQL, you need to allow connections from that server. By default, PostgreSQL accepts only local connections for security reasons. To accept connections from outside, set the following parameter in postgresql.conf:

listen_addresses = '*'

After that you will need to restart the PostgreSQL service.

Next, add to pg_hba.conf the IP address from which connections should be accepted (namely the address of the application server):

host all all 192.168.1.22/32 trust

Instead of trust, use scram-sha-256 if password authentication is required.
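
Unlike listen_addresses, changes to pg_hba.conf only need a configuration reload (SELECT pg_reload_conf(); as shown above). After that, connectivity can be verified from the application server; the database server address below is just an example:

# run from the application server; replace the host with your database server's address
psql -h 192.168.1.21 -U postgres -d postgres -c "SELECT version();"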

Additional Linux settings

In addition to the PostgreSQL settings described earlier, on memory intensive servers we often change a few other settings in Debian itself.

First, the following parameters are set in /etc/sysctl.conf:

vm.min_free_kbytes = 4194304
vm.swappiness = 1

The first parameter sets the minimum amount of free memory that the OS will try to keep. This helps avoid memory fragmentation and high system time in certain cases. Swappiness should be set to 1, as swapping would be very harmful for the database, and 0 is generally not recommended (although I did not notice any difference in behavior between 0 and 1).
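
The new values can be applied without a reboot:

# re-read /etc/sysctl.conf and apply the parameters immediately
sudo sysctl -p
# verify the current values
sysctl vm.min_free_kbytes vm.swappiness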

Next, in /etc/fstab, add the options noatime,nodiratime when mounting the disk with the database. It is a small thing, but it cannot hurt. For example:

/dev/sdb /data xfs defaults,noatime,nodiratime 0 0
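
If the disk is already mounted, the new options can be applied without rebooting (assuming the mount point /data from the example above):

sudo mount -o remount,noatime,nodiratime /data
# verify the active mount options
findmnt /data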

On servers with a lot of memory we also usually configure huge pages. To do this, first disable THP and then reserve a fixed number of pages corresponding to the size of shared buffers. In the /etc/sysctl.conf file add:

vm.nr_hugepages = 67502 # (<shared_buffers> / 2MB) + 3% - for 128GB shared buffers
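
On PostgreSQL 15 the exact number of huge pages required can be asked from the server itself, and it is also worth telling PostgreSQL to fail at startup if the pages cannot be allocated. A sketch, assuming the default Debian paths (run the postgres -C command while the server is stopped):

# one common way to disable THP until the next reboot
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled

# ask PostgreSQL how many 2MB huge pages its shared memory needs
sudo -u postgres /usr/lib/postgresql/15/bin/postgres -C shared_memory_size_in_huge_pages -c config_file=/etc/postgresql/15/main/postgresql.conf

# in postgresql.conf: refuse to start if huge pages cannot be allocated
huge_pages = on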

Finally, since we use high-performance SSDs, we usually switch the I/O scheduler off by enabling noop or none mode. There are many ways to do this, but usually we just create a systemd service:

[Unit]
Description=Change scheduler

[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'noop' > /sys/block/sdb/queue/scheduler" 

[Install]
WantedBy=multi-user.target
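
After saving the unit (for example as /etc/systemd/system/scheduler.service; the name is arbitrary) it has to be enabled. Note that on recent multi-queue kernels the valid value is none rather than noop, so check what the kernel offers first:

# the active scheduler is shown in brackets, e.g. [none] mq-deadline
cat /sys/block/sdb/queue/scheduler

sudo systemctl daemon-reload
sudo systemctl enable --now scheduler.service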

Server Resources

In conclusion, I would like to write a few words about the hardware used for PostgreSQL. Although virtualization is usually used, the PostgreSQL virtual machine is the only one on its physical server.

For example, one of our customers uses a server with two Intel Gold processors with 24 cores each (giving 96 virtual cores) and 256GB of memory. The server is directly connected via PCI Express to 4 NVMe disks of 3TB each, built into a software RAID-10 (via LVM) of about 5.8TB. The database there currently occupies about 3TB and serves about 1,000 concurrent users.

This configuration provides both very fast disk operations and a large number of CPU cores. In particular, the CPU utilization graph on this server looks as follows:

[CPU utilization graph from Zabbix]

At peak times, the read speed on this server reaches 1.5GB/s without a significant increase in wait time:

[Disk read throughput captured with dstat]

This performance margin should be enough for the number of users to grow 2-3 times before clustering becomes necessary.
