HOWTO : Performance tuning for PostgreSQL on Ubuntu/Debian

Step 1 :

Edit postgresql.conf.

sudo nano /etc/postgresql/8.3/main/postgresql.conf

Step 2 :

The main performance tuning settings are as follows :

(1) shared_buffers

Recommended : 0.25 * Available Memory

(2) work_mem

Recommended : Available Memory / max_connections
(If your queries tend to be more complicated, then divide that by 2. If you typically run very close to max_connections connections, then consider dividing by 2 again. If that gives you a number that isn’t at least 16MB, buy more memory.)

(3) maintenance_work_mem

Recommended : Available Memory / 8

(4) wal_buffers

Recommended : 8MB

(5) checkpoint_segments

Recommended : 16 to 128

(6) effective_cache_size

Recommended : Available Memory * 0.75

(7) cpu_tuple_cost

Recommended : 0.0030

(8) cpu_index_tuple_cost

Recommended : 0.0010

(9) cpu_operator_cost

Recommended : 0.0005

(10) fsync

Recommended : off

Warning : If “fsync” is set to “off”, you may encounter data loss or database corruption on a power failure unless you have a battery backup unit on your hardware RAID card.

(11) max_connections

Recommended : 1.4 * your average client count (e.g. an average of 100 clients means max_connections = 140)

(12) checkpoint_timeout

Recommended : 1h
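
The memory-related rules above can be sketched as a small shell calculation. This is only an illustration : the 8GB available-memory figure and the 140 connections are assumptions matching Reference #1 below, and the script applies the base formulas only, without the optional work_mem divisions (Reference #1 itself settles on a lower work_mem after those adjustments).

```shell
# Sketch : derive the memory-related recommendations above.
# AVAILABLE_MB and MAX_CONNECTIONS are assumptions matching Reference #1.
AVAILABLE_MB=8192
MAX_CONNECTIONS=140

SHARED_BUFFERS=$((AVAILABLE_MB / 4))              # (1) 0.25 * available memory
WORK_MEM=$((AVAILABLE_MB / MAX_CONNECTIONS))      # (2) before the optional /2 steps
MAINTENANCE_WORK_MEM=$((AVAILABLE_MB / 8))        # (3) available memory / 8
EFFECTIVE_CACHE_SIZE=$((AVAILABLE_MB * 3 / 4))    # (6) 0.75 * available memory

echo "shared_buffers = ${SHARED_BUFFERS}MB"
echo "work_mem = ${WORK_MEM}MB"
echo "maintenance_work_mem = ${MAINTENANCE_WORK_MEM}MB"
echo "effective_cache_size = ${EFFECTIVE_CACHE_SIZE}MB"
```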

Step 3 :

Restart PostgreSQL server.

sudo /etc/init.d/postgresql-8.3 restart

Step 4 :

If PostgreSQL produces an error message and cannot restart, raise the “kernel.shmmax” setting in sysctl.conf as the error message suggests.

Edit sysctl.conf :

sudo nano /etc/sysctl.conf
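
This startup failure happens when shared_buffers (plus some overhead) does not fit inside a single System V shared memory segment, which kernel.shmmax caps in bytes. A quick sanity check, sketched under the assumption of the 2GB shared_buffers and 2209914880-byte shmmax values from the references below :

```shell
# Show the kernel's current limit (read-only, no root needed)
cat /proc/sys/kernel/shmmax 2>/dev/null

# shared_buffers = 2GB in bytes, compared against the shmmax value
# used in Reference #2 below
SHARED_BUFFERS_BYTES=$((2 * 1024 * 1024 * 1024))
SHMMAX=2209914880
[ "$SHMMAX" -ge "$SHARED_BUFFERS_BYTES" ] && echo "shmmax can hold shared_buffers"

# After editing /etc/sysctl.conf, apply the new value without a reboot :
# sudo sysctl -p
```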

Reference #1 :

The following is my postgresql.conf setting for an 8GB RAM server running PostgreSQL.


max_connections = 140
shared_buffers = 2GB
temp_buffers = 8MB
work_mem = 16MB
maintenance_work_mem = 1GB
wal_buffers = 8MB
checkpoint_segments = 128
effective_cache_size = 6GB
cpu_tuple_cost = 0.0030
cpu_index_tuple_cost = 0.0010
cpu_operator_cost = 0.0005
fsync = off
checkpoint_timeout = 1h

Reference #2 :

The following is my sysctl.conf setting on the same server.


kernel.sem = 250 32000 100 128
kernel.shmall = 2097152
kernel.shmmax = 2209914880
kernel.shmmni = 4096
fs.file-max = 262140
vm.vfs_cache_pressure = 50
vm.min_free_kbytes = 65536

net.core.rmem_default = 33554432
net.core.rmem_max = 33554432
net.core.wmem_default = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 10240 87380 33554432
net.ipv4.tcp_wmem = 10240 87380 33554432
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_mem = 786432 1048576 26777216
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_tw_buckets = 360000

Step 5 :

Add the following parameters to the kernel tag of Grub.


Step 6 :

sudo mount -a

If no error message is produced, issue the following command to apply the change.

sudo mount -o remount /
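
The original post does not show which /etc/fstab options were changed before this step. A common database-server tweak, given here purely as a hypothetical example, is mounting the data filesystem with noatime so that reads do not trigger access-time writes :

```
# /etc/fstab — hypothetical entry; the device, filesystem type and other
# options must match your own system
/dev/sda1  /  ext3  defaults,noatime,errors=remount-ro  0  1
```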

8 Responses


  3. I think it would be kind to your users to explain what kind of problems you can potentially have by disabling fsync before making such suggestions.

  4. jhigdon,

    Thank you for your suggestion. It has been amended.


  5. Several of these recommendations are downright wrong and dangerous.

    fsync=off is *never* safe. If you have a properly configured battery backed RAID controller, fsync=on will be fast. That’s the difference it makes. fsync=off means you can’t expect your database to survive any type of crash. You can come up with any level of corruption in the database – not just losing your last transactions.

    So, to repeat the point: NEVER EVER run with fsync=off on a production server if you care about your data or system availability.

    You normally *never* change cpu_*_cost (3 parameters). The only such parameter you’d normally touch is random_page_cost, which isn’t on your list…

    checkpoint_timeout = 1h is basically disabling a very important feature. normally, checkpoint_timeout should be controlling your checkpoints, and checkpoint_segments only kick in when you’re doing things like bulk loading. A value of 1h makes pretty much no sense at all.

    All memory recommendations in “% of total memory” would depend completely on what your total memory is. Those numbers make some sense when it’s say 2-4Gb, but are completely off if you have 32 or 64gb which is quite common in database servers.

    Your suggestion for work_mem is dangerous. If you said “available memory / (max_connections * 2)” it’d be semi-safe if all your queries are very simple ones, without JOINs or subselects or anything. Suggesting that it should “always” be more than 16Mb also makes no sense – most of my large installations have it at less than 16Mb for any kind of reasonable performance. It depends entirely on what your database workload is.

    maintenance_work_mem seldom gives any help at all when you go over 512 or 1024Mb, regardless of how much memory you have.

