Re: Slow queries after vacuum analyze

From: "Kevin Grittner" <kgrittn(at)mail(dot)com>
To: "Ghislain ROUVIGNAC" <ghr(at)sylob(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow queries after vacuum analyze
Date: 2012-12-18 20:09:54
Message-ID: 20121218200955.14720@gmx.com
Lists: pgsql-performance

Ghislain ROUVIGNAC wrote:

> Memory: in use 4 GB, free 15 GB, cache 5 GB.

If the active portion of your database is actually small enough
that it fits in the OS cache, I recommend:

seq_page_cost = 0.1
random_page_cost = 0.1
cpu_tuple_cost = 0.05
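
For example (just a sketch; the SELECT is a placeholder for one of the
slow queries), these can be tried in a single session before touching
postgresql.conf:

  BEGIN;
  SET LOCAL seq_page_cost = 0.1;
  SET LOCAL random_page_cost = 0.1;
  SET LOCAL cpu_tuple_cost = 0.05;
  EXPLAIN ANALYZE SELECT ...;  -- placeholder: run one of the slow queries
  ROLLBACK;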

> I plan to increase various parameters as follows:
> shared_buffers = 512MB
> temp_buffers = 16MB
> work_mem = 32MB
> wal_buffers = 16MB
> checkpoint_segments = 32
> effective_cache_size = 2560MB
> default_statistics_target = 500
> autovacuum_vacuum_scale_factor = 0.05
> autovacuum_analyze_scale_factor = 0.025

You could probably go a little higher on work_mem and
effective_cache_size. I would leave default_statistics_target alone
unless you see a lot of estimates which are off by more than an
order of magnitude. Even then, it is often better to set a higher
value for a few individual columns than for everything. Remember
that this setting has no effect until you reload the configuration
and then VACUUM.
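
For example, a per-column bump might look like the following (the table
and column names here are only placeholders):

  ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 500;
  ANALYZE some_table;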

-Kevin


From: Ghislain ROUVIGNAC <ghr(at)sylob(dot)com>
To: Kevin Grittner <kgrittn(at)mail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow queries after vacuum analyze
Date: 2012-12-21 10:48:58
Message-ID: CAH12p1CXpLRfopyJ84MFZxspoxFRv=g00GEPdakWfiGH3t1v=A@mail.gmail.com
Lists: pgsql-performance

Hello Kevin,

I solved the issue.
I reproduced it immediately after installing PostgreSQL 8.4.1.
I thought they were using PostgreSQL 8.4.8, but I was never able to
reproduce it with that version.
So something related to my problem was changed between those releases,
but I didn't see exactly what in the release notes.
Never mind.

You wrote:

> I would leave default_statistics_target alone unless you see a lot of
> estimates which are off by more than an order of magnitude. Even then, it
> is often better to set a higher value for a few individual columns than for
> everything.

We had an issue with a customer where we had to increase the statistics
parameter for a primary key.
So I'd like to know if there is a way to identify the columns for which
we have to change the statistics.
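
For reference, one way to spot such columns (only a sketch; the table and
column names are placeholders) is to compare the planner's estimate with
the actual row count, and to look at what the statistics currently hold:

  EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_column = 42;
  -- compare the estimated "rows=" with the "actual ... rows=" figure

  SELECT attname, n_distinct, null_frac
  FROM pg_stats
  WHERE tablename = 'some_table';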

Ghislain ROUVIGNAC
