Re: Simple postgresql.conf wizard

From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Greg Smith <gsmith(at)gregsmith(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Simple postgresql.conf wizard
Date: 2008-12-03 22:45:04
Message-ID: 1228344304.27483.180.camel@jd-laptop.pragmaticzealot.org
Lists: pgsql-hackers

On Wed, 2008-12-03 at 17:33 -0500, Robert Haas wrote:
> On Wed, Dec 3, 2008 at 4:41 PM, Joshua D. Drake <jd(at)commandprompt(dot)com> wrote:
> > If you are concerned about the analyze time between 10, 50 and 150, I
> > would suggest that you are concerned about the wrong things. Remember
>
> I can't rule that out. What things do you think I should be concerned
> about?

Your database's production performance with the change of the parameter.
More often than not, your problem (if you have one) isn't going to be
that default_statistics_target is too high.
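
If you want to check whether the target is really the culprit, it is
cheap to test on a single column before touching the global setting.
A rough sketch (the table and column names are made up for
illustration):

    -- show the cluster-wide default (10 out of the box)
    SHOW default_statistics_target;

    -- raise the target for one suspect column, then rebuild its stats
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
    ANALYZE orders;

If EXPLAIN then picks a better plan for queries on that column, the
default was too low for your data.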

> ISTM that default_statistics_target trades off ANALYZE time
> and query planning time vs. the possibility of better plans. If the
> former considerations are not an issue for dst = 50, then maybe we
> should emit 50 by default. But the limited evidence that has been
> published in this forum thus far doesn't support that contention.
>

Actually, there are years' worth of evidence in these archives. Not
that 50 is the right number, but that the current setting is definitely
wrong and that higher ones are needed. People generally start around
100 and go from there; where they don't, someone like Tom, me, or
another regular eventually says, "Oh, you need to increase
default_statistics_target".

There is no empirical evidence that 50 is the right setting, but there
is more than enough anecdotal evidence to suggest that 50 is a lot
better than 10 and that even higher than 50 is reasonable. In an effort
to follow the PostgreSQL conservative mantra, 50 is a good compromise.
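
For reference, the knob being argued over is a single line in
postgresql.conf (50 here is the proposed compromise, not anything that
ships today):

    # postgresql.conf
    default_statistics_target = 50    # 8.3 ships with 10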

> >> > It also seems unlikely that you would hit 256MB of checkpoint segments
> >> > on a 100MB database before checkpoint_timeout and if you did, you
> >> > certainly did need them.
> >>
> >> So why do we have this parameter at all?
> >
> > Excellent question, for a different thread :)
>
> I think the rhetorical answer is "so that we don't fill up the disk",

I don't think at any time I have said to myself, "I am going to set
this parameter low so I don't fill up my disk." If I am saying that to
myself, I have greatly underestimated the hardware for the task.
Consider that we are quarreling over a nominal amount of hard drive
space: at 16MB per WAL segment, 1000 checkpoint_segments works out to
roughly 16GB, and a more typical 100 segments to about 1.6GB. My phone
has more capacity than that.
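
To put a back-of-envelope number on it (the 8.3 docs give the ceiling
as roughly (2 + checkpoint_completion_target) * checkpoint_segments + 1
segments of 16MB each; the figures below assume the default
checkpoint_completion_target of 0.5):

    -- rough WAL footprint for checkpoint_segments = 1000
    SELECT 1000 * 16 / 1024.0 AS simple_gb,               -- ~15.6 GB
           (2.5 * 1000 + 1) * 16 / 1024.0 AS ceiling_gb;  -- ~39 GB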

*If* hard drive space is (that much of) a concern, then you already
have other problems that pgtune won't solve, and you should be manually
tuning the conf in the first place.

Joshua D. Drake

> which gets us back to database size.
>
> ...Robert
>
--
PostgreSQL
Consulting, Development, Support, Training
503-667-4564 - http://www.commandprompt.com/
The PostgreSQL Company, serving since 1997
