Re: ANALYZE sampling is too good

From: Peter Geoghegan <pg(at)heroku(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Jim Nasby <jim(at)nasby(dot)net>, Greg Stark <stark(at)mit(dot)edu>, Josh Berkus <josh(at)agliodbs(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ANALYZE sampling is too good
Date: 2013-12-11 19:42:13
Message-ID: CAM3SWZSJwREmjVPuH8coJwLHAWoHDD0b9=hZdSamYRW55aZ+qg@mail.gmail.com
Lists: pgsql-hackers

On Tue, Dec 10, 2013 at 4:48 PM, Peter Geoghegan <pg(at)heroku(dot)com> wrote:
> Why would I even mention that to a statistician? We want guidance. But
> yes, I bet I could give a statistician an explanation of statistics
> target that they'd understand without too much trouble.

Actually, I think that if we told a statistician about the statistics
target, his or her response would be: why would you presume to know
ahead of time what statistics target is going to be effective? I
suspect that the basic problem is that it isn't adaptive. I think that
if we could somehow characterize the quality of our sample as we took
it, and then cease sampling when we reached a certain degree of
confidence in its quality, that would be helpful. It might not even
matter that the sample was clustered from various blocks.
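To make the adaptive idea concrete, here is a minimal sketch (not PostgreSQL's actual ANALYZE code, just an illustration): keep drawing sample values, track the running mean and variance online, and stop once the relative standard error of the estimate falls below a target. The function name, thresholds, and the use of Welford's algorithm are all assumptions for illustration.

```python
import random

def adaptive_sample(draw, rel_err_target=0.01, min_n=100, max_n=100_000):
    """Sample until the relative standard error of the running mean
    drops below rel_err_target, using Welford's online variance.
    Hypothetical sketch of "cease sampling at a confidence level"."""
    n, mean, m2 = 0, 0.0, 0.0
    while n < max_n:
        x = draw()
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        if n >= min_n:
            se = (m2 / (n - 1) / n) ** 0.5  # standard error of the mean
            if abs(mean) > 0 and se / abs(mean) < rel_err_target:
                break  # estimate is good enough; stop early
    return n, mean

random.seed(42)
# A low-variance column converges with few samples...
n_low, _ = adaptive_sample(lambda: random.gauss(100, 1))
# ...while a high-variance column keeps sampling much longer.
n_high, _ = adaptive_sample(lambda: random.gauss(100, 50))
```

The point is that the sample size falls out of the data's own variability rather than being fixed up front by a statistics target.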

--
Peter Geoghegan
