Re: ANALYZE sampling is too good

From: Jim Nasby <jim(at)nasby(dot)net>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Greg Stark <stark(at)mit(dot)edu>
Cc: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>, Josh Berkus <josh(at)agliodbs(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ANALYZE sampling is too good
Date: 2013-12-09 21:35:32
Message-ID: 52A637A4.5090801@nasby.net
Lists: pgsql-hackers

On 12/8/13 1:49 PM, Heikki Linnakangas wrote:
> On 12/08/2013 08:14 PM, Greg Stark wrote:
>> The whole accounts table is 1.2GB and contains 10 million rows. As
>> expected, with rows_per_block set to 1 it reads 240MB of that,
>> containing nearly 2 million rows (and takes nearly 20s -- doing a full
>> table scan for select count(*) only takes about 5s):
>
> One simple thing we could do, instead of or in addition to changing the algorithm, is to issue posix_fadvise() calls for the blocks we're going to read. It should at least be possible to match the speed of a plain sequential scan that way.
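
That seems promising. A minimal sketch of what that might look like, outside of any PostgreSQL infrastructure (prefetch_sample_blocks and the sorted block-number array are illustrative stand-ins, not what acquire_sample_rows() actually does today):

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

#define BLCKSZ 8192                     /* PostgreSQL's default block size */

/*
 * Hint the kernel that we will soon read each sampled block, so it can
 * start fetching them asynchronously before the actual read() calls.
 */
static void
prefetch_sample_blocks(int fd, const off_t *blocks, int nblocks)
{
    int     i;

    for (i = 0; i < nblocks; i++)
    {
        int     rc = posix_fadvise(fd, blocks[i] * BLCKSZ, BLCKSZ,
                                   POSIX_FADV_WILLNEED);

        /* posix_fadvise returns the error number itself, not -1/errno */
        if (rc != 0)
            fprintf(stderr, "posix_fadvise: %s\n", strerror(rc));
    }
}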

Hrm... maybe it wouldn't be very hard to use async IO here either? I'm thinking we could do the stage-2 work in the callback routine...
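
Roughly what I'm picturing, as a self-contained sketch using POSIX AIO with a SIGEV_THREAD completion callback (process_block_stage2 is a hypothetical stand-in for the stage-2 row selection, and real code would need to cap the number of in-flight requests):

#include <aio.h>
#include <stdlib.h>

#define BLCKSZ 8192

/* Hypothetical stage-2 hook: pick sample rows from a page in memory. */
static void
process_block_stage2(const char *page)
{
    (void) page;                /* row-level sampling would happen here */
}

/* Runs in a separate thread once the asynchronous read completes. */
static void
read_complete(union sigval sv)
{
    struct aiocb *cb = sv.sival_ptr;

    if (aio_return(cb) == BLCKSZ)
        process_block_stage2((const char *) cb->aio_buf);

    free((void *) cb->aio_buf);
    free(cb);
}

/* Queue an asynchronous read of one block; returns without blocking. */
static int
read_block_async(int fd, off_t blockno)
{
    struct aiocb *cb = calloc(1, sizeof(struct aiocb));

    cb->aio_fildes = fd;
    cb->aio_offset = blockno * BLCKSZ;
    cb->aio_buf = malloc(BLCKSZ);
    cb->aio_nbytes = BLCKSZ;
    cb->aio_sigevent.sigev_notify = SIGEV_THREAD;
    cb->aio_sigevent.sigev_notify_function = read_complete;
    cb->aio_sigevent.sigev_value.sival_ptr = cb;

    return aio_read(cb);        /* link with -lrt on Linux */
}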
--
Jim C. Nasby, Data Architect jim(at)nasby(dot)net
512.569.9461 (cell) http://jim.nasby.net
