Re: Improving N-Distinct estimation by ANALYZE

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Greg Stark <gsstark(at)MIT(dot)EDU>, "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>, josh(at)agliodbs(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Improving N-Distinct estimation by ANALYZE
Date: 2006-01-09 16:21:10
Message-ID: 87fynx1ifd.fsf@stark.xeocode.com
Lists: pgsql-hackers


> > These numbers don't make much sense to me. It seems like 5% is about as
> > slow as reading the whole file, which is even worse than I expected. I
> > thought I was being a bit pessimistic to think reading 5% would be as slow
> > as reading 20% of the table.

I have a theory. My test program, like Postgres, reads in 8k chunks. Perhaps
that's fooling Linux into treating the access pattern as sequential and doing
32k reads internally. Each 8k request would then drag in three neighbouring
blocks, quadrupling the effective I/O: a 25% sample becomes a full table scan,
and a 5% sample becomes a 20% scan, which is about where I would have expected
the break-even point to be.
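
One way to check the theory would be to repeat the block sample with kernel
readahead disabled and see whether the 5% case gets cheaper. Below is a
minimal sketch of such a test, not my original program; the Bernoulli
sampling scheme and the use of posix_fadvise(POSIX_FADV_RANDOM) to suppress
readahead are assumptions for illustration:

/*
 * Hypothetical test harness: read a random sample of 8k blocks from a
 * file, optionally calling posix_fadvise(POSIX_FADV_RANDOM) first so the
 * kernel does not read ahead.  Comparing cold-cache timings with and
 * without the fadvise call should show whether readahead is inflating
 * the cost of a sparse block sample.
 *
 * usage: ./sampleread <file> <percent> [fadvise]
 */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

#define BLOCKSZ 8192

int main(int argc, char **argv)
{
    if (argc < 3)
    {
        fprintf(stderr, "usage: %s file percent [fadvise]\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    struct stat st;
    fstat(fd, &st);
    long nblocks = st.st_size / BLOCKSZ;
    double pct = atof(argv[2]) / 100.0;

    /* Tell the kernel the pattern is random, suppressing readahead. */
    if (argc > 3 && strcmp(argv[3], "fadvise") == 0)
        posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);

    srand48(time(NULL));

    char buf[BLOCKSZ];
    long nread = 0;
    for (long blk = 0; blk < nblocks; blk++)
    {
        /* Bernoulli-sample each 8k block with probability pct. */
        if (drand48() < pct)
        {
            if (pread(fd, buf, BLOCKSZ, (off_t) blk * BLOCKSZ) != BLOCKSZ)
                perror("pread");
            nread++;
        }
    }
    printf("read %ld of %ld blocks\n", nread, nblocks);
    close(fd);
    return 0;
}

If the theory is right, running the 5% sample on a cold cache with the
fadvise argument should bring its wall-clock time down toward what a true
5% read ought to cost, rather than matching a 20% read.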

--
greg
