Re: more problems with count(*) on large table

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Mike Charnoky" <noky(at)nextbus(dot)com>
Cc: "Alban Hertroys" <a(dot)hertroys(at)magproductions(dot)nl>, "Alvaro Herrera" <alvherre(at)commandprompt(dot)com>, "A(dot) Kretschmer" <andreas(dot)kretschmer(at)schollglas(dot)com>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: more problems with count(*) on large table
Date: 2007-10-01 17:43:25
Message-ID: 87bqbi91du.fsf@oxford.xeocode.com
Lists: pgsql-general


"Mike Charnoky" <noky(at)nextbus(dot)com> writes:

> Here is the output from EXPLAIN ANALYZE. This is the same query run
> back to back, first time takes 42 minutes, second time takes less than 2
> minutes!

That doesn't really sound strange at all. It sounds like you have a very slow
disk and a very large amount of memory: the first run has to read everything from
disk, while the second is served almost entirely from cache. Still, 40 minutes to
scan 11.4M records sounds kind of high to me.
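As a rough sanity check (all numbers here are assumptions for illustration, not
from the thread), here's the back-of-envelope arithmetic: at ~2k per row and a
typical consumer-drive sequential rate, a pure sequential scan would take only a
few minutes, so 42 minutes suggests a lot of random I/O or a slower disk.

```python
# Back-of-envelope scan-time estimate. row_bytes and seq_mb_per_s are
# guesses, not measured values from the thread.
rows = 11_400_000
row_bytes = 2048                       # assumed ~2k-wide rows
seq_mb_per_s = 60                      # assumed sequential rate for one consumer drive

table_mb = rows * row_bytes / 1024**2  # total data volume in MB
seq_minutes = table_mb / seq_mb_per_s / 60

print(f"~{table_mb:.0f} MB of data, ~{seq_minutes:.1f} min at sequential speed")
```

If the sequential estimate is ~6 minutes and the real scan takes 42, the drive
is spending most of its time seeking rather than streaming.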

How wide are these records anyway? That is, what is the table definition for
this table? If this is a single consumer drive, 42 minutes sounds about right
for 2k-wide records being randomly accessed.
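One quick way to answer the width question from the catalogs (a sketch; the
table name "events" is hypothetical, and the estimate relies on reltuples being
reasonably fresh from a recent ANALYZE):

```sql
-- Rough average row width: on-disk relation size divided by the
-- planner's row-count estimate.
SELECT relname,
       reltuples::bigint                                  AS est_rows,
       pg_relation_size(oid)                              AS bytes,
       pg_relation_size(oid) / NULLIF(reltuples::bigint, 0) AS avg_row_bytes
FROM pg_class
WHERE relname = 'events';
```

This includes per-row header overhead and dead space, so it tends to run a bit
higher than the sum of the column widths.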

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
