From: Bill Gribble <grib(at)linuxdevel(dot)com>
To: Gunther Schadow <gunther(at)aurora(dot)regenstrief(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Critical performance problems on large databases
Date: 2002-04-11 12:01:54
Message-ID: 1018526515.29603.34.camel@flophouse
Lists: pgsql-general
On Wed, 2002-04-10 at 17:39, Gunther Schadow wrote:
> PS: we are seriously looking into using pgsql as the core
> of a BIG medical record system, but we already know that
> if we can't get quick online responses (< 2 s) on
> large result sets (10000 records) at least at the first
> page (~ 100 records) we are in trouble.
There are a few tricks to getting fast results for pages of data in
large tables. I have an application in which we have a scrolling window
displaying data from a million-row table, and I have been able to make
it fairly interactively responsive (enough that it's not a problem).
We grab pages of a few screenfuls of data at a time using LIMIT /
OFFSET, enough to scroll smoothly over a short range. For LIMIT /
OFFSET queries to be fast, I found it was necessary to CREATE INDEX on,
CLUSTER on, and ORDER BY the key field.
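A minimal sketch of that paging pattern, using Python's built-in sqlite3 as a stand-in (the table and index names are made up for illustration; the LIMIT / OFFSET and ORDER BY syntax is the same in PostgreSQL, though CLUSTER is PostgreSQL-specific and is noted in a comment):

```python
import sqlite3

# In-memory stand-in for the big table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO records (id, payload) VALUES (?, ?)",
    [(i, "row %d" % i) for i in range(1, 10001)],
)

# Index on the key field so ORDER BY ... LIMIT/OFFSET can walk the index
# instead of sorting the whole table. In PostgreSQL you would follow this
# with:  CLUSTER records USING records_id_idx;
# to physically order the table by the index as well.
conn.execute("CREATE INDEX records_id_idx ON records (id)")

def fetch_page(page, page_size=100):
    """Fetch one screenful of rows, ordered by the indexed key."""
    return conn.execute(
        "SELECT id, payload FROM records ORDER BY id LIMIT ? OFFSET ?",
        (page_size, page * page_size),
    ).fetchall()

first_page = fetch_page(0)   # rows with id 1..100
third_page = fetch_page(2)   # rows with id 201..300
```

Note that OFFSET still has to skip over the preceding rows, so very deep pages get slower; it works well here because the scrolling window only jumps over short ranges at a time.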
Then the biggest slowdown is count(*), which we have to do in order to
fake up the scrollbar (so we know what proportion of the data has been
scrolled through). I have not completely fixed this yet. I want to
keep a separate mini-table of how many records are in the big table and
update it with a trigger (the table is mostly static). ATM, I just try
hard to minimize the times I call count(*).
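The trigger-maintained counter described above could look something like this. Again sketched with sqlite3 so it runs anywhere (the table names are invented; in PostgreSQL the triggers would be a CREATE FUNCTION ... LANGUAGE plpgsql plus CREATE TRIGGER pair rather than SQLite's inline BEGIN ... END bodies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT);

-- One-row side table holding the current count of the big table.
CREATE TABLE records_count (n INTEGER NOT NULL);
INSERT INTO records_count (n) VALUES (0);

-- Row-level triggers keep the counter in sync with inserts and deletes,
-- so reading the count never has to scan the big table.
CREATE TRIGGER records_ins AFTER INSERT ON records
BEGIN UPDATE records_count SET n = n + 1; END;
CREATE TRIGGER records_del AFTER DELETE ON records
BEGIN UPDATE records_count SET n = n - 1; END;
""")

conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [("row",)] * 500)
conn.execute("DELETE FROM records WHERE id <= 100")

# Cheap one-row lookup instead of a full-table count(*).
(row_count,) = conn.execute("SELECT n FROM records_count").fetchone()
```

Since the table is mostly static, the per-insert/per-delete trigger overhead is paid rarely, while the scrollbar's count query becomes a constant-time read of a single row.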
b.g.