Re: Asking advice on speeding up a big table

From: felix(at)crowfix(dot)com
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Asking advice on speeding up a big table
Date: 2006-04-10 21:20:21
Message-ID: 20060410212021.GA28712@crowfix.com
Lists: pgsql-general

On Mon, Apr 10, 2006 at 02:51:30AM -0400, Tom Lane wrote:
> felix(at)crowfix(dot)com writes:
> > I have a simple benchmark which runs too slow on a 100M row table, and
> > I am not sure what my next step is to make it faster.
>
> The EXPLAIN ANALYZE you showed ran in 32 msec, which ought to be fast
> enough for anyone on that size table. You need to show us data on the
> problem case ...

It is, but it is only 32 msec because the query has already run and
cached the relevant pages. And since I am using random values, as soon
as I look up some new values, they too are cached and no longer "new".
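
Something along these lines is what I mean (bigtable, id, and payload
are made-up names standing in for my real schema):

    -- Pick a fresh random key on each run, so repeated timings are not
    -- just re-reading pages already in the buffer cache.  The scalar
    -- subselect is evaluated once per query, so the index can still be
    -- used for the lookup.
    EXPLAIN ANALYZE
    SELECT payload
      FROM bigtable
     WHERE id = (SELECT (random() * 100000000)::bigint);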

What I was hoping for was some general insight from the EXPLAIN
ANALYZE: whether extra or different indices would help, or whether
there is some better method for finding one row out of 100 million. I
realize I am asking a vague question which probably can't be solved as
presented.
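
For concreteness, the kind of sanity check I have in mind (same
made-up names as above):

    -- A plain btree index should make a single-row lookup cost only a
    -- few index page reads plus one heap page read.
    CREATE INDEX bigtable_id_idx ON bigtable (id);
    ANALYZE bigtable;
    EXPLAIN ANALYZE SELECT payload FROM bigtable WHERE id = 12345;

If that plan is already a single Index Scan, then presumably the
remaining cost is random I/O fetching pages that aren't cached yet.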

--
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
Felix Finch: scarecrow repairman & rocket surgeon / felix(at)crowfix(dot)com
GPG = E987 4493 C860 246C 3B1E 6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o
