Re: Asking advice on speeding up a big table

From: "hubert depesz lubaczewski" <depesz(at)gmail(dot)com>
To: "felix(at)crowfix(dot)com" <felix(at)crowfix(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Asking advice on speeding up a big table
Date: 2006-04-11 07:52:40
Message-ID: 9e4684ce0604110052h71314e84qf49f2b8260151321@mail.gmail.com
Lists: pgsql-general

On 4/10/06, felix(at)crowfix(dot)com <felix(at)crowfix(dot)com> wrote:
>
> It is, but it is only 32 msec because the query has already run and
> cached the useful bits. And since I have random values, as soon as I
> look up some new values, they are cached and no longer new.

according to my experience, I would vote for a too slow filesystem
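
A quick way to test that hypothesis is to time the same lookup twice:
the first run pays the random-I/O cost, the second is served from
cache. A minimal sketch, assuming a hypothetical table "bigtable"
with an indexed column "val" standing in for the real schema:

    -- first run: walks the index and fetches heap pages from disk,
    -- so it reflects raw random-read latency
    EXPLAIN ANALYZE SELECT * FROM bigtable WHERE val = 12345678;

    -- second run: the same pages are now in the buffer/OS cache
    EXPLAIN ANALYZE SELECT * FROM bigtable WHERE val = 12345678;

If the first run takes tens of milliseconds and the repeat takes a
few, the bottleneck is the disk's seek time, not the plan: a b-tree
over 100 million rows is several levels deep, and each uncached level
plus the final heap fetch is one random read at roughly 5-10 ms on
typical disks.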

> What I was hoping for was some general insight from the EXPLAIN
> ANALYZE, that maybe extra or different indices would help, or if there
> is some better method for finding one row from 100 million. I realize
> I am asking a vague question which probably can't be solved as
> presented.
>

hmm .. perhaps you can try to denormalize the table and then use
multicolumn indices?
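
A minimal sketch of what that could look like, assuming a
hypothetical layout where the big table joins to two lookup tables
(all names here are invented, not from the original thread):

    -- fold the joined-in columns into one wide table ...
    CREATE TABLE facts_flat AS
    SELECT f.data, k.name AS key_name, v.name AS val_name
    FROM   facts f
    JOIN   keys k ON k.id = f.key_id
    JOIN   vals v ON v.id = f.val_id;

    -- ... and index the lookup columns together, so finding one row
    -- among 100 million is a single multicolumn index probe
    CREATE INDEX facts_flat_kv_idx ON facts_flat (key_name, val_name);

    SELECT data
    FROM   facts_flat
    WHERE  key_name = 'color' AND val_name = 'red';

The trade-off is the usual one for denormalization: the table and
index get larger and updates touch more data, but a point lookup no
longer has to probe several indexes and join the results.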

depesz
