Re: Dealing with big tables

From: "Mindaugas" <ml(at)kilimas(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Dealing with big tables
Date: 2007-12-02 11:37:56
Message-ID: E1Iyn92-00056Z-So@fenris.runbox.com
Lists: pgsql-performance


> my answer may be out of topic since you might be looking for a
> postgres-only solution.. But just in case....

I'd like to stay with SQL.

> What are you trying to achieve exactly ? Is there any way you could
> re-work your algorithms to avoid selects and use a sequential scan
> (consider your postgres data as one big file) to retrieve each of the
> rows, analyze / compute them (possibly in a distributed manner), and
> join the results at the end ?

I'm trying to improve performance - to get the answer from the query I
mentioned faster.

And since the cardinality is high (100,000+ distinct values), I doubt it
would be possible to match the indexed select's speed with any reasonable
number of sequential scan nodes.
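
To illustrate the point (the actual table and query aren't quoted in this
message, so the schema below is a hypothetical stand-in): with a b-tree
index on the high-cardinality column, fetching one value's rows costs a
few index probes, while every sequential scan node still has to read its
whole slice of the table no matter how the work is distributed.

  -- Hypothetical stand-in for the table under discussion.
  CREATE TABLE events (
      item_id bigint      NOT NULL,  -- 100,000+ distinct values
      ts      timestamptz NOT NULL,
      payload text
  );

  CREATE INDEX events_item_id_ts_idx ON events (item_id, ts);

  -- The indexed select: a handful of page reads regardless of table size.
  EXPLAIN ANALYZE
  SELECT *
  FROM events
  WHERE item_id = 12345
  ORDER BY ts DESC
  LIMIT 100;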

Mindaugas
