Re: Speeding up a query.

From: Alberto Dalmaso <dalmaso(at)clesius(dot)it>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Speeding up a query.
Date: 2009-06-17 09:33:59
Message-ID: 1245231239.5027.113.camel@dalmaso-opensuse.cl
Lists: pgsql-performance

Yes, I have to do that because the data in the table needs to be
pivoted, so the table is joined many times with a different filter on
the column that describes the meaning of the column called
numeric_value that I'm going to show.
That can be very inefficient, especially because the table contains
something like 25,000,000 rows...
There are two tables in this situation (as you can see in the explain),
and they are the two tables with the highest number of rows in the
database.
But I don't see any other way to obtain that information.
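
Just to show the shape of the pattern, here is a simplified sketch (the
table and column names, apart from numeric_value, are made up for the
example):

    SELECT k.id,
           a.numeric_value AS attribute_a,
           b.numeric_value AS attribute_b
    FROM entity k
    JOIN big_values a ON a.entity_id = k.id AND a.meaning = 'A'
    JOIN big_values b ON b.entity_id = k.id AND b.meaning = 'B'
    -- ...and so on, one more join of big_values for every attribute
    -- that has to become a column in the result
    ;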

P.S.: I'm trying with all the enable_* settings on and with higher
values for from_collapse_limit and join_collapse_limit, which I've set
to 30.
The result is that the query, after an hour of work, runs out of memory
(SQLSTATE 53200)...


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Alberto Dalmaso" <dalmaso(at)clesius(dot)it>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Speeding up a query.
Date: 2009-06-17 14:54:28
Message-ID: 4A38BD540200002500027C85@gw.wicourts.gov
Lists: pgsql-performance

Alberto Dalmaso <dalmaso(at)clesius(dot)it> wrote:

> P.S.: I'm trying with all the enable_* settings on and with higher
> values for from_collapse_limit and join_collapse_limit, which I've
> set to 30.

Tom suggested that you set those numbers higher than the number of
tables joined in the query. I don't think 30 will do that.
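
As a sketch, if the query joins something on the order of 40 relations,
settings along these lines (the value 60 is only an illustration, not a
recommendation) would keep the planner from splitting the join problem
into smaller pieces:

    SET from_collapse_limit = 60;
    SET join_collapse_limit = 60;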

> The result is that the query, after an hour of work, runs out of
> memory (SQLSTATE 53200)...

Ouch! Can you provide more details? All information from the
PostgreSQL log about that event would be good. If there's anything
which might be related in the OS logs from around that time, please
include that, too.

Also, with those settings at a high value, try running just an EXPLAIN
(no ANALYZE) of the query, to see how long that takes, and whether you
have a memory issue during the planning phase. (You can use \timing
in psql to get a report of the run time of the EXPLAIN.)
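
Roughly, the session would look like this (the query itself is just a
placeholder here):

    \timing
    EXPLAIN <your query>;

psql will then print a "Time: ..." line after the EXPLAIN output, which
tells you how long planning alone took.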

-Kevin


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alberto Dalmaso <dalmaso(at)clesius(dot)it>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Speeding up a query.
Date: 2009-06-17 15:07:18
Message-ID: 26803.1245251238@sss.pgh.pa.us
Lists: pgsql-performance

Alberto Dalmaso <dalmaso(at)clesius(dot)it> writes:
> P.S.: I'm trying with all the enable_* settings on and with higher
> values for from_collapse_limit and join_collapse_limit, which I've
> set to 30.
> The result is that the query, after an hour of work, runs out of
> memory (SQLSTATE 53200)...

Hmm, is that happening during planning (ie, do you get the same error
if you just try to EXPLAIN the query with those settings)? If not,
please show the EXPLAIN output.

regards, tom lane