From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Peter Hussey <peter(at)labkey(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Questions on query planner, join types, and work_mem
Date: 2010-08-03 02:48:52
Message-ID: AANLkTikA87wv9zsuB+s4BiVJt92uHcL2pw0OsMVMnN9g@mail.gmail.com
Lists: pgsql-performance
On Mon, Aug 2, 2010 at 5:23 PM, Peter Hussey <peter(at)labkey(dot)com> wrote:
> I already had effective_cache_size set to 500MB.
>
> I experimented with lowering random_page_cost to 3 then 2. It made no
> difference in the choice of plan that I could see. In the explain analyze
> output the estimated costs of the nested loop were in fact lowered, but so
> were the costs of the hash join plan, and the hash join remained the lowest
> predicted cost in all tests I tried.
What do you get if you set random_page_cost to a small value such as 0.01?
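As a sketch of that experiment (the query itself is hypothetical; substitute the one under test), you can lower the setting for the current session only and re-run EXPLAIN to see whether the planner switches plans:

```sql
-- Diagnostic session sketch: an extreme random_page_cost, set per-session,
-- to see whether the planner ever prefers the nested loop.
SET random_page_cost = 0.01;   -- extreme value, for diagnosis only
EXPLAIN ANALYZE
SELECT ...;                    -- placeholder for the query under test
RESET random_page_cost;        -- restore the configured default
```

SET affects only the current session, so this is safe to try without touching postgresql.conf.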
> What seems wrong to me is that the hash join strategy shows almost no
> difference in estimated costs as work_mem goes from 1MB to 500MB. The cost
> function decreases by 1%, but the actual time for the query to execute
> decreases by 86% as work_mem goes from 1MB to 500MB.
Wow. It would be interesting to find out how many batches are being
used. Unfortunately, releases prior to 9.0 don't display that
information.
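For reference, on 9.0 and later the EXPLAIN ANALYZE output includes the batch count under the Hash node, so the experiment above can be repeated at both work_mem settings and the counts compared (query and exact figures below are illustrative, not from this thread):

```sql
-- Compare hash batches at the two work_mem settings (9.0+ shows them):
SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT ...;    -- placeholder for the query under test
-- The Hash node then reports a line like:
--   Buckets: 1024  Batches: 16  Memory Usage: 1024kB
SET work_mem = '500MB';
EXPLAIN ANALYZE SELECT ...;    -- with enough memory, expect Batches: 1
```

A batch count above 1 means the hash table spilled to disk and the input was processed in multiple passes, which would explain a large runtime difference that the cost model barely reflects.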
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company