Re: massive performance hit when using "Limit 1"

From: Rich Doughty <rich(at)opusvl(dot)com>
To: pgsql-general(at)postgresql(dot)org
Cc: Richard Huxton <dev(at)archonet(dot)com>
Subject: Re: massive performance hit when using "Limit 1"
Date: 2005-12-06 13:22:57
Message-ID: 439590B1.5020302@opusvl.com
Lists: pgsql-general

Rich Doughty wrote:
> Richard Huxton wrote:
>
>> Rich Doughty wrote:

[snip]

>> Try the same query but with a low retailer_id (100 or something) and
>> see if it goes a lot quicker. If that is what the problem is, try
>> changing the ORDER BY to something like "_s.retailer_id, _t.value,
>> _t.number" and see if that gives the planner a nudge in the right
>> direction.
>
>
> the retailer_id would make no difference, as there are only 4000-ish rows in
> ta_tokens_stock and they all (for now) have the same retailer_id.

Oops, I (sort of) spoke too soon; I didn't read the second half of the
comment properly. Changing the ORDER BY clause does force a more sensible
query plan.

Many thanks. So that's one way to give the planner hints...
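For reference, the rewrite suggested above looks roughly like this. The column
names (_s.retailer_id, _t.value, _t.number) and the ta_tokens_stock table are
taken from the thread; the select list, the ta_tokens table, the join
condition, and the literal 100 are assumptions for the sketch:

```sql
-- Original form: with LIMIT 1 the planner may choose to walk an index on
-- (value, number) hoping the first matching retailer_id turns up quickly,
-- which is disastrous when matching rows are rare or clustered at the end.
SELECT _t.value, _t.number
FROM   ta_tokens_stock _s
JOIN   ta_tokens _t ON _t.id = _s.token_id   -- join condition assumed
WHERE  _s.retailer_id = 100
ORDER  BY _t.value, _t.number
LIMIT  1;

-- Rewritten form: prefixing the ORDER BY with _s.retailer_id nudges the
-- planner toward a plan driven by the retailer_id filter. Because every
-- returned row has the same retailer_id, the visible row order is unchanged.
SELECT _t.value, _t.number
FROM   ta_tokens_stock _s
JOIN   ta_tokens _t ON _t.id = _s.token_id
WHERE  _s.retailer_id = 100
ORDER  BY _s.retailer_id, _t.value, _t.number
LIMIT  1;
```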

>> Failing that, a change to your indexes will almost certainly help.
>
>
> i'm not sure that's the case. the exact same query, but limited to >2 rows
> is fine.
>
> I found this in the 8.0.4 release notes. I reckon it's a good guess that
> this is the problem:
>
> * Fix mis-planning of queries with small LIMIT values due to poorly thought
> out "fuzzy" cost comparison
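A quick way to confirm the mis-planning described in that release note is to
compare the plans the same query gets at two different LIMIT values (a sketch;
apart from the column and table names quoted in the thread, the join
condition and the literal 100 are placeholders):

```sql
-- If the "fuzzy" cost-comparison bug is in play, the LIMIT 1 plan will
-- differ radically from (and run far slower than) the LIMIT 2 plan.
EXPLAIN ANALYZE
SELECT _t.value, _t.number
FROM   ta_tokens_stock _s
JOIN   ta_tokens _t ON _t.id = _s.token_id   -- join condition assumed
WHERE  _s.retailer_id = 100
ORDER  BY _t.value, _t.number
LIMIT  1;

EXPLAIN ANALYZE
SELECT _t.value, _t.number
FROM   ta_tokens_stock _s
JOIN   ta_tokens _t ON _t.id = _s.token_id
WHERE  _s.retailer_id = 100
ORDER  BY _t.value, _t.number
LIMIT  2;
```

Upgrading to 8.0.4 or later, where that cost comparison was fixed, is the
alternative to rewriting the ORDER BY.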

--

- Rich Doughty
