Re: Simple join optimized badly?

From: Tom Lane <tgl@sss.pgh.pa.us>
To: "Craig A. James" <cjames@modgraph-usa.com>
Cc: pgsql-performance@postgresql.org
Subject: Re: Simple join optimized badly?
Date: 2006-10-07 15:51:40
Message-ID: 18722.1160236300@sss.pgh.pa.us
Lists: pgsql-performance

"Craig A. James" <cjames(at)modgraph-usa(dot)com> writes:
> There are two plans below. The first is before an ANALYZE HITLIST_ROWS, and it's horrible -- it looks to me like it's sorting the 16 million rows of the SEARCH table. Then I run ANALYZE HITLIST_ROWS, and the plan is pretty decent.

It would be interesting to look at the before-ANALYZE cost estimate for
the hash join, which you could get by setting enable_mergejoin off (you
might have to turn off enable_nestloop too). I recall though that
there's a fudge factor in costsize.c that penalizes hashing on a column
for which no statistics are available. The reason for this is the
possibility that the column has only a small number of distinct values,
which would make a hash join very inefficient (in the worst case all
the values might end up in the same hash bucket, making it no better
than a nestloop). Once you've done ANALYZE, the planner plugs in a real
estimate instead, and evidently the cost estimate drops enough to make
the hash join the winner.
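
As a minimal sketch of that experiment (the table and join-column
names below are placeholders, not taken from the original thread):

    -- Disable the plan types the planner currently prefers, forcing
    -- it to report its cost estimate for the hash join.
    SET enable_mergejoin = off;
    SET enable_nestloop = off;

    -- Plain EXPLAIN (before ANALYZE-ing the temp table) shows the
    -- planner's estimated cost for the surviving hash join plan.
    EXPLAIN
    SELECT *
    FROM search s
    JOIN hitlist_rows h ON h.id = s.id;

    -- Restore the defaults afterwards.
    RESET enable_mergejoin;
    RESET enable_nestloop;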

You might be able to persuade the planner to use a hash join anyway by
increasing work_mem enough, but on the whole my advice is to run
ANALYZE after you load up the temp table. The planner really can't be
expected to be very intelligent when it has no stats.
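
A minimal sketch of both options, again with placeholder names and an
illustrative work_mem value:

    -- Preferred: collect statistics right after loading the temp
    -- table, so the planner has real distinct-value estimates.
    CREATE TEMP TABLE hitlist_rows AS
        SELECT id FROM some_source;   -- placeholder load step
    ANALYZE hitlist_rows;

    -- Alternative: raise work_mem for the session so the hash join
    -- looks cheaper to the planner (the value here is illustrative).
    SET work_mem = '64MB';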

regards, tom lane
