From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: huge execution time difference with almost same plan
Date: 2004-09-04 22:02:39
Message-ID: 15928.1094335359@sss.pgh.pa.us
Lists: pgsql-hackers

Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp> writes:
> I've been playing with OSDL DBT-3 for a while and found a very strange
> phenomenon.

What can you tell us about the physical ordering of the tables involved?
The only way I can think of to explain these results is that "orders"
is more or less physically in order by its primary key, and "lineitem"
is also more or less ordered by the orders primary key (which might or
might not be the same as l_orderkey; I forget the schema of that
database).

You have

> -> Index Scan using orders_pkey on orders (cost=0.00..65387.00 rows=53357 width=30) (actual time=44.687..36364.312 rows=57306 loops=1)
> Filter: ((o_orderdate >= '1997-10-01'::date) AND (o_orderdate < '1998-01-01'::date))

versus

> -> Index Scan using i_o_orderdate on orders (cost=0.00..46100.56 rows=53357 width=30) (actual time=215.502..650031.049 rows=57306 loops=1)
> Index Cond: ((o_orderdate >= '1997-10-01'::date) AND (o_orderdate < '1998-01-01'::date))

Now in the first case that's going to be a full-table scan because
there are no index constraints. The second one actually is being
selective and so should visit fewer than all the table rows. The
only way the first one can be nearly twenty times faster is if it's
doing nearly sequential access to the table while the other one is
jumping all over the place.
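One way to see that access pattern directly (a diagnostic sketch, not from the original thread) is to look at the heap tuple IDs (`ctid`) the two scan orders would visit: page numbers that climb steadily mean near-sequential heap access, while ctids that jump around mean random I/O.

```sql
-- Hedged diagnostic: compare heap-page visit order under each scan order.
-- ctid is (page, tuple); roughly increasing page numbers => sequential access.
SELECT o_orderkey,  ctid FROM orders ORDER BY o_orderkey  LIMIT 20;
SELECT o_orderdate, ctid FROM orders ORDER BY o_orderdate LIMIT 20;
```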

(BTW, what happens if you take your thumb off the scales and allow
the planner to use a seqscan like it wanted to? Seems like that
should be faster than a full-table index scan.)
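In SQL terms, the experiment suggested above might look like this (a sketch; the DBT-3 query itself is not shown in the thread):

```sql
-- Hedged sketch: undo the cost tweak and let the planner choose freely
RESET random_page_cost;   -- back to the default it was planning with
EXPLAIN ANALYZE
SELECT ...;               -- the original DBT-3 query goes here
```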

The costs for the other table are striking too:

> -> Index Scan using i_l_orderkey on lineitem (cost=0.00..3.14 rows=3 width=11) (actual time=4.628..4.628 rows=1 loops=57306)
> Index Cond: ("outer".o_orderkey = lineitem.l_orderkey)
> Filter: (l_commitdate < l_receiptdate)

versus

> -> Index Scan using i_l_orderkey on lineitem (cost=0.00..3.14 rows=3 width=11) (actual time=22.064..22.064 rows=1 loops=57306)
> Index Cond: ("outer".o_orderkey = lineitem.l_orderkey)
> Filter: (l_commitdate < l_receiptdate)

These plans are identical, but the second case is five times slower
(which adds up to a lot over 57000+ iterations). Again, I think the
only possible explanation is that the first plan is benefiting from
locality of access while the second is not. That would have to be due
to the order of the probes in the first case being well-matched to the
physical table order whereas in the second case not.

Can you look at the data and see whether the pkey is correlated to
physical order while the orderdate is not? (That seems a bit weird
but it's the only explanation I can see.)
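The planner's own statistics record exactly this physical-order correlation. Assuming the tables have been ANALYZEd, a query against `pg_stats` can answer the question directly: values near +1 or -1 mean the column closely tracks heap order, values near 0 mean it does not.

```sql
-- Check planner-estimated correlation between column order and heap order
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename IN ('orders', 'lineitem')
  AND attname   IN ('o_orderkey', 'o_orderdate', 'l_orderkey');
```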

BTW, the planner is aware of physical order considerations; it would
not be smart enough to realize the correlation between physical order of
two different tables, but it would have accounted for this in its
preference of pkey over orderdate for scanning the orders table.
That's why you had to knock random_page_cost down so far in order
to mislead it into choosing the slower scan...
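For completeness, a less distorting way to probe such planner preferences than lowering random_page_cost is the session-local `enable_*` switches (again a hedged sketch, with the query elided):

```sql
-- Compare plan costs without mis-costing the whole session
BEGIN;
SET LOCAL enable_indexscan = off;  -- steer the planner away from index scans
EXPLAIN SELECT ...;                -- see what it thinks the alternative costs
ROLLBACK;                          -- setting reverts with the transaction
```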

regards, tom lane
