From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: anti-join chosen even when slower than old plan
Date: 2010-11-09 23:17:42
Message-ID: 20215.1289344662@sss.pgh.pa.us
Lists: pgsql-performance
"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>> samples   %        symbol name
>> 2320174   33.7617  index_getnext
> I couldn't resist seeing where the time went within this function.
> Over 13.7% of the opannotate run time was on this bit of code:
>     /*
>      * The xmin should match the previous xmax value, else chain is
>      * broken.  (Note: this test is not optional because it protects
>      * us against the case where the prior chain member's xmax aborted
>      * since we looked at it.)
>      */
>     if (TransactionIdIsValid(scan->xs_prev_xmax) &&
>         !TransactionIdEquals(scan->xs_prev_xmax,
>                              HeapTupleHeaderGetXmin(heapTuple->t_data)))
>         break;
> I can't see why it would be such a hotspot, but it is.
Main-memory access waits, maybe? If at_chain_start is false, that xmin
fetch would be the first actual touch of a given heap tuple, and could
be expected to have to wait for a cache line to be pulled in from RAM.
However, you'd have to be spending a lot of time chasing through long
HOT chains before that would happen enough to make this a hotspot...
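For readers outside the thread, the invariant the quoted test enforces can be sketched in plain C. This is a deliberately simplified model, not the real PostgreSQL structures: TxId, Tuple, and chain_intact are hypothetical stand-ins, and it ignores buffer pages, visibility rules, and everything else index_getnext actually does.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, minimal model of a HOT update chain: each tuple records
 * the transaction that created it (xmin) and, if it was updated, the
 * transaction that replaced it (xmax). */
typedef unsigned int TxId;
#define InvalidTxId 0u

typedef struct Tuple {
    TxId xmin;          /* creating transaction */
    TxId xmax;          /* replacing transaction, or InvalidTxId */
    struct Tuple *next; /* next chain member, or NULL */
} Tuple;

/* Walk the chain applying the check from the quoted code: each member's
 * xmin must equal the previous member's xmax, else the chain is broken
 * (e.g. the prior member's updater aborted after we looked at it). */
static bool chain_intact(const Tuple *start)
{
    TxId prev_xmax = InvalidTxId;

    for (const Tuple *tup = start; tup != NULL; tup = tup->next) {
        if (prev_xmax != InvalidTxId && prev_xmax != tup->xmin)
            return false;   /* chain broken: stop following it */
        prev_xmax = tup->xmax;
    }
    return true;
}
```

In this model, the `tup->xmin` read is the first touch of each chain member, which is where a cache-line fill from RAM would be charged by the profiler.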
regards, tom lane