Re: Memory usage during sorting

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Memory usage during sorting
Date: 2012-03-18 16:29:15
Message-ID: CAMkU=1z81tkRDjM9jHUDGTS_YFmnkgCAstLXCef+e9uM2DhaMw@mail.gmail.com
Lists: pgsql-hackers

On Sun, Mar 18, 2012 at 8:56 AM, Greg Stark <stark(at)mit(dot)edu> wrote:
> On Sun, Mar 18, 2012 at 3:25 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>>> No, it's about reducing the number of comparisons needed to maintain
>>>> the heap property.
>>
>>> That sounds very interesting.  I didn't know it was even theoretically
>>> possible to do that.
>>
>> So has somebody found a hole in the n log n lower bound on the cost of
>> comparison-based sorting?  I thought that had been proven pretty
>> rigorously.
>
> I think what he's describing is, for example, say you have a heap of
> 1,000 elements and that lets you do the entire sort in a single pass
> -- i.e. it generates a single sorted run after the first pass.
> Increasing the heap size to 2,000 will just mean doing an extra
> comparison for each tuple to sift up. And increasing the heap size
> further will just do more and more work for nothing. It's still n log n,
> but the constant factor is slightly higher. Of course, to optimize this
> you would need to be able to see the future.
>
> I always thought the same behaviour held if the heap was larger
> than necessary to do N merge passes. That is, making the heap larger
> might generate fewer tapes, but still enough to require the same
> number of passes. However, trying to work out an example, I'm not so
> sure. If you generate fewer tapes, then the heap you use to do the
> merge is smaller, so it might work out the same (or more likely, you
> might come out ahead).
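
To put rough numbers on the extra-comparison-per-layer effect, here is
a minimal standalone sketch: a textbook binary min-heap with a
comparison counter, fed random integers through the pop-and-replace
loop of run building. This is not the tuplesort.c code, and the heap
sizes and tuple count are invented for illustration. (With this
textbook sift-down each extra level costs about two extra comparisons
per tuple rather than one, but the shape is the same: once the heap is
already big enough to produce a single run, every further doubling
just adds a constant amount of work per tuple.)

#include <stdio.h>
#include <stdlib.h>

static long ncompare;           /* counts comparator calls */

static int
cmp(int a, int b)
{
    ncompare++;
    return (a > b) - (a < b);
}

/* Sift heap[pos] down until the min-heap property holds again. */
static void
sift_down(int *heap, int nslots, int pos)
{
    for (;;)
    {
        int     child = 2 * pos + 1;
        int     tmp;

        if (child >= nslots)
            break;
        if (child + 1 < nslots && cmp(heap[child + 1], heap[child]) < 0)
            child++;
        if (cmp(heap[pos], heap[child]) <= 0)
            break;
        tmp = heap[pos];
        heap[pos] = heap[child];
        heap[child] = tmp;
        pos = child;
    }
}

int
main(void)
{
    const int   heap_sizes[] = {1000, 2000, 4000};
    const long  ntuples = 1000 * 1000;
    int         s;

    for (s = 0; s < 3; s++)
    {
        int     nslots = heap_sizes[s];
        int    *heap = malloc(nslots * sizeof(int));
        int     i;
        long    t;

        /* fill and heapify the initial heap */
        for (i = 0; i < nslots; i++)
            heap[i] = rand();
        for (i = nslots / 2 - 1; i >= 0; i--)
            sift_down(heap, nslots, i);

        /* pop the minimum, replace it with the next input value, re-sift:
         * the inner loop of run building */
        ncompare = 0;
        for (t = 0; t < ntuples; t++)
        {
            heap[0] = rand();           /* stand-in for the next tuple */
            sift_down(heap, nslots, 0);
        }
        printf("heap of %4d slots: %.1f comparisons per tuple\n",
               nslots, (double) ncompare / ntuples);
        free(heap);
    }
    return 0;
}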

Except for rounding effects, using a larger heap that still needs the
same number of merge passes does come out the same. Every extra layer
on the tuple-heap during the initial run building is matched by one
fewer layer on the tape-heap during the merge, so they wash. I
haven't analyzed the exact rounding effects in detail, but just by
instrumenting the code I found that a huge tuple-heap with a two-tape
merge at the end used less than one percent more comparisons, but was
>40% slower, than a lower-memory sort which used a modest-sized
tuple-heap followed by a modest-sized tape-heap merge. My conclusion
is that it isn't the number of comparisons that drove the difference,
but the number of on-chip cache misses. Two modest-sized heaps are
more cache friendly than one giant heap and one tiny heap.
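
To see why the layers wash, here's the back-of-the-envelope version.
It assumes replacement selection produces runs averaging twice the
heap size (the classic result for random input), that the merge is
done in a single pass, and that each tuple pays about one comparison
per heap level in each phase; the input and heap sizes below are made
up, not the ones from the instrumented run above.

#include <stdio.h>
#include <math.h>       /* compile with -lm */

int
main(void)
{
    const double ntuples = 100e6;       /* hypothetical input size */
    const double heap_sizes[] = {1e4, 1e5, 1e6, 1e7};
    int     i;

    for (i = 0; i < 4; i++)
    {
        double  m = heap_sizes[i];
        double  nruns = ntuples / (2.0 * m);    /* avg run = 2 x heap */
        double  build = log2(m);        /* levels in the tuple-heap */
        double  merge = log2(nruns);    /* levels in the tape-heap */

        printf("heap %8.0f: %5.1f build + %5.1f merge = %5.1f levels/tuple\n",
               m, build, merge, build + merge);
    }
    return 0;
}

However the memory is split, the build and merge levels sum to about
log2(N) - 1 per tuple, which is why the comparison counts came out so
close.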

Of course if the data is partially sorted in a way that is apparent
over large ranges but not short ranges, the larger initial heap will
capture that during the initial run construction while the smaller one
will not.
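
For what it's worth, here is a toy replacement-selection run counter
that shows that effect (again, not tuplesort.c; the block size, heap
sizes, and input size are all invented for the demo). The input is
globally ascending but shuffled within blocks of 100,000 values, so
the order is only visible over ranges larger than a block: a heap
bigger than a block strings the whole input into one run, while a
much smaller heap emits hundreds of short runs.

#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    long    run;                /* run this value is destined for */
    long    val;
} Slot;

static int
slot_less(const Slot *a, const Slot *b)
{
    if (a->run != b->run)
        return a->run < b->run;
    return a->val < b->val;
}

/* Sift heap[pos] down until the min-heap property holds again. */
static void
sift_down(Slot *heap, int nslots, int pos)
{
    for (;;)
    {
        int     child = 2 * pos + 1;
        Slot    tmp;

        if (child >= nslots)
            break;
        if (child + 1 < nslots && slot_less(&heap[child + 1], &heap[child]))
            child++;
        if (!slot_less(&heap[child], &heap[pos]))
            break;
        tmp = heap[pos];
        heap[pos] = heap[child];
        heap[child] = tmp;
        pos = child;
    }
}

/* Globally ascending input, shuffled within blocks of 'blocksz' values. */
static long
next_value(long i, long blocksz)
{
    return (i / blocksz) * blocksz + rand() % blocksz;
}

int
main(void)
{
    const long  ntuples = 10 * 1000 * 1000;
    const long  blocksz = 100 * 1000;
    const int   heap_sizes[] = {10 * 1000, 1000 * 1000};
    int         s;

    for (s = 0; s < 2; s++)
    {
        int     nslots = heap_sizes[s];
        Slot   *heap = malloc(nslots * sizeof(Slot));
        long    nruns = 1;
        long    i;
        int     j;

        /* prime the heap with the first nslots input values */
        for (i = 0; i < nslots; i++)
        {
            heap[i].run = 1;
            heap[i].val = next_value(i, blocksz);
        }
        for (j = nslots / 2 - 1; j >= 0; j--)
            sift_down(heap, nslots, j);

        /* replacement selection over the rest of the input */
        for (; i < ntuples; i++)
        {
            Slot    in;

            if (heap[0].run > nruns)
                nruns = heap[0].run;    /* current run exhausted */

            /* "write out" heap[0].val, then replace it with the next
             * input value; if that value is smaller than what we just
             * wrote, it must wait for the next run */
            in.val = next_value(i, blocksz);
            in.run = (in.val >= heap[0].val) ? heap[0].run : heap[0].run + 1;
            heap[0] = in;
            sift_down(heap, nslots, 0);
        }
        /* (values still in the heap at end of input are ignored; at
         * most they would add one more run) */
        printf("heap of %7d slots: %ld run(s)\n", nslots, nruns);
        free(heap);
    }
    return 0;
}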

Cheers,

Jeff
