From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: A better way than tweaking NTUP_PER_BUCKET
Date: 2013-06-23 15:11:15
Message-ID: 51C71013.2080802@vmware.com
Lists: pgsql-hackers
On 23.06.2013 01:48, Simon Riggs wrote:
> On 22 June 2013 21:40, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>
>> I'm actually not a huge fan of this, as it's certainly not cheap to do. If
>> it can be shown to be better than an improved heuristic, then perhaps it
>> would work, but I'm not convinced.
>
> We need two heuristics, it would seem:
>
> * an initial heuristic to overestimate the number of buckets when we
> have sufficient memory to do so
>
> * a heuristic to determine whether it is cheaper to rebuild a dense
> hash table into a better one.
>
> Although I like Heikki's rebuild approach, we can't do this at every 2x
> overstretch. Given that large underestimates exist, we'll end up rehashing
> 5-12 times, which seems bad.
It's not very expensive. The hash values of all tuples have already been
calculated, so rebuilding just means moving the tuples to the right bins.
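To make that concrete, here is a minimal sketch in C of what "moving the
tuples to the right bins" amounts to, assuming each entry has kept the hash
value computed when it was inserted. Struct and function names here are
illustrative, not the actual executor API:

#include <stdint.h>
#include <stdlib.h>

typedef struct Tuple
{
    uint32_t      hashvalue;   /* computed once, at insert time */
    struct Tuple *next;        /* bucket chain link */
    /* ... tuple payload ... */
} Tuple;

/*
 * Relink every tuple into a table with nnewbuckets buckets (nnewbuckets
 * must be a power of two). No hash function is called; each tuple is
 * simply pushed onto its new chain.
 */
static Tuple **
rebuild_buckets(Tuple **oldbuckets, int noldbuckets, int nnewbuckets)
{
    Tuple **newbuckets = calloc(nnewbuckets, sizeof(Tuple *));
    for (int i = 0; i < noldbuckets; i++)
    {
        Tuple *tup = oldbuckets[i];
        while (tup != NULL)
        {
            Tuple *nexttup = tup->next;
            int    bucketno = tup->hashvalue & (nnewbuckets - 1);
            tup->next = newbuckets[bucketno];
            newbuckets[bucketno] = tup;
            tup = nexttup;
        }
    }
    free(oldbuckets);
    return newbuckets;
}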
> Better to let the hash table build and
> then re-hash once, if we can see it will be useful.
That sounds even less expensive, though.
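The check itself would be cheap: after the build phase we know the real
tuple count, so we can compare it against the target load factor and
rebuild at most once. A hypothetical sketch, reusing rebuild_buckets()
from above (the doubled threshold and the helper are made up for
illustration; NTUP_PER_BUCKET is 10, as in the tree today):

#define NTUP_PER_BUCKET 10      /* target tuples per bucket */

static int
next_pow2(long n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Run once, after the build phase and before the first probe. */
static Tuple **
maybe_rebuild(Tuple **buckets, int *nbuckets, long ntuples)
{
    /* Only rebuild if chains are well past the target length. */
    if (ntuples > (long) *nbuckets * NTUP_PER_BUCKET * 2)
    {
        int nnew = next_pow2(ntuples / NTUP_PER_BUCKET);

        buckets = rebuild_buckets(buckets, *nbuckets, nnew);
        *nbuckets = nnew;
    }
    return buckets;
}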
- Heikki