From: Atri Sharma <atri(dot)jiit(at)gmail(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Tomas Vondra <tv(at)fuzzy(dot)cz>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: tweaking NTUP_PER_BUCKET
Date: 2014-07-03 18:40:26
Message-ID: CAOeZVic7oqWJUAwnQyZwR8WhjAmMK4jKTad7rhk=jSN-qEQfTw@mail.gmail.com
Lists: pgsql-hackers
On Thu, Jul 3, 2014 at 11:40 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> Tomas,
>
> * Tomas Vondra (tv(at)fuzzy(dot)cz) wrote:
> > However it's likely there are queries where this may not be the case,
> > i.e. where rebuilding the hash table is not worth it. Let me know if you
> > can construct such query (I wasn't).
>
> Thanks for working on this! I've been thinking on this for a while and
> this seems like it may be a good approach. Have you considered a bloom
> filter?
>
IIRC, the last time we experimented with Bloom filters, I was short of good
real-world hash functions we could use to build the filter. If we are
restarting those experiments, I would be glad to assist.
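
For what it's worth, the usual way around needing many independent hash
functions is double hashing (Kirsch & Mitzenmacher): derive all k probe
positions from just two base hashes, g_i(x) = h1(x) + i * h2(x). Below is a
minimal standalone sketch of that idea, not code from any patch in this
thread; the FNV-style hashes, filter size, and k are illustrative
assumptions only.

/*
 * Sketch: a Bloom filter whose k probes are derived from two base hashes
 * via double hashing, g_i(x) = h1(x) + i * h2(x).  All constants here are
 * illustrative, not tuned values.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define BLOOM_BITS  (1 << 20)   /* m: filter size in bits */
#define BLOOM_K     4           /* k: probes per key */

static uint8_t bloom[BLOOM_BITS / 8];

/* First base hash: FNV-1a over the key bytes. */
static uint64_t
hash1(const void *key, size_t len)
{
    const uint8_t *p = key;
    uint64_t    h = 14695981039346656037ULL;

    for (size_t i = 0; i < len; i++)
        h = (h ^ p[i]) * 1099511628211ULL;
    return h;
}

/* Second base hash: same scheme with a different seed, roughly independent. */
static uint64_t
hash2(const void *key, size_t len)
{
    const uint8_t *p = key;
    uint64_t    h = 0x9e3779b97f4a7c15ULL;

    for (size_t i = 0; i < len; i++)
        h = (h ^ p[i]) * 1099511628211ULL;
    return h;
}

static void
bloom_add(const void *key, size_t len)
{
    uint64_t    h1 = hash1(key, len);
    uint64_t    h2 = hash2(key, len);

    for (int i = 0; i < BLOOM_K; i++)
    {
        uint64_t    bit = (h1 + (uint64_t) i * h2) % BLOOM_BITS;

        bloom[bit / 8] |= (uint8_t) (1 << (bit % 8));
    }
}

/* Returns false only if the key is definitely absent. */
static bool
bloom_maybe_contains(const void *key, size_t len)
{
    uint64_t    h1 = hash1(key, len);
    uint64_t    h2 = hash2(key, len);

    for (int i = 0; i < BLOOM_K; i++)
    {
        uint64_t    bit = (h1 + (uint64_t) i * h2) % BLOOM_BITS;

        if (!(bloom[bit / 8] & (1 << (bit % 8))))
            return false;
    }
    return true;
}

int
main(void)
{
    /* Hypothetical build-side keys of a hash join. */
    const char *keys[] = {"alice", "bob", "carol"};

    for (int i = 0; i < 3; i++)
        bloom_add(keys[i], strlen(keys[i]));

    printf("bob:  %s\n", bloom_maybe_contains("bob", 3) ? "maybe" : "no");
    printf("dave: %s\n", bloom_maybe_contains("dave", 4) ? "maybe" : "no");
    return 0;
}

The point is that only two decent hash functions are needed regardless of k;
the false-positive rate is then governed by m, k, and the number of inserted
keys, not by the supply of independent hash functions.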
Regards,
Atri