Re: tweaking NTUP_PER_BUCKET

From: Greg Stark <stark(at)mit(dot)edu>
To: Atri Sharma <atri(dot)jiit(at)gmail(dot)com>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Tomas Vondra <tv(at)fuzzy(dot)cz>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: tweaking NTUP_PER_BUCKET
Date: 2014-07-03 18:51:40
Message-ID: CAM-w4HPt83UWMmZQG3N+DchW97N5_FEYO-QrJ4zw38_-taTr1A@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jul 3, 2014 at 11:40 AM, Atri Sharma <atri(dot)jiit(at)gmail(dot)com> wrote:
> IIRC, last time when we tried doing bloom filters, I was short of some real
> world useful hash functions that we could use for building the bloom filter.

Last time, we wanted to use bloom filters in hash joins to filter out
tuples that won't match any of the future hash batches, to reduce the
number of tuples that need to be spilled to disk. The problem, however,
was that for a given memory budget it was unclear how to pick the right
bloom filter size, and how to model how much it would save versus how
much it would cost in reduced hash table size.
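The idea above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not anything from an actual patch: build a bloom filter over the inner side's join keys, then consult it before spilling outer tuples, so tuples that definitely have no match are dropped instead of written to disk.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: k salted SHA-256 hashes over an m-bit array.
    Guarantees no false negatives; false positives occur at some rate
    depending on m, k, and the number of inserted keys."""

    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, key):
        # Derive k bit positions by hashing the key with k different salts.
        for i in range(self.k):
            h = hashlib.sha256(b"%d:" % i + key).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # False means "definitely absent"; True means "possibly present".
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

# Hypothetical spill path: only write outer tuples whose key might match.
inner_keys = [b"alice", b"bob", b"carol"]
bf = BloomFilter(1024, 3)
for key in inner_keys:
    bf.add(key)

outer_tuples = [(b"alice", 1), (b"dave", 2), (b"bob", 3)]
spilled = [t for t in outer_tuples if bf.might_contain(t[0])]
```

Tuples filtered out here never had a match, so skipping them cannot change the join result; the cost is only the memory the bit array takes away from the hash table itself, which is exactly the trade-off in question.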

I think it just required some good empirical tests on hash-join-heavy
workloads to come up with some reasonable guesses. We don't need a
perfect model, just some reasonable bloom filter size that we're pretty
sure will usually help more than it hurts.
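As a starting point for such guesses, the standard textbook sizing formula relates bits per item to a target false-positive rate; this is generic bloom filter math, not anything taken from a PostgreSQL patch:

```python
import math

def bloom_size(n_items, fp_rate):
    """Textbook bloom-filter sizing: for n inserted items and a target
    false-positive rate p, the optimal bit count is
        m = -n * ln(p) / (ln 2)^2
    and the optimal number of hash functions is k = (m/n) * ln 2."""
    m = math.ceil(-n_items * math.log(fp_rate) / (math.log(2) ** 2))
    k = max(1, round((m / n_items) * math.log(2)))
    return m, k

# For one million inner keys and a 1% false-positive target, this gives
# roughly 9.6 million bits (~1.2 MB) and k = 7 hash functions.
m, k = bloom_size(1_000_000, 0.01)
```

At ~1.2 MB per million keys for a 1% miss rate, the filter is small relative to typical hash table sizes, which suggests why a fixed "reasonable" size could plausibly help more than it hurts.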

--
greg
