Re: PATCH: hashjoin - gracefully increasing NTUP_PER_BUCKET instead of batching

From: Peter Geoghegan <pg(at)heroku(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tomas Vondra <tv(at)fuzzy(dot)cz>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: hashjoin - gracefully increasing NTUP_PER_BUCKET instead of batching
Date: 2014-12-12 22:30:52
Message-ID: CAM3SWZQOJsZ54CVYWHY90-GdrvYU2ChPHJK9Oz2Hv7tvdm4+vQ@mail.gmail.com
Lists: pgsql-hackers

On Fri, Dec 12, 2014 at 5:19 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Well, this is sort of one of the problems with work_mem. When we
> switch to a tape sort, or a tape-based materialize, we're probably far
> from out of memory. But trying to set work_mem to the amount of
> memory we have can easily result in a memory overrun if a load spike
> causes lots of people to do it all at the same time. So we have to
> set work_mem conservatively, but then the costing doesn't really come
> out right. We could add some more costing parameters to try to model
> this, but it's not obvious how to get it right.

I've heard of people using "set work_mem = *" together with advisory locks
plenty of times. There might be a better way to set it dynamically, short
of a full admission control implementation.
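
To make that concrete, here's a rough, untested sketch of the advisory-lock
approach (the lock key, the four-slot cap, and both work_mem values are
arbitrary placeholders, not recommendations):

-- Try to grab one of a small pool of advisory-lock "slots"; only sessions
-- that get a slot run with the generous work_mem setting, everyone else
-- falls back to the conservative one.
DO $$
DECLARE
    slot int;
    got  boolean := false;
BEGIN
    FOR slot IN 1..4 LOOP                       -- at most 4 big-memory sessions
        IF pg_try_advisory_lock(42, slot) THEN  -- (42, slot) is an arbitrary key
            got := true;
            EXIT;
        END IF;
    END LOOP;

    IF got THEN
        PERFORM set_config('work_mem', '1GB', false);   -- session-level setting
    ELSE
        PERFORM set_config('work_mem', '64MB', false);  -- conservative fallback
    END IF;
END;
$$;

-- ... run the memory-hungry query here ...

SELECT pg_advisory_unlock_all();  -- give the slot back when done

Crude, but it bounds how many sessions can run with the larger setting at
once, which is most of what people seem to want here.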

--
Peter Geoghegan
