Re: Poor memory context performance in large hash joins

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Poor memory context performance in large hash joins
Date: 2017-02-23 22:15:18
Message-ID: CAH2-Wz=uF=Qe0n0atK17vbdQ0LMkF6QQU9she9aaZA_BLs+_mw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Feb 23, 2017 at 2:13 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> Is there a good solution to this? Could the new chunks be put in a
> different memory context, with the old context destroyed and the new one
> installed at the end of ExecHashIncreaseNumBatches? I couldn't find a
> destroy method for memory contexts; it looks like you just reset the
> parent instead. But I don't think that would work here.

Are you aware that tuplesort.c got a second memory context for 9.6,
entirely on performance grounds?
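(Not from the thread, just a sketch of the pattern being discussed: build
replacement allocations in a fresh context under the same parent, then
delete the old context wholesale. AllocSetContextCreate, MemoryContextSwitchTo,
and MemoryContextDelete are the real memory-context APIs; the context name
and helper function here are hypothetical.)

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Hypothetical helper: rebuild per-batch tuples in a fresh context. */
    static void
    rebuild_in_fresh_context(MemoryContext parent, MemoryContext *batch_cxt)
    {
        /* Create the replacement context under the same parent. */
        MemoryContext new_cxt = AllocSetContextCreate(parent,
                                                      "HashBatchContext",
                                                      ALLOCSET_DEFAULT_SIZES);
        MemoryContext old = MemoryContextSwitchTo(new_cxt);

        /* ... copy the surviving tuples into new_cxt here ... */

        MemoryContextSwitchTo(old);

        /* Drop the old context and all of its chunks in one call. */
        MemoryContextDelete(*batch_cxt);
        *batch_cxt = new_cxt;
    }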

--
Peter Geoghegan
