From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Jeff Janes <jeff(dot)janes(at)gmail(dot)com> |
Cc: | pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Poor memory context performance in large hash joins |
Date: | 2017-02-23 22:28:26 |
Message-ID: | 10401.1487888906@sss.pgh.pa.us |
Lists: | pgsql-hackers |
Jeff Janes <jeff(dot)janes(at)gmail(dot)com> writes:
> The number of new chunks can be almost as large as the number of old
> chunks, especially if there is a very popular value. The problem is that
> every time an old chunk is freed, the code in aset.c around line 968 has to
> walk over all the newly allocated chunks in the linked list before it can
> find the old one being freed. This is an N^2 operation, and I think it has
> horrible CPU cache hit rates as well.
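For readers without aset.c at hand, the scan being described works roughly like the sketch below (simplified, hypothetical names, not the actual PostgreSQL source): each oversized chunk lives in its own block on a singly-linked block list, so freeing one chunk means walking the list from the head while remembering the predecessor, which makes freeing N such chunks O(N^2) overall.

```c
#include <stdlib.h>

/* Simplified stand-in for a single-chunk block on a singly-linked list. */
typedef struct Block
{
    struct Block *next;         /* singly linked: no back pointer */
    void         *chunk;        /* the one oversized chunk in this block */
} Block;

static Block *blocks = NULL;    /* head of the context's block list */

static void
free_big_chunk(void *chunk)
{
    Block *block = blocks;
    Block *prev = NULL;

    /* Linear walk, repeated once per free: the O(N^2) behavior. */
    while (block != NULL && block->chunk != chunk)
    {
        prev = block;
        block = block->next;
    }
    if (block == NULL)
        return;                 /* not found; real code would report an error */

    /* Unlink requires the predecessor, which is why the scan is needed. */
    if (prev)
        prev->next = block->next;
    else
        blocks = block->next;
    free(block);
}
```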
Maybe it's time to convert that to a doubly-linked list. Although if the
hash code is producing a whole lot of requests that are only a bit bigger
than the separate-block threshold, I'd say It's Doing It Wrong. It should
learn to aggregate them into larger requests.
regards, tom lane
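A minimal sketch of the doubly-linked variant suggested above (hypothetical structure and names, not an actual patch): since the chunk sits immediately after its block header, the containing block can be located by address arithmetic, and a back pointer in the header lets it be unlinked in constant time instead of being searched for from the head of the list.

```c
#include <stdlib.h>

/* Hypothetical block header carrying a back pointer as well. */
typedef struct DBlock
{
    struct DBlock *prev;
    struct DBlock *next;
    void          *chunk;
} DBlock;

static DBlock *dblocks = NULL;  /* head of the doubly-linked block list */

static void
free_big_chunk_dl(DBlock *block)
{
    /* Constant-time unlink: no walk over the newly allocated blocks. */
    if (block->prev)
        block->prev->next = block->next;
    else
        dblocks = block->next;
    if (block->next)
        block->next->prev = block->prev;
    free(block);
}
```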