From: | Tomas Vondra <tv(at)fuzzy(dot)cz>
---|---
To: | pgsql-hackers(at)postgresql(dot)org
Subject: | Re: bad estimation together with large work_mem generates terrible slow hash joins
Date: | 2014-09-10 19:02:18
Message-ID: | 5410A03A.8080408@fuzzy.cz
Lists: | pgsql-hackers
On 10.9.2014 20:31, Robert Haas wrote:
> On Wed, Sep 10, 2014 at 2:25 PM, Heikki Linnakangas
> <hlinnakangas(at)vmware(dot)com> wrote:
>> The dense-alloc-v5.patch looks good to me. I have committed that with minor
>> cleanup (more comments below). I have not looked at the second patch.
>
> Gah. I was in the middle of doing this. Sigh.
>
>>> * the chunk size is 32kB (instead of 16kB), and we're using a 1/4
>>> threshold for 'oversized' items
>>>
>>> We need the threshold to be >= 8kB to trigger the special case
>>> within AllocSet. The 1/4 rule is consistent with ALLOC_CHUNK_FRACTION.
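
To make the quoted scheme concrete, here is a minimal sketch of dense chunk allocation with an oversize threshold. The names, the struct layout, and the use of plain malloc are illustrative assumptions, not the committed nodeHash.c code:

```c
/*
 * Minimal sketch of dense allocation: small tuples are packed into
 * shared 32kB chunks, while items at or above 1/4 of the chunk size
 * get a dedicated, exactly-sized chunk.
 */
#include <stddef.h>
#include <stdlib.h>

#define HASH_CHUNK_SIZE       (32 * 1024)            /* 32kB chunks */
#define HASH_CHUNK_THRESHOLD  (HASH_CHUNK_SIZE / 4)  /* 8kB 'oversized' cutoff */

typedef struct MemoryChunk
{
    struct MemoryChunk *next;   /* chunks form a linked list */
    size_t  maxlen;             /* capacity of data[] */
    size_t  used;               /* bytes already handed out */
    char    data[];             /* tuples packed densely here */
} MemoryChunk;

static void *
chunk_alloc(MemoryChunk **chunks, size_t size)
{
    MemoryChunk *chunk;

    /*
     * Oversized item: give it a dedicated, exactly-sized chunk, linked in
     * *behind* the current open chunk so small allocations can keep
     * filling the head.  (Error handling omitted for brevity.)
     */
    if (size >= HASH_CHUNK_THRESHOLD)
    {
        chunk = malloc(offsetof(MemoryChunk, data) + size);
        chunk->maxlen = chunk->used = size;
        if (*chunks)
        {
            chunk->next = (*chunks)->next;
            (*chunks)->next = chunk;
        }
        else
        {
            chunk->next = NULL;
            *chunks = chunk;
        }
        return chunk->data;
    }

    /* Start a fresh 32kB chunk if the current one can't fit the request. */
    if (*chunks == NULL || (*chunks)->maxlen - (*chunks)->used < size)
    {
        chunk = malloc(offsetof(MemoryChunk, data) + HASH_CHUNK_SIZE);
        chunk->maxlen = HASH_CHUNK_SIZE;
        chunk->used = 0;
        chunk->next = *chunks;
        *chunks = chunk;
    }

    chunk = *chunks;
    chunk->used += size;
    return chunk->data + chunk->used - size;
}
```

The point of the dedicated-chunk path is that a nearly-8kB tuple dropped into a shared chunk could strand almost 8kB of dead space at the chunk's tail; giving it an exactly-sized chunk of its own avoids that.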
>>
>> Should we care about the fact that if there are only a few tuples, we will
>> nevertheless waste 32kB of memory for the chunk? I guess not, but I thought
>> I'd mention it. The smallest allowed value for work_mem is 64kB.
>
> I think we should change the threshold here to 1/8th. The worst-case
> memory wastage as-is is ~32k/5 > 6k.
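
For what it's worth, the arithmetic behind that figure: the worst case is tuples just over 32k/5 ≈ 6.4kB, so four fit per 32kB chunk and ~6.4kB is stranded at the end of each one. A throwaway check (an illustration, not code from the patch) reproduces the numbers for both thresholds:

```c
/*
 * Worst-case in-chunk wastage: tuples just over chunk/(n+1), where n is
 * how many threshold-sized tuples fit in a chunk, leave ~chunk/(n+1)
 * bytes dead at the end of every chunk.
 */
#include <stdio.h>

int
main(void)
{
    const double chunk = 32 * 1024;
    const int fractions[] = {4, 8};     /* threshold = chunk / fraction */

    for (int i = 0; i < 2; i++)
    {
        double threshold = chunk / fractions[i];
        int    n = (int) (chunk / threshold);
        double worst = chunk / (n + 1);

        printf("threshold 1/%d (%5.0fB): worst in-chunk waste ~%.0fB\n",
               fractions[i], threshold, worst);
    }
    return 0;   /* prints ~6554B for 1/4, ~3641B for 1/8 */
}
```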
So you'd lower the threshold to 4kB? That may lower the wastage in the
chunks, but for a request just over 4kB, palloc (AllocSet) rounds the
allocation up to the next power of two and allocates 8kB anyway, wasting
up to an additional 4kB per oversized tuple. So I don't see how lowering
the threshold to 1/8th improves the situation ...
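
To spell out the rounding that argument relies on, here is a simplified sketch of AllocSet's power-of-2 behavior, not a copy of aset.c:

```c
/*
 * AllocSet rounds small palloc requests up to the next power of two;
 * requests above the ~8kB chunk limit get a dedicated block instead.
 * Simplified illustration only.
 */
#include <stddef.h>
#include <stdio.h>

#define ALLOC_CHUNK_LIMIT (8 * 1024)

static size_t
alloc_rounded(size_t request)
{
    size_t size = 8;                    /* smallest chunk size */

    if (request > ALLOC_CHUNK_LIMIT)
        return request;                 /* dedicated block, no rounding */
    while (size < request)
        size <<= 1;                     /* round up to next power of two */
    return size;
}

int
main(void)
{
    size_t request = 4 * 1024 + 1;      /* a tuple just over a 4kB threshold */

    /* prints: request 4097 -> allocated 8192 (4095 wasted) */
    printf("request %zu -> allocated %zu (%zu wasted)\n",
           request, alloc_rounded(request), alloc_rounded(request) - request);
    return 0;
}
```

So for tuples between 4kB and 8kB, a 1/8 threshold merely moves the wastage from the dense chunks into AllocSet's rounding.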
Tomas