From: Stephen Frost <sfrost(at)snowman(dot)net>
To: pgsql-hackers(at)postgresql(dot)org
Subject: work_mem / maintenance_work_mem maximums
Date: 2010-09-20 16:51:11
Message-ID: 20100920165111.GP26232@tamriel.snowman.net
Lists: pgsql-hackers

Greetings,

After watching a database import go abysmally slow on a pretty beefy
box with tons of RAM, I got annoyed and went to hunt down why in the
world PG wasn't using more than a small fraction of that memory.
Turns out to be a well-known and long-standing issue:

http://www.mail-archive.com/pgsql-hackers(at)postgresql(dot)org/msg101139.html

Now, for starters, we could fix guc.c to correctly cap these GUCs at
MaxAllocSize/1024; then at least our users would know that setting a
higher value isn't going to be honored. That, in my mind, is a pretty
clear bug fix. Of course, it doesn't help us poor data-warehousing
bastards with 64G+ machines.
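
To be concrete, here's roughly the shape of the fix I have in mind
(this is the work_mem entry from src/backend/utils/misc/guc.c, typed
from memory, so take the exact field layout with a grain of salt; the
current max is MAX_KILOBYTES, which guc.h defines as INT_MAX on 64-bit
boxes):

    {
        {"work_mem", PGC_USERSET, RESOURCES_MEM,
            gettext_noop("Sets the maximum memory to be used for query workspaces."),
            gettext_noop("This much memory can be used by each internal "
                         "sort operation and hash table before switching to "
                         "temporary disk files.")
        },
        &work_mem,
        /* was: 1024, 64, MAX_KILOBYTES */
        1024, 64, MaxAllocSize / 1024,  /* assumes utils/memutils.h is in scope */
        NULL, NULL
    },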

Sooo.. I don't know much about why the limit is what it is, but based
on the comments, I'm wondering if we could just move it to a more
'sane' place than the-function-we-use-to-allocate. If we need a hard
limit due to TOAST, let's enforce it there; I'm hopeful we can work
out a way to get rid of the limit in repalloc and let sorts and the
like (uh, index creation) use however much memory the user has
decided they should be able to.
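
(For anyone who hasn't chased this down: the hard limit lives in the
allocator itself. Quoting roughly from memory, from utils/memutils.h
and src/backend/utils/mmgr/mcxt.c, so check the tree for the exact
text:)

    /* utils/memutils.h */
    #define MaxAllocSize    ((Size) 0x3fffffff) /* 1 gigabyte - 1 */
    #define AllocSizeIsValid(size)  ((Size) (size) <= MaxAllocSize)

    /* mcxt.c: repalloc (like palloc) refuses anything bigger, which is
     * why e.g. tuplesort's memtuples array can never grow past ~1GB no
     * matter how high work_mem is set */
    void *
    repalloc(void *pointer, Size size)
    {
        if (!AllocSizeIsValid(size))
            elog(ERROR, "invalid memory alloc request size %lu",
                 (unsigned long) size);
        /* ...rest of the function unchanged... */
    }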

Thanks,

Stephen
