SET work_mem = '1TB';

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: SET work_mem = '1TB';
Date: 2013-05-21 21:13:52
Message-ID: CA+U5nMJpR1HsAUQR2MLLmp14mYsGCHNBf1G1Kp3hUfL_uwWAhw@mail.gmail.com
Lists: pgsql-hackers

I worked up a small patch to support Terabyte setting for memory.
Which is OK, but it only works for 1TB, not for 2TB or above.

Which highlights that since we measure things in kB, we have an
inherent limit of 2047GB for our memory settings. It isn't beyond
belief we'll want to go that high, or at least won't be by end 2014
and will be annoying sometime before 2020.

Solution seems to be to support something potentially bigger than INT
for GUCs. So we can reclassify GUC_UNIT_MEMORY according to the
platform we're on.

Opinions?

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
terabyte_work_mem.v1.patch application/octet-stream 1.2 KB

From: Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-05-21 21:41:17
Message-ID: 519BE9FD.4040502@archidevsys.co.nz
Lists: pgsql-hackers

On 22/05/13 09:13, Simon Riggs wrote:
> I worked up a small patch to support Terabyte setting for memory.
> Which is OK, but it only works for 1TB, not for 2TB or above.
>
> Which highlights that since we measure things in kB, we have an
> inherent limit of 2047GB for our memory settings. It isn't beyond
> belief we'll want to go that high, or at least won't be by end 2014
> and will be annoying sometime before 2020.
>
> Solution seems to be to support something potentially bigger than INT
> for GUCs. So we can reclassify GUC_UNIT_MEMORY according to the
> platform we're on.
>
> Opinions?
>
> --
> Simon Riggs http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
>
I suspect it should be fixed before it starts being a problem, for 2
reasons:

1. best to panic early while we have time
(or more prosaically: doing it soon gives us more time to get it
right without undue pressure)

2. being unable to cope with 2TB and above might put off companies with
seriously massive databases from moving to Postgres

Probably a good idea to check what other values should be increased as well.

Cheers,
Gavin


From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-18 04:06:08
Message-ID: CAMkU=1yQymPj_=2HA_qwiOhf40fGFGehBQXq_8+yxdHrgQCzWQ@mail.gmail.com
Lists: pgsql-hackers

On Tuesday, May 21, 2013, Simon Riggs wrote:

> I worked up a small patch to support Terabyte setting for memory.
> Which is OK, but it only works for 1TB, not for 2TB or above.
>

I've incorporated my review into a new version, attached.

Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
"1TB" rather than "1024GB".

I tested several of the memory settings to see that it can be set and
retrieved. I haven't tested actual execution as I don't have that kind of
RAM.

I don't see how it could have a performance impact; it passes make check
etc., and I don't think it warrants a new regression test.

I'll set it to ready for committer.

Cheers,

Jeff

Attachment Content-Type Size
terabyte_work_mem.JJ.v2.patch text/x-patch 3.2 KB

From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-18 16:10:57
Message-ID: CAHGQGwG7skA6WYB9_k0dS-Mwv13zDJobrYF5T2dX8HWSty4gpg@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>
>> I worked up a small patch to support Terabyte setting for memory.
>> Which is OK, but it only works for 1TB, not for 2TB or above.
>
>
> I've incorporated my review into a new version, attached.
>
> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
> "1TB" rather than "1024GB".

Looks good to me. But I found you forgot to change postgresql.conf.sample,
so I changed it and attached the updated version of the patch.

Barring any objection to this patch, and if no one else picks it up, I
will commit it.

Regards,

--
Fujii Masao

Attachment Content-Type Size
terabyte_work_mem_fujii_v3.patch application/octet-stream 3.8 KB

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-18 17:40:24
Message-ID: CA+U5nM+zo+PPBA64VV-i2PNmBxVTLC8JLnjhtiDmUft_V6-q-g@mail.gmail.com
Lists: pgsql-hackers

On 18 June 2013 17:10, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote:
> On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
>> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>>
>>> I worked up a small patch to support Terabyte setting for memory.
>>> Which is OK, but it only works for 1TB, not for 2TB or above.
>>
>>
>> I've incorporated my review into a new version, attached.
>>
>> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
>> "1TB" rather than "1024GB".
>
> Looks good to me. But I found you forgot to change postgresql.conf.sample,
> so I changed it and attached the updated version of the patch.
>
> Barring any objection to this patch, and if no one else picks it up, I
> will commit it.

In truth, I hadn't realised somebody had added this to the CF. It was
meant to be an exploration and demonstration that further work was/is
required rather than a production quality submission. AFAICS it is
still limited to '1 TB' only...

Thank you both for adding to this patch. Since you've done that, it
seems churlish of me to interrupt that commit.

I will make a note to extend the support to higher values of TBs later.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-18 17:45:14
Message-ID: 51C09CAA.7060009@agliodbs.com
Lists: pgsql-hackers


> In truth, I hadn't realised somebody had added this to the CF. It was
> meant to be an exploration and demonstration that further work was/is
> required rather than a production quality submission. AFAICS it is
> still limited to '1 TB' only...

At the beginning of the CF, I do a sweep of patch files emailed to
-hackers and not in the CF. I believe there were three such patches of
yours; take a look at the CF list. Like I said, better to track them
unnecessarily than to lose them.

> Thank you both for adding to this patch. Since you've done that, it
> seems churlish of me to interrupt that commit.

Well, I think that someone needs to actually test doing a sort with,
say, 100GB of RAM and make sure it doesn't crash. Anyone have a machine
they can try that on?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-18 17:52:36
Message-ID: 20130618175236.GS23363@tamriel.snowman.net
Lists: pgsql-hackers

* Josh Berkus (josh(at)agliodbs(dot)com) wrote:
> Well, I think that someone needs to actually test doing a sort with,
> say, 100GB of RAM and make sure it doesn't crash. Anyone have a machine
> they can try that on?

It can be valuable to bump up work_mem well beyond the amount of system
memory actually available on the system to get the 'right' plan to be
chosen (which often ends up needing much less actual memory to run).

I've used that trick on a box w/ 512GB of RAM and had near-100G PG
backend processes which were doing hashjoins. Don't think I've ever had
it try doing a sort w/ a really big work_mem.

Thanks,

Stephen


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-18 17:59:45
Message-ID: CA+U5nMLfkj+ssSccbS=hXKp+m_D_Y1bTUkjH8iR+U_vrffqDGA@mail.gmail.com
Lists: pgsql-hackers

On 18 June 2013 18:45, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
>
>> In truth, I hadn't realised somebody had added this to the CF. It was
>> meant to be an exploration and demonstration that further work was/is
>> required rather than a production quality submission. AFAICS it is
>> still limited to '1 TB' only...
>
> At the beginning of the CF, I do a sweep of patch files emailed to
> -hackers and not in the CF. I believe there were three such patches of
> yours; take a look at the CF list. Like I said, better to track them
> unnecessarily than to lose them.

Thanks. Please delete the patch marked "Batch API for After Triggers".
All others are submissions by me.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-18 18:08:55
Message-ID: 51C0A237.1070703@agliodbs.com
Lists: pgsql-hackers

On 06/18/2013 10:59 AM, Simon Riggs wrote:

> Thanks. Please delete the patch marked "Batch API for After Triggers".
> All others are submissions by me.

The CF app doesn't permit deletion of patches, so I marked it "returned
with feedback".

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-18 21:57:00
Message-ID: CAHGQGwENAgBY7FUhXe=AcgwcWOVcDAyfjHNdnyZQHxgx=JACfg@mail.gmail.com
Lists: pgsql-hackers

On Wed, Jun 19, 2013 at 2:40 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On 18 June 2013 17:10, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote:
>> On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
>>> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>>>
>>>> I worked up a small patch to support Terabyte setting for memory.
>>>> Which is OK, but it only works for 1TB, not for 2TB or above.
>>>
>>>
>>> I've incorporated my review into a new version, attached.
>>>
>>> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
>>> "1TB" rather than "1024GB".
>>
>> Looks good to me. But I found you forgot to change postgresql.conf.sample,
>> so I changed it and attached the updated version of the patch.
>>
>> Barring any objection to this patch, and if no one else picks it up, I
>> will commit it.
>
> In truth, I hadn't realised somebody had added this to the CF. It was
> meant to be an exploration and demonstration that further work was/is
> required rather than a production quality submission. AFAICS it is
> still limited to '1 TB' only...

Yes.

> Thank you both for adding to this patch. Since you've done that, it
> seems churlish of me to interrupt that commit.

I was thinking of this as the infrastructure patch for your future
proposal, i.e., supporting higher TB values. But if it interferes with
your future proposal, I'm of course okay with dropping this patch. Thoughts?

Regards,

--
Fujii Masao


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-19 07:47:49
Message-ID: CA+U5nM+Ao4rJZhR5J0qquELg6op81hAfDVeJYiYjSaRbN3pp_A@mail.gmail.com
Lists: pgsql-hackers

On 18 June 2013 22:57, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote:
> On Wed, Jun 19, 2013 at 2:40 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>> On 18 June 2013 17:10, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote:
>>> On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
>>>> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>>>>
>>>>> I worked up a small patch to support Terabyte setting for memory.
>>>>> Which is OK, but it only works for 1TB, not for 2TB or above.
>>>>
>>>>
>>>> I've incorporated my review into a new version, attached.
>>>>
>>>> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
>>>> "1TB" rather than "1024GB".
>>>
>>> Looks good to me. But I found you forgot to change postgresql.conf.sample,
>>> so I changed it and attached the updated version of the patch.
>>>
>>> Barring any objection to this patch, and if no one else picks it up, I
>>> will commit it.
>>
>> In truth, I hadn't realised somebody had added this to the CF. It was
>> meant to be an exploration and demonstration that further work was/is
>> required rather than a production quality submission. AFAICS it is
>> still limited to '1 TB' only...
>
> Yes.
>
>> Thank you both for adding to this patch. Since you've done that, it
>> seems churlish of me to interrupt that commit.
>
> I was thinking of this as the infrastructure patch for your future
> proposal, i.e., supporting higher TB values. But if it interferes with
> your future proposal, I'm of course okay with dropping this patch. Thoughts?

Yes, please commit.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-06-19 23:18:42
Message-ID: CAHGQGwEq+0PkO64=_QSLOXNtJ9kvf+atr+egK4BFsWdqa=deSg@mail.gmail.com
Lists: pgsql-hackers

On Wed, Jun 19, 2013 at 4:47 PM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On 18 June 2013 22:57, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote:
>> On Wed, Jun 19, 2013 at 2:40 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>> On 18 June 2013 17:10, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote:
>>>> On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
>>>>> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>>>>>
>>>>>> I worked up a small patch to support Terabyte setting for memory.
>>>>>> Which is OK, but it only works for 1TB, not for 2TB or above.
>>>>>
>>>>>
>>>>> I've incorporated my review into a new version, attached.
>>>>>
>>>>> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
>>>>> "1TB" rather than "1024GB".
>>>>
>>>> Looks good to me. But I found you forgot to change postgresql.conf.sample,
>>>> so I changed it and attached the updated version of the patch.
>>>>
>>>> Barring any objection to this patch, and if no one else picks it up, I
>>>> will commit it.
>>>
>>> In truth, I hadn't realised somebody had added this to the CF. It was
>>> meant to be an exploration and demonstration that further work was/is
>>> required rather than a production quality submission. AFAICS it is
>>> still limited to '1 TB' only...
>>
>> Yes.
>>
>>> Thank you both for adding to this patch. Since you've done that, it
>>> seems churlish of me to interrupt that commit.
>>
>> I was thinking of this as the infrastructure patch for your future
>> proposal, i.e., supporting higher TB values. But if it interferes with
>> your future proposal, I'm of course okay with dropping this patch. Thoughts?
>
> Yes, please commit.

Committed.

Regards,

--
Fujii Masao