Re: SET work_mem = '1TB';

From: Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SET work_mem = '1TB';
Date: 2013-05-21 21:41:17
Message-ID: 519BE9FD.4040502@archidevsys.co.nz
Lists: pgsql-hackers

On 22/05/13 09:13, Simon Riggs wrote:
> I worked up a small patch to support terabyte settings for memory.
> It works, but only for 1TB, not for 2TB or above.
>
> Which highlights that, since we measure things in kB, we have an
> inherent limit of 2047GB for our memory settings. It isn't beyond
> belief that we'll want to go that high; at the very least it won't
> be by the end of 2014, and it will be annoying sometime before 2020.
>
> Solution seems to be to support something potentially bigger than INT
> for GUCs. So we can reclassify GUC_UNIT_MEMORY according to the
> platform we're on.
>
> Opinions?
>
> --
> Simon Riggs http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
>
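To make the quoted limit concrete: work_mem is stored as an int
counting kB, so the ceiling is INT_MAX kB. The arithmetic is runnable
in any psql session (the bigint casts are only there to dodge the very
int4 overflow under discussion):

    -- the ceiling: INT_MAX kB expressed in GB
    SELECT 2147483647 / (1024 * 1024) AS max_gb;          -- 2047

    -- 1TB and 2TB expressed in kB; only the first fits in an int
    SELECT 1::bigint * 1024 * 1024 * 1024 AS one_tb_kb,   -- 1073741824, fits
           2::bigint * 1024 * 1024 * 1024 AS two_tb_kb;   -- 2147483648, > INT_MAX

So 2TB misses by exactly one kB-unit, which is why only '1TB' works.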
I suspect it should be fixed before it starts being a problem, for two
reasons:

1. Best to panic early, while we have time
(or, more prosaically: doing it soon gives us more time to get it
right without undue pressure).

2. Not being able to cope with 2TB and above might put off companies
with seriously massive databases from moving to Postgres.

It would probably be a good idea to check which other settings should
be widened at the same time.
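
One quick way to survey the candidates is to query pg_settings; a
minimal sketch (max_val is stored as text there, hence the cast):

    -- memory-unit GUCs, largest ceilings first
    SELECT name, unit, max_val
    FROM   pg_settings
    WHERE  unit IN ('kB', '8kB')
    ORDER  BY max_val::bigint DESC;

Anything whose max_val sits at or near INT_MAX would hit the same wall
as work_mem.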

Cheers,
Gavin
