Re: Built-in support for a memory consumption ulimit?

From: "zhouqq(dot)postgres(at)gmail(dot)com" <zhouqq(dot)postgres(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: Built-in support for a memory consumption ulimit?
Date: 2014-06-17 00:20:03
Message-ID: 20140617082001329268114@gmail.com
Lists: pgsql-hackers

I like this feature but am wondering how to use it. If we use one value across all backends, we may have to set it conservatively to avoid the OOM killer, but that does not promote resource sharing. If we set it per backend, what is the suggested value? One approach is to recommend that users sort their queries into a small-q group and a big-q group, where big-q connections set a higher ulimit relative to work_mem. But this, too, has limited mileage.

An ideal way would be PGC_SIGHUP, implying that the total across all server processes shall respect this setting and that it can be adjusted at runtime. I am not sure how to implement that, as setrlimit() does not appear to support process groups (and what about Windows?). Even if it did, a small issue is that this might increase the chance of hitting OOM in some inconvenient places. For example, here:

/* Special case for startup: use good ol' malloc */
node = (MemoryContext) malloc(needed);
Assert(node != NULL);
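
To make the setrlimit() point concrete, here is a minimal sketch of what the per-backend variant could look like; the function and parameter names are invented for illustration, not actual PostgreSQL code:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/resource.h>

/*
 * Hypothetical helper, called once at backend startup when the GUC is
 * set; "memory_limit_kb" is an invented parameter name.
 */
static int
set_backend_memory_limit(long memory_limit_kb)
{
    struct rlimit rl;

    rl.rlim_cur = (rlim_t) memory_limit_kb * 1024;
    rl.rlim_max = rl.rlim_cur;

    /*
     * RLIMIT_AS caps the virtual address space of this process only:
     * setrlimit() knows nothing about process groups, and Windows does
     * not provide it at all, so a cluster-wide PGC_SIGHUP limit would
     * need some other mechanism.
     */
    if (setrlimit(RLIMIT_AS, &rl) != 0)
    {
        fprintf(stderr, "could not set memory limit: %s\n",
                strerror(errno));
        return -1;
    }
    return 0;
}

int
main(void)
{
    /* e.g., limit the process to 512 MB of address space */
    return set_backend_memory_limit(512 * 1024) == 0 ? 0 : 1;
}
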

I wonder how far we want to go along this line. Consider this case: with some concurrent big-q and med-q queries, the system may comfortably allow one big-q to run while two or three med-qs soak up the left-over memory. With query throttling, we could hopefully let all queries run to completion without mid-query failure surprises, while the ulimit guards the bottom line if anything goes wrong. This may lead to a discussion of more complete workload management support.

Regards,
Qingqing

From: Tom Lane
Date: 2014-06-14 22:37
To: pgsql-hackers
Subject: [HACKERS] Built-in support for a memory consumption ulimit?
After giving somebody advice, for the Nth time, to install a
memory-consumption ulimit instead of leaving his database to the tender
mercies of the Linux OOM killer, it occurred to me to wonder why we don't
provide a built-in feature for that, comparable to the "ulimit -c max"
option that already exists in pg_ctl. A reasonably low-overhead way
to do that would be to define it as something a backend process sets
once at startup, if told to by a GUC. The GUC could possibly be
PGC_BACKEND level though I'm not sure if we want unprivileged users
messing with it.
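
For concreteness, a sketch of what such a GUC table entry might look like, in the style of guc.c's ConfigureNamesInt[]; the name "backend_memory_limit", its variable, and the assign hook are all invented:

{
    {"backend_memory_limit", PGC_BACKEND, RESOURCES_MEM,
        gettext_noop("Sets the maximum total memory a backend process "
                     "may allocate, enforced via setrlimit() at startup."),
        NULL,
        GUC_UNIT_KB
    },
    &backend_memory_limit,          /* hypothetical int variable */
    0, 0, INT_MAX,                  /* boot value, min, max (0 = no limit) */
    NULL, assign_backend_memory_limit, NULL
},
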

Thoughts?

regards, tom lane


