Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From: Heiko Wundram <modelnine(at)modelnine(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-29 13:49:37
Message-ID: 4ED4E2F1.5060003@modelnine.org
Lists: pgsql-general

Hello!

Sorry for that subscribe post I've just sent, that was bad reading on my
part (for the subscribe info on the homepage).

Anyway, the title says it all: is there any possibility to limit the
number of connections that a client can have concurrently with a
PostgreSQL-Server with "on-board" means (where I can't influence which
user/database the clients use, rather, the clients mostly all use the
same user/database, and I want to make sure that a single client which
runs amok doesn't kill connectivity for other clients)? I could surely
implement this with a proxy sitting in front of the server, but I'd
rather implement this with PostgreSQL directly.

I'm using (and need to stick with) PostgreSQL 8.3 because of the
frontend software in question.

Thanks for any hints!

--
--- Heiko.


From: Filip Rembiałkowski <plk(dot)zuber(at)gmail(dot)com>
To: Heiko Wundram <modelnine(at)modelnine(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-29 19:44:34
Message-ID: CAP_rwwkKofPyjH+LOtA+K299yLEOXfXX53wo=jzOHP5Fr1JwgA@mail.gmail.com
Lists: pgsql-general

There's no easy, "standard" way of doing this in Postgres.
Before we go into workarounds - what's the underlying OS?

2011/11/29 Heiko Wundram <modelnine(at)modelnine(dot)org>:
> Hello!
>
> Sorry for that subscribe post I've just sent, that was bad reading on my
> part (for the subscribe info on the homepage).
>
> Anyway, the title says it all: is there any possibility to limit the number
> of connections that a client can have concurrently with a PostgreSQL-Server
> with "on-board" means (where I can't influence which user/database the
> clients use, rather, the clients mostly all use the same user/database, and
> I want to make sure that a single client which runs amok doesn't kill
> connectivity for other clients)? I could surely implement this with a proxy
> sitting in front of the server, but I'd rather implement this with
> PostgreSQL directly.
>
> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
> software in question.
>
> Thanks for any hints!
>
> --
> --- Heiko.
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


From: Heiko Wundram <modelnine(at)modelnine(dot)org>
To: Filip Rembiałkowski <plk(dot)zuber(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-29 22:18:43
Message-ID: 4ED55A43.1000409@modelnine.org
Lists: pgsql-general

On 29.11.2011 20:44, Filip Rembiałkowski wrote:
> no easy, "standard" way of doing this in postgres.
> before we go into workarounds - what's the underlying OS?

Okay, it's too bad that there's no standard way to do this. The
underlying OS is Linux (Gentoo, to be exact), and I'd already thought
about setting up some form of iptables firewalling, but there's no real
framework in iptables for this (i.e., "counting" the number of connected
TCP sockets that originate from a single client) - only for throttling
connection setup from the same source. That won't cut it, as there are
"spikes" in connection setup where many connections are created almost
at once, meaning that hashlimit, recent and the like are simply not
suited to the task at hand. I just need/want to set a "hard" upper
limit on the number of simultaneous connections from a single client as
an anti-DoS measure - the clients aren't hostile, but their programming
is broken...

Is there (meaning: do you know of) any form of generic TCP socket proxy
that can achieve this? I've looked through portage (the Gentoo package
set) for something applicable, but none of the socket proxy packages I
found were able to limit connections based on source IP out of the box,
either...
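For illustration, the kind of proxy I have in mind could be sketched in a
few dozen lines of Python - the addresses and the limit of 20 below are
made up, and the error handling is minimal:

```python
import socket
import threading
from collections import defaultdict


class PerIPLimiter:
    """Tracks concurrent connections per source IP against a hard cap."""

    def __init__(self, limit):
        self.limit = limit
        self.counts = defaultdict(int)
        self.lock = threading.Lock()

    def try_acquire(self, ip):
        with self.lock:
            if self.counts[ip] >= self.limit:
                return False
            self.counts[ip] += 1
            return True

    def release(self, ip):
        with self.lock:
            self.counts[ip] -= 1


def pipe(src, dst):
    # Copy bytes one way until EOF, then shut both sockets down.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass


def serve(listen=("0.0.0.0", 6432), backend=("127.0.0.1", 5432), limit=20):
    limiter = PerIPLimiter(limit)
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(listen)
    srv.listen(64)
    while True:
        client, (ip, _port) = srv.accept()
        if not limiter.try_acquire(ip):
            client.close()  # over the per-IP cap: hard reject
            continue

        def handle(client=client, ip=ip):
            try:
                upstream = socket.create_connection(backend)
            except OSError:
                client.close()
                limiter.release(ip)
                return
            t = threading.Thread(target=pipe, args=(upstream, client))
            t.start()
            pipe(client, upstream)
            t.join()
            client.close()
            upstream.close()
            limiter.release(ip)

        threading.Thread(target=handle, daemon=True).start()
```

(A real deployment would also want logging and a graceful rejection, but
the counting itself is just the PerIPLimiter part.)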

Anyway, thanks for your feedback!

--
--- Heiko.


From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Heiko Wundram <modelnine(at)modelnine(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-29 22:38:01
Message-ID: CAHyXU0wrsYShxmwBxZSGYoiBJa=gzEJ17iAeRvaf_vA+coH_qA@mail.gmail.com
Lists: pgsql-general

On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine(at)modelnine(dot)org> wrote:
> Hello!
>
> Sorry for that subscribe post I've just sent, that was bad reading on my
> part (for the subscribe info on the homepage).
>
> Anyway, the title says it all: is there any possibility to limit the number
> of connections that a client can have concurrently with a PostgreSQL-Server
> with "on-board" means (where I can't influence which user/database the
> clients use, rather, the clients mostly all use the same user/database, and
> I want to make sure that a single client which runs amok doesn't kill
> connectivity for other clients)? I could surely implement this with a proxy
> sitting in front of the server, but I'd rather implement this with
> PostgreSQL directly.
>
> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
> software in question.
>
> Thanks for any hints!

I think the (hypothetical) general solution for these types of
problems is to have logon triggers. It's one of the (very) few things
I envy from SQL Server -- see here:
http://msdn.microsoft.com/en-us/library/bb326598.aspx.

Barring the above, if you can trust the client to call a function upon
connection I'd just do that and handle the error on the client with a
connection drop. Barring *that*, I'd be putting my clients in front of
pgbouncer with some patches to the same to get what I needed
(pgbouncer is single threaded making firewally type features quite
easy to implement in an ad hoc fashion).

merlin


From: Filip Rembiałkowski <plk(dot)zuber(at)gmail(dot)com>
To: Heiko Wundram <modelnine(at)modelnine(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-29 22:44:47
Message-ID: CAP_rwwkP4Oq78u+0NuLdm2KAeOwg_1Pk4od_+7udB1_hz2j1QQ@mail.gmail.com
Lists: pgsql-general

On 29 November 2011 23:18, Heiko Wundram
<modelnine(at)modelnine(dot)org> wrote:

> Okay, that's too bad that there's no standard way for this. The underlying
> OS is Linux (Gentoo, to be exact), and I'd already thought about setting up
> some form of iptables firewalling, but there's no real framework for this
> (i.e., "count" the number of connected TCP-sockets that originate from a
> single client) in iptables, only for connection throttling from the same
> source (which won't cut it, as there are "spikes" in connection setup where
> many connections are created almost at once, meaning that hashlimit or
> recent and the likes are simply not suited to the task at hand. I just
> need/want to give a "hard" upper limit on the number of simultaneous
> connections from a single client as an Anti-DoS-measure - the clients aren't
> hostile, but their programming is broken...).
>

did you look at connlimit?
http://www.netfilter.org/projects/patch-o-matic/pom-external.html#pom-external-connlimit
AFAIK, it applies only to ESTABLISHED state, so maybe it suits you.
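For example (the port and the cap of 20 below are just illustrative):

```shell
# Reject a new connection to port 5432 once the source IP already has
# 20 established ones; numbers here are examples, not recommendations.
iptables -A INPUT -p tcp --syn --dport 5432 \
    -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
```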

I'm not sure how you want to allow "many connections being created
almost at once" and "limit the number of connections from the same IP"
at the same time?

anyway, we are going offtopic here...

regards
Filip


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Filip Rembiałkowski <plk(dot)zuber(at)gmail(dot)com>
Cc: Heiko Wundram <modelnine(at)modelnine(dot)org>, pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-29 22:49:55
Message-ID: 9469.1322606995@sss.pgh.pa.us
Lists: pgsql-general

Filip Rembiałkowski <plk(dot)zuber(at)gmail(dot)com> writes:
> On 29 November 2011 23:18, Heiko Wundram
> <modelnine(at)modelnine(dot)org> wrote:
>> Okay, that's too bad that there's no standard way for this.

> did you look at connlimit?
> http://www.netfilter.org/projects/patch-o-matic/pom-external.html#pom-external-connlimit

Another way that we've sometimes recommended people handle custom login
restrictions is
(1) use PAM for authentication
(2) find or write a PAM plugin that makes the kind of check you want

I think that there may well be a plugin out there already that does
this, or something close enough; but you'll have to do your own
research...

regards, tom lane


From: Tomas Vondra <tv(at)fuzzy(dot)cz>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-29 23:37:20
Message-ID: 4ED56CB0.4000509@fuzzy.cz
Lists: pgsql-general

On 29.11.2011 14:49, Heiko Wundram wrote:
> Hello!
>
> Sorry for that subscribe post I've just sent, that was bad reading on my
> part (for the subscribe info on the homepage).
>
> Anyway, the title says it all: is there any possibility to limit the
> number of connections that a client can have concurrently with a
> PostgreSQL-Server with "on-board" means (where I can't influence which
> user/database the clients use, rather, the clients mostly all use the
> same user/database, and I want to make sure that a single client which
> runs amok doesn't kill connectivity for other clients)? I could surely
> implement this with a proxy sitting in front of the server, but I'd
> rather implement this with PostgreSQL directly.
>
> I'm using (and need to stick with) PostgreSQL 8.3 because of the
> frontend software in question.
>
> Thanks for any hints!

Hi,

maybe you could use pgbouncer - it won't let you limit connections by
source IP, but maybe you can group the clients by company or something.

For example, like this:

[databases]
conn_a = host=127.0.0.1 port=5432 user=user_a password='a' dbname=db_a pool_size=20
conn_b = host=127.0.0.1 port=5432 user=user_a password='a' dbname=db_a pool_size=10

The users will then connect just like today, but they'll connect to
pgbouncer using the dbnames conn_a and conn_b. Those using conn_a will
be able to use 20 connections, those using conn_b will be able to use
10 connections.

Each customer will get different credentials and their own db name (in
pgbouncer).
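A customer would then point their client at pgbouncer's listening port
(6432 is its usual default) and use the alias as the database name - the
hostname below is made up:

```shell
# Connect through pgbouncer using the conn_a alias instead of the real
# database name; pgbouncer maps it to db_a with pool_size=20.
psql -h pgbouncer.example.com -p 6432 -U user_a -d conn_a
```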

Tomas


From: Heiko Wundram <modelnine(at)modelnine(dot)org>
To: Filip Rembiałkowski <plk(dot)zuber(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-30 08:20:17
Message-ID: 4ED5E741.4080100@modelnine.org
Lists: pgsql-general

On 29.11.2011 23:44, Filip Rembiałkowski wrote:
> did you look at connlimit?
> http://www.netfilter.org/projects/patch-o-matic/pom-external.html#pom-external-connlimit
> AFAIK, it applies only to ESTABLISHED state, so maybe it suits you.

No, I didn't, and THANKS! That's exactly the hint I needed. I tried to
use the hashlimit and/or recent matches with high burst rates and low
limits, but that didn't work, and that's what I was hinting at.

> I'm not sure how do you want to allow "many connections being created
> almost at once" and "limit number of connections from same IP" at the
> same time?

The intention is to limit the total number of connections per client to
around 20; the behaviour of the clients is such that they create 10-15
connections in a very short burst (due to threaded accesses to the
database), and (should) disconnect all of these within an interval of 10
minutes. When a client runs amok (which has happened twice this week),
the batch of connections is not disconnected, and the single client
gradually eats up all connections to the database, thus hindering other
clients from functioning. But: using connlimit should do the trick. I'll
try it out immediately.

Thanks again!

--
--- Heiko.


From: Heiko Wundram <modelnine(at)modelnine(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-30 08:23:25
Message-ID: 4ED5E7FD.5000305@modelnine.org
Lists: pgsql-general

On 29.11.2011 23:49, Tom Lane wrote:
> Another way that we've sometimes recommended people handle custom login
> restrictions is
> (1) use PAM for authentication
> (2) find or write a PAM plugin that makes the kind of check you want

Very interesting - I'll first try the connlimit approach Filip hinted
at, but if PostgreSQL does normal session setup/teardown through PAM
(I've never used PAM authentication with PostgreSQL before), this should
be a workable solution in case iptables doesn't turn out to handle
disconnected sessions quickly enough.

Thanks for pointing me at PAM!

--
--- Heiko.


From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Heiko Wundram <modelnine(at)modelnine(dot)org>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-30 08:26:50
Message-ID: CABUevEw4_rJxvD-K6a5YKmxQbS9V6Qii2tfrafjuQdQQ+LuqOw@mail.gmail.com
Lists: pgsql-general

On Wed, Nov 30, 2011 at 09:23, Heiko Wundram <modelnine(at)modelnine(dot)org> wrote:
> On 29.11.2011 23:49, Tom Lane wrote:
>>
>> Another way that we've sometimes recommended people handle custom login
>> restrictions is
>> (1) use PAM for authentication
>> (2) find or write a PAM plugin that makes the kind of check you want
>
> Very interesting - I'll first try the connlimit approach hinted at by Filip,
> but if PostgreSQL does normal session setup/teardown using PAM (I've never
> used PAM authentication for PostgreSQL before), this should be a workable
> solution in case using iptables doesn't turn out to properly handle
> disconnected sessions quickly enough.

I don't believe we do teardown using PAM, just session start. So you'd
have to have your PAM module check the current state of postgresql
every time - not keep some internal state.

FWIW, another option for writing your authentication module is to
write a simple RADIUS server running on the same box. It's pretty
trivial to do, especially in a high level language. The end result is
the same as if you use PAM - you get custom authentication that can
apply specific checks.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


From: Heiko Wundram <modelnine(at)modelnine(dot)org>
To: Magnus Hagander <magnus(at)hagander(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-11-30 08:36:50
Message-ID: 4ED5EB22.4030109@modelnine.org
Lists: pgsql-general

On 30.11.2011 09:26, Magnus Hagander wrote:
> I don't believe we do teardown using PAM, just session start. So you'd
> have to have your PAM module check the current state of postgresql
> every time - not keep some internal state.

Okay, that's too bad - if connlimit doesn't do the trick, I'll try and
see how PAM is used, and possibly patch the respective session teardown
functionality into the server (which shouldn't be too hard, I guess).

> FWIW, another option for writing your authentication module is to
> write a simple RADIUS server running on the same box. It's pretty
> trivial to do, especially in a high level language. The end result is
> the same as if you use PAM - you get custom authentication that can
> apply specific checks.

I'm much more used to writing PAM modules (which I've already done for
authentication used by an FTP-server), so that'd be my first route to
go, but keeping this in mind is handy, too. Thanks!

--
--- Heiko.


From: Tomas Vondra <tv(at)fuzzy(dot)cz>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-12-01 00:03:13
Message-ID: 4ED6C441.8020306@fuzzy.cz
Lists: pgsql-general

On 29.11.2011 23:38, Merlin Moncure wrote:
> On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine(at)modelnine(dot)org> wrote:
>> Hello!
>>
>> Sorry for that subscribe post I've just sent, that was bad reading on my
>> part (for the subscribe info on the homepage).
>>
>> Anyway, the title says it all: is there any possibility to limit the number
>> of connections that a client can have concurrently with a PostgreSQL-Server
>> with "on-board" means (where I can't influence which user/database the
>> clients use, rather, the clients mostly all use the same user/database, and
>> I want to make sure that a single client which runs amok doesn't kill
>> connectivity for other clients)? I could surely implement this with a proxy
>> sitting in front of the server, but I'd rather implement this with
>> PostgreSQL directly.
>>
>> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
>> software in question.
>>
>> Thanks for any hints!
>
> I think the (hypothetical) general solution for these types of
> problems is to have logon triggers. It's one of the (very) few things
> I envy from SQL Server -- see here:
> http://msdn.microsoft.com/en-us/library/bb326598.aspx.

I'd like to have logon triggers too, but I don't think that's the right
solution for this problem. For example the logon triggers would be
called after forking the backend, which means overhead.

The connection limits should be checked when creating the connection
(validation username/password etc.), before creating the backend.

Anyway, I do have an idea how this could be done using a shared library
(so it has the same disadvantages as logon triggers). Hopefully I'll
have time to implement a PoC of this over the weekend.

> Barring the above, if you can trust the client to call a function upon
> connection I'd just do that and handle the error on the client with a
> connection drop. Barring *that*, I'd be putting my clients in front of
> pgbouncer with some patches to the same to get what I needed
> (pgbouncer is single threaded making firewally type features quite
> easy to implement in an ad hoc fashion).

The connection pooler is somehow easier and more complex at the same
time.

You can use connect_query to execute whatever you want after connecting
to the database (instead of trusting the user to do it), but why would
you? The database will see the IP of pgbouncer, not the IP of the
original client, so executing such a query is pointless.

You could modify pgbouncer, and it should be quite simple, but you can
already give each customer a different username/password (in pgbouncer)
and a different database, and set pool_size for each of the connections.
It won't count connections by IP, but the users won't 'steal'
connections from each other.

Tomas


From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-12-01 12:47:38
Message-ID: CABUevEyj42z2ZPYgbP6uhxyARHV=5Nk_KEimXs01WzNiS3bb-w@mail.gmail.com
Lists: pgsql-general

On Thu, Dec 1, 2011 at 01:03, Tomas Vondra <tv(at)fuzzy(dot)cz> wrote:
> On 29.11.2011 23:38, Merlin Moncure wrote:
>> On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine(at)modelnine(dot)org> wrote:
>>> Hello!
>>>
>>> Sorry for that subscribe post I've just sent, that was bad reading on my
>>> part (for the subscribe info on the homepage).
>>>
>>> Anyway, the title says it all: is there any possibility to limit the number
>>> of connections that a client can have concurrently with a PostgreSQL-Server
>>> with "on-board" means (where I can't influence which user/database the
>>> clients use, rather, the clients mostly all use the same user/database, and
>>> I want to make sure that a single client which runs amok doesn't kill
>>> connectivity for other clients)? I could surely implement this with a proxy
>>> sitting in front of the server, but I'd rather implement this with
>>> PostgreSQL directly.
>>>
>>> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
>>> software in question.
>>>
>>> Thanks for any hints!
>>
>> I think the (hypothetical) general solution for these types of
>> problems is to have logon triggers.  It's one of the (very) few things
>> I envy from SQL Server -- see  here:
>> http://msdn.microsoft.com/en-us/library/bb326598.aspx.
>
> I'd like to have logon triggers too, but I don't think that's the right
> solution for this problem. For example the logon triggers would be
> called after forking the backend, which means overhead.
>
> The connection limits should be checked when creating the connection
> (validation username/password etc.), before creating the backend.
>
> Anyway, I do have an idea how this could be done using a shared library
> (so it has the same disadvantages as logon triggers). Hopefully I'll
> have time to implement a PoC of this over the weekend.

We have an authentication hook that could probably be used to
implement this. See the auth_delay module for an example that uses it.
It does require the check to be written in C, of course, but for a use
case like this that is probably not unreasonable..
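Roughly, a module using that hook - modeled on auth_delay - would look
like the sketch below; too_many_from() is a placeholder for whatever
per-IP check you actually implement (scanning shared memory, a counter
table, etc.), not an existing function:

```c
#include "postgres.h"
#include "fmgr.h"
#include "libpq/auth.h"

PG_MODULE_MAGIC;

void _PG_init(void);

static ClientAuthentication_hook_type prev_client_auth_hook = NULL;

/* Placeholder: implement your actual per-IP connection counting here. */
static bool
too_many_from(const char *host)
{
    return false;
}

/* Called for every connection attempt, after built-in authentication. */
static void
limit_client_auth(Port *port, int status)
{
    if (prev_client_auth_hook)
        prev_client_auth_hook(port, status);

    if (status == STATUS_OK && too_many_from(port->remote_host))
        ereport(FATAL,
                (errcode(ERRCODE_TOO_MANY_CONNECTIONS),
                 errmsg("too many connections from %s", port->remote_host)));
}

void
_PG_init(void)
{
    prev_client_auth_hook = ClientAuthentication_hook;
    ClientAuthentication_hook = limit_client_auth;
}
```

(Load it via shared_preload_libraries; note the hook runs in the already
forked backend, so the rejection happens after backend startup.)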

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


From: "Tomas Vondra" <tv(at)fuzzy(dot)cz>
To: "Magnus Hagander" <magnus(at)hagander(dot)net>
Cc: "Tomas Vondra" <tv(at)fuzzy(dot)cz>, pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-12-01 13:45:32
Message-ID: c36d61aab01cf53876ca8ed6a7ab6eaa.squirrel@sq.gransy.com
Lists: pgsql-general

On 1 December 2011, 13:47, Magnus Hagander wrote:
> On Thu, Dec 1, 2011 at 01:03, Tomas Vondra <tv(at)fuzzy(dot)cz> wrote:
>> Anyway, I do have an idea how this could be done using a shared library
>> (so it has the same disadvantages as logon triggers). Hopefully I'll
>> have time to implement a PoC of this over the weekend.
>
> We have an authentication hook that could probably be used to
> implement this. See the authdelay module for an example that uses it.
> It does require it to be written in C, of course, but for a usecase
> like this that is probably not unreasonable..

Hm, I was thinking about that, but my original idea was to keep my own
counters and update them at backend start/end (using
local_preload_libraries). The auth hook handles just the logon event, not
logout, so I would be unable to update the counters when the user
disconnects.

But now I think it might actually work quite well with pg_stat_activity
instead of custom counters. And IIRC it's called before a separate backend
is forked, so it avoids the overhead of forking a backend and then finding
out the user/IP already uses too many connections.
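For instance, the check could boil down to a per-IP count against
pg_stat_activity - the cap of 20 is just the example from this thread:

```shell
# List client IPs that are at or over the example cap of 20
# concurrent connections, as the server currently sees them.
psql -At -c "SELECT client_addr, count(*)
               FROM pg_stat_activity
              GROUP BY client_addr
             HAVING count(*) >= 20;"
```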

Tomas


From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-12-01 17:32:02
Message-ID: CAHyXU0wW+mpf33P9QXmSUn5Ej+xPn+boeSFZch3XmfOCnf5OBA@mail.gmail.com
Lists: pgsql-general

On Wed, Nov 30, 2011 at 6:03 PM, Tomas Vondra <tv(at)fuzzy(dot)cz> wrote:
> On 29.11.2011 23:38, Merlin Moncure wrote:
>> On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine(at)modelnine(dot)org> wrote:
>>> Hello!
>>>
>>> Sorry for that subscribe post I've just sent, that was bad reading on my
>>> part (for the subscribe info on the homepage).
>>>
>>> Anyway, the title says it all: is there any possibility to limit the number
>>> of connections that a client can have concurrently with a PostgreSQL-Server
>>> with "on-board" means (where I can't influence which user/database the
>>> clients use, rather, the clients mostly all use the same user/database, and
>>> I want to make sure that a single client which runs amok doesn't kill
>>> connectivity for other clients)? I could surely implement this with a proxy
>>> sitting in front of the server, but I'd rather implement this with
>>> PostgreSQL directly.
>>>
>>> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
>>> software in question.
>>>
>>> Thanks for any hints!
>>
>> I think the (hypothetical) general solution for these types of
>> problems is to have logon triggers.  It's one of the (very) few things
>> I envy from SQL Server -- see  here:
>> http://msdn.microsoft.com/en-us/library/bb326598.aspx.
>
> I'd like to have logon triggers too, but I don't think that's the right
> solution for this problem. For example the logon triggers would be
> called after forking the backend, which means overhead.
>
> The connection limits should be checked when creating the connection
> (validation username/password etc.), before creating the backend.

I disagree. I'm not convinced the overhead is really worth worrying
about, and having a trigger under the user's control allows the feature
to cover a much broader array of scenarios. If the overhead *was* a
big deal, then you should be using a connection pooler anyway.

>> Barring the above, if you can trust the client to call a function upon
>> connection I'd just do that and handle the error on the client with a
>> connection drop. Barring *that*, I'd be putting my clients in front of
>> pgbouncer with some patches to the same to get what I needed
>> (pgbouncer is single threaded making firewally type features quite
>> easy to implement in an ad hoc fashion).
>
> The connection pooler is somehow easier and more complex at the same
> time.
>
> You can use connect_query to execute whatever you want after connecting
> to the database (instead of trusting the user to do it), but why would
> you? The database will see the IP of pgbouncer, not the IP of the
> original client, so executing such a query is pointless.
>
> You could modify pgbouncer, and it should be quite simple, but you can
> already give each customer a different username/password (in pgbouncer)
> and a different database, and set pool_size for each of the connections.
> It won't count connections by IP, but the users won't 'steal'
> connections from each other.

Yeah, pgbouncer is an ideal platform for coding firewall-type features,
query whitelists, etc., while still having SQL access to the database if
you need it. You also have access to the client's real IP, and can pass
that down to some code that would presumably be shared with your logon
trigger.

merlin