Re: Restricting Postgres

From: Martin Foster <martin(at)ethereal-realms(dot)org>
To: John A Meinel <john(at)johnmeinel(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Restricting Postgres
Date: 2004-11-03 23:35:52
Message-ID: 41896B58.80209@ethereal-realms.org
Lists: pgsql-general pgsql-performance

John A Meinel wrote:

> Martin Foster wrote:
>
>> Simon Riggs wrote:
>>
>>> On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
>
> [...]
>
>> I've seen this behavior before when restarting the web server during
>> heavy loads. Apache goes from zero connections to a solid 120,
>> causing PostgreSQL to spawn that many children in a short order of
>> time just to keep up with the demand.
>>
>
> But wouldn't limiting the number of concurrent connections do this at
> the source? If you tell it "You can have at most 20 connections", you
> would never have postgres spawn 120 children.
> I'm not sure what Apache does if it can't get a DB connection, but it
> seems exactly like what you want.
>
> Now, if you expected to have 50 clients that all like to just sit on
> open connections, you could leave the number of concurrent connections
> high.
>
> But if your only connections are from the webserver, where all of them
> are designed to be short connections, then leave the max low.
>
> The other possibility is having the webserver use connection pooling, so
> it uses a few long lived connections. But even then, you could limit it
> to something like 10-20, not 120.
>
> John
> =:->
>

I have a dual-processor system that can support over 150 concurrent
connections handling normal traffic and load. Now suppose I set up
Apache to spawn all of its children instantly: as that happens, the
PostgreSQL server will also receive 150 connection attempts at once.

This will spawn 150 children in short order, and while that is taking
place clients can connect and start requesting information, never
allowing the machine to settle down to normal traffic. Once initiated,
that spike can cripple the machine, or even the webserver if a
deadlocked transaction is introduced.
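
For what it's worth, the all-at-once spawning is tunable on the Apache
side. A rough sketch of the relevant prefork directives, with
illustrative values rather than my actual settings:

    # httpd.conf (Apache 1.3 / 2.0 prefork) -- values are illustrative
    StartServers          5     # children created at startup, not 120 at once
    MinSpareServers       5     # grow the pool gradually as spares run out
    MaxSpareServers      20     # reap idle children above this
    MaxClients          120     # hard ceiling on concurrent children
    MaxRequestsPerChild  1000   # recycle children to bound leaks

With StartServers kept low, a restart ramps up gradually instead of
slamming PostgreSQL with the full complement of connections in one
burst.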

On the webserver side, a slowdown in the database means that many more
connection attempts pile up from the clients. As they keep clicking and
hitting reload over and over to get a page to load, the server starts
to buckle, hitting unbelievably high load averages.

When the above happened once, I lost the ability to type on a console
because of a 60+ (OpenBSD) load average on a single-processor system.
That is why Apache now drops a 503 Service Unavailable when loads get
too high.
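
The guard itself is nothing fancy. A minimal sketch of the idea,
assuming getloadavg(3) is available; the threshold and the CGI-style
status lines are illustrative, not my actual code:

    /* Refuse work when the 1-minute load average is too high. */
    #include <stdio.h>
    #include <stdlib.h>             /* getloadavg(3) on BSD and glibc */

    int main(void)
    {
        double load[1];

        if (getloadavg(load, 1) == -1) {
            perror("getloadavg");
            return 1;
        }
        /* In the web tier this decision becomes a 503 response
         * rather than a printed status line. */
        if (load[0] > 30.0)
            printf("Status: 503 Service Unavailable\r\n\r\n");
        else
            printf("Status: 200 OK\r\n\r\n");
        return 0;
    }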

It's that spike I worry about, and it can happen for whatever reason.
It could just as easily be triggered by massive concurrent requests for
an expensive query, made in DDoS fashion. That may not affect the
webserver at all, at least not immediately, but the same effect can
come into play.

Limiting connections helps, but it's not a silver bullet, and because
of that initial spike it limits your ability to support more
connections. The penalty for forking a new child is hardly unexpected;
even Apache will show the same effect when restarted at a high-traffic
time.
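
For reference, the cap itself is a single knob in postgresql.conf; the
numbers below are illustrative, not my production settings:

    # postgresql.conf -- values are illustrative
    max_connections = 150                # hard ceiling on backend processes
    superuser_reserved_connections = 2   # keep slots free for admin access

Past that ceiling PostgreSQL refuses new connections outright, which
protects the backend but pushes the queueing problem back onto Apache.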

Martin Foster
Creator/Designer Ethereal Realms
martin(at)ethereal-realms(dot)org
