Re: How to handle missing pgsql.connect_timeout

From: Janning Vygen <vygen(at)gmx(dot)de>
To: pgsql-php(at)postgresql(dot)org
Subject: How to handle missing pgsql.connect_timeout
Date: 2005-02-01 08:38:17
Message-ID: 200502010938.17853.vygen@gmx.de
Lists: pgsql-php

Hi,

I use persistent connections and limit them, for example to
pgsql.max_persistent = 32

If all connections are in use, a new HTTP call to my script's pg_pconnect
ends up waiting for a free slot. That's fine, because usually a slot
becomes free within a second.

But if my server is heavily overloaded and waiting for a free slot would
take more than 5 seconds, it makes no sense to wait: the user will hit
stop anyway without ever seeing a proper error message.

It would be nice to have a timeout value after which pg_pconnect returns
with an error, so I could show a friendly error page telling the user to
try again in a few minutes.

I found the PHP directive mysql.connect_timeout, which does exactly what
I want, but there is no such directive for pgsql.
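
The closest thing I found is libpq's connect_timeout parameter, which can
be put in the connection string that pg_connect/pg_pconnect pass through
to libpq. But as far as I understand, it only bounds the handshake with
the server, not the PHP-side waiting for a free persistent slot. A
minimal sketch (dbname/user are placeholders):

  <?php
  // connect_timeout is a libpq connection-string parameter (in seconds);
  // it limits how long establishing the connection itself may take.
  $conn = @pg_connect("host=localhost dbname=mydb user=web connect_timeout=3");
  if ($conn === false) {
      echo "Could not connect to the database within 3 seconds.";
      exit;
  }
  ?>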

I also tried pgpool, searching it for a connection timeout directive, but
I didn't find one.

It would be so nice to have a timeout of, say, 3 seconds and then display
an error message to the user instead of leaving him waiting and waiting.

Does anybody have any hints on how to solve this problem?

kind regards
janning


From: Christopher Murtagh <christopher(dot)murtagh(at)mcgill(dot)ca>
To: Janning Vygen <vygen(at)gmx(dot)de>
Cc: pgsql-php(at)postgresql(dot)org
Subject: Re: How to handle missing pgsql.connect_timeout
Date: 2005-02-01 13:18:36
Message-ID: 1107263917.11348.10.camel@mafalda.corporateunderground.org
Lists: pgsql-php

On Tue, 2005-02-01 at 09:38 +0100, Janning Vygen wrote:
> Does anybody have any hints on how to solve this problem?

This is a bit of a non-solution, but it's what worked for us: we simply
stopped using pconnect() altogether. That seems to have solved the
connection timeouts and delays. Our site gets 500k page hits per day
without any problems.

Cheers,

Chris

--
Christopher Murtagh
Enterprise Systems Administrator
ISR / Web Communications Group
McGill University
Montreal, Quebec
Canada

Tel.: (514) 398-3122
Fax: (514) 398-2017


From: Janning Vygen <vygen(at)planwerk6(dot)de>
To: Christopher Murtagh <christopher(dot)murtagh(at)mcgill(dot)ca>
Cc: pgsql-php(at)postgresql(dot)org
Subject: Re: How to handle missing pgsql.connect_timeout
Date: 2005-02-01 16:40:48
Message-ID: 200502011740.48639.vygen@planwerk6.de
Lists: pgsql-php

On Tuesday, 1 February 2005 14:18, you wrote:
> On Tue, 2005-02-01 at 09:38 +0100, Janning Vygen wrote:
> > Does anybody have any hints on how to solve this problem?
>
> This is a bit of a non-solution, but it's what worked for us: we simply
> stopped using pconnect() altogether. That seems to have solved the
> connection timeouts and delays. Our site gets 500k page hits per day
> without any problems.

Hm, I don't quite get the point. Do you mean you don't use persistent
connections at all, just plain pg_connect instead of pg_pconnect?

kind regards,
janning


From: Christopher Murtagh <christopher(dot)murtagh(at)mcgill(dot)ca>
To: Janning Vygen <vygen(at)planwerk6(dot)de>
Cc: pgsql-php(at)postgresql(dot)org
Subject: Re: How to handle missing pgsql.connect_timeout
Date: 2005-02-01 17:10:32
Message-ID: 1107277832.12116.1.camel@localhost.localdomain
Lists: pgsql-php

On Tue, 2005-02-01 at 17:40 +0100, Janning Vygen wrote:
> On Tuesday, 1 February 2005 14:18, you wrote:
> > On Tue, 2005-02-01 at 09:38 +0100, Janning Vygen wrote:
> > > Does anybody have any hints on how to solve this problem?
> >
> > This is a bit of a non-solution, but it's what worked for us: we simply
> > stopped using pconnect() altogether. That seems to have solved the
> > connection timeouts and delays. Our site gets 500k page hits per day
> > without any problems.
>
> Hm, I don't quite get the point. Do you mean you don't use persistent
> connections at all, just plain pg_connect instead of pg_pconnect?

Exactly. pg_pconnect() is (or was, the last time I tried it) broken in
PHP. We were constantly getting connections that wouldn't close and
sometimes wouldn't time out. Ever since we gave up on them, we've had no
problems.

Cheers,

Chris

--
Christopher Murtagh
Enterprise Systems Administrator
ISR / Web Communications Group
McGill University
Montreal, Quebec
Canada

Tel.: (514) 398-3122
Fax: (514) 398-2017


From: Janning Vygen <vygen(at)planwerk6(dot)de>
To: Christopher Murtagh <christopher(dot)murtagh(at)mcgill(dot)ca>
Cc: pgsql-php(at)postgresql(dot)org
Subject: Re: How to handle missing pgsql.connect_timeout
Date: 2005-02-02 08:49:20
Message-ID: 200502020949.21251.vygen@planwerk6.de
Lists: pgsql-php

On Tuesday, 1 February 2005 18:10, Christopher Murtagh wrote:
> On Tue, 2005-02-01 at 17:40 +0100, Janning Vygen wrote:
> > On Tuesday, 1 February 2005 14:18, you wrote:
> > > On Tue, 2005-02-01 at 09:38 +0100, Janning Vygen wrote:
> > > > Does anybody have any hints on how to solve this problem?
> > >
> > > This is a bit of a non-solution, but it's what worked for us: we
> > > simply stopped using pconnect() altogether. That seems to have solved
> > > the connection timeouts and delays. Our site gets 500k page hits per
> > > day without any problems.
> >
> > Hm, I don't quite get the point. Do you mean you don't use persistent
> > connections at all, just plain pg_connect instead of pg_pconnect?
>
> Exactly. pg_pconnect() is (or was, the last time I tried it) broken in
> PHP.

It's not broken anymore! And it speeds up your website.

> We were constantly getting connections that wouldn't close and
> sometimes wouldn't time out. Ever since we gave up on them, we've had no
> problems.

That's not exactly my problem. I just want to have, say,

20 MaxClients for HTTP connections
but only 5 possible DB connections

If HTTP process no. 6 tries to connect, I want pg_pconnect (or pg_connect;
that doesn't matter) to wait at most 1 second and then return an error.

At the moment it waits "forever" (until max_execution_time, I guess) and
you have no chance to show an error message like: the DB is overloaded,
please try again.
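
What I'd like to be able to write is something like this (just a sketch
of the intended behaviour; it only helps if the connect call actually
returns false instead of blocking, and dbname/user are placeholders):

  <?php
  // Fail fast and show a friendly page instead of blocking forever.
  $conn = @pg_pconnect("host=localhost dbname=mydb user=web");
  if ($conn === false) {
      header("HTTP/1.0 503 Service Unavailable");
      echo "The database is overloaded, please try again in a few minutes.";
      exit;
  }
  ?>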

The pgsql module in PHP doesn't have a directive for this like MySQL's
mysql.connect_timeout.

I tried pgpool, but it doesn't have a timeout value for establishing a
connection either.

That's my problem. My DB is overloaded once a week at a particular peak
time, and I can't afford more hardware. I just want a simple, friendly
error message.

kind regards
janning


From: Christopher Murtagh <christopher(dot)murtagh(at)mcgill(dot)ca>
To: Janning Vygen <vygen(at)planwerk6(dot)de>
Cc: pgsql-php(at)postgresql(dot)org
Subject: Re: How to handle missing pgsql.connect_timeout
Date: 2005-02-02 17:01:03
Message-ID: 1107363663.15805.20.camel@localhost.localdomain
Lists: pgsql-php

On Wed, 2005-02-02 at 09:49 +0100, Janning Vygen wrote:
> It's not broken anymore! And it speeds up your website.

Well, it might not be broken anymore, but I haven't seen any evidence of
a speed increase (quite the opposite, in fact). What we did instead was
write a very small abstraction layer so that any page request uses only
one database connection. We're running on dual 2.8 GHz Xeons for our web
server and db machine and taking over 500,000 hits per day. Some of our
pages are very data intensive (http://www.mcgill.ca/calendar/ for
example).
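
The layer boils down to something like this (a simplified sketch of the
idea, not our actual code; connection parameters are placeholders):

  <?php
  // One connection per request: every caller goes through db_conn(),
  // so a page opens at most one connection; PHP closes it automatically
  // at the end of the request.
  function db_conn()
  {
      static $conn = null;
      if ($conn === null) {
          $conn = pg_connect("host=localhost dbname=mydb user=web");
      }
      return $conn;
  }

  // Every query on the page shares the same connection.
  $res = pg_query(db_conn(), "SELECT now()");
  ?>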

> At the moment it waits "forever" (until max_execution_time, I guess) and
> you have no chance to show an error message like: the DB is overloaded,
> please try again.

Maybe pg_pconnect() isn't all it's cracked up to be? Anyone else here
have good experiences with it?

Cheers,

Chris

--
Christopher Murtagh
Enterprise Systems Administrator
ISR / Web Communications Group
McGill University
Montreal, Quebec
Canada

Tel.: (514) 398-3122
Fax: (514) 398-2017


From: Keary Suska <hierophant(at)pcisys(dot)net>
To: Postgres-PHP <pgsql-php(at)postgresql(dot)org>
Subject: Re: How to handle missing pgsql.connect_timeout
Date: 2005-02-03 18:41:00
Message-ID: BE27BE4C.19E41%hierophant@pcisys.net
Lists: pgsql-php

on 2/2/05 1:49 AM, vygen(at)planwerk6(dot)de purportedly said:

> It's not broken anymore! And it speeds up your website.

This is true, insofar as you avoid the connection overhead, but in my
experience if your DB is local (i.e. on the same machine as your web site),
the difference amounts to fractions of a second. There are also many
pitfalls to using persistent connections, including greater overhead since
you are far more likely to have idle connections, unless every single HTTP
call includes a database call--and I mean *every* call: images, css,
scripts, etc.

You may also want to set pgsql.auto_reset_persistent to handle broken
persistent links.
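
E.g. in php.ini (if I recall correctly, the directive adds a small
per-request check of the link):

  ; Detect broken persistent links on pg_pconnect() and
  ; transparently reopen them.
  pgsql.auto_reset_persistent = On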

>> We were constantly getting connections that wouldn't close and
>> sometimes wouldn't time out. Ever since we gave up on them, we've had no
>> problems.
>
> That's not exactly my problem. I just want to have, say,
>
> 20 MaxClients for HTTP connections
> but only 5 possible DB connections

In this case, you probably don't want persistent connections at all. To
use persistent connections properly under moderate loads, your max
clients and max db connections should be the same, or you will definitely
have idle (i.e. unused) db connections.

> If HTTP process no. 6 tries to connect, I want pg_pconnect (or
> pg_connect; that doesn't matter) to wait at most 1 second and then
> return an error.

AFAIK, if your max_connections setting in the PG server is 5, the 6th
connection should return with an immediate error. Perhaps this has
changed since I last experienced it--I'll leave confirmation to those who
know PG better.

Best regards,

Keary Suska
Esoteritech, Inc.
"Demystifying technology for your home or business"


From: Marco Colombo <pgsql(at)esiway(dot)net>
To: Christopher Murtagh <christopher(dot)murtagh(at)mcgill(dot)ca>
Cc: Janning Vygen <vygen(at)planwerk6(dot)de>, pgsql-php(at)postgresql(dot)org
Subject: Re: How to handle missing pgsql.connect_timeout
Date: 2005-02-08 11:25:34
Message-ID: Pine.LNX.4.61.0502081224510.5203@Megathlon.ESI
Lists: pgsql-php

On Wed, 2 Feb 2005, Christopher Murtagh wrote:

> On Wed, 2005-02-02 at 09:49 +0100, Janning Vygen wrote:
>> It's not broken anymore! And it speeds up your website.
>
> Well, it might not be broken anymore, but I haven't seen any evidence of
> a speed increase (quite the opposite, in fact). What we did instead was
> write a very small abstraction layer so that any page request uses only
> one database connection. We're running on dual 2.8 GHz Xeons for our web
> server and db machine and taking over 500,000 hits per day. Some of our
> pages are very data intensive (http://www.mcgill.ca/calendar/ for
> example).
>
>> At the moment it waits "forever" (until max_execution_time, I guess) and
>> you have no chance to show an error message like: the DB is overloaded,
>> please try again.
>
> Maybe pg_pconnect() isn't all it's cracked up to be? Anyone else here
> have good experiences with it?

You have to evaluate your setup carefully. I think there's little hope
of planning this in advance; you'll have to monitor your system at
runtime.

I've been using persistent connections a lot. First, think of what kind
of problem you're trying to solve: reducing the overhead of the one
connection made to the DB per page. Now consider the following:

- PostgreSQL forks a backend process for each connection: this is almost
universally considered a huge overhead, but on modern operating systems
it no longer is.

- network latency: if the PG server runs on a different host, it may be
an issue. If it runs on the same host and is accessed via Unix sockets,
latency is negligible (see the sketch after this list).

- in order to implement connect semantics, PHP has to 'reset' the session,
which is a kind of overhead per se. Consider the same apache backend
serving two different PG users! The connection may be the same, but
all local and remote state has to be removed.

- if your pages use multiple db connections, you'll get many PG backends
per apache backend.

- each PG backend uses up some resources (RAM mostly): there are a number
of 'knobs' you can turn to control that, but of course, unless your RAM
is infinite, you may have to _reduce_ them in order to increase the
number of concurrent backends.
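
(To make the Unix-socket point concrete: omitting host= in the connection
string makes libpq use the local Unix-domain socket, assuming web and DB
server share the machine and the default socket directory; dbname/user
are placeholders.)

  <?php
  // No host= parameter: libpq connects over the local Unix-domain
  // socket instead of TCP, so network latency disappears entirely.
  $conn = pg_connect("dbname=mydb user=web");
  ?>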

To put it very simply (maybe too simply), it's a CPU vs RAM tradeoff.
Using persistent connections turns a little (very OS- and setup-dependent)
CPU overhead into a not-so-little RAM overhead on the PG server side.
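
To make those 'knobs' concrete, these are the kind of postgresql.conf
settings involved (example values only, not recommendations; 8.0-era
names and units):

  # postgresql.conf -- example values only, not recommendations.
  max_connections = 100    # more backends = more total RAM on the server
  shared_buffers = 10000   # shared by all backends, in 8 kB pages
  work_mem = 4096          # per sort/hash, per backend, in kB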

I wasn't able to find a definitive answer. I have many concurrent
accesses (say 200-300), with simple (read and write) queries, but on a
large data set. I have two apache frontends and one PG server (same LAN).
I wasn't able to choose between pconnect and connect. At times, some PHP
programmer kills the DB, and I have to teach him not to perform 300
different queries _per page_ in a PHP for loop and to use table joins
instead. I can tell you that when that happens, you'll forget all about
pconnect vs connect and thank the PG developers for writing such a robust
application (I've seen load averages of up to 100, and PG was still doing
its job, even if very slowly). :-)

.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ Colombo(at)ESI(dot)it