Re: slow pg_connect()

Lists: pgsql-performance
From: <firerox(at)centrum(dot)cz>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: slow pg_connect()
Date: 2008-03-24 07:40:15
Message-ID: 200803240840.8128@centrum.cz

Hi,

I'm using postgres 8.1 on a P4 2.8GHz with 2GB RAM.
(web server + database on the same server)

Please, how long does your connection to postgres take?

$starttimer=time()+microtime();

$dbconn = pg_connect("host=localhost port=5432 dbname=xxx user=xxx password=xxx")
or die("Couldn't Connect".pg_last_error());

$stoptimer = time()+microtime();
echo "Generated in ".round($stoptimer-$starttimer,4)." s";

It takes more than 0.05s :(

This function alone limits the server to at most 20 requests per second.

Thank you for any help!

Best regards.


From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: firerox(at)centrum(dot)cz
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: slow pg_connect()
Date: 2008-03-24 07:58:16
Message-ID: 47E75F18.9060206@postnewspapers.com.au
Lists: pgsql-performance

firerox(at)centrum(dot)cz wrote:
> It takes more than 0.05s :(
>
> This function alone limits the server to at most 20 requests per second.
>
If you need that sort of frequent database access, you might want to
look into:

- Doing more work in each connection and reducing the number of
connections required;
- Using multiple connections in parallel;
- Pooling connections so you don't need to create a new one for every job;
- Using a more efficient database connector and/or language;
- Dispatching requests to a persistent database access provider that's
always connected
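
The pooling idea in that list can be sketched in a few lines. The following is a minimal Python 3 illustration, not any real pooling library: `SimplePool` is a made-up name, and the `connect` argument stands in for whatever factory your driver provides (e.g. a psycopg connect call). Real poolers also handle broken connections, timeouts, and thread safety beyond this.

```python
import queue

class SimplePool:
    """Minimal connection pool: reuse open connections instead of
    paying the connection-setup cost on every request."""

    def __init__(self, connect, size=5):
        self._connect = connect      # factory callable that opens a connection
        self._idle = queue.Queue()   # connections waiting to be reused
        self._created = 0
        self._size = size

    def get(self):
        try:
            return self._idle.get_nowait()   # reuse an idle connection if any
        except queue.Empty:
            if self._created < self._size:
                self._created += 1
                return self._connect()       # pool not full: open a new one
            return self._idle.get()          # pool exhausted: wait for a return

    def put(self, conn):
        self._idle.put(conn)                 # hand the connection back
```

With a pool like this, a 0.05s connection cost is paid once per pooled connection rather than once per request.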

However, your connections are indeed taking a long time. I wrote a
trivial test using psycopg for Python and found that the following script:

#!/usr/bin/env python
import psycopg
conn = psycopg.connect("dbname=testdb")

generally took 0.035 seconds (35 ms) to run on my workstation -
including OS process creation, Python interpreter startup, database
interface loading, connection, disconnection, and process termination.

A quick timing test shows that the connection/disconnection can be
performed 100 times in 1.2 seconds:

import psycopg
import timeit
print timeit.Timer('conn = psycopg.connect("dbname=craig")',
                   'import psycopg').timeit(number=100)

... and this is still with an interpreted language. I wouldn't be too
surprised if much better again could be achieved with the C/C++ APIs,
though I don't currently feel the desire to write a test for that.

--
Craig Ringer


From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: firerox(at)centrum(dot)cz
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: slow pg_connect()
Date: 2008-03-24 08:04:23
Message-ID: 47E76087.3050901@postnewspapers.com.au
Lists: pgsql-performance

Craig Ringer wrote:
> firerox(at)centrum(dot)cz wrote:
>> It takes more than 0.05s :(
>>
>> This function alone limits the server to at most 20 requests per second.
>>
> If you need that sort of frequent database access, you might want to
> look into:
>
> - Doing more work in each connection and reducing the number of
> connections required;
> - Using multiple connections in parallel;
> - Pooling connections so you don't need to create a new one for every
> job;
> - Using a more efficient database connector and/or language;
> - Dispatching requests to a persistent database access provider that's
> always connected
>
Oh, I missed:

Use a UNIX domain socket rather than a TCP/IP local socket. Database
interfaces that support UNIX sockets (like psycopg) will normally do
this if you omit the host argument entirely.
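
The mechanics are just whether `host` appears in the connection string. A tiny helper makes it explicit; `make_dsn` is a hypothetical function written for this example, not part of psycopg or libpq:

```python
def make_dsn(dbname, host=None, port=5432):
    """Build a libpq-style connection string.

    Leaving host out makes libpq use the local UNIX-domain socket,
    which skips TCP setup entirely for a same-machine server.
    """
    parts = ["dbname=" + dbname]
    if host is not None:
        parts.append("host=" + host)
        parts.append("port=" + str(port))
    return " ".join(parts)

# TCP to localhost:  make_dsn("xxx", host="localhost")
# UNIX socket:       make_dsn("xxx")
```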

--
Craig Ringer


From: Tommy Gildseth <tommy(dot)gildseth(at)usit(dot)uio(dot)no>
To: firerox(at)centrum(dot)cz
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: slow pg_connect()
Date: 2008-03-24 10:48:20
Message-ID: 47E786F4.9040804@usit.uio.no
Lists: pgsql-performance

firerox(at)centrum(dot)cz wrote:
> Hi,
>
> I'm using postgres 8.1 on a P4 2.8GHz with 2GB RAM.
> (web server + database on the same server)
>
> Please, how long does your connection to postgres take?
>
> It takes more than 0.05s :(
>
> This function alone limits the server to at most 20 requests per second.

I tried running the script a few times, and got substantially lower
startup times than you are getting. I'm using 8.1.11 on Debian on a 2x
Xeon CPU 2.40GHz with 3GB memory, so I don't think that would account
for the difference.

Generated in 0.0046 s
Generated in 0.0036 s
Generated in 0.0038 s
Generated in 0.0037 s
Generated in 0.0038 s
Generated in 0.0037 s
Generated in 0.0047 s
Generated in 0.0052 s
Generated in 0.005 s

--
Tommy Gildseth


From: Thomas Pundt <mlists(at)rp-online(dot)de>
To: firerox(at)centrum(dot)cz, pgsql-performance(at)postgresql(dot)org
Subject: Re: slow pg_connect()
Date: 2008-03-24 12:39:43
Message-ID: 47E7A10F.5060805@rp-online.de
Lists: pgsql-performance

Hi,

firerox(at)centrum(dot)cz schrieb:
> Please, how long does your connection to postgres take?
>
> $starttimer=time()+microtime();
>
> $dbconn = pg_connect("host=localhost port=5432 dbname=xxx user=xxx password=xxx")
> or die("Couldn't Connect".pg_last_error());
>
> $stoptimer = time()+microtime();
> echo "Generated in ".round($stoptimer-$starttimer,4)." s";
>
> It takes more than 0.05s :(
>
> This function alone limits the server to at most 20 requests per second.

Two hints:
* Read about configuring and using persistent database connections
(http://www.php.net/manual/en/function.pg-pconnect.php) with PHP
* Use a connection pooler such as pgpool-II
(http://pgpool.projects.postgresql.org/)

Using both techniques together should boost your performance.
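
For readers outside PHP, the essence of pg_pconnect is just a connection cache keyed by connection string, reused across requests in the same server process. This is a hedged Python sketch of that idea; `pconnect` and `fake` names here are invented for illustration and do not match any real driver API:

```python
_persistent = {}  # dsn -> open connection, survives across "requests"

def pconnect(dsn, connect):
    """Return a cached connection for dsn, creating it only once
    (roughly what PHP's pg_pconnect does per server process)."""
    conn = _persistent.get(dsn)
    if conn is None:
        conn = connect(dsn)       # expensive: real connection setup
        _persistent[dsn] = conn
    return conn
```

The first call pays the connection cost; every later call with the same dsn returns the cached connection immediately.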

Ciao,
Thomas


From: Chris <dmagick(at)gmail(dot)com>
To: thomas(dot)pundt(at)rp-online(dot)de
Cc: firerox(at)centrum(dot)cz, pgsql-performance(at)postgresql(dot)org
Subject: Re: slow pg_connect()
Date: 2008-03-25 04:18:00
Message-ID: 47E87CF8.9020407@gmail.com
Lists: pgsql-performance


> * Read about configuring and using persistent database connections
> (http://www.php.net/manual/en/function.pg-pconnect.php) with PHP

Make sure you understand the ramifications of using persistent
connections, though. They can quickly exhaust your available connections
and cause other issues for your server.

If you do this you'll probably have to configure postgres to allow more
connections, which usually means lowering the amount of memory available
to each connection, and that can itself cause performance issues.

I'd probably use pgpool-II and have it handle the connection stuff for
you rather than doing it through php.
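
That trade-off can be made concrete with an 8.1-era postgresql.conf fragment. The numbers below are purely illustrative for a 2GB machine, not recommendations:

```
# postgresql.conf -- hypothetical, illustrative values (PostgreSQL 8.1 syntax)
max_connections = 200     # room for many persistent connections
shared_buffers = 20000    # counted in 8kB pages on 8.1 (~160MB here)
work_mem = 4096           # kB per sort/hash; with 200 connections the
                          # worst case multiplies, so keep it modest
```

Raising max_connections raises worst-case memory use, which is exactly why a pooler that keeps the connection count small is usually the cleaner fix.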

--
Postgresql & php tutorials
http://www.designmagick.com/