time to stop tuning?

From: "David Parker" <dparker(at)tazznetworks(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: time to stop tuning?
Date: 2004-11-26 17:13:32
Message-ID: 07FDEE0ED7455A48AC42AC2070EDFF7C26BDD2@corpsrv2.tazznetworks.com
Lists: pgsql-performance

We have a network application in which many clients will be executing a mix of selects/inserts/updates/deletes on a central PostgreSQL 7.4.5 database. The server runs Solaris 9 on dual 2.3 GHz Xeons with 2 GB of RAM and a RAID 10 disk array. The test database is about 400 MB in size.

We have tuned the postgresql.conf parameters to the point where we are confident we have enough memory for shared buffers and for sorting. We are still tuning SQL statements, but we're pretty sure the big wins have been achieved.
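
For reference, these are the kinds of 7.4 parameters involved (the values below are illustrative for a 2 GB machine, not necessarily our exact settings):

    # postgresql.conf (7.4) -- illustrative values for a 2 GB RAM machine
    shared_buffers = 25000          # 8 KB pages, roughly 200 MB
    sort_mem = 8192                 # KB per sort (renamed work_mem in 8.0)
    effective_cache_size = 150000   # 8 KB pages the OS is expected to cache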

We are maxing out on the backend with 30 postmaster processes, each taking up about 2.5-3% of the CPU. We have tested mounting the whole database in /tmp (which puts it in memory, since /tmp is tmpfs on Solaris), and it made no difference in performance, so it seems we are purely CPU-bound at this point.

About 70% of our time is spent in selects, and another 25% is spent in inserts/updates of a single table (about 10 percentage points of the select time are against this same table).

Now, our application client is not doing nearly enough of its own caching, so a lot of the work the database is currently doing is redundant. We are working on the client, but in the meantime we have to squeeze as much as we can from the backend.

After that long intro, I have a couple of questions:

1) Given that the data is all cached, what can we do to make sure that postgres is generating the most efficient plans in this case? We have bumped up effective_cache_size, but it had no effect. Also, what would the most efficient plan for in-memory data look like? I mean, does one still look for the normal stuff (index usage, etc.), or are seqscans what we should be looking for? I've seen some discussion of raising the statistics targets for specific tables, but I'm not sure I understand it, and I don't know whether something like that applies in this case. I can supply some specific plans if that would help (this email is already too long...).
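
(If the statistics-target suggestion is what I think it is, it amounts to something like the following -- the table and column names here are made up:)

    -- raise the planner's per-column sample size from the default of 10
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
    -- re-gather statistics so the new target takes effect
    ANALYZE orders;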

2) We have a SQL test environment where we just run the SQL statements executed by the clients (culled from the log file) in psql. In our test environment, the same set of SQL statements runs 4X faster than the times achieved in the test that generated our source log file. Obviously there was a bigger load on the machine in the full test, but I'm wondering if there are any particular diagnostics I should be looking at to ferret out contention. I haven't seen anything that looked suspicious in pg_locks, but it's difficult to interpret that data when the database is under load (at least for someone of my limited experience).
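
(The closest I've gotten to a contention check is a query along these lines -- note that current_query is only populated when stats_command_string is enabled:)

    -- list ungranted locks with the table and query of each waiting backend
    SELECT l.pid, l.mode, c.relname, a.current_query
      FROM pg_locks l
      LEFT JOIN pg_class c ON c.oid = l.relation
      LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
     WHERE NOT l.granted;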

I suspect the ultimate answer to our problem will be:

1) aggressive client-side caching
2) SQL tuning
3) more backend hardware

But I would be grateful to hear any tips/anecdotes/experiences that others might have from tuning similar applications.

Thanks!

- DAP
----------------------------------------------------------------------------------
David Parker Tazz Networks (401) 709-5130


From: Rod Taylor <rbt(at)sitesell(dot)com>
To: David Parker <dparker(at)tazznetworks(dot)com>
Cc: Postgresql Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: time to stop tuning?
Date: 2004-11-26 18:29:19
Message-ID: 1101493759.44437.302.camel@home
Lists: pgsql-performance

On Fri, 2004-11-26 at 12:13 -0500, David Parker wrote:
>
> I suspect the ultimate answer to our problem will be:
>
> 1) aggressive client-side caching
> 2) SQL tuning
> 3) more backend hardware

#0 might actually be using connection pooling and cached query
plans (PREPARE), disabling the statistics daemon, etc.
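
For the cached plans, the idea is roughly this (table and parameter names made up; with pooled connections the prepared statement survives across client requests):

    -- plan once per connection, then execute many times
    PREPARE get_acct (integer) AS
        SELECT * FROM accounts WHERE id = $1;

    EXECUTE get_acct(42);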

For the plans, send us EXPLAIN ANALYZE output for each of the common
queries.

If you can try it, I'd give FreeBSD or a newer Linux a try on your
system instead of Solaris. Older versions of Solaris have not received
the same amount of attention on Intel hardware as the BSDs and Linux
have, and I would imagine (not having tested it recently) that this is
still true for 32-bit Intel.

Another interesting test might be to limit the number of simultaneous
connections to 8 instead of 30 (with client-side connection retry) after
client-side connection pooling via pgpool or similar has been installed.
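
With pgpool that would look something like this (I'm going from memory of the pgpool docs, so check the parameter names; the values are illustrative):

    # pgpool.conf -- illustrative excerpt
    num_init_children = 8    # at most 8 concurrent connections to the backend
    max_pool = 1             # backend connections cached per pgpool child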

Please report back with your findings.
--
Rod Taylor <rbt(at)sitesell(dot)com>


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "David Parker" <dparker(at)tazznetworks(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: time to stop tuning?
Date: 2004-11-26 19:04:35
Message-ID: 10504.1101495875@sss.pgh.pa.us
Lists: pgsql-performance

"David Parker" <dparker(at)tazznetworks(dot)com> writes:
> 1) Given that the data is all cached, what can we do to make sure that
> postgres is generating the most efficient plans in this case? We have
> bumped up effective_cache_size, but it had no effect.

If you're willing to bet on everything being in RAM all the time,
dropping random_page_cost to 1 would be a theoretically sound thing
to do. In any case you should look at reducing it considerably from
the default setting of 4.
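
That's cheap to experiment with per-session before touching
postgresql.conf (the query below is just a placeholder):

    SET random_page_cost = 1.0;   -- default is 4
    EXPLAIN ANALYZE
        SELECT * FROM orders WHERE customer_id = 42;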

Something that might also be interesting is to try increasing all the
cpu_xxx cost factors, on the theory that since the unit of measurement
(1 sequential page fetch) relates to an action involving no actual I/O,
the relative costs of invoking an operator, etc., should be rated higher
than when you expect actual I/O. I'm not real confident that this would
make things better --- you might find that any improvement would be
swamped by the low accuracy with which we model CPU costs (such as the
assumption that every operator costs the same to evaluate). But it's
worth some experimentation.
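
For reference, the knobs in question and their defaults (the 10x bump
shown is just one possible experiment):

    SET cpu_tuple_cost = 0.1;          -- default 0.01
    SET cpu_index_tuple_cost = 0.01;   -- default 0.001
    SET cpu_operator_cost = 0.025;     -- default 0.0025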

regards, tom lane