Re: 15,000 tables - next step

From: Michael Riess <mlriess(at)gmx(dot)de>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: 15,000 tables - next step
Date: 2005-12-04 09:33:47
Message-ID: dmud5m$245l$1@news.hub.org
Lists: pgsql-performance

William Yu schrieb:
> Michael Riess wrote:
>>> Well, I'd think that's where your problem is. Not only do you have a
>>> (relatively speaking) small server -- you also share it with other
>>> very-memory-hungry services! That's not a situation I'd like to be in.
>>> Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB
>>> to Postgres.
>>
>>
>> No can do. I can try to switch to a 2GB machine, but I will not use
>> several machines. Not for a 5GB database. ;-)
>>
>>> With 1500 shared buffers you are not really going
>>> anywhere -- you should have ten times that at the very least.
>>>
>>
>> Like I said - I tried to double the buffers and the performance did
>> not improve in the least. And I also tried this on a 2GB machine, and
>> swapping was not a problem. If I used 10x more buffers, I would in
>> essence remove the OS buffers.
>
> Increasing buffers does improve performance -- if you have enough
> memory. You just don't have enough memory to play with. My servers run
> w/ 10K buffers (128MB on 64-bit FC4), and they definitely run better at
> 10K than at 1500.
>
> With that many tables, your system catalogs are probably huge.

content2=# select sum(relpages) from pg_class where relname like 'pg_%';
  sum
-------
 64088
(1 row)

:-)
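For scale, that page count can be converted to megabytes with the same kind of query, assuming the default 8 kB block size (the block size of this particular installation is not stated in the thread):

```sql
-- Rough catalog footprint, assuming the default 8192-byte block size:
-- 64088 pages * 8192 bytes/page ~= 500 MB.
SELECT sum(relpages) * 8192 / (1024 * 1024) AS catalog_mb
FROM pg_class
WHERE relname LIKE 'pg_%';
```

If that assumption holds, roughly half a gigabyte of catalog pages is competing for cache on a 1GB machine before any user data is touched.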

> While my situation was fixable by scheduling a nightly
> vacuum/analyze on the system catalogs to get rid of the bazillion dead
> table/index info, you have no choice but to get more memory so you can
> stuff your entire system catalog into buffers/os cache. Personally, w/
> 1GB of ECC RAM at ~$85, it's a no-brainer. Get as much memory as your
> server can support.

The problem is that we use pre-built hardware which isn't configurable.
We can only switch to a bigger server with 2GB, but that's tops.

I will do the following:

- switch to 10k buffers on a 1GB machine, 20k buffers on a 2GB machine
- try to optimize my connection pools to remember which apps (groups of
30 tables) were accessed, so that there is a better chance of using caches
- "swap out" tables which are rarely used: export the content, drop the
table, and re-create it on the fly upon access.
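The "swap out" step in the plan above might look roughly like this; the table definition and file path are invented for illustration:

```sql
-- Hypothetical swap-out of a rarely used table (names and paths made up).
-- Export the rows server-side, then drop the table:
COPY app17_data TO '/var/lib/pgsql/swapped/app17_data.copy';
DROP TABLE app17_data;

-- ...later, on first access, re-create the table and reload the data:
CREATE TABLE app17_data (id integer PRIMARY KEY, payload text);
COPY app17_data FROM '/var/lib/pgsql/swapped/app17_data.copy';
```

Note that server-side COPY to a file requires superuser privileges, and the DROP/CREATE cycle itself generates dead catalog rows, so this trades catalog bloat and cache pressure for reload cost on first access.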

Thanks for your comments!
