large dataset with write vs read clients

From: Aaron Turner <synfinatic(at)gmail(dot)com>
To: pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: large dataset with write vs read clients
Date: 2010-10-07 17:47:54
Message-ID: AANLkTimNJc=0ffgRO_gO6Vsq06=aqBYXZouhbv0kBe-i@mail.gmail.com
Lists: pgsql-performance

Currently PG 8.1.3; see attached for my postgresql.conf. The server is
FreeBSD 6.2 with a fast 3TB storage array and only 2GB of RAM.

We're running RTG, which is like mrtg, cricket, etc.: it basically
queries network devices via SNMP and throws the stats into the DB for
making pretty bandwidth graphs. We've got hundreds of devices, with
10K+ ports and probably 100K's of stats being queried every 5 minutes.
In order to do all that work, the back-end SNMP querier is
multi-threaded and opens a PG connection per thread. We're running 30
threads. This is basically all INSERTs, but only ends up being about
32,000 rows per 5 minutes.
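For scale, a quick back-of-the-envelope check of that insert rate,
using only the numbers above (polling window, total inserts, thread
count):

```python
# Rough insert-rate arithmetic from the workload described above.
inserts_per_window = 32_000       # total INSERTs per polling window
window_seconds = 5 * 60           # 5-minute polling window
threads = 30                      # one PG connection per thread

total_rate = inserts_per_window / window_seconds   # rows/sec in aggregate
per_thread_rate = total_rate / threads             # rows/sec per connection

print(round(total_rate, 1))       # ~106.7 rows/sec overall
print(round(per_thread_rate, 1))  # ~3.6 rows/sec per connection
```

So each of the 30 connections is nearly idle on the write side; the
pressure is per-connection memory, not insert throughput.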

The graphing front-end CGI is all SELECTs. There are 12K tables today,
and new tables are created each month. The number of rows per table
is 100-700K, with most in the 600-700K range: 190GB of data so far.
The good news is that the queries have no joins and are limited to
only a few tables at a time.

Basically, each connection is taking about 100MB resident. As we need
to increase the number of threads to be able to query all the devices
within the 5-minute window, we're running out of memory. There aren't
that many CGI connections at any one time. Query performance obviously
isn't great, but honestly it's surprisingly good, all things
considered.
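To put numbers on the squeeze, assuming roughly 100MB resident per
backend as observed above (bearing in mind that resident size usually
double-counts shared buffers across processes, so the true per-backend
cost may be lower):

```python
# Rough memory budget, assuming ~100MB resident per PG backend.
mb_per_connection = 100
threads = 30
ram_mb = 2 * 1024                 # 2GB of RAM on the box

needed_mb = mb_per_connection * threads
print(needed_mb)                  # 3000 MB for the writer connections alone
print(needed_mb > ram_mb)         # True: already past physical RAM
```

Even before counting the CGI connections, the writer pool alone would
exceed physical RAM at face value, which matches the symptoms.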

Honestly, I'm not really looking to improve PG's performance, although
I wouldn't complain; I just want to better manage memory/hardware. I
assume I can't start up two instances of PG pointing at the same
files, one read-only and one read-write with different memory
profiles, so I assume my only real option is to throw more RAM at it.
I don't have the $$$ for another array/server for a master/slave setup
right now. Or perhaps tweaking my .conf file would help? Are newer PG
versions more memory efficient?
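For reference, the memory-related knobs in an 8.1-era postgresql.conf
that are usually in play for this kind of tuning look roughly like the
following (values are illustrative placeholders, not recommendations
for this box):

```
# Illustrative memory-related settings for a PG 8.1 postgresql.conf
max_connections = 50            # each backend costs real memory
shared_buffers = 32768          # 8.1 counts 8KB pages (32768 = 256MB)
work_mem = 4096                 # KB, per sort/hash; multiplies per query
maintenance_work_mem = 65536    # KB, for VACUUM, CREATE INDEX, etc.
effective_cache_size = 131072   # 8KB pages; planner hint, costs no RAM
```

Note that in 8.1 these are raw page/KB counts; the human-readable
memory units ("256MB") only arrived in later releases.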

Thanks,
Aaron

--
Aaron Turner
http://synfin.net/         Twitter: @synfinatic
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
    -- Benjamin Franklin
"carpe diem quam minimum credula postero"

Attachment Content-Type Size
postgresql.conf application/octet-stream 13.5 KB
