pgbench to the MAXINT

From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: pgbench to the MAXINT
Date: 2011-01-08 01:59:01
Message-ID: 4D27C4E5.5000609@2ndquadrant.com
Lists: pgsql-hackers pgsql-performance

At one point I was working on a patch to pgbench to have it adopt 64-bit
math internally even when running on 32-bit platforms, which are
currently limited to a database scale of ~4000 before the whole process
crashes and burns. But since the range was still plenty high on a
64-bit system, I stopped working on that. People who are only running
32-bit servers at this point in time aren't doing anything serious
anyway, right?
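
The idea was nothing fancy: just do the internal row-count arithmetic in
64-bit integers so the scale multiplication can't wrap. A rough sketch of
that approach--illustrative only, with made-up function names, assuming
pgbench's 100,000 accounts rows per unit of scale--looks like this:

#include <stdint.h>

#define ACCOUNTS_PER_SCALE 100000   /* accounts rows created per unit of scale */

/* 32-bit math: wraps as soon as scale exceeds INT_MAX / 100000 = 21474 */
int32_t
total_accounts_32(int32_t scale)
{
    return scale * ACCOUNTS_PER_SCALE;
}

/* 64-bit math: the same calculation stays exact up to a scale of ~92 trillion */
int64_t
total_accounts_64(int32_t scale)
{
    return (int64_t) scale * ACCOUNTS_PER_SCALE;
}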

So what is the upper limit now? The way it degrades when you cross it
amuses me:

$ pgbench -i -s 21475 pgbench
creating tables...
set primary key...
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
"pgbench_branches_pkey" for table "pgbench_branches"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
"pgbench_tellers_pkey" for table "pgbench_tellers"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
"pgbench_accounts_pkey" for table "pgbench_accounts"
vacuum...done.
$ pgbench -S -t 10 pgbench
starting vacuum...end.
setrandom: invalid maximum number -2147467296
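
That weird negative number is just 32-bit integer wraparound. The scale
gets multiplied by the 100,000 accounts per scale unit, and 21475 *
100,000 = 2,147,500,000, which is past INT_MAX (2,147,483,647); reduced
modulo 2^32 that comes out as -2,147,467,296, exactly the value setrandom
is complaining about. (A scale of 21474 gives 2,147,400,000, which still
fits.) A quick standalone demonstration--my toy program, not pgbench's
code:

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
    int64_t accounts = 21475LL * 100000;    /* 2,147,500,000 */
    int32_t wrapped = (int32_t) accounts;   /* truncated to 32 bits on the
                                             * usual two's complement platforms */

    printf("%lld wraps to %d\n", (long long) accounts, wrapped);
    /* prints: 2147500000 wraps to -2147467296 */
    return 0;
}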

It doesn't throw any error during the initialization step, in either the
client or the database logs, even though it doesn't do anything
whatsoever. It just turns into the quickest pgbench init ever. That's
the exact threshold, because this works:

$ pgbench -i -s 21474 pgbench
creating tables...
10000 tuples done.
20000 tuples done.
30000 tuples done.
...

So where we're at now is that the maximum database pgbench can create is
a scale of 21474. That makes approximately a 313GB database. I can
tell you the size for sure when that init finishes running, which is not
going to be soon. That's not quite as big as I'd like for exercising a
system with 128GB of RAM, the biggest size I run into regularly now, but
it's close enough for now. This limit will finally need to get pushed
upward soon though, because 256GB servers are getting cheaper every
day--and the current pgbench can't make a database big enough to really
escape cache on one of them.

--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
