Re: pgbench vs. SERIALIZABLE

From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pgbench vs. SERIALIZABLE
Date: 2013-05-20 11:50:46
Message-ID: 1369050646.59164.YahooMailNeo@web162903.mail.bf1.yahoo.com
Lists: pgsql-hackers

Josh Berkus <josh(at)agliodbs(dot)com> wrote:

> I recently had a reason to benchmark a database which defaults to
> SERIALIZABLE mode.  I was startled to discover that pgbench is set up to
> abort the client once it hits a serialization failure.  You get a bunch
> of these:
>
> Client 7 aborted in state 11: ERROR:  could not serialize access due to
> read/write dependencies among transactions
> DETAIL:  Reason code: Canceled on identification as a pivot, during write.
> HINT:  The transaction might succeed if retried.
> Client 0 aborted in state 11: ERROR:  could not serialize access due to
> read/write dependencies among transactions
> DETAIL:  Reason code: Canceled on identification as a pivot, during write.
>
> ... which continue until you're down to one client, which then finishes
> out the pgbench run (at very low rates, of course).
>
> The problem is this code here:
>
>                 if (commands[st->state]->type == SQL_COMMAND)
>                 {
>                         /*
>                          * Read and discard the query result; note this is
>                          * not included in the statement latency numbers.
>                          */
>                         res = PQgetResult(st->con);
>                         switch (PQresultStatus(res))
>                         {
>                                 case PGRES_COMMAND_OK:
>                                 case PGRES_TUPLES_OK:
>                                         break;          /* OK */
>                                 default:
>                                         fprintf(stderr, "Client %d aborted in state %d: %s",
>                                                         st->id, st->state, PQerrorMessage(st->con));
>                                         PQclear(res);
>                                         return clientDone(st, false);
>                         }
>                         PQclear(res);
>                         discard_response(st);
>
>
> The way I read that, if the client encounters any errors at all, it
> gives up and halts that client.  This doesn't seem very robust, and it
> certainly won't work with SERIALIZABLE.

Yes, I ran into this and wound up testing with a hacked copy of
pgbench. Anyone using SERIALIZABLE transactions needs to be
prepared to handle serialization failures.  Even REPEATABLE READ
can see a lot of serialization failures due to write conflicts in
some workloads, and READ COMMITTED can see deadlocks on certain
workloads.
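
For illustration, the shape of that handling in a plain libpq client
is something like the sketch below.  run_transaction() is a
hypothetical stand-in for whatever issues BEGIN ... COMMIT and
returns the final PGresult; the SQLSTATE tests use the real codes
for serialization_failure and deadlock_detected.

    #include <string.h>
    #include <libpq-fe.h>

    extern PGresult *run_transaction(PGconn *conn);    /* hypothetical */

    static void
    exec_with_retry(PGconn *conn)
    {
            for (;;)
            {
                    PGresult   *res = run_transaction(conn);
                    ExecStatusType status = PQresultStatus(res);
                    const char *sqlstate;
                    int         retryable;

                    if (status == PGRES_COMMAND_OK ||
                            status == PGRES_TUPLES_OK)
                    {
                            PQclear(res);
                            return;         /* committed: count it now */
                    }

                    /* 40001 = serialization_failure, 40P01 = deadlock */
                    sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
                    retryable = sqlstate != NULL &&
                            (strcmp(sqlstate, "40001") == 0 ||
                             strcmp(sqlstate, "40P01") == 0);
                    PQclear(res);

                    PQclear(PQexec(conn, "ROLLBACK"));
                    if (!retryable)
                            return;         /* hard error: don't retry */
            }
    }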

> My thinking is that what pgbench should do is:
> * track an error count
> * if it finds an error, don't increment the transaction count, but do
> increment the error count.
> * then continue to the next transaction.
>
> Does that seem like the right approach?

The main thing is to not consider a transaction complete on a
serialization failure.  Many frameworks will retry a transaction
from the start on a serialization failure.  pgbench should do the
same.  The transaction should be rolled back and retried without
incrementing the transaction count.  It would not hurt to keep
track of the retries, but as long as the transaction is retried
until successful you will get meaningful throughput numbers.
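
In terms of the code Josh quoted, the default: arm of that switch
could do something like this instead of bailing out.  This is only a
sketch: st->retries is a hypothetical counter, the synchronous
PQexec("ROLLBACK") glosses over pgbench's asynchronous protocol
handling, and error_is_retryable() is sketched after the next
paragraph.

    default:
            if (error_is_retryable(res))    /* see sketch below */
            {
                    /*
                     * Roll back and rerun the script from the top;
                     * bump a retry counter, not the transaction count.
                     */
                    st->retries++;          /* hypothetical counter */
                    PQclear(PQexec(st->con, "ROLLBACK"));
                    st->state = 0;
                    break;          /* fall through to the normal
                                     * PQclear()/discard path */
            }
            fprintf(stderr, "Client %d aborted in state %d: %s",
                            st->id, st->state, PQerrorMessage(st->con));
            PQclear(res);
            return clientDone(st, false);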

Perhaps other errors should also be counted and handled better.
For SQLSTATE values *other than* 40001 and 40P01 we should not
retry the same transaction, though, because it could well be a case
where the same attempt will head-bang forever.  Serialization
failures, by definition, can be expected to work on retry.  (Not
always on the *first* retry, but any benchmark tool should keep at
it until the transaction succeeds or gets a hard error if you want
meaningful numbers, because that's what software frameworks should
be doing -- and many do.)
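
The classification itself is tiny.  Something like the following
(again just a sketch -- error_is_retryable() is a made-up name, but
PQresultErrorField() and PG_DIAG_SQLSTATE are the standard libpq way
to read the SQLSTATE):

    /*
     * Retry only when the server says the transaction might succeed
     * if rerun: 40001 (serialization_failure) and 40P01
     * (deadlock_detected).  Anything else is a hard error.
     */
    static bool
    error_is_retryable(const PGresult *res)
    {
            const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

            return sqlstate != NULL &&
                    (strcmp(sqlstate, "40001") == 0 ||
                     strcmp(sqlstate, "40P01") == 0);
    }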

I raised this issue near the end of SSI development, but nobody
seemed very interested and someone argued that a tool to do that
would be good but we shouldn't try to do it in pgbench -- so I let
it drop at the time.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
