Re: locking question

Lists: pgsql-general
From: Brian Hirt <bhirt(at)mobygames(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: locking question
Date: 2004-04-27 23:17:18
Message-ID: 0740FC47-98A1-11D8-BDF4-000393D9FD00@mobygames.com

I have a question about locks.

I have a stats table that gets updated when some other table changes.
Sometimes that other table is updated a 2nd time before the first stats
update is finished, which causes an error. I've tried using 'SET
TRANSACTION ISOLATION LEVEL SERIALIZABLE' but get 'could not serialize
access due to concurrent update'. If I try 'READ COMMITTED' I get
primary key failures. This seems like it's a pretty common thing, and
I'd like to be able to do this without having to write code to check
for the 'could not serialize access due to concurrent update' error and
re-run the query.

I don't have much experience with locking, because I haven't really
needed to use it. Any advice would be greatly appreciated. Below is
basically the transaction I'm running -- it fails when a 2nd one starts
while the 1st is still running.

BEGIN WORK;
delete from blah_stats where id = 1;
insert into blah_stats select id, count(*) from blah where id = 1
  group by id;
COMMIT WORK;

Regards,

Brian Hirt


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Brian Hirt <bhirt(at)mobygames(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: locking question
Date: 2004-04-30 00:30:49
Message-ID: 18575.1083285049@sss.pgh.pa.us

Brian Hirt <bhirt(at)mobygames(dot)com> writes:
> I don't have much experience with locking, because I haven't really
> needed to use it. Any advice would be greatly appreciated. Below is
> basically the transaction I'm running -- it fails when a 2nd one starts
> while the 1st is still running.

> BEGIN WORK;
> delete from blah_stats where id = 1;
> insert into blah_stats select id, count(*) from blah where id = 1
>   group by id;
> COMMIT WORK;

That will fail if both xacts are trying to act on id 1 (I'm assuming
there are various different id values that could be involved?). The
most general solution is to add a retry loop and use SERIALIZABLE mode.
(In general, writer transactions in SERIALIZABLE mode will always need
a retry loop.)
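The retry loop Tom describes can be sketched as a small client-side
helper. This is a generic illustration, not driver-specific code: the
names `run_txn` and `SerializationFailure` are hypothetical stand-ins
for your own wrapper and for whatever exception your database driver
raises on SQLSTATE 40001 ("could not serialize access...").

```python
class SerializationFailure(Exception):
    """Stand-in for the driver's 'could not serialize access' error."""

def run_txn(txn_fn, max_retries=5):
    """Run txn_fn, retrying if a concurrent writer forces a rollback.

    txn_fn should open, run, and commit one SERIALIZABLE transaction;
    on a serialization failure we simply start over from scratch.
    """
    for attempt in range(max_retries):
        try:
            return txn_fn()
        except SerializationFailure:
            continue  # another writer got there first; try again
    raise RuntimeError("transaction failed after %d retries" % max_retries)
```

The key point is that the whole transaction body must be re-runnable
from the top, since the failed attempt was rolled back entirely.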

Another possibility, if the set of id's in use is not changing fast,
is to assume that there's probably already a row with the right ID
value and just update it.

BEGIN;
UPDATE blah_stats SET count = (select count(*) from blah where id = 1)
    WHERE id = 1;
IF zero rows updated THEN insert as above; END IF;
COMMIT;

You need to run this in READ COMMITTED mode so that concurrent UPDATEs
won't burp. This can still fail, if two transactions try to insert the
same new row at about the same time, but you may not care too much (the
first one probably inserted the right value, or close enough...)
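The update-then-insert pattern above can be sketched against any
DB-API-style cursor. `refresh_stats` is a hypothetical helper name;
the table and column names follow the example in this thread, and
`cur.rowcount` is the standard DB-API attribute reporting how many
rows the last statement touched.

```python
def refresh_stats(cur, id_):
    """Update the stats row for id_, inserting it if it doesn't exist.

    Run inside a READ COMMITTED transaction, per the advice above.
    """
    cur.execute(
        "UPDATE blah_stats SET count = "
        "(SELECT count(*) FROM blah WHERE id = %s) WHERE id = %s",
        (id_, id_))
    if cur.rowcount == 0:  # no existing row: fall back to INSERT
        cur.execute(
            "INSERT INTO blah_stats "
            "SELECT id, count(*) FROM blah WHERE id = %s GROUP BY id",
            (id_,))
```

As noted, two transactions that both take the INSERT branch for the
same new id can still collide on the primary key; the fallback only
narrows the window, it doesn't close it.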

Plan C is to take out a write-exclusive lock on blah_stats at the start
of the transaction. This avoids all the funny cases at the cost of
preventing concurrent updates for different ID values. If your usage
pattern is such that that's not a big hit, this is the way to go.
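Plan C can be sketched the same way. An EXCLUSIVE table lock conflicts
with other writers but not with plain SELECTs, which take only ACCESS
SHARE; the lock is released automatically at COMMIT. The helper name
`refresh_stats_locked` is hypothetical.

```python
def refresh_stats_locked(cur, id_):
    """Delete-and-rebuild the stats row, serializing all writers.

    The LOCK must be the first statement so no two of these
    transactions ever interleave their delete/insert steps.
    """
    cur.execute("LOCK TABLE blah_stats IN EXCLUSIVE MODE")
    cur.execute("DELETE FROM blah_stats WHERE id = %s", (id_,))
    cur.execute(
        "INSERT INTO blah_stats "
        "SELECT id, count(*) FROM blah WHERE id = %s GROUP BY id",
        (id_,))
```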

regards, tom lane