Re: serializable lock consistency

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Florian Pflug <fgp(at)phlo(dot)org>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: serializable lock consistency
Date: 2010-12-20 16:54:19
Message-ID: AANLkTikXjxGFbfhFg0gaxXJEEeWqdAwSEiauZzgUPqMX@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 20, 2010 at 9:11 AM, Florian Pflug <fgp(at)phlo(dot)org> wrote:
> On Dec20, 2010, at 13:13 , Heikki Linnakangas wrote:
>> One way to look at this is that the problem arises because SELECT FOR UPDATE doesn't create a new tuple like UPDATE does. The problematic case was:
>>
>>> T1 locks, T1 commits, T2 updates, T2 aborts, all after T0
>>> took its snapshot but before T0 attempts to delete. :-(
>>
>> If T1 does a regular UPDATE, T2 doesn't overwrite the xmax on the original tuple, but on the tuple that T1 created.
>
>> So one way to handle FOR UPDATE would be to lazily turn the lock operation by T1 into a dummy update, when T2 updates the tuple. You can't retroactively make a regular update on behalf of the locking transaction that committed already, or concurrent selects would see the same row twice, but it might work with some kind of a magic tuple that's only followed through the ctid from the original one, and only for the purpose of visibility checks.
>
> In the case of an UPDATE of a recently locked tuple, we could avoid having to insert a dummy tuple by storing the old tuple's xmax in the new tuple's xmax. We'd flag the old tuple, and attempt to restore the xmax of any flagged tuple with an aborted xmax and a ctid != t_self during scanning and vacuuming.
>
> For DELETEs, that won't work. However, could we maybe abuse the ctid to store the old xmax? It currently contains t_self, but do we actually depend on that?

My first reaction to all of this is that it sounds awfully grotty.
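
To make sure we're all talking about the same case, here is my reading
of the interleaving in question, spelled out as SQL. The table name and
values are made up, and the failure at the end is what the patch is
trying to guarantee, not what unpatched PostgreSQL does today:

    -- session T0
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM accounts;                          -- T0 takes its snapshot

    -- session T1
    BEGIN;
    SELECT * FROM accounts WHERE id = 1 FOR UPDATE;  -- xmax now holds T1's xid
    COMMIT;

    -- session T2
    BEGIN;
    UPDATE accounts SET balance = 0 WHERE id = 1;    -- xmax overwritten with T2's xid
    ROLLBACK;                                        -- T2 aborts, but T1's xid is gone

    -- session T0
    DELETE FROM accounts WHERE id = 1;               -- ought to fail under the proposed
                                                     -- semantics: T1 locked the row and
                                                     -- committed after T0's snapshot, but
                                                     -- the tuple's xmax now points at the
                                                     -- aborted T2, so the conflict is
                                                     -- invisible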

> FOR-SHARE and FOR-UPDATE locks could preserve information about the latest committed locker by creating a multi-xid. For FOR-SHARE locks, we'd just need to ensure that we remove all but one of the finished transactions. For FOR-UPDATE locks, we'd need to create a multi-xid if the old xmax is >= GlobalXmin, but I guess that's tolerable.

Even in the original version of this patch, there's a non-trivial
overhead here that doesn't exist today: whenever a multi-xid is present, a
serializable transaction has to grovel through the XIDs in the
multi-xact and figure out whether any of them are new enough to be a
problem. I fear that this whole approach is a case of trying to jam a
square peg through a round hole. We're trying to force the on-disk
format that we have to meet a requirement it wasn't designed for, and
it's looking pretty ugly. Kevin Grittner's work is a whole different
approach to this problem, and while that's obviously not fully
debugged and committed yet either, it's often easier to design a new
tool to solve a particular problem than to make an existing tool that
was really meant for something else do some new thing in addition.
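
To put a concrete face on the overhead I mentioned above: a multi-xact
arises as soon as two transactions share-lock the same row, e.g. (table
and values invented again):

    -- session A
    BEGIN;
    SELECT * FROM accounts WHERE id = 1 FOR SHARE;   -- xmax holds A's xid

    -- session B
    BEGIN;
    SELECT * FROM accounts WHERE id = 1 FOR SHARE;   -- xmax is replaced by a multi-xact
                                                     -- whose members are A's and B's xids

Under the proposed scheme, a later serializable UPDATE or DELETE of that
row would have to fetch the multi-xact's members and check each one
against its snapshot to decide whether any locker committed after the
snapshot was taken; that per-member check is the cost I'm worried about.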

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
