Re: foreign key locks, 2nd attempt

From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Noah Misch <noah(at)leadboat(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: foreign key locks, 2nd attempt
Date: 2012-03-17 23:11:57
Message-ID: 1332025380-sup-7328@alvh.no-ip.org
Lists: pgsql-hackers


Excerpts from Simon Riggs's message of Tue Mar 06 17:28:12 -0300 2012:
> On Tue, Mar 6, 2012 at 7:39 PM, Alvaro Herrera
> <alvherre(at)commandprompt(dot)com> wrote:
>
> > We provide four levels of tuple locking strength: SELECT FOR KEY UPDATE is
> > super-exclusive locking (used to delete tuples and more generally to update
> > tuples modifying the values of the columns that make up the key of the tuple);
> > SELECT FOR UPDATE is a standards-compliant exclusive lock; SELECT FOR SHARE
> > implements shared locks; and finally SELECT FOR KEY SHARE is a super-weak mode
> > that does not conflict with exclusive mode, but conflicts with SELECT FOR KEY
> UPDATE.  This last mode is just strong enough to implement RI checks, i.e.
> it ensures that tuples do not go away from under a check, without blocking
> some other transaction that wants to update the tuple without changing its
> key.
>
> So there are 4 lock types, but we only have room for 3 on the tuple
> header, so we store the least common/deprecated of the 4 types as a
> multixactid. Some rewording would help there.

Hmm, I rewrote that paragraph two times. I'll try to adjust it a bit
more.
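
In the meantime, maybe a table makes it clearer. This is the conflict
matrix that paragraph is trying to describe, as an illustrative C sketch
(the enum and array are mine, not code from the patch):

#include <stdbool.h>

/* The four tuple lock strengths, weakest to strongest (illustrative). */
typedef enum
{
    TUPLOCK_KEY_SHARE,      /* SELECT FOR KEY SHARE -- RI checks */
    TUPLOCK_SHARE,          /* SELECT FOR SHARE */
    TUPLOCK_UPDATE,         /* SELECT FOR UPDATE -- non-key updates */
    TUPLOCK_KEY_UPDATE      /* SELECT FOR KEY UPDATE -- deletes, key updates */
} TupleLockStrength;

/* conflicts[held][requested]: does a held lock block a new request? */
static const bool conflicts[4][4] = {
    /* requested:       KEYSHR SHARE  UPD    KEYUPD */
    /* KEY SHARE  */  { false, false, false, true  },
    /* SHARE      */  { false, false, true,  true  },
    /* UPDATE     */  { false, true,  true,  true  },
    /* KEY UPDATE */  { true,  true,  true,  true  },
};

The KEY SHARE row is the point of the patch: it conflicts only with KEY
UPDATE, so RI checks no longer block updates that leave the key alone.
The KEY UPDATE row is also relevant to the deletes question further
down: it conflicts with everything.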

> My understanding is that all of these workloads will change
>
> * Users of explicit SHARE locks will be slightly worse off in the case
> of the first locker, but after that they'll be the same as before.

Right. (We're assuming that there *are* users of SHARE locks, which I'm
not sure is a given.)

> * Updates against an RI locked table will be dramatically faster
> because of reduced lock waits

Correct.

> ...and that these previous workloads are effectively unchanged:
>
> * Stream of RI checks causes mxacts

Yes.

> * Multi row deadlocks still possible

Yes.

> * Queues of writers still wait in the same way

Yes.

> * Deletes don't cause mxacts unless by same transaction

Yeah .. there's no way to avoid conflicting with a FOR KEY UPDATE lock
(the strength grabbed by a delete) unless you're the same transaction.

> > The possibility of having an update within a MultiXact means that they must
> > persist across crashes and restarts: a future reader of the tuple needs to
> > figure out whether the update committed or aborted.  So we have a requirement
> > that pg_multixact needs to retain pages of its data until we're certain that
> > the MultiXacts in them are no longer of interest.
>
> I think the "no longer of interest" aspect needs to be tracked more
> closely because it will necessarily lead to more I/O.

Not sure what you mean here.
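
For the archives, the requirement described in that paragraph amounts to
roughly this control flow (stub types; GetMultiXactMembers and
TransactionIdDidCommit are stand-ins for the pg_multixact and pg_clog
lookups, so this is a sketch of the idea, not patch code):

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
typedef uint32_t MultiXactId;

/* At most one member of a multi can be an update. */
typedef enum { MEMBER_LOCKER, MEMBER_UPDATE } MemberKind;

typedef struct
{
    TransactionId xid;
    MemberKind    kind;
} MultiXactMember;

/* stand-ins for the pg_multixact and pg_clog lookups */
extern int  GetMultiXactMembers(MultiXactId multi, MultiXactMember **members);
extern bool TransactionIdDidCommit(TransactionId xid);

/*
 * A future reader that finds a multi in a tuple's xmax must be able to
 * recover the updating member (if any) and check whether it committed.
 * This is why pg_multixact contents have to survive crashes and
 * restarts, unlike the pre-patch situation where multis were purely
 * transient lock state.
 */
static bool
MultiXactUpdateCommitted(MultiXactId multi)
{
    MultiXactMember *members;
    int         nmembers = GetMultiXactMembers(multi, &members);

    for (int i = 0; i < nmembers; i++)
    {
        if (members[i].kind == MEMBER_UPDATE)
            return TransactionIdDidCommit(members[i].xid);
    }
    return false;               /* locker-only multi: nothing to check */
}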

> If we store the LSN on each mxact page, as I think we need to, we can
> get rid of pages more quickly if we know they don't have an LSN set.
> So it's possible we can optimise that more.

Hmm, I had originally thought that this was rather pointless, because it
seemed unlikely that a segment would ever consist entirely of multis
containing no updates. But then, maybe Robert is right and there are
users out there that run a lot of RI checks and never update the masters
... Hm. I'm not sure that LSN tracking is the right tool for that
optimization, however -- a single multi containing an update is enough to
prevent a whole segment from being considered useless.
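
To illustrate the objection, suppose we kept one "contains an update"
flag (or, equivalently, an LSN) per segment, along these lines (names
and structure hypothetical, not from the patch):

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t MultiXactId;

typedef struct
{
    MultiXactId oldest_multi;       /* first multi in this segment */
    bool        contains_update;    /* does any multi here carry an update? */
} MxactSegment;

/*
 * A locker-only segment is discardable as soon as no running
 * transaction can still care about its members; a segment with even
 * one update must be kept until vacuum has dealt with the affected
 * tuples.  (Boundary and wraparound checks elided for brevity.)
 */
static bool
SegmentIsDiscardable(const MxactSegment *seg,
                     MultiXactId oldest_running_multi,
                     MultiXactId freeze_cutoff_multi)
{
    if (!seg->contains_update)
        return seg->oldest_multi < oldest_running_multi;
    return seg->oldest_multi < freeze_cutoff_multi;
}

One update anywhere in the segment forces the conservative path for the
whole thing, which is why I doubt this buys much outside the
RI-heavy, update-light workloads Robert has in mind.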

> > VACUUM is in charge of removing old MultiXacts at the time of tuple freezing.
>
> You mean mxact segments?

Well, both. When a tuple is frozen, we both remove its Xmin/Xmax and
any possible multi that it might have in Xmax. That's what I really
meant above. But also, vacuum will remove pg_multixact segments just as
it will remove pg_clog segments.

(It is possible, and probably desirable, to remove a Multi much earlier
than freezing the tuple. The patch does not (yet) do that, however.)
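
To spell out the tuple-level part, freezing can simply throw the multi
away, roughly like this (stub struct; the flag value is illustrative and
wraparound-aware comparisons are elided):

#include <stdint.h>

typedef uint32_t TransactionId;
typedef uint32_t MultiXactId;

#define InvalidTransactionId   ((TransactionId) 0)
#define HEAP_XMAX_IS_MULTI     0x1000       /* illustrative bit value */

typedef struct
{
    TransactionId xmin;
    TransactionId xmax;         /* may actually carry a MultiXactId */
    uint16_t      infomask;
} TupleHeaderSketch;

/*
 * Once the multi is older than the cutoff, none of its members can
 * still be running, and a live tuple cannot carry a committed update
 * (it would be dead otherwise), so the xmax can just be invalidated.
 */
static void
FreezeTupleXmax(TupleHeaderSketch *tup, MultiXactId cutoff_multi)
{
    if ((tup->infomask & HEAP_XMAX_IS_MULTI) &&
        (MultiXactId) tup->xmax < cutoff_multi)
    {
        tup->xmax = InvalidTransactionId;
        tup->infomask &= ~HEAP_XMAX_IS_MULTI;
    }
}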

> Surely we set hint bits on tuples same as now? Hope so.

We set hint bits, but if a multi contains an update, we don't set
HEAP_XMAX_COMMITTED even when the update is known committed. I think
we could do this in some cases.
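
That is, hint-bit setting looks roughly like this (same kind of stub
tuple header as above; bit values illustrative):

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define HEAP_XMAX_COMMITTED    0x0400       /* illustrative bit values */
#define HEAP_XMAX_IS_MULTI     0x1000

typedef struct
{
    TransactionId xmax;
    uint16_t      infomask;
} TupleHeaderSketch;

extern bool TransactionIdDidCommit(TransactionId xid);  /* pg_clog stub */

/*
 * With a plain XID in xmax the committed hint can be set as usual, but
 * when xmax is a multi the single bit is ambiguous about *which*
 * member it describes, so for now we leave it unset in that case.
 */
static void
SetXmaxCommittedHint(TupleHeaderSketch *tup)
{
    if (tup->infomask & HEAP_XMAX_IS_MULTI)
        return;                 /* conservatively skip multis for now */
    if (TransactionIdDidCommit(tup->xmax))
        tup->infomask |= HEAP_XMAX_COMMITTED;
}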

--
Álvaro Herrera <alvherre(at)commandprompt(dot)com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
