Concurrent MERGE

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Concurrent MERGE
Date: 2010-08-05 15:43:34
Message-ID: 1281023014.1838.922.camel@ebony
Lists: pgsql-hackers


Looks like MERGE is progressing well.

At the 2010 Dev Mtg, we put me down to work on making MERGE work
concurrently. That was garbled slightly and had me down as working on
predicate locking, which is the general solution to the problem.

Do we still need me to work on concurrent MERGE, or is that included in
the current MERGE patch (can't see it), or is that covered elsewhere
(for example Kevin Grittner's recent work)?

Still happy to do the work as proposed, just checking it's still required.

Thanks,

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Training and Services


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 15:57:27
Message-ID: AANLkTik1+41hoqJmF4xWY4AO-VVgt02HH63mGBgJeLPm@mail.gmail.com
Lists: pgsql-hackers

On Thu, Aug 5, 2010 at 11:43 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> Looks like MERGE is progressing well.
>
> At the 2010 Dev Mtg, we put me down to work on making MERGE work
> concurrently. That was garbled slightly and had me down as working on
> predicate locking, which is the general solution to the problem.
>
> Do we still need me to work on concurrent MERGE, or is that included in
> the current MERGE patch (can't see it), or is that covered elsewhere
> (for example Kevin Grittner's recent work)?
>
> Still happy to do the work as proposed, just checking it's still required.

I suspect Kevin's patch will solve it if using a sufficiently high
transaction isolation level, but something else might be needed
otherwise. However, I confess to ignorance as to the underlying
issues. Why is MERGE worse in this regard than, say, UPDATE?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company


From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 16:39:19
Message-ID: 4C5AE937.5080100@enterprisedb.com
Lists: pgsql-hackers

On 05/08/10 18:43, Simon Riggs wrote:
> Do we still need me to work on concurrent MERGE, or is that included in
> the current MERGE patch (can't see it), or ...

It's not in the current MERGE patch.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 16:43:52
Message-ID: 4C5AEA48.1070505@enterprisedb.com
Lists: pgsql-hackers

On 05/08/10 18:57, Robert Haas wrote:
> On Thu, Aug 5, 2010 at 11:43 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>> Looks like MERGE is progressing well.
>>
>> At the 2010 Dev Mtg, we put me down to work on making MERGE work
>> concurrently. That was garbled slightly and had me down as working on
>> predicate locking, which is the general solution to the problem.
>>
>> Do we still need me to work on concurrent MERGE, or is that included in
>> the current MERGE patch (can't see it), or is that covered elsewhere
>> (for example Kevin Grittner's recent work)?
>>
>> Still happy to do the work as proposed, just checking it's still required.
>
> I suspect Kevin's patch will solve it if using a sufficiently high
> transaction isolation level, but something else might be needed
> otherwise.

With truly serializable isolation I think you'll get a serialization
failure error.

> However, I confess to ignorance as to the underlying
> issues. Why is MERGE worse in this regard than, say, UPDATE?

MERGE can be used to implement "upsert", where a row is updated if it
exists and inserted if it doesn't. I don't think Kevin's patch will
suffice for that. You don't usually want a serialization failure error
when you run two upserts at the same time, you want both of them to
succeed, one doing an insert and the other one doing an update.
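
For illustration, the statement in question looks something like this
(hypothetical table; syntax as in the SQL standard, the pending patch
may differ in detail):

-- assuming: CREATE TABLE counters (id int PRIMARY KEY, n int);
MERGE INTO counters c
USING (VALUES (1, 10)) AS src (id, delta)
ON (c.id = src.id)
WHEN MATCHED THEN
    UPDATE SET n = c.n + src.delta
WHEN NOT MATCHED THEN
    INSERT (id, n) VALUES (src.id, src.delta);

Run two of those for the same key at the same time, and the outcome you
want is that one takes the INSERT arm and the other the UPDATE arm.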

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Heikki Linnakangas" <heikki(dot)linnakangas(at)enterprisedb(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "Simon Riggs" <simon(at)2ndquadrant(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Concurrent MERGE
Date: 2010-08-05 16:57:51
Message-ID: 4C5AA73F020000250003425B@gw.wicourts.gov
Lists: pgsql-hackers

Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:

>> However, I confess to ignorance as to the underlying
>> issues. Why is MERGE worse in this regard than, say, UPDATE?
>
> MERGE can be used to implement "upsert", where a row is updated if
> it exists and inserted if it doesn't. I don't think Kevin's patch
> will suffice for that. You don't usually want a serialization
> failure error when you run two upserts at the same time, you want
> both of them to succeed, one doing an insert and the other one
> doing an update.

The patch Dan and I are working on won't block anything that
snapshot isolation doesn't already block, so if the behavior you
want is that one is held up until the other is done with something,
it's not going to help. What it would do is detect whether two
concurrent upserts are behaving in a way that is consistent with
some serial execution of the two upserts; it would do nothing if
there was a consistent interpretation, but roll one back if each
appeared to come before the other in some respect.

All of that, of course, with the usual caveats that it would have
*no* impact unless both were run at the SERIALIZABLE isolation
level, there could be false positives, and the MERGE code might
possibly need to add a few calls to the functions added in the
serializable patch.
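
As a concrete sketch (hypothetical table, deliberately without a unique
constraint so that snapshot isolation alone has nothing to complain
about):

-- assuming: CREATE TABLE upsert_target (k int, v text);
-- session 1: BEGIN ISOLATION LEVEL SERIALIZABLE;
-- session 1: SELECT v FROM upsert_target WHERE k = 1;   -- sees no row
-- session 2: BEGIN ISOLATION LEVEL SERIALIZABLE;
-- session 2: SELECT v FROM upsert_target WHERE k = 1;   -- also sees no row
-- session 1: INSERT INTO upsert_target VALUES (1, 'x'); COMMIT;
-- session 2: INSERT INTO upsert_target VALUES (1, 'y'); COMMIT;

Under snapshot isolation both commits succeed and you end up with two
rows for k = 1; with the patch, the rw-conflicts between the reads and
the writes should cause one of the two transactions to be rolled back
with SQLSTATE '40001' instead.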

I hope that clarified rather than muddied the waters....

-Kevin


From: Chris Browne <cbbrowne(at)acm(dot)org>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 17:10:54
Message-ID: 874of93rz5.fsf@cbbrowne.afilias-int.info
Lists: pgsql-hackers

robertmhaas(at)gmail(dot)com (Robert Haas) writes:
> On Thu, Aug 5, 2010 at 11:43 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>> Looks like MERGE is progressing well.
>>
>> At the 2010 Dev Mtg, we put me down to work on making MERGE work
>> concurrently. That was garbled slightly and had me down as working on
>> predicate locking, which is the general solution to the problem.
>>
>> Do we still need me to work on concurrent MERGE, or is that included in
>> the current MERGE patch (can't see it), or is that covered elsewhere
>> (for example Kevin Grittner's recent work)?
>>
>> Still happy to do the work as proposed, just checking it's still required.
>
> I suspect Kevin's patch will solve it if using a sufficiently high
> transaction isolation level, but something else might be needed
> otherwise. However, I confess to ignorance as to the underlying
> issues. Why is MERGE worse in this regard than, say, UPDATE?

It's worse than UPDATE because
- It could be an INSERT, if the data's new, but
- If the data's there, it becomes an UPDATE, but
- If some concurrent transaction has just DELETEd the data that's there,
it becomes an INSERT again, but
- Oops, that DELETE rolled back, so it's an UPDATE again...

Recurse as needed to make it more undecidable as to whether it's really
an INSERT or an UPDATE :-).
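
For reference, the usual workaround today is a retry loop, e.g. the
upsert example from the documentation, more or less (hypothetical
two-column table):

CREATE TABLE db (a int PRIMARY KEY, b text);

CREATE FUNCTION merge_db(key int, data text) RETURNS void AS $$
BEGIN
    LOOP
        -- first try to update an existing row
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- no row there: try to insert; if another backend beat us to
        -- it, fall through and retry the UPDATE
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing; loop around and try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

And exactly as above, a concurrent DELETE or a rolled-back INSERT can
bounce it between the two arms, which is why the LOOP is there.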
--
Rules of the Evil Overlord #208. "Members of my Legion of Terror will
attend seminars on Sensitivity Training. It's good public relations
for them to be kind and courteous to the general population when not
actively engaged in sowing chaos and destruction."


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Chris Browne" <cbbrowne(at)acm(dot)org>,<pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Concurrent MERGE
Date: 2010-08-05 17:31:37
Message-ID: 4C5AAF290200002500034262@gw.wicourts.gov
Lists: pgsql-hackers

Chris Browne <cbbrowne(at)acm(dot)org> wrote:
> robertmhaas(at)gmail(dot)com (Robert Haas) writes:

>> I suspect Kevin's patch will solve it if using a sufficiently
>> high transaction isolation level, but something else might be
>> needed otherwise. However, I confess to ignorance as to the
>> underlying issues. Why is MERGE worse in this regard than, say,
>> UPDATE?
>
> It's worse than UPDATE because
> - It could be an INSERT, if the data's new, but
> - If the data's there, it becomes an UPDATE, but
> - If some concurrent transaction has just DELETEd the data that's
> there, it becomes an INSERT again, but
> - Oops, that DELETE rolled back, so it's an UPDATE again...
>
> Recurse as needed to make it more undecidable as to whether it's
> really an INSERT or an UPDATE :-).

Not to get too far into the serializable issues, but the server
won't do any such recursion with the serializable patch. Each
serializable transaction would have its own snapshot where the row
was there or it wasn't, and would act accordingly. If they took
conflicting actions on the same row, one of them might be rolled
back with a serialization failure. The client is likely to want to
retry the operation based on the SQLSTATE indicating serialization
failure, which (as the patch stands now) could result in some
head-banging if the client doesn't introduce some delay first. I
have an optimization in mind (described on the Wiki page) which
could help with that, but its impact on overall performance is
uncertain, so I don't want to mess with that until we have more
benchmarks in place for realistic loads which might use serializable
isolation.

So... No, it's not directly a problem on the server itself. Yes, a
client can make it a problem by resubmitting failed queries "too
quickly". But, we might be able to fix that with additional work.

-Kevin


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 17:36:32
Message-ID: 4C5AF6A0.1060005@agliodbs.com
Lists: pgsql-hackers


> At the 2010 Dev Mtg, we put me down to work on making MERGE work
> concurrently. That was garbled slightly and had me down as working on
> predicate locking, which is the general solution to the problem.

Well, we *still* want predicate locking regardless of what MERGE
supports. It's useful in about 9 different ways.

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Chris Browne" <cbbrowne(at)acm(dot)org>,<pgsql-hackers(at)postgresql(dot)org>, "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Subject: Re: Concurrent MERGE
Date: 2010-08-05 17:40:21
Message-ID: 4C5AB1350200002500034267@gw.wicourts.gov
Lists: pgsql-hackers

I wrote:

> So... No, it's not directly a problem on the server itself.

I just had a thought -- the MERGE code isn't doing anything fancy
with snapshots, is it? I haven't been tracking that discussion too
closely or read the patch. My previous comments assume that the
*snapshot* is stable for the duration of a MERGE command, at least
if the transaction isolation level is serializable.

-Kevin


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Josh Berkus" <josh(at)agliodbs(dot)com>,<pgsql-hackers(at)postgresql(dot)org>
Cc: "Dan Ports" <drkp(at)csail(dot)mit(dot)edu>
Subject: Re: Concurrent MERGE
Date: 2010-08-05 19:33:00
Message-ID: 4C5ACB9C020000250003428C@gw.wicourts.gov
Lists: pgsql-hackers

Josh Berkus <josh(at)agliodbs(dot)com> wrote:

> Well, we *still* want predicate locking regardless of what MERGE
> supports. It's useful in about 9 different ways.

I don't know whether this is the right time to discuss those 9
different uses, but just so everyone knows, the SIRead locks needed
for the SSI implementation in the current serializable patch have
some characteristics which may be exactly what you want (if you want
cache invalidation or some such) or may render them totally useless
for some purposes.

(1) They don't block anything. Ever. Conflicts with writes are
detected, and right now that is used to mark rw-conflicts between
serializable transactions. I assume we may want to add listeners
who can be signaled on such conflicts, too; but that isn't there
now.

(2) They are only acquired by serializable transactions.

(3) They can survive the transaction which acquired them, and even
the termination of the process which ran the transaction. Right now
they go away when the last serializable transaction which overlapped
the acquiring serializable transaction completes. If we add
listeners, I assume we'd want to keep them as long as a listener was
registered, probably with some timeout feature.

Just so everyone knows what is and isn't there right now.

-Kevin


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-hackers(at)postgresql(dot)org, Dan Ports <drkp(at)csail(dot)mit(dot)edu>
Subject: Re: Concurrent MERGE
Date: 2010-08-05 19:45:51
Message-ID: 4C5B14EF.2060406@agliodbs.com
Lists: pgsql-hackers

On 8/5/10 12:33 PM, Kevin Grittner wrote:
> I don't know whether this is the right time to discuss those 9
> different uses, but just so everyone knows, the SIRead locks needed
> for the SSI implementation in the current serializable patch have
> some characteristics which may be exactly what you want (if you want
> cache invalidation or some such) or may render them totally useless
> for some purposes.

Yeah, I haven't wrapped my head around your stuff enough yet. I would
say that having such locks available only for serializable transactions
limits some of the uses I'm thinking of.

Anyway, here are some of the uses I'm thinking of:

(1) Pre-insert lock: you know that you're going to insert a record with
PK="X" later in a long-running SP, so you want to lock out other inserts
of PK="X" at the beginning of the procedure.

(2) FK Locking: you plan to modify or delete a parent FK record in this
transaction, so you want to prevent any updates or inserts on its
related child records (in my experience, FK-related sharelocks are the
#1 cause of deadlocking; see the sketch after this list).

(3) No-duplicate queueing: you want to create a queue table which
doesn't accept duplicate events, but you don't want it to be a source of
deadlocks. This is a variant of (1), but a common case.

(4) Blackouts: records of type "x" aren't supposed to be created during
period "y to y1" or while procedure "z" is running. Predicate locking
can be used to prevent this more easily than adding and removing a trigger.

(5) Debugging: (variant of 4) records of type "x" keep getting inserted
in the table, and you don't know where they're coming from. You can
predicate lock to force an error and debug it.

... that's off the top of my head.
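
For (2), the classic case looks like this (hypothetical schema; the FK
check takes a share lock on the referenced parent row):

CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (pid int REFERENCES parent (id));
INSERT INTO parent VALUES (1);

-- session A: BEGIN; INSERT INTO child VALUES (1);
--            (the FK check share-locks parent row 1)
-- session B: BEGIN; INSERT INTO child VALUES (1);
--            (likewise; share locks don't conflict, so no blocking yet)
-- session A: UPDATE parent SET id = id WHERE id = 1;  -- waits on B's share lock
-- session B: UPDATE parent SET id = id WHERE id = 1;  -- waits on A: deadlock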

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Josh Berkus" <josh(at)agliodbs(dot)com>
Cc: "Dan Ports" <drkp(at)csail(dot)mit(dot)edu>,<pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Concurrent MERGE
Date: 2010-08-05 20:07:01
Message-ID: 4C5AD3950200002500034292@gw.wicourts.gov
Lists: pgsql-hackers

Josh Berkus <josh(at)agliodbs(dot)com> wrote:

> Anyway, here are some of the uses I'm thinking of:
>
> (1) Pre-insert lock: you know that you're going to insert a record
> with PK="X" later in a long-running SP, so you want to lock out
> other inserts of PK="X" at the beginning of the procedure.

Well, if we added a listener, you could SELECT the desired key, and
be notified of a conflicting insert, but that's not really what
you're looking for. It does seem to me that you could solve this
one by inserting the tuple and then updating it at the end, but I
suppose you're looking to avoid the resulting dead tuple. Perhaps a
listener could be fed to a "cancel the conflicting query" routine?
In any event, the only resolution to such a conflict is to kill
something, right? And right now, a write/write conflict would occur
which would resolve it, you just want to be able to "reserve" the
slot up front, so your transaction isn't canceled after doing a
bunch of work, right?

> (2) FK Locking: you plan to modify or delete a parent FK record in
> this transaction, so you want to prevent any updates or inserts on
> its related child records (in my experience, FK-related
> sharelocks are the #1 cause of deadlocking).

I don't see how that can be resolved without killing something, do
you? You would just have to replace the current deadlock with some
other form of serialization failure. (And no, I will never give up
the position that a deadlock *is* one of many forms of serialization
failure.)

> (3) No-duplicate queueing: you want to create a queue table which
> doesn't accept duplicate events, but you don't want it to be a
> source of deadlocks. This is a variant of (1), but a common case.

I must be missing something. Please explain how this would work
*without* serialization failures. As far as I can see, you can
replace deadlocks with some other form, but I don't see the point.
Basically, I think we should change the deadlock SQLSTATE to '40001',
and any code which needs to deal with such things should treat that
SQLSTATE as meaning "that wasn't a good time to try that transaction,
try again in a bit."

Or, if you just want it to do nothing if the row already exists,
perhaps the new MERGE code would work?
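
Something like this, I mean (hypothetical queue table; MERGE syntax as
in the pending patch, so take it with a grain of salt):

MERGE INTO event_queue q
USING (VALUES ('event-42')) AS src (event_id)
ON (q.event_id = src.event_id)
WHEN NOT MATCHED THEN
    INSERT (event_id) VALUES (src.event_id);

With no WHEN MATCHED arm, an existing row simply falls through without
an error. Whether that is safe against a concurrent identical MERGE is,
of course, exactly the question this thread started with.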

> (4) Blackouts: records of type "x" aren't supposed to be created
> during period "y to y1" or while procedure "z" is running.
> Predicate locking can be used to prevent this more easily than
> adding and removing a trigger.

I would have thought that advisory locks covered this. In what way
do they fall short for this use case?
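
What I had in mind is roughly this (made-up lock key):

-- procedure "z" wraps itself in an exclusive advisory lock:
SELECT pg_advisory_lock(42);
-- ... do the work during which "x" records must not appear ...
SELECT pg_advisory_unlock(42);

-- and every code path that creates records of type "x" takes the
-- shared form, which only blocks while "z" holds the exclusive lock:
SELECT pg_advisory_lock_shared(42);
-- INSERT the record of type "x" here
SELECT pg_advisory_unlock_shared(42);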

> (5) Debugging: (variant of 4) records of type "x" keep getting
> inserted in the table, and you don't know where they're coming
> from. You can predicate lock to force an error and debug it.

Hmmmm.... Assuming fine enough granularity (like from an index for
which a range could be locked to detect the conflict) adding a
listener to the SIRead lock handling would be good for this. Well,
as long as the transactions were serializable.

-Kevin


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Dan Ports <drkp(at)csail(dot)mit(dot)edu>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 20:37:07
Message-ID: 4C5B20F3.9080101@agliodbs.com
Lists: pgsql-hackers

Kevin,

Overall, you're missing the point: there are workarounds for all of
these things now. However, they are *workarounds*, which means that
they are awkward, expensive, and/or hard to administer; having
predicate locks would make things much easier.

> I don't see how that can be resolved without killing something, do
> you? You would just have to replace the current deadlock with some
> other form of serialization failure. (And no, I will never give up
> the position that a deadlock *is* one of many forms of serialization
> failure.)

If you're in lock nowait mode, you could get back a "can't lock" error
message immediately rather than waiting for the procedure to time out.
There's certainly going to be an error regardless; it's a question of
how expensive it is for the application and the database server.
Deadlocks are *very* expensive, especially since our deadlock detector
doesn't always figure them out successfully (which means the deadlock
has to be resolved by the DBA). So any other type of serialization
failure or error is better than deadlocking.

> I must be missing something. Please explain how this would work
> *without* serialization failures. As far as I can see, you can
> replace deadlocks with some other form, but I don't see the point.

See above.

>> (4) Blackouts: records of type "x" aren't supposed to be created
>> during period "y to y1" or while procedure "z" is running.
>> Predicate locking can be used to prevent this more easily than
>> adding and removing a trigger.
>
> I would have thought that advisory locks covered this. In what way
> do they fall short for this use case?

Currently, I do use advisory locks for this case. However, they require
a fair amount of administrative design and monitoring overhead.

> Hmmmm.... Assuming fine enough granularity (like from an index for
> which a range could be locked to detect the conflict) adding a
> listener to the SIRead lock handling would be good for this. Well,
> as long as the transactions were serializable.

Yeah, it's that last caveat which makes SIRead locks not as flexible as
the theoretical predicate lock. Of course, any eventual actual
implementation of predicate locks might be equally inflexible.

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Josh Berkus" <josh(at)agliodbs(dot)com>
Cc: "Dan Ports" <drkp(at)csail(dot)mit(dot)edu>,<pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Concurrent MERGE
Date: 2010-08-05 20:59:53
Message-ID: 4C5ADFF902000025000342A5@gw.wicourts.gov
Lists: pgsql-hackers

Josh Berkus <josh(at)agliodbs(dot)com> wrote:

> Overall, you're missing the point: there are workarounds for all
> of these things now. However, they are *workarounds*, which means
> that they are awkward, expensive, and/or hard to administer;
> having predicate locks would make things much easier.

Well, if some form of the SSI patch goes in, most of your use cases
can be solved just by making the transactions serializable and
letting the chips fall where they may. That's the whole point of
it. I'll say it again: with true serializable transactions, if you
can show that your transaction will do the right thing if there are
no concurrent transactions, it will do the right thing in any mix of
serializable transactions or be rolled back with a serialization
failure. Full stop. No need to explicitly lock anything (with or
without NOWAIT), no need to SELECT FOR UPDATE/SHARE, no need to
"reserve" anything -- I consider all of those to be awkward
workarounds. You just systematically retry transactions which fail
with SQLSTATE '40001'. If your software isn't set up so that this
can be done once, in one place, you need to rethink your design.

I'm not at all clear how any form of predicate locking can help with
the "blackouts" example. Perhaps if you explained how you see that
working I might get it.

Oh, and if deadlocks are that broken, it's a bit scary that we have
let that go. Is the problem that technically intractable?

-Kevin


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Dan Ports <drkp(at)csail(dot)mit(dot)edu>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 22:16:55
Message-ID: 4C5B3857.7070800@agliodbs.com
Lists: pgsql-hackers

On 8/5/10 1:59 PM, Kevin Grittner wrote:
> Oh, and if deadlocks are that broken, it's a bit scary that we have
> let that go. Is the problem that technically intractable?

Yes; it's a major project. Our detector works pretty well for deadlocks
which are 2-process locks or even several processes all locking against
the same first process. However, triangular and quadrilateral deadlocks
(which I've seen more than once) it completely cannot handle, and some
types of activity which can cause deadlocks (like autovacuum or DDL
activity) also seem to be outside its purview. The latter is probably
fixable if I can create some good test cases.

However, the "circular" deadlock problem has an n! issue with detecting it.

Also, even where the deadlock detector does its job, it's still the most
expensive type of serialization failure:

1. the detector will wait at least 1 second to check, so we're usually
looking at a couple seconds to resolve the deadlock;
2. since deadlocks don't happen in testing, most application error
handling isn't set up for them;
3. deadlocks can, and do, result in cancelling several transactions
instead of just one; there is no "winner" which is allowed to complete.

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Dan Ports <drkp(at)csail(dot)mit(dot)edu>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 22:41:13
Message-ID: 4039.1281048073@sss.pgh.pa.us
Lists: pgsql-hackers

Josh Berkus <josh(at)agliodbs(dot)com> writes:
> Yes; it's a major project. Our detector works pretty well for deadlocks
> which are 2-process locks or even several processes all locking against
> the same first process. However, triangular and quadrilateral deadlocks
> (which I've seen more than once) it completely cannot handle,

Hm? Please explain what you're talking about.

> and some
> types of activity which can cause deadlocks (like autovacuum or DDL
> activity) also seem to be outside its purview.

There's some known issues with deadlocks involving LWLocks as well as
regular locks, which I agree aren't fixable without some significant
rework. But I don't believe there's anything fundamentally wrong with
the deadlock detector --- the real problem there is stretching LWLocks
beyond their design intention, namely to be used only for situations
where deadlock is impossible.

> Also, even where the deadlock detector does its job, it's still the most
> expensive type of serialization failure:

Well, that's certainly true --- you don't want deadlock recovery to be
part of any high-performance path.

> 3. deadlocks can, and do, result in cancelling several transactions
> instead of just one; there is no "winner" which is allowed to complete.

Not sure I believe this either; one deadlock kills one transaction.
If you lose multiple transactions I think you had multiple deadlocks.

regards, tom lane


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Dan Ports <drkp(at)csail(dot)mit(dot)edu>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 22:49:05
Message-ID: 4C5B3FE1.5030500@agliodbs.com
Lists: pgsql-hackers


> Hm? Please explain what you're talking about.

Transaction A locks 1 and wants a lock on 2
Transaction B locks 2 and wants a lock on 3
Transaction C locks 3 and wants a lock on 1

I've never had the deadlock detector successfully deal with the above.
Let alone a 4-way.

> Not sure I believe this either; one deadlock kills one transaction.
> If you lose multiple transactions I think you had multiple deadlocks.

Deadlock termination kills *all* of the transactions involved in the
deadlock; what else could it do? This is as opposed to serialization
failures, in which usually only one of the transactions involved fails.

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


From: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 23:06:17
Message-ID: 4C5B43E9.7090800@catalyst.net.nz
Lists: pgsql-hackers

On 06/08/10 10:49, Josh Berkus wrote:
>
>> Hm? Please explain what you're talking about.
>>
> Transaction A locks 1 and wants a lock on 2
> Transaction B locks 2 and wants a lock on 3
> Transaction C locks 3 and wants a lock on 1
>
> I've never had the deadlock detector successfully deal with the above.
> Let alone a 4-way.
>
>

Hmm - seems to work ok for me (8.3.11 with pgbench schema updating
branches table by bid):

UPDATE branches SET filler='filled' WHERE bid=:x
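
That is, with three concurrent sessions, roughly:

-- A: BEGIN; UPDATE branches SET filler='filled' WHERE bid=1;
-- B: BEGIN; UPDATE branches SET filler='filled' WHERE bid=2;
-- C: BEGIN; UPDATE branches SET filler='filled' WHERE bid=3;
-- A: UPDATE branches SET filler='filled' WHERE bid=2;  -- blocks on B
-- B: UPDATE branches SET filler='filled' WHERE bid=3;  -- blocks on C
-- C: UPDATE branches SET filler='filled' WHERE bid=1;  -- blocks on A,
--    completing the cycle; the detector then picks a victim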

I get: transaction A succeeds, B is rolled back by the deadlock detector,
and C is left waiting for A to commit or roll back. What do you find?

Cheers

Mark


From: Andres Freund <andres(at)anarazel(dot)de>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Dan Ports <drkp(at)csail(dot)mit(dot)edu>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 23:07:10
Message-ID: 20100805230709.GA2866@anarazel.de
Lists: pgsql-hackers

On Thu, Aug 05, 2010 at 03:49:05PM -0700, Josh Berkus wrote:
>
> > Hm? Please explain what you're talking about.
>
> Transaction A locks 1 and wants a lock on 2
> Transaction B locks 2 and wants a lock on 3
> Transaction C locks 3 and wants a lock on 1
>
> I've never had the deadlock detector successfully deal with the above.
> Let alone a 4-way.
Hm. I have seen 5-way deadlocks getting resolved just recently. I can
find the relevant details if you find it interesting, but I doubt it is.

> > Not sure I believe this either; one deadlock kills one transaction.
> > If you lose multiple transactions I think you had multiple deadlocks.
>
> Deadlock termination kills *all* of the transactions involved in the
> deadlock; what else could it do? This is as opposed to serialization
> failures, in which usually only one of the transactions involved fails.
Uhm:

postgres=# CREATE TABLE a();
CREATE TABLE
postgres=# CREATE TABLE b();
CREATE TABLE

a: postgres=# BEGIN; LOCK a;

b: postgres=# BEGIN; LOCK b;
b: postgres=# LOCK a;    -- blocks, waiting for session a's lock on "a"

a: postgres=# LOCK b;    -- blocks too; the wait graph is now circular

b:
ERROR: deadlock detected
DETAIL: Process 12016 waits for AccessExclusiveLock on relation 24585 of database 11564; blocked by process 12011.
Process 12011 waits for AccessExclusiveLock on relation 24588 of database 11564; blocked by process 12016.
HINT: See server log for query details

Only session b is rolled back; session a's LOCK b is then granted and
its transaction can go on to commit. Afaik it has worked like that for
years.

Andres


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Dan Ports <drkp(at)csail(dot)mit(dot)edu>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 23:08:45
Message-ID: 4741.1281049725@sss.pgh.pa.us
Lists: pgsql-hackers

Josh Berkus <josh(at)agliodbs(dot)com> writes:
>> Hm? Please explain what you're talking about.

> Transaction A locks 1 and wants a lock on 2
> Transaction B locks 2 and wants a lock on 3
> Transaction C locks 3 and wants a lock on 1

> I've never had the deadlock detector successfully deal with the above.
> Let alone a 4-way.

>> Not sure I believe this either; one deadlock kills one transaction.
>> If you lose multiple transactions I think you had multiple deadlocks.

> Deadlock termination kills *all* of the transactions involved in the
> deadlock; what else could it do? This is as opposed to serialization
> failures, in which usually only one of the transactions involved fails.

I'm not sure whose deadlock detector you're talking about, but it's
not Postgres'.

regards, tom lane


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-05 23:24:22
Message-ID: 4C5B4826.1080700@agliodbs.com
Lists: pgsql-hackers


>> I've never had the deadlock detector successfully deal with the above.
>> Let alone a 4-way.
> Hm. I have seen 5way deadlocks getting resolved just recently. I can
> find the relevant if you find it interesting, but I doubt it is.

Ah, I didn't know that it was even *supposed* to resolve
larger-than-2-way deadlocks, so I didn't attempt to look for more
granular information. Next time I need to resolve one of these, I'll
get analysis information about exactly which kinds of locks are being
held where. I've seen it happen at multiple sites running 8.3 and 8.4,
so whatever code is supposed to resolve circular deadlocks doesn't work
all the time.

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrent MERGE
Date: 2010-08-06 07:42:30
Message-ID: 1281080550.1838.2495.camel@ebony
Lists: pgsql-hackers

On Thu, 2010-08-05 at 19:39 +0300, Heikki Linnakangas wrote:
> On 05/08/10 18:43, Simon Riggs wrote:
> > Do we still need me to work on concurrent MERGE, or is that included in
> > the current MERGE patch (can't see it), or ...
>
> It's not in the current MERGE patch.

OK, I will work on it, eventually, in this release.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Training and Services