Re: [HACKERS] logical decoding of two-phase transactions

Lists: pgsql-hackers
From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Subject: logical decoding of two-phase transactions
Date: 2016-12-31 08:36:10
Message-ID: 02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru

Here is a resubmission of the patch to implement logical decoding of two-phase transactions (instead of treating them
as a usual transaction at commit time) [1]. I’ve slightly polished things and used the test_decoding output plugin as the client.

The general idea here is quite simple:

* Write the GID along with commit/prepare records in the 2PC case.
* Add several routines to decode prepare records in the same way as already happens in logical decoding.

I’ve also added an explicit LOCK statement to the test_decoding regression suite to check that it doesn’t break things. If
somebody can create a scenario that blocks decoding because of an existing dummy backend lock, that would be a great
help. Right now all my tests pass (including TAP tests that check recovery of two-phase transactions in case of failures, from
the adjacent mail thread).

If we agree on the current approach, then I’m ready to add this stuff to the proposed in-core logical replication.

[1] https://www.postgresql.org/message-id/EE7452CA-3C39-4A0E-97EC-17A414972884%40postgrespro.ru

Attachment Content-Type Size
logical_twophase.diff application/octet-stream 24.1 KB
unknown_filename text/plain 100 bytes

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-04 21:20:20
Message-ID: CANP8+jJGRpczm8L=LQCzuiEG3qHbY1cQB+T4A-cct7xgFmjj9g@mail.gmail.com

On 31 December 2016 at 08:36, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
> Here is resubmission of patch to implement logical decoding of two-phase transactions (instead of treating them
> as usual transaction when commit) [1] I’ve slightly polished things and used test_decoding output plugin as client.

Sounds good.

> General idea quite simple here:
>
> * Write gid along with commit/prepare records in case of 2pc

The GID is now variable sized. You seem to have added this to every
commit, not just 2PC.

> * Add several routines to decode prepare records in the same way as it already happens in logical decoding.
>
> I’ve also added explicit LOCK statement in test_decoding regression suit to check that it doesn’t break thing.

Please explain that in comments in the patch.

> If
> somebody can create scenario that will block decoding because of existing dummy backend lock that will be great
> help. Right now all my tests passing (including TAP tests to check recovery of twophase tx in case of failures from
> adjacent mail thread).
>
> If we will agree about current approach than I’m ready to add this stuff to proposed in-core logical replication.
>
> [1] https://www.postgresql.org/message-id/EE7452CA-3C39-4A0E-97EC-17A414972884%40postgrespro.ru

We'll need some measurements about additional WAL space or mem usage
from these approaches. Thanks.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-05 06:43:25
Message-ID: CANP8+jJ+=hwLcDSNAqBmZwHh0Pb46SERQsBC1i4VCpkgkL9Jjg@mail.gmail.com

On 4 January 2017 at 21:20, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On 31 December 2016 at 08:36, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>> Here is resubmission of patch to implement logical decoding of two-phase transactions (instead of treating them
>> as usual transaction when commit) [1] I’ve slightly polished things and used test_decoding output plugin as client.
>
> Sounds good.
>
>> General idea quite simple here:
>>
>> * Write gid along with commit/prepare records in case of 2pc
>
> GID is now variable sized. You seem to have added this to every
> commit, not just 2PC

I've just realised that you're adding GID because it allows you to
uniquely identify the prepared xact. But then the prepared xact will
also have a regular TransactionId, which is also unique. GID exists
for users to specify things, but it is not needed internally and we
don't need to add it here. What we do need is for the commit prepared
message to remember what the xid of the prepare was and then re-find
it using the commit WAL record's twophase_xid field. So we don't need
to add GID to any WAL records, nor to any in-memory structures.

Please re-work the patch to include twophase_xid, which should make
the patch smaller and much faster too.

Please add comments to explain how and why patches work. Design
comments allow us to check the design makes sense and if it does
whether all the lines in the patch are needed to follow the design.
Without that patches are much harder to commit and we all want patches
to be easier to commit.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-05 10:21:10
Message-ID: D11D3072-6A72-439C-8B62-1D2628134A60@postgrespro.ru

Thank you for looking into this.

> On 5 Jan 2017, at 09:43, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>
>> GID is now variable sized. You seem to have added this to every
>> commit, not just 2PC
>

Hm, didn’t realise that; I’ll fix.

> I've just realised that you're adding GID because it allows you to
> uniquely identify the prepared xact. But then the prepared xact will
> also have a regular TransactionId, which is also unique. GID exists
> for users to specify things, but it is not needed internally and we
> don't need to add it here.

I think we can’t avoid pushing the GID down to the client side anyway.

If we push down only the local TransactionId to the remote server, then we will lose the mapping
of GID to TransactionId, and there will be no way for the user to identify their transaction on
the second server. Also, Open XA and lots of libraries (e.g. J2EE) assume that the
same GID is used everywhere, and that it’s the same GID that was issued by the client.

Requirements for two-phase decoding can differ depending on what one wants
to build around it, and I believe in some situations pushing down the xid is enough. But IMO
dealing with reconnects, failures, and client libraries will force the programmer to use
the same GID everywhere.

> What we do need is for the commit prepared
> message to remember what the xid of the prepare was and then re-find
> it using the commit WAL record's twophase_xid field. So we don't need
> to add GID to any WAL records, nor to any in-memory structures.

The other part of the story is how to find the GID during decoding of the commit prepared record.
I did that by adding a GID field to the commit WAL record, because by decoding time
all memory structures that were holding the xid<->gid correspondence have already been cleaned up.

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-05 10:49:30
Message-ID: CANP8+j+xxYB5GoFebq1NGm17TEAaZ=JvM-Ok-Hp+n6De3pseeA@mail.gmail.com

On 5 January 2017 at 10:21, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
> Thank you for looking into this.
>
>> On 5 Jan 2017, at 09:43, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>>
>>> GID is now variable sized. You seem to have added this to every
>>> commit, not just 2PC
>>
>
> Hm, didn’t realise that, i’ll fix.
>
>> I've just realised that you're adding GID because it allows you to
>> uniquely identify the prepared xact. But then the prepared xact will
>> also have a regular TransactionId, which is also unique. GID exists
>> for users to specify things, but it is not needed internally and we
>> don't need to add it here.
>
> I think we anyway can’t avoid pushing down GID to the client side.
>
> If we will push down only local TransactionId to remote server then we will lose mapping
> of GID to TransactionId, and there will be no way for user to identify his transaction on
> second server. Also Open XA and lots of libraries (e.g. J2EE) assumes that there is
> the same GID everywhere and it’s the same GID that was issued by the client.
>
> Requirements for two-phase decoding can be different depending on what one want
> to build around it and I believe in some situations pushing down xid is enough. But IMO
> dealing with reconnects, failures and client libraries will force programmer to use
> the same GID everywhere.

Surely in this case the master server is acting as the Transaction
Manager, and it knows the mapping, so we are good?

I guess if you are using >2 nodes then you need to use full 2PC on each node.

But even then, if you adopt the naming convention that all in-progress
xacts are named RepOriginId-EPOCH-XID, so that they have a fully
unique GID on all of the child nodes, then we don't need to add the
GID.
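As a rough illustration of that convention (the function name and exact format string here are hypothetical, not from the thread's patch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: build a globally unique GID for an in-progress
 * 2PC xact from its replication origin id, epoch, and xid, following
 * the RepOriginId-EPOCH-XID convention suggested above. */
static void
make_origin_gid(char *buf, size_t buflen,
                unsigned origin_id, unsigned epoch, unsigned xid)
{
    snprintf(buf, buflen, "%u-%u-%u", origin_id, epoch, xid);
}
```

Since every node can reconstruct such a GID from (origin, epoch, xid) alone, it would not need to travel in WAL.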

Please explain precisely how you expect to use this, to check that GID
is required.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-05 12:43:04
Message-ID: DED204B8-0877-4BAB-B2FE-0F450BD56186@postgrespro.ru


> On 5 Jan 2017, at 13:49, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>
> Surely in this case the master server is acting as the Transaction
> Manager, and it knows the mapping, so we are good?
>
> I guess if you are using >2 nodes then you need to use full 2PC on each node.
>
> Please explain precisely how you expect to use this, to check that GID
> is required.
>

For example, suppose we are using logical replication just for failover/HA and letting the user
act as the transaction manager itself. Then suppose the user prepared a tx on server A and server A
crashed. After that the client may want to reconnect to server B and commit/abort that tx.
But the user only has the GID that was used during prepare.

> But even then, if you adopt the naming convention that all in-progress
> xacts will be called RepOriginId-EPOCH-XID, so they have a fully
> unique GID on all of the child nodes then we don't need to add the
> GID.

Yes, that’s also possible, but it seems less flexible, restricting us to some
specific GID format.

Anyway, I can measure the WAL space overhead introduced by the GIDs inside commit records
to know exactly what the cost of such an approach would be.

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-06 04:22:51
Message-ID: CAMsr+YG2DmNBVs1fnwcD_=sQDC=DAQq=0fqNwZ594_2Tv8-GqA@mail.gmail.com

On 5 January 2017 at 20:43, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:

> Anyway, I can measure WAL space overhead introduced by the GID’s inside commit records
> to know exactly what will be the cost of such approach.

Sounds like a good idea, especially if you remove any attempt to work
with GIDs for !2PC commits at the same time.

I don't think I care about having access to the GID for the use case I
have in mind, since we'd actually be wanting to hijack a normal COMMIT
and internally transform it to PREPARE TRANSACTION, <do stuff>, COMMIT
PREPARED. But for the more general case of logical decoding of 2PC I
can see the utility of having the xact identifier.

If we presume we're only interested in logically decoding 2PC xacts
that are not yet COMMIT PREPAREd, can we not avoid the WAL overhead of
writing the GID by looking it up in our shmem state at decoding-time
for PREPARE TRANSACTION? If we can't find the prepared transaction in
TwoPhaseState we know to expect a following ROLLBACK PREPARED or
COMMIT PREPARED, so we shouldn't decode it at the PREPARE TRANSACTION
stage.
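Craig's decoding-time lookup can be modelled with a small self-contained sketch; the structures below are illustrative stand-ins, not PostgreSQL's actual TwoPhaseState:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative model only: a stand-in for the real TwoPhaseState shared
 * memory; names and layout here are assumptions. */
typedef struct PreparedXactEntry
{
    uint32_t xid;
    char     gid[200];
} PreparedXactEntry;

static PreparedXactEntry prepared_xacts[8];
static int n_prepared = 0;

/* Return the GID for a still-prepared xid, or NULL if the xact has
 * already been COMMIT/ROLLBACK PREPARED (its entry is gone). */
static const char *
lookup_prepared_gid(uint32_t xid)
{
    for (int i = 0; i < n_prepared; i++)
        if (prepared_xacts[i].xid == xid)
            return prepared_xacts[i].gid;
    return NULL;
}

/* The decoding-time decision sketched above: decode at PREPARE
 * TRANSACTION only if the xact is still prepared; otherwise a COMMIT
 * PREPARED or ROLLBACK PREPARED record follows, so skip it here. */
static bool
should_decode_at_prepare(uint32_t xid)
{
    return lookup_prepared_gid(xid) != NULL;
}
```

The appeal of this design is that the GID lives only in the (already persistent) prepared-xact state, avoiding any extra WAL payload.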

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-06 10:40:03
Message-ID: CANP8+jJJJd6698m=1qBOdXBuikov7uQC1bM3CKg3AKJmtwAspw@mail.gmail.com

On 5 January 2017 at 12:43, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> On 5 Jan 2017, at 13:49, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>
>> Surely in this case the master server is acting as the Transaction
>> Manager, and it knows the mapping, so we are good?
>>
>> I guess if you are using >2 nodes then you need to use full 2PC on each node.
>>
>> Please explain precisely how you expect to use this, to check that GID
>> is required.
>>
>
> For example if we are using logical replication just for failover/HA and allowing user
> to be transaction manager itself. Then suppose that user prepared tx on server A and server A
> crashed. After that client may want to reconnect to server B and commit/abort that tx.
> But user only have GID that was used during prepare.

I don't think that's the case you're trying to support, and I don't think
it's a common case that we want to pay the price to put into core in
a non-optional way.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-26 02:14:06
Message-ID: CAMsr+YH0aRgnk8XwyYypV1E10DNXwgGpWTLq_P2MGjACpiwMpA@mail.gmail.com

On 5 January 2017 at 20:43, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> On 5 Jan 2017, at 13:49, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>
>> Surely in this case the master server is acting as the Transaction
>> Manager, and it knows the mapping, so we are good?
>>
>> I guess if you are using >2 nodes then you need to use full 2PC on each node.
>>
>> Please explain precisely how you expect to use this, to check that GID
>> is required.
>>
>
> For example if we are using logical replication just for failover/HA and allowing user
> to be transaction manager itself. Then suppose that user prepared tx on server A and server A
> crashed. After that client may want to reconnect to server B and commit/abort that tx.
> But user only have GID that was used during prepare.
>
>> But even then, if you adopt the naming convention that all in-progress
>> xacts will be called RepOriginId-EPOCH-XID, so they have a fully
>> unique GID on all of the child nodes then we don't need to add the
>> GID.
>
> Yes, that’s also possible but seems to be less flexible restricting us to some
> specific GID format.
>
> Anyway, I can measure WAL space overhead introduced by the GID’s inside commit records
> to know exactly what will be the cost of such approach.

Stas,

Have you had a chance to look at this further?

I think the approach of storing just the xid and fetching the GID
during logical decoding of the PREPARE TRANSACTION is probably the
best way forward, per my prior mail. That should eliminate Simon's
objection re the cost of tracking GIDs and still let us have access to
them when we want them, which is the best of both worlds really.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-26 07:43:58
Message-ID: 89073436-925E-4299-A9AF-88D04E55C231@postgrespro.ru

>>
>> Yes, that’s also possible but seems to be less flexible restricting us to some
>> specific GID format.
>>
>> Anyway, I can measure WAL space overhead introduced by the GID’s inside commit records
>> to know exactly what will be the cost of such approach.
>
> Stas,
>
> Have you had a chance to look at this further?

Generally I’m okay with Simon’s approach and will send an updated patch. Anyway, I want to
perform some tests to estimate how much disk space is actually wasted by the extra WAL records.

> I think the approach of storing just the xid and fetching the GID
> during logical decoding of the PREPARE TRANSACTION is probably the
> best way forward, per my prior mail.

I don’t think that’s possible this way. If we don’t put the GID in the commit record, then by the time
logical decoding happens the transaction will already be committed/aborted and there will
be no easy way to get that GID. I thought about several possibilities:

* Tracking an xid/gid map in memory doesn’t help much: if the server reboots between prepare
and commit, we’ll lose that mapping.
* We could provide some hooks on prepared tx recovery during startup, but that approach also fails
if a reboot happens between the commit and the decoding of that commit.
* Logical messages are WAL-logged, but they don’t have any redo function, so they don’t help much.

So to support user-accessible 2PC over replication based on 2PC decoding, we would have to invent
something nastier, like writing the GIDs into a table.

> That should eliminate Simon's
> objection re the cost of tracking GIDs and still let us have access to
> them when we want them, which is the best of both worlds really.

Having 2PC decoding in core is a good thing anyway even without GID tracking =)

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-26 09:51:28
Message-ID: CAMsr+YGX6Q-VNXEhu3sriCf0wCS6hjQ0Lhj4rJ8VueVuuka+fg@mail.gmail.com

On 26 Jan. 2017 18:43, "Stas Kelvich" <s(dot)kelvich(at)postgrespro(dot)ru> wrote:

>>
>> Yes, that’s also possible but seems to be less flexible restricting us to some
>> specific GID format.
>>
>> Anyway, I can measure WAL space overhead introduced by the GID’s inside commit records
>> to know exactly what will be the cost of such approach.
>
> I think the approach of storing just the xid and fetching the GID
> during logical decoding of the PREPARE TRANSACTION is probably the
> best way forward, per my prior mail.

> I don’t think that’s possible in this way. If we will not put GID in commit
> record, than by the time when logical decoding will happened transaction
> will be already committed/aborted and there will be no easy way to get that GID.

My thinking is that if the 2PC xact is by that point COMMIT PREPARED or
ROLLBACK PREPARED we don't care that it was ever 2pc and should just decode
it as a normal xact. Its gid has ceased to be significant and no longer
holds meaning since the xact is resolved.

The point of logical decoding of 2PC is to allow peers to participate in the
decision on whether to commit or not, rather than only being able to decode
the xact once committed, as is currently the case.

If it's already committed there's no point treating it as anything special.

So when we get to the prepare transaction in xlog we look to see if it's
already committed / rolled back. If so we proceed normally like current
decoding does. Only if it's still prepared do we decode it as 2pc and
supply the gid to a new output plugin callback for prepared xacts.

> I thought about several possibilities:
>
> * Tracking xid/gid map in memory also doesn’t help much — if server reboots
> between prepare and commit we’ll lose that mapping.

Er what? That's why I suggested using the prepared xacts shmem state. It's
persistent as you know from your work on prepared transaction files. It has
all the required info.


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-26 11:34:47
Message-ID: 3477E635-F590-4432-BD20-C59142B65CD3@postgrespro.ru


> On 26 Jan 2017, at 12:51, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> * Tracking xid/gid map in memory also doesn’t help much — if server reboots between prepare
> and commit we’ll lose that mapping.
>
> Er what? That's why I suggested using the prepared xacts shmem state. It's persistent as you know from your work on prepared transaction files. It has all the required info.

Imagine the following scenario:

1. PREPARE happened
2. PREPARE decoded and sent where it should be sent
3. We got all responses from participating nodes and issue COMMIT/ABORT
4. COMMIT/ABORT decoded and sent

After step 3 there is no more memory state associated with that prepared tx, so if we fail
between 3 and 4 then we can’t know the GID unless we wrote it into the commit record (or a table).

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-26 23:52:45
Message-ID: CAMsr+YHos95GFyfd2Gk61-66wpxFTgXJh=sBV3Rtfhe3xzG1HQ@mail.gmail.com

On 26 January 2017 at 19:34, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:

> Imagine following scenario:
>
> 1. PREPARE happend
> 2. PREPARE decoded and sent where it should be sent
> 3. We got all responses from participating nodes and issuing COMMIT/ABORT
> 4. COMMIT/ABORT decoded and sent
>
> After step 3 there is no more memory state associated with that prepared tx, so if will fail
> between 3 and 4 then we can’t know GID unless we wrote it commit record (or table).

If the decoding session crashes/disconnects and restarts between 3 and
4, we know the xact is now committed or rolled back and we don't care
about its gid anymore; we can decode it as a normal committed xact or
skip over it if aborted. If Pg crashes between 3 and 4 the same
applies, since all decoding sessions must restart.

No decoding session can ever start up between 3 and 4 without passing
through 1 and 2, since we always restart decoding at restart_lsn and
restart_lsn cannot be advanced past the assignment (BEGIN) of a given
xid until we pass its commit record and the downstream confirms it has
flushed the results.

The reorder buffer doesn't even really need to keep track of the gid
between 3 and 4, though it should, to save the output plugin and
downstream the hassle of keeping an xid-to-gid mapping. All it needs
is to know whether we sent a given xact's data to the output plugin at
PREPARE time, so we can suppress sending them again at COMMIT time,
and we can store that info on the ReorderBufferTXN. We can store the
gid there too.

We'll need two new output plugin callbacks

prepare_cb
rollback_cb

since an xact can roll back after we decode PREPARE TRANSACTION (or
during it, even) and we have to be able to tell the downstream to
throw the data away.

I don't think the rollback callback should be called
abort_prepared_cb, because we'll later want to add the ability to
decode interleaved xacts' changes as they are made, before commit, and
in that case will also need to know if they abort. We won't care if
they were prepared xacts or not, but we'll know based on the
ReorderBufferTXN anyway.

We don't need a separate commit_prepared_cb, the existing commit_cb is
sufficient. The gid will be accessible on the ReorderBufferTXN.
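The pieces described above (the stashed gid, the sent-at-PREPARE flag, and the two new callbacks) might look roughly like this; the types below are simplified stand-ins, not the real reorder buffer structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the real ReorderBufferTXN; only the fields
 * discussed in this thread are shown, and their names are assumptions. */
typedef struct ReorderBufferTXN
{
    uint32_t    xid;
    const char *gid;             /* stashed when PREPARE is decoded */
    bool        sent_at_prepare; /* changes already sent to the plugin? */
} ReorderBufferTXN;

/* The two new output-plugin callbacks proposed above; the existing
 * commit_cb keeps covering COMMIT PREPARED, with the gid available on
 * the ReorderBufferTXN. */
typedef void (*prepare_cb) (ReorderBufferTXN *txn);
typedef void (*rollback_cb) (ReorderBufferTXN *txn);

/* At COMMIT PREPARED time, suppress re-sending changes that were
 * already streamed to the output plugin when PREPARE was decoded. */
static bool
need_send_changes_at_commit(const ReorderBufferTXN *txn)
{
    return !txn->sent_at_prepare;
}
```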

Now, if it's simpler to just xlog the gid at COMMIT PREPARED time when
wal_level >= logical I don't think that's the end of the world. But
since we already have almost everything we need in memory, why not
just stash the gid on ReorderBufferTXN?

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-31 06:29:45
Message-ID: CAB7nPqQogmPYRkb=wwA4veeAWZXwDXQA64vw7fO_mHuDhAOHoA@mail.gmail.com

On Fri, Jan 27, 2017 at 8:52 AM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> Now, if it's simpler to just xlog the gid at COMMIT PREPARED time when
> wal_level >= logical I don't think that's the end of the world. But
> since we already have almost everything we need in memory, why not
> just stash the gid on ReorderBufferTXN?

I have been through this thread... And to be honest, I have a hard
time understanding what purpose the information about a 2PC
transaction serves in the case of logical decoding. The prepare and
commit prepared have been received by a node at the root of
the cluster tree, a node of the cluster at an upper level, or a
client, which is in charge of issuing all the prepare queries and then
issuing the commit prepared to finish the transaction across the cluster.
In short, even if you do logical decoding from the root node, or the
one at a higher level, you would care just about the fact that it has
been committed.
--
Michael


From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-31 06:30:15
Message-ID: CAB7nPqSrk255+zBRL9E97YSMJgBjW_LM60jckPKrnbJot9PELw@mail.gmail.com

On Tue, Jan 31, 2017 at 3:29 PM, Michael Paquier
<michael(dot)paquier(at)gmail(dot)com> wrote:
> On Fri, Jan 27, 2017 at 8:52 AM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>> Now, if it's simpler to just xlog the gid at COMMIT PREPARED time when
>> wal_level >= logical I don't think that's the end of the world. But
>> since we already have almost everything we need in memory, why not
>> just stash the gid on ReorderBufferTXN?
>
> I have been through this thread... And to be honest, I have a hard
> time understanding for which purpose the information of a 2PC
> transaction is useful in the case of logical decoding. The prepare and
> commit prepared have been received by a node which is at the root of
> the cluster tree, a node of the cluster at an upper level, or a
> client, being in charge of issuing all the prepare queries, and then
> issue the commit prepared to finish the transaction across a cluster.
> In short, even if you do logical decoding from the root node, or the
> one at a higher level, you would care just about the fact that it has
> been committed.

By the way, I have moved this patch to the next CF; you guys seem to be
moving the discussion forward.
--
Michael


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-31 09:22:39
Message-ID: CAMsr+YHQzGxnR-peT4SbX2-xiG2uApJMTgZ4a3TiRBM6COyfqg@mail.gmail.com

On 31 Jan. 2017 19:29, "Michael Paquier" <michael(dot)paquier(at)gmail(dot)com> wrote:

On Fri, Jan 27, 2017 at 8:52 AM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> Now, if it's simpler to just xlog the gid at COMMIT PREPARED time when
> wal_level >= logical I don't think that's the end of the world. But
> since we already have almost everything we need in memory, why not
> just stash the gid on ReorderBufferTXN?

> I have been through this thread... And to be honest, I have a hard
> time understanding for which purpose the information of a 2PC
> transaction is useful in the case of logical decoding.

TL;DR: this lets us decode the xact after prepare but before commit so
decoding/replay outcomes can affect the commit-or-abort decision.

> The prepare and
> commit prepared have been received by a node which is at the root of
> the cluster tree, a node of the cluster at an upper level, or a
> client, being in charge of issuing all the prepare queries, and then
> issue the commit prepared to finish the transaction across a cluster.
> In short, even if you do logical decoding from the root node, or the
> one at a higher level, you would care just about the fact that it has
> been committed.

That's where you've misunderstood: it isn't committed yet. The point of
this change is to allow us to do logical decoding at the PREPARE
TRANSACTION point. The xact is not yet committed or rolled back.

This allows the results of logical decoding - or, more interestingly, the
results of replay on another node / to another app / whatever - to
influence the commit or rollback decision.

Stas wants this for a conflict-free logical semi-synchronous replication
multi master solution. At PREPARE TRANSACTION time we replay the xact to
other nodes, each of which applies it and PREPARE TRANSACTION, then replies
to confirm it has successfully prepared the xact. When all nodes confirm
the xact is prepared it is safe for the origin node to COMMIT PREPARED. The
other nodes then see that the first node has committed and they commit too.

Alternately if any node replies "could not replay xact" or "could not
prepare xact" the origin node knows to ROLLBACK PREPARED. All the other
nodes see that and rollback too.

This makes it possible to be much more confident that what's replicated is
exactly the same on all nodes, with no after-the-fact MM conflict
resolution that apps must be aware of to function correctly.

To really make it rock solid you also have to send the old and new values
of a row, or have row versions, or send old row hashes. Something I also
want to have, but we can mostly get that already with REPLICA IDENTITY FULL.

It is of interest to me because schema changes in MM logical replication
are more challenging, awkward and restrictive without it. Optimistic
conflict resolution doesn't work well for schema changes, and once the
conflicting schema changes are committed on different nodes there is no
going back. So you need your async system to have a global locking model
for schema changes to stop conflicts arising. Or expect the user not to do
anything silly / misunderstand anything and know all the relevant system
limitations and requirements... which we all know works just great in
practice. You also need a way to ensure that schema changes don't render
committed-but-not-yet-replayed row changes from other peers nonsensical.
The safest way is a barrier where all row changes committed on any node
before committing the schema change on the origin node must be fully
replayed on every other node, making an async MM system temporarily sync
single master (and requiring all nodes to be up and reachable). Otherwise
you need a way to figure out how to conflict-resolve incoming rows with
missing columns / added columns / changed types / renamed tables etc which
is no fun and nearly impossible in the general case.

2PC decoding lets us avoid all this mess by sending all nodes the proposed
schema change and waiting until they all confirm successful prepare before
committing it. It can also be used to solve the row compatibility problems
with some more lazy inter-node chat in logical WAL messages.

I think the purpose of having the GID available to the decoding output
plugin at PREPARE TRANSACTION time is that it can co-operate with a global
transaction manager that way. Each node can tell the GTM "I'm ready to
commit [X]". It is IMO not crucial since you can otherwise use a (node-id,
xid) tuple, but it'd be nice for coordinating with external systems,
simplifying inter node chatter, integrating logical decoding into bigger
systems with external transaction coordinators/arbitrators etc. It seems
pretty silly _not_ to have it really.

Personally I don't think lack of access to the GID justifies blocking 2PC
logical decoding. It can be added separately. But it'd be nice to have
especially if it's cheap.


From: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-31 09:42:31
Message-ID: e4485c64-b4aa-4879-fe7a-ea58052c0877@postgrespro.ru
Lists: pgsql-hackers

On 31.01.2017 09:29, Michael Paquier wrote:
> On Fri, Jan 27, 2017 at 8:52 AM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>> Now, if it's simpler to just xlog the gid at COMMIT PREPARED time when
>> wal_level >= logical I don't think that's the end of the world. But
>> since we already have almost everything we need in memory, why not
>> just stash the gid on ReorderBufferTXN?
> I have been through this thread... And to be honest, I have a hard
> time understanding for which purpose the information of a 2PC
> transaction is useful in the case of logical decoding. The prepare and
> commit prepared have been received by a node which is at the root of
> the cluster tree, a node of the cluster at an upper level, or a
> client, being in charge of issuing all the prepare queries, and then
> issue the commit prepared to finish the transaction across a cluster.
> In short, even if you do logical decoding from the root node, or the
> one at a higher level, you would care just about the fact that it has
> been committed.
Sorry, maybe I do not completely understand your arguments.
Actually our multimaster is completely based now on logical replication
and 2PC (more precisely we are using 3PC now:)
The state of a transaction (prepared, precommitted, committed) must be
persisted in WAL to make recovery possible.
Recovery can involve transactions in any state. So there are three records
in the WAL: PREPARE, PRECOMMIT, COMMIT_PREPARED, and
recovery can involve either all of them, or
PRECOMMIT+COMMIT_PREPARED, or just COMMIT_PREPARED.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(dot)ringer(at)2ndquadrant(dot)com>
To: konstantin knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-01-31 20:42:50
Message-ID: CAMsr+YFbV77AsW7b+99xzeCwyaMyWNkVjyHai1q55-xKkhBEnw@mail.gmail.com
Lists: pgsql-hackers

On 31 Jan. 2017 22:43, "Konstantin Knizhnik" <k(dot)knizhnik(at)postgrespro(dot)ru>
wrote:

On 31.01.2017 09:29, Michael Paquier wrote:

> On Fri, Jan 27, 2017 at 8:52 AM, Craig Ringer <craig(at)2ndquadrant(dot)com>
> wrote:
>
>> Now, if it's simpler to just xlog the gid at COMMIT PREPARED time when
>> wal_level >= logical I don't think that's the end of the world. But
>> since we already have almost everything we need in memory, why not
>> just stash the gid on ReorderBufferTXN?
>>
> I have been through this thread... And to be honest, I have a hard
> time understanding for which purpose the information of a 2PC
> transaction is useful in the case of logical decoding. The prepare and
> commit prepared have been received by a node which is at the root of
> the cluster tree, a node of the cluster at an upper level, or a
> client, being in charge of issuing all the prepare queries, and then
> issue the commit prepared to finish the transaction across a cluster.
> In short, even if you do logical decoding from the root node, or the
> one at a higher level, you would care just about the fact that it has
> been committed.
>

in any state. So there are three records in the WAL: PREPARE, PRECOMMIT,
COMMIT_PREPARED, and
recovery can involve either all of them, or PRECOMMIT+COMMIT_PREPARED,
or just COMMIT_PREPARED.

That's your modified Pg though.

This 2pc logical decoding patch proposal is for core and I think it just
confused things to introduce discussion of unrelated changes made by your
product to the codebase.



From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-01 02:05:19
Message-ID: CAB7nPqSQPANEwaLm1GU4MYjENrRgetiRCxhWoP-ATQbutWm+Yw@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jan 31, 2017 at 6:22 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> That's where you've misunderstood - it isn't committed yet. The point or
> this change is to allow us to do logical decoding at the PREPARE TRANSACTION
> point. The xact is not yet committed or rolled back.

Yes, I got that. I was looking for a why or an actual use-case.

> Stas wants this for a conflict-free logical semi-synchronous replication
> multi master solution.

This sentence is hard to decrypt, even more so with "multi master", as
the concept basically applies to only one master node.

> At PREPARE TRANSACTION time we replay the xact to
> other nodes, each of which applies it and PREPARE TRANSACTION, then replies
> to confirm it has successfully prepared the xact. When all nodes confirm the
> xact is prepared it is safe for the origin node to COMMIT PREPARED. The
> other nodes then see that the first node has committed and they commit too.

OK, this is the argument I was looking for. So in your schema the
origin node, the one generating the changes, is itself in charge of
deciding if the 2PC should work or not. There are two channels between
the origin node and the replicas replaying the logical changes, one is
for the logical decoder with a receiver, the second one is used to
communicate the WAL apply status. I thought about something like
postgres_fdw doing this job with a transaction that does writes across
several nodes, that's why I got confused about this feature.
Everything goes through one channel, so the failure handling is just
simplified.

> Alternately if any node replies "could not replay xact" or "could not
> prepare xact" the origin node knows to ROLLBACK PREPARED. All the other
> nodes see that and rollback too.

The origin node could just issue the ROLLBACK or COMMIT and the
logical replicas would just apply this change.

> To really make it rock solid you also have to send the old and new values of
> a row, or have row versions, or send old row hashes. Something I also want
> to have, but we can mostly get that already with REPLICA IDENTITY FULL.

On a primary key (or a unique index), the default replica identity is
enough I think.

> It is of interest to me because schema changes in MM logical replication are
> more challenging, awkward and restrictive without it. Optimistic conflict
> resolution doesn't work well for schema changes and once the conflicting
> schema changes are committed on different nodes there is no going back. So
> you need your async system to have a global locking model for schema changes
> to stop conflicts arising. Or expect the user not to do anything silly /
> misunderstand anything and know all the relevant system limitations and
> requirements... which we all know works just great in practice. You also
> need a way to ensure that schema changes don't render
> committed-but-not-yet-replayed row changes from other peers nonsensical. The
> safest way is a barrier where all row changes committed on any node before
> committing the schema change on the origin node must be fully replayed on
> every other node, making an async MM system temporarily sync single master
> (and requiring all nodes to be up and reachable). Otherwise you need a way
> to figure out how to conflict-resolve incoming rows with missing columns /
> added columns / changed types / renamed tables etc which is no fun and
> nearly impossible in the general case.

That's one vision of things; FDW-like approaches would be a second,
but those are not able to pass down utility statements natively,
though that can be done with the utility hook.

> I think the purpose of having the GID available to the decoding output
> plugin at PREPARE TRANSACTION time is that it can co-operate with a global
> transaction manager that way. Each node can tell the GTM "I'm ready to
> commit [X]". It is IMO not crucial since you can otherwise use a (node-id,
> xid) tuple, but it'd be nice for coordinating with external systems,
> simplifying inter node chatter, integrating logical decoding into bigger
> systems with external transaction coordinators/arbitrators etc. It seems
> pretty silly _not_ to have it really.

Well, Postgres-XC/XL save the 2PC GID for this purpose in the GTM;
this way COMMIT/ABORT PREPARED can be issued from any node, and
there is centralized conflict resolution, though the latter comes at
a huge cost and is a major bottleneck for scaling performance.

> Personally I don't think lack of access to the GID justifies blocking 2PC
> logical decoding. It can be added separately. But it'd be nice to have
> especially if it's cheap.

Reading this thread, I think it should be added.
--
Michael


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-01 19:20:26
Message-ID: CA+TgmoY7sqHfc=tgTPO6695gOF4bykWRO3fnk7bQr+ij5eycew@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jan 31, 2017 at 9:05 PM, Michael Paquier
<michael(dot)paquier(at)gmail(dot)com> wrote:
>> Personally I don't think lack of access to the GID justifies blocking 2PC
>> logical decoding. It can be added separately. But it'd be nice to have
>> especially if it's cheap.
>
> Reading this thread, I think it should be added.

+1. If on the logical replication master the user executes PREPARE
TRANSACTION 'mumble', isn't it sensible to want the logical replica to
prepare the same set of changes with the same GID? To me, that not
only seems like *a* sensible thing to want to do but probably the
*most* sensible thing to want to do. And then, when the eventual
COMMIT PREPARED 'mumble' comes along, you want to have the replica
run the same command. If you don't do that, then the alternative is
that the replica has to make up new names based on the master's XID.
But that kinda sucks, because now if replication stops due to a
conflict or whatever and you have to disentangle things by hand, all
the names on the replica are basically meaningless.
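Concretely, the behaviour being argued for is that the replica mirrors the origin's commands, GID included. A sketch ('mumble' is the example GID from above; the table and value are made up):

```sql
-- On the origin:
BEGIN;
INSERT INTO t VALUES (1);
PREPARE TRANSACTION 'mumble';  -- decoded and replayed on the replica
                               -- as PREPARE TRANSACTION 'mumble'
COMMIT PREPARED 'mumble';      -- replica runs COMMIT PREPARED 'mumble' too

-- Without GID passthrough the replica would have to invent a name from
-- the origin's XID instead, and those made-up names are what you'd be
-- staring at when untangling a stuck replication conflict by hand.
```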

Also, including the GID in the WAL for each COMMIT/ABORT PREPARED
doesn't seem inordinately expensive to me. For that to really add up
to a significant cost, wouldn't you need to be doing LOTS of 2PC
transactions, each with very little work, so that the commit/abort
prepared records weren't swamped by everything else? That seems like
an unlikely scenario, but if it does happen, that's exactly when
you'll be most grateful for the GID tracking. I think.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-01 19:32:34
Message-ID: 19628.1485977554@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> Also, including the GID in the WAL for each COMMIT/ABORT PREPARED
> doesn't seem inordinately expensive to me.

I'm confused ... isn't it there already? If not, how do we handle
reconstructing 2PC state from WAL at all?

regards, tom lane


From: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-01 20:15:55
Message-ID: 589241FB.8060509@postgrespro.ru
Lists: pgsql-hackers

On 02/01/2017 10:32 PM, Tom Lane wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> Also, including the GID in the WAL for each COMMIT/ABORT PREPARED
>> doesn't seem inordinately expensive to me.
> I'm confused ... isn't it there already? If not, how do we handle
> reconstructing 2PC state from WAL at all?
>
> regards, tom lane
>
>
Right now logical decoding ignores PREPARE and takes into account only COMMIT PREPARED:

/*
* Currently decoding ignores PREPARE TRANSACTION and will just
* decode the transaction when the COMMIT PREPARED is sent or
* throw away the transaction's contents when a ROLLBACK PREPARED
* is received. In the future we could add code to expose prepared
* transactions in the changestream allowing for a kind of
* distributed 2PC.
*/

For some scenarios it works well, but if we really need the prepared state at the replica (as in the case of multimaster), then it is not enough.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-01 21:35:42
Message-ID: CAMsr+YEDR3OHhULKa_Xk+JKSvCajSSjgKqzF-EaPKNyUDEB9QQ@mail.gmail.com
Lists: pgsql-hackers

On 2 Feb. 2017 08:32, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> Also, including the GID in the WAL for each COMMIT/ABORT PREPARED
> doesn't seem inordinately expensive to me.

I'm confused ... isn't it there already? If not, how do we handle
reconstructing 2PC state from WAL at all?

Right. Per my comments upthread I don't see why we need to add anything
more to WAL here.

Stas was concerned about what happens in logical decoding if we crash
between PREPARE TRANSACTION and COMMIT PREPARED. But we'll always go back
and decode the whole txn again anyway so it doesn't matter.

We can just track it on ReorderBufferTXN when we see it at PREPARE
TRANSACTION time.


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-02 19:33:21
Message-ID: CA+TgmoZN5ziYFaOCnXjvAbGAODo5oNVSNadHUcUevHWk9cf3Yw@mail.gmail.com
Lists: pgsql-hackers

On Wed, Feb 1, 2017 at 2:32 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> Also, including the GID in the WAL for each COMMIT/ABORT PREPARED
>> doesn't seem inordinately expensive to me.
>
> I'm confused ... isn't it there already? If not, how do we handle
> reconstructing 2PC state from WAL at all?

By XID. See xl_xact_twophase, which gets included in xl_xact_commit
or xl_xact_abort. The GID has got to be there in the XL_XACT_PREPARE
record, but not when actually committing/rolling back.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-02 19:34:31
Message-ID: CA+TgmoaWJ=N=h5oZZsWH-u=Fgw68hmcSxAdH-4yH8uN5-ewnzQ@mail.gmail.com
Lists: pgsql-hackers

On Wed, Feb 1, 2017 at 4:35 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> Right. Per my comments upthread I don't see why we need to add anything more
> to WAL here.
>
> Stas was concerned about what happens in logical decoding if we crash
> between PREPARE TRANSACTION and COMMIT PREPARED. But we'll always go back
> and decode the whole txn again anyway so it doesn't matter.
>
> We can just track it on ReorderBufferTXN when we see it at PREPARE
> TRANSACTION time.

Oh, hmm. I guess if that's how it works then we don't need it in WAL
after all. I'm not sure that re-decoding the already-prepared
transaction is a very good plan, but if that's what we're doing anyway
this patch probably shouldn't change it.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-03 00:14:18
Message-ID: CAMsr+YHiCy99T3jfNijaS1W_6a_jK68HqtBmhjN5XkL8RfeuFg@mail.gmail.com
Lists: pgsql-hackers

On 3 February 2017 at 03:34, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Wed, Feb 1, 2017 at 4:35 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>> Right. Per my comments upthread I don't see why we need to add anything more
>> to WAL here.
>>
>> Stas was concerned about what happens in logical decoding if we crash
>> between PREPARE TRANSACTION and COMMIT PREPARED. But we'll always go back
>> and decode the whole txn again anyway so it doesn't matter.
>>
>> We can just track it on ReorderBufferTXN when we see it at PREPARE
>> TRANSACTION time.
>
> Oh, hmm. I guess if that's how it works then we don't need it in WAL
> after all. I'm not sure that re-decoding the already-prepared
> transaction is a very good plan, but if that's what we're doing anyway
> this patch probably shouldn't change it.

We don't have much choice at the moment.

Logical decoding must restart from the xl_running_xacts most recently
prior to the xid allocation for the oldest xact the client hasn't
confirmed receipt of decoded data + commit for. That's because reorder
buffers are not persistent; if a decoding session crashes we throw
away accumulated reorder buffers, both those in memory and those
spilled to disk. We have to re-create them by restarting decoding from
the beginning of the oldest xact of interest.

We could make reorder buffers persistent and shared between decoding
sessions but it'd totally change the logical decoding model and create
some other problems. It's certainly not a topic for this patch. So we
can take it as given that we'll always restart decoding from BEGIN
again at a crash.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-03 22:47:50
Message-ID: CA+TgmoaQHvRR-gAG980Nf7hzug+VfuP69eFzAaq0P8TS1WLQhg@mail.gmail.com
Lists: pgsql-hackers

On Thu, Feb 2, 2017 at 7:14 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> We could make reorder buffers persistent and shared between decoding
> sessions but it'd totally change the logical decoding model and create
> some other problems. It's certainly not a topic for this patch. So we
> can take it as given that we'll always restart decoding from BEGIN
> again at a crash.

OK, thanks for the explanation. I have never liked this design very
much, and told Andres so: big transactions are bound to cause
noticeable replication lag. But you're certainly right that it's not
a topic for this patch.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-03 23:00:59
Message-ID: 20170203230059.t26qowgg33eiloax@alap3.anarazel.de
Lists: pgsql-hackers

On 2017-02-03 17:47:50 -0500, Robert Haas wrote:
> On Thu, Feb 2, 2017 at 7:14 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> > We could make reorder buffers persistent and shared between decoding
> > sessions but it'd totally change the logical decoding model and create
> > some other problems. It's certainly not a topic for this patch. So we
> > can take it as given that we'll always restart decoding from BEGIN
> > again at a crash.

Sharing them seems unlikely (filtering and such would become a lot more
complicated) and separate from persistency. I'm not sure however how
it'd "totally change the logical decoding model"?

Even if we'd not always restart decoding, we'd still have the option to
add the information necessary to the spill files, so I'm unclear how
persistency plays a role here?

> OK, thanks for the explanation. I have never liked this design very
> much, and told Andres so: big transactions are bound to cause
> noticeable replication lag. But you're certainly right that it's not
> a topic for this patch.

Streaming and persistency of spill files are different topics, no?
Either would have initially complicated things beyond the point of
getting things into core - I'm all for adding them at some point.

Persistent spill files (which'd also require spilling small transactions
at regular intervals) also have the issue that they make the spill format
something that can't be adapted in bugfixes etc, and that we need to
fsync them.

I still haven't seen a credible model for being able to apply a stream
of interleaved transactions that can roll back individually; I think we
really need the ability to have multiple transactions alive in one
backend for that.

Andres


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-03 23:47:23
Message-ID: CA+TgmoYcCCXRrZAC88C6k81fakNdmFJTgmrKiHmNW97M1s0eZw@mail.gmail.com
Lists: pgsql-hackers

On Fri, Feb 3, 2017 at 6:00 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> I still haven't seen a credible model for being able to apply a stream
> of interleaved transactions that can roll back individually; I think we
> really need the ability to have multiple transactions alive in one
> backend for that.

Hmm, yeah, that's a problem. That smells like autonomous transactions.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-04 00:08:02
Message-ID: 20170204000802.5s527mzrxirevljs@alap3.anarazel.de
Lists: pgsql-hackers

On 2017-02-03 18:47:23 -0500, Robert Haas wrote:
> On Fri, Feb 3, 2017 at 6:00 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > I still haven't seen a credible model for being able to apply a stream
> > of interleaved transactions that can roll back individually; I think we
> > really need the ability to have multiple transactions alive in one
> > backend for that.
>
> Hmm, yeah, that's a problem. That smells like autonomous transactions.

Unfortunately the last few proposals, like spawning backends, to deal
with autonomous xacts aren't really suitable for replication, unless you
only have very large ones. And it really needs to be an implementation
where ATs can freely be switched between. On the other hand, a good
deal of problems (like locking) shouldn't be an issue, since there's
obviously a possible execution schedule.

I suspect this'd need some low-level implementation close to xact.c that'd
allow switching between transactions.

- Andres


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-04 00:09:43
Message-ID: CA+TgmoZTDUAyV293imtCgZQY4eGgCTjyk9G3oq0+kdU+KftnVw@mail.gmail.com
Lists: pgsql-hackers

On Fri, Feb 3, 2017 at 7:08 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2017-02-03 18:47:23 -0500, Robert Haas wrote:
>> On Fri, Feb 3, 2017 at 6:00 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> > I still haven't seen a credible model for being able to apply a stream
>> > of interleaved transactions that can roll back individually; I think we
>> > really need the ability to have multiple transactions alive in one
>> > backend for that.
>>
>> Hmm, yeah, that's a problem. That smells like autonomous transactions.
>
> Unfortunately the last few proposals, like spawning backends, to deal
> with autonomous xacts aren't really suitable for replication, unless you
> only have very large ones. And it really needs to be an implementation
> where ATs can freely be switched between. On the other hand, a good
> deal of problems (like locking) shouldn't be an issue, since there's
> obviously a possible execution schedule.
>
> I suspect this'd need some low-level implementation close to xact.c that'd
> allow switching between transactions.

Yeah. Well, I still feel like that's also how autonomous transactions
oughta work, but I realize that's not a unanimous viewpoint. :-)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-04 00:11:25
Message-ID: 20170204001125.lnyft5qqk2up4act@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-02-03 19:09:43 -0500, Robert Haas wrote:
> On Fri, Feb 3, 2017 at 7:08 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > On 2017-02-03 18:47:23 -0500, Robert Haas wrote:
> >> On Fri, Feb 3, 2017 at 6:00 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> >> > I still haven't seen a credible model for being able to apply a stream
> >> > of interleaved transactions that can roll back individually; I think we
> >> > really need the ability to have multiple transactions alive in one
> >> > backend for that.
> >>
> >> Hmm, yeah, that's a problem. That smells like autonomous transactions.
> >
> > Unfortunately the last few proposals, like spawning backends, to deal
> > with autonomous xacts aren't really suitable for replication, unless you
> > only have very large ones. And it really needs to be an implementation
> > where ATs can freely be switched in between. On the other hand, a good
> > deal of problems (like locking) shouldn't be an issue, since there's
> > obviously a possible execution schedule.
> >
> > I suspect this'd need some low-level implementation close to xact.c that'd
> > allow switching between transactions.
>
> Yeah. Well, I still feel like that's also how autonomous transactions
> oughta work, but I realize that's not a unanimous viewpoint. :-)

Same here ;)


From: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
To: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-04 07:19:33
Message-ID: 58958085.8040701@postgrespro.ru
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 02/04/2017 03:08 AM, Andres Freund wrote:
> On 2017-02-03 18:47:23 -0500, Robert Haas wrote:
>> On Fri, Feb 3, 2017 at 6:00 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>>> I still haven't seen a credible model for being able to apply a stream
>>> of interleaved transactions that can roll back individually; I think we
>>> really need the ability to have multiple transactions alive in one
>>> backend for that.
>> Hmm, yeah, that's a problem. That smells like autonomous transactions.
> Unfortunately the last few proposals, like spawning backends, to deal
> with autonomous xacts aren't really suitable for replication, unless you
> only have very large ones. And it really needs to be an implementation
> where ATs can freely be switched in between. On the other hand, a good
> deal of problems (like locking) shouldn't be an issue, since there's
> obviously a possible execution schedule.
>
> I suspect this'd need some low-level implementation close to xact.c that'd
> allow switching between transactions.

Let me add my two cents here:

1. We are using logical decoding in our multimaster and applying transactions concurrently with a pool of workers. Unlike asynchronous replication, in multimaster we need to perform voting for each transaction commit, so if transactions are applied by a
single worker, performance will be awful and, moreover, there is a big chance of a "deadlock" where no worker can complete voting because different nodes are voting for different transactions.

I cannot say that there are no problems with this approach; there are definitely a lot of challenges. First of all, we need a special DTM (distributed transaction manager) to provide consistent application of transactions at different nodes. The second
problem is once again related to the kind of "deadlock" explained above: even if we apply transactions concurrently, such a deadlock is still possible if we do not have enough workers. This is why we allow extra workers to be launched dynamically (though
ultimately this is limited by the maximum number of configured bgworkers).

But in any case, I think that "parallel apply" is a "must have" mode for logical replication.

2. We have implemented autonomous transactions in PgPro EE. Unlike the proposal currently present at the commitfest, we execute the autonomous transaction within the same backend, just storing and restoring the transaction context. Unfortunately this is
also not a cheap operation. An autonomous transaction should not see any changes done by the parent transaction (because the parent can be rolled back after the autonomous transaction commits). But there are catalog and relation caches inside the backend,
so we have to clean these caches before switching to the ATX. This is quite an expensive operation, so execution of a PL/pgSQL function with an autonomous transaction is several orders of magnitude slower than without one. So autonomous transactions can
be used for audit (the primary goal of using ATX in Oracle PL/SQL applications), but this mechanism is not efficient for concurrent execution of multiple transactions in one backend.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-02-09 13:23:11
Message-ID: 1FE466EA-7058-484D-B0DB-42CD81FA59F0@postgrespro.ru
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers


> On 31 Jan 2017, at 12:22, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> Personally I don't think lack of access to the GID justifies blocking 2PC logical decoding. It can be added separately. But it'd be nice to have especially if it's cheap.

Agreed.

> On 2 Feb 2017, at 00:35, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> Stas was concerned about what happens in logical decoding if we crash between PREPARE TRANSACTION and COMMIT PREPARED. But we'll always go back and decode the whole txn again anyway so it doesn't matter.

Not exactly. It seems that in previous discussions we were not on the same page, probably due to unclear arguments on my part.

From my point of view there are no problems (or at least no new problems compared to ordinary 2PC) with preparing transactions on slave servers under something like “#{xid}#{node_id}” instead of the GID, if the issuing node is the coordinator of that transaction. In case of failure, restart, or crash we have the same options for deciding what to do with uncommitted transactions.

My concern is about the situation with an external coordinator. That scenario is quite important for users of Postgres' native 2PC, notably J2EE users. Suppose a user (or their framework) issues “PREPARE TRANSACTION 'mytxname';” to servers with ordinary synchronous physical replication. If the master crashes and a replica is promoted, the user can reconnect to it and commit/abort that transaction using their GID. It is unclear to me how to achieve the same behaviour with logical replication of 2PC without the GID in the commit record. If we prepare with “#{xid}#{node_id}” on acceptor nodes, then if the donor node crashes we lose the mapping between the user's GID and our internal GID; conversely, we can prepare with the user's GID on the acceptors, but then we will not know that GID on the donor while decoding the commit (by the time decoding happens, all memory state is already gone and we cannot translate our xid to the GID).
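
The second half of that concern can be sketched in a few lines (an illustrative toy model with invented names, not the patch's actual code): the donor's xid-to-GID mapping lives only in memory, so a commit record that carries just the xid cannot be resolved to the user's GID after a restart.

```python
# Toy model only: shows why the commit record needs to carry the GID.
class DonorNode:
    def __init__(self):
        self.xid_to_gid = {}              # in-memory 2PC state

    def prepare(self, xid, user_gid):
        self.xid_to_gid[xid] = user_gid   # PREPARE TRANSACTION 'user_gid'

    def crash_and_restart(self):
        self.xid_to_gid = {}              # memory state is gone after restart

    def decode_commit_prepared(self, xid):
        # With only the xid in the WAL commit record, decoding after a
        # restart cannot recover the user's GID.
        return self.xid_to_gid.get(xid)

donor = DonorNode()
donor.prepare(xid=601, user_gid="mytxname")
donor.crash_and_restart()
print(donor.decode_commit_prepared(601))  # None: the mapping is lost
```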

I performed some tests to understand the real impact on WAL size. After 3M 2PC transactions, I compared postgres master with wal_level = logical against patched postgres where GIDs are stored inside the commit record too, testing with 194-byte and 6-byte GIDs (the GID max size is 200 bytes).

-master, 6-byte GID after 3M transactions: pg_current_xlog_location = 0/9572CB28
-patched, 6-byte GID after 3M transactions: pg_current_xlog_location = 0/96C442E0

So with 6-byte GIDs the difference in WAL size is less than 1%.

-master, 194-byte GID after 3M transactions: pg_current_xlog_location = 0/B7501578
-patched, 194-byte GID after 3M transactions: pg_current_xlog_location = 0/D8B43E28

With 194-byte GIDs the difference in WAL size is about 18%.

So using big GIDs (as J2EE does) can cause notable WAL bloat, while small GIDs are almost unnoticeable.
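
For reference, these percentages can be recomputed directly from the reported LSNs (a quick illustrative script, not part of the patch):

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a PostgreSQL LSN of the form 'hi/lo' to a byte offset."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def wal_overhead_pct(master_lsn: str, patched_lsn: str) -> float:
    """Relative WAL growth of the patched server versus master, in percent."""
    m = lsn_to_bytes(master_lsn)
    p = lsn_to_bytes(patched_lsn)
    return 100.0 * (p - m) / m

# 6-byte GIDs: under 1% extra WAL
print(round(wal_overhead_pct("0/9572CB28", "0/96C442E0"), 2))   # 0.88

# 194-byte GIDs: roughly 18% extra WAL
print(round(wal_overhead_pct("0/B7501578", "0/D8B43E28"), 2))   # 18.22
```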

Maybe we can introduce a configuration option, track_commit_gid, by analogy with track_commit_timestamp, and make that behaviour optional? Any objections to that?
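
If such an option were added, it would presumably be set like track_commit_timestamp (hypothetical; this GUC does not exist yet):

```
# postgresql.conf -- hypothetical setting, by analogy with track_commit_timestamp
track_commit_gid = on    # store the 2PC GID in commit/abort WAL records
```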

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-01 09:24:38
Message-ID: CAMsr+YHVNW-Lh-XnYnkfhGL2QT8kK9kM4zkZ4+Sv=fszhujj-g@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 9 February 2017 at 21:23, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:

>> On 2 Feb 2017, at 00:35, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>
>> Stas was concerned about what happens in logical decoding if we crash between PREPARE TRANSACTION and COMMIT PREPARED. But we'll always go back and decode the whole txn again anyway so it doesn't matter.
>
> Not exactly. It seems that in previous discussions we were not on the same page, probably due to unclear arguments on my part.
>
> From my point of view there are no problems (or at least no new problems compared to ordinary 2PC) with preparing transactions on slave servers under something like “#{xid}#{node_id}” instead of the GID, if the issuing node is the coordinator of that transaction. In case of failure, restart, or crash we have the same options for deciding what to do with uncommitted transactions.

But we don't *need* to do that. We have access to the GID of the 2PC
xact from PREPARE TRANSACTION until COMMIT PREPARED, after which we
have no need for it. So we can always use the user-supplied GID.

> I performed some tests to understand the real impact on WAL size. After 3M 2PC transactions, I compared postgres master with wal_level = logical against patched postgres where GIDs are stored inside the commit record too.

Why do you do this? You don't need to. You can look the GID up from
the 2pc status table in memory unless the master already did COMMIT
PREPARED, in which case you can just decode it as a normal xact as if
it were never 2pc in the first place.

I don't think I've managed to make this point by description, so I'll
try to modify your patch to demonstrate.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-01 22:20:57
Message-ID: 46ebc39d-82a0-e59f-2552-2ea8911a64e7@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 01/03/17 10:24, Craig Ringer wrote:
> On 9 February 2017 at 21:23, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>>> On 2 Feb 2017, at 00:35, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>>
>>> Stas was concerned about what happens in logical decoding if we crash between PREPARE TRANSACTION and COMMIT PREPARED. But we'll always go back and decode the whole txn again anyway so it doesn't matter.
>>
>> Not exactly. It seems that in previous discussions we were not on the same page, probably due to unclear arguments on my part.
>>
>> From my point of view there are no problems (or at least no new problems compared to ordinary 2PC) with preparing transactions on slave servers under something like “#{xid}#{node_id}” instead of the GID, if the issuing node is the coordinator of that transaction. In case of failure, restart, or crash we have the same options for deciding what to do with uncommitted transactions.
>
> But we don't *need* to do that. We have access to the GID of the 2PC
> xact from PREPARE TRANSACTION until COMMIT PREPARED, after which we
> have no need for it. So we can always use the user-supplied GID.
>
>> I performed some tests to understand the real impact on WAL size. After 3M 2PC transactions, I compared postgres master with wal_level = logical against patched postgres where GIDs are stored inside the commit record too.
>
> Why do you do this? You don't need to. You can look the GID up from
> the 2pc status table in memory unless the master already did COMMIT
> PREPARED, in which case you can just decode it as a normal xact as if
> it were never 2pc in the first place.
>
> I don't think I've managed to make this point by description, so I'll
> try to modify your patch to demonstrate.
>

If I understand you correctly you are saying that if PREPARE is being
decoded, we can load the GID from the 2pc info in memory about the
specific 2pc. The info gets removed on COMMIT PREPARED but at that point
there is no real difference between replicating it as 2pc or 1pc since
the 2pc behavior is for all intents and purposes lost at that point.
Works for me. I guess the hard part is knowing if COMMIT PREPARED
happened at the time PREPARE is decoded, but the existence of the needed
info could probably be used for that.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-02 01:11:30
Message-ID: CAMsr+YE=8=ODuw5occL-v6Oi=D5DJ31RgP1hfiR8oD8-9sX0-A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2 March 2017 at 06:20, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:

> If I understand you correctly you are saying that if PREPARE is being
> decoded, we can load the GID from the 2pc info in memory about the
> specific 2pc. The info gets removed on COMMIT PREPARED but at that point
> there is no real difference between replicating it as 2pc or 1pc since
> the 2pc behavior is for all intents and purposes lost at that point.
> Works for me. I guess the hard part is knowing if COMMIT PREPARED
> happened at the time PREPARE is decoded, but the existence of the needed
> info could probably be used for that.

Right.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-02 07:27:51
Message-ID: DFE91C41-1410-4D89-B7D6-819F5FDC1437@postgrespro.ru
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers


> On 2 Mar 2017, at 01:20, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
>
> The info gets removed on COMMIT PREPARED but at that point
> there is no real difference between replicating it as 2pc or 1pc since
> the 2pc behavior is for all intents and purposes lost at that point.
>

If we are doing 2PC and COMMIT PREPARED happens, then we should
replicate it without the transaction body to the receiving servers, since the tx
is already prepared on them under some GID. So we need a way to construct
that GID.

It seems that for the last ~10 messages I've been failing to explain some points about this
topic. Or, maybe, I'm failing to understand some points. Can we maybe set up a
Skype call to discuss this and post a summary here? Craig? Peter?

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-02 08:00:36
Message-ID: CAMsr+YHbvCtVetrtJ1_LCKen98d3cG8zmHP_-nSm=aDj6yc0vQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2 March 2017 at 15:27, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> On 2 Mar 2017, at 01:20, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
>>
>> The info gets removed on COMMIT PREPARED but at that point
>> there is no real difference between replicating it as 2pc or 1pc since
>> the 2pc behavior is for all intents and purposes lost at that point.
>>
>
> If we are doing 2PC and COMMIT PREPARED happens, then we should
> replicate it without the transaction body to the receiving servers, since the tx
> is already prepared on them under some GID. So we need a way to construct
> that GID.

We already have it, because we just decoded the PREPARE TRANSACTION.
I'm preparing a patch revision to demonstrate this.

BTW, I've been reviewing the patch in more detail. Other than a bunch
of copy-and-paste that I'm cleaning up, the main issue I've found is
that in DecodePrepare, you call:

SnapBuildCommitTxn(ctx->snapshot_builder, buf->origptr, xid,
parsed->nsubxacts, parsed->subxacts);

but I am not convinced it is correct to call it at PREPARE TRANSACTION
time, only at COMMIT PREPARED time. We want to see the 2pc prepared
xact's state when decoding it, but there might be later commits that
cannot yet see that state and shouldn't have it visible in their
snapshots. Imagine, say

BEGIN;
ALTER TABLE t ADD COLUMN ...
INSERT INTO 't' ...
PREPARE TRANSACTION 'x';

BEGIN;
INSERT INTO t ...;
COMMIT;

COMMIT PREPARED 'x';

We want to see the new column when decoding the prepared xact, but
_not_ when decoding the subsequent xact between the prepare and
commit. This particular case cannot occur because the lock held by
ALTER TABLE blocks the INSERT in the other xact, but how sure are you
that there are no other snapshot issues that could arise if we promote
a snapshot to visible early? What about if we ROLLBACK PREPARED after
we made the snapshot visible?

The tests don't appear to cover logical decoding 2PC sessions that do
DDL at all. I emphasised that that would be one of the main problem
areas when we originally discussed this. I'll look at adding some,
since I think this is one of the areas that's most likely to find
issues.

> It seems that for the last ~10 messages I've been failing to explain some points about this
> topic. Or, maybe, I'm failing to understand some points. Can we maybe set up a
> Skype call to discuss this and post a summary here? Craig? Peter?

Let me prep an updated patch. Time zones make it rather hard to do
voice; I'm in +0800 Western Australia, Petr is in +0200...

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-02 08:07:44
Message-ID: CAMsr+YEjUAA5EoGoBQbne4MUDZjijthbjCYTcqjtg+B+MJPFGA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2 March 2017 at 16:00, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:

> What about if we ROLLBACK PREPARED after
> we made the snapshot visible?

Yeah, I'm pretty sure that's going to be a problem actually.

You're telling the snapshot builder that an xact committed at PREPARE
TRANSACTION time.

If we then ROLLBACK PREPARED, we're in a mess. It looks like it'll
cause issues with catalogs, user-catalog tables, etc.

I suspect we need to construct a temporary snapshot to decode PREPARE
TRANSACTION then discard it. If we later COMMIT PREPARED we should
perform the current steps to merge the snapshot state in.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-02 08:20:39
Message-ID: 25A09FA8-5B0E-447D-97FB-F37F3B0E34E9@postgrespro.ru
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers


> On 2 Mar 2017, at 11:00, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> We already have it, because we just decoded the PREPARE TRANSACTION.
> I'm preparing a patch revision to demonstrate this.

Yes, we already have it, but if the server reboots between COMMIT PREPARED (all
prepared state is gone) and the decoding of this COMMIT PREPARED, then we lose
that mapping, don't we?

> BTW, I've been reviewing the patch in more detail. Other than a bunch
> of copy-and-paste that I'm cleaning up, the main issue I've found is
> that in DecodePrepare, you call:
>
> SnapBuildCommitTxn(ctx->snapshot_builder, buf->origptr, xid,
> parsed->nsubxacts, parsed->subxacts);
>
> but I am not convinced it is correct to call it at PREPARE TRANSACTION
> time, only at COMMIT PREPARED time. We want to see the 2pc prepared
> xact's state when decoding it, but there might be later commits that
> cannot yet see that state and shouldn't have it visible in their
> snapshots.

Agreed, that is a problem. That call allows us to decode this PREPARE, but after that
it would be better to mark this transaction as running in the snapshot, or to perform the PREPARE
decoding with some kind of copied-and-edited snapshot. I'll have a look at this.

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-02 12:23:50
Message-ID: CAMsr+YF9a+S6uKAUEFumsAP=Kpj_Ko7kfueTCm27MoNYOwFdmg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2 March 2017 at 16:20, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> On 2 Mar 2017, at 11:00, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>
>> We already have it, because we just decoded the PREPARE TRANSACTION.
>> I'm preparing a patch revision to demonstrate this.
>
> Yes, we already have it, but if the server reboots between COMMIT PREPARED (all
> prepared state is gone) and the decoding of this COMMIT PREPARED, then we lose
> that mapping, don't we?

I was about to explain how restart_lsn works again, and how that would
mean we'd always re-decode the PREPARE TRANSACTION before any COMMIT
PREPARED or ROLLBACK PREPARED on crash. But...

Actually, the way you've implemented it, that won't be the case. You
treat PREPARE TRANSACTION as a special-case of COMMIT, and the client
will presumably send replay confirmation after it has applied the
PREPARE TRANSACTION. In fact, it has to if we want 2PC to work with
synchronous replication. This will allow restart_lsn to advance to
after the PREPARE TRANSACTION record if there's no other older xact
and we see a suitable xl_running_xacts record. So we wouldn't decode
the PREPARE TRANSACTION again after restart.

Hm.

That's actually a pretty good reason to xlog the gid for 2pc rollback
and commit if we're at wal_level >= logical. Being able to advance
restart_lsn and avoid the re-decoding work is a big win.

Come to think of it, we have to advance the client replication
identifier as part of PREPARE TRANSACTION anyway, otherwise we'd try
to repeat and re-prepare the same xact on crash recovery.

Given that, I withdraw my objection to adding the gid to commit and
rollback xlog records, though it should only be done if they're 2pc
commit/abort, and only if XLogLogicalInfoActive().

>> BTW, I've been reviewing the patch in more detail. Other than a bunch
>> of copy-and-paste that I'm cleaning up, the main issue I've found is
>> that in DecodePrepare, you call:
>>
>> SnapBuildCommitTxn(ctx->snapshot_builder, buf->origptr, xid,
>> parsed->nsubxacts, parsed->subxacts);
>>
>> but I am not convinced it is correct to call it at PREPARE TRANSACTION
>> time, only at COMMIT PREPARED time. We want to see the 2pc prepared
>> xact's state when decoding it, but there might be later commits that
>> cannot yet see that state and shouldn't have it visible in their
>> snapshots.
>
> Agreed, that is a problem. That call allows us to decode this PREPARE, but after that
> it would be better to mark this transaction as running in the snapshot, or to perform the PREPARE
> decoding with some kind of copied-and-edited snapshot. I'll have a look at this.

Thanks.

It's also worth noting that with your current approach, 2PC xacts will
produce two calls to the output plugin's commit() callback, once for
the PREPARE TRANSACTION and another for the COMMIT PREPARED or
ROLLBACK PREPARED, the latter two with a faked-up state. I'm not a
huge fan of that. It's not entirely backward compatible since it
violates the previously safe assumption that there's a 1:1
relationship between begin and commit callbacks with no interleaving,
for one thing, and I think it's also a bit misleading to send a
PREPARE TRANSACTION to a callback that could previously only receive a
true commit.

I particularly dislike calling a commit callback for an abort. So I'd
like to look further into the interface side of things. I'm inclined
to suggest adding new callbacks for 2pc prepare, commit and rollback,
and if the output plugin doesn't set them fall back to the existing
behaviour. Plugins that aren't interested in 2PC (think ETL) should
probably not have to deal with it, we might as well just send them
only the actually committed xacts, when they commit.
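
The fallback behaviour described above can be modeled in a few lines (an illustrative Python sketch with invented names; the real interface would be C output-plugin callbacks):

```python
# Sketch of optional 2PC callbacks with fallback; names are invented for
# illustration and do not claim to match PostgreSQL's C API.
events = []

class OutputPlugin:
    def __init__(self, prepare_cb=None, commit_prepared_cb=None,
                 rollback_prepared_cb=None):
        # commit_cb is mandatory today; the 2PC callbacks would be optional.
        self.commit_cb = lambda txn: events.append(("commit", txn))
        self.prepare_cb = prepare_cb
        self.commit_prepared_cb = commit_prepared_cb
        self.rollback_prepared_cb = rollback_prepared_cb

    def is_2pc_aware(self):
        return self.prepare_cb is not None

def decode_prepare(plugin, txn):
    if plugin.is_2pc_aware():
        plugin.prepare_cb(txn)           # plugin handles PREPARE itself
    # else: hold the xact back; a 2PC-unaware plugin sees nothing yet

def decode_commit_prepared(plugin, txn):
    if plugin.is_2pc_aware():
        plugin.commit_prepared_cb(txn)   # body was already sent at PREPARE
    else:
        plugin.commit_cb(txn)            # replay as an ordinary commit

aware = OutputPlugin(
    prepare_cb=lambda txn: events.append(("prepare", txn)),
    commit_prepared_cb=lambda txn: events.append(("commit_prepared", txn)))
legacy = OutputPlugin()                  # e.g. an ETL plugin, 2PC-unaware

decode_prepare(aware, "x"); decode_commit_prepared(aware, "x")
decode_prepare(legacy, "y"); decode_commit_prepared(legacy, "y")
print(events)  # aware saw prepare + commit_prepared; legacy saw one plain commit
```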

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-02 16:34:18
Message-ID: abf61103-fff7-c7c3-481f-e2729db0e069@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 02/03/17 13:23, Craig Ringer wrote:
> On 2 March 2017 at 16:20, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>>
>>> On 2 Mar 2017, at 11:00, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>>
>>> We already have it, because we just decoded the PREPARE TRANSACTION.
>>> I'm preparing a patch revision to demonstrate this.
>>
>> Yes, we already have it, but if the server reboots between COMMIT PREPARED (all
>> prepared state is gone) and the decoding of this COMMIT PREPARED, then we lose
>> that mapping, don't we?
>
> I was about to explain how restart_lsn works again, and how that would
> mean we'd always re-decode the PREPARE TRANSACTION before any COMMIT
> PREPARED or ROLLBACK PREPARED on crash. But...
>
> Actually, the way you've implemented it, that won't be the case. You
> treat PREPARE TRANSACTION as a special-case of COMMIT, and the client
> will presumably send replay confirmation after it has applied the
> PREPARE TRANSACTION. In fact, it has to if we want 2PC to work with
> synchronous replication. This will allow restart_lsn to advance to
> after the PREPARE TRANSACTION record if there's no other older xact
> and we see a suitable xl_running_xacts record. So we wouldn't decode
> the PREPARE TRANSACTION again after restart.
>

Unless we just don't let restart_lsn go forward while there is a 2PC transaction that
hasn't been decoded yet (the twophase state stores the prepare LSN), but that's
probably too much of a kludge.

>
> It's also worth noting that with your current approach, 2PC xacts will
> produce two calls to the output plugin's commit() callback, once for
> the PREPARE TRANSACTION and another for the COMMIT PREPARED or
> ROLLBACK PREPARED, the latter two with a faked-up state. I'm not a
> huge fan of that. It's not entirely backward compatible since it
> violates the previously safe assumption that there's a 1:1
> relationship between begin and commit callbacks with no interleaving,
> for one thing, and I think it's also a bit misleading to send a
> PREPARE TRANSACTION to a callback that could previously only receive a
> true commit.
>
> I particularly dislike calling a commit callback for an abort. So I'd
> like to look further into the interface side of things. I'm inclined
> to suggest adding new callbacks for 2pc prepare, commit and rollback,
> and if the output plugin doesn't set them fall back to the existing
> behaviour. Plugins that aren't interested in 2PC (think ETL) should
> probably not have to deal with it, we might as well just send them
> only the actually committed xacts, when they commit.
>

I think this is a good approach to handle it.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: David Steele <david(at)pgmasters(dot)net>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-14 13:45:45
Message-ID: e70ba571-be38-10cc-d1f0-29a5cc68f6ac@pgmasters.net
Lists: pgsql-hackers

On 3/2/17 11:34 AM, Petr Jelinek wrote:
> On 02/03/17 13:23, Craig Ringer wrote:
>>
>> I particularly dislike calling a commit callback for an abort. So I'd
>> like to look further into the interface side of things. I'm inclined
>> to suggest adding new callbacks for 2pc prepare, commit and rollback,
>> and if the output plugin doesn't set them fall back to the existing
>> behaviour. Plugins that aren't interested in 2PC (think ETL) should
>> probably not have to deal with it, we might as well just send them
>> only the actually committed xacts, when they commit.
>>
>
> I think this is a good approach to handle it.

It's been a while since there was any activity on this thread and a very
long time since the last patch. As far as I can see there are far more
questions than answers in this thread.

If you need more time to produce a patch, please post an explanation for
the delay and a schedule for the new patch. If no patch or explanation
is posted by 2017-03-17 AoE, I will mark this submission
"Returned with Feedback".

--
-David
david(at)pgmasters(dot)net


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-15 07:42:03
Message-ID: 11fb4e6c-58f2-7a41-ea58-c285e53c8d19@2ndquadrant.com
Lists: pgsql-hackers

On 02/03/17 17:34, Petr Jelinek wrote:
> On 02/03/17 13:23, Craig Ringer wrote:
>> On 2 March 2017 at 16:20, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>>>
>>>> On 2 Mar 2017, at 11:00, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>>>
>>>> We already have it, because we just decoded the PREPARE TRANSACTION.
>>>> I'm preparing a patch revision to demonstrate this.
>>>
>>> Yes, we already have it, but if the server reboots between COMMIT PREPARED (all
>>> prepared state is gone) and decoding of that COMMIT PREPARED, then we lose
>>> that mapping, don't we?
>>
>> I was about to explain how restart_lsn works again, and how that would
>> mean we'd always re-decode the PREPARE TRANSACTION before any COMMIT
>> PREPARED or ROLLBACK PREPARED on crash. But...
>>
>> Actually, the way you've implemented it, that won't be the case. You
>> treat PREPARE TRANSACTION as a special-case of COMMIT, and the client
>> will presumably send replay confirmation after it has applied the
>> PREPARE TRANSACTION. In fact, it has to if we want 2PC to work with
>> synchronous replication. This will allow restart_lsn to advance to
>> after the PREPARE TRANSACTION record if there's no other older xact
>> and we see a suitable xl_running_xacts record. So we wouldn't decode
>> the PREPARE TRANSACTION again after restart.
>>

Thinking about this some more. Why can't we use the same mechanism
standby uses, ie, use xid to identify the 2PC? If output plugin cares
about doing 2PC in two phases, it can send xid as part of its protocol
(like the PG10 logical replication and pglogical do already) and simply
remember on downstream the remote node + remote xid of the 2PC in
progress. That way there is no need for gids in COMMIT PREPARED and this
patch would be much simpler (as the tracking would be left to actual
replication implementation as opposed to decoding). Or am I missing
something?
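
The downstream bookkeeping this implies might look roughly like the following sketch (Python pseudocode; the class and method names are invented, and a real subscriber would have to make the mapping crash-safe before confirming the prepare upstream):

```python
# Sketch: the subscriber remembers (origin, remote xid) -> local gid when it
# prepares the transaction locally, so the later COMMIT PREPARED or ROLLBACK
# PREPARED only needs to carry the remote xid, not a gid in upstream WAL.

class Subscriber:
    def __init__(self):
        self.in_progress = {}  # (origin, remote_xid) -> local gid

    def on_remote_prepare(self, origin, remote_xid):
        # Derive a local gid for PREPARE TRANSACTION; a real implementation
        # must persist this mapping before acknowledging the prepare.
        gid = "sub_%s_%d" % (origin, remote_xid)
        self.in_progress[(origin, remote_xid)] = gid
        return gid

    def on_remote_finish(self, origin, remote_xid, committed):
        # Second phase: look the gid up by remote xid and finish locally.
        gid = self.in_progress.pop((origin, remote_xid))
        action = "COMMIT PREPARED" if committed else "ROLLBACK PREPARED"
        return (action, gid)
```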

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-16 11:44:36
Message-ID: CAMsr+YHb8gTViN3feVJFO2DrVrjVtWjESqhBHfpnssTWwf+GQg@mail.gmail.com
Lists: pgsql-hackers

On 15 March 2017 at 15:42, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:

> Thinking about this some more. Why can't we use the same mechanism
> standby uses, ie, use xid to identify the 2PC?

It pushes work onto the downstream, which has to keep an <xid,gid>
mapping in a crash-safe, persistent form. We'll be doing a flush of
some kind anyway so we can report successful prepare to the upstream
so an additional flush of a SLRU might not be so bad for a postgres
downstream. And I guess any other clients will have some kind of
downstream persistent mapping to use.

So I think I have a mild preference for recording the gid on 2pc
commit and abort records in the master's WAL, where it's very cheap
and simple.

But I agree that just sending the xid is a viable option if that falls through.

I'm going to try to pick this patch up and amend its interface per our
discussion earlier, see if I can get it committable.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-16 11:52:28
Message-ID: 69C2AB4F-2671-4E12-B0DE-FACADB3A0F59@postgrespro.ru
Lists: pgsql-hackers


> On 16 Mar 2017, at 14:44, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> I'm going to try to pick this patch up and amend its interface per our
> discussion earlier, see if I can get it committable.

I’m working right now on an issue with building snapshots for decoding prepared transactions.
I hope I'll send an updated patch later today.

> --
> Craig Ringer http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-17 00:10:27
Message-ID: EEBD82AA-61EE-46F4-845E-05B94168E8F2@postgrespro.ru
Lists: pgsql-hackers


>> On 2 Mar 2017, at 11:00, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>
>> BTW, I've been reviewing the patch in more detail. Other than a bunch
>> of copy-and-paste that I'm cleaning up, the main issue I've found is
>> that in DecodePrepare, you call:
>>
>> SnapBuildCommitTxn(ctx->snapshot_builder, buf->origptr, xid,
>> parsed->nsubxacts, parsed->subxacts);
>>
>> but I am not convinced it is correct to call it at PREPARE TRANSACTION
>> time, only at COMMIT PREPARED time. We want to see the 2pc prepared
>> xact's state when decoding it, but there might be later commits that
>> cannot yet see that state and shouldn't have it visible in their
>> snapshots.
>
> Agree, that is a problem. That allows us to decode this PREPARE, but after that
> it is better to mark this transaction as running in the snapshot, or perform prepare
> decoding with some kind of copied-and-edited snapshot. I’ll have a look at this.
>

While working on this I’ve spotted quite a nasty corner case with aborted prepared
transactions. I have some ideas how to fix it, none of them great, but maybe my
view is blurred and I’ve missed something, so I want to ask here first.

Suppose we create a table, then alter it in a 2PC transaction, and then abort that
transaction. pg_class will then contain something like this:

 xmin | xmax | relname
  100 |  200 | mytable
  200 |    0 | mytable

After the abort, the tuple (100, 200, mytable) becomes visible again, and if we alter
the table once more, the xmax of the first tuple will be set to the current xid,
resulting in the following table:

 xmin | xmax | relname
  100 |  300 | mytable
  200 |    0 | mytable
  300 |    0 | mytable

At that moment we have lost the information that the first tuple was deleted by our
prepared transaction. From the point of view of the historic snapshot that will be
constructed to decode the PREPARE, the first tuple is visible, but actually the
second tuple should be used. Moreover, such a snapshot could see both tuples,
violating oid uniqueness, but the heap scan stops after finding the first one.
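
The loss can be reproduced with a toy visibility model (a deliberate simplification of PostgreSQL's MVCC rules, using the xids from the tables above):

```python
# Toy model of the pg_class example: the historic snapshot built to decode
# the PREPARE of xid 200 treats xids {100, 200} as committed. Once the
# aborted transaction's xmax (200) has been overwritten by 300, nothing
# records that xid 200 deleted the first tuple.

def visible(tup, snap_committed):
    """Visible if the inserter committed and no committed xid deleted it
    (xmax == 0 means 'not deleted'). Simplified, not real visibility code."""
    xmin, xmax, _ = tup
    return xmin in snap_committed and (xmax == 0 or xmax not in snap_committed)

snap = {100, 200}  # historic snapshot for decoding the PREPARE of xid 200

before_overwrite = [(100, 200, "mytable"), (200, 0, "mytable")]
after_overwrite = [(100, 300, "mytable"), (200, 0, "mytable"), (300, 0, "mytable")]

seen_before = [t for t in before_overwrite if visible(t, snap)]
seen_after = [t for t in after_overwrite if visible(t, snap)]
# seen_before contains only the tuple written by xid 200, as intended;
# seen_after contains both the stale tuple and xid 200's version.
```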

I see here two possible workarounds:

* First scan the catalog filtering out tuples with xmax bigger than snapshot->xmax,
as they were possibly deleted by our tx. Then, if nothing is found, scan in the usual way.

* Do not decode such a transaction at all. If by the time we decode the PREPARE record we
already know that it is aborted, then decoding it doesn't make much sense.
IMO the intended usage of logical 2PC decoding is to decide about commit/abort based
on answers from logical subscribers/replicas, so there will be a barrier between
prepare and commit/abort and such situations shouldn't happen.

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-17 02:34:05
Message-ID: CAMsr+YE5UC7MeZFt7+rhNrcacAne8bogftJH3GtqGQwMN4a1Gg@mail.gmail.com
Lists: pgsql-hackers

On 17 March 2017 at 08:10, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:

> While working on this I’ve spotted quite a nasty corner case with aborted prepared
> transactions. I have some ideas how to fix it, none of them great, but maybe my
> view is blurred and I’ve missed something, so I want to ask here first.
>
> Suppose we create a table, then alter it in a 2PC transaction, and then abort that
> transaction. pg_class will then contain something like this:
>
> xmin | xmax | relname
>  100 |  200 | mytable
>  200 |    0 | mytable
>
> After the abort, the tuple (100, 200, mytable) becomes visible again, and if we alter
> the table once more, the xmax of the first tuple will be set to the current xid,
> resulting in the following table:
>
> xmin | xmax | relname
>  100 |  300 | mytable
>  200 |    0 | mytable
>  300 |    0 | mytable
>
> At that moment we have lost the information that the first tuple was deleted by our
> prepared transaction.

Right. And while the prepared xact has aborted, we don't control when
it aborts and when those overwrites can start happening. We can and
should check if a 2pc xact is aborted before we start decoding it so
we can skip decoding it if it's already aborted, but it could be
aborted *while* we're decoding it, then have data needed for its
snapshot clobbered.

This hasn't mattered in the past because prepared xacts (and
especially aborted 2pc xacts) have never needed snapshots, we've never
needed to do something from the perspective of a prepared xact.

I think we'll probably need to lock the 2PC xact so it cannot be
aborted or committed while we're decoding it, until we finish decoding
it. So we lock it, then check if it's already aborted/already
committed/in progress. If it's aborted, treat it like any normal
aborted xact. If it's committed, treat it like any normal committed
xact. If it's in progress, keep the lock and decode it.

People using logical decoding for 2PC will presumably want to control
2PC via logical decoding, so they're not so likely to mind such a
lock.
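
In outline, that sequencing might look like this (illustrative Python pseudocode; the states and helpers are invented for the sketch, not an existing API):

```python
# Sketch of lock-then-check: the 2PC transaction's state is examined only
# after the lock is held, so it cannot change under us; a transaction that
# is still in progress stays locked for the whole decode.

class TwoPhaseTxn:
    def __init__(self, state):
        self._state = state          # "aborted" | "committed" | "in_progress"
        self.locked = False

    def lock(self):
        self.locked = True           # blocks COMMIT/ROLLBACK PREPARED

    def unlock(self):
        self.locked = False

    def state(self):
        return self._state

def decide_decode(txn):
    txn.lock()
    state = txn.state()              # re-check under the lock
    if state == "aborted":
        txn.unlock()
        return "skip"                # treat like any aborted xact
    if state == "committed":
        txn.unlock()
        return "decode_at_commit"    # treat like any committed xact
    return "decode_holding_lock"     # keep the lock while decoding
```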

> * First scan the catalog filtering out tuples with xmax bigger than snapshot->xmax,
> as they were possibly deleted by our tx. Then, if nothing is found, scan in the usual way.

I don't think that'll be at all viable with the syscache/relcache
machinery. Way too intrusive.

> * Do not decode such a transaction at all.

Yes, that's what I'd like to do, per above.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andres Freund <andres(at)anarazel(dot)de>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-17 02:38:02
Message-ID: CAMsr+YHqGo=yeo7ULTvqc=DA0QokzmTiOL9jezmDNkZFzC2wnw@mail.gmail.com
Lists: pgsql-hackers

On 16 March 2017 at 19:52, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:

>
> I’m working right now on an issue with building snapshots for decoding prepared transactions.
> I hope I'll send an updated patch later today.

Great.

What approach are you taking?

It looks like the snapshot builder actually does most of the work we
need for this already, maintaining a stack of snapshots we can use. It
might be as simple as invalidating the relcache/syscache when we exit
(and enter?) decoding of a prepared 2pc xact, since it violates the
usual assumption of logical decoding that we decode things strictly in
commit-time order.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-17 15:59:50
Message-ID: CA+TgmoYRuXtpuPFzx0vrPFVFuOGmfM77Lg7DQOEG=s2zQ02GSA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Mar 16, 2017 at 10:34 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> On 17 March 2017 at 08:10, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>> While working on this I’ve spotted quite a nasty corner case with aborted prepared
>> transactions. I have some ideas how to fix it, none of them great, but maybe my
>> view is blurred and I’ve missed something, so I want to ask here first.
>>
>> Suppose we create a table, then alter it in a 2PC transaction, and then abort that
>> transaction. pg_class will then contain something like this:
>>
>> xmin | xmax | relname
>>  100 |  200 | mytable
>>  200 |    0 | mytable
>>
>> After the abort, the tuple (100, 200, mytable) becomes visible again, and if we alter
>> the table once more, the xmax of the first tuple will be set to the current xid,
>> resulting in the following table:
>>
>> xmin | xmax | relname
>>  100 |  300 | mytable
>>  200 |    0 | mytable
>>  300 |    0 | mytable
>>
>> At that moment we have lost the information that the first tuple was deleted by our
>> prepared transaction.
>
> Right. And while the prepared xact has aborted, we don't control when
> it aborts and when those overwrites can start happening. We can and
> should check if a 2pc xact is aborted before we start decoding it so
> we can skip decoding it if it's already aborted, but it could be
> aborted *while* we're decoding it, then have data needed for its
> snapshot clobbered.
>
> This hasn't mattered in the past because prepared xacts (and
> especially aborted 2pc xacts) have never needed snapshots, we've never
> needed to do something from the perspective of a prepared xact.
>
> I think we'll probably need to lock the 2PC xact so it cannot be
> aborted or committed while we're decoding it, until we finish decoding
> it. So we lock it, then check if it's already aborted/already
> committed/in progress. If it's aborted, treat it like any normal
> aborted xact. If it's committed, treat it like any normal committed
> xact. If it's in progress, keep the lock and decode it.

But that lock could need to be held for an unbounded period of time -
as long as decoding takes to complete - which seems pretty
undesirable. Worse still, the same problem will arise if you
eventually want to start decoding ordinary, non-2PC transactions that
haven't committed yet, which I think is something we definitely want
to do eventually; the current handling of bulk loads or bulk updates
leads to significant latency. You're not going to be able to tell an
active transaction that it isn't allowed to abort until you get done
with it, and I don't really think you should be allowed to lock out
2PC aborts for long periods of time either. That's going to stink for
users.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-19 13:26:21
Message-ID: 8217d096-3641-8022-09e4-4e625633b278@2ndquadrant.com
Lists: pgsql-hackers

On 17/03/17 03:34, Craig Ringer wrote:
> On 17 March 2017 at 08:10, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> While working on this I’ve spotted quite a nasty corner case with aborted prepared
>> transactions. I have some ideas how to fix it, none of them great, but maybe my
>> view is blurred and I’ve missed something, so I want to ask here first.
>>
>> Suppose we create a table, then alter it in a 2PC transaction, and then abort that
>> transaction. pg_class will then contain something like this:
>>
>> xmin | xmax | relname
>>  100 |  200 | mytable
>>  200 |    0 | mytable
>>
>> After the abort, the tuple (100, 200, mytable) becomes visible again, and if we alter
>> the table once more, the xmax of the first tuple will be set to the current xid,
>> resulting in the following table:
>>
>> xmin | xmax | relname
>>  100 |  300 | mytable
>>  200 |    0 | mytable
>>  300 |    0 | mytable
>>
>> At that moment we have lost the information that the first tuple was deleted by our
>> prepared transaction.
>
> Right. And while the prepared xact has aborted, we don't control when
> it aborts and when those overwrites can start happening. We can and
> should check if a 2pc xact is aborted before we start decoding it so
> we can skip decoding it if it's already aborted, but it could be
> aborted *while* we're decoding it, then have data needed for its
> snapshot clobbered.
>
> This hasn't mattered in the past because prepared xacts (and
> especially aborted 2pc xacts) have never needed snapshots, we've never
> needed to do something from the perspective of a prepared xact.
>
> I think we'll probably need to lock the 2PC xact so it cannot be
> aborted or committed while we're decoding it, until we finish decoding
> it. So we lock it, then check if it's already aborted/already
> committed/in progress. If it's aborted, treat it like any normal
> aborted xact. If it's committed, treat it like any normal committed
> xact. If it's in progress, keep the lock and decode it.
>
> People using logical decoding for 2PC will presumably want to control
> 2PC via logical decoding, so they're not so likely to mind such a
> lock.
>
>> * First scan the catalog filtering out tuples with xmax bigger than snapshot->xmax,
>> as they were possibly deleted by our tx. Then, if nothing is found, scan in the usual way.
>
> I don't think that'll be at all viable with the syscache/relcache
> machinery. Way too intrusive.
>

I think only genam would need changes to do two-phase scan for this as
the catalog scans should ultimately go there. It's going to slow down
things but we could limit the impact by doing the two-phase scan only
when historical snapshot is in use and the tx being decoded changed
catalogs (we already have global knowledge of the first one, and it
would be trivial to add the second one as we have local knowledge of
that as well).

What I think is a better strategy than filtering out by xmax would be
filtering "in" by xmin, though: the first scan would return only
tuples modified by the current tx which are visible in the snapshot, and
the second scan would return the other visible tuples. That way whatever
the decoded tx has seen should always win.
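
That "decoded transaction wins" rule can be sketched as a toy two-pass scan (reusing the xmin/xmax example from earlier in the thread; simplified visibility, not actual genam code):

```python
# Pass 1 returns only visible tuples written by the transaction being
# decoded; only if it finds nothing does pass 2 return the other visible
# tuples. So the decoded transaction's own catalog rows always win over
# stale rows whose xmax was overwritten after the abort.

def two_pass_scan(tuples, snap_committed, decoded_xid, match):
    def vis(t):
        xmin, xmax, _ = t
        return xmin in snap_committed and (xmax == 0 or xmax not in snap_committed)

    candidates = [t for t in tuples if match(t) and vis(t)]
    own = [t for t in candidates if t[0] == decoded_xid]  # pass 1: filter "in" by xmin
    return own if own else candidates                     # pass 2: everything visible
```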

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-20 08:12:20
Message-ID: CAMsr+YHUEQoJGtG8isK_i4ts8ZW9w3UQO0Lp92yGMyZxMY8XjQ@mail.gmail.com
Lists: pgsql-hackers

On 17 March 2017 at 23:59, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> But that lock could need to be held for an unbounded period of time -
> as long as decoding takes to complete - which seems pretty
> undesirable.

Yeah. We could use a recovery-conflict like mechanism to signal the
decoding session that someone wants to abort the xact, but it gets
messy.

> Worse still, the same problem will arise if you
> eventually want to start decoding ordinary, non-2PC transactions that
> haven't committed yet, which I think is something we definitely want
> to do eventually; the current handling of bulk loads or bulk updates
> leads to significant latency.

Yeah. If it weren't for that, I'd probably still just pursue locking.
But you're right that we'll have to solve this sooner or later. I'll
admit I hoped for later.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-20 08:32:06
Message-ID: CAMsr+YFKAw2Cc5Z+Jbw72FeLDBzUhs0drECxG4pmgkTH7LsoJg@mail.gmail.com
Lists: pgsql-hackers

On 19 March 2017 at 21:26, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:

> I think only genam would need changes to do two-phase scan for this as
> the catalog scans should ultimately go there. It's going to slow down
> things but we could limit the impact by doing the two-phase scan only
> when historical snapshot is in use and the tx being decoded changed
> catalogs (we already have global knowledge of the first one, and it
> would be trivial to add the second one as we have local knowledge of
> that as well).

We'll also have to clobber caches after we finish decoding a 2pc xact,
since we don't know those changes are visible to other xacts and can't
guarantee they'll ever be (if it aborts).

That's going to be "interesting" when trying to decode interleaved
transaction streams since we can't afford to clobber caches whenever
we see an xlog record from a different xact. We'll probably have to
switch to linear decoding with reordering when someone makes catalog
changes.

TBH, I have no idea how to approach the genam changes for the proposed
double-scan method. It sounds like Stas has some idea how to proceed
though (right?)

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-20 09:29:44
Message-ID: e8ac21ca-e0cc-9d97-f6aa-4b9e18d7a4e7@2ndquadrant.com
Lists: pgsql-hackers

On 20/03/17 09:32, Craig Ringer wrote:
> On 19 March 2017 at 21:26, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
>
>> I think only genam would need changes to do two-phase scan for this as
>> the catalog scans should ultimately go there. It's going to slow down
>> things but we could limit the impact by doing the two-phase scan only
>> when historical snapshot is in use and the tx being decoded changed
>> catalogs (we already have global knowledge of the first one, and it
>> would be trivial to add the second one as we have local knowledge of
>> that as well).
>
> We'll also have to clobber caches after we finish decoding a 2pc xact,
> since we don't know those changes are visible to other xacts and can't
> guarantee they'll ever be (if it aborts).
>

AFAIK reorder buffer already does that.

> That's going to be "interesting" when trying to decode interleaved
> transaction streams since we can't afford to clobber caches whenever
> we see an xlog record from a different xact. We'll probably have to
> switch to linear decoding with reordering when someone makes catalog
> changes.

We may need something that allows for representing multiple parallel
transactions in a single process and a cheap way of switching between them
(ie, similar to what we need for autonomous transactions). But that's not
something the current patch has to deal with.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-20 11:10:09
Message-ID: E6112EA9-2949-4E4A-8FE4-1C6A2096379D@postgrespro.ru
Lists: pgsql-hackers


> On 20 Mar 2017, at 11:32, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> On 19 March 2017 at 21:26, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
>
>> I think only genam would need changes to do two-phase scan for this as
>> the catalog scans should ultimately go there. It's going to slow down
>> things but we could limit the impact by doing the two-phase scan only
>> when historical snapshot is in use and the tx being decoded changed
>> catalogs (we already have global knowledge of the first one, and it
>> would be trivial to add the second one as we have local knowledge of
>> that as well).
>
>
> TBH, I have no idea how to approach the genam changes for the proposed
> double-scan method. It sounds like Stas has some idea how to proceed
> though (right?)
>

I thought about having a special field (or reusing one of the existing fields)
in the snapshot struct to force filtering xmax > snap->xmax or xmin = snap->xmin
as Petr suggested. Then this logic can reside in ReorderBufferCommit().
However, this does not solve the problem with the catcache, so I'm looking into that right now.

> On 17 Mar 2017, at 05:38, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> On 16 March 2017 at 19:52, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>>
>> I’m working right now on an issue with building snapshots for decoding prepared transactions.
>> I hope I'll send an updated patch later today.
>
>
> Great.
>
> What approach are you taking?

Just as before, I mark this transaction as committed in the snapshot builder, but after
decoding I delete it from xip (which holds committed transactions
in the case of a historic snapshot).

> --
> Craig Ringer http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-20 12:17:22
Message-ID: CAMsr+YFsN5D=dPjoyXN33OBTGJgfYn3rrh-BH3GX5Qfbg-3VRA@mail.gmail.com
Lists: pgsql-hackers

> I thought about having a special field (or reusing one of the existing fields)
> in the snapshot struct to force filtering xmax > snap->xmax or xmin = snap->xmin
> as Petr suggested. Then this logic can reside in ReorderBufferCommit().
> However, this does not solve the problem with the catcache, so I'm looking into that right now.

OK, so this is only an issue if we have xacts that change the schema
of tables and also insert/update/delete to their heaps. Right?

So, given that this is CF3 for Pg10, should we take a step back and
impose the limitation that we can decode 2PC with schema changes or
data row changes, but not both?

Applications can record DDL in transactional logical WAL messages for
decoding during 2pc processing. Or apps can do 2pc for DML. They just
can't do both at the same time, in the same xact.

Imperfect, but a lot less invasive. And we can even permit apps to use
the locking-based approach I outlined earlier instead:

All we have to do IMO is add an output plugin callback to filter
whether we want to decode a given 2pc xact at PREPARE TRANSACTION time
or defer until COMMIT PREPARED. It could:

* mark the xact for deferred decoding at commit time (the default if
the callback doesn't exist); or

* Acquire a lock on the 2pc xact and request immediate decoding only
if it gets the lock so concurrent ROLLBACK PREPARED is blocked; or

* inspect the reorder buffer contents for row changes and decide
whether to decode now or later based on that.
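
Put together, the decision might look like this (Python pseudocode; the callback name and return values are purely illustrative, not a committed API):

```python
# Sketch of a per-transaction filter callback: the plugin chooses whether a
# prepared transaction is decoded at PREPARE TRANSACTION time or deferred
# to COMMIT PREPARED; immediate decoding happens only if we can lock out a
# concurrent ROLLBACK PREPARED.

def when_to_decode(filter_prepare_cb, txn, try_lock):
    if filter_prepare_cb is None:
        return "at_commit"               # default: current behaviour
    choice = filter_prepare_cb(txn)      # plugin may inspect gid, buffer, ...
    if choice == "at_prepare" and try_lock(txn):
        return "at_prepare"              # decode now, holding the lock
    return "at_commit"                   # defer to the second phase
```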

It has a few downsides - for example, temp tables will be considered
"catalog changes" for now. But .. eh. We already accept a bunch of
practical limitations for catalog changes and DDL in logical decoding,
most notably regarding practical handling of full table rewrites.

> Just as before, I mark this transaction as committed in the snapshot builder, but after
> decoding I delete it from xip (which holds committed transactions
> in the case of a historic snapshot).

That seems kind of hacky TBH. I didn't much like marking it as
committed then un-committing it.

I think it's mostly an interface issue though. I'd rather say
SnapBuildPushPrepareTransaction and SnapBuildPopPreparedTransaction or
something, to make it clear what we're doing.
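
In pseudocode the suggested pairing would amount to something like this (a Python sketch of the xip bookkeeping only; the real snapshot builder is C and considerably more involved):

```python
# Push/pop framing of what the patch does: the prepared transaction is made
# visible to the historic snapshot only while its own changes are being
# decoded, and the visibility is explicitly undone afterwards.

class SnapBuild:
    def __init__(self):
        self.xip = set()  # xids treated as committed by the historic snapshot

    def push_prepared(self, xid):
        self.xip.add(xid)       # decode the PREPARE with its changes visible

    def pop_prepared(self, xid):
        self.xip.discard(xid)   # later xacts must not see uncommitted state
```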

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-20 12:55:09
Message-ID: CANP8+j+LoD8JiDL-jNtUGV87M3am6Vwqz0=ds9b5FkyqdXu57w@mail.gmail.com
Lists: pgsql-hackers

On 17 March 2017 at 23:59, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Thu, Mar 16, 2017 at 10:34 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>> On 17 March 2017 at 08:10, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>>> While working on this i’ve spotted quite a nasty corner case with aborted prepared
>>> transaction. I have some not that great ideas how to fix it, but maybe i blurred my
>>> view and missed something. So want to ask here at first.
>>>
>>> Suppose we created a table, then in 2pc tx we are altering it and after that aborting tx.
>>> So pg_class will have something like this:
>>>
>>> xmin | xmax | relname
>>> 100 | 200 | mytable
>>> 200 | 0 | mytable
>>>
>>> After previous abort, tuple (100,200,mytable) becomes visible and if we will alter table
>>> again then xmax of first tuple will be set current xid, resulting in following table:
>>>
>>> xmin | xmax | relname
>>> 100 | 300 | mytable
>>> 200 | 0 | mytable
>>> 300 | 0 | mytable
>>>
>>> In that moment we’ve lost information that first tuple was deleted by our prepared tx.
>>
>> Right. And while the prepared xact has aborted, we don't control when
>> it aborts and when those overwrites can start happening. We can and
>> should check if a 2pc xact is aborted before we start decoding it so
>> we can skip decoding it if it's already aborted, but it could be
>> aborted *while* we're decoding it, then have data needed for its
>> snapshot clobbered.
>>
>> This hasn't mattered in the past because prepared xacts (and
>> especially aborted 2pc xacts) have never needed snapshots, we've never
>> needed to do something from the perspective of a prepared xact.
>>
>> I think we'll probably need to lock the 2PC xact so it cannot be
>> aborted or committed while we're decoding it, until we finish decoding
>> it. So we lock it, then check if it's already aborted/already
>> committed/in progress. If it's aborted, treat it like any normal
>> aborted xact. If it's committed, treat it like any normal committed
>> xact. If it's in progress, keep the lock and decode it.
>
> But that lock could need to be held for an unbounded period of time -
> as long as decoding takes to complete - which seems pretty
> undesirable.

This didn't seem to be too much of a problem when I read it.

Sure, the issue noted by Stas exists, but it requires
Alter-Abort-Alter for it to be a problem. Meaning that normal non-DDL
transactions do not have problems. Neither would a real-time system
that uses the decoded data to decide whether to commit or abort the
transaction; in that case there would never be an abort until after
decoding.

So I suggest we have a pre-prepare callback to ensure that the plugin
can decide whether to decode or not. We can pass information to the
plugin such as whether we have issued DDL in that xact or not. The
plugin can then decide how it wishes to handle it, so if somebody
doesn't like the idea of a lock then don't use one. The plugin is
already responsible for many things, so this is nothing new.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-20 12:57:07
Message-ID: EFF3D2BF-05BD-4EBA-A254-27EED342CD22@postgrespro.ru
Lists: pgsql-hackers


> On 20 Mar 2017, at 15:17, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
>> I thought about having special field (or reusing one of the existing fields)
>> in snapshot struct to force filtering xmax > snap->xmax or xmin = snap->xmin
>> as Petr suggested. Then this logic can reside in ReorderBufferCommit().
>> However this is not solving problem with catcache, so I'm looking into it right now.
>
> OK, so this is only an issue if we have xacts that change the schema
> of tables and also insert/update/delete to their heaps. Right?
>
> So, given that this is CF3 for Pg10, should we take a step back and
> impose the limitation that we can decode 2PC with schema changes or
> data row changes, but not both?

Yep, time is tight. I'll try today/tomorrow to proceed with this two-scan approach.
If I fail to do that in time, I'll just update this patch to decode
only non-DDL 2PC transactions, as you suggested.

>> Just as before I marking this transaction committed in snapbuilder, but after
>> decoding I delete this transaction from xip (which holds committed transactions
>> in case of historic snapshot).
>
> That seems kind of hacky TBH. I didn't much like marking it as
> committed then un-committing it.
>
> I think it's mostly an interface issue though. I'd rather say
> SnapBuildPushPrepareTransaction and SnapBuildPopPreparedTransaction or
> something, to make it clear what we're doing.

Yes, that will be less confusing. However, there isn't any kind of queue, so
SnapBuildStartPrepare / SnapBuildFinishPrepare should work too.

> --
> Craig Ringer http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services

Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-20 13:39:23
Message-ID: CAMsr+YHwPAeMrD2HEnQFy5dQr8CR=fwVjnokZNFua_K+QmMhBA@mail.gmail.com
Lists: pgsql-hackers

On 20 March 2017 at 20:57, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> On 20 Mar 2017, at 15:17, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>
>>> I thought about having special field (or reusing one of the existing fields)
>>> in snapshot struct to force filtering xmax > snap->xmax or xmin = snap->xmin
>>> as Petr suggested. Then this logic can reside in ReorderBufferCommit().
>>> However this is not solving problem with catcache, so I'm looking into it right now.
>>
>> OK, so this is only an issue if we have xacts that change the schema
>> of tables and also insert/update/delete to their heaps. Right?
>>
>> So, given that this is CF3 for Pg10, should we take a step back and
>> impose the limitation that we can decode 2PC with schema changes or
>> data row changes, but not both?
>
> Yep, time is tight. I’ll try today/tomorrow to proceed with this two scan approach.
> If I’ll fail to do that during this time then I’ll just update this patch to decode
> only non-ddl 2pc transactions as you suggested.

I wasn't suggesting not decoding them, but giving the plugin the
option of whether to proceed with decoding or not.

As Simon said, have a pre-decode-prepared callback that lets the
plugin get a lock on the 2pc xact if it wants, or say it doesn't want
to decode it until it commits.

That'd be useful anyway, so we can filter and only do decoding at
prepare transaction time of xacts the downstream wants to know about
before they commit.

>>> Just as before I marking this transaction committed in snapbuilder, but after
>>> decoding I delete this transaction from xip (which holds committed transactions
>>> in case of historic snapshot).
>>
>> That seems kind of hacky TBH. I didn't much like marking it as
>> committed then un-committing it.
>>
>> I think it's mostly an interface issue though. I'd rather say
>> SnapBuildPushPrepareTransaction and SnapBuildPopPreparedTransaction or
>> something, to make it clear what we're doing.
>
> Yes, that will be less confusing. However there is no any kind of queue, so
> SnapBuildStartPrepare / SnapBuildFinishPrepare should work too.

Yeah, that's better.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-20 13:47:50
Message-ID: F5187EFB-8907-4D73-B961-9AC8B45B03CD@postgrespro.ru
Lists: pgsql-hackers


> On 20 Mar 2017, at 16:39, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> On 20 March 2017 at 20:57, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>>
>>> On 20 Mar 2017, at 15:17, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>>
>>>> I thought about having special field (or reusing one of the existing fields)
>>>> in snapshot struct to force filtering xmax > snap->xmax or xmin = snap->xmin
>>>> as Petr suggested. Then this logic can reside in ReorderBufferCommit().
>>>> However this is not solving problem with catcache, so I'm looking into it right now.
>>>
>>> OK, so this is only an issue if we have xacts that change the schema
>>> of tables and also insert/update/delete to their heaps. Right?
>>>
>>> So, given that this is CF3 for Pg10, should we take a step back and
>>> impose the limitation that we can decode 2PC with schema changes or
>>> data row changes, but not both?
>>
>> Yep, time is tight. I’ll try today/tomorrow to proceed with this two scan approach.
>> If I’ll fail to do that during this time then I’ll just update this patch to decode
>> only non-ddl 2pc transactions as you suggested.
>
> I wasn't suggesting not decoding them, but giving the plugin the
> option of whether to proceed with decoding or not.
>
> As Simon said, have a pre-decode-prepared callback that lets the
> plugin get a lock on the 2pc xact if it wants, or say it doesn't want
> to decode it until it commits.
>
> That'd be useful anyway, so we can filter and only do decoding at
> prepare transaction time of xacts the downstream wants to know about
> before they commit.

Ah, got that. Okay.

> --
> Craig Ringer http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-27 01:31:14
Message-ID: CAMsr+YGkEtwcsvQSU_6w2hnoY=oGEAxdK_ApYs3e-iwqSm06qg@mail.gmail.com
Lists: pgsql-hackers

On 20 March 2017 at 21:47, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> On 20 Mar 2017, at 16:39, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>
>> On 20 March 2017 at 20:57, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>>>
>>>> On 20 Mar 2017, at 15:17, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>>>>
>>>>> I thought about having special field (or reusing one of the existing fields)
>>>>> in snapshot struct to force filtering xmax > snap->xmax or xmin = snap->xmin
>>>>> as Petr suggested. Then this logic can reside in ReorderBufferCommit().
>>>>> However this is not solving problem with catcache, so I'm looking into it right now.
>>>>
>>>> OK, so this is only an issue if we have xacts that change the schema
>>>> of tables and also insert/update/delete to their heaps. Right?
>>>>
>>>> So, given that this is CF3 for Pg10, should we take a step back and
>>>> impose the limitation that we can decode 2PC with schema changes or
>>>> data row changes, but not both?
>>>
>>> Yep, time is tight. I’ll try today/tomorrow to proceed with this two scan approach.
>>> If I’ll fail to do that during this time then I’ll just update this patch to decode
>>> only non-ddl 2pc transactions as you suggested.
>>
>> I wasn't suggesting not decoding them, but giving the plugin the
>> option of whether to proceed with decoding or not.
>>
>> As Simon said, have a pre-decode-prepared callback that lets the
>> plugin get a lock on the 2pc xact if it wants, or say it doesn't want
>> to decode it until it commits.
>>
>> That'd be useful anyway, so we can filter and only do decoding at
>> prepare transaction time of xacts the downstream wants to know about
>> before they commit.
>
> Ah, got that. Okay.

Any news here?

We're in the last week of the CF. If you have a patch that's nearly
ready or getting there, now would be a good time to post it for help
and input from others.

I would really like to get this in, but we're running out of time.

Even if you just post your snapshot management work, with the cosmetic
changes discussed above, that would be a valuable start.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-27 09:26:01
Message-ID: CAMsr+YF0FDNjstkKzByeyG3G66SWf_mbH+cr3qhKAK3UYaZzaA@mail.gmail.com
Lists: pgsql-hackers

On 27 March 2017 at 09:31, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:

> We're in the last week of the CF. If you have a patch that's nearly
> ready or getting there, now would be a good time to post it for help
> and input from others.
>
> I would really like to get this in, but we're running out of time.
>
> Even if you just post your snapshot management work, with the cosmetic
> changes discussed above, that would be a valuable start.

I'm going to pick up the last patch and:

* Ensure we only add the GID to xact records for 2pc commits and aborts

* Add separate callbacks for prepare, abort prepared, and commit
prepared (of xacts already processed during prepare), so we aren't
overloading the "commit" callback and don't have to create fake empty
transactions to pass to the commit callback;

* Add another callback to determine whether an xact should be
processed at PREPARE TRANSACTION or COMMIT PREPARED time.

* Rename the snapshot builder faux-commit stuff in the current patch
so it's clearer what's going on.

* Write tests covering DDL, abort-during-decode, etc

Some special care is needed for the callback that decides whether to
process a given xact as 2PC or not. It's called before PREPARE
TRANSACTION to decide whether to decode any given xact at prepare time
or wait until it commits. It's called again at COMMIT PREPARED time if
we crashed after we processed PREPARE TRANSACTION and advanced our
confirmed_flush_lsn such that we won't re-process the PREPARE
TRANSACTION again. Our restart_lsn might've advanced past it so we
never even decode it, so we can't rely on seeing it at all. It has
access to the xid, gid and invalidations, all of which we have at both
prepare and commit time, to make its decision from. It must have the
same result at prepare and commit time for any given xact. We can
probably use a cache in the reorder buffer to avoid the 2nd call on
commit prepared if we haven't crashed/reconnected between the two.

This proposal does not provide a way to safely decode a 2pc xact that
made catalog changes which may be aborted while being decoded. The
plugin must lock such an xact so that it can't be aborted while being
processed, or defer decoding until commit prepared. It can use the
invalidations for the commit to decide.
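Since the callback must return the same answer at PREPARE TRANSACTION and COMMIT PREPARED time, it has to be a pure function of data available at both points (xid, gid, invalidations). A sketch, with a made-up policy purely for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t TransactionId;   /* stand-in for PostgreSQL's xid type */

/*
 * Hypothetical filter: decides, from inputs we have at both PREPARE and
 * COMMIT PREPARED time, whether the xact is decoded at prepare.  Because
 * it depends only on these inputs and has no hidden state, re-invoking
 * it at COMMIT PREPARED after a crash/reconnect yields the same answer.
 */
static bool
decode_at_prepare(TransactionId xid, const char *gid, bool has_invalidations)
{
    (void) xid;                 /* unused in this toy policy */

    /* example policy: defer xacts that carry catalog invalidations,
     * and any xact whose gid opts out via a "nodecode-" prefix */
    if (has_invalidations)
        return false;
    return strncmp(gid, "nodecode-", 9) != 0;
}
```
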

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-27 09:53:01
Message-ID: DFC9E270-4554-4E46-B2F8-FD864C29610D@postgrespro.ru
Lists: pgsql-hackers


> On 27 Mar 2017, at 12:26, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> On 27 March 2017 at 09:31, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
>> We're in the last week of the CF. If you have a patch that's nearly
>> ready or getting there, now would be a good time to post it for help
>> and input from others.
>>
>> I would really like to get this in, but we're running out of time.
>>
>> Even if you just post your snapshot management work, with the cosmetic
>> changes discussed above, that would be a valuable start.
>
> I'm going to pick up the last patch and:

I heavily underestimated the amount of changes there, but I'm almost finished
and will send an updated patch in several hours.

> * Ensure we only add the GID to xact records for 2pc commits and aborts

And only when wal_level >= logical. Done.
The patch also adds origin info to prepares and aborts.

> * Add separate callbacks for prepare, abort prepared, and commit
> prepared (of xacts already processed during prepare), so we aren't
> overloading the "commit" callback and don't have to create fake empty
> transactions to pass to the commit callback;

Done.

> * Add another callback to determine whether an xact should be
> processed at PREPARE TRANSACTION or COMMIT PREPARED time.

Also done.

> * Rename the snapshot builder faux-commit stuff in the current patch
> so it's clearer what's going on.

Hm. Okay, I'll leave that part to you.

> * Write tests covering DDL, abort-during-decode, etc

I've extended the tests, but it would be good to have some more.

> Some special care is needed for the callback that decides whether to
> process a given xact as 2PC or not. It's called before PREPARE
> TRANSACTION to decide whether to decode any given xact at prepare time
> or wait until it commits. It's called again at COMMIT PREPARED time if
> we crashed after we processed PREPARE TRANSACTION and advanced our
> confirmed_flush_lsn such that we won't re-process the PREPARE
> TRANSACTION again. Our restart_lsn might've advanced past it so we
> never even decode it, so we can't rely on seeing it at all. It has
> access to the xid, gid and invalidations, all of which we have at both
> prepare and commit time, to make its decision from. It must have the
> same result at prepare and commit time for any given xact. We can
> probably use a cache in the reorder buffer to avoid the 2nd call on
> commit prepared if we haven't crashed/reconnected between the two.

Good point. I didn't think about restart_lsn in the case when we are skipping this
particular prepare (filter_prepared() -> true, in my terms). I think that should
work properly, as it uses the same code path as before, but I'll look at it.

> This proposal does not provide a way to safely decode a 2pc xact that
> made catalog changes which may be aborted while being decoded. The
> plugin must lock such an xact so that it can't be aborted while being
> processed, or defer decoding until commit prepared. It can use the
> invalidations for the commit to decide.

I had played with that two-pass catalog scan and it seemed to be
working, but after some time I realised that it is not useful for the main
case, where the commit/abort is generated after the receiver side answers the
prepare. Also, that two-pass scan is a massive change to relcache.c and
genam.c (FWIW there were no problems with the cache, but there were some problems
with index scans and handling one-to-many queries to the catalog, e.g. a table
with its fields).

Finally I decided to throw it away and switched to the filter_prepare callback,
passing the txn structure there to allow access to the has_catalog_changes field.

Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-27 13:29:02
Message-ID: CAMsr+YGm-pft3CtxbzP0SOFj6rN62Loh43hj2oYjuYCVbDLXFg@mail.gmail.com
Lists: pgsql-hackers

On 27 March 2017 at 17:53, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:

> I’m heavily underestimated amount of changes there, but almost finished
> and will send updated patch in several hours.

Oh, brilliant! Please post whatever you have before you knock off for
the day anyway, even if it's just a WIP, so I can pick it up tomorrow
my time and poke at its tests etc.

I'm in Western Australia +0800 time, significantly ahead of you.

> Done.
[snip]
> Also done.

Great, time is short so that's fantastic.

> I’ve extended test, but it is good to have some more.

I don't mind writing tests and I've done quite a bit with TAP now, so
happy to help there.

>> Some special care is needed for the callback that decides whether to
>> process a given xact as 2PC or not. It's called before PREPARE
>> TRANSACTION to decide whether to decode any given xact at prepare time
>> or wait until it commits. It's called again at COMMIT PREPARED time if
>> we crashed after we processed PREPARE TRANSACTION and advanced our
>> confirmed_flush_lsn such that we won't re-process the PREPARE
>> TRANSACTION again. Our restart_lsn might've advanced past it so we
>> never even decode it, so we can't rely on seeing it at all. It has
>> access to the xid, gid and invalidations, all of which we have at both
>> prepare and commit time, to make its decision from. It must have the
>> same result at prepare and commit time for any given xact. We can
>> probably use a cache in the reorder buffer to avoid the 2nd call on
>> commit prepared if we haven't crashed/reconnected between the two.
>
> Good point. Didn’t think about restart_lsn in case when we are skipping this
> particular prepare (filter_prepared() -> true, in my terms). I think that should
> work properly as it use the same code path as it was before, but I’ll look at it.

I suspect that's going to be fragile in the face of interleaving of
xacts if we crash between prepare and commit prepared. (Apologies if
the below is long or disjointed, it's been a long day but trying to
sort thoughts out).

Consider ("SSU" = "standby status update"):

0/050 xid 1 BEGIN
0/060 xid 1 INSERT ...

0/070 xid 2 BEGIN
0/080 xid 2 INSERT ...

0/090 xid 3 BEGIN
0/095 xid 3 INSERT ...
0/100 xid 3 PREPARE TRANSACTION 'x' => sent to client [y/n]?
SSU: confirmed_flush_lsn = 0/100, restart_lsn 0/050 (if we sent to client)

0/200 xid 2 COMMIT => sent to client
SSU: confirmed_flush_lsn = 0/200, restart_lsn 0/050

0/250 xl_running_xacts logged, xids = [1,3]

[CRASH or disconnect/reconnect]

Restart decoding at 0/050.

skip output of xid 3 PREPARE TRANSACTION @ 0/100: is <= confirmed_flush_lsn
skip output of xid 2 COMMIT @ 0/200: is <= confirmed_flush_lsn

0/300 xid 3 COMMIT PREPARED 'x' => sent to client, since 0/300
is > confirmed_flush_lsn

In the above, our problem is that restart_lsn is held down by some
other xact, so we can't rely on it to tell us if we replayed xid 3 to
the output plugin or not. We can't use confirmed_flush_lsn either,
since it'll advance at xid 2's commit whether or not we replayed xid
3's prepare to the client.

Since xid 3 will still be in xl_running_xacts when prepared, when we
recover SnapBuildProcessChange will return true for its changes and
we'll (re)buffer them, whether or not we landed up sending to the
client at prepare time. Nothing much to be done about that, we'll just
discard them when we process the prepare or the commit prepared,
depending on where we consult our filter callback again.

We MUST ask our filter callback again though, before we test
SnapBuildXactNeedsSkip when processing the PREPARE TRANSACTION again.
Otherwise we'll discard the buffered changes, and if we *didn't* send
them to the client already ... splat.

We can call the filter callback again on xid 3's prepare to find out
"would you have replayed it when we passed it last time". Or we can
call it when we get to the commit instead, to ask "when called last
time at prepare, did you replay or not?" But we have to consult the
callback. By default we'd just skip ReorderBufferCommit processing for
xid 3 entirely, which we'll do via the SnapBuildXactNeedsSkip call in
DecodeCommit when we process the COMMIT PREPARED.

If there was no other running xact when we decoded the PREPARE
TRANSACTION the first time around (i.e. xid 1 and 2 didn't exist in
the above), and if we do send it to the client at prepare time, I
think we can safely advance restart_lsn to the most recent
xl_running_xacts once we get replay confirmation. So we can pretend we
already committed at PREPARE TRANSACTION time for restart purposes if
we output at PREPARE TRANSACTION time, it just doesn't help us with
deciding whether to send the buffer contents at COMMIT PREPARED time
or not.

TL;DR: we can't rely on restart_lsn or confirmed_flush_lsn or
xl_running_xacts, we must ask the filter callback when we (re)decode
the PREPARE TRANSACTION record and/or at COMMIT PREPARED time.

This isn't a big deal. We just have to make sure we consult the filter
callback again when we decode an already-confirmed prepare
transaction, or at commit prepared time if we don't know what its
result was already.
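The decision sketched above can be condensed into a tiny pure function. Everything here is illustrative (the type alias and the filter answer are stand-ins for SnapBuildXactNeedsSkip-style LSN checks plus the plugin callback), but it captures why the LSN comparison alone is not enough:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;    /* stand-in for PostgreSQL's WAL position type */

/*
 * At COMMIT PREPARED, decide whether the buffered changes must still be
 * sent.  "filter_says_decode_at_prepare" stands in for the (deterministic)
 * filter callback's answer for this xact.  The prepare-LSN check alone
 * cannot decide: a prepare below confirmed_flush_lsn was only actually
 * sent if the filter chose decode-at-prepare back then.
 */
static bool
send_at_commit_prepared(XLogRecPtr prepare_lsn,
                        XLogRecPtr confirmed_flush_lsn,
                        bool filter_says_decode_at_prepare)
{
    if (!filter_says_decode_at_prepare)
        return true;            /* never sent at prepare: replay the buffer now */
    if (prepare_lsn > confirmed_flush_lsn)
        return true;            /* decoded at prepare but never confirmed */
    return false;               /* already sent and confirmed at prepare time */
}
```

Plugging in the trace above (prepare at 0/100, confirmed_flush_lsn 0/200 after xid 2's commit): whether xid 3's buffer is replayed at COMMIT PREPARED depends entirely on the filter's answer, not on the LSNs.
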

>> This proposal does not provide a way to safely decode a 2pc xact that
>> made catalog changes which may be aborted while being decoded. The
>> plugin must lock such an xact so that it can't be aborted while being
>> processed, or defer decoding until commit prepared. It can use the
>> invalidations for the commit to decide.
>
> I had played with that two-pass catalog scan and it seems to be
> working but after some time I realised that it is not useful for the main
> case when commit/abort is generated after receiver side will answer to
> prepares. Also that two-pass scan is a massive change in relcache.c and
> genam.c (FWIW there were no problems with cache, but some problems
> with index scan and handling one-to-many queries to catalog, e.g. table
> with it fields)

Yeah, it was the intrusiveness I was concerned about. I don't think we
can even remotely hope to do that for Pg 10.

> Finally i decided to throw it and switched to filter_prepare callback and
> passed there txn structure to allow access to has_catalog_changes field.

I think that's how we'll need to go.

Plugins can either defer processing on all 2pc xacts with catalog
changes, or lock the xact. It's not perfect, but it's far from
unreasonable when you consider that plugins would only be locking 2pc
xacts where they expect the result of logical decoding to influence
the commit/abort decision, so we won't be doing a commit/abort until
we finish decoding the prepare anyway.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-27 21:19:29
Message-ID: 1A2E98FB-7705-43A0-B625-3F55A8FFE5D1@postgrespro.ru
Lists: pgsql-hackers


> On 27 Mar 2017, at 16:29, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>
> On 27 March 2017 at 17:53, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> I’m heavily underestimated amount of changes there, but almost finished
>> and will send updated patch in several hours.
>
> Oh, brilliant! Please post whatever you have before you knock off for
> the day anyway, even if it's just a WIP, so I can pick it up tomorrow
> my time and poke at its tests etc.
>

Ok, here it is.

Major differences compared to the previous version:

* GID is stored in commit/abort records only when wal_level >= logical.

* More consistency in storing and parsing origin info. Now it
is stored in prepare and abort records when a replication origin session is active.

* Some cleanup, function renames to get rid of the xact_even/gid fields
in ReorderBuffer, which I used only to copy them into ReorderBufferTXN.

* Changed the output plugin interface to the one suggested upthread.
Now prepare/commit prepared/abort prepared are separate callbacks, and if none of them is set
then a 2PC tx will be decoded as 1PC to provide backward compatibility.

* New callback filter_prepare() that can be used to switch between
the 1PC and 2PC styles of decoding a 2PC tx.

* test_decoding uses the new API and filters out aborted and running prepared tx.
It would actually be easy to move the unlock of 2PCState to the prepare callback to allow
decoding of a running tx, but since that extension is an example, ISTM it is better not to
hold that lock there during the whole prepare decoding. However, I left
enough information there about this, and about the case when those locks are not needed at all
(when we are coordinating this tx).
Talking about locking of a running prepared tx during decode, I think a better solution
would be to use our own custom lock here and register an XACT_EVENT_PRE_ABORT
callback in the extension to conflict with this lock. Decode should hold it in shared mode,
while commit holds it in exclusive mode. That would allow locking granularly and blocking only
the tx that is being decoded.
However, we don't have XACT_EVENT_PRE_ABORT, though it would only take a few LOCs to
add it. Should I?

* It actually doesn't pass one of my regression tests. I've added the expected output
as it should be. I'll try to send a follow-up message with a fix, but right now I'm sending it
as is, as you asked.

Attachment Content-Type Size
logical_twophase.diff application/octet-stream 53.1 KB
unknown_filename text/plain 98 bytes

From: Andres Freund <andres(at)anarazel(dot)de>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-27 21:25:28
Message-ID: 20170327212528.cwf4aotytwrsowwx@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2017-03-28 00:19:29 +0300, Stas Kelvich wrote:
> Ok, here it is.

On a very quick skim, this doesn't seem to solve the issues around
deadlocks of prepared transactions vs. catalog tables. What if the
prepared transaction contains something like LOCK pg_class; (there's a
lot more realistic examples)? Then decoding won't be able to continue,
until that transaction is committed / aborted?

- Andres


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 00:50:28
Message-ID: 12F26CDD-B097-4E5B-BEEA-F8B8438114DF@postgrespro.ru
Lists: pgsql-hackers


> On 28 Mar 2017, at 00:19, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
> * It is actually doesn’t pass one of mine regression tests. I’ve added expected output
> as it should be. I’ll try to send follow up message with fix, but right now sending it
> as is, as you asked.
>
>

Fixed. I had forgotten to postpone the ReorderBufferTxn cleanup in the
prepare case.

It now passes the provided regression tests.

I'll give it more testing tomorrow, and I'm going to write a TAP test to check
the behaviour when we lose the information about whether a prepare was sent to
the subscriber or not.

Attachment Content-Type Size
logical_twophase.diff application/octet-stream 54.4 KB
unknown_filename text/plain 97 bytes

From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 00:50:56
Message-ID: CAMsr+YHM0cSQUCuFfQa9hBL7+sfouWKAGJjGfrSxYhNJYKKW9w@mail.gmail.com
Lists: pgsql-hackers

On 28 March 2017 at 05:25, Andres Freund <andres(at)anarazel(dot)de> wrote:

> On a very quick skim, this doesn't seem to solve the issues around
> deadlocks of prepared transactions vs. catalog tables. What if the
> prepared transaction contains something like LOCK pg_class; (there's a
> lot more realistic examples)? Then decoding won't be able to continue,
> until that transaction is committed / aborted?

Yeah, that's a problem, and one we discussed in the past, though I lost
track of it amongst the recent work.

I'm currently writing a few TAP tests intended to check this sort of
thing, mixed DDL/DML, overlapping xacts, interleaved prepared xacts,
etc. If they highlight problems they'll be useful for the next
iteration of this patch anyway.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 00:51:26
Message-ID: CAMsr+YEejpKLyn=76LcmkLJGbw_QFFxNJsq_WDwqN6Lurdr-EQ@mail.gmail.com
Lists: pgsql-hackers

On 28 March 2017 at 08:50, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> On 28 Mar 2017, at 00:19, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>>
>> * It is actually doesn’t pass one of mine regression tests. I’ve added expected output
>> as it should be. I’ll try to send follow up message with fix, but right now sending it
>> as is, as you asked.
>>
>>
>
> Fixed. I forgot to postpone ReorderBufferTxn cleanup in case of prepare.
>
> So it pass provided regression tests right now.
>
> I’ll give it more testing tomorrow and going to write TAP test to check behaviour
> when we loose info whether prepare was sent to subscriber or not.

Great, thanks. I'll try to have some TAP tests ready.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 01:12:41
Message-ID: A434797D-1283-42C5-8CE5-7F1CD0B68F36@postgrespro.ru
Lists: pgsql-hackers


> On 28 Mar 2017, at 00:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,
>
> On 2017-03-28 00:19:29 +0300, Stas Kelvich wrote:
>> Ok, here it is.
>
> On a very quick skim, this doesn't seem to solve the issues around
> deadlocks of prepared transactions vs. catalog tables. What if the
> prepared transaction contains something like LOCK pg_class; (there's a
> lot more realistic examples)? Then decoding won't be able to continue,
> until that transaction is committed / aborted?

But why is that a deadlock? It seems like just a lock.

If a prepared transaction holds a lock on pg_class, decoding will wait until
it is committed and then continue decoding. The same goes for anything else in
Postgres that accesses pg_class, up to and including the inability to connect
to the database, and effectively bricking the database if you accidentally
disconnect before committing that tx (as you showed me a while ago :-).

IMO this is an issue with being able to prepare such a lock, rather than with
decoding.

Are there any other scenarios where catalog readers are blocked, apart from an
explicit lock on a catalog table? ALTERs on catalogs seem to be prohibited.

Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 01:25:46
Message-ID: 20170328012546.473psm6546bgsi2c@alap3.anarazel.de
Lists: pgsql-hackers

On 2017-03-28 04:12:41 +0300, Stas Kelvich wrote:
>
> > On 28 Mar 2017, at 00:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
> >
> > Hi,
> >
> > On 2017-03-28 00:19:29 +0300, Stas Kelvich wrote:
> >> Ok, here it is.
> >
> > On a very quick skim, this doesn't seem to solve the issues around
> > deadlocks of prepared transactions vs. catalog tables. What if the
> > prepared transaction contains something like LOCK pg_class; (there's a
> > lot more realistic examples)? Then decoding won't be able to continue,
> > until that transaction is committed / aborted?
>
> But why is that deadlock? Seems as just lock.

If you actually need separate decoding of 2PC, then you want to wait for
the PREPARE to be replicated. If that replication has to wait for the
to-be-replicated prepared transaction to commit prepared, and commit
prepare will only happen once replication happened...

> Is there any other scenarios where catalog readers are blocked except explicit lock
> on catalog table? Alters on catalogs seems to be prohibited.

VACUUM FULL on catalog tables (but that can't happen in xact => 2pc)
CLUSTER on catalog tables (can happen in xact)
ALTER on tables modified in the same transaction (even on non-catalog
tables!), because a lot of routines will do a heap_open() to get the
tupledesc etc.
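The third case in particular needs no explicit catalog access at all; a sketch of the pattern (table and gid names are made up):

```sql
BEGIN;
CREATE TABLE t (id int);
INSERT INTO t VALUES (1);
-- ALTER takes an AccessExclusiveLock on t, which the prepared
-- transaction keeps holding after PREPARE.
ALTER TABLE t ADD COLUMN payload text;
INSERT INTO t VALUES (2, 'x');
PREPARE TRANSACTION 'ddl_plus_dml';
-- Per the description above, decoding this transaction at PREPARE time
-- would need to heap_open() t to get its tupledesc, conflicting with
-- the lock still held by the prepared transaction itself.
```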

Greetings,

Andres Freund


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 02:30:28
Message-ID: CANP8+jKEbhGmDOg09Was1gkuok-nATA-ZOGaee5oe2PisYtQXA@mail.gmail.com
Lists: pgsql-hackers

On 28 March 2017 at 02:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2017-03-28 04:12:41 +0300, Stas Kelvich wrote:
>>
>> > On 28 Mar 2017, at 00:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> >
>> > Hi,
>> >
>> > On 2017-03-28 00:19:29 +0300, Stas Kelvich wrote:
>> >> Ok, here it is.
>> >
>> > On a very quick skim, this doesn't seem to solve the issues around
>> > deadlocks of prepared transactions vs. catalog tables. What if the
>> > prepared transaction contains something like LOCK pg_class; (there's a
>> > lot more realistic examples)? Then decoding won't be able to continue,
>> > until that transaction is committed / aborted?
>>
>> But why is that deadlock? Seems as just lock.
>
> If you actually need separate decoding of 2PC, then you want to wait for
> the PREPARE to be replicated. If that replication has to wait for the
> to-be-replicated prepared transaction to commit prepared, and commit
> prepare will only happen once replication happened...

Surely that's up to the decoding plugin?

If the plugin takes locks it had better make sure it can get the locks
or timeout. But that's true of any resource the plugin needs access to
and can't obtain when needed.

This issue could occur now if the transaction took a session lock on a
catalog table.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 02:38:46
Message-ID: 20170328023846.7sd2soq4ulkzl5of@alap3.anarazel.de
Lists: pgsql-hackers

On 2017-03-28 03:30:28 +0100, Simon Riggs wrote:
> On 28 March 2017 at 02:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > On 2017-03-28 04:12:41 +0300, Stas Kelvich wrote:
> >>
> >> > On 28 Mar 2017, at 00:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
> >> >
> >> > Hi,
> >> >
> >> > On 2017-03-28 00:19:29 +0300, Stas Kelvich wrote:
> >> >> Ok, here it is.
> >> >
> >> > On a very quick skim, this doesn't seem to solve the issues around
> >> > deadlocks of prepared transactions vs. catalog tables. What if the
> >> > prepared transaction contains something like LOCK pg_class; (there's a
> >> > lot more realistic examples)? Then decoding won't be able to continue,
> >> > until that transaction is committed / aborted?
> >>
> >> But why is that deadlock? Seems as just lock.
> >
> > If you actually need separate decoding of 2PC, then you want to wait for
> > the PREPARE to be replicated. If that replication has to wait for the
> > to-be-replicated prepared transaction to commit prepared, and commit
> > prepare will only happen once replication happened...
>
> Surely that's up to the decoding plugin?

It can't do much about it, so not really. A lot of the functions
dealing with datatypes (temporarily) lock relations. Both the actual
user tables, and system catalog tables (cache lookups...).

> If the plugin takes locks it had better make sure it can get the locks
> or timeout. But that's true of any resource the plugin needs access to
> and can't obtain when needed.

> This issue could occur now if the transaction tool a session lock on a
> catalog table.

That's not a self-deadlock, and we don't do session locks outside
of operations like CIC?

Greetings,

Andres Freund


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 02:53:55
Message-ID: CAMsr+YF9ya3PsyWekqBqaeRz9WC+roGWGY5qDso0Jx6O1ajAHQ@mail.gmail.com
Lists: pgsql-hackers

On 28 March 2017 at 09:25, Andres Freund <andres(at)anarazel(dot)de> wrote:

> If you actually need separate decoding of 2PC, then you want to wait for
> the PREPARE to be replicated. If that replication has to wait for the
> to-be-replicated prepared transaction to commit prepared, and commit
> prepare will only happen once replication happened...

In other words, the output plugin cannot decode a transaction at
PREPARE TRANSACTION time if that xact holds an AccessExclusiveLock on
a catalog relation we must be able to read in order to decode the
xact.

>> Is there any other scenarios where catalog readers are blocked except explicit lock
>> on catalog table? Alters on catalogs seems to be prohibited.
>
> VACUUM FULL on catalog tables (but that can't happen in xact => 2pc)
> CLUSTER on catalog tables (can happen in xact)
> ALTER on tables modified in the same transaction (even of non catalog
> tables!), because a lot of routines will do a heap_open() to get the
> tupledesc etc.

Right, and the latter one is the main issue, since it's by far the
most likely and hard to just work around.

The tests Stas has in place aren't sufficient to cover this, as they
decode only after everything has committed. I'm expanding the
pg_regress coverage to do decoding between prepare and commit (when we
actually care) first, and will add some tests involving strong locks.
I've found one bug where it doesn't decode a 2pc xact at prepare or
commit time, even without restart or strong lock issues. Pretty sure
it's due to assumptions made about the filter callback.

The current code as used by test_decoding won't work correctly. If
txn->has_catalog_changes and if it's still in-progress, the filter
skips decoding at PREPARE time. But it isn't then decoded at COMMIT
PREPARED time either, if we processed past the PREPARE TRANSACTION.
Bug.

Also, by skipping decoding of 2pc xacts with catalog changes in this
test we also hide the locking issues.

However, even once I add an option to force decoding of 2pc xacts with
catalog changes to test_decoding, I cannot reproduce the expected
locking issues so far. See tests in attached updated version, in
contrib/test_decoding/sql/prepare.sql .
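The shape of that extra coverage, consuming the slot between PREPARE and COMMIT PREPARED rather than only at the end, might look like this (table, slot, and gid names are illustrative; whether the PREPARE is actually emitted at that point is exactly what the patch's filter logic decides):

```sql
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');

CREATE TABLE test_prepared (id int PRIMARY KEY);

BEGIN;
INSERT INTO test_prepared VALUES (1);
PREPARE TRANSACTION 'test_prepared#1';

-- Decode while the xact is only prepared: the PREPARE (and its changes)
-- should appear in the stream now, not be silently skipped.
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);

COMMIT PREPARED 'test_prepared#1';

-- A second call should emit just the COMMIT PREPARED, without replaying
-- the transaction's changes a second time.
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
```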

Haven't done any TAP tests yet, since the pg_regress tests are so far
sufficient to turn up issues.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
logical_twophase_v4.patch text/x-patch 67.3 KB

From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 03:23:09
Message-ID: CAMsr+YGKwfsxmDmxeJzWKqTfxwPNF+7e-LAK=A=jDqBFe091iQ@mail.gmail.com
Lists: pgsql-hackers

On 28 March 2017 at 10:53, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:

> However, even once I add an option to force decoding of 2pc xacts with
> catalog changes to test_decoding, I cannot reproduce the expected
> locking issues so far. See tests in attached updated version, in
> contrib/test_decoding/sql/prepare.sql .

I haven't been able to create issues with CLUSTER, any ALTER TABLEs
I've tried, or anything similar.

An explicit AEL on pg_attribute causes the decoding stall, but you
can't do anything much else either, and I don't see how that'd arise
under normal circumstances.

If it's a sufficiently obscure issue I'm willing to document it as
"don't do that" or "use a command filter to prohibit that". But it's
more likely that I'm just not spotting the cases where the issue
arises.

Attempting to CLUSTER a system catalog like pg_class or pg_attribute
causes PREPARE TRANSACTION to fail with

ERROR: cannot PREPARE a transaction that modified relation mapping

and I didn't find any catalogs I could CLUSTER that'd also block decoding.
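That guard is easy to observe directly (run as superuser; the index name is from the system catalogs):

```sql
BEGIN;
CLUSTER pg_class USING pg_class_oid_index;
PREPARE TRANSACTION 'cluster_catalog';
-- ERROR:  cannot PREPARE a transaction that modified relation mapping
```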

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 14:32:49
Message-ID: CANP8+j+TMef2sWxiiWzpm8oFR7SvNArbOW2xqmJjKeLB7UhL8Q@mail.gmail.com
Lists: pgsql-hackers

On 28 March 2017 at 03:53, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> On 28 March 2017 at 09:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
>> If you actually need separate decoding of 2PC, then you want to wait for
>> the PREPARE to be replicated. If that replication has to wait for the
>> to-be-replicated prepared transaction to commit prepared, and commit
>> prepare will only happen once replication happened...
>
> In other words, the output plugin cannot decode a transaction at
> PREPARE TRANSACTION time if that xact holds an AccessExclusiveLock on
> a catalog relation we must be able to read in order to decode the
> xact.

Yes, I understand.

The decoding plugin can choose to enable lock_timeout, or it can
choose to wait for manual resolution, or it could automatically abort
such a transaction to avoid needing to decode it.
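The first of those options can be exercised by whoever consumes the slot over SQL; a sketch (slot name and timeout value are illustrative):

```sql
-- Bound how long decoding may wait on a lock held by a prepared xact;
-- on timeout the call fails with an error and can simply be retried
-- once the prepared transaction has been resolved.
SET lock_timeout = '2s';
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
```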

I don't think its for us to say what the plugin is allowed to do. We
decided on a plugin architecture, so we have to trust that the plugin
author resolves the issues. We can document them so those choices are
clear.

This doesn't differ in any respect from any other resource it might
need yet cannot obtain.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 14:38:40
Message-ID: 20170328143840.b7qdvqh3uh74f2oa@alap3.anarazel.de
Lists: pgsql-hackers

On 2017-03-28 15:32:49 +0100, Simon Riggs wrote:
> On 28 March 2017 at 03:53, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> > On 28 March 2017 at 09:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
> >
> >> If you actually need separate decoding of 2PC, then you want to wait for
> >> the PREPARE to be replicated. If that replication has to wait for the
> >> to-be-replicated prepared transaction to commit prepared, and commit
> >> prepare will only happen once replication happened...
> >
> > In other words, the output plugin cannot decode a transaction at
> > PREPARE TRANSACTION time if that xact holds an AccessExclusiveLock on
> > a catalog relation we must be able to read in order to decode the
> > xact.
>
> Yes, I understand.
>
> The decoding plugin can choose to enable lock_timeout, or it can
> choose to wait for manual resolution, or it could automatically abort
> such a transaction to avoid needing to decode it.

That doesn't solve the problem. You're still left with replication that
can't progress. I think that's completely unacceptable. We need a
proper solution to this, not to throw our hands up in the air and hope that
it's not going to hurt a whole lot of people.

> I don't think its for us to say what the plugin is allowed to do. We
> decided on a plugin architecture, so we have to trust that the plugin
> author resolves the issues. We can document them so those choices are
> clear.

I don't think this is "plugin architecture" related. The output plugin
can't do the right thing here; this has to be solved at a higher level.

- Andres


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 14:55:15
Message-ID: CANP8+jLhUR=q+xPR4RKjJr6vaU+976kKmwipdjhU6VB+jR_ROg@mail.gmail.com
Lists: pgsql-hackers

On 28 March 2017 at 15:38, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2017-03-28 15:32:49 +0100, Simon Riggs wrote:
>> On 28 March 2017 at 03:53, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
>> > On 28 March 2017 at 09:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> >
>> >> If you actually need separate decoding of 2PC, then you want to wait for
>> >> the PREPARE to be replicated. If that replication has to wait for the
>> >> to-be-replicated prepared transaction to commit prepared, and commit
>> >> prepare will only happen once replication happened...
>> >
>> > In other words, the output plugin cannot decode a transaction at
>> > PREPARE TRANSACTION time if that xact holds an AccessExclusiveLock on
>> > a catalog relation we must be able to read in order to decode the
>> > xact.
>>
>> Yes, I understand.
>>
>> The decoding plugin can choose to enable lock_timeout, or it can
>> choose to wait for manual resolution, or it could automatically abort
>> such a transaction to avoid needing to decode it.
>
> That doesn't solve the problem. You still left with replication that
> can't progress. I think that's completely unacceptable. We need a
> proper solution to this, not throw our hands up in the air and hope that
> it's not going to hurt a whole lot of peopel.

Nobody is throwing their hands in the air, nobody is just hoping. The
concern raised is real and needs to be handled somewhere; the only
point of discussion is where and how.

>> I don't think its for us to say what the plugin is allowed to do. We
>> decided on a plugin architecture, so we have to trust that the plugin
>> author resolves the issues. We can document them so those choices are
>> clear.
>
> I don't think this is "plugin architecture" related. The output pluging
> can't do right here, this has to be solved at a higher level.

That assertion is obviously false... the plugin can resolve this in
various ways, if we allow it.

You can say that in your opinion you prefer to see this handled in
some higher level way, though it would be good to hear why and how.

Bottom line here is we shouldn't reject this patch on this point,
especially since any resource issue found during decoding could
similarly prevent progress with decoding.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-28 15:08:23
Message-ID: 20170328150823.u4he7rbd72mzeydl@alap3.anarazel.de
Lists: pgsql-hackers

On 2017-03-28 15:55:15 +0100, Simon Riggs wrote:
> On 28 March 2017 at 15:38, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > On 2017-03-28 15:32:49 +0100, Simon Riggs wrote:
> >> On 28 March 2017 at 03:53, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> >> > On 28 March 2017 at 09:25, Andres Freund <andres(at)anarazel(dot)de> wrote:
> >> >
> >> >> If you actually need separate decoding of 2PC, then you want to wait for
> >> >> the PREPARE to be replicated. If that replication has to wait for the
> >> >> to-be-replicated prepared transaction to commit prepared, and commit
> >> >> prepare will only happen once replication happened...
> >> >
> >> > In other words, the output plugin cannot decode a transaction at
> >> > PREPARE TRANSACTION time if that xact holds an AccessExclusiveLock on
> >> > a catalog relation we must be able to read in order to decode the
> >> > xact.
> >>
> >> Yes, I understand.
> >>
> >> The decoding plugin can choose to enable lock_timeout, or it can
> >> choose to wait for manual resolution, or it could automatically abort
> >> such a transaction to avoid needing to decode it.
> >
> > That doesn't solve the problem. You still left with replication that
> > can't progress. I think that's completely unacceptable. We need a
> > proper solution to this, not throw our hands up in the air and hope that
> > it's not going to hurt a whole lot of peopel.
>
> Nobody is throwing their hands in the air, nobody is just hoping. The
> concern raised is real and needs to be handled somewhere; the only
> point of discussion is where and how.

> >> I don't think its for us to say what the plugin is allowed to do. We
> >> decided on a plugin architecture, so we have to trust that the plugin
> >> author resolves the issues. We can document them so those choices are
> >> clear.
> >
> > I don't think this is "plugin architecture" related. The output pluging
> > can't do right here, this has to be solved at a higher level.
>
> That assertion is obviously false... the plugin can resolve this in
> various ways, if we allow it.

Handling it by breaking replication isn't handling it (e.g. timeouts in
decoding etc). Handling it by rolling back *prepared* transactions
(which are supposed to be guaranteed to succeed!), isn't either.

> You can say that in your opinion you prefer to see this handled in
> some higher level way, though it would be good to hear why and how.

It's pretty obvious why: A bit of DDL by the user shouldn't lead to the
issues mentioned above.

> Bottom line here is we shouldn't reject this patch on this point,

I think it definitely has to be rejected because of that. And I didn't
bring this up at the last minute, I repeatedly brought it up before.
Both to Craig and Stas.

One way to fix this would be to allow decoding to acquire such locks
(i.e. locks held by the prepared xact we're decoding) - there
unfortunately are some practical issues with that (e.g. the locking code
doesn't necessarily expect a second non-exclusive locker when there's
an exclusive one), or we could add an exception to the locking code to
simply not acquire such locks.

> especially since any resource issue found during decoding could
> similarly prevent progress with decoding.

For example?

- Andres


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-29 00:47:08
Message-ID: CAMsr+YFOR=QAvk_2e+Yq8Jg01TBPwHCN1ccEsqbUxz5phFfFgQ@mail.gmail.com
Lists: pgsql-hackers

On 28 Mar. 2017 23:08, "Andres Freund" <andres(at)anarazel(dot)de> wrote:

>
> > >> I don't think its for us to say what the plugin is allowed to do. We
> > >> decided on a plugin architecture, so we have to trust that the plugin
> > >> author resolves the issues. We can document them so those choices are
> > >> clear.
> > >
> > > I don't think this is "plugin architecture" related. The output pluging
> > > can't do right here, this has to be solved at a higher level.
> >
> > That assertion is obviously false... the plugin can resolve this in
> > various ways, if we allow it.
>
> Handling it by breaking replication isn't handling it (e.g. timeouts in
> decoding etc).

IMO, if it's a rare condition and we can abort decoding then recover
cleanly and succeed on retry, that's OK. Not dissimilar to the deadlock
detector. But right now that's not the case: it's possible (however
artificially) to create prepared xacts for which decoding will stall and
not succeed.

> Handling it by rolling back *prepared* transactions
> (which are supposed to be guaranteed to succeed!), isn't either.
>

I agree, we can't rely on anything for which the only way to continue is to
rollback a prepared xact.

> > You can say that in your opinion you prefer to see this handled in
> > some higher level way, though it would be good to hear why and how.
>
> It's pretty obvious why: A bit of DDL by the user shouldn't lead to the
> issues mentioned above.
>

I agree that it shouldn't, and in fact DDL is the main part of why I want
2PC decoding.

What's surprised me is that I haven't actually been able to create any
situations where, with test_decoding, we have such a failure. Not unless I
manually LOCK TABLE pg_attribute, anyway.

Notably, we already disallow prepared xacts that make changes to the
relfilenodemap, which covers a lot of the problem cases like CLUSTERing
system tables.

> > Bottom line here is we shouldn't reject this patch on this point,
>
> I think it definitely has to be rejected because of that. And I didn't
> bring this up at the last minute, I repeatedly brought it up before.
> Both to Craig and Stas.
>

Yes, and I lost track of it while focusing on the catalog tuple visibility
issues. I warned Stas of this issue when he first mentioned an interest in
decoding of 2PC actually, but haven't kept a proper eye on it since.

Andres and I even discussed this back in the early BDR days, it's not new
and is part of why I poked Stas to try some DDL tests etc. The tests in the
initial patch didn't have enough coverage to trigger any issues - they
didn't actually test decoding of a 2pc xact while it was still in-progress
at all. But even once I added more tests I've actually been unable to
reproduce this in a realistic real world example.

Frankly I'm confused by that, since I would expect an AEL on some_table to
cause decoding of some_table to get stuck. It does not.

That doesn't mean we should accept failure cases and commit something with
holes in it. But it might inform our choices about how we solve those
issues.

> One way to fix this would be to allow decoding to acquire such locks
> (i.e. locks held by the prepared xact we're decoding) - there
> unfortunately are some practical issues with that (e.g. the locking code
> doesnt' necessarily expect a second non-exclusive locker, when there's
> an exclusive one), or we could add an exception to the locking code to
> simply not acquire such locks.
>

I've been meaning to see if we can use the parallel infrastructure's
session leader infrastructure for this, by making the 2pc fake-proc a
leader and making our decoding session inherit its locks. I haven't dug
into it to see if it's even remotely practical yet, and won't be able to
until early pg11.

We could proceed with the caveat that decoding plugins that use 2pc support
must defer decoding of 2pc xacts containing ddl until commit prepared, or
must take responsibility for ensuring (via a command filter, etc) that
xacts are safe to decode and 2pc lock xacts during decoding. But we're
likely to change the interface for all that when we iterate for pg11 and
I'd rather not carry more BC than we have to. Also, the patch has unsolved
issues with how it keeps track of whether an xact was output at prepare
time or not and suppresses output at commit time.

I'm inclined to shelve the patch for Pg 10. We've only got a couple of days
left, the tests are still pretty minimal. We have open issues around
locking, less than totally satisfactory abort handling, and potential to
skip replay of transactions for both prepare and commit prepared. It's not
ready to go. However, it's definitely to the point where with a little more
work it'll be practical to patch into variants of Pg until we can
mainstream it in Pg 11, which is nice.

--
Craig Ringer


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-03-29 15:55:13
Message-ID: E38B1BB4-4A66-44E9-B764-8C83E424174B@postgrespro.ru
Lists: pgsql-hackers


> On 28 Mar 2017, at 18:08, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> On 2017-03-28 15:55:15 +0100, Simon Riggs wrote:
>>
>>
>> That assertion is obviously false... the plugin can resolve this in
>> various ways, if we allow it.
>
> Handling it by breaking replication isn't handling it (e.g. timeouts in
> decoding etc). Handling it by rolling back *prepared* transactions
> (which are supposed to be guaranteed to succeed!), isn't either.
>
>
>> You can say that in your opinion you prefer to see this handled in
>> some higher level way, though it would be good to hear why and how.
>
> It's pretty obvious why: A bit of DDL by the user shouldn't lead to the
> issues mentioned above.
>
>
>> Bottom line here is we shouldn't reject this patch on this point,
>
> I think it definitely has to be rejected because of that. And I didn't
> bring this up at the last minute, I repeatedly brought it up before.
> Both to Craig and Stas.

Okay. In order to find more realistic cases that block replication
I’ve created the following setup:

* in the backend: test_decoding hooks into xact events and utility
statements, and transforms each commit into a prepare, then sleeps
on a latch. If the transaction contains DDL, the whole statement is pushed
into WAL as a transactional message. If the DDL cannot be prepared or disallows
execution in a transaction block, then it goes out as a nontransactional logical
message without the prepare/decode injection. If the transaction didn’t issue any
DDL and didn’t write anything to WAL, then it skips 2pc too.

* after the prepare is decoded, the output plugin in the walsender unlocks
the backend, allowing it to proceed with commit prepared. So in case the
decoding tries to access a blocked catalog, everything should stop.

* a small Python script that consumes the decoded WAL from the walsender (thanks
Craig and Petr)
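
For illustration, the backend/walsender handshake in that setup can be modeled
with two threads and events standing in for the backend latch (a toy sketch;
every name below is invented, the real harness does this in C inside
test_decoding):

```python
import threading

prepared = threading.Event()  # "prepare record reached the WAL"
decoded = threading.Event()   # stands in for the backend latch
log = []                      # observable ordering of events

def backend():
    # the hook turns COMMIT into PREPARE TRANSACTION, then sleeps on the latch
    log.append("PREPARE TRANSACTION 'gid'")
    prepared.set()
    decoded.wait()            # blocked until the walsender has decoded the prepare
    log.append("COMMIT PREPARED 'gid'")

def walsender():
    # the output plugin decodes the prepare, then unlocks the backend
    prepared.wait()
    log.append("decoded PREPARE 'gid'")
    decoded.set()

b = threading.Thread(target=backend)
w = threading.Thread(target=walsender)
b.start(); w.start()
b.join(); w.join()
# the commit can only happen after the prepare was fully decoded
assert log == ["PREPARE TRANSACTION 'gid'",
               "decoded PREPARE 'gid'",
               "COMMIT PREPARED 'gid'"]
```

If decoding of the prepare were to block on something the prepared xact holds,
`decoded.wait()` would never return, which is exactly the deadlock shape being
hunted for here.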

After some acrobatics with those hooks I’ve managed to run the whole
regression suite in parallel mode through such a test_decoding setup
without any deadlocks. I’ve added two xact_events to postgres and
allowed preparing transactions that touched temp tables, since
they are heavily used in the tests and create a lot of noise in the diffs.

So it boils down to 3 failed regression tests out of 177, namely:

* transactions.sql — here the commit of a tx gets stuck obtaining SafeSnapshot().
I haven’t looked at what is happening there specifically, but just checked that
the walsender isn’t blocked. I’m going to look more closely at this.

* prepared_xacts.sql — here select prepared_xacts() sees our prepared
tx. It would be possible to filter them out, but obviously it works as expected.

* guc.sql — here pendingActions arrives on 'DISCARD ALL’, preventing the tx
from being prepared. I didn’t find a way to check for the presence of
pendingActions outside of async.c, so I decided to leave it as is.

It seems that, at least in the regression tests, nothing can block twophase
logical decoding. Is that a strong enough argument for the hypothesis that the
current approach doesn’t create deadlocks, except for locks on the catalog,
which should be disallowed anyway?

Patches attached. logical_twophase_v5 is a slightly modified version of the previous
patch merged with Craig’s changes. The second file is a set of patches on top of the
previous one that implements the logic I’ve just described. There is a runtest.sh
script that sets up postgres, runs the Python logical consumer in the background,
and starts the regression test.

Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachment Content-Type Size
logical_twophase_v5.diff application/octet-stream 66.1 KB
logical_twophase_regresstest.diff application/octet-stream 14.8 KB

From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-04-04 01:23:23
Message-ID: CAD21AoDDC7USpUyUim=7BwuFByauRGfLzkE7hhiV=jjDckc+KA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Mar 30, 2017 at 12:55 AM, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> On 28 Mar 2017, at 18:08, Andres Freund <andres(at)anarazel(dot)de> wrote:
>>
>> On 2017-03-28 15:55:15 +0100, Simon Riggs wrote:
>>>
>>>
>>> That assertion is obviously false... the plugin can resolve this in
>>> various ways, if we allow it.
>>
>> Handling it by breaking replication isn't handling it (e.g. timeouts in
>> decoding etc). Handling it by rolling back *prepared* transactions
>> (which are supposed to be guaranteed to succeed!), isn't either.
>>
>>
>>> You can say that in your opinion you prefer to see this handled in
>>> some higher level way, though it would be good to hear why and how.
>>
>> It's pretty obvious why: A bit of DDL by the user shouldn't lead to the
>> issues mentioned above.
>>
>>
>>> Bottom line here is we shouldn't reject this patch on this point,
>>
>> I think it definitely has to be rejected because of that. And I didn't
>> bring this up at the last minute, I repeatedly brought it up before.
>> Both to Craig and Stas.
>
> Okay. In order to find more realistic cases that block replication
> I’ve created the following setup:
>
> * in the backend: test_decoding hooks into xact events and utility
> statements, and transforms each commit into a prepare, then sleeps
> on a latch. If the transaction contains DDL, the whole statement is pushed
> into WAL as a transactional message. If the DDL cannot be prepared or disallows
> execution in a transaction block, then it goes out as a nontransactional logical
> message without the prepare/decode injection. If the transaction didn’t issue any
> DDL and didn’t write anything to WAL, then it skips 2pc too.
>
> * after the prepare is decoded, the output plugin in the walsender unlocks
> the backend, allowing it to proceed with commit prepared. So in case the
> decoding tries to access a blocked catalog, everything should stop.
>
> * a small Python script that consumes the decoded WAL from the walsender (thanks
> Craig and Petr)
>
> After some acrobatics with those hooks I’ve managed to run the whole
> regression suite in parallel mode through such a test_decoding setup
> without any deadlocks. I’ve added two xact_events to postgres and
> allowed preparing transactions that touched temp tables, since
> they are heavily used in the tests and create a lot of noise in the diffs.
>
> So it boils down to 3 failed regression tests out of 177, namely:
>
> * transactions.sql — here the commit of a tx gets stuck obtaining SafeSnapshot().
> I haven’t looked at what is happening there specifically, but just checked that
> the walsender isn’t blocked. I’m going to look more closely at this.
>
> * prepared_xacts.sql — here select prepared_xacts() sees our prepared
> tx. It would be possible to filter them out, but obviously it works as expected.
>
> * guc.sql — here pendingActions arrives on 'DISCARD ALL’, preventing the tx
> from being prepared. I didn’t find a way to check for the presence of
> pendingActions outside of async.c, so I decided to leave it as is.
>
> It seems that, at least in the regression tests, nothing can block twophase
> logical decoding. Is that a strong enough argument for the hypothesis that the
> current approach doesn’t create deadlocks, except for locks on the catalog,
> which should be disallowed anyway?
>
> Patches attached. logical_twophase_v5 is a slightly modified version of the previous
> patch merged with Craig’s changes. The second file is a set of patches on top of the
> previous one that implements the logic I’ve just described. There is a runtest.sh
> script that sets up postgres, runs the Python logical consumer in the background,
> and starts the regression test.
>
>

I reviewed this patch but when I tried to build contrib/test_decoding
I got the following error.

$ make
gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -g -g -fpic -I. -I. -I../../src/include -D_GNU_SOURCE -c -o
test_decoding.o test_decoding.c -MMD -MP -MF .deps/test_decoding.Po
test_decoding.c: In function '_PG_init':
test_decoding.c:126: warning: assignment from incompatible pointer type
test_decoding.c: In function 'test_decoding_process_utility':
test_decoding.c:271: warning: passing argument 5 of
'PreviousProcessUtilityHook' from incompatible pointer type
test_decoding.c:271: note: expected 'struct QueryEnvironment *' but
argument is of type 'struct DestReceiver *'
test_decoding.c:271: warning: passing argument 6 of
'PreviousProcessUtilityHook' from incompatible pointer type
test_decoding.c:271: note: expected 'struct DestReceiver *' but
argument is of type 'char *'
test_decoding.c:271: error: too few arguments to function
'PreviousProcessUtilityHook'
test_decoding.c:276: warning: passing argument 5 of
'standard_ProcessUtility' from incompatible pointer type
../../src/include/tcop/utility.h:38: note: expected 'struct
QueryEnvironment *' but argument is of type 'struct DestReceiver *'
test_decoding.c:276: warning: passing argument 6 of
'standard_ProcessUtility' from incompatible pointer type
../../src/include/tcop/utility.h:38: note: expected 'struct
DestReceiver *' but argument is of type 'char *'
test_decoding.c:276: error: too few arguments to function
'standard_ProcessUtility'
test_decoding.c: At top level:
test_decoding.c:285: warning: 'test_decoding_twophase_commit' was used
with no prototype before its definition
make: *** [test_decoding.o] Error 1

---
After applying both patches, the regression test 'make check' failed. I
think you should update the expected/transactions.out file as well.

$ cat src/test/regress/regression.diffs
*** /home/masahiko/pgsql/source/postgresql/src/test/regress/expected/transactions.out
Mon May 2 09:16:02 2016
--- /home/masahiko/pgsql/source/postgresql/src/test/regress/results/transactions.out
Tue Apr 4 09:52:44 2017
***************
*** 43,58 ****
-- Read-only tests
CREATE TABLE writetest (a int);
CREATE TEMPORARY TABLE temptest (a int);
! BEGIN;
! SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY, DEFERRABLE; -- ok
! SELECT * FROM writetest; -- ok
! a
! ---
! (0 rows)
!
! SET TRANSACTION READ WRITE; --fail
! ERROR: transaction read-write mode must be set before any query
! COMMIT;
BEGIN;
SET TRANSACTION READ ONLY; -- ok
SET TRANSACTION READ WRITE; -- ok
--- 43,53 ----
-- Read-only tests
CREATE TABLE writetest (a int);
CREATE TEMPORARY TABLE temptest (a int);
! -- BEGIN;
! -- SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY, DEFERRABLE; -- ok
! -- SELECT * FROM writetest; -- ok
! -- SET TRANSACTION READ WRITE; --fail
! -- COMMIT;
BEGIN;
SET TRANSACTION READ ONLY; -- ok
SET TRANSACTION READ WRITE; -- ok

======================================================================
There are still some unnecessary code in v5 patch.

---
+/* PREPARE callback */
+static void
+pg_decode_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+ int backend_procno;
+
+ // if (data->skip_empty_xacts && !data->xact_wrote_changes)
+ // return;
+
+ OutputPluginPrepareWrite(ctx, true);
+

Could you please update these patches?

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-04-04 10:06:13
Message-ID: 9CD693AF-51BE-4A19-9D26-16031CDAD9C9@postgrespro.ru
Lists: pgsql-hackers


> On 4 Apr 2017, at 04:23, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
>
>
> I reviewed this patch but when I tried to build contrib/test_decoding
> I got the following error.
>

Thanks!

Yes, it seems that 18ce3a4a changed the ProcessUtility_hook signature.
Updated.

> There are still some unnecessary code in v5 patch.

Actually the second diff isn’t intended to be part of the patch; I've just shared
the way I ran the regression test suite through the 2pc decoding, changing
all commits to prepare/commit where the commit happens only after decoding
of the prepare is finished (more details in my previous message in this thread).

That is just an argument against Andres' concern that a prepared transaction
is able to deadlock with the decoding process — at least there are no such
cases in the regression tests.

And that concern is the main thing blocking this patch. Apart from explicit catalog
locks in a prepared tx, nobody has yet found such cases, and it is hard to address
or argue about.

Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachment Content-Type Size
logical_twophase_v6.diff application/octet-stream 66.1 KB
logical_twophase_regresstest.diff application/octet-stream 14.9 KB

From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-04-04 17:13:00
Message-ID: CAD21AoAKgo46=z4r8pJw-mgch0dvP03K5_MfharHabDH3t+jaA@mail.gmail.com
Lists: pgsql-hackers

On Tue, Apr 4, 2017 at 7:06 PM, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
>
>> On 4 Apr 2017, at 04:23, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
>>
>>
>> I reviewed this patch but when I tried to build contrib/test_decoding
>> I got the following error.
>>
>
> Thanks!
>
> Yes, seems that 18ce3a4a changed ProcessUtility_hook signature.
> Updated.
>
>> There are still some unnecessary code in v5 patch.
>

Thank you for updating the patch!

> Actually the second diff isn’t intended to be part of the patch; I've just shared
> the way I ran the regression test suite through the 2pc decoding, changing
> all commits to prepare/commit where the commit happens only after decoding
> of the prepare is finished (more details in my previous message in this thread).

Understood. Sorry for the noise.

>
> That is just an argument against Andres' concern that a prepared transaction
> is able to deadlock with the decoding process — at least there are no such
> cases in the regression tests.
>
> And that concern is the main thing blocking this patch. Apart from explicit catalog
> locks in a prepared tx, nobody has yet found such cases, and it is hard to address
> or argue about.
>

Hmm, I also have not found such a deadlock case yet.

Other than that issue, the current patch still does not pass the 'make check'
test of contrib/test_decoding.

*** 154,167 ****
(4 rows)

:get_with2pc
! data
! -------------------------------------------------------------------------
! BEGIN
! table public.test_prepared1: INSERT: id[integer]:5
! table public.test_prepared1: INSERT: id[integer]:6 data[text]:'frakbar'
! PREPARE TRANSACTION 'test_prepared#3';
! COMMIT PREPARED 'test_prepared#3';
! (5 rows)

-- make sure stuff still works
INSERT INTO test_prepared1 VALUES (8);
--- 154,162 ----
(4 rows)

:get_with2pc
! data
! ------
! (0 rows)

-- make sure stuff still works
INSERT INTO test_prepared1 VALUES (8);

I guess this part is an unexpected result and should be fixed. Right?

-----

*** 215,222 ****
-- If we try to decode it now we'll deadlock
SET statement_timeout = '10s';
:get_with2pc_nofilter
! -- FIXME we expect a timeout here, but it actually works...
! ERROR: statement timed out

RESET statement_timeout;
-- we can decode past it by skipping xacts with catalog changes
--- 210,222 ----
-- If we try to decode it now we'll deadlock
SET statement_timeout = '10s';
:get_with2pc_nofilter
! data
! ----------------------------------------------------------------------------
! BEGIN
! table public.test_prepared1: INSERT: id[integer]:10 data[text]:'othercol'
! table public.test_prepared1: INSERT: id[integer]:11 data[text]:'othercol2'
! PREPARE TRANSACTION 'test_prepared_lock'
! (4 rows)

RESET statement_timeout;
-- we can decode past it by skipping xacts with catalog changes

Probably we can ignore this part for now.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


From: Andres Freund <andres(at)anarazel(dot)de>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-04-04 17:31:33
Message-ID: 20170404173133.olo7keeh3dzhriay@alap3.anarazel.de
Lists: pgsql-hackers

On 2017-04-04 13:06:13 +0300, Stas Kelvich wrote:
> That is just an argument against Andres' concern that a prepared transaction
> is able to deadlock with the decoding process — at least there are no such
> cases in the regression tests.

There are few longer / adverse xacts in there, so that doesn't say much.

> And that concern is the main thing blocking this patch. Apart from explicit catalog
> locks in a prepared tx, nobody has yet found such cases, and it is hard to address
> or argue about.

I doubt that's the case. But even if it were so, it's absolutely not
acceptable that a plain user can cause such deadlocks. So I don't think
this argument buys you anything.

- Andres


From: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-05-13 19:42:32
Message-ID: CA+q6zcUJLTwammygSSp0=evX_Ns46b2JkvWu214jbM-15xc_MA@mail.gmail.com
Lists: pgsql-hackers

Hi

> On 4 April 2017 at 19:13, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
>
> Other than that issue current patch still could not pass 'make check'
> test of contrib/test_decoding.

Just a note about this patch. Of course time flies by and it needs a rebase,
but there are also a few failing tests right now:

* one that was already mentioned by Masahiko
* one from `ddl`, where expected is:

```
SELECT slot_name, plugin, slot_type, active,
NOT catalog_xmin IS NULL AS catalog_xmin_set,
xmin IS NULl AS data_xmin_not_set,
pg_wal_lsn_diff(restart_lsn, '0/01000000') > 0 AS some_wal
FROM pg_replication_slots;
slot_name | plugin | slot_type | active | catalog_xmin_set |
data_xmin_not_set | some_wal
-----------------+---------------+-----------+--------+------------------+-------------------+----------
regression_slot | test_decoding | logical | f | t |
t | t
(1 row)
```

but the result is:

```
SELECT slot_name, plugin, slot_type, active,
NOT catalog_xmin IS NULL AS catalog_xmin_set,
xmin IS NULl AS data_xmin_not_set,
pg_wal_lsn_diff(restart_lsn, '0/01000000') > 0 AS some_wal
FROM pg_replication_slots;
ERROR: function pg_wal_lsn_diff(pg_lsn, unknown) does not exist
LINE 5: pg_wal_lsn_diff(restart_lsn, '0/01000000') > 0 AS some_w...
^
HINT: No function matches the given name and argument types. You might
need to add explicit type casts.
```


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-05-13 20:22:34
Message-ID: 19307.1494706954@sss.pgh.pa.us
Lists: pgsql-hackers

Dmitry Dolgov <9erthalion6(at)gmail(dot)com> writes:
> Just a note about this patch. Of course time flies by and it needs rebase,
> but also there are few failing tests right now:

> ERROR: function pg_wal_lsn_diff(pg_lsn, unknown) does not exist

Apparently you are not testing against current HEAD. That's been there
since d10c626de (a whole two days now ;-)).

regards, tom lane


From: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-05-13 22:32:44
Message-ID: CA+q6zcX7m3aB5ziVA4D8oGRbtMJcaKXKZO6SEWVZ3hOZv7mQWQ@mail.gmail.com
Lists: pgsql-hackers

On 13 May 2017 at 22:22, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Apparently you are not testing against current HEAD. That's been there
> since d10c626de (a whole two days now ;-))

Indeed, I was working on a more than two-day-old antiquity. Unfortunately,
it's even more complicated to apply this patch against the current HEAD,
so I'll wait for a rebased version.


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andres Freund <andres(at)anarazel(dot)de>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-09-07 15:58:05
Message-ID: CAMGcDxfBkghqK94VZoOBE4XaOA9xZmkaM-opKeEzOrPmajVHBA@mail.gmail.com
Lists: pgsql-hackers

Hi,

FYI all, wanted to mention that I am working on an updated version of
the latest patch that I plan to submit to a later CF.

Regards,
Nikhils

On 14 May 2017 at 04:02, Dmitry Dolgov <9erthalion6(at)gmail(dot)com> wrote:
> On 13 May 2017 at 22:22, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>
>> Apparently you are not testing against current HEAD. That's been there
>> since d10c626de (a whole two days now ;-))
>
> Indeed, I was working on a more than two-day old antiquity. Unfortunately,
> it's even more complicated
> to apply this patch against the current HEAD, so I'll wait for a rebased
> version.

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>
Cc: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-09-27 11:46:23
Message-ID: D6F3575D-BA17-43AB-A75F-12934885BADB@postgrespro.ru
Lists: pgsql-hackers


> On 7 Sep 2017, at 18:58, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com> wrote:
>
> Hi,
>
> FYI all, wanted to mention that I am working on an updated version of
> the latest patch that I plan to submit to a later CF.
>

Cool!

So what kind of architecture do you have in mind? The same way as it was implemented before?
As far as I remember there were two main issues:

* Decoding of aborted prepared transactions.

If such a transaction modified the catalog then we can’t read reliable info with our historic snapshot:
since clog already has the aborted bit for our tx, it will break the visibility logic. There are some ways to
deal with that — by doing the catalog seq scan two times and counting the number of tuples (details
upthread) or by hijacking clog values in the historic visibility function. But ISTM it is better not to solve this
issue at all =) In most cases the intended usage of decoding a 2PC transaction is to do some form
of distributed commit, so naturally decoding will happen only with in-progress transactions, and
the commit/abort will happen only after it is decoded, sent, and a response is received. So we can
just have an atomic flag that prevents commit/abort of the tx currently being decoded. And we can filter
interesting prepared transactions based on GID, to avoid holding this lock for ordinary 2pc.
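
That flag idea can be sketched as a toy model (illustrative Python only; in
PostgreSQL this would live on the prepared transaction's shared-memory entry,
and every name below is invented):

```python
import threading

class PreparedXact:
    # Toy model: a prepared transaction entry with a flag that holds off
    # COMMIT PREPARED / ROLLBACK PREPARED while logical decoding reads it.
    def __init__(self, gid):
        self.gid = gid
        self._lock = threading.Lock()      # stands in for the atomic access
        self.being_decoded = False

    def begin_decode(self):
        # the decoder raises the flag before touching the xact's changes
        with self._lock:
            self.being_decoded = True

    def end_decode(self):
        # ...and clears it once the prepare is fully decoded and sent
        with self._lock:
            self.being_decoded = False

    def try_finish(self):
        # commit/abort checks the flag atomically; False means the caller
        # must wait until decoding is done
        with self._lock:
            return not self.being_decoded

x = PreparedXact("distributed-tx-1")
x.begin_decode()
assert x.try_finish() is False   # commit/abort is held off during decoding
x.end_decode()
assert x.try_finish() is True    # and allowed once decoding is finished
```

The GID-based filtering mentioned above would simply skip creating this flag
for prepared transactions the decoding plugin is not interested in.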

* Possible deadlocks that Andres was talking about.

I spent some time trying to find one, but didn’t find any. If locking pg_class in a prepared tx is the only
example, then (imho) it is better to just forbid preparing such transactions. Otherwise, if some realistic
examples that can block decoding actually exist, then we probably need to reconsider the way
the tx is decoded. Anyway, this part probably needs Andres' blessing.

Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-10-26 19:01:40
Message-ID: cd6a59942f42c37adc19eae23dfe701c@postgrespro.ru
Lists: pgsql-hackers

On 2017-09-27 14:46, Stas Kelvich wrote:
>> On 7 Sep 2017, at 18:58, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
>> wrote:
>>
>> Hi,
>>
>> FYI all, wanted to mention that I am working on an updated version of
>> the latest patch that I plan to submit to a later CF.
>>
>
> Cool!
>
> So what kind of architecture do you have in mind? The same way as it
> was implemented before?
> As far as I remember there were two main issues:
>
> * Decoding of aborted prepared transactions.
>
> If such a transaction modified the catalog then we can’t read reliable info
> with our historic snapshot:
> since clog already has the aborted bit for our tx, it will break the
> visibility logic. There are some ways to
> deal with that — by doing the catalog seq scan two times and counting
> the number of tuples (details
> upthread) or by hijacking clog values in the historic visibility function.
> But ISTM it is better not to solve this
> issue at all =) In most cases the intended usage of decoding a 2PC
> transaction is to do some form
> of distributed commit, so naturally decoding will happen only with
> in-progress transactions, and
> the commit/abort will happen only after it is decoded, sent, and a
> response is received. So we can
> just have an atomic flag that prevents commit/abort of the tx currently
> being decoded. And we can filter
> interesting prepared transactions based on GID, to avoid holding
> this lock for ordinary 2pc.
>
> * Possible deadlocks that Andres was talking about.
>
> I spent some time trying to find one, but didn’t find any. If locking
> pg_class in a prepared tx is the only
> example, then (imho) it is better to just forbid preparing such
> transactions. Otherwise, if some realistic
> examples that can block decoding actually exist, then we probably
> need to reconsider the way
> the tx is decoded. Anyway, this part probably needs Andres' blessing.

I've just rebased patch logical_twophase_v6 onto master.

Fixed small issues:
- XactLogAbortRecord wrote DBINFO twice, but it was decoded in
ParseAbortRecord only once. The second DBINFO was parsed as ORIGIN.
Fixed by removing the second write of DBINFO.
- SnapBuildPrepareTxnFinish tried to remove the xid from `running` instead
of `committed`. And it removed only the xid, without its subxids.
- test_decoding skipped returning "COMMIT PREPARED" and "ABORT
PREPARED".

The big issue was with decoding two-phase transactions that include DDL:
- prepared.out was misleading. We could not reproduce decoding the body of
"test_prepared#3" with logical_twophase_v6.diff. It was skipped if
`pg_logical_slot_get_changes` was called without
`twophase-decode-with-catalog-changes` set, and only "COMMIT PREPARED
test_prepared#3" was decoded.
The reason is that "PREPARE TRANSACTION" is passed to `pg_filter_prepare`
twice:
- first, on the "PREPARE" itself,
- second, on the "COMMIT PREPARED".
In v6, `pg_filter_prepare` without `with-catalog-changes` answered "true"
the first time (ie it should not be decoded), and "false" the second time
(when the transaction became committed, ie it should be decoded). But the
second time, in DecodePrepare,
`ctx->snapshot_builder->start_decoding_at`
is already in the future compared to `buf->origptr` (because it is
at the "COMMIT PREPARED" lsn). Therefore DecodePrepare just called
ReorderBufferForget.
If `pg_filter_prepare` is called with `with-catalog-changes`, then
it returns "false" both times, thus DecodePrepare decodes the transaction
the first time and calls `ReorderBufferForget` the second time.

I didn't find a way to fix it gracefully. I just changed
`pg_filter_prepare`
to return the same answer both times: "false" if called
`with-catalog-changes`
(ie DecodePrepare needs to be called), and "true" otherwise. With this
change, a catalog-changing two-phase transaction is decoded as a simple
one-phase transaction if `pg_logical_slot_get_changes` is called
without `with-catalog-changes`.
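
The two-call behaviour and the change can be illustrated with a toy model of
the callback (hypothetical Python stand-ins, not the actual C code; per the
description above, returning "true" means the prepare should NOT be decoded):

```python
def filter_prepare_v6(with_catalog_changes, at_commit_prepared):
    # v6 behaviour: the answer flips between the two calls, so the PREPARE
    # body is skipped first and then forgotten at COMMIT PREPARED
    if with_catalog_changes:
        return False
    return not at_commit_prepared

def filter_prepare_v7(with_catalog_changes, at_commit_prepared):
    # v7 behaviour: same answer both times, regardless of which record
    # triggered the callback
    return not with_catalog_changes

# v6 gives contradictory answers for the same transaction:
assert filter_prepare_v6(False, False) != filter_prepare_v6(False, True)

# v7 is consistent across both calls:
for at_commit in (False, True):
    assert filter_prepare_v7(False, at_commit) is True   # skip decoding
    assert filter_prepare_v7(True, at_commit) is False   # decode the prepare
```

The point is simply that any filter callback invoked once at PREPARE and again
at COMMIT PREPARED must give the same verdict both times, or the reorder
buffer's bookkeeping goes wrong.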

--
With regards,
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment Content-Type Size
logical_twophase_v7.diff.gz application/x-gzip 14.4 KB

From: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-10-27 19:53:08
Message-ID: 0e46521cbbca66c3298d1a9d672c9af0@postgrespro.ru
Lists: pgsql-hackers

On 2017-10-26 22:01, Sokolov Yura wrote:
> On 2017-09-27 14:46, Stas Kelvich wrote:
>>> On 7 Sep 2017, at 18:58, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
>>> wrote:
>>>
>>> Hi,
>>>
>>> FYI all, wanted to mention that I am working on an updated version of
>>> the latest patch that I plan to submit to a later CF.
>>>
>>
>> Cool!
>>
>> So what kind of architecture do you have in mind? Same way as is it
>> was implemented before?
>> As far as I remember there were two main issues:
>>
>> * Decodong of aborted prepared transaction.
>>
>> If such transaction modified catalog then we can’t read reliable info
>> with our historic snapshot,
>> since clog already have aborted bit for our tx it will brake
>> visibility logic. There are some way to
>> deal with that — by doing catalog seq scan two times and counting
>> number of tuples (details
>> upthread) or by hijacking clog values in historic visibility function.
>> But ISTM it is better not solve this
>> issue at all =) In most cases intended usage of decoding of 2PC
>> transaction is to do some form
>> of distributed commit, so naturally decoding will happens only with
>> in-progress transactions and
>> we commit/abort will happen only after it is decoded, sent and
>> response is received. So we can
>> just have atomic flag that prevents commit/abort of tx currently being
>> decoded. And we can filter
>> interesting prepared transactions based on GID, to prevent holding
>> this lock for ordinary 2pc.
>>
>> * Possible deadlocks that Andres was talking about.
>>
>> I spend some time trying to find that, but didn’t find any. If locking
>> pg_class in prepared tx is the only
>> example then (imho) it is better to just forbid to prepare such
>> transactions. Otherwise if some realistic
>> examples that can block decoding are actually exist, then we probably
>> need to reconsider the way
>> tx being decoded. Anyway this part probably need Andres blessing.
>
> Just rebased patch logical_twophase_v6 to master.
>
> Fixed small issues:
> - XactLogAbortRecord wrote DBINFO twice, but it was decoded in
> ParseAbortRecord only once. Second DBINFO were parsed as ORIGIN.
> Fixed by removing second write of DBINFO.
> - SnapBuildPrepareTxnFinish tried to remove xid from `running` instead
> of `committed`. And it removed only xid, without subxids.
> - test_decoding skipped returning "COMMIT PREPARED" and "ABORT
> PREPARED",
>
> Big issue were with decoding ddl-including two-phase transactions:
> - prepared.out were misleading. We could not reproduce decoding body of
> "test_prepared#3" with logical_twophase_v6.diff. It was skipped if
> `pg_logical_slot_get_changes` were called without
> `twophase-decode-with-catalog-changes` set, and only "COMMIT PREPARED
> test_prepared#3" were decoded.
> The reason is "PREPARE TRANSACTION" is passed to `pg_filter_prepare`
> twice:
> - first on "PREPARE" itself,
> - second - on "COMMIT PREPARED".
> In v6, `pg_filter_prepare` without `with-catalog-changes` first time
> answered "true" (ie it should not be decoded), and second time (when
> transaction became committed) it answered "false" (ie it should be
> decoded). But second time in DecodePrepare
> `ctx->snapshot_builder->start_decoding_at`
> is already in a future compared to `buf->origptr` (because it is
> on "COMMIT PREPARED" lsn). Therefore DecodePrepare just called
> ReorderBufferForget.
> If `pg_filter_prepare` is called with `with-catalog-changes`, then
> it returns "false" both times, thus DeocdePrepare decodes transaction
> in first time, and calls `ReorderBufferForget` in second time.
>
> I didn't found a way to fix it gracefully. I just change
> `pg_filter_prepare`
> to return same answer both times: "false" if called
> `with-catalog-changes`
> (ie need to call DecodePrepare), and "true" otherwise. With this
> change, catalog changing two-phase transaction is decoded as simple
> one-phase transaction, if `pg_logical_slot_get_changes` is called
> without `with-catalog-changes`.

Small improvement compared to v7:
- twophase_gid is written with alignment padding in the
XactLogCommitRecord
and XactLogAbortRecord.

--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Attachment Content-Type Size
logical_twophase_v8.diff.gz application/x-gzip 14.7 KB

From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: logical decoding of two-phase transactions
Date: 2017-10-30 02:31:31
Message-ID: CAMsr+YF4QfAKzsfHQ7MS+5NG3aY-NG3T_DdhcdPEPkWD4fo7fQ@mail.gmail.com
Lists: pgsql-hackers

On 28 October 2017 at 03:53, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru> wrote:
> On 2017-10-26 22:01, Sokolov Yura wrote:

> Small improvement compared to v7:
> - twophase_gid is written with alignment padding in the XactLogCommitRecord
> and XactLogAbortRecord.

I think Nikhils has done some significant work on this patch.
Hopefully he'll be able to share it.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-11-23 12:27:48
Message-ID: CAMGcDxf83P5SGnGH52=_0wRP9pO6uRWCMRwAA0nxKtZvir2_vQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi all,

>
> I think Nikhils has done some significant work on this patch.
> Hopefully he'll be able to share it.
>

PFA, latest patch. This builds on top of the last patch submitted by
Sokolov Yura and adds the actual logical replication interfaces to
allow PREPARE or COMMIT/ROLLBACK PREPARED on a logical subscriber.

I tested with latest PG head by setting up PUBLICATION/SUBSCRIPTION
for some tables. I tried DML on these tables via 2PC and it seems to
work with subscribers honoring COMMIT|ROLLBACK PREPARED commands.

Now getting back to the two main issues that we have been discussing:

Logical decoding deadlocking/hanging due to locks on catalog tables
====================================================

When we are decoding, we do not hold long term locks on the table. We
do RelationIdGetRelation() and RelationClose() which
increments/decrements ref counts. Also this ref count is held/released
per ReorderBuffer change record. The call to RelationIdGetRelation()
holds an AccessShareLock on pg_class, pg_attribute etc. while building
the relation descriptor. The plugin itself can access rel/syscache but
none of it holds a lock stronger than AccessShareLock on the catalog
tables.

Even activities like:

ALTER user_table;
CLUSTER user_table;

do not hold locks that will cause decoding to stall.

The only issue could be with locks on catalog objects itself in the
prepared transaction.

Now, if the 2PC transaction takes an AccessExclusiveLock on catalog
objects, via "LOCK pg_class" for example, then pretty much nothing else
in other sessions in the database will make progress until this session
COMMIT PREPAREs or aborts the 2PC transaction.

Also, in some cases like CLUSTER on catalog objects, the code
explicitly denies preparation of a 2PC transaction.

postgres=# BEGIN;
postgres=# CLUSTER pg_class using pg_class_oid_index ;
postgres=# PREPARE TRANSACTION 'test_prepared_lock';
ERROR: cannot PREPARE a transaction that modified relation mapping

This makes sense because we do not want to get into a state where the
DB is unable to progress meaningfully at all.

Is there any other locking scenario that we need to consider?
Otherwise, are we all ok on this point being a non-issue for 2PC
logical decoding?

Now on to the second issue:

2PC logical decoding with a concurrent "ABORT PREPARED" of the same transaction
=========================================================

Before 2PC support, we always decoded only committed transaction
records. Now, with prepared transactions, we run the risk of decoding
while some other backend comes in and runs COMMIT PREPARED or ROLLBACK
PREPARED simultaneously. If the other backend commits, that's not an
issue at all.

The issue is with a concurrent rollback of the prepared transaction.
We need a way to ensure that the 2PC does not abort while we are in
the midst of applying a change record.

One way to handle this is to ensure that we interlock the abort
prepared with an ongoing logical decoding operation for a bounded
period of at most one change record apply cycle.

I am outlining one solution but am all ears for better, elegant solutions.

* We introduce two new booleans in the TwoPhaseState
GlobalTransactionData structure.
bool beingdecoded;
bool abortpending;

1) Before we start iterating through the change records, if it happens
to be a prepared transaction, we
check "abortpending" in the corresponding TwoPhaseState entry. If it's
not set, then we set "beingdecoded".
If abortpending is set, we know that this transaction is going to go
away and we treat it like a regular abort and do
not do any decoding at all.

2) With "beingdecoded" set, we start with the first change record from
the iteration, decode it and apply it.

3) Before starting decode of the next change record, we re-check if
"abortpending" is set. If "abortpending" is set, we do not decode the
next change record. Thus the abort is delay-bounded to a maximum of one
change record decoding/apply cycle after we signal our intent to abort
it. Then, we need to send an ABORT (a regular abort, not ROLLBACK
PREPARED, since we have not sent "PREPARE" yet; we cannot send PREPARE
midway because the transaction block as a whole might not be
consistent) to the subscriber. We will have to add an ABORT callback in
pgoutput for this; there's only a COMMIT callback as of now. The
subscribers will abort this transaction midway because of this. We can
then follow this up with a DUMMY prepared txn, e.g. "BEGIN; PREPARE
TRANSACTION 'gid';". The reasoning for the DUMMY 2PC is mentioned below
in (6).

4) Keep decoding change records as long as "abortpending" is not set.

5) At end of the change set, send "PREPARE" to the subscribers and
then remove the "beingdecoded" flag from the TwoPhaseState entry. We
are now free to commit/rollback the prepared transaction anytime.

6) We will still decode the "ROLLBACK PREPARED" wal entry when it
comes to us on the provider. This will call the abort_prepared
callback on the subscriber. I have already added this in my patch.
This abort_prepared callback will abort the dummy PREPARED query from
step (3) above. Instead of doing this, we could actually check if the
'GID' entry exists and then call ROLLBACK PREPARED on the subscriber.
But in that case we can't be sure if the GID does not exist because of
a rollback-during-decode-issue on the provider or due to something
else. If we are ok with not finding GIDs on the subscriber side, then
am fine with removing the DUMMY prepare from step (3).

7) While the above activity is happening, if another backend wants to
abort the prepared transaction, it will set "abortpending". If
"beingdecoded" is true, the abort prepared function will wait until it
clears, by releasing the lock and re-checking after a few moments.
When beingdecoded clears (which will happen before the next change
record apply in the walsender once it sees "abortpending" set), the
abort prepared can go ahead as usual.

Note that we will have to be careful to clear this "beingdecoded" flag
even if decoding fails, the subscription is dropped, or any other issue
occurs. With that, this can work fine, IMO.
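To make the proposed interlock concrete, here is a minimal single-process model of the two flags and the transitions from steps 1-7. It is purely illustrative: in the real patch these flags would live in the shared GlobalTransactionData, guarded by TwoPhaseStateLock where the steps above require it, rather than these direct assignments.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the proposed per-gxact flags (illustrative only). */
typedef struct GXactModel
{
	bool		beingdecoded;
	bool		abortpending;
} GXactModel;

/* Step 1: claim the gxact before iterating its change records. */
static bool
decode_begin(GXactModel *g)
{
	if (g->abortpending)		/* already doomed: treat as a plain abort */
		return false;
	g->beingdecoded = true;
	return true;
}

/* Steps 3-4: re-checked before decoding each change record. */
static bool
decode_continue(const GXactModel *g)
{
	return !g->abortpending;
}

/* Step 5 (or the step-3 early exit): decoding done, release the claim. */
static void
decode_finish(GXactModel *g)
{
	g->beingdecoded = false;
}

/*
 * Step 7: a backend running ROLLBACK PREPARED flags its intent.
 * Returns true when it may proceed at once; false means it must wait
 * (e.g. in a WaitLatch loop) until beingdecoded clears.
 */
static bool
abort_request(GXactModel *g)
{
	g->abortpending = true;
	return !g->beingdecoded;
}
```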

Thoughts? Holes in the theory? Other issues?

I am attaching my latest and greatest WIP patch with does not contain
any of the above abort handling yet.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
2pc_logical_22_11_17.patch application/octet-stream 76.0 KB

From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-11-24 03:05:26
Message-ID: CAMsr+YGX1z5t_MccrJxd4VJPgZKx17EsAQcAQ_1iJ4dKDp0LOg@mail.gmail.com
Lists: pgsql-hackers

On 23 November 2017 at 20:27, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
wrote:

>
> Is there any other locking scenario that we need to consider?
> Otherwise, are we all ok on this point being a non-issue for 2PC
> logical decoding?
>

Yeah.

I didn't find any sort of sensible situation where locking would pose
issues. Unless you're taking explicit LOCKs on catalog tables, you should
be fine.

There may be issues with CLUSTER or VACUUM FULL of non-relmapped catalog
relations I guess. Personally I put that in the "don't do that" box, but if
we really have to guard against it we could slightly expand the limits on
which txns you can PREPARE to any txn that has a strong lock on a catalog
relation.

> The issue is with a concurrent rollback of the prepared transaction.
> We need a way to ensure that
> the 2PC does not abort when we are in the midst of a change record
> apply activity.
>

The *reason* we care about this is that tuples created by aborted txns are
not considered "recently dead" by vacuum. They can be marked invalid and
removed immediately due to hint-bit setting and HOT pruning, vacuum runs,
etc.

This could create an inconsistent view of the catalogs if our prepared txn
did any DDL. For example, we might've updated a pg_class row, so we created
a new row and set xmax on the old one. Vacuum may merrily remove our new
row so there's no way we can find the correct data anymore, we'd have to
find the outdated row or no row. By my reading of HeapTupleSatisfiesMVCC
we'll see the old pg_class tuple.

Similar issues apply for pg_attribute etc etc. We might try to decode a
record according to the wrong table structure because relcache lookups
performed by the plugin will report outdated information.

The sanest option here seems to be to stop the txn from aborting while
we're decoding it, hence Nikhil's suggestions.

> * We introduce two new booleans in the TwoPhaseState
> GlobalTransactionData structure.
> bool beingdecoded;
> bool abortpending;
>

I think it's premature to rule out the simple option of doing a LockGXact
when we start decoding. Improve the error "prepared transaction with
identifier \"%s\" is busy" to report the locking pid too. It means you
cannot rollback or commit a prepared xact while it's being decoded, but for
the intended use of this patch, I think that's absolutely fine anyway.

But I like your suggestion much more than the idea of taking a LWLock on
TwoPhaseStateLock while decoding a record. Lots of LWLock churn, and
LWLocks held over arbitrary user plugin code. Not viable.

With your way we just have to take a LWLock once on TwoPhaseState when we
test abortpending and set beingdecoded. After that, during decoding, we can
do unlocked tests of abortpending, since a stale read will do nothing worse
than delay our response a little. The existing 2PC ops already take the
LWLock and can examine beingdecoded then. I expect they'd need to WaitLatch
in a loop until beingdecoded was cleared, re-acquiring the LWLock and
re-checking each time it's woken. We should probably add a field there for
a waiter proc that wants its latch set, so 2pc ops don't usually have to
poll for decoding to finish. (Unless condition variables will help us here?)

However, let me make an entirely alternative suggestion. Should we add a
heavyweight lock class for 2PC xacts instead, and leverage the existing
infrastructure? We already use transaction locks widely after all. That
way, we just take some kind of share lock on the 2PC xact by xid when we
start logical decoding of the 2pc xact. ROLLBACK PREPARED and COMMIT
PREPARED would acquire the same heavyweight lock in an exclusive mode
before grabbing TwoPhaseStateLock and doing their work.

That way we get automatic cleanup when decoding procs exit, we get wakeups
for waiters, etc, all for "free".

How practical is adding a lock class?

(Frankly I've often wished I could add new heavyweight lock classes when
working on complex extensions like BDR, too, and in an ideal world we'd be
able to register lock classes for use by extensions...)

> 3) Before starting decode of the next change record, we re-check if
> "abortpending" is set. If "abortpending"
> is set, we do not decode the next change record. Thus the abort is
> delay-bounded to a maximum of one change record decoding/apply cycle
> after we signal our intent to abort it. Then, we need to send ABORT
> (regular, not rollback prepared, since we have not sent "PREPARE" yet.
>

Just to be explicit, this means "tell the downstream the xact has aborted".
Currently logical decoding does not ever start decoding an xact until it's
committed, so it has never needed an abort callback on the output plugin
interface.

But we'll need one when we start doing speculative logical decoding of big
txns before they commit, and we'll need it for this. It's relatively
trivial.

> This abort_prepared callback will abort the dummy PREPARED query from
> step (3) above. Instead of doing this, we could actually check if the
> 'GID' entry exists and then call ROLLBACK PREPARED on the subscriber.
> But in that case we can't be sure if the GID does not exist because of
> a rollback-during-decode-issue on the provider or due to something
> else. If we are ok with not finding GIDs on the subscriber side, then
> am fine with removing the DUMMY prepare from step (3).
>

I prefer the latter approach personally, not doing the dummy 2pc xact.
Instead we can just ignore a ROLLBACK PREPARED for a txn whose gid does not
exist on the downstream. I can easily see situations where we might
manually abort a txn and wouldn't want logical decoding to get perpetually
stuck waiting to abort a gid that doesn't exist, for example.

Ignoring commit prepared for a missing xact would not be great, but I think
it's sensible enough to ignore missing GIDs for rollback prepared.

We'd need a race-free way to do that though, so I think we'd have to
extend FinishPreparedTransaction and LockGXact with some kind of missing_ok
flag. I doubt that'd be controversial.
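A toy model of what a missing_ok-style lookup could look like. This is illustrative only: the real LockGXact works over the shared TwoPhaseState array under TwoPhaseStateLock and raises an error rather than returning a sentinel.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_GXACTS 4

/* Toy stand-in for the TwoPhaseState GID array (illustrative only). */
static const char *gxact_gids[MAX_GXACTS] = {"gid-1", "gid-2", NULL, NULL};

/*
 * Sketch of a LockGXact-style lookup extended with a missing_ok flag:
 * return the slot index when found; with missing_ok, an absent gid
 * yields -1 instead of an error (-2 stands in for ereport(ERROR) here).
 */
static int
lookup_gxact(const char *gid, bool missing_ok)
{
	for (int i = 0; i < MAX_GXACTS; i++)
	{
		if (gxact_gids[i] != NULL && strcmp(gxact_gids[i], gid) == 0)
			return i;
	}
	return missing_ok ? -1 : -2;
}
```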

A couple of other considerations not covered in what you wrote:

- It's really important that the hook that decides whether to decode an
xact at prepare or commit prepared time reports the same answer each and
every time, including if it's called after a prepared txn has committed. It
probably can't look at anything more than the xact's origin replica
identity, xid, and gid. This also means we need to know the gid of prepared
txns when processing their commit record, so we can tell logical decoding
whether we already sent the data to the client at prepare-transaction time,
or if we should send it at commit-prepared time instead.

- You need to flush the syscache when you finish decoding a PREPARE
TRANSACTION of an xact that made catalog changes, unless it's immediately
followed by COMMIT PREPARED of the same xid. Because xacts between the two
cannot see changes the prepared xact made to the catalogs.

- For the same reason we need to ensure that the historic snapshot used to
decode a 2PC xact that made catalog changes isn't then used for subsequent
xacts between the prepare and commit. They'd see the uncommitted catalogs
of the prepared xact.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-11-24 05:44:52
Message-ID: CAMGcDxeDd4uB+tvaYrtjLgykeOpa-5nwPetNM+prCQ=wpPmPaQ@mail.gmail.com
Lists: pgsql-hackers

Hi Craig,

> I didn't find any sort of sensible situation where locking would pose
> issues. Unless you're taking explicit LOCKs on catalog tables, you should be
> fine.
>
> There may be issues with CLUSTER or VACUUM FULL of non-relmapped catalog
> relations I guess. Personally I put that in the "don't do that" box, but if
> we really have to guard against it we could slightly expand the limits on
> which txns you can PREPARE to any txn that has a strong lock on a catalog
> relation.
>

Well, we don't allow VACUUM FULL of regular tables in transaction blocks.

I tried "CLUSTER user_table USING pkey"; it works, and it does not take
strong locks on catalog tables that would halt the decoding process.
ALTER TABLE already works without stalling decoding, as mentioned
earlier.

>>
>> The issue is with a concurrent rollback of the prepared transaction.
>> We need a way to ensure that
>> the 2PC does not abort when we are in the midst of a change record
>> apply activity.
>
>
> The *reason* we care about this is that tuples created by aborted txns are
> not considered "recently dead" by vacuum. They can be marked invalid and
> removed immediately due to hint-bit setting and HOT pruning, vacuum runs,
> etc.
>
> This could create an inconsistent view of the catalogs if our prepared txn
> did any DDL. For example, we might've updated a pg_class row, so we created
> a new row and set xmax on the old one. Vacuum may merrily remove our new row
> so there's no way we can find the correct data anymore, we'd have to find
> the outdated row or no row. By my reading of HeapTupleSatisfiesMVCC we'll
> see the old pg_class tuple.
>
> Similar issues apply for pg_attribute etc etc. We might try to decode a
> record according to the wrong table structure because relcache lookups
> performed by the plugin will report outdated information.
>

We actually do the decoding in a PG_TRY/CATCH block, so if there are
any errors we can clean those up in the CATCH block. If it's a prepared
transaction, then we can send an ABORT to the remote side so it cleans
itself up.

> The sanest option here seems to be to stop the txn from aborting while we're
> decoding it, hence Nikhil's suggestions.
>

If we do the cleanup above in the CATCH block, then do we really care?
I guess the issue would be in determining why we reached the CATCH
block: whether it was due to a decoding error, network issues, or
something else.

>>
>> * We introduce two new booleans in the TwoPhaseState
>> GlobalTransactionData structure.
>> bool beingdecoded;
>> bool abortpending;
>
>
> I think it's premature to rule out the simple option of doing a LockGXact
> when we start decoding. Improve the error "prepared transaction with
> identifier \"%s\" is busy" to report the locking pid too. It means you
> cannot rollback or commit a prepared xact while it's being decoded, but for
> the intended use of this patch, I think that's absolutely fine anyway.
>
> But I like your suggestion much more than the idea of taking a LWLock on
> TwoPhaseStateLock while decoding a record. Lots of LWLock churn, and LWLocks
> held over arbitrary user plugin code. Not viable.
>
> With your way we just have to take a LWLock once on TwoPhaseState when we
> test abortpending and set beingdecoded. After that, during decoding, we can
> do unlocked tests of abortpending, since a stale read will do nothing worse
> than delay our response a little. The existing 2PC ops already take the
> LWLock and can examine beingdecoded then. I expect they'd need to WaitLatch
> in a loop until beingdecoded was cleared, re-acquiring the LWLock and
> re-checking each time it's woken. We should probably add a field there for a
> waiter proc that wants its latch set, so 2pc ops don't usually have to poll
> for decoding to finish. (Unless condition variables will help us here?)
>

Yes, WaitLatch could do the job here.

> However, let me make an entirely alternative suggestion. Should we add a
> heavyweight lock class for 2PC xacts instead, and leverage the existing
> infrastructure? We already use transaction locks widely after all. That way,
> we just take some kind of share lock on the 2PC xact by xid when we start
> logical decoding of the 2pc xact. ROLLBACK PREPARED and COMMIT PREPARED
> would acquire the same heavyweight lock in an exclusive mode before grabbing
> TwoPhaseStateLock and doing their work.
>
> That way we get automatic cleanup when decoding procs exit, we get wakeups
> for waiters, etc, all for "free".
>
> How practical is adding a lock class?

Am open to suggestions. This looks like it could work decently.

>
> Just to be explicit, this means "tell the downstream the xact has aborted".
> Currently logical decoding does not ever start decoding an xact until it's
> committed, so it has never needed an abort callback on the output plugin
> interface.
>
> But we'll need one when we start doing speculative logical decoding of big
> txns before they commit, and we'll need it for this. It's relatively
> trivial.
>

Yes, it will be a standard wrapper call to implement on both the send
and apply sides.

>>
>> This abort_prepared callback will abort the dummy PREPARED query from
>>
>> step (3) above. Instead of doing this, we could actually check if the
>> 'GID' entry exists and then call ROLLBACK PREPARED on the subscriber.
>> But in that case we can't be sure if the GID does not exist because of
>> a rollback-during-decode-issue on the provider or due to something
>> else. If we are ok with not finding GIDs on the subscriber side, then
>> am fine with removing the DUMMY prepare from step (3).
>
>
> I prefer the latter approach personally, not doing the dummy 2pc xact.
> Instead we can just ignore a ROLLBACK PREPARED for a txn whose gid does not
> exist on the downstream. I can easily see situations where we might manually
> abort a txn and wouldn't want logical decoding to get perpetually stuck
> waiting to abort a gid that doesn't exist, for example.
>
> Ignoring commit prepared for a missing xact would not be great, but I think
> it's sensible enough to ignore missing GIDs for rollback prepared.
>

Yes, that makes sense in the case of ROLLBACK. If we find a missing GID
for a COMMIT PREPARED, we are in for some trouble.

> We'd need a race-free way to do that though, so I think we'd have to extend
> FinishPreparedTransaction and LockGXact with some kind of missing_ok flag. I
> doubt that'd be controversial.
>

Sure.

>
> A couple of other considerations not covered in what you wrote:
>
> - It's really important that the hook that decides whether to decode an xact
> at prepare or commit prepared time reports the same answer each and every
> time, including if it's called after a prepared txn has committed. It
> probably can't look at anything more than the xact's origin replica
> identity, xid, and gid. This also means we need to know the gid of prepared
> txns when processing their commit record, so we can tell logical decoding
> whether we already sent the data to the client at prepare-transaction time,
> or if we should send it at commit-prepared time instead.
>

We already have a filter_prepare_cb hook in place for this. TBH, I
don't think this patch needs to worry about the internals of that
hook. Whatever it returns, as long as it returns the same value every
time, we should be good from the logical decoding perspective.

I think if we encode the logic in the GID itself, then it will be
consistent every time. For example, if the hook sees a GID with the
prefix '_$Logical_', then it knows it has to PREPARE it. Others can be
decoded at commit time.
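A sketch of that GID-prefix convention. The '_$Logical_' prefix and the helper name are just this mail's example, not an established API; the real decision would sit inside the filter_prepare_cb hook.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Sketch of the GID-prefix convention suggested above (illustrative
 * only). The key property: the answer depends only on the GID, so the
 * hook reports the same result at PREPARE time and at COMMIT PREPARED
 * time.
 */
static const char logical_gid_prefix[] = "_$Logical_";

static bool
gid_wants_decode_at_prepare(const char *gid)
{
	return strncmp(gid, logical_gid_prefix,
				   sizeof(logical_gid_prefix) - 1) == 0;
}
```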

> - You need to flush the syscache when you finish decoding a PREPARE
> TRANSACTION of an xact that made catalog changes, unless it's immediately
> followed by COMMIT PREPARED of the same xid. Because xacts between the two
> cannot see changes the prepared xact made to the catalogs.
>
> - For the same reason we need to ensure that the historic snapshot used to
> decode a 2PC xact that made catalog changes isn't then used for subsequent
> xacts between the prepare and commit. They'd see the uncommitted catalogs of
> the prepared xact.
>

Yes, we will do TeardownHistoricSnapshot and syscache flush as part of
the cleanup for 2PC transactions.

Regards,
Nikhils

> --
> Craig Ringer http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-11-24 06:41:22
Message-ID: CAMsr+YH_x7Lrz3arUewzAk4=FZ-+hdum-eW7gZJBAEmzo937eg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 24 November 2017 at 13:44, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
wrote:

>
> > This could create an inconsistent view of the catalogs if our prepared
> > txn did any DDL. For example, we might've updated a pg_class row, so we
> > created a new row and set xmax on the old one. Vacuum may merrily remove
> > our new row so there's no way we can find the correct data anymore, we'd
> > have to find the outdated row or no row. By my reading of
> > HeapTupleSatisfiesMVCC we'll see the old pg_class tuple.
> >
> > Similar issues apply for pg_attribute etc etc. We might try to decode a
> > record according to the wrong table structure because relcache lookups
> > performed by the plugin will report outdated information.
> >
>
> We actually do the decoding in a PG_TRY/CATCH block, so if there are
> any errors we
> can clean those up in the CATCH block. If it's a prepared transaction
> then we can send
> an ABORT to the remote side to clean itself up.
>

Yeah. I suspect it might not always ERROR gracefully though.

> > How practical is adding a lock class?
>
> Am open to suggestions. This looks like it could work decently.

It looks amazingly simple from here. Which probably means there's more to
it that I haven't seen yet. I could use advice from someone who knows the
locking subsystem better.

> Yes, that makes sense in case of ROLLBACK. If we find a missing GID
> for a COMMIT PREPARED we are in for some trouble.
>

I agree. But it's really down to the apply worker / plugin to set policy
there, I think. It's not the 2PC decoding support's problem.

I'd argue that a plugin that wishes to strictly track and match 2PC aborts
with the subsequent ROLLBACK PREPARED could do so by recording the abort
locally. It need not rely on faked-up 2pc xacts from the output plugin.
Though it might choose to create them on the downstream as its method of
tracking aborts.

In other words, we don't need the logical decoding infrastructure's help
here. It doesn't have to fake up 2PC xacts for us. Output plugins/apply
workers that want to can do it themselves, and those that don't can ignore
rollback prepared for non-matched GIDs instead.

> > We'd need a race-free way to do that though, so I think we'd have to
> > extend FinishPreparedTransaction and LockGXact with some kind of
> > missing_ok flag. I doubt that'd be controversial.
> >
>
> Sure.

I reckon that should be in-scope for this patch, and pretty clearly useful.
Also simple.

>
> > - It's really important that the hook that decides whether to decode an
> > xact at prepare or commit prepared time reports the same answer each and
> > every time, including if it's called after a prepared txn has committed.
> > It probably can't look at anything more than the xact's origin replica
> > identity, xid, and gid. This also means we need to know the gid of
> > prepared txns when processing their commit record, so we can tell logical
> > decoding whether we already sent the data to the client at
> > prepare-transaction time, or if we should send it at commit-prepared time
> > instead.
> >
>
> We already have a filter_prepare_cb hook in place for this. TBH, I
> don't think this patch needs to worry about the internals of that
> hook. Whatever it returns, as long as it returns the same value every
> time, we should be good from the logical decoding perspective.
>

I agree. I meant that it should try to pass only info that's accessible at
both PREPARE TRANSACTION and COMMIT PREPARED time, and we should document
the importance of returning a consistent result. In particular, it's always
wrong to examine the current twophase state when deciding what to return.

> I think if we encode the logic in the GID itself, then it will be
> consistent every time. For example, if the hook sees a GID
> with the prefix '_$Logical_', then it knows it has to PREPARE it.
> Others can be decoded at commit time.
>

Yep. We can also safely tell the hook:

* the xid
* whether the xact has made catalog changes (since we know that at prepare
and commit time)

but probably not much else.

> > - You need to flush the syscache when you finish decoding a PREPARE
> > TRANSACTION of an xact that made catalog changes, unless it's immediately
> > followed by COMMIT PREPARED of the same xid. Because xacts between the
> > two cannot see changes the prepared xact made to the catalogs.
> >
> > - For the same reason we need to ensure that the historic snapshot used
> > to decode a 2PC xact that made catalog changes isn't then used for
> > subsequent xacts between the prepare and commit. They'd see the
> > uncommitted catalogs of the prepared xact.
> >
>
> Yes, we will do TeardownHistoricSnapshot and syscache flush as part of
> the cleanup for 2PC transactions.
>

Great.

Thanks.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-11-29 02:19:44
Message-ID: CAB7nPqT-b8VTQEhJSNA380g66dgsYT-dAcH_mOqaPG9HCYp9uw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Nov 24, 2017 at 3:41 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> It looks amazingly simple from here. Which probably means there's more to it
> that I haven't seen yet. I could use advice from someone who knows the
> locking subsystem better.

The status of this patch is, I think, not correct. It is marked as
waiting on author, but Nikhil has shown up and written an updated
patch. So I am moving it to the next CF with "needs review".
--
Michael


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-11-29 23:40:36
Message-ID: e55b1240-df0b-c5eb-bcdb-18902cdf42f8@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 24/11/17 07:41, Craig Ringer wrote:
> On 24 November 2017 at 13:44, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
> wrote:
>
> > > How practical is adding a lock class?
> >
> > Am open to suggestions. This looks like it could work decently.
>
> It looks amazingly simple from here. Which probably means there's more
> to it that I haven't seen yet. I could use advice from someone who knows
> the locking subsystem better.
>

Hmm, I don't like the interaction that would have with ROLLBACK, meaning
that ROLLBACK has to wait for decoding to finish, which may take longer
than the transaction itself took (given potential network calls, it's
practically unbounded time).

I also think that if we want to add streaming of transactions in the
future, we'll face a similar problem, and the locking approach will not
work there, as the transaction may still be locked by the owning backend
while we are decoding it.

From my perspective this patch changes the assumption in
HeapTupleSatisfiesVacuum() that changes done by an aborted transaction
can't be seen by anybody else. That's clearly not true here, as the
decoding can see them. So perhaps a better approach would be to not
return HEAPTUPLE_DEAD in HeapTupleSatisfiesVacuum() if the transaction
id is newer than OldestXmin (the same logic we use for deleted tuples
of committed transactions), even for aborted transactions. I also
briefly checked HOT pruning, and AFAICS the normal HOT pruning (the
one not called by vacuum) also uses the xmin as authoritative even for
aborted txes, so nothing needs to be done there, probably.

If we are worried that this affects cleanup of, for example, large
aborted COPY transactions, and we think that's worth worrying about,
then we could limit the new OldestXmin-based logic to catalog tuples
only, as those are the only ones we need available during decoding.
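
A standalone sketch of the visibility change being proposed (stub types and names only; the real change would live inside HeapTupleSatisfiesVacuum() and must use PostgreSQL's wraparound-aware TransactionIdPrecedes(), which plain integer comparison here merely approximates):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

typedef enum
{
    HEAPTUPLE_DEAD,            /* tuple is removable */
    HEAPTUPLE_RECENTLY_DEAD    /* someone may still need the tuple */
} HTSV_Result;

/* Stub for TransactionIdPrecedes(); real code handles xid wraparound. */
static bool
xid_precedes(TransactionId a, TransactionId b)
{
    return a < b;
}

/* Illustrative: instead of reporting an aborted xact's tuples as DEAD
 * unconditionally, report them RECENTLY_DEAD while their xid is not yet
 * older than OldestXmin, so a concurrent decoding session can still
 * read them (optionally only for catalog tuples). */
static HTSV_Result
classify_aborted_tuple(TransactionId xid, TransactionId oldest_xmin)
{
    if (xid_precedes(xid, oldest_xmin))
        return HEAPTUPLE_DEAD;          /* nobody can still need it */
    return HEAPTUPLE_RECENTLY_DEAD;     /* decoding may still see it */
}
```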

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-11-30 05:54:50
Message-ID: CAMsr+YGE84n5_1Z=xNwaRxS3Fx6WS7=FLcYtezt1wRx7VNjW0Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 30 November 2017 at 07:40, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
wrote:

> Hi,
>
> On 24/11/17 07:41, Craig Ringer wrote:
> > On 24 November 2017 at 13:44, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
> > wrote:
> >
> > > > How practical is adding a lock class?
> > >
> > > Am open to suggestions. This looks like it could work decently.
> >
> > It looks amazingly simple from here. Which probably means there's more
> > to it that I haven't seen yet. I could use advice from someone who knows
> > the locking subsystem better.
> >
>
> Hmm, I don't like the interaction that would have with ROLLBACK, meaning
> that ROLLBACK has to wait for decoding to finish which may take longer
> than the transaction took itself (given potential network calls, it's
> practically unbounded time).
>

Yeah. We could check for waiters before we do the network I/O and release +
bail out. But once we enter the network call we're committed and it could
take a long time.

I don't find that particularly troubling for 2PC, but it's an obvious
nonstarter if we want to use the same mechanism for streaming normal xacts
out before commit.

Even for 2PC, if we have >1 downstream then once one reports an ERROR on
PREPARE TRANSACTION, there's probably no point continuing to stream the 2PC
xact out to other peers. So being able to abort the txn while it's being
decoded, causing decoding to bail out, is desirable there too.

> I also think that if we'll want to add streaming of transactions in the
> future, we'll face similar problem and the locking approach will not
> work there as the transaction may still be locked by the owning backend
> while we are decoding it.
>

Agreed. For that reason we need to look further afield than
locking-based solutions.

> From my perspective this patch changes the assumption in
> HeapTupleSatisfiesVacuum() that changes done by aborted transaction
> can't be seen by anybody else. That's clearly not true here as the
> decoding can see it.

Yes, *if* we don't use some locking-like approach to stop abort from
occurring while decoding is processing an xact.

> So perhaps better approach would be to not return
> HEAPTUPLE_DEAD if the transaction id is newer than the OldestXmin (same
> logic we use for deleted tuples of committed transactions) in the
> HeapTupleSatisfiesVacuum() even for aborted transactions. I also briefly
> checked HOT pruning and AFAICS the normal HOT pruning (the one not
> called by vacuum) also uses the xmin as authoritative even for aborted
> txes so nothing needs to be done there probably.
>
> In case we are worried that this affects cleanups of for example large
> aborted COPY transactions and we think it's worth worrying about then we
> could limit the new OldestXmin based logic only to catalog tuples as
> those are the only ones we need available in decoding.

Yeah, if it's limited to catalog tuples only then that sounds good. I was
quite concerned about how it'd impact vacuuming otherwise, but if limited
to catalogs, about the only impact should be on workloads that create lots
of TEMPORARY tables and then ROLLBACK - and not much even on those.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-11-30 10:38:59
Message-ID: CAMGcDxfin6iYuHN1_21Z7YYvPDU9qLgfBSBYti1KWiUUpJBr7g@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

>> So perhaps better approach would be to not return
>> HEAPTUPLE_DEAD if the transaction id is newer than the OldestXmin (same
>> logic we use for deleted tuples of committed transactions) in the
>> HeapTupleSatisfiesVacuum() even for aborted transactions. I also briefly
>> checked HOT pruning and AFAICS the normal HOT pruning (the one not
>> called by vacuum) also uses the xmin as authoritative even for aborted
>> txes so nothing needs to be done there probably.
>>
>> In case we are worried that this affects cleanups of for example large
>> aborted COPY transactions and we think it's worth worrying about then we
>> could limit the new OldestXmin based logic only to catalog tuples as
>> those are the only ones we need available in decoding.
>
>
> Yeah, if it's limited to catalog tuples only then that sounds good. I was
> quite concerned about how it'd impact vacuuming otherwise, but if limited to
> catalogs about the only impact should be on workloads that create lots of
> TEMPORARY tables then ROLLBACK - and not much on those.
>

Based on these discussions, I think there are two separate issues here:

1) Make HeapTupleSatisfiesVacuum() behave differently for recently
aborted catalog tuples.

2) Invent a mechanism to stop a specific logical decoding activity in
the middle. The reason to stop could be a concurrent abort, a global
transaction manager deciding to roll back, or anything else.

ISTM that, for 2, if (1) is able to leave the recently aborted tuples
around for a little while (we only really need them while the decode
of the current change record is ongoing), then we could accomplish it
via a callback. This callback would be called before commencing the
decode and network send of each change record. In case of in-core
logical decoding, the callback for pgoutput could check whether the
transaction has aborted (via a call to TransactionIdDidAbort() or a
similar function); additional logic can be added as needed for various
scenarios. If it has aborted, we will abandon decoding and send an
ABORT to the subscribers before returning.
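
A rough standalone sketch of that flow (all names here are illustrative stand-ins; the real abort check would be TransactionIdDidAbort() against the clog, and the real loop lives in the reorderbuffer):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Illustrative stand-in for "did a concurrent ROLLBACK PREPARED
 * happen?": the abort becomes visible from this change number onward
 * (-1 means it never aborts). */
static int simulated_abort_at = -1;

static bool
xact_aborted(TransactionId xid, int change_no)
{
    (void) xid;                 /* real code: TransactionIdDidAbort(xid) */
    return simulated_abort_at >= 0 && change_no >= simulated_abort_at;
}

/* Before decoding and sending each change record, consult the filter.
 * Returns the number of change records actually sent; if that is less
 * than nchanges, the caller must send an ABORT to the subscriber
 * instead of a PREPARE. */
static int
decode_changes(TransactionId xid, int nchanges)
{
    int sent = 0;

    for (int i = 0; i < nchanges; i++)
    {
        if (xact_aborted(xid, i))
            break;              /* abandon decoding of this xact */
        /* ... decode change record i and send it downstream ... */
        sent++;
    }
    return sent;
}
```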

Regards,
Nikhils


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-04 15:15:30
Message-ID: CAMGcDxf0YDPxgG3sU=0k8zTZniEe2RhT90v4BP__3a1P4iHpEQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

PFA, latest patch for this functionality.
This patch contains the following changes as compared to the earlier patch:

- Fixed a bunch of typos and comments

- Modified HeapTupleSatisfiesVacuum to return HEAPTUPLE_RECENTLY_DEAD
if the transaction id is newer than OldestXmin. Doing this only for
CATALOG tables (htup->t_tableOid < (Oid) FirstNormalObjectId).

- Added a filter callback filter_decode_txn_cb_wrapper() to decide if
it's ok to decode the NEXT change record. This filter as of now checks
if the XID that is involved got aborted. Additional checks can be
added here as needed.

- Added an ABORT callback in the decoding process. This was not needed
before because we only ever decoded committed transactions. With
2PC transactions, it is possible that while we are decoding one, another
backend issues a concurrent ROLLBACK PREPARED. So when
filter_decode_txn_cb_wrapper() gets called, it will tell us not to
decode the next change record. In that case we need to send an ABORT
to the subscriber (and not ROLLBACK PREPARED, because we are yet to
issue PREPARE to the subscriber).

- Added all functionality to read the abort command and apply it on
the remote subscriber as needed.

- Added functionality in ReorderBufferCommit() to abort midway based
on the feedback from filter_decode_txn_cb_wrapper()

- Modified LockGXact() and FinishPreparedTransaction() to allow a
missing GID in case of "ROLLBACK PREPARED". Currently, this will only
happen in the logical apply code path. We still send it to the
subscriber because it's difficult to identify on the provider whether
this transaction was aborted midway through decoding or is in PREPARED
state on the subscriber. It will error out as before in all other
cases.

- Totally removed snapshot addition/deletion code while doing the
decoding. That's not needed at all while decoding an ongoing
transaction. The entries in the snapshot are needed for future
transactions to be able to decode older transactions. For 2PC
transactions, we don't need to decode them till COMMIT PREPARED gets
called. This has simplified all that unwanted snapshot push/pop code,
which is nice.

Regards,
Nikhils

On 30 November 2017 at 16:08, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com> wrote:
> Hi,
>
>
>>> So perhaps better approach would be to not return
>>> HEAPTUPLE_DEAD if the transaction id is newer than the OldestXmin (same
>>> logic we use for deleted tuples of committed transactions) in the
>>> HeapTupleSatisfiesVacuum() even for aborted transactions. I also briefly
>>> checked HOT pruning and AFAICS the normal HOT pruning (the one not
>>> called by vacuum) also uses the xmin as authoritative even for aborted
>>> txes so nothing needs to be done there probably.
>>>
>>> In case we are worried that this affects cleanups of for example large
>>> aborted COPY transactions and we think it's worth worrying about then we
>>> could limit the new OldestXmin based logic only to catalog tuples as
>>> those are the only ones we need available in decoding.
>>
>>
>> Yeah, if it's limited to catalog tuples only then that sounds good. I was
>> quite concerned about how it'd impact vacuuming otherwise, but if limited to
>> catalogs about the only impact should be on workloads that create lots of
>> TEMPORARY tables then ROLLBACK - and not much on those.
>>
>
> Based on these discussions, I think there are two separate issues here:
>
> 1) Make HeapTupleSatisfiesVacuum() to behave differently for recently
> aborted catalog tuples.
>
> 2) Invent a mechanism to stop a specific logical decoding activity in
> the middle. The reason to stop it could be a concurrent abort, maybe a
> global transaction manager decides to rollback, or any other reason,
> for example.
>
> ISTM, that for 2, if (1) is able to leave the recently abort tuples
> around for a little bit while (we only really need them till the
> decode of the current change record is ongoing), then we could
> accomplish it via a callback. This callback should be called before
> commencing decode and network send of each change record. In case of
> in-core logical decoding, the callback for pgoutput could check for
> the transaction having aborted (a call to TransactionIdDidAbort() or
> similar such functions), additional logic can be added as needed for
> various scenarios. If it's aborted, we will abandon decoding and send
> an ABORT to the subscribers before returning.
>
> Regards,
> Nikhils

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
2pc_logical_04_12_17.patch application/octet-stream 89.3 KB

From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-05 03:56:16
Message-ID: CAMsr+YHDHxQtfeFH6rL4upjT7La9kif-pArR7n_ocjVDSHXjaA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 4 December 2017 at 23:15, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
wrote:

> PFA, latest patch for this functionality.
> This patch contains the following changes as compared to the earlier patch:
>
> - Fixed a bunch of typos and comments
>
> - Modified HeapTupleSatisfiesVacuum to return HEAPTUPLE_RECENTLY_DEAD
> if the transaction id is newer than OldestXmin. Doing this only for
> CATALOG tables (htup->t_tableOid < (Oid) FirstNormalObjectId).
>

Because logical decoding supports user-catalog relations, we need to use
the same sort of logic that GetOldestXmin uses instead of a simple
oid-range check. See RelationIsAccessibleInLogicalDecoding() and the
user_catalog_table reloption.
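
To illustrate the difference with a standalone sketch (stub struct; FirstNormalObjectId matches PostgreSQL's 16384, and 1259 is pg_class's OID): a bare OID-range test catches real system catalogs but misses relations opted in via user_catalog_table, which the decoding check must also treat as catalogs.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Oid;
#define FirstNormalObjectId 16384

/* Minimal stand-in for the relevant parts of RelationData. */
typedef struct
{
    Oid  relid;
    bool user_catalog_table;    /* the user_catalog_table reloption */
} FakeRelation;

/* Illustrative version of the needed check: a relation counts as a
 * catalog for logical decoding if it is a genuine system catalog OR it
 * carries the user_catalog_table reloption, even though the latter's
 * OID lies in the user range. */
static bool
is_catalog_for_decoding(const FakeRelation *rel)
{
    if (rel->relid < FirstNormalObjectId)
        return true;                 /* genuine system catalog */
    return rel->user_catalog_table;  /* user-defined catalog table */
}
```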

Otherwise pseudo-catalogs used by logical decoding output plugins could
still suffer issues with needed tuples getting vacuumed, though only if
the txn being decoded made changes to those tables and then ROLLBACKed.
It's a pretty tiny corner case for decoding of 2PC, but a bigger one
when we're addressing streaming decoding.

Otherwise I'm really, really happy with how this is progressing and want to
find time to play with it.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-05 08:00:01
Message-ID: CAMGcDxeykgqNz65EDAJy8h4hdwzd39hW0vRaC-XRiXwxJ4YpWg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>> - Modified HeapTupleSatisfiesVacuum to return HEAPTUPLE_RECENTLY_DEAD
>> if the transaction id is newer than OldestXmin. Doing this only for
>> CATALOG tables (htup->t_tableOid < (Oid) FirstNormalObjectId).
>
>
> Because logical decoding supports user-catalog relations, we need to use the
> same sort of logical that GetOldestXmin uses instead of a simple oid-range
> check. See RelationIsAccessibleInLogicalDecoding() and the
> user_catalog_table reloption.
>

Unfortunately, HeapTupleSatisfiesVacuum does not have the Relation
structure handily available to allow for these checks.

> Otherwise pseudo-catalogs used by logical decoding output plugins could
> still suffer issues with needed tuples getting vacuumed, though only if the
> txn being decoded made changes to those tables than ROLLBACKed. It's a
> pretty tiny corner case for decoding of 2pc but a bigger one when we're
> addressing streaming decoding.
>

We disallow rewrites on user_catalog_tables, so they cannot change
underneath us. Yes, DML can be carried out on them inside a 2PC
transaction which then gets ROLLBACK'ed, but if it's getting aborted,
then we are not interested in that data anyway. Also, now that we have
the "filter_decode_txn_cb_wrapper()" function, we will stop decoding
by the next change record cycle because of the abort.

So, I am not sure we need to track user_catalog_tables in
HeapTupleSatisfiesVacuum explicitly.

> Otherwise I'm really, really happy with how this is progressing and want to
> find time to play with it.

Yeah, I will do some more testing and add a few more test cases in the
test_decoding plugin. It might be handy to have a DELAY of a few
seconds after every change record is processed, for example. That way,
we can have a TAP test which does a few WAL activities and then
introduces a concurrent rollback from another session in the middle of
that delayed processing. So far I have done debugger-based testing of
this concurrent rollback functionality.

Another test (actually, functionality) that might come in handy, is to
have a way for DDL to be actually carried out on the subscriber. We
will need something like pglogical.replicate_ddl_command to be added
to the core for this to work. We can add this functionality as a
follow-on separate patch after discussing how we want to implement
that in core.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-05 09:26:03
Message-ID: CAMsr+YE5CbCFacsv8iO714Pk6E=jJYz2pdQEk0tKx5YJLTQBxQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 5 December 2017 at 16:00, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
wrote:

>
> We disallow rewrites on user_catalog_tables, so they cannot change
> underneath. Yes, DML can be carried out on them inside a 2PC
> transaction which then gets ROLLBACK'ed. But if it's getting aborted,
> then we are not interested in that data anyways. Also, now that we
> have the "filter_decode_txn_cb_wrapper()" function, we will stop
> decoding by the next change record cycle because of the abort.
>
> So, I am not sure if we need to track user_catalog_tables in
> HeapTupleSatisfiesVacuum explicitly.
>

I guess it's down to whether, when we're decoding a txn that just got
concurrently aborted, the output plugin might do anything with its user
catalogs that could cause a crash.

Output plugins are most likely to be using the genam (or even SPI, I
guess?) to read user catalogs during logical decoding. Logical decoding
itself does not rely on the correctness of user catalogs in any way;
it's only a concern for output plugin callbacks.

It may make sense to kick this one down the road at this point; I can't
conclusively see where it'd cause an actual problem.

>
> > Otherwise I'm really, really happy with how this is progressing and want
> to
> > find time to play with it.
>
> Yeah, I will do some more testing and add a few more test cases in the
> test_decoding plugin. It might be handy to have a DELAY of a few
> seconds after every change record processing, for example. That ways,
> we can have a TAP test which can do a few WAL activities and then we
> introduce a concurrent rollback midways from another session in the
> middle of that delayed processing. I have done debugger based testing
> of this concurrent rollback functionality as of now.
>
>
Sounds good.

> Another test (actually, functionality) that might come in handy, is to
> have a way for DDL to be actually carried out on the subscriber. We
> will need something like pglogical.replicate_ddl_command to be added
> to the core for this to work. We can add this functionality as a
> follow-on separate patch after discussing how we want to implement
> that in core.

Yeah, definitely a different patch, but assuredly valuable.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-07 13:31:44
Message-ID: 45525c30-4002-861e-a82c-3b95b19d1514@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 12/4/17 10:15, Nikhil Sontakke wrote:
> PFA, latest patch for this functionality.

This probably needs documentation updates for the logical decoding chapter.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-07 19:45:53
Message-ID: f351fc25-7b4d-3bf3-0398-d8a55bb3bd87@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 12/7/17 08:31, Peter Eisentraut wrote:
> On 12/4/17 10:15, Nikhil Sontakke wrote:
>> PFA, latest patch for this functionality.
>
> This probably needs documentation updates for the logical decoding chapter.

You need the attached patch to be able to compile without warnings.

Also, the regression tests crash randomly for me at

frame #4: 0x000000010a6febdb
postgres`heap_prune_record_prunable(prstate=0x00007ffee5578990, xid=0)
at pruneheap.c:625
622 * This should exactly match the PageSetPrunable macro. We can't store
623 * directly into the page header yet, so we update working state.
624 */
-> 625 Assert(TransactionIdIsNormal(xid));
626 if (!TransactionIdIsValid(prstate->new_prune_xid) ||
627 TransactionIdPrecedes(xid, prstate->new_prune_xid))
628 prstate->new_prune_xid = xid;

Did you build with --enable-cassert?

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment Content-Type Size
0001-fixup-Original-patch.patch text/plain 1.2 KB

From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-12 12:04:31
Message-ID: CAMGcDxc0+17=5jZ7CXPpZFT9ugwLNXzPcht2J9EbaR-dXrDXCQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

Thanks for the warning fix, I will also look at the cassert case soon.

I have been adding more test cases to this patch. I added a TAP test
which now allows us to do a concurrent ROLLBACK PREPARED when the
walsender is in the midst of decoding this very prepared transaction.

Have added a "decode-delay" parameter to test_decoding via which each
apply call sleeps for a configurable number of seconds, allowing us
to have a deterministic rollback in parallel. This logic seems to work
ok.

However, I am battling an issue with invalidations now. Consider the
below test case:

CREATE TABLE test_prepared1(id integer primary key);
-- test prepared xact containing ddl
BEGIN; INSERT INTO test_prepared1 VALUES (5);
ALTER TABLE test_prepared1 ADD COLUMN data text;
INSERT INTO test_prepared1 VALUES (6, 'frakbar');
PREPARE TRANSACTION 'test_prepared#3';
COMMIT PREPARED 'test_prepared#3';
SELECT data FROM pg_logical_slot_get_changes(..) <-- this shows the
2PC being decoded appropriately
-- make sure stuff still works
INSERT INTO test_prepared1 VALUES (8);
SELECT data FROM pg_logical_slot_get_changes(..)

The last pg_logical_slot_get_changes call, shows:

table public.test_prepared1: INSERT: id[integer]:8

whereas since the 2PC committed, it should have shown:

table public.test_prepared1: INSERT: id[integer]:8 data[text]:null

This is an issue because of the way we are handling invalidations. We
don't allow ReorderBufferAddInvalidations() at COMMIT PREPARED time
since we assume that handling them at PREPARE time is enough.
Apparently, it's not enough. I am trying to allow invalidations at
COMMIT PREPARED time as well, but maybe calling
ReorderBufferAddInvalidations() blindly again is not a good idea.
Also, if I do that, then I get some restart_lsn inconsistencies
which cause subsequent pg_logical_slot_get_changes() calls to
re-decode older records. I continue to investigate.

I am attaching the latest WIP patch. This contains the additional TAP
test changes.

Regards,
Nikhils

On 8 December 2017 at 01:15, Peter Eisentraut
<peter(dot)eisentraut(at)2ndquadrant(dot)com> wrote:
> On 12/7/17 08:31, Peter Eisentraut wrote:
>> On 12/4/17 10:15, Nikhil Sontakke wrote:
>>> PFA, latest patch for this functionality.
>>
>> This probably needs documentation updates for the logical decoding chapter.
>
> You need the attached patch to be able to compile without warnings.
>
> Also, the regression tests crash randomly for me at
>
> frame #4: 0x000000010a6febdb
> postgres`heap_prune_record_prunable(prstate=0x00007ffee5578990, xid=0)
> at pruneheap.c:625
> 622 * This should exactly match the PageSetPrunable macro. We
> can't store
> 623 * directly into the page header yet, so we update working state.
> 624 */
> -> 625 Assert(TransactionIdIsNormal(xid));
> 626 if (!TransactionIdIsValid(prstate->new_prune_xid) ||
> 627 TransactionIdPrecedes(xid, prstate->new_prune_xid))
> 628 prstate->new_prune_xid = xid;
>
> Did you build with --enable-cassert?
>
> --
> Peter Eisentraut http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
2pc_logical_12_12_17_wip.patch application/octet-stream 96.2 KB

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-12 12:37:10
Message-ID: CANP8+jKwhA9cofuUPP_OxEgy_2k3-LHBxw9E57MnBiS1U-3xVw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 12 December 2017 at 12:04, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com> wrote:

> This is an issue because of the way we are handling invalidations. We
> don't allow ReorderBufferAddInvalidations() at COMMIT PREPARE time
> since we assume that handling them at PREPARE time is enough.
> Apparently, it's not enough.

Not sure what that means.

I think we would need to fire invalidations at COMMIT PREPARED, yet
logically decode them at PREPARE.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-19 08:37:41
Message-ID: CAMGcDxca0zLfpMWTUV+1QT2NjZw+w2Zfzc2NKGfOH41UwFjQmw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

> I think we would need to fire invalidations at COMMIT PREPARED, yet
> logically decode them at PREPARE.
>
Yes, we need the invalidations in order to logically decode at PREPARE, and
then we need the invalidations to be executed at COMMIT PREPARED time as well.

DecodeCommit() needs to know, when it's processing a COMMIT PREPARED,
whether this transaction was decoded at PREPARE time. The main issue is
that we cannot expect the ReorderBufferTXN structure which was created
at PREPARE time to be around until COMMIT PREPARED gets called. The
earlier patch was not cleaning this structure up at PREPARE and was
adding an is_prepared flag to it so that COMMIT PREPARED knew that it
was decoded at PREPARE time. That structure may well not be
around when you restart between PREPARE and COMMIT PREPARED, for
example.

So now, the onus is on the prepare filter callback to always tell us
whether a given transaction was decoded at PREPARE time or not.
We now hand over the ReorderBufferTXN structure (it can be NULL), the
xid and the gid, and the prepare filter tells us what to do. Always. The
is_prepared flag can be cached in the txn structure to aid in
re-lookups, but if it's not set, the filter can do an xid lookup, gid
inspection and other shenanigans to give us the same answer on every
invocation.

Because of the above, we can safely clean up the ReorderBufferTXN at
PREPARE time and it need not hang around until COMMIT PREPARED gets
called, which is a good win in terms of resource management.

My test cases pass (including the scenario described earlier) with the
above code changes in place.

I have also added crash testing related TAP test cases, they uncovered
a bug in the prepare redo restart code path which I fixed. I believe
this patch is in very stable state now. Multiple runs of the crash TAP
test pass without issues. Multiple runs of "make check-world" with
cassert enabled also pass without issues.

Note that this patch does not contain the HeapTupleSatisfiesVacuum
changes. I believe we need changes to HeapTupleSatisfiesVacuum given
that logical decoding changes the assumption that catalog tuples
belonging to a transaction which never committed can be reclaimed
immediately. With 2PC logical decoding or streaming logical decoding,
we can always have a split time window in which the ongoing decode
cycle needs those tuples. The solution is that even for aborted
transactions, we do not return HEAPTUPLE_DEAD if the transaction id is
newer than OldestXmin (the same logic we use for deleted tuples of
committed transactions). We can do this only for catalog table rows
(both system and user defined) to limit the scope of impact. In any
case, this needs to be a separate patch along with a separate
discussion thread.

Peter, I will submit a follow-on patch with documentation changes
soon. But this patch is complete IMO, with all the required 2PC
logical decoding functionality.

Comments, feedback is most welcome.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
2pc_logical_19_12_17_without_docs.patch application/octet-stream 103.8 KB

From: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-19 19:52:00
Message-ID: dec39ab4-cc36-e01c-7a00-0087fe8644da@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 12/19/17 03:37, Nikhil Sontakke wrote:
> Note that this patch does not contain the HeapTupleSatisfiesVacuum
> changes. I believe we need changes to HeapTupleSatisfiesVacuum given
> than logical decoding changes the assumption that catalog tuples
> belonging to a transaction which never committed can be reclaimed
> immediately. With 2PC logical decoding or streaming logical decoding,
> we can always have a split time window in which the ongoing decode
> cycle needs those tuples. The solution is that even for aborted
> transactions, we do not return HEAPTUPLE_DEAD if the transaction id is
> newer than the OldestXmin (same logic we use for deleted tuples of
> committed transactions). We can do this only for catalog table rows
> (both system and user defined) to limit the scope of impact. In any
> case, this needs to be a separate patch along with a separate
> discussion thread.

Are you working on that as well?

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-20 04:12:08
Message-ID: CAMGcDxfxEodz8+THwaXtjpW5WuBpeYp1t-UT9Wa-NfNuWaBVGA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>> Note that this patch does not contain the HeapTupleSatisfiesVacuum
>> changes. I believe we need changes to HeapTupleSatisfiesVacuum given
>> than logical decoding changes the assumption that catalog tuples
>> belonging to a transaction which never committed can be reclaimed
>> immediately. With 2PC logical decoding or streaming logical decoding,
>> we can always have a split time window in which the ongoing decode
>> cycle needs those tuples. The solution is that even for aborted
>> transactions, we do not return HEAPTUPLE_DEAD if the transaction id is
>> newer than the OldestXmin (same logic we use for deleted tuples of
>> committed transactions). We can do this only for catalog table rows
>> (both system and user defined) to limit the scope of impact. In any
>> case, this needs to be a separate patch along with a separate
>> discussion thread.
>
> Are you working on that as well?

Sure, I was planning to work on that after getting the documentation
for this patch out of the way.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2017-12-22 13:57:23
Message-ID: CAMGcDxdDqZHyi=WJcOZUZ3J-oY6cKvWhHJ2dS+4qex8xA8Jz6A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

>>
>> Are you working on that as well?
>
> Sure, I was planning to work on that after getting the documentation
> for this patch out of the way.
>

PFA, patch with documentation. Have added requisite entries in the
logical decoding output plugins section. No changes are needed
elsewhere, AFAICS.

I will submit the HeapTupleSatisfiesVacuum patch on a separate
discussion, soon.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
2pc_logical_22_12_17.patch application/octet-stream 111.4 KB

From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-01-29 18:35:33
Message-ID: CAMGcDxdDCQipHyBeqT6crJE4WpFKT9LxV1TY2151Jy9udnMLFQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

>
> PFA, patch with documentation. Have added requisite entries in the
> logical decoding output plugins section. No changes are needed
> elsewhere, AFAICS.
>

PFA, a patch which applies cleanly against the latest git head. I also
removed unwanted newlines and took care of the cleanup TODO about
making the ReorderBufferTXN structure use a txn_flags field instead of
separate booleans for various statuses like has_catalog_changes,
is_subxact, is_serialized etc. The patch uses this txn_flags field for
the newer prepare-related info as well.

"make check-world" passes ok, including the additional regular and tap
tests that we have added as part of this patch.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
2pc_logical_29_01_18.patch application/octet-stream 117.1 KB

From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-02-06 12:20:40
Message-ID: CAMGcDxfc0yWmdkn28UBkyEjdCWqXmNgS2v=cjOgWD39mvvoY+w@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi all,

>
> PFA, patch which applies cleanly against latest git head. I also
> removed unwanted newlines and took care of the cleanup TODO about
> making ReorderBufferTXN structure using a txn_flags field instead of
> separate booleans for various statuses like has_catalog_changes,
> is_subxact, is_serialized etc. The patch uses this txn_flags field for
> the newer prepare related info as well.
>
> "make check-world" passes ok, including the additional regular and tap
> tests that we have added as part of this patch.
>

PFA, latest version of this patch.

This latest version takes care of the abort-while-decoding issue along
with additional test cases and documentation changes.

We now maintain a list of processes that are decoding a specific
transaction ID, making each of them a decode groupmember of a decode
groupleader process. The decode groupleader process is basically the
PGPROC entry which points to the prepared 2PC transaction or an
ongoing regular transaction.

If the 2PC transaction is rolled back, then FinishPreparedTransactions
uses the decode groupleader process to let all the decode groupmember
processes know that it's aborting. Similar logic can be used for the
decoding of uncommitted transactions, and the decode groupmember
processes are able to abort sanely in such a case. We also have two
new APIs, "LogicalLockTransaction" and "LogicalUnlockTransaction",
that the decoding backends need to use while accessing system or user
catalog tables. The abort code interlocks with decoding backends that
might be in the process of accessing catalog tables and waits for
those few moments before aborting the transaction.

The implementation uses LockHashPartitionLockByProc on the decode
groupleader process to control access to these additional fields in
the PGPROC structure amongst the decode groupleader and the other
decode groupmember processes, and does not need to use
ProcArrayLock at all. The implementation is inspired by the
*existing* lockGroupLeader solution, which uses a similar technique to
track processes waiting on a leader holding that lock. I believe it's
an optimal solution for this problem of ours.

Have added TAP tests to test multiple decoding backends working on the
same transaction. Used delays in the test_decoding plugin to introduce
waits after making the LogicalLockTransaction call, and called
ROLLBACK to ensure that it interlocks with decoding backends
which are doing catalog access. Tests work as desired. Also "make
check-world" passes with asserts enabled.

I will post this same explanation about abort handling on the other
thread (http://www.postgresql-archive.org/Logical-Decoding-and-HeapTupleSatisfiesVacuum-assumptions-td5998294.html).

Comments appreciated.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
2pc_logical_with_abort_handling_06_02_18.patch application/octet-stream 139.1 KB

From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-02-08 12:43:23
Message-ID: 11AEF68F-14A2-4662-AAEE-87422ADA3864@postgrespro.ru
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi!

Thanks for working on this patch.

Reading through the patch I’ve noticed that you deleted the call to
SnapBuildCommitTxn() in DecodePrepare(). As you correctly spotted upthread,
there was unnecessary code that marked the transaction as running after
decoding of the prepare. However, the call marking it as committed before
decoding of the prepare IMHO is still needed, as SnapBuildCommitTxn does
some useful things like setting the base snapshot for parent transactions
which were skipped because of SnapBuildXactNeedsSkip().

E.g. the current code will crash on an assert for the following transaction:

BEGIN;
SAVEPOINT one;
CREATE TABLE test_prepared_savepoints (a int);
PREPARE TRANSACTION 'x';
COMMIT PREPARED 'x';
:get_with2pc_nofilter
:get_with2pc_nofilter <- second call will crash decoder

With following backtrace:

frame #3: 0x000000010dc47b40 postgres`ExceptionalCondition(conditionName="!(txn->ninvalidations == 0)", errorType="FailedAssertion", fileName="reorderbuffer.c", lineNumber=1944) at assert.c:54
frame #4: 0x000000010d9ff4dc postgres`ReorderBufferForget(rb=0x00007fe1ab832318, xid=816, lsn=35096144) at reorderbuffer.c:1944
frame #5: 0x000000010d9f055c postgres`DecodePrepare(ctx=0x00007fe1ab81b918, buf=0x00007ffee2650408, parsed=0x00007ffee2650088) at decode.c:703
frame #6: 0x000000010d9ef718 postgres`DecodeXactOp(ctx=0x00007fe1ab81b918, buf=0x00007ffee2650408) at decode.c:310

That can be fixed by calling SnapBuildCommitTxn() in DecodePrepare(),
which I believe is safe because during normal operation a prepared
transaction holds relation locks until commit/abort, and in between
nobody can access the altered relations (or at least I don’t know of
such situations; that was the reason why I had marked those xids as
running in previous versions).

> On 6 Feb 2018, at 15:20, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com> wrote:
>
> Hi all,
>
>>
>> PFA, patch which applies cleanly against latest git head. I also
>> removed unwanted newlines and took care of the cleanup TODO about
>> making ReorderBufferTXN structure using a txn_flags field instead of
>> separate booleans for various statuses like has_catalog_changes,
>> is_subxact, is_serialized etc. The patch uses this txn_flags field for
>> the newer prepare related info as well.
>>
>> "make check-world" passes ok, including the additional regular and tap
>> tests that we have added as part of this patch.
>>
>
> PFA, latest version of this patch.
>
> This latest version takes care of the abort-while-decoding issue along
> with additional test cases and documentation changes.
>
>

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-02-09 12:58:19
Message-ID: CAMGcDxeSzhMmHFWPN2DdRYQ6m_BWwQ0kB2dZp1KfKYe-59VF-A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi Stas,

> Reading through patch I’ve noticed that you deleted call to SnapBuildCommitTxn()
> in DecodePrepare(). As you correctly spotted upthread there was unnecessary
> code that marked transaction as running after decoding of prepare. However call
> marking it as committed before decoding of prepare IMHO is still needed as
> SnapBuildCommitTxn does some useful thing like setting base snapshot for parent
> transactions which were skipped because of SnapBuildXactNeedsSkip().
>
> E.g. current code will crash in assert for following transaction:
>
> BEGIN;
> SAVEPOINT one;
> CREATE TABLE test_prepared_savepoints (a int);
> PREPARE TRANSACTION 'x';
> COMMIT PREPARED 'x';
> :get_with2pc_nofilter
> :get_with2pc_nofilter <- second call will crash decoder
>

Thanks for taking a look!

The first ":get_with2pc_nofilter" call consumes the data appropriately.

The second ":get_with2pc_nofilter" call sees that it has to skip and hence
enters the ReorderBufferForget() function in the skip code path,
causing the assert. My query is: if we have to skip anyway, why do we
need to set up SnapBuildCommitTxn() for such a transaction? I don't
see the need for doing that for skipped transactions.

Will continue to look at this and will add this scenario to the test
cases. Further comments/feedback appreciated.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-02-09 21:10:25
Message-ID: 20180209211025.d7jxh43fhqnevhji@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

First off: This patch has way too many different types of changes as
part of one huge commit. This needs to be split into several
pieces. First the cleanups (e.g. the fields -> flag changes), then the
individual infrastructure pieces (like the twophase.c changes, best
split into several pieces as well, the locking stuff), then the main
feature, then support for it in the output plugin. Each should have an
individual explanation about why the change is necessary and not a bad
idea.

On 2018-02-06 17:50:40 +0530, Nikhil Sontakke wrote:
> @@ -46,6 +48,9 @@ typedef struct
> bool skip_empty_xacts;
> bool xact_wrote_changes;
> bool only_local;
> + bool twophase_decoding;
> + bool twophase_decode_with_catalog_changes;
> + int decode_delay; /* seconds to sleep after every change record */

This seems too big a crock to add just for testing. It'll also make the
testing timing dependent...

> } TestDecodingData;

> void
> _PG_init(void)
> @@ -85,9 +106,15 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
> cb->begin_cb = pg_decode_begin_txn;
> cb->change_cb = pg_decode_change;
> cb->commit_cb = pg_decode_commit_txn;
> + cb->abort_cb = pg_decode_abort_txn;

> cb->filter_by_origin_cb = pg_decode_filter;
> cb->shutdown_cb = pg_decode_shutdown;
> cb->message_cb = pg_decode_message;
> + cb->filter_prepare_cb = pg_filter_prepare;
> + cb->filter_decode_txn_cb = pg_filter_decode_txn;
> + cb->prepare_cb = pg_decode_prepare_txn;
> + cb->commit_prepared_cb = pg_decode_commit_prepared_txn;
> + cb->abort_prepared_cb = pg_decode_abort_prepared_txn;
> }

Why does this introduce both abort_cb and abort_prepared_cb? That seems
to conflate two separate features.

> +/* Filter out unnecessary two-phase transactions */
> +static bool
> +pg_filter_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
> + TransactionId xid, const char *gid)
> +{
> + TestDecodingData *data = ctx->output_plugin_private;
> +
> + /* treat all transactions as one-phase */
> + if (!data->twophase_decoding)
> + return true;
> +
> + if (txn && txn_has_catalog_changes(txn) &&
> + !data->twophase_decode_with_catalog_changes)
> + return true;

What? I'm INCREDIBLY doubtful this is a sane thing to expose to output
plugins. As in, unless I hear a very very convincing reason I'm strongly
opposed.

> +/*
> + * Check if we should continue to decode this transaction.
> + *
> + * If it has aborted in the meanwhile, then there's no sense
> + * in decoding and sending the rest of the changes, we might
> + * as well ask the subscribers to abort immediately.
> + *
> + * This should be called if we are streaming a transaction
> + * before it's committed or if we are decoding a 2PC
> + * transaction. Otherwise we always decode committed
> + * transactions
> + *
> + * Additional checks can be added here, as needed
> + */
> +static bool
> +pg_filter_decode_txn(LogicalDecodingContext *ctx,
> + ReorderBufferTXN *txn)
> +{
> + /*
> + * Due to caching, repeated TransactionIdDidAbort calls
> + * shouldn't be that expensive
> + */
> + if (txn != NULL &&
> + TransactionIdIsValid(txn->xid) &&
> + TransactionIdDidAbort(txn->xid))
> + return true;
> +
> + /* if txn is NULL, filter it out */

Why can this be NULL?

> + return (txn != NULL)? false:true;
> +}

This definitely shouldn't be a task for each output plugin. Even if we
want to make this configurable, I'm doubtful that it's a good idea to do
so here - make its much less likely to hit edge cases.

> static bool
> pg_decode_filter(LogicalDecodingContext *ctx,
> RepOriginId origin_id)
> @@ -409,8 +622,18 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
> }
> data->xact_wrote_changes = true;
>
> + if (!LogicalLockTransaction(txn))
> + return;

It really really can't be right that this is exposed to output plugins.

> + /* if decode_delay is specified, sleep with above lock held */
> + if (data->decode_delay > 0)
> + {
> + elog(LOG, "sleeping for %d seconds", data->decode_delay);
> + pg_usleep(data->decode_delay * 1000000L);
> + }

Really not on board.

> @@ -1075,6 +1077,21 @@ EndPrepare(GlobalTransaction gxact)
> Assert(hdr->magic == TWOPHASE_MAGIC);
> hdr->total_len = records.total_len + sizeof(pg_crc32c);
>
> + replorigin = (replorigin_session_origin != InvalidRepOriginId &&
> + replorigin_session_origin != DoNotReplicateId);
> +
> + if (replorigin)
> + {
> + Assert(replorigin_session_origin_lsn != InvalidXLogRecPtr);
> + hdr->origin_lsn = replorigin_session_origin_lsn;
> + hdr->origin_timestamp = replorigin_session_origin_timestamp;
> + }
> + else
> + {
> + hdr->origin_lsn = InvalidXLogRecPtr;
> + hdr->origin_timestamp = 0;
> + }
> +
> /*
> * If the data size exceeds MaxAllocSize, we won't be able to read it in
> * ReadTwoPhaseFile. Check for that now, rather than fail in the case
> @@ -1107,7 +1124,16 @@ EndPrepare(GlobalTransaction gxact)
> XLogBeginInsert();
> for (record = records.head; record != NULL; record = record->next)
> XLogRegisterData(record->data, record->len);
> +
> + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
> +

Can we perhaps merge a bit of the code with the plain commit path on
this?

> gxact->prepare_end_lsn = XLogInsert(RM_XACT_ID, XLOG_XACT_PREPARE);
> +
> + if (replorigin)
> + /* Move LSNs forward for this replication origin */
> + replorigin_session_advance(replorigin_session_origin_lsn,
> + gxact->prepare_end_lsn);
> +

Why is it ok to do this at PREPARE time? I guess the theory is that the
origin LSN is going to be from the sources PREPARE too? If so, this
needs to be commented upon here.

> +/*
> + * ParsePrepareRecord
> + */
> +void
> +ParsePrepareRecord(uint8 info, char *xlrec, xl_xact_parsed_prepare *parsed)
> +{
> + TwoPhaseFileHeader *hdr;
> + char *bufptr;
> +
> + hdr = (TwoPhaseFileHeader *) xlrec;
> + bufptr = xlrec + MAXALIGN(sizeof(TwoPhaseFileHeader));
> +
> + parsed->origin_lsn = hdr->origin_lsn;
> + parsed->origin_timestamp = hdr->origin_timestamp;
> + parsed->twophase_xid = hdr->xid;
> + parsed->dbId = hdr->database;
> + parsed->nsubxacts = hdr->nsubxacts;
> + parsed->ncommitrels = hdr->ncommitrels;
> + parsed->nabortrels = hdr->nabortrels;
> + parsed->nmsgs = hdr->ninvalmsgs;
> +
> + strncpy(parsed->twophase_gid, bufptr, hdr->gidlen);
> + bufptr += MAXALIGN(hdr->gidlen);
> +
> + parsed->subxacts = (TransactionId *) bufptr;
> + bufptr += MAXALIGN(hdr->nsubxacts * sizeof(TransactionId));
> +
> + parsed->commitrels = (RelFileNode *) bufptr;
> + bufptr += MAXALIGN(hdr->ncommitrels * sizeof(RelFileNode));
> +
> + parsed->abortrels = (RelFileNode *) bufptr;
> + bufptr += MAXALIGN(hdr->nabortrels * sizeof(RelFileNode));
> +
> + parsed->msgs = (SharedInvalidationMessage *) bufptr;
> + bufptr += MAXALIGN(hdr->ninvalmsgs * sizeof(SharedInvalidationMessage));
> +}

So this is now basically a commit record. I quite dislike duplicating
things this way. Can't we make commit records versatile enough to
represent this without problems?

> /*
> * Reads 2PC data from xlog. During checkpoint this data will be moved to
> @@ -1365,7 +1428,7 @@ StandbyTransactionIdIsPrepared(TransactionId xid)
> * FinishPreparedTransaction: execute COMMIT PREPARED or ROLLBACK PREPARED
> */
> void
> -FinishPreparedTransaction(const char *gid, bool isCommit)
> +FinishPreparedTransaction(const char *gid, bool isCommit, bool missing_ok)
> {
> GlobalTransaction gxact;
> PGPROC *proc;
> @@ -1386,8 +1449,20 @@ FinishPreparedTransaction(const char *gid, bool isCommit)
> /*
> * Validate the GID, and lock the GXACT to ensure that two backends do not
> * try to commit the same GID at once.
> + *
> + * During logical decoding, on the apply side, it's possible that a prepared
> + * transaction got aborted while decoding. In that case, we stop the
> + * decoding and abort the transaction immediately. However the ROLLBACK
> + * prepared processing still reaches the subscriber. In that case it's ok
> + * to have a missing gid
> */
> - gxact = LockGXact(gid, GetUserId());
> + gxact = LockGXact(gid, GetUserId(), missing_ok);
> + if (gxact == NULL)
> + {
> + Assert(missing_ok && !isCommit);
> + return;
> + }

I'm very doubtful it is sane to handle this at such a low level.

> @@ -2358,6 +2443,13 @@ PrepareRedoAdd(char *buf, XLogRecPtr start_lsn, XLogRecPtr end_lsn)
> Assert(TwoPhaseState->numPrepXacts < max_prepared_xacts);
> TwoPhaseState->prepXacts[TwoPhaseState->numPrepXacts++] = gxact;
>
> + if (origin_id != InvalidRepOriginId)
> + {
> + /* recover apply progress */
> + replorigin_advance(origin_id, hdr->origin_lsn, end_lsn,
> + false /* backward */ , false /* WAL */ );
> + }
> +

It's unclear to me why this is necessary / a good idea?

> case XLOG_XACT_PREPARE:
> + {
> + xl_xact_parsed_prepare parsed;
>
> - /*
> - * Currently decoding ignores PREPARE TRANSACTION and will just
> - * decode the transaction when the COMMIT PREPARED is sent or
> - * throw away the transaction's contents when a ROLLBACK PREPARED
> - * is received. In the future we could add code to expose prepared
> - * transactions in the changestream allowing for a kind of
> - * distributed 2PC.
> - */
> - ReorderBufferProcessXid(reorder, XLogRecGetXid(r), buf->origptr);
> + /* check that output plugin is capable of twophase decoding */
> + if (!ctx->enable_twophase)
> + {
> + ReorderBufferProcessXid(reorder, XLogRecGetXid(r), buf->origptr);
> + break;
> + }
> +
> + /* ok, parse it */
> + ParsePrepareRecord(XLogRecGetInfo(buf->record),
> + XLogRecGetData(buf->record), &parsed);
> +
> + /* does output plugin want this particular transaction? */
> + if (ctx->callbacks.filter_prepare_cb &&
> + ReorderBufferPrepareNeedSkip(reorder, parsed.twophase_xid,
> + parsed.twophase_gid))
> + {
> + ReorderBufferProcessXid(reorder, parsed.twophase_xid,
> + buf->origptr);

We're calling ReorderBufferProcessXid() on two different xids in
different branches, is that intentional?

> + if (TransactionIdIsValid(parsed->twophase_xid) &&
> + ReorderBufferTxnIsPrepared(ctx->reorder,
> + parsed->twophase_xid, parsed->twophase_gid))
> + {
> + Assert(xid == parsed->twophase_xid);
> + /* we are processing COMMIT PREPARED */
> + ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
> + commit_time, origin_id, origin_lsn, parsed->twophase_gid, true);
> + }
> + else
> + {
> + /* replay actions of all transaction + subtransactions in order */
> + ReorderBufferCommit(ctx->reorder, xid, buf->origptr, buf->endptr,
> + commit_time, origin_id, origin_lsn);
> + }
> +}

Why do we want this via the same routine?

> +bool
> +LogicalLockTransaction(ReorderBufferTXN *txn)
> +{
> + bool ok = false;
> +
> + /*
> + * Prepared transactions and uncommitted transactions
> + * that have modified catalogs need to interlock with
> + * concurrent rollback to ensure that there are no
> + * issues while decoding
> + */
> +
> + if (!txn_has_catalog_changes(txn))
> + return true;
> +
> + /*
> + * Is it a prepared txn? Similar checks for uncommitted
> + * transactions when we start supporting them
> + */
> + if (!txn_prepared(txn))
> + return true;
> +
> + /* check cached status */
> + if (txn_commit(txn))
> + return true;
> + if (txn_rollback(txn))
> + return false;
> +
> + /*
> + * Find the PROC that is handling this XID and add ourself as a
> + * decodeGroupMember
> + */
> + if (MyProc->decodeGroupLeader == NULL)
> + {
> + PGPROC *proc = BecomeDecodeGroupLeader(txn->xid, txn_prepared(txn));
> +
> + /*
> + * If decodeGroupLeader is NULL, then the only possibility
> + * is that the transaction completed and went away
> + */
> + if (proc == NULL)
> + {
> + Assert(!TransactionIdIsInProgress(txn->xid));
> + if (TransactionIdDidCommit(txn->xid))
> + {
> + txn->txn_flags |= TXN_COMMIT;
> + return true;
> + }
> + else
> + {
> + txn->txn_flags |= TXN_ROLLBACK;
> + return false;
> + }
> + }
> +
> + /* Add ourself as a decodeGroupMember */
> + if (!BecomeDecodeGroupMember(proc, proc->pid, txn_prepared(txn)))
> + {
> + Assert(!TransactionIdIsInProgress(txn->xid));
> + if (TransactionIdDidCommit(txn->xid))
> + {
> + txn->txn_flags |= TXN_COMMIT;
> + return true;
> + }
> + else
> + {
> + txn->txn_flags |= TXN_ROLLBACK;
> + return false;
> + }
> + }
> + }

Are we ok with this low-level lock / pgproc stuff happening outside of
procarray / lock related files? Where is the locking scheme documented?

> +/* ReorderBufferTXN flags */
> +#define TXN_HAS_CATALOG_CHANGES 0x0001
> +#define TXN_IS_SUBXACT 0x0002
> +#define TXN_SERIALIZED 0x0004
> +#define TXN_PREPARE 0x0008
> +#define TXN_COMMIT_PREPARED 0x0010
> +#define TXN_ROLLBACK_PREPARED 0x0020
> +#define TXN_COMMIT 0x0040
> +#define TXN_ROLLBACK 0x0080
> +
> +/* does the txn have catalog changes */
> +#define txn_has_catalog_changes(txn) (txn->txn_flags & TXN_HAS_CATALOG_CHANGES)
> +/* is the txn known as a subxact? */
> +#define txn_is_subxact(txn) (txn->txn_flags & TXN_IS_SUBXACT)
> +/*
> + * Has this transaction been spilled to disk? It's not always possible to
> + * deduce that fact by comparing nentries with nentries_mem, because e.g.
> + * subtransactions of a large transaction might get serialized together
> + * with the parent - if they're restored to memory they'd have
> + * nentries_mem == nentries.
> + */
> +#define txn_is_serialized(txn) (txn->txn_flags & TXN_SERIALIZED)
> +/* is this txn prepared? */
> +#define txn_prepared(txn) (txn->txn_flags & TXN_PREPARE)
> +/* was this prepared txn committed in the meanwhile? */
> +#define txn_commit_prepared(txn) (txn->txn_flags & TXN_COMMIT_PREPARED)
> +/* was this prepared txn aborted in the meanwhile? */
> +#define txn_rollback_prepared(txn) (txn->txn_flags & TXN_ROLLBACK_PREPARED)
> +/* was this txn committed in the meanwhile? */
> +#define txn_commit(txn) (txn->txn_flags & TXN_COMMIT)
> +/* was this prepared txn aborted in the meanwhile? */
> +#define txn_rollback(txn) (txn->txn_flags & TXN_ROLLBACK)
> +

These txn_* names seem too generic imo - fairly likely to conflict with
other pieces of code imo.

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-02-12 08:06:16
Message-ID: CAMGcDxc9p7envO8t+29j=NdQXHoba6P1btfU0d3Xeiz7zT-Mvw@mail.gmail.com
Lists: pgsql-hackers

Hi Andres,

> First off: This patch has way too many different types of changes as
> part of one huge commit. This needs to be split into several
> pieces. First the cleanups (e.g. the fields -> flag changes), then the
> individual infrastructure pieces (like the twophase.c changes, best
> split into several pieces as well, the locking stuff), then the main
> feature, then support for it in the output plugin. Each should have an
> individual explanation about why the change is necessary and not a bad
> idea.
>

Ok, I will break this patch into multiple logical pieces and re-submit.

>
> On 2018-02-06 17:50:40 +0530, Nikhil Sontakke wrote:
>> @@ -46,6 +48,9 @@ typedef struct
>> bool skip_empty_xacts;
>> bool xact_wrote_changes;
>> bool only_local;
>> + bool twophase_decoding;
>> + bool twophase_decode_with_catalog_changes;
>> + int decode_delay; /* seconds to sleep after every change record */
>
> This seems too big a crock to add just for testing. It'll also make the
> testing timing dependent...
>

The idea *was* to make testing timing dependent. We wanted to simulate
the case in which a rollback is issued by another backend while the
decoding is still ongoing, and the delay makes that scenario
reproducible in a test.

>> } TestDecodingData;
>
>> void
>> _PG_init(void)
>> @@ -85,9 +106,15 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
>> cb->begin_cb = pg_decode_begin_txn;
>> cb->change_cb = pg_decode_change;
>> cb->commit_cb = pg_decode_commit_txn;
>> + cb->abort_cb = pg_decode_abort_txn;
>
>> cb->filter_by_origin_cb = pg_decode_filter;
>> cb->shutdown_cb = pg_decode_shutdown;
>> cb->message_cb = pg_decode_message;
>> + cb->filter_prepare_cb = pg_filter_prepare;
>> + cb->filter_decode_txn_cb = pg_filter_decode_txn;
>> + cb->prepare_cb = pg_decode_prepare_txn;
>> + cb->commit_prepared_cb = pg_decode_commit_prepared_txn;
>> + cb->abort_prepared_cb = pg_decode_abort_prepared_txn;
>> }
>
> Why does this introduce both abort_cb and abort_prepared_cb? That seems
> to conflate two separate features.
>

Consider the case where we have a bunch of change records to apply for
a transaction. We send a "BEGIN" and then start decoding the change
records one by one. Now suppose a rollback is encountered while we are
still decoding. In that case it doesn't make sense to keep decoding and
sending the remaining change records, so we immediately send a regular
ABORT. We cannot send "ROLLBACK PREPARED" because the transaction was
never prepared on the subscriber. We still need the "ROLLBACK PREPARED"
callback for the case where a prepared transaction gets rolled back and
the rollback is encountered during normal WAL processing.

Please take a look at "contrib/test_decoding/t/001_twophase.pl" where
this test case is enacted.

>
>> +/* Filter out unnecessary two-phase transactions */
>> +static bool
>> +pg_filter_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
>> + TransactionId xid, const char *gid)
>> +{
>> + TestDecodingData *data = ctx->output_plugin_private;
>> +
>> + /* treat all transactions as one-phase */
>> + if (!data->twophase_decoding)
>> + return true;
>> +
>> + if (txn && txn_has_catalog_changes(txn) &&
>> + !data->twophase_decode_with_catalog_changes)
>> + return true;
>
> What? I'm INCREDIBLY doubtful this is a sane thing to expose to output
> plugins. As in, unless I hear a very very convincing reason I'm strongly
> opposed.
>

These bools are specific to the test_decoding plugin.

Again, these are useful for testing decoding in various scenarios:
with two-phase decoding enabled or disabled, with catalog changes
allowed or disallowed, and so on. Please take a look at
"contrib/test_decoding/sql/prepared.sql" for the various scenarios.

>
>> +/*
>> + * Check if we should continue to decode this transaction.
>> + *
>> + * If it has aborted in the meanwhile, then there's no sense
>> + * in decoding and sending the rest of the changes, we might
>> + * as well ask the subscribers to abort immediately.
>> + *
>> + * This should be called if we are streaming a transaction
>> + * before it's committed or if we are decoding a 2PC
>> + * transaction. Otherwise we always decode committed
>> + * transactions
>> + *
>> + * Additional checks can be added here, as needed
>> + */
>> +static bool
>> +pg_filter_decode_txn(LogicalDecodingContext *ctx,
>> + ReorderBufferTXN *txn)
>> +{
>> + /*
>> + * Due to caching, repeated TransactionIdDidAbort calls
>> + * shouldn't be that expensive
>> + */
>> + if (txn != NULL &&
>> + TransactionIdIsValid(txn->xid) &&
>> + TransactionIdDidAbort(txn->xid))
>> + return true;
>> +
>> + /* if txn is NULL, filter it out */
>
> Why can this be NULL?
>

Depending on parameters passed to the ReorderBufferTXNByXid()
function, the txn might be NULL in some cases, especially during
restarts.

>> + return (txn != NULL)? false:true;
>> +}
>
>
> This definitely shouldn't be a task for each output plugin. Even if we
> want to make this configurable, I'm doubtful that it's a good idea to do
> so here - make its much less likely to hit edge cases.
>

Agreed, I will try to add it to the core logical decoding handling.

>
>
>> static bool
>> pg_decode_filter(LogicalDecodingContext *ctx,
>> RepOriginId origin_id)
>> @@ -409,8 +622,18 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
>> }
>> data->xact_wrote_changes = true;
>>
>> + if (!LogicalLockTransaction(txn))
>> + return;
>
> It really really can't be right that this is exposed to output plugins.
>

This was discussed in the other thread
(http://www.postgresql-archive.org/Logical-Decoding-and-HeapTupleSatisfiesVacuum-assumptions-td5998294i20.html).
Any catalog access in any plugin needs to interlock with concurrent
aborts. This is only a problem if the transaction is prepared or not
yet committed; in the vast majority of cases this function will do
nothing at all.

>
>> + /* if decode_delay is specified, sleep with above lock held */
>> + if (data->decode_delay > 0)
>> + {
>> + elog(LOG, "sleeping for %d seconds", data->decode_delay);
>> + pg_usleep(data->decode_delay * 1000000L);
>> + }
>
> Really not on board.
>

Again, this is specific to the test_decoding plugin. We want to test
the interlocking code for concurrent abort handling, which needs to
wait for plugins holding the lock before allowing the rollback to go
ahead. Please take a look at "contrib/test_decoding/t/001_twophase.pl"
and the "Waiting for backends to abort" string.

>
>
>
>> @@ -1075,6 +1077,21 @@ EndPrepare(GlobalTransaction gxact)
>> Assert(hdr->magic == TWOPHASE_MAGIC);
>> hdr->total_len = records.total_len + sizeof(pg_crc32c);
>>
>> + replorigin = (replorigin_session_origin != InvalidRepOriginId &&
>> + replorigin_session_origin != DoNotReplicateId);
>> +
>> + if (replorigin)
>> + {
>> + Assert(replorigin_session_origin_lsn != InvalidXLogRecPtr);
>> + hdr->origin_lsn = replorigin_session_origin_lsn;
>> + hdr->origin_timestamp = replorigin_session_origin_timestamp;
>> + }
>> + else
>> + {
>> + hdr->origin_lsn = InvalidXLogRecPtr;
>> + hdr->origin_timestamp = 0;
>> + }
>> +
>> /*
>> * If the data size exceeds MaxAllocSize, we won't be able to read it in
>> * ReadTwoPhaseFile. Check for that now, rather than fail in the case
>> @@ -1107,7 +1124,16 @@ EndPrepare(GlobalTransaction gxact)
>> XLogBeginInsert();
>> for (record = records.head; record != NULL; record = record->next)
>> XLogRegisterData(record->data, record->len);
>> +
>> + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
>> +
>
> Can we perhaps merge a bit of the code with the plain commit path on
> this?
>

Given that PREPARE and ROLLBACK PREPARED handling is totally separate
from the regular commit code paths, wouldn't that be a little difficult?

>
>> gxact->prepare_end_lsn = XLogInsert(RM_XACT_ID, XLOG_XACT_PREPARE);
>> +
>> + if (replorigin)
>> + /* Move LSNs forward for this replication origin */
>> + replorigin_session_advance(replorigin_session_origin_lsn,
>> + gxact->prepare_end_lsn);
>> +
>
> Why is it ok to do this at PREPARE time? I guess the theory is that the
> origin LSN is going to be from the sources PREPARE too? If so, this
> needs to be commented upon here.
>

Ok, will add a comment.

>
>> +/*
>> + * ParsePrepareRecord
>> + */
>> +void
>> +ParsePrepareRecord(uint8 info, char *xlrec, xl_xact_parsed_prepare *parsed)
>> +{
>> + TwoPhaseFileHeader *hdr;
>> + char *bufptr;
>> +
>> + hdr = (TwoPhaseFileHeader *) xlrec;
>> + bufptr = xlrec + MAXALIGN(sizeof(TwoPhaseFileHeader));
>> +
>> + parsed->origin_lsn = hdr->origin_lsn;
>> + parsed->origin_timestamp = hdr->origin_timestamp;
>> + parsed->twophase_xid = hdr->xid;
>> + parsed->dbId = hdr->database;
>> + parsed->nsubxacts = hdr->nsubxacts;
>> + parsed->ncommitrels = hdr->ncommitrels;
>> + parsed->nabortrels = hdr->nabortrels;
>> + parsed->nmsgs = hdr->ninvalmsgs;
>> +
>> + strncpy(parsed->twophase_gid, bufptr, hdr->gidlen);
>> + bufptr += MAXALIGN(hdr->gidlen);
>> +
>> + parsed->subxacts = (TransactionId *) bufptr;
>> + bufptr += MAXALIGN(hdr->nsubxacts * sizeof(TransactionId));
>> +
>> + parsed->commitrels = (RelFileNode *) bufptr;
>> + bufptr += MAXALIGN(hdr->ncommitrels * sizeof(RelFileNode));
>> +
>> + parsed->abortrels = (RelFileNode *) bufptr;
>> + bufptr += MAXALIGN(hdr->nabortrels * sizeof(RelFileNode));
>> +
>> + parsed->msgs = (SharedInvalidationMessage *) bufptr;
>> + bufptr += MAXALIGN(hdr->ninvalmsgs * sizeof(SharedInvalidationMessage));
>> +}
>
> So this is now basically a commit record. I quite dislike duplicating
> things this way. Can't we make commit records versatile enough to
> represent this without problems?
>

Maybe we can. We have already re-used existing records for
XLOG_XACT_COMMIT_PREPARED and XLOG_XACT_ABORT_PREPARED. We can add a
flag to existing commit records to indicate that it's a PREPARE and
not a COMMIT.

>
>> /*
>> * Reads 2PC data from xlog. During checkpoint this data will be moved to
>> @@ -1365,7 +1428,7 @@ StandbyTransactionIdIsPrepared(TransactionId xid)
>> * FinishPreparedTransaction: execute COMMIT PREPARED or ROLLBACK PREPARED
>> */
>> void
>> -FinishPreparedTransaction(const char *gid, bool isCommit)
>> +FinishPreparedTransaction(const char *gid, bool isCommit, bool missing_ok)
>> {
>> GlobalTransaction gxact;
>> PGPROC *proc;
>> @@ -1386,8 +1449,20 @@ FinishPreparedTransaction(const char *gid, bool isCommit)
>> /*
>> * Validate the GID, and lock the GXACT to ensure that two backends do not
>> * try to commit the same GID at once.
>> + *
>> + * During logical decoding, on the apply side, it's possible that a prepared
>> + * transaction got aborted while decoding. In that case, we stop the
>> + * decoding and abort the transaction immediately. However the ROLLBACK
>> + * prepared processing still reaches the subscriber. In that case it's ok
>> + * to have a missing gid
>> */
>> - gxact = LockGXact(gid, GetUserId());
>> + gxact = LockGXact(gid, GetUserId(), missing_ok);
>> + if (gxact == NULL)
>> + {
>> + Assert(missing_ok && !isCommit);
>> + return;
>> + }
>
> I'm very doubtful it is sane to handle this at such a low level.
>

FinishPreparedTransaction() is called directly from ProcessUtility. If
not here, where else could we do this?

>
>> @@ -2358,6 +2443,13 @@ PrepareRedoAdd(char *buf, XLogRecPtr start_lsn, XLogRecPtr end_lsn)
>> Assert(TwoPhaseState->numPrepXacts < max_prepared_xacts);
>> TwoPhaseState->prepXacts[TwoPhaseState->numPrepXacts++] = gxact;
>>
>> + if (origin_id != InvalidRepOriginId)
>> + {
>> + /* recover apply progress */
>> + replorigin_advance(origin_id, hdr->origin_lsn, end_lsn,
>> + false /* backward */ , false /* WAL */ );
>> + }
>> +
>
> It's unclear to me why this is necessary / a good idea?
>

Keeping PREPARE handling as close to regular COMMIT handling seems
like a good idea, no?

>
>
>> case XLOG_XACT_PREPARE:
>> + {
>> + xl_xact_parsed_prepare parsed;
>>
>> - /*
>> - * Currently decoding ignores PREPARE TRANSACTION and will just
>> - * decode the transaction when the COMMIT PREPARED is sent or
>> - * throw away the transaction's contents when a ROLLBACK PREPARED
>> - * is received. In the future we could add code to expose prepared
>> - * transactions in the changestream allowing for a kind of
>> - * distributed 2PC.
>> - */
>> - ReorderBufferProcessXid(reorder, XLogRecGetXid(r), buf->origptr);
>> + /* check that output plugin is capable of twophase decoding */
>> + if (!ctx->enable_twophase)
>> + {
>> + ReorderBufferProcessXid(reorder, XLogRecGetXid(r), buf->origptr);
>> + break;
>> + }
>> +
>> + /* ok, parse it */
>> + ParsePrepareRecord(XLogRecGetInfo(buf->record),
>> + XLogRecGetData(buf->record), &parsed);
>> +
>> + /* does output plugin want this particular transaction? */
>> + if (ctx->callbacks.filter_prepare_cb &&
>> + ReorderBufferPrepareNeedSkip(reorder, parsed.twophase_xid,
>> + parsed.twophase_gid))
>> + {
>> + ReorderBufferProcessXid(reorder, parsed.twophase_xid,
>> + buf->origptr);
>
> We're calling ReorderBufferProcessXid() on two different xids in
> different branches, is that intentional?
>

I don't think that's intentional. Maybe Stas can also provide his views on this?

>> + if (TransactionIdIsValid(parsed->twophase_xid) &&
>> + ReorderBufferTxnIsPrepared(ctx->reorder,
>> + parsed->twophase_xid, parsed->twophase_gid))
>> + {
>> + Assert(xid == parsed->twophase_xid);
>> + /* we are processing COMMIT PREPARED */
>> + ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
>> + commit_time, origin_id, origin_lsn, parsed->twophase_gid, true);
>> + }
>> + else
>> + {
>> + /* replay actions of all transaction + subtransactions in order */
>> + ReorderBufferCommit(ctx->reorder, xid, buf->origptr, buf->endptr,
>> + commit_time, origin_id, origin_lsn);
>> + }
>> +}
>
> Why do we want this via the same routine?
>

As I mentioned above, xl_xact_parsed_commit handles both regular
commits and "COMMIT PREPARED", which is why one routine serves them
both.

>
>
>> +bool
>> +LogicalLockTransaction(ReorderBufferTXN *txn)
>> +{
>> + bool ok = false;
>> +
>> + /*
>> + * Prepared transactions and uncommitted transactions
>> + * that have modified catalogs need to interlock with
>> + * concurrent rollback to ensure that there are no
>> + * issues while decoding
>> + */
>> +
>> + if (!txn_has_catalog_changes(txn))
>> + return true;
>> +
>> + /*
>> + * Is it a prepared txn? Similar checks for uncommitted
>> + * transactions when we start supporting them
>> + */
>> + if (!txn_prepared(txn))
>> + return true;
>> +
>> + /* check cached status */
>> + if (txn_commit(txn))
>> + return true;
>> + if (txn_rollback(txn))
>> + return false;
>> +
>> + /*
>> + * Find the PROC that is handling this XID and add ourself as a
>> + * decodeGroupMember
>> + */
>> + if (MyProc->decodeGroupLeader == NULL)
>> + {
>> + PGPROC *proc = BecomeDecodeGroupLeader(txn->xid, txn_prepared(txn));
>> +
>> + /*
>> + * If decodeGroupLeader is NULL, then the only possibility
>> + * is that the transaction completed and went away
>> + */
>> + if (proc == NULL)
>> + {
>> + Assert(!TransactionIdIsInProgress(txn->xid));
>> + if (TransactionIdDidCommit(txn->xid))
>> + {
>> + txn->txn_flags |= TXN_COMMIT;
>> + return true;
>> + }
>> + else
>> + {
>> + txn->txn_flags |= TXN_ROLLBACK;
>> + return false;
>> + }
>> + }
>> +
>> + /* Add ourself as a decodeGroupMember */
>> + if (!BecomeDecodeGroupMember(proc, proc->pid, txn_prepared(txn)))
>> + {
>> + Assert(!TransactionIdIsInProgress(txn->xid));
>> + if (TransactionIdDidCommit(txn->xid))
>> + {
>> + txn->txn_flags |= TXN_COMMIT;
>> + return true;
>> + }
>> + else
>> + {
>> + txn->txn_flags |= TXN_ROLLBACK;
>> + return false;
>> + }
>> + }
>> + }
>
> Are we ok with this low-level lock / pgproc stuff happening outside of
> procarray / lock related files? Where is the locking scheme documented?
>

Some details are in src/include/storage/proc.h, where these fields have
been added.

This implementation is similar to the existing lockGroupLeader
implementation and uses the same locking mechanism,
LockHashPartitionLockByProc.

>
>
>> +/* ReorderBufferTXN flags */
>> +#define TXN_HAS_CATALOG_CHANGES 0x0001
>> +#define TXN_IS_SUBXACT 0x0002
>> +#define TXN_SERIALIZED 0x0004
>> +#define TXN_PREPARE 0x0008
>> +#define TXN_COMMIT_PREPARED 0x0010
>> +#define TXN_ROLLBACK_PREPARED 0x0020
>> +#define TXN_COMMIT 0x0040
>> +#define TXN_ROLLBACK 0x0080
>> +
>> +/* does the txn have catalog changes */
>> +#define txn_has_catalog_changes(txn) (txn->txn_flags & TXN_HAS_CATALOG_CHANGES)
>> +/* is the txn known as a subxact? */
>> +#define txn_is_subxact(txn) (txn->txn_flags & TXN_IS_SUBXACT)
>> +/*
>> + * Has this transaction been spilled to disk? It's not always possible to
>> + * deduce that fact by comparing nentries with nentries_mem, because e.g.
>> + * subtransactions of a large transaction might get serialized together
>> + * with the parent - if they're restored to memory they'd have
>> + * nentries_mem == nentries.
>> + */
>> +#define txn_is_serialized(txn) (txn->txn_flags & TXN_SERIALIZED)
>> +/* is this txn prepared? */
>> +#define txn_prepared(txn) (txn->txn_flags & TXN_PREPARE)
>> +/* was this prepared txn committed in the meanwhile? */
>> +#define txn_commit_prepared(txn) (txn->txn_flags & TXN_COMMIT_PREPARED)
>> +/* was this prepared txn aborted in the meanwhile? */
>> +#define txn_rollback_prepared(txn) (txn->txn_flags & TXN_ROLLBACK_PREPARED)
>> +/* was this txn committed in the meanwhile? */
>> +#define txn_commit(txn) (txn->txn_flags & TXN_COMMIT)
>> +/* was this prepared txn aborted in the meanwhile? */
>> +#define txn_rollback(txn) (txn->txn_flags & TXN_ROLLBACK)
>> +
>
> These txn_* names seem too generic imo - fairly likely to conflict with
> other pieces of code imo.
>

Happy to add the RB prefix to all of them for clarity. E.g.

/* ReorderBufferTXN flags */
#define RBTXN_HAS_CATALOG_CHANGES 0x0001

I will submit multiple patches with cleanups where needed as discussed
above soon.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-02-12 16:20:40
Message-ID: 20180212162040.rsnlnysmf7n4yo7i@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2018-02-12 13:36:16 +0530, Nikhil Sontakke wrote:
> Hi Andres,
>
> > First off: This patch has way too many different types of changes as
> > part of one huge commit. This needs to be split into several
> > pieces. First the cleanups (e.g. the fields -> flag changes), then the
> > individual infrastructure pieces (like the twophase.c changes, best
> > split into several pieces as well, the locking stuff), then the main
> > feature, then support for it in the output plugin. Each should have an
> > individual explanation about why the change is necessary and not a bad
> > idea.
> >
>
> Ok, I will break this patch into multiple logical pieces and re-submit.

Thanks.

> >
> > On 2018-02-06 17:50:40 +0530, Nikhil Sontakke wrote:
> >> @@ -46,6 +48,9 @@ typedef struct
> >> bool skip_empty_xacts;
> >> bool xact_wrote_changes;
> >> bool only_local;
> >> + bool twophase_decoding;
> >> + bool twophase_decode_with_catalog_changes;
> >> + int decode_delay; /* seconds to sleep after every change record */
> >
> > This seems too big a crock to add just for testing. It'll also make the
> > testing timing dependent...
> >
>
> The idea *was* to make testing timing dependent. We wanted to simulate
> the case when a rollback is issued by another backend while the
> decoding is still ongoing. This allows that test case to be tested.

What I mean is that this will be hell on the buildfarm because the
different animals are differently fast.

> >> } TestDecodingData;
> >
> >> void
> >> _PG_init(void)
> >> @@ -85,9 +106,15 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
> >> cb->begin_cb = pg_decode_begin_txn;
> >> cb->change_cb = pg_decode_change;
> >> cb->commit_cb = pg_decode_commit_txn;
> >> + cb->abort_cb = pg_decode_abort_txn;
> >
> >> cb->filter_by_origin_cb = pg_decode_filter;
> >> cb->shutdown_cb = pg_decode_shutdown;
> >> cb->message_cb = pg_decode_message;
> >> + cb->filter_prepare_cb = pg_filter_prepare;
> >> + cb->filter_decode_txn_cb = pg_filter_decode_txn;
> >> + cb->prepare_cb = pg_decode_prepare_txn;
> >> + cb->commit_prepared_cb = pg_decode_commit_prepared_txn;
> >> + cb->abort_prepared_cb = pg_decode_abort_prepared_txn;
> >> }
> >
> > Why does this introduce both abort_cb and abort_prepared_cb? That seems
> > to conflate two separate features.
> >
>
> Consider the case when we have a bunch of change records to apply for
> a transaction. We sent a "BEGIN" and then start decoding each change
> record one by one. Now a rollback was encountered while we were
> decoding.

This will be quite the mess once streaming of changes is introduced.

> >> +/* Filter out unnecessary two-phase transactions */
> >> +static bool
> >> +pg_filter_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
> >> + TransactionId xid, const char *gid)
> >> +{
> >> + TestDecodingData *data = ctx->output_plugin_private;
> >> +
> >> + /* treat all transactions as one-phase */
> >> + if (!data->twophase_decoding)
> >> + return true;
> >> +
> >> + if (txn && txn_has_catalog_changes(txn) &&
> >> + !data->twophase_decode_with_catalog_changes)
> >> + return true;
> >
> > What? I'm INCREDIBLY doubtful this is a sane thing to expose to output
> > plugins. As in, unless I hear a very very convincing reason I'm strongly
> > opposed.
> >
>
> These bools are specific to the test_decoding plugin.

txn_has_catalog_changes() definitely isn't just exposed to
test_decoding. I think you're making the output plugin interface
massively more complicated in this patch and I think we need to push
back on that.

> Again, these are useful in testing decoding in various scenarios with
> twophase decoding enabled/disabled. Testing decoding when catalog
> changes are allowed/disallowed etc. Please take a look at
> "contrib/test_decoding/sql/prepared.sql" for the various scenarios.

I don't see how that addresses my concern in any sort of way.

> >> +/*
> >> + * Check if we should continue to decode this transaction.
> >> + *
> >> + * If it has aborted in the meanwhile, then there's no sense
> >> + * in decoding and sending the rest of the changes, we might
> >> + * as well ask the subscribers to abort immediately.
> >> + *
> >> + * This should be called if we are streaming a transaction
> >> + * before it's committed or if we are decoding a 2PC
> >> + * transaction. Otherwise we always decode committed
> >> + * transactions
> >> + *
> >> + * Additional checks can be added here, as needed
> >> + */
> >> +static bool
> >> +pg_filter_decode_txn(LogicalDecodingContext *ctx,
> >> + ReorderBufferTXN *txn)
> >> +{
> >> + /*
> >> + * Due to caching, repeated TransactionIdDidAbort calls
> >> + * shouldn't be that expensive
> >> + */
> >> + if (txn != NULL &&
> >> + TransactionIdIsValid(txn->xid) &&
> >> + TransactionIdDidAbort(txn->xid))
> >> + return true;
> >> +
> >> + /* if txn is NULL, filter it out */
> >
> > Why can this be NULL?
> >
>
> Depending on parameters passed to the ReorderBufferTXNByXid()
> function, the txn might be NULL in some cases, especially during
> restarts.

That a) isn't an explanation of why that's ok, and b) doesn't explain
why this ever needs to be exposed to the output plugin.

> >> static bool
> >> pg_decode_filter(LogicalDecodingContext *ctx,
> >> RepOriginId origin_id)
> >> @@ -409,8 +622,18 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
> >> }
> >> data->xact_wrote_changes = true;
> >>
> >> + if (!LogicalLockTransaction(txn))
> >> + return;
> >
> > It really really can't be right that this is exposed to output plugins.
> >
>
> This was discussed in the other thread
> (http://www.postgresql-archive.org/Logical-Decoding-and-HeapTupleSatisfiesVacuum-assumptions-td5998294i20.html).
> Any catalog access in any plugin needs to interlock with concurrent
> aborts. This is only a problem if the transaction is a prepared one or
> a yet-uncommitted one. In the majority of cases, this function will do
> nothing at all.

That doesn't address at all that it's not ok that the output plugin
needs to handle this. Doing this in output plugins, the majority of
which are external projects, means that a) the work needs to be done
many times. b) we can't simply adjust the relevant code in a minor
release, because every output plugin needs to be changed.

> >
> >> + /* if decode_delay is specified, sleep with above lock held */
> >> + if (data->decode_delay > 0)
> >> + {
> >> + elog(LOG, "sleeping for %d seconds", data->decode_delay);
> >> + pg_usleep(data->decode_delay * 1000000L);
> >> + }
> >
> > Really not on board.
> >
>
> Again, specific to test_decoding plugin.

Again, this is not a justification. People look at the code to write
output plugins. Also see my above complaint about this going to be hell
to get right on slow buildfarm members - we're going to crank up the
sleep times to make it robust-ish.

> >> + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
> >> +
> >
> > Can we perhaps merge a bit of the code with the plain commit path on
> > this?
> >
>
> Given that PREPARE ROLLBACK handling is totally separate from the
> regular commit code paths, wouldn't it be a little difficult?

Why? A helper function doing so ought to be doable.

> >> @@ -1386,8 +1449,20 @@ FinishPreparedTransaction(const char *gid, bool isCommit)
> >> /*
> >> * Validate the GID, and lock the GXACT to ensure that two backends do not
> >> * try to commit the same GID at once.
> >> + *
> >> + * During logical decoding, on the apply side, it's possible that a prepared
> >> + * transaction got aborted while decoding. In that case, we stop the
> >> + * decoding and abort the transaction immediately. However the ROLLBACK
> >> + * prepared processing still reaches the subscriber. In that case it's ok
> >> + * to have a missing gid
> >> */
> >> - gxact = LockGXact(gid, GetUserId());
> >> + gxact = LockGXact(gid, GetUserId(), missing_ok);
> >> + if (gxact == NULL)
> >> + {
> >> + Assert(missing_ok && !isCommit);
> >> + return;
> >> + }
> >
> > I'm very doubtful it is sane to handle this at such a low level.
> >
>
> FinishPreparedTransaction() is called directly from ProcessUtility. If
> not here, where else could we do this?

I don't think this is something that ought to be handled at this layer
at all. You should get an error in that case, the replay logic needs to
handle that, not the low level 2pc code.

> >> @@ -2358,6 +2443,13 @@ PrepareRedoAdd(char *buf, XLogRecPtr start_lsn, XLogRecPtr end_lsn)
> >> Assert(TwoPhaseState->numPrepXacts < max_prepared_xacts);
> >> TwoPhaseState->prepXacts[TwoPhaseState->numPrepXacts++] = gxact;
> >>
> >> + if (origin_id != InvalidRepOriginId)
> >> + {
> >> + /* recover apply progress */
> >> + replorigin_advance(origin_id, hdr->origin_lsn, end_lsn,
> >> + false /* backward */ , false /* WAL */ );
> >> + }
> >> +
> >
> > It's unclear to me why this is necessary / a good idea?
> >
>
> Keeping PREPARE handling as close to regular COMMIT handling seems
> like a good idea, no?

But this code *means* something? Explain to me why it's a good idea to
advance, or don't do it.

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-02-28 15:42:42
Message-ID: CAMGcDxeViP+R-OL7QhzUV9eKCVjURobuY1Zijik4Ay_Ddwo4Cg@mail.gmail.com
Lists: pgsql-hackers

Hi Andres,

>> > First off: This patch has way too many different types of changes as
>> > part of one huge commit. This needs to be split into several
>> > pieces. First the cleanups (e.g. the fields -> flag changes), then the
>> > individual infrastructure pieces (like the twophase.c changes, best
>> > split into several pieces as well, the locking stuff), then the main
>> > feature, then support for it in the output plugin. Each should have an
>> > individual explanation about why the change is necessary and not a bad
>> > idea.
>> >
>>
>> Ok, I will break this patch into multiple logical pieces and re-submit.
>
> Thanks.
>

Attached are 5 patches split up from the original patch that I had
submitted earlier.

ReorderBufferTXN_flags_cleanup_1.patch:
cleanup of the ReorderBufferTXN bools and addition of some new flags
that following patches will need.

Logical_lock_unlock_api_2.patch:
Streaming changes of uncommitted transactions and of prepared
transactions runs the risk of aborts (rollback prepared) happening
while we are decoding. It's not a problem for most transactions, but
some of the transactions which do catalog changes need to get a
consistent view of the metadata so that the decoding does not behave
in uncertain ways when such concurrent aborts occur. We came up with
the concept of a logical locking/unlocking API to safeguard access to
catalog tables. This patch contains the implementation for this
functionality.

2PC_gid_wal_and_2PC_origin_tracking_3.patch:
We now store the 2PC gid in the commit/abort records. This allows us
to send the proper gid to the downstream across restarts. We also want
to avoid receiving the prepared transaction AGAIN from the upstream
and use replorigin tracking across prepared transactions.

reorderbuffer_2PC_logic_4.patch:
Add decoding logic to understand PREPARE related wal records and
relevant changes in the reorderbuffer logic to deal with 2PC. This
includes logic to handle concurrent rollbacks while we are going
through the change buffers belonging to a prepared or uncommitted
transaction.

pgoutput_plugin_support_2PC_5.patch:
Logical protocol changes to apply and send changes via the internal
pgoutput output plugin. Includes test case and relevant documentation
changes.

Besides the above, you had feedback around the test_decoding plugin
and the use of sleep() etc. I will submit a follow-on patch for the
test_decoding plugin stuff soon.

More comments inline below.

>> >> bool only_local;
>> >> + bool twophase_decoding;
>> >> + bool twophase_decode_with_catalog_changes;
>> >> + int decode_delay; /* seconds to sleep after every change record */
>> >
>> > This seems too big a crock to add just for testing. It'll also make the
>> > testing timing dependent...
>> >
>>
>> The idea *was* to make testing timing dependent. We wanted to simulate
>> the case when a rollback is issued by another backend while the
>> decoding is still ongoing. This allows that test case to be tested.
>
> What I mean is that this will be hell on the buildfarm because the
> different animals are differently fast.
>

Will handle this in the test_decoding plugin patch soon.

>
>> >> +/* Filter out unnecessary two-phase transactions */
>> >> +static bool
>> >> +pg_filter_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
>> >> + TransactionId xid, const char *gid)
>> >> +{
>> >> + TestDecodingData *data = ctx->output_plugin_private;
>> >> +
>> >> + /* treat all transactions as one-phase */
>> >> + if (!data->twophase_decoding)
>> >> + return true;
>> >> +
>> >> + if (txn && txn_has_catalog_changes(txn) &&
>> >> + !data->twophase_decode_with_catalog_changes)
>> >> + return true;
>> >
>> > What? I'm INCREDIBLY doubtful this is a sane thing to expose to output
>> > plugins. As in, unless I hear a very very convincing reason I'm strongly
>> > opposed.
>> >
>>
>> These bools are specific to the test_decoding plugin.
>

Will handle in the test_decoding plugin patch soon.

> txn_has_catalog_changes() definitely isn't just exposed to
> test_decoding. I think you're making the output plugin interface
> massively more complicated in this patch and I think we need to push
> back on that.
>
>
>> Again, these are useful in testing decoding in various scenarios with
>> twophase decoding enabled/disabled. Testing decoding when catalog
>> changes are allowed/disallowed etc. Please take a look at
>> "contrib/test_decoding/sql/prepared.sql" for the various scenarios.
>
> I don't see how that addresses my concern in any sort of way.
>

Will handle in the test_decoding plugin patch soon.

>
>> >> +/*
>> >> + * Check if we should continue to decode this transaction.
>> >> + *
>> >> + * If it has aborted in the meanwhile, then there's no sense
>> >> + * in decoding and sending the rest of the changes, we might
>> >> + * as well ask the subscribers to abort immediately.
>> >> + *
>> >> + * This should be called if we are streaming a transaction
>> >> + * before it's committed or if we are decoding a 2PC
>> >> + * transaction. Otherwise we always decode committed
>> >> + * transactions
>> >> + *
>> >> + * Additional checks can be added here, as needed
>> >> + */
>> >> +static bool
>> >> +pg_filter_decode_txn(LogicalDecodingContext *ctx,
>> >> + ReorderBufferTXN *txn)
>> >> +{
>> >> + /*
>> >> + * Due to caching, repeated TransactionIdDidAbort calls
>> >> + * shouldn't be that expensive
>> >> + */
>> >> + if (txn != NULL &&
>> >> + TransactionIdIsValid(txn->xid) &&
>> >> + TransactionIdDidAbort(txn->xid))
>> >> + return true;
>> >> +
>> >> + /* if txn is NULL, filter it out */
>> >
>> > Why can this be NULL?
>> >
>>
>> Depending on parameters passed to the ReorderBufferTXNByXid()
>> function, the txn might be NULL in some cases, especially during
>> restarts.
>
> That a) isn't an explanation of why that's ok, and b) doesn't explain
> why this ever needs to be exposed to the output plugin.
>

Removing this pg_filter_decode_txn() function. You are right, there's
no need to expose this function to the output plugin and we can make
the decision entirely inside the ReorderBuffer code handling.

>
>> >> static bool
>> >> pg_decode_filter(LogicalDecodingContext *ctx,
>> >> RepOriginId origin_id)
>> >> @@ -409,8 +622,18 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
>> >> }
>> >> data->xact_wrote_changes = true;
>> >>
>> >> + if (!LogicalLockTransaction(txn))
>> >> + return;
>> >
>> > It really really can't be right that this is exposed to output plugins.
>> >
>>
>> This was discussed in the other thread
>> (http://www.postgresql-archive.org/Logical-Decoding-and-HeapTupleSatisfiesVacuum-assumptions-td5998294i20.html).
>> Any catalog access in any plugin needs to interlock with concurrent
>> aborts. This is only a problem if the transaction is a prepared one or
>> a yet-uncommitted one. In the majority of cases, this function will do
>> nothing at all.
>
> That doesn't address at all that it's not ok that the output plugin
> needs to handle this. Doing this in output plugins, the majority of
> which are external projects, means that a) the work needs to be done
> many times. b) we can't simply adjust the relevant code in a minor
> release, because every output plugin needs to be changed.
>

How do we know if the external project is going to access catalog
data? How do we ensure that the data that they access is safe from
concurrent aborts if we are decoding uncommitted or prepared
transactions? We are providing a guideline here and recommending that
they use these APIs if they need to.

>> >
>> >> + /* if decode_delay is specified, sleep with above lock held */
>> >> + if (data->decode_delay > 0)
>> >> + {
>> >> + elog(LOG, "sleeping for %d seconds", data->decode_delay);
>> >> + pg_usleep(data->decode_delay * 1000000L);
>> >> + }
>> >
>> > Really not on board.
>> >
>>
>> Again, specific to test_decoding plugin.
>
> Again, this is not a justification. People look at the code to write
> output plugins. Also see my above complaint about this going to be hell
> to get right on slow buildfarm members - we're going to crank up the
> sleep times to make it robust-ish.
>

Sure, as mentioned above, will come up with a different way for the
test_decoding plugin later.

>
>
>> >> + XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
>> >> +
>> >
>> > Can we perhaps merge a bit of the code with the plain commit path on
>> > this?
>> >
>>
>> Given that PREPARE ROLLBACK handling is totally separate from the
>> regular commit code paths, wouldn't it be a little difficult?
>
> Why? A helper function doing so ought to be doable.
>

Can you elaborate on what exactly you mean here?

>
>
>> >> @@ -1386,8 +1449,20 @@ FinishPreparedTransaction(const char *gid, bool isCommit)
>> >> /*
>> >> * Validate the GID, and lock the GXACT to ensure that two backends do not
>> >> * try to commit the same GID at once.
>> >> + *
>> >> + * During logical decoding, on the apply side, it's possible that a prepared
>> >> + * transaction got aborted while decoding. In that case, we stop the
>> >> + * decoding and abort the transaction immediately. However the ROLLBACK
>> >> + * prepared processing still reaches the subscriber. In that case it's ok
>> >> + * to have a missing gid
>> >> */
>> >> - gxact = LockGXact(gid, GetUserId());
>> >> + gxact = LockGXact(gid, GetUserId(), missing_ok);
>> >> + if (gxact == NULL)
>> >> + {
>> >> + Assert(missing_ok && !isCommit);
>> >> + return;
>> >> + }
>> >
>> > I'm very doubtful it is sane to handle this at such a low level.
>> >
>>
>> FinishPreparedTransaction() is called directly from ProcessUtility. If
>> not here, where else could we do this?
>
> I don't think this is something that ought to be handled at this layer
> at all. You should get an error in that case, the replay logic needs to
> handle that, not the low level 2pc code.
>

Removed the above changes. The replay logic now checks whether the GID
still exists in the rollback prepared codepath. If not, it returns
immediately. In the case of commit prepared replay, the GID obviously
has to exist on the downstream.

>
>> >> @@ -2358,6 +2443,13 @@ PrepareRedoAdd(char *buf, XLogRecPtr start_lsn, XLogRecPtr end_lsn)
>> >> Assert(TwoPhaseState->numPrepXacts < max_prepared_xacts);
>> >> TwoPhaseState->prepXacts[TwoPhaseState->numPrepXacts++] = gxact;
>> >>
>> >> + if (origin_id != InvalidRepOriginId)
>> >> + {
>> >> + /* recover apply progress */
>> >> + replorigin_advance(origin_id, hdr->origin_lsn, end_lsn,
>> >> + false /* backward */ , false /* WAL */ );
>> >> + }
>> >> +
>> >
>> > It's unclear to me why this is necessary / a good idea?
>> >
>>
>> Keeping PREPARE handling as close to regular COMMIT handling seems
>> like a good idea, no?
>
> But this code *means* something? Explain to me why it's a good idea to
> advance, or don't do it.
>

We want to do this so that the recovered origin progress acts as
protection against receiving the prepared tx again from the upstream.

Other than the above,

*) Changed the flags and added "RB" prefix to all flags and macros.

*) Added a few fields into existing xl_xact_parsed_commit record and avoided
creating an entirely new xl_xact_parsed_prepare record.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
pgoutput_plugin_support_2PC_5.patch application/octet-stream 34.0 KB
reorderbuffer_2PC_logic_4.patch application/octet-stream 31.6 KB
2PC_gid_wal_and_2PC_origin_tracking_3.patch application/octet-stream 17.9 KB
ReorderBufferTXN_flags_cleanup_1.patch application/octet-stream 7.8 KB
Logical_lock_unlock_api_2.patch application/octet-stream 18.3 KB

From: Andres Freund <andres(at)anarazel(dot)de>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-02 00:53:43
Message-ID: 20180302005343.zv6nsqimcujzjrcd@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2018-02-28 21:12:42 +0530, Nikhil Sontakke wrote:
> Attached are 5 patches split up from the original patch that I had
> submitted earlier.

In the future you should number them. Right now they appear to be out of
order in your email. I suggest using git format-patch, which does all
the necessary work for you.

Greetings,

Andres Freund


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-02 01:14:10
Message-ID: CAMsr+YEb-f1oG+1wGji_sgswRsVQYtVXuTbVH=nnY6thMfiCqA@mail.gmail.com
Lists: pgsql-hackers

On 2 March 2018 at 08:53, Andres Freund <andres(at)anarazel(dot)de> wrote:

> Hi,
>
> On 2018-02-28 21:12:42 +0530, Nikhil Sontakke wrote:
> > Attached are 5 patches split up from the original patch that I had
> > submitted earlier.
>
> In the future you should number them. Right now they appear to be out of
> order in your email. I suggest using git format-patch, that does all
> the necessary work for you.
>
Yep, especially git format-patch with a -v argument, so the whole
patchset is visibly versioned and sorts in the correct order.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-02 04:21:14
Message-ID: CAMGcDxcD2fWqUwiPWGL_d-ie-CoGTVFAsgQvQ91H93q2Jc-mhA@mail.gmail.com
Lists: pgsql-hackers

Hi Andres and Craig,

>> In the future you should number them. Right now they appear to be out of
>> order in your email. I suggest using git format-patch, that does all
>> the necessary work for you.
>>
> Yep, especially git format-patch with a -v argument, so the whole patchset is
> visibly versioned and sorts in the correct order.
>

I did try to use *_Number.patch to convey the sequence, but admittedly
it's pretty lame.

I will re-submit with "git format-patch" soon.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-05 16:37:01
Message-ID: CAMGcDxf9HHmDOti4-DSQTtGo48R4OMki+BktPn2uTOtGr4XomQ@mail.gmail.com
Lists: pgsql-hackers

Hi Andres,

>
> I will re-submit with "git format-patch" soon.
>
PFA, patches in "format-patch" format.

This patch set also includes changes in the test_decoding plugin along
with an additional savepoint related test case that was pointed out on
this thread, upstream.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0006-Teach-test_decoding-plugin-to-work-with-2PC.patch application/octet-stream 21.4 KB
0005-pgoutput-output-plugin-support-for-logical-decoding-.patch application/octet-stream 24.6 KB
0004-Teach-ReorderBuffer-to-deal-with-2PC.patch application/octet-stream 30.9 KB
0003-Add-support-for-logging-GID-in-commit-abort-WAL-reco.patch application/octet-stream 18.4 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.patch application/octet-stream 18.7 KB
0001-Cleaning-up-and-addition-of-new-flags-in-ReorderBuff.patch application/octet-stream 8.1 KB

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-22 01:22:47
Message-ID: c167340c-0751-d54b-0f32-fa83fd414863@2ndquadrant.com
Lists: pgsql-hackers

Hi Nikhil,

I've been looking at this patch over the past few days, so here are my
thoughts so far ...

decoding aborted transactions
=============================

First, let's talk about the handling of aborted transactions, which was
originally discussed in thread [1]. I'll try to summarize the status and
explain my understanding of the choices first.

[1]
https://www.postgresql.org/message-id/CAMGcDxeHBaXCz12LdfEmyJdghbms_dtC26pRZXKWRV2dazO-UQ%40mail.gmail.com

There were multiple ideas about how to deal with aborted transactions,
but we eventually found various issues in all of them except for two -
interlocking decoding and aborts, and modifying the rules so that
aborted transactions are considered to be running while being decoded.

This patch uses the first approach, i.e. interlock. It has a couple of
disadvantages:

a) The abort may need to wait for decoding workers for a while.

This is annoying, but aborts are generally rare. And for systems with
many concurrent short transactions (where even tiny delays would matter)
it's unlikely the decoding workers will already start decoding the
aborted transaction.

b) output plugins need to call lock/unlock explicitly from the callbacks

Technically, we could wrap the whole callback in a lock/unlock, but that
would needlessly increase the amount of time spent holding the lock,
making the previous point much worse. As the callbacks are expected to
do network I/O etc. the amount of time could be quite significant.

The main advantage is of course that it's likely much less invasive
than tweaking which transactions are seen as running. So I think taking
this approach is a sensible choice at this point.

Now, about the interlock implementation - I see you've reused the "lock
group" concept from parallel query. That may make sense, but
unfortunately there's almost no documentation explaining how it works,
what the "protocol" is, etc. There is fairly extensive documentation
for "lock
groups" in src/backend/storage/lmgr/README, but while the "decoding
group" code is inspired by it, the code is actually very different.
Compare for example BecomeLockGroupLeader and BecomeDecodeGroupLeader,
and you'll see what I mean.

So I think the first thing we need to do is add proper documentation
(possibly into the same README), explaining how the decode groups work,
how the decodeAbortPending works, etc.

Also, some function names seem a bit misleading. For example in the lock
group "BecomeLockGroupLeader" means (make the current process a group
leader), but apparently "BecomeDecodeGroupLeader" means "find the
process handling XID and make it a leader". But perhaps I got that
entirely wrong.

Of course LogicalLockTransaction and LogicalUnlockTransaction, should
have proper comments, which is particularly important as it's part of
the public API.

BTW, do we need to do any of this with (wal_level < logical)? I don't
see any quick bail-out in any of the functions in this case, but it
seems like a fairly obvious optimization.

Similarly, can't the logical workers indicate that they need to decode
2PC transactions (or in-progress transactions in general) in some way?
If we knew there are no such workers, that would also allow ignoring the
interlock, no?

Another thing is that I'm yet to see any performance tests. While we do
believe it will work fine, it's based on a number of assumptions:

a) aborts are rare
b) it has no measurable impact on commit

I think we need to verify this by actually measuring the impact on a
bunch of workloads. In particular, I think we need to test

i) impact on commit-only workloads
ii) impact on worst-case scenario

I'm not sure what (ii) would look like, considering the patch only deals
with decoding 2PC transactions, which have significant overhead on their
own - so I'm afraid the impact on "regular transactions" might be much
worse, once we add support for that.

decoding 2PC transactions
=========================

Now, the main topic of the patch. Overall the changes make sense, I
think - it modifies about the same places I touched in the streaming
patch, in similar ways.

The following comments are mostly in random order:

1) test_decoding.c
------------------

The "filter" functions do not follow the naming convention, so I suggest
to rename them like this:

- pg_filter_decode_txn -> pg_decode_filter_txn
- pg_filter_prepare -> pg_decode_filter_prepare_txn

or something like that. Also, looking at those functions (and those same
callbacks in the pgoutput plugin) I wonder if we really need to make
them part of the output plugin API.

I mean, AFAICS their only purpose is to filter 2PC transactions, but I
don't quite see why implementing those checks should be the
responsibility of the plugin. I suppose it was done to make
test_decoding customizable (i.e. allow enabling/disabling of decoding
2PC as needed), right?

In that case I suggest making it configurable via plugin-level flags (I
see LogicalDecodingContext already has an enable_twophase), and moving
the checks to a function that is not part of the plugin API. Of course,
in that case the flag needs to be customizable from plugin options, not
just "Does the plugin have all the callbacks?".

The "twophase-decoding" and "twophase-decode-with-catalog-changes" seem
a bit inconsistently named too (why decode vs. decoding?).

2) regression tests
-------------------

I really dislike the use of \set to run the same query repeatedly. It
makes analysis of regression failures even more tedious than it already
is. I'd just copy the query to all the places.

3) worker.c
-----------

The comment in apply_handle_rollback_prepared_txn says this:

/*
* During logical decoding, on the apply side, it's possible that a
* prepared transaction got aborted while decoding. In that case, we
* stop the decoding and abort the transaction immediately. However
* the ROLLBACK prepared processing still reaches the subscriber. In
* that case it's ok to have a missing gid
*/
if (LookupGXact(commit_data->gid)) { ... }

But is it safe to assume it never happens due to an error? In other
words, is there a way to decide that the GID really aborted? Or, why
should the provider send the rollback at all - surely it could know if
the transaction/GID was sent to subscriber or not, right?

4) twophase.c
-------------

I wonder why the patch modifies TWOPHASE_MAGIC at all - if it's
meant to identify 2PC files, then why not keep the value? And if we
really need to modify it, why not use another random number? By only
adding 1 to the current one, it makes it look like a random bit flip.

5) decode.c
-----------

The changes in DecodeCommit need proper comments.

In DecodeAbort, the "if" includes this condition:

ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid)

which essentially means ROLLBACK PREPARED is translated into "is the
transaction prepared?". Shouldn't the code look at xl_xact_parsed_abort
instead, and make the ReorderBufferTxnIsPrepared an Assert?

6) logical.c
------------

I see StartupDecodingContext does this:

twophase_callbacks = (ctx->callbacks.prepare_cb != NULL) +
(ctx->callbacks.commit_prepared_cb != NULL) +
(ctx->callbacks.abort_prepared_cb != NULL);

It seems a bit strange to do arithmetic on bools, I guess. In any
case, I think this should be an ERROR and not a WARNING:

if (twophase_callbacks != 3 && twophase_callbacks != 0)
ereport(WARNING,
(errmsg("Output plugin registered only %d twophase callbacks. "
"Twophase transactions will be decoded at commit time.",
twophase_callbacks)));

A plugin that implements only a subset of the callbacks seems outright
broken, so let's just fail.

7) proto.c / worker.c
---------------------

Until now, the 'action' (essentially the first byte of each message)
clearly identified what the message does. So 'C' -> commit, 'I' ->
insert, 'D' -> delete etc. This also means the "handle" methods were
inherently simple, because each handled exactly one particular action
and nothing else.

You've expanded the protocol in a way that suddenly 'C' means either
COMMIT or ROLLBACK, and 'P' means PREPARE, ROLLBACK PREPARED or COMMIT
PREPARED. I don't think that's how the protocol should be extended - if
anything, it's damn confusing and unlike the existing code. You should
define new actions, and keep the handlers in worker.c simple.

Also, this probably implies a LOGICALREP_PROTO_VERSION_NUM increase.

8) reorderbuffer.h/c
--------------------

Similarly, I wonder why you replaced the ReorderBuffer boolean flags
(is_known_subxact, has_catalog_changes) with a bitmask? I find it way
more difficult to read (which is subjective, of course) but it also
makes IDEs dumber (suddenly they can't offer you field names).

Surely it wasn't done to save space, because by using an "int" you've
saved just 4B (there are 8 flags right now, so it'd need 8 bytes with
plain bool flags) on a structure that is already ~200B.

And you added the gid[GIDSIZE] field to it, making it 400B for *all*
transactions and subtransactions (not just 2PC). Not to mention that the
GID is usually much shorter than 200B.

So I suggest using just a simple (char *) pointer for the GID, keeping
it NULL for most transactions, and switching back to plain bool flags.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-23 15:26:26
Message-ID: CANP8+jKXy6EZ=wiyowys4A1e0+yoeqYaQV-7XuSMYnBewzCHyQ@mail.gmail.com
Lists: pgsql-hackers

On 5 March 2018 at 16:37, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com> wrote:

>>
>> I will re-submit with "git format-patch" soon.
>>
> PFA, patches in "format-patch" format.
>
> This patch set also includes changes in the test_decoding plugin along
> with an additional savepoint related test case that was pointed out on
> this thread, upstream.

Reviewing 0001-Cleaning-up-and-addition-of-new-flags-in-ReorderBuff.patch

The change from is_known_as_subxact to rbtxn_is_subxact
loses some meaning, since rbtxn entries with this flag set to false
might still be subxacts; we just don't know yet.

rbtxn_is_serialized refers to RBTXN_SERIALIZED,
so the flag name should be RBTXN_IS_SERIALIZED to match.

Otherwise looks OK to commit

Reviewing 0003-Add-support-for-logging-GID-in-commit-abort-WAL-reco

Looks fine, reworked patch attached
* added changes to xact.h from patch 4 so that this is a whole,
committable patch
* added comments to make abort and commit structs look same

Attached patch is proposed for a separate, early commit as part of this

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment Content-Type Size
logging-GID-in-commit-abort-WAL.v2.patch application/octet-stream 18.1 KB

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-27 09:19:37
Message-ID: CANP8+jLr9dsXjHNnZM=eVUf7WRbjCbOeuoEhNff1Mq_bq3mjjQ@mail.gmail.com
Lists: pgsql-hackers

On 23 March 2018 at 15:26, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:

> Reviewing 0003-Add-support-for-logging-GID-in-commit-abort-WAL-reco
>
> Looks fine, reworked patch attached
> * added changes to xact.h from patch 4 so that this is a whole,
> committable patch
> * added comments to make abort and commit structs look same
>
> Attached patch is proposed for a separate, early commit as part of this

Looking to commit "logging GID" patch today, if no further objections.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-28 00:58:46
Message-ID: 20180328005846.jwrkk6cax7fb7tpq@alap3.anarazel.de
Lists: pgsql-hackers

On 2018-03-27 10:19:37 +0100, Simon Riggs wrote:
> On 23 March 2018 at 15:26, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>
> > Reviewing 0003-Add-support-for-logging-GID-in-commit-abort-WAL-reco
> >
> > Looks fine, reworked patch attached
> > * added changes to xact.h from patch 4 so that this is a whole,
> > committable patch
> > * added comments to make abort and commit structs look same
> >
> > Attached patch is proposed for a separate, early commit as part of this
>
> Looking to commit "logging GID" patch today, if no further objections.

None here.

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-28 15:28:33
Message-ID: CAMGcDxcGtRXtF1r3VqddzHba+BTm7q8wbgEsyhmOFuDckMTZFQ@mail.gmail.com
Lists: pgsql-hackers

Hi Tomas,

> Now, about the interlock implementation - I see you've reused the "lock
> group" concept from parallel query. That may make sense, unfortunately
> there's about no documentation explaining how it works, what is the
> "protocol" etc. There is fairly extensive documentation for "lock
> groups" in src/backend/storage/lmgr/README, but while the "decoding
> group" code is inspired by it, the code is actually very different.
> Compare for example BecomeLockGroupLeader and BecomeDecodeGroupLeader,
> and you'll see what I mean.
>
> So I think the first thing we need to do is add proper documentation
> (possibly into the same README), explaining how the decode groups work,
> how the decodeAbortPending works, etc.
>

I have added details about this in src/backend/storage/lmgr/README as
suggested by you.

>
> BTW, do we need to do any of this with (wal_level < logical)? I don't
> see any quick bail-out in any of the functions in this case, but it
> seems like a fairly obvious optimization.
>

The calls to the LogicalLockTransaction/LogicalUnLockTransaction APIs
will be from inside plugins or the reorderbuffer code paths. Those
will get invoked only in the wal_level logical case, hence I did not
add further checks.

> Similarly, can't the logical workers indicate that they need to decode
> 2PC transactions (or in-progress transactions in general) in some way?
> If we knew there are no such workers, that would also allow ignoring the
> interlock, no?
>

These APIs check if the transaction is already committed and cache
that information for further calls, so for regular transactions this
becomes a no-op.

>
> decoding 2PC transactions
> =========================
>
> Now, the main topic of the patch. Overall the changes make sense, I
> think - it modifies about the same places I touched in the streaming
> patch, in similar ways.
>
> The following comments are mostly in random order:
>
> 1) test_decoding.c
> ------------------
>
> The "filter" functions do not follow the naming convention, so I suggest
> to rename them like this:
>
> - pg_filter_decode_txn -> pg_decode_filter_txn
> - pg_filter_prepare -> pg_decode_filter_prepare_txn
>
> or something like that. Also, looking at those functions (and those same
> callbacks in the pgoutput plugin) I wonder if we really need to make
> them part of the output plugin API.
>
> I mean, AFAICS their only purpose is to filter 2PC transactions, but I
> don't quite see why implementing those checks should be responsibility
> of the plugin? I suppose it was done to make test_decoding customizable
> (i.e. allow enabling/disabling of decoding 2PC as needed), right?
>
> In that case I suggest make it configurable by plugin-level flags (I see
> LogicalDecodingContext already has a enable_twophase), and moving the
> checks to a function that is not part of the plugin API. Of course, in
> that case the flag needs to be customizable from plugin options, not
> just "Does the plugin have all the callbacks?".
>

The idea behind exposing the API is to allow the plugins to have
selective control over specific 2PC actions. They might want to decode
certain 2PC transactions but not others. By providing this callback,
they can do that selectively.

> The "twophase-decoding" and "twophase-decode-with-catalog-changes" seem
> a bit inconsistently named too (why decode vs. decoding?).
>

This has been removed in the latest patches altogether. Maybe you were
referring to an older patch.

>
> 2) regression tests
> -------------------
>
> I really dislike the use of \set to run the same query repeatedly. It
> makes analysis of regression failures even more tedious than it already
> is. I'd just copy the query to all the places.
>

They are long-winded queries and IMO made the test file look too
cluttered and verbose.

>
> 3) worker.c
> -----------
>
> The comment in apply_handle_rollback_prepared_txn says this:
>
> /*
> * During logical decoding, on the apply side, it's possible that a
> * prepared transaction got aborted while decoding. In that case, we
> * stop the decoding and abort the transaction immediately. However
> * the ROLLBACK prepared processing still reaches the subscriber. In
> * that case it's ok to have a missing gid
> */
> if (LookupGXact(commit_data->gid)) { ... }
>
> But is it safe to assume it never happens due to an error? In other
> words, is there a way to decide that the GID really aborted? Or, why
> should the provider sent the rollback at all - surely it could know if
> the transaction/GID was sent to subscriber or not, right?
>

Since we decode in commit WAL order, when we reach the ROLLBACK
PREPARED wal record, we cannot be sure that we in fact aborted the
decoding midway because of this concurrent rollback. It's possible
that this rollback comes much later, when all decoding backends have
already successfully prepared it on the subscribers.

>
> 4) twophase.c
> -------------
>
> I wonder why the patch modifies the TWOPHASE_MAGIC at all - if it's
> meant to identify 2PC files, then why not to keep the value. And if we
> really need to modify it, why not to use another random number? By only
> adding 1 to the current one, it makes it look like a random bit flip.
>

We could retain the existing magic here.

>
> 5) decode.c
> -----------
>
> The changes in DecodeCommit need proper comments.
>
> In DecodeAbort, the "if" includes this condition:
>
> ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid)
>
> which essentially means ROLLBACK PREPARED is translated into "is the
> transaction prepared?. Shouldn't the code look at xl_xact_parsed_abort
> instead, and make the ReorderBufferTxnIsPrepared an Assert?

This again goes back to the earlier callback, in which we want
pg_decode_filter_prepare_txn to selectively decide whether to filter
out or decode some of the 2PC transactions. If we allow that callback,
then we need to consult ReorderBufferTxnIsPrepared to get the same
answer for these 2PC transactions.

>
>
> 6) logical.c
> ------------
>
> I see StartupDecodingContext does this:
>
> twophase_callbacks = (ctx->callbacks.prepare_cb != NULL) +
> (ctx->callbacks.commit_prepared_cb != NULL) +
> (ctx->callbacks.abort_prepared_cb != NULL);
>
> It seems a bit strange to make arithmetics on bools, I guess. In any
> case, I think this should be an ERROR and not a WARNING:
>
> if (twophase_callbacks != 3 && twophase_callbacks != 0)
> ereport(WARNING,
> (errmsg("Output plugin registered only %d twophase callbacks. "
> "Twophase transactions will be decoded at commit time.",
> twophase_callbacks)));
>
> A plugin that implements only a subset of the callbacks seems outright
> broken, so let's just fail.
>

Ok, done.

>
> 7) proto.c / worker.c
> ---------------------
>
> Until now, the 'action' (essentially the first byte of each message)
> clearly identified what the message does. So 'C' -> commit, 'I' ->
> insert, 'D' -> delete etc. This also means the "handle" methods were
> inherently simple, because each handled exactly one particular action
> and nothing else.
>
> You've expanded the protocol in a way that suddenly 'C' means either
> COMMIT or ROLLBACK, and 'P' means PREPARE, ROLLBACK PREPARED or COMMIT
> PREPARED. I don't think that's how the protocol should be extended - if
> anything, it's damn confusing and unlike the existing code. You should
> define new action, and keep the handlers in worker.c simple.
>

I thought this grouped regular commit and 2PC transactions properly.
I can look at this again if this style is not favored.

> Also, this probably implies LOGICALREP_PROTO_VERSION_NUM increase.
>

Ok, increased it to 2.

PFA, latest patch set. The ReorderBufferCommit() handling has been
further simplified now without worrying too much about optimizing for
abort handling at various steps.

This also contains an additional/optional 7th patch which has a test
case to solely demonstrate the concurrent abort/logical decoding
interlocking. It introduces a delay, via sleep logic, while holding
LogicalTransactionLock. This additional patch might not be considered
for commit, as the delay-based approach is prone to failures on slower
machines.

Simon, 0003-Add-GID-and-replica-origin-to-two-phase-commit-abort.patch
is the exact patch that you had posted for an earlier commit.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.patch application/octet-stream 7.3 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.patch application/octet-stream 23.8 KB
0003-Add-GID-and-replica-origin-to-two-phase-commit-abort.patch application/octet-stream 19.1 KB
0004-Support-decoding-of-two-phase-transactions-at-PREPAR.patch application/octet-stream 31.5 KB
0005-pgoutput-output-plugin-support-for-logical-decoding-.patch application/octet-stream 33.6 KB
0006-Teach-test_decoding-plugin-to-work-with-2PC.patch application/octet-stream 20.9 KB
0007-Additional-optional-test-case-to-demonstrate-decoding-rollbac.patch application/octet-stream 9.6 KB

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-28 17:14:28
Message-ID: CANP8+j+MWUP05JiXKYFyZWVLS11S12dnnYuqLLgeALKM9EK9Jg@mail.gmail.com
Lists: pgsql-hackers

On 28 March 2018 at 16:28, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com> wrote:

> Simon, 0003-Add-GID-and-replica-origin-to-two-phase-commit-abort.patch
> is the exact patch that you had posted for an earlier commit.

0003 Pushed

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-29 21:52:18
Message-ID: dd5a9cb7-bb10-cb5c-834e-08dc3ffc057f@2ndquadrant.com
Lists: pgsql-hackers

Hi,

I've been reviewing the last patch version, focusing mostly on the
decoding group part. Let me respond to several points first, then new
review bits.

On 03/28/2018 05:28 PM, Nikhil Sontakke wrote:
> Hi Tomas,
>
>> Now, about the interlock implementation - I see you've reused the "lock
>> group" concept from parallel query. That may make sense, unfortunately
>> there's about no documentation explaining how it works, what is the
>> "protocol" etc. There is fairly extensive documentation for "lock
>> groups" in src/backend/storage/lmgr/README, but while the "decoding
>> group" code is inspired by it, the code is actually very different.
>> Compare for example BecomeLockGroupLeader and BecomeDecodeGroupLeader,
>> and you'll see what I mean.
>>
>> So I think the first thing we need to do is add proper documentation
>> (possibly into the same README), explaining how the decode groups work,
>> how the decodeAbortPending works, etc.
>>
>
> I have added details about this in src/backend/storage/lmgr/README as
> suggested by you.
>

Thanks. I think the README is a good start, but I think we also need to
improve the comments, which are usually more detailed than the README.
For example, it's not quite acceptable that LogicalLockTransaction and
LogicalUnlockTransaction have almost no comments, especially when
they're meant to be public API for decoding plugins.

>>
>> BTW, do we need to do any of this with (wal_level < logical)? I don't
>> see any quick bail-out in any of the functions in this case, but it
>> seems like a fairly obvious optimization.
>>
>
> The calls to the LogicalLockTransaction/LogicalUnLockTransaction APIs
> will be from inside plugins or the reorderbuffer code paths. Those
> will get invoked only in the wal_level logical case, hence I did not
> add further checks.
>

Oh, right.

>> Similarly, can't the logical workers indicate that they need to decode
>> 2PC transactions (or in-progress transactions in general) in some way?
>> If we knew there are no such workers, that would also allow ignoring the
>> interlock, no?
>>
>
> These APIs check if the transaction is already committed and cache
> that information for further calls, so for regular transactions this
> becomes a no-op
>

I see. So when the output plugin never calls LogicalLockTransaction on
an in-progress transaction (e.g. 2PC after PREPARE), it never actually
initializes the decoding group. Works for me.

>>
>> 2) regression tests
>> -------------------
>>
>> I really dislike the use of \set to run the same query repeatedly. It
>> makes analysis of regression failures even more tedious than it already
>> is. I'd just copy the query to all the places.
>>
>
> They are long-winded queries and IMO made the test file look too
> cluttered and verbose..
>

Well, I don't think the clutter is a major problem, while the \set
approach certainly makes it more difficult to investigate regression
failures.

>>
>> 3) worker.c
>> -----------
>>
>> The comment in apply_handle_rollback_prepared_txn says this:
>>
>> /*
>> * During logical decoding, on the apply side, it's possible that a
>> * prepared transaction got aborted while decoding. In that case, we
>> * stop the decoding and abort the transaction immediately. However
>> * the ROLLBACK prepared processing still reaches the subscriber. In
>> * that case it's ok to have a missing gid
>> */
>> if (LookupGXact(commit_data->gid)) { ... }
>>
>> But is it safe to assume it never happens due to an error? In other
>> words, is there a way to decide that the GID really aborted? Or, why
>> should the provider sent the rollback at all - surely it could know if
>> the transaction/GID was sent to subscriber or not, right?
>>
>
> Since we decode in commit WAL order, when we reach the ROLLBACK
> PREPARED wal record, we cannot be sure that we did infact abort the
> decoding mid ways because of this concurrent rollback. It's possible
> that this rollback comes much much later as well when all decoding
> backends have successfully prepared it on the subscribers already.
>

Ah, OK. So when the transaction gets aborted (by ROLLBACK PREPARED)
concurrently with the decoding, we abort the apply transaction and
discard the ReorderBufferTXN.

Which means that later, when we decode the abort, we don't know whether
the decoding reached abort or prepare, and so we have to send the
ROLLBACK PREPARED to the subscriber too.

For a moment I was thinking we might simply remember TXN outcome in
reorder buffer, but obviously that does not work - the decoding might
restart in between, and as you say the distance (in terms of WAL) may be
quite significant.

>>
>> 7) proto.c / worker.c
>> ---------------------
>>
>> Until now, the 'action' (essentially the first byte of each message)
>> clearly identified what the message does. So 'C' -> commit, 'I' ->
>> insert, 'D' -> delete etc. This also means the "handle" methods were
>> inherently simple, because each handled exactly one particular action
>> and nothing else.
>>
>> You've expanded the protocol in a way that suddenly 'C' means either
>> COMMIT or ROLLBACK, and 'P' means PREPARE, ROLLBACK PREPARED or COMMIT
>> PREPARED. I don't think that's how the protocol should be extended - if
>> anything, it's damn confusing and unlike the existing code. You should
>> define new action, and keep the handlers in worker.c simple.
>>
>
> I thought this grouped regular commit and 2PC transactions properly.
> Can look at this again if this style is not favored.
>

Hmmm, it's not how I'd do it, but perhaps someone who originally
designed the protocol should review this bit.

Now, the new bits ... attached is a .diff with a couple of changes and
comments on various places.

1) LogicalLockTransaction

- This function is part of a public API, yet it has no comment. That
needs fixing - it has to be clear how to use it. The .diff suggests a
comment, but it may need improvements.

- As I mentioned in the previous review, BecomeDecodeGroupLeader is a
misleading name. It suggests the caller becomes a leader, while in fact
it looks up the PROC running the XID and makes it a leader. This is
obviously due to copying the code from lock groups, where the caller
actually becomes the leader. It's incorrect here. I suggest something
like LookupDecodeGroupLeader() instead.

- In the "if (MyProc->decodeGroupLeader == NULL)" block there are two
blocks rechecking the transaction status:

if (proc == NULL)
{ ... recheck ... }

if (!BecomeDecodeGroupMember(proc, proc->pid, rbtxn_prepared(txn)))
{ ... recheck ...}

I suggest joining them into a single block.

- This Assert() is either bogus (there can indeed be cases with
(MyProc->decodeGroupLeader == NULL)), or the "if" is unnecessary:

Assert(MyProc->decodeGroupLeader);

if (MyProc->decodeGroupLeader) { ... }

- I'm wondering why we're maintaining decodeAbortPending flags both for
the leader and all the members. ISTM it'd be perfectly fine to only
check the leader, particularly because RemoveDecodeGroupMemberLocked
removes the members from the decoding group. So that seems unnecessary,
and we can remove the

if (MyProc->decodeAbortPending)
{ ... }

- LogicalUnlockTransaction needs a comment(s) too.

2) BecomeDecodeGroupLeader

- Wrong name (already mentioned above).

- It can bail out when (!proc), which will simplify the code a bit.

- Why does it check PID of the process at all? Seems unnecessary,
considering we're already checking the XID.

- Can a proc executing a XID have a different leader? I don't think so,
so I'd make that an Assert().

Assert(!proc || (proc->decodeGroupLeader == proc));

And it'll allow simplification of some of the conditions.

- We're only dealing with prepared transactions now, so I'd just drop
the is_prepared flag - it'll make the code a bit simpler, we can add it
later in the patch adding decoding of regular in-progress transactions.
We can't test the (!is_prepared) case anyway.

- Why are we making the leader also a member of the group? Seems rather
unnecessary, and it complicates the abort handling, because we need to
skip the leader when deciding to wait.

3) LogicalDecodeRemoveTransaction

- It's not clear to me what happens when a decoding backend gets killed
between LogicalLockTransaction/LogicalUnlockTransaction. Doesn't that
mean LogicalDecodeRemoveTransaction will get stuck, because the proc is
still in the decoding group?

- The loop now tweaks decodeAbortPending of the members, but I don't
think that's necessary either - the LogicalUnlockTransaction can check
the leader flag just as easily.

4) a bunch of comment / docs improvements, ...

I'm suggesting rewording a couple of comments. I've also added a couple
of missing comments - e.g. to LogicalLockTransaction and the lock group
methods in general.

Also, a couple more questions and suggestions in XXX comments.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment Content-Type Size
logical-2pc-decoding-review.diff text/x-patch 19.2 KB

From: Andres Freund <andres(at)anarazel(dot)de>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-29 21:58:15
Message-ID: 20180329215815.36xxakgujx4luo4m@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2018-03-29 23:52:18 +0200, Tomas Vondra wrote:
> > I have added details about this in src/backend/storage/lmgr/README as
> > suggested by you.
> >
>
> Thanks. I think the README is a good start, but I think we also need to
> improve the comments, which are usually more detailed than the README.
> For example, it's not quite acceptable that LogicalLockTransaction and
> LogicalUnlockTransaction have almost no comments, especially when they're
> meant to be a public API for decoding plugins.

FWIW, for me that's grounds to not accept the feature. Burdening output
plugins with this will make their development painful (because they'll
have to adapt regularly) and correctness doubtful (there's nothing
checking for the lock being skipped). Another way needs to be found.

- Andres


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-29 22:23:00
Message-ID: 0925358a-e11d-51bc-c3c1-959ded75f604@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 03/29/2018 11:58 PM, Andres Freund wrote:
> On 2018-03-29 23:52:18 +0200, Tomas Vondra wrote:
>>> I have added details about this in src/backend/storage/lmgr/README as
>>> suggested by you.
>>>
>>
>> Thanks. I think the README is a good start, but I think we also need to
>> improve the comments, which are usually more detailed than the README.
>> For example, it's not quite acceptable that LogicalLockTransaction and
>> LogicalUnlockTransaction have almost no comments, especially when they're
>> meant to be a public API for decoding plugins.
>
> FWIW, for me that's grounds to not accept the feature. Burdening output
> plugins with this will make their development painful (because they'll
> have to adapt regularly) and correctness doubtful (there's nothing
> checking for the lock being skipped). Another way needs to be found.
>

The lack of docs/comments, or the fact that the decoding plugins would
need to do some lock/unlock operation?

I agree with the former, of course - docs are a must. I disagree with
the latter, though - there have been almost no proposals for how to do
it without the locking. If there are any, I'd like to hear about them.

FWIW plugins that don't want to decode in-progress transactions don't
need to do anything, obviously.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-29 22:24:41
Message-ID: 20180329222441.xxo2zxkjydpgjs4n@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 2018-03-30 00:23:00 +0200, Tomas Vondra wrote:
> On 03/29/2018 11:58 PM, Andres Freund wrote:
> > FWIW, for me that's grounds to not accept the feature. Burdening output
> > plugins with this will make their development painful (because they'll
> > have to adapt regularly) and correctness doubtful (there's nothing
> > checking for the lock being skipped). Another way needs to be found.
> >
>
> The lack of docs/comments, or the fact that the decoding plugins would
> need to do some lock/unlock operation?

The latter.

> I agree with the former, of course - docs are a must. I disagree with
> the latter, though - there have been almost no proposals for how to do
> it without the locking. If there are any, I'd like to hear about them.

I don't care. Either another solution needs to be found, or the locking
needs to be automatically performed when necessary.

Greetings,

Andres Freund


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-29 22:30:40
Message-ID: ff89e6f0-baae-fd4f-6565-13e7c2f23b90@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 29/03/18 23:58, Andres Freund wrote:
> On 2018-03-29 23:52:18 +0200, Tomas Vondra wrote:
>>> I have added details about this in src/backend/storage/lmgr/README as
>>> suggested by you.
>>>
>>
>> Thanks. I think the README is a good start, but I think we also need to
>> improve the comments, which are usually more detailed than the README.
>> For example, it's not quite acceptable that LogicalLockTransaction and
>> LogicalUnlockTransaction have almost no comments, especially when they're
>> meant to be a public API for decoding plugins.
>
> FWIW, for me that's grounds to not accept the feature. Burdening output
> plugins with this will make their development painful (because they'll
> have to adapt regularly) and correctness doubtful (there's nothing
> checking for the lock being skipped). Another way needs to be found.
>

I have to agree with Andres here. It's also visible in the later
patches. The pgoutput patch completely forgets to call these new APIs.
test_decoding calls them, but it does so even when it's processing
changes for a committed transaction. I think that should be avoided, as
it means potentially doing an SLRU lookup for every change. So doing it
right is indeed not easy.

I was wondering how to hide this. The best idea I've had so far would be
to put it in heap_beginscan (and index_beginscan, given that catalog
scans use it as well) behind some condition. That would also improve
performance, because locking would not need to happen for syscache hits.
The problem, however, is how to inform heap_beginscan about the fact
that we are in 2PC decoding. We definitely don't want to change all the
scan APIs for this. I wonder if we could add some kind of property to
Snapshot which would indicate this fact - logical decoding uses its own
snapshots, so it could inject the information about being inside 2PC
decoding.
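A rough sketch of this snapshot-flag idea, using invented names throughout (the real SnapshotData and heap_beginscan differ substantially; this only demonstrates the conditional-locking shape being proposed):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative-only model of carrying "we are decoding a 2PC transaction"
 * inside the snapshot, so scans can take the decoding lock transparently. */
typedef uint32_t ModelXid;
#define MODEL_INVALID_XID 0

typedef struct ModelSnapshot
{
    ModelXid logicalxid;    /* XID being decoded, or MODEL_INVALID_XID */
} ModelSnapshot;

static int locks_taken = 0; /* counts lock attempts, for the example */

/* Stand-in for heap_beginscan: lock only when the snapshot says we are
 * inside 2PC decoding, so ordinary scans and syscache hits pay nothing. */
static void
model_beginscan(const ModelSnapshot *snap)
{
    if (snap->logicalxid != MODEL_INVALID_XID)
        locks_taken++;      /* would call the decoding lock on this XID */
}
```

The appeal of this shape is that only snapshots built by the logical decoding machinery carry a valid XID, so no scan API signature has to change.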

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-29 22:41:56
Message-ID: 87d3e688-89fa-bd76-8841-4fc7d16bca77@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 30/03/18 00:30, Petr Jelinek wrote:
> On 29/03/18 23:58, Andres Freund wrote:
>> On 2018-03-29 23:52:18 +0200, Tomas Vondra wrote:
>>>> I have added details about this in src/backend/storage/lmgr/README as
>>>> suggested by you.
>>>>
>>>
>>> Thanks. I think the README is a good start, but I think we also need to
>>> improve the comments, which are usually more detailed than the README.
>>> For example, it's not quite acceptable that LogicalLockTransaction and
>>> LogicalUnlockTransaction have almost no comments, especially when they're
>>> meant to be a public API for decoding plugins.
>>
>> FWIW, for me that's grounds to not accept the feature. Burdening output
>> plugins with this will make their development painful (because they'll
>> have to adapt regularly) and correctness doubtful (there's nothing
>> checking for the lock being skipped). Another way needs to be found.
>>
>
> I have to agree with Andres here. It's also visible in the later
> patches. The pgoutput patch completely forgets to call these new APIs.
> test_decoding calls them, but it does so even when it's processing
> changes for a committed transaction. I think that should be avoided, as
> it means potentially doing an SLRU lookup for every change. So doing it
> right is indeed not easy.

Ah, it turns out it actually does not need the SLRU lookup in this case
(I missed the reorder buffer call), so I take that part back.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-30 07:56:47
Message-ID: CAMGcDxdboSU8cyNoCav7Qpa_U5FVqhTnb+cbnDa6=kL=3KWoXg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi Petr, Andres and Tomas

>>> Thanks. I think the README is a good start, but I think we also need to
>>> improve the comments, which are usually more detailed than the README.
>>> For example, it's not quite acceptable that LogicalLockTransaction and
>>> LogicalUnlockTransaction have almost no comments, especially when they're
>>> meant to be a public API for decoding plugins.
>>

Tomas, thanks for providing your review-comments-based patch. I will
include the documentation that you have provided in that patch for the
APIs. I will also look at your decodeGroupLocking-related comments and
submit a fresh patch soon.

>> FWIW, for me that's grounds to not accept the feature. Burdening output
>> plugins with this will make their development painful (because they'll
>> have to adapt regularly) and correctness doubtful (there's nothing
>> checking for the lock being skipped). Another way needs to be found.
>>
>
> I have to agree with Andres here.
>

Ok. Let's have another go at alleviating this issue then.

> I was wondering how to hide this. The best idea I've had so far would be
> to put it in heap_beginscan (and index_beginscan, given that catalog
> scans use it as well) behind some condition. That would also improve
> performance, because locking would not need to happen for syscache hits.
> The problem, however, is how to inform heap_beginscan about the fact
> that we are in 2PC decoding. We definitely don't want to change all the
> scan APIs for this. I wonder if we could add some kind of property to
> Snapshot which would indicate this fact - logical decoding uses its own
> snapshots, so it could inject the information about being inside 2PC
> decoding.
>

The idea of adding that info in the Snapshot itself is interesting. We
could introduce a logicalxid field in SnapshotData to point to the XID
that the decoding backend is interested in. This could be added only
for the 2PC case. Support in the future for in-progress transactions
could use this field as well. If it's a valid XID, we could call
LogicalLockTransaction/LogicalUnlockTransaction on that XID from
heap_beginscan/heap_endscan respectively. I can also look at what
other *_beginscan APIs would need this as well.

Regards,
Nikhils


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-30 17:27:18
Message-ID: f92d74fe-0e55-4c9b-a342-4c134591f1be@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 30/03/18 09:56, Nikhil Sontakke wrote:
>
>> I was wondering how to hide this. The best idea I've had so far would be
>> to put it in heap_beginscan (and index_beginscan, given that catalog
>> scans use it as well) behind some condition. That would also improve
>> performance, because locking would not need to happen for syscache hits.
>> The problem, however, is how to inform heap_beginscan about the fact
>> that we are in 2PC decoding. We definitely don't want to change all the
>> scan APIs for this. I wonder if we could add some kind of property to
>> Snapshot which would indicate this fact - logical decoding uses its own
>> snapshots, so it could inject the information about being inside 2PC
>> decoding.
>>
>
> The idea of adding that info in the Snapshot itself is interesting. We
> could introduce a logicalxid field in SnapshotData to point to the XID
> that the decoding backend is interested in. This could be added only
> for the 2PC case. Support in the future for in-progress transactions
> could use this field as well. If it's a valid XID, we could call
> LogicalLockTransaction/LogicalUnlockTransaction on that XID from
> heap_beginscan/heap_endscan respectively. I can also look at what
> other *_beginscan APIs would need this as well.
>

So I have spent some significant time today thinking about this (the
issue in general not this specific idea). And I think this proposal does
not work either.

The problem is that we fundamentally want two things, not one. It's true
we want to block ABORT from finishing while we are reading catalogs, but
the other important part is that we want to bail gracefully when ABORT
has happened for the transaction being decoded.

In other words, if we do the locking transparently somewhere in the
scan or catalog read or similar, there is no way to let the plugin know
that it should bail. So the locking code that's called from several
layers deep would have only one option: to ERROR. I don't think we want
to throw ERRORs when the transaction being decoded has been aborted,
as that disrupts replication.

I think that we basically only have two options here that can satisfy
both blocking ABORT and bailing gracefully in case ABORT has happened.
Either the plugin has full control over locking (as in the patch), so
that it can bail when the locking function reports that the transaction
has aborted. Or we do the locking around the plugin calls, ie directly
in logical decoding callback wrappers or similar.

Both of these options have some disadvantages. Locking inside the plugin
makes the plugin code much more complex if it wants to support this. For
example, if I as a plugin author call any function that accesses the
syscache somewhere, I have to do the locking around that function call.
Locking around plugin callbacks can hold the lock for longer periods of
time, since plugins usually end up writing to the network. I think for
most use-cases of 2PC decoding the latter is more useful, as the plugin
should be connected to some kind of transaction management solution.
Also, the time should be bounded by things like wal_sender_timeout (or
statement_timeout for the SQL variant of decoding).

Note that I was initially advocating against locking around whole
callbacks when Nikhil originally came up with the idea, but after we
went over several other options here and gave it a lot of thought, I now
think it's probably the least bad way we have available. At least until
somebody figures out how to solve all the issues around reading aborted
catalog changes, but that does seem like a rather large project on its
own. And if we do locking around plugin callbacks now, then we can
easily switch to that solution if it ever happens, without anybody
having to rewrite the plugins.
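The "locking around plugin callbacks" alternative can be sketched as a toy model like the one below. Everything here is hypothetical (the real wrappers live in the logical decoding callback layer and use the actual lock APIs); it only shows how a wrapper could take the lock, run the plugin callback, and bail gracefully on abort:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model: take the decode-group lock around the whole output
 * plugin callback, so the plugin itself never sees the locking API. */
typedef struct ModelCtx
{
    bool txn_aborted;   /* did the decoded transaction abort? */
    int  changes_sent;  /* how many changes reached the plugin */
} ModelCtx;

typedef void (*model_change_cb)(ModelCtx *ctx);

/* Returns false when the change was skipped because the transaction
 * aborted; the decoder would then stop processing this transaction
 * instead of throwing an ERROR. */
static bool
model_change_wrapper(ModelCtx *ctx, model_change_cb cb)
{
    if (ctx->txn_aborted)   /* stand-in for the lock call reporting abort */
        return false;       /* bail gracefully, no ERROR thrown */
    cb(ctx);                /* plugin callback runs under the lock */
    /* stand-in for releasing the lock here */
    return true;
}

static void
model_plugin_change(ModelCtx *ctx)
{
    ctx->changes_sent++;    /* plugin would write the change downstream */
}
```

The trade-off noted above is visible in the model: the lock is held for the whole callback, including any network writes the plugin performs, but the plugin code itself stays oblivious to the locking.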

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org,Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>,Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>,Craig Ringer <craig(at)2ndquadrant(dot)com>,Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>,Simon Riggs <simon(at)2ndquadrant(dot)com>,Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>,Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>,Dmitry Dolgov <9erthalion6(at)gmail(dot)com>,Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>,Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>,PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>,Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-30 17:36:27
Message-ID: 1750517D-1FC6-4C9E-845B-918B6E275A03@anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On March 30, 2018 10:27:18 AM PDT, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
>. Locking
>around plugin callbacks can hold the lock for longer periods of time,
>since plugins usually end up writing to the network. I think for most
>use-cases of 2PC decoding the latter is more useful, as the plugin should
>be connected to some kind of transaction management solution. Also, the
>time should be bounded by things like wal_sender_timeout (or
>statement_timeout for the SQL variant of decoding).

Quick thought: Should be simple to release the lock when interacting with the network. Could also have the abort signal the lockers.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


From: Nikhil Sontakke <nikhil(dot)sontakke(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-30 18:19:43
Message-ID: C4BBE697-D7E9-4DCE-80CA-D831D45CADCD@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>
> Quick thought: Should be simple to release the lock when interacting with the network.

I don’t think this will be that simple. The network calls will typically happen from inside the plugins and we don’t want to make plugin authors responsible for that.

> Could also have the abort signal the lockers.

With the decodegroup locking we do have access to all the decoding backend pids, so we could signal them. But I am not sure signaling will work if the plugin is in the midst of a network call.

I agree with Petr. With this decodegroup lock implementation we have an inexpensive but workable implementation of locking around the plugin call. Sure, the abort will be penalized, but it’s bounded by the WAL sender timeout or a max of one change-apply cycle. As he mentioned, if we can optimize this later we can do so without changing plugin coding semantics.

Regards,
Nikhils

>
> Andres
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)lists(dot)postgresql(dot)org, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-30 18:43:01
Message-ID: 47a3ab17-1a65-c747-1156-fb0c0a570204@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 30/03/18 19:36, Andres Freund wrote:
>
>
> On March 30, 2018 10:27:18 AM PDT, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
>> . Locking
>> around plugin callbacks can hold the lock for longer periods of time,
>> since plugins usually end up writing to the network. I think for most
>> use-cases of 2PC decoding the latter is more useful, as the plugin should
>> be connected to some kind of transaction management solution. Also, the
>> time should be bounded by things like wal_sender_timeout (or
>> statement_timeout for the SQL variant of decoding).
>
> Quick thought: Should be simple to release the lock when interacting with the network. Could also have the abort signal the lockers.
>

I thought about that as well, but then we need to change the API of the
write functions of logical decoding to return info about the transaction
having been aborted in the meantime, so that the plugin can bail. It
seems a bit ugly that those should know about it. Alternatively, we
would have to disallow multiple writes from a single plugin callback.
Otherwise an abort can happen during the network interaction without the
plugin noticing.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Nikhil Sontakke <nikhil(dot)sontakke(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-30 18:50:37
Message-ID: 20180330185037.ehgvu4lplcgwekqr@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 2018-03-30 23:49:43 +0530, Nikhil Sontakke wrote:
> > Quick thought: Should be simple to release the lock when interacting with the network.
>
> I don’t think this will be that simple. The network calls will
> typically happen from inside the plugins and we don’t want to make
> plugin authors responsible for that.

You can just throw the results away... ;). I'm not even kidding. We have
all the necessary access in the callback for writing from a context.

> > Could also have abort signal lockers.
>
> With the decodegroup locking we do have access to all the decoding backend pids. So we could signal them. But am not sure signaling will work if the plugin is in the midst of a network
> Call.

All walsender writes are nonblocking, so that's not an issue.

Greetings,

Andres Freund


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, Nikhil Sontakke <nikhil(dot)sontakke(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-30 19:05:29
Message-ID: 3cca7c66-2a5e-a3c1-a197-9bb107e0b69f@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 30/03/18 20:50, Andres Freund wrote:
> Hi,
>
> On 2018-03-30 23:49:43 +0530, Nikhil Sontakke wrote:
>>> Quick thought: Should be simple to release lock when interacting with network.
>>
>> I don’t think this will be that simple. The network calls will
>> typically happen from inside the plugins and we don’t want to make
>> plugin authors responsible for that.
>
> You can just throw the results away... ;). I'm not even kidding. We have
> all the necessary access in the callback for writing from a context.
>

You mean, if we detect an abort in the write callback, set something in
the context which will make all the future writes no-ops until it's
reset again after we yield back to the logical decoding?

That's not the most beautiful design I've seen, but I'd be okay with
it; it seems like it would solve all the issues we have with this.
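The noop-after-abort idea being agreed on here could look roughly like the following toy model (all names invented; the real write path is the walsender's nonblocking write callbacks, and the "reset" would happen when control returns to the decoding loop):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of "make further writes no-ops once an abort is detected",
 * resetting when control returns to the decoding loop. */
typedef struct ModelWriteCtx
{
    bool discard_writes;    /* set on abort, cleared on yield */
    int  bytes_written;
} ModelWriteCtx;

static void
model_write(ModelWriteCtx *ctx, int nbytes, bool abort_detected)
{
    if (abort_detected)
        ctx->discard_writes = true; /* from here on, throw results away */
    if (ctx->discard_writes)
        return;                     /* noop: data from an aborted txn */
    ctx->bytes_written += nbytes;
}

static void
model_yield_to_decoding(ModelWriteCtx *ctx)
{
    ctx->discard_writes = false;    /* reset once back in the decoder */
}
```

This keeps the plugin oblivious: it keeps calling the write function, but nothing for the aborted transaction actually reaches the network.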

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhil(dot)sontakke(at)2ndquadrant(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-03-30 19:07:14
Message-ID: 20180330190714.ykicf7eqoqhegtrs@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 2018-03-30 21:05:29 +0200, Petr Jelinek wrote:
> You mean, if we detect an abort in the write callback, set something in
> the context which will make all the future writes no-ops until it's
> reset again after we yield back to the logical decoding?

Something like that, yea. I *think* doing it via signalling is going to
be a more efficient design than constantly checking, but I've not
thought it fully through.

> That's not the most beautiful design I've seen, but I'd be okay with
> it; it seems like it would solve all the issues we have with this.

Yea, it's not too pretty, but seems pragmatic.

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-02 07:49:24
Message-ID: CAMGcDxeUFoGM-nVent3qiOaYKr3KdxRfL=BDhS0vpjM-oLha_Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi Tomas,

> Thanks. I think the README is a good start, but I think we also need to
> improve the comments, which are usually more detailed than the README.
> For example, it's not quite acceptable that LogicalLockTransaction and
> LogicalUnlockTransaction have almost no comments, especially when they're
> meant to be a public API for decoding plugins.
>

Additional documentation around the APIs has been incorporated from your review patch.

>
>>>
>>> 2) regression tests
>>> -------------------
>> They are long-winded queries and IMO made the test file look too
>> cluttered and verbose.
>>
>
> Well, I don't think that's a major problem, and it certainly makes it
> more difficult to investigate regression failures.
>

Changed the test files to use the actual queries everywhere now.

> Now, the new bits ... attached is a .diff with a couple of changes and
> comments on various places.
>
> 1) LogicalLockTransaction
>
> - This function is part of a public API, yet it has no comment. That
> needs fixing - it has to be clear how to use it. The .diff suggests a
> comment, but it may need improvements.
>

Done.

>
> - As I mentioned in the previous review, BecomeDecodeGroupLeader is a
> misleading name. It suggest the called becomes a leader, while in fact
> it looks up the PROC running the XID and makes it a leader. This is
> obviously due to copying the code from lock groups, where the caller
> actually becomes the leader. It's incorrect here. I suggest something
> like LookupDecodeGroupLeader() or something.
>

Done. Used AssignDecodeGroupLeader() as the function name now.

>
> - In the "if (MyProc->decodeGroupLeader == NULL)" block there are two
> blocks rechecking the transaction status:
>
> if (proc == NULL)
> { ... recheck ... }
>
> if (!BecomeDecodeGroupMember(proc, proc->pid, rbtxn_prepared(txn)))
> { ... recheck ...}
>
> I suggest to join them into a single block.
>

Done. Combined into a single block.

>
> - This Assert() is either bogus and there can indeed be cases with
> (MyProc->decodeGroupLeader==NULL), or the "if" is unnecessary:
>
> Assert(MyProc->decodeGroupLeader);
>
> if (MyProc->decodeGroupLeader) { ... }
>

Done. Removed the assert now.

> - I'm wondering why we're maintaining decodeAbortPending flags both for
> the leader and all the members. ISTM it'd be perfectly fine to only
> check the leader, particularly because RemoveDecodeGroupMemberLocked
> removes the members from the decoding group. So that seems unnecessary,
> and we can remove the
>
> if (MyProc->decodeAbortPending)
> { ... }
>

IMO, this makes it clearer that each proc has been notified that an
abort is pending.

> - LogicalUnlockTransaction needs a comment(s) too.
>

Done.

>
> 2) BecomeDecodeGroupLeader
>
> - It can bail out when (!proc), which will simplify the code a bit.
>

Done.

> - Why does it check PID of the process at all? Seems unnecessary,
> considering we're already checking the XID.
>

Agreed. Especially in the current 2PC case, the proc will have a pid of 0.

> - Can a proc executing a XID have a different leader? I don't think so,
> so I'd make that an Assert().
>
> Assert(!proc || (proc->decodeGroupLeader == proc));
>
> And it'll allow simplification of some of the conditions.
>

Done.

> - We're only dealing with prepared transactions now, so I'd just drop
> the is_prepared flag - it'll make the code a bit simpler, we can add it
> later in patch adding decoding of regular in-progress transactions. We
> can't test the (!is_prepared) anyway.
>

Done.

> - Why are we making the leader also a member of the group? Seems rather
> unnecessary, and it complicates the abort handling, because we need to
> skip the leader when deciding to wait.
>

The leader is part of the decode group. Other than not waiting for ourselves
at abort time, there are no other coding complications AFAICS.

>
> 3) LogicalDecodeRemoveTransaction
>
> - It's not clear to me what happens when a decoding backend gets killed
> between LogicalLockTransaction/LogicalUnlockTransaction. Doesn't that
> mean LogicalDecodeRemoveTransaction will get stuck, because the proc is
> still in the decoding group?
>

SIGSEGV, SIGABRT, and SIGKILL will all cause the PG instance to restart
because of possible shmem corruption issues, so I don't think the above
scenario will arise. I also did not see any related handling in the
parallel lock group case.

>
> 4) a bunch of comment / docs improvements, ...
>
> I'm suggesting rewording a couple of comments. I've also added a couple
> of missing comments - e.g. to LogicalLockTransaction and the lock group
> methods in general.
>
> Also, a couple more questions and suggestions in XXX comments.
>

Incorporated relevant changes in the new patchset.

Andres, Petr:

As discussed, I have now added lock/unlock API calls around the
"apply_change" callback. This callback is now free to consult catalog
metadata without worrying about a concurrent rollback operation. Have
removed direct logicallock/logicalunlock calls from inside the
pgoutput and test_decoding plugins now. Also modified the sgml
documentation appropriately.

I am looking at how we can further optimize this via the two discussed
approaches: signaling the abort, or adding abort-related info in the
context. But this will be an additional patch on top of this patch set
anyway.

Regards,
Nikhils

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.0204.patch application/octet-stream 7.3 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.0204.patch application/octet-stream 28.5 KB
0003-Support-decoding-of-two-phase-transactions-at-PREPAR.0204.patch application/octet-stream 32.7 KB
0004-pgoutput-output-plugin-support-for-logical-decoding-.0204.patch application/octet-stream 33.2 KB
0005-Teach-test_decoding-plugin-to-work-with-2PC.0204.patch application/octet-stream 25.3 KB
0006-Optional-Additional-test-case-to-demonstrate-decoding-rollbac.0204.patch application/octet-stream 9.6 KB

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-02 08:23:10
Message-ID: CANP8+jLZy6Pxqp2Vxo20OuR5v_B1hEywR6Hp7DHwNn6Yi3B-Dw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 29 March 2018 at 23:24, Andres Freund <andres(at)anarazel(dot)de> wrote:

>> I agree with the former, of course - docs are a must. I disagree with
>> the latter, though - there have been about no proposals how to do it
>> without the locking. If there are, I'd like to hear about it.
>
> I don't care. Either another solution needs to be found, or the locking
> needs to be automatically performed when necessary.

That seems unreasonable.

It's certainly a nice future goal to have it all happen automatically,
but we don't know what the plugin will do.

How can we ever make an unknown task happen automatically? We can't.

We have a reasonable approach here. Locking shared resources before
using them is not a radical new approach, it's just standard
development. If we find a better way in the future, we can use that,
but requiring a better solution when there isn't one is unreasonable.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-02 08:28:57
Message-ID: CANP8+jJ4eiqA-1TLBj7UX98z4zJojDQt0FczxEey4n3YRvs9YA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 29 March 2018 at 23:30, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
> On 29/03/18 23:58, Andres Freund wrote:
>> On 2018-03-29 23:52:18 +0200, Tomas Vondra wrote:
>>>> I have added details about this in src/backend/storage/lmgr/README as
>>>> suggested by you.
>>>>
>>>
>>> Thanks. I think the README is a good start, but I think we also need to
>>> improve the comments, which is usually more detailed than the README.
>>> For example, it's not quite acceptable that LogicalLockTransaction and
>>> LogicalUnlockTransaction have about no comments, especially when it's
>>> meant to be public API for decoding plugins.
>>
>> FWIW, for me that's ground to not accept the feature. Burdening output
>> plugins with this will make their development painful (because they'll
>> have to adapt regularly) and correctness doubful (there's nothing
>> checking for the lock being skipped). Another way needs to be found.
>>
>
> I have to agree with Andres here. It's also visible in the latter
> patches. The pgoutput patch forgets to call these new APIs completely.
> The test_decoding calls them, but it does so even when it's processing
> changes for a committed transaction. I think that should be avoided as it
> means potentially doing SLRU lookup for every change. So doing it right
> is indeed not easy.

Yet you spotted these problems easily enough. Similar to finding
missing LWLocks.

> I was wondering how to hide this. The best idea I had so far would be to put
> it in heap_beginscan (and index_beginscan, given that catalog scans use
> it as well) behind some condition. That would also improve performance
> because locking would not need to happen for syscache hits. The problem
> is however how to inform heap_beginscan about the fact that we are
> in 2PC decoding. We definitely don't want to change all the scan APIs
> for this. I wonder if we could add some kind of property to Snapshot
> which would indicate this fact - since logical decoding uses its own
> snapshots, it could inject the information about being inside the 2PC
> decoding.

Perhaps, but how do we know we've covered all the right places? We
don't know what every plugin will require, do we?

The plugin needs to take responsibility for its own correctness,
whether we make it easier or not.

It seems clear that we would need a generalized API (the proposed
locking approach) to cover all requirements.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-02 18:04:51
Message-ID: 20180402180451.ip3uuvadhgxjvefo@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2018-04-02 09:23:10 +0100, Simon Riggs wrote:
> On 29 March 2018 at 23:24, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> >> I agree with the former, of course - docs are a must. I disagree with
> >> the latter, though - there have been about no proposals how to do it
> >> without the locking. If there are, I'd like to hear about it.
> >
> > I don't care. Either another solution needs to be found, or the locking
> > needs to be automatically performed when necessary.
>
> That seems unreasonable.

> It's certainly a nice future goal to have it all happen automatically,
> but we don't know what the plugin will do.

No, fighting too complicated APIs is not unreasonable. And we've found
an alternative.

> How can we ever make an unknown task happen automatically? We can't.

The task isn't unknown, so this just seems like a non sequitur.

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 10:40:43
Message-ID: CAMGcDxchx=0PeQBVLzrgYG2AQ49QSRxHj5DCp7yy0QrJR0S0nA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

>> It's certainly a nice future goal to have it all happen automatically,
>> but we don't know what the plugin will do.
>
> No, fighting too complicated APIs is not unreasonable. And we've found
> an alternative.
>

PFA, latest patch set.

The LogicalLockTransaction/LogicalUnlockTransaction API implementation
using decode groups now has proper cleanup handling in case there's an
ERROR while holding the logical lock.

Rest of the patches are the same as yesterday.

Other than this, would we want pgoutput support for 2PC
decoding to be made optional? In that case we could add an option to
"CREATE SUBSCRIPTION". This would mean adding a new
Anum_pg_subscription_subenable_twophase attribute to the Subscription
struct and related processing. Should we go down this route?

Other than this, unless I am mistaken, every other issue has been taken
care of. Please do let me know if you think anything is pending in
these patch sets.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.0304.patch application/octet-stream 7.3 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.0304.patch application/octet-stream 31.2 KB
0003-Support-decoding-of-two-phase-transactions-at-PREPAR.0304.patch application/octet-stream 32.7 KB
0004-pgoutput-output-plugin-support-for-logical-decoding-.0304.patch application/octet-stream 33.2 KB
0005-Teach-test_decoding-plugin-to-work-with-2PC.0304.patch application/octet-stream 25.3 KB
0006-Optional-Additional-test-case-to-demonstrate-decoding-rollbac.0304.patch application/octet-stream 9.6 KB

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 13:56:50
Message-ID: a76f3f12-ed44-b724-1c66-d13d1920fcab@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 04/03/2018 12:40 PM, Nikhil Sontakke wrote:
> Hi,
>
>>> It's certainly a nice future goal to have it all happen automatically,
>>> but we don't know what the plugin will do.
>>
>> No, fighting too complicated APIs is not unreasonable. And we've found
>> an alternative.
>>
>
> PFA, latest patch set.
>
> The LogicalLockTransaction/LogicalUnlockTransaction API implementation
> using decode groups now has proper cleanup handling in case there's an
> ERROR while holding the logical lock.
>
> Rest of the patches are the same as yesterday.
>

Unfortunately, this does segfault for me in `make check` almost
immediately. Try

./configure --enable-debug --enable-cassert CFLAGS="-O0 -ggdb3
-DRANDOMIZE_ALLOCATED_MEMORY" && make -s clean && make -s -j4 check

and you should get an assert failure right away. Examples of backtraces
attached, not sure what exactly is the issue.

Also, I get this compiler warning:

proc.c: In function ‘AssignDecodeGroupLeader’:
proc.c:1975:8: warning: variable ‘pid’ set but not used
[-Wunused-but-set-variable]
int pid;
^~~
All of PostgreSQL successfully made. Ready to install.

which suggests we don't really need the pid variable.

> Other than this, we would want to have pgoutput support for 2PC
> decoding to be made optional? In that case we could add an option to
> "CREATE SUBSCRIPTION". This will mean adding a new
> Anum_pg_subscription_subenable_twophase attribute to Subscription
> struct and related processing. Should we go down this route?
>

I'd say yes, we need to make it opt-in (assuming we want pgoutput to
support the 2PC decoding at all).

The trouble is that while it may improve replication of two-phase
transactions, it may also require config changes on the subscriber (to
support enough prepared transactions) and furthermore the GID is going
to be copied to the subscriber.

Which means that if the publisher/subscriber (at the instance level) are
already part of the same 2PC transaction, it can't possibly proceed
because the subscriber won't be able to do PREPARE TRANSACTION.

So I think we need a subscription parameter to enable/disable this,
defaulting to 'disabled'.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment Content-Type Size
backtraces.txt text/plain 3.8 KB

From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 14:07:28
Message-ID: 94CEE786-EBC3-40D6-B986-6F0E0ACA1791@postgrespro.ru
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

> On 3 Apr 2018, at 16:56, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>
>
> So I think we need a subscription parameter to enable/disable this,
> defaulting to 'disabled'.

+1

Also, current value for LOGICALREP_IS_COMMIT is 1, but previous code expected
flags to be zero, so this way logical replication between postgres-10 and
postgres-with-2pc-decoding will be broken. So ISTM it’s better to set
LOGICALREP_IS_COMMIT to zero and change flags checking rules to accommodate that.

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 14:34:52
Message-ID: 7f33efcf-d28d-cc22-3430-665adbb0dbf2@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 04/03/2018 04:07 PM, Stas Kelvich wrote:
>
>
>> On 3 Apr 2018, at 16:56, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>>
>>
>> So I think we need a subscription parameter to enable/disable this,
>> defaulting to 'disabled’.
>
> +1
>
> Also, current value for LOGICALREP_IS_COMMIT is 1, but previous code expected
> flags to be zero, so this way logical replication between postgres-10 and
> postgres-with-2pc-decoding will be broken. So ISTM it’s better to set
> LOGICALREP_IS_COMMIT to zero and change flags checking rules to accommodate that.
>

Yes, that is a good point actually - we need to test that replication
between PG10 and PG11 works correctly, i.e. that the protocol version is
correctly negotiated, and features are disabled/enabled accordingly etc.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 14:37:36
Message-ID: 20180403143736.z42lxviikrf4iyxc@alvherre.pgsql
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Tomas Vondra wrote:

> Yes, that is a good point actually - we need to test that replication
> between PG10 and PG11 works correctly, i.e. that the protocol version is
> correctly negotiated, and features are disabled/enabled accordingly etc.

Maybe it'd be good to have a buildfarm animal to specifically test for
that?

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 14:38:55
Message-ID: 52ebcc54-9952-79da-64fe-f28f539bec00@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 04/03/2018 04:37 PM, Alvaro Herrera wrote:
> Tomas Vondra wrote:
>
>> Yes, that is a good point actually - we need to test that replication
>> between PG10 and PG11 works correctly, i.e. that the protocol version is
>> correctly negotiated, and features are disabled/enabled accordingly etc.
>
> Maybe it'd be good to have a buildfarm animal to specifically test for
> that?
>

Not sure the buildfarm easily supports running two clusters with
different versions?

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 14:55:05
Message-ID: 6102.1522767305@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
> On 04/03/2018 04:37 PM, Alvaro Herrera wrote:
>> Tomas Vondra wrote:
>>> Yes, that is a good point actually - we need to test that replication
>>> between PG10 and PG11 works correctly, i.e. that the protocol version is
>>> correctly negotiated, and features are disabled/enabled accordingly etc.

>> Maybe it'd be good to have a buildfarm animal to specifically test for
>> that?

> Not sure a buildfarm supports running two clusters with different
> versions easily?

You'd need some specialized buildfarm infrastructure like --- maybe the
same as --- the infrastructure for testing cross-version pg_upgrade.
Andrew could speak to the details better than I.

regards, tom lane


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 14:59:27
Message-ID: 782517ca-1e99-06fe-eba9-34f826118289@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

FWIW, a couple of additional comments based on eyeballing the diffs:

1) twophase.c
---------

I think this comment is slightly inaccurate:

/*
* Coordinate with logical decoding backends that may be already
* decoding this prepared transaction. When aborting a transaction,
* we need to wait for all of them to leave the decoding group. If
* committing, we simply remove all members from the group.
*/

Strictly speaking, we're not waiting for the workers to leave the
decoding group, but to set decodeLocked=false. That is, we may proceed
when there still are members, but they must be in unlocked state.

2) reorderbuffer.c
------------------

I've already said it before, I find the "flags" bitmask and rbtxn_*
macros way less readable than individual boolean flags. It was claimed
this was done on Andres' request, but I don't see that in the thread. I
admit it's rather subjective, though.

I see ReorderBuffer only does the lock/unlock around apply_change and
RelationIdGetRelation. That seems insufficient - RelidByRelfilenode can
do heap_open on pg_class, for example. And I guess we need to protect
rb->message too, because who knows what the plugin does in the callback?

Also, we should not allocate gid[GIDSIZE] for every transaction. For
example subxacts never need it, and it seems rather wasteful to allocate
200B when the rest of the struct has only ~100B. This is particularly
problematic considering ReorderBufferTXN is not spilled to disk when
reaching the memory limit. It needs to be allocated ad-hoc only when
actually needed.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 15:15:52
Message-ID: CAMGcDxdKBYBTUOgGh7MGjb89Nm2JLht3iHWoYopTL820t9-wuQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi Tomas,

>> Unfortunately, this does segfault for me in `make check` almost
>> immediately. Try

This is due to the new ERROR handling code that I added today for the
lock/unlock APIs. Will fix.

>> Also, current value for LOGICALREP_IS_COMMIT is 1, but previous code expected
>> flags to be zero, so this way logical replication between postgres-10 and
>> postgres-with-2pc-decoding will be broken.

Good point. Will also test pg-10 to pg-11 logical replication to
ensure that there are no issues.

>> So I think we need a subscription parameter to enable/disable this,
>> defaulting to 'disabled'.

Ok, will add it to "CREATE SUBSCRIPTION". BTW, we should have
allowed storing options in array form for a subscription. We might
add more options in the future, and adding fields one by one doesn't
seem that extensible.

> 1) twophase.c
> ---------
>
> I think this comment is slightly inaccurate:
>
> /*
> * Coordinate with logical decoding backends that may be already
> * decoding this prepared transaction. When aborting a transaction,
> * we need to wait for all of them to leave the decoding group. If
> * committing, we simply remove all members from the group.
> */
>
> Strictly speaking, we're not waiting for the workers to leave the
> decoding group, but to set decodeLocked=false. That is, we may proceed
> when there still are members, but they must be in unlocked state.
>

Agreed. Will modify it to mention that it will wait only if some of
the backends are in locked state.

>
> 2) reorderbuffer.c
> ------------------
>
> I've already said it before, I find the "flags" bitmask and rbtxn_*
> macros way less readable than individual boolean flags. It was claimed
> this was done on Andres' request, but I don't see that in the thread. I
> admit it's rather subjective, though.
>

Yeah, this is a little subjective.

> I see ReorederBuffer only does the lock/unlock around apply_change and
> RelationIdGetRelation. That seems insufficient - RelidByRelfilenode can
> do heap_open on pg_class, for example. And I guess we need to protect
> rb->message too, because who knows what the plugin does in the callback?
>
> Also, we should not allocate gid[GIDSIZE] for every transaction. For
> example subxacts never need it, and it seems rather wasteful to allocate
> 200B when the rest of the struct has only ~100B. This is particularly
> problematic considering ReorderBufferTXN is not spilled to disk when
> reaching the memory limit. It needs to be allocated ad-hoc only when
> actually needed.
>

OK, will look at allocating GID only when needed.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-03 22:05:18
Message-ID: CAA8=A7_1b5yO+MUoMKVKkJbnwi=VHt3jd5uxqNCXmxDTUrHCQw@mail.gmail.com
Lists: pgsql-hackers

On Wed, Apr 4, 2018 at 12:25 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
>> On 04/03/2018 04:37 PM, Alvaro Herrera wrote:
>>> Tomas Vondra wrote:
>>>> Yes, that is a good point actually - we need to test that replication
>>>> between PG10 and PG11 works correctly, i.e. that the protocol version is
>>>> correctly negotiated, and features are disabled/enabled accordingly etc.
>
>>> Maybe it'd be good to have a buildfarm animal to specifically test for
>>> that?
>
>> Not sure a buildfarm supports running two clusters with different
>> versions easily?
>
> You'd need some specialized buildfarm infrastructure like --- maybe the
> same as --- the infrastructure for testing cross-version pg_upgrade.
> Andrew could speak to the details better than I.
>

It's quite possible. The cross-version upgrade module saves out each
built version. See
<https://github.com/PGBuildFarm/client-code/blob/master/PGBuild/Modules/TestUpgradeXversion.pm>

Since this occupies a significant amount of disk space we'd probably
want to leverage it rather than have another module do the same thing.
Perhaps the "save" part of it needs to be factored out.

In any case, it's quite doable. I can work on that after this gets committed.

Currently we seem to have only two machines doing the cross-version
upgrade checks, which might make it easier to rearrange anything if
necessary.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-04 09:53:37
Message-ID: CAMGcDxc-kuO9uq0zRCRwbHWBj_rePY9=raR7M9pZGWoj9EOGdg@mail.gmail.com
Lists: pgsql-hackers

> This is due to the new ERROR handling code that I added today for the
> lock/unlock APIs. Will fix.
>

Fixed. I continue to test this area for other issues.

>>> Also, current value for LOGICALREP_IS_COMMIT is 1, but previous code expected
> flags to be zero, so this way logical replication between postgres-10 and
> postgres-with-2pc-decoding will be broken.
>
> Good point. Will also test pg-10 to pg-11 logical replication to
> ensure that there are no issues.
>

I started making changes to support replication between postgres-10
and postgres-11, but very quickly realized that pgoutput support is
too far from done. It needs to be optional and per subscription. It
definitely needs a proto version bump, and we don't even have a
framework for negotiating the proto version yet (since the proto has
never been bumped), so a chunk of completely new code is missing. For
demo and functionality purposes we have test_decoding support for 2PC
decoding in this patch set. External plugins like bdr and pglogical
will be able to leverage this infrastructure as well.

Importantly, since we don't do negotiation, PG10 -> PG11 replication
is not possible, which rules out one of the most important current use
cases. To add support in pgoutput, we'd first have to get
multi-protocol publisher/subscriber communication working as a
prerequisite. The good thing is that once we get the proto stuff in,
we can easily add the patch from the earlier patchset which provides
full 2PC decoding support in pgoutput.
Thoughts?

So, we should consider not adding pgoutput support right away, and I
have removed that patch from this patchset for now. Another benefit of
not working on pgoutput is that we need not worry about immediately
adding an enable_twophase option to CREATE SUBSCRIPTION either. The
test_decoding plugin is easy to extend with options, and the patch set
already does that for enabling/disabling 2PC decoding in it.

>>> So I think we need a subscription parameter to enable/disable this,
> defaulting to 'disabled'.
>
> Ok, will add it to the "CREATE SUBSCRIPTION", btw, we should have
> allowed storing options in an array form for a subscription. We might
> add more options in the future and adding fields one by one doesn't
> seem that extensible.
>

This is not needed for now, since we are not pursuing pgoutput 2PC decoding support yet.

>
>> 1) twophase.c
>> ---------
>>
>> I think this comment is slightly inaccurate:
>>
>> /*
>> * Coordinate with logical decoding backends that may be already
>> * decoding this prepared transaction. When aborting a transaction,
>> * we need to wait for all of them to leave the decoding group. If
>> * committing, we simply remove all members from the group.
>> */
>>
>> Strictly speaking, we're not waiting for the workers to leave the
>> decoding group, but to set decodeLocked=false. That is, we may proceed
>> when there still are members, but they must be in unlocked state.
>>
>
> Agreed. Will modify it to mention that it will wait only if some of
> the backends are in locked state.
>

Modified the comment.

>>
>> 2) reorderbuffer.c
>> ------------------
>>
>> I've already said it before, I find the "flags" bitmask and rbtxn_*
>> macros way less readable than individual boolean flags. It was claimed
>> this was done on Andres' request, but I don't see that in the thread. I
>> admit it's rather subjective, though.
>>
>
> Yeah, this is a little subjective.
>

If the committer has strong opinions on this, then I can whip up
patches along desired lines.

>> I see ReorderBuffer only does the lock/unlock around apply_change and
>> RelationIdGetRelation. That seems insufficient - RelidByRelfilenode can
>> do heap_open on pg_class, for example. And I guess we need to protect
>> rb->message too, because who knows what the plugin does in the callback?
>>

Added lock/unlock APIs around rb->message and other places where
Relations are fetched.

>> Also, we should not allocate gid[GIDSIZE] for every transaction. For
>> example subxacts never need it, and it seems rather wasteful to allocate
>> 200B when the rest of the struct has only ~100B. This is particularly
>> problematic considering ReorderBufferTXN is not spilled to disk when
>> reaching the memory limit. It needs to be allocated ad-hoc only when
>> actually needed.
>>
>
> OK, will look at allocating GID only when needed.
>
Done. Now GID is a char pointer and gets palloc'ed and pfree'd.

PFA, latest patchset.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.0404.patch application/octet-stream 7.3 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.0404.patch application/octet-stream 32.4 KB
0003-Support-decoding-of-two-phase-transactions-at-PREPAR.0404.patch application/octet-stream 43.3 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.0404.patch application/octet-stream 25.3 KB
0005-Optional-Additional-test-case-to-demonstrate-decoding-rollbac.0404.patch application/octet-stream 4.1 KB

From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-04 11:58:54
Message-ID: CAMGcDxc3sc-C928toHEvj2dH9t=wTP=BZwUv3rE1bnHzTr9R6Q@mail.gmail.com
Lists: pgsql-hackers

>> This is due to the new ERROR handling code that I added today for the
>> lock/unlock APIs. Will fix.
>>
>
> Fixed. I continue to test this area for other issues.
>

Revised the patch after more testing and added more documentation in
the ERROR handling code path.

I tested ERROR handling by ensuring that the LogicalLock is held by
multiple backends and inducing an ERROR while it is held. The handling
in ProcKill rightly removes these backends' entries as part of ERROR
cleanup. A later ROLLBACK then appropriately removes the single
remaining entry, belonging to the leader, from the decodeGroup. Seems
to be holding up ok.

I had also missed a new test file for the optional 0005 patch earlier;
that's included now.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.0404.v2.0.patch application/octet-stream 7.3 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.0404.v2.0.patch application/octet-stream 33.0 KB
0003-Support-decoding-of-two-phase-transactions-at-PREPAR.0404.v2.0.patch application/octet-stream 43.3 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.0404.v2.0.patch application/octet-stream 25.3 KB
0005-Optional-Additional-test-case-to-demonstrate-decoding-rollbac.0404.v2.0.patch application/octet-stream 9.7 KB

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-04 16:22:54
Message-ID: 9cac2837-7e58-1cad-3940-27ac0dd7c198@2ndquadrant.com
Lists: pgsql-hackers

Hi,

I think the patch looks mostly fine. I'm about to do a bit more testing
on it, but first a few comments. The attached diff shows the discussed
places / comments more closely.

1) There's a race condition in LogicalLockTransaction. The code does
roughly this:

    if (!BecomeDecodeGroupMember(...))
        ... bail out ...

    Assert(MyProc->decodeGroupLeader);
    lwlock = LockHashPartitionLockByProc(MyProc->decodeGroupLeader);
    ...

but AFAICS there is no guarantee that the transaction does not commit
(or even abort) right after we become a decode group member. In that
case LogicalDecodeRemoveTransaction might have already reset our
pointer to the leader to NULL, and the Assert() and lock will fail.

I've initially thought this can be fixed by setting decodeLocked=true in
BecomeDecodeGroupMember, but that's not really true - that would fix the
race for aborts, but not commits. LogicalDecodeRemoveTransaction skips
the wait for commits entirely, and just resets the flags anyway.

So this needs a different fix, I think. BecomeDecodeGroupMember also
needs the leader PGPROC pointer, but it does not have the issue because
it gets it as a parameter. I think the same thing would work for here
too - that is, use the AssignDecodeGroupLeader() result instead.

2) BecomeDecodeGroupMember sets the decodeGroupLeader=NULL when the
leader does not match the parameters, despite enforcing it by Assert()
at the beginning. Let's remove that assignment.

3) I don't quite understand why BecomeDecodeGroupMember does the
cross-check using PID. In which case would it help?

4) AssignDecodeGroupLeader still sets pid, which is never read. Remove.

5) ReorderBufferCommitInternal does elog(LOG) about interrupting the
decoding of aborted transaction only in one place. There are about three
other places where we check LogicalLockTransaction. Seems inconsistent.

6) The comment before LogicalLockTransaction is somewhat inaccurate,
because it talks about adding/removing the backend to the group, but
that's not what's happening. We join the group on the first call and
then we only tweak the decodeLocked flag.

7) I propose minor changes to a couple of comments.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment Content-Type Size
2pc-review.diff text/x-patch 6.4 KB

From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-05 06:50:42
Message-ID: CAMGcDxfAnJtJ17hejKFDa3xXS2OWpys+p-PvGS+MZj4h73QqPA@mail.gmail.com
Lists: pgsql-hackers

Hi Tomas,

> 1) There's a race condition in LogicalLockTransaction. The code does
> roughly this:
>
> if (!BecomeDecodeGroupMember(...))
> ... bail out ...
>
> Assert(MyProc->decodeGroupLeader);
> lwlock = LockHashPartitionLockByProc(MyProc->decodeGroupLeader);
> ...
>
> but AFAICS there is no guarantee that the transaction does not commit
> (or even abort) right after the become decode group member. In which
> case LogicalDecodeRemoveTransaction might have already reset our pointer
> to a leader to NULL. In which case the Assert() and lock will fail.
>
> I've initially thought this can be fixed by setting decodeLocked=true in
> BecomeDecodeGroupMember, but that's not really true - that would fix the
> race for aborts, but not commits. LogicalDecodeRemoveTransaction skips
> the wait for commits entirely, and just resets the flags anyway.
>
> So this needs a different fix, I think. BecomeDecodeGroupMember also
> needs the leader PGPROC pointer, but it does not have the issue because
> it gets it as a parameter. I think the same thing would work for here
> too - that is, use the AssignDecodeGroupLeader() result instead.
>

That's a good catch. One of the earlier patches had a check for this
(it also had an ill-placed assert above though) which we removed as
part of the ongoing review.

Instead of doing the above, we can just re-check if the
decodeGroupLeader pointer has become NULL and if so, re-assert that
the leader has indeed gone away before returning false. I propose a
diff like below.

  /*
   * If we were able to add ourself, then Abort processing will
-  * interlock with us.
+  * interlock with us. If the leader was done in the meanwhile
+  * it could have removed us and gone away as well.
   */
- Assert(MyProc->decodeGroupLeader);
+ if (MyProc->decodeGroupLeader == NULL)
+ {
+     Assert(BackendXidGetProc(txn->xid) == NULL);
+     return false;
+ }

>
> 2) BecomeDecodeGroupMember sets the decodeGroupLeader=NULL when the
> leader does not match the parameters, despite enforcing it by Assert()
> at the beginning. Let's remove that assignment.
>

Ok, done.

>
> 3) I don't quite understand why BecomeDecodeGroupMember does the
> cross-check using PID. In which case would it help?
>

When I wrote this support, I intended it to handle both 2PC (in which
case the pid is 0) and in-progress regular transactions; hence the
presence of the PID in these functions. The current use case is just
2PC, so we could remove it.

>
> 4) AssignDecodeGroupLeader still sets pid, which is never read. Remove.
>

Ok, will do.

>
> 5) ReorderBufferCommitInternal does elog(LOG) about interrupting the
> decoding of aborted transaction only in one place. There are about three
> other places where we check LogicalLockTransaction. Seems inconsistent.
>

Note that I have added it for the OPTIONAL test_decoding test cases
(which AFAIK we don't plan to commit in that state) which demonstrate
concurrent rollback interlocking with the lock/unlock APIs. The first
ELOG was enough to catch the interaction. If we think these elogs
should be present in the code, then yes, I can add them elsewhere as
well, as part of an earlier patch.

>
> 6) The comment before LogicalLockTransaction is somewhat inaccurate,
> because it talks about adding/removing the backend to the group, but
> that's not what's happening. We join the group on the first call and
> then we only tweak the decodeLocked flag.
>

True.

>
> 7) I propose minor changes to a couple of comments.
>

Ok, I am looking at your provided patch and incorporating relevant
changes from it. Will submit a patch set soon.

Regards,
Nikhils

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-05 08:45:49
Message-ID: b1c6ba51-436c-b10d-895f-3805d3caf4a5@2ndquadrant.com
Lists: pgsql-hackers

On 4/5/18 8:50 AM, Nikhil Sontakke wrote:
> Hi Tomas,
>
>> 1) There's a race condition in LogicalLockTransaction. The code does
>> roughly this:
>>
>> if (!BecomeDecodeGroupMember(...))
>> ... bail out ...
>>
>> Assert(MyProc->decodeGroupLeader);
>> lwlock = LockHashPartitionLockByProc(MyProc->decodeGroupLeader);
>> ...
>>
>> but AFAICS there is no guarantee that the transaction does not commit
>> (or even abort) right after the become decode group member. In which
>> case LogicalDecodeRemoveTransaction might have already reset our pointer
>> to a leader to NULL. In which case the Assert() and lock will fail.
>>
>> I've initially thought this can be fixed by setting decodeLocked=true in
>> BecomeDecodeGroupMember, but that's not really true - that would fix the
>> race for aborts, but not commits. LogicalDecodeRemoveTransaction skips
>> the wait for commits entirely, and just resets the flags anyway.
>>
>> So this needs a different fix, I think. BecomeDecodeGroupMember also
>> needs the leader PGPROC pointer, but it does not have the issue because
>> it gets it as a parameter. I think the same thing would work for here
>> too - that is, use the AssignDecodeGroupLeader() result instead.
>>
>
> That's a good catch. One of the earlier patches had a check for this
> (it also had an ill-placed assert above though) which we removed as
> part of the ongoing review.
>
> Instead of doing the above, we can just re-check if the
> decodeGroupLeader pointer has become NULL and if so, re-assert that
> the leader has indeed gone away before returning false. I propose a
> diff like below.
>
>   /*
>    * If we were able to add ourself, then Abort processing will
> -  * interlock with us.
> +  * interlock with us. If the leader was done in the meanwhile
> +  * it could have removed us and gone away as well.
>    */
> - Assert(MyProc->decodeGroupLeader);
> + if (MyProc->decodeGroupLeader == NULL)
> + {
> +     Assert(BackendXidGetProc(txn->xid) == NULL);
> +     return false;
> + }
>

Uh? Simply rechecking whether MyProc->decodeGroupLeader is NULL
obviously does not fix the race condition - it might become NULL right
after the check. So we need to either look up the PROC again (and then
get the associated lwlock), or hold some other type of lock.

>>
>> 3) I don't quite understand why BecomeDecodeGroupMember does the
>> cross-check using PID. In which case would it help?
>>
>
> When I wrote this support, I had written it with the intention of
> supporting both 2PC (in which case pid is 0) and in-progress regular
> transactions. That's why the presence of PID in these functions. The
> current use case is just for 2PC, so we could remove it.
>

Sure, but why do we need to cross-check the PID at all? I may be missing
something here, but I don't see what this protects against.

>
>>
>> 5) ReorderBufferCommitInternal does elog(LOG) about interrupting the
>> decoding of aborted transaction only in one place. There are about three
>> other places where we check LogicalLockTransaction. Seems inconsistent.
>>
>
> Note that I have added it for the OPTIONAL test_decoding test cases
> (which AFAIK we don't plan to commit in that state) which demonstrate
> concurrent rollback interlocking with the lock/unlock APIs. The first
> ELOG was enough to catch the interaction. If we think these elogs
> should be present in the code, then yes, I can add it elsewhere as
> well as part of an earlier patch.
>

Ah, I see. Makes sense. I've been looking at the patch as a whole and
haven't realized it's part of this piece.

>
> Ok, I am looking at your provided patch and incorporating relevant
> changes from it. WIll submit a patch set soon.
>

OK.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-05 09:17:30
Message-ID: CAMGcDxeyO-vu4WEsX8ZZRkV7LB037Rd5EmekQuZ_bj0Y33KbBA@mail.gmail.com
Lists: pgsql-hackers

Hi Tomas,

>>
>
> Uh? Simply rechecking if MyProc->decodeGroupLeader is NULL obviously
> does not fix the race condition - it might get NULL right after the
> check. So we need to either lookup the PROC again (and then get the
> associated lwlock), or hold some other type of lock.
>

I realized my approach was short-sighted while coding it up. So now we
lookup the leader pgproc, recheck if the XID is the same that we are
interested in and go ahead.

>
>>>
>>> 3) I don't quite understand why BecomeDecodeGroupMember does the
>>> cross-check using PID. In which case would it help?
>>>
>>
>> When I wrote this support, I had written it with the intention of
>> supporting both 2PC (in which case pid is 0) and in-progress regular
>> transactions. That's why the presence of PID in these functions. The
>> current use case is just for 2PC, so we could remove it.
>>
>
> Sure, but why do we need to cross-check the PID at all? I may be missing
> something here, but I don't see what does this protect against?
>

The fact that the PID is 0 for prepared transactions was making me
nervous, so I had added the assert that the pid should be 0 only for a
prepared transaction and not otherwise. Anyway, since we are dealing
only with 2PC, I have removed the PID argument now. The is_prepared
argument is removed for the same reason.

>>
>> Ok, I am looking at your provided patch and incorporating relevant
>> changes from it. WIll submit a patch set soon.
>>
>
> OK.
>
PFA, latest patch set.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.0504.patch application/octet-stream 7.3 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.0504.patch application/octet-stream 33.6 KB
0003-Support-decoding-of-two-phase-transactions-at-PREPAR.0504.patch application/octet-stream 43.3 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.0504.patch application/octet-stream 25.3 KB
0005-OPTIONAL-Additional-test-case-to-demonstrate-decoding-rollbac.0504.patch application/octet-stream 9.7 KB

From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-05 14:53:11
Message-ID: CAMGcDxeW2xE51Rx8V8hwL7YMkVFBwtOwDco5=opWd-YACJMi0g@mail.gmail.com
Lists: pgsql-hackers

Hi,

>> Uh? Simply rechecking if MyProc->decodeGroupLeader is NULL obviously
>> does not fix the race condition - it might get NULL right after the
>> check. So we need to either lookup the PROC again (and then get the
>> associated lwlock), or hold some other type of lock.
>>
>
> I realized my approach was short-sighted while coding it up. So now we
> lookup the leader pgproc, recheck if the XID is the same that we are
> interested in and go ahead.
>

I did some more gdb single-stepping and debugging on this, and
introduced a few more fetch-pgproc-by-XID calls for robustness. I am
now satisfied with the decodegroup lock changes.

Also made a few other changes related to cleanup and to setting the
txn flags in all places.

PFA, v2.0 of the patchset for today.

"make check-world" passes ok on these patches.

Regards,
Nikhils

>>
>>>>
>>>> 3) I don't quite understand why BecomeDecodeGroupMember does the
>>>> cross-check using PID. In which case would it help?
>>>>
>>>
>>> When I wrote this support, I had written it with the intention of
>>> supporting both 2PC (in which case pid is 0) and in-progress regular
>>> transactions. That's why the presence of PID in these functions. The
>>> current use case is just for 2PC, so we could remove it.
>>>
>>
>> Sure, but why do we need to cross-check the PID at all? I may be missing
>> something here, but I don't see what does this protect against?
>>
>
> The fact that PID is 0 in case of prepared transactions was making me
> nervous. So, I had added the assert that pid should only be 0 when
> it's a prepared transaction and not otherwise. Anyways, since we are
> dealing with only 2PC, I have removed the PID argument now. Also
> removed is_prepared argument for the same reason.
>
>>>
>>> Ok, I am looking at your provided patch and incorporating relevant
>>> changes from it. WIll submit a patch set soon.
>>>
>>
>> OK.
>>
> PFA, latest patch set.
>
> Regards,
> Nikhils
> --
> Nikhil Sontakke http://www.2ndQuadrant.com/
> PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.0504.v2.0.patch application/octet-stream 7.3 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.0504.v2.0.patch application/octet-stream 33.9 KB
0003-Support-decoding-of-two-phase-transactions-at-PREPAR.0504.v2.0.patch application/octet-stream 43.6 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.0504.v2.0.patch application/octet-stream 25.3 KB
0005-OPTIONAL-Additional-test-case-to-demonstrate-decoding-rollbac.0504.v2.0.patch application/octet-stream 9.8 KB

From: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-06 12:00:36
Message-ID: CAA8=A7_c9jiNMzpHsH+wxEA91WUt_cRc-BN8U0S+MErhJnhZTw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Apr 6, 2018 at 12:23 AM, Nikhil Sontakke
<nikhils(at)2ndquadrant(dot)com> wrote:
> Hi,
>
>
>
>>> Uh? Simply rechecking if MyProc->decodeGroupLeader is NULL obviously
>>> does not fix the race condition - it might get NULL right after the
>>> check. So we need to either lookup the PROC again (and then get the
>>> associated lwlock), or hold some other type of lock.
>>>
>>
>> I realized my approach was short-sighted while coding it up. So now we
>> lookup the leader pgproc, recheck if the XID is the same that we are
>> interested in and go ahead.
>>
>
> I did some more gdb single-stepping and debugging on this. Introduced a few
> more fetch-pgproc-by-XID calls for added robustness. I am now satisfied, from
> my point of view, with the decodegroup lock changes.
>
> Also a few other changes related to cleanups and setting of the txn flags at
> all places.
>
> PFA, v2.0 of the patchset for today.
>
> "make check-world" passes ok on these patches.
>

OK, I think this is now committable. The changes are small, fairly
isolated in effect, and I think every objection has been met, partly
by reducing the scope of the changes. By committing this we will allow
plugin authors to start developing 2PC support, which is important in
some use cases.

I therefore intend to commit these patches some time before the
deadline, either in 12 hours or so, or about 24 hours after that
(which would be right up against the deadline by my calculation),
depending on some other important obligations I have.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
To: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-06 12:30:28
Message-ID: 7b386dcb-985b-c67a-c5b2-0b59ef01af84@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 4/3/18 18:05, Andrew Dunstan wrote:
> Currently we seem to have only two machines doing the cross-version
> upgrade checks, which might make it easier to rearrange anything if
> necessary.

I think we should think about making this even more general. We could
use some cross-version testing for pg_dump, psql, pg_basebackup,
pg_upgrade, logical replication, and so on. Ideally, we would be able
to run the whole test set against an older version somehow. Lots of
details omitted here, of course. ;-)

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-06 16:20:23
Message-ID: 20180406162023.wcic5kw73bgswgb6@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 2018-04-06 21:30:36 +0930, Andrew Dunstan wrote:
> OK, I think this is now committable.

> The changes are small, fairly isolated in effect, and I think every
> objection has been met, partly by reducing the scope of the
> changes. By committing this we will allow plugin authors to start
> developing 2PC support, which is important in some use cases.
>
> I therefore intend to commit these patches some time before the
> deadline, either in 12 hours or so, or about 24 hours after that
> (which would be right up against the deadline by my calculation),
> depending on some other important obligations I have.

I object. And I'm negatively surprised that this is even considered.

This is a complicated patch that has been heavily reworked in the last
few days to, among other things, address objections that were first
made months ago ([1]). There were nontrivial bugs less than a day ago.
It has not received much review since these changes. This isn't an
area you've previously been involved in to a significant degree.

Greetings,

Andres Freund

[1] http://archives.postgresql.org/message-id/20180209211025.d7jxh43fhqnevhji%40alap3.anarazel.de


From: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-06 21:18:48
Message-ID: CAA8=A79=BdriLJhLfrp8=+mNj538mE-8j-yxJ1w-+2aHku+0YA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, Apr 7, 2018 at 1:50 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Hi,
>
> On 2018-04-06 21:30:36 +0930, Andrew Dunstan wrote:
>> OK, I think this is now committable.
>
>> The changes are small, fairly isolated in effect, and I think every
>> objection has been met, partly by reducing the scope of the
>> changes. By committing this we will allow plugin authors to start
>> developing 2PC support, which is important in some use cases.
>>
>> I therefore intend to commit these patches some time before the
>> deadline, either in 12 hours or so, or about 24 hours after that
>> (which would be right up against the deadline by my calculation),
>> depending on some other important obligations I have.
>
> I object. And I'm negatively surprised that this is even considered.
>
> This is a complicated patch that has been heavily reworked in the last
> few days to, among other things, address objections that were first
> made months ago ([1]). There were nontrivial bugs less than a day ago.
> It has not received much review since these changes. This isn't an
> area you've previously been involved in to a significant degree.
>

No, I haven't, although I have been spending some time familiarizing
myself with it. Nevertheless, since you object, I won't persist.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>
To: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-06 21:36:28
Message-ID: CAA8=A7_7YvdXOwQJVWvDbyYm=HeokVCvXo7h6d5z=udgUq9sGQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Apr 6, 2018 at 10:00 PM, Peter Eisentraut
<peter(dot)eisentraut(at)2ndquadrant(dot)com> wrote:
> On 4/3/18 18:05, Andrew Dunstan wrote:
>> Currently we seem to have only two machines doing the cross-version
>> upgrade checks, which might make it easier to rearrange anything if
>> necessary.
>
> I think we should think about making this even more general. We could
> use some cross-version testing for pg_dump, psql, pg_basebackup,
> pg_upgrade, logical replication, and so on. Ideally, we would be able
> to run the whole test set against an older version somehow. Lots of
> details omitted here, of course. ;-)
>

Yeah, that's more or less the plan. One way to generalize it might be
to see if ${branch}_SAVED exists and points to a directory with bin,
share and lib directories. If so, use it as required to test against
that branch. The buildfarm will make sure that that setting exists.
There are some tricks you have to play with the environment, but it's
basically doable.

Anyway, this is really a matter for another thread.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-09 06:01:20
Message-ID: CAMGcDxev9QiGpAkrNxvTKeGtYJTKRmo0UxJuQUyWGuOKgjEtdw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

> I object. And I'm negatively surprised that this is even considered.
>

I am also a bit surprised.

> This is a complicated patch that has been heavily reworked in the last
> few days to, among other things, address objections that were first
> made months ago ([1]). There were nontrivial bugs less than a day ago.
> It has not received much review since these changes. This isn't an
> area you've previously been involved in to a significant degree.
>

I thought all the points that you had raised in [1] had been addressed
satisfactorily. Let me know if that's not the case. Over the last few
days, the focus was on making the decodegroup locking implementation a
bit more robust.

Anyways, will now wait for the next commitfest/opportunity to try to
get this in.

>
> [1] http://archives.postgresql.org/message-id/20180209211025.d7jxh43fhqnevhji%40alap3.anarazel.de

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: David Steele <david(at)pgmasters(dot)net>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>
Cc: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-10 13:23:45
Message-ID: 9ea4c35b-f8ec-220b-2429-b860b49748f3@pgmasters.net
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 4/9/18 2:01 AM, Nikhil Sontakke wrote:
>
> Anyways, will now wait for the next commitfest/opportunity to try to
> get this in.

It looks like this patch should be in the Needs Review state so I have
done that and moved it to the next CF.

Regards,
--
-David
david(at)pgmasters(dot)net


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: David Steele <david(at)pgmasters(dot)net>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-04-10 14:48:27
Message-ID: CAMGcDxfmy1nLtoukiHbavfqm+scvT9=WckaCtRYW1fq7LRFd9Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>> Anyways, will now wait for the next commitfest/opportunity to try to
>> get this in.
>
> It looks like this patch should be in the Needs Review state so I have
> done that and moved it to the next CF.
>
Thanks David,

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL/Postgres-XL Development, 24x7 Support, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: David Steele <david(at)pgmasters(dot)net>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-03 17:20:01
Message-ID: CAMGcDxdn8SQMuGWZ-hDVnkXv_eAbPzWRmFNU36a63b7zPeUhiw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi all,

>>> Anyways, will now wait for the next commitfest/opportunity to try to
>>> get this in.
>>
>> It looks like this patch should be in the Needs Review state so I have
>> done that and moved it to the next CF.
>>
PFA, patchset updated to take care of bitrot.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.patch application/octet-stream 7.9 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.patch application/octet-stream 33.9 KB
0003-Support-decoding-of-two-phase-transactions-at-PREPAR.patch application/octet-stream 42.7 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.patch application/octet-stream 25.3 KB
0005-OPTIONAL-Additional-test-case-to-demonstrate-decoding-rollbac.patch application/octet-stream 9.8 KB

From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: David Steele <david(at)pgmasters(dot)net>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-03 18:09:41
Message-ID: CAMGcDxdW0LMpjxwnC6_YT2UoPgY9onXAEAbJwYDeux2mSuekOw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

>>>> Anyways, will now wait for the next commitfest/opportunity to try to
>>>> get this in.
>>>
>>> It looks like this patch should be in the Needs Review state so I have
>>> done that and moved it to the next CF.
>>>
> PFA, patchset updated to take care of bitrot.
>

For some reason, the 3rd patch was missing a few lines. Revised patch
set attached.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.patch application/octet-stream 7.9 KB
0002-Introduce-LogicalLockTransaction-LogicalUnlockTransa.patch application/octet-stream 33.9 KB
0003-Support-decoding-of-two-phase-transactions-at-PREPAR.patch application/octet-stream 43.1 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.patch application/octet-stream 25.3 KB
0005-OPTIONAL-Additional-test-case-to-demonstrate-decoding-rollbac.patch application/octet-stream 9.8 KB

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-16 15:21:22
Message-ID: 0c7e9fb9-01f7-3284-da9e-22b6dde679d4@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi Nikhil,

I've been looking at this patch series, and I do have a bunch of
comments and questions, as usual ;-)

Overall, I think it's clear the main risk associated with this patch is
the decode group code - it touches PROC entries, so a bug may cause
trouble pretty easily. So I've focused on this part, for now.

1) LogicalLockTransaction does roughly this

...

if (MyProc->decodeGroupLeader == NULL)
{
leader = AssignDecodeGroupLeader(txn->xid);

if (leader == NULL ||
!BecomeDecodeGroupMember((PGPROC *)leader, txn->xid))
goto lock_cleanup;
}

leader = BackendXidGetProc(txn->xid);
if (!leader)
goto lock_cleanup;

leader_lwlock = LockHashPartitionLockByProc(leader);
LWLockAcquire(leader_lwlock, LW_EXCLUSIVE);

pgxact = &ProcGlobal->allPgXact[leader->pgprocno];
if(pgxact->xid != txn->xid)
{
LWLockRelease(leader_lwlock);
goto lock_cleanup;
}

...

I wonder why we need the BackendXidGetProc call after the first if
block. Can we simply grab MyProc->decodeGroupLeader at that point?

2) InitProcess now resets decodeAbortPending/decodeLocked flags, while
checking decodeGroupLeader/decodeGroupMembers using asserts. Isn't that
a bit strange? Shouldn't it do the same thing with both?

3) A comment in ProcKill says this:

* Detach from any decode group of which we are a member. If the leader
* exits before all other group members, its PGPROC will remain allocated
* until the last group process exits; that process must return the
* leader's PGPROC to the appropriate list.

So I'm wondering what happens if the leader dies before other group
members, but the PROC entry gets reused for a new connection. It clearly
should not be a leader for that old decode group, but it may need to be
a leader for another group.

4) strange hunk in ProcKill

There seems to be some sort of merge/rebase issue, because this block of
code (line ~880) related to lock groups

/* Return PGPROC structure (and semaphore) to appropriate freelist */
proc->links.next = (SHM_QUEUE *) *procgloballist;
*procgloballist = proc;

got replaced by code related to decode groups. That seems strange.

5) ReorderBufferCommitInternal

I see the LogicalLockTransaction() calls in ReorderBufferCommitInternal
have vastly variable comments. Some calls have no comment, some calls
have "obvious" comment like "Lock transaction before catalog access" and
one call has this very long comment

/*
* Output plugins can access catalog metadata and we
* do not have any control over that. We could ask
* them to call
* LogicalLockTransaction/LogicalUnlockTransaction
* APIs themselves, but that leads to unnecessary
* complications and expectations from plugin
* writers. We avoid this by calling these APIs
* here, thereby ensuring that the in-progress
* transaction will be around for the duration of
* the apply_change call below
*/

I find that rather inconsistent, and I'd say those comments are useless.
I suggest to remove all the per-call comments and instead add a comment
about the locking into the initial file-level comment, which already
explains handling of large transactions, etc.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-16 16:15:46
Message-ID: CA+TgmoZfuGSRBv=LDK2mk4Q1MB9wjMuL3AQfXAax2orDWerFUw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Jul 16, 2018 at 11:21 AM, Tomas Vondra
<tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> Overall, I think it's clear the main risk associated with this patch is the
> decode group code - it touches PROC entries, so a bug may cause trouble
> pretty easily. So I've focused on this part, for now.

I agree. As a general statement, I think the idea of trying to
prevent transactions from aborting is really scary. It's almost an
axiom of the system that we're always allowed to abort, and I think
there could be a lot of unintended and difficult-to-fix consequences
of undermining that guarantee. I think it will be very difficult to
create a sound system for delaying transactions, and I doubt very much
that the proposed system is sound.

In particular:

- The do_wait loop contains a CHECK_FOR_INTERRUPTS(). If that makes
it interruptible, then it's possible for the abort to complete before
the decoding processes have aborted. If that can happen, then this
whole mechanism is completely pointless, because it fails to actually
achieve the guarantee which is its central goal. On the other hand,
if you don't make this abort interruptible, then you significantly
increase the risk that a backend could get stuck in the abort path for
an unbounded period of time. If the aborting backend holds any
significant resources at this point, such as heavyweight locks, then
you risk creating a deadlock that cannot be broken until the decoding
process manages to abort, and if that process is involved in the
deadlock, then you risk creating an unbreakable deadlock.

- BackendXidGetProc() seems to be called in multiple places without
any lock held. I don't see how that can be safe, because AFAICS it
must inevitably introduce a race condition: the answer can change
after that value is returned but before it is used. There's a bunch
of recheck logic that looks like it is trying to cope with this
problem, but I'm not sure it's very solid. For example,
AssignDecodeGroupLeader reads proc->decodeGroupLeader without holding
any lock; we have historically avoided assuming that pointer-width
reads cannot be torn. (We have assumed this only for 4-byte reads or
narrower.) There are no comments about the locking hazards here, and
no real explanation of how the recheck algorithm tries to patch things
up:

+ leader = BackendXidGetProc(xid);
+ if (!leader || leader != proc)
+ {
+ LWLockRelease(leader_lwlock);
+ return NULL;
+ }

Can leader be non-NULL yet unequal to proc? I don't understand how that can
happen: surely once the PGPROC that has that XID aborts, the same XID
can't possibly be assigned to a different PGPROC.

- The code for releasing PGPROCs in ProcKill looks completely unsafe
to me. With locking groups for parallel query, a process always
enters a lock group of its own volition. It can safely use
(MyProc->lockGroupLeader != NULL) as a race-free test because no other
process can modify that value. But in this implementation of decoding
groups, one process can put another process into a decoding group,
which means this test has a race condition. If there's some reason
this is safe, the comments sure don't explain it.

I don't want to overplay my hand, but I think this code is a very long
way from being committable, and I am concerned that the fundamental
approach of blocking transaction aborts may be unsalvageably broken or
at least exceedingly dangerous.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-16 17:21:17
Message-ID: 15404.1531761677@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> I agree. As a general statement, I think the idea of trying to
> prevent transactions from aborting is really scary. It's almost an
> axiom of the system that we're always allowed to abort, and I think
> there could be a lot of unintended and difficult-to-fix consequences
> of undermining that guarantee. I think it will be very difficult to
> create a sound system for delaying transactions, and I doubt very much
> that the proposed system is sound.

Ugh, is this patch really dependent on such a thing?

TBH, I think the odds of making that work are indistinguishable from zero;
and even if you managed to commit something that did work at the instant
you committed it, the odds that it would stay working in the face of later
system changes are exactly zero. I would reject this idea out of hand.

regards, tom lane


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-16 17:28:09
Message-ID: 1242e10a-a8bd-a7b7-1338-9715f825c96d@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 07/16/2018 06:15 PM, Robert Haas wrote:
> On Mon, Jul 16, 2018 at 11:21 AM, Tomas Vondra
> <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> Overall, I think it's clear the main risk associated with this patch is the
>> decode group code - it touches PROC entries, so a bug may cause trouble
>> pretty easily. So I've focused on this part, for now.
>
> I agree. As a general statement, I think the idea of trying to
> prevent transactions from aborting is really scary. It's almost an
> axiom of the system that we're always allowed to abort, and I think
> there could be a lot of unintended and difficult-to-fix consequences
> of undermining that guarantee. I think it will be very difficult to
> create a sound system for delaying transactions, and I doubt very much
> that the proposed system is sound.
>
> In particular:
>
> - The do_wait loop contains a CHECK_FOR_INTERRUPTS(). If that makes
> it interruptible, then it's possible for the abort to complete before
> the decoding processes have aborted. If that can happen, then this
> whole mechanism is completely pointless, because it fails to actually
> achieve the guarantee which is its central goal. On the other hand,
> if you don't make this abort interruptible, then you significantly
> increase the risk that a backend could get stuck in the abort path for
> an unbounded period of time. If the aborting backend holds any
> significant resources at this point, such as heavyweight locks, then
> you risk creating a deadlock that cannot be broken until the decoding
> process manages to abort, and if that process is involved in the
> deadlock, then you risk creating an unbreakable deadlock.
>

I'm not sure I understand. Are you suggesting the process might get
killed or something, thanks to the CHECK_FOR_INTERRUPTS() call?

> - BackendXidGetProc() seems to be called in multiple places without
> any lock held. I don't see how that can be safe, because AFAICS it
> must inevitably introduce a race condition: the answer can change
> after that value is returned but before it is used. There's a bunch
> of recheck logic that looks like it is trying to cope with this
> problem, but I'm not sure it's very solid.

But BackendXidGetProc() internally acquires ProcArrayLock, of course.
It's true there are a few places where we do != NULL checks on the
result without holding any lock, but I don't see why that would be a
problem? And before actually inspecting the contents, the code always
does LockHashPartitionLockByProc.

But I certainly agree this would deserve comments explaining why this
(lack of) locking is safe. (The reason it's done this way is clearly an
attempt to acquire the lock as infrequently as possible, in an effort
to minimize the overhead.)
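For illustration, the lock-then-recheck idiom being debated here can be
sketched in a self-contained way. The types and names below are toy
stand-ins invented for this sketch, not the patch's actual structures:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Toy stand-ins for PGPROC and a lock partition (hypothetical names). */
typedef struct Proc
{
    int xid;
    struct Proc *decodeGroupLeader;
} Proc;

static pthread_mutex_t partition_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * The idiom under discussion: an unlocked != NULL test is only a cheap
 * pre-check; the pointer must be re-read under the lock before it is
 * actually relied upon.
 */
static Proc *
get_leader_checked(Proc *proc)
{
    Proc *leader;

    if (proc->decodeGroupLeader == NULL)    /* unlocked pre-check (hint) */
        return NULL;

    pthread_mutex_lock(&partition_lock);
    leader = proc->decodeGroupLeader;       /* recheck under the lock */
    pthread_mutex_unlock(&partition_lock);

    return leader;                          /* may have become NULL meanwhile */
}
```

The point of the pattern is that the unlocked read only decides whether
taking the lock is worth it; correctness rests entirely on the locked
recheck.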

> For example,
> AssignDecodeGroupLeader reads proc->decodeGroupLeader without holding
> any lock; we have historically avoided assuming that pointer-width
> reads cannot be torn. (We have assumed this only for 4-byte reads or
> narrower.) There are no comments about the locking hazards here, and
> no real explanation of how the recheck algorithm tries to patch things
> up:
>
> + leader = BackendXidGetProc(xid);
> + if (!leader || leader != proc)
> + {
> + LWLockRelease(leader_lwlock);
> + return NULL;
> + }
>
> Can leader be non-NULL yet unequal to proc? I don't understand how that can
> happen: surely once the PGPROC that has that XID aborts, the same XID
> can't possibly be assigned to a different PGPROC.
>

Yeah. I have the same question.

> - The code for releasing PGPROCs in ProcKill looks completely unsafe
> to me. With locking groups for parallel query, a process always
> enters a lock group of its own volition. It can safely use
> (MyProc->lockGroupLeader != NULL) as a race-free test because no other
> process can modify that value. But in this implementation of decoding
> groups, one process can put another process into a decoding group,
> which means this test has a race condition. If there's some reason
> this is safe, the comments sure don't explain it.
>

I don't follow. How could one process put another process into a
decoding group? I don't think that's possible.

> I don't want to overplay my hand, but I think this code is a very long
> way from being committable, and I am concerned that the fundamental
> approach of blocking transaction aborts may be unsalvageably broken or
> at least exceedingly dangerous.
>

I'm not sure about the 'unsalvageable' part, but it needs more work,
that's for sure. Unfortunately, all previous attempts to make this work
in various other ways failed (see past discussions in this thread), so
this is the only approach left :-( So let's see if we can make it work.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-16 17:38:28
Message-ID: ad722d03-4468-b5f7-8c59-8ac79ff3c2d0@2ndquadrant.com
Lists: pgsql-hackers

On 07/16/2018 07:21 PM, Tom Lane wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> I agree. As a general statement, I think the idea of trying to
>> prevent transactions from aborting is really scary. It's almost an
>> axiom of the system that we're always allowed to abort, and I think
>> there could be a lot of unintended and difficult-to-fix consequences
>> of undermining that guarantee. I think it will be very difficult to
>> create a sound system for delaying transactions, and I doubt very much
>> that the proposed system is sound.
>
> Ugh, is this patch really dependent on such a thing?
>

Unfortunately it does :-( Without it the decoding (or output plugins)
may see catalogs broken in various ways - the catalog records may get
vacuumed, HOT chains are broken, ... There were attempts to change that
part, but that seems an order of magnitude more invasive than this.

> TBH, I think the odds of making that work are indistinguishable from zero;
> and even if you managed to commit something that did work at the instant
> you committed it, the odds that it would stay working in the face of later
> system changes are exactly zero. I would reject this idea out of hand.
>

Why? How is this significantly different from other patches touching
ProcArray and related bits?

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-16 18:09:33
Message-ID: CA+Tgmoa1c7DL6ANf1Z3YfneVSnALMJW0soMXkVq0dSakGDG0DQ@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jul 16, 2018 at 1:28 PM, Tomas Vondra
<tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> I'm not sure I understand. Are you suggesting the process might get killed
> or something, thanks to the CHECK_FOR_INTERRUPTS() call?

Yes. CHECK_FOR_INTERRUPTS() can certainly lead to a non-local
transfer of control.

> But BackendXidGetProc() internally acquires ProcArrayLock, of course. It's
> true there are a few places where we do != NULL checks on the result without
> holding any lock, but I don't see why that would be a problem? And before
> actually inspecting the contents, the code always does
> LockHashPartitionLockByProc.

I think at least some of those cases are a problem. See below...

> I don't follow. How could one process put another process into a decoding
> group? I don't think that's possible.

Isn't that exactly what AssignDecodeGroupLeader() is doing? It looks
up the process that currently has that XID, then turns that process
into a decode group leader. Then after that function returns, the
caller adds itself to the decode group as well. So it seems entirely
possible for somebody to swing the decodeGroupLeader pointer for a
PGPROC from NULL to some other value at an arbitrary point in time.

> I'm not sure about the 'unsalvageable' part, but it needs more work, that's
> for sure. Unfortunately, all previous attempts to make this work in various
> other ways failed (see past discussions in this thread), so this is the only
> approach left :-( So let's see if we can make it work.

I think that's probably not going to work out, but of course it's up
to you how you want to spend your time!

After thinking about it a bit more, if you want to try to stick with
this design, I don't think that this decode group leader/members thing
has much to recommend it. In the case of parallel query, the point of
the lock group stuff is to treat all of those processes as one for
purposes of heavyweight lock acquisition. There's no similar need
here, so the design that makes sure the "leader" is in the list of
processes that are members of the "group" is, AFAICS, just wasted
code. All you really need is a list of processes hung off of the
PGPROC that must abort before the leader is allowed to abort; the
leader itself doesn't need to be in the list, and there's no need to
consider it as a "group". It's just a list of waiters.
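A bare list of waiters of the kind suggested might amount to little more
than this sketch (the field and type names here are purely hypothetical,
invented to make the idea concrete):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch: the backend's PGPROC just carries a list of decoding
 * processes that must finish before it may abort; the backend itself
 * is not a member of any "group".
 */
typedef struct WaiterProc
{
    int pid;                      /* a decoding process */
    struct WaiterProc *next;
} WaiterProc;

typedef struct LeaderProc
{
    WaiterProc *decodeWaiters;    /* hypothetical field name */
} LeaderProc;

static void
add_waiter(LeaderProc *l, WaiterProc *w)
{
    w->next = l->decodeWaiters;   /* push onto the singly-linked list */
    l->decodeWaiters = w;
}

static int
waiter_count(const LeaderProc *l)
{
    int n = 0;

    for (WaiterProc *w = l->decodeWaiters; w != NULL; w = w->next)
        n++;
    return n;
}
```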

That having been said, I still don't see how that's really going to
work. Just to take one example, suppose that the leader is trying to
ERROR out, and the decoding workers are blocked waiting for a lock
held by the leader. The system has no way of detecting this deadlock
and resolving it automatically, which certainly seems unacceptable.
The only way that's going to work is if the leader waits for the
worker by trying to acquire a lock held by the worker. Then the
deadlock detector would know to abort some transaction. But that
doesn't really work either - the deadlock was created by the
foreground process trying to abort, and if the deadlock detector
chooses that process as its victim, what then? We're already trying
to abort, and the abort code isn't supposed to throw further errors,
or fail in any way, lest we break all kinds of other things. Not to
mention the fact that running the deadlock detector in the abort path
isn't really safe to begin with, again because we can't throw errors
when we're already in an abort path.

If we're only ever talking about decoding prepared transactions, we
could probably work around all of these problems: have the decoding
process take a heavyweight lock before it begins decoding. Have a
process that wants to execute ROLLBACK PREPARED take a conflicting
heavyweight lock on the same object. The net effect would be that
ROLLBACK PREPARED would simply wait for decoding to finish. That
might be rather lousy from a latency point of view since the
transaction could take an arbitrarily long time to decode, but it
seems safe enough. Possibly you could also design a mechanism for the
ROLLBACK PREPARED command to SIGTERM the processes that are blocking
its lock acquisition, if they are decoding processes. The difference
between this and what the current patch is doing is that nothing
complex or fragile is happening in the abort pathway itself. The
complicated stuff in both the worker and in the main backend happens
while the transaction is still good and can still be rolled back at
need. This kind of approach won't work if you want to decode
transactions that aren't yet prepared, so if that is the long term
goal then we need to think harder. I'm honestly not sure that problem
has any reasonable solution. The assumption that a running process
can abort at any time is deeply baked into many parts of the system
and for good reasons. Trying to undo that is going to be like trying
to push water up a hill. I think we need to install interlocks in
such a way that any waiting happens before we enter the abort path,
not while we're actually trying to perform the abort. But I don't
know how to do that for a foreground task that's still actively doing
stuff.
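The proposed interlock can be modeled in miniature as a single-process
state machine (invented names; a real implementation would go through
the heavyweight lock manager rather than a flag, and would actually
block rather than report that it would wait):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Miniature model of the proposal: the decoding process holds a lock
 * for its whole run, and ROLLBACK PREPARED must acquire a conflicting
 * lock on the same object, so it simply waits until decoding finishes.
 */
static bool decode_lock_held = false;

static bool
begin_decode(void)
{
    if (decode_lock_held)
        return false;            /* someone else is already decoding */
    decode_lock_held = true;     /* taken before decoding starts */
    return true;
}

static void
end_decode(void)
{
    decode_lock_held = false;    /* released when decoding finishes */
}

/* ROLLBACK PREPARED blocks (here: reports it would have to wait)
 * while the conflicting decode lock is held. */
static bool
rollback_prepared_must_wait(void)
{
    return decode_lock_held;
}
```

The key property of the design, visible even in this toy form, is that
all the waiting happens before the abort path is entered: the rollback
simply queues on an ordinary lock while its transaction is still good.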

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-16 19:25:19
Message-ID: 19803f28-0fd2-1bfb-80e1-9525c08ff795@2ndquadrant.com
Lists: pgsql-hackers

On 07/16/2018 08:09 PM, Robert Haas wrote:
> On Mon, Jul 16, 2018 at 1:28 PM, Tomas Vondra
> <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> I'm not sure I understand. Are you suggesting the process might get killed
>> or something, thanks to the CHECK_FOR_INTERRUPTS() call?
>
> Yes. CHECK_FOR_INTERRUPTS() can certainly lead to a non-local
> transfer of control.
>
>> But BackendXidGetProc() internally acquires ProcArrayLock, of course. It's
>> true there are a few places where we do != NULL checks on the result without
>> holding any lock, but I don't see why that would be a problem? And before
>> actually inspecting the contents, the code always does
>> LockHashPartitionLockByProc.
>
> I think at least some of those cases are a problem. See below...
>
>> I don't follow. How could one process put another process into a decoding
>> group? I don't think that's possible.
>
> Isn't that exactly what AssignDecodeGroupLeader() is doing? It looks
> up the process that currently has that XID, then turns that process
> into a decode group leader. Then after that function returns, the
> caller adds itself to the decode group as well. So it seems entirely
> possible for somebody to swing the decodeGroupLeader pointer for a
> PGPROC from NULL to some other value at an arbitrary point in time.
>

Oh, right, I forgot the patch also adds the leader into the group, for
some reason (I agree it's unclear why that would be necessary, as you
pointed out later).

But all this is happening while holding the partition lock (in exclusive
mode). And the decoding backends do synchronize on the lock correctly
(although, man, the rechecks are confusing ...).

But now I see ProcKill accesses decodeGroupLeader in multiple places,
and only the first one is protected by the lock, for some reason
(interestingly enough, the one in the lockGroupLeader block). Is that
what you mean?

FWIW I suspect the ProcKill part is borked due to incorrectly resolved
merge conflict or something, per my initial response from today.

>> I'm not sure about the 'unsalvageable' part, but it needs more work, that's
>> for sure. Unfortunately, all previous attempts to make this work in various
>> other ways failed (see past discussions in this thread), so this is the only
>> approach left :-( So let's see if we can make it work.
>
> I think that's probably not going to work out, but of course it's up
> to you how you want to spend your time!
>

Well, yeah. I'm sure I could think of more fun things to do, but OTOH I
also have patches that require the capability to decode in-progress
transactions.

> After thinking about it a bit more, if you want to try to stick with
> this design, I don't think that this decode group leader/members thing
> has much to recommend it. In the case of parallel query, the point of
> the lock group stuff is to treat all of those processes as one for
> purposes of heavyweight lock acquisition. There's no similar need
> here, so the design that makes sure the "leader" is in the list of
> processes that are members of the "group" is, AFAICS, just wasted
> code. All you really need is a list of processes hung off of the
> PGPROC that must abort before the leader is allowed to abort; the
> leader itself doesn't need to be in the list, and there's no need to
> consider it as a "group". It's just a list of waiters.
>

But the way I understand it, it pretty much *is* a list of waiters,
along with a couple of flags to allow the processes to notify the other
side about lock/unlock/abort. It does resemble the lock groups, but I
don't think it has the same goals.

The thing is that the lock/unlock happens for each decoded change
independently, and it'd be silly to modify the list all the time, so
instead it just sets the decodeLocked flag to true/false. Similarly,
when the leader decides to abort, it marks decodeAbortPending and waits
for the decoding backends to complete.

Of course, that's my understanding/interpretation, and perhaps Nikhil as
a patch author has a better explanation.
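That handshake, as just described, could be modeled roughly like this
(flag names borrowed from the description above; this is purely
illustrative, and the real synchronization via the partition locks is
omitted):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of the per-backend flags described above. */
typedef struct
{
    bool decodeLocked;        /* decoding backend is inside a change */
    bool decodeAbortPending;  /* leader wants to abort and is waiting */
} DecodeState;

/* Decoding backend: toggle a flag per decoded change instead of
 * re-linking itself into a waiter list every time. */
static void
decode_change_begin(DecodeState *s)
{
    s->decodeLocked = true;
}

static void
decode_change_end(DecodeState *s)
{
    s->decodeLocked = false;
}

/* Leader: mark the abort as pending; it may proceed only once no
 * decoding backend is inside a change. */
static bool
leader_can_abort(DecodeState *s)
{
    s->decodeAbortPending = true;
    return !s->decodeLocked;
}
```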

> That having been said, I still don't see how that's really going to
> work. Just to take one example, suppose that the leader is trying to
> ERROR out, and the decoding workers are blocked waiting for a lock
> held by the leader. The system has no way of detecting this deadlock
> and resolving it automatically, which certainly seems unacceptable.
> The only way that's going to work is if the leader waits for the
> worker by trying to acquire a lock held by the worker. Then the
> deadlock detector would know to abort some transaction. But that
> doesn't really work either - the deadlock was created by the
> foreground process trying to abort, and if the deadlock detector
> chooses that process as its victim, what then? We're already trying
> to abort, and the abort code isn't supposed to throw further errors,
> or fail in any way, lest we break all kinds of other things. Not to
> mention the fact that running the deadlock detector in the abort path
> isn't really safe to begin with, again because we can't throw errors
> when we're already in an abort path.
>

Fair point, not sure. I'll leave this up to Nikhil.

> If we're only ever talking about decoding prepared transactions, we
> could probably work around all of these problems: have the decoding
> process take a heavyweight lock before it begins decoding. Have a
> process that wants to execute ROLLBACK PREPARED take a conflicting
> heavyweight lock on the same object. The net effect would be that
> ROLLBACK PREPARED would simply wait for decoding to finish. That
> might be rather lousy from a latency point of view since the
> transaction could take an arbitrarily long time to decode, but it
> seems safe enough. Possibly you could also design a mechanism for the
> ROLLBACK PREPARED command to SIGTERM the processes that are blocking
> its lock acquisition, if they are decoding processes. The difference
> between this and what the current patch is doing is that nothing
> complex or fragile is happening in the abort pathway itself. The
> complicated stuff in both the worker and in the main backend happens
> while the transaction is still good and can still be rolled back at
> need. This kind of approach won't work if you want to decode
> transactions that aren't yet prepared, so if that is the long term
> goal then we need to think harder. I'm honestly not sure that problem
> has any reasonable solution. The assumption that a running process
> can abort at any time is deeply baked into many parts of the system
> and for good reasons. Trying to undo that is going to be like trying
> to push water up a hill. I think we need to install interlocks in
> such a way that any waiting happens before we enter the abort path,
> not while we're actually trying to perform the abort. But I don't
> know how to do that for a foreground task that's still actively doing
> stuff.
>

Unfortunately it's not just for prepared transactions :-( The reason why
I'm interested in this capability (decoding in-progress xacts) is that
I'd like to use it to stream large transactions before commit, to reduce
replication lag due to limited network bandwidth etc. It's also needed
for things like speculative apply (starting apply before commit) etc.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-17 18:10:32
Message-ID: CA+Tgmoa8yDZH0i_58=o6WBESbyqij8m5hKYycV2sWzo2rX9fGg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jul 16, 2018 at 3:25 PM, Tomas Vondra
<tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> Oh, right, I forgot the patch also adds the leader into the group, for
> some reason (I agree it's unclear why that would be necessary, as you
> pointed out later).
>
> But all this is happening while holding the partition lock (in exclusive
> mode). And the decoding backends do synchronize on the lock correctly
> (although, man, the rechecks are confusing ...).
>
> But now I see ProcKill accesses decodeGroupLeader in multiple places,
> and only the first one is protected by the lock, for some reason
> (interestingly enough the one in lockGroupLeader block). Is that what
> you mean?

I haven't traced out the control flow completely, but it sure looks to
me like there are places where decodeGroupLeader is checked without
holding any LWLock at all. Also, it looks to me like some places
(like where we're trying to find a PGPROC by XID) we use ProcArrayLock
and in others -- I guess where we're checking the decodeGroupBlah
stuff -- we are using the lock manager locks. I don't know how safe
that is, and there are not a lot of comments justifying it. I also
wonder why we're using the lock manager locks to protect the
decodeGroup stuff rather than backendLock.

> FWIW I suspect the ProcKill part is borked due to incorrectly resolved
> merge conflict or something, per my initial response from today.

Yeah I wasn't seeing the code the way I thought you were describing it
in that response, but I'm dumb this week so maybe I just
misunderstood.

>> I think that's probably not going to work out, but of course it's up
>> to you how you want to spend your time!
>
> Well, yeah. I'm sure I could think of more fun things to do, but OTOH I
> also have patches that require the capability to decode in-progress
> transactions.

It's not a matter of fun; it's a matter of whether it can be made to
work. Don't get me wrong -- I want the ability to decode in-progress
transactions. I complained about that aspect of the design to Andres
when I was reviewing and committing logical slots & logical decoding,
and I complained about it probably more than I complained about any
other aspect of it, largely because it instantaneously generates a
large lag when a bulk load commits. But not liking something about
the way things are is not the same as knowing how to make them better.
I believe there is a way to make it work because I believe there's a
way to make anything work. But I suspect that it's at least one order
of magnitude more complex than this patch currently is, and likely an
altogether different design.

> But the way I understand it, it pretty much *is* a list of waiters,
> along with a couple of flags to allow the processes to notify the other
> side about lock/unlock/abort. It does resemble the lock groups, but I
> don't think it has the same goals.

So the parts that aren't relevant shouldn't be copied over.

>> That having been said, I still don't see how that's really going to
>> work. Just to take one example, suppose that the leader is trying to
>> ERROR out, and the decoding workers are blocked waiting for a lock
>> held by the leader. The system has no way of detecting this deadlock
>> and resolving it automatically, which certainly seems unacceptable.
>> The only way that's going to work is if the leader waits for the
>> worker by trying to acquire a lock held by the worker. Then the
>> deadlock detector would know to abort some transaction. But that
>> doesn't really work either - the deadlock was created by the
>> foreground process trying to abort, and if the deadlock detector
>> chooses that process as its victim, what then? We're already trying
>> to abort, and the abort code isn't supposed to throw further errors,
>> or fail in any way, lest we break all kinds of other things. Not to
>> mention the fact that running the deadlock detector in the abort path
>> isn't really safe to begin with, again because we can't throw errors
>> when we're already in an abort path.
>
> Fair point, not sure. I'll leave this up to Nikhil.

That's fine, but please understand that I think there's a basic design
flaw here that just can't be overcome with any amount of hacking on
the details here. I think we need a much higher-level consideration
of the problem here and probably a lot of new infrastructure to
support it. One idea might be to initially support decoding of
in-progress transactions only if they don't modify any catalog state.
That would leave out a bunch of cases we'd probably like to support,
such as CREATE TABLE + COPY in the same transaction, but it would
likely dodge a lot of really hard problems, too, and we could improve
things later. One approach to the problem of catalog changes would be
to prevent catalog tuples from being removed even after transaction
abort until such time as there's no decoding in progress that might
care about them. That is not by itself sufficient because a
transaction can abort after inserting a heap tuple but before
inserting an index tuple and we can't look at the catalog when it's an
inconsistent state like that and expect reasonable results. But it
helps: for example, if you are decoding a transaction that has
inserted a WAL record with a cmin or cmax value of 4, and you know
that none of the catalog records created by that transaction can have
been pruned, then it should be safe to use a snapshot with CID 3 or
smaller to decode the catalogs. So consider a case like:

BEGIN;
CREATE TABLE blah ... -- command ID 0
COPY blah FROM '/tmp/blah' ... -- command ID 1

Once we see the COPY show up in the WAL, it should be safe to decode
the CREATE TABLE command and figure out what a snapshot with command
ID 0 can see (again, assuming we've suppressed pruning in the catalogs
in a sufficiently-well-considered way). Then, as long as the COPY
command doesn't do any DML via a trigger or a datatype input function
(!) or whatever, we should be able to use that snapshot to decode the
data inserted by COPY. I'm not quite sure what happens if the COPY
does do some DML or something like that -- we might have to stop
decoding until the following command begins in the live transaction,
or something like that. Or maybe we don't have to do that. I'm not
totally sure how the command counter is managed for catalog snapshots.
However it works in detail, we will get into trouble if we ever use a
catalog snapshot that can see a change that the live transaction is
still in the midst of making. Even with pruning prevented, we can
only count on the catalogs to be in a consistent state once the live
transaction has finished the command -- otherwise, for example, it
might have increased pg_class.relnatts but not yet added the
pg_attribute entry at the time it aborts, or something like that. I'm
blathering a little bit but hopefully you get the point: I think the
way forward is for somebody to think carefully through how and under
what circumstances using a catalog snapshot can be made safe even if
an abort has occurred afterwards -- not trying to postpone the abort,
which I think is never going to be right.
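The CID arithmetic in the example above can be made concrete with a toy
visibility check. This is a simplified model of the rule being
described, not HeapTupleSatisfiesHistoricMVCC itself, and the exact
numbering convention here is illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t CommandId;
#define InvalidCommandId ((CommandId) 0xFFFFFFFF)

/* Toy catalog tuple: created by command cmin, deleted by cmax (if any). */
typedef struct
{
    CommandId cmin;
    CommandId cmax;
} CatTuple;

/*
 * Same-transaction visibility under a historic snapshot taken "as of"
 * command curcid: the tuple must have been created by that command or
 * an earlier one, and must not yet have been deleted.  In the example,
 * decoding the COPY (command 1) uses curcid = 0 and therefore sees the
 * catalog rows the CREATE TABLE inserted (cmin = 0).
 */
static bool
visible_at(const CatTuple *t, CommandId curcid)
{
    if (t->cmin > curcid)
        return false;           /* created by a later command */
    if (t->cmax != InvalidCommandId && t->cmax <= curcid)
        return false;           /* already deleted at this point */
    return true;
}
```

Of course, the whole argument above is that such a check is only
trustworthy if catalog pruning has been suppressed and the live
transaction had finished the command in question, so that the catalogs
were consistent at that CID.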

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-18 14:08:37
Message-ID: c9005196-e858-5dfd-80f3-faa076f7102f@2ndquadrant.com
Lists: pgsql-hackers


On 07/17/2018 08:10 PM, Robert Haas wrote:
> On Mon, Jul 16, 2018 at 3:25 PM, Tomas Vondra
> <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> Oh, right, I forgot the patch also adds the leader into the group, for
>> some reason (I agree it's unclear why that would be necessary, as you
>> pointed out later).
>>
>> But all this is happening while holding the partition lock (in exclusive
>> mode). And the decoding backends do synchronize on the lock correctly
>> (although, man, the rechecks are confusing ...).
>>
>> But now I see ProcKill accesses decodeGroupLeader in multiple places,
>> and only the first one is protected by the lock, for some reason
>> (interestingly enough the one in lockGroupLeader block). Is that what
>> you mean?
>
> I haven't traced out the control flow completely, but it sure looks to
> me like there are places where decodeGroupLeader is checked without
> holding any LWLock at all. Also, it looks to me like some places
> (like where we're trying to find a PGPROC by XID) we use ProcArrayLock
> and in others -- I guess where we're checking the decodeGroupBlah
> stuff -- we are using the lock manager locks. I don't know how safe
> that is, and there are not a lot of comments justifying it. I also
> wonder why we're using the lock manager locks to protect the
> decodeGroup stuff rather than backendLock.
>
>> FWIW I suspect the ProcKill part is borked due to incorrectly resolved
>> merge conflict or something, per my initial response from today.
>
> Yeah I wasn't seeing the code the way I thought you were describing it
> in that response, but I'm dumb this week so maybe I just
> misunderstood.
>
>>> I think that's probably not going to work out, but of course it's up
>>> to you how you want to spend your time!
>>
>> Well, yeah. I'm sure I could think of more fun things to do, but OTOH I
>> also have patches that require the capability to decode in-progress
>> transactions.
>
> It's not a matter of fun; it's a matter of whether it can be made to
> work. Don't get me wrong -- I want the ability to decode in-progress
> transactions. I complained about that aspect of the design to Andres
> when I was reviewing and committing logical slots & logical decoding,
> and I complained about it probably more than I complained about any
> other aspect of it, largely because it instantaneously generates a
> large lag when a bulk load commits. But not liking something about
> the way things are is not the same as knowing how to make them better.
> I believe there is a way to make it work because I believe there's a
> way to make anything work. But I suspect that it's at least one order
> of magnitude more complex than this patch currently is, and likely an
> altogether different design.
>

Sure, it may turn out not to work - but how do you know until you try?

We have a well-known theater play here, where one of the actors blows
tobacco smoke into a sink to test whether gold can be created that way.
Which is foolish, but his reasoning is "Someone had to try, to be sure!"
So we're in the phase of blowing tobacco smoke, kinda ;-)

Also, you often discover solutions while investigating approaches that
seem to be unworkable initially. Or entirely new approaches. It sure
happened to me, many times.

There's a great book/movie "Touching the Void" [1] about a climber
falling into a deep crevasse. Unable to climb up, he decides to crawl
down - which is obviously foolish, but he happens to find a way out.

I suppose we're kinda doing the same thing here - crawling down a
crevasse (while still smoking and blowing the tobacco smoke into a sink,
which we happened to find in the crevasse or something).

Anyway, I have no clear idea what changes would be necessary to the
original design of logical decoding to make implementing this easier
now. The decoding in general is quite constrained by how our transam and
WAL stuff works. I suppose Andres thought about this aspect, and I guess
he concluded that (a) it's not needed for v1, and (b) adding it later
will require about the same effort. So in the "better" case we'd end up
waiting for logical decoding much longer; in the worst case we would not
have it at all.

>> But the way I understand it, it pretty much *is* a list of waiters,
>> along with a couple of flags to allow the processes to notify the other
>> side about lock/unlock/abort. It does resemble the lock groups, but I
>> don't think it has the same goals.
>
> So the parts that aren't relevant shouldn't be copied over.
>

I'm not sure which parts aren't relevant, but in general I agree that
stuff that is not necessary should not be copied over.

>>> That having been said, I still don't see how that's really going to
>>> work. Just to take one example, suppose that the leader is trying to
>>> ERROR out, and the decoding workers are blocked waiting for a lock
>>> held by the leader. The system has no way of detecting this deadlock
>>> and resolving it automatically, which certainly seems unacceptable.
>>> The only way that's going to work is if the leader waits for the
>>> worker by trying to acquire a lock held by the worker. Then the
>>> deadlock detector would know to abort some transaction. But that
>>> doesn't really work either - the deadlock was created by the
>>> foreground process trying to abort, and if the deadlock detector
>>> chooses that process as its victim, what then? We're already trying
>>> to abort, and the abort code isn't supposed to throw further errors,
>>> or fail in any way, lest we break all kinds of other things. Not to
>>> mention the fact that running the deadlock detector in the abort path
>>> isn't really safe to begin with, again because we can't throw errors
>>> when we're already in an abort path.
>>
>> Fair point, not sure. I'll leave this up to Nikhil.
>
> That's fine, but please understand that I think there's a basic design
> flaw here that just can't be overcome with any amount of hacking on
> the details here. I think we need a much higher-level consideration
> of the problem here and probably a lot of new infrastructure to
> support it. One idea might be to initially support decoding of
> in-progress transactions only if they don't modify any catalog state.

The problem is you don't know whether a transaction will do DDL sometime
later, in a part that you might not have decoded yet (or that perhaps runs
concurrently with the decoding). So I don't see how you could easily
exclude such transactions from the decoding ...

> That would leave out a bunch of cases we'd probably like to support,
> such as CREATE TABLE + COPY in the same transaction, but it would
> likely dodge a lot of really hard problems, too, and we could improve
> things later. One approach to the problem of catalog changes would be
> to prevent catalog tuples from being removed even after transaction
> abort until such time as there's no decoding in progress that might
> care about them. That is not by itself sufficient because a
> transaction can abort after inserting a heap tuple but before
> inserting an index tuple and we can't look at the catalog when it's an
> inconsistent state like that and expect reasonable results. But it
> helps: for example, if you are decoding a transaction that has
> inserted a WAL record with a cmin or cmax value of 4, and you know
> that none of the catalog records created by that transaction can have
> been pruned, then it should be safe to use a snapshot with CID 3 or
> smaller to decode the catalogs. So consider a case like:
>
> BEGIN;
> CREATE TABLE blah ... -- command ID 0
> COPY blah FROM '/tmp/blah' ... -- command ID 1
>
> Once we see the COPY show up in the WAL, it should be safe to decode
> the CREATE TABLE command and figure out what a snapshot with command
> ID 0 can see (again, assuming we've suppressed pruning in the catalogs
> in a sufficiently-well-considered way). Then, as long as the COPY
> command doesn't do any DML via a trigger or a datatype input function
> (!) or whatever, we should be able to use that snapshot to decode the
> data inserted by COPY.

One obvious issue with this is that it does not actually help with
reducing the replication lag, which is the main goal of this whole
effort. If the COPY is a big data load, waiting until after the
COPY completes gives us pretty much nothing.
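
The CREATE TABLE + COPY scenario quoted above can be illustrated with a
toy visibility check (a sketch only, not PostgreSQL code; the real logic
lives in the heapam visibility routines): a catalog tuple created at
command ID cmin is visible to a same-transaction snapshot at curcid iff
cmin < curcid.

```python
# Toy model of same-transaction command-ID visibility: decoding the COPY
# (command ID 1) may use a snapshot with curcid = 1, which sees the
# CREATE TABLE's catalog rows (cmin = 0) but nothing the current command
# is still in the middle of changing.
def catalog_tuple_visible(cmin, snapshot_curcid):
    # A tuple inserted by command `cmin` of our own transaction is
    # visible iff that command finished before the snapshot's curcid.
    return cmin < snapshot_curcid

CREATE_TABLE_CMIN = 0   # CREATE TABLE blah ...  -- command ID 0
COPY_CMIN = 1           # COPY blah FROM ...     -- command ID 1

# Snapshot used while decoding the COPY: curcid = 1 sees the new table.
assert catalog_tuple_visible(CREATE_TABLE_CMIN, snapshot_curcid=COPY_CMIN)
# A snapshot at CID 0 must not see the CREATE TABLE's own catalog rows.
assert not catalog_tuple_visible(CREATE_TABLE_CMIN, snapshot_curcid=0)
```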

> I'm not quite sure what happens if the COPY
> does do some DML or something like that -- we might have to stop
> decoding until the following command begins in the live transaction,
> or something like that. Or maybe we don't have to do that. I'm not
> totally sure how the command counter is managed for catalog snapshots.
> However it works in detail, we will get into trouble if we ever use a
> catalog snapshot that can see a change that the live transaction is
> still in the midst of making. Even with pruning prevented, we can
> only count on the catalogs to be in a consistent state once the live
> transaction has finished the command -- otherwise, for example, it
> might have increased pg_class.relnatts but not yet added the
> pg_attribute entry at the time it aborts, or something like that. I'm
> blathering a little bit but hopefully you get the point: I think the
> way forward is for somebody to think carefully through how and under
> what circumstances using a catalog snapshot can be made safe even if
> an abort has occurred afterwards -- not trying to postpone the abort,
> which I think is never going to be right.
>

But isn't this (delaying the catalog cleanup etc.) pretty much the
original approach, implemented by the original patch? Which you also
claimed to be unworkable, IIRC? Or how is this addressing the problems
with broken HOT chains, for example? Those issues were pretty much the
reason why we started looking at alternative approaches, like delaying
the abort ...

I wonder if disabling HOT on catalogs with wal_level=logical would be an
option here. I'm not sure how important HOT on catalogs is, in practice
(it surely does not help with the typical catalog bloat issue, which is
temporary tables, because that's mostly insert+delete). I suppose we
could disable it only when there's a replication slot indicating support
for decoding of in-progress transactions, so that you still get HOT with
plain logical decoding.

I'm sure there will be other obstacles, not just the HOT chain stuff,
but it would mean one step closer to a solution.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-18 14:56:31
Message-ID: CA+TgmoY2eTJJ5BVgtMkxjwCsrKPbaT_T5g0K5GcmeTk0FeF8DA@mail.gmail.com
Lists: pgsql-hackers

On Wed, Jul 18, 2018 at 10:08 AM, Tomas Vondra
<tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> The problem is you don't know if a transaction does DDL sometime later, in
> the part that you might not have decoded yet (or perhaps concurrently with
> the decoding). So I don't see how you could easily exclude such transactions
> from the decoding ...

One idea is that maybe the running transaction could communicate with
the decoding process through shared memory. For example, suppose that
before you begin decoding an ongoing transaction, you have to send
some kind of notification to the process saying "hey, I'm going to
start decoding you" and wait for that process to acknowledge receipt
of that message (say, at the next CFI). Once it acknowledges receipt,
you can begin decoding. Then, we're guaranteed that the foreground
process knows that it must be careful about catalog changes. If
it's going to make one, it sends a note to the decoding process and
says, hey, sorry, I'm about to do catalog changes, please pause
decoding. Once it gets an acknowledgement that decoding has paused,
it continues its work. Decoding resumes after commit (or maybe
earlier if it's provably safe).
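
A minimal sketch of that handshake, using Python threading events in
place of shared memory and CHECK_FOR_INTERRUPTS (names and structure
are hypothetical, not PostgreSQL API):

```python
# Sketch: decoder announces itself and waits for the foreground's ack;
# before any catalog change, the foreground requests a pause and waits
# for the decoder to ack it at a safe point in its decode loop.
import threading

class DecodeHandshake:
    def __init__(self):
        self.decode_requested = threading.Event()
        self.decode_acked = threading.Event()
        self.pause_requested = threading.Event()
        self.pause_acked = threading.Event()

    # --- decoder side ---
    def start_decoding(self):
        self.decode_requested.set()
        self.decode_acked.wait()       # wait for foreground's ack

    def maybe_pause(self):
        # called at safe points in the decode loop (the analogue of CFI)
        if self.pause_requested.is_set():
            self.pause_acked.set()
            return True                # caller stops decoding
        return False

    # --- foreground side ---
    def check_for_interrupts(self):
        if self.decode_requested.is_set():
            self.decode_acked.set()    # now we know we're being decoded

    def before_catalog_change(self):
        if self.decode_acked.is_set():
            self.pause_requested.set()
            self.pause_acked.wait()    # decoding paused; safe to proceed
```

The key property of the sketch is that the foreground only blocks in
before_catalog_change() once it knows a decoder is attached, and the
decoder only acks the pause at iteration boundaries of its loop.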

> But isn't this (delaying the catalog cleanup etc.) pretty much the original
> approach, implemented by the original patch? Which you also claimed to be
> unworkable, IIRC? Or how is this addressing the problems with broken HOT
> chains, for example? Those issues were pretty much the reason why we started
> looking at alternative approaches, like delaying the abort ...

I don't think so. The original approach, IIRC, was to decode after
the abort had already happened, and my objection was that you can't
rely on the state of anything at that point. The approach here is to
wait until the abort is in progress and then basically pause it while
we try to read stuff, but that seems similarly riddled with problems.
The newer approach could be considered an improvement in that you've
tried to get your hands around the problem at an earlier point, but
it's not early enough. To take a very rough analogy, the original
approach was like trying to install a sprinkler system after the
building had already burned down, while the new approach is like
trying to install a sprinkler system when you notice that the building
is on fire. But we need to install the sprinkler system in advance.
That is, we need to make all of the necessary preparations for a
possible abort before the abort occurs. That could perhaps be done by
arranging things so that decoding after an abort is actually still
safe (e.g. by making it look to certain parts of the system as though
the aborted transaction is still in progress until decoding no longer
cares about it) or by making sure that we are never decoding at the
point where a problematic abort happens (e.g. as proposed above, pause
decoding before doing dangerous things).

> I wonder if disabling HOT on catalogs with wal_level=logical would be an
> option here. I'm not sure how important HOT on catalogs is, in practice (it
> surely does not help with the typical catalog bloat issue, which is
> temporary tables, because that's mostly insert+delete). I suppose we could
> disable it only when there's a replication slot indicating support for
> decoding of in-progress transactions, so that you still get HOT with plain
> logical decoding.

Are you talking about HOT updates, or HOT pruning? Disabling the
former wouldn't help, and disabling the latter would break VACUUM,
which assumes that any tuple not removed by HOT pruning is not a dead
tuple (cf. 1224383e85eee580a838ff1abf1fdb03ced973dc, which was caused
by a case where that wasn't true).

> I'm sure there will be other obstacles, not just the HOT chain stuff, but it
> would mean one step closer to a solution.

Right.

Here's a crazy idea. Instead of disabling HOT pruning or anything
like that, have the decoding process advertise the XID of the
transaction being decoded as its own XID in its PGPROC. Also, using
magic, acquire a lock on that XID even though the foreground
transaction already holds that lock in exclusive mode. Fix the code
(and I'm pretty sure there is some) that relies on an XID appearing in
the procarray only once to no longer make that assumption. Then, if
the foreground process aborts, it will appear to the rest of the
system that it's still running, so HOT pruning won't remove the
XID, CLOG won't get truncated, people who are waiting to update a
tuple updated by the aborted transaction will keep waiting, etc. We
know that we do the right thing for running transactions, so if we
make this aborted transaction look like it is running and are
sufficiently convincing about the way we do that, then it should also
work. That seems more likely to be able to be made robust than
addressing specific problems (e.g. a tuple might get removed!) one by
one.
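
A toy model of why that could work (hypothetical structure; the real
checks consult the procarray): as long as any process, foreground or
decoder, advertises the XID, the rest of the system treats the
transaction as in progress and refrains from destructive cleanup.

```python
# Toy model: an XID counts as "in progress" while any PGPROC-like slot
# advertises it, so pruning/CLOG truncation hold off until the decoding
# process drops its copy of the XID too.
advertised_xids = {"foreground": {1234}, "decoder": {1234}}

def xid_in_progress(xid):
    return any(xid in xids for xids in advertised_xids.values())

def may_prune(xid):
    # destructive cleanup is allowed only once nobody advertises the XID
    return not xid_in_progress(xid)

assert not may_prune(1234)             # both processes advertise it
advertised_xids["foreground"].clear()  # foreground aborts...
assert not may_prune(1234)             # ...but decoder still pins the XID
advertised_xids["decoder"].clear()     # decoding finishes
assert may_prune(1234)                 # now cleanup can proceed
```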

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-18 15:27:40
Message-ID: 00bafa2d-4742-6555-5a72-3208812dc3fe@2ndquadrant.com
Lists: pgsql-hackers

On 07/18/2018 04:56 PM, Robert Haas wrote:
> On Wed, Jul 18, 2018 at 10:08 AM, Tomas Vondra
> <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> The problem is you don't know if a transaction does DDL sometime later, in
>> the part that you might not have decoded yet (or perhaps concurrently with
>> the decoding). So I don't see how you could easily exclude such transactions
>> from the decoding ...
>
> One idea is that maybe the running transaction could communicate with
> the decoding process through shared memory. For example, suppose that
> before you begin decoding an ongoing transaction, you have to send
> some kind of notification to the process saying "hey, I'm going to
> start decoding you" and wait for that process to acknowledge receipt
> of that message (say, at the next CFI). Once it acknowledges receipt,
> you can begin decoding. Then, we're guaranteed that the foreground
> process knows that it must be careful about catalog changes. If
> it's going to make one, it sends a note to the decoding process and
> says, hey, sorry, I'm about to do catalog changes, please pause
> decoding. Once it gets an acknowledgement that decoding has paused,
> it continues its work. Decoding resumes after commit (or maybe
> earlier if it's provably safe).
>

Let's assume a running transaction is holding an exclusive lock on
something. We start decoding it and do this little dance with sending
messages, confirmations etc. The decoding starts, and the plugin asks
for the same lock (and starts waiting). Then the transaction decides to
do some catalog changes, and sends a "pause" message to the decoding.
Who's going to respond, considering the decoding is waiting for the
lock? (And it's not easy to jump out, because it might be deep inside
the output plugin, i.e. deep in some extension.)
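
The circularity in this scenario can be made explicit with a tiny
wait-for graph (names hypothetical): the decoder waits on a heavyweight
lock held by the foreground, while the foreground waits on a pause ack
that only the decoder can give - and the second edge is invisible to
the deadlock detector, because it is not a lock wait.

```python
# Toy wait-for graph for the scenario above. The "foreground -> decoder"
# edge (waiting for a pause ack) is a cycle the deadlock detector never
# sees, since it only follows heavyweight-lock waits.
waits_for = {
    "decoder": "foreground",    # blocked on the exclusive lock
    "foreground": "decoder",    # blocked waiting for the pause ack
}

def has_cycle(graph, start):
    seen, node = set(), start
    while node in graph:
        if node in seen:
            return True
        seen.add(node)
        node = graph[node]
    return False

assert has_cycle(waits_for, "decoder")   # undetected deadlock
assert not has_cycle({"a": "b"}, "a")    # a simple wait, no cycle
```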

>> But isn't this (delaying the catalog cleanup etc.) pretty much the original
>> approach, implemented by the original patch? Which you also claimed to be
>> unworkable, IIRC? Or how is this addressing the problems with broken HOT
>> chains, for example? Those issues were pretty much the reason why we started
>> looking at alternative approaches, like delaying the abort ...
>
> I don't think so. The original approach, IIRC, was to decode after
> the abort had already happened, and my objection was that you can't
> rely on the state of anything at that point.

Pretty much, yes. Clearly there needs to be some sort of coordination
between the transaction and decoding process ...

> The approach here is to
> wait until the abort is in progress and then basically pause it while
> we try to read stuff, but that seems similarly riddled with problems.

Yeah :-(

> The newer approach could be considered an improvement in that you've
> tried to get your hands around the problem at an earlier point, but
> it's not early enough. To take a very rough analogy, the original
> approach was like trying to install a sprinkler system after the
> building had already burned down, while the new approach is like
> trying to install a sprinkler system when you notice that the building
> is on fire.

When an oil well is burning, they detonate a small bomb next to it to
extinguish it. What would be the analogy to that, here? pg_resetwal? ;-)

> But we need to install the sprinkler system in advance.

Damn causality!

> That is, we need to make all of the necessary preparations for a
> possible abort before the abort occurs. That could perhaps be done by
> arranging things so that decoding after an abort is actually still
> safe (e.g. by making it look to certain parts of the system as though
> the aborted transaction is still in progress until decoding no longer
> cares about it) or by making sure that we are never decoding at the
> point where a problematic abort happens (e.g. as proposed above, pause
> decoding before doing dangerous things).
>
>> I wonder if disabling HOT on catalogs with wal_level=logical would be an
>> option here. I'm not sure how important HOT on catalogs is, in practice (it
>> surely does not help with the typical catalog bloat issue, which is
>> temporary tables, because that's mostly insert+delete). I suppose we could
>> disable it only when there's a replication slot indicating support for
>> decoding of in-progress transactions, so that you still get HOT with plain
>> logical decoding.
>
> Are you talking about HOT updates, or HOT pruning? Disabling the
> former wouldn't help, and disabling the latter would break VACUUM,
> which assumes that any tuple not removed by HOT pruning is not a dead
> tuple (cf. 1224383e85eee580a838ff1abf1fdb03ced973dc, which was caused
> by a case where that wasn't true).
>

I'm talking about the issue you described here:

https://www.postgresql.org/message-id/CA+TgmoZP0SxEfKW1Pn=ackUj+KdWCxs7PumMAhSYJeZ+_61_GQ@mail.gmail.com

>> I'm sure there will be other obstacles, not just the HOT chain stuff, but it
>> would mean one step closer to a solution.
>
> Right.
>
> Here's a crazy idea. Instead of disabling HOT pruning or anything
> like that, have the decoding process advertise the XID of the
> transaction being decoded as its own XID in its PGPROC. Also, using
> magic, acquire a lock on that XID even though the foreground
> transaction already holds that lock in exclusive mode. Fix the code
> (and I'm pretty sure there is some) that relies on an XID appearing in
> the procarray only once to no longer make that assumption. Then, if
> the foreground process aborts, it will appear to the rest of the
> system that it's still running, so HOT pruning won't remove the
> XID, CLOG won't get truncated, people who are waiting to update a
> tuple updated by the aborted transaction will keep waiting, etc. We
> know that we do the right thing for running transactions, so if we
> make this aborted transaction look like it is running and are
> sufficiently convincing about the way we do that, then it should also
> work. That seems more likely to be able to be made robust than
> addressing specific problems (e.g. a tuple might get removed!) one by
> one.
>

A dumb question - would this work with subtransaction-level aborts? I
mean, a transaction that does some catalog changes in a subxact which
then aborts, while the toplevel transaction continues.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-18 16:59:30
Message-ID: CA+TgmoZGhw7WJ++w+GF13EbxHm58aCWUTc8an6HvfacQEY-94Q@mail.gmail.com
Lists: pgsql-hackers

On Wed, Jul 18, 2018 at 11:27 AM, Tomas Vondra
<tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> One idea is that maybe the running transaction could communicate with
>> the decoding process through shared memory. For example, suppose that
>> before you begin decoding an ongoing transaction, you have to send
>> some kind of notification to the process saying "hey, I'm going to
>> start decoding you" and wait for that process to acknowledge receipt
>> of that message (say, at the next CFI). Once it acknowledges receipt,
>> you can begin decoding. Then, we're guaranteed that the foreground
>> process knows that it must be careful about catalog changes. If
>> it's going to make one, it sends a note to the decoding process and
>> says, hey, sorry, I'm about to do catalog changes, please pause
>> decoding. Once it gets an acknowledgement that decoding has paused,
>> it continues its work. Decoding resumes after commit (or maybe
>> earlier if it's provably safe).
> Let's assume a running transaction is holding an exclusive lock on something.
> We start decoding it and do this little dance with sending messages,
> confirmations etc. The decoding starts, and the plugin asks for the same
> lock (and starts waiting). Then the transaction decides to do some catalog
> changes, and sends a "pause" message to the decoding. Who's going to
> respond, considering the decoding is waiting for the lock (and it's not easy
> to jump out, because it might be deep inside the output plugin, i.e. deep in
> some extension).

I think it's inevitable that any solution that is based on pausing
decoding might have to wait for a theoretically unbounded time for
decoding to get back to a point where it can safely pause. That is
one of several reasons why I don't believe that any solution based on
holding off aborts has any chance of being acceptable -- mid-abort is
a terrible time to pause. Now, if the time is not only theoretically
unbounded but also in practice likely to be very long (e.g. the
foreground transaction could easily have to wait minutes for the
decoding process to be able to process the pause request), then this
whole approach is probably not going to work. If, on the other hand,
the time is theoretically unbounded but in practice likely to be no
more than a few seconds in almost every case, then we might have
something. I don't know which is the case. It probably depends on
where you put the code to handle pause requests, and I'm not sure what
options are viable. For example, if there's a loop that eats WAL
records one at a time, and we can safely pause after any given
iteration of that loop, that sounds pretty good, unless a single
iteration of that loop might hang inside of a network I/O, in which
case it sounds ... less good, probably? But there might be ways
around that, too, like ... could we pause at the next CFI? I don't
understand the constraints well enough to comment intelligently here.
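
One way to picture the trade-off (an assumed loop structure, not the
actual decoding code): if the pause check sits between record-at-a-time
iterations, the foreground's wait is bounded by the cost of a single
iteration - fine, unless that one iteration can itself block, e.g.
inside the output plugin's network I/O.

```python
# Sketch: a record-at-a-time decode loop that honors a pause request
# only at iteration boundaries, so the pause latency is at most one
# iteration of work (assuming no iteration blocks indefinitely).
def decode_stream(records, pause_requested):
    out = []
    for rec in records:
        if pause_requested():          # safe point between records
            out.append("PAUSED")
            break
        out.append(f"decoded {rec}")
    return out

assert decode_stream([1, 2, 3], lambda: False) == \
       ["decoded 1", "decoded 2", "decoded 3"]
assert decode_stream([1, 2, 3], lambda: True) == ["PAUSED"]
```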

>> The newer approach could be considered an improvement in that you've
>> tried to get your hands around the problem at an earlier point, but
>> it's not early enough. To take a very rough analogy, the original
>> approach was like trying to install a sprinkler system after the
>> building had already burned down, while the new approach is like
>> trying to install a sprinkler system when you notice that the building
>> is on fire.
>
> When an oil well is burning, they detonate a small bomb next to it to
> extinguish it. What would be the analogy to that, here? pg_resetwal? ;-)

Yep. :-)

>> But we need to install the sprinkler system in advance.
>
> Damn causality!

I know, right?

>> Are you talking about HOT updates, or HOT pruning? Disabling the
>> former wouldn't help, and disabling the latter would break VACUUM,
>> which assumes that any tuple not removed by HOT pruning is not a dead
>> tuple (cf. 1224383e85eee580a838ff1abf1fdb03ced973dc, which was caused
>> by a case where that wasn't true).
>
> I'm talking about the issue you described here:
>
> https://www.postgresql.org/message-id/CA+TgmoZP0SxEfKW1Pn=ackUj+KdWCxs7PumMAhSYJeZ+_61_GQ@mail.gmail.com

There are several issues there. The second and third ones boil down
to this: As soon as the system thinks that your transaction is no
longer in progress, it is going to start making decisions based on
whether that transaction committed or aborted. If it thinks your
transaction aborted, it is going to feel entirely free to make
decisions that permanently lose information -- like removing tuples or
overwriting CTIDs or truncating CLOG or killing index entries. I
doubt it makes any sense to try to fix each of those problems
individually -- if we're going to do something about this, it had
better be broad enough to nail all or nearly all of the problems in
this area in one fell swoop.

The first issue in that email is different. That's really about the
possibility that the aborted transaction itself has created chaos,
whereas the other ones are about the chaos that the rest of the system
might impose based on the belief that the transaction is no longer
needed for anything after an abort has occurred.

> A dumb question - would this work with subtransaction-level aborts? I mean,
> a transaction that does some catalog changes in a subxact which then however
> aborts, but then still continues.

Well, I would caution you against relying on me to design this for
you. The fact that I can identify the pitfalls of trying to install a
sprinkler system while the building is on fire does not mean that I
know what diameter of pipe should be used to provide for proper fire
containment. It's really important that this gets designed by someone
who knows -- or learns -- enough to make it really good and safe.
Replacing obvious problems (the building has already burned down!)
with subtler problems (the water pressure is insufficient to reach the
upper stories!) might get the patch committed, but that's not the
right goal.

That having been said, I cannot immediately see any reason why the
idea that I sketched there couldn't be made to work just as well or
poorly for subtransactions as it would for toplevel transactions. I
don't really know that it will work even for toplevel transactions --
that would require more thought and careful study than I've given it
(or, given that this is not my patch, feel that I should need to give
it). However, if it does, and if there are no other problems that
I've missed in thinking casually about it, then I think it should be
possible to make it work for subtransactions, too. Likely, as the
decoding process first encountered each new sub-XID, it would need to
magically acquire a duplicate lock and advertise the subxid just as it
did for the toplevel XID, so that at any given time the set of XIDs
advertised by the decoding process would be a subset (not necessarily
proper) of the set advertised by the foreground process.
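
That invariant can be stated compactly (a sketch with made-up XIDs): at
every step, the set of XIDs advertised by the decoding process is a
subset of the set advertised by the foreground.

```python
# Sketch of the subset invariant: the decoder advertises each sub-XID
# as it first encounters it in the WAL, and its advertised set must
# stay a subset of the foreground's at all times.
foreground_xids = {1000, 1001, 1002}   # toplevel XID plus two subxids
decoder_xids = set()

assert decoder_xids <= foreground_xids # holds before decoding starts
for subxid in [1000, 1001]:            # XIDs seen in WAL so far
    decoder_xids.add(subxid)
    assert decoder_xids <= foreground_xids  # holds after each new XID
```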

To try to be a little clearer about my overall position, I am
suggesting that you (1) abandon the current approach and (2) make sure
that everything is done by making sufficient preparations in advance
of any abort rather than trying to cope after it's already started. I
am also suggesting that, to get there, it might be helpful to (a)
contemplate communication and active cooperation between the running
process and the decoding process(es), but it might turn out not to be
needed and I don't know exactly what needs to be communicated, (b)
consider whether there's a reasonable way to make it look to other
parts of the system like the aborted transaction is still running, but
this also might turn out not to be the right approach, (c) consider
whether logical decoding already does or can be made to use historical
catalog snapshots that only see command IDs prior to the current one
so that incompletely-made changes by the last CID aren't seen if an
abort happens. I think there is a good chance that a full solution
involves more than one of these things, and maybe some other things I
haven't thought about. These are ideas, not a plan.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-19 08:55:19
Message-ID: CAMGcDxeZ+BCRb7xn+9VrSaceV5oxOCcbEjxH8P95TLVfUD+v8A@mail.gmail.com
Lists: pgsql-hackers

Hi Robert and Tomas,

It seems clear to me that the decodeGroup list of decoding backends
waiting on the backend doing the transaction of interest is not a
favored approach here. Note that I came down to this approach after
trying various other approaches/iterations. I was especially enthused
to see the lockGroupLeader implementation in the code and based this
decodeGroup implementation on the same premise, although our
requirements are simply to have a list of waiters in the main
transaction backend process.

Sure, there might be some issues related to locking in the code, and
I am willing to try to work them out. However, if the decodeGroup
approach of interlocking abort processing with the decoding backends
is itself considered suspect, then working on them might be another
waste of time.

> I think it's inevitable that any solution that is based on pausing
> decoding might have to wait for a theoretically unbounded time for
> decoding to get back to a point where it can safely pause. That is
> one of several reasons why I don't believe that any solution based on
> holding off aborts has any chance of being acceptable -- mid-abort is
> a terrible time to pause. Now, if the time is not only theoretically
> unbounded but also in practice likely to be very long (e.g. the
> foreground transaction could easily have to wait minutes for the
> decoding process to be able to process the pause request), then this
> whole approach is probably not going to work. If, on the other hand,
> the time is theoretically unbounded but in practice likely to be no
> more than a few seconds in almost every case, then we might have
> something. I don't know which is the case.

We have tried to minimize the pausing requirements by holding the
"LogicalLock" only when the decoding activity needs to access catalog
tables. The decoding goes ahead only if it gets the logical lock,
reads the catalog and unlocks immediately. If the decoding backend
does not get the "LogicalLock" then it stops decoding the current
transaction. So, the time to pause is pretty short in practical
scenarios.
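To make that pattern concrete, here is a minimal standalone sketch of the hold-only-around-catalog-access idea. The lock functions and the `txn_alive` flag are stand-ins invented for this sketch (stubbed so the control flow compiles on its own); they are not the patch's actual interlock implementation:

```c
#include <stdbool.h>

/* Stub state standing in for the decodeGroup interlock with the backend
 * running the prepared transaction; set to false to simulate an abort. */
static bool txn_alive = true;

static bool LogicalLockTransaction(void)   { return txn_alive; }
static void LogicalUnlockTransaction(void) { }

/*
 * Decode one change. The "LogicalLock" is held only across the catalog
 * read, so a concurrent ROLLBACK PREPARED never has to wait out network
 * I/O performed while streaming the change to the output plugin.
 */
static bool
decode_one_change(void)
{
    if (!LogicalLockTransaction())
        return false;           /* txn aborted: stop decoding it */

    /* ... read catalog metadata needed to interpret the change ... */

    LogicalUnlockTransaction();

    /* ... send the change downstream with no lock held ... */
    return true;
}
```

The point of the shape above is that the abort path only ever blocks on the short catalog-read critical section, never on downstream I/O.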

>It probably depends on
> where you put the code to handle pause requests, and I'm not sure what
> options are viable. For example, if there's a loop that eats WAL
> records one at a time, and we can safely pause after any given
> iteration of that loop, that sounds pretty good, unless a single
> iteration of that loop might hang inside of a network I/O, in which
> case it sounds ... less good, probably?

It's precisely to avoid waiting inside network I/O in scenarios like
the above that we lock only around catalog access, as described earlier.

> There are several issues there. The second and third ones boil down
> to this: As soon as the system thinks that your transaction is no
> longer in process, it is going to start making decisions based on
> whether that transaction committed or aborted. If it thinks your
> transaction aborted, it is going to feel entirely free to make
> decisions that permanently lose information -- like removing tuples or
> overwriting CTIDs or truncating CLOG or killing index entries. I
> doubt it makes any sense to try to fix each of those problems
> individually -- if we're going to do something about this, it had
> better be broad enough to nail all or nearly all of the problems in
> this area in one fell swoop.

Agreed, this was the crux of the issue. Decisions that cause
permanent loss of information, regardless of any decoding happening
around that transaction, are what led us down this rabbit
hole in the first place.

>> A dumb question - would this work with subtransaction-level aborts? I mean,
>> a transaction that does some catalog changes in a subxact which then aborts,
>> while the toplevel transaction still continues.
>
> That having been said, I cannot immediately see any reason why the
> idea that I sketched there couldn't be made to work just as well or
> poorly for subtransactions as it would for toplevel transactions. I
> don't really know that it will work even for toplevel transactions --
> that would require more thought and careful study than I've given it
> (or, given that this is not my patch, feel that I should need to give
> it). However, if it does, and if there are no other problems that
> I've missed in thinking casually about it, then I think it should be
> possible to make it work for subtransactions, too. Likely, as the
> decoding process first encountered each new sub-XID, it would need to
> magically acquire a duplicate lock and advertise the subxid just as it
> did for the toplevel XID, so that at any given time the set of XIDs
> advertised by the decoding process would be a subset (not necessarily
> proper) of the set advertised by the foreground process.
>

I am ready to go back to the drawing board and have another stab at this
pesky little large issue :-)

> To try to be a little clearer about my overall position, I am
> suggesting that you (1) abandon the current approach and (2) make sure
> that everything is done by making sufficient preparations in advance
> of any abort rather than trying to cope after it's already started. I
> am also suggesting that, to get there, it might be helpful to (a)
> contemplate communication and active cooperation between the running
> process and the decoding process(es), but it might turn out not to be
> needed and I don't know exactly what needs to be communicated, (b)
> consider whether there's a reasonable way to make it look to other
> parts of the system like the aborted transaction is still running, but
> this also might turn out not to be the right approach, (c) consider
> whether logical decoding already does or can be made to use historical
> catalog snapshots that only see command IDs prior to the current one
> so that incompletely-made changes by the last CID aren't seen if an
> abort happens. I think there is a good chance that a full solution
> involves more than one of these things, and maybe some other things I
> haven't thought about. These are ideas, not a plan.
>

I will think more on the above lines and see if we can get something workable.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-19 19:42:08
Message-ID: 20180719194208.j5m7qhdmqp46d6yg@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2018-07-18 10:56:31 -0400, Robert Haas wrote:
> Are you talking about HOT updates, or HOT pruning? Disabling the
> former wouldn't help, and disabling the latter would break VACUUM,
> which assumes that any tuple not removed by HOT pruning is not a dead
> tuple (cf. 1224383e85eee580a838ff1abf1fdb03ced973dc, which was caused
> by a case where that wasn't true).

I don't think this reasoning actually applies for making HOT pruning
weaker as necessary for decoding. The xmin horizon on catalog tables is
already pegged, which'd prevent similar problems.

There are already plenty of cases where dead tuples, if they only recently
became so, are not removed by the time vacuumlazy.c processes the tuple.

I actually think that, on balance, of all the solutions discussed in this
thread, neutering pruning *a bit* is by far the most palatable
solution. We don't need to fully prevent removal of such tuple chains,
it's sufficient that we can detect that a tuple has been removed. A
large-sledgehammer approach would be to just error out when attempting
to read such a tuple. The existing error handling logic can relatively
easily be made to work with that.

Greetings,

Andres Freund


From: Andres Freund <andres(at)anarazel(dot)de>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-19 19:43:42
Message-ID: 20180719194342.2u6rjo7tfjvtxvtc@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2018-07-18 16:08:37 +0200, Tomas Vondra wrote:
> Anyway, I have no clear idea what changes would be necessary to the original
> design of logical decoding to make implementing this easier now. The
> decoding in general is quite constrained by how our transam and WAL stuff
> works. I suppose Andres thought about this aspect, and I guess he concluded
> that (a) it's not needed for v1, and (b) adding it later will require about
> the same effort. So in the "better" case we'd end up waiting for logical
> decoding much longer, in the worse case we would not have it at all.

I still don't really see an alternative that'd have been (or even *is*)
realistically doable.

Greetings,

Andres Freund


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-19 22:35:00
Message-ID: 20180719223500.lclj45aiojikwnwm@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2018-07-19 12:42:08 -0700, Andres Freund wrote:
> I actually think that, on balance, of all the solutions discussed in this
> thread, neutering pruning *a bit* is by far the most palatable
> solution. We don't need to fully prevent removal of such tuple chains,
> it's sufficient that we can detect that a tuple has been removed. A
> large-sledgehammer approach would be to just error out when attempting
> to read such a tuple. The existing error handling logic can relatively
> easily be made to work with that.

So. I'm just back from not working for a few days. I've not followed
this discussion in all its detail over the last months. I've an
annoying bout of allergies. So I might be entirely off.

I think this whole issue only exists if we actually end up doing catalog
lookups, not if there's only cached lookups (otherwise our invalidation
handling is entirely borked). And we should normally do cached lookups
for a large percentage of the cases. Therefore we can make the
cache-miss cases a bit slower.

So what if we, at the begin / end of cache miss handling, re-check if
the to-be-decoded transaction is still in-progress (or has
committed). And we throw an error if that happened. That error is then
caught in reorderbuffer, the in-progress-xact aborted callback is
called, and processing continues (there's a couple nontrivial details
here, but it should be doable).

The biggest issue is what constitutes a "cache miss". It's fairly
trivial to do this for syscache / relcache, but that's not sufficient:
there are plenty of cases where catalogs are accessed without going through
either. But as far as I can tell if we declared that all historic
accesses have to go through systable_beginscan* - which'd imo not be a
crazy restriction - we could put the checks at that layer.

That'd require that an index lookup can't crash if the corresponding
heap entry doesn't exist (etc), but that's something we need to handle
anyway. The issue that multiple separate catalog lookups need to be
coherent (say Robert's pg_class exists, but pg_attribute doesn't
example) is solved by virtue of the pg_attribute lookups failing if
the transaction aborted.

Am I missing something here?

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-20 06:43:19
Message-ID: CAMGcDxfgg8yP0MsdKRuqwihkrQF=0Mz9Se+0paNGADWaexULjQ@mail.gmail.com
Lists: pgsql-hackers

Hi Andres,

> So what if we, at the begin / end of cache miss handling, re-check if
> the to-be-decoded transaction is still in-progress (or has
> committed). And we throw an error if that happened. That error is then
> caught in reorderbuffer, the in-progress-xact aborted callback is
> called, and processing continues (there's a couple nontrivial details
> here, but it should be doable).
>
> The biggest issue is what constitutes a "cache miss". It's fairly
> trivial to do this for syscache / relcache, but that's not sufficient:
> there are plenty of cases where catalogs are accessed without going through
> either. But as far as I can tell if we declared that all historic
> accesses have to go through systable_beginscan* - which'd imo not be a
> crazy restriction - we could put the checks at that layer.
>

Documenting that historic accesses go through systable_* APIs does
seem reasonable. In our earlier discussions, we felt asking plugin
writers to do anything along these lines was too onerous and
cumbersome to expect.

> That'd require that an index lookup can't crash if the corresponding
> heap entry doesn't exist (etc), but that's something we need to handle
> anyway. The issue that multiple separate catalog lookups need to be
> coherent (say Robert's pg_class exists, but pg_attribute doesn't
> example) is solved by virtue of the pg_attribute lookups failing if
> the transaction aborted.
>
> Am I missing something here?
>

Are you suggesting we have a:

PG_TRY()
{
Catalog_Access();
}
PG_CATCH()
{
Abort_Handling();
}

here?

Regards,
Nikhils


From: Andres Freund <andres(at)anarazel(dot)de>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-20 14:58:36
Message-ID: 20180720145836.gxwhbftuoyx5h4gc@alap3.anarazel.de
Lists: pgsql-hackers

On 2018-07-20 12:13:19 +0530, Nikhil Sontakke wrote:
> Hi Andres,
>
>
> > So what if we, at the begin / end of cache miss handling, re-check if
> > the to-be-decoded transaction is still in-progress (or has
> > committed). And we throw an error if that happened. That error is then
> > caught in reorderbuffer, the in-progress-xact aborted callback is
> > called, and processing continues (there's a couple nontrivial details
> > here, but it should be doable).
> >
> > The biggest issue is what constitutes a "cache miss". It's fairly
> > trivial to do this for syscache / relcache, but that's not sufficient:
> > there are plenty of cases where catalogs are accessed without going through
> > either. But as far as I can tell if we declared that all historic
> > accesses have to go through systable_beginscan* - which'd imo not be a
> > crazy restriction - we could put the checks at that layer.
> >
>
> Documenting that historic accesses go through systable_* APIs does
> seem reasonable. In our earlier discussions, we felt asking plugin
> writers to do anything along these lines was too onerous and
> cumbersome to expect.

But they don't really need to do that - in just about all cases access
"automatically" goes through systable_* or layers above. If you call
output functions, do syscache lookups, etc you're good.

> > That'd require that an index lookup can't crash if the corresponding
> > heap entry doesn't exist (etc), but that's something we need to handle
> > anyway. The issue that multiple separate catalog lookups need to be
> > coherent (say Robert's pg_class exists, but pg_attribute doesn't
> > example) is solved by virtue of the pg_attribute lookups failing if
> > the transaction aborted.
> >
> > Am I missing something here?
> >
>
> Are you suggesting we have a:
>
> PG_TRY()
> {
> Catalog_Access();
> }
> PG_CATCH()
> {
> Abort_Handling();
> }
>
> here?

Not quite, no. Basically, in a simplified manner, the logical decoding
loop is like:

while (true)
record = readRecord()
logical = decodeRecord()

PG_TRY():
StartTransactionCommand();

switch (TypeOf(logical))
case INSERT:
insert_callback(logical);
break;
...

CommitTransactionCommand();

PG_CATCH():
AbortCurrentTransaction();
PG_RE_THROW();

what I'm proposing is that various catalog access functions throw a
new class of error, something like "decoding aborted transactions". The
PG_CATCH() above would then not unconditionally re-throw, but set a flag
and continue iff that class of error was detected.

while (true)
if (in_progress_xact_abort_pending)
StartTransactionCommand();
in_progress_xact_abort_callback(made_up_record);
in_progress_xact_abort_pending = false;
CommitTransactionCommand();

record = readRecord()
logical = decodeRecord()

PG_TRY():
StartTransactionCommand();

switch (TypeOf(logical))
case INSERT:
insert_callback(logical);
break;
...

CommitTransactionCommand();

PG_CATCH():
AbortCurrentTransaction();
if (errclass == DECODING_ABORTED_XACT)
in_progress_xact_abort_pending = true;
continue;
else
PG_RE_THROW();

Now obviously that's just pseudo code with lotsa things missing, but I
think the basic idea should come through?
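As a rough illustration of that control flow only (not the patch's code), the catch-set-flag-and-continue loop can be modeled with setjmp/longjmp standing in for PG_TRY/PG_CATCH; the `DECODING_ABORTED_XACT` error class and the counters are invented for this sketch:

```c
#include <setjmp.h>
#include <stdbool.h>

/* Invented error classes modeling ereport()'s errcode. */
enum errclass { ERRCLASS_OTHER, DECODING_ABORTED_XACT };

static jmp_buf try_buf;             /* stands in for PG_TRY/PG_CATCH */
static enum errclass thrown;

static int apply_calls;             /* how often the insert callback ran */
static int abort_callbacks;         /* how often the abort callback ran */

/* Simulated insert_callback: the first change hits an aborted-xact
 * error raised by a catalog access; later changes apply cleanly. */
static void insert_callback(void)
{
    apply_calls++;
    if (apply_calls == 1)
    {
        thrown = DECODING_ABORTED_XACT;
        longjmp(try_buf, 1);        /* ereport(ERROR, ...) */
    }
}

/* The modified decoding loop from the pseudo code above. */
static void decode_loop(int nrecords)
{
    volatile bool in_progress_xact_abort_pending = false;

    for (int i = 0; i < nrecords; i++)
    {
        if (in_progress_xact_abort_pending)
        {
            abort_callbacks++;      /* in_progress_xact_abort_callback() */
            in_progress_xact_abort_pending = false;
        }

        if (setjmp(try_buf) == 0)
            insert_callback();      /* PG_TRY() body */
        else if (thrown == DECODING_ABORTED_XACT)
            in_progress_xact_abort_pending = true;  /* swallow, continue */
        /* else: PG_RE_THROW() */
    }
}
```

The key property is that only the new error class is swallowed; any other error still propagates, exactly as in the pseudo code.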

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-23 11:01:50
Message-ID: CAMGcDxebtC_ZO6EtLA1MgJ6sWbud3Fjtq9HsBwxxhYfoHJdANw@mail.gmail.com
Lists: pgsql-hackers

Hi Andres,

>> > That'd require that an index lookup can't crash if the corresponding
>> > heap entry doesn't exist (etc), but that's something we need to handle
>> > anyway. The issue that multiple separate catalog lookups need to be
>> > coherent (say Robert's pg_class exists, but pg_attribute doesn't
>> > example) is solved by virtue of the pg_attribute lookups failing if
>> > the transaction aborted.
> Not quite, no. Basically, in a simplified manner, the logical decoding
> loop is like:
>
> while (true)
> record = readRecord()
> logical = decodeRecord()
>
> PG_TRY():
> StartTransactionCommand();
>
> switch (TypeOf(logical))
> case INSERT:
> insert_callback(logical);
> break;
> ...
>
> CommitTransactionCommand();
>
> PG_CATCH():
> AbortCurrentTransaction();
> PG_RE_THROW();
>
> what I'm proposing is that various catalog access functions throw a
> new class of error, something like "decoding aborted transactions".

When will this error be thrown by the catalog functions? How will it
determine that it needs to throw this error?

> PG_CATCH():
> AbortCurrentTransaction();
> if (errclass == DECODING_ABORTED_XACT)
> in_progress_xact_abort_pending = true;
> continue;
> else
> PG_RE_THROW();
>
> Now obviously that's just pseudo code with lotsa things missing, but I
> think the basic idea should come through?
>

How do we handle the cases where the catalog returns inconsistent data
(without erroring out) which does not help with the ongoing decoding?
Consider for example:

BEGIN;
/* CONSIDER T1 has one column C1 */
ALTER TABLE T1 ADD COLUMN c2 int;
INSERT INTO T1(c2) VALUES (1);
PREPARE TRANSACTION 'gid1';

If we abort the above 2PC and the catalog row for the ALTER gets
cleaned up by vacuum, then the catalog read will return us T1 with one
column C1. The catalog scan will NOT error out but will return
metadata which causes the insert-decoding change apply callback to
error out. The point here is that in some cases the catalog scan might
not error out and might return inconsistent metadata which causes
issues further down the line in apply processing.

Regards,
Nikhils

> Greetings,
>
> Andres Freund

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-23 13:50:09
Message-ID: 20180723135009.w25re6qxbb7njrbd@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2018-07-23 16:31:50 +0530, Nikhil Sontakke wrote:
> >> > That'd require that an index lookup can't crash if the corresponding
> >> > heap entry doesn't exist (etc), but that's something we need to handle
> >> > anyway. The issue that multiple separate catalog lookups need to be
> >> > coherent (say Robert's pg_class exists, but pg_attribute doesn't
> >> > example) is solved by virtue of the pg_attribute lookups failing if
> >> > the transaction aborted.
> > Not quite, no. Basically, in a simplified manner, the logical decoding
> > loop is like:
> >
> > while (true)
> > record = readRecord()
> > logical = decodeRecord()
> >
> > PG_TRY():
> > StartTransactionCommand();
> >
> > switch (TypeOf(logical))
> > case INSERT:
> > insert_callback(logical);
> > break;
> > ...
> >
> > CommitTransactionCommand();
> >
> > PG_CATCH():
> > AbortCurrentTransaction();
> > PG_RE_THROW();
> >
> > what I'm proposing is that various catalog access functions throw a
> > new class of error, something like "decoding aborted transactions".
>
> When will this error be thrown by the catalog functions? How will it
> determine that it needs to throw this error?

The error check would have to happen at the end of most systable_*
functions. They'd simply do something like

if (decoding_in_progress_xact && TransactionIdDidAbort(xid_of_aborted))
ereport(ERROR, (errcode(DECODING_ABORTED_XACT), errmsg("oops")));

i.e. check whether the transaction to be decoded still is in
progress. As that would happen before any potentially wrong result can
be returned (as the check happens at the tail end of systable_*),
there's no issue with wrong state in the syscache etc.

> > PG_CATCH():
> > AbortCurrentTransaction();
> > if (errclass == DECODING_ABORTED_XACT)
> > in_progress_xact_abort_pending = true;
> > continue;
> > else
> > PG_RE_THROW();
> >
> > Now obviously that's just pseudo code with lotsa things missing, but I
> > think the basic idea should come through?
> >
>
> How do we handle the cases where the catalog returns inconsistent data
> (without erroring out) which does not help with the ongoing decoding?
> Consider for example:

I don't think that situation exists, given the scheme described
above. That's just the point.

> BEGIN;
> /* CONSIDER T1 has one column C1 */
> ALTER TABLE T1 ADD COLUMN c2 int;
> INSERT INTO T1(c2) VALUES (1);
> PREPARE TRANSACTION 'gid1';
>
> If we abort the above 2PC and the catalog row for the ALTER gets
> cleaned up by vacuum, then the catalog read will return us T1 with one
> column C1.

No, it'd throw an error due to the new is-aborted check.

> The catalog scan will NOT error out but will return metadata which
> causes the insert-decoding change apply callback to error out.

Why would it not throw an error?

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-23 14:07:46
Message-ID: CAMGcDxdJNZVq1mX9dZbwEniigi=af8H5dt62C+q2UHwsE+BTaw@mail.gmail.com
Lists: pgsql-hackers

Hi Andres,

>> > what I'm proposing is that various catalog access functions throw a
>> > new class of error, something like "decoding aborted transactions".
>>
>> When will this error be thrown by the catalog functions? How will it
>> determine that it needs to throw this error?
>
> The error check would have to happen at the end of most systable_*
> functions. They'd simply do something like
>
> if (decoding_in_progress_xact && TransactionIdDidAbort(xid_of_aborted))
> ereport(ERROR, (errcode(DECODING_ABORTED_XACT), errmsg("oops")));
>
> i.e. check whether the transaction to be decoded still is in
> progress. As that would happen before any potentially wrong result can
> be returned (as the check happens at the tail end of systable_*),
> there's no issue with wrong state in the syscache etc.
>

Oh, ok. The systable_* functions use the passed-in snapshot and return
tuples matching it. They do not typically have access to the
current XID being worked upon.

We can find out if the snapshot is a logical decoding one by virtue of
its "satisfies" function pointing to HeapTupleSatisfiesHistoricMVCC.

>
>> The catalog scan will NOT error out but will return metadata which
>> causes the insert-decoding change apply callback to error out.
>
> Why would it not throw an error?
>

In your scheme, it will throw an error, indeed. We'd need to make the
"being-currently-decoded-XID" visible to these systable_* functions
and then this scheme will work.

Regards,
Nikhils

> Greetings,
>
> Andres Freund

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-23 14:14:35
Message-ID: 20180723141435.ercxxh3bj2uc2mjm@alap3.anarazel.de
Lists: pgsql-hackers

On 2018-07-23 19:37:46 +0530, Nikhil Sontakke wrote:
> Hi Andres,
>
> >> > what I'm proposing is that various catalog access functions throw a
> >> > new class of error, something like "decoding aborted transactions".
> >>
> >> When will this error be thrown by the catalog functions? How will it
> >> determine that it needs to throw this error?
> >
> > The error check would have to happen at the end of most systable_*
> > functions. They'd simply do something like
> >
> > if (decoding_in_progress_xact && TransactionIdDidAbort(xid_of_aborted))
> > ereport(ERROR, (errcode(DECODING_ABORTED_XACT), errmsg("oops")));
> >
> > i.e. check whether the transaction to be decoded still is in
> > progress. As that would happen before any potentially wrong result can
> > be returned (as the check happens at the tail end of systable_*),
> > there's no issue with wrong state in the syscache etc.
> >
>
> Oh, ok. The systable_* functions use the passed-in snapshot and return
> tuples matching it. They do not typically have access to the
> current XID being worked upon.

That seems like quite a solvable issue, especially compared to the
locking schemes proposed.

> We can find out if the snapshot is a logical decoding one by virtue of
> its "satisfies" function pointing to HeapTupleSatisfiesHistoricMVCC.

I think we can even just do something like a global
TransactionId check_if_transaction_is_alive = InvalidTransactionId;
and just set it up during decoding. And then just check it whenever it's
not set to InvalidTransactionId.

Greetings,

Andres Freund
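The global-variable scheme sketched above can be illustrated in isolation. Everything below (the typedef, the stubbed TransactionIdDidAbort, and the check function) is a stand-in for the real backend machinery, not the actual patch code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the backend's TransactionId machinery (illustration only) */
typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)
#define TransactionIdIsValid(xid) ((xid) != InvalidTransactionId)

/* The suggested global: set while decoding a not-yet-committed transaction */
static TransactionId check_if_transaction_is_alive = InvalidTransactionId;

/* Stubbed clog lookup: pretend xid 42 is the one that aborted */
static bool TransactionIdDidAbort(TransactionId xid) { return xid == 42; }

/*
 * Check placed at the tail end of a catalog scan: when no decoding is in
 * progress the global is invalid and the scan result is always usable;
 * otherwise a false return is where the real code would ereport(ERROR)
 * with a TRANSACTION_ROLLBACK errcode.
 */
static bool scan_result_still_valid(void)
{
    return !(TransactionIdIsValid(check_if_transaction_is_alive) &&
             TransactionIdDidAbort(check_if_transaction_is_alive));
}
```

The point of the scheme is that normal backends never set the global, so the check is free for them; only a decoding session pays for the clog lookup.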


From: Nikhil Sontakke <nikhil(dot)sontakke(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-23 15:52:18
Message-ID: 3B37B16B-A473-4352-ABCF-F23AE314D4F7@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi Andres,

>> We can find out if the snapshot is a logical decoding one by virtue of
>> its "satisfies" function pointing to HeapTupleSatisfiesHistoricMVCC.
>
> I think we can even just do something like a global
> TransactionId check_if_transaction_is_alive = InvalidTransactionId;
> and just set it up during decoding. And then just check it whenever it's
> not set to InvalidTransactionId.
>
>

Ok. I will work on something along these lines and re-submit the set of patches.

Regards,
Nikhils


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-23 16:11:13
Message-ID: CA+TgmobpdQ-ZU_+rUJ3MAop5TcjPQ9DQyd2MazLC3ROe=DySJQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Jul 19, 2018 at 3:42 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> I don't think this reasoning actually applies for making HOT pruning
> weaker as necessary for decoding. The xmin horizon on catalog tables is
> already pegged, which'd prevent similar problems.

That sounds completely wrong to me. Setting the xmin horizon keeps
tuples that are made dead by a committing transaction from being
removed, but I don't think it will do anything to keep tuples that are
made dead by an aborting transaction from being removed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org,Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>,Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>,David Steele <david(at)pgmasters(dot)net>,Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>,Simon Riggs <simon(at)2ndquadrant(dot)com>,Craig Ringer <craig(at)2ndquadrant(dot)com>,Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>,Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>,Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>,Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>,Dmitry Dolgov <9erthalion6(at)gmail(dot)com>,Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>,Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>,PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-23 16:13:01
Message-ID: CC88AED5-9D79-4AEF-A66E-89A1EDF3AF17@anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On July 23, 2018 9:11:13 AM PDT, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>On Thu, Jul 19, 2018 at 3:42 PM, Andres Freund <andres(at)anarazel(dot)de>
>wrote:
>> I don't think this reasoning actually applies for making HOT pruning
>> weaker as necessary for decoding. The xmin horizon on catalog tables
>is
>> already pegged, which'd prevent similar problems.
>
>That sounds completely wrong to me. Setting the xmin horizon keeps
>tuples that are made dead by a committing transaction from being
>removed, but I don't think it will do anything to keep tuples that are
>made dead by an aborting transaction from being removed.

My point is that we could just make HTSV treat them as recently dead, without incurring the issues of the bug you referenced.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-23 16:38:25
Message-ID: CA+TgmoYwW7DYuOkvp3a7_HHe=5wtGUvUH4hmJpy52=dc2x0gJg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Jul 23, 2018 at 12:13 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> My point is that we could just make HTSV treat them as recently dead, without incurring the issues of the bug you referenced.

That doesn't seem sufficient. For example, it won't keep the
predecessor tuple's ctid field from being overwritten by a subsequent
updater -- and if that happens then the update chain is broken. Maybe
your idea of cross-checking at the end of each syscache lookup would
be sufficient to prevent that from happening, though. But I wonder if
there are subtler problems, too -- e.g. relfrozenxid vs. actual xmins
in the table, clog truncation, or whatever. There might be no
problem, but the idea that an aborted transaction is of no further
interest to anybody is pretty deeply ingrained in the system.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-23 18:23:42
Message-ID: 20180723182342.okofpy6kyi7oqaql@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 2018-07-23 12:38:25 -0400, Robert Haas wrote:
> On Mon, Jul 23, 2018 at 12:13 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > My point is that we could just make HTSV treat them as recently dead, without incurring the issues of the bug you referenced.
>
> That doesn't seem sufficient. For example, it won't keep the
> predecessor tuple's ctid field from being overwritten by a subsequent
> updater -- and if that happens then the update chain is broken.

Sure. I wasn't arguing that it'd be sufficient. Just that the specific
claim that it'd reintroduce the bug you mentioned isn't right. I agree that
it's quite terrifying to attempt to get this right.

> Maybe your idea of cross-checking at the end of each syscache lookup
> would be sufficient to prevent that from happening, though.

Hm? If we go for that approach we would not do *anything* about pruning,
which is why I think it has appeal. Because we'd check at the end of
system table scans (not syscache lookups, positive cache hits are fine
because of invalidation handling) whether the to-be-decoded transaction
aborted, we'd not need to do anything about pruning: If the transaction
aborted, we're guaranteed to know - the result might have been wrong,
but since we error out before filling any caches, we're ok. If it
hasn't yet aborted at the end of the scan, we conversely are guaranteed
that the scan results are correct.

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-26 14:54:00
Message-ID: CAMGcDxeBcMBF0KZ9qMpcb07uX6uLNO5hk=VWCQC514oZuofLmw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

>> I think we can even just do something like a global
>> TransactionId check_if_transaction_is_alive = InvalidTransactionId;
>> and just set it up during decoding. And then just check it whenever it's
>> not set to InvalidTransactionId.
>>
>>
>
> Ok. I will work on something along these lines and re-submit the set of patches.
>
PFA, latest patchset, which completely removes the earlier
LogicalLock/LogicalUnLock implementation using groupDecode stuff and
uses the newly suggested approach of checking the currently decoded
XID for abort in systable_* API functions. Much simpler to code and
easier to test as well.

Out of the patchset, the specific patch which focuses on the above
systable_* API based XID checking implementation is part of
0003-Gracefully-handle-concurrent-aborts-of-uncommitted-t.patch. So,
it might help to take a look at this patch first for any additional
feedback on this approach.

There's an additional test case in
0005-Additional-test-case-to-demonstrate-decoding-rollbac.patch which
uses a sleep in the "change" plugin API to allow a concurrent rollback
on the 2PC being currently decoded. Andres generally doesn't like this
approach :-), but there are no timing/interlocking issues now, and the
sleep just helps us do a concurrent rollback, so it might be ok now,
all things considered. Anyways, it's an additional patch for now.

Comments, feedback appreciated.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.patch application/octet-stream 7.9 KB
0002-Support-decoding-of-two-phase-transactions-at-PREPAR.patch application/octet-stream 40.2 KB
0003-Gracefully-handle-concurrent-aborts-of-uncommitted-t.patch application/octet-stream 9.0 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.patch application/octet-stream 25.3 KB
0005-Additional-test-case-to-demonstrate-decoding-rollbac.patch application/octet-stream 8.7 KB

From: Andres Freund <andres(at)anarazel(dot)de>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-26 20:02:41
Message-ID: 20180726200241.aje4dv4jsv25v4k2@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2018-07-26 20:24:00 +0530, Nikhil Sontakke wrote:
> Hi,
>
> >> I think we can even just do something like a global
> >> TransactionId check_if_transaction_is_alive = InvalidTransactionId;
> >> and just set it up during decoding. And then just check it whenever it's
> >> not set to InvalidTransactionId.
> >>
> >>
> >
> > Ok. I will work on something along these lines and re-submit the set of patches.

> PFA, latest patchset, which completely removes the earlier
> LogicalLock/LogicalUnLock implementation using groupDecode stuff and
> uses the newly suggested approach of checking the currently decoded
> XID for abort in systable_* API functions. Much simpler to code and
> easier to test as well.

So, leaving the fact that it might not actually be correct aside ;), you
seem to be ok with the approach?

> Out of the patchset, the specific patch which focuses on the above
> systable_* API based XID checking implementation is part of
> 0003-Gracefully-handle-concurrent-aborts-of-uncommitted-t.patch. So,
> it might help to take a look at this patch first for any additional
> feedback on this approach.

K.

> There's an additional test case in
> 0005-Additional-test-case-to-demonstrate-decoding-rollbac.patch which
> uses a sleep in the "change" plugin API to allow a concurrent rollback
> on the 2PC being currently decoded. Andres generally doesn't like this
> approach :-), but there are no timing/interlocking issues now, and the
> sleep just helps us do a concurrent rollback, so it might be ok now,
> all things considered. Anyways, it's an additional patch for now.

Yea, I still don't think it's ok. The tests won't be reliable. There's
ways to make this reliable, e.g. by forcing a lock to be acquired that's
externally held or such. Might even be doable just with a weird custom
datatype.

> From 75edeb440794fff7de48082dafdecb065940bee5 Mon Sep 17 00:00:00 2001
> From: Nikhil Sontakke <nikhils(at)2ndQuadrant(dot)com>
> Date: Thu, 26 Jul 2018 18:45:26 +0530
> Subject: [PATCH 3/5] Gracefully handle concurrent aborts of uncommitted
> transactions that are being decoded alongside.
>
> When a transaction aborts, its changes are considered unnecessary for
> other transactions. That means the changes may be either cleaned up by
> vacuum or removed from HOT chains (thus made inaccessible through
> indexes), and there may be other such consequences.
>
> When decoding committed transactions this is not an issue, and we
> never decode transactions that abort before the decoding starts.
>
> But for in-progress transactions - for example when decoding prepared
> transactions on PREPARE (and not COMMIT PREPARED as before), this
> may cause failures when the output plugin consults catalogs (both
> system and user-defined).
>
> We handle such failures by returning ERRCODE_TRANSACTION_ROLLBACK
> sqlerrcode from system table scan APIs to the backend decoding a
> specific uncommitted transaction. The decoding logic on the receipt
> of such an sqlerrcode aborts the ongoing decoding and returns
> gracefully.
> ---
> src/backend/access/index/genam.c | 31 +++++++++++++++++++++++++
> src/backend/replication/logical/reorderbuffer.c | 30 ++++++++++++++++++++----
> src/backend/utils/time/snapmgr.c | 25 ++++++++++++++++++--
> src/include/utils/snapmgr.h | 4 +++-
> 4 files changed, 82 insertions(+), 8 deletions(-)
>
> diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
> index 9d08775687..67c5810bf7 100644
> --- a/src/backend/access/index/genam.c
> +++ b/src/backend/access/index/genam.c
> @@ -423,6 +423,16 @@ systable_getnext(SysScanDesc sysscan)
> else
> htup = heap_getnext(sysscan->scan, ForwardScanDirection);
>
> + /*
> + * If CheckXidAlive is valid, then we check if it aborted. If it did, we
> + * error out
> + */
> + if (TransactionIdIsValid(CheckXidAlive) &&
> + TransactionIdDidAbort(CheckXidAlive))
> + ereport(ERROR,
> + (errcode(ERRCODE_TRANSACTION_ROLLBACK),
> + errmsg("transaction aborted during system catalog scan")));
> +
> return htup;
> }

Don't we have to check TransactionIdIsInProgress() first? C.f. header
comments in tqual.c. Note this is also not guaranteed to be correct
after a crash (where no clog entry will exist for an aborted xact), but
we probably shouldn't get here in that case - but better be safe.

I suspect it'd be better reformulated as
TransactionIdIsValid(CheckXidAlive) &&
!TransactionIdIsInProgress(CheckXidAlive) &&
!TransactionIdDidCommit(CheckXidAlive)

What do you think?

I think it'd also be good to add assertions to codepaths not going
through systable_* asserting that
!TransactionIdIsValid(CheckXidAlive). Alternatively we could add an
if (unlikely(TransactionIdIsValid(CheckXidAlive)) && ...)
branch to those too.
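The reformulated condition proposed above can be sketched on its own; the stubbed lookups below are illustrative stand-ins for the backend's procarray/clog functions, not the actual patch:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)
#define TransactionIdIsValid(xid) ((xid) != InvalidTransactionId)

/* Stubbed procarray/clog lookups (illustration only): xid 10 is still
 * running, xid 20 committed, everything else finished without committing. */
static bool TransactionIdIsInProgress(TransactionId xid) { return xid == 10; }
static bool TransactionIdDidCommit(TransactionId xid)    { return xid == 20; }

/*
 * The scan only has to fail when the tracked xid is valid, no longer
 * running, and did not commit. Testing "in progress" first follows the
 * ordering rules described in tqual.c, and testing "did not commit"
 * (rather than "did abort") also covers a transaction lost in a crash,
 * which leaves no clog entry at all.
 */
static bool decoded_xact_went_away(TransactionId CheckXidAlive)
{
    return TransactionIdIsValid(CheckXidAlive) &&
           !TransactionIdIsInProgress(CheckXidAlive) &&
           !TransactionIdDidCommit(CheckXidAlive);
}
```

With this shape, a crashed transaction (no clog entry) and an explicitly aborted one are handled identically: both fall through to "went away".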

> From 80fc576bda483798919653991bef6dc198625d90 Mon Sep 17 00:00:00 2001
> From: Nikhil Sontakke <nikhils(at)2ndQuadrant(dot)com>
> Date: Wed, 13 Jun 2018 16:31:15 +0530
> Subject: [PATCH 4/5] Teach test_decoding plugin to work with 2PC
>
> Includes a new option "enable_twophase". Depending on this options
> value, PREPARE TRANSACTION will either be decoded or treated as
> a single phase commit later.

FWIW, I don't think I'm ok with doing this on a per-plugin-option basis.
I think this is something that should be known to the outside of the
plugin. More similar to how binary / non-binary support works. Should
also be able to inquire the output plugin whether it's supported (cf
previous similarity).

> From 682b0de2827d1f55c4e471c3129eb687ae0825a5 Mon Sep 17 00:00:00 2001
> From: Nikhil Sontakke <nikhils(at)2ndQuadrant(dot)com>
> Date: Wed, 13 Jun 2018 16:32:16 +0530
> Subject: [PATCH 5/5] Additional test case to demonstrate decoding/rollback
> interlocking
>
> Introduce a decode-delay parameter in the test_decoding plugin. Based
> on the value provided in the plugin, sleep for those many seconds while
> inside the "decode change" plugin call. A concurrent rollback is fired
> off which aborts that transaction in the meanwhile. A subsequent
> systable access will error out causing the logical decoding to abort.

Yea, I'm *definitely* still not on board with this. This'll just lead to
a fragile or extremely slow test.

Greetings,

Andres Freund


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-07-27 13:00:11
Message-ID: CAMGcDxfVBiBXt4F7v58_DfqFD9xnmU8hBEiyqoypon6Ftroe_Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>> PFA, latest patchset, which completely removes the earlier
>> LogicalLock/LogicalUnLock implementation using groupDecode stuff and
>> uses the newly suggested approach of checking the currently decoded
>> XID for abort in systable_* API functions. Much simpler to code and
>> easier to test as well.
>
> So, leaving the fact that it might not actually be correct aside ;), you
> seem to be ok with the approach?
>

;-)

Yes, I do like the approach. Do you think there are other locations
other than systable_* APIs which might need such checks?

>> There's an additional test case in
>> 0005-Additional-test-case-to-demonstrate-decoding-rollbac.patch which
>> uses a sleep in the "change" plugin API to allow a concurrent rollback
>> on the 2PC being currently decoded. Andres generally doesn't like this
>> approach :-), but there are no timing/interlocking issues now, and the
>> sleep just helps us do a concurrent rollback, so it might be ok now,
>> all things considered. Anyways, it's an additional patch for now.
>
> Yea, I still don't think it's ok. The tests won't be reliable. There's
> ways to make this reliable, e.g. by forcing a lock to be acquired that's
> externally held or such. Might even be doable just with a weird custom
> datatype.
>

Ok, I will look at ways to do away with the sleep.

>> diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
>> index 9d08775687..67c5810bf7 100644
>> --- a/src/backend/access/index/genam.c
>> +++ b/src/backend/access/index/genam.c
>> @@ -423,6 +423,16 @@ systable_getnext(SysScanDesc sysscan)
>> else
>> htup = heap_getnext(sysscan->scan, ForwardScanDirection);
>>
>> + /*
>> + * If CheckXidAlive is valid, then we check if it aborted. If it did, we
>> + * error out
>> + */
>> + if (TransactionIdIsValid(CheckXidAlive) &&
>> + TransactionIdDidAbort(CheckXidAlive))
>> + ereport(ERROR,
>> + (errcode(ERRCODE_TRANSACTION_ROLLBACK),
>> + errmsg("transaction aborted during system catalog scan")));
>> +
>> return htup;
>> }
>
> Don't we have to check TransactionIdIsInProgress() first? C.f. header
> comments in tqual.c. Note this is also not guaranteed to be correct
> after a crash (where no clog entry will exist for an aborted xact), but
> we probably shouldn't get here in that case - but better be safe.
>
> I suspect it'd be better reformulated as
> TransactionIdIsValid(CheckXidAlive) &&
> !TransactionIdIsInProgress(CheckXidAlive) &&
> !TransactionIdDidCommit(CheckXidAlive)
>
> What do you think?
>

tqual.c does seem to mention this for a non-MVCC snapshot, so might as
well do it this way. The caching of the fetched XID status should not make
these checks too expensive anyway.

>
> I think it'd also be good to add assertions to codepaths not going
> through systable_* asserting that
> !TransactionIdIsValid(CheckXidAlive). Alternatively we could add an
> if (unlikely(TransactionIdIsValid(CheckXidAlive)) && ...)
> branch to those too.
>

I was wondering if anything else would be needed for user-defined
catalog tables.

>
>
>> From 80fc576bda483798919653991bef6dc198625d90 Mon Sep 17 00:00:00 2001
>> From: Nikhil Sontakke <nikhils(at)2ndQuadrant(dot)com>
>> Date: Wed, 13 Jun 2018 16:31:15 +0530
>> Subject: [PATCH 4/5] Teach test_decoding plugin to work with 2PC
>>
>> Includes a new option "enable_twophase". Depending on this options
>> value, PREPARE TRANSACTION will either be decoded or treated as
>> a single phase commit later.
>
> FWIW, I don't think I'm ok with doing this on a per-plugin-option basis.
> I think this is something that should be known to the outside of the
> plugin. More similar to how binary / non-binary support works. Should
> also be able to inquire the output plugin whether it's supported (cf
> previous similarity).
>

Hmm, lemme see if we can do it outside of the plugin. But note that a
plugin might want to decode some 2PC at prepare time and others at
"commit prepared" time.

We also need to take care to not break logical replication if the
other node is running non-2PC enabled code. We tried to optimize the
COMMIT/ABORT handling by adding sub flags to the existing protocol. I
will test that as well.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-10-31 00:08:58
Message-ID: a5ba1686-bb79-f2c3-568d-993e4c53b920@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi Nikhil,

Any progress on the issues discussed in the last couple of messages?
That is:

1) removing of the sleep() from tests

2) changes to systable_getnext() wrt. TransactionIdIsInProgress()

3) adding asserts / checks to codepaths not going through systable_*

4) (not) adding this as a per-plugin option

5) handling cases where the downstream does not have 2PC enabled

I guess it'd be good to have an updated patch or further discussion before
continuing the review efforts.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-11-29 14:40:34
Message-ID: CAMGcDxfWORpkaOaNhFwS4839R_w4wtDgnB8Wj2TmAQXnV78HsQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi Tomas,

> Any progress on the issues discussed in the last couple of messages?
> That is:
>
> 1) removing of the sleep() from tests
>

Done. Now the test_decoding plugin takes a new option "check-xid". We
will pass the XID which is going to be aborted via this option. The
test_decoding plugin will wait for this XID to abort and exit when
that happens. This removes any arbitrary sleep dependencies.
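The "wait for the XID to abort" behaviour described above can be sketched roughly as follows; the function name and the stubbed lookups are illustrative only, not the actual test_decoding code:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Stubbed procarray/clog lookups (illustration only): the watched xid
 * "finishes" after three polls and is then reported as aborted. */
static int polls_remaining = 3;
static bool TransactionIdIsInProgress(TransactionId xid)
{
    (void) xid;
    return polls_remaining-- > 0;
}
static bool TransactionIdDidAbort(TransactionId xid)
{
    (void) xid;
    return true;
}

/*
 * Poll until the watched xid is no longer in progress, then report
 * whether it aborted. A real plugin would sleep briefly between polls
 * and honor interrupts rather than busy-wait.
 */
static bool wait_for_xid_abort(TransactionId xid)
{
    while (TransactionIdIsInProgress(xid))
        ;                       /* busy-wait only for this illustration */
    return TransactionIdDidAbort(xid);
}
```

Because the test blocks on the transaction's actual fate instead of a fixed sleep, it cannot be broken by a slow machine, which is what makes the earlier sleep-based test fragile.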

> 2) changes to systable_getnext() wrt. TransactionIdIsInProgress()
>

Done.

> 3) adding asserts / checks to codepaths not going through systable_*
>

Done. All the heap_* get API calls now assert that they are not being
invoked with a valid CheckXidAlive value.

> 4) (not) adding this as a per-plugin option
>
> 5) handling cases where the downstream does not have 2PC enabled
>
struct OutputPluginOptions now has an enable_twophase field which will
be set by the plugin at init time similar to the way output_type is
set to binary/text now.

> I guess it'd be good to have an updated patch or further discussion before
> continuing the review efforts.
>

PFA, latest patchset which implements the above.

Regards,
Nikhil
> regards
>
> --
> Tomas Vondra http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.patch application/octet-stream 7.9 KB
0002-Support-decoding-of-two-phase-transactions-at-PREPAR.patch application/octet-stream 45.5 KB
0003-Gracefully-handle-concurrent-aborts-of-uncommitted-t.patch application/octet-stream 14.3 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.patch application/octet-stream 28.1 KB

From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-11-30 05:25:11
Message-ID: CAMGcDxe276HwzQ1ErqMHKPf=KTvdSGgu3jYaR+j3-368+amZSA@mail.gmail.com
Lists: pgsql-hackers

Hi,

>
> PFA, latest patchset which implements the above.
>

The newly added test_decoding test was failing due to a slight
expected output mismatch. The attached patch-set corrects that.

Regards,
Nikhil

> Regards,
> Nikhil
> > regards
> >
> > --
> > Tomas Vondra http://www.2ndQuadrant.com
> > PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> --
> Nikhil Sontakke http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.Nov30.patch application/octet-stream 7.9 KB
0002-Support-decoding-of-two-phase-transactions-at-PREPAR.Nov30.patch application/octet-stream 45.4 KB
0003-Gracefully-handle-concurrent-aborts-of-uncommitted-t.Nov30.patch application/octet-stream 14.3 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.Nov30.patch application/octet-stream 28.0 KB

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-12-16 14:00:27
Message-ID: 2ba9c978-ede8-35bf-14ec-45570bf0f6aa@2ndquadrant.com
Lists: pgsql-hackers

Hi Nikhil,

Thanks for the updated patch - I've started working on a review, with
the hope of getting it committed sometime in 2019-01. But the patch
bit-rotted again a bit (probably due to d3c09b9b), which broke the last
part. Can you post a fixed version?

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Arseny Sher <a(dot)sher(at)postgrespro(dot)ru>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-12-18 09:28:43
Message-ID: 87h8fbxjxw.fsf@ars-thinkpad
Lists: pgsql-hackers


Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:

> Hi Nikhil,
>
> Thanks for the updated patch - I've started working on a review, with
> the hope of getting it committed sometime in 2019-01. But the patch
> bit-rotted again a bit (probably due to d3c09b9b), which broke the last
> part. Can you post a fixed version?

Please also note that at some point the thread was split and continued
in another place:
https://www.postgresql.org/message-id/flat/CAMGcDxeqEpWj3fTXwqhSwBdXd2RS9jzwWscO-XbeCfso6ts3%2BQ%40mail.gmail.com

And now we have two branches =(

I haven't checked whether my concerns were addressed in the latest
version, though.

--
Arseny Sher
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Arseny Sher <a(dot)sher(at)postgrespro(dot)ru>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2018-12-18 14:44:47
Message-ID: 5d4ec920-7a88-476c-b492-c0bf1ee76ebc@2ndquadrant.com
Lists: pgsql-hackers

On 12/18/18 10:28 AM, Arseny Sher wrote:
>
> Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
>
>> Hi Nikhil,
>>
>> Thanks for the updated patch - I've started working on a review, with
>> the hope of getting it committed sometime in 2019-01. But the patch
>> bit-rotted again a bit (probably due to d3c09b9b), which broke the last
>> part. Can you post a fixed version?
>
> Please also note that at some point the thread was split and continued
> in another place:
> https://www.postgresql.org/message-id/flat/CAMGcDxeqEpWj3fTXwqhSwBdXd2RS9jzwWscO-XbeCfso6ts3%2BQ%40mail.gmail.com
>
> And now we have two branches =(
>

Thanks for pointing that out - I've added the other thread to the CF
entry, so that we don't lose it.

> I haven't checked whether my concerns were addressed in the latest
> version, though.
>

OK, I'll read through the other thread and will check. Or perhaps Nikhil
can comment on that.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2019-01-04 08:31:34
Message-ID: CAMGcDxcBmN6jNeQkgWddfhX8HbSjQpW=Uo70iBY3P_EPdp+LTQ@mail.gmail.com
Lists: pgsql-hackers

Hi Tomas,

> Thanks for the updated patch - I've started working on a review, with
> the hope of getting it committed sometime in 2019-01. But the patch
> bit-rotted again a bit (probably due to d3c09b9b), which broke the last
> part. Can you post a fixed version?
>

PFA, updated patch set.

Regards,
Nikhil

> regards
>
> --
> Tomas Vondra http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Nikhil Sontakke
2ndQuadrant - PostgreSQL Solutions for the Enterprise
https://www.2ndQuadrant.com/

Attachment Content-Type Size
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.Jan4.patch application/octet-stream 7.9 KB
0002-Support-decoding-of-two-phase-transactions-at-PREPAR.Jan4.patch application/octet-stream 45.5 KB
0003-Gracefully-handle-concurrent-aborts-of-uncommitted-t.Jan4.patch application/octet-stream 14.3 KB
0004-Teach-test_decoding-plugin-to-work-with-2PC.Jan4.patch application/octet-stream 27.9 KB

From: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
To: Arseny Sher <a(dot)sher(at)postgrespro(dot)ru>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2019-01-04 08:35:35
Message-ID: CAMGcDxfnkjtB0MaLRp6wdGoTbGr20yq=dwGf5V26guKBNCtacg@mail.gmail.com
Lists: pgsql-hackers

Hi Arseny,

> I haven't checked whether my concerns were addressed in the latest
> version, though.
>

I'd like to believe that the latest patch set tries to address some
(if not all) of your concerns. Can you please take a look and let me
know?

Regards,
Nikhil

--
Nikhil Sontakke
2ndQuadrant - PostgreSQL Solutions for the Enterprise
https://www.2ndQuadrant.com/


From: Arseny Sher <a(dot)sher(at)postgrespro(dot)ru>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2019-01-14 22:16:33
Message-ID: 87a7k2996m.fsf@ars-thinkpad
Lists: pgsql-hackers


Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com> writes:

> I'd like to believe that the latest patch set tries to address some
> (if not all) of your concerns. Can you please take a look and let me
> know?

Hi, sure.

General things:

- Earlier I said that there is no point of sending COMMIT PREPARED if
decoding snapshot became consistent after PREPARE, i.e. PREPARE hadn't
been sent. I realized since then that such use cases actually exist:
prepare might be copied to the replica by e.g. basebackup or something
else earlier. Still, a plugin must be able to easily distinguish these
too-early PREPAREs without doing its own bookkeeping (remembering each
PREPARE it has seen). Fortunately, it turns out we can make this
easy. If during COMMIT PREPARED / ABORT PREPARED record decoding we
see that ReorderBufferTXN with such xid exists, it means that either
1) plugin refused to do replay of this xact at PREPARE or 2) PREPARE
was too early in the stream. Otherwise xact would be replayed at
PREPARE processing and rbtxn purged immediately after. I think we
should add this to the documentation of filter_prepare_cb. Also, to
this end we need to add an argument to this callback specifying in
which context it was called: during prepare / commit prepared / abort
prepared. Also, for this to work, ReorderBufferProcessXid must be
always called at PREPARE, not only when 2PC decoding is disabled.

- BTW, ReorderBufferProcessXid at PREPARE should be always called
anyway, because otherwise if xact is empty, we will not prepare it
(and call cb), even if the output plugin asked us not to filter it
out. However, we will call commit_prepared cb, which is inconsistent.

- I find it weird that in DecodePrepare and in DecodeCommit you always
ask the plugin whether to filter an xact, given that sometimes you
know beforehand that you are not going to replay it: it might have
already been replayed, might have wrong dbid, origin, etc. One
consequence of this: imagine that notorious xact with PREPARE before
point where snapshot became consistent and COMMIT PREPARED after that
point. Even if filter_cb says 'I want 2PC on this xact', with current
code it won't be replayed on PREPARE and its rbtxn will be destroyed with
ReorderBufferForget. Now this xact is lost.

- Doing full-blown SnapBuildCommitTxn during PREPARE decoding is wrong,
because xact effects must not yet be seen to others. I discussed this
at length and described adjacent problems in [1].

- I still don't like that if 2PC xact was aborted and its replay
stopped, prepare callback won't be called but abort_prepared would be.
This either should be documented or fixed.
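To make the first point above concrete, here is a standalone sketch of the suggested context argument and the resulting decision logic. All names (FilterPrepareContext, replay_at_this_record) are hypothetical, not from the patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical contexts in which filter_prepare_cb could be invoked */
typedef enum FilterPrepareContext
{
    FP_AT_PREPARE,          /* decoding the PREPARE record itself */
    FP_AT_COMMIT_PREPARED,  /* decoding COMMIT PREPARED */
    FP_AT_ABORT_PREPARED    /* decoding ROLLBACK PREPARED */
} FilterPrepareContext;

/*
 * Decide whether the transaction should be replayed at this record.
 * At COMMIT/ABORT PREPARED, a still-existing ReorderBufferTXN means the
 * xact was not replayed at PREPARE (the plugin filtered it out, or the
 * PREPARE predates the consistent point), so it must be replayed now.
 */
bool
replay_at_this_record(FilterPrepareContext context, bool rbtxn_exists)
{
    if (context == FP_AT_PREPARE)
        return true;            /* normal 2PC path: replay at PREPARE */
    return rbtxn_exists;        /* catch up at COMMIT/ABORT PREPARED */
}
```

With this shape, the plugin needs no bookkeeping of its own: the existence of the rbtxn at COMMIT/ABORT PREPARED time already encodes what happened at PREPARE.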

Second patch:

+ /* filter_prepare is optional, but requires two-phase decoding */
+ if ((ctx->callbacks.filter_prepare_cb != NULL) && (!opt->enable_twophase))
+ ereport(ERROR,
+ (errmsg("Output plugin does not support two-phase decoding, but "
+ "registered filter_prepared callback.")));

I actually think that enable_twophase output plugin option is
redundant. If plugin author wants 2PC, he just provides
filter_prepare_cb callback and potentially others. I also don't see much
value in checking that exactly 0 or 3 callbacks were registered.
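For reference, the kind of all-or-nothing registration check being debated could look roughly like this; the struct and function names are made up for the sketch and only stand in for the 2PC-related pointers in the real callbacks struct:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the 2PC-related callback pointers */
typedef struct TwoPhaseCallbacks
{
    void *prepare_cb;
    void *commit_prepared_cb;
    void *abort_prepared_cb;
} TwoPhaseCallbacks;

/*
 * The "exactly 0 or 3" rule: either the plugin opts into 2PC by
 * registering all three callbacks, or it registers none of them.
 */
bool
twophase_callbacks_consistent(const TwoPhaseCallbacks *cb)
{
    int n = (cb->prepare_cb != NULL)
          + (cb->commit_prepared_cb != NULL)
          + (cb->abort_prepared_cb != NULL);

    return n == 0 || n == 3;
}
```

Under this rule, registering filter_prepare_cb alone would be enough to signal 2PC intent, while a partially filled set of (commit|abort)_prepared callbacks would be rejected at init time.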

- You allow (commit|abort)_prepared_cb, prepare_cb callbacks to be not
specified with enabled 2PC and call them without check that they
actually exist.

- executed within that transaction.
+ executed within that transaction. A transaction that is prepared for
+ a two-phase commit using <command>PREPARE TRANSACTION</command> will
+ also be decoded if the output plugin callbacks needed for decoding
+ them are provided. It is possible that the current transaction which
+ is being decoded is aborted concurrently via a <command>ROLLBACK PREPARED</command>
+ command. In that case, the logical decoding of this transaction will
+ be aborted too.

This should say explicitly that such 2PC xact will be decoded at PREPARE
record. Probably also add that otherwise it is decoded at CP
record. Probably also add "and abort_cb callback called" to the last
sentence.

+ The required <function>abort_cb</function> callback is called whenever
+ a transaction abort has to be initiated. This can happen if we are

This callback is not required in the code, and it would indeed be a bad
idea to demand it, breaking compatibility with existing plugins not
caring about 2PC.

+ * Otherwise call either PREPARE (for twophase transactions) or COMMIT
+ * (for regular ones).
+ */
+ if (rbtxn_rollback(txn))
+ rb->abort(rb, txn, commit_lsn);

This is dead code since we don't have decoding of in-progress xacts yet.

+ /*
+ * If there is a valid top-level transaction that's different from the
+ * two-phase one we are aborting, clear its reorder buffer as well.
+ */
+ if (TransactionIdIsNormal(xid) && xid != parsed->twophase_xid)
+ ReorderBufferAbort(ctx->reorder, xid, origin_lsn);

What is the aim of this? How can the xl_xid xid of a commit prepared
record be normal?

+ /*
+ * The transaction may or may not exist (during restarts for example).
+ * Anyways, 2PC transactions do not contain any reorderbuffers. So allow
+ * it to be created below.
+ */

Code around looks sane, but I think that restarts are irrelevant to
rbtxn existence at this moment: if we are going to COMMIT/ABORT PREPARED
it, it must have been replayed and rbtxn purged immediately after. The
only reason why rbtxn can exist here is invalidation addition
(ReorderBufferAddInvalidations) happening a couple of calls earlier.
Also, instead of misty '2PC transactions do not contain any
reorderbuffers' I would say something like 'create dummy
ReorderBufferTXN to pass it to the callback'.

- filter_prepare_cb callback existence is checked in both decode.c and
in filter_prepare_cb_wrapper.

Third patch:

+/*
+ * An xid value pointing to a possibly ongoing or a prepared transaction.
+ * Currently used in logical decoding. It's possible that such transactions
+ * can get aborted while the decoding is ongoing.
+ */

I would explain here that this xid is checked for abort after each
catalog scan, and point to SetupHistoricSnapshot for the details.
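The pattern in question, reduced to a standalone sketch (TransactionId, the abort check, and the function names here are mocked; in the backend the abort check would consult clog):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/* Mock of TransactionIdDidAbort: pretend xid 42 aborted concurrently */
bool
mock_xid_did_abort(TransactionId xid)
{
    return xid == 42;
}

/*
 * While decoding a possibly still in-progress (prepared) transaction,
 * re-check after every catalog scan whether it has aborted; if so, the
 * caller must bail out of decoding this xact.
 */
bool
catalog_scan_still_valid(TransactionId check_xid_alive)
{
    if (check_xid_alive != InvalidTransactionId &&
        mock_xid_did_abort(check_xid_alive))
        return false;           /* concurrent abort: stop decoding */
    return true;                /* no xid set, or xact still alive */
}
```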

Nitpicking:

First patch: I still don't think that these flags need a bitmask.
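For context, the bitmask style under discussion looks roughly like this (flag values and the struct name are illustrative; only the RBTXN_* naming mirrors the patch). The macro also shows the parenthesization and boolean-result style later requested in review:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bitmask style, as in the patch's txn_flags field */
#define RBTXN_HAS_CATALOG_CHANGES 0x0001
#define RBTXN_IS_SUBXACT          0x0002
#define RBTXN_PREPARE             0x0004
#define RBTXN_ROLLBACK            0x0008

typedef struct ReorderBufferTXNSketch
{
    uint32_t txn_flags;         /* replaces several separate booleans */
} ReorderBufferTXNSketch;

/* Parenthesized argument, explicit boolean result */
#define rbtxn_prepared(txn) (((txn)->txn_flags & RBTXN_PREPARE) != 0)
```

The alternative is one bool field per property; the bitmask trades a little readability for compactness and cheap copying of all flags at once.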

Second patch:

- I still think ReorderBufferCommitInternal name is confusing and should
be renamed to something like ReorderBufferReplay.

/* Do we know this is a subxact? Xid of top-level txn if so */
TransactionId toplevel_xid;
+ /* In case of 2PC we need to pass GID to output plugin */
+ char *gid;

Better add here newline as between other fields.

+ txn->txn_flags |= RBTXN_PREPARE;
+ txn->gid = palloc(strlen(gid) + 1); /* trailing '\0' */
+ strcpy(txn->gid, gid);

pstrdup?
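The suggested simplification: pstrdup already performs exactly this allocate-strlen+1-and-copy sequence (in a memory context). A plain-C stand-in, with a hypothetical dup_gid wrapper using malloc instead of palloc:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Plain-C analogue of pstrdup: allocate strlen+1 bytes and copy */
char *
dup_gid(const char *gid)
{
    char *copy = malloc(strlen(gid) + 1);   /* +1 for trailing '\0' */

    if (copy != NULL)
        strcpy(copy, gid);
    return copy;
}
```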

- ReorderBufferTxnIsPrepared and ReorderBufferPrepareNeedSkip do the
same and should be merged with comments explaining that the answer
must be stable.

+ The optional <function>commit_prepared_cb</function> callback is called whenever
+ a commit prepared transaction has been decoded. The <parameter>gid</parameter> field,

a commit prepared transaction *record* has been decoded?

Fourth patch:

Applying: Teach test_decoding plugin to work with 2PC
.git/rebase-apply/patch:347: trailing whitespace.
-- test savepoints
.git/rebase-apply/patch:424: trailing whitespace.
# get XID of the above two-phase transaction
warning: 2 lines add whitespace errors.

[1] https://www.postgresql.org/message-id/87zhxrwgvh.fsf%40ars-thinkpad

--
Arseny Sher
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2019-01-25 14:15:42
Message-ID: 380de666-a30e-8f99-1fac-0002bce60a1d@2ndquadrant.com
Lists: pgsql-hackers

Hi,

I think the difference between abort and abort prepared should be
explained better (I am not quite sure I get it myself).

> + The required <function>abort_cb</function> callback is called whenever

Also, why is this one required when all the 2pc stuff is optional?

> +static void
> +DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
> + xl_xact_parsed_prepare * parsed)
> +{
> + XLogRecPtr origin_lsn = parsed->origin_lsn;
> + TimestampTz commit_time = parsed->origin_timestamp;
> + XLogRecPtr origin_id = XLogRecGetOrigin(buf->record);
> + TransactionId xid = parsed->twophase_xid;
> + bool skip;
> +
> + Assert(parsed->dbId != InvalidOid);
> + Assert(TransactionIdIsValid(parsed->twophase_xid));
> +
> + /* Whether or not this PREPARE needs to be skipped. */
> + skip = DecodeEndOfTxn(ctx, buf, parsed, xid);
> +
> + FinalizeTxnDecoding(ctx, buf, parsed, xid, skip);

Given that DecodeEndOfTxn calls SnapBuildCommitTxn, won't this make the
catalog changes done by prepared transaction visible to other
transactions (which is undesirable as they should only be visible after
it's committed) ?

> + if (unlikely(TransactionIdIsValid(CheckXidAlive) &&
> + !(IsCatalogRelation(scan->rs_rd) ||
> + RelationIsUsedAsCatalogTable(scan->rs_rd))))
> + ereport(ERROR,
> + (errcode(ERRCODE_INVALID_TRANSACTION_STATE),
> + errmsg("improper heap_getnext call")));
> +
I think we should log the relation oid as well so that plugin developers
have easier time debugging this (for all variants of this).

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
To: Arseny Sher <a(dot)sher(at)postgrespro(dot)ru>, Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2019-01-25 14:26:55
Message-ID: 2e1cdeb7-6940-8558-910a-9673b507bacb@2ndquadrant.com
Lists: pgsql-hackers

On 14/01/2019 23:16, Arseny Sher wrote:
>
> Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com> writes:
>
>> I'd like to believe that the latest patch set tries to address some
>> (if not all) of your concerns. Can you please take a look and let me
>> know?
>
> Hi, sure.
>
> General things:
>
> - Earlier I said that there is no point of sending COMMIT PREPARED if
> decoding snapshot became consistent after PREPARE, i.e. PREPARE hadn't
> been sent. I realized since then that such use cases actually exist:
> prepare might be copied to the replica by e.g. basebackup or something
> else earlier.

Basebackup does not copy slots, though, and a slot should not reach
consistency until all prepared transactions are committed, no?

>
> - BTW, ReorderBufferProcessXid at PREPARE should be always called
> anyway, because otherwise if xact is empty, we will not prepare it
> (and call cb), even if the output plugin asked us not to filter it
> out. However, we will call commit_prepared cb, which is inconsistent.
>
> - I find it weird that in DecodePrepare and in DecodeCommit you always
> ask the plugin whether to filter an xact, given that sometimes you
> know beforehand that you are not going to replay it: it might have
> already been replayed, might have wrong dbid, origin, etc. One
> consequence of this: imagine that notorious xact with PREPARE before
> point where snapshot became consistent and COMMIT PREPARED after that
> point. Even if filter_cb says 'I want 2PC on this xact', with current
> code it won't be replayed on PREPARE and its rbtxn will be destroyed with
> ReorderBufferForget. Now this xact is lost.

Yeah this is wrong.

>
> Second patch:
>
> + /* filter_prepare is optional, but requires two-phase decoding */
> + if ((ctx->callbacks.filter_prepare_cb != NULL) && (!opt->enable_twophase))
> + ereport(ERROR,
> + (errmsg("Output plugin does not support two-phase decoding, but "
> + "registered filter_prepared callback.")));
>
> I actually think that enable_twophase output plugin option is
> redundant. If plugin author wants 2PC, he just provides
> filter_prepare_cb callback and potentially others.

+1

> I also don't see much
> value in checking that exactly 0 or 3 callbacks were registered.
>

I think that check makes sense, if you support 2pc you need to register
all callbacks.

>
> Nitpicking:
>
> First patch: I still don't think that these flags need a bitmask.

Since we are discussing this, I personally prefer the bitmask here.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2019-01-25 17:03:27
Message-ID: 201901251703.r7idekybsep2@alvherre.pgsql
Lists: pgsql-hackers

Eyeballing 0001, it has a few problems.

1. It's under-parenthesizing the txn argument of the macros.

2. the "has"/"is" macro definitions don't return booleans -- see
fce4609d5e5b.

3. the remainder of this no longer makes sense:

/* Do we know this is a subxact? Xid of top-level txn if so */
- bool is_known_as_subxact;
TransactionId toplevel_xid;

I suggest to fix the comment, and also improve the comment next to the
macro that tests this flag.

(4. the macro names are ugly.)

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2019-02-04 05:15:04
Message-ID: 20190204051504.GM29064@paquier.xyz
Lists: pgsql-hackers

On Fri, Jan 25, 2019 at 02:03:27PM -0300, Alvaro Herrera wrote:
> Eyeballing 0001, it has a few problems.
>
> 1. It's under-parenthesizing the txn argument of the macros.
>
> 2. the "has"/"is" macro definitions don't return booleans -- see
> fce4609d5e5b.
>
> 3. the remainder of this no longer makes sense:
>
> /* Do we know this is a subxact? Xid of top-level txn if so */
> - bool is_known_as_subxact;
> TransactionId toplevel_xid;
>
> I suggest to fix the comment, and also improve the comment next to the
> macro that tests this flag.
>
>
> (4. the macro names are ugly.)

This is an old thread, and the latest review is very recent. So I am
moving the patch to next CF, waiting on author.
--
Michael


From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2019-09-02 22:12:44
Message-ID: 20190902221244.GA2253@alvherre.pgsql
Lists: pgsql-hackers

I don't understand why this patch record has been kept alive for so long,
since no new version has been sent in ages. If this patch is really
waiting on the author, let's see the author do something. If no voice
is heard very soon, I'll close this patch as RwF.

If others want to see this feature in PostgreSQL, they are welcome to
contribute.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: David Steele <david(at)pgmasters(dot)net>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date: 2019-09-02 23:22:52
Message-ID: 2822ef3e-0f03-379e-9e66-13b93f104079@pgmasters.net
Lists: pgsql-hackers

On 9/2/19 6:12 PM, Alvaro Herrera wrote:
> I don't understand why this patch record has been kept aliv for so long,
> since no new version has been sent in ages. If this patch is really
> waiting on the author, let's see the author do something. If no voice
> is heard very soon, I'll close this patch as RwF.

+1. I should have marked this RWF in March but I ignored it because it
was tagged v13 before the CF started.

--
-David
david(at)pgmasters(dot)net