Re: [BUG?] lag of minRecoveryPont in archive recovery

Lists: pgsql-hackers
From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
To: pgsql-hackers(at)postgresql(dot)org
Subject: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-06 04:04:58
Message-ID: 20121206.130458.170549097.horiguchi.kyotaro@lab.ntt.co.jp

Hello, I have a problem with PostgreSQL 9.2 under Pacemaker.

The HA standby sometimes fails to start during normal operation.

Testing with a bare replication pair showed that the standby
fails startup recovery under the operation sequence shown
below. 9.3dev is affected too, but 9.1 is not. The problem
became apparent through the invalid-page check during xlog
replay, but 9.1 potentially has the same glitch.

After investigating, the cause seems to be that minRecoveryPoint
lags behind EndRecPtr in the redo loop. The lag leads to repeated
replay of unrepeatable xlog sequences, such as XLOG_HEAP2_VISIBLE
-> SMGR_TRUNCATE on the same page. So I applied to smgr_redo the
same remedy that xact_redo_commit_internal uses. While doing this,
I noticed that the CheckRecoveryConsistency() call in the redo
apply loop should be after redoing the record, so I moved it.
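To illustrate the mechanism, here is a toy model (Python, purely illustrative; nothing here is PostgreSQL code) of how a lagged minRecoveryPoint turns an unrepeatable truncate into a PANIC on restart:

```python
# Toy model (not PostgreSQL code) of the failure.  WAL records are
# (lsn, op) pairs.  TRUNCATE is "unrepeatable": once applied, earlier
# page references in the WAL become invalid.  If minRecoveryPoint lags
# behind the end of the TRUNCATE record, a restarted standby considers
# itself consistent while re-replaying the records before the truncate,
# and the invalid-page check fires.

def replay(wal, pages, min_recovery_point):
    """Replay records; PANIC on an invalid page once 'consistent'."""
    consistent = False
    for lsn, op in wal:
        if lsn >= min_recovery_point:
            consistent = True        # we believe the files are consistent
        if op == "VISIBLE" and 0 not in pages:
            if consistent:
                return "PANIC: WAL contains references to invalid pages"
            # before reaching consistency, invalid pages are tolerated
        elif op == "TRUNCATE":
            pages.discard(0)         # page 0 is gone from disk for good
    return "ok"

wal = [(1, "VISIBLE"), (2, "TRUNCATE")]

# First run: page 0 exists, the truncate removes it, then the standby is
# killed.  On disk page 0 is gone, but minRecoveryPoint still points at 1.
pages = {0}
assert replay(wal, pages, min_recovery_point=1) == "ok"

# Restart with the lagged minRecoveryPoint: the PANIC from the log above.
print(replay(wal, set(pages), min_recovery_point=1))

# Had minRecoveryPoint been advanced past the truncate (to LSN 3), the
# re-replay would have tolerated the missing page:
assert replay(wal, set(pages), min_recovery_point=3) == "ok"
```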

The attached patch (against 9.3dev) fixes this failure, also
works for 9.2, and seems harmless, and perhaps even beneficial,
for 9.1.

What do you think about this?

The details follow.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center

Operation sequence to cause the problem.
=======

Mst$export PGDATA=$HOME/pgdatam
Mst$rm -rf $PGDATA
Mst$initdb
Mst$vi $PGDATA/postgresql.conf
| wal_level = hot_standby
| checkpoint_segments = 300
| checkpoint_timeout = 1h
| archive_mode = on
| archive_command = 'cp %p <ARCHIVE_DIR>/%f'
| synchronous_standby_names = '*'
| hot_standby = on
| log_checkpoints = on
Mst$vi $PGDATA/pg_hba.conf
| local replication horiguti trust
Mst$pg_ctl start

Stb$export PGDATA=$HOME/pgdatas
Stb$export PGPORT=5433
Stb$rm -rf $PGDATA
Stb$pg_basebackup -X s -F p -D $PGDATA -h /tmp -p 5432
Stb$vi $PGDATA/postgresql.conf
| checkpoint_segments = 1
| checkpoint_timeout = 30s
Stb$vi $PGDATA/recovery.conf
| standby_mode=yes
| primary_conninfo='host=/tmp port=5432'
| restore_command='cp <ARCHIVE_DIR>/%f %p'
Stb$pg_ctl start

Mst$createdb
Mst$psql
Mst_psql=#create table t (a int);
Mst_psql=#insert into t values (1);
Mst_psql=#checkpoint;
Mst_psql=#vacuum;

!!! Waiting for the completion of restart point of the standby !!!

Mst_psql=#delete from t;
Mst_psql=#vacuum;

Stb$pg_ctl stop -m i
Stb$vi $PGDATA/recovery.conf
| restore_command='cp <ARCHIVE_DIR>/%f %p'
Stb$pg_ctl start
LOG: database system was interrupted while in recovery at log time 2012-12-05 09:55:18 JST
HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
LOG: entering standby mode
LOG: redo starts at 0/A020890
WARNING: page 0 of relation base/16384/16385 does not exist
PANIC: WAL contains references to invalid pages
LOG: startup process (PID 18422) was terminated by signal 6: Aborted
LOG: terminating any other active server processes
======

Attachment Content-Type Size
unknown_filename text/plain 1.3 KB

From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-06 11:39:16
Message-ID: 00da01cdd3a6$52ba37e0$f82ea7a0$@kapila@huawei.com

On Thursday, December 06, 2012 9:35 AM Kyotaro HORIGUCHI wrote:
> Hello, I have a problem with PostgreSQL 9.2 with Pacemaker.
>
> HA standby sometime failes to start under normal operation.
>
> Testing with a bare replication pair showed that the standby failes
> startup recovery under the operation sequence shown below. 9.3dev too,
> but 9.1 does not have this problem. This problem became apparent by the
> invalid-page check of xlog, but
> 9.1 also has same glitch potentially.
>
> After the investigation, the lag of minRecoveryPoint behind EndRecPtr in
> redo loop seems to be the cause. The lag brings about repetitive redoing
> of unrepeatable xlog sequences such as XLOG_HEAP2_VISIBLE ->
> SMGR_TRUNCATE on the same page. So I did the same aid work as
> xact_redo_commit_internal for smgr_redo. While doing this, I noticed
> that
> CheckRecoveryConsistency() in redo apply loop should be after redoing
> the record, so moved it.

I think moving CheckRecoveryConsistency() after the redo apply loop
might cause a problem. Currently it is done before the
recoveryStopsHere() function, which can allow connections on a hot
standby. But if recovery pauses or stops for some reason in that
function, connections might not be allowed, because
CheckRecoveryConsistency() has not been called.

With Regards,
Amit Kapila.


From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-09 15:36:31
Message-ID: CAHGQGwG4W5QZ7+LJimg8xxuevwz0bYniHmZLZmWf0j6kBiuRCg@mail.gmail.com

On Thu, Dec 6, 2012 at 8:39 PM, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
> On Thursday, December 06, 2012 9:35 AM Kyotaro HORIGUCHI wrote:
>> Hello, I have a problem with PostgreSQL 9.2 with Pacemaker.
>>
>> HA standby sometime failes to start under normal operation.
>>
>> Testing with a bare replication pair showed that the standby failes
>> startup recovery under the operation sequence shown below. 9.3dev too,
>> but 9.1 does not have this problem. This problem became apparent by the
>> invalid-page check of xlog, but
>> 9.1 also has same glitch potentially.
>>
>> After the investigation, the lag of minRecoveryPoint behind EndRecPtr in
>> redo loop seems to be the cause. The lag brings about repetitive redoing
>> of unrepeatable xlog sequences such as XLOG_HEAP2_VISIBLE ->
>> SMGR_TRUNCATE on the same page. So I did the same aid work as
>> xact_redo_commit_internal for smgr_redo. While doing this, I noticed
>> that
>> CheckRecoveryConsistency() in redo apply loop should be after redoing
>> the record, so moved it.
>
> I think moving CheckRecoveryConsistency() after redo apply loop might cause
> a problem.
> As currently it is done before recoveryStopsHere() function, which can allow
> connections
> on HOTSTANDY. But now if due to some reason recovery pauses or stops due to
> above function,
> connections might not be allowed as CheckRecoveryConsistency() is not
> called.

Yes, so we should just add the CheckRecoveryConsistency() call after
rm_redo rather than moving it? This issue is related to the old discussion:
http://archives.postgresql.org/pgsql-bugs/2012-09/msg00101.php

Regards,

--
Fujii Masao


From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-09 15:41:34
Message-ID: CAHGQGwEUpEvK=s0xRFinWnoasjhy3poD3HiE4namoWHK0a-d_w@mail.gmail.com

On Thu, Dec 6, 2012 at 1:04 PM, Kyotaro HORIGUCHI
<horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
> diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
> index 993bc49..d34ab65 100644
> --- a/src/backend/catalog/storage.c
> +++ b/src/backend/catalog/storage.c
> @@ -519,6 +519,12 @@ smgr_redo(XLogRecPtr lsn, XLogRecord *record)
> visibilitymap_truncate(rel, xlrec->blkno);
>
> FreeFakeRelcacheEntry(rel);
> +
> + /*
> + * Xlogs before this record is unrepeatable, so winding
> + * minRecoveryPoint to here.
> + */
> + XLogFlush(lsn);
> }
> else
> elog(PANIC, "smgr_redo: unknown op code %u", info);

I think that minRecoveryPoint should be updated before the data-file
changes, so XLogFlush() should be called before smgrtruncate(). No?

Regards,

--
Fujii Masao


From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
To: amit(dot)kapila(at)huawei(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-10 01:46:08
Message-ID: 20121210.104608.153955651.horiguchi.kyotaro@lab.ntt.co.jp

Thank you.

> I think moving CheckRecoveryConsistency() after redo apply loop might cause
> a problem.
> As currently it is done before recoveryStopsHere() function, which can allow
> connections
> on HOTSTANDY. But now if due to some reason recovery pauses or stops due to
> above function,
> connections might not be allowed as CheckRecoveryConsistency() is not
> called.

It depends on the precise meaning of minRecoveryPoint. I've not
found an explicit explanation of it.

Currently minRecoveryPoint is updated only in XLogFlush; the other
assignments to it seem to reset it to InvalidXLogRecPtr. XLogFlush
seems to be called AFTER the core redo operation has been done, at
least in xact_redo_commit_internal. This suggests to me that the
meaning of minRecoveryPoint is: "just AFTER applying the XLog
record at this LSN, the database files can be assumed to be
consistent."

At Mon, 10 Dec 2012 00:36:31 +0900, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote in <CAHGQGwG4W5QZ7+LJimg8xxuevwz0bYniHmZLZmWf0j6kBiuRCg(at)mail(dot)gmail(dot)com>
> Yes, so we should just add the CheckRecoveryConsistency() call after
> rm_redo rather than moving it? This issue is related to the old discussion:
> http://archives.postgresql.org/pgsql-bugs/2012-09/msg00101.php

Since I haven't clearly understood the problem with missing it
before redo, and it also seems to have no performance impact, I
have no objection to placing it both before and after rm_redo.

--
Kyotaro Horiguchi
NTT Open Source Software Center


From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
To: masao(dot)fujii(at)gmail(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-10 01:52:34
Message-ID: 20121210.105234.26361529.horiguchi.kyotaro@lab.ntt.co.jp

Hello,

> I think that minRecoveryPoint should be updated before the data-file
> changes, so XLogFlush() should be called before smgrtruncate(). No?

Mmm.. As far as I saw in xact_redo_commit_internal, XLogFlush
seems to be done AFTER the substantial redo operation. Is that
wrong? If so, I suppose xact_redo_commit_internal should be fixed
in the same way.

At Mon, 10 Dec 2012 00:41:34 +0900, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote in <CAHGQGwEUpEvK=s0xRFinWnoasjhy3poD3HiE4namoWHK0a-d_w(at)mail(dot)gmail(dot)com>
> On Thu, Dec 6, 2012 at 1:04 PM, Kyotaro HORIGUCHI
> <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
> > diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
> > index 993bc49..d34ab65 100644
> > --- a/src/backend/catalog/storage.c
> > +++ b/src/backend/catalog/storage.c
> > @@ -519,6 +519,12 @@ smgr_redo(XLogRecPtr lsn, XLogRecord *record)
> > visibilitymap_truncate(rel, xlrec->blkno);
> >
> > FreeFakeRelcacheEntry(rel);
> > +
> > + /*
> > + * Xlogs before this record is unrepeatable, so winding
> > + * minRecoveryPoint to here.
> > + */
> > + XLogFlush(lsn);
> > }
> > else
> > elog(PANIC, "smgr_redo: unknown op code %u", info);
>
> I think that minRecoveryPoint should be updated before the data-file
> changes, so XLogFlush() should be called before smgrtruncate(). No?

--
Kyotaro Horiguchi
NTT Open Source Software Center


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: <pgsql-hackers(at)postgresql(dot)org>, "'Fujii Masao'" <masao(dot)fujii(at)gmail(dot)com>
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-10 08:26:23
Message-ID: 010401cdd6b0$0a8021d0$1f806570$@kapila@huawei.com

On Monday, December 10, 2012 7:16 AM Kyotaro HORIGUCHI wrote:
> Thank you.
>
> > I think moving CheckRecoveryConsistency() after redo apply loop might
> cause
> > a problem.
> > As currently it is done before recoveryStopsHere() function, which can
> allow
> > connections
> > on HOTSTANDY. But now if due to some reason recovery pauses or stops
> due to
> > above function,
> > connections might not be allowed as CheckRecoveryConsistency() is not
> > called.
>
> It depends on the precise meaning of minRecoveryPoint. I've not
> found the explicit explanation for it.
>
> Currently minRecoveryPoint is updated only in XLogFlush. Other
> updates of it seems to reset to InvalidXLogRecPtr. XLogFlush
> seems to be called AFTER the redo core operation has been done,
> at least in xact_redo_commit_internal. I shows me that the
> meaning of minRecoveryPoint is that "Just AFTER applying the XLog
> at current LSN, the database files will be assumed to be
> consistent."
>
> At Mon, 10 Dec 2012 00:36:31 +0900, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
> wrote in
> <CAHGQGwG4W5QZ7+LJimg8xxuevwz0bYniHmZLZmWf0j6kBiuRCg(at)mail(dot)gmail(dot)com>
> > Yes, so we should just add the CheckRecoveryConsistency() call after
> > rm_redo rather than moving it? This issue is related to the old
> discussion:
> > http://archives.postgresql.org/pgsql-bugs/2012-09/msg00101.php
>
> Since I've not cleary understood the problem of missing it before
> redo, and it also seems to have no harm on performance, I have no
> objection to place it both before and after of redo.

I have tried that way as well, but it didn't completely resolve
the problem reported in the above link, because the defect arises
the first time ReadRecord() is called.

To resolve the defect mentioned in the link by Fujii Masao, we
actually need to check and try to reset backupStartPoint before
each ReadRecord. The reason is that ReadRecord() can start waiting
for records from the master. So if backupStartPoint is not reset
and CheckRecoveryConsistency() has not been done, the standby can
keep waiting for records from the master and no connections will
be allowed on it.

I have modified the code of XLogPageRead(...) such that before it
calls WaitForWALToBecomeAvailable(), the following code is added:

if (!XLogRecPtrIsInvalid(ControlFile->backupEndPoint) &&
    XLByteLE(ControlFile->backupEndPoint, EndRecPtr))
{
    /*
     * We have reached the end of base backup, the point where
     * the minimum recovery point in pg_control indicates. The
     * data on disk is now consistent. Reset backupStartPoint
     * and backupEndPoint.
     */
    elog(DEBUG1, "end of backup reached");

    LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);

    MemSet(&ControlFile->backupStartPoint, 0, sizeof(XLogRecPtr));
    MemSet(&ControlFile->backupEndPoint, 0, sizeof(XLogRecPtr));
    ControlFile->backupEndRequired = false;
    UpdateControlFile();

    LWLockRelease(ControlFileLock);
}

CheckRecoveryConsistency();

This completely resolved the problem reported in the above link for me.

With Regards,
Amit Kapila.


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: masao(dot)fujii(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-10 11:50:58
Message-ID: 50C5CCA2.9030000@vmware.com

On 10.12.2012 03:52, Kyotaro HORIGUCHI wrote:
>> I think that minRecoveryPoint should be updated before the data-file
>> changes, so XLogFlush() should be called before smgrtruncate(). No?
>
> Mmm.. As far as I saw in xact_redo_commit_internal, XLogFlush
> seems should be done AFTER redo substantial operation. Is it
> wrong? If so, I suppose xact_redo_commit_internal could shold be
> fixed in the same way.

So, two options:

1. Redo truncation, then XLogFlush()

There's a window where the original problem could still occur: The file
is truncated, and you crash before XLogFlush() finishes.

2. XLogFlush(), then redo truncation.

If the truncation fails for some reason (disk failure?) and you have
already updated minRecoveryPoint, you can not start up until you somehow
fix the problem so that the truncation can succeed, and then finish
recovery.

Both options have their merits. The window in 1. is very small, and in
2., the probability that an unlink() or truncation fails is very small
as well.

We've chosen 1. in xact_redo_commit_internal(), but I don't think that
was a carefully made decision, it just imitates what happens in the
corresponding non-redo commit path. In xact_redo_commit_internal(), it
makes sense to do XLogFlush() afterwards, for CREATE DATABASE and CREATE
TABLESPACE. Those create files, and if you e.g run out of disk space, or
non-existent path, you don't want minRecoveryPoint to be updated yet.
Otherwise you can no longer recover to the point just before the
creation of those files. But in case of DROP DATABASE, you have the
exact same situation as with truncation: if you have already deleted
some files, you must not be able to stop recovery at a point before that.

So I'd say we should update minRecoveryPoint first, then
truncate/delete. But we should still keep the XLogFlush() at the end of
xact_redo_commit_internal(), for the case where files/directories are
created. Patch attached.
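The trade-off between the two orderings can be sketched as follows (a hypothetical Python model; "flush" and "truncate" merely stand in for XLogFlush() and smgrtruncate(), this is not PostgreSQL code):

```python
# Hypothetical sketch of the two orderings: simulate a crash between the
# two steps and inspect which invariant breaks under each ordering.

def run(order, crash_between):
    state = {"truncated": False, "min_recovery_point_covers_truncate": False}
    steps = {
        "truncate": lambda: state.update(truncated=True),
        "flush": lambda: state.update(min_recovery_point_covers_truncate=True),
    }
    steps[order[0]]()
    if not crash_between:
        steps[order[1]]()
    return state

# Option 1 (truncate, then flush): a crash in between leaves a destroyed
# file that minRecoveryPoint does not yet cover -- the original bug.
s1 = run(("truncate", "flush"), crash_between=True)
assert s1["truncated"] and not s1["min_recovery_point_covers_truncate"]

# Option 2 (flush, then truncate): a crash in between is harmless, since
# replay will redo the truncate; the residual risk is only that the
# truncation itself fails after minRecoveryPoint has been advanced.
s2 = run(("flush", "truncate"), crash_between=True)
assert not s2["truncated"] and s2["min_recovery_point_covers_truncate"]
```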

- Heikki

Attachment Content-Type Size
xlogflush-before-truncation-1.patch text/x-diff 4.1 KB

From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: masao(dot)fujii(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-10 16:33:34
Message-ID: 50C60EDE.206@vmware.com

On 10.12.2012 13:50, Heikki Linnakangas wrote:
> So I'd say we should update minRecoveryPoint first, then
> truncate/delete. But we should still keep the XLogFlush() at the end of
> xact_redo_commit_internal(), for the case where files/directories are
> created. Patch attached.

Committed and backpatched that. Attached is a script I used to reproduce
this problem, going back to 8.4.

- Heikki

Attachment Content-Type Size
minrecoverypoint-recipe.sh application/x-sh 2.4 KB

From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-10 18:04:51
Message-ID: CAHGQGwEi6Q6h65nVzBngaCR=Vse03-io2_7VFrC9dWKnNoa15w@mail.gmail.com

On Tue, Dec 11, 2012 at 1:33 AM, Heikki Linnakangas
<hlinnakangas(at)vmware(dot)com> wrote:
> On 10.12.2012 13:50, Heikki Linnakangas wrote:
>>
>> So I'd say we should update minRecoveryPoint first, then
>> truncate/delete. But we should still keep the XLogFlush() at the end of
>> xact_redo_commit_internal(), for the case where files/directories are
>> created. Patch attached.

Sounds reasonable.

> Committed and backpatched that. Attached is a script I used to reproduce
> this problem, going back to 8.4.

Thanks!

Unfortunately I could reproduce the problem even after that commit.
Attached is the script I used to reproduce the problem.

The cause is that CheckRecoveryConsistency() is called before rm_redo(),
as Horiguchi-san suggested upthread. Imagine the case where
minRecoveryPoint is set to the location of the XLOG_SMGR_TRUNCATE
record. When restarting the server with that minRecoveryPoint,
the following happens, and then a PANIC occurs.

1. XLOG_SMGR_TRUNCATE record is read.
2. CheckRecoveryConsistency() is called, and database is marked as
consistent since we've reached minRecoveryPoint (i.e., the location
of XLOG_SMGR_TRUNCATE).
3. XLOG_SMGR_TRUNCATE record is replayed, and invalid page is
found.
4. Since the database has already been marked as consistent, an invalid
page leads to PANIC.
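These four steps can be sketched as a toy replay loop (Python, illustrative only; not the actual xlog.c logic):

```python
# The four steps above as a toy replay loop.  Records are
# (end_rec_ptr, op); page 0 has already been truncated on disk by the
# previous run of recovery.

def replay_loop(records, min_recovery_point, check_after_redo=False):
    consistent = False
    pages = set()                       # page 0 already gone on disk
    for end_rec_ptr, op in records:
        if not check_after_redo and end_rec_ptr >= min_recovery_point:
            consistent = True           # step 2: consistent on *read*
        if op == "TRUNCATE" and 0 not in pages:     # step 3
            if consistent:
                return "PANIC"          # step 4
        if check_after_redo and end_rec_ptr >= min_recovery_point:
            consistent = True
    return "ok"

# minRecoveryPoint equals the end LSN of the XLOG_SMGR_TRUNCATE record:
records = [(20, "TRUNCATE")]
assert replay_loop(records, min_recovery_point=20) == "PANIC"

# Declaring consistency only after the record is replayed avoids it:
assert replay_loop(records, min_recovery_point=20,
                   check_after_redo=True) == "ok"
```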

Regards,

--
Fujii Masao

Attachment Content-Type Size
fujii_test.sh application/x-sh 831 bytes

From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
To: masao(dot)fujii(at)gmail(dot)com
Cc: hlinnakangas(at)vmware(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-11 06:07:11
Message-ID: 20121211.150711.187018644.horiguchi.kyotaro@lab.ntt.co.jp

Hello, I've also found that this does not fix the problem.

> >> So I'd say we should update minRecoveryPoint first, then
> >> truncate/delete. But we should still keep the XLogFlush() at the end of
> >> xact_redo_commit_internal(), for the case where files/directories are
> >> created. Patch attached.
>
> Sounds reasonable.

It makes perfect sense.

> > Committed and backpatched that. Attached is a script I used to reproduce
> > this problem, going back to 8.4.
>
> Thanks!
>
> Unfortunately I could reproduce the problem even after that commit.
> Attached is the script I used to reproduce the problem.

Me too.

> The cause is that CheckRecoveryConsistency() is called before rm_redo(),
> as Horiguchi-san suggested upthead. Imagine the case where
> minRecoveryPoint is set to the location of the XLOG_SMGR_TRUNCATE
> record. When restarting the server with that minRecoveryPoint,
> the followings would happen, and then PANIC occurs.
>
> 1. XLOG_SMGR_TRUNCATE record is read.
> 2. CheckRecoveryConsistency() is called, and database is marked as
> consistent since we've reached minRecoveryPoint (i.e., the location
> of XLOG_SMGR_TRUNCATE).
> 3. XLOG_SMGR_TRUNCATE record is replayed, and invalid page is
> found.
> 4. Since the database has already been marked as consistent, an invalid
> page leads to PANIC.

Exactly.

In smgr_redo, EndRecPtr, which points to the record next to
SMGR_TRUNCATE, is used as the new minRecoveryPoint. On the other
hand, during the second startup of the standby,
CheckRecoveryConsistency checks for consistency with
XLByteLE(minRecoveryPoint, EndRecPtr), which is already true just
BEFORE the SMGR_TRUNCATE record is applied. So reachedConsistency
becomes true just before the SMGR_TRUNCATE record is applied. Bang!

I said in a previous message that I had no objection to placing
CheckRecoveryConsistency both before and after rm_redo, but that
was wrong. Given a minRecoveryPoint value, it should be placed
after rm_redo, from the point of view of when the database should
be considered consistent.

Actually, simply moving CheckRecoveryConsistency after rm_redo
resulted in successful startup, though this ignores whatever other
reason there may be for running it before, which is unknown to me.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: masao(dot)fujii(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-11 09:14:18
Message-ID: 50C6F96A.7060305@vmware.com

On 11.12.2012 08:07, Kyotaro HORIGUCHI wrote:
>> The cause is that CheckRecoveryConsistency() is called before rm_redo(),
>> as Horiguchi-san suggested upthead. Imagine the case where
>> minRecoveryPoint is set to the location of the XLOG_SMGR_TRUNCATE
>> record. When restarting the server with that minRecoveryPoint,
>> the followings would happen, and then PANIC occurs.
>>
>> 1. XLOG_SMGR_TRUNCATE record is read.
>> 2. CheckRecoveryConsistency() is called, and database is marked as
>> consistent since we've reached minRecoveryPoint (i.e., the location
>> of XLOG_SMGR_TRUNCATE).
>> 3. XLOG_SMGR_TRUNCATE record is replayed, and invalid page is
>> found.
>> 4. Since the database has already been marked as consistent, an invalid
>> page leads to PANIC.
>
> Exactly.
>
> In smgr_redo, EndRecPtr which is pointing the record next to
> SMGR_TRUNCATE, is used as the new minRecoveryPoint. On the other
> hand, during the second startup of the standby,
> CheckRecoveryConsistency checks for consistency by
> XLByteLE(minRecoveryPoint, EndRecPtr) which should be true at
> just BEFORE the SMGR_TRUNCATE record is applied. So
> reachedConsistency becomes true just before the SMGR_TRUNCATE
> record will be applied. Bang!

Ah, I see. I thought moving CheckRecoveryConsistency was just a
nice-to-have thing, so that we'd notice that we're consistent earlier,
so didn't pay attention to that part.

I think the real issue here is that CheckRecoveryConsistency
should be comparing minRecoveryPoint against recoveryLastRecPtr,
not EndRecPtr as it currently does. EndRecPtr points to the end of
the last record *read*, while recoveryLastRecPtr points to the end
of the last record *replayed*. The latter is what
CheckRecoveryConsistency should be concerned with.

BTW, it occurs to me that we have two variables in the shared struct
that are almost the same thing: recoveryLastRecPtr and replayEndRecPtr.
The former points to the end of the last record successfully replayed,
while replayEndRecPtr points to the end of the last record successfully
replayed, or being replayed at the moment. I find that confusing, so I
suggest that we rename recoveryLastRecPtr to make that more clear.
Included in the attached patch.
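The distinction between the two pointers can be modeled like this (illustrative Python; names borrowed loosely from the discussion, not actual shared-memory fields):

```python
# A small model of the two pointers: end_rec_ptr advances when a record
# is *read*, last_replayed_end_rec_ptr only once it has been *replayed*.

class RecoveryState:
    def __init__(self):
        self.end_rec_ptr = 0                  # end of last record read
        self.last_replayed_end_rec_ptr = 0    # end of last record replayed

    def read_record(self, end_lsn):
        self.end_rec_ptr = end_lsn

    def redo_record(self):
        self.last_replayed_end_rec_ptr = self.end_rec_ptr

    def consistent(self, min_recovery_point):
        # the fixed check compares against what has been *replayed*
        return self.last_replayed_end_rec_ptr >= min_recovery_point

st = RecoveryState()
st.read_record(20)            # truncate record is read, not yet applied
assert not st.consistent(20)  # the buggy check (end_rec_ptr) would say True
st.redo_record()
assert st.consistent(20)
```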

- Heikki

Attachment Content-Type Size
fix-check-recovery-consistency-1.patch text/x-diff 3.1 KB

From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-11 15:04:34
Message-ID: CAHGQGwHB-ToOv1W52YRBX9+5vW2B5BvHad0XjfW6FfYS=5F-=Q@mail.gmail.com

On Tue, Dec 11, 2012 at 6:14 PM, Heikki Linnakangas
<hlinnakangas(at)vmware(dot)com> wrote:
> On 11.12.2012 08:07, Kyotaro HORIGUCHI wrote:
>>>
>>> The cause is that CheckRecoveryConsistency() is called before rm_redo(),
>>> as Horiguchi-san suggested upthead. Imagine the case where
>>> minRecoveryPoint is set to the location of the XLOG_SMGR_TRUNCATE
>>> record. When restarting the server with that minRecoveryPoint,
>>> the followings would happen, and then PANIC occurs.
>>>
>>> 1. XLOG_SMGR_TRUNCATE record is read.
>>> 2. CheckRecoveryConsistency() is called, and database is marked as
>>> consistent since we've reached minRecoveryPoint (i.e., the location
>>> of XLOG_SMGR_TRUNCATE).
>>> 3. XLOG_SMGR_TRUNCATE record is replayed, and invalid page is
>>> found.
>>> 4. Since the database has already been marked as consistent, an invalid
>>> page leads to PANIC.
>>
>>
>> Exactly.
>>
>> In smgr_redo, EndRecPtr which is pointing the record next to
>> SMGR_TRUNCATE, is used as the new minRecoveryPoint. On the other
>> hand, during the second startup of the standby,
>> CheckRecoveryConsistency checks for consistency by
>> XLByteLE(minRecoveryPoint, EndRecPtr) which should be true at
>> just BEFORE the SMGR_TRUNCATE record is applied. So
>> reachedConsistency becomes true just before the SMGR_TRUNCATE
>> record will be applied. Bang!
>
>
> Ah, I see. I thought moving CheckRecoveryConsistency was just a nice-to-have
> thing, so that we'd notice that we're consistent earlier, so didn't pay
> attention to that part.
>
> I think the real issue here is that CheckRecoveryConsistency should not be
> comparing minRecoveryPoint against recoveryLastRecPtr, not EndRecPtr as it
> currently does. EndRecPtr points to the end of the last record *read*, while
> recoveryLastRecPtr points to the end of the last record *replayed*. The
> latter is what CheckRecoveryConsistency should be concerned with.
>
> BTW, it occurs to me that we have two variables in the shared struct that
> are almost the same thing: recoveryLastRecPtr and replayEndRecPtr. The
> former points to the end of the last record successfully replayed, while
> replayEndRecPtr points to the end of the last record successfully replayed,
> or begin replayed at the moment. I find that confusing, so I suggest that we
> rename recoveryLastRecPtr to make that more clear. Included in the attached
> patch.

The patch looks good. I confirmed that the PANIC error didn't happen,
with the patch.

xlog.c
> * Initialize shared replayEndRecPtr, recoveryLastRecPtr, and

s/recoveryLastRecPtr/lastReplayedEndRecPtr

Regards,

--
Fujii Masao


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-11 16:37:02
Message-ID: 50C7612E.5060008@vmware.com

On 11.12.2012 17:04, Fujii Masao wrote:
> On Tue, Dec 11, 2012 at 6:14 PM, Heikki Linnakangas
> <hlinnakangas(at)vmware(dot)com> wrote:
>> On 11.12.2012 08:07, Kyotaro HORIGUCHI wrote:
>>>>
>>>> The cause is that CheckRecoveryConsistency() is called before rm_redo(),
>>>> as Horiguchi-san suggested upthead. Imagine the case where
>>>> minRecoveryPoint is set to the location of the XLOG_SMGR_TRUNCATE
>>>> record. When restarting the server with that minRecoveryPoint,
>>>> the followings would happen, and then PANIC occurs.
>>>>
>>>> 1. XLOG_SMGR_TRUNCATE record is read.
>>>> 2. CheckRecoveryConsistency() is called, and database is marked as
>>>> consistent since we've reached minRecoveryPoint (i.e., the location
>>>> of XLOG_SMGR_TRUNCATE).
>>>> 3. XLOG_SMGR_TRUNCATE record is replayed, and invalid page is
>>>> found.
>>>> 4. Since the database has already been marked as consistent, an invalid
>>>> page leads to PANIC.
>>>
>>>
>>> Exactly.
>>>
>>> In smgr_redo, EndRecPtr which is pointing the record next to
>>> SMGR_TRUNCATE, is used as the new minRecoveryPoint. On the other
>>> hand, during the second startup of the standby,
>>> CheckRecoveryConsistency checks for consistency by
>>> XLByteLE(minRecoveryPoint, EndRecPtr) which should be true at
>>> just BEFORE the SMGR_TRUNCATE record is applied. So
>>> reachedConsistency becomes true just before the SMGR_TRUNCATE
>>> record will be applied. Bang!
>>
>>
>> Ah, I see. I thought moving CheckRecoveryConsistency was just a nice-to-have
>> thing, so that we'd notice that we're consistent earlier, so didn't pay
>> attention to that part.
>>
>> I think the real issue here is that CheckRecoveryConsistency should not be
>> comparing minRecoveryPoint against recoveryLastRecPtr, not EndRecPtr as it
>> currently does. EndRecPtr points to the end of the last record *read*, while
>> recoveryLastRecPtr points to the end of the last record *replayed*. The
>> latter is what CheckRecoveryConsistency should be concerned with.
>>
>> BTW, it occurs to me that we have two variables in the shared struct that
>> are almost the same thing: recoveryLastRecPtr and replayEndRecPtr. The
>> former points to the end of the last record successfully replayed, while
>> replayEndRecPtr points to the end of the last record successfully replayed,
>> or begin replayed at the moment. I find that confusing, so I suggest that we
>> rename recoveryLastRecPtr to make that more clear. Included in the attached
>> patch.
>
> The patch looks good. I confirmed that the PANIC error didn't happen,
> with the patch.

Thanks. After thinking about this more, I think it's still not
quite right. Now that we've fixed the check in
CheckRecoveryConsistency, we have to move the
CheckRecoveryConsistency call to after the rm_redo call. Otherwise
you don't enter hot standby mode after replaying the last record,
the one ending at minRecoveryPoint. You would have to wait for one
more record to be read (but not replayed) before
CheckRecoveryConsistency gets called and we let backends in.

As the code stands, the bad placement of CheckRecoveryConsistency
is compensated by the bug that we open up for hot standby one
record too early. The net result is that we open up for hot
standby just before replaying the last record that makes us
consistent. The window for someone to see the inconsistent state
is really small: the postmaster has to receive the signal and
start accepting connections, and you have to connect and run a
query, all before the startup process gets around to replaying the
record it has already read into memory.

- Heikki


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-11 17:03:47
Message-ID: 50C76773.9020907@vmware.com

On 11.12.2012 17:04, Fujii Masao wrote:
> The patch looks good. I confirmed that the PANIC error didn't happen,
> with the patch.

Ok, committed, and also moved the CheckRecoveryConsistency call. Please
take a look to double-check that I didn't miss anything.

- Heikki


From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG?] lag of minRecoveryPont in archive recovery
Date: 2012-12-12 15:36:48
Message-ID: CAHGQGwHuqQ9pKuYd=dvE3iCcW2LP=Hs0ZPX01ztBu5yb0TOfZg@mail.gmail.com

On Wed, Dec 12, 2012 at 2:03 AM, Heikki Linnakangas
<hlinnakangas(at)vmware(dot)com> wrote:
> On 11.12.2012 17:04, Fujii Masao wrote:
>>
>> The patch looks good. I confirmed that the PANIC error didn't happen,
>> with the patch.
>
>
> Ok, committed, and also moved the CheckRecoveryConsistency call. Please take
> a look to double-check that I didn't miss anything.

Looks good to me. Thanks a lot!

Regards,

--
Fujii Masao