Re: Review: Patch to compute Max LSN of Data Pages

From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: 'Josh Berkus' <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Review: Patch to compute Max LSN of Data Pages
Date: 2013-06-26 11:09:59
Message-ID: 20130626110959.GC8637@awork2.anarazel.de
Lists: pgsql-hackers

Hi Amit,

On 2013-06-26 16:22:28 +0530, Amit Kapila wrote:
> On Wednesday, June 26, 2013 1:20 PM Andres Freund wrote:
> > On 2013-06-26 08:50:27 +0530, Amit Kapila wrote:
> > > On Tuesday, June 25, 2013 11:12 PM Andres Freund wrote:
> > > > On 2013-06-16 17:19:49 -0700, Josh Berkus wrote:
> > > > > Amit posted a new version of this patch on January 23rd. But last
> > > > > comment on it by Tom is "not sure everyone wants this".
> > > > >
> > > > > https://commitfest.postgresql.org/action/patch_view?id=905
> > > >
> > > > > ... so, what's the status of this patch?
> > > >
> > > > That comment was referencing a mail of mine - so perhaps I better
> > > > explain:
> > > >
> > > > I think the use case for this utility isn't big enough for it to be
> > > > included in postgres since it really can only help in very limited
> > > > circumstances. And I think it's too likely to be misused for stuff
> > > > it's not usable for (e.g. remastering).
> > > >
> > > > The only scenario I see is that somebody deleted/corrupted
> > > > pg_controldata. In that scenario the tool is supposed to be used to
> > > > find the biggest LSN used so far, so the user can then use
> > > > pg_resetxlog to set that as the WAL starting point.
> > > > But that can be solved much more easily by just setting the LSN to
> > > > something very, very high. The database cannot be used for anything
> > > > reliable afterwards anyway.
> > >
> > > One of the main reasons this was written was to bring the server up in
> > > case of corruption, so that the user can take a dump of whatever
> > > useful information remains.
> > >
> > > By setting the LSN very, very high the user might lose the information
> > > he wants to dump.
> >
> > Which information would that lose?
>
> Information from WAL replay, which can be preserved by selecting an
> appropriate LSN.

Sorry, I can't follow. If WAL replay is still an option you can just
look at the WAL and get a sensible value far more easily. The whole tool
only seems to make sense if you've lost pg_xlog.
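[Editorial note: for context, the scan the proposed tool performs amounts to reading every relation file page by page and taking the maximum pd_lsn from the page headers. A minimal sketch, assuming the default 8 kB block size and a little-endian machine; pd_lsn is the first field of PageHeaderData, stored as two 32-bit halves:]

```python
import struct
import sys

BLCKSZ = 8192  # default PostgreSQL block size; configurable at build time


def max_lsn_in_file(path):
    """Return the highest pd_lsn found in the page headers of one relation file."""
    max_lsn = 0
    with open(path, "rb") as f:
        while True:
            page = f.read(BLCKSZ)
            if len(page) < BLCKSZ:
                break  # ignore a torn/partial page at the end
            # pd_lsn is the first 8 bytes of the page header: two uint32s
            # (xlogid, xrecoff), little-endian on x86.
            xlogid, xrecoff = struct.unpack_from("<II", page, 0)
            max_lsn = max(max_lsn, (xlogid << 32) | xrecoff)
    return max_lsn


if __name__ == "__main__" and len(sys.argv) > 1:
    lsn = max(max_lsn_in_file(p) for p in sys.argv[1:])
    print("max LSN: %X/%08X" % (lsn >> 32, lsn & 0xFFFFFFFF))
```

[A real tool would also have to walk every tablespace and fork, and cope with non-default block sizes and byte order read from pg_control.]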

> Also, for a developer guessing a very high LSN might be easy, but for
> users it might not be equally easy, and getting such a value from a
> utility would be more comfortable.

Well, then we can just document some very high lsn and be done with
it. Like CF000000/00000000.
That would leave enough space for eventual writes caused while dumping
the database (say hint bit writes in a checksummed database) and cannot
yet realistically be reached during normal operation.
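[Editorial note: to make the suggestion concrete, an LSN is a 64-bit position printed as two hex halves separated by a slash. A sketch of composing and comparing such a deliberately high value; the constant is just the example from this mail, not an official recommendation:]

```python
def parse_lsn(text):
    """Parse the textual 'hi/lo' LSN notation into a 64-bit integer."""
    hi, lo = text.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)


def format_lsn(lsn):
    """Format a 64-bit LSN in the hi/lo hex notation PostgreSQL uses."""
    return "%X/%08X" % (lsn >> 32, lsn & 0xFFFFFFFF)


# The suggested sentinel dwarfs any LSN a real cluster plausibly reaches.
high = parse_lsn("CF000000/00000000")
assert high > parse_lsn("1/16B3740")  # a typical real-world LSN
print(format_lsn(high))
```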

> One more use case for which this utility was done is as below:
> It will be used to decide that on new-standby (old-master) whether a full
> backup is needed from
> New-master(old-standby).
> The backup is required when the data page in old-master precedes
> the last applied LSN in old-standby (i.e., new-master) at the moment
> of the failover.

That's exactly what I was afraid of. Unless I miss something the tool is
*NOT* sufficient to do this. Look at the mail introducing pg_rewind and
the ensuing discussion for what's necessary for that.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
