Re: Support for REINDEX CONCURRENTLY

From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Peter Eisentraut <peter_e(at)gmx(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: Support for REINDEX CONCURRENTLY
Date: 2013-09-26 11:56:17
Message-ID: 20130926115617.GC6672@awork2.anarazel.de
Lists: pgsql-hackers

On 2013-09-26 20:47:33 +0900, Michael Paquier wrote:
> On Thu, Sep 26, 2013 at 8:43 PM, Andres Freund <andres(at)2ndquadrant(dot)com> wrote:
> > On 2013-09-26 20:40:40 +0900, Michael Paquier wrote:
> >> On Thu, Sep 26, 2013 at 7:34 PM, Andres Freund <andres(at)2ndquadrant(dot)com> wrote:
> >> > On 2013-09-26 12:13:30 +0900, Michael Paquier wrote:
> >> >> > 2) I don't think the drop algorithm used now is correct. Your
> >> >> > index_concurrent_set_dead() sets both indisvalid = false and indislive =
> >> >> > false at the same time. It does so after doing a WaitForVirtualLocks() -
> >> >> > but that's not sufficient. Between waiting and setting indisvalid =
> >> >> > false another transaction could start and begin using that index,
> >> >> > which would then no longer be updated by other concurrent backends
> >> >> > because of indislive = false.
> >> >> > You really need to follow index_drop's lead here and first unset
> >> >> > indisvalid then wait till nobody can use the index for querying anymore
> >> >> > and only then unset indislive.
> >> >
> >> >> Sorry, I do not follow you here. index_concurrent_set_dead calls
> >> >> index_set_state_flags that sets indislive and *indisready* to false,
> >> >> not indisvalid. The concurrent index never uses indisvalid = true so
> >> >> it can never be used by another backend for a read query. The drop
> >> >> algorithm is made to be consistent with DROP INDEX CONCURRENTLY btw.
> >> >
> >> > That makes it even worse... You can do the concurrent drop only in the
> >> > following steps:
> >> > 1) set indisvalid = false, no future relcache lookups will have it as valid
> >
> >> indisvalid is never set to true for the concurrent index. The swap is
> >> done with the concurrent index having indisvalid = false and the former
> >> index with indisvalid = true. The concurrent index is validated with
> >> index_validate in a transaction before the swap transaction.
> >
> > Yes. I've described how it *has* to be done, not how it's done.
> >
> > The current method of going straight to indisready = false for the
> > original index will give wrong results because the index is no longer
> > updated while it is still being used.

> The index being dropped at the end of the process is not the former index,
> but the concurrent index. The index used after REINDEX CONCURRENTLY is
> the old index but with the new relfilenode.

That's not relevant unless I'm missing something.

After phase 4 both indexes are valid (although only the old one is
flagged as such), but because of the relfilenode swap backends could
have either of the two open, depending on when they built their
relcache entry. Right?
Then, in phase 5, you go ahead and mark the old index - which might
still be in use! - as dead. That means other backends (again, depending
on when they built their relcache entry) might stop updating it. Under
read committed we may very well go on using the index with the same
plan as before, but with a new snapshot, which will now miss entries.
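
To spell out the ordering I have in mind, here is a rough sketch modeled
on index_drop()'s concurrent path - not your patch's code; only
index_set_state_flags() and the INDEX_DROP_* actions are existing
backend names, the function name is made up, and the wait steps are only
described in comments:

#include "postgres.h"

#include "access/xact.h"
#include "catalog/index.h"

/*
 * Sketch only: stage the shutdown of an index the way index_drop()
 * does for DROP INDEX CONCURRENTLY.
 */
static void
concurrent_index_set_dead_sketch(Oid indexId)
{
    /*
     * 1. Clear indisvalid first.  Relcache entries built from here on
     *    will not consider the index for querying, but sessions that
     *    already have it open as valid may still scan it, so it must
     *    stay indisready/indislive for now.
     */
    index_set_state_flags(indexId, INDEX_DROP_CLEAR_VALID);

    CommitTransactionCommand();
    StartTransactionCommand();

    /*
     * 2. Wait until no transaction that could still see the index as
     *    valid remains (e.g. via the patch's WaitForVirtualLocks() on
     *    the parent table's locktag).
     */

    /*
     * 3. Only now stop maintaining the index: clear indislive (and
     *    indisready), so new relcache builds skip it for updates too.
     */
    index_set_state_flags(indexId, INDEX_DROP_SET_DEAD);

    CommitTransactionCommand();
    StartTransactionCommand();

    /*
     * 4. Wait once more before the physical drop, so nobody still has
     *    the now-dead index open.
     */
}

The important bit is that clearing indisvalid and clearing indislive are
separated by a wait, exactly as in index_drop(); collapsing them into
one step is what opens the window described above.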

Am I misunderstanding the algorithm you're using?

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
