Re: [HACKERS] Concurrent ALTER SEQUENCE RESTART Regression

From: Andres Freund <andres(at)anarazel(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org, Noah Misch <noah(at)leadboat(dot)com>, Jason Petersen <jason(at)citusdata(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: [HACKERS] Concurrent ALTER SEQUENCE RESTART Regression
Date: 2017-05-11 21:28:39
Message-ID: 20170511212839.b6i6xccdv5slrttx@alap3.anarazel.de
Lists: pgsql-bugs pgsql-hackers

On 2017-05-11 17:21:18 -0400, Tom Lane wrote:
> Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com> writes:
> > I ran this script
>
> > CREATE SEQUENCE seq1;
>
> > DO LANGUAGE plpythonu $$
> > plan = plpy.prepare("SELECT nextval('seq1')")
> > for i in range(0, 10000000):
> >     plpy.execute(plan)
> > $$;
>
> > and timed the "DO".
>
> It occurred to me that plpy.execute is going to run a subtransaction for
> each call, which makes this kind of a dubious test case, both because of
> the extra subtransaction overhead and because subtransaction lock
> ownership effects will possibly skew the results. So I tried to
> reproduce the test using plain plpgsql,
>
> CREATE SEQUENCE IF NOT EXISTS seq1;
>
> \timing on
>
> do $$
> declare x int;
> begin
> for i in 0 .. 10000000 loop
> x := nextval('seq1');
> end loop;
> end
> $$;
>
> On HEAD, building without cassert, I got a fairly reproducible result
> of about 10.6 seconds. I doubt my machine is 6X faster than yours,
> so this indicates that the subtransaction overhead is pretty real.

Isn't that pretty much the point? The whole open_share_lock()
optimization looks like it really can only make a difference with
subtransactions?

> > I compared the stock releases with a patched version that replaces the
> > body of open_share_lock() with just
> > return relation_open(seq->relid, AccessShareLock);
>
> Hm. I don't think that's a sufficient code change, because if you do it
> like that then the lock remains held after nextval() returns.

Hm? That's not new, is it? We previously did a LockRelationOid(seq->relid) and
then relation_open(seq->relid, NoLock)?

> This means
> that (a) subsequent calls aren't hitting the shared lock manager at all,
> they're just incrementing a local lock counter, and whether it would be
> fastpath or not is irrelevant; and (b) this means that the semantic issues
> Andres is worried about remain in place, because we will hold the lock
> till transaction end.

My concern was aborted subtransactions, and those should release their
lock via resowner stuff at abort?

> I experimented with something similar, just replacing open_share_lock
> as above, and I got runtimes of just about 12 seconds, which surprised
> me a bit.

Yea, I suspect we'd need something like the current open_share_lock,
except with the resowner trickery removed.

> I'd have thought the locallock-already-exists code path would be
> faster than that.

It's a hash-table lookup, via dynahash no less; that's definitely going
to be more expensive than a single seq->lxid != thislxid comparison...

Greetings,

Andres Freund
