CommandCounterIncrement versus plan caching

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: CommandCounterIncrement versus plan caching
Date: 2007-11-29 20:40:08
Message-ID: 18439.1196368808@sss.pgh.pa.us
Lists: pgsql-hackers

I was able to reproduce the problem complained of here
http://archives.postgresql.org/pgsql-bugs/2007-11/msg00322.php
with this function:

create or replace function foo() returns int as $$
declare r int;
begin
    drop table if exists temptable cascade;
    create temp table temptable as select * from generate_series(1,4) f1;
    create temp view vv as select * from temptable;
    -- perform 2+2;
    for r in select * from vv loop
        raise notice '%', r;
    end loop;
    return 0;
end$$ language plpgsql;

regression=# select foo();
NOTICE: table "temptable" does not exist, skipping
CONTEXT: SQL statement "drop table if exists temptable cascade"
PL/pgSQL function "foo" line 3 at SQL statement
NOTICE: 1
NOTICE: 2
NOTICE: 3
NOTICE: 4
 foo
-----
   0
(1 row)

regression=# select foo();
NOTICE: drop cascades to rule _RETURN on view vv
CONTEXT: SQL statement "drop table if exists temptable cascade"
PL/pgSQL function "foo" line 3 at SQL statement
NOTICE: drop cascades to view vv
CONTEXT: SQL statement "drop table if exists temptable cascade"
PL/pgSQL function "foo" line 3 at SQL statement
ERROR: could not open relation 1663/121218/145930: No such file or directory
CONTEXT: PL/pgSQL function "foo" line 7 at FOR over SELECT rows

The problem goes away if there's any SQL action between the CREATE VIEW
and the FOR command, eg if you uncomment the PERFORM shown above.
What is happening is that the last step of CREATE VIEW, namely
DefineViewRules(), isn't visible at the time we try to re-validate
the cached plan for the FOR command, because no CommandCounterIncrement
has happened between. So the plan gets regenerated as a simple seqscan
of the view relation, which of course fails for lack of any underlying
storage.

This is not CREATE VIEW's fault, since no utility command expects that
it should do a final CommandCounterIncrement (henceforth CCI)
internally; CCI calls are supposed to be done between commands by system
control logic when needed. Moreover inserting a CCI at the end of
DefineView would only fix this particular manifestation, and not other
cases of DDL immediately before re-use of a plan.

One fairly simple answer is to insert a CCI call at the start of
RevalidateCachedPlan. I dislike that solution, at least by itself,
on two grounds:

* A patch of that form would approximately double the number of CCI
calls involved in executing a plpgsql function; which quite aside from
any performance cost would halve the distance to the
2^32-commands-per-transaction horizon. We've already heard from people
who ran into that limit, so I don't want to bring it closer.

* This would result in executing CCI calls even during stable/immutable
PL functions. I'm not sure that would have any bad semantic side-effects,
but I'm not convinced it wouldn't, either. And it also gives back
whatever command count limit savings we bought when we fixed things
so that stable/immutable functions don't call CCI.

I've also thought about rearranging the current conventions for where to
call CCI. This particular form of the problem would go away if SPI
command execution did CCI after, instead of before, each non-read-only
command. Or perhaps safer, before each such command and after the last
one. I'm a bit worried though about whether that leaves any code paths
in which we're still missing a needed CCI.

An idea we could use in combination with either of the above is to make
command ID assignment "lazy" in a similar sense to what we did for XID
assignment recently; that is, fix things so that CCI is a no-op if
no database change actually happened since the last one. This would
greatly reduce the command-limit disadvantages of having a scheme that
executes "unnecessary" CCI's.

Comments, better ideas?

regards, tom lane


From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: CommandCounterIncrement versus plan caching
Date: 2007-11-30 00:11:19
Message-ID: 87r6i8d23c.fsf@oxford.xeocode.com
Lists: pgsql-hackers

"Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

> One fairly simple answer is to insert a CCI call at the start of
> RevalidateCachedPlan. I dislike that solution, at least by itself,
> on two grounds:
>
> * A patch of that form would approximately double the number of CCI
> calls involved in executing a plpgsql function; which quite aside from
> any performance cost would halve the distance to the
> 2^32-commands-per-transaction horizon. We've already heard from people
> who ran into that limit, so I don't want to bring it closer.

Wait, shouldn't it be sufficient to do a CCI only in the "if (!plan)" case?
Ie, before actually replanning a query? That would only cause an additional
CCI the first time through a plpgsql query. Presumably if you're nearing the
4-billion mark it's because you're going through a loop. It's still kind of
ugly though. And it wouldn't help any if you're looping around some dynamic
SQL.
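
Something of roughly this shape is what I mean (a standalone sketch
with invented helper names, not the actual plancache code):

#include <stddef.h>

typedef struct CachedPlan { int n; } CachedPlan;

static void CommandCounterIncrement(void) { /* as in the backend */ }

static CachedPlan *
rebuild_plan(void)
{
    static CachedPlan p;
    return &p;
}

static CachedPlan *
revalidate_sketch(CachedPlan *plan)
{
    if (!plan)
    {
        /* Only when actually replanning: absorb our own prior DDL
         * so the new plan is built against up-to-date catalogs. */
        CommandCounterIncrement();
        plan = rebuild_plan();
    }
    return plan;
}

int
main(void)
{
    return revalidate_sketch(NULL) != NULL ? 0 : 1;
}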

I didn't trace through all your logic, so I'm not sure whether doing
the CCI only when a previously planned query has actually been
invalidated would help any.

> * This would result in executing CCI calls even during stable/immutable
> PL functions. I'm not sure that would have any bad semantic side-effects,
> but I'm not convinced it wouldn't, either. And it also gives back
> whatever command count limit savings we bought when we fixed things
> so that stable/immutable functions don't call CCI.

Hm, if you have a stable function which looks up some value from a
table, would doing a CCI screw up something like this?

postgres=# create table tab(id integer, val text);
CREATE TABLE
postgres=# insert into tab values (1,'a');
INSERT 0 1
postgres=# insert into tab values (2,'b');
INSERT 0 1
postgres=# insert into tab values (3,'c');
INSERT 0 1
postgres=# create function lookup(integer) returns text as 'select val from tab where id = $1' language sql stable;
CREATE FUNCTION
postgres=# update tab set val = lookup(id-1);
UPDATE 3
postgres=# select * from tab;
 id | val
----+-----
  1 |
  2 | a
  3 | b
(3 rows)

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Gregory Stark <stark(at)enterprisedb(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: CommandCounterIncrement versus plan caching
Date: 2007-11-30 05:56:10
Message-ID: 24247.1196402170@sss.pgh.pa.us
Lists: pgsql-hackers

Gregory Stark <stark(at)enterprisedb(dot)com> writes:
> Wait, shouldn't it be sufficient to do a CCI only in the "if (!plan)" case?

No. The problem is that if you don't do the CCI then you don't get the
invalidation events that might-or-might-not be pending in the inval
queue. So testing for whether the plan is still valid is meaningless
unless that queue's been flushed.

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: CommandCounterIncrement versus plan caching
Date: 2007-11-30 17:15:24
Message-ID: 3563.1196442924@sss.pgh.pa.us
Lists: pgsql-hackers

I wrote:
> One fairly simple answer is to insert a CCI call at the start of
> RevalidateCachedPlan. I dislike that solution, at least by itself,
> on two grounds:
> ...
> I've also thought about rearranging the current conventions for where to
> call CCI. This particular form of the problem would go away if SPI
> command execution did CCI after, instead of before, each non-read-only
> command. Or perhaps safer, before each such command and after the last
> one.

After further thought, I've concluded that the second of these
approaches is the more attractive, because it avoids adding CCI calls
into read-only functions. While I haven't yet tested any of this, the
plan that is in my head is:

1. Add "if (!read_only) CommandCounterIncrement();" at the end of
_SPI_execute_plan(). We keep the "before" call, though, so that a
volatile function still sees the partial results of a calling query;
that's how it's worked historically and I don't want to be tinkering
with those semantics right now. (There's a sketch of the resulting
control flow just after this list.)

2. Remove the CCI call at the top of _SPI_prepare_plan. It should be
unnecessary given that we are now expecting any previous DDL to have
been followed by CCI. (If it *isn't* unnecessary, then this whole idea
is wrong, because paths that involve re-using a previously prepared
plan instead of making a new one will still be broken.)

3. Do something to ameliorate the consequences of the increased number
of CCI calls.
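
To make points 1 and 2 concrete, here's the intended control flow in
standalone-sketch form --- run_one_command() is just a stand-in for
the real per-statement work, not an actual backend function:

#include <stdio.h>
#include <stdbool.h>

static void CommandCounterIncrement(void) { puts("CCI"); }
static void run_one_command(int i)        { printf("command %d\n", i); }

/* CCI before each non-read-only command (existing behavior, so a
 * volatile function sees the caller's partial results) plus one CCI
 * after the last command (the new part, so later plan revalidation
 * sees everything this SPI call just did). */
static void
spi_execute_sketch(bool read_only, int ncommands)
{
    for (int i = 0; i < ncommands; i++)
    {
        if (!read_only)
            CommandCounterIncrement();
        run_one_command(i);
    }
    if (!read_only)
        CommandCounterIncrement();
}

int
main(void)
{
    spi_execute_sketch(false, 2);   /* CCI, command 0, CCI, command 1, CCI */
    return 0;
}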

As previously mentioned, the main problem with this approach is that
for the typical case of one SQL command per _SPI_execute_plan call,
we'd be doubling the number of CCI calls and thus consuming command
IDs twice as fast. I propose fixing that by not allocating a new
command ID unless the previous ID was actually used to mark a tuple.

Looking at the uses of GetCurrentCommandId, it seems that we can
distinguish "read only" from "read/write" calls easily in many places,
but there is one problem: the Executor uses the passed-in snapshot's
curcid as the CommandId to write tuples with. When we set up a snapshot
we typically don't know whether it will be used with a SELECT or an
updating query, so we cannot decide at that point whether the command ID
has been "dirtied" or not.

I think this can be fixed by changing the Executor so that it doesn't
use snapshot->curcid for this purpose. Instead, add a field to EState
showing the CommandID to mark tuples with. ExecutorStart, which has
enough information to know whether the query is read-only or not,
can set this field, or not, and tell GetCurrentCommandId to mark the
command ID "dirty" (or not). In practice, it appears that all callers
of the Executor pass in snapshots that have current curcid, and so
this would not result in any actual change of the CID being used.
(If a caller did pass in a snap with an older CID, there'd still not
be any real change of behavior --- correct behavior ensues as long
as the executor's output CID is >= snapshot CID.)
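
In sketch form (standalone C; the es_output_cid field name and the
boolean flag are guesses at how it might look, not committed code):

#include <stdbool.h>

typedef unsigned int CommandId;

static CommandId currentCommandId = 0;
static bool currentCommandIdUsed = false;

static CommandId
GetCurrentCommandId(bool used)
{
    if (used)
        currentCommandIdUsed = true;    /* mark the ID "dirty" */
    return currentCommandId;
}

/* EState grows a field holding the CID to stamp on written tuples,
 * instead of the executor reading snapshot->curcid for that. */
typedef struct EState
{
    CommandId   es_output_cid;
    /* ... many other fields ... */
} EState;

/* ExecutorStart knows whether the query writes; read-only queries
 * fetch the CID without dirtying it. */
static void
executor_start_sketch(EState *estate, bool query_is_read_only)
{
    estate->es_output_cid = GetCurrentCommandId(!query_is_read_only);
}

int
main(void)
{
    EState estate;
    executor_start_sketch(&estate, true);   /* SELECT: ID stays clean */
    executor_start_sketch(&estate, false);  /* UPDATE: ID now dirty */
    return currentCommandIdUsed ? 0 : 1;
}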

One fine point is that we have to mark the ID dirty at ExecutorStart
time, whether or not the query actually ends up marking any tuples with
it; we cannot wait until a heap_insert/update/delete actually happens
with it, as I'd first thought. The problem is that the query might
call a volatile function before it first inserts any tuple, and that
function needs to take a new command ID for itself; if it doesn't
then we can conflate the output of the function with the output of the
calling query later on.

Once we have the knowledge of whether the current command ID is "dirty",
we can skip everything inside CommandCounterIncrement when it is not,
except for the AtStart_Cache() call, ie, AcceptInvalidationMessages().
What that is looking for is asynchronous DDL-change notifications from
other backends. I believe that it is actually not necessary for
correctness for CCI to do that, because we should (had better) have
adequate locking to ensure that messages about any particular table are
absorbed before we touch that table. Rather, the reasoning for having
this in CCI is to make sure we do it often enough in a long-running
transaction to keep the sinval message queue from overflowing. I am
tempted to remove that from CCI and call it from just a selected few CCI
call sites, instead --- maybe only CommitTransactionCommand. OTOH this
step might reasonably be considered too risky for late beta, since it
would affect asynchronous backend interactions, which are way harder to
test properly than within-a-backend behavior.

Comments?

regards, tom lane


From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: CommandCounterIncrement versus plan caching
Date: 2007-11-30 22:27:30
Message-ID: 20071130222730.GF3175@alvh.no-ip.org
Lists: pgsql-hackers

Tom Lane wrote:

> Once we have the knowledge of whether the current command ID is "dirty",
> we can skip everything inside CommandCounterIncrement when it is not,
> except for the AtStart_Cache() call, ie, AcceptInvalidationMessages().
> What that is looking for is asynchronous DDL-change notifications from
> other backends. I believe that it is actually not necessary for
> correctness for CCI to do that, because we should (had better) have
> adequate locking to ensure that messages about any particular table are
> absorbed before we touch that table. Rather, the reasoning for having
> this in CCI is to make sure we do it often enough in a long-running
> transaction to keep the sinval message queue from overflowing. I am
> tempted to remove that from CCI and call it from just a selected few CCI
> call sites, instead --- maybe only CommitTransactionCommand. OTOH this
> step might reasonably be considered too risky for late beta, since it
> would affect asynchronous backend interactions, which are way harder to
> test properly than within-a-backend behavior.

I agree that it seems risky to remove it at this point. It could have
severe performance impact if it turns out that not calling it enough
causes the queue to overflow. It seems safer to be calling it as
frequently as possible.

I was going to ask what would happen if vacuum were run on a large
table with a high vacuum_delay setting, but then I noticed that
currently it doesn't call CCI at all. Is this a concern?

--
Alvaro Herrera Developer, http://www.PostgreSQL.org/
"Crear es tan difícil como ser libre" (Elsa Triolet)


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: CommandCounterIncrement versus plan caching
Date: 2007-11-30 23:10:15
Message-ID: 8723.1196464215@sss.pgh.pa.us
Lists: pgsql-hackers

Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> writes:
> Tom Lane wrote:
>> ... I am
>> tempted to remove that from CCI and call it from just a selected few CCI
>> call sites, instead --- maybe only CommitTransactionCommand. OTOH this
>> step might reasonably be considered too risky for late beta, since it
>> would affect asynchronous backend interactions, which are way harder to
>> test properly than within-a-backend behavior.

> I agree that it seems risky to remove it at this point. It could have
> severe performance impact if it turns out that not calling it enough
> causes the queue to overflow. It seems safer to be calling it as
> frequently as possible.

Yeah, I left it alone for the moment. It should be revisited someday.

> I was going to say, what would happen if vacuum were to run on a large
> table and a high vacuum_delay setting, but then I noticed that currently
> it doesn't call CCI at all. Is this a concern?

It's always been the case that long-running queries might not call
AcceptInvalidationMessages for a very long time. The worst-case
consequence is a system cache flush, which might be annoying from
a performance point of view but shouldn't be a problem otherwise.
Again, it's not something I care to mess with right now.

One thought is that the typical case would probably involve only one or
two laggard backends. Right now, we handle queue overflow by discarding
*all* pending messages and forcing a cache reset instead in *all*
backends. That's simple but it makes every backend pay the performance
price. Might be worthwhile to teach it to discard, say, the oldest half
of the queued messages and only force reset for those backends that
hadn't eaten those yet. Then, a reset would be expected to occur only
within a backend that'd just finished a long-running query, and so the
relative performance cost should be small.
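
In toy form, that policy might look like this (all structure invented
for illustration; the real sinval queue lives in shared memory and
needs proper locking):

#define NBACKENDS 4

typedef struct
{
    int head;                   /* oldest message still stored */
    int tail;                   /* next message number to assign */
    int next_read[NBACKENDS];   /* per-backend read position */
    int reset[NBACKENDS];       /* 1 = backend must rebuild caches */
} SInvalState;

/* On overflow: discard only the oldest half of the queue, and force
 * a cache reset only on backends still reading inside that half. */
static void
cleanup_sketch(SInvalState *s)
{
    int cutoff = s->head + (s->tail - s->head) / 2;
    int i;

    for (i = 0; i < NBACKENDS; i++)
    {
        if (s->next_read[i] < cutoff)
        {
            s->reset[i] = 1;        /* the laggard pays the price */
            s->next_read[i] = cutoff;
        }
    }
    s->head = cutoff;               /* oldest half is gone */
}

int
main(void)
{
    SInvalState s = { 0, 100, { 10, 60, 75, 100 }, { 0, 0, 0, 0 } };
    cleanup_sketch(&s);             /* only the backend at 10 resets */
    return (s.reset[0] == 1 && s.reset[1] == 0) ? 0 : 1;
}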

regards, tom lane


From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: CommandCounterIncrement versus plan caching
Date: 2007-12-01 21:59:23
Message-ID: 87eje6xeis.fsf@oxford.xeocode.com
Lists: pgsql-hackers


"Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

> I think this can be fixed by changing the Executor so that it doesn't
> use snapshot->curcid for this purpose. Instead, add a field to EState
> showing the CommandID to mark tuples with. ExecutorStart, which has
> enough information to know whether the query is read-only or not,
> can set this field, or not, and tell GetCurrentCommandId to mark the
> command ID "dirty" (or not).

ExecutorStart could also determine when the query is a "write-only" query for
which the provided command id won't be used for any snapshot checks (ie, a
simple INSERT) and tell CCI not to bump the counter even if it's dirty.

That would eliminate the other big use case where users can run out of
command ids: batch inserts. If you're importing data from a tool which
either generates tons of INSERT statements, uses a plpgsql loop to
insert many records, or uses prepared queries and executes them a few
billion times, you can run into the same limitation.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Gregory Stark <stark(at)enterprisedb(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: CommandCounterIncrement versus plan caching
Date: 2007-12-02 00:33:27
Message-ID: 29457.1196555607@sss.pgh.pa.us
Lists: pgsql-hackers

Gregory Stark <stark(at)enterprisedb(dot)com> writes:
> ExecutorStart could also determine when the query is a "write-only" query for
> which the provided command id won't be used for any snapshot checks (ie, a
> simple INSERT) and tell CCI not to bump the counter even if it's dirty.

Ummm ... I'm not convinced about that. ExecutorStart doesn't have any
cheap way of inspecting the query carefully (eg, to see if it invokes
volatile functions), nor any understanding of what context the query is
being called in (eg, inside a subtransaction or volatile function).

> That would eliminate the other big use case where users can run out of
> command ids, batch inserts.

If you're planning to insert 4 billion rows without using COPY (or at
least multi-VALUES inserts), you've got more patience than I do.

regards, tom lane