From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: delta relations in AFTER triggers
Date: 2014-06-14 23:56:44
Message-ID: 1402790204.65037.YahooMailNeo@web122301.mail.ne1.yahoo.com
Lists: pgsql-hackers

Attached is a WIP patch for implementing the capture of delta
relations for a DML statement, in the form of two tuplestores --
one for the old versions and one for the new versions.  In the
short term it is intended to make these relations available in
trigger functions, although the patch so far doesn't touch any PLs
-- it just takes things as far as providing the relations as
tuplestores in the TriggerData structure when appropriate, for the
PLs to pick up from there.  It seemed best to get agreement on the
overall approach before digging into all the PLs.  This is
implemented only for INSERT, UPDATE, and DELETE since it wasn't
totally clear what the use cases and proper behavior were for other
triggers.  Opinions on whether we should try to provide deltas for
other cases, and if so what the semantics are, are welcome.

Once triggers can access this delta data, it will also be used for
incremental maintenance of materialized views, although I don't
want get too sidetracked on any details of that until we have
proven delta data available in triggers.  (One step at a time or
we'll never get there.)

I looked at the standard, and initially tried to implement the
standard syntax for this; however, it appeared that the reasons
given for not using standard syntax for the row variables also
apply to the transition relations (the term used by the standard).
There isn't an obvious way to tie that in to all the PLs we
support.  It could be done, but it seems like it would be intolerably
ugly, and more fragile than what we have done so far.

Some things which I *did* follow from the standard: these new
relations are only allowed within AFTER triggers, but are available
in both AFTER STATEMENT and AFTER ROW triggers.  That is, an AFTER
UPDATE ... FOR EACH ROW trigger could use both the OLD and NEW row
variables as well as the delta relations (under whatever names we
pick).  That probably won't be used very often, but I can imagine
some cases where it might be useful.  I expect that these will
normally be used in FOR EACH STATEMENT triggers.

There are a couple things I would really like to get settled in
this round of review, so things don't need to be refactored in
major ways later:

(1)  My first impulse was to capture this delta data in the form of
tuplestores of just TIDs, and fetching the tuples themselves from
the heap on reference.  In earlier discussions others have argued
for using tuplestores of the actual rows themselves.  I have taken
that advice here, but still don't feel 100% convinced. What I am
convinced of is that I don't want to write a lot of code based on
that decision and only then have people weigh in on the side of how
I had planned to do it in the first place.  I hate it when that
happens.

(2)  Do we want to just pick names for these in the PLs rather than
using the standard syntax?  Implementing the standard syntax seemed
to require three new (unreserved) keywords, changes to the catalogs
to store the chosen relations names, and some way to tie the
specified relation names in to the various PLs.  The way I have
gone here just adds two new fields to the TriggerData structure and
leaves it to each PL how to deal with that.  Failure to do anything
in a PL just leaves it at the status quo with no real harm done --
it just won't have the new delta relations available.

Of course, any other comments on the approach taken or how it can
be improved are welcome.

At this point the only testing is that make check-world completes
without problems.  If we can agree on this part of it I will look
at the PLs, and create regression tests.  I would probably submit
each PL implementation as a separate patch.

I was surprised that the patch to this point was so small:
 5 files changed, 170 insertions(+), 19 deletions(-)
Hopefully that's not due to having missed something.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
after-trigger-delta-relations-v1.patch text/x-diff 18.1 KB

From: David Fetter <david(at)fetter(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-15 14:54:04
Message-ID: 20140615145404.GA29714@fetter.org
Lists: pgsql-hackers

On Sat, Jun 14, 2014 at 04:56:44PM -0700, Kevin Grittner wrote:
> Attached is a WIP patch for implementing the capture of delta
> relations for a DML statement, in the form of two tuplestores --
> one for the old versions and one for the new versions.

Thanks!

Any chance we might be able to surface the old version for the case of
UPDATE ... RETURNING?

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-15 16:33:46
Message-ID: 1402850026.55960.YahooMailNeo@web122303.mail.ne1.yahoo.com
Lists: pgsql-hackers

David Fetter <david(at)fetter(dot)org> wrote:

> Any chance we might be able to surface the old version for the
> case of UPDATE ... RETURNING?

Not as part of this patch.

Of course, once delta relations are available, who knows what
people might do with them.  I have a hard time imagining exactly
how you would expose what you're talking about, but a column to
distinguish before and after images might work.  Incremental
maintenance of materialized views will require that in the form of
a count column with -1 for deleted and +1 for inserted, so there
might be some common ground when we get to that.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-16 19:59:00
Message-ID: 20140616195900.GC18688@eldon.alvh.no-ip.org
Lists: pgsql-hackers

Kevin Grittner wrote:
> Attached is a WIP patch for implementing the capture of delta
> relations for a DML statement, in the form of two tuplestores --
> one for the old versions and one for the new versions.  In the
> short term it is intended to make these relations available in
> trigger functions, although the patch so far doesn't touch any PLs
> -- it just takes things as far as providing the relations as
> tuplestores in the TriggerData structure when appropriate, for the
> PLs to pick up from there.  It seemed best to get agreement on the
> overall approach before digging into all the PLs.  This is
> implemented only for INSERT, UPDATE, and DELETE since it wasn't
> totally clear what the use cases and proper behavior were for other
> triggers.  Opinions on whether we should try to provide deltas for
> other cases, and if so what the semantics are, are welcome.

TRUNCATE triggers shouldn't have delta relations, that's for sure, or
it'd completely negate the point of TRUNCATE triggers. I don't think we
have any other event, do we? People who get delta relations for
deleting all rows should be using DELETE, I think.

I am not sure about providing delta relations in the FOR EACH ROW case.
For what cases are they useful? In all cases I think NEW and OLD are
sufficient. I didn't read the standard, but if it's saying that in FOR
EACH ROW the new/deleted/changed row should be accessible by way of a
delta relation, then perhaps we should look into making NEW and OLD be
accessible via that relation rather than adding delta relations. It
might be more code, though.

Now, if you only have delta relations in FOR EACH STATEMENT, then it
seems to me you can optimize obtaining them only when such triggers
are defined; this lets you do away with the reloption entirely, doesn't
it?

I noticed that GetCurrentFDWTuplestore() changed its identity without
getting its comment updated. The new name seems awfully generic, and I
don't think it really describes what the function does. I think it
needs more renaming.

> (1)  My first impulse was to capture this delta data in the form of
> tuplestores of just TIDs, and fetching the tuples themselves from
> the heap on reference.  In earlier discussions others have argued
> for using tuplestores of the actual rows themselves.

Can you please supply pointers to such discussion? I don't see any
point in not just storing TIDs, but perhaps I'm missing something.

> (2)  Do we want to just pick names for these in the PLs rather than
> using the standard syntax?  Implementing the standard syntax seemed
> to require three new (unreserved) keywords, changes to the catalogs
> to store the chosen relations names, and some way to tie the
> specified relation names in to the various PLs.

I think the only one for which we have a compulsion to follow someone
in this area would be PL/pgSQL, which probably needs to follow PL/SQL's
lead if there is one. Other than that I don't think we need to do
anything standard. We don't (yet) have PL/PSM which would need to have
the standard-mandated syntax.

> The way I have
> gone here just adds two new fields to the TriggerData structure and
> leaves it to each PL how to deal with that.  Failure to do anything
> in a PL just leaves it at the status quo with no real harm done --
> it just won't have the new delta relations available.

It seems to me that plpgsql support is mandatory, but other PLs can add
support as interested parties weigh in with patches. As with event
triggers, I don't feel the author of the main feature is responsible for
patching all PLs.

> + <varlistentry>
> + <term><literal>generate_deltas</literal> (<type>boolean</type>)</term>
> + <listitem>
> + <para>
> + Declare that a table generates delta relations when modified. This
> + allows <literal>AFTER</> triggers to reference the set of rows modified
> + by a statement. See
> + <xref linkend="sql-createtrigger"> for details.
> + </para>
> + </listitem>
> + </varlistentry>

Are you intentionally omitting the corresponding sql-createtrigger patch?

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-17 20:07:55
Message-ID: CA+TgmobpSP22vZuiaugvZr31=qnoCSJD_bW5=L6-Yjk=NYabsw@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jun 14, 2014 at 7:56 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> I looked at the standard, and initially tried to implement the
> standard syntax for this; however, it appeared that the reasons
> given for not using standard syntax for the row variables also
> apply to the transition relations (the term used by the standard).
> There isn't an obvious way to tie that in to all the PLs we
> support. It could be done, but it seems like it would be intolerably
> ugly, and more fragile than what we have done so far.

I'm not too familiar with this area. Can you describe what the
standard syntax for the row variables is, as opposed to our syntax?
Also, what's the standard syntax for the transition relations?

> Some things which I *did* follow from the standard: these new
> relations are only allowed within AFTER triggers, but are available
> in both AFTER STATEMENT and AFTER ROW triggers. That is, an AFTER
> UPDATE ... FOR EACH ROW trigger could use both the OLD and NEW row
> variables as well as the delta relations (under whatever names we
> pick). That probably won't be used very often, but I can imagine
> some cases where it might be useful. I expect that these will
> normally be used in FOR EACH STATEMENT triggers.

I'm concerned about the performance implications of capturing the
delta relations unconditionally. If a particular trigger actually
needs the delta relations, then the time spent capturing that
information is well spent; but if it doesn't, then it's a waste.
There are lots of people using FOR EACH STATEMENT triggers right now
who won't be happy if those start requiring O(n) additional temporary
storage, where n is the number of rows modified by the statement.
This is likely an even bigger issue for per-row triggers, of course,
where as you say, it probably won't be used often.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: David Fetter <david(at)fetter(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-18 16:39:33
Message-ID: 20140618163933.GB17042@fetter.org
Lists: pgsql-hackers

On Tue, Jun 17, 2014 at 04:07:55PM -0400, Robert Haas wrote:
> On Sat, Jun 14, 2014 at 7:56 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> > I looked at the standard, and initially tried to implement the
> > standard syntax for this; however, it appeared that the reasons
> > given for not using standard syntax for the row variables also
> > apply to the transition relations (the term used by the standard).
> > There isn't an obvious way to tie that in to all the PLs we
> > support. It could be done, but it seems like it would be intolerably
> > ugly, and more fragile than what we have done so far.
>
> I'm not too familiar with this area. Can you describe what the
> standard syntax for the row variables is, as opposed to our syntax?
> Also, what's the standard syntax for the transition relations?

The good:
- Generating the tuplestores. Yay!

The bad:
- Generating them exactly and only for AFTER triggers
- Requiring that the tuplestores both be generated or not at all.
There are real use cases described below where only one would be relevant.
- Generating the tuplestores unconditionally.

The ugly:
- Attaching tuplestore generation to tables rather than callers (triggers, DML, etc.)

The SQL standard says:

<trigger definition> ::=
CREATE TRIGGER <trigger name> <trigger action time> <trigger event>
ON <table name> [ REFERENCING <transition table or variable list> ]
<triggered action>

<trigger action time> ::=
BEFORE
| AFTER
| INSTEAD OF

<trigger event> ::=
INSERT
| DELETE
| UPDATE [ OF <trigger column list> ]

<trigger column list> ::=
<column name list>

<triggered action> ::=
[ FOR EACH { ROW | STATEMENT } ]
[ <triggered when clause> ]
<triggered SQL statement>

<triggered when clause> ::=
WHEN <left paren> <search condition> <right paren>

<triggered SQL statement> ::=
<SQL procedure statement>
| BEGIN ATOMIC { <SQL procedure statement> <semicolon> }... END

<transition table or variable list> ::=
<transition table or variable>...

<transition table or variable> ::=
OLD [ ROW ] [ AS ] <old transition variable name>
| NEW [ ROW ] [ AS ] <new transition variable name>
| OLD TABLE [ AS ] <old transition table name>
| NEW TABLE [ AS ] <new transition table name>

<old transition table name> ::=
<transition table name>
<new transition table name> ::=
<transition table name>

<transition table name> ::=
<identifier>

<old transition variable name> ::=
<correlation name>

<new transition variable name> ::=
<correlation name>
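
To make that concrete, a statement-level trigger spelled with this
REFERENCING clause (using PostgreSQL's EXECUTE PROCEDURE spelling for
the triggered action; all object names here are purely illustrative)
might look like:

    CREATE TRIGGER orders_summary
        AFTER UPDATE ON orders
        REFERENCING OLD TABLE AS old_orders NEW TABLE AS new_orders
        FOR EACH STATEMENT
        EXECUTE PROCEDURE summarize_order_changes();

The trigger function would then be able to query old_orders and
new_orders as ordinary relations.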

Sorry that was a little verbose, but what it does do is give us what we need at
trigger definition time. I'd say it's pilot error if a trigger
definition says "make these tuplestores" and the trigger body then
does nothing with them, which goes to Robert's point below re:
unconditional overhead.

> > Some things which I *did* follow from the standard: these new
> > relations are only allowed within AFTER triggers, but are available
> > in both AFTER STATEMENT and AFTER ROW triggers. That is, an AFTER
> > UPDATE ... FOR EACH ROW trigger could use both the OLD and NEW row
> > variables as well as the delta relations (under whatever names we
> > pick). That probably won't be used very often, but I can imagine
> > some cases where it might be useful. I expect that these will
> > normally be used in FOR EACH STATEMENT triggers.
>
> I'm concerned about the performance implications of capturing the
> delta relations unconditionally.

Along that same line, we don't always need to capture both the before
tuplestores and the after ones. Two examples of this come to mind:

- BEFORE STATEMENT triggers accessing rows, where there is no after part to use, and
- DML (RETURNING BEFORE, e.g.) which only touches one of them. This
applies both to extant use cases of RETURNING and to planned ones.

I'm sure if I can think of two, there are more.

In summary, I'd like to propose that the tuplestores be generated
separately in general and attached to callers. We can optimize this by
not generating redundant tuplestores.

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-18 21:30:06
Message-ID: 1403127006.98728.YahooMailNeo@web122301.mail.ne1.yahoo.com
Lists: pgsql-hackers

Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> wrote:
> Kevin Grittner wrote:

> TRUNCATE triggers shouldn't have delta relations, that's for
> sure, or it'd completely negate the point of TRUNCATE triggers.
> I don't think we have any other event, do we?  People who get
> delta relations for deleting all rows should be using DELETE, I
> think.

That sounds reasonable.  The only other issue was INSTEAD OF
triggers, but I don't think we need them there, either.

> I am not sure about providing delta relations in the FOR EACH ROW
> case.  For what cases are they useful?

In an accounting application for courts I have seen a case where
each receivable (asset) account had a contra (liability) account of
"uncollected receivables", because until and unless the Clerk of
Courts collected the judgment there was no obligation to pay the
money back out.  Any financial transaction had to be in a database
transaction, and not only did all the transaction detail need to
balance to zero, but any receivable detail row needed to be
immediately followed by a balancing contra account row (and vice
versa).  A FOR EACH ROW trigger, on seeing one of these rows, could
check for the required balancing row.  Now this would only be
solved by the standard feature if both rows were always inserted by
a single statement, which might not always have been the case; so
even with this example I am stretching a bit.  But it's close
enough to suggest that there might be legitimate uses.
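
Just to sketch what I mean (all names here are hypothetical, and this
assumes the row-level trigger can see the statement's NEW transition
table under some name, here new_rows):

  CREATE FUNCTION check_contra_row() RETURNS trigger AS $$
  BEGIN
    IF NEW.acct_type = 'receivable' AND NOT EXISTS (
         SELECT 1 FROM new_rows n
          WHERE n.tran_id = NEW.tran_id
            AND n.acct_type = 'contra'
            AND n.amount = -NEW.amount)
    THEN
      RAISE EXCEPTION 'receivable row in transaction % has no balancing contra row',
        NEW.tran_id;
    END IF;
    RETURN NULL;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER check_contra
      AFTER INSERT ON acct_detail
      REFERENCING NEW TABLE AS new_rows
      FOR EACH ROW
      EXECUTE PROCEDURE check_contra_row();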

And there is the fact that the standard seems to want this to be
supported.

> In all cases I think NEW and OLD are sufficient.  I didn't read
> the standard, but if it's saying that in FOR EACH ROW the
> new/deleted/changed row should be accessible by way of a delta
> relation, [...]

No, it says that you can specify *both* the row variables for the
row under consideration and the tables for the full set of rows
affected by the statement *for the same FOR EACH ROW trigger*.

> Now, if you only have delta relations in FOR EACH STATEMENT, then
> it seems to me you can optimize obtaining them only when such
> triggers are defined; this lets you do away with the reloption
> entirely, doesn't it?

That was intended to minimize the situations under which there was
a performance hit where the new delta relations were not needed in
an AFTER trigger.  If we only generate these for FOR EACH STATEMENT
triggers, that certainly reduces the need for that, but I'm not
sure it eliminates it -- especially if we're generating tuplestores
for the full rows rather than their TIDs.

It is already generating the tuplestores only if there is an AFTER
trigger for the type of statement being executed, but I agree that
it would be a very rare FOR EACH ROW trigger that actually used it.
Of course, that is one argument for the standard syntax -- we
could test whether any of the trigger definitions for that
operation on that table specified each transition table, and only
generate them if needed based on that.

> I noticed that GetCurrentFDWTuplestore() changed its identity
> without getting its comment updated.  The new name seems awfully
> generic, and I don't think it really describes what the function
> does.  I think it needs more renaming.

Good point.  Will fix that in the next version.

>> (1)  My first impulse was to capture this delta data in the form
>> of tuplestores of just TIDs, and fetching the tuples themselves
>> from the heap on reference.  In earlier discussions others have
>> argued for using tuplestores of the actual rows themselves.
>
> Can you please supply pointers to such discussion?

That was in a matview-related discussion, so perhaps we should
ignore that for now and discuss what's best for triggers here.  If
matviews need something different, the rows could always be
materialized from the TIDs at a later point.

> I don't see any point in not just storing TIDs, but perhaps I'm
> missing something.

I think there was some question about performance, but I really
have a hard time seeing the performance being significantly worse
with a TID tuplestore for most reasonable uses; and I think the TID
tuplestore could be a lot faster if (for example) you have a table
with a large, TOASTed text or bytea column which is not referenced
(in selection criteria or the SET clause) and is not used in the
FOR EACH STATEMENT trigger.

I think there might also have been some question about visibility.
A TID tuplestore would need to use a different snapshot (like maybe
SnapshotAny) in the same query where it joined to other tables with
a normal MVCC snapshot.

>> (2)  Do we want to just pick names for these in the PLs rather
>> than using the standard syntax?  Implementing the standard
>> syntax seemed to require three new (unreserved) keywords,
>> changes to the catalogs to store the chosen relations names, and
>> some way to tie the specified relation names in to the various
>> PLs.
>
> I think the only one for which we have a compulsion to follow
> someone in this area would be PL/pgSQL,

... which currently hard-codes the row variable names rather than
using standard syntax to specify them.

> which probably needs to follow PL/SQL's lead if there is one.

I don't believe we can create PL/SQL trigger functions, can we?

> Other than that I don't think we need to do anything standard.
> We don't (yet) have PL/PSM which would need to have the
> standard-mandated syntax.

The stated reason for not specifying the row variable names in the
CREATE TRIGGER statement is the difficulty of tying that in to all
the supported PLs.  I couldn't see any reason for that to be easier
for the tables than the row variables.  Of course, if we intend to
support PL/PSM and we want to use standard CREATE TRIGGER syntax
for that (rather than having trigger functions in that language act
as though some particular name was specified) it seems better to me
that we implement it now.  But I don't really see why we would need
to do that to have a conforming PL/PSM implementation.

>> The way I have gone here just adds two new fields to the
>> TriggerData structure and leaves it to each PL how to deal with
>> that.  Failure to do anything in a PL just leaves it at the
>> status quo with no real harm done -- it just won't have the new
>> delta relations available.
>
> It seems to me that plpgsql support is mandatory, but other PLs
> can add support as interested parties weigh in with patches.  As
> with event triggers, I don't feel the author of the main feature
> is responsible for patching all PLs.

I like that point of view!

> Are you intentionally omitting the corresponding
> sql-createtrigger patch?

Yes, because that depends on decisions made about whether to use
standard syntax and (if not) a round of bikeshedding about what
fixed names to use in plpgsql.

Thanks for the feedback!

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-18 21:54:18
Message-ID: 1403128458.51266.YahooMailNeo@web122306.mail.ne1.yahoo.com
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:

> Can you describe what the standard syntax for the row variables
> is, as opposed to our syntax?  Also, what's the standard syntax
> for the transition relations?

If you want either (or both), the standard has you using a
REFERENCING clause right before the FOR EACH part of the statement.
You can specify one or more sections in the format:

{ OLD | NEW } [ ROW | TABLE ] [ AS ] name

If you have more than one, they are separated by whitespace.  (No
commas or other visible delimiters.)  If you omit ROW/TABLE it
defaults to ROW.  You are only allowed to specify TABLE on an AFTER
trigger.  You are only allowed to specify ROW on a FOR EACH ROW
trigger.  (There is no prohibition against specifying TABLE on a
FOR EACH ROW trigger.)  You are only allowed to specify OLD for a
DELETE or UPDATE trigger.  (The ability for one trigger definition
to specify multiple operations is a PostgreSQL extension.)  You are
only allowed to specify NEW for an UPDATE or INSERT trigger.  You
may not repeat an entry.

Essentially, we have an implied clause on every FOR EACH ROW
trigger like:

  REFERENCING OLD ROW AS OLD NEW ROW AS NEW
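
So, spelled out, an AFTER UPDATE ... FOR EACH ROW trigger that also
wants the transition tables would be declared something like this
under the standard syntax (the transition table names are whatever
the trigger definition picks; these are just examples):

  CREATE TRIGGER audit_upd
      AFTER UPDATE ON accounts
      REFERENCING OLD TABLE AS before_rows NEW TABLE AS after_rows
      FOR EACH ROW
      EXECUTE PROCEDURE audit_account_update();

The function would still see the usual OLD and NEW row variables, and
could additionally query before_rows and after_rows.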

>> Some things which I *did* follow from the standard: these new
>> relations are only allowed within AFTER triggers, but are
>> available in both AFTER STATEMENT and AFTER ROW triggers.  That
>> is, an AFTER UPDATE ... FOR EACH ROW trigger could use both the
>> OLD and NEW row variables as well as the delta relations (under
>> whatever names we pick).  That probably won't be used very
>> often, but I can imagine some cases where it might be useful.  I
>> expect that these will normally be used in FOR EACH STATEMENT
>> triggers.
>
> I'm concerned about the performance implications of capturing the
> delta relations unconditionally.  If a particular trigger
> actually needs the delta relations, then the time spent capturing
> that information is well spent; but if it doesn't, then it's a
> waste.  There are lots of people using FOR EACH STATEMENT
> triggers right now who won't be happy if those start requiring
> O(n) additional temporary storage, where n is the number of rows
> modified by the statement.  This is likely an even bigger issue
> for per-row triggers, of course, where as you say, it probably
> won't be used often.

That is why I added a reloption which must be specifically enabled
for a table in order to generate these deltas.  That would be an
inconvenience for those wanting to use the new feature, but should
prevent a performance regression for any tables where it is not
specifically turned on.  That's not perfect, of course, because if
you turn it on because of an AFTER UPDATE ... FOR EACH STATEMENT
trigger where you want it, you do suffer the overhead on every AFTER
trigger on that table.  Perhaps this is sufficient reason to use the
standard syntax for the new delta tables -- they would then be
generated only in the specific cases where they were needed.  And I
think I could lose the reloption.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-18 22:30:34
Message-ID: 1403130634.33625.YahooMailNeo@web122305.mail.ne1.yahoo.com
Lists: pgsql-hackers

David Fetter <david(at)fetter(dot)org> wrote:
> Robert Haas wrote:
>> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:

> The good:
>     - Generating the tuplestores.  Yay!

Thanks for that.  ;-)

> The bad:
>     - Generating them exactly and only for AFTER triggers

The standard only allows them for AFTER triggers, and I'm not sure
what the semantics would be for any others.

>     - Requiring that the tuplestores both be generated or not at
>       all.  There are real use cases described below where only
>       one would be relevant.

Yeah.

>     - Generating the tuplestores unconditionally.

Well, there are conditions.  Only when the reloption allows and
only if there is an AFTER trigger for the type of operation in
progress.
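
(With the reloption from the v1 patch that means, concretely, that
nothing is captured for a table until someone runs something like the
following, using the generate_deltas option from the docs hunk quoted
upthread; the table name is just an example:

  ALTER TABLE accounts SET (generate_deltas = true);

and even then only when an applicable AFTER trigger exists.)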

> The ugly:
>     - Attaching tuplestore generation to tables rather than
>       callers (triggers, DML, etc.)

I'm not sure what you're getting at here.  This patch is
specifically only concerned with generating delta relations for DML
AFTER triggers, although my hope is that it will be a basis for
delta relations used for other purposes.  This seems to me like the
right place to initially capture the data for incremental
maintenance of materialized views, and might be of value for other
purposes, too.

> [formal definition of standard CREATE TRIGGER statement]

> Sorry that was a little verbose, but what it does do is give us
> what we need at trigger definition time.  I'd say it's pilot
> error if a trigger definition says "make these tuplestores" and
> the trigger body then does nothing with them, which goes to
> Robert's point below re: unconditional overhead.

Yeah, the more I think about it (and discuss it) the more I'm
inclined to suffer the additional complexity of the standard syntax
for specifying transition relations in order to avoid unnecessary
overhead creating them when not needed.  I'm also leaning toward
just storing TIDs in the tuplestores, even though it requires mixed
snapshots in executing queries in the triggers.  It just seems like
there will otherwise be too much overhead in copying around big,
unreferenced columns for some situations.

> Along that same line, we don't always need to capture both the
> before tuplestores and the after ones.  Two examples of this come
> to mind:
>
> - BEFORE STATEMENT triggers accessing rows, where there is no
> after part to use,

Are you talking about an UPDATE for which the AFTER trigger(s) only
reference the before transition table, and don't look at AFTER?  If
so, using the standard syntax would cover that just fine.  If not,
can you elaborate?

> and
> - DML (RETURNING BEFORE, e.g.) which only touches one of them.
> This applies both to extant use cases of RETURNING and to planned
> ones.

I think that can be sorted out by a patch which implements that, if
these deltas even turn out to be the appropriate way to get that
data (which is not clear to me at this time).  Assuming standard
syntax, the first thing would be for the statement to somehow
communicate to the trigger layer the need to capture a tuplestore
it might otherwise not generate, and there would need to be a way
for the statement to access the needed tuplestore(s).  The
statement would also need to project the right set of columns.
None of that seems to me to be relevant to this patch.  If this
patch turns out to provide infrastructure that helps, great.  If
you have a specific suggestion about how to make the tuplestores
more accessible to other layers, I'm listening.

> In summary, I'd like to propose that the tuplestores be generated
> separately in general and attached to callers. We can optimize
> this by not generating redundant tuplestores.

Well, if we use the standard syntax for CREATE TRIGGER and store
the transition table names (if any) in pg_trigger, the code can
generate one relation if any AFTER triggers which are going to fire
need it.  I don't see any point in generating exactly the same
tuplestore contents for each trigger.  And I suspect that we can wire
in any other uses later when we have something to connect them to.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: David Fetter <david(at)fetter(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-19 02:49:00
Message-ID: 20140619024900.GE17042@fetter.org
Lists: pgsql-hackers

On Wed, Jun 18, 2014 at 03:30:34PM -0700, Kevin Grittner wrote:
> David Fetter <david(at)fetter(dot)org> wrote:
> > Robert Haas wrote:
> >> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>
> > The good:
> >     - Generating the tuplestores.  Yay!
>
> Thanks for that.  ;-)

Sorry, I just can't resist references to Spaghetti Westerns.
https://en.wikipedia.org/wiki/The_Good,_the_Bad_and_the_Ugly

> > The bad:
> >     - Generating them exactly and only for AFTER triggers
>
> The standard only allows them for AFTER triggers, and I'm not sure
> what the semantics would be for any others.

Ah, so here's where we differ. You're looking at deltas, a very nice
capability to have. I'm looking at the before and after tuplestores
as components of which deltas, among many other things, could be
composed.

> >     - Requiring that the tuplestores both be generated or not at
> >       all.  There are real use cases described below where only
> >       one would be relevant.
>
> Yeah.
>
> >     - Generating the tuplestores unconditionally.
>
> Well, there are conditions.  Only when the reloption allows and
> only if there is an AFTER trigger for the type of operation in
> progress.

For deltas, this is just the thing.

I'm vaguely picturing the following as infrastructure:

- Instead of modifying Rel, we modify Query to contain two more bools,
default false: hasBeforeTuplestore and hasAfterTuplestore.
- Each use case we implement would set 0 or more of these to true.
For the delta use case, appropriate trigger definitions would set
both.

This is vague because I haven't really gotten hacking on it, just
exploring what I hope are the relevant parts of the code.

> > The ugly:
> >     - Attaching tuplestore generation to tables rather than
> >       callers (triggers, DML, etc.)
>
> I'm not sure what you're getting at here.  This patch is
> specifically only concerned with generating delta relations for DML
> AFTER triggers, although my hope is that it will be a basis for
> delta relations used for other purposes.  This seems to me like the
> right place to initially capture the data for incremental
> maintenance of materialized views, and might be of value for other
> purposes, too.

Hrm. I don't really see this stuff as table properties. The
materialized view case is an obvious example where the matview, not
the relations underneath, wants this information. The relations
underneath may have their own concerns, but it's the matview whose
existence should ensure that the tuplestores are being generated.

Once the last thing depending on one of the tuplestores is gone (and
that could simply be the end of a RETURNING query), the tuplestores go
away.

> > [formal definition of standard CREATE TRIGGER statement]
>
> > Sorry that was a little verbose, but what it does do is give us
> > what we need at trigger definition time.  I'd say it's pilot
> > error if a trigger definition says "make these tuplestores" and
> > the trigger body then does nothing with them, which goes to
> > Robert's point below re: unconditional overhead.
>
> Yeah, the more I think about it (and discuss it) the more I'm
> inclined to suffer the additional complexity of the standard syntax
> for specifying transition relations in order to avoid unnecessary
> overhead creating them when not needed.  I'm also leaning toward
> just storing TIDs in the tuplestores, even though it requires mixed
> snapshots in executing queries in the triggers.

So in this case one tuplestore with two TIDs, either of which might be
NULL?

> just seems like there will otherwise be too much overhead in copying
> around big, unreferenced columns for some situations.

Yeah, it'd be nice to have the minimal part be as slim as possible.

> > Along that same line, we don't always need to capture both the
> > before tuplestores and the after ones.  Two examples of this come
> > to mind:
> >
> > - BEFORE STATEMENT triggers accessing rows, where there is no
> > after part to use,
>
> Are you talking about an UPDATE for which the AFTER trigger(s) only
> reference the before transition table, and don't look at AFTER?  If
> so, using the standard syntax would cover that just fine.  If not,
> can you elaborate?

Sorry I was unclear. I was looking at one of the many things that
having these tuplestores around could enable. As things stand now,
there is no access of any kind to the affected rows from a
per-statement trigger, modulo user-space hacks like this one:

http://people.planetpostgresql.org/dfetter/index.php?/archives/71-Querying-Rows-in-Statement-Triggers.html

Having the "before" tuplestore available to a BEFORE STATEMENT trigger
would make it possible to do things with the before transition table
that are fragile and hacky now.
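
For anyone who hasn't run into that kind of hack: roughly, you stage
the rows yourself with a per-row trigger and then read them back in a
statement-level trigger, schematically something like this (names
invented, error handling and cleanup elided):

  -- per-row trigger stashes each affected row's old image
  CREATE FUNCTION stash_old_row() RETURNS trigger AS $$
  BEGIN
    INSERT INTO pending_old_rows SELECT OLD.*;
    RETURN NULL;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER stash_old AFTER UPDATE OR DELETE ON some_table
      FOR EACH ROW EXECUTE PROCEDURE stash_old_row();

  -- ...and a separate statement-level trigger then queries
  -- pending_old_rows and empties it when it's done.

Having the tuplestore handed to the statement-level trigger directly
would replace all of that plumbing.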

> > and
> > - DML (RETURNING BEFORE, e.g.) which only touches one of them.
> > This applies both to extant use cases of RETURNING and to planned
> > ones.
>
> I think that can be sorted out by a patch which implements that, if
> these deltas even turn out to be the appropriate way to get that
> data (which is not clear to me at this time).

Again, I see the tuplestores as infrastructure for both deltas and many
other things, so long as they're attached to the right objects. In my
opinion, the right objects would include materialized views, triggers,
and certain very specific kinds of DML of which all the RETURNING ones
are one example. They would not include the underlying tables.

> standard
> syntax, the first thing would be for the statement to somehow
> communicate to the trigger layer the need to capture a tuplestore
> it might otherwise not generate, and there would need to be a way
> for the statement to access the needed tuplestore(s).

Right. Hence my proposal to make the existence of the tuplestores
part of Query, writeable by the types of triggers which specify that
they'll be needed.

> The statement would also need to project the right set of columns.
> None of that seems to me to be relevant to this patch.  If this
> patch turns out to provide infrastructure that helps, great.  If you
> have a specific suggestion about how to make the tuplestores more
> accessible to other layers, I'm listening.

See above :)

> > In summary, I'd like to propose that the tuplestores be generated
> > separately in general and attached to callers. We can optimize
> > this by not generating redundant tuplestores.
>
> Well, if we use the standard syntax for CREATE TRIGGER and store
> the transition table names (if any) in pg_trigger, the code can
> generate one relation if any AFTER triggers which are going to fire
> need it.  I don't see any point in generating exactly the same
> tuplestore contents for each trigger.  And I suspect that we can wire
> in any other uses later when we have something to connect them to.

Yes. I just don't think that Rel is the place to connect them.

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-19 04:14:06
Message-ID: 1403151246.60573.YahooMailNeo@web122301.mail.ne1.yahoo.com
Lists: pgsql-hackers

David Fetter <david(at)fetter(dot)org> wrote:
> On Wed, Jun 18, 2014 at 03:30:34PM -0700, Kevin Grittner wrote:

>> the more I think about it (and discuss it) the more I'm
>> inclined to suffer the additional complexity of the standard
>> syntax for specifying transition relations in order to avoid
>> unnecessary overhead creating them when not needed.  I'm also
>> leaning toward just storing TIDs in the tuplestores, even though
>> it requires mixed snapshots in executing queries in the
>> triggers.
>
> So in this case one tuplestore with two TIDs, either of which
> might be NULL?

No, one or two tuplestores, depending on need, each with TIDs of
either the "before" set or the "after" set of all tuples affected
by the DML statement, however many that may be.  More or less like
this first draft patch, except with TIDs instead of copies of the
rows, and with better selectivity about when the tuplestores are
generated.

> Having the "before" tuplestore available to a BEFORE STATEMENT
> trigger would make it possible to do things with the before
> transition table that are fragile and hacky now.

How do you propose to have an accurate "before" tuplestore of
affected rows before the scan runs and before the BEFORE ... FOR
EACH ROW triggers fire?  That would be particularly interesting to
try to generate if the scan involves evaluating any VOLATILE
functions.

> Again, I see the tuplestores as infrastructure for both deltas and
> many other things, so long as they're attached to the right
> objects.  In my opinion, the right objects would include
> materialized views, triggers, and certain very specific kinds of
> DML of which all the RETURNING ones are one example.  They would
> not include the underlying tables.

Right now I've presented something for capturing the information
and allowing it to be accessed from triggers.  I don't think the
means of capturing it precludes passing it along to other
consumers.  I would like to get this part working before trying to
wire it up to anything other than triggers.  The best way to kill
an effort like this is to allow scope creep.  Now, if you think
that something fundamentally belongs at another level, that's
something to address -- but the point where we capture data to pass
to triggers seems like a natural and sound place to capture it for
other purposes.  And since almost all the code for this patch is in
trigger.c, this seems like I'm in the right place for a trigger
feature.

>> standard syntax, the first thing would be for the statement to
>> somehow communicate to the trigger layer the need to capture a
>> tuplestore it might otherwise not generate, and there would need
>> to be a way for the statement to access the needed
>> tuplestore(s).
>
> Right.  Hence my proposal to make the existence of the
> tuplestores part of Query, writeable by the types of triggers
> which specify that they'll be needed.

> I just don't think that Rel is the place to connect them.

I don't know what you mean by that.  I've already said that I now
think we should use the standard CREATE TRIGGER syntax to specify
the transition tables, and that if we do that we don't need the
reloption that's in the patch.  If you ignore the 19 lines of new
code to add that reloption, absolutely 100% of the code changes in
the patch so far are in trigger.c and trigger.h.  That reloption
was never anything I would consider as *connecting* the tuplestores
to the Rel anyway -- it was simply an attempt to minimize
unnecessary work.  No matter how I try, I'm not seeing what you
mean by references to "connecting to Rel".

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-19 15:57:18
Message-ID: 1403193438.59354.YahooMailNeo@web122304.mail.ne1.yahoo.com
Lists: pgsql-hackers

Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:

> I've already said that I now think we should use the standard
> CREATE TRIGGER syntax to specify the transition tables, and that
> if we do that we don't need the reloption that's in the patch.
> If you ignore the 19 lines of new code to add that reloption,
> absolutely 100% of the code changes in the patch so far are in
> trigger.c and trigger.h.

Although nobody has actually framed their feedback as a review, I
feel that I have enough to work with to throw the patch into
Waiting on Author status.  Since I started with the assumption that
I was going to be using standard syntax and got a ways into that
before convincing myself it was a bad idea, I should have a new
version of the patch working that way in a couple days.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-21 18:06:26
Message-ID: 1403373986.39220.YahooMailNeo@web122305.mail.ne1.yahoo.com
Lists: pgsql-hackers

Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>
>> I've already said that I now think we should use the standard
>> CREATE TRIGGER syntax to specify the transition tables, and that
>> if we do that we don't need the reloption that's in the patch.
>> If you ignore the 19 lines of new code to add that reloption,
>> absolutely 100% of the code changes in the patch so far are in
>> trigger.c and trigger.h.
>
> Although nobody has actually framed their feedback as a review, I
> feel that I have enough to work with to throw the patch into
> Waiting on Author status.  Since I started with the assumption
> that I was going to be using standard syntax and got a ways into
> that before convincing myself it was a bad idea, I should have a
> new version of the patch working that way in a couple days.

Here is v2.

This implements the standard syntax for transition tables, but
leaves transition row values alone.  (They remain as OLD and NEW in
plpgsql, for example.)  I took a first pass at the documentation
changes; I hope they are fairly readable.  I didn't create new
regression tests yet, since those will be a lot more interesting
when there is a PL to use with this.  That does mean there's not a
lot to test yet.  You can create triggers with transition tables
specified, and they should show correctly in psql and behave
correctly in pg_dump.
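
For example, with v2 applied a definition like this should be
accepted, show up in psql's \d output for the table, and be preserved
by pg_dump (names are just for illustration; without PL support the
trigger function can't actually reference the transition tables yet):

  CREATE TRIGGER t_delta
      AFTER UPDATE ON t
      REFERENCING OLD TABLE AS old_t NEW TABLE AS new_t
      FOR EACH STATEMENT
      EXECUTE PROCEDURE t_delta_func();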

I think the new columns in pg_trigger should be null capable (which
currently means moving them to the variable part of the record).
This seems like it is better done once plpgsql support is there and
we have regression tests, since techniques for that seem a little
fragile.

I didn't change the tuplestores to TID because it seemed to me that
it would preclude using transition relations with FDW triggers, and
it seemed bad not to support that.  Does anyone see a way around
that, or feel that it's OK to not support FDW triggers in this
regard?

Does this look good otherwise, as far as it goes?

Unless people want the tuplestores changed to hold TIDs of the
tuples rather than the tuples themselves, or there are other
problems with the generation of the tuplestores, I think this is
done as far as capturing the data and passing it to the triggers.
I don't think any of the existing execution node types quite fit
this, although there are some which are similar enough to crib most
of the code from.  Have I missed a node type that can be bent to
this purpose?

What I'm looking for in this CF is to confirm the approach for
capturing the data, and get any suggestions people want to offer on
the PL implementations to use it -- at which point I think it can
be Returned with Feedback to be finished after this CF.  Now is a
great time to tell me what I've done wrong in the work so far, or
make suggestions for the next phase.  :-)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
after-trigger-delta-relations-v2.patch text/x-diff 43.8 KB

From: David Fetter <david(at)fetter(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-27 20:00:30
Message-ID: 20140627200030.GS2296@fetter.org
Lists: pgsql-hackers

On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:
> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> > Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> >
> >> I've already said that I now think we should use the standard
> >> CREATE TRIGGER syntax to specify the transition tables, and that
> >> if we do that we don't need the reloption that's in the patch.
> >> If you ignore the 19 lines of new code to add that reloption,
> >> absolutely 100% of the code changes in the patch so far are in
> >> trigger.c and trigger.h.
> >
> > Although nobody has actually framed their feedback as a review, I
> > feel that I have enough to work with to throw the patch into
> > Waiting on Author status.  Since I started with the assumption
> > that I was going to be using standard syntax and got a ways into
> > that before convincing myself it was a bad idea, I should have a
> > new version of the patch working that way in a couple days.
>
> Here is v2.

Thanks!

I've taken the liberty of making an extension that uses this.
Preliminary tests indicate a 10x performance improvement over the
user-space hack I did that's similar in functionality.

Please find attached the extension, etc., which I've published to
https://github.com/davidfetter/postgresql_projects/tree/test_delta_v2

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

Attachment Content-Type Size
test_delta_v2_001.diff text/plain 10.0 KB

From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-28 14:35:10
Message-ID: 1403966110.84913.YahooMailNeo@web122303.mail.ne1.yahoo.com
Lists: pgsql-hackers

David Fetter <david(at)fetter(dot)org> wrote:
> On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:

>> Here is v2.

> I've taken the liberty of making an extension that uses this.
> Preliminary tests indicate a 10x performance improvement over the
> user-space hack I did that's similar in functionality.

Wow, this goes well beyond what I expected for a review!  Thanks!

As I said in an earlier post, I think that this is best committed
as a series of patches, one for the core portion and one for each
PL which implements the ability to use the transition (delta)
relations in AFTER triggers.  Your extension covers the C trigger
angle, and it seems to me to be worth committing to contrib as a
sample of how to use this feature in C.

It is very encouraging that you were able to use this without
touching what I did in core, and that it runs 10x faster than the
alternatives before the patch.

Because this review advances the patch so far, it may be feasible
to get it committed in this CF.  I'll see what is needed to get
there and maybe have a patch toward that end in a few days.  The
minimum that would require, IMV, is a plpgsql implementation,
moving the new pg_trigger columns to the variable portion of the
record so they can be null capable, more docs, and regression
tests.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: David Fetter <david(at)fetter(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-29 22:14:16
Message-ID: 20140629221416.GF1218@fetter.org
Lists: pgsql-hackers

On Sat, Jun 28, 2014 at 07:35:10AM -0700, Kevin Grittner wrote:
> David Fetter <david(at)fetter(dot)org> wrote:
> > On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:
>
> >> Here is v2.
>
> > I've taken the liberty of making an extension that uses this.
> > Preliminary tests indicate a 10x performance improvement over the
> > user-space hack I did that's similar in functionality.
>
> Wow, this goes well beyond what I expected for a review!  Thanks!

It was the minimum I could come up with to test whether the patch
worked.

> As I said in an earlier post, I think that this is best committed
> as a series of patches, one for the core portion and one for each
> PL which implements the ability to use the transition (delta)
> relations in AFTER triggers.

Right. I'm still holding out hope of having the transition relations
available in some more general way, but that seems more like a
refactoring job than anything fundamental.

> Your extension covers the C trigger angle, and it seems to me to be
> worth committing to contrib as a sample of how to use this feature
> in C.

It's missing a few pieces like surfacing transition table names. I'll
work on those. Also, it's not clear to me how to access the pre- and
post- relations at the same time, this being necessary for many use
cases. I guess I need to think more about how that would be done.

> It is very encouraging that you were able to use this without
> touching what I did in core, and that it runs 10x faster than the
> alternatives before the patch.

The alternative included was pretty inefficient, so there's that.

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-30 15:03:06
Message-ID: CA+Tgmob2ARxaxYTOvq_owwrobgz2nJV59ffZWy=uCG+h6vdTpg@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jun 28, 2014 at 10:35 AM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> David Fetter <david(at)fetter(dot)org> wrote:
>> On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:
>
>>> Here is v2.
>
>> I've taken the liberty of making an extension that uses this.
>> Preliminary tests indicate a 10x performance improvement over the
>> user-space hack I did that's similar in functionality.
>
> Wow, this goes well beyond what I expected for a review! Thanks!
>
> As I said in an earlier post, I think that this is best committed
> as a series of patches, one for the core portion and one for each
> PL which implements the ability to use the transition (delta)
> relations in AFTER triggers. Your extension covers the C trigger
> angle, and it seems to me to be worth committing to contrib as a
> sample of how to use this feature in C.
>
> It is very encouraging that you were able to use this without
> touching what I did in core, and that it runs 10x faster than the
> alternatives before the patch.
>
> Because this review advances the patch so far, it may be feasible
> to get it committed in this CF. I'll see what is needed to get
> there and maybe have a patch toward that end in a few days. The
> minimum that would require, IMV, is a plpgsql implementation,
> moving the new pg_trigger columns to the variable portion of the
> record so they can be null capable, more docs, and regression
> tests.

Not to rain on your parade, but this patch hasn't really had a serious
code review yet. Performance testing is good, but it's not the same
thing.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: David Fetter <david(at)fetter(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-30 15:22:07
Message-ID: 20140630152207.GJ1218@fetter.org
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Jun 30, 2014 at 11:03:06AM -0400, Robert Haas wrote:
> On Sat, Jun 28, 2014 at 10:35 AM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> > David Fetter <david(at)fetter(dot)org> wrote:
> >> On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:
> >
> >>> Here is v2.
> >
> >> I've taken the liberty of making an extension that uses this.
> >> Preliminary tests indicate a 10x performance improvement over the
> >> user-space hack I did that's similar in functionality.
> >
> > Wow, this goes well beyond what I expected for a review! Thanks!
> >
> > As I said in an earlier post, I think that this is best committed
> > as a series of patches, one for the core portion and one for each
> > PL which implements the ability to use the transition (delta)
> > relations in AFTER triggers. Your extension covers the C trigger
> > angle, and it seems to me to be worth committing to contrib as a
> > sample of how to use this feature in C.
> >
> > It is very encouraging that you were able to use this without
> > touching what I did in core, and that it runs 10x faster than the
> > alternatives before the patch.
> >
> > Because this review advances the patch so far, it may be feasible
> > to get it committed in this CF. I'll see what is needed to get
> > there and maybe have a patch toward that end in a few days. The
> > minimum that would require, IMV, is a plpgsql implementation,
> > moving the new pg_trigger columns to the variable portion of the
> > record so they can be null capable, more docs, and regression
> > tests.
>
> Not to rain on your parade, but this patch hasn't really had a serious
> code review yet. Performance testing is good, but it's not the same
> thing.

Happy to help with that, too.

What I wanted to establish first was whether there was even
rudimentary functionality, which I did by writing that extension. I
happened to notice, basically as a sanity check, that doing this via
tuplestores was, at least in one case, quicker than doing it in user
space with temp tables.
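
A typical shape for such a user-space workaround, sketched here with
invented names (this is an illustration of the pattern, not the actual
hack or extension code): row-level triggers copy each changed row into
a holding table, and a statement-level trigger then processes and
clears it.

CREATE TABLE t (id int, val text);
CREATE TABLE t_new_rows (LIKE t);   -- in practice a temp table

CREATE FUNCTION capture_row() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  -- queue the new row for the statement-level trigger
  INSERT INTO t_new_rows SELECT NEW.*;
  RETURN NULL;
END; $$;

CREATE FUNCTION process_statement() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  -- work with the accumulated rows, then clear them
  RAISE NOTICE '% row(s) inserted', (SELECT count(*) FROM t_new_rows);
  DELETE FROM t_new_rows;
  RETURN NULL;
END; $$;

CREATE TRIGGER t_capture AFTER INSERT ON t
  FOR EACH ROW EXECUTE PROCEDURE capture_row();
CREATE TRIGGER t_process AFTER INSERT ON t
  FOR EACH STATEMENT EXECUTE PROCEDURE process_statement();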

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-06-30 16:22:21
Message-ID: 1404145341.1852.YahooMailNeo@web122301.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

David Fetter <david(at)fetter(dot)org> wrote:

> It's missing a few pieces like surfacing transition table names.
> I'll work on those.  Also, it's not clear to me how to access the
> pre- and post- relations at the same time, this being necessary
> for many use cases.  I guess I need to think more about how that
> would be done.

If you're going to do any work on the C extension, please start
from the attached.  I renamed it to something which seemed more
meaningful (to me, at least), and cleaned up some cosmetic issues.
The substance is the same.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
transition-table-c-v2.diff text/x-diff 8.0 KB

From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-07-05 14:38:20
Message-ID: 1404571100.73407.YahooMailNeo@web122303.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:

> Because this review advances the patch so far, it may be feasible
> to get it committed in this CF.  I'll see what is needed to get
> there and maybe have a patch toward that end in a few days.

It appears that I need to create a new execution node and a way for
SPI calls to use it.  That seems beyond the scope of what is fair
to include in this CF, even if I got something put together in the
next couple days.

FWIW, I think that once I have the other pieces, what I initially
posted is committable as the first patch of a series.  A second
patch would add the new execution node and code to allow SPI calls
to use it.  The patch that David submitted, as modified by myself
and with further refinements that David is working on, would be the
third patch.  An implementation in plpgsql would be the fourth.
Other PLs would be left for people more familiar with those
languages to implement.

What I was hoping for in this CF was a review to confirm the
approach before proceeding to build on this foundation.  David
found nothing to complain about, and went to the trouble of writing
code to confirm that it was actually generating complete results
which were usable.  Robert doesn't feel this constitutes "a serious
code review".  I'm not aware of any changes which are needed to the
pending patch once the follow-on patches are complete.  I'm moving
this to Needs Review status.  People will have another chance to
review this patch when the other code is available, but if we want
incremental maintenance of materialized views in 9.5, delaying
review of what I have submitted in this CF until the next CF will
put that goal in jeopardy.

The one thing I don't feel great about is that it's using
tuplestores of the actual tuples rather than just their TIDs; but
it seems to me that we need the full tuple to support triggers on
FDWs, so the TID approach would be an optimization for a subset of
the cases, and would probably be more appropriate, if we do it at
all, in a follow-on patch after this is working (although I think
it is an optimization we should get into 9.5 if we are going to do
it).  If you disagree with that assessment, now would be a good
time to explain your reasoning.

A minor point is psql tab-completion for the REFERENCING clause.
It seems to me that's not critical, but I might slip it in anyway
before commit.

I took a look at whether I could avoid making OLD and NEW
non-reserved keywords, but I didn't see how to do that without
making FOR at least partially reserved.  If someone sees a way to
do this without creating three new unreserved keywords
(REFERENCING, OLD, and NEW) I'm all ears.
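
For reference, the clause in question would look something like this
(a sketch only; the table, trigger, and function names are invented,
and the exact spelling of the clause is still subject to change):

CREATE TRIGGER orders_summary
  AFTER UPDATE ON orders
  REFERENCING OLD TABLE AS old_orders NEW TABLE AS new_orders
  FOR EACH STATEMENT
  EXECUTE PROCEDURE summarize_order_changes();

REFERENCING, OLD, and NEW are the three words the grammar has to
recognize there.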

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-07-28 14:34:17
Message-ID: CABRT9RDdgV8YxF5+m5-QTJd_1vZ4ZGy6y3G6sZVTbyikdAsRdA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, Jul 5, 2014 at 5:38 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> it seems to me that we need the full tuple to support triggers on
> FDWs, so the TID approach would be an optimization for a subset of
> the cases, and would probably be more appropriate, if we do it at
> all, in a follow-on patch
> If you disagree with that assessment, now would be a good
> time to explain your reasoning.

Maybe I just have a limited imagination because I've never found a use
for FDWs personally. But recording changes from a trigger on an FDW
table doesn't seem that useful, since you can only capture changes
done by the local node. I expect that in many situations there are
multiple writers accessing the same underlying remote table. Certainly
it can't guarantee the consistency of materialized views.

> I took a look at whether I could avoid making OLD and NEW
> non-reserved keywords, but I didn't see how to do that without
> making FOR at least partially reserved. If someone sees a way to
> do this without creating three new unreserved keywords
> (REFERENCING, OLD, and NEW) I'm all ears.

Sorry, I know I am very late to make this point, so feel free to ignore this.

I'm not a fan of the SQL standard syntax for this feature. One nice
thing about PostgreSQL's triggers is that you can declare the trigger
function once and re-use it on many tables. It would make more sense
if the same function declaration could say what variable/relation
names it wants to use. They're more like function argument names, not
some metadata about a table-function relationship.

Putting these in the CREATE TRIGGER command means you have to repeat
them for each table you want to apply the trigger to. It introduces
the possibility of making more mistakes without any gain in
flexibility.
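
To make the repetition concrete, attaching one generic function to two
tables with the standard-style syntax would look roughly like this
(a sketch; all names are invented):

CREATE TRIGGER audit_orders AFTER UPDATE ON orders
  REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
  FOR EACH STATEMENT EXECUTE PROCEDURE audit_changes();

CREATE TRIGGER audit_invoices AFTER UPDATE ON invoices
  REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
  FOR EACH STATEMENT EXECUTE PROCEDURE audit_changes();

Each REFERENCING list has to be kept in sync with whatever names
audit_changes() uses internally.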

But then again, I understand that there's value in supporting standard syntax.

Regards,
Marti


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Marti Raudsepp <marti(at)juffo(dot)org>
Cc: David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-07-28 15:24:27
Message-ID: 1406561067.22267.YahooMailNeo@web122306.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Marti Raudsepp <marti(at)juffo(dot)org> wrote:
> On Sat, Jul 5, 2014 at 5:38 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>> it seems to me that we need the full tuple to support triggers on
>> FDWs, so the TID approach would be an optimization for a subset of
>> the cases, and would probably be more appropriate, if we do it at
>> all, in a follow-on patch
>> If you disagree with that assessment, now would be a good
>> time to explain your reasoning.
>
> Maybe I just have a limited imagination because I've never found a use
> for FDWs personally. But recording changes from a trigger on a FDW
> table doesn't seem that useful,

It's a matter of whether AFTER triggers on an FDW can see the
modified data in table form.  We just recently added the ability
for FDW triggers to see the data in *row* form; it seemed odd to
immediately follow that with a new way to get at the data and
cripple FDW triggers for it.

> since you can only capture changes
> done by the local node. I expect that in many situations there are
> multiple writers accessing the same underlying remote table. Certainly
> it's can't guarantee the consistency of materialized views.

While I expect the techniques used here to help with development of
incremental maintenance of materialized views, this seems like a
useful feature in its own right.  I think the question is what the
basis would be for supporting access to the changes in row format
but not table format for FDWs, if we're supporting both formats for
other tables.

>> I took a look at whether I could avoid making OLD and NEW
>> non-reserved keywords, but I didn't see how to do that without
>> making FOR at least partially reserved.  If someone sees a way to
>> do this without creating three new unreserved keywords
>> (REFERENCING, OLD, and NEW) I'm all ears.
>
> Sorry, I know I am very late to make this point, so feel free to ignore this.
>
> I'm not a fan of the SQL standard syntax for this feature. One nice
> thing about PostgreSQL's triggers is that you can declare the trigger
> function once and re-use it on many tables. It would make more sense
> if the same function declaration could say what variable/relation
> names it wants to use. They're more like function argument names, not
> some metadata about a table-function relationship.
>
> Putting these in the CREATE TRIGGER command means you have to repeat
> them for each table you want to apply the trigger to. It introduces
> the possibility of making more mistakes without any gain in
> flexibility.
>
> But then again, I understand that there's value in supporting standard
> syntax.

Do you have some other suggestion?  Keep in mind that it must allow
the code which will *generate* the transition tables to know
whether any of the attached triggers use a given transition table
for the specific operation, regardless of the language of the
trigger function.  Using the standard syntax has the advantage of
making it pretty easy to put the information exactly where it is
needed for easy access at run time.
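
To illustrate why: the chosen names are stored with the trigger, so
before running a statement the executor can cheaply check whether any
attached trigger needs a given transition table at all.  Conceptually
(assuming the new pg_trigger columns follow the tgoldtable/tgnewtable
naming used for the Trigger struct fields, and using an invented table
name):

SELECT tgname, tgoldtable, tgnewtable
  FROM pg_trigger
 WHERE tgrelid = 'orders'::regclass;

If both columns are null for every trigger relevant to the operation,
no tuplestore needs to be captured.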

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-07-28 17:27:01
Message-ID: CABRT9RBxtvhqH6LfXod-EsTcHK-tKsiKqay-644Ms_Gj-0QYRA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Jul 28, 2014 at 6:24 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Do you have some other suggestion? Keep in mind that it must allow
> the code which will *generate* the transition tables to know
> whether any of the attached triggers use a given transition table
> for the specific operation, regardless of the language of the
> trigger function.

You will need to access the pg_proc record of the trigger function
anyway, so it's just a matter of coming up with syntax that makes
sense, right?

What I had in mind was that we could re-use function argument
declaration syntax. For instance, use the "argmode" specifier to
declare OLD and NEW. Shouldn't cause grammar conflicts because the
current OUT and INOUT aren't reserved keywords.

We could also re-use the refcursor type, which already has bindings in
some PLs, if that's not too much overhead. That would make the
behavior straightforward without introducing new constructs, plus you
can pass them around between functions. Though admittedly it's
annoying to integrate cursor results into queries.

Something like:

CREATE FUNCTION trig(OLD old_rows refcursor, NEW new_rows refcursor)
RETURNS trigger LANGUAGE plpgsql AS '...';

Or maybe if the grammar allows, we could spell out "NEW TABLE", "OLD
TABLE", but that's redundant since you can already deduce that from
the refcursor type.

It could also be extended for different types, like tid[], and maybe
"record" for the FOR EACH ROW variant (dunno if that can be made to
work).
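
If something along these lines were adopted, a trigger body could
consume the cursors in the usual way.  Purely as a sketch of the idea
(this syntax does not exist today, and the names are invented):

CREATE FUNCTION log_changes(OLD old_rows refcursor, NEW new_rows refcursor)
RETURNS trigger LANGUAGE plpgsql AS $$
DECLARE
  r record;
BEGIN
  LOOP
    FETCH new_rows INTO r;
    EXIT WHEN NOT FOUND;
    RAISE NOTICE 'changed row: %', r;
  END LOOP;
  RETURN NULL;
END;
$$;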

Regards,
Marti


From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Marti Raudsepp <marti(at)juffo(dot)org>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-07-29 06:49:50
Message-ID: CAFj8pRCyAitxVjVLbLNagwsLzu+PiiAPcEUpg2zx0VXFU2zSFg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

2014-07-28 19:27 GMT+02:00 Marti Raudsepp <marti(at)juffo(dot)org>:

> On Mon, Jul 28, 2014 at 6:24 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> > Do you have some other suggestion? Keep in mind that it must allow
> > the code which will *generate* the transition tables to know
> > whether any of the attached triggers use a given transition table
> > for the specific operation, regardless of the language of the
> > trigger function.
>
> You will need to access the pg_proc record of the trigger function
> anyway, so it's just a matter of coming up with syntax that makes
> sense, right?
>
> What I had in mind was that we could re-use function argument
> declaration syntax. For instance, use the "argmode" specifier to
> declare OLD and NEW. Shouldn't cause grammar conflicts because the
> current OUT and INOUT aren't reserved keywords.
>
> We could also re-use the refcursor type, which already has bindings in
> some PLs, if that's not too much overhead. That would make the
> behavior straightforward without introducing new constructs, plus you
> can pass them around between functions. Though admittedly it's
> annoying to integrate cursor results into queries.
>
> Something like:
>
> CREATE FUNCTION trig(OLD old_rows refcursor, NEW new_rows refcursor)
> RETURNS trigger LANGUAGE plpgsql AS '...';
>

I dislike this proposal; it is strongly inconsistent with the current
trigger design.

Regards

Pavel

>
> Or maybe if the grammar allows, we could spell out "NEW TABLE", "OLD
> TABLE", but that's redundant since you can already deduce that from
> the refcursor type.
>
> It could also be extended for different types, like tid[], and maybe
> "record" for the FOR EACH ROW variant (dunno if that can be made to
> work).
>
> Regards,
> Marti
>


From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-07-29 07:41:33
Message-ID: CABRT9RDYNRo2xT2o_sMENUVH0ki4W4hOEp6Rm6U_jqaYr1Sp+g@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Jul 29, 2014 at 9:49 AM, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> wrote:
> I dislike this proposal - it is strongly inconsistent with current trigger
> design

The real point I was trying to convey (in my previous email) is that
these declarations should be part of the trigger *function* not the
function-to-table relationship. CREATE TRIGGER shouldn't be in the
business of declaring new local variables for the trigger function.
Whether we define new syntax for that or re-use the argument list is
secondary.

But the inconsistency is deliberate: I find the current trigger API
horrible. Magic variables... Text-only TG_ARGV for arguments...
RETURNS trigger... No way to invoke trigger functions directly for
testing.

By not imitating past mistakes, maybe we can eventually arrive at a
language that makes sense.

Regards,
Marti


From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Marti Raudsepp <marti(at)juffo(dot)org>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-07-29 07:59:37
Message-ID: CAFj8pRAPfsHsoMxJBETRHf+UO6mhGZV4YkHTdJf+G86WmroPGQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

2014-07-29 9:41 GMT+02:00 Marti Raudsepp <marti(at)juffo(dot)org>:

> On Tue, Jul 29, 2014 at 9:49 AM, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
> wrote:
> > I dislike this proposal - it is strongly inconsistent with current
> trigger
> > design
>
> The real point I was trying to convey (in my previous email) is that
> these declarations should be part of the trigger *function* not the
> function-to-table relationship. CREATE TRIGGER shouldn't be in the
> business of declaring new local variables for the trigger function.
> Whether we define new syntax for that or re-use the argument list is
> secondary.
>
> But the inconsistency is deliberate, I find the current trigger API
> horrible. Magic variables... Text-only TG_ARGV for arguments...
> RETURNS trigger...
>

I don't like the RETURNS TRIGGER notation much either; RETURNS void or
RETURNS record would be much more natural.

My dream would be something like CREATE OR REPLACE TRIGGER FUNCTION
trg() RETURNS RECORD, but that is only syntactic sugar, and I don't
see enough benefit to justify implementing it.

> No way to invoke trigger functions directly for
> testing.
>

That would be a horrible idea. I can agree it is a limitation, but not
a severe one; it is simple to move the code from the trigger body into
an auxiliary function. The current design is simple and robust, with
few opportunities for user error.

>
> By not imitating past mistakes, maybe we can eventually arrive at a
> language that makes sense.
>

Sorry, I disagree. Whether this API is bad enough to justify a
redesign is subjective. As more objective arguments: there is no
performance issue and no security issue. I think the current design
makes sense, so I don't see a reason to change it, or why we should
maintain two APIs. Finally, if we do change something, we should use
ANSI SQL syntax wherever possible (when we don't gain any new special
functionality by deviating from it).

Regards

Pavel

>
> Regards,
> Marti
>


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Marti Raudsepp <marti(at)juffo(dot)org>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-07-31 21:27:00
Message-ID: 1406842020.75764.YahooMailNeo@web122303.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Marti Raudsepp <marti(at)juffo(dot)org> wrote:

> The real point I was trying to convey (in my previous email) is
> that these declarations should be part of the trigger *function*
> not the function-to-table relationship. CREATE TRIGGER shouldn't
> be in the business of declaring new local variables for the
> trigger function. Whether we define new syntax for that or re-use
> the argument list is secondary.

I think the reason the standard includes it in the trigger
definition is that it allows the trigger code to be specified in
the CREATE TRIGGER statement; and in fact, PostgreSQL is the first
database product I have used which requires a trigger function to
be created first.

<triggered action> ::=
  [ FOR EACH { ROW | STATEMENT } ]
      [ WHEN <left paren> <search condition> <right paren> ]
      <triggered SQL statement>

<triggered SQL statement> ::=
    <SQL procedure statement>
  | BEGIN ATOMIC { <SQL procedure statement> <semicolon> }... END

If we ever support this part of the standard, we need to be able to
specify these in the CREATE TRIGGER statement (or stay with the
concept of reserving "special names" for these relations).  So if
we use non-standard syntax now, we are likely to be looking at an
ugly hybrid of techniques in the future.

That said, if we want to have a way to specify this for a function
(for now), we could use the CREATE FUNCTION's WITH clause.  I
assume that a mismatch between the name specified there and the
name which is used in the function body would be a run-time error
when the trigger function is fired, not an error at the time that
CREATE FUNCTION is run, since we only ensure valid syntax at CREATE
FUNCTION time; we don't resolve any table names or column names at
that time.  For example, on a freshly-created database with no
tables created:

test=# create function xxx() returns trigger language plpgsql
test-# as $$ begin update adsf set qwerty = 'xxx'; end; $$;
CREATE FUNCTION

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-07 12:11:21
Message-ID: CACoZds0nA3cXFofNiE3=VtMnSC1nqB6PKUvHLX9mum2iwh7H1Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 21 June 2014 23:36, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> I didn't change the tuplestores to TID because it seemed to me that
> it would preclude using transition relations with FDW triggers, and
> it seemed bad not to support that. Does anyone see a way around
> that, or feel that it's OK to not support FDW triggers in this
> regard?

I think it is ok to use tuplestores for now, but as mentioned by you
somewhere else in the thread, later on we should change this to using
tids as an optimization.

>
> Does this look good otherwise, as far as it goes?

I didn't yet extensively go through the patch, but before that, just a
few quick comments:

I see that the tuplestores for transition tables are stored per query
depth.  If the DML involves a table that has multiple child tables, it
seems as though a single tuplestore would be shared among all these
tables.  That means that if we define such triggers using the
transition table clause for all the child tables, the trigger function
for a child table will see tuples from other child tables as well.  Is
that true?  If it is, it does not make sense.  For the fdw tuplestore,
this issue does not arise because the DML won't involve multiple
target tables, I suppose.
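
The scenario I have in mind is something like this (a sketch; the
REFERENCING spelling follows the proposed syntax and all names are
invented):

CREATE TABLE measurement (city text, temperature int);
CREATE TABLE measurement_eu () INHERITS (measurement);
CREATE TABLE measurement_us () INHERITS (measurement);

CREATE TRIGGER eu_row AFTER UPDATE ON measurement_eu
  REFERENCING NEW TABLE AS new_eu
  FOR EACH ROW EXECUTE PROCEDURE check_eu();

CREATE TRIGGER us_row AFTER UPDATE ON measurement_us
  REFERENCING NEW TABLE AS new_us
  FOR EACH ROW EXECUTE PROCEDURE check_us();

UPDATE measurement SET temperature = temperature + 1;

The question is whether new_eu, as seen from check_eu(), contains only
the updated measurement_eu rows or the measurement_us rows as well.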

-----------

I tried to google some SQLs that use REFERENCING clause with triggers.
It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
can refer to a transition table, just like how it refers to NEW and
OLD row variables.

For e.g. :
CREATE TRIGGER notify_dept
  AFTER UPDATE ON weather
  REFERENCING NEW_TABLE AS N_TABLE
              NEW AS N_ROW
  FOR EACH ROW
  WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
BEGIN
  notify_department(N_ROW.temperature, N_ROW.city);
END

Above, it is used to get an aggregate value of all the changed rows. I think
we do not currently support aggregate expressions in the where clause, but with
transition tables, it makes more sense to support it later if not now.


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-07 14:19:19
Message-ID: 1407421159.2916.YahooMailNeo@web122305.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Thanks for your review and comments, Amit!

Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com> wrote:
> On 21 June 2014 23:36, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>> I didn't change the tuplestores to TID because it seemed to me that
>> it would preclude using transition relations with FDW triggers, and
>> it seemed bad not to support that.  Does anyone see a way around
>> that, or feel that it's OK to not support FDW triggers in this
>> regard?
>
> I think it is ok to use tuplestores for now, but as mentioned by you
> somewhere else in the thread, later on we should change this to using
> tids as an optimization.

Well, the optimization would probably be to use a tuplestore of
tids referencing modified tuples in the base table, rather than a
tuplestore of the data itself.  But I think we're in agreement.

>> Does this look good otherwise, as far as it goes?
>
> I didn't yet extensively go through the patch, but before that, just a
> few quick comments:
>
> I see that the tupelstores for transition tables are stored per query
> depth. If the
> DML involves a table that has multiple child tables, it seems as though
> a single tuplestore would be shared among all these tables. That means
> if we define
> such triggers using transition table clause for all the child tables, then
> the trigger function for a child table will see tuples from other child tables
> as well. Is that true ?

I don't think so.  I will make a note of the concern to confirm by testing.

> If it is, it does not make sense.

I agree.

> I tried to google some SQLs that use REFERENCING clause with triggers.
> It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
> can refer to a transition table, just like how it refers to NEW and
> OLD row variables.
>
> For e.g. :
> CREATE TRIGGER notify_dept
>   AFTER UPDATE ON weather
>   REFERENCING NEW_TABLE AS N_TABLE
>   NEW AS N_ROW
>   FOR EACH ROW
>   WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
>   BEGIN
>     notify_department(N_ROW.temperature, N_ROW.city);
>   END
>
> Above, it is used to get an aggregate value of all the changed rows. I think
> we do not currently support aggregate expressions in the where clause, but with
> transition tables, it makes more sense to support it later if not now.

Interesting point; I had not thought about that.  Will see if I can
include support for that in the patch for the next CF; failing
that, I will at least be careful not to paint myself into a corner
where it is unduly hard to do later.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-12 06:54:16
Message-ID: CACoZds3pjc3tJsV1zPt60xDwnchh3iGiCei1ZHbcL9EQ=gRjww@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 7 August 2014 19:49, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com> wrote:
>> On 21 June 2014 23:36, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>>> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>>> I didn't change the tuplestores to TID because it seemed to me that
>>> it would preclude using transition relations with FDW triggers, and
>>> it seemed bad not to support that. Does anyone see a way around
>>> that, or feel that it's OK to not support FDW triggers in this
>>> regard?
>>
>> I think it is ok to use tuplestores for now, but as mentioned by you
>> somewhere else in the thread, later on we should change this to using
>> tids as an optimization.
>
> Well, the optimization would probably be to use a tuplestore of
> tids referencing modified tuples in the base table, rather than a
> tuplestore of the data itself. But I think we're in agreement.
Right, that's what I meant.

>> I see that the tupelstores for transition tables are stored per query
>> depth. If the
>> DML involves a table that has multiple child tables, it seems as though
>> a single tuplestore would be shared among all these tables. That means
>> if we define
>> such triggers using transition table clause for all the child tables, then
>> the trigger function for a child table will see tuples from other child tables
>> as well. Is that true ?
>
> I don't think so. I will make a note of the concern to confirm by testing.
Thanks. I will wait for this.

>
>> I tried to google some SQLs that use REFERENCING clause with triggers.
>> It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
>> can refer to a transition table, just like how it refers to NEW and
>> OLD row variables.
>>
>> For e.g. :
>> CREATE TRIGGER notify_dept
>> AFTER UPDATE ON weather
>> REFERENCING NEW_TABLE AS N_TABLE
>> NEW AS N_ROW
>> FOR EACH ROW
>> WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
>> BEGIN
>> notify_department(N_ROW.temperature, N_ROW.city);
>> END
>>
>> Above, it is used to get an aggregate value of all the changed rows. I think
>> we do not currently support aggregate expressions in the where clause, but with
>> transition tables, it makes more sense to support it later if not now.
>
> Interesting point; I had not thought about that. Will see if I can
> include support for that in the patch for the next CF; failing
> that; I will at least be careful to not paint myself into a corner
> where it is unduly hard to do later.
We currently do the WHEN checks while saving the AFTER trigger events,
and we also add the tuples one by one while saving the trigger events.
If and when we support WHEN, we would need all of these tuples to be
saved *before* the first WHEN clause execution, and that seems to
demand more changes in the current trigger code.

More comments below :

---------------

Are we later going to extend this support for constraint triggers as
well ? I think these variables would make sense even for deferred
constraint triggers. I think we would need some more changes if we
want to support this, because there is no query depth while executing
deferred triggers and so the tuplestores might be inaccessible with
the current design.

---------------

The following (and similarly other) statements :
trigdesc->trig_insert_new_table |=
(TRIGGER_FOR_INSERT(tgtype) &&
TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable)) ? true : false;

can be simplified to:

trigdesc->trig_insert_new_table |=
(TRIGGER_FOR_INSERT(tgtype) &&
TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable));

-------------

AfterTriggerExecute()
{
.....
/*
* Set up the tuplestore information.
*/
if (trigdesc->trig_delete_old_table || trigdesc->trig_update_old_table)
LocTriggerData.tg_olddelta =
GetCurrentTriggerDeltaTuplestore(afterTriggers->old_tuplestores);
.....
}
Above, trigdesc->trig_update_old_table is true if at least one of the
table's triggers has transition tables. So this will cause the delta
table to be available in all of the table's triggers even if only one
of them uses transition tables. Maybe this could be solved by using
LocTriggerData.tg_trigger->tg_olddelta rather than
trigdesc->trig_update_old_table to decide whether
LocTriggerData.tg_olddelta should be assigned.

---------------

GetCurrentTriggerDeltaTuplestore() is now used for getting fdw
tuplestore also, so it should have a more general name.

---------------

#define TRIGGER_USES_TRANSITION_TABLE(namepointer) \
((namepointer) != (char *) NULL && (*(namepointer)) != '\0')
Since all other code sections assume trigger->tgoldtable to be
non-NULL, we can skip the NULL check above.

---------------

We should add a check to make sure the user does not supply the same
name for OLD TABLE and NEW TABLE.
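
That is, something like the following should be rejected (invented
names, using the patch's REFERENCING spelling):

CREATE TRIGGER audit AFTER UPDATE ON orders
  REFERENCING OLD TABLE AS changed NEW TABLE AS changed
  FOR EACH STATEMENT EXECUTE PROCEDURE audit_changes();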

---------------

The below code comment needs to be changed.
* Only tables are initially supported, and only for AFTER EACH STATEMENT
* triggers, but other permutations are accepted by the parser so we can give
* a meaningful message from C code.
The comment implies that with ROW triggers we do not support TABLE
transition variables. But the patch does make these variables visible
through ROW triggers.

--------------

Other than these, there are no more issues that I could find.


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-12 14:39:25
Message-ID: 1407854365.90730.YahooMailNeo@web122303.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com> wrote:
> On 7 August 2014 19:49, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>> Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com> wrote:

>>> I tried to google some SQLs that use REFERENCING clause with triggers.
>>> It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
>>> can refer to a transition table, just like how it refers to NEW and
>>> OLD row variables.
>>>
>>> For e.g. :
>>> CREATE TRIGGER notify_dept
>>>   AFTER UPDATE ON weather
>>>   REFERENCING NEW_TABLE AS N_TABLE
>>>   NEW AS N_ROW
>>>   FOR EACH ROW
>>>   WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
>>>   BEGIN
>>>     notify_department(N_ROW.temperature, N_ROW.city);
>>>   END
>>>
>>> Above, it is used to get an aggregate value of all the changed rows. I think
>>> we do not currently support aggregate expressions in the where clause, but with
>>> transition tables, it makes more sense to support it later if not now.
>>
>> Interesting point; I had not thought about that.  Will see if I can
>> include support for that in the patch for the next CF; failing
>> that; I will at least be careful to not paint myself into a corner
>> where it is unduly hard to do later.
> We currently do the WHEN checks while saving the AFTER trigger events,
> and also add the tuples one by one while saving the trigger events. If
> and when we support WHEN, we would need to make all of these tuples
> saved *before* the first WHEN clause execution, and that seems to
> demand more changes in the current trigger code.

In that case my inclination is to get things working with the less
invasive patch that doesn't try to support transition table usage
in WHEN clauses, and make support for that a later patch.

> ---------------
>
> Are we later going to extend this support for constraint triggers as
> well ? I think these variables would make sense even for deferred
> constraint triggers. I think we would need some more changes if we
> want to support this, because there is no query depth while executing
> deferred triggers and so the tuplestores might be inaccessible with
> the current design.

Hmm, I would also prefer to exclude that from an initial patch, but
this and the WHEN clause issue may influence a decision I've been
struggling with.  This is my first non-trivial foray into the
planner territory, and I've been struggling with how best to bridge
the gap between where the tuplestores are *captured* in the trigger
code and where they are referenced by name in a query and
incorporated into a plan for the executor.  (The execution level
itself was almost trivial; it's getting the tuplestore reference
through the parse analysis and planning phases that is painful for
me.)  At one point I created a "tuplestore registry" using a
process-local hashmap where each Tuplestorestate and its associated
name, TupleDesc, etc. would be registered, yielding a Tsrid (based
on Oid) to use through the parse analysis and planning steps, but
then I ripped it back out again in favor of just passing the
pointer to the structure which was stored in the registry, because
the registry seemed to me to introduce more risk of memory leaks,
references to freed memory, etc.  While it helped a little with
abstraction, it seemed to make things more fragile.  But I'm still
torn on this, and unsure whether such a registry is a good idea.

Any thoughts on that?

> ---------------
>
> The following (and similarly other) statements :
> trigdesc->trig_insert_new_table |=
> (TRIGGER_FOR_INSERT(tgtype) &&
> TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable)) ? true : false;
>
> can be simplfied to :
>
> trigdesc->trig_insert_new_table |=
> (TRIGGER_FOR_INSERT(tgtype) &&
> TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable));

Yeah, I did recognize that, but I always get squeamish about
logical operations with values which do not equal true or false.
TRIGGER_FOR_INSERT and similar macros don't necessarily return true
for something which is not false.  I should just get over that and
trust the compiler a bit more....  :-)

> ---------------
>
> AfterTriggerExecute()
> {
> .....
> /*
> * Set up the tuplestore information.
> */
> if (trigdesc->trig_delete_old_table || trigdesc->trig_update_old_table)
> LocTriggerData.tg_olddelta =
> GetCurrentTriggerDeltaTuplestore(afterTriggers->old_tuplestores);
> .....
> }
> Above, trigdesc->trig_update_old_table is true if at least one of the
> triggers of the table has transition tables. So this will cause the
> delta table to be available on all of the triggers of the table even
> if only one out of them uses transition tables. May be this would be
> solved by using LocTriggerData.tg_trigger->tg_olddelta rather than
> trigdesc->trig_update_old_table to decide whether
> LocTriggerData.tg_olddelta should be assigned.

Good point.  Will do.

> ---------------
>
> GetCurrentTriggerDeltaTuplestore() is now used for getting fdw
> tuplestore also, so it should have a more general name.

Well, "delta" *was* my attempt at a more general name.  I need to
do another pass over the naming choices to make sure I'm being
consistent, but I attempted to use these distinctions:

  transition - OLD or NEW, ROW or TABLE data for a trigger; this is
  the terminology used in the SQL standard

  oldtable/newtable - transition data for a relation representing
  what a statement affected; corresponding to the REFERENCING
  [OLD|NEW] TABLE clauses in the SQL standard

  tuplestore - for the planner and executor, since we may find
  other uses for feeding in tuplestores; I don't want to assume in
  the naming there that these are from triggers at all

  delta - a tuplestore representing changes to data; perhaps this
  is too close in concept to transition in the trigger code since
  there is no other source for delta data and the naming should
  just  use transition

Any specific suggestions?  Maybe eliminate use of "delta" and make
that GetCurrentTriggerTransitionTuplestore()?

> ---------------
>
> #define TRIGGER_USES_TRANSITION_TABLE(namepointer) \
> ((namepointer) != (char *) NULL && (*(namepointer)) != '\0')
> Since all other code sections assume trigger->tgoldtable to be
> non-NULL, we can skip the NULL check above.

I intentionally left both tests in initially because I wasn't sure
which representation would be used.  Will review whether it is time
to get off the fence on that.  ;-)

> ---------------
>
> We should add a check to make sure the user does not supply same name
> for OLD TABLE and NEW TABLE.

I already caught that and implemented a check in my development code.

> ---------------
>
> The below code comment needs to be changed.
> * Only tables are initially supported, and only for AFTER EACH STATEMENT
> * triggers, but other permutations are accepted by the parser so we can give
> * a meaningful message from C code.
> The comment implies that with ROW triggers we do not support TABLE
> transition variables. But the patch does make these variables visible
> through ROW triggers.

Oops.  Will fix.

Thanks for the review!

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-14 09:23:03
Message-ID: CACoZds2-hj6e7gjeEH4J_5VyUVJ1w1jkyP-y5uOPcOaBKyYJjQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 12 August 2014 20:09, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com> wrote:
>> On 7 August 2014 19:49, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>>> Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com> wrote:
>
>>>> I tried to google some SQLs that use REFERENCING clause with triggers.
>>>> It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
>>>> can refer to a transition table, just like how it refers to NEW and
>>>> OLD row variables.
>>>>
>>>> For e.g. :
>>>> CREATE TRIGGER notify_dept
>>>> AFTER UPDATE ON weather
>>>> REFERENCING NEW_TABLE AS N_TABLE
>>>> NEW AS N_ROW
>>>> FOR EACH ROW
>>>> WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
>>>> BEGIN
>>>> notify_department(N_ROW.temperature, N_ROW.city);
>>>> END
>>>>
>>>> Above, it is used to get an aggregate value of all the changed rows. I think
>>>> we do not currently support aggregate expressions in the where clause, but with
>>>> transition tables, it makes more sense to support it later if not now.
>>>
>>> Interesting point; I had not thought about that. Will see if I can
>>> include support for that in the patch for the next CF; failing
>>> that; I will at least be careful to not paint myself into a corner
>>> where it is unduly hard to do later.
>> We currently do the WHEN checks while saving the AFTER trigger events,
>> and also add the tuples one by one while saving the trigger events. If
>> and when we support WHEN, we would need to make all of these tuples
>> saved *before* the first WHEN clause execution, and that seems to
>> demand more changes in the current trigger code.
>
> In that case my inclination is to get things working with the less
> invasive patch that doesn't try to support transition table usage
> in WHEN clauses, and make support for that a later patch.
Agreed.
>
>> ---------------
>>
>> Are we later going to extend this support for constraint triggers as
>> well ? I think these variables would make sense even for deferred
>> constraint triggers. I think we would need some more changes if we
>> want to support this, because there is no query depth while executing
>> deferred triggers and so the tuplestores might be inaccessible with
>> the current design.
>
> Hmm, I would also prefer to exclude that from an initial patch, but
> this and the WHEN clause issue may influence a decision I've been
> struggling with. This is my first non-trivial foray into the
> planner territory, and I've been struggling with how best to bridge
> the gap between where the tuplestores are *captured* in the trigger
> code and where they are referenced by name in a query and
> incorporated into a plan for the executor. (The execution level
> itself was almost trivial; it's getting the tuplestore reference
> through the parse analysis and planning phases that is painful for
> me.
I am not sure why you think we would need to refer to the tuplestore
in the parse analysis and planner phases. It seems that we would need
it only in the execution phase. Or maybe I didn't understand your
point.

> ) At one point I create a "tuplestore registry" using a
> process-local hashmap where each Tuplestorestate and its associated
> name, TupleDesc, etc. would be registered, yielding a Tsrid (based
> on Oid) to use through the parse analysis and planning steps, but
> then I ripped it back out again in favor of just passing the
> pointer to the structure which was stored in the registry; because
> the registry seemed to me to introduce more risk of memory leaks,
> references to freed memory, etc. While it helped a little with
> abstraction, it seemed to make things more fragile. But I'm still
> torn on this, and unsure whether such a registry is a good idea.
I feel it is ok to use direct tuplestore pointers as handles, the way
you have done in the patch. I may be biased against the alternative of
looking a tuplestore up by name through a hash table, because of one
particular way I can see deferred constraint triggers being supported:
we could store these delta tuplestore pointers in the after-trigger
event structure itself. That way we would not need to worry about how
to look up these tuplestores, nor about the mechanism that moves
events from the deferred event list to the immediate event list in the
case of SET CONSTRAINTS. The only thing we would need to make sure of
is that the tuplestores are cleaned up exactly where the event
structures get cleaned up.

This is all speculation on my part, but I think we would not have to
heavily refactor your patch in order to extend support to deferred
constraint triggers. And for the WHEN clause, we will have to make
some changes in the existing trigger code anyway.

>> ---------------
>>
>> The following (and similarly other) statements :
>> trigdesc->trig_insert_new_table |=
>> (TRIGGER_FOR_INSERT(tgtype) &&
>> TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable)) ? true : false;
>>
>> can be simplfied to :
>>
>> trigdesc->trig_insert_new_table |=
>> (TRIGGER_FOR_INSERT(tgtype) &&
>> TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable));
>
> Yeah, I did recognize that, but I always get squeamish about
> logical operations with values which do not equal true or false.
> TRIGGER_FOR_INSERT and similar macros don't necessarily return true
> for something which is not false. I should just get over that and
> trust the compiler a bit more.... :-)
I am ok with it either way :). Maybe your version is more readable to
others as well; it does seem readable to me now that I look at it
again.

> Well, "delta" *was* my attempt at a more general name. I need to
> do another pass over the naming choices to make sure I'm being
> consistent, but I attempted to use these distinctions:
>
> transition - OLD or NEW, ROW or TABLE data for a trigger; this is
> the terminology used in the SQL standard
Ah ok, so transition applies to both OLD/NEW and ROW/TABLE. In that
case, the name you suggested below
(GetCurrentTriggerTransitionTuplestore) sounds very good to me.
>
> oldtable/newtable - transition data for a relation representing
> what a statement affected; corresponding to the REFERENCING
> [OLD|NEW] TABLE clauses in the SQL standard
>
> tuplestore - for the planner and executor, since we may find
> other uses for feeding in tuplestores; I don't want to assume in
> the naming there that these are from triggers at all
>
> delta - a tuplestore representing changes to data; perhaps this
> is too close in concept to transition in the trigger code since
> there is no other source for delta data and the naming should
> just use transition
All these names sound good to me. Thanks.

>
> Any specific suggestions? Maybe eliminate use of "delta" and make
> that GetCurrentTriggerTransitionTuplestore()?
Yes this sounds good.

>
>> ---------------
>>
>> #define TRIGGER_USES_TRANSITION_TABLE(namepointer) \
>> ((namepointer) != (char *) NULL && (*(namepointer)) != '\0')
>> Since all other code sections assume trigger->tgoldtable to be
>> non-NULL, we can skip the NULL check above.
>
> I intentionally left both tests in initially because I wasn't sure
> which representation would be used. Will review whether it is time
> to get off the fence on that. ;-)
OK.


From: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-14 09:45:53
Message-ID: CACoZds2joQLMF2pW_axdxzk62M6PMOky-5RBPCmT208kyhkQzw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>> The execution level
>> itself was almost trivial; it's getting the tuplestore reference
>> through the parse analysis and planning phases that is painful for
>> me.
> I am not sure why you think we would need to refer the tuplestore in
> the parse analysis and planner phases. It seems that we would need
> them only in execution phase. Or may be I didn't understand your
> point.
Ah I think I understand now. That might be because you are thinking of
having an infrastructure common to triggers and materialized views,
right ?


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-14 22:34:57
Message-ID: 1408055697.71525.YahooMailNeo@web122306.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com> wrote:

>>> The execution level itself was almost trivial; it's getting the
>>> tuplestore reference through the parse analysis and planning
>>> phases that is painful for me.
>> I am not sure why you think we would need to refer the
>> tuplestore in the parse analysis and planner phases. It seems
>> that we would need them only in execution phase. Or may be I
>> didn't understand your point.
> Ah I think I understand now. That might be because you are
> thinking of having an infrastructure common to triggers and
> materialized views, right ?

Well, it's more immediate than that.  The identifiers in the
trigger function are not resolved to particular objects until there
is a request to fire the trigger.  At that time the parse analysis
needs to find the name defined somewhere.  It's not defined in the
catalogs like a table or view, and it's not defined in the query
itself like a CTE or VALUES clause.  The names specified in trigger
creation must be recognized as needing to resolve to the new
TuplestoreScan, and it needs to match those to the tuplestores
themselves.  Row counts, costing, etc. need to be provided so the
optimizer can pick a good plan in what might be a complex query
with many options.  I'm finding the planner work here to be harder
than everything else put together.
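
To make the problem concrete, the kind of trigger function body at
issue is something like this (a sketch; order_summary and new_rows are
invented names, with new_rows standing for whatever name the trigger
definition assigns to the NEW transition table):

CREATE FUNCTION maintain_summary() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  UPDATE order_summary s
     SET total = s.total + d.delta
    FROM (SELECT customer_id, sum(amount) AS delta
            FROM new_rows
           GROUP BY customer_id) d
   WHERE s.customer_id = d.customer_id;
  RETURN NULL;
END;
$$;

Here new_rows is not a catalog relation, a CTE, or a VALUES list, so
parse analysis of the embedded query has to resolve it to the
tuplestore captured by the trigger machinery and give the planner
enough information to cost a TuplestoreScan over it.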

On the bright side, once I'm done, I might know enough about the
planner to do things a lot faster next time.  :-)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-18 06:23:38
Message-ID: CACoZds0S1RsLJm641aoWHbP7hu8YtvGJGHYJqhkS81f3WUmRZg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 15 August 2014 04:04, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com> wrote:
>
>>>> The execution level itself was almost trivial; it's getting the
>>>> tuplestore reference through the parse analysis and planning
>>>> phases that is painful for me.
>>> I am not sure why you think we would need to refer to the
>>> tuplestore in the parse analysis and planner phases. It seems
>>> that we would need them only in the execution phase. Or maybe I
>>> didn't understand your point.
>> Ah, I think I understand now. That might be because you are
>> thinking of having an infrastructure common to triggers and
>> materialized views, right?
>
> Well, it's more immediate than that. The identifiers in the
> trigger function are not resolved to particular objects until there
> is a request to fire the trigger.
Ah ok, you are talking about changes specific to the PL language
handlers. Yes, I agree that in the plpgsql parser (and in any PL
handler), we need to parse such table references in the SQL construct,
and transform it into something else.
> At that time the parse analysis
> needs to find the name defined somewhere. It's not defined in the
> catalogs like a table or view, and it's not defined in the query
> itself like a CTE or VALUES clause. The names specified in trigger
> creation must be recognized as needing to resolve to the new
> TuplestoreScan, and those need to be matched to the tuplestores
> themselves.
One approach that comes to my mind is to transform such transition
table references into a RangeFunction reference while in the plpgsql
parser/lexer. This RangeFunction would point to a set-returning
catalog function that would return rows from the delta tuplestore. So
the tuplestore itself would remain a blackbox. Once we have such a
function, any language handler can re-use the same interface.
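
For illustration, the proposed transformation might look roughly
like this; the set-returning function name and its result column
list are invented here just to sketch the idea and are not part of
any posted patch.

    -- What the trigger body says ("newtab" being the new-rows delta relation):
    SELECT * FROM newtab;

    -- Roughly what the PL parser/lexer would substitute under this proposal
    -- (the function name and column list are hypothetical):
    SELECT * FROM pg_trigger_transition_rows('newtab') AS t(id integer, val text);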

> Row counts, costing, etc. need to be provided so the
> optimizer can pick a good plan in what might be a complex query
> with many options.
I am also not sure about the costing, but I guess it may be possible
to supply some costs to the FunctionScan plan node.


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-22 14:47:39
Message-ID: 1408718859.60100.YahooMailNeo@web122302.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com> wrote:
> On 15 August 2014 04:04, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:

>> The identifiers in the trigger function are not resolved to
>> particular objects until there is a request to fire the trigger.
> Ah ok, you are talking about changes specific to the PL language
> handlers. Yes, I agree that in the plpgsql parser (and in any PL
> handler), we need to parse such table references in the SQL construct,
> and transform it into something else.
>> At that time the parse analysis
>> needs to find the name defined somewhere.  It's not defined in the
>> catalogs like a table or view, and it's not defined in the query
>> itself like a CTE or VALUES clause.  The names specified in trigger
>> creation must be recognized as needing to resolve to the new
>> TuplestoreScan, and those need to be matched to the tuplestores
>> themselves.

For now I have added the capability to register named tuplestores
(with the associated TupleDesc) to SPI.  This seems like a pretty
useful place to be able to do this, although I tried to arrange for
it to be reasonably easy to add other registration mechanisms
later.

> One approach that comes to my mind is to transform such transition
> table references into a RangeFunction reference while in the plpgsql
> parser/lexer. This RangeFunction would point to a set-returning
> catalog function that would return rows from the delta tuplestore. So
> the tuplestore itself would remain a blackbox. Once we have such a
> function, any language handler can re-use the same interface.

plpgsql seems entirely the wrong place for that.  What I have done
is for trigger.c to add the tuplestores to the TriggerData
structure, and not worry about it beyond that.  I have provided a
way to register named tuplestores to SPI, and for the subsequent
steps to recognize those names as relation names.  The changes to
plpgsql are minimal, since it only needs to take the tuplestores
from TriggerData and register them with SPI before making the SPI
calls to plan and execute.

>> Row counts, costing, etc. need to be provided so the
>> optimizer can pick a good plan in what might be a complex query
>> with many options.
> I am also not sure about the costing, but I guess it may be possible
> to supply some costs to the FunctionScan plan node.

I went with a bogus row estimate for now.  I think we can arrange
for a tuplestore to keep a count of rows added, and a function to
retrieve that, and thereby get a better estimate, but that is not
yet done.

I think this is approaching a committable state, although I think
it should probably be broken down to four separate patches.  Two of
them are very small and easy to generate: the SPI changes and the
plpgsql changes are in files that are distinct from all the other
changes.  What remains is to separate the parsing of the CREATE
TRIGGER statement and the trigger code that generates the
tuplestores for the transition tables from the analysis, planning,
and execution phases which deal with a more generic concept of
named tuplestores.  Some of the changes for those two things both
touch files related to parsing -- for different things, but in the
same files.

To avoid polluting parsing code with SPI and tuplestore includes, I
created a very thin abstraction layer for parse analysis to use.  I
didn't do the same for the executor yet, but it may be a good idea
there, too.

It probably still needs more documentation and definitely needs
more regression tests.  I have used these transition tables in
plpgsql in simple tests, but there may be bugs still lurking.

New patch attached.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
after-trigger-delta-relations-v3.patch.gz application/x-tgz 26.2 KB

From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-26 23:26:13
Message-ID: 1409095573.93122.YahooMailNeo@web122304.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:

> I think this is approaching a committable state, although I think
> it should probably be broken down to four separate patches.

And here they are. This should net to the same set of changes as
the prior post, but the changes are logically separated. They are
labeled as v3 to match the last post.

trigger-transition-tables allows transition table names to be
specified in a REFERENCING clause of CREATE TRIGGER, per spec, and
creates tuplestores when needed in the TriggerData structure. It
doesn't worry about who does what with that data. This doesn't
depend on anything else.
15 files changed, 577 insertions(+), 43 deletions(-)

spi-tuplestore-registry allows tuplestores, with associated name
and TupleDesc, to be registered with the current SPI connection.
Queries planned or executed on that connection will recognize the
name as a tuplestore relation. It doesn't care who is registering
the tuplestores or what happens to them. It doesn't depend on
anything else.
5 files changed, 445 insertions(+)

executor-tuplestore-relations covers parse analysis,
planner/optimizer, and executor layers. It pulls information from
the registry in a couple places, but is not very intertwined with
SPI. That is the only registry it knows right now, but it should
be easy to add other registries if needed. It doesn't care where
the tuplestore came from, so we should be able to use this for
other things. I have it in mind to use it for incremental
maintenance of materialized views, but I expect that other uses
will be found. It has a logical dependency on the
spi-tuplestore-registry patch. While it doesn't have a logical
dependency on trigger-transition-tables, they both modified some of
the same files, and this won't apply cleanly unless
trigger-transition-tables is applied first. If you hand-correct
the failed hunks, it compiles and runs fine without
trigger-transition-tables.
30 files changed, 786 insertions(+), 9 deletions(-)

plpgsql-after-trigger-transition-tables takes the tuplestores from
TriggerData and registers them with SPI before trigger planning and
execution. It depends on the trigger-transition-tables and
spi-tuplestore-registry patches to build, and won't do anything
useful at run time without the executor-tuplestore-relations patch.
3 files changed, 37 insertions(+), 11 deletions(-)
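
To make the intended end-to-end behavior of these patches concrete,
here is a usage sketch.  All object names are invented, and the
exact syntax is whatever the trigger-transition-tables patch
accepts; it follows the spec's REFERENCING clause.

    CREATE TABLE orders (id serial PRIMARY KEY, customer_id int, amount numeric);
    CREATE TABLE order_stats (customer_id int, rows_added bigint);

    CREATE FUNCTION record_new_orders() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        -- "newrows" is the transition table named in the trigger below;
        -- plpgsql registers its tuplestore with SPI, and the parse
        -- analysis/planner/executor changes resolve the name to a scan of it.
        INSERT INTO order_stats (customer_id, rows_added)
        SELECT customer_id, count(*)
          FROM newrows
         GROUP BY customer_id;
        RETURN NULL;
    END;
    $$;

    CREATE TRIGGER orders_stats_trg
        AFTER INSERT ON orders
        REFERENCING NEW TABLE AS newrows
        FOR EACH STATEMENT
        EXECUTE PROCEDURE record_new_orders();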

Hopefully this will make review easier.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
trigger-transition-tables-v3.patch.gz application/x-tgz 12.0 KB
spi-tuplestore-registry-v3.patch.gz application/x-tgz 3.2 KB
executor-tuplestore-relations-v3.patch.gz application/x-tgz 10.7 KB
plpgsql-after-trigger-transition-tables-v3.patch.gz application/x-tgz 1.3 KB

From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-27 07:23:56
Message-ID: 53FD878C.3030707@vmware.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 08/27/2014 02:26 AM, Kevin Grittner wrote:
> spi-tuplestore-registry allows tuplestores, with associated name
> and TupleDesc, to be registered with the current SPI connection.
> Queries planned or executed on that connection will recognize the
> name as a tuplestore relation. It doesn't care who is registering
> the tuplestores or what happens to them. It doesn't depend on
> anything else.
> 5 files changed, 445 insertions(+)
>
> ...
>
> plpgsql-after-trigger-transition-tables takes the tuplestores from
> TriggerData and registers them with SPI before trigger planning and
> execution. It depends on the trigger-transition-tables and
> spi-tuplestore-registry patches to build, and won't do anything
> useful at run time without the executor-tuplestore-relations patch.
> 3 files changed, 37 insertions(+), 11 deletions(-)

This is a surprising way to expose the NEW/OLD relations to the
planner/executor. The problem is the same as with making PL/pgSQL
variables available to the planner/executor in queries within a PL/pgSQL
function, and the solution we have for that is the "parser hooks" you
pass to SPI_prepare_params. This tuplestore registry is a different
solution to the same problem - we could've implemented parameters with a
registry like this as well. Let's not mix two different designs.

I suggest adding a new hook to the ParseState struct, (p_rangevar_hook
?). The planner calls it whenever it sees a reference to a table, and
the hook function returns back some sort of placeholder reference to the
tuplestore. With variables, the hook returns a Param node, and at
execution time, the executor calls the paramFetch hook to fetch the
value of the param. For relations/tuplestores, I guess we'll need to
invent something like a Param node, but for holding information about
the relation. Like your TsrData struct, but without the pointer to the
tuplestore. At execution time, in the SPI_execute call, you pass the
pointer to the tuplestore in the ParamListInfo struct, like you pass
parameter values.

Does this make sense? In essence, make the relations work like PL/pgSQL
variables do. If you squint a little, the new/old relation is a variable
from the function's point of view, and a parameter from the
planner/executor's point of view. It's just a variable/parameter that
holds a set of tuples, instead of a single Datum.

- Heikki


From: Jim Nasby <jim(at)nasby(dot)net>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-27 16:51:40
Message-ID: 53FE0C9C.5000209@nasby.net
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 8/27/14, 2:23 AM, Heikki Linnakangas wrote:
> Does this make sense? In essence, make the relations work like PL/pgSQL variables do. If you squint a little, the new/old relation is a variable from the function's point of view, and a parameter from the planner/executor's point of view. It's just a variable/parameter that holds a set of tuples, instead of a single Datum.

Something to keep in mind is that users will definitely think about NEW/OLD as tables. I suspect that it won't be long after release before someone asks why they can't create an index on it. :)
--
Jim C. Nasby, Data Architect jim(at)nasby(dot)net
512.569.9461 (cell) http://jim.nasby.net


From: David Fetter <david(at)fetter(dot)org>
To: Jim Nasby <jim(at)nasby(dot)net>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-27 17:19:19
Message-ID: 20140827171919.GA31498@fetter.org
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Aug 27, 2014 at 11:51:40AM -0500, Jim Nasby wrote:
> On 8/27/14, 2:23 AM, Heikki Linnakangas wrote:
> >Does this make sense? In essence, make the relations work like
> >PL/pgSQL variables do. If you squint a little, the new/old relation
> >is a variable from the function's point of view, and a parameter
> >from the planner/executor's point of view. It's just a
> >variable/parameter that holds a set of tuples, instead of a single
> >Datum.
>
> Something to keep in mind is that users will definitely think about
> NEW/OLD as tables. I suspect that it won't be long after release
> before someone asks why they can't create an index on it. :)

Continuing with this digression, that request seems more likely with
views and foreign tables, given that they persist across statements.
I'm given to understand that other systems have at least the former.

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-27 21:03:04
Message-ID: 1409173384.60803.YahooMailNeo@web122305.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
> On 08/27/2014 02:26 AM, Kevin Grittner wrote:

>> spi-tuplestore-registry allows tuplestores, with associated name
>> and TupleDesc, to be registered with the current SPI connection.
>> Queries planned or executed on that connection will recognize the
>> name as a tuplestore relation.  It doesn't care who is registering
>> the tuplestores or what happens to them.  It doesn't depend on
>> anything else.
>> 5 files changed, 445 insertions(+)
>>
>> ...
>>
>> plpgsql-after-trigger-transition-tables takes the tuplestores from
>> TriggerData and registers them with SPI before trigger planning and
>> execution.  It depends on the trigger-transition-tables and
>> spi-tuplestore-registry patches to build, and won't do anything
>> useful at run time without the executor-tuplestore-relations patch.
>> 3 files changed, 37 insertions(+), 11 deletions(-)
>
> This is a surprising way to expose the NEW/OLD relations to the
> planner/executor. The problem is the same as with making PL/pgSQL
> variables available to the planner/executor in queries within a PL/pgSQL
> function, and the solution we have for that is the "parser hooks" you
> pass to SPI_prepare_params. This tuplestore registry is a different
> solution to the same problem - we could've implemented parameters with a
> registry like this as well. Let's not mix two different designs.
>
> I suggest adding a new hook to the ParseState struct, (p_rangevar_hook
> ?). The planner calls it whenever it sees a reference to a table, and
> the hook function returns back some sort of placeholder reference to the
> tuplestore. With variables, the hook returns a Param node, and at
> execution time, the executor calls the paramFetch hook to fetch the
> value of the param. For relations/tuplestores, I guess we'll need to
> invent something like a Param node, but for holding information about
> the relation. Like your TsrData struct, but without the pointer to the
> tuplestore. At execution time, in the SPI_execute call, you pass the
> pointer to the tuplestore in the ParamListInfo struct, like you pass
> parameter values.
>
> Does this make sense?

I see your point, but SPI first has to be made aware of the
tuplestores and their corresponding names and TupleDesc structures.
Does it make sense to keep the SPI_register_tuplestore() and
SPI_unregister_tuplestore() functions for the client side of the
API, and pass things along to the parse analysis through execution
phases using the techniques you describe?  That would eliminate the
need for the SPI_get_caller_tuplestore() function and the
parse_tuplestore.[ch] files, and change how the data is fetched in
parse analysis and execution phases, but that seems fairly minimal
-- there are exactly three places that would need to call the new
hooks where the patch is now getting the information from the
registry.

> In essence, make the relations work like PL/pgSQL
> variables do. If you squint a little, the new/old relation is a variable
> from the function's point of view, and a parameter from the
> planner/executor's point of view. It's just a variable/parameter that
> holds a set of tuples, instead of a single Datum.

I don't have to squint that hard -- I've always been comfortable
with the definition of a table as a relation variable, and it's not
too big a stretch to expand that to a tuplestore.  ;-)  In fact, I
will be surprised if someone doesn't latch onto this to create a
new "declared temporary table" that only exists within the scope of
a compound statement (i.e., a BEGIN/END block).  You would DECLARE
them just like you would a scalar variable in a PL, and they would
have the same scope.

I'll take a look at doing this in the next couple days, and see
whether doing it that way is as easy as it seems on the face of it.

Thanks!

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Jim Nasby <jim(at)nasby(dot)net>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-27 21:09:39
Message-ID: 1409173779.26972.YahooMailNeo@web122305.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Jim Nasby <jim(at)nasby(dot)net> wrote:

> Something to keep in mind is that users will definitely think about NEW/OLD as
> tables. I suspect that it won't be long after release before someone asks
> why they can't create an index on it. :)

I'm comfortable saying "No" to that.  But it's a good point -- I'll
review error checking and documentation to make sure that it is
clear.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-27 21:11:06
Message-ID: 20140827211106.GG16422@tamriel.snowman.net
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

* Kevin Grittner (kgrittn(at)ymail(dot)com) wrote:
> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
> > In essence, make the relations work like PL/pgSQL
> > variables do. If you squint a little, the new/old relation is a variable
> > from the function's point of view, and a parameter from the
> > planner/executor's point of view. It's just a variable/parameter that
> > holds a set of tuples, instead of a single Datum.
>
> I don't have to squint that hard -- I've always been comfortable
> with the definition of a table as a relation variable, and it's not
> too big a stretch to expand that to a tuplestore.  ;-)  In fact, I
> will be surprised if someone doesn't latch onto this to create a
> new "declared temporary table" that only exists within the scope of
> a compound statement (i.e., a BEGIN/END block).  You would DECLARE
> them just like you would a scalar variable in a PL, and they would
> have the same scope.
>
> I'll take a look at doing this in the next couple days, and see
> whether doing it that way is as easy as it seems on the face of it.

(not following this very closely, but saw this...)

Yes, please? :)

Thanks!

Stephen


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-27 21:22:39
Message-ID: 53FE4C1F.3030109@vmware.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 08/28/2014 12:03 AM, Kevin Grittner wrote:
> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
>> I suggest adding a new hook to the ParseState struct, (p_rangevar_hook
>> ?). The planner calls it whenever it sees a reference to a table, and
>> the hook function returns back some sort of placeholder reference to the
>> tuplestore. With variables, the hook returns a Param node, and at
>> execution time, the executor calls the paramFetch hook to fetch the
>> value of the param. For relations/tuplestores, I guess we'll need to
>> invent something like a Param node, but for holding information about
>> the relation. Like your TsrData struct, but without the pointer to the
>> tuplestore. At execution time, in the SPI_execute call, you pass the
>> pointer to the tuplestore in the ParamListInfo struct, like you pass
>> parameter values.
>>
>> Does this make sense?
>
> I see your point, but SPI first has to be made aware of the
> tuplestores and their corresponding names and TupleDesc structures.
> Does it make sense to keep the SPI_register_tuplestore() and
> SPI_unregister_tuplestore() functions for the client side of the
> API, and pass things along to the parse analysis through execution
> phases using the techniques you describe?

Sorry, I didn't understand that. What do you mean by "first", and the
"client side of the API"? I don't see any need for the
SPI_register_tuplestore() and SPI_unregister_tuplestore() functions
if you use the hooks.

>> In essence, make the relations work like PL/pgSQL
>> variables do. If you squint a little, the new/old relation is a variable
>> from the function's point of view, and a parameter from the
>> planner/executor's point of view. It's just a variable/parameter that
>> holds a set of tuples, instead of a single Datum.
>
> I don't have to squint that hard -- I've always been comfortable
> with the definition of a table as a relation variable, and it's not
> too big a stretch to expand that to a tuplestore. ;-) In fact, I
> will be surprised if someone doesn't latch onto this to create a
> new "declared temporary table" that only exists within the scope of
> a compound statement (i.e., a BEGIN/END block). You would DECLARE
> them just like you would a scalar variable in a PL, and they would
> have the same scope.

Yeah, that would be cool :-).

- Heikki


From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-28 08:48:55
Message-ID: CABRT9RBddGWGYpRZDeO9j_aDGiMAJowiVApXrdoPxOCCt5OhoA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Aug 28, 2014 at 12:03 AM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>> In essence, make the relations work like PL/pgSQL
>> variables do. If you squint a little, the new/old relation is a variable
>> from the function's point of view, and a parameter from the
>> planner/executor's point of view. It's just a variable/parameter that
>> holds a set of tuples, instead of a single Datum.

> will be surprised if someone doesn't latch onto this to create a
> new "declared temporary table" that only exists within the scope of
> a compound statement (i.e., a BEGIN/END block). You would DECLARE
> them just like you would a scalar variable in a PL, and they would
> have the same scope.
>
> I'll take a look at doing this in the next couple days, and see
> whether doing it that way is as easy as it seems on the face of it.

We already have all this with refcursor variables. Admittedly cursors
have some weird semantics (mutable position) and currently they're
cumbersome to combine into queries, but would a new "relation
variable" be sufficiently better to warrant a new type?

Why not extend refcursors and make them easier to use instead?

Regards,
Marti


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-29 21:15:01
Message-ID: 1409346901.59767.YahooMailNeo@web122306.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
> On 08/28/2014 12:03 AM, Kevin Grittner wrote:
>> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
>>> I suggest adding a new hook to the ParseState struct, (p_rangevar_hook
>>> ?). The planner calls it whenever it sees a reference to a table, and
>>> the hook function returns back some sort of placeholder reference to the
>>> tuplestore. With variables, the hook returns a Param node, and at
>>> execution time, the executor calls the paramFetch hook to fetch the
>>> value of the param. For relations/tuplestores, I guess we'll need to
>>> invent something like a Param node, but for holding information about
>>> the relation. Like your TsrData struct, but without the pointer to the
>>> tuplestore. At execution time, in the SPI_execute call, you pass the
>>> pointer to the tuplestore in the ParamListInfo struct, like you pass
>>> parameter values.
>>>
>>> Does this make sense?
>>
>> I see your point, but SPI first has to be made aware of the
>> tuplestores and their corresponding names and TupleDesc structures.
>> Does it make sense to keep the SPI_register_tuplestore() and
>> SPI_unregister_tuplestore() functions for the client side of the
>> API, and pass things along to the parse analysis through execution
>> phases using the techniques you describe?
>
> Sorry, I didn't understand that. What do you mean by "first", and the
> "client side of the API"? I don't see any need for the
> SPI_register_tuplestore() and SPI_unregister_tuplestore() functions
> if you use the hooks.

If we were to go with the hooks as you propose, we would still need
to take the information from TriggerData and put it somewhere else
for the hook to reference.  The hooks are generalized for plpgsql,
not just for triggers, and it doesn't seem appropriate for them to
be fishing around in the TriggerData structure.  And what if we add
other sources for tuplestores?  The lookup during parse analysis
each time an apparent relation name is encountered must be simple
and fast.

I want named tuplestores to be easy to use from *all* PLs (for
trigger usage) as well as useful for other purposes people may want
to develop.  I had to change the hashkey for plpgsql's plan
caching, but that needs to be done regardless of the API (to
prevent problems in the obscure case that someone attaches the same
trigger function to the same table for the same events more than
once with different trigger names and different transition table
names).  If you ignore that, the *entire* change to use this in
plpgsql is to add these lines to plpgsql_exec_trigger():

    /*
     * Capture the NEW and OLD transition TABLE tuplestores (if specified for
     * this trigger).
     */
    if (trigdata->tg_newtable)
    {
        Tsr tsr = palloc(sizeof(TsrData));

        tsr->name = trigdata->tg_trigger->tgnewtable;
        tsr->tstate = trigdata->tg_newtable;
        tsr->tupdesc = trigdata->tg_relation->rd_att;
        tsr->relid = trigdata->tg_relation->rd_id;
        SPI_register_tuplestore(tsr);
    }
    if (trigdata->tg_oldtable)
    {
        Tsr tsr = palloc(sizeof(TsrData));

        tsr->name = trigdata->tg_trigger->tgoldtable;
        tsr->tstate = trigdata->tg_oldtable;
        tsr->tupdesc = trigdata->tg_relation->rd_att;
        tsr->relid = trigdata->tg_relation->rd_id;
        SPI_register_tuplestore(tsr);
    }

With the new SPI functions, the code to implement this in each
other PL should be about the same (possibly identical), and areas
using SPI only need similar code to make tuplestores visible to the
planner and usable in the executor if someone has another use for
this.  You just do the above once you have run SPI_connect() and
before preparing or executing any query that references the named
tuplestore.  It remains available on that SPI connection until
SPI_finish() is called or you explicitly unregister it (by name).

If we use the hooks, I think it will be several times as much code,
more invasive, and probably more fragile.  More importantly, these
hooks are not used by the other PLs included with core, and are not
used in any of the other core code, anywhere.  They all use SPI, so
they can do the above very easily, but adding infrastructure for
them to use the hooks would be a lot more work, and I'm not seeing
a corresponding benefit.

I think there is some room to change the API as used by the parser,
planner, and executor so that no changes are needed there if we add
some other registration mechanism besides SPI, but I think having a
registry for tuplestores that sort of takes the place of the
catalogs and related caches (but is far simpler, being process
local) is a better model than what you propose.

In summary, thinking of the definition of a named tuplestore as a
variable is the wrong parallel; it is more like a lightweight
relation, and the comparison should be to that, not to function
parameters or local variables.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-30 14:46:31
Message-ID: 1409409991.99501.YahooMailNeo@web122303.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:

> executor-tuplestore-relations covers parse analysis,
> planner/optimizer, and executor layers.

> 30 files changed, 786 insertions(+), 9 deletions(-)

Testing and further review found a few places that needed to add
lines for the new RTE kind that I had missed.  Delta patch
attached.
7 files changed, 58 insertions(+)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
executor-tuplestore-relations-v3a.diff text/x-diff 5.7 KB

From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-01 17:07:42
Message-ID: 5404A7DE.5040204@vmware.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 08/30/2014 12:15 AM, Kevin Grittner wrote:
> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
>> On 08/28/2014 12:03 AM, Kevin Grittner wrote:
>>> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
>>>> I suggest adding a new hook to the ParseState struct, (p_rangevar_hook
>>>> ?). The planner calls it whenever it sees a reference to a table, and
>>>> the hook function returns back some sort of placeholder reference to the
>>>> tuplestore. With variables, the hook returns a Param node, and at
>>>> execution time, the executor calls the paramFetch hook to fetch the
>>>> value of the param. For relations/tuplestores, I guess we'll need to
>>>> invent something like a Param node, but for holding information about
>>>> the relation. Like your TsrData struct, but without the pointer to the
>>>> tuplestore. At execution time, in the SPI_execute call, you pass the
>>>> pointer to the tuplestore in the ParamListInfo struct, like you pass
>>>> parameter values.
>>>>
>>>> Does this make sense?
>>>
>>> I see your point, but SPI first has to be made aware of the
>>> tuplestores and their corresponding names and TupleDesc structures.
>>> Does it make sense to keep the SPI_register_tuplestore() and
>>> SPI_unregister_tuplestore() functions for the client side of the
>>> API, and pass things along to the parse analysis through execution
>>> phases using the techniques you describe?
>>
>> Sorry, I didn't understand that. What do you mean by "first", and the
>> "client side of the API"? I don't see any need for the
>> SPI_register_tuplestore() and SPI_unregister_tuplestore() functions
>> if you use the hooks.
>
> If we were to go with the hooks as you propose, we would still need
> to take the information from TriggerData and put it somewhere else
> for the hook to reference.

Sure.

> The hooks are generalized for plpgsql,
> not just for triggers, and it doesn't seem appropriate for them to
> be fishing around in the TriggerData structure.

PLpgSQL_execstate seems like the appropriate place.

> And what if we add other sources for tuplestores?

What about it?

> The lookup during parse analysis
> each time an apparent relation name is encountered must be simple
> and fast.

We already use hooks for ColumnRefs, which are called even more often,
and we haven't had a problem making that fast enough.

> I want named tuplestores to be easy to use from *all* PLs (for
> trigger usage) as well as useful for other purposes people may want
> to develop.

I'm not sure other PLs would even want to resolve the old/new relations
like PL/pgSQL does. It might be more natural to access the new/old
tuplestores as perl or python hashes or arrays, for example. But if they
do, it's not that difficult to write the hooks.

> I had to change the hashkey for plpgsql's plan
> caching, but that needs to be done regardless of the API (to
> prevent problems in the obscure case that someone attaches the same
> trigger function to the same table for the same events more than
> once with different trigger names and different transition table
> names). If you ignore that, the *entire* change to use this in
> plpgsql is to add these lines to plpgsql_exec_trigger():
>
>     /*
>      * Capture the NEW and OLD transition TABLE tuplestores (if specified for
>      * this trigger).
>      */
>     if (trigdata->tg_newtable)
>     {
>         Tsr tsr = palloc(sizeof(TsrData));
>
>         tsr->name = trigdata->tg_trigger->tgnewtable;
>         tsr->tstate = trigdata->tg_newtable;
>         tsr->tupdesc = trigdata->tg_relation->rd_att;
>         tsr->relid = trigdata->tg_relation->rd_id;
>         SPI_register_tuplestore(tsr);
>     }
>     if (trigdata->tg_oldtable)
>     {
>         Tsr tsr = palloc(sizeof(TsrData));
>
>         tsr->name = trigdata->tg_trigger->tgoldtable;
>         tsr->tstate = trigdata->tg_oldtable;
>         tsr->tupdesc = trigdata->tg_relation->rd_att;
>         tsr->relid = trigdata->tg_relation->rd_id;
>         SPI_register_tuplestore(tsr);
>     }
>
> With the new SPI functions, the code to implement this in each
> other PL should be about the same (possibly identical), and areas
> using SPI only need similar code to make tuplestores visible to the
> planner and usable in the executor if someone has another use for
> this. You just do the above once you have run SPI_connect() and
> before preparing or executing any query that references the named
> tuplestore.

With hooks, the code to implement them in other PLs would be about the
same too, if they want the same behavior.

> It remains available on that SPI connection until
> SPI_finish() is called or you explicitly unregister it (by name).

Yeah, I don't like that. The SPI interface is currently stateless. Well,
except for cursors and plans explicitly saved with SPI_keepplan. But the
way queries are parsed is stateless - you pass all the necessary
information as part of the SPI_execute call (or similar), using direct
arguments and the ParamListInfo struct.

If you don't want to use hooks, I nevertheless feel that the old/new
relations should be passed as part of the ParamListInfo struct, one way
or another. With hooks, you would set the parserSetup hook, which in
turn would set up the table-ref hook similar to the column-ref hooks,
but if you don't want to do that, you could also add new fields directly
to ParamListInfo for the relations, like the "ParamExternData
params[1]" array that's there currently.

> If we use the hooks, I think it will be several times as much code,
> more invasive, and probably more fragile. More importantly, these
> hooks are not used by the other PLs included with core, and are not
> used in any of the other core code, anywhere. They all use SPI, so
> they can do the above very easily, but adding infrastructure for
> them to use the hooks would be a lot more work, and I'm not seeing
> a corresponding benefit.

If they use SPI, they can use the hooks just as well.

> I think there is some room to change the API as used by the parser,
> planner, and executor so that no changes are needed there if we add
> some other registration mechanism besides SPI, but I think having a
> registry for tuplestores that sort of takes the place of the
> catalogs and related caches (but is far simpler, being process
> local) is a better model than what you propose.
>
> In summary, thinking of the definition of a named tuplestore as a
> variable is the wrong parallel; it is more like a lightweight
> relation, and the comparison should be to that, not to function
> parameters or local variables.

I'm not too convinced. Marti Raudsepp mentioned refcursor variables, and
I think he's on to something. Refcursors are a bit difficult to
understand, so I wouldn't use those directly, but it would make a lot of
sense if you could e.g. loop over the rows directly with a FOR loop,
e.g. "FOR recordvar IN new LOOP ... END LOOP" without having to do
"SELECT * FROM new". Introducing a new "relation variable" datatype for
this would be nice. I won't insist that you have to implement that right
now, but let's not foreclose the option to add it later.

BTW, do we expect the old/new relations to be visible to dynamic SQL,
ie. EXECUTE? I think most users would say yes, even though the old/new
variables are not - you have to pass them with EXECUTE USING. Admittedly
that supports thinking of them more as lightweight relations rather than
variables.

Another question is whether you would expect the NEW/OLD table to be
visible to functions that you call from the trigger? For example, if the
trigger does:

...
PERFORM myfunction()
...

Can myfunction do "SELECT * FROM new" ? If not, is there a work-around?
I guess that's not this patch's problem, because we don't have a way to
pass sets as arguments in general. Refcursor is the closest thing, but
it's cumbersome.

In any case, I think having the parser call back into SPI, in
parse_tuplestore.c, is a modularity violation. The tuplestores need to
somehow find their way into ParseState. If you go with the
SPI_register_tuplestore API, then you should still have the tuplestores
in the ParseState struct, and have SPI fill in that information. Now
that I look back, I think you also referred to that earlier in the
paragraph that I didn't understand. Let me try again:

> I see your point, but SPI first has to be made aware of the
> tuplestores and their corresponding names and TupleDesc structures.
> Does it make sense to keep the SPI_register_tuplestore() and
> SPI_unregister_tuplestore() functions for the client side of the
> API, and pass things along to the parse analysis through execution
> phases using the techniques you describe? That would eliminate the
> need for the SPI_get_caller_tuplestore() function and the
> parse_tuplestore.[ch] files, and change how the data is fetched in
> parse analysis and execution phases, but that seems fairly minimal
> -- there are exactly three places that would need to call the new
> hooks where the patch is now getting the information from the
> registry.

Yes, if you decide to keep the SPI_register_tuplestore() function, then
IMHO you should still use hooks to pass that information to the parser,
planner and executor.

- Heikki


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-01 18:09:44
Message-ID: 12664.1409594984@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> writes:
> On 08/30/2014 12:15 AM, Kevin Grittner wrote:
>> If we were to go with the hooks as you propose, we would still need
>> to take the information from TriggerData and put it somewhere else
>> for the hook to reference.

> Sure.

FWIW, I agree with Heikki on this point. It makes a lot more sense for
the parser to provide hooks comparable to the existing hooks for resolving
column refs, and it's not apparent that loading such functionality into
SPI is sane at all.

OTOH, I agree with Kevin that the things we're talking about are
lightweight relations not variables.

regards, tom lane


From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-02 14:46:21
Message-ID: CABRT9RC6qN4b3_0Ew85r-Pm2RNpzXVo19Spmu-xhC6yLEMB4zQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Sep 1, 2014 at 9:09 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> OTOH, I agree with Kevin that the things we're talking about are
> lightweight relations not variables.

My worry is that PL/pgSQL and Postgres's SQL dialect is turning into a
Frankenstein monster with many ways to do the same thing, each having
different semantics that require effort to reason about.

Variables and function arguments are non-controversial; every
experienced coder understands their semantics without thinking twice
-- even if they're not familiar with Postgres.

The concept of "lightweight relations" that pop into existence when a
certain kind of trigger definition is used somewhere in the function
stack, without a CREATE TABLE, without being discoverable in
information_schema etc., I find needs some more justification than
I've seen in this thread. So far I've only heard that it's more
convenient to implement in the current PostgreSQL code base.

I'm sure more questions would pop up in practice, but as Heikki
mentioned: Are such relations also visible to other functions called
by the trigger function?
* If yes, this introduces non-obvious dependencies between functions.
What happens when one trigger with delta relations invokes another
trigger, does the previous one get shadowed or overwritten? What are
the interactions with search_path? Can an unprivileged function
override relation names when calling a SECURITY DEFINER function?

* If not, this further inhibits developers from properly modularizing
their trigger code (this is already a problem due to the current magic
trigger variables).

Even if these questions have reasonable answers, it takes mental
effort to remember the details. Procedure code debugging, especially
triggers, is hard enough due to poor tooling; increasing the cognitive
load should not be done lightly.

You could argue that CREATE TEMP TABLE already has some of these
problems, but it's very rare that people actually need to use that. If
delta relations get built on this new mechanism, avoiding them won't
be an
option any more.

Regards,
Marti


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Marti Raudsepp <marti(at)juffo(dot)org>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-03 19:49:29
Message-ID: 1409773769.71219.YahooMailNeo@web122305.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Marti Raudsepp <marti(at)juffo(dot)org> wrote:
> On Mon, Sep 1, 2014 at 9:09 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> The concept of "lightweight relations" that pop into existence when a
> certain kind of trigger definition is used somewhere in the function
> stack, without a CREATE TABLE, without being discoverable in
> information_schema etc., I find needs some more justification than
> I've seen in this thread. So far I've only heard that it's more
> convenient to implement in the current PostgreSQL code base.

It is required by the SQL standard.

> I'm sure more questions would pop up in practice, but as Heikki
> mentioned: Are such relations also visible to other functions called
> by the trigger function?
> * If yes, this introduces non-obvious dependencies between functions.
> What happens when one trigger with delta relations invokes another
> trigger, does the previous one get shadowed or overwritten?

This is indeed a killer objection.  As things stand in the patch, a
function called from a trigger function might have the table of the
same name (if it's not a not schema-qualified reference) shadowed,
or it might not -- depending on whether it was already planned.
That's obviously not acceptable.  Passing the metadata from the
TriggerData structure to the PLpgSQL_execstate structure to the
PLpgSQL_expr structure and on to the ParseState structure, and
passing it down to child ParseState structures as needed, along
with similar passing of the Tuplestorestate pointer (and associated
name) to the execution state structures should fix that.

> What are the interactions with search_path?

Pretty much the same as the interactions of RTEs with search_path.
If the apparent relation name is not schema-qualified, parse
analysis first tries to resolve the name as an RTE, and if that
fails it tries to resolve it as a named tuplestore, and if that
fails it goes to the catalogs using search_path.
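
For illustration (object names invented), inside a trigger function
whose trigger names a NEW transition table "newtab", with an
ordinary table public.newtab also present, that resolution order
plays out like this:

    WITH newtab AS (SELECT 1 AS id)
    SELECT * FROM newtab;           -- a name defined in the query itself wins

    SELECT * FROM newtab;           -- otherwise the named tuplestore (the
                                    -- transition table) is used

    SELECT * FROM public.newtab;    -- a schema-qualified reference never sees
                                    -- the tuplestore; it goes to the catalogs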

> Can an unprivileged function override relation names when calling
> a SECURITY DEFINER function?

By changing things to the way Heikki and Tom suggest, any called
functions are not aware of or affected by a named tuplestore in the
caller's context.  (Changing *back*, actually -- I had this largely
done that way before; but it seemed like a rather fragile relay
race, passing the baton from one structure to another at odd
places.  I guess there's no helping that.  Or maybe once I post a
version changed back to that someone can show me something I missed
that makes it better.)

> You could argue that CREATE TEMP TABLE already has some of these
> problems, but it's very rare that people actually need to use that. If
> delta relations get built on this new mechanism, avoiding them won't
> be an
> option any more.

Not true -- you don't have them unless you request them in CREATE
TRIGGER.  Nobody can be using this now, so a table owner must
*choose* to add the REFERENCING clause to the CREATE TRIGGER
statement for it to matter in the trigger function that is then
referenced.  Perhaps if we implement the ability to specify the
trigger code in the CREATE TRIGGER statement itself (rather than
requiring that a trigger function be created first) it will be
easier to look at and cope with.
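
To illustrate how narrow the opt-in is (object names invented;
syntax as accepted by the trigger-transition-tables patch):

    -- Without a REFERENCING clause nothing changes for existing triggers;
    -- no transition tuplestores are built:
    CREATE TRIGGER accounts_audit
        AFTER UPDATE ON accounts
        FOR EACH STATEMENT EXECUTE PROCEDURE audit_fn();

    -- Only a trigger declared like this captures the delta relations, and
    -- only its trigger function sees them:
    CREATE TRIGGER accounts_delta_audit
        AFTER UPDATE ON accounts
        REFERENCING OLD TABLE AS oldrows NEW TABLE AS newrows
        FOR EACH STATEMENT EXECUTE PROCEDURE audit_delta_fn();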

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Marti Raudsepp <marti(at)juffo(dot)org>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-03 20:22:18
Message-ID: 1409775738.77167.YahooMailNeo@web122306.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Marti Raudsepp <marti(at)juffo(dot)org> wrote:

>> What are the interactions with search_path?

>
> Pretty much the same as the interactions of RTEs with search_path.
> If the apparent relation name is not schema-qualified, parse
> analysis first tries to resolve the name as an RTE, and if that
> fails it tries to resolve it as a named tuplestore, and if that
> fails it goes to the catalogs using search_path.

Argh. s/RTE/CTE/

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-03 20:55:00
Message-ID: CABRT9RBO3OrdbpA82mkSrzebsEvuvw8x7zAREKcYH36bPMJawQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Sep 3, 2014 at 10:49 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Marti Raudsepp <marti(at)juffo(dot)org> wrote:
>> The concept of "lightweight relations" that pop into existence when a
>> certain kind of trigger definition is used somewhere in the function
>> stack, without a CREATE TABLE, without being discoverable in
>> information_schema etc., I find needs some more justification than
>> I've seen in this thread. So far I've only heard that it's more
>> convenient to implement in the current PostgreSQL code base.
>
> It is required by the SQL standard.

I had a cursory read of the SQL 20nn draft and I don't get this
impression. The only place I could find discussing the behavior of
"transition tables" is in Foundation "4.39.1 General description of
triggers", which says:

"Special variables make the data in the transition table(s) available
to the triggered action. For a statement-level
trigger the variable is one whose value is a transition table."

There is no information about the scoping of such variables, so I
assume it refers to a regular locally scoped variable.

Did I miss something? Are you reading a different version of the spec?

Regards,
Marti


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Marti Raudsepp <marti(at)juffo(dot)org>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-03 21:14:17
Message-ID: 1409778857.7980.YahooMailNeo@web122305.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Marti Raudsepp <marti(at)juffo(dot)org> wrote:
> On Wed, Sep 3, 2014 at 10:49 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>> Marti Raudsepp <marti(at)juffo(dot)org> wrote:
>>> The concept of "lightweight relations" that pop into existence when a
>>> certain kind of trigger definition is used somewhere in the function
>>> stack, without a CREATE TABLE, without being discoverable in
>>> information_schema etc., I find needs some more justification than
>>> I've seen in this thread. So far I've only heard that it's more
>>> convenient to implement in the current PostgreSQL code base.
>>
>> It is required by the SQL standard.
>
> I had a cursory read of the SQL 20nn draft and I don't get this
> impression. The only place I could find discussing the behavior of
> "transition tables" is in Foundation "4.39.1 General description of
> triggers", which says:
>
> "Special variables make the data in the transition table(s) available
> to the triggered action. For a statement-level
> trigger the variable is one whose value is a transition table."
>
> There is no information about the scoping of such variables, so I
> assume it refers to a regular locally scoped variable.
>
> Did I miss something?

Apparently. I did a search on the document and counted and got 101
occurrences of "transition table". I might be off by a few, but
that should be pretty close. Perhaps this, from 4.14, most directly
answers your point:

| A transient table is a named table that may come into existence
| implicitly during the evaluation of a <query expression> or the
| execution of a trigger. A transient table is identified by a
| <query name> if it arises during the evaluation of a <query
| expression>, or by a <transition table name> if it arises during
| the execution of a trigger. Such tables exist only for the
| duration of the executing SQL-statement containing the <query
| expression> or for the duration of the executing trigger.

> Are you reading a different version of the spec?

I'm looking at a draft of 200x from 2006-02-01.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-15 14:25:37
Message-ID: 1410791137.9135.YahooMailNeo@web122302.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> writes:
>
>> On 08/30/2014 12:15 AM, Kevin Grittner wrote:
>>> If we were to go with the hooks as you propose, we would still need
>>> to take the information from TriggerData and put it somewhere else
>>> for the hook to reference.
>
>> Sure.
>
> FWIW, I agree with Heikki on this point. It makes a lot more sense for
> the parser to provide hooks comparable to the existing hooks for resolving
> column refs, and it's not apparent that loading such functionality into
> SPI is sane at all.
>
> OTOH, I agree with Kevin that the things we're talking about are
> lightweight relations not variables.

Try as I might, I was unable to find any sensible way to use hooks.
If the issue was only the parsing, the route was fairly obvious,
but the execution side needs to access the correct tuplestore(s)
for each run, too -- so the sort of information provided by
relcache needed to be passed in based on the context within the
process (i.e., if you have nested triggers firing, each needs to
use a different tuplestore with the same name in the same
function, even though it's using the same plan). On both sides it
seemed easier to pass things through the same sort of techniques as
"normal" parameters; I couldn't find a way to use hooks that didn't
just make things uglier.

I see the down side of making this a feature which can only be used
from SPI, so I've updated the patch to allow it from other
contexts. On the other hand, I see many uses for it where SPI *is*
used, and the SPI interface needs to change to support that. The
functions I had added are one way to do that on the
parsing/planning side without breaking any existing code. The same
information (i.e., the metadata) needs to be passed to the executor
along with references to the actual tuplestores, and that needs to
go through a different path -- at least for the plpgsql usage.

I broke out the changes from the previous patch in multiple commits
in my repository on github:

commit 2520db8fbb41c68a82c2c750c8543154c6d85eb9
Author: Kevin Grittner <kgrittn(at)postgresql(dot)org>
Date: Mon Sep 15 01:17:14 2014 -0500

Use executor state rather than SPI to get named tuplestores.

spi.c and trigger.c will need a little more clean-up to match this,
and it's likely that not all places that need to pass the baton are
doing so, but at least this passes regression tests.

commit de9067258125226d1625f160c3eee9aff90ca598
Author: Kevin Grittner <kgrittn(at)postgresql(dot)org>
Date: Sun Sep 14 22:01:04 2014 -0500

Pass the Tsrcache through ParserState to parse analysis.

commit e94779c1e22ec587446a7aa2593ba5f102b6a28b
Author: Kevin Grittner <kgrittn(at)postgresql(dot)org>
Date: Sun Sep 14 19:23:45 2014 -0500

Modify SPI to use Tsrcache instead of List.

commit 7af841881d9113eb4c8dca8e82dc1867883bf75d
Author: Kevin Grittner <kgrittn(at)postgresql(dot)org>
Date: Sun Sep 14 18:45:54 2014 -0500

Create context-specific Tsrcache.

Not used yet.

commit 35790a4b6c236d09e0be261be9b0017d34eaf9c9
Author: Kevin Grittner <kgrittn(at)postgresql(dot)org>
Date: Sun Sep 14 16:49:37 2014 -0500

Create Tsrmd structure so tuplestore.h isn't needed in parser.

commit 93d57c580da095b71d9214f69fede71d2f8ed840
Author: Kevin Grittner <kgrittn(at)postgresql(dot)org>
Date: Sun Sep 14 15:59:45 2014 -0500

Improve some confusing naming for TuplestoreRelation node.

Anyone who has already reviewed the earlier patch may want to look
at these individually:

https://github.com/kgrittn/postgres/compare/transition

Attached is a new patch including all of that. Hopefully I haven't
misunderstood what Heikki and Tom wanted; if I have, just let me
know where I went wrong.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
trigger-transition-tables-v4.patch.gz application/x-tgz 38.5 KB

From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Heikki Linnakangas" <hlinnakangas(at)vmware(dot)com>
Cc: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-23 16:46:49
Message-ID: 5421A3F9.6030805@vmware.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 09/15/2014 05:25 PM, Kevin Grittner wrote:
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> writes:
>>
>>> On 08/30/2014 12:15 AM, Kevin Grittner wrote:
>>>> If we were to go with the hooks as you propose, we would still need
>>>> to take the information from TriggerData and put it somewhere else
>>>> for the hook to reference.
>>
>>> Sure.
>>
>> FWIW, I agree with Heikki on this point. It makes a lot more sense for
>> the parser to provide hooks comparable to the existing hooks for resolving
>> column refs, and it's not apparent that loading such functionality into
>> SPI is sane at all.
>>
>> OTOH, I agree with Kevin that the things we're talking about are
>> lightweight relations not variables.
>
> Try as I might, I was unable to find any sensible way to use hooks.
> If the issue was only the parsing, the route was fairly obvious,
> but the execution side needs to access the correct tuplestore(s)
> for each run, too -- so the sort of information provided by
> relcache needed to be passed in based on the context within the
> process (i.e., if you have nested triggers firing, each needs to
> use a different tuplestore with the same name in the same
> function, even though it's using the same plan). On both sides it
> seemed easier to pass things through the same sort of techniques as
> "normal" parameters; I couldn't find a way to use hooks that didn't
> just make things uglier.

Hmph. You didn't actually use the same sort of techniques we use for
normal parameters. You invented a new TsrCache registry, which marries
the metadata for planning with the tuplestore itself. That's quite
different from the way we deal with parameters (TsrCache is a misnomer,
BTW; it's not a cache, but the primary source of information for the
planner). And instead of passing parameters to the SPI calls
individually, you invented SPI_register_tuplestore which affects all
subsequent SPI calls.

To recap, this is how normal parameters work:

In the parse stage, you pass a ParserSetupHook function pointer to the
parser. The parser calls the callback, which sets up more hooks in the
ParseState struct: a column-ref hook and/or a param ref hook. The parser
then proceeds to parse the query, and whenever it sees a reference to a
column that it doesn't recognize, or a $n style parameter marker, it
calls the column-ref or param-ref hook.

The column- or param-ref hook can return a Param node, indicating that
the parameter's value will be supplied later, at execution time. The
Param node contains a numeric ID for the parameter, and the type OID and
other information needed to complete the parsing.

At execution time, you pass a ParamListInfo struct to the executor. It
contains values for all the parameters. Alternatively, the values can be
supplied lazily, by providing a param-fetch hook in the ParamListInfo
struct. Whenever the executor needs the value of a parameter, and the
ParamListInfo struct doesn't contain it, it calls the paramFetch hook
which should fill it in ParamListInfo.
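
For reference, the relevant structs look roughly like this (abridged
from src/include/nodes/params.h; treat the details as approximate):

typedef struct ParamExternData
{
    Datum       value;          /* parameter value */
    bool        isnull;         /* is it NULL? */
    uint16      pflags;         /* flag bits, see params.h */
    Oid         ptype;          /* parameter's datatype, or 0 */
} ParamExternData;

typedef struct ParamListInfoData
{
    ParamFetchHook paramFetch;      /* parameter fetch hook, may be NULL */
    void       *paramFetchArg;
    ParserSetupHook parserSetup;    /* parser setup hook, may be NULL */
    void       *parserSetupArg;
    int         numParams;          /* number of ParamExternDatas following */
    ParamExternData params[1];      /* VARIABLE LENGTH ARRAY */
} ParamListInfoData;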

Now, how do we make the tuplestores work similarly? Here's what I think
we should do:

Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
Whenever the parser sees a RangeVar that it doesn't recognize (or
actually, I think it should call it *before* resolving regular tables,
but let's ignore that for now), it calls the p_tableref_hook. It can
return a new RelationParam node (similar to regular Param), which
contains a numeric ID for the table/tuplestore, as well as its tuple
descriptor.

For the execution phase, add a new array of Tuplestorestates to
ParamListInfo. Similar to the existing array of ParamExternalDatas.
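
In rough code terms, that amounts to something like this (nothing here
exists yet; the names and fields are only meant to illustrate the idea):

typedef struct RelationParam
{
    NodeTag     type;
    int         relid;          /* numeric ID assigned to this tuplestore */
    TupleDesc   tupdesc;        /* row type, so column references can be resolved */
} RelationParam;

/* new hook in ParseState, alongside p_paramref_hook */
typedef Node *(*TableRefHook) (ParseState *pstate, RangeVar *rv);

/* plus, in ParamListInfoData, something like
 *     int               numTuplestores;
 *     Tuplestorestate **tuplestores;
 */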

The next question is how to pass the new hooks and tuplestores through
the SPI interface. For prepared plans, the current SPI_prepare_params +
SPI_execute_plan_with_paramlist functions work fine. However, there
doesn't seem to be any way to do one-shot queries with a ParserSetupHook
and ParamListInfo. That seems like an oversight, and would be nice to
address that anyway.

PS. the copy/read/write functions for TuplestoreRelation in the patch
are broken; TuplestoreRelation is not a subclass of Plan. (But if you go
with the approach I'm advocating for, TuplestoreRelation in its current
form will be gone)

- Heikki


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-23 17:03:06
Message-ID: CA+TgmoYeB0iFaHfFMthRb-rD4RoyGZrmPwU9-+k1W4U1BoNeSg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Sep 23, 2014 at 12:46 PM, Heikki Linnakangas
<hlinnakangas(at)vmware(dot)com> wrote:
> On 09/15/2014 05:25 PM, Kevin Grittner wrote:
> Now, how do we make the tuplestores work similarly? Here's what I think we
> should do:
>
> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
> I think it should call it *before* resolving regular tables, but let's
> ignore that for now), it calls the p_tableref_hook. It can return a new
> RelationParam node (similar to regular Param), which contains a numeric ID
> for the table/tuplestore, as well as its tuple descriptor.
>
> For the execution phase, add a new array of Tuplestorestates to
> ParamListInfo. Similar to the existing array of ParamExternalDatas.

I haven't been following this issue closely, but this sounds like a
really nice design.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-23 17:51:58
Message-ID: 15387.1411494718@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Tue, Sep 23, 2014 at 12:46 PM, Heikki Linnakangas
> <hlinnakangas(at)vmware(dot)com> wrote:
>> Now, how do we make the tuplestores work similarly? Here's what I think we
>> should do:
>>
>> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
>> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
>> I think it should call it *before* resolving regular tables, but let's
>> ignore that for now), it calls the p_tableref_hook. It can return a new
>> RelationParam node (similar to regular Param), which contains a numeric ID
>> for the table/tuplestore, as well as its tuple descriptor.
>>
>> For the execution phase, add a new array of Tuplestorestates to
>> ParamListInfo. Similar to the existing array of ParamExternalDatas.

> I haven't been following this issue closely, but this sounds like a
> really nice design.

I'm on board with the parser hooks part of that. I don't especially agree
with the idea of a new sub-structure for ParamListInfo: if you do that you
will need a whole bunch of new boilerplate infrastructure to allocate,
copy, and generally manage that structure, for darn little gain. What I'd
suggest is just saying that some Params might have type INTERNAL with
Datum values that are pointers to tuplestores; then all you need to do is
remember which Param number has been assigned to the particular tuplestore
you want. There is already precedent for that in the recursive CTE code,
IIRC.
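
That is, roughly (untested; paramLI, tsr_paramno and tstore stand in for
whatever the caller already has at hand):

ParamExternData *prm = &paramLI->params[tsr_paramno - 1];   /* paramids are 1-based */

prm->value = PointerGetDatum(tstore);   /* the Tuplestorestate *, smuggled as a Datum */
prm->isnull = false;
prm->pflags = 0;
prm->ptype = INTERNALOID;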

regards, tom lane


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, "Alvaro Herrera" <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-23 19:38:05
Message-ID: 5421CC1D.20704@vmware.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 09/23/2014 08:51 PM, Tom Lane wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> On Tue, Sep 23, 2014 at 12:46 PM, Heikki Linnakangas
>> <hlinnakangas(at)vmware(dot)com> wrote:
>>> Now, how do we make the tuplestores work similarly? Here's what I think we
>>> should do:
>>>
>>> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
>>> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
>>> I think it should call it *before* resolving regular tables, but let's
>>> ignore that for now), it calls the p_tableref_hook. It can return a new
>>> RelationParam node (similar to regular Param), which contains a numeric ID
>>> for the table/tuplestore, as well as its tuple descriptor.
>>>
>>> For the execution phase, add a new array of Tuplestorestates to
>>> ParamListInfo. Similar to the existing array of ParamExternalDatas.
>
>> I haven't been following this issue closely, but this sounds like a
>> really nice design.
>
> I'm on board with the parser hooks part of that. I don't especially agree
> with the idea of a new sub-structure for ParamListInfo: if you do that you
> will need a whole bunch of new boilerplate infrastructure to allocate,
> copy, and generally manage that structure, for darn little gain. What I'd
> suggest is just saying that some Params might have type INTERNAL with
> Datum values that are pointers to tuplestores; then all you need to do is
> remember which Param number has been assigned to the particular tuplestore
> you want. There is already precedent for that in the recursive CTE code,
> IIRC.

Works for me.

- Heikki


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-23 21:22:58
Message-ID: 1411507378.8558.YahooMailNeo@web122303.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Thanks for reviewing this. I will spend some time looking at your
recommendations in detail and seeing what it would take to
implement them, and whether I agree that is better; but I wanted to
point out a couple things regarding the SPI interface where I'm not
sure you realize what's currently being done. I may want to argue
some of the rest if I don't agree after more detailed review; this
is just what jumps out on a first pass.

Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:

> instead of passing parameters to the SPI calls individually, you
> invented SPI_register_tuplestore which affects all subsequent SPI
> calls.

All subsequent SPI calls on that particular SPI connection until it
is closed, except for any tuplestores that are later unregistered.
Nested SPI connections do not automatically inherit the named
tuplestores; whatever code opens an SPI connection would need to
register them for the new context, if desired. This seemed to me
to provide minimal disruption to the existing SPI callers who might
want to use this.
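
Schematically, the intended calling pattern is like this (argument
details elided; SPI_register_tuplestore is from the patch, the rest is
the existing SPI API):

SPI_connect();
SPI_register_tuplestore(tsr);   /* named tuplestore now visible to this connection */
SPI_execute("SELECT count(*) FROM newtab", true, 0);
/* ... any further SPI calls here can also reference "newtab" ... */
SPI_finish();                   /* registrations do not outlive the connection */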

> The next question is how to pass the new hooks and tuplestores
> through the SPI interface. For prepared plans, the current
> SPI_prepare_params + SPI_execute_plan_with_paramlist functions
> work fine.

They work fine, I guess, in the *one* place they are used.
SPI_prepare_params() is called exactly *once* from plpgsql's
pl_exec.c, and SPI_execute_plan_with_paramlist() is called twice
from the same file. There are no other calls to either from
anywhere else in the code base.

> However, there doesn't seem to be any way to do one-shot queries
> with a ParserSetupHook and ParamListInfo. That seems like an
> oversight, and would be nice to address that anyway.

There are dozens of SPI_prepare* and SPI_exec* calls scattered all
over the backend, pl, and contrib code which appear to get along
fine without that. Partly it may be because it involves something
of a modularity violation; the comment for the function used for
this call (where it *is* used) says:

/*
 * plpgsql_parser_setup        set up parser hooks for dynamic parameters
 *
 * Note: this routine, and the hook functions it prepares for, are logically
 * part of plpgsql parsing.  But they actually run during function execution,
 * when we are ready to evaluate a SQL query or expression that has not
 * previously been parsed and planned.
 */

Can you clarify what benefit you see to modifying the SPI API the
way you suggest, and what impact it might have on existing calling
code?

> PS. the copy/read/write functions for TuplestoreRelation in the
> patch are broken; TuplestoreRelation is not a subclass of Plan.

I'm not sure what you mean by "broken" -- could you elaborate? It
is, in a lot of ways, parallel to the CommonTableExpr defined a
little above it in the parsenodes.h file -- a relation which can
only be referenced by unqualified name. It is after CTEs in
search order, and before anything else. Having such a node in the
tree after parse analysis allows a number of special cases to be
handled pretty much the same as they are for CTEs, which seems like
a good thing to me.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-24 13:41:14
Message-ID: 5422C9FA.3080708@vmware.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 09/24/2014 12:22 AM, Kevin Grittner wrote:
> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
>
>> instead of passing parameters to the SPI calls individually, you
>> invented SPI_register_tuplestore which affects all subsequent SPI
>> calls.
>
> All subsequent SPI calls on that particular SPI connection until it
> is closed, except for any tuplestores that are later unregistered.
> Nested SPI connections do not automatically inherit the named
> tuplestores; whatever code opens an SPI connection would need to
> register them for the new context, if desired. This seemed to me
> to provide minimal disruption to the existing SPI callers who might
> want to use this.

Yeah, I got that. And note that I'm not saying that's necessarily a bad
design per se - it's just that it's different from the way parameters
work, and I don't like it for that reason.

You could imagine doing the same for parameters; have a
SPI_register_param() function that you could use to register parameter
types, and the parameters could then be referenced in any SPI calls that
follow (within the same connection). But as the code stands, SPI is
stateless wrt. parameters, and tuplestores or relation parameters
should follow the lead.

>> The next question is how to pass the new hooks and tuplestores
>> through the SPI interface. For prepared plans, the current
>> SPI_prepare_params + SPI_execute_plan_with_paramlist functions
>> work fine.
>> However, there doesn't seem to be any way to do one-shot queries
>> with a ParserSetupHook and ParamListInfo. That seems like an
>> oversight, and would be nice to address that anyway.
>
> There are dozens of SPI_prepare* and SPI_exec* calls scattered all
> over the backend, pl, and contrib code which appear to get along
> fine without that.

Yeah. None of the current callers have apparently needed that
functionality. But it's not hard to imagine cases where it would be
needed. For example, imagine a variant of EXECUTE '...' where all the
PL/pgSQL variables can be used in the query, like they can in static
queries:

declare
    myvar int4;
    tablename text;
begin
    ...
    EXECUTE 'insert into ' || tablename || ' values (myvar)';
end;

Currently you have to use $1 in place of the variable name, and pass the
variable's current value with USING. If you wanted to make the above
work, you would need a variant of SPI_execute that can run a one-shot
query, with a parser-hook.

Whether you want to use a parser hook is orthogonal to whether you want
to run a one-shot query or prepare it and keep the plan.
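
Such a one-shot variant might look something like the following -- a
purely hypothetical signature, just to show the shape I mean:

int
SPI_execute_with_parser_setup(const char *src,
                              ParserSetupHook parserSetup,
                              void *parserSetupArg,
                              ParamListInfo params,
                              bool read_only,
                              long tcount);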

> Partly it may be because it involves something
> of a modularity violation; the comment for the function used for
> this call (where it *is* used) says:
>
> /*
>  * plpgsql_parser_setup        set up parser hooks for dynamic parameters
>  *
>  * Note: this routine, and the hook functions it prepares for, are logically
>  * part of plpgsql parsing.  But they actually run during function execution,
>  * when we are ready to evaluate a SQL query or expression that has not
>  * previously been parsed and planned.
>  */

No, that's something completely different. The comment points out that
even though plpgsql_parser_setup is in pl_comp.c, which contains code
related to compiling a PL/pgSQL function, it's actually called at
execution time, not compilation time.

> Can you clarify what benefit you see to modifying the SPI API the
> way you suggest, and what impact it might have on existing calling
> code?

Well, we'll have to keep the existing functions anyway, to avoid
breaking third-party code that uses them, so there would be no impact on
existing code. The benefit would be that you could use the parser hooks
and the ParamListInfo struct even when doing a one-shot query.

Or perhaps you could just use SPI_prepare_params +
SPI_execute_plan_with_paramlist even for one-shot queries. There is some
overhead when a SPIPlan has to be allocated, but maybe it's not big
enough to worry about. That would be worth measuring before adding new
functions to the SPI.

>> PS. the copy/read/write functions for TuplestoreRelation in the
>> patch are broken; TuplestoreRelation is not a subclass of Plan.
>
> I'm not sure what you mean by "broken" -- could you elaborate?

Sure:

> + /*
> + * _copyTuplestoreRelation
> + */
> + static TuplestoreRelation *
> + _copyTuplestoreRelation(const TuplestoreRelation *from)
> + {
> +     TuplestoreRelation *newnode = makeNode(TuplestoreRelation);
> +
> +     /*
> +      * copy node superclass fields
> +      */
> +     CopyPlanFields((const Plan *) from, (Plan *) newnode);
> +
> +     /*
> +      * copy remainder of node
> +      */
> +     COPY_STRING_FIELD(refname);
> +
> +     return newnode;
> + }

You cast the TuplestoreRelation to Plan, and pass it to CopyPlanFields.
That will crash, because TuplestoreRelation is nothing like a Plan:

> + /*
> + * TuplestoreRelation -
> + * synthetic node for tuplestore passed in to the query by name
> + *
> + * This is initially added to support trigger transition tables, but may find
> + * other uses, so we try to keep it generic.
> + */
> + typedef struct TuplestoreRelation
> + {
> +     NodeTag     type;
> +     char       *refname;
> + } TuplestoreRelation;

The corresponding code in outfuncs.c is similarly broken.
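
Presumably all the copy function really needs is something along these
lines:

static TuplestoreRelation *
_copyTuplestoreRelation(const TuplestoreRelation *from)
{
    TuplestoreRelation *newnode = makeNode(TuplestoreRelation);

    COPY_STRING_FIELD(refname);

    return newnode;
}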

- Heikki


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-25 01:58:32
Message-ID: 542376C8.5070000@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 08/28/2014 05:03 AM, Kevin Grittner wrote:
> I don't have to squint that hard -- I've always been comfortable
> with the definition of a table as a relation variable, and it's not
> too big a stretch to expand that to a tuplestore. ;-) In fact, I
> will be surprised if someone doesn't latch onto this to create a
> new "declared temporary table" that only exists within the scope of
> a compound statement (i.e., a BEGIN/END block). You would DECLARE
> them just like you would a scalar variable in a PL, and they would
> have the same scope.
>
> I'll take a look at doing this in the next couple days, and see
> whether doing it that way is as easy as it seems on the face of it.

Oracle's TABLE variables in PL/SQL are quite similar; it might be worth
observing how they work for comparison.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-25 02:05:10
Message-ID: 54237856.3070308@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 09/15/2014 10:25 PM, Kevin Grittner wrote:
> I broke out the changes from the previous patch in multiple commits
> in my repository on github:

*Thankyou*

That gives me the incentive to pull it and test it.

A nice patch series published in a git repo is so much easier to work
with than a giant squashed patch as an attachment.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-25 15:54:06
Message-ID: 1411660446.94544.YahooMailNeo@web122302.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:

> You cast the TuplestoreRelation to Plan, and pass it to CopyPlanFields.
> That will crash, because TuplestoreRelation is nothing like a Plan:

Oops. That's a copy/paste error I should have noticed. Fixed,
even though the node type might be going away. Since all of this
seems to be working very well from a user point of view, I'm going
to try to generate a lot more regression tests against the existing
code before taking another run at the API, to make sure that things
don't break in the refactoring.

I didn't hit the copy/out bugs in testing so far -- any suggestions
on a test that would exercise this code? (I'm probably missing
something obvious.)

Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:

> On 09/15/2014 10:25 PM, Kevin Grittner wrote:
>
>> I broke out the changes from the previous patch in multiple commits
>> in my repository on github:
>
> *Thankyou*

> A nice patch series published in a git repo is so much easier to work
> with than a giant squashed patch as an attachment.

I have fixed the bug reported by Heikki; be sure to grab that.

I have been merging in changes to master as I go, so that bit rot
doesn't accumulate, but I don't squash or rebase; hopefully that
style works for you.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-26 00:39:48
Message-ID: 5424B5D4.5020002@2ndquadrant.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 09/25/2014 11:54 PM, Kevin Grittner wrote:
> I have fixed the bug reported by Heikki; be sure to grab that.

Will do.

> I have been merging in changes to master as I go, so that bit rot
> doesn't accumulate, but I don't squash or rebase; hopefully that
> style works for you.

IMO it only really matters before the final push to master; before then
it's all just a matter of how you prefer to work.

I'm a big fan of rebasing my feature branches as I go:

git tag before-rebase
git pull --rebase
... do any merges during rebase ...
git tag -d before-rebase

For bug fixes I tend to commit them separately, then when I rebase I
squash them into the relevant patch. Git's "fixup! " commits are really
handy for this; if I have a commit:

Add widget support

Everyone wants more widgets.

and want to fix an issue in that commit I can just commit

fixup! Add widget support

It's spelled widget not wodget

and when I "git rebase --autosquash master" they get automatically
squashed into the relevant changeset. (I usually run with the config
rebase.autosquash enabled so this happens during my rebase pulls on top
of master).

I got in the habit while working on RLS, to keep me sane with that
patchset, and find it works well for me.

However, everyone has a different work style. Colour me happy if it's in
git at all.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-09-26 12:21:28
Message-ID: 54255A48.6040704@vmware.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 09/25/2014 06:54 PM, Kevin Grittner wrote:
> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
>
>> You cast the TuplestoreRelation to Plan, and pass it to CopyPlanFields.
>> That will crash, because TuplestoreRelation is nothing like a Plan:
>
> Oops. That's a copy/paste error I should have noticed. Fixed,
> even though the node type might be going away. Since all of this
> seems to be working very well from a user point of view, I'm going
> to try to generate a lot more regression tests against the existing
> code before taking another run at the API, to make sure that things
> don't break in the refactoring.
>
> I didn't hit the copy/out bugs in testing so far -- any suggestions
> on a test that would exercise this code? (I'm probably missing
> something obvious.)

There's some debugging code in tcop/postgres.c, search for
COPY_PARSE_PLAN_TREES. It won't catch everything, but probably would've
caught this one.
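
I believe you enable it by defining the symbol, e.g. by uncommenting it
in src/include/pg_config_manual.h or adding it to CPPFLAGS:

#define COPY_PARSE_PLAN_TREES   /* pass all parse/plan trees through copyObject() */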

- Heikki


From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-10-08 22:36:12
Message-ID: CABRT9RB4kUj7Hkgpqiik4Q1F_ZoHJ-X7M-+OeYH1L6CR5EFx7g@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Sep 4, 2014 at 12:14 AM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>> Did I miss something?
>
> Apparently. I did a search on the document and counted and got 101
> occurrences of "transition table".
> | A transient table is a named table that may come into existence
> | implicitly during the evaluation of a <query expression> or the
> | execution of a trigger.

D'oh, I was reading only the sections about triggers. You are correct.

Anyway, I tried out the latest from your GitHub branch and it seems
most of my concerns no longer apply to the current version, as
transition tables are now local to the trigger function. Thanks.

----
Not sure if you're looking for feedback on this level yet, but I tried
breaking it and found that transition tuplestores don't work with
cursors.

Probably not a valid use case once we have some other way to pass
tuplestores between functions. I don't know if it could work anyway,
as cursors may outlive the trigger call. But in that case, a better
error message is in order.

create table test1(i int);
create or replace function test1trg() returns trigger language plpgsql as $$
declare
    curs cursor for select * from newtab;
    r record;
begin
    for r in curs loop
    end loop;
    return new;
end;$$;
create trigger test1trg after insert on test1
    referencing new table as newtab execute procedure test1trg();
insert into test1 values(1);
ERROR: executor could not find named tuplestore "newtab"
CONTEXT: PL/pgSQL function test1trg() line 6 at FOR over cursor

----
I still see a chance of introducing security problems with SECURITY
DEFINER trigger functions. An attacker can shadow table names
queried by such a function and inject arbitrary data into it. Similar
to search_path vulnerabilities, but there are ways to avoid that.

Perhaps there should be a restriction when using REFERENCING and the
function is SECURITY DEFINER: require that the trigger definer (if not
superuser) matches the function owner?

Regards,
Marti


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-10-22 20:10:26
Message-ID: CA+TgmoYD9BbeYJpCsZyOyWA_LKUEsLv7LuW9BhspKT1h+vKUqg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Sep 23, 2014 at 1:51 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
>>> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
>>> I think it should call it *before* resolving regular tables, but let's
>>> ignore that for now), it calls the p_tableref_hook. It can return a new
>>> RelationParam node (similar to regular Param), which contains a numeric ID
>>> for the table/tuplestore, as well as its tuple descriptor.
>>>
>>> For the execution phase, add a new array of Tuplestorestates to
>>> ParamListInfo. Similar to the existing array of ParamExternalDatas.
>
>> I haven't been following this issue closely, but this sounds like a
>> really nice design.
>
> I'm on board with the parser hooks part of that. I don't especially agree
> with the idea of a new sub-structure for ParamListInfo: if you do that you
> will need a whole bunch of new boilerplate infrastructure to allocate,
> copy, and generally manage that structure, for darn little gain. What I'd
> suggest is just saying that some Params might have type INTERNAL with
> Datum values that are pointers to tuplestores; then all you need to do is
> remember which Param number has been assigned to the particular tuplestore
> you want. There is already precedent for that in the recursive CTE code,
> IIRC.

Studying this proposed design a bit further, I am a little fuzzy on
what is supposed to happen in this design during parse analysis. In
Kevin's current code, after checking whether a RangeVar might be a CTE
reference and before deciding that it must be a table reference,
scanNameSpaceForTsr() is used to check whether there's a tuplestore by
that name and, if so, then we insert a RTE with type RTE_TUPLESTORE
which references the tuplestore by name. To me, the obvious thing to
do here seems to be to instead call p_tableref_hook and let it return
a suitable RangeTblRef (or NULL if it wishes to take no action). In
the case where the hook wants to redirect the use of that name to a
tuplestore, it can add a range-table entry of type RTE_TUPLESTORE, and
that entry can store a parameter-index rather than, as in the current
design, a name. But then where does Heikki's notion of a
RelationParam get used?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, "Alvaro Herrera" <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-10-22 21:04:35
Message-ID: 54481BE3.4030808@vmware.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 10/22/2014 11:10 PM, Robert Haas wrote:
> On Tue, Sep 23, 2014 at 1:51 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>>> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
>>>> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
>>>> I think it should call it *before* resolving regular tables, but let's
>>>> ignore that for now), it calls the p_tableref_hook. It can return a new
>>>> RelationParam node (similar to regular Param), which contains a numeric ID
>>>> for the table/tuplestore, as well as its tuple descriptor.
>>>>
>>>> For the execution phase, add a new array of Tuplestorestates to
>>>> ParamListInfo. Similar to the existing array of ParamExternalDatas.
>>
>>> I haven't been following this issue closely, but this sounds like a
>>> really nice design.
>>
>> I'm on board with the parser hooks part of that. I don't especially agree
>> with the idea of a new sub-structure for ParamListInfo: if you do that you
>> will need a whole bunch of new boilerplate infrastructure to allocate,
>> copy, and generally manage that structure, for darn little gain. What I'd
>> suggest is just saying that some Params might have type INTERNAL with
>> Datum values that are pointers to tuplestores; then all you need to do is
>> remember which Param number has been assigned to the particular tuplestore
>> you want. There is already precedent for that in the recursive CTE code,
>> IIRC.
>
> Studying this proposed design a bit further, I am a little fuzzy on
> what is supposed to happen in this design during parse analysis. In
> Kevin's current code, after checking whether a RangeVar might be a CTE
> reference and before deciding that it must be a table reference,
> scanNameSpaceForTsr() is used to check whether there's a tuplestore by
> that name and, if so, then we insert a RTE with type RTE_TUPLESTORE
> which references the tuplestore by name. To me, the obvious thing to
> do here seems to be to instead call p_tableref_hook and let it return
> a suitable RangeTblRef (or NULL if it wishes to take no action). In
> the case where the hook wants to redirect the use of that name to a
> tuplestore, it can add a range-table entry of type RTE_TUPLESTORE, and
> that entry can store a parameter-index rather than, as in the current
> design, a name. But then where does Heikki's notion of a
> RelationParam get used?

I was thinking that the hook would return a RelationParam. When parse
analysis sees the returned RelationParam, it adds an entry for that to
the range table, and creates the RangeTblRef for it. The way you
describe it works too, but manipulating the range table directly in the
hook seems a bit too low-level.

- Heikki


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, "Alvaro Herrera" <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-10-22 21:29:02
Message-ID: 29949.1414013342@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> writes:
> On 10/22/2014 11:10 PM, Robert Haas wrote:
>> Studying this proposed design a bit further, I am a little fuzzy on
>> what is supposed to happen in this design during parse analysis. In
>> Kevin's current code, after checking whether a RangeVar might be a CTE
>> reference and before deciding that it must be a table reference,
>> scanNameSpaceForTsr() is used to check whether there's a tuplestore by
>> that name and, if so, then we insert a RTE with type RTE_TUPLESTORE
>> which references the tuplestore by name. To me, the obvious thing to
>> do here seems to be to instead call p_tableref_hook and let it return
>> a suitable RangeTblRef (or NULL if it wishes to take no action). In
>> the case where the hook wants to redirect the use of that name to a
>> tuplestore, it can add a range-table entry of type RTE_TUPLESTORE, and
>> that entry can store a parameter-index rather than, as in the current
>> design, a name. But then where does Heikki's notion of a
>> RelationParam get used?

> I was thinking that the hook would return a RelationParam. When parse
> analysis sees the returned RelationParam, it adds an entry for that to
> the range table, and creates the RangeTblRef for it. The way you
> describe it works too, but manipulating the range table directly in the
> hook seems a bit too low-level.

The problem with that idea is that then the API for the hook has to cover
every possible sort of RTE that hooks might wish to create; I see no
reason to restrict them to creating just one kind. I agree that the hook
should avoid *physically* manipulating the rangetable, but it seems
reasonable to expect that it can call one of the addRangeTableEntryXXX
functions provided by parse_relation.c, and then return a RangeTblEntry*
gotten that way. So hooks would have an API more or less equivalent
to, eg, transformRangeFunction().
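
In other words, the hook signature would presumably be something like
(illustrative only):

typedef RangeTblEntry *(*ParseTableRefHook) (ParseState *pstate, RangeVar *relation);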

regards, tom lane


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-10-23 15:19:13
Message-ID: CA+TgmoZYahEBz-2OXf4baEEy907QRhG0Y6c74o14w2g7nUG+FQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Oct 22, 2014 at 5:29 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I was thinking that the hook would return a RelationParam. When parse
>> analysis sees the returned RelationParam, it adds an entry for that to
>> the range table, and creates the RangeTblRef for it. The way you
>> describe it works too, but manipulating the range table directly in the
>> hook seems a bit too low-level.
>
> The problem with that idea is that then the API for the hook has to cover
> every possible sort of RTE that hooks might wish to create; I see no
> reason to restrict them to creating just one kind. I agree that the hook
> should avoid *physically* manipulating the rangetable, but it seems
> reasonable to expect that it can call one of the addRangeTableEntryXXX
> functions provided by parse_relation.c, and then return a RangeTblEntry*
> gotten that way. So hooks would have an API more or less equivalent
> to, eg, transformRangeFunction().

Right, that reasoning makes sense to me. Unlike regular parameters,
where the existence of the parameter is known at parse time but the
value isn't available until bind time, we would be creating a
RelationParam node and then, literally immediately, turning it into a
range-table entry. That seems like unnecessary complexity, and it's
also something we can invent later if a more compelling use case
emerges.

So what I'm imagining now is:

1. During parse analysis, p_tableref_hook gets control and calls
addRangeTableEntryForTuplestore(), creating an RTE of type
RTE_TUPLESTORE. The RTE stores an integer parameter-index.

2. Path generation doesn't need to do anything very exciting; it just
generates a Path node of type T_TuplestoreScan. The RTE is still
available, so the path itself doesn't need to know which tuplestore
we're referencing, because that information is present in the RTE.

3. At plan generation time, we look up the RTE for the path and
extract the parameter index, which is what gets stored in the
TuplestoreScan node.

4. At executor initialization time, we use the parameter index in the
TuplestoreScan to index into the EState's es_param_list_info and
retrieve the tuplestore.
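
In code, step 4 might look roughly like this inside the scan node's
ExecInit function (untested; node->tsrParam and scanstate->tuplestore
are placeholder names, since none of this exists yet):

ParamExternData *prm;

prm = &(estate->es_param_list_info->params[node->tsrParam - 1]);
if (prm->isnull || prm->ptype != INTERNALOID)
    elog(ERROR, "no tuplestore found for parameter %d", node->tsrParam);
scanstate->tuplestore = (Tuplestorestate *) DatumGetPointer(prm->value);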

This means that Kevin's notion of a Tsrcache goes away completely,
which means a lot of the function-signature changes in his last
version of the patch can be reverted. The EState doesn't need an
es_tsrcache either. The mapping from name (OLD/NEW) to parameter
index happens inside the p_tableref_hook, and after that we use integer
indices throughout. All that seems good.

One thing that's not too clear to me is how we're imagining that the
TuplestoreScan will get its tupledesc. Right now the Tsrcache stores
essentially (tupledesc, tuplestore), but I understood the suggestions
above to imply that the ParamListInfo should point only to the
tuplestore, not to the tupledesc. I *think* the information we need
to reconstruct the TupleDesc is mostly present in the RTE; Kevin
reused the ctecoltypes, ctecoltypmods, and ctecolcollations fields to
store that information, which (a) probably requires some thought about
renaming those fields but (b) seems like it ought to be enough to
construct a viable TupleDesc. It seems that for CTEs, we somehow
engineer things so that the RecursiveUnion's target-list is such that
we can apply ExecAssignResultTypeFromTL() to it and get the tupledesc
that matches its Tuplestorestate, but I'm kinda unclear about what
makes that work and whether we can use a similar trick here.
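
If those lists are enough, one way to rebuild the descriptor might be
something like this (completely untested sketch, using the reused cte*
field names as they stand in the patch today):

static TupleDesc
tupdesc_from_tuplestore_rte(RangeTblEntry *rte)
{
    TupleDesc   tupdesc;
    AttrNumber  attno = 0;
    ListCell   *lct,
               *lcm,
               *lcc;

    tupdesc = CreateTemplateTupleDesc(list_length(rte->ctecoltypes), false);

    forthree(lct, rte->ctecoltypes, lcm, rte->ctecoltypmods, lcc, rte->ctecolcollations)
    {
        attno++;
        TupleDescInitEntry(tupdesc, attno,
                           strVal(list_nth(rte->eref->colnames, attno - 1)),
                           lfirst_oid(lct), lfirst_int(lcm), 0);
        TupleDescInitEntryCollation(tupdesc, attno, lfirst_oid(lcc));
    }

    return tupdesc;
}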

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2015-01-22 13:39:42
Message-ID: CA+TgmobHV=TF_MOnrQJk+icZh6arWHDUSQaAG=pTU-HYyxGTJA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Oct 23, 2014 at 11:19 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> So what I'm imagining now is:
>
> 1. During parse analysis, p_tableref_hook gets control and calls
> addRangeTableEntryForTuplestore(), creating an RTE of type
> RTE_TUPLESTORE. The RTE stores an integer parameter-index.
>
> 2. Path generation doesn't need to do anything very exciting; it just
> generates a Path node of type T_TuplestoreScan. The RTE is still
> available, so the path itself doesn't need to know which tuplestore
> we're referencing, because that information is present in the RTE.
>
> 3. At plan generation time, we look up the RTE for the path and
> extract the parameter index, which is what gets stored in the
> TuplestoreScan node.
>
> 4. At executor initialization time, we use the parameter index in the
> TuplestoreScan to index into the EState's es_param_list_info and
> retrieve the tuplestore.

I spent some time poking at this yesterday, based on commit
5060b9352b0d0301ffb002355f0572e93f8b05fe from
https://github.com/kgrittn/postgres.git

Here's where I got stuck:

The plpgsql_parser_setup() callback sets pstate->p_ref_hook_state =
(void *) expr, so if we add p_tableref_hook as an additional callback,
that's what it has to work with to find the information needed to
generate a RangeTblEntry. That is a PLpgSQL_expr, and it contains a
pointer to the PLpgSQL_function, which is created when the function is
compiled, which seems good, but the information we need is not there.
Specifically, we need to know the names the user picked for the old
and new tuplestores (tgoldtable and tgnewtable) and we need to know
what the tuple descriptor should be, and the PLpgSQL_function hasn't
got it.

It does not seem impossible to fix that, but I'm not sure it's safe.
do_compile() has the FunctionCallInfo, so from there it can get at the
TriggerData and the Trigger. The trigger has got tgoldtable and
tgnewtable, and the TriggerData has got tg_relation, so everything we
need is there. We could add pointers to the relevant stuff to the
PLpgSQL_function, and then the parser callbacks could get at it.
However, I'm not sure that's really OK -- presumably, tg_relation is
going to be valid only during the initial compile. If somebody came
back and looked at that PLpgSQL_function again later, and tried to
follow that pointer, bad things would happen. In practice maybe it
would be OK because the only likely reason to come back and look at
the PLpgSQL_function again is because we're recompiling, and at that
point we'd have a new relation pointer to copy in there, and so it
would probably be OK. But that feels mighty ugly.

Another idea is to change what actually gets passed to the parser
callback. Right now we just pass the PLpgSQL_expr. If we created a
new structure that contained that plus the PLpgSQL_execstate, we'd be
in fine shape. But this sort of gets at the root of the problem here:
with variables, the parser callback doesn't return the actual *value*,
it returns a Param node that will later, at execution time, be looked
up to find the actual value. With relations, we're sort of doing the
same thing - the tuplestore RTE doesn't need to contain the actual
data, just the tuple descriptor. But the data structures from which
we can get that information also contain the actual execution-time
information, so passing them into parse-time callbacks seems like it
might be, if nothing else, a recipe for future bugs.
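
One possible shape for that new structure (the name is invented):

typedef struct PLpgSQL_parse_context
{
    PLpgSQL_expr       *expr;       /* what p_ref_hook_state points to today */
    PLpgSQL_execstate  *estate;     /* would let the hooks find the tuplestores */
} PLpgSQL_parse_context;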

Any suggestions?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2015-01-22 14:41:42
Message-ID: 346436483.3047971.1421937702109.JavaMail.yahoo@jws10044.mail.ne1.yahoo.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> Another idea is to change what actually gets passed to the parser
> callback. Right now we just pass the PLpgSQL_expr. If we created a
> new structure that contained that plus the PLpgSQL_execstate, we'd be
> in fine shape. But this sort of gets at the root of the problem here:
> with variables, the parser callback doesn't return the actual *value*,
> it returns a Param node that will later, at execution time, be looked
> up to find the actual value. With relations, we're sort of doing the
> same thing - the tuplestore RTE doesn't need to contain the actual
> data, just the tuple descriptor. But the data structures from which
> we can get that information also contain the actual execution-time
> information, so passing them into parse-time callbacks seems like it
> might be, if nothing else, a recipe for future bugs.

That was, of course, why this patch evolved to using this structure
during parsing:

typedef struct TsrmdData
{
    char       *name;       /* name used to identify the tuplestore */
    TupleDesc   tupdesc;    /* description of result rows */
} TsrmdData;

typedef TsrmdData *Tsrmd;

... and this during execution:

typedef struct TsrData
{
    TsrmdData   md;
    Tuplestorestate *tstate;    /* data (or tids) */
} TsrData;

typedef TsrData *Tsr;
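
Just to illustrate how the two levels are meant to relate (variable
names invented):

TsrData     tsr;

tsr.md.name = "newtab";         /* name queries will use to reference the rows */
tsr.md.tupdesc = trig_tupdesc;  /* row type, taken from the trigger's relation */
tsr.tstate = new_rows;          /* tuplestore of the captured delta rows */

/*
 * Parse analysis and planning only ever see &tsr.md (a Tsrmd);
 * the executor gets the whole thing (a Tsr) so it can read the rows.
 */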

The big problem, as I see it, is how to deliver these to where they
are needed. I didn't think it was that hard to do with the parser
hook; it's what to do to get the execution time structure to where
it's needed that I can't figure out. Passing it with the
parameters is tricky because we're often passing a NULL for the
reference to the parameter list when we need these. Trying to coax
the executor to pass in a parameter list when there are no
parameters, just these ephemeral relations, seems very tricky, and
all the solutions I have tried (other than the one Heikki and others
have objected to) seem very fragile.

In short, the only solution which I've been able to come up with
that works (and seems to me solid enough to commit) is the one that
Heikki, Tom, and Robert seem to think should be made more like
parameter handling; and every attempt at handling these relations
like parameters seems to me too fragile for me to feel it is worthy
of commit.

We're really down to the wire on getting this feature into 9.5; and
we're way past what was initially my goal, which was to build on
this to get some incremental maintenance of (some types of simple)
materialized views into 9.5. IMO we need to be able to build up
tuplestores and easily reference them from within complex queries
to create any sane and efficient implementation of incremental
maintenance of materialized views. This patch was mainly intended
to make progress on the MV front, with an incidental benefit of
providing a standard feature that I have seen requested a few times.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: David Fetter <david(at)fetter(dot)org>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-05-13 18:02:37
Message-ID: 20160513180237.GC29195@fetter.org
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Jan 22, 2015 at 02:41:42PM +0000, Kevin Grittner wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> > Another idea is to change what actually gets passed to the parser
> > callback. Right now we just pass the PLpgSQL_expr. If we created a
> > new structure that contained that plus the PLpgSQL_execstate, we'd be
> > in fine shape. But this sort of gets at the root of the problem here:
> > with variables, the parser callback doesn't return the actual *value*,
> > it returns a Param node that will later, at execution time, be looked
> > up to find the actual value. With relations, we're sort of doing the
> > same thing - the tuplestore RTE doesn't need to contain the actual
> > data, just the tuple descriptor. But the data structures from which
> > we can get that information also contain the actual execution-time
> > information, so passing them into parse-time callbacks seems like it
> > might be, if nothing else, a recipe for future bugs.
>
> That was, of course, why this patch evolved to using this structure
> during parsing:
>
> typedef struct TsrmdData
> {
>     char       *name;       /* name used to identify the tuplestore */
>     TupleDesc   tupdesc;    /* description of result rows */
> } TsrmdData;
>
> typedef TsrmdData *Tsrmd;
>
> ... and this during execution:
>
> typedef struct TsrData
> {
>     TsrmdData   md;
>     Tuplestorestate *tstate;    /* data (or tids) */
> } TsrData;
>
> typedef TsrData *Tsr;
>
> The big problem, as I see it, is how to deliver these to where they
> are needed. I didn't think it was that hard to do with the parser
> hook; it's what to do to get the execution time structure to where
> it's needed that I can't figure out. Passing it with the
> parameters is tricky because we're often passing a NULL for the
> reference to the parameter list when we need these. Trying to coax
> the executor to pass in a parameter list when there are no
> parameters, just these ephemeral relations, seems very tricky and
> all solutions I have tried (other than the one Heikki and others
> have objected to) very fragile.
>
> In short, the only solution which I've been able to come up with
> that works (and seems to me solid enough to commit) is the one that
> Heikki, Tom, and Robert seem to think should be made more like
> parameter handling; and every attempt at handling these relations
> like parameters seems to me too fragile for me to feel it is worthy
> of commit.
>
> We're really down to the wire on getting this feature into 9.5; and
> we're way past what was initially my goal, which was to build on
> this to get some incremental maintenance of (some types of simple)
> materialized views into 9.5. IMO we need to be able to build up
> tuplestores and easily reference them from within complex queries
> to create any sane and efficient implementation of incremental
> maintenance of materialized views. This patch was mainly intended
> to make progress on the MV front, with an incidental benefit of
> providing a standard feature that I have seen requested a few times.

[almost 17 months later]

It seems like now is getting close to the time to get this into
master. The patch might have suffered from some bit rot, but the
design so far seems sound.

What say?

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-05-13 18:37:28
Message-ID: CACjxUsOb16-K4qt48svEe_FVk0Z8YSA+dfjhkynL36qj43kE1g@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, May 13, 2016 at 1:02 PM, David Fetter <david(at)fetter(dot)org> wrote:
> On Thu, Jan 22, 2015 at 02:41:42PM +0000, Kevin Grittner wrote:

>> [ideas on how to pass around references to ephemeral relations]
>
> [almost 17 months later]
>
> It seems like now is getting close to the time to get this into
> master. The patch might have suffered from some bit rot, but the
> design so far seems sound.
>
> What say?

I had a talk with Tom in Brussels about this. As I understood it,
he wasn't too keen on the suggestion by Heikki (vaguely
sorta-endorsed by Robert) of passing ephemeral named relations such
as these tuplestores around in the structures currently used for
parameter values. He intuitively foresaw the types of problems I
had run into trying to use the same structure to pass a relation
(with structure and rows and columns of data) as is used to pass,
say, an integer.

After discussing a while, he suggested that this patch should be
looked at as an opportunity to refactor the existing handling of
the data used by AFTER triggers. He pointed out that the existing
technique is unbounded in RAM use, with no ability to spill to
temporary files, and regularly generates complaints. This patch is
putting all that same data into a pair of tuplestores (for old and
new row versions) that would spill to disk as needed and would
likely perform better.

I think that still leaves the question open of how best to pass
around information about ephemeral relations. It seems to me that
threads about other features have brushed up against similar
needs, and we should consider those in conjunction with this to
make sure we solve the problem once in a sufficiently general way
to cover all the needs. (For example, the "asynchronous and
vectorized execution" brushes against related concepts, and I seem
to recall others.)

Unfortunately for those eager to have incremental maintenance of
materialized views (for which this patch was a milestone on the
way), Tom's suggestions seem to put additional work on that path.
When I pointed that out, he pointed out that doing features well
can take a lot of time. I think that was about the time the next
round of drinks arrived and we decided we'd better not try to take
the discussion much further that night. ;-)

I welcome any other thoughts on this, particularly from Tom (and
*especially* if I've misrepresented his position in any way!).
Perhaps we can get a design we all like and split the work so that
the refactoring Tom wants and the feature this patch is intended to
implement can get done in the next CF or two, and some initial
work on IMMV can still make it into the next development cycle,
building on this.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-07-07 22:07:27
Message-ID: CA+TgmoYH1TYRjHy7HrJ5RdX_kPpfh150KXJA3Gqp6BXBNVFOgQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, May 13, 2016 at 2:37 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> On Fri, May 13, 2016 at 1:02 PM, David Fetter <david(at)fetter(dot)org> wrote:
>> On Thu, Jan 22, 2015 at 02:41:42PM +0000, Kevin Grittner wrote:
>
>>> [ideas on how to pass around references to ephemeral relations]
>>
>> [almost 17 months later]
>>
>> It seems like now is getting close to the time to get this into
>> master. The patch might have suffered from some bit rot, but the
>> design so far seems sound.
>>
>> What say?
>
> I had a talk with Tom in Brussels about this. As I understood it,
> he wasn't too keen on the suggestion by Heikki (vaguely
> sorta-endorsed by Robert) of passing ephemeral named relations such
> as these tuplestores around in the structures currently used for
> parameter values. He intuitively foresaw the types of problems I
> had run into trying to use the same structure to pass a relation
> (with structure and rows and columns of data) as is used to pass,
> say, an integer.

See, the thing that disappoints me about this is that I think we were
pretty close to having the ParamListInfo-based approach working. The
thing I liked about that approach is that we already know that any
place where you can provide parameters for a query, there will also be
an opportunity to provide a ParamListInfo. And I think that
parameterizing a query by an ephemeral table is conceptually similar
to parameterizing it by a scalar value. If we invent a new mechanism,
it's got to reach all of those same places in the code.

One other comment that I would make here is that I think that it's
important, however we pass the data, that the scope of the tuplestores
is firmly lexical and not dynamic. We need to make sure that we don't
set some sort of context via global variables that might get used for
some query other than the one to which it's intended to apply.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-08-31 20:24:52
Message-ID: CACjxUsN+O-jT4XFU_WyAyzjZyKoxsnxT-ZJuqYQQaHVJsQnQ2g@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

I have merged in the changes since v4 (a year and a half ago) and
cured all bit-rot I found, to get the attached v5 which runs `make
check-world` without problem -- including the tests added for this
feature.

I did remove the contrib code that David Fetter wrote to
demonstrate the correctness and performance of the tuplestores as
created during the transaction, and how to use them directly from C
code, before any API code was written. If we want that to be
committed, it should be considered separately after the main
feature is in.

Thanks to Thomas Munro who took a look at v4 and pointed out a bug
(which is fixed in v5) and suggested a way forward for using the
parameters. Initial attempts to get that working were not
successful, but I think it is fundamentally the right course,
should we reach a consensus to go that way.

On Thu, Jul 7, 2016 at 5:07 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Fri, May 13, 2016 at 2:37 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>> On Fri, May 13, 2016 at 1:02 PM, David Fetter <david(at)fetter(dot)org> wrote:
>>> On Thu, Jan 22, 2015 at 02:41:42PM +0000, Kevin Grittner wrote:
>>
>>>> [ideas on how to pass around references to ephemeral relations]
>>>
>>> [almost 17 months later]
>>>
>>> It seems like now is getting close to the time to get this into
>>> master. The patch might have suffered from some bit rot, but the
>>> design so far seems sound.
>>>
>>> What say?
>>
>> I had a talk with Tom in Brussels about this. As I understood it,
>> he wasn't too keen on the suggestion by Heikki (vaguely
>> sorta-endorsed by Robert) of passing ephemeral named relations such
>> as these tuplestores around in the structures currently used for
>> parameter values. He intuitively foresaw the types of problems I
>> had run into trying to use the same structure to pass a relation
>> (with structure and rows and columns of data) as is used to pass,
>> say, an integer.
>
> See, the thing that disappoints me about this is that I think we were
> pretty close to having the ParamListInfo-based approach working.

Maybe, but the thing I would like to do before proceeding down that
road is to confirm that we have a consensus that such a course is
better than what is on the branch which is currently working. If
that's the consensus here, I'll work on that for the next CF. If
not, there may not be a lot left to do before commit. (Notably, we
may want to provide a way to free a tuplestore pointer when done
with it, but that's not too much work.) Let me describe the API I
have working.

There are 11 function prototypes modified under src/include, in all
cases to add a Tsrcache parameter:
1 createas.h
3 explain.h
1 prepare.h
1 analyze.h
2 tcopprot.h
3 utility.h

There are three new function prototypes in SPI. NOTE: This does
*not* mean that SPI is required to use named tuplestores in
queries, just that there are convenience functions for any queries
being run through SPI, so that any code using SPI (including any
PLs that do) will find assigning a name to a tuplestore and
referencing that within a query about as easy as falling off a log.
A tuplestore is registered to the current SPI context and not
visible outside that context. This results in a Tsrcache being
passed to the functions mentioned above when that context is
active, just as any non-SPI code could do.
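
To make the "easy as falling off a log" claim a bit more concrete
from the PL author's side, here is a minimal plpgsql sketch.  The
names are invented for illustration, and it assumes the trigger was
declared FOR EACH STATEMENT with REFERENCING NEW TABLE AS newtab so
that the tuplestore is exposed under that name:

CREATE FUNCTION count_inserted_rows() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
    n bigint;
BEGIN
    -- the transition relation is referenced by name like any table
    SELECT count(*) INTO n FROM newtab;
    RAISE NOTICE 'rows inserted: %', n;
    RETURN NULL;
END;
$$;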

> The thing I liked about that approach is that we already know that any
> place where you can provide parameters for a query, there will also be
> an opportunity to provide a ParamListInfo. And I think that
> parameterizing a query by an ephemeral table is conceptually similar
> to parameterizing it by a scalar value. If we invent a new mechanism,
> it's got to reach all of those same places in the code.

Do you see someplace that the patch missed?

> One other comment that I would make here is that I think that it's
> important, however we pass the data, that the scope of the tuplestores
> is firmly lexical and not dynamic. We need to make sure that we don't
> set some sort of context via global variables that might get used for
> some query other than the one to which it's intended to apply.

Is this based on anything actually in the patch?

For this CF, the main patch attached is a working version of the
feature that people can test, review documentation, etc. Any API
changes are not expected to change these visible behaviors, so any
feedback on usability or documentation can be directly useful
regardless of the API discussion.

I have also attached a smaller patch which applies on top of the
main one which rips out the Tsrcache API to get to a "no API"
version that compiles cleanly and runs fine as long as you don't
try to use the feature, in which case it will not recognize the
tuplestore names and will give this message: "executor could not
find named tuplestore \"%s\"". There may be a little bit left to
rip out when adding a parameter-based API, but this is basically
where we're moving from if we go that way. I include it both to
help isolate the API we're discussing and in case anyone wants to
play with the "no API" version to try any alternative API.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
trigger-transition-tables-v5.patch.gz application/x-gzip 38.9 KB
trigger-transition-tables-v5-noapi.patch.gz application/x-gzip 11.0 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-09-09 22:28:58
Message-ID: CACjxUsOaAgAsGwUX820-1CD+jtx0n2wNuEt6+Ui2pAL_PFoKOA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

v6 fixes recently-introduced bit-rot.

Kevin Grittner

On Wed, Aug 31, 2016 at 3:24 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> I have merged in the changes since v4 (a year and a half ago) and
> cured all bit-rot I found, to get the attached v5 which runs `make
> check-world` without problem -- including the tests added for this
> feature.
>
> I did remove the contrib code that David Fetter wrote to
> demonstrate the correctness and performance of the tuplestores as
> created during the transaction, and how to use them directly from C
> code, before any API code was written. If we want that to be
> committed, it should be considered separately after the main
> feature is in.
>
> Thanks to Thomas Munro who took a look at v4 and pointed out a bug
> (which is fixed in v5) and suggested a way forward for using the
> parameters. Initial attempts to get that working were not
> successful, but I think it is fundamentally the right course,
> should we reach a consensus to go that way.
>
> On Thu, Jul 7, 2016 at 5:07 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Fri, May 13, 2016 at 2:37 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>>> On Fri, May 13, 2016 at 1:02 PM, David Fetter <david(at)fetter(dot)org> wrote:
>>>> On Thu, Jan 22, 2015 at 02:41:42PM +0000, Kevin Grittner wrote:
>>>
>>>>> [ideas on how to pass around references to ephemeral relations]
>>>>
>>>> [almost 17 months later]
>>>>
>>>> It seems like now is getting close to the time to get this into
>>>> master. The patch might have suffered from some bit rot, but the
>>>> design so far seems sound.
>>>>
>>>> What say?
>>>
>>> I had a talk with Tom in Brussels about this. As I understood it,
>>> he wasn't too keen on the suggestion by Heikki (vaguely
>>> sorta-endorsed by Robert) of passing ephemeral named relations such
>>> as these tuplestores around in the structures currently used for
>>> parameter values. He intuitively foresaw the types of problems I
>>> had run into trying to use the same structure to pass a relation
>>> (with structure and rows and columns of data) as is used to pass,
>>> say, an integer.
>>
>> See, the thing that disappoints me about this is that I think we were
>> pretty close to having the ParamListInfo-based approach working.
>
> Maybe, but the thing I would like to do before proceeding down that
> road is to confirm that we have a consensus that such a course is
> better than what is on the branch which is currently working. If
> that's the consensus here, I'll work on that for the next CF. If
> not, there may not be a lot left to do before commit. (Notably, we
> may want to provide a way to free a tuplestore pointer when done
> with it, but that's not too much work.) Let me describe the API I
> have working.
>
> There are 11 function prototypes modified under src/include, in all
> cases to add a Tsrcache parameter:
> 1 createas.h
> 3 explain.h
> 1 prepare.h
> 1 analyze.h
> 2 tcopprot.h
> 3 utility.h
>
> There are three new function prototypes in SPI. NOTE: This does
> *not* mean that SPI is required to use named tuplestores in
> queries, just that there are convenience functions for any queries
> being run through SPI, so that any code using SPI (including any
> PLs that do) will find assigning a name to a tuplestore and
> referencing that within a query about as easy as falling off a log.
> A tuplestore is registered to the current SPI context and not
> visible outside that context. This results in a Tsrcache being
> passed to the functions mentioned above when that context is
> active, just as any non-SPI code could do.
>
>> The thing I liked about that approach is that we already know that any
>> place where you can provide parameters for a query, there will also be
>> an opportunity to provide a ParamListInfo. And I think that
>> parameterizing a query by an ephemeral table is conceptually similar
>> to parameterizing it by a scalar value. If we invent a new mechanism,
>> it's got to reach all of those same places in the code.
>
> Do you see someplace that the patch missed?
>
>> One other comment that I would make here is that I think that it's
>> important, however we pass the data, that the scope of the tuplestores
>> is firmly lexical and not dynamic. We need to make sure that we don't
>> set some sort of context via global variables that might get used for
>> some query other than the one to which it's intended to apply.
>
> Is this based on anything actually in the patch?
>
>
> For this CF, the main patch attached is a working version of the
> feature that people can test, review documentation, etc. Any API
> changes are not expected to change these visible behaviors, so any
> feedback on usability or documentation can be directly useful
> regardless of the API discussion.
>
> I have also attached a smaller patch which applies on top of the
> main one which rips out the Tsrcache API to get to a "no API"
> version that compiles cleanly and runs fine as long as you don't
> try to use the feature, in which case it will not recognize the
> tuplestore names and will give this message: "executor could not
> find named tuplestore \"%s\"". There may be a little bit left to
> rip out when adding a parameter-based API, but this is basically
> where we're moving from if we go that way. I include it both to
> help isolate the API we're discussing and in case anyone wants to
> play with the "no API" version to try any alternative API.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
trigger-transition-tables-v6.patch.gz application/x-gzip 39.0 KB
trigger-transition-tables-v6-noapi.patch.gz application/x-gzip 11.0 KB

From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-10-03 04:20:56
Message-ID: CAB7nPqRvMhz-POx9Hmo7z+vECgoQxY5TGBLaApVmV4X-LGsrAQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, Sep 10, 2016 at 7:28 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> v6 fixes recently-introduced bit-rot.

Not as big as I thought, only 2k when both patches are combined... The
patch without noapi in its name needs to be applied first, and afterwards
the patch with noapi can be applied.
60 files changed, 2073 insertions(+), 63 deletions(-)
Moved to next CF.
--
Michael


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-10-30 15:35:21
Message-ID: CACjxUsM7LjYE7A3A=5p+Z+q5dawk=o_GH9RYUhTEOpG1xsiRVA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sun, Oct 2, 2016 at 11:20 PM, Michael Paquier
<michael(dot)paquier(at)gmail(dot)com> wrote:
> On Sat, Sep 10, 2016 at 7:28 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>> v6 fixes recently-introduced bit-rot.
>
> Not as big as I thought, only 2k when both patches are combined... The
> patch without noapi in its name needs to be applied first, and afterwards
> the patch with noapi can be applied.
> 60 files changed, 2073 insertions(+), 63 deletions(-)
> Moved to next CF.

In an attempt to make this patch more digestible for reviewers, I
have split it up as follows:

transition-c-triggers-only-v7.diff
contrib/pgstattuple/pgstattuple.c | 2 +
doc/src/sgml/catalogs.sgml | 16 ++
doc/src/sgml/ref/create_trigger.sgml | 94 +++++--
src/backend/commands/tablecmds.c | 5 +-
src/backend/commands/trigger.c | 327 ++++++++++++++++++++++++-
src/backend/nodes/copyfuncs.c | 16 ++
src/backend/nodes/equalfuncs.c | 14 ++
src/backend/nodes/outfuncs.c | 13 +
src/backend/parser/gram.y | 70 +++++-
src/backend/utils/adt/ruleutils.c | 23 ++
src/bin/pg_basebackup/walmethods.c | 5 -
src/include/catalog/pg_trigger.h | 13 +-
src/include/commands/trigger.h | 2 +
src/include/nodes/nodes.h | 1 +
src/include/nodes/parsenodes.h | 17 ++
src/include/parser/kwlist.h | 3 +
src/include/utils/reltrigger.h | 7 +
17 files changed, 581 insertions(+), 47 deletions(-)

This part is virtually unchanged (just curing bit-rot) since
August, 2014, when I believe I had addressed all issues raised by
reviewers. It does provide a barely usable feature, since the
syntax for transition tables is added and tuplestores are created
when needed (and only when needed), with references stored in the
TriggerData structure. No new execution nodes are provided, so
only C triggers can use these relations, and must explicitly and
directly access the tuplestores from within the C code -- there is
no support for referencing these tuplestores from within queries.

This is basic infrastructure needed for the more complete feature.
As far as I know there are no objections to what is implemented
here. I have pulled it out to make the review of the more
controversial portions easier. Since it had quite a bit of review
two years ago, I will do some testing to make sure that nothing has
broken and then push this part in a few days if there are no
objections.

transition-noapi.diff
contrib/pgstattuple/pgstattuple.c | 2 -
doc/src/sgml/spi.sgml | 279 ++++++++++++++++++++
src/backend/commands/explain.c | 10 +
src/backend/executor/Makefile | 3 +-
src/backend/executor/execAmi.c | 6 +
src/backend/executor/execProcnode.c | 14 +
src/backend/executor/nodeTuplestorescan.c | 200 ++++++++++++++
src/backend/nodes/copyfuncs.c | 25 ++
src/backend/nodes/nodeFuncs.c | 2 +
src/backend/nodes/outfuncs.c | 20 ++
src/backend/nodes/print.c | 4 +
src/backend/nodes/readfuncs.c | 7 +
src/backend/optimizer/path/allpaths.c | 44 +++
src/backend/optimizer/path/costsize.c | 66 +++++
src/backend/optimizer/plan/createplan.c | 71 +++++
src/backend/optimizer/plan/setrefs.c | 11 +
src/backend/optimizer/plan/subselect.c | 5 +
src/backend/optimizer/prep/prepjointree.c | 2 +
src/backend/optimizer/util/pathnode.c | 25 ++
src/backend/optimizer/util/plancat.c | 4 +-
src/backend/optimizer/util/relnode.c | 1 +
src/backend/parser/Makefile | 3 +-
src/backend/parser/analyze.c | 11 +
src/backend/parser/parse_clause.c | 23 +-
src/backend/parser/parse_node.c | 2 +
src/backend/parser/parse_relation.c | 115 +++++++-
src/backend/parser/parse_target.c | 2 +
src/backend/parser/parse_tuplestore.c | 34 +++
src/backend/tcop/utility.c | 3 +-
src/backend/utils/adt/ruleutils.c | 1 +
src/backend/utils/cache/Makefile | 2 +-
src/backend/utils/cache/tsrcache.c | 111 ++++++++
src/bin/pg_basebackup/walmethods.c | 5 +
src/include/executor/nodeTuplestorescan.h | 24 ++
src/include/nodes/execnodes.h | 18 ++
src/include/nodes/nodes.h | 2 +
src/include/nodes/parsenodes.h | 11 +-
src/include/nodes/plannodes.h | 10 +
src/include/optimizer/cost.h | 3 +
src/include/optimizer/pathnode.h | 2 +
src/include/parser/parse_node.h | 2 +
src/include/parser/parse_relation.h | 4 +
src/include/parser/parse_tuplestore.h | 24 ++
src/include/utils/tsrcache.h | 27 ++
src/include/utils/tsrmd.h | 29 ++
src/include/utils/tsrmdcache.h | 31 +++
src/include/utils/tuplestore.h | 14 +
src/pl/plpgsql/src/pl_comp.c | 13 +-
src/pl/plpgsql/src/pl_exec.c | 26 ++
src/pl/plpgsql/src/plpgsql.h | 11 +-
src/test/regress/expected/plpgsql.out | 85 ++++++
src/test/regress/sql/plpgsql.sql | 73 +++++
52 files changed, 1499 insertions(+), 23 deletions(-)

This is the meat of the feature, with the API more-or-less ripped
out. It depends on the transition-c-triggers-only patch. This
part is, as far as I know, OK with others based on past review, but
is useless without some API to glue the pieces together --
specifically, passing the tuplestore name and its related TupleDesc
structure to the parse and plan phases, and passing those plus the
actual tuplestore to the execution phase. I will not commit this
until it is integrated with a working API that has consensus
support. If that doesn't happen before release we will have the
choice of documenting this feature as C trigger only for now, or
reverting transition-c-triggers-only-v7.diff.

transition-tsr
.../pg_stat_statements/pg_stat_statements.c | 15 +-
src/backend/catalog/pg_proc.c | 3 +-
src/backend/commands/copy.c | 5 +-
src/backend/commands/createas.c | 4 +-
src/backend/commands/explain.c | 23 +--
src/backend/commands/extension.c | 6 +-
src/backend/commands/foreigncmds.c | 2 +-
src/backend/commands/matview.c | 2 +-
src/backend/commands/prepare.c | 7 +-
src/backend/commands/schemacmds.c | 1 +
src/backend/commands/trigger.c | 2 +-
src/backend/commands/view.c | 2 +-
src/backend/executor/execMain.c | 5 +
src/backend/executor/execParallel.c | 2 +-
src/backend/executor/execUtils.c | 2 +
src/backend/executor/functions.c | 8 +-
src/backend/executor/nodeTuplestorescan.c | 4 +-
src/backend/executor/spi.c | 135 ++++++++++++++++-
src/backend/optimizer/util/clauses.c | 2 +-
src/backend/parser/analyze.c | 4 +-
src/backend/tcop/postgres.c | 11 +-
src/backend/tcop/pquery.c | 10 +-
src/backend/tcop/utility.c | 34 +++--
src/backend/utils/cache/plancache.c | 6 +-
src/include/commands/createas.h | 3 +-
src/include/commands/explain.h | 9 +-
src/include/commands/prepare.h | 4 +-
src/include/executor/execdesc.h | 2 +
src/include/executor/spi.h | 8 +
src/include/executor/spi_priv.h | 2 +
src/include/nodes/execnodes.h | 4 +
src/include/nodes/parsenodes.h | 2 +-
src/include/parser/analyze.h | 2 +-
src/include/tcop/tcopprot.h | 6 +-
src/include/tcop/utility.h | 7 +-
src/include/utils/portal.h | 1 +
src/pl/plpgsql/src/pl_exec.c | 13 +-
src/pl/plpgsql/src/plpgsql.h | 3 +
38 files changed, 281 insertions(+), 80 deletions(-)

This is the API that has been working and tested for two years, but
that Heikki doesn't like. It depends on the transition-noapi
patch.

Note that about half of it is in SPI support, which could be split
out. There is a complete and usable API without it, consisting of
new fields in some structures and an additional parameter in 7
existing functions. The SPI is there to make using the feature
easy from code using SPI. For example, the total of lines added
and deleted to implement support in plpgsql is 16. Each other PL
should need about the same for this API. SPI support would also
allow us to consider using set logic for validating foreign keys,
instead of the one-row-at-a-time approach currently used. My guess
is that even if we go with some other API, we will choose to add
SPI support more-or-less like this to avoid duplicating complex
alternative code everywhere.

transition-via-params
.../pg_stat_statements/pg_stat_statements.c | 4 +
src/backend/executor/nodeTuplestorescan.c | 30 +++---
src/backend/nodes/copyfuncs.c | 4 +-
src/backend/nodes/outfuncs.c | 10 +-
src/backend/nodes/readfuncs.c | 2 +-
src/backend/optimizer/path/costsize.c | 37 +++++++
src/backend/optimizer/plan/createplan.c | 45 ++++++++-
src/backend/parser/parse_clause.c | 4 +
src/backend/parser/parse_node.c | 1 +
src/backend/parser/parse_relation.c | 23 ++---
src/include/nodes/parsenodes.h | 3 +-
src/include/nodes/plannodes.h | 2 +-
src/include/parser/parse_node.h | 4 +-
src/include/parser/parse_relation.h | 2 +
src/pl/plpgsql/src/pl_comp.c | 95 +++++++++++++++++-
src/pl/plpgsql/src/pl_exec.c | 65 ++++++++----
src/pl/plpgsql/src/pl_funcs.c | 2 +
src/pl/plpgsql/src/plpgsql.h | 30 ++++--
src/test/regress/expected/plpgsql.out | 1 -
19 files changed, 291 insertions(+), 73 deletions(-)

This is my attempt to get an API working along the lines suggested
by Heikki. It depends on the transition-noapi patch. Significant
assistance has been provided by Thomas Munro, although please blame
me for the problems that remain. It does not yet work. Note that
plpgsql so far has close to 200 lines inserted and deleted in an
attempt to make it work, and other PLs would either need the SPI
support or would need more lines of support per PL, because their
implementations are not as close to this technique as plpgsql is.

I'm sure there is some wet paint in this one from repeated attempts
to modify it one way or another to try to get it to actually work.
Apologies for that. Any suggestions on how to wrangle this into
something actually functional would be welcome.

As with the last CF, I think that it would be useful for anyone
with an interest in this feature itself or in the incremental
maintenance of materialized views (which this is an incremental
step toward) to look at the documentation and to play with the
transition-tsr variation, which should be fully functional and
should match the behavior we will see with any alternative API.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
transition-v7.tgz application/x-gzip 46.8 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-02 14:57:01
Message-ID: CACjxUsM4s9=CUmPU4YFOYiD5f=2ULVDBjuFSo20Twe7KbUe8Mw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sun, Oct 30, 2016 at 10:35 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:

> SPI support would also
> allow us to consider using set logic for validating foreign keys,
> instead of the one-row-at-a-time approach currently used.

Just as a proof of concept for this I used the attached test case
to create foreign keys using current techniques versus set-oriented
queries with the transition-tsr code. These probably can be
improved, since this is a "first cut" off the top of my head.

The delete of about one million rows from a "parent" table with no
matching rows in the "child" table, and no index on referencing
column in the child table, took 24:17.969 using current triggers
and 00:03.262 using the set-based triggers. Yes, that reduces
current run time for that case by 99.78%.
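
As a rough illustration of the idea (this is not the attached
ri-set-logic.sql test case, and it ignores the race conditions that
would need to be handled), a set-based check on the referenced side
can be a single statement-level trigger over the OLD TABLE
transition relation.  Table, column, and trigger names here are
invented:

CREATE FUNCTION check_no_orphans() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- one set-oriented query instead of a per-row index probe
    IF EXISTS (SELECT 1
               FROM old_parent o
               JOIN child c ON c.parent_id = o.id) THEN
        RAISE EXCEPTION 'delete on parent violates FK from child';
    END IF;
    RETURN NULL;
END;
$$;

CREATE TRIGGER parent_fk_check
    AFTER DELETE ON parent
    REFERENCING OLD TABLE AS old_parent
    FOR EACH STATEMENT
    EXECUTE PROCEDURE check_no_orphans();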

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
ri-set-logic.sql application/sql 3.5 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-02 15:10:06
Message-ID: CACjxUsNCcUmhapbDL1bSOKXEiW4jhbrmJVn43pwo1PBQfou9qw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Attached is a minor fix to go on top of transition-tsr for issues
found yesterday in testing.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
transition-tsr-v7a.diff text/plain 2.0 KB

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-02 16:07:24
Message-ID: CAFj8pRAs2P9aQOtQS8Kr2WjLisq9UJxf4AF58fQg0ujzEDoUWA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

2016-11-02 15:57 GMT+01:00 Kevin Grittner <kgrittn(at)gmail(dot)com>:

> On Sun, Oct 30, 2016 at 10:35 AM, Kevin Grittner <kgrittn(at)gmail(dot)com>
> wrote:
>
> > SPI support would also
> > allow us to consider using set logic for validating foreign keys,
> > instead of the one-row-at-a-time approach currently used.
>
> Just as a proof of concept for this I used the attached test case
> to create foreign keys using current techniques versus set-oriented
> queries with the transition-tsr code. These probably can be
> improved, since this is a "first cut" off the top of my head.
>
> The delete of about one million rows from a "parent" table with no
> matching rows in the "child" table, and no index on referencing
> column in the child table, took 24:17.969 using current triggers
> and 00:03.262 using the set-based triggers. Yes, that reduces
> current run time for that case by 99.78%.
>

this is a great number

Pavel

>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company


From: Adam Brusselback <adambrusselback(at)gmail(dot)com>
To: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-02 16:41:00
Message-ID: CAMjNa7ePipxKWZJosbVPQFff+jZEx4MJ4MjjGPTucKGZQ3=ASg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>
> The delete of about one million rows from a "parent" table with no
> matching rows in the "child" table, and no index on referencing
> column in the child table, took 24:17.969 using current triggers
> and 00:03.262 using the set-based triggers. Yes, that reduces
> current run time for that case by 99.78%.

That is really incredible. Gets rid of the need for an index on
referencing columns for a ton of use cases.


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Adam Brusselback <adambrusselback(at)gmail(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-02 20:42:20
Message-ID: CACjxUsPPn3qK=0YaFGCSN6+E8Hb3Sf8HxWZk5VAVsQGHDgiMLg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

> 2016-11-02 15:57 GMT+01:00 Kevin Grittner <kgrittn(at)gmail(dot)com>:
>> On Sun, Oct 30, 2016 at 10:35 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>>
>>> SPI support would also
>>> allow us to consider using set logic for validating foreign keys,
>>> instead of the one-row-at-a-time approach currently used.
>>
>> Just as a proof of concept for this I used the attached test case
>> to create foreign keys using current techniques versus set-oriented
>> queries with the transition-tsr code. These probably can be
>> improved, since this is a "first cut" off the top of my head.
>>
>> The delete of about one million rows from a "parent" table with no
>> matching rows in the "child" table, and no index on referencing
>> column in the child table, took 24:17.969 using current triggers
>> and 00:03.262 using the set-based triggers. Yes, that reduces
>> current run time for that case by 99.78%.

On Wed, Nov 2, 2016 at 11:07 AM, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> wrote:
>
> this is a great number

On Wed, Nov 2, 2016 at 11:41 AM, Adam Brusselback
<adambrusselback(at)gmail(dot)com> wrote:
>
> That is really incredible. Gets rid of the need for an index on referencing
> columns for a ton of use cases.

Keep in mind that this is just a quick proof of concept. Unless
all participating transactions are at serializable isolation level
something would need to be done to handle race conditions, and that
might affect performance. I do think that this approach is likely
to be better in enough circumstances, even after that is covered,
that it will be worth pursuing -- either as an option when
declaring a foreign key, or as the only implementation. Until we
have a version that covers the race conditions and have benchmarked
it in a variety of workloads, it is hard to feel sure about the latter.
There may be some situations where crawling the indexes a row at a
time will perform better than this by enough to want to retain that
option.

A big plus of a single set-oriented statement is that it doesn't
suck unlimited RAM -- it will use work_mem to limit each tuplestore
and each query node, spilling to disk if needed. The current FK
implementation sometimes runs for a very long time and can run
people out of memory.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Adam Brusselback <adambrusselback(at)gmail(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-02 21:09:04
Message-ID: CAMjNa7cSy5hennr8uk2pkSKQqUGNS5FHWyDXW-3OsGwh3ur4CQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>
> There may be some situations where crawling the indexes a row at a
> time will perform better than this by enough to want to retain that
> option.

If an index existed, wouldn't it still be able to use that in the set-based
implementation? Is there something which would make doing the check
set-based ever worse than row based inherently?


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Adam Brusselback <adambrusselback(at)gmail(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-03 10:39:53
Message-ID: CACjxUsO7L60aO0jGzJUnesrz09sriT3A2774LowCyB5tnZAoqg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Nov 2, 2016 at 4:09 PM, Adam Brusselback
<adambrusselback(at)gmail(dot)com> wrote:
>> There may be some situations where crawling the indexes a row at a
>> time will perform better than this by enough to want to retain that
>> option.
>
> If an index existed, wouldn't it still be able to use that in the set-based
> implementation?

Yes. The optimizer would compare plans and pick the lowest cost.

> Is there something which would make doing the check
> set-based ever worse than row based inherently?

I'm not sure. I doubt that it would ever lose by very much, but
only benchmarking can really answer that question.

Anyway, it's probably premature to get too far into it now. It
just occurred to me that it might be a worthwhile project once the
transition tables are available, so I did a quick set of triggers
to see what the potential was in a "best case" scenario.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-04 16:09:26
Message-ID: CACjxUsNWYRRPeEpBzGsZQSDjLxzw_Tmy5yyyUc0-76-JVYGp0Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sun, Oct 30, 2016 at 10:35 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> On Sun, Oct 2, 2016 at 11:20 PM, Michael Paquier <michael(dot)paquier(at)gmail(dot)com> wrote:

>> Not as big as I thought, only 2k when both patches are combined... The
>> patch without noapi in its name needs to be applied first, and afterwards
>> the patch with noapi can be applied.
>> 60 files changed, 2073 insertions(+), 63 deletions(-)
>> Moved to next CF.
>
> In an attempt to make this patch more digestible for reviewers, I
> have split it up as follows:
>
> transition-c-triggers-only-v7.diff

> 17 files changed, 581 insertions(+), 47 deletions(-)
>
> This part is virtually unchanged (just curing bit-rot) since
> August, 2014, when I believe I had addressed all issues raised by
> reviewers. It does provide a barely usable feature, since the
> syntax for transition tables is added and tuplestores are created
> when needed (and only when needed), with references stored in the
> TriggerData structure. No new execution nodes are provided, so
> only C triggers can use these relations, and must explicitly and
> directly access the tuplestores from within the C code -- there is
> no support for referencing these tuplestores from within queries.
>
> This is basic infrastructure needed for the more complete feature.
> As far as I know there are no objections to what is implemented
> here. I have pulled it out to make the review of the more
> controversial portions easier. Since it had quite a bit of review
> two years ago, I will do some testing to make sure that nothing has
> broken and then push this part in a few days if there are no
> objections.

Hearing none, done. Hopefully that makes what remains easier to
review.

During final testing I was annoyed by the thin support for CREATE
TRIGGER in the tab completion code, so I improved that a bit and
pushed that, too.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-21 07:05:06
Message-ID: CAEepm=3NvS1MZqzJ5T-rh2xtshqEXwCRCL16Bwhksy1EY03g6A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, Nov 5, 2016 at 5:09 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> Hearing none, done. Hopefully that makes what remains easier to
> review.

So, as of commit 8c48375e we have the standard syntax REFERENCING
NEW/OLD TABLE AS so that you can provide the name for transition
relations that trigger functions can access, and they are filled with
before and after tuples as appropriate and made available to trigger
code via the AfterTriggersData struct. So now we can write triggers
in C and access those tuplestores. Great!
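
For anyone following along, here is a hedged sketch of what that DDL
looks like (table and function names are invented; as noted above,
at this stage the trigger function itself would still need to be
written in C to actually read the tuplestores):

CREATE TRIGGER orders_delta
    AFTER UPDATE ON orders
    REFERENCING OLD TABLE AS old_orders NEW TABLE AS new_orders
    FOR EACH STATEMENT
    EXECUTE PROCEDURE orders_delta_func();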

I spent some time last week reviewing the remaining patches in the
series. They've bit-rotted a bit and one is incomplete, but it's
clear enough what's going on. First, we have transition-noapi-v7.diff
which introduces a new executor node "TuplestoreScan" and a named
tuplestore registry mechanism called "Tsrcache". Then we have a
choice of:

1. transition-tsr-v7.diff, which allows SPI client code to register
tuplestores + associated metadata though a new SPI interface, and
which passes Tsrcache objects to all the places they are needed.

2. transition-via-params-v7.diff, which allows SPI client code to
provide a new parser hook to resolve references to the new relations,
analogous to the way that plpgsql deals with variables referenced in
SQL statements, and then does some juggling to get the tuplestore to
the executor node via a parameter slot.

Both ways have attracted criticism: the first involves touching
basically every core function that might eventually parse, plan or
execute a query to make it accept a Tsrcache and pass that on, and the
second involves a bunch of Rube Goldberg machine-like
callback/variable/parameter code.

I spent some time investigating whether a third way would be viable:
use ParamListInfo's setupParser hook and add an analogous one for the
executor, so that there is no registry equivalent to Tsrcache, but
also no dealing with param slots or (in plpgsql's case) new kinds of
variables. Instead, there would just be two callbacks: one for asking
the tuplestore provider for metadata (statistics, TupleDesc) at
planning time, and another for asking for the tuplestore by name at
execution time. One problem with this approach is that it requires
using the SPI_*_with_paramlist interfaces, not the more commonly used
arg-based versions, and requires using a ParamListInfo when you
otherwise have no reason to because you have no parameters. Also,
dealing with callbacks to register your tuplestore supplier is a
little clunky. More seriously, requiring providers of those callbacks
to write code that directly manipulates ParserState and EState and
calls addRangeTableEntryXXX seems like a modularity violation --
especially for PLs that are less integrated with core Postgres code
than plpgsql. I got this more or less working, but didn't like it
much and didn't think it would pass muster.

After going through that experience, I now agree with Kevin: an
interface where a new SPI interface lets PLs push a named tuplestore
into the SPI connection to make it available to SQL seems like the
simplest and tidiest way. I do have some feedback and suggestions
though:

1. Naming: Using tuplestores in AfterTriggersData makes perfect sense
to me but I don't think it follows that the exact thing we should
expose to the executor to allow these to be scanned is a
TuplestoreScan. We have other nodes that essentially scan a
tuplestore like WorkTableScan and Material but they have names that
tell you what they are for. This is for scanning data that has either
been conjured up or passed on by SPI client code and exposed to SQL
queries. How about SpiRelationScan, SpiDataScan, SpiRelVarScan, ....?
Also, Tsrcache is strangely named: it's not exactly a cache, it's
more of a registry.

2. Scope of changes: If we're going to touch functions all over the
source tree to get the Tsrcache where it needs to be for parsing and
execution, then I wonder if we should consider thinking a bit more
about where this is going. What if we had a thing called a
QueryEnvironment, and we passed a pointer to that into all those
places, and it could contain the named tuplestore registry? Then we
wouldn't have to change all those interfaces again in future if
someone wants to allow more kinds of transient objects to be injected
into the SQL namespace via SPI. For example, future uses could
include transient functions (ie uncatalogued functions that are made
available to a query by the SPI client using a callback to support
private and nested functions), or relations provided via SPI that emit
tuples one-at-a-time, or more sophisticated kinds of transient
relations that support writes, indexes and constraints along the lines
of those in SQL Server's T-SQL.

See attached patches:

* spi-relation-v1.patch, which provides: (1) an SpiRelationScan
executor node, (2) the SPI interface required to feed data to it, (3)
a new QueryEnvironment struct which is used to convey SpiRelation into
the right bits of the planner and executor; this patch is based on
fragments extracted from the -noapi and -tsr patches, and modified as
described above

* spi-relation-plpgsql-v1.patch, to teach plpgsql how to expose the
new and old tables to SQL via the above

* spi-relation-plpython-v1.patch, ditto for plpython; this patch makes
the OLD TABLE and NEW TABLE automatically available to any query you
run via plpy.execute, and is intended to show that the code required
to make this work for each PL is small, in support of Kevin's earlier
argument (a more featureful patch would presumably also expose
the contents of the tuplestores directly to Python code, and let
Python code create arbitrary new tuplestores and expose those to SQL
queries)

Thoughts?

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
spi-relation-v1.patch application/octet-stream 99.1 KB
spi-relation-plpgsql-v1.patch application/octet-stream 7.7 KB
spi-relation-plpython-v1.patch application/octet-stream 3.2 KB

From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Kevin Grittner <kgrittn(at)gmail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-21 09:49:01
Message-ID: CAMsr+YGYazZ8RgGRM2_tpRP2YQj_eXw139yxvOtCpLdL0T9Rbw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 21 November 2016 at 15:05, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:

> After going through that experience, I now agree with Kevin: an
> interface where a new SPI interface lets PLs push a named tuplestore
> into the SPI connection to make it available to SQL seems like the
> simplest and tidiest way.

That also offers a handy step on the path toward table-valued
variables and pipelined functions, both of which would be _really_
nice for PL/PgSQL users.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-21 17:04:34
Message-ID: CACjxUsOWAK4-rLWSAsa4FfAUX6Mxa41-TtCVN-D4UysJsod7BA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Thanks for the review! Will respond further after reviewing your
suggested patches; this is a quick response just to the contents of
the email.

On Mon, Nov 21, 2016 at 1:05 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:

> Both ways have attracted criticism: the first involves touching
> basically every core function that might eventually parse, plan or
> execute a query to make it accept a Tsrcache and pass that on, and the
> second involves a bunch of Rube Goldberg machine-like
> callback/variable/parameter code.

Just to quantify "touching basically every core function that
might...", the number of functions which have a change in signature
(adding one parameter) is seven. No more; no less.

> I spent some time investigating whether a third way would be viable:
> use ParamListInfo's setupParser hook and add an analogous one for the
> executor, so that there is no registry equivalent to Tsrcache, but
> also no dealing with param slots or (in plpgsql's case) new kinds of
> variables. Instead, there would just be two callbacks: one for asking
> the tuplestore provider for metadata (statistics, TupleDesc) at
> planning time, and another for asking for the tuplestore by name at
> execution time. One problem with this approach is that it requires
> using the SPI_*_with_paramlist interfaces, not the more commonly used
> arg-based versions, and requires using a ParamListInfo when you
> otherwise have no reason to because you have no parameters. Also,
> dealing with callbacks to register your tuplestore supplier is a
> little clunky. More seriously, requiring providers of those callbacks
> to write code that directly manipulates ParserState and EState and
> calls addRangeTableEntryXXX seems like a modularity violation --
> especially for PLs that are less integrated with core Postgres code
> than plpgsql. I got this more or less working, but didn't like it
> much and didn't think it would pass muster.

ok

> After going through that experience, I now agree with Kevin: an
> interface where a new SPI interface lets PLs push a named tuplestore
> into the SPI connection to make it available to SQL seems like the
> simplest and tidiest way. I do have some feedback and suggestions
> though:
>
> 1. Naming: Using tuplestores in AfterTriggersData make perfect sense
> to me but I don't think it follows that the exact thing we should
> expose to the executor to allow these to be scanned is a
> TuplestoreScan. We have other nodes that essentially scan a
> tuplestore like WorkTableScan and Material but they have names that
> tell you what they are for. This is for scanning data that has either
> been conjured up or passed on by SPI client code and exposed to SQL
> queries. How about SpiRelationScan, SpiDataScan, SpiRelVarScan, ....?

I think an SPI-centered approach is the wrong way to go. I feel
that an SPI *interface* will be very useful, and probably save
thousands of lines of fragile code which would otherwise be
blurring the lines of the layering, but I feel there should
be a lower-level interface, and the SPI interface should use that
to provide a convenience layer. In particular, I suspect that some
uses of these named tuplestore relations will initially use SPI for
convenience of development, but may later move some of that code to
dealing with parse trees, for performance. Ideally, the execution
plan would be identical whether or not SPI was involved, so naming
implying the involvement of SPI would be misleading.
NamedTuplestoreScan might be an improvement over just
TuplestoreScan.

> Also, Tsrcache is strangely named: it's not exactly a cache, it's
> more of a registry.

When I used the word "cache" here, I was thinking more of this
English language definition:

a : a hiding place especially for concealing and preserving
provisions or implements
b : a secure place of storage

The intent being to emphasize that there is not one public
"registry" of such objects, but context-specific collections where
references are tucked away when they become available for later use
in only the appropriate context. Eventually, when these are
used for some of the less "eager" timings of materialized view
maintenance, they may be set aside for relatively extended periods
(i.e., minutes or maybe even hours) before being used. Neither
"registry" nor "cache" seems quite right; maybe someone can think
of a word with more accurate semantics.

> 2. Scope of changes: If we're going to touch functions all over the
> source tree to get the Tsrcache where it needs to be for parsing and
> execution, then I wonder if we should consider thinking a bit more
> about where this is going. What if we had a thing called a
> QueryEnvironment, and we passed a pointer to that into to all those
> places, and it could contain the named tuplestore registry?

I agree. I had a version building on the Tsrcache approach which
differentiated between three levels of generality: Ephemeral
Relations, a subclass of that called Named Ephemeral Relations, and a
subclass of that called Named Tuplestore Relations. That
experiment seems to have been lost somewhere along the way, but I
think it was fairly easy to draw those lines in the Tsrcache
version to support other types of lightweight relations. I didn't
have the concept of a QueryEnvironment in that; I would be
interested in hearing more about how you see that working.

> See attached patches:
>
> * spi-relation-v1.patch, which provides: (1) an SpiRelationScan
> executor node, (2) the SPI interface required to feed data to it, (3)
> a new QueryEnvironment struct which is used to convey SpiRelation into
> the right bits of the planner and executor; this patch is based on
> fragments extracted from the -noapi and -tsr patches, and modified as
> described above
>
> * spi-relation-plpgsql-v1.patch, to teach plpgsql how to expose the
> new and old tables to SQL via the above
>
> * spi-relation-plpython-v1.patch, ditto for plpython; this patch makes
> the OLD TABLE and NEW TABLE automatically available to any query you
> run via plpy.execute, and is intended to show that the code required
> to make this work for each PL is small, in support of Kevin's earlier
> argument (a more featureful patch might would presumably also expose
> the contents of the tuplestores directly to Python code, and let
> Python code create arbitrary new tuplestores and expose those to SQL
> queries)

Right, I think we should assume that there will be other ways
people want to use parts of what is done here, including building
tuplestores through other means and referencing them in queries.
As I said, I think we should make that very easy to do through SPI
while providing a low-level mechanism for those building parse
trees directly in C code. If we do this right, we might eventually
want to collapse CTEs or some other existing types of ephemeral
relations into this framework.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-21 17:17:14
Message-ID: CA+TgmobbbJ7+Er2rjVEStAsZeSbkXn5DjGFz_ht-KD6CgJyZnw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Nov 21, 2016 at 12:04 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>> Also, Tsrcache is strangely named: it's not exactly a cache, it's
>> more of a registry.
>
> When I used the word "cache" here, I was thinking more of this
> English language definition:
>
> a : a hiding place especially for concealing and preserving
> provisions or implements
> b : a secure place of storage
>
> The intent being to emphasize that there is not one public
> "registry" of such objects, but context-specific collections where
> references are tucked away when they become available for later use
> in the only the appropriate context. Eventually, when these are
> used for some of the less "eager" timings of materialized view
> maintenance, they may be set aside for relatively extended periods
> (i.e., minutes or maybe even hours) before being used. Neither
> "registry" nor "cache" seems quite right; maybe someone can think
> of a word with more accurate semantics.

I complained about the use of "cache" in this name before, and I still
think that it is off-base. I'm not saying there isn't some definition
of the word that could cover what you're doing here, but it's not the
definition that is going to pop to mind for people reading the code.
I think "registry" would be OK; the fact that there is a registry does
not mean it is a global registry; it can be a registry of ephemeral
relations specific to that query. I'm sure there are other good
choices, too, but, please, not cache!

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-21 17:29:41
Message-ID: 20161121172941.qv5rtq7d57izy7yh@alvherre.pgsql
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Kevin Grittner wrote:

> On Mon, Nov 21, 2016 at 1:05 AM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:

> > Also, Tsrcache is strangely named: it's not exactly a cache, it's
> > more of a registry.
>
> When I used the word "cache" here, I was thinking more of this
> English language definition:
>
> a : a hiding place especially for concealing and preserving
> provisions or implements
> b : a secure place of storage
>
> The intent being to emphasize that there is not one public
> "registry" of such objects, but context-specific collections where
> references are tucked away when they become available for later use
> in the only the appropriate context. Eventually, when these are
> used for some of the less "eager" timings of materialized view
> maintenance, they may be set aside for relatively extended periods
> (i.e., minutes or maybe even hours) before being used. Neither
> "registry" nor "cache" seems quite right; maybe someone can think
> of a word with more accurate semantics.

How about "stash"? According to my reading of Merriam-Webster's
definition, "stash" mostly appears to be the thing that is stored
(hidden), rather than the place it's stored in, but one of the
definitions is "hiding place", and "cache" is listed as a synonym.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-21 18:29:36
Message-ID: CACjxUsNBXgExJLVQy00JmiyrxBn57Foun0uAAkAQCgTurfdebA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Nov 21, 2016 at 11:29 AM, Alvaro Herrera
<alvherre(at)2ndquadrant(dot)com> wrote:

>> When I used the word "cache" here, I was thinking more of this
>> English language definition:
>>
>> a : a hiding place especially for concealing and preserving
>> provisions or implements
>> b : a secure place of storage
>>
>> The intent being to emphasize that there is not one public
>> "registry" of such objects, but context-specific collections where
>> references are tucked away when they become available for later use
>> in the only the appropriate context.

> How about "stash"? According to my reading of Merriam-Webster's
> definition, "stash" mostly appears to be the thing that is stored
> (hidden), rather than the place it's stored in, but one of the
> definitions is "hiding place", and "cache" is listed as a synonym.

"Stash" seems better that "cache" or "registry", especially since
many programmers these days seem to associate "cache" with
pass-through proxy techniques. I first became familiar with the
term "cache" while reading Jack London, and tend to retain some
association with the more general definition. Clearly I am in the
minority on that here.

http://ereimer.net/20080706/13586_erC720.htm

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-21 19:00:40
Message-ID: CAEepm=0b91nAYs=28ndY2dO0A+9JW4RmnVa=-r2-SvTFgszdrQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Nov 22, 2016 at 7:29 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> On Mon, Nov 21, 2016 at 11:29 AM, Alvaro Herrera
> <alvherre(at)2ndquadrant(dot)com> wrote:
>
>>> When I used the word "cache" here, I was thinking more of this
>>> English language definition:
>>>
>>> a : a hiding place especially for concealing and preserving
>>> provisions or implements
>>> b : a secure place of storage
>>>
>>> The intent being to emphasize that there is not one public
>>> "registry" of such objects, but context-specific collections where
>>> references are tucked away when they become available for later use
>>> in the only the appropriate context.
>
>> How about "stash"? According to my reading of Merriam-Webster's
>> definition, "stash" mostly appears to be the thing that is stored
>> (hidden), rather than the place it's stored in, but one of the
>> definitions is "hiding place", and "cache" is listed as a synonym.
>
> "Stash" seems better that "cache" or "registry", especially since
> many programmers these days seem to associate "cache" with
> pass-through proxy techniques. I first became familiar with the
> term "cache" while reading Jack London, and tend to retain some
> association with the more general definition. Clearly I am in the
> minority on that here.

I was suggesting something like QueryEnvironment because it focuses on
its role, not the fact that there are things stored in it. It's
conceptually like the environment in an interpreter, which is some
kind of namespace into which objects are bound by name. My suggestion
"SpiRelation" now seems a bit short sighted in light of your comments,
so I retract that bit. How about a QueryEnvironment (as shown in the
patch I posted) that contains a list of NamedTuplestore pointers (the
SpiRelation struct in the patch I posted, renamed) and in future
perhaps lists of other ephemeral/transient objects that we want to
expose to SQL? We would possibly have more than one list because SQL
is not "Lisp-1" in nature: relations and functions and other kinds of
object exist in different namespaces, though there may need to be
polymorphism among kinds of named relations in the same list, so
perhaps NamedTuplestore should be a node with a tag.

--
Thomas Munro
http://www.enterprisedb.com


From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Kevin Grittner <kgrittn(at)gmail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-11-22 23:02:59
Message-ID: dea9bd93-8bb4-8173-a0a5-916dc754caee@BlueTreble.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 11/21/16 3:49 AM, Craig Ringer wrote:
>> After going through that experience, I now agree with Kevin: an
>> > interface where a new SPI interface lets PLs push a named tuplestore
>> > into the SPI connection to make it available to SQL seems like the
>> > simplest and tidiest way.
> That also offers a handy step on the path toward table-valued
> variables and pipelined functions, both of which would be _really_
> nice for PL/PgSQL users.

FWIW, I expect at some point we'd like the ability to index tuplestores
as well.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)


From: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
To: Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>
Cc: Craig Ringer <craig(at)2ndquadrant(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Kevin Grittner <kgrittn(at)gmail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-12-05 05:35:34
Message-ID: CAJrrPGfxYPBPo_BSB7JXw=E5C0h=eSMZgjf2ToMuACatq8wR4w@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Nov 23, 2016 at 10:02 AM, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>
wrote:

> On 11/21/16 3:49 AM, Craig Ringer wrote:
>
>> After going through that experience, I now agree with Kevin: an
>>> > interface where a new SPI interface lets PLs push a named tuplestore
>>> > into the SPI connection to make it available to SQL seems like the
>>> > simplest and tidiest way.
>>>
>> That also offers a handy step on the path toward table-valued
>> variables and pipelined functions, both of which would be _really_
>> nice for PL/PgSQL users.
>>
>
> FWIW, I expect at some point we'd like the ability to index tuplestores as
> well.

Moved to next CF with "waiting on author" status.

Regards,
Hari Babu
Fujitsu Australia


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
Cc: Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-12-18 02:15:49
Message-ID: CACjxUsPNe-3yJTFt4V7pMZEPBTi5KGYoCnvXpHLtwZ=5BD3hsQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sun, Dec 4, 2016 at 11:35 PM, Haribabu Kommi
<kommi(dot)haribabu(at)gmail(dot)com> wrote:

> Moved to next CF with "waiting on author" status.

Patch v8 attempts to address the issues explicitly raised in
Thomas Munro's review. An opaque query environment is created
that, for now, only passes through ephemeral named relations, of
which the only initial implementation is named tuplestores; but
the techniques are intended to support other types of ephemeral
named relations as well as environmental properties (things
affecting parsing, planning, and execution that do not come from
the system catalog). Nothing in the way the QE is accessed reveals
whether something is, for example, stored in a list or a hash
table. That's on purpose, so that the addition of other
properties or changes to their implementation doesn't affect the
calling code.

There were a few changes Thomas included in the version he posted,
without really delving into an explanation for those changes. Some
or all of them are likely to be worthwhile, but I would rather
incorporate them based on explicit discussion, so this version
doesn't do much other than generalize the interface a little,
change some names, and add more regression tests for the new
feature. (The examples I worked up for the rough proof of concept
of enforcement of RI through set logic rather than row-at-a-time
navigation were the basis for the new tests, so the idea won't get
totally lost.) Thomas, please discuss each suggested change (e.g.,
the inclusion of the query environment in the parameter list of a
few more functions).
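
To make the set-based approach concrete, the new tests are built
around statement-level triggers along the lines of this minimal
sketch (hypothetical parent(pk) and child(parent_pk) tables, and
assuming the REFERENCING syntax added by this patch):

CREATE FUNCTION parent_del_check() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- one set-oriented probe per statement, not one per deleted row
    IF EXISTS (SELECT 1
                 FROM oldtab o
                 JOIN child c ON c.parent_pk = o.pk) THEN
        RAISE EXCEPTION 'RI error';
    END IF;
    RETURN NULL;
END;
$$;

CREATE TRIGGER parent_del_ri
    AFTER DELETE ON parent
    REFERENCING OLD TABLE AS oldtab
    FOR EACH STATEMENT
    EXECUTE PROCEDURE parent_del_check();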

Changed to "Needs review" status.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
transition-v8.diff.gz application/x-gzip 28.8 KB

From: Craig Ringer <craig(dot)ringer(at)2ndquadrant(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Álvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2016-12-18 02:25:34
Message-ID: CAMsr+YH+=FiVV_OvKdN0z8x8cVZhqv-7f9N5ecJFudyhkkzypw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 22 Nov. 2016 01:05, "Kevin Grittner" <kgrittn(at)gmail(dot)com> wrote:

Right, I think we should assume that there will be other ways
people want to use parts of what is done here, including building
tuplestores through other means and referencing them in queries.

Yes. PL/pgsql table-valued variables for one thing.


From: Craig Ringer <craig(dot)ringer(at)2ndquadrant(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Kevin Grittner <kgrittn(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2016-12-18 02:29:51
Message-ID: CAMsr+YGVbC2_ThxSG5LduE6x+EVUkEba_exSyDSRZKqhbFao2A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 22 Nov. 2016 03:01, "Thomas Munro" <thomas(dot)munro(at)enterprisedb(dot)com> wrote:

How about a QueryEnvironment (as shown in the
patch I posted) that contains a list of NamedTuplestore pointers (the
SpiRelation struct in the patch I posted, renamed) and in future
perhaps lists of other ephemeral/transient objects that we want to
expose to SQL?

Very good idea. Sooner or later someone will probably want query-scoped
variables like MS-SQL and MySQL for example.


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-12-18 23:05:43
Message-ID: CAEepm=02A5LTfGtHo8xWpEmW4AYY+ES-uPr02bWb0OzGF4Ej-Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sun, Dec 18, 2016 at 3:15 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> On Sun, Dec 4, 2016 at 11:35 PM, Haribabu Kommi
> <kommi(dot)haribabu(at)gmail(dot)com> wrote:
>
>> Moved to next CF with "waiting on author" status.
>
> Patch v8 attempts to address the issues explicitly raised in
> Thomas Munro's review. An opaque query environment is created
> that, for now, only passes through ephemeral named relations, of
> which the only initial implementation is named tuplestores; but
> the techniques are intended to support both other types of
> ephemeral named relations and environmental properties (things
> affecting parsing, planning, and execution that are not coming from
> the system catalog) besides ENRs. There is no clue in the access
> to the QE whether something is, for example, stored in a list or a
> hash table. That's on purpose, so that the addition of other
> properties or changes to their implementation doesn't affect the
> calling code.

+1

> There were a few changes Thomas included in the version he posted,
> without really delving into an explanation for those changes. Some
> or all of them are likely to be worthwhile, but I would rather
> incorporate them based on explicit discussion, so this version
> doesn't do much other than generalize the interface a little,
> change some names, and add more regression tests for the new
> feature. (The examples I worked up for the rough proof of concept
> of enforcement of RI through set logic rather than row-at-a-time
> navigation were the basis for the new tests, so the idea won't get
> totally lost.) Thomas, please discuss each suggested change (e.g.,
> the inclusion of the query environment in the parameter list of a
> few more functions).

I was looking for omissions that would cause some kinds of statements
to miss out on ENRs arbitrarily. It seemed to me that
parse_analyze_varparams should take a QueryEnvironment, mirroring
parse_analyze, so that PrepareQuery could pass it in. Otherwise,
PREPARE wouldn't see ENRs. Is there any reason why SPI_prepare should
see them but PREPARE not?

Should we attempt to detect if the tupledesc changes incompatibly
between planning and execution?

> Changed to "Needs review" status.

+ enr->md.enrtuples = tuplestore_tuple_count(trigdata->tg_newtable);

I was wondering about this. As I understand it, plans for statements
in plpgsql functions are automatically cached. Is it OK for the
number of tuples on the first invocation of the trigger in a given
session to determine the estimate used for the plan? I suppose that
is the case with regular tables, so maybe it is.

+ register_enr(estate.queryEnv, enr);
+ SPI_register_relation(enr);

Here plpgsql calls both register_enr and SPI_register_relation. Yet
SPI_register_relation calls register_enr too, so is this a mistake?
Also, the return code isn't checked.

--
Thomas Munro
http://www.enterprisedb.com


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2016-12-19 00:38:54
Message-ID: CAEepm=3WVmPmz3bkFtK2kcnD9Kr7hxPZ2SKJ8SfZX_VSuTeh2A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sun, Dec 18, 2016 at 3:15 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> Patch v8 ...

FWIW here's that plpython patch, adjusted to apply on top of your latest patch.

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
delta-relations-plpython.patch application/octet-stream 3.6 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-01-19 21:19:39
Message-ID: CACjxUsOmo2C+7w3nW1sGYNtk=4tWTjYRD7OqenzhdiSU4JBDAA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Attached is v9 which fixes bitrot from v8. No other changes.

Still needs review.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
transition-v9.diff.gz application/x-gzip 28.8 KB

From: Nico Williams <nico(at)cryptonector(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: delta relations in AFTER triggers
Date: 2017-01-19 22:14:09
Message-ID: 20170119213859.GA1838@localhost
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, Dec 17, 2016 at 08:15:49PM -0600, Kevin Grittner wrote:
> On Sun, Dec 4, 2016 at 11:35 PM, Haribabu Kommi
> <kommi(dot)haribabu(at)gmail(dot)com> wrote:
> > Moved to next CF with "waiting on author" status.
>
> [...]

I hope what I've done about delta relations will be mostly irrelevant
given your patch (which I've not looked at in detail), but just FYI,
I've built an alternate, all-SQL-coded materialized view system that
captures deltas between refreshes and deltas from direct DMLs of the
materialized view:

https://github.com/twosigma/postgresql-contrib/blob/master/pseudo_mat_views.sql

There are some good ideas there, IMO, even if that implementation ends
up being useless because of your patch.

Incidentally, it's really nice that PG has some "higher order" SQL
features that make this sort of thing easier. In particular, here, row
values and record types, and being able to refer to a table as a column
of the table's record type.
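
For instance (hypothetical tables with identical definitions), each
table alias can be used directly as a value of the table's record
type, which keeps delta comparisons free of generated column lists:

SELECT n.*
  FROM new_version n
  LEFT JOIN old_version o USING (id)
 WHERE o IS DISTINCT FROM n;    -- rows that are new or changed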

Nico
--


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-01-20 19:26:54
Message-ID: CACjxUsMLLUKDkBYHiyRDisNVCfomoVLjK0_5NdqDXrThza_jSg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Attached is v10 which fixes bitrot from v9 caused by ea15e186.

Still needs review.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
transition-v10.diff.gz application/x-gzip 28.8 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Nico Williams <nico(at)cryptonector(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-01-20 19:37:33
Message-ID: CACjxUsOkqrSkzeRS92-7bDJw7PxEYudhn4_YV8GbG-2zk1CY6w@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Jan 19, 2017 at 4:14 PM, Nico Williams <nico(at)cryptonector(dot)com> wrote:

> I hope what I've done about delta relations will be mostly irrelevant
> given your patch (which I've not looked at in detail),

Reviews welcome!

> but just FYI,
> I've built an alternate, all-SQL-coded materialized view system that
> captures deltas between refreshes and deltas from direct DMLs of the
> materialized view:

There is currently plenty of room for pseudo-MV implementations,
and there may be for a while. It's a good indication of the need for the
feature in core. An implementation in the guts of core can have
advantages that nothing else can, of course. For example, for
eager application of the deltas, nothing will be able to beat
capturing tuples that are already in RAM (and already being looked at
for possible trigger firing) into a RAM-with-spill-to-disk tuplestore.

> https://github.com/twosigma/postgresql-contrib/blob/master/pseudo_mat_views.sql
>
> There are some good ideas there, IMO, even if that implementation were
> useless because of your patch.

I don't have time to review what you've done right now, but will
save that link to look at later, if you give permission to borrow
from it (with proper attribution, of course) if there is something
that can advance what I'm doing. If such permission is not
forthcoming, I will probably avoid looking at it, to avoid any
possible copyright issues.

> Incidentally, it's really nice that PG has some "higher order" SQL
> features that make this sort of thing easier. In particular, here, row
> values and record types, and being able to refer to a table as a column
> of the table's record type.

Yeah, I found that quite handy in developing the REFRESH feature,
and expect to be using it in incremental maintenance.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Nico Williams <nico(at)cryptonector(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-01-20 20:08:51
Message-ID: 20170120200849.GB1838@localhost
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Jan 20, 2017 at 01:37:33PM -0600, Kevin Grittner wrote:
> On Thu, Jan 19, 2017 at 4:14 PM, Nico Williams <nico(at)cryptonector(dot)com> wrote:
>
> Reviews welcome!

I will review.

> There is currently plenty of room for pseudo-MV implementations,
> and may be for a while. It's a good indication of the need for the
> feature in core. An implementation in the guts of core can have
> advantages that nothing else can, of course. For example, for
> eager application of the deltas, nothing will be able to beat
> capturing tuples already in RAM and being looked at for possible
> trigger firing into a RAM-with-spill-to-disk tuplestore.

BTW, automatic updates of certain types of MVs should be easy: add
constraints based on NEW/OLD rows from synthetic triggers to the
underlying query. However, there is a bug in the query planner that
prevents this from being very fast. At some point I want to tackle that
bug. Basically, the planner does not notice that a table source in a
join has a lookup key sufficiently well-specified by those additional
constraints that it should be the first table source in the outermost
loop.

> I don't have time to review what you've done right now, but will
> save that link to look at later, if you give permission to borrow
> from it (with proper attribution, of course) if there is something
> that can advance what I'm doing. If such permission is not
> forthcoming, I will probably avoid looking at it, to avoid any
> possible copyright issues.

Our intention is to contribute this. We're willing to sign reasonable
contribution agreements.

I'd appreciate a review, for sure. Thanks!

> > Incidentally, it's really nice that PG has some "higher order" SQL
> > features that make this sort of thing easier. In particular, here, row
> > values and record types, and being able to refer to a table as a column
> > of the table's record type.
>
> Yeah, I found that quite handy in developing the REFRESH feature,
> and expect to be using it in incremental maintenance.

Indeed, I *copied* the pattern. However, I didn't have to generate
SELECT statements that include column names, as I was able to just
compare table source row values. There's a gotcha w.r.t. NULL columns,
but it affects the built-in REFRESH as well, IIRC. The commentary in
our implementation discusses that in more detail.
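
The short version of the gotcha is the usual one with SQL nulls
(illustrative only; the commentary has the specifics):

SELECT NULL::int = NULL::int;                     -- NULL, so a WHERE clause drops the row
SELECT NULL::int IS NOT DISTINCT FROM NULL::int;  -- true

-- column-wise delta comparisons therefore need IS [NOT] DISTINCT FROM
-- (or set operations such as EXCEPT, which treat NULLs as not distinct)
-- rather than plain "=".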

Nico
--


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Nico Williams <nico(at)cryptonector(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-01-20 21:37:20
Message-ID: CACjxUsNKL4Dh0iaVVioqqvXg1WrPGrR6mMahEv0R2NhuGANEYw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Jan 20, 2017 at 2:08 PM, Nico Williams <nico(at)cryptonector(dot)com> wrote:
> On Fri, Jan 20, 2017 at 01:37:33PM -0600, Kevin Grittner wrote:

>> There is currently plenty of room for pseudo-MV implementations,
>> and may be for a while. It's a good indication of the need for the
>> feature in core. An implementation in the guts of core can have
>> advantages that nothing else can, of course. For example, for
>> eager application of the deltas, nothing will be able to beat
>> capturing tuples already in RAM and being looked at for possible
>> trigger firing into a RAM-with-spill-to-disk tuplestore.
>
> BTW, automatic updates of certain types of MVs should be easy: add
> constraints based on NEW/OLD rows from synthetic triggers to the
> underlying query.

Convincing me that this is a good idea for actual MVs, versus
pseudo-MVs using tables, would be an uphill battle. I recognize
the need to distinguish between MVs which contain recursive CTEs in
their definitions and MVs that don't, so that the DRed algorithm
can be used for the former and the counting algorithm for the
latter; but firing triggers for row-at-a-time maintenance is not
going to be efficient for very many cases, and the cost of
identifying those cases to handle them differently is probably
going to exceed any gains. Comparative benchmarks, once there is
an implementation using set-based techniques, could potentially
convince me; but there's not much point arguing about it before
that exists. :-)

> However, there is a bug in the query planner that prevents this
> from being very fast. At some point I want to tackle that bug.

What bug is that?

> Basically, the planner does not notice that a table source in a
> join has a lookup key sufficiently well-specified by those additional
> constraints that it should be the first table source in the outermost
> loop.

Is that a description of what you see as the bug? Can you give an
example, to clarify the point?

I am dubious, though, of the approach in general, as stated above.

>> I don't have time to review what you've done right now, but will
>> save that link to look at later, if you give permission to borrow
>> from it (with proper attribution, of course) if there is something
>> that can advance what I'm doing. If such permission is not
>> forthcoming, I will probably avoid looking at it, to avoid any
>> possible copyright issues.
>
> Our intention is to contribute this. We're willing to sign
> reasonable contribution agreements.

Posting a patch to these lists constitutes an assertion that you
have authority to share the IP, and are doing so. Referencing a
URL is a bit iffy, since it doesn't leave an archival copy of the
contribution under the community's control.

> I'd appreciate a review, for sure. Thanks!

Would it be possible to get your approach running using tables
and/or (non-materialized) views as an extension? A trigger-based
way to maintain pseudo-MVs via triggers might make an interesting
extension, possibly even included in contrib if it could be shown
to have advantages over built-in MVs for some non-trivial
applications.

> There's a gotcha w.r.t. NULL columns, but it affects the built-in
> REFRESH as well, IIRC. The commentary in our implementation
> discusses that in more detail.

Could you report that on a new thread on the lists? I've seen
comments about such a "gotcha", but am not clear on the details.
It probably deserves its own thread. Once understood, we can
probably fix it.

Thanks!

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Nico Williams <nico(at)cryptonector(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Updating MATERIALIZED VIEWs (Re: delta relations in AFTER triggers)
Date: 2017-01-20 23:38:22
Message-ID: 20170120233822.GC1838@localhost
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

[Looking at your patch I see that it's not quite related to MVs, so I'm
changing the Subject. Apologies for the noise.]

[Responding out of order.]

On Fri, Jan 20, 2017 at 03:37:20PM -0600, Kevin Grittner wrote:
> On Fri, Jan 20, 2017 at 2:08 PM, Nico Williams <nico(at)cryptonector(dot)com> wrote:
> > BTW, automatic updates of certain types of MVs should be easy: add
> > constraints based on NEW/OLD rows from synthetic triggers to the
> > underlying query.
>
> Convincing me that this is a good idea for actual MVs, versus
> pseudo-MVs using tables, would be an uphill battle. [...]

I don't think it's necessary, and I would not insist on it.

My alternative MV implementation lets _me_ choose when to update an MV
synchronously, and when to defer refreshes, by using [hand-coded]
triggers. This is good enough for me.

If these triggers could be automatically generated, that sure would be
nice, but some control would be needed over when to update the MV vs.
mark it as needing a refresh.

> > Our intention is to contribute this. We're willing to sign
> > reasonable contribution agreements.
>
> Posting a patch to these lists constitutes an assertion that you
> have authority to share the IP, and are doing so. Referencing a
> URL is a bit iffy, since it doesn't leave an archival copy of the
> contribution under the community's control.

Fair enough. I'll post the source file itself. I've not done the work
of properly integrating it because I need to gauge interest first,
before dedicating a lot of effort to it.

> I am dubious, though, of the approach in general, as stated above.

I'm using this _now_. With two caveats:

a) the trigger functions needed to either mark an MV as needing a
refresh, or else to update it directly, are hand-coded, and

b) I chose which operations yield synchronous MV updates and which defer
to a refresh.

The MV, in my scheme, is really just a table with triggers that update a
deltas table the same way that a refresh would. A refresh locks the
table, disables those triggers, populates another table with the current
output of the underlying view, compares to the previous materialization,
and lastly generates, records, and applies deltas to the
materialization.
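
In SQL terms the refresh step is roughly the following sketch
(hypothetical names such as my_mv and my_mv_source_view; the real
code in pseudo_mat_views.sql is more careful about details):

BEGIN;
LOCK TABLE my_mv IN EXCLUSIVE MODE;
ALTER TABLE my_mv DISABLE TRIGGER USER;      -- the delta-capturing triggers

CREATE TEMP TABLE new_snap AS
  SELECT * FROM my_mv_source_view;           -- current output of the view

CREATE TEMP TABLE to_delete AS
  SELECT * FROM my_mv EXCEPT SELECT * FROM new_snap;
CREATE TEMP TABLE to_insert AS
  SELECT * FROM new_snap EXCEPT SELECT * FROM my_mv;

-- (record to_delete/to_insert in the deltas table here, then apply them)
DELETE FROM my_mv m USING to_delete d WHERE m = d;   -- whole-row comparison
INSERT INTO my_mv SELECT * FROM to_insert;

ALTER TABLE my_mv ENABLE TRIGGER USER;
COMMIT;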

To give an example, adding a user to a group -> generally fast; deleting
a user (and thus all their group memberships) -> potentially very slow.

The "add a user to a group" case can then yield near real-time updates
of external caches, while the other case results in a deferred REFRESH
so as to not slow down the current transaction. The deferred REFRESH is
not deferred too long, so the net effect is still very fast updates of
external caches.

> > However, there is a bug in the query planner that prevents this
> > from being very fast. At some point I want to tackle that bug.
>
> What bug is that?

I... asked for help on the IRC #postgresql channel. I never posted here
about it.

Revisiting it now... the main problem was query _preparation time_, not
execution time. So perhaps not so bad. Still, it's worth looking into.

The query was something like this:

SELECT v.data->'verb_name' || '^' || (r.data->'named_acl_name') AS grant_name,
       grantee.right_entity_id AS grantee_id
FROM relationships grantee
JOIN relationships "grant" ON
     grantee.left_entity_id = "grant".right_entity_id AND
     grantee.relationship_type_id IN (10421, 10431, 13591, 13921) AND
     "grant".relationship_type_id = 10331
JOIN relationships perm_actions ON
     grantee.left_entity_id = perm_actions.right_entity_id AND
     perm_actions.relationship_type_id = 10381
JOIN relationships verb_in_vs ON
     verb_in_vs.right_entity_id = perm_actions.left_entity_id AND
     verb_in_vs.relationship_type_id = 10371
JOIN entities v ON v.id = verb_in_vs.left_entity_id
JOIN entities r ON r.id = "grant".left_entity_id;

(This query uses a bit of an EAV schema. There's an "entities" table
with an hstore column for storing attributes ("data") and another
table, "relationships" that has (relationship_type_id, left_entity_id,
right_entity_id) columns and which is indexed by both, left_entity_id
and right_entity_id. EAV schemas hide relevant information from the
query planner, so there is that.)

The query plan for this is about as fast as one could hope. After all,
it has to scan many of the rows.

Now suppose we were adding a new 'grantee' and wanted to generate the
additions that would result in the MV. We could add this constraint to
the query:

WHERE grantee.left_entity_id = NEW.left_entity_id AND
grantee.right_entity_id = NEW.right_entity_id;

Now we've basically [almost] fully-specified the primary key for the
grantee table source.

The resulting query plan is actually pretty good. It has the grantee
table source as the first table source in the inner-most loop.

If I re-write the query using WITH CTEs to get a similarly good plan,
then query preparation runs about 250x faster.
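
The rewrite simply hoists the now well-constrained grantee scan into
a CTE, which today also acts as an optimization fence; roughly this
sketch, with parameter placeholders standing in for the NEW values:

WITH grantee AS (
    SELECT *
      FROM relationships
     WHERE relationship_type_id IN (10421, 10431, 13591, 13921)
       AND left_entity_id = $1        -- NEW.left_entity_id
       AND right_entity_id = $2       -- NEW.right_entity_id
)
SELECT v.data->'verb_name' || '^' || (r.data->'named_acl_name') AS grant_name,
       grantee.right_entity_id AS grantee_id
FROM grantee
JOIN relationships "grant" ON
     grantee.left_entity_id = "grant".right_entity_id AND
     "grant".relationship_type_id = 10331
JOIN relationships perm_actions ON
     grantee.left_entity_id = perm_actions.right_entity_id AND
     perm_actions.relationship_type_id = 10381
JOIN relationships verb_in_vs ON
     verb_in_vs.right_entity_id = perm_actions.left_entity_id AND
     verb_in_vs.relationship_type_id = 10371
JOIN entities v ON v.id = verb_in_vs.left_entity_id
JOIN entities r ON r.id = "grant".left_entity_id;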

Since I want to update (for some kinds of changes) an MV based on this
query, and I want to do it from triggers, I need this to be fast.

I'm not sure when trigger functions' queries get re-prepared, so maybe
I'm concerned over nothing much?

> > I'd appreciate a review, for sure. Thanks!
>
> Would it be possible to get your approach running using tables
> and/or (non-materialized) views as an extension? A trigger-based
> way to maintain pseudo-MVs via triggers might make an interesting
> extension, possibly even included in contrib if it could be shown
> to have advantages over built-in MVs for some non-trivial
> applications.

I can certainly send a patch to include this in /contrib.

> > There's a gotcha w.r.t. NULL columns, but it affects the built-in
> > REFRESH as well, IIRC. The commentary in our implementation
> > discusses that in more detail.
>
> Could you report that on a new thread on the lists? I've seen
> comments about such a "gotcha", but am not clear on the details.
> It probably deserves its own thread. Once understood, we can
> probably fix it.

OK, will do.

Thanks,

Nico
--


From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Nico Williams <nico(at)cryptonector(dot)com>, Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Updating MATERIALIZED VIEWs (Re: delta relations in AFTER triggers)
Date: 2017-01-21 23:05:50
Message-ID: 2016db76-02ba-7cc3-ae4e-47e25f967663@BlueTreble.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 1/20/17 5:38 PM, Nico Williams wrote:
> If these triggers could be automatically generated, that sure would be
> nice, but some control would be needed over when to update the MV vs.
> mark it as needing a refresh.

FWIW, pg_classy[1], which is still a work in progress, would allow for
that. The idea is that you define a code template which you can then
call with arguments (such as the name of a matview table), and those
arguments get put into the template before executing the resulting SQL.

Most of the basic framework for that is in place; I just need to finish
code that will allow for the extension to track arbitrary database
objects that were created.

1: https://github.com/decibel/pg_classy/blob/master/doc/pg_classy.asc
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)


From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Nico Williams <nico(at)cryptonector(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-01-31 03:45:52
Message-ID: CAB7nPqTY5C_uaCH2f1b+S_JmnLNu3psALjsbkJZYpM7tOBWbbA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, Jan 21, 2017 at 6:37 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> On Fri, Jan 20, 2017 at 2:08 PM, Nico Williams <nico(at)cryptonector(dot)com> wrote:
>> On Fri, Jan 20, 2017 at 01:37:33PM -0600, Kevin Grittner wrote:
>
>>> There is currently plenty of room for pseudo-MV implementations,
>>> and may be for a while. It's a good indication of the need for the
>>> feature in core. An implementation in the guts of core can have
>>> advantages that nothing else can, of course. For example, for
>>> eager application of the deltas, nothing will be able to beat
>>> capturing tuples already in RAM and being looked at for possible
>>> trigger firing into a RAM-with-spill-to-disk tuplestore.
>>
>> BTW, automatic updates of certain types of MVs should be easy: add
>> constraints based on NEW/OLD rows from synthetic triggers to the
>> underlying query.
>
> Convincing me that this is a good idea for actual MVs, versus
> pseudo-MVs using tables, would be an uphill battle. I recognize
> the need to distinguish between MVs which contain recursive CTEs in
> their definitions and MVs that don't, so that the DRed algorithm
> can be used for the former and the counting algorithm for the
> latter; but firing triggers for row-at-a-time maintenance is not
> going to be efficient for very many cases, and the cost of
> identifying those cases to handle them differently is probably
> going to exceed any gains. Comparative benchmarks, once there is
> an implementation using set-based techniques, could potentially
> convince me; but there's not much point arguing about it before
> that exists. :-)

I have moved this patch to the next CF. It would be nice to make
progress on this topic in PG10.
--
Michael


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-02-21 01:44:35
Message-ID: CAEepm=31mN9mGQrzYDKa4WG7qYnjW8hPzv8SVFwDtOYW2v8iHg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Jan 20, 2017 at 2:49 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> Attached is v9 which fixes bitrot from v8. No other changes.
>
> Still needs review.

This patch still applies, builds cleanly after a small modification,
includes regression tests, and the tests pass. The modification I
needed to make was due to this compile error:

nodeNamedtuplestorescan.c:154:19: error: no member named
'ps_TupFromTlist' in 'struct PlanState'
scanstate->ss.ps.ps_TupFromTlist = false;

Commit ea15e18677fc2eff3135023e27f69ed8821554ed got rid of that member
of PlanState and I assume based on other changes in that commit that
the thing to do was simply to remove that line. Having done that, it
builds cleanly.

+INSERT INTO level1_table(level1_no)
+ SELECT generate_series(1,200);
+INSERT INTO level2_table(level2_no, parent_no)
+ SELECT level2_no, level2_no / 50 + 1 AS parent_no
+ FROM generate_series(1,9999) level2_no;
+INSERT INTO all_level_status(level, node_no, status)
+ SELECT 1, level1_no, 0 FROM level1_table;
+INSERT INTO all_level_status(level, node_no, status)
+ SELECT 2, level2_no, 0 FROM level2_table;
+INSERT INTO level1_table(level1_no)
+ SELECT generate_series(201,1000);
+DELETE FROM level1_table WHERE level1_no BETWEEN 201 AND 1000;
+DELETE FROM level1_table WHERE level1_no BETWEEN 100000000 AND 100000010;
+SELECT count(*) FROM level1_table;
+ count
+-------
+ 200
+(1 row)

Was it intentional that this test doesn't include any statements that
reach the case where the trigger does RAISE EXCEPTION 'RI error'?
From the output generated there doesn't seem to be any evidence that
the triggers run at all, though I know from playing around with this
that they do:

postgres=# delete from level1_table where level1_no = 42;
ERROR: RI error
CONTEXT: PL/pgSQL function level1_table_ri_parent_del_func() line 6 at RAISE

+ * Portions Copyright (c) 1996-2014, PostgreSQL Global Development Group

These copyright messages are missing 3 years' worth of bugfixes.

+SPI_get_caller_relation(const char *name)

Do we need this function? It's not used by the implementation. If it
does have a good use for end-users, then perhaps it should be called
something like SPI_get_registered_relation, to make it clear that it
will return whatever you registered with SPI_register_relation,
instead of introducing this 'caller' terminology?

+typedef struct NamedTuplestoreScan
+{
+    Scan        scan;
+    char       *enrname;
+} NamedTuplestoreScan;

Nearly all plan node structs have a comment for the members below
'scan'; I think this needs one too because 'enrname' is not
self-explanatory.

/*
+ * Capture the NEW and OLD transition TABLE tuplestores (if specified for
+ * this trigger).
+ */
+ if (trigdata->tg_newtable || trigdata->tg_oldtable)
+ {
+     estate.queryEnv = create_queryEnv();
+     if (trigdata->tg_newtable)
+     {
+         Enr enr = palloc(sizeof(EnrData));
+
+         enr->md.name = trigdata->tg_trigger->tgnewtable;
+         enr->md.tupdesc = trigdata->tg_relation->rd_att;
+         enr->md.enrtuples = tuplestore_tuple_count(trigdata->tg_newtable);
+         enr->reldata = trigdata->tg_newtable;
+         register_enr(estate.queryEnv, enr);
+         SPI_register_relation(enr);
+     }

Why do we have to call register_enr and also SPI_register_relation here?

On Mon, Dec 19, 2016 at 4:35 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Sun, Dec 18, 2016 at 3:15 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>> There were a few changes Thomas included in the version he posted,
>> without really delving into an explanation for those changes. Some
>> or all of them are likely to be worthwhile, but I would rather
>> incorporate them based on explicit discussion, so this version
>> doesn't do much other than generalize the interface a little,
>> change some names, and add more regression tests for the new
>> feature. (The examples I worked up for the rough proof of concept
>> of enforcement of RI through set logic rather than row-at-a-time
>> navigation were the basis for the new tests, so the idea won't get
>> totally lost.) Thomas, please discuss each suggested change (e.g.,
>> the inclusion of the query environment in the parameter list of a
>> few more functions).
>
> I was looking for omissions that would cause some kind of statements
> to miss out on ENRs arbitrarily. It seemed to me that
> parse_analyze_varparams should take a QueryEnvironment, mirroring
> parse_analyze, so that PrepareQuery could pass it in. Otherwise,
> PREPARE wouldn't see ENRs. Is there any reason why SPI_prepare should
> see them but PREPARE not?

Any thoughts about that?

More soon.

--
Thomas Munro
http://www.enterprisedb.com


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-02-21 03:43:55
Message-ID: CAEepm=1YHR9fDuOnjqGdMiijOvO4zN3d=M_jvyS5R1g9UGdoZw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Feb 21, 2017 at 7:14 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Fri, Jan 20, 2017 at 2:49 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>> Attached is v9 which fixes bitrot from v8. No other changes.
>>
>> Still needs review.

Based on a suggestion from Robert off-list I tried inserting into a
delta relation from a trigger function and discovered that it
segfaults:

* frame #0: 0x00000001057705a6
postgres`addRangeTableEntryForRelation(pstate=0x00007fa58186a4d0,
rel=0x0000000000000000, alias=0x0000000000000000, inh='\0',
inFromCl='\0') + 70 at parse_relation.c:1280 [opt]
frame #1: 0x000000010575bbda
postgres`setTargetTable(pstate=0x00007fa58186a4d0,
relation=0x00007fa58186a098, inh=<unavailable>, alsoSource='\0',
requiredPerms=1) + 90 at parse_clause.c:199 [opt]
frame #2: 0x0000000105738530 postgres`transformStmt [inlined]
transformInsertStmt(pstate=<unavailable>) + 69 at analyze.c:540 [opt]
frame #3: 0x00000001057384eb
postgres`transformStmt(pstate=<unavailable>, parseTree=<unavailable>)
+ 2411 at analyze.c:279 [opt]
frame #4: 0x0000000105737a42
postgres`transformTopLevelStmt(pstate=<unavailable>,
parseTree=0x00007fa58186a438) + 18 at analyze.c:192 [opt]
frame #5: 0x00000001059408d0
postgres`pg_analyze_and_rewrite_params(parsetree=0x00007fa58186a438,
query_string="insert into d values (1000000, 1000000, 'x')",
parserSetup=(plpgsql.so`plpgsql_parser_setup at pl_comp.c:1017),
parserSetupArg=0x00007fa58185c2a0, queryEnv=0x00007fa581857798) + 128
at postgres.c:706 [opt]

--
Thomas Munro
http://www.enterprisedb.com


From: David Steele <david(at)pgmasters(dot)net>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-02 15:04:44
Message-ID: ee4038f8-25be-1667-e0f0-05ae09ca7e05@pgmasters.net
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi Kevin,

On 2/20/17 10:43 PM, Thomas Munro wrote:
> On Tue, Feb 21, 2017 at 7:14 AM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>> On Fri, Jan 20, 2017 at 2:49 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>>> Attached is v9 which fixes bitrot from v8. No other changes.
>>>
>>> Still needs review.
>
> Based on a suggestion from Robert off-list I tried inserting into a
> delta relation from a trigger function and discovered that it
> segfaults:
>
> * frame #0: 0x00000001057705a6
> postgres`addRangeTableEntryForRelation(pstate=0x00007fa58186a4d0,
> rel=0x0000000000000000, alias=0x0000000000000000, inh='\0',
> inFromCl='\0') + 70 at parse_relation.c:1280 [opt]
> frame #1: 0x000000010575bbda
> postgres`setTargetTable(pstate=0x00007fa58186a4d0,
> relation=0x00007fa58186a098, inh=<unavailable>, alsoSource='\0',
> requiredPerms=1) + 90 at parse_clause.c:199 [opt]
> frame #2: 0x0000000105738530 postgres`transformStmt [inlined]
> transformInsertStmt(pstate=<unavailable>) + 69 at analyze.c:540 [opt]
> frame #3: 0x00000001057384eb
> postgres`transformStmt(pstate=<unavailable>, parseTree=<unavailable>)
> + 2411 at analyze.c:279 [opt]
> frame #4: 0x0000000105737a42
> postgres`transformTopLevelStmt(pstate=<unavailable>,
> parseTree=0x00007fa58186a438) + 18 at analyze.c:192 [opt]
> frame #5: 0x00000001059408d0
> postgres`pg_analyze_and_rewrite_params(parsetree=0x00007fa58186a438,
> query_string="insert into d values (1000000, 1000000, 'x')",
> parserSetup=(plpgsql.so`plpgsql_parser_setup at pl_comp.c:1017),
> parserSetupArg=0x00007fa58185c2a0, queryEnv=0x00007fa581857798) + 128
> at postgres.c:706 [opt]

Do you know when you will have a new patch ready?

This looks like an interesting and important feature but I think it's
going to have to come together quickly if it's going to make it into v10.

Thanks,
--
-David
david(at)pgmasters(dot)net


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: David Steele <david(at)pgmasters(dot)net>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-02 22:44:01
Message-ID: CACjxUsO4g_-XBMDnZRO-4=agpNUpGONOmOiz-kH4Xmk5FHU8hQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Mar 2, 2017 at 9:04 AM, David Steele <david(at)pgmasters(dot)net> wrote:
> On 2/20/17 10:43 PM, Thomas Munro wrote:

>> Based on a suggestion from Robert off-list I tried inserting into a
>> delta relation from a trigger function and discovered that it
>> segfaults:
>>
>> * frame #0: 0x00000001057705a6
>> postgres`addRangeTableEntryForRelation(pstate=0x00007fa58186a4d0,
>> rel=0x0000000000000000, alias=0x0000000000000000, inh='\0',
>> inFromCl='\0') + 70 at parse_relation.c:1280 [opt]
>> frame #1: 0x000000010575bbda
>> postgres`setTargetTable(pstate=0x00007fa58186a4d0,
>> relation=0x00007fa58186a098, inh=<unavailable>, alsoSource='\0',
>> requiredPerms=1) + 90 at parse_clause.c:199 [opt]
>> frame #2: 0x0000000105738530 postgres`transformStmt [inlined]
>> transformInsertStmt(pstate=<unavailable>) + 69 at analyze.c:540 [opt]
>> frame #3: 0x00000001057384eb
>> postgres`transformStmt(pstate=<unavailable>, parseTree=<unavailable>)
>> + 2411 at analyze.c:279 [opt]
>> frame #4: 0x0000000105737a42
>> postgres`transformTopLevelStmt(pstate=<unavailable>,
>> parseTree=0x00007fa58186a438) + 18 at analyze.c:192 [opt]
>> frame #5: 0x00000001059408d0
>> postgres`pg_analyze_and_rewrite_params(parsetree=0x00007fa58186a438,
>> query_string="insert into d values (1000000, 1000000, 'x')",
>> parserSetup=(plpgsql.so`plpgsql_parser_setup at pl_comp.c:1017),
>> parserSetupArg=0x00007fa58185c2a0, queryEnv=0x00007fa581857798) + 128
>> at postgres.c:706 [opt]
>
> Do you know when you will have a new patch ready?
>
> This looks like an interesting and important feature but I think it's
> going to have to come together quickly if it's going to make it into v10.

I hope to post a new version addressing review comments by Monday (6 March).

--
Kevin Grittner


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-08 00:28:44
Message-ID: CACjxUsPfUUa813oDvJRx2wuiqHXO3VsCLQzcuy0r=UEyS-xOjQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Feb 20, 2017 at 7:44 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:

> Was it intentional that this test doesn't include any statements that
> reach the case where the trigger does RAISE EXCEPTION 'RI error'?
> From the output generated there doesn't seem to be any evidence that
> the triggers run at all, though I know from playing around with this
> that they do

Tests expanded to cover more. Some bugs found and fixed in the process. :-/

> + * Portions Copyright (c) 1996-2014, PostgreSQL Global Development Group
>
> These copyright messages are missing 3 years' worth of bugfixes.

Those were only in files created with the initial patch in 2014. No
bug fixes missing. Updated the years to 2017, though.

> +SPI_get_caller_relation(const char *name)
>
> Do we need this function? It's not used by the implementation.

Good point. It seemed useful way back when, but since no uses for
it emerged, it should be removed until such time (if any) as it
proves useful.

> +typedef struct NamedTuplestoreScan
> +{
> + Scan scan;
> + char *enrname;
> +} NamedTuplestoreScan;
>
> Nearly all plan node structs have a comment for the members below
> 'scan'; I think this one needs one too, because 'enrname' is not
> self-explanatory.

Done.

> /*
> + * Capture the NEW and OLD transition TABLE tuplestores (if specified for
> + * this trigger).
> + */
> + if (trigdata->tg_newtable || trigdata->tg_oldtable)
> + {
> + estate.queryEnv = create_queryEnv();
> + if (trigdata->tg_newtable)
> + {
> + Enr enr = palloc(sizeof(EnrData));
> +
> + enr->md.name = trigdata->tg_trigger->tgnewtable;
> + enr->md.tupdesc = trigdata->tg_relation->rd_att;
> + enr->md.enrtuples = tuplestore_tuple_count(trigdata->tg_newtable);
> + enr->reldata = trigdata->tg_newtable;
> + register_enr(estate.queryEnv, enr);
> + SPI_register_relation(enr);
> + }
>
> Why do we have to call register_enr and also SPI_register_relation here?

Essentially, because plpgsql does some things through SPI and some
things not. Both cases are covered.

> On Mon, Dec 19, 2016 at 4:35 AM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>> On Sun, Dec 18, 2016 at 3:15 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>>> There were a few changes Thomas included in the version he posted,
>>> without really delving into an explanation for those changes. Some
>>> or all of them are likely to be worthwhile, but I would rather
>>> incorporate them based on explicit discussion, so this version
>>> doesn't do much other than generalize the interface a little,
>>> change some names, and add more regression tests for the new
>>> feature. (The examples I worked up for the rough proof of concept
>>> of enforcement of RI through set logic rather than row-at-a-time
>>> navigation were the basis for the new tests, so the idea won't get
>>> totally lost.) Thomas, please discuss each suggested change (e.g.,
>>> the inclusion of the query environment in the parameter list of a
>>> few more functions).
>>
>> I was looking for omissions that would cause some kind of statements
>> to miss out on ENRs arbitrarily. It seemed to me that
>> parse_analyze_varparams should take a QueryEnvironment, mirroring
>> parse_analyze, so that PrepareQuery could pass it in. Otherwise,
>> PREPARE wouldn't see ENRs. Is there any reason why SPI_prepare should
>> see them but PREPARE not?
>
> Any thoughts about that?

Do you see any way to test that code, or would it be dead code there
"just in case" we later decided to do something that needed it? I'm
not a big fan of the latter. I've had to spend too much time
maintaining and/or ripping out code that fits that description.

On Mon, Feb 20, 2017 at 9:43 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Tue, Feb 21, 2017 at 7:14 AM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>> On Fri, Jan 20, 2017 at 2:49 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>>> Attached is v9 which fixes bitrot from v8. No other changes.
>>>
>>> Still needs review.
>
> Based on a suggestion from Robert off-list I tried inserting into a
> delta relation from a trigger function and discovered that it
> segfaults

Fixed. Such an attempt now generates something like this:

ERROR: relation "d" cannot be the target of a modifying statement
CONTEXT: SQL statement "INSERT INTO d VALUES (1000000, 1000000, 'x')"
PL/pgSQL function level2_table_bad_usage_func() line 3 at SQL statement

New patch attached.

Miscellanea:

Do you suppose we should have all PLs that are part of the base
distro covered?

What is necessary to indicate an additional SQL feature covered?

--
Kevin Grittner

Attachment Content-Type Size
transition-v11.diff.gz application/x-gzip 31.5 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-09 22:48:12
Message-ID: CACjxUsNgVAvwmQEKAfLk0K4wTsa3SF6RUNKNTibSnefO9Fu0ig@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Mar 7, 2017 at 6:28 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:

> New patch attached.

And bit-rotted less than 24 hours later by fcec6caa.

New patch attached just to fix bit-rot.

That conflicting patch might be a candidate to merge into the new
Ephemeral Named Relation provided by my patch, for more flexibility
and extensibility...

--
Kevin Grittner

Attachment Content-Type Size
transition-v12.diff.gz application/x-gzip 31.5 KB

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-12 21:08:12
Message-ID: CAEepm=2DoBUzJjzT3Xo9LpBTg5ZprPcsGDpoadz+oqekLukCeg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Mar 10, 2017 at 11:48 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> On Tue, Mar 7, 2017 at 6:28 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>
>> New patch attached.
>
> And bit-rotted less than 24 hours later by fcec6caa.
>
> New patch attached just to fix bit-rot.
>
> That conflicting patch might be a candidate to merge into the new
> Ephemeral Named Relation provided by my patch, for more flexibility
> and extensibility...

Thanks. I found a new way to break it: run the trigger function so
that the plan is cached by plpgsql, then ALTER TABLE incompatibly,
then run the trigger function again. See attached.
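
For reference, a minimal sketch of that sequence (table, column, and
function names here are illustrative, not taken from the attached script):

CREATE TABLE t (id int, name text);

CREATE FUNCTION t_upd_func() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  -- This query against the transition table is planned and cached by
  -- plpgsql on first execution.
  RAISE NOTICE '%', (SELECT string_agg(id || '=' || name, ',') FROM newtab);
  RETURN NULL;
END;
$$;

CREATE TRIGGER t_upd AFTER UPDATE ON t
  REFERENCING NEW TABLE AS newtab
  FOR EACH STATEMENT EXECUTE PROCEDURE t_upd_func();

INSERT INTO t VALUES (1, 'one');
UPDATE t SET name = name;        -- runs the trigger, caching its plan
ALTER TABLE t DROP COLUMN name;  -- incompatible change to the subject table
UPDATE t SET id = id;            -- re-runs the trigger with the stale cached plan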

On Wed, Mar 8, 2017 at 1:28 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>>> I was looking for omissions that would cause some kind of statements
>>> to miss out on ENRs arbitrarily. It seemed to me that
>>> parse_analyze_varparams should take a QueryEnvironment, mirroring
>>> parse_analyze, so that PrepareQuery could pass it in. Otherwise,
>>> PREPARE wouldn't see ENRs. Is there any reason why SPI_prepare should
>>> see them but PREPARE not?
>>
>> Any thoughts about that?
>
> Do you see any way to test that code, or would it be dead code there
> "just in case" we later decided to do something that needed it? I'm
> not a big fan of the latter. I've had to spend too much time
> maintaining and/or ripping out code that fits that description.

I guess you could test it by reaching PREPARE and EXECUTE via dynamic
SQL inside a plpgsql function (ie EXECUTE 'EXECUTE ...').
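
Something like this hypothetical trigger function (assuming the transition
table was declared with REFERENCING NEW TABLE AS newtab) would exercise
that path:

CREATE FUNCTION prepare_path_test() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  -- PREPARE/EXECUTE reached via dynamic SQL from inside the trigger.
  EXECUTE 'PREPARE ttest AS SELECT count(*) FROM newtab';
  EXECUTE 'EXECUTE ttest';
  EXECUTE 'DEALLOCATE ttest';
  RETURN NULL;
END;
$$;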

Really I was just trying to be thorough and examine every path into
the parser and analyser to make sure they all supported the new
QueryEnvironment argument. When I found that the PREPARE path didn't,
my first thought was that there may be PLs that wouldn't be able to
take advantage of plan reuse any other way, but I see that all the
built-in PLs expose SPI_prepare, so that isn't a problem for them.

You're probably right that it's not actually very useful. We've
recorded this obscure omission in the archives.

> Miscellanea:
>
> Do you suppose we should have all PLs that are part of the base
> distro covered?

I vote for doing that in Postgres 11. My pl/python patch[1] may be a
useful starting point, but I haven't submitted it in this CF and
nobody has shown up with pl/tcl or pl/perl versions.

> What is necessary to indicate an additional SQL feature covered?

I assume you're talking about information_schema.sql_features, and I
see you've created a new thread to talk about that. I'm not sure
about that, but a couple of thoughts occurred to me when looking for
references to transition tables in an old draft standard I have.
These are both cases where properties of the subject table should
probably also affect access to the derived transition tables:

* What privileges implications are there for transition tables? I'm
wondering about column and row level privileges; for example, if you
can't see a column in the subject table, I'm guessing you shouldn't be
allowed to see it in the transition table either, but I'm not sure.

* In future we could consider teaching it about functional
dependencies as required by the spec; if you can SELECT id, name FROM
<subject table> GROUP BY id, I believe you should be able to SELECT
id, name FROM <transition table> GROUP BY id, but currently you can't.
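
To illustrate with a hypothetical schema (names are illustrative):

CREATE TABLE subject (id int PRIMARY KEY, name text);

-- Legal: name is functionally dependent on the primary key id.
SELECT id, name FROM subject GROUP BY id;

-- But from an AFTER trigger declared with REFERENCING NEW TABLE AS newtab,
-- the equivalent query against the transition table is rejected, presumably
-- because the tuplestore carries no key information:
--   SELECT id, name FROM newtab GROUP BY id;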

[1] https://www.postgresql.org/message-id/CAEepm=3WVmPmz3bkFtK2kcnD9Kr7hxPZ2SKJ8SfZX_VSuTeh2A@mail.gmail.com

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
segfault.sql.txt text/plain 1.1 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-13 18:51:30
Message-ID: CACjxUsO=squN2XbYBM6qLfS9co=bfGqQFxMfY+pjyAGKP_jpwQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sun, Mar 12, 2017 at 4:08 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:

> I found a new way to break it: run the trigger function so
> that the plan is cached by plpgsql, then ALTER TABLE incompatibly,
> then run the trigger function again. See attached.

The first part doesn't seem so bad. Using the transition table in a
FOR EACH STATEMENT trigger you get:

test=# update hoge set name = (name::text || name::text)::integer;
ERROR: attribute 2 has wrong type
DETAIL: Table has type integer, but query expects text.
CONTEXT: SQL statement "SELECT (SELECT string_agg(id || '=' || name,
',') FROM d)"
PL/pgSQL function hoge_upd_func() line 3 at RAISE

... while putting each row on its own line from a FOR EACH ROW
trigger you get:

test=# update hoge set name = (name::text || name::text)::integer;
ERROR: type of parameter 15 (integer) does not match that when
preparing the plan (text)
CONTEXT: SQL statement "SELECT (SELECT string_agg(old.id || '=' ||
old.name, ','))"
PL/pgSQL function hoge_upd_func() line 3 at RAISE

Does anyone think the first message needs improvement? If so, to
what?

Obviously the next part is a problem. With the transition table we
get a segfault:

test=# -- now drop column 'name'
test=# alter table hoge drop column name;
ALTER TABLE
test=# update hoge set id = id;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

While the row-oriented query manages to dodge it:

test=# alter table hoge drop column name;
ALTER TABLE
test=# update hoge set id = id;
ERROR: record "old" has no field "name"
CONTEXT: SQL statement "SELECT (SELECT string_agg(old.id || '=' ||
old.name, ','))"
PL/pgSQL function hoge_upd_func() line 3 at RAISE

I expected that existing mechanisms would have forced re-planning of
a trigger function if the table the function was attached to was
altered. Either that was a bit "optimistic", or the old TupleDesc
is used for the new plan. Will track down which it is, and fix it.

>> Do you suppose we should have all PLs that are part of the base
>> distro covered?
>
> I vote for doing that in Postgres 11. My pl/python patch[1] may be a
> useful starting point, but I haven't submitted it in this CF and
> nobody has shown up with pl/tcl or pl/perl versions.

OK, but we'd better add something to the docs saying that only C and
plpgsql triggers are supported in v10.

>> What is necessary to indicate an additional SQL feature covered?
>
> I assume you're talking about information_schema.sql_features

I had forgotten we had that in a table. I was thinking more of the docs:

https://www.postgresql.org/docs/current/static/features.html

I guess if we change one, we had better change the other. (Or are
they generated off a common source?)

> a couple of thoughts occurred to me when looking for
> references to transition tables in an old draft standard I have.
> These are both cases where properties of the subject table should
> probably also affect access to the derived transition tables:
>
> * What privileges implications are there for transition tables? I'm
> wondering about column and row level privileges; for example, if you
> can't see a column in the subject table, I'm guessing you shouldn't be
> allowed to see it in the transition table either, but I'm not sure.

I'll see how that works in FOR EACH ROW triggers. We should match
that, I think. Keep in mind that not just anyone can put a trigger
on a table.

> * In future we could consider teaching it about functional
> dependencies as required by the spec; if you can SELECT id, name FROM
> <subject table> GROUP BY id, I believe you should be able to SELECT
> id, name FROM <transition table> GROUP BY id, but currently you can't.

Interesting idea.

I'll post a new patch once I figure out the dropped column on the
cached function plan.

--
Kevin Grittner


From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-13 19:34:03
Message-ID: 20170313193403.rq5pmpovoswd65cv@alvherre.pgsql
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Kevin Grittner wrote:

> >> What is necessary to indicate an additional SQL feature covered?
> >
> > I assume you're talking about information_schema.sql_features
>
> I had forgotten we had that in a table. I was thinking more of the docs:
>
> https://www.postgresql.org/docs/current/static/features.html
>
> I guess if we change one, we had better change the other. (Or are
> they generated off a common source?)

See src/backend/catalog/sql_features.txt

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-24 00:14:43
Message-ID: CAEepm=0m-Rb_u9JAyaSJoKrxb3bYtoT69UL0=n78wCMrBoo2AQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Mar 14, 2017 at 7:51 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> On Sun, Mar 12, 2017 at 4:08 PM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>> I found a new way to break it: run the trigger function so
>> that the plan is cached by plpgsql, then ALTER TABLE incompatibly,
>> then run the trigger function again. See attached.
>
> [...]
>
> I expected that existing mechanisms would have forced re-planning of
> a trigger function if the table the function was attached to was
> altered. Either that was a bit "optimistic", or the old TupleDesc
> is used for the new plan. Will track down which it is, and fix it.

When PlanCacheRelCallback runs, I don't think it understands that
these named tuplestore RangeTblEntry objects are dependent on the
subject table. Could that be fixed like this?

@@ -2571,6 +2582,9 @@ extract_query_dependencies_walker(Node *node,
PlannerInfo *context)
if (rte->rtekind == RTE_RELATION)
context->glob->relationOids =

lappend_oid(context->glob->relationOids, rte->relid);
+ else if (rte->rtekind == RTE_NAMEDTUPLESTORE)
+ context->glob->relationOids =
+
lappend_oid(context->glob->relationOids, [subject table's OID]);
}

> I'll post a new patch once I figure out the dropped column on the
> cached function plan.

If that's fixed and the permissions question can be waved away by
saying it's the same as the per-row situation, my only other comment
would be a bikeshed issue: Enr isn't a great name for a struct.

Very keen to see this feature in PostgreSQL 10!

--
Thomas Munro
http://www.enterprisedb.com


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-24 04:36:08
Message-ID: CAEepm=260+vzK8R39BDQEKGpoPxyCV6Syxbza=aUAyBzeh4qDA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Mar 24, 2017 at 1:14 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> If that's fixed and the permissions question can be waved away by
> saying it's the same as the per-row situation, my only other comment
> would be a bikeshed issue: Enr isn't a great name for a struct.

One more thought: should this be allowed?

postgres=# create table mytab (i int) partition by list (i);
CREATE TABLE
postgres=# create table mytab1 partition of mytab for values in (42);
CREATE TABLE
postgres=# create function my_trigger_function() returns trigger as $$
begin end; $$ language plpgsql;
CREATE FUNCTION
postgres=# create trigger my_trigger after update on mytab referencing
old table as my_old for each statement execute procedure
my_trigger_function();
CREATE TRIGGER

I haven't really looked into the interaction of triggers and the new
partition feature very much but it looks like the intention is that
you shouldn't be allowed to do something that would need access to the
actual row data from a trigger that is attached to the top-level
partition:

postgres=# create trigger my_trigger before update on mytab for each
row execute procedure my_trigger_function();
ERROR: "mytab" is a partitioned table
DETAIL: Partitioned tables cannot have ROW triggers.

By the same logic, I guess that we shouldn't allow transition table
triggers to be attached to the top-level partitioned table, because it
can't really work.

You can attach ROW triggers to the concrete tables that hold real
data, which makes sense because they actually have data to capture and
feed to the trigger function:

postgres=# create trigger my_trigger before update on mytab1 for each
row execute procedure my_trigger_function();
CREATE TRIGGER

Perhaps the moral equivalent should be possible for statement triggers
with transition tables, and that already works with your patch as far
as I know. So I think your patch probably just needs to reject them
on partitioned tables.
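
In other words (a sketch reusing the mytab/mytab1 example above), the
statement-level trigger with a transition table would be attached to the
leaf partition rather than the partitioned parent:

CREATE TRIGGER my_trigger
  AFTER UPDATE ON mytab1
  REFERENCING OLD TABLE AS my_old
  FOR EACH STATEMENT
  EXECUTE PROCEDURE my_trigger_function();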

--
Thomas Munro
http://www.enterprisedb.com


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-24 21:39:10
Message-ID: CAEepm=1cEwfGdJF8npCF1gZg619QS7hh5q+9dxz0PePg1-jOsw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Mar 24, 2017 at 5:36 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Fri, Mar 24, 2017 at 1:14 PM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>> If that's fixed and the permissions question can be waved away by
>> saying it's the same as the per-row situation, my only other comment
>> would be a bikeshed issue: Enr isn't a great name for a struct.
>
> One more thought: should this be allowed?
>
> postgres=# create table mytab (i int) partition by list (i);
> CREATE TABLE
> postgres=# create table mytab1 partition of mytab for values in (42);
> CREATE TABLE
> postgres=# create function my_trigger_function() returns trigger as $$
> begin end; $$ language plpgsql;
> CREATE FUNCTION
> postgres=# create trigger my_trigger after update on mytab referencing
> old table as my_old for each statement execute procedure
> my_trigger_function();
> CREATE TRIGGER

On second thoughts, that's actually arguably a bug in committed code.
What do you think about the attached patch?

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
no-transition-tables-for-partitioned-tables.patch application/octet-stream 3.7 KB

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-26 11:39:49
Message-ID: CAEepm=1qnkXYNTxGKiZME7o0GqDaO=292_FtaD3nV-Q2umL_VQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Mar 24, 2017 at 1:14 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Tue, Mar 14, 2017 at 7:51 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>> On Sun, Mar 12, 2017 at 4:08 PM, Thomas Munro
>> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>>> I found a new way to break it: run the trigger function so
>>> that the plan is cached by plpgsql, then ALTER TABLE incompatibly,
>>> then run the trigger function again. See attached.
>>
>> [...]
>>
>> I expected that existing mechanisms would have forced re-planning of
>> a trigger function if the table the function was attached to was
>> altered. Either that was a bit "optimistic", or the old TupleDesc
>> is used for the new plan. Will track down which it is, and fix it.
>
> When PlanCacheRelCallback runs, I don't think it understands that
> these named tuplestore RangeTblEntry objects are dependent on the
> subject table. Could that be fixed like this?
>
> @@ -2571,6 +2582,9 @@ extract_query_dependencies_walker(Node *node,
> PlannerInfo *context)
> if (rte->rtekind == RTE_RELATION)
> context->glob->relationOids =
>
> lappend_oid(context->glob->relationOids, rte->relid);
> + else if (rte->rtekind == RTE_NAMEDTUPLESTORE)
> + context->glob->relationOids =
> +
> lappend_oid(context->glob->relationOids, [subject table's OID]);

I'm not sure if this is the right approach and it may have style
issues, but it does fix the crashing in the ALTER TABLE case I
reported: see attached patch which applies on top of your v12.

BTW I had to make the following change to your v12 because of commit b8d7f053:

/*
* initialize child expressions
*/
- scanstate->ss.ps.targetlist = (List *)
- ExecInitExpr((Expr *) node->scan.plan.targetlist,
- (PlanState *) scanstate);
- scanstate->ss.ps.qual = (List *)
- ExecInitExpr((Expr *) node->scan.plan.qual,
- (PlanState *) scanstate);
+ scanstate->ss.ps.qual =
+ ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate);

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
experimental-fix-for-transition-table-invalidation.patch application/octet-stream 2.8 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-30 16:51:59
Message-ID: CACjxUsNhdm4ZCgaVreLK5kAwHTZUkqJAVXiywwi-HNVsuTLMnA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Mar 23, 2017 at 11:36 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:

> One more thought: should this be allowed?
>
> postgres=# create table mytab (i int) partition by list (i);
> CREATE TABLE
> postgres=# create table mytab1 partition of mytab for values in (42);
> CREATE TABLE
> postgres=# create function my_trigger_function() returns trigger as $$
> begin end; $$ language plpgsql;
> CREATE FUNCTION
> postgres=# create trigger my_trigger after update on mytab referencing
> old table as my_old for each statement execute procedure
> my_trigger_function();
> CREATE TRIGGER

> Perhaps the moral equivalent should be possible for statement triggers
> with transition tables, and that already works with your patch as far
> as I know. So I think your patch probably just needs to reject them
> on partitioned tables.

> [patch provided]

Yeah, that looks good. Included in next patch version.

On Sun, Mar 26, 2017 at 6:39 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:

> BTW I had to make the following change to your v12 because of commit b8d7f053:

Yeah, I ran into that, too, and used exactly the same fix.

On Sun, Mar 26, 2017 at 6:39 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Fri, Mar 24, 2017 at 1:14 PM, Thomas Munro

>> When PlanCacheRelCallback runs, I don't think it understands that
>> these named tuplestore RangeTblEntry objects are dependent on the
>> subject table. Could that be fixed like this?
>>
>> @@ -2571,6 +2582,9 @@ extract_query_dependencies_walker(Node *node,
>> PlannerInfo *context)
>> if (rte->rtekind == RTE_RELATION)
>> context->glob->relationOids =
>>
>> lappend_oid(context->glob->relationOids, rte->relid);
>> + else if (rte->rtekind == RTE_NAMEDTUPLESTORE)
>> + context->glob->relationOids =
>> +
>> lappend_oid(context->glob->relationOids, [subject table's OID]);
>
> I'm not sure if this is the right approach and it may have style
> issues, but it does fix the crashing in the ALTER TABLE case I
> reported: see attached patch which applies on top of your v12.

I had been working along similar lines, but had not gotten it
working. Merged your version and mine, taking the best of both.
:-)

Thanks for the reviews and the fixes!

New version attached. It needs some of these problem cases added to
the testing, and a mention in the docs that only C and plpgsql
triggers can use the feature so far. I'll add those tomorrow.

--
Kevin Grittner

Attachment Content-Type Size
transition-v13.diff.gz application/x-gzip 32.9 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-31 16:06:31
Message-ID: CACjxUsMOj4rzj6XQH_9=AG7b96UuWFs2mv=pOyc1n+FN8aV3kA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Mar 23, 2017 at 7:14 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:

> my only other comment would be a bikeshed issue:
> Enr isn't a great name for a struct.

I know, but EphemeralNamedRelation starts to get kinda long,
especially when making the normal sorts of concatenations. I
started making the change and balked when I saw things like
EphemeralNamedRelationMetadataData and a function named
EphemeralNamedRelationMetadataGetTupDesc() in place of
EnrmdGetTupDesc(). A 40-character function name makes for a lot of
line-wrapping to stay within pgindent limits. Any suggestions?

--
Kevin Grittner


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-31 17:20:51
Message-ID: CACjxUsM9hpcXGRpzC1KNFYcZVhBdMDrhBHPydt0LiKmsMQXFYw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Mar 30, 2017 at 11:51 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:

> New version attached. It needs some of these problem cases added to
> the testing, and a mention in the docs that only C and plpgsql
> triggers can use the feature so far. I'll add those tomorrow.

Done and attached.

Now the question is, should it be pushed? It's been through just
about every CF in the last three years with little modification, and
finally got a thorough enough review in this CF that I think it can
be considered. Here are the numbers:

85 files changed, 2266 insertions(+), 132 deletions(-)

Of that, 70 lines are the plpgsql implementation (which I should
probably push separately), about 200 lines are docs and 623 lines
are new regression tests. Most of the rest only comes into play if
the feature is used.

This adds support for a SQL standard sub-feature, although only in
triggers written in C and plpgsql. (Other PLs will probably require
fewer lines than plpgsql.) It also provides infrastructure needed
to get incremental maintenance of materialized views based on just
simple declarative DDL. Tom has expressed hope that it could be
used to improve performance and memory usage for AFTER triggers, and
I believe it can, but that should be a follow-on patch. It
might provide the basis of set-based statement-level enforcement of
referential integrity, with the regression tests providing a rough
proof of concept.

My inclination is to push it late today and be ready to revert if
there are any hard-to-fix surprises missed in review and testing;
but if the general preference is to hold it for version 11, that's
OK with me, too.

--
Kevin Grittner

Attachment Content-Type Size
transition-v14.diff.gz application/x-gzip 33.7 KB

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-31 17:50:49
Message-ID: 11152.1490982649@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Kevin Grittner <kgrittn(at)gmail(dot)com> writes:
> On Thu, Mar 23, 2017 at 7:14 PM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>> my only other comment would be a bikeshed issue:
>> Enr isn't a great name for a struct.

> I know, but EphemeralNamedRelation starts to get kinda long,
> especially when making the normal sorts of concatenations. I
> started making the change and balked when I saw things like
> EphemeralNamedRelationMetadataData and a function named
> EphemeralNamedRelationMetadataGetTupDesc() in place of
> EnrmdGetTupDesc(). A 40-character function name makes for a lot of
> line-wrapping to stay within pgindent limits. Any suggestions?

I would vote for calling the struct typedef EphemeralNamedRelation and
using the abbreviation ENR (capitalized that way, not as Enr) in related
function names, where that seemed sensible. I really do *not* like
"Enr" as a global name.

regards, tom lane


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kevin Grittner <kgrittn(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-31 17:58:56
Message-ID: CA+Tgmoa3uQb3=tWsOL3w7U3xu6VO3Rcp7Q+Qw04N+cA1+8p1SQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Mar 31, 2017 at 1:50 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Kevin Grittner <kgrittn(at)gmail(dot)com> writes:
>> On Thu, Mar 23, 2017 at 7:14 PM, Thomas Munro
>> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>>> my only other comment would be a bikeshed issue:
>>> Enr isn't a great name for a struct.
>
>> I know, but EphemeralNamedRelation starts to get kinda long,
>> especially when making the normal sorts of concatenations. I
>> started making the change and balked when I saw things like
>> EphemeralNamedRelationMetadataData and a function named
>> EphemeralNamedRelationMetadataGetTupDesc() in place of
>> EnrmdGetTupDesc(). A 40-character function name makes for a lot of
>> line-wrapping to stay within pgindent limits. Any suggestions?
>
> I would vote for calling the struct typedef EphemeralNamedRelation and
> using the abbreviation ENR (capitalized that way, not as Enr) in related
> function names, where that seemed sensible. I really do *not* like
> "Enr" as a global name.

Yeah, I had the same thought about capitalization but wasn't sure if
it was worth suggesting. But since Tom did, +1 from me.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-31 18:18:52
Message-ID: CACjxUsOFECsZgJU87fGpVBMLus_27=NunmvvCSp6J561bSFCKA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Mar 31, 2017 at 12:58 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Fri, Mar 31, 2017 at 1:50 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

>> I would vote for calling the struct typedef EphemeralNamedRelation and
>> using the abbreviation ENR (capitalized that way, not as Enr) in related
>> function names, where that seemed sensible. I really do *not* like
>> "Enr" as a global name.
>
> Yeah, I had the same thought about capitalization but wasn't sure if
> it was worth suggesting. But since Tom did, +1 from me.

Will do.

--
Kevin Grittner


From: David Fetter <david(at)fetter(dot)org>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-03-31 18:29:18
Message-ID: 20170331182918.GA3202@fetter.org
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Mar 31, 2017 at 12:20:51PM -0500, Kevin Grittner wrote:
> On Thu, Mar 30, 2017 at 11:51 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>
> > New version attached. It needs some of these problem cases added
> > to the testing, and a mention in the docs that only C and plpgsql
> > triggers can use the feature so far. I'll add those tomorrow.
>
> Done and attached.
>
> Now the question is, should it be pushed?

Yes. Among other things, that'll get it buildfarm tested and give
people interested in other PLs better visibility.

That, and I suspect that people will start using this infrastructure
for some very cool projects.

Best,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: Kevin Grittner <kgrittn(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-03 01:34:26
Message-ID: CAMsr+YGFvPzkh8tuaSB1SDfgHHUr1i8NE5vqOvaHLyH+ixFdAQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 1 April 2017 at 02:29, David Fetter <david(at)fetter(dot)org> wrote:
> On Fri, Mar 31, 2017 at 12:20:51PM -0500, Kevin Grittner wrote:
>> On Thu, Mar 30, 2017 at 11:51 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>>
>> > New version attached. It needs some of these problem cases added
>> > to the testing, and a mention in the docs that only C and plpgsql
>> > triggers can use the feature so far. I'll add those tomorrow.
>>
>> Done and attached.
>>
>> Now the question is, should it be pushed?
>
> Yes. Among other things, that'll get it buildfarm tested and give
> people interested in other PLs better visibility.
>
> That, and I suspect that people will start using this infrastructure
> for some very cool projects.

Yeah, I was excited to see this committed. It practically begs to be
used for plpgsql TABLE-valued variables backed by tuplestores.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-03 03:32:52
Message-ID: CADkLM=epD939gP85mxhu_A2do2W=05T_PjXzqn5C57Gn5m8yeQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>
> > That, and I suspect that people will start using this infrastructure
> > for some very cool projects.
>
> Yeah, I was excited to see this committed. It practically begs to be
> used for plpgsql TABLE valued variables backed by tuplestores.
>

(also very excited about this!)


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-03 03:50:37
Message-ID: CAEepm=0_DkehQf2XXgdu+u6ND+XeseoSnOdHxEMRnzQ=uN3MSA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Happy to see this committed! And thanks for the co-author credit,
which is a generous exaggeration.

I was still a bit confused about this and poked at it a bit:

On Wed, Mar 8, 2017 at 1:28 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>> /*
>> + * Capture the NEW and OLD transition TABLE tuplestores (if specified for
>> + * this trigger).
>> + */
>> + if (trigdata->tg_newtable || trigdata->tg_oldtable)
>> + {
>> + estate.queryEnv = create_queryEnv();
>> + if (trigdata->tg_newtable)
>> + {
>> + Enr enr = palloc(sizeof(EnrData));
>> +
>> + enr->md.name = trigdata->tg_trigger->tgnewtable;
>> + enr->md.tupdesc = trigdata->tg_relation->rd_att;
>> + enr->md.enrtuples = tuplestore_tuple_count(trigdata->tg_newtable);
>> + enr->reldata = trigdata->tg_newtable;
>> + register_enr(estate.queryEnv, enr);
>> + SPI_register_relation(enr);
>> + }
>>
>> Why do we have to call register_enr and also SPI_register_relation here?
>
> Essentially, because plpgsql does some things through SPI and some
> things not. Both cases are covered.

We're maintaining two different QueryEnvironment objects here, one
inside the SPI module and another in plpgsql_EState. I think that's
done only so that we have one to inject into the portal in
exec_dynquery_with_params, so that EXECUTE 'SELECT * FROM
<transition_table>' can work.

That raises the question: shouldn't SPI_cursor_open just do that
itself using the SPI connection's current QueryEnvironment? That
would make SPI_cursor_open consistent with SPI_execute_plan, and also
benefit handlers for other PLs that would otherwise have to do similar
double-bookkeeping. See attached patch showing what I mean.
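
To make that concrete, the gist is only a line or two inside spi.c (a
sketch, not the attached patch verbatim; _SPI_current and
portal->queryEnv are the names the surrounding SPI and portal code
already uses):

    /*
     * While SPI_cursor_open_internal builds the portal, hand it the SPI
     * connection's current query environment, so that planning and
     * execution under the portal can resolve relations registered with
     * SPI_register_relation -- the same environment SPI_execute_plan
     * already passes along.
     */
    portal->queryEnv = _SPI_current->queryEnv;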

Please also find attached a rebased patch to add pl/python support,
and new equivalent patches for pl/perl and pl/tcl. I am planning to
add these to PG11 CF1, unless you think we should be more aggressive
given the extra time?

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
spi-portal-open-with-query-env.patch application/octet-stream 3.5 KB
transition-plpython-v1.patch application/octet-stream 4.4 KB
transition-plperl-v1.patch application/octet-stream 5.7 KB
transition-pltcl-v1.patch application/octet-stream 4.2 KB

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-03 04:09:21
Message-ID: CAEepm=0iSHbbwf27Sf7zPa6nCf0thFW5F9OaEJmaTArPYzCt6A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Apr 3, 2017 at 3:50 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> Please also find attached a rebased patch to add pl/python support,
> and new equivalent patches for pl/perl and pl/tcl. I am planning to
> add these to PG11 CF1, unless you think we should be more aggressive
> given the extra time?

Or perhaps the code to inject trigger data transition tables into SPI
(a nearly identical code block in these three patches) should live
somewhere common, so that each PL would only need to call a single
function. If so, where should that go?

--
Thomas Munro
http://www.enterprisedb.com


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Kevin Grittner <kgrittn(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-03 13:59:03
Message-ID: 11967.1491227943@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> writes:
> Or perhaps the code to inject trigger data transition tables into SPI
> (a nearly identical code block in these three patches) should live
> somewhere common, so that each PL would only need to call a single
> function. If so, where should that go?

spi.c?

regards, tom lane


From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-03 15:41:59
Message-ID: CACjxUsNvY=ONeKw-KDkH+qtbZKC30-6nLPwC4QA+k91MVWrYnQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Apr 3, 2017 at 8:59 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> writes:
>> Or perhaps the code to inject trigger data transition tables into SPI
>> (a nearly identical code block in these three patches) should live
>> somewhere common, so that each PL would only need to call a single
>> function. If so, where should that go?
>
> spi.c?

Until now, trigger.c didn't know about SPI, and spi.c didn't know
about triggers. The intersection was left to referencing code, like
PLs. Is there any other common code among the PLs dealing with this
intersection? If so, maybe a new triggerspi.c file (or
spitrigger.c?) would make sense. Possibly it could make sense from
a code structure PoV even for a single function, but it seems kinda
iffy for just this function. As far as I can see it comes down to
adding it to spi.c or creating a new file -- or just duplicating
these 30-some lines of code to every PL.

--
Kevin Grittner


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-04 00:16:06
Message-ID: CAEepm=1DttcEpXXfyTQaLB+Hq=LEc2X0nOECoMMBDbHPxqBHew@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Apr 4, 2017 at 3:41 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> On Mon, Apr 3, 2017 at 8:59 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> writes:
>>> Or perhaps the code to inject trigger data transition tables into SPI
>>> (a nearly identical code block in these three patches) should live
>>> somewhere common, so that each PL would only need to call a single
>>> function. If so, where should that go?
>>
>> spi.c?
>
> Until now, trigger.c didn't know about SPI, and spi.c didn't know
> about triggers. The intersection was left to referencing code, like
> PLs. Is there any other common code among the PLs dealing with this
> intersection? If so, maybe a new triggerspi.c file (or
> spitrigger.c?) would make sense. Possibly it could make sense from
> a code structure PoV even for a single function, but it seems kinda
> iffy for just this function. As far as I can see it comes down to
> adding it to spi.c or creating a new file -- or just duplicating
> these 30-some lines of code to every PL.

Ok, how about SPI_register_trigger_data(TriggerData *)? Or any name
you prefer... I didn't suggest something as specific as
SPI_register_transition_tables because I think it's plausible that
someone might want to implement SQL standard REFERENCING OLD/NEW ROW
AS and make that work in all PLs too one day, and this would be the
place to do that.

See attached, which adds the call to all four built-in PLs. Thoughts?
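
For a handler author, usage would look roughly like this (a sketch
only: the handler name is invented, error handling is abbreviated, and
the success code is assumed to follow the naming of the existing
SPI_register_relation return codes):

    #include "postgres.h"
    #include "commands/trigger.h"
    #include "executor/spi.h"

    static void
    plsample_trigger_handler(TriggerData *trigdata)
    {
        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect failed");

        /*
         * Make the transition tables declared with REFERENCING OLD/NEW
         * TABLE AS (if any) visible to queries run through this SPI
         * connection.
         */
        if (SPI_register_trigger_data(trigdata) != SPI_OK_TD_REGISTER)
            elog(ERROR, "SPI_register_trigger_data failed");

        /* ... run the user's function body via SPI as usual ... */

        SPI_finish();
    }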

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
transition-tables-for-all.patch application/octet-stream 21.1 KB

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-04 23:49:46
Message-ID: CACjxUsNfx1xLTp-EN88kgpx8YbygEygDftfmtJ7sq4uCiy4J4Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Apr 3, 2017 at 7:16 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Tue, Apr 4, 2017 at 3:41 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>> On Mon, Apr 3, 2017 at 8:59 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> writes:
>>>> Or perhaps the code to inject trigger data transition tables into SPI
>>>> (a nearly identical code block in these three patches) should live
>>>> somewhere common, so that each PL would only need to call a single
>>>> function. If so, where should that go?
>>>
>>> spi.c?
>>
>> Until now, trigger.c didn't know about SPI, and spi.c didn't know
>> about triggers. The intersection was left to referencing code, like
>> PLs. Is there any other common code among the PLs dealing with this
>> intersection? If so, maybe a new triggerspi.c file (or
>> spitrigger.c?) would make sense. Possibly it could make sense from
>> a code structure PoV even for a single function, but it seems kinda
>> iffy for just this function. As far as I can see it comes down to
>> adding it to spi.c or creating a new file -- or just duplicating
>> these 30-some lines of code to every PL.
>
> Ok, how about SPI_register_trigger_data(TriggerData *)? Or any name
> you prefer... I didn't suggest something as specific as
> SPI_register_transition_tables because I think it's plausible that
> someone might want to implement SQL standard REFERENCING OLD/NEW ROW
> AS and make that work in all PLs too one day, and this would be the
> place to do that.
>
> See attached, which adds the call to all four built-in PLs. Thoughts?

Worked on the docs some more and then pushed it.

Nice job cutting the number of *.[ch] lines by 30 while adding support for
the other three core PLs. :-)

--
Kevin Grittner


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-05 00:22:38
Message-ID: CAEepm=2O+gWnCOT9OCN-Kp5Z77EkCfnqChrb8adDhdAZPCNh6A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Apr 5, 2017 at 11:49 AM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
> Worked on the docs some more and then pushed it.
>
> Nice job cutting the number of *.[ch] lines by 30 while adding support for
> the other three core PLs. :-)

Great. Thanks. I wonder if there is some way we can automatically
include code fragments in the documentation without keeping them in
sync manually.

Now it looks like I have no more excuses for putting off reading the
papers you've shared[1][2] about incremental matview algorithms!
Looking forward to helping out with the next steps in that project if
I can.

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.2254&rep=rep1&type=pdf
[2] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.3208&rep=rep1&type=pdf

--
Thomas Munro
http://www.enterprisedb.com


From: Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Kevin Grittner <kgrittn(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2017-04-13 02:59:00
Message-ID: CADkLM=fPJnuZNVQWy1YygZ5=PeuamAvrrDqF3XGW2+=BdRBUiw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

>
> Great. Thanks. I wonder if there is some way we can automatically
> include code fragments in the documentation without keeping them in
> sync manually.
>
>
In whatever extra docs you add, could you include an example of an INSERT
ON CONFLICT, and potentially a CTE query that does two operations on the
same table? I'm not clear on what to expect when a statement does a mix of
INSERT, UPDATE, and DELETE. Will there be multiple firings of the trigger
in a single statement, or will the before/after sets be mashed together
regardless of which part of the query generated them?


From: Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-04 09:12:59
Message-ID: CANEvxPoOodsb_ZJwgSOLpYic5s4-0pZOJWrWnRU-TpgV9KJQdA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

I have been testing this for a while and observed a server crash when
referencing a table column value in a trigger procedure for an AFTER
DELETE trigger.

-- Steps to reproduce:
CREATE TABLE t1(c1 int);
CREATE TABLE t2(cc1 int);
INSERT INTO t1 VALUES (10);
INSERT INTO t2 VALUES (10);

CREATE OR REPLACE FUNCTION trig_func() RETURNS trigger AS
$$ BEGIN
DELETE FROM t1 WHERE c1 IN (select OLD.cc1 from my_old);
RETURN OLD;
END; $$ LANGUAGE PLPGSQL;

CREATE TRIGGER trg1
AFTER DELETE ON t2
REFERENCING OLD TABLE AS my_old
FOR EACH ROW
EXECUTE PROCEDURE trig_func();

DELETE FROM t2 WHERE cc1 =10;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

On further testing without delta relations in the AFTER trigger, it
executed successfully.

CREATE OR REPLACE FUNCTION trig_func() RETURNS trigger AS
$$ BEGIN
DELETE FROM t1 WHERE c1 IN (select OLD.cc1 from t2);
RETURN OLD;
END; $$ LANGUAGE PLPGSQL;

CREATE TRIGGER trg1
AFTER DELETE ON t2
FOR EACH ROW
EXECUTE PROCEDURE trig_func();

DELETE FROM t2 WHERE cc1 =10;
DELETE 1
postgres=# select * from t1;
c1
----
10
(1 row)

postgres=# select * from t2;
cc1
-----
(0 rows)

Logfile and core dump attached for reference.

Thanks & Regards,

Prabhat Kumar Sahu
Mob: 7758988455
Skype ID: prabhat.sahu1984

www.enterprisedb.com

On Thu, Apr 13, 2017 at 8:29 AM, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
wrote:

> Great. Thanks. I wonder if there is some way we can automatically
>> include code fragments in the documentation without keeping them in
>> sync manually.
>>
>>
> In whatever extra docs you add, could you include an example of an INSERT
> ON CONFLICT, and potentially a CTE query that does two operations on the
> same table? I'm not clear on what to expect when a statement does a mix of
> INSERT, UPDATE, and DELETE. Will there be multiple firings of the trigger
> in a single statement, or will the before/after sets be mashed together
> regardless of which part of the query generated them?
>
>
>

Attachment Content-Type Size
log.zip application/zip 776.2 KB

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>
Cc: Kevin Grittner <kgrittn(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-04 09:51:03
Message-ID: CAEepm=2=Uv5fm=exqL+ygBxaO+-tgmC=o+63H4zYAXi9HtXf1w@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, May 4, 2017 at 9:12 PM, Prabhat Sahu
<prabhat(dot)sahu(at)enterprisedb(dot)com> wrote:
> I have been testing this for a while and observed a server crash while referencing table column value in a trigger procedure for AFTER DELETE trigger.
>
> -- Steps to reproduce:
> CREATE TABLE t1(c1 int);
> CREATE TABLE t2(cc1 int);
> INSERT INTO t1 VALUES (10);
> INSERT INTO t2 VALUES (10);
>
> CREATE OR REPLACE FUNCTION trig_func() RETURNS trigger AS
> $$ BEGIN
> DELETE FROM t1 WHERE c1 IN (select OLD.cc1 from my_old);
> RETURN OLD;
> END; $$ LANGUAGE PLPGSQL;
>
> CREATE TRIGGER trg1
> AFTER DELETE ON t2
> REFERENCING OLD TABLE AS my_old
> FOR EACH ROW
> EXECUTE PROCEDURE trig_func();
>
> DELETE FROM t2 WHERE cc1 =10;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

Reproduced here. The stack looks like this:

frame #3: 0x000000010f06f8b0
postgres`ExceptionalCondition(conditionName="!(readptr->eflags &
0x0002)", errorType="FailedAssertion", fileName="tuplestore.c",
lineNumber=1237) + 128 at assert.c:54
frame #4: 0x000000010f0cbc85
postgres`tuplestore_rescan(state=0x00007ff219840200) + 85 at
tuplestore.c:1237
frame #5: 0x000000010eced9b1
postgres`ExecReScanNamedTuplestoreScan(node=0x00007ff21d007840) + 81
at nodeNamedtuplestorescan.c:197
frame #6: 0x000000010eca46a6
postgres`ExecReScan(node=0x00007ff21d007840) + 822 at execAmi.c:216
frame #7: 0x000000010ece7eca
postgres`ExecNestLoop(node=0x00007ff21d006310) + 538 at
nodeNestloop.c:148

I think the problem is that the tuplestore read pointer hasn't been
opened with the "rewindable" flag. It works for me with the attached.
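
For anyone following along, the shape of the fix is a one-liner in
ExecInitNamedTuplestoreScan (a sketch only; I'm assuming the scan
node's existing fields are named relation and readptr as in
nodeNamedtuplestorescan.c, and the attached patch may differ in
detail):

    /*
     * Allocate this scan's private read pointer with rewind capability,
     * so that a later ExecReScan (e.g. on the inner side of a nestloop)
     * satisfies tuplestore_rescan's EXEC_FLAG_REWIND check instead of
     * tripping the assertion in the stack above.
     */
    scanstate->readptr =
        tuplestore_alloc_read_pointer(scanstate->relation, EXEC_FLAG_REWIND);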

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
fix-named-tuplestore-rescan.patch application/octet-stream 704 bytes

From: Neha Sharma <neha(dot)sharma(at)enterprisedb(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-04 12:39:54
Message-ID: CANiYTQv05RRHEnQQnoFKTguURnz+o53M3Zb6Jwa1XCn8ELMi=g@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

While testing the feature we encountered one more crash; below is the
scenario to reproduce.

create table t1 ( a int);
create table t2 ( a int);
insert into t1 values (11),(12),(13);

create or replace function my_trig() returns trigger
language plpgsql as $my_trig$
begin
insert into t2(select a from new_table);
RETURN NEW;
end;
$my_trig$;

create trigger my_trigger
after truncate or update on t1
referencing new table as new_table old table as oldtab
for each statement
execute procedure my_trig();

truncate t1;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

Log file and core dump attached for reference.

On Thu, May 4, 2017 at 3:21 PM, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
wrote:

> On Thu, May 4, 2017 at 9:12 PM, Prabhat Sahu
> <prabhat(dot)sahu(at)enterprisedb(dot)com> wrote:
> > I have been testing this for a while and observed a server crash while
> referencing table column value in a trigger procedure for AFTER DELETE
> trigger.
> >
> > -- Steps to reproduce:
> > CREATE TABLE t1(c1 int);
> > CREATE TABLE t2(cc1 int);
> > INSERT INTO t1 VALUES (10);
> > INSERT INTO t2 VALUES (10);
> >
> > CREATE OR REPLACE FUNCTION trig_func() RETURNS trigger AS
> > $$ BEGIN
> > DELETE FROM t1 WHERE c1 IN (select OLD.cc1 from my_old);
> > RETURN OLD;
> > END; $$ LANGUAGE PLPGSQL;
> >
> > CREATE TRIGGER trg1
> > AFTER DELETE ON t2
> > REFERENCING OLD TABLE AS my_old
> > FOR EACH ROW
> > EXECUTE PROCEDURE trig_func();
> >
> > DELETE FROM t2 WHERE cc1 =10;
> > server closed the connection unexpectedly
> > This probably means the server terminated abnormally
> > before or while processing the request.
> > The connection to the server was lost. Attempting reset: Failed.
>
> Reproduced here. The stack looks like this:
>
> frame #3: 0x000000010f06f8b0
> postgres`ExceptionalCondition(conditionName="!(readptr->eflags &
> 0x0002)", errorType="FailedAssertion", fileName="tuplestore.c",
> lineNumber=1237) + 128 at assert.c:54
> frame #4: 0x000000010f0cbc85
> postgres`tuplestore_rescan(state=0x00007ff219840200) + 85 at
> tuplestore.c:1237
> frame #5: 0x000000010eced9b1
> postgres`ExecReScanNamedTuplestoreScan(node=0x00007ff21d007840) + 81
> at nodeNamedtuplestorescan.c:197
> frame #6: 0x000000010eca46a6
> postgres`ExecReScan(node=0x00007ff21d007840) + 822 at execAmi.c:216
> frame #7: 0x000000010ece7eca
> postgres`ExecNestLoop(node=0x00007ff21d006310) + 538 at
> nodeNestloop.c:148
>
> I think the problem is that the tuplestore read pointer hasn't been
> opened with the "rewindable" flag. It works for me with the attached.
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>
>
>
>

--

Regards,

Neha Sharma

Attachment Content-Type Size
logs.zip application/zip 738.9 KB

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Neha Sharma <neha(dot)sharma(at)enterprisedb(dot)com>
Cc: Kevin Grittner <kgrittn(at)gmail(dot)com>, Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-04 20:23:33
Message-ID: CAEepm=0VR5W-N38eTkO_FqJbGqQ_ykbBRmzmvHyxDhy1p=0Csw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, May 5, 2017 at 12:39 AM, Neha Sharma
<neha(dot)sharma(at)enterprisedb(dot)com> wrote:
> While testing the feature we encountered one more crash,below is the
> scenario to reproduce.
>
> create table t1 ( a int);
> create table t2 ( a int);
> insert into t1 values (11),(12),(13);
>
> create or replace function my_trig() returns trigger
> language plpgsql as $my_trig$
> begin
> insert into t2(select a from new_table);
> RETURN NEW;
> end;
> $my_trig$;
>
> create trigger my_trigger
> after truncate or update on t1
> referencing new table as new_table old table as oldtab
> for each statement
> execute procedure my_trig();
>
> truncate t1;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

Thanks. Reproduced here. The stack looks like this:

frame #3: 0x0000000103e5e8b0
postgres`ExceptionalCondition(conditionName="!((((((trigdata->tg_event)
& 0x00000003) == 0x00000000) || (((trigdata->tg_event) & 0x00000003)
== 0x00000002) || (((trigdata->tg_event) & 0x00000003) == 0x00000001))
&& (((trigdata->tg_event) & 0x00000018) == 0x00000000) &&
!(trigdata->tg_event & 0x00000020) && !(trigdata->tg_event &
0x00000040)) || (trigdata->tg_oldtable == ((void*)0) &&
trigdata->tg_newtable == ((void*)0)))", errorType="FailedAssertion",
fileName="trigger.c", lineNumber=2045) + 128 at assert.c:54
frame #4: 0x0000000103a6f542
postgres`ExecCallTriggerFunc(trigdata=0x00007fff5c40bad0, tgindx=0,
finfo=0x00007fd8ba0817b8, instr=0x0000000000000000,
per_tuple_context=0x00007fd8b906f928) + 258 at trigger.c:2039
frame #5: 0x0000000103a754ed
postgres`AfterTriggerExecute(event=0x00007fd8ba092460,
rel=0x00000001043fd9c0, trigdesc=0x00007fd8ba068758,
finfo=0x00007fd8ba0817b8, instr=0x0000000000000000,
per_tuple_context=0x00007fd8b906f928,
trig_tuple_slot1=0x0000000000000000,
trig_tuple_slot2=0x0000000000000000) + 1469 at trigger.c:3860
frame #6: 0x0000000103a73080
postgres`afterTriggerInvokeEvents(events=0x00007fd8ba07fb00,
firing_id=1, estate=0x00007fd8ba090440, delete_ok='\x01') + 592 at
trigger.c:4051
frame #7: 0x0000000103a72b7b
postgres`AfterTriggerEndQuery(estate=0x00007fd8ba090440) + 203 at
trigger.c:4227
frame #8: 0x0000000103a498aa
postgres`ExecuteTruncate(stmt=0x00007fd8ba059f40) + 2026 at
tablecmds.c:1485

There's an assertion that it's (one of INSERT, UPDATE, DELETE, an
AFTER trigger, not deferred) *or* there are no transition tables.
Here it's TRUNCATE and there are transition tables, so it fails:

/*
 * Protect against code paths that may fail to initialize transition table
 * info.
 */
Assert(((TRIGGER_FIRED_BY_INSERT(trigdata->tg_event) ||
         TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event) ||
         TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)) &&
        TRIGGER_FIRED_AFTER(trigdata->tg_event) &&
        !(trigdata->tg_event & AFTER_TRIGGER_DEFERRABLE) &&
        !(trigdata->tg_event & AFTER_TRIGGER_INITDEFERRED)) ||
       (trigdata->tg_oldtable == NULL && trigdata->tg_newtable == NULL));

We can't possibly support transition tables on TRUNCATE (the whole
point of TRUNCATE is not to inspect all the rows so we can't collect
them), and we already reject ROW triggers on TRUNCATE, so we should
reject transition tables on STATEMENT triggers for TRUNCATE at
creation time too. See attached. Thoughts?
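
Concretely, the check I have in mind lives in CreateTrigger and would
read something like this (a sketch; the field names stmt->transitionRels
and tgtype, the errcode, and the exact message are assumptions about how
the attached patch is worded):

    /*
     * Transition tables are only collected for INSERT/UPDATE/DELETE, so
     * refuse to create a TRUNCATE trigger that declares them rather than
     * failing an assertion at fire time.
     */
    if (stmt->transitionRels != NIL && TRIGGER_FOR_TRUNCATE(tgtype))
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("TRUNCATE triggers with transition tables are not supported")));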

> Log file and core dump attached for reference.

Thanks! Just by the way, it'd be better to post just an interesting
stack trace fragment rather than a core file, because core files can't
really be used without the exact executable that you built.

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
no-transition-tables-for-truncate.patch application/octet-stream 665 bytes

From: Noah Misch <noah(at)leadboat(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-06 05:33:24
Message-ID: 20170506053324.GE843225@rfd.leadboat.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, May 04, 2017 at 09:51:03PM +1200, Thomas Munro wrote:
> On Thu, May 4, 2017 at 9:12 PM, Prabhat Sahu
> <prabhat(dot)sahu(at)enterprisedb(dot)com> wrote:
> > I have been testing this for a while and observed a server crash while referencing table column value in a trigger procedure for AFTER DELETE trigger.
> >
> > -- Steps to reproduce:
> > CREATE TABLE t1(c1 int);
> > CREATE TABLE t2(cc1 int);
> > INSERT INTO t1 VALUES (10);
> > INSERT INTO t2 VALUES (10);
> >
> > CREATE OR REPLACE FUNCTION trig_func() RETURNS trigger AS
> > $$ BEGIN
> > DELETE FROM t1 WHERE c1 IN (select OLD.cc1 from my_old);
> > RETURN OLD;
> > END; $$ LANGUAGE PLPGSQL;
> >
> > CREATE TRIGGER trg1
> > AFTER DELETE ON t2
> > REFERENCING OLD TABLE AS my_old
> > FOR EACH ROW
> > EXECUTE PROCEDURE trig_func();
> >
> > DELETE FROM t2 WHERE cc1 =10;
> > server closed the connection unexpectedly
> > This probably means the server terminated abnormally
> > before or while processing the request.
> > The connection to the server was lost. Attempting reset: Failed.
>
> Reproduced here. The stack looks like this:
>
> frame #3: 0x000000010f06f8b0
> postgres`ExceptionalCondition(conditionName="!(readptr->eflags &
> 0x0002)", errorType="FailedAssertion", fileName="tuplestore.c",
> lineNumber=1237) + 128 at assert.c:54
> frame #4: 0x000000010f0cbc85
> postgres`tuplestore_rescan(state=0x00007ff219840200) + 85 at
> tuplestore.c:1237
> frame #5: 0x000000010eced9b1
> postgres`ExecReScanNamedTuplestoreScan(node=0x00007ff21d007840) + 81
> at nodeNamedtuplestorescan.c:197
> frame #6: 0x000000010eca46a6
> postgres`ExecReScan(node=0x00007ff21d007840) + 822 at execAmi.c:216
> frame #7: 0x000000010ece7eca
> postgres`ExecNestLoop(node=0x00007ff21d006310) + 538 at
> nodeNestloop.c:148
>
> I think the problem is that the tuplestore read pointer hasn't been
> opened with the "rewindable" flag. It works for me with the attached.

[Action required within three days. This is a generic notification.]

The above-described topic is currently a PostgreSQL 10 open item. Kevin,
since you committed the patch believed to have created it, you own this open
item. If some other commit is more relevant or if this does not belong as a
v10 open item, please let us know. Otherwise, please observe the policy on
open item ownership[1] and send a status update within three calendar days of
this message. Include a date for your subsequent status update. Testers may
discover new open items at any time, and I want to plan to get them all fixed
well in advance of shipping v10. Consequently, I will appreciate your efforts
toward speedy resolution. Thanks.

[1] https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


From: Noah Misch <noah(at)leadboat(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Neha Sharma <neha(dot)sharma(at)enterprisedb(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-06 05:34:46
Message-ID: 20170506053446.GF843225@rfd.leadboat.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, May 05, 2017 at 08:23:33AM +1200, Thomas Munro wrote:
> On Fri, May 5, 2017 at 12:39 AM, Neha Sharma
> <neha(dot)sharma(at)enterprisedb(dot)com> wrote:
> > While testing the feature we encountered one more crash,below is the
> > scenario to reproduce.
> >
> > create table t1 ( a int);
> > create table t2 ( a int);
> > insert into t1 values (11),(12),(13);
> >
> > create or replace function my_trig() returns trigger
> > language plpgsql as $my_trig$
> > begin
> > insert into t2(select a from new_table);
> > RETURN NEW;
> > end;
> > $my_trig$;
> >
> > create trigger my_trigger
> > after truncate or update on t1
> > referencing new table as new_table old table as oldtab
> > for each statement
> > execute procedure my_trig();
> >
> > truncate t1;
> > server closed the connection unexpectedly
> > This probably means the server terminated abnormally
> > before or while processing the request.
> > The connection to the server was lost. Attempting reset: Failed.
>
> Thanks. Reproduced here. The stack looks like this:
>
> frame #3: 0x0000000103e5e8b0
> postgres`ExceptionalCondition(conditionName="!((((((trigdata->tg_event)
> & 0x00000003) == 0x00000000) || (((trigdata->tg_event) & 0x00000003)
> == 0x00000002) || (((trigdata->tg_event) & 0x00000003) == 0x00000001))
> && (((trigdata->tg_event) & 0x00000018) == 0x00000000) &&
> !(trigdata->tg_event & 0x00000020) && !(trigdata->tg_event &
> 0x00000040)) || (trigdata->tg_oldtable == ((void*)0) &&
> trigdata->tg_newtable == ((void*)0)))", errorType="FailedAssertion",
> fileName="trigger.c", lineNumber=2045) + 128 at assert.c:54
> frame #4: 0x0000000103a6f542
> postgres`ExecCallTriggerFunc(trigdata=0x00007fff5c40bad0, tgindx=0,
> finfo=0x00007fd8ba0817b8, instr=0x0000000000000000,
> per_tuple_context=0x00007fd8b906f928) + 258 at trigger.c:2039
> frame #5: 0x0000000103a754ed
> postgres`AfterTriggerExecute(event=0x00007fd8ba092460,
> rel=0x00000001043fd9c0, trigdesc=0x00007fd8ba068758,
> finfo=0x00007fd8ba0817b8, instr=0x0000000000000000,
> per_tuple_context=0x00007fd8b906f928,
> trig_tuple_slot1=0x0000000000000000,
> trig_tuple_slot2=0x0000000000000000) + 1469 at trigger.c:3860
> frame #6: 0x0000000103a73080
> postgres`afterTriggerInvokeEvents(events=0x00007fd8ba07fb00,
> firing_id=1, estate=0x00007fd8ba090440, delete_ok='\x01') + 592 at
> trigger.c:4051
> frame #7: 0x0000000103a72b7b
> postgres`AfterTriggerEndQuery(estate=0x00007fd8ba090440) + 203 at
> trigger.c:4227
> frame #8: 0x0000000103a498aa
> postgres`ExecuteTruncate(stmt=0x00007fd8ba059f40) + 2026 at
> tablecmds.c:1485
>
> There's an assertion that it's (one of INSERT, UPDATE, DELETE, an
> AFTER trigger, not deferred) *or* there are no transition tables.
> Here it's TRUNCATE and there are transition tables, so it fails:
>
> /*
> * Protect against code paths that may fail to initialize transition table
> * info.
> */
> Assert(((TRIGGER_FIRED_BY_INSERT(trigdata->tg_event) ||
> TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event) ||
> TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)) &&
> TRIGGER_FIRED_AFTER(trigdata->tg_event) &&
> !(trigdata->tg_event & AFTER_TRIGGER_DEFERRABLE) &&
> !(trigdata->tg_event & AFTER_TRIGGER_INITDEFERRED)) ||
> (trigdata->tg_oldtable == NULL && trigdata->tg_newtable == NULL));
>
>
> We can't possibly support transition tables on TRUNCATE (the whole
> point of TRUNCATE is not to inspect all the rows so we can't collect
> them), and we already reject ROW triggers on TRUNCATE, so we should
> reject transition tables on STATEMENT triggers for TRUNCATE at
> creation time too. See attached. Thoughts?

[Action required within three days. This is a generic notification.]

The above-described topic is currently a PostgreSQL 10 open item. Kevin,
since you committed the patch believed to have created it, you own this open
item. If some other commit is more relevant or if this does not belong as a
v10 open item, please let us know. Otherwise, please observe the policy on
open item ownership[1] and send a status update within three calendar days of
this message. Include a date for your subsequent status update. Testers may
discover new open items at any time, and I want to plan to get them all fixed
well in advance of shipping v10. Consequently, I will appreciate your efforts
toward speedy resolution. Thanks.

[1] https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


From: Noah Misch <noah(at)leadboat(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-10 02:55:56
Message-ID: 20170510025556.GB1293561@rfd.leadboat.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, May 06, 2017 at 05:33:24AM +0000, Noah Misch wrote:
> On Thu, May 04, 2017 at 09:51:03PM +1200, Thomas Munro wrote:
> > On Thu, May 4, 2017 at 9:12 PM, Prabhat Sahu
> > <prabhat(dot)sahu(at)enterprisedb(dot)com> wrote:
> > > I have been testing this for a while and observed a server crash while referencing table column value in a trigger procedure for AFTER DELETE trigger.
> > >
> > > -- Steps to reproduce:
> > > CREATE TABLE t1(c1 int);
> > > CREATE TABLE t2(cc1 int);
> > > INSERT INTO t1 VALUES (10);
> > > INSERT INTO t2 VALUES (10);
> > >
> > > CREATE OR REPLACE FUNCTION trig_func() RETURNS trigger AS
> > > $$ BEGIN
> > > DELETE FROM t1 WHERE c1 IN (select OLD.cc1 from my_old);
> > > RETURN OLD;
> > > END; $$ LANGUAGE PLPGSQL;
> > >
> > > CREATE TRIGGER trg1
> > > AFTER DELETE ON t2
> > > REFERENCING OLD TABLE AS my_old
> > > FOR EACH ROW
> > > EXECUTE PROCEDURE trig_func();
> > >
> > > DELETE FROM t2 WHERE cc1 =10;
> > > server closed the connection unexpectedly
> > > This probably means the server terminated abnormally
> > > before or while processing the request.
> > > The connection to the server was lost. Attempting reset: Failed.
> >
> > Reproduced here. The stack looks like this:
> >
> > frame #3: 0x000000010f06f8b0
> > postgres`ExceptionalCondition(conditionName="!(readptr->eflags &
> > 0x0002)", errorType="FailedAssertion", fileName="tuplestore.c",
> > lineNumber=1237) + 128 at assert.c:54
> > frame #4: 0x000000010f0cbc85
> > postgres`tuplestore_rescan(state=0x00007ff219840200) + 85 at
> > tuplestore.c:1237
> > frame #5: 0x000000010eced9b1
> > postgres`ExecReScanNamedTuplestoreScan(node=0x00007ff21d007840) + 81
> > at nodeNamedtuplestorescan.c:197
> > frame #6: 0x000000010eca46a6
> > postgres`ExecReScan(node=0x00007ff21d007840) + 822 at execAmi.c:216
> > frame #7: 0x000000010ece7eca
> > postgres`ExecNestLoop(node=0x00007ff21d006310) + 538 at
> > nodeNestloop.c:148
> >
> > I think the problem is that the tuplestore read pointer hasn't been
> > opened with the "rewindable" flag. It works for me with the attached.
>
> [Action required within three days. This is a generic notification.]
>
> The above-described topic is currently a PostgreSQL 10 open item. Kevin,
> since you committed the patch believed to have created it, you own this open
> item. If some other commit is more relevant or if this does not belong as a
> v10 open item, please let us know. Otherwise, please observe the policy on
> open item ownership[1] and send a status update within three calendar days of
> this message. Include a date for your subsequent status update. Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping v10. Consequently, I will appreciate your efforts
> toward speedy resolution. Thanks.
>
> [1] https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

This PostgreSQL 10 open item is past due for your status update. Kindly send
a status update within 24 hours, and include a date for your subsequent status
update. Refer to the policy on open item ownership:
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


From: Noah Misch <noah(at)leadboat(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Neha Sharma <neha(dot)sharma(at)enterprisedb(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-10 02:56:20
Message-ID: 20170510025620.GC1293561@rfd.leadboat.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, May 06, 2017 at 05:34:46AM +0000, Noah Misch wrote:
> On Fri, May 05, 2017 at 08:23:33AM +1200, Thomas Munro wrote:
> > On Fri, May 5, 2017 at 12:39 AM, Neha Sharma
> > <neha(dot)sharma(at)enterprisedb(dot)com> wrote:
> > > While testing the feature we encountered one more crash,below is the
> > > scenario to reproduce.
> > >
> > > create table t1 ( a int);
> > > create table t2 ( a int);
> > > insert into t1 values (11),(12),(13);
> > >
> > > create or replace function my_trig() returns trigger
> > > language plpgsql as $my_trig$
> > > begin
> > > insert into t2(select a from new_table);
> > > RETURN NEW;
> > > end;
> > > $my_trig$;
> > >
> > > create trigger my_trigger
> > > after truncate or update on t1
> > > referencing new table as new_table old table as oldtab
> > > for each statement
> > > execute procedure my_trig();
> > >
> > > truncate t1;
> > > server closed the connection unexpectedly
> > > This probably means the server terminated abnormally
> > > before or while processing the request.
> > > The connection to the server was lost. Attempting reset: Failed.
> >
> > Thanks. Reproduced here. The stack looks like this:
> >
> > frame #3: 0x0000000103e5e8b0
> > postgres`ExceptionalCondition(conditionName="!((((((trigdata->tg_event)
> > & 0x00000003) == 0x00000000) || (((trigdata->tg_event) & 0x00000003)
> > == 0x00000002) || (((trigdata->tg_event) & 0x00000003) == 0x00000001))
> > && (((trigdata->tg_event) & 0x00000018) == 0x00000000) &&
> > !(trigdata->tg_event & 0x00000020) && !(trigdata->tg_event &
> > 0x00000040)) || (trigdata->tg_oldtable == ((void*)0) &&
> > trigdata->tg_newtable == ((void*)0)))", errorType="FailedAssertion",
> > fileName="trigger.c", lineNumber=2045) + 128 at assert.c:54
> > frame #4: 0x0000000103a6f542
> > postgres`ExecCallTriggerFunc(trigdata=0x00007fff5c40bad0, tgindx=0,
> > finfo=0x00007fd8ba0817b8, instr=0x0000000000000000,
> > per_tuple_context=0x00007fd8b906f928) + 258 at trigger.c:2039
> > frame #5: 0x0000000103a754ed
> > postgres`AfterTriggerExecute(event=0x00007fd8ba092460,
> > rel=0x00000001043fd9c0, trigdesc=0x00007fd8ba068758,
> > finfo=0x00007fd8ba0817b8, instr=0x0000000000000000,
> > per_tuple_context=0x00007fd8b906f928,
> > trig_tuple_slot1=0x0000000000000000,
> > trig_tuple_slot2=0x0000000000000000) + 1469 at trigger.c:3860
> > frame #6: 0x0000000103a73080
> > postgres`afterTriggerInvokeEvents(events=0x00007fd8ba07fb00,
> > firing_id=1, estate=0x00007fd8ba090440, delete_ok='\x01') + 592 at
> > trigger.c:4051
> > frame #7: 0x0000000103a72b7b
> > postgres`AfterTriggerEndQuery(estate=0x00007fd8ba090440) + 203 at
> > trigger.c:4227
> > frame #8: 0x0000000103a498aa
> > postgres`ExecuteTruncate(stmt=0x00007fd8ba059f40) + 2026 at
> > tablecmds.c:1485
> >
> > There's an assertion that it's (one of INSERT, UPDATE, DELETE, an
> > AFTER trigger, not deferred) *or* there are no transition tables.
> > Here it's TRUNCATE and there are transition tables, so it fails:
> >
> > /*
> > * Protect against code paths that may fail to initialize transition table
> > * info.
> > */
> > Assert(((TRIGGER_FIRED_BY_INSERT(trigdata->tg_event) ||
> > TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event) ||
> > TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)) &&
> > TRIGGER_FIRED_AFTER(trigdata->tg_event) &&
> > !(trigdata->tg_event & AFTER_TRIGGER_DEFERRABLE) &&
> > !(trigdata->tg_event & AFTER_TRIGGER_INITDEFERRED)) ||
> > (trigdata->tg_oldtable == NULL && trigdata->tg_newtable == NULL));
> >
> >
> > We can't possibly support transition tables on TRUNCATE (the whole
> > point of TRUNCATE is not to inspect all the rows so we can't collect
> > them), and we already reject ROW triggers on TRUNCATE, so we should
> > reject transition tables on STATEMENT triggers for TRUNCATE at
> > creation time too. See attached. Thoughts?
>
> [Action required within three days. This is a generic notification.]
>
> The above-described topic is currently a PostgreSQL 10 open item. Kevin,
> since you committed the patch believed to have created it, you own this open
> item. If some other commit is more relevant or if this does not belong as a
> v10 open item, please let us know. Otherwise, please observe the policy on
> open item ownership[1] and send a status update within three calendar days of
> this message. Include a date for your subsequent status update. Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping v10. Consequently, I will appreciate your efforts
> toward speedy resolution. Thanks.
>
> [1] https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

This PostgreSQL 10 open item is past due for your status update. Kindly send
a status update within 24 hours, and include a date for your subsequent status
update. Refer to the policy on open item ownership:
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>, Kevin Grittner <kgrittn(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-10 03:18:18
Message-ID: CA+TgmoYSWNdwtXZJ=p_-p1YUUhAHqWGJTKRKNhs351sytag3Ww@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, May 4, 2017 at 5:51 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> Reproduced here. The stack looks like this:
>
> frame #3: 0x000000010f06f8b0
> postgres`ExceptionalCondition(conditionName="!(readptr->eflags &
> 0x0002)", errorType="FailedAssertion", fileName="tuplestore.c",
> lineNumber=1237) + 128 at assert.c:54
> frame #4: 0x000000010f0cbc85
> postgres`tuplestore_rescan(state=0x00007ff219840200) + 85 at
> tuplestore.c:1237
> frame #5: 0x000000010eced9b1
> postgres`ExecReScanNamedTuplestoreScan(node=0x00007ff21d007840) + 81
> at nodeNamedtuplestorescan.c:197
> frame #6: 0x000000010eca46a6
> postgres`ExecReScan(node=0x00007ff21d007840) + 822 at execAmi.c:216
> frame #7: 0x000000010ece7eca
> postgres`ExecNestLoop(node=0x00007ff21d006310) + 538 at
> nodeNestloop.c:148
>
> I think the problem is that the tuplestore read pointer hasn't been
> opened with the "rewindable" flag. It works for me with the attached.

Committed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Neha Sharma <neha(dot)sharma(at)enterprisedb(dot)com>, Kevin Grittner <kgrittn(at)gmail(dot)com>, Prabhat Sahu <prabhat(dot)sahu(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
Subject: Re: delta relations in AFTER triggers
Date: 2017-05-10 03:26:48
Message-ID: CA+TgmoaeR9vbfSNU7SLGf4_1PTr6g7YvudGJPNSM5aB8OxPTxQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, May 4, 2017 at 4:23 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> We can't possibly support transition tables on TRUNCATE (the whole
> point of TRUNCATE is not to inspect all the rows so we can't collect
> them), and we already reject ROW triggers on TRUNCATE, so we should
> reject transition tables on STATEMENT triggers for TRUNCATE at
> creation time too. See attached. Thoughts?

Committed, with the addition of a regression test, the inclusion of
which in future bug fixes of this sort I recommend.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company