Re: SQL/MED - file_fdw

Lists: pgsql-hackers
From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: <itagaki(dot)takahiro(at)gmail(dot)com>,<hanada(at)metrosystems(dot)co(dot)jp>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SQL/MED - file_fdw
Date: 2011-01-18 00:33:08
Message-ID: 4D348B6502000025000396E0@gw.wicourts.gov

Itagaki Takahiro wrote:
> Shigeru HANADA wrote:

>> Attached patch would avoid this leak by adding per-copy context to
>> CopyState. This would be overkill, and ResetCopyFrom() might be
>> reasonable though.
>
> Good catch. I merged your fix into the attached patch.

Review for CF:

While there is a post which suggests applying this patch in the
middle of a string of fdw patches, it stands alone without depending
on any of the other patches. I chose to focus on testing it alone,
to isolate just issues with this particular patch. In addition to
the base patch, there was a patch-on-patch which was applied to change
the signature of NextCopyFrom to take fewer parameters and return a
HeapTuple.

This patch is in context diff format, applies cleanly, compiles
without warning, passes all tests in `make check` and `make
installcheck-world`. From that and the testing I did, I think that
this patch could be committed before any of the others, if desired.

A few minor comments on format, though -- some parts of the patch
came out much more readable (for me, at least) when I regenerated the
diff using the git diff --patience switch. For example, see the end
of the CopyStateData structure definition each way. Also, the whitespace
differs a little from what pgindent uses. Most differences are pretty
minor, except for a section where the indentation wasn't changed to match
a change in brace depth, in order to keep the patch size down.

It appears that the point of the patch is to provide a better API for
accessing the implementation of COPY, to support other patches. This
whole FDW effort is not an area of expertise for me, but the API
looks reasonable to me, with a somewhat parallel structure to some of
the other APIs used by the executor. From reading the FDW posts, it
appears that other patches are successfully using this new API, and
the reviewers of the other patches seem happy enough with it, which
would tend to indicate that it is on-target. Other hackers seem to
want it, and we didn't already have it. From my reading of the posts,
the idea of creating an API at this level was agreed upon by the
community.

The only two files touched by this patch are copy.h and copy.c. The
copy.h changes consisted entirely of new function prototypes and one
declaration of a pointer type (CopyState) to a struct defined and
used only inside copy.c (CopyStateData). That pointer is the return
value from BeginCopyFrom and required as a parameter to other new
functions. So the old API is still present exactly as it was, with
additional functions added.

I tried to read through the code to look for problems, but there are
so many structures and techniques involved that I haven't had to deal
with yet that it would take me days to get up to speed enough to
desk-check this adequately. Since this API is a prerequisite for
other patches, and is already being used by them, I figured I should
do what I could quickly and then worry about how to cover the rest.

Since it doesn't appear to be intended to change any user-visible
behavior, I don't see any need for docs or changes to the regression
tests. I didn't see any portability problems. It seemed a little
light on comments to me, but perhaps that's because of my ignorance
of some of the techniques being used -- maybe things are obvious
enough to anyone who would be mucking about in copy.c.

I spent a few hours doing ad hoc tests of various sorts to try to
break things, without success. Among those tests were checks for
correct behavior on temporary tables and read only transactions --
separately and in combination. I ran millions of COPY statements
inside of a single database transaction (I tried both FROM and TO)
without seeing memory usage increase by more than a few kB. No
assertion failures. No segfaults. Nothing unusual in the log. I
plan to redo these tests with the full fdw patch set unless someone
has already covered that ground.

So far everything I've done has been with asserts enabled, so I
haven't tried to get serious benchmarks, but it seems fast. I will
rebuild without asserts and do performance tests before I change the
status on the CF page.

I'm wondering if it would make more sense to do the benchmarking with
just this patch or the full fdw patch set? Both?

-Kevin


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: hanada(at)metrosystems(dot)co(dot)jp, pgsql-hackers(at)postgresql(dot)org
Subject: Re: SQL/MED - file_fdw
Date: 2011-01-21 10:48:29
Message-ID: AANLkTikjcnxq6XtinMPJY7-OuAgp=QZJvaJNjyXfH3mj@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jan 18, 2011 at 09:33, Kevin Grittner
<Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> Review for CF:

Thank you for the review!

> Since it doesn't appear to be intended to change any user-visible
> behavior, I don't see any need for docs or changes to the regression
> tests.

There may be some user-visible changes in error messages, because I
rearranged some of the error-checking code, but the difference is
visible only when a COPY command has two or more errors. It should
not be a serious issue.

> So far everything I've done has been with asserts enabled, so I
> haven't tried to get serious benchmarks, but it seems fast.  I will
> rebuild without asserts and do performance tests before I change the
> status on the CF page.
>
> I'm wondering if it would make more sense to do the benchmarking with
> just this patch or the full fdw patch set?  Both?

I tested the performance on my desktop PC and could not see any
difference, but it would be welcome if any of you could test on
high-performance servers.

A comparison with file_fdw would be more interesting: if they have
similar performance, we could replace "COPY FROM" with "CREATE TABLE
AS SELECT FROM foreign_table", which is more flexible.

--
Itagaki Takahiro


From: Noah Misch <noah(at)leadboat(dot)com>
To: itagaki(dot)takahiro(at)gmail(dot)com, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-06 00:01:30
Message-ID: 20110206000130.GA4690@tornado.gateway.2wire.net
Lists: pgsql-hackers

On Mon, Jan 17, 2011 at 06:33:08PM -0600, Kevin Grittner wrote:
> Itagaki Takahiro wrote:
> > Shigeru HANADA wrote:
>
> >> Attached patch would avoid this leak by adding per-copy context to
> >> CopyState. This would be overkill, and ResetCopyFrom() might be
> >> reasonable though.
> >
> > Good catch. I merged your fix into the attached patch.
>
> Review for CF:
...
> I tried to read through the code to look for problems, but there are
> so many structures and techniques involved that I haven't had to deal
> with yet, that it would take me days to get up to speed enough to
> desk check this adequately. Since this API is a prerequisite for
> other patches, and already being used by them, I figured I should do
> what I could quickly and then worry about how to cover that.

I've studied the patch from this angle. My two substantive questions concern
CopyFromErrorCallback() error strings and the use of the per-tuple memory
context; see below. The rest either entail trivial fixes, or they do not
indicate anything that clearly ought to change.

The patch mostly moves code around; it's a bit difficult to grasp what's really
changing by looking at the diff (not a criticism of the approach -- I see no way
to avoid that). The final code structure is at least as easy to follow as the
structure it replaces. Here is a summary of the changes:

file_fdw wishes to borrow the COPY FROM code for parsing structured text and
building tuples fitting a given relation. file_fdw must bypass the parts that
insert those tuples. Until now, copy.c has exposed a single API, DoCopy().
To meet file_fdw's needs, the patch adds four new APIs for use as a set:

BeginCopyFrom() - once-per-COPY initialization and validation
NextCopyFrom() - call N times to get N values/null arrays
EndCopyFrom() - once-per-COPY cleanup
CopyFromErrorCallback() - for caller use in ErrorContextCallback.callback

Implementing these entails breaking apart the existing code structure. For
one, the patch separates from DoCopy the once-per-COPY-statement code, both
initialization and cleanup. Secondly, it separates the CopyFrom code for
assembling a tuple based on COPY input from the code that actually stores said
tuples in the target relation. To avoid duplicating code, the patch then
calls these new functions from DoCopy and CopyFrom. To further avoid
duplicating code and to retain symmetry, it refactors COPY TO setup and
teardown along similar lines. We have four new static functions:

BeginCopyTo() - parallel to BeginCopyFrom(), for DoCopy() alone
BeginCopy() - shared bits between BeginCopyTo() and BeginCopyFrom()
EndCopyTo() - parallel to EndCopyFrom(), for DoCopy() alone
EndCopy() - shared bits between EndCopyTo() and EndCopyFrom()

Most "parse analysis"-type bits of DoCopy() move to BeginCopy(). Several
checks remain in DoCopy(): superuser for reading a server-local file,
permission on the relation, and a read-only transaction check. The first
stays where it does because a superuser may define a file_fdw foreign table
and then grant access to others. The other two remain in DoCopy() because the
new API uses the relation as a template without reading or writing it.

The catalog work (looking up defaults, I/O functions, etc) in CopyFrom() moves
to BeginCopyFrom(), and applicable local variables become CopyState members.

copy_in_error_callback changes name to CopyFromErrorCallback() to better fit
with the rest of the new API. Its implementation does not change at all.

Since it's now possible for one SQL statement to call into the COPY machinery
an arbitrary number of times, the patch introduces a per-COPY memory context.

The patch includes additional refactorings that make sense but are
peripheral to the API change at hand. CopyState.fe_copy is gone, now
calculated locally in
DoCopyTo(). CopyState.processed is gone, with the row count now bubbling up
through return values. We now initialize the CopyState buffers for COPY FROM
only; they are unused in COPY TO. The purest of patches would defer all these,
but I don't object to them tagging along.

For COPY TO, the relkind checks move from DoCopyTo() to BeginCopyTo(). I'm
skeptical about this one. It's not required for correctness, and the relkind
checks for COPY FROM remain in CopyFrom().

file_fdw uses CopyFromErrorCallback() to give errors the proper context. The
function uses template strings like "COPY %s, line %d", where %s is the name of
the relation being copied. Presumably file_fdw and other features using this
API would wish to customize that error message prefix, and the relation name
might not be apropos at all. How about another argument to BeginCopyFrom,
specifying an error prefix to be stashed in the CopyState?

We could easily regret requiring a Relation in BeginCopyFrom(); another user may
wish to use a fabricated TupleDesc. Code paths in the new API use the Relation
for a TupleDesc, relhasoids, column defaults, and error message strings.
Removing the need for a full Relation is within reach. That being said, the API
definitely loses cleanliness if we cater to that possibility now. It's probably
best to keep the API as the patch has it, and let the hypothetical future use
case change it.

There were some concerns upthread that exposing more of copy.c would attract use
in third party FDWs, hamstringing future efforts to improve the implementation
for the COPY command itself. On review, FDW modules are not likely users in
general. file_fdw wants this because it parses structured text files, not
because it's an FDW. Either way, the API also seems nicely-chosen to minimize
the exposure of internals.

I have not tested the patched code in any detail, because Kevin's review covered
that angle quite thoroughly. I have no cause to suspect that the patch will
change user-visible function behavior, aside from the spurious error message
changes noted below. It looks unlikely to affect performance, but that's
probably worth a quick test.

> Since it doesn't appear to be intended to change any user-visible
> behavior, I don't see any need for docs or changes to the regression
> tests. I didn't see any portability problems. It seemed a little
> light on comments to me, but perhaps that's because of my ignorance
> of some of the techniques being used -- maybe things are obvious
> enough to anyone who would be mucking about in copy.c.

In some places, the new code does not add comments despite the surrounding code
having many comments. However, some existing comments went a bit too far
(/* Place tuple in tuple slot */, /* Reset the per-tuple exprcontext */).
The degree of commenting is mostly good, but I've noted a few places below.

The header comment for BeginCopyFrom should probably explain the arguments a bit
more; it assumes familiarity with the COPY implementation. How about this:

/*
* Setup to read tuples from a file for COPY FROM.
*
* 'rel': used as a template for the tuples
* 'filename': name of server-local file to read
* 'attnamelist': List of char *, columns to include. NIL selects all cols.
* 'options': List of DefElem. See copy_opt_item in gram.y for selections.
*
* Returns a CopyState, to be passed to NextCopyFrom().
*/

Thanks,
nm

Comments on specific hunks:

> *** a/src/backend/commands/copy.c
> --- b/src/backend/commands/copy.c

> ***************
> *** 167,181 **** typedef struct CopyStateData
> char *raw_buf;
> int raw_buf_index; /* next byte to process */
> int raw_buf_len; /* total # of bytes stored */
> } CopyStateData;
>
> - typedef CopyStateData *CopyState;
> -
> /* DestReceiver for COPY (SELECT) TO */
> typedef struct
> {
> DestReceiver pub; /* publicly-known function pointers */
> CopyState cstate; /* CopyStateData for the command */
> } DR_copy;
>
>
> --- 170,197 ----
> char *raw_buf;
> int raw_buf_index; /* next byte to process */
> int raw_buf_len; /* total # of bytes stored */
> +
> + /*
> + * The definition of input functions and default expressions are stored
> + * in these variables.
> + */
> + EState *estate;
> + AttrNumber num_defaults;
> + bool file_has_oids;
> + FmgrInfo oid_in_function;
> + Oid oid_typioparam;
> + FmgrInfo *in_functions;
> + Oid *typioparams;
> + int *defmap;
> + ExprState **defexprs; /* array of default att expressions */
> } CopyStateData;

The leading comment only applies to a few of those fields. Each field needs its
own comment, instead.

> ***************
> *** 1314,1339 **** DoCopyTo(CopyState cstate)
> }
> PG_END_TRY();
>
> ! if (!pipe)
> {
> ! if (FreeFile(cstate->copy_file))
> ! ereport(ERROR,
> ! (errcode_for_file_access(),
> ! errmsg("could not write to file \"%s\": %m",
> ! cstate->filename)));
> }
> }
>
> /*
> * Copy from relation or query TO file.
> */
> ! static void
> CopyTo(CopyState cstate)
> {
> TupleDesc tupDesc;
> int num_phys_attrs;
> Form_pg_attribute *attr;
> ListCell *cur;
>
> if (cstate->rel)
> tupDesc = RelationGetDescr(cstate->rel);
> --- 1396,1442 ----
> }
> PG_END_TRY();
>
> ! return processed;
> ! }
> !
> ! /*
> ! * Clean up storage and release resources for COPY TO.
> ! */
> ! static void
> ! EndCopyTo(CopyState cstate)
> ! {
> ! if (cstate->filename != NULL && FreeFile(cstate->copy_file))
> ! ereport(ERROR,
> ! (errcode_for_file_access(),
> ! errmsg("could not close file \"%s\": %m",
> ! cstate->filename)));

The old error message was "could not write to file \"%s\": %m". This might be a
good change, but it belongs in a separate patch.

> !
> ! /*
> ! * Close the relation or query. We can release the AccessShareLock we got.
> ! */

Separated from its former location, this comment is not applicable.

> ***************
> *** 1826,1832 **** CopyFrom(CopyState cstate)
> slot = ExecInitExtraTupleSlot(estate);
> ExecSetSlotDescriptor(slot, tupDesc);
>
> ! econtext = GetPerTupleExprContext(estate);
>
> /*
> * Pick up the required catalog information for each attribute in the
> --- 1889,2070 ----
> slot = ExecInitExtraTupleSlot(estate);
> ExecSetSlotDescriptor(slot, tupDesc);
>
> ! /* Prepare to catch AFTER triggers. */
> ! AfterTriggerBeginQuery();
> !
> ! /*
> ! * Check BEFORE STATEMENT insertion triggers. It's debateable whether we
> ! * should do this for COPY, since it's not really an "INSERT" statement as
> ! * such. However, executing these triggers maintains consistency with the
> ! * EACH ROW triggers that we already fire on COPY.
> ! */
> ! ExecBSInsertTriggers(estate, resultRelInfo);
> !
> ! values = (Datum *) palloc(tupDesc->natts * sizeof(Datum));
> ! nulls = (bool *) palloc(tupDesc->natts * sizeof(bool));
> !
> ! bistate = GetBulkInsertState();
> !
> ! /* Set up callback to identify error line number */
> ! errcontext.callback = CopyFromErrorCallback;
> ! errcontext.arg = (void *) cstate;
> ! errcontext.previous = error_context_stack;
> ! error_context_stack = &errcontext;
> !
> ! while (!done)
> ! {

The loop may as well be unconditional ...

> ! bool skip_tuple;
> ! Oid loaded_oid = InvalidOid;
> !
> ! CHECK_FOR_INTERRUPTS();
> !
> ! /* Reset the per-tuple exprcontext */
> ! ResetPerTupleExprContext(estate);
> !
> ! /* Switch into its memory context */
> ! MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));

Shouldn't a switch to this context happen inside NextCopyFrom(), then again for
the calls to heap_form_tuple/HeapTupleSetOid? I found previous discussion on
this matter, but no conclusion.

> !
> ! done = !NextCopyFrom(cstate, values, nulls, &loaded_oid);
> ! if (done)
> ! break;

... since this is the only exit. No need to maintain "done".

> ! /*
> ! * Clean up storage and release resources for COPY FROM.
> ! */
> ! void
> ! EndCopyFrom(CopyState cstate)
> ! {
> ! FreeExecutorState(cstate->estate);
> !
> ! if (cstate->filename != NULL && FreeFile(cstate->copy_file))
> ! ereport(ERROR,
> ! (errcode_for_file_access(),
> ! errmsg("could not close file \"%s\": %m",
> ! cstate->filename)));

Likewise: this might be a good verbiage change, but it belongs in its own patch.


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-07 06:39:39
Message-ID: AANLkTimG1=en=9f8j=kdiNMAGMWfYGKb+5nYCKhRqSJc@mail.gmail.com
Lists: pgsql-hackers

Thank you for the detailed review!
A revised patch is attached, but the APIs for jagged CSV are not included in it.

On Sun, Feb 6, 2011 at 09:01, Noah Misch <noah(at)leadboat(dot)com> wrote:
> Most "parse analysis"-type bits of DoCopy() move to BeginCopy().

It would be possible to move more FROM-only or TO-only members from
BeginCopy() into BeginCopyFrom/To(), but that is just code refactoring
that requires more rearrangement. It should be done in a separate patch,
if needed at all.

> CopyState.processed is gone, with the row count now bubbling up
> through return values.

I removed it from CopyState because it represents the number of inserted
rows in COPY FROM. However, from the viewpoint of the API, the number of
lines in the input file is more suitable. To avoid confusion, I moved it
from a shared member to a COPY FROM-only variable.

> For COPY TO, the relkind checks move from DoCopyTo() to BeginCopyTo().  I'm
> skeptical about this one.  It's not required for correctness, and the relkind
> checks for COPY FROM remain in CopyFrom().

I think error checks should be done in the early phase, so I moved the
check to BeginCopyTo(). However, the relkind check for COPY FROM is
needed only by the COPY FROM command, because there the relation is the
insertion target. For the API, the relation is used only as a template
for the input file, so we cannot perform the check in BeginCopyFrom().

> file_fdw uses CopyFromErrorCallback() to give errors the proper context.  The
> function uses template strings like "COPY %s, line %d", where %s is the name of
> the relation being copied.  Presumably file_fdw and other features using this
> API would wish to customize that error message prefix, and the relation name
> might not be apropos at all.  How about another argument to BeginCopyFrom,
> specifying an error prefix to be stashed in the CopyState?

I changed "COPY %s, ..." to "relation %s, ..." because the first string
is the relation name anyway. We could add another prefix argument, but I
think it would carry little extra information for errors.

We also have many occurrences of "COPY" in other messages, but they will
not actually be reached, because they are messages for invalid arguments
and file_fdw will have its own validator function. All invalid arguments
will be filtered in the CREATE commands.

> We could easily regret requiring a Relation in BeginCopyFrom(); another user may
> wish to use a fabricated TupleDesc.  Code paths in the new API use the Relation
> for a TupleDesc, relhashoids, column defaults, and error message strings.
> Removing the need for a full Relation is within reach.  That being said, the API
> definitely loses cleanliness if we cater to that possibility now.  It's probably
> best to keep the API as the patch has it, and let the hypothetical future use
> case change it.

Agreed. The current file_fdw has a template relation anyway.

> The header comment for BeginCopyFrom should probably explain the arguments a bit
> more; it assumes familiarity with the COPY implementation.  How about this:
> (snip)
> Comments on specific hunks:

Thanks; I included your suggestions.

>> !             /* Reset the per-tuple exprcontext */
>> !             ResetPerTupleExprContext(estate);
>> !
>> !             /* Switch into its memory context */
>> !             MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
>
> Shouldn't a switch to this context happen inside NextCopyFrom(), then again for
> the calls to heap_form_tuple/HeapTupleSetOid?  I found previous discussion on
> this matter, but no conclusion.

In my understanding, NextCopyFrom() should use the CurrentMemoryContext
provided by the caller; file_fdw will use the executor's per-tuple
context for it. Another problem with an automatic context switch is that
the next call would discard the previous values while the caller is
unaware of it.

--
Itagaki Takahiro

Attachment Content-Type Size
copy_export-20110207.patch application/octet-stream 48.2 KB

From: Noah Misch <noah(at)leadboat(dot)com>
To: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-07 12:16:37
Message-ID: 20110207121637.GA22669@tornado.gateway.2wire.net
Lists: pgsql-hackers

On Mon, Feb 07, 2011 at 03:39:39PM +0900, Itagaki Takahiro wrote:
> On Sun, Feb 6, 2011 at 09:01, Noah Misch <noah(at)leadboat(dot)com> wrote:
> > Most "parse analysis"-type bits of DoCopy() move to BeginCopy().
>
> It would be possible to move more FROM-only or TO-only members in BeginCopy()
> into their BeginCopyFrom/To(), but it is just a code refactoring requires
> more code rearrangement. It should be done by another try even if needed.

Agreed.

> > CopyState.processed is gone, with the row count now bubbling up
> > through return values.
>
> I removed it from CopyState because it represents "number of inserted rows"
> in COPY FROM. However, for the viewpoint of API, the numbers of lines in
> the input file is more suitable. To avoid the confusion, I moved it
> from a common member to COPY FROM-only variable.

Perhaps I'm missing something. The new API does not expose a "processed" count
at all; that count is used for the command tag of a top-level COPY. This part
of the patch is just changing how we structure the code to maintain that tally,
and it has no implications for observers outside copy.c. Right?

> > For COPY TO, the relkind checks move from DoCopyTo() to BeginCopyTo().  I'm
> > skeptical about this one.  It's not required for correctness, and the relkind
> > checks for COPY FROM remain in CopyFrom().
>
> I think error checks should be done in the early phase, so I moved the check
> to BeginCopyTo(). However, relkind check in COPY FROM is needed only for
> COPY FROM command because the relation is the inserted target. For APIs,
> the relation is used as a template for input file. So, we cannot perform
> the check in BeginCopyFrom().

The choice of where to put them does not affect anything outside of copy.c, and
placement in DoCopyTo() would make the symmetry between the TO and FROM code
paths easier to follow. Not a major concern, though.

> > file_fdw uses CopyFromErrorCallback() to give errors the proper context.  The
> > function uses template strings like "COPY %s, line %d", where %s is the name of
> > the relation being copied.  Presumably file_fdw and other features using this
> > API would wish to customize that error message prefix, and the relation name
> > might not be apropos at all.  How about another argument to BeginCopyFrom,
> > specifying an error prefix to be stashed in the CopyState?
>
> I changed "COPY %s, .." to "relation %s, ..." because the first string is
> the relation name anyway. We could have another prefix argument, but I think
> it has little information for errors.

That's perhaps an improvement for file_fdw, but not for regular COPY.

My comment originated with a faulty idea that file_fdw's internal use of COPY
was an implementation detail that users should not need to see. Looking now,
the file_fdw documentation clearly explains the tie to COPY, even referring
users to the COPY documentation. I no longer see a need to hide the fact that
the "foreign" source is a PostgreSQL COPY command. The error messages are fine
as they were.

Some future client of the new API may wish to hide its internal COPY use, but
there's no need to design for that now.

> We also have many "COPY" in other messages, but they won't be used actually
> because the are messages for invalid arguments and file_fdw will have own
> validater function. All invalid arguments will be filtered in CREATE commands.

Agreed; "could not read from COPY file: %m" appears to be the primary one liable
to happen in practice. The greater failure with that one is that, given a query
reading from multiple file_fdw tables, you can't tell which file had a problem.

This issue runs broader than the patch at hand; I will start another thread to
address it. Let's proceed with this patch, not changing any error messages. If
other discussion concludes that the desired behavior requires an enhancement to
this new API, a followup commit can implement that.

> >> !             /* Reset the per-tuple exprcontext */
> >> !             ResetPerTupleExprContext(estate);
> >> !
> >> !             /* Switch into its memory context */
> >> !             MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
> >
> > Shouldn't a switch to this context happen inside NextCopyFrom(), then again for
> > the calls to heap_form_tuple/HeapTupleSetOid?  I found previous discussion on
> > this matter, but no conclusion.
>
> In my understanding, NextCopyFrom() should use CurrentMemoryContext provided
> by the caller. The file_fdw will use executor's per-tuple context for it.

In a direct call to COPY FROM, all of NextCopyFrom() uses the per-tuple context
of CopyState->estate. We reset that context before each call to NextCopyFrom().
This is true before (ignoring code movement) and after the patch.

A file_fdw NextCopyFrom() call will use the per-tuple context of the executor
performing a foreign scan. Allocations will arise primarily in type input
functions. ExecEvalExpr(), used to acquire default values, will still use the
per-tuple context of CopyState->estate. That per-tuple context will never get
reset explicitly, so default value computations leak until EndCopyFrom().

I see memory context use was discussed already, but I don't see the
aforementioned specific details addressed:
http://archives.postgresql.org/message-id/AANLkTikfiM1qknr9=tL+xemBLAJ+YJoLhAG3XsH7mfwH@mail.gmail.com

The use of ExecEvalExpr() in NextCopyFrom() also seems contrary to the
execnodes.h comment you quoted in that thread ("CurrentMemoryContext should be
set to ecxt_per_tuple_memory before calling ExecEvalExpr() --- see
ExecEvalExprSwitchContext()."). NextCopyFrom() passes CopyState->estate to
ExecEvalExpr, but the current context may be the per-tuple context of a
different executor state.

> Another issue in the automatic context switch is to discard the previous
> values in the next call while the caller is unaware.

True; we would need to document that pointers stored in the "values"
argument of NextCopyFrom() are valid only until the next call. Does that
already work with file_fdw's usage pattern, or would file_fdw need to
make copies?

Three hunk-specific comments (diff is between previously-reviewed copy.c and
current master plus today's patch):

> *** a/src/backend/commands/copy.c
> --- b/src/backend/commands/copy.c
> ***************
> *** 175,193 **** typedef struct CopyStateData
> * The definition of input functions and default expressions are stored
> * in these variables.
> */
> EState *estate;
> AttrNumber num_defaults;
> bool file_has_oids;
> FmgrInfo oid_in_function;
> Oid oid_typioparam;
> ! FmgrInfo *in_functions;
> ! Oid *typioparams;
> ! int *defmap;
> ExprState **defexprs; /* array of default att expressions */
> } CopyStateData;
>
> /* DestReceiver for COPY (SELECT) TO */
> typedef struct
> {
> DestReceiver pub; /* publicly-known function pointers */
> CopyState cstate; /* CopyStateData for the command */
> --- 175,193 ----
> * The definition of input functions and default expressions are stored
> * in these variables.
> */

This block comment remains roughly half correct. Again, I think a small comment
on every field below should replace it.

> EState *estate;
> AttrNumber num_defaults;
> bool file_has_oids;
> FmgrInfo oid_in_function;
> Oid oid_typioparam;
> ! FmgrInfo *in_functions; /* array of input functions for each attrs */
> ! Oid *typioparams; /* array of element types for in_functions */
> ! int *defmap; /* array of default att numbers */
> ExprState **defexprs; /* array of default att expressions */
> } CopyStateData;
>
> /* DestReceiver for COPY (SELECT) TO */
> typedef struct
> {
> DestReceiver pub; /* publicly-known function pointers */
> CopyState cstate; /* CopyStateData for the command */
> ***************
> *** 1268,1283 **** BeginCopy(bool is_from,
> --- 1268,1289 ----
> }
>
> /*
> * Release resources allocated in a cstate.
> */
> static void
> EndCopy(CopyState cstate)
> {
> + if (cstate->filename != NULL && FreeFile(cstate->copy_file))
> + ereport(ERROR,
> + (errcode_for_file_access(),
> + errmsg("could not close file \"%s\": %m",
> + cstate->filename)));

In the write case, this is not an improvement -- failure in fclose(3) almost
always arises from the attempt to flush buffered writes. Note how most other
call sites checking the return value of FreeFile() use an error message like the
original one. The change to the read case seems fine.

> +
> MemoryContextDelete(cstate->copycontext);
> pfree(cstate);
> }
>
> /*
> * Setup CopyState to read tuples from a table or a query for COPY TO.
> */
> static CopyState
> ***************
> *** 1400,1424 **** DoCopyTo(CopyState cstate)
> }
>
> /*
> * Clean up storage and release resources for COPY TO.
> */
> static void
> EndCopyTo(CopyState cstate)
> {
> - if (cstate->filename != NULL && FreeFile(cstate->copy_file))
> - ereport(ERROR,
> - (errcode_for_file_access(),
> - errmsg("could not close file \"%s\": %m",
> - cstate->filename)));
> -
> - /*
> - * Close the relation or query. We can release the AccessShareLock we got.
> - */
> if (cstate->queryDesc != NULL)
> {
> /* Close down the query and free resources. */
> ExecutorEnd(cstate->queryDesc);
> FreeQueryDesc(cstate->queryDesc);
> PopActiveSnapshot();
> }
>
> --- 1406,1421 ----
> ***************
> *** 1681,1746 **** void
> CopyFromErrorCallback(void *arg)
> {
> CopyState cstate = (CopyState) arg;
>
> if (cstate->binary)
> {
> /* can't usefully display the data */
> if (cstate->cur_attname)
> ! errcontext("COPY %s, line %d, column %s",
> cstate->cur_relname, cstate->cur_lineno,
> cstate->cur_attname);
> else
> ! errcontext("COPY %s, line %d",
> cstate->cur_relname, cstate->cur_lineno);
> }
> else
> {
> if (cstate->cur_attname && cstate->cur_attval)
> {
> /* error is relevant to a particular column */
> char *attval;
>
> attval = limit_printout_length(cstate->cur_attval);
> ! errcontext("COPY %s, line %d, column %s: \"%s\"",
> cstate->cur_relname, cstate->cur_lineno,
> cstate->cur_attname, attval);
> pfree(attval);
> }
> else if (cstate->cur_attname)
> {
> /* error is relevant to a particular column, value is NULL */
> ! errcontext("COPY %s, line %d, column %s: null input",
> cstate->cur_relname, cstate->cur_lineno,
> cstate->cur_attname);
> }
> else
> {
> /* error is relevant to a particular line */
> if (cstate->line_buf_converted || !cstate->need_transcoding)
> {
> char *lineval;
>
> lineval = limit_printout_length(cstate->line_buf.data);
> ! errcontext("COPY %s, line %d: \"%s\"",
> cstate->cur_relname, cstate->cur_lineno, lineval);
> pfree(lineval);
> }
> else
> {
> /*
> * Here, the line buffer is still in a foreign encoding, and
> * indeed it's quite likely that the error is precisely a
> * failure to do encoding conversion (ie, bad data). We dare
> * not try to convert it, and at present there's no way to
> * regurgitate it without conversion. So we have to punt and
> * just report the line number.
> */
> ! errcontext("COPY %s, line %d",
> cstate->cur_relname, cstate->cur_lineno);
> }
> }
> }
> }
>
> /*
> * Make sure we don't print an unreasonable amount of COPY data in a message.
> --- 1678,1743 ----
> CopyFromErrorCallback(void *arg)
> {
> CopyState cstate = (CopyState) arg;
>
> if (cstate->binary)
> {
> /* can't usefully display the data */
> if (cstate->cur_attname)
> ! errcontext("relation %s, line %d, column %s",
> cstate->cur_relname, cstate->cur_lineno,
> cstate->cur_attname);
> else
> ! errcontext("relation %s, line %d",
> cstate->cur_relname, cstate->cur_lineno);
> }
> else
> {
> if (cstate->cur_attname && cstate->cur_attval)
> {
> /* error is relevant to a particular column */
> char *attval;
>
> attval = limit_printout_length(cstate->cur_attval);
> ! errcontext("relation %s, line %d, column %s: \"%s\"",
> cstate->cur_relname, cstate->cur_lineno,
> cstate->cur_attname, attval);
> pfree(attval);
> }
> else if (cstate->cur_attname)
> {
> /* error is relevant to a particular column, value is NULL */
> ! errcontext("relation %s, line %d, column %s: null input",
> cstate->cur_relname, cstate->cur_lineno,
> cstate->cur_attname);
> }
> else
> {
> /* error is relevant to a particular line */
> if (cstate->line_buf_converted || !cstate->need_transcoding)
> {
> char *lineval;
>
> lineval = limit_printout_length(cstate->line_buf.data);
> ! errcontext("relation %s, line %d: \"%s\"",
> cstate->cur_relname, cstate->cur_lineno, lineval);
> pfree(lineval);
> }
> else
> {
> /*
> * Here, the line buffer is still in a foreign encoding, and
> * indeed it's quite likely that the error is precisely a
> * failure to do encoding conversion (ie, bad data). We dare
> * not try to convert it, and at present there's no way to
> * regurgitate it without conversion. So we have to punt and
> * just report the line number.
> */
> ! errcontext("relation %s, line %d",
> cstate->cur_relname, cstate->cur_lineno);
> }
> }
> }
> }
>
> /*
> * Make sure we don't print an unreasonable amount of COPY data in a message.
> ***************
> *** 1782,1798 **** limit_printout_length(const char *str)
> */
> static uint64
> CopyFrom(CopyState cstate)
> {
> HeapTuple tuple;
> TupleDesc tupDesc;
> Datum *values;
> bool *nulls;
> - bool done = false;
> ResultRelInfo *resultRelInfo;
> EState *estate = cstate->estate; /* for ExecConstraints() */
> TupleTableSlot *slot;
> MemoryContext oldcontext = CurrentMemoryContext;
> ErrorContextCallback errcontext;
> CommandId mycid = GetCurrentCommandId(true);
> int hi_options = 0; /* start with default heap_insert options */
> BulkInsertState bistate;
> --- 1779,1794 ----
> ***************
> *** 1906,1936 **** CopyFrom(CopyState cstate)
> bistate = GetBulkInsertState();
>
> /* Set up callback to identify error line number */
> errcontext.callback = CopyFromErrorCallback;
> errcontext.arg = (void *) cstate;
> errcontext.previous = error_context_stack;
> error_context_stack = &errcontext;
>
> ! while (!done)
> {
> bool skip_tuple;
> Oid loaded_oid = InvalidOid;
>
> CHECK_FOR_INTERRUPTS();
>
> /* Reset the per-tuple exprcontext */
> ResetPerTupleExprContext(estate);
>
> /* Switch into its memory context */
> MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
>
> ! done = !NextCopyFrom(cstate, values, nulls, &loaded_oid);
> ! if (done)
> break;
>
> /* And now we can form the input tuple. */
> tuple = heap_form_tuple(tupDesc, values, nulls);
>
> if (loaded_oid != InvalidOid)
> HeapTupleSetOid(tuple, loaded_oid);
>
> --- 1902,1931 ----
> bistate = GetBulkInsertState();
>
> /* Set up callback to identify error line number */
> errcontext.callback = CopyFromErrorCallback;
> errcontext.arg = (void *) cstate;
> errcontext.previous = error_context_stack;
> error_context_stack = &errcontext;
>
> ! for (;;)
> {
> bool skip_tuple;
> Oid loaded_oid = InvalidOid;
>
> CHECK_FOR_INTERRUPTS();
>
> /* Reset the per-tuple exprcontext */
> ResetPerTupleExprContext(estate);
>
> /* Switch into its memory context */
> MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
>
> ! if (!NextCopyFrom(cstate, values, nulls, &loaded_oid))
> break;
>
> /* And now we can form the input tuple. */
> tuple = heap_form_tuple(tupDesc, values, nulls);
>
> if (loaded_oid != InvalidOid)
> HeapTupleSetOid(tuple, loaded_oid);
>
> ***************
> *** 2015,2031 **** CopyFrom(CopyState cstate)
> */
> if (hi_options & HEAP_INSERT_SKIP_WAL)
> heap_sync(cstate->rel);
>
> return processed;
> }
>
> /*
> ! * Setup CopyState to read tuples from a file for COPY FROM.
> */
> CopyState
> BeginCopyFrom(Relation rel,
> const char *filename,
> List *attnamelist,
> List *options)
> {
> CopyState cstate;
> --- 2010,2032 ----
> */
> if (hi_options & HEAP_INSERT_SKIP_WAL)
> heap_sync(cstate->rel);
>
> return processed;
> }
>
> /*
> ! * CopyGetAttnums - build an integer list of attnums to be copied
> ! *
> ! * The input attnamelist is either the user-specified column list,
> ! * or NIL if there was none (in which case we want all the non-dropped
> ! * columns).
> ! *
> ! * rel can be NULL ... it's only used for error reports.
> */
> CopyState
> BeginCopyFrom(Relation rel,

This is just a verbatim copy of the CopyGetAttnums() header comment. (The
middle paragraph happens to be true, though.)

> const char *filename,
> List *attnamelist,
> List *options)
> {
> CopyState cstate;
> ***************
> *** 2208,2224 **** BeginCopyFrom(Relation rel,
> MemoryContextSwitchTo(oldcontext);
>
> return cstate;
> }
>
> /*
> * Read next tuple from file for COPY FROM. Return false if no more tuples.
> *
> ! * valus and nulls arrays must be the same length as columns of the
> * relation passed to BeginCopyFrom. Oid of the tuple is returned with
> * tupleOid separately.
> */
> bool
> NextCopyFrom(CopyState cstate, Datum *values, bool *nulls, Oid *tupleOid)
> {
> TupleDesc tupDesc;
> Form_pg_attribute *attr;
> --- 2209,2225 ----
> MemoryContextSwitchTo(oldcontext);
>
> return cstate;
> }
>
> /*
> * Read next tuple from file for COPY FROM. Return false if no more tuples.
> *
> ! * values and nulls arrays must be the same length as columns of the
> * relation passed to BeginCopyFrom. Oid of the tuple is returned with
> * tupleOid separately.
> */
> bool
> NextCopyFrom(CopyState cstate, Datum *values, bool *nulls, Oid *tupleOid)
> {
> TupleDesc tupDesc;
> Form_pg_attribute *attr;
> ***************
> *** 2453,2474 **** NextCopyFrom(CopyState cstate, Datum *values, bool *nulls, Oid *tupleOid)
> /*
> * Clean up storage and release resources for COPY FROM.
> */
> void
> EndCopyFrom(CopyState cstate)
> {
> FreeExecutorState(cstate->estate);
>
> - if (cstate->filename != NULL && FreeFile(cstate->copy_file))
> - ereport(ERROR,
> - (errcode_for_file_access(),
> - errmsg("could not close file \"%s\": %m",
> - cstate->filename)));
> -
> /* Clean up storage */
> EndCopy(cstate);
> }
>
>
> /*
> * Read the next input line and stash it in line_buf, with conversion to
> * server encoding.
> --- 2454,2469 ----


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
Cc: Noah Misch <noah(at)leadboat(dot)com>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-07 20:05:26
Message-ID: 4D505086.4030700@dunslane.net
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 02/07/2011 01:39 AM, Itagaki Takahiro wrote:
>
>
>> file_fdw uses CopyFromErrorCallback() to give errors the proper context. The
>> function uses template strings like "COPY %s, line %d", where %s is the name of
>> the relation being copied. Presumably file_fdw and other features using this
>> API would wish to customize that error message prefix, and the relation name
>> might not be apropos at all. How about another argument to BeginCopyFrom,
>> specifying an error prefix to be stashed in the CopyState?
> I changed "COPY %s, .." to "relation %s, ..." because the first string is
> the relation name anyway. We could have another prefix argument, but I think
> it has little information for errors.
>
> We also have many "COPY" strings in other messages, but they won't actually be
> used, because they are messages for invalid arguments and file_fdw will have its
> own validator function. All invalid arguments will be filtered in CREATE commands.

These changes have broken the regression tests. The attached patches
(one for the core regression tests and one for file_fdw) fix that.

But I don't know that your change is terribly helpful. I rather like
Noah's idea better, if we need to make a change.

cheers

andrew

Attachment Content-Type Size
copyapi-regressionfix-file_fdw.patch text/x-patch 618 bytes
copyapi-regressionfix-core.patch text/x-patch 4.2 KB

From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-08 12:25:29
Message-ID: AANLkTinfHgmiOqDqx7F2ZLOpVLTqp0+4Xx_kL1S_scw+@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Here is a revised patch, including jagged CSV support.
The new exported function is named NextCopyFromRawFields.

On Mon, Feb 7, 2011 at 21:16, Noah Misch <noah(at)leadboat(dot)com> wrote:
> Perhaps I'm missing something.  The new API does not expose a "processed" count
> at all; that count is used for the command tag of a top-level COPY.  This part
> of the patch is just changing how we structure the code to maintain that tally,
> and it has no implications for observers outside copy.c.  Right?

True, but the counter is tightly bound to the INSERT side of COPY FROM.
| copy.c (1978)
| * We count only tuples not suppressed by a BEFORE INSERT trigger;

I think it is cleaner to separate it from CopyState, because CopyState is
used as a file reader rather than a writer.
However, if there are objections, I'll revert it.

>> I changed "COPY %s, .." to "relation %s, ..."
> My comment originated with a faulty idea that file_fdw's internal use of COPY
> was an implementation detail that users should not need to see.  Looking now,
> the file_fdw documentation clearly explains the tie to COPY, even referring
> users to the COPY documentation.  I no longer see a need to hide the fact that
> the "foreign" source is a PostgreSQL COPY command.  The error messages are fine
> as they were.

OK, I reverted the changes. User-visible changes might be more important,
as Andrew pointed out.

> ExecEvalExpr(), used to acquire default values, will still use the
> per-tuple context of CopyState->estate.  That per-tuple context will never get
> reset explicitly, so default value computations leak until EndCopyFrom().

Ah, I see. I didn't notice the leak because we rarely use the per-tuple memory
context in the estate; we just use CurrentMemoryContext in many cases.
But the leak could occur, and the code is misleading.
I moved ResetPerTupleExprContext() into NextCopyFrom(), but
CurrentMemoryContext is still used in it for the result values.

Another possible design might be passing EState as an argument of
NextCopyFrom and removing estate from CopyState. That seems much cleaner
in terms of control flow, but it requires more changes in file_fdw.
Comments?

> This block comment remains roughly half correct.  Again, I think a small comment
> on every field below should replace it.
>
> This is just a verbatim copy of the CopyGetAttnums() header comment.  (The
> middle paragraph happens to be true, though.)

Silly mistakes. Maybe came from too many 'undo' in my editor.

--
Itagaki Takahiro

Attachment Content-Type Size
copy_export-20110208.patch application/octet-stream 49.9 KB

From: Noah Misch <noah(at)leadboat(dot)com>
To: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-08 18:49:48
Message-ID: 20110208184948.GA960@tornado.gateway.2wire.net
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Feb 08, 2011 at 09:25:29PM +0900, Itagaki Takahiro wrote:
> Here is a revised patch, that including jagged csv support.
> A new exported function is named as NextCopyFromRawFields.

Seems a bit incongruous to handle the OID column in that function. That part
fits with the other per-column code in NextCopyFrom().

> On Mon, Feb 7, 2011 at 21:16, Noah Misch <noah(at)leadboat(dot)com> wrote:
> > Perhaps I'm missing something.  The new API does not expose a "processed" count
> > at all; that count is used for the command tag of a top-level COPY.  This part
> > of the patch is just changing how we structure the code to maintain that tally,
> > and it has no implications for observers outside copy.c.  Right?
>
> True, but the counter is tightly bound with INSERT-side of COPY FROM.
> | copy.c (1978)
> | * We count only tuples not suppressed by a BEFORE INSERT trigger;
>
> I think it is cleaner way to separate it from CopyState
> because it is used as a file reader rather than a writer.
> However, if there are objections, I'd revert it.

No significant objection. I just wished to be clear on whether it was pure
refactoring, or something more.

> > ExecEvalExpr(), used to acquire default values, will still use the
> > per-tuple context of CopyState->estate.  That per-tuple context will never get
> > reset explicitly, so default value computations leak until EndCopyFrom().
>
> Ah, I see. I didn't see the leak because we rarely use per-tuple memory
> context in the estate. We just use CurrentMemoryContext in many cases.
> But the leak could occur, and the code is misleading.
> I moved ResetPerTupleExprContext() into NextCopyFrom(), but
> CurrentMemoryContext still used in it to the result values.

The code still violates the contract of ExecEvalExpr() by calling it with
CurrentMemoryContext != econtext->ecxt_per_tuple_memory.

> Another possible design might be passing EState as an argument of
> NextCopyFrom and remove estate from CopyState. It seems a much cleaner way
> in terms of control flow, but it requires more changes in file_fdw.
> Comments?

Seems sensible and more-consistent with the typical structure of executor code.
Do we place any constraints on sharing of EState among different layers like
this? I could not identify any, offhand.

The new API uses EState for two things. First, BeginCopyFrom() passes it to
ExecPrepareExpr(). Instead, let's use expression_planner() + ExecInitExpr() and
require that we've been called with a memory context of suitable longevity.
Things will break anyway if BeginCopyFrom()'s CurrentMemoryContext gets reset
too early. This way, we no longer need an EState in BeginCopyFrom().

Second, NextCopyFrom() sends the per-output-tuple ExprContext of the EState to
ExecEvalExpr(). It really needs a specific ExprContext, not an EState. A
top-level COPY has a bijection between input and output tuples, making the
distinction unimportant. GetPerTupleExprContext() is wrong for a file_fdw scan,
though. We need the ExprContext of the ForeignScanState or another of
equivalent lifecycle. NextCopyFrom() would then require that it be called with
CurrentMemoryContext == econtext->ecxt_per_tuple_memory.

Sorry, I missed a lot of these memory details on my first couple of reviews.

nm


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-09 05:55:26
Message-ID: AANLkTimfZE1bLXFfuV9TPihGOR7Uq=Z=a=2kTusG2PJk@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Feb 9, 2011 at 03:49, Noah Misch <noah(at)leadboat(dot)com> wrote:
>> A new exported function is named as NextCopyFromRawFields.
> Seems a bit incongruous to handle the OID column in that function.  That part
> fits with the other per-column code in NextCopyFrom().

Hmmm, I thought oid columns should be separated from user columns, but it
might be a confusing interface. I removed the explicit oid output from
NextCopyFromRawFields. file_fdw won't use oids at all in any case, though.

> The code still violates the contract of ExecEvalExpr() by calling it with
> CurrentMemoryContext != econtext->ecxt_per_tuple_memory.

I'm not sure whether we have such a contract, because the caller might
want to get the expression result in a long-lived context. But anyway,
using an external ExprContext looks cleaner. The new prototype is:

bool NextCopyFrom(
[IN] CopyState cstate, ExprContext *econtext,
[OUT] Datum *values, bool *nulls, Oid *tupleOid)

Note that econtext can be NULL if no default values are used.
Since file_fdw won't use default values, we can just pass NULL for it.
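For illustration, the read loop a caller such as file_fdw might build on this prototype could look roughly like the following. This is a non-compilable sketch against the backend internals, with declarations and error handling omitted; the function names come from the patch, the variable names are placeholders:

```
    /* Sketch only: a file_fdw-style scan driving the exported COPY reader. */
    cstate = BeginCopyFrom(rel, filename, NIL, options);

    for (;;)
    {
        /*
         * file_fdw reads every column from the file, so there are no
         * default expressions to evaluate and econtext can be NULL.
         */
        if (!NextCopyFrom(cstate, NULL, values, nulls, &tupleOid))
            break;              /* no more tuples */

        tuple = heap_form_tuple(tupDesc, values, nulls);
        /* ... hand the tuple back to the executor ... */
    }

    EndCopyFrom(cstate);
```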

> Sorry, I missed a lot of these memory details on my first couple of reviews.

You did great reviews! Thank you very much.

--
Itagaki Takahiro

Attachment Content-Type Size
copy_export-20110209.patch application/octet-stream 49.6 KB

From: Noah Misch <noah(at)leadboat(dot)com>
To: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-09 07:03:06
Message-ID: 20110209070306.GA4114@tornado.leadboat.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Feb 09, 2011 at 02:55:26PM +0900, Itagaki Takahiro wrote:
> On Wed, Feb 9, 2011 at 03:49, Noah Misch <noah(at)leadboat(dot)com> wrote:
> > The code still violates the contract of ExecEvalExpr() by calling it with
> > CurrentMemoryContext != econtext->ecxt_per_tuple_memory.
>
> I'm not sure whether we have such contract because the caller might
> want to get the expression result in long-lived context.

execQual.c has this comment:

/* ----------------------------------------------------------------
* ExecEvalExpr routines
...
* The caller should already have switched into the temporary memory
* context econtext->ecxt_per_tuple_memory. The convenience entry point
* ExecEvalExprSwitchContext() is provided for callers who don't prefer to
* do the switch in an outer loop. We do not do the switch in these routines
* because it'd be a waste of cycles during nested expression evaluation.
* ----------------------------------------------------------------
*/

Assuming that comment is accurate, ExecEvalExpr constrains us; any default
values must get allocated in econtext->ecxt_per_tuple_memory. To get them in a
long-lived allocation, the caller can copy the datums or supply a long-lived
ExprContext. Any CurrentMemoryContext works when econtext == NULL, of course.
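Concretely, honoring that contract when evaluating a default expression would look something like this (a sketch against the backend internals, not standalone code; defexpr and attnum are placeholder names):

```
    oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
    values[attnum] = ExecEvalExpr(defexpr, econtext, &nulls[attnum], NULL);
    MemoryContextSwitchTo(oldcontext);

    /* Or use the convenience entry point, which does the switch itself: */
    values[attnum] = ExecEvalExprSwitchContext(defexpr, econtext,
                                               &nulls[attnum], NULL);
```

Either way, the datum is allocated in the per-tuple context and lives only until that context is next reset.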

I'd suggest enhancing your new paragraph in the NextCopyFrom() header comment
like this:

- * 'econtext' is used to evaluate default expression for each columns not
- * read from the file. It can be NULL when no default values are used, i.e.
- * when all columns are read from the file.
+ * 'econtext' is used to evaluate default expression for each columns not read
+ * from the file. It can be NULL when no default values are used, i.e. when all
+ * columns are read from the file. If econtext is not NULL, the caller must have
+ * switched into the temporary memory context econtext->ecxt_per_tuple_memory.

> But anyway
> using an external ExprContext looks cleaner. The new prototype is:
>
> bool NextCopyFrom(
> [IN] CopyState cstate, ExprContext *econtext,
> [OUT] Datum *values, bool *nulls, Oid *tupleOid)

Looks good.

> Note that econtext can be NULL if no default values are used.
> Since file_fdw won't use default values, we can just pass NULL for it.

Nice. Good thinking. One small point:

> --- 2475,2504 ----
> * provided by the input data. Anything not processed here or above
> * will remain NULL.
> */
> + /* XXX: End of only-indentation changes. */
> + if (num_defaults > 0)
> + {
> + Assert(econtext != NULL);
> +
> for (i = 0; i < num_defaults; i++)

This could be better-written as "Assert(num_defaults == 0 || econtext != NULL);".

From a functional and code structure perspective, I find this ready to commit.
(I assume you'll drop the XXX: indent only comments on commit.) Kevin, did you
want to do that performance testing you spoke of?

Thanks,
nm


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-11 16:12:21
Message-ID: AANLkTimGpnuaYquw91JmV4F4NTEmbo3tdaacFJh=iFP9@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Feb 9, 2011 at 2:03 AM, Noah Misch <noah(at)leadboat(dot)com> wrote:
> From a functional and code structure perspective, I find this ready to commit.
> (I assume you'll drop the XXX: indent only comments on commit.)  Kevin, did you
> want to do that performance testing you spoke of?

OK, so is this Ready for Committer, or we're still working on it?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Robert Haas" <robertmhaas(at)gmail(dot)com>, "Noah Misch" <noah(at)leadboat(dot)com>
Cc: "Itagaki Takahiro" <itagaki(dot)takahiro(at)gmail(dot)com>, <hanada(at)metrosystems(dot)co(dot)jp>,<pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-11 16:31:08
Message-ID: 4D550FEC020000250003A841@gw.wicourts.gov
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Noah Misch <noah(at)leadboat(dot)com> wrote:
>> From a functional and code structure perspective, I find this
>> ready to commit. (I assume you'll drop the XXX: indent only
>> comments on commit.) Kevin, did you want to do that performance
>> testing you spoke of?
>
> OK, so is this Ready for Committer, or we're still working on it?

I can run some benchmarks to compare COPY statements with and
without the patch this weekend. Noah, does it make more sense to do
that with just the copy_export-20110209.patch patch file applied, or
in combination with some other FDW patch(es)?

-Kevin


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Noah Misch <noah(at)leadboat(dot)com>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-11 16:31:16
Message-ID: AANLkTi=5coHzHv=+-PgmHxZhn3TqvHTSX9SszbvpcE2O@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, Feb 12, 2011 at 01:12, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Wed, Feb 9, 2011 at 2:03 AM, Noah Misch <noah(at)leadboat(dot)com> wrote:
>> From a functional and code structure perspective, I find this ready to commit.
>> (I assume you'll drop the XXX: indent only comments on commit.)  Kevin, did you
>> want to do that performance testing you spoke of?
>
> OK, so is this Ready for Committer, or we're still working on it?

Basically, we have no more tasks until the FDW core API is applied.
COPY API and file_fdw patches are waiting for it.

If we extend them a little more, I'd raise two items:
* Should we print foreign table names in error messages?
http://archives.postgresql.org/pgsql-hackers/2011-02/msg00427.php
* COPY encoding patch was rejected, but using client_encoding is
logically wrong for file_fdw. We might need subset of the patch
for file_fdw.

--
Itagaki Takahiro


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
Cc: Noah Misch <noah(at)leadboat(dot)com>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org, hanada(at)metrosystems(dot)co(dot)jp
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-11 16:36:36
Message-ID: AANLkTikHje1i76xx=jP1OqHtr0bA44r2bvLqRyPCNR4w@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Feb 11, 2011 at 11:31 AM, Itagaki Takahiro
<itagaki(dot)takahiro(at)gmail(dot)com> wrote:
> On Sat, Feb 12, 2011 at 01:12, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Wed, Feb 9, 2011 at 2:03 AM, Noah Misch <noah(at)leadboat(dot)com> wrote:
>>> From a functional and code structure perspective, I find this ready to commit.
>>> (I assume you'll drop the XXX: indent only comments on commit.)  Kevin, did you
>>> want to do that performance testing you spoke of?
>>
>> OK, so is this Ready for Committer, or we're still working on it?
>
> Basically, we have no more tasks until the FDW core API is applied.
> COPY API and file_fdw patches are waiting for it.
>
> If we extend them a little more, I'd raise two items:
> * Should we print foreign table names in error messages?
>  http://archives.postgresql.org/pgsql-hackers/2011-02/msg00427.php
> * COPY encoding patch was rejected, but using client_encoding is
>  logically wrong for file_fdw. We might need subset of the patch
>  for file_fdw.

It sounds to me like that second issue is a showstopper. I think we
either need to reopen discussion on that patch and come up with a
resolution that is acceptable ASAP, or we need to punt file_fdw to
9.2.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Itagaki Takahiro" <itagaki(dot)takahiro(at)gmail(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "Noah Misch" <noah(at)leadboat(dot)com>,<hanada(at)metrosystems(dot)co(dot)jp>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-11 16:45:13
Message-ID: 4D551339020000250003A849@gw.wicourts.gov
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com> wrote:

> Basically, we have no more tasks until the FDW core API is
> applied. COPY API and file_fdw patches are waiting for it.

This is something I've found confusing about this patch set, to the
point of not knowing what to test, exactly. The COPY API patch and
the patch-on-patch for it both applied cleanly *without any of the
other patches* and seemed to run fine, even though the post with a
patch-on-patch for the COPY API said that several other patches
needed to be applied first. In spite of having tried to follow the
posts for all the FDW threads, I'm still confused enough about the
relationship between these patches to be unsure what to test.

My top priority for testing would be to confirm that there is no
adverse impact on existing COPY performance from the refactoring.

-Kevin


From: Noah Misch <noah(at)leadboat(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, hanada(at)metrosystems(dot)co(dot)jp, pgsql-hackers(at)postgresql(dot)org
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-11 16:46:47
Message-ID: 20110211164647.GA30425@tornado.leadboat.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Feb 11, 2011 at 10:31:08AM -0600, Kevin Grittner wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> > Noah Misch <noah(at)leadboat(dot)com> wrote:
> >> From a functional and code structure perspective, I find this
> >> ready to commit. (I assume you'll drop the XXX: indent only
> >> comments on commit.) Kevin, did you want to do that performance
> >> testing you spoke of?
> >
> > OK, so is this Ready for Committer, or we're still working on it?
>
> I can run some benchmarks to compare COPY statements with and
> without the patch this weekend. Noah, does it make more sense to do
> that with just the copy_export-20110209.patch patch file applied, or
> in combination with some other FDW patch(es)?

I'd say, run them with this patch alone. The important thing is to not penalize
existing COPY users. Incidentally, the "did you want ... ?" was a genuine
question. I see very little performance risk here, so the tests could be quite
cursory, even absent entirely.

nm


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Noah Misch" <noah(at)leadboat(dot)com>
Cc: "Itagaki Takahiro" <itagaki(dot)takahiro(at)gmail(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, <hanada(at)metrosystems(dot)co(dot)jp>,<pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-11 16:58:18
Message-ID: 4D55164A020000250003A850@gw.wicourts.gov
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Noah Misch <noah(at)leadboat(dot)com> wrote:

> I'd say, run them with this patch alone. The important thing is
> to not penalize existing COPY users.

Sounds good.

> Incidentally, the "did you want ... ?" was a genuine question. I
> see very little performance risk here, so the tests could be quite
> cursory, even absent entirely.

From what I've seen, I tend to agree. If there's a committer ready
to go over this, I would say that it might be worth waiting for the
benchmark results against the patch from the day before yesterday to
be run before "pulling the trigger" on it; but proceed on the basis
that it's a near-certainty that it will test out OK.

-Kevin


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Noah Misch" <noah(at)leadboat(dot)com>
Cc: "Itagaki Takahiro" <itagaki(dot)takahiro(at)gmail(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, <hanada(at)metrosystems(dot)co(dot)jp>,<pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-12 21:42:17
Message-ID: 4D56AA59020000250003A91C@gw.wicourts.gov

Noah Misch <noah(at)leadboat(dot)com> wrote:

> I'd say, run them with this patch alone. The important thing is
> to not penalize existing COPY users. Incidentally, the "did you
> want ... ?" was a genuine question. I see very little performance
> risk here, so the tests could be quite cursory, even absent
> entirely.

In two hours of testing with a 90GB production database, the copy
patch on top of HEAD ran 0.6% faster than HEAD for pg_dumpall
(generating identical output files), but feeding that into an
empty cluster with psql ran 8.4% faster with the patch than without!
I'm going to repeat that latter with more attention to whether
everything made it in OK. (That's not as trivial to check as the
dump phase.)

Do you see any reason that COPY FROM should be significantly
*faster* with the patch? Are there any particular things I should
be checking for problems?

-Kevin


From: Noah Misch <noah(at)leadboat(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, hanada(at)metrosystems(dot)co(dot)jp, pgsql-hackers(at)postgresql(dot)org
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-12 22:33:37
Message-ID: 20110212223337.GA16941@tornado.leadboat.com

On Sat, Feb 12, 2011 at 03:42:17PM -0600, Kevin Grittner wrote:
> In two hours of testing with a 90GB production database, the copy
> patch on top of HEAD ran 0.6% faster than HEAD for pg_dumpall
> (generating identical output files), but feeding that into an
> empty cluster with psql ran 8.4% faster with the patch than without!
> I'm going to repeat that latter with more attention to whether
> everything made it in OK. (That's not as trivial to check as the
> dump phase.)
>
> Do you see any reason that COPY FROM should be significantly
> *faster* with the patch?

No. Up to, say, 0.5% wouldn't be too surprising, but 8.4% is surprising. What
is the uncertainty of that figure?

> Are there any particular things I should
> be checking for problems?

Nothing comes to mind.

Thanks,
nm


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Noah Misch" <noah(at)leadboat(dot)com>
Cc: "Itagaki Takahiro" <itagaki(dot)takahiro(at)gmail(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, <hanada(at)metrosystems(dot)co(dot)jp>,<pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-13 18:41:11
Message-ID: 4D57D16B020000250003A93B@gw.wicourts.gov

Noah Misch <noah(at)leadboat(dot)com> wrote:
> On Sat, Feb 12, 2011 at 03:42:17PM -0600, Kevin Grittner wrote:

>> Do you see any reason that COPY FROM should be significantly
>> *faster* with the patch?
>
> No. Up to, say, 0.5% wouldn't be too surprising, but 8.4% is
> surprising.
> What is the uncertainty of that figure?

With a few more samples, it's not that high. It's hard to dodge
around the maintenance tasks on this machine to get good numbers, so
I can't really just set something up to run overnight to get numbers
in which I can have complete confidence, but (without putting
statistical probabilities around it) I feel very safe in saying
there isn't a performance *degradation* with the patch. I got four
restores of the 90GB data with the patch and four without. I
made sure it was during windows without any maintenance running, did
a fresh initdb for each run, and made sure that the disk areas were
the same for each run. The times for each version were pretty
tightly clustered except for each having one (slow) outlier.

If you ignore the outlier for each, there is *no overlap* between
the two sets -- the slowest of the non-outlier patched times is
faster than the fastest non-patched time.

With the patch, compared to without -- best time is 9.8% faster,
average time without the outliers is 6.9% faster, average time
including outliers is 4.3% faster, outlier is 0.8% faster.

Even with just four samples each, since I was careful to minimize
distorting factors, that seems like plenty to have confidence that
there is no performance *degradation* from the patch. If we want to
claim some particular performance *gain* from it, I would need to
arrange a dedicated machine and script maybe 100 runs each way to be
willing to offer a number for public consumption.

Unpatched:
real 17m24.171s
real 16m52.892s
real 16m40.624s
real 16m41.700s

Patched:
real 15m56.249s
real 15m47.001s
real 15m3.018s
real 17m16.157s

Since you said that a cursory test, or no test at all, should be
good enough given the low risk of performance regression, I didn't
book a machine and script a large test run, but if anyone feels
that's justified, I can arrange something.

-Kevin


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, hanada(at)metrosystems(dot)co(dot)jp, pgsql-hackers(at)postgresql(dot)org
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-14 00:18:33
Message-ID: 4D5874D9.2090901@dunslane.net

On 02/12/2011 05:33 PM, Noah Misch wrote:
> On Sat, Feb 12, 2011 at 03:42:17PM -0600, Kevin Grittner wrote:
>> In two hours of testing with a 90GB production database, the copy
>> patch on top of HEAD ran 0.6% faster than HEAD for pg_dumpall
>> (generating identical output files), but feeding that into an
>> empty cluster with psql ran 8.4% faster with the patch than without!
>> I'm going to repeat that latter with more attention to whether
>> everything made it in OK. (That's not as trivial to check as the
>> dump phase.)
>>
>> Do you see any reason that COPY FROM should be significantly
>> *faster* with the patch?
> No. Up to, say, 0.5% wouldn't be too surprising, but 8.4% is surprising. What
> is the uncertainty of that figure?
>
>

We have seen in the past that changes that might be expected to slow
things down slightly can have the opposite effect. For example, see
<http://people.planetpostgresql.org/andrew/index.php?/archives/37-Puzzling-results.html>
where Tom commented:

Yeah, IME it's not unusual for microbenchmark results to move a
percent or three in response to any code change at all, even
unrelated ones. I suppose it's from effects like critical loops
breaking across cache lines differently than before.

cheers

andrew


From: Noah Misch <noah(at)leadboat(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, hanada(at)metrosystems(dot)co(dot)jp, pgsql-hackers(at)postgresql(dot)org
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-14 13:06:52
Message-ID: 20110214130652.GB30908@tornado.leadboat.com

On Sun, Feb 13, 2011 at 12:41:11PM -0600, Kevin Grittner wrote:
> Unpatched:
> real 17m24.171s
> real 16m52.892s
> real 16m40.624s
> real 16m41.700s
>
> Patched:
> real 15m56.249s
> real 15m47.001s
> real 15m3.018s
> real 17m16.157s
>
> Since you said that a cursory test, or no test at all, should be
> good enough given the low risk of performance regression, I didn't
> book a machine and script a large test run, but if anyone feels
> that's justified, I can arrange something.

Based on this, I've taken the liberty of marking the patch Ready for Committer.
Thanks.


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Robert Haas <robertmhaas(at)gmail(dot)com>, hanada(at)metrosystems(dot)co(dot)jp, pgsql-hackers(at)postgresql(dot)org
Subject: Re: SQL/MED - file_fdw
Date: 2011-02-14 13:54:39
Message-ID: AANLkTin1_hB-oprumkL-FjLrXH6qcgS7eKbSm_=TaiBw@mail.gmail.com

On Mon, Feb 14, 2011 at 22:06, Noah Misch <noah(at)leadboat(dot)com> wrote:
> On Sun, Feb 13, 2011 at 12:41:11PM -0600, Kevin Grittner wrote:
>> Unpatched:
>> real    17m24.171s
>> real    16m52.892s
>> real    16m40.624s
>> real    16m41.700s
>>
>> Patched:
>> real    15m56.249s
>> real    15m47.001s
>> real    15m3.018s
>> real    17m16.157s
>>
>> Since you said that a cursory test, or no test at all, should be
>> good enough given the low risk of performance regression, I didn't
>> book a machine and script a large test run, but if anyone feels
>> that's justified, I can arrange something.
>
> Based on this, I've taken the liberty of marking the patch Ready for Committer.

Thank you very much for performance testing and reviewing!

The result is interesting because I didn't intend any performance optimization.
In any case, having no performance regression is enough for the purpose.

--
Itagaki Takahiro