Ragged CSV import

From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Ragged CSV import
Date: 2009-09-09 19:59:55
Message-ID: 4AA8093B.8030206@dunslane.net


I have received a requirement for the ability to import ragged CSV
files, i.e. files that contain variable numbers of columns per row. The
requirement is that extra columns would be ignored and missing columns
filled with NULL. The client wanting this has wrestled with some
preprocessors to try to get what they want, but they would feel happier
with this built in. This isn't the first time I have received this
request since we implemented CSV import. People have complained on
numerous occasions about the strictness of the import routines w.r.t.
the number of columns.

I don't think that providing a CSV import mode for reading these files
will be very difficult or invasive. If it's not acceptable, I will
provide it to the client as a patch, and I will be retrofitting it to
8.4 anyway. But I think it's not an unreasonable request.

Thoughts?

cheers

andrew


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 20:13:56
Message-ID: 6369.1252527236@sss.pgh.pa.us

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> I have received a requirement for the ability to import ragged CSV
> files, i.e. files that contain variable numbers of columns per row. The
> requirement is that extra columns would be ignored and missing columns
> filled with NULL. The client wanting this has wrestled with some
> preprocessors to try to get what they want, but they would feel happier
> with this built in. This isn't the first time I have received this
> request since we implemented CSV import. People have complained on
> numerous occasions about the strictness of the import routines w.r.t.
> the number of columns.

Hmm. Accepting too few columns and filling with nulls isn't any
different than what INSERT has always done. But ignoring extra columns
seems like a different ballgame. Can you talk your client out of that
one? It just seems like a bad idea.

As for the "numerous occasions", maybe I've not been paying attention,
but I don't recall any ...

regards, tom lane


From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 20:25:07
Message-ID: 20090909202507.GV4132@alvh.no-ip.org

Tom Lane wrote:

> As for the "numerous occasions", maybe I've not been paying attention,
> but I don't recall any ...

I don't know about numerous, but I've seen it in the spanish list; for
example:

http://archives.postgresql.org/pgsql-es-ayuda/2007-03/msg00901.php

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 20:27:05
Message-ID: 6912.1252528025@sss.pgh.pa.us

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>> I have received a requirement for the ability to import ragged CSV
>> files, i.e. files that contain variable numbers of columns per row.

BTW, one other thought about this: I think the historical reason for
COPY being strict about the number of incoming columns was that it
provided a useful cross-check that the parsing hadn't gone off into
the weeds. We have certainly seen enough examples where the reported
manifestation of, say, an escaping mistake was that COPY saw the row
as having too many or too few columns. So being permissive about it
would lose some error detection capability. I am not clear about
whether CSV format is sufficiently more robust than the traditional
COPY format to render this an acceptable loss. Comments?

(One possible answer to this is to make the behavior optional, though
surely COPY has got too many options already :-()

regards, tom lane


From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 20:34:29
Message-ID: 20090909203428.GW4132@alvh.no-ip.org

Tom Lane wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> >> I have received a requirement for the ability to import ragged CSV
> >> files, i.e. files that contain variable numbers of columns per row.
>
> BTW, one other thought about this: I think the historical reason for
> COPY being strict about the number of incoming columns was that it
> provided a useful cross-check that the parsing hadn't gone off into
> the weeds. We have certainly seen enough examples where the reported
> manifestation of, say, an escaping mistake was that COPY saw the row
> as having too many or too few columns. So being permissive about it
> would lose some error detection capability. I am not clear about
> whether CSV format is sufficiently more robust than the traditional
> COPY format to render this an acceptable loss. Comments?

I think accepting fewer columns and filling with nulls should be
safe enough for this not to be a problem; if the parser goes nuts,
it will die eventually. Silently dropping excess trailing columns
does not seem acceptable though; you could lose entire rows and not
notice.

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 20:37:32
Message-ID: 4AA8120C.9040100@agliodbs.com

On 9/9/09 12:59 PM, Andrew Dunstan wrote:
>
> I have received a requirement for the ability to import ragged CSV
> files, i.e. files that contain variable numbers of columns per row. The
> requirement is that extra columns would be ignored and missing columns
> filled with NULL. The client wanting this has wrestled with some
> preprocessors to try to get what they want, but they would feel happier
> with this built in. This isn't the first time I have received this
> request since we implemented CSV import. People have complained on
> numerous occasions about the strictness of the import routines w.r.t.
> the number of columns.

Would this just work on columns on the end, or would it work on the
basis of parsing the CSV header and matching columns?

While the former functionality would be relatively simple, I think the
latter is what people really want.

--
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 20:46:59
Message-ID: 4AA81443.9090203@dunslane.net

Tom Lane wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>
>> I have received a requirement for the ability to import ragged CSV
>> files, i.e. files that contain variable numbers of columns per row. The
>> requirement is that extra columns would be ignored and missing columns
>> filled with NULL. The client wanting this has wrestled with some
>> preprocessors to try to get what they want, but they would feel happier
>> with this built in. This isn't the first time I have received this
>> request since we implemented CSV import. People have complained on
>> numerous occasions about the strictness of the import routines w.r.t.
>> the number of columns.
>>
>
> Hmm. Accepting too few columns and filling with nulls isn't any
> different than what INSERT has always done. But ignoring extra columns
> seems like a different ballgame. Can you talk your client out of that
> one? It just seems like a bad idea.
>

No, that's critical. The application this is wanted for uploads data
that users put in spreadsheets. The users apparently expect that they
will be able to put comments on some rows off to the right of the data
they want loaded, and have them ignored.
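
For illustration, a made-up sample of the sort of thing they produce
(the trailing fields are free-form comments that should just be
dropped):

id,name,amount
1,widgets,100
2,gadgets,250,check this price with Bob
3,sprockets,75,stale?,ask Alice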

To answer your other point made later, my intention was to make this
optional behaviour, not default behaviour. I agree that it would be too
slack for default behaviour. Yes, we have quite a few options, but
that's not surprising in dealing with a format that is at best
ill-defined and which we do not control.

> As for the "numerous occasions", maybe I've not been paying attention,
> but I don't recall any ...
>
>

The requests have been made on IRC, at conferences, in private emails.

cheers

andrew


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Josh Berkus" <josh(at)agliodbs(dot)com>, "Andrew Dunstan" <andrew(at)dunslane(dot)net>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 20:47:48
Message-ID: 4AA7CE24020000250002ABEE@gw.wicourts.gov

Josh Berkus <josh(at)agliodbs(dot)com> wrote:

> Would this just work on columns on the end, or would it work on the
> basis of parsing the CSV header and matching columns?
>
> While the former functionality would be relatively simple, I think
> the latter is what people really want.

It's been a while since I've had a need for something like this, but
of the copy features not currently available in PostgreSQL, the two
most useful are to read in only some of the defined columns, and to
output to a separate disk file any rows which failed to match the
expected format. The latter would not cause the copy to fail unless
the count of such rows exceeded a user-specified threshold.

-Kevin


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 20:50:09
Message-ID: 4AA81501.3020602@dunslane.net

Josh Berkus wrote:
> On 9/9/09 12:59 PM, Andrew Dunstan wrote:
>
>> I have received a requirement for the ability to import ragged CSV
>> files, i.e. files that contain variable numbers of columns per row. The
>> requirement is that extra columns would be ignored and missing columns
>> filled with NULL. The client wanting this has wrestled with some
>> preprocessors to try to get what they want, but they would feel happier
>> with this built in. This isn't the first time I have received this
>> request since we implemented CSV import. People have complained on
>> numerous occasions about the strictness of the import routines w.r.t.
>> the number of columns.
>>
>
> Would this just work on columns on the end, or would it work on the
> basis of parsing the CSV header and matching columns?
>
> While the former functionality would be relatively simple, I think the
> latter is what people really want.
>
>

No, it's the former, and that's exactly what the client requested. I'm
not talking about parsing the header line - that's a much larger can of
worms.

cheers

andrew


From: Hannu Krosing <hannu(at)2ndQuadrant(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 20:56:01
Message-ID: 1252529761.4080.23.camel@hvost1700

On Wed, 2009-09-09 at 16:34 -0400, Alvaro Herrera wrote:
> Tom Lane wrote:
> > Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> > >> I have received a requirement for the ability to import ragged CSV
> > >> files, i.e. files that contain variable numbers of columns per row.
> >
> > BTW, one other thought about this: I think the historical reason for
> > COPY being strict about the number of incoming columns was that it
> > provided a useful cross-check that the parsing hadn't gone off into
> > the weeds. We have certainly seen enough examples where the reported
> > manifestation of, say, an escaping mistake was that COPY saw the row
> > as having too many or too few columns. So being permissive about it
> > would lose some error detection capability. I am not clear about
> > whether CSV format is sufficiently more robust than the traditional
> > COPY format to render this an acceptable loss. Comments?
>
> I think accepting fewer columns and filling with nulls should be
> safe enough for this not to be a problem; if the parser goes nuts,
> it will die eventually. Silently dropping excess trailing columns
> does not seem acceptable though; you could lose entire rows and not
> notice.

Maybe we could put a catch-all "text" or even "text[]" column as the
last one of the table and gather all the extra columns there?
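
Something like this, say (table shape only; how COPY would be told to
route the surplus fields into the array column is the open question):

CREATE TABLE import_stage (
    a     text,
    b     text,
    c     text,
    extra text[]    -- catch-all for any fields beyond the third
);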

> --
> Alvaro Herrera http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.

--
Hannu Krosing http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
Services, Consulting and Training


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Josh Berkus" <josh(at)agliodbs(dot)com>, "Dann Corbit" <DCorbit(at)connx(dot)com>, "Andrew Dunstan" <andrew(at)dunslane(dot)net>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 21:03:26
Message-ID: 4AA7D1CE020000250002ABF4@gw.wicourts.gov

"Dann Corbit" <DCorbit(at)connx(dot)com> wrote:
> Kevin Grittner

>> It's been a while since I've had a need for something like this,
>> but of the copy features not currently available in PostgreSQL,
>> the two most useful are to read in only some of the defined
>> columns, and to output to a separate disk file any rows which
>> failed to match the expected format.
>> The latter would not cause the copy to fail unless the count of
>> such rows exceeded a user-specified threshold.

> Perhaps something like SQL Server's BCP format files could be
> used.
> http://support.microsoft.com/kb/67409
> http://technet.microsoft.com/en-us/library/ms178129.aspx
> http://www.nigelrivett.net/SQLTsql/BCP_quoted_CSV_Format_file.html
> http://doc.ddart.net/mssql/sql70/impt_bcp_16.htm

That is what I was thinking of when I wrote the above, although in
the Sybase incarnation, before Microsoft split off on their own. (I
see they haven't changed sybchar from the Sybase name yet....)

My reservation about referencing it is that it includes so many
bells and whistles that it's not as easy to use as it might be, even
with the "wizard" to generate the format description files. The
other problem with it is that it was far and away the *least* stable
part of the DBMS. You could count on it being broken in any version
until six months to a year into that version's life. We eventually
moved away from it because of the instability -- we could write code
from scratch each time with more confidence of correct behavior. I
think keeping it a little simpler might help with keeping it stable.

-Kevin


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Dann Corbit <DCorbit(at)connx(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Josh Berkus <josh(at)agliodbs(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 21:13:43
Message-ID: 4AA81A87.7040407@dunslane.net

Dann Corbit wrote:
> Perhaps something like SQL Server's BCP format files could be used.
> http://support.microsoft.com/kb/67409
> http://technet.microsoft.com/en-us/library/ms178129.aspx
> http://www.nigelrivett.net/SQLTsql/BCP_quoted_CSV_Format_file.html
> http://doc.ddart.net/mssql/sql70/impt_bcp_16.htm
>

This looks vastly more complex than anything I am interested in doing.

cheers

andrew


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 23:18:14
Message-ID: 603c8f070909091618j74326f8dneabda1f1e6e3d192@mail.gmail.com

On Wed, Sep 9, 2009 at 4:13 PM, Tom Lane<tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>> I have received a requirement for the ability to import ragged CSV
>> files, i.e. files that contain variable numbers of columns per row. The
>> requirement is that extra columns would be ignored and missing columns
>> filled with NULL. The client wanting this has wrestled with some
>> preprocessors to try to get what they want, but they would feel happier
>> with this built in. This isn't the first time I have received this
>> request since we implemented CSV import. People have complained on
>> numerous occasions about the strictness of the import routines w.r.t.
>> the number of columns.
>
> Hmm.  Accepting too few columns and filling with nulls isn't any
> different than what INSERT has always done.  But ignoring extra columns
> seems like a different ballgame.  Can you talk your client out of that
> one?  It just seems like a bad idea.

I agree that ignoring extra columns is a bad idea, but I don't even
like the idea of ignoring missing columns. It doesn't seem like a
good idea to take a spreadsheet and feed it into COPY without doing
any validation anyway, and this is the kind of thing that is trivial
to clean up with a thin layer of Perl or your scripting language of
choice.

...Robert


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-09 23:41:33
Message-ID: 4AA83D2D.9080304@dunslane.net

Robert Haas wrote:
> I agree that ignoring extra columns is a bad idea, but I don't even
> like the idea of ignoring missing columns. It doesn't seem like a
> good idea to take a spreadsheet and feed it into COPY without doing
> any validation anyway, and this is the kind of thing that is trivial
> to clean up with a thin layer of Perl or your scripting language of
> choice.
>
>

If it's an optional feature then I don't see why there is a problem.
What skin is it off anyone else's nose but those who choose this
behaviour?

I am perfectly familiar with Perl and so is the client that requested
this feature. They are quite technically savvy. They are using a
scripting solution now but they find it cumbersome.

As for general validation, the requestor's application in fact loads the
spreadsheet into a temp table of text columns and then subjects it to a
large variety of complex business-rule checks before adding the data
to the main tables. It is a whole lot faster and cleaner to do it that
way than to validate before the data is loaded at all. That's why they
aren't concerned about missing columns.

cheers

andrew


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 00:43:17
Message-ID: 603c8f070909091743s9b44d80sc3ced348de688378@mail.gmail.com

On Wed, Sep 9, 2009 at 7:41 PM, Andrew Dunstan<andrew(at)dunslane(dot)net> wrote:
>
>
> Robert Haas wrote:
>>
>> I agree that ignoring extra columns is a bad idea, but I don't even
>> like the idea of ignoring missing columns.  It doesn't seem like a
>> good idea to take a spreadsheet and feed it into COPY without doing
>> any validation anyway, and this is the kind of thing that is trivial
>> to clean up with a thin layer of Perl or your scripting language of
>> choice.
>>
>>
>
> If it's an optional feature then I don't see why there is a problem. What
> skin is it off anyone else's nose but those who choose this behaviour?

I have to admit I'm usually an advocate of that way of thinking, so
maybe I am all wet. I suppose it comes down to whether you think this
particular feature is something with broad applicability, or whether
there are 100 other equally plausible features.

I wonder whether it would be appropriate to do something like
implement a method by which copy could return text[] and then one
could write wrappers around that functionality to do this as well as
other things. For example, suppose you wanted to have rows of the
form:

A,B,C,X1,Y1,X2,Y2,X3,Y3

...which gets transformed into an insert of (A,B,C) into a main table
and (A,X1,Y1), (A,X2,Y2), (A,X3,Y3) into a side table. (I have actual
knowledge of a widely-deployed system produced by a large company that
outputs data in a format similar to this, though the actual format is
considerably more complex.)
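
A sketch, assuming a hypothetical copy_rows() SRF that returns one
text[] per input line:

INSERT INTO main_table (a, b, c)
SELECT t[1], t[2], t[3]
FROM copy_rows('/path/data.csv') AS f(t);

INSERT INTO side_table (a, x, y)       -- one row per (Xi, Yi) pair
SELECT t[1], t[4 + 2*i], t[5 + 2*i]
FROM copy_rows('/path/data.csv') AS f(t),
     generate_series(0, 2) AS i;       -- rereads the source for brevity;
                                       -- a temp table would avoid that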

...Robert


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 02:40:34
Message-ID: 4AA86722.1070203@dunslane.net

Robert Haas wrote:
> I wonder whether it would be appropriate to do something like
> implement a method by which copy could return text[] and then one
> could write wrappers around that functionality to do this as well as
> other things.
>

It's not a function ... writing a copy() function returning text[] (or
setof text[], which would be better) in plperlu would be trivial. But it
couldn't read from stdin or be fed data from a client in copy-in mode.
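
For the record, a sketch of roughly what I mean, reading a server-side
file (the function name and the use of Text::CSV are just for
illustration):

CREATE OR REPLACE FUNCTION ragged_csv(path text)
RETURNS SETOF text[]
LANGUAGE plperlu AS $$
    use Text::CSV;
    my ($path) = @_;
    my $csv = Text::CSV->new({ binary => 1 });
    open(my $fh, '<', $path) or elog(ERROR, "could not open $path: $!");
    while (my $row = $csv->getline($fh)) {
        return_next($row);    # PL/Perl maps the arrayref to a text[]
    }
    close($fh);
    return;
$$;

-- e.g. SELECT r[1], r[2], r[3] FROM ragged_csv('/tmp/data.csv') AS t(r);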

And even if we had it, what would the SQL look like to insert the values
into some table with n columns?

Anything along these lines in C is likely to be far larger than what I
had in mind, which was a fairly minor enhancement.

cheers

andrew


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 02:55:09
Message-ID: 603c8f070909091955k4c0c0fb6q4295808c06a910d2@mail.gmail.com

On Wed, Sep 9, 2009 at 10:40 PM, Andrew Dunstan<andrew(at)dunslane(dot)net> wrote:
> Robert Haas wrote:
>>
>> I wonder whether it would be appropriate to do something like
>> implement a method by which copy could return text[] and then one
>> could write wrappers around that functionality to do this as well as
>> other things.
>
> It's not a function ... writing a copy() function returning text[] (or setof
> text[], which would be better) in plperlu would be trivial. But it couldn't
> read from stdin or be fed data from a client in copy-in mode.

Yes, setof text[] is what I meant. Making it be able to read from
stdin or be fed data from a client in copy-in mode would be the whole
point of the feature, I suppose.

> And even if we had it, what would the SQL look like to insert the values
> into some table with n columns?

INSERT INTO table
SELECT t[1], t[2], t[3], t[4], ..., t[n]
FROM (...however we get the copy results...)

...although I'm not entirely sure that's compatible with the
client/server protocol.

> Anything along these lines in C is likely to be far larger than what I had
> in mind, which was a fairly minor enhancement.

It wouldn't be the first time that someone was told that a particular
enhancement didn't have enough value and that they had to do something
larger if they wanted it in core, but on the other hand, I am only
expressing my opinion, which is not binding on you or anyone else, nor
even a fully-fleshed-out proposal. All I'm saying is that it seems to
me that there is value in being able to use the CSV (or other) parsing
code, but have some way to make modifications to how/where the data is
actually inserted. However, I'm skeptical about whether the specific
thing you want to do after parsing (namely, drop excess columns,
null-fill missing ones) is sufficiently common to warrant a feature to
do only that. YMMV, of course.

...Robert


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 03:01:09
Message-ID: 14071.1252551669@sss.pgh.pa.us

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> Robert Haas wrote:
>> I wonder whether it would be appropriate to do something like
>> implement a method by which copy could return text[] and then one
>> could write wrappers around that functionality to do this as well as
>> other things.

> Anything along these lines in C is likely to be far larger than what I
> had in mind, which was a fairly minor enhancement.

I think Robert's got a point though. What you're talking about is a
fairly specialized single-purpose feature, which nonetheless is going to
require a lot of infrastructure (for example, teaching psql's \copy
about it). Perhaps, for approximately the same amount of overhead,
we could provide something more general.

I don't agree with the idea of injecting something "behind" copy though.
The thought that comes to mind for me is something "in front of" copy,
that is, give it the text of each line and let it do a text-to-text
transformation before COPY chews on it.

Any of this is getting into territory we had previously agreed not to
let COPY venture into, ie general purpose data transformation. I'm not
sure I want to cross that bridge and only get "ignore extra columns" out
of it.

regards, tom lane


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 03:13:09
Message-ID: 4AA86EC5.8040004@dunslane.net

Robert Haas wrote:
> However, I'm skeptical about whether the specific
> thing you want to do after parsing (namely, drop excess columns,
> null-fill missing ones) is sufficiently common to warrant a feature to
> do only that. YMMV, of course.
>
>
>

So might my experience. I can tell you that I have been asked about this
dozens of times since implementing CSV import, in various forums. From
day one people have wanted a way of handling CSVs which were not
strictly regular. Pretty much every time the request has been something
fairly close to this.

As for the feature not being large enough, we have a regrettable
(IMNSHO) tendency in this project to take simple proposals and embellish
them with many bells and whistles. Personally, I'm much more of a fan of
incremental development.

cheers

andrew


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 03:19:07
Message-ID: 603c8f070909092019n26f7d82di3a666d332d66207b@mail.gmail.com

On Wed, Sep 9, 2009 at 11:01 PM, Tom Lane<tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>> Robert Haas wrote:
>>> I wonder whether it would be appropriate to do something like
>>> implement a method by which copy could return text[] and then one
>>> could write wrappers around that functionality to do this as well as
>>> other things.
>
>> Anything along these lines in C is likely to be far larger than what I
>> had in mind, which was a fairly minor enhancement.
>
> I think Robert's got a point though.  What you're talking about is a
> fairly specialized single-purpose feature, which nonetheless is going to
> require a lot of infrastructure (for example, teaching psql's \copy
> about it).  Perhaps, for approximately the same amount of overhead,
> we could provide something more general.
>
> I don't agree with the idea of injecting something "behind" copy though.
> The thought that comes to mind for me is something "in front of" copy,
> that is, give it the text of each line and let it do a text-to-text
> transformation before COPY chews on it.

That seems to me to be a whole lot less useful. As I see it, the
whole point of any enhancement in this area is to reuse the parsing
code. If I have a CSV file (or some other format COPY understands),
I don't want to have to write my own parser for that format in order
to do some simple data transformation (like dropping columns >n).

Previous agreements nonwithstanding, I think letting COPY do general
transformations is a great idea. But I'm a lot more skeptical about
one specific transformation without some framework of which that case
is the first instance.

...Robert


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 03:27:24
Message-ID: 4AA8721C.1060106@dunslane.net

Tom Lane wrote:
> What you're talking about is a
> fairly specialized single-purpose feature, which nonetheless is going to
> require a lot of infrastructure (for example, teaching psql's \copy
> about it).

Well, that's hardly a lot.

> Perhaps, for approximately the same amount of overhead,
> we could provide something more general.
>

Ok.

> I don't agree with the idea of injecting something "behind" copy though.
> The thought that comes to mind for me is something "in front of" copy,
> that is, give it the text of each line and let it do a text-to-text
> transformation before COPY chews on it.
>

you mean some sort of filter mechanism?

COPY .... FILTER function_name ( args) ... ?

That might work. Then we could provide a couple builtin and people could
write others in C or PL/dujour.
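
As a purely illustrative sketch (the FILTER syntax doesn't exist, and
the naive comma split below ignores quoted commas):

CREATE FUNCTION keep_three_cols(line text) RETURNS text
LANGUAGE sql IMMUTABLE AS $$
    SELECT array_to_string((string_to_array($1, ','))[1:3], ',')
$$;

-- COPY target FROM '/tmp/data.csv' CSV FILTER keep_three_cols;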

cheers

andrew


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 03:35:58
Message-ID: 14610.1252553758@sss.pgh.pa.us

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> you mean some sort of filter mechanism?

> COPY .... FILTER function_name ( args) ... ?

> That might work. Then we could provide a couple builtin and people could
> write others in C or PL/dujour.

Yeah, that's pretty much what I was thinking, although exactly where the
filter happens and what its arguments/results are is open to debate.

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 03:52:28
Message-ID: 14958.1252554748@sss.pgh.pa.us

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Wed, Sep 9, 2009 at 11:01 PM, Tom Lane<tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> The thought that comes to mind for me is something "in front of" copy,
>> that is, give it the text of each line and let it do a text-to-text
>> transformation before COPY chews on it.

> That seems to me to be a whole lot less useful. As I see it, the
> whole point of any enhancement in this area is to reuse the parsing
> code. If I have a CSV file (or some other format COPY understands),
> I don't want to have to write my own parser for that format in order
> to do some simple data transformation (like dropping columns >n).

I'm unconvinced. A significant fraction of the COPY customization
requests that I can remember had to do with cleaning up input that
didn't entirely match COPY's idea of what valid CSV is. If it's close
enough that COPY will successfully parse it as N columns, you can stick
it into an N-column text table and do your transformations afterwards.

Thinking a bit more generally, it seems to me that as COPY IN is
currently constituted, there are really four distinguishable bits
of functionality in a pipeline:

1. Reading from FE/BE protocol (or file) and conversion to database
character set.

2. Parsing the text into a set of de-escaped field values.

3. Converting the field values to internal Datum form (requires
knowledge of a set of target data types).

4. Bulk insertion into the target table.

It is not hard to come up with problems that are most easily solved
by injecting some sort of filtering between any pair of these steps.
You're suggesting a filter between 2 and 3, I suggested one between
1 and 2, and the well-known trick of inserting into a temp table
and then filtering to a final table amounts to a filter between
3 and 4.
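
For concreteness, a sketch of that well-known trick (all names invented):

CREATE TEMP TABLE staging (a text, b text, c text);
COPY staging FROM '/tmp/data.csv' CSV;

INSERT INTO target (a, b, c)
SELECT a, b, nullif(c, '')
FROM staging
WHERE a IS NOT NULL;    -- arbitrary cleanup and validation done in SQL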

We've had proposals before to come up with a form of COPY that can be
used as a data source in an INSERT/SELECT context. That would
automatically provide a way to filter between steps 3 and 4 --- or maybe
better, insist that the COPY-thingy produces just text fields, and leave
both steps 3 and 4 to be done by the INSERT/SELECT. With that, the only
missing functionality would be a textual filter between steps 1 and 2.

regards, tom lane


From: Peter Eisentraut <peter_e(at)gmx(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 08:24:09
Message-ID: 1252571049.9102.0.camel@fsopti579.F-Secure.com

On Wed, 2009-09-09 at 19:18 -0400, Robert Haas wrote:
> I agree that ignoring extra columns is a bad idea, but I don't even
> like the idea of ignoring missing columns. It doesn't seem like a
> good idea to take a spreadsheet and feed it into COPY without doing
> any validation anyway, and this is the kind of thing that is trivial
> to clean up with a thin layer of Perl or your scripting language of
> choice.

I would think that a spreadsheet application has some kind of control
over its export format, too. Perhaps you can just tell it to export
only the first N columns?


From: Nikhil Sontakke <nikhil(dot)sontakke(at)enterprisedb(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 08:38:24
Message-ID: a301bfd90909100138j628fcfb2oc2fb3a2bbe002fda@mail.gmail.com

Hi,

> the two most useful are to read in only some of the defined columns,
> and to output to a separate disk file any rows which failed to match
> the expected format. The latter would not cause the copy to fail
> unless the count of such rows exceeded a user-specified threshold.
>

+1

The ability to handle rows that might otherwise get discarded due to
constraint violations, bad column inputs, etc. sounds like a big help
when doing large copy operations.

Another useful capability would be to transform the input columns via
some SQL expressions before loading them into the table. Given the way
UPDATE works, this could avoid the unnecessary bloat caused by
subsequent updates issued just to fine-tune some of the columns.

Regards,
Nikhils
--
http://www.enterprisedb.com


From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 09:24:52
Message-ID: m2tyzbdue3.fsf@hi-media.com

Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>> you mean some sort of filter mechanism?
>
>> COPY .... FILTER function_name ( args) ... ?
>
>> That might work. Then we could provide a couple builtin and people could
>> write others in C or PL/dujour.
>
> Yeah, that's pretty much what I was thinking, although exactly where the
> filter happens and what its arguments/results are is open to debate.

Last time we opened this can of worms, here's what it led us to:

http://archives.postgresql.org/pgsql-hackers/2008-04/msg00222.php

That is, COPY support in the FROM clause. Now the example gets rewritten
as:

SELECT *
FROM function_name(a, b, c)
LATERAL (COPY ... FROM ...);

Or with some more nesting if we don't have LATERAL by the time we get
COPY support in the FROM clause :)

That allows us to simplify some things: COPY in FROM always returns text
datums, function_name can be any callable function (pl/whatever), and we
could easily import data whose representation has to be adapted to a
PostgreSQL data type's input function (such as timestamps etc.):

INSERT INTO t SELECT myparse(a) FROM (COPY ...);

As Andrew already said on IRC, though, this does nothing to help with
his direct concern: COPY would still choke on input lines that contain
too few or too many columns...

Regards,
--
dim


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Peter Eisentraut <peter_e(at)gmx(dot)net>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 11:08:55
Message-ID: 4AA8DE47.5070804@dunslane.net

Peter Eisentraut wrote:
> On Wed, 2009-09-09 at 19:18 -0400, Robert Haas wrote:
>
>> I agree that ignoring extra columns is a bad idea, but I don't even
>> like the idea of ignoring missing columns. It doesn't seem like a
>> good idea to take a spreadsheet and feed it into COPY without doing
>> any validation anyway, and this is the kind of thing that is trivial
>> to clean up with a thin layer of Perl or your scripting language of
>> choice.
>>
>
> I would think that a spreadsheet application has some kind of control
> over its export format, too. Perhaps you can just tell it to export
> only the first N columns?
>
>

Not to my knowledge. In any case, this requires more user work and much
more user education, and in the situation I am dealing with neither is
likely.

cheers

andrew


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 11:16:02
Message-ID: 4AA8DFF2.4030405@dunslane.net

Tom Lane wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>
>> you mean some sort of filter mechanism?
>>
>
>
>> COPY .... FILTER function_name ( args) ... ?
>>
>
>
>> That might work. Then we could provide a couple builtin and people could
>> write others in C or PL/dujour.
>>
>
> Yeah, that's pretty much what I was thinking, although exactly where the
> filter happens and what its arguments/results are is open to debate.
>
>
>
>

I have slept on it, and while this is a nice idea, I think your
suggestion that this would take about the same effort as what I had in
mind (which I could just about have coded by now) is out by at least an
order of magnitude. I think it would eat up all my available time
between now and Nov 15, and I'm not prepared to spend my time budget
that way.

I will talk to the requestor and see if we can do something along the
lines of Robert's suggestion using a plperlu function and a temp file,
which I think I could do within the original time budget.

Consider the suggestion withdrawn.

cheers

andrew


From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 13:32:30
Message-ID: 20090910133230.GA31165@alvh.no-ip.org

Andrew Dunstan escribió:

> As for the feature not being large enough, we have a regrettable
> (IMNSHO) tendency in this project to take simple proposals and
> embellish them with many bells and whistles. Personally, I'm much
> more of a fan of incremental development.

+1

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 13:56:24
Message-ID: 22797.1252590984@sss.pgh.pa.us

Dimitri Fontaine <dfontaine(at)hi-media(dot)com> writes:
> Last time we opened this can of worms here's what it lead us to:
> http://archives.postgresql.org/pgsql-hackers/2008-04/msg00222.php

Right, that's the discussion I was remembering.

> As Andrew already said on IRC, though, this does nothing to help with
> his direct concern: COPY would still choke on input lines that contain
> too few or too many columns...

An easy fix for that is to allow this form of COPY to return text[]
instead of a fixed number of text columns.

regards, tom lane


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 13:56:29
Message-ID: 603c8f070909100656p4d6c8828wbfbacff8ffa5f89d@mail.gmail.com

On Wed, Sep 9, 2009 at 11:52 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> On Wed, Sep 9, 2009 at 11:01 PM, Tom Lane<tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> The thought that comes to mind for me is something "in front of" copy,
>>> that is, give it the text of each line and let it do a text-to-text
>>> transformation before COPY chews on it.
>
>> That seems to me to be a whole lot less useful.  As I see it, the
>> whole point of any enhancement in this area is to reuse the parsing
>> code.   If I have a CSV file (or some other format COPY understands),
>> I don't want to have to write my own parser for that format in order
>> to do some simple data transformation (like dropping columns >n).
>
> I'm unconvinced.  A significant fraction of the COPY customization
> requests that I can remember had to do with cleaning up input that
> didn't entirely match COPY's idea of what valid CSV is.  If it's close
> enough that COPY will successfully parse it as N columns, you can stick
> it into an N-column text table and do your transformations afterwards.
>
> Thinking a bit more generally, it seems to me that as COPY IN is
> currently constituted, there are really four distinguishable bits
> of functionality in a pipeline:
>
> 1. Reading from FE/BE protocol (or file) and conversion to database
> character set.
>
> 2. Parsing the text into a set of de-escaped field values.
>
> 3. Converting the field values to internal Datum form (requires
> knowledge of a set of target data types).
>
> 4. Bulk insertion into the target table.
>
> It is not hard to come up with problems that are most easily solved
> by injecting some sort of filtering between any pair of these steps.
> You're suggesting a filter between 2 and 3, I suggested one between
> 1 and 2, and the well-known trick of inserting into a temp table
> and then filtering to a final table amounts to a filter between
> 3 and 4.
>
> We've had proposals before to come up with a form of COPY that can be
> used as a data source in an INSERT/SELECT context.  That would
> automatically provide a way to filter between steps 3 and 4 --- or maybe
> better, insist that the COPY-thingy produces just text fields, and leave
> both steps 3 and 4 to be done by the INSERT/SELECT.  With that, the only
> missing functionality would be a textual filter between steps 1 and 2.

That would be swell. Being able to use COPY as an expression
returning text[] would, I think, cater to an extremely wide variety of
use cases, although it sounds like Andrew is not interested in
implementing it at this point. If we want to add on a filter between
steps 1 and 2, that sounds potentially useful as well.

I am fuzzy on the implementation details for making COPY act as a data
source for INSERT/SELECT though. I had thought to make EXPLAIN a data
source, but it turned out not to be possible (as far as I could tell)
without making EXPLAIN a fully-reserved word, which you vetoed. It
seems likely that COPY will present similar issues, though I haven't
tried.

I am also wondering what happens when someone embeds multiple COPY
statements in a single query, or sticks one inside of a CTE or on the
inner side of a left join. I am inclined to think that we should
cause COPY to materialize its input (so rescans will return the same
data already read) and throw an error if more than one is present in
the same query (unless you have a better idea?).

...Robert


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 13:58:54
Message-ID: 603c8f070909100658j723c030crce65ffbe9ea958fe@mail.gmail.com

On Thu, Sep 10, 2009 at 9:56 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Dimitri Fontaine <dfontaine(at)hi-media(dot)com> writes:
>> Last time we opened this can of worms here's what it lead us to:
>>   http://archives.postgresql.org/pgsql-hackers/2008-04/msg00222.php
>
> Right, that's the discussion I was remembering.
>
>> As Andrew already said on IRC, though, this does nothing to help with
>> his direct concern: COPY would still choke on input lines that contain
>> too few or too many columns...
>
> An easy fix for that is to allow this form of COPY to return text[]
> instead of a fixed number of text columns.

Hmm... in fact, what if we just implemented COPY returning text[] for
a first cut, and forgot about making it act like a table source for now?
Then you could COPY to a temp table (with a text[] column) and select
from there. There are really two separate features here: COPY producing
text[] rather than a record type, and COPY being embeddable in some
larger query.
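
A sketch of how that first cut might be used; the commented-out COPY
stands in for whatever syntax ends up meaning "return each line as a
text[]", which is entirely hypothetical at this point:

CREATE TEMP TABLE staging (fields text[]);
-- COPY staging FROM '/tmp/ragged.csv' CSV;  -- hypothetical text[] mode

INSERT INTO target (a, b, c)
SELECT fields[1], fields[2], fields[3]  -- out-of-range subscripts yield
FROM staging;                           -- NULL, null-filling short rows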

...Robert


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 14:09:23
Message-ID: 23048.1252591763@sss.pgh.pa.us

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> I am fuzzy on the implementation details for making COPY act as a data
> source for INSERT/SELECT though. I had thought to make EXPLAIN a data
> source, but it turned out not to be possible (as far as I could tell)
> without making EXPLAIN a fully-reserved word, which you vetoed. It
> seems likely that COPY will present similar issues, though I haven't
> tried.

IIRC the previous discussion touched on making it look like a
set-returning function, although this would be a shade less convenient
for option parsing etc.

> I am also wondering what happens when someone embeds multiple COPY
> statements in a single query, or sticks one inside of a CTE or on the
> inner side of a left join.

Yeah, it would need to be restricted somehow. A straight SRF function
would materialize its result, but I doubt we want that to happen for
COPY.

(This brings up the whole question of performance impact, which would
have to be thought about and minimized.)

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 14:11:14
Message-ID: 23084.1252591874@sss.pgh.pa.us

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> Hmm... in fact, what if we just implemented COPY returning text[] for
> a first cut, and forget making it act like a table source for now.
> Then you could COPY to a temp table (with a text[] column) and select
> from there. There's really two separate features here: COPY producing
> text[] rather than a record type, and COPY being embeddable in some
> larger query.

That might be a reasonable way-station ...

regards, tom lane


From: Hannu Krosing <hannu(at)2ndQuadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 18:20:23
Message-ID: 1252606823.3931.4.camel@hvost1700

On Thu, 2009-09-10 at 10:09 -0400, Tom Lane wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> > I am fuzzy on the implementation details for making COPY act as a data
> > source for INSERT/SELECT though. I had thought to make EXPLAIN a data
> > source, but it turned out not to be possible (as far as I could tell)
> > without making EXPLAIN a fully-reserved word, which you vetoed. It
> > seems likely that COPY will present similar issues, though I haven't
> > tried.
>
> IIRC the previous discussion touched on making it look like a
> set-returning function, although this would be a shade less convenient
> for option parsing etc.
>
> > I am also wondering what happens when someone embeds multiple COPY
> > statements in a single query

As long as they COPY from different input files it should be perfectly
OK. Though this already goes out into SQL/MED land.

> , or sticks one inside of a CTE or on the
> > inner side of a left join.
>
> Yeah, it would need to be restricted somehow. A straight SRF function
> would materialize its result, but I doubt we want that to happen for
> COPY.

Making SRFs streamable is another thing that would make PostgreSQL a
lot more powerful for all kinds of computing.

> (This brings up the whole question of performance impact, which would
> have to be thought about and minimized.)

Completely agree - nobody wants a slow COPY command.

> regards, tom lane

--
Hannu Krosing http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
Services, Consulting and Training


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 18:26:38
Message-ID: 4AA944DE.5000406@dunslane.net

Tom Lane wrote:
>> As Andrew already said on IRC, though, this does nothing to help with
>> his direct concern: COPY would still choke on input lines that contain
>> too few or too many columns...
>>
>
> An easy fix for that is to allow this form of COPY to return text[]
> instead of a fixed number of text columns.
>
>
>

I'd like to see some concrete syntax suggestions for such an animal.

cheers

andrew


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 18:49:22
Message-ID: 6283.1252608562@sss.pgh.pa.us

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> Tom Lane wrote:
>> An easy fix for that is to allow this form of COPY to return text[]
>> instead of a fixed number of text columns.

> I'd like to see some concrete syntax suggestions for such an animal.

I think there were some in the previous discussion --- it came down
to using a SRF with arguments that specify the copy source and options.
Setting the result type as text[] instead of some has-to-be-specified
composite type should actually make it a bit easier.
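
For instance (pure invention, nothing like this exists today):

SELECT fields[1], fields[2], fields[3]
FROM copy_srf('/tmp/data.csv', 'format=csv') AS t(fields);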

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hannu Krosing <hannu(at)2ndQuadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 18:50:53
Message-ID: 6313.1252608653@sss.pgh.pa.us

Hannu Krosing <hannu(at)2ndQuadrant(dot)com> writes:
>> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>>> I am also wondering what happens when someone embeds multiple COPY
>>> statements in a single query

> As long as they COPY from different input files it should be perfectly
> OK. Though this already goes out into SQL/MED land.

Multiple copy from file is okay, copy from stdin not so much.

regards, tom lane


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 19:23:14
Message-ID: 4AA95222.9040400@dunslane.net

Tom Lane wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>
>> Tom Lane wrote:
>>
>>> An easy fix for that is to allow this form of COPY to return text[]
>>> instead of a fixed number of text columns.
>>>
>
>
>> I'd like to see some concrete syntax suggestions for such an animal.
>>
>
> I think there were some in the previous discussion --- it came down
> to using a SRF with arguments that specify the copy source and options.
>

Oh, so I guess not stdin.

cheers

andrew


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 19:33:26
Message-ID: 7042.1252611206@sss.pgh.pa.us
Lists: pgsql-hackers

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> Tom Lane wrote:
>> I think there were some in the previous discussion --- it came down
>> to using a SRF with arguments that specify the copy source and options.

> Oh, so I guess not stdin.

Well, it'd be nice if stdin were allowed, but we'd have to look into how
to prevent multiple reads or re-reads then.

regards, tom lane


From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-11 00:21:57
Message-ID: 20090911002157.GK17756@tamriel.snowman.net
Lists: pgsql-hackers

* Andrew Dunstan (andrew(at)dunslane(dot)net) wrote:
> Consider the suggestion withdrawn.

Let's not throw it out completely. The proposal to have COPY return a
text[] in some fashion is interesting enough that others (ah, such as
myself..) might be willing to put effort into it. Andrew, could you put
your thoughts plus some example files onto a wiki page, at least? Then
Robert, Tom, myself, etc, could update that to nail down the
specification and then it's just an implementation detail, as it were.

Thanks!

Stephen


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-11 01:26:42
Message-ID: 4AA9A752.1070006@dunslane.net
Lists: pgsql-hackers

Stephen Frost wrote:
> * Andrew Dunstan (andrew(at)dunslane(dot)net) wrote:
>
>> Consider the suggestion withdrawn.
>>
>
> Let's not throw it out completely. The proposal to have COPY return a
> text[] in some fashion is interesting enough that others (ah, such as
> myself..) might be willing to put effort into it. Andrew, could you put
> your thoughts plus some example files onto a wiki page, at least? Then
> Robert, Tom, myself, etc, could update that to nail down the
> specification and then it's just an implementation detail, as it were.
>
>
>

I don't mind discussing the idea a bit.

I don't have any samples readily to hand, but really anything that's not
strictly rectangular meets my original case, like:

a,b,c
1,2,3
4,5,6,7
8,9
10,11,12
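
With a SETOF text[] return, those lines would presumably come back as
arrays of varying length, i.e.

{a,b,c}
{1,2,3}
{4,5,6,7}
{8,9}
{10,11,12}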

I do like the idea of COPY returning a SETOF text[], but I am not at all
clear on the mechanics of feeding STDIN to an SRF. ISTM that something
like a RETURNING clause on COPY and the ability to use it in a FROM
clause or something similar might work better. I understand the
difficulties, but maybe we could place some restrictions on where it
could be used so as to obviate at least some of those.

One of the things I like about a SETOF text[] is that it lets you
reorder the columns, or cherry-pick which ones you want. In fact, it
might be argued that the hacky FORCE NOT NULL option, which has always
pained me somewhat (even if it was my idea ;-), might no longer be needed.
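
For instance, where you reach for FORCE NOT NULL today to keep an
unquoted empty field from being read as NULL, you could presumably just
write something like coalesce(t[2], '') over the returned array instead.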

I'd love to be able to do something like

INSERT into foo (x,y,z) select t[3],t[2],t[57] from (COPY RETURNING
t FROM stdin CSV);

The only thing that's been seriously on the table that isn't accounted
for by something like this is the suggestion of making the header line
have some semantic significance, and I'm far from sure that's a good idea.

If time were not short on getting features presented I might attempt to
do it, but I have one very large monkey (and a few small ones) on my
back that I am determined to get rid of by the November CF, and there is
not a hope in the world I could get two large features done, even if we
had the details of this all sorted out and agreed on. That's why I said
"Consider the suggestion withdrawn".

cheers

andrew


From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-11 02:34:51
Message-ID: 20090911023451.GM17756@tamriel.snowman.net
Lists: pgsql-hackers

Andrew,

* Andrew Dunstan (andrew(at)dunslane(dot)net) wrote:
> I don't mind discussing the idea a bit.

Great! Thanks for this. I'll put it up on a wiki and start looking at
implementation details and what the semantics should be.

Thanks again,

Stephen


From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-11 08:27:06
Message-ID: m2eiqd28f9.fsf@hi-media.com
Lists: pgsql-hackers

Hi,

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> I do like the idea of COPY returning a SETOF text[], but I am not at all
> clear on the mechanics of feeding STDIN to an SRF. ISTM that something like
> a RETURNING clause on COPY and the ability to use it in a FROM clause or
> something similar might work better.

I envisioned COPY "just" returning everything it reads (minus any extra
discarded columns, once your proposal gets implemented) in the FROM
clause, with the user sorting out what he wants in the SELECT clause.

> I understand the difficulties, but
> maybe we could place some restrictions on where it could be used so as to
> obviate at least some of those.

Maybe instead of opening up FROM for COPY, having it accepted in WITH
would be better, the same way (from the user's point of view) that DML
RETURNING is being worked on.

> INSERT into foo (x,y,z) select t[3],t[2],t[57] from (COPY RETURNING
> t FROM stdin CSV);

Would become:

WITH csv AS (
    COPY t FROM stdin CSV
)
INSERT INTO foo(x, y, z)
SELECT t[3], t[2], mytimestamptz(t[5], t[6], t[7])
FROM csv;

Now the text[] has a strange feel; without it, it'd be:

WITH csv AS (
    COPY t(a, b, c, d, e, f, g)
    FROM stdin
    CSV IGNORING EXTRA COLUMNS    -- random nice syntax
    MISSING COLUMNS DEFAULTS NULL -- that needs some reality check
)
INSERT INTO foo(x, y, z)
SELECT c, b, mytimestamptz(e, f, g)
FROM csv;

The function mytimestamptz(date text, time text, timezone text) would
accept input that PostgreSQL's input types would have errored out on...
so you can process, in one go, strange formats from other products.
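
A trivial version of such a function might be (only a naive sketch; a
real one would first massage whatever odd format the source product
emits):

CREATE FUNCTION mytimestamptz(date text, time text, timezone text)
RETURNS timestamptz AS $$
    -- naive sketch: glue the three raw fields together and let the
    -- timestamptz input routine do the actual parsing
    SELECT ($1 || ' ' || $2 || ' ' || $3)::timestamptz;
$$ LANGUAGE sql STRICT STABLE;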

> The only thing that's been seriously on the table that isn't accounted for
> by something like this is the suggestion of making the header line have some
> semantic significance, and I'm far from sure that's a good idea.

Yeah, and it only seems useful when you don't have any way to play with
what COPY returns before it goes into a table (magic column reordering
or ignoring).

Regards,
--
dim


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-11 14:59:23
Message-ID: 4AAA65CB.9070307@dunslane.net
Lists: pgsql-hackers


I wrote:
> I'd love to be able to do something like
>
> INSERT into foo (x,y,z) select t[3],t[2],t[57] from (COPY RETURNING
> t FROM stdin CSV);
>
>

Some IRC discussion suggested ways we could do better than that syntax.
I think my current preferred candidate is something like

COPY foo (a,b,c)
FROM stdin
CSV
AS t USING (t[3],t[2],t[57]);

I'm not sure how we could plug a filter into that. Maybe a WHERE
clause? My immediate need at least doesn't actually involve filtering
anything - we load every line in the CSV into a temp table and then
filter what we load into the main tables after applying the business rules.
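
If we did want one, a WHERE clause might bolt on naturally enough
(doubly hypothetical, extending the sketch above):

COPY foo (a,b,c)
FROM stdin
CSV
AS t USING (t[3],t[2],t[57])
WHERE t[1] <> '';   -- the WHERE here is invented, not settled syntax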

cheers

andrew


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-11 15:16:44
Message-ID: 20728.1252682204@sss.pgh.pa.us
Lists: pgsql-hackers

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> I wrote:
>> I'd love to be able to do something like
>>
>> INSERT into foo (x,y,z) select t[3],t[2],t[57] from (COPY RETURNING
>> t FROM stdin CSV);

> Some IRC discussion suggested ways we could do better than that syntax.
> I think my current preferred candidate is something like

> COPY foo (a,b,c)
> FROM stdin
> CSV
> AS t USING (t[3],t[2],t[57]);

[ scratches head... ] What happened to seeing COPY as a data source in
a larger command? If that syntax has anything at all to recommend it,
I'm not seeing what. It's not extensible and it would require lots of
code duplication to implement the impoverished feature set it does have.

regards, tom lane


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-11 15:29:17
Message-ID: 4AAA6CCD.10502@dunslane.net
Lists: pgsql-hackers

Tom Lane wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>
>> I wrote:
>>
>>> I'd love to be able to do something like
>>>
>>> INSERT into foo (x,y,z) select t[3],t[2],t[57] from (COPY RETURNING
>>> t FROM stdin CSV);
>>>
>
>
>> Some IRC discussion suggested ways we could do better than that syntax.
>> I think my current preferred candidate is something like
>>
>
>
>> COPY foo (a,b,c)
>> FROM stdin
>> CSV
>> AS t USING (t[3],t[2],t[57]);
>>
>
> [ scratches head... ] What happened to seeing COPY as a data source in
> a larger command? If that syntax has anything at all to recommend it,
> I'm not seeing what. It's not extensible and it would require lots of
> code duplication to implement the impoverished feature set it does have.
>
>
>

Well, I think the objection was that it would slow COPY down to have to
go through the executor in the copy-as-source scenario. But maybe that
would happen anyway, and maybe we don't care; we'd just accept that it
wouldn't be nearly as fast as a raw copy.

cheers

andrew


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-11 15:35:58
Message-ID: 21020.1252683358@sss.pgh.pa.us
Lists: pgsql-hackers

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> Well, I think the objection was that it would slow COPY down to have to
> go through the executor in the copy-as-source scenario. But maybe that
> would happen anyway, and maybe we don't care; we'd just accept that it
> wouldn't be nearly as fast as a raw copy.

I haven't heard complaints about the COPY (query) syntax, which is
the same thing in the opposite direction. You can't expect that
flexibility costs zero.
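
For comparison, that direction already runs a real query, e.g.

COPY (SELECT a, b FROM foo WHERE c > 0) TO STDOUT WITH CSV;

and nobody expects it to match the speed of a plain COPY foo TO ... .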

regards, tom lane


From: daveg <daveg(at)sonic(dot)net>
To: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, Stephen Frost <sfrost(at)snowman(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-13 02:14:24
Message-ID: 20090913021424.GD25379@sonic.net
Lists: pgsql-hackers

On Fri, Sep 11, 2009 at 10:27:06AM +0200, Dimitri Fontaine wrote:
> Maybe instead of opening up FROM for COPY, having it accepted in WITH
> would be better, the same way (from the user's point of view) that DML
> RETURNING is being worked on.
...
> WITH csv AS (
>     COPY t FROM stdin CSV
> )
> INSERT INTO foo(x, y, z)
> SELECT t[3], t[2], mytimestamptz(t[5], t[6], t[7])
> FROM csv;
>
> Now the text[] has a strange feel; without it, it'd be:
>
> WITH csv AS (
>     COPY t(a, b, c, d, e, f, g)
>     FROM stdin
>     CSV IGNORING EXTRA COLUMNS    -- random nice syntax
>     MISSING COLUMNS DEFAULTS NULL -- that needs some reality check
> )
> INSERT INTO foo(x, y, z)
> SELECT c, b, mytimestamptz(e, f, g)
> FROM csv;
>
> The function mytimestamptz(date text, time text, timezone text) would
> accept input that PostgreSQL's input types would have errored out on...
> so you can process, in one go, strange formats from other products.

+1

-dg

--
David Gould daveg(at)sonic(dot)net 510 536 1443 510 282 0869
If simplicity worked, the world would be overrun with insects.