Copy From & Insert UNLESS

From: James William Pye <pgsql(at)jwp(dot)name>
To: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Copy From & Insert UNLESS
Date: 2006-02-04 01:39:38
Message-ID: 20060204013938.GA96565@lit.jwp.name
Lists: pgsql-hackers

Greets folks, [YABLP: Yet Another Bulk Loading Proposal]

The subject of this letter refers to giving INSERT and COPY FROM STDIN
the ability to alter the destination of rows that violate any constraint named
in a user-specified set.

I am seeking, as many others have, to improve the performance of bulk
loads to live systems where constraint violations may occur and filtering can
be done more efficiently within the backend.

Primarily, I'm concerned with UNIQUE violations. However, I think tackling the
general case is the wiser action.

The attached patch is *not* being included for any sort of application, and I
make no claims of it functioning as I intended it to or as I may imply it to. =)
The patch only served the purpose of providing rough numbers for the case of
unique violations.

Despite the fact that my experimental patch uses error trapping, that is *not*
what I have in mind for the implementation. I do not want to trap errors upon
insert or copy from. Rather, I wish to implement functionality that would allow
alternate destinations for tuples that violate user-specified constraints on
the table; by default, such tuples will simply be dropped.

My proposed syntax is along the lines of:

INSERT INTO table [ ( column [, ...] ) ]
* [UNLESS CONSTRAINT VIOLATION
[ON (constraint_name [, ...]) [THEN INSERT INTO table]] [, OR ...]]
{ DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) | query }

and

COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
* [UNLESS CONSTRAINT VIOLATION
[ON (constraint_name [, ...]) [THEN INSERT INTO table]] [, OR ...]]
...

The purpose of the arguably unintuitive THEN INSERT INTO x portion is to provide
the mechanism by which a user can specify the destination table for tuples that
violate the associated set of constraints. Using the OR portion allows the user
to specify additional sets of constraints for different destinations.

A tuple will be withheld from the target table if ANY of the constraints
listed in any of the constraint_name sets is violated. Constraint sets should
not [may not?] reference the same constraint multiple times, even among
different sets.

Example:

\d dest_table
Table "public.dest_table"
Column | Type | Modifiers
--------+---------+-----------
i | integer | not null
j | integer |
Indexes:
"dest_table_pkey" PRIMARY KEY, btree (i)
Check constraints:
"dest_table_j_check" CHECK (j > 0)

CREATE TEMP TABLE pkey_failures (i int, j int);
CREATE TEMP TABLE check_failures (i int, j int);

COPY dest_table FROM STDIN
UNLESS CONSTRAINT VIOLATION
ON (dest_table_pkey) THEN INSERT INTO pkey_failures
OR (dest_table_j_check) THEN INSERT INTO check_failures;
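
To illustrate (hypothetically, since UNLESS is only proposed), here are a few
input rows and where they would land:

-- 1  10   ->  dest_table      (no violations)
-- 1  20   ->  pkey_failures   (i = 1 already present)
-- 2  -5   ->  check_failures  (fails j > 0)

SELECT * FROM pkey_failures;   -- would return (1, 20)
SELECT * FROM check_failures;  -- would return (2, -5)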

For most constraints, this proposal should be fairly easy to
implement. However, the B-Tree index has the uniqueness check within its
insert access method, _bt_check_unique. Perhaps the best solution here is to
expose this check function--with changes, of course--and define a new access
method entry, 'amcheck' or, perhaps, 'amscanforinsert', where state information
would be given back to the caller for later use in the actual insert. (Perhaps
evident, but I'm not as familiar with the index code as I would like to be for
this sort of speculation, so please excuse me if I am not making good sense.)

There is one other annoying change. The constraints specified in an INSERT
UNLESS should be checked before all other, unspecified constraints. This is to
elegantly handle the insertion case where two violations can occur: one against
a constraint that the user specified, and one against a constraint the user
didn't. Regardless of the order in which constraints are collected for checking,
the user-specified ones should be checked first, to avoid unwelcome errors being
thrown when the tuple was going to be tossed anyway.
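
To make the ordering concrete, suppose only the primary key had been listed in
the example above (again, hypothetical behavior under the proposed syntax):

COPY dest_table FROM STDIN
UNLESS CONSTRAINT VIOLATION
ON (dest_table_pkey) THEN INSERT INTO pkey_failures;

-- A row (1, -5) where i = 1 already exists violates both dest_table_pkey
-- (specified) and dest_table_j_check (unspecified). Checking the specified
-- constraint first redirects the row to pkey_failures; checking the
-- unspecified one first would abort the entire COPY.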

This proposal purposefully does not discuss bad data errors, as I think those
should be seen as a separate issue--perhaps a future feature within the UNLESS
syntax.

Prior Discussions or Mentions [See the last two.]

implicit abort harmful?
http://archives.postgresql.org/pgsql-general/2003-05/msg00962.php
how to continue a transaction after an error?
http://archives.postgresql.org/pgsql-sql/2000-11/msg00097.php
mass import to table with unique index
http://archives.postgresql.org/pgsql-general/2003-01/msg01465.php
Duplicate key insert question
http://archives.postgresql.org/pgsql-general/2003-07/msg00056.php

Ignore when using COPY FROM (Matthew Kennedy) (small, complaint/fr)
http://archives.postgresql.org/pgsql-general/2000-08/msg00681.php
COPY and duplicates (Ryan Mahoney) (small, complaint/fr)
http://archives.postgresql.org/pgsql-general/2001-07/msg00569.php

Bulk loading using COPY - ignore duplicates? (Lee Kindness?)
http://archives.postgresql.org/pgsql-hackers/2002-01/msg00029.php
Practical error logging for very large COPY statements (Simon Riggs)
http://archives.postgresql.org/pgsql-hackers/2005-11/msg01100.php

Extant Solutions

There are quite a few solutions to this problem as I'm sure many (all?) know:

. Temporary table that filters out the evil tuples.
. BEFORE TRIGGER handling the tuple if the constraint of interest is violated.
. INSERT wrapped in a subtransaction.
. (Other variations)

Temporary tables are probably the fastest here. However, they still exhibit
redundancy and require post-load tuple movement (an extra step).
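
For reference, a minimal sketch of the temporary-table approach against the
earlier dest_table example (the staging-table name is illustrative):

CREATE TEMP TABLE load_stage (i int, j int);
COPY load_stage FROM STDIN;
-- Merge only the rows that violate neither constraint:
INSERT INTO dest_table
SELECT DISTINCT ON (i) i, j
FROM load_stage s
WHERE s.i IS NOT NULL
  AND (s.j IS NULL OR s.j > 0)
  AND NOT EXISTS (SELECT 1 FROM dest_table d WHERE d.i = s.i);
DROP TABLE load_stage;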

Savepoints require client-side logic in order to detect the appropriate
error code to trap or raise. (Also, this seems to be quite slow, regardless.)
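
The savepoint pattern, per row, looks roughly like this (the client has to
examine the SQLSTATE after each failure):

BEGIN;
SAVEPOINT row_n;
INSERT INTO dest_table VALUES (1, 10);
-- on error: ROLLBACK TO SAVEPOINT row_n;  (client checks for 23505, 23514, ...)
RELEASE SAVEPOINT row_n;
-- ...one savepoint per row...
COMMIT;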

A before trigger requires that the constraints of interest be tested
twice and that code be effectively duplicated.
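
A sketch of that approach, again using the dest_table example; note how both
the PRIMARY KEY and the CHECK logic must be restated by hand:

CREATE FUNCTION route_bad_rows() RETURNS trigger AS $$
BEGIN
    -- dest_table_j_check, restated:
    IF NEW.j IS NOT NULL AND NOT (NEW.j > 0) THEN
        INSERT INTO check_failures VALUES (NEW.i, NEW.j);
        RETURN NULL;  -- drop the row
    END IF;
    -- dest_table_pkey, restated:
    IF EXISTS (SELECT 1 FROM dest_table WHERE i = NEW.i) THEN
        INSERT INTO pkey_failures VALUES (NEW.i, NEW.j);
        RETURN NULL;
    END IF;
    RETURN NEW;  -- the real constraints are still checked again at insert
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_bad_rows BEFORE INSERT ON dest_table
FOR EACH ROW EXECUTE PROCEDURE route_bad_rows();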

So, there are quite a few ways to do error-controlled bulk loading. Temporary
tables appear to be the best current solution. However, I think the
implementation described in this proposal will yield improvements by simply
reducing redundancy.

Simple Numbers [Using the attached, *very* experimental patch]:

(PostgreSQL 8.2 [~HEAD], redhat 9)

These numbers were rendered from a simple single-client trial where psql
and the postmaster were running on the same machine. I ran each trial a
few times, cleaning up and running VACUUM FULL between runs. The numbers do
not include the cleanup or the vacuum.

The "Insert unless" trials are drawn from my attached patch; all other
cases used a clean PG.

CREATE TABLE foo (i int PRIMARY KEY);

[Straight SQL INSERTs (via psql and an SQL file); times in seconds unless noted]
(100000 Violations)
Auto-Commit Transactions: 22.213
Savepoints: (ran out of shared memory)
Insert into temporary table, Merge: 24.225
* Insert Unless: 14.668

(50000 Violations, 50000 New)
Auto-Commit Transactions: 33.342
Savepoints: (untested)
Insert into temporary table, Merge: 24.243
* Insert Unless: 14.260

(100000 New)
Auto-Commit Transactions: 47.990
Savepoints: 3:05.60 (three minutes!!)
Temporary table: 26.178
* Insert Unless: 14.283

The numbers here look pretty good, especially for such a hackish patch.
[btw, I hope I screwed up somehow on the savepoints.]

However:

[COPY FROM STDIN (via psql and a file of data)]
(100000 Violations)
Copy Unless: 2.4132
Copy to temporary, Merge: 0.72675

(50000 Conflicts, 50000 New)
Copy Unless: 2.1145
Copy to temporary, Merge: 1.469

(100000 New)
Copy Unless: 1.6386
Copy to temporary, Merge: 2.4305

The numbers here don't look so good now. :(
However, I'm convinced that this reflects inefficiencies in my current
hack, rather than diminishing the likelihood that the discussed implementation
would be an improvement.

(The files that rendered these results are available on request. They are 100K
apiece after being bzip'd.)
--
Regards, James William Pye

Attachment Content-Type Size
hack.patch text/plain 8.4 KB

From: Josh Berkus <josh(at)agliodbs(dot)com>
To: James William Pye <pgsql(at)jwp(dot)name>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>, agoldshuv(at)greenplum(dot)com
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-05 21:13:31
Message-ID: 43E66A7B.7000301@agliodbs.com
Lists: pgsql-hackers

James,

> I am seeking, as many others have, to improve the performance of bulk
> loads to live systems where constraint violations may occur and filtering can
> be done more efficiently within the backend.
>
> Primarily, I'm concerned with UNIQUE violations. However, I think tackling the
> general case is the wiser action.

Alon Goldshuv on Bizgres has been working on this as well. Maybe you
could collaborate? Alon?

--Josh


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-05 21:16:15
Message-ID: 43E66B1F.60707@agliodbs.com
Lists: pgsql-hackers

Folks,

> The subject of this letter refers to giving INSERT and COPY FROM STDIN
> the ability to alter the destination of rows that violate any constraint named
> in a user-specified set.

BTW, just in case anyone thinks that James is not addressing a real and
widespread problem: Joe Conway said in his OSCON 2005 presentation on Symer's
1.2 TB databases that the single most painful thing they had to deal with in
the implementation was filtering out bad rows before COPY.

--Josh


From: Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>
To: James William Pye <pgsql(at)jwp(dot)name>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-05 22:08:12
Message-ID: 20060205135638.G33067@megazone.bigpanda.com
Lists: pgsql-hackers


On Fri, 3 Feb 2006, James William Pye wrote:

> Despite the fact that my experimental patch uses error trapping, that is *not*
> what I have in mind for the implementation. I do not want to trap errors upon
> insert or copy from. Rather, I wish to implement functionality that would allow
> alternate destinations for tuples that violate user-specified constraints on
> the table; by default, such tuples will simply be dropped.
>
> My proposed syntax is along the lines of:
>
> INSERT INTO table [ ( column [, ...] ) ]
> * [UNLESS CONSTRAINT VIOLATION
> [ON (constraint_name [, ...]) [THEN INSERT INTO table]] [, OR ...]]
> { DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) | query }
>
> and
>
> COPY tablename [ ( column [, ...] ) ]
> FROM { 'filename' | STDIN }
> * [UNLESS CONSTRAINT VIOLATION
> [ON (constraint_name [, ...]) [THEN INSERT INTO table]] [, OR ...]]
> ...
>
> The purpose of the arguably unintuitive THEN INSERT INTO x portion is to provide
> the mechanism by which a user can specify the destination table for tuples that
> violate the associated set of constraints. Using the OR portion allows the user
> to specify additional sets of constraints for different destinations.
>
> A tuple will be withheld from the target table if ANY of the constraints
> listed in any of the constraint_name sets is violated. Constraint sets should
> not [may not?] reference the same constraint multiple times, even among
> different sets.
>
> Example:
>
> \d dest_table
> Table "public.dest_table"
> Column | Type | Modifiers
> --------+---------+-----------
> i | integer | not null
> j | integer |
> Indexes:
> "dest_table_pkey" PRIMARY KEY, btree (i)
> Check constraints:
> "dest_table_j_check" CHECK (j > 0)
>
> CREATE TEMP TABLE pkey_failures (i int, j int);
> CREATE TEMP TABLE check_failures (i int, j int);
>
> COPY dest_table FROM STDIN
> UNLESS CONSTRAINT VIOLATION
> ON (dest_table_pkey) THEN INSERT INTO pkey_failures
> OR (dest_table_j_check) THEN INSERT INTO check_failures;
>
> For most constraints, this proposal should be fairly easy to
> implement.

Have you considered how this might work with spec-compliant constraint
timing? I think even in inserting cases, a later trigger before statement
end could in some cases un-violate a constraint, so checking before insert
won't actually be the same behavior as the normal constraint handling
which seems bad for this kind of system.


From: James William Pye <pgsql(at)jwp(dot)name>
To: Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-06 00:51:01
Message-ID: 20060206005101.GA97445@lit.jwp.name
Lists: pgsql-hackers

On Sun, Feb 05, 2006 at 02:08:12PM -0800, Stephan Szabo wrote:
> Have you considered how this might work with spec-compliant constraint
> timing?

I haven't gone so far as to look into the spec, yet. [Noise of rustling papers]

However, constraints referenced in an UNLESS clause that are deferred, in any
fashion, should probably be "immediated" within the context of the command.
Perhaps a WARNING or NOTICE would be appropriately informative if UNLESS were
to actually alter the timing of a given constraint.

> I think even in inserting cases, a later trigger before statement
> end could in some cases un-violate a constraint, so checking before insert
> won't actually be the same behavior as the normal constraint handling
> which seems bad for this kind of system.

Any facility that can alter the tuple before it is inserted into the heap
should probably be exercised prior to the application of the tuple against
UNLESS's behavior. The implementation of UNLESS will probably completely change
ExecConstraints(), which comes after the firing of BEFORE triggers and before
heap_insert(). Beyond that, I am not sure what other considerations should be
made with respect to triggers. So, UNLESS should/will be applied after BEFORE
triggers, but before "non-UNLESS specified" constraints. ;)
--
Regards, James William Pye


From: Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>
To: James William Pye <pgsql(at)jwp(dot)name>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-06 03:14:49
Message-ID: 20060205190352.F39663@megazone.bigpanda.com
Lists: pgsql-hackers


On Sun, 5 Feb 2006, James William Pye wrote:

> On Sun, Feb 05, 2006 at 02:08:12PM -0800, Stephan Szabo wrote:
> > Have you considered how this might work with spec-compliant constraint
> > timing?
>
> I haven't gone so far as to look into the spec, yet. [Noise of rustling papers]
>
> However, constraints referenced in an UNLESS clause that are deferred, in any
> fashion, should probably be "immediated" within the context of the command.
> Perhaps a WARNING or NOTICE would be appropriately informative if UNLESS were
> to actually alter the timing of a given constraint.

The problem is that even immediate constraints are supposed to be checked
at end of statement, not at row time. Our implementation of UNIQUE is
particularly bad for this. Basically a violation at the time the row is
created is irrelevant if the violation is gone by the end of statement.
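
For instance (a hypothetical illustration of the timing difference):

CREATE TABLE t (k int UNIQUE);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
-- Per spec, UNIQUE is checked at end of statement, so swapping the two
-- keys in one statement is legal even though a duplicate exists while
-- the first row is being updated:
UPDATE t SET k = 3 - k;
-- Our row-at-a-time check can raise a spurious unique violation here,
-- depending on the order in which the rows are visited.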

> > I think even in inserting cases, a later trigger before statement
> > end could in some cases un-violate a constraint, so checking before insert
> > won't actually be the same behavior as the normal constraint handling
> > which seems bad for this kind of system.
>
> Any facility that can alter the tuple before it is inserted into the heap
> should probably be exercised prior to the application of the tuple against
> UNLESS's behavior.

The problem is that you can un-violate a unique constraint by changing
some other row that's already in the table. And I think that it might even
be legal to do so in an after trigger (and in fact, some other row's after
trigger).

This isn't necessarily a killer to the idea though, it probably just means
the semantics are harder to nail down.


From: James William Pye <pgsql(at)jwp(dot)name>
To: Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-06 18:25:05
Message-ID: 20060206182505.GA22529@lit.jwp.name
Lists: pgsql-hackers

On Sun, Feb 05, 2006 at 07:14:49PM -0800, Stephan Szabo wrote:
> On Sun, 5 Feb 2006, James William Pye wrote:
> > However, constraints referenced in an UNLESS clause that are deferred, in any
> > fashion, should probably be "immediated" within the context of the command.
> > Perhaps a WARNING or NOTICE would be appropriately informative if UNLESS were
> > to actually alter the timing of a given constraint.
>
> The problem is that even immediate constraints are supposed to be checked
> at end of statement, not at row time.

I see. "Immediated" is not the word that I am actually looking for then. :(
Perhaps Postgres should specify our current immediate as a "new" constraint mode.
"instant", maybe? Sadly, I think it will be difficult to get away from using that or
some other synonym if such an idea were to be "implemented".

[Getting the feeling that this has been discussed before. ;]

> Our implementation of UNIQUE is particularly bad for this.

Yes. Changing how UNIQUE constraints are implemented will likely be the first
step in this patch.

> > Any facility that can alter the tuple before it is inserted into the heap
> > should probably be exercised prior to the application of the tuple against
> > UNLESS's behavior.
>
> The problem is that you can un-violate a unique constraint by changing
> some other row that's already in the table. And I think that it might even
> be legal to do so in an after trigger (and in fact, some other row's after
> trigger).
> [join]
> Basically a violation at the time the row is
> created is irrelevant if the violation is gone by the end of statement.

Okay. I can't help but think such a trigger as being questionable at best.
However, per spec, it should be possible. =\

> This isn't necessarily a killer to the idea though, it probably just means
> the semantics are harder to nail down.

Aye. I figured there would be some details that might take a while.

Once the UNIQUE constraint code is relocated, I think implementing more
standards-compliant constraint timing might be substantially easier. However, I
don't think this should affect UNLESS. Rather, I think UNLESS should, more or
less, demand that specified constraints be checked at the same time as they are
currently. This is meant to be an optimization at multiple levels: reducing code
redundancy (rewriting constraint checks for use prior to the actual insertion),
computational redundancy (potentially, running the rewritten checks more than
once), and unnecessary I/O (avoiding heap_insert()'ing an evil tuple
into the target table despite the fact that the statement may later "inviolate"
it). Although, perhaps, it could be configurable with an option:
"INSERT INTO t UNLESS [DEFERRED] CONSTRAINT VIOLATION". =)
--
Regards, James William Pye


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-06 19:03:06
Message-ID: 43E79D6A.1060706@agliodbs.com
Lists: pgsql-hackers

James,

Are you sure that a new type of constraint is the way to go for this?
It doesn't solve our issues in the data warehousing space. The spec we
started with for "Error-tolerant COPY" is:

1) It must be able to handle parsing errors (i.e. bad char set);
2) It must be able to handle constraint violations;
3) It must output all row errors to a log or "errors" table which makes
it possible to determine which input row failed and why;
4) It must not slow significantly (like, not more than 15%) the speed of
bulk loading.

On that basis, Alon started working on a low-level error trapper for
COPY. It seems like your idea, which would involve a second constraint
check, would achieve neither #1 nor #4.

--Josh Berkus


From: James William Pye <pgsql(at)jwp(dot)name>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-06 21:46:10
Message-ID: 20060206214610.GB22529@lit.jwp.name
Lists: pgsql-hackers

On Mon, Feb 06, 2006 at 11:03:06AM -0800, Josh Berkus wrote:
> Are you sure that a new type of constraint is the way to go for this?

[Thinking that you are referring to the new constraint mode that I was
confusingly referring to...]

Well, it really wouldn't be new. It's just labeling what we do now as something
other than "immediate", considering that immediate constraints are meant to be
checked at the end of the SQL statement, while our implementation of immediate is
truly "immediate", as Stephan pointed out to me. However, I think our current
timing method is better for normal cases, at least for Postgres, than what the
spec specifies.
[See pages 63-66: The second paragraph in 4.17.2 "Checking of constraints"]

Ultimately, I don't care about this very much. However, I think an
implementation of my proposal would aid in implementing spec-compliant
immediate timing.

[If I misunderstood what you were getting at, sorry. :]

> It doesn't solve our issues in the data warehousing space. The spec we
> started with for "Error-tolerant COPY" is:
>
> 1) It must be able to handle parsing errors (i.e. bad char set);

My proposal did not handle this, and purposefully so. A constraint violation,
while inhibiting insertion into the target table, would still yield a "kosher"
tuple--just one not okay for that table--which could then be dropped or redirected
using "THEN INSERT INTO" to another precisely structured table for later
analysis. Bad data errors would not even have a tuple to work with in the first
place, which is why I wanted to draw a distinction.

I think having something to handle bad data is useful, but I think it should be
distinct, syntactically and implementation-wise, from constraint violations.

That's not to say that it couldn't fit into the model that "UNLESS" would try to
create:
"COPY ... UNLESS BAD DATA [ON COLUMN (y)] OR CONSTRAINT VIOLATION [ON (z)] ..."

> 2) It must be able to handle constraint violations;

Check. :)

> 3) It must output all row errors to a log or "errors" table which makes
> it possible to determine which input row failed and why;

Check; save data errors for now.

> 4) It must not slow significantly (like, not more than 15%) the speed of
> bulk loading.

Check. (See below)

> It seems like your idea, which would involve a second constraint
> check, would achieve neither #1 nor #4.

I'm not proposing that a second constraint check should be made.

The difficulty of my implementation comes from the position that I don't think
the current implementation of UNIQUE constraints is ideal. It is "hidden"
inside nbtree, which, while convenient, is not likely to be the best place for
it. I believe my original letter covered this by proposing a new pg_am column;
one that would hold a regproc that would be able to 'scan for insert' and return
the state (position, locks, whether an entry exists, anything else necessary for
a quick insert) of that scan to the caller for later use in the actual insert or
update. All other constraints appear to require trivial modifications to get them
to work with UNLESS without any redundancy.
--
Regards, James William Pye


From: "Alon Goldshuv" <agoldshuv(at)greenplum(dot)com>
To: "Josh Berkus" <josh(at)agliodbs(dot)com>, "James William Pye" <pgsql(at)jwp(dot)name>
Cc: "Hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-06 22:08:38
Message-ID: C00D3316.C788%agoldshuv@greenplum.com
Lists: pgsql-hackers

> Alon Goldshuv on Bizgres has been working on this as well. Maybe you
> could collaborate? Alon?

I would love to collaborate. The proposal is neat; however, I am not too
excited about handling errors at such high granularity, as far as the user
is concerned. I am more in line with Tom Lane's statement in
Simon's thread (Practical error logging for very large COPY statements):

"The general problem that needs to be solved is "trap any error that
occurs during attempted insertion of a COPY row, and instead of aborting
the copy, record the data and the error message someplace else". Seen
in that light, implementing a special path for uniqueness violations is
pretty pointless."

But, I definitely share your struggle to find a good way to handle those
unique/FK constraints...

Out of your list of currently known solutions:

. Temporary table that filters out the evil tuples.
. BEFORE TRIGGER handling the tuple if the constraint of interest is
violated.
. INSERT wrapped in a subtransaction.
. (Other variations)

I really don't like temporary tables (too much user intervention) or
subtransactions (sloooow). I also don't like using pg_loader for that
purpose, as although it's a nice tool, isolating errors with it for large
data sets is impractical.

I guess the BEFORE TRIGGER is the closest solution to what I would like to
achieve. I think something can be done even without a trigger. We could trap
any of the following:

- bad data (any error before the tuple can be created).
- domain constraints
- check constraints
- NOT NULL constraints

As far as UNIQUE goes, maybe there is a good way to do a bt scan against the
index table right before the simple_heap_insert call? Hopefully without too
much code duplication. I am not too familiar with that code, so I don't have
a very specific idea yet. I don't know how much slower things will become
with this extra scan (I would think it will still be simpler and faster than
a subtransaction), but I figure that there is a price to pay if you want
single-row error isolation. Otherwise, if the user wants to run COPY as it
is currently (all data rows or nothing), they could still do it at the same
speed using the current code path, bypassing the extra scan.

Not sure this was very helpful, but these are my thoughts at this moment.

Regards,
Alon.


From: Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-06 22:17:06
Message-ID: 20060206125621.X65955@megazone.bigpanda.com
Lists: pgsql-hackers

On Mon, 6 Feb 2006, Josh Berkus wrote:

> Are you sure that a new type of constraint is the way to go for this?
> It doesn't solve our issues in the data warehousing space. The spec we
> started with for "Error-tolerant COPY" is:
>
> 1) It must be able to handle parsing errors (i.e. bad char set);
> 2) It must be able to handle constraint violations;
> 3) It must output all row errors to a log or "errors" table which makes
> it possible to determine which input row failed and why;
> 4) It must not slow significantly (like, not more than 15%) the speed of
> bulk loading.
>
> On that basis, Alon started working on a low-level error trapper for
> COPY. It seems like your idea, which would involve a second constraint
> check, would achieve neither #1 nor #4.

I think in his system it wouldn't check the constraints twice, it'd just
potentially check them at a different time than the normal constraint
timing, so I think it'd cover #4. I'd wonder if there'd be any possibility
of violations going unnoticed in that case, but I'm not coming up
with an obvious way that could happen.


From: Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>
To: James William Pye <pgsql(at)jwp(dot)name>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-06 22:20:10
Message-ID: 20060206124519.C65955@megazone.bigpanda.com
Lists: pgsql-hackers

On Mon, 6 Feb 2006, James William Pye wrote:

> On Sun, Feb 05, 2006 at 07:14:49PM -0800, Stephan Szabo wrote:
> > On Sun, 5 Feb 2006, James William Pye wrote:
> > > However, constraints referenced in an UNLESS clause that are deferred, in any
> > > fashion, should probably be "immediated" within the context of the command.
> > > Perhaps a WARNING or NOTICE would be appropriately informative if UNLESS were
> > > to actually alter the timing of a given constraint.
> >
> > The problem is that even immediate constraints are supposed to be checked
> > at end of statement, not at row time.
>
> I see. "Immediated" is not the word that I am actually looking for then. :(
> Perhaps Postgres should specify our current immediate as a "new" constraint mode.
> "instant", maybe? Sadly, I think it will be difficult to get away from using that or
> some other synonym if such an idea were to be "implemented".
>
> [Getting the feeling that this has been discussed before. ;]

Only parts of it. :)

> > Our implementation of UNIQUE is particularly bad for this.
>
> Yes. Changing how UNIQUE constraints are implemented will likely be the first
> step in this patch.
>
> > > Any facility that can alter the tuple before it is inserted into the heap
> > > should probably be exercised prior to the application of the tuple against
> > > UNLESS's behavior.
> >
> > The problem is that you can un-violate a unique constraint by changing
> > some other row that's already in the table. And I think that it might even
> > be legal to do so in an after trigger (and in fact, some other row's after
> > trigger).
> > [join]
> > Basically a violation at the time the row is
> > created is irrelevant if the violation is gone by the end of statement.
>
> Okay. I can't help but think such a trigger as being questionable at best.
> However, per spec, it should be possible. =\

Yeah, it's pretty odd in the insert case. It's easy in the update case to
make a case where it matters, definitely less so for insert.

> Once the UNIQUE constraint code is relocated, I think implementing more
> standards-compliant constraint timing might be substantially easier. However, I
> don't think this should affect UNLESS. Rather, I think UNLESS should, more or
> less, demand that specified constraints be checked at the same time as they are
> currently. This is meant to be an optimization at multiple levels: reducing code
> redundancy (rewriting constraint checks for use prior to the actual insertion),
> computational redundancy (potentially, running the rewritten checks more than
> once), and unnecessary I/O (avoiding heap_insert()'ing an evil tuple
> into the target table despite the fact that the statement may later "inviolate"
> it). Although, perhaps, it could be configurable with an option:
> "INSERT INTO t UNLESS [DEFERRED] CONSTRAINT VIOLATION". =)

I'd say that if we were going to check the constraints at a different
time, we'd want a better name/description than "UNLESS CONSTRAINT
VIOLATION" since the unadorned INSERT or COPY might run with no constraint
violations.


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: James William Pye <pgsql(at)jwp(dot)name>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-06 22:49:14
Message-ID: 200602061449.14298.josh@agliodbs.com
Lists: pgsql-hackers

James,

> The difficulty of my implementation comes from the position that I don't
> think the current implementation of UNIQUE constraints is ideal. It is
> "hidden" inside nbtree, which, while convenient, is not likely to be the
> best place for it.

Agreed; one of the things that's been on the TODO list for quite a while is
real deferrable unique constraints, which would allow, for example,
reordering of a UNIQUE column inside a transaction.
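
A sketch of what that would permit, using DEFERRABLE syntax that UNIQUE
constraints do not yet accept:

CREATE TABLE items (pos int UNIQUE DEFERRABLE INITIALLY DEFERRED);
INSERT INTO items VALUES (1);
INSERT INTO items VALUES (2);
BEGIN;
UPDATE items SET pos = pos + 1;  -- transient duplicate of pos = 2 is tolerated
COMMIT;                          -- uniqueness is finally enforced here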

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco


From: James William Pye <pgsql(at)jwp(dot)name>
To: Alon Goldshuv <agoldshuv(at)greenplum(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Copy From & Insert UNLESS
Date: 2006-02-14 19:35:21
Message-ID: 20060214193521.GB26643@lit.jwp.name
Lists: pgsql-hackers

On Mon, Feb 06, 2006 at 05:08:38PM -0500, Alon Goldshuv wrote:
> The proposal is neat; however, I am not too
> excited about handling errors at such high granularity, as far as the user
> is concerned. I am more in line with Tom Lane's statement in
> Simon's thread (Practical error logging for very large COPY statements):
>
> "The general problem that needs to be solved is "trap any error that
> occurs during attempted insertion of a COPY row, and instead of aborting
> the copy, record the data and the error message someplace else". Seen
> in that light, implementing a special path for uniqueness violations is
> pretty pointless."

I think I would be inclined to actually agree with that, which is why I proposed
a special path for constraint violations in general as opposed to just
uniqueness. However, I can understand if you remain unmoved. ;)

> But, I definitely share your struggle to find a good way to handle those
> unique/FK constraints...

Aye, :-(

> As far as UNIQUE goes, maybe there is a good way to do a bt scan against the
> index table right before the simple_heap_insert call?

Yes, but I believe some additional locking is required in order to make that
safe. Not that that would kill it, but I think there is a better way; I'm
cooking up a general proposal for refactoring unique constraints, so I'm hoping
something along those lines will aid any patch attempting to solve this
problem [copy error/violation management].
--
Regards, James William Pye