Re: Optimising Foreign Key checks

From: Noah Misch <noah(at)leadboat(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Optimising Foreign Key checks
Date: 2013-06-01 20:27:35
Message-ID: 20130601202735.GA256029@tornado.leadboat.com
Lists: pgsql-hackers

On Sat, Jun 01, 2013 at 09:41:13AM +0100, Simon Riggs wrote:
> FK checks can be expensive, especially when loading large volumes of
> data into an existing table or partition. A couple of ideas for
> improving performance are discussed here:
>
> 1. Use Case: Bulk loading
> COPY pgbench_accounts; --> references pgbench_branches with many
> repeated values
>
> Proposal: Transactions that need multiple checks can be optimised by
> simply LOCKing the whole referenced table, once. We can then hold the
> referenced table as a Hash, like we do with a Hash Join (it's almost
> exactly the same thing). This works in two ways: it speeds up checks
> and it also reduces the locking overhead.

> 2. Use Case: Transactional repetition
> BEGIN;
> INSERT INTO order VALUES (ordid, ....)
> INSERT INTO order_line VALUES (ordid, 1, .....)
> INSERT INTO order_line VALUES (ordid, 2, .....)
> INSERT INTO order_line VALUES (ordid, 3, .....)
> INSERT INTO order_line VALUES (ordid, 4, .....)

> Proposal: De-duplicate multiple checks against same value. This would
> be implemented by keeping a hash of the rows we have already inserted
> or locked as the transaction progresses, so we can use
> the hash to avoid queuing up after triggers.

I find (2) more promising than (1). It helps automatically, and it helps in
more cases. The main advantage of (1) is avoiding the writes of tuple locks
onto the PK table. Therefore, I expect (1) to distinguish itself over (2)
when the referenced values are _not_ repeated too much. If referenced values
are repeated, tuple locking costs would tend to disappear into the noise after
the deduplication of (2).
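
For concreteness, a rough user-level analogue of what (1) would do
internally, assuming a SHARE lock is strong enough to keep the referenced
keys stable for the rest of the transaction (the data file path is just a
placeholder):

BEGIN;
LOCK TABLE pgbench_branches IN SHARE MODE;
-- Key-modifying writes on pgbench_branches are now blocked, so each FK
-- check could probe a backend-local hash of its keys instead of taking
-- a KEY SHARE tuple lock on every referenced row.
COPY pgbench_accounts FROM '/tmp/pgbench_accounts.dat';
COMMIT;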

> In both cases we are building up a local hash table with values and
> then using those values to avoid queuing constraint triggers. So code
> is similar for both.
>
> Thoughts?

Will this add too much cost where it doesn't help? I don't know what to
predict there. There's the obvious case of trivial transactions with no more
than one referential integrity check per FK, but there's also the case of a
transaction with many FK checks all searching different keys. If the hash hit
rate (key duplication rate) is low, the hash can consume considerably more
memory than the trigger queue without preventing many RI queries. What sort
of heuristic could we use to avoid pessimizing such cases?
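
To make the no-benefit case concrete, here is a variant of the benchmark
later in this mail in which every inserted row references a distinct key,
so a dedup hash would absorb a million entries without ever getting a hit:

INSERT INTO "order" SELECT n FROM generate_series(1,1000000) t(n);
-- Every order_line row references a different orderid; the hash grows
-- to 1M entries yet skips no RI queries.
INSERT INTO order_line SELECT n, 1 FROM generate_series(1,1000000) t(n);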

Same-transaction UPDATE or DELETE of the PK table, as well as subtransaction
abort, potentially invalidates hash entries. I recommend thinking relatively
early about how best to handle that.
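
For example, with the order/order_line schema from the benchmark below, a
naive cache would give a wrong answer after a subtransaction abort:

BEGIN;
SAVEPOINT s;
INSERT INTO "order" VALUES (1);
INSERT INTO order_line VALUES (1, 1); -- check passes; hash caches orderid 1
ROLLBACK TO s;  -- both rows are gone, but a naive hash entry survives
INSERT INTO order_line VALUES (1, 2); -- must still fail; a stale entry
                                      -- would wrongly skip the check
ROLLBACK;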

Before deciding what to think overall, I needed to see a benchmark. I ran
this simple one based on your scenarios:

BEGIN;
CREATE TABLE "order" (orderid int PRIMARY KEY);
CREATE TABLE order_line (
    orderid int,
    lineno  int,
    PRIMARY KEY (orderid, lineno),
    FOREIGN KEY (orderid) REFERENCES "order"
);
INSERT INTO "order" VALUES (1);
INSERT INTO order_line SELECT 1, n FROM generate_series(1,1000000) t(n);
ROLLBACK;

See attached output from "perf report -s parent -g graph,5,caller"; I suggest
browsing under "less -S". It confirms that the expensive part is something
your proposals would address.

A different way to help the bulk loading case would be to lock more keys with
a single query. Currently, we issue a query like this for every INSERTed row:

SELECT 1 FROM ONLY pktable WHERE pkcol = $1 FOR KEY SHARE OF x

We could instead consider queued tuples in batches:

SELECT 1 FROM ONLY pktable WHERE pkcol = ANY (ARRAY[$1,$2,...,$100]) FOR KEY SHARE OF x

This would have the advantage of not adding a long-lived, potentially-large
data structure and not depending on the rate of referenced key duplication.
But it would not help non-bulk scenarios. (However, the user could make your
example for (2) become a bulk scenario by deferring the FK constraint.)
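
A minimal sketch of that, assuming the FK was created DEFERRABLE
(otherwise SET CONSTRAINTS cannot defer it):

BEGIN;
SET CONSTRAINTS ALL DEFERRED; -- affects only DEFERRABLE constraints
INSERT INTO "order" VALUES (1);
INSERT INTO order_line VALUES (1, 1);
INSERT INTO order_line VALUES (1, 2);
INSERT INTO order_line VALUES (1, 3);
COMMIT; -- all FK checks fire here, where they could be batched as above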

Thanks,
nm

--
Noah Misch
EnterpriseDB http://www.enterprisedb.com

Attachment: fk-insert-profile.txt (text/plain, 10.0 KB)
