Possible future performance improvement: sort updates/deletes by ctid

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Cc: "Stephen Denne" <Stephen(dot)Denne(at)datamail(dot)co(dot)nz>
Subject: Possible future performance improvement: sort updates/deletes by ctid
Date: 2008-01-30 03:11:12
Message-ID: 1391.1201662672@sss.pgh.pa.us

We've had a couple of discussions recently revolving around the
inefficiency of using hashjoin/hashaggregation output to update a target
table, because of the resulting very random access pattern. I believe
this same mechanism is underlying the slowness of Stephen Denne's
"alternate query" described here:
http://archives.postgresql.org/pgsql-performance/2008-01/msg00227.php

I made up the attached doubtless-oversimplified test case to model what
he was seeing. It's cut down about 4x from the table size he describes,
but the UPDATE still takes forever --- I gave up waiting after 2 hours,
when it had deleted about a fifth of its hashjoin temp files, suggesting
that the total runtime would be about 10 hours.
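
(The attachments aren't inlined here; roughly, the shape of the test case
is something like the following, reconstructed from the plan further down
--- the payload and flag columns are stand-ins, not necessarily what the
attached script actually uses:)

    CREATE TABLE document (
        id integer PRIMARY KEY
    );

    CREATE TABLE document_file (
        document_id integer,
        payload     text,       -- wide filler, roughly 600 bytes per row
        flag        integer     -- some column for the UPDATE to rewrite
    );

    -- fill both tables with about 4 million rows apiece, then time:
    UPDATE document_file df
       SET flag = 1
      FROM document d
     WHERE d.id = df.document_id;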

A brute-force idea for fixing this is to sort the intended update or
delete operations of an UPDATE/DELETE command according to the target
table's ctid, which is available for free anyway since the executor top
level must have it to perform the operation. I made up an even more
brute-force patch (also attached) that forces that to happen for every
UPDATE or DELETE --- obviously we'd not want that for real; it's just
for crude performance testing. With that patch, I got the results

                                                           QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=6075623.03..6085623.05 rows=4000008 width=618) (actual time=2078726.637..3371944.124 rows=4000000 loops=1)
   Sort Key: df.ctid
   Sort Method:  external merge  Disk: 2478992kB
   ->  Hash Join  (cost=123330.50..1207292.72 rows=4000008 width=618) (actual time=20186.510..721120.455 rows=4000000 loops=1)
         Hash Cond: (df.document_id = d.id)
         ->  Seq Scan on document_file df  (cost=0.00..373334.08 rows=4000008 width=614) (actual time=11.775..439993.807 rows=4000000 loops=1)
         ->  Hash  (cost=57702.00..57702.00 rows=4000200 width=8) (actual time=19575.885..19575.885 rows=4000000 loops=1)
               ->  Seq Scan on document d  (cost=0.00..57702.00 rows=4000200 width=8) (actual time=0.039..14335.615 rows=4000000 loops=1)
 Total runtime: 3684037.097 ms

or just over an hour of runtime --- still not exactly speedy, but it
certainly compares favorably to the estimated 10 hours for unsorted
updates.
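
(For anyone not used to staring at ctids: a ctid is just the row's
physical address, i.e. (heap block number, tuple offset within that
block), and it's visible at the SQL level too.  Ordering the join output
by it means the target heap's pages get revisited in physical order
rather than at random, which is the whole point of the experiment.
Purely as an illustration:)

    -- ctid = (heap block number, tuple offset within that block);
    -- rows in ctid order are rows in physical on-disk order.
    SELECT ctid, document_id
      FROM document_file
     LIMIT 3;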

This is with default shared_buffers (32MB) and work_mem (1MB);
a more aggressive work_mem would have meant fewer hash batches and fewer
sort runs and hence better performance in both cases, but with the
majority of the runtime going into the sort step here, I think that the
sorted update would benefit much more.
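
(For anyone repeating the experiment, work_mem is just a session-level
setting; the value below is only an example, not a recommendation:)

    -- raise work_mem for this session before running the big UPDATE;
    -- fewer hash batches and fewer external sort runs should result
    SET work_mem = '256MB';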

Nowhere near a workable patch of course, but seems like food for
thought.

regards, tom lane

Attachment Content-Type Size
unknown_filename text/plain 589 bytes
unknown_filename text/plain 1.1 KB
