From: | Simon Riggs <simon(at)2ndQuadrant(dot)com> |
---|---|
To: | PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Concurrent HOT Update interference |
Date: | 2013-05-10 10:23:01 |
Message-ID: | CA+U5nMKzsjwcpSoqLsfqYQRwW6udPtgBdqXz34fUwaVfgXKWhA@mail.gmail.com |
Lists: | pgsql-hackers |
Currently, when we access a buffer for a HOT update, we check whether
it's possible to get a cleanup lock so we can prune the buffer.
UPDATEs and DELETEs pin buffers during the scan phase and then re-lock
the buffer to perform the update.
The result is that multiple UPDATEs repeatedly accessing the same
block prevent each other from cleaning up successfully: while one
session is performing the update, the second session is pinning the
block with an index scan.
This effect has been noted for some time during pgbench runs, where
running with more sessions than the scale factor causes contention.
We've never done anything about it because that has been seen as a
poorly designed test, yet it actually matches the real situation we
experience at "hot spots" in a table.
Holding the buffer pin across both scan and update saves effort for a
single session, but it also causes bloat in the concurrent case. Or
put another way, HOT is not effective at "hot spots" in a table!
I thought I'd raise the problem first before attempting to propose a solution.
(And also: why is index_fetch_heap() in indexam.c, yet bitgetpage() in
executor/nodeBitmapHeapscan.c?)
Comments?
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services