Shared locking in slru.c

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Shared locking in slru.c
Date: 2005-11-30 18:53:13
Message-ID: 29931.1133376793@sss.pgh.pa.us
Lists: pgsql-hackers

I've been fooling around with a test case Rob Creager sent me, which is
able to drive PG into a context-switch storm caused by contention for
the SubtransControlLock. Rob asked for the test case not to be posted
publicly (it's part of some proprietary code), but I found that you can
cause some of the same behavior just by running pgbench while holding
an open transaction in a psql session. pgbench isn't able to saturate
the machine but I think that's due to unrelated pgbench limitations.

The basic cause of the problem is that by holding an open transaction
while many other transactions come and go, the window of transactions
for which pg_subtrans has to be consulted gets wider and wider. If the
system is doing a lot of updates, the fraction of tuples for which
HeapTupleSatisfiesSnapshot has to invoke SubTransGetTopmostTransaction
gets larger too, and so the extent of contention for SubtransControlLock
increases, even though no subtransactions are actually in use.

I've been looking at various ways to resolve this, but one thing that
seems promising is to hack slru.c to take the control lock in shared
mode, not exclusive mode, for read-only accesses to pages that are
already in memory. The vast majority of accesses to pg_subtrans in the
problem scenario are read-only, and so this reduces the need to block
and the consequent CS storm.

A quick-hack prototype patch for this is attached. (It changes only
SubTransGetParent, but if applied of course TransactionIdGetStatus and
so on would get the same treatment.) The idea is that we take the lock
in shared mode, look to see if the required page is in memory, and if so
just return it; if not, release the lock, reacquire in exclusive mode,
proceed with the existing code.
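
In outline, the read-only path looks about like this (a sketch, not the
attached patch itself; the field and status names are assumptions based
on the current slru.c, and SimpleLruReadPage_ReadOnly is just a name for
the new entry point):

int
SimpleLruReadPage_ReadOnly(SlruCtl ctl, int pageno, TransactionId xid)
{
    SlruShared  shared = ctl->shared;
    int         slotno;

    /* Try to find the page while holding only a shared lock */
    LWLockAcquire(shared->ControlLock, LW_SHARED);
    for (slotno = 0; slotno < NUM_SLRU_BUFFERS; slotno++)
    {
        if (shared->page_number[slotno] == pageno &&
            shared->page_status[slotno] != SLRU_PAGE_EMPTY &&
            shared->page_status[slotno] != SLRU_PAGE_READ_IN_PROGRESS)
        {
            /* Present and readable: update recency info and return;
             * the caller reads the page and then releases the lock. */
            SlruRecentlyUsed(shared, slotno);
            return slotno;
        }
    }

    /* Not resident: release, reacquire exclusively, take existing path */
    LWLockRelease(shared->ControlLock);
    LWLockAcquire(shared->ControlLock, LW_EXCLUSIVE);
    return SimpleLruReadPage(ctl, pageno, xid);
}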

The reason that the existing code always takes the lock in exclusive
mode, even if there's no intention to change data, is that it also needs
to adjust the LRU information to reflect the page access, and the way
that we're doing that requires exclusive access. So to make the idea
work at all, we need some alternative way of managing recency-of-use
info.

The way the attached patch attacks this is for the shared-lock access
case to simply set the page's LRU counter to zero, without bumping up
the LRU counters of the other pages as the normal adjustment would do.
This is safe to execute in a shared environment since even if someone
else is concurrently touching the same page, they'll also be trying
to set its counter to zero, and so there's no possibility of ending
up in a bad state. However, this leaves us with the likelihood that
multiple pages will have equal LRU counters when it comes time to
throw out a page to make room for another. The patch deals with that
by selecting the furthest-back page of those with the highest LRU
setting. I'm not totally happy with this heuristic, though, and was
wondering if anyone had a better idea. Anyone seen a lock-free
data structure for LRU or approximately-LRU state tracking?
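
(For reference, the shared-lock recency update in the quick hack amounts
to a single store; a sketch, not the attachment verbatim:

    shared->page_lru_count[slotno] = 0;    /* mark most recently used */

Concurrent readers of the same page can only store the same zero, which
is why no exclusive lock is needed; the cost shows up only at eviction
time, as described above.)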

regards, tom lane

Attachment: unknown_filename (text/plain, 3.9 KB)

From: Kenneth Marshall <ktm(at)it(dot)is(dot)rice(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Shared locking in slru.c
Date: 2005-11-30 20:18:40
Message-ID: 20051130201839.GD3765@it.is.rice.edu
Lists: pgsql-hackers

On Wed, Nov 30, 2005 at 01:53:13PM -0500, Tom Lane wrote:
> I've been looking at various ways to resolve this, but one thing that
> seems promising is to hack slru.c to take the control lock in shared
> mode, not exclusive mode, for read-only accesses to pages that are
> already in memory. The vast majority of accesses to pg_subtrans in the
> problem scenario are read-only, and so this reduces the need to block
> and the consequent CS storm.
>
> A quick-hack prototype patch for this is attached. (It changes only
> SubTransGetParent, but if applied of course TransactionIdGetStatus and
> so on would get the same treatment.) The idea is that we take the lock
> in shared mode, look to see if the required page is in memory, and if so
> just return it; if not, release the lock, reacquire in exclusive mode,
> proceed with the existing code.
>
> The reason that the existing code always takes the lock in exclusive
> mode, even if there's no intention to change data, is that it also needs
> to adjust the LRU information to reflect the page access, and the way
> that we're doing that requires exclusive access. So to make the idea
> work at all, we need some alternative way of managing recency-of-use
> info.
>
> The way the attached patch attacks this is for the shared-lock access
> case to simply set the page's LRU counter to zero, without bumping up
> the LRU counters of the other pages as the normal adjustment would do.
> This is safe to execute in a shared environment since even if someone
> else is concurrently touching the same page, they'll also be trying
> to set its counter to zero, and so there's no possibility of ending
> up in a bad state. However, this leaves us with the likelihood that
> multiple pages will have equal LRU counters when it comes time to
> throw out a page to make room for another. The patch deals with that
> by selecting the furthest-back page of those with the highest LRU
> setting. I'm not totally happy with this heuristic, though, and was
> wondering if anyone had a better idea. Anyone seen a lock-free
> data structure for LRU or approximately-LRU state tracking?
>
> regards, tom lane

Tom,

In this situation, if this code does not need the ability to queue
access to the control lock, it may be reasonable to use a latch protocol
to update the control information. In pseudo-code, the operations to
update and read the control information are:

WriteControl:
1. Set latch.
2. Update control information.
3. Increment latch version number.
4. Unset latch.

ReadControl:
1. Read latch version number.
2. Read control information.
3. Check latch. If locked go to step 1.
4. Read latch version number. If the value is different from the
value read in step 1, go to step 1.

In this way, a read operation will only need two reads of the version
number and one of the shared data. A write operation will be very
lightweight. Setting the latch can be a single TAS/CAS operation and
unsetting the latch and incrementing the version number can be done
in a single write. Of course, if you need to be able to queue up lock
requests this will not work.
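
A minimal C sketch of the idea, folding the latch bit and the version
number into one word (odd means latched), might look like this; the
names are illustrative, and real code would use the platform's TAS/CAS
and memory-barrier primitives (C11 atomics appear here only to keep the
sketch self-contained):

#include <stdatomic.h>

typedef struct
{
    atomic_uint version;        /* odd while a writer holds the latch */
    atomic_int  data;           /* the protected control information */
} SeqLatch;

static void
write_control(SeqLatch *latch, int newval)
{
    unsigned    v;

    /* Set latch: advance the version from even to odd with a CAS. */
    for (;;)
    {
        v = atomic_load_explicit(&latch->version, memory_order_relaxed);
        if ((v & 1) == 0 &&
            atomic_compare_exchange_weak_explicit(&latch->version, &v, v + 1,
                                                  memory_order_acquire,
                                                  memory_order_relaxed))
            break;
    }

    /* Update control information. */
    atomic_store_explicit(&latch->data, newval, memory_order_relaxed);

    /* Unset latch and increment the version number in a single write. */
    atomic_store_explicit(&latch->version, v + 2, memory_order_release);
}

static int
read_control(SeqLatch *latch)
{
    unsigned    v1, v2;
    int         val;

    do
    {
        v1 = atomic_load_explicit(&latch->version, memory_order_acquire);
        val = atomic_load_explicit(&latch->data, memory_order_relaxed);
        atomic_thread_fence(memory_order_acquire);
        v2 = atomic_load_explicit(&latch->version, memory_order_relaxed);
    } while ((v1 & 1) != 0 || v1 != v2);    /* latched or changed: retry */

    return val;
}

This is essentially what the Linux kernel calls a seqlock: readers never
block writers, and a reader retries only if a write overlapped its read.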

Ken


From: Kenneth Marshall <ktm(at)it(dot)is(dot)rice(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kenneth Marshall <ktm(at)is(dot)rice(dot)edu>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Shared locking in slru.c
Date: 2005-11-30 21:06:34
Message-ID: 20051130210634.GA27311@it.is.rice.edu
Lists: pgsql-hackers

On Wed, Nov 30, 2005 at 03:23:55PM -0500, Tom Lane wrote:
> Kenneth Marshall <ktm(at)is(dot)rice(dot)edu> writes:
> > ... In pseudo-code, the operations to
> > update and read the control information are:
>
> > WriteControl:
> > 1. Set latch.
> > 2. Update control information.
> > 3. Increment latch version number.
> > 4. Unset latch.
>
> > ReadControl:
> > 1. Read latch version number.
> > 2. Read control information.
> > 3. Check latch. If locked go to step 1.
> > 4. Read latch version number. If the value is different from the
> > value read in step 1, go to step 1.
>
> Hm, I don't see how that's safe in the presence of concurrent would-be
> writers? (Or is that what you meant by "queuing up lock requests"?)
>
> regards, tom lane

The latch is definitely safe for readers and writers concurrently
accessing the information. It does not provide the ordered waiting for
the lock that an LWLock does. It is also so light-weight that, for the
sorts of reads and updates these shared areas see, it may outperform the
existing LWLock even under contention.

Ken


From: Manfred Koizar <mkoi-pg(at)aon(dot)at>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Shared locking in slru.c
Date: 2005-12-02 10:09:41
Message-ID: t870p1d9ooqa3j6juglrv4vvq68gh7ncue@4ax.com
Lists: pgsql-hackers

On Wed, 30 Nov 2005 13:53:13 -0500, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
wrote:
>The way the attached patch attacks this is for the shared-lock access
>case to simply set the page's LRU counter to zero, without bumping up
>the LRU counters of the other pages as the normal adjustment would do.

If you still plan to do this, you might also want to revert the
micro-optimisation introduced by the original SLRU patch:

| Apart from refactoring I made a little change to SlruRecentlyUsed,
| formerly ClogRecentlyUsed: It now skips incrementing lru_counts, if
| slotno is already the LRU slot, thus saving a few CPU cycles.

|+#define SlruRecentlyUsed(shared, slotno) \
|+    do { \
|+        if ((shared)->page_lru_count[slotno] != 0) { \
|+            int     iilru; \
|+            for (iilru = 0; iilru < NUM_CLOG_BUFFERS; iilru++) \
|+                (shared)->page_lru_count[iilru]++; \
|+            (shared)->page_lru_count[slotno] = 0; \
|+        } \
|+    } while (0)

Otherwise you could end up with a stable state of several pages having
lru_count == 0: the shared-lock path zeroes a page's count without
touching the others, and the skip test then suppresses the global
increment whenever one of those zero-count pages is accessed again, so
ties among the busiest pages persist.
Servus
Manfred


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Manfred Koizar <mkoi-pg(at)aon(dot)at>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Shared locking in slru.c
Date: 2005-12-02 14:18:01
Message-ID: 2247.1133533081@sss.pgh.pa.us
Lists: pgsql-hackers

Manfred Koizar <mkoi-pg(at)aon(dot)at> writes:
> On Wed, 30 Nov 2005 13:53:13 -0500, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
> wrote:
>> The way the attached patch attacks this is for the shared-lock access
>> case to simply set the page's LRU counter to zero, without bumping up
>> the LRU counters of the other pages as the normal adjustment would do.

> If you still plan to do this, you might also want to revert the
> micro-optimisation introduced by the original SLRU patch:

Good point --- thanks for mentioning it. I'm still fooling with the
modified code because it seems like it's not doing very well at managing
the SLRU pool, and perhaps that's got something to do with it ...

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Shared locking in slru.c
Date: 2005-12-05 20:32:38
Message-ID: 2237.1133814758@sss.pgh.pa.us
Lists: pgsql-hackers

I wrote:
> The way the attached patch attacks this is for the shared-lock access
> case to simply set the page's LRU counter to zero, without bumping up
> the LRU counters of the other pages as the normal adjustment would do.
> ...
> I'm not totally happy with this heuristic, though, and was
> wondering if anyone had a better idea. Anyone seen a lock-free
> data structure for LRU or approximately-LRU state tracking?

I've come up with what seems a slightly better way to do this. The idea
is to replace the current structure for tracking which page is the least-
recently-used with this:

typedef struct SlruSharedData
{
    ...

    /*----------
     * We mark a page "most recently used" by setting
     *      page_lru_count[slotno] = ++cur_lru_count;
     * The oldest page is therefore the one with the highest value of
     *      cur_lru_count - page_lru_count[slotno]
     *----------
     */
    int         cur_lru_count;

    ...

    int         page_lru_count[NUM_SLRU_BUFFERS];

    ...

which makes the SlruRecentlyUsed macro look like

#define SlruRecentlyUsed(shared, slotno) \
    do { \
        int     new_lru_count = (shared)->cur_lru_count; \
        if (new_lru_count != (shared)->page_lru_count[slotno]) { \
            (shared)->cur_lru_count = ++new_lru_count; \
            (shared)->page_lru_count[slotno] = new_lru_count; \
        } \
    } while (0)

and SlruSelectLRUPage() has to look for the maximum value of
"cur_count - shared->page_lru_count[slotno]" rather than just
"shared->page_lru_count[slotno]" as before. This seems like a win
in any case since it takes cycles out of the commonly used path at
a small increase in the cost of SlruSelectLRUPage --- but in that
routine you are about to do I/O anyway, so a few extra subtractions
are negligible.

However, the real reason for doing this is that I think it's OK for
the proposed SimpleLruReadPage_ReadOnly routine to apply this form
of SlruRecentlyUsed even though it holds only a shared lock. Assuming
that int reads and writes are atomic, the only possible failures are

1. If a process running the macro is delayed, it might store a stale
(hence too small) value back into cur_lru_count or a page_lru_count
array element after someone else has incremented them to a larger value.

2. Two processes might read the same cur_lru_count value at the same
time, so that one of their increments is lost. This has the same end
effect as #1, though.

Given reasonable care in SlruSelectLRUPage (see attached excerpt), we
can defend against these scenarios and usually still make a reasonable
choice of which page to evict. In any case, the worst possible scenario
is that we make a nonoptimal choice of page to evict due to confused
lru_count state. This price seems well worth the chance to reduce
contention for shared memory.

Thoughts, objections?

regards, tom lane

    /*
     * If we find any EMPTY slot, just select that one. Else locate the
     * least-recently-used slot to replace.
     *
     * Normally the page_lru_count values will all be different and so
     * there will be a well-defined LRU page. But since we allow
     * concurrent execution of SlruRecentlyUsed() within
     * SimpleLruReadPage_ReadOnly(), it is possible that multiple pages
     * acquire the same lru_count values. In that case we break ties by
     * choosing the furthest-back page.
     *
     * In no case will we select the slot containing latest_page_number
     * for replacement, even if it appears least recently used.
     *
     * Notice that this next line forcibly advances cur_lru_count to a
     * value that is certainly beyond any value that will be in the
     * page_lru_count array after the loop finishes. This ensures that
     * the next execution of SlruRecentlyUsed will give us good data,
     * even if it's against a page that has the current counter value.
     */
    cur_count = (shared->cur_lru_count)++;
    best_delta = -1;
    bestslot = 0;               /* no-op, just keeps compiler quiet */
    best_page_number = 0;       /* ditto */
    for (slotno = 0; slotno < NUM_SLRU_BUFFERS; slotno++)
    {
        int     this_delta;
        int     this_page_number;

        if (shared->page_status[slotno] == SLRU_PAGE_EMPTY)
            return slotno;
        this_delta = cur_count - shared->page_lru_count[slotno];
        if (this_delta < 0)
        {
            /*
             * Clean up in case shared updates have caused cur_count
             * increments to get "lost".  We back off the page counts,
             * rather than trying to increase cur_count, to avoid any
             * question of infinite loops or failure in the presence of
             * wrapped-around counts.
             */
            shared->page_lru_count[slotno] = cur_count;
            this_delta = 0;
        }
        this_page_number = shared->page_number[slotno];
        if ((this_delta > best_delta ||
             (this_delta == best_delta &&
              ctl->PagePrecedes(this_page_number, best_page_number))) &&
            this_page_number != shared->latest_page_number)
        {
            bestslot = slotno;
            best_delta = this_delta;
            best_page_number = this_page_number;
        }
    }