Re: Reasoning behind LWLOCK_PADDED_SIZE/increase it to a full cacheline

From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Reasoning behind LWLOCK_PADDED_SIZE/increase it to a full cacheline
Date: 2013-09-24 10:48:11
Message-ID: 20130924104811.GA11964@awork2.anarazel.de
Lists: pgsql-hackers

On 2013-09-24 12:39:39 +0200, Tom Lane wrote:
> Andres Freund <andres(at)2ndquadrant(dot)com> writes:
> > So, what we do is we guarantee that LWLocks are aligned to 16 or 32 byte
> > boundaries. That means that on x86-64 (64 byte cachelines, 24 bytes
> > unpadded lwlock) two lwlocks share a cacheline.

> > In my benchmarks changing the padding to 64 bytes increases performance in
> > workloads with contended lwlocks considerably.
>
> At a huge cost in RAM. Remember we make two LWLocks per shared buffer.

> I think that rather than using a blunt instrument like that, we ought to
> see if we can identify pairs of hot LWLocks and make sure they're not
> adjacent.

That's a good point. What about padding all but the shared buffer lwlocks
to 64 bytes? It seems hard to analyze the interactions between all the
locks and to keep that analysis maintained as the code changes.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
