From: Ganesh Venkitachalam-1 <ganesh(at)vmware(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Latch implementation
Date: 2010-09-23 16:22:44
Message-ID: Pine.LNX.4.64.1009230919510.3375@aluva.prom.eng.vmware.com
Lists: pgsql-hackers
Attached is the current implementation redone with poll. It lands at
around 10.5 usecs per ping-pong, slightly slower than the raw pipe test,
but better than the current implementation.
As to the other questions: yes, this would matter for sync replication.
Consider an enterprise use case with a 10Gb network and SSDs (not at all
uncommon): a 10Gb network can do a roundtrip with the commit log in <10
usecs, and SSDs have write latency <50 usecs. If the latch then takes tens
of usecs (this stuff scales somewhat with the number of processes; my
data is all with 2 processes), it becomes a very significant part of the
net commit latency. So I'd think this is worth fixing.
Thanks,
--Ganesh
On Thu, 23 Sep 2010, Simon Riggs wrote:
> Date: Thu, 23 Sep 2010 06:56:38 -0700
> From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
> To: Ganesh Venkitachalam <ganesh(at)vmware(dot)com>
> Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
> Subject: Re: [HACKERS] Latch implementation
>
> On Wed, 2010-09-22 at 13:31 -0700, Ganesh Venkitachalam-1 wrote:
>> Hi,
>>
>> I've been playing around with measuring the latch implementation in 9.1,
>> and here are the results of a ping-pong test with 2 processes signalling
>> and waiting on the latch. I did three variations (linux 2.6.18, nehalem
>> processor).
>>
>> One is the current one.
>>
>> The second is built on native semaphores on Linux. This one cannot
>> implement WaitLatchOrSocket, since there's no select involved.
>
> That looks interesting. If we had a need for a latch that would not need
> to wait on a socket as well, this would be better. In sync rep, we
> certainly do. Thanks for measuring this.
>
> Question is: in that case would we use latches or a PGsemaphore?
>
> If the answer is "latch" then we could just have an additional boolean
> option when we request InitLatch() to see what kind of latch we want.
>
>> The third is an implementation based on pipe() and poll(). Note: in its
>> current incarnation it's essentially a hack to measure performance; it's
>> not usable in Postgres as-is, since it assumes all latches are created
>> before any process is forked. We'd need to use mkfifo or similar to sort
>> that out if we really want to go this route.
>>
>> - Current implementation: 1 pingpong is avg 15 usecs
>> - Pipe+poll: 9 usecs
>> - Semaphore: 6 usecs
>
> Pipe+poll not worth it then.
>
> --
> Simon Riggs www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Training and Services
>
>
Attachment | Content-Type | Size
---|---|---
sema.c | text/plain | 2.7 KB
unix_latch.c | text/plain | 18.5 KB
latch.h | text/plain | 2.4 KB