Re: Latches with weak memory ordering (Re: max_wal_senders must die)

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org, Markus Wanner <markus(at)bluegap(dot)ch>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Robert Haas <robertmhaas(at)gmail(dot)com>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, Josh Berkus <josh(at)agliodbs(dot)com>
Subject: Re: Latches with weak memory ordering (Re: max_wal_senders must die)
Date: 2010-11-19 16:57:09
Message-ID: 1744.1290185829@sss.pgh.pa.us
Lists: pgsql-hackers

Andres Freund <andres(at)anarazel(dot)de> writes:
> I was never talking about 'locking the whole cache' - I was talking about
> flushing/fencing it like a "global" read/write barrier would. And "lock
> xchgb/xaddl" does not imply anything for cache lines other than its own.

If that's the case, why aren't the parallel regression tests falling
over constantly? My recollection is that when I broke the sinval code
by assuming strong memory ordering without spinlocks, it didn't take
long at all for the PPC buildfarm members to expose the problem.
If it's possible for Intel-ish processors to exhibit weak memory
ordering behavior, I'm quite sure that our current code would be showing
bugs everywhere.

The impression I have of current Intel designs is that they ensure global
cache coherency, i.e., if one processor holds a dirty cache line, the
others know that and will fetch the updated data before attempting to
access that piece of memory.
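
That is also why a LOCK-prefixed instruction only needs to own its own
cache line. For reference, roughly what s_lock.h does for TAS on x86
(a simplified sketch, not the exact source):

    typedef unsigned char slock_t;

    /* xchg with a memory operand carries an implicit LOCK: it
     * atomically swaps one byte in one cache line and, on Intel, also
     * acts as a full memory barrier for the issuing processor.
     * Nothing here flushes any other cache line. */
    static __inline__ int
    tas(volatile slock_t *lock)
    {
        slock_t _res = 1;

        __asm__ __volatile__(
            "   xchgb   %0,%1   \n"
        :   "+q" (_res), "+m" (*lock)
        :   /* no inputs */
        :   "memory", "cc");
        return (int) _res;
    }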

regards, tom lane
