Re: reducing the overhead of frequent table locks - now, with WIP patch

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: reducing the overhead of frequent table locks - now, with WIP patch
Date: 2011-06-06 12:08:04
Message-ID: BANLkTin21kDTbm=3FKvDHZmzS2ouk3rmAg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 6, 2011 at 8:02 AM, Heikki Linnakangas
<heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
> On 06.06.2011 07:12, Robert Haas wrote:
>>
>> I did some further investigation of this.  It appears that more than
>> 99% of the lock manager lwlock traffic that remains with this patch
>> applied has locktag_type == LOCKTAG_VIRTUALTRANSACTION.  Every SELECT
>> statement runs in a separate transaction, and for each new transaction
>> we run VirtualXactLockTableInsert(), which takes a lock on the vxid of
>> that transaction, so that other processes can wait for it.  That
>> requires acquiring and releasing a lock manager partition lock, and we
>> have to do the same thing a moment later at transaction end to dump
>> the lock.
>>
>> A quick grep seems to indicate that the only places where we actually
>> make use of those VXID locks are in DefineIndex(), when CREATE INDEX
>> CONCURRENTLY is in use, and during Hot Standby, when max_standby_delay
>> expires.  Considering that these are not commonplace events, it seems
>> tremendously wasteful to incur the overhead for every transaction.  It
>> might be possible to make the lock entry spring into existence "on
>> demand" - i.e. if a backend wants to wait on a vxid entry, it creates
>> the LOCK and PROCLOCK objects for that vxid.  That presents a few
>> synchronization challenges, plus we have to make sure that the
>> backend that's just been "given" a lock knows that it needs to release
>> it, but those seem like they might be manageable problems, especially
>> given the new infrastructure introduced by the current patch, which
>> already has to deal with some of those issues.  I'll look into this
>> further.
>
> At the moment, the transaction with a given vxid acquires an ExclusiveLock
> on the vxid, and anyone who wants to wait for it to finish acquires a
> ShareLock. If we simply reverse that, so that the transaction itself takes
> ShareLock and anyone wanting to wait on it takes an ExclusiveLock, will
> this fastlock patch bust this bottleneck too?

Not without some further twaddling. Right now, the fast path only
applies when you are taking a lock < ShareUpdateExclusiveLock on an
unshared relation. See also the email I just sent on why using the
exact same mechanism might not be such a hot idea.
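
For anyone following along, the per-transaction vxid traffic described
upthread comes from this pair of functions in lmgr.c (shown slightly
simplified, with asserts omitted):

/* At each transaction start: advertise our vxid so others can wait on it. */
void
VirtualXactLockTableInsert(VirtualTransactionId vxid)
{
    LOCKTAG     tag;

    SET_LOCKTAG_VIRTUALTRANSACTION(tag, vxid);

    /*
     * Goes through a lock manager partition lwlock; we pay the same
     * cost again at transaction end when the lock is dumped.
     */
    (void) LockAcquire(&tag, ExclusiveLock, false, false);
}

/*
 * Used by CREATE INDEX CONCURRENTLY and by Hot Standby conflict handling
 * to block until the target transaction commits or aborts.
 */
void
VirtualXactLockTableWait(VirtualTransactionId vxid)
{
    LOCKTAG     tag;

    SET_LOCKTAG_VIRTUALTRANSACTION(tag, vxid);

    (void) LockAcquire(&tag, ShareLock, false, false);
    LockRelease(&tag, ShareLock, false);
}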
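
And for concreteness, the fast-path eligibility test boils down to
roughly the following shape (the macro name here is made up for
illustration; the patch's actual test may differ in detail):

/*
 * Sketch of the fast-path eligibility check: a lock qualifies only if
 * it targets an unshared relation in our own database and the requested
 * mode is "weak".
 */
#define FastPathEligible(locktag, mode) \
    ((locktag)->locktag_type == LOCKTAG_RELATION && \
     (locktag)->locktag_field1 == MyDatabaseId && \
     MyDatabaseId != InvalidOid && \
     (mode) < ShareUpdateExclusiveLock)

A vxid lock has locktag_type == LOCKTAG_VIRTUALTRANSACTION rather than
LOCKTAG_RELATION, so it fails the first test regardless of which lock
modes we pick - which is why just swapping ShareLock and ExclusiveLock
doesn't get us there by itself.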

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
