Re: Do we need a ShmList implementation?

From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Do we need a ShmList implementation?
Date: 2010-09-20 16:28:55
Message-ID: 4C978BC7.1050909@enterprisedb.com
Lists: pgsql-hackers

On 20/09/10 19:04, Kevin Grittner wrote:
> Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
>
>> In the SSI patch, you'd also need a way to insert an existing
>> struct into a hash table. You currently work around that by using
>> a hash element that contains only the hash key, and a pointer to
>> the SERIALIZABLEXACT struct. It isn't too bad I guess, but I find
>> it a bit confusing.
>
> Hmmm... Mucking with the hash table implementation to accommodate
> that seems like it's a lot of work and risk for pretty minimal
> benefit. Are you sure it's worth it?

No, I'm not sure at all.
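
To spell out the pattern I find confusing, it's roughly this (a
sketch; the struct and field names are my illustration, not the
patch's exact definitions):

    /* Hash element holding only the key plus a pointer; the real
     * payload, the SERIALIZABLEXACT, lives outside the hash table. */
    typedef struct SxactHashEntry
    {
        TransactionId     xid;    /* hash key */
        SERIALIZABLEXACT *sxact;  /* pointer to the actual struct */
    } SxactHashEntry;

If the hash table could store the SERIALIZABLEXACT in place, the
extra indirection would go away, but as you say, that means mucking
with the hash table implementation.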

>> Well, we generally try to avoid dynamic structures in shared
>> memory, because shared memory can't be resized.
>
> But don't HTAB structures go beyond their estimated sizes as needed?

Yes, but not in a very smart way. The memory allocated for hash table
elements is never freed. So if you use up all the "slush fund" shared
memory for SIREAD locks, it can't be used for anything else anymore,
even if the SIREAD locks are later released.
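
To illustrate with the dynahash API (hash_search is the real
function; the table and entry names are made up for the example):

    typedef struct MyEntry { TransactionId key; /* ... */ } MyEntry;

    void
    example(HTAB *MySharedHash, TransactionId key)
    {
        bool     found;
        MyEntry *entry;

        /* Growing past the initial size carves new elements out of
         * the shared-memory "slush fund"... */
        entry = (MyEntry *)
            hash_search(MySharedHash, &key, HASH_ENTER, &found);

        /* ...but removing an entry only puts the element back on the
         * hash table's private freelist.  That memory is never
         * returned to general shared memory, so nothing else can
         * reuse it. */
        hash_search(MySharedHash, &key, HASH_REMOVE, &found);
    }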

>> Any chance of collapsing together entries of already-committed
>> transactions in the SSI patch, to put an upper limit on the number
>> of shmem list entries needed? If you can do that, then a simple
>> array allocated at postmaster startup will do fine.
>
> I suspect it can be done, but I'm quite sure that any such scheme
> would increase the rate of serialization failures. Right now I'm
> trying to see how much I can do to *decrease* the rate of
> serialization failures, so I'm not eager to go there. :-/

I see. It's worth spending some mental power on; an upper limit would
make life a lot easier. It doesn't matter much whether it's
2*max_connections or 100*max_connections, as long as it's finite.
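
With a finite cap, something like this at postmaster startup would be
enough (a sketch; the cap, the names, and the freelist detail are all
hypothetical):

    /* Fixed array sized once at startup; never grows, never
     * fragments shared memory.  MAX_SXACT_ENTRIES is a made-up cap. */
    #define MAX_SXACT_ENTRIES   (100 * MaxBackends)

    static SERIALIZABLEXACT *SxactArray;

    void
    SxactShmemInit(void)
    {
        bool    found;

        SxactArray = (SERIALIZABLEXACT *)
            ShmemInitStruct("SSI transaction array",
                            mul_size(MAX_SXACT_ENTRIES,
                                     sizeof(SERIALIZABLEXACT)),
                            &found);
        /* unused entries would be kept on a simple freelist */
    }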

> If it is
> necessary, the most obvious way to manage this is just to force
> cancellation of the oldest running serializable transaction and
> running ClearOldPredicateLocks(), perhaps iterating, until we free
> an entry to service the new request.

Hmm, that's not very appealing either. But perhaps it's still better
than not letting any new transactions begin. We could say "snapshot
too old" in the error message :-).

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
