Re: literature on write-ahead logging

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: literature on write-ahead logging
Date: 2011-06-09 15:15:53
Message-ID: BANLkTikAQucQT2YS+wPjLd6kbYzoswuJeg@mail.gmail.com

On Thu, Jun 9, 2011 at 11:13 AM, Alvaro Herrera
<alvherre(at)commandprompt(dot)com> wrote:
> Excerpts from Robert Haas's message of Thu Jun 09 10:55:45 -0400 2011:
>> On Thu, Jun 9, 2011 at 10:34 AM, Alvaro Herrera
>> <alvherre(at)commandprompt(dot)com> wrote:
>
>> > Slower than sleeping?  Consider that this doesn't need to be done for
>> > each record insertion, only when you need to flush (maybe more than
>> > that, but I think that's the lower limit).
>>
>> Maybe.  I'm worried that if someone jacks up max_connections to 1000
>> or 5000 or somesuch it could get pretty slow.
>
> Well, other things are going to get pretty slow as well, not just this
> one, which is why we suggest using a connection pooler with a reasonable
> limit.
>
> On the other hand, maybe those are things we ought to address sometime,
> so perhaps we don't want to be designing the old limitation into a new
> feature.
>
> A possibly crazy idea: instead of having a MaxBackends-sized array, how
> about some smaller array of insert-done-pointer-updating backends (a
> couple dozen or so), and if it's full, the next one has to sleep a bit
> until one of them becomes available.  We could protect this with a
> PGSemaphore having as many counts as there are items in the array.

Maybe. It would have to be structured in such a way that you didn't
perform a system call in the common case, I think.
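
To make the idea concrete, here is a rough sketch of the slot pool Alvaro
describes. It uses plain POSIX semaphores rather than PostgreSQL's
shared-memory PGSemaphore, and all of the names (NUM_UPDATE_SLOTS,
slots_init, update_insert_done_pointer, do_update) are invented for
illustration, so read it as a sketch of the shape, not a proposed patch:

/*
 * Sketch only: a fixed pool of "update slots" guarded by a counting
 * semaphore.  A backend that wants to advance the insert-done pointer
 * acquires a slot; if all slots are taken it sleeps in sem_wait() until
 * another backend releases one.
 */
#include <semaphore.h>

#define NUM_UPDATE_SLOTS 24     /* "a couple dozen or so" */

static sem_t slot_sem;          /* counts free slots */

static void
slots_init(void)
{
    /*
     * Initialize with as many counts as there are slots.  In a real
     * implementation the semaphore would have to be process-shared and
     * live in shared memory; pshared = 0 here keeps the sketch simple.
     */
    sem_init(&slot_sem, 0, NUM_UPDATE_SLOTS);
}

static void
update_insert_done_pointer(void (*do_update)(void))
{
    sem_wait(&slot_sem);        /* acquire a slot (may block) */
    do_update();                /* advance the insert-done pointer */
    sem_post(&slot_sem);        /* release the slot */
}

The blocking sem_wait() is exactly the part the above concern is about:
a production version would presumably want a fast path (say, an atomic
free-slot counter checked before touching the semaphore) so that the
common, uncontended case never enters the kernel.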

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
