Re: Cost limited statements RFC

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Cost limited statements RFC
Date: 2013-06-08 21:17:45
Message-ID: CAMkU=1w5+NZ3=ER-Hvf-_=Z-rDVLhVST7Rv4oTVqEAS9eR_yog@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jun 8, 2013 at 1:57 PM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:

> On 6/8/13 4:43 PM, Jeff Janes wrote:
>
>> Also, in all the anecdotes I've been hearing about autovacuum causing
>> problems from too much IO, in which people can identify the specific
>> problem, it has always been the write pressure, not the read, that
>> caused the problem. Should the default be to have the read limit be
>> inactive and rely on the dirty-limit to do the throttling?
>>
>
> That would be bad; I have to carefully constrain both of them on systems
> that are short on I/O throughput. There are all sorts of cases where cleanup
> of a large and badly cached relation will hit the read limit right now.
>

I wouldn't remove the ability, just change the default. You can still tune
your exquisitely balanced systems :)

Of course, if the default were changed, who knows what complaints we would
start getting; we don't see them now only because the current default
prevents them.

But my gut feeling is that if autovacuum tries to read faster than the
hardware can support, it simply gets throttled by its own IO waits down to a
level the hardware can comfortably sustain, with minimal interference with
other processes. Reads are self-limiting, because the reader has to wait out
each cache miss itself. If it tries to write too much, however, the writes
pile up in the OS cache until the kernel flushes the accumulated dirty data,
and at that point the IO system is reduced to a quivering heap, not just for
that process, but for all others as well.

>
> I suspect the reason we don't see as many complaints is that a lot more
> systems can handle 7.8MB/s of random reads than there are ones that can do
> 3.9MB/s of random writes. If we removed that read limit, a lot more
> complaints would start rolling in about the read side.
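
If I have the arithmetic right, those two figures fall straight out of the
default cost settings (vacuum_cost_limit = 200 inherited by autovacuum,
autovacuum_vacuum_cost_delay = 20ms, vacuum_cost_page_miss = 10,
vacuum_cost_page_dirty = 20, 8kB pages). A rough Python sketch, just to show
the derivation -- the values below are the stock defaults, not anything
measured on Greg's systems:

    # Throughput implied by the default cost settings, assuming every page
    # processed is a cache miss (reads) or gets dirtied (writes).
    cost_limit = 200        # vacuum_cost_limit (autovacuum inherits it)
    cost_delay_s = 0.020    # autovacuum_vacuum_cost_delay = 20ms
    page_miss_cost = 10     # vacuum_cost_page_miss
    page_dirty_cost = 20    # vacuum_cost_page_dirty
    page_bytes = 8192       # default block size

    budget_per_s = cost_limit / cost_delay_s                     # 10,000 units/s
    read_mb_s = budget_per_s / page_miss_cost * page_bytes / 1024**2
    dirty_mb_s = budget_per_s / page_dirty_cost * page_bytes / 1024**2
    print(read_mb_s, dirty_mb_s)   # ~7.8 MB/s of reads, ~3.9 MB/s of dirtied pages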

Why is there so much random IO? Do your systems have
autovacuum_vacuum_scale_factor set far below the default? Unless they do,
most of the IO (both read and write) should be sequential. Or at least, I
don't understand why it would not be.
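
(For anyone following along: with the stock settings, autovacuum kicks in on
a table once dead tuples exceed roughly autovacuum_vacuum_threshold +
autovacuum_vacuum_scale_factor * reltuples, i.e. 50 + 0.2 * reltuples by
default. A quick sketch of that rule of thumb:

    # Approximate dead-tuple count that triggers autovacuum on a table,
    # using the stock defaults (threshold = 50, scale_factor = 0.2).
    def autovacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
        return threshold + scale_factor * reltuples

    print(autovacuum_trigger(10000000))   # ~2 million dead tuples

So at the default scale factor a vacuum pass has to touch a sizable fraction
of the table, which is why I'd expect the IO to be mostly sequential.)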

Cheers,

Jeff
