Re: Cost limited statements RFC

From: Greg Smith <greg(at)2ndQuadrant(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Cost limited statements RFC
Date: 2013-06-08 20:57:36
Message-ID: 51B39AC0.40607@2ndQuadrant.com
Lists: pgsql-hackers

On 6/8/13 4:43 PM, Jeff Janes wrote:

> Also, in all the anecdotes I've been hearing about autovacuum causing
> problems from too much IO, in which people can identify the specific
> problem, it has always been the write pressure, not the read, that
> caused the problem. Should the default be to have the read limit be
> inactive and rely on the dirty-limit to do the throttling?

That would be bad; I have to carefully constrain both of them on systems
that are short on I/O throughput. There are all sorts of cases where
cleanup of a large and badly cached relation will hit the read limit
right now.

I suspect the reason we don't see as many complaints is that a lot more
systems can handle 7.8MB/s of random reads than can do 3.9MB/s of random
writes. If we removed that read limit, a lot more complaints would start
rolling in about the read side.
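For anyone following along, those two figures fall out of the cost model
arithmetic. A quick sketch, assuming the stock settings of this era
(vacuum_cost_limit=200 inherited via autovacuum_vacuum_cost_limit=-1,
autovacuum_vacuum_cost_delay=20ms, vacuum_cost_page_miss=10,
vacuum_cost_page_dirty=20, 8kB pages):

    # Back-of-the-envelope derivation of the default autovacuum I/O limits.
    cost_limit = 200      # vacuum_cost_limit (used when autovacuum_vacuum_cost_limit = -1)
    cost_delay = 0.020    # autovacuum_vacuum_cost_delay, in seconds
    page_miss  = 10       # vacuum_cost_page_miss (charged per page read from disk)
    page_dirty = 20       # vacuum_cost_page_dirty (charged per page dirtied)
    block_size = 8192     # bytes per page

    budget_per_sec = cost_limit / cost_delay   # cost units spendable per second
    read_rate  = budget_per_sec / page_miss  * block_size / (1024 * 1024)
    write_rate = budget_per_sec / page_dirty * block_size / (1024 * 1024)

    print("read limit:  %.1f MB/s" % read_rate)    # -> 7.8
    print("write limit: %.1f MB/s" % write_rate)   # -> 3.9

Lowering vacuum_cost_limit (or raising the page cost/delay settings) shrinks
both numbers proportionally, which is the knob I end up turning on the
I/O-starved systems mentioned above.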

--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
