From: Greg Smith <greg(at)2ndQuadrant(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Does larger i/o size make sense?
Date: 2013-08-27 21:04:00
Message-ID: 521D1440.5060605@2ndQuadrant.com
Lists: pgsql-hackers

On 8/27/13 3:54 PM, Josh Berkus wrote:
> I believe that Greenplum currently uses 128K. There's a definite
> benefit for the DW use-case.

Since Linux read-ahead can easily give big gains on fast storage, I
normally set that to at least 4096 sectors = 2048KB. That's a lot
bigger than even this, and definitely necessary for reaching maximum
storage speed.
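For anyone who wants to reproduce that setting: Linux read-ahead is set
per block device in 512-byte sectors via blockdev. A minimal sketch,
assuming the database lives on /dev/sda (substitute your actual data
volume):

```shell
# Placeholder device; use the volume holding the PostgreSQL data directory.
DEV=/dev/sda

# Show the current read-ahead, in 512-byte sectors (default is often 256 = 128KB):
blockdev --getra "$DEV"

# Raise it to 4096 sectors = 4096 * 512 bytes = 2048KB:
blockdev --setra 4096 "$DEV"
```

Note this requires root and doesn't persist across reboots; it normally
goes into a startup script or udev rule.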

I don't think a block size change alone will necessarily duplicate the
gains Greenplum sees on seq scans, though. They've done a lot more
performance optimization on that part of the read path than just using a
larger block size.

As far as quantifying whether this is worth chasing, the most useful
thing to do here is find some fast storage and profile the code with
different block sizes under a large read-ahead setting. I wouldn't spend
a minute trying to come up with a more complicated management scheme
until the potential gain is measured.
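For that kind of test, BLCKSZ is fixed at build time, so each block size
means a separate build and a freshly initialized cluster. A rough sketch
from a source tree (paths and the 32KB choice are just examples):

```shell
# --with-blocksize takes the block size in KB; valid values are powers
# of two from 1 to 32, default 8.
./configure --with-blocksize=32 --prefix=/usr/local/pgsql-blk32
make && make install

# A cluster initialized with one BLCKSZ cannot be used by a server
# built with another, so each build gets its own data directory.
/usr/local/pgsql-blk32/bin/initdb -D /usr/local/pgsql-blk32/data
```

Then run the same seq scan workload against each build and compare
profiles.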

--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
