From: | Atri Sharma <atri(dot)jiit(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Pg Hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Group Commits Vs WAL Writes |
Date: | 2013-06-28 10:04:12 |
Message-ID: | CAOeZVicRV3gDyud-AY98sUvUcRHSYiRMTHbey94tPGB07UvC4Q@mail.gmail.com |
Lists: | pgsql-hackers |
>
> Yep. To take a degenerate case, suppose that you had many small WAL
> records, say 64 bytes each, so more than 100 per 8K block. If you
> flush those one by one, you're going to rewrite that block 100 times.
> If you flush them all at once, you write that block once.
>
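To make that arithmetic concrete, here is a toy sketch (not PostgreSQL code; the 64-byte records and 8K blocks are simply the numbers from the example above) that counts block-level writes under the two flushing strategies:

```python
# Toy model: count how many times an 8K WAL block gets written when we
# flush after every record versus flushing the whole batch once.

BLOCK_SIZE = 8192
RECORD_SIZE = 64
NUM_RECORDS = 100  # 100 * 64 = 6400 bytes, all within one 8K block

def writes_flush_each(num_records, record_size, block_size):
    """Flush after every record: each flush rewrites the block(s) the new
    record landed in, even if that block was already written before."""
    writes = 0
    end = 0
    for _ in range(num_records):
        start, end = end, end + record_size
        first_block = start // block_size
        last_block = (end - 1) // block_size
        writes += last_block - first_block + 1
    return writes

def writes_batched(num_records, record_size, block_size):
    """Flush once at the end: every touched block is written exactly once."""
    end = num_records * record_size
    return (end - 1) // block_size + 1

print(writes_flush_each(NUM_RECORDS, RECORD_SIZE, BLOCK_SIZE))  # 100 block writes
print(writes_batched(NUM_RECORDS, RECORD_SIZE, BLOCK_SIZE))     # 1 block write
```

So for 100 records that all fit in one block, flushing one by one writes that block 100 times, while batching writes it once.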
> But even when the range is more than the minimum write size (8K for
> WAL), there are still wins. Writing 16K or 24K or 32K submitted as a
> single request can likely be done in a single revolution of the disk
> head. But if you write 8K and wait until it's done, and then write
> another 8K and wait until that's done, the second request may not
> arrive until after the disk head has passed the position where the
> second block needs to go. Now you have to wait for the drive to spin
> back around to the right position.
>
> The details of course vary with the hardware in use, but there are
> very few I/O operations where batching smaller requests into larger
> chunks doesn't help to some degree. Of course, the optimal transfer
> size does vary considerably based on the type of I/O and the specific
> hardware in use.
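The rotational-latency point can also be put in rough numbers. This is back-of-the-envelope arithmetic only, assuming a 7200 RPM drive and the worst case where each serial write just misses the head position; real hardware, write caching, and command queueing will change these figures considerably:

```python
# Back-of-the-envelope arithmetic (assumed 7200 RPM drive; illustrative only).

RPM = 7200
rev_ms = 60_000 / RPM  # ~8.33 ms for one full platter revolution

n_writes = 3           # three serial 8K requests vs. one 24K request

# Contiguous 24K submitted as a single request: the target sectors pass
# under the head within at most one revolution.
batched_ms = 1 * rev_ms

# Worst case for write-and-wait: each later request arrives just after the
# head has passed its target sector and waits ~one full revolution.
serial_ms = n_writes * rev_ms

print(f"batched 24K write: ~{batched_ms:.1f} ms")
print(f"three serial 8K writes (worst case): ~{serial_ms:.1f} ms")
```

Under those assumptions the serial pattern can cost roughly 25 ms against roughly 8 ms for the batched request, which is the win described above.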
This makes a lot of sense. I was always under the impression that
batching small requests into larger ones adds I/O latency overhead, but
it is actually the other way round: batching is what avoids the extra
latency. I understand it now.
Thanks a ton,
--
Regards,
Atri
l'apprenant