Re: [REVIEW] Re: Compression of full-page-writes

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Andres Freund <andres(at)anarazel(dot)de>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Rahila Syed <rahilasyed90(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [REVIEW] Re: Compression of full-page-writes
Date: 2014-12-12 21:40:01
Message-ID: CA+TgmoZtZfzh-UPXoZp3LnRMBqxqOij0Z+AXetmBhpsjWdQaZw@mail.gmail.com
Lists: pgsql-hackers

On Fri, Dec 12, 2014 at 1:51 PM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> What I don't understand is why we aren't working on double buffering,
> since that cost would be paid in a background process and would be
> evenly spread out across a checkpoint. Plus we'd be able to remove
> FPWs altogether, which is like 100% compression.

The previous patch to implement that - by somebody at VMware - was an
epic fail. I'm not opposed to seeing somebody try again, but it's a
tricky problem. When the double buffer fills up, you've got to finish
flushing the pages whose images are stored in the buffer to disk
before you can overwrite it, which acts like a kind of
mini-checkpoint. That problem might be solvable, but let's use this
thread to discuss this patch, not some other patch that someone might
have chosen to write but didn't.
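
To make that concrete, here's a rough, throwaway sketch of the shape
of the problem (this is not PostgreSQL code and not what the VMware
patch did; the buffer size, names, and flush behavior are made up
purely for illustration):

/*
 * Hypothetical fixed-size double-write buffer.  Page images are
 * appended until the buffer is full; at that point every buffered
 * image, and the data pages it protects, must be flushed before any
 * slot can be reused -- the "mini-checkpoint" stall described above.
 */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   8192
#define DWBUF_SLOTS 8			/* assumed buffer capacity, in pages */

typedef struct
{
	char		page_images[DWBUF_SLOTS][PAGE_SIZE];
	int			nused;			/* slots holding not-yet-flushed images */
} DoubleWriteBuffer;

/* Simulate flushing: buffered images first, then the data pages they cover. */
static void
dwbuf_flush(DoubleWriteBuffer *buf)
{
	printf("flushing %d buffered page images and their data pages "
		   "(mini-checkpoint stall)\n", buf->nused);
	buf->nused = 0;				/* all slots reusable again */
}

/* Route one dirty page through the double-write buffer. */
static void
dwbuf_write_page(DoubleWriteBuffer *buf, const char *page)
{
	if (buf->nused == DWBUF_SLOTS)
		dwbuf_flush(buf);		/* writers wait here while the flush runs */
	memcpy(buf->page_images[buf->nused++], page, PAGE_SIZE);
}

int
main(void)
{
	DoubleWriteBuffer buf = {.nused = 0};
	char		page[PAGE_SIZE] = {0};

	/* Write more pages than the buffer can hold to trigger the stall. */
	for (int i = 0; i < 20; i++)
		dwbuf_write_page(&buf, page);

	return 0;
}

In this toy version, whichever backend fills the buffer waits out the
entire flush before its own write can proceed; that pause is the
mini-checkpoint behavior I mean, and pushing that work into a
background process without just moving the wait around is the hard
part.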

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
