From: David Fetter <david(at)fetter(dot)org>
To: PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Page Checksums
Date: 2011-12-17 21:33:24
Message-ID: 20111217213324.GA4497@fetter.org
Lists: pgsql-hackers

Folks,

What:

Please find attached a patch for 9.2-to-be which implements page
checksums. It changes the page format, so it's an initdb-forcing
change.

How:
In order to ensure that the checksum actually matches the hint
bits, this makes a copy of the page, calculates the checksum, then
sends the checksum and copy to the kernel, which handles sending
it the rest of the way to persistent storage.

Why:
My employer, VMware, thinks it's a good thing, and has dedicated
engineering resources to it. Lots of people's data is already in
cosmic ray territory, and many others' data will be soon. And
it's a TODO :)

If this introduces new failure modes, please detail, and preferably
demonstrate, just what those new modes are. As far as we've been able
to determine so far, it could expose on-disk corruption that wasn't
exposed before, but we see this as dealing with a previously
un-dealt-with failure rather than causing one.

Questions, comments and bug fixes are, of course, welcome.

Let the flames begin!

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

Attachment: checksums_20111217_01.patch (text/plain, 10.7 KB)

From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-18 08:14:38
Message-ID: 4EEDA0EE.7060006@enterprisedb.com
Lists: pgsql-hackers

On 17.12.2011 23:33, David Fetter wrote:
> What:
>
> Please find attached a patch for 9.2-to-be which implements page
> checksums. It changes the page format, so it's an initdb-forcing
> change.
>
> How:
> In order to ensure that the checksum actually matches the hint
> bits, this makes a copy of the page, calculates the checksum, then
> sends the checksum and copy to the kernel, which handles sending
> it the rest of the way to persistent storage.
>...
> If this introduces new failure modes, please detail, and preferably
> demonstrate, just what those new modes are.

Hint bits, torn pages -> failed CRC. See earlier discussion:

http://archives.postgresql.org/pgsql-hackers/2009-11/msg01975.php

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: David Fetter <david(at)fetter(dot)org>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-18 08:54:00
Message-ID: 20111218085400.GA14268@fetter.org
Lists: pgsql-hackers

On Sun, Dec 18, 2011 at 10:14:38AM +0200, Heikki Linnakangas wrote:
> On 17.12.2011 23:33, David Fetter wrote:
> >What:
> >
> > Please find attached a patch for 9.2-to-be which implements page
> > checksums. It changes the page format, so it's an initdb-forcing
> > change.
> >
> >How:
> > In order to ensure that the checksum actually matches the hint
> > bits, this makes a copy of the page, calculates the checksum, then
> > sends the checksum and copy to the kernel, which handles sending
> > it the rest of the way to persistent storage.
> >...
> >If this introduces new failure modes, please detail, and preferably
> >demonstrate, just what those new modes are.
>
> Hint bits, torn pages -> failed CRC. See earlier discussion:
>
> http://archives.postgresql.org/pgsql-hackers/2009-11/msg01975.php

The patch requires that full page writes be on in order to obviate
this problem by never reading a torn page. Instead, a copy of the
page has already hit storage before the torn write occurs.

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-18 10:19:32
Message-ID: 4EEDBE34.9010409@enterprisedb.com
Lists: pgsql-hackers

On 18.12.2011 10:54, David Fetter wrote:
> On Sun, Dec 18, 2011 at 10:14:38AM +0200, Heikki Linnakangas wrote:
>> On 17.12.2011 23:33, David Fetter wrote:
>>> If this introduces new failure modes, please detail, and preferably
>>> demonstrate, just what those new modes are.
>>
>> Hint bits, torn pages -> failed CRC. See earlier discussion:
>>
>> http://archives.postgresql.org/pgsql-hackers/2009-11/msg01975.php
>
> The patch requires that full page writes be on in order to obviate
> this problem by never reading a torn page.

Doesn't help. Hint bit updates are not WAL-logged.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: David Fetter <david(at)fetter(dot)org>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-18 18:44:00
Message-ID: 20111218184400.GB14268@fetter.org
Lists: pgsql-hackers

On Sun, Dec 18, 2011 at 12:19:32PM +0200, Heikki Linnakangas wrote:
> On 18.12.2011 10:54, David Fetter wrote:
> >On Sun, Dec 18, 2011 at 10:14:38AM +0200, Heikki Linnakangas wrote:
> >>On 17.12.2011 23:33, David Fetter wrote:
> >>>If this introduces new failure modes, please detail, and preferably
> >>>demonstrate, just what those new modes are.
> >>
> >>Hint bits, torn pages -> failed CRC. See earlier discussion:
> >>
> >>http://archives.postgresql.org/pgsql-hackers/2009-11/msg01975.php
> >
> >The patch requires that full page writes be on in order to obviate
> >this problem by never reading a torn page.
>
> Doesn't help. Hint bit updates are not WAL-logged.

What new failure modes are you envisioning for this case? Any way to
simulate them, even if it's by injecting faults into the source code?

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-18 19:34:03
Message-ID: 4EEE402B.1030807@enterprisedb.com
Lists: pgsql-hackers

On 18.12.2011 20:44, David Fetter wrote:
> On Sun, Dec 18, 2011 at 12:19:32PM +0200, Heikki Linnakangas wrote:
>> On 18.12.2011 10:54, David Fetter wrote:
>>> On Sun, Dec 18, 2011 at 10:14:38AM +0200, Heikki Linnakangas wrote:
>>>> On 17.12.2011 23:33, David Fetter wrote:
>>>>> If this introduces new failure modes, please detail, and preferably
>>>>> demonstrate, just what those new modes are.
>>>>
>>>> Hint bits, torn pages -> failed CRC. See earlier discussion:
>>>>
>>>> http://archives.postgresql.org/pgsql-hackers/2009-11/msg01975.php
>>>
>>> The patch requires that full page writes be on in order to obviate
>>> this problem by never reading a torn page.
>>
>> Doesn't help. Hint bit updates are not WAL-logged.
>
> What new failure modes are you envisioning for this case?

Umm, the one explained in the email I linked to... Let me try once more.
For the sake of keeping the example short, imagine that the PostgreSQL
block size is 8 bytes, and the OS block size is 4 bytes. The CRC is 1
byte, and is stored in the first byte of each page.

In the beginning, a page is in the buffer cache, and it looks like this:

AA 12 34 56 78 9A BC DE

AA is the checksum. Now a hint bit on the last byte is set, so that the
page in the shared buffer cache looks like this:

AA 12 34 56 78 9A BC DF

Now PostgreSQL wants to evict the page from the buffer cache, so it
recalculates the CRC. The page in the buffer cache now looks like this:

BB 12 34 56 78 9A BC DF

Now, PostgreSQL writes the page to the OS cache, with the write() system
call. It sits in the OS cache for a few seconds, and then the OS decides
to flush the first 4 bytes, i.e. the first OS block, to disk. On disk,
you now have this:

BB 12 34 56 78 9A BC DE

If the server now crashes, before the OS has flushed the second half of
the PostgreSQL page to disk, you have a classic torn page. The updated
CRC made it to disk, but the hint bit did not. The CRC on disk does not
match the rest of that page's contents on disk.

Without CRCs, that's not a problem, because the data is valid whether or
not the hint bit makes it to the disk. It's just a hint, after all. But
when you have a CRC on the page, the CRC is only valid if both the CRC
update *and* the hint bit update make it to disk, or neither does.

So you've just turned an innocent torn page, which PostgreSQL tolerates
just fine, into a block with bad CRC.
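For readers who want to see this concretely, here is a toy sketch of the scenario above. This is not PostgreSQL code: the 8-byte page, 4-byte OS block, and 1-byte XOR "checksum" are the simplified sizes from the example, not the real on-disk format.

```python
def checksum(payload: bytes) -> int:
    """Toy 1-byte checksum: XOR of the payload bytes."""
    c = 0
    for b in payload:
        c ^= b
    return c

def make_page(payload: bytes) -> bytes:
    """Prepend the checksum byte to the 7-byte payload."""
    return bytes([checksum(payload)]) + payload

# Page in the buffer cache: AA-equivalent checksum plus 12 34 56 78 9A BC DE.
page = make_page(bytes.fromhex("123456789abcde"))

# A hint bit is set on the last byte (DE -> DF); page is now dirty.
hinted = page[:7] + bytes([page[7] | 0x01])

# Eviction recalculates the checksum before write().
evicted = make_page(hinted[1:])

# Crash: only the first 4-byte OS block reaches disk; the second half
# is still the old image, hint bit unset.
on_disk = evicted[:4] + page[4:]

# The stored checksum covers the hinted image, but the hint bit never
# made it to disk, so verification fails even though the data is usable.
print(checksum(on_disk[1:]) == on_disk[0])  # False: torn page reads as corruption
```

The same write without the intervening checksum would be harmless, which is exactly the point: the checksum converts a tolerated torn page into an apparent corruption.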

> Any way to
> simulate them, even if it's by injecting faults into the source code?

Hmm, it's hard to persuade the OS to suffer a torn page on purpose. What
you could do is split the write() call in mdwrite() into two. First
write the 1st half of the page, then the second. Then you can put a
breakpoint in between the writes, and kill the system before the 2nd
half is written.
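The split-write idea can also be sketched outside PostgreSQL. The toy Python below (not mdwrite() itself; the file name and page size are illustrative) writes a page in two halves with a "crash" hook between them, leaving a torn page on disk:

```python
import os

PAGE_SIZE = 8192
HALF = PAGE_SIZE // 2

def torn_write(fd: int, page: bytes, offset: int, crash_between: bool = False):
    """Write one page as two half-page writes; optionally stop after the
    first half, simulating a crash between the two physical writes."""
    assert len(page) == PAGE_SIZE
    os.pwrite(fd, page[:HALF], offset)
    os.fsync(fd)                      # force the first half to storage
    if crash_between:
        return                        # the "crash": second half never lands
    os.pwrite(fd, page[HALF:], offset + HALF)
    os.fsync(fd)

# Usage: lay down an old all-zeros image, then tear a write of an
# all-ones image, and observe the mixed old/new page on disk.
fd = os.open("demo_page.bin", os.O_RDWR | os.O_CREAT, 0o600)
os.pwrite(fd, b"\x00" * PAGE_SIZE, 0)
torn_write(fd, b"\xff" * PAGE_SIZE, 0, crash_between=True)
img = os.pread(fd, PAGE_SIZE, 0)
print(img[:HALF] == b"\xff" * HALF and img[HALF:] == b"\x00" * HALF)  # True
os.close(fd)
os.unlink("demo_page.bin")
```

In a real test against the server, the equivalent change would go into the C code, with a breakpoint or sleep between the two write() calls as Heikki describes.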

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Peter Eisentraut <peter_e(at)gmx(dot)net>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-18 19:42:09
Message-ID: 1324237329.28578.1.camel@vanquo.pezone.net
Lists: pgsql-hackers

On Sun, 2011-12-18 at 21:34 +0200, Heikki Linnakangas wrote:
> On 18.12.2011 20:44, David Fetter wrote:
> > Any way to
> > simulate them, even if it's by injecting faults into the source code?
>
> Hmm, it's hard to persuade the OS to suffer a torn page on purpose. What
> you could do is split the write() call in mdwrite() into two. First
> write the 1st half of the page, then the second. Then you can put a
> breakpoint in between the writes, and kill the system before the 2nd
> half is written.

Perhaps the Library-level Fault Injector (http://lfi.sf.net) could be
used to set up a test for this. (Not that I think you need one, but if
David wants to see it happen himself ...)


From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-18 19:51:04
Message-ID: 4EEE4428.5040409@krogh.cc
Lists: pgsql-hackers

On 2011-12-18 11:19, Heikki Linnakangas wrote:
>> The patch requires that full page writes be on in order to obviate
>> this problem by never reading a torn page.
>
> Doesn't help. Hint bit updates are not WAL-logged.

I don't know if it would be seen as a "half-baked feature" or similar,
and I don't know if the hint bit problem is solvable at all, but I could
easily imagine checksumming just "skipping" the hint bits entirely.

It would still provide checksumming for the majority of the "data" sitting
underneath the system, and would still be extremely useful in my
eyes.

Jesper
--
Jesper


From: Greg Stark <stark(at)mit(dot)edu>
To: Jesper Krogh <jesper(at)krogh(dot)cc>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-19 01:55:08
Message-ID: CAM-w4HPbhRqwyZYw_j5mZZEB-a70GmcUGvC-2T-a+H3CswyZwQ@mail.gmail.com
Lists: pgsql-hackers

On Sun, Dec 18, 2011 at 7:51 PM, Jesper Krogh <jesper(at)krogh(dot)cc> wrote:
> I don't know if it would be seen as a "half-baked feature" or similar,
> and I don't know if the hint bit problem is solvable at all, but I could
> easily imagine checksumming just "skipping" the hint bits entirely.

That was one approach discussed. The problem is that the hint bits are
currently in each heap tuple header which means the checksum code
would have to know a fair bit about the structure of the page format.
Also the closer people looked the more hint bits kept turning up
because the coding pattern had been copied to other places (the page
header has one, and index pointers have a hint bit indicating that the
target tuple is deleted, etc). And to make matters worse skipping
individual bits in varying places quickly becomes a big consumer of
cpu time since it means injecting logic into each iteration of the
checksum loop to mask out the bits.

So the general feeling was that we should move all the hint bits to a
dedicated part of the buffer so that they could all be skipped in a
simple way that doesn't depend on understanding the whole structure of
the page. That's not conceptually hard, it's just a fair amount of
work. I think that's where it was left off.
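The "dedicated hint-bit area" idea can be sketched as follows. This is an illustration only, assuming a hypothetical fixed region for hint bits (the offsets are made up, not PostgreSQL's page layout): with all hint bits in one place, the checksum skips a single contiguous slice instead of masking individual bits inside the checksum loop.

```python
import zlib

# Hypothetical fixed hint-bit region within a toy 256-byte page.
HINT_AREA_START = 16
HINT_AREA_END = 32

def page_checksum(page: bytes) -> int:
    """CRC over the page with the hint-bit region excluded, so hint-bit
    changes never invalidate the checksum."""
    return zlib.crc32(page[:HINT_AREA_START] + page[HINT_AREA_END:])

page = bytearray(256)
before = page_checksum(bytes(page))

page[HINT_AREA_START] |= 0x01          # flip a "hint bit"
assert page_checksum(bytes(page)) == before   # checksum unaffected

page[0] ^= 0xFF                        # a real data change
assert page_checksum(bytes(page)) != before   # checksum catches it
```

The per-bit masking alternative would have to do this exclusion inside every iteration of the checksum loop, for hint bits scattered through tuple headers and index pointers, which is the CPU cost Greg describes.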

There is another way to look at this problem. Perhaps it's worth
having a checksum *even if* there are ways for the checksum to be
spuriously wrong. Obviously having an invalid checksum can't be a
fatal error then but it might still be useful information. Right now
people don't really know if their system can experience torn pages or
not and having some way of detecting them could be useful. And if you
have other unexplained symptoms then having checksum errors might be
enough evidence that the investigation should start with the hardware
and get the sysadmin looking at hardware logs and running memtest
sooner.

--
greg


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 04:21:48
Message-ID: 4EEEBBDC.1020005@agliodbs.com
Lists: pgsql-hackers

On 12/18/11 5:55 PM, Greg Stark wrote:
> There is another way to look at this problem. Perhaps it's worth
> having a checksum *even if* there are ways for the checksum to be
> spuriously wrong. Obviously having an invalid checksum can't be a
> fatal error then but it might still be useful information. Right now
> people don't really know if their system can experience torn pages or
> not and having some way of detecting them could be useful. And if you
> have other unexplained symptoms then having checksum errors might be
> enough evidence that the investigation should start with the hardware
> and get the sysadmin looking at hardware logs and running memtest
> sooner.

Frankly, if I had torn pages, even if it was just hint bits missing, I
would want that to be logged. That's expected if you crash, but if you
start seeing bad CRC warnings when you haven't had a crash? That means
you have a HW problem.

As long as the CRC checks are by default warnings, then I don't see a
problem with this; it's certainly better than what we have now.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


From: Aidan Van Dyk <aidan(at)highrise(dot)ca>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 04:44:43
Message-ID: CAC_2qU8nWNrOZ7L-A9ATHNTiARX1wXq3RbnL1NCXg6+6VT7Aqw@mail.gmail.com
Lists: pgsql-hackers

On Sun, Dec 18, 2011 at 11:21 PM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> On 12/18/11 5:55 PM, Greg Stark wrote:
>> There is another way to look at this problem. Perhaps it's worth
>> having a checksum *even if* there are ways for the checksum to be
>> spuriously wrong. Obviously having an invalid checksum can't be a
>> fatal error then but it might still be useful information. Right now
>> people don't really know if their system can experience torn pages or
>> not and having some way of detecting them could be useful. And if you
>> have other unexplained symptoms then having checksum errors might be
>> enough evidence that the investigation should start with the hardware
>> and get the sysadmin looking at hardware logs and running memtest
>> sooner.
>
> Frankly, if I had torn pages, even if it was just hint bits missing, I
> would want that to be logged.  That's expected if you crash, but if you
> start seeing bad CRC warnings when you haven't had a crash?  That means
> you have a HW problem.
>
> As long as the CRC checks are by default warnings, then I don't see a
> problem with this; it's certainly better than what we have now.

But the scary part is you don't know how long *ago* the crash was.
Because a hint-bit-only change w/ a torn page is a "non-event" in
the PostgreSQL *DESIGN*, crash recovery doesn't do anything to try
and "scrub" every page in the database.

So you could have a crash, then a recovery, and a couple clean
shutdown-restart combinations before you happen to read the "needed"
page that was torn in the crash $X [ days | weeks | months ] ago.
It's specifically because PostgreSQL was *DESIGNED* to make torn pages
a non-event (because WAL/FPW fixes anything that's dangerous) that
the whole CRC issue is so complicated...

I'll throw out a few random thoughts (some repeated) that people who
really want the CRC can fight over:

1) Find a way to not bother writing out hint-bit-only-dirty pages....
I know people like Kevin keep recommending a vacuum freeze after a
big load to avoid later problems anyway, and I think that's probably
common in big OLAP shops, and OLTP people are likely to have real
changes on the page anyway. Does anybody want to try and measure
what type of performance trade-offs we'd really have on a variety of
"normal" (ya, I know, what's normal) workloads? If the page has a
real change, it's got a WAL FPW, so we avoid the problem....

2) If the writer/checksummer knows it's a hint-bit-only-dirty page,
can it stuff a "cookie" checksum in it and not bother verifying?
Loses a bit of the CRC guarantee, especially around "crashes", which
is when we expect a torn page, but avoids the whole "scary! scary!
Your database is corrupt!" false positives in the situation PostgreSQL
was specifically designed to make not scary.

3) Anybody investigated putting the CRC in a relation fork, but not
right in the data block? If the CRC contains a timestamp, and is WAL
logged before the write, at least on reading a block with a wrong
checksum, if a warning is emitted, the timestamp could be looked at by
whoever is reading the warning and know that the block was written
shortly before the crash $X $PERIODS ago....

The whole "CRC is only a warning" because we "expect to get them if we
ever crashed" means that the time when we most want them, we have to
assume they are bogus... And to make matters worse, we don't even
know when the period of "they may be bogus" ends, unless we have a
way to methodically force PG through every buffer in the database after
the crash... And then that makes them very hard to consider
useful...

a.

--
Aidan Van Dyk                                             Create like a god,
aidan(at)highrise(dot)ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 11:10:11
Message-ID: CA+U5nMJVt8EXxgAtBVptYYuu0AbOtVcUOfCtgcKAQ9aLnrCH1A@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 19, 2011 at 4:21 AM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> On 12/18/11 5:55 PM, Greg Stark wrote:
>> There is another way to look at this problem. Perhaps it's worth
>> having a checksum *even if* there are ways for the checksum to be
>> spuriously wrong. Obviously having an invalid checksum can't be a
>> fatal error then but it might still be useful information. Right now
>> people don't really know if their system can experience torn pages or
>> not and having some way of detecting them could be useful. And if you
>> have other unexplained symptoms then having checksum errors might be
>> enough evidence that the investigation should start with the hardware
>> and get the sysadmin looking at hardware logs and running memtest
>> sooner.
>
> Frankly, if I had torn pages, even if it was just hint bits missing, I
> would want that to be logged.  That's expected if you crash, but if you
> start seeing bad CRC warnings when you haven't had a crash?  That means
> you have a HW problem.
>
> As long as the CRC checks are by default warnings, then I don't see a
> problem with this; it's certainly better than what we have now.

It is an important problem, and also a big one, which is why it still exists.

Throwing WARNINGs for normal events would not help anybody; thousands
of false positives would just make Postgres appear to be less robust
than it really is. That would be a credibility disaster. VMWare
already have their own distro, so if they like this patch they can use
it.

The only sensible way to handle this is to change the page format as
discussed. IMHO the only sensible way that can happen is if we also
support an online upgrade feature. I will take on the online upgrade
feature if others work on the page format issues, but none of this is
possible for 9.2, ISTM.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>
Subject: Re: Page Checksums
Date: 2011-12-19 11:13:44
Message-ID: 201112191213.44930.andres@anarazel.de
Lists: pgsql-hackers

On Monday, December 19, 2011 12:10:11 PM Simon Riggs wrote:
> The only sensible way to handle this is to change the page format as
> discussed. IMHO the only sensible way that can happen is if we also
> support an online upgrade feature. I will take on the online upgrade
> feature if others work on the page format issues, but none of this is
> possible for 9.2, ISTM.
Totally with you that it's not 9.2 material. But I think if somebody actually
wants to implement that, that person would need to start discussing and
implementing rather soon if it is to be ready for 9.3. Just because it's not
geared towards the next release doesn't mean it's OT.

Andres


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 12:50:10
Message-ID: CA+TgmobSzXLhFc-gBmgSRzxuWXqQ09vfcvH844o_kDtYRtJ6rw@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 19, 2011 at 6:10 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> Throwing WARNINGs for normal events would not help anybody; thousands
> of false positives would just make Postgres appear to be less robust
> than it really is. That would be a credibility disaster. VMWare
> already have their own distro, so if they like this patch they can use
> it.

Agreed on all counts.

It seems to me that it would be possible to plug this hole by keeping
track of which pages in shared_buffers have had unlogged changes to
them since the last FPI. When you go to evict such a page, you write
some kind of WAL record for it - either an FPI, or maybe a partial
page image containing just the parts that might have been changed
(like all the tuple headers, or whatever). This would be expensive,
of course.

> The only sensible way to handle this is to change the page format as
> discussed. IMHO the only sensible way that can happen is if we also
> support an online upgrade feature. I will take on the online upgrade
> feature if others work on the page format issues, but none of this is
> possible for 9.2, ISTM.

I'm not sure that I understand the dividing line you are drawing here.
However, with respect to the implementation of this particular
feature, it would be nice if we could arrange things so that space
cost of the feature need only be paid by people who are using it. I
think it would be regrettable if everyone had to give up 4 bytes per
page because some people want checksums. Maybe I'll feel differently
if it turns out that the overhead of turning on checksumming is
modest, but that's not what I'm expecting.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Aidan Van Dyk <aidan(at)highrise(dot)ca>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 14:14:09
Message-ID: 20111219141409.GA24234@tamriel.snowman.net
Lists: pgsql-hackers

* Aidan Van Dyk (aidan(at)highrise(dot)ca) wrote:
> But the scary part is you don't know how long *ago* the crash was.
> Because a hint-bit-only change w/ a torn-page is a "non event" in
> PostgreSQL *DESIGN*, on crash recovery, it doesn't do anything to try
> and "scrub" every page in the database.

Fair enough, but could we distinguish these two cases? In other words,
would it be possible to detect if a page was torn due to a 'traditional'
crash and not complain in that case, but complain if there's a CRC
failure and it *doesn't* look like a torn page?

Perhaps that's a stretch, but if we can figure out that a page is torn
already, then perhaps it's not so far-fetched..

Thanks,

Stephen
(who is no expert on WAL/torn pages/etc)


From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Aidan Van Dyk <aidan(at)highrise(dot)ca>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 14:18:21
Message-ID: 20111219141821.GB24234@tamriel.snowman.net
Lists: pgsql-hackers

* Aidan Van Dyk (aidan(at)highrise(dot)ca) wrote:
> #) Anybody investigated putting the CRC in a relation fork, but not
> right in the data block? If the CRC contains a timestamp, and is WAL
> logged before the write, at least on reading a block with a wrong
> checksum, if a warning is emitted, the timestamp could be looked at by
> whoever is reading the warning and know that the block was written
> shortly before the crash $X $PERIODS ago....

I do like the idea of putting the CRC info in a relation fork, if it can
be made to work decently, as we might be able to then support it on a
per-relation basis, and maybe even avoid the on-disk format change..

Of course, I'm sure there's all kinds of problems with that approach,
but it might be worth some thinking about.

Thanks,

Stephen


From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-19 14:33:22
Message-ID: 1324305084-sup-6213@alvh.no-ip.org
Lists: pgsql-hackers


Excerpts from Stephen Frost's message of Mon Dec 19 11:18:21 -0300 2011:
> * Aidan Van Dyk (aidan(at)highrise(dot)ca) wrote:
> > #) Anybody investigated putting the CRC in a relation fork, but not
> > right in the data block? If the CRC contains a timestamp, and is WAL
> > logged before the write, at least on reading a block with a wrong
> > checksum, if a warning is emitted, the timestamp could be looked at by
> > whoever is reading the warning and know that the block was written
> > shortly before the crash $X $PERIODS ago....
>
> I do like the idea of putting the CRC info in a relation fork, if it can
> be made to work decently, as we might be able to then support it on a
> per-relation basis, and maybe even avoid the on-disk format change..
>
> Of course, I'm sure there's all kinds of problems with that approach,
> but it might be worth some thinking about.

I think the main objection to that idea was that if you lose a single
page of CRCs you have hundreds of data pages which no longer have good
CRCs.

--
Álvaro Herrera <alvherre(at)commandprompt(dot)com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 14:34:51
Message-ID: CA+Tgmoa3fJanN9NtzryH1eA2kxBEd1j0_Yrh6KLSaiiMR0gf5Q@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 19, 2011 at 9:14 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> * Aidan Van Dyk (aidan(at)highrise(dot)ca) wrote:
>> But the scary part is you don't know how long *ago* the crash was.
>> Because a hint-bit-only change w/ a torn-page is a "non event" in
>> PostgreSQL *DESIGN*, on crash recovery, it doesn't do anything to try
>> and "scrub" every page in the database.
>
> Fair enough, but, could we distinguish these two cases?  In other words,
> would it be possible to detect if a page was torn due to a 'traditional'
> crash and not complain in that case, but complain if there's a CRC
> failure and it *doesn't* look like a torn page?

No.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: David Fetter <david(at)fetter(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 17:07:00
Message-ID: 20111219170700.GA29634@fetter.org
Lists: pgsql-hackers

On Mon, Dec 19, 2011 at 09:34:51AM -0500, Robert Haas wrote:
> On Mon, Dec 19, 2011 at 9:14 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > * Aidan Van Dyk (aidan(at)highrise(dot)ca) wrote:
> >> But the scary part is you don't know how long *ago* the crash was.
> >> Because a hint-bit-only change w/ a torn-page is a "non event" in
> >> PostgreSQL *DESIGN*, on crash recovery, it doesn't do anything to try
> >> and "scrub" every page in the database.
> >
> > Fair enough, but, could we distinguish these two cases?  In other words,
> > would it be possible to detect if a page was torn due to a 'traditional'
> > crash and not complain in that case, but complain if there's a CRC
> > failure and it *doesn't* look like a torn page?
>
> No.

Would you be so kind as to elucidate this a bit?

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>
Subject: Re: Page Checksums
Date: 2011-12-19 17:09:48
Message-ID: 201112191809.49013.andres@anarazel.de

On Monday, December 19, 2011 03:33:22 PM Alvaro Herrera wrote:
> Excerpts from Stephen Frost's message of lun dic 19 11:18:21 -0300 2011:
> > * Aidan Van Dyk (aidan(at)highrise(dot)ca) wrote:
> > > #) Anybody investigated putting the CRC in a relation fork, but not
> > > right in the data block? If the CRC contains a timestamp, and is WAL
> > > logged before the write, at least on reading a block with a wrong
> > > checksum, if a warning is emitted, the timestamp could be looked at by
> > > whoever is reading the warning and know tht the block was written
> > > shortly before the crash $X $PERIODS ago....
> >
> > I do like the idea of putting the CRC info in a relation fork, if it can
> > be made to work decently, as we might be able to then support it on a
> > per-relation basis, and maybe even avoid the on-disk format change..
> >
> > Of course, I'm sure there's all kinds of problems with that approach,
> > but it might be worth some thinking about.
>
> I think the main objection to that idea was that if you lose a single
> page of CRCs you have hundreds of data pages which no longer have good
> CRCs.
Which I find a pretty non-argument because there is lots of SPOF data in a
cluster (WAL, control record) anyway...
If recent data starts to fail you have to restore from backup anyway.

Andres


From: Stephen Frost <sfrost(at)snowman(dot)net>
To: David Fetter <david(at)fetter(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 17:10:37
Message-ID: 20111219171037.GE24234@tamriel.snowman.net

* David Fetter (david(at)fetter(dot)org) wrote:
> On Mon, Dec 19, 2011 at 09:34:51AM -0500, Robert Haas wrote:
> > On Mon, Dec 19, 2011 at 9:14 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > > Fair enough, but, could we distinguish these two cases?  In other words,
> > > would it be possible to detect if a page was torn due to a 'traditional'
> > > crash and not complain in that case, but complain if there's a CRC
> > > failure and it *doesn't* look like a torn page?
> >
> > No.
>
> Would you be so kind as to elucidate this a bit?

I'm guessing, based on some discussion on IRC, that it's because we
don't really 'detect' torn pages today, when it's due to a hint-bit-only
update. All the trouble around hint-bit updates, and whether or not
they're written out, makes me wish we could just avoid writing hint-bit
only updates to disk somehow.. Or log them when we do. Both of
those have their own drawbacks, of course.

Thanks,

Stephen


From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>
Subject: Re: Page Checksums
Date: 2011-12-19 17:13:50
Message-ID: 20111219171350.GF24234@tamriel.snowman.net

* Andres Freund (andres(at)anarazel(dot)de) wrote:
> On Monday, December 19, 2011 03:33:22 PM Alvaro Herrera wrote:
> > > I do like the idea of putting the CRC info in a relation fork, if it can
> > > be made to work decently, as we might be able to then support it on a
> > > per-relation basis, and maybe even avoid the on-disk format change..
> > >
> > I think the main objection to that idea was that if you lose a single
> > page of CRCs you have hundreds of data pages which no longer have good
> > CRCs.
> Which I find a pretty non-argument because there is lots of SPOF data in a
> cluster (WAL, control record) anyway...
> If recent data starts to fail you have to restore from backup anyway.

I agree with Andres on this one.. Also, if we CRC the pages in the CRC
fork itself, hopefully we'd be able to detect when a bad block impacted
the CRC fork and differentiate that from a whole slew of bad blocks in
the heap..

There might be an issue there with handling locking and having to go
through the page-level lock on the CRC, which locks a lot more pages in
the heap and therefore reduces scalability.. Don't we have a similar
issue with the visibility map though?

Thanks,

Stephen


From: Greg Smith <greg(at)2ndQuadrant(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 18:46:25
Message-ID: 4EEF8681.8020903@2ndQuadrant.com

On 12/19/2011 07:50 AM, Robert Haas wrote:
> On Mon, Dec 19, 2011 at 6:10 AM, Simon Riggs<simon(at)2ndquadrant(dot)com> wrote:
>> The only sensible way to handle this is to change the page format as
>> discussed. IMHO the only sensible way that can happen is if we also
>> support an online upgrade feature. I will take on the online upgrade
>> feature if others work on the page format issues, but none of this is
>> possible for 9.2, ISTM.
> I'm not sure that I understand the dividing line you are drawing here.

There are three likely steps to reaching checksums:

1) Build a checksum mechanism into the database. This is the
straightforward part that multiple people have now done.

2) Rework hint bits to make the torn page problem go away. Checksums go
elsewhere? More WAL logging to eliminate the bad situations? Eliminate
some types of hint bit writes? It seems every alternative has
trade-offs that will require serious performance testing to really validate.

3) Finally tackle in-place upgrades that include a page format change.
One basic mechanism was already outlined: a page converter that knows
how to handle two page formats, some metadata to track which pages have
been converted, a daemon to do background conversions. Simon has some
new ideas here too ("online upgrade" involves two clusters kept in sync
on different versions, slightly different concept than the current
"in-place upgrade"). My recollection is that the in-place page upgrade
work was pushed out of the critical path before due to lack of immediate
need. It wasn't necessary until a) a working catalog upgrade tool was
validated and b) a bite-size feature change to test it on appeared. We
have (a) now in pg_upgrade, and CRCs could be (b)--if the hint bit
issues are sorted first.

What Simon was saying is that he's got some interest in (3), but wants
no part of (2).

I don't know how much time each of these will take. I would expect that
(2) and (3) have similar scopes though--many days, possibly a few
months, of work--which means they both dwarf (1). The part that's been
done is the visible tip of a mostly underwater iceberg.

--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: David Fetter <david(at)fetter(dot)org>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 19:27:08
Message-ID: CA+TgmoZhSKAP-TN6N2ahe-+zfZn_L-T_ykVOekyuCU_Z2Kh+=Q@mail.gmail.com

On Mon, Dec 19, 2011 at 12:07 PM, David Fetter <david(at)fetter(dot)org> wrote:
> On Mon, Dec 19, 2011 at 09:34:51AM -0500, Robert Haas wrote:
>> On Mon, Dec 19, 2011 at 9:14 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>> > * Aidan Van Dyk (aidan(at)highrise(dot)ca) wrote:
>> >> But the scary part is you don't know how long *ago* the crash was.
>> >> Because a hint-bit-only change w/ a torn-page is a "non event" in
>> >> PostgreSQL *DESIGN*, on crash recovery, it doesn't do anything to try
>> >> and "scrub" every page in the database.
>> >
>> > Fair enough, but, could we distinguish these two cases?  In other words,
>> > would it be possible to detect if a page was torn due to a 'traditional'
>> > crash and not complain in that case, but complain if there's a CRC
>> > failure and it *doesn't* look like a torn page?
>>
>> No.
>
> Would you be so kind as to elucidate this a bit?

Well, basically, Stephen's proposal was pure hand-waving. :-)

I don't know of any magic trick that would allow us to know whether a
CRC failure "looks like a torn page". The only information we're
going to get is the knowledge of whether the CRC matches or not. If
it doesn't, it's fundamentally impossible for us to know why. We know
the page contents are not as expected - that's it!

It's been proposed before that we could examine the page, consider all
the unset hint bits that could be set, and try all combinations of
setting and clearing them to see whether any of them produce a valid
CRC. But, as Tom has pointed out previously, that has a really quite
large chance of making a page that's *actually* been corrupted look
OK. If you have 30 or so unset hint bits, odds are very good that
some combination will produce the 32-bit CRC you're expecting.
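The arithmetic behind that risk can be sketched. Treating each hint-bit combination's CRC as an independent uniformly random 32-bit value (a standard approximation, not an exact model of CRC behavior), the chance that some combination of n unset hint bits matches the expected checksum on a genuinely corrupted page is roughly:

```python
import math

def false_match_probability(unset_hint_bits: int, crc_bits: int = 32) -> float:
    """Chance that at least one of the 2^n hint-bit combinations of a
    genuinely corrupted page happens to produce the expected CRC,
    modelling each candidate's CRC as uniformly random."""
    candidates = 2 ** unset_hint_bits
    return 1.0 - math.exp(-candidates / 2 ** crc_bits)

# With ~30 unset hint bits there is roughly a 1-in-5 chance that real
# corruption is masked by some combination; with more than 32 bits the
# search space exceeds the CRC space and a match is near certain.
print(f"{false_match_probability(30):.3f}")
print(f"{false_match_probability(34):.3f}")
```

This is why the try-all-combinations trick reads as dangerous rather than clever: the false-acceptance rate is far too high to trust.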

To put this another way, we currently WAL-log just about everything.
We get away with NOT WAL-logging some things when we don't care about
whether they make it to disk. Hint bits, killed index tuple pointers,
etc. cause no harm if they don't get written out, even if some other
portion of the same page does get written out. But as soon as you CRC
the whole page, now absolutely every single bit on that page becomes
critical data which CANNOT be lost. IOW, it now requires the same
sort of protection that we already need for our other critical updates
- i.e. WAL logging. Or you could introduce some completely new
mechanism that serves the same purpose, like MySQL's double-write
buffer.
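For illustration, a minimal single-page sketch of the double-write idea mentioned above (real implementations such as MySQL's batch many pages and checksum the scratch records; the file layout here is made up):

```python
import os

PAGE = 8192

def double_write(f, dw, pageno: int, data: bytes) -> None:
    """Double-write sketch: persist the page image to a scratch area
    first and fsync it, then write it in place. A torn in-place write
    can then always be repaired from the intact scratch copy during
    crash recovery."""
    assert len(data) == PAGE
    record = pageno.to_bytes(8, "little") + data
    dw.seek(0)
    dw.write(record)                    # 1. sequential write to scratch area
    dw.flush(); os.fsync(dw.fileno())   #    durable before the real write
    f.seek(pageno * PAGE)
    f.write(data)                       # 2. in-place write; a tear here is
    f.flush(); os.fsync(f.fileno())     #    recoverable from the scratch copy
```

The point is that the scratch write substitutes for the full-page image in WAL: either copy of the page survives a crash intact.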

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-19 20:18:02
Message-ID: 4EEF9BFA.9000308@enterprisedb.com

On 19.12.2011 21:27, Robert Haas wrote:
> To put this another way, we currently WAL-log just about everything.
> We get away with NOT WAL-logging some things when we don't care about
> whether they make it to disk. Hint bits, killed index tuple pointers,
> etc. cause no harm if they don't get written out, even if some other
> portion of the same page does get written out. But as soon as you CRC
> the whole page, now absolutely every single bit on that page becomes
> critical data which CANNOT be lost. IOW, it now requires the same
> sort of protection that we already need for our other critical updates
> - i.e. WAL logging. Or you could introduce some completely new
> mechanism that serves the same purpose, like MySQL's double-write
> buffer.

Double-writes would be a useful option also to reduce the size of WAL
that needs to be shipped in replication.

Or you could just use a filesystem that does CRCs...

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>, greg <greg(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-20 17:44:48
Message-ID: CA+U5nM+iq6w9+TQU7NUP5LyOzoJDppYUn61YQaerr5DY99+fNg@mail.gmail.com

On Mon, Dec 19, 2011 at 11:10 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:

> The only sensible way to handle this is to change the page format as
> discussed. IMHO the only sensible way that can happen is if we also
> support an online upgrade feature. I will take on the online upgrade
> feature if others work on the page format issues, but none of this is
> possible for 9.2, ISTM.

I've had another look at this just to make sure.

Doing this for 9.2 will change the page format, causing every user to
do an unload/reload, with no provided mechanism to do that, whether or
not they use this feature.

If we do that, the hints are all in the wrong places, meaning any hint
set will need to change the CRC.

Currently, setting hints can be done while holding a share lock on the
buffer. Preventing that would require us to change the way buffer
manager works to make it take an exclusive lock while writing out,
since a hint would change the CRC and so allowing hints to be set
while we write out would cause invalid CRCs. So we would need to hold
exclusive lock on buffers while we calculate CRCs.

Overall, this will cause a much bigger performance hit than we planned
for. But then we have SSI as an option, so why not this?

So, do we have enough people in the house that are willing to back
this idea, even with a severe performance hit? Are we willing to
change the page format now, with plans to change it again in the
future? Are we willing to change the page format for a feature many
people will need to disable anyway? Do we have people willing to spend
time measuring the performance in enough cases to allow educated
debate?

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, greg <greg(at)2ndquadrant(dot)com>
Subject: Re: Page Checksums
Date: 2011-12-20 18:25:24
Message-ID: 201112201925.24652.andres@anarazel.de

On Tuesday, December 20, 2011 06:44:48 PM Simon Riggs wrote:
> Currently, setting hints can be done while holding a share lock on the
> buffer. Preventing that would require us to change the way buffer
> manager works to make it take an exclusive lock while writing out,
> since a hint would change the CRC and so allowing hints to be set
> while we write out would cause invalid CRCs. So we would need to hold
> exclusive lock on buffers while we calculate CRCs.
While hint bits are a problem, that specific problem is actually handled by
copying the buffer into a separate buffer and calculating the CRC on that copy.
Given that we already rely on the flags being readable consistently from the
individual backends, that's fine.
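In Python terms, the copy-then-checksum dance looks roughly like this (the CRC-32 algorithm and a 4-byte checksum field at offset 0 are illustrative assumptions, not what the patch actually uses):

```python
import zlib

PAGE_SIZE = 8192
CHECKSUM_OFF = 0  # hypothetical location of the checksum field

def prepare_page_for_write(shared_page: bytearray) -> bytes:
    """Take a point-in-time snapshot of the shared buffer, compute the
    CRC over the snapshot (with the checksum field zeroed), stamp it,
    and return the immutable image handed to the kernel."""
    copy = bytearray(shared_page)                      # snapshot
    copy[CHECKSUM_OFF:CHECKSUM_OFF + 4] = b"\x00" * 4
    crc = zlib.crc32(bytes(copy))
    copy[CHECKSUM_OFF:CHECKSUM_OFF + 4] = crc.to_bytes(4, "little")
    return bytes(copy)

page = bytearray(PAGE_SIZE)
image = prepare_page_for_write(page)
page[100] |= 0x01   # a concurrent hint-bit flip on the shared buffer...
# ...cannot invalidate the checksum already embedded in `image`.
```

Because the CRC is computed over a private copy, concurrent hint-bit setters holding only a share lock can keep scribbling on the shared buffer without producing an inconsistent checksum in the written image.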

Andres


From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-20 18:39:53
Message-ID: 4EF0D679.2040607@krogh.cc

On 2011-12-20 18:44, Simon Riggs wrote:
> On Mon, Dec 19, 2011 at 11:10 AM, Simon Riggs<simon(at)2ndquadrant(dot)com> wrote:
>
>> The only sensible way to handle this is to change the page format as
>> discussed. IMHO the only sensible way that can happen is if we also
>> support an online upgrade feature. I will take on the online upgrade
>> feature if others work on the page format issues, but none of this is
>> possible for 9.2, ISTM.
> I've had another look at this just to make sure.
>
> Doing this for 9.2 will change the page format, causing every user to
> do an unload/reload, with no provided mechanism to do that, whether or
> not they use this feature.

How about only calculating the checksum and setting it in the bgwriter,
just before flying the buffer off to disk?

Perhaps even let autovacuum do the same if it flushes pages to disk as
part of the process.

If someone comes along and sets a hint bit, changes data, etc., its only
job is to clear the checksum to a value meaning "we don't have a
checksum for this page".

Unless the bgwriter becomes bottlenecked by doing it, the impact on
"foreground" work should be fairly limited.
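A toy model of that invalidate-on-modify scheme (the sentinel value and method names are made up for illustration): any writer merely clears the checksum; only the flusher pays for computing it.

```python
import zlib

NO_CHECKSUM = 0  # hypothetical sentinel: "no checksum for this page"

class Page:
    def __init__(self, data: bytes):
        self.data = bytearray(data)
        self.checksum = NO_CHECKSUM

    def modify(self, off: int, value: int) -> None:
        self.data[off] = value
        self.checksum = NO_CHECKSUM        # writers just invalidate

    def bgwriter_flush(self) -> bytes:
        # only the flusher computes; `or 1` keeps 0 free as the sentinel
        self.checksum = zlib.crc32(bytes(self.data)) or 1
        return bytes(self.data)            # written with checksum stamped

    def verify(self) -> bool:
        if self.checksum == NO_CHECKSUM:
            return True                    # nothing to check against
        return self.checksum == (zlib.crc32(bytes(self.data)) or 1)
```

The obvious cost is that pages modified after the last flush carry no protection at all until the next flush, which is exactly the trade-off being debated here.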

Jesper .. just throwing in random thoughts ..
--
Jesper


From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-20 18:44:53
Message-ID: 4EF0D7A5.1040100@krogh.cc

On 2011-12-19 02:55, Greg Stark wrote:
> On Sun, Dec 18, 2011 at 7:51 PM, Jesper Krogh<jesper(at)krogh(dot)cc> wrote:
>> I dont know if it would be seen as a "half baked feature".. or similar,
>> and I dont know if the hint bit problem is solvable at all, but I could
>> easily imagine checksumming just "skipping" the hit bit entirely.
> That was one approach discussed. The problem is that the hint bits are
> currently in each heap tuple header which means the checksum code
> would have to know a fair bit about the structure of the page format.
> Also the closer people looked the more hint bits kept turning up
> because the coding pattern had been copied to other places (the page
> header has one, and index pointers have a hint bit indicating that the
> target tuple is deleted, etc). And to make matters worse skipping
> individual bits in varying places quickly becomes a big consumer of
> cpu time since it means injecting logic into each iteration of the
> checksum loop to mask out the bits.
I do know it is a valid and really relevant point (the CPU time spent),
but here in late 2011 it is a really irritating limitation: if there is
any resource I have plenty of in the production environment, it is CPU
time, just not on the "single core currently serving the client".

Jesper
--
Jesper


From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-27 18:39:36
Message-ID: 1325011176.14697.32.camel@jdavis

On Mon, 2011-12-19 at 07:50 -0500, Robert Haas wrote:
> I
> think it would be regrettable if everyone had to give up 4 bytes per
> page because some people want checksums.

I can understand that some people might not want the CPU expense of
calculating CRCs; or the upgrade expense to convert to new pages; but do
you think 4 bytes out of 8192 is a real concern?

(Aside: it would be MAXALIGNed anyway, so probably more like 8 bytes.)

I was thinking we'd go in the other direction: expanding the header
would take so much effort, why not expand it a little more to give some
breathing room for the future?

Regards,
Jeff Davis


From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: Jesper Krogh <jesper(at)krogh(dot)cc>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, David Fetter <david(at)fetter(dot)org>, PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Page Checksums
Date: 2011-12-27 18:46:24
Message-ID: 1325011584.14697.37.camel@jdavis

On Mon, 2011-12-19 at 01:55 +0000, Greg Stark wrote:
> On Sun, Dec 18, 2011 at 7:51 PM, Jesper Krogh <jesper(at)krogh(dot)cc> wrote:
> > I dont know if it would be seen as a "half baked feature".. or similar,
> > and I dont know if the hint bit problem is solvable at all, but I could
>> easily imagine checksumming just "skipping" the hint bits entirely.
>
> That was one approach discussed. The problem is that the hint bits are
> currently in each heap tuple header which means the checksum code
> would have to know a fair bit about the structure of the page format.

Which is actually a bigger problem, because it might not be the backend
that's reading the page. It might be your backup script taking a new
base backup.

The kind of person to care about CRCs would also want the base backup
tool to verify them during the copy so that you don't overwrite your
previous (good) backup with a bad one. The more complicated we make the
verification process, the less workable that becomes.

I vote for a simple way to calculate the checksum -- fixed offsets of
each page (of course, it would need to know the page size), and a
standard checksum algorithm.
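As a sketch of what such an external verifier could look like (CRC-32 and a 4-byte field at offset 0 are assumptions for illustration, not the actual proposed layout):

```python
import zlib

def verify_page(image: bytes, page_size: int = 8192,
                checksum_off: int = 0) -> bool:
    """Verify a page using only fixed offsets and a standard algorithm --
    no knowledge of tuple headers, hint bits, or item pointers needed."""
    if len(image) != page_size:
        return False
    stored = int.from_bytes(image[checksum_off:checksum_off + 4], "little")
    scratch = bytearray(image)
    scratch[checksum_off:checksum_off + 4] = b"\x00" * 4  # field excluded
    return stored == zlib.crc32(bytes(scratch))
```

A base-backup script could call something like this on every 8 kB block as it streams, and refuse to overwrite the previous (good) backup on failure. Any verification rule more complicated than "fixed offsets plus a standard algorithm" makes that kind of tooling much harder to write.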

Regards,
Jeff Davis


From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-27 18:54:40
Message-ID: 1325012080.11655.5.camel@jdavis

On Mon, 2011-12-19 at 22:18 +0200, Heikki Linnakangas wrote:
> Or you could just use a filesystem that does CRCs...

That just moves the problem. Correct me if I'm wrong, but I don't think
there's anything special that the filesystem can do that we can't.

The filesystems that support CRCs are more like ZFS than ext3. They do
all writes to a new location, thus fragmenting the files. That may be a
good trade-off for some people, but it's not free.

Regards,
Jeff Davis


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-28 09:00:02
Message-ID: CA+TgmoYp4+7UNS-FeSWejy3ZT1rjuRkgqtLVCz0J0zTVUNRQhw@mail.gmail.com

On Tue, Dec 27, 2011 at 1:39 PM, Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
> On Mon, 2011-12-19 at 07:50 -0500, Robert Haas wrote:
>> I
>> think it would be regrettable if everyone had to give up 4 bytes per
>> page because some people want checksums.
>
> I can understand that some people might not want the CPU expense of
> calculating CRCs; or the upgrade expense to convert to new pages; but do
> you think 4 bytes out of 8192 is a real concern?
>
> (Aside: it would be MAXALIGNed anyway, so probably more like 8 bytes.)

Yeah, I do. Our on-disk footprint is already significantly greater
than that of some other systems, and IMHO we should be looking for a
way to shrink our overhead in that area, not make it bigger.
Admittedly, most of the fat is probably in the tuple header rather
than the page header, but at any rate I don't consider burning up 1%
of our available storage space to be a negligible overhead. I'm not
sure I believe it should need to be MAXALIGN'd, since it is followed
by item pointers which IIRC only need 2-byte alignment, but then again
Heikki also recently proposed adding 4 bytes per page to allow each
page to track its XID generation, to help mitigate the need for
anti-wraparound vacuuming.

I think Simon's approach of stealing the 16-bit page version field is
reasonably clever in this regard, although I also understand why Tom
objects to it, and I certainly agree with him that we need to be
careful not to back ourselves into a corner. What I'm not too clear
about is whether a 16-bit checksum meets the needs of people who want
checksums. If we assume that flaky hardware is going to corrupt pages
steadily over time, then it seems like it might be adequate, because
in the unlikely event that the first corrupted page happens to still
pass its checksum test, well, another will come along and we'll
probably spot the problem then, likely well before any significant
fraction of the data gets eaten. But I'm not sure whether that's the
right mental model. I, and I think some others, initially assumed
we'd want a 32-bit checksum, but I'm not sure I can justify that
beyond "well, I think that's what people usually do". It could be
that even if we add new page header space for the checksum (as opposed
to stuffing it into the page version field) we still want to add only
2 bytes. Not sure...
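That mental model can be put in rough numbers. Modelling a corrupted page's checksum as uniformly random (the usual back-of-the-envelope assumption):

```python
def undetected_fraction(checksum_bits: int) -> float:
    """Fraction of random corruptions a checksum of this width misses,
    modelling a corrupted page's checksum as uniformly random."""
    return 2.0 ** -checksum_bits

def prob_all_missed(checksum_bits: int, corrupted_pages: int) -> float:
    """Chance that *every* one of several independently corrupted pages
    slips past the check -- the steady-corruption scenario above."""
    return undetected_fraction(checksum_bits) ** corrupted_pages

# 16-bit: ~1 in 65,536 corruptions missed. Two independent misses in a
# row are already as unlikely as a single 32-bit miss (~1 in 4 billion).
```

Under the steady-corruption assumption, even a 16-bit checksum flags the problem almost immediately; the 32-bit width only matters if a single undetected bad page is itself unacceptable.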

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-28 09:31:33
Message-ID: CA+U5nM+7G_1sy7F+g9HyNChVOHmX6jNiZoXpf1LLU=5pnoJffA@mail.gmail.com

On Wed, Dec 28, 2011 at 9:00 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> What I'm not too clear
> about is whether a 16-bit checksum meets the needs of people who want
> checksums.

We need this now, hence the gymnastics to get it into this release.

16-bits of checksum is way better than zero bits of checksum, probably
about a million times better (numbers taken from papers quoted earlier
on effectiveness of checksums).

The strategy I am suggesting is 16-bits now, 32/64 later.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2011-12-28 17:27:25
Message-ID: 4EFB517D.1040001@enterprisedb.com

On 28.12.2011 11:00, Robert Haas wrote:
> Admittedly, most of the fat is probably in the tuple header rather
> than the page header, but at any rate I don't consider burning up 1%
> of our available storage space to be a negligible overhead.

8 / 8192 = 0.1%.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Jim Nasby <jim(at)nasby(dot)net>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Jeff Davis <pgsql(at)j-davis(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Page Checksums
Date: 2012-01-04 00:22:26
Message-ID: 68ED2664-C1E5-435E-977C-F6CD7CD72E95@nasby.net

On Dec 28, 2011, at 3:31 AM, Simon Riggs wrote:
> On Wed, Dec 28, 2011 at 9:00 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
>> What I'm not too clear
>> about is whether a 16-bit checksum meets the needs of people who want
>> checksums.
>
> We need this now, hence the gymnastics to get it into this release.
>
> 16-bits of checksum is way better than zero bits of checksum, probably
> about a million times better (numbers taken from papers quoted earlier
> on effectiveness of checksums).
>
> The strategy I am suggesting is 16-bits now, 32/64 later.

What about allowing for an initdb option? That means that if you want binary compatibility so you can pg_upgrade, then you're stuck with 16-bit checksums. If you can tolerate replicating all your data, then you can get more robust checksumming.

In either case, it seems that we're quickly approaching the point where we need to start putting resources into binary page upgrading...
--
Jim C. Nasby, Database Architect jim(at)nasby(dot)net
512.569.9461 (cell) http://jim.nasby.net


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndQuadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-08 23:25:05
Message-ID: CA+U5nMKjXgfbxxvjU0t7NxAJXV6KXO9boQF0tbmAEnpSqXO8dg@mail.gmail.com

On Mon, Dec 19, 2011 at 8:18 PM, Heikki Linnakangas
<heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:

> Double-writes would be a useful option also to reduce the size of WAL that
> needs to be shipped in replication.
>
> Or you could just use a filesystem that does CRCs...

Double writes would reduce the size of WAL and we discussed many times
we want that.

Using a filesystem that does CRCs is basically saying "let the
filesystem cope". If that is an option, why not just turn full page
writes off and let the filesystem cope?

Do we really need double writes or even checksums in Postgres? What
use case are we covering that isn't covered by using the right
filesystem for the job? Or is that the problem? Are we implementing a
feature we needed 5 years ago but don't need now? Yes, other databases
have some of these features, but do we need them? Do we still need
them now?

Tell me we really need some or all of this and I will do my best to
make it happen.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


From: Jim Nasby <jim(at)nasby(dot)net>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndQuadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-10 00:12:06
Message-ID: B81291FE-2ABD-4208-81FB-D5C65581D22A@nasby.net

On Jan 8, 2012, at 5:25 PM, Simon Riggs wrote:
> On Mon, Dec 19, 2011 at 8:18 PM, Heikki Linnakangas
> <heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
>
>> Double-writes would be a useful option also to reduce the size of WAL that
>> needs to be shipped in replication.
>>
>> Or you could just use a filesystem that does CRCs...
>
> Double writes would reduce the size of WAL and we discussed many times
> we want that.
>
> Using a filesystem that does CRCs is basically saying "let the
> filesystem cope". If that is an option, why not just turn full page
> writes off and let the filesystem cope?

I don't think that just because a filesystem does CRCs you can't have a torn write.

Filesystem CRCs very likely will not cover data that's sitting in the cache. For some users, that's a huge amount of data to leave unprotected.

Filesystem bugs do happen... though presumably most of those would be caught by the filesystem's CRC check... but you never know!
--
Jim C. Nasby, Database Architect jim(at)nasby(dot)net
512.569.9461 (cell) http://jim.nasby.net


From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Jim Nasby <jim(at)nasby(dot)net>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndQuadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-10 08:04:06
Message-ID: 4F0BF0F6.1080704@enterprisedb.com
Lists: pgsql-hackers

On 10.01.2012 02:12, Jim Nasby wrote:
> Filesystem CRCs very likely will not happen to data that's in the cache. For some users, that's a huge amount of data to leave un-protected.

You can repeat that argument ad infinitum. Even if the CRC covers all
the pages in the OS buffer cache, it still doesn't cover the pages in
the shared_buffers, CPU caches, in-transit from one memory bank to
another etc. You have to draw the line somewhere, and it seems
reasonable to draw it where the data moves between long-term storage,
i.e. disk, and RAM.
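
As a toy illustration of drawing the line at that boundary (plain
Python rather than PostgreSQL code; the helper names, 8 kB page size,
and CRC choice are only for demonstration):

```python
import zlib

PAGE_SIZE = 8192

def write_page(storage, page_no, data):
    # Compute the checksum just before the page crosses the RAM -> disk
    # boundary, and store it alongside the payload.
    crc = zlib.crc32(data)
    storage[page_no] = (crc, bytes(data))

def read_page(storage, page_no):
    # Verify on the way back in; a mismatch means the page was damaged
    # somewhere below this boundary (disk, controller, cables, ...).
    crc, data = storage[page_no]
    if zlib.crc32(data) != crc:
        raise IOError("checksum failure on page %d" % page_no)
    return data

storage = {}
write_page(storage, 0, b"\x00" * PAGE_SIZE)
assert read_page(storage, 0) == b"\x00" * PAGE_SIZE

# Simulate a bit flip below the boundary: the next read fails loudly
# instead of silently returning corrupt data.
crc, data = storage[0]
storage[0] = (crc, b"\x01" + data[1:])
try:
    read_page(storage, 0)
    corruption_detected = False
except IOError:
    corruption_detected = True
assert corruption_detected
```

Anything that goes wrong above this boundary (in shared_buffers, CPU
caches, etc.) is, by construction, invisible to such a check.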

> Filesystem bugs do happen... though presumably most of those would be caught by the filesystem's CRC check... but you never know!

Yeah. At some point we have to just have faith in the underlying system.
It's reasonable to provide protection or make recovery easier from bugs
or hardware faults that happen fairly often in the real world, but a
can't-trust-no-one attitude is not very helpful.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: Jim Nasby <jim(at)nasby(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndquadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-10 09:07:37
Message-ID: CA+U5nM+dJSj16qPDihTxJPk7riJiiP99sAmgab=eMURgB31LBA@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jan 10, 2012 at 8:04 AM, Heikki Linnakangas
<heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
> On 10.01.2012 02:12, Jim Nasby wrote:
>>
>> Filesystem CRCs very likely will not happen to data that's in the cache.
>> For some users, that's a huge amount of data to leave un-protected.
>
>
> You can repeat that argument ad infinitum. Even if the CRC covers all the
> pages in the OS buffer cache, it still doesn't cover the pages in the
> shared_buffers, CPU caches, in-transit from one memory bank to another etc.
> You have to draw the line somewhere, and it seems reasonable to draw it
> where the data moves between long-term storage, ie. disk, and RAM.

We protect each change with a CRC when we write WAL, so doing the same
thing doesn't sound entirely unreasonable, especially if your database
fits in RAM and we aren't likely to be doing I/O anytime soon. The
long term storage argument may no longer apply in a world with very
large memory.

The question is, when exactly would we check the checksum? When we
lock the block, when we pin it? We certainly can't do it on every
access to the block since we don't even track where that happens in
the code.

I think we could add an option to check the checksum immediately after
we pin a block for the first time, but it would be very expensive and
sounds like we're re-inventing hardware or OS features again. Assume a
50% performance drain as a working estimate.
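
A minimal sketch of that option, verifying only when a buffer's pin
count rises from zero (illustrative Python, not the actual buffer
manager; the class and function names are invented):

```python
import zlib

class Buffer:
    def __init__(self, data, crc):
        self.data = data
        self.crc = crc
        self.pin_count = 0
        self.checks = 0   # how many verifications have run

def pin(buf):
    # Verify only on the 0 -> 1 transition, i.e. the first pin after the
    # buffer became unreferenced; re-pins of an already-pinned buffer
    # skip the check, since the data cannot have been evicted meanwhile.
    if buf.pin_count == 0:
        buf.checks += 1
        if zlib.crc32(buf.data) != buf.crc:
            raise IOError("checksum failure in shared buffer")
    buf.pin_count += 1

def unpin(buf):
    buf.pin_count -= 1

page = b"\x00" * 8192
buf = Buffer(page, zlib.crc32(page))
pin(buf); pin(buf)          # second pin skips the check
assert buf.checks == 1
unpin(buf); unpin(buf)
pin(buf)                    # pin count dropped to zero, so verify again
assert buf.checks == 2
```

Even with the 0 -> 1 optimisation, a hot buffer that is pinned and
unpinned frequently gets re-hashed frequently, which is where the large
estimated cost comes from.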

That is a level of protection no other DBMS offers, so that is either
an advantage or a warning. Jim, if you want this, please do the
research and work out what the probability of losing shared buffer
data in your ECC RAM really is, so we are doing it for quantifiable
reasons (via the old Google academic paper on memory errors), and to
verify that the cost/benefit means you would actually use it if we
built it. Research into requirements is at least as important and
time-consuming as research on possible designs.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


From: Benedikt Grundmann <bgrundmann(at)janestreet(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Jim Nasby <jim(at)nasby(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndquadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-10 09:25:42
Message-ID: 20120110092542.GJ6419@ldn-qws-004.delacy.com
Lists: pgsql-hackers

On 10/01/12 09:07, Simon Riggs wrote:
> > You can repeat that argument ad infinitum. Even if the CRC covers all the
> > pages in the OS buffer cache, it still doesn't cover the pages in the
> > shared_buffers, CPU caches, in-transit from one memory bank to another etc.
> > You have to draw the line somewhere, and it seems reasonable to draw it
> > where the data moves between long-term storage, ie. disk, and RAM.
>
> We protect each change with a CRC when we write WAL, so doing the same
> thing doesn't sound entirely unreasonable, especially if your database
> fits in RAM and we aren't likely to be doing I/O anytime soon. The
> long term storage argument may no longer apply in a world with very
> large memory.
>
I'm not so sure about that. The experience we have is that storage
and memory don't grow as fast as demand. Maybe we are in a minority,
but at Jane Street memory size < database size is sadly true for most
of the important databases.

Concretely, the two most important databases are

715 GB

and

473 GB

in size (the second used to be much closer to the first one in size but
we recently archived a lot of data).

In both databases there is a small set of tables that use the majority of
the disk space. Those tables are also the most used tables. Typically
the size of one of those tables is between 1 and 3 times the size of
memory, and the cumulative size of all indices on such a table is
normally roughly the same as the table itself.

Cheers,

Bene


From: Jim Nasby <jim(at)nasby(dot)net>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndquadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-21 23:12:05
Message-ID: 3EDE42FB-3C5B-406F-B7A5-16253B455C76@nasby.net
Lists: pgsql-hackers

On Jan 10, 2012, at 3:07 AM, Simon Riggs wrote:
> I think we could add an option to check the checksum immediately after
> we pin a block for the first time but it would be very expensive and
> sounds like we're re-inventing hardware or OS features again. Work on
> 50% performance drain, as an estimate.
>
> That is a level of protection no other DBMS offers, so that is either
> an advantage or a warning. Jim, if you want this, please do the
> research and work out what the probability of losing shared buffer
> data in your ECC RAM really is so we are doing it for quantifiable
> reasons (via old Google memory academic paper) and to verify that the
> cost/benefit means you would actually use it if we built it. Research
> into requirements is at least as important and time consuming as
> research on possible designs.

Maybe I'm just dense, but it wasn't clear to me how you could use the information in the Google paper to extrapolate data corruption probability.

I can say this: we have seen corruption from bad memory, and our Postgres buffer pool (8G) is FAR smaller than available memory on all of our servers (192G or 512G). So at least in our case, CRCs that protect the filesystem cache would protect the vast majority of our memory (96% or 98.5%).
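
The percentages can be sanity-checked in a few lines (the host sizes
are the ones given above; the figures are approximate):

```python
# An 8 GB shared buffer pool on hosts with 192 GB or 512 GB of RAM:
# how much memory lies outside the buffer pool (and so could be covered
# by a filesystem-cache checksum)?
buffer_pool_gb = 8
shares = {}
for total_gb in (192, 512):
    shares[total_gb] = (total_gb - buffer_pool_gb) / total_gb
    print("%d GB host: %.1f%% of memory outside the buffer pool"
          % (total_gb, 100 * shares[total_gb]))
# about 95.8% and 98.4%, i.e. roughly the rounded 96% / 98.5% quoted above
```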
--
Jim C. Nasby, Database Architect jim(at)nasby(dot)net
512.569.9461 (cell) http://jim.nasby.net


From: Robert Treat <rob(at)xzilla(dot)net>
To: Jim Nasby <jim(at)nasby(dot)net>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndquadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-24 05:34:28
Message-ID: CABV9wwP=ignGoD=yQb6JztZ2Kc8+92gHCp_ur5SmGe4z7scWKg@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jan 21, 2012 at 6:12 PM, Jim Nasby <jim(at)nasby(dot)net> wrote:
> On Jan 10, 2012, at 3:07 AM, Simon Riggs wrote:
>> I think we could add an option to check the checksum immediately after
>> we pin a block for the first time but it would be very expensive and
>> sounds like we're re-inventing hardware or OS features again. Work on
>> 50% performance drain, as an estimate.
>>
>> That is a level of protection no other DBMS offers, so that is either
>> an advantage or a warning. Jim, if you want this, please do the
>> research and work out what the probability of losing shared buffer
>> data in your ECC RAM really is so we are doing it for quantifiable
>> reasons (via old Google memory academic paper) and to verify that the
>> cost/benefit means you would actually use it if we built it. Research
>> into requirements is at least as important and time consuming as
>> research on possible designs.
>
> Maybe I'm just dense, but it wasn't clear to me how you could use the information in the google paper to extrapolate data corruption probability.
>
> I can say this: we have seen corruption from bad memory, and our Postgres buffer pool (8G) is FAR smaller than
> available memory on all of our servers (192G or 512G). So at least in our case, CRCs that protect the filesystem
> cache would protect the vast majority of our memory (96% or 98.5%).

Would it be unfair to assert that people who want checksums but aren't
willing to pay the cost of running a filesystem that provides
checksums aren't going to be willing to make the cost/benefit trade
off that will be asked for? Yes, it is unfair of course, but it's
interesting how small the camp of those using checksummed filesystems
is.

Robert Treat
conjecture: xzilla.net
consulting: omniti.com


From: Florian Weimer <fweimer(at)bfk(dot)de>
To: Robert Treat <rob(at)xzilla(dot)net>
Cc: Jim Nasby <jim(at)nasby(dot)net>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndquadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-24 07:57:21
Message-ID: 82fwf5353y.fsf@mid.bfk.de
Lists: pgsql-hackers

* Robert Treat:

> Would it be unfair to assert that people who want checksums but aren't
> willing to pay the cost of running a filesystem that provides
> checksums aren't going to be willing to make the cost/benefit trade
> off that will be asked for? Yes, it is unfair of course, but it's
> interesting how small the camp of those using checksummed filesystems
> is.

Don't checksumming file systems currently come bundled with other
features you might not want (such as certain vendors)?

--
Florian Weimer <fweimer(at)bfk(dot)de>
BFK edv-consulting GmbH http://www.bfk.de/
Kriegsstraße 100 tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99


From: jesper(at)krogh(dot)cc
To: "Florian Weimer" <fweimer(at)bfk(dot)de>
Cc: "Robert Treat" <rob(at)xzilla(dot)net>, "Jim Nasby" <jim(at)nasby(dot)net>, "Simon Riggs" <simon(at)2ndquadrant(dot)com>, "Heikki Linnakangas" <heikki(dot)linnakangas(at)enterprisedb(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, "David Fetter" <david(at)fetter(dot)org>, "Stephen Frost" <sfrost(at)snowman(dot)net>, "Aidan Van Dyk" <aidan(at)highrise(dot)ca>, "Josh Berkus" <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, "Greg Smith" <greg(at)2ndquadrant(dot)com>, "Koichi Suzuki" <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-24 08:02:28
Message-ID: 82bd4877b93409c92a906a1d75578ac5.squirrel@shrek.krogh.cc
Lists: pgsql-hackers

> * Robert Treat:
>
>> Would it be unfair to assert that people who want checksums but aren't
>> willing to pay the cost of running a filesystem that provides
>> checksums aren't going to be willing to make the cost/benefit trade
>> off that will be asked for? Yes, it is unfair of course, but it's
>> interesting how small the camp of those using checksummed filesystems
>> is.
>
> Don't checksumming file systems currently come bundled with other
> features you might not want (such as certain vendors)?

I would chip in and say that I would prefer sticking to well-known, proven
filesystems like xfs/ext4 and let the application do the checksumming.

I don't foresee fully production-ready checksumming filesystems being
readily available in the standard Linux distributions in the near future.

And yes, I would for sure turn such functionality on if it were present.

--
Jesper


From: Florian Weimer <fweimer(at)bfk(dot)de>
To: jesper(at)krogh(dot)cc
Cc: "Robert Treat" <rob(at)xzilla(dot)net>, "Jim Nasby" <jim(at)nasby(dot)net>, "Simon Riggs" <simon(at)2ndquadrant(dot)com>, "Heikki Linnakangas" <heikki(dot)linnakangas(at)enterprisedb(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, "David Fetter" <david(at)fetter(dot)org>, "Stephen Frost" <sfrost(at)snowman(dot)net>, "Aidan Van Dyk" <aidan(at)highrise(dot)ca>, "Josh Berkus" <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, "Greg Smith" <greg(at)2ndquadrant(dot)com>, "Koichi Suzuki" <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-24 08:16:46
Message-ID: 82bopt347l.fsf@mid.bfk.de
Lists: pgsql-hackers

> I would chip in and say that I would prefer sticking to well-known proved
> filesystems like xfs/ext4 and let the application do the checksumming.

Yes, that's a different way of putting my concern. If you want a proven
file system with checksumming (and an fsck), options are really quite
limited.

> And yes, I would for sure turn such functionality on if it were present.

Same here. I already use page-level checksum with Berkeley DB.

--
Florian Weimer <fweimer(at)bfk(dot)de>
BFK edv-consulting GmbH http://www.bfk.de/
Kriegsstraße 100 tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99


From: Robert Treat <rob(at)xzilla(dot)net>
To: jesper(at)krogh(dot)cc
Cc: Florian Weimer <fweimer(at)bfk(dot)de>, Jim Nasby <jim(at)nasby(dot)net>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndquadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-24 14:49:29
Message-ID: CABV9wwPYJZN1MYeCbApRrvm3B7Y6L+WykBF1bvTH3_ojcgkNHw@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jan 24, 2012 at 3:02 AM,  <jesper(at)krogh(dot)cc> wrote:
>> * Robert Treat:
>>
>>> Would it be unfair to assert that people who want checksums but aren't
>>> willing to pay the cost of running a filesystem that provides
>>> checksums aren't going to be willing to make the cost/benefit trade
>>> off that will be asked for? Yes, it is unfair of course, but it's
>>> interesting how small the camp of those using checksummed filesystems
>>> is.
>>
>> Don't checksumming file systems currently come bundled with other
>> features you might not want (such as certain vendors)?
>
> I would chip in and say that I would prefer sticking to well-known proved
> filesystems like xfs/ext4 and let the application do the checksumming.
>

*shrug* You could use Illumos or BSD and you'd get generally vendor-free
systems using ZFS, which I'd say offers better-known and better-proven
checksumming than anything cooking in Linux land, or than the
yet-to-be-written checksumming in Postgres.

> I don't foresee fully production-ready checksumming filesystems being
> readily available in the standard Linux distributions in the near future.
>
> And yes, I would for sure turn such functionality on if it were present.
>

That's nice to say, but most people aren't willing to take a 50%
performance hit. Not saying what we end up with will be that bad, but
I've seen people get upset about performance hits much lower than
that.

Robert Treat
conjecture: xzilla.net
consulting: omniti.com


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Robert Treat <rob(at)xzilla(dot)net>
Cc: jesper(at)krogh(dot)cc, Florian Weimer <fweimer(at)bfk(dot)de>, Jim Nasby <jim(at)nasby(dot)net>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndquadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-24 15:15:31
Message-ID: CA+U5nM+tVtyEXX6yhiNyMaX9YuUEyMow5ueFAkCzTGRriWaPBA@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jan 24, 2012 at 2:49 PM, Robert Treat <rob(at)xzilla(dot)net> wrote:
>> And yes, I would for sure turn such functionality on if it were present.
>>
>
> That's nice to say, but most people aren't willing to take a 50%
> performance hit. Not saying what we end up with will be that bad, but
> I've seen people get upset about performance hits much lower than
> that.

When we talk about a 50% hit, are we discussing (1) checksums that are
checked on each I/O, or (2) checksums that are checked each time we
re-pin a shared buffer? The 50% hit was my estimate of (2) and has
not yet been measured, so it shouldn't be used unqualified when
discussing checksums. The same is also true of "I would use it"
comments, since we're not sure whether you're voting for (1) or (2).

As to whether people will actually use (1), I have no clue. But I do
know that many people request that feature, including people who
run heavy-duty Postgres production systems and who also know about
filesystems. Do people need (2)? It's easy enough to add as an option,
once we have (1) and there is real interest.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


From: Jim Nasby <jim(at)nasby(dot)net>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Robert Treat <rob(at)xzilla(dot)net>, jesper(at)krogh(dot)cc, Florian Weimer <fweimer(at)bfk(dot)de>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org, Greg Smith <greg(at)2ndquadrant(dot)com>, Koichi Suzuki <koichi(dot)szk(at)gmail(dot)com>
Subject: Re: Page Checksums
Date: 2012-01-24 17:30:47
Message-ID: 74DC9138-85BA-4308-9D8D-91CEE07E8A57@nasby.net
Lists: pgsql-hackers

On Jan 24, 2012, at 9:15 AM, Simon Riggs wrote:
> On Tue, Jan 24, 2012 at 2:49 PM, Robert Treat <rob(at)xzilla(dot)net> wrote:
>>> And yes, I would for sure turn such functionality on if it were present.
>>>
>>
>> That's nice to say, but most people aren't willing to take a 50%
>> performance hit. Not saying what we end up with will be that bad, but
>> I've seen people get upset about performance hits much lower than
>> that.
> When we talk about a 50% hit, are we discussing (1) checksums that are
> checked on each I/O, or (2) checksums that are checked each time we
> re-pin a shared buffer? The 50% hit was my estimate of (2) and has
> not yet been measured, so shouldn't be used unqualified when
> discussing checksums. The same is also true of "I would use it"
> comments, since we're not sure whether you're voting for (1) or (2).
>
> As to whether people will actually use (1), I have no clue. But I do
> know that many people request that feature, including people who
> run heavy duty Postgres production systems and who also know about
> filesystems. Do people need (2)? It's easy enough to add as an option,
> once we have (1) and there is real interest.

Some people will be able to take a 50% hit and will happily turn on checksumming every time a page is pinned. But I suspect a lot of folks can't afford that kind of hit, yet would really like to have their filesystem cache protected (we're certainly in the latter camp).

As for checksumming filesystems, I didn't see any answers about whether the filesystem *cache* was also protected by the filesystem checksum. Even if it is, the choice of checksumming filesystems is certainly limited... ZFS is the only one that seems to have real traction, but that forces you off of Linux, which is a problem for many shops.
--
Jim C. Nasby, Database Architect jim(at)nasby(dot)net
512.569.9461 (cell) http://jim.nasby.net