[RFC] LSN Map

From: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: [RFC] LSN Map
Date: 2015-01-07 09:50:38
Message-ID: 54AD016E.9020406@2ndquadrant.it
Lists: pgsql-hackers

Hi Hackers,

In order to make incremental backup
(https://wiki.postgresql.org/wiki/Incremental_backup) efficient we
need a way to track the LSN of a page so that we can retrieve it
without reading the actual block. Below is my proposal for how to
achieve that.

LSN Map
-------

The purpose of the LSN map is to quickly determine whether a page of a
relation has been modified after a specified checkpoint.

Implementation
--------------

We create an additional fork which contains a raw stream of LSNs. To
limit the space used, every entry represents the maximum LSN of a group
of blocks of a fixed size. I arbitrarily chose a group size of 2048
blocks, which is equivalent to 16MB of heap data; this means we need
64k entries to track one terabyte of heap.
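
As a rough sketch of the arithmetic (the constant and function names
below are made up for illustration, not the ones used in the patch),
the mapping from a heap block to its entry in the map fork looks
roughly like this:

#include "postgres.h"
#include "access/xlogdefs.h"    /* XLogRecPtr */
#include "storage/block.h"      /* BlockNumber */

/* 2048 heap blocks * 8kB = 16MB of heap covered by each map entry */
#define HEAPBLOCKS_PER_ENTRY    2048
/* each entry is a full 8-byte LSN (page header ignored for brevity) */
#define ENTRIES_PER_MAP_PAGE    (BLCKSZ / sizeof(XLogRecPtr))

/* Locate the map page and slot that cover a given heap block. */
static void
lsnmap_locate(BlockNumber heapBlk, BlockNumber *mapBlk, int *mapSlot)
{
    uint32      entry = heapBlk / HEAPBLOCKS_PER_ENTRY;

    *mapBlk = entry / ENTRIES_PER_MAP_PAGE;
    *mapSlot = entry % ENTRIES_PER_MAP_PAGE;
}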

Name
----

I've called this map the LSN map, and I've named the corresponding fork
file "lm".

WAL logging
-----------

At the moment the map is not WAL-logged, but it is updated during WAL
replay. I'm not deep enough into the WAL mechanics to tell whether the
current approach is sane or whether we should change it.

Current limits
--------------

The current implementation tracks only heap LSNs. It does not yet track
any kind of index, but this can easily be added later. The
implementation of commands that rewrite the whole table can be
improved: CLUSTER uses shared memory buffers instead of writing the
map directly to disk, and moving a table to another tablespace
simply drops the map instead of updating it correctly.

Further ideas
-------------

The current implementation updates an entry in the map every time a
block gets its LSN bumped, but we really only need to know the first
checkpoint that contains expired data. So setting the entry to the last
checkpoint LSN is probably enough, and it will reduce the number of
writes. To implement this we only need a backend-local copy of the last
checkpoint LSN, updated during each XLogInsert. Again, I'm not deep
enough into the replication mechanics to tell whether this approach
would work on a standby using restartpoints instead of checkpoints.
Please advise on the best way to implement it.
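
To make the idea concrete, here is a minimal sketch of such an update
path (lsnmap_get() and the cached last checkpoint LSN are hypothetical;
only lsnmap_pin()/lsnmap_set() exist in the current patch):

/*
 * Bump the map entry only if it is still older than the last checkpoint
 * LSN; every later update before the next checkpoint skips the write.
 */
static void
lsnmap_maybe_update(Relation rel, BlockNumber heapBlk,
                    XLogRecPtr lastCheckpointLSN)
{
    Buffer      lmbuffer;

    lsnmap_pin(rel, heapBlk, &lmbuffer);
    if (lsnmap_get(rel, heapBlk, lmbuffer) < lastCheckpointLSN)
        lsnmap_set(rel, heapBlk, lmbuffer, lastCheckpointLSN);
    ReleaseBuffer(lmbuffer);
}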

Conclusions
-----------

This code is incomplete, and the xlog replay part must be
improved/fixed, but I think it's a good start toward having this
feature. I would appreciate any review, advice or criticism.

Regards,
Marco

--
Marco Nenciarini - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
marco(dot)nenciarini(at)2ndQuadrant(dot)it | www.2ndQuadrant.it

Attachment Content-Type Size
lsn-map-v1.patch text/plain 51.7 KB

From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] LSN Map
Date: 2015-01-07 15:16:19
Message-ID: 20150107151619.GG17824@momjian.us
Lists: pgsql-hackers

On Wed, Jan 7, 2015 at 10:50:38AM +0100, Marco Nenciarini wrote:
> Implementation
> --------------
>
> We create an additional fork which contains a raw stream of LSNs. To
> limit the space used, every entry represents the maximum LSN of a group
> of blocks of a fixed size. I arbitrarily chose a group size of 2048
> blocks, which is equivalent to 16MB of heap data; this means we need
> 64k entries to track one terabyte of heap.

I like the idea of summarizing the LSNs to keep its size reasonable. Have
you done any measurements to determine how much backup can be skipped
using this method for a typical workload, i.e. how many 16MB page ranges
are not modified in a typical span between incremental backups?

--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ Everyone has their own god. +


From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] LSN Map
Date: 2015-01-07 15:33:20
Message-ID: 20150107153320.GV1457@alvh.no-ip.org
Lists: pgsql-hackers

Bruce Momjian wrote:

> Have you done any measurements to determine how much backup can be
> skipped using this method for a typical workload, i.e. how many 16MB
> page ranges are not modified in a typical span between incremental
> backups?

That seems entirely dependent on the specific workload.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] LSN Map
Date: 2015-01-07 15:41:43
Message-ID: 22402.1420645303@sss.pgh.pa.us
Lists: pgsql-hackers

Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> writes:
> Bruce Momjian wrote:
>> Have you done any measurements to determine how much backup can be
>> skipped using this method for a typical workload, i.e. how many 16MB
>> page ranges are not modified in a typical span between incremental
>> backups?

> That seems entirely dependent on the specific workload.

Maybe, but it's a reasonable question. The benefit obtained from the
added complexity/overhead clearly goes to zero if you summarize too much,
and it's not at all clear that there's a sweet spot where you win. So
I'd want to see some measurements demonstrating that this is worthwhile.

regards, tom lane


From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] LSN Map
Date: 2015-01-07 15:46:04
Message-ID: 20150107154604.GH17824@momjian.us
Lists: pgsql-hackers

On Wed, Jan 7, 2015 at 12:33:20PM -0300, Alvaro Herrera wrote:
> Bruce Momjian wrote:
>
> > Have you done any measurements to determine how much backup can be
> > skipped using this method for a typical workload, i.e. how many 16MB
> > page ranges are not modified in a typical span between incremental
> > backups?
>
> That seems entirely dependent on the specific workload.

Well, obviously. Is that worth even stating?

My question is whether there are enough workloads for this to be
generally useful, particularly considering the recording granularity,
hint bits, and freezing. Do we have cases where 16MB granularity helps
compared to file or table-level granularity? How would we even measure
the benefits? How would the administrator know they are benefitting
from incremental backups vs complete backups, considering the complexity
of incremental restores?

--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ Everyone has their own god. +


From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] LSN Map
Date: 2015-01-08 19:18:28
Message-ID: 54AED804.7060701@BlueTreble.com
Lists: pgsql-hackers

On 1/7/15, 3:50 AM, Marco Nenciarini wrote:
> The current implementation tracks only heap LSNs. It does not yet track
> any kind of index, but this can easily be added later.

Would it make sense to do this at a buffer level, instead of at the heap level? That means it would handle both heap and indexes. I don't know if LSN is visible that far down though.

Also, this pattern is repeated several times; it would be good to put it in its own function:
+ lsnmap_pin(reln, blkno, &lmbuffer);
+ lsnmap_set(reln, blkno, lmbuffer, lsn);
+ ReleaseBuffer(lmbuffer);
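
Something along these lines, just as a sketch (the wrapper name is made
up):

static void
lsnmap_update(Relation reln, BlockNumber blkno, XLogRecPtr lsn)
{
    Buffer      lmbuffer;

    lsnmap_pin(reln, blkno, &lmbuffer);
    lsnmap_set(reln, blkno, lmbuffer, lsn);
    ReleaseBuffer(lmbuffer);
}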
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


From: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>
To: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] LSN Map
Date: 2015-01-13 11:22:24
Message-ID: 54B4FFF0.2060600@2ndquadrant.it
Lists: pgsql-hackers

On 08/01/15 20:18, Jim Nasby wrote:
> On 1/7/15, 3:50 AM, Marco Nenciarini wrote:
>> The current implementation tracks only heap LSNs. It does not yet track
>> any kind of index, but this can easily be added later.
>
> Would it make sense to do this at a buffer level, instead of at the heap
> level? That means it would handle both heap and indexes.
> I don't know if LSN is visible that far down though.

Where exactly are you thinking of handling it?

>
> Also, this pattern is repeated several times; it would be good to put it
> in its own function:
> + lsnmap_pin(reln, blkno, &lmbuffer);
> + lsnmap_set(reln, blkno, lmbuffer, lsn);
> + ReleaseBuffer(lmbuffer);

Right.

Regards,
Marco

--
Marco Nenciarini - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
marco(dot)nenciarini(at)2ndQuadrant(dot)it | www.2ndQuadrant.it


From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>, Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] LSN Map
Date: 2015-02-23 17:52:26
Message-ID: 54EB68DA.6000006@vmware.com
Lists: pgsql-hackers

On 01/13/2015 01:22 PM, Marco Nenciarini wrote:
> On 08/01/15 20:18, Jim Nasby wrote:
>> On 1/7/15, 3:50 AM, Marco Nenciarini wrote:
>>> The current implementation tracks only heap LSNs. It does not yet track
>>> any kind of index, but this can easily be added later.
>>
>> Would it make sense to do this at a buffer level, instead of at the heap
>> level? That means it would handle both heap and indexes.
>> I don't know if LSN is visible that far down though.
>
> Where exactly are you thinking of handling it?

Dunno, but Jim's got a point. This is a maintenance burden to all
indexams, if they all have to remember to update the LSN map separately.
It needs to be done in some common code, like in PageSetLSN or
XLogInsert or something.
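
For instance (purely illustrative; the wrapper and the idea of passing
the relation and block number down to it are assumptions, not something
the patch does today), the common entry point could be a PageSetLSN
variant that also folds the LSN into the map:

static inline void
PageSetLSNAndMap(Relation rel, BlockNumber blkno, Page page, XLogRecPtr lsn)
{
    PageSetLSN(page, lsn);
    lsnmap_update(rel, blkno, lsn);     /* hypothetical helper */
}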

Aside from that, isn't this horrible from a performance point of view?
The patch doubles the buffer manager traffic, because any update to any
page will also need to modify the LSN map. This code is copied from the
visibility map code, but we got away with it there because the VM only
needs to be updated the first time a page is modified. Subsequent
updates will know the visibility bit is already cleared, and don't need
to access the visibility map.

And scalability: Whether you store one value for every N pages, or the
LSN of every page, this is going to have a huge effect of focusing
contention onto the LSN pages. Currently, if ten backends operate on ten
different heap pages, for example, they can run in parallel. There will
be some contention on the WAL insertions (much less in 9.4 than before).
But with this patch, they will all fight for the exclusive lock on the
single LSN map page.

You'll need to find a way to not update the LSN map on every update. For
example, only update the LSN page on the first update after a checkpoint
(although that would still have a big contention focusing effect right
after a checkpoint).

- Heikki


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] LSN Map
Date: 2015-02-24 02:55:13
Message-ID: CA+TgmoaTJSZ1sKks8pSYnEw+L1U_EH-a3X2PDTy+pPosqwfVTg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Feb 23, 2015 at 12:52 PM, Heikki Linnakangas
<hlinnakangas(at)vmware(dot)com> wrote:
> Dunno, but Jim's got a point. This is a maintenance burden to all indexams,
> if they all have to remember to update the LSN map separately. It needs to
> be done in some common code, like in PageSetLSN or XLogInsert or something.
>
> Aside from that, isn't this horrible from a performance point of view? The
> patch doubles the buffer manager traffic, because any update to any page
> will also need to modify the LSN map. This code is copied from the
> visibility map code, but we got away with it there because the VM only needs
> to be updated the first time a page is modified. Subsequent updates will
> know the visibility bit is already cleared, and don't need to access the
> visibility map.
>
> And scalability: Whether you store one value for every N pages, or the LSN
> of every page, this is going to have a huge effect of focusing contention onto
> the LSN pages. Currently, if ten backends operate on ten different heap
> pages, for example, they can run in parallel. There will be some contention
> on the WAL insertions (much less in 9.4 than before). But with this patch,
> they will all fight for the exclusive lock on the single LSN map page.
>
> You'll need to find a way to not update the LSN map on every update. For
> example, only update the LSN page on the first update after a checkpoint
> (although that would still have a big contention focusing effect right after
> a checkpoint).

I think it would make more sense to do this in the background.
Suppose there's a background process that reads the WAL and figures
out which buffers it touched, and then updates the LSN map
accordingly. Then the contention-focusing effect disappears, because
all of the updates to the LSN map are being made by the same process.
You need some way to make sure the WAL sticks around until you've
scanned it for changed blocks - but that is mighty close to what a
physical replication slot does, so it should be manageable.
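
A very rough sketch of that shape (every type and helper below is a
placeholder, not a real API; a real version would sit on top of the
xlogreader facility and a physical replication slot):

/* Placeholder stand-ins for WAL reading and slot management. */
typedef struct WalBlockRef { RelFileNode rnode; BlockNumber blkno; } WalBlockRef;
typedef struct WalRecord { XLogRecPtr lsn; int nblocks; WalBlockRef blocks[8]; } WalRecord;

extern bool read_next_wal_record(WalRecord *rec);        /* placeholder */
extern void lsnmap_update_block(RelFileNode rnode, BlockNumber blkno,
                                XLogRecPtr lsn);         /* placeholder */
extern void advance_replication_slot(XLogRecPtr upto);   /* placeholder */

/* Follow the WAL and fold every touched block into the LSN map. */
static void
lsnmap_worker_main(void)
{
    WalRecord   rec;
    int         i;

    while (read_next_wal_record(&rec))
    {
        for (i = 0; i < rec.nblocks; i++)
            lsnmap_update_block(rec.blocks[i].rnode, rec.blocks[i].blkno,
                                rec.lsn);

        /* advance the slot so the WAL behind us can be recycled */
        advance_replication_slot(rec.lsn);
    }
}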

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Marco Nenciarini <marco(dot)nenciarini(at)2ndquadrant(dot)it>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [RFC] LSN Map
Date: 2015-07-06 14:19:37
Message-ID: 559A8E79.9040506@iki.fi
Lists: pgsql-hackers

On 02/24/2015 04:55 AM, Robert Haas wrote:
> On Mon, Feb 23, 2015 at 12:52 PM, Heikki Linnakangas
> <hlinnakangas(at)vmware(dot)com> wrote:
>> Dunno, but Jim's got a point. This is a maintenance burden to all indexams,
>> if they all have to remember to update the LSN map separately. It needs to
>> be done in some common code, like in PageSetLSN or XLogInsert or something.
>>
>> Aside from that, isn't this horrible from a performance point of view? The
>> patch doubles the buffer manager traffic, because any update to any page
>> will also need to modify the LSN map. This code is copied from the
>> visibility map code, but we got away with it there because the VM only needs
>> to be updated the first time a page is modified. Subsequent updates will
>> know the visibility bit is already cleared, and don't need to access the
>> visibility map.
>>
>> And scalability: Whether you store one value for every N pages, or the LSN
>> of every page, this is going to have a huge effect of focusing contention onto
>> the LSN pages. Currently, if ten backends operate on ten different heap
>> pages, for example, they can run in parallel. There will be some contention
>> on the WAL insertions (much less in 9.4 than before). But with this patch,
>> they will all fight for the exclusive lock on the single LSN map page.
>>
>> You'll need to find a way to not update the LSN map on every update. For
>> example, only update the LSN page on the first update after a checkpoint
>> (although that would still have a big contention focusing effect right after
>> a checkpoint).
>
> I think it would make more sense to do this in the background.
> Suppose there's a background process that reads the WAL and figures
> out which buffers it touched, and then updates the LSN map
> accordingly. Then the contention-focusing effect disappears, because
> all of the updates to the LSN map are being made by the same process.
> You need some way to make sure the WAL sticks around until you've
> scanned it for changed blocks - but that is mighty close to what a
> physical replication slot does, so it should be manageable.

If you implement this as a background process that reads WAL, as Robert
suggested, you could perhaps implement this completely in an extension.
That'd be nice for getting started quickly, even if we later want to
integrate this into the backend.

This is marked in the commitfest as "Needs Review", but ISTM this got
its fair share of review back in February. Marking as Returned with
Feedback.

- Heikki