Re: Implementing incremental backup

From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Cédric Villemain <cedric(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org, "Jehan-Guillaume (ioguix) de Rorthais" <ioguix(at)free(dot)fr>, Tatsuo Ishii <ishii(at)postgresql(dot)org>, klaussfreire(at)gmail(dot)com, sfrost(at)snowman(dot)net
Subject: Re: Implementing incremental backup
Date: 2013-06-22 14:08:51
Message-ID: 20130622140851.GA1254@alap2.anarazel.de
Lists: pgsql-hackers

On 2013-06-22 15:58:35 +0200, Cédric Villemain wrote:
> > A differential backup built from a bunch of WAL between W1 and Wn
> > would let you recover to the time of Wn much faster than replaying all
> > the WALs between W1 and Wn, and it would save a lot of space.
> >
> > I was hoping to find some time to dig into this idea, but since the
> > subject has come up here, here are my 2¢!
>
> something like this, maybe:
> ./pg_xlogdump -b \
> ../data/pg_xlog/000000010000000000000001 \
> ../data/pg_xlog/000000010000000000000005 | \
> grep 'backup bkp' | awk '{print $5, $9}'

Note that it's a bit more complex than that, for a number of reasons:
* we don't log full page images for e.g. new heap pages; we just set the
XLOG_HEAP_INIT_PAGE flag on the record (see the sketch below)
* there are also XLOG_FPI records
* How do you get a base backup as the basis to apply those to? You need
it to be recovered exactly to a certain point...
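
To make the first two points concrete, a scan that wants to catch every
touched block has to look at the record descriptions as well, not just
at the backup-block lines. Very rough, untested sketch; the desc formats
("backup bkp #N; rel ...; block: N", "insert(init): rel ...; tid blk/off")
are assumptions about the current pg_xlogdump output, and plenty of other
record types would still be missing:

  ./pg_xlogdump -b \
      ../data/pg_xlog/000000010000000000000001 \
      ../data/pg_xlog/000000010000000000000005 | \
  awk '
      # strip the trailing semicolons so both branches print "rel block"
      { gsub(/;/, "") }
      # full page images attached to records (needs -b)
      /backup bkp/ { print $5, $9 }
      # heap pages initialized without an FPI (XLOG_HEAP_INIT_PAGE):
      # take the rel and the block half of the tid from the description
      /desc: insert\(init\)/ {
          for (i = 1; i <= NF; i++) if ($i == "rel") rel = $(i + 1)
          split($NF, tid, "/"); print rel, tid[1]
      }
      # other record types (XLOG_FPI/hint images, multi-insert with init,
      # btree newroot, ...) would need to be covered as well
  ' | sort -u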

But yes, I think something can be done in the end. I think Heikki's
pg_rewind already has quite a bit of the required logic.
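
And once you have such a list plus a base backup recovered to the right
point, collecting the changed blocks is conceptually just block-level
copying. Purely illustrative, with made-up OIDs, assuming the default
8kB block size and ignoring relation forks and 1GB segment boundaries,
copying block 1 of relfilenode 16385 in database 12996 would be roughly:

  # copy a single 8kB block out of a relation's main fork file
  dd if=../data/base/12996/16385 of=incr/16385.blk1 \
     bs=8192 skip=1 count=1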

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
