Warm Standby Setup Documentation

Lists: pgsql-general
From: Ogden <lists@darkstatic.com>
To: PGSQL Mailing List <pgsql-general@postgresql.org>
Subject: Warm Standby Setup Documentation
Date: 2010-03-22 14:21:16
Message-ID: 0F6403EE-2740-49F4-9BF3-2C6AA45E2EDE@darkstatic.com

I have looked all over but could not find any detailed docs on setting up a warm standby solution using PostgreSQL 8.4. I do know of http://www.postgresql.org/docs/8.4/static/warm-standby.html but was wondering if there was a more detailed document on this topic.

Are people using this as a viable backup/hot spare solution? How has it worked out?

Thank you

Ogden


From: Bryan Murphy <bmurphy1976@gmail.com>
To: Ogden <lists@darkstatic.com>
Cc: PGSQL Mailing List <pgsql-general@postgresql.org>
Subject: Re: Warm Standby Setup Documentation
Date: 2010-03-24 15:09:05
Message-ID: 7fd310d11003240809v508c1627k7e6677a33777c078@mail.gmail.com

On Mon, Mar 22, 2010 at 9:21 AM, Ogden <lists@darkstatic.com> wrote:

> I have looked all over but could not find any detailed docs on setting up a
> warm standby solution using PostgreSQL 8.4. I do know of
> http://www.postgresql.org/docs/8.4/static/warm-standby.html but was
> wondering if there was a more detailed document on this topic.
>
> Are people using this as a viable backup/hot spare solution? How has it
> worked out?
>
> Thank you
>
> Ogden
>

We use it and it works pretty well, although it's a bit of a pain to set up
the first time. We have two spares: one an equivalently sized machine we
use for failover, and one a smaller machine that we use for
worst-case-scenario file system snapshots/backups.

The one thing you should be aware of is that when you fail over, your spare
has no spares. I have not found a way around this problem yet. So, when
you fail over, there is a window where you have no backups while you're
building the new spares. This can be pretty nerve wracking if your database
is like ours and it takes 3-6 hours to bring a new spare online from
scratch.

I was able to build out our solution by reading the docs and asking questions
on the mailing list. The information is in the docs; you just have to read
it a few times for it to sink in.

Bryan


From: Greg Smith <greg@2ndquadrant.com>
To: Bryan Murphy <bmurphy1976@gmail.com>
Cc: Ogden <lists@darkstatic.com>, PGSQL Mailing List <pgsql-general@postgresql.org>
Subject: Re: Warm Standby Setup Documentation
Date: 2010-03-26 18:32:00
Message-ID: 4BACFDA0.3030706@2ndquadrant.com

Bryan Murphy wrote:
> The one thing you should be aware of is that when you fail over, your
> spare has no spares. I have not found a way around this problem yet.
> So, when you fail over, there is a window where you have no backups
> while you're building the new spares. This can be pretty nerve
> wracking if your database is like ours and it takes 3-6 hours to bring
> a new spare online from scratch.

If there's another server around, you can have your archive_command on
the master ship to two systems, then use the second one as a way to
jump-start this whole process. After fail-over, just start shipping
from the new primary to that 3rd server, now the replacement standby,
and sync any files it doesn't have. Then switch it into recovery. Much
faster than doing a new base backup from the standby on larger systems.
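A minimal sketch of that dual shipping, with hypothetical host names (the rest of the thread gets into why a wrapper script is the safer form):

```ini
# postgresql.conf on the master -- minimal sketch, host names hypothetical.
# Note the && chain: a failure against EITHER standby blocks archiving on
# the master, so a production setup wants a wrapper script instead.
archive_command = 'rsync -a %p standby1:/usr/local/pgsql/walfiles/%f && rsync -a %p standby2:/usr/local/pgsql/walfiles/%f'
```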

--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com www.2ndQuadrant.us


From: Ogden <lists@darkstatic.com>
To: PGSQL Mailing List <pgsql-general@postgresql.org>
Subject: Re: Warm Standby Setup Documentation
Date: 2010-03-26 18:35:43
Message-ID: 68992D6C-719C-4E1F-9887-6AA71DD4A830@darkstatic.com


On Mar 26, 2010, at 1:32 PM, Greg Smith wrote:

> Bryan Murphy wrote:
>> The one thing you should be aware of is that when you fail over, your spare has no spares. I have not found a way around this problem yet. So, when you fail over, there is a window where you have no backups while you're building the new spares. This can be pretty nerve wracking if your database is like ours and it takes 3-6 hours to bring a new spare online from scratch.
>
> If there's another server around, you can have your archive_command on the master ship to two systems, then use the second one as a way to jump-start this whole process. After fail-over, just start shipping from the new primary to that 3rd server, now the replacement standby, and sync any files it doesn't have. Then switch it into recovery. Much faster than doing a new base backup from the standby on larger systems.

How is it possible to use the archive_command to ship to different ones?

archive_command = 'rsync -a %p postgres@192.168.x.x:/usr/local/pgsql/walfiles/%f </dev/null'
archive_timeout = 120 # force a logfile segment switch after this

I suppose you can put multiple commands there then?

Also, 2 minutes - is this reasonable for a heavy write database?

Thank you

Ogden


From: Bryan Murphy <bmurphy1976@gmail.com>
To: Greg Smith <greg@2ndquadrant.com>
Cc: Ogden <lists@darkstatic.com>, PGSQL Mailing List <pgsql-general@postgresql.org>
Subject: Re: Warm Standby Setup Documentation
Date: 2010-03-26 20:38:02
Message-ID: 7fd310d11003261338t5ca16dc7jdb5fc7c38120fa42@mail.gmail.com

On Fri, Mar 26, 2010 at 1:32 PM, Greg Smith <greg@2ndquadrant.com> wrote:

> If there's another server around, you can have your archive_command on the
> master ship to two systems, then use the second one as a way to jump-start
> this whole process. After fail-over, just start shipping from the new
> primary to that 3rd server, now the replacement standby, and sync any files
> it doesn't have. Then switch it into recovery. Much faster than doing a
> new base backup from the standby on larger systems.
>
>
Every time I've tried to do this it's failed because the third server was
looking for log files starting with 00000006... but the secondary server
(new master) is now shipping files starting with 00000007... How do I get
the third server to switch over to the higher numbered files? That's the
part I was never able to overcome.

I'd really like to fix this, because this has literally given me nightmares.
:)

Bryan


From: Yar Tykhiy <yar@barnet.com.au>
To: Ogden <lists@darkstatic.com>
Cc: PGSQL Mailing List <pgsql-general@postgresql.org>
Subject: Re: Warm Standby Setup Documentation
Date: 2010-03-29 00:45:58
Message-ID: 20100329004558.GB3229@buka.local

On Fri, Mar 26, 2010 at 01:35:43PM -0500, Ogden wrote:
> On Mar 26, 2010, at 1:32 PM, Greg Smith wrote:
>
> > Bryan Murphy wrote:
> >> The one thing you should be aware of is that when you fail over, your spare has no spares. I have not found a way around this problem yet. So, when you fail over, there is a window where you have no backups while you're building the new spares. This can be pretty nerve wracking if your database is like ours and it takes 3-6 hours to bring a new spare online from scratch.
> >
> > If there's another server around, you can have your archive_command on the master ship to two systems, then use the second one as a way to jump-start this whole process. After fail-over, just start shipping from the new primary to that 3rd server, now the replacement standby, and sync any files it doesn't have. Then switch it into recovery. Much faster than doing a new base backup from the standby on larger systems.
>
> How is it possible to use the archive_command to ship to different ones?
>
> archive_command = 'rsync -a %p postgres@192.168.x.x:/usr/local/pgsql/walfiles/%f </dev/null'
> archive_timeout = 120 # force a logfile segment switch after this
>
> I suppose you can put multiple commands there then?

You can always wrap as many commands as you like in a script.
However, there is a pitfall to watch out for when shipping WALs to
multiple standby servers: your script has to handle failures of
individual WAL shipping targets so that a single target going down
doesn't disrupt operation of the whole cluster. Please see
http://archives.postgresql.org/pgsql-general/2009-10/msg00590.php
for discussion.
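As a concrete illustration of that pitfall, here is a hedged sketch (all host names, paths, and the ship_wal name are hypothetical, not tested production code) of a wrapper that treats one standby as required and the rest as optional, so one dead optional target cannot stall the archiver:

```shell
#!/bin/sh
# ship_wal.sh -- sketch only.  Install on the master and point
# archive_command at it, e.g.:
#   archive_command = '/usr/local/pgsql/bin/ship_wal.sh %p %f'

COPY="${COPY:-rsync -a}"   # copy command; overridable (e.g. COPY=cp for a local dry run)

# ship_wal <wal-path> <wal-name> <required-target> [optional-target...]
ship_wal() {
    p="$1"; f="$2"; required="$3"; shift 3
    # The required standby must receive the segment: a non-zero exit makes
    # PostgreSQL keep the WAL file and retry this same segment later.
    $COPY "$p" "$required/$f" || return 1
    # Optional targets: log the failure and carry on, so a dead spare
    # never blocks archiving on the master.
    for t in "$@"; do
        $COPY "$p" "$t/$f" || echo "ship_wal: could not ship $f to $t" >&2
    done
    return 0
}

# Installed standalone, the script would end with something like:
#   ship_wal "$1" "$2" postgres@192.168.1.26:/usr/local/pgsql/walfiles \
#                      postgres@192.168.1.27:/usr/local/pgsql/walfiles
```

The exit status is the whole contract here: PostgreSQL only considers a segment archived once the command returns 0, which is exactly why the optional targets must not be allowed to fail the script.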

Yar


From: Greg Smith <greg@2ndquadrant.com>
To: Ogden <lists@darkstatic.com>
Cc: PGSQL Mailing List <pgsql-general@postgresql.org>
Subject: Re: Warm Standby Setup Documentation
Date: 2010-03-29 04:57:05
Message-ID: 4BB03321.1010606@2ndquadrant.com

Ogden wrote:
> How is it possible to use the archive_command to ship to different ones?
>
> archive_command = 'rsync -a %p postgres@192.168.x.x:/usr/local/pgsql/walfiles/%f </dev/null'
> archive_timeout = 120 # force a logfile segment switch after this
>
> I suppose you can put multiple commands there then?
>

The examples in the manual lead one toward putting a full command line
into archive_command itself. I personally never do that; I call a full-size
script with "%f %p" and put all of the transport details into it instead. Once
you do that, you can ship that segment all over the place if you feel
like it, and add significantly better error detection/recovery than is
possible in a single line too. As already mentioned, you do need to
make sure that you don't end up blocking archiving on the master due to
a delivery failure on an optional node, however.

> Also, 2 minutes - is this reasonable for a heavy write database?
>

It's extremely unlikely a write-heavy database will care about the
setting of archive_timeout. That setting exists for the situation where
you sometimes go some number of minutes without generating at least 16MB
of WAL writes, and want to force a log file to ship anyway. That
shouldn't happen often on a busy server. Setting archive_timeout to a
lower value is mainly a source of overhead on mostly idle systems.
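To put illustrative numbers on that:

```ini
# postgresql.conf -- illustrative numbers, not a recommendation.
# A busy server fills a 16MB WAL segment in well under two minutes, so
# this timeout rarely fires there.  On a mostly idle server the same
# setting forces a segment switch every two minutes: up to 30
# mostly-empty 16MB files, roughly 480MB of archive traffic, per idle hour.
archive_timeout = 120
```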

--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com www.2ndQuadrant.us


From: Greg Smith <greg@2ndquadrant.com>
To: Bryan Murphy <bmurphy1976@gmail.com>
Cc: Ogden <lists@darkstatic.com>, PGSQL Mailing List <pgsql-general@postgresql.org>
Subject: Re: Warm Standby Setup Documentation
Date: 2010-03-29 05:11:30
Message-ID: 4BB03682.7040805@2ndquadrant.com

Bryan Murphy wrote:
> On Fri, Mar 26, 2010 at 1:32 PM, Greg Smith <greg@2ndquadrant.com> wrote:
>
> If there's another server around, you can have your
> archive_command on the master ship to two systems, then use the
> second one as a way to jump-start this whole process. After
> fail-over, just start shipping from the new primary to that 3rd
> server, now the replacement standby, and sync any files it doesn't
> have. Then switch it into recovery. Much faster than doing a new
> base backup from the standby on larger systems.
>
>
> Every time I've tried to do this it's failed because the third server
> was looking for log files starting with 00000006... but the secondary
> server (new master) is now shipping files starting with 00000007...
> How do I get the third server to switch over to the higher numbered
> files? That's the part I was never able to overcome.

One thing it's easy to miss is that you have to save every incoming WAL
archive file on the standby, and sync them all over to the 3rd system
before you trigger the standby to be active. The archive_command has to
be active and shipping to the 3rd system before the server is triggered too.

You can think of any given standby server as a base backup plus some
number of WAL segments applied to it. So long as you never let a WAL
file get applied to, or generated by, the standby that becomes the master
without it making its way to the additional system, it should always be
possible to bring that additional server up to date without something
being missing. The exact order of operations to ensure that in all cases
is certainly not obvious though.
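One detail worth spelling out: on 8.4, "triggering" the standby usually means creating the trigger file that contrib's pg_standby watches. A hedged sketch of the replacement standby's recovery.conf, with hypothetical paths:

```ini
# recovery.conf on the replacement standby -- a sketch, paths hypothetical
restore_command = 'pg_standby -t /tmp/pgsql.trigger /usr/local/pgsql/walfiles %f %p %r'
# ...and the standby is later promoted by creating the file named by -t:
#   touch /tmp/pgsql.trigger
```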

--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com www.2ndQuadrant.us


From: Ogden <lists@darkstatic.com>
To: Yar Tykhiy <yar@barnet.com.au>
Cc: PGSQL Mailing List <pgsql-general@postgresql.org>
Subject: Re: Warm Standby Setup Documentation
Date: 2010-03-29 16:42:38
Message-ID: 052B4AEC-D21B-4D76-A1C3-D3A783523E2A@darkstatic.com


On Mar 28, 2010, at 7:45 PM, Yar Tykhiy wrote:

> On Fri, Mar 26, 2010 at 01:35:43PM -0500, Ogden wrote:
>> On Mar 26, 2010, at 1:32 PM, Greg Smith wrote:
>>
>>> Bryan Murphy wrote:
>>>> The one thing you should be aware of is that when you fail over, your spare has no spares. I have not found a way around this problem yet. So, when you fail over, there is a window where you have no backups while you're building the new spares. This can be pretty nerve wracking if your database is like ours and it takes 3-6 hours to bring a new spare online from scratch.
>>>
>>> If there's another server around, you can have your archive_command on the master ship to two systems, then use the second one as a way to jump-start this whole process. After fail-over, just start shipping from the new primary to that 3rd server, now the replacement standby, and sync any files it doesn't have. Then switch it into recovery. Much faster than doing a new base backup from the standby on larger systems.
>>
>> How is it possible to use the archive_command to ship to different ones?
>>
>> archive_command = 'rsync -a %p postgres@192.168.x.x:/usr/local/pgsql/walfiles/%f </dev/null'
>> archive_timeout = 120 # force a logfile segment switch after this
>>
>> I suppose you can put multiple commands there then?
>
> You can always wrap as many commands as you like in a script.
> However, there is a pitfall to watch out for when shipping WALs to
> multiple standby servers. Namely your script has to handle failures
> of individual WAL shipping targets so that a single target going down
> doesn't disrupt operation of the whole cluster. Please see
> http://archives.postgresql.org/pgsql-general/2009-10/msg00590.php
> for discussion.

Is it as simple as doing this:

archive_command = '/var/lib/pgsql/data/warm_standby.sh %p %f </dev/null'

Where /var/lib/pgsql/data/warm_standby.sh is:

#!/bin/sh

rsync -a $1 postgres@192.168.1.26:/usr/local/pgsql/walfiles/$2
rsync -a $1 postgres@192.168.1.27:/usr/local/pgsql/walfiles/$2
...

For each warm standby "slave"?

Is it safe to do it this way? I wish there were some scripts out there that I could look at as examples.

Thank you

Ogden