Standalone synchronous master

From: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Standalone synchronous master
Date: 2011-12-25 20:08:40
Message-ID: CAO-C5=ka8Sd6ZRqCeJReEBGHe=Oe7=7jFabiH1p6Di17eMYqMw@mail.gmail.com
Lists: pgsql-hackers

Hi all,

I’m new here so maybe someone else already has this in the works?

Anyway, proposed change/patch:

Add a new parameter:

synchronous_standalone_master = on | off

To control whether a master configured with synchronous_commit = on is
allowed to stop waiting for standby WAL sync when all synchronous
standby WAL senders are disconnected.

Current behavior is that the master waits indefinitely until a
synchronous standby becomes available or until synchronous_commit is
disabled manually. This would still be the default, so
synchronous_standalone_master defaults to off.
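
For a two-node setup, the master’s postgresql.conf would then look
something like this (just a sketch; 'tx0113' is the example standby
name used in the log output further down):

synchronous_standby_names = 'tx0113'   # as for plain sync rep
synchronous_standalone_master = on     # proposed: continue alone when
                                       # no sync standby is connected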

Previously discussed here:

http://archives.postgresql.org/pgsql-hackers/2010-10/msg01009.php

I’m attaching a working patch against master/HEAD and I hope the
spirit of Christmas will make you look kindly on my attempt :) or
something ...

It works fine and I added some extra logging so that it would be
possible to follow along more easily from an admin’s point of view.

It looks like this when starting the primary server with
synchronous_standalone_master = on:

$ ./postgres
LOG: database system was shut down at 2011-12-25 20:27:13 CET
<-- No standby is connected at startup
LOG: not waiting for standby synchronization
LOG: autovacuum launcher started
LOG: database system is ready to accept connections
<-- First sync standby connects here so switch to sync mode
LOG: standby "tx0113" is now the synchronous standby with priority 1
LOG: waiting for standby synchronization
<-- standby wal receiver on the standby is killed (SIGKILL)
LOG: unexpected EOF on standby connection
LOG: not waiting for standby synchronization
<-- restart standby so that it connects again
LOG: standby "tx0113" is now the synchronous standby with priority 1
LOG: waiting for standby synchronization
<-- standby wal receiver is first stopped (SIGSTOP) to make sure
we have outstanding waits in the primary, then killed (SIGKILL)
LOG: could not receive data from client: Connection reset by peer
LOG: unexpected EOF on standby connection
LOG: not waiting for standby synchronization
<-- client now finally receives the commit ACK that was hanging due
to the SIGSTOPped wal receiver on the standby node

And so on ... any comments are welcome :)

Thanks and cheers,

/A

Attachment Content-Type Size
sync-standalone-v1.patch.txt text/plain 9.0 KB

From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 10:14:41
Message-ID: CAHGQGwE4BKOO=QWFmpEcQUpPUZ887OyjwNwbGMkiL5rHFOUCgg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 26, 2011 at 5:08 AM, Alexander Björnhagen
<alex(dot)bjornhagen(at)gmail(dot)com> wrote:
> I’m new here so maybe someone else already has this in the works?

No, as far as I know.

> And so on ... any comments are welcome :)

Basically I like this whole idea, but I'd like to know why you think
this functionality is required?

When is the replication mode switched from "standalone" to "sync"?
Does that happen as soon as a sync standby appears, or once it has
caught up with the master? The former might block the transactions for
a long time until the standby has caught up with the master even
though synchronous_standalone_master is enabled and a user wants to
avoid such a downtime.

When standalone master is enabled, you might lose some committed
transactions at failover, as follows:

1. While synchronous replication is running normally, replication
connection is closed because of
a network outage.
2. The master works standalone because of
synchronous_standalone_master=on and some
new transactions are committed though their WAL records are not
replicated to the standby.
3. The master crashes for some reason, the clusterware detects it and
triggers a failover.
4. The standby which doesn't have recent committed transactions
becomes the master at a failover...

Is this scenario acceptable?

To avoid such a loss of transactions, I'm thinking of introducing a
new GUC parameter specifying a shell command which is executed when
the replication mode is switched from "sync" to "standalone". If we
set it to something like a STONITH command, we can forcibly shut down
the standby before the master resumes the transactions, and avoid a
failover to the obsolete standby when the master crashes.
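
Something like the following, say (the parameter name here is only a
placeholder, I have not picked one yet):

# executed once when the master switches from "sync" to "standalone"
standalone_master_command = '/path/to/stonith-standby.sh'

where the script fences (e.g., powers off) the standby, so that the
clusterware can never promote it after the master has moved ahead.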

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


From: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
To: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 12:51:07
Message-ID: CAO-C5=nc5VCyTFzxorpmuWX2jLEqK+FEgTbWcMqUkqLJK04wig@mail.gmail.com
Lists: pgsql-hackers

Hello, and thank you for your feedback; I appreciate it.

Updated patch: sync-standalone-v2.patch

I am sorry to be spamming the list, but I just cleaned it up a little
bit, wrote better comments and tried to move most of the logic into
syncrep.c since that's where it belongs anyway, and also fixed a small
bug where standalone mode was disabled/enabled at runtime via SIGHUP.

> Basically I like this whole idea, but I'd like to know why you think this functionality is required?

How should a synchronous master handle the situation where all
standbys have failed?

Well, I think this is one of those cases where you could argue either
way. Someone caring more about high availability of the system will
want to let the master continue and just raise an alert to the
operators. Someone looking for an absolute guarantee of data
replication will say otherwise.

I don’t like introducing config variables just for the fun of it, but
I think in this case there is no right and wrong.

Oracle Data Guard replication has three different configurable modes
called “performance/availability/protection”, which for Postgres
correspond exactly to “async/sync+standalone/sync”.

> When is the replication mode switched from "standalone" to "sync"?

Good question. Currently that happens when a standby server has
connected and also been deemed suitable for synchronous commit by the
master ( meaning that its name matches the config variable
synchronous_standby_names ). So in a setup with both synchronous and
asynchronous standbys, the master only considers the synchronous ones
when deciding on standalone mode. The asynchronous standbys are
“useless” to a synchronous master anyway.

> The former might block the transactions for a long time until the standby has caught up with the master even though synchronous_standalone_master is enabled and a user wants to avoid such a downtime.

If we are talking about a network “glitch”, then the standby would take
a few seconds/minutes to catch up (not hours!), which is acceptable if
you ask me.

If we are talking about say a node failure, I suppose the workaround
even on current code is to bring up the new standby first as
asynchronous and then simply switch it to synchronous by editing
synchronous_standby_names on the master. Voilà! :)

So in effect this is a non-issue since there is a possible workaround, agree?
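
Roughly like this (a sketch of that workaround, assuming the standby
is named tx0113 and has already caught up as an async standby):

# on the master, in postgresql.conf:
synchronous_standby_names = 'tx0113'   # was '' while it caught up

$ pg_ctl reload   # the setting is SIGHUP-able, no restart needed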

> 1. While synchronous replication is running normally, replication
> connection is closed because of
> a network outage.
> 2. The master works standalone because of
> synchronous_standalone_master=on and some
> new transactions are committed though their WAL records are not
> replicated to the standby.
> 3. The master crashes for some reason, the clusterware detects it and
> triggers a failover.
> 4. The standby which doesn't have recent committed transactions
> becomes the master at a failover...

> Is this scenario acceptable?

So you have two separate failures in less time than an admin would
have time to react and manually bring up a new standby.

I’d argue that your system is not designed to be redundant enough if
that kind of scenario worries you. But the point where it all goes
wrong is where the “clusterware” decides to fail over automatically.
In that kind of setup synchronous_standalone_master should most likely
be off, but again, if the “clusterware” is not smart enough to make
the right decision then it should not act at all. Better to just log
critical alerts, send SMS to people, etc.

Makes sense? :)

Cheers,

/A

Attachment Content-Type Size
sync-standalone-v2.patch application/octet-stream 9.2 KB

From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 13:35:58
Message-ID: CABUevExC-ySt9-64Dak=wmMM2pqXVyb7fu_iGxb_0Eo5nBTyRw@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 26, 2011 at 13:51, Alexander Björnhagen
<alex(dot)bjornhagen(at)gmail(dot)com> wrote:
> Hello and thank you for your feedback I appreciate it.
>
> Updated patch: sync-standalone-v2.patch
>
> I am sorry to be spamming the list, but I just cleaned it up a little
> bit, wrote better comments and tried to move most of the logic into
> syncrep.c since that's where it belongs anyway, and also fixed a small
> bug where standalone mode was disabled/enabled at runtime via SIGHUP.

It's not spam when it's an updated patch ;)

>> Basically I like this whole idea, but I'd like to know why you think this functionality is required?
>
> How should a synchronous master handle the situation where all
> standbys have failed?
>
> Well, I think this is one of those cases where you could argue either
> way. Someone caring more about high availability of the system will
> want to let the master continue and just raise an alert to the
> operators. Someone looking for an absolute guarantee of data
> replication will say otherwise.

If you don't care about the absolute guarantee of data, why not just
use async replication? It's still going to replicate the data over to
the client as quickly as it can - which in the end is the same level
of guarantee that you get with this switch set, isn't it?

>> When is the replication mode switched from "standalone" to "sync"?
>
> Good question. Currently that happens when a standby server has
> connected and also been deemed suitable for synchronous commit by the
> master ( meaning that its name matches the config variable
> synchronous_standby_names ). So in a setup with both synchronous and
> asynchronous standbys, the master only considers the synchronous ones
> when deciding on standalone mode. The asynchronous standbys are
> “useless” to a synchronous master anyway.

But wouldn't an async standby still be a lot better than no standby at
all (standalone)?

>> The former might block the transactions for a long time until the standby has caught up with the master even though synchronous_standalone_master is enabled and a user wants to avoid such a downtime.
>
>> If we are talking about a network “glitch”, then the standby would take
>> a few seconds/minutes to catch up (not hours!), which is acceptable if
> you ask me.

So it's not Ok to block the master when the standby goes away, but it
is ok to block it when it comes back and catches up? The "goes away"
might be the same amount of time - or even shorter, depending on
exactly how the network works..

>> 1. While synchronous replication is running normally, replication
>> connection is closed because of
>>    a network outage.
>> 2. The master works standalone because of
>> synchronous_standalone_master=on and some
>>    new transactions are committed though their WAL records are not
>> replicated to the standby.
>> 3. The master crashes for some reason, the clusterware detects it and
>> triggers a failover.
>> 4. The standby which doesn't have recent committed transactions
>> becomes the master at a failover...
>
>> Is this scenario acceptable?
>
> So you have two separate failures in less time than an admin would
> have time to react and manually bring up a new standby.

Given that one is a network failure, and one is a node failure, I
don't see that being strange at all. For example, a HA network
environment might cause a short glitch when it's failing over to a
redundant node - enough to bring down the replication connection and
require it to reconnect (during which the master would be ahead of the
slave).

In fact, both might well be network failures - one just making the
master completely inaccessible, and thus triggering the need for a
failover.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


From: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
To: Magnus Hagander <magnus(at)hagander(dot)net>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 14:59:32
Message-ID: CAO-C5=mzpecz2r0GTNg7gMD9EpjuYxToWj3U2qDMghV0JjcQrQ@mail.gmail.com
Lists: pgsql-hackers

Interesting discussion!

>>> Basically I like this whole idea, but I'd like to know why you think this functionality is required?

>> How should a synchronous master handle the situation where all
>> standbys have failed?
>>
>> Well, I think this is one of those cases where you could argue either
>> way. Someone caring more about high availability of the system will
>> want to let the master continue and just raise an alert to the
>> operators. Someone looking for an absolute guarantee of data
>> replication will say otherwise.

>If you don't care about the absolute guarantee of data, why not just
>use async replication? It's still going to replicate the data over to
>the client as quickly as it can - which in the end is the same level
>of guarantee that you get with this switch set, isn't it?

This setup does still guarantee that if the master fails, then you can
still fail over to the standby without any possible data loss because
all data is synchronously replicated.

I want to replicate data with synchronous guarantee to a disaster site
*when possible*. If there is any chance that commits can be
replicated, then I’d like to wait for that.

If however the disaster node/site/link just plain fails and
replication goes down for an *indefinite* amount of time, then I want
the primary node to continue operating, raise an alert and deal with
that. Rather than have the whole system grind to a halt just because a
standby node failed.

It’s not so much that I don’t “care” about replication guarantee - then
I’d just use asynchronous and be done with it. My point is that it is
not always black and white and for some system setups you have to
balance a few things against each other.

If we were just talking about network glitches then I would be fine
with the current behavior because I do not believe they are
long-lasting anyway and they are also *quantifiable* which is a huge
bonus.

My primary focus is system availability but I also care about all that
other stuff too.

I want to have the cake and eat it at the same time, as we say in Sweden ;)

>>> When is the replication mode switched from "standalone" to "sync"?
>>
>> Good question. Currently that happens when a standby server has
>> connected and also been deemed suitable for synchronous commit by the
>> master ( meaning that its name matches the config variable
>> synchronous_standby_names ). So in a setup with both synchronous and
>> asynchronous standbys, the master only considers the synchronous ones
>> when deciding on standalone mode. The asynchronous standbys are
>> “useless” to a synchronous master anyway.

>But wouldn't an async standby still be a lot better than no standby at
>all (standalone)?

As soon as the standby comes back online, I want to wait for it to sync.

>>> The former might block the transactions for a long time until the standby has caught up with the master even though synchronous_standalone_master is enabled and a user wants to avoid such a downtime.
>
>> If we are talking about a network “glitch”, then the standby would take
>> a few seconds/minutes to catch up (not hours!), which is acceptable if
>> you ask me.

>So it's not Ok to block the master when the standby goes away, but it
>is ok to block it when it comes back and catches up? The "goes away"
>might be the same amount of time - or even shorter, depending on
>exactly how the network works..

To be honest I don’t have a very strong opinion here, we could go
either way, I just wanted to keep this patch as small as possible to
begin with. But again network glitches aren’t my primary concern in a
HA system because the amount of data that the standby lags behind is
possible to estimate and plan for.

Typically switch convergence takes on the order of 15-30 seconds and I
can thus typically assume that the restarted standby can recover that
gap in less than a minute. So once upon a blue moon when something
like that happens, commits would take up to say 1 minute longer. No
big deal IMHO.

>>> 1. While synchronous replication is running normally, replication
>>> connection is closed because of
>>> a network outage.
>>> 2. The master works standalone because of
>>> synchronous_standalone_master=on and some
>>> new transactions are committed though their WAL records are not
>>> replicated to the standby.
>>> 3. The master crashes for some reason, the clusterware detects it and
>>> triggers a failover.
>>> 4. The standby which doesn't have recent committed transactions
>>> becomes the master at a failover...

>>> Is this scenario acceptable?

>> So you have two separate failures in less time than an admin would
>> have time to react and manually bring up a new standby.

>Given that one is a network failure, and one is a node failure, I
>don't see that being strange at all. For example, a HA network
>environment might cause a short glitch when it's failing over to a
>redundant node - enough to bring down the replication connection and
>require it to reconnect (during which the master would be ahead of the
>slave).

>In fact, both might well be network failures - one just making the
>master completely inaccessible, and thus triggering the need for a
>failover.

You still have two failures on a two-node system.

If we are talking about a setup with only two nodes (which I am), then
I think it’s fair to limit the discussion to one failure (whatever
that might be! node, switch, disk, site, intra-site link, power, etc ...).

And in that case, there are only really three likely scenarios:
1) The master fails
2) The standby fails
3) Both fail (due to shared network gear, power, etc)

Yes, there might be a need to fail over and yes, the standby could
possibly have lagged behind the master but with my sync+standalone
mode, you reduce the risk of that compared to just async mode.

So decrease the risk of data loss (case 1), increase system
availability/uptime (case 2).

That is actually a pretty good description of my goal here :)

Cheers,

/A


From: Dimitri Fontaine <dimitri(at)2ndQuadrant(dot)fr>
To: Magnus Hagander <magnus(at)hagander(dot)net>
Cc: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 15:05:31
Message-ID: 87k45jpe4k.fsf@hi-media-techno.com
Lists: pgsql-hackers

Magnus Hagander <magnus(at)hagander(dot)net> writes:
> If you don't care about the absolute guarantee of data, why not just
> use async replication? It's still going to replicate the data over to
> the client as quickly as it can - which in the end is the same level
> of guarantee that you get with this switch set, isn't it?

Isn't that equivalent to setting synchronous_standby_names to '' and
reloading the server?
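
I.e., something like this on the master:

synchronous_standby_names = ''   # was e.g. 'tx0113'

$ pg_ctl reload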

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 15:23:32
Message-ID: CABUevEwf9Zd6EypK7m=78MRDm-8Gn8G-w0iOMKqCs-3fY4sZcA@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 26, 2011 at 15:59, Alexander Björnhagen
<alex(dot)bjornhagen(at)gmail(dot)com> wrote:
>>>> Basically I like this whole idea, but I'd like to know why you think this functionality is required?
>
>>> How should a synchronous master handle the situation where all
>>> standbys have failed?
>>>
>>> Well, I think this is one of those cases where you could argue either
>>> way. Someone caring more about high availability of the system will
>>> want to let the master continue and just raise an alert to the
>>> operators. Someone looking for an absolute guarantee of data
>>> replication will say otherwise.
>
>>If you don't care about the absolute guarantee of data, why not just
>>use async replication? It's still going to replicate the data over to
>>the client as quickly as it can - which in the end is the same level
>>of guarantee that you get with this switch set, isn't it?
>
> This setup does still guarantee that if the master fails, then you can
> still fail over to the standby without any possible data loss because
> all data is synchronously replicated.

Only if you didn't have a network hitch, or if your slave was down.

Which basically means it doesn't *guarantee* it.

> I want to replicate data with synchronous guarantee to a disaster site
> *when possible*. If there is any chance that commits can be
> replicated, then I’d like to wait for that.

There's always a chance, it's just about how long you're willing to wait ;)

Another thought could be to have something like a "sync_wait_timeout",
saying "i'm willing to wait <n> seconds for the syncrep to be caught
up. If nobody is cauth up within that time,then I can back down to
async mode/"standalone" mode". That way, data availaibility wouldn't
be affected by short-time network glitches.
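
I.e., something like this (name and exact semantics just a sketch, not
a worked-out proposal):

sync_wait_timeout = 30s   # wait at most 30s for a sync standby to ack,
                          # then fall back to standalone mode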

> If however the disaster node/site/link just plain fails and
> replication goes down for an *indefinite* amount of time, then I want
> the primary node to continue operating, raise an alert and deal with
> that. Rather than have the whole system grind to a halt just because a
> standby node failed.

If the standby node failed and can be determined to actually be failed
(by say a cluster manager), you can always have your cluster software
(or DBA, of course) turn it off by editing the config setting and
reloading. Doing it that way you can actually *verify* that the site
is gone for an indefinite amount of time.

> It’s not so much that I don’t “care” about replication guarantee - then
> I’d just use asynchronous and be done with it. My point is that it is
> not always black and white and for some system setups you have to
> balance a few things against each other.

Agreed in principle :-)

> If we were just talking about network glitches then I would be fine
> with the current behavior because I do not believe they are
> long-lasting anyway and they are also *quantifiable* which is a huge
> bonus.

But the proposed switch doesn't actually make it possible to
differentiate between these "non-long-lasting" issues and long-lasting
ones, does it? We might want an interface that actually does...

> My primary focus is system availability but I also care about all that
> other stuff too.
>
> I want to have the cake and eat it at the same time, as we say in Sweden ;)

Of course - we all do :D

>>>> When is the replication mode switched from "standalone" to "sync"?
>>>
>>> Good question. Currently that happens when a standby server has
>>> connected and also been deemed suitable for synchronous commit by the
>>> master ( meaning that its name matches the config variable
>>> synchronous_standby_names ). So in a setup with both synchronous and
>>> asynchronous standbys, the master only considers the synchronous ones
>>> when deciding on standalone mode. The asynchronous standbys are
>>> “useless” to a synchronous master anyway.
>
>>But wouldn't an async standby still be a lot better than no standby at
>>all (standalone)?
>
> As soon as the standby comes back online, I want to wait for it to sync.

I guess I just find this very inconsistent. You're willing to wait,
but only sometimes. You're not willing to wait when it goes down, but
you are willing to wait when it comes back. I don't see why this
should be different, and I don't see how you can reliably
differentiate between these two.

>>>> The former might block the transactions for a long time until the standby has caught up with the master even though synchronous_standalone_master is enabled and a user wants to avoid such a downtime.
>>
>>> If we are talking about a network “glitch”, then the standby would take
>>> a few seconds/minutes to catch up (not hours!), which is acceptable if
>>> you ask me.
>
>>So it's not Ok to block the master when the standby goes away, but it
>is ok to block it when it comes back and catches up? The "goes away"
>>might be the same amount of time - or even shorter, depending on
>>exactly how the network works..
>
> To be honest I don’t have a very strong opinion here, we could go
> either way, I just wanted to keep this patch as small as possible to
> begin with. But again network glitches aren’t my primary concern in a
> HA system because the amount of data that the standby lags behind is
> possible to estimate and plan for.
>
> Typically switch convergence takes on the order of 15-30 seconds and I
> can thus typically assume that the restarted standby can recover that
> gap in less than a minute. So once upon a blue moon when something
> like that happens, commits would take up to say 1 minute longer. No
> big deal IMHO.

What about the slave rebooting, for example? That'll usually be pretty
quick too - so you'd be ok waiting for that. But your patch doesn't
let you wait for that - it will switch to standalone mode right away?
But if it takes 30 seconds to reboot, and then 30 seconds to catch up,
you are *not* willing to wait for the first 30 seconds, but you *are*
willing to wait for the second? Just seems strange to me, I guess...

>>>> 1. While synchronous replication is running normally, replication
>>>> connection is closed because of
>>>>    a network outage.
>>>> 2. The master works standalone because of
>>>> synchronous_standalone_master=on and some
>>>>    new transactions are committed though their WAL records are not
>>>> replicated to the standby.
>>>> 3. The master crashes for some reason, the clusterware detects it and
>>>> triggers a failover.
>>>> 4. The standby which doesn't have recent committed transactions
>>>> becomes the master at a failover...
>
>>>> Is this scenario acceptable?
>
>>> So you have two separate failures in less time than an admin would
>>> have time to react and manually bring up a new standby.
>
>>Given that one is a network failure, and one is a node failure, I
>>don't see that being strange at all. For example, a HA network
>>environment might cause a short glitch when it's failing over to a
>>redundant node - enough to bring down the replication connection and
>>require it to reconnect (during which the master would be ahead of the
>>slave).
>
>>In fact, both might well be network failures - one just making the
>>master completely inaccessible, and thus triggering the need for a
>>failover.
>
> You still have two failures on a two-node system.

Yes - but only one (or zero) of them is actually to any of the nodes :-)

> If we are talking about a setup with only two nodes (which I am), then
> I think it’s fair to limit the discussion to one failure (whatever
> that might be! node, switch, disk, site, intra-site link, power, etc ...).
>
> And in that case, there are only really three likely scenarios:
> 1) The master fails
> 2) The standby fails
> 3) Both fail (due to shared network gear, power, etc)
>
> Yes, there might be a need to fail over and yes, the standby could
> possibly have lagged behind the master but with my sync+standalone
> mode, you reduce the risk of that compared to just async mode.
>
> So decrease the risk of data loss (case 1), increase system
> availability/uptime (case 2).
>
> That is actually a pretty good description of my goal here :)
>
> Cheers,
>
> /A

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


From: Guillaume Lelarge <guillaume(at)lelarge(dot)info>
To: Magnus Hagander <magnus(at)hagander(dot)net>
Cc: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 16:18:26
Message-ID: 1324916306.12762.39.camel@localhost.localdomain
Lists: pgsql-hackers

On Mon, 2011-12-26 at 16:23 +0100, Magnus Hagander wrote:
> On Mon, Dec 26, 2011 at 15:59, Alexander Björnhagen
> <alex(dot)bjornhagen(at)gmail(dot)com> wrote:
> >>>> Basically I like this whole idea, but I'd like to know why you think this functionality is required?
> >
> >>> How should a synchronous master handle the situation where all
> >>> standbys have failed?
> >>>
> >>> Well, I think this is one of those cases where you could argue either
> >>> way. Someone caring more about high availability of the system will
> >>> want to let the master continue and just raise an alert to the
> >>> operators. Someone looking for an absolute guarantee of data
> >>> replication will say otherwise.
> >
> >>If you don't care about the absolute guarantee of data, why not just
> >>use async replication? It's still going to replicate the data over to
> >>the client as quickly as it can - which in the end is the same level
> >>of guarantee that you get with this switch set, isn't it?
> >
> > This setup does still guarantee that if the master fails, then you can
> > still fail over to the standby without any possible data loss because
> > all data is synchronously replicated.
>
> Only if you didn't have a network hitch, or if your slave was down.
>
> Which basically means it doesn't *guarantee* it.
>

It doesn't guarantee it, but it increases the master availability.
That's the kind of customization some users would like to have. Though I
find it weird to introduce another GUC there. Why not add a new enum
value to synchronous_commit, such as local_only_if_slaves_unavailable
(yeah, the enum value is completely stupid, but you get my point)?
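
I.e., something like this (again, with a better name):

synchronous_commit = local_only_if_slaves_unavailable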

--
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org


From: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
To: Magnus Hagander <magnus(at)hagander(dot)net>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 17:01:18
Message-ID: CAO-C5==1qyrt3gD3+h1ehoo9UMx8ZTTMcdOyULAfAB3=WafGLQ@mail.gmail.com
Lists: pgsql-hackers

Hmm,

I suppose this conversation would lend itself better to a whiteboard
or maybe over a few beers instead of via e-mail ...

>>>>> Basically I like this whole idea, but I'd like to know why you think this functionality is required?

>>>> How should a synchronous master handle the situation where all
>>>> standbys have failed?
>>>>
>>>> Well, I think this is one of those cases where you could argue either
>>>> way. Someone caring more about high availability of the system will
>>>> want to let the master continue and just raise an alert to the
>>>> operators. Someone looking for an absolute guarantee of data
>>>> replication will say otherwise.

>>>If you don't care about the absolute guarantee of data, why not just
>>>use async replication? It's still going to replicate the data over to
>>>the client as quickly as it can - which in the end is the same level
>>>of guarantee that you get with this switch set, isn't it?

>> This setup does still guarantee that if the master fails, then you can
>> still fail over to the standby without any possible data loss because
>> all data is synchronously replicated.

>Only if you didn't have a network hitch, or if your slave was down.

>Which basically means it doesn't *guarantee* it.

True. In my two-node system, I’m willing to take that risk when my
only standby has failed.

Most likely (compared to any other scenario), we can re-gain
redundancy before another failure occurs.

Say each one of your nodes can fail once a year. Most people have a
much better track record than that with their production
machines/network/etc, but just as an example. Then on any given day
there is a 0.27% chance that a given node will fail (1/365*100 = 0.27),
right?

Then the probability of both failing on the same day is (0.27%)^2 ≈
0.00075%, or about 1 in 133,000. And given that it would take only a
few hours tops to restore redundancy, it is even less of a chance than
that because you would not be exposed for the entire day.
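
Spelling that arithmetic out:

p(one node fails on a given day) = 1/365                 ≈ 0.27 %
p(both fail on the same day)     = (1/365)^2 = 1/133225  ≈ 0.00075 %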

So, to be a bit blunt about it, and I hope I don’t come off as rude
here, this dual-failure or creeping-doom type scenario on a two-node
system is probably not relevant but more of an academic question.

>> I want to replicate data with synchronous guarantee to a disaster site
>> *when possible*. If there is any chance that commits can be
>> replicated, then I’d like to wait for that.

>There's always a chance, it's just about how long you're willing to wait ;)

Yes, exactly. When I can estimate it I’m willing to wait.

>Another thought could be to have something like a "sync_wait_timeout",
>saying "I'm willing to wait <n> seconds for the syncrep to be caught
>up. If nobody is caught up within that time, then I can back down to
>async mode/"standalone" mode". That way, data availability wouldn't
>be affected by short-time network glitches.

This was also mentioned in the previous thread I linked to,
“replication_timeout”:

http://archives.postgresql.org/pgsql-hackers/2010-10/msg01009.php

In a HA environment you have redundant networking and bonded
interfaces on each node. The only “glitch” would really be if a switch
failed over and that’s a pretty big “if” right there.

>> If however the disaster node/site/link just plain fails and
>> replication goes down for an *indefinite* amount of time, then I want
>> the primary node to continue operating, raise an alert and deal with
>> that. Rather than have the whole system grind to a halt just because a
>> standby node failed.

>If the standby node failed and can be determined to actually be failed
>(by say a cluster manager), you can always have your cluster software
>(or DBA, of course) turn it off by editing the config setting and
>reloading. Doing it that way you can actually *verify* that the site
>is gone for an indefinite amount of time.

The system might as well do this for me when the standby gets
disconnected instead of halting the master.

>> If we were just talking about network glitches then I would be fine
>> with the current behavior because I do not believe they are
>> long-lasting anyway and they are also *quantifiable* which is a huge
>> bonus.

>But the proposed switch doesn't actually make it possible to
>differentiate between these "non-long-lasting" issues and long-lasting
>ones, does it? We might want an interface that actually does...

“replication_timeout”, where the primary disconnects the WAL sender
after a timeout, together with “synchronous_standalone_master”, which
tells the primary it can continue anyway when that happens, allows
exactly that. This would then be the first part towards that, but I wanted
to start out small and I personally think it is sufficient to draw the
line at TCP disconnect of the standby.
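
I.e., roughly this combination (replication_timeout already exists in
9.1; the second parameter is the one this patch proposes):

replication_timeout = 30s            # disconnect WAL senders that go dead
synchronous_standalone_master = on   # ... and then continue standalone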

>>>>> When is the replication mode switched from "standalone" to "sync"?
>>>>
>>>> Good question. Currently that happens when a standby server has
>>>> connected and also been deemed suitable for synchronous commit by the
>>>> master ( meaning that its name matches the config variable
>>>> synchronous_standby_names ). So in a setup with both synchronous and
>>>> asynchronous standbys, the master only considers the synchronous ones
>>>> when deciding on standalone mode. The asynchronous standbys are
>>>> “useless” to a synchronous master anyway.
>
>>>But wouldn't an async standby still be a lot better than no standby at
>>>all (standalone)?
>
>> As soon as the standby comes back online, I want to wait for it to sync.

>I guess I just find this very inconsistent. You're willing to wait,
>but only sometimes. You're not willing to wait when it goes down, but
>you are willing to wait when it comes back. I don't see why this
>should be different, and I don't see how you can reliably
>differentiate between these two.

When the wait is quantifiable, I want to wait (like a connected
standby that is in the process of catching up). When it is not (like
when the remote node disappeared and the master has no way of knowing
for how long), I do not want to wait.

In both cases I want to send off alerts, get people involved and fix
the problem causing this; it is not something that should happen
often.

>>>>> The former might block the transactions for a long time until the standby has caught up with the master even though synchronous_standalone_master is enabled and a user wants to avoid such a downtime.
>>
>>>> If we are talking about a network “glitch”, then the standby would take
>>>> a few seconds/minutes to catch up (not hours!), which is acceptable if
>>>> you ask me.
>
>>>So it's not Ok to block the master when the standby goes away, but it
>>>is ok to block it when it comes back and catches up? The "goes away"
>>>might be the same amount of time - or even shorter, depending on
>>>exactly how the network works..
>>
>> To be honest I don’t have a very strong opinion here, we could go
>> either way, I just wanted to keep this patch as small as possible to
>> begin with. But again network glitches aren’t my primary concern in a
>> HA system because the amount of data that the standby lags behind is
>> possible to estimate and plan for.
>>
>> Typically switch convergence takes on the order of 15-30 seconds and I
>> can thus typically assume that the restarted standby can recover that
>> gap in less than a minute. So once upon a blue moon when something
>> like that happens, commits would take up to say 1 minute longer. No
>> big deal IMHO.

>What about the slave rebooting, for example? That'll usually be pretty
>quick too - so you'd be ok waiting for that. But your patch doesn't
>let you wait for that - it will switch to standalone mode right away?
>But if it takes 30 seconds to reboot, and then 30 seconds to catch up,
>you are *not* willing to wait for the first 30 seconds, but you *are*
>willing to wait for the second? Just seems strange to me, I guess...

That’s exactly right. While the standby is booting, the master has no
way of knowing what is going on with that standby, so I don’t want
to wait.

When the standby has managed to boot, connect and started to sync up
the data that it was lagging behind, then I do want to wait because I
know that it will not take too long before it has caught up.

>>>>> 1. While synchronous replication is running normally, replication
>>>>> connection is closed because of
>>>>> a network outage.
>>>>> 2. The master works standalone because of
>>>>> synchronous_standalone_master=on and some
>>>>> new transactions are committed though their WAL records are not
>>>>> replicated to the standby.
>>>>> 3. The master crashes for some reason, the clusterware detects it and
>>>>> triggers a failover.
>>>>> 4. The standby which doesn't have recent committed transactions
>>>>> becomes the master at a failover...

>>>>> Is this scenario acceptable?

>>>> So you have two separate failures in less time than an admin would
>>>> have time to react and manually bring up a new standby.
>
>>>Given that one is a network failure, and one is a node failure, I
>>>don't see that being strange at all. For example, a HA network
>>>environment might cause a short glitch when it's failing over to a
>>>redundant node - enough to bring down the replication connection and
>>>require it to reconnect (during which the master would be ahead of the
>>>slave).
>>>
>>>In fact, both might well be network failures - one just making the
>>>master completely inaccessble, and thus triggering the need for a
>>>failover.
>>
>> You still have two failures on a two-node system.

>Yes - but only one (or zero) of them is actually to any of the nodes :-)

It doesn’t matter from the viewpoint of our primary and standby
servers. If the link to the standby fails so that it is unreachable
from the master, then the master may consider that node as failed. It
does not matter that the component which failed was not part of that
physical machine; it still rendered the node useless because it is no longer
reachable.

So in the previous example where one network link fails and then one
node fails, I see that as two separate failures. If it is possible to
take out both primary and standby servers with only one component
failing (shared network/power/etc), then the system is not designed
right because there is a single point of failure, and no software in
the world will ever save you from that.

That’s why I tried to limit ourselves to the simple use-case where
either the standby or the primary node fails. If both fail then all
bets are off, you’re going to have a very bad day at the office
anyway.

> If we are talking about a setup with only two nodes (which I am), then
> I think it’s fair to limit the discussion to one failure (whatever
> that might be! node, switch, disk, site, intra-site link, power, etc ...).
>
> And in that case, there are only really three likely scenarios:
> 1) The master fails
> 2) The standby fails
> 3) Both fail (due to shared network gear, power, etc)
>
> Yes, there might be a need to fail over and yes, the standby could
> possibly have lagged behind the master but with my sync+standalone
> mode, you reduce the risk of that compared to just async mode.
>
> So decrease the risk of data loss (case 1), increase system
> availability/uptime (case 2).
>
> That is actually a pretty good description of my goal here :)
>
> Cheers,
>
> /A


From: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
To: Guillaume Lelarge <guillaume(at)lelarge(dot)info>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 17:06:27
Message-ID: CAO-C5=nmy_h=z=16KoadBRQCjrrct32MPM+NQqm8Z5absHfkUA@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 26, 2011 at 5:18 PM, Guillaume Lelarge
<guillaume(at)lelarge(dot)info> wrote:
> On Mon, 2011-12-26 at 16:23 +0100, Magnus Hagander wrote:
>> On Mon, Dec 26, 2011 at 15:59, Alexander Björnhagen
>> <alex(dot)bjornhagen(at)gmail(dot)com> wrote:
>> >>>> Basically I like this whole idea, but I'd like to know why you think this functionality is required?
>> >
>> >>> How should a synchronous master handle the situation where all
>> >>> standbys have failed?
>> >>>
>> >>> Well, I think this is one of those cases where you could argue either
>> >>> way. Someone caring more about high availability of the system will
>> >>> want to let the master continue and just raise an alert to the
>> >>> operators. Someone looking for an absolute guarantee of data
>> >>> replication will say otherwise.
>> >
>> >>If you don't care about the absolute guarantee of data, why not just
>> >>use async replication? It's still going to replicate the data over to
>> >>the client as quickly as it can - which in the end is the same level
>> >>of guarantee that you get with this switch set, isn't it?
>> >
>> > This setup does still guarantee that if the master fails, then you can
>> > still fail over to the standby without any possible data loss because
>> > all data is synchronously replicated.
>>
>> Only if you didn't have a network hitch, or if your slave was down.
>>
>> Which basically means it doesn't *guarantee* it.
>>
>
> It doesn't guarantee it, but it increases the master availability.

Yes exactly.

> That's the kind of customization some users would like to have. Though I
> find it weird to introduce another GUC there. Why not add a new enum
> value to synchronous_commit, such as local_only_if_slaves_unavailable
> (yeah, the enum value is completely stupid, but you get my point)?

You are right, an enum makes much more sense, and the patch would be
much smaller as well, so I’ll rework that bit.

/A


From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-26 17:32:42
Message-ID: CABUevEz4SizFKVb=K+fqggeuP2Y9QSUgsKGiooa6tZ30uGiZKg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 26, 2011 at 18:01, Alexander Björnhagen
<alex(dot)bjornhagen(at)gmail(dot)com> wrote:
> Hmm,
>
> I suppose this conversation would lend itself better to a whiteboard
> or maybe over a few beers instead of via e-mail ...

mmm. beer... :-)

>>>>> Well, I think this is one of those cases where you could argue either
>>>>> way. Someone caring more about high availability of the system will
>>>>> want to let the master continue and just raise an alert to the
>>>>> operators. Someone looking for an absolute guarantee of data
>>>>> replication will say otherwise.
>
>>>>If you don't care about the absolute guarantee of data, why not just
>>>>use async replication? It's still going to replicate the data over to
>>>>the client as quickly as it can - which in the end is the same level
>>>>of guarantee that you get with this switch set, isn't it?
>
>>> This setup does still guarantee that if the master fails, then you can
>>> still fail over to the standby without any possible data loss because
>>> all data is synchronously replicated.
>
>>Only if you didn't have a network hitch, or if your slave was down.
>
>>Which basically means it doesn't *guarantee* it.
>
> True. In my two-node system, I’m willing to take that risk when my
> only standby has failed.
>
> Most likely (compared to any other scenario), we can re-gain
> redundancy before another failure occurs.
>
> Say each one of your nodes can fail once a year. Most people have a
> much better track record than that with their production
> machines/network/etc, but just as an example. Then on any given day
> there is a 0.27% chance that a given node will fail (1/365*100 = 0.27),
> right?
>
> Then the probability of both failing on the same day is (0.27%)^2 ≈
> 0.00075%, or about 1 in 133,000. And given that it would take only a
> few hours tops to restore redundancy, it is even less of a chance than
> that because you would not be exposed for the entire day.

That is assuming the failures are actually independent. In my
experience, they're usually not.

But that's diverging into math, which really isn't my strong side here :D

> So, to be a bit blunt about it, and I hope I don’t come off as rude
> here, this dual-failure or creeping-doom type scenario on a two-node
> system is probably not relevant but more of an academic question.

Given how many times I've seen it, I'm going to respectfully disagree
with that ;)

That said, I agree it's not necessarily reasonable to try to defend
against that in a two node cluster. You can always make it three-node
if you need to do that. I'm worried that the interface seems a bit
fragile and that it's hard to "be sure". Predictable interfaces are
good.. :-)

>>> I want to replicate data with synchronous guarantee to a disaster site
>>> *when possible*. If there is any chance that commits can be
>>> replicated, then I’d like to wait for that.
>
>>There's always a chance, it's just about how long you're willing to wait ;)
>
> Yes, exactly. When I can estimate it I’m willing to wait.
>
>>Another thought could be to have something like a "sync_wait_timeout",
>>saying "I'm willing to wait <n> seconds for the syncrep to be caught
>>up. If nobody is caught up within that time, then I can back down to
>>async mode/"standalone" mode". That way, data availability wouldn't
>>be affected by short-time network glitches.
>
> This was also mentioned in the previous thread I linked to,
> “replication_timeout”:
>
> http://archives.postgresql.org/pgsql-hackers/2010-10/msg01009.php

Hmm. That link was gone from the thread when I read it - I missed it
completely. Sorry about that.

So reading that thread, it really only takes care of one of the cases
- the replication_timeout only fires if the slave "goes dead". It
could be useful if a similar timeout would apply if I did a "pg_ctl
restart" on the slave - making the master wait <n> seconds before
going into standalone mode. The way I read the proposal now, the
master would immediately go into standalone mode if the standby
actually *closes* the connection instead of timing it out?

> In a HA environment you have redundant networking and bonded
> interfaces on each node. The only “glitch” would really be if a switch
> failed over and that’s a pretty big “if” right there.

Switches fail a lot. And there are a lot more things in between that
can fail. I don't think it's such a big if - network issues are by far
the most common cause of HA environments failing that I've seen lately.

>>> If however the disaster node/site/link just plain fails and
>>> replication goes down for an *indefinite* amount of time, then I want
>>> the primary node to continue operating, raise an alert and deal with
>>> that. Rather than have the whole system grind to a halt just because a
>>> standby node failed.
>
>>If the standby node failed and can be determined to actually be failed
>>(by say a cluster manager), you can always have your cluster software
>>(or DBA, of course) turn it off by editing the config setting and
>>reloading. Doing it that way you can actually *verify* that the site
>>is gone for an indefinite amount of time.
>
> The system might as well do this for me when the standby gets
> disconnected instead of halting the master.

I guess two ways of seeing it - the flip of that coin is "the system
can already do this for you"...

>>> If we were just talking about network glitches then I would be fine
>>> with the current behavior because I do not believe they are
>>> long-lasting anyway and they are also *quantifiable* which is a huge
>>> bonus.
>
>>But the proposed switch doesn't actually make it possible to
>>differentiate between these "non-long-lasting" issues and long-lasting
>>ones, does it? We might want an interface that actually does...
>
> “replication_timeout”, where the primary disconnects the WAL sender
> after a timeout, together with “synchronous_standalone_master”, which
> tells the primary it can continue anyway when that happens, allows
> exactly that. This would then be the first part towards that, but I wanted
> to start out small and I personally think it is sufficient to draw the
> line at TCP disconnect of the standby.

Maybe it is. It still seems fragile to me.

>>>>But wouldn't an async standby still be a lot better than no standby at
>>>>all (standalone)?
>>
>>> As soon as the standby comes back online, I want to wait for it to sync.
>
>>I guess I just find this very inconsistent. You're willing to wait,
>>but only sometimes. You're not willing to wait when it goes down, but
>>you are willing to wait when it comes back. I don't see why this
>>should be different, and I don't see how you can reliably
>>differentiate between these two.
>
> When the wait is quantifiable, I want to wait (like a connected
> standby that is in the process of catching up). When it is not (like
> when the remote node disappeared and the master has no way of knowing
> for how long), I do not want to wait.
> In both cases I want to send off alerts, get people involved and fix
> the problem causing this; it is not something that should happen
> often.

Of course.

>>What about the slave rebooting, for example? That'll usually be pretty
>>quick too - so you'd be ok waiting for that. But your patch doesn't
>>let you wait for that - it will switch to standalone mode right away?
>>But if it takes 30 seconds to reboot, and then 30 seconds to catch up,
>>you are *not* willing to wait for the first 30 seconds, but you *are*
>>willing to wait for the second? Just seems strange to me, I guess...
>
> That’s exactly right. While the standby is booting, the master has no
> way of knowing what is going on with that standby, so I don’t want
> to wait.
>
> When the standby has managed to boot, connect and started to sync up
> the data that it was lagging behind, then I do want to wait because I
> know that it will not take too long before it has caught up.

Yeah, that does make sense, when you look at it like that.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


From: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
To: Magnus Hagander <magnus(at)hagander(dot)net>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2011-12-27 11:39:22
Message-ID: CAO-C5==qfoWkiLhs0SEKmPHsQ-kmWNNpQwVa1JmQOh1J3OZong@mail.gmail.com
Lists: pgsql-hackers

Okay,

Here’s version 3 then, which piggy-backs on the existing flag:

synchronous_commit = on | off | local | fallback

Where “fallback” now means “fall back from sync replication when no
(suitable) standbys are connected”.

This was done on input from Guillaume Lelarge.
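
A two-node config would then look along the lines of (sketch):

synchronous_commit = fallback
synchronous_standby_names = 'tx0113'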

> That said, I agree it's not necessarily reasonable to try to defend
> against that in a two node cluster.

That’s what I’ve been trying to say all along, but I didn’t give enough
context before, so I understand we took a turn there.

You can always walk up to any setup and say “hey, if you nuke that
site from orbit and crash that other thing, and ...” ;) I’m just
kidding of course but you get the point. Nothing is absolute.

And so we get back to the three likelihoods in our two-node setup:

1. The master fails
- Okay, promote the standby

2. The standby fails
- Okay, the system still works but you no longer have data
redundancy. Deal with it.

3. Both fail, together or one after the other.

I’ve stated that 1 and 2 together cover way more than 99.9% of what’s
expected in my setup on any given day.

But 3. is what we’ve been talking about ... And well in that case
there is no reason to just go ahead and promote a standby because,
granted, it could be lagging behind if the master decided to switch to
standalone mode just before going down itself.

As long as you do not prematurely or rather instinctively promote the
standby when it has *possibly* lagged behind, you’re good and there is
no risk of data loss. The data might be sitting on a crashed or
otherwise unavailable master, but it’s not lost. Promoting the standby
however is basically saying “forget the master and its data, continue
from where the standby is currently at”.

Now granted this is operationally harder/more complicated than just
synchronous replication where you can always, in any case, just
promote the standby after a master failure, knowing that all data is
guaranteed to be replicated.

> I'm worried that the interface seems a bit
> fragile and that it's hard to "be sure".

With this setup, you can’t promote the standby without first checking
if the replication link was disconnected prior to the master failure.
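
E.g., before promoting, check the old master’s log (wherever it ends up
on your system) for the switch to standalone mode:

$ grep "not waiting for standby synchronization" /path/to/master.log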

For me, the benefits outweigh this one drawback because I get more
standby replication guarantee than async replication and more master
availability than sync replication in the most plausible outcomes.

Cheers,

/A

Attachment Content-Type Size
sync-standalone-v3.patch application/octet-stream 10.5 KB

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-04 02:22:22
Message-ID: CA+TgmoZsOKW+9GJo1h=MJt6+bnzfMVYbfwJY96ZS4ZHBXYGSLQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Dec 27, 2011 at 6:39 AM, Alexander Björnhagen
<alex(dot)bjornhagen(at)gmail(dot)com> wrote:
> And so we get back to the three likelihoods in our two-node setup :
>
> 1.The master fails
>  - Okay, promote the standby
>
> 2.The standby fails
>  - Okay, the system still works but you no longer have data
> redundancy. Deal with it.
>
> 3.Both fail, together or one after the other.

It seems to me that if you are happy with #2, you don't really need to
enable sync rep in the first place.

At any rate, even without multiple component failures, this
configuration makes it pretty easy to lose durability (which is the
only point of having sync rep in the first place). Suppose the NIC
card on the master is the failing component. If it happens to drop
the TCP connection to the clients just before it drops the connection
to the standby, the standby will have all the transactions, and you
can fail over just fine. If it happens to drop the TCP connection to
the standby just before it drops the connection to the clients, the standby
will not have all the transactions, and failover will lose some
transactions - and presumably you enabled this feature in the first
place precisely to prevent that sort of occurrence.

I do think that it might be useful to have this if there's a
configurable timeout involved - that way, people could say, well, I'm
OK with maybe losing transactions if the standby has been gone for X
seconds. But if the only possible behavior is equivalent to a
zero-second timeout I don't think it's too useful. It's basically
just going to lead people to believe that their data is more secure
than it really is, which IMHO is not helpful.
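
To make that concrete, a purely hypothetical sketch of such an
interface (the timeout parameter below does not exist in any posted
patch, and the standby name is made up):

    # hypothetical postgresql.conf sketch
    synchronous_standby_names = 'standby1'
    synchronous_commit = on
    synchronous_standalone_timeout = 30s  # hypothetical: stop waiting
                                          # and fall back to async if no
                                          # sync standby responds for 30s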

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Aidan Van Dyk <aidan(at)highrise(dot)ca>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-04 14:28:59
Message-ID: CAC_2qU91iUN6DMVa0=pL1SG7tacaK5Bw6UDW6sCWYN0f0BwYnw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Jan 3, 2012 at 9:22 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> It seems to me that if you are happy with #2, you don't really need to
> enable sync rep in the first place.
>
> At any rate, even without multiple component failures, this
> configuration makes it pretty easy to lose durability (which is the
> only point of having sync rep in the first place).  Suppose the NIC
> card on the master is the failing component.  If it happens to drop
> the TCP connection to the clients just before it drops the connection
> to the standby, the standby will have all the transactions, and you
> can fail over just fine.  If it happens to drop the TCP connection to
> the standby just before it drops the connection to the clients, the standby
> will not have all the transactions, and failover will lose some
> transactions - and presumably you enabled this feature in the first
> place precisely to prevent that sort of occurrence.
>
> I do think that it might be useful to have this if there's a
> configurable timeout involved - that way, people could say, well, I'm
> OK with maybe losing transactions if the standby has been gone for X
> seconds.  But if the only possible behavior is equivalent to a
> zero-second timeout I don't think it's too useful.  It's basically
> just going to lead people to believe that their data is more secure
> than it really is, which IMHO is not helpful.

So, I'm a big fan of syncrep guaranteeing its guarantees. To me,
that's the whole point. Having it "fall out of sync rep" at any point
*automatically* seems to be exactly counter to the point of sync rep.

That said, I'm also a big fan of monitoring everything as well as I could...

I'd love a "hook" script that was run if the sync-rep state ever
changed (heck, I'd even like it if it just chose a new sync standby).

Even better, is there a way we could start injecting "notify" events
into the cluster on these types of changes? Especially now that
notify events can take payloads, it means I don't have to keep
constantly polling the database to see if it thinks it's connected,
etc.
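
As a sketch of what the consuming side could look like if the server
did emit such events (the channel name and payload are invented;
nothing like this exists today):

    -- hypothetical monitoring session
    LISTEN syncrep_state;
    -- the client would then receive notifications with payloads such
    -- as 'standby1 disconnected, master now standalone', instead of
    -- polling pg_stat_replication in a loop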

a.

--
Aidan Van Dyk                                             Create like a god,
aidan(at)highrise(dot)ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Aidan Van Dyk <aidan(at)highrise(dot)ca>
Cc: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-04 18:04:26
Message-ID: CA+TgmoZHTdnUdXp1eka8grjFibXdGisptmjc5KOOUZQwC7_wkQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Jan 4, 2012 at 9:28 AM, Aidan Van Dyk <aidan(at)highrise(dot)ca> wrote:
> I'd love a "hook" script that was run if the sync-rep state ever
> changed (heck, I'd even like it if it just chose a new sync standby).

That seems useful. I don't think the current code quite knows its own
state; we seem to have each walsender recompute who the boss is, and
if you query pg_stat_replication, that redoes the same calculation. I
can't shake the feeling that there's a better way... which would also
facilitate this.

> Even better, is there a way we could start injecting "notify" events
> into the cluster on these types of changes?  Especially now that
> notify events can take payloads, it means I don't have to keep
> constantly polling the database to see if it thinks it's connected,
> etc.

I like this idea, too.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Aidan Van Dyk <aidan(at)highrise(dot)ca>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-13 06:31:23
Message-ID: CAHGQGwHtPB8wN+Tn6vD21VecLF-h1=UvX6mZ+_5bU88gy_MvDw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Jan 4, 2012 at 11:28 PM, Aidan Van Dyk <aidan(at)highrise(dot)ca> wrote:
> So, I'm a big fan of syncrep guaranteeing its guarantees.  To me,
> that's the whole point.  Having it "fall out of sync rep" at any point
> *automatically* seems to be exactly counter to the point of sync rep.

Yes, what Alexander proposed is not sync rep. It's a new replication
mode. If we adopt the proposal, we have three replication modes: async,
sync, and the one Alexander proposed, like Oracle DataGuard provides.
If you need the guarantee which sync rep provides, you can choose sync
as the replication mode.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


From: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
To: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
Cc: Aidan Van Dyk <aidan(at)highrise(dot)ca>, Robert Haas <robertmhaas(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-13 10:30:40
Message-ID: CAO-C5=k1AB1Md3_cAS6zKZW54sOWcMkzqveyvCLg7EHZJadO4Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

At this point I feel that this new functionality might be a bit
overkill for postgres; maybe it's better to stay lean and mean rather
than add a controversial feature like this.

I also agree that a more general replication timeout variable would be
more useful to a larger audience, but that would in my view add more
complexity to the replication code, which is quite simple and
understandable right now ...

Anyway, my backup plan was to achieve the same thing by triggering on
the logging produced on the primary server and switching to async mode
when detecting that the standby replication link has failed (and then
back again when it is restored). In effect I would put a replication
monitor on the outside of the server instead of embedding it.
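
A rough sketch of that outside-the-server monitor (the log path is
illustrative, and switching back to sync mode when the standby
reconnects is left out):

    # watch the primary's log for a broken replication connection, then
    # clear synchronous_standby_names and reload to drop into async mode
    tail -F /var/log/postgresql/postgresql.log \
      | grep --line-buffered "unexpected EOF on standby connection" \
      | while read line ; do
          sed -i "s/^synchronous_standby_names.*/synchronous_standby_names = ''/" \
              "$PGDATA/postgresql.conf"
          pg_ctl reload -D "$PGDATA"
        done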

/A


From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Robert Haas <robertmhaas(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-13 17:33:43
Message-ID: CAMkU=1ySUFQG2ZEQx+=aFtjayuafDeb34sLr2Ck6Z08YEpUs2A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Jan 13, 2012 at 2:30 AM, Alexander Björnhagen
<alex(dot)bjornhagen(at)gmail(dot)com> wrote:
> At this point I feel that this new functionality might be a bit
> overkill for postgres, maybe it's better to stay lean and mean rather
> than add a controversial feature like this.

I don't understand why this is controversial. In the current code, if
you have a master and a single sync standby, and the master disappears
and you promote the standby, now the new master is running *without a
standby*. If you are willing to let the new master run without a
standby, why are you not willing to let the old one do so if it were
the standby which failed in the first place?

Cheers,

Jeff


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Robert Haas <robertmhaas(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-13 17:50:49
Message-ID: 26934.1326477049@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Jeff Janes <jeff(dot)janes(at)gmail(dot)com> writes:
> I don't understand why this is controversial. In the current code, if
> you have a master and a single sync standby, and the master disappears
> and you promote the standby, now the new master is running *without a
> standby*.

If you configured it to use sync rep, it won't accept any transactions
until you give it a standby. If you configured it not to, then it's you
that has changed the replication requirements.

> If you are willing to let the new master run without a
> standby, why are you not willing to let the
> the old one do so if it were the standby which failed in the first place?

Doesn't follow.

regards, tom lane


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: <alex(dot)bjornhagen(at)gmail(dot)com>,"Jeff Janes" <jeff(dot)janes(at)gmail(dot)com>
Cc: "Fujii Masao" <masao(dot)fujii(at)gmail(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, "Magnus Hagander" <magnus(at)hagander(dot)net>, "Aidan Van Dyk" <aidan(at)highrise(dot)ca>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Standalone synchronous master
Date: 2012-01-13 18:12:12
Message-ID: 4F101F9C0200002500044796@gw.wicourts.gov
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:

> I don't understand why this is controversial.

I'm having a hard time seeing why this is considered a feature. It
seems to me what is being proposed is a mode with no higher
integrity guarantee than asynchronous replication, but latency
equivalent to synchronous replication. I can see where it's
tempting to want to think it gives something more in terms of
integrity guarantees, but when I think it through, I'm not really
seeing any actual benefit.

If this fed into something such that people got jabber messages,
emails, or telephone calls any time it switched between synchronous
and stand-alone mode, that would make it a built-in monitoring,
fail-over, and alert system -- which *would* have some value. But
in the past we've always recommended external tools for such
features.

-Kevin


From: Dimitri Fontaine <dimitri(at)2ndQuadrant(dot)fr>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: <alex(dot)bjornhagen(at)gmail(dot)com>, "Jeff Janes" <jeff(dot)janes(at)gmail(dot)com>, "Fujii Masao" <masao(dot)fujii(at)gmail(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, "Magnus Hagander" <magnus(at)hagander(dot)net>, "Aidan Van Dyk" <aidan(at)highrise(dot)ca>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Standalone synchronous master
Date: 2012-01-13 22:43:04
Message-ID: m239bjte9z.fsf@2ndQuadrant.fr
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> I'm having a hard time seeing why this is considered a feature. It
> seems to me what is being proposed is a mode with no higher
> integrity guarantee than asynchronous replication, but latency
> equivalent to synchronous replication. I can see where it's
> tempting to want to think it gives something more in terms of
> integrity guarantees, but when I think it through, I'm not really
> seeing any actual benefit.

Same here, so what I think is that the new recv and write modes that
Fujii is working on could maybe be demoted from being sync variants,
while not really being async ones. Maybe “eager” or some other term.

It seems to me that would answer the OP’s use case and your remark here.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, Robert Haas <robertmhaas(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-15 21:46:30
Message-ID: CAMkU=1xtECZb4HhuyQiNwjpo0a=szOuL5JsMBY9VM-JeJ52FZg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Jan 13, 2012 at 9:50 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Jeff Janes <jeff(dot)janes(at)gmail(dot)com> writes:
>> I don't understand why this is controversial.  In the current code, if
>> you have a master and a single sync standby, and the master disappears
>> and you promote the standby, now the new master is running *without a
>> standby*.
>
> If you configured it to use sync rep, it won't accept any transactions
> until you give it a standby.  If you configured it not to, then it's you
> that has changed the replication requirements.

Sure, but isn't that a very common usage? Maybe my perceptions are
out of whack, but I commonly hear about fail-over and rarely hear
about using more than one slave so that you can fail over and still
have a positive number of slaves.

Cheers,

Jeff


From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: alex(dot)bjornhagen(at)gmail(dot)com, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-15 22:01:31
Message-ID: CAMkU=1z9pnKR53gBiL=MX-d9TK2zObE3jfzPCmy9EzBVnkJAow@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Jan 13, 2012 at 10:12 AM, Kevin Grittner
<Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
>
>> I don't understand why this is controversial.
>
> I'm having a hard time seeing why this is considered a feature.  It
> seems to me what is being proposed is a mode with no higher
> integrity guarantee than asynchronous replication, but latency
> equivalent to synchronous replication.

There are never 100% guarantees. You could always have two
independent failures (the WAL disk of the master and of the slave)
nearly simultaneously.

If you look at weaker guarantees, then with asynchronous replication
you are almost guaranteed to lose transactions on a fail-over of a
busy server, and with the proposed option you are almost guaranteed
not to, as long as disconnections are rare.

As far as latency, I think there are many cases when a small latency
is pretty much equivalent to zero latency. A human on the other end
of a commit is unlikely to notice a latency of 0.1 seconds.

> I can see where it's
> tempting to want to think it gives something more in terms of
> integrity guarantees, but when I think it through, I'm not really
> seeing any actual benefit.

I think the value of having a synchronously replicated commit is
greater than zero but less than infinite. I don't think it is
outrageous to think that that value could be approximately expressed
in seconds you are willing to wait for that replicated commit before
going ahead without it.

>
> If this fed into something such that people got jabber message,
> emails, or telephone calls any time it switched between synchronous
> and stand-alone mode, that would make it a built-in monitoring,
> fail-over, and alert system -- which *would* have some value.  But
> in the past we've always recommended external tools for such
> features.

Since synchronous_standby_names cannot be changed without bouncing the
server, we do not provide the tools for an external tool to make this
change cleanly.

Cheers,

Jeff


From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, alex(dot)bjornhagen(at)gmail(dot)com, Robert Haas <robertmhaas(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-16 08:17:31
Message-ID: CAHGQGwFAJObU7iOr5QRCuvE_N4-rUhOor7qO=C9rXJEeArf8wg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Jan 16, 2012 at 7:01 AM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> On Fri, Jan 13, 2012 at 10:12 AM, Kevin Grittner
> <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>> Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
>>
>>> I don't understand why this is controversial.
>>
>> I'm having a hard time seeing why this is considered a feature.  It
>> seems to me what is being proposed is a mode with no higher
>> integrity guarantee than asynchronous replication, but latency
>> equivalent to synchronous replication.
>
> There are never 100% guarantees.  You could always have two
> independent failures (the WAL disk of the master and of the slave)
> nearly simultaneously.
>
> If you look at weaker guarantees, then with asynchronous replication
> you are almost guaranteed to lose transactions on a fail-over of a
> busy server, and with the proposed option you are almost guaranteed
> not to, as long as disconnections are rare.

Yes. The proposed mode guarantees that you don't lose transactions
when a single failure happens, but asynchronous replication doesn't. So
the proposed one has the benefit of reducing the risk of data loss to
a certain extent.

OTOH, when more than one failure happens, in the proposed mode, you
may lose transactions. For example, imagine the case where the standby
crashes, the standalone master runs for a while, and then its database
gets corrupted. In this case, you would lose any transactions committed
while the standalone master is running.

So, if you want to avoid such data loss, you can use synchronous
replication mode. OTOH, if you can endure the data loss caused by a
double failure for some reason (e.g., you are using reliable
hardware...) but not that caused by a single failure, and want to
improve availability (i.e., prevent transactions from being blocked
after a single failure happens), the proposed one is a good option to
use. I believe that some people need this proposed replication mode.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, alex(dot)bjornhagen(at)gmail(dot)com, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, Aidan Van Dyk <aidan(at)highrise(dot)ca>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-01-16 15:52:49
Message-ID: CA+TgmoZ3OVBrDmDkkNvx1p0yGp8_HpH-8Z3xYpRYK+stQRLWfQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sun, Jan 15, 2012 at 5:01 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> Since synchronous_standby_names cannot be changed without bouncing the
> server, we do not provide the tools for an external tool to make this
> change cleanly.

Yes, it can. It's PGC_SIGHUP.
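
For example, an external tool can edit synchronous_standby_names in
postgresql.conf on the primary (set it to '' to drop sync rep, or back
to a standby name to restore it) and then make the change take effect
with either of:

    $ pg_ctl reload
    -- or, from SQL:
    SELECT pg_reload_conf();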

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alexander Björnhagen <alex(dot)bjornhagen(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Standalone synchronous master
Date: 2012-08-26 03:26:11
Message-ID: 20120826032611.GK10814@momjian.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Jan 3, 2012 at 09:22:22PM -0500, Robert Haas wrote:
> On Tue, Dec 27, 2011 at 6:39 AM, Alexander Björnhagen
> <alex(dot)bjornhagen(at)gmail(dot)com> wrote:
> > And so we get back to the three likelihoods in our two-node setup :
> >
> > 1.The master fails
> >  - Okay, promote the standby
> >
> > 2.The standby fails
> >  - Okay, the system still works but you no longer have data
> > redundancy. Deal with it.
> >
> > 3.Both fail, together or one after the other.
>
> It seems to me that if you are happy with #2, you don't really need to
> enable sync rep in the first place.
>
> At any rate, even without multiple component failures, this
> configuration makes it pretty easy to lose durability (which is the
> only point of having sync rep in the first place). Suppose the NIC
> card on the master is the failing component. If it happens to drop
> the TCP connection to the clients just before it drops the connection
> to the standby, the standby will have all the transactions, and you
> can fail over just fine. If it happens to drop the TCP connection to
> the standby just before it drops the connection to the clients, the standby
> will not have all the transactions, and failover will lose some
> transactions - and presumably you enabled this feature in the first
> place precisely to prevent that sort of occurrence.
>
> I do think that it might be useful to have this if there's a
> configurable timeout involved - that way, people could say, well, I'm
> OK with maybe losing transactions if the standby has been gone for X
> seconds. But if the only possible behavior is equivalent to a
> zero-second timeout I don't think it's too useful. It's basically
> just going to lead people to believe that their data is more secure
> than it really is, which IMHO is not helpful.

Added to TODO:

Add a new "eager" synchronous mode that starts out synchronous but
reverts to asynchronous after a failure timeout period

This would require some type of command to be executed to alert
administrators of this change.

http://archives.postgresql.org/pgsql-hackers/2011-12/msg01224.php

--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +