Re: Replication

From: Craig James <craig_james(at)emolecules(dot)com>
To: Andreas Kostyrka <andreas(at)kostyrka(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Replication
Date: 2007-06-15 01:44:52
Message-ID: 4671EF14.7000002@emolecules.com
Lists: pgsql-performance

Andreas Kostyrka wrote:
> Slony provides near-instantaneous failover (in the single-digit-second
> range). You can script an automatic failover if the master server
> becomes unreachable.

But Slony slaves are read-only, correct? So the system isn't fully functional once the master goes down.

> That leaves you the problem of restarting your app
> (or making it reconnect) to the new master.

Don't you have to run a Slony app to convert one of the slaves into the master?

> 5-10MB of data implies such a fast initial replication that making the
> server rejoin the cluster by setting it up from scratch is not an issue.

The problem is to PREVENT it from rejoining the cluster. If you have some semi-automatic process that detects the dead server and converts a slave to the master, and in the meantime the dead server manages to reboot itself (or its network gets fixed, or whatever the problem was), then you have two masters sending out updates, and you're screwed.
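
To make that concrete, here's the sort of fencing check a recovering node would need to run before it accepts writes again. This is just a sketch in Python with made-up host names and a made-up status protocol; nothing like it ships with Slony as far as I know:

    # Hypothetical fencing check: a node that has just come back up asks its
    # peers who they currently believe the master is before accepting writes.
    import socket

    PEERS = ["db2.example.com", "db3.example.com"]   # assumed peer hosts
    STATUS_PORT = 7777                               # assumed status service port
    MY_NAME = socket.gethostname()

    def peer_master(host):
        """Ask one peer which node it believes is the master ('' on failure)."""
        try:
            with socket.create_connection((host, STATUS_PORT), timeout=2) as s:
                s.sendall(b"WHO_IS_MASTER\n")
                return s.recv(256).decode().strip()
        except OSError:
            return ""

    def safe_to_accept_writes():
        """Resume as master only if no reachable peer names somebody else."""
        return all(peer_master(h) in ("", MY_NAME) for h in PEERS)

    if not safe_to_accept_writes():
        print("A peer reports a different master; staying read-only.")

Without something like that, the rebooted box happily comes up as a master again.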

>> The problem is, there don't seem to be any "vote a new master" type of
>> tools for Slony-I, and also, if the original master comes back online,
>> it has no way to know that a new master has been elected. So I'd have
>> to write a bunch of SOAP services or something to do all of this.
>
> You don't need SOAP services, and you do not need to elect a new master.
> If dbX goes down, dbY takes over; you should be able to decide on a
> static takeover pattern easily enough.

I can't see how that is true. Any self-healing distributed system needs something like the following:

- A distributed system of nodes that check each other's health
- A way to detect that a node is down and to transmit that
information across the nodes
- An election mechanism that nominates a new master if the
master fails
- A way for a node coming online to determine if it is a master
or a slave

Any solution less than this can cause corruption because you can have two nodes that both think they're master, or end up with no master and no process for electing a master. As far as I can tell, Slony doesn't do any of this. Is there a simpler solution? I've never heard of one.
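
Here's roughly what I mean, as a sketch in Python (the host names, heartbeat port, and promote/demote steps are all hypothetical; this is the logic, not an existing Slony tool):

    # Each node runs a monitor like this.  The takeover order is static:
    # the highest-priority node that is still reachable acts as master.
    import socket
    import time

    NODES = ["db1", "db2", "db3"]    # priority order; db1 is the preferred master
    HEARTBEAT_PORT = 5433            # assumed heartbeat port
    ME = socket.gethostname()

    def alive(host):
        """Crude health check: can we open a TCP connection to the node?"""
        try:
            socket.create_connection((host, HEARTBEAT_PORT), timeout=2).close()
            return True
        except OSError:
            return False

    def current_master():
        """First node in the priority list that is me or still reachable."""
        for host in NODES:
            if host == ME or alive(host):
                return host
        return ME

    while True:
        if current_master() == ME:
            pass   # promote myself here, e.g. run the slonik failover script
        else:
            pass   # make sure I'm running as a read-only slave of the master
        time.sleep(5)

And even that still has the two-masters hole: during a network partition each side can decide it's the survivor, which is exactly what a real election protocol with a quorum is supposed to prevent.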

> The point here is that the servers need to react to a problem, but you
> probably want to get the admin on duty to look at the situation as
> quickly as possible anyway.

No, our requirement is no administrator intervention. We need instant, automatic recovery from failure so that the system stays online.

> Furthermore, you need to check out pgpool; I seem to remember that it has
> some bad habits in routing queries. (E.g. it wants to apply write
> queries to all nodes, but Slony makes the other nodes read-only.
> Furthermore, anything inside a BEGIN is sent to the master node, which
> is bad with some ORMs that by default wrap any access in a transaction.)

I should have been more clear about this. I was planning to use PGPool in the PGPool-1 mode (not the new PGPool-2 features that allow replication). So it would only be acting as a failover mechanism. Slony would be used as the replication mechanism.

I don't think I can use PGPool as the replicator, because then it becomes a new single point of failure that could bring the whole system down. If you're using it for INSERT/UPDATE, then there can only be one PGPool server.

I was thinking I'd put a PGPool server on every machine in failover mode only. It would have the Slony master as the primary connection, and a Slony slave as the failover connection. The applications would route all INSERT/UPDATE statements directly to the Slony master, and all SELECT statements to the PGPool on localhost. When the master failed, all of the PGPool servers would automatically switch to one of the Slony slaves.

This way, the system would keep running on the Slony slaves (so it would be read-only) until a sysadmin could get the Slony master back online. And when the master came back online, the PGPool servers would automatically reconnect and write access would be restored.
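
In code, the routing I have in mind looks something like this (Python/psycopg2, with made-up host names and credentials):

    # Reads go through the pgpool on localhost, which holds the Slony master
    # as its primary connection and a Slony slave as its failover connection.
    # Writes go straight to the Slony master and simply fail while it is down,
    # so there is never a second node accepting updates.
    import psycopg2

    def read_conn():
        # 9999 is pgpool's usual listen port
        return psycopg2.connect(host="localhost", port=9999,
                                dbname="mydb", user="app")

    def write_conn():
        # direct to the Slony master; no failover on purpose
        return psycopg2.connect(host="db-master.example.com", port=5432,
                                dbname="mydb", user="app")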

Does this make sense?

Craig
