[H-GEN] any ideas for cluster filesystem solution ?

Craig ARMOUR c.armour at uq.net.au
Thu Nov 9 23:04:49 EST 2000


[ Humbug *General* list - semi-serious discussions about Humbug and  ]
[ Unix-related topics.  Please observe the list's charter.           ]
[ Worthwhile understanding: http://www.humbug.org.au/netiquette.html ]

>  * you almost certainly drop the currently active requests, database
>    connections, etc.

For web servers, this usually isn't that big a drama.  You might get a few
users with a "connection timed out" error.  The average monkey wouldn't think
anything of it and would just press reload.

>  * presumably the machines have two network cards anyhow
> 
> In any case, Byron has a more important point: the biggest risk is
> having the two machines sitting next to each other in the same
> building and network.

I have the benefit of having actually seen this setup.  But here we go:

The machine room has two trunked gigabit links which (assuming our
network guys have done it properly) follow different paths around UQ
from our core router to the library.  The links terminate on individual
blades in a Cisco 6500.  That is about as redundant as you get.

It is all well and good to say you should have the two machines in
different locations on different networks.  In many cases that is simply
not possible.  So you have two machines which load share, via say DNS
round robin, and should one die, the other merely has to notice the
missing heartbeat and take over its IP.
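The heartbeat-and-takeover side of that can be sketched roughly as below. This is a hypothetical illustration of the decision logic only (the node names, miss threshold, and method names are all made up, and a real node would run `ip addr add` plus a gratuitous ARP rather than set a flag):

```python
# Hypothetical sketch of heartbeat-based IP takeover logic.
# Each node counts consecutive missed heartbeats from its peer;
# once enough are missed, the survivor claims the shared service IP.

MISS_THRESHOLD = 3  # assumed: missed beats tolerated before takeover


class HeartbeatMonitor:
    def __init__(self, peer, threshold=MISS_THRESHOLD):
        self.peer = peer
        self.threshold = threshold
        self.missed = 0
        self.owns_service_ip = False

    def beat_received(self):
        """Peer answered a heartbeat; reset the miss counter."""
        self.missed = 0

    def beat_missed(self):
        """One heartbeat interval passed with no answer from the peer."""
        self.missed += 1
        if self.missed >= self.threshold and not self.owns_service_ip:
            self.take_over()

    def take_over(self):
        # A real implementation would bring the service IP up on this
        # node's interface and announce it; here we only record the fact.
        self.owns_service_ip = True
```

So after, say, three silent intervals the surviving node declares the peer dead and picks up the IP; a single dropped beat does nothing.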

The idea of HA clusters such as these (and it could be 2 or 20 nodes in
the cluster) is more to load share, and to provide a means of failover
should a node require maintenance.  That could be anything from
rebooting a machine for a new kernel, to installing more RAM or
another disk.
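The load-sharing half, DNS round robin, amounts to handing out the node list in rotation. A minimal sketch, with hypothetical hostnames (a real DNS server does this by rotating the order of A records in its answers):

```python
from itertools import cycle

# Hypothetical pool of cluster nodes sitting behind one service name.
nodes = ["node1.example.edu.au", "node2.example.edu.au"]
rotation = cycle(nodes)


def next_node():
    """Return the node the next client lookup is steered to."""
    return next(rotation)
```

Each successive lookup lands on the next node in the list, wrapping around, so requests spread evenly across however many nodes are in the pool.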

An ideal solution would be to have two or more machines, each with two
controllers and two RAID arrays, and to have the machines on opposite
sides of the planet.  You run alternate pathing through your controllers
and mirror one array onto the other.  This is obviously very expensive
and not supported by what the "client" has to provide.

As Eric said to start off with, he has two machines with one external RAID
array, and he wishes to get the best HA he can with what he has.
Unfortunately, it appears Linux does not support the SAN stuff well;
otherwise just sticking both machines on the FC-AL loop would
work well.  (Solaris does it easily! :)

Eric is right: it seems a waste to keep copies of the data when all the
data is on the one storage array, and both machines share the
array.  Unfortunately, Linux may provide you with no better option.

> The basic idea is OK, but there are two drawbacks.  Firstly, it's

It was just meant as a basic idea; I probably should have said that. :)

Cheers
Craig


--
* This is list (humbug) general handled by majordomo at lists.humbug.org.au .
* Postings to this list are only accepted from subscribed addresses of
* lists 'general' or 'general-post'.
