[imp] High Availability / Load Balancing

Eric Rostetter eric.rostetter at physics.utexas.edu
Thu Dec 19 11:25:38 PST 2002


Quoting Dominic Ijichi <dom@ijichi.org>:

> > Shared scsi-devices on a cluster.  Only way to go.  Works great.  Been in
> > OpenVMS for years, in Tru64 unix, available but rather experimental for
> > linux.
> 
> doesn't this require physical sharing?

That is the normal way to do it.  But it can also be done remotely over
fiber, up to a couple of kilometers.  So it can be done over "short" distances.

> will it go over a network?  i spread

It can, but then your syncing is only as reliable as the network.  Do you
trust the network that much?  For some applications that is actually
sufficient, but I don't think it is sufficient for high-volume mail storage.

Then there is the question of security and speed if you send all this over an
open network.  I wouldn't want to try this kind of stuff on an open network.
And private (long-distance) networks are not cheap.

> servers over networks and datacentres for redundancy - i've found that ISPs
> tend to go bust more frequently than my hard disks!

Sad, but true.

I also find machines go belly up more often than disks.  That is why having
redundant machines with shared storage (e.g. SAN or NAS) is so nice.  "Good"
NAS units (not the el-cheapo junk on the market that is just an ordinary
Windows/Linux machine being sold as a NAS) are very reliable.  While the
storage is still a single point of failure, your effective MTBF goes way up
when you cluster the machines off a SAN/NAS.

We're starting to set up a lot of highly-available stuff now with NAS units.
It is much cheaper and easier to set up and maintain a few cheap machines
running off one centralized NAS unit (now that gigabit Ethernet is so
cheap and easy).

If you have lots of money, you can make the NAS/SAN fully redundant.  But
even without redundancy in the NAS/SAN, it still raises your effective MTBF
and eases maintenance (backups, upgrades, downtime, etc.).
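The intuition above can be put in rough numbers.  This is only an
illustrative availability calculation, not measured data: the per-component
availability figures are made-up assumptions.

```python
# Series-parallel availability sketch for the setup described above:
# N redundant front-end machines (parallel) behind one shared NAS (series).
# All availability figures here are assumed, illustrative numbers.

def parallel(avail, n):
    """Availability of n redundant machines: up if at least one is up."""
    return 1 - (1 - avail) ** n

server = 0.99   # one cheap machine (assumed: ~3.7 days downtime/year)
nas    = 0.999  # a "good" NAS unit (assumed)

one_box = server * nas                 # single server plus the NAS
cluster = parallel(server, 3) * nas    # three servers off the same NAS

print(f"single server:  {one_box:.6f}")
print(f"3-node cluster: {cluster:.6f}")
```

Even with the NAS as a single point of failure, clustering the cheap
machines moves the weak link from the servers to the (more reliable) NAS.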

> i appreciate in a
> university/large corporate environment nice hardware is the way to go, but i
> certainly can't afford that and I suspect a lot of people will run horde/imap
> out of colo/hosted.

I'm not sure that colo/hosted services are any cheaper in the long run.

Soon SCSI over TCP/IP (iSCSI) will be out, and that will really change the
nature of clustering too.  But there are already plenty of options: NFS,
shared SCSI, clustering filesystems, NAS/SAN setups, homebrew setups like
DRBD failover or software RAID over the network, etc.  Really lots of valid
options.  Of course, just as many ways to do it wrong as to do it right, so
you still have to be careful!
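For the DRBD-failover option, the whole idea fits in one resource stanza: a
local disk on each node mirrored block-by-block over a private link.  This is
a minimal sketch using drbd.conf syntax from later DRBD releases (the
hostnames, disk device, and addresses are placeholders, not a tested config):

```
resource r0 {
  protocol C;              # synchronous: a write completes only after
                           # it has reached both nodes' disks

  on node1 {
    device    /dev/drbd0;  # the replicated block device you mount
    disk      /dev/sda7;   # local backing partition (placeholder)
    address   10.0.0.1:7788;
    meta-disk internal;
  }

  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

This is exactly the "syncing only as reliable as the network" trade-off from
above: protocol C keeps the copies consistent but ties write latency to the
replication link, so you want a dedicated back-to-back connection.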

-- 
Eric Rostetter
The Department of Physics
The University of Texas at Austin

Why get even? Get odd!

