[imp] Server Farms
Andrew Morgan
morgan at orst.edu
Tue Mar 30 09:49:03 PST 2004
On Tue, 30 Mar 2004, Michael Bellears wrote:
> We currently run one (very!) overburdened IMP server (Currently wearing
> many hats - qmail/courier-imap/anti-virus/spamassassin/webmail on Debian
> 3) - So we have decided to migrate to a server farm.
>
> We have been looking at the following Loadbalancers(If anyone has
> personal experience with either, or other recommendations - I would be
> grateful for any input):
>
> ServerIron-XL -
> http://www.foundrynet.com/products/webswitches/serveriron/
> CoyotePoint - http://www.coyotepoint.com/equalizer.htm
We have 3 ServerIronXLs here and I would recommend them based on our
experience. As others have said, a load-balancer can do things
round-robin DNS can't, such as automatically removing a dead server from
the pool.
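The probe itself is nothing exotic; here's a rough shell sketch of the kind
of HTTP health check a load balancer runs against each real server (the
hostnames and the /horde/login.php probe URL are just examples, not how the
ServerIron actually configures it):

```shell
#!/bin/sh
# check_host: succeed (exit 0) if the server answers an HTTP request
# for the probe URL within 5 seconds.
check_host() {
    curl -sf -o /dev/null --max-time 5 "http://$1/horde/login.php"
}

# check_pool: probe each host named on the command line and report.
check_pool() {
    for host in "$@"; do
        if check_host "$host"; then
            echo "$host OK"
        else
            echo "$host DOWN"   # the load balancer drops it from the pool
        fi
    done
}

# e.g. check_pool web1 web2 web3
```

A real load balancer does this continuously and puts the server back in the
pool once it starts answering again.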
> I have a couple of questions regarding the "real" webservers setup:
>
> 1. We currently run MySQL for IMP prefs/vpopmail users/spamassassin
> prefs etc - Is it advisable to have each "real" server run its own
> MySQL - And have each in a master/slave setup?
We have MySQL running on a separate server. This keeps the webmail
servers really simple to configure. We do not use MySQL replication.
> 2. What is the recommended method to synch config files on all "real"
> servers (Eg. Httpd.conf, horde/imp config files etc?) - Have only one
> server that admins connect to for mods, then rsync any changes to the
> other servers?
We just rsync /var/www from server to server whenever a change is made.
> 3. What about logfiles - We would have all users mail etc on an NFS
> share - Can you do the same for logfiles? (Or do you get locking issues?)
> - From a statistical aspect, it would be a pain to have to combine each
> "real" servers logfiles, then run analysis. Also from a support
> perspective - How are support personnel supposed to know which "real"
> server a client would actually be connecting to in order to see if they
> are entering a wrong username/pass etc?
Most of the useful information about a user's problem is in the Horde
syslogs, which we send to a central loghost. You could let syslog combine
the logs from all your webmail servers, or use a merge script to do it
after the fact. Syslog includes the name of the originating machine, so
you know which real server is involved.
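On each webmail server, a line like `*.* @loghost` in /etc/syslog.conf does
the forwarding. If you'd rather merge after the fact: classic syslog lines
start with "Mar 30 09:49:03 hostname ...", so GNU sort can interleave several
files by month, day, and time. A sketch:

```shell
#!/bin/sh
# merge_syslogs: interleave several syslog files into time order on
# stdout. Assumes the classic "Mar 30 09:49:03 host ..." timestamp
# format and GNU sort (-M sorts by month name).
merge_syslogs() {
    sort -k1,1M -k2,2n -k3,3 "$@"
}

# e.g. merge_syslogs web1.log web2.log web3.log > merged.log
```

The hostname field in each line then tells support which real server the user
actually hit. (Month-name sorting goes wrong across a year boundary, so merge
one logrotate period at a time.)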
Apache logs stay local on each webmail server, but we've rarely needed to
look at them to diagnose problems.
> 4. This is OT, but I thought I would ask anyway - Once we have a "real"
> server setup the way we want - What imaging software can be used (Quick
> recovery in the event of server failure (Eg. If total rebuild is
> necessary), and also simple for addition of a new "real" server)? I would
> love a utility that can create a bootable CD from our SOE, giving us the
> ability to just place this CD into the new server, power up and in a
> couple of minutes we have a fully functional server ready to be placed
> into the server farm.
>
> I have briefly looked at the following :
>
> http://www.mondorescue.org/
> http://www.systemimager.org/
We've used systemimager for a shell server, but not the webmail servers.
We have a tape backup of one of the webmail servers, should we need to
recover any of the master data. We have 3 servers in our load-balanced
pool and can run with only 2 of them working. If a server fails, we
just install debian from CD, add the extra packages we need, and rsync
/var/www to it. There are a few other things to configure, but I'd say we
could have the server restored in an hour or two. With the extra capacity
we have available, we don't need to have a quick restore process.
However, imaging might make the restore process simpler.
Andy