[imp] Server Farms..

Michael Bellears MBellears at staff.datafx.com.au
Tue Mar 30 15:24:53 PST 2004


> >
> > ServerIron-XL -
> > http://www.foundrynet.com/products/webswitches/serveriron/
> > CoyotePoint - http://www.coyotepoint.com/equalizer.htm
> 
> We have 3 ServerIronXLs here and I would recommend them based 
> on our experience.  As others have said, a load-balancer can 
> do things round-robin DNS can't, such as automatically 
> removing a dead server from the pool.

I'm definitely impressed with the ServerIrons.

> 
> > I have a couple of questions regarding the "real" webservers setup:
> >
> > 1. We currently run MySQL for IMP prefs/vpopmail users/spamassassin 
> > prefs etc. - Is it advisable to have each "real" server run its own 
> > MySQL - and have each in a master/slave setup?
> 
> We have MySQL running on a separate server.  This keeps the 
> webmail servers really simple to configure.  We do not use 
> MySQL replication.

Ok - What impact does this MySQL server dying have on you?

I want to eliminate all single points of failure (if possible!) - hence
the MySQL master/slave setup.
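
For reference, a basic master/slave setup needs little more than this in
my.cnf on each database box (the server-ids, hostnames and database name
below are assumptions - adjust for your own layout):

```
# master my.cnf (hypothetical host db1)
[mysqld]
server-id    = 1
log-bin      = mysql-bin
binlog-do-db = horde        # replicate only the prefs database

# slave my.cnf (hypothetical host db2)
[mysqld]
server-id    = 2
```

Then on the slave, point it at the master with CHANGE MASTER TO
MASTER_HOST='db1' (plus the replication user/password) and START SLAVE.
Note this gives you a read-only spare, not automatic failover - you
still have to repoint the webmail servers if db1 dies.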

> 
> > 2. What is the recommended method to sync config files on all "real"
> > servers (e.g. httpd.conf, horde/imp config files etc.)? Have only one
> > server that admins connect to for mods, then rsync any changes to the
> > other servers?
> 
> We just rsync /var/www from server to server whenever a 
> change is made.

No probs.

> 
> > 3. What about logfiles - we would have all users' mail etc. on an NFS
> > share - can you do the same for logfiles? (Or do you get locking
> > issues?) From a statistical aspect, it would be a pain to have to
> > combine each "real" server's logfiles, then run analysis. Also from a
> > support perspective - how are support personnel supposed to know
> > which "real" server a client would actually be connecting to in order
> > to see if they are entering a wrong username/pass etc.?
> 
> Most of the useful information about a user's problem is in 
> the Horde syslogs, which we send to a central loghost.  You 
> could let syslog combine the logs from all your webmail 
> servers, or use a merge script to do it after the fact.  
> Syslog includes the name of the originating machine, so you 
> know which real server is involved.
> 
> Apache logs stay local on each webmail server, but we've 
> rarely needed to look at them to diagnose problems.

My major concern will be the qmail logs - multilog cannot(?) log to an
external host... I will have to ask about this on the qmail list.
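
One workaround I've seen suggested (untested here - the facility choice
and the run-script path are assumptions) is to replace multilog in the
daemontools log/run script with logger, so the records flow through
syslog and on to the central loghost:

```
#!/bin/sh
# /var/qmail/supervise/qmail-send/log/run (hypothetical path)
# Forward qmail-send's log stream to syslog instead of multilog;
# local1 is an arbitrary, otherwise-unused facility.
exec logger -t qmail-send -p local1.info
```

with a matching line in syslog.conf on each webmail server:

```
local1.*    @loghost
```

The trade-off is losing multilog's tai64n timestamps and its automatic
log rotation.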

> 
> We've used systemimager for a shell server, but not the 
> webmail servers.
> We have a tape backup of one of the webmail servers, should 
> we need to recover any of the master data.  We have 3 servers 
> in our load balance pool, and we can run with only 2 servers 
> working.  If a server fails, we just install debian from CD, 
> add the extra packages we need, and rsync /var/www to it.  
> There are a few other things to configure, but I'd say we 
> could have the server restored in an hour or two.  With the 
> extra capacity we have available, we don't need to have a 
> quick restore process.
> However, imaging might make the restore process simpler.
> 

I have had reports from others using systemimager and they swear by it -
I'll have to test and see how it goes.

Regards,
MB

