[imp] High Availability / Load Balancing

Dominic Ijichi dom at ijichi.org
Sun Dec 15 07:21:05 PST 2002


not tried it, but would be interested to hear from people who have.  initial
recoils in horror relate (but aren't restricted) to:

 - no redundancy - NFS doesn't replicate/copy the data, so you've still got a
single point of failure
 - replicating the data to multiple nfs servers means rsync or somesuch, which
inevitably introduces significant latency
 - custom 'ping' scripts, presumably cron'd or daemon'd (something like the
sketch after this list) - again introducing latency
 - all the associated problems with nfs (locking, timeouts, hanging etc)
 - security across servers/networks
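
(for concreteness, the sort of cron'd failover check being suggested would be
something along these lines - every hostname, port and mount command here is
made up, and the gap between cron runs plus whatever rsync hasn't copied yet is
exactly the latency/data-loss window above:)

#!/usr/bin/php -q
<?php
// rough sketch of a cron'd nfs failover check - hostnames, the mount point
// and the remount commands are placeholders, not anything from this thread
$primary = 'nfs1.example.com';
$backup  = 'nfs2.example.com';
$mount   = '/var/horde/sessions';

// "ping" the primary by opening a tcp connection to the nfs port (2049)
// with a short timeout, rather than stat()ing the mount, which would just
// hang if the server is wedged
$fp = @fsockopen($primary, 2049, $errno, $errstr, 5);
if ($fp) {
    fclose($fp);
    exit(0);            // primary looks alive, nothing to do
}

// primary unreachable: swing the mount over to the backup.  anything written
// to the primary since the last rsync is lost, and sessions are broken until
// this script next runs and notices the failure
system("umount -f " . $mount);
system("mount -t nfs " . $backup . ":" . $mount . " " . $mount);
?>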

i would have thought using databases is much simpler - there are specific
libraries, code hooks and php support for it.
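
(to be concrete, the bare-bones version of the php hook looks something like
the sketch below - the table, columns and credentials are all made up, and in
practice you'd grab a ready-made handler such as the ADOdb one mentioned
further down rather than rolling your own:)

<?php
// minimal sketch of a database-backed session handler using the php 4 mysql_*
// api.  assumes a table along the lines of:
//   CREATE TABLE sessions (id VARCHAR(64) PRIMARY KEY, data TEXT, modified INT);
// the database name, user and password are placeholders
mysql_connect('localhost', 'horde', 'secret');
mysql_select_db('horde');

function sess_open($path, $name) { return true; }
function sess_close()            { return true; }

function sess_read($id) {
    $res = mysql_query("SELECT data FROM sessions WHERE id = '"
                       . mysql_escape_string($id) . "'");
    if ($res && ($row = mysql_fetch_row($res))) {
        return $row[0];
    }
    return '';
}

function sess_write($id, $data) {
    mysql_query("REPLACE INTO sessions (id, data, modified) VALUES ('"
                . mysql_escape_string($id) . "', '"
                . mysql_escape_string($data) . "', " . time() . ")");
    return true;
}

function sess_destroy($id) {
    mysql_query("DELETE FROM sessions WHERE id = '"
                . mysql_escape_string($id) . "'");
    return true;
}

function sess_gc($maxlifetime) {
    mysql_query("DELETE FROM sessions WHERE modified < " . (time() - $maxlifetime));
    return true;
}

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
session_start();
?>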

out of interest, has anyone used ip clustering and/or application-layer load
balancers?  I used to use F5 load balancers in front of banks of solaris web
servers, which were really nice, but never got a chance to test them with Horde.

dom


> out of curiosity ---
> 
> why not simply use nfs with/for/whatever the dir where sessions and/or
> uploaded files reside and have a simple script that 'pings' the nfs
> server -- if it fails just fail over to the nfs server backup?
> 
> Jimmy
> 
> 
> On Sat, 2002-12-14 at 22:43, Dominic Ijichi wrote:
> > Hi,
> >
> > > Hey All,
> > > We're looking into adding a second imp front end server to provide load
> > > balancing and redundancy to webmail. Forgetting for a minute how to
> > > implement the actual service redundancy/load balancing (i.e. via
> > > hardware, dns roundrobin etc...), I wanted to get a feel for how others
> > > handled the horde/imp/turba specific issues.
> >
> > I currently run two imp frontends going to a single IMAP store.  Various
> > domains are pointed through DNS RR to both frontends.
> >
> > > Since we currently store all prefs in ldap, I think i have boiled the
> > > problems that need to be solved down to:
> >
> > I store all prefs in SQL as the LDAP support didn't seem to support proper
> > trees last time I looked - it's been on my todo list for aaages, not had
> > time to fix it.
> >
> > > 1) Sessions - They're currently stored in files. Could a client be
> > > reading or composing an email then try to load a page, end up with the
> > > second horde/imp server (that doesn't have an active session) and be
> > > logged out?
> >
> > Yes, this does happen frequently; you have to contend with it.  I use AdoDB,
> > and prepend the session include in php.ini on both servers like this
> > (setting the session handler to user, of course):
> >
> > auto_prepend_file = "/opt/data/htdocs/sessionlib/adodb-session.php"
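> >
> > and in the same php.ini, switching the handler to 'user' is just:
> >
> > session.save_handler = user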
> >
> > Then set up mysql on both servers with bidirectional replication agreements
> > and create the relevant users and databases.  Then, in horde/conf.php, where
> > the custom session handler entries are:
> >
> > $conf['sessionhandler']['params']['open']    = 'adodb_sess_open';
> > $conf['sessionhandler']['params']['close']   = 'adodb_sess_close';
> > $conf['sessionhandler']['params']['read']    = 'adodb_sess_read';
> > $conf['sessionhandler']['params']['write']   = 'adodb_sess_write';
> > $conf['sessionhandler']['params']['destroy'] = 'adodb_sess_destroy';
> > $conf['sessionhandler']['params']['gc']      = 'adodb_sess_gc';
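> >
> > (for reference, the replication half of my.cnf on each box ends up looking
> > roughly like this - the server-ids, hostnames, credentials and database name
> > are placeholders rather than my actual config, and the master-* settings can
> > equally be done with CHANGE MASTER TO from the mysql prompt:)
> >
> >  server-id       = 1
> >  # use server-id = 2 on the other box
> >  log-bin
> >  binlog-do-db    = horde_sessions
> >  master-host     = other-box.example.com
> >  master-user     = repl
> >  master-password = secret
> >  replicate-do-db = horde_sessions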
> >
> > It's a little inefficient, as every single php page loaded through the
> > webserver preloads the whole session machinery, but it doesn't seem to slow
> > things down significantly, and more importantly it has provided transparent
> > session failover across all my php apps.  The point of having the
> > bidirectional sql replication is that it's pretty much immediate, so when
> > one server fails you can take up with the other one straight away with good
> > confidence.
> >
> > The downer to this is bandwidth.  I've got two servers located on different
> > networks for redundancy, and the bandwidth that the session replication uses
> > is huge, and it's all down to horde - HOWEVER, I use a heavily hacked version
> > from CVS from a couple of months ago, so it may well just be down to that.
> > Be prepared though, under moderate usage, for replication traffic of 500Mb+
> > per day!
> >
> > > 2) Attachments - Could a user upload an attachment, then send the mail
> > > (using the other server that doesn't have the attachment) and get an
> > > error?
> >
> > Never hit this problem.  You can store attachments in SQL under VFS, so if
> > you include this in the replication it's taken care of.
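> >
> > (the VFS bit in horde's conf.php is roughly along these lines - I'm going
> > from memory here, so check the dist config / VFS docs for the exact keys,
> > and the credentials and table name are just placeholders:)
> >
> > $conf['vfs']['type'] = 'sql';
> > $conf['vfs']['params']['phptype']  = 'mysql';
> > $conf['vfs']['params']['hostspec'] = 'localhost';
> > $conf['vfs']['params']['username'] = 'horde';
> > $conf['vfs']['params']['password'] = 'secret';
> > $conf['vfs']['params']['database'] = 'horde';
> > $conf['vfs']['params']['table']    = 'horde_vfs';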
> >
> > A few things to consider - I've found some buggy (beta) versions of IE tend
> > to send the session handling into a loop of undetermined nature, so again
> > replication bandwidth gets hammered really badly.  Also, mysql replication
> > (deliberately) stops on any error, so I tend to ignore some of them (1062 is
> > the duplicate-key error, 1007 is "database already exists") in my.cnf like
> > this:
> >  slave-skip-errors=1062, 1007
> >
> > Might also be worth using compression on the replication link:
> >  slave_compressed_protocol = 1
> >
> > Hope this helps,
> > dom
> >
> >
> > > Has anyone dealt with redundancy / high availability and found
> > > solutions to the above problems? Are there any other problems that I'm
> > > not thinking about?
> > >
> > > Thanks,
> > > Lee
> > >
> > >
> > > --
> > > IMP mailing list
> > > Frequently Asked Questions: http://horde.org/faq/
> > > To unsubscribe, mail: imp-unsubscribe@lists.horde.org
> >
> >
> > ------------------------------------------
> > This message was penned by the hand of Dom
> --
> Jimmy Brake <jimmy@isurge.com>
> 
> 
> --
> IMP mailing list
> Frequently Asked Questions: http://horde.org/faq/
> To unsubscribe, mail: imp-unsubscribe@lists.horde.org


------------------------------------------
This message was penned by the hand of Dom

