[horde] High Capacity Horde & Email Environment
Jeff Warnica
jeffw at chebucto.ns.ca
Tue Mar 30 16:49:57 PST 2004
On Tue, 2004-03-30 at 16:02 -0600, Jacob Davida wrote:
> All of these mail servers would use NFS to mount dedicated FreeBSD NFS
> partitions for mail delivery. Each NFS mount will be mounted by each of
> the mail servers. I don't foresee any locking problems since it will be
> using Maildir, but I am not 100% positive as this is theoretical.
Mail on NFS is evil. Don't do it. Don't think about doing it. Push it
out of your mind. Pretend that NFS doesn't exist.
Quickly googling around, I see a c.2000 email saying email on Coda is
also Bad... But the OpenAFS manual specifically uses email as an example
of when you would want to use a particular ACL. So if you're dead set on
some network file system (but not /the/ Network File System), those
might be worth looking at.
> Is there a better way to spread out users? Maybe there are other,
> inexpensive NFS/storage devices?
While I've not so much as seen such a setup in production, how about
this: get an IDE/SCSI RAID box (that is, internal IDE drives, external
SCSI connections) that has multiple external buses. Mount a given
partition on a given system. If/when that system fails, remount its
relevant partition on a hot-standby system. There may be a problem if a
system half fails, not completely releasing its partitions, but you may
be able to force disconnection via the RAID system. Only because I
happen to have their propagan^H^H^H marketing folder on my desk, I will
give the name StorCase. Very impressive specs. Also, impressive
Blinkenlights.
> Since each Postfix server would have Courier on it, users could POP or
> IMAP data from any of the servers, which in turn would pull the email
> out of the necessary NFS mount.
> Are we all on the same page? Let me know if I've lost anyone.
I'm partial to Cyrus. With its Murder system (which I haven't used),
the front-end box(es) cleanly proxy IMAP/POP to the right storage
system, and LMTP injection too, for that matter. You would be able to
completely disassociate MXing from storage. Note that this provides
clustering for (horizontal) scalability only - not for reliability. To
quote them: "Note that Cyrus Murder is still relatively young in the
grand scheme of things, and if you choose to deploy you are doing so at
your own risk." OTOH, you are comparing it to a custom solution. That
Cyrus is a "magic box" may or may not be a good thing. Tight integration
with Sieve may or may not be a good thing. It can inherently understand
virtual domains, which almost definitely is a good thing.
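The practical upshot for the web tier is that Horde only ever needs to
know about the frontend(s). Something like this (hostname and
credentials made up, obviously) behaves the same no matter which backend
actually holds the user's spool:

    <?php
    // Talk to a Murder frontend; it proxies to whichever backend
    // actually stores the mailbox, so the web boxes never see the
    // storage layout behind it.
    $mbox = imap_open('{imap-frontend.example.com:143/imap/notls}INBOX',
                      'jdavida', 'secret');
    echo imap_num_msg($mbox) . " messages\n";
    imap_close($mbox);
    ?>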
NFS (or AFS, or Coda) would, I think, be slower than Cyrus-IMAP/Murder,
even if you didn't have the locking issues. NFS would be blindly
ignorant of what is passing around, whereas Cyrus would do any thinking
on the machine that the disks are on.
> Then for Horde, we could have a cluster of web servers, sessions being
> stored in the database, and each server being identical for a horde
> setup. Maybe horde could even be stored on one of the NFS mounts to
> make for a single distribution for the Horde config.
> Anyways, round robin DNS again. These web servers would speak IMAP with
> the mail servers and SQL to the database server.
I would dust off a P200 or two, slap
http://www.linuxvirtualserver.org/ on it for load balancing (with
persistence), store sessions using plain ol' PHP, and consider putting
session.save_path on tmpfs. Keeping session data all in memory on
the right web server would, I'm sure, be a massive speed improvement.
LVS would also provide a level of high availability. RRDNS
sounds good, but it means at least a 5 second lag if a box dies. With
numbers like "9 seconds" thrown around as the Mean Time For Users To Get
Pissed At The Page For Not Loading, you don't have much room when there
is a failure. At worst, if a real server fails, then its users would
have to log in again.
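To be concrete about the session bit, I mean something along these
lines on each web box (the path is an invented example; any tmpfs
mount, /dev/shm included, would do):

    <?php
    // Plain file-based sessions, but pointed at a tmpfs mount so the
    // per-request session read/write never touches a real disk. Has to
    // run before session_start(), and the directory must exist and be
    // writable by the web server user.
    ini_set('session.save_handler', 'files');
    ini_set('session.save_path', '/dev/shm/horde-sessions');
    session_start();
    ?>

With LVS persistence in front, a given user keeps landing on the box
that holds their session file, so nothing needs to be shared between
the web servers.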
> That brings me on to the DB server. I hadn't thought about what happens
> when the DB server needs more juice. I know I could master/slave MySQL,
> but how do I control writes? Can Horde be told to write to one place
> and read from another?
MySQL claims that replication in a circular master/slave relationship is
OK, but may confuse particular SQL clients. I think that since most
rows in a given table would only ever be touched by a single user, and
since it is reasonably safe to assume a user is only logged in once at
any given time and (with persistence) will always hit the same web
server, then as long as each web server sticks to the same DB server,
things should be OK.
A lot of ifs. I don't know what happens when (not if) one of the DB
servers crashes and the circle is broken.
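If you did want to split reads from writes yourself instead of trusting
the circle, that has to happen in the application. A rough sketch only:
the hostnames, credentials and prefs table layout below are from
memory/invented, and as far as I know Horde won't do this for you out
of the box:

    <?php
    // Writes always go to the master; reads can come from the local
    // slave, accepting a little replication lag.
    $write = mysql_connect('db-master.example.com', 'horde', 'secret');
    $read  = mysql_connect('db-slave1.example.com', 'horde', 'secret');
    mysql_select_db('horde', $write);
    mysql_select_db('horde', $read);

    // Saving a preference hits the master...
    mysql_query("UPDATE horde_prefs SET pref_value = 'simple'
                 WHERE pref_uid = 'jdavida' AND pref_name = 'theme'",
                $write);

    // ...while reading it back can use the slave.
    $res = mysql_query("SELECT pref_value FROM horde_prefs
                        WHERE pref_uid = 'jdavida'
                          AND pref_name = 'theme'", $read);
    ?>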
Also, have you considered using LDAP for storing user accounts, mail
aliases, etc.? For that matter, for Horde prefs in general? While
OpenLDAP replication is also only one-way, the PHP docs seem to imply
that it will (automatically) chase referrals, so the DB
write-to-the-right-place problem wouldn't exist. I think there is also
a significant advantage to being able to nuke all of a user's data at
once (well, save for mailboxes), rather than chasing it through
passwd/shadow/group, MySQL... Furthermore, OpenLDAP should be able to
be configured with a cache big enough that the relevant objects for all
logged-in users stay in memory. I don't know how smart MySQL (or any
other RDBMS) is about per-row caching.
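For the curious, the referral chasing I mean looks roughly like this in
PHP. The hostnames, bind DN and attribute are all invented, and whether
the re-bind after the referral happens with your credentials (rather
than anonymously) is exactly the part I would test first:

    <?php
    // Each web server talks to its local read-only slave; slapd's
    // updateref sends writes back to the master, and libldap chases
    // that referral.
    $ds = ldap_connect('ldap-slave.example.com');
    ldap_set_option($ds, LDAP_OPT_PROTOCOL_VERSION, 3);
    ldap_set_option($ds, LDAP_OPT_REFERRALS, 1);   // follow referrals
    ldap_bind($ds, 'cn=horde,dc=example,dc=com', 'secret');

    // Reads are answered locally by the slave...
    $sr = ldap_search($ds, 'ou=people,dc=example,dc=com', '(uid=jdavida)');

    // ...while a write gets referred to, and replayed on, the master.
    ldap_modify($ds, 'uid=jdavida,ou=people,dc=example,dc=com',
                array('mailAlternateAddress' => 'jake@example.com'));
    ?>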