[imp] memcached sessions

Liam Hoekenga liamr at umich.edu
Tue Oct 25 15:49:30 PDT 2005


What about making sure that one of the memcache servers is "localhost" 
and trusting it as the authoritative one for the data?  When you write 
out session data, write it both to localhost and to whichever server in 
the pool you've decided to write to.  When you get bounced to a new web 
server, query its localhost (which isn't going to have the session), and 
then fall back to the pool if necessary?
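
Something like this, sketched in Python with the python-memcached client 
(standing in for whatever PHP client we end up with; the pool addresses 
and the pick_server() hashing are made up for illustration):

import zlib
import memcache

LOCAL = memcache.Client(['127.0.0.1:11211'])
POOL = [memcache.Client([addr]) for addr in
        ['10.0.0.1:11211', '10.0.0.2:11211']]   # hypothetical pool

def pick_server(session_id):
    # Stable hash, so every web server maps a session to the same member.
    return POOL[zlib.crc32(session_id.encode()) % len(POOL)]

def write_session(session_id, data):
    # localhost is authoritative; the pool member is the shared copy.
    LOCAL.set(session_id, data)
    pick_server(session_id).set(session_id, data)

def read_session(session_id):
    # Just bounced to this web server?  localhost will miss, so fall
    # back to the pool and repopulate the local copy.
    data = LOCAL.get(session_id)
    if data is None:
        data = pick_server(session_id).get(session_id)
        if data is not None:
            LOCAL.set(session_id, data)
    return data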

Either that... or write the data to all the servers in the pool, which 
is good for redundancy, but probably not great for performance.
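
For what it's worth, the write-everywhere variant really is only a few 
lines (same hypothetical Python setup as the sketch above):

import memcache

POOL = [memcache.Client([addr]) for addr in
        ['10.0.0.1:11211', '10.0.0.2:11211']]   # hypothetical pool

def write_session_everywhere(session_id, data):
    # N writes per update; the payoff is that any member can serve a read.
    for server in POOL:
        server.set(session_id, data)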

How does the Perl API handle pooled servers?  Or libmemcache?  I'd still 
like to look at the mcache extension (built on libmemcache), but after 
further work this afternoon it's still segfaulting.  It's a shame that 
the maintainer of the PECL memcached extension doesn't see the value in 
adding connection pooling to his code (I asked, and he said "it's easy 
to handle in your application in a few lines of code".  Blech.)
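
For comparison, the Perl client (Cache::Memcached) takes the whole 
server list and hashes each key to a member itself.  If I'm reading the 
PECL maintainer right, his "few lines of code" is just the manual 
version of that hashing; roughly this, sketched in Python with made-up 
addresses:

import zlib
import memcache

SERVERS = ['10.0.0.1:11211', '10.0.0.2:11211', '10.0.0.3:11211']
CLIENTS = {addr: memcache.Client([addr]) for addr in SERVERS}

def client_for(key):
    # Same stable hash on every web server -> same member for a key.
    return CLIENTS[SERVERS[zlib.crc32(key.encode()) % len(SERVERS)]]

client_for('sess_abc123').set('sess_abc123', 'payload')
print(client_for('sess_abc123').get('sess_abc123'))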

Liam

> What I was working on would only implement 1 and 2, although 3 might be 
> possible.  The problem I see with replicating anything is you don't have 
> a good way of guaranteeing the data gets replicated.  For instance, if 
> you are using two servers and have one session...  You write the session 
> info to both the first time.  A network problem temporarily drops the 
> connection to the second server, so the first server is used for reads 
> and writes exclusively. You need code to detect when server 2 comes 
> back.  And when it does come back, it's going to have stale session data, 
> if any writes were done to server 1.  So if you do a read to server 2, 
> you're going to get different results than if you read server 1.  So I 
> think doing replication might be tricky, because there's no good way to 
> determine the state of the information across multiple servers. Now if 
> you had a way to pool memcached, so that it handled replication itself 
> (say by specifying peers to the memcached process at startup, and if a 
> change was received, that information was passed to all peers by the 
> memcached process), that might be interesting, but it would also add 
> complexity and the overhead of trying to maintain consistency might kill 
> the speed.  Always tradeoffs somewhere...
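
One partial answer to the staleness problem described above: stamp every 
write, and on read take the freshest copy seen across the replicas.  It 
narrows the window rather than closing it (clock skew, concurrent 
writes), but here's a sketch of the idea, again in Python with made-up 
addresses:

import time
import memcache

REPLICAS = [memcache.Client([addr]) for addr in
            ['10.0.0.1:11211', '10.0.0.2:11211']]   # hypothetical

def write_session(session_id, data):
    stamped = (time.time(), data)
    for server in REPLICAS:
        server.set(session_id, stamped)   # best effort; may partly fail

def read_session(session_id):
    # A replica that was down and came back with stale data loses to
    # whichever replica holds the newest stamp.
    best = None
    for server in REPLICAS:
        value = server.get(session_id)
        if value is not None and (best is None or value[0] > best[0]):
            best = value
    return best[1] if best is not None else None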

