[dev] Authentication session data cleaned by Kronolith

Rui Carneiro rui.arc at gmail.com
Tue Jun 25 17:16:28 UTC 2013


On Tue, Jun 25, 2013 at 4:38 PM, Michael M Slusarz <slusarz at horde.org> wrote:

>  Quoting Rui Carneiro <rui.arc at gmail.com>:
>
>
>>   On Tue, Jun 25, 2013 at 2:42 PM, Michael M Slusarz <slusarz at horde.org>
>> wrote:
>>
>>  Quoting Rui Carneiro <rui.arc at gmail.com>:
>>>
>>>> I don't know if my issue is the same as the one in the original
>>>> thread, but here it goes.
>>>>
>>>> My sessions are also being cleared, but by IMP. Everything works just
>>>> fine with "normal" accounts, but when I log into a very large account
>>>> (e.g. 70k messages in the inbox) the session is destroyed (I am
>>>> assuming this, since I am redirected to the logout page).
>>>>
>>>> My current driver for the SessionHandler is Memcache. If I use File,
>>>> everything works as expected.
>>>>
>>>
>>> You *really* shouldn't be using memcache as a sessionhandler backend.
>>> If used on a system with many users, you are most likely going to run
>>> out of the larger memory slices fairly quickly, which will cause
>>> sessions to be dropped due to the FIFO allocation.
>>>
>>> A mailbox with 70,000 messages requires a large data structure to
>>> represent.  You may in fact be reaching the memcache max size limit of
>>> 1 MB.
>>>
>>
>>
>>    We are indeed exceeding the 1 MB size limit, but shouldn't this be
>> supported by the large_items config?
>>
>
> Not necessarily.  There are a finite number of large slabs allocated by
> memcache.  On a server with a number of users, these large slabs are
> necessarily limited (especially if caching other data), so they are
> potentially going to get reallocated to another user.  This will cause
> the original user's session data to be incomplete, and their session
> will be invalidated or time out.
>

That is not the case here. I am testing this locally, and I am the only user.


> Again - you *really* should not be using memcache as a session handler.
>  It was never designed for this purpose.  It is 2013 - there are better,
> higher-performing alternatives available these days.


I know there are other alternatives, but we were trying to upgrade the
Horde code without touching our backends. Our memcache daemon has an
uptime of 3 years (25713 hours, to be precise) and we had no problems
until now.

The problem is in fact related to >1 MB blocks. It is easily reproduced
by storing a ~5 MB value in the session:
 $GLOBALS['session']->set('imp', 'xyz', str_repeat('a', 5000000));
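
For what it's worth, the same failure can be reproduced outside of Horde
with the plain PECL memcached extension. A minimal sketch (the host/port
and key name here are just placeholders for a default local daemon):

  <?php
  // Standalone reproduction of the per-item size limit, bypassing
  // Horde entirely. Assumes a memcached daemon on localhost:11211.
  $mc = new Memcached();
  $mc->addServer('127.0.0.1', 11211);

  // A ~5 MB value, well over memcached's default 1 MB item limit.
  $big = str_repeat('a', 5000000);

  if ($mc->set('session_blob', $big) === false) {
      // On a default server config this fails with RES_E2BIG ("item
      // too large"), and the value is never stored.
      echo 'set() failed: ' . $mc->getResultMessage() . "\n";
  }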

Anyway, we will try to use Redis, but IMHO this is a bug that needs to be
fixed.
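
(For reference: as far as I understand it, any large_items-style
workaround has to split an oversized value across several keys, roughly
like the untested sketch below. The key scheme and chunk size are made
up for illustration - this is not Horde's actual implementation.)

  // Store a value of arbitrary size as <1 MB chunks plus a count key.
  function mc_set_large(Memcached $mc, $key, $value, $chunkSize = 1000000)
  {
      $chunks = str_split($value, $chunkSize);
      if ($mc->set($key . '_count', count($chunks)) === false) {
          return false;
      }
      foreach ($chunks as $i => $chunk) {
          if ($mc->set($key . '_' . $i, $chunk) === false) {
              return false;
          }
      }
      return true;
  }

  // Reassemble the chunks; if any one of them has been evicted, the
  // whole value is unrecoverable - which is exactly how session data
  // can end up "incomplete", as described above.
  function mc_get_large(Memcached $mc, $key)
  {
      $count = $mc->get($key . '_count');
      if ($count === false) {
          return false;
      }
      $value = '';
      for ($i = 0; $i < $count; $i++) {
          $chunk = $mc->get($key . '_' . $i);
          if ($chunk === false) {
              return false;
          }
          $value .= $chunk;
      }
      return $value;
  }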

