[imp] New IMAP server, troubleshooting slow IMP performance
Simon Wilson
simon at simonandkate.net
Wed Aug 2 03:14:44 UTC 2017
>>>> I just switched it to nosql, using the mongo setup on the server
>>>> that is already used for the Horde cache - and the email load
>>>> times have dropped to about 0.5s.
>>>>
>>>> Thanks Jan, that was the ticket. I think nosql probably gives
>>>> better performance anyway?
>>>
>>> Probably yes.
>>>
>>> --
>>> Jan Schneider
>>
>> Since setting my IMAP cache to nosql (mongo), every 5 or 6 hours
>> mongo goes to 100% CPU for 2 or 3 hours. The mongod log is a
>> constant flow of messages like the ones below.
>>
>> During this time, Imp seems to be slower again, like when IMAP
>> caching was not working properly.
>>
>> Then for some reason it will just stop. The log sequence below
>> shows the tail of it running flat out (queries firing every
>> 50-100 ms), then stopping.
>>
>> Any ideas?
>>
>> Simon
>>
>> 2017-08-01T23:04:13.836+1000 I COMMAND [conn100] query
>> horde.horde_cache query: { cid: "e471c4112fbd3a7374d6deaa2192039e"
>> } planSummary: COLLSCAN ntoskip:0 keysExamined:0 docsExamined:59189
>> cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:463
>> nreturned:1 reslen:605 locks:{ Global: { acquireCount: { r: 928 }
>> }, Database: { acquireCount: { r: 464 } }, Collection: {
>> acquireCount: { r: 464 } } } 133ms
>> 2017-08-01T23:04:13.943+1000 I COMMAND [conn101] query
>> horde.horde_cache query: { cid: "4c3e501b437e318e40097fe327d66ea6"
>> } planSummary: COLLSCAN ntoskip:0 keysExamined:0 docsExamined:59180
>> cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:464
>> nreturned:1 reslen:322 locks:{ Global: { acquireCount: { r: 930 }
>> }, Database: { acquireCount: { r: 465 } }, Collection: {
>> acquireCount: { r: 465 } } } 123ms
>> 2017-08-01T23:04:14.024+1000 I COMMAND [conn100] query
>> horde.horde_cache query: { cid: "e471c4112fbd3a7374d6deaa2192039e"
>> } planSummary: COLLSCAN ntoskip:0 keysExamined:0 docsExamined:59189
>> cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:464
>> nreturned:1 reslen:605 locks:{ Global: { acquireCount: { r: 930 }
>> }, Database: { acquireCount: { r: 465 } }, Collection: {
>> acquireCount: { r: 465 } } } 173ms
>> 2017-08-01T23:04:14.050+1000 I COMMAND [conn73] query
>> horde.horde_cache query: { cid: "0a6536c57d01c2ea148c511301648064"
>> } planSummary: COLLSCAN ntoskip:0 keysExamined:0 docsExamined:59182
>> cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:463
>> nreturned:1 reslen:765 locks:{ Global: { acquireCount: { r: 928 }
>> }, Database: { acquireCount: { r: 464 } }, Collection: {
>> acquireCount: { r: 464 } } } 160ms
>> 2017-08-01T23:04:15.010+1000 I COMMAND [conn73] query
>> horde.horde_cache query: { cid: "7f626b12cffb4951d3f569c6d2f0d3cd"
>> } planSummary: COLLSCAN ntoskip:0 keysExamined:0 docsExamined:59184
>> cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:463
>> nreturned:1 reslen:736 locks:{ Global: { acquireCount: { r: 928 }
>> }, Database: { acquireCount: { r: 464 } }, Collection: {
>> acquireCount: { r: 464 } } } 101ms
>> 2017-08-01T23:04:15.248+1000 I COMMAND [conn101] query
>> horde.horde_cache query: { cid: "f73608d48bef65a48f2a887a510b397b"
>> } planSummary: COLLSCAN ntoskip:0 keysExamined:0 docsExamined:59183
>> cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:463
>> nreturned:1 reslen:701 locks:{ Global: { acquireCount: { r: 928 }
>> }, Database: { acquireCount: { r: 464 } }, Collection: {
>> acquireCount: { r: 464 } } } 106ms
>> 2017-08-01T23:04:30.409+1000 I NETWORK [conn95] end connection
>> 127.0.0.1:47564 (26 connections now open)
>> 2017-08-01T23:04:35.415+1000 I NETWORK [conn49] end connection
>> 127.0.0.1:40664 (25 connections now open)
>> 2017-08-01T23:04:37.417+1000 I NETWORK [conn99] end connection
>> 127.0.0.1:56114 (24 connections now open)
>> 2017-08-01T23:04:39.419+1000 I NETWORK [conn64] end connection
>> 127.0.0.1:42526 (23 connections now open)
>> 2017-08-01T23:04:43.423+1000 I NETWORK [conn92] end connection
>> 127.0.0.1:47472 (22 connections now open)
>> 2017-08-01T23:04:47.427+1000 I NETWORK [conn78] end connection
>> 127.0.0.1:44930 (21 connections now open)
>> 2017-08-01T23:04:51.432+1000 I NETWORK [conn86] end connection
>> 127.0.0.1:45858 (20 connections now open)
>> 2017-08-01T23:04:56.434+1000 I NETWORK [conn83] end connection
>> 127.0.0.1:45694 (19 connections now open)
>> 2017-08-01T23:05:08.004+1000 I COMMAND [conn74] query
>> horde.horde_cache query: { cid: "cae0be8ca4bcc7fe642c2498640c0af4"
>> } planSummary: COLLSCAN ntoskip:0 keysExamined:0 docsExamined:59188
>> cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:463
>> nreturned:1 reslen:440 locks:{ Global: { acquireCount: { r: 928 }
>> }, Database: { acquireCount: { r: 464 } }, Collection: {
>> acquireCount: { r: 464 } } } 106ms
>> 2017-08-01T23:05:08.232+1000 I COMMAND [conn91] query
>> horde.horde_cache query: { cid: "7f626b12cffb4951d3f569c6d2f0d3cd"
>> } planSummary: COLLSCAN ntoskip:0 keysExamined:0 docsExamined:59184
>> cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:464
>> nreturned:1 reslen:736 locks:{ Global: { acquireCount: { r: 930 }
>> }, Database: { acquireCount: { r: 465 } }, Collection: {
>> acquireCount: { r: 465 } } } 102ms
>> 2017-08-01T23:05:33.484+1000 I NETWORK [conn97] end connection
>> 127.0.0.1:56032 (18 connections now open)
>>
>> --
>> Simon Wilson
>> M: 0400 12 11 16
>
> Mongo ran at 100% CPU with Imp IMAP caching for about 7 hours
> straight overnight.
>
> I have set Imp to use the Redis (hashtable) cache, and will see if
> that works better.
>
> Simon
>
> --
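Before getting to Redis, one thing jumps out of those mongod lines:
every horde_cache lookup is a COLLSCAN over roughly 59,000 documents,
which suggests the collection has no index on cid. If anyone wants to
stay on mongo, a one-off index should stop the scans. A minimal
sketch, assuming the legacy PECL mongo extension (the one Horde_Mongo
wraps); host and database names are from my setup:

  <?php
  // One-off: index the cache id so each lookup becomes an index
  // seek instead of a full collection scan. Hypothetical sketch;
  // adjust connection details to your own setup.
  $m = new MongoClient('mongodb://localhost:27017');
  $coll = $m->selectDB('horde')->selectCollection('horde_cache');
  // createIndex() needs pecl/mongo >= 1.5; older releases only
  // have the deprecated ensureIndex().
  $coll->createIndex(array('cid' => 1));

With the index in place, those 100-170ms queries should drop to
almost nothing.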
I switched both Horde and Imp caching over to Redis, and have disabled
mongo...
What that has shown is that it's not Mongo that's the cause...
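For reference, this is roughly what the switch looks like in the
config files. A sketch from my setup - the hashtable driver and
parameter names may differ depending on how your conf.php was
generated, so treat them as assumptions:

  <?php
  // horde/config/conf.php -- Hashtable cache backed by Redis.
  // Driver/parameter names are as generated on my install; verify
  // against your own conf.php before copying.
  $conf['hashtable']['driver'] = 'predis';
  $conf['hashtable']['params']['hostspec'] = array('localhost');
  $conf['hashtable']['params']['port'] = array(6379);
  $conf['cache']['driver'] = 'Hashtable';

  // imp/config/backends.local.php -- point the IMAP client cache
  // at the same hashtable backend ('nosql' is what I had before).
  $servers['imap']['cache'] = 'hashtable';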
One device is logging this, over and over again:
[13102][2017-08-02T13:07:25+10:00] INFO:
----------Horde_Core_ActiveSync_Driver::authenticate() attempt for
dom----------
[13102][2017-08-02T13:07:26+10:00] INFO: ----------SYNC request
received for user dom
[13102][2017-08-02T13:07:26+10:00] INFO: ----------Request being
handled for device: ANDROIDC1102625737, Supporting protocol version:
14.1, Using Horde_ActiveSync v2.38.4
[13102][2017-08-02T13:07:26+10:00] INFO: Policykey: 1936295678 verified.
[13102][2017-08-02T13:07:26+10:00] I: <Synchronize>
[13102][2017-08-02T13:07:26+10:00] I: <Folders>
[13102][2017-08-02T13:07:26+10:00] I: <Folder>
[13102][2017-08-02T13:07:26+10:00] I: <SyncKey>
[13102][2017-08-02T13:07:26+10:00] I:
{597edfd4-81f4-4718-b7c8-314dc0a801e6}1735
[13102][2017-08-02T13:07:26+10:00] I: </SyncKey>
[13102][2017-08-02T13:07:26+10:00] I: <FolderId>
[13102][2017-08-02T13:07:26+10:00] I: F96d3fb07
[13102][2017-08-02T13:07:26+10:00] I: </FolderId>
[13102][2017-08-02T13:07:26+10:00] I: <DeletesAsMoves>
[13102][2017-08-02T13:07:26+10:00] I: 1
[13102][2017-08-02T13:07:26+10:00] I: </DeletesAsMoves>
[13102][2017-08-02T13:07:26+10:00] I: <GetChanges />
[13102][2017-08-02T13:07:26+10:00] I: <WindowSize>
[13102][2017-08-02T13:07:26+10:00] I: 10
[13102][2017-08-02T13:07:26+10:00] I: </WindowSize>
[13102][2017-08-02T13:07:26+10:00] I: <Options>
[13102][2017-08-02T13:07:26+10:00] I: <FilterType>
[13102][2017-08-02T13:07:26+10:00] I: 3
[13102][2017-08-02T13:07:26+10:00] I: </FilterType>
[13102][2017-08-02T13:07:26+10:00] I: <AirSyncBase:BodyPreference>
[13102][2017-08-02T13:07:26+10:00] I: <AirSyncBase:Type>
[13102][2017-08-02T13:07:26+10:00] I: 2
[13102][2017-08-02T13:07:26+10:00] I: </AirSyncBase:Type>
[13102][2017-08-02T13:07:26+10:00] I: <AirSyncBase:TruncationSize>
[13102][2017-08-02T13:07:26+10:00] I: 200000
[13102][2017-08-02T13:07:26+10:00] I: </AirSyncBase:TruncationSize>
[13102][2017-08-02T13:07:26+10:00] I: </AirSyncBase:BodyPreference>
[13102][2017-08-02T13:07:26+10:00] I: </Options>
[13102][2017-08-02T13:07:26+10:00] I: </Folder>
[13102][2017-08-02T13:07:26+10:00] ERR: COLLECTIONS:
Horde_ActiveSync_Collections::getBackendIdForFolderUid failed because
folder was not found in cache.
[13102][2017-08-02T13:07:26+10:00] O: <Synchronize>
[13102][2017-08-02T13:07:26+10:00] O: <Folders>
[13102][2017-08-02T13:07:26+10:00] O: <Folder>
[13102][2017-08-02T13:07:26+10:00] O: <SyncKey>
[13102][2017-08-02T13:07:26+10:00] O:
{597edfd4-81f4-4718-b7c8-314dc0a801e6}1736
[13102][2017-08-02T13:07:26+10:00] O: </SyncKey>
[13102][2017-08-02T13:07:26+10:00] O: <FolderId>
[13102][2017-08-02T13:07:26+10:00] O: F96d3fb07
[13102][2017-08-02T13:07:26+10:00] O: </FolderId>
[13102][2017-08-02T13:07:26+10:00] O: <Status>
[13102][2017-08-02T13:07:26+10:00] O: 8
[13102][2017-08-02T13:07:26+10:00] O: </Status>
[13102][2017-08-02T13:07:26+10:00] O: </Folder>
[13102][2017-08-02T13:07:26+10:00] O: </Folders>
[13102][2017-08-02T13:07:26+10:00] O: </Synchronize>
[13102][2017-08-02T13:07:26+10:00] INFO: Maximum memory usage for
ActiveSync request: 17301504 bytes.
So it looks like the root cause is one particular device stuck in an
ActiveSync retry loop: the server answers every Sync with Status 8
("object not found", which matches the folder-not-found-in-cache
error above), and the client immediately tries again. The phone is my
son's LG G3 running Android 6, with the native email client.
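Assuming the loop is just stale device state on the server side, the
obvious fix is to remove that device's partnership so it re-syncs
from scratch - either from the admin panel's ActiveSync device page,
or with something like the sketch below. The removeState() call and
the 'Horde_ActiveSyncState' injector binding are from my reading of
Horde_ActiveSync_State_Sql, so double-check them against your
install before running anything:

  <?php
  // Hypothetical one-off CLI script, run from the horde base
  // directory. Drops all sync state for the looping device so it
  // has to establish a fresh partnership on its next connect.
  require_once __DIR__ . '/lib/Application.php';
  Horde_Registry::appInit('horde', array('cli' => true));

  $state = $GLOBALS['injector']->getInstance('Horde_ActiveSyncState');
  $state->removeState(array(
      'devId' => 'androidc1102625737',  // device id from the logs
      'user'  => 'dom',
  ));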
Each attempt shows up in the web server access log as:
[02/Aug/2017:13:08:35 +1000] "POST
/Microsoft-Server-ActiveSync?Cmd=Sync&User=dom&DeviceId=androidc1102625737&DeviceType=Android HTTP/1.1" 200 76 "-"
"Android-Mail/7.7.2.162644686.release"0/632339
So it looks like something in my caching setup is not working
properly. What is the correct way to clear the caches when switching
between hashtable and nosql, make connecting devices aware of the
change, and avoid this runaway CPU usage?
I can run either Redis or Mongo for Horde / Imp caching, but it looks
like I'm hitting stale-cache problems whenever I switch?
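For what it's worth, the bluntest approach I can think of when
switching backends is to stop the web server, flush both stores by
hand, and bring Horde back up. A sketch, assuming phpredis and the
legacy mongo extension, and that nothing else shares those stores:

  <?php
  // Flush the Redis database the hashtable cache lives in.
  // WARNING: FLUSHDB wipes every key in that db, not just Horde's.
  $redis = new Redis();
  $redis->connect('127.0.0.1', 6379);
  $redis->flushDB();

  // Drop the mongo cache collection; mongo recreates collections
  // on the next insert, so Horde repopulates it transparently.
  $m = new MongoClient('mongodb://localhost:27017');
  $m->selectDB('horde')->selectCollection('horde_cache')->drop();

That clears Horde's own caches, but the ActiveSync loop above
suggests devices also keep their own idea of the sync state, which
is why removing the device partnership seems to be needed as well.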
Simon.
--
Simon Wilson
M: 0400 12 11 16