[dev] shares performance hit (was Re: disable shares)

duck at obala.net duck at obala.net
Tue Jun 10 09:04:21 UTC 2008


Quoting "Didi Rieder" <adrieder at sbox.tugraz.at>:

> Quoting "Didi Rieder" <adrieder at sbox.tugraz.at>:
>
>> Quoting duck <duck at obala.net>:
>>
>>> On Mon, 2008-06-09 at 16:42 +0200, Didi Rieder wrote:
>>>> > Out of memory with only 1000 current users and 130 q/s? This should
>>>> > never happen. Probably it is a MySQL configuration mistake. How much
>>>> > RAM do you have? You can control your RAM usage with various mysqld
>>>> > options or even kernel settings like swappiness etc.
>>>>
>>>> This is what I was also thinking.
>>>> I have a 16GB, but with the 32Bit server only ~3.2Gb can be used.
>>>
>>> uh. 12G wasted.
>>
>> Not really, we are running more zones on the machine; two are syncing
>> replicas for the IMAP servers, so in case we have to switch to the
>> replicas the RAM will be needed.
>>
>>> I have only 4G :(
>>
>> poor you :-), but at least you do not have these problems.
>>
>>>> Do you use persistent connections from Horde to the SQL server?
>>> No, they are not really useful in an internal network and they flood
>>> you at peak time.
>>
>> OK, that could already be one of the problems.
>>
>>>> What are your settings for:
>>>>
>>>> key_buffer_size
>>> 50M
>>>
>>>> max_connections
>>> 1024
>>>
>>>> sort_buffer_size
>>> 1M
>>>
>>>> read_buffer_size
>>> 256K
>>>
>>>> read_rnd_buffer_size
>>> 768K
>>>
>>>> myisam_sort_buffer_size
>>> 8M
>>>
>>> Then some other settings that might help you (see the manual for details):
>>> thread_concurrency  = 8
>>> tmp_table_size = 200M
>>> thread_cache_size = 50
>>> skip-external-locking
>>> skip-thread-priority
>>> table_cache = 400
>>>
>>> For peak times, to not get flooded:
>>> connect_timeout = 2 (abort connect attempts after 2 sec)
>>> and close connections after 30 sec of inactivity (yes, PHP should
>>> close connections after a script ends... yes it should :)
>>> wait_timeout = 30
>>> interactive_timeout = 30
>>
>> Here we go... It seems that we used a rather memory-oversized config
>> in my.cnf, which led to memory problems at peak hours. I decreased
>> most of the relevant values and will report back in a few days.
>
> Now I got entries in the slow_log file:
>
> # Query_time: 12  Lock_time: 0  Rows_sent: 1  Rows_examined: 13767
> use horde;
> SELECT s.*  FROM nag_shares s  LEFT JOIN nag_shares_users AS u ON
> u.share_id = s.share_id WHERE s.share_owner = 'user at realm' OR
> (s.perm_creator & 2) <> 0 OR (s.perm_default & 2) <> 0 OR ( u.user_uid
> = 'user at realm' AND (u.perm & 2) <> 0) ORDER BY s.attribute_name ASC;
>
> I saw many of them; "nag" is only an example, the others were the same
> with "kronolith" and "mnemo".
>
> Didi

The main problem with this query lies in MySQL itself: MySQL does not
feature bitmap-type indexes, so a bitwise predicate like
(s.perm_default & 2) <> 0 forces it to examine every row. I was aware
of that when I wrote the native driver. It is on my TODO list to find
a better storage structure that fully uses the engine's capabilities,
but I cannot say when I will have time to work on it, especially since
I do not experience such performance issues myself.
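To make the limitation concrete, here is a minimal sketch using SQLite's
EXPLAIN QUERY PLAN for illustration (the table and index names are
hypothetical, not Horde's actual schema): a bitwise predicate cannot be
answered from a B-tree index, so the engine scans every row, while
storing each granted permission bit as its own row turns the same check
into an indexed equality lookup.

```python
import sqlite3

# Illustrative only: SQLite instead of MySQL, and hypothetical table
# names. The point is the query plan, which behaves the same way on
# any B-tree-indexed engine.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Bitmask layout, as in the slow query above.
cur.execute("CREATE TABLE shares (share_id INTEGER, perm_default INTEGER)")
cur.execute("CREATE INDEX idx_perm ON shares (perm_default)")

# Normalized alternative: one row per (share, granted-permission-bit).
cur.execute("CREATE TABLE share_perms (share_id INTEGER, perm_bit INTEGER)")
cur.execute("CREATE INDEX idx_bit ON share_perms (perm_bit)")

plan_mask = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT share_id FROM shares WHERE (perm_default & 2) <> 0"
).fetchall()
plan_norm = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT share_id FROM share_perms WHERE perm_bit = 2"
).fetchall()

print(plan_mask)  # full table scan despite idx_perm
print(plan_norm)  # indexed search via idx_bit
```

The trade-off of the normalized layout is extra rows and an extra join,
which is presumably part of why finding the right storage structure is
still an open TODO rather than a quick fix.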

That said, you should get more out of your server. 12 seconds to check
only 13k rows is definitely too much for a server like the one you
mentioned. I have 13,465 rows in the Kronolith shares table and a query
like that takes only 0.2254 sec! For Ansel shares, where I have 67,888
galleries, it takes only 1.9356 sec!

You can hire me ;)

Duck

