[horde] Horde Problems Getting Better, but MYSQL still an issue
Komputnik
komputnik at gmail.com
Fri Sep 7 20:20:30 UTC 2007
Dave Cunningham wrote:
> Thanks for all that advice.
>
> I have applied many of your suggestions and will monitor performance today.
>
> I am also going to try to apply that patch.
>
> Dave
>
> -----Original Message-----
> From: Andrew Morgan [mailto:morgan at orst.edu]
> Sent: Thu 9/6/2007 5:01 PM
> To: Dave Cunningham
> Cc: horde at lists.horde.org
> Subject: Re: [horde] Horde Problems Getting Better, but MYSQL still an issue
>
> On Thu, 6 Sep 2007, Dave Cunningham wrote:
>
>
>> My mysql database just gets thoroughly pinned. I've tried a bunch of
>> things mentioned so far on this list, but nothing seems to have solved
>> the problem yet. The mysql process still shows a fairly consistent 150
>> to 200% CPU (dual cpu) usage in top.
>>
>> I have tried converting to InnoDB to allow row locking. But, I'm
>> wondering if I have to do something else to get my queries to lock rows
>> rather than whole tables. Is this something that mysql figures out on
>> its own, or is it something I have to set or edit in the queries? I
>> would think the indexes should help it figure out to only lock certain
>> rows, but things have not improved much with tweaking.
>>
>
> I don't know if Horde uses row-locking for non-session tables. When I was
> using MySQL for sessions, I had the following entry in
> horde/config/conf.php:
>
> $conf['sessionhandler']['params']['rowlocking'] = true;
>
> I'm not sure if there is a rowlocking parameter for $conf['sql'].
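>
> A minimal conf.php sketch, assuming Horde 3's mysql session handler (the
> handler type and exact parameter names are an assumption and may vary by
> Horde version):
>
> // horde/config/conf.php (excerpt)
> $conf['sessionhandler']['type'] = 'mysql';
> $conf['sessionhandler']['params']['rowlocking'] = true;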
>
>
>> I have two implementations of horde. One server has one instance of
>> horde running on it, with about 1000-1500 active sessions. This one
>> horde instance has one database for all 1500 sessions. I have a second
>> server that has about 2 dozen virtual hosts, each running a separate
>> instance of horde. Each instance has its own database. The total
>> usage is again around 1000-1500 sessions. However, the load on the
>> database server is WAY smaller (about 1/8 to 1/4 of the CPU usage). I
>> figure this is because the various aspects of the datatree are split for
>> each instance over many databases. So, each datatree join is FAR
>> smaller than the single implementation. Anyone agree, disagree, or
>> better, have a solution to help?
>>
>
> I agree that the size of the datatree table determines how painful the
> JOINs will be. Multiple small datatree tables would be faster than 1 big
> datatree table.
>
> You can also get burned by some of the queries Horde tries to do. See the
> attached patch for an improvement in handling OR datatree queries. I
> forgot about this useful patch that was written by someone else on the
> list. I believe it was accepted into Horde CVS as well. We've been using
> it in production here for several months anyways.
>
>
>> Here is my my.cnf:
>>
>> [mysqld]
>> query_cache_size=1024M
>>
>
> You may be doing more harm than good with such a large query cache. If I
> remember right, when a table is updated, the cached queries using that
> table are invalidated. On a table with frequent updates, the cache may be
> adding unnecessary overhead. I'm running with query_cache_size = 32M.
> There is some turnover in the cache, which indicates I may need to
> increase that value. However, a 1GB query cache seems pretty excessive.
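>
> A quick sanity check on whether the cache is helping (standard MySQL status
> variables; a sketch, run from the mysql client):
>
> SHOW STATUS LIKE 'Qcache%';
> -- Qcache_hits vs. Qcache_inserts gives the hit rate; a steadily growing
> -- Qcache_lowmem_prunes means entries are being evicted for lack of space.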
>
>
>> max_allowed_packet = 2048M
>> max_connections = 2000
>> key_buffer_size=160M
>>
>
> Your key_buffer_size should be larger. I'm using 512M here and that is
> 87% used.
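>
> To see how full the key buffer actually is, something like this works (a sketch):
>
> SHOW STATUS LIKE 'Key_blocks_unused';
> -- compare against key_buffer_size / key_cache_block_size
> SHOW STATUS LIKE 'Key_read%';
> -- Key_reads / Key_read_requests approximates the key cache miss rate.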
>
>
>> join_buffer_size=64M
>>
>
> Hmm, I have the default 128K join_buffer_size, but I wonder if I should
> increase this on my system too. I don't see a way to monitor the usage of
> this buffer.
>
>
>> table_cache=1024M
>>
>
> This sets the number of tables to keep open at a time. It is not a memory
> size. I use table_cache=512.
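>
> Whether the table cache is big enough can be judged from Opened_tables (a sketch):
>
> SHOW STATUS LIKE 'Open%tables';
> -- Open_tables is the number currently open; if Opened_tables keeps climbing
> -- during normal operation, table_cache is too small.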
>
>
>> sort_buffer_size=128M
>>
>
> I'm using the default size of 2MB, which seems to be adequate.
>
>
>> net_buffer_length=8M
>>
>
> You should probably leave this set to the default. The docs say:
>
> Each client thread is associated with a connection buffer and result
> buffer. Both begin with a size given by net_buffer_length but are
> dynamically enlarged up to max_allowed_packet bytes as needed. The
> result buffer shrinks to net_buffer_length after each SQL statement.
>
> This variable should not normally be changed, but if you have very
> little memory, you can set it to the expected length of statements sent
> by clients. If statements exceed this length, the connection buffer is
> automatically enlarged. The maximum value to which net_buffer_length can
> be set is 1MB.
>
>
>> myisam_sort_buffer_size=128M
>>
>
> The default should be fine. This is only used when repairing tables or
> creating indexes.
>
>
>> read_buffer_size=64M
>> thread_cache_size=1024M
>>
>
> thread_cache_size=8 is sufficient!
>
>
>> read_rnd_buffer_size=32M
>> tmp_table_size=64M
>> record_buffer=64M
>>
>
> Hmm, record_buffer doesn't seem to be in the docs. What version of MySQL
> are you running? :)
>
>
>> log-slow-queries=/var/log/mysql_slow_queries.log
>> long_query_time = 10
>> # InnoDB settings
>> innodb_data_home_dir =/var/lib/mysql/
>> innodb_data_file_path = ibdata1:100M:autoextend
>> innodb_log_file_size=1024M
>> set-variable = innodb_buffer_pool_size=2048M
>> set-variable = innodb_additional_mem_pool_size=200M
>>
>
> In version 5.0.x, these two variables can be specified directly, without
> the "set-variable" prefix.
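>
> In other words, with 5.0.x the same settings can simply read (values taken
> from the my.cnf above):
>
> innodb_buffer_pool_size = 2048M
> innodb_additional_mem_pool_size = 200M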
>
>
>> innodb_flush_log_at_trx_commit=0
>>
>> Thanks guys,
>>
>> Dave
>>
>
> Whew!
>
> Andy
>
>
Hi,

MySQL is an issue on Linux because of poor thread scheduling in older
kernels, so on SMP machines (x86_64 preferred) you should address that:

Use glibc 2.6.
Use an adequate kernel (2.6.22.x).
Use google-perf-tools.

Link mysql against -ltcmalloc, or build google-perf-tools and, in the mysql
init script right before start(), export this:

export LD_PRELOAD=/usr/lib/libtcmalloc.so   (the library path may vary on x86_64)
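A minimal sketch of that init-script change (the /etc/init.d/mysql path, the
library location, and the mysqld_safe invocation are assumptions and will
differ between distributions):

  #!/bin/sh
  # Hypothetical excerpt of /etc/init.d/mysql: preload tcmalloc so mysqld
  # uses it instead of the glibc allocator.  Adjust the library path as
  # needed (e.g. /usr/lib64/libtcmalloc.so on some x86_64 systems).
  LD_PRELOAD=/usr/lib/libtcmalloc.so
  export LD_PRELOAD

  start() {
          # the distribution's original start logic continues unchanged
          /usr/bin/mysqld_safe --user=mysql >/dev/null 2>&1 &
  }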
Without InnoDB I got a 70% performance improvement.
With InnoDB the improvement is 45%.
Use sysbench from SourceForge to confirm that it is working: the wait count
in MySQL drops dramatically.

Link to the MySQL performance issue:
http://jeffr-tech.livejournal.com/5705.html

Rgds.
Kompi