[horde] Horde Problems Getting Better, but MySQL still an issue
Andrew Morgan
morgan at orst.edu
Thu Sep 6 21:01:41 UTC 2007
On Thu, 6 Sep 2007, Dave Cunningham wrote:
> My MySQL database just gets thoroughly pinned. I've tried a bunch of
> things mentioned so far on this list, but nothing has solved the
> problem yet. The mysql process still shows a fairly consistent 150
> to 200% CPU usage (dual-CPU box) in top.
>
> I have tried converting to InnoDB to allow row locking, but I'm
> wondering if I have to do something else to get my queries to lock rows
> rather than whole tables. Is this something that MySQL figures out on
> its own, or is it something I have to set or edit in the queries? I
> would think the indexes should help it figure out to lock only certain
> rows, but things have not improved much with tweaking.
I don't know if Horde uses row-locking for non-session tables. When I was
using MySQL for sessions, I had the following entry in
horde/config/conf.php:
$conf['sessionhandler']['params']['rowlocking'] = true;
I'm not sure if there is a rowlocking parameter for $conf['sql'].
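One thing worth checking: row locks only matter if the session table itself is
InnoDB. Assuming the default horde_sessionhandler table name (adjust for your
schema), something like this will verify and convert it:

  SHOW TABLE STATUS LIKE 'horde_sessionhandler';    -- check the Engine column
  ALTER TABLE horde_sessionhandler ENGINE = InnoDB; -- convert if it's still MyISAM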
> I have two implementations of Horde. One server has a single instance of
> Horde running on it, with about 1000-1500 active sessions. This one
> Horde instance has one database for all 1500 sessions. I have a second
> server that hosts about two dozen virtual hosts, each running a separate
> instance of Horde, and each instance has its own database. The total
> usage is again around 1000-1500 sessions. However, the load on that
> database server is WAY smaller (about 1/8 to 1/4 of the CPU usage). I
> figure this is because the various pieces of the datatree are split
> across many databases for each instance, so each datatree JOIN is FAR
> smaller than in the single implementation. Does anyone agree, disagree,
> or, better, have a solution to help?
I agree that the size of the datatree table determines how painful the
JOINs will be. Multiple small datatree tables will be faster than one big
datatree table.
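It's easy to see how big your problem is. Assuming the stock horde_datatree
and horde_datatree_attributes table names:

  SELECT COUNT(*) FROM horde_datatree;
  SELECT COUNT(*) FROM horde_datatree_attributes;

If I remember the schema right, the attributes table is the one that gets
joined repeatedly for attribute searches, so its row count is the number to
watch.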
You can also get burned by some of the queries Horde tries to do. See the
attached patch for an improvement in handling OR datatree queries. I had
forgotten about this useful patch, which was written by someone else on the
list; I believe it was accepted into Horde CVS as well. In any case, we've
been running it in production here for several months.
> Here is my my.cnf:
>
> [mysqld]
> query_cache_size=1024M
You may be doing more harm than good with such a large query cache. If I
remember right, when a table is updated, every cached query that references
that table is invalidated, so on a table with frequent updates the cache may
add unnecessary overhead. I'm running with query_cache_size = 32M. There is
some turnover in my cache, which suggests I may need to increase that value,
but a 1GB query cache seems pretty excessive.
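You can let the server tell you whether the cache is earning its keep:

  SHOW GLOBAL STATUS LIKE 'Qcache%';

A steadily climbing Qcache_lowmem_prunes means the cache is too small; lots
of Qcache_inserts with comparatively few Qcache_hits means it is mostly churn.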
> max_allowed_packet = 2048M
> max_connections = 2000
> key_buffer_size=160M
Your key_buffer_size should be larger. I'm using 512M here and that is
87% used.
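For anyone wondering where a number like that 87% comes from, it can be
estimated from the key cache counters (key_cache_block_size defaults to 1K):

  SHOW GLOBAL STATUS LIKE 'Key%';

  usage ~= 1 - (Key_blocks_unused * key_cache_block_size) / key_buffer_size

The Key_reads / Key_read_requests ratio is also worth watching; the usual rule
of thumb is to keep it below about 0.01.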
> join_buffer_size=64M
Hmm, I have the default 128K join_buffer_size, but I wonder if I should
increase this on my system too. I don't see a direct way to monitor the usage
of this buffer.
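The closest indirect measure I can think of: the join buffer is only used for
joins that can't use an index, and MySQL counts those:

  SHOW GLOBAL STATUS LIKE 'Select_full_join';

If that counter stays near zero, raising join_buffer_size won't buy you
anything.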
> table_cache=1024M
This sets the number of tables to keep open at a time. It is not a memory
size. I use table_cache=512.
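This one is easy to monitor:

  SHOW GLOBAL STATUS LIKE 'Open%tables';

If Opened_tables keeps climbing on a long-running server, the table cache is
too small.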
> sort_buffer_size=128M
I'm using the default size of 2MB, which seems to be adequate.
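The status counters will tell you if the default stops being adequate:

  SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';

If that number grows quickly, sorts are spilling to disk and a larger
sort_buffer_size might help.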
> net_buffer_length=8M
You should probably leave this set to the default. The docs say:
Each client thread is associated with a connection buffer and result
buffer. Both begin with a size given by net_buffer_length but are
dynamically enlarged up to max_allowed_packet bytes as needed. The
result buffer shrinks to net_buffer_length after each SQL statement.
This variable should not normally be changed, but if you have very
little memory, you can set it to the expected length of statements sent
by clients. If statements exceed this length, the connection buffer is
automatically enlarged. The maximum value to which net_buffer_length can
be set is 1MB.
> myisam_sort_buffer_size=128M
The default should be fine. This is only used when repairing tables or
creating indexes.
> read_buffer_size=64M
> thread_cache_size=1024M
Note that thread_cache_size is a thread count, not a memory size;
thread_cache_size=8 is sufficient!
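You can check how well the thread cache is doing:

  SHOW GLOBAL STATUS LIKE 'Threads_created';
  SHOW GLOBAL STATUS LIKE 'Connections';

If Threads_created is a tiny fraction of Connections, nearly every client is
getting a recycled thread and there's nothing to gain from a bigger cache.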
> read_rnd_buffer_size=32M
> tmp_table_size=64M
> record_buffer=64M
Hmm, record_buffer doesn't seem to be in the docs anymore; if I remember
right, it's the ancient pre-4.0 name for read_buffer_size, which you are
already setting. What version of MySQL are you running? :)
> log-slow-queries=/var/log/mysql_slow_queries.log
> long_query_time = 10
> # InnoDB settings
> innodb_data_home_dir =/var/lib/mysql/
> innodb_data_file_path = ibdata1:100M:autoextend
> innodb_log_file_size=1024M
> set-variable = innodb_buffer_pool_size=2048M
> set-variable = innodb_additional_mem_pool_size=200M
In version 5.0.x, these two variables can be specified directly, without the
"set-variable =" prefix.
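In other words, these would just become plain my.cnf lines:

  innodb_buffer_pool_size = 2048M
  innodb_additional_mem_pool_size = 200M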
> innodb_flush_log_at_trx_commit=0
>
> Thanks guys,
>
> Dave
Whew!
Andy
-------------- next part --------------
--- lib/Horde/DataTree/sql.php.orig 2007-03-14 12:59:23.000000000 -0700
+++ lib/Horde/DataTree/sql.php 2007-06-28 13:30:39.000000000 -0700
@@ -1098,6 +1098,46 @@
             return PEAR::raiseError('no criteria');
         }
 
+        // If there are top-level OR criteria, process one at a time
+        // and return any results as soon as they're found...but only if
+        // there is no LIMIT requested.
+        if ($count == 0 && $from == 0) {
+            foreach ($criteria as $key => $vals) {
+                if ($key == 'OR') {
+                    $rows = array();
+                    $num_or_statements = count($criteria[$key]);
+                    for ($i = 0; $i < $num_or_statements; $i++) {
+                        $criteria_or = $criteria['OR'][$i];
+                        list($query, $values) = $this->buildAttributeQuery(
+                            $criteria_or,
+                            $parent,
+                            $allLevels,
+                            $restrict,
+                            DATATREE_BUILD_SELECT,
+                            $sortby_name,
+                            $sortby_key,
+                            $direction);
+                        if ($count) {
+                            $query = $this->_db->modifyLimitQuery($query, $from, $count);
+                        }
+
+                        Horde::logMessage('SQL Query by DataTree_sql::getByAttributes(): ' . $query, __FILE__, __LINE__, PEAR_LOG_DEBUG);
+
+                        $result = $this->_db->query($query, $values);
+                        if (is_a($result, 'PEAR_Error')) {
+                            Horde::logMessage($result, __FILE__, __LINE__, PEAR_LOG_ERR);
+                            return $result;
+                        }
+                        while ($row = $result->fetchRow()) {
+                            $rows[$row[0]] = String::convertCharset($row[1], $this->_params['charset']);
+                        }
+                    }
+
+                    return $rows;
+                }
+            }
+        }
+        // Process AND or other complex queries.
         $aq = $this->buildAttributeQuery($criteria,
                                          $parent,
                                          $allLevels,
@@ -1109,22 +1149,22 @@
         if (is_a($aq, 'PEAR_Error')) {
             return $aq;
         }
-        list($query, $values) = $aq;
-        Horde::logMessage('SQL Query by DataTree_sql::getByAttributes(): ' . $query, __FILE__, __LINE__, PEAR_LOG_DEBUG);
+        list($query, $values) = $aq;
         if ($count) {
-            $result = $this->_db->limitQuery($query, $from, $count, $values);
-        } else {
-            $result = $this->_db->query($query, $values);
+            $query = $this->_db->modifyLimitQuery($query, $from, $count);
         }
+        Horde::logMessage('SQL Query by DataTree_sql::getByAttributes(): ' . $query, __FILE__, __LINE__, PEAR_LOG_DEBUG);
+        $result = $this->_db->query($query, $values);
         if (is_a($result, 'PEAR_Error')) {
             Horde::logMessage($result, __FILE__, __LINE__, PEAR_LOG_ERR);
             return $result;
         }
+
         $rows = array();
-        while ($row = &$result->fetchRow()) {
+        while ($row = $result->fetchRow()) {
             $rows[$row[0]] = String::convertCharset($row[1], $this->_params['charset']);
         }