[dev] shares performance hit (was Re: disable shares)
duck
duck at obala.net
Mon Jun 9 16:00:08 UTC 2008
On Mon, 2008-06-09 at 15:32 +0200, Didi Rieder wrote:
> Quoting duck at obala.net:
>
> > Quoting "Didi Rieder" <adrieder at sbox.tugraz.at>:
> >
> >> Quoting duck at obala.net:
> >>
> >>> Quoting "Didi Rieder" <adrieder at sbox.tugraz.at>:
> >>>
> >>>> Quoting Didi Rieder <adrieder at sbox.tugraz.at>:
> >>>>
> >>>>> Quoting Jan Schneider <jan at horde.org>:
> >>>>>
> >>>>>> Zitat von Didi Rieder <adrieder at sbox.tugraz.at>:
> >>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> It would be great to have a configuration option or setting
> >>>>>>> that allows one to completely disable shares, i.e. so that all
> >>>>>>> the SQL lookups (e.g. for kronolith, mnemo and nag) are
> >>>>>>> omitted. If the default shares are needed in such a setup, they
> >>>>>>> could be generated dynamically without doing the time-consuming
> >>>>>>> SQL lookups.
> >>>>>>
> >>>>>> Lock the default_* preferences.
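(As an aside, for the archives: locking such a preference happens in the
application's config/prefs.php. A minimal sketch, assuming the pref is
called default_share as in Kronolith; the exact name differs per
application:)

    // Hypothetical example: take the existing $_prefs entry for the
    // preference and set 'locked' => true so users cannot change it.
    $_prefs['default_share'] = array(
        'value' => '',
        'locked' => true,
        'shared' => false,
        'type' => 'implicit'
    );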
> >>>>>
> >>>>> I did that already but still queries of the following type are made:
> >>>>> SELECT s.* FROM nag_shares s
> >>>>>     LEFT JOIN nag_shares_users AS u ON u.share_id = s.share_id
> >>>>>     WHERE s.share_owner = 'riederd at mbox'
> >>>>>        OR s.perm_creator & 4
> >>>>>        OR s.perm_default & 4
> >>>>>        OR (u.user_uid = 'riederd at mbox' AND u.perm & 4)
> >>>>>     ORDER BY s.attribute_name ASC;
> >>>>>>
> >>>>>>> Shares are nice, but if you run a system with >1000 active
> >>>>>>> sessions and several thousand users, they can really hurt
> >>>>>>> performance.
> >>>>>>
> >>>>>> That's why we have a native SQL driver now, and share caching.
> >>>>>
> >>>>> I know, and it performs better than the datatree, but it is still
> >>>>> too slow for our system/setup. We will move the SQL server to a
> >>>>> newer machine soon; let's see if this helps.
> >>>>
> >>>> Unfortunately the migration to a new MySQL server machine (Sun
> >>>> 4600, 4x 3 GHz dual-core Opterons with 16 GB RAM) did not help very
> >>>> much. Initially it seemed that moving from a Sun Fire v480 (4x
> >>>> SPARC @ 900 MHz, 8 GB RAM) to the new system solved the problem.
> >>>> It was still running fine with about 7000 users on the system,
> >>>> which results in the same number of default entries in the nag,
> >>>> mnemo and kronolith shares tables. But after the number grew to
> >>>> about 8200 users with about 1000 active sessions, the MySQL server
> >>>> just went down on its knees. (With the old system, ~2000 users were
> >>>> already too much.)
> >>>>
> >>>> Does anyone run a similar number of users and active sessions with
> >>>> MySQL as the data backend without seeing such problems? If so, I'd
> >>>> be very interested in seeing the relevant Horde and MySQL server
> >>>> configuration parameters.
> >>>>
> >>>> Why is it necessary to do all the SQL lookups for default shares
> >>>> if sharing is disabled/locked? Wouldn't it be possible for the PHP
> >>>> code to create the results of those lookups dynamically, without
> >>>> asking the database?
> >>>>
> >>>> Thanks for any hint.
> >>>>
> >>>> Didi
> >>>
> >>>
> >>> I do. That is the reason why I wrote the native SQL driver.
> >>>
> >>> I have up to 1500 users online on one single site (there are about
> >>> 100 other sites, but they are smaller). I run a master/slave MySQL
> >>> installation. The master is a dedicated server; the slave also
> >>> handles DNS and mail. The web server is a dedicated machine as well,
> >>> running Lighttpd 1.5, PHP 5.2.6 and memcached. A very good MySQL
> >>> performance resource is http://www.mysqlperformanceblog.com/. The
> >>> MySQL default options are often very generic, but it all depends on
> >>> what type of data you hold and what kind of queries you run.
> >>>
> >>> But I agree. We should avoid the creation of default data and try
> >>> to minimize/limit database usage wherever possible.
> >>>
> >>> How can I help?
> >>
> >> Duck,
> >>
> >> thanks for your answer.
> >> I'm already using the native SQL driver and it is better than the
> >> datatree system, but unfortunately not enough :-(
> >>
> >> Do you use read/write splitting with your master and slave setup?
> >> ...hmm, with just one single slave for reads that wouldn't help
> >> much, right?
> >>
> >> Our system is the following:
> >> - 2 dedicated main mail servers with ~15,000 mailboxes (plus ~15
> >> smaller ones with < 30 users per server)
> >> - 4 dedicated, load-balanced web servers with Apache + PHP 5.2.6 +
> >> xcache + memcached on each, serving only Horde
> >> - one dedicated SQL server (the above-mentioned Sun 4600), also
> >> serving only Horde, no other data. This server is running MySQL
> >> 5.0.51a, 32 bit.
> >>
> >> I think from the hardware side this should be enough.
> >>
> >> Could you please post or send me your my.cnf file?
> >>
> >> Avoiding creation and fetching of default data would indeed help.
> >> Interestingly, turba doesn't do any default share queries when
> >> sharing is disabled.
> >>
> >> Didi
> >
> > Turba queries shares only when requested. The other modules rely on them.
> >
> > I abandoned Apache a while ago...
> >
> > 40% of my queries are executed on the slave server and 60% on the
> > master, as not all modules support split read/write. You can even
> > tweak your configuration to randomly switch between the split SQL
> > servers, or use MySQL Proxy for this.
> >
> > I suggest not to jump straight into tweaking your SQL settings. My
> > friends always tend to tweak the MySQL configuration first, and then
> > call me for help. It can be hard work, especially if you don't know
> > your data and what is done with it. Performance is nearly always a
> > problem of application architecture and data handling, and tweaking
> > the configuration does not solve that problem; it just postpones it
> > until traffic grows even more or you install another application
> > that needs different settings. So let's go step by step.
> >
> > 1.
> > You should first tune your table structure. The Horde schema scripts
> > are kept generic only for compatibility reasons; you should adjust
> > the column types to your data. For example, the numerical columns are
> > often INT, but you can shrink your data by using MEDIUMINT, SMALLINT
> > or TINYINT UNSIGNED wherever possible. In many situations your
> > numbers won't grow beyond SMALLINT or MEDIUMINT, especially if you
> > make them UNSIGNED, which gains you one more bit, and share_id values
> > are always positive anyway. This matters because MySQL allocates
> > resources according to the column types and the number of rows it
> > predicts it will select. MySQL can also analyze your data and propose
> > the best column types for you (but don't trust it blindly, as it
> > likes to force ENUMs and other things that may not be good for your
> > flexibility). A sketch follows after this list.
> >
> > 2. Check your MySQL log for queries that do not use an index (enable
> > log-queries-not-using-indexes so they end up in the slow query log).
> >
> > 3. Set up the query cache.
> >
> > 4. Check your slow query log with mysqldumpslow and try to isolate
> > the queries that really take a long time. Here we, the Horde dev
> > team, would really like to help kill them.
> >
> > 5. Only then tune your MySQL configuration, because now you know what
> > is going on with your data and what needs to be changed. Before that
> > you don't know whether there is a sort buffer problem, not enough
> > table/thread cache, etc. What does your phpMyAdmin "Runtime
> > Information" page say? There is a lot of information about your
> > server stats there; you should check the comments next to the
> > settings whose values are shown in red. But please note that they are
> > not always right: some numbers go out of bounds as a consequence of a
> > group of other wrong settings, not because of that setting itself.
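To make point 1 concrete with the share tables from the query quoted
earlier in this thread (a sketch only: run SHOW CREATE TABLE first so you
keep the NULL/default attributes your schema actually has, and only
shrink the perm_* columns if your permission bitmasks really fit into the
smaller type):

    -- Let MySQL propose minimal column types based on the existing data
    -- (point 1); take its ENUM suggestions with a grain of salt.
    SELECT * FROM nag_shares PROCEDURE ANALYSE();

    -- Example of shrinking oversized integer columns.
    ALTER TABLE nag_shares
        MODIFY perm_creator TINYINT UNSIGNED NOT NULL DEFAULT 0,
        MODIFY perm_default TINYINT UNSIGNED NOT NULL DEFAULT 0;
    ALTER TABLE nag_shares_users
        MODIFY perm TINYINT UNSIGNED NOT NULL DEFAULT 0;

For point 4, 'mysqldumpslow -s t /path/to/slow.log' on the shell groups
the slow log entries and sorts them by query time.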
>
> Duck,
>
> thanks for the advice.
>
> I have had the query cache activated ever since it was implemented in
> MySQL. I'll try to isolate the slow queries and optimize the table
> structure.
>
> But my suspicion is that the sheer number of concurrent queries at
> peak times is knocking down the server by pushing it to its memory
> limits (I get "out of memory" errors in the logs).
>
> This is today's summary from 10 AM until 3:15 PM:
> Threads: 183 Questions: 2564496 Slow queries: 300 Opens: 1970
> Flush tables: 7 Open tables: 160 Queries per second avg: 130.888
>
> I'll try to switch to 64 bit; that should help a bit because it
> enables the server to use more memory. But maybe this is still not
> enough, so the next step would be to switch to a read/write split setup.
>
> Anyway, I think that reducing the number of database queries would help most.
>
> Thanks, I'll keep you informed
>
> Didi
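About the query cache: it is worth verifying that it is actually sized
for this load and being hit; the standard MySQL counters show that
(nothing Horde-specific here):

    -- Current query cache configuration.
    SHOW VARIABLES LIKE 'query_cache%';

    -- Hit and prune counters: many Qcache_lowmem_prunes, or a low ratio
    -- of Qcache_hits to Com_select, mean the cache is too small or the
    -- queries are simply not cacheable.
    SHOW STATUS LIKE 'Qcache%';
    SHOW STATUS LIKE 'Com_select';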
Out of memory with only 1000 concurrent users and 130 queries/s? This
should never happen; it is probably a MySQL configuration mistake. How
much RAM do you have? You can control RAM usage with various mysqld
options, and even with kernel settings like vm.swappiness. The snippet
below shows the kind of settings I mean.
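Purely as an illustration of which mysqld options determine memory use
(example values only, not a recommendation for your hardware and not my
real my.cnf):

    [mysqld]
    # Global buffers, allocated once per server.
    key_buffer_size     = 512M
    query_cache_size    = 64M
    table_cache         = 512
    thread_cache_size   = 64

    # Per-connection buffers: multiply by max_connections for the worst case.
    max_connections     = 300
    sort_buffer_size    = 2M
    read_buffer_size    = 1M
    tmp_table_size      = 32M
    max_heap_table_size = 32M

    # Slow query logging, useful for mysqldumpslow later.
    log-slow-queries    = /var/log/mysql/slow.log
    long_query_time     = 2
    log-queries-not-using-indexes

Remember that a 32-bit mysqld cannot address more than a few GB in
total, no matter how much RAM the box has, so the global buffers plus
the per-connection buffers multiplied by max_connections have to stay
below that limit.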
Please first check your tables (column types and indexes). Then you can
compress your indexes by packing the keys: the attached script walks
through all your databases and tables and sets the PACK_KEYS table
option to 1.
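Since the list archive strips attachments, here is a rough sketch of the
idea rather than the original script; the connection credentials are
placeholders:

    <?php
    // Sketch: set PACK_KEYS=1 on every MyISAM table in every database.
    $link = mysql_connect('localhost', 'root', 'password') or die(mysql_error());

    $dbs = mysql_query('SHOW DATABASES', $link);
    while ($db = mysql_fetch_row($dbs)) {
        if ($db[0] == 'information_schema' || $db[0] == 'mysql') {
            continue;
        }
        $tables = mysql_query('SHOW TABLE STATUS FROM `' . $db[0] . '`', $link);
        while ($table = mysql_fetch_assoc($tables)) {
            // PACK_KEYS is a MyISAM table option, skip everything else.
            if ($table['Engine'] != 'MyISAM') {
                continue;
            }
            echo 'Packing ' . $db[0] . '.' . $table['Name'] . "\n";
            mysql_query('ALTER TABLE `' . $db[0] . '`.`' . $table['Name'] . '` PACK_KEYS=1', $link);
        }
    }
    mysql_close($link);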
Then check your phpMyAdmin "Runtime Information" screen and tune the
settings whose values are shown in red, following the hints displayed
next to them. These steps are the fastest to do and should already help.
Once the general system bottleneck is solved or avoided, we can start
fixing your real problem and optimize the code where needed.
Duck
-------------- next part --------------
A non-text attachment was scrubbed...
Name: packkey.php
Type: application/x-php
Size: 1100 bytes
Desc: not available
Url : http://lists.horde.org/archives/dev/attachments/20080609/5c91da75/attachment-0001.bin