[kronolith] performance on large systems
John Madden
jmadden at ivytech.edu
Sun Nov 22 03:10:02 UTC 2009
> > In my testing so far that's producing the same output as the stock
> > query but it's doing so in ~65ms instead of ~1200ms. Obviously I'd
> > have to fit that to match the string substitutions, but other than
> > that, would the above work?
> You said you're using Postgres, right? If that's fast for postgres
> that's great; I haven't tried it yet (I'll try your patch later), but
> the subqueries seem like they'd perform poorly with MySQL...
Yes, PostgreSQL only in our shop; MySQL avoided whenever possible. :)
I really dislike subqueries, but they're faster than joins in this case, at
least. When dealing with such a huge number of rows, you really don't
want to be doing that in a join. The sub-select here reduces the set of
comparisons (IN ()) to a very minimal one. I think this would outperform
joins on just about any DB once the data set is sufficiently large, but
I'm a hack when it comes to this stuff.
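To illustrate the shape of that pattern (a sketch only -- the table and
column names below are hypothetical stand-ins, not Kronolith's actual
schema), the sub-select first narrows a small table down to a minimal
list of ids, so the scan over the big table only compares against that
short IN () list instead of joining row-for-row:

```python
import sqlite3

# Hypothetical schema standing in for the calendar tables under discussion.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (event_id INTEGER PRIMARY KEY,
                         calendar_id TEXT, title TEXT);
    CREATE TABLE shares (calendar_id TEXT, owner TEXT);
""")
conn.executemany("INSERT INTO shares VALUES (?, ?)",
                 [("cal1", "jmadden"), ("cal2", "jmadden"), ("cal3", "other")])
conn.executemany("INSERT INTO events (calendar_id, title) VALUES (?, ?)",
                 [("cal1", "standup"), ("cal2", "review"), ("cal3", "private")])

# The inner SELECT yields only the calendars this user can see; the outer
# query then filters the (potentially huge) events table against that
# small set rather than joining events to shares directly.
rows = conn.execute("""
    SELECT title FROM events
    WHERE calendar_id IN (SELECT calendar_id FROM shares WHERE owner = ?)
""", ("jmadden",)).fetchall()
print(sorted(r[0] for r in rows))
```

Whether this actually beats a join depends on the planner: PostgreSQL will
often rewrite IN (subselect) into a semi-join anyway, so the measured
~65ms vs ~1200ms difference is best confirmed with EXPLAIN on the real
data.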
John
--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden at ivytech.edu