[dev] [patch] sizelimit in turba sql driver.
Marko Djukic
marko at oblo.com
Mon May 26 18:28:24 PDT 2003
Quoting Chuck Hagenbuch <chuck at horde.org>:
> Quoting Marko Djukic <marko at oblo.com>:
>
> > i believe the paging would have to be done at the php level and not at
> > the backend level, for two reasons:
> > a) to avoid multiple queries to the backend (total results, then the
> > section we are interested in)
>
> Why total results first?
to work out that i'm on page 3 of 8 - i'd need to know that there are 8 total
pages, no?
> And if you're doing all results at once and caching them, when do you update
> the cache (thus fetching everything again)?
> if there are a lot of results, but you only need one - you'll never fetch
> the whole result set if you just fetch the first slice in the backend.
sorry, i don't follow you here.
> And storing everything puts an extra load on *some* backend, even if it's
> not a different one - if we store it in a cache, we have to hit the cache
> *plus* the backend at least once. Etc.
it might put extra load, but on the other hand, with backend-only work like
using modifyLimitQuery you are definitely putting extra load too.
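(just to make the two-round-trips point concrete, here's a rough sketch of the
backend-only approach - it assumes a PEAR DB handle in $db, and the table and
column names and $ownerId are only illustrative, not the real turba schema or
driver code:)

    // Rough sketch only: PEAR DB handle assumed in $db, illustrative table/columns.
    $perPage = 20;
    $page    = 3;                                    // 1-based page number

    // Round trip 1: total count, needed just to render "page 3 of 8".
    $total = $db->getOne(
        'SELECT COUNT(*) FROM turba_objects WHERE owner_id = ?',
        array($ownerId));

    // Round trip 2: only the slice we want, via the portable LIMIT helper.
    $sql  = $db->modifyLimitQuery(
        'SELECT * FROM turba_objects WHERE owner_id = ? ORDER BY object_name',
        ($page - 1) * $perPage, $perPage);
    $rows = $db->getAll($sql, array($ownerId));

    $pages = (int)ceil($total / $perPage);           // 8 in the example above

so the backend gets hit twice per screen, before any caching enters the picture.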
> > b) discrepancies with how different backends would handle paging, so i
> > would rather have consistent results and handle it at the php level.
>
> Are you just assuming that there would be discrepancies, or are you aware of
> specific ones?
two problems that kept coming to mind:
a) minor, but how would the prefs based driver do the searching?
b) my understanding of ldap could be wrong, but i still haven't figured out how
to get back from ldap a result set starting at record 200 and returning only 20
records, or how to get back the total number of possible results to figure out
that the user is looking at page 3 of 8. are there any ldap gurus who know how
to achieve this, what the performance implications are, etc.?
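(for reference, the closest thing ldap itself seems to offer is the rfc 2696
"simple paged results" control. it only hands back an opaque cookie for
fetching the next page - plus, at best, a size estimate that many servers just
return as zero - so it doesn't really answer the "jump to record 200" or "page
3 of 8" questions either. a hypothetical sketch, using the paged-result helpers
from php releases newer than we currently target, and assuming the server
supports the control:)

    // Hypothetical sketch: rfc 2696 paged results via later php ldap helpers.
    // Requires server-side support for the control; $conn, $basedn and $filter
    // are assumed to be set up already.
    $cookie = '';
    do {
        ldap_control_paged_result($conn, 20, true, $cookie);
        $result  = ldap_search($conn, $basedn, $filter, array('cn', 'mail'));
        $entries = ldap_get_entries($conn, $result);
        // ... render this page of (up to) 20 entries ...
        ldap_control_paged_result_response($conn, $result, $cookie);
    } while ($cookie !== null && $cookie != '');
    // Note: no random access to "record 200" and no reliable total count here.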
> > and since we are getting the total results anyway
>
> We are? Why?
as above - how else do we know that there should be 8 pages of results in
total? but i could be wrong: would doing a count()-based total query and then a
limit query be less resource-intensive than doing one unrestricted query and
slicing the data with php?
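(the php-level version i have in mind is basically the following - again only a
sketch, with the same illustrative PEAR DB handle and table/column names as
above:)

    // Rough sketch of the php-level approach: one unbounded query, then slice.
    $all   = $db->getAll(
        'SELECT * FROM turba_objects WHERE owner_id = ? ORDER BY object_name',
        array($ownerId));
    $total = count($all);
    $pages = (int)ceil($total / $perPage);                        // e.g. 8
    $rows  = array_slice($all, ($page - 1) * $perPage, $perPage); // e.g. page 3

one query instead of two, but the full result set crosses the wire and sits in
php memory every time - which is the trade-off in question.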
and the other problem i had before was that in certain searches on a large DB a
sql limit query would die, whereas i could manage to do a no-limit search and
work on the data in php. i suspect i had to tweak something in my mysql setup,
but i never managed to hunt it down.
marko