On Fri, 01 Jan 2010 18:54:58 +0100 Jean-Michel Pouré jmpoure@free.fr wrote:
On Friday, 01 January 2010 at 14:08 +0100, Ivan Sergio Borgonovo wrote:
Before I go and build my own cache tables and my own cache system on top of the cache API, is there an alternative approach?
You could turn off caching, enable logging of slow queries, and study the query log server-side in the database: find the slow queries and fix them. Of course, this may not be possible if you are on a shared server without access to the logs.
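For PostgreSQL (which comes up below), slow-query logging can be enabled with a single setting; the 200 ms threshold here is just an example value:

```
# postgresql.conf -- log every statement that takes longer than 200 ms
log_min_duration_statement = 200
```

Setting it to 0 logs every statement, which is useful for a short profiling session but too noisy for production.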
I'm already on pg. I'm already on my own box. I'm already monitoring slow queries. It's just that even fast queries add up, and most of the content is "nearly static". Using Drupal's cache system lets me skip a lot of queries and PHP code, so CPU cycles are left for the things that are more expensive to speed up.
I've found two ways... but I'd like to hear something more drupal-halal.
To summarise the problem:
- I can't forecast when a certain page will expire.
- I can know when a certain page is stale.
- Without any tweaking it is "expensive" to relate the page cache PK (cid/url) to the records that generated it; URLs are a function of the record content.
One solution would be a functional index on cache_page.cid that computes, from the URL, the PK of the record that generated it, so I could quickly delete cache entries whenever the record is newer than cache_page.created.
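As I read it, the functional-index idea could look like the sketch below (my assumption of the schema, not actual Drupal tables). It uses sqlite3 so it is runnable anywhere; in PostgreSQL the equivalent is `CREATE INDEX ... ON cache_page ((expression))`, and a URL scheme more complex than `/node/<nid>` would need a custom IMMUTABLE function instead of `substr()`:

```python
# Expression index deriving the node id from the cached URL, so invalidation
# by record PK becomes an index lookup rather than a full cache-table scan.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE cache_page (
    cid     TEXT PRIMARY KEY,   -- the cached URL, e.g. '/node/123'
    data    BLOB,
    created INTEGER             -- unix timestamp the entry was built at
);
-- Expression index: pull the numeric PK out of URLs of the form '/node/<nid>'.
CREATE INDEX cache_page_nid ON cache_page (CAST(substr(cid, 7) AS INTEGER));
""")

db.executemany("INSERT INTO cache_page VALUES (?, ?, ?)",
               [("/node/1", b"page one", 100),
                ("/node/2", b"page two", 100)])

# Record 1 changed at t=150: drop only its cache entries built before then.
db.execute("""DELETE FROM cache_page
              WHERE CAST(substr(cid, 7) AS INTEGER) = ? AND created < ?""",
           (1, 150))

print([row[0] for row in db.execute("SELECT cid FROM cache_page")])
# -> ['/node/2']
```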
Another solution could be to just add a table that saves the mapping record PK -> URL. If there is nothing in the cache, I hit the function that generates a fresh page and save the mapping. If there is something in the cache, I don't even have to check whether an entry for PK -> URL already exists. So there is no need for an UPSERT.
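A minimal sketch of that mapping-table approach (table and function names are mine, not Drupal's); the one design choice I've added is deleting the mapping rows together with the cache rows on invalidation, which is what keeps the plain INSERT safe on the next miss:

```python
# Record-PK -> URL mapping written only on a cache miss: a cache hit implies
# the mapping row already exists, so no UPSERT is ever needed.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE cache_page (cid TEXT PRIMARY KEY, data BLOB, created INTEGER);
CREATE TABLE cache_map  (nid INTEGER, cid TEXT, PRIMARY KEY (nid, cid));
""")

def get_page(nid, url, now, render):
    """Return the cached page for url, rebuilding (and mapping) on a miss."""
    row = db.execute("SELECT data FROM cache_page WHERE cid = ?", (url,)).fetchone()
    if row:
        return row[0]                  # hit: mapping is guaranteed to be there
    data = render(nid)                 # miss: build a fresh page...
    db.execute("INSERT INTO cache_page VALUES (?, ?, ?)", (url, data, now))
    db.execute("INSERT INTO cache_map VALUES (?, ?)", (nid, url))  # ...and map it
    return data

def invalidate(nid):
    """Record nid changed: delete its cache entries via the mapping."""
    db.execute("DELETE FROM cache_page WHERE cid IN "
               "(SELECT cid FROM cache_map WHERE nid = ?)", (nid,))
    db.execute("DELETE FROM cache_map WHERE nid = ?", (nid,))

render = lambda nid: b"page %d" % nid
get_page(1, "/node/1", 100, render)
get_page(2, "/node/2", 100, render)
invalidate(1)
print([r[0] for r in db.execute("SELECT cid FROM cache_page")])
# -> ['/node/2']
```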
I'm still not really convinced, and I'm looking for a better solution.
I just noticed that cache_page.cid is too short for my URLs...
BTW, ISAM shouldn't be bad for caching; I'm just not sure whether it still performs well during cache invalidation (record deletion).