[support] how cache works: on the fly content creation (not node) and caching

Larry Garfield larry at garfieldtech.com
Sat Jan 2 20:26:02 UTC 2010


On Saturday 02 January 2010 12:14:26 pm Ivan Sergio Borgonovo wrote:

> > to throw something that heavy at it, sure, it's going to crumble.
> > The same is true of any other database, or any web app, Drupal
> > included.
> 
> I wouldn't put it this way. It's like saying that a train can take
> you from Hyde Park to Piccadilly as well as a motorbike if you know
> how to drive them.
> 
> And yeah, people who have been driving a motorbike for the past 5
> years would put up a bit of resistance before learning to drive a
> train, and will hardly understand how a train can be "faster".
> Still, a train is not a motorbike, and it looks sluggish when you put
> it to tasks more suited for a motorbike.

I'm not sure I'd agree with that analogy anymore.  As you say below, both 
Postgres and MySQL have improved considerably in recent years, so the old 
street knowledge that "MySQL is fast and lame, Postgres is powerful but slow" 
is outdated and wrong these days, on both sides.  The skillz of your admin 
now make a much bigger difference in most cases.

> Actually, for a few tenths of a second I was thinking of using ISAM
> just for the cache[1]... I wonder if this would become easier in D7.
> Yeah, I'm aware of fastpath... but I still meant *easier*.

Actually, I'd go the other way.  In Drupal, the cache tables are among the 
busiest tables.  Cache, session, etc. are some of the heavier-write tables on 
which you don't want table-level locking.  The watchdog or search index 
tables are good candidates for MyISAM, but not cache.

> Anyway, even in pure speed, if you compare pg with InnoDB, PostgreSQL
> has become quite competitive in the past years; if you include
> factors other than "pure speed" when comparing pg with InnoDB, pg is
> more than just competitive. So maybe you're right... other factors
> come into play, not just performance. Prejudice?

Exactly my point.  Old prejudices about MySQL *and* Postgres are no longer 
really accurate, and to continue to spread them is simply FUD.  (Note that I 
am not badmouthing Postgres here; just pointing out that "MySQL == too slow 
for anything more than a blog" is simply wrong.)

> > Besides, what the OP asked about was how to selectively invalidate
> > the cache to avoid running queries at all.  A rant about how
> > Postgres would be faster than MySQL really doesn't add anything to
> > the conversation.
> 
> URLs actually made a very good pk for the page cache (btw, many URL
> columns in Drupal are too short (255), including cache_page.cid). So
> I can't complain about having to do some extra work to invalidate
> the cache efficiently in my case.
> I was looking at whether I could delay a real solution to the
> problem by passing the $expire parameter at page creation time... so
> that I could assign longer/shorter cache lifetimes according to the
> probability that a certain page will become stale.
> But I can't see any way to pass $expire at content creation time.
> This seems to be true even for D7.
> 
> What is the mechanism governing "next general cache wipe"?

Any time cache_clear_all() is called, anything that's not marked as 
CACHE_PERMANENT gets cleared.  Unfortunately it gets called a little more 
often than many people realize, because the cache is not as fine-grained as it 
should be.
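
Roughly, and from memory (so double-check against includes/cache.inc for your 
Drupal version), the $expire argument you pass to cache_set() is what decides 
whether an entry survives such a wipe.  The "mymodule" cids below are just 
placeholders:

  // Kept until explicitly cleared by cid; survives a general cache wipe.
  cache_set('mymodule:expensive', $data, 'cache', CACHE_PERMANENT);

  // Removed on the next general wipe of its bin.
  cache_set('mymodule:volatile', $data, 'cache', CACHE_TEMPORARY);

  // Treated as expired once the given Unix timestamp has passed.
  cache_set('mymodule:timed', $data, 'cache', time() + 3600);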

Perhaps what you could do is just use your own cache table, and not identify 
that cache table to the cache system.  That way it gets ignored by 
drupal_flush_all_caches(), but you can still use cache_get()/cache_set() for 
the body of the page in your own code, then disable page caching on those 
pages using the CacheExclude module.  Turn on block caching.  That will make 
individual pages a little more expensive, but you can then control the cache 
clearing logic for the body however you want, and since you also get to 
control the cid, you can make it whatever you want.
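
As a very rough sketch of that idea (all the "mymodule" names below are 
hypothetical placeholders, not an existing module, and the exact signatures 
are worth double-checking against your Drupal version), it could look 
something like this:

  /**
   * Clone the core cache table layout for a private bin.  Since we never
   * expose it via hook_flush_caches(), the general cache flush should
   * leave it alone.
   */
  function mymodule_schema() {
    $schema['cache_mymodule'] = drupal_get_schema_unprocessed('system', 'cache');
    return $schema;
  }

  /**
   * Build (or fetch) the expensive body of a page, keyed by a cid we
   * control completely.
   */
  function mymodule_body($nid) {
    $cid = 'mymodule:body:' . $nid;
    if ($cached = cache_get($cid, 'cache_mymodule')) {
      return $cached->data;
    }
    $body = mymodule_build_body($nid);  // hypothetical expensive builder
    // CACHE_PERMANENT: nothing clears this entry except our own code below.
    cache_set($cid, $body, 'cache_mymodule', CACHE_PERMANENT);
    return $body;
  }

  /**
   * Clear exactly one entry whenever we know its content has gone stale.
   */
  function mymodule_invalidate($nid) {
    cache_clear_all('mymodule:body:' . $nid, 'cache_mymodule');
  }

CacheExclude then keeps those pages out of cache_page, block caching covers 
the rest, and the only thing that ever expires the body is 
mymodule_invalidate(), on whatever schedule or trigger you choose.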

--Larry Garfield

