[drupal-devel] [bug] Put revisions in their own table

Jose A Reyero drupal-devel at drupal.org
Fri Jul 29 15:40:46 UTC 2005

Issue status update for 
Post a follow up: 

 Project:      Drupal
 Version:      cvs
 Component:    node system
 Category:     bug reports
 Priority:     normal
 Assigned to:  killes at www.drop.org
 Reported by:  killes at www.drop.org
 Updated by:   Jose A Reyero
 Status:       patch

After looking at the patch I'm not really surprised it slows down
everything. I thought the reason we wanted revisions in their own
table in the first place was to have a simpler - and faster - node
table. But this patch:
- Adds fields and complexity not only to the node table but also to all
the main data tables
- Needs additional joins just to retrieve single nodes (which is done
many times for a typical main page)
- Does away with encapsulation of the revision functionality, requiring
other modules to handle revision-related data.

So please, please, please:

- Make the revisions table needed only when actually using revisions
- Keep that old nice thing of hiding the revisions system from other
modules
- And what's so bad with serializing data anyway? I agree that there
are usually better ways to store data, but this is one of the cases
where serialization does make sense. Not the current huge serialized
field, but a new table with one serialized node per row.
- In general, do it much simpler. This is way too complex and it
removes more functionality than it adds.
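The one-serialized-node-per-row idea can be sketched in a few lines. This is an illustrative model in Python with sqlite3; the table, column names, and data are hypothetical stand-ins, not Drupal's actual schema:

```python
import pickle
import sqlite3

# Hypothetical schema: one serialized node per row, instead of one huge
# serialized field on the node holding every revision at once.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE node_revisions (
    nid INTEGER, vid INTEGER, data BLOB,
    PRIMARY KEY (nid, vid))""")

node = {"nid": 1, "title": "First draft", "body": "Hello"}
db.execute("INSERT INTO node_revisions VALUES (?, ?, ?)",
           (1, 1, pickle.dumps(node)))

# Loading one old revision touches exactly one small row; a page view
# that never asks for revisions reads none of this data.
row = db.execute("SELECT data FROM node_revisions WHERE nid = ? AND vid = ?",
                 (1, 1)).fetchone()
old = pickle.loads(row[0])
```

The point of the per-row layout is that the cost of serialization is paid only when a specific revision is actually requested.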

In just four words: keep it simple, please.

Jose A Reyero

Previous comments:

Wed, 05 May 2004 16:25:27 +0000 : killes at www.drop.org

Currently all node revisions are stored in a serialized field in the
node table and retrieved for _each_ page view, although they are rarely
needed. However, we have agreed that serializing data is bad and that
we should try to keep the memory footprint of Drupal small.

Therefore I propose to create a separate revisions table which would be
in principle identical to the node table, only that it could have
several old copies of the same node. Extra data added by other modules
could be added in a serialized field unless we find a better solution.
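A minimal model of that proposal, in Python with sqlite3 purely for illustration (table and column names are hypothetical, not the actual Drupal schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# The node table keeps only the current copy, one row per node...
db.execute("CREATE TABLE node (nid INTEGER PRIMARY KEY, title TEXT, body TEXT)")
# ...while the revisions table has the same columns plus a revision id,
# so it can hold several old copies of the same node.
db.execute("""CREATE TABLE revisions (
    nid INTEGER, vid INTEGER, title TEXT, body TEXT,
    PRIMARY KEY (nid, vid))""")

db.execute("INSERT INTO node VALUES (1, 'new title', 'new body')")
db.execute("INSERT INTO revisions VALUES (1, 1, 'old title', 'old body')")

# A normal page view touches only the small node table.
current = db.execute("SELECT title FROM node WHERE nid = 1").fetchone()[0]

# Old copies are fetched only when the revisions UI asks for them.
old = [r[0] for r in db.execute(
    "SELECT title FROM revisions WHERE nid = 1 ORDER BY vid")]
```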


Wed, 05 May 2004 17:06:35 +0000 : jhriggs

I too think the serialized approach is less than desirable, but here's
an alternative. This would likely take considerable rework in core and
contrib, but the following is how we handle similar situations in our
databases at work. It is more elegant than a separate table and avoids
the (almost exact) duplication of a table. Instead of separate tables,
keep all revisions of nodes in the node table as follows:

* add field: active (0/1 or Y/N)
* add field: revision
* every revision of a node is stored in the node table; however, only
one revision can be active at any given time
* nid can no longer be unique -- primary/unique key becomes (nid, revision)
* any time a node is loaded, updated (without revision), etc., the
active version is used.
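The single-table scheme above can be sketched as follows; again Python with sqlite3 for illustration only, with hypothetical table and column names:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# All revisions live in the node table itself; (nid, revision) is the
# key, and exactly one row per nid carries active = 1 at any time.
db.execute("""CREATE TABLE node (
    nid INTEGER, revision INTEGER, active INTEGER, title TEXT,
    PRIMARY KEY (nid, revision))""")

db.execute("INSERT INTO node VALUES (1, 1, 1, 'first draft')")

# Archiving on revision: flip the old row's flag and insert the new
# row as active -- nothing is copied between tables.
db.execute("UPDATE node SET active = 0 WHERE nid = 1 AND active = 1")
db.execute("INSERT INTO node VALUES (1, 2, 1, 'second draft')")

# Loading a node always picks the active row.
title = db.execute(
    "SELECT title FROM node WHERE nid = 1 AND active = 1").fetchone()[0]
```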



Wed, 05 May 2004 17:57:48 +0000 : killes at www.drop.org

I am not opposed to your scheme, but I want to stress the following:

* Duplicating a table's structure is not bad (IMHO) as long as the
content is different.

* having two tables will allow us to have a rather small node table.
This is (maybe) a performance gain.


Wed, 05 May 2004 18:37:29 +0000 : jhriggs

I don't necessarily think that duplicating a table's structure is _bad_.
It just seems wasteful and a pain to maintain. (Every change to the
node table has to be made twice... easy to do, but also easy to miss.)
As for performance, as long as nid and the active indicator are
indexed, there shouldn't be any performance loss.  Also, archiving an
old version when making a new revision will be much simpler:  just
change the active indicator rather than copying an entire node to
another table (and ensuring everything gets copied...again a potential
maintenance issue).

To be honest, I would just like to see the serialized data go away,
regardless of what approach is taken.


Fri, 30 Jul 2004 19:49:33 +0000 : Nick Nassar

Attachment: http://drupal.org/files/issues/Drupal-Improved_Revision_Schema_07-30-2004.patch.gz (10.47 KB)

I'm interested in using Drupal for a large scale wiki-type project. In
order to do this, I need revisions to be in their own table.

Attached is a patch to do just that. Most of the changes are pretty
self-explanatory. Spreading node data across two tables meant that
I had to add database functions to do locking/transactions. Without
this, race conditions in which the database becomes corrupted are
possible.

Fri, 30 Jul 2004 19:54:36 +0000 : Nick Nassar

Oh yeah... The patch is a diff against Drupal CVS


Sat, 31 Jul 2004 00:00:08 +0000 : Anonymous

Gerhard speaking.

Nick, thanks a lot for your nice patch! It saves me a great deal of
labour. I looked through it and immediately liked it. You not only put
the old revisions into a new table but also the current one. Do you
have an estimate how much more expensive the additional join is?

Besides a few minor coding style issues I found a major one: Just a few
hours before you uploaded your patch JonBob's node access patch hit
core. That means your patch won't apply anymore as all the queries you
change have been changed. Can I bug you to update your patch?


Sat, 31 Jul 2004 01:11:59 +0000 : Anonymous

Also I think that your upgrade path loses existing revisions.


Sat, 31 Jul 2004 02:39:12 +0000 : drumm

I think this is the proper way to do things. No columns are duplicated,
there is no serialized data, and only the fields that are logically
revised are stored. Nothing jumped out at me as a way for my node
modules to keep a table of revisions of additional fields. I'm
guessing this could be done within the confines of _insert and _update.

Assuming the upgrade path works and modules can extend it, I give it a +1.


Sat, 31 Jul 2004 14:40:15 +0000 : Nick Nassar

It figures that just as I finish a big patch, another patch comes along
and breaks it. Oh well, it should be pretty easy to fix. I'll work on it.
Fixing the upgrade path to keep revisions should be fairly painless.

I found another issue that needs to be fixed before this patch gets
merged. The format of a node needs to be stored for each revision.
Otherwise, for modules that store a format for their nodes, such as
page and book, if you write one revision in PHP and the next in HTML,
the PHP revision will be displayed as HTML. This is part of a larger
issue of how node modules should store revisions of additional fields.
I think each module that wants to do this should create another table
with (nid, revid) as the primary key, just as they create another table
with nid as the primary key when they want to add fields to a node.
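The (nid, revid) idea can be sketched like this, with a hypothetical book table standing in for a module's extra per-revision field (Python/sqlite3, illustrative only, not the patch's actual schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Core stores the revisioned body and its input format per (nid, revid)...
db.execute("""CREATE TABLE node_revisions (
    nid INTEGER, revid INTEGER, body TEXT, format TEXT,
    PRIMARY KEY (nid, revid))""")
# ...and a module that wants an extra per-revision field mirrors the key.
db.execute("""CREATE TABLE book (
    nid INTEGER, revid INTEGER, parent INTEGER,
    PRIMARY KEY (nid, revid))""")

db.execute("INSERT INTO node_revisions VALUES (1, 1, '<?php echo 1; ?>', 'php')")
db.execute("INSERT INTO node_revisions VALUES (1, 2, '<p>hi</p>', 'html')")
db.execute("INSERT INTO book VALUES (1, 1, 0)")
db.execute("INSERT INTO book VALUES (1, 2, 5)")

# Loading revision 1 joins on the shared key, so the PHP revision keeps
# its PHP format instead of inheriting the current (HTML) one.
fmt, parent = db.execute("""
    SELECT r.format, b.parent FROM node_revisions r
    JOIN book b ON r.nid = b.nid AND r.revid = b.revid
    WHERE r.nid = 1 AND r.revid = 1""").fetchone()
```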

As far as performance goes, for sites that make heavy use of revisions,
an extra join on primary keys is going to be a lot faster than grabbing
all of the revisions from the database every time. We would need to run
benchmarks to determine whether the overall difference in speed for an
average site is a gain or a loss. I'm guessing it's very minor either way.


Mon, 23 Aug 2004 13:55:49 +0000 : Nick Nassar

Attachment: http://drupal.org/files/issues/Drupal-Improved_Revision_Schema_08-23-2004.patch.gz (10.92 KB)

Here's an updated patch against CVS that puts revisions in their own
table, provides an upgrade path, and fixes the format related bugs in
the last patch.

Hopefully, this can make it into CVS as soon as the freeze is over.


Mon, 23 Aug 2004 14:10:39 +0000 : moshe weitzman

Interesting patch... drumm's question is still outstanding: how do
modules store revisions of their fields? Are they expected to manage
this on their own? That's not how it works today.

As an aside, i am seeing profile_ fields in my node.revisions column.
One could argue that those need not be saved. They pertain to the node
author, not to the node itself.


Mon, 23 Aug 2004 16:14:39 +0000 : Nick Nassar

Having modules be responsible for storing revisions of their own fields
is a side-effect of storing revision data in tables. There's really no
way around it. However, revisions generally don't make sense for node
types that don't have PHP/HTML content, such as polls. I think it's
going to be a pretty rare scenario for a new node type to want another
field to change per-revision, so it's a pretty good trade-off.

Storing fields that shouldn't be part of revisions, such as the
profile_ fields, is a side-effect of storing revisions as serialized
objects. Applying this patch will free up that wasted space. :)


Mon, 23 Aug 2004 17:20:57 +0000 : Anonymous

There should be a hook that lets the module choose whether it supports
revision history. This way a module author can prevent the user from
doing something that may break his module or just cause undefined
behavior. If the module doesn't support history, don't let the
user/admin choose to add history to nodes of that type.



Mon, 23 Aug 2004 19:23:29 +0000 : Nick Nassar

I agree, there should be an API change to make specifying support for
revisions easier. In the interest of keeping patches small and limited
to one change per patch, I think the API change should be a separate
patch.
A sort of ad-hoc API to decide whether or not a module supports
revisions by default already exists. Instead of having a hook, modules
set the default value of the "Create new revision" field in the edit
form. The admin can change this option in
admin/node/configure/defaults. This patch doesn't change that.

Revisions are broken for node types that have their own database
structure, like polls, even when storing them as serialized objects.
This patch doesn't change that, either.


Tue, 26 Oct 2004 02:35:06 +0000 : moshe weitzman

I'm guessing that someone is going to have to demonstrate that this
patch performs as well as current Drupal before it gets committed. I
think this patch is a few benchmarks away from being committed.


Wed, 27 Oct 2004 01:04:09 +0000 : Nick Nassar

Attachment: http://drupal.org/files/issues/Drupal-Improved_Revision_Schema_10-26-2004.patch.gz (11 KB)

I ran some really unscientific benchmarks, and it looks like this patch
has a negligible effect on performance.

I used apache bench and the database from theregular.org, which doesn't
contain any revisions (worst case scenario for this patch) and contains
several hundred nodes. Both the patched and unpatched versions hovered
between 2.36 and 2.38 requests per second.

The command I used was:
ab -n50 -C 'PHPSESSID=b01a9f92880ef215b0ed6f1314a5eba2'

An updated patch that should apply to CVS is attached.



Wed, 27 Oct 2004 01:05:26 +0000 : Nick Nassar

Attachment: http://drupal.org/files/issues/Drupal-Improved_Revision_Schema_10-26-2004.patch_0.gz (11 KB)



Mon, 15 Nov 2004 05:05:30 +0000 : elias1884

Please rethink the revision system's default workflow as well. Don't
look at the revision system as an isolated system but as a part of the
whole workflow system!

If you combine revisions with the moderation queue, the most logical
default workflow would be like this:

auth user creates node (revision #0)
admin approves the node (status = 1, moderation = 0)
=> node publicly available
auth user finds a typo and changes the node (revision #1, status = 0,
moderation = 1)

What happens at that point at the moment is that the node is not
accessible at all anymore until the new revision is approved by the
admin. Of course the new revision should not go online until it is
reviewed and approved, this is absolutely correct, but there is no
reason to take the old revision offline, since it was already approved
and should therefore stay online until the new revision is approved.
It is not practical if a node disappears only because the author
corrected a typo.

admin approves the node (status = 1, moderation = 0)

Even though I first thought a plain boolean active field would not be
capable of providing that functionality, I finally came to the
conclusion that it can. The only thing to do is to not set that bit
when a new revision is created, but when it is approved (in case
moderation is activated under the default workflow). Every revision
should have its own moderation, status and active fields, and on
approval they are set like this (status=1, moderation=0, active=Y).

When you want to roll back to an old revision, you can choose between
all revisions that already had the moderation bit set back to 0 and the
status set to 1. There should be an extra permission for rollback!

Another concern I have about the default workflow is that users can't
see the content they have just created when moderation is enabled. Even
though there is a big fat "submission accepted" message presented after
submission, inexperienced users tend to question the information those
stupid tin cans give them if they can't find their content afterwards.
Many users are really lazy and don't even read the status messages. The
best feedback about whether a story was submitted successfully is, of
course, that the author can find it somewhere on the site, maybe with a
status message on top mentioning that the content is not publicly
available yet since it has not been approved. There should be a "my
content" section under "my account", like somebody is trying to do with
the workspace module, I guess.

So my suggestion is to make (status=0, moderation=1) content still
available to its creator under a "my content" section somewhere!


Wed, 24 Nov 2004 04:21:18 +0000 : Nick Nassar

I agree. The current workflow for moderation queues and revisions needs
to change, but this patch isn't the place for it. The patch is already
too big, and it only does the backend stuff.

Instead of adding more to this patch and making it take even longer to
get into core, would you mind creating a new issue for your UI
suggestions, so that those changes can be added as a separate patch?



Sat, 11 Dec 2004 12:26:02 +0000 : Dries

This patch is _much_ needed so I'd love to see someone revive it.  In
order for this patch to be accepted, the following needs to be done:

* Update this patch to CVS HEAD.
* Rename revid to vid.
* Rename node_rev to node_revisions.
* Rename node_rev.changed to node_revisions.timestamp.
* Rename $rnode to $revision.
* Fix the coding style to match Drupal's: proper spacing, single quotes
where possible, proper variable names.
* Benchmark this patch with a large database with enough revisions.
I'd be happy to benchmark this on my local copy of the drupal.org
database.
* The book.log field should probably move to the node_revisions table.
This can be done in a separate patch.
* Investigate whether transactions are well-supported.


Mon, 13 Dec 2004 00:25:40 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/Drupal-Improved_Revision_Schema_10-26-2004-revisited.patch.gz (11.02 KB)

I've worked a bit on the patch (coding style issues as mentioned by
Dries). One thing I noticed is that the patch uses REPLACE. IIRC this
needs to be changed to "UPDATE, if fail INSERT" for pgsql.
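The portable "UPDATE, if fail INSERT" pattern looks roughly like this; a sketch in Python with sqlite3 for illustration (the node_save name and the schema are hypothetical stand-ins, not the patch's actual code):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node (nid INTEGER PRIMARY KEY, title TEXT)")

def node_save(nid, title):
    # Portable substitute for MySQL's REPLACE: try the UPDATE first,
    # and only INSERT when no existing row was touched.
    cur = db.execute("UPDATE node SET title = ? WHERE nid = ?", (title, nid))
    if cur.rowcount == 0:
        db.execute("INSERT INTO node (nid, title) VALUES (?, ?)", (nid, title))

node_save(1, "first")   # no row yet: falls through to INSERT
node_save(1, "second")  # row exists: UPDATE wins, no duplicate
rows = db.execute("SELECT nid, title FROM node").fetchall()
```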

Nick, are you still interested in working on that patch? I'd like to
know how it works on your site and work on getting it into core.


Mon, 13 Dec 2004 12:33:08 +0000 : Dries

Gerhard: your patch does not apply.


Mon, 13 Dec 2004 17:10:12 +0000 : killes at www.drop.org

Yes, I know, that was the same version as I mailed to you earlier.


Mon, 13 Dec 2004 21:02:06 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions.patch (52.96 KB)

Ok, updated the patch to cvs.


Tue, 14 Dec 2004 08:58:36 +0000 : Dries

Some more comments:

* db_begin_transaction() and db_end_transaction() do not belong in
database.inc, but in database.mysql.inc and database.pgsql.inc
* The node module calls node_revisionsision_list() which is not
defined.  (Fixed that on my local copy.)
* Do db_begin_transaction() and db_end_transaction() deprecate Jeremy's
table locking patch?
* The upgrade path assigns the wrong user ID to each revision.
* The upgrade path assigns the wrong date to each revision (that or a
node's revision page shows the wrong usernames/dates).
* The coding style needs a bit of work, but we can worry about that later.


Tue, 14 Dec 2004 17:34:44 +0000 : Nick Nassar

If you need any help getting those things fixed, just let me know.


Tue, 14 Dec 2004 17:50:30 +0000 : Nick Nassar

How this relates to Jeremy's node locking patch:

There was lots of discussion, and node locking was decided against
because from an end user point of view you never want a node to be
locked. He's now advocating for a much simpler patch that warns users
if their changes will overwrite someone else's. That patch still has a
race condition, which might be fixed using db_begin_transaction().
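The warn-on-overwrite check can also be done without locks by making the write conditional on the timestamp the editor originally loaded. A sketch in Python with sqlite3, with hypothetical names (not the actual patch):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node (nid INTEGER PRIMARY KEY, changed INTEGER, body TEXT)")
db.execute("INSERT INTO node VALUES (1, 100, 'original')")

def save(nid, loaded_changed, body, now):
    # Write only if nobody saved since this editor loaded the node;
    # the conditional UPDATE is a single atomic statement, so this
    # check needs no explicit table lock.
    cur = db.execute(
        "UPDATE node SET body = ?, changed = ? WHERE nid = ? AND changed = ?",
        (body, now, nid, loaded_changed))
    return cur.rowcount == 1

ok_a = save(1, 100, "edit by A", 101)  # first writer succeeds
ok_b = save(1, 100, "edit by B", 102)  # B loaded a stale copy: rejected
```

When the conditional update touches no row, the application can warn the second editor instead of silently overwriting.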



Tue, 14 Dec 2004 22:26:19 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_0.patch (55.96 KB)

Here is an updated patch that tries to address Dries' concerns.


Wed, 15 Dec 2004 08:32:50 +0000 : Dries

Attachment: http://drupal.org/files/issues/revisions-bug.png (76.06 KB)

It didn't fix the aforementioned bugs.  See attached screenshot.


Thu, 06 Jan 2005 20:15:01 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_1.patch (51.77 KB)

Ok, here is a new version. Dries and I worked hard on it, so please
have a look.

What is still missing:

- database upgrades for the core modules with their own tables
- contrib modules need an upgrade too
- do we need nid and vid in both the node and the node_revisions table?
- the number of SQL queries means good stress testing for large sites is needed


Wed, 19 Jan 2005 21:43:49 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_2.patch (49.49 KB)

Here is an updated patch. We discussed keeping the current title in the
node table and also in the revisions table. This is content duplication,
but it will save many joins, as many queries only need the title of a
node. Discussion is welcome.
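The trade-off can be seen in miniature; a Python/sqlite3 sketch with hypothetical names, showing that listings never touch node_revisions while a full load pays one primary-key join:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# node carries a pointer to its current revision (vid) plus a duplicated
# title, so listing queries never need the revisions table.
db.execute("CREATE TABLE node (nid INTEGER PRIMARY KEY, vid INTEGER, title TEXT)")
db.execute("""CREATE TABLE node_revisions (
    nid INTEGER, vid INTEGER, title TEXT, body TEXT,
    PRIMARY KEY (nid, vid))""")

db.execute("INSERT INTO node VALUES (1, 2, 'current title')")
db.execute("INSERT INTO node_revisions VALUES (1, 1, 'old title', 'old body')")
db.execute("INSERT INTO node_revisions VALUES (1, 2, 'current title', 'new body')")

# Listings read the duplicated title straight from node: no join.
titles = [r[0] for r in db.execute("SELECT title FROM node")]

# Only a full node load pays the primary-key join to fetch the body.
body = db.execute("""
    SELECT r.body FROM node n
    JOIN node_revisions r ON n.nid = r.nid AND n.vid = r.vid
    WHERE n.nid = 1""").fetchone()[0]
```

The cost of the duplication is that every save has to write the title twice and keep the copies in sync.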


Wed, 19 Jan 2005 23:33:32 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_3.patch (29.93 KB)

I've implemented the aforementioned solution. This makes the patch much
smaller. The patch now also removes taxonomy_node_has_term() which
wasn't used anywhere. I'd really appreciate it if some people could test
drive the patch. It will be another huge improvement for 4.6.


Thu, 20 Jan 2005 00:05:54 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_4.patch (30 KB)

Another revision. Steven didn't like my literal $node->vid in queries.


Thu, 20 Jan 2005 01:10:50 +0000 : killes at www.drop.org

- database upgrades for the core modules with their own tables
- contrib modules need an upgrade too
- do we need nid and vid in both the node and the node_revisions table?
- the number of SQL queries means good stress testing for large sites is needed

These issues are still open, btw. Especially the first one needs to be
addressed.

Tue, 25 Jan 2005 20:11:59 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_5.patch (51.13 KB)

Here is a patch that has the database tables updated for forum, book,
and page module.


Sat, 29 Jan 2005 22:55:59 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_6.patch (49.18 KB)

Yet another update to keep it working with head. The patch now also
removes the table definitions for the page table.


Sat, 29 Jan 2005 22:57:40 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_7.patch (55.69 KB)

Sorry, that was the old version, this is the right one.


Mon, 31 Jan 2005 19:55:03 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_8.patch (55.71 KB)

Updated once more.


Mon, 31 Jan 2005 20:52:08 +0000 : Dries

Anyone to help review/test this?


Mon, 31 Jan 2005 21:22:36 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_9.patch (49.29 KB)

Updated again, the update functions occurred twice. Thanks Bart.


Wed, 02 Feb 2005 00:27:05 +0000 : killes at www.drop.org

I don't know if the db I am using is corrupted or what. I still have
some difficulties.
The latest patch is attached.


Wed, 02 Feb 2005 00:27:49 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_10.patch (49.67 KB)

I am probably slowly going mad ...


Wed, 02 Feb 2005 01:54:58 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_11.patch (48.95 KB)

The update issue still needs investigating. This patch is updated for
the current CVS.

Wed, 02 Feb 2005 20:20:51 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_12.patch (49.83 KB)

Ok, here is a new version. I've solved my troubles with book.module.
There are still some issues with forum module, possibly due to an
inconsistent database.


Wed, 02 Feb 2005 21:31:05 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_13.patch (49.83 KB)

Turns out the drupal.org database had indeed some quirks. Please run
this query in your oldest db and tell me the result:

select nid,type from node where type like '%/%';

If you get a non-zero result we might need to add another security
check to the update.

The patch could still use more testing, though.


Thu, 03 Feb 2005 01:16:54 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_14.patch (49 KB)

Ok, we are getting somewhere. At first glance the update is working.
There is a problem remaining: the revisions tab will be shown whether
the node has revisions or not. Not sure we can/need to fix this.

People with a drupal.org account can log in at
http://killes.drupaldevs.org/revision/ and poke around. Your
permissions will be the same as on drupal.org. Feel free to break
everything but don't forget to file complaints here. (Note: this is
only a pruned version of the drupal.org database with all project nodes
and nodes with nids > 7000 dropped).


Thu, 03 Feb 2005 04:19:14 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_15.patch (52.39 KB)

There was some error in node_save and also the patches to the
database.inc files got lost...


Thu, 03 Feb 2005 07:07:27 +0000 : robertDouglass

Submitting book pages doesn't work on your test site. It puts the entire
content of the preview inside the body textarea. I wrote a sentence in
the body and the log, and pressing preview put several lines of HTML
containing both sentences in the body textarea on the preview page,
plus the book page wouldn't submit.



Thu, 03 Feb 2005 07:50:59 +0000 : Junyor

0 results here.  I started using Drupal with version 4.4, though.


Thu, 03 Feb 2005 23:56:18 +0000 : killes at www.drop.org

@Junyor: Thanks, that's a good sign. Maybe somebody else has an older db
to try.

@robertDouglass: The first effect you describe is due to drupaldevs
running on PHP 5. I am unsure why the second thing does not work. In
node_save() the node object has a nid although there is none in the
form. Very strange.

I've enabled display of db queries on the testsite.


Fri, 04 Feb 2005 19:17:55 +0000 : dmjossel

No results here on the query:

select nid,type from node where type like '%/%';

On a database that was put in place prior to Drupal 4 and is now
running on 4.5.2.


Fri, 04 Feb 2005 20:44:23 +0000 : killes at www.drop.org

@dmjossel: thanks.

@all. The strange problem I reported was apparently php 5 related.
After applying Steven's php 5 patch it went away. One error is
remaining: If I create a new forum topic it will be shown as part of
the book on preview. Hmm, that was due to a db that got corrupted
during testing so that is fixed too.

Please poke around at the test site and look if you find more errors.


Sat, 05 Feb 2005 07:16:22 +0000 : Steven

By the way, I just remembered that Drupal.org has some blogs lingering
on in the database even though blog.module is not enabled. Perhaps this
is causing troubles?


Sat, 05 Feb 2005 11:22:59 +0000 : Anonymous

I can't see why it would. Drupal.org will need extra updates for images
and project nodes because those have their own tables. GK.


Sun, 06 Feb 2005 12:49:55 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_16.patch (52.49 KB)

Updated to apply to cvs again.


Tue, 22 Feb 2005 20:15:40 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_17.patch (49.64 KB)

Updated again.

All we need is a patch to upload module and an upgrade path for it.


Fri, 04 Mar 2005 04:22:58 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_18.patch (52.31 KB)

Updated once more. Moved log field from book to node_revisions table as
discussed in Antwerp. upload module still missing.

We need to decide under which circumstances the log field should be
displayed. Should that be added to the workflow? Should it depend on
the revisions setting?


Sat, 05 Mar 2005 19:27:03 +0000 : Anonymous

Attachment: http://drupal.org/files/issues/revisions_20.patch (75.52 KB)

Ok, here it is: Yet another revision of this grrrrreat patch.

Changes from previous versions:
- supports versioning for uploaded files. A problem is that if you
delete a file, it will be gone for all revisions.
- the log field is now in the node_revisions table, but each module has
to decide whether to show it or not.
  I've implemented it for the page and book node types. Also, the
field can be edited when adding non-book nodes to the book. The log is
displayed on the revisions page and when a node is moderated.
- the revisions are moved to an old_revisions table to a) keep the node
table smaller and b) still leave them available for contrib modules
that want to retrieve old version data.

The patch has been applied to killes.drupaldevs.org/revision where it
can be tested by anybody (especially people who have "site admin"
rights on drupal.org).
The database is from drupal.org and you should be able to log in with
your password or simply mail yourself a new one.



Sat, 05 Mar 2005 19:51:56 +0000 : Anonymous

Attachment: http://drupal.org/files/issues/revisions_21.patch (59.42 KB)

BTW, I marked this as a bug because atm the revisions field can grow
quite big. Neil has reported problems from some users who were not able
to load some nodes due to too many large revisions.

Also, some unrelated stuff crept into the patch. New version attached.


Tue, 08 Mar 2005 05:56:01 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_22.patch (60.29 KB)

Ok, I think I got it.

Changes since the last version:

- uploads are now properly versioned.

Still missing are pgsql checks and updates.


Thu, 10 Mar 2005 16:58:41 +0000 : Anonymous

Was able to get http://drupal.org/files/issues/revisions_21.patch to
work with drupal-cvs.tar.gz (10 March 2005) by:

- includes/database.mysql.inc: Commenting out duplicates for functions
function db_begin_transaction and function db_commit_transaction

- modules: node.module: Removing "'title' => $node->title," from
$node_table_values variable declaration and removing "'title' =>
"'%s'"," from "$node_table_types" variable declaration.

Happy to submit a patch if requested. I'll watch this thread.


Fri, 11 Mar 2005 23:59:45 +0000 : killes at www.drop.org

The duplicate function has been removed in rev 22 of this patch.

Why do you think the changes in node_save are needed? Titles are saved
in both tables for performance reasons.


Sun, 13 Mar 2005 16:12:21 +0000 : jlerner

Hi - I posted comment #62. The changes to node_save appear to be needed
because recent patches (both 21 and 22) remove the field 'title' from
table 'node'. So without the changes to node_save, node.module is
broken and generates errors.



Sun, 13 Mar 2005 16:29:42 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_23.patch (61.17 KB)

Thanks, Joshua, for catching this. node:title is there to stay.


Wed, 13 Apr 2005 16:29:23 +0000 : moshe weitzman

since HEAD is open again, perhaps it is a good time to revisit this patch.

once this is committed, lets address - http://drupal.org/node/11071
"node_validate does not respect group editing"


Mon, 18 Apr 2005 15:43:42 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_24.patch (60.39 KB)



Mon, 18 Apr 2005 16:16:32 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_24_0.patch (60.39 KB)



Tue, 19 Apr 2005 05:19:42 +0000 : Dries

I'll commit this patch later this week!  If you haven't checked this
patch already, I urge you to test/check it out because it will have
significant impact on existing code and modules!


Tue, 19 Apr 2005 05:21:33 +0000 : Dries

Also, what do people think about the n.title being duplicated?


Tue, 19 Apr 2005 05:26:58 +0000 : chx

I won't lose any sleep because of duplicated titles...


Tue, 19 Apr 2005 18:35:58 +0000 : killes at www.drop.org

Let me explain why I have chosen to duplicate the title (and also the
uid): If you look at all the queries in Drupal, you will find that most
of them only need the title and the uid of a node. By duplicating them,
we save expensive joins on the node_revisions table. Because of this,
the patch is actually a performance improvement.

A note about updating contrib modules:

Strictly speaking, they wouldn't need to be updated. They only need to
be if their authors decide that their info should be available for
revisioning. The upgrade path for forum.module in my update.inc patch
(plus the forum patch) should show you what needs to be done.
I will write a note for the update page once the patch hits core.


Sun, 24 Apr 2005 21:21:19 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_25.patch (60.38 KB)

Updated to cvs.

Dries: Based on some remarks in #drupal this is the last update I am
going to do. Apply it or won't fix it.


Sat, 30 Apr 2005 03:42:39 +0000 : Jeremy at kerneltrap.org

Attachment: http://drupal.org/files/issues/revisions_25.patch.patch (528 bytes)

That's a big patch.  I've only started looking through it.  I noticed
one little typo, affecting updates.  A patch to your last patch is attached.

I'm running with the revision patch on my dev server now happily.  I
like the concept.

What happens if you click 'stop' on your browser in the middle of a
MySQL "transaction"?  I assume that kills the connection to MySQL, and
the lock is freed?  But this then leaves changes only partially applied.

What exactly does locking/unlocking the tables buy us in MySQL?  I
don't see anywhere that we detect if an apply fails part way through
and then roll back...?  What am I missing?


Sat, 30 Apr 2005 07:11:28 +0000 : Dries

Jeremy: many of us are worried about the performance ramifications this
patch introduces.  Early experiments showed a small performance
improvement (while a performance regression might have been expected).
More performance reports from large sites like kerneltrap.org would
certainly help this patch.  Would you mind doing a quick performance
comparison and reporting back with some numbers?  Thanks.


Sat, 30 Apr 2005 12:38:02 +0000 : Jeremy at kerneltrap.org

Dries:  I'm not running HEAD on kerneltrap, so this really isn't a
possibility.  Furthermore, until I understand why we're locking tables,
I don't like it.  The idea of revisions in their own tables is great.
The idea of locking tables to get there (without any obvious benefit)
really worries me.


Sat, 30 Apr 2005 14:16:01 +0000 : killes at www.drop.org

@Jeremy: Thanks for looking at the patch! Also for catching the typo. :)

Did you try to upgrade your database? If yes, how did it go? One of
Dries' concerns is the complexity of the upgrade. How many nodes and
revisions did the db have?

About database locking: This part of the patch was created by Nick and
I simply continued to use it.

Maybe the code should rather be:

if (db_begin_transaction(array('{node}', '{node_revisions}',
    '{watchdog}', '{sessions}', '{files}'))) {
  db_query($node_query, $node_table_values);
  db_query($revisions_query, $revisions_table_values);
}

The idea is probably to avoid two updates running at the same time. I
don't think the locking helps if you abort the script at an
inconvenient time, and rollbacks aren't implemented in all MySQL
versions.

We could omit the db locking if deemed inappropriate. Maybe Nick can
explain his ideas behind this.

@Dries: I wonder who the "many of us" are. They certainly haven't
spoken to me. Moshe had some reservations about the upgrade path and
project module, but the time that project module abused revisions to
store issue updates was long ago and his reservations were resolved.
Nobody else (besides you of course and now Jeremy) has voiced
reservations in a way that was audible to me.

If you grep through the patch you will notice that there are only four
queries which have a join on the node_revisions table. Two of them are
in node_load; in the other cases the join replaced a join on the node
table. The two queries in node_load are the only ones that join on
both the node and the revisions tables. Thus, loading of individual
nodes might become somewhat slower. All other queries will be faster
since the node table is now much smaller. Also, node loading does not
have to be slower; it depends on your node table. If you had a lot of
revisions, and thus a large table, the new scheme will actually make
your queries faster, since we no longer load the revisions on each and
every node load. If you didn't have many revisions, your node_load
might be somewhat slower.

WRT the update script, Karoly pointed out that we could use multi-row
insert queries instead of one query per revision. This would probably
make the update somewhat faster. I am willing to work on this iff you
declare that you will commit the patch afterwards. I'd need to know
whether this will work on pgsql and on all supported MySQL versions
before I work on it.
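
For illustration, the difference Karoly is pointing at is between one
query per revision and a single multi-row ("extended") insert (column
names are assumptions):

```sql
-- One query per revision:
INSERT INTO node_revisions (nid, vid, title) VALUES (1, 1, 'rev 1');
INSERT INTO node_revisions (nid, vid, title) VALUES (1, 2, 'rev 2');

-- One extended insert; this multi-row VALUES syntax is exactly the
-- part whose support on pgsql and older MySQL versions would need
-- checking first:
INSERT INTO node_revisions (nid, vid, title) VALUES
  (1, 1, 'rev 1'),
  (1, 2, 'rev 2');
```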

About locking: database locking is dog slow, at least on MySQL. I was
using locks in an earlier version of the upgrade script but had to
remove them for (serious!) performance reasons.


Mon, 09 May 2005 15:07:34 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_26.patch (46.45 KB)

Ok, another update, cause I need it myself.

I've left out the transaction stuff for now. It is in principle
unrelated to this patch and should be discussed elsewhere.

This also makes the patch smaller and easier to review (hint, hint).


Mon, 09 May 2005 20:32:09 +0000 : killes at www.drop.org

The patch contained the update functions twice.


Mon, 09 May 2005 20:32:26 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_27.patch (39.05 KB)

The patch contained the update functions twice.


Mon, 09 May 2005 21:23:06 +0000 : Dries

Got one error during the upgrade path:



Mon, 09 May 2005 21:26:19 +0000 : killes at www.drop.org

This had happened to me as well when I tested this patch. The reason is
that for some reason the vid is not unique. Most likely there are some
entries with vid = 0 in there. Can you check which node types those
have? It always was an error in the test database. See:


Mon, 09 May 2005 21:27:06 +0000 : Dries

Actually, I got 2850 errors during the upgrade.

Some of these:

sprintf() [function.sprintf]: Too few arguments in
drupal-cvs/includes/database.inc on line 154.
Some of these:

Query was empty query: in drupal-cvs/includes/database.mysql.inc on
line 66.
And this:

Unknown table 'n' in field list query: SELECT n.nid, n.vid FROM node
INNER JOIN files f ON n.nid = f.nid in
drupal-cvs/includes/database.mysql.inc on line 66.


Mon, 09 May 2005 21:29:19 +0000 : Dries

Or this:

user error: Unknown column 'log' in 'field list'
query: SELECT parent, weight, log FROM book WHERE nid = 1 in
drupal-cvs/includes/database.mysql.inc on line 66


Mon, 09 May 2005 21:52:12 +0000 : Dries

The time required to generate my main page went from 902 ms (before
upgrade) to 2139 ms (after upgrade).

The time required to generate a forum listing (?q=forum/x) went from
1872 ms (before upgrade) to 2874 ms (after upgrade).

Maybe this is because my database is not consistent as a result of
the upgrade errors (yet I don't see any errors on the pages I visit).


Tue, 10 May 2005 00:24:38 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_28.patch (53.47 KB)

Ok, let me get to this from the bottom to the top:

- my test runs showed a different trend wrt timing. If I had gotten
your results, I would have stopped working on this long ago. So your
results must be wrong for some reason.

- user error: Unknown column 'log' in 'field list'
Wasn't my day, the book patch got lost. It is contained now. First -R
the old patch, then apply this one.

- Unknown table 'n' in field list query:
Walkah found this, but I forgot to fix it. Fixed now.

- I've no idea where the other queries come from. I am hoping that
either your test db is broken or they are follow-ups from the other
errors.

If you let me have your test db, I'll try some debugging.

Thanks for wasting your time, too.


Tue, 10 May 2005 05:07:31 +0000 : Dries

I double-checked and the numbers don't seem to lie.  I'll test some more
after work on another machine to make sure it is not platform-specific.


Wed, 11 May 2005 03:32:47 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_29.patch (54.83 KB)

Ok, here I am again.

What I did:

1) Ask Dries to let me have drupal.org database
2) get 400MB of SQL inserts...
3) take 23 minutes to import said data
4) Remove all image and project nodes (don't want to install their
modules), 11765 nodes left
5) back up data
6) take tests on non-cached /node page (as anonymous user).

ab results:

-c 1 -n 25:

Requests per second:    1.29 [#/sec] (mean)

Connection Times (ms)
              min  mean[+/-sd] median   max
Total:        663  775 179.7    689    1264

7) Do the same for the tracker page:

Requests per second:    0.83 [#/sec] (mean)

Total:       1182 1199   7.4   1199    1217

8) Apply my patch (rev. 28).

9) run db update and hold breath

10) update times out...

11) play back backup from 5)

12) wait

13) getting annoyed and removing cache, watchdog, and accesslog before
playing back the backup.

14) wait again. Understand why Dries doesn't try this patch often.
Maybe a smaller DB would do for testing?

15) wait more. get really annoyed.

16) Set time limit to 18000 in update.php

17) try again

18) fails again before the second update is completed.

19) curse.

20) delete search stuff from db. Ooops, sooo much smaller...

21) import again, below 2 minutes...

22) rewrite to use extended insert. Found a bug.

23) still does not complete. Mysql logging to the rescue!

24) tid = 0? Not good.

25) Well, the update works fine till node 10834. 5595 nodes done, 6136
to go.

26) Writing shell-based update script. Discovery: 24MB aren't enough.
Hopefully 64 are. Nope.
Extended inserts for revisions are apparently not the brightest idea:
huge memory consumption.
Hmm, no, all updates got through. Selecting the revisions to put them
into the old_revisions table screwed it. Learned about the CREATE TABLE
old_revisions SELECT syntax.
Yay! Finally. 24 MB are just not enough; the update.php script seems to
still break.

27) Benchmarks!

Requests per second:    1.54 [#/sec] (mean)
Connection Times (ms)
              min  mean[+/-sd] median   max
Total:        632  649  40.5    636     791


Requests per second:    0.86 [#/sec] (mean)

Total:       1119 1165  65.8   1160    1461

Ok: So we get an improvement for many node_loads, but none for simple
selects from node.
More tests can be done.

28) roll new patch
Ain't Drupal fun?
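
The CREATE TABLE ... SELECT trick from step 26 looks roughly like this
(table and column names are assumptions, not the actual update script):

```sql
-- Build the old_revisions table from a single SELECT instead of
-- reading every revision into PHP and re-inserting it row by row;
-- a WHERE clause would restrict this to superseded revisions:
CREATE TABLE old_revisions
SELECT nid, vid, title, body, timestamp
FROM node_revisions;
```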


Wed, 18 May 2005 13:38:05 +0000 : Dries

I did another round of tests on _another_ machine and it is not looking
good:

                              Before upgrade        After upgrade

?q= (main page)               218 ms/request         340 ms/request
?q=forum (forum overview)     754 ms/request        1520 ms/request
?q=about (book page)          375 ms/request        5400 ms/request

The upgrade process itself gave me a number of 'query was empty' and
'sprintf(): too few arguments' reports.  Everything seems to work fine,
though.

Looking at the ?q=about page, I see that the following query is
executed twice _and_ that each time, it takes more than 2 seconds to
complete:

SELECT n.nid, n.title, b.parent, b.weight FROM node n INNER JOIN book b
ON n.vid = b.vid WHERE n.status = 1 AND n.moderate = 0 ORDER BY
b.weight, n.title; 
| Table | Non_unique | Key_name    | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
| book  |          1 | book_parent |            1 | parent      | A         |          92 |     NULL | NULL   |      | BTREE      |         |
| book  |          1 | nid         |            1 | nid         | A         |         369 |     NULL | NULL   |      | BTREE      |         |
2 rows in set (0.00 sec)

The book module does not appear to have a primary key?  Sounds like a
bad idea so I added one:

mysql> ALTER TABLE book ADD PRIMARY KEY nid (nid);
Query OK, 369 rows affected (0.02 sec)
Records: 369  Duplicates: 0  Warnings: 0

Next, I wanted to make the vid column a unique key in all node tables:

mysql> ALTER TABLE node ADD UNIQUE vid (vid);
Query OK, 20392 rows affected (0.81 sec)
Records: 20392  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE book ADD UNIQUE vid (vid);
ERROR 1062: Duplicate entry '0' for key 2

mysql> ALTER TABLE forum ADD UNIQUE vid (vid);
Query OK, 10806 rows affected (0.10 sec)
Records: 10806  Duplicates: 0  Warnings: 0
As you can see, it fails for the book table which makes me believe
there is some inconsistent data ... I set out to fix that:

mysql> SELECT nid, COUNT(nid) AS vids FROM book GROUP BY vid HAVING
vids > 1;
| nid | vids |
| 871 |    2 |
1 row in set (0.00 sec)

mysql> SELECT title FROM node WHERE nid = 871;
Empty set (0.00 sec)

mysql> DELETE FROM book WHERE nid = 871;
Query OK, 1 row affected (0.00 sec)

mysql> ALTER TABLE book ADD UNIQUE vid (vid);
Query OK, 368 rows affected (0.01 sec)
Records: 368  Duplicates: 0  Warnings: 0

Looks like everything is well now.  Ran some new benchmarks:

                            Before upgrade      After upgrade      With fixes
?q= (main page)             218 ms/request       340 ms/request    336 ms/request
?q=forum (forum overview)   754 ms/request      1520 ms/request   1531 ms/request
?q=about (book page)        375 ms/request      5400 ms/request    475 ms/request

Unfortunately, we're still slower than the original code.


Wed, 18 May 2005 21:53:31 +0000 : killes at www.drop.org

Dries, thanks for testing it again.

I do think the broken queries you observe have something to do with
the bad performance after the update. Please log the queries and I will
have a look at them. I've never seen any such queries.

My update script also tries to create the appropriate indices, but it
will of course fail if the database contains cruft. The indices for the
forum are probably missing, too.

I am still convinced that the patch is core worthy.


Thu, 19 May 2005 04:36:09 +0000 : Dries

It wouldn't hurt if more people would benchmark this patch.  The patch's
current performance worries me.

Did you check your watchdog messages after upgrading the drupal.org
database?  Depending on your settings, errors might only be shown in
the watchdog.  I'll look into the remaining glitches as time permits.

Thanks for your persistence in keeping this patch up-to-date. :)


Thu, 19 May 2005 11:59:22 +0000 : killes at www.drop.org

Dries: Can you please let me have your updated database? I want to have
a look at it and try my own benchmarks with it.

And yes, if I did learn something on this project, it is how to be
persistent. ;)


Fri, 24 Jun 2005 16:25:34 +0000 : killes at www.drop.org

Here is an idea that occurred to me:

The problem with the upgrade process is that keeping the existing
revisions requires a lot of work. It generates a huge number of SQL
queries for a large database and also requires a huge amount of
memory.
My suggestion is to let update.php handle only the basic upgrade, i.e.
without old revisions. An additional module could be created that would
implement a cron-based approach to upgrading old revisions one node at a
time. It could expose a hook to let contrib modules do their own
updates.
Dries, what do you think? (I am writing "Dries" because he seems to be
the only one who is interested in getting this into core...)


Fri, 24 Jun 2005 22:25:11 +0000 : Junyor


I'm also interested in seeing this hit core.  What about adding
something to legacy.module to do it?


Sun, 26 Jun 2005 21:14:54 +0000 : chx

This is a sensible approach. Maybe this is the _only_ sensible approach.

I have a little problem though: while the conversion is running,
somehow both revision handlers need to be available.


Sun, 26 Jun 2005 22:16:21 +0000 : killes at www.drop.org

hehe, one only has to whine and bug enough and one gets some feedback.

@junyor: legacy.module would be a good place. my current idea is to
auto-enable it in update.php and then disable it again in legacy_cron
after all nodes are updated.. ;)

@chx: When somebody wants to look at revisions of a node that node
could be auto-updated.

The only problem is contrib modules: they'd need to have some hook in
order to update their own data. When somebody looks at the revisions of
a node that cannot be updated because the contrib module in question has
no such hook, we can optionally let the user discard the old revisions,
I guess.

Dries, what do you think?


Mon, 25 Jul 2005 16:48:51 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_30.patch (53.13 KB)

Sooooo; I've updated this patch once again.
Dries didn't like my idea of legacy updates so we will have an option
to discard old revisions in case the update should prove difficult.
Always keep a backup.


Mon, 25 Jul 2005 17:31:12 +0000 : Bèr Kessels

Can we please accept and commit this patch? We can iron out any issues
later. This patch is just far too big and complex to be kept up to date
as a patch.


Mon, 25 Jul 2005 19:22:50 +0000 : Dries

@Ber: no, we can't commit this patch blindly.  

Data loss is a much bigger problem than a syntax error or other code
glitch.  We can break Drupal, but we can't break people's data.

I've spent quite a bit of time testing the previous version of this
patch and noticed significant performance degradation.

Tell me, Ber, why can't you test this patch first?


Mon, 25 Jul 2005 19:40:37 +0000 : Bèr Kessels

I was not referring to not testing it. If it is just the upgrade path
that proves to be cumbersome, then why can we not fix it afterwards,
i.e. when everyone has had a good chance to look at it?

Testing such a huge patch requires a lot of work, something no-one just
does in his spare hours. We had discussions before about dealing with
big changes in a slightly different way: committing them quicker and
leaving the ironing out of any leftovers to the community. I hooked
into that discussion here, for two reasons. One is that Gerhard has
spent numerous hours on maintaining this patch. The other is that the
community can be of much more help ironing out issues in such a large
change than Gerhard can be on his own.
And yes, data loss is very bad, but no-one should lose data if he/she
followed the instructions (backing up)...


Mon, 25 Jul 2005 20:00:38 +0000 : Dries

Ber: applying this patch takes 15 seconds.  Whether I apply this patch
for you, or whether you apply it yourself, it will hardly reduce the
'testing cost'.  The problem is that data loss can be subtle; it might
go unnoticed for a couple days.  Make no mistake, I'd like to see this
patch committed ASAP, but it warrants some testing.  Let's
test/benchmark it and commit it.


Mon, 25 Jul 2005 20:55:03 +0000 : drumm

The upgrade script already takes a long time to execute and does not
provide feedback to the user about how it is doing. I plan on making
the upgrade script able to spread the updates across multiple page
views and give user feedback showing that progress is being made. This
will hopefully make the speed of this update a moot point.

I am working on this for my day job, CivicSpace, so it should get done
"real soon now", but it should be expected to take a while (a smallish
number of weeks). Please make compromises if necessary.


Wed, 27 Jul 2005 17:38:57 +0000 : killes at www.drop.org

Attachment: http://drupal.org/files/issues/revisions_31.patch (48.78 KB)

@drumm: the timeout isn't usually the problem. Memory consumption is.
I'll look into doing the updates in chunks of 1000 nodes or so.

Clousseau found a bug in the patch, updated.


Thu, 28 Jul 2005 18:28:08 +0000 : moshe weitzman

Even if the upgrade issues are resolved, we have to figure out why this
patch is slowing down drupal (according to benchmarks posted here).


Fri, 29 Jul 2005 07:51:45 +0000 : Dries

We can't proceed without some additional benchmarking. I already
benchmarked it on two machines (see comments above), and I'll benchmark
it again after August 10.  The performance impact of this patch worries
me, so if two other people could test and benchmark this patch
extensively, that would go a long way.  August 10 is close to the
code freeze, so some help is necessary.

If Gerhard/killes reviewed/tested one of your patches, it is time to
return the favor ...
