[Framework-Team] Re: [plone4] Release process
tseaver at palladion.com
Sat Dec 27 19:23:48 UTC 2008
Ross Patterson wrote:
> Tres Seaver <tseaver at palladion.com> writes:
>> Ross Patterson wrote:
>>> Tres Seaver <tseaver at palladion.com> writes:
>>>> Hanno Schlichting wrote:
>>>>> - Plone 4 will have a documented upgrade story
>>>>> A migration from Plone 3 to 4 does not need to be possible in an
>>>>> almost fully automated fashion. We need to ensure we have an
>>>>> easy-to-follow, understandable, documented upgrade story. If we for
>>>>> example change APIs or rearrange code, we can document the new
>>>>> places in writing and with error references for the most commonly
>>>>> used parts. If you need to change your buildout configuration, a
>>>>> document explaining the changes is fine, we don't need to build an
>>>>> upgrade machinery for configuration files.
>>>> Can I persuade you and the FWT to forego an "upgrade-in-place"
>>>> strategy for moving from P3 to P4, and instead to have a well-tested
>>>> and documented "dump-and-reload" story?
>>> I've never actually understood how a dump-and-reload approach would
>>> be inherently more maintainable or otherwise more trouble-free. I
>>> know this has been discussed before, but I missed those discussions.
>>> Can anyone shortcut the research for me and give me some links or
>>> pointers to previous discussions?
>> The short answer is that in-place migrations lead to
>> ordering-dependent arrangements of crufty bits in the site: it gets
>> particularly bad when the representation format of the data changes.
>> If the programmer is both careful and lucky, she can often mitigate
>> the problem with clever defaults, properties, etc., but the downside
>> is that the BBB-driven code has to stay around *forever*.
> Thanks, that's exactly what I wanted to hear. So it's not so much about
> being inherently easier to implement as it is about enabling the removal
> of code.
> In that case, I'm +1 on this but I have other concerns.
>> Dumping the content out to a "neutral" format and loading it into a
>> "clean" site loses the crufty bits, and leaves the code in the "new"
>> software free of nasty BBB stuff. It also gives people a migration
>> target (for moving content into Plone, or even out of it), as well as
>> a non-ZODB-specific backup representation of the site (e.g., to
>> populate a staging / testing server).
> It seems like this dump format will likely become a point of
> hackability in our software ecosystem. People out there will find
> interesting things to do with it aside from dump-and-reload. This would
> be a good thing except we're *expecting* the format to be unstable and
> change rapidly. It seems possible that there would be some ruffled
> feathers out there amongst those who come to depend on such hacks and
> then find that their favorite hack is quickly broken by the next
> release. As such, it seems like it would be a good idea to dress up the
> dump format in flashing red lights and loud alarms to discourage at
> least the adoption of such hacks if not their creation.
> It also seems like the complexity of this dump format is easily
> underestimated. I'm a little concerned that we'll adopt a solution that
> is more accidental in nature, such as an extension to the current GS
> handlers. I suspect that later we'd find some structural inadequacies
> in such an approach, but having already built our upgrade machinery
> around it we'll have yet another painful change to make as we change to
> something better designed. OTOH, perfect is the enemy of good enough.
> I suggest we start with a *minimal* design discussion about how to
> architect a dump-and-reload strategy with the future in mind.
I have code in hand which I have used successfully for two different
customer projects: one was using mostly "stock" Plone content types,
while the other used entirely custom versions; both were based on AT.
This format has allowed:
- Exporting content from a large production site (50k content items),
without taking the site down.
- Merging content from multiple Plone sites into a single site, by
running automated transforms of the exported formats.
The basic assumption of the design is that *every* content object maps
onto a directory:
- Each directory contains a file which exports the item's properties
in an INI format
- BLOBish property values are mapped as separate files.
- References are mapped as sequences of UIDs in the properties file.
- Containers have a file enumerating their subobjects.
- Security settings are captured in a separate INI file.
- Workflow history would map onto a separate file (but neither project
wanted to preserve the history, so it remains unimplemented).
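A minimal sketch of that per-item mapping, in plain Python. The file
names (".properties.ini", ".objects") and field names here are
illustrative assumptions, not the actual format:

```python
# Hypothetical sketch: map one content object onto a directory with an
# INI properties file, separate files for BLOB-ish values, and a
# listing file enumerating sub-objects.
import configparser
import os
import tempfile

def export_item(root, item):
    path = os.path.join(root, item["id"])
    os.makedirs(path, exist_ok=True)

    # Non-BLOB properties go into an INI file; references are
    # serialized as a sequence of UIDs.
    ini = configparser.ConfigParser()
    ini["properties"] = {
        "title": item["title"],
        "related": " ".join(item.get("references", [])),
    }
    with open(os.path.join(path, ".properties.ini"), "w") as f:
        ini.write(f)

    # BLOB-ish values are written as separate files next to the INI.
    for name, data in item.get("blobs", {}).items():
        with open(os.path.join(path, name), "wb") as f:
            f.write(data)

    # Containers enumerate their sub-objects, one id per line.
    if "children" in item:
        with open(os.path.join(path, ".objects"), "w") as f:
            f.write("\n".join(item["children"]))
    return path

root = tempfile.mkdtemp()
p = export_item(root, {
    "id": "front-page",
    "title": "Welcome",
    "references": ["uid-1234", "uid-5678"],
    "blobs": {"image.jpg": b"\xff\xd8"},
    "children": ["doc-a", "doc-b"],
})
print(sorted(os.listdir(p)))
```

Because every piece lands in its own plain file, automated transforms
(like the site-merge mentioned above) reduce to ordinary file and INI
manipulation.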
The format is built around GenericSetup's adapters, which makes it
possible to extend the framework (e.g., to capture unforeseen values),
or to replace implementations (e.g., to accommodate non-AT content
schemas). It does not use XML at all, which means it will run even in
environments where building lxml is problematic.
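As a rough illustration of that pluggability (a toy registry, not
GenericSetup's actual adapter machinery), replacing the serializer for
one schema leaves the rest of the framework untouched:

```python
# Hypothetical sketch of adapter-style extension: exporters are
# registered per schema name and looked up at export time, so a site
# with non-AT content can plug in its own serializer.
EXPORTERS = {}

def register_exporter(schema, func):
    # Later registrations override earlier ones for the same schema.
    EXPORTERS[schema] = func

def export_properties(obj):
    # Fall back to the stock exporter when no override is registered.
    exporter = EXPORTERS.get(obj["schema"], EXPORTERS["default"])
    return exporter(obj)

register_exporter("default", lambda o: {"title": o["title"]})
register_exporter("custom", lambda o: {"title": o["title"],
                                       "extra": o.get("extra", "")})

print(export_properties({"schema": "custom", "title": "Hi", "extra": "x"}))
```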
Tres Seaver +1 540-429-0999 tseaver at palladion.com
Palladion Software "Excellence by Design" http://palladion.com