[Product-Developers] Re: The most efficient way to store 270 AT fields?
r.ritz at biologie.hu-berlin.de
Thu Jan 8 11:32:43 UTC 2009
Mikko Ohtamaa wrote:
> We are facing a problem where we need to store 270 fields per item. The
> fields are laboratory measurements of a patient - 40 measurement values for
> 7 timepoints. The fields need to be accessed per timepoint, per measurement,
> and all fields for one patient at once. There will be over 10000 patients,
> distributed under different hospital items (tree-like, for permission
> reasons). Data is not accessed for two patients at once, so we don't need to
> scale the catalog.
As others have pointed out, don't make 270 individual fields
on your type.
One further alternative not mentioned yet would be to store
the values in a plain Python dictionary or a list of dictionaries.
In that case, the Record(s)Field/Widget from ATExtensions
could be of help.
On top of this, ATExtensions also demonstrates how to handle a
custom data type that can be mapped to the ones mentioned above
(look for the FormattableName(s) datatype/field/widget).
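A minimal sketch of what such a list-of-dictionaries layout could look like, in plain Python (the measurement names are invented, and the actual RecordsField schema declaration is omitted since it depends on your ATExtensions version):

```python
# Hypothetical data shape for 40 measurements x 7 timepoints:
# a list of dicts ("records"), one dict per timepoint.
# Measurement names (glucose, hemoglobin) are made up for illustration.
records = [
    {"timepoint": tp, "glucose": 5.0 + tp, "hemoglobin": 13.5}
    for tp in range(7)
]

# access all values for one timepoint
tp3 = next(r for r in records if r["timepoint"] == 3)

# access one measurement across all timepoints
glucose_series = [r["glucose"] for r in records]
```

This keeps all 270 values in one attribute, so there is only one field on the AT schema rather than 270.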
> So I am curious about how we make Plone scale well for this scenario.
> - The overhead of a field in an AT schema? Should we use the normal storage
> backend (Python object value), or can we compress our field values into a
> list/dict to make it faster using a custom storage backend?
> - The wake-up overhead of an AT object? Should we distribute our fields
> over several ZODB objects, e.g. per timepoint, or just stick all values into
> one ZODB object? All fields per patient are needed in some views at once.
> - One big Zope object vs. a few smaller Zope objects?
> Mikko Ohtamaa
> Oulu, Finland
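On the wake-up question quoted above, here is a plain-Python sketch of the per-timepoint split. In real ZODB code the Timepoint class would subclass persistent.Persistent so that each timepoint loads (wakes up) independently of its siblings; the class and measurement names below are hypothetical:

```python
# Plain-Python stand-in for splitting a patient's values per timepoint.
# In a real ZODB setup, Timepoint would subclass persistent.Persistent,
# making each timepoint a separately loadable database record.
class Timepoint:
    def __init__(self, values):
        self.values = dict(values)  # measurement name -> value

class Patient:
    def __init__(self):
        self.timepoints = {}  # timepoint index -> Timepoint

    def value(self, tp, name):
        # a view needing one timepoint wakes only that sub-object
        return self.timepoints[tp].values[name]

    def all_values(self):
        # a view needing everything wakes all seven sub-objects
        return {tp: dict(t.values) for tp, t in self.timepoints.items()}

patient = Patient()
for tp in range(7):
    patient.timepoints[tp] = Timepoint({"glucose": 5.0 + tp})
```

The trade-off: per-timepoint views load less data, but views that need all fields for one patient pay for seven object loads instead of one.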