[Framework-Team] Re: PLIP load test reports available

Ross Patterson me at rpatterson.net
Sun Sep 6 16:32:32 UTC 2009


Maurits van Rees <maurits at vanrees.org>
writes:

> On Sun, Sep 06, 2009 at 05:52:52PM +0800, Martin Aspeli wrote:
>> This is really good stuff!
>>
>> I must admit, I don't understand the graphs or the tables-of-graphs
>> at all, though. Is there a quick explanation somewhere about how to
>> read them?
>
> The reference bench results are for Plone4 without plips.
> The challenger bench results are for one of the plips.
>
> If the graph is mostly green, the plip has better performance.
> If the graph is mostly red, the plip has worse performance.

Thanks for the documentation, very helpful!

> For each plip three tests have been done, showing performance for:
> - content creation
> - read only
> - heavy writes
> I don't know what is being tested exactly.

See the tests package in collective.coreloadtests.  Should be very
readable:

http://dev.plone.org/collective/browser/collective.coreloadtests/trunk/collective/coreloadtests/tests
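
For a feel of what those tests look like, here is a minimal sketch of
a funkload test case (not copied from collective.coreloadtests; the
class, method, and page names are only illustrative):

    from funkload.FunkLoadTestCase import FunkLoadTestCase

    class ReadOnly(FunkLoadTestCase):
        """Exercise a few read-only requests against the portal."""

        def setUp(self):
            # The target URL comes from the [main] section of the
            # matching .conf file.
            self.server_url = self.conf_get('main', 'url')

        def test_front_page(self):
            # A plain GET; funkload records the response time and
            # status for the bench report.
            self.get(self.server_url + '/front-page',
                     description='View the front page')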

Also, using the funkload test recorder makes creating new tests pretty
easy:

http://funkload.nuxeo.org/#test-recorder
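
Off the top of my head, the recorder workflow goes roughly like this
(double-check the exact options against the funkload docs; the
basic_navigation name is just the tutorial example):

    $ fl-record basic_navigation
      # click through the site via the proxy it starts, then stop it;
      # fl-record writes test_BasicNavigation.py and BasicNavigation.conf

    $ fl-run-bench -c 1:5:10 test_BasicNavigation.py \
          BasicNavigation.test_basic_navigation
    $ fl-build-report --html funkload.xml
      # funkload.xml being the bench result file, IIRC the default name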

> On the front page of the tests at
> http://weblion.psu.edu/static/loadtesting/plone4.0/plips.html
> you have per plip the following items:
> - label/name of the plip
> - for each of the three tests:
>   - main graph, with link to detailed report of the plip
>   - main difference graph between plip and plain plone, with link to
>     detailed difference report
>
> Take the difference page for plip 9310 showing heavy writes:
> http://bit.ly/vN9kJ
>
> The first graph has the requests per second.
> The blue line shows the results for B1 (bench 1, plain Plone 4.0)
> The purple line shows the results for B2 (bench 2, plip 9310)
>
> The first part shows the difference between those lines as red,
> meaning that the plip is slightly slower; this part of the graph is
> for 1-3 concurrent users.
>
> To the right we have a green difference, meaning that the plip is
> faster, even up to 80% for 10 concurrent users.

Actually, this is one of the invalid tests.  If you look at the test
data, you see that it has a 100% test failure rate.  That is probably
why the throughput was so high.

http://weblion.psu.edu/static/loadtesting/plone4.0/test_WriteHeavy-20090906T002920-plip9310-flexible-user-registration/index.html#test-stats

So the things that should raise a flag are differential graphs with
solid color (red or green) between the curve and the X axis, and
individual test report graphs with a horizontal red line at the top
of the bottom portion (the error portion, only present if there are
errors) of the per-test report image.

> If results look too absurd, probably something went wrong in the
> tests.  For example plip 8814, replacing secure mail host, looks
> totally red.  Apparently with that plip we serve a whopping zero pages
> per second. :-)

Exactly, and that's where the *.log files come in.  Clicking through
to the individual test report might also be informative.

> I hope that clears things up a bit.

Greatly, I think.  Thanks so much.

Ross




