Load Testing by RPM Solutions

Fact: Simple Performance Requirements are Easiest to Optimise for.

An optimised application is able to process its workload in a reliable and timely manner for the lowest cost.  There is no point having a super low cost online store if it is so slow that no customers are patient enough to complete the sales order process.  Likewise, there is no point having a super fast application that costs so much to run that it is impossible to make a profit.  There is a middle ground, and that middle ground has generally been reached through careful design, validated by performance testing.  However, with many new generation applications exposed to very dynamic workloads, and with the supporting infrastructure generally virtualised or even deployed on a cloud, it is much more difficult to model the workload processing requirements and then map them back to suitable infrastructure specifications.

Rather than attempting to determine the required infrastructure through capacity planning techniques, it is more efficient to build the system with a 'first guess' approximation of servers and run a series of tests to validate that particular configuration.  Based on the results of the test, or a suite of tests, an alternate configuration may be indicated.  This new configuration can then be tested as the next 'iteration' of the solution.  A Design of Experiments (DOE) approach can be used to vary a number of infrastructure parameters and converge on an optimised configuration for given levels of workload, as sketched below. (See Vendor Benchmarking Test for an example of how this might look.)
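
As a minimal illustration of this iterative approach, the sketch below sweeps a small grid of hypothetical infrastructure factors.  The factor names, levels, and the run_load_test() hook are assumptions for illustration, not part of any real harness.

    from itertools import product

    # Factors and levels for a full-factorial sweep; both are
    # hypothetical examples.
    WEB_NODES = (2, 4, 8)
    DB_SIZE = ("medium", "large")

    def run_load_test(web_nodes, db_size):
        """Deploy the candidate configuration, replay the standard
        workload against it, and return the measured 97th percentile
        response time in seconds.  Stubbed with a fixed value here;
        wire this to your own test harness."""
        return 2.5

    results = {cfg: run_load_test(*cfg) for cfg in product(WEB_NODES, DB_SIZE)}

    # Keep configurations that meet the (example) 3 second target,
    # smallest first as a rough proxy for lowest cost.
    passing = sorted(cfg for cfg, p97 in results.items() if p97 < 3.0)
    print(passing)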

However, a myth has emerged that it is beneficial to prescribe a significant number of technical performance requirements, resulting in some projects with literally hundreds of 'non-functional requirements' that all need to be validated and reported on in each test run or test cycle.  When seeking to optimise a technology stack, the efficiency of the optimisation process is inversely proportional to the complexity of the requirements.  Ideally, one could consider just two non-functional requirements: a statistical measure of response time, and an error rate, across all user interactions.  For example, one could prescribe that the 97th percentile response time must be less than 3 seconds, and the error rate must be less than 1 in 2,000 interactions.
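
A minimal sketch of checking a single test run against this pair of requirements follows; the percentile level, thresholds and sample format are simply the illustrative values above.

    import math

    def meets_requirements(samples, p_level=0.97, max_seconds=3.0,
                           max_error_rate=1 / 2000):
        """Check one test run against the two example requirements.
        samples is a list of (response_seconds, ok) tuples covering
        all user interactions in the run."""
        times = sorted(t for t, _ in samples)
        errors = sum(1 for _, ok in samples if not ok)
        # Nearest-rank percentile: smallest value such that p_level
        # of the observations sit at or below it.
        p97 = times[max(0, math.ceil(p_level * len(times)) - 1)]
        return p97 < max_seconds and errors / len(samples) < max_error_rate

    # e.g. a tiny run with one failed interaction:
    run = [(0.8, True), (1.2, True), (4.1, False), (0.9, True)]
    print(meets_requirements(run))  # False, on both counts

The whole validation collapses to a single pass/fail signal, which is exactly what makes progress towards the goal so easy to track.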

It should be easy to see how much simpler it would be to optimise a solution against such requirements.  There is nothing stopping a team from investigating particular slow transactions that fall outside the 97th percentile, but from an optimisation and performance validation perspective, the simpler target means that more effort is spent optimising than analysing results.  It also gives the entire team a clear number to aim for, and allows much easier presentation of progress towards the goal.

It is also important to ensure that the workload is specified adequately, but not in too much detail.  It is generally best for workload to be defined as an hourly target.  For example, a target hourly workload may include 5K registrations, 10K logins, 50K navigation steps, 40K searches, 6K 'add to cart' and 2K 'place order' transactions, driven by 2,000 concurrent users.  This means that the 'complexity' of the test is embedded in the workload specification rather than in the transaction response time specification.  Note also that the transaction rates are specified with quite low precision: 10K logins per hour allows much more workload variation than specifying 10,000 logins per hour.  However, it is critical that each test is repeatable, from a workload perspective, to a very high precision, as sketched below.
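
Purely by way of illustration, the mix above could be captured as a simple specification, with a repeatability check that holds each run to a much tighter tolerance than the coarse targets themselves.  The names, numbers and 1% tolerance are all assumptions.

    # Hourly transaction targets and concurrency, mirroring the
    # example mix above; all values are illustrative.
    HOURLY_WORKLOAD = {
        "register": 5_000,
        "login": 10_000,
        "navigate": 50_000,
        "search": 40_000,
        "add_to_cart": 6_000,
        "place_order": 2_000,
    }
    CONCURRENT_USERS = 2_000

    def is_repeatable(achieved, spec=HOURLY_WORKLOAD, tolerance=0.01):
        """Targets are deliberately coarse ('10K logins'), but every
        run should reproduce the achieved workload to high precision,
        e.g. within 1% of the spec for each transaction type."""
        return all(
            abs(achieved.get(name, 0) - target) <= tolerance * target
            for name, target in spec.items()
        )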

In summary, simplifying the performance requirements makes reporting easier, and allows much better focus on the problems rather than on time-consuming reporting and communication of complex sets of results.  This is not to suggest that each and every step in each test script should go unrecorded; such detailed information is critical in isolating problem areas.  However, from a formal reporting perspective, the fewer the metrics that need to be communicated, the better.

© Copyright 2015, RPM Solutions Pty Ltd.  All Rights Reserved.