
The key to creating useful performance tests lies in leveraging an existing SLA for the Application Under Test.

When we are conceptualizing and developing performance tests, we must be aware of SLAs, or Service Level Agreements.

The Service Level Agreement is a specification for an Application Under Test (AUT). Essentially, the SLA specifies that the application under test needs to comply with certain performance standards, which are typically business requirements of the AUT. Classic examples of SLA requirements are specifications like: the AUT needs to be able to handle a certain number of hits per second or hits per minute, or it should be able to handle a certain number of concurrent users. The SLA also defines things like how quickly the service should respond under certain levels of load. So, when we talk about performance testing or "load testing", we are really verifying that an application is performing as we expect it to. This means verifying that the service is performing within the levels that we have defined in the SLA.
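As a point of reference, the performance targets an SLA calls out can be written down as plain data before any test is built. The sketch below is illustrative only, assuming made-up target values; the class and field names are hypothetical and are not part of any Parasoft API.

```java
// Illustrative only: a minimal sketch of how SLA performance targets
// might be captured as data before modeling a load test.
public class SlaTargets {
    // Maximum average response time the service must meet, in milliseconds (assumed value)
    int maxAvgResponseTimeMs = 500;
    // Peak throughput the service must sustain (assumed value)
    int minHitsPerSecond = 200;
    // Number of simultaneous users the service must support (assumed value)
    int maxConcurrentUsers = 1000;
    // How long the service must sustain the peak load, in minutes (assumed value)
    int sustainedLoadMinutes = 30;
}
```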

Before defining the load test, we need to make sure that the SLA is well defined. We have to ask ourselves some questions:

  • What are we trying to verify in our system?
  • Are we trying to verify a specific performance profile?
  • Are we trying to define an expected performance threshold?
  • Are we defining how the system is expected to perform in situations where it becomes overloaded?

A well-defined SLA helps us clearly understand why we are performing the load test in the first place.

At this point the specifics of the SLA become very important. Things like:

  • Exactly how many concurrent users do we need to handle?
  • Exactly how long does the service need to sustain a heavy load?
  • What is an acceptable range for system response times during specific loads?
  • Precisely which use cases would typically be running during times of heavy traffic, and if multiple use cases run together, what percentage of the load should each use case provide?

Once we fully understand these requirements, we can begin to model a valid and useful load test. We can start this process by leveraging a SOAtest test case, creating scenarios that accurately depict one or more use cases. We can then pull those use cases into Load Test and run them in a way that will validate the SLA, including, if required, running them with different distributions of users (see the sketch below). Using this same methodology, we could also leverage scenario testing and web functional testing, and then verify that these scenarios perform as we have defined in our SLA.
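As a rough illustration of that kind of distribution, the sketch below translates use-case percentages into virtual-user counts. The use case names, weights, and total user count are hypothetical examples, not values from any particular SLA or from the Load Test configuration itself.

```java
import java.util.Map;

public class LoadProfileSketch {
    public static void main(String[] args) {
        // Hypothetical weighting of use cases, expressed as a percentage of the
        // total virtual-user load (values sum to 100).
        Map<String, Integer> useCaseWeights = Map.of(
                "Search Catalog", 60,
                "Add To Cart",    30,
                "Checkout",       10);

        int totalVirtualUsers = 1000; // concurrent users required by the assumed SLA

        // Translate the percentages into a concrete user count per use case.
        useCaseWeights.forEach((useCase, percent) -> {
            int users = totalVirtualUsers * percent / 100;
            System.out.printf("%s: %d virtual users (%d%%)%n", useCase, users, percent);
        });
    }
}
```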

The final part of this workflow is telling Load Test how to define a successful verification. For example, a particular performance test may need to consistently respond within a specified amount of time. We configure our load test with a Quality of Service (QoS) metric that matches this time, and Load Test will verify whether the actual performance was within the target time (successful) or outside of the target (unsuccessful). These validations are based on the measurements that Load Test gathers while the load test is running. By modeling our QoS metrics to match the SLA, we have effectively configured Load Test to automatically validate the service against the performance requirements of that SLA.
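For illustration, the following sketch mimics the pass/fail decision a QoS metric makes: it averages a set of response-time measurements and compares the result against an assumed SLA target. This is a generic Java example with hypothetical numbers, not the Load Test QoS configuration or scripting API.

```java
import java.util.List;

public class QosCheckSketch {
    public static void main(String[] args) {
        // Hypothetical measurements gathered during a load test run, in milliseconds.
        List<Integer> responseTimesMs = List.of(320, 410, 480, 530, 390, 450);

        int targetMs = 500; // response-time target taken from the assumed SLA

        // Average the observed response times and compare against the target,
        // mirroring the success/failure decision described above.
        double average = responseTimesMs.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(Double.MAX_VALUE);

        boolean success = average <= targetMs;
        System.out.printf("Average response time: %.1f ms (target %d ms) -> %s%n",
                average, targetMs, success ? "SUCCESS" : "FAILURE");
    }
}
```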