Test Delay vs. Maximum number of loops
LegacyForum
Posts: 1,664 ✭✭
Interesting dilemma. I have a test scenario that calls a service located on two different servers. The service generates keys in numerical order. Each test should call the service 100 times. The follow-on test (on the alternate server) should not start until the previous 100 calls have completed.
If the testing were done manually, here is how it would be executed:
Test 1: Call service 100 times on server A
Test 2: Call service 100 times on server B
Test 3: Call service 100 times on server A
Test 4: Call service 100 times on server B
At the end of the testing the final key values are verified to ensure the correct ranges were used.
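The manual sequence above can be sketched in a few lines of Python. This is only an illustration, not the actual SOAtest setup: `call_service()` is a hypothetical stand-in for the real SOAP call, with a simple counter mimicking the service's numerically ordered key generation.

```python
import itertools

# Counter standing in for the service, which generates keys in
# numerical order across both servers.
_counter = itertools.count(1)

def call_service(server: str) -> int:
    # Placeholder for the real SOAP call against the given server.
    return next(_counter)

def run_sequence(servers=("A", "B", "A", "B"), calls_per_test=100):
    keys = []
    for server in servers:
        # All 100 calls for one test must complete before the next
        # test (on the alternate server) begins.
        for _ in range(calls_per_test):
            keys.append(call_service(server))
    return keys

def keys_in_order(keys):
    # Final verification: keys strictly increase, so each test used
    # the range immediately following the previous test's range.
    return all(a < b for a, b in zip(keys, keys[1:]))
```

The whole difficulty described below is that the test tool, unlike this loop, does not naturally block until all 100 calls have finished.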
So here is my dilemma:
If the tests are all contained within a scenario, I can set up test dependencies. However, a test is marked as "success" as soon as it has run a single time, so follow-on tests start long before all 100 calls in Test 1 are complete.
vs.
If the tests are not contained within a scenario, I cannot set up any dependencies. Calculating additive delays in milliseconds is dicey at best, and there's no telling what a little network traffic would do to them.
Thanks for any thoughts or ideas!
Staring into the ether(net)
BrianH
Comments
Hello again Brian,
I have some idea of your testing scenario based on your description, but I will have to make some assumptions in order to give you some suggestions.
First of all, since I assume you are using SOAP clients to access your service, you will need to add some sort of testing on the XML response. As is, the SOAP Client only knows if it received a response or not, and has no functionality to check for the content of the response. You will need to chain additional tools to the SOAP response in order to verify/test the contents. The XML Asserter is a very useful tool in testing the values of xml elements contained therein.
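As a rough analogue of what an XML Asserter rule does, the check boils down to parsing the response and asserting on an element's value. The envelope and element names below are invented for illustration, not your service's actual schema:

```python
import xml.etree.ElementTree as ET

# A made-up SOAP response; real element names will differ.
response = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GenerateKeyResponse>
      <key>101</key>
    </GenerateKeyResponse>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(response)
# Locate the <key> element anywhere in the body and test its value,
# much as an XML Asserter rule would.
key = int(root.find(".//key").text)
assert key > 0, f"unexpected key value: {key}"
```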
Second, if you create test suites within your main scenario, you can manage the execution flow by going to "Execution Options -> Test Flow Logic" under your top-level scenario folder. By configuring these settings, you can control how the tests are run, based on the pass/fail status of the previous tests. This will be a key feature in implementing your testing scenario.
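The flow that those settings give you is essentially "run the next suite only if the previous suite passed." A minimal sketch of that logic, with invented suite names and trivially passing/failing tests standing in for real ones:

```python
def run_suite(tests):
    # A suite passes only if every test in it passes.
    return all(test() for test in tests)

def make_suite(passes: bool):
    # Three stand-in tests that all share the same outcome.
    return [lambda: passes] * 3

executed = []
suites = {
    "Server A, pass 1": make_suite(True),
    "Server B, pass 1": make_suite(False),  # simulated failure
    "Server A, pass 2": make_suite(True),
}

for name, tests in suites.items():
    executed.append(name)
    if not run_suite(tests):
        # Later suites depend on this one passing, so stop here.
        break
```

With the simulated failure in the second suite, the third suite never runs, which mirrors the skip behavior the Test Flow Logic settings provide.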
Third, you can add "tear-down tests" to your individual test suites that will only run after you've gone through all iterations within the test. However, this will only work if the iterations are driven by a datasource (i.e., if you parameterize a field within the SOAP Client with a column from a datasource, that test suite will run until all rows within that column are exhausted).
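A minimal sketch of that data-source-driven pattern (the row values and the stand-in "call" are invented): the test body runs once per datasource row, and the tear-down step runs exactly once, after every row has been consumed.

```python
# 100 datasource rows drive 100 iterations of the test.
datasource = [{"value": i} for i in range(1, 101)]

results = []
for row in datasource:
    # Stand-in for one parameterized SOAP call per datasource row.
    results.append(row["value"] * 2)

# Tear-down: executes only after all iterations are exhausted,
# which is what gates the follow-on suite from starting early.
assert len(results) == len(datasource)
```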
Attached to this post is a .tst file that demonstrates my setup, which I believe is at least somewhat similar to what you have described. Note that I added a fail-case so that you can see that the subsequent test suites are not executed in the case that the first one fails. To see the passing case, simply go to the global Datasource (named "Table: Values") and change the value 10 to something less than 10.
Hope this helps,
Joseph
P.S. There are other ways of implementing your scenario, but this is the most recommended. If the above does not match your needs, please respond with additional clarification on your specific situation.
Second, if you create test suites within your main scenario, you can manage the execution flow by going to "Execution Options -> Test Flow Logic" under your top-level scenario folder. By configuring these settings, you can control how the tests are run, based on the pass/fail status of the previous tests. This will be a key feature in implementing your testing scenario.
Brilliant.
This was the key to making it work, and it was the level of control I was seeking. The tests run in order, and I have also added some delay to make sure all database calls have completed.
Thank you so much for this excellent help, Joseph. Much appreciated!