Friday, 6 August 2010

Functional Testing REST Web Services With SoapUI

General observations about SoapUI.

  • It is an amazingly high quality open source test tool. The use of Groovy scripts, XPath, and dynamic property expansion creates a very robust development environment. Since we are not Java programmers, it was initially a bit confusing to sort out where Groovy stopped and Java began; a good book, Groovy in Action, solved that problem.
  • We found test case development a bit more tedious than expected, with test cases consisting of a dozen steps and perhaps a dozen assertions within each step. It was just a lot of detail, with lots of mouse clicking to drill into things. We wrote some helper routines to consolidate many of the assertion checks, which helped to some extent.
  • We wound up buying the Pro version of SoapUI because we needed some technical support, and this turned out to be a great decision. The support was excellent, we were more productive using the Pro version, and the project remained compatible with the open source version of SoapUI as long as we stayed away from certain Pro features.
  • The SoapUI documentation was quite well written, but initially it was a challenge to sort out which things applied specifically to REST based testing.

Solution 
  1. Created a SoapUI project, added a REST service within the project, defined each of the 100 resource paths within the service, and created an empty test suite.
  2. Used test suite properties as a set of global variables. Some properties stored dynamic data that was written and read by various test steps, while other properties contained static content representing the possible values for customer records. Referencing these properties in a variety of ways throughout the test suite was simple. A few examples:
    • Simple Property Expansion: ${#TestSuite#password}
    • Dynamic Property Expansion, where multiple comma separated values are stored in one property: ${=context.testCase.testSuite.properties["accountNumber"].value.split(",")[1]}
  3. We made extensive use of random numbers to generate values that had to be unique. Yes, there was a remote possibility of a test case failure caused by a random number collision, but from a practical perspective this was a non-issue (see the sketch after this list).
  4. We structured our test suite to run sequentially, with the following initialization routines:
    • Generate unique “random” values and store in properties
    • Use REST calls to populate the database with sufficient framework data to register a customer. For example, define the various valid address types such as home, business, other, etc.
    • Use REST calls to register a set of customers using characteristics pulled from test suite properties. These customers are not modified during test execution and are used exclusively for “get” and “search” customer related testing.
    • Define a series of test cases that register customers for subsequent record updating. These can be called from other test cases as needed. Note that we encountered problems exporting and importing test cases that used the Goto Test step (they reference a GUID, not the name of the test case), so we chose to call the test cases from a Groovy script:
    • def testCase = testRunner.testCase.testSuite.testCases["Another TestCase"]
      def properties = new com.eviware.soapui.support.types.StringToObjectMap()
      def async = false
      testCase.run( properties, async )
  5. Once the basic plumbing was in place, we needed to grind out hundreds of test cases that, when completed, represented thousands of REST API calls and many thousands of test assertions.
  6. Perhaps a note on assertions is warranted. There are a multitude of ways to validate that you get the expected values in a response using SoapUI. We utilized the following methods:
    • Contains assertion – On the surface this is a simple text comparison, but we also used regular expressions to accept multiple values such as (?s).*ALREADY.*|.*SUCCESS.* as well as property expansions.
    • Script Assertion – This was our most common assertion tool; it allowed us to use the XmlHolder object provided by SoapUI to easily query and assert conditions within a Groovy script, such as:
      def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
      def holder = groovyUtils.getXmlHolder( messageExchange.responseContent )
      def xPath = "//statusLabel[../SystemLabel='" +
          context.testCase.testSuite.properties["SystemLabel"].value.split(",")[1] +
          "' and ../statusLabel='SUCCESS']"
      assert holder[xPath] == "SUCCESS"
    • XPath Match – This was a nice tool in that I could use an XPath statement to grab a parent node, then compare expected results using both wild cards and property expansions for the child elements. The only gotcha I found was that if I wasn’t very careful with my XPath statement and multiple parent blocks were returned, XPath Match would compare only the first instance, which wasn’t always what I wanted. But for simpler scenarios, this was definitely a very quick and dirty way to check a bunch of response values.
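
To make the property handling concrete, here is a minimal sketch of the kind of setup Groovy script step we ran at the start of the suite; the property names customerLogin and accountNumber are illustrative, not our actual ones.

    // Setup Groovy script step: seed "random" unique values into test suite properties.
    // The property names used here are illustrative only.
    def suite = testRunner.testCase.testSuite

    // A value that is unique for all practical purposes (see the note on collisions above).
    def rand = new Random()
    suite.setPropertyValue( "customerLogin", "user" + rand.nextInt( 1000000 ) )

    // Later steps reference it as ${#TestSuite#customerLogin}, or from a script:
    def login = suite.getPropertyValue( "customerLogin" )

    // Comma separated properties split the same way as in the dynamic property
    // expansion example above (assumes accountNumber holds something like "a1,a2,a3").
    def secondAccount = suite.properties["accountNumber"].value.split( "," )[1]
    log.info "Registering ${login} against account ${secondAccount}"
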
As most of our assertion testing involved comparing large sets of known values stored in test suite properties to API responses, we wrote some rather elaborate helper routines that simplified the assertion process. We did run into one obstacle in that we couldn’t find a clean way to “include” these routines in the assertions. The Pro version has a common library capability, but we didn’t want to lock ourselves into that. The open source version does allow a Jar file to be placed in the soapUI\bin\ext folder, which would have addressed the issue, but we wanted the test suite to run on a generic install of SoapUI.

So, the utility routines were pasted into assertions throughout the suite (yuck). Just as the test suite was being finalized, we discovered a bug in the helper routines (they were not zero padding MD5 hash values) and were faced with editing 147 instances of the helper routines in the test suite (nightmare!). Fortunately, we were able to pull the entire SoapUI project into XMLSpy and globally edit the routines. This notion of editing the project from outside SoapUI might open up some interesting productivity opportunities.
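
For what it’s worth, the pasted-in routines followed a pattern roughly like the hypothetical sketch below; the closure name checkFields and the field names are made up for illustration. The idea is to build an XmlHolder for the response, then assert a map of expected values (typically pulled from test suite properties) against same-named elements.

    // Script assertion with an inline helper closure (illustrative names only).
    def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
    def holder = groovyUtils.getXmlHolder( messageExchange.responseContent )

    // Assert each expected value against the first matching element in the response.
    def checkFields = { expected ->
        expected.each { name, value ->
            def path = "//" + name
            assert holder[path] == value : "Mismatch on " + name
        }
    }

    checkFields( [
        statusLabel : "SUCCESS",
        addressType : context.expand( '${#TestSuite#addressType}' )
    ] )

An inline closure at least keeps the helper next to the assertion that uses it; the downside, as noted above, is that fixing a bug means touching every pasted copy.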

We are now moving on to using SoapUI to load test this implementation and I will share our experiences with this effort in a week or so.

Wednesday, 4 August 2010

SAP CATT - Computer Aided Test Tool


Just sharing my experiences with CATT


Simply put, anyone in SAP can use this tool as long as you have access to the tcode. I have used CATT in at least three different SAP modules. Typically, in SAP projects, CATT execution is a favorite task for "technical" associates. When the technical resources are either not available immediately or their plates are full with other important tasks, functional associates can simply jump into this activity, since there is no coding or programming involved. This tool has been a great gift from SAP, since I am a functional person and could not do the coding! I can remember at least 30 or 40 occasions where I used CATT to get this and that done.


Below are a few examples:
1. Created multiple sales orders (in excess of 200) in a matter of minutes for the purpose of end user training
2. Created multiple purchase orders (in excess of 200) in a matter of minutes for the purpose of end user training
3. Created deliveries for the sales orders in a matter of minutes for the purpose of end user training
4. Created config. entries when the volume of records was large; I remember once entering 900 records
5. Extensively used it in preparing transactional data for the purpose of archiving; it worked impeccably
6. Loaded master data (example: material groups)


Note: Upon execution of CATT, it is very possible that some records will fail, and these have to be addressed manually. SAP really needs to enhance this area of CATT further, because there is no easy way of identifying the failed records. One workaround is simply to download the results into Excel and, using Excel's sort feature, identify the failed ones and deal with them manually.


CATT stands for Computer Aided Test Tool.


Although CATT is meant as a testing tool, many SAP users now use CATT frequently to upload vendor master data and to make changes to other master records.


SAP consultants and ABAPers tend to use it for creating test data.


With CATT, you don't have to create any ABAP upload programs, which saves development time. However, you still have to spend time mapping the data into the spreadsheet format.


The transactions run without user interaction. You can check system messages and test database changes. All tests are logged.


What CATT does is record you performing the actual transaction once.


You then identify the fields that you wish to change in that view.


Then you export this data to a spreadsheet and populate it with the data required.


This is then uploaded and executed, saving you from keying in the data manually.


To perform CATT, it has to be enabled in your production environment (your systems administrator should be able to do this in transaction SCC4).


You will also need access to your development system to create the CATT script.


User Guide for Data Upload


Although CATT is primarily a testing tool, it can also be used for the bulk upload of data. CATT works like a real user actually keying data into the SAP screens: you prepare a set of data that needs to be input into the system and execute what is called a test case, and CATT does the boring task of keying it in for you.


Over-all procedure


The over-all procedure to upload data using CATT is as follows:
· Creation of the CATT test case & recording the sample data input.
· Download of the source file template.
· Modification of the source file.
· Upload of the data from the source file.


Details of each step are provided in the following paragraphs.


Detailed Procedure


Creation of the CATT test case:


Creation of the test case is completed as follows:
· Execute Transaction SCAT
· Name the test case. The test case name must start with “Z”. It is also good practice to include the transaction code in the test case name (e.g. Z_IE01_UPLOAD for the upload of equipment).
· Click the “Record” button.
· Enter the transaction code (e.g. IE01)
· Continue recording the transaction. Ensure data is entered into every field that is to be used during the upload.
· Save the test case.


Download the source file template


Download of source file template is conducted in two stages as follows:
· Creation of import parameters:
· Within transaction SCAT, double click on the TCD line in the “Maintain Functions” screen.
· Click the Field List button (Field list is displayed).
· For every field for which you wish to upload data, double click in the “New field contents” column (this creates an import parameter).
· In the Maintain Import Parameter Pop-Up:
· Delete the default value if not required.
· Press Enter
· The “New field contents” column now contains the character & followed by the field name (e.g. &EQART). This is the name of the import parameter.
· Repeat this for every field (in every screen) to be uploaded.
· Back out and save the CATT test case
· Download of source file template:
· Use the path GOTO -> Variants -> Export Default
· Select path and file name (e.g. C:\TEMP\Z_IE01_UPLOAD.TXT)
· Click Transfer


Modify the source file


The downloaded source file template is now populated with the data that is to be uploaded. This is completed as follows:
· Using Excel, open the tab-delimited text file.
· Do not change any of the entries that already exist.
1st row contains the field names.
2nd row contains the field descriptions.
3rd row displays the default values which are set in the test case.
4th row contains a warning that changing the default values in the spreadsheet has no effect on the actual default values.
· The data to be uploaded can be entered in the spreadsheet from row 4 onwards (delete the 4th row warning & replace it with data for upload).
· Save the file as a Text file (Tab delimited).
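
Purely as a schematic illustration of the row layout described above (the second parameter and all the values below are placeholders, not taken from a real recording, and the actual exported file may contain additional columns), a populated source file for the Z_IE01_UPLOAD example would be organised roughly like this:

    &EQART                  &SECOND_FIELD            <- row 1: field names (import parameters)
    (description of EQART)  (description of field)   <- row 2: field descriptions
    (recorded default)      (recorded default)       <- row 3: defaults set in the test case
    record 1 value          record 1 value           <- row 4 onwards: the data to upload
    record 2 value          record 2 value
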


Upload data from the source file


Uploading the data is completed as follows:
· Execute the CATT test case
· In the Execute screen:
· Set processing mode to Errors or Background (your choice).
· Set variants to External from file.
· Click the Choose button and select the file to be uploaded.
· If uploading to another client, click the Remote execution button and select the RFC connection to the required client.
· If uploading to the current client, click the execute button.