This post won’t insult your intelligence by explaining the ins and outs of RESTful APIs. What it will hopefully do is give a basic idea of, and a design pattern for, one of the tools we can use to test RESTful APIs in a Java environment.
REST-assured is a Java DSL designed specifically to make RESTful interfaces easier to test; it does this by providing syntactic sugar that decorates our tests and makes them more readable and business-like. Let’s get straight in and have a look at a basic test in REST-assured:
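A minimal sketch of such a test might look like the following (the base URI, port, endpoint and expected body are illustrative assumptions, not a real API):

```java
import org.junit.Before;
import org.junit.Test;

import io.restassured.RestAssured;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class BasicApiTest {

    @Before
    public void setUp() {
        // Single point of configuration, applied before each test
        RestAssured.baseURI = "http://localhost";
        RestAssured.port = 8080;
    }

    @Test
    public void getUserReturnsOkAndExpectedName() {
        given()                              // request content (headers, params, body)
        .when()
            .get("/users/1")                 // request action
        .then()
            .statusCode(200)                 // assertions and expectations
            .body("name", equalTo("Jane Doe"));
    }
}
```

The static imports are what give us the readable `given()/when()/then()` style in the test body.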
The first thing we do is our imports: obviously we need REST-assured, but we also use JUnit as the test runner. Secondly, we use a JUnit ‘@Before’ annotation to set up the base URI and port before each test. We could indeed set the port and the base URI manually in each test; however, as we explore REST-assured it will become clear that for scalable test suites, a single point of configuration makes life a lot easier.
As you can see, our test itself is quite easily readable to the humans amongst us: ‘given()’ defines the request content (later, when we send request bodies, this is where we will define them), ‘when()’ defines the request actions (in this case a GET to the specified URI), and ‘then()’ defines our assertions and expectations.
This is all you need to run a REST-assured test, so go ahead and run it as you would any JUnit test (if you’re new to Java/JUnit, just click ‘Run’ in your IDE and you should see the output). Unless you have an API configured as per the example, you’ll see a failure, but running against a real endpoint you should see the test results you’re familiar with; in IntelliJ it’ll look something like this:
The above is all well and good (and I know that these kinds of instructions can be found all over the web), but this isn’t really testing our API; it is simply exercising one endpoint for a ‘happy-path’ outcome. How, then, do we gain confidence in our API through testing?
In order to do this we have to go back to our testing roots and apply the principles of test case design in the same way that we would for a desktop or web application, but combine them with REST-specific cases. When I’m writing tests I follow my usual process for test case design (happy path, boundary values, equivalence partitioning, state transition, etc.), but I also add the following REST-specific checklist (this may differ depending on the architecture and error code implementation, and it may already be covered by your usual test case design, but it is a good start):
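As an illustration, a few such REST-specific cases might look like this (the endpoints and the status codes the API returns are assumptions about a hypothetical API under test):

```java
import org.junit.Test;

import static io.restassured.RestAssured.given;

public class UserEndpointErrorTests {

    @Test
    public void unknownResourceReturns404() {
        // A resource id we assume does not exist
        given().when().get("/users/999999").then().statusCode(404);
    }

    @Test
    public void unsupportedVerbReturns405() {
        // DELETE is not supported on the collection in this hypothetical API
        given().when().delete("/users").then().statusCode(405);
    }

    @Test
    public void malformedBodyReturns400() {
        given()
            .contentType("application/json")
            .body("{ not valid json")
        .when()
            .post("/users")
        .then()
            .statusCode(400);
    }
}
```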
The above test code is great for testing one endpoint or a small API; however, it is far from scalable, and when we are testing multiple endpoints across one or more APIs we need a better strategy. The following pattern is one I have been using, and one which works for me (your mileage may vary); it is based loosely on the popular ‘page object model’ from UI testing.
Firstly, each API has a base class which sets up the ports and URIs; it can also be used to hold constants (although a separate constants class may be advisable if this becomes unmanageable). My base classes look something like this:
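A sketch of such a base class, under assumed names (`UserApiBase`, the endpoint constant, the `createUser` helper and all URLs are hypothetical, not from the original post):

```java
import org.junit.BeforeClass;

import io.restassured.RestAssured;

import static io.restassured.RestAssured.given;

public abstract class UserApiBase {

    // Constants available to every test class that extends this base
    protected static final String USERS_ENDPOINT = "/users";
    protected static final String JSON = "application/json";

    @BeforeClass
    public static void configureRestAssured() {
        // Single point of configuration for the whole API
        RestAssured.baseURI = "http://localhost";
        RestAssured.port = 8080;
        RestAssured.basePath = "/api/v1";
    }

    // Common helper: create a user and return its id, so tests
    // can set up their own data without duplicating request code
    protected static String createUser(String name) {
        return given()
            .contentType(JSON)
            .body("{\"name\":\"" + name + "\"}")
        .when()
            .post(USERS_ENDPOINT)
        .then()
            .statusCode(201)
            .extract().path("id").toString();
    }
}
```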
As we can see, this class now sets up the base port and URI and also holds two constants. This means that any test class which extends it (I tend to use one per endpoint) has access to all of these, and we do not need to worry about updating URLs and ports in multiple places. This allows for greater flexibility and scalability (it is also possible to read these RestAssured parameters in from a config file; more on that another time).
Note also that we can define our common methods in the base class; this is useful when tests need to call other endpoints or APIs in order to set up test data or to verify that updates have been successful. With this implemented, our test class now looks like this:
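A sketch of the extending test class, assuming a base class of the kind described above (here called `UserApiBase`, with a `createUser` helper and a `USERS_ENDPOINT` constant; all names are hypothetical):

```java
import org.junit.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class UsersEndpointTest extends UserApiBase {

    @Test
    public void createdUserCanBeRetrieved() {
        // Test data set up via the common method inherited from the base class
        String id = createUser("Jane Doe");

        given()
        .when()
            .get(USERS_ENDPOINT + "/" + id)
        .then()
            .statusCode(200)
            .body("name", equalTo("Jane Doe"));
    }
}
```

Because the base URI, port and base path live in the base class, this class contains nothing but test code.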
As you can see, our test class now holds just test code: there are no hard-coded parameters, and the whole thing is a lot cleaner. Each new endpoint can extend our base, and each new API can have its own base. As I said previously, this approach may not work for you, but it has worked very well for me, and has scaled nicely with the application under test. In future posts I hope to look deeper into API testing strategy, the levels at which we can test, and the value we add from this.
We made changes to the business requirement capture and specification format, introduced an issue management system, and built a bespoke testing framework at the service layer. The concept of Acceptance Testing inside the Sprint was introduced, and the test framework handled traceability to the requirements, version control, execution and results storage.
Using a Domain Specific Language tied to the requirements, Analysts were able to see that the testing carried out satisfied the Acceptance Criteria in the Sprint. Regressions were caught on check-in or earlier, SIT was performed in the Sprint along with NFT, and UAT was shortened to days rather than months.