Welcome to BJSS (info@bjss.com)

BJSS Capabilities: Testing and Assurance

The discipline of Testing has matured over the last 20 years. BJSS has been at the forefront of this evolution.

Testing is now an integral part of project delivery, helping to de-risk projects from the outset. Assurance of today’s complex solutions requires highly skilled, technically capable practitioners with experience of a range of delivery models. Testers actively engage with the business, architects and developers to ensure testing is a continuous activity, not a discrete event at the end of a project. The BJSS Testing and Assurance practice operates a wide range of toolsets and frameworks, choosing the correct solution that best fits the needs of each engagement. Experienced across a wide range of technical disciplines, including Non-Functional testing, Web & UI, API/Services, Acceptance Testing, Mobile, Audit & Security and Test Health Checks, the team offers access to industry-leading expertise, underpinned by over 20 years of successful enterprise-scale delivery.

Thought Leadership: Testing and Assurance

A tester’s crash course in REST-assured

In recent years, the ‘RESTful API’ has become ubiquitous in the tech world. Reddit and Spotify, amongst others, provide developers with extensive interfaces with which they can implement their own tools and applications, but you already know all of that. It will also come as no surprise that many companies are developing APIs internally, and with that comes the need to test them. In fact, that’s probably why you’re reading this.

This post won’t insult your intelligence by explaining the ins and outs of RESTful APIs. What it will hopefully do is give a basic idea of, and a design pattern for, one of the test tools we can use to test RESTful APIs in a Java environment.



REST-assured is a Java DSL designed specifically to make RESTful interfaces easier to test; it does this by providing some syntactic sugar to decorate our tests and make them more readable and business-like. Let’s get straight in and have a look at a basic test in REST-assured:
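The original code listing has not survived, so here is a minimal sketch of the kind of first test being described. The host, port, `/greeting` endpoint and expected response body are all placeholders; substitute whatever your own API exposes.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import io.restassured.RestAssured;
import org.junit.Before;
import org.junit.Test;

public class GreetingTest {

    @Before
    public void setUp() {
        // A single point of configuration for every test in this class
        RestAssured.baseURI = "http://localhost";
        RestAssured.port = 8080;
    }

    @Test
    public void getGreetingReturnsHelloWorld() {
        given()
        .when()
            .get("/greeting")                       // the request action
        .then()
            .statusCode(200)                        // assertions and
            .body("content", equalTo("Hello, World!")); // expectations
    }
}
```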

The first thing we do is our imports: obviously we need REST-assured, but we also use JUnit as the test runner. Secondly, we use a JUnit ‘Before’ annotation to set up the base URI and port before each test. We could set the port and the base URL manually in each test; however, as we explore REST-assured it will become clear that for scalable test suites, a single point of configuration makes life a lot easier.

As you can see, our test itself is easily readable to the humans amongst us: ‘given()’ defines the request content (later, when we send request bodies, this is where we will define those), ‘when()’ defines the request actions (in this case a GET to the specified URI), and ‘then()’ defines our assertions and expectations.

This is all you need to run a REST-assured test. Go ahead and run the test as you would any JUnit test (if you’re new to Java/JUnit, just click ‘Run’ in your IDE and you should see the output). Unless you have an API configured as per the example, you’ll see a failure, but running against a real endpoint you should see the familiar JUnit test results in your IDE.


The above is all well and good (and I know that these kinds of instructions can be found all over the web), but this isn’t really testing our API; it is simply exercising one endpoint for a ‘happy-path’ outcome. How, then, do we gain confidence in our API through testing?

To do this we have to go back to our testing roots and apply the principles of test case design in the same way that we would for a desktop or web application, but combine them with REST-specific cases. When I’m writing tests I follow my usual process for test case design (happy path, boundary values, equivalence partitioning, state transitions etc.), but I also add the following REST-specific checklist (this may differ depending on the architecture and error code implementation, and it may already be covered by your usual test case design, but it is a good start):

  • Bad Request – send badly formed payload.
  • Invalid Method – make invalid calls to an endpoint (e.g. PUT, POST and DELETE to a GET endpoint).
  • Not Found – make a call on a resource which doesn’t exist.
  • Invalid Auth – send invalid authorisation credentials.
  • Updates – where data is updated cover partial object updates, complete object updates and no update.
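As an illustration, the first three checklist items might be sketched as tests like the following. The `/users` and `/greeting` endpoints, payloads and expected status codes are assumptions; your API’s error-code conventions may differ.

```java
import static io.restassured.RestAssured.given;

import org.junit.Test;

public class NegativeCaseTest {

    // Bad Request: a deliberately malformed JSON payload should be rejected
    @Test
    public void badlyFormedPayloadReturns400() {
        given()
            .contentType("application/json")
            .body("{ \"name\": ")        // truncated JSON
        .when()
            .post("/users")
        .then()
            .statusCode(400);
    }

    // Invalid Method: DELETE against a read-only endpoint
    @Test
    public void invalidMethodReturns405() {
        given()
        .when()
            .delete("/greeting")
        .then()
            .statusCode(405);
    }

    // Not Found: a resource id that does not exist
    @Test
    public void missingResourceReturns404() {
        given()
        .when()
            .get("/users/999999")
        .then()
            .statusCode(404);
    }
}
```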


Structuring Tests

The test case code above is great for testing one endpoint or a small API; however, it is far from scalable, and when we are testing multiple endpoints over one or more APIs we need a better strategy. The following pattern is one I have been using, and one which works for me (your mileage may vary); it is based loosely on the popular ‘page object model’ used in UI testing.

Firstly, each API has a base class which sets up the ports and URIs; it can also be used to hold constants (although a separate constants class may be advisable if this becomes unmanageable). My base classes look something like this:
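The original listing is missing, so here is a sketch of such a base class under the same assumptions as before. The class name, the two constants and the helper method are illustrative only; the original may have held different values.

```java
import io.restassured.RestAssured;
import org.junit.Before;

public abstract class UserApiBase {

    // Constants shared by every test class that extends this base
    protected static final String USERS_ENDPOINT = "/users";
    protected static final String JSON_CONTENT_TYPE = "application/json";

    @Before
    public void configureRestAssured() {
        // Single point of configuration for the whole API's test suite
        RestAssured.baseURI = "http://localhost";
        RestAssured.port = 8080;
    }

    // A common helper: create a user via the API and return its id,
    // e.g. for setting up test data in other test classes
    protected int createUser(String payloadJson) {
        return io.restassured.RestAssured.given()
                .contentType(JSON_CONTENT_TYPE)
                .body(payloadJson)
                .post(USERS_ENDPOINT)
                .then()
                .statusCode(201)
                .extract()
                .path("id");
    }
}
```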

As we can see, this class now sets up the base port and URI and also holds two constants. This means that any test classes which extend it (I tend to use one per endpoint) have access to all of these, and we do not need to worry about updating URLs and ports in multiple places. This allows for greater flexibility and scalability (it is also possible to read these RestAssured parameters in from a config file; more on that another time).

Note also that we can define our common methods in the base class. This is useful when tests need to call other endpoints or APIs in order to set up test data or verify that updates have been successful. With this implemented, our test class now looks like this:
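Again the original listing is lost; a test class built on the base-class pattern just described might look like the sketch below. It assumes a base class exposing the endpoint constant and a `createUser` helper, as in the earlier sketch.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class UsersEndpointTest extends UserApiBase {

    @Test
    public void createdUserCanBeRetrieved() {
        // Test data set up via a common helper inherited from the base class
        int id = createUser("{ \"name\": \"Alice\" }");

        // No hard-coded hosts, ports or paths in the test itself
        given()
        .when()
            .get(USERS_ENDPOINT + "/" + id)
        .then()
            .statusCode(200)
            .body("name", equalTo("Alice"));
    }
}
```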

As you can see, our test class now holds just test code; there are no hard-coded parameters, and the whole thing is a lot cleaner. Each new endpoint can extend our base, and each new API can have its own base. As I said previously, this approach may not work for you, but it has worked very well for me and has scaled nicely with the application under test. In future posts I hope to look deeper into API testing strategy, the levels at which we can test, and the value we add through it.

BJSS Testing and Assurance


  • Business Transformation
  • Specification By Example
  • Health Checks
  • Project Inception
  • Test Architecture
  • Continuous Improvement

Test Engineering

  • Continuous Delivery
  • Test Frameworks
  • Test Innovation
  • Custom Tooling
  • System, Integration, Acceptance
  • Continuous Deployment

Technical Testing

  • Non-Functional Testing
  • Performance Testing
  • Security Testing
  • Infrastructure Testing
  • Resilience Testing
  • Disaster Recovery

Test Leadership

  • Test Management
  • Defect Management
  • Reporting & Metrics
  • Good Practice
  • Team Building
  • Recruitment and Retention

“I’ve never worked with testers before who actually fixed the code as well as reporting the problem!” Development Manager, Global Commodities Trader

A global commodities trader was struggling with a high rate of regression due to a lack of clarity, obscure code-heavy automation, and no traceability of testing back to the original requirement documentation.

BJSS Delivered

Changes to the business requirement capture and specification format, an issue management system, and a bespoke testing framework at the service layer. The concept of Acceptance Testing inside the Sprint was introduced, and the test framework handled traceability to the requirements, version control, execution and results storage.


Using a Domain Specific Language tied to the requirements, analysts were able to see that the testing carried out satisfied the Acceptance Criteria in Sprint. Regressions were caught on check-in or earlier, SIT was performed in the Sprint along with NFT, and UAT was shortened to days rather than months.