The monolith: what used to be a bad word is now remembered fondly for its simplicity. Yes, that era often came with a big ball of spaghetti attached, but it was where the testing pyramid shone. Those clean layers of testing, those clear responsibilities split between developers, automation testers, and manual testers.


What Am I Talking About?

Those of you who are not familiar with the testing pyramid will get the idea from the following diagram.

I used to swear by this concept, and would hold teams to account on it almost like a religion. In a monolith-based paradigm it makes so much sense. The unit tests keep the developers honest and ensure they write clean code that adheres to SOLID principles. Service tests were far simpler because you had fewer dependencies – yes, you still had external dependencies, but you had fewer distributed dependencies of your own to deal with. UI tests have always been hard, so keeping these to a minimum was a lifesaver.

There were, of course, arguments about what should be tested, at what level it should be tested, and what level of test coverage was required. Mature teams would talk these out reasonably and come to sensible decisions.


Test tooling was also largely limited to your favourite test framework for your given programming language, which was the same language as your codebase. Then the more adventurous might use Ruby (my fondest language) to write Cucumber-based BDD tests.

For those who wrote more BDD scenarios than unit tests, I would insist that they were ‘doing it wrong’. I would engage in purist battles, arguing that they had ‘misunderstood BDD’ and its place in the development lifecycle. Long-winded, boring Gherkin scenarios, brittleness, builds that took too long, and the slow pace of writing these tests particularly frustrated me.

Now that I look back, the world was so much simpler. The architecture was simple, deployments were simple, and so was the tooling.

Now the array of tooling in the test space is immense. Here are some of the levels of tooling you will need to look into:

  • Contract testing e.g. Pact
  • Browser testing e.g. BrowserStack
  • Browser-driving libraries e.g. Puppeteer
  • Unit testing e.g. Jest
  • End-to-end (E2E) testing e.g. Cypress
  • Load testing e.g. k6
  • Security testing e.g. OWASP ZAP
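
To give a flavour of the first item on that list, here is a minimal, hand-rolled sketch of the idea behind consumer-driven contract testing. This is not the real Pact API – just the concept: the consumer records the shape of the response it depends on, and the provider's actual response is checked against that expectation.

```python
# A hand-rolled sketch of consumer-driven contract testing (the idea
# behind tools like Pact), not any real library's API.

def matches_contract(contract: dict, response: dict) -> bool:
    """Check that every field the consumer relies on is present
    and of the expected type in the provider's response."""
    for field, expected_type in contract.items():
        if field not in response or not isinstance(response[field], expected_type):
            return False
    return True

# The consumer declares what it needs from a hypothetical GET /users/42 ...
consumer_contract = {"id": int, "name": str, "email": str}

# ... and the provider's actual response is verified against it.
provider_response = {"id": 42, "name": "Ada", "email": "ada@example.com", "age": 36}

assert matches_contract(consumer_contract, provider_response)        # extra fields are fine
assert not matches_contract(consumer_contract, {"id": 42, "name": "Ada"})  # missing email
```

The point is that the consumer's expectations, not the provider's full schema, define the contract – which is what lets the two sides evolve independently.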

Test Managers Or Test Architects Or Principal Software Developers In Test?

Once there was a world where a test manager would be your first port of call to sign off the test approach, the testing mechanisms, or the amount of a certain type of test.


Now testing is far more complex. Environments are no longer a small fixed number; in fact, with ephemeral environments they might be effectively infinite. There will be environments for different uses. Test data can also be incredibly complex.

Anonymising and pseudonymising large amounts of data, whilst holding on to the data characteristics that allow you to effectively run a broad range of automated tests covering edge cases, is no longer trivial. The methods we use to keep robots from gaining unauthorised access to our products are the same methods that will block access for our own automated tests. Creating solutions to these issues takes a good amount of engineering expertise.
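
As a small illustration of what "holding on to data characteristics" means, here is a sketch of pseudonymising an email address: the personally identifying local part is replaced with a stable hash, while the domain is kept so that domain-specific edge cases still get exercised. The field names are illustrative, not from any particular system.

```python
# A minimal sketch of pseudonymising test data while preserving the
# characteristics tests depend on. Field names are illustrative.
import hashlib

def pseudonymise_email(email: str) -> str:
    """Replace the local part with a stable hash; keep the domain so
    domain-specific edge cases (validation, routing) still exercise."""
    local, _, domain = email.partition("@")
    token = hashlib.sha256(local.encode()).hexdigest()[:12]
    return f"{token}@{domain}"

record = {"name": "Ada Lovelace", "email": "ada.lovelace@example.co.uk"}
safe = {**record, "email": pseudonymise_email(record["email"])}

# The same input always maps to the same pseudonym, so referential
# integrity across tables survives, and the domain is untouched.
assert pseudonymise_email(record["email"]) == safe["email"]
assert safe["email"].endswith("@example.co.uk")
```

A deterministic mapping like this is what lets joins between pseudonymised tables keep working, which is precisely why the job is harder than simply scrambling the data.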

So who does this work? The engineers? Test engineers? Who then orchestrates and plans this work? Who gains the deep understanding of the data and architecture needed to validate the testing approaches? In practice it is often a mixture of architects, test engineers, and developers. I only occasionally come across test engineers who are comfortable with this level of complexity. Test managers have been the go-to people for this division of responsibility, but I am increasingly seeing their role evolve into the more technical engineering space, more aligned with the role of an architect. The lack of someone at this level has caused challenges in some of the teams I have been involved in, and yet it is something I have seen covered really well by a principal software developer in test (SDET).

Automation Testing As A Bolt-On

Having testers who know a little automation has become a common culture. QA analysts come into your organisation and are encouraged to learn scripting languages to write Selenium, Cucumber, or Cypress tests. Their background is often not software engineering, so the quality of their code is mixed: some are happy to create the basic scripting needed to make things work, while others are keen to better themselves. Some really talented folks get so into engineering principles and coding elegance that they drop the QA role to become more involved in software engineering.

The approach of having automation QAs write basic front-end test scripts fitted really well into the test pyramid. Only a small number of happy-path journeys were covered, which meant that even slow test runs could finish in five minutes. It also meant that the sometimes sloppy work of less experienced test engineers did little lasting harm: a few badly written Cucumber tests would not create long-term technical debt, because there were so few of them.

Something changed….


Knowledge of cloud platforms is now a must. Knowing how to inspect logs of data flowing from containers and serverless functions, through queues, to data sources, and how to identify a potential fault by analysing that data, is a far more common ask. Of course, there is still a need for exploratory testing or manual browser testing (this can be overlooked by more technically focussed SDETs), but on its own this will struggle with newer development styles where fast change in a codebase is prominent.
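
To make "identifying a fault by analysing data" concrete, here is a toy sketch of the kind of log work this involves: following one request's correlation id across structured logs from several services and seeing at which hop the flow broke down. The services, field names, and log lines are all invented for illustration.

```python
# A small sketch of fault-hunting in structured logs: follow one
# request's correlation id across services and find where it failed.
# All services, fields, and messages here are illustrative.
import json

raw_logs = [
    '{"service": "api", "corr_id": "abc", "event": "received", "ts": 1}',
    '{"service": "queue", "corr_id": "abc", "event": "enqueued", "ts": 2}',
    '{"service": "worker", "corr_id": "abc", "event": "error", "ts": 3, "msg": "db timeout"}',
]

events = [json.loads(line) for line in raw_logs]

# Reassemble the request's journey in time order...
trail = sorted((e for e in events if e["corr_id"] == "abc"), key=lambda e: e["ts"])

# ...and the failing hop tells you where to look.
failures = [e for e in trail if e["event"] == "error"]
assert failures and failures[0]["service"] == "worker"
```

In real systems the logs come from a platform like CloudWatch or an ELK stack rather than a list of strings, but the skill being asked for is the same: correlate, order, and narrow down.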
DevOps culture is a large driver of this. The traditional ops model, where you create a build and hand it over to be deployed to production, has long been forgotten. The team now has full power to set up the infrastructure as it suits them. This is a challenge: just as there is no longer a ‘standard testing pyramid’, there is also no longer a ‘standard architecture’. It has to be created by the whole team in collaboration, and that includes the test folks.
It is important that whoever owns much of the QA responsibility speaks into this process. They need to make sure the design caters to the testing needs as much as possible. How can various components be mocked out? How can the system be monitored for anomalies? How can it be stood up as a stable platform for testing, resilient to the fast pace of change in a codebase?
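
The "how can components be mocked out?" question is mostly answered at design time. A sketch of the idea, with an invented payment gateway as the external dependency: if the code takes its dependencies as parameters rather than constructing them itself, a test can substitute a stub for the real thing.

```python
# A sketch of designing for mockability via dependency injection.
# The payment gateway and its interface are invented for illustration.

class StubPaymentGateway:
    """Stands in for the real external gateway in tests: no network,
    no credentials, deterministic answers."""
    def charge(self, amount_pence: int) -> dict:
        return {"status": "approved", "amount": amount_pence}

def checkout(basket_total_pence: int, gateway) -> str:
    """Business logic depends only on the gateway's interface,
    not on any concrete implementation."""
    result = gateway.charge(basket_total_pence)
    return "order-confirmed" if result["status"] == "approved" else "payment-failed"

# The test stands the component up against the stub, not the network.
assert checkout(1999, StubPaymentGateway()) == "order-confirmed"
```

If the design instead reached out to a hard-coded client deep inside `checkout`, this substitution would be impossible – which is exactly why QA needs a voice when the architecture is being shaped.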

Cohesive Tools

Going back to the practical nitty-gritty of engineering and running test pipelines then….

Beyond the standard set of test libraries and tools your team adopts, the need for seed data, environments, infrastructure, and identity solutions to make immutable test runs robust is obvious. Much of the time and energy spent in the QA space goes into building these. The tip of the ice-cream cone is no longer the part taking the least effort; it is taking the most!
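
One small piece of that effort, sketched: seed data for an immutable test run should be deterministic, so that an ephemeral environment can be torn down and rebuilt with byte-identical data every time. The record shape below is invented for illustration.

```python
# A sketch of deterministic seed data: a fixed seed makes every
# rebuild of an ephemeral environment reproducible for tests.
# The record shape is illustrative.
import random

def seed_users(count: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # fixed seed => identical data every run
    return [
        {"id": i, "name": f"user{i}", "credit_pence": rng.randint(0, 10_000)}
        for i in range(count)
    ]

# Two independent runs produce exactly the same seed data, so a test
# that passed against one environment passes against its replacement.
assert seed_users(5) == seed_users(5)
```

Varied, edge-case-rich values from a seeded generator give broader coverage than hand-written fixtures, without giving up repeatability.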

Investing in tooling that achieves standard patterns and approaches is one way this process can be sped up. Currently, tooling in this space can mean stringing together a lot of fast-changing tools. Given the strain on test engineers mentioned earlier, this may be possible, but it is likely to be somewhat time-inefficient.

Having pre-set-up examples of these tools running, or documenting your toolchain well so that new folks know exactly how to approach these common patterns, will certainly help. Just as there are landing-zone projects for AWS or Azure, a test landing zone may also be worthwhile. It is not one-size-fits-all, and some bespoke tooling will always be needed.

The Strain On Test Analysts

The depth of platform and engineering knowledge required to deliver testing solutions in this kind of environment is fairly advanced. To some degree it goes beyond the original engineering complexity, especially when you consider the creative ways in which you need to inject and manipulate data and route traffic. This implies that a modern QA also needs to be a competent software engineer; some of the best SDETs I have met have been.

That said, as I suggested before, many come into QA automation from a test analyst role, which can often be very product focused. I have worked with some talented test analysts who were amazing at fleshing out requirements and could own and contribute to tickets. This was a huge benefit to the whole team, overlapping with a BA and, at times, a project manager when their ownership of ticket flow and team process came into play.

These same people do not always have an ‘engineering’ mindset, which is fine as long as they do not shy away from technical conversations. Being interested in environments, data, and data flow is not always the same as being an engineer. Delegating many of the engineering tasks while still taking ownership of the QA process is a valid way of taking hold of these challenges.

Is the QA engineering focused or product focused? This is a tension that can now emerge.

We need to make sure that, as part of our teamwork, we take time to understand our QA person’s strengths and discuss with them which approach would suit them better, e.g. the engineering approach or the QA product/process-ownership approach. Having decided, you may pick up on some potential gaps in your team. Identifying these gaps is a good thing and can help you delegate either ownership or engineering tasks at whatever level fills them.

One thing is clear: you should not leave such a valuable person hanging between the two approaches. I have seen this lead to a lack of direction, a broken test process, and demotivated QA staff.


To make sure that teams can handle this new era of the ‘inverting of the test pyramid’, company- or team-level investment in streamlining solutions will be needed. A brilliant test function needs not just talented individuals but tried-and-tested methods of solving common problems. In the new world of data and cloud platforms, it is now our job to create and solidify these methods until we can again be as confident in our approach as we were when the ‘test pyramid’ was the source of all testing truth.

Now seems a great time to challenge our thinking and play with different approaches. There may be a tricky sweet spot to reach, but let’s go and find it!