By Chris Stewart
Chris combines a passion for software development and the people who do it with a pragmatic delivery focus. He has helped to deliver products in many sectors, working with multinational corporations, medium sized businesses and even helped found a couple of start-ups. He enjoys learning and talking about all aspects of the craft and science of software, but is particularly interested in things that revisit and challenge assumptions about the ways we design, build and deliver.
In his spare time he works hard at being a mediocre club runner and particularly enjoys running down fells. He runs up them with less enthusiasm.
This is the first in a series of articles that will share the opinions of BJSS engineers on a number of topics, opinions both complimentary and cautionary. For the first topic of containerisation, we’ve surveyed engineers from a range of BJSS locations and projects, working in industries such as financial services, public sector and retail.
What we heard
If you spend any time sitting amongst development or engineering teams, it won’t be long before you overhear discussions about Containers and the various surrounding technologies and patterns. It’s a big topic with many different angles. Adoption and practice vary across the organisations we work with, which led us to a question:
What is the BJSS view on Containerisation?
We learn at pace about technologies and trends by being at the coal face of large-scale deliveries. Any opportunity to plumb the depths of BJSS’s practitioners’ real-world experience is worth taking.
So we asked…
We talked with our technical people across a range of different roles: platform engineers, developers, testers and architects. We interviewed those from infrastructure and sysadmin backgrounds as well as those with a background in software engineering.
The answer turned out to be a familiar one. As is often the case, the journey to the answer proved far more valuable.
With any hot topic in software there’s often an implicit assumption that we’re all talking about the same thing. Often, we’re not. By ‘containers’ we can mean any way of packaging and deploying our software: Zip files, Virtual Machines, etc.
But when we’re talking about Containerisation, we mean something more specific: a way of bundling up our software, its runtime, tools and libraries to create an executable package. As far as our running software is concerned, a container looks exactly like a real machine or full Virtual Machine. We trade the stronger security isolation of a Virtual Machine for a smaller, faster-starting, less resource-heavy and easier-to-manage package.
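That ‘executable package’ can be described in a few lines. As a minimal sketch, here is a hypothetical Dockerfile (the base image, file paths and port are illustrative, not from any particular project):

```dockerfile
# Bundle the runtime, libraries and application code into one image.
FROM eclipse-temurin:17-jre      # the runtime our software needs
WORKDIR /app
COPY build/libs/app.jar .        # our software and its libraries
EXPOSE 8080                      # hypothetical service port
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The same image then runs unchanged on a laptop, a CI agent or a production cluster — which is where the consistency benefits our respondents describe come from.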
Containers, Containerisation and Container Management have been popularised by the fashion for Microservices – splitting our software up into many small services meant that managing the proliferation of Virtual Machines became a significant overhead.
Containerisation: Three opinions forged from our experience
1. Containers are the future
While today’s applications draw on many technologies, Containerisation is being welcomed as a cornerstone of their design. It’s not surprising that the technology has made its way into some of our largest, most critical engagements.
“For any medium or fairly large modern application; container and container management should be the cornerstone of any architectural design right from the beginning.”
The simplicity of consistent environments came up as a major selling point for lots of our projects.
“Containers provide a consistent deployment across all environments, effectively it’s the same thing deployed in each environment.”
“It’s great for developers as it saves polluting your dev env, and great for lightweight/stateless components.”
“I like them, they make life easier.”
Consistency carries other benefits when applied across many teams.
“This will improve security and increase efficiency by 25% plus.”
The Service Mesh pattern fits well with containers. We’ve found this pattern brings further consistency to logging, tracing, alerting, authorisation and service lookup. Service Meshes also ease adoption of more sophisticated deployment and resiliency patterns – such as Blue/Green Deployments, Circuit Breakers and Retry.
“Pushing common patterns to the mesh enables developers to spend more time on business logic.”
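As a sketch of what ‘pushing common patterns to the mesh’ looks like in practice: a retry policy can be declared once in mesh configuration instead of being coded into every service. Assuming an Istio-style mesh, a hypothetical service might declare:

```yaml
# Hypothetical Istio VirtualService: retries are handled by the mesh,
# not by application code. Service name 'orders' is illustrative.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
      retries:
        attempts: 3                    # retry a failed call up to 3 times
        perTryTimeout: 2s              # cap each attempt
        retryOn: 5xx,connect-failure   # which failures trigger a retry
```

The application code contains no retry logic at all; the same approach applies to the other cross-cutting concerns listed above.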
Many of our customers have concerns (or rules) about the level of lock-in cloud PaaS offerings require.
“Being cloud vendor agnostic was also major selling factor.”
Containers and orchestration offer a middle ground, allowing customers to benefit from PaaS offerings without jumping in with a provider. One mature approach in highly regulated environments is to mandate that core Systems of Record are cloud agnostic while allowing Systems of Innovation to iterate quickly using PaaS.
Clearly, lock-in should always be a consideration with any software build, especially one that is expected to be a long-term investment. However, lock-in is only one aspect of the decision making… which leads us to our next opinion.
2. Containers add unnecessary complexity
One thing came up again and again – concern about the complexity of containerisation and the surrounding ecosystem, specifically container management.
Sometimes it’s called out directly:
“We removed all of the complexity from our code and put it into YAML files…”
“…risk of ending up with a way over-engineered solution, something that can be very complex to manage…”
Sometimes the impact of that complexity shows up instead as confusion and uncertainty.
“What’s clear to me however is that people aren’t understanding the basics of orchestration.”
“I personally do not understand the security implications.”
“This stuff seems to be taking a lot of time to build.”
Container Management solutions add complexity: teams must be upskilled, new operational processes established, and the weight of many new APIs absorbed. And software is not really proven until it is in production, so to prove you are genuinely vendor agnostic you need production systems running in different clouds – further adding complexity to upskilling and governance.
This complexity can outweigh the benefits. Even where it makes sense, it is not a static area of technology; today’s obvious choice may be tomorrow’s legacy.
There is clearly a battle of hearts and minds being fought as people see and feel the benefits but are struggling with the incidental challenges a technology brings.
3. Containers are already obsolete
While many respondents clearly see Containerisation as the future, there was another strongly voiced opinion – that it’s already been consigned to the history books.
“Think the next evolution for some of these would just be to go serverless.”
“I am a fan of serverless – I certainly think it’s easier than running your own Kubernetes Cluster.”
“My preference would be cloud native serverless (lambdas/azure functions).”
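To make the contrast concrete: a serverless function strips deployment down to a single handler, with the provider owning the OS, runtime patching and scaling. A minimal sketch of an AWS Lambda-style handler in Python (the event fields and function purpose are hypothetical):

```python
import json

def handler(event, context):
    """Hypothetical order-lookup function. There is no container image,
    no Dockerfile and no cluster to manage: the cloud provider owns
    everything below this function."""
    order_id = event.get("order_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"order": order_id}),
    }
```

There is nothing to patch and nothing long-running to orchestrate, which is exactly the appeal the respondents describe.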
We’ve already said that avoiding cloud lock-in is a key selling point of Containerisation. But for some, the value of agility now outweighs the cost of lock-in later.
“Portability is good, but truly serverless is better.”
While the term ‘Serverless’ carries a degree of ambiguity, it’s clear from the responses that there is a lot of positivity for NOT having access to the underlying OS.
“Containers feel a bit ‘old world’. Architects should aspire to serverless architectures negating the need for containers and the associated container management.”
Containers bring benefits. They provide a well adopted, uniform way to reason about, package and manage long-running software services.
Unlike cloud provider-specific technologies, they’re built on open standards with open-source and a healthy vendor ecosystem. Choosing a vendor-agnostic route is often a good strategy. But containers also bring complexity on two fronts.
Firstly, the vendor-agnostic route means you must own and run the pieces that would otherwise be run by your cloud provider. For core strategic systems this cost may be worth bearing, but in many cases this is low value, undifferentiated work that is preventing you from tackling real business problems.
The second source of complexity is less obvious. Because containers promise to make microservice deployment simple, there is a tendency for some teams to build ultra-simple components that aren’t meaningful on their own. Just because you can build a massively distributed system doesn’t mean that you should. The complexity of the overall solution does not disappear, it is just shifted to the container management and orchestration systems. Container management becomes seen as a problem, when it is merely a symptom.
Finally, there is an increasingly popular architectural pattern of leaning as hard as possible on cloud-provider services – ‘Serverless’. The potential future debt of vendor lock-in is traded against the ability to focus right now on the core pieces that differentiate your business. Containers, which still carry a cost of management and patching, play a much smaller part in this future. Our research has painted a familiar picture of technology adoption – there are benefits and pain.