BJSS proposed migrating all application build/deployment services to AWS. This included the Jenkins build master and slaves, a Selenium grid, a SonarQube test environment, the Crucible/Fisheye code review tools, the Git code repository, the Nexus artefact repository and Puppet configuration management.
The infrastructure was designed with scalability and elasticity in mind, with a view to enabling deployment of full staging environments and accommodating future requirements. An important aspect of the engagement was to have all the infrastructure documented as code, using Terraform, to facilitate tracking of the state of the infrastructure, recreating environments as part of Disaster Recovery, and auditing deployed infrastructure.
All infrastructure deployments take place from a centralised Jenkins build server to ensure that important information is held centrally, and to prohibit the ad-hoc deployment of infrastructure not defined in code. As with all e-RS application code, deployment code is peer-reviewed and undergoes requisite testing.
Security is integrated using AWS features such as Network Access Control Lists, Identity and Access Management, Security Groups and Encryption, ensuring that data is secure and access rights are correctly controlled.
The BJSS solution uses the following AWS services:
|CloudTrail||Logs all API Calls, providing an audit trail for all deployment activities.|
|CloudWatch||Monitors several metrics to ensure only authorised security changes are applied.|
|EBS||Provides file storage for Git instances; should the instance terminate, a new instance is provisioned and the volume is automatically re-attached and mounted.|
|EC2||On-demand EC2 instances combined with Auto-Scaling Group policies provide scalability and elasticity.|
|IAM||IAM roles enforce security policy, ensuring only authorised DevOps staff can deploy/release code.|
|NACL||Controls access between public and private subnets, enabling deployment of publicly accessible “Bastion” hosts whilst protecting services in private networks.|
|RDS||The back-end data repository for all services. The solution takes advantage of features such as Multi-AZ high availability to meet project SLAs.|
|Route53||Provides DNS management for the environments, enabling simpler and more unified deployment code.|
|S3||Used to store backup snapshots, Jenkins build histories, and log files.|
|Security Groups||Provide an additional layer of fine-grained security controls by restricting network traffic between specific services on specific ports.|
|VPC||Provides security and network isolation.|
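The network layout the table describes can be sketched in Terraform. This is a minimal, illustrative fragment only; the resource names and CIDR ranges are hypothetical, not the actual e-RS configuration.

```hcl
# Hypothetical sketch of the network layout described above.
resource "aws_vpc" "build" {
  cidr_block = "10.0.0.0/16"
}

# Public subnet hosts the "Bastion"; private subnet holds internal services.
resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.build.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.build.id
  cidr_block = "10.0.2.0/24"
}

# Security Group permitting SSH to private instances only from the public
# subnet, illustrating the fine-grained, port-specific controls above.
resource "aws_security_group" "internal_ssh" {
  vpc_id = aws_vpc.build.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [aws_subnet.public.cidr_block]
  }
}
```

In practice NACL rules would sit alongside these Security Groups, giving the subnet-level and instance-level layers of control the table lists separately.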
Third-party applications and solutions were also used. Terraform allows the AWS infrastructure to be expressed as code, enabling infrastructure to be easily replicated, moved in the case of a failure, and tracked. Terraform reduced the effort required and ensured consistent quality of code across the organisation. Packer creates AWS machine images (AMIs) and allows their configuration to be expressed as code. While Terraform is responsible for creating an instance (virtual machine) from the AMI, and Puppet is responsible for deploying the application, Packer is responsible for getting the machine image to a state where Puppet can be run.
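The hand-off between Packer and Terraform can be illustrated with a small Terraform fragment: look up the most recent AMI produced by a Packer build, then launch an instance from it. The tag name, owner filter and instance type here are assumptions for illustration, not the project's actual values.

```hcl
# Illustrative only: find the latest AMI from a (hypothetical) Packer build
# tagged "jenkins-slave". Packer has already baked the image to the point
# where Puppet can run; Puppet then deploys the application on the instance.
data "aws_ami" "jenkins_slave" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "tag:Role"
    values = ["jenkins-slave"]
  }
}

resource "aws_instance" "build_slave" {
  ami           = data.aws_ami.jenkins_slave.id
  instance_type = "c4.xlarge" # sized generously at first, optimised later
}
```

Because the AMI lookup is data-driven, re-running a Packer build and then `terraform apply` is enough to roll new images through the estate.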
The successful delivery achieved:
Realise value as early as possible
To establish confidence in the AWS platform, and the reputation for accelerated deployment, it was important to demonstrate the migration of one service as quickly as possible without disruption to the DevOps team. This was achieved by the end of the second (two week) Sprint.
Focus on outcomes
The primary driver was reduced build time. Rather than spending time attempting to accurately size build slaves, with the associated risk of under-specifying, the initial build employed the largest “C” class EC2 instances and retrospectively optimised once performance baselines were established.
BJSS based the solution on an almost entirely Serverless/PaaS stack within AWS, with Terraform managing infrastructure provision. The use of Serverless technologies continued BJSS’ work with DVSA to modernise its approach to application delivery and new ways of working. Using tools such as AWS Lambda, API Gateway, Lambda@Edge and Route53 ensures a cost-effective solution that delivers a substantial improvement over the previous solution.
The BJSS solution uses the following AWS services:
|AWS CloudFront||Enables the Search application to make use of the HTTPS protocol for S3-based web content on a given domain name. The CloudFront Geo-Restriction feature restricts access to Search to requests originating from UK IP addresses.|
|AWS API Gateway||Provides the endpoints used by both the Web-based and Native applications. The Gateway is configured as a proxy to pass the full request (including header) into the Lambda Function which encapsulates the functionality for the Search application.|
|AWS WAF||Restricts access to the service to known IP addresses.|
|AWS Lambda||The DVSA Search Lambda function is written in Java and encapsulates routing within the function, hence the proxy configuration on the API Gateway for the application.|
|AWS Lambda@Edge||Enables modification and addition of security response headers.|
|AWS RDS||Contains an import generated from several data warehouse sources.|
|AWS KMS||Supports encryption of Lambda Environment variables.|
|AWS CloudWatch||Provides default and custom application metrics to dashboards and CloudWatch alarms.|
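The API Gateway proxy arrangement described in the table can be sketched in Terraform: a single `{proxy+}` resource accepts any method and forwards the full request, headers included, to the Lambda function. All names here are hypothetical, and the function's invoke ARN is taken as an input rather than defined, since the Java build is a separate concern.

```hcl
# Invoke ARN of the (separately built) Java Search Lambda - an assumption
# for this sketch, supplied from elsewhere in the real configuration.
variable "search_lambda_invoke_arn" {
  type = string
}

resource "aws_api_gateway_rest_api" "search" {
  name = "search-api" # illustrative name
}

# A single greedy path part catches every request path.
resource "aws_api_gateway_resource" "proxy" {
  rest_api_id = aws_api_gateway_rest_api.search.id
  parent_id   = aws_api_gateway_rest_api.search.root_resource_id
  path_part   = "{proxy+}"
}

resource "aws_api_gateway_method" "any" {
  rest_api_id   = aws_api_gateway_rest_api.search.id
  resource_id   = aws_api_gateway_resource.proxy.id
  http_method   = "ANY"
  authorization = "NONE"
}

# AWS_PROXY passes the whole request (body, headers, query string) to the
# Lambda, which performs its own routing internally.
resource "aws_api_gateway_integration" "lambda" {
  rest_api_id             = aws_api_gateway_rest_api.search.id
  resource_id             = aws_api_gateway_resource.proxy.id
  http_method             = aws_api_gateway_method.any.http_method
  type                    = "AWS_PROXY"
  integration_http_method = "POST"
  uri                     = var.search_lambda_invoke_arn
}
```

Keeping routing inside the function means new endpoints need no gateway changes: the greedy proxy resource forwards everything as-is.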
By working collaboratively, DVSA and BJSS Technical Support Service have:
Carl Austin, UK Chief Technology Officer
The death of private Cloud and the truth of hybrid
Death is a pretty final term, but the private Cloud is on a steep slope of decline. What is private Cloud (or “false Cloud” as Werner Vogels famously called it) anyway? I don’t think the definition matters much more than the fact that it’s in your data-centre. That means you are putting significant time and money into the operational aspects of running your own Cloud. Not only does this often become significantly more expensive than public Cloud alternatives, it also diverts focus from generating direct business value (unless, of course, you are yourself a private Cloud business!).
You would also be missing out on a wide range of public Cloud benefits: speed of progression, truly elastic scaling, a highly integrated ecosystem, full API-driven automation; the list goes on. Many senior managers and IT departments will not recognise the value of these additional benefits, but the opportunity cost is significant, perhaps hitting the bottom line even harder than the direct costs. This amounts to some pretty good reasons to go public instead of private.
From our experience at BJSS, most organisations see it that way too. The risks associated with public Cloud – security, skills, regulators, overspending – which are often the sales pitch for private Cloud, are over-sold and dissipating, with even those who have implemented private Clouds putting in place public Cloud strategies. Often, dissension in the ranks follows, with splinter groups using their credit cards to create public Cloud accounts. These strategies often start with non-production systems only, but I’ve no doubt that they will end up replacing private Cloud for production systems too, potentially writing off significant investment. I would strongly recommend that organisations looking to invest in private Cloud take a step back and reconsider for a moment.
So, with private Cloud in the throes of death, what about hybrid? The concept of an IT estate that spans the data-centre and the public Cloud might sound appealing. But, in reality, the nirvana of a homogeneous environment is somewhat unattainable. In the long run, hybrid Cloud is simply a stepping stone into a public Cloud future. It is a means to an end rather than a sensible long-term strategy. From a strategic long-term investment perspective, the same arguments against private Cloud apply to hybrid. The public Cloud providers all have hybrid stories, but these aren’t based on a vision of the future in which companies retain their data-centres. They are based on the need to address the proportion of the market that isn’t yet ready or able to let go, and on ensuring those clients’ business is won in preparation for a future when they are.
Who wants infrastructure anyway? Application delivery for the Cloud
Unless you are a hardware vendor, data-centre or Cloud provider, delivering infrastructure probably won’t generate direct business value. It’s more likely to be seen as a necessary expenditure to support the delivery of software and services that do generate value. Infrastructure as a Service looks to reduce the cost and complexity of infrastructure; Serverless, on the other hand, looks to remove it altogether. Of course, there are problems for which serverless solutions are highly relevant and others where they just don’t fit the bill. I fully expect the balance to shift over time, with more and more applications built upon a serverless backbone and a greater range of Cloud services delivered in this model. Our experience suggests that not only is the infrastructure operation effort removed, but it is also possible to “double-dip” by making significant savings on running costs. Who doesn’t like the sound of that? The progression of serverless also has other effects. The ’you build it, you run it’ ethos will become easier to attain, while demand for specialist Cloud infrastructure and sysadmin-type skills will decline. This will be counter-balanced by significant increases in demand for engineers and application architects experienced in serverless and Cloud-native delivery patterns. This will fuel an increase in applications able to deliver change many times a day, so the demand for continuous deployment experience will accelerate.
Why pay for something that in all likelihood you’ll never use?
Many organisations are planning to develop software to be run on more than one Cloud provider, whether as a form of Disaster Recovery, or to protect against vendor lock-in. In many cases this approach forgoes the Cloud-native features and offerings of individual Cloud providers in favour of commodity IaaS capabilities or layers of abstraction such as containers. This is a form of insurance, but the premium can be high (the effort to deliver Cloud-agnostic software can be considerably greater) and the risk proportionally low. Much like decisions to back out of private Cloud investment, I’ve seen organisations stepping back from previous decisions to remain agnostic of Cloud provider. This doesn’t mean that they don’t consider the potential impact of native service adoption – it means they have. I believe this trend will likely continue, impacting one of the use cases for containers: abstraction from the Cloud provider. I would recommend carefully revisiting the reasoning for avoiding lock-in and considering whether it genuinely makes business sense. Native capabilities change quickly, and it’s easy to make the decision in isolation, perhaps as part of a company Cloud strategy, without fully considering its effect on application delivery.
Security experts: change behaviours or get left behind
As the speed of software delivery is pushed ever faster, accelerated by Cloud-native architectures and serverless capabilities, much of the security profession could be left behind. Practitioners need to adapt and change behaviours to avoid this. Security is often seen as a blocker to progress, and both of the typical outcomes are undesirable: either a team finds ways to subvert security, which is likely to make the software less secure, or the team slows down its delivery. I believe that the answer is the adoption of DevSecOps. This approach advocates moving security forward in the delivery lifecycle, making it the responsibility of the entire delivery team and utilising security expertise as an aid to that shared responsibility. It encourages the use of automated security testing, red and blue team simulations, comprehensive security monitoring and threat intelligence, while discouraging the most common complaint about security: always saying no. The security expert becomes a mentor to teams. Success means their input is required less frequently, and that they become a reviewer and validator day-to-day, freeing them to focus their efforts on truly specialist input. Those who don’t take this approach, who are seen to be blocking advance, will find themselves marginalised by the speed of progress.
I would encourage security professionals to embrace innovation within their profession, to develop new ways of working and to revisit and improve these continually. I would also encourage engineers to work threat modelling into their repertoire and consider how tooling can help to automate some aspects of security assurance within the build and deployment pipeline.
The core(s) of the AI revolution
AI is on track to be the next technological, potentially even social, revolution. Cloud will be at the core of this movement. Not only is the ephemeral nature of Cloud computing an excellent fit for machine learning, but Cloud providers are bringing AI to the masses with APIs that significantly reduce the specialist knowledge required to build intelligent software. This will be key to the future of Data Science, allowing access to advanced predictive and prescriptive analysis for those who typically sit outside of the Data Science profession, though still requiring some specialist knowledge. Perhaps more interesting for the expert Data Scientist is the specialist AI hardware that the Cloud will shortly make available to all. Google and Microsoft are both developing specialist machine learning processors, while AWS provides FPGA and GPU instances, having formed a close relationship with NVIDIA, which is targeting the AI development market and continuously improving high-level programming interfaces to its chipsets. These moves demonstrate the level of investment that the Cloud providers are putting into AI so that they are at the forefront of the revolution. However, with this drive towards machine-driven decision making, the reasoning behind decisions is obfuscated. This presents an opportunity to a hacker: a smokescreen for malicious interference that aids avoidance of detection, enabling subversion of the decision-making process to the benefit of the hacker. Effective monitoring and explainability of decisions is paramount to ensuring the integrity of these capabilities.
Cloud will continue to define how we design, build and deploy software applications. It will also play a central part in the AI revolution. Don’t fall into the private Cloud trap. Utilise public Cloud and benefit from the use of Cloud-native capabilities with an educated approach to vendor lock-in. Be forward looking in your approach to security and encourage an environment where everyone is both security conscious and jointly responsible. Above all, consider how the direction of change affects your business, applications and even your profession, ensuring that both you and your new software aren’t considered legacy in just a few years’ time.