But what will Cloud look like over the coming years? After all, the rate of change in this space has been such that some applications delivered to, and developed for, the Cloud as recently as a few years ago could already be considered "legacy". This relentless progress demonstrates how vital it is to have a view on the direction of change; otherwise, new applications delivered for the Cloud may themselves become legacy in a few years' time. We believe there are several key "Cloud futures" that will affect how the market adopts Cloud platforms, as well as how software is designed and built.

BJSS' Cloud experience is significant, with a diverse client base spanning multiple industries and many different stages of the Cloud journey. This experience, combined with the views of our consultants and partners, has informed my views on the future of Cloud.

The death of private Cloud and the truth of hybrid

Death is a pretty final term, but private Cloud is on a steep slope of decline. What is private Cloud (or "false Cloud", as Werner Vogels famously called it) anyway? I don't think the definition matters much beyond the fact that it's in your data-centre. That means you are putting significant time and money into the operational aspects of running your own Cloud. Not only does this often become significantly more expensive than public Cloud alternatives, it also diverts focus from generating direct business value (unless, of course, you are yourself a private Cloud business!). You would also miss out on a wide range of public Cloud benefits: speed of progression, truly elastic scaling, a highly integrated ecosystem, full API-driven automation; the list goes on. Many senior managers and IT departments will not recognise the value of these additional benefits, but the opportunity cost is significant, perhaps affecting the bottom line at an even higher cost.
This amounts to some pretty good reasons to go public instead of private. From our experience at BJSS, most organisations see it that way too. The risks associated with public Cloud (security, skills, regulators, overspending), which are often the sales pitch for private Cloud, are over-sold and dissipating, with even those who have implemented private Clouds putting public Cloud strategies in place. Often, dissension in the ranks follows, with splinter groups using their credit cards to create public Cloud accounts. These strategies often start with non-production systems only, but I've no doubt that they will end up replacing private Cloud for production systems too, potentially writing off significant investment. I would strongly recommend that organisations looking to invest in private Cloud take a step back and reconsider.

So, with private Cloud in the throes of death, what about hybrid? The concept of an IT estate that spans the data-centre and the public Cloud might sound appealing. But, in reality, the nirvana of a homogeneous environment is somewhat unattainable. In the long run, hybrid Cloud is simply a stepping stone to a public Cloud future: a means to an end rather than a sensible long-term strategy. From a strategic, long-term investment perspective, the same arguments against private Cloud apply to hybrid. The public Cloud providers all have hybrid stories, but these aren't based on a vision of the future in which companies retain their data-centres. They are based on the need to address the proportion of the market that isn't yet ready or able to let go, and to ensure they win those clients' business in preparation for a future when they are.

Who wants infrastructure anyway? Application delivery for the Cloud

Unless you are a hardware vendor, data-centre or Cloud provider, delivering infrastructure probably won't generate direct business value.
It's more likely to be seen as a necessary expenditure to support the delivery of software and services that do generate value. Infrastructure as a Service looks to reduce the cost and complexity of infrastructure. Serverless, on the other hand, looks to remove it altogether. Of course, there are problems for which serverless solutions are highly relevant and others where they just don't fit the bill. I fully expect the balance to shift over time, with more and more applications built upon a serverless backbone and a greater range of Cloud services delivered in this model. Our experience suggests that not only is the infrastructure operation effort removed, but it is also possible to "double-dip" by making significant savings to running costs. Who doesn't like the sound of that?

The progression of serverless also has other effects. The 'you build it, you run it' ethos will become easier to attain, while demand for specialist Cloud infrastructure and sysadmin-type skills will decline. This will be counter-balanced by significant increases in demand for engineers and application architects experienced in serverless and Cloud-native delivery patterns. It will also fuel an increase in applications able to deliver change many times a day, so demand for continuous deployment experience will accelerate.

Why pay for something that in all likelihood you'll never use?

Many organisations plan to develop software to run on more than one Cloud provider, whether as a form of Disaster Recovery or to protect against vendor lock-in. In many cases this approach forgoes the Cloud-native features and offerings of individual Cloud providers in favour of commodity IaaS capabilities or layers of abstraction such as containers. This is a form of insurance, but the premium can be high (the effort to deliver Cloud-agnostic software can be considerably greater) and the risk proportionally low.
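To make that premium concrete, here is a minimal sketch (all names hypothetical) of the kind of indirection a Cloud-agnostic strategy demands: application code talks to a home-grown storage interface rather than a provider's native SDK, and every native capability the interface hides must be re-implemented or forgone.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Hypothetical provider-agnostic object storage interface.

    A lock-in-averse team would maintain one adapter per provider
    behind this, each reduced to the lowest common denominator."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Test double; real deployments would add per-provider adapters,
    hiding native features such as lifecycle rules or event triggers."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: BlobStore, report_id: str, content: bytes) -> None:
    """Application code depends only on the abstraction, never the SDK."""
    store.put(f"reports/{report_id}", content)
```

The abstraction itself is cheap; the ongoing cost is that every adapter must be written, tested and kept current with each provider, which is where the insurance premium accumulates.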
Much like decisions to back out of private Cloud investment, I've seen organisations stepping back from previous decisions to remain agnostic of Cloud provider. This doesn't mean they haven't considered the potential impact of adopting native services; it means they have. I believe this trend will likely continue, impacting one of the use cases for containers: abstraction from the Cloud provider. I would recommend carefully considering, and revisiting, the reasoning behind avoiding lock-in and whether it genuinely makes business sense. Native capabilities change quickly, and it's easy to make the decision in isolation, perhaps as part of a company Cloud strategy, without fully considering its effect on application delivery.

Security experts: change behaviours or get left behind

As the speed of software delivery increases, accelerated by Cloud-native architectures and serverless capabilities, much of the security profession could be left behind. Practitioners need to adapt and change behaviours to avoid this. Security is often seen as a blocker to progress, and both possible outcomes of this are undesirable: either a team finds ways to subvert security, which is likely to make the software less secure, or the team slows down its delivery.

I believe the answer is the adoption of DevSecOps. This approach advocates bringing security forward in the delivery lifecycle, making it the responsibility of the entire delivery team and using security expertise as an aid to that shared responsibility. It encourages the use of automated security testing, red and blue team simulations, comprehensive security monitoring and threat intelligence, while discouraging the most common complaint about security: always saying no. The security expert becomes a mentor to teams. Success means their input is required less frequently; day-to-day they become a reviewer and validator, freeing them to focus their efforts on truly specialist input.
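One small, hedged illustration of the automated security testing DevSecOps advocates: a build-pipeline step that fails the build when obvious secrets appear in committed source. The patterns below are illustrative only; a real pipeline would use dedicated secret scanners, dependency auditors and SAST tools rather than hand-rolled regexes.

```python
import re
import sys

# Illustrative patterns, not an exhaustive or production ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),   # hard-coded password literal
]

def scan_source(text: str) -> list[str]:
    """Return matched snippets so the build log can show context."""
    findings: list[str] = []
    for pattern in SECRET_PATTERNS:
        findings.extend(pattern.findall(text))
    return findings

if __name__ == "__main__":
    # e.g. wired into CI as: python scan.py < changed_source.txt
    findings = scan_source(sys.stdin.read())
    if findings:
        print(f"{len(findings)} potential secret(s) found; failing build")
        sys.exit(1)
```

Checks like this shift the "no" from a late-stage human gate into fast, repeatable feedback, which is exactly the behaviour change the section argues for.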
Those who don't take this approach, who are seen to be blocking progress, will find themselves marginalised by the speed of change. I would encourage security professionals to embrace innovation within their profession, to develop new ways of working and to revisit and improve them continually. I would also encourage engineers to work threat modelling into their repertoire and to consider how tooling can help to automate aspects of security assurance within the build and deployment pipeline.

The core(s) of the AI revolution

AI is on track to be the next technological, potentially even social, revolution, and Cloud will be at the core of this movement. Not only is the ephemeral nature of Cloud computing an excellent fit for machine learning, but Cloud providers are bringing AI to the masses with APIs that significantly reduce the specialist knowledge required to build intelligent software. This will be key to the future of Data Science, giving those who typically sit outside the Data Science profession access to advanced predictive and prescriptive analysis, though some specialist knowledge will still be required.

Perhaps more interesting for the expert Data Scientist is the specialist AI hardware that the Cloud will shortly make available to all. Google and Microsoft are both developing specialist machine learning processors, while AWS provides FPGA and GPU instances, having formed a close relationship with NVIDIA, which is targeting the AI development market and continuously improving the high-level programming interfaces to its chipsets. These moves demonstrate the level of investment the Cloud providers are putting into AI to ensure they are at the forefront of the revolution. However, with this drive towards machine-driven decision making, the reasoning behind decisions is obfuscated.
This presents an opportunity to a hacker: a smokescreen for malicious interference that aids avoidance of detection, enabling subversion of the decision-making process to the hacker's benefit. Effective monitoring and explainability of decisions are paramount to ensuring the integrity of these capabilities.

In conclusion

Cloud will continue to define how we design, build and deploy software applications. It will also play a central part in the AI revolution. Don't fall into the private Cloud trap. Utilise public Cloud and benefit from Cloud-native capabilities, with an educated approach to vendor lock-in. Be forward-looking in your approach to security and encourage an environment where everyone is both security-conscious and jointly responsible. Above all, consider how the direction of change affects your business, applications and even your profession, ensuring that neither you nor your new software is considered legacy in just a few years' time.