Key takeaway: 2020 is the year of the enterprise for AWS
Probably the most striking and evident theme of the year. AWS are targeting the enterprise heavily. Not only are many of the new features aimed at the enterprise, but more specifically they target the removal of blockers to AWS adoption in such companies: data privacy, security, hybrid cloud and connectivity issues, to name a few. We’ve previously seen this as a slight chink in the armour of AWS versus its primary competitors, and this move makes their intentions clear - to compete hard on this front.
It wasn’t only the content of the announcements that fitted this theme, either. The entire way in which the keynotes were presented this year screamed enterprise. Big-name speakers took to the stage during keynotes throughout the week, putting their brand power behind AWS for the enterprise. Names including BP, Cerner, Volkswagen and Verizon spoke about their successes in partnership with AWS.
This demonstrated a change of tack for re:Invent keynotes, and one that perhaps left them feeling a little more like marketing tools than usual. This was especially true of Werner Vogels’ keynote, which seemed a little flat to this group of techies.
Of the most interesting enterprise-targeted announcements, Outposts really stood out. Admittedly announced last year, it has now reached general availability with some new features, and apparently with a very busy roadmap planned for the coming year. You can now pay for a limited set of AWS services in your own data centre, running hybrid with the public cloud within the same VPC. During one of the sessions we attended, GE Healthcare spoke about their implementation of Outposts at hospitals across the US, ensuring that the processing of imaging data was performed outside of the public cloud for privacy purposes. While such privacy challenges can be solved through other means in the public cloud, providing an option that doesn’t need to overcome such speed bumps will no doubt extend and accelerate AWS’ reach into the enterprise market. The number of audience questions during the GE session made no bones about the level of interest in this solution.
Following on from Outposts, Local Zones and Wavelength also stood out, offering top-end connectivity at the edge, where AWS regions are otherwise unable to reach. Both services place compute and storage closer to end users, reducing latency for applications that typically need single-digit-millisecond response times. One can see this as the next step in the proliferation of services to the edge, at a scale far in excess of previous announcements such as Lambda@Edge or Greengrass.
There were many other announcements fitting this theme: Compute Optimizer, Nitro Enclaves, Kendra, Transit Gateway multicast and attribute-based access control using employee attributes, to name a few. Finally, the release of the Amazon Builders’ Library provides a growing resource of articles with insights into good practices at massive scale from AWS themselves.
One-third of all announcements are analytics or machine learning
It’s been a couple of years since the original raft of ML announcements, including SageMaker, and AWS are doubling down: one-third of all announcements this year were machine learning (ML) or analytics based. Admittedly, some of these are learning aids rather than new service announcements (see the multi-car DeepRacer and DeepComposer), but the fact that they were included at all demonstrates, in its own roundabout way, how serious AWS are about machine learning.
This year the announcements were an evolution of the capabilities provided within AWS to aid in the deployment of cloud native data lakes and the creation of ML models. I’ve no doubt that the capabilities to further data lake adoption will be of great benefit, especially the increased focus on the cross-over between Redshift and S3. At BJSS we believe that cloud native is the vastly preferred implementation path for new data lake projects, and these new features only add to an already strong case.
On the SageMaker suite: time will tell. The new capabilities are clearly a significant advance on what existed before, and as a suite it feels like there should be a compelling platform. However, we’ve seen limited uptake of SageMaker so far, and our data science team remained notably quiet about the new capabilities upon their announcement (a trait I don’t usually associate with them!).
The most interesting new feature of the suite is Model Monitor, a tool for automatically detecting model drift. At BJSS we’ve seen clients build capabilities to detect such drift themselves, and it’s an important consideration for production machine learning. However, this capability currently only works with models created and deployed through SageMaker, which will serve to limit its applicability.
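To make the drift problem concrete, here is a minimal sketch of the kind of check teams often build themselves: a Population Stability Index (PSI) comparing the distribution of a feature at training time against live traffic. This is an illustrative, generic technique, not how SageMaker Model Monitor is implemented internally.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Bin the training-time sample, then measure how far the live
    sample's bin proportions diverge from it. Higher PSI = more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Clip proportions away from zero so the log term stays defined.
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)    # baseline captured at training time
stable = rng.normal(0.0, 1.0, 10_000)   # live traffic, same distribution
shifted = rng.normal(0.8, 1.0, 10_000)  # live traffic after drift

print(population_stability_index(train, stable))   # small: no drift
print(population_stability_index(train, shifted))  # large: drift detected
```

A common rule of thumb treats a PSI below roughly 0.1 as stable and above roughly 0.25 as significant drift, with per-feature checks scheduled against batches of captured inference traffic.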
But it’s not all tools for those building analytics or machine learning. A selection of the announcements are in fact cases where AWS have themselves utilised machine learning as part of new features and services. An example is the really interesting CodeGuru. Here AWS have applied ML to automate the code review process, while also detecting deviation from AWS best practices and poorly optimised code that is likely to add to compute costs. While only available in preview, and only for code written in Java, this does sound like it has some potential. However, cost may be somewhat prohibitive, with an interesting (read: expensive) pricing model based on lines of code.
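The implication of per-line pricing is easy to see with some back-of-the-envelope arithmetic. The rate below is purely illustrative (check AWS’ current pricing page before budgeting); the point is that cost scales with every line scanned, so large, frequently reviewed codebases dominate the bill.

```python
def estimated_review_cost(lines_of_code: int, rate_per_100_lines: float) -> float:
    """Cost of a per-line pricing model: you pay for every line scanned."""
    return lines_of_code / 100 * rate_per_100_lines

# Illustrative rate only -- not a quoted AWS price.
RATE = 0.75  # dollars per 100 lines scanned

for loc in (10_000, 250_000, 1_000_000):
    print(f"{loc:>9,} lines -> ${estimated_review_cost(loc, RATE):,.2f} per full scan")
```

Run a full scan on every pull request of a million-line monorepo and this adds up quickly, which is the concern raised above.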
Serverless at scale is a major direction, not just an option
AWS are continuing the serverless march apace. We all knew this would be the case, but there’s even less denying it now. While the AWS portfolio is designed to include all services for all people, those services do not sit on as even a footing as might appear on the surface. This is evidenced by quotes from AWS’ own architects during the week, including "Cluster Huggers" and "Container work has regressed — it makes you think about capacity, which is backwards. You went to the cloud to stop thinking about capacity and then you re-introduce it".
With the movement to serverless architectures, there has been a significant swing in the skills required to develop cloud solutions, towards those with strong software engineering skills and somewhat away from those with more typical infrastructure or sysadmin backgrounds. This is a direction that companies such as AWS and Google have embraced internally for some time. Now they are looking to drive this movement throughout the entire industry at a faster rate, and the way in which their serverless capabilities are progressing is the strongest indicator of this.
New features such as Lambda Provisioned Concurrency, EKS on Fargate, Fargate Spot, Step Functions improvements and Managed Cassandra are a sample of evolutions that further commit to the serverless ecosystem. They all drive the importance of software engineering expertise in platform engineering, and furthermore a merging of cloud architecture with application architecture.
The BJSS team had an amazing time soaking up everything AWS at re:Invent this year. In addition to all manner of announcements and reveals, re:Invent always offers excellent networking opportunities. With 65,000 attendees from businesses using AWS across the globe, it feels like the entire industry really is in one place for a week.
This year’s event left us in no doubt that Amazon are focusing their attention heavily on winning over decision makers in large enterprise organisations. These battlegrounds are the most familiar to the largest of AWS’ competitors, and will be key as public cloud continues its expansion throughout industry. We have no doubt that AWS will be in a much stronger position in the enterprise arena now than they were prior to re:Invent.