We came. We saw. We drank from the firehose of announcements at the Amazon Web Services re:Invent show in an unseasonably chilly Las Vegas, cooled by the storm systems battering the rest of the country.

AWS Pushes To Make Machine Learning More Democratic And Accessible

This year, AWS revealed a product direction that was part catch-up, part reemphasis of its willingness to listen closely to customer feedback. Higher capability, throughput, and ease of use at relentlessly declining price points were common themes. The company announced several new capabilities across its expansive product line that would require stacks of PowerPoint slides to cover adequately.

AWS CEO Andy Jassy at the re:Invent 2019 keynote

However, the key takeaway from re:Invent 2019 is that AWS continues its drive to democratize machine learning (ML) and make it accessible to a wider set of business, developer, and creator personas. If you are a data science practitioner, machine-learning developer, or insights-driven business leader, here are the key announcements that you should care about:

Machine-Learning Infrastructure

  • Inferentia chips and Inf1 instances. The company launched its custom-built Inferentia chips, aimed at making inferencing cheaper and faster. The new Inf1 instances, available on EC2 (and coming to SageMaker sometime in 2020), integrate with TensorFlow, PyTorch, and MXNet. AWS claims three times the throughput at two-fifths the cost.
  • Amazon Aurora ML integration. Announced a few days ahead of re:Invent, this capability lets developers integrate machine-learning predictions from SageMaker and Comprehend directly into Aurora databases using SQL. The predictions are made via direct calls that bypass the application layer, making the feature suitable for low-latency, real-time use cases such as fraud detection or product recommendations.
  • ML features in Amplify for iOS and Android. Developers can now add AI/ML-based use cases leveraging pretrained models.
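To make the Aurora ML idea concrete, here is a minimal sketch of what in-database inference looks like, modeled on the Aurora PostgreSQL integration with Comprehend. The table and column names ("reviews", "review_text") are hypothetical, and the exact SQL function signature may differ from your Aurora version; the point is that the prediction happens inside the SQL statement rather than in application code.

```python
# Hypothetical example: sentiment scoring inside an Aurora SQL query.
# The aws_comprehend.detect_sentiment() function is provided by the
# Aurora ML integration; "reviews"/"review_text" are placeholder names.
SENTIMENT_SQL = """
SELECT review_text,
       aws_comprehend.detect_sentiment(review_text, 'en') AS sentiment
FROM reviews
LIMIT 10;
"""

# Against a live Aurora PostgreSQL cluster, this would run through any
# standard driver, e.g.:
# import psycopg2
# conn = psycopg2.connect(host="my-cluster.cluster-xyz.rds.amazonaws.com",
#                         dbname="shop")
# cur = conn.cursor()
# cur.execute(SENTIMENT_SQL)
# rows = cur.fetchall()

print(SENTIMENT_SQL.strip())
```

Because the call never leaves the database engine, the application sees predictions as ordinary result columns.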

Machine-Learning Services

  • Alexa Voice Service (AVS) is now integrated with IoT Core. This will reduce the cost of building Alexa Voice into devices across a wide variety of categories, particularly resource-constrained ones.
  • Model prototyping in Java with the Deep Java Library (DJL). While Python remains the language of choice for ML devs, AWS made a nod to the popularity of Java in enterprise IT by announcing the DJL, an open source library and API to develop and prototype deep learning models in Java. The DJL works atop Apache MXNet and PyTorch.
  • SageMaker inferencing in Amazon QuickSight. QuickSight is Amazon’s business analytics and visualization service. AWS has enabled machine-learning predictions in QuickSight: users can now connect to various data sources; select custom, prebuilt, or packaged models; and pipe the resulting predictions into QuickSight visualizations and dashboards.
  • SageMaker Studio IDE. This is the big one. AWS continues to extend SageMaker towards its vision of the one machine-learning environment to rule them all, with a slew of new capabilities intended to make the platform a fully fledged web-based IDE for end-to-end machine-learning workflows. The company announced several new features within SageMaker Studio to help reduce the heavy lifting commonly associated with machine learning. These include SageMaker Notebooks, which are one-click, elastic, fully managed Jupyter notebooks running on EC2 instances; SageMaker Experiments, a common place to manage experiments; SageMaker Autopilot, an automated machine-learning tool that, AWS promises, eschews black-box approaches to give data scientists visibility into and control over the automated model selection and decision process; SageMaker Model Monitor, a tool for continuous monitoring of model performance and automatic detection of concept drift; and SageMaker Debugger, a tool to identify and . . . well, debug issues that emerge in machine-learning training jobs.
  • Explainability and human-in-the-loop workflows. The company gained parity with other hyperscaler ML offerings with announcements around model explainability and human-augmented inference. The new Amazon Augmented AI (A2I) service enables human reviewers to validate machine-learning predictions, improving inference quality. With tooling integrated into SageMaker, human insights through A2I, and basic research into new interpretability methods such as SHAP (SHapley Additive exPlanations), AWS hopes to support the creation of more explainable models. However, AWS hand-waved at our questions about data security and privacy, implying that the burden of safeguarding users and protecting personally identifiable information (PII) currently rests entirely with AWS customers who use A2I.
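The A2I workflow above can be sketched in a few lines: when a model's confidence falls below a threshold, the application starts a "human loop" against a pre-created review workflow. This is a hedged illustration, not AWS's reference implementation — the flow-definition ARN, loop name, and threshold are hypothetical placeholders, and the actual `start_human_loop` call (commented out) requires a configured reviewer workforce.

```python
import json

# Hypothetical ARN — A2I requires a pre-created human review workflow
# (flow definition) backed by a reviewer workforce.
FLOW_DEFINITION_ARN = ("arn:aws:sagemaker:us-east-1:123456789012:"
                       "flow-definition/review-low-confidence")

def human_loop_request(loop_name, prediction, confidence, threshold=0.7):
    """Build start_human_loop() parameters when model confidence is low."""
    if confidence >= threshold:
        return None  # confident enough; no human review needed
    return {
        "HumanLoopName": loop_name,
        "FlowDefinitionArn": FLOW_DEFINITION_ARN,
        "HumanLoopInput": {
            "InputContent": json.dumps(
                {"prediction": prediction, "confidence": confidence}
            ),
        },
    }

params = human_loop_request("loan-app-0042", "approve", 0.55)

# With AWS credentials configured, the call itself would be:
# import boto3
# boto3.client("sagemaker-a2i-runtime").start_human_loop(**params)
```

Routing only low-confidence predictions keeps reviewer workloads small while still catching the cases the model is least sure about.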

AI Applications And Services

  • Custom labels in Rekognition. AWS’s computer vision service for images and video, Rekognition, now allows custom labels that let businesses upload their own images for domain-specific use cases.
  • Amazon Kendra for Enterprise Search. AWS debuted Kendra, a new machine-learning-powered enterprise search service. The company claims that Kendra can aggregate knowledge artifacts and documents from across siloed repositories within the enterprise (such as Box, Dropbox, SharePoint, Office documents, and Salesforce, among others) to create an intent-based search index that provides more useful, contextualized, and relevant answers to natural-language queries. Customers can set up Kendra from the AWS console.
  • Transcribe Medical. Transcribe Medical applies speech recognition tuned for medical terminology to clinical documentation, reducing documentation workloads for doctors and medical practitioners.
  • Amazon Fraud Detector. AWS filled a clear product gap by announcing a service that allows the automatic building and training of custom fraud detection models by non-data scientists.
  • Amazon CodeGuru. If there was a single “one more thing” moment at re:Invent this year, the CodeGuru announcement was it. This is a service that automates code reviews using machine learning. Developers can use CodeGuru by just adding it to pull requests. CodeGuru includes reviewer and profiler tools to catch code issues, identify expensive lines of code, and suggest ways to streamline code. CodeGuru is available on pay-as-you-go monthly pricing based on the number of lines of code reviewed and sampling hours per application profile.
  • Amazon Connect. AWS’s only business application, the cloud contact center suite Connect, added advanced machine-learning-based analytics with Contact Lens. The service automatically transcribes and analyzes customer calls for issues such as negative sentiment, compliance lapses, and long pauses. Contact Lens bundles into a single one-click capability building blocks that customers would otherwise have to assemble piecemeal from other AWS services to create useful analytics. AWS said it was also working to release real-time call transcription sometime in 2020.
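As one concrete example from this list, here is a hedged sketch of calling Rekognition Custom Labels against a trained project version. The project-version ARN, S3 bucket, and object key are hypothetical placeholders; the parameter shape matches Rekognition's `detect_custom_labels` API, and the actual boto3 call is commented out since it requires credentials and a trained model.

```python
# Hypothetical project-version ARN for a custom-trained Rekognition model.
PROJECT_VERSION_ARN = ("arn:aws:rekognition:us-east-1:123456789012:"
                       "project/parts-inspector/version/v1/1575000000000")

def detect_request(bucket, key, min_confidence=80):
    """Build parameters for rekognition.detect_custom_labels()."""
    return {
        "ProjectVersionArn": PROJECT_VERSION_ARN,
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MinConfidence": min_confidence,
    }

req = detect_request("factory-images", "line3/frame-0051.jpg")

# With AWS credentials configured and the model running:
# import boto3
# labels = boto3.client("rekognition").detect_custom_labels(**req)["CustomLabels"]
```

The appeal of the feature is that the request shape is identical to stock Rekognition; only the ARN points at a model trained on the customer's own labeled images.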

. . . what did I miss? Oh, they also have a machine-learning-driven keyboard that uses generative AI to create really bad music.

What It Means

  • AWS is diving deep into the enterprise cloud business. Last year, AWS emphasized fully embracing modern architecture; this year, it turned to a more pragmatic vision of digital transformation that includes management leadership, flexible paths to the cloud, and respect for legacy assets. On the product side, AWS has long expanded the breadth and depth of its cloud services rapidly; this year, it focused on giving enterprise developers and I&O pros more agility and convenience by refining the granularity of service features and strengthening service abstraction and assembly.
  • Machine learning in the public cloud is mainstream. The public cloud improves ML development agility and provides AI building blocks, making it a foundation of value for emerging tech innovation. Firms worldwide are using ML in the cloud to boost their digital business. In Forrester’s two reports covering North America and Asia Pacific, we characterize the service ecosystem of ML in the public cloud as consisting of three functionality segments: core services, application services, and infrastructure services. You should consider a hybrid architecture across public and private environments, assess the use of edge computing, focus on automation, and evaluate interoperability to accelerate sustainable innovation.

Charlie Dai and I will be covering machine learning in the cloud extensively in 2020, with a Now Tech report in Q2, followed by a Forrester Wave™ later in the year. We would love to hear your reactions to the announcements at re:Invent 2019, as well as your experiences with different hyperscalers’ machine-learning products and services. As always, you can reach us through Forrester briefings (for vendors) or request an inquiry to pick our brains or to share your experiences.