AWS Announces Five New Amazon SageMaker Capabilities to Help Customers Scale with Models
LAS VEGAS, Nov. 30, 2023 -- At AWS re:Invent, Amazon Web Services, Inc. (AWS) announced five new capabilities within Amazon SageMaker to help accelerate the building, training, and deployment of large language models and other foundation models. As models continue to transform customer experiences across industries, SageMaker is making it easier and faster for organizations to build, train, and deploy machine learning (ML) models that power a variety of generative AI use cases. However, to use models successfully, customers need advanced capabilities that efficiently manage model development, usage, and performance. That’s why most industry-leading models, such as Falcon 40B and 180B, IDEFICS, Jurassic-2, Stable Diffusion, and StarCoder, are trained on SageMaker.
Today’s announcements include a new capability that further enhances SageMaker’s ability to scale with models by accelerating model training time. Another new SageMaker capability optimizes managed ML infrastructure operations by reducing the deployment costs and latency of models. AWS is also introducing a new SageMaker Clarify capability that makes it easier to select the right model based on quality parameters that support the responsible use of AI. To help customers apply these models across their organizations, AWS is also introducing a new no-code capability in SageMaker Canvas that makes it faster and easier for customers to prepare data using natural-language instructions.
Additionally, SageMaker Canvas continues to democratize model building and customization by making it easier for customers to use models to extract insights, make predictions, and generate content using an organization’s proprietary data. These advancements build on SageMaker's extensive capabilities to help customers innovate with ML at scale. To get started with Amazon SageMaker, click here.
“Machine learning is one of the most profound technological developments in recent history, and interest in models has spread to every organization,” said Bratin Saha, vice president of Artificial Intelligence and Machine Learning at AWS. “This growth in interest is presenting new scaling challenges for customers who want to build, train, and deploy models faster. From accelerating training, optimizing hosting costs, reducing latency, and simplifying the evaluation of foundation models, to expanding our no-code model-building capabilities, we are on a mission to democratize access to high-quality, cost-efficient machine learning models for organizations of all sizes. With today’s announcements, we are enhancing Amazon SageMaker with fully managed, purpose-built capabilities that help customers make the most of their machine learning investments.”
New capabilities make it easier and faster for customers to train and operate models to power their generative AI applications
As generative AI continues to gain momentum, many emerging applications will rely on models. But most organizations struggle to adapt their infrastructure to meet the demands of these new models, which can be difficult to train and operate efficiently at scale. Today, SageMaker is adding two new capabilities that help ease the burdens of training and deploying models at scale.
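To make the deployment half of that workflow concrete, the following is a minimal sketch, not part of today’s announcement, showing how a publicly available foundation model can be deployed to a SageMaker real-time endpoint with the SageMaker Python SDK. The JumpStart model ID and request payload shown are illustrative assumptions, and the new training and inference optimizations described in this release are configured on top of endpoints like this one.

    # Illustrative sketch only: deploy a publicly available foundation model to a
    # SageMaker real-time endpoint using the SageMaker Python SDK.
    # The model ID below is an assumption; check SageMaker JumpStart for the IDs
    # available in your Region.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")

    # Creates a managed endpoint using the model's recommended instance type
    # unless an instance_type argument is passed explicitly.
    predictor = model.deploy()

    # Text-generation containers typically accept an "inputs" field plus optional
    # generation parameters.
    response = predictor.predict(
        {"inputs": "Summarize why managed ML infrastructure matters.",
         "parameters": {"max_new_tokens": 64}}
    )
    print(response)

    # Delete the endpoint when finished to avoid ongoing charges.
    predictor.delete_endpoint()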
New capability helps customers evaluate any model and select the best one for their use case
Today, customers have a wide range of options when choosing a model to power their generative AI applications, and they want to compare these models quickly to find the best option based on relevant quality and responsible AI parameters (e.g., accuracy, fairness, and robustness). However, when comparing models that perform the same function (e.g., text generation or summarization) or that are within the same family (e.g., Falcon 40B versus Falcon 180B), each model will perform differently across various responsible AI parameters. Even the same model fine-tuned on two different datasets could perform differently, making it challenging to know which version works best.
To start comparing models, organizations must first spend days identifying relevant benchmarks, setting up evaluation tools, and running assessments on each model. While customers have access to publicly available model benchmarks, they are often unable to evaluate model performance on prompts that are representative of their specific use cases. In addition, these benchmarks are often hard to decipher and are not useful for evaluating criteria like brand voice, relevance, and style. An organization then has to go through the time-consuming process of manually analyzing the results and repeat it for every new use case or fine-tuned model.
SageMaker Clarify now helps customers evaluate, compare, and select the best models for their specific use case based on their chosen parameters to support an organization’s responsible use of AI. With the new capability in SageMaker Clarify, customers can easily submit their own model for evaluation or select a model via SageMaker JumpStart. In SageMaker Studio, customers choose the models that they want to compare for a given task, such as question answering or content summarization. Customers then select evaluation parameters and upload their own prompt dataset or select from built-in, publicly available datasets.
For sensitive criteria or nuanced content that requires sophisticated human judgment, customers can choose to use their own workforce, or a managed workforce provided by SageMaker Ground Truth, to review the responses within minutes using feedback mechanisms. Once customers finish the setup process, SageMaker Clarify runs its evaluations and generates a report, so customers can quickly assess, compare, and select the best model based on their performance criteria.
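As a rough illustration of the automated scoring behind this workflow, the sketch below uses the open-source fmeval library, which AWS maintains for foundation-model evaluations, to score a single prompt/response pair for factual knowledge. The class and parameter names are assumptions drawn from that library and should be verified against its documentation; the managed experience in SageMaker Studio requires no code at all.

    # Illustrative sketch only: score one model response with the open-source
    # fmeval library. Names below are assumptions -- verify against the fmeval docs.
    from fmeval.eval_algorithms.factual_knowledge import (
        FactualKnowledge,
        FactualKnowledgeConfig,
    )

    # The delimiter lets a single record list several acceptable target answers.
    eval_algo = FactualKnowledge(
        FactualKnowledgeConfig(target_output_delimiter="<OR>")
    )

    # Returns 1.0 if the target string appears in the model output, 0.0 otherwise;
    # a full evaluation runs this over an entire prompt dataset and aggregates
    # the scores into a report.
    scores = eval_algo.evaluate_sample(
        target_output="Paris",
        model_output="The capital of France is Paris.",
    )
    print(scores)  # e.g., [EvalScore(name="factual_knowledge", value=1.0)]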
New Amazon SageMaker Canvas enhancements make it easier and faster for customers to integrate generative AI into their workflows
Amazon SageMaker Canvas helps customers build ML models and generate predictions without writing a single line of code. Today’s announcement expands on SageMaker Canvas’ existing ready-to-use capabilities that help customers use models to power a range of use cases in a no-code environment.
About Amazon Web Services
Since 2006, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud. AWS has been continually expanding its services to support virtually any workload, and it now has more than 240 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 102 Availability Zones within 32 geographic regions, with announced plans for 15 more Availability Zones and five more AWS Regions in Canada, Germany, Malaysia, New Zealand, and Thailand. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.
Source: AWS