AWS intros new foundation model line and tools for Bedrock
The cloud provider launched Nova, a new family of generative AI models available on Amazon Bedrock. It also tackled some problems with generative AI, such as hallucinations.
AWS on Tuesday introduced a new family of large language models and new tools and services to manage generative AI models.
At the AWS re:Invent 2024 conference in Las Vegas, the cloud giant unveiled Amazon Nova, a new family of foundation models available exclusively on Amazon Bedrock, AWS' generative AI and machine learning platform.
Amazon Nova
The line includes three models, with a fourth expected soon.
Amazon Nova Micro is a text-only large language model (LLM) that delivers low-latency responses at low cost, AWS said. It has a context window of 128K tokens and is useful for text summarization, translation, brainstorming and simple mathematical reasoning, according to the vendor.
Amazon Nova Lite is a multimodal model that processes image, video and text inputs and generates text outputs. It accepts inputs of up to 300K tokens and can analyze multiple images and 30 minutes of video.
Amazon Nova Pro is a multimodal model suited to tasks such as analyzing financial documents. It has a context window of 300K tokens and can process 15,000 lines of code.
Amazon Nova Premier, for complex reasoning, is still being trained and will be available in 2025.
The Amazon Nova line also includes Amazon Nova Canvas, an image generation model, and Amazon Nova Reel, a video generation model.
No winning model
The new generation of AWS models will join the selection of different models the cloud provider makes available on Amazon Bedrock. The platform houses more than 10 foundation models, LLMs and multimodal models from generative AI vendors, including Anthropic, Cohere, Meta, Mistral and Stability AI. Amazon's own models, including the Titan line of general-purpose LLMs, are also on Bedrock.
With Bedrock, AWS said it is seeking to provide customers with choices that reflect how models are used in the real world.
"The reality is that all of you are going to use different models for different reasons at different times, which, by the way, is the way the real world works," Amazon president and CEO Andy Jassy said during the conference keynote on Tuesday.
He added that in practical applications of generative AI, people sometimes optimize models for coding or math, other times for autonomous agents, and other times for cost. "Most of the time for some combination of these. At AWS we are going to give you the very best combination of all these," Jassy said.
By providing its own foundation models as well as a choice of other models on the market, AWS is pursuing a strategy that no one model will win over the market.
"It's not necessarily competition," said Mark Beccue, an analyst with Informa TechTarget's Enterprise Strategy Group. "There's going to be lots of models."
What differentiates Nova
AWS claimed that Amazon Nova's chief benefit is that it offers different types of models at lower prices than its competitors.
Amazon Nova Micro costs $0.000035 per 1,000 input tokens and $0.00014 per 1,000 output tokens. Nova Lite is $0.00006 per 1,000 input tokens and $0.00024 per 1,000 output tokens.
Nova Pro is $0.0008 per 1,000 input tokens and $0.0032 per 1,000 output tokens.
By comparison, for example, the Claude LLMs from Anthropic run between $0.0008 and $0.003 per 1,000 input tokens and between $0.0024 and $0.024 per 1,000 output tokens, according to pricing details compiled and provided by AWS.
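To put the per-1,000-token prices above in concrete terms, a small estimator can be sketched. This is purely illustrative: the prices are those quoted in this article, and the workload size (1 million input tokens, 200,000 output tokens) is a hypothetical example, not an AWS benchmark.

```python
# Illustrative cost estimate from the per-1,000-token prices quoted above.
# Prices: (input, output) in USD per 1,000 tokens, as cited in the article.
PRICES = {
    "nova-micro": (0.000035, 0.00014),
    "nova-lite": (0.00006, 0.00024),
    "nova-pro": (0.0008, 0.0032),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a given token volume."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1000) * in_price + (output_tokens / 1000) * out_price

# Hypothetical workload: 1M input tokens and 200K output tokens.
for model in PRICES:
    cost = estimate_cost(model, input_tokens=1_000_000, output_tokens=200_000)
    print(f"{model}: ${cost:.4f}")
```

At that hypothetical volume, the arithmetic makes the gap plain: roughly six cents on Nova Micro versus well over a dollar on Nova Pro, which is the kind of inference-cost difference the analysts below are weighing.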
"That's a huge deal if it's true," Beccue said, adding that a big challenge for enterprises running generative AI models is how to run them efficiently to support business applications. "It's not just training but also the inferencing. Whenever you're running the models, it must be cheap enough to support what you're trying to do."
Therefore, if Amazon Nova can be a cheaper alternative for enterprises, it would be a breakthrough for AWS, Beccue said.
However, it's possible that Nova is available only on Bedrock because AWS is confident it can keep costs down when the models run on its own infrastructure rather than on other systems.
Nova is also likely to be useful for internal operations at Amazon, possibly to improve logistics for consumer-facing Amazon.com, said Gartner analyst Jim Hare.
"I think who's going to benefit the most is probably Amazon itself, in terms of using these models to help with some of the innovations," Hare said.
Tackling hallucinations
Beyond the new foundation model line on Bedrock, AWS also unveiled new features for dealing with some of the problems with LLMs and generative AI.
One challenge is hallucinations.
To tackle this problem, which has afflicted LLMs from most vendors, AWS introduced Automated Reasoning checks in preview.
Automated Reasoning checks help users validate the accuracy of responses generated by LLMs to prevent hallucinations.
The automated checks create an alert when there is a problem with an LLM's output and put a human back in the process to verify information and check for hallucinations.
"To me, that's unique," Beccue said. "It's something that says we're making progress on how to manage these things, and this is the kind of stuff we need to see from everybody."
While Automated Reasoning checks appear promising for guarding against hallucinations, it's unclear how users will set them up, Hare said.
"It feels like it's based on policies and business rules," he said. "The devil's in the details, but I think it just highlights that AWS is trying to help customers address some of these challenges that they're facing in using these generative AI tools."
Another LLM management tool AWS introduced is Amazon Bedrock Model Distillation in preview.
Model Distillation automates the process of creating a distilled model -- a smaller model trained to mimic a larger one -- for users' specific use cases. The large foundation model serves as the teacher, and a fine-tuned smaller model serves as the student. The process generates responses from the teacher and uses data synthesis to improve them before they are used to train the student.
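The teacher-student data flow described above can be sketched in miniature. This is a hypothetical illustration of the general distillation pipeline, not AWS' implementation: the "models" here are stand-in functions so the shape of the process is runnable, whereas real distillation fine-tunes an actual smaller LLM on the teacher's synthesized responses.

```python
# Minimal, hypothetical sketch of the distillation data flow:
# a teacher model labels prompts, and a smaller student is trained
# on those synthetic (prompt, response) pairs.

def teacher_model(prompt: str) -> str:
    # Stand-in for the large foundation (teacher) model.
    return f"detailed answer to: {prompt}"

def synthesize_training_data(prompts):
    # Data synthesis step: generate teacher responses for each prompt.
    return [(p, teacher_model(p)) for p in prompts]

class StudentModel:
    # Stand-in for the smaller student model; "training" here simply
    # memorizes the teacher's responses to keep the sketch runnable.
    def __init__(self):
        self.memory = {}

    def fine_tune(self, pairs):
        self.memory.update(pairs)

    def __call__(self, prompt: str) -> str:
        return self.memory.get(prompt, "unknown")

prompts = ["What is RAG?", "Summarize this report."]
student = StudentModel()
student.fine_tune(synthesize_training_data(prompts))
print(student("What is RAG?"))  # matches the teacher's response
```

The point of the pattern is that, after training, the student answers use-case-specific prompts like the teacher would, at a fraction of the inference cost.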
Model Distillation addresses the hallucination problem and lets users run a smaller, distilled version of a model instead of the whole thing, Hare said.
He noted that AWS is not the only vendor with a model distillation capability, but said the new feature shows how AWS is trying to address some of these generative AI problems.
AWS also launched a multi-agent collaboration capability for Amazon Bedrock in preview. The new tool lets users build, deploy and manage multiple AI agents working on complex, multi-step tasks, Amazon said.
Cohere Rerank 3.5 and other news
AWS is also partnering with vendors that are trying to help enterprises manage the models they have.
On Dec. 2, AWS revealed that independent generative AI vendor Cohere's advanced reranking model, Rerank 3.5, is available through a new Rerank API on Amazon Bedrock. Reranking is a technique that improves the relevance of search results using LLMs' language understanding capabilities.
Cohere Rerank 3.5 boosts the accuracy of retrieval-augmented generation (RAG) and enterprise search, said Arun Chandrasekaran, another Gartner analyst.
"What Cohere is trying to do here is, 'if you're implementing search using keyword search or using RAG, we can work with what you have, and we can just improve the quality of you know what you're doing,'" Chandrasekaran said. "They're just trying to assist and boost the quality of an existing product or an existing system that the customer has already deployed."
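The "work with what you have" pipeline Chandrasekaran describes can be sketched conceptually: an initial search returns candidate documents, and a reranker re-scores them against the query. A real reranker such as Cohere Rerank 3.5 uses an LLM's language understanding; the stand-in below uses simple word overlap purely to illustrate where the reranking step sits, and the query and documents are invented examples.

```python
# Conceptual sketch of reranking: re-score retrieved candidates against
# the query and reorder them by relevance. The word-overlap scorer is a
# hypothetical stand-in for an LLM-based reranking model.

def rerank(query: str, candidates: list[str]) -> list[str]:
    query_words = set(query.lower().split())

    def score(doc: str) -> float:
        # Fraction of the document's words that also appear in the query.
        doc_words = set(doc.lower().split())
        return len(query_words & doc_words) / max(len(doc_words), 1)

    # Return candidates reordered from most to least relevant.
    return sorted(candidates, key=score, reverse=True)

candidates = [
    "Quarterly revenue grew 12 percent year over year.",
    "Reranking improves search relevance for enterprise search.",
    "The office cafeteria menu changes weekly.",
]
print(rerank("improve enterprise search relevance", candidates)[0])
```

Because reranking only reorders results an existing system already retrieved, it can be layered onto keyword search or a RAG pipeline without replacing either, which is the assist-and-boost approach described in the quote above.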
On the infrastructure front, AWS revealed that its Trainium2 AI chip is now generally available.
In addition, the tech giant added new capabilities to its generative AI-powered assistant Amazon Q Business.
One capability coming soon to Amazon Q simplifies the creation and maintenance of complicated workflows.
AWS also said that with 50 action plugins for Amazon Q, users can now connect to third-party apps and perform specific tasks.
With all these new capabilities and tools, AWS is demonstrating that it's looking to pursue a long-term game instead of trying to catch up with generative AI advances from specialist vendors, Hare said.
"Their strategy is much more around providing these capabilities ... in the way where you have the right guardrails and the capabilities in place to use generative AI safely and have all that trust you need to use it for the enterprise use cases," he said.
Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.