AI takeaways from AWS re:Invent 2024

An industry analyst shares his insights from AWS re:Invent 2024, where Amazon announced new tools and models to tackle enterprise generative AI challenges.

Setting aside the Thanksgiving leftovers, I joined about 60,000 other attendees in Las Vegas during the first week of December for Amazon's flagship re:Invent conference.

As AWS is one of the world's leading AI vendors and thought leaders, I was curious to hear what the tech giant had to say about AI as we head into 2025. Here are a few takeaways from three jam-packed days of one-on-one meetings and presentations.

Amazon and AWS are AI pragmatists

Pragmatic AI was a foundational theme throughout re:Invent 2024 messaging and content. Amazon's approach to innovation is bottom-up: The company is all about listening to customer challenges, then trying to figure out a way to solve those challenges.

When it comes to the AI space, some industry watchers view AWS as a laggard, lacking shiny, first-to-market AI tools and products. I don't see AWS as a laggard at all, but rather as a company sticking to its innovation culture, producing the pragmatic AI products and services its customers are looking for. This approach works well for AWS in AI because the company has such a significant cloud computing market share, offering extensive opportunities to work with a range of customers on AI ideas.

Tackling major generative AI pain points: Cost and accuracy

Although there has been extensive interest in and high expectations for generative AI, operationalized applications have been slow to market. This is partly because enterprises don't have a good grasp on the total cost of generative AI, particularly AI compute workloads, and partly due to well-founded mistrust of the accuracy of AI model outputs.

Recent research from Informa TechTarget's Enterprise Strategy Group bears that out. A sobering 37% of organizations with generative AI in production or proof-of-concept stages said they have experienced negative effects from hallucinations in their generative AI initiatives.

At re:Invent, AWS teed up separate initiatives to address cost and accuracy:

  • Amazon Nova family of models, focused on reducing costs. Amazon president and CEO Andy Jassy introduced Amazon's new family of AI models, called Nova. The most intriguing aspect of the announcement was the company's claim that the base models are 75% more cost-effective than Amazon's other models.
  • Automated Reasoning Checks in Amazon Bedrock. In his keynote, AWS CEO Matt Garman debuted an intriguing tool for battling AI model hallucinations in Amazon Bedrock, called Automated Reasoning Checks. The tool challenges model responses and triggers human-in-the-loop checks when necessary. It's a good concept and one of the first tools I've heard of that's designed to curb model inaccuracy.

At this point, whether either of these initiatives can accomplish what Amazon claims isn't as important as the effort the company is making to address the issues. I think this is just the beginning of wholesale efforts by the AI vendor community to tackle cost and accuracy challenges in generative AI.

Mark Beccue is a principal analyst covering AI at Informa TechTarget's Enterprise Strategy Group.

Enterprise Strategy Group is a division of Informa TechTarget. Its analysts have business relationships with technology vendors.
