After nearly two years of speculation about when—or even if—Amazon would launch its own competing family of AI models to take on OpenAI, Google, and Anthropic, the company finally delivered the mic-drop moment at its AWS re:Invent conference. Amazon announced a new family of AI models for text, images, and video, called Nova, which it claims have “state-of-the-art intelligence” across many tasks, are 75% cheaper than the industry’s most successful competitors, and are already powering 1,000 of Amazon’s internal apps.
However, the question on the minds of many Amazon watchers was: Why? After all, Amazon has invested $8 billion in Anthropic, including an additional $4 billion announced last week. As part of the deal, Anthropic also committed to using Amazon’s AWS cloud and Amazon’s custom AI computing chip, called Trainium, to train and run its models.
Amazon wants control over its own AI destiny
But Amazon clearly has no intention of relying solely on external partners for its AI strategy. It does not want to be dependent on Anthropic, or any other third party, for its models. It also wants to drive down the cost of its AI offerings for cloud customers, the same low-cost strategy its AWS cloud computing service has long pursued, and that would be harder to do if it used only Anthropic’s models. Finally, the company says its customers wanted capabilities, such as video generation, that Anthropic doesn’t currently offer.
The Nova launch is part of a larger plan to carve Amazon’s own path to AI dominance, one that also includes building what is said to be the world’s largest AI supercomputer, dubbed Project Rainier, which will harness hundreds of thousands of Amazon’s Trainium chips working in unison.
In a keynote speech at re:Invent yesterday, Amazon CEO Andy Jassy outlined three lessons for how Amazon is developing its AI strategy. One, he said, is that as organizations scale their generative AI applications, “the cost of computation really matters” — so a cheaper, commoditized model becomes increasingly important.
Further, Jassy said AI success will not come simply from a capable model, but from one that also addresses concerns such as latency (the time it takes to get results back from the model), user experience, and infrastructure. In addition, he stressed that Amazon wants to offer customers, both internal and external, a variety of models to choose from.
“We have a lot of our in-house builders using (Anthropic’s) Claude,” he said. “But they also use Llama models. They also use Mistral models, and they also use some of our own models, as well as homegrown models they’ve built themselves.”
That was surprising at first, he said, but added that “we keep learning the same lesson over and over again, which is that there will never be one tool to rule the world.”
That appears to be the core of Amazon’s AI master plan. Ben Thompson, a business and technology analyst and author of Stratechery, wrote yesterday that the AI strategy Jassy laid out is “broadly similar to AWS’s” longstanding playbook: just as AWS offers many choices of compute and databases, it will offer a choice of AI models on its Bedrock service. That lineup includes Amazon’s own Nova models, which are likely to be the cheapest option for third-party developers. Amazon is betting that AI will become a commodity, Thompson said: “AWS’s bet is that AI will be important enough that eventually it won’t be special at all, which makes Amazon very happy.”
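In practical terms, that choice shows up as a single Bedrock API that developers can point at different model identifiers. The sketch below is illustrative only, assuming Python with boto3’s bedrock-runtime Converse API; the model IDs shown are placeholders, and actual model availability and pricing depend on region and account access.

```python
# Illustrative sketch: swapping models behind Bedrock's common Converse API.
# The model IDs below are placeholders; actual IDs and access vary by account and region.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    """Send a single-turn prompt to the given Bedrock model and return its text reply."""
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

prompt = "Summarize this paragraph in one sentence: AWS offers a choice of AI models."
for model_id in (
    "amazon.nova-lite-v1:0",                      # hypothetical ID for an Amazon Nova model
    "anthropic.claude-3-5-sonnet-20240620-v1:0",  # hypothetical ID for an Anthropic model
):
    print(f"{model_id}: {ask(model_id, prompt)}")
```

The point of the sketch is that only the model ID changes; the application code around it stays the same, which is what makes it cheap for developers to compare a Nova model against an Anthropic or Meta model on price and quality.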
Amazon has to find a balance between price and performance
However, some argue that cheaper models are not the key to building reliable AI applications, and that performance trumps price when it comes to creating efficient, high-quality solutions. The question is whether Amazon’s Nova models, which the company claims are “as good or better” than competing models on many, though not all, benchmark tests, will be considered good enough to convince developers to make the leap.
This may be the balance Amazon is aiming for, but it’s not clear whether the company has struck it. Yesterday I spoke with Rohit Prasad, Amazon’s SVP and Chief Scientist for AGI (Artificial General Intelligence), who told me the name Nova was deliberate, signaling a new and very different generation of “exceptional quality” AI models.
When I asked why Amazon hadn’t simply asked Anthropic to build the new models for it, Prasad pointed to Amazon’s own “urgent” internal customer needs, such as video generation, something he said Anthropic doesn’t offer, as far as he knows. “We have our Prime Video team that wants to recap seasons, and they can do that with models that can understand video,” he said. “Amazon Ads needs models that can generate videos.”
Prasad would not comment on Amazon’s long-term roadmap for its AI models, but said there will be “more paradigm shifts,” including more capable models. Meanwhile, he said, the Nova models are available to all internal Amazon teams, including the one working on the new generative AI version of Amazon’s Alexa digital assistant (which, as I reported back in June, has been a long and not especially successful effort).
Amazon wants to give its customers a choice among models from various suppliers, he emphasized. The question is, will the same strategy that worked so well for Amazon’s AWS in the past—offering low prices, product choice, and flexibility—pay off again in this new era of artificial intelligence?