The biggest tech companies in the world are now involved in artificial intelligence in some way, shape, or form: creating models that deliver products and services, designing hardware, running web services, or more than one of these categories.
Nvidia is the biggest company in the market, and the big model service companies are becoming household names. OpenAI is on top, with Anthropic the runner-up in many ways.
One of the things I heard on a recent AI Daily Brief podcast episode with Nathaniel Whittemore is the announcement that Anthropic is getting $2 billion in new funding, at a total valuation of $60 billion.
This makes both Amodei siblings (Dario and Daniela) billionaires, along with five other bigwigs at the firm. So who is funding Anthropic this way?
Amazon, etc., on board
By most accounts, the main contributor is Amazon, which has integrated Anthropic’s Claude model into several of its web services.
Here’s a source from Amazon that talks about using Anthropic’s Claude 3 Haiku:
“Unlock the tuning power of Anthropic’s Claude 3 Haiku model with Amazon Bedrock. This comprehensive deep-dive demo walks you through the process, from accessing Amazon Bedrock to customizing Claude 3 Haiku for your business needs. Discover how to increase model accuracy, quality and robustness by encoding company and domain knowledge. Learn to generate higher-quality results, create unique user experiences, and improve performance for domain-specific tasks. Don’t miss this opportunity to realize the full potential of Claude 3 Haiku by starting with the fine-tuning today.”
The co-branding here is clear.
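For readers curious what that integration looks like in practice, here is a minimal sketch of how an application might call Claude 3 Haiku through Amazon Bedrock. The request body follows Bedrock's Anthropic Messages format; the model ID, region, and the assumption that AWS credentials and Bedrock model access are already configured are all mine, not from the demo quoted above.

```python
import json

# Bedrock model ID for Claude 3 Haiku (assumed; check the Bedrock
# console for the IDs enabled in your account and region).
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON body Bedrock expects for Anthropic Claude models."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

# With boto3 installed and AWS credentials configured, the actual call
# would look roughly like this (not executed here):
#
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(modelId=MODEL_ID,
#                                  body=build_request("Summarize our Q3 results."))
#   print(json.loads(response["body"].read())["content"][0]["text"])
```

Fine-tuning, as described in the demo, layers on top of this same plumbing: a customized model gets its own ID, but the request shape stays the same.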
Other companies like Google have also jumped in, and new reports indicate that Lightspeed Venture Partners is also on board. In a way, it makes sense that Amazon, for example, would fund the company this way, given the partnerships in play.
But there is also some speculation about the overall context of this agreement.
Burning through cash?
A recent MarketWatch opinion piece discusses how OpenAI and Anthropic are spending heavily on computing and other online services.
Therese Poletti reports that OpenAI was projected to lose $5 billion in 2024, as the company appears to be losing money on its ChatGPT subscriptions because subscribers use more computing power than their fees cover.
Anthropic, she notes, is following suit, increasing its use of AWS capacity.
Lowering completion costs
Although OpenAI and Anthropic are spending a lot of money, some industry insiders predict that new types of networks will greatly reduce the cost of a ChatGPT session or any other type of AI use.
They point to a trajectory that follows the outlines of Moore’s law, where computing costs decreased as hardware became smaller. Citing various scaling laws, these experts suggest that we will see much lower costs for token usage, some of it enabled by multi-agent AI, in which individual AI components work together and bring their own skill sets to the table.
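To make the declining-cost argument concrete, here is a toy back-of-the-envelope sketch (my own illustration, not a figure from these experts): if the cost per million tokens halves every fixed number of months, the price of AI usage falls geometrically.

```python
# Toy illustration of the scaling-law argument: assume the cost per
# million tokens halves every `halving_months` months. Both the $15
# starting price and the 8-month halving period are assumptions for
# illustration, not measured industry figures.
def projected_cost(start_cost: float, halving_months: float, months: float) -> float:
    """Projected cost per million tokens after `months` have elapsed."""
    return start_cost * 0.5 ** (months / halving_months)

for m in (0, 8, 16, 24):
    print(f"month {m:2d}: ${projected_cost(15.0, 8.0, m):.2f} per M tokens")
# month  0: $15.00, month  8: $7.50, month 16: $3.75, month 24: $1.88
```

Even under modest assumptions, two years of halving turns a $15 price into under $2, which is the shape of the trend these insiders are betting on.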
I go back to Ethan Mollick’s bullish predictions about what robots and AI entities will be able to do in a few years: anything from changing a diaper or comforting the sick to preparing a meal or designing a greeting card.
These are my examples, not his, but there are compelling arguments to be made that technology will get faster, cheaper and much better in 2025!
This Anthropic news is another waypoint for an emerging industry that will become much more important as the months go by.