In an industry famously tight-lipped about finances and buoyed by sky-high valuations ($157B on OpenAI's latest round), I've been trying to get insight into what AI heavyweights OpenAI and Anthropic actually earn. While OpenAI's ChatGPT is the AI assistant everyone knows, Anthropic's rapid ascent, especially in API services, signals a shifting landscape. The pressing question: who's really leading the pack, and what bets are they placing on the future of large language models (LLMs)?
Highlights:
- Revenue Revelations: OpenAI is reportedly generating around $5 billion in annualized revenue, well ahead of Anthropic's projected $1 billion for 2024. Yet Anthropic is growing faster, narrowing the revenue gap from roughly 15x at the start of the year to about 5x. In a race worth billions, that pace is astonishing.
- Consumer vs. Enterprise Focus: ChatGPT dominates the consumer market with approximately $2.7 billion in revenue, making it the go-to AI for the masses; for many people, ChatGPT is the only AI product they know. Anthropic's Claude brings in about $150 million, but its emphasis on Constitutional AI and top-tier models has quickly made it a favorite among enterprises and developers seeking advanced capabilities.
- API Arena Heating Up: In the developer domain, Anthropic is making significant strides. With $800 million in run-rate API revenue, it's closing in on OpenAI's estimated $1.2-1.5 billion, highlighting the industry's appetite for Anthropic's cutting-edge models, its reliability, and the strength of its AWS cloud distribution.
- Cloud Partnerships Pay Off: Both companies owe much of their revenue to cloud collaborations—OpenAI with Microsoft Azure, Anthropic with AWS Bedrock. These alliances not only provide the computational muscle needed but also extend their reach into enterprise ecosystems.
- Profitability Puzzle: Despite robust revenues, profitability remains elusive. OpenAI's gross margins have tightened to an estimated 55-75%, partly due to over-provisioned GPUs and operational inefficiencies. Anthropic operates on a thinner 38% margin, reflecting the hefty cost of GPU resources and the race to optimize AI model deployment. The run-rate and margin arithmetic behind these bullets is sketched below.
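
To make the arithmetic behind these figures concrete, here is a minimal Python sketch of the run-rate, gap-multiple, and gross-margin calculations. All monthly and cost inputs are hypothetical placeholders chosen only to reproduce the reported annualized numbers; neither company publishes this breakdown.

```python
# Back-of-the-envelope math behind the headline figures.
# Monthly and cost inputs below are hypothetical, for illustration only.

def annualized_run_rate(monthly_revenue: float) -> float:
    """Annualize a single month's revenue (the 'run rate' quoted above)."""
    return monthly_revenue * 12

def revenue_gap(a: float, b: float) -> float:
    """Multiple by which revenue a exceeds revenue b."""
    return a / b

def gross_margin(revenue: float, cost_of_revenue: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_revenue) / revenue

# Hypothetical monthly figures implied by the reported annualized numbers.
openai_monthly = 5_000_000_000 / 12      # ~$417M/mo -> $5B run rate
anthropic_monthly = 1_000_000_000 / 12   # ~$83M/mo  -> $1B run rate

print(f"OpenAI run rate:    ${annualized_run_rate(openai_monthly) / 1e9:.1f}B")
print(f"Anthropic run rate: ${annualized_run_rate(anthropic_monthly) / 1e9:.1f}B")
print(f"Revenue gap today:  {revenue_gap(5e9, 1e9):.0f}x (vs. ~15x at year's start)")

# A 38% gross margin on $1B of revenue implies ~$620M in compute and
# serving costs (hypothetical split, for illustration only).
print(f"Implied Anthropic gross margin: {gross_margin(1e9, 0.62e9):.0%}")
```

One caveat worth keeping in mind: a run rate simply extrapolates a single month forward twelve times, so it flatters fast-growing businesses and is not the same as trailing twelve-month revenue.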