AI microchip supplier Nvidia, the world’s most valuable company by market cap, relies heavily on a few anonymous customers who collectively contribute tens of billions of dollars in revenue.
The AI chip darling warned investors once more in its quarterly 10-Q filing with the SEC that it has key accounts so crucial that each one’s orders crossed the threshold of 10% of Nvidia’s worldwide consolidated revenue.
An elite trio of particularly deep-pocketed customers, for example, individually bought between $10 billion and $11 billion worth of goods and services during the first nine months ended in late October.
Fortunately for Nvidia investors, that won’t be changing anytime soon. Mandeep Singh, global head of technology research at Bloomberg Intelligence, says he believes founder and CEO Jensen Huang’s prediction that spending won’t stop.
“The data center training market could reach $1 trillion without any real pullback,” Singh said, by which point Nvidia’s share would almost certainly drop from its current 90%. But that could still mean hundreds of billions of dollars in annual revenue.
Nvidia’s supply is limited.
Outside of defense contractors clustered around the Pentagon, it’s highly unusual for a company to have such a concentration of risk among a handful of customers, let alone one poised to become the world’s first $4 trillion company.
Looking strictly at Nvidia’s accounts on a three-month basis, there were four anonymous whales that, in total, accounted for nearly every other dollar of sales in the second fiscal quarter. This time at least one of them dropped out, leaving only three that meet this criterion.
Those anonymous whales likely include Microsoft, Meta, and possibly Super Micro, Singh said. Nvidia declined to comment on the speculation.
Nvidia refers to them only as Customers A, B, and C. All told, they bought a combined $12.6 billion worth of goods and services, more than a third of Nvidia’s $35.1 billion total for the fiscal third quarter through the end of October.
Their share was also split equally, at 12% apiece (three customers each at 12% of the $35.1 billion quarter works out to the combined $12.6 billion), suggesting they were likely receiving the maximum number of chips allocated to them rather than the amount they would ideally want.
That would fit with Huang’s comments that his company faces a supply bottleneck. Nvidia simply cannot churn out more chips: it has outsourced the wholesale fabrication of its industry-leading AI microchips to Taiwan’s TSMC and has no production facilities of its own.
Middleman or End User?
Importantly, the designations “Customer A,” “Customer B,” and so on are not fixed from one fiscal period to the next. They can and do swap places, with Nvidia keeping the identities a trade secret for competitive reasons. No doubt these customers themselves would prefer that their investors, employees, critics, and competitors not see how much money they spend on Nvidia chips.
For example, the party designated “Customer A” purchased approximately $4.2 billion in goods and services in the most recent quarterly fiscal period. Yet it seems to have spent less in the past, as it does not cross the 10% threshold for the first nine months overall.
Meanwhile, “Customer D” seems to have done just the opposite: it reduced purchases of Nvidia chips in the most recent fiscal quarter but still represents 12% of the business year-to-date.
Because their names are anonymous, it’s hard to say whether they are middlemen, like troubled Super Micro Computer, which supplies data center hardware, or end users, like Elon Musk’s xAI, which came out of nowhere to build its new Memphis compute cluster in just three months.
Long-term risks for Nvidia include a shift from training to inference chips.
Ultimately, however, there are only a handful of companies with the capital to compete in the AI race because training large language models can be prohibitively expensive. Typically these are cloud computing hyperscalers such as Microsoft.
Oracle, for example, recently announced plans to build a zettascale data center with more than 131,000 of Nvidia’s state-of-the-art Blackwell AI training chips, which would be more powerful than any individual site.
It is estimated that the electricity required to run such a large compute cluster would be equivalent to the production capacity of about two dozen nuclear power plants.
Bloomberg Intelligence analyst Singh sees only a few long-term risks for Nvidia. For one, some hyperscalers will likely scale back orders, diluting its market share. One potential candidate is Alphabet, which has its own training chips, called TPUs.
Second, its dominance in training does not carry over to inference, which runs generative AI models after they have been trained. The technical requirements there aren’t nearly as cutting-edge, meaning there is much more competition, not only from rivals like AMD but also from companies with custom silicon of their own, like Tesla. Eventually, inference will become a much more meaningful business as more and more companies adopt AI.
“There are a lot of companies that are trying to focus on the inference opportunity, because you don’t need a high-end GPU accelerator chip for that,” Singh said.
Asked whether this long-term shift toward inference was a bigger risk than the eventual loss of market share in training chips, he replied: “Absolutely.”