Microsoft bought twice as many of Nvidia’s flagship chips as any of its largest rivals in the US and China this year, as OpenAI’s biggest backer accelerated its investment in artificial intelligence infrastructure.
Analysts at Omdia, a technology consultancy, estimate that Microsoft bought 485,000 of Nvidia’s “Hopper” chips this year. That put Microsoft far ahead of Nvidia’s next biggest US customer Meta, which bought 224,000 Hopper chips, as well as its cloud computing rivals Amazon and Google.
With demand outstripping supply of Nvidia’s most advanced graphics processing units for much of the past two years, Microsoft’s chip hoard has given it an edge in the race to build the next generation of AI systems.
This year, Big Tech companies have spent tens of billions of dollars on data centres running Nvidia’s latest chips, which have become Silicon Valley’s hottest commodity since the debut of ChatGPT two years ago kick-started an unprecedented surge of investment in AI.
Microsoft’s Azure cloud infrastructure was used to train OpenAI’s latest o1 model, as they race against a resurgent Google, start-ups such as Anthropic and Elon Musk’s xAI, and rivals in China for dominance of the next generation of computing.
Omdia estimates that ByteDance and Tencent each ordered about 230,000 of Nvidia’s chips this year, including the H20 model, a less powerful version of Hopper that was modified to meet US export controls for Chinese customers.
Amazon and Google, which along with Meta are stepping up deployment of their own custom AI chips as an alternative to Nvidia’s, bought 196,000 and 169,000 Hopper chips respectively, the analysts said.
Omdia calculates its estimates by analysing companies’ publicly disclosed capital spending, server shipments and supply chain intelligence.

The value of Nvidia, which is now starting to roll out Hopper’s successor Blackwell, has soared to more than $3tn this year as Big Tech companies rush to assemble ever-larger clusters of its GPUs.
However, the stock’s extraordinary surge has waned in recent months amid concerns about slower growth, competition from Big Tech companies’ own custom AI chips and potential disruption to its business in China from Donald Trump’s incoming administration in the US.
ByteDance and Tencent have emerged as two of Nvidia’s biggest customers this year, despite US government restrictions on the capabilities of American AI chips that can be sold in China.
Microsoft, which has invested $13bn in OpenAI, has been the most aggressive of the US Big Tech companies in building out data centre infrastructure, both to run its own AI services such as its Copilot assistant and to rent out to customers through its Azure division.
Microsoft’s Nvidia chip orders are more than triple the number of the same generation of Nvidia AI processors that it bought in 2023, when Nvidia was racing to scale up production of Hopper following ChatGPT’s breakout success.
“Good data centre infrastructure, they’re very complex, capital-intensive projects,” Alistair Speirs, Microsoft’s senior director of Azure Global Infrastructure, told the Financial Times. “They take multi-years of planning. And so forecasting where our growth will be with a little bit of buffer is important.”
Tech companies around the globe will spend an estimated $229bn on servers in 2024, according to Omdia, led by Microsoft’s $31bn in capital expenditure and Amazon’s $26bn. The top 10 buyers of data centre infrastructure, which now include relative newcomers xAI and CoreWeave, account for 60 per cent of global investment in computing power.
Vlad Galabov, director of cloud and data centre research at Omdia, said some 43 per cent of server spending went to Nvidia in 2024.
“Nvidia GPUs claimed a tremendously high share of the server capex,” he said. “We’re close to the peak.”

While Nvidia still dominates the AI chip market, its Silicon Valley rival AMD has been making inroads. Meta bought 173,000 of AMD’s MI300 chips this year, while Microsoft bought 96,000, according to Omdia.
Big Tech companies have also stepped up use of their own AI chips this year as they try to reduce their reliance on Nvidia. Google, which has been developing its “tensor processing units”, or TPUs, for a decade, and Meta, which debuted the first generation of its Meta Training and Inference Accelerator chip last year, each deployed about 1.5mn of their own chips.
Amazon, which is investing heavily in its Trainium and Inferentia chips for cloud computing customers, deployed about 1.3mn of those chips this year. Amazon said this month that it plans to build a new cluster using hundreds of thousands of its latest Trainium chips for Anthropic, an OpenAI rival in which Amazon has invested $8bn, to train the next generation of its AI models.
Microsoft, however, is much earlier in its effort to build an AI accelerator to rival Nvidia’s, with only about 200,000 of its Maia chips installed this year.
Speirs said that using Nvidia’s chips still required Microsoft to make significant investments in its own technology to offer a “unique” service to customers.
“To build the AI infrastructure, in our experience, is not just about having the best chip, it’s also about having the right storage components, the right infrastructure, the right software layer, the right host management layer, error correction and all these other components to build that system,” he said.