Elon Musk plans to expand Colossus AI supercomputer tenfold



Elon Musk’s artificial intelligence start-up xAI has pledged to expand its Colossus supercomputer tenfold to incorporate more than 1mn graphics processing units, in an effort to leap ahead of rivals such as Google, OpenAI and Anthropic.

Colossus, built in just three months earlier this year, is believed to be the largest supercomputer in the world, running a cluster of more than 100,000 interconnected Nvidia GPUs. The chips are used to train Musk’s chatbot Grok, which is less advanced and has fewer users than market leader ChatGPT or Google’s Gemini.

Work has already begun to increase the size of the facility in Memphis, Tennessee, according to a statement from the Greater Memphis Chamber on Wednesday. Nvidia, Dell and Supermicro Computer would also establish operations in Memphis to support the expansion, the chamber of commerce said, while it would set up an “xAI special operations team” to “provide round-the-clock concierge service to the company”.

The cost of acquiring so many GPUs would be significant. The latest generation of Nvidia GPUs typically costs tens of thousands of dollars, although older versions of the chips can be cheaper. Musk’s planned expansion of Colossus would require an investment likely to reach tens of billions of dollars, plus the high cost of building, powering and cooling the vast servers in which they would sit. xAI has raised about $11bn in capital from investors this year.

AI companies are scrambling to secure GPUs and access to data centres to supply the computing power needed to train and run their frontier large language models.

OpenAI, the maker of ChatGPT, has an almost $14bn partnership with Microsoft that includes credits for computing power. Anthropic, the maker of the Claude chatbot, has received $8bn in funding from Amazon and will soon be given access to a new cluster of more than 100,000 of its specialised AI chips.

Rather than form partnerships, Musk, the world’s richest man, has used his power and influence across the tech sector to build his own supercomputing capacity, although he is playing catch-up after founding xAI barely more than a year ago. The trajectory has been steep: the start-up is valued at $45bn and recently raised another $5bn.

Musk is in fierce competition with OpenAI, which he co-founded with Sam Altman among others in 2015. The pair subsequently fell out, and Musk is now suing OpenAI, seeking to block its transition from a non-profit to a more conventional business.

An investor in xAI said the speed with which Musk had created Colossus was the “feather in the cap” of the AI company, despite it having limited commercial product offerings. “He has built the most powerful supercomputer in the world in three months.”

Jensen Huang, chief executive of Nvidia, said in October that “there was only one person in the world who could do that”. Huang has referred to Colossus as “easily the fastest supercomputer on the planet as one cluster”, and said a data centre of this size would typically take three years to build.

The Colossus project has attracted controversy for the speed with which it was built. Some have accused it of skirting planning permissions and have criticised the demands it places on the region’s power grid.

“We’re not just leading from the front; we’re accelerating progress at an unprecedented pace while ensuring the stability of the grid utilising Megapack technology,” Brent Mayo, xAI’s senior manager for site builds and infrastructure, said at an event in Memphis, according to the statement.

