The supercomputer project clearly reflects Musk’s ambition and his vision for advancing the potential of AI.
Elon Musk recently told investors that his startup xAI plans to build a supercomputer to power the next version of its Grok AI chatbot.
The billionaire plans to connect 100,000 Nvidia H100 GPUs into a single supercomputer, dubbed the “Gigafactory of Computing”, to train the company’s AI models.
The Information reported that Musk aims to bring the “Gigafactory of Computing” into operation by the fall of 2025 and has pledged to personally oversee the project’s progress to ensure it is completed on time.
He also added that xAI may partner with Oracle to develop the giant machine.
Elon Musk wants to build the world’s largest supercomputer
With 100,000 Nvidia chips linked together, the completed system would be four times more powerful than today’s largest GPU clusters and would become the largest supercomputer in the world, Musk told investors in a presentation in May.
Experts say the supercomputer clearly reflects Musk’s ambition and his vision for pushing AI’s potential forward.
If realized, the “Gigafactory of Computing” could help xAI outperform its competitors and attract more resources thanks to its computing power.
Reuters said Musk founded xAI last year as a competitor to Microsoft-backed OpenAI and Alphabet’s Google. The startup’s main product is Grok-1, a large language model similar to ChatGPT.
Earlier this year, the company estimated that training the Grok 2 model would require about 20,000 Nvidia H100 GPUs, while Grok 3 and later models would need 100,000 H100 chips.
Musk is reportedly buying Nvidia’s H100 GPUs and partnering with Oracle to develop the supercomputer
According to the Financial Times, each Nvidia H100 chip costs more than $30,000, so with 100,000 GPUs, Musk’s supercomputer would cost about 3 billion USD (roughly 76,396 billion VND) in chips alone.
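For illustration, the headline figure is a simple back-of-envelope multiplication of the GPU count and unit price cited above; the USD/VND exchange rate in the sketch below is an assumption used only to show how the VND conversion comes out, not a figure from the article.

```python
# Back-of-envelope estimate of the GPU bill for the "Gigafactory of Computing".
# Figures from the article: 100,000 H100 GPUs at about $30,000 each (Financial Times).
# The USD/VND rate is an assumed value for illustration, not taken from the article.
gpu_count = 100_000
price_per_gpu_usd = 30_000      # lower bound; FT says "more than $30,000"
usd_to_vnd = 25_465             # assumed exchange rate

total_usd = gpu_count * price_per_gpu_usd
total_vnd = total_usd * usd_to_vnd

print(f"GPU cost: ~${total_usd / 1e9:.1f} billion")       # ~ $3.0 billion
print(f"          ~{total_vnd / 1e9:,.0f} billion VND")    # ~ 76,395 billion VND
```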
Additionally, Musk is in talks with investment giants such as Sequoia Capital and Lightspeed Capital about a $6 billion funding round for xAI, which is expected to close in June.
As the AI race intensifies, whichever company owns more specialized GPUs gains the upper hand, and everyone from startups to the world’s largest technology corporations is racing to stockpile AI chips.
Nvidia’s GPUs are currently the most sought after, while other companies such as AMD are starting to launch competing products.