A GPU (Graphics Processing Unit) is a specialised hardware component originally designed to accelerate graphics rendering in games and 3D applications.
In the field of AI, GPUs are essential for their ability to perform massively parallel calculations, speeding up the training and execution of AI models, deep learning systems, and neural networks.
Unlike CPUs (central processing units), GPUs have thousands of computing cores, enabling them to rapidly process the matrix operations (e.g. matrix multiplication) at the heart of frameworks such as TensorFlow and PyTorch. Dedicated architectures (e.g. NVIDIA's CUDA platform and Tensor Cores) make them a pillar of modern AI infrastructure, from data centres to autonomous vehicles and the generation of multimedia content (images, videos).
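To make this parallelism concrete, here is a minimal PyTorch sketch that times the same matrix multiplication on the CPU and, if one is available, on a CUDA GPU. The matrix size (4096) is an arbitrary choice for illustration:

```python
import time
import torch

def timed_matmul(device: str, n: int = 4096) -> float:
    """Multiply two random n x n matrices on the given device, return seconds."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup before starting the timer
    start = time.perf_counter()
    c = a @ b  # the matrix multiplication GPUs spread across thousands of cores
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels run asynchronously; wait for them
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.3f} s")
```

On typical hardware the GPU run is one to two orders of magnitude faster, which is exactly the gap that makes GPUs central to model training.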
1. Dominant GPU types for AI
- NVIDIA H100/H200: Optimised for data centres, used by giants such as Microsoft, Google and Meta to train massive models.
- NVIDIA Blackwell (B200): A new generation focused on energy efficiency and parallel computing, deployed in data centres.
- RTX 5000 Series (Blackwell): Designed for consumer applications (gaming, design), but incorporating AI technologies such as DLSS 4 and neural rendering.
- Google TPU: Specialised AI processors, used internally by Google to reduce its dependence on NVIDIA GPUs (see the device-query sketch after this list).
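A quick way to see which of these GPUs a framework actually detects is to query the CUDA runtime through PyTorch; a minimal sketch (the memory figures in the comment are approximate published specs, not queried values):

```python
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        # total_memory is in bytes; an H100 SXM reports ~80 GB, an RTX 5090 ~32 GB
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB, "
              f"{props.multi_processor_count} SMs")
else:
    print("No CUDA GPU detected (TPUs need a different runtime, e.g. torch_xla).")
```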
2. GPU prices (2025)
| Model | Price (USD) | Target audience |
|---|---|---|
| RTX 5090 | $1,999 | Professionals, gamers |
| RTX 5080 | $999 | Enthusiasts |
| RTX 5070 Ti | $749 | Creatives, demanding gamers |
| RTX 5070 | $549 | General public |
| Blackwell GPU (data centre) | Undisclosed (estimated >$10,000) | Companies, cloud providers |
Note: Prices for data-centre GPUs (H100, Blackwell) are not official, but they cost far more than consumer models.
3. Power consumption
- Consumer GPUs:
  - RTX 5090: 360 W.
  - RTX 5070: 250 W.
- Data-centre GPUs:
  - Data centres equipped with Blackwell require 300-500 MW, compared with 100-200 MW previously.
  - A ChatGPT interaction consumes roughly 10 times more energy than a Google search.
- Energy efficiency:
  - Blackwell GPUs reduce the energy cost of large language models by up to 25×, but overall demand is exploding with generative AI.
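The "10×" comparison above can be made concrete with back-of-the-envelope numbers. Commonly cited estimates put a Google search at roughly 0.3 Wh and a ChatGPT query at roughly 3 Wh; both figures are assumptions for illustration, as is the query volume:

```python
GOOGLE_SEARCH_WH = 0.3   # commonly cited estimate, Wh per query (assumption)
CHATGPT_QUERY_WH = 3.0   # ~10x a search, matching the ratio quoted above

queries_per_day = 100_000_000  # hypothetical daily query volume
daily_kwh = queries_per_day * CHATGPT_QUERY_WH / 1000
print(f"Ratio: {CHATGPT_QUERY_WH / GOOGLE_SEARCH_WH:.0f}x per query")
print(f"{queries_per_day:,} queries/day = {daily_kwh / 1000:,.0f} MWh/day")
```

At this hypothetical volume, inference alone would draw hundreds of MWh per day, which is why the efficiency gains per model do not translate into lower overall demand.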
4. Number of GPUs required for an AI model
- Example 1: xAI (Elon Musk) built a supercomputer with 100,000 H100s in 122 days, with plans to scale to 200,000 H100/H200s in 2025.
- Example 2: Meta had the equivalent of 60,000 H100s at the end of 2024, including H200s and Blackwells.
- Example 3: Training a model like BLOOM (generative AI) requires thousands of GPUs and emits around 50 tonnes of CO₂, roughly 10 times the annual carbon footprint of a French resident.
General estimate:
- Advanced language models (e.g. GPT-4): Several tens of thousands of GPUs for training (see the rule-of-thumb sketch after this list).
- Specialised applications: A few hundred to a few thousand GPUs, depending on complexity.
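A common rule of thumb behind such estimates is that training a dense transformer costs about 6 × parameters × tokens FLOPs. The sketch below turns that into a GPU count; the parameter count, token count, per-GPU throughput, and utilisation are all illustrative assumptions, not figures from this article:

```python
def gpus_needed(params: float, tokens: float, days: float,
                flops_per_gpu: float = 1e15, utilisation: float = 0.4) -> float:
    """Estimate GPU count from the ~6*N*D training-FLOPs rule of thumb.

    flops_per_gpu: peak throughput per GPU (~1e15 FLOP/s for an H100 in BF16).
    utilisation: fraction of peak actually sustained (0.3-0.5 is typical).
    """
    total_flops = 6 * params * tokens
    seconds = days * 24 * 3600
    return total_flops / (flops_per_gpu * utilisation * seconds)

# Hypothetical GPT-4-class run: 1e12 parameters, 1e13 tokens, 90 days
print(f"~{gpus_needed(1e12, 1e13, 90):,.0f} GPUs")
```

Under these assumptions the estimate lands around 20,000 GPUs, consistent with the "several tens of thousands" order of magnitude quoted above.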
5. Environmental impact and challenges
- Energy: Data centres could consume 1,000 TWh in 2026 (roughly Japan's annual electricity consumption).
A concrete example
To train a generative AI model comparable to ChatGPT:
- GPUs required: ~10,000 H100s (estimate based on Microsoft and xAI infrastructures).
- Hardware cost: >$50 million.
- Power consumption: ~5 GWh for training (equivalent to the annual consumption of 500 homes).
- Inference phase: 60-70% of total energy consumption (a sanity-check calculation follows).
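As a sanity check on these figures: 10,000 GPUs drawing roughly 700 W each (the H100 SXM's TDP) for about 30 days works out to the ~5 GWh quoted above. The run length, the overhead-free power draw, and the per-home consumption are simplifying assumptions:

```python
gpu_count = 10_000
tdp_watts = 700          # H100 SXM TDP; ignores cooling/infrastructure overhead
training_days = 30       # assumed run length for this back-of-the-envelope check

energy_gwh = gpu_count * tdp_watts * training_days * 24 / 1e9  # Wh -> GWh
print(f"{energy_gwh:.1f} GWh")  # ~5.0 GWh, matching the estimate above

# At an assumed ~10,000 kWh/year per home, 5 GWh is about 500 homes' usage
homes_equiv = energy_gwh * 1e9 / 1000 / 10_000
print(f"= {homes_equiv:.0f} homes' annual consumption")
```

The numbers line up with the bullet points above, which suggests the article's estimate assumes a training run on the order of a month at full power.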