GPU

A GPU (Graphics Processing Unit) is a specialised hardware component originally designed to accelerate graphics rendering in games and 3D applications.

In the field of AI, GPUs are essential for their ability to perform massively parallel calculations, accelerating the training and execution of AI models, particularly deep-learning models and neural networks.

Unlike CPUs (central processing units), GPUs have thousands of computing cores, enabling them to rapidly process the complex matrix operations (e.g. matrix multiplication) required by frameworks such as TensorFlow or PyTorch. Their dedicated architectures (e.g. NVIDIA's CUDA platform and Tensor Cores) make them a cornerstone of modern AI infrastructure, from data centres to autonomous vehicles, including the generation of multimedia content (images, videos).
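
To illustrate this parallelism, here is a minimal PyTorch sketch (assuming a CUDA-capable Nvidia GPU and a recent PyTorch install; the matrix size is illustrative) that runs the same matrix multiplication on the CPU and on the GPU:

```python
import time

import torch

# A matrix large enough to keep the GPU's thousands of cores busy.
N = 4096
a = torch.randn(N, N)
b = torch.randn(N, N)

# Time the multiplication on the CPU.
start = time.perf_counter()
_ = a @ b
cpu_time = time.perf_counter() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()  # wait for the transfers to finish before timing
    start = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()  # GPU kernels launch asynchronously
    gpu_time = time.perf_counter() - start
    print(f"CPU: {cpu_time:.3f} s, GPU: {gpu_time:.3f} s")
else:
    print(f"CPU: {cpu_time:.3f} s (no CUDA GPU found)")
```

On a data-centre GPU such as the H100, this kind of operation typically runs one to two orders of magnitude faster than on a CPU, which is precisely the workload profile of deep-learning training.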

1. Dominant GPU types for AI

  • Nvidia H100/H200: GPUs optimised for data centres, used by giants such as Microsoft, Google and Meta to train massive models.
  • Nvidia Blackwell (B200): a new generation focused on energy efficiency and parallel computing, deployed in data centres (B200) and high-end PCs (RTX 5000 Series).
  • RTX 5000 Series (Blackwell): designed for consumer applications (gaming, design), but incorporating AI technologies such as DLSS 4 and neural rendering.
  • Google TPU: specialised processors for AI, used internally by Google to reduce its dependence on Nvidia GPUs (a sketch for checking which GPU a machine exposes follows this list).
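
A quick way to see which of these devices a given machine has is to query PyTorch's CUDA runtime; a minimal sketch, assuming PyTorch with CUDA support is installed:

```python
import torch

# List the CUDA devices PyTorch can see; the name string identifies the
# model, e.g. "NVIDIA H100 80GB HBM3" or "NVIDIA GeForce RTX 5090".
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, "
              f"{props.total_memory / 1024**3:.0f} GiB memory, "
              f"{props.multi_processor_count} SMs")
else:
    print("No CUDA-capable GPU detected")
```

Note that Google TPUs would not appear here: they use a different runtime, accessed for example via JAX or PyTorch/XLA.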

2. GPU prices (2025)

| Model            | Manufacturer | Price (USD)       | Target audience             |
|------------------|--------------|-------------------|-----------------------------|
| H100 (Hopper)    | Nvidia       | $25,000 - 30,000  | Enterprise, cloud providers |
| B200 (Blackwell) | Nvidia       | $30,000 - 40,000  | Enterprise, cloud providers |
| A100             | Nvidia       | $10,000           | Enterprise, cloud providers |
| MI300            | AMD          | $5,000 - 10,000   | Enterprise, cloud providers |
| RTX 5090         | Nvidia       | $1,999            | Professionals, gamers       |
| RTX 5070         | Nvidia       | $549              | General public              |

3. Power consumption

    • B200: 1,000 W
    • H100: 700 W
    • A100: 400 W
    • RTX 5090: 360 W
    • RTX 5070: 250 W

Data centres equipped with Blackwell B200s require 300-500 MW, compared with 100-200 MW previously with H100s.
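A back-of-the-envelope calculation from the per-GPU figures above reproduces the order of magnitude; a sketch in which the 1.5x overhead factor for cooling, networking and host servers is an assumption (real overheads vary by facility):

```python
# Per-GPU power draw (watts) from the list above.
GPU_TDP_W = {"B200": 1000, "H100": 700, "A100": 400}

def facility_power_mw(model: str, gpu_count: int, overhead: float = 1.5) -> float:
    """Rough total facility draw in megawatts for a GPU cluster."""
    return GPU_TDP_W[model] * gpu_count * overhead / 1e6

# 100,000 H100s (the xAI cluster size cited in section 4) at full load:
print(f"{facility_power_mw('H100', 100_000):.0f} MW")  # ~105 MW
```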

An interaction with ChatGPT consumes around 10 times more energy than a Google search (source: International Energy Agency).

4. Number of GPUs required for an AI model

  • Example 1: xAI (Elon Musk) built a supercomputer with 100,000 H100s in 122 days, with plans to increase to 200,000 H100/H200s in 2025.
  • Example 2: Meta had the equivalent of 60,000 H100s at the end of 2024, including H200s and Blackwells.
  • Example 3: Training a model like BLOOM (generative AI) requires thousands of GPUs and emits around 50 tonnes of CO₂, roughly 10 times the annual carbon footprint of a French person.

General estimates:

  • Advanced language models (e.g. GPT-4): several tens of thousands of GPUs for training.
  • Specialist applications: a few hundred to a few thousand GPUs, depending on complexity (a rough sizing sketch follows this list).
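
One common way to arrive at such counts is the rule of thumb that transformer training costs about 6 x parameters x tokens floating-point operations. The sketch below applies it; the peak throughput, utilisation and run length are all illustrative assumptions:

```python
def gpus_needed(params: float, tokens: float,
                peak_flops: float = 989e12,  # approx. H100 dense BF16 peak
                utilisation: float = 0.35,   # fraction of peak typically achieved
                days: float = 90) -> float:
    """Estimate the GPU count for a fixed-length training run."""
    train_flops = 6 * params * tokens  # rule-of-thumb training cost
    flops_per_gpu = peak_flops * utilisation * days * 86_400
    return train_flops / flops_per_gpu

# Hypothetical GPT-4-class run: 1e12 parameters on 1e13 tokens in 90 days.
print(f"{gpus_needed(1e12, 1e13):,.0f} H100s")  # ~22,000
```

This lands squarely in the "several tens of thousands" range cited above.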

5. Environmental impact and challenges

  • Energy: data centres could consume 1,000 TWh in 2026 (equivalent to Japan's annual electricity consumption).

A concrete example

To train a generative AI model comparable to ChatGPT:

  • GPUs required: ~10,000 H100s (estimate based on Microsoft and xAI infrastructures).
  • Hardware cost: over $50 million.
  • Power consumption: ~5 GWh for the training run (equivalent to the annual consumption of 500 homes; see the sanity check after this list).
  • Inference phase: 60-70% of total energy consumption.
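
These figures are mutually consistent, as a quick sanity check shows (pure arithmetic on the numbers above, assuming the GPUs run near their 700 W draw for the whole run):

```python
gpus = 10_000
tdp_w = 700                           # H100 power draw from section 3
cluster_mw = gpus * tdp_w / 1e6       # 7 MW for the GPUs alone
train_gwh = 5
hours = train_gwh * 1e3 / cluster_mw  # GWh -> MWh, divided by MW
print(f"{cluster_mw:.0f} MW cluster; 5 GWh lasts {hours:.0f} h (~{hours / 24:.0f} days)")
# -> 7 MW cluster; 5 GWh lasts 714 h (~30 days)
```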