
Trusted AI

Trusted AI (or trustworthy AI) is an approach to artificial intelligence that aims to develop systems that are reliable, ethical, transparent and respectful of human values. It is part of a framework designed to ensure that AI technologies operate in a way that is safe, fair and responsible, by aligning their decisions with social, legal and ethical standards.

The concept is closely related to that of Responsible AI.

Key principles of trusted AI

  1. Ethics:
    • Respect for human rights, dignity and justice.
    • Avoidance of discriminatory bias (e.g. discrimination based on gender or ethnic origin).
  2. Transparency (explainability):
    • Ability to explain AI decisions (interpretable AI).
    • Clear documentation of the algorithms and data used.
  3. Robustness and safety:
    • Resistance to errors, malicious attacks (e.g. adversarial perturbations) and noisy data.
    • Guarantee of reliable operation in real-world conditions.
  4. Accountability:
    • Clear definition of legal responsibility in the event of error or harm caused by AI.
    • Implementation of monitoring and audit mechanisms.
  5. Respect for privacy:
    • Protection of personal data (e.g. compliance with the GDPR in Europe).
    • Use of techniques such as differential privacy or federated learning (see the sketch after this list).
  6. Fairness:
    • Elimination of systemic bias in data or algorithms.
    • Guarantee of equal treatment for all users.
  7. Human oversight:
    • Maintaining human supervision of critical decisions (e.g. medical or judicial).
    • Principle of "human-in-the-loop": keeping humans in the decision loop.
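As a minimal illustration of the privacy techniques mentioned above, the sketch below implements the Laplace mechanism, one of the standard building blocks of differential privacy. The function name, the age data and the bounds are illustrative assumptions, not taken from any particular library.

    # Minimal sketch of the Laplace mechanism for differential privacy.
    # Data, bounds and parameter values are illustrative assumptions.
    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        """Return a noisy answer satisfying epsilon-differential privacy.

        sensitivity: maximum change one individual's data can cause
                     in the true query result.
        epsilon:     privacy budget (smaller = stronger privacy).
        """
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # Example: privately release the average age of a small dataset.
    ages = [34, 29, 41, 53, 38]
    true_mean = sum(ages) / len(ages)
    # For a mean over n records bounded in [0, 100], sensitivity is 100 / n.
    noisy_mean = laplace_mechanism(true_mean, sensitivity=100 / len(ages), epsilon=1.0)
    print(f"true mean: {true_mean:.1f}, released mean: {noisy_mean:.1f}")

The smaller the epsilon, the stronger the privacy guarantee and the noisier the released value; choosing this budget is a policy decision as much as a technical one.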

Critical areas of application

  • Health: medical diagnostics, surgical robots.
  • Justice: support for judicial decision-making (e.g. assessing the risk of reoffending).
  • Finance: credit granting, fraud detection.
  • Autonomous transport: safety of driverless vehicles.
  • Recruitment: unbiased selection of candidates.

Regulatory frameworks and initiatives

  • European AI Act: classifies AI systems according to their level of risk (banning "unacceptable-risk" uses and strictly regulating "high-risk" ones).
  • EU Ethics Guidelines for Trustworthy AI: seven key requirements, including transparency and diversity.
  • OECD AI Principles: promote innovative and trustworthy AI.
  • IEEE Ethically Aligned Design: technical standards for responsible AI.

Challenges

  • Algorithmic bias: reproduction of social inequalities (e.g. a recruitment AI that disadvantages women); a parity-check sketch follows this list.
  • Black-box models: the complexity of models such as deep neural networks makes them difficult to interpret.
  • Security: vulnerability to attacks (e.g. data poisoning, the alteration of training data).
  • Balancing innovation and regulation: the risk of slowing technological progress.
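As a minimal sketch of how such bias can be detected, the snippet below computes selection rates per demographic group and flags a large demographic-parity gap. The candidate data and the 0.2 threshold are illustrative assumptions, not a reference implementation.

    # Minimal demographic-parity check for a binary classifier
    # (e.g. a recruitment model). Data and threshold are illustrative.

    def selection_rate(predictions, groups, group_value):
        """Share of positive predictions within one demographic group."""
        in_group = [p for p, g in zip(predictions, groups) if g == group_value]
        return sum(in_group) / len(in_group)

    # 1 = candidate shortlisted, 0 = rejected; groups are self-reported gender.
    predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups      = ["F", "F", "M", "M", "F", "M", "F", "M", "M", "F"]

    rate_f = selection_rate(predictions, groups, "F")
    rate_m = selection_rate(predictions, groups, "M")
    gap = abs(rate_f - rate_m)
    print(f"selection rate F: {rate_f:.2f}, M: {rate_m:.2f}, gap: {gap:.2f}")
    if gap > 0.2:  # illustrative audit threshold
        print("Warning: possible disparate impact; audit the model and data.")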

Examples

  • IBM AI Fairness 360: a toolkit for detecting and mitigating bias in AI models.
  • Google What-If Tool: analyzes the impact of data on model predictions.
  • Explainable AI (XAI): methods such as LIME or SHAP for interpreting model decisions, as sketched below.
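As a minimal sketch of how such XAI methods are applied (assuming the shap and scikit-learn packages are installed, with purely synthetic data):

    # Minimal sketch: SHAP values for a tree-based classifier.
    # The data and model are synthetic, illustrative assumptions.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                  # 3 synthetic features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label driven by features 0 and 1

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:5])

    # For each of the first 5 predictions, the SHAP values quantify how much
    # each feature pushed the output toward or away from the positive class.
    print(shap_values)

Methods such as LIME work similarly, but approximate the model locally with a simpler, interpretable surrogate rather than computing attribution values over the whole ensemble.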