Trust and responsibility should be core principles of AI.  We encourage you to build these guiding principles, along with technologies for trust and transparency, into your AI projects.

The LF AI & Data Trusted AI Committee is a global group working on policies, guidelines, tools, and industry use cases to ensure that trustworthy AI systems, and the processes used to develop them, continue to improve over time.  The starting point was a survey of, and outreach to, existing open source Trusted AI projects, inviting them to join the LF AI & Data efforts. Future directions include creating a badging or certification process for open source projects that meet the Trusted AI policies and guidelines defined by LF AI & Data.  We invite you to join and contribute to an evolving document that describes the basic concepts and definitions related to Trusted AI and also aims to standardize its vocabulary and terminology.

To view the Trusted AI projects on the LF AI & Data landscape, please click here.


AI Fairness 360

AI Fairness 360, an LF AI incubation project, is an extensible open source toolkit that can help users examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.

Learn more
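To give a flavor of the kind of group-fairness metric AI Fairness 360 reports, here is a hand-rolled sketch of one such metric, disparate impact (the ratio of favorable-outcome rates between unprivileged and privileged groups). This is an illustration of the concept only, not the toolkit's own API; the example data is made up.

```python
# Illustrative sketch of one group-fairness metric that AI Fairness 360
# reports (disparate impact), computed by hand rather than via the toolkit.

def disparate_impact(outcomes, groups):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes: list of 1 (favorable) / 0 (unfavorable) model decisions
    groups:   list of "unpriv" / "priv" labels, aligned with outcomes
    """
    def favorable_rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return favorable_rate("unpriv") / favorable_rate("priv")

# Hypothetical decisions: the unprivileged group is favored half as often.
outcomes = [1, 0, 0, 1, 1, 1, 1, 1]
groups = ["unpriv"] * 4 + ["priv"] * 4
print(disparate_impact(outcomes, groups))  # → 0.5
```

A value of 1.0 indicates parity between the groups; values far below 1.0 (here 0.5) signal potential discrimination against the unprivileged group, which is the kind of finding the toolkit's mitigation algorithms then address.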

AI Explainability 360

The AI Explainability 360 toolkit, an LF AI Foundation incubation project, is an open-source library that supports the interpretability and explainability of datasets and machine learning models.

Learn more
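As a minimal illustration of the general kind of post-hoc explanation technique such a library packages, the sketch below computes permutation feature importance by hand for a toy model: shuffle one feature's column and measure how much the model's error grows. This is not AI Explainability 360's API, just a self-contained example of the idea.

```python
import random

# Hand-rolled sketch of one post-hoc explanation idea (permutation feature
# importance): a feature matters if shuffling its values hurts predictions.

def model(x):
    # Toy "model": depends heavily on feature 0 and not at all on feature 1.
    return 3.0 * x[0] + 0.0 * x[1]

def permutation_importance(predict, X, y, feature, seed=0):
    """Increase in mean absolute error after shuffling one feature column."""
    rng = random.Random(seed)
    base_err = sum(abs(predict(x) - t) for x, t in zip(X, y)) / len(X)
    column = [x[feature] for x in X]
    rng.shuffle(column)
    X_perm = [list(x) for x in X]
    for row, value in zip(X_perm, column):
        row[feature] = value
    perm_err = sum(abs(predict(x) - t) for x, t in zip(X_perm, y)) / len(X)
    return perm_err - base_err

X = [[1.0, 5.0], [2.0, 1.0], [3.0, 9.0], [4.0, 2.0]]
y = [model(x) for x in X]  # perfect labels, so the base error is zero
print("feature 0:", permutation_importance(model, X, y, feature=0))
print("feature 1:", permutation_importance(model, X, y, feature=1))
```

Because the toy model ignores feature 1 entirely, its importance comes out exactly 0.0, while feature 0's is non-negative; ranking features this way is one simple form of the model-level explanations the toolkit supports.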

Adversarial Robustness Toolbox

Adversarial Robustness Toolbox (ART), an LF AI Foundation incubation project, provides tools that enable developers and researchers to evaluate, defend, certify, and verify machine learning models and applications against adversarial threats.

Learn more
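One of the best-known attacks in this space is the Fast Gradient Sign Method (FGSM): perturb each input coordinate by a small step in the direction of the loss gradient's sign. The sketch below applies one FGSM step to a toy linear model with squared loss, using an analytic gradient; real ART wraps trained models from major ML frameworks, so this is a conceptual illustration only, with made-up weights and inputs.

```python
# Minimal hand-rolled sketch of the Fast Gradient Sign Method (FGSM),
# one of the attacks in the adversarial-robustness literature, applied
# to a toy linear model f(x) = w.x + b with squared loss.

def fgsm_step(x, w, b, target, eps):
    """One FGSM step: move each coordinate eps in the loss gradient's sign.

    For L = (f(x) - target)^2, the gradient w.r.t. x is 2*(f(x)-target)*w.
    """
    f = sum(wi * xi for wi, xi in zip(w, x)) + b
    grad = [2.0 * (f - target) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [1.0, -2.0], 0.5
x = [1.0, 1.0]                       # clean input: f(x) = -0.5, loss 0.25
x_adv = fgsm_step(x, w, b, target=0.0, eps=0.1)
f_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
# The tiny perturbation (each coordinate moved by 0.1) pushes the output
# to -0.8, raising the squared loss from 0.25 to 0.64.
print(x_adv, f_adv)
```

Defenses then work the other way around, e.g. by training on such perturbed inputs (adversarial training) or certifying that no perturbation within a small radius can change a model's decision.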


Mailing lists

The Trusted AI committee maintains four mailing lists. You are invited to join the one that best matches your interests.

Trusted AI Committee:

Trusted AI Announce:

Trusted AI Technical Discussions:

Trusted AI Technical Steering Committee:


Trusted AI Videos