Trusted AI

Please Note: As of late February 2024, the LF AI & Data Trusted AI Committee no longer exists under that name. Its work and members have been folded into the Responsible AI Workstream of the Generative AI Commons, as described in the blog post "The Transformation of the Trusted AI Committee to Responsible AI as a Generative AI Commons Workstream".

Trust and responsibility should be core principles of AI. We encourage you to infuse these guiding principles, along with technologies for trust and transparency, into your AI project.

The LF AI & Data Trusted AI Committee is a global group working on policies, guidelines, tools, and industry use cases to ensure that trustworthy AI systems, and the processes used to develop them, continue to improve over time. Future directions include creating a badging or certification process for open source projects that meet the Trusted AI policies and guidelines defined by LF AI & Data. The Trusted AI principles established by the Principles Working Group are presented below.

Principles for Trusted AI

The (R)REPEATS acronym captures the Trusted AI Principles of Reproducibility, Robustness, Equitability, Privacy, Explainability, Accountability, Transparency, and Security. The principles in the (R)REPEATS image are ordered only to match the acronym; all are of equal importance. To find out more about the Principles for Trusted AI, see the blog post "LF AI & Data Announces Principles for Trusted AI".

Participants