Can We Trust AI?
AI is advancing rapidly within the enterprise, with more than half of organizations launching at least one AI deployment in production. As organizations work to improve the performance of AI, the teams building and deploying it must also grapple with the challenge of determining whether their AI models can be trusted. Implementing trusted AI processes requires assessing the degree to which AI models are fair, secure, and explainable, and whether they have well-documented lineage.
LF AI Trusted AI Committee
LF AI is an umbrella foundation of the Linux Foundation that supports open source innovation in artificial intelligence, machine learning, and deep learning. To build trust in the adoption of AI, the Trusted AI Committee has been established as part of Linux Foundation AI (LF AI). The mission of the committee is to:
- (a) define policies, guidelines, tooling, and use cases by industry to create responsible and trusted AI;
- (b) survey and contact current open source trusted AI projects to join LF AI efforts;
- (c) create a badging or certification process for open source projects that meet the trusted AI policies and guidelines defined by LF AI; and
- (d) create a document that describes the basic concepts and definitions related to trusted AI and aims to standardize the vocabulary and terminology.
The Trusted AI Committee has three chairs, spread across regions (Asia, Europe, and the U.S.A.).
Please refer to the wiki for more details.
Formation of Working Groups
To begin this work, the Trusted AI Committee has established two working groups drawn from a diverse range of LF AI member organizations around the world: (1) the Principles Working Group and (2) the Use Cases Working Group. Both working groups recognize the importance of diversity in the voices that contribute to solving problems in this space, and both will work to increase the diversity of contributors while maintaining a balance among Europe, Asia, and America.
Principles of Trusted AI
The Principles Working Group (PWG) is creating an initial whitepaper that surveys a wide range of prior work and will propose practical guidelines. The PWG has set ambitious goals that will inform the work of the Use Cases Working Group. First, the PWG will define a set of baseline definitions for trusted AI. To inform this, it will collect existing reference materials, analyze them according to an appropriate methodology, identify a set of common principles, and propose guidelines for any open source AI project, to be iteratively refined as the principles are put into practice via operational guidelines. The PWG will then identify tools and open source libraries that can be used to implement these common principles, and will discuss and document the relevance of self-certification and audit programs as needed to ensure trust in open AI tools and libraries.
Use Cases by Project, Industry, and Technology
The Use Cases Working Group (UWG) is creating code for specific industry applications of AI (use cases) that can be assessed using the guidelines developed by the PWG, and will provide feedback to help refine those guidelines. This working group aims to identify open source trusted AI tools from member and non-member companies. Because distinguishing use cases by industry is imperative for adoption, the UWG seeks to identify and implement use cases for specific industries, such as finance and automotive, as well as use cases that outline technical integration between open source projects, e.g. Acumos and AIF360. Next, the UWG will create technical guidelines, integration patterns, and best practices for trusted ML functions that can be used in the context of MLOps. As necessary, the UWG will identify and implement integration points between external projects.
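To give a flavor of the kind of group-fairness check that toolkits such as AIF360 provide, the following is a minimal, self-contained sketch (it does not use the AIF360 API itself, and the loan-approval data is purely illustrative) of the disparate-impact ratio, one common fairness metric:

```python
# Simplified sketch of a group-fairness metric in the spirit of
# toolkits like AIF360. The data and group labels are illustrative
# only, not part of any LF AI specification.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    counts = {True: [0, 0], False: [0, 0]}  # is_privileged -> [favorable, total]
    for y, g in zip(outcomes, groups):
        key = (g == privileged)
        counts[key][0] += y
        counts[key][1] += 1
    unpriv_rate = counts[False][0] / counts[False][1]
    priv_rate = counts[True][0] / counts[True][1]
    return unpriv_rate / priv_rate

# Toy loan-approval outcomes (1 = approved) for two demographic groups.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact(outcomes, groups, privileged="a")
# A common rule of thumb flags ratios below 0.8 as potentially unfair.
print(f"disparate impact: {ratio:.2f}")
```

Real toolkits compute this and many related metrics (statistical parity difference, equalized odds, and so on) and also provide bias-mitigation algorithms; the point here is only the shape of the check.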
Future Goals of the Working Groups
After achieving as many of these goals as possible, the Use Cases Working Group will define the set of initial projects that will drive the integration of additional projects. The UWG will build a team of core contributors, with an emphasis on maintaining collaboration among Europe, Asia, and America. This team will work toward best practices and a reference architecture for MLOps in the context of trusted AI, Kubeflow Pipelines for Trusted AI Committee projects that can be consumed within ML platforms, and Apache NiFi pipelines with trusted AI projects for Acumos consumption. The UWG will also define requirements around lineage tracking, metadata collection, and related concerns. Lastly, with so many telecommunications companies under the LF AI umbrella, the working group plans to dive into telco use cases for trustworthy AI.
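To make the lineage-tracking requirement concrete, here is a minimal sketch of the kind of metadata record such requirements might cover: what data, parameters, and timestamp produced a model. The field names and the `credit-risk-v1` model name are hypothetical, not a committee standard:

```python
# Sketch of a model lineage record: a reproducible description of the
# inputs that produced a trained model. Field names are illustrative
# only and do not reflect any LF AI standard.

import hashlib
import json
from datetime import datetime, timezone

def lineage_record(dataset_bytes, params, model_name):
    """Build a metadata record tying a model to its training inputs."""
    return {
        "model": model_name,
        "dataset_sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        "hyperparameters": params,
        "trained_at": datetime.now(timezone.utc).isoformat(),
    }

record = lineage_record(
    dataset_bytes=b"id,feature,label\n1,0.5,1\n",
    params={"learning_rate": 0.01, "epochs": 10},
    model_name="credit-risk-v1",
)
print(json.dumps(record, indent=2))
```

Hashing the training data lets an auditor later verify that a deployed model was trained on exactly the dataset the record claims, which is the core of what lineage tracking is meant to provide.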