Guest Author: Susan Malaika, LF AI & Data Trusted AI Committee Member
The LF AI & Data Foundation and the Trusted AI Committee are thrilled to see the progress that has been made within the group, particularly the launch of the RREPEATS Principles. On 10 February 2021, Souad Ouali from Orange led a webinar on RREPEATS, the eight principles for Trusted AI identified by the Principles Working Group (WG) in the Trusted AI Committee. The RREPEATS acronym captures: Reproducibility, Robustness, Equitability, Privacy, Explainability, Accountability, Transparency, and Security.
I invite you to read on for a brief summary of the RREPEATS introduction webinar and encourage you to view the complete webinar on YouTube.
Introduction:
The session kicked off with Souad inviting attendees to share the words that came to mind when discussing Trusted AI. A wide array of words was shared, with bias being an overarching response along with explicability, transparency, robustness, and affordability. This exercise set the stage for further discussion of whether the principles are restricted to machine learning and whether “Good Old-Fashioned AI” (GOFAI) is included. Souad clarified that the principles can be applied broadly, underscoring the importance of the topic.
Discussion Takeaways:
The webinar included both positive and thought-provoking questions and discussions; below are just a few takeaways:
- Alejandro Saucedo (The Institute for Ethical AI / Seldon) and Francois Jezequel (Orange) joined Souad in emphasizing that the principles provide direction for the processes and people that deliver AI.
- We can drill down into tools that support the principles across various categories, such as tools for datasets and models, or tools for particular use cases and domain experts, e.g., tools to explore intended and unintended bias when applying the principle of equitability.
- Typical use cases are adopted as illustrations for particular principles, such as credit risk assessment for equitability and self-driving cars for accountability.
- There is a relationship between pairs of principles, such as equitability (identifying bias) and accountability (being responsible and answerable for what happened), along with a relationship between explainability and transparency. The latter principles are connected with being easy to understand, which makes it possible to foster trust. And…
- Trust is the foundation of relationships. Shared goals can only be achieved if there is trust. Trust, together with the principles of explainability and transparency, is necessary for the increased adoption of AI.
Souad shared the process the Principles WG used to create RREPEATS, which included reviewing many documents, including content from the OECD, EU, SoA, ACM, IEEE, and DoD. It was clear that many minds were involved and that work remains to be done as the effort continues to evolve.
Conclusion and Follow-On:
Souad wrapped up the discussion by pointing out that AI is a tool just like any other tool, and that in itself AI is neither good nor evil. She shared an analogy with a knife, which can be used for good, to prepare vegetables, but can also be used for evil, to kill. She added that the evil is not in the tool but in the person using the tool. Souad closed by stating that the principles for Trusted AI provide a guide for AI and its application, and that the challenge is how to apply the principles across the entire AI life cycle, which will be discussed in the next webinar.
Please join us on 28 April 2021 for the next session in this webinar series: The Trusted AI Principles – Practical Examples. Register here!
In this session, you will learn more about the application of the RREPEATS Principles for two practical examples –
- Classification of Encrypted Traffic Application – Iman Akbari Azirani & Noura Limam, University of Waterloo; Bertrand Mathieu, Orange Labs, France
- RosaeNLG Framework (an LF AI & Data project) – Ludan Stoecklé, CTO of Data & AI Lab BNP Paribas CIB and author of RosaeNLG
The session will also include a discussion with Calvin Lawrence, CTO & Distinguished Engineer, Cognitive Solutions at IBM; Alejandro Saucedo, Engineering Director at Seldon and Chief Scientist at The Institute for Ethical AI; and Emilie Sirvent-Hien, Responsible AI program manager at Orange.
Stay connected with the Trusted AI Committee by joining the mailing list here and joining us for an upcoming meeting! Learn more here.
LF AI & Data Key Links
- Learn about membership opportunities
- Explore the interactive landscape
- Check out our technical projects
- Join us at upcoming events
- Read the latest announcements on the blog
- Subscribe to the mailing lists
- Follow us on Twitter or LinkedIn
- Access other resources on LF AI & Data’s GitHub or Wiki