
AI Governance: Gain Control Over the AI Lifecycle

September 24, 2021

Guest Authors: Utpal Mangla, VP & Senior Partner, Global Leader of IBM’s Telecom, Media and Entertainment Industry Center of Competency, IBM; Luca Marchi, AI Innovation, Center of Competence for Telco, Media and Entertainment, IBM; Kush Varshney, Distinguished Research Staff Member and Manager, IBM Thomas J. Watson Research Center; Shikhar Kwatra, Data & AI Architect and AI/ML Operationalization Leader, IBM; and Mathews Thomas, Executive IT Architect, IBM

Effectively Governing AI

Artificial intelligence systems have become increasingly prevalent in everyday life and enterprise settings, and they’re now often being used to support human decision-making. 

When we understand how a technology works and we can assess that it’s safe and reliable, we’re far more inclined to trust it. But even when we don’t understand the technology (do you understand how a modern automobile works?), we are inclined to trust it if it has been tested and certified by a respectable body. Many AI systems today are black boxes: data is fed in and results come out. To trust a decision made by an algorithm, we need to know that it is fair, that it is reliable and accountable, and that it will cause no harm. We need assurances that AI cannot be tampered with and that the system itself is secure. We need to be able to look inside AI systems, understand the rationale behind an algorithmic outcome, and even ask how a decision was reached.

Hence, enterprises creating such AI services face an emerging problem: how to effectively govern the creation and deployment of these services. Enterprises want to understand and gain control over their current AI lifecycle processes, often motivated by internal policies or external regulation.

The AI lifecycle includes a variety of roles, performed by people with different specialized skills and knowledge who collectively produce an AI service. Each role contributes in a unique way, using different tools. Figure 1 shows some common roles.


Figure 1: A common AI lifecycle involving different personas. Image taken from the AI FactSheets 360 website.

Data flows throughout this lifecycle as raw input data, engineered features, model predictions, and performance metric results. Data governance is the overall management of data availability, relevancy, usability, integrity, and security in an enterprise. It helps organizations manage what they know about their information and answer questions such as the following (a minimal catalog-record sketch appears after the list):

  • What data do we have?
  • What do we know about our information?
  • Where do different datasets come from?
  • Does this data adhere to company policies and rules?
  • What is the quality of our data?
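
One way to make these questions concretely answerable is to attach a catalog record to every dataset. The Python sketch below is a hypothetical minimal schema (the field names and policy labels are invented for illustration, not any particular product’s):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    name: str                  # What data do we have?
    description: str           # What do we know about this information?
    source: str                # Where does this dataset come from?
    owner: str
    last_updated: date
    policies_passed: list = field(default_factory=list)  # audited policy checks
    quality_score: float = 0.0  # e.g. completeness x validity, in [0, 1]

    def is_compliant(self, required_policies):
        """Does this data adhere to company policies and rules?"""
        return set(required_policies).issubset(self.policies_passed)

# Example usage with invented values.
record = DatasetRecord(
    name="telco_churn_raw",
    description="Monthly subscriber churn events",
    source="CRM export, EU region",
    owner="data-eng@example.com",
    last_updated=date(2021, 9, 1),
    policies_passed=["pii_masked", "retention_policy"],
    quality_score=0.93,
)
print(record.is_compliant({"pii_masked", "retention_policy"}))  # True
```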

Various enterprises are developing theoretical and algorithmic frameworks for generative AI to synthesize realistic, diverse, and targeted data. To increase the accountability of high-risk AI systems, we need to develop technologies that increase their end-to-end transparency and fairness.
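
As a deliberately simplified illustration of data synthesis (a stand-in for those frameworks, not any enterprise’s actual method), one can fit a density model to real tabular data and then sample new rows from it:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for real tabular features, e.g. usage minutes and monthly spend.
X_real = rng.normal(loc=[300.0, 55.0], scale=[80.0, 12.0], size=(1000, 2))

# Fit a simple density model to the real data...
gm = GaussianMixture(n_components=3, random_state=0).fit(X_real)

# ...then draw synthetic rows that mimic its distribution.
X_synth, _ = gm.sample(n_samples=500)
print(X_synth.shape)         # (500, 2)
print(X_synth.mean(axis=0))  # close to the real data's column means
```

A production system would use a richer generative model and add checks that the synthetic data preserves privacy and actually covers the targeted cases.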

AI Fairness 360, AI Explainability 360, Adversarial Robustness 360, and Uncertainty Quantification 360 are open-source software toolkits that help users uncover and mitigate the biases in machine learning models that lead to poor or unequal performance. Tools and technologies being developed by AI enterprises must be adept at tracking and mitigating biases at multiple points along the machine learning pipeline, using the metrics appropriate to the circumstances, with results captured in transparent documentation such as an AI FactSheet. They should help an AI development team check for biases as systematically as a continuous integration pipeline checks for development bugs or security violations.
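
As a minimal sketch of such a check using AI Fairness 360 (the toy data, column names, and group definitions are invented for illustration):

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy labeled dataset with one protected attribute (illustrative only).
df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = unprivileged, 1 = privileged
    "score": [0.2, 0.4, 0.5, 0.7, 0.3, 0.6, 0.8, 0.9],
    "label": [0, 0, 0, 1, 0, 1, 1, 1],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0,
)
unpriv, priv = [{"sex": 0}], [{"sex": 1}]

# Measure bias in the data before any model is trained.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unpriv, privileged_groups=priv)
print("disparate impact:", metric.disparate_impact())  # 0.33 on this toy data

# Mitigate at the pre-processing stage by reweighing examples.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
dataset_fair = rw.fit_transform(dataset)
metric_fair = BinaryLabelDatasetMetric(
    dataset_fair, unprivileged_groups=unpriv, privileged_groups=priv)
print("after reweighing:", metric_fair.disparate_impact())  # 1.0
```

The same pattern can be repeated at other pipeline stages: the toolkit’s in-processing and post-processing mitigation algorithms follow a similar workflow, so each stage can be audited with the same metric.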

Bringing together mitigation techniques appropriate for different points in the pipeline to address different biases (social, temporal, etc.) will help developers produce real-world deployments that are safe and secure. 
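
To make such checks routine, a team might gate its continuous integration pipeline on a fairness metric, just as it gates on failing unit tests. The pytest-style sketch below is hypothetical: compute_disparate_impact is a stand-in helper, and the 0.8 lower bound is the common four-fifths rule of thumb, not a universal requirement.

```python
# test_fairness.py -- hypothetical CI gate, run alongside unit tests.

def compute_disparate_impact(predictions, groups):
    """Ratio of favorable-outcome rates: unprivileged (0) / privileged (1)."""
    rate_unpriv = sum(p for p, g in zip(predictions, groups) if g == 0) / groups.count(0)
    rate_priv = sum(p for p, g in zip(predictions, groups) if g == 1) / groups.count(1)
    return rate_unpriv / rate_priv

def test_model_meets_fairness_threshold():
    # Stand-in predictions from the current model build.
    predictions = [1, 0, 1, 1, 1, 0, 1, 1]
    groups      = [0, 0, 0, 0, 1, 1, 1, 1]
    di = compute_disparate_impact(predictions, groups)
    # Four-fifths rule of thumb: fail the build if DI leaves [0.8, 1.25].
    assert 0.8 <= di <= 1.25, f"disparate impact {di:.2f} outside tolerance"
```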

