
The Role of Open Source for Accountable AI

January 26, 2022

Guest Author: Adrian Gonzalez Sanchez, Head of AI Customer Success at Peritus.ai (a CNCF end user in Canada), Member at OdiseIA (the Spanish Observatory of Social and Ethical Impact of AI), and AI Lecturer at HEC Montréal

The Evolution of AI Governance

In the age of modern AI and wide company adoption, organizations and individuals around the world are dealing with several front lines at once: how to understand what AI is and what value it actually delivers, how to implement it, how to give different teams access to data, which tools to use, where (or how) to find good talent, and how to standardize best practices across projects. So many moving parts, nicely garnished with a changing regulatory context for data privacy and AI applications. Governance, as in any other domain, is a way to organize and facilitate this puzzle of activities.

Often misunderstood and confused with data governance (which focuses on metadata management and data privacy compliance), the AI governance practice covers the end-to-end steps required to guarantee the proper implementation of AI solutions, aligning people, tools, and processes with the imperative to make good use of this family of technologies. IBM defined AI governance in a previous LF AI & Data blog entry as “companies gaining control over the AI lifecycle, often motivated by internal policies or external regulation,” which rings true given the international ethical and regulatory context (e.g., the EU’s AI Act, California’s CCPA, and well-publicized unethical AI applications).

If we set aside general MLOps considerations and focus on governance as the key enabler of what we call Responsible AI, we can affirm that we are at a crucial moment where adopting teams can actually translate good intentions into tangible actions. Ethical reflections of all kinds (from general philosophical discussions to specific initiatives) are being materialized into organizations’ AI governance processes and the actors involved in them.

Figure 1 – The Evolution of AI Governance (source: Adrian Gonzalez Sanchez, CC BY-SA 4.0)

As we see in Figure 1, there are several steps for companies to adopt a Responsible AI approach: securing human and technical resources, aligning on ethical AI principles (which depend on the specific values of the adopting organization), defining internal evaluation and escalation processes, supporting the related change management and internal training activities, and choosing the tooling to make it happen… which is, of course, not a minor decision.

Historically, one of the main concerns for AI teams was how to integrate ethics (and all the good intentions derived from it) into their day-to-day activities. Without proper tools and standards, it was very hard for them to detect data bias, explainability issues, or potential risks. Fortunately, a new wave of software and toolkits, both commercial and open source, is now available to different AI team members, from data scientists to product managers.
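To make this concrete, here is a minimal sketch of what detecting data bias can look like in practice, using AI Fairness 360 (an LF AI & Data project discussed in the next section). The hiring dataset, column names, and group encoding are illustrative assumptions, not real data.

```python
# pip install aif360 pandas
# Minimal bias check on a toy hiring dataset (illustrative data only).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical data: "sex" is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],
    "score": [0.9, 0.7, 0.8, 0.6, 0.9, 0.5, 0.4, 0.7],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# A disparate impact below 0.8 is a common red flag (the "four-fifths rule").
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A check like this can run before training and again on model outputs, and AIF360 also ships mitigation algorithms (such as reweighing) for when a metric crosses a threshold.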

The Role of Open Source Tools

Responsible AI wouldn’t be possible today (or, at the very least, it would be a harder mission for companies and team members) without open source tools. There are some good commercial alternatives, but in-company Responsible AI initiatives are mostly based on a proactive willingness to do “the right thing,” without a clear business case or specific hard (monetary) benefits. In that sense, the availability of open and free tools that teams can explore to build their own Responsible AI tech stack is essential for adoption, especially for bottom-up initiatives where data scientists and engineers are the actual decision-makers.

The Linux Foundation is a clear example of contribution to the open source Responsible AI ecosystem. Projects such as AI Explainability 360, AI Fairness 360, and the Adversarial Robustness Toolbox are already bringing new options (IBM, the main project contributor, has also added these tools to its commercial AI ethics solutions). Other big companies such as Google and Microsoft offer relatively similar tools with their TensorFlow Responsible AI and Microsoft Responsible AI toolboxes. Companies like PwC are creating evaluation frameworks that can be good sources of inspiration for defining an internal AI governance approach, even if no technical tools are involved.
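As a taste of what these toolkits offer, here is a hedged sketch that uses the Adversarial Robustness Toolbox to probe a simple scikit-learn model with the Fast Gradient Method. The model, dataset, and perturbation budget (eps) are stand-ins chosen for illustration, not a recommended configuration.

```python
# pip install adversarial-robustness-toolbox scikit-learn
# Probing a classifier's robustness with ART's Fast Gradient Method.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so ART can compute loss gradients against it.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# Generate adversarially perturbed inputs within the eps budget.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)

print(f"Accuracy on clean data:       {model.score(X, y):.2f}")
print(f"Accuracy on adversarial data: {model.score(X_adv, y):.2f}")
```

The pattern (wrap a model, attack it, compare metrics) is the same across frameworks, which makes these toolkits easy to slot into an existing evaluation pipeline.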

The Business Case for Accountable AI (and recommendations for 2022)

There is an important difference between proactively adopting a Responsible AI approach and being “forced” as a company to do the right thing via regulations and fines. It is similar to what we saw a few years ago, when companies operating in Europe rushed to adapt to the EU’s GDPR requirements for data privacy, with a huge impact on their customer relations, data management processes, and new internal roles (such as the Data Protection Officer).

With an upcoming AI regulation in Europe (which focuses on levels of potential AI risk, will impact any company with international business, and will probably inspire other local regulations), responsibility will quickly be replaced by accountability. Good intentions and actions will become requirements, and internal governance will be the enabler of “Accountable AI.” Last year was already very active in terms of regulations, new tools, AI adoption, and deeper discussions, but 2022 is certainly the best moment (and the last period of the current window of opportunity, as we can see in Figure 1) because of the great availability of open source resources and the possibility of adopting a proper AI approach without any regulatory pressure.

The business case (two wonderful words that get any company executive’s attention) will get clearer once teams HAVE TO adopt tools and processes, reactively accelerate initiatives, and deal with potential fines. For now, valuable soft benefits are being explored (check page 12 of this Economist Intelligence report) that will justify two kinds of parallel actions:

  • Top-down, for companies and executives to support specific AI governance programs and work on the “company steps” previously mentioned. This process will take time to define and adopt widely, but it can be used to align internal business and technical stakeholders on the same shared goal.
  • Bottom-up, for AI team members (data scientists, engineers, AI product managers, ethicists, etc.) to start exploring available toolkits and cases from other pioneering organizations, as the sketch after this list illustrates. This complementary process will also take time, and the tooling choice may change later, but it will enable deeper analysis of needs and of how to adapt these tools to the company’s AI context and types of solutions. And of course… open source will help a lot here.
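As an example of that bottom-up exploration, the sketch below uses Fairlearn, an open source fairness library from Microsoft’s Responsible AI ecosystem, to compare a model’s behavior across groups. The labels, predictions, and sensitive feature are made-up placeholders.

```python
# pip install fairlearn scikit-learn
# Comparing model behavior across groups with Fairlearn's MetricFrame.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Placeholder labels, predictions, and a sensitive feature.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["A", "B", "A", "B", "A", "B", "A", "B"])

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(mf.by_group)       # each metric broken down per group
print(mf.difference())   # largest between-group gap per metric
```

Starting with small, disposable experiments like this lets a team compare toolkits against its own data and constraints before committing to one.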

To summarize, the time is now. Go step by step, and it will be for the best. Good luck in your Accountable AI journey!
