LF AI & Data has announced its participation in a Department of Commerce initiative, joining over 200 leading artificial intelligence (AI) stakeholders in supporting the development and deployment of trustworthy and safe AI. The U.S. AI Safety Institute Consortium (AISIC), established by the Department of Commerce’s National Institute of Standards and Technology (NIST), aims to bring together AI creators and users, academics, government and industry researchers, and civil society organizations to achieve this mission.

In a significant move towards ensuring the safety and trustworthiness of AI systems, NIST has initiated AISIC. The establishment of AISIC and its alignment with President Biden’s Executive Order highlights a strategic approach towards addressing the complex challenges associated with AI technologies. Among the esteemed members of this consortium is LF AI & Data, the open-source AI & Data foundation at the Linux Foundation. This collaboration represents a pivotal step in fostering safe and trustworthy AI practices.


The LF AI & Data Foundation is collaborating with the National Institute of Standards and Technology (NIST) in the Artificial Intelligence Safety Institute Consortium to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models and systems, from frontier models to new applications and approaches, with appropriate risk management strategies. NIST does not evaluate commercial products under this Consortium and does not endorse any product or service used. Additional information on this Consortium can be found here.


“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said Secretary Raimondo. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

The Scope of AISIC: A Global Collective for Secure AI Systems

The U.S. AI Safety Institute Consortium brings together AI developers, users, researchers, and organizations, making it the largest such assembly in the world. Its diverse membership spans Fortune 500 companies, academic teams, non-profit organizations, and U.S. Government agencies, all united by a shared commitment. Together, members will advance research and development initiatives that facilitate the creation of secure and reliable AI systems, laying the groundwork for future standards and policies.

Members of the AISIC will play a vital role in assisting NIST in implementing, iterating on, sustaining, and extending priority projects related to research, testing, and guidance on AI safety. By harnessing the expertise of this collective, NIST aims to ensure that its AI safety initiatives are well-integrated with the broader AI safety community, both nationally and globally.

The Necessity of AI Safety and AI Measurement

The rapid expansion of AI tools has reshaped our lives, with far-reaching effects on society and the economy. This growth, however, poses unprecedented technical and societal challenges. Aligning AI with societal norms requires a human-centered approach built on specific policies and processes, with input from community stakeholders. Through AISIC, NIST is strengthening AI measurement efforts to harness AI's benefits while mitigating its risks. The initiative aims to establish a new measurement science by identifying proven, scalable, and interoperable measurements and methodologies, providing a foundation for the development of trustworthy AI.

Ibrahim Haddad, Executive Director of LF AI & Data, said, “LF AI & Data’s participation in the U.S. AI Safety Institute Consortium underscores our dedication to shaping a secure and responsible future for AI, with a primary emphasis on open source. It emphasizes the crucial role of transparent AI development, building a foundation for trustworthy AI aligned with societal values, fostering innovation, and prioritizing public and planetary well-being.”

LF AI & Data’s involvement in the U.S. AI Safety Institute Consortium reflects its commitment to responsible AI development. The partnership lays the groundwork for trustworthy AI that promotes innovation while safeguarding public well-being, and it highlights the importance of collective action in navigating the challenges of rapidly evolving AI technologies.

OpenSSF’s role in AISIC reflects its commitment to fortifying AI security through open source collaboration. As a leader in improving open source software security, OpenSSF brings an emphasis on secure coding practices and the establishment of AI security standards. Within AISIC, OpenSSF aims to contribute its expertise in securing software supply chains and to advocate for a holistic approach to AI safety. Its participation underscores the pivotal role of open source security in addressing AI’s complex challenges, fostering an ecosystem where trust, transparency, and innovation converge across development and deployment.

The full list of consortium participants is available here.