Author: ONNX Steering Committee  

AI is rapidly transforming the technology landscape, and ONNX continues to be a key technology for AI and machine learning. ONNX is an open format for representing machine learning models that is supported by a broad community of organizations around the world and across many industries. 

The ONNX project had a strong first half of 2023. It had two releases, community roadmap sessions, a steering committee election, and a hybrid community meetup. All this while seeing a 149% increase in downloads year-over-year and an 18% increase in contributors year-over-year. ONNX is also an ecosystem enabler – year-over-year there was a 130% increase in GitHub repos taking a dependency on ONNX. And those other projects – like ONNX Runtime, TensorRT, Hugging Face, etc – have seen significant growth.  

ONNX releases 

The ONNX 1.13 and 1.14 releases added many new capabilities, including: 

  • New operators (DeformConv, Col2Im, BitwiseNot, BitwiseAnd, BitwiseOr, and BitwiseXor) 
  • Updated operators in the new opsets (Equal, AveragePool, Pad, Resize, OptionalHasElement, OptionalGetElement, ScatterElements, ScatterND, Split, LpPool) 
  • Updated functions (CenterCropPad, Mish, GroupNormalization) 
  • New FP8 datatypes 
  • Python 3.11 support 
  • Support for Apple M1/M2 silicon 

SIGs and Working Groups 

The technical work of ONNX happens in SIGs and Working Groups. Community contributors discuss, implement, and review the designs and code for ONNX in these groups. They meet regularly and meetings are open to all. New SIGs and Working Groups are created as community interests arise. 

This year we launched the new Compilers SIG to join the existing Architecture/Infrastructure, Operators, Converters, and Model Zoo SIGs. The Compilers SIG is responsible for compiler solutions that optimize and lower ONNX to representations that are fed to further compilers, runtimes, or directly executed. A key project this new SIG works on is onnx-mlir, which lowers ONNX to MLIR representations for ingestion into the LLVM compiler.  

We are also in the process of launching a new Optimizers SIG to focus on solutions for optimizing ONNX models, including compression techniques such as quantization, pruning, and distillation. 
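Quantization, the first of those compression techniques, can be pictured as the affine (linear) mapping that ONNX's QuantizeLinear/DequantizeLinear operators describe: scale a float, round it, shift by a zero point, and saturate to the integer range. The sketch below is a plain-Python illustration of that mapping for int8, not the ONNX implementation itself; the function names are our own.

```python
def quantize_linear(x: float, scale: float, zero_point: int = 0) -> int:
    """Affine quantization to signed 8-bit: q = saturate(round(x / scale) + zp)."""
    q = round(x / scale) + zero_point  # Python's round() is round-half-to-even
    return max(-128, min(127, q))      # saturate to the int8 range

def dequantize_linear(q: int, scale: float, zero_point: int = 0) -> float:
    """Inverse mapping back to floating point: x ~= (q - zp) * scale."""
    return (q - zero_point) * scale
```

For example, with a scale of 0.5 the value 2.5 quantizes to the integer 5 and dequantizes back to 2.5, while out-of-range values saturate at 127; the precision lost to rounding and saturation is the price paid for the smaller model.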

Working Groups are shorter-term forums for solving specific requirements. A new Multi-device Working Group is being proposed to extend ONNX models to run on multiple accelerators. 

Roadmap Sessions 

Earlier this year, we held 4 ONNX roadmap sessions where community members presented 18 proposals for new functionality and improving pain points. The SIG leaders were in attendance to provide feedback and take the next steps on the proposals. Videos of all the presentations can be found on the ONNX YouTube channel. 

Community Meetup 

In June, the ONNX community congregated at the NVIDIA headquarters for Community Day. The event was hybrid with both in-person and virtual attendees and presenters. It was great to connect with contributors, users, and other community members from both industry and academia. There were 10 talks by community members, 6 talks by the SIG and Working Group leaders, and an overview by the Steering Committee. All the presentation slides and videos are available online and also on the ONNX YouTube channel. 

Getting Involved with ONNX 

We are deeply grateful to the ONNX community for their dedication and contributions. We look forward to seeing continued innovation, collaboration, and growth. Learn more about the SIGs, Working Groups, and how to get involved at https://onnx.ai and our Slack channel. 