
Kompute Releases v0.8.0 to Continue Advancing Cross-Vendor GPU Acceleration


500 GitHub Star Milestone, Edge-Device Support, CNN Implementations, Variable Types, MatMul Benchmarks, and Binary Optimisations

Kompute, an LF AI & Data Foundation Sandbox-Stage Project advancing the cross-vendor GPU acceleration ecosystem, has released version 0.8.0, which includes major milestones such as reaching 500 GitHub stars, edge-device extensions, convolutional neural network (CNN) implementations, variable data types, and more. Kompute is a general purpose GPU compute framework for AI & Machine Learning applications which works across cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). The Kompute framework provides a flexible interface that can be adopted by mobile, desktop, cloud and edge applications to enable highly optimizable GPU acceleration. The framework includes a high-level Python interface for advanced data processing use-cases, as well as an extensible low-level C++ interface that provides high-performance device-specific optimizations.
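To make the programming model concrete, below is a minimal sketch of the high-level Python interface, assuming a compute shader that adds two buffers has already been compiled to SPIR-V. The exact argument types accepted by the bindings (for example, whether the SPIR-V is passed as raw bytes or a list of words) can vary between versions, so treat this as illustrative rather than definitive.

```python
# Minimal sketch of Kompute's high-level Python interface. Assumes the `kp`
# package is installed and `shader_spirv` holds a compiled compute shader
# that adds two buffers element-wise.
import numpy as np
import kp

def run_add(shader_spirv):
    mgr = kp.Manager()  # picks the default Vulkan device

    tensor_a = mgr.tensor(np.array([2.0, 2.0, 2.0], dtype=np.float32))
    tensor_b = mgr.tensor(np.array([1.0, 2.0, 3.0], dtype=np.float32))
    tensor_out = mgr.tensor(np.array([0.0, 0.0, 0.0], dtype=np.float32))
    params = [tensor_a, tensor_b, tensor_out]

    algo = mgr.algorithm(params, shader_spirv)

    # Copy inputs to the GPU, dispatch the shader, copy the result back.
    (mgr.sequence()
        .record(kp.OpTensorSyncDevice(params))
        .record(kp.OpAlgoDispatch(algo))
        .record(kp.OpTensorSyncLocal(params))
        .eval())

    return tensor_out.data()
```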

The newly released 0.8.0 version of Kompute introduces major improvements to the framework's general cross-platform compatibility and GPU acceleration features. A high-level summary of the highlights is as follows:

  • Milestone of 500 GitHub repo stars
  • Broader edge-device support with Mesa driver integration
  • Convolutional Neural Network (CNN) implementations
  • Support for variable types across GPU parameters
  • Semi-optimized matrix multiplication kernel benchmark implementation
  • Significant reduction in third-party dependencies (15 MB → ~1 MB binary)

If you are interested in learning more, you can join us at our next “GPU Acceleration” monthly call on September 28th at 9:00 EST / 13:00 UTC / 20:00 CST, where we will be covering Kompute updates as well as general cross-vendor GPU acceleration topics.

We will also be giving a talk at CppCon 2021 this year, so if you are around please drop by our talk and say hello, or feel free to ask any questions during the Q&A.

The Kompute repo reaches 500 GitHub stars

We are thrilled to see the fantastic growth and adoption of the Kompute Project, as well as the great discourse it has continuously encouraged to further the cross-vendor GPU acceleration ecosystem. Today we celebrate the Kompute Project reaching 500 stars on GitHub, a major milestone following Kompute's one-year anniversary last month. GitHub stars can be a shallow metric if used as the only measure of a project's growth, so we will be keen to identify other metrics that help us ensure our community grows steadily, including the number of contributors, contributions, community interactions on our Discord, and more.

Broader edge-device support with Mesa driver integration

As part of our 0.8.0 release we have significantly extended edge-device support to hundreds of devices by supporting Mesa drivers as first-class components, thanks to this great external contribution. We have added an official tutorial that showcases the integration with the Mesa Broadcom drivers running on a Raspberry Pi, which can be adapted to other edge devices for GPU acceleration implementations.

This is a fantastic addition, as it showcases the flexibility of Kompute's capabilities. The example required advanced GPU computing concepts to address some shortcomings of limited hardware, such as the need to expose a way to add GPU extensions explicitly, as well as flexible memory barrier operations that can be used to ensure consistency on more limited devices with non-coherent GPU memory.
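As a rough illustration of the first point, the sketch below requests a device extension explicitly when constructing the Manager. The positional parameters (device index, family queue indices, desired extensions) mirror the C++ constructor added in this release; whether the Python bindings in your version expose them in exactly this form is an assumption worth verifying, and the extension name is only a placeholder.

```python
# Hypothetical sketch: requesting a Vulkan device extension explicitly when
# creating the Kompute Manager. The positional parameters (device index,
# family queue indices, desired extensions) mirror the C++ constructor;
# verify the Python binding's exact signature for your version, and note
# that VK_EXT_shader_atomic_float is only an example extension name.
import kp

mgr = kp.Manager(0, [], ["VK_EXT_shader_atomic_float"])
```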

Convolutional Neural Network (CNN) Implementations

We have introduced a high-level example that provides an implementation of a convolutional neural network (CNN) for image resolution upscaling, meaning that image quality can be improved purely through the machine learning implementation. This is another fantastic external contribution from the great Kompute community.

This example showcases how to import a pre-trained deep learning model: we create the Kompute code that loads the model weights, build the Kompute logic that performs inference on an image, and then run the model against an image to perform resolution upscaling.

[Figure: a small input image passes through VGG7 inference to produce a larger, upscaled image]
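Expressed in Kompute primitives, that pipeline amounts to recording one dispatch per network layer into a single sequence. The outline below is only a hedged sketch of the pattern, not the community example's actual code; the tensors and per-layer algorithms are assumed to have been created as in the earlier sketch.

```python
# Hypothetical outline: run a multi-layer (VGG7-style) network by recording
# one shader dispatch per layer into a single Kompute sequence. The arguments
# are placeholders: tensors created via mgr.tensor(...) holding the input
# image, weights and activations, plus one mgr.algorithm(...) per layer.
import kp

def run_layers(mgr, all_tensors, layer_algorithms, output_tensor):
    seq = mgr.sequence()
    seq.record(kp.OpTensorSyncDevice(all_tensors))     # upload image + weights
    for algo in layer_algorithms:                      # one dispatch per layer
        seq.record(kp.OpAlgoDispatch(algo))
    seq.record(kp.OpTensorSyncLocal([output_tensor]))  # download upscaled result
    seq.eval()
    return output_tensor.data()
```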

Support for variable types across GPU parameters

By default, the simplified interfaces of Kompute expose float scalar types, which may be enough to get through the basic conceptual examples. However, as you develop real-world applications, more specialized types may be required for the different components that Kompute exposes to perform computation on the GPU.

In version 0.8.0 of Kompute we introduce richer support for variable types across the Python and C++ interfaces, allowing users to set different scalar values, and in some cases user-defined structs, for their Kompute resources. More specifically, we have added support for multiple scalar types for the Kompute Tensor resource, multiple scalar types and arbitrary user-defined structs for Kompute Push Constants, and multiple scalar types for Specialization Constants.
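A hedged sketch of how this might look from Python follows. The typed-tensor factory name (tensor_t) and the (params, spirv, workgroup, specialization constants, push constants) argument order follow the 0.8.x examples, but should be verified against the version you have installed; the SPIR-V shader is assumed to be compiled separately.

```python
# Sketch of variable-type support in the Python interface. Names such as
# tensor_t and the (params, spirv, workgroup, spec_consts, push_consts)
# argument order should be verified against your installed version.
import numpy as np
import kp

def dispatch_with_constants(shader_spirv):
    mgr = kp.Manager()

    # Typed tensor: the numpy dtype determines the GPU scalar type.
    t_values = mgr.tensor_t(np.array([1, 2, 3], dtype=np.uint32))

    # Specialization constants are fixed when the algorithm is built;
    # push constants can be refreshed on every dispatch.
    spec_consts = np.array([16.0], dtype=np.float32)   # e.g. a tile size
    push_consts = np.array([0.5], dtype=np.float32)    # e.g. a per-dispatch scale

    algo = mgr.algorithm([t_values], shader_spirv, (1, 1, 1),
                         spec_consts, push_consts)

    (mgr.sequence()
        .record(kp.OpTensorSyncDevice([t_values]))
        # A fresh push constant value can be supplied at dispatch time.
        .record(kp.OpAlgoDispatch(algo, np.array([0.25], dtype=np.float32)))
        .record(kp.OpTensorSyncLocal([t_values]))
        .eval())

    return t_values.data()
```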

Semi-optimized Matrix Multiplication Kernel Benchmark example

In this release of Kompute we have received another great external contribution: an example that starts off with a naive implementation of a matrix multiplication algorithm and then shows how to iteratively improve performance with high-level benchmarking techniques. It highlights how increasing the matrix size can also increase performance in GFLOPS with the specific optimizations introduced. The initial experimentation was based on the SGEMM in WebGL2-compute article on the public library ibiblio.org, and explores some initial improvements with basic and slightly more optimized tiling. This is still work we would be interested in exploring further, and further contributions would be very welcome.
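For context on the reported numbers, GFLOPS here is simply the operation count of a dense matrix multiplication divided by the measured runtime. The small sketch below computes that metric with NumPy standing in for the GPU kernel, purely to make the arithmetic concrete; it is not the benchmark's actual code.

```python
# Illustration of the GFLOPS metric used by matmul benchmarks: a dense
# n x n multiplication performs roughly 2 * n^3 floating point operations
# (one multiply and one add per inner-product step). NumPy stands in for
# the GPU kernel here purely to make the arithmetic concrete.
import time
import numpy as np

def matmul_gflops(n: int) -> float:
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    start = time.perf_counter()
    a @ b
    elapsed = time.perf_counter() - start
    return (2.0 * n ** 3) / elapsed / 1e9

for n in (256, 512, 1024):
    print(f"{n}x{n}: {matmul_gflops(n):.1f} GFLOPS")
```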

Significant reduction in third-party dependencies

The Kompute project has now been updated to reduce its third-party dependencies. This release removes some dependencies in favour of modularised functional utilities that are only used in the testing framework. This results in a staggering optimization of the binary, reducing its size by an order of magnitude and bringing the library binary from 15 MB down to roughly 1 MB. It also simplifies cross-platform compatibility, as fewer dependencies are required to build on different architectures.

The main dependency that has been removed is glslang, which was being used to provide a single function for online shader compilation, primarily for the tests and simple examples. Instead, users can now bring their preferred method of compiling shaders to SPIR-V, whilst we still provide guidance on how Kompute users can do this through simple methods.
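As one example of a bring-your-own approach, a small helper can shell out to an external compiler such as glslangValidator (assumed to be installed separately, for instance via the Vulkan SDK) and hand the resulting SPIR-V bytes to Kompute.

```python
# One possible bring-your-own shader compilation helper: write the GLSL
# source to a temporary file, invoke the external glslangValidator CLI,
# and return the SPIR-V bytes, which can then be passed to mgr.algorithm(...).
import os
import subprocess
import tempfile

def compile_glsl_to_spirv(source: str) -> bytes:
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "shader.comp")
        spv = os.path.join(tmp, "shader.spv")
        with open(src, "w") as f:
            f.write(source)
        # -V targets Vulkan semantics and emits SPIR-V; -o names the output.
        subprocess.run(["glslangValidator", "-V", src, "-o", spv], check=True)
        with open(spv, "rb") as f:
            return f.read()
```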

Join the Kompute Project

The core objective of the Kompute project is to contribute to and further the GPU computing ecosystem across both scientific and industry applications, through cross-vendor graphics card tooling and capabilities. We have seen very positive reception and adoption of Kompute across various development communities, including advanced data processing use-cases in mobile applications, game development engines, edge devices and the cloud, and we would love to engage with the broader community to hear suggestions and ideas for improvements.

The Kompute Project invites you to adopt or upgrade to version 0.8.0 and welcomes feedback. For details on the additional features and improvements, please refer to the release notes here.

As mentioned previously, if you are interested in learning more, you can join us at our next GPU Acceleration call on September 28th at 9:00 EST / 13:00 UTC / 20:00 CST, where we will be covering Kompute updates as well as general cross-vendor GPU acceleration topics.


Egeria Webinar: Visualising a Metadata Ecosystem, 13th September 2021

Guest Author: David Radley

IMPORTANT UPDATE: The date for this Webinar has changed. It is now scheduled for Monday, September 13, 2021 at 15:00 UTC.

Update your calendars! The popular monthly Egeria Webinar program is restarting on September the 13th, 2021. Full program details are here: https://wiki.lfaidata.foundation/display/EG/Egeria+Webinar+program.

The next session is on the 13th of September 2021 at 15:00 UTC and is about visualising a metadata ecosystem. The session will cover:

  • An overview of the open types in Egeria and how they facilitate integration between sources of metadata without having one central metadata repository. 
  • Understanding the types is important when developing connectors and new APIs such as OMASs (Open Metadata Access Services).
  • The call will look at the concepts exposed in a higher-level API and compare them with the low-level open types.
  • The session will show the visualisations Egeria has around the types so you can explore how they relate to each other.

[Figure: example visualisation]

At the end of the session, 

  • You should have a good grasp of the Egeria open types, why they are so important, and how to explore them visually.
  • You will want to explore the benefits of connecting your metadata sources into Egeria by mapping your types to the open types.

Be sure to put the other Webinar dates in your calendar.


Kompute Joins LF AI & Data as New Sandbox Project


LF AI & Data Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI) and data open source projects, is today announcing Kompute as its latest Sandbox Project.

Released and open sourced by The Institute for Ethical AI & Machine Learning, Kompute is a general purpose GPU compute framework for AI & Machine Learning applications which works across cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). The Kompute framework provides a flexible interface that can be adopted by mobile, desktop, cloud and edge applications to enable highly optimizable GPU acceleration. The framework includes a high-level Python interface that enables advanced data processing use-cases, as well as an extensible low-level C++ interface that provides high-performance device-specific optimizations.

The core objective of the Kompute project is to contribute to and further the GPU computing ecosystem across both scientific and industry applications, through cross-vendor graphics card tooling and capabilities. We have seen very positive reception and adoption of Kompute across various development communities, including advanced data processing use-cases in mobile applications, game development engines, edge devices and the cloud, and we look forward to Kompute serving as a platform to empower the discourse on topics related to GPU acceleration for cross-industry applications.

Alejandro Saucedo, Chief Scientist at The Institute for Ethical AI & Machine Learning, said: “Many advanced machine learning and deep learning paradigms fit incredibly well to the parallel architecture that GPU computing offers, and exciting advancements in the open source ecosystem such as the Kompute Project are enabling developers to take advantage of GPU acceleration in cross-vendor mobile, desktop, cloud and edge applications. We believe the Linux Foundation is the right platform to encourage and empower the cross-industry discourse required to further the GPU acceleration ecosystem. We are proud for Kompute to become a LF AI & Data project, and we look forward to an ongoing collaboration.”

LF AI & Data supports projects via a wide range of services, and the first step is joining as a Sandbox Project. Learn more about Kompute on their GitHub and join the Kompute-Announce Mailing List and Kompute-Technical-Discuss Mailing List. We also encourage you to subscribe to the Monthly Kompute GPU Acceleration Sessions to join the community and stay connected on the latest updates – subscription instructions are available here.

A warm welcome to Kompute! We look forward to the project’s continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with us, visit the LF AI & Data website.


Connect with LF AI & Data at Open Source Summit + Embedded Linux Conference + OSPOCon 2021


The LF AI & Data Foundation is thrilled to be a part of the Open Source Summit + Embedded Linux Conference + OSPOCon 2021 event! This hybrid event, both in-person and virtual, is slated to take place September 27-30, 2021 in Seattle, Washington. 

Register for OSS+ELC+OSPOCon 2021 to attend. 

There are a variety of ways to participate in this conference. First, you're welcome to join us in person! Make sure you check out all the health and safety guidelines for in-person attendance here. You can also attend virtually, with a format similar to the virtual events hosted over the past year. Information about the virtual program can be found in the FAQs linked here. We'll also be hosting an AI & Data Track within the program. We hope to see you there!

 Attend Sessions in the AI & Data Track 

The LF AI & Data Foundation will be hosting an AI & Data Track at OSS+ELC+OSPOCon 2021. Join these sessions to learn about the latest updates from our projects and hear from leaders in the AI & Data industry.

Chat with us in the Onsite or Virtual Exhibit Hall!

Our booth will be in the Bronze Hall at the event, so please stop by and say hello! If you can't make it in person, come chat with us in the virtual exhibit hall! Various LF AI & Data community members will be around during booth hours to answer any questions you have. You'll also be able to get more information on how to get involved with the LF AI & Data Foundation.

The LF AI & Data Foundation's mission is to build and support an open AI community, and to drive open source innovation in AI and data by enabling collaboration and the creation of new opportunities for all members of the community.

Want to get involved with the LF AI & Data Foundation? Be sure to subscribe to our mailing lists to join the community and stay connected on the latest updates.
