
LF AI & Data Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI) and data, today announced ShaderNN as its latest Sandbox Project.

ShaderNN is a shader-based, lightweight, and efficient inference engine for mobile GPUs. The project is set to redefine deep learning inference on mobile devices and unlock new possibilities for image and graphics processing applications.

“ShaderNN represents a pivotal step forward in the realm of mobile GPU inference. This project encapsulates the essence of LF AI & Data’s mission by fusing innovation, collaboration, and community support,” said Dr. Ibrahim Haddad, Executive Director of LF AI & Data. “ShaderNN’s cutting-edge features, backed by the expertise of our diverse developer community, underline its potential to revolutionize mobile image and graphics processing.” 

ShaderNN is a lightweight deep learning inference framework optimized for Convolutional Neural Networks. It delivers high-performance inference for image and graphics processing on mobile devices, using texture-based input/output to eliminate costly data transfers and format conversions between CPU and GPU. Built on the native OpenGL ES and Vulkan graphics stacks, ShaderNN integrates directly with graphics rendering pipelines for maximal resource utilization, making it well suited for rendering, image/video, and game AI applications.
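
As a rough illustration of the texture-based input/output idea, the sketch below passes a GPU-resident OpenGL ES texture straight into inference and receives a texture back for rendering. The InferenceEngine type and its loadModel/run methods are hypothetical placeholders rather than ShaderNN's actual API; the GL calls themselves are standard OpenGL ES 3.0.

    // A minimal sketch of the texture-in / texture-out pattern on OpenGL ES 3.x.
    // The InferenceEngine type and its methods are hypothetical placeholders,
    // not ShaderNN's actual API; the GL calls are standard OpenGL ES 3.0.
    #include <GLES3/gl3.h>

    struct InferenceEngine {
        // A real engine would parse a pre-built computation graph here.
        bool loadModel(const char* /*modelPath*/) { return true; }

        // Placeholder: a real engine would dispatch its shader pipeline and
        // render the result into an internally managed output texture.
        GLuint run(GLuint inputTexture) { return inputTexture; }
    };

    GLuint makeRgbaTexture(int width, int height) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);  // immutable GPU storage
        return tex;
    }

    void onFrame(InferenceEngine& engine, GLuint cameraFrameTex) {
        // The GPU-resident frame is handed to the engine as a texture handle, and
        // the result comes back as another texture the renderer can sample, with
        // no glReadPixels round trip and no CPU-side format conversion.
        GLuint enhancedTex = engine.run(cameraFrameTex);
        (void)enhancedTex;  // e.g., composite it as a post-processing pass
    }

Because both sides exchange plain GL texture handles, the same input could come from a camera frame, a render target, or a decoded video surface without leaving the GPU.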

Key Features of ShaderNN

High Performance

  • Utilize GPU Shaders: Implement core operators as GPU shaders to leverage parallel computing capabilities for optimal performance (see the shader sketch after this list).
  • Pre-built Static Computation Graph: Optimize with constant folding and operator fusion to accelerate forward operation speed.
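
To make the GPU-shader point concrete, here is an illustrative GLSL ES fragment shader, stored as a C++ string, that computes a 3x3 convolution with a ReLU fused into the same pass. It is a sketch of the technique under those assumptions, not ShaderNN's actual kernel source.

    // Illustrative only: a 3x3 convolution with a fused ReLU, written as a
    // GLSL ES 3.00 fragment shader. Each fragment invocation produces one
    // output pixel, so the whole feature map is computed in parallel.
    static const char* kConv3x3FragSrc = R"glsl(#version 300 es
    precision highp float;
    uniform sampler2D uInput;   // input feature map packed into an RGBA texture
    uniform float uWeights[9];  // 3x3 kernel, shared across the 4 packed channels
    uniform float uBias;
    out vec4 fragColor;
    void main() {
        ivec2 p    = ivec2(gl_FragCoord.xy);
        ivec2 last = textureSize(uInput, 0) - 1;
        vec4  acc  = vec4(uBias);
        for (int dy = -1; dy <= 1; ++dy) {
            for (int dx = -1; dx <= 1; ++dx) {
                ivec2 q = clamp(p + ivec2(dx, dy), ivec2(0), last);
                acc += uWeights[(dy + 1) * 3 + (dx + 1)] * texelFetch(uInput, q, 0);
            }
        }
        fragColor = max(acc, vec4(0.0));  // ReLU fused into the same pass
    })glsl";

Operator fusion in a pre-built static graph amounts to folding steps such as the bias add and activation into a single shader pass, which is what the final max() line sketches.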

Lightweight, Portable & Extensible

  • No Third-Party Library Dependencies: Ensure independence from external libraries, reducing overhead and simplifying integration.
  • Mobile Platform Optimization: Optimize specifically for mobile platforms, enabling effortless portability, deployment, and upgrades.
  • Simple Input/Output Interface: Provide a user-friendly interface compatible with GPU processing for streamlined interactions.

Versatility

  • Framework & CNN Compatibility: Support popular framework formats such as TensorFlow, PyTorch, and ONNX, as well as common classification, detection, segmentation, and enhancement networks.
  • User-Defined Operators: Enable easy implementation of new models by supporting user-defined operators.
  • Flexible Backend Configuration: Select the running backend statically or dynamically according to available platform resources, and adjust kernel execution parameters at runtime to minimize energy consumption (see the sketch after this list).
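
The snippet below sketches that static-versus-dynamic choice under stated assumptions: the Backend enum, RuntimeOptions struct, and pickBackend helper are hypothetical illustrations, not ShaderNN's configuration interface.

    // Hypothetical sketch of static vs. dynamic backend selection; none of these
    // names come from ShaderNN's actual interface.
    enum class Backend { OpenGLES, Vulkan, Auto };

    struct RuntimeOptions {
        Backend backend        = Backend::Auto;  // Auto = decide when the model loads
        bool    preferLowPower = true;           // bias kernel tuning toward energy savings
    };

    // In practice the probe would query the driver (e.g., enumerate Vulkan
    // physical devices) rather than take a boolean.
    Backend pickBackend(const RuntimeOptions& opts, bool vulkanAvailable) {
        if (opts.backend != Backend::Auto) {
            return opts.backend;                  // static: the caller pinned a backend
        }
        return vulkanAvailable ? Backend::Vulkan  // dynamic: use what the platform offers
                               : Backend::OpenGLES;
    }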

“ShaderNN is a product of relentless innovation and a shared vision to revolutionize mobile GPU inference. Our team’s dedication to optimizing convolution and matrix multiplication and to enabling automatic backend selection has led us to a breakthrough that will transform how deep learning is experienced on mobile platforms. ShaderNN is not just a project; it’s a gateway to unlocking new possibilities in real-time graphics processing, and we’re thrilled to share this journey with the LF AI & Data Foundation,” said a representative from OPPO.

LF AI & Data supports projects via a wide range of services, and the first step is joining as a Sandbox Project. Learn more about ShaderNN on its GitHub and join the ShaderNN-Announce Mailing List.

A warm welcome to ShaderNN! We are excited to see the project’s continued growth and success as part of the LF AI & Data Foundation. If you are interested in hosting an open source project with us, please visit the LF AI & Data website to learn more.

ShaderNN Key Links

LF AI & Data Resources

Access other resources on LF AI & Data’s GitHub or Wiki.