NNStreamer, an LF AI Foundation Incubation-Stage Project, has released version 1.6.0. NNStreamer is a set of GStreamer plugins that make it easy and efficient for GStreamer developers to adopt neural network models, and for neural network developers to manage neural network pipelines and their filters.
In version 1.6.0, NNStreamer adds a variety of improvements; highlights include:
- Support added for new hardware accelerators and neural network frameworks: Verisilicon Vivante, Qualcomm SNPE, NNFW ONE Runtime, and TensorFlow 2 Lite.
- Data serialization support via FlatBuffers (flatbuf) and Protocol Buffers (protobuf).
- Android APIs optimized (e.g., invoke latency on Galaxy S20: 2 ms → 0.1 ms).
- Plug-and-play sub-plugin support for tensor converters.
- Hardware acceleration configuration reworked: multiple candidate accelerators may be expressed, and options may be altered at run time.
- Fixes, semantic updates, and minor features added after commercialization (Galaxy Watch 3, a few “next-year” products, and Tizen 6 releases).
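As a rough illustration of how the reworked hardware acceleration configuration could be used, the sketch below builds a pipeline that converts camera frames to tensors and runs a TensorFlow Lite model, listing multiple accelerator candidates in preference order. The model path, property values, and accelerator names here are assumptions for illustration; consult the NNStreamer `tensor_filter` documentation for the exact syntax on your target.

```shell
# Illustrative sketch only -- model file and accelerator list are assumed.
# tensor_converter turns raw RGB frames into tensors; tensor_filter runs the
# model, preferring the NPU and falling back to GPU, then CPU.
gst-launch-1.0 v4l2src ! videoconvert ! videoscale ! \
    video/x-raw,format=RGB,width=224,height=224 ! \
    tensor_converter ! \
    tensor_filter framework=tensorflow-lite model=model.tflite \
        accelerator=true:npu,gpu,cpu ! \
    tensor_sink
```

Listing several candidates lets the same pipeline description run on devices with different hardware, with the filter selecting the first available accelerator.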
The NNStreamer Project invites you to adopt or upgrade to version 1.6.0 in your application, and welcomes feedback. To learn more about the NNStreamer 1.6.0 release, check out the change log and full release notes. Want to get involved with NNStreamer? Be sure to join the NNStreamer-Announce and NNStreamer-Technical-Discuss mailing lists to connect with the community and stay up to date on the latest developments.
Congratulations to the NNStreamer team! We look forward to continued growth and success as part of the LF AI Foundation. To learn about hosting an open source project with us, visit the LF AI Foundation website.
NNStreamer Key Links
LF AI Resources