
Kompute Joins LF AI & Data as New Sandbox Project


The LF AI & Data Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI) and data, today announced Kompute as its latest Sandbox Project.

Released and open sourced by The Institute for Ethical AI & Machine Learning, Kompute is a general-purpose GPU compute framework for AI and machine learning applications that works across cross-vendor graphics cards (AMD, Qualcomm, NVIDIA, and friends). The Kompute framework provides a flexible interface that mobile, desktop, cloud, and edge applications can adopt to enable highly optimized GPU acceleration. The framework includes a high-level Python interface for advanced data processing use cases, as well as an extensible low-level C++ interface for high-performance, device-specific optimizations.

The core objective of the Kompute project is to contribute to and further the GPU computing ecosystem across both scientific and industry applications, through cross-vendor graphics card tooling and capabilities. We have seen very positive reception and adoption of Kompute across various development communities, including advanced data processing use cases in mobile applications, game development engines, edge devices, and the cloud, and we look forward to Kompute serving as a platform to empower the discourse on topics related to GPU acceleration for cross-industry applications.

Alejandro Saucedo, Chief Scientist at The Institute for Ethical AI & Machine Learning, said: “Many advanced machine learning and deep learning paradigms fit incredibly well to the parallel architecture that GPU computing offers, and exciting advancements in the open source ecosystem such as the Kompute Project are enabling developers to take advantage of GPU acceleration in cross-vendor mobile, desktop, cloud and edge applications. We believe the Linux Foundation is the right platform to encourage and empower the cross-industry discourse required to further the GPU acceleration ecosystem. We are proud for Kompute to become an LF AI & Data project, and we look forward to an ongoing collaboration.”

LF AI & Data supports projects via a wide range of services, and the first step is joining as an Incubation Project. Learn more about Kompute on their GitHub and join the Kompute-Announce and Kompute-Technical-Discuss mailing lists. We also encourage you to subscribe to the monthly Kompute GPU Acceleration Sessions to join the community and stay connected on the latest updates; subscription instructions are available here.

A warm welcome to Kompute! We look forward to the project’s continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with us, visit the LF AI & Data website.

Kompute Key Links

LF AI & Data Resources

 

TonY Joins LF AI & Data as New Incubation Project


The LF AI & Data Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI) and data, today announced TonY as its latest Incubation Project.

TonY is a framework to natively run deep learning jobs on Apache Hadoop. It currently supports TensorFlow, PyTorch, MXNet and Horovod. TonY enables running either single node or distributed training as a Hadoop application. This native connector, together with other TonY features, aims to run machine learning jobs reliably and flexibly. For a quick overview of TonY and comparisons to other frameworks, please see this presentation.

Dr. Ibrahim Haddad, Executive Director of LF AI & Data, said: “We’re excited to welcome the TonY project in LF AI & Data. The project enables data scientists with a way to manage the challenges associated with scaling up training; fitting well into our portfolio of projects and technologies. We look forward to working with the community to grow the project’s footprint and to create new collaboration opportunities for it with our members and other hosted projects.” 

For the past few years, TonY has been empowering all of LinkedIn’s production deep learning jobs to create more relevant content for the company’s 774+ million members. Externally, TonY has also been adopted by companies like iQiyi and integrated with Google Cloud. TonY also recently added support for Horovod, another LF AI & Data project.

Keqiu Hu, technical lead for the TonY project, said: “LinkedIn built TonY to make distributed deep learning training easy on Apache Hadoop for AI model developers. We are thrilled to contribute TonY to the community as the LF AI & Data Foundation’s newest project. The collaboration will further scale the impact of TonY in the open source community, and accelerate the advancement of AI in the industry.”

LF AI & Data supports projects via a wide range of services, and the first step is joining as an Incubation Project. LF AI & Data will support neutral open governance for the project to help foster its growth. Learn more about TonY on their GitHub and be sure to subscribe to the TonY-Announce and TonY-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

A warm welcome to TonY! We look forward to the project’s continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with us, visit the LF AI & Data website.

TonY Key Links


Connect with LF AI & Data at Open Source Summit + Embedded Linux Conference + OSPOCon 2021


The LF AI & Data Foundation is thrilled to be a part of the Open Source Summit + Embedded Linux Conference + OSPOCon 2021 event! This hybrid event, both in-person and virtual, is slated to take place September 27-30, 2021 in Seattle, Washington. 

Register for OSS+ELC+OSPOCon 2021 to attend. 

There are a variety of ways to participate at this conference. First, you’re welcome to join us in person! Make sure you check out all the health and safety guidelines for in-person attendance here. You can also attend virtually, with similar formats as events hosted in the past year. Information about the virtual program can be found in the FAQs linked here. We’ll also be hosting an AI & Data Track within the program. We hope to see you there!

 Attend Sessions in the AI & Data Track 

The LF AI & Data Foundation will be hosting an AI & Data Track at OSS+ELC+OSPOCon 2021. Join these sessions to learn the latest updates from our projects and hear from leaders in the AI & Data industry.

Chat with us in the Onsite or Virtual Exhibit Hall!

Our booth will be in the Bronze Hall at the event; please stop by and say hello! If you can’t make it in person, come chat with us in the virtual exhibit hall! Various LF AI & Data community members will be around during booth hours to answer any questions you have. You’ll also be able to get more information on how to get involved with the LF AI & Data Foundation.

The LF AI & Data Foundation mission is to build and support an open AI community, and drive open source innovation in AI and data by enabling collaboration and the creation of new opportunities for all the members of the community. 

Want to get involved with the LF AI & Data Foundation? Be sure to subscribe to our mailing lists to join the community and stay connected on the latest updates.


Acumos Elpis Release Now Available


Acumos, an LF AI & Data Foundation Graduated Project, has announced its latest software release, codenamed Elpis. We’re thrilled to see another great release from the community!

Acumos is a platform and open source framework that makes it easy to build, share, and deploy AI apps. Acumos standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment. This frees data scientists and model trainers to focus on their core competencies and accelerates innovation.

Major highlights of the Elpis release include:

  • Automation of Model Deployment with your own CI tool:
    • The Elpis release is a first step towards a fully automated process from model on-boarding to model deployment. It is now possible to connect Acumos with your own CI tool (Jenkins) in order to trigger Jenkins jobs that could be used in the future to deploy your Acumos model.
  • On-boarding:
    • Add “deploy” parameter in on-boarding API
    • Allow the use of a Swagger interface for R models
  • Micro-service:
    • Speed up micro-service creation for R models thanks to enhancements of the on-boarding-base-r image
    • Allow the use of JSON and Swagger URIs with R models
    • Fix micro-service generation failure after on-boarding
    • Create deployment backend
  • Portal-Marketplace:
    • Take into account “deploy” parameter in synchronous and asynchronous microservice generation
    • Create a new admin page to configure the Jenkins server parameters
  • Onboarding Python Client:
    • Add deploy parameter
    • Fix type issue with Python 3.9
  • Onboarding R Client:
    • Add deploy parameter
    • Allow the use of JSON and the Swagger UI
  • Onboarding Java Client:
    • Add deploy parameter
    • Update old dependencies
  • Onboarding C++ Client:
    • Add deploy parameter
  • Onboarding ONNX Client:
    • Add deploy parameter
    • Model bundle dump without use of configuration file
    • License profile addition when onboarding by CLI

The Acumos Project invites you to adopt or upgrade to the Elpis release and welcomes feedback. For details on the additional features and improvements, please refer to the release notes here and the Confluence page here. Want to get involved with Acumos? Be sure to join the Acumos-Announce and Acumos-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

Congratulations to the Acumos team! We look forward to continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with the Foundation, visit the LF AI & Data website.

Acumos Key Links


LF AI & Data Graduated Project Angel Releases Version 3.2.0


Angel, an LF AI & Data Foundation Graduated-Stage Project, has released version 3.2.0. Angel is a machine learning framework originally developed by Tencent as the first open-source AI project of the company. The Angel project joined the LF AI & Data Foundation in August 2018 as an Incubation-Stage project, and in December 2019 became a Graduated-Stage project with the support of the Foundation and its technical community. 

With full-stack facilities for the AI pipeline, from feature engineering to model training and inference, Angel provides an end-to-end, easy-to-use platform for engineers and scientists. In particular, it focuses on high-dimensional sparse model training and graph neural network learning at production scale. In the previous version, 3.1, Angel introduced graph learning for the first time and provided a set of well-optimized algorithms already adopted in a variety of applications. In release 3.2.0, Angel enhances the core of graph learning with numerous new functionalities and optimizations.

Flexible Architecture for Extension

In release 3.2.0, Angel’s graph learning framework is designed in three general-purpose layers: computing engine, operators, and models. This architecture decouples the high-level algorithms from the low-level graph data (vertex and edge) manipulations, so it extends well for both engineering enhancements and new model development. For example, the operator layer defines a group of primitive abstract operator interfaces, such as init(), get(), walk(), and sample(), that developers can easily implement in their own customized operators to extend the model.
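As a rough illustration of that operator layer, a minimal sketch in Python might look like the following. The class and implementation names here are hypothetical, not Angel's actual (JVM-based) API; only the init/get/walk/sample primitives come from the description above.

```python
import random
from abc import ABC, abstractmethod

class GraphOperator(ABC):
    """Abstract operator layer: algorithms call these primitives and
    never touch the underlying vertex/edge storage directly."""

    @abstractmethod
    def init(self, adjacency):
        """Attach the operator to graph data."""

    @abstractmethod
    def get(self, vertex):
        """Return the neighbors of a vertex."""

    @abstractmethod
    def walk(self, start, length):
        """Produce a random walk of at most `length` steps."""

    @abstractmethod
    def sample(self, vertex, k):
        """Sample up to k neighbors of a vertex."""

class UniformNeighborOperator(GraphOperator):
    """A toy implementation backed by an in-memory adjacency dict."""

    def init(self, adjacency):
        self.adj = adjacency

    def get(self, vertex):
        return self.adj.get(vertex, [])

    def walk(self, start, length):
        path = [start]
        for _ in range(length):
            neighbors = self.get(path[-1])
            if not neighbors:
                break
            path.append(random.choice(neighbors))
        return path

    def sample(self, vertex, k):
        neighbors = self.get(vertex)
        return random.sample(neighbors, min(k, len(neighbors)))

op = UniformNeighborOperator()
op.init({0: [1, 2], 1: [0, 2], 2: [0]})
print(op.get(1))      # [0, 2]
print(op.walk(0, 3))  # e.g. [0, 2, 0, 1]
```

Because algorithms only depend on the abstract interface, swapping in a distributed or disk-backed operator does not change the model code, which is the extensibility the layered design aims for.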

Hybrid Running Mode for Best Performance

There are two main kinds of running modes for large-scale graph learning algorithms: Parameter Server (PS) and MPI. They involve different volumes of communication messages during learning for different models, such as graph embeddings and neural networks, so it is hard to accommodate all models with good performance using just one running mode. Version 3.2.0 adds a hybrid running mode that combines the PS and MPI communication methods and leverages the advantages of both. This hybrid mode can significantly speed up the training process of graph traversal algorithms.

Adaptive Model Data Partitioning

For a big graph model that cannot be loaded on a single machine, the model data usually needs to be partitioned into several parts across several machines. Range partitioning and hash partitioning are the two commonly used methods: the former uses less memory but may cause load skew among machines, while the latter achieves good load balance at the cost of much more memory. In this release, Angel can automatically and adaptively choose between range and hash partitioning according to the model, striking a good tradeoff between memory cost and load balancing.
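The tradeoff between the two schemes can be sketched in a few lines of plain Python (a toy illustration, not Angel's implementation). With keys clustered at the low end of the key space, range partitioning piles everything onto one machine, while hashing spreads the load evenly:

```python
import bisect
from collections import Counter

def range_partition(key, boundaries):
    """Partition i holds keys in [boundaries[i-1], boundaries[i]).
    Cheap to store (just the boundary list), but skewed keys skew load."""
    return bisect.bisect_right(boundaries, key)

def hash_partition(key, num_partitions):
    """Hashing balances load regardless of the key distribution, at the
    cost of more per-key bookkeeping memory."""
    return hash(key) % num_partitions

# Model indices clustered at the low end of the key space:
keys = list(range(100))
boundaries = [1000, 2000, 3000]  # 4 range partitions

range_load = Counter(range_partition(k, boundaries) for k in keys)
hash_load = Counter(hash_partition(k, 4) for k in keys)
print(range_load)  # Counter({0: 100}) -- all load lands on one partition
print(hash_load)   # Counter({0: 25, 1: 25, 2: 25, 3: 25})
```

An adaptive scheme like the one described above would inspect the model's key distribution and pick whichever strategy wins for that model.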

Support for Heterogeneous Graph Learning

The structure of a graph is usually heterogeneous, with multiple types of edges between each pair of vertices and multiple types of vertex attributes. This complexity raises challenges for the graph learning framework in terms of storage and computing. To support heterogeneous graphs, Angel optimizes the data structure of graph storage for fast I/O and provides an interface for users to implement customized PS functions, so that heterogeneous graph learning algorithms can be easily executed on the Angel framework, even with high-dimensional sparse attributes on each graph vertex. Based on these optimizations, Angel has implemented several built-in heterogeneous models, including HAN, GAT, GraphSAGE, IGMC Prediction, and Bipartite-GraphSAGE.

Performance Enhancement

Learning on a huge graph with around 100 billion edges is very challenging in terms of stability and performance. Angel has been deeply enhanced for this kind of huge-graph problem, which is increasingly common in real applications such as social network mining and shopping recommendations. With this enhancement, K-core and Common Friends model training can be three times faster than before, while reducing memory cost by a further 30%.

Try It!

The Angel Project invites you to adopt or upgrade to version 3.2.0 and welcomes feedback. For details on the additional features and improvements, please refer to the release notes here. Want to get involved with Angel? Be sure to join the Angel-Announce and Angel-Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 

Congratulations to the Angel team! We look forward to continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with the Foundation, visit the LF AI & Data website.

Angel Key Links


OpenLineage Joins LF AI & Data as New Sandbox Project


The LF AI & Data Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI) and data, today announced OpenLineage as its latest Sandbox Project.

Released and open sourced by Datakin, OpenLineage is an open standard for metadata and lineage collection designed to instrument jobs as they are running. It defines a generic model of run, job, and dataset entities identified using consistent naming strategies. The core lineage model is extensible by defining specific facets to enrich those entities.
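For a rough feel of that model, here is a minimal run event sketched in Python. The namespace, job, and dataset names are illustrative, and spec-required facet metadata (such as producer and schema URLs on each facet) is omitted for brevity; consult the OpenLineage specification for the authoritative schema.

```python
import json
import uuid
from datetime import datetime, timezone

# One run of one job, reading and writing named datasets.
event = {
    "eventType": "COMPLETE",
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "producer": "https://example.com/my-scheduler",  # illustrative URI
    "run": {"runId": str(uuid.uuid4())},
    "job": {"namespace": "example-pipelines", "name": "daily_orders_etl"},
    "inputs": [{"namespace": "warehouse", "name": "raw.orders"}],
    "outputs": [
        {
            "namespace": "warehouse",
            "name": "analytics.orders_daily",
            # Facets enrich an entity without changing the core model.
            "facets": {
                "schema": {"fields": [{"name": "order_id", "type": "BIGINT"}]}
            },
        }
    ],
}
print(json.dumps(event, indent=2))
```

Consistent naming of jobs and datasets across tools is what lets independently emitted events be stitched into one lineage graph.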

OpenLineage is a cross-industry effort involving contributors from major open source data projects, including the LF AI & Data projects Marquez, Amundsen, and Egeria. Without OpenLineage, each project has to instrument all jobs itself, and integrations are external and can break with new versions. With OpenLineage, the integration effort is shared, and integrations can be pushed into each project, meaning users will not need to play catch-up.

(Diagrams: data lineage integration effort before and with OpenLineage.)

Dr. Ibrahim Haddad, Executive Director of LF AI & Data, said: “We are excited to welcome the OpenLineage project in LF AI & Data. The project addresses a critical component in governing AI and data projects and further expands the robustness of our portfolio of hosted technical projects. We look forward to working with the OpenLineage project to grow the project’s footprint in the ecosystem, expand its community of adopters and contributors, and to foster the creation of collaboration opportunities with our members and other related projects.”

Julien Le Dem, founder of OpenLineage, said: “Data lineage is a complicated and multidimensional problem; the best solution is to directly observe the movement of data through heterogeneous pipelines. That requires the kind of broad industry coordination that the Linux Foundation has become known for. We are proud for OpenLineage to become an LF AI & Data project, and look forward to an ongoing collaboration.”

LF AI & Data supports projects via a wide range of services, and the first step is joining as an Incubation Project. Learn more about OpenLineage on their GitHub and be sure to join the OpenLineage-Announce and OpenLineage-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

A warm welcome to OpenLineage! We look forward to the project’s continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with us, visit the LF AI & Data website.

OpenLineage Key Links


Substra Joins LF AI & Data as New Incubation Project


The LF AI & Data Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI) and data, today announced Substra as its latest Incubation Project.

Substra is a framework offering distributed orchestration of machine learning tasks among partners while guaranteeing secure and trustless traceability of all operations. The Substra project was released and open sourced by OWKIN under the Apache-2.0 license. 

Substra enables privacy-preserving federated learning projects, where multiple parties collaborate on a Machine Learning objective while each one keeps their private datasets behind their own firewall. Its ambition is to make new scientific and economic data science collaborations possible.

Data scientists using the Substra framework are able to:

  • Use their own ML algorithm with any Python ML framework
  • Ship their algorithm to remote data for training and/or prediction and monitor its performance
  • Build advanced Federated Learning strategies for learning across several remote datasets

Data controllers using the Substra framework are able to:

  • Make their dataset(s) available to other partners for training/evaluation, ensuring it cannot be viewed or downloaded
  • Choose fine-tuned permissions for their dataset(s) to control their lifecycle
  • Monitor how the data was used
  • Engage in advanced multi-partner data science collaborations, even with partners owning competing datasets

Dr. Ibrahim Haddad, Executive Director of LF AI & Data, said: “We’re excited to welcome the Substra project in LF AI & Data. The project enables data scientists to use their own ML algorithm with any Python framework, deploy their algorithm on remote data for training and/or prediction and monitor their performances, and build advanced Federated Learning strategies for learning across several remote datasets. We look forward to working with the community to grow the project’s footprint and to create new collaboration opportunities for it with our members and other hosted projects.” 

Substra operates distributed machine learning and aims to provide tools for traceable data science.

  • Data Locality: Data remains in the owner’s data stores and is never transferred. AI models travel from one dataset to another.
  • Decentralized Trust: All operations are orchestrated by a distributed ledger technology. There is no need for a single trusted actor or third party; security arises from the network.
  • Traceability: An immutable audit trail registers all operations performed on the platform, simplifying model certification.
  • Modularity: Substra is highly flexible; various permission regimes and workflow structures can be enforced to match each specific use case.
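This is not the Substra API, but the federated pattern it orchestrates can be sketched in a few lines of plain Python: each partner computes an update on its own private data, and only model parameters (here a single weight) ever leave the silo before being averaged.

```python
def local_step(w, data, lr=0.1):
    """One toy gradient step of 1-D least squares (y = w * x) on a
    partner's private data. Only the updated weight leaves the silo;
    the raw (x, y) samples never do."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_average(weights):
    """FedAvg-style aggregation of the partners' updates."""
    return sum(weights) / len(weights)

# Two partners hold disjoint samples of the same relation y = 3 * x.
partner_a = [(1.0, 3.0), (2.0, 6.0)]
partner_b = [(3.0, 9.0), (4.0, 12.0)]

w = 0.0
for _ in range(50):
    w = federated_average([local_step(w, partner_a),
                           local_step(w, partner_b)])
print(round(w, 2))  # 3.0 -- learned without ever pooling the raw data
```

A production framework like Substra layers orchestration, permissions, and an audit trail on top of this basic model-travels-to-data loop.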

Camille Marini, Founder of the Substra project, said: “On behalf of all people who contributed to the Substra framework, I am thrilled and proud that it has been accepted as an incubation project in the LF AI & Data Foundation. Substra has been designed to enable the collaboration / cooperation around the creation of ML models from distributed sources of sensitive data. Indeed, we believe that making discoveries using ML cannot be done without making sure that data privacy and governance are not compromised. We also believe that collaboration between data owners and data scientists is key to be able to create good ML models. These values are shared with the Linux Foundation AI and Data, which thus appears as the perfect host for the Substra project. We hope that it will bring value in the AI & Data community.”

Eric Boniface, General Manager of Substra Foundation, said: “We are very happy and proud at Substra Foundation to see the Substra project becoming an LF AI & Data hosted project. Having been its first umbrella for the open source community, hosting the repositories, elaborating the documentation, animating community workgroups and contributing to first real-world flagship use cases like the HealthChain and MELLODDY projects was an incredible experience shared with the amazing Owkin team developing the framework. It was only a first step at a moderate scale, and we are convinced that joining an experienced and global foundation like the LF AI & Data as an incubation project is a great opportunity and the perfect next chapter for the Substra project, its community, and many more privacy-preserving federated learning use cases to come!”.

LF AI & Data supports projects via a wide range of services, and the first step is joining as an Incubation Project. LF AI & Data will support neutral open governance for Substra to help foster the growth of the project. Learn more about Substra on their GitHub and be sure to join the Substra-Announce and Substra-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

A warm welcome to Substra! We look forward to the project’s continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with us, visit the LF AI & Data website.

Substra Key Links


New LF AI & Data Members Welcome – Q2 2021


We are excited to welcome five new members to the LF AI & Data Foundation. OPPO Mobile Telecommunications Corp has joined as a Premier member, GSI Technology as a General member and Banque de France, Chaitanya Bharathi Institute of Technology, and Sahyadri College of Engineering & Management as Associate members. 

The LF AI & Data Foundation will build and support an open community and a growing ecosystem of open source AI and data by accelerating development and innovation, enabling collaboration and the creation of new opportunities for all the members of the community.

“We are thrilled to continue seeing growth among our member community spanning a wide range of organizations. We see huge potential for driving AI and data innovation, and the support from our members is critical to the success of that effort. A big welcome to our newest members, and we hope more organizations will join us to support the LF AI & Data Foundation mission,” said Dr. Ibrahim Haddad, LF AI & Data Foundation Executive Director.

Premier Members

The LF AI & Data Premier membership is for organizations who contribute heavily to open source AI and data as well as bring in their own projects to be hosted at the Foundation. These companies want to take the most active role in enabling open source AI and Data. Premier members also lead via their voting seats on the Governing Board, Technical Advisory Council, and Outreach Committee.

Learn more about the newest Premier member below:

OPPO is a leading global smart device brand. Since the launch of its first smartphone – “Smiley Face” – in 2008, OPPO has been in relentless pursuit of the perfect synergy of aesthetic satisfaction and innovative technology. Today, OPPO provides a wide range of smart devices spearheaded by the Find and Reno series. Learn more here.

General Members

The LF AI & Data General membership is targeted for organizations that want to put their organization in full view in support of LF AI & Data and our mission. Organizations that join at the General level are committed to using open source technology, helping LF AI & Data grow, voicing the opinions of their customers, and giving back to the community.

Learn more about the newest General member below:

GSI Technology, Inc. is a leading provider of SRAM semiconductor memory solutions. GSI’s newest products leverage its market-leading SRAM technology. The Company recently launched radiation-hardened memory products for extreme environments and the Gemini® APU, a memory-centric associative processing unit designed to deliver performance advantages for diverse AI applications. Learn more here.

Associate Members

The LF AI & Data Associate membership is reserved for pre-approved non-profits, open source projects, and government entities who support the LF AI & Data mission.

Learn more about the newest Associate members below: 

The Banque de France is the French pillar of the Eurosystem, a federal system formed by the European Central Bank and the national central banks of the euro area. Its three main missions are monetary strategy, financial stability and the provision of economic services to the community.

Chaitanya Bharathi Institute of Technology, established in 1979, is esteemed as a premier engineering institute in the states of Telangana and Andhra Pradesh. It was promoted by a group of visionaries from the engineering, medical, legal, and management professions, with the objective of providing the best engineering and management education to students and contributing to the country’s need for skilled, technically conversant engineers and management professionals as it embarked on an economic growth plan. Learn more here.

Sahyadri College of Engineering and Management (SCEM), Mangaluru was established in 2007 under the Bhandary Foundation. SCEM is one of the premier technological institutions, inculcating quality, value-based education through an innovative teaching-learning process for the holistic development of its graduates. The institute is affiliated to Visvesvaraya Technological University (VTU), Belagavi, with permanent affiliation for most of its programs, and is approved by the AICTE and the Government of Karnataka. Learn more here.

Welcome New Members!

We look forward to partnering with these new LF AI & Data Foundation members to help support open source innovation and projects within the artificial intelligence (AI) and data space. Welcome to our new members!

Interested in joining the LF AI & Data community as a member? Learn more here and email membership@lfaidata.foundation for more information and/or questions. 


LF AI & Data Foundation Announces Graduation of Milvus Project


The LF AI & Data Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI) and data, is announcing today that hosted project Milvus is advancing from the Incubation level to the Graduate level. This graduation is the result of Milvus demonstrating thriving adoption, an ongoing flow of contributions from multiple organizations, and both documented and structured open governance processes. Milvus has also achieved a Core Infrastructure Initiative Best Practices Badge and demonstrated a strong commitment to its community of users and contributors.

Milvus is an open-source vector database built to manage embedding vectors generated by machine learning models and neural networks. The platform is widely used in applications such as computer vision, natural language processing, computational chemistry, personalized recommender systems, and more. The Milvus project extends the capabilities of best-in-class approximate nearest neighbor (ANN) search libraries including Faiss, NMSLIB, and Annoy with a cloud-native database system design. Built with machine learning operations (MLOps) in mind, Milvus provides an efficient, reliable, and flexible database component that contributes to simplified management of the entire machine learning model lifecycle. Milvus has been adopted by over 1,000 organizations worldwide including iQiyi, Kingsoft, Tokopedia, Trend Micro, and more. More than 2,300 developers have joined the Milvus open-source community on GitHub, Slack, mailing lists, and WeChat.
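To illustrate the kind of query a vector database answers, here is a stdlib-only, brute-force sketch of nearest-neighbor search over embeddings; the vectors below are made up, and Milvus and the ANN libraries it builds on exist precisely to make this scale far beyond a Python loop:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, vectors, k=2):
    """Exact k-nearest-neighbor search by cosine similarity. ANN indexes
    (Faiss, NMSLIB, Annoy) trade a little recall for huge speedups."""
    scored = sorted(vectors.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Toy 3-dimensional "embeddings" produced by some hypothetical model.
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.9, 0.4],
}
print(top_k([1.0, 0.0, 0.0], embeddings))  # ['cat', 'dog']
```

A vector database wraps this core operation with storage, indexing, filtering, and the cloud-native lifecycle management described above.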

Originally developed and open sourced by Zilliz, Milvus joined LF AI & Data as an Incubation project in January 2020. As an Incubation project, it has benefited from LF AI & Data’s various enablement services to foster its growth and adoption, including program management support, event coordination, legal services, and marketing services ranging from website creation to project promotion.

“Milvus is a great example of a project that joined us in its early stages and grew significantly with the enablement of our services to graduate as a sign of maturity, functioning open governance, and large-scale adoption,” said Dr. Ibrahim Haddad, Executive Director of the LF AI & Data Foundation. “The development activities, the growth of its users and contributors community, and its adoption is particularly noteworthy. Milvus meets our graduation criteria and we’re proud to be its host Foundation. As a Graduate project, we will continue to support it via an extended set of services tailored for Graduated projects. We’re also excited that the project is now eligible for a voting seat on LF AI & Data’s Technical Advisory Council. Congratulations, Milvus!”

“We have made significant progress since Milvus joined the LF AI & Data Foundation 16 months ago. With all the good support from the Foundation, we have grown a mature community around the Milvus project. We have also found a lot of collaboration opportunities with other members and projects in the Foundation. It helped us a lot in promoting the Milvus project,” said Milvus project lead Xiaofan Luan.

Milvus in Numbers

The stats below capture Milvus’ development efforts as of their graduation in June 2021:

  • Contributors on GitHub: 140 
  • GitHub stars: 6.4K
  • GitHub forks: 887
  • Docker hub downloads: 320K
  • Known community members: 2.3K
(Source: LFX Insights stats on the Milvus project)

Curious about how to get involved with Milvus? 

Check out the Milvus Quickstart Guide and be sure to join the Milvus Announce and Milvus Technical-Discuss mailing lists to join the community and stay connected on the latest updates. Learn more about Milvus on their website and GitHub.

Congratulations to the Milvus team! We look forward to continued growth and success as part of the LF AI & Data Foundation. To learn about hosting an open source project with us, visit the LF AI & Data Foundation website.

Milvus Key Links


Adlik Cheetah Release (v0.3.0) Now Available!


Adlik, an LF AI & Data Foundation Incubation-Stage Project, has released version 0.3.0, codenamed Cheetah. Adlik is a toolkit for accelerating deep learning inference that provides overall support for bringing trained models into production and eases the learning curve across different kinds of inference frameworks. In Adlik, the Model Optimizer and Model Compiler deliver optimized and compiled models for a given hardware environment, and the Serving Engine provides deployment solutions for cloud, edge, and device.

In version 0.3.0, Cheetah, you’ll find more frameworks integrated, and the Adlik Optimizer succeeds in boosting the inference performance of models. In an MLPerf test, a ResNet-50 model optimized by the Adlik optimizer saw its model size compressed by 93% and its inference latency reduced to 1.33 ms. In the Adlik compiler, TVM auto scheduling, which globally and automatically searches for the optimal scheduling solution by re-designing scheduling templates, enables lower latency for ResNet-50 on x86 CPUs than OpenVINO. This release enhances features, increases usability, and continues to showcase improvements across a wide range of scenarios. A few release highlights to note include the following:

  • Compiler
    • Integrate deep learning frameworks including PaddlePaddle, Caffe and MXNet
    • Support compiling into TVM
    • Support FP16 quantization for OpenVINO
    • Support TVM auto scheduling
  • Optimizer
    • Specific optimization for YOLO V4
    • Pruning, distillation and quantization for ResNet-50
  • Inference Engine
    • Support runtime of TVM and TF-TRT
    • Docker images for cloud native environments support newest version of inference components including OpenVINO (2021.1.110), TensorFlow (2.4.0), TensorRT (7.2.1.6), TFLite (2.4.0), TVM (0.7)
  • Benchmark Test
    • Support Paddle models, such as Paddle OCR, PP-YOLO, and PP-ResNet-50
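The quantization mentioned among these highlights can be illustrated with a toy sketch of symmetric INT8 post-training quantization. This is only the basic arithmetic; real toolchains (including Adlik's optimizer, OpenVINO, and TensorRT) calibrate scales per layer on representative data.

```python
def quantize_int8(values):
    """Symmetric post-training quantization: map FP32 values onto the
    INT8 range using a single scale factor derived from the max
    absolute value."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate FP32 values from the INT8 codes."""
    return [q * scale for q in quantized]

weights = [0.05, -0.30, 0.27, 0.0]
q, scale = quantize_int8(weights)
print(q)  # [21, -127, 114, 0] -- each weight now fits in one byte
print([round(v, 3) for v in dequantize(q, scale)])  # [0.05, -0.3, 0.269, 0.0]
```

Each weight shrinks from four bytes to one, and the rounding error is bounded by half the scale factor, which is why careful calibration keeps the accuracy loss small.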

A special thank you goes out to contributors from Paddle for their support in this release. Your contributions are greatly appreciated! 

The Adlik Project invites you to adopt or upgrade to Cheetah, version 0.3.0, and welcomes feedback. To learn more about the Adlik 0.3.0 release, check out the full release notes. Want to get involved with Adlik? Be sure to join the Adlik-Announce and Adlik Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 

Congratulations to the Adlik team! We look forward to continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with us, visit the LF AI & Data website.

Adlik Key Links
