The event was held virtually this year, with over 4,100 people from 109 countries attending!
The AI/ML/DL Track was well attended, with several AI industry leaders sharing knowledge and insights, and engaging with attendees in several great discussions, both during the sessions and afterwards in the official OSS+ELC Slack workspace.
Miss the sessions? If you registered for OSS NA, you can view all of the session recordings on demand on the virtual event platform at any time. Or check back on the OSS NA website soon for a link to the session recordings on YouTube.
The virtual LF AI Foundation booth was also a hit! We had over 300 people visit the booth and interact with our LF AI community members who were staffing the booth. Visitors were also able to download resources to learn more about each of LF AI’s projects, and they were able to give us insight into what type of projects they want to see in the future through our attendee survey.
On the final day of the conference, we hosted our LF AI Mini Summit. Our presenters covered the latest updates from the Foundation, Technical Advisory Council, Trusted AI Committee, and more. We had over 160 people attend the Summit, and received a lot of interest in what the LF AI Foundation is doing. We’re looking forward to more engagement opportunities with the community in the future.
Miss the Summit? If you registered for OSS NA, you can view the Summit recording on demand on the virtual event platform at any time. Or check back on the OSS NA website soon for a link to the session recordings on YouTube.
The LF AI Foundation mission is to build and support an open AI community, and drive open source innovation in the AI, ML, and DL domains by enabling collaboration and the creation of new opportunities for all the members of the community.
Want to get involved with the LF AI Foundation? Be sure to subscribe to our mailing lists to join the community and stay connected on the latest updates.
2020 has been a busy year for the LF AI Foundation (LF AI) and we are thrilled to see the continued enthusiasm among the overall community and the growth of our hosted technical projects. With half the year behind us, we’re taking a moment to reflect on the key highlights.
LF AI launched two years ago with ten members and has now grown to 24 members across our Premier, General, and Associate levels. In the first half of 2020, we’ve seen extra momentum in our Associate member category, with several educational institutions joining us, including the Montreal AI Ethics Institute, Pranveer Institute of Technology, and Penn State Great Valley. We also welcomed two non-profit organizations, AI for People and Ambianic.ai, both of which have been very active in the LF AI community from the start.
It’s been great to see a diverse group of companies across various industries getting involved with LF AI. We welcome those interested in supporting open source projects in the artificial intelligence (AI), machine learning (ML), and deep learning (DL) space to learn more about membership opportunities here.
Our technical project portfolio grew to twelve projects, of which three are Graduated and nine are Incubating. At the end of June, the LF AI Technical Advisory Council (TAC) approved three additional Incubating projects in the Trusted AI space; these projects are being onboarded into the Foundation and will be formally announced soon, so stay tuned! The TAC is continually working to bring in new open source projects. If you are interested in hosting a project with LF AI, check out the proposal process here and email firstname.lastname@example.org to discuss further.
The LF AI Interactive Landscape has continued to be a great tool for gaining insights into how LF AI projects, among many others, fit into the space of open source AI, ML, and DL. As of the end of June, the landscape covers 248 projects from over 130 organizations and universities. These projects have collectively earned over 1.4 million GitHub stars and cover over 450 million lines of code from over 30 thousand developers! Explore the landscape, and please reach out to help us expand it with your own open source project, or let us know of other projects that should be included by emailing email@example.com.
We are excited to have seen participation increase in two key initiatives. The ML Workflow & Interop Committee is focused on defining an ML workflow and promoting cross-project integration and interoperability. The Trusted AI Committee is focused on creating policies, guidelines, tooling, and use cases by industry in this very important space. Both committees are open for participation, and we welcome anyone interested to join the conversations by subscribing to the mailing lists or attending an upcoming meeting; check out their wiki pages for more information.
Despite the challenges COVID-19 has posed for in-person gatherings, our community did not let that stop its planned events, and instead pivoted to virtual formats. There have been two LF AI Days this year: the first focused on an ONNX Community Virtual Meetup, followed by a Virtual LF AI Day EU for those based in that region. LF AI Days are regional, one-day events hosted and organized by local members with support from LF AI and its projects. Visit our LF AI Events page for more details on upcoming events and be sure to join us for one soon!
The LF AI community continues to grow! If you haven’t already, check out below a few ways to stay connected with LF AI:
We are excited to see what the second half of 2020 brings and how LF AI can influence the AI, ML, and DL space; we hope you will be a part of the journey! Check out our How to Get Involved Guide or email us at firstname.lastname@example.org for any questions on how to participate.
Sparklyr, an LF AI Foundation Incubation Project, has released version 1.3.0! Sparklyr is an R Language package that lets you analyze data in Apache Spark, the well-known engine for big data processing, while using familiar tools in R. The R Language is widely used by data scientists and statisticians around the world and is known for its advanced features in statistical computing and graphics.
In version 1.3.0, sparklyr adds a variety of improvements; highlights include:
Now supports seamless integration of Spark higher-order functions with R (similar to how dplyr allows R users to compose clear and concise data-manipulation verbs instead of long SQL queries)
In response to popular demand for Apache Avro functionality in sparklyr, the spark_read_avro, spark_write_avro, sdf_from_avro, and sdf_to_avro methods were implemented to make working with Apache Avro simpler for sparklyr users (context: Apache Avro is a popular data serialization format that combines the flexibility of JSON schema definitions with the efficiency of binary serialization of data columns)
It is now also possible to run user-defined R serialization and deserialization procedures on Spark worker nodes through sparklyr
The power of open source lies in the aggregate contributions from different community members and organizations, which collectively drive the advancement of projects and their roadmaps. The sparklyr community is a great example of this process and was instrumental in producing this release. The sparklyr team wanted to give a special THANK YOU to the following community members for their contributions via pull requests (listed in chronological order):
Contributions take many forms: roadmap input for sparklyr 1.3 from Javier Luraschi (#2434 and #2552), and great insight from @mattpollock and @benmwhite on several issues (#1773, #2514). Truly a great team effort for this release!
Congratulations to the sparklyr team and we look forward to continued growth and success as part of the LF AI Foundation! To learn about hosting an open source project with us, visit the LF AI Foundation website.
Join the LF AI Foundation at the AI Open Day in Shenzhen, China on July 4, 2020. This event is hosted by GDG and OSTech, and LF AI is thrilled to participate in this series again.
Come listen to representatives of five member companies (Baidu, Huawei, Tencent, ZILLIZ, and ZTE) from the LF AI Foundation who will share their AI experiences, present the problems they’re facing, and discuss how they go about solving them.
AI frameworks are the foundation and common tool set of AI development, solving many practical problems in both academic research and day-to-day production work. LF AI Foundation member companies were invited to discuss these topics at the first AI Open Day event in May. Now we will continue the discussion, reviewing progress and the latest developments on these topics.
A big thank you to Orange for hosting our first virtual LF AI Day EU! The event was held on June 22, 2020 and was a great success with close to 50 attendees joining live.
The event featured keynote speakers from leading operators in the AI industry and startups with a focus on open source strategies for machine learning and deep learning. Various AI topics were covered, including technical presentations from EU startups, demonstrations of AI Marketplace, and discussions of LF AI projects.
The LF AI Day was an opportunity to discuss the evolution of AI, the history of LF AI, and innovative AI products from EU startups.
After a welcome from Orange’s Claire Chauvin, François Jezequel presented the evolution towards Artificial General Intelligence (AGI), defined as the representation of generalized human cognitive abilities in AI software. He gave examples of research projects on neuroscience and on knowledge representation and reasoning.
Anwar Aftab, from AT&T Labs, presented the new functionality in the latest Acumos AI release, Demeter, as a portable AI platform, and showed how AT&T is using AI for network automation in Open RAN and NFV/ONAP.
Several European startups presented their innovative AI products, starting with an advanced chatbot for conversational AI from Levia, AI empowering HR from Xtramile, a neural search engine from Jina AI, and visual automation from Ximilar.
Orange also demonstrated its AI Marketplace during the event, a European platform to accelerate the delivery of AI to the enterprise market.
During the event, the French open source RosaeNLG (Natural Language Generation) project was introduced, as was research from Université de Paris on document co-clustering.
Missed the event? Check out the presentations here.
This event was held in a virtual format but we look forward to connecting again at another event in person soon. LF AI Day is a regional, one-day event hosted and organized by local members with support from LF AI, its members, and its projects. If you are interested in hosting an LF AI Day, please email email@example.com to discuss.
Event host, Orange, is a leading telecommunications company with headquarters in France. They are the largest telecoms operator in France, with the bulk of their operations in Europe, Africa and the Middle East.
As an LF AI General Member, Orange is involved within the LF AI Governing Board, Outreach Committee, Trusted AI Committee and an active contributor to the LF AI Acumos project.
In version 0.10.0, Milvus delivers quite a few improvements and bug fixes:
Updates the Milvus configuration file. #2510
Optimizes the index building performance when many small segments exist. #2373
Upgrades FAISS to 1.6.3. #2381
Optimizes dropping collection performance when too many partitions exist. #2394
Optimizes the implementation of k-selection algorithm for GPU-enabled Milvus. #2466
Fixes a search performance degradation introduced in Milvus 0.9.1. #2429
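The k-selection step referenced above keeps only the k best candidates while scanning similarity scores, rather than fully sorting them. Here is a minimal CPU-side sketch of the idea in plain Python (an illustration only, not Milvus’s GPU implementation; `top_k_smallest` is a hypothetical helper name):

```python
import heapq

def top_k_smallest(distances, k):
    # Maintain a max-heap of size k (values negated, since heapq is a
    # min-heap) so each new distance is compared against the current
    # k-th best in O(log k) instead of sorting everything.
    heap = []  # stores (-distance, index)
    for i, d in enumerate(distances):
        if len(heap) < k:
            heapq.heappush(heap, (-d, i))
        elif -heap[0][0] > d:
            heapq.heapreplace(heap, (-d, i))
    # Return (distance, index) pairs, nearest first.
    return sorted((-nd, i) for nd, i in heap)

# e.g. top_k_smallest([0.9, 0.1, 0.5, 0.3], 2) → [(0.1, 1), (0.3, 3)]
```

A GPU k-selection replaces the heap with warp-level parallel primitives, but the contract is the same: return the k smallest distances and their indices.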
The Milvus Project invites you to adopt or upgrade to version 0.10.0 in your application, and welcomes feedback. To learn more about the Milvus 0.10.0 release, check out the change log and full release notes. Want to get involved with Milvus? Be sure to join the Milvus-Announce and Milvus Technical-Discuss mailing lists to join the community and stay connected on the latest updates.
Congratulations to the Milvus team! We look forward to continued growth and success as part of the LF AI Foundation. To learn about hosting an open source project with us, visit the LF AI Foundation website.
EDL is a framework with the ability to dynamically adjust the parallelism (number of training workers) for deep neural network training. It supports multi-tenant cluster management to balance job completion time against job waiting time and to maximize the use of idle resources. This project contains the EDL framework and its applications, such as distillation and NAS.
Major features of the EDL 0.3.0 release include:
Support for elastic training alongside inference-type services, such as knowledge distillation.
Automatic registration of inference-type services through service discovery in EDL.
Knowledge distillation examples in computer vision and natural language processing.
A future release will automatically adjust teachers based on traffic.
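Knowledge distillation, mentioned in the examples above, trains a small student model to match a large teacher’s temperature-softened output distribution. A minimal sketch of the standard objective in plain Python (an illustration of the general technique, not EDL’s API):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # knowledge about relative class similarities.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the softened teacher and student distributions;
    # in practice this is mixed with the ordinary hard-label loss.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
```

In an elastic setting like EDL’s, the teacher’s soft predictions are served by separate inference workers, which is why service discovery for inference-type services matters.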
The EDL Project invites you to adopt or upgrade to version 0.3.0, and welcomes feedback. To learn more about the EDL 0.3.0 release, check out the full release notes. Want to get involved with EDL? Be sure to join the EDL-Announce and EDL Technical-Discuss mailing lists to join the community and stay connected on the latest updates.
Congratulations to the EDL team! We look forward to continued growth and success as part of the LF AI Foundation. To learn about hosting an open source project with us, visit the LF AI Foundation website.
The LF AI Foundation (LF AI), the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML), and deep learning (DL), today is announcing Marquez as its latest Incubation Project. Marquez is an open source metadata service for the collection, aggregation, and visualization of a data ecosystem’s metadata. It maintains the provenance of how datasets are consumed and produced, provides global visibility into job runtime and frequency of dataset access, centralizes dataset lifecycle management, and much more.
“The Marquez community is excited to join the LF AI. This is the next step for Marquez to become an integral part of the wider data community and be the standard for lineage and metadata collection,” said Julien Le Dem, CTO of Datakin. “We are very pleased to welcome Marquez to LF AI. Machine learning requires high quality data pipelines and Marquez gives visibility into data quality, enables reproducibility, facilitates operations, and builds accountability and trust,” said Dr. Ibrahim Haddad, Executive Director of LF AI. “We look forward to supporting this project and helping it to thrive under a neutral, vendor-free, and open governance.” LF AI supports projects via a wide range of benefits, and the first step is joining as an Incubation Project. Full details on why you should host your open source project with LF AI are available here.
Marquez enables highly flexible data lineage queries across all datasets, while reliably and efficiently associating (upstream, downstream) dependencies between jobs and the datasets they produce and consume.
Marquez is a modular system and has been designed as a highly scalable, highly extensible platform-agnostic solution for metadata management. It consists of the following system components:
Metadata Repository: Stores all job and dataset metadata, including a complete history of job runs and job-level statistics (e.g., total runs, average runtimes, success/failure counts).
Metadata API: RESTful API enabling a diverse set of clients to begin collecting metadata around dataset production and consumption.
Metadata UI: Used for dataset discovery, connecting multiple datasets and exploring their dependency graph.
Marquez’s data model emphasizes immutability and timely processing of datasets. Datasets are first-class values produced by job runs. A job run is linked to versioned code, and produces one or more immutable versioned outputs. Dataset changes are recorded at different points in job execution via lightweight API calls, including the success or failure of the run itself.
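As a rough sketch, the lifecycle above might be reported to the Metadata API with a JSON payload along these lines (the field names and structure here are illustrative assumptions for this sketch, not Marquez’s exact schema):

```python
import json

# Illustrative payload for reporting a completed job run and the versioned
# dataset it produced; all field names below are assumptions, not
# Marquez's actual schema.
run_event = {
    "job": {"namespace": "analytics", "name": "daily_orders"},
    "codeVersion": "git:3f2a9c1",  # each run is linked to versioned code
    "state": "COMPLETED",          # success/failure of the run is recorded
    "outputs": [
        # each run produces one or more immutable versioned outputs
        {"dataset": "orders_summary", "version": "2020-07-01T00:00:00Z"},
    ],
}

body = json.dumps(run_event)  # a client would POST this to the Metadata API
```

See the Marquez documentation for the actual endpoints and request bodies.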
A warm welcome to Marquez and we look forward to the project’s continued growth and success as part of the LF AI Foundation. To learn about how to host an open source project with us, visit the LF AI website.
Adlik, an LF AI Foundation Incubation-Stage Project, has released version 0.1.0. We’re thrilled to see a release from this community, which has been hard at work over the past few months! Adlik is a toolkit for accelerating deep learning inference; it provides end-to-end support for bringing trained models into production and eases the learning curve across different kinds of inference frameworks. In Adlik, the Model Optimizer and Model Compiler deliver optimized, compiled models for a given hardware environment, and the Serving Engine provides deployment solutions for cloud, edge, and device.
In version 0.1.0, Adlik enhances features, improves usability, and fixes miscellaneous bugs. A few of the release highlights include the following:
A new framework that is easy to extend and maintain
Compilation of models trained in Keras, TensorFlow, and PyTorch for better execution on CPUs/GPUs
Multi-node, multi-GPU training and pruning
Configurable filter pruning to reduce the size of inference models
Small-batch dataset quantization for TF-Lite and TF-TRT
Management of multiple models and versions
HTTP/gRPC interfaces for the inference service
A runtime scheduler that supports scheduling of multiple model instances
Integration of multiple DL inference runtimes, including TensorFlow Serving, OpenVINO, TensorRT, and TF Lite
Integration of dlib to support an ML runtime
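Filter pruning, listed above, commonly ranks convolutional filters by weight magnitude and drops the weakest ones. A minimal illustrative sketch in plain Python (a generic magnitude-based recipe, not Adlik’s actual implementation; `prune_filters` is a hypothetical name):

```python
def prune_filters(filter_weights, keep_ratio=0.5):
    # Rank filters by L1 norm and keep only the strongest fraction;
    # low-norm filters contribute least to activations, so removing
    # them shrinks the model with minimal accuracy loss.
    norms = [(sum(abs(w) for w in f), i) for i, f in enumerate(filter_weights)]
    keep = max(1, int(len(filter_weights) * keep_ratio))
    strongest = sorted(norms, reverse=True)[:keep]
    return sorted(i for _, i in strongest)  # indices of filters to retain

# e.g. prune_filters([[0.9, -0.8], [0.01, 0.02], [0.5, 0.4]], keep_ratio=0.67) → [0, 2]
```

A real implementation would then rebuild the layer with only the retained filters and fine-tune the network to recover accuracy.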
This release also contains a Benchmark Test Framework for DL models, which enables standardized performance benchmarking of models running in the same hardware environment with the different runtimes supported by Adlik. In this framework, the whole testing pipeline is executed automatically with a containerized solution.
The Adlik team expressed a special thank you to contributors from ZTE, China Mobile, and China Unicom for their extra hard work.
The Adlik Project invites you to adopt or upgrade to version 0.1.0, and welcomes feedback. To learn more about the Adlik 0.1.0 release, check out the full release notes. Want to get involved with Adlik? Be sure to join the Adlik-Announce and Adlik Technical-Discuss mailing lists to join the community and stay connected on the latest updates.
Congratulations to the Adlik team! We look forward to continued growth and success as part of the LF AI Foundation. To learn about hosting an open source project with us, visit the LF AI Foundation website.
The ONNX community continues to grow with new tools supporting the spec and nearly two hundred individuals from one hundred organizations attending the April 2020 community meeting. Along with the strong growth of this open source project, we are excited to announce that the governance structure is working well and elections have resulted in newly appointed steering committee members. This is another important step to ensure an open, adaptive, sustainable future for the ONNX project.
ONNX is an open format for representing deep learning and machine learning models so they can be optimized, deployed, and executed on diverse hardware platforms and clouds. ONNX allows AI developers to more easily move AI models between tools that are part of trusted AI/ML/DL workflows. The ONNX community was established in 2017 to create an open ecosystem for interchangeable models, and quickly grew as tool vendors and enterprises adopted ONNX for their products and internal processes. Support for the ONNX spec as an industry standard continues to grow, with contributors from across geographies and industry sectors. ONNX is a graduated project of the LF AI Foundation under multi-vendor open governance, in accordance with industry best practice. The ONNX community values are: open, welcoming, respectful, transparent, accessible, meritorious, and speedy. In accordance with our ONNX community principle of being welcoming, all ONNX Steering Committee meetings are open for the community to attend. We welcome your contributions to ONNX.
Congrats to everyone involved and thank you for your contributions to the ONNX project!