LF AI & Data Graduated Project Angel Releases Version 3.2.0

By Andrew Bringaze | July 29, 2021

Angel, an LF AI & Data Foundation Graduated-Stage Project, has released version 3.2.0. Angel is a machine learning framework originally developed by Tencent as the company's first open-source AI project. The Angel project joined the LF AI & Data Foundation in August 2018 as an Incubation-Stage project and became a Graduated-Stage project in December 2019 with the support of the Foundation and its technical community.

With full-stack support for the AI pipeline, from feature engineering to model training and inference, Angel provides an end-to-end, easy-to-use platform for engineers and scientists. In particular, it focuses on high-dimensional sparse model training and graph neural network learning at production scale. The previous version, 3.1, introduced graph learning for the first time and provided a set of well-optimized algorithms that have already been adopted in a variety of applications. Release 3.2.0 enhances the core of graph learning with numerous new features and optimizations.

Flexible Architecture for Extension

In release 3.2.0, Angel's graph learning framework is organized into three general-purpose layers: computing engine, operators, and models. This architecture decouples high-level algorithms from low-level manipulation of graph data (vertices and edges), so it extends well both for engineering enhancements and for new model development. For example, the operator layer defines a group of primitive abstract operator interfaces such as init(), get(), walk(), and sample(); developers can implement these to build customized operators and extend the models.
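To make the layering concrete, here is a minimal sketch of the idea in Java. The interface name, method signatures, and the GraphModel base class are illustrative assumptions for this post, not Angel's actual API:

    import java.util.List;

    // Operator layer: primitive graph manipulations, decoupled from models.
    // (Hypothetical signatures; Angel's real operator interfaces may differ.)
    interface GraphOperator {
        void init(long[] vertexIds);            // initialize state for a batch of vertices
        float[] get(long vertexId);             // fetch vertex state, e.g. an embedding
        long[] sample(long vertexId, int k);    // sample k neighbors of a vertex
        List<long[]> walk(long start, int len); // generate random walks of a given length
    }

    // Model layer: an algorithm is written against operators, never against
    // raw vertex/edge storage, so engine and storage can evolve independently.
    abstract class GraphModel {
        protected final GraphOperator op;
        protected GraphModel(GraphOperator op) { this.op = op; }
        public abstract void trainEpoch();
    }

A new model then only subclasses GraphModel, while a new storage or engine optimization only touches the operator implementations.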

Hybrid Running Mode for Best Performance

There are two main running modes for large-scale graph learning algorithms: Parameter Server (PS) and MPI. Different models, such as graph embeddings and graph neural networks, generate very different volumes of communication during learning, so it is hard to serve all of them well with a single running mode. Version 3.2.0 introduces a hybrid running mode that combines PS and MPI communication methods, leveraging the advantages of both. This hybrid mode can significantly speed up the training of graph traversal algorithms.
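As a rough illustration of why a hybrid mode helps, the hypothetical router below (not Angel's API; the names are assumptions) sends small, frequent parameter updates through the PS while bulky traversal messages travel directly between workers in MPI style:

    // Hypothetical sketch, not Angel's API: split traffic by message kind.
    enum Channel { PS, PEER }

    interface Message { boolean isParameterUpdate(); }

    final class HybridRouter {
        // Assumption: neighbor lists exchanged during graph traversal dominate
        // bandwidth, so routing them peer-to-peer avoids a PS bottleneck while
        // the PS keeps its strength: asynchronous, fine-grained model updates.
        Channel route(Message m) {
            return m.isParameterUpdate() ? Channel.PS : Channel.PEER;
        }
    }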

Adaptive Model Data Partitioning

A big graph model that cannot fit on a single machine must have its model data partitioned across several machines. Range partitioning and hash partitioning are the two commonly used methods: the former needs less memory but may skew load across machines, while the latter balances load well at a much higher memory cost. In this release, Angel automatically and adaptively chooses between range and hash partitioning according to the model, striking a good tradeoff between memory cost and load balancing.
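The sketch below illustrates, under assumptions, the decision this release automates. The skew heuristic and its threshold are hypothetical, not Angel's actual selection logic:

    // Pick hash partitioning only when a sample of vertex ids shows that range
    // partitioning would skew load badly; otherwise keep range partitioning's
    // small memory footprint (only [start, end) bounds per partition).
    final class PartitionerChooser {
        // Assumes sampleIds is non-empty. Skew = largest bucket / mean bucket.
        static boolean useHash(long[] sampleIds, int numPartitions, double skewLimit) {
            long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
            for (long id : sampleIds) { min = Math.min(min, id); max = Math.max(max, id); }
            long[] buckets = new long[numPartitions];
            double width = Math.max(1.0, (double) (max - min + 1) / numPartitions);
            for (long id : sampleIds) {
                int b = (int) Math.min(numPartitions - 1, (long) ((id - min) / width));
                buckets[b]++;
            }
            long largest = 0;
            for (long c : buckets) largest = Math.max(largest, c);
            double mean = (double) sampleIds.length / numPartitions;
            return largest / mean > skewLimit; // heavy skew: pay hash's memory cost
        }
    }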

Support for Heterogeneous Graph Learning

Real-world graphs are usually heterogeneous, with multiple types of edges between vertices and multiple types of vertex attributes. This complexity challenges a graph learning framework in both storage and computing. To support heterogeneous graphs, Angel optimizes the graph storage data structures for fast I/O and exposes an interface for users to implement customized PS functions, so heterogeneous graph learning algorithms can run easily on the Angel framework, even with high-dimensional sparse attributes on each vertex. Building on these optimizations, Angel ships several built-in heterogeneous models, including HAN, GAT, GraphSAGE, IGMC Prediction, and Bipartite-GraphSAGE.
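One plausible way to picture the storage side, purely as an illustration (the class and field names are assumptions, not Angel's data structures): keep a packed neighbor array per edge type, so an algorithm scans only the edge types it needs, and store high-dimensional vertex attributes sparsely:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical typed-adjacency vertex for a heterogeneous graph.
    final class HeteroVertex {
        // edge type (e.g. "follows", "buys") -> packed neighbor ids
        final Map<String, long[]> neighborsByType = new HashMap<>();
        // sparse high-dimensional attributes: feature index -> value
        final Map<Integer, Float> sparseAttrs = new HashMap<>();
    }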

Performance Enhancement

Learning on a huge graph with roughly 100 billion edges is very challenging in both stability and performance. Angel has been deeply optimized for this kind of huge-graph workload, which is increasingly common in real applications such as social network mining and shopping recommendations. With these enhancements, K-core and Common Friends model training can run three times faster than before while also cutting memory cost by 30%.

Try It!

The Angel Project invites you to adopt or upgrade to version 3.2.0 and welcomes your feedback. For details on the new features and improvements, please refer to the release notes here. Want to get involved with Angel? Join the Angel-Announce and Angel-Technical-Discuss mailing lists to connect with the community and stay up to date on the latest developments.

Congratulations to the Angel team! We look forward to continued growth and success as part of the LF AI & Data Foundation. To learn how to host an open source project with the Foundation, visit the LF AI & Data website.

Author

  • Andrew Bringaze

    Andrew Bringaze is the senior developer for The Linux Foundation. With over 10 years of experience, his focus is on open source code, WordPress, React, and site security.