FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use, and extensible toolkit for large-scale models. Our goal is to support training, fine-tuning, and deployment of large-scale multi-modal models on various downstream tasks. Currently, we focus on NLP models and tasks; in the near future, we will add support for other modalities.

It currently supports GLM, BERT, RoBERTa, GPT2, T5, and models from Huggingface Transformers.

It provides APIs to quickly download and apply these pre-trained models to a given text, and to fine-tune them on your own datasets.

These models can be applied to Chinese and English text, for tasks such as text classification, information extraction, question answering, summarization, and text generation.

FlagAI is backed by the three most popular data/model parallelism libraries (PyTorch, DeepSpeed, Megatron-LM) with seamless integration between them. You can parallelize your training/testing process with fewer than ten lines of code.

FlagAI is a sandbox-stage project of the LF AI & Data Foundation.

Contributed by: BAAI in June 2022