
The LF AI & Data Foundation, dedicated to fostering open-source innovation in artificial intelligence (AI) and data-related projects, has introduced RWKV as its newest Incubation Project.

RWKV (pronounced "RwaKuv") is a family of Recurrent Neural Network (RNN) models that combines the strengths of both Transformers and RNNs: it offers parallelizable training akin to Transformers and efficient inference like an RNN. This blend yields strong performance, fast inference, low VRAM usage at inference time, fast training, long context lengths, and free sentence embeddings, all without attention mechanisms.
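
To make "RNN-style inference" concrete, below is a minimal NumPy sketch (with illustrative names, not the project's actual code) of the WKV recurrence described in the RWKV paper. Instead of a growing attention cache, the model carries a fixed-size numerator/denominator state per channel, so each new token costs constant time and memory.

```python
import numpy as np

def wkv_recurrence(w, u, keys, values):
    """Sketch of RWKV's WKV time mixing in recurrent (inference) mode.

    w: per-channel decay, u: per-channel "bonus" for the current token,
    keys/values: per-token vectors. Unlike attention, the state (a, b)
    has a fixed size, so memory does not grow with sequence length.
    """
    a = np.zeros_like(values[0])  # running weighted sum of values (numerator)
    b = np.zeros_like(values[0])  # running sum of weights (denominator)
    outputs = []
    for k, v in zip(keys, values):
        # Output blends the decayed past state with the current token.
        e = np.exp(u + k)
        outputs.append((a + e * v) / (b + e))
        # Fold the current token into the state, decaying the past.
        a = np.exp(-w) * a + np.exp(k) * v
        b = np.exp(-w) * b + np.exp(k)
    return outputs
```

During training, the same quantities can be computed for all positions in parallel, which is what gives RWKV its Transformer-like training throughput.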

Dr. Ibrahim Haddad, Executive Director of LF AI & Data Foundation, expressed his enthusiasm, stating, “RWKV’s integration into our incubation stage program underscores our commitment to advancing the field of AI and data through open source collaboration. This project, born out of the EleutherAI community and sponsored by StabilityAI, exemplifies the spirit of innovation that LF AI & Data seeks to promote.”

BlinkDL, the creator of RWKV, shared, “We are thrilled to join LF AI & Data Foundation as the first technical project under the Generative AI Commons initiative. RWKV’s architecture represents a leap forward in reconciling the trade-offs between computational efficiency and model performance in sequence processing tasks, and we look forward to collaborating with the Foundation and the wider open source community.”

The research paper introducing RWKV was recently released and has been accepted to EMNLP 2023. For those interested in diving deeper into the project’s technical details, the preprint is accessible [here].

RWKV also recently launched their model Eagle 7B, which outperforms all other open-source and source-available models on multilingual benchmarks. Eagle 7B is a 7-billion-parameter RNN trained on 1.1 trillion tokens of multilingual text spanning over 100 languages, built on the RWKV-v5 architecture and released under the Apache 2.0 license.
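
As an illustration, RWKV checkpoints such as Eagle 7B can be run locally with the project's community `rwkv` pip package. The sketch below is an assumption-laden example, not official documentation: the model path and strategy string are placeholders, and exact checkpoint and tokenizer file names should be taken from the RWKV GitHub.

```python
# pip install rwkv  (community inference package from the RWKV project)
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Placeholder path: point this at a downloaded RWKV-v5 (Eagle 7B) .pth
# checkpoint, minus the extension; the strategy string picks device/precision.
model = RWKV(model="/path/to/RWKV-v5-Eagle-7B", strategy="cuda fp16")
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")  # world-model tokenizer

args = PIPELINE_ARGS(temperature=1.0, top_p=0.7)
print(pipeline.generate("The LF AI & Data Foundation is", token_count=64, args=args))
```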

RWKV addresses the memory and computational-complexity challenges of sequence processing: a Transformer's attention cost grows quadratically with sequence length, while RWKV's recurrence grows linearly and keeps a constant-size state. By combining the parallelizable training of Transformers with the efficient inference of RNNs, RWKV opens new possibilities for AI researchers and practitioners.

Learn more about RWKV on their GitHub and join the RWKV-Announce Mailing List.

A warm welcome to RWKV! We are excited to see the project’s continued growth and success as part of the LF AI & Data Foundation. If you are interested in hosting an open source project with us, please visit the LF AI & Data website to learn more.

LF AI & Data Resources

Access other resources on LF AI & Data’s GitHub or Wiki.