Our dynamic low-rank task-adaptive reparameterization (TARP) and task-adaptive model structure (TAMS) primitives are implemented as a Python library.
To install it in editable mode, run the following from the repository root:

pip install -e .
The initial commit includes this README and the original codebases we build upon, listed below. Later commits isolate our contributions and demonstrate how the library is used, e.g., applying TARP and TAMS in a meta-"learning the difference" loop on top of a HuggingFace Transformers model.
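The core idea behind low-rank task-adaptive reparameterization can be sketched in a few lines: a frozen base weight W is adapted per task through a low-rank update W + A @ B, where the factors A and B hold far fewer parameters than W. The sketch below is illustrative only, assuming generic NumPy names; it is not this library's API.

```python
import numpy as np

d_out, d_in, rank = 64, 64, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))          # frozen pretrained weight
A = rng.standard_normal((d_out, rank)) * 0.01   # task-specific factor
B = np.zeros((rank, d_in))                      # zero init: adapted layer starts equal to the base

def adapted_forward(x):
    # Effective weight is W + A @ B; only A and B would be trained per task.
    return (W + A @ B) @ x

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer reproduces the base layer.
assert np.allclose(adapted_forward(x), W @ x)
```

Because A and B together hold rank * (d_out + d_in) parameters versus d_out * d_in for W, each task's adaptation is cheap to store and swap, which is what makes this family of methods attractive for the multi-task settings in the subdirectories below.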
dialogue_personalization/ reproduces our work on Persona-Chat. It is based on https://github.com/HLTCHKUST/PAML (MIT license) which was released with the paper "Personalizing Dialogue Agents via Meta-Learning" by Zhaojiang Lin*, Andrea Madotto*, Chien-Sheng Wu, Pascale Fung at ACL 2019.
abstractive_summarization/ reproduces our work on AdaptSum. It is based on https://github.com/TysonYu/AdaptSum (CC BY 4.0 license) which was released with the paper "AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization" by Tiezheng Yu*, Zihan Liu*, Pascale Fung at NAACL-HLT 2021.
low_rank_comparisons/ reproduces comparisons with other efficient adaptation papers. It is based on https://github.com/microsoft/LoRA/tree/snapshot-9-15-2021 (MIT license) which was released with the preprint "LoRA: Low-Rank Adaptation of Large Language Models" by Edward J. Hu*, Yelong Shen*, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Weizhu Chen.