Variance-reduced zeroth-order methods for fine-tuning language models
2024
Fine-tuning language models (LMs) has demonstrated success in a wide array of downstream tasks. However, as LMs are scaled up, the memory requirements for backpropagation become prohibitively high. Zeroth-order (ZO) optimization methods can leverage memory-efficient forward passes to estimate gradients. Recently, MeZO, an adaptation of ZO-SGD, has been shown to consistently outperform zero-shot and in-context learning when combined with suitable task prompts. In this work, we couple ZO methods with variance reduction techniques to enhance stability and convergence for inference-based LM fine-tuning. We introduce Memory-Efficient Zeroth-Order Stochastic Variance-Reduced Gradient (MeZO-SVRG) and demonstrate its efficacy across multiple LM fine-tuning tasks, eliminating the reliance on task-specific prompts. Evaluated across a range of both masked and autoregressive LMs (up to 7B parameters) on benchmark downstream tasks, MeZO-SVRG outperforms MeZO with up to a 20% increase in test accuracy in both full- and partial-parameter fine-tuning settings. MeZO-SVRG also benefits from reduced computation time, often surpassing MeZO's peak test accuracy with a 2x reduction in GPU-hours. Compared to first-order methods, MeZO-SVRG substantially decreases the memory requirement (by at least 2x for autoregressive models), achieving greater memory savings as batch size and context length increase.
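To illustrate the idea behind the method, here is a minimal sketch of a zeroth-order step with an SVRG-style control variate. The gradient is estimated purely from forward passes via two-point SPSA probes, and the batch estimate at the current iterate is corrected by the same-perturbation estimate at a fixed anchor plus a low-variance anchor gradient. The function names (`spsa_grad`, `mezo_svrg_step`), step sizes, and the toy quadratic loss are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import numpy as np

def spsa_grad(loss, theta, z, eps=1e-3):
    # Two-point SPSA estimate: [f(theta + eps*z) - f(theta - eps*z)] / (2*eps) * z.
    # Only forward evaluations of the loss are needed (no backpropagation).
    return (loss(theta + eps * z) - loss(theta - eps * z)) / (2 * eps) * z

def mezo_svrg_step(loss_batch, theta, anchor, g_anchor, rng, lr=0.05):
    # SVRG-style control variate: reuse the same perturbation z for the
    # current iterate and the anchor so the two batch estimates are
    # correlated and their difference has low variance.
    z = rng.standard_normal(theta.shape)
    g = (spsa_grad(loss_batch, theta, z)
         - spsa_grad(loss_batch, anchor, z)
         + g_anchor)
    return theta - lr * g

if __name__ == "__main__":
    # Toy example (illustrative): minimize ||theta - 1||^2 in 3 dimensions.
    rng = np.random.default_rng(0)
    loss = lambda x: float(np.sum((x - 1.0) ** 2))
    theta = np.zeros(3)
    for _ in range(20):                      # outer loops: refresh the anchor
        anchor = theta.copy()
        # Anchor gradient: average several SPSA probes for a low-variance estimate.
        g_anchor = np.mean(
            [spsa_grad(loss, anchor, rng.standard_normal(3)) for _ in range(10)],
            axis=0,
        )
        for _ in range(10):                  # inner variance-reduced ZO steps
            theta = mezo_svrg_step(loss, theta, anchor, g_anchor, rng)
    print(loss(theta))                       # loss decreases toward 0
```

In the full method, `loss_batch` would be the LM's loss on a minibatch and the perturbations would be applied in place to the model parameters to preserve the memory efficiency the abstract describes; this sketch only shows the estimator structure.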