Evolutionary contrastive distillation for language model alignment
2024
The ability of large language models (LLMs) to execute complex instructions is essential for their real-world applications. However, several recent studies indicate that LLMs struggle with challenging instructions (Zhou et al., 2023; Qin et al., 2024; Jiang et al., 2023b). In this paper, we propose Evolutionary Contrastive Distillation (ECD), a novel method for generating high-quality synthetic preference data designed to enhance the complex instruction-following capability of language models. ECD generates data that specifically illustrates the difference between a response that successfully follows a set of complex instructions and a response that is high-quality, but nevertheless makes some subtle mistakes. This is done by prompting LLMs to progressively evolve simple instructions into more complex instructions. When an instruction is made more complex, the original successful response mostly meets the new requirements but misses one or two, thus becoming a “hard negative” example for the new instruction. By pairing a good response with such a hard negative response, and employing contrastive learning algorithms such as DPO (Rafailov et al., 2023), we improve language models’ ability to follow complex instructions. Empirically, we observe that our method yields a 7B model that exceeds the complex instruction-following performance of current state-of-the-art (SOTA) 7B models and is competitive even with open-source 70B models.
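The data-generation idea described above can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's pipeline: the helper names (`evolve_and_pair`, `dpo_loss`) and the string-concatenation "evolution" step are our own simplifications, whereas in ECD an LLM is prompted to evolve the instruction and generate the responses. The `dpo_loss` function implements the standard per-example DPO objective from Rafailov et al. (2023), -log σ(β·[(log π(y_w|x) − log π_ref(y_w|x)) − (log π(y_l|x) − log π_ref(y_l|x))]), applied to such a (chosen, hard-negative) pair.

```python
import math
from dataclasses import dataclass


@dataclass
class PreferencePair:
    instruction: str  # evolved, more complex instruction
    chosen: str       # response satisfying every requirement
    rejected: str     # hard negative: prior success that misses a new requirement


def evolve_and_pair(instruction, prior_response, added_constraint, full_response):
    """Toy stand-in for ECD's evolution step: append a new constraint to the
    instruction, keep the old successful response as the hard negative, and
    pair it with a response that also satisfies the added constraint."""
    evolved = f"{instruction} {added_constraint}"
    return PreferencePair(evolved, chosen=full_response, rejected=prior_response)


def dpo_loss(beta, logp_w, logp_l, ref_logp_w, ref_logp_l):
    """Per-example DPO loss on a preference pair, given summed log-probs of the
    chosen (w) and rejected (l) responses under the policy and reference model."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))


pair = evolve_and_pair(
    "Write a haiku about autumn.",
    "Crisp leaves drift downward / the cool wind hums through bare trees / dusk arrives early.",
    "Also mention a lantern.",
    "Crisp leaves drift downward / a lantern sways on the porch / dusk arrives early.",
)
```

Here `pair.rejected` is high-quality in isolation but violates the added constraint, which is exactly the contrast DPO exploits: the loss shrinks as the policy's log-probability margin for the chosen response grows relative to the reference model.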