Aligning to constraints for data-efficient language model customization
2025
General-purpose language models (LMs) are aligned to diverse user intents but fall short in specific applications. While finetuning is the default method for customized alignment, human annotations are often unavailable in many customization scenarios. Based on the observation that constraint adherence is one of the main difficulties in LM customization, we investigate the feasibility of using constraints as a bridge from general LMs to customized ones. We examine common constraints in NLP tasks, categorize them into three classes based on the types of their arguments, and propose a unified and efficient framework, ACT (Aligning to ConsTraints), for customizing LMs without human annotation. Specifically, ACT uses automatic constraint verifiers, which are typically easy to implement in practice, to compute the constraint satisfaction rate (CSR) of each response. It samples multiple responses for each prompt and collects preference labels based on their CSRs. ACT then adapts the LM to the target task through a ranking-based learning process. Experiments on fine-grained entity typing, abstractive summarization, and temporal question answering demonstrate that ACT enhances LMs’ ability to adhere to different classes of constraints, achieving task performance comparable to, or approaching, that of finetuning with labeled data.
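To make the annotation-free data-collection step concrete, below is a minimal sketch of how CSR-based preference pairs could be built. It assumes a hypothetical `sample_responses` generation helper and user-supplied verifier functions; these names are illustrative, not the authors' implementation.

```python
# Sketch of ACT-style preference collection: score sampled responses by
# constraint satisfaction rate (CSR) and pair higher-CSR responses as
# "chosen" over lower-CSR ones. `sample_responses` is a hypothetical
# stand-in for any LM sampling routine.
from typing import Callable, List, Tuple

Verifier = Callable[[str, str], bool]  # (prompt, response) -> satisfied?

def constraint_satisfaction_rate(prompt: str, response: str,
                                 verifiers: List[Verifier]) -> float:
    """Fraction of constraints the response satisfies."""
    return sum(v(prompt, response) for v in verifiers) / len(verifiers)

def collect_preference_pairs(prompt: str,
                             sample_responses: Callable[[str, int], List[str]],
                             verifiers: List[Verifier],
                             k: int = 8) -> List[Tuple[str, str]]:
    """Sample k responses and emit (chosen, rejected) pairs whenever one
    response has a strictly higher CSR than another."""
    responses = sample_responses(prompt, k)
    scored = sorted(((constraint_satisfaction_rate(prompt, r, verifiers), r)
                     for r in responses), reverse=True)
    pairs = []
    for i, (csr_hi, chosen) in enumerate(scored):
        for csr_lo, rejected in scored[i + 1:]:
            if csr_hi > csr_lo:  # keep only pairs with a strict CSR gap
                pairs.append((chosen, rejected))
    return pairs
```

The resulting pairs would then feed the ranking-based learning step the abstract describes; a pairwise preference objective (e.g., a DPO-style loss) is one plausible instantiation, though the abstract does not specify which ranking loss is used.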