Hippolyte Gisserot-Boukhlef, Ricardo Rei, Emmanuel Malherbe, Céline Hudelot, Pierre Colombo, Nuno M. Guerreiro
Artefact Research Center, Unbabel, Equall, MICS CentraleSupélec, Université Paris-Saclay, Instituto de Telecomunicações, Instituto Superior Técnico & Universidade de Lisboa (Lisbon ELLIS Unit)
We are pleased to share the latest research article by our PhD student, Hippolyte Gisserot-Boukhlef, which has been selected as a featured paper at the Ninth Conference on Machine Translation (WMT24) in November 2024.
Abstract
The paper examines the effectiveness of Preference Optimization (PO) techniques, comparing them against Supervised Fine-Tuning (SFT). While optimizing on preference data is common practice in machine translation, often leveraging high-quality outputs from external models such as GPT-4, the broader implications of this approach are not yet fully understood. Notably, our findings suggest that using the model itself as a self-teacher can achieve comparable translation quality while eliminating the complexities and constraints of relying on external systems.
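To make the contrast with SFT concrete, the sketch below shows a widely used preference-optimization objective, the DPO (Direct Preference Optimization) loss, computed for a single preference pair. This is an illustration of the general technique, not the paper's exact recipe: the function name, the toy log-probabilities, and the choice of DPO over other PO variants are all assumptions made here for clarity.

```python
import math


def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair (illustrative sketch).

    log-probs come from the policy being trained and from a frozen
    reference model; `beta` controls how far the policy may drift
    from the reference. The loss decreases as the policy assigns
    relatively more probability to the preferred translation.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen output over the rejected one, relative to the reference.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # -log sigmoid(beta * margin)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))


# When the policy favors the preferred translation, the loss is lower
# than when it favors the dispreferred one.
good = dpo_loss(logp_chosen=-5.0, logp_rejected=-10.0,
                ref_logp_chosen=-7.0, ref_logp_rejected=-7.0)
bad = dpo_loss(logp_chosen=-10.0, logp_rejected=-5.0,
               ref_logp_chosen=-7.0, ref_logp_rejected=-7.0)
print(good < bad)
```

In a self-teacher setup of the kind the abstract alludes to, the preference pairs themselves would be built from the model's own sampled translations (e.g., ranked by a quality metric) rather than from an external system such as GPT-4; the details are in the paper.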