Multi-objective optimization (MOO) and multitask learning (MTL) have gained much popularity, with prevalent use cases such as production development of regression, classification, and ranking models with MOO, and training deep learning models with MTL. Despite the long history of research in MOO, its application to machine learning requires the development of a solution strategy, and algorithms have recently been developed to solve specific problems such as discovering any Pareto optimal (PO) solution, or one with a particular form of preference. In this paper, we develop a novel and generic framework to discover a PO solution with multiple forms of preferences. It allows us to formulate a generic MOO/MTL problem that expresses a preference, which is then solved to satisfy the preference and achieve Pareto optimality. Specifically, we apply the framework to solve the weighted Chebyshev problem and an extension of it. The former is a known method for discovering the Pareto front; the latter helps to find a model that outperforms an existing model in a single run. Experimental results demonstrate not only that the method achieves competitive performance with existing methods, but also that models with similar performance can be built from different forms of preferences.
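As background for the weighted Chebyshev problem mentioned above, here is a minimal NumPy sketch of classical weighted Chebyshev scalarization on a toy bi-objective problem; it is an illustration of the general technique, not the paper's algorithm, and the objective functions, weight grid, and ideal point are all assumptions chosen for the example.

```python
import numpy as np

def chebyshev_scalarization(objectives, weights, ideal):
    # Weighted Chebyshev scalarization: max_i w_i * (f_i(x) - z_i),
    # where z is an ideal (utopia) point. Minimizers of this scalar
    # objective are (weakly) Pareto optimal, and sweeping the weights
    # traces out the Pareto front.
    return np.max(np.asarray(weights) * (np.asarray(objectives) - np.asarray(ideal)))

# Toy bi-objective problem (assumption for illustration):
#   f1(x) = x^2, f2(x) = (x - 2)^2, with x in [0, 2].
def f(x):
    return np.array([x**2, (x - 2.0) ** 2])

ideal = np.array([0.0, 0.0])        # each objective's individual minimum
xs = np.linspace(0.0, 2.0, 201)     # coarse grid search over the decision variable

pareto_points = []
for w1 in np.linspace(0.05, 0.95, 10):   # sweep preference weights
    w = np.array([w1, 1.0 - w1])
    best_x = min(xs, key=lambda x: chebyshev_scalarization(f(x), w, ideal))
    pareto_points.append(f(best_x))
```

For this toy problem the Pareto front satisfies sqrt(f1) + sqrt(f2) = 2, so each point collected above lies on the front; each weight vector expresses a different preference and selects a different trade-off point.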
A multi-objective, multi-task learning framework induced by Pareto stationarity
2022