Title: Benchmarking evolutionary multiobjective optimization algorithms
Authors: Mersmann, Olaf; Naujoks, Boris; Trautmann, Heike; Weihs, Claus
Date: 2010-02-08
URI: http://hdl.handle.net/2003/26671
DOI: 10.17877/DE290R-12656
Series: Discussion Paper / SFB 823; 03/2010
Subject (DDC): 310; 330; 620
Language: en
Type: report

Abstract: Choosing and tuning an optimization procedure for a given class of nonlinear optimization problems is not an easy task. One way to proceed is to consider this as a tournament, where each procedure competes in different 'disciplines'. Here, disciplines could either be different functions which we want to optimize, or specific performance measures of the optimization procedure. We would then be interested in the algorithm that performs best in a majority of cases or whose average performance is maximal. We focus on evolutionary multiobjective optimization algorithms (EMOA) and present a novel approach to the design and analysis of evolutionary multiobjective benchmark experiments, based on similar work from the context of machine learning. We concentrate on deriving a consensus among several benchmarks over different test problems and illustrate the methodology by reanalyzing the results of the CEC 2007 EMOA competition.
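
To make the consensus idea from the abstract concrete, below is a minimal sketch of one simple rank-aggregation scheme, a Borda count over per-problem rankings. This is an illustrative assumption, not necessarily the consensus method used in the paper; the algorithm names and rankings in the example are hypothetical.

```python
# Minimal sketch of consensus ranking via Borda counts.
# Assumption: this illustrates the general idea of aggregating
# per-problem (or per-measure) rankings into one consensus order;
# it is not claimed to be the paper's actual method.

from collections import defaultdict

def borda_consensus(rankings):
    """Aggregate several rankings (best first) into a consensus order.

    rankings: list of lists, each an ordering of the same algorithms,
              e.g. one ranking per test problem or performance measure.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, algo in enumerate(ranking):
            scores[algo] += n - position  # higher score = better rank
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-problem rankings of three EMOA variants:
per_problem = [
    ["NSGA-II", "SPEA2", "MO-CMA-ES"],
    ["MO-CMA-ES", "NSGA-II", "SPEA2"],
    ["NSGA-II", "MO-CMA-ES", "SPEA2"],
]
print(borda_consensus(per_problem))
# -> ['NSGA-II', 'MO-CMA-ES', 'SPEA2']
```

An algorithm winning a majority of disciplines need not top such an aggregate score, which is why the choice of consensus scheme matters when benchmarking across many test problems.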