Title: Neyman-Pearson theory of testing and Mayo's extensions applied to evolutionary computing
Abstract: Evolutionary computation (EC) is a relatively new discipline in computer science (Eiben & Smith, 2003). It tackles hard real-world optimization problems, e.g., problems from chemical engineering, airfoil optimization, or bioinformatics, where classical methods from mathematical optimization fail. Many theoretical results in this field are too abstract; they do not match reality. To develop problem-specific algorithms, experimentation is necessary. During the first phase of experimental research in EC (before 1980), which can be characterized as "foundation and development," the comparison of different algorithms was mostly based on mean values; hardly any further statistics were used. In the second phase, when EC "moved to the mainstream" (1980-2000), classical statistical methods were introduced. There is a strong need to compare EC algorithms to mainstream methods from mathematical optimization. Adequate statistical tools for EC are being developed in the third phase (since 2000). They should be able to cope with problems such as small sample sizes, nonnormal distributions, noisy results, etc. However, even though these tools are under development, they do not bridge the gap between the statistical significance of an experimental result and its scientific meaning. Based on Mayo's learning model (NPT), we propose some ideas on how to bridge this gap (Mayo, 1983, 1996). We present plots of the observed significance level and discuss the sequential parameter optimization (SPO) approach. SPO is a heuristic but implementable approach that provides a framework for a sound statistical methodology in EC (Bartz-Beielstein, 2006).
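The observed significance level mentioned in the abstract can be illustrated with a minimal sketch. Rather than reporting a single p-value, one plots the significance level as a function of a hypothesized true difference delta between two algorithms' mean performances. All numbers below (mean difference, standard error) are hypothetical, and the one-sided normal approximation is an assumption for illustration, not the authors' exact procedure.

```python
import math

def observed_significance(diff_hat: float, se: float, delta: float) -> float:
    """Observed significance level alpha(delta): the probability of observing
    a mean performance difference at least as large as diff_hat if the true
    difference were delta (one-sided, normal approximation)."""
    z = (diff_hat - delta) / se
    # Standard normal survival function P(Z >= z) via the complementary
    # error function from the standard library.
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical example: algorithm A beats algorithm B by 0.8 on average,
# with a standard error of 0.5 estimated from repeated runs.
diff_hat, se = 0.8, 0.5
for delta in (0.0, 0.4, 0.8, 1.2):
    alpha = observed_significance(diff_hat, se, delta)
    print(f"delta = {delta:.1f}: alpha = {alpha:.3f}")
```

Scanning such a curve over a range of delta values shows which true differences the data would and would not discount, which is closer to the scientific question ("how large is the improvement?") than a single accept/reject decision.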
Appears in Collections: Sonderforschungsbereich (SFB) 531
This item is protected by original copyright