Title: Surrogate models for discrete optimization problems
Abstract: Surrogate models are crucial tools for many real-world optimization problems. An optimization algorithm can evaluate a data-driven surrogate model instead of an expensive objective function. While surrogate models are well established in the continuous optimization domain, they are less frequently applied to more complex search spaces with discrete or combinatorial solution representations. The main goal of this thesis is to fill this gap. We develop and improve methods for data-driven surrogate modeling in discrete search spaces. After an initial review of existing approaches, this work focuses on a similarity-based, or kernel-based, model: Kriging. The intuitive idea is to change the underlying kernel, thus adapting Kriging to arbitrary data types. However, the model is sensitive to the employed kernel. Hence, methods for combining or selecting kernels are required. To that end, this thesis discusses various methods based on concepts like correlation, likelihood, and cross-validation. Our experiments show that choosing or constructing the right kernel determines the success of the optimization algorithm. Another challenge is that kernels are often desired to be positive semi-definite (e.g., correlation functions) or conditionally negative semi-definite (e.g., distances). Analytical proofs of a kernel's definiteness are possible, yet not always feasible in practice. At the same time, these properties can have a significant effect on model accuracy. Hence, we propose a randomized, empirical search procedure to determine whether a kernel is definite. Once a kernel is determined to be indefinite, appropriate countermeasures have to be taken to avoid performance losses. Taking inspiration from the field of support vector learning, we use eigenspectrum transformations and related methods to correct the kernel matrices. We add to the state of the art by considering various repair mechanisms that are linked to additional requirements imposed by Kriging models.
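The two ideas from the abstract, a randomized empirical definiteness check and an eigenspectrum-based repair of indefinite kernel matrices, can be sketched briefly. This is a minimal illustration under assumed defaults, not the thesis's actual procedure; the function names `is_empirically_psd` and `clip_repair` are hypothetical:

```python
import numpy as np

def is_empirically_psd(kernel, points, n_trials=20, size=10, tol=-1e-10, seed=None):
    """Randomized empirical definiteness check: build kernel matrices from
    random subsets of candidate points and inspect their eigenvalues.
    A negative eigenvalue on any subset proves the kernel is indefinite;
    passing all trials only suggests (but does not prove) that it is PSD."""
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        idx = rng.choice(len(points), size=min(size, len(points)), replace=False)
        sub = [points[i] for i in idx]
        K = np.array([[kernel(a, b) for b in sub] for a in sub])
        if np.min(np.linalg.eigvalsh(K)) < tol:
            return False
    return True

def clip_repair(K):
    """Eigenspectrum clipping: set negative eigenvalues to zero and rebuild
    the matrix, giving the nearest PSD matrix in Frobenius norm."""
    w, V = np.linalg.eigh(K)
    return (V * np.maximum(w, 0.0)) @ V.T
```

For example, the (positive definite) Laplace kernel `exp(-|a - b|)` passes the check on random point sets, while the negative distance `-|a - b|` fails it, after which `clip_repair` yields a PSD substitute matrix. Clipping is only one of several transformations (flipping, shifting, etc.) mentioned in the support-vector-learning literature.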
We test most of our methods with sets of simple benchmark problems. To extend this, we also investigate two more complex problems. Firstly, we consider solutions represented by tree structures, which are of interest in genetic programming. We discuss different kernels on tree structures and test them on symbolic regression tasks. Secondly, we consider hierarchical search spaces, where some variables are only active when other variables satisfy a condition. We design several new kernels for hierarchical search spaces and compare them to an existing kernel. Even with these more complex test cases, it remains unclear how performance estimates translate to real-world problems. To address this, we propose a data-driven test function generator based on Kriging simulation. In contrast to estimation, simulation may avoid smoothing the training data and is inherently suited to generate randomized test instances that reflect real-world behavior.
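The core of such a simulation-based generator can be sketched as an unconditional Kriging (Gaussian process) simulation: each random draw from the process is one test-function instance. This is a simplified sketch with an assumed one-dimensional grid, kernel, and nugget value; the thesis's data-driven generator would additionally be fitted to, and may be conditioned on, real training data:

```python
import numpy as np

def simulate_test_functions(x_grid, kernel, n_instances=3, nugget=1e-8, seed=None):
    """Unconditional Kriging simulation: draw realizations of a zero-mean
    Gaussian process with the given kernel, evaluated at the points in
    x_grid. Each column of the result is one randomized test instance."""
    rng = np.random.default_rng(seed)
    n = len(x_grid)
    K = np.array([[kernel(a, b) for b in x_grid] for a in x_grid])
    K += nugget * np.eye(n)                 # jitter for numerical stability
    L = np.linalg.cholesky(K)
    # y = L @ z has covariance L @ L.T = K when z is standard normal
    return L @ rng.standard_normal((n, n_instances))
```

Because each call with a fresh seed yields a new realization with the same covariance structure, the generator produces randomized instances that share the smoothness properties encoded in the kernel, unlike estimation, which interpolates (and thereby smooths toward) the training data.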
Subject Headings: Surrogate modeling
Subject Headings (RSWK): Diskrete Optimierung (discrete optimization)
Appears in Collections: LS 11
This item is protected by original copyright