Multi-objective analysis of machine learning algorithms using model-based optimization techniques
dc.contributor.advisor | Weihs, Claus | |
dc.contributor.author | Horn, Daniel | |
dc.contributor.referee | Groll, Andreas | |
dc.date.accepted | 2019-02-20 | |
dc.date.accessioned | 2019-03-14T07:37:31Z | |
dc.date.available | 2019-03-14T07:37:31Z | |
dc.date.issued | 2019 | |
dc.description.abstract | My dissertation deals with the research areas of optimization and machine learning. However, both of them are too extensive to be covered by a single person in a single work, and that is not the goal of my work either. Therefore, my dissertation focuses on the interactions between these two fields. On the one hand, most machine learning algorithms rely on optimization techniques. First, the training of a learner often implies an optimization problem. This is demonstrated by the SVM, where the weighted sum of the margin size and the sum of margin violations has to be optimized. Many other learners internally optimize either a least-squares or a maximum-likelihood problem. Second, the performance of most machine learning algorithms depends on a set of hyper-parameters, and an optimization has to be conducted in order to find the best performing model. Unfortunately, there is no globally accepted optimization algorithm for hyper-parameter tuning problems, and in practice naive algorithms like random or grid search are frequently used. On the other hand, some optimization algorithms rely on machine learning models. They are called model-based optimization algorithms and are mostly used to solve expensive optimization problems. During the optimization, the model is iteratively refined and exploited. One of the most challenging tasks here is the choice of the model class: it has to be applicable to the particular parameter space of the optimization problem and well suited for modeling the function's landscape. In this work, I give special attention to the multi-objective case. In contrast to the single-objective case, where a single best solution is likely to exist, all possible trade-offs between the objectives have to be considered. Hence, not a single best solution, but a set of best solutions exists, one for each trade-off. Although approaches for solving multi-objective problems differ in some parts from the corresponding approaches for single-objective problems, other parts can remain unchanged. This is shown for model-based multi-objective optimization algorithms. The last third of this work addresses the field of offline algorithm selection. In online algorithm selection, the best algorithm for a problem is selected while solving it. In contrast, offline algorithm selection predicts the best algorithm a priori. Again, the work focuses on the multi-objective case: an algorithm has to be selected with respect to multiple, conflicting objectives. As with all offline techniques, this selection rule has to be trained on a set of available training data sets and can only be applied to new data sets that are sufficiently similar to those in the training set. | de |
dc.identifier.uri | http://hdl.handle.net/2003/37937 | |
dc.identifier.uri | http://dx.doi.org/10.17877/DE290R-19922 | |
dc.language.iso | en | de |
dc.subject | Optimization | de |
dc.subject | Machine learning | de |
dc.subject.ddc | 310 | |
dc.subject.rswk | Optimierung | de |
dc.subject.rswk | Maschinelles Lernen | de |
dc.title | Multi-objective analysis of machine learning algorithms using model-based optimization techniques | de |
dc.type | Text | de |
dc.type.publicationtype | doctoralThesis | de |
dcterms.accessRights | open access | |
eldorado.secondarypublication | false | de |