Title: Learning interpretable models
Abstract: Interpretability is an important yet often neglected criterion when applying machine learning algorithms to real-world tasks. An understandable model enables users to gain more knowledge from their data and to participate more closely in the knowledge discovery process. Learning interpretable models is a challenging task, whose complexity stems from two problems: interpretability is a fuzzy, subjective concept, and human mental capabilities are in some ways astonishingly limited. At the same time, interpretability is critical, because it is crucial for problems that cannot be solved purely automatically. The work presented in this thesis is structured along the three dimensions of understandability, accuracy, and efficiency. It contains contributions on the optimization of a learner's interpretability both with and without knowledge of its internals (white-box and black-box approaches), the description of a model's errors by local patterns, and the improvement of global models with local models. Starting from an analysis of the requirements for, and measures of, interpretability in the context of knowledge discovery, diverse approaches to generating understandable models are investigated, with a particular focus on interpretable Support Vector Machines and local effects in the data. Problems of existing techniques and ad-hoc approaches to understandability optimization are analyzed, and improved algorithms are developed.
Subject Headings: Machine learning
Appears in Collections: LS 08 Künstliche Intelligenz
Files in This Item:
dissertation_rueping.pdf (DNB, 2.59 MB, Adobe PDF)
This item is protected by original copyright