Learning interpretable models
Date
2006-10-20T13:06:43Z
Abstract
Interpretability is an important, yet often neglected, criterion when
applying machine learning algorithms to real-world tasks. An
understandable model enables users to gain more knowledge from their
data and to participate more closely in the knowledge discovery
process. Learning interpretable models is therefore a challenging task,
whose complexity stems from the fact that interpretability is a fuzzy,
subjective concept and that human mental capabilities are in some ways
astonishingly limited. At the same time, interpretability is critical,
because it is indispensable for tasks that cannot be solved purely
automatically.
The work presented in this thesis is structured along the three
dimensions of understandability, accuracy, and efficiency. It
contributes on several levels: the optimization of a learner's
interpretability with and without knowledge of its internals (the
white-box and black-box approaches), the description of a model's
errors by local patterns, and the improvement of global models with
local models.
Starting from an analysis of the requirements for, and measures of,
interpretability in the context of knowledge discovery, various
approaches to generating understandable models are investigated, with a
particular focus on interpretable Support Vector Machines and local
effects in the data. Shortcomings of existing techniques and of ad-hoc
approaches to understandability optimization are analyzed, and improved
algorithms are developed.
Keywords
Machine learning, Data mining, Classification, Interpretability, Local patterns, Local models