Learning interpretable models

dc.contributor.advisorMorik, Katharina
dc.contributor.authorRüping, Stefan
dc.contributor.refereeMüller, Heinrich
dc.date.accepted2006-08-08
dc.date.accessioned2006-10-20T13:06:43Z
dc.date.available2006-10-20T13:06:43Z
dc.date.issued2006-10-20T13:06:43Z
dc.description.abstractInterpretability is an important, yet often neglected criterion when applying machine learning algorithms to real-world tasks. An understandable model enables the user to gain more knowledge from the data and to participate in the knowledge discovery process in a more detailed way. Hence, learning interpretable models is a challenging task, whose difficulty stems from the fact that interpretability is a fuzzy, subjective concept and that human mental capabilities are in some ways astonishingly limited. At the same time, interpretability is a critical issue, because it is crucial for problems that cannot be solved purely automatically. The work presented in this thesis is structured along the three dimensions of understandability, accuracy, and efficiency. It contains contributions on the levels of the optimization of the interpretability of a learner with and without knowledge of its internals (white-box and black-box approach), the description of a model's errors by local patterns, and the improvement of global models with local models. Starting from an analysis of the requirements for and measures of interpretability in the context of knowledge discovery, diverse possible approaches to generating understandable models are investigated, with a particular focus on interpretable Support Vector Machines and local effects in the data. Problems of existing techniques and ad-hoc approaches to understandability optimization are analyzed, and improved algorithms are developed.en
dc.format.extent2655227 bytes
dc.format.mimetypeapplication/pdf
dc.identifier.urihttp://hdl.handle.net/2003/23008
dc.identifier.urihttp://dx.doi.org/10.17877/DE290R-8863
dc.identifier.urnurn:nbn:de:hbz:290-2003/23008-2
dc.language.isoen
dc.subjectMachine learningen
dc.subjectData miningen
dc.subjectClassificationen
dc.subjectInterpretabilityen
dc.subjectLocal patternsen
dc.subjectLocal modelsen
dc.subject.ddc004
dc.titleLearning interpretable modelsen
dc.typeTextde
dc.type.publicationtypedoctoralThesis
dcterms.accessRightsopen access

Files

Original bundle
Name: dissertation_rueping.pdf
Size: 2.53 MB
Format: Adobe Portable Document Format
Description: DNB

License bundle
Name: license.txt
Size: 1.92 KB
Description: Item-specific license agreed to upon submission