Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Müller, Emmanuel | - |
dc.contributor.author | Balestra, Chiara | - |
dc.date.accessioned | 2024-08-27T09:57:52Z | - |
dc.date.available | 2024-08-27T09:57:52Z | - |
dc.date.issued | 2024 | - |
dc.identifier.uri | http://hdl.handle.net/2003/42657 | - |
dc.identifier.uri | http://dx.doi.org/10.17877/DE290R-24493 | - |
dc.description.abstract | Rankings represent the natural way to assess the importance of a finite set of items. Ubiquitous in real-world applications and machine-learning methods, they mostly derive from automated or human-based importance score assignments. Many fields involving rankings, such as recommender systems, feature selection, and anomaly detection, overlap with human-derived scoring systems, such as candidate selection and operational risk assessments. Rankings are notoriously hard to evaluate; several challenges arise from the biases and fairness issues involved, as well as from how rankings are derived and assessed. This thesis revolves around deriving importance scores and rankings as solutions in various contexts and applications. Starting from unsupervised feature importance scores based on an unconventional use of Shapley values for unlabeled data, it moves to a more applied setting with an ad hoc unsupervised methodology for reducing the dimensionality of collections of gene sets. We then consider feature importance scores in a time-dependent context, focusing on detecting correlational concept drifts in the univariate dimensions of unlabeled streaming data. The work as a whole is characterized by the aim of improving trustworthiness and reliability, with particular attention to the consistency of evaluations and methods. In this direction, we add insights into using saliency-based importance score assignments to interpret time series classification methods and define desirable mathematical properties for ranking evaluation metrics. Furthermore, we use Shapley values to interpret deep unsupervised anomaly detection methods based on feature bagging. Lastly, we discuss current and future challenges related to fairness in rank aggregation, as well as possible extensions of this work. | de |
dc.language.iso | en | de |
dc.subject | Rankings | de |
dc.subject | Explainable machine learning | de |
dc.subject | Shapley values | de |
dc.subject | Importance scores | de |
dc.subject | Unlabeled data | de |
dc.subject | Unlabeled time series | de |
dc.subject.ddc | 004 | |
dc.title | Rankings and importance scores as multi-facets of explainable machine learning | de |
dc.type | Text | de |
dc.contributor.referee | De Bie, Tijl | - |
dc.date.accepted | 2024-07-09 | - |
dc.type.publicationtype | PhDThesis | de |
dc.subject.rswk | Ranking | de |
dc.subject.rswk | Explainable Artificial Intelligence | de |
dc.subject.rswk | Shapley-Lösung | de |
dc.subject.rswk | Zeitreihe | de |
dcterms.accessRights | open access | |
eldorado.secondarypublication | false | de |
Appears in Collections: | Chair of Data Science and Data Engineering |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Dissertation_Balestra.pdf | DNB | 4.63 MB | Adobe PDF |
This item is protected by original copyright