Evaluable explainability and applications to 3D vision

Date

2025

Abstract

With the breakthroughs in the performance of deep neural networks, they are now applied across a wide range of fields, including several with stringent security requirements. However, these black-box models carry potential risks, the most threatening of which is their opaque decision-making process. The recent rise of explainability research on black-box models is a promising direction for enhancing model trustworthiness. Nevertheless, existing explainability studies remain limited in two respects: they are difficult to evaluate objectively due to the lack of ground truth, and the vast majority are constrained to a particular data format and lack extensibility. The two main parts of this dissertation address these two limitations in turn. In the first half, we aim to improve the evaluability of explainability methods. We optimize the choice of the baseline used in explanation evaluations and in part of the explainability approaches so that it satisfies the definition of uninformativeness. In addition, we complement existing explanation evaluation metrics with three novel perspectives, namely robustness to parameter perturbations, generalizability, and sensitivity consistency. In the second half, we extend explainability approaches to the field of 3D computer vision to enhance the trustworthiness of point cloud models. We first extend the perturbation-based approach to point clouds and provide online toolkits to facilitate practical implementation. Subsequently, we propose two activation maximization-based point cloud global explainability approaches, which visualize input instances that are representative of specific categories. Moreover, we propose a non-DNN point cloud classifier that utilizes multi-scale fractal windows to extract distributional information and makes predictions via random forests, which significantly enhances explainability compared to DNNs. Further, we adversarially analyze the decision sensitivity of point cloud models with the help of saliency maps generated by explainability methods. Finally, we investigate how the model learns 3D geometric features by examining the distribution of activations in intermediate layers. Extensive experiments demonstrate that the proposed methods advance both the evaluation of explainability and its application to point clouds.
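As a concrete illustration of the perturbation-based point cloud explanation mentioned in the abstract, the following minimal Python sketch scores each point by how much the model's class confidence drops when that point is removed. The score_fn argument and the toy_score stand-in are hypothetical placeholders for a trained point cloud classifier (e.g., PointNet); this sketches the general technique, not the dissertation's exact method or toolkit.

import numpy as np

def point_drop_saliency(points, score_fn):
    """Per-point saliency via leave-one-out perturbation.

    points:   (N, 3) array of xyz coordinates.
    score_fn: callable mapping an (M, 3) cloud to a scalar class
              confidence (hypothetical stand-in for a trained model).
    Returns an (N,) array of confidence drops; larger values mark
    points whose removal hurts the prediction most.
    """
    base = score_fn(points)
    saliency = np.empty(len(points))
    for i in range(len(points)):
        reduced = np.delete(points, i, axis=0)  # occlude point i
        saliency[i] = base - score_fn(reduced)
    return saliency

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(64, 3))

    def toy_score(pts):
        # Hypothetical stand-in model: confidence grows with the
        # spread of the cloud along the x axis.
        return float(pts[:, 0].std())

    s = point_drop_saliency(cloud, toy_score)
    print("most salient point index:", int(s.argmax()))

Note that this leave-one-out scheme needs one forward pass per point; practical implementations typically perturb points in batches to keep the cost manageable.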

Keywords

Explainability, Point Clouds

Subjects based on RSWK

Punktwolke (point cloud), Rückverfolgbarkeit (Informatik) (traceability, computer science)
