Towards rigorous understanding of neural networks via semantics-preserving transformations
dc.contributor.author | Schlüter, Maximilian | |
dc.contributor.author | Nolte, Gerrit | |
dc.contributor.author | Murtovi, Alnis | |
dc.contributor.author | Steffen, Bernhard | |
dc.date.accessioned | 2024-11-11T12:21:06Z | |
dc.date.available | 2024-11-11T12:21:06Z | |
dc.date.issued | 2023-05-30 | |
dc.description.abstract | In this paper, we present an algebraic approach to the precise and global verification and explanation of Rectifier Neural Networks, a subclass of piece-wise linear neural networks (PLNNs), i.e., networks that semantically represent piece-wise affine functions. Key to our approach is the symbolic execution of these networks, which allows the construction of semantically equivalent Typed Affine Decision Structures (TADS). Due to their deterministic and sequential nature, TADS can, like decision trees, be regarded as white-box models and hence as precise solutions to the model and outcome explanation problem. TADS form linear algebras, which makes it possible to elegantly compare rectifier networks for equivalence or similarity, with precise diagnostic information in case of failure, and to characterize their classification potential by precisely describing the set of inputs that receive a specific classification, or the set of inputs on which two network-based classifiers differ. All phenomena are illustrated in a detailed discussion of a minimal running example: the continuous XOR function. | en |
dc.identifier.uri | http://hdl.handle.net/2003/42740 | |
dc.identifier.uri | http://dx.doi.org/10.17877/DE290R-24572 | |
dc.language.iso | en | |
dc.relation.ispartofseries | International journal on software tools for technology transfer; 25 | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | (rectifier) neural networks | en |
dc.subject | activation functions | en |
dc.subject | (piece-wise) affine functions | en |
dc.subject | linear algebra | en |
dc.subject | typed affine decision structures | en |
dc.subject | symbolic execution | en |
dc.subject | explainability | en |
dc.subject | verification | en |
dc.subject | robustness | en |
dc.subject | semantics | en |
dc.subject | XOR | en |
dc.subject | diagnostics | en |
dc.subject | precision | en |
dc.subject | digit recognition | en |
dc.subject.ddc | 004 | |
dc.title | Towards rigorous understanding of neural networks via semantics-preserving transformations | en |
dc.type | Text | |
dc.type.publicationtype | Article | |
dcterms.accessRights | open access | |
eldorado.secondarypublication | true | |
eldorado.secondarypublication.primarycitation | Schlüter, M., Nolte, G., Murtovi, A., Steffen, B.: Towards rigorous understanding of neural networks via semantics-preserving transformations. International journal on software tools for technology transfer. 25, 301–327 (2023). https://doi.org/10.1007/s10009-023-00700-7 | |
eldorado.secondarypublication.primaryidentifier | https://doi.org/10.1007/s10009-023-00700-7 |
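
The abstract describes symbolically executing a rectifier network into a semantically equivalent piecewise-affine decision structure. The following is a minimal sketch of that idea, not the paper's actual TADS construction: it uses a made-up single-hidden-layer network (the weights `W1`, `b1`, `W2`, `b2` are illustrative, loosely in the spirit of the continuous-XOR running example) and represents the result simply as a map from activation patterns to affine pieces.

```python
import itertools
import numpy as np

# Illustrative single-hidden-layer rectifier network (weights are assumptions,
# chosen for demonstration only).
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, -2.0]])
b2 = np.array([0.0])

def affine_pieces(W1, b1, W2, b2):
    """Symbolically execute the ReLU layer: fixing each hidden neuron as
    active (1) or inactive (0) yields one affine function per activation
    pattern -- the pieces of the network's piecewise-affine semantics."""
    pieces = {}
    for pattern in itertools.product((0, 1), repeat=W1.shape[0]):
        D = np.diag(pattern)      # ReLU replaced by zero/identity per neuron
        A = W2 @ D @ W1           # composed affine map valid on this region
        c = W2 @ D @ b1 + b2
        pieces[pattern] = (A, c)
    return pieces

def evaluate_via_pieces(x, pieces, W1, b1):
    """Decision-structure-style evaluation: branch on the concrete
    activation pattern of x, then apply the affine map of that piece."""
    pattern = tuple(int(v > 0) for v in (W1 @ x + b1))
    A, c = pieces[pattern]
    return A @ x + c
```

For any input, this evaluation agrees with the ordinary forward pass `W2 @ np.maximum(W1 @ x + b1, 0) + b2`, since each pattern's affine map is exactly the network restricted to the input region where that activation pattern holds.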