Authors: Böing, Benedikt
Title: Verification of unsupervised neural networks
Language (ISO): en
Abstract: Neural networks are at the forefront of machine learning, responsible for achievements such as AlphaGo. As they are deployed in ever more environments, including safety-critical ones such as health care, we are naturally interested in assuring their reliability. However, the discovery of so-called adversarial attacks on supervised neural networks demonstrated that tiny distortions in the input space can lead to misclassifications and thus to potentially catastrophic errors: patients could be misdiagnosed, or a car might confuse stop signs with traffic lights. Ideally, we would therefore like to guarantee that these types of attacks cannot occur. In this thesis we extend the research on reliable neural networks to the realm of unsupervised learning. This includes defining proper notions of reliability as well as analyzing and adapting unsupervised neural networks with respect to these notions. Our definitions of reliability depend on the underlying neural networks and the problems they are meant to solve. In all cases, however, we aim for guarantees over a continuous input space containing infinitely many points. We thereby go beyond the traditional setting of testing against a finite dataset and require specialized tools to actually check a given network for reliability; we demonstrate how neural network verification can be leveraged for this purpose. Using neural network verification, however, entails a major challenge: it does not scale to large networks. To overcome this limitation, we design a novel training procedure that yields networks which are both more reliable according to our definitions and more amenable to neural network verification. By exploiting the piecewise affine structure of our networks, we can simplify them locally and thus decrease verification runtime significantly. We also take a perspective that complements a neural network's training by exploring how to repair unreliable neural network ensembles. With this thesis we paradigmatically show the necessity and the complications of unsupervised neural network verification, aiming to pave the way for more research and towards the safe usage of these simple-to-build yet difficult-to-understand models.
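To make the piecewise affine structure mentioned in the abstract concrete, the following is a minimal Python sketch; it is illustrative only, not the implementation from the thesis, and it assumes a fully connected network with a ReLU after every layer. Because a ReLU acts as a fixed 0/1 diagonal matrix once the activation pattern is pinned down, the network coincides with a single affine map on the linear region around a given input, and that map can be extracted layer by layer:

import numpy as np

def local_affine_map(weights, biases, x):
    # Returns (W_eff, b_eff) with network(y) == W_eff @ y + b_eff
    # for every y in the linear region that contains x.
    d = x.shape[0]
    W_eff, b_eff = np.eye(d), np.zeros(d)
    a = x
    for W, b in zip(weights, biases):
        pre = W @ a + b                      # pre-activation at this layer
        mask = (pre > 0).astype(pre.dtype)   # activation pattern, fixed at x
        W_eff = mask[:, None] * (W @ W_eff)  # compose the masked affine layer
        b_eff = mask * (W @ b_eff + b)
        a = np.maximum(pre, 0.0)             # ordinary forward pass
    return W_eff, b_eff

# Usage on a random two-layer ReLU network (hypothetical example data):
rng = np.random.default_rng(0)
weights = [rng.normal(size=(5, 3)), rng.normal(size=(3, 5))]
biases = [rng.normal(size=5), rng.normal(size=3)]
x = rng.normal(size=3)
W_eff, b_eff = local_affine_map(weights, biases, x)
# Near x the network equals this single affine map (up to float error).

On such a region a verifier can reason about one affine map instead of the full nonlinear network, which is one way local simplification can reduce verification cost.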
Subject Headings: Neural networks
Verification
Adversarial attacks
Subject Headings (RSWK): Neuronale Netze
Programmverifikation
Computersabotage
URI: http://hdl.handle.net/2003/42030
http://dx.doi.org/10.17877/DE290R-23863
Issue Date: 2023
Appears in Collections: Chair of Data Science and Data Engineering

Files in This Item:
File             Description  Size     Format
Diss_Boeing.pdf  DNB          3.95 MB  Adobe PDF


This item is protected by original copyright