Full metadata record
DC Field (Language): Value
dc.contributor.author: Fiedler, Felix
dc.contributor.author: Lucia, Sergio
dc.date.accessioned: 2024-04-17T13:33:18Z
dc.date.available: 2024-04-17T13:33:18Z
dc.date.issued: 2023-11-02
dc.identifier.uri: http://hdl.handle.net/2003/42444
dc.identifier.uri: http://dx.doi.org/10.17877/DE290R-24280
dc.description.abstract (en): Uncertainty quantification is an important task in machine learning - a task in which standard neural networks (NNs) have traditionally not excelled. This can be a limitation for safety-critical applications, where uncertainty-aware methods like Gaussian processes or Bayesian linear regression are often preferred. Bayesian neural networks are an approach to address this limitation. They assume probability distributions for all parameters and yield distributed predictions. However, training and inference are typically intractable and approximations must be employed. A promising approximation is NNs with Bayesian last layer (BLL). They assume distributed weights only in the linear output layer and yield a normally distributed prediction. To approximate the intractable Bayesian neural network, point estimates of the distributed weights in all but the last layer should be obtained by maximizing the marginal likelihood. This has previously been challenging, as the marginal likelihood is expensive to evaluate in this setting. We present a reformulation of the log-marginal likelihood of a NN with BLL which allows for efficient training using backpropagation. Furthermore, we address the challenge of uncertainty quantification for extrapolation points. We provide a metric to quantify the degree of extrapolation and derive a method to improve the uncertainty quantification for these points. Our methods are derived for the multivariate case and demonstrated in a simulation study. In comparison to Bayesian linear regression with fixed features, and a Bayesian neural network trained with variational inference, our proposed method achieves the highest log-predictive density on test data.
dc.language.iso (de): en
dc.relation.ispartofseries: IEEE access;11
dc.rights.uri (de): https://creativecommons.org/licenses/by/4.0/
dc.subject (en): Bayesian last layer
dc.subject (en): Bayesian neural network
dc.subject (en): uncertainty quantification
dc.subject.ddc: 660
dc.title (en): Improved uncertainty quantification for neural networks with Bayesian last layer
dc.type (de): Text
dc.type.publicationtype (de): ResearchArticle
dcterms.accessRights: open access
eldorado.secondarypublication (de): true
eldorado.secondarypublication.primaryidentifier (de): https://doi.org/10.1109/ACCESS.2023.3329685
eldorado.secondarypublication.primarycitation (de): F. Fiedler and S. Lucia, "Improved uncertainty quantification for neural networks with Bayesian last layer", IEEE Access, vol. 11, pp. 123149–123160, 2023, https://doi.org/10.1109/access.2023.3329685
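The abstract describes NNs with a Bayesian last layer: the learned features feed a linear output layer whose weights carry a Gaussian distribution, yielding a normally distributed prediction. A minimal numpy sketch of that last-layer inference step is given below; it uses standard Bayesian linear regression on fixed features, and all names and values (`alpha`, `beta`, the toy features) are illustrative assumptions, not the paper's implementation or its marginal-likelihood training scheme.

```python
import numpy as np

# Illustrative sketch of the Bayesian last layer (BLL) idea: treat the
# network's last-hidden-layer features as fixed and place a Gaussian
# prior on the linear output weights. alpha is the (assumed) prior
# precision of the weights, beta the (assumed) noise precision.

rng = np.random.default_rng(0)

def bll_posterior(Phi, y, alpha=1.0, beta=25.0):
    """Gaussian posterior over last-layer weights given features Phi and targets y."""
    d = Phi.shape[1]
    S_inv = alpha * np.eye(d) + beta * Phi.T @ Phi  # posterior precision
    S = np.linalg.inv(S_inv)                        # posterior covariance
    m = beta * S @ Phi.T @ y                        # posterior mean
    return m, S

def bll_predict(phi_star, m, S, beta=25.0):
    """Normally distributed prediction at feature vector phi_star."""
    mean = phi_star @ m
    var = 1.0 / beta + phi_star @ S @ phi_star      # noise + weight uncertainty
    return mean, var

# Toy example: random stand-ins for last-hidden-layer features of 50 points.
Phi = rng.normal(size=(50, 4))
w_true = rng.normal(size=4)
y = Phi @ w_true + rng.normal(scale=0.2, size=50)

m, S = bll_posterior(Phi, y)
mean, var = bll_predict(rng.normal(size=4), m, S)
```

The predictive variance `var` grows with `phi_star @ S @ phi_star`, i.e. with how unusual the query features are relative to the training features, which is the mechanism behind the extrapolation-aware uncertainty discussed in the abstract.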
Appears in Collections: Fakultät für Bio- und Chemieingenieurwesen

Files in This Item:
File: Improved_Uncertainty_Quantification_for_Neural_Networks_With_Bayesian_Last_Layer.pdf
Description: DNB
Size: 1.9 MB
Format: Adobe PDF

This item is protected by original copyright

This item is licensed under a Creative Commons License.