Improved uncertainty quantification for neural networks with Bayesian last layer
dc.contributor.author | Fiedler, Felix | |
dc.contributor.author | Lucia, Sergio | |
dc.date.accessioned | 2024-04-17T13:33:18Z | |
dc.date.available | 2024-04-17T13:33:18Z | |
dc.date.issued | 2023-11-02 | |
dc.description.abstract | Uncertainty quantification is an important task in machine learning, a task in which standard neural networks (NNs) have traditionally not excelled. This can be a limitation for safety-critical applications, where uncertainty-aware methods like Gaussian processes or Bayesian linear regression are often preferred. Bayesian neural networks are an approach to address this limitation. They assume probability distributions for all parameters and yield distributed predictions. However, training and inference are typically intractable and approximations must be employed. A promising approximation is NNs with Bayesian last layer (BLL). They assume distributed weights only in the linear output layer and yield a normally distributed prediction. To approximate the intractable Bayesian neural network, point estimates of the distributed weights in all but the last layer should be obtained by maximizing the marginal likelihood. This has previously been challenging, as the marginal likelihood is expensive to evaluate in this setting. We present a reformulation of the log-marginal likelihood of a NN with BLL which allows for efficient training using backpropagation. Furthermore, we address the challenge of uncertainty quantification for extrapolation points. We provide a metric to quantify the degree of extrapolation and derive a method to improve the uncertainty quantification for these points. Our methods are derived for the multivariate case and demonstrated in a simulation study. In comparison to Bayesian linear regression with fixed features and a Bayesian neural network trained with variational inference, our proposed method achieves the highest log-predictive density on test data. | en |
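The abstract describes the Bayesian last layer (BLL) setting: the network's last hidden activations are treated as fixed features, and Bayesian linear regression is applied to the linear output weights, yielding a normally distributed prediction. A minimal sketch of that general idea, assuming an isotropic Gaussian weight prior with hypothetical hyperparameters `alpha` (prior variance) and `sigma_noise` (observation noise standard deviation) — this illustrates the standard BLL posterior and predictive, not the paper's specific log-marginal-likelihood reformulation:

```python
import numpy as np

def bll_posterior(Phi, y, sigma_noise=0.1, alpha=1.0):
    """Posterior over the last-layer weights.

    Phi: (n, d) matrix of last-hidden-layer features for n training points.
    y:   (n,) vector of scalar targets.
    Prior on weights: w ~ N(0, alpha * I).  (Hyperparameters are
    illustrative assumptions, not the paper's notation.)
    """
    n, d = Phi.shape
    # Posterior precision: Phi^T Phi / sigma^2 + I / alpha
    S_inv = Phi.T @ Phi / sigma_noise**2 + np.eye(d) / alpha
    S = np.linalg.inv(S_inv)                 # posterior covariance
    m = S @ Phi.T @ y / sigma_noise**2       # posterior mean
    return m, S

def bll_predict(phi_star, m, S, sigma_noise=0.1):
    """Normally distributed prediction for a new feature vector phi_star (d,)."""
    mean = phi_star @ m
    # Predictive variance combines weight uncertainty and observation noise.
    var = phi_star @ S @ phi_star + sigma_noise**2
    return mean, var
```

Note that the predictive variance `phi_star @ S @ phi_star` grows for feature vectors far from the training data, which is the mechanism behind the extrapolation-aware uncertainty the abstract refers to.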
dc.identifier.uri | http://hdl.handle.net/2003/42444 | |
dc.identifier.uri | http://dx.doi.org/10.17877/DE290R-24280 | |
dc.language.iso | en | de |
dc.relation.ispartofseries | IEEE access;11 | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | de |
dc.subject | Bayesian last layer | en |
dc.subject | Bayesian neural network | en |
dc.subject | uncertainty quantification | en |
dc.subject.ddc | 660 | |
dc.title | Improved uncertainty quantification for neural networks with Bayesian last layer | en |
dc.type | Text | de |
dc.type.publicationtype | ResearchArticle | de |
dcterms.accessRights | open access | |
eldorado.secondarypublication | true | de |
eldorado.secondarypublication.primarycitation | F. Fiedler and S. Lucia, "Improved uncertainty quantification for neural networks with Bayesian last layer", IEEE Access, vol. 11, pp. 123149–123160, 2023, https://doi.org/10.1109/access.2023.3329685 | de |
eldorado.secondarypublication.primaryidentifier | https://doi.org/10.1109/ACCESS.2023.3329685 | de |
Files
Original bundle
- Name: Improved_Uncertainty_Quantification_for_Neural_Networks_With_Bayesian_Last_Layer.pdf
- Size: 1.86 MB
- Format: Adobe Portable Document Format
- Description: DNB
License bundle
- Name: license.txt
- Size: 4.85 KB
- Description: Item-specific license agreed upon to submission