Toward precise ambiguity-aware cross-modality global self-localization

dc.contributor.author: Stannartz, Niklas
dc.contributor.author: Schütte, Stefan
dc.contributor.author: Kuhn, Markus
dc.contributor.author: Bertram, Torsten
dc.date.accessioned: 2024-04-17T12:35:31Z
dc.date.available: 2024-04-17T12:35:31Z
dc.date.issued: 2023-06-14
dc.description.abstract: There have been significant advances in GNSS-free cross-modality self-localization of self-driving vehicles. Recent methods focus on learnable features for both cross-modal global localization via place recognition (PR) and local pose tracking; however, they lack a means of combining them in a complete localization pipeline. That is, a pose retrieved from PR has to be validated to confirm that it actually represents the true pose. Performing this validation without GNSS measurements makes the localization problem significantly more challenging. In this contribution, we propose a method to precisely localize the ego-vehicle in a high-resolution map without a GNSS prior. Furthermore, sensor and map data may be of different dimensions (2D/3D) and modalities, i.e., radar, lidar, or aerial imagery. We initialize our system with multiple hypotheses retrieved from a PR method and infer the correct hypothesis over time. This multi-hypothesis approach is realized using a Gaussian sum filter, which enables efficient tracking of a low number of hypotheses and further facilitates inference of our deep sensor-to-map matching network at arbitrarily distant regions simultaneously. We further propose a method to estimate the probability that none of the currently tracked hypotheses is correct. We achieve successful global localization in extensive experiments on the MulRan dataset, outperforming comparative methods even when none of the initial poses from PR is close to the true pose. Due to the flexibility of the approach, we can show state-of-the-art accuracy in lidar-to-aerial-imagery localization on a custom dataset, using our pipeline with only minor modifications of the matching model.
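The core idea described in the abstract — tracking a small set of PR-derived pose hypotheses with a Gaussian sum filter and letting measurement likelihoods concentrate the weights on the correct one — can be sketched as follows. This is a minimal 1D illustration under simplifying assumptions (independent Kalman updates per component, a scalar pose state); all names and parameters are illustrative and not the authors' implementation, which operates on full vehicle poses with a learned sensor-to-map matching likelihood.

```python
import math


def gaussian_pdf(x, mean, var):
    """Density of a 1D Gaussian, used as the measurement likelihood."""
    return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)


class GaussianSumFilter:
    """Tracks several pose hypotheses, each a weighted Gaussian (mean, var, weight)."""

    def __init__(self, hypotheses):
        # hypotheses: list of (mean, var, weight); weights should sum to 1.
        self.h = [list(t) for t in hypotheses]

    def predict(self, motion, process_var):
        # Propagate every hypothesis by the odometry increment and
        # inflate its uncertainty with the process noise.
        for g in self.h:
            g[0] += motion
            g[1] += process_var

    def update(self, z, meas_var):
        # Per-hypothesis Kalman update; reweight each component by how well
        # it explains the measurement, then renormalize the weights.
        total = 0.0
        for g in self.h:
            mean, var, w = g
            lik = gaussian_pdf(z, mean, var + meas_var)  # innovation likelihood
            k = var / (var + meas_var)                   # Kalman gain
            g[0] = mean + k * (z - mean)
            g[1] = (1.0 - k) * var
            g[2] = w * lik
            total += g[2]
        for g in self.h:
            g[2] /= total

    def best(self):
        # The highest-weighted hypothesis is the current pose estimate.
        return max(self.h, key=lambda g: g[2])


# Two hypotheses far apart (as if returned by place recognition);
# measurements near the second one quickly dominate the weights.
gsf = GaussianSumFilter([(0.0, 1.0, 0.5), (10.0, 1.0, 0.5)])
for z in (9.8, 10.1, 9.9):
    gsf.predict(0.0, 0.1)
    gsf.update(z, 0.5)
mean, var, weight = gsf.best()
```

In the paper's setting, the measurement likelihood comes from the cross-modal matching network evaluated around each hypothesis, and an additional term estimates the probability that no tracked hypothesis is correct; the sketch above only shows the weight-concentration mechanism.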
dc.identifier.uri: http://hdl.handle.net/2003/42442
dc.identifier.uri: http://dx.doi.org/10.17877/DE290R-24278
dc.language.iso: en
dc.relation.ispartofseries: IEEE Access; 11
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: vehicle self-localization
dc.subject: cross-modality localization
dc.subject: global localization
dc.subject: place recognition
dc.subject: multi-hypothesis localization
dc.subject: HD map
dc.subject: automated driving
dc.subject.ddc: 620
dc.title: Toward precise ambiguity-aware cross-modality global self-localization
dc.type: Text
dc.type.publicationtype: ResearchArticle
dcterms.accessRights: open access
eldorado.secondarypublication: true
eldorado.secondarypublication.primarycitation: N. Stannartz, S. Schütte, M. Kuhn, and T. Bertram, "Toward precise ambiguity-aware cross-modality global self-localization," IEEE Access, vol. 11, pp. 60005–60027, 2023, https://doi.org/10.1109/access.2023.3286310
eldorado.secondarypublication.primaryidentifier: https://doi.org/10.1109/ACCESS.2023.3286310

Files

Original bundle
Name: Toward_Precise_Ambiguity-Aware_Cross-Modality_Global_Self-Localization.pdf
Size: 3.6 MB
Format: Adobe Portable Document Format
Description: DNB
License bundle
Name: license.txt
Size: 4.85 KB
Description: Item-specific license agreed upon submission