Author(s): Posada, Luis Felipe
Title: Visual robot navigation with omnidirectional vision
Language (ISO): en
Abstract: In a world where service robots are increasingly becoming an inherent part of our lives, it is essential to provide robots with superior perception capabilities and acute semantic knowledge of their environment. In recent years, the field of computer vision has advanced immensely, providing rich information at a fraction of the cost of other sensing modalities. It has thereby become an essential part of many autonomous systems and the sensor of choice for tackling the most challenging perception problems. Nevertheless, it remains difficult for a robot to extract meaningful information from an image signal, which is high-dimensional, complex, and noisy. This dissertation presents several contributions towards visual robot navigation relying solely on omnidirectional vision.

The first part of the thesis is devoted to robust free-space detection from omnidirectional images. By mimicking a range sensor, the free-space extraction in the omnidirectional view constitutes a fundamental building block of our system, enabling collision-free navigation, localization, and map building. Uncertainty in the free-space classification is handled with fuzzy preference structures, which express it explicitly in terms of preference, conflict, and ignorance. We show that the classification error can be reduced substantially by rejecting queries associated with a strong degree of conflict or ignorance.

The motivation for using vision instead of classical proximity sensors becomes apparent once more semantic categories are incorporated into the scene segmentation. We propose a multi-cue classifier that distinguishes between the classes floor, vertical structures, and clutter. This result is further refined to extract the scene's spatial layout and a surface reconstruction for better spatial and context awareness. Our scheme corrects the distortions induced by the hyperbolic mirror with a novel bird's-eye-view formulation. The proposed framework is suitable for self-supervised learning from 3D point cloud data. Place context is integrated into the system by training a place category classifier that distinguishes among the categories room, corridor, doorway, and open space. Both hand-engineered features and features learned from data are considered in different ensemble systems.

The last part of the thesis is concerned with local and map-based navigation. Several visual local semantic behaviors are derived by fusing the semantic scene segmentation with the semantic place context. The advantage of the proposed local navigation is that the system can recover from errors when behaviors are activated in the wrong context, and higher-level behaviors can be achieved by composing the basic ones. Finally, we propose several visual map-based navigation alternatives, comprising map generation, particle filter localization, and semantic map building, that match or improve upon the results obtained with classical proximity sensors.
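The abstract describes the free-space segmentation as mimicking a range sensor. A minimal sketch of that idea, assuming a binary free-space mask centered on the robot in the omnidirectional view, is to cast radial beams from the image center and record the distance to the first occupied pixel. The function name `virtual_range_scan` and all parameters below are illustrative, not taken from the dissertation.

```python
import numpy as np

def virtual_range_scan(free_mask, center, n_beams=360, max_radius=None):
    """Cast radial beams from the robot's position (the image center of
    the omnidirectional view) through a binary free-space mask and return
    the pixel distance to the first occupied cell along each bearing,
    mimicking a 2D range scanner."""
    h, w = free_mask.shape
    cx, cy = center
    if max_radius is None:
        max_radius = int(min(cx, cy, w - cx, h - cy)) - 1
    angles = np.linspace(0.0, 2.0 * np.pi, n_beams, endpoint=False)
    ranges = np.full(n_beams, float(max_radius))
    for i, a in enumerate(angles):
        dx, dy = np.cos(a), np.sin(a)
        for r in range(1, max_radius):
            x, y = int(cx + r * dx), int(cy + r * dy)
            if not free_mask[y, x]:        # first obstacle along the beam
                ranges[i] = float(r)
                break
    return angles, ranges

# Usage: angles, ranges = virtual_range_scan(mask, center=(320, 240))
```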
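The rejection mechanism based on fuzzy preference structures can be illustrated with a common decomposition of a pair of fuzzy memberships into preference, conflict, and ignorance degrees. The thresholds, names, and exact decomposition below are assumptions for illustration; the dissertation's actual formulation may differ.

```python
import numpy as np

def preference_structure(mu_free, mu_occ):
    """Decompose a pair of fuzzy memberships (free vs. occupied) into
    strict preference, conflict, and ignorance degrees."""
    conflict = np.minimum(mu_free, mu_occ)         # both classes supported
    ignorance = 1.0 - np.maximum(mu_free, mu_occ)  # neither class supported
    pref_free = mu_free - conflict                 # net evidence for free
    pref_occ = mu_occ - conflict                   # net evidence for occupied
    return pref_free, pref_occ, conflict, ignorance

def classify_with_reject(mu_free, mu_occ, tau_c=0.3, tau_i=0.3):
    """Label a query as free (1) or occupied (0), but reject it (-1)
    when conflict or ignorance exceeds a (hypothetical) threshold."""
    p_f, p_o, c, ig = preference_structure(mu_free, mu_occ)
    label = np.where(p_f >= p_o, 1, 0)
    return np.where((c > tau_c) | (ig > tau_i), -1, label)
```

Rejecting the queries flagged with -1 is what allows the error on the remaining, confidently classified pixels to drop.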
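The bird's-eye-view correction itself is a novel contribution of the thesis and is not reproduced here. The sketch below only shows the generic inverse-warping pattern such a correction plugs into, where a calibrated projection function maps metric ground-plane coordinates into the omnidirectional image; `toy_project` is a deliberately simplified, radially symmetric stand-in, not the hyperbolic-mirror model.

```python
import numpy as np
import cv2

def toy_project(X, Y, h=1.0, f=150.0, cx=320.0, cy=240.0):
    """Hypothetical radially symmetric camera model: the image radius
    grows with the angle between the downward optical axis and the ray
    to the ground point. An equidistant stand-in, NOT the dissertation's
    hyperbolic-mirror formulation."""
    d = np.hypot(X, Y)            # ground distance from the robot
    theta = np.arctan2(d, h)      # elevation angle of the ray
    r = f * theta                 # image radius (equidistant model)
    phi = np.arctan2(Y, X)        # bearing
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def birds_eye_view(omni_img, project=toy_project, size_m=6.0, px_per_m=50):
    """Inverse-warp an omnidirectional image onto a metric ground-plane
    grid; `project` stands in for the calibrated mirror/camera model."""
    n = int(size_m * px_per_m)
    xs = (np.arange(n) - n / 2) / px_per_m
    X, Y = np.meshgrid(xs, xs)
    u, v = project(X, Y)
    return cv2.remap(omni_img, u.astype(np.float32), v.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# Usage: bev = birds_eye_view(omni_bgr_image)
```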
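Combining hand-engineered and learned feature representations in an ensemble, as done for the place category classifier, could look as follows. The feature matrices are random stand-ins and the choice of base learners is an assumption, not the dissertation's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

PLACES = ["room", "corridor", "doorway", "open space"]

# Random stand-ins: X_hand would hold hand-engineered descriptors,
# X_learned a learned representation of the same frames.
rng = np.random.default_rng(0)
X_hand = rng.normal(size=(200, 32))
X_learned = rng.normal(size=(200, 128))
y = rng.integers(0, len(PLACES), size=200)

# Concatenate both representations and fuse two base learners by
# soft voting over their class probabilities.
X = np.hstack([X_hand, X_learned])
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=1000))),
    ],
    voting="soft",
)
ensemble.fit(X, y)
print([PLACES[k] for k in ensemble.predict(X[:3])])
```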
Keywords: Visual robot navigation
Semantic mapping
Omnidirectional vision
Visual robot localization
Visual robot behaviors
Keywords (RSWK): Semantic modeling
Machine vision
Image processing
Position measurement
Navigation
URI: http://hdl.handle.net/2003/38417
http://dx.doi.org/10.17877/DE290R-20348
Date of publication: 2019
Appears in collections: Lehrstuhl für Regelungssystemtechnik

Files in this item:
File                     Description  Size     Format
Dissertation_Posada.pdf  DNB          6.68 MB  Adobe PDF


This item is protected by copyright.


