|Title:||Time-optimal large view visual servoing with dynamic sets of SIFT|
|Abstract:||This paper presents a novel approach to large-view visual servoing in the context of object manipulation. In many scenarios the features extracted in the reference pose are perceivable only across a limited region of the workspace. This limited visibility necessitates additional intermediate reference views of the object and requires path planning in view space. In our scheme, visual control is based on decoupled moments of SIFT features, which are generic in the sense that the control operates on a dynamic set of feature correspondences rather than a static set of geometric features. This flexibility of dynamic feature sets enables path planning in image space and online selection of optimal reference views while servoing towards the goal view. The time to convergence to the goal view is estimated by a neural network from the residual feature error and the quality of the SIFT feature distribution. Transitions among reference views occur on the basis of this estimated cost, which is evaluated online from the current set of visible features. The dynamic switching scheme achieves robust and nearly time-optimal convergence of the visual control across the entire task space. The effectiveness and robustness of the scheme are confirmed in an experimental evaluation in a virtual-reality simulation and on a real robot arm with an eye-in-hand configuration.|
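The view-switching logic summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual method: the learned cost model is replaced here by a hand-written linear stand-in, and all names (`ReferenceView`, `estimated_cost`, `select_reference_view`), fields, and weights are hypothetical.

```python
# Hypothetical sketch of online reference-view selection: for each candidate
# reference view, a cost model estimates the time to convergence from the
# residual feature error and the quality of the matched SIFT feature
# distribution; the servo switches to the view with the lowest estimated cost.
# The linear cost below is an illustrative stand-in for the paper's neural net.
from dataclasses import dataclass

@dataclass
class ReferenceView:
    name: str
    residual_error: float   # aggregate feature error to this view (pixels)
    feature_quality: float  # score of the matched SIFT distribution, in (0, 1]

def estimated_cost(view: ReferenceView,
                   w_err: float = 1.0, w_qual: float = 2.0) -> float:
    """Stand-in cost: a larger residual error and a poorer feature
    distribution both increase the predicted time to converge."""
    return w_err * view.residual_error + w_qual / view.feature_quality

def select_reference_view(views: list[ReferenceView]) -> ReferenceView:
    """Pick the currently visible reference view with the lowest cost."""
    return min(views, key=estimated_cost)
```

In a servo loop this selection would be re-evaluated every control cycle over the views whose features are currently visible, so the controller can switch targets as the camera moves through the workspace.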
|Appears in Collections:||Sonderforschungsbereich (SFB) 531|
This item is protected by original copyright
If no CC license is given, please contact the creator if you want to use the resource in any way other than reading it.