Arbeitsgebiet Bildsignalverarbeitung

Recent Submissions

  • Item
    Statistical methods for compositional and photometric analysis of the lunar surface
    (2024) Hess, Marcel; Wöhler, Christian; Hiesinger, Harald
    This thesis presents statistical methods for analyzing the surface properties and processes of the lunar surface using reflectance spectroscopy. The first process of interest is the diurnal variation of OH/H2O on the Moon. Closely linked to this are lunar swirls, the second major science topic of this work. Thirdly, we investigate the mineral composition and space weathering to understand the first two processes. The solar wind contains hydrogen that forms hydroxyl and is the main driver of space weathering on the lunar surface. The hydrogen content varies more strongly with the time of day for TiO2-rich surfaces and less strongly for plagioclase-rich areas. The OH/H2O variations at swirls are generally weaker due to magnetic shielding. However, the compaction significance spectral index (CSSI) shows that the brightness difference does not come predominantly from maturity differences at all swirl locations; compaction also plays a significant role in the appearance of lunar swirls. Another dimension of lunar swirls explored in this work is their photometric behavior, which can be seen as a proxy for the physical properties of the surface. This analysis further accentuates that compaction is a factor contributing to the increased brightness of swirls. Furthermore, this thesis introduces an unmixing framework that considers the effects of space weathering by using the nanophase and microphase particles as endmembers. Space weathering, mineral darkening agents like ilmenite, and grain size produce similar spectral effects. Thus, we propose a Bayesian approach that characterizes the uncertainties and interdependencies. Finally, maps of the major minerals and space weathering agents for the Moon are presented.
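The Bayesian unmixing idea above can be sketched in a few lines. This is a minimal illustration, not the thesis's method: two made-up endmember spectra stand in for the real mineral and space-weathering endmembers, and a Metropolis sampler estimates the posterior of a single mixing fraction together with its uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

wav = np.linspace(0.5, 2.5, 50)          # wavelength axis (placeholder units)
e1 = 0.10 + 0.20 * wav                   # hypothetical endmember spectrum 1
e2 = 0.30 - 0.05 * wav                   # hypothetical endmember spectrum 2
sigma = 0.01                             # assumed measurement noise

f_true = 0.7                             # true mixing fraction of endmember 1
y = f_true * e1 + (1.0 - f_true) * e2 + rng.normal(0.0, sigma, wav.size)

def log_like(f):
    """Gaussian log-likelihood of the linear two-endmember mixture."""
    model = f * e1 + (1.0 - f) * e2
    return -0.5 * np.sum((y - model) ** 2) / sigma ** 2

# Metropolis random walk over the fraction with a flat prior on [0, 1].
samples = []
f, ll = 0.5, log_like(0.5)
for _ in range(6000):
    prop = f + rng.normal(0.0, 0.05)
    if 0.0 <= prop <= 1.0:
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            f, ll = prop, ll_prop
    samples.append(f)

post = np.array(samples[1000:])          # discard burn-in
f_mean, f_std = post.mean(), post.std()  # posterior mean and uncertainty
```

The posterior spread is exactly the kind of uncertainty information that a point estimate from ordinary least squares would not provide.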
  • Item
    Metabolic profiling on 2D NMR TOCSY spectra using machine learning
    (2023) Migdadi, Lubaba Yousef Hazza; Wöhler, Christian; Kummert, Franz
    Because biological cells are highly dynamic, metabolic profiling plays a most significant role in discovering biological fingerprints of diseases and their evolution, as well as the cellular pathways of different biological or chemical stimuli. Two-dimensional nuclear magnetic resonance (2D NMR) is one of the fundamental and most powerful analytical instruments for metabolic profiling. Although total correlation spectroscopy (2D NMR 1H-1H TOCSY) can be used to mitigate the spectral overlap of 1D NMR, strong peak shifts, signal overlap, spectral crowding and matrix effects in complex biological mixtures remain extremely challenging in 2D NMR analysis. In this work, we introduce an automated metabolic deconvolution and assignment based on the deconvolution of 2D TOCSY spectra of real breast cancer tissue, as well as of different differentiation pathways of adipose tissue-derived human mesenchymal stem cells. As a major alternative to the common approaches in NMR-based machine learning, where images of the spectra are used as input, our metabolic assignment is based only on the vertical and horizontal frequencies of metabolites in the 1H-1H TOCSY. One- and multi-class kernel null Foley–Sammon transform, support vector machine, polynomial classifier, kernel density estimation, and support vector data description classifiers were tested in semi-supervised learning and novelty detection settings. The classifiers' performance was evaluated by comparing the conventional human-based methodology with the automatic assignments under different initial training size settings. The results demonstrate the suitability, robustness, and speed of our novel metabolic profiling method in automated non-targeted NMR metabolic analysis.
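A toy version of the frequency-based novelty-detection setting can be sketched as follows. The peak coordinates, bandwidth and threshold below are invented placeholders; of the kernel methods evaluated in the thesis, a Parzen-window (kernel density) model is the simplest to illustrate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training peaks of one known metabolite, clustered in (F2, F1) coordinate
# space; synthetic stand-ins, not real chemical shifts.
train = rng.normal(loc=[3.5, 3.7], scale=0.05, size=(100, 2))

H = 0.1  # kernel bandwidth (assumed)

def parzen_density(x, data, h=H):
    """Gaussian Parzen-window density estimate at point x."""
    d2 = np.sum((data - x) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2.0 * h * h))) / (2.0 * np.pi * h * h)

# Novelty threshold: 5th percentile of the densities at the training peaks.
train_dens = np.array([parzen_density(p, train) for p in train])
threshold = np.percentile(train_dens, 5)

known_peak = np.array([3.5, 3.7])   # should be accepted as the known class
novel_peak = np.array([8.0, 1.0])   # far-off peak, should be flagged novel
is_known = parzen_density(known_peak, train) >= threshold
is_novel = parzen_density(novel_peak, train) < threshold
```

Setting the threshold from the training densities themselves means no labelled novelty examples are needed, which mirrors the novelty-detection setting described above.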
  • Item
    Learning the link between Albedo and reflectance: Machine learning-based prediction of hyperspectral bands from CTX images
    (2022-07-18) Stepcenkov, Sergej; Wilhelm, Thorsten; Wöhler, Christian
    The instruments of the Mars Reconnaissance Orbiter (MRO) provide a large quantity and variety of imaging data for investigations of the Martian surface. Among others, the hyperspectral Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) captures visible to infrared reflectance across several hundred spectral bands. However, Mars is only partially covered by targeted CRISM observations at full spectral and spatial resolution; in fact, less than one percent of the Martian surface is imaged in this way. In contrast, the Context Camera (CTX) onboard the MRO delivers images with a higher spatial resolution, and the image data cover almost the entire Martian surface. In this work, we examine to what extent machine learning systems can learn the relation between morphology, albedo and spectral composition. To this end, a dataset of 67 CRISM-CTX image pairs is created and different deep neural networks are trained for the pixel-wise prediction of CRISM bands solely based on the albedo information of a CTX image. The trained models enable us to estimate spectral bands across large areas without existing CRISM data and to predict the spectral composition of any CTX image. The predictions are qualitatively similar to the ground-truth spectra and also recover finer-grained details, such as dunes or small craters.
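The per-pixel prediction task can be illustrated with a deliberately simple stand-in for the deep networks used in the paper: a polynomial regression from a single albedo value to several synthetic "bands". All data below are simulated, and a real model would also exploit spatial context (morphology), which a per-pixel fit cannot.

```python
import numpy as np

rng = np.random.default_rng(2)

n_pixels, n_bands = 2000, 8
albedo = rng.uniform(0.05, 0.35, n_pixels)            # "CTX" albedo values

# Hypothetical ground truth: each band is a smooth function of albedo.
coef = rng.uniform(0.5, 2.0, n_bands)
truth = np.outer(albedo, coef) + 0.1 + rng.normal(0.0, 0.005, (n_pixels, n_bands))

# Design matrix with polynomial albedo features: [1, a, a^2].
X = np.stack([np.ones_like(albedo), albedo, albedo ** 2], axis=1)
W, *_ = np.linalg.lstsq(X, truth, rcond=None)          # least-squares fit

pred = X @ W                                           # predicted band values
rmse = np.sqrt(np.mean((pred - truth) ** 2))           # prediction error
```

On this synthetic data the residual error is dominated by the injected noise; the interesting question in the paper is precisely how much better a network with spatial context can do on real imagery.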
  • Item
    Semantische Umgebungserfassung auf Basis von Radar-Merkmalskarten
    (2021) Lombacher, Jakob; Wöhler, Christian; Schwenker, Friedhelm
    Modern driver assistance systems and autonomous driving require detailed environment perception. Besides a precise description of the shape and state of surrounding objects, an ever better semantic understanding of the situation is needed. The requirements of functional safety mean that these demands can often only be met through redundancy. Today, systems for the semantic classification of objects mainly use optical sensors. Although the number of radar sensors installed in vehicles keeps growing and the measurement characteristics of each individual sensor steadily improve in resolution, accuracy and sensitivity, the radar sensor currently contributes little to semantics. This work investigates the potential of the radar sensor for semantic perception of the static vehicle environment. Starting from radar detections, it examines the entire processing and development chain of a classification system for radar. Using a large dataset, it is shown that classification of the static world achieves promising results and that targeted adaptations of the algorithms can improve the results considerably.
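The feature-map idea behind such a pipeline can be sketched as a simple occupancy-style grid: static radar detections accumulated over time are rasterized into cells, which can then be fed to a classifier. The detections, geometry and cell size below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated static detections: a wall segment and a parked-car blob (x, y in m).
wall = np.stack(
    [np.linspace(0.0, 8.0, 80), np.full(80, 5.0) + rng.normal(0.0, 0.05, 80)],
    axis=1,
)
car = rng.normal([2.0, 1.0], [0.8, 0.4], (60, 2))
det = np.vstack([wall, car])

cell = 0.25                                   # grid resolution in meters
x_edges = np.arange(-2.0, 10.0 + cell, cell)
y_edges = np.arange(-2.0, 8.0 + cell, cell)

# Occupancy-style feature map: detection count per cell.
grid, _, _ = np.histogram2d(det[:, 0], det[:, 1], bins=[x_edges, y_edges])

occupied = int(np.sum(grid > 0))              # cells containing detections
```

Richer per-cell features (e.g. radar cross-section statistics) would be stacked as additional channels in the same grid layout.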
  • Item
    Machine learning applied to radar data: classification and semantic instance segmentation of moving road users
    (2021) Schumann, Ole; Wöhler, Christian; Dietmayer, Klaus
    Classification and semantic instance segmentation applications are rarely considered for automotive radar sensors. In current implementations, objects have to be tracked over time before semantic information can be extracted. In this thesis, data from a network of 77 GHz automotive radar sensors is used to construct, train and evaluate machine learning algorithms for the classification of moving road users. The classification step is deliberately performed early in the process chain so that a subsequent tracking algorithm can benefit from this extra information. For this purpose, a large data set with real-world scenarios from about 5 h of driving was recorded and annotated. Given that the point clouds measured by the radar sensors are both sparse and noisy, the proposed methods have to be sensitive to the features that discern the individual classes from each other while remaining robust to outliers and measurement errors. Two groups of applications are considered: classification of clustered data and semantic (instance) segmentation of whole scenes. In the first category, specifically designed density-based clustering algorithms are used to group individual measurements into objects. These objects are then used either as input to a manual feature extraction step or as input to a neural network, which operates directly on the bare input points. Different classifiers are trained and evaluated on these input data. For the algorithms of the second category, the measurements of a whole scene are used as input, so that the clustering step becomes obsolete. A newly designed recurrent neural network for instance segmentation of point clouds is utilized. This approach outperforms all of the other proposed methods and exceeds the baseline score by about ten percentage points. In additional experiments, the performance of human test candidates on the same task is analyzed. This study shows that temporal correlations in the data are of great use to the test candidates, who are nevertheless outperformed by the recurrent network.
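The first processing stage, grouping sparse radar detections into objects, can be sketched with an off-the-shelf density-based clusterer. The point cloud and the `eps`/`min_samples` parameters below are illustrative, not the specifically designed algorithms or tuned values from the thesis.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)

# Synthetic 2D radar detections: two compact objects plus isolated clutter.
car = rng.normal(loc=[0.0, 0.0], scale=0.2, size=(20, 2))     # object 1
bike = rng.normal(loc=[5.0, 5.0], scale=0.2, size=(15, 2))    # object 2
clutter = np.array([[10.0, -3.0], [-4.0, 8.0], [2.5, 9.0]])   # noise points

points = np.vstack([car, bike, clutter])
labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(points)

n_objects = len(set(labels) - {-1})        # clusters found
n_noise = int(np.sum(labels == -1))        # detections rejected as clutter
```

The clusters would then be passed to a feature extractor or point-based network for classification, as described above.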
  • Item
    Uncertainty-based image segmentation with unsupervised mixture models
    (2019) Wilhelm, Thorsten; Wöhler, Christian; Kummert, Franz
    In this thesis, a contribution to explainable artificial intelligence is made. More specifically, the aspect of artificial intelligence which focusses on recreating human perception is tackled from a previously neglected direction. One facet of human perception is building a mental model of the extents of semantic objects which appear in the field of view. If this task is performed by an algorithm, it is termed image segmentation. Recent methods in this area are mostly trained in a supervised fashion by exploiting a data set of ground truth segmentations that is as extensive as possible. Further, semantic segmentation is almost exclusively tackled by Deep Neural Networks (DNNs). Both trends pose several issues. First, the annotations have to be acquired somehow. This is especially inconvenient if, for instance, a new sensor becomes available, new domains are explored, or different quantities become of interest. In each case, the cumbersome and potentially costly labelling of the raw data has to be redone. While annotating keywords to an image can be achieved in a reasonable amount of time, annotating every pixel of an image with its respective ground truth class is orders of magnitude more time-consuming. Unfortunately, the quality of the labels is an issue as well, because fine-grained structures like hair, grass, or the boundaries of biological cells have to be outlined exactly in image segmentation in order to derive meaningful conclusions. Second, DNNs are discriminative models. They simply learn to separate the features of the respective classes. While this works exceptionally well if enough data is provided, quantifying the uncertainty with which a prediction is made is then not directly possible. In order to allow this, the models have to be designed differently. This is achieved by generatively modelling the distribution of the features instead of learning the boundaries between classes.
Hence, image segmentation is tackled from a generative perspective in this thesis. By utilizing mixture models, which belong to the set of generative models, the quantification of uncertainty becomes an implicit property. Additionally, the need for annotations can be reduced because mixture models are conveniently estimated in the unsupervised setting. The thesis starts by computing the upper bounds of commonly used probability distributions; this knowledge is then used to build a novel probability distribution. It is based on flexible marginal distributions and a copula which models the dependence structure of multiple features. This modular approach allows great flexibility and shows excellent performance at image segmentation. After deriving the upper bounds, different ways to reach them in an unsupervised fashion are presented. Including the probable locations of edges in the unsupervised model estimation greatly increases the performance. The proposed models surpass state-of-the-art accuracies in the generative and unsupervised setting and are on par with many discriminative models. The analyses are conducted following the Bayesian paradigm, which allows computing uncertainty estimates of the model parameters. Finally, a novel approach combining a discriminative DNN and a local appearance model in a weakly supervised setting is presented. This combination yields a generative semantic segmentation model with minimal annotation effort.
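The core idea, unsupervised mixture-model segmentation with built-in uncertainty, can be sketched on a synthetic image. A plain Gaussian mixture stands in here for the copula-based distributions and edge priors developed in the thesis; the posterior responsibilities provide the per-pixel confidence.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Synthetic two-region image: a dark left half and a bright right half.
h, w = 20, 20
image = np.empty((h, w))
image[:, : w // 2] = rng.normal(0.2, 0.05, (h, w // 2))   # dark region
image[:, w // 2 :] = rng.normal(0.8, 0.05, (h, w // 2))   # bright region

feats = image.reshape(-1, 1)                   # per-pixel feature vector
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)

labels = gmm.predict(feats).reshape(h, w)      # unsupervised segmentation
confidence = gmm.predict_proba(feats).max(axis=1).reshape(h, w)

# Majority label per half (component order is arbitrary in mixture models).
left_label = int(np.round(labels[:, : w // 2].mean()))
right_label = int(np.round(labels[:, w // 2 :].mean()))
```

No ground-truth segmentation was used at any point, and `confidence` is the uncertainty quantification that a purely discriminative model would not provide out of the box.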
  • Item
    Can a red wood-ant nest be associated with fault-related CH4 micro-seepage?
    (2018-03-28) Berberich, Gabriele M.; Ellison, Aaron M.; Berberich, Martin B.; Grumpe, Arne; Becker, Adrian; Wöhler, Christian
    We measured methane (CH4) and stable carbon isotope of methane (δ13C-CH4) concentrations in ambient air and within a red wood-ant (RWA; Formica polyctena) nest in the Neuwied Basin (Germany) using high-resolution in-situ sampling to detect microbial, thermogenic, and abiotic fault-related micro-seepage of CH4. Methane degassing from RWA nests was not synchronized with earth tides, nor was it influenced by micro-earthquake degassing or concomitantly measured RWA activity. Two δ13C-CH4 signatures were identified in nest gas: −69‰ and −37‰. The lower peak was attributed to microbial decomposition of organic matter within the RWA nest, in line with previous observations that RWA nests are hot-spots of microbial CH4. The higher peak has not been reported in previous studies. We attribute this peak to fault-related CH4 emissions moving via fault networks into the RWA nest, which could originate either from thermogenic or abiotic CH4 formation. Sources of these micro-seepages could be Devonian schists, iron-bearing “Klerf Schichten”, or overlapping micro-seepage of magmatic CH4 from the Eifel plume. Given the abundance of RWA nests on the landscape, their role as sources of microbial CH4 and biological indicators for abiotically-derived CH4 should be included in estimation of methane emissions that are contributing to climatic change.
  • Item
    Degassing rhythms and fluctuations of geogenic gases in a red wood-ant nest and in soil in the Neuwied Basin (East Eifel Volcanic Field, Germany)
    (2018-10-05) Berberich, Gabriele M.; Berberich, Martin B.; Ellison, Aaron M.; Wöhler, Christian
    Geochemical tracers of crustal fluids (CO2, He, Rn) provide a useful tool for the identification of buried fault structures. We acquired geochemical data during 7 months of continual sampling to identify causal processes underlying correlations between ambient air and degassing patterns of three gases (CO2, He, Rn) in a nest of red wood ants (Formica polyctena; “RWA”) and the soil at Goloring in the Neuwied Basin, a part of the East Eifel Volcanic Field (EEVF). We explored whether temporal relations and degassing rhythms in soil and nest gas concentrations could be indicators of hidden faults through which the gases migrate to the surface from depth. In nest gas, the coupled CO2-He system and He concentrations exceeding atmospheric standards 2-3 fold suggested that RWA nests may be biological indicators of hidden degassing faults and fractures at small scales. Equivalent periodic infradian degassing rhythms in the RWA nest, soil, and three nearby mineral springs suggested NW-SE and NE-SW tectonic linkages. Because volcanic activity in the EEVF is dormant, more detailed information on the EEVF’s tectonic, magmatic, and degassing systems and its active tectonic fault zones is needed. Such data could provide additional insights into earthquake processes that are related to magmatic processes in the lower crust.
  • Item
    First identification of periodic degassing rhythms in three mineral springs of the East Eifel Volcanic Field (EEVF, Germany)
    (2019-04-24) Berberich, Gabriele M.; Wöhler, Christian; Berberich, Martin B.; Ellison, Aaron M.
    We present a geochemical dataset acquired during continual sampling over 7 months (bi-weekly) and 4 weeks (every 8 h) in the Neuwied Basin, a part of the East Eifel Volcanic Field (EEVF, Germany). We used a combination of geochemical, geophysical, and statistical methods to describe and identify potential causal processes underlying the correlated degassing patterns of CO2, He, and Rn and tectonic processes in the three investigated mineral springs (Nette, Kärlich and Kobern). We provide, for the first time, temporal analyses of periodic degassing patterns (1 day and 2–6 days) in springs. The temporal fluctuations in cyclic behavior of 4–5 days that we recorded had not been observed previously and may be attributed to a fundamental change in either gas source processes, subsequent gas transport to the surface, or the influence of volcano–tectonic earthquakes. Periods observed at 10 and 15 days may be related to discharge pulses of magma in the same periodic rhythm. We also report tentative evidence that deep low-frequency (DLF) earthquakes might actively modulate degassing. Temporal analyses of the CO2–He and CO2–Rn couples indicate that all springs are interlinked by previously unknown fault systems. The volcanic activity in the EEVF is dormant but not extinct. To understand and monitor its magmatic and degassing systems in relation to new developments in DLF earthquakes and magmatic recharging processes, and to identify seasonal variation in gas flux, we recommend continual monitoring of geogenic gases in all available springs at short temporal intervals.
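The kind of periodicity analysis described above can be sketched with a discrete Fourier transform of a gas series sampled every 8 h, as in the 4-week campaign. The series below is synthetic (a 5-day cycle plus noise standing in for CO2 concentrations); a real analysis would additionally handle gaps, trends and statistical significance.

```python
import numpy as np

rng = np.random.default_rng(5)

dt = 1.0 / 3.0                             # sampling step: 8 h = 1/3 day
t = np.arange(0.0, 90.0, dt)               # 90 days of measurements
co2 = 1.5 + 0.4 * np.sin(2.0 * np.pi * t / 5.0) + rng.normal(0.0, 0.1, t.size)

# Amplitude spectrum of the mean-removed series.
spectrum = np.abs(np.fft.rfft(co2 - co2.mean()))
freqs = np.fft.rfftfreq(t.size, d=dt)      # cycles per day

peak = np.argmax(spectrum[1:]) + 1         # skip the zero-frequency bin
period_days = 1.0 / freqs[peak]            # dominant degassing period
```

With a 90-day window the frequency resolution is 1/90 cycles per day, comfortably enough to separate the 1-day, 2–6-day and 10/15-day bands discussed above.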
  • Item
    Image-based 3D reconstruction of surfaces with highly complex reflectance properties
    (2019) Lenoch, Malte; Wöhler, Christian; Pauli, Josef
    The camera-based acquisition of the environment has become an ordinary task in today’s society, in science as much as in everyday-life situations. Smartphone cameras are employed in interactive video games and augmented reality, just as industrial quality inspection, remote sensing, robotics and autonomous vehicles rely on camera sensors to analyze the outside world. One crucial aspect of the automated analysis is the retrieval of the 3D structure of unknown objects in the scene from the acquired image data, be it for collision prevention, grasping, or comparison to a CAD model. Reflectance-based surface reconstruction methods form a valuable part of the set of camera-based algorithms. Stereo cameras exploit geometrical optics to triangulate the 3D position of a scene point, while photometric procedures require only one camera and estimate a surface gradient field based on the shading of an object. The reflectance properties of the object have to be known to achieve this, which results in a chicken-and-egg problem for unknown objects: the surface shape has to be available to approximate the reflectance properties, and the reflectance properties have to be known to estimate the surface shape. This situation is circumvented on Lambertian surfaces; yet, surfaces of interest in real-world applications exhibit much more complex reflectance properties, for which the problem remains. The challenge of estimating the unknown spatially varying bidirectional reflectance distribution function (BRDF) parameters of an object of approximately known shape is approached from a Bayesian perspective, employing reversible jump Markov chain Monte Carlo methods to infer both reflectance parameters and surface regions that show similar reflectance properties by sampling the posterior distributions of the data.
A significant advantage compared to non-linear least squares estimates is the availability of statistical information that can directly be used to evaluate the accuracy of the inferred patches and parameters. In the evaluation of the method, the derived patches accurately separate a synthetic and a laboratory dataset into meaningful segments. The reflectance of the synthetic dataset is almost perfectly reproduced and misestimated BRDF parameters underline the necessity for a large dataset to apply statistical inference. The real-world dataset reveals the inherent problems of BRDF estimation in the presence of cast shadows and interreflections. Furthermore, a procedure that is suitable to calibrate a two-camera photometric stereo acquisition setup is examined. The calibration is based on multiple images of a diffuse spherical object that is located in corresponding images. Although the calibration object is supposed to be perfectly diffuse by design, considering a specular Phong component in addition to the Lambertian BRDF model increases the accuracy of the rendered images. The light source positions are initialized based on stereo geometry and optimized by minimizing the intensity error between measured and rendered images of the calibration object. Ultimately, this dissertation tackles the task of image-based surface reconstruction with the contribution of two novel algorithms. The first one computes an initial approximation of the 3D shape based on the diffuse component of the reflectance and iteratively refines this rough guess with gradient fields calculated from photometric stereo assuming a combination of the BRDF models of Lambert and Blinn. The second method computes the surface gradient fields for both views of a stereo camera setup and updates the estimated depth subject to Horn’s integrability constraint and a new regularization term that accounts for the disparity offset between the two matching gradient fields. 
Both procedures are evaluated on objects that exhibit complex reflectance properties and challenging shapes. A fringe projection 3D scanner is used for reference data and error assessment. Small details that are not visible in the coarse initial 3D data, that is supplied to the first algorithm, are recovered based on the high-quality gradient data obtained from photometric stereo. The error of the test data with respect to the reference scanner is less than 0.3 mm. In contrast to the first method that computes shape information, the stereo camera algorithm yields absolute 3D data and produces very good reconstruction results on all datasets. The proposed method even surpasses the reconstruction accuracy of the 3D scanner on a metallic dataset. This is a notable contribution, as most existing camera-based surface reconstruction methods exclusively handle diffusely reflecting objects and those that focus on non-Lambertian objects still struggle with highly specular metallic surfaces.
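The photometric-stereo building block used by both algorithms can be sketched for the simplest, Lambertian case: three images under known light directions give I = rho * (L @ n) per pixel, which is solved for the albedo rho and surface normal n. The numbers below are synthetic; the thesis extends this to Lambert plus Blinn BRDFs, shadows and full gradient-field integration.

```python
import numpy as np

# Three known, non-coplanar light directions (unit vectors).
L = np.array([
    [0.0, 0.0, 1.0],
    [0.8, 0.0, 0.6],
    [0.0, 0.8, 0.6],
])
L /= np.linalg.norm(L, axis=1, keepdims=True)

rho_true = 0.75                    # hypothetical surface albedo
n_true = np.array([0.2, -0.1, 1.0])
n_true /= np.linalg.norm(n_true)   # true unit surface normal

I = rho_true * (L @ n_true)        # simulated pixel intensities (no shadows)

g = np.linalg.solve(L, I)          # g = rho * n
rho = np.linalg.norm(g)            # recovered albedo
n = g / rho                        # recovered unit normal
```

Applying this per pixel yields the gradient field whose integration, subject to integrability and regularization constraints, produces the depth maps evaluated above.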
  • Item
    Estimation of planetary surface ages using image based automatic crater detection algorithms
    (2018) Al-Tameemi, Atheer; Wöhler, Christian; Kummert, Franz
    A fully automatic system for crater detection, detection fusion, and age estimation is constructed that yields reliable results compared with the time-consuming manual process carried out by experts. A novel extension of crater detection algorithms (CDAs) is age estimation based on the crater size-frequency distribution (CSFD). The age of a surface is estimated essentially from the number of craters detected on it and its total area. It is examined how well a template matching method is suited to determining the age of different lunar areas. Six artificially lit crater models are used to count the craters in the investigated areas using cross-correlation. A threshold value for the automatic crater detection algorithm is calculated for each dataset to obtain the most reliable results, followed by an automatic fusion step that merges duplicate detections. A new implementation of this approach is provided for estimating the surface age, with flexible threshold values for the calibration and evaluation process. Together, these two automatic steps reduce processing time while providing reasonable crater detections and precise age values. An automatic age mapping process applies the optimal threshold value to larger homogeneous areas to study efficiency and behavior. To test accuracy and efficiency, a dataset from lunar nearside regions is examined to determine whether there is an ideal threshold value for the crater detection process that minimizes the errors of the derived surface ages with respect to literature values based on manually detected craters. For this purpose, the optimal threshold value is calculated in five areas of Mare Cognitum on the Moon and then used to determine the age of five other areas in Oceanus Procellarum.
By subsequently comparing the calculated ages with those from the literature, the accuracy of the method is examined. The image-based CDA has been applied to several different crater datasets. The first is the LU60645GT catalogue, which includes a large number of crater candidates with diameters between 0.7 km and 2.5 km located in the large craters Alphonsus and Ptolemaeus. The second dataset covers a different region on the Moon near the crater Hell Q and includes a limited number of small craters with very small diameters between 3 m and 70 m, while the third contains a list of medium-sized craters (128 m-1000 m) on the morphologically homogeneous floor of the lunar crater Tsiolkovsky. In a further step, an automatic method for detecting secondary crater candidates on the lunar surface is proposed. To assess the accuracy of the developed method, automatic crater counts were performed for the flat floor of the lunar farside crater Tsiolkovsky by applying the Voronoi tessellation based Secondary Candidate Detection (SCD) to the results of the template matching based crater detector. For a small area on the crater floor, the obtained age of 3.21 Ga is consistent with the age of 3.19 Ga determined by Pasckert et al. (2015). In the next step, the age estimation was expanded to the complete crater floor, resulting in a map of the surface age which is at least partially corrected for the influence of secondary craters.
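The template-matching detection step can be sketched on a tiny synthetic scene: two copies of a crater-like template are planted in a flat image, and normalized cross-correlation with a simple non-maximum suppression recovers their positions and count. The template, threshold and suppression radius are illustrative; the thesis uses artificially lit 3D crater models, per-dataset thresholds and a fusion step on real lunar imagery.

```python
import numpy as np

def make_template(size=7, radius=2.5):
    """Dark circular depression on a flat background."""
    yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
    tpl = np.full((size, size), 0.5)
    tpl[xx ** 2 + yy ** 2 <= radius ** 2] = 0.2
    return tpl

template = make_template()
scene = np.full((40, 40), 0.5)
for r, c in [(10, 10), (25, 28)]:          # ground-truth crater corners
    scene[r : r + 7, c : c + 7] = template

def ncc_map(img, tpl):
    """Normalized cross-correlation of tpl at all valid positions."""
    th, tw = tpl.shape
    tz = tpl - tpl.mean()
    out = np.zeros((img.shape[0] - th + 1, img.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i : i + th, j : j + tw]
            pz = patch - patch.mean()
            denom = np.sqrt((pz ** 2).sum() * (tz ** 2).sum())
            out[i, j] = (pz * tz).sum() / denom if denom > 1e-12 else 0.0
    return out

corr = ncc_map(scene, template)
detections = []
while corr.max() > 0.99:                   # greedy peak picking with NMS
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    detections.append((int(i), int(j)))
    corr[max(0, i - 5) : i + 6, max(0, j - 5) : j + 6] = 0.0

n_craters = len(detections)                # input to the CSFD age estimate
```

The detection count per unit area is exactly the quantity that feeds the CSFD-based age estimation described above.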
  • Item
    Objective assessment of the perceptual quality of HMI-components with a particular focus on the head-up display
    (2017) Köppl, Sonja Maria; Wöhler, Christian; Kays, Rüdiger
    This work focuses on the application of classifiers to assess the subjectively perceived quality of virtual head-up display (HUD) images in luxury-class vehicles. Classification attempts to learn patterns or regularities of a task from training data and then to assess unknown data of the same task. The HUD images required for this work are generated with an existing laboratory setup of Daimler AG. Clustering is used to identify representative images, which are manually rated by 12 test subjects and described numerically by 21 objective features. The experimental evaluation shows that classifiers are indeed suited to assessing the subjectively perceived quality of HUD images. During the training phase, the classifier learns the relationship between the objective features and the subjective labels. In the test phase, what has been learned is applied to unknown test data, and the subjective perception is estimated from the objective features. Depending on the classifier type, a higher hit rate is achieved for the predicted labels of the test images than with the threshold-based approach, which is the usual procedure for determining the customer acceptability of HUD images. The investigated classifiers require a large number of labelled training samples to achieve comprehensive and generalizing recognition behavior. However, the manual rating of many HUD images is expensive and time-consuming. For this reason, it is shown experimentally that semi-supervised learning or active learning can reduce the manual rating effort without degrading the classification accuracy. Semi-supervised learning uses its own predictions to train itself.
In contrast, active learning is able to select the most informative training images. The study even shows that an improvement in classification accuracy can be achieved when unlabelled data combined with a small set of labelled training data, or the most informative training samples, are used to train the classifiers effectively.
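The self-training idea, a classifier using its own predictions as training data, can be sketched with a nearest-class-mean model on synthetic features. The two-dimensional features below are invented stand-ins for the 21 objective HUD image features, and for simplicity all unlabelled samples are pseudo-labelled (a thesis-style approach would keep only confident ones).

```python
import numpy as np

rng = np.random.default_rng(6)

good = rng.normal([0.0, 0.0], 0.4, (60, 2))   # "customer-acceptable" images
bad = rng.normal([4.0, 4.0], 0.4, (60, 2))    # "not acceptable" images
X = np.vstack([good, bad])
y = np.array([0] * 60 + [1] * 60)

labelled = np.array([0, 1, 60, 61])           # only 2 rated samples per class
unlabelled = np.setdiff1d(np.arange(120), labelled)

# Initial class means from the few labelled samples.
means = np.array([X[labelled][y[labelled] == c].mean(axis=0) for c in (0, 1)])

for _ in range(3):                            # self-training iterations
    d = np.linalg.norm(X[unlabelled, None, :] - means[None, :, :], axis=2)
    pseudo = d.argmin(axis=1)                 # pseudo-labels for the pool
    all_idx = np.concatenate([labelled, unlabelled])
    all_lab = np.concatenate([y[labelled], pseudo])
    means = np.array([X[all_idx][all_lab == c].mean(axis=0) for c in (0, 1)])

pred = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2).argmin(axis=1)
accuracy = float(np.mean(pred == y))
```

Only four manual ratings went into the model; the rest of the supervision came from its own pseudo-labels, which is precisely how the rating effort is reduced.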
  • Item
    Self-adaptive structure semi-supervised methods for streamed emblematic gestures
    (2017) Al-Behadili, Husam Jumaah Naeemah; Wöhler, Christian; Götze, Jürgen
    Although many researchers are trying to improve the level of machine intelligence, there is still a long way to go to achieve intelligence similar to that of humans. Scientists and engineers continuously try to increase the smartness of modern technology, e.g. smartphones and robots. Humans communicate with each other using voice and gestures; gestures are thus essential for transferring information to a partner. To reach a higher level of intelligence, the machine should learn from and react to human gestures, which means learning from continuously streamed gestures. This task faces serious challenges, since processing streamed data suffers from several problems. Besides the stream data being unlabelled, the stream is long, and “concept drift” and “concept evolution” are its main problems. The data of data streams have several other problems worth mentioning here: they are dynamically changing, presented only once, arriving at high speed, and non-linearly distributed. In addition to the general problems of data streams, gestures pose additional difficulties; for example, different techniques are required to handle the variety of gesture types. The available methods solve some of these problems individually, whereas we present a technique that solves them altogether. Unlabelled data may carry additional information that describes the labelled data more precisely; hence, semi-supervised learning is used to handle the labelled and unlabelled data. However, the data size increases continuously, which makes training classifiers hard. Hence, we integrate incremental learning with semi-supervised learning, which enables the model to update itself on new data without needing the old data. Additionally, we integrate incremental class learning within the semi-supervised learning, since there is a high possibility of new concepts arriving in the streamed gestures.
Moreover, the system should be able to distinguish among different concepts and to identify random movements. Hence, we integrate novelty detection to distinguish between gestures that belong to known concepts and those that belong to unknown concepts. Extreme value theory is used for this purpose, which removes the need for additional labelled data to set the novelty threshold and has several other supportive features. Clustering algorithms are used to distinguish among different new concepts and to identify random movements. Furthermore, the system should update itself only on trustworthy assignments, since updating the classifier on a wrongly assigned gesture degrades the performance of the system. Hence, we propose confidence measures for the assigned labels. We propose six types of semi-supervised algorithms that rely on different techniques to handle different types of gestures. The proposed classifiers are based on the Parzen window classifier, the support vector machine classifier, a neural network (extreme learning machine), the polynomial classifier, the Mahalanobis classifier, and the nearest class mean classifier. All of these classifiers are provided with the mentioned features. Additionally, we present a wrapper method that uses one of the proposed classifiers, or an ensemble of them, to autonomously issue new labels to new concepts and to update the classifiers on newly incoming information depending on whether it belongs to known classes or new classes. It can recognise the different novel concepts and also identify random movements. To evaluate the system, we acquired gesture data with nine different gesture classes, each representing a different command to the machine, e.g. come, go, etc. The data were collected using the Microsoft Kinect sensor and contain 2878 gestures performed by ten volunteers. Different sets of features are computed and used in the evaluation of the system.
Additionally, we used real, synthetic, and public data to support the evaluation process. All features, incremental learning, incremental class learning, and novelty detection are evaluated individually. The outputs of the classifiers are compared with the original classifier or with benchmark classifiers. The results show the high performance of the proposed algorithms.
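The combination of incremental learning (updating without the old data) and novelty detection described above can be illustrated with a minimal sketch. The class name, the nearest-class-mean choice, and the fixed distance threshold are illustrative assumptions; the thesis fits an extreme-value model to set the novelty threshold rather than using a hand-picked constant.

```python
import numpy as np

class IncrementalNCM:
    """Minimal sketch (not the thesis implementation): a nearest-class-mean
    classifier with incremental mean updates, so no past samples are stored,
    plus a simple distance-based novelty test for unknown concepts."""

    def __init__(self, threshold):
        self.means = {}      # class label -> running mean vector
        self.counts = {}     # class label -> number of samples seen
        self.threshold = threshold  # illustrative stand-in for an EVT-based threshold

    def update(self, x, label):
        # Running-mean update: old samples are not needed (incremental learning).
        x = np.asarray(x, dtype=float)
        if label not in self.means:
            self.means[label] = x.copy()   # incremental class learning: new concept
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.means[label] += (x - self.means[label]) / self.counts[label]

    def predict(self, x):
        # Assign the nearest class mean, or None if the sample looks novel.
        x = np.asarray(x, dtype=float)
        best, dist = None, np.inf
        for c, m in self.means.items():
            d = np.linalg.norm(x - m)
            if d < dist:
                best, dist = c, d
        return None if dist > self.threshold else best
```

A wrapper in the spirit of the abstract would feed confidently predicted labels back into `update` and open a new class for clusters of samples that `predict` flags as novel.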
  • Item
    Integrated recovery of elevation and photometric reflectance properties from hyperspectral data
    (2015) Grumpe, Arne; Wöhler, Christian; Pauli, Josef
    The analysis of optical measurements, i.e. images, may be subdivided into methods concerning the spatial reflectance distribution, e.g. bundle adjustment and shape from shading, and methods concerning the spectral reflectance distribution, e.g. the determination of object properties based on colour. Current research considers these problems separately. Hyperspectral imagery, however, simultaneously provides knowledge of the local surface topography, i.e. shading, and the spectral reflectance. One problem that requires treatment in all methods is the dependence of the object's appearance on its shape. The goal of this thesis is to bridge the gap between spatial and spectral analysis of reflectance data, i.e. to extract and combine the spatial and the spectral information from hyperspectral images. This is achieved by an integrated framework for the recovery of local surface topography and the normalisation of spectral data. Photometric shape recovery methods derive the surface orientation, i.e. its gradient field, from the image and retrieve the shape by integrating the estimated gradient field, which is prone to the accumulation of systematic errors originating from the gradient estimation. To suppress these systematic errors, the photometric shape recovery is restricted by soft constraints derived from topographic models of lower lateral resolution. These soft constraints are applied to both the gradient field estimation and the gradient field integration. The Earth's Moon has been of scientific interest for a long time, and thus a wealth of measurements exists and is publicly available. The available measurements include high-resolution topography models derived from stereo image analysis and laser altimeter measurements, hyperspectral reflectance measurements, and elemental abundances measured by gamma-ray spectrometers.
This wealth of data is rarely met in industrial applications, and thus the lunar surface is an ideal object for method development. The developed methods include the refinement, i.e. increase of lateral resolution, of stereo-based topographic models and the estimation of the surface's temperature and the parameters of the reflectance model. The computed values allow for a normalisation of the spectral data and a compensation of the thermal component. The developed techniques are applied to derive a near-global Moon Mineralogy Mapper mosaic. Based on this mosaic, a regression method is applied to map parameters of the spectral absorption bands onto elemental abundances measured by the Lunar Prospector Gamma-Ray Spectrometer. To obtain co-registered images, which are required for an analysis of the spectral data, an illumination-independent image registration method is developed based on the recovered elevation models, which, by definition, are co-registered to the original image. Finally, the photometric surface refinement methods are applied to Lunar Orbiter Narrow Angle Camera images to derive elevation models of the highest possible resolution. The results show that the influence of the local topography is nearly eliminated from the normalised reflectance maps. A qualitative analysis of the obtained parameters of the reflectance model, e.g. the single-scattering albedo, is in good agreement with known bright and dark areas, e.g. bright volcanic domes or ash deposits. An analysis of the temperature estimation shows that accurate estimates of temperatures above 300 K are possible. Comparing the refined topographic models to single high-accuracy laser altimeter measurements shows that the depth error is comparable to stereo analysis while the lateral resolution is greatly increased. The presented image registration technique based on the topography models achieves sub-pixel accuracy.
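The idea of integrating a gradient field under soft constraints from a lower-resolution topographic model can be sketched in one dimension. The function name, the discretization, and the weight `lam` are illustrative assumptions; the thesis formulates the analogous problem in 2-D for full gradient fields.

```python
import numpy as np

def integrate_with_prior(p, z_prior, lam=0.01):
    """1-D sketch (not the thesis formulation): recover heights z from
    measured slopes p while softly constraining z to a coarse prior z_prior.
    Minimizes  sum_i (z[i+1] - z[i] - p[i])^2  +  lam * sum_i (z[i] - z_prior[i])^2,
    so the prior suppresses the drift that plain summation of noisy slopes
    would accumulate."""
    z_prior = np.asarray(z_prior, dtype=float)
    n = len(z_prior)
    # Forward-difference matrix D with (D z)[i] = z[i+1] - z[i]
    D = np.diff(np.eye(n), axis=0)
    # Normal equations of the regularized least-squares problem
    A = D.T @ D + lam * np.eye(n)
    b = D.T @ np.asarray(p, dtype=float) + lam * z_prior
    return np.linalg.solve(A, b)
```

For any `lam > 0` the system matrix is positive definite, so the minimizer is unique; when the slopes and the prior are mutually consistent, the recovered profile matches the prior exactly, and otherwise it interpolates between the high-resolution slope information and the coarse absolute heights.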
  • Item
    Ressourcenoptimierte Objektdetektion und teilüberwachtes Lernen zur Echtzeitanwendung mit konfidenzbasierten, kaskadierten Klassifikationssystemen
    (2014) Staudenmaier, Armin; Wöhler, Christian; Hoffmann, Frank
    This thesis deals with machine learning methods for real-time image-based object detection, using a challenging dataset of diverse American traffic signs that is employed in motor vehicles for the recognition of permitted speed limits. The focus is on fully automated training mechanisms for cascaded classification systems; a new method for fusing confidence-based stage classifiers, based on the calculation of the individual probabilities, is presented. Based on confidence values, a semi-supervised learning method for real-time object detection is also presented: in large datasets, humans "overlook" objects during the generation of the training set, which then appear among the background examples, and the background may also contain structurally similar examples with which the classification performance can be increased by letting the classifier itself "redefine" confidently classified negative examples. Furthermore, a coarse-to-fine training and detection method depending on the frequency properties of the features is presented, which uses integral-image-based structure tensor features. Ensemble methods are used for the stage classifiers; in addition to AdaBoost, a new confidence-based stacking method is presented, which achieves the best classification results on the test dataset.
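The cascade principle described above, where each stage emits a confidence and windows are rejected early, can be sketched minimally. The function name, the fixed rejection threshold, and the product fusion (an independence assumption) are illustrative stand-ins; the thesis develops its own fusion rule from the individual stage probabilities.

```python
def cascade_predict(stage_probs, reject_below=0.2):
    """Illustrative sketch (not the thesis method): evaluate a window
    through a cascade of confidence-based stage classifiers.
    Each element of stage_probs is one stage's estimated P(object).
    A window is rejected as soon as a stage's confidence drops below
    the threshold; surviving windows get a fused score from the
    individual stage probabilities."""
    score = 1.0
    for k, p in enumerate(stage_probs):
        if p < reject_below:
            return ("reject", k, None)   # early rejection at stage k saves work
        score *= p                        # fuse stage confidences (product rule)
    return ("accept", len(stage_probs), score)
```

Early rejection is what makes such cascades real-time capable: the cheap early stages discard most background windows, so the expensive later stages only see a small fraction of all candidates.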
  • Item
    3D shape measurement and reflectance analysis for highly specular and interreflection affected surfaces
    (2014-08-05) Herbort, Steffen; Wöhler, Christian; Kummert, Franz