LS 07 Graphische Systeme

Recent Submissions

Now showing 1 - 20 of 35
  • Item
    Discrete Laplacians for general polygonal and polyhedral meshes
    (2024) Pontzen, Astrid; Botsch, Mario; Hildebrandt, Klaus
    This thesis presents several approaches that generalize the Laplace-Beltrami operator and its closely related gradient and divergence operators to arbitrary polygonal and polyhedral meshes. We start by introducing the linear virtual refinement method, which provides a simple yet effective discretization of the Laplacian with the help of the Galerkin method from a Finite Element perspective. Its flexibility allows us to explore alternative numerical schemes in this setting and to derive a second Laplacian, called the Diamond Laplacian, with a similar approach, this time combined with the Discrete Duality Finite Volume method. It offers enhanced accuracy but comes at the cost of denser matrices and slightly longer solving times. In the second part of the thesis, we extend the linear virtual refinement to higher-order discretizations. This method is called the quadratic virtual refinement method. It introduces variational quadratic shape functions for arbitrary polygons and polyhedra. We also present a custom multigrid approach to address the computational challenges of higher-order discretizations, making the faster convergence rates and higher accuracy of these polygon shape functions more affordable for the user. The final part of this thesis focuses on the open degrees of freedom of the linear virtual refinement method. By uncovering connections between our operator and the underlying tessellations, we can enhance the accuracy and stability of our initial method and improve its overall performance. These connections equally allow us to define what a "good" polygon would be in the context of our Laplacian. We present a smoothing approach that alters the shape of the polygons (while retaining the original surface as much as possible) to allow for even better performance.
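    A minimal Python sketch of the linear virtual refinement idea for a single polygonal face, assuming the simplest possible choice of a uniformly weighted centroid as the virtual vertex (the thesis derives better affine weights): triangulate the face around the virtual vertex, assemble the standard cotangent stiffness matrix on that fan, and condense it back to the polygon vertices with a prolongation matrix P.
        import numpy as np

        def cot(u, v):
            # cotangent of the angle between vectors u and v
            return np.dot(u, v) / np.linalg.norm(np.cross(u, v))

        def polygon_stiffness(pts):
            # pts: (k, 3) vertex positions of one polygonal face
            k = len(pts)
            V = np.vstack([pts, pts.mean(axis=0)])        # append centroid as virtual vertex
            S = np.zeros((k + 1, k + 1))                  # cotan stiffness on the triangle fan
            for i in range(k):
                tri = (i, (i + 1) % k, k)                 # fan triangle: polygon edge + centroid
                for a in range(3):
                    o, p, q = tri[a], tri[(a + 1) % 3], tri[(a + 2) % 3]
                    w = 0.5 * cot(V[p] - V[o], V[q] - V[o])   # weight of edge (p, q), angle at o
                    S[p, q] -= w; S[q, p] -= w
                    S[p, p] += w; S[q, q] += w
            P = np.vstack([np.eye(k), np.full((1, k), 1.0 / k)])  # virtual vertex = plain average
            return P.T @ S @ P                            # maps polygon-vertex values to polygon-vertex values

        # e.g. a regular pentagon in the z = 0 plane
        t = np.linspace(0, 2 * np.pi, 5, endpoint=False)
        print(polygon_stiffness(np.c_[np.cos(t), np.sin(t), np.zeros(5)]))
    Per-face matrices like this would then be assembled into the global operator over all faces of the mesh.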
  • Item
    Efficient, collision-free multi-robot navigation in an environment abstraction framework
    (2023) Böckenkamp, Adrian; Müller, Heinrich; Ten Hompel, Michael
    Industrial automation deploys a continuously increasing number of mobile robots in favor of classical linear conveyor systems for material flow handling in manufacturing and intralogistics. This increases flexibility by handling a larger variety of goods, improves scalability by adapting the fleet size to varying system loads, and enhances fault tolerance by avoiding single points of failure. However, it also raises the need for efficient, collision-free multi-robot navigation. This core problem is first precisely modeled in a form that differs from existing approaches specifically in terms of application relevance and structured algorithmic treatability. Collision-free trajectories for the mobile robots between given start and goal locations are sought so that the number of goals reached per time is as high as possible. Based on this, a decoupled solution, called the Collaborative Local Planning Framework (CLPF), is designed and implemented, which, in contrast to existing solutions, aims at avoiding deadlocks with the greatest possible concurrency. Moreover, this solution includes the handling of dynamic inputs consisting of both moving and non-moving robots. Due to the complexity of multi-robot systems, simulation is commonly used for testing, performance analysis, and optimization. However, this also creates a gap between real and simulated robots. This gap can be reduced by using several different simulators, albeit at the cost of further increased complexity. For this purpose, the Robot Experimentation Framework (REF) is introduced to write robotic experiments with a unified interface that can be run on multiple simulators and also on real hardware. It facilitates the creation of experiments for performance assessment, (parameter) optimization and runtime analysis. The framework has proven its effectiveness throughout this thesis. Lastly, experimental proof of the viability of the solution is provided based on a case study of a complete (simulated) assembly system of decentralized autonomous agents for the production of highly individualized automobiles. This integrates all developed concepts into a holistic application of industrial automation. Detailed evaluations of more than 800 000 solved scenarios with more than 5 700 000 processed goals have experimentally proven the robustness and reliability of the developed concepts. Robots have never crashed into each other in any of the conducted experiments, empirically proving the claimed safety guarantees. A fault-tolerance analysis of the decentralized assembly system has experimentally proven its resilience to failures at workstations and, thus, specifically revealed an advantage over linear conveyor systems.
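    As an illustration of the kind of pairwise check such a planner must pass (a hypothetical helper, not the CLPF implementation): two time-synchronized, discretized trajectories are collision-free only if they never violate a safety distance at the same time step.
        import numpy as np

        def trajectories_conflict(traj_a, traj_b, safety_radius):
            # traj_a, traj_b: arrays of shape (T, 2), positions at the same T time steps
            return bool(np.any(np.linalg.norm(traj_a - traj_b, axis=1) < safety_radius))

        t = np.linspace(0.0, 1.0, 50)[:, None]
        robot_a = np.hstack([t * 10.0, np.zeros_like(t)])          # drives along the x-axis
        robot_b = np.hstack([np.full_like(t, 5.0), 2.0 - t * 4])   # crosses its path
        print(trajectories_conflict(robot_a, robot_b, safety_radius=0.5))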
  • Item
    Constructing L∞ Voronoi diagrams in 2D and 3D
    (2022-10-06) Bukenberger, D. R.; Buchin, K.; Botsch, M.
    Voronoi diagrams and their computation are well known in the Euclidean L2 space. They are easy to sample and render in generalized Lp spaces but nontrivial to construct geometrically. Especially the limit of this norm for p → ∞ lends itself to many quad- and hex-meshing related applications, as the level set in this space is a hypercube. Many application scenarios circumvent the actual computation of L∞ diagrams altogether, as known concepts for these diagrams are limited to 2D, uniformly weighted and axis-aligned sites. Our novel algorithm allows for the construction of generalized L∞ Voronoi diagrams. Although parts of the developed concept theoretically extend to higher dimensions, it is herein presented and evaluated for the 2D and 3D case. It further supports individually oriented sites and allows for generating weighted diagrams with anisotropic weight vectors for individual sites. The algorithm is designed around individual sites and initializes their cells with a simple meshed representation of a site's level set. Hyperplanes between adjacent cells cut the initialization geometry into convex polyhedra. Non-cell geometry is filtered out based on the L∞ Voronoi criterion, leaving only the non-convex cell geometry. Finally, we discuss the algorithm's complexity and numerical precision and analyze the applicability of our generalized L∞ diagrams for the construction of Centroidal Voronoi Tessellations (CVT) using Lloyd's algorithm.
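    A minimal Python sketch of the sampling route mentioned above (easy to sample, unlike the geometric construction the paper contributes): label grid samples by the nearest site under the Chebyshev (L∞) distance, optionally with anisotropic per-site weights.
        import numpy as np

        def linf_voronoi_labels(sites, weights, xs, ys):
            # sites: (N, 2) site positions, weights: (N, 2) anisotropic per-site weights
            X, Y = np.meshgrid(xs, ys)
            P = np.stack([X, Y], axis=-1)                              # (H, W, 2) sample points
            d = np.abs(P[None] - sites[:, None, None]) / weights[:, None, None]
            return np.argmin(d.max(axis=-1), axis=0)                   # cell index per sample

        sites = np.array([[0.3, 0.3], [0.7, 0.6], [0.2, 0.8]])
        weights = np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 0.5]])
        labels = linf_voronoi_labels(sites, weights, np.linspace(0, 1, 200), np.linspace(0, 1, 200))
    The L∞ unit ball is an axis-aligned square here; the individually oriented sites supported by the paper would amount to rotating each difference vector into the site's frame before taking the maximum.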
  • Item
    Bahnplanung mittels impliziter Methoden für spanende und beschichtende Fertigungsverfahren
    (2022) Gaspar, Marcel; Müller, Heinrich; Turek, Stefan
    This dissertation develops a tool path planning methodology based on level sets. To this end, different approaches for the implicit definition, representation, and manipulation of surfaces and curves relevant to tool path planning are presented. The developed methods are used to plan tool paths in two different application scenarios: the machining of dental workpieces such as crowns, bridges, or inlays on a 5-axis desktop milling machine, and the thermal coating of surfaces with a spray gun guided by a robot arm. The use of implicit methods leads to significant advantages over explicit methods, but also to new challenges, which are addressed in the dissertation. Particularly noteworthy are the approach for implicitly representing ridge lines as the projection of the medial axis boundary onto an explicit surface, and the associated identification of regions on the target surface that are critical for path planning. Likewise, the curves underlying the tool paths, both on surfaces and in space, are defined implicitly, and approaches are presented for converting explicitly or implicitly defined contact point curves on surfaces into explicitly or implicitly defined tool position curves in space. This conversion is studied in more depth by treating the determination of a position curve as the solution of an optimization problem that balances tool placements which are favorable from a process, geometric, and dynamic perspective. Finally, a novel methodology for simulating material removal during machining that relies exclusively on implicit representations is presented.
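    A tiny Python illustration (not from the thesis) of the implicit flavor of the contact-point-to-tool-position conversion, assuming a ball-end tool and a signed distance function: the tool center is the contact point offset by the tool radius along the surface normal, i.e. along the normalized gradient of the implicit function.
        import numpy as np

        def tool_positions(grad_phi, contact_points, tool_radius):
            # contact_points: (N, 3) points on the implicit surface phi = 0
            n = grad_phi(contact_points)
            n /= np.linalg.norm(n, axis=1, keepdims=True)      # unit surface normals
            return contact_points + tool_radius * n            # ball-end tool centers

        # example: unit sphere phi(p) = |p| - 1 with gradient p / |p|
        grad_sphere = lambda p: p / np.linalg.norm(p, axis=1, keepdims=True)
        contacts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        print(tool_positions(grad_sphere, contacts, tool_radius=0.1))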
  • Item
    Agent-based simulation of pedestrian dynamics for exposure time estimation in epidemic risk assessment
    (2021-04-01) Harweg, Thomas; Bachmann, Daniel; Weichert, Frank
    Purpose: With the coronavirus disease 2019 (COVID-19) pandemic spreading across the world, protective measures for containing the virus are essential, especially as long as no vaccine or effective treatment is available. One important measure is the so-called physical distancing or social distancing. Methods: In this paper, we propose an agent-based numerical simulation of pedestrian dynamics in order to assess the behavior of pedestrians in public places in the context of contact transmission of infectious diseases like COVID-19, and to gather insights about exposure times and the overall effectiveness of distancing measures. Results: To abide by the minimum distance of 1.5 m stipulated by the German government at an infection rate of 2%, our simulation results suggest that a density of one person per 16 m² or below is sufficient. Conclusions: The results of this study give insight into how physical distancing as a protective measure can be carried out more efficiently to help reduce the spread of COVID-19.
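    A minimal Python sketch of the exposure-time bookkeeping such a simulation needs (illustrative only, not the paper's pedestrian model): at every time step, add dt to the exposure counter of every agent pair closer than the 1.5 m threshold. At one person per 16 m², agents sit roughly 4 m apart on average, so such violations stay rare.
        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def accumulate_exposure(positions, exposure, dt, min_dist=1.5):
            # positions: (N, 2) agent positions in meters, exposure: (N, N) seconds so far
            d = squareform(pdist(positions))
            too_close = (d < min_dist) & ~np.eye(len(positions), dtype=bool)
            return exposure + too_close * dt

        pos = np.random.rand(32, 2) * np.sqrt(32 * 16.0)   # 32 agents uniformly scattered at 1 person per 16 m^2
        exposure = np.zeros((32, 32))
        exposure = accumulate_exposure(pos, exposure, dt=0.1)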
  • Item
    Layered character models for fast physics-based simulation
    (2022) Komaritzan, Martin; Botsch, Mario; Hildebrandt, Klaus
    This thesis presents two different layered character models that are ready to be used in physics-based simulations; in particular, they enable convincing character animations in real-time. We start by introducing a two-layered model consisting of rigid bones and an elastic soft tissue layer that is efficiently constructed from a surface mesh of the character and its underlying skeleton. Building on this model, we introduce Fast Projective Skinning, a novel approach for physics-based character skinning. While maintaining real-time performance, it overcomes the well-known artifacts of commonly used geometric skinning approaches. It further enables dynamic effects and resolves local and global self-collisions. In particular, our method neither requires skinning weights, which are often expensive to compute or tedious to hand-tune, nor a complex volumetric tessellation, which fails for many real-world input meshes due to self-intersections. By developing a custom-tailored GPU implementation and a high-quality upsampling method, our approach is the first skinning method capable of detecting and handling arbitrary global collisions in real-time. In the second part of the thesis, we extend the idea of a simplified two-layered volumetric model by developing an anatomically plausible three-layered representation of human virtual characters. Starting with an anatomy model of the male and female body, we show how to generate a layered body template for both sexes. It is composed of three surfaces for bones, muscles and skin, enclosing the volumetric skeleton, muscles and fat tissues. Utilizing the simple structure of these templates, we show how to fit them to the surface scan of a person in just a few seconds. Our approach includes a data-driven method for estimating the amount of muscle mass and fat mass from a surface scan, which provides more accurate fits to the variety of human body shapes compared to previous approaches. Additionally, we demonstrate how to efficiently embed fine-scale anatomical details, such as high resolution skeleton and muscle models, into the layered fit of a person. Our second model can be used for physical simulation, statistical analysis and anatomical visualization in computer animation or in medical applications, which we demonstrate on several examples.
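    For context, the geometric skinning baseline whose artifacts the methods above avoid is standard linear blend skinning, which deforms every surface vertex by a weighted blend of bone transformations (a textbook formula, not part of the thesis):
        v_i' = \sum_j w_{ij} \, T_j \, v_i , \qquad \sum_j w_{ij} = 1
    where T_j is the transformation of bone j and w_{ij} are exactly the per-vertex skinning weights that Fast Projective Skinning does not require.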
  • Item
    On the power of message passing for learning on graph-structured data
    (2022) Fey, Matthias; Weichert, Frank; Kriege, Nils Morten
    This thesis proposes novel approaches for machine learning on irregularly structured input data such as graphs, point clouds and manifolds. Specifically, we break with the regularity restriction of conventional deep learning techniques and propose solutions for designing, implementing and scaling up deep end-to-end representation learning on graph-structured data, known as Graph Neural Networks (GNNs). GNNs capture local graph structure and feature information by following a neural message passing scheme, in which node representations are recursively updated in a trainable and purely local fashion. In this thesis, we demonstrate the generality of message passing through a unified framework suitable for a wide range of operators and learning tasks. Specifically, we analyze the limitations and inherent weaknesses of GNNs and propose efficient solutions to overcome them, both theoretically and in practice, e.g., by conditioning messages via continuous B-spline kernels, by utilizing hierarchical message passing, or by leveraging positional encodings. In addition, we ensure that our proposed methods scale naturally to large input domains. In particular, we propose novel methods to fully eliminate the exponentially increasing dependency of nodes over layers inherent to message passing GNNs. Lastly, we introduce PyTorch Geometric, a deep learning library for implementing and working with graph-based neural network building blocks, built upon PyTorch.
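    A minimal message passing layer in plain PyTorch (a generic sketch of the scheme, not one of the thesis' operators): every edge carries a transformed copy of its source node's features to its target node, the incoming messages are sum-aggregated, and each node combines the aggregate with its own state. PyTorch Geometric packages this pattern behind its MessagePassing base class.
        import torch

        def message_passing_step(x, edge_index, w_self, w_nbr):
            # x: (N, F_in) node features, edge_index: (2, E) long tensor of edges src -> dst
            src, dst = edge_index
            messages = x[src] @ w_nbr                                  # one message per edge
            aggregated = torch.zeros(x.size(0), w_nbr.size(1)).index_add_(0, dst, messages)
            return torch.relu(x @ w_self + aggregated)                 # update node states

        x = torch.randn(4, 8)
        edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])        # a directed 4-cycle
        out = message_passing_step(x, edge_index, torch.randn(8, 16), torch.randn(8, 16))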
  • Item
    The Diamond Laplace for polygonal and polyhedral meshes
    (2021-08-23) Bunge, Astrid; Botsch, Mario; Alexa, Marc
    We introduce a construction for discrete gradient operators that can be directly applied to arbitrary polygonal surface meshes as well as polyhedral volume meshes. The main idea is to associate the gradient of functions defined at vertices of the mesh with diamonds: the region spanned by a dual edge together with its corresponding primal element, an edge for surface meshes and a face for volumetric meshes. We call the operator resulting from taking the divergence of the gradient the Diamond Laplacian. Additional vertices used for the construction are represented as affine combinations of the original vertices, so that the Laplacian operator maps from values at vertices to values at vertices, as is common in geometry processing applications. The construction is local, exactly the same for all types of meshes, and results in a symmetric negative definite operator with linear precision. We show that the accuracy of the Diamond Laplacian is similar to or better than that of other discretizations. The greater versatility and generally good behavior come at the expense of an increase in the number of non-zero coefficients that depends on the degree of the mesh elements.
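    Schematically, div-grad constructions of this kind factor the operator as (a generic sketch of the structure, not the paper's exact matrices):
        L = -P^{\top} G^{\top} M \, G \, P
    where G stacks the per-diamond gradients, M is a diagonal matrix of diamond areas or volumes (the divergence being the negated transpose of the gradient with respect to these measures), and P expresses the auxiliary vertices as affine combinations of the mesh vertices, so that L maps vertex values to vertex values and is symmetric by construction.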
  • Item
    Differentiable algorithms with data-driven parameterization in 3D vision
    (2022) Lenssen, Jan Eric; Müller, Heinrich; Kersting, Kristian
    This thesis is concerned with designing and analyzing efficient differentiable data flow for representations in the field of 3D vision and applying it to different 3D vision tasks. To this end, the topic is approached from the perspective of differentiable algorithms, a more general variant of Deep Learning, utilizing the recently emerged tools in the field of differentiable programming. Contributions are made in the subfields of Graph Neural Networks (GNNs), differentiable matrix decompositions and implicit neural functions, which serve as important building blocks for differentiable algorithms in 3D vision. The contributions include SplineCNN, a neural network consisting of operators for continuous convolution on irregularly structured data, Local Spatial Graph Transformers, a GNN to infer local surface orientations on point clouds, and a parallel GPU solver for the eigendecomposition of a large number of symmetric matrices. For all methods, efficient forward and backward GPU implementations are provided. Two differentiable algorithms are then introduced, composed of building blocks from these concept areas. The first algorithm, Differentiable Iterative Surface Normal Estimation, is an iterative algorithm for surface normal estimation on unstructured point clouds. The second algorithm, Group Equivariant Capsule Networks, is a version of capsule networks grounded in group theory for unsupervised pose estimation and, in general, for inferring disentangled representations from 2D and 3D data. The thesis concludes that a favorable trade-off in the metrics of efficiency, quality and interpretability can be found by combining prior geometric knowledge about algorithms and data types with the representational power of Deep Learning.
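    The library-level counterpart of the batched symmetric eigendecomposition building block (a sketch using stock PyTorch rather than the thesis' custom GPU solver): torch.linalg.eigh handles a whole batch of symmetric matrices at once and is differentiable, so gradients can flow through the decomposition.
        import torch

        A = torch.randn(1024, 3, 3, requires_grad=True)       # e.g. per-point covariance matrices
        A_sym = 0.5 * (A + A.transpose(-1, -2))               # symmetrize

        eigvals, eigvecs = torch.linalg.eigh(A_sym)           # batched, differentiable
        loss = eigvals[:, 0].sum()                            # toy loss on the smallest eigenvalues
        loss.backward()                                       # gradients arrive back at A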
  • Item
    Web-based scientific exploration and analysis of 3D scanned cuneiform datasets for collaborative research
    (2017-12-12) Fisseler, Denis; Müller, Gerfrid G. W.; Weichert, Frank
    The three-dimensional cuneiform script is one of the oldest known writing systems and a central object of research in Ancient Near Eastern Studies and Hittitology. An important step towards the understanding of the cuneiform script is the provision of opportunities and tools for joint analysis. This paper presents an approach that contributes to this challenge: a collaboration-compatible, web-based scientific exploration and analysis of 3D scanned cuneiform fragments. The WebGL-based concept incorporates methods for compressed web-based content delivery of large 3D datasets and high quality visualization. To maximize accessibility and to promote acceptance of 3D techniques in the field of Hittitology, the introduced concept is integrated into the Hethitologie-Portal Mainz, an established, leading online research resource in the field of Hittitology, which until now exclusively included 2D content. The paper shows that increasing the availability of 3D scanned archaeological data through a web-based interface can provide significant scientific value while at the same time finding a trade-off between copyright-induced restrictions and scientific usability.
  • Item
    The PAMONO-sensor enables quantification of individual microvesicles and estimation of nanoparticle size distribution
    (2017-09-27) Shpacovitch, Viktoria; Sidorenko, Irina; Lenssen, Jan Eric; Temchura, Vladimir; Weichert, Frank; Müller, Heinrich; Überla, Klaus; Zybin, Alexander; Schramm, Alexander; Hergenröder, Roland
    In our recent work, the plasmon assisted microscopy of nano-objects (PAMONO) was successfully employed for the detection and quantification of individual viruses and virus-like particles in aquatic samples (Shpacovitch et al., 2015). Further, we adapted the PAMONO-sensor for the specific detection of individual microvesicles (MVs), which have gained growing interest as potential biomarkers of various physiological and pathological processes. Using MVs derived from cells of a human neuroblastoma cell line, we demonstrated the ability of the PAMONO-sensor to specifically detect individual MVs. Moreover, we showed that the PAMONO-sensor can swiftly compare relative MV concentrations in two or more samples without prior sensor calibration. The detection software developed by the authors utilizes novel machine learning techniques for the processing of the sensor image data. Using this software, we demonstrated that nanoparticle size information is evident in the sensor signals and can be extracted from them. These experiments were performed with polystyrene nanoparticles of different sizes. We also suggested a theoretical model explaining the nature of the observed signals. Taken together, our findings can serve as a basis for the development of diagnostic tools built on the principles of the PAMONO-sensor.
  • Item
    Multi-objective optimisation based planning of power-line grid expansions
    (2018-06-29) Bachmann, Daniel; Bökler, Fritz; Kopec, Jakob; Popp, Kira; Schwarze, Björn; Weichert, Frank
    The German nuclear power phase-out in 2022 leads to a significant reconstruction of the energy transmission system. Thus, efficient identification of practical transmission routes with minimum impact on ecological and economical interests is of growing importance. Due to the sensitivity of Germany’s public to grid expansion (especially in the case of overhead lines), the participation and planning process needs to provide a high degree of openness and accountability. Therefore, a new methodological approach for the computer-assisted determination of optimal power-line routes considering planning, ecological and economic decision criteria is presented. The approach is implemented in a tool-chain for the determination of transmission line routes (and sets of transmission line route alternatives) based on multi-criteria optimisation. Additionally, a decision support system, based on common Geographic Information Systems (GIS), consisting of interactive visualisation and exploration of the solution space is proposed.
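    To make the optimization setting concrete, here is a strongly simplified Python sketch (not the paper's method): the planning, ecological, and economic criteria are collapsed into a single per-cell cost raster and a cheapest route is found with Dijkstra's algorithm. The paper instead keeps the criteria separate and computes sets of Pareto-optimal route alternatives for the GIS-based decision support.
        import heapq
        import numpy as np

        def least_cost_route(cost, start, goal):
            # cost: (H, W) aggregated per-cell routing cost, start/goal: (row, col)
            H, W = cost.shape
            dist = np.full((H, W), np.inf); dist[start] = 0.0
            prev, heap = {}, [(0.0, start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < H and 0 <= nc < W and d + cost[nr, nc] < dist[nr, nc]:
                        dist[nr, nc] = d + cost[nr, nc]
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
            route, node = [goal], goal                     # backtrack from goal to start
            while node != start:
                node = prev[node]; route.append(node)
            return route[::-1]

        eco, settlement, expense = (np.random.rand(60, 80) for _ in range(3))
        route = least_cost_route(0.5 * eco + 0.3 * settlement + 0.2 * expense, (0, 0), (59, 79))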
  • Item
    Review of three-dimensional human-computer interaction with focus on the leap motion controller
    (2018-07-07) Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard
    Modern hardware and software development has led to an evolution of user interfaces from command-line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. Thus, the purpose of this paper is to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their fields of application, and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.
  • Item
    Extraction, localization, and fusion of collective vehicle data
    (2019) Skibinski, Sebastian; Müller, Heinrich; Schwiegelshohn, Uwe
    Maps representing the detailed features of the road network are becoming more and more important for self-driving vehicles and next generation driver assistance systems. The mapping of the road network by specially equipped vehicles of the well-known map providers typically leads to quarterly map updates, which can cause problems for self-driving vehicles when the road information is outdated. Furthermore, the provided maps may lack details that are nevertheless highly relevant, such as precise landmark geometries or data known to exhibit a fast temporal decay rate, for example friction data. As an alternative, extensive amounts of information about the road network can be acquired by common vehicles, which nowadays are equipped with many types of sensors. In the following, this type of gathered data is referred to as CVD (Collective Vehicle Data). The process of map creation requires, at first, the extraction of relevant sensor data at the vehicle side and its accurate localization. Unfortunately, sensor data is typically affected by measurement uncertainties and errors, both of which can be minimized by an appropriate sensor data fusion. This work aims for a holistic view of a three-stage pipeline, consisting of the extraction, localization, and fusion of CVD, intended for the derivation of large-scale, high-precision, real-time maps from collective sensor measurements acquired by a common vehicle fleet. The vehicle fleet is assumed to be solely equipped with commercially viable sensors. Concerning the processing at the back-end side, general approaches that are applicable in a straightforward manner to new types of sensor data are strictly favored. For this purpose, a novel distinction of CVD into areal, point-shaped landmark, and complex landmark data is introduced. This way, the similarities between different types of environmental attributes are exploited in a highly beneficial manner, and the proposed algorithms can be adapted to new types of data that fall into these categories by appropriately adjusting their parameterizations. To achieve the above-mentioned goals, both novel approaches, where the research lacks established methods, and relevant extensions and adaptations of existing ones are proposed to fulfill the very specific automotive requirements. All in all, the thesis condenses broad and manifold research concerning the derivation of large-scale and high-precision map data grounded on preprocessed sensor measurements that have been acquired by common vehicles, the so-called CVD. The focus is put on the utilization of commercially viable sensors. Additionally, besides its broad perspective, this thesis also emphasizes highly relevant details, such as the efficient, adaptive temporal weighting of sensor data at the back-end side and the template-based hierarchical data storage. A complete pipeline, consisting of the extraction, localization, and fusion of CVD, is presented and evaluated, as each component is known to have a direct impact on the quality of the deduced map data. Approaches to the fusion of areal and point-shaped/complex landmark data are either developed from scratch or significantly enhanced over the state of the art, always bearing in mind the highly specific needs of the automotive context.
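    A small Python illustration of the kind of temporal weighting mentioned above (an assumed exponential-decay scheme, standing in for the thesis' adaptive weighting): repeated measurements of one map attribute are fused with weights that halve after a chosen half-life, so fresh observations dominate for data with a fast temporal decay rate, such as friction.
        import numpy as np

        def fuse_with_decay(values, timestamps, t_now, half_life):
            # values, timestamps: measurements of one map attribute and their acquisition times
            age = t_now - np.asarray(timestamps, dtype=float)
            w = 0.5 ** (age / half_life)                   # weight halves every half_life
            return float(np.sum(w * np.asarray(values)) / np.sum(w))

        # friction estimates from three vehicle passes, newest last (times in hours)
        print(fuse_with_decay([0.8, 0.75, 0.4], [0.0, 5.0, 11.5], t_now=12.0, half_life=6.0))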
  • Item
    Contributions to computer-aided analysis of cuneiform tablet fragments
    (2019) Fisseler, Denis Bernd; Müller, Heinrich; Botsch, Mario
    This thesis presents methods for computer-aided three-dimensional analysis of digitized cuneiform tablets, an ancient type of writing documents. Since cuneiform script is predominantly conserved in the form of fractured clay tablet fragments, identifying matching fragments is a central task of manuscript reconstruction. This goal can benefit from the increasing 3D digitization of cuneiform fragments, which offers access to highly accurate cuneiform representations. The main contribution of the thesis is a novel model-based method for the extraction of individual cuneiform wedges and associated wedge geometries from 3D scans, which can serve as a basis for a statistical analysis of script features. This new automated approach enables access to large amounts of accurate quantitative cuneiform script features, which were not accessible with previously available 2D methods and can be employed for script similarity-based identification of candidates for fragment joining. A central aspect and challenging task is the robustness of the presented extraction method against scanning issues and mesh errors. This is achieved by employing a watershed-based wedge area extraction operating on a surface distance field with a subsequent constructive multi-stage model fitting. The extracted wedge models are refined by a wedge type classification followed by an effective wedge validation to handle false detections on fracture faces and damaged surfaces. An evaluation with respect to extraction rates, robustness, and performance shows that the developed methods are suitable beyond a pure application to cuneiform fragment joining. To address some compromises made during the wedge extraction regarding the representation of complex features, a fast supplementary approach for extracting skeletal surface features is presented. These features provide an alternative readable cuneiform representation and are created using a thinning approach on an approximated distance field. The quality of the resulting skeletons is optimized by employing complex junction resolution, branch pruning and branch simplification methods, where both pruning and simplification can be used to adjust the resulting representation to different use cases. Aside from manual feature analysis, possible application scenarios also include providing a representation that can be handled by GraphCNNs for retrieval-related tasks on cuneiform structures. The cuneiform segmentation methods are complemented by a set of visualization concepts for a cuneiform segmentation framework. This includes a hierarchical concept for data handling and persistent storage of the generated segmentation data. In addition, methods for fast rendering of large meshes, visualization methods to achieve good depth perception, detail enhancement, and semi-realistic surface shading are integrated. In order to address not only application scenarios like fragment joining and collation-related tasks, the framework provides a sophisticated, highly interactive, and flexible segmentation data visualization that additionally offers fast geometry selection methods. Good accessibility of the generated data is guaranteed through an XML-based file format for storing segmentation data and through flexible data export methods. Although the framework is primarily intended for real-time segmentation, most segmentation methods can also be scheduled to process large numbers of fragments without user interaction.
All presented methods are evaluated with respect to performance aspects and their suitability for a set of philological use cases. The developed methods can be used flexibly in the scope of many aspects of the investigated application cases. This applies not only to the automated feature extraction but also to manual analysis aspects, which were discovered only through the new availability of the methods. The usability of the framework is underlined by the fact that it is actively being used by philologists from the Hethitologie-Portal Mainz, an established online resource in Hittitology.
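    A rough 2D raster analogue of the two extraction ideas above, using standard library calls (the thesis operates on a distance field defined over the 3D triangle mesh surface, not on images): a watershed on the negated distance field separates wedge-like depressions, and thinning yields a skeletal representation.
        import numpy as np
        from scipy import ndimage
        from skimage.morphology import skeletonize
        from skimage.segmentation import watershed

        mask = np.zeros((64, 64), dtype=bool)              # stand-in for "deep" surface regions
        mask[20:44, 10:30] = True
        mask[20:44, 34:54] = True

        dist = ndimage.distance_transform_edt(mask)        # distance to the surrounding surface
        regions = watershed(-dist, mask=mask)              # separate individual depressions
        skeleton = skeletonize(mask)                       # thinning to one-pixel-wide strokes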
  • Item
    Optimierung thermischer Verhältnisse bei der Bahnplanung für das thermische Spritzen mit Industrierobotern
    (2017) Hegels, Daniel; Müller, Heinrich; Henrich, Dominik
    This thesis addresses the generation and optimization of novel paths for industrial robots in thermal spraying on complex free-form surfaces, with particular attention to the thermal conditions in the workpiece. Thermal spraying is a production process in which a workpiece surface is coated with molten material so that the surface exhibits the desired surface properties. A unique feature of the presented system is its modular design, which in particular separates initial path planning from path optimization, a separation that is unusual in this field. The overall system is based on several simulation components, such as the coating simulation, the thermal simulation, and the robot simulation. The initial path planning generates surface-covering paths on a workpiece while taking various quality criteria into account. To this end, the paths are represented by flexible path structures, including novel structures such as edge-to-edge paths and point-to-point paths. Path quality is evaluated by several objective functions that, in addition to coating quality, account above all for thermal variances, which have rarely been considered so far even though they strongly influence the final coating quality. Further practically relevant criteria, such as the robot axis accelerations and the overspray, i.e., the material that is not deposited on the functional surface, are also taken into account. The initial path planning problem is formulated as a multi-criteria optimization problem and optimized with an evolutionary algorithm. Different variants of the evolutionary operators are used and evaluated against each other, yielding the combination of operators with which the algorithm produces structurally good paths for the subsequent path optimization at a high convergence rate. The path optimization is used to improve existing paths with respect to coating errors and robot executability. A novel concept combining the analytic deposition model developed in this thesis with an external black-box simulation is used to optimize the paths with the nonlinear conjugate gradient method: the errors are determined by the external simulation, the gradients by the analytic deposition model. The path optimization is not limited to paths created by the initial path planning; it can also be used to adapt existing paths to other spraying processes or similar workpiece geometries, which greatly reduces the considerable effort of generating new paths. The thesis concludes with a procedure that incorporates the previously neglected robot dynamics into the system. A dynamics correction is presented that projects the paths into the dynamically feasible range using software from the robot manufacturer. This projection is used in a further optimization loop, alternating with the path optimization, to generate a dynamically feasible path that yields very good results with respect to the quality measures.
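    A minimal Python sketch of the described split between black-box errors and analytic gradients (placeholders only, assuming SciPy's nonlinear conjugate gradient in place of the thesis' implementation): the objective is evaluated by an external simulation while the gradient comes from a simplified analytic deposition model.
        import numpy as np
        from scipy.optimize import minimize

        def coating_error(path_params):
            # placeholder for the external black-box coating simulation
            return float(np.sum((path_params - 1.0) ** 2))

        def analytic_gradient(path_params):
            # placeholder for the gradient of the analytic deposition model
            return 2.0 * (path_params - 1.0)

        x0 = np.zeros(12)                                   # parameters of an existing path
        result = minimize(coating_error, x0, jac=analytic_gradient, method="CG")
        print(result.x)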
  • Item
    Exploration of cyber-physical systems for GPGPU computer vision-based detection of biological viruses
    (2017) Libuschewski, Pascal; Marwedel, Peter; Müller, Heinrich
    This work presents a method for a computer vision-based detection of biological viruses in PAMONO sensor images and, related to this, methods to explore cyber-physical systems such as those consisting of the PAMONO sensor, the detection software, and processing hardware. The focus is especially on an exploration of Graphics Processing Units (GPU) hardware for “General-Purpose computing on Graphics Processing Units” (GPGPU) software, and the targeted systems are high performance servers, desktop systems, mobile systems, and hand-held systems. The first problem that is addressed and solved in this work is to automatically detect biological viruses in PAMONO sensor images. PAMONO is short for “Plasmon Assisted Microscopy Of Nano-sized Objects”. The images from the PAMONO sensor are very challenging to process. The signal magnitude and spatial extension from attaching viruses is small, and it is not visible to the human eye on raw sensor images. Compared to the signal, the noise magnitude in the images is large, resulting in a small Signal-to-Noise Ratio (SNR). With the VirusDetectionCL method for a computer vision-based detection of viruses, presented in this work, an automatic detection and counting of individual viruses in PAMONO sensor images has been made possible. A data set of 4000 images can be evaluated in less than three minutes, whereas a manual evaluation by an expert can take up to two days. As the most important result, sensor signals with a median SNR of two can be handled. This enables the detection of particles down to 100 nm. The VirusDetectionCL method has been realized as a GPGPU software. The PAMONO sensor, the detection software, and the processing hardware form a so-called cyber-physical system. For different PAMONO scenarios, e.g., using the PAMONO sensor in laboratories, hospitals, airports, and in mobile scenarios, one or more cyber-physical systems need to be explored. Depending on the particular use case, the demands toward the cyber-physical system differ. This leads to the second problem for which a solution is presented in this work: how can existing software with several degrees of freedom be automatically mapped to a selection of hardware architectures with several hardware configurations to fulfill the demands on the system? Answering this question is a difficult task, especially when several possibly conflicting objectives, e.g., quality of the results, energy consumption, and execution time, have to be optimized. An extensive exploration of different software and hardware configurations is expensive and time-consuming. Sometimes it is not even possible, e.g., if the desired architecture is not yet available on the market or the design space is too big to be explored manually in reasonable time. A Pareto optimal selection of software parameters, hardware architectures, and hardware configurations has to be found. To achieve this, three parameter and design space exploration methods have been developed. These are named SOG-PSE, SOG-DSE, and MOGEA-DSE. MOGEA-DSE is the most advanced method of these three. It enables a multi-objective, energy-aware, measurement-based or simulation-based exploration of cyber-physical systems. This can be done in a hardware/software codesign manner. In addition, offloading of tasks to a server and approximate computing can be taken into account. With the simulation-based exploration, systems that do not yet exist can be explored. This is useful if a system should be equipped, e.g., with the next generation of GPUs.
Such an exploration can reveal bottlenecks of the existing software before new GPUs are bought. With MOGEA-DSE, the overall goal of developing a method to automatically explore suitable cyber-physical systems for different PAMONO scenarios could be achieved. As a result, a rapid, reliable detection and counting of viruses in PAMONO sensor data using high-performance, desktop, laptop, and even hand-held systems has been made possible. The fact that this could be achieved even for a small, hand-held device is the most important result of MOGEA-DSE. With the automatic parameter and design space exploration, 84% of the energy could be saved on the hand-held device compared to a baseline measurement. At the same time, a speedup of four and an F-1 quality score of 0.995 could be obtained. The speedup enables live processing of the sensor data on the embedded system with a very high detection quality. With this result, viruses can be detected and counted on a mobile, hand-held device in less than three minutes and with real-time visualization of results. This opens up completely new possibilities for biological virus detection.
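    A small Python sketch of the Pareto selection underlying such a design space exploration (generic, not MOGEA-DSE itself): with all objectives expressed as minimization, e.g. energy, runtime and one minus the F-1 score, a configuration is kept only if no other configuration is at least as good in every objective and strictly better in at least one.
        import numpy as np

        def pareto_front(objectives):
            # objectives: (n_configs, n_objectives), all to be minimized
            pts = np.asarray(objectives, dtype=float)
            front = []
            for i, p in enumerate(pts):
                dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
                if not dominated:
                    front.append(i)
            return front

        # columns: energy [J], runtime [s], 1 - F1 score
        configs = np.array([[120, 3.0, 0.02], [90, 3.5, 0.05], [150, 2.0, 0.01], [200, 3.6, 0.06]])
        print(pareto_front(configs))          # the last configuration is dominated and dropped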
  • Item
    Approximation anatomischer Strukturen und biomedizinischer Prozesse zur rechnergestützten Untersuchung der Hämodynamik in Aneurysmen
    (2016) Walczak, Lars; Müller, Heinrich; Turek, Stefan
    Human arteries can develop aneurysms, whose rupture can lead to life-threatening internal bleeding such as strokes. One treatment approach is the insertion of so-called stents. With the current state of the art, neither a rupture nor the influence of a stent can be predicted exactly, although this would be important additional information for the treating physician and for optimal patient care. To obtain this information, simulations of the hemodynamics in pathological arteries are to be used in the future. In this thesis, flow velocities in arteries with and without inserted devices such as stents are computed, and the resulting wall shear stresses are examined with regard to rupture prediction. Furthermore, the mass transfer between artery and aneurysm is characterized, and the thrombosis behavior under flow conditions is analyzed. For the latter topic, the occlusion of aneurysms by thrombi, the spatial confinement of thrombus formation, and the behavior of wall-adherent thrombi, including their possible detachment, are investigated in particular. To enable suitable simulations, the biomedical fundamentals are analyzed first. From a methodological point of view, two fundamental aspects have to be addressed to study the complex dynamics: the geometric and the functional approximation. The functional approximation of biomedical processes covers blood flow, the transport of passive substances, and thrombosis; for these, suitable models are identified, converted into corresponding lattice Boltzmann methods, simulated, and analyzed. By developing suitable concepts for running the described simulations on single or multiple communicating graphics processors, the coupled multi-physics problems can be simulated efficiently with lattice Boltzmann methods. Overall, this approach is a novelty and underlines the practicality of the method. The geometric approximation of anatomical structures is solved in this thesis with level-set representations. They allow a variety of problems around the simulation to be addressed, for example the construction of a simulation domain from different tomography datasets and the insertion of devices such as stents into the region of interest. Combining them with the lattice Boltzmann method yields advantages over the state of the art, for instance in the efficient computation of wall shear stresses. The flow and transport simulations are validated against high-resolution magnetic resonance imaging. For this purpose, a model of the acquisition process under the influence of radio-frequency magnetic fields and gradients is created, and the magnetization transport as well as the relaxation are simulated. The deviations determined between simulation and measurement are small overall. For the measurement experiments, 3D printing is used for the first time to construct the physical models, and their quality is assessed. As a result of this work, an efficient and comprehensive processing pipeline for blood flow, transport, and thrombosis processes is available for further studies and can easily be extended with new models.
The simulation of magnetic resonance flow imaging also enables future applications in the area of sequence development.
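    A tiny Python illustration of the level-set Booleans behind such domain construction (a generic sketch, not the thesis' pipeline, which builds the domain from tomography data): with signed distance fields (negative inside), union, intersection and difference reduce to pointwise minima and maxima, so a stent-like insert can be carved out of the vessel lumen before handing the domain to the flow solver.
        import numpy as np

        def sdf_union(a, b):        return np.minimum(a, b)
        def sdf_intersection(a, b): return np.maximum(a, b)
        def sdf_difference(a, b):   return np.maximum(a, -b)

        x, y, z = np.mgrid[-2:2:64j, -2:2:64j, -2:2:64j]
        vessel = np.sqrt(y**2 + z**2) - 1.0              # straight tube of radius 1 (negative inside)
        strut = np.sqrt((y - 0.9)**2 + z**2) - 0.1       # thin cylinder near the vessel wall
        lumen = sdf_difference(vessel, strut)            # fluid domain without the strut
        solid_nodes = lumen > 0.0                        # obstacle mask for the lattice Boltzmann solver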
  • Item
    A parameter-optimizing model-based approach to the analysis of low-SNR image sequences for biological virus detection
    (2016) Siedhoff, Dominic; Müller, Heinrich; Merhof, Dorit
    This thesis presents the multi-objective parameter optimization of a novel image analysis process. The focus of application is the automatic detection of nano-objects, for example biological viruses, in real-time. Nano-objects are detected by analyzing time series of images recorded with the PAMONO biosensor, after parameters have been optimized on synthetic data created by a signal model for PAMONO. PAMONO, which is short for Plasmon-Assisted Microscopy of Nano-Sized Objects, is a biosensor yielding indirect proof of objects on the nanometer scale by measuring the Surface Plasmon Resonance (SPR) effects they cause on the micrometer scale. It is an optical microscopy technique enabling the detection of biological viruses and other nano-objects within a portable device. The PAMONO biosensor produces time series of 2-D images on the order of 4000 half-megapixel images per experiment. A particular challenge for automatic analysis of this data emerges from its low Signal-to-Noise Ratio (SNR). Manual analysis takes approximately two days per experiment and analyst. With the automatic analysis process developed in this thesis, occurrences of nano-objects in PAMONO data can be counted and displayed in real-time while measurements are being taken. Analysis is divided into a GPU-based detector aiming at high sensitivity, complemented by a machine learning-based classifier aiming at high precision. The analysis process is embedded into a multi-objective optimization approach that automatically adapts algorithm choice and parameters to changes in physical sensor parameters. Such changes occur, for example, during sensor prototype development. In order to automatically evaluate the objectives undergoing optimization, a signal model for the PAMONO sensor is proposed, which serves to synthesize ground truth-annotated data. The parameters of the analysis process are optimized on this synthetic data, and the classifier is learned from it. Hence, the signal model must accurately mimic the data recorded by the sensor, which is achieved by incorporating real sensor data into the synthesis. Both the optimized parameters and the learned classifier achieve high-quality results on the real sensor data to be analyzed: nano-objects with diameters down to 100 nm are detected reliably in PAMONO data. Note that the median SNR over all nano-objects to be detected was below two in the examined experiments with 100 nm objects. While the presented analysis process can be used for real-time virus detection in PAMONO data, the optimization approach can serve in accelerating the advancement of the sensor prototype towards a final setup of its physical parameters: in this scenario, frequent changes in physical sensor parameters make the automatic adaptation of algorithmic process parameters a desirable goal. No expertise concerning the underlying algorithms is required in these use cases, enabling ready applicability in a lab scenario.
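    For orientation, a minimal Python definition of a per-object SNR figure like the one quoted above (an assumed, simplified definition; the thesis defines the quantity in the context of its signal model): the signal step height of a particle region over the standard deviation of the background noise.
        import numpy as np

        def particle_snr(image, particle_mask, background_mask):
            # image: 2-D sensor frame, masks: boolean arrays selecting the two regions
            step = image[particle_mask].mean() - image[background_mask].mean()
            return step / image[background_mask].std()

        rng = np.random.default_rng(0)
        frame = rng.normal(0.0, 1.0, (64, 64))
        frame[30:34, 30:34] += 1.8                       # faint particle signal, SNR below two
        particle = np.zeros((64, 64), bool); particle[30:34, 30:34] = True
        print(particle_snr(frame, particle, ~particle))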
  • Item
    Effiziente, GPU-basierte Simulation thermischer Spritzprozesse
    (2015) Wiederkehr, Thomas; Müller, Heinrich; Turek, Stefan
    This thesis presents a GPU-based coating simulation for robot-guided thermal spraying processes that enables the efficient computation of the coating thickness distribution on complex three-dimensional component surfaces. For this purpose, a footprint-based coating model was designed that allows different spray characteristics and processes to be modeled on the basis of experimentally measured coating profiles. The first part of the thesis therefore deals in detail with the experimental generation, digitization, and post-processing of footprint profiles, whose exact acquisition is decisive for the accuracy of the coating simulation. First, the basic experimental and algorithmic procedure is described. Then, several potential problems in producing and measuring the required coating profiles are identified, and technical as well as algorithmic solutions are presented. For the accurate and repeatable alignment of the spray gun with the component to be coated, the construction of a laser alignment device mountable directly on the spray gun is described; it compensates translational and rotational deviations and defines a reference coordinate system for the spray direction in the arc spraying process, which was previously difficult to capture geometrically. For digitizing the very thin coating profiles, six tactile and optical 3D scanning systems of different designs and working principles are examined and compared with each other and with light-microscopic measurement methods. For the digital post-processing of the scan data, two methods for compensating thermally induced deformations of the substrate are presented and compared, which improves profile accuracy. Furthermore, the influence of the sand-blasting process used for sample preparation on measurements with tactile systems is examined and identified as a non-negligible factor for measurement accuracy. In order to also exploit the comparatively high accuracy of two-dimensional light-microscopic measurements of metallographic cross-sections for calibrating the three-dimensional coating model, a model-based optimization method is presented that geometrically optimizes the three-dimensional footprint shape on the basis of several cross-sections of coating profiles produced with linear traverse paths. This first part of the thesis concludes with methods for representing footprint profiles as superpositions of bivariate Gaussian functions. In contrast to approaches known from the literature, the number of Gaussians is not limited to the number of particle injectors. In this way, the advantages of functional representations, good filtering properties and a continuous representation of the mass flux density near the boundary, are combined with the ability of a numerical representation to capture almost arbitrary spray characteristics. The second part of the thesis deals with the design of the three-dimensional footprint-based coating model and of the GPU-accelerated simulation system. The coating model is divided into three logical parts.
The first part describes a basic, process-defining mass flux density distribution based on the measured footprint profiles. The second part defines a geometric transfer function that converts between the footprint experiment and the general spraying situations occurring in the simulation; transfer functions for conical and cylindrical jet shapes are derived, illustrated, and compared with a simplified formulation. The third part models the variable deposition efficiency, for which an automated, simulation-based determination procedure is presented. A distinguishing feature of the simulation concept is that it maps the geometric situation of the coating deposition process onto the camera-based image generation process of the OpenGL rendering pipeline, which makes the computing power of modern graphics cards available for the coating computation. A high simulation speed, which is necessary in particular for using the coating simulation in automated path planning and path optimization systems, is achieved by implementing large parts of the simulation as GLSL shader programs. The interactive visualization of the coating thickness distribution and of further surface- and path-related quantities, together with the presented methods for automated parameter calibration and sensitivity analysis, allows a detailed assessment and planning of a coating process. In an extensive evaluation, the coating simulation is verified with simple experiments and validated on the coating of complex deep-drawing tools. In this context, the results of a sensitivity analysis are presented and the use of the simulation within an automated path optimization procedure is demonstrated. A comparison with an implementation based on Nvidia's GPU library OptiX Prime further confirms the excellent runtime characteristics of the simulation system designed in this dissertation.
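    A minimal Python sketch of the footprint representation described above, assuming an unconstrained number of bivariate Gaussians (the calibration that fits weights, means, and covariances to measured profiles is not shown): the mass flux density at a point is the weighted sum of the Gaussian components.
        import numpy as np

        def footprint_density(xy, weights, means, covs):
            # xy: (N, 2) sample points in the spray plane
            total = np.zeros(len(xy))
            for w, mu, cov in zip(weights, means, covs):
                d = xy - mu
                inv, det = np.linalg.inv(cov), np.linalg.det(cov)
                quad = np.einsum("ij,jk,ik->i", d, inv, d)           # squared Mahalanobis distances
                total += w * np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(det))
            return total

        xy = np.random.rand(1000, 2) * 20.0 - 10.0                   # points in a 20 mm x 20 mm window
        rho = footprint_density(xy,
                                weights=[0.7, 0.3],
                                means=[np.zeros(2), np.array([2.0, 1.0])],
                                covs=[np.eye(2) * 4.0, np.diag([2.0, 6.0])])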