LS 14 Software Engineering
Recent Submissions
Item: A design theory for data catalogs (2024)
Tebernum, Daniel; Howar, Falk; Möller, Frederik
In today's data-driven world, individuals, companies, and government agencies are generating and collecting enormous amounts of data at an increasing rate. This vast amount of data offers immense potential for valuable insights, informed decision-making, and value creation. The right data can help optimize processes, make predictions, or establish new business models. To exploit this potential, professional and modern data management that supports data discovery, data governance, and data democratization is essential. Data catalogs are a valuable tool for this purpose. They act as a centralized repository within an organization or institution, allowing users to discover, understand, and access data quickly. Data catalogs are gaining popularity in many fields, but holistic, practice-based, and design-oriented knowledge is still lacking. The goal of this thesis is therefore to provide a design theory that aids scholars and professionals in designing data catalogs. As a basis for developing the design theory, we utilized our data catalog, DIVA, which we developed over several iterations and years in close exchange with practice. We did this to create a design theory grounded in practice that is relevant to both researchers and practitioners. Prescriptive design knowledge was extracted from DIVA in the form of design principles. Concrete recommendations for action in the form of design features were also developed based on DIVA. In a qualitative study, people from the target group of our design theory evaluated the results. We present design knowledge for data catalogs of different maturity levels. Implicit design knowledge is given as software artifacts; further design knowledge has been published in the form of models, methods, and architectures in peer-reviewed publications, which are part of this thesis. Mainly, this work deals with the development of design principles and design features. In sum, the contributions compose a design theory for data catalogs. This thesis contributes to the body of design-oriented knowledge concerning data catalogs and thus also data management in general. The design theory is intended to support researchers and practitioners in designing or developing successful data catalogs by providing prescriptive design knowledge and concretizing examples from practice and literature.

Item: Between environmental perception and decision-making: compositional engineering of safe automated driving systems (2024)
Philipp, Robin Sören; Howar, Falk; Chen, Jian-Jia
Development of autonomous vehicles has hit a slump in the past years. This slump is caused by the so-called approval trap for autonomous vehicles: while the industry has mostly mastered the methods for building autonomous vehicles, reliable mechanisms for ensuring their safety are still missing. It is generally accepted that the brute-force approach of driving enough mileage to document the relatively higher safety of autonomous vehicles (compared to human drivers) is not feasible, and, as of today, no alternative strategies for the safety approval of autonomous vehicles exist. One promising strategy is the decomposition of safety validation into many sub-tasks with compositional sub-goals (akin to safety cases, but for a vehicle's intended functionality), replacing mileage by a combination of validation tasks that together document safety.
A prerequisite for this strategy is that the required performance of each component can be specified and shown. Specifying how accurate an environmental perception needs to be, however, is a non-trivial task. Whether perceptual inaccuracies, like a wrongly classified or missing object, also lead to hazardous behavior can only be evaluated when considering both the residual processing chain and the operational situation the autonomous vehicle is in. This thesis proposes a formal approach for the validation of perception components consisting of three consecutive steps: creation of a taxonomy of perception component inaccuracies, elicitation of verifiable requirements for perception components regarding these inaccuracies, and evaluation of the elicited requirements. To that end, we first address the specification of perception errors and propose an approach to determine the relevance of objects in urban areas. Second, we elicit verifiable perception requirements subject to a given decision-making module in different scenarios by structured testing in a simulation framework. Finally, we deal with the evaluation of perception components. This includes our approach for the generation of dimension and classification reference values and an exemplary evaluation of an object detection module regarding relevant errors and our previously elicited requirements. To the best of our knowledge, this is the first time that a coherent, formal approach for a decomposed safety validation of perception components is proposed and demonstrated. We conclude that our contributions provide a novel perspective on the interface between perception and decision-making and thus further support the idea of a decomposed safety validation for automated driving systems.
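
As a purely illustrative sketch of the evaluation step, the following Python snippet checks a detected object list against a reference list under one elicited requirement. The object classes, coordinates, relevance radius, and position-error threshold are invented for this example and are not values from the thesis.

    import math

    # Reference (ground-truth) objects and detected objects as (class, x, y) in
    # metres, given in vehicle coordinates; the car in the reference list was missed.
    reference = [("pedestrian", 12.0, 1.5), ("car", 40.0, -3.0)]
    detected = [("pedestrian", 12.4, 1.6)]

    MAX_POSITION_ERROR = 0.5   # assumed elicited requirement for this scenario
    RELEVANCE_RADIUS = 30.0    # assumed relevance criterion for the current manoeuvre

    def violations(reference, detected):
        found = []
        for cls, x, y in reference:
            if math.hypot(x, y) > RELEVANCE_RADIUS:
                continue       # irrelevant here: the error cannot lead to hazardous behaviour
            candidates = [(dx, dy) for dcls, dx, dy in detected if dcls == cls]
            if not candidates:
                found.append(f"relevant {cls} missed")
                continue
            error = min(math.hypot(x - dx, y - dy) for dx, dy in candidates)
            if error > MAX_POSITION_ERROR:
                found.append(f"{cls} localized {error:.2f} m off, requirement is {MAX_POSITION_ERROR} m")
        return found

    print(violations(reference, detected))   # -> []  (the missed car was not relevant in this situation)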

Item: Design principles for data quality tools (2023)
Altendeitering, Marcel; Howar, Falk; Janiesch, Christian
Data quality is an essential aspect of organizational data management and can facilitate accurate decision-making and building competitive advantages. Numerous data quality tools aim to support data quality work by offering automation for different activities, such as data profiling or validation. However, despite a long history of tools and research, a lack of data quality remains an issue for many organizations. Data quality tools face changes in the organizational (e.g., evolving data architectures) and technical (e.g., big data) environment. Established tools cannot fully accommodate these changes, and limited prescriptive design knowledge on creating adequate tools is available. In this cumulative dissertation, we summarize the findings of nine individual studies on the objectives and design of data quality tools. Most importantly, we conducted four case studies on implementing data quality tools in real-world scenarios. In each case, we designed and implemented a separate data quality tool and abstracted the essential design elements. A subsequent cross-case analysis helped us accumulate the available design knowledge, resulting in the proposal of 13 generalized design principles. With this proposal of empirically grounded design knowledge, the dissertation contributes to the managerial and scientific communities. Managers can use our results to create customized data quality tools and assess offerings on the market. Scientifically, we address the lack of prescriptive design knowledge for data quality tools and offer many opportunities to extend our research in multiple directions. The continuous work on data quality tools will help them become more successful in ensuring that data fulfills high-quality standards for the benefit of businesses and society.

Item: Komponentenbasierte Synthese von Simulationsmodellen (2022)
Kallat, Fadil; Rehof, Jakob; Meyer, Anne

Item: The integration of multi-color taint-analysis with dynamic symbolic execution for Java web application security analysis (2023)
Mues, Malte; Howar, Falk; Beyer, Dirk
The view of IT security in today's software development processes is changing. While IT security used to be seen mainly as a risk that had to be managed during the operation of IT systems, a class of security weaknesses is today seen as a measurable quality aspect of IT system implementations, e.g., the number of paths allowing SQL injection attacks. Current trends, such as DevSecOps pipelines, therefore establish security testing in the development process, aiming to catch these security weaknesses before they make their way into production systems. At the same time, the analysis works differently than functional testing, as security requirements are mostly universal and not project-specific. Further, they measure the quality of the source code and not the function of the system. As a consequence, established testing strategies such as unit testing or integration testing are not applicable for security testing. Instead, a new category of tools is required in the software development process: IT security weakness analyzers. These tools scan the source code for security weaknesses independent of the functional aspects of the implementation. In general, such analyzers give stronger guarantees for the presence or absence of security weaknesses than functional testing strategies. In this thesis, I present a combination of dynamic symbolic execution and explicit dynamic multi-color taint analysis for the security analysis of Java web applications. Explicit dynamic taint analysis is an established monitoring technique that allows the precise detection of security weaknesses along a single program execution path, if any are present. Multi-color taint analysis means that different properties defining diverse security weaknesses can be expressed at the same time in different taint colors and are analyzed in parallel during the execution of a program path. Each taint color analyzes its own security weakness, and taint propagation can be tailored at color-specific sanitization points. The downside of dynamic taint analysis is that it explores only a single path at a time. Therefore, this technique requires a path generator component as counterpart that ensures all relevant paths are explored. Dynamic symbolic execution is appropriate here, as enumerating all reachable execution paths in a program is its established strength. The Jaint framework presented here combines these two techniques in a single tool. More specifically, the thesis looks into SMT meta-solving, extending dynamic symbolic execution on Java programs with string operations, and the configuration problem of multi-color taint analysis in greater detail to enable Jaint for the analysis of Java web applications. The evaluation demonstrates that the resulting framework is the best research tool on the OWASP Benchmark. One of the two dynamic symbolic execution engines that I worked on as part of the thesis has won gold in the Java track of SV-COMP 2022. The other demonstrates that it is possible to lift the implementation design from a research-specific JVM to an industry-grade JVM, paving the way for the future scaling of Jaint.
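
To make the idea of multi-color taint tracking concrete, here is a minimal, hypothetical Python sketch; Jaint itself instruments the JVM and is not reproduced here. The colors, the sanitizer, and the sink are illustrative assumptions.

    # Values carry a set of taint colors; each color stands for one weakness class.
    class Tainted(str):
        def __new__(cls, value, colors=frozenset()):
            obj = super().__new__(cls, value)
            obj.colors = frozenset(colors)
            return obj

    def colors_of(value):
        return getattr(value, "colors", frozenset())

    def concat(a, b):
        # propagation: the result carries the union of both operands' colors
        return Tainted(str(a) + str(b), colors_of(a) | colors_of(b))

    def escape_sql(value):
        # color-specific sanitization point: only the SQL-injection color is cleared;
        # the XSS color survives because SQL escaping does not help against XSS
        return Tainted(str(value).replace("'", "''"), colors_of(value) - {"SQLI"})

    def sql_sink(query):
        # a sink reports a weakness if a matching color reaches it on this path
        if "SQLI" in colors_of(query):
            raise RuntimeError("path allows SQL injection")
        print("executing:", query)

    user_input = Tainted("alice' OR '1'='1", colors={"SQLI", "XSS"})
    sql_sink(concat("SELECT * FROM users WHERE name='", concat(escape_sql(user_input), "'")))
    # The SQLI color was removed by the sanitizer, so this path raises no alarm;
    # a path that skips escape_sql(...) would be reported.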

Item: User support for software development technologies (2022)
Vasileva, Anna; Rehof, Jakob; Hermann, Ben
The adoption of software development technologies is closely related to the topic of user support. This is especially true in early phases, when users are familiar neither with the modification and build processes of the software to be developed nor with the technology used for its development. This work introduces an approach to improve the usability of software development technologies, represented by the Combinatory Logic Synthesizer (CL)S Framework. (CL)S is based on a type inhabitation algorithm for combinatory logic with intersection types and aims to automatically create software components from a domain-specific repository. The framework yields a complete enumeration of all inhabitants. The inhabitation results are computed in the form of tree grammars. Unfortunately, the underlying type system allows only limited application of domain-specific knowledge. To compensate for this limitation, this work provides a framework for debugging intersection type specifications and filtering inhabitation results using domain-specific constraints as its main aspects. The aim of the debugger is to make potentially incomplete or erroneous input specifications and decisions of the inhabitation algorithm understandable for those who are not experts in the field of type theory. The combination of tree grammars and graph theory forms the foundation of a clear representation of the computed results that informs users about the search process of the algorithm. The graphical representations are based on hypergraphs that illustrate the inhabitation in a step-wise fashion. Within the scope of this work, three filtering algorithms were implemented and investigated. The filtering algorithm integrated into the framework for user support and used for the restriction of inhabitation results is practically feasible and represents a clear improvement over existing approaches. It is based on modifying the tree grammars resulting from the (CL)S Framework. Additionally, the usability of the (CL)S Framework is supported by eight perspectives included in a web-based integrated development environment (IDE) that provides detailed graphical and textual information about the synthesis.

Item: Component-based synthesis of motion planning algorithms (2021)
Schäfer, Tristan; Rehof, Jakob; Wiederkehr, Petra
Combinatory Logic Synthesis generates data or runnable programs according to formal type specifications. Synthesis results are composed based on a user-specified repository of components, which brings several advantages for representing spaces of high variability. This work suggests strategies to manage the resulting variations by proposing a domain-specific brute-force search and a machine learning-based optimization procedure. The brute-force search involves the iterative generation and evaluation of machining strategies. In contrast, the machine learning optimization uses statistical models to enable the exploration of the design space. The approaches involve synthesizing programs and meta-programs that manipulate, run, and evaluate programs. The methodologies are applied to the domain of motion planning algorithms, and they include the configuration of programs belonging to different algorithmic families.
The study of the domain led to the identification of variability points and possible variations. Proof-of-concept repositories represent these variability points and incorporate them into their semantic structure. The selected algorithmic families involve specific computation steps or data structures, and corresponding software components represent the possible variations. Experimental results demonstrate that CLS enables synthesis-driven, domain-specific optimization procedures that solve complex problems by exploring spaces of high variability.

Item: Programmierkonzepte für die Umsetzung von Nutzungsrichtlinien in industriellen Datenräumen (2022)
Bruckner, Fabian; Howar, Falk; Jürjens, Jan
Over time, data has increasingly become a valuable asset. For this reason, control over their own data is of central importance to rights holders. The ability of a rights holder to decide autonomously how their data is used is referred to as data sovereignty. This thesis addresses the question of how attaining and maintaining data sovereignty can be supported technically by usage control mechanisms. In this work, a flexible and extensible programming language named D° is developed that features integrated usage control mechanisms. By realizing the paradigm of policy-agnostic programming, the complexity of the usage control mechanisms is encapsulated and can be addressed by experts. Part of this complexity has been shifted into, and solved by, the compiler and no longer needs to be considered by users of the language. This relieves application developers and simplifies the correct use of usage control mechanisms. Furthermore, it is shown how the remote evaluation paradigm can be realized for D°. This paradigm targets scenarios of cooperative data use and avoids sending data to third parties who want to use it. Instead, the data-processing applications and their computation results are exchanged. As a result, the data always remains on the systems of the rights holder, which can at the same time draw on the advantages of the usage control mechanisms in D°. This enables cooperative data use in scenarios in which passing on data is ruled out and technical measures for data usage control are necessary. The results are presented and validated by means of a larger demonstrator, in which the individual aspects of D° are introduced in practice using examples. In addition, the solution is positioned within the International Data Spaces, which substantially motivated and shaped this work. This positioning shows that the expressiveness of the usage control mechanisms of D° is equal to or better than that of other usage control mechanisms used in the International Data Spaces.

Item: Modeling of cutting forces in trochoidal milling with respect to wear-dependent topographic changes (2021-05-24)
Bergmann, Jim A.; Potthoff, Nils; Rickhoff, Tobias; Wiederkehr, Petra
The aerospace industry utilizes nickel-based super-alloys due to their high strength and corrosion resistance. To evaluate milling strategies regarding tool wear, the prediction of forces during these cutting operations is essential. This comprises the determination of the undeformed chip thickness. Due to the complex interdependencies of tool engagements, the determination of these thicknesses is challenging. A geometric physically-based simulation system was extended by a novel time-discrete envelope model to increase the precision of the calculated undeformed chip thicknesses. In order to take tool wear into account, digitized topographies of cutting inserts in different states of tool wear were modelled.
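
For orientation only: in simplified force models the undeformed chip thickness at engagement angle φ is often approximated by the textbook relation

    h(φ) ≈ f_z · sin(φ)

where f_z is the feed per tooth. This idealization is not taken from the paper; it ignores precisely the tool-engagement interdependencies and wear-dependent topographic changes that the envelope model above is designed to capture.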

Item: Algebraic aggregation of random forests (2021-09-29)
Gossen, Frederik; Steffen, Bernhard
Random Forests are one of the most popular classifiers in machine learning. The larger they are, the more precise the outcome of their predictions. However, this comes at a cost: it is increasingly difficult to understand why a Random Forest made a specific choice, and its running time for classification grows linearly with its size (the number of trees). In this paper, we propose a method to aggregate large Random Forests into a single, semantically equivalent decision diagram, which has the following two effects: (1) minimal, sufficient explanations for Random Forest-based classifications can be obtained by means of a simple three-step reduction, and (2) the running time is radically improved. In fact, our experiments on various popular datasets show speed-ups of several orders of magnitude, while, at the same time, also significantly reducing the size of the required data structure.
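
The following minimal Python sketch illustrates only the semantics of such an aggregation on a toy forest: the majority vote of all trees is folded into one shared decision structure by brute-force enumeration of the (here, two) binary features. The example forest is invented, and the paper's algebraic construction is not reproduced here.

    from collections import Counter
    from functools import lru_cache

    # Toy forest: each tree is ("leaf", label) or (feature_index, if_false, if_true).
    FOREST = (
        (0, ("leaf", "A"), (1, ("leaf", "A"), ("leaf", "B"))),
        (1, ("leaf", "A"), ("leaf", "B")),
        (0, ("leaf", "B"), (1, ("leaf", "A"), ("leaf", "B"))),
    )
    NUM_FEATURES = 2

    def eval_tree(tree, x):
        while tree[0] != "leaf":
            feature, if_false, if_true = tree
            tree = if_true if x[feature] else if_false
        return tree[1]

    def forest_vote(x):
        # classification of the whole forest: majority vote over all trees
        return Counter(eval_tree(t, x) for t in FOREST).most_common(1)[0][0]

    @lru_cache(maxsize=None)
    def aggregate(feature, fixed):
        # Build one decision node for the forest with features 0..feature-1 fixed;
        # memoization avoids recomputing restrictions, redundant tests are collapsed.
        if feature == NUM_FEATURES:
            return ("leaf", forest_vote(dict(fixed)))
        if_false = aggregate(feature + 1, fixed + ((feature, 0),))
        if_true = aggregate(feature + 1, fixed + ((feature, 1),))
        return if_false if if_false == if_true else (feature, if_false, if_true)

    diagram = aggregate(0, ())
    # The single diagram reproduces the vote of all three trees on every input:
    for x in ({0: 0, 1: 0}, {0: 0, 1: 1}, {0: 1, 1: 0}, {0: 1, 1: 1}):
        assert eval_tree(diagram, x) == forest_vote(x)
    print(diagram)   # for this toy forest, everything collapses to a single test on feature 1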

Item: Investigation of the effect of residual stresses in the subsurface on process forces for consecutive orthogonal cuts (2021-05-22)
Wöste, Florian; Kimm, Janis; Bergmann, Jim A.; Theisen, Werner; Wiederkehr, Petra
The quality and surface integrity of machined parts is influenced by residual stresses in the subsurface resulting from cutting operations. These stress characteristics can affect not only functional properties such as fatigue life, but also the process forces during machining. Especially for orthogonal cutting, an appropriate experimental analogy setup for machining operations like milling, different undeformed chip thicknesses cause specific residual stress formations in the subsurface area. In this work, the process-related depth profile of the residual stress in AISI 4140 was investigated and correlated to the resulting cutting forces. Furthermore, an analysis of the microstructure of the cut material was performed, using additional characterization techniques such as electron backscatter diffraction and nanoindentation to account for subsurface alterations. On this basis, the influence of process-related stress profiles on the process forces for consecutive orthogonal cutting strategies is evaluated and compared to the results of a numerical model. The insights obtained provide a basis for future investigations on, e.g., empirical modeling of process forces including the influence of process-specific characteristics such as residual stress.

Item: Automatisierte Komposition und Konfiguration von Workflows zur Planung mittels kombinatorischer Logik (2019)
Winkels, Jan; Rehof, Jakob; Steffen, Bernhard
When a need for adaptation is identified in a factory system, an adaptation process must be started. Such a process usually includes a planning phase in which a project team works out an approach for how the adaptation is to be carried out. While production, logistics, and manufacturing processes have already been largely automated, the development of such a planning process is, as a rule, still done manually and individually. The planning team develops each required plan by hand and as needed. This project-specific plan creation results from the particular requirements that every (adaptation) project brings with it. Under these requirements, however, plans are created according to recognizable patterns. The goal of this dissertation is to develop a way to automate plan creation and planning. To this end, a method (and a piece of software) is to be developed that makes it possible to generate plans dynamically on demand, taking previously specified constraints into account. To generate processes dynamically, the project follows a modular construction-kit principle: a collection of standardized process modules is defined from which complex processes and plans can be assembled. The idea is comparable to a box of Lego bricks; just as such bricks can be combined into almost any desired object, the process modules should be able to represent any desired process. At the end of the project, the result is to be a piece of software that automatically delivers the suitable workflow for each project. A project planner only specifies basic information (e.g., budget and time constraints) and receives a selection of possible plans for realizing the project. If events occur during the execution of the plan that make plan adaptations necessary, these can also be handled automatically by regenerating the plan. To achieve this goal, methods from combinatory logic and constraint solving are used. Combinatory logic is already successfully employed for software synthesis at the Chair for Software Engineering at TU Dortmund, meaning that individual programs can be generated from a given set of different software components. Constraint solving, in turn, refers to methods for finding solutions to (mathematical) problems subject to restricting side conditions (constraints). In this work, both technologies are brought together in an extension of an established project planning method. To this end, the modern challenges of factory planning in the context of Industrie 4.0 are discussed first; it is shown why there is a need for modern, fast, and flexible planning approaches and how computer science can support them. In the further course of the work, the methodological and theoretical foundations of factory planning are presented, together with possible planning systematics for realizing automated plan generation. Subsequently, technologies from the fields of synthesis and constraint solving are examined and combined in a prototype software application.
The thesis concludes with a series of experiments that validate the developed approach using real planning scenarios.

Item: A type-theoretic framework for software component synthesis (2019)
Bessai, Jan; Rehof, Jakob; Heineman, George T.
A language-agnostic approach for type-based, component-oriented software synthesis is developed from the fundamental principles of abstract algebra and combinatory logic. It relies on an enumerative type inhabitation algorithm for Finite Combinatory Logic with Intersection Types (FCL) and a universal algebraic construction to translate terms of combinatory logic into any given target language. New insights are gained on the combination of semantic domains of discourse with intersection types. Long-standing gaps in the algorithmic understanding of the type inhabitation question for FCL are closed. A practical implementation is developed, and its applications by the author and other researchers are discussed. They include, but are not limited to, vast improvements in the context of synthesizing software product line members. An interactive theorem prover, Coq, is used to formalize and check all theoretical results. This makes them more reusable for other developments and enhances confidence in their correctness.

Item: Algorithmic aspects of type-based program synthesis (2019)
Dudenhefner, Andrej; Rehof, Jakob; Urzyczyn, Pawel
In the area of type-based program synthesis, the decision problem of inhabitation (given a type environment Gamma and a type tau, is there a term M such that M can be assigned the type tau in Gamma?) corresponds to the existence of a program (term M) that satisfies the given specification (type tau) under additional assumptions (type environment Gamma). Inhabitation in typed lambda-calculi can be seen as functional program synthesis from scratch. Complementarily, inhabitation in combinatory logic can be seen as domain-specific program synthesis. Further restrictions on inhabitant search, such as principality and relevance restrictions, yield inhabitants that are more closely tied to the given specifications. Alternatively, dimension, rank, order, and arity restrictions provide means to control the complexity of inhabitant search. This work provides an overview of the following results in type-based program synthesis: PSpace-completeness of principal inhabitation in the simply typed lambda-calculus, undecidability of inhabitation in lambda-calculus with intersection types, undecidability of inhabitation in lambda-calculus with intersection types in bounded dimension, undecidability of inhabitation in subintuitionistic combinatory logic, (o+2)-ExpTime-completeness of inhabitation in combinatory logic with intersection types with instantiation of bounded order o, and ExpTime-hardness of intersection type unification.
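
A purely illustrative instance of the inhabitation question for the simply typed lambda-calculus, with an invented environment and goal type:

    \Gamma = \{\, f : A \to B,\ x : A \,\}, \qquad \tau = B, \qquad \Gamma \vdash f\,x : \tau

Here the term M = f x is an inhabitant of the goal type, so synthesis succeeds; for the smaller environment {x : A} and the same goal B, no term can be assigned type B, so synthesis fails.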

Item: Digitales, sektorübergreifendes Prozessmanagement im Gesundheitswesen (2015)
Heiden, Katja; Rehof, Jakob; Böckmann, Britta

Item: Automatic synthesis of component & connector software architectures with bounded combinatory logic (2014)
Düdder, Boris; Rehof, Jakob; Henglein, Fritz
Combinatory logic synthesis is a new type-based approach towards automatic synthesis of software from components in a repository. In this thesis we show how the type-based approach can naturally be used to exploit taxonomic conceptual structures in software architectures and component repositories to enable automatic composition and configuration of components, as well as code generation, by associating taxonomic concepts with architectural building blocks such as, in particular, software connectors. Components of a repository are exposed for synthesis as typed combinators, where intersection types are used to represent concepts that specify the intended usage and functionality of a component. An algorithm for solving the type inhabitation problem in combinatory logic (does there exist a composition of combinators with a given type?) is then used to automate the retrieval, composition, and configuration of suitable building blocks with respect to a goal specification. Since type inhabitation has high computational complexity, heuristic optimizations of the inhabitation algorithm are essential for making the approach practical. We discuss particularly important (theoretical and pragmatic) optimization strategies and evaluate them in experiments. Furthermore, we apply this synthesis approach to define a method for software connector synthesis for realistic software architectures based on a type-theoretic model. We conduct experiments with a rapid prototyping tool that employs this method on complex, concrete ERP and e-commerce systems and discuss the results.
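
A minimal Python sketch of this retrieval-and-composition idea, radically simplified to plain arrow types (no intersection types, no taxonomy) and a bounded search depth; the component names and types (db, loadUsers, render, Connection, UserTable, HtmlPage) are invented for the example.

    # Hypothetical repository: combinator name -> type, where a type is a base
    # type (a string) or ("->", argument_type, result_type).
    REPO = {
        "db": "Connection",
        "loadUsers": ("->", "Connection", "UserTable"),
        "render": ("->", "UserTable", "HtmlPage"),
    }

    def inhabitants(goal, depth):
        """Enumerate applicative compositions of repository combinators of type `goal`."""
        if depth < 0:
            return
        for name, ty in REPO.items():
            # peel argument types off the combinator until only its result type remains
            args, result = [], ty
            while isinstance(result, tuple):
                args.append(result[1])
                result = result[2]
            if result != goal:
                continue
            def apply_args(term, remaining):
                if not remaining:
                    yield term
                    return
                for argument in inhabitants(remaining[0], depth - 1):
                    yield from apply_args(f"({term} {argument})", remaining[1:])
            yield from apply_args(name, args)

    print(list(inhabitants("HtmlPage", depth=3)))   # -> ['(render (loadUsers db))']

The goal type acts as the specification, and every enumerated inhabitant is one way of composing the available building blocks to meet it.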

Item: Erweiterung von Konzepten des complex event processings zur informationslogistischen Verarbeitung telemedizinischer Ereignisse (2014-04-08)
Meister, Sven; Rehof, Jakob; Margaria-Steffen, Tiziana
Early estimates for the healthcare sector forecast an increase in data from 500 petabytes in 2012 to 25,000 petabytes in 2020. BITKOM supports this and cites an annual data growth rate of 40-50%. Frost & Sullivan have estimated the data held within hospitals at 1 billion terabytes and forecast a data volume of 1.8 zettabytes for 2016. The available data is characterized by a high degree of heterogeneity. High-frequency real-time data in particular, such as that produced by vital sign monitoring, has high medical value but is at the same time difficult to exploit. This thesis therefore develops concepts that enable intelligent processing of heterogeneously distributed vital signs. The objective is to filter and condense such data in a way that produces decision-supporting information and reduces the degree of information oversupply. To this end, concepts from the two research fields of information logistics and complex event processing are considered and combined into an event-processing system for telemedical events. Using temporal abstraction, complex events, so-called trend patterns, are generated from a temporally ordered set of simple events. By applying a user's formalized information demand, these patterns are turned into demand-oriented information. The essential property of the system to be designed and implemented is the modularization of the processing routines, allowing simple adaptation to changing health conditions and thus reducing the necessary implementation effort. The conceptual and implementation results of this work are assessed in an evaluation using large, heterogeneous data sets, focusing on demonstrating demand-oriented condensation of data into information as well as a minimization of implementation effort.
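
A minimal Python sketch of the temporal-abstraction idea described above: a window of simple vital-sign events is condensed into one complex "trend pattern" event and only forwarded if it matches a formalized information demand. The thresholds, the demand, and the pulse values are invented for this illustration.

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class VitalEvent:          # a simple event from vital sign monitoring
        timestamp: int         # seconds since start of monitoring
        pulse: int             # beats per minute

    def trend_pattern(window):
        # temporal abstraction: condense an ordered window of simple events
        # into one complex event describing the trend
        half = len(window) // 2
        delta = mean(e.pulse for e in window[half:]) - mean(e.pulse for e in window[:half])
        if delta > 5:
            return ("RISING_PULSE", round(delta, 1))
        if delta < -5:
            return ("FALLING_PULSE", round(delta, 1))
        return None            # stable: nothing is forwarded, reducing information oversupply

    INFORMATION_DEMAND = {"RISING_PULSE"}    # formalized demand of one (hypothetical) physician

    events = [VitalEvent(t, p) for t, p in enumerate([72, 74, 73, 80, 86, 91])]
    pattern = trend_pattern(events)
    if pattern and pattern[0] in INFORMATION_DEMAND:
        print("notify physician:", pattern)   # -> notify physician: ('RISING_PULSE', 12.7)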

Item: Verteilte Prozesskontrolle in ressourcenbasierten Architekturen (2013-06-07)
Sugioarto, Martin; Rehof, Jakob; Steffen, Bernhard
Process control in decoupled, distributed, and data-based architectures such as the World Wide Web is the challenge this work addresses. After formalizing Representational State Transfer (REST) and extending the architectural style to resource-based architectures, it is shown, in terms of resources, agents, and their behavior, how information transfer can be realized, so that cooperative processes can subsequently be operated with the help of monotone sequences and monotone queues. Finally, it is examined how resource-based architectures relate to well-known business process languages and their models.

Item: A generic scheduling architecture for service oriented distributed computing infrastructures (2013-01-28)
Wieder, Philipp; Yahyapour, Ramin; Kranzlmüller, Dieter
In state-of-the-art distributed computing infrastructures, different kinds of resources are combined to offer complex services to customers. As of today, service-oriented middleware stacks are the workhorses that connect resources and their users and implement all functions needed to provide those services. Analysing the functionality of prominent middleware stacks, it becomes evident that common challenges, like scalability, manageability, efficiency, reliability, security, or complexity, exist, and that they constitute major research areas in information and communication technologies in general and distributed systems in particular. One core issue, touching all of the aforementioned challenges, is the question of how to distribute units of work in a distributed computing infrastructure, a task generally referred to as scheduling. Integrating a variety of resources and services while complying with well-defined business objectives makes the development of scheduling strategies and services a difficult venture, which, for service-oriented distributed computing infrastructures, translates to the assignment of services to activities over time, aiming at the optimisation of multiple, potentially competing, quality-of-service criteria. Many concepts, methods, and tools for scheduling in distributed computing infrastructures exist, a majority of which are dedicated to providing algorithmic solutions and schedulers. We approach the problem from another angle and offer a more general answer to the question of how to design an automated scheduling process and an architecture supporting it. In doing so, we take special care of the service-oriented nature of the systems we consider and of the integration of our solutions into IT service management processes. Our answer comprises a number of assets that form a comprehensive scheduling solution for distributed computing infrastructures. Based on a requirement analysis of application scenarios, we provide a concept consisting of an automated scheduling process and the respective generic scheduling architecture supporting it. Process and architecture are based on four core models: a model to describe the activities to be executed, an information model to capture the capabilities of the infrastructure, a model to handle the life-cycle of service level agreements, which are the foundation for elaborate service management solutions, and a specific scheduling model capturing the specifics of state-of-the-art distributed systems. In addition to concept and models, we deliver realisations of our solutions that demonstrate their applicability in different application scenarios, spanning grid-like academic as well as financial service infrastructures. Last, but not least, we evaluate our scheduling model through simulations of artificial as well as realistic workload traces, thus showing the feasibility of the approach and the implications of its usage. The work at hand therefore offers a blueprint for developers of scheduling solutions for state-of-the-art distributed computing infrastructures. It contributes essential building blocks to realise such solutions and provides an important step towards integrating them into IT service management solutions.

Item: Model based security guarantees and change (2012-08-23)
Ochoa Ronderos, Martín; Jürjens, Jan; Viganò, Luca
Achieving security in practical systems is a hard task. As is the case for other critical system properties (e.g., safety), security should be a concern through all phases of software development, starting with the very early phases of requirements and design, because of the potential impact of unwanted behaviour. Moreover, it remains a critical concern throughout a system's life-span, because functionality-driven updates or re-engineering of a system can have an impact on its security. The cost of using formal methods is clearly justified for critical applications. But in the context of a wider class of industrial applications, answers to two questions are important: What are the gains and limitations of light-weight formal security guarantees achieved at different abstraction levels? What are the advantages of those techniques for reasoning about change? For the first question, we discuss different detailed modelling techniques, ranging from UML models to CPU cache modelling at the level of binary code. To tackle the second question, we discuss results on compositionality and incremental verification techniques which, besides being useful tools for verification in general, allow re-utilization of existing verification results in case of changes in the models. We apply these techniques to exemplary security properties with a focus on confidentiality, and pin down security assumptions and guarantees of information flow control across levels of abstraction.