LS 06 Datenbanken und Informationssysteme
formerly: LS 06 Informationssysteme und Sicherheit, Information Retrieval, Informatik und Gesellschaft
Recent Submissions
Item: Perspectives on quality of service in distributed and embedded real-time systems (2023)
Schönberger, Lea; Teubner, Jens; Chen, Jian-Jia

As a consequence of technological advancements, a trend towards the development of smart cities has emerged, i.e., towards urban areas that comprise a multitude of sensors, actuators, and computation and communication resources. Being integrated into buildings, infrastructure elements, and other objects, these components constitute a large and heterogeneous distributed hardware platform. Traffic participants and other actors in a smart city can use this platform on demand to access advanced functionalities such as smart means of transportation. In fact, vehicles of different levels of autonomy rely on a smart city's distributed infrastructure when performing sophisticated operations that come with specific quality of service (QoS) requirements, including a multitude of parameters such as timing and reliability constraints. Against the background of a shared, heterogeneous hardware infrastructure, however, guaranteeing the satisfaction of QoS requirements and, thus, ensuring the operations' correctness is an intricate matter. This dissertation addresses selected challenges arising in the context of smart cities, focusing on the underlying distributed system as well as on individual systems interacting with it. All challenges considered are related to the notion of quality of service and aim either to guarantee the satisfaction of applications' QoS requirements or to enable the system(s) to enhance the level of service provided to (specific types of) applications. Concretely, a concept of QoS contracts, concluded between the distributed system and each executed application, is proposed that makes it possible to provide QoS guarantees and, moreover, to detect contract violations. An extension of this concept to applications with robustness requirements is provided as well. For individual systems, focusing especially on smart vehicles, recovery protocols are proposed that enable a system to safely offload parts of critical applications to a smart city's distributed system, even over unreliable connections, while ensuring temporal correctness. In addition, an approach for the optimization of hardware message filters in Controller Area Network (CAN) is proposed, by means of which the overhead due to unnecessary message inspection can be reduced, allowing the saved resource capacity to be spent on the execution of other applications. All concepts and approaches contributed in this dissertation have been evaluated and shown to be effective.
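As a rough illustration of the contract idea in this abstract, a QoS contract can be thought of as a record of agreed bounds plus a violation check. The following minimal Python sketch uses invented names and fields; it is not the dissertation's actual contract model.

    # Hypothetical sketch only: field names and the violation rule are invented
    # for illustration and are not the dissertation's actual contract model.
    from dataclasses import dataclass

    @dataclass
    class QoSContract:
        app_id: str
        deadline_ms: float      # agreed timing bound for one operation
        min_reliability: float  # agreed lower bound, e.g. delivery probability

        def violated_by(self, latency_ms: float, reliability: float) -> bool:
            """A violation occurs when an observed execution misses the agreed
            timing bound or falls below the reliability threshold."""
            return latency_ms > self.deadline_ms or reliability < self.min_reliability

    # The platform would check each completed operation against the contract:
    contract = QoSContract(app_id="lane-assist", deadline_ms=10.0, min_reliability=0.999)
    print(contract.violated_by(latency_ms=12.3, reliability=0.9995))  # True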
Item: MxTasks: a novel processing model to support data processing on modern hardware (2023)
Mühlig, Jan; Teubner, Jens; Leis, Viktor

The hardware landscape has changed rapidly in recent years. Modern hardware in today's servers is characterized by many CPU cores, multiple sockets, and vast amounts of main memory structured in NUMA hierarchies. To benefit from these highly parallel systems, software has to adapt and actively engage with newly available features. However, the processing models forming the foundation of many performance-oriented applications have remained essentially unchanged. Threads, which serve as the central processing abstraction, can be considered a "black box" that hardly allows any transparency between the application and the system underneath. On the one hand, applications possess knowledge that could assist the system in optimizing execution, such as the data objects they access and their access patterns. On the other hand, the limited opportunities for information exchange force operating systems to make assumptions about applications' intentions in order to optimize their execution, e.g., for local data access. Applications, in turn, implement optimizations tailored to specific situations, such as sophisticated synchronization mechanisms and hardware-conscious data structures. This work presents MxTasking, a task-based runtime environment that assists the design of data structures and applications for contemporary hardware. MxTasking rethinks the interfaces between performance-oriented applications and the execution substrate, streamlining the information exchange between both layers. By breaking with patterns of processing models designed for past generations of hardware, MxTasking creates novel opportunities to manage resources in a hardware- and application-conscious way. Accordingly, we question the granularity of "conventional" threads and show that fine-granular MxTasks are a viable abstraction unit for characterizing and optimizing execution in a general way. Using various demonstrators in the context of database management systems, we illustrate the practical benefits and explore how challenges like memory access latencies and error-prone synchronization of concurrent operations can be addressed straightforwardly and effectively.

Item: Low-latency query compilation (2022-05-10)
Funke, Henning; Mühlig, Jan; Teubner, Jens

Query compilation is a processing technique that achieves very high processing speeds but has the disadvantage of introducing additional compilation latencies. These latencies cause an overhead that is relatively high for short-running and high-complexity queries. In this work, we present Flounder IR and ReSQL, our new approach to query compilation. Instead of using a general-purpose intermediate representation (e.g., LLVM IR) during compilation, ReSQL uses Flounder IR, which is specifically designed for database processing. Flounder IR is lightweight and close to machine assembly. This simplifies the translation from IR to machine code, which is otherwise a costly translation step. Despite the simple translation, compiled queries still benefit from the high processing speeds of the query compilation technique. We analyze the performance of our approach with micro-benchmarks and with ReSQL, which employs a full translation stack from SQL to machine code. We show reductions in compilation times of up to two orders of magnitude over LLVM, and improvements in overall execution time for TPC-H queries of up to 5.5× over state-of-the-art systems.
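The following sketch illustrates, under stated assumptions, why an IR close to machine assembly keeps compilation cheap: most of a general-purpose backend's work disappears, leaving little more than register allocation and byte emission. The instruction syntax and the emit_tuple helper are invented for illustration; this is not actual Flounder IR.

    # Illustration only: the instruction syntax and emit_tuple helper are
    # invented; this is not actual Flounder IR, just the general idea of an
    # assembly-like IR with virtual registers left for a cheap backend pass.
    def compile_filter_scan(column: str, constant: int) -> list:
        """Emit IR for: scan `column`, keep tuples where value > constant."""
        return [
            "vreg %val",                         # declare a virtual register
            f"mov %val, [{column} + %tid * 8]",  # load the current tuple's value
            f"cmp %val, {constant}",             # evaluate the predicate
            "jle .next_tuple",                   # skip non-qualifying tuples
            "call emit_tuple",                   # materialize a result tuple
        ]

    # Lowering such IR to machine code needs little more than register
    # allocation and byte emission, which is where the latency savings over
    # a general-purpose backend like LLVM come from.
    for line in compile_filter_scan("lineitem.l_quantity", 24):
        print(line)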
Item: QCLab: a framework for query compilation on modern hardware platforms (2022)
Funke, Henning; Teubner, Jens; Neumann, Thomas

As modern in-memory database systems achieve higher and higher processing speeds, the performance of memory becomes an increasingly limiting factor. Although there has been significant progress, the bottleneck has only shifted: while earlier systems were optimized for memory latencies, current systems are rather affected by the limited memory bandwidth. Query compilation is a proven technique to address bandwidth limitations. It translates queries via just-in-time compilation into native programs for the target hardware. The compiled queries execute with very high efficiency and with only a bare minimum of communication via memory. Despite these important improvements, the benefit of query compilation in certain scenarios is limited. On the one hand, query compilers typically use standard compiler technology with relatively long compilation times, so the overall execution time can be prolonged by the additional compilation time. On the other hand, not all emerging database technology is compatible with the approach: query compilation uses a tuple-at-a-time processing style that departs from the column-at-a-time or vector-at-a-time approaches that in-memory systems typically use. Data-parallel processing techniques in particular, e.g., SIMD or co-processing techniques, are challenging to combine with the approach. This work presents QCLab, a framework for query compilation on modern hardware platforms. The framework contains several new query compilation techniques that address the mentioned shortcomings and ultimately extend the benefit of query compilation to new workloads and platforms. The techniques cover three aspects: compilation, communication, and processing. Together they serve as the basis for building highly efficient query compilers. The techniques make efficient use of communication channels and of the large processing capacities of modern systems. They were designed for practical use and enable efficient processing even when workload characteristics are challenging.

Item: mxkernel: a novel system software stack for data processing on modern hardware (2020-10-06)
Mühlig, Jan; Müller, Michael; Spinczyk, Olaf; Teubner, Jens

Emerging hardware platforms are characterized by large degrees of parallelism, complex memory hierarchies, and increasing hardware heterogeneity. Their theoretical peak data processing performance can only be unleashed if the different pieces of systems software collaborate much more closely and if their traditional dependencies and interfaces are redesigned. We have developed the key concepts and a prototype implementation of a novel system software stack named mxkernel, for which efficient large-scale data processing capabilities are a primary design goal. To achieve this, heterogeneity and parallelism become first-class citizens, and deep memory hierarchies are considered from the very beginning. Instead of a classical "thread" model, mxkernel provides a simpler control flow abstraction: mxtasks model closed units of work, for which mxkernel guarantees the required execution semantics, such as exclusive access to a specific object in memory. They can also serve as a very elegant abstraction for heterogeneity and resource sharing. Furthermore, mxtasks are annotated with metadata, such as code variants (to support heterogeneity), memory access behavior (to improve cache efficiency and support memory hierarchies), or dependencies between mxtasks (to improve scheduling and avoid synchronization cost). With precisely the required metadata available, mxkernel can provide a lightweight, yet highly efficient form of resource management, even across applications, the operating system, and the database. Based on the mxkernel prototype, we present preliminary results from this ambitious undertaking. We argue that threads are an ill-suited control flow abstraction for modern computer architectures and that a task-based execution model is to be favored.
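A minimal sketch of the mxtask idea may help: a closed unit of work carrying metadata that the runtime can exploit. All field names below are simplified assumptions for illustration; the actual mxkernel interfaces may differ.

    # Assumed, simplified field names; the actual mxkernel interfaces may differ.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class MxTask:
        run: Callable[[], None]       # the closed unit of work itself
        writes: object = None         # object that needs exclusive access
        reads: list = field(default_factory=list)       # prefetch hints
        depends_on: list = field(default_factory=list)  # scheduling order
        code_variants: dict = field(default_factory=dict)  # e.g. {"cpu": f, "gpu": g}
        done: bool = False

    def schedule(task: MxTask, ready_queue: list) -> None:
        # Because dependencies and written objects are declared up front, the
        # runtime can order tasks so that writers of the same object never run
        # concurrently; synchronization moves from the application to the runtime.
        if all(dep.done for dep in task.depends_on):
            ready_queue.append(task)

    a = MxTask(run=lambda: None, done=True)
    b = MxTask(run=lambda: None, depends_on=[a])
    queue = []
    schedule(b, queue)  # a is done, so b becomes ready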
Item: Resource-efficient processing of large data volumes (2021)
Noll, Stefan; Teubner, Jens; Giceva, Jana

The complex system environment of data processing applications makes it very challenging to achieve high resource efficiency. In this thesis, we develop solutions that improve resource efficiency at multiple system levels by focusing on three scenarios that are relevant, but not limited, to database management systems. First, we address the challenge of understanding complex systems by analyzing memory access characteristics via efficient memory tracing. Second, we leverage information about memory access characteristics to optimize the cache usage of algorithms and to avoid cache pollution by applying hardware-based cache partitioning. Third, after optimizing resource usage within a multicore processor, we optimize resource usage across multiple computer systems by addressing the problem of resource contention during bulk loading, i.e., ingesting large volumes of data into the system. We develop a distributed bulk loading mechanism that utilizes network bandwidth and compute power more efficiently and improves both bulk loading throughput and query processing performance.
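As a toy illustration of the distributed bulk-loading theme: spreading ingestion work across nodes so that no single loader becomes the bottleneck. The concrete mechanism below, hash partitioning by key, is an assumption for illustration and not necessarily the thesis's design.

    # Toy sketch; hash partitioning by key is an assumption for illustration,
    # not necessarily the mechanism designed in the thesis.
    def partition_for_nodes(rows, num_nodes):
        """Assign each row to a loading node by hashing its key (first field)."""
        partitions = [[] for _ in range(num_nodes)]
        for row in rows:
            partitions[hash(row[0]) % num_nodes].append(row)
        return partitions

    # Each node then sorts and indexes only its own partition in parallel,
    # instead of a single loader becoming the bottleneck.
    parts = partition_for_nodes([(1, "a"), (2, "b"), (3, "c"), (4, "d")], num_nodes=2)
    print([len(p) for p in parts])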
Item: Inference-proof materialized views (2016)
Preuß, Marcel; Biskup, Joachim; Kern-Isberner, Gabriele

Although the publication of data is ubiquitous nowadays, it is often only permitted if confidentiality requirements are respected. Against this background, this thesis develops an approach for generating weakened views of given database instances. Such a weakened view is inference-proof in the sense of so-called Controlled Interaction Execution and thus provably prevents an attacker from obtaining confidential information, even if the attacker tries to infer this information logically, drawing on knowledge of the security mechanism as well as possible prior knowledge about the database instance or about general facts. This goal is achieved within a logic-oriented modeling in which every piece of certain knowledge that violates the confidentiality policy is replaced (as far as possible) by weaker, but still true, disjunctions composed of elements of the confidentiality policy. Even though this disjunctive knowledge deliberately creates uncertainty about confidential information, it still provides more information than completely withholding confidential information would. To ensure that disjunctions are both credible and meaningful with respect to the application scenario under consideration, a criterion can be defined that specifies which combinations of confidentiality-policy elements may form an admissible disjunction. The approach is first developed in a generic variant in which non-trivial disjunctions of any length ≥ 2 can be employed and the achieved degree of confidentiality varies with the length of the disjunctions. All knowledge is modeled in a restricted, yet versatile, fragment of first-order logic in which the validity of implications can be decided efficiently without the use of theorem provers. Subsequently, a variant of this generic approach is presented that maximizes availability by efficiently constructing disjunctions of length 2 by means of clustering on graphs. This variant is then extended so that it can still generate inference-proof views efficiently when an attacker has prior knowledge in the form of a restricted subclass of so-called tuple-generating dependencies. To demonstrate the efficiency of this (extended) availability-maximizing variant, a prototype is evaluated in various test scenarios. Here, a criterion for constructing admissible disjunctions is used that (locally) maximizes the availability within a disjunction by requiring that the two disjuncts of such a disjunction differ in exactly one constant only.
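A toy instance of the weakening step may help. In the availability-maximizing length-2 variant, a certain fact that violates the policy is replaced by a true disjunction whose two disjuncts differ in exactly one constant; the predicate and constants below are invented for illustration.

    % Predicate and constants are invented for illustration.
    \underbrace{\mathit{treats}(\mathit{mary},\mathit{aids})}_{\text{certain, violates the policy}}
    \;\leadsto\;
    \underbrace{\mathit{treats}(\mathit{mary},\mathit{aids}) \lor \mathit{treats}(\mathit{mary},\mathit{flu})}_{\text{weaker, but still true}}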
Item: Belief change operations under confidentiality requirements in multiagent systems (2014-05-13)
Tadros, Cornelia; Biskup, Joachim; Kern-Isberner, Gabriele

Multiagent systems are populated with autonomous computing entities, called agents, which proactively pursue their goals. The design of such systems is an active field within artificial intelligence research, one objective being flexible and adaptive agents in dynamic and inaccessible environments. An agent's decision-making, and ultimately its success in achieving its goals, crucially depends on the agent's information about its environment and on the sharing of information with other agents in the multiagent system. For this and other reasons, an agent's information is a valuable asset, and the agent is thus often interested in the confidentiality of parts of this information. From research in computer security it is well known that confidentiality is achieved not only by the agent's control of access to its data, but by its control of the flow of information when processing the data during interaction with other agents. This thesis investigates how to specify and enforce the confidentiality interests of an agent D while it reacts to iterated query, revision, and update requests from another agent A for the purpose of information sharing. First, we enable the agent D to specify, in a dedicated confidentiality policy, that parts of its previous or current belief about its environment should be hidden from the requesting agent A. To formalize the requirement of hiding belief, we postulate in particular agent A's capabilities for reasoning about D's belief and about D's processing of information to form its belief. We then relate the requirements imposed by a confidentiality policy to others in the research on information flow control and inference control in computer security. Second, we enable the agent D to enforce its confidentiality aims, as expressed by its policy, by refusing requests from A at a potential violation of its policy. A crucial part of the enforcement is D's simulation of A's postulated reasoning about D's belief and the changes of this belief. In this thesis, we consider two particular operators of belief change: an update operator for a simple logic-oriented database model and a revision operator for D's assertions about its environment, which yield the agent's belief after its nonmonotonic reasoning. To prove the effectiveness of D's means of enforcement, we study necessary properties of D's simulation of A and then, based on these properties, show that D's enforcement is effective according to the formal requirements of its policy.

Item: Ansätze kompositionaler und zustandsbasierter Zugriffskontrolle für Web-basierte Umgebungen (2013-08-22)
Wortmann, Sandra; Biskup, Joachim; Krumm, Heiko

Modern distributed computing systems must be flexibly adaptable to changing conditions and tasks. This requires that such systems be composed of various informational services in a dynamically changing structure. Compositionality is a desirable property in this context, both of the computing systems and of the access control policies assigned to the services and their implementations. Access control policies express which services should be available to which participants under which conditions. For sophisticated applications, such as structured services, access control policies must be specified not only for individual, atomic functionalities of the services, but also for complex sequences of those functionalities. This thesis proposes a compositional and state-based solution to the challenges described. A compositional algebra for access control policies for structured services is developed, and conceptual enforcement mechanisms are devised for these so-called state-dynamic access control policies. Furthermore, centralized and decentralized architectures for certificate-based access control systems are designed into which the proposed solution can be embedded.
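As a loose illustration of the compositional idea, policies for atomic functionalities can be combined by operators into policies for sequences. The operator names and semantics below are simplified assumptions, not the algebra developed in the thesis.

    # Simplified assumption: a policy is just a predicate over (subject,
    # operation, history); the thesis's algebra is richer than this sketch.
    class Policy:
        def __init__(self, allows):
            self.allows = allows  # (subject, operation, history) -> bool

    def seq(p1, p2):
        """Permit a two-step sequence iff step 1 is permitted now and step 2
        is permitted in the state reached after performing step 1."""
        def allows(subject, steps, history):
            first, second = steps
            return (p1.allows(subject, first, history)
                    and p2.allows(subject, second, history + [first]))
        return Policy(allows)

    def conj(p1, p2):
        """Permit an operation iff both component policies permit it."""
        return Policy(lambda s, op, h: p1.allows(s, op, h) and p2.allows(s, op, h))

    read_ok = Policy(lambda s, op, h: op == "read")
    write_after_read = Policy(lambda s, op, h: op == "write" and "read" in h)
    print(seq(read_ok, write_after_read).allows("alice", ("read", "write"), []))  # True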
Item: An Effective and Efficient Inference Control System for Relational Database Queries (2011-02-16)
Lochner, Jan-Hendrik; Biskup, Joachim; Kern-Isberner, Gabriele

Protecting confidential information in relational databases while at the same time ensuring the availability of public information is a demanding task. Unwanted information flows due to the reasoning capabilities of database users require sophisticated inference control mechanisms, since access control alone is in general not sufficient to guarantee the preservation of confidentiality. The policy-driven approach of Controlled Query Evaluation (CQE) has turned out to be an effective means for controlling inferences in databases that can be modeled in a logical framework. It uses a censor function to determine whether the honest answer to a user query would enable the user to disclose confidential information, declared in the form of a confidentiality policy. In doing so, CQE also takes answers to previous queries and the user's background knowledge about the inner workings of the mechanism into account. Relational databases are usually modeled using first-order logic. In this context, the decision problem to be solved by the CQE censor becomes undecidable in general, because the censor essentially performs theorem proving over an ever-growing user log. In this thesis, we develop a stateless CQE mechanism that does not need to maintain such a user log but still achieves the declarative goals of inference control. This feature comes at the price of several restrictions for the database administrator who declares the schema of the database, the security administrator who declares the information to be kept confidential, and the database user who sends queries to the database. We first investigate a scenario with quite restricted means for expressing queries and confidentiality policies and propose an efficient stateless CQE mechanism; due to the assumed restrictions, the censor function of this mechanism reduces to simple pattern matching. Based on this case, we systematically enhance the proposed query and policy languages and investigate the respective effects on confidentiality. We suitably adapt the stateless CQE mechanism to these enhancements and formally prove the preservation of confidentiality. Finally, we develop efficient algorithmic implementations of stateless CQE, thereby showing that inference control in relational databases is feasible for actual relational database management systems under suitable restrictions.
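Since the abstract notes that the censor in the restricted scenario reduces to simple pattern matching, a minimal sketch of such a stateless censor might look as follows. The policy representation and wildcard syntax are assumptions for illustration.

    # The policy representation and '*' wildcard are assumptions for this sketch.
    def censor(query_atom, policy):
        """Return True (protect the answer) iff the query matches some policy
        pattern; no user log is consulted, hence 'stateless'."""
        return any(
            len(pattern) == len(query_atom)
            and all(p == "*" or p == q for p, q in zip(pattern, query_atom))
            for pattern in policy
        )

    policy = {("illness", "mary", "*")}
    print(censor(("illness", "mary", "aids"), policy))  # True: distort/refuse
    print(censor(("illness", "john", "flu"), policy))   # False: answer honestly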
Item: Preprocessing for controlled query evaluation in complete first-order databases (2009-08-31)
Wiese, Lena; Biskup, Joachim; Kern-Isberner, Gabriele

This dissertation investigates a mechanism for confidentiality preservation in first-order logic databases. The logical basis is given by the inference control framework of Controlled Query Evaluation (CQE). Beyond traditional access control, CQE incorporates an explicit representation of a user's knowledge and his ability to reason with information; it hence prevents the disclosure of confidential information that would occur due to inferences drawn by the user. This thesis pioneers a new approach in the CQE context: an unprotected database instance is transformed into an inference-proof instance that does not reveal confidential information; the inference-proof instance formally guarantees confidentiality with respect to a representation of user knowledge and a specification of confidential information. Inference-proofness thus ensures that all user queries can be answered truthfully by the database; no sequence of responses enables the user to infer confidential information. Owing to this concept, query evaluation on the inference-proof instance does not incur any performance degradation. As a second design goal, the availability requirement to maintain as much as possible of the correct information in the input database is accounted for by minimizing a distortion distance. The transformation modifies the input instance so as to provide the user with a consistent view of the data. The algorithm relies on query evaluation on the database to efficiently identify those tuples that are to be added or deleted. Due to the undecidability of the general first-order case, appropriate fragments are analyzed. The formalization starts with universal formulas (for which a restriction to allowed formulas is chosen), moves on to existential formulas, and finishes with tuple-generating dependencies accompanied by existential and denial formulas. The proofs of refutation soundness employ a version of Herbrand's theorem based on semantic trees. An effort was made to present a broad background of related work. Last but not least, the exposition and analysis of a prototypical implementation prove the practicality of the approach.

Item: A framework for inference control in incomplete logic databases (2008-03-10)
Weibert, Torben; Biskup, Joachim; Kern-Isberner, Gabriele

Security in information systems aims at various, possibly conflicting goals, two of which are availability and confidentiality. On the one hand, as much information as possible should be provided to the user. On the other hand, certain information may be confidential and must not be disclosed. In this context, inferences are a major problem: the user might combine a priori knowledge and public information gained from the answers in order to infer secret information. Controlled Query Evaluation (CQE) is a dynamic, policy-driven mechanism for the enforcement of confidentiality in information systems, namely by the distortion of certain answers by means of either lying or refusal. CQE prevents harmful inferences and tries to provide the best possible availability while still preserving confidentiality. In this thesis, we present a framework for Controlled Query Evaluation in incomplete logic databases. In the first part of the thesis, we consider CQE from a declarative point of view. We present three types of confidentiality policy languages of differing simplicity and expressiveness – propositional potential secrets, confidentiality targets, and epistemic potential secrets – and show how they relate to each other. We also give a formal, declarative definition of the requirements for a method protecting these types of policies. As it turns out, epistemic potential secrets are the most expressive of the three types studied, so we concentrate on these policies in the second part of the thesis. There, we show how to operationally enforce confidentiality policies based on epistemic potential secrets. We first present an abstract framework in which two parameters are left open: (1) Does the user know the elements of the confidentiality policy? (2) Do we allow only refusal, only lying, or both distortion methods? For five of the six resulting cases, we present instantiations of the framework and prove confidentiality according to the declarative definition from the first part of the thesis. For the remaining case (combined lying and refusal under unknown policies), we show that no suitable enforcement method can be constructed using the naive heuristics. Finally, we compare the enforcement methods to those constructed for complete databases in earlier work, and we discuss the properties of our algorithms when relaxing the assumptions about the user's computational abilities.
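A toy example of the two distortion methods, in notation assumed for illustration: let s be a potential secret and let the user pose the query q = s, whose honest answer in the database is true. The two methods then protect s differently:

    % Notation assumed for illustration: s is a potential secret and the user
    % poses the query q = s, whose honest answer in the database is true.
    \text{refusal:}\ \mathit{ans}(q) = \mathsf{mum}
    \qquad
    \text{lying:}\ \mathit{ans}(q) = \lnot s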
Item: Facilitating computer supported cooperative work with socio-technical self-descriptions (2006-03-13)
Kunau, Gabriele; Herrmann, Thomas; Kern-Isberner, Gabriele; Dourish, Paul

How can the concept of self-description from newer systems theory be used to improve the co-evolution of software engineering and organizational change in CSCW projects? This thesis suggests transferring the concept of self-description into a concept of socio-technical self-description that allows an organization to describe its own computer-supported work processes. The results are presented in four steps: first, a theoretical foundation is elaborated; second, an initial methodological concept is presented; third, empirical evidence from two explorative case studies is analyzed; fourth, a finalized methodological concept is derived.

Item: Unterstützung der Adoption kommerzieller Standardsoftware durch Diagramme (2005-10-13)
Loser, Kai-Uwe; Herrmann, Thomas; Fischer, Gerhard

This dissertation examines how the introduction of commercial off-the-shelf software can be supported by participatory modeling. Diagrams created in a participatory process with future users can convey a picture of the future work that a new software product is intended to support. In two empirical projects, models were developed with different stakeholders; such models are common in computer science but often new and unfamiliar to end users. The central question was whether future users obtain a concrete picture of what their work will look like in the future. In this way, it becomes possible to implement the redesign of work processes that today's cooperative systems require for a new software product. The resulting method was developed and tested in the two case studies and is supported by a dedicated modeling editor and method handbooks.

Item: Awareness und Adoption kooperativer Wissensmedien im Kontext informeller Zusammenarbeit (Universität Dortmund, 2004-12-23)
Hoffmann, Marcel; Herrmann, Thomas; Morik, Katharina

The use of information and communication technologies does not always proceed as planned. Especially when knowledge media are deployed in the context of informal cooperation, completely unexpected usage scenarios often develop; in other cases, the kind and intensity of use fall short of expectations. This thesis investigates how appropriation, adoption, and usage processes can be improved by functionally enhancing the cooperative knowledge media in use. In particular, the thesis deals with the design of awareness displays and mechanisms that make the processes that have taken place in a cooperative knowledge medium, their current results, and their possible further development transparent to users. Numerous prototypes demonstrate how the design of awareness mechanisms can foster the adoption of computer-supported cooperative media in loosely coupled collaboration. The evaluation of the implemented mechanisms combines objective methods, e.g., for testing the comprehensibility and actual use of the mechanisms, with surveys on the subjectively perceived effects of the functions. The results differ for retrospective and prospective mechanisms and for displays of planning versus usage data. Some displays may be overestimated by infrequent users and by users who understand them poorly, and the relevance of the displays varies depending on the usage situation and the goals of the actors. Displays behind which a social actor is presumed, such as recommendations, announcements, or expectations, received more attention than purely statistically computed presentations. From the many insights gained during the trial phases regarding the perceptibility, comprehensibility, and effectiveness of the displays, design recommendations for adoption-supporting awareness mechanisms are derived. The evaluation concludes that awareness support, as an adoption factor, can aid the structuring of an application in its socio-technical context and, as an acceptance factor, increases the perceived quality of the system.

Item: Secure offline legitimation systems (Universität Dortmund, 2004-09-13)
Bleumer, Gerrit; Biskup, Joachim; Pfitzmann, Birgit
Item: Towards unifying semantic constraints and security constraints in distributed information systems (Universität Dortmund, 2003-12-03)
Sprick, Barbara; Biskup, Joachim; Doberkat, Ernst-Erich

Modern information systems must respect certain restrictions in order to guarantee their proper and desired functionality. Semantic constraints help to prevent inconsistencies in the stored data resulting from faulty updates; security constraints are to maintain integrity, secrecy, and availability over updates and queries. This thesis designs a unifying framework for the specification of semantic constraints and security constraints in information systems in order to study the interactions between them. We consider an information system as a distributed, reactive system in which each actor and each object acts autonomously and concurrently. Actors gain knowledge by performing read operations on objects, and they may update the content of an object by performing update operations. To execute read or update operations, actors need execute rights, which can be granted or revoked by other actors. This view of an information system is captured in a computational model. In this model, we treat each component of the information system, actors as well as objects, uniformly as a sequential agent that performs operations autonomously and jointly with other sequential agents. Each agent is affiliated with a set of local propositions and a set of local operations, as well as with relations that capture the agent's knowledge and belief. An agent's knowledge is determined completely by its local state, and changes in an agent's knowledge are due to operations performed by the agent. The interaction between knowledge and operations is captured by the requirement that the enabling and the effect of an operation be completely determined by the knowledge of the acting agents, and that the knowledge of agents can be changed only by operations in which they participate. We define a specification language with temporal and epistemic operators: for each agent, the logic provides local next and until operators as temporal operators, and local knowledge and belief operators as epistemic operators. We develop a modal tableau-based proof system for a subset of the logic and show its soundness. Completeness can be shown only for a smaller, but still reasonable, subset of the logic; decidability remains an open question. The main difficulty of the tableau system arises from the interaction requirement between knowledge and action. In a detailed example, we demonstrate how the framework can be used to specify semantic constraints and security constraints in information systems.
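An illustrative specification in the spirit of the described language, with operator syntax assumed and the predicates invented: the formula below says that agent a does not know the content of object o until some agent g has granted a the execute right for reading o, combining a local knowledge operator with a local until operator.

    % Syntax assumed for illustration; the predicates are invented.
    \lnot K_a\,\mathit{content}(o)\ \,\mathcal{U}_a\ \,\mathit{granted}(g,\,a,\,\mathit{read}(o))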
Item: Using root cause analysis to handle intrusion detection alarms (Universität Dortmund, 2003-11-19)
Julisch, Klaus; Biskup, Joachim; Krumm, Heiko

Due to a continuously rising number of attacks on the information systems of companies and institutions, intrusion detection systems have gained importance as a new security technology. These systems monitor computers, networks, and other resources and raise alarms when security violations are detected. Unfortunately, today's intrusion detection systems generally raise very many alarms, most of them false, which poses the problem of how to deal with this flood of false alarms. This dissertation presents a new approach to this problem. Central to the approach is the notion that every alarm has a unique root cause. This dissertation makes the observation that a few dozen root causes are responsible for over 90% of the alarms. Building on this observation, the following two-step method for handling intrusion detection alarms is proposed: the first step identifies root causes that generate many alarms, and the second step removes these root causes, which in most cases strongly reduces the future alarm load. Alternatively, alarms whose root cause is not security-relevant can be removed automatically by filters. To support the discovery of root causes, we introduce a new data mining method for clustering alarms. The method rests on the insight that most root causes manifest themselves in alarm groups with characteristic structural properties. We formalize these structural properties and present a clustering method that finds alarm groups with these properties. In general, such alarm groups make it possible to identify the underlying root causes. Subsequently, the identified root causes can be eliminated, or false alarms can be filtered out; in both cases, the number of alarms that still have to be analyzed in the future decreases. The proposed method for handling alarms is tested in experiments with alarms from 16 different intrusion detection deployments. These experiments confirm that the described alarm clustering method makes it very easy to discover root causes. Moreover, the experiments show that the alarm load can be reduced by 70% on average if the identified root causes are responded to appropriately.
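A simplified sketch of the clustering-by-generalization idea: alarm attributes are abstracted step by step along a taxonomy until large groups emerge whose generalized form hints at one root cause. The taxonomy, attribute encoding, and threshold below are invented for illustration; the dissertation's actual algorithm differs in detail.

    # Taxonomy, attribute encoding, and threshold are invented for illustration;
    # the dissertation's algorithm differs in detail.
    from collections import Counter

    GENERALIZE = {  # one abstraction step per attribute value
        "192.168.1.7": "192.168.1.0/24",
        "192.168.1.9": "192.168.1.0/24",
        "192.168.1.0/24": "any-host",
    }

    def generalize(alarm):
        return tuple(GENERALIZE.get(attr, attr) for attr in alarm)

    def cluster(alarms, min_size):
        groups = Counter(alarms)
        # Abstract step by step until a sufficiently large group emerges (a
        # real implementation needs a termination check for the case that no
        # such group exists).
        while not any(count >= min_size for count in groups.values()):
            groups = Counter(generalize(a) for a in groups.elements())
        return [g for g, count in groups.items() if count >= min_size]

    alarms = [("portscan", "192.168.1.7"), ("portscan", "192.168.1.9")] * 3
    print(cluster(alarms, min_size=5))  # [('portscan', '192.168.1.0/24')]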
Item: Secure mediation between strangers in cyberspace (Universität Dortmund, 2002-11-11)
Karabulut, Yücel; Biskup, Joachim; Krumm, Heiko

This thesis is concerned with solutions to the challenges of secure mediation between strangers in cyberspace. In mediated information systems, clients and information sources are brought together by mediators. The mediation paradigm needs powerful and expressive security mechanisms that take the dynamics and conflicting interests of the mediation participants into account. The thesis presents a security framework for mediation with an emphasis on confidentiality and authenticity. It argues for basing the enforcement of confidentiality and authenticity on certified characterizing properties, such as personal authorization attributes, rather than on identification. In the security framework, the specification and enforcement of permissions are based on public-key infrastructures, which allow characterizing properties to be bound to public keys.

Item: Approximate similarity search in metric spaces (Universität Dortmund, 2002-07-17)
Amato, Giuseppe; Fuhr, Norbert; Zezula, Pavel

There is an urgent need to improve the efficiency of similarity queries. For this reason, this thesis investigates approximate similarity search in the setting of metric spaces. Four different approximation techniques are proposed, each of which obtains high performance at the price of tolerable imprecision in the results. Measures are defined to quantify the performance improvement obtained and the quality of the approximations. The proposed techniques were tested on various synthetic and real-life files. The results of the experiments confirm the hypothesis that high-quality approximate similarity search can be performed at a much lower cost than exact similarity search: the proposed approaches improve efficiency by up to two orders of magnitude while guaranteeing a good quality of approximation. The most promising of the proposed techniques exploits the measurement of the proximity of ball regions in metric spaces. The proximity of two ball regions is defined as the probability that data objects are contained in their intersection. This probability can easily be obtained in vector spaces but is very difficult to measure in generic metric spaces, where only the distance distribution is available and the data distribution cannot be used. Alternative techniques that can be used to estimate this probability in metric spaces are thus also proposed, discussed, and validated in the thesis.
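Written out as a formula, with notation assumed for illustration, the proximity measure described above is the probability that a random data object falls into the intersection of the two ball regions:

    % Notation assumed: B(c, r) is the ball with center c and radius r, and X
    % is a random data object.
    \mathit{prox}\bigl(B(c_1,r_1),\,B(c_2,r_2)\bigr)
      \;=\; \Pr\bigl[\,X \in B(c_1,r_1) \cap B(c_2,r_2)\,\bigr]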