LS 01 Logik in der Informatik

Recent Submissions

  • Item
    Query evaluation revisited: parallel, distributed, via rewritings
    (2024) Spinrath, Christopher; Schwentick, Thomas; Pichler, Reinhard
    This is a thesis on query evaluation in parallel and distributed settings, and on structurally simple rewritings. It consists of three parts. In the first part, we investigate the efficiency of constant-time parallel evaluation algorithms, that is, the number of required processors or, asymptotically equivalently, the work required to evaluate queries in constant time. It is known that relational algebra queries can be evaluated in constant time. However, work-efficiency has not been a focus, and indeed the known evaluation algorithms yield huge (polynomial) work bounds. We establish work-efficient constant-time algorithms for several query classes: (free-connex) acyclic, semi-join algebra, and natural join queries; the latter in the worst-case framework. The second part is about deciding parallel-correctness of distributed evaluation strategies: given a query and policies specifying how data is distributed and communicated among multiple servers, does the distributed evaluation yield the same result as the classical evaluation, for every database? Ketsman et al. proved that parallel-correctness for Datalog is undecidable, by reduction from the undecidable containment problem for Datalog. We show that parallel-correctness is already undecidable for monadic and frontier-guarded Datalog queries, for which containment is decidable. However, deciding parallel-correctness for frontier-guarded Datalog and constraint-based communication policies satisfying a certain property is 2ExpTime-complete. Furthermore, we obtain the same bounds for the parallel-boundedness problem, which asks whether the number of required communication rounds is bounded over all databases. The third part is about structurally simple rewritings. The (classical) rewriting problem asks whether, for a given query and a set of views, there is a query over the views, called a rewriting, that is equivalent to the given query. We study the variant of this problem for (subclasses of) conjunctive queries and views that asks for a structurally simple rewriting. We prove that, if the given query is acyclic, an acyclic rewriting exists whenever there is any rewriting at all. Analogous statements hold for free-connex acyclic, hierarchical, and q-hierarchical queries. Furthermore, we prove that the problem is NP-hard, even if the given query and the views are acyclic or hierarchical. It becomes tractable if the views are free-connex acyclic or q-hierarchical (and the arity of the database schema is bounded).
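To make acyclic query evaluation concrete, here is a minimal Python sketch of the classic Yannakakis-style semi-join reduction for an acyclic join, the technique family that work-efficient algorithms build on. The relations, join tree, and values are invented for illustration; this sequential sketch does not model the constant-time parallel setting of the thesis.

```python
# Minimal sketch of Yannakakis-style evaluation for the acyclic join
# R(x,y) |><| S(y,z) |><| T(z,w). Relations and values are invented.

def semijoin(left, right, lcol, rcol):
    """Keep the tuples of `left` whose `lcol` value appears in `right[rcol]`."""
    keys = {t[rcol] for t in right}
    return [t for t in left if t[lcol] in keys]

R = [(1, 2), (3, 4)]          # R(x, y)
S = [(2, 5), (4, 6), (7, 8)]  # S(y, z)
T = [(5, 9)]                  # T(z, w)

# Bottom-up semi-join pass along the join tree R - S - T ...
S = semijoin(S, T, 1, 0)   # drop S-tuples with no T-partner
R = semijoin(R, S, 1, 0)   # drop R-tuples with no S-partner
# ... and a top-down pass, after which every remaining tuple
# participates in some answer (no dangling tuples).
S = semijoin(S, R, 0, 1)
T = semijoin(T, S, 0, 1)

# Joining the reduced relations now takes time proportional to
# input plus output, the hallmark of acyclic query evaluation.
result = [(x, y, z, w) for (x, y) in R for (y2, z) in S if y == y2
          for (z2, w) in T if z == z2]
print(result)  # [(1, 2, 5, 9)]
```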
  • Item
    A conditional perspective of belief revision
    (2023) Sezgin, Meliha; Kern-Isberner, Gabriele; Fermé, Eduardo
    Belief Revision is a subarea of Knowledge Representation and Reasoning (KRR) that investigates how to rationally revise an intelligent agent's beliefs in response to new information. There are several approaches to belief revision, but one well-known approach is the AGM model, which is rooted in work by Alchourrón, Gärdenfors, and Makinson. This model provides a set of axioms defining desirable properties of belief revision operators, which manipulate the agent's belief set represented as a set of propositional formulas. A famous extension to the classical AGM framework of Belief Revision is Darwiche and Pearl's approach to iterated belief revision. They uncovered that the key to rational behavior under iteration is adequate preservation of conditional beliefs, i.e., beliefs the agent is willing to accept in light of (hypothetical) new information. Therefore, they introduced belief revision operators that modify the agent's belief state, built from conditional beliefs. Kern-Isberner fully axiomatized a principle of conditional preservation for belief revision, which captures the core of adequate treatment of conditional beliefs during revision. This powerful axiom provides the necessary conceptual framework for revising belief states with sets of conditionals as input, and it shows that conditional beliefs are subtle but essential for studying the process of belief revision. This thesis provides a conditional perspective on Belief Revision for different belief revision scenarios. In the first part, we introduce and investigate a notion of locality for belief revision operators on the semantic level. To this end, we exploit the unique features of conditionals, which allow us to set up local cases and revise according to these cases, such that the complexity of the revision task is reduced significantly. In the second part, we consider the general setting of belief revision with respect to additional meta-information accompanying the input information. We demonstrate the versatility and flexibility of conditionals as input for belief revision operators by reducing the parameterized input to a conditional one for two well-known parameterized belief revision operators that are similarly motivated but very different in their technical execution. Our results show that considering conditional beliefs as input for belief revision operators provides a gateway to new insights into the dynamics of belief revision.
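As background for the role of conditional beliefs, the following small sketch shows the standard acceptance condition for a conditional (B|A) under a Spohn-style ranking function. The worlds and ranks are invented, and this is textbook background rather than an operator from the thesis.

```python
# A toy ordinal conditional function (OCF) kappa over worlds, here
# pairs of truth values for propositions (a, b). Rank 0 = most plausible.

kappa = {
    (True,  True):  0,   # a, b
    (True,  False): 2,   # a, not b
    (False, True):  1,
    (False, False): 1,
}

def rank(event):
    """kappa of a set of worlds: minimum rank of its members (inf if empty)."""
    ranks = [r for w, r in kappa.items() if event(w)]
    return min(ranks) if ranks else float("inf")

def accepts_conditional(antecedent, consequent):
    """(B|A) is accepted iff kappa(A and B) < kappa(A and not B)."""
    ab     = rank(lambda w: antecedent(w) and consequent(w))
    a_notb = rank(lambda w: antecedent(w) and not consequent(w))
    return ab < a_notb

# The agent believes "if a then b": worlds with a, b are strictly
# more plausible than worlds with a, not b.
print(accepts_conditional(lambda w: w[0], lambda w: w[1]))  # True
```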
  • Item
    On establishing robust consistency in answer set programs
    (2022-09-19) Thevapalan, Andre; Kern-Isberner, Gabriele
    Answer set programs used in real-world applications often require that the program is usable with different input data. This, however, can often lead to contradictory statements and consequently to an inconsistent program. Causes for potential contradictions in a program are conflicting rules. In this paper, we show how to ensure that a program P remains non-contradictory given any allowed set of such input data. For that, we introduce the notion of conflict-resolving λ-extensions. A conflict-resolving λ-extension for a conflicting rule r is a set λ of (default) literals such that extending the body of r by λ resolves all conflicts of r at once. We investigate the properties that suitable λ-extensions should possess and, building on that, we develop a strategy to compute all such conflict-resolving λ-extensions for each conflicting rule in P. We show that a conflict resolution process that successively resolves conflicts using λ-extensions eventually yields a program that remains non-contradictory given any allowed set of input data.
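A minimal sketch of the underlying notion of conflicting rules, with an invented and highly simplified rule encoding: two rules conflict if their heads are complementary and their bodies can fire on the same input; a λ-extension then enlarges one body so that the joint firing is blocked.

```python
# Rules are (head, body) pairs; "-" marks classical negation. This
# encoding and the example rules are inventions for illustration.

def neg(lit):
    """Classical complement: p <-> -p."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def bodies_jointly_satisfiable(b1, b2):
    """True if no literal in one body is contradicted in the other."""
    joint = b1 | b2
    return all(neg(l) not in joint for l in joint)

def in_conflict(r1, r2):
    (h1, b1), (h2, b2) = r1, r2
    return h1 == neg(h2) and bodies_jointly_satisfiable(b1, b2)

# r1:  allowed :- request.      r2: -allowed :- request, external.
r1 = ("allowed",  {"request"})
r2 = ("-allowed", {"request", "external"})
print(in_conflict(r1, r2))  # True: both can fire on {request, external}

# One illustrative conflict-resolving extension adds a default literal
# blocking the joint firing; computing all such lambda-extensions is
# what the paper develops.
r1_resolved = ("allowed", {"request", "not external"})
```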
  • Item
    On the correspondence between abstract dialectical frameworks and nonmonotonic conditional logics
    (2021-07-06) Heyninck, Jesse; Kern-Isberner, Gabriele; Thimm, Matthias; Skiba, Kenneth
    The exact relationship between formal argumentation and nonmonotonic logics is a research topic that keeps on eluding researchers despite recent intensified efforts. We contribute to a deeper understanding of this relation by investigating characterizations of abstract dialectical frameworks in conditional logics for nonmonotonic reasoning. We first show that in general, there is a gap between argumentation and conditional semantics when applying several intuitive translations, but then prove that this gap can be closed when focusing on specific classes of translations.
  • Item
    Ranking kinematics for revising by contextual information
    (2021-05-28) Sezgin, Meliha; Kern-Isberner, Gabriele; Beierle, Christoph
    Probability kinematics is a leading paradigm in probabilistic belief change. It is based on the idea that conditional beliefs should be independent of changes to their antecedents' probabilities. In this paper, we propose a re-interpretation of this paradigm for Spohn's ranking functions, which we call Generalized Ranking Kinematics, as a new principle for iterated belief revision of ranking functions by sets of conditional beliefs with respect to their specific subcontext. By taking semantical independencies into account, we can reduce the complexity of the revision task to local contexts. We show that global belief revision can be set up from revisions on the local contexts via a merging operator. Furthermore, we formalize a variant of the Ramsey test based on the idea of local contexts, which connects conditional and propositional revision in a straightforward way. We extend the belief change methodology of c-revisions to strategic c-revisions, which will serve as a proof of concept.
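For context, a small sketch of Spohn's (A, n)-conditionalisation, the basic ranking-function change operation that Generalized Ranking Kinematics refines to local contexts; the worlds and ranks below are invented examples.

```python
# Spohn's (A, n)-conditionalisation of a ranking function kappa.

def cond_rank(kappa, event):
    return min(kappa[w] for w in kappa if event(w))

def conditionalise(kappa, A, n):
    """After revision, A-worlds start at rank 0 and not-A-worlds at rank n."""
    kA  = cond_rank(kappa, A)
    knA = cond_rank(kappa, lambda w: not A(w))
    return {w: (r - kA if A(w) else r - knA + n) for w, r in kappa.items()}

kappa = {"ab": 1, "a-b": 2, "-ab": 0, "-a-b": 3}
A = lambda w: w.startswith("a")            # the proposition "a"
print(conditionalise(kappa, A, 2))
# {'ab': 0, 'a-b': 1, '-ab': 2, '-a-b': 5}: a is now believed with firmness 2
```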
  • Item
    Reasoning about distributed relational data and query evaluation
    (2019) Geck, Gaetano; Schwentick, Thomas; Segoufin, Luc
    Large data sets are often stored in a distributed fashion to increase the reliability of systems and the efficiency of query evaluation over them. While some query operators -- like selections and projections -- naturally lend themselves to parallel evaluation, others -- like joins -- demand specific distribution patterns. For relational databases, a common approach to evaluating queries in parallel relies on rather simple distribution patterns for binary joins and computes the query result according to some query plan, operator by operator. Often, this requires the redistribution of large intermediate results (possibly larger than the input and/or output) and may thus lead to unnecessarily long processing times. Therefore, especially in the last decade, more elaborate distribution patterns that depend on the whole query have been studied and shown to allow more efficient query evaluation in several cases by reducing the amount of communication between servers. Ameloot et al. have described a setting where query evaluation is studied for a broad range of distribution patterns. Their work focuses on the problem of deciding whether a query can be evaluated correctly under a given distribution pattern. More particularly, they have considered two problems: "parallel correctness", where the pattern is specified explicitly, and "parallel-correctness transfer", where the pattern is known to be appropriate for another query. This thesis comprises the author's contributions to the complexity-theoretic investigation of these problems for conjunctive queries (and extensions thereof). These contributions complement the main characterisations and some additional complexity results by Ameloot et al. Furthermore, this thesis contains some new characterisations for "polarised" queries. Via these characterisations, parallel correctness and parallel-correctness transfer can be translated into questions about the co-occurrence of certain facts, induced by the query, on some server. Such questions and others can be modelled by "distribution dependencies", a variant of the well-known tuple- and equality-generating dependencies. Modelling via these constraints allows a more general description of distribution patterns for distributed relational data. The third contribution of this thesis is the study of the implication problem for distribution dependencies, providing lower and upper bounds for some fragments.
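To illustrate what parallel correctness of a distribution pattern means, here is a toy sketch (invented data and hash-based policy) for the join Q(x, y, z) <- R(x, y), S(y, z): distributing both relations on the shared variable y makes the union of the local results coincide with the centralised result.

```python
# A simple distribution policy and a parallel-correctness check on one
# database; data, policy, and server count are invented.

SERVERS = 3
h = lambda v: hash(v) % SERVERS  # policy: a fact goes to server h(y)

R = [(1, 2), (3, 4), (5, 4)]
S = [(2, 7), (4, 8)]

def join(r, s):
    return [(x, y, z) for (x, y) in r for (y2, z) in s if y == y2]

# Distribute, evaluate locally, take the union of the local results.
local_R = {k: [t for t in R if h(t[1]) == k] for k in range(SERVERS)}
local_S = {k: [t for t in S if h(t[0]) == k] for k in range(SERVERS)}
distributed = {t for k in range(SERVERS) for t in join(local_R[k], local_S[k])}

# The policy sends every pair of joining facts to a common server,
# so the distributed result equals the centralised one.
assert distributed == set(join(R, S))
print(sorted(distributed))  # [(1, 2, 7), (3, 4, 8), (5, 4, 8)]
```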
  • Item
    Dynamic expressibility under complex changes
    (2019) Vortmeier, Nils; Schwentick, Thomas; Vianu, Victor
    Whenever a database is changed, any previously computed and stored result of a query evaluation on that database becomes invalid and needs to be updated. This update can in principle be conducted by re-evaluating the query from scratch. However, this approach might be too inefficient, especially for large databases. Instead, it is often advantageous to make use of the old query result and possibly further stored auxiliary information. A descriptive framework for studying this dynamic setting of query evaluation was introduced by Patnaik and Immerman in 1994, independently of a very similar formalisation by Dong, Su and Topor that was first published in 1992 and 1993. In this dynamic descriptive complexity framework, auxiliary relations are stored that hold the result of a fixed query as well as further auxiliary information. The update of these auxiliary relations after a change to the input database is specified by first-order formulas that may refer to the input database as well as to the stored auxiliary relations. The dynamic complexity class DynFO contains all queries for which the result can be maintained this way under some specified set of changes. Most of the previous work in the DynFO framework focussed on changes that are insertions and deletions of single tuples. In this thesis we also consider more complex changes; in particular, we study changes that insert or delete a set of tuples of bounded size, and changes that are specified by first-order definable queries on the input. The main contribution of this thesis is an investigation of the expressive power of first-order update formulas in the DynFO framework with respect to these classes of complex changes. We show strong DynFO maintainability results under complex changes for queries such as the reachability query for directed and undirected graphs. We also give non-maintainability results for certain restricted dynamic settings. Some of our maintainability results rely on a new technique of dynamic query maintenance. This technique is also used to show that all queries that are definable in monadic second-order logic are in DynFO under single-edge changes of input graphs of bounded treewidth. We experimentally evaluate the dynamic approach and compare the performance of query evaluation from scratch with the performance of different strategies of dynamic query maintenance under complex changes.
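As a concrete taste of the framework, the following sketch spells out the classic first-order update for maintaining reachability (reflexive transitive closure) under single-edge insertions, the simple kind of change that the thesis generalises. Deletions and the complex changes studied in the thesis need genuinely more machinery; this shows the insertion case only.

```python
# First-order update for the transitive closure TC under inserting (u, v):
#     TC'(x, y)  <-  TC(x, y)  OR  ( TC(x, u) AND TC(v, y) ).
# Each update is a single FO formula evaluated over the old TC.

def insert_edge(tc, nodes, u, v):
    return {(x, y) for x in nodes for y in nodes
            if (x, y) in tc or ((x, u) in tc and (v, y) in tc)}

nodes = {1, 2, 3}
tc = {(n, n) for n in nodes}          # empty graph: TC is the identity
tc = insert_edge(tc, nodes, 1, 2)     # add edge 1 -> 2
tc = insert_edge(tc, nodes, 2, 3)     # add edge 2 -> 3
print((1, 3) in tc)                   # True: 1 reaches 3 via 2
```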
  • Item
    Context-free games on strings and nested words
    (2017) Schuster, Martin; Schwentick, Thomas; Löding, Christof
    Context-free games are a formal model which, in its simplest form, generalises the derivation mechanism of context-free grammars into a game for two players (called Juliet and Romeo): in a given sentential form (i.e., a string of terminal and nonterminal symbols), Juliet selects a nonterminal symbol to be replaced, whereupon Romeo decides, in accordance with the derivation rules, by what this nonterminal symbol is replaced. Juliet's winning condition in such a game is to reach a sentential form in a given target language, whereas Romeo's task is to prevent this. The central algorithmic problem for context-free games asks, given a game and an initial sentential form, whether Juliet has a winning strategy in the given game on that sentential form. The central practical application of context-free games lies in the modelling of Active XML documents, i.e., XML documents containing references to external services that can be called at runtime to insert current data into the document. Against this background, it makes sense to consider extensions of the above context-free games to nested words, i.e., XML-like linearisations of trees into strings. Further practically motivated generalisations include the modelling of syntactic or semantic handling of call parameters when external references are invoked. The aim of this dissertation is to provide a largely complete overview of the current state of research on context-free games on strings and nested words. To this end, it gives complexity-theoretic classifications of the winning problem (and related problems) for various variants of context-free games on strings and nested words, and surveys proof methods and algorithmic techniques for dealing with this winning problem. As part of this treatment, it also presents fundamental results on relevant automaton models over nested words, including variants of alternating automata and transducers.
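To make the game concrete, a toy Python sketch of a single play on strings, with an invented grammar and target language and a deliberately cooperative Romeo; the thesis is concerned with the much harder question of deciding whether Juliet has a winning strategy against every Romeo.

```python
# One play of a context-free game on strings: Juliet picks an occurrence
# of a nonterminal, Romeo picks which rule rewrites it. Grammar,
# strategies, and target are invented for illustration.

rules = {"X": ["ab", "aXb"]}                 # Romeo's choices for X
target = lambda w: w == "aabb"               # Juliet wins on this language

def play(word, juliet, romeo, max_rounds=10):
    for _ in range(max_rounds):
        if target(word):
            return True
        pos = juliet(word)                   # Juliet: position of a nonterminal
        if pos is None:
            return target(word)
        rhs = romeo(word, pos)               # Romeo: a right-hand side for it
        word = word[:pos] + rhs + word[pos + 1:]
    return target(word)

leftmost = lambda w: next((i for i, c in enumerate(w) if c.isupper()), None)
cooperative = lambda w, pos: "aXb" if len(w) < 3 else "ab"

print(play("X", leftmost, cooperative))      # True: X -> aXb -> aabb
```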
  • Item
    Logics on data words
    (2016) Kara, Ahmet; Schwentick, Thomas; Bollig, Benedikt
    We investigate logics on data words, i.e., words where each position is labelled by some propositions from a finite set and by some data values from an infinite domain. A basic motivation for the study of these logics, called data logics in this work, is that data words are a suitable model to represent traces of concurrent systems with unboundedly many interacting processes. In such representations data values stand for process IDs. Thus, data logics can be used to formulate requirements on such traces. We first study the expressivity and complexity of the satisfiability problem for these logics. Then, we investigate suitable models for concurrent systems with unboundedly many processes. Finally, we analyse the model checking problem for such systems in the case that data logics are used to specify system requirements. One of our main results is that, despite the bad properties of data logics with respect to satisfiability, there are important cases in which model checking with data logics has moderate complexity. Hence, our results motivate further investigations with the aim of finding interesting models and data logics which can be used in practical model checking tools.
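A data word can be illustrated directly: a sequence of (label, data value) pairs, where data values play the role of process IDs. The sketch below checks a typical data-logic style property ("every request is eventually granted to the same process"); the property and trace are invented examples, not taken from the thesis.

```python
# A data word as a list of (label, data value) pairs; the check mirrors
# a formula that compares data values across positions.

def every_request_granted(word):
    return all(
        any(l2 == "grant" and d2 == d for (l2, d2) in word[i + 1:])
        for i, (l, d) in enumerate(word)
        if l == "request"
    )

trace = [("request", 7), ("request", 3), ("grant", 3), ("grant", 7)]
print(every_request_granted(trace))                    # True
print(every_request_granted([("request", 7)]))         # False
```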
  • Item
    Belief revision, non-monotonic reasoning and secrecy for epistemic agents
    (2015) Krümpelmann, Patrick; Kern-Isberner, Gabriele; Beierle, Christoph
    Software agents are increasingly used to handle the information of individuals and companies. They also exchange this information with other software agents and humans. This raises the need for sophisticated methods that allow such agents to represent information, change it, reason with it, and protect it. We consider these needs for communicating autonomous agents with incomplete information in a partially observable, dynamic environment. The protection of secret information requires agents to consider the information held by other agents, and the possible inferences these agents can draw. Further, they have to keep track of this information and anticipate the effects of their actions. In the setting we consider, the preservation of secrecy is not always possible. Consequently, an agent has to be able to evaluate and minimize the degree to which secrecy is violated. Incomplete information calls for non-monotonic logics, which allow tentative conclusions to be drawn. A dynamic environment calls for operators that change the agent's information when new information is received. We develop a general framework of agents that represent their information in logical knowledge representation formalisms, with the aim of integrating and combining methods for non-monotonic reasoning, methods for belief change, and methods to protect secret information. For the integration of belief change theory, we develop new change operators that make use of non-monotonic logic in the change process, as well as new operators for non-monotonic formalisms. We formally prove their adherence to quality standards taken and adapted from belief revision theory. Based on the resulting framework, we develop a formal framework for secrecy-aware agents that meet the requirements described above. We consider different settings for secrecy and analyze the requirements to preserve it. For the protection of secrecy, we elaborate on change operations and on the evaluation of actions with respect to secrecy, both declaratively and by providing constructive approaches. We formally prove the adherence of the constructions to the declarative specifications. Further, we develop concrete agent instances of our framework, building on and extending the well-known BDI agent model. We build complete secrecy-aware agents that use our extended BDI model and answer set programming for knowledge representation and reasoning. For the implementation of our agents we have developed Angerona, a Java multi-agent development framework. It provides a general framework for developing epistemic agents and implements most of the approaches presented in this thesis.
  • Item
    Automatic signal enhancements for spectroscopic measurements
    (2013) Schlenke, Jan; Müller, Heinrich; Rudolph, Günter
    Material analysis is a diverse scientific field that often requires a high degree of expert knowledge to achieve satisfactory results. Automation of process steps is therefore essential to give untrained users access to such methods and to reduce the manual workload for experts. This requires robust corrective measures that can be applied to measured data containing severe distortions, with the goal of enabling automatic identification of compounds in so-called real-world spectra, meaning spectra that were recorded under unknown and sometimes adverse conditions outside a laboratory environment. Extracting information from chemical substances in order to classify or even identify their components is a critical task for almost every application that aims to detect and/or identify unknown or hidden compounds. Methods to achieve that goal employ spectroscopic techniques that vary in instrumentation and underlying physical principles. The analysis of the collected data is often non-trivial, and interpretation requires experts with years of training to extract reliable information from measurements. This thesis introduces new techniques that reliably reduce distortions in measurements without the need for fixed models, training sets, or repeated measuring. The development of these new techniques was motivated and accompanied by discussions with practising physicists and chemists who expressed their discontent with available methods. The focus is on measurements acquired outside of controlled laboratory environments, which requires methods that are highly robust against a wide array of possible distortions and do not rely on complex control mechanisms that must be adjusted for every measurement. The work introduces advancements in noise suppression via the wavelet transform that focus on the automatic reduction of noise of varying intensity within a single measurement. Combined with a new baseline estimation technique based on adaptive regression, which is highly robust against distortions and requires only minimal information about expected signal characteristics, this allows automatic processing and reliable data extraction in scenarios where previously existing methods failed to achieve satisfactory results.
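For orientation, here is the textbook core of wavelet-based noise suppression that the thesis refines: a one-level Haar transform, soft-thresholding of the detail coefficients, and the inverse transform, in a self-contained NumPy sketch with an invented signal. The thesis goes well beyond this baseline (noise of varying intensity within one measurement, adaptive baseline estimation).

```python
# One-level Haar wavelet denoising by soft-thresholding; signal invented.
import numpy as np

def haar(x):                      # one decomposition level
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def ihaar(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, thresh):
    a, d = haar(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    return ihaar(a, d)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.exp(-((t - 0.5) ** 2) / 0.005)        # a single spectral peak
noisy = clean + rng.normal(0.0, 0.1, t.size)
denoised = denoise(noisy, 0.2)
print(np.std(noisy - clean), np.std(denoised - clean))  # the error shrinks
```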
  • Item
    Small dynamic complexity classes
    (2015) Zeume, Thomas; Grädel, Erich; Schwentick, Thomas
    The re-evaluation of a query result after modifying a large database can be a time-consuming process, in particular when it is performed from scratch. For this reason previously computed information such as the old query result and (possibly) other auxiliary information is often reused in order to speed up the process. In this thesis, dynamic query evaluation is studied in a descriptive complexity framework independently introduced by Dong, Su and Topor (1992, 1993) and by Patnaik and Immerman (1994). In this framework, for a relational database subject to change, auxiliary relations are maintained with the intention to help answering a query. When a modification to the database, that is, an insertion or deletion of a tuple, occurs, every auxiliary relation is updated through a first-order query that can refer to both the database and the auxiliary relations. Our main objective is to advance the understanding of the power of the dynamic descriptive complexity framework. We contribute by (a) providing new methods for proving inexpressibility in this dynamic context, and by (b) exploring the structure of small dynamic descriptive complexity classes and their relation to static complexity classes. One of our contributions to the latter aspect helps to confirm the conjecture by Patnaik and Immerman (1997) that reachability can be maintained by first-order update formulas. This has been one of the major open questions in this area.
  • Item
    Komplexität des typechecking-Problems für Top-down XML-Transformationen
    (2013-05-02) Neubauer, Johannes; Schwentick, Thomas
  • Item
    Feature extraction in NMR data analysis
    (2010-07-06) Koh, Hyung-Won; Rahmann, Sven; Hüllermeier, Eyke
  • Item
    Integration, Indexierung und Interaktion hochdimensionaler Datenobjekte
    (2009-03-17) Koll, Konstantin; Dittrich, Gisbert; Müller, Heinrich
  • Item
    Gleichheitsabhängigkeiten zwischen Objekten durch parametrisierte Zuweisungen
    (2007-10-15) Chernuchin, Daniel; Dittrich, Gisbert; Padawitz, Peter
  • Item
    Directed evolutionary algorithms
    (2007-01-05) Berlik, Stefan; Reusch, Bernd; Rudolph, Günter
  • Item
    On theory of information functions presence
    (Dortmund, LS1, 2006) Tsintsadze, Magda
  • Item
    Logics which allow Degrees of Truth and Degrees of Validity
    (Universität Dortmund, 2005-02-07) Lehmke, Stephan; Thiele, Helmut; Reusch, Bernd; Padawitz, Peter; Dubois, Didier
    This dissertation examines, from the standpoint of mathematical logic, semantics of logical systems that can express both vagueness (in the sense of graded truth assessment of logical formulas) and uncertainty (in the sense of graded confidence assessment of logical formulas). Typically, many-valued logics are used to represent vagueness, where formulas are often annotated with concrete truth values for truth assessment. To represent uncertainty (in the sense of incomplete knowledge or confidence), formulas of two-valued logic are annotated with degrees of confidence. A multitude of logical systems with labelled formulas have been described in the literature, with sometimes widely diverging interpretations of the structure and semantics of labels. In some cases, however, the meaning of labels is not precisely defined, which makes the interpretation of inference results difficult or even impossible. As long as no precise quantitative theory, such as probability theory, is used to explain label values, there is no canonical explanation of the meaning of a label. If this then leads to a multitude of possible explanations that cannot be compared against precisely stated criteria, the usefulness of such graded assessments becomes questionable altogether. One way to improve this situation is to classify labelling systems according to fundamental differences in the meaning of the labels. Here, assessments of the degree of truth and of the confidence in the validity of logical formulas are considered in particular. There are well-developed theories of labelled logics belonging to one class or the other. This dissertation describes a very general system for defining labels for the assessment of truth and of confidence, together with the resulting canonical definitions of the notion of a model and of semantic consequence for labelled formulas. The resulting labelled logics are very expressive and allow vagueness, uncertainty, and combinations of both graded concepts to be represented. Semantics of such logics are studied in general and for interesting special cases.
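The central distinction, in miniature: a degree of truth lives inside a many-valued semantics, while a degree of validity is a label expressing how strongly a formula is trusted. The sketch below pairs Łukasiewicz connectives with a simple threshold reading of validity labels; the connectives are standard, but the numbers and the threshold reading are invented for illustration and are far simpler than the general label systems of the dissertation.

```python
# Lukasiewicz connectives over truth degrees in [0, 1].
def t_and(a, b):  return max(0.0, a + b - 1.0)   # strong conjunction
def t_or(a, b):   return min(1.0, a + b)
def t_not(a):     return 1.0 - a
def t_impl(a, b): return min(1.0, 1.0 - a + b)

v = {"tall": 0.7, "heavy": 0.4}                  # graded truth assignment

# Degree of truth of "tall -> heavy" under v:
truth = t_impl(v["tall"], v["heavy"])

# A labelled formula (formula, validity degree) is satisfied by v if
# the formula's truth degree reaches the label:
satisfies = lambda formula_truth, label: formula_truth >= label
print(truth, satisfies(truth, 0.5), satisfies(truth, 0.9))  # ~0.7 True False
```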
  • Item
    Asymmetrische Evolutionsstrategien
    (Universität Dortmund, 2003-01-27) Hildebrand, Lars; Reusch, Bernd; Banzhaf, Wolfgang
    In this thesis, a new evolution strategy is developed which, in contrast to all known evolution strategies, employs a directed mutation operator. The directed mutation is realised by using an asymmetric distribution function. An asymmetric distribution function satisfying the constraints of evolution strategies did not previously exist and is derived in this thesis. Besides the derivation of the distribution function itself, all stochastically relevant properties of this distribution are derived, including its moments, its distribution and density functions, and the corresponding inverse functions. The asymmetric distribution function developed here is kept so general that, for particular parameter settings, it can also realise symmetric distributions, in particular the normal distribution. Building on this distribution function and the corresponding random number generator, the asymmetric evolution strategy is developed. The performance of the asymmetric evolution strategy is demonstrated through benchmark tests and real-world applications, with all tests compared against other variants of evolution strategies.
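Directed mutation can be illustrated in a few lines: a (1+1)-ES whose mutation samples positive and negative deviations with different scales, biasing the search in one direction. Note that this skewed sampler is a stand-in invented for illustration; the dissertation derives its own asymmetric distribution family with the required properties.

```python
# A (1+1)-ES with a crude directed mutation: positive deviations are
# drawn with a larger scale than negative ones. Test function invented.
import numpy as np

rng = np.random.default_rng(1)

def asymmetric_step(scale_neg, scale_pos, size):
    z = np.abs(rng.normal(0.0, 1.0, size))
    sign = np.where(rng.random(size) < 0.5, -1.0, 1.0)
    return np.where(sign < 0, -z * scale_neg, z * scale_pos)

def one_plus_one_es(f, x, steps=2000):
    for _ in range(steps):
        child = x + asymmetric_step(0.05, 0.5, x.size)  # biased toward +
        if f(child) <= f(x):                            # elitist selection
            x = child
    return x

sphere = lambda x: float(np.sum((x - 10.0) ** 2))  # optimum far in + direction
x = one_plus_one_es(sphere, np.zeros(3))
print(np.round(x, 1))  # roughly [10. 10. 10.]
```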