LS 05 Programmiersysteme

Recent Submissions

Now showing 1 - 20 of 41
  • Item
    The power of typed affine decision structures: a case study
    (2023-04-21) Nolte, Gerrit; Schlüter, Maximilian; Murtovi, Alnis; Steffen, Bernhard
    TADS are a novel, concise white-box representation of neural networks. In this paper, we apply TADS to the problem of neural network verification, using them to generate either proofs or concise error characterizations for desirable neural network properties. In a case study, we consider the robustness of neural networks to adversarial attacks, i.e., small changes to an input that drastically change a neural network's perception, and show that TADS can be used to provide precise diagnostics on how and where robustness errors occur. We achieve these results by introducing Precondition Projection, a technique that yields a TADS describing network behavior precisely on a given subset of its input space, and combining it with PCA, a traditional, well-understood dimensionality reduction technique. We show that PCA is easily compatible with TADS. All analyses can be implemented in a straightforward fashion using the rich algebraic properties of TADS, demonstrating the utility of the TADS framework for neural network explainability and verification. While TADS do not yet scale as efficiently as state-of-the-art neural network verifiers, we show that, using PCA-based simplifications, they can still scale to medium-sized problems and yield concise explanations for potential errors that can be used for other purposes such as debugging a network or generating new training samples.
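The PCA-based simplification mentioned in the abstract can be sketched independently of the TADS machinery: inputs are projected onto a few dominant directions before analysis. The sketch below is a hedged illustration in plain NumPy; function names and parameters are made up for the example and are not taken from the paper's implementation.

```python
import numpy as np

def pca_basis(samples, n_components):
    """Return the top principal directions of a sample set."""
    centered = samples - samples.mean(axis=0)
    # SVD of the centered data; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components]

def project(x, basis):
    """Map a full-dimensional input to PCA coordinates."""
    return basis @ x

def lift(z, basis):
    """Map PCA coordinates back to the input space (low-rank reconstruction)."""
    return basis.T @ z

rng = np.random.default_rng(0)
# Synthetic data that varies mostly along one direction.
data = rng.normal(size=(200, 1)) @ np.array([[3.0, 1.0, 0.0]])
data += 0.01 * rng.normal(size=(200, 3))
basis = pca_basis(data, 1)
x = data[0]
x_rec = lift(project(x, basis), basis)
# For dominant-direction data, the low-rank reconstruction error is small.
print("reconstruction error:", float(np.linalg.norm(x - x_rec)))
```

In the paper's setting, such a projection restricts the analysis to a low-dimensional subspace of the input space before the (expensive) exact analysis is applied.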
  • Item
    Towards rigorous understanding of neural networks via semantics-preserving transformations
    (2023-05-30) Schlüter, Maximilian; Nolte, Gerrit; Murtovi, Alnis; Steffen, Bernhard
    In this paper, we present an algebraic approach to the precise and global verification and explanation of Rectifier Neural Networks, a subclass of Piece-wise Linear Neural Networks (PLNNs), i.e., networks that semantically represent piece-wise affine functions. Key to our approach is the symbolic execution of these networks that allows the construction of semantically equivalent Typed Affine Decision Structures (TADS). Due to their deterministic and sequential nature, TADS can, similarly to decision trees, be considered as white-box models and therefore as precise solutions to the model and outcome explanation problem. TADS are linear algebras, which allows one to elegantly compare Rectifier Networks for equivalence or similarity, both with precise diagnostic information in case of failure, and to characterize their classification potential by precisely characterizing the set of inputs that are specifically classified, or the set of inputs where two network-based classifiers differ. All phenomena are illustrated through a detailed discussion of a minimal, illustrative example: the continuous XOR function.
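The symbolic-execution idea can be illustrated at a minimal scale: for a one-hidden-layer ReLU network, each activation pattern of the hidden layer fixes one affine piece, and enumerating the patterns yields a piecewise-affine view of the network. This is a hedged sketch of the principle only, not the paper's TADS construction; all weights and names are invented for the example.

```python
import itertools
import numpy as np

# Toy one-hidden-layer ReLU network (weights are arbitrary examples).
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, -0.25])
W2 = np.array([[1.0, 2.0]])
b2 = np.array([0.1])

def affine_pieces(W1, b1, W2, b2):
    """Return (pattern, A, c) triples: on the region where the activation
    pattern holds, the network computes x -> A @ x + c."""
    pieces = []
    for pattern in itertools.product([0, 1], repeat=W1.shape[0]):
        D = np.diag(pattern).astype(float)  # which ReLUs are active
        A = W2 @ D @ W1
        c = W2 @ D @ b1 + b2
        pieces.append((pattern, A, c))
    return pieces

def forward(x):
    """Concrete execution of the same network."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

# The symbolic pieces agree with concrete execution: pick the piece whose
# activation pattern matches the input.
x = np.array([0.3, 0.7])
pattern = tuple((W1 @ x + b1 > 0).astype(int))
for p, A, c in affine_pieces(W1, b1, W2, b2):
    if p == pattern:
        assert np.allclose(A @ x + c, forward(x))
```

The exponential number of patterns (2 per neuron) is exactly why the companion case-study paper combines this view with precondition projection and PCA-based simplification.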
  • Item
    Simplicity-oriented lifelong learning of web applications
    (2023) Bainczyk, Julius Alexander; Steffen, Bernhard; Hähnle, Reiner
    Nowadays, web applications are ubiquitous. Entire business models revolve around making their services available over the Internet, anytime, anywhere in the world. Due to today’s rapid development practices, software changes are released faster than ever before, creating the risk of losing control over the quality of the delivered products. To counter this, appropriate testing methodologies must be deeply integrated into each phase of the development cycle to identify potential defects as early as possible and to ensure that the product operates as expected in production. The use of low- and no-code tools and code generation technologies can drastically reduce the implementation effort by using well-tailored (graphical) Domain-Specific Languages (DSLs) to focus on what is important: the product. DSLs and corresponding Integrated Modeling Environments (IMEs) are a key enabler for quality control because many system properties can already be verified at a pre-product level. However, to verify that the product fulfills given functional requirements at runtime, end-to-end testing is still a necessity. This dissertation describes the implementation of a lifelong learning framework for the continuous quality control of web applications. In this framework, models representing user-level behavior are mined from running systems using active automata learning, and system properties are verified using model checking. All this is achieved in a continuous and fully automated manner. Code changes trigger testing, learning, and verification processes which generate feedback that can be used for model refinement or product improvement. The main focus of this framework is simplicity. 
    On the one hand, it allows Quality Assurance (QA) engineers to apply learning-based testing techniques to web applications with minimal effort, even without writing code; on the other hand, it allows automation engineers to easily implement these techniques in modern development workflows driven by Continuous Integration and Continuous Deployment (CI/CD). The effectiveness of this framework is amplified by the Language-Driven Engineering (LDE) approach to web development. Key to this is the text-based DSL iHTML, which enables the instrumentation of user interfaces to make web applications learnable by design, i.e., they adhere to practices that allow fully automated inference of behavioral models without prior specification of an input alphabet. By designing code generators to generate instrumented web-based products, the effort for quality control in the LDE ecosystem is minimized and reduced to formulating runtime properties in temporal logic and verifying them against learned models.
  • Item
    Model-based quality assurance of instrumented context-free systems
    (2023) Frohme, Markus; Steffen, Bernhard; Jonsson, Bengt
    The ever-growing complexity of today’s software and hardware systems makes quality assurance (QA) a challenging task. Abstraction is a key technique for dealing with this complexity because it allows one to skip non-essential properties of a system and focus on the important ones. Crucial for the success of this approach is the availability of adequate abstraction models that strike a fine balance between simplicity and expressiveness. This thesis presents the formalisms of systems of procedural automata (SPAs), systems of behavioral automata (SBAs), and systems of procedural Mealy machines (SPMMs). The three model types describe systems which consist of multiple procedures that can mutually call each other, including recursion. While the individual procedures are described by regular automata and therefore are easy to understand, the aggregation of procedures towards systems captures the semantics of context-free systems, offering the expressiveness necessary for representing procedural systems. A central concept of the proposed model types is an instrumentation that exposes the internal structure of systems by making calls to and returns from procedures observable. This instrumentation allows for a notion of rigorous (de-) composition which enables a translation between local (procedural) views and global (holistic) views on a system. On the basis of this translation, this thesis presents algorithms for the verification, testing, and learning of (instrumented) context-free systems, covering a broad spectrum of practical QA tasks. Starting with SPAs as a “base” formalism for context-free systems, the flexibility of this concept is shown by including features such as prefix-closure (SBAs) and dialog-based transductions (SPMMs). In a comparison with related formalisms, this thesis shows that the simplicity of the proposed model types not only increases the understandability of models but can also improve the performance of QA tasks. 
This makes SPAs, SBAs, and SPMMs a powerful tool for tackling the practical challenges of assuring the quality of today’s software and hardware systems.
  • Item
    A lingualization strategy for knowledge sharing in large-scale DevOps
    (2023) Tegeler, Tim; Steffen, Bernhard; Wirsing, Martin
    DevOps has become a generally accepted practice for software projects in the last decade, addressing certain shortcomings of agile software development and accompanying the steadily growing popularity of cloud infrastructure. While it shifts more and more responsibilities towards software engineering teams, the prevailing opinion is to keep DevOps teams small to reduce the complexity of inter-team communication. In circumstances where products outgrow the performance capability of a single team, a microservice architecture enables multiple DevOps teams to contribute to the same application and meet the increased requirements. Since DevOps teams typically operate self-sufficiently and more or less independently inside an organization, such large-scale DevOps environments are prone to knowledge-sharing barriers. Textual Domain-Specific Languages (DSLs) are one of the cornerstones of DevOps and enable key features like automation and infrastructure provisioning. Nonetheless, most commonly accepted DSLs in the context of DevOps are cumbersome and have a steep learning curve. Thus, they fall short of their potential to truly enable cross-functional collaboration and knowledge sharing, not only between development and operations, but across the whole organization. DevOps teams require tools and DSLs that treat knowledge sharing and reuse as first-class citizens in order to operate effectively at a large scale. However, developing DSLs is still presumed to be an expensive task that can easily offset the resulting benefits. This dissertation presents a lingualization strategy for addressing the challenge of knowledge sharing in large-scale DevOps. The basic idea is to provide custom-tailored Domain-Specific Modeling Languages (DSMLs) that target single phases of the DevOps lifecycle and ease the DevOps adoption for newly formed teams.
    The paradigm of Language-Driven Engineering (LDE) bridges the semantic gap between stakeholders by means of custom-tailored DSMLs and thus is a natural fit for knowledge sharing. Key to a successful practice of LDE is a new class of stakeholders: in the context of large-scale DevOps, language development can be realized by so-called Meta DevOps teams. Those teams, which themselves practice DevOps internally, manage a centralized repository of small DSMLs and offer them as a service. DevOps teams act as the customers of the Meta DevOps teams: they can request new features or complete new DSMLs and provide feedback on existing DSMLs. The presented Rig modeling environment serves as an exemplary DSML that targets Continuous Integration and Deployment (CI/CD), one of the most important building blocks of DevOps. Rig comes with an associated code generator to fully generate CI/CD workflows from graphical models. Those graphical models provide executable documentation and assist knowledge sharing between stakeholders. The fundamental modeling concepts of the lingualization strategy are evaluated against previously published requirements by Bordeleau et al. for a DevOps modeling framework in an industrial context. In addition, Rig is evaluated based on the results of a workshop during the 6th International School on Tool-Based Rigorous Engineering of Software Systems. Both evaluations yield encouraging results and demonstrate the potential of the lingualization strategy to break down knowledge-sharing barriers in large-scale DevOps environments.
  • Item
    Evolution of ecosystems for Language-Driven Engineering
    (2023) Boßelmann, Steve; Steffen, Bernhard; Wirsing, Martin
    Language-Driven Engineering (LDE) is an approach to model-driven software development that creates Integrated Modeling Environments (IMEs) with Domain/Purpose-Specific Languages (PSLs), each tailored towards a specific aspect of the respective system to be modeled, thereby taking the specific needs of developers and other stakeholders into account. Combined with the powerful potential of full code generation, these IMEs can generate complete executable software applications from descriptive models. As these products themselves may again be IMEs, this approach leads to LDE ecosystems of modeling environments with meta-level dependencies. This thesis describes new challenges emerging from changes that affect single components, multiple parts, or even the whole LDE ecosystem. From a top-down perspective, this thesis discusses the necessary support by language definition technology to ensure that corresponding IMEs can be validated, generated and tested on demand. From a bottom-up perspective, the formulation of change requests, their upwards propagation and generalization is presented. Finally, the imposed cross-project knowledge sharing and transfer is motivated, fostering interdisciplinary teamwork and cooperation. Based on multifaceted contributions to full-blown projects on different meta-levels of an exemplary LDE ecosystem, this thesis presents specific challenges in creating and continuously evolving LDE ecosystems and deduces a concept of PUTD effects to systematically address various dynamics and appropriate actions to manage both product-level requests that propagate upwards in the meta-level hierarchy and the downward propagation of changes to ensure product quality and adequate migration of modeled artifacts along the dependency paths.
Finally, the effect of language-driven modeling on the increasingly blurred line between building and using software applications is illustrated to emphasize that the distinction between programming and modeling becomes a mere matter of perspective.
  • Item
    The RERS challenge: towards controllable and scalable benchmark synthesis
    (2021-06-24) Howar, Falk; Jasper, Marc; Mues, Malte; Steffen, Bernhard; Schmidt, David
    This paper (1) summarizes the history of the RERS challenge for the analysis and verification of reactive systems, its profile and intentions, its relation to other competitions, and, in particular, its evolution due to the feedback of participants, and (2) presents the most recent development concerning the synthesis of hard benchmark problems. In particular, the second part proposes a way to tailor benchmarks according to the depths to which programs have to be investigated in order to find all errors. This gives benchmark designers a method to challenge contributors that try to perform well by excessive guessing.
  • Item
    Compositional learning of mutually recursive procedural systems
    (2021-10-05) Frohme, Markus; Steffen, Bernhard
    This paper presents a compositional approach to active automata learning of Systems of Procedural Automata (SPAs), an extension of Deterministic Finite Automata (DFAs) to systems of DFAs that can mutually call each other. SPAs are of high practical relevance, as they allow one to efficiently learn intuitive recursive models of recursive programs after an easy instrumentation that makes calls and returns observable. Key to our approach is the simultaneous inference of individual DFAs for each of the involved procedures via expansion and projection: membership queries for the individual DFAs are expanded to membership queries of the entire SPA, and global counterexample traces are transformed into counterexamples for the DFAs of concerned procedures. This reduces the inference of SPAs to a simultaneous inference of the DFAs for the involved procedures for which we can utilize various existing regular learning algorithms. The inferred models are easy to understand and allow for an intuitive display of the procedural system under learning that reveals its recursive structure. We implemented the algorithm within the LearnLib framework in order to provide a ready-to-use tool for practical application which is publicly available on GitHub for experimentation.
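The expansion step described above can be sketched as follows: a local membership query for one procedure is embedded into a global SPA word by wrapping it in the procedure's call symbol and the shared return symbol, and by splicing a known terminating run into every nested call symbol. All symbols and data structures below are illustrative stand-ins, not the LearnLib API; the actual implementation is in the publicly available LearnLib tool.

```python
# Known terminating sequences per procedure (in the real algorithm these
# are discovered incrementally during learning; here they are given).
terminating = {"P": ["a"], "Q": ["b"]}

def expand(procedure, local_word):
    """Turn a local query for `procedure` into a global SPA word."""
    global_word = [procedure]          # call symbol opens the procedure
    for symbol in local_word:
        if symbol in terminating:      # nested call: splice call/body/return
            global_word += [symbol] + terminating[symbol] + ["R"]
        else:
            global_word.append(symbol) # plain internal action
    global_word.append("R")            # shared return symbol closes it
    return global_word

# The local query "a Q a" for procedure P expands to a well-matched
# global word with balanced calls and returns.
print(expand("P", ["a", "Q", "a"]))
# ['P', 'a', 'Q', 'b', 'R', 'a', 'R']
```

The projection direction (turning a global counterexample into local counterexamples for the affected procedures) is the inverse of this embedding.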
  • Item
    Towards language-to-language transformation
    (2021-06-18) Kopetzki, Dawid; Lybecait, Michael; Naujokat, Stefan; Steffen, Bernhard
    This paper proposes a simplicity-oriented approach and framework for language-to-language transformation of, in particular, graphical languages. Key to simplicity is the decomposition of the transformation specification into sub-rule systems that separately specify purpose-specific aspects. We illustrate this approach by employing a variation of Plotkin’s Structural Operational Semantics (SOS) for pattern-based transformations of typed graphs in order to address the aspect ‘computation’ in a graph rewriting fashion. Key to our approach are two generalizations of Plotkin’s structural rules: the use of graph patterns as the matching concept in the rules, and the introduction of node and edge types. Types do not only allow one to easily distinguish between different kinds of dependencies, like control, data, and priority, but may also be used to define a hierarchical layering structure. The resulting Type-based Structural Operational Semantics (TSOS) supports a well-structured and intuitive specification and realization of semantically involved language-to-language transformations adequate for the generation of purpose-specific views or input formats for certain tools, like, e.g., model checkers. A comparison with the general-purpose transformation frameworks ATL and Groove illustrates, along the educational setting of our graphical WebStory language, that TSOS provides quite a flexible format for the definition of a family of purpose-specific transformation languages that are easy to use and come with clear guarantees.
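A pattern-based, type-aware rewriting step of the kind TSOS builds on can be sketched on a toy typed graph: a rule matches a typed node pattern, follows only edges of the relevant type, and rewrites the matched part in place. Everything below (the node types, the edge types, and the constant-folding rule itself) is a hypothetical illustration, not the WebStory or TSOS implementation.

```python
def fold_constants(nodes, edges):
    """nodes: id -> (type, value); edges: set of (src, dst, edge_type).
    Rule: an 'add' node whose two data-typed inputs are constants is
    rewritten into a constant node; other edge types are ignored."""
    for nid, (ntype, _) in list(nodes.items()):
        if ntype != "add":
            continue
        # Pattern match: both data inputs of the add node are constants.
        inputs = [s for s, d, t in edges if d == nid and t == "data"]
        if len(inputs) == 2 and all(nodes[i][0] == "const" for i in inputs):
            total = sum(nodes[i][1] for i in inputs)
            # Rewrite: replace the add node, drop the consumed subgraph.
            nodes[nid] = ("const", total)
            for i in inputs:
                edges.discard((i, nid, "data"))
                del nodes[i]
    return nodes, edges

nodes = {"c1": ("const", 2), "c2": ("const", 3), "sum": ("add", None)}
edges = {("c1", "sum", "data"), ("c2", "sum", "data")}
nodes, edges = fold_constants(nodes, edges)
print(nodes["sum"])  # ('const', 5)
```

The edge-type filter is the point of the sketch: a rule about 'computation' can be written without mentioning control or priority edges at all.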
  • Item
    Aligned and collaborative language-driven engineering
    (2022) Zweihoff, Philip; Steffen, Bernhard; Jörges, Sven
    Today's software development is increasingly performed with the help of low- and no-code platforms that follow model-driven principles and use domain-specific languages (DSLs). DSLs support the different aspects of the development and the user's mindset by a tailored and intuitive language. By combining specific languages with real-time collaboration, development environments can be provided whose users no longer need to be programmers. This way, domain experts can develop their solution independently without the need for a programmer's translation and the associated semantic gap. However, the development and distribution of collaborative mindset-supporting IDEs (mIDEs) is enormously costly. Besides the basic challenge of language development, a specialized IDE has to be provided, which should work equally well on all common platforms and individual heterogeneous system setups. This dissertation describes the conception and realization of the web-based, unified environment CINCO Cloud, in which DSLs can be collaboratively developed, used, transformed and executed. By providing full support at all steps, the philosophy of language-driven engineering is enabled and realized for the first time. As a foundation for the unified environment, the infrastructure of cloud development IDEs is analyzed and extended so that new languages can be distributed on-the-fly. Subsequently, concepts for language specialization, refinement and concretization are developed and described to realize the language-driven engineering approach in a dynamic cluster-based environment. In addition, synchronization mechanisms and authorization structures are designed to enable collaboration between the users of the environment. Finally, the central aligned processes within the CINCO Cloud for developing, using, transforming and executing a DSL are illustrated to clarify how the dynamic system behaves.
  • Item
    Aggressive aggregation
    (2021) Gossen, Frederik Jakob; Margaria, Tiziana; Steffen, Bernhard
    Among the first steps in a compilation pipeline is the construction of an Intermediate Representation (IR), an in-memory representation of the input program. Any attempt at program optimisation, both in terms of size and running time, has to operate on this structure. There may be one or multiple such IRs; however, most compilers use some form of a Control Flow Graph (CFG) internally. This representation clearly aims at general-purpose programming languages, for which it is well suited and allows for many classical program optimisations. On the other hand, a growing structural difference between the input program and the chosen IR can lose or obfuscate information that can be crucial for effective optimisation. With today’s rise of a multitude of different programming languages, Domain-Specific Languages (DSLs), and computing platforms, the classical machine-oriented IR is reaching its limits and a broader variety of IRs is needed. This realisation yielded, e.g., the Multi-Level Intermediate Representation (MLIR), a compiler framework that facilitates the creation of a wide range of IRs and encourages their reuse among different programming languages and the corresponding compilers. In this modern spirit, this dissertation explores the potential of Algebraic Decision Diagrams (ADDs) as an IR for (domain-specific) program optimisation. The data structure has remained the state of the art for Boolean function representation for more than thirty years and is well-known for its optimality in size and depth, i.e., running time. As such, it is ideally suited to represent the corresponding classes of programs in the role of an IR. We will discuss its application in a variety of different program domains, ranging from DSLs to machine-learned programs and even to general-purpose programming languages. Two representatives for DSLs, a graphical and a textual one, prove the adequacy of ADDs for the program optimisation of modelled decision services.
    The resulting DSLs facilitate experimentation with ADDs and provide valuable insight into their potential and limitations: input programs can be aggregated in a radical fashion, at the risk of the occasional exponential growth. With the aggregation of large Random Forests into a single aggregated ADD, we bring this potential to a program domain of practical relevance. The results are impressive: both running time and size of the Random Forest program are reduced by multiple orders of magnitude. It turns out that this ADD-based aggregation can be generalised, even to general-purpose programming languages. The resulting method achieves impressive speedups for a seemingly optimal program: the iterative Fibonacci implementation. Altogether, ADDs facilitate effective program optimisation where the input programs allow for a natural transformation to the data structure. In these cases, they have proven to be an extremely powerful tool for the optimisation of a program’s running time and, in some cases, of its size. The exploration of their potential as an IR has only started and deserves attention in future research.
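The speedup principle behind the Fibonacci result can be sketched without the ADD machinery: the loop body (a, b) -> (b, a + b) is a linear map, so n iterations aggregate into a single matrix power that square-and-multiply evaluates in O(log n) steps. This is an illustrative analogy for "aggregating" a seemingly optimal iterative program, not the dissertation's ADD-based construction.

```python
def fib_iterative(n):
    """Baseline: n applications of the loop body."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def mat_mul(x, y):
    """2x2 integer matrix product."""
    return [[x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1]],
            [x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1]]]

def fib_aggregated(n):
    """Aggregate n loop steps into one power of the step matrix [[0,1],[1,1]]
    via square-and-multiply (O(log n) matrix products)."""
    result, step = [[1, 0], [0, 1]], [[0, 1], [1, 1]]
    while n:
        if n & 1:
            result = mat_mul(result, step)
        step = mat_mul(step, step)
        n >>= 1
    return result[0][1]

assert fib_aggregated(10) == fib_iterative(10) == 55
```

The point of the analogy: once the loop body is represented in a composable algebraic form, repeated application collapses into a single, much cheaper object, which mirrors how ADD composition aggregates decision programs.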
  • Item
    Synthesizing realistic verification tasks
    (2021) Jasper, Marc; Steffen, Bernhard; Siegel, Stephen F.
    This thesis by publications focuses on realistic benchmarks for software verification approaches. Such benchmarks are crucial to an evaluation of verification tools which helps to assess their capabilities and inform potential users. This work provides an overview of the current landscape of verification tool evaluation and compares manual and automatic approaches to benchmark generation. The main contribution of this thesis is a new framework to synthesize realistic verification tasks. This framework allows the generation of verification tasks that target sequential or parallel programs. Starting from a realistic formal specification, a Büchi automaton is synthesized while ensuring realistic hardness characteristics such as the number of computation steps after which errors occur. The resulting automaton is then transformed to a Mealy machine to produce a sequential program in C or Java, or to a parallel composition of modal transition systems. A refinement of the latter is encoded in Promela or as a Petri net. A task that targets such a parallel system requires checking whether or not a given interruptible temporal property is satisfied or whether parallel systems are weakly bisimilar. Temporal properties may include branching-time and linear-time formulas. For the latter, it can be ensured that every parallel component matters during verification. This thesis contains additional contributions that build on top of the attached publications. These are (i) a generalization of interruptibility that covers branching-time properties, (ii) an improved generation of parallel contexts, and (iii) a definition of alphabet extension on a semantic level. Alphabet extensions are a key part of ensuring the hardness of generated tasks that target parallel systems. Benchmarks that were synthesized using the presented framework have been employed in the international Rigorous Examination of Reactive Systems (RERS) Challenge during the last five years.
Several international teams attempted to solve the corresponding verification tasks and used ten different tools to verify the newly added parallel programs. Apart from the evaluation of these tools, this endeavor motivated participants of RERS to conceive new formal techniques to verify parallel systems. The result of this thesis thus helps to improve the state of the art of software verification.
  • Item
    Characteristic invariants in Hennessy-Milner logic
    (2020-05-06) Jasper, Marc; Schlüter, Maximilian; Steffen, Bernhard
    In this paper, we prove that Hennessy–Milner Logic (HML), despite its structural limitations, is sufficiently expressive to specify an initial property φ0 and a characteristic invariant χI for an arbitrary finite-state process P such that φ0∧AG(χI) is a characteristic formula for P. This means that a process Q, even if infinite-state, is bisimulation equivalent to P iff Q⊨φ0∧AG(χI). It follows, in particular, that it is sufficient to check an HML formula for each state of a finite-state process to verify that it is bisimulation equivalent to P. In addition, more complex systems such as context-free processes can be checked for bisimulation equivalence with P using corresponding model checking algorithms. Our characteristic invariant is based on so-called class-distinguishing formulas that identify bisimulation equivalence classes in P and which are expressed in HML. We extend Kanellakis and Smolka’s partition refinement algorithm for bisimulation checking in order to generate concise class-distinguishing formulas for finite-state processes.
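The partition-refinement algorithm that the paper extends can be sketched as follows: starting from a single block containing all states, states are repeatedly split by the actions and successor blocks they can reach until the partition stabilizes; the final blocks are exactly the bisimulation classes that class-distinguishing formulas identify. The encoding below is a minimal Kanellakis–Smolka-style illustration, not the paper's extended algorithm.

```python
def refine(states, transitions):
    """transitions: dict state -> set of (action, successor)."""
    partition = [set(states)]  # start with one block of all states
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, blk in enumerate(partition) for s in blk}
        new_partition = []
        for blk in partition:
            # Group the block's states by their (action, successor-block)
            # signature; different signatures mean the block must split.
            groups = {}
            for s in blk:
                sig = frozenset((a, block_of[t]) for a, t in transitions[s])
                groups.setdefault(sig, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# p and q are bisimilar (both can only do 'a' into a deadlocked state);
# r differs because it can additionally do 'b'.
trans = {"p": {("a", "dead")}, "q": {("a", "dead")},
         "r": {("a", "dead"), ("b", "dead")}, "dead": set()}
classes = refine(trans.keys(), trans)
print(sorted(sorted(c) for c in classes))
# [['dead'], ['p', 'q'], ['r']]
```

The paper's extension records, during each split, an HML formula that witnesses why the split happened, which yields the class-distinguishing formulas.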
  • Item
    Generation of domain-specific language-to-language transformation languages
    (2019) Kopetzki, Dawid; Steffen, Bernhard; Jörges, Sven
    The increasing complexity of software systems entailed by the imposed requirements and involved stakeholders creates new challenges for software development and turns it into a complex task. Nowadays, sophisticated development approaches and tools are needed to handle this complexity. Model-Driven Engineering (MDE) provides means to abstract from the details of a software system during the development phase by using models. Domain-Specific Modeling (DSM), a branch of MDE, tackles the complexity by proposing to use modeling languages which are restricted towards the solution space of the targeted problem domain. These Domain-Specific Visual Languages (DSVLs) are used in the DSM approach to create models in the restricted design space, making the generation of modeled solutions feasible and providing a basis for the communication between various stakeholders. Since a DSVL is needed for each of the targeted domains, language workbenches emerged which support the development of DSVLs. During the development of a DSVL, the semantics of the language has to be defined and, if the DSVL changes, existing models created using the DSVL have to be migrated. Furthermore, models are represented in a specific format, hindering the application of, e.g., mature verification methods and tools. To solve these tasks, model transformations are promoted to transform models into different representations conforming to other DSVLs. This thesis presents a new kind of model transformation language, which can be used to handle the arising tasks during the development of DSVLs. These transformation languages are tailored towards the domain of "computational model transformations between DSVLs". The presented transformation languages are based on graph-transformation approaches and simplify the specification of computations by utilizing Plotkin's Structural Operational Semantics (SOS), and thereby facilitate the definition of computation steps in a declarative way.
    This approach has to cope with the great versatility of DSVLs and therefore requires techniques to reduce the development costs of the transformation languages for different source and target languages. The key to reducing the development costs is the application of the Domain-specific, Full-generation, Service orientation (DFS) approach to the domain of model transformation languages. The application of the domain-specific concept results in graph-based, domain-specific two-level transformation languages. The essence of those languages is captured in a pattern describing possible two-level transformation languages. This pattern is used as the basis for the definition of a generator for this kind of transformation language, making full generation feasible. The semantics of pattern matching and rewriting rules in the context of graph-based transformations is defined by the utilization of existing graph-transformation tools.
  • Item
    Meta-model based generation of domain-specific modeling tools
    (2019) Lybecait, Michael; Steffen, Bernhard; Jörges, Sven
    Today, software development often depends on communication between different stakeholders with various professional backgrounds. Domain-Specific Languages (DSLs) aim to close the semantic gap between these stakeholders by providing a common method for communication. When using meta-tooling suites or language workbenches, it is quite easy to create DSLs for small scenarios or even for single use. But with the more frequent use of DSLs, the need for domain-specific tooling has also risen. This dissertation deals with the challenges of creating domain-specific modeling tools using high-level specification languages via code generation. It focuses on three important elements of domain-specific tool generation: the specification languages, the tool generation process, and the generation of domain-specific APIs that simplify the development of plug-ins for the generated tool. These are the main contributions of this dissertation. The first main contribution focuses on the formalization of the specification languages. It is illustrated by detailing the three specification languages of the meta-tooling suite. The second main contribution introduces the product generation process, which is used to create domain-specific modeling tools from the high-level domain-specific languages defined in the first contribution. The approach is illustrated by the product generation process, which defines the necessary steps to produce a standalone modeling tool in the meta-tooling suite. The third main contribution of this dissertation is the generation of a domain-specific API based on the same high-level descriptions used for the product generation. It uses information present at generation time to create specific operations that are useful for transformations on graph models (such as typed successor/predecessor or containment relationships). The API of any product is therefore generated during the execution of this process.
    The API makes it easy to develop extensions for the product due to its domain-specific nature and its ability to mirror user actions in the generated editors.
  • Item
    Heavy meta: model-driven domain-specific generation of generative domain-specific modeling tools
    (2017) Naujokat, Stefan; Steffen, Bernhard; Legay, Axel; Rehof, Jakob
    Software is so prevalent in all areas of life that one could expect we have come up with simpler and more intuitive ways of creating it by now. However, software development is still too complicated to easily and efficiently cope with individual demands, customizations, and changes. Model-based approaches promise improvements through a more comprehensible layer of abstraction, but they are rarely fully embraced in practice. They are perceived as being overly complex, imposing additional work, and lacking the flexibility required in the real world. This thesis presents a novel approach to model-driven software engineering that focuses on simplicity through highly specialized tools. Domain experts are provided with development tools tailored to their individual needs, where they can easily specify the intent of the software using their known terms and concepts. This domain specificity (D) is a powerful mechanism to boil down the effort of defining a system to relevant aspects only. Many concepts are set upfront, which imposes a huge potential for automated generation. However, the full potential of domain-specific models can only unfold if they are used as primary artifacts of development. The presented approach thus combines domain specificity with full generation (F) to achieve an overall pushbutton generation that does not require any round-trip engineering. Furthermore, service orientation (S) introduces a ‘just use’ philosophy of including arbitrarily complex functionality without needing to know its implementation, which also restores flexibility potentially sacrificed by the domain focus. The unique combination of these three DFS properties facilitates a focused, efficient, and flexible simplicity-driven way of software development.
    Key to the approach is a holistic solution that in particular also covers the simplicity-driven development of the required highly specialized DFS tools, as nothing would be gained if the costs of developing such tools outweighed the resulting benefits. This simplicity is achieved by applying the very same DFS concepts to the domain of tool development itself: DFS modeling tools are fully generated from models and services specialized to the (meta) domain of modeling tools. The presented Cinco meta tooling suite is a first implementation of such a meta DFS tool. It focuses on the generation of graphical modeling tools for graph structures comprising various types of nodes and edges. Cinco has been very successfully applied to numerous industrial and academic projects, and thus also serves as a proof of concept for the DFS approach itself. The unique combination of the three DFS strategies and Cinco's meta-level approach towards their realization in practice lay the foundation for a new paradigm of software development that is strongly focused on simplicity.
  • Item
    Foundations of active automata learning: an algorithmic perspective
    (2015) Isberner, Malte; Steffen, Bernhard; Vaandrager, Frits
  • Item
    Kontinuierliche Qualitätskontrolle von Webanwendungen auf Basis maschinengelernter Modelle [Continuous quality control of web applications based on machine-learned models]
    (2014-07-25) Windmüller, Stephan; Steffen, Bernhard; Rehof, Jakob
  • Item
    Higher order process engineering
    (2014-07-09) Neubauer, Johannes; Steffen, Bernhard; Hinchey, Mike