Research

Research Interests

  • Database Technology
    • Data management on modern hardware
      • Co-processor-accelerated query optimization
      • Efficient algorithms for query (co-)processing on heterogeneous hardware (e.g., GPUs, Intel Xeon Phi, NUMA Systems)
      • Genome data analysis using main-memory databases
      • Multi-dimensional index structures for main-memory databases
    • Graph database management systems
    • Large-scale and cloud data management
      • NoSQL databases
      • Transaction management in the cloud
      • Data integrity in the cloud
      • Parallel entity resolution
      • Self-tuning for cloud storage clusters
  • Feature-Oriented Software Development (FOSD)
    • Product-line configuration recommender systems
    • Prioritization for software product line testing
    • Migration of cloned product variants to a software product line
    • Variability-aware refactoring
    • Variability-aware code smells
    • Formal specification and verification of software product lines
    • Analysis of variability models
    • Multi software product lines
  • Variability in Embedded Systems / Heterogeneous Hardware
    • Composition and adaptation of software product lines at runtime
    • Syntactic and semantic interoperability in heterogeneous (embedded) systems
    • Data management in embedded systems and sensor networks

Current Funded Projects

  • SPL Testing

    Exhaustively testing every product of a software product line (SPL) is a difficult task due to the combinatorial explosion of the number of products. Combinatorial interaction testing is a technique to reduce the number of products under test. In this project, we aim to handle multiple and possibly conflicting objectives during the testing of SPLs.

    Website: Project-Website
    Leader: Gunter Saake
    Type: Third-party funded project
    Funded by: DAAD
    Funded: 01.10.2013 - 01.10.2016
    Members: Mustafa Al-Hajjaji, Jens Meinicke
    Keywords: Software product lines, testing, sampling, prioritization
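
    As a hedged illustration of the sampling idea described above (not the project's actual algorithm), the following Python sketch greedily picks products until every pairwise feature interaction is covered; the feature names and candidate products are invented.

        from itertools import combinations

        features = ["Base", "Encryption", "Compression", "Logging"]
        # Candidate products: sets of selected features (all assumed valid).
        candidates = [
            {"Base"},
            {"Base", "Encryption"},
            {"Base", "Compression"},
            {"Base", "Encryption", "Compression"},
            {"Base", "Logging"},
            {"Base", "Encryption", "Logging"},
        ]

        def pairs(product):
            """All (feature, selected?) decision pairs a product covers."""
            decisions = [(f, f in product) for f in features]
            return set(combinations(decisions, 2))

        uncovered = set().union(*(pairs(p) for p in candidates))
        sample = []
        while uncovered:
            best = max(candidates, key=lambda p: len(pairs(p) & uncovered))
            sample.append(best)
            uncovered -= pairs(best)

        for product in sample:
            print(sorted(product))

    Prioritization can reuse the same scoring: products that cover more still-uncovered interactions are tested first.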
  • Southeast Asia Research Network: Digital Engineering

    German research organizations are increasingly interested in outstanding Southeast Asian institutions as partners for collaboration in the fields of education and research. Bilateral know-how, technology transfer and staff exchange as well as the resultant opportunities for collaboration are strategically important in terms of research and economics. Therefore, the establishment of a joint research structure in the field of digital engineering is being pursued in the project "SEAR DE Thailand" under the lead management of Otto von Guericke University Magdeburg (OvGU) in cooperation with the Fraunhofer Institute for Factory Operation and Automation (IFF) and the National Science and Technology Development Agency (NSTDA) in Thailand.

    Leader: Prof. Dr. Gunter Saake
    Type: Third-party funded project
    Funded by: BMBF
    Funded: 01.06.2013 - 30.05.2017
    Members: Sebastian Krieter
    Partners: NSTDA, Fraunhofer IFF
    Keywords: Digital Engineering
  • Supporting Advanced Data Management Features for the Cloud Environment (Clustering the Cloud, Consistent data management for cloud gaming)

    The aim of this project is to support advanced features of cloud data management. The project has two basic directions. The focus of the first direction is (self-)tuning for cloud data management clusters that serve one or more applications with divergent workload types. It aims to achieve dynamic clustering to support workload-based optimization. This approach is based on logical clustering within a DB cluster according to different criteria, such as data, optimization goal, thresholds, and workload types. The second direction focuses on the design of cloud-based massively multiplayer online games (MMOGs). It aims to provide a scalable, available, efficient, and reusable game architecture. Our approach is to manage data differently in multiple storage systems (file system, NoSQL system, and RDBMS) according to their data management requirements, such as data type, scale, and consistency.

    Members: Siba Mohammad, Ziqiang Diao
    Keywords: Cloud data management, online games, self-tuning

    Clustering the Cloud - A Model for Self-Tuning of Cloud Data Management Systems

    Over the past decade, cloud data management systems became increasingly popular, because they provide on-demand elastic storage and large-scale data analytics in the cloud. These systems were built with the main intention of supporting scalability and availability in an easily maintainable way. However, the (self-)tuning of cloud data management systems to meet specific requirements beyond these basic properties, and for possibly heterogeneous applications, becomes increasingly complex. Consequently, the self-management ideal of cloud computing is still to be achieved for cloud data management. The focus of this PhD project is (self-)tuning for cloud data management clusters that serve one or more applications with divergent workload types. It aims to achieve dynamic clustering to support workload-based optimization. Our approach is based on logical clustering within a DB cluster according to different criteria, such as data, optimization goal, thresholds, and workload types.

    Type: Third-party funded project
    Funded by: Syrian Ministry of Higher Education and DAAD
    Funded: October 2011 - March 2015
    Members: Siba Mohammad
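
    To make the clustering idea concrete, here is a minimal sketch (an assumption of ours, not the project's implementation) that splits the nodes of one physical cluster into logical sub-clusters, one per workload type, each with its own optimization goal; node names, workload shares, and goals are invented.

        nodes = [f"node{i}" for i in range(8)]

        # Assumed workload descriptors: share of load and tuning goal.
        workloads = {
            "analytics":    {"share": 0.50, "goal": "throughput"},
            "transactions": {"share": 0.25, "goal": "latency"},
            "archival":     {"share": 0.25, "goal": "storage cost"},
        }

        def logical_clusters(nodes, workloads):
            """Proportionally assign nodes to one logical cluster per workload."""
            clusters, start = {}, 0
            items = list(workloads.items())
            for i, (name, w) in enumerate(items):
                if i < len(items) - 1:
                    count = round(w["share"] * len(nodes))
                else:
                    count = len(nodes) - start   # remainder goes to the last
                clusters[name] = nodes[start:start + count]
                start += count
            return clusters

        for name, members in logical_clusters(nodes, workloads).items():
            print(name, workloads[name]["goal"], members)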

    Consistent data management for cloud gaming

    Cloud storage systems are able to meet the future requirements of the Internet by using non-relational database management systems (NoSQL DBMSs). NoSQL systems simplify the relational database schema and the data model to improve system properties such as scalability and parallel processing. However, these properties of cloud storage systems limit the implementation of some web applications, such as massively multiplayer online games (MMOGs). In the research described here, we want to extend existing cloud storage systems to meet the requirements of MMOGs. We propose to build a transaction layer on top of the cloud storage layer that offers flexible ACID levels. The goal is to offer transaction processing to game developers as a service. Using such an ACID-level model, both the availability of the existing system and the consistency of data during multi-player interaction can be adjusted to specific requirements.

    Type: Third-party funded project
    Funded by: Graduate Funding of Saxony-Anhalt
    Funded: July 2012 - December 2014
    Members: Ziqiang Diao
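
    The following toy sketch illustrates the flexible-ACID-level idea (the data classes, level names, and store routing are our assumptions, not the project's design): each class of game state is mapped to a consistency level, and the transaction layer routes writes accordingly.

        # Assumed mapping from game-state classes to consistency levels.
        ACID_LEVELS = {
            "account_balance": "full",       # strict ACID transaction
            "player_position": "none",       # best effort, eventually consistent
            "inventory":       "atomicity",  # atomic write, relaxed isolation
        }

        def write(key, data_class, value, stores):
            level = ACID_LEVELS[data_class]
            if level == "full":
                stores["rdbms"].append((key, value))   # transactional store
            elif level == "atomicity":
                stores["nosql"].append((key, value))   # atomic single-key write
            else:
                stores["cache"].append((key, value))   # volatile fast path

        stores = {"rdbms": [], "nosql": [], "cache": []}
        write("alice", "account_balance", 100, stores)
        write("alice", "player_position", (3, 7), stores)
        write("alice", "inventory", ["sword"], stores)
        print(stores)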
  • Nachhaltiges Variabilitätsmanagement von Feature-orientierten Software-Produktlinien (NaVaS)

    A software product line is a set of software-intensive systems that share a common, managed set of features. Product lines promise significant improvements to the engineering process of software systems with variability and are applicable to a wide range of domains, ranging from embedded devices to large enterprise solutions. The goal of "Sustainable Variability Management of Feature-Oriented Software Product Lines" is to improve the research prototype FeatureIDE, an integrated development environment especially targeted at the construction of software product lines. Apart from the benefits for practitioners, this endeavor will also improve education and research.

    Website: Project-Website
    Leader: Prof. Dr. Gunter Saake
    Type: Third-party funded project
    Funded by: BMBF
    Funded: 01.09.2014 - 31.08.2016
    Members: Reimar Schröter
    Keywords: Software product lines, sustainable software development, variability management, holistic tool support
  • EXtracting Product Lines from vAriaNTs (EXPLANT)

    Software product lines promote strategic reuse and support variability in a systematic way. In practice, however, the need for reuse and variability has often been satisfied by copying programs and adapting them as needed — the clone-and-own approach. The result is a family of cloned product variants that is hard to maintain in the long term. This project aims at consolidating such cloned product families into a well-structured, modular software product line. Guided by code-clone detection, architectural analyses, and domain knowledge, the consolidation process is semi-automatic and stepwise. Each step constitutes a small, semantics-preserving transformation of the code, the feature model or both. These semantics-preserving transformations are called variant-preserving refactorings.

    Website: Project Website
    Leader: Prof. Dr. Gunter Saake, Prof. Dr.-Ing. Thomas Leich
    Type: Third-party funded project
    Funded by: DFG
    Funded: 16.02.2016 - 15.02.2018
    Members: Wolfram Fenske, Jacob Krüger
    Keywords: Software product lines, clone-and-own, migration, product variants, code clones, refactoring
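
    A toy example of a variant-preserving refactoring (invented, and much smaller than any real consolidation step): two cloned variants differ only in logging, and the merged version moves that difference behind a feature parameter, so both original behaviors remain derivable.

        # Variant A (clone 1)
        def total_a(items):
            return sum(items)

        # Variant B (clone 2, copied and adapted: logging added)
        def total_b(items):
            result = sum(items)
            print("total =", result)
            return result

        # After consolidation: one implementation, one explicit feature.
        def total(items, logging=False):
            result = sum(items)
            if logging:              # variation point extracted as a feature
                print("total =", result)
            return result

        # The refactoring is variant-preserving: both variants still exist.
        assert total([1, 2], logging=False) == total_a([1, 2])
        assert total([1, 2], logging=True) == total_b([1, 2])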
  • Secure Data Outsourcing to Untrusted Clouds

    Cloud storage solutions are offered by many big vendors, such as Google, Amazon, and IBM. The need for cloud storage is driven by the generation of big data in almost every corporation. The biggest hurdle in outsourcing data to cloud vendors is the data owner's security concerns, which have become a stumbling block for the large-scale adoption of third-party cloud databases. The focus of this PhD project is to provide a comprehensive framework for the security of data outsourced to untrusted clouds. This framework includes encrypted storage in cloud databases, secure data access, privacy of data access, and authenticity of the data stored in the cloud. The framework will be based on Hadoop-based open-source products.

    Members: Muhammad Saqib Niaz
    Funded by: Higher Education Commission of Pakistan and DAAD
    Funded: Oct. 2014 - Oct. 2017
    Keywords: Hadoop, HDFS, Cloud Databases, Security
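
    A minimal sketch of the encrypt-before-outsourcing idea (our illustration, not the project's framework): the untrusted "cloud" is modeled as a plain key-value store, and Fernet (AES plus HMAC, from the Python `cryptography` package) provides confidentiality and authenticity of stored values while the key stays with the data owner.

        from cryptography.fernet import Fernet

        untrusted_cloud = {}              # stands in for HDFS/cloud storage

        key = Fernet.generate_key()       # never leaves the data owner
        fernet = Fernet(key)

        def put(record_id, plaintext):
            untrusted_cloud[record_id] = fernet.encrypt(plaintext)

        def get(record_id):
            # decrypt() also verifies the HMAC, i.e., detects tampering
            return fernet.decrypt(untrusted_cloud[record_id])

        put("row1", b"sensitive payload")
        print(get("row1"))                # b'sensitive payload'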

Other Research Projects

  • Modern Data Management Technologies for Genome Analysis

    Genome analysis is an important method to improve disease detection and treatment. The introduction of next-generation sequencing techniques makes it possible to generate genome data in less time and at reasonable cost. In order to provide fast and reliable genome analysis despite ever-increasing amounts of genome data, genome data management and analysis techniques must also improve. In this project, we develop concepts and approaches to use modern database management systems (e.g., column-oriented, in-memory database management systems) for genome analysis.

    Project's scope:

    1. Identification and evaluation of genome analysis use cases suitable for database support
    2. Development of data management concepts for genome analysis using modern database technology with regard to chosen use cases and data management aspects such as data integration, data integrity, data provenance, and data security
    3. Development of efficient data structures for querying and processing genome data in databases for defined use cases
    4. Exploiting modern hardware capabilities for genome data processing

    Leader: Prof. Dr. Gunter Saake
    Members: Sebastian Dorok
    Keywords: genome analysis, modern database technologies, main memory database systems, column-store
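
    To hint at why a column-oriented layout suits such analyses, here is a small invented sketch: each attribute of aligned reads is stored as its own array, and a coverage/consensus computation touches only the columns it needs (real systems would use compressed columns and declarative queries).

        from collections import Counter

        # Column-store layout: one array per attribute instead of one
        # record per read; all values are invented.
        positions = [101, 101, 101, 102, 102, 103]
        bases     = ["A", "A", "C", "G", "G", "T"]

        def coverage_and_consensus(positions, bases):
            per_pos = {}
            for pos, base in zip(positions, bases):
                per_pos.setdefault(pos, Counter())[base] += 1
            return {pos: (sum(c.values()), c.most_common(1)[0][0])
                    for pos, c in per_pos.items()}

        stats = coverage_and_consensus(positions, bases)
        for pos, (cov, base) in sorted(stats.items()):
            print(f"position {pos}: coverage {cov}, consensus base {base}")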
  • Software Product Line Languages and Tools (FeatureIDE, SPL2go)

    In this project, we focus on the research and development of tools and languages for software product lines. Our research addresses the usability, flexibility, and complexity of current approaches. It includes tools such as FeatureHouse, FeatureIDE, CIDE, FeatureC++, Aspectual Mixin Layers, and Refactoring Feature Modules, as well as the formalization of language concepts. The research centers on the ideas of feature-oriented programming and explores boundaries toward other development paradigms, including type systems, refactorings, design patterns, aspect-oriented programming, generative programming, model-driven architectures, service-oriented architectures, and more.

    Members: Thomas Thüm, Reimar Schröter, Thomas Leich, Norbert Siegmund
    Project partners: Prof. Don Batory, University of Texas at Austin, USA;
    Dr. Sven Apel, University of Passau;
    Prof. Christian Lengauer, University of Passau;
    Salvador Trujillo, PhD, IKERLAN Research Centre, Mondragon, Spain
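
    As a hedged mini-example of the feature-oriented programming idea that the tools above automate (e.g., FeatureHouse composes Java feature modules; the classes here are invented Python stand-ins): each feature refines a class, and a product is composed by stacking the selected feature modules.

        class BaseStack:
            def __init__(self):
                self.items = []
            def push(self, x):
                self.items.append(x)

        def feature_logging(cls):
            class LoggingStack(cls):
                def push(self, x):            # refinement of push()
                    print("push", x)
                    super().push(x)
            return LoggingStack

        def compose(base, *features):
            """Stack feature modules on top of a base implementation."""
            for feature in features:
                base = feature(base)
            return base

        Product = compose(BaseStack, feature_logging)
        stack = Product()
        stack.push(42)                        # prints "push 42"
        print(stack.items)                    # [42]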

    FeatureIDE: An Extensible Framework for Feature-Oriented Software Development

    Website: Project-Website
    Manager: Thomas Thüm
    Funded by: Metop, institutional funds
    Members: Thomas Thüm; Reimar Schröter; Christian Kästner; Sven Apel; Don Batory; Thomas Leich; Gunter Saake
    Keywords: Feature-oriented software development, software product lines, feature modeling, feature-oriented programming, aspect-oriented programming, delta-oriented programming, preprocessors, tool support

    SPL2go: A Catalog of Publicly Available Software Product Lines

    Website: Project-Website
    Manager: Thomas Thüm
    Funded by: Metop, institutional funds
    Members: Thomas Thüm; Thomas Leich; Gunter Saake
    Keywords: Software product lines, product-line analyses, variability modeling, feature model, domain implementation, source code, case studies
  • Load-balanced Index Structures for Self-tuning DBMS

    Index tuning, as part of database tuning, is the task of selecting and creating indexes with the goal of reducing query processing times. However, in dynamic environments with various ad-hoc queries, it is difficult to identify potentially useful indexes in advance. The approach for self-tuning index configurations developed in previous research provides a solution for continuous tuning at the level of index configurations, where a configuration is a set of common index structures. In this project, we investigate a novel approach that moves the solution of the problem to the level of the index structures themselves, i.e., we create index structures that are inherently self-optimizing.

    Leader: Dr.-Ing. Eike Schallehn
    Type: Institutionally funded
    Keywords: Index-structure selection, self-tuning
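
    The project's concrete structures are not detailed here, but database cracking is a well-known example of an inherently self-optimizing index and illustrates the idea: every range query reorganizes ("cracks") the column a bit further, so the physical order adapts to the workload without a separate tuning step.

        def crack(column, low, high):
            """Answer a range query and leave the column partially partitioned."""
            smaller = [v for v in column if v < low]
            hits    = [v for v in column if low <= v < high]
            larger  = [v for v in column if v >= high]
            column[:] = smaller + hits + larger   # in-place reorganization
            return hits

        data = [13, 4, 55, 9, 12, 7, 42, 1]
        print(crack(data, 5, 13))   # [9, 12, 7] -- the query result
        print(data)                 # column is now clustered around the range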
  • Model-Based Refinement of Product Lines

    Software product lines are families of related software systems that are developed by taking variability into account during the complete development process. In model-based refinement methods (e.g., ASM, Event-B, Z, VDM), systems are developed by stepwise refinement of an abstract, formal model.

    In this project, we develop concepts to combine model-based refinement methods and software product lines. On the one hand, this combination aims to improve the cost-effectiveness of applying formal methods by taking advantage of the high degree of reuse provided by software product lines. On the other hand, it helps to handle the complexity of product lines by providing means to detect defects on a high level of abstraction, early in the development process.

    Members: Fabian Benduhn
    Keywords: software product lines, formal methods, refinement
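
    A toy sketch in the spirit of such refinement methods (our invented example, not ASM/Event-B notation): an abstract counter is refined by a two-bit implementation, related by a gluing invariant, and we check by enumeration that every concrete step simulates an abstract step.

        def abs_steps(n):                     # abstract model: 0..3 counter
            return [n + 1] if n < 3 else []

        def conc_steps(bits):                 # concrete model: two bits
            lo, hi = bits
            n = hi * 2 + lo
            if n < 3:
                m = n + 1
                return [(m % 2, m // 2)]
            return []

        def glue(bits):                       # gluing invariant: bits encode n
            lo, hi = bits
            return hi * 2 + lo

        # Refinement check: every concrete transition maps to an abstract one.
        for bits in [(0, 0), (1, 0), (0, 1), (1, 1)]:
            for succ in conc_steps(bits):
                assert glue(succ) in abs_steps(glue(bits))
        print("the concrete model refines the abstract model")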
  • GPU-accelerated Join-Order Optimization

    Different join orders can lead to execution times that vary by several orders of magnitude, which makes join-order optimization one of the most critical optimizations within DBMSs. At the same time, join-order optimization is an NP-hard problem, which makes the computation of an optimal join order highly compute-intensive. Because current hardware architectures use highly specialized and parallel processors, the sequential algorithms for join-order optimization proposed in the past cannot fully utilize the computational power of current hardware. Although existing approaches for join-order optimization, such as dynamic programming, benefit from parallel execution, there are no approaches for join-order optimization on highly parallel co-processors such as GPUs.

    In this project, we are building a GPU-accelerated join-order optimizer by adapting existing join-order optimization approaches. Here, we are interested in the effects of GPUs on join-order optimization itself as well as the effects on query processing. For GPU-accelerated DBMSs that use GPUs for query processing, such as CoGaDB, we need to identify efficient scheduling strategies for query processing and query optimization tasks, such that GPU-accelerated optimization does not slow down query processing on GPUs.

    Manager: Andreas Meister
    Type: Institutionally funded
    Keywords: GPU-accelerated data management, self-tuning
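
    For reference, a compact sketch of the classic dynamic-programming join-order enumeration that such a GPU optimizer would parallelize (cardinalities and the uniform selectivity are invented; the cost of a plan is taken as the size of all intermediate results).

        from itertools import combinations

        card = {"A": 1000, "B": 100, "C": 10, "D": 1000}  # base cardinalities
        SEL = 0.01                                        # join selectivity

        def size(tables):
            result = 1.0
            for t in tables:
                result *= card[t]
            return result * (SEL ** (len(tables) - 1))

        tables = tuple(sorted(card))
        best = {(t,): (0.0, t) for t in tables}           # subset -> (cost, plan)

        for k in range(2, len(tables) + 1):
            for subset in combinations(tables, k):
                for m in range(1, k):                     # try all splits
                    for left in combinations(subset, m):
                        right = tuple(t for t in subset if t not in left)
                        cost = best[left][0] + best[right][0] + size(subset)
                        plan = (best[left][1], best[right][1])
                        if subset not in best or cost < best[subset][0]:
                            best[subset] = (cost, plan)

        print(best[tables])                               # optimal bushy plan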
  • On the Impact of Hardware on Relational Query Processing

    Satisfying the performance needs of tomorrow typically implies using modern processor capabilities (such as single instruction, multiple data) and co-processors (such as graphics processing units) to accelerate database operations. Algorithms are typically hand-tuned to the underlying (co-)processors. This solution is error-prone, introduces high implementation and maintenance costs, and is not portable to other (co-)processors. Therefore, we argue for a combination of database research with modern software-engineering approaches, such as feature-oriented software development (FOSD). The goal of this project is to generate optimized database algorithms tailored to the underlying (co-)processors from a common code base. With this, we maximize performance while minimizing implementation and maintenance effort for databases on new hardware.

    Project milestones:

    • Creating a feature model: Arising from heterogeneous processor capabilities, promising capabilities have to be identified and structured to develop a comprehensive feature model. This includes fine-grained features that exploit the processor capabilities of each device.
    • Annotative vs. compositional FOSD approaches: Both approaches have known benefits and drawbacks. To have a suitable mechanism to construct hardware-tailored database algorithms using FOSD, we have to evaluate which of these two approaches is the best for our scenario.
    • Mapping features to code: Arising from the feature model, possible code snippets to implement a feature have to be identified.
    • Performance evaluation: To validate our solution and derive rules for processor allocation and algorithm selection, we have to perform an evaluation of our algorithms.

    Leader: Prof. Dr. Gunter Saake
    Members: David Broneske
    Funded by: Institutional funds
    Keywords: heterogeneity of processing devices, CPU, GPU, FPGA, MIC, APU, tailored database operations
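
    A hedged sketch of the tailoring idea (the features and variants are invented, and real variants would likely be generated low-level kernels rather than Python): a tiny "feature selection" describes processor capabilities, and a generator composes a matching selection-scan variant from it.

        def make_scan(features):
            """Compose a selection-scan variant from a feature selection."""
            if "branch_free" in features:
                # predication: no data-dependent branch, SIMD-friendly
                def scan(column, low, high):
                    return [v for v in column if (low <= v) & (v < high)]
            else:
                def scan(column, low, high):
                    out = []
                    for v in column:
                        if low <= v < high:
                            out.append(v)
                    return out
            if "unrolled" in features:
                inner = scan
                def scan(column, low, high):  # process fixed-size blocks
                    out = []
                    for i in range(0, len(column), 4):
                        out.extend(inner(column[i:i + 4], low, high))
                    return out
            return scan

        scan_cpu = make_scan({"branch_free", "unrolled"})
        print(scan_cpu([5, 1, 9, 7, 3, 8], 3, 8))   # [5, 7, 3]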
  • Variability in service-oriented computing

    Economies of scale are achieved in service-oriented computing (SOC) by offering services to multiple consumers, which demands the ability to change or vary services effectively and efficiently for individual consumers. Service providers want to retain consumers and maximize their profits by offering variability in services. Many solutions exist to address variability; however, each solution is tailored to a specific problem, and a holistic view or framework that addresses variability issues in detail is missing.

    In this project, we focus on variability in SOC. We classify variability into different layers, survey variability mechanisms from the literature, and summarize solutions, consequences, and possible combinations in the form of a pattern catalogue. Based on the pattern catalogue, we compare different variability patterns and combinations of patterns using evaluation criteria. Our catalogue helps to choose an appropriate technique for the variability problem at hand and illustrates its consequences in SOC. We will evaluate our solution catalogue using a case study.

    Members: Ateeq Khan
    Keywords: service-oriented computing, software as a service (SaaS), variability, service customization, variability approaches
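
    One common variability mechanism that such a catalogue can cover is per-tenant configuration in a multi-tenant SaaS: the same service instance varies its behavior per consumer. The tenants and parameters below are invented for illustration.

        # Assumed per-tenant configuration of one shared service instance.
        TENANT_CONFIG = {
            "tenant_a": {"currency": "EUR", "discount": 0.10},
            "tenant_b": {"currency": "USD", "discount": 0.00},
        }

        def quote(tenant, amount):
            cfg = TENANT_CONFIG[tenant]
            total = amount * (1 - cfg["discount"])
            return f"{total:.2f} {cfg['currency']}"

        print(quote("tenant_a", 100))   # 90.00 EUR
        print(quote("tenant_b", 100))   # 100.00 USD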
  • Reliable and Reproducible Evaluation of High-Dimensional Index Structures (QuEval)

    Multimedia data, or high-dimensional data in general, have been subject to research for more than two decades and gain even more momentum in the age of communication technology. From a database point of view, the sheer volume of such data poses the problem of managing it, and query processing is a challenging task due to the high dimensionality of the data. In the past, dozens of index structures for high-dimensional data have been proposed, and some of them serve as standard references. However, it is still something of a black art to decide which index structure fits a certain problem or outperforms other index structures.

    Members: Dr. Veit Köppen, Reimar Schröter
    Keywords: High-dimensional index selection & tuning

    QuEval

    This is where QuEval, a framework for the quantitative comparison and evaluation of high-dimensional index structures, comes into play. QuEval is a Java-based framework that supports the comparison of index structures with regard to characteristics such as dimensionality, accuracy, or performance. Currently, the framework contains six different index structures. However, a main focus of the framework is its extensibility, and we encourage people to contribute to QuEval by providing more index structures or other interesting aspects for their comparison.

    Website: Project-Website
    Manager: Dr. Veit Köppen
    Members: Alexander Grebhahn; Tim Hering; Veit Köppen; Christina Pielach; Martin Schäler; Reimar Schröter; Sandro Schulze
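
    QuEval itself is a Java framework; the following Python sketch only illustrates what an extensible plug-in interface for index structures can look like (class and method names are invented): new structures implement one small interface and are then measurable by the same benchmark loop.

        import abc, random, time

        class IndexStructure(abc.ABC):
            @abc.abstractmethod
            def insert(self, point): ...
            @abc.abstractmethod
            def query(self, point): ...

        class LinearScan(IndexStructure):
            """Baseline plug-in: no index at all, scan everything."""
            def __init__(self):
                self.points = []
            def insert(self, point):
                self.points.append(point)
            def query(self, point):
                return point in self.points

        def benchmark(index, dims=8, n=5000, queries=500):
            data = [tuple(random.random() for _ in range(dims))
                    for _ in range(n)]
            for p in data:
                index.insert(p)
            start = time.perf_counter()
            for p in random.sample(data, queries):
                assert index.query(p)
            return time.perf_counter() - start

        print(f"linear scan: {benchmark(LinearScan()):.3f}s for 500 queries")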

Completed Projects

  • Minimal-invasive integration of the provenance concern into data-intensive systems

    In the recent past, a new research topic named provenance has gained much attention. The purpose of provenance is to determine the origin and derivation history of data. Thus, provenance is used, for instance, to validate and explain computation results. Due to the digitalization of previously analogue processes that consume data from heterogeneous sources, and the increasing complexity of the respective systems, validating computation results is a challenging task. To face this challenge, there has been plenty of research resulting in solutions that allow for the capturing of provenance data. These solutions cover a broad variety of approaches, ranging from formal approaches defining how to capture provenance for relational databases, over high-level data models for linked data on the web, to all-in-one solutions supporting the management of scientific workflows. However, all these approaches have in common that they are tailored to their specific use case. Consequently, provenance is considered an integral part of these approaches and can hardly be adjusted to new user requirements or integrated into existing systems. We envision that provenance, which highly needs to be adjusted to the needs of specific use cases, should be a cross-cutting concern that can be seamlessly integrated without interfering with the original system.

    Leader: Prof. Dr. Gunter Saake
    Members: Martin Schäler
    Funded by: Institutional funds
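
    A minimal sketch of treating provenance as a cross-cutting concern (our illustration of the vision, not the project's solution): a decorator records, for every call, from which inputs a result was derived, without touching the original functions.

        import functools

        PROVENANCE = []   # (operation, inputs, output) records

        def provenance(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                result = func(*args, **kwargs)
                PROVENANCE.append({"op": func.__name__,
                                   "inputs": (args, kwargs),
                                   "output": result})
                return result
            return wrapper

        @provenance
        def clean(values):
            return [v for v in values if v is not None]

        @provenance
        def average(values):
            return sum(values) / len(values)

        print(average(clean([1, None, 2, 3])))   # 2.0
        for record in PROVENANCE:                # explains how 2.0 was derived
            print(record)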
  • MultiPLe - Multi Software Product Lines

    MultiPLe is a project that aims at developing methods and tools to support development of Multi Software Product Lines (MPLs), which are a special kind of software product lines (SPLs). An SPL is a family of related programs that are often generated from a common code base with the goal of maximizing reuse between these programs. An MPL is a set of interacting and interdependent SPLs.

    Website: Project-Website
    Leader: Prof. Dr. Gunter Saake
    Type: Third-party funded project
    Funded by: Deutsche Forschungsgemeinschaft (DFG)
    Funded: 01.03.2012 - 28.02.2014
    Members: Reimar Schröter
    Keywords: Software product lines, multi product lines, program interfaces
  • A Hybrid Query Optimization Engine for GPU accelerated Database Query Processing (HyPE-Library, CoGaDB)

    Performance demands for database systems are ever increasing, and much research focuses on new approaches to fulfill the performance requirements of tomorrow. GPU acceleration is an emerging and promising opportunity to speed up the query processing of database systems by using low-cost graphics processors as co-processors. One major challenge is how to combine traditional database query processing with GPU co-processing techniques and efficient database operation scheduling in a GPU-aware query optimizer. In this project, we develop a hybrid query processing engine, which extends the traditional physical optimization process to generate hybrid query plans and to perform a cost-based optimization such that the advantages of CPUs and GPUs are combined. Furthermore, we aim at a solution that is independent of database architecture and data model to maximize applicability.

    Type: Institutionally funded
    Members: Sebastian Breß
    Project partners: Prof. Kai-Uwe Sattler, Ilmenau University of Technology, Ilmenau;
    Prof. Ladjel Bellatreche, University of Poitiers, France;
    Dr. Tobias Lauer, Jedox AG (Freiburg im Breisgau)
    Keywords: query processing, query optimization, GPU-accelerated data management, self-tuning
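
    The core decision such an engine makes can be sketched as follows (a simplified assumption of ours, not the actual HyPE code): per (operator, device) pair, a cost model is learned from observed runtimes, and each operator runs on the device with the lowest estimated runtime, including a transfer penalty for the GPU.

        observations = {   # (operator, device) -> [(input size, runtime ms)]
            ("scan", "cpu"): [(1e6, 10.0), (2e6, 19.0)],
            ("scan", "gpu"): [(1e6, 4.0),  (2e6, 7.0)],
        }

        def fit(points):
            """Least-squares slope through the origin: ms per input row."""
            num = sum(x * y for x, y in points)
            den = sum(x * x for x, y in points)
            return num / den

        def place(operator, input_size, transfer_ms_per_row=3e-6):
            costs = {}
            for (op, device), points in observations.items():
                if op != operator:
                    continue
                estimate = fit(points) * input_size
                if device == "gpu":                 # PCIe transfer penalty
                    estimate += transfer_ms_per_row * input_size
                costs[device] = estimate
            return min(costs, key=costs.get), costs

        print(place("scan", 1.5e6))   # chosen device and cost estimates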

    HyPE-Library

    HyPE is a hybrid query processing engine built for the automatic selection of processing units for co-processing in database systems. The long-term goal of the project is to implement a fully fledged query processing engine that can automatically generate and optimize a hybrid CPU/GPU physical query plan from a logical query plan. It is a research prototype developed by the Otto von Guericke University Magdeburg in collaboration with Ilmenau University of Technology.

    Website: Project-Website
    Manager: Sebastian Breß
    Members: Sebastian Breß; Klaus Baumann; Robin Haberkorn; Steven Ladewig; Harmen Landsmann; Tobias Lauer; Gunter Saake; Norbert Siegmund
    Partners: Felix Beier; Ladjel Bellatreche; Max Heimel; Hannes Rauhe; Kai-Uwe Sattler

    CoGaDB

    CoGaDB is a prototype of a column-oriented GPU-accelerated database management system developed at the University of Magdeburg. Its purpose is to investigate advanced coprocessing techniques for effective GPU utilization during database query processing. It uses our hybrid query processing engine (HyPE) for the physical optimization process.

    Website: Project-Website
    Manager: Sebastian Breß
    Members: Sebastian Breß; Robin Haberkorn; Rene Hoyer; Steven Ladewig; Gunter Saake; Norbert Siegmund; Patrick Sulkowski
    Partner: Ladjel Bellatreche (LIAS/ISEA-ENSMA, Futuroscope, France)
  • ViERforES-II (Dependable systems, Interoperability)

    Software-intensive systems are becoming more and more important in an increasing number of traditional engineering domains. Digital engineering is an emerging trend that meets the challenge of bringing together traditional engineering and modern approaches in software and systems engineering. Engineers in the traditional domains are increasingly confronted both with the use of software systems and with the development of software-intensive systems. Therefore, software and systems engineering play a growing role in many engineering domains. While functional properties of software systems are usually included in the development process, non-functional properties such as safety and security are not sufficiently considered early in the development process.

    Members: Dr. Veit Köppen, Janet Siegmund (née Feigenspan), Norbert Siegmund

    ViERforES-II - Dependable systems

    The project deals with security aspects in embedded systems regarding threats that can be caused, among other things, by malware. Another important aspect is finding security leaks already at the source-code level, for which cognitive processes related to program comprehension are important. One goal is to evaluate factors that allow us to understand the abilities of developers, but also the risk potential of projects.

    Website: Project-Website
    Leader: Prof. Dr. Gunter Saake
    Type: Third-party funded project
    Funded by: BMBF
    Funded: 01.01.2010 - 30.09.2013
    Members: Janet Siegmund (née Feigenspan)
    Keywords: Empirical software engineering

    ViERforES-II - Interoperability

    Ensuring the interoperability of cooperating embedded systems is one of the key challenges in building complex and highly interactive ubiquitous systems. To this end, we have to consider different levels of interoperability: syntactic, semantic, and non-functional interoperability. In ViERforES-I, we developed solutions for the first two levels using software product lines and service-oriented architecture. In ViERforES-II, we focus on techniques to determine non-functional properties of customizable software deployed on embedded systems. We develop means to model, measure, and quantify non-functional properties, such that we can compute an optimal configuration of all cooperating software systems. This way, we ensure that embedded systems are interoperable regarding performance, energy consumption, and other quality attributes.

    In the second line of work, we combine distributed cooperating simulations using OpenGL. The goal is to support engineers during product development by providing an integrated view of a product in virtual reality, created by merging the graphics streams of several simulations. Moreover, with 3D cameras, we aim at placing the engineer inside a simulation. Through interaction with the 3D product, this allows training and maintenance tasks to be simulated early.

    Website: Project-Website
    Leader: Prof. Dr. Gunter Saake
    Type: Third-party funded project
    Funded by: BMBF
    Funded: 01.01.2010 - 30.09.2013
    Members: Norbert Siegmund, Maik Mory
    Keywords: non-functional properties, optimization, cooperating simulations, OpenGL, interoperability
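
    The configuration-optimization step can be sketched as follows (the model and all numbers are invented): each feature has a measured influence on response time and energy, and we pick the configuration with the lowest energy consumption that still meets a response-time limit.

        from itertools import combinations

        influence = {                # feature -> (response time ms, energy mJ)
            "Compression": (+12.0, -5.0),
            "Encryption":  (+20.0, +8.0),
            "Caching":     (-15.0, +6.0),
        }
        BASE = (50.0, 20.0)          # baseline of the mandatory core
        LIMIT_MS = 60.0

        def properties(config):
            t, e = BASE
            for f in config:
                dt, de = influence[f]
                t, e = t + dt, e + de
            return t, e

        features = list(influence)
        candidates = []
        for r in range(len(features) + 1):
            for config in combinations(features, r):
                t, e = properties(config)
                if t <= LIMIT_MS:                    # meets the requirement
                    candidates.append((e, t, config))

        print(min(candidates))       # most energy-efficient valid configuration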
  • Analysis Strategies for Software Product Lines

    Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a set of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, testing, and formal verification, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis.

    The emerging field of product-line analysis techniques is both broad and diverse such that it is difficult for researchers and practitioners to understand their similarities and differences (e.g., with regard to variability awareness or scalability), which hinders systematic research and application. We classify the corpus of existing and ongoing work in this field, we compare techniques based on our classification, and we infer a research agenda. A short-term benefit of our endeavor is that our classification can guide research in product-line analysis and, to this end, make it more systematic and efficient. A long-term goal is to empower developers to choose the right analysis technique for their needs out of a pool of techniques with different strengths and weaknesses.

    Website: Project-Website
    Manager: Thomas Thüm
    Type: Institutionally funded
    Members: Thomas Thüm; Sven Apel; Christian Kästner; Ina Schaefer; Gunter Saake
    Keywords: Product-line analysis, software product lines, program families, deductive verification, theorem proving, model checking, type checking
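
    To make the scale problem tangible, here is a brute-force sketch over an invented toy feature model; it enumerates all valid configurations and detects "dead" features. Real product-line analyses replace the enumeration with SAT/BDD-based reasoning precisely because the number of configurations grows exponentially.

        from itertools import product

        features = ["Base", "Transactions", "Replication", "Cache"]

        def valid(sel):
            return ("Base" in sel                               # mandatory root
                    and ("Transactions" not in sel
                         or "Replication" in sel)               # requires
                    and not ("Replication" in sel
                             and "Cache" in sel))               # excludes

        configs = [set(f for f, on in zip(features, bits) if on)
                   for bits in product([0, 1], repeat=len(features))]
        valid_configs = [c for c in configs if valid(c)]

        print(len(valid_configs), "valid products out of", len(configs))
        dead = [f for f in features
                if all(f not in c for c in valid_configs)]
        print("dead features:", dead)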
  • Automotive - Virtual Engineering
  • Automotive - IT Security and Data Management
  • ViERforES - Virtual and augmented reality for safety, security, and reliability of embedded systems
  • Digi-Dak (Digital Fingerprints)
  • Reflective and Adaptive Middleware for Software Evolution of Non-Stopping Information Systems
  • FAME-DBMS
  • Adaptive replication of data in heterogeneous mobile communication networks
  • Information fusion
  • MuSofT - Multimedia in Software Engineering
  • GlobalInfo
  • Lost Art Internet Database
  • Federation of heterogeneous database systems and local data management components for cross-system integrity enforcement
  • Formal object-oriented methods for the specification, verification, and operationalization of complex communication systems for open distributed automation systems
  • ESPRIT BRA Working Group FIREworks (Feature Integration in Requirements Engineering)
  • ESPRIT BRA Working Group ASPIRE
  • Specification of flexibly adaptable workflows in engineering applications
  • ESPRIT BRA Working Group ModelAge (A Common Formal Model for Cooperating Intelligent Agents)
  • ESPRIT BRA Working Group IS-CORE II
  • Implementation of information systems
  • Investigations on dynamic network management in trunked radio systems using object-oriented modeling

Past Conferences

Database Systems for Business, Technology, and Web (BTW)

The 15th BTW conference on "Database Systems for Business, Technology, and Web" (BTW 2013) of the Gesellschaft für Informatik (GI) took place from March 11 to March 15, 2013, at the Otto von Guericke University Magdeburg, Germany.

Website: Conference-Website

Past Workshops