Designing a software architecture requires architectural reasoning, i.e., activities that translate requirements into an architecture solution. Architectural reasoning is particularly challenging in the design of product lines of self-adaptive systems, which involve variability both at development time and at runtime. In previous work we developed an extended Architectural Reasoning Framework (eARF) to address this challenge. However, evaluation of the eARF showed that the framework lacked support for rigorous reasoning, i.e., for ensuring that the design complies with the requirements. In this paper, we introduce an analytical framework that enhances the eARF with such support. The framework defines a set of artifacts and a series of activities. Artifacts include templates to specify domain quality attribute scenarios, concrete models, and properties. The activities support architects in transforming requirement scenarios into architecture models that comply with the required properties. Our focus in this paper is on architectural reasoning support for a single product instance. We illustrate the benefits of the approach by applying it to an example client-server system, and outline challenges for future work. © 2016 IEEE.
More than two decades of research have demonstrated an increasing need for software systems to be self-adaptive. Self-adaptation is required to deal with runtime dynamics which are difficult to predict before deployment. A vast body of knowledge to develop Self-Adaptive Software Systems (SASS) has been established. We, however, discovered a lack of process support to develop self-adaptive systems with reuse. To that end, we propose a domain-engineering based methodology, Autonomic Software Product Lines engineering (ASPLe), which provides step-by-step guidelines for developing families of SASS with systematic reuse. The evaluation results from a case study show positive effects on quality and reuse for self-adaptive systems designed using the ASPLe compared to state-of-the-art engineering practices.
We describe ongoing work on knowledge evolution management for autonomic software product lines. We explore how an autonomic product line may benefit from new knowledge originating from different source activities and artifacts at runtime. The motivation for sharing runtime knowledge is that products may self-optimize at runtime and thus improve quality faster compared to traditional software product line evolution. We propose two mechanisms that support knowledge evolution in product lines: online learning and knowledge sharing. We describe two basic scenarios for runtime knowledge evolution that involve these mechanisms. We evaluate online learning and knowledge sharing in a small product line setting, with promising results.
The concept of variability is fundamental in software product lines, and a successful implementation of a product line largely depends on how well domain requirements and their variability are specified, managed, and realized. While developing an educational software product line, we identified a lack of support for specifying variability in quality concerns. To address this problem, we propose an approach to model variability in quality concerns that extends quality attribute scenarios. In particular, we propose domain quality attribute scenarios, which extend standard quality attribute scenarios with additional information to support the specification of variability and the derivation of product-specific scenarios. We demonstrate the approach with scenarios for robustness and upgradability requirements in the educational software product line.
With the recent advances of manufacturing technologies, referred to as Industry 4.0, maintenance approaches have to be developed to fulfill the new demands. The technological complexity associated with Industry 4.0 makes designing maintenance solutions particularly challenging. This paper proposes a novel maintenance framework that leverages principles from self-adaptation and software architecture. The framework was tested in an operational scenario in which the bearing condition of an electrical motor needs to be managed; the results showed correct operation. In conclusion, the proposed framework could be used to develop maintenance systems for Industry 4.0.
We advocate a novel concept of dependable intelligent edge systems (DIES), i.e., edge systems that ensure a high degree of dependability (e.g., security, safety, and robustness) and autonomy because of their application in critical domains. Building DIES entails a paradigm shift in architectures for acquiring, storing, and processing potentially large amounts of complex data: data management is placed at the edge, between the data sources and local processing entities, with loose coupling to storage and processing services located in the cloud. As such, the literal definition of edge and intelligence is adopted, i.e., the ability to acquire and apply knowledge and skills is shifted towards the edge of the network, outside the cloud infrastructure. This paradigm shift offers flexibility, auto-configuration, and auto-diagnosis, but also introduces novel challenges. © 2019 ACM.
Despite the fact that some practitioners and researchers report success stories on Software Product Line (SPL) adaptation, the evolution of SPLs remains challenging. In our research we study a specific aspect of SPL adaptation, namely the updating of deployed products. Our particular focus is on the correct execution of updates and minimal interruption of services during the updates. The update process has two stages. First, the products affected by the evolution must be identified. We call this stage SPL-wide change impact analysis. In the second stage, each of the affected products has to be updated. In our previous work we addressed the second stage of the update process. In this paper we report on our early results for the first stage: change impact analysis. We discuss how existing variability models can be employed to support automated identification of the products that require an update. The discussion is illustrated with examples from an educational SPL that we are developing at K.U. Leuven.
Internet-of-Things (IoT) is an emergent paradigm that is increasingly applied in smart cities. A popular technology used in IoT is LoRa, which supports long-range wireless communication. In this research, we study LoRa-based IoT systems with battery-powered end nodes that collect and communicate data to a gateway for further processing. Existing approaches in such IoT systems usually only consider stationary end nodes. We focus on systems with mobile end nodes, paving the way to new applications such as target tracking. Key Quality of Service (QoS) requirements for these settings are the reliability of the communication and energy consumption. With mobile end nodes, ensuring these QoS requirements is challenging as the system is subject to continuous changes. In this paper, we investigate how the settings of a mobile end node impact key performance indicators for reliability and energy consumption. Based on insights obtained from extensive field experiments, we devise an algorithm that automatically adapts the settings of a mobile end node to ensure its QoS requirements for a setup with a single gateway. We then extend the algorithm to a setup with multiple gateways. We demonstrate how the algorithms achieve the QoS requirements of a mobile end node in a concrete IoT deployment.
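The kind of adaptation loop described in this abstract can be sketched as follows. This is a minimal illustrative sketch only, not the algorithm from the paper: the thresholds, the ordering of adaptation steps, and the setting ranges are assumptions, loosely based on typical LoRa spreading factors (SF7-SF12) and transmit power steps.

```python
# Illustrative sketch of a QoS-driven adaptation loop for a mobile LoRa
# end node. Thresholds and adaptation rules are hypothetical, not the
# algorithm devised in the paper.

SPREADING_FACTORS = list(range(7, 13))   # SF7 (fast, short range) .. SF12
TX_POWERS_DBM = [2, 5, 8, 11, 14]        # typical LoRa transmit power steps

def adapt_settings(sf, tx_power, packet_delivery_ratio, target_pdr=0.9):
    """Return new (sf, tx_power) for the next adaptation period.

    If reliability is below target, first raise transmit power, then the
    spreading factor. If reliability has headroom, lower them to save energy.
    """
    sf_i = SPREADING_FACTORS.index(sf)
    tx_i = TX_POWERS_DBM.index(tx_power)
    if packet_delivery_ratio < target_pdr:
        if tx_i < len(TX_POWERS_DBM) - 1:
            tx_i += 1                      # cheaper than raising the SF
        elif sf_i < len(SPREADING_FACTORS) - 1:
            sf_i += 1                      # more robust, but slower and costlier
    elif packet_delivery_ratio > target_pdr + 0.05:
        if sf_i > 0:
            sf_i -= 1                      # reduce airtime and energy first
        elif tx_i > 0:
            tx_i -= 1
    return SPREADING_FACTORS[sf_i], TX_POWERS_DBM[tx_i]

# Example: a node at SF9 / 8 dBm with poor reliability steps its power up.
print(adapt_settings(9, 8, 0.80))  # (9, 11)
```

The design choice of adjusting transmit power before the spreading factor reflects the trade-off the abstract points at: a higher spreading factor improves reliability but increases airtime, and therefore energy consumption, far more than a power step does.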
In Cyber-Physical Production System (CPPS) engineering, domain experts design assets like products, production processes, and resources. However, the representation of dependencies between assets in heterogeneous engineering artifacts, like system plans, models, and tool data, is insufficient to coordinate engineering activities, like changes to shared asset properties, as it increases the risk of unplanned rework and project delay. While Industry 4.0 (I4.0) assets, defined in RAMI 4.0, allow representing multi-disciplinary views on assets, it remains open how to leverage this capability to coordinate changes in CPPS engineering. In this paper, we introduce the Product-Process-Resource Asset Network (PAN) coordination artifact, a knowledge graph based on I4.0 assets. We argue that the PAN coordination artifact fosters an explicit representation of change dependencies and engineering knowledge, improving capabilities for coordinating changes efficiently. In a feasibility study, we investigate the coordination capabilities of a PAN that represents change dependencies on a typical robot work cell in automotive manufacturing. Results show that the PAN provides effective capabilities for identifying risky assets for re-validation after changes.
For two years, we have been involved in a challenging project to develop a new architecture for an industrial transportation system. The motivating quality attributes for developing this innovative architecture were flexibility and openness. Taking these quality attributes into account, we proposed a decentralized architecture using multiagent systems (MASs). A MAS consists of multiple autonomous entities that coordinate with each other to achieve decentralized control. The typical advantages attributed to such a decentralized architecture are flexibility and openness, which motivated the application of a MAS in this case. The Architecture Tradeoff Analysis Method (ATAM) was used to provide insights into whether our architecture meets the expected flexibility and openness, and to identify tradeoffs with other quality attributes. Applying the ATAM proved to be a valuable experience. One of the main outcomes of applying the ATAM was the identification of a tradeoff between flexibility and communication load that results from the use of a decentralized architecture.
Managing the architectural description (AD) of a complex software system and maintaining consistency among the different models is a demanding task. To understand the underlying problems, we analyse several non-trivial software architectures. The empirical study shows that a substantial amount of information of ADs is repeated, mainly by integrating information of different models in new models. Closer examination reveals that the absence of rigorously specified dependencies among models and the lack of support for automated composition of models are primary causes of management and consistency problems in software architecture. To tackle these problems, we introduce an approach in which compositions of models, together with relations among models, are explicitly supported in the ADL. We introduce these concepts formally and discuss a proof-of-concept instantiation of composition in xADL and its supporting tools. The approach is evaluated by comparing the original and revised ADs in an empirical study. The study indicates that our approach reduces the number of manually specified elements by 29%, and reduces the number of manual changes to elements for several realistic change scenarios by 52%.
Our position is that architectural descriptions lack composition of views, preventing a proper separation of concerns. This position took shape from experiences with building several complex distributed software systems. Our claim is that view composition should be a first-class entity in architectural descriptions. As a first step, we propose an extension of the conceptual model of the IEEE Recommended Practice for Architectural Description of Software-Intensive Systems with view composition.
Cyber-Physical Systems (CPS) are large interconnected software-intensive systems that influence, by sensing and actuating, the physical world. Examples are traffic management and power grids. One of the trends we observe is the need to endow such systems with "smart" capabilities, typically in the form of self-awareness and self-adaptation, along with the traditional qualities of safety and dependability. These requirements, combined with specifics of the domain of smart CPS, such as large scale, the role of end-users, uncertainty, and open-endedness, render traditional software engineering (SE) techniques not directly applicable, making systematic SE of smart CPS a challenging task. This paper reports on the results of the First International Workshop on Software Engineering of Smart Cyber-Physical Systems (SEsCPS 2015), where participants discussed characteristics, challenges and opportunities of SE for smart CPS, with the aim to outline an agenda for future research in this important area.
Cyber-physical systems (CPS) have been recognized as a top priority in research and development. The innovations sought for CPS demand that they deal effectively with the dynamicity of their environment, be scalable, adaptive, tolerant to threats, etc. -- i.e., they have to be smart. Although approaches in software engineering (SE) exist that individually meet these demands, their synergy to address the challenges of smart CPS (sCPS) in a holistic manner remains an open challenge. The workshop focuses on software engineering challenges for sCPS. The goals are to increase the understanding of problems of SE for sCPS, study foundational principles for engineering sCPS, and identify promising SE solutions for sCPS. Based on these goals, the workshop aims to formulate a research agenda for SE of sCPS.
A traditional approach to realize self-adaptation in software engineering (SE) is by means of feedback loops. The goals of the system can be specified as formal properties that are verified against models of the system. On the other hand, control theory (CT) provides a well-established foundation for designing feedback loop systems and providing guarantees for essential properties, such as stability, settling time, and steady-state error. Currently, it is an open question whether and how traditional SE approaches to self-adaptation consider properties from CT. Answering this question is challenging given the principal differences in representing properties in both fields. In this paper, we take a first step to answer this question. We follow a bottom-up approach where we specify a control design (in Simulink) for a case inspired by Scuderia Ferrari (F1) and provide evidence for stability and safety. The design is then transferred into code (in C) that is further optimized. Next, we define properties that enable verifying whether the control properties still hold at code level. Then, we consolidate the solution by mapping the properties in both worlds using specification patterns as a common language, and we verify the correctness of this mapping. The mapping offers a reusable artifact to solve similar problems. Finally, we outline opportunities for future work, particularly to refine and extend the mapping and investigate how it can improve the engineering of self-adaptive systems for both SE and CT engineers. © 2021 IEEE.
Ensuring that systems achieve their goals under uncertainty is a key driver for self-adaptation. Nevertheless, the concept of uncertainty in self-adaptive systems (SAS) is still insufficiently understood. Although several taxonomies of uncertainty have been proposed, taxonomies alone cannot convey the SAS research community’s perception of uncertainty. To explore and to learn from this perception, we conducted a survey focused on the SAS ability to deal with unanticipated change and to model uncertainty, and on the major challenges that limit this ability. In this paper, we analyse the responses provided by the 51 participants in our survey. The insights gained from this analysis include the view—held by 71% of our participants—that SAS can be engineered to cope with unanticipated change, e.g., through evolving their actions, synthesising new actions, or using default actions to deal with such changes. To handle uncertainties that affect SAS models, the participants recommended the use of confidence intervals and probabilities for parametric uncertainty, and the use of multiple models with model averaging or selection for structural uncertainty. Notwithstanding this positive outlook, the provision of assurances for safety-critical SAS continues to pose major challenges according to our respondents. We detail these findings in the paper, in the hope that they will inspire valuable future research on self-adaptive systems.
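The two uncertainty-handling tactics the survey respondents recommended can be illustrated with a small sketch: a confidence interval for a parametric estimate, and weighted model averaging across structurally different models. All names, numbers, and weights below are hypothetical and only illustrate the general techniques, not the survey's examples.

```python
# Illustrative sketch of two tactics for handling uncertainty in SAS models:
# confidence intervals for parametric uncertainty, and model averaging for
# structural uncertainty. Data and weights are made up for illustration.
import math
import statistics

def confidence_interval(samples, z=1.96):
    """Approximate 95% confidence interval for the mean of parameter samples."""
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    return mean - z * sem, mean + z * sem

def model_average(predictions, weights):
    """Weighted average over predictions of alternative system models."""
    total = sum(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / total

# Parametric uncertainty: observed response times (ms) of a managed service.
latencies = [102, 98, 105, 99, 101, 100]
low, high = confidence_interval(latencies)
print(low, high)

# Structural uncertainty: three structurally different models predict mean
# latency; weights reflect (hypothetical) confidence in each model.
print(model_average([101.0, 97.0, 104.0], weights=[0.5, 0.3, 0.2]))
```

Model selection, the alternative the respondents mentioned, would simply pick the prediction of the highest-weight model instead of blending them.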
Building on concepts drawn from control theory, self-adaptive software handles environmental and internal uncertainties by dynamically adjusting its architecture and parameters in response to events such as workload changes and component failures. Self-adaptive software is increasingly expected to meet strict functional and non-functional requirements in applications from areas as diverse as manufacturing, healthcare and finance. To address this need, we introduce a methodology for the systematic ENgineering of TRUstworthy Self-adaptive sofTware (ENTRUST). ENTRUST uses a combination of (1) design-time and runtime modelling and verification, and (2) industry-adopted assurance processes to develop trustworthy self-adaptive software and assurance cases arguing the suitability of the software for its intended application. To evaluate the effectiveness of our methodology, we present a tool-supported instance of ENTRUST and its use to develop proof-of-concept self-adaptive software for embedded and service-based systems from the oceanic monitoring and e-finance domains, respectively. The experimental results show that ENTRUST can be used to engineer self-adaptive software systems in different application domains and to generate dynamic assurance cases for these systems.
Physical telerehabilitation services over the Internet allow physiotherapists to engage in remote consultation with patients at their homes, improving the quality of care and reducing costs. Traditional visual approaches, such as webcams and videophones, are limited in terms of precision of assessment and support for assistance with exercises. In this paper, we present a Physical Telerehabilitation System (PTS) that enhances video interaction with IoT technology to monitor the position of the body of patients in space and provide smart data to physiotherapists and users. We give an overview of the architecture of the PTS and evaluate (i) its usability based on a number of interviews and focus groups with stakeholders, and (ii) its technical efficiency based on a series of measurements. From this evaluation, we derive a number of challenges for further improvement of the PTS and outline a possible solution based on a microservices architecture.
Emerging cyber-physical systems, such as robot swarms, crowds of augmented people, and smart cities, require well-crafted self-organizing behavior to properly deal with dynamic environments and pervasive disturbances. However, the infrastructures providing networking and computing services to support these systems are becoming increasingly complex, layered, and heterogeneous; consider the case of the edge-fog-cloud interplay. This typically hinders the application of self-organizing mechanisms and patterns, which are often designed to work on flat networks. To promote reuse of behavior and flexibility in infrastructure exploitation, we argue that self-organizing logic should be largely independent of the specific application deployment. We show that this separation of concerns can be achieved through a proposed "pulverization approach": the global system behavior of application services gets broken into smaller computational pieces that are continuously executed across the available hosts. This model can then be instantiated in the aggregate computing framework, whereby self-organizing behavior is specified compositionally. We showcase how the proposed approach enables expressing the application logic of a self-organizing cyber-physical system in a deployment-independent fashion, and simulate its deployment on multiple heterogeneous infrastructures that include cloud, edge, and LoRaWAN network elements.
Simple Summary The engineering of self-organising cyber-physical systems can benefit from a variety of "logical devices", including digital twins, virtual devices, and (augmented) collective digital twins. In particular, collective digital twins provide a design construct towards collective computing, which can be augmented with virtual devices to improve the performance of existing self-organising applications, as shown through swarm exploration and navigation scenarios. The engineering of large-scale cyber-physical systems (CPS) increasingly relies on principles from self-organisation and collective computing, enabling these systems to cooperate and adapt in dynamic environments. CPS engineering also often leverages digital twins that provide synchronised logical counterparts of physical entities. In contrast, sensor networks rely on the different but related concept of virtual device that provides an abstraction of a group of sensors. In this work, we study how such concepts can contribute to the engineering of self-organising CPSs. To that end, we analyse the concepts and devise modelling constructs, distinguishing between identity correspondence and execution relationships. Based on this analysis, we then contribute the novel concept of "collective digital twin" (CDT) that captures the logical counterpart of a collection of physical devices. A CDT can also be "augmented" with purely virtual devices, which may be exploited to steer the self-organisation process of the CDT and its physical counterpart. We underpin the novel concept with experiments in the context of the pulverisation framework of aggregate computing, showing how augmented CDTs provide a holistic, modular, and cyber-physically integrated system view that can foster the engineering of self-organising CPSs.
Context. Self-organising and collective computing approaches are increasingly applied to large-scale cyber-physical systems (CPS), enabling them to adapt and cooperate in dynamic environments. Also, in CPS engineering, digital twins are often leveraged to provide synchronised logical counterparts of physical entities, whereas in sensor networks the different-but-related concept of virtual device is used, e.g., to abstract groups of sensors. Vision. We envision the design concept of 'augmented collective digital twin' that captures digital twins at a collective level extended with purely virtual devices. We argue that this concept can foster the engineering of self-organising CPS by providing a holistic, declarative, and integrated system view. Method. From a review and proposed taxonomy of logical devices comprehending both digital twins and virtual devices, we reinterpret a meta-model for self-organising CPSs and discuss how it can support augmented collective digital twins. We illustrate the approach in a crowd-aware navigation scenario, where virtual devices are opportunistically integrated into the system to enhance spatial coverage, improving navigation capabilities. Conclusion. By integrating physical and virtual devices, the novel notion of augmented collective digital twin paves the way to self-improving system functionality and intelligent use of resources in self-organising CPSs. © 2021 IEEE.
Empirical studies indicate that user experience in model-driven engineering can be significantly improved. Blended modelling addresses this by enabling users to interact with a single model through different notations. Blended modelling contributes to various modelling qualities, including comprehensibility, analysability, and acceptability. In this paper, we define the notion of blended modelling and propose a set of dimensions that characterise blended modelling. The dimensions are grouped in two classes: user-oriented dimensions and realisation-oriented dimensions. Each dimension describes a facet that is relevant to blended modelling together with its domain (i.e., the range of values for that dimension). The dimensions offer a basic vocabulary to support tool developers in making well-informed design decisions, as well as users in selecting appropriate tools and configuring them according to the needs at hand. We illustrate how the dimensions apply to different cases relying on our experience with blended modelling. We discuss the impact of blended modelling on usability and user experience and sketch metrics to measure it. Finally, we outline a number of core research directions in this increasingly important modelling area.
Advanced vehicle guidance systems use real-time traffic information to route traffic and to avoid congestion. Unfortunately, these systems can only react to traffic jams that are already present; they cannot prevent unnecessary congestion from forming. Anticipatory vehicle routing is promising in that respect, because this approach directs vehicle routing by accounting for traffic forecast information. This paper presents a decentralized approach for anticipatory vehicle routing that is particularly useful in large-scale dynamic environments. The approach is based on delegate multiagent systems, i.e., an environment-centric coordination mechanism that is, in part, inspired by ant behavior. Ant-like agents explore the environment on behalf of vehicles and detect congestion forecasts, allowing vehicles to reroute. The approach is explained in depth and is evaluated by comparison with three alternative routing strategies. The experiments are done in a simulation of a real-world traffic environment. The experiments indicate a considerable performance gain compared with the most advanced strategy under test, i.e., a traffic-message-channel-based routing strategy.
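The exploration mechanism described above can be sketched in a toy form: lightweight "ant" agents walk candidate routes and accumulate the forecast travel times that road segments expose, so the vehicle can pick the route with the lowest forecast cost. The graph, forecast values, and function names below are illustrative assumptions, not the paper's actual delegate-MAS implementation (which also covers intention propagation and forecast updates).

```python
# Toy sketch of ant-based route exploration in a delegate multiagent system.
# Road segments publish forecast travel times (minutes) for the time the
# vehicle is expected to arrive; values here are made up.
forecast = {
    ("A", "B"): 5, ("B", "D"): 12,   # congestion forecast on segment B->D
    ("A", "C"): 6, ("C", "D"): 4,
}

def explore(route):
    """Exploration ant: traverse a route, accumulating forecast cost."""
    return sum(forecast[(u, v)] for u, v in zip(route, route[1:]))

# The vehicle dispatches one exploration ant per candidate route and
# reroutes to the one with the lowest forecast cost.
routes = [["A", "B", "D"], ["A", "C", "D"]]
best = min(routes, key=explore)
print(best)  # ['A', 'C', 'D'], since the ant detected the B->D congestion forecast
```

In the full approach, such exploration runs repeatedly as forecasts evolve, which is what lets vehicles reroute before the congestion actually materializes.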
Two of the main paradigms used to build adaptive software employ different types of properties to capture relevant aspects of the system's run-time behavior. On the one hand, control systems consider properties that concern static aspects like stability, as well as dynamic properties that capture the transient evolution of variables such as settling time. On the other hand, self-adaptive systems consider mostly non-functional properties that capture concerns such as performance, reliability, and cost. In general, it is not easy to reconcile these two types of properties or identify under which conditions they constitute a good fit to provide run-time guarantees. There is a need of identifying the key properties in the areas of control and self-adaptation, as well as of characterizing and mapping them to better understand how they relate and possibly complement each other. In this paper, we take a first step to tackle this problem by: (1) identifying a set of key properties in control theory, (2) illustrating the formalization of some of these properties employing temporal logic languages commonly used to engineer self-adaptive software systems, and (3) illustrating how to map key properties that characterize self-adaptive software systems into control properties, leveraging their formalization in temporal logics. We illustrate the different steps of the mapping on an exemplar case in the cloud computing domain and conclude with identifying open challenges in the area.
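As a concrete illustration of the kind of formalization described above, a settling-time requirement (a transient control property) can be phrased in a signal/metric temporal logic style. This is only one plausible rendering, with $\varepsilon$ (the tolerance band) and $T$ (the settling deadline) as design parameters; it is not necessarily the formalization used in the paper.

```latex
% Settling time as a temporal-logic property (illustrative):
% "from time T onwards, the output y stays within epsilon of the reference r"
\varphi_{\mathit{settle}} \;=\; \Box_{[T,\infty)} \bigl( \lvert y(t) - r \rvert \le \varepsilon \bigr)
```

A stability-flavoured variant can be obtained by nesting operators, e.g. $\Diamond \Box (\lvert y(t) - r \rvert \le \varepsilon)$: eventually the output enters the tolerance band and remains there.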
The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.
Variability is the ability of a software system or artifact to be adapted for specific contexts, in a preplanned manner. Many of today's software systems are built with variability in mind, e.g., product lines and families, self-adaptive systems, open platforms, or service-based systems that support dynamic runtime composition of web services. Variability is reflected in and facilitated through the software architecture. Also, as the software architecture is a reference point for many development activities and for achieving quality attributes, variability should be treated as a first-class and cross-cutting concern in software architecture. Therefore, the Second International Workshop on Variability in Software Architecture (VARSA 2012) aims at identifying critical challenges and progressing the state-of-the-art on variability in software architecture. VARSA 2012 is a follow-up of the First International Workshop on Variability in Software Architecture (VARSA 2011), held at WICSA 2011.
Variability in a software system is reflected in and facilitated through the architecture of that system. The Third International Workshop on Variability in Software Architecture (VARSA) was held in conjunction with the 11th Working IEEE/IFIP Conference on Software Architecture 2014 in Sydney, Australia. Based on the findings from previous editions of VARSA, this edition aimed at exploring methods, technologies and tools to handle variability at the software architecture level. The workshop featured one industrial keynote talk, one academic keynote talk and five research paper presentations.
Context: Empirical research helps gain well-founded insights about phenomena. Furthermore, empirical research creates evidence for the validity of research results. Objective: We aim at assessing the state-of-practice of empirical research in software architecture. Method: We conducted a comprehensive survey based on the systematic mapping method. We included all full technical research papers published at major software architecture conferences between 1999 and 2015. Results: 17% of papers report empirical work. The number of empirical studies in software architecture started to increase in 2005. Looking at the number of papers, empirical studies are about equally frequently used to (a) evaluate newly proposed approaches and (b) explore and describe phenomena to better understand software architecture practice. Case studies and experiments are the most frequently used empirical methods. Almost half of the empirical studies involve human participants. The majority of these studies involve professionals rather than students. Conclusions: Our findings are meant to stimulate researchers in the community to think about their expectations and standards of empirical research. Our results indicate that software architecture has become a more mature domain with regard to applying empirical research. However, we also found issues in research practices that could be improved (e.g., when describing study objectives and acknowledging limitations).
Context: Previous research highlighted concerns about empirical research in software engineering (e.g., reproducibility, applicability of findings). It is unclear how these concerns reflect the views of those who conduct and evaluate research. Objective: Focusing on software architecture, one subfield of software engineering, we study perceptions of the research community on (1) how empirical research is applied, (2) human participants, (3) internal and external validity, and (4) replications. Method: We collected responses from 105 key players in architecture research via a survey; we analyzed the data quantitatively and qualitatively. Results: Although respondents generally do not prefer either quantitative or qualitative research, around 40% express a preference for various reasons. Professionals are the preferred participants; there is no consensus on the value of student participants. Also, there is no consensus on when to focus on internal or external validity. Most respondents value replications, but acknowledge difficulties. A comparison with published research shows differences between how the community thinks research should be done and how it is done. Conclusions: We provide evidence that consensus about empirical research is limited. Findings have implications for conducting and reviewing empirical research (e.g., training researchers and reviewers), and call for reflection on empirical research (e.g., to resolve conflicts). We outline actions for the future. © 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Variability (the ability of a software system or software artifact to be adapted for use in a specific context) is reflected in and facilitated through the software architecture. The Second International Workshop on Variability in Software Architecture (VARSA) was held in conjunction with the Joint 10th Working IEEE/IFIP Conference on Software Architecture & 6th European Conference on Software Architecture 2012 in Helsinki, Finland. The workshop aimed at exploring current and emerging methods, languages, notations, technologies and tools to model, implement, and manage variability in the software architecture. It featured one industrial talk, five research paper presentations, and three working group discussions. Working groups discussed topics that emerged during the workshop. This report summarizes the themes of the workshop and presents the results of the working group discussions.
Context: Variability (i.e., the ability of software systems or artifacts to be adjusted for different contexts) became a key property of many systems. Objective: We analyze existing research on variability in software systems. We investigate variability handling in major software engineering phases (e.g., requirements engineering, architecting). Method: We performed a systematic literature review. A manual search covered 13 premium software engineering journals and 18 premium conferences, resulting in 15,430 papers searched and 196 papers considered for analysis. To improve reliability and to increase reproducibility, we complemented the manual search with a targeted automated search. Results: Software quality attributes have not received much attention in the context of variability. Variability is studied in all software engineering phases, but testing is underrepresented. Data to motivate the applicability of current approaches are often insufficient; research designs are vaguely described. Conclusions: Based on our findings we propose dimensions of variability in software engineering. This empirically grounded classification provides a step towards a unifying, integrated perspective of variability in software systems, spanning across disparate or loosely coupled research themes in the software engineering community. Finally, we provide recommendations to bridge the gap between research and practice and point to opportunities for future research.
Recent advances in embedded systems and underwater communications raised the autonomy levels in unmanned underwater vehicles (UUVs) from human-driven and scripted to adaptive and self-managing. UUVs can execute longer and more challenging missions, and include functionality that enables adaptation to unexpected oceanic or vehicle changes. As such, the simulated UUV exemplar UNDERSEA introduced in our paper facilitates the development, evaluation and comparison of self-adaptation solutions in a new and important application domain. UNDERSEA comes with predefined oceanic surveillance UUV missions, adaptation scenarios, and a reference controller implementation, all of which can easily be extended or replaced.
Recently, machine learning (ML) has become a popular approach to support self-adaptation. ML has been used to deal with several problems in self-adaptation, such as maintaining an up-to-date runtime model under uncertainty and scalable decision-making. Yet, exploiting ML comes with inherent challenges. In this article, we focus on a particularly important challenge for learning-based self-adaptive systems: drift in adaptation spaces. With adaptation space, we refer to the set of adaptation options a self-adaptive system can select from to adapt at a given time based on the estimated quality properties of the adaptation options. A drift of adaptation spaces originates from uncertainties, affecting the quality properties of the adaptation options. Such drift may imply that the quality of the system deteriorates, that eventually no adaptation option satisfies the initial set of adaptation goals, or that adaptation options emerge that allow enhancing the adaptation goals. In ML, such a shift corresponds to a novel class appearance, a type of concept drift in target data that common ML techniques have problems dealing with. To tackle this problem, we present a novel approach to self-adaptation that enhances learning-based self-adaptive systems with a lifelong ML layer. We refer to this approach as lifelong self-adaptation. The lifelong ML layer tracks the system and its environment, associates this knowledge with the current learning tasks, identifies new tasks based on differences, and updates the learning models of the self-adaptive system accordingly. A human stakeholder may be involved to support the learning process and adjust the learning and goal models. We present a general architecture for lifelong self-adaptation and apply it to the case of drift of adaptation spaces that affects the decision-making in self-adaptation. We validate the approach for a series of scenarios with a drift of adaptation spaces using the DeltaIoT exemplar.
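The cycle of the lifelong ML layer described above (track the system, compare against accumulated knowledge, identify a new task on drift, update the learning model) can be sketched in a minimal form. This is an illustrative sketch only, assuming a simple numeric drift signal; all names (`KnowledgeStore`, `detect_new_task`, `lifelong_cycle`) are hypothetical and not the authors' actual API.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeStore:
    """Accumulated observations of the system and its environment."""
    observations: list = field(default_factory=list)

def detect_new_task(store, window, threshold=0.5):
    """Flag a new learning task when recent observations differ
    markedly from accumulated knowledge (placeholder distance)."""
    if not store.observations:
        return False
    baseline = sum(store.observations) / len(store.observations)
    recent = sum(window) / len(window)
    return abs(recent - baseline) > threshold

def lifelong_cycle(store, window, learner_update):
    """One iteration: track, compare, and update the learner on drift."""
    if detect_new_task(store, window):
        learner_update(window)          # retrain/update the learning model
    store.observations.extend(window)   # integrate the new knowledge
```

In the paper's setting the comparison would involve the quality properties of adaptation options rather than a scalar average, and a human stakeholder could be consulted before the learner is updated.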
In the past years, machine learning (ML) has become a popular approach to support self-adaptation. While ML techniques enable dealing with several problems in self-adaptation, such as scalable decision-making, they are also subject to inherent challenges. In this paper, we focus on one such challenge that is particularly important for self-adaptation: ML techniques are designed to deal with a set of predefined tasks associated with an operational domain; they have problems dealing with new emerging tasks, such as concept shift in the input data that is used for learning. To tackle this challenge, we present lifelong self-adaptation: a novel approach to self-adaptation that enhances self-adaptive systems that use ML techniques with a lifelong ML layer. The lifelong ML layer tracks the running system and its environment, associates this knowledge with the current tasks, identifies new tasks based on differences, and updates the learning models of the self-adaptive system accordingly. We present a reusable architecture for lifelong self-adaptation and apply it to the case of concept drift caused by unforeseen changes of the input data of a learning model that is used for decision-making in self-adaptation. We validate lifelong self-adaptation for two types of concept drift using two cases.
Recently, we have been witnessing a rapid increase in the use of machine learning techniques in self-adaptive systems. Machine learning has been used for a variety of reasons, ranging from learning a model of the environment of a system during operation to filtering large sets of possible configurations before analyzing them. While a body of work on the use of machine learning in self-adaptive systems exists, there is currently no systematic overview of this area. Such an overview is important for researchers to understand the state of the art and direct future research efforts. This article reports the results of a systematic literature review that aims at providing such an overview. We focus on self-adaptive systems that are based on a traditional Monitor-Analyze-Plan-Execute (MAPE)-based feedback loop. The research questions are centered on the problems that motivate the use of machine learning in self-adaptive systems, the key engineering aspects of learning in self-adaptation, and open challenges in this area. The search resulted in 6,709 papers, of which 109 were retained for data collection. Analysis of the collected data shows that machine learning is mostly used for updating adaptation rules and policies to improve system qualities, and managing resources to better balance qualities and resources. These problems are primarily solved using supervised and interactive learning with classification, regression, and reinforcement learning as the dominant methods. Surprisingly, unsupervised learning that naturally fits automation is only applied in a small number of studies. Key open challenges in this area include the performance of learning, managing the effects of learning, and dealing with more complex types of goals. From the insights derived from this systematic literature review, we outline an initial design process for applying machine learning in self-adaptive systems that are based on MAPE feedback loops.
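The Monitor-Analyze-Plan-Execute (MAPE) feedback loop that the reviewed systems build on can be illustrated with a minimal sketch. The class and rule structure below are illustrative assumptions, not a standard API; a comment marks the analyze step, where the surveyed work most commonly applies machine learning (updating adaptation rules and policies).

```python
class MapeLoop:
    """Minimal MAPE loop over a shared knowledge base (MAPE-K style)."""

    def __init__(self, rules):
        self.knowledge = {"rules": rules}   # shared knowledge (the K)

    def monitor(self, system):
        """Collect runtime data from the managed system."""
        return {"load": system["load"]}

    def analyze(self, data):
        """Determine which adaptation rules are triggered.
        ML is commonly used here to update or replace these rules."""
        return [r for r in self.knowledge["rules"] if r["when"](data)]

    def plan(self, triggered):
        """Assemble the adaptation actions of the triggered rules."""
        return [r["action"] for r in triggered]

    def execute(self, system, actions):
        """Apply the planned actions to the managed system."""
        for act in actions:
            act(system)

    def run_once(self, system):
        data = self.monitor(system)
        self.execute(system, self.plan(self.analyze(data)))
```

A usage example: a single rule that adds a server when load exceeds 0.8 would be expressed as `{"when": lambda d: d["load"] > 0.8, "action": ...}` and fire on one pass of `run_once`.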
Recently, we have been witnessing an increasing use of machine learning methods in self-adaptive systems. Machine learning methods offer a variety of use cases for supporting self-adaptation, e.g., to keep runtime models up to date, reduce large adaptation spaces, or update adaptation rules. Yet, since machine learning methods are in essence statistical, they may have an impact on the decisions made by a self-adaptive system. Given the wide use of formal approaches to provide guarantees for the decisions made by self-adaptive systems, it is important to investigate the impact of applying machine learning methods when such approaches are used. In this paper, we study one particular instance that combines linear regression to reduce the adaptation space of a self-adaptive system with statistical model checking to analyze the resulting adaptation options. We use computational learning theory to determine a theoretical bound on the impact of the machine learning method on the predictions made by the verifier. We illustrate and evaluate the theoretical result using a scenario of the DeltaIoT artifact. To conclude, we look at opportunities for future research in this area.
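The combination described above can be illustrated with a hedged sketch: ordinary least squares stands in for the linear regression step, and a simple goal predicate stands in for the statistical model checking that would analyze the surviving options. All function names and the data layout are assumptions for illustration, not the paper's implementation.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def prune_options(options, history, goal):
    """Keep only adaptation options whose predicted quality meets the
    goal, reducing how many options the verifier must analyze.

    options: list of (option_id, feature) pairs
    history: list of (feature, observed_quality) pairs
    goal:    predicate over the predicted quality
    """
    a, b = fit_line([f for f, _ in history],
                    [q for _, q in history])
    return [oid for oid, f in options if goal(a * f + b)]
```

Only the options returned by `prune_options` would then be passed to the (statistical) model checker, which is where the theoretical bound on the regression's impact on the verifier's predictions becomes relevant.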
Mobile technologies have emerged as facilitators in the learning process, extending traditional classroom activities. However, engineering mobile learning applications for outdoor usage poses severe challenges. The requirements of these applications are challenging, as many different aspects need to be catered for, such as resource access and sharing, communication between peers, group management, activity flow, etc. Robustness is particularly important for learning scenarios to guarantee undisturbed and smooth user experiences, pushing the technological aspects into the background. Despite significant research in the field of mobile learning, very few efforts have focused on collaborative mobile learning requirements from a software engineering perspective. This paper focuses on aspects of the software architecture, aiming to address the challenges related to resource sharing in collaborative mobile learning activities. This includes elements such as autonomy for personal interactive learning, richness for large group collaborative learning (indoor and outdoor), as well as robustness of the learning system. Additionally, we present self-adaptation as a solution to mitigate risks of resource unavailability and organization failures that arise from environment and system dynamism. Our evaluation provides indications regarding the system correctness with respect to resource sharing and collaboration concerns, and offers qualitative evidence of self-adaptation benefits for collaborative mobile learning applications.
Engineering multi-agent systems (MAS) is known to be a complex task. One of the reasons lies in the complexity of combining the multiple concerns that a MAS is expected to address, such as system functionality, coordination, robustness, etc. A well-recognized approach to manage system complexity is the use of self-adaptive (SA) mechanisms. Self-adaptation allows adjusting the system behavior in order to achieve certain software qualities (optimization, fault-tolerance, etc.). The key idea behind self-adaptation is complexity management through separation of concerns. In this paper we introduce SA-MAS, an architectural approach that integrates the functionalities provided by a MAS with software qualities offered by a SA solution. The paper presents a reference model for SA-MAS and applies it to a mobile learning case, in which we deal with robustness properties. In addition, we apply formal verification techniques as an approach to guarantee the requirements of the SA-MAS application.