Self-adaptive systems are systems that automatically adapt in response to environmental and internal changes, such as possible failures and variations in resource availability. Such systems are often realized by a MAPE-K feedback loop, where the Monitor, Analyze, Plan, and Execute components have access to a runtime model of the system and environment that is kept in the Knowledge component. To provide guarantees on the correctness of a self-adaptive system at runtime, the MAPE-K feedback loop needs to be extended with assurance techniques. To address this issue, we propose a coordinated actor-based approach to build a reusable and scalable model@runtime for self-adaptive systems in the domain of track-based traffic control systems. We demonstrate the approach by implementing an automated Air Traffic Control (ATC) system using the Ptolemy tool. We compare different adaptation policies on the ATC model based on performance metrics and analyze combinations of policies in different configurations of the model. We enrich our framework with runtime performance analysis such that, for any unexpected change, the subsequent behavior of the model is predicted and the results are used for adaptation at the change point. Moreover, the developed framework enables checking safety properties at runtime.
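For readers unfamiliar with the MAPE-K pattern referred to above, the following is a minimal sketch of such a feedback loop. It is written in Java purely for illustration; all types and names (Knowledge, Monitor, Analyzer, and so on) are assumptions of this sketch, not the API of the Ptolemy-based framework described in the paper.

    import java.util.List;
    import java.util.Optional;

    /** Minimal MAPE-K loop sketch; all types are illustrative placeholders. */
    public class MapeKLoop {

        /** Runtime model of the system and environment (the "K" in MAPE-K). */
        static class Knowledge {
            // e.g., aircraft positions, sub-track availability, resource levels, ...
        }

        interface Monitor  { void sense(Knowledge k); }              // update the runtime model
        interface Analyzer { Optional<String> detect(Knowledge k); } // report a symptom, if any
        interface Planner  { List<Runnable> plan(String symptom, Knowledge k); } // adaptation steps
        interface Executor { void execute(List<Runnable> actions); } // apply them to the system

        private final Knowledge k = new Knowledge();
        private final Monitor monitor;
        private final Analyzer analyzer;
        private final Planner planner;
        private final Executor executor;

        MapeKLoop(Monitor m, Analyzer a, Planner p, Executor e) {
            monitor = m; analyzer = a; planner = p; executor = e;
        }

        /** One iteration of the feedback loop, typically run periodically. */
        void step() {
            monitor.sense(k);
            analyzer.detect(k).ifPresent(symptom -> executor.execute(planner.plan(symptom, k)));
        }
    }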
Self-adaptation is a well-known technique for handling the growing complexity of software systems, where a system autonomously adapts itself in response to changes in a dynamic and unpredictable environment. With the increasing need for developing self-adaptive systems, it is crucial to provide a model and an implementation platform that facilitate the integration of adaptation mechanisms into systems and assure their safety and quality. In this paper, we target Track-based Traffic Control Systems (TTCSs), in which the traffic flows through pre-specified sub-tracks and is coordinated by a traffic controller. We introduce a coordinated actor model to design self-adaptive TTCSs and provide a general mapping between various TTCSs and the coordinated actor model. The coordinated actor model is extended to build large-scale self-adaptive TTCSs in a decentralized setting. We also discuss the benefits of using Ptolemy II as a framework for model-based development of large-scale self-adaptive systems that supports designing multiple hierarchical MAPE-K feedback loops interacting with each other. We propose a template based on the coordinated actor model for designing a self-adaptive TTCS in Ptolemy II that can be instantiated for various TTCSs, and we enhance the proposed template with a predictive adaptation feature. We illustrate the applicability of the coordinated actor model, and consequently of the proposed template, by designing two real-life case studies in the domains of air traffic control and railway traffic control in Ptolemy II.
In the realm of sound object-oriented program analyses for information-flow control, very few approaches adopt flow-sensitive abstractions of the heap that enable a precise modeling of implicit flows. To tackle this challenge, we advance a new symbolic abstraction approach for modeling the heap in Java-like programs. We use a store-less representation that is parameterized with a family of relations among references to offer various levels of precision based on user preferences. This enables us to automatically infer polymorphic information-flow guards for methods via a co-reachability analysis of a symbolic finite-state system. We instantiate the heap abstraction with three different families of relations. We prove the soundness of our approach and compare the precision and scalability obtained with each instantiated heap domain by using the IFSpec benchmarks and real-life applications.
Today’s digital world and evolving technology have improved the quality of our lives, but they have also brought a number of new threats. In the society of smart cities and Industry 4.0, where many cyber-physical devices connect and exchange data through the Internet of Things, the need to address information security and resolve system failures becomes inevitable. System failures can occur because of hardware failures, software bugs, or interoperability issues. In this paper, we introduce the industry-originated concept of “smart-troubleshooting”: the set of activities and tools needed to gather failure information generated by heterogeneous connected devices, analyze it, and match it with troubleshooting instructions and software fixes. By implementing smart-troubleshooting, a system would be able to self-heal and thus become more resilient. This paper surveys frameworks, methodologies, and tools related to this new concept, especially those needed to model, analyze, and recover from failures in a (semi-)automatic way. Smart-troubleshooting is related to event analysis for performing diagnostics and prognostics on devices manufactured by different suppliers in a distributed system. It also addresses the management of relevant product information, often specified in unstructured formats, to guide the troubleshooting workflow in identifying fault causes and solutions. Relevant research is briefly surveyed to highlight the current state of the art, open issues, challenges to be tackled, and future opportunities in this emerging industry paradigm.
A separation kernel simulates a distributed environment on a single physical machine by executing partitions in isolation and appropriately controlling communication among them. We present a formal verification of information flow security for a simple separation kernel for ARMv7. Previous work on kernel information flow security leaves communication to be handled by model-external means and cannot be used to draw conclusions when there is explicit interaction between partitions. We propose a different approach in which communication between partitions is made explicit and the information flow is analyzed in the presence of such a channel. Limiting the kernel functionality as much as meaningfully possible, we accomplish a detailed analysis and verification of the system, proving its correctness at the level of ARMv7 assembly. As a sanity check, we show how the security condition reduces to noninterference in the special case where no communication takes place. The verification is done in HOL4, taking the Cambridge model of ARM as a basis and transferring verification tasks on the actual assembly code to an adaptation of the BAP binary analysis tool developed at CMU.
IT ecosystems - systems composed of a large number of distributed, autonomous, cooperating, decentralized, interacting, organically grown, heterogeneous, and continually evolving subsystems - are the coming generation of systems. Today’s state of the art does not enable us to develop such systems. Within the NTH Focused Research School for IT Ecosystems, the research project AIM deals with methods and tools to guarantee the functionality of a complex IT ecosystem, especially when a top-down design is no longer possible. To this end, adaptive information and collaboration architectures that account for the independent evolution of subsystems, as well as suitable control mechanisms, are examined. This technical report analyzes how the adaptive behavior of subsystems can be modeled adequately by standard formalisms for behavioral modeling (e.g., UML) as well as by advanced approaches for modeling adaptive behavior (e.g., PobSAM). We apply the selected modeling languages to a fictional case study, an airport departure scenario. The smart airport itself can be seen as an IT ecosystem due to the complexity of the interacting systems.
In this paper, we propose a sound method to synthesize a permissive monitor using Boolean supervisory controller synthesis; the monitor observes a Java program at certain checkpoints, predicts information flow violations, and applies suitable countermeasures to prevent them. We introduce an approach for modeling the heap and information flow via the heap. To improve permissiveness, we train the monitor and remove false positives by executing the program along with its executable model. If a security violation is detected, the user can define sound countermeasures, including declassification, to apply at checkpoints. We prove that the monitored program ensures localized delimited release when information is declassified and termination-insensitive noninterference when no declassification takes place. We implement a tool to automate the whole process and generate a monitor. Our method is evaluated by applying it to the DroidBench benchmark and to a real-life Android application.
To realize correct adaptive and reconfigurable systems, we need techniques to assure that the behavior of an adaptive system during dynamic adaptation is correct. In this paper, we propose a modular approach to synthesize a symbolic reconfiguration controller that guides the behavior of a system during adaptation under partial observations. The reconfiguration controller partially observes the system behavior during an adaptation and controls it by allowing or disallowing actions so as to ensure that a given property is satisfied and deadlock is avoided.
Today's software systems need to adapt their behavior due to changes in their operational environments and user requirements. To this end, adaptive software performs a sequence of adaptations at runtime. Ensuring the correctness of the behavior of an adaptive software system during dynamic adaptation is an important challenge on the way to realizing correct adaptive systems. In this research, we model adaptation as a supervisory control problem and synthesize a controller that guides the behavior of a software system during adaptation. The system during adaptation is modeled using a graph transition system, and the properties to be enforced are specified using an automaton. To ensure correctness, we then synthesize a controller that imposes constraints on the system during adaptation.
Correctness of the behavior of an adaptive system during dynamic adaptation is an important challenge in realizing correct adaptive systems. Dynamic adaptation refers to changes both to the functionality of the computational entities that comprise a composite system and to the structure of their interconnections, in response to variations in the environment, e.g., the load of requests on a server system. In this research, we view the problem of correct structural adaptation as a supervisory control problem and synthesize a reconfiguration controller that guides the behavior of a system during adaptation. The reconfiguration controller observes the system behavior during an adaptation and controls it by allowing or disallowing actions so as to ensure that a given property is satisfied and deadlock is avoided. The system during adaptation is modeled using a graph transition system, and the properties to be enforced are specified using a graph automaton. We adapt a classical theory of supervisory control to synthesize a controller for systems modeled as graph transition systems. This theory is used to synthesize a controller that can impose both behavioral and structural constraints on the system during an adaptation. We apply a tool that we have implemented to support our approach on a case study involving HTTPS servers.
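As a rough illustration of the allow/disallow role of such a controller (deliberately ignoring the graph-transition-system formalism of the paper), the Java sketch below runs a property automaton alongside the adapting system and permits an action only if the automaton has a transition for it from its current state. All names are illustrative assumptions.

    import java.util.Map;

    /** Toy reconfiguration controller: permits an action only if the property
     *  automaton can take it from the current state. Illustrative only. */
    public class ReconfigController {
        // transitions: state -> (action -> next state)
        private final Map<String, Map<String, String>> transitions;
        private String state;

        ReconfigController(Map<String, Map<String, String>> transitions, String initial) {
            this.transitions = transitions;
            this.state = initial;
        }

        /** Asked by the system before it performs an action during adaptation. */
        boolean allowed(String action) {
            return transitions.getOrDefault(state, Map.of()).containsKey(action);
        }

        /** Called after a permitted action has actually been performed. */
        void advance(String action) {
            if (!allowed(action)) throw new IllegalStateException("disallowed: " + action);
            state = transitions.get(state).get(action);
        }
    }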
Intrusion detection alone can no longer satisfy the security needs of an organization. Recently, the attention of the security community has turned to automatic intrusion response and prevention as techniques to protect network resources and to reduce attack damage. Knowing attack scenarios enables the system administrator to respond to threats swiftly, either by blocking the attacks or by preventing them from escalating. Alert correlation is a technique for extracting attack scenarios by investigating the correlation of intrusion detection system alerts. In this paper, we propose a new learning-based method for alert correlation that employs supervised and transductive learning techniques. Using this method, we are able to extract attack scenarios automatically.
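The learning machinery itself is the paper's contribution and is not reproduced here; purely to illustrate the correlation step, the Java sketch below chains each incoming alert onto a scenario whenever a classifier judges it correlated with the scenario's last alert. The classifier is a hypothetical stand-in for the trained supervised/transductive model.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.BiPredicate;

    /** Illustrative alert-correlation step; the BiPredicate stands in for a trained model. */
    public class AlertCorrelator {
        record Alert(String srcIp, String dstIp, String signature, long time) {}

        private final BiPredicate<Alert, Alert> correlated; // hypothetical trained classifier

        AlertCorrelator(BiPredicate<Alert, Alert> correlated) { this.correlated = correlated; }

        /** Greedily extends attack scenarios with each incoming alert. */
        List<List<Alert>> correlate(List<Alert> alerts) {
            List<List<Alert>> scenarios = new ArrayList<>();
            for (Alert a : alerts) {
                List<Alert> scenario = scenarios.stream()
                        .filter(s -> correlated.test(s.get(s.size() - 1), a))
                        .findFirst().orElse(null);
                if (scenario != null) {
                    scenario.add(a);
                } else {
                    List<Alert> fresh = new ArrayList<>();
                    fresh.add(a);
                    scenarios.add(fresh);
                }
            }
            return scenarios;
        }
    }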
Packet-filtering firewalls play an important role in providing security in IP networks: they control the traversal of packets across the boundaries of a secured network based on a specific security policy. Manually configuring packet-filtering firewalls can be extremely complex and error-prone, and may therefore be performed in a way that does not conform to the security policies. Hence, we need an approach to analyze the configuration of all packet-filtering firewalls in a network in order to discover all policy violations. In this article, we introduce an approach based on description logics to verify the configuration of all the firewalls in a network against the security policies. Using this approach, system managers can express and analyze security policies in a formal and simple language. This high-level language is extensible and topology-independent. In this approach, we first automatically transform high-level security policies into low-level policies, i.e., filtering rules. We then develop an algorithm to discover policy violations, which takes the configuration of the firewalls, the network topology, routing information, and the low-level security policies as input and determines the existing policy violations as output.
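Setting the description-logic machinery aside, the essence of the violation check can be pictured as follows: a minimal Java sketch that flags a violation whenever a firewall's filtering decision for a traffic class differs from what a low-level policy prescribes. The rule and policy shapes are simplified assumptions; the paper's algorithm additionally takes topology and routing into account.

    import java.util.List;

    /** Simplified conformance check of filtering rules against low-level policies. */
    public class FirewallCheck {
        record Rule(String src, String dst, int port, boolean accept) {}
        record Policy(String src, String dst, int port, boolean allow) {}

        /** First-match semantics with default deny, as in typical packet filters. */
        static boolean accepts(List<Rule> rules, String src, String dst, int port) {
            for (Rule r : rules)
                if (matches(r.src(), src) && matches(r.dst(), dst) && r.port() == port)
                    return r.accept();
            return false;
        }

        static boolean matches(String pattern, String value) {
            return pattern.equals("*") || pattern.equals(value);
        }

        /** A policy is violated if the firewall's decision differs from it. */
        static boolean violates(List<Rule> rules, Policy p) {
            return accepts(rules, p.src(), p.dst(), p.port()) != p.allow();
        }
    }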
The next generation of software systems includes systems composed of a large number of distributed, decentralized, autonomous, interacting, cooperating, organically grown, heterogeneous, and continually evolving subsystems, which we call IT Ecosystems. Clearly, we need novel models and approaches to design and develop such systems that can tackle the long-term evolution and complexity problems. In this paper, our framework for modeling IT Ecosystems combines top-down (centralized control) and bottom-up (self-organizing) approaches. We use a flexible formal model, hierarchical PobSAM, that supports both behavioral and structural adaptation/evolution. We use a detailed, close-to-real-life case study of a smart airport to show how hierarchical PobSAM can be used to model, analyze, and develop an IT Ecosystem. We provide an executable formal specification of the model in Maude, and use the LTL model checking and bounded state-space search provided by Maude to analyze the model. We develop a prototype of our case study designed with hierarchical PobSAM using Java and Ponder2. Due to the complexity of the model, we cannot check all properties at design time using Maude. We therefore propose a new approach for runtime verification of our case study and check several types of properties that we could not verify using model checking. As our model uses dynamic policies, modifiable at runtime, to control the behavior of the system, it gives us a suitable means to react to property violations by modifying policies.
The next generation of software systems includes systems composed of a large number of distributed, decentralized, autonomous, interacting, cooperating, organically grown, heterogeneous, and continually evolving subsystems, which we call IT Ecosystems. Clearly, we need novel models and approaches to design and develop such systems that can tackle the long-term evolution and complexity problems. In this paper, our framework for modeling IT Ecosystems combines centralized control (top-down) and self-organizing (bottom-up) approaches. We use a flexible formal model, HPobSAM, that supports both behavioral and structural adaptation/evolution. We use a detailed, close-to-real-life case study of a smart airport to show how HPobSAM can be used to model, analyze, and develop an IT Ecosystem. We provide an executable formal specification of the model in Maude, and use the LTL model checking and bounded state-space search provided by Maude to analyze the model. We develop a prototype of our case study designed with HPobSAM using Java and Ponder2. Due to the complexity of the model, we cannot check all properties at design time using Maude. We therefore propose a new approach for runtime verification of our case study and check several types of properties that we could not verify using model checking. As our model uses dynamic policies, modifiable at runtime, to control the behavior of systems, it gives us a suitable means to react to property violations by modifying policies.
In this paper, we present a formal model, named PobSAM (Policy-based Self-Adaptive Model), for developing and modeling self-adaptive evolving systems. In this model, policies are used as a mechanism to direct and adapt the behavior of self-adaptive systems. A PobSAM model is a collection of autonomous managers and managed actors. The managed actors are dedicated to the functional behavior, while the autonomous managers govern the behavior of the managed actors by enforcing suitable policies. A manager has a set of configurations including two types of policies: governing policies and adaptation policies. To adapt the system behavior in response to changes, the managers switch among different configurations. We employ a combination of an algebraic formalism and an actor-based model to specify this model formally. Managed actors are expressed by an actor model. Managers are modeled as meta-actors whose configurations are described using a multi-sorted algebra called CA. We provide an operational semantics for PobSAM using labeled transition systems. Furthermore, we define behavioral equivalence for the different sorts of CA in terms of splitting bisimulation and prioritized splitting bisimulation. Equivalent managers send the same set of messages to the actors. Using our behavioral equivalence theory, we can prove that the overall behavior of the system is preserved when a manager is substituted by an equivalent one.
In this paper, we present a formal model, named PobSAM (Policy-based Self-Adaptive Model), for developing and modeling self-adaptive systems. In this model, policies are used as a mechanism to direct and adapt the behavior of self-adaptive systems. A PobSAM model consists of a set of self-managed modules (SMMs). An SMM is a collection of autonomous managers and managed actors. Managed actors are dedicated to the functional behavior, while autonomous managers govern the behavior of managed actors by enforcing suitable policies. To adapt an SMM's behavior in response to changes, the policies governing the SMM are adjusted, i.e., dynamic policies are used to govern and adapt the system behavior. We employ a combination of an algebraic formalism and an actor-based model to specify this model formally. Managers are modeled as meta-actors whose policies are described using an algebra, while managed actors are expressed by an actor model. Furthermore, we provide an operational semantics for PobSAM using labeled transition systems.
PobSAM is a flexible actor-based model with a formal foundation for model-based development of self-adaptive systems. In PobSAM, policies are used to control and adapt the system behavior, allowing us to decouple adaptation concerns from the application code. In this paper, we use the actor-based language Rebeca to model check PobSAM models. Since policies are used to govern the system behavior, it must be verified that the governing policies are enforced correctly. To this end, we present a new generic classification of policy conflicts and provide temporal patterns, expressed in LTL, to detect each class of conflicts. Moreover, we propose LTL patterns for checking the correctness of adaptation. An approach based on static analysis of adaptation policies is also presented to check system stability.
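To give a flavor of such temporal patterns (an illustrative instance, not a formula taken from the paper), a basic modality conflict, in which one governing policy obliges an action that another policy simultaneously prohibits, can be ruled out by a pattern of the form

    \[
      \mathbf{G}\,\neg\bigl(\mathit{obliged}(a) \,\wedge\, \mathit{prohibited}(a)\bigr)
    \]

where obliged(a) and prohibited(a) are atomic propositions derived from the active policy set in each state.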
An approach for modelling adaptive complex systems should be flexible and scalable, to allow a system to grow easily, and should have a formal foundation, to guarantee the correctness of the system behavior. In this paper, we present the architecture and the formal syntax and semantics of HPobSAM, a model for specifying behavioral and structural adaptations in large-scale systems that also addresses reusability concerns. Self-adaptive modules are used as the building blocks to structure a system, and policies are used as the mechanism to perform both behavioral and structural adaptations. While a self-adaptive module is autonomous in achieving its local goals by collaborating with other self-adaptive modules, it is controlled by a higher-level entity to prevent undesirable behavior. HPobSAM is formalized using a combination of algebraic, graph-transformation-based, and actor-based formalisms.
We review and compare three notions of conformance testing for cyber-physical systems. We begin with a review of their underlying semantic models and present conformance-preserving translations between them. We identify the differences in the underlying semantic models and the various design decisions that lead to these substantially different notions of conformance testing. Learning from this exercise, we reflect upon the challenges in designing an "ideal" notion of conformance for cyber-physical systems and sketch a roadmap of future research in this domain.
In this paper, we formally verify security properties of the ARMv7 Instruction Set Architecture (ISA) for user-mode executions. To obtain guarantees that arbitrary (and unknown) user processes are able to run isolated from privileged software and other user processes, instruction-level noninterference and integrity properties are provided, along with proofs that transitions to privileged modes can only occur in a controlled manner. This work establishes a main requirement for operating system and hypervisor verification, as demonstrated for the PROSPER separation kernel. The proof is performed in the HOL4 theorem prover, taking the Cambridge model of ARM as a basis. To this end, a proof tool has been developed that assists the semi-automatic verification of relational state predicates.
An important challenge in realizing dynamic adaptation is finding suitable components for substitution or interaction according to the current context. A possible solution is checking the behavioral equivalence of components in different contexts: two components are equivalent with respect to a context if they behave equivalently in that context. In this work, we deal with the context-specific behavioral equivalence of PobSAM components. PobSAM is a flexible formal model for developing and modeling evolving self-adaptive systems. A PobSAM model is a collection of actors, views, and autonomous managers. Autonomous managers govern the behavior of actors by enforcing suitable context-based policies. Views provide contextual information that managers use to control and adapt the actors' behavior. Managers are the core components used to realize adaptation by changing their policies. They are modeled as meta-actors whose configurations are described using a multi-sorted algebra called CA. The behavior of managers depends on the context in which they are executing. In this paper, we present an equational theory to reason about the context-specific behavioral equivalence of managers independently from actors. To this end, we introduce and axiomatize a new operator that captures the interaction of managers with the context. This equational theory is based on the notion of state-based bisimilarity and allows us to reason about the behavioral equivalence of managers as well as of their constituents (i.e., policies and configurations). We illustrate our approach through an example.
Smart spaces contain a large number of computing devices that communicate with each other to perform various high-order tasks. They are governed by predefined policies that users can set according to their preferences. In this paper, we investigate the policy interaction problem beyond the smart home domain. We use a formal method for detecting dynamic conflicts between policies. First, we give an abstract model of the system described with an actor-based language. Then, we identify different kinds of conflicts that may exist among policies in the smart home domain. To reduce the complexity of model checking, we use compositional verification as well as data abstraction techniques.
To reason about and enforce security in dynamic software systems, automated analysis and verification approaches are required. However, such approaches often encounter scalability issues, particularly when employed for runtime analysis, which is necessary in software systems with dynamically changing architectures, such as self-adaptive systems. In this work, we propose an automated formal approach for security analysis of component-based systems with dynamic architectures. This approach leverages formal abstraction and incremental analysis techniques to reduce the complexity of runtime analysis. We have implemented and evaluated our approach against ZNN, a widely known self-adaptive system exemplar. Our experimental results demonstrate the effectiveness of our approach in addressing scalability issues.
In this paper, we propose a new sound method to synthesize a permissive monitor using Boolean supervisory controller synthesis; the monitor observes a Java program at certain checkpoints, predicts information flow violations, and applies suitable countermeasures to prevent them. To improve permissiveness, we train the monitor and remove false positives by executing the program along with its executable model. If a security violation is detected, the user can define sound countermeasures, including declassification, to apply at the checkpoints. We implement a tool that automates the whole process and generates a monitor. We evaluate our method by applying it to the DroidBench benchmark and to a real-life Android application.
Like any software system, a self-adaptive system is subject to security threats; moreover, applying self-adaptation may introduce additional threats. So far, little research has been devoted to this important problem. In this paper, we propose an approach for vulnerability analysis of architecture-based adaptations in self-adaptive systems using threat modeling and analysis techniques. To this end, we formally specify the components' vulnerabilities and the system architecture, and we generate an attack model that describes the attacker's strategies for attacking the system by exploiting different vulnerabilities. We use a set of security metrics to quantitatively assess the security risks of adaptations based on the produced attack model, which enables the system to take security aspects into account when choosing an adaptation to apply. We automate our approach and incorporate it into the Rainbow framework, allowing for secure architectural adaptations at runtime. To evaluate its effectiveness, we apply it to a simple document storage system and to the ZNN system.
Nowadays, service-oriented architecture receives strong attention as an important approach to integrating heterogeneous systems, in which complex services are created by composing simpler services offered by various systems. Correctness of composition requires techniques to verify whether the composite service behaves properly. To this end, in this paper we propose a new method for runtime monitoring of composite services that uses Communicating Sequential Processes (CSP) to specify properties formally. The CSP specification of the properties is then translated into a Labeled Transition System (LTS). To verify the safety of a composite service, we traverse the generated LTS at runtime. Most existing methods use temporal logic to specify safety properties; using CSP instead has two advantages: 1) the similarity between CSP operators and service composition patterns makes CSP straightforward for users to apply; 2) some properties that cannot be specified in temporal logic can be expressed in CSP.
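The runtime traversal itself is simple; the Java sketch below illustrates it, assuming the CSP property has already been translated into an explicit LTS (the translation, which is part of the paper's method, is not shown). A safety violation is flagged when an observed event has no outgoing transition from the current state.

    import java.util.Map;

    /** Runtime safety monitor over an LTS obtained from a CSP property. */
    public class LtsMonitor {
        private final Map<Integer, Map<String, Integer>> lts; // state -> event -> next state
        private int state;

        LtsMonitor(Map<Integer, Map<String, Integer>> lts, int initial) {
            this.lts = lts;
            this.state = initial;
        }

        /** Feed each observed service event; false signals a safety violation. */
        boolean observe(String event) {
            Integer next = lts.getOrDefault(state, Map.of()).get(event);
            if (next == null) return false; // no transition: property violated
            state = next;
            return true;
        }
    }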
As threats to computer security become more common, complex, and frequent, systems that can automatically protect themselves from attacks are urgently needed. In this paper, we propose a formal approach to achieve self-protection by performing security analysis on self-adaptive systems, taking the adaptation process into account. We use probabilistic model checking to quantitatively analyze adaptation security, rank the available strategies, and select the most secure one to apply in the system. We have incorporated our approach into Rainbow, a framework for developing architecture-based self-adaptive systems. To evaluate our approach's effectiveness, we applied it to two case studies: a simple document storage system and ZNN, a well-known self-adaptive exemplar. The results show that applying our approach can guarantee a reasonable degree of security, both during and after adaptation.
Modern software systems are decentralized, distributed, and dynamic, and consequently, require decentralized mechanisms to enforce security. In this paper, we propose an adaptive approach using a combination of decentralized information flow control (DIFC) mechanisms, trust-based methods and decentralized control architectures to enforce security in open distributed systems. In our approach, adaptivity mitigates two aspects of the system dynamics that cause uncertainty: the ever-changing nature of trust and system openness. We formalize our trust-aware DIFC model and instantiate two decentralized control architectures to implement and evaluate it. We evaluate the effectiveness and performance of our method and decentralized control architectures on two case studies.
Modern software systems and their corresponding architectures are increasingly decentralized, distributed, and dynamic. As a consequence, decentralized mechanisms are required to ensure security in such architectures. Decentralized Information Flow Control (DIFC) is a mechanism to control information flow in distributed systems. This article presents and discusses several improvements to an adaptive decentralized information flow approach that incorporates trust to provide security in decentralized systems. Adaptive Trust-Aware Decentralized Information Flow (AT-DIFC+) combines decentralized information flow control mechanisms, trust-based methods, and decentralized control architectures to control and enforce information flow in an open, decentralized system. We strengthen our approach against newly discovered attacks and provide additional information about its reconfiguration, decentralized control architectures, and reference implementation. We evaluate the effectiveness and performance of AT-DIFC+ on two case studies and perform additional experiments to gauge the mitigations’ effectiveness against the identified attacks.
Modern software systems and their corresponding architectures are decentralized, distributed, and dynamic. As a consequence, decentralized mechanisms are also required to ensure security in such architectures. Decentralized Information Flow Control (DIFC) is a mechanism to control information flow in distributed systems. However, DIFC mechanisms must resolve specific issues of centralized control and trust. In this paper, we propose an adaptive, trust-aware, decentralized information flow approach that incorporates trust in DIFC for decentralized systems. We employ decentralized feedback loops to enable decentralized control and adaptive trust assignments. In our approach, adaptivity mitigates two aspects of system dynamics that cause uncertainty: the ever-changing nature of trust and system openness. We formalize our trust-aware DIFC model and instantiate two decentralized feedback loop architectures to implement it.
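Underlying the DIFC approaches above is a standard label check; the Java sketch below shows a textbook set-of-tags version of it, purely for orientation. It is not the AT-DIFC+ model, which additionally makes label decisions trust-aware and decentralized.

    import java.util.HashSet;
    import java.util.Set;

    /** Textbook DIFC label check; tags are opaque strings. Trust-aware extensions not shown. */
    public class DifcLabels {
        /** Data may flow from a to b only if no secrecy tag is dropped
         *  and no integrity tag is gained along the way. */
        static boolean canFlow(Set<String> secrecyA, Set<String> integrityA,
                               Set<String> secrecyB, Set<String> integrityB) {
            return secrecyB.containsAll(secrecyA) && integrityA.containsAll(integrityB);
        }

        /** Label of data derived from two sources: union of secrecy tags... */
        static Set<String> joinSecrecy(Set<String> a, Set<String> b) {
            Set<String> s = new HashSet<>(a);
            s.addAll(b);
            return s;
        }

        /** ...and intersection of integrity tags. */
        static Set<String> joinIntegrity(Set<String> a, Set<String> b) {
            Set<String> s = new HashSet<>(a);
            s.retainAll(b);
            return s;
        }
    }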
Given the increasing complexity of software-intensive systems, as well as the sophistication and high frequency of cyber-attacks, automated and sound approaches to selecting countermeasures are required to effectively protect software systems. In this paper, we propose a formal architecture-centered approach to analyze the security of a software-intensive component-based system and find cost-efficient countermeasures that consider both the system architecture and its behavior. We evaluate our approach by applying it to a case study.
In recent years, it has become more challenging for organizations to properly assess the security risks of their assets, as more vulnerabilities are discovered, exploited, and weaponized. Further, attackers usually use complex multi-stage attack strategies to compromise a system and achieve their goals by exploiting several vulnerabilities. The number of affected assets and the strategy the threat actor uses to compromise them often dictate the costs and damages to the organization. When performing risk analysis, in addition to existing vulnerabilities, it is important, but often neglected, to consider the criticality of the data residing in the vulnerable asset. However, graphical threat modeling techniques often do not offer suitable tools for this type of analysis. In this paper, we propose a class of security risk metrics to estimate the cost of an attack, taking into account the criticality of data in addition to the dependencies among vulnerabilities. Our metrics are based on graphical modeling techniques into which we incorporate data criticality. We applied our approach to a real-life case study and obtained promising results.
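To make one plausible reading of such a metric concrete (an illustration, not the exact definition from the paper), the Java sketch below computes the cheapest multi-stage attack path through an acyclic attack graph, together with the criticality-weighted damage a compromised path causes.

    import java.util.List;
    import java.util.Map;

    /** Cheapest attack path in an acyclic attack graph; damage weighs data criticality. */
    public class AttackCost {
        record Step(String vuln, double exploitCost, double dataCriticality) {}

        /** Minimum total exploit cost to reach the goal from an entry step.
         *  The graph maps each vulnerability to its exploitable successors (assumed acyclic). */
        static double minCost(Map<String, List<Step>> graph, Step entry, String goal) {
            if (entry.vuln().equals(goal)) return entry.exploitCost();
            double best = Double.POSITIVE_INFINITY;
            for (Step next : graph.getOrDefault(entry.vuln(), List.of()))
                best = Math.min(best, minCost(graph, next, goal));
            return entry.exploitCost() + best; // stays infinite if the goal is unreachable
        }

        /** Damage of a compromised path: summed criticality of the data on affected assets. */
        static double damage(List<Step> path) {
            return path.stream().mapToDouble(Step::dataCriticality).sum();
        }
    }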
This chapter targets a better understanding of the compositionality of analyses, including different forms of compositionality and specific conditions of composition. Analysis involves models, contexts, and properties. These are all expressed in languages with their own semantics. For a successful composition of analyses, it is therefore important to compose models as well as the underlying languages. We aim to develop a better understanding of what is needed to answer questions such as “When I want to compose two or more analyses, what do I need to take into account?” We describe the elements impacting analysis compositionality, the relation of these elements to analysis, and how composition of analysis relates to compositionality of these elements.
This core chapter addresses Challenge 1 introduced in Chap. 3 of this book (the theoretical foundations—how to compose the underlying languages, models, and analyses).
This chapter gives an introduction to the key concepts and terminology relevant to model-based analysis tools and their composition. The first half of the chapter introduces concepts relevant to modelling and to the composition of models and modelling languages. The second half then focuses on concepts relevant to analysis and analysis composition. This chapter thus lays the foundations for the remainder of the book, ensuring that readers can go through it as a coherent piece.
Attacks against business logic rules occur when an attacker exploits the domain rules in a malicious way. Such attacks have so far not received sufficient attention in research. In this paper, we propose a novel self-protecting approach that defends a system against the exploitation of business logic vulnerabilities. The approach empowers a system with a self-protecting layer that guards it against attacks aimed at misusing business logic rules. The approach maintains up-to-date domain knowledge that is analyzed using runtime verification to detect logical attacks. When attacks are discovered, they are dynamically mitigated by applying appropriate system reconfigurations at runtime. We evaluate the approach using a case from the domain of hotel booking systems.
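A minimal sketch of the detection half of this loop is given below, assuming the domain rule is expressed as a predicate over a sliding window of domain events; the event shape, rule, and reconfiguration hook are all illustrative assumptions rather than the paper's interface.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.Predicate;

    /** Detects business-logic misuse over a sliding window of domain events
     *  and triggers a mitigation; all names are illustrative. */
    public class LogicAttackMonitor {
        record Event(String user, String action, long timeMillis) {}

        private final Deque<Event> window = new ArrayDeque<>();
        private final Predicate<Deque<Event>> ruleViolated; // e.g., "too many holds without booking"
        private final Runnable reconfigure;                 // mitigation applied at runtime
        private final long windowMillis;

        LogicAttackMonitor(Predicate<Deque<Event>> ruleViolated, Runnable reconfigure,
                           long windowMillis) {
            this.ruleViolated = ruleViolated;
            this.reconfigure = reconfigure;
            this.windowMillis = windowMillis;
        }

        /** Feed every domain event; evicts stale events, then checks the rule. */
        void onEvent(Event e) {
            window.addLast(e);
            while (!window.isEmpty() && e.timeMillis() - window.peekFirst().timeMillis() > windowMillis)
                window.removeFirst();
            if (ruleViolated.test(window)) reconfigure.run();
        }
    }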