The continuous increase in the quantity and sophistication of cyber attacks is making it increasingly difficult and error-prone for system administrators to handle the alerts generated by Intrusion Detection Systems (IDSs). To deal with this problem, several Intrusion Response Systems (IRSs) have been proposed in recent years. IRSs extend IDSs by providing an automatic response to the detected attack. Such a response is usually selected either through a static attack-response mapping or by quantitatively evaluating all the available responses against a set of pre-defined criteria. In this paper, we introduce a probabilistic model-based IRS built on the Markov Decision Process (MDP) framework. In contrast with most existing approaches to intrusion response, the proposed IRS effectively captures the dynamics of both the defended system and the attacker, and is able to compose atomic response actions to plan optimal multi-objective long-term response policies to protect the system. We evaluate the effectiveness of the proposed IRS by showing that long-term response planning always outperforms short-term planning, and we conduct a thorough performance assessment to show that the proposed IRS can be adopted to protect large distributed systems at run time.
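The contrast the abstract draws between short-term selection and long-term policy planning can be illustrated with a minimal MDP solved by value iteration. The states, response actions, transition probabilities, and rewards below are invented for illustration and are far simpler than the paper's actual model; the point is only that the optimal policy is computed over the long-run discounted reward rather than per-alert.

```python
# Toy MDP for intrusion response: invented states, actions, and numbers.
states = ["healthy", "probed", "compromised"]
actions = ["monitor", "block_ip", "restore"]

# P[s][a] -> list of (next_state, probability); R[s][a] -> immediate reward
P = {
    "healthy":     {"monitor":  [("healthy", 0.9), ("probed", 0.1)],
                    "block_ip": [("healthy", 1.0)],
                    "restore":  [("healthy", 1.0)]},
    "probed":      {"monitor":  [("probed", 0.6), ("compromised", 0.4)],
                    "block_ip": [("healthy", 0.8), ("probed", 0.2)],
                    "restore":  [("healthy", 1.0)]},
    "compromised": {"monitor":  [("compromised", 1.0)],
                    "block_ip": [("compromised", 0.7), ("probed", 0.3)],
                    "restore":  [("healthy", 0.9), ("compromised", 0.1)]},
}
R = {"healthy":     {"monitor": 1.0,  "block_ip": 0.5,  "restore": -1.0},
     "probed":      {"monitor": -0.5, "block_ip": 0.3,  "restore": -1.0},
     "compromised": {"monitor": -5.0, "block_ip": -2.0, "restore": -1.5}}

def value_iteration(gamma=0.95, eps=1e-6):
    """Compute the long-horizon optimal response policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(actions,
                     key=lambda a: R[s][a] +
                     gamma * sum(p * V[t] for t, p in P[s][a]))
              for s in states}
    return V, policy

V, pi = value_iteration()
print(pi)  # {'healthy': 'monitor', 'probed': 'block_ip', 'compromised': 'restore'}
```

Note that in the "probed" state the long-term policy prefers "block_ip" even though "monitor" is cheaper immediately, because monitoring risks a costly transition to "compromised"; a greedy per-alert selector would miss this.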
A central problem in swarm robotics is to design a controller that allows the member robots of the swarm to collectively perform a given task. Of particular interest in massively distributed applications are controllers with severely limited computational and sensory abilities. In this paper, we give the results of the first computational complexity analysis of the swarm design problem relative to what is arguably the simplest possible type of reactive controller, the so-called computation-free controller proposed by Gauci et al. We show that computation-free swarm design is not polynomial-time solvable either in general or by the most desirable types of approximation algorithms (including evolutionary algorithms with high probability of producing correct solutions), but is solvable in effectively polynomial time under several types of restrictions on swarms, environments, and tasks. We also show that all of our intractability and inapproximability results hold for the design of any type of reactive swarm (including those based on the popular feedforward neural network and Brooks-style subsumption controllers) whose member robots satisfy two simple conditions.
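To make "computation-free" concrete: in the controller model of Gauci et al., each robot carries a single binary sensor, and the controller is nothing more than a fixed lookup from that sensor bit to a pair of wheel velocities, with no memory or arithmetic. A minimal sketch, with illustrative parameter values that are our own and not taken from the paper:

```python
# Sketch of a computation-free reactive controller: the entire "design"
# is four real numbers mapping one sensor bit to two wheel speeds.
def make_controller(v_left_0, v_right_0, v_left_1, v_right_1):
    """Return a stateless controller: sensor bit -> (left, right) wheel speeds."""
    table = {0: (v_left_0, v_right_0), 1: (v_left_1, v_right_1)}
    def controller(sees_robot):
        return table[sees_robot]
    return controller

# Example (invented) parameter choice:
ctrl = make_controller(-0.7, -1.0, 1.0, -1.0)
print(ctrl(0))  # wheel speeds when nothing is in view: (-0.7, -1.0)
print(ctrl(1))  # wheel speeds when another robot is in view: (1.0, -1.0)
```

Designing a swarm in this model thus reduces to choosing four parameters, which is what makes the intractability results striking: even this search space is hard in general.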
Proactive latency-aware adaptation is an approach for self-adaptive systems that considers both the current and anticipated adaptation needs when making adaptation decisions, taking into account the latency of the available adaptation tactics. Since this is a problem of selecting adaptation actions in the context of the probabilistic behavior of the environment, Markov decision processes (MDPs) are a suitable approach. However, given all the possible interactions between the different and possibly concurrent adaptation tactics, the system, and the environment, constructing the MDP is a complex task. Probabilistic model checking has been used to deal with this problem, but it requires constructing the MDP every time an adaptation decision is made to incorporate the latest predictions of the environment behavior. In this article, we describe PLA-SDP, an approach that eliminates that run-time overhead by constructing most of the MDP offline. At run time, the adaptation decision is made by solving the MDP through stochastic dynamic programming, weaving in the environment model as the solution is computed. We also present extensions that support different notions of utility, such as maximizing reward gain subject to the satisfaction of a probabilistic constraint, making PLA-SDP applicable to systems with different kinds of adaptation goals.
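The run-time step described above, solving the MDP by stochastic dynamic programming while weaving in the environment model, can be sketched as backward induction over a finite decision horizon, with the environment's predicted transition probabilities supplied at solve time rather than baked into the model. The system states, tactics, reward function, and numbers below are invented for illustration and are not PLA-SDP's actual model.

```python
# Backward induction over a short horizon; the environment prediction
# (env_model) is an input to the solver, not part of the offline model.
HORIZON = 3
SYSTEM_STATES = ["one_server", "two_servers"]
TACTICS = {"one_server": ["stay", "add_server"],
           "two_servers": ["stay", "remove_server"]}
NEXT = {("one_server", "add_server"): "two_servers",
        ("one_server", "stay"): "one_server",
        ("two_servers", "remove_server"): "one_server",
        ("two_servers", "stay"): "two_servers"}

def reward(state, load):
    """Invented utility: requests served minus server operating cost."""
    served = min(load, 10 if state == "one_server" else 20)
    cost = 1 if state == "one_server" else 2
    return served - cost

def solve(env_model):
    """env_model[t] -> list of (load, probability) predictions for step t."""
    V = {s: 0.0 for s in SYSTEM_STATES}  # value beyond the horizon
    policy = {}
    for t in reversed(range(HORIZON)):
        newV = {}
        for s in SYSTEM_STATES:
            best_a, best_v = None, float("-inf")
            for a in TACTICS[s]:
                nxt = NEXT[(s, a)]
                v = sum(p * (reward(nxt, load) + V[nxt])
                        for load, p in env_model[t])
                if v > best_v:
                    best_a, best_v = a, v
            newV[s] = best_v
            policy[(t, s)] = best_a
        V = newV
    return policy

# Predicted load distributions per step (invented):
env_model = {0: [(8, 0.7), (18, 0.3)],
             1: [(18, 0.6), (8, 0.4)],
             2: [(18, 0.9), (8, 0.1)]}
pi = solve(env_model)
print(pi[(0, "one_server")])  # first tactic chosen when under-provisioned
```

Because only `env_model` changes between adaptation decisions, everything else here could be prepared offline, which is the source of the run-time savings the abstract describes.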
TAAS 12:4 Farewell Editorial
We present design concepts, programming constructs, and automatic verification techniques to support the development of adaptive Wireless Sensor Network (WSN) software. WSNs operate at the interface between the physical world and the computing machine, and are hence exposed to unpredictable environment dynamics. WSN software must adapt to these dynamics to maintain dependable and efficient operation. While significant literature exists on the necessary adaptation logic, developers are left without proper support in materializing such logic in a running system. Our work fills this gap with three contributions: i) design concepts that help developers organize the adaptive functionality and understand the relations among its parts, ii) dedicated programming constructs that simplify its implementation, and iii) custom verification techniques that allow developers to check the correctness of their designs before deployment. We implement tool support that ties the three contributions together, facilitating their application. Our evaluation considers representative WSN applications to analyze code metrics, synthetic simulations, and cycle-accurate emulation of popular WSN platforms. The results indicate that our work simplifies the development of adaptive WSN software; for example, implementations are provably easier to test, the run-time overhead of our programming constructs is negligible, and our verification techniques return results in a matter of seconds.
In this paper, we explore the efficacy of dynamic effective capacity modulation (i.e., using virtualization techniques to offer lower resource capacity than that advertised by the cloud provider) as a control knob for a cloud provider's profit maximization, complementing the better-studied approach of dynamic pricing. In particular, our focus is on emerging cloud ecosystems wherein we expect tenants to modify their demands strategically in response to such modulation in effective capacity and prices. To this end, we consider a simple model of a cloud provider that offers a single type of virtual machine to its tenants and devise a leader/follower game-based cloud control framework to capture the interactions between the provider and its tenants. We assume both parties employ myopic control and short-term predictions, reflecting their operation under the high dynamism and poor predictability of such environments. Our evaluation, using a combination of real data center traces and real-world benchmarks hosted on a prototype OpenStack-based cloud, shows 10-30% profit improvement for a cloud provider compared with baselines that use static pricing and/or static effective capacity.
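The leader/follower structure described above can be sketched as a one-shot Stackelberg step: the provider (leader) enumerates price and effective-capacity choices, anticipates the tenant's (follower's) best-response demand for each, and picks the pair maximizing its own profit. The utility and cost functions and all numbers below are invented for illustration; the paper's model and its myopic, prediction-driven dynamics are richer.

```python
# One myopic leader/follower step with invented utilities and a small grid.
import itertools

PRICES = [1.0, 1.5, 2.0]          # price per VM
CAPACITY_FACTORS = [0.8, 1.0]     # fraction of advertised capacity delivered
DEMANDS = range(0, 11)            # VMs the tenant may request

def tenant_utility(demand, price, cap_factor):
    # Value derived from effective capacity actually received, minus payment.
    return 3.0 * (demand * cap_factor) ** 0.5 - price * demand

def tenant_best_response(price, cap_factor):
    """Follower: demand maximizing the tenant's own utility."""
    return max(DEMANDS, key=lambda d: tenant_utility(d, price, cap_factor))

def provider_profit(demand, price, cap_factor):
    operating_cost = 0.5 * demand * cap_factor  # throttled VMs cost less to run
    return price * demand - operating_cost

# Leader: choose (price, capacity factor) anticipating the follower's response.
price, cap = max(itertools.product(PRICES, CAPACITY_FACTORS),
                 key=lambda pc: provider_profit(tenant_best_response(*pc), *pc))
demand = tenant_best_response(price, cap)
print(price, cap, demand, provider_profit(demand, price, cap))
```

In this toy instance the provider's optimum combines the high price with the reduced effective capacity, illustrating why capacity modulation can complement pricing as a second profit knob.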