The continuous increase in the quantity and sophistication of cyber attacks is making it more difficult and error-prone for system administrators to handle the alerts generated by Intrusion Detection Systems (IDSs). To deal with this problem, several Intrusion Response Systems (IRSs) have been proposed lately. IRSs extend IDSs by providing an automatic response to the detected attack. Such a response is usually selected either with a static attack-response mapping or by quantitatively evaluating all the available responses against a set of pre-defined criteria. In this paper, we introduce a probabilistic model-based IRS built on the Markov Decision Process (MDP) framework. In contrast with most existing approaches to intrusion response, the proposed IRS effectively captures the dynamics of both the defended system and the attacker and is able to compose atomic response actions into optimal multi-objective long-term response policies that protect the system. We evaluate the effectiveness of the proposed IRS by showing that long-term response planning always outperforms short-term planning, and we conduct a thorough performance assessment to show that the proposed IRS can be adopted to protect large distributed systems at run-time.
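The abstract does not include code; the sketch below is only meant to illustrate how a long-horizon response policy can be derived in the MDP framework via value iteration. All states, response actions, transition probabilities, and rewards are hypothetical placeholders rather than the authors' model.

```python
# Hedged sketch: value iteration over a toy intrusion-response MDP.
# States, actions, probabilities, and rewards are invented for illustration.

states = ["healthy", "probed", "compromised"]
actions = ["no_op", "block_ip", "restore_service"]

# P[s][a] -> list of (next_state, probability); R[s][a] -> immediate reward
P = {
    "healthy":     {"no_op": [("healthy", 0.9), ("probed", 0.1)],
                    "block_ip": [("healthy", 1.0)],
                    "restore_service": [("healthy", 1.0)]},
    "probed":      {"no_op": [("probed", 0.6), ("compromised", 0.4)],
                    "block_ip": [("healthy", 0.8), ("probed", 0.2)],
                    "restore_service": [("probed", 1.0)]},
    "compromised": {"no_op": [("compromised", 1.0)],
                    "block_ip": [("compromised", 0.7), ("probed", 0.3)],
                    "restore_service": [("healthy", 0.9), ("compromised", 0.1)]},
}
R = {
    "healthy":     {"no_op": 1.0, "block_ip": 0.5, "restore_service": 0.2},
    "probed":      {"no_op": -1.0, "block_ip": 0.3, "restore_service": -0.5},
    "compromised": {"no_op": -5.0, "block_ip": -2.0, "restore_service": -1.0},
}

def value_iteration(gamma=0.95, eps=1e-6):
    """Compute a discounted long-horizon response policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q = {a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                 for a in actions}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(actions,
                     key=lambda a: R[s][a] + gamma *
                     sum(p * V[s2] for s2, p in P[s][a]))
              for s in states}
    return V, policy

if __name__ == "__main__":
    V, policy = value_iteration()
    print(policy)  # one response action per system state, planned long-term
```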
A central problem in swarm robotics is to design a controller that allows the member robots of the swarm to collectively perform a given task. Of particular interest in massively distributed applications are controllers with severely limited computational and sensory abilities. In this paper, we give the results of the first computational complexity analysis of the swarm design problem relative to what is arguably the simplest possible type of reactive controller, the so-called computation-free controller proposed by Gauci et al. We show that computation-free swarm design is not polynomial-time solvable either in general or by the most desirable types of approximation algorithms (including evolutionary algorithms with high probability of producing correct solutions) but is solvable in effectively polynomial time relative to several types of restrictions on swarms, environments, and tasks. We also show that all of our intractability and inapproximability results hold for the design of any type of reactive swarm (including those based on the popular feedforward neural network and Brooks-style subsumption controllers) whose member robots satisfy two simple conditions.
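For readers unfamiliar with the controller class analyzed here, the following minimal sketch shows the shape of a computation-free reactive controller in the spirit of Gauci et al.: a fixed lookup table from a discrete sensor reading to a pair of wheel velocities, with the design problem reduced to choosing the table entries. The parameter values and the fitness stub are illustrative assumptions, not material from the paper.

```python
# Hedged sketch of a computation-free (memoryless, reactive) controller:
# each robot maps a single discrete sensor reading directly to wheel
# velocities, with no state and no run-time arithmetic.  Values are
# illustrative placeholders.

from dataclasses import dataclass

@dataclass(frozen=True)
class ComputationFreeController:
    # One (left, right) wheel-velocity pair per possible sensor reading,
    # e.g. reading 0 = "nothing seen", reading 1 = "another robot seen".
    params: tuple  # ((vl0, vr0), (vl1, vr1), ...)

    def act(self, sensor_reading: int) -> tuple:
        return self.params[sensor_reading]

def fitness(controller: ComputationFreeController) -> float:
    """Placeholder: run a swarm simulation for a task (e.g., aggregation)
    and return a task-performance score.  The swarm design problem asks
    for parameters maximizing this score."""
    raise NotImplementedError  # would call an external simulator

# A candidate design: turn on the spot when nothing is seen,
# drive straight when another robot is in view (illustrative values).
candidate = ComputationFreeController(params=((-0.7, 0.7), (1.0, 1.0)))
print(candidate.act(0), candidate.act(1))
```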
Because of the rapid growth in the scale and complexity of information networks, self-organizing systems are being used increasingly to realize novel network control systems that are highly scalable, adaptable, and robust. However, the uncertainty of information (incompleteness, vagueness, and dynamics) in self-organizing systems makes it difficult for them to work appropriately in accordance with the network state. In this study, we apply the collective decision-making of animal groups to self-organizing control mechanisms to allow them to adapt to information uncertainty. Specifically, we apply a mathematical model of collective decision-making known as the effective leadership model (ELM). In the ELM, informed individuals, which are well experienced or well informed, take the role of leading the others. In contrast, uninformed individuals, which perceive only local information, follow neighboring individuals. As a result of the collective behavior of individuals, the animal group achieves consensus. We consider potential-based routing with optimal control, a self-organizing control mechanism, and propose a mechanism for determining a data-packet forwarding scheme based on the ELM. Through simulation, we show that, even when the perceived information is incomplete and dynamic, nodes applying the ELM can forward data packets in accordance with the network state.
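As a rough illustration of an ELM-style update, the sketch below shows how an informed node can blend a preferred direction (assumed here to come from its perceived routing information) with its neighbors' directions, while an uninformed node simply aligns with its neighbors. The vectors, the weight `omega`, and the toy usage are assumptions for illustration, not the paper's exact forwarding scheme.

```python
# Hedged sketch of an ELM-style direction update (after Couzin et al.):
# informed nodes weigh their own information against neighbor alignment,
# uninformed nodes only align.  All quantities are illustrative.

import numpy as np

def normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def elm_update(own_dir, neighbor_dirs, preferred_dir=None, omega=0.5):
    """Return the node's next (forwarding) direction.

    own_dir       : current unit direction of this node
    neighbor_dirs : list of neighbors' current unit directions
    preferred_dir : unit direction from perceived information
                    (None for uninformed nodes)
    omega         : weight an informed node gives its own information
    """
    social = normalize(own_dir + sum(neighbor_dirs))   # align with neighbors
    if preferred_dir is None:
        return social                                   # uninformed: follow
    return normalize(social + omega * preferred_dir)    # informed: lead

# Toy usage: one informed node pulls the group toward the x-axis.
dirs = [np.array([0.0, 1.0]), np.array([0.0, 1.0]), np.array([1.0, 0.0])]
goal = np.array([1.0, 0.0])
for step in range(50):
    dirs = [elm_update(dirs[i],
                       [dirs[j] for j in range(3) if j != i],
                       preferred_dir=goal if i == 2 else None)
            for i in range(3)]
print([np.round(d, 2) for d in dirs])  # the group direction drifts toward goal
```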
Proactive latency-aware adaptation is an approach for self-adaptive systems that considers both the current and anticipated adaptation needs when making adaptation decisions, taking into account the latency of the available adaptation tactics. Since this is a problem of selecting adaptation actions in the context of the probabilistic behavior of the environment, Markov decision processes (MDPs) are a suitable approach. However, given all the possible interactions among the different and possibly concurrent adaptation tactics, the system, and the environment, constructing the MDP is a complex task. Probabilistic model checking has been used to deal with this problem, but it requires constructing the MDP every time an adaptation decision is made so as to incorporate the latest predictions of the environment behavior. In this article, we describe PLA-SDP, an approach that eliminates that run-time overhead by constructing most of the MDP offline. At run time, the adaptation decision is made by solving the MDP through stochastic dynamic programming, weaving in the environment model as the solution is computed. We also present extensions that support different notions of utility, such as maximizing reward gain subject to the satisfaction of a probabilistic constraint, making PLA-SDP applicable to systems with different kinds of adaptation goals.
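A minimal sketch of the run-time step, under the assumption that the system/tactic structure of the MDP has been prepared offline: the latest environment predictions are folded into a finite-horizon backward induction that returns the first tactic to execute. All state spaces, the utility function, and the toy usage below are hypothetical placeholders, not PLA-SDP's actual model.

```python
# Hedged sketch: finite-horizon stochastic dynamic programming that weaves
# the latest environment predictions (`env_prob`) into the value computation.
# Everything here is an illustrative placeholder.

def solve_horizon(system_states, tactics, env_states, env_prob,
                  transition, utility, horizon):
    """Return the best first tactic for each (system, environment) state.

    env_prob[t][e][e2] : predicted P(environment goes e -> e2) at step t
    transition(s, a, e): next system state (assumed deterministic here)
    utility(s, a, e)   : one-step utility of tactic a in state (s, e)
    """
    V = {(s, e): 0.0 for s in system_states for e in env_states}
    policy = {}
    for t in reversed(range(horizon)):           # backward induction
        V_next, V = V, {}
        for s in system_states:
            for e in env_states:
                best_a, best_v = None, float("-inf")
                for a in tactics:
                    s2 = transition(s, a, e)
                    v = utility(s, a, e) + sum(
                        p * V_next[(s2, e2)]
                        for e2, p in env_prob[t][e].items())
                    if v > best_v:
                        best_a, best_v = a, v
                V[(s, e)] = best_v
                if t == 0:
                    policy[(s, e)] = best_a      # first tactic to execute
    return policy

# Toy usage: two tactics, two environment states, horizon of 3 steps.
if __name__ == "__main__":
    env_prob = [{"low":  {"low": 0.8, "high": 0.2},
                 "high": {"low": 0.3, "high": 0.7}} for _ in range(3)]
    pol = solve_horizon(
        system_states=["1srv", "2srv"], tactics=["none", "add_server"],
        env_states=["low", "high"], env_prob=env_prob,
        transition=lambda s, a, e: "2srv" if a == "add_server" else s,
        utility=lambda s, a, e: (1.0 if (s == "2srv" or e == "low") else -1.0)
                                - (0.2 if a == "add_server" else 0.0),
        horizon=3)
    print(pol)
```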
Airborne sensor platforms are becoming increasingly significant for both civilian and military operations, yet their sensors are typically idle for much of their flight time, e.g., while the sensor-equipped platform is in transit to and from the locations of sensing tasks. The sensing needs of many other potential information consumers might thus be served by sharing sensors, allowing those consumers to opportunistically task them during otherwise unscheduled time, and enabling other improvements, such as decreasing the number of platforms needed to achieve a goal and increasing the resilience of sensor tasks through duplication. We have implemented a prototype system realizing these goals in Mission-Driven Tasking of Information Producers (MTIP), which leverages an agent-based representation of tasks and sensors to enable fast, effective, and adaptive opportunistic sharing of airborne sensors. Using a simulated large-scale disaster-response scenario populated with publicly available GIS data sets, we demonstrate that correlations in task location create a high degree of potential for sensor sharing. We then validate that our implementation of MTIP can successfully carry out such sharing, showing that it increases the number of sensor tasks served, reduces the number of platforms required for a given set of sensor tasks, and adapts well to radical changes in flight path.
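To make the idea of opportunistic sharing concrete, the sketch below greedily matches sensor tasks to platforms whose idle windows are long enough and close enough. MTIP's agent-based negotiation is considerably richer; every class, distance threshold, and value here is an invented illustration.

```python
# Hedged sketch of an opportunistic matching step: give each task to the
# nearest platform with enough otherwise-idle time.  All details invented.

from dataclasses import dataclass
from math import hypot

@dataclass
class IdleWindow:
    platform: str
    x: float
    y: float          # approximate position during the idle interval
    start: float
    end: float

@dataclass
class SensorTask:
    name: str
    x: float
    y: float
    duration: float

def assign_opportunistically(tasks, idle_windows, max_range=10.0):
    """Greedy assignment of tasks to idle sensor time, longest tasks first."""
    assignments = {}
    for task in sorted(tasks, key=lambda t: t.duration, reverse=True):
        candidates = [w for w in idle_windows
                      if w.end - w.start >= task.duration
                      and hypot(w.x - task.x, w.y - task.y) <= max_range]
        if not candidates:
            continue                           # task stays unserved
        w = min(candidates, key=lambda w: hypot(w.x - task.x, w.y - task.y))
        assignments[task.name] = w.platform
        w.start += task.duration               # consume part of the idle window
    return assignments

tasks = [SensorTask("survey_bridge", 2, 3, 5.0),
         SensorTask("monitor_shelter", 8, 1, 3.0)]
windows = [IdleWindow("uav_1", 1, 2, 0, 10), IdleWindow("uav_2", 9, 0, 0, 4)]
print(assign_opportunistically(tasks, windows))
```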
In this paper, we explore the efficacy of dynamic effective capacity modulation (i.e., using virtualization techniques to offer lower resource capacity than that advertised by the cloud provider) as a control knob for a cloud provider's profit maximization, complementing the better-studied approach of dynamic pricing. In particular, our focus is on emerging cloud ecosystems wherein we expect tenants to modify their demands strategically in response to such modulation of effective capacity and prices. To this end, we consider a simple model of a cloud provider that offers a single type of virtual machine to its tenants and devise a leader/follower game-based cloud control framework to capture the interactions between the provider and its tenants. We assume both parties employ myopic control and short-term predictions to reflect their operation under the high dynamism and poor predictability of such environments. Our evaluation, using a combination of real data center traces and real-world benchmarks hosted on a prototype OpenStack-based cloud, shows a 10-30% profit improvement for a cloud provider compared with baselines that use static pricing and/or static effective capacity.
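As a toy illustration of the leader/follower structure, the sketch below has the provider enumerate candidate prices and effective-capacity settings, anticipate each tenant's myopic best-response demand, and pick the most profitable combination. The tenant utility model and all numbers are assumptions for illustration, not the paper's game formulation.

```python
# Hedged sketch of one round of a leader/follower (Stackelberg) interaction
# between a provider and its tenants.  The tenant model is an assumption.

def tenant_best_response(price, effective_capacity, value_per_vm, max_vms):
    """A myopic tenant requests VMs while per-VM value exceeds the price,
    with value discounted by how much the effective capacity is reduced."""
    per_vm_value = value_per_vm * effective_capacity
    return max_vms if per_vm_value >= price else 0

def provider_best_choice(tenants, prices, capacities,
                         physical_capacity, cost_per_unit=0.1):
    """Leader move: pick (price, effective capacity) anticipating followers."""
    best = None
    for p in prices:
        for c in capacities:                   # c in (0, 1]: capacity modulation
            demand = sum(tenant_best_response(p, c, v, m) for v, m in tenants)
            served = min(demand, int(physical_capacity / c))  # lower c packs more VMs
            profit = served * p - served * c * cost_per_unit
            if best is None or profit > best[0]:
                best = (profit, p, c)
    return best

tenants = [(1.0, 4), (0.6, 8), (0.3, 6)]       # (value per full-capacity VM, max VMs)
print(provider_best_choice(tenants, prices=[0.2, 0.5, 0.8],
                           capacities=[0.6, 0.8, 1.0], physical_capacity=10))
```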