A central problem in swarm robotics is to design a controller that allows the member robots of a swarm to collectively perform a given task. Of particular interest in massively distributed applications are controllers with severely limited computational and sensory abilities. In this paper, we give the results of the first computational complexity analysis of the swarm design problem relative to what is arguably the simplest possible type of reactive controller, the so-called computation-free controller proposed by Gauci et al. We show that computation-free swarm design is not polynomial-time solvable either in general or by the most desirable types of approximation algorithms (including evolutionary algorithms with high probability of producing correct solutions), but is solvable in effectively polynomial time under several types of restrictions on swarms, environments, and tasks. We also show that all of our intractability and inapproximability results hold for the design of any type of reactive swarm (including those based on the popular feedforward neural network and Brooks-style subsumption controllers) whose member robots satisfy two simple conditions.
Proactive latency-aware adaptation is an approach for self-adaptive systems that considers both the current and anticipated adaptation needs when making adaptation decisions, taking into account the latency of the available adaptation tactics. Since this is a problem of selecting adaptation actions in the context of the probabilistic behavior of the environment, Markov decision processes (MDPs) are a suitable approach. However, given all the possible interactions among the possibly concurrent adaptation tactics, the system, and the environment, constructing the MDP is a complex task. Probabilistic model checking has been used to deal with this problem, but it requires constructing the MDP every time an adaptation decision is made, so as to incorporate the latest predictions of the environment behavior. In this article, we describe PLA-SDP, an approach that eliminates that run-time overhead by constructing most of the MDP offline. At run time, the adaptation decision is made by solving the MDP through stochastic dynamic programming, weaving in the environment model as the solution is computed. We also present extensions that support different notions of utility, such as maximizing reward gain subject to the satisfaction of a probabilistic constraint, making PLA-SDP applicable to systems with different kinds of adaptation goals.
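To illustrate the kind of computation involved, the following is a minimal sketch of solving a finite-horizon MDP by stochastic dynamic programming (backward induction). The states, actions, transition probabilities, and rewards here are invented for illustration; PLA-SDP's actual MDP encodes adaptation tactics, their latencies, and the predicted environment states, which are woven in at run time.

```python
# Hedged sketch: finite-horizon backward induction on a toy MDP.
# V[t][s] = max_a [ R(s,a) + sum_{s'} P(s,a,s') * V[t+1][s'] ]

def solve_mdp(states, actions, P, R, horizon):
    """P[(s, a)] maps successor state -> probability; R[(s, a)] is immediate reward."""
    V = {s: 0.0 for s in states}          # terminal value is zero
    policy = {}
    for t in reversed(range(horizon)):    # sweep backward from the horizon
        newV = {}
        for s in states:
            best_a, best_v = None, float("-inf")
            for a in actions:
                v = R[(s, a)] + sum(p * V[s2] for s2, p in P[(s, a)].items())
                if v > best_v:
                    best_a, best_v = a, v
            newV[s] = best_v
            policy[(t, s)] = best_a       # time-dependent optimal action
        V = newV
    return V, policy
```

Because the sweep runs backward over a fixed horizon, the environment model (here, the transition probabilities) can be injected per time step as the solution is computed, which is the key to avoiding a full model-construction pass at decision time.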
TAAS 12:4 Farewell Editorial
We present design concepts, programming constructs, and automatic verification techniques to support the development of adaptive Wireless Sensor Network (WSN) software. WSNs operate at the interface between the physical world and computing machines, and are hence exposed to unpredictable environment dynamics. WSN software must adapt to these dynamics to maintain dependable and efficient operation. While significant literature exists on the necessary adaptation logic, developers are left without proper support in materializing such logic in a running system. Our work fills this gap with three contributions: i) design concepts help developers organize the adaptive functionality and understand how its parts relate, ii) dedicated programming constructs simplify the implementation, and iii) custom verification techniques allow developers to check the correctness of their design before deployment. We implement tool support that ties the three contributions together, facilitating their application. Our evaluation considers representative WSN applications to analyze code metrics, synthetic simulations, and cycle-accurate emulation of popular WSN platforms. The results indicate that our work simplifies the development of adaptive WSN software; for example, implementations are provably easier to test, the run-time overhead of our programming constructs is negligible, and our verification techniques return results in a matter of seconds.
Self-adaptive software systems monitor their operation and adapt when their requirements fail due to unexpected phenomena in their environment. This paper examines the case where the environment changes dynamically over time and the chosen adaptation has to take such changes into account. In control theory, this type of adaptation is known as Model Predictive Control and comes with a well-developed theory and myriad successful applications. The paper focuses on modelling the dynamic relationship between requirements and possible adaptations. It then proposes a controller that exploits this relationship to optimize the satisfaction of requirements relative to a cost function. This is accomplished through a model-based framework for designing self-adaptive software systems that can guarantee a certain level of requirements satisfaction over time, by dynamically composing adaptation strategies when necessary. The proposed framework is illustrated and evaluated through two simulated systems, namely the Meeting-Scheduling exemplar and an E-Shop.
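The receding-horizon idea behind Model Predictive Control can be sketched in a few lines. The scalar system model below (x_{k+1} = x_k + u_k + d_k), the discrete action set, and the quadratic tracking cost are all invented for illustration, not the paper's model: at each step the controller searches candidate action sequences over a short horizon, applies only the first action, and re-plans as the environment prediction changes.

```python
# Hedged sketch: one step of a toy receding-horizon (MPC-style) controller.
from itertools import product

def mpc_step(x, target, actions, horizon, predict_d):
    """Return the first action of the sequence minimizing predicted cost.

    predict_d(k) is an assumed forecast of the environment disturbance
    k steps ahead; cost trades tracking error against control effort.
    """
    best_seq, best_cost = None, float("inf")
    for seq in product(actions, repeat=horizon):   # exhaustive over a small action grid
        xx, cost = x, 0.0
        for k, u in enumerate(seq):
            xx = xx + u + predict_d(k)             # simulate the toy dynamics
            cost += (xx - target) ** 2 + 0.1 * u * u
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]                             # apply only the first action
```

Applying only the first action and re-optimizing at every step is what lets the controller absorb environment changes that were not in the original forecast.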
In this paper, we explore the efficacy of dynamic effective capacity modulation (i.e., using virtualization techniques to offer lower resource capacity than that advertised by the cloud provider) as a control knob for a cloud provider's profit maximization, complementing the better-studied approach of dynamic pricing. In particular, our focus is on emerging cloud ecosystems wherein we expect tenants to modify their demands strategically in response to such modulation in effective capacity and prices. To this end, we consider a simple model of a cloud provider that offers a single type of virtual machine to its tenants and devise a leader/follower game-based cloud control framework to capture the interactions between the provider and its tenants. We assume both parties employ myopic control and short-term predictions to reflect their operation under the high dynamism and poor predictability in such environments. Our evaluation using a combination of real data center traces and real-world benchmarks hosted on a prototype OpenStack-based cloud shows 10-30% profit improvement for a cloud provider compared with baselines that use static pricing and/or static effective capacity.
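The leader/follower structure can be made concrete with a toy pricing game, which is not the paper's model: the provider (leader) picks a price from a small grid, each tenant (follower) myopically best-responds with the demand maximizing its own surplus, and the provider keeps the price maximizing profit under an assumed per-unit capacity cost. The tenant valuation functions and cost are illustrative inventions.

```python
# Hedged sketch: Stackelberg-style leader/follower pricing with myopic followers.

def tenant_demand(price, value, max_units):
    """Follower best response: demand maximizing surplus value(d) - price*d."""
    best_d, best_surplus = 0, 0.0
    for d in range(max_units + 1):
        surplus = value(d) - price * d
        if surplus > best_surplus:
            best_d, best_surplus = d, surplus
    return best_d

def provider_best_price(prices, tenants, unit_cost):
    """Leader move: anticipate follower responses and pick the profit-maximizing price."""
    best_p, best_profit = None, float("-inf")
    for p in prices:
        demand = sum(tenant_demand(p, value, max_units) for value, max_units in tenants)
        profit = (p - unit_cost) * demand
        if profit > best_profit:
            best_p, best_profit = p, profit
    return best_p, best_profit
```

In the paper's richer setting the provider's knob also includes effective capacity, and both sides re-solve myopically as short-term predictions are updated, but the anticipate-then-commit structure is the same.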