Cloud computing, network applications, and IT services rely on datacenter infrastructure to provide their services. The underlying datacenter networks (DCNs) are inherently constructed with high-speed links, fast switching gear, and redundancy to offer better flexibility and resiliency. Given the limited capacity of the Ternary Content-Addressable Memory (TCAM) deployed in an OpenFlow-enabled switch, it is crucial to determine which forwarding rules should remain in the flow table and which should be processed by the SDN controller in case of a table miss on the SDN switch. The goal is to select the flow entries that minimize the long-term control-plane overhead introduced between the controller and the switches. To achieve this goal, we propose a machine learning technique that utilizes two variations of reinforcement learning (RL) algorithms: the first is based on traditional reinforcement learning, while the other is based on deep reinforcement learning. Emulation results using the RL algorithm show around a 60% reduction in the long-term control-plane overhead and around a 14% improvement in the table-hit ratio compared to the Multiple Bloom Filters (MBF) method, given a fixed flow-table size of 4KB.
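The abstract does not specify the RL formulation, but the idea of learning which rules to keep in a capacity-limited flow table can be sketched as follows. This is a hypothetical, heavily simplified illustration, not the paper's algorithm: each cached rule carries a learned value estimate (rewarded on table hits), and on a table miss the switch evicts the lowest-valued rule epsilon-greedily; every miss counts as one controller interaction. The table size, learning rates, and workload below are all invented for the sketch.

```python
import random

random.seed(0)

TABLE_SIZE = 8          # flow-table capacity (stand-in for the 4KB TCAM budget)
ALPHA, GAMMA = 0.3, 0.9  # learning rate and discount factor (arbitrary)
EPSILON = 0.1            # exploration rate for eviction decisions

q = {}                  # per-rule value estimate: expected future hits if kept
table = set()           # rules currently cached in the switch
control_msgs = 0        # table misses forwarded to the controller

def handle_packet(rule):
    global control_msgs
    if rule in table:
        # table hit: reward +1, simple TD-style update of the rule's value
        old = q.get(rule, 0.0)
        q[rule] = old + ALPHA * (1.0 + GAMMA * old - old)
        return
    control_msgs += 1                      # table miss: ask the controller
    if len(table) >= TABLE_SIZE:
        # evict epsilon-greedily: usually the lowest-valued rule,
        # occasionally a random one to keep exploring
        if random.random() < EPSILON:
            victim = random.choice(sorted(table))
        else:
            victim = min(table, key=lambda r: q.get(r, 0.0))
        table.discard(victim)
    table.add(rule)                        # controller installs the new rule
    q.setdefault(rule, 0.0)

# skewed synthetic workload: a few elephant flows dominate the traffic
flows = [f"rule{i}" for i in range(32)]
for _ in range(2000):
    r = flows[min(int(random.expovariate(0.25)), 31)]
    handle_packet(r)

print(f"control messages: {control_msgs}")
```

Under a skewed workload, the learned values concentrate on the heavy-hitter rules, so they stay cached and the miss (controller-message) count drops relative to random eviction.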
A major concern for today's smartphones is that their batteries drain much faster than those of traditional feature phones. The difference is mainly attributable to more powerful but also much more power-consuming smartphone components, such as the multi-core application processor and the high-definition (HD) display. In this paper, we investigate how to increase the battery life of smartphones by minimizing the use of the application processor and HD display for operations related to basic functions. We find that the application processor is often woken up by a process running on it, the Radio Interface Layer Daemon (RILD), which interfaces users/apps with the GSM/LTE cellular network. Consequently, we design a Smart On Demand (SOD) configuration that reduces smartphone energy consumption by running RILD on a secondary low-power microcontroller and by using a secondary low-power display to interface the user with basic functions. Thus, basic phone functions are handled at much lower energy cost, while the power-consuming application processor and HD display are woken up only when the user needs smart apps. We have built a prototype of SOD and evaluated it with real user traces. Our results show that SOD can increase a smartphone's battery life by up to 2.5 days.
The Internet of Things equips citizens with phenomenal new means for online participation and control over machine learning applications. When agents self-determine the options from which they choose, while these choices have collective impact, optimal decision-making becomes a combinatorial optimization problem that is NP-hard. In such computationally challenging problems, centrally managed deep learning systems often collect and process personal data, with implications for privacy and citizens' autonomy. This paper envisions an alternative unsupervised deep learning approach that preserves privacy, autonomy, and participation by transforming the communication network of a multi-agent system into a deep hierarchical tree structure. Self-organized remote interactions orchestrate a decentralized and highly efficient process for collective deep learning. This disruptive concept is realized by I-EPOS, the Iterative Economic Planning and Optimized Selections, accompanied by a paradigmatic software artifact. Strikingly, I-EPOS outperforms related algorithms that involve non-local brute-force operations or exchange full information. This paper contributes new experimental findings on the influence of network topology and planning on learning efficiency, as well as findings on techno-socio-economic trade-offs and global optimality. Experimental evaluation with real-world data from energy and bike-sharing pilots demonstrates the grand potential of collective deep learning to design ethically and socially responsible participatory sharing economies.
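The core mechanic described above (agents with self-determined plan options, coordinated over a hierarchical tree) can be illustrated with a minimal sketch. This is not I-EPOS itself, which iterates with feedback between passes; it is a single bottom-up pass under invented assumptions: each agent holds a few candidate plans (vectors), children send their aggregate up the tree, and each node greedily selects the own plan that minimizes the variance (i.e., flattens the peaks) of the running aggregate. All data, the tree shape, and the variance objective are illustrative choices.

```python
import random

random.seed(1)

T = 4  # plan length, e.g. an agent's resource demand over 4 time slots

def make_plans(k=3):
    # each agent proposes k candidate plans (synthetic data for illustration)
    return [[random.uniform(0.0, 1.0) for _ in range(T)] for _ in range(k)]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def aggregate(node):
    """Post-order (bottom-up) pass over the tree: sum the children's
    aggregates, then greedily pick the own plan that flattens the total."""
    plans, children = node
    subtotal = [0.0] * T
    for child in children:
        subtotal = add(subtotal, aggregate(child))
    best = min(plans, key=lambda p: variance(add(subtotal, p)))
    return add(subtotal, best)

# seven agents arranged as a balanced binary tree of height two;
# a node is (own candidate plans, list of child nodes)
def leaf():
    return (make_plans(), [])

root = (make_plans(), [(make_plans(), [leaf(), leaf()]),
                       (make_plans(), [leaf(), leaf()])])
total = aggregate(root)
print("aggregate plan:", [round(x, 2) for x in total])
```

Each agent only ever sees its subtree's aggregate, never other agents' individual plans, which gestures at how the tree topology supports the privacy and decentralization claims; the full algorithm additionally iterates so that choices made low in the tree can be revised.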