Title
From C-RAN to G-RAN: Carbon-Aware Autoscaling for Cloud Radio Access Networks (C2GRAN)
Organization
Telecommunication Software and Systems Group (TSSG), Ireland, https://tssg.org/
Project Description
Cloud Radio Access Networks (C-RAN) are receiving growing attention in the 5G mobile networks domain because of their manageability, ease of deployment, efficient resource usage, cost effectiveness and energy efficiency. In 5G mobile networks, C-RAN presents itself as a flexible and dynamic infrastructure resource for a pool of Baseband Units (BBU pool). Within a BBU pool, each virtualised BBU (vBBU) shares common physical resources such as CPU, memory and network, so individual BBUs may contend for resources. With increasing user demand and limited resource availability, it is imperative to employ an intelligent resource manager so that resources are optimally utilised while user demands are still satisfied. This can also contribute to reducing CAPEX and OPEX.
Hypothesis: Dynamic allocation of processing resources across geographically dispersed network services such as C-RAN, supported by NFV through virtualised BBUs in 5G mobile networks and subject to SLA and energy-cost constraints, can significantly reduce energy consumption and carbon emissions while also improving the quality of multimedia data transmission.
In this experiment we examined the capability of a recurrent neural network (RNN) based machine learning model to predict the resource demands of the BBU pool in a C-RAN. The model incorporated current and historical resource data for the target node and its neighbouring nodes, and also took into account the power consumption of each node and the associated carbon emission. We used an RNN-based stacked LSTM model to forecast resource requirements, power consumption and carbon emission.
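For illustration, the snippet below is a minimal sketch of this kind of stacked LSTM forecaster, assuming Keras/TensorFlow; the window length, feature count, target count and hyperparameters are placeholders, not the exact model used in the experiment.

```python
# Sketch of a stacked-LSTM forecaster for per-node resource demand
# (CPU, memory, network), power consumption and carbon emission.
# Shapes and hyperparameters below are assumptions for illustration.
import numpy as np
import tensorflow as tf

TIMESTEPS = 60   # one hour of per-minute samples (assumed)
FEATURES = 15    # metrics of the target node plus neighbouring nodes (assumed)
TARGETS = 5      # e.g. CPU, memory, network, power, carbon (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # first LSTM layer
    tf.keras.layers.LSTM(32),                          # second (stacked) LSTM layer
    tf.keras.layers.Dense(TARGETS),                    # next-minute prediction
])
model.compile(optimizer="adam", loss="mse")

# Toy training call on random data, just to show the expected tensor shapes.
X = np.random.rand(1000, TIMESTEPS, FEATURES).astype("float32")
y = np.random.rand(1000, TARGETS).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
next_minute = model.predict(X[-1:])   # shape (1, TARGETS)
```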
Project Implementation
For the BBU implementation, we used GNU Radio, an open-source toolkit for software-defined radios. We set up a C-RAN topology at the WINS_5G testbed with 4 USRPs, two source nodes and two sink nodes, so the setup comprised two non-overlapping radio networks. Zabbix was used for metrics collection together with SNMP agents; it provided details of the CPU, memory and network utilisation of each physical machine as well as each virtual machine via SNMP. We could also read power consumption directly from each machine's power supply using Dell's iDRAC.
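As an illustration of this metrics pipeline, the sketch below pulls recent CPU-utilisation history from Zabbix over its JSON-RPC API; the server URL, host name, item key and credentials are placeholders, and the API details shown match older Zabbix releases (newer versions change the login parameters), so the exact calls used in the experiment may differ.

```python
# Sketch: fetch recent metric history from Zabbix via its JSON-RPC API.
# ZABBIX_URL, credentials, host name and item key are placeholders.
import requests

ZABBIX_URL = "http://zabbix.example.org/api_jsonrpc.php"   # placeholder

def zabbix_call(method, params, auth=None, req_id=1):
    payload = {"jsonrpc": "2.0", "method": method, "params": params,
               "auth": auth, "id": req_id}
    resp = requests.post(ZABBIX_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["result"]

# Older Zabbix versions log in with "user"; newer ones use "username".
token = zabbix_call("user.login", {"user": "Admin", "password": "zabbix"})

# Find the CPU-utilisation item on a monitored vBBU host (key is an example).
items = zabbix_call("item.get",
                    {"host": "vbbu-01",
                     "search": {"key_": "system.cpu.util"},
                     "output": ["itemid", "name"]},
                    auth=token)

# Pull the last 60 floating-point samples for that item.
history = zabbix_call("history.get",
                      {"itemids": [items[0]["itemid"]], "history": 0,
                       "sortfield": "clock", "sortorder": "DESC", "limit": 60},
                      auth=token)
print([float(h["value"]) for h in history])
```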
Two testbed locations were used: one server located in Ireland and the other in the UK. Because the power generation mix differs between the two locations, the two countries have different carbon emission levels at different times of the day. Live carbon emission data for Ireland was taken directly from Eirgrid's website (http://www.eirgridgroup.com/), which publishes information about the sources of electricity generation and the associated carbon emission levels. For live carbon emission data in the UK, we used a third-party online portal (https://www.electricitymap.org/). For end-to-end traffic, over 1 million voice calls were initiated over the course of the experiment, with a VLC streaming server and clients used to generate and receive the voice traffic.
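As a rough illustration of how per-region carbon intensity can be polled for the scheduler, the sketch below queries the electricityMap HTTP API for both zones; the endpoint path, zone codes and response field are assumptions about that public API (and in the actual experiment the Irish figures came from Eirgrid's published data rather than this API).

```python
# Sketch: poll live carbon intensity (gCO2eq/kWh) for the two testbed regions.
# Endpoint path, zone codes and response field are assumptions; the experiment
# itself used Eirgrid data for Ireland and electricitymap.org for the UK.
import requests

API_URL = "https://api.electricitymap.org/v3/carbon-intensity/latest"  # assumed
API_TOKEN = "REPLACE_ME"                                                # placeholder

def carbon_intensity(zone):
    """Return the latest carbon intensity for a zone such as 'IE' or 'GB'."""
    resp = requests.get(API_URL, params={"zone": zone},
                        headers={"auth-token": API_TOKEN}, timeout=10)
    resp.raise_for_status()
    return resp.json()["carbonIntensity"]

if __name__ == "__main__":
    for zone in ("IE", "GB"):
        print(zone, carbon_intensity(zone), "gCO2eq/kWh")
```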
All this combined information from each location and server was then fed into our recurrent neural network based ML algorithm, which takes into account the current and historical data of the target node as well as its neighbouring nodes and predicts future resource requirements. Based on these predictions, decisions on scaling up/down and on load distribution could then be taken. Three load distribution mechanisms were tested: random, round robin and our proposed DCeC.
We proposed a metric named DCeC (Data Center Energy Contributivity) to compare two or more data centers or computing resources on the basis of work done, power consumption and carbon emission. The results show that our DCeC-based load distribution algorithm consumed up to 20% less power than the other two mechanisms, and this 20% reduction in power consumption translated into 14% less carbon emitted. Autoscaling also showed promising results: with vertical scaling of the server processes, significant reductions in jitter and latency were observed.
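The published DCeC formula is not reproduced here; the sketch below assumes a simple form of the idea (useful work done per unit of carbon-weighted energy) and uses it to pick the destination site for the next batch of calls, next to the random and round-robin baselines. The site figures, the scoring function and the selection logic are illustrative assumptions, not the exact algorithm.

```python
# Sketch: carbon-aware load distribution using a DCeC-style score.
# The score below (work done per unit of carbon-weighted energy) is an
# assumption for illustration, not the published DCeC definition.
import itertools
import random
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    work_done: float          # e.g. calls served in the last interval
    power_kw: float           # measured power draw (e.g. from iDRAC)
    carbon_gco2_kwh: float    # live grid carbon intensity

    def dcec_score(self):
        # Higher is better: more work per carbon-weighted kWh.
        return self.work_done / (self.power_kw * self.carbon_gco2_kwh + 1e-9)

sites = [Site("ireland", 950, 2.1, 300.0), Site("uk", 900, 2.0, 210.0)]

def pick_random(sites):
    return random.choice(sites)

_rr = itertools.cycle(range(len(sites)))
def pick_round_robin(sites):
    return sites[next(_rr)]

def pick_dcec(sites):
    return max(sites, key=lambda s: s.dcec_score())

print("DCeC choice:", pick_dcec(sites).name)
```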
Load-aware vertical scaling of the computing resources was also tested. For vertical scaling of a vBBU, we used Linux's taskset utility to assign CPU cores to the server process at runtime. Based on the next-minute prediction of the incoming load, the vBBU could scale itself accordingly, as sketched below:
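The following is a minimal sketch of that scaling step, assuming the vBBU runs as a single Linux process whose PID is known and that the predicted load maps linearly onto a number of CPU cores; the core count, the load-to-core mapping and the PID handling are illustrative assumptions.

```python
# Sketch: load-aware vertical scaling of a vBBU process with taskset.
# The load-to-core mapping, MAX_CORES and the PID source are assumptions.
import os
import subprocess

MAX_CORES = 8   # physical cores available to the vBBU host (assumed)

def cores_for_load(predicted_load):
    """Map a predicted load in [0, 1] to a core count (illustrative mapping)."""
    return max(1, min(MAX_CORES, round(predicted_load * MAX_CORES)))

def rescale_vbbu(pid, predicted_load):
    n = cores_for_load(predicted_load)
    cpu_list = ",".join(str(c) for c in range(n))        # e.g. "0,1,2"
    # Re-pin the running vBBU process to the first n cores at runtime.
    subprocess.run(["taskset", "-cp", cpu_list, str(pid)], check=True)
    return n

if __name__ == "__main__":
    vbbu_pid = int(os.environ.get("VBBU_PID", os.getpid()))  # placeholder PID
    print("cores assigned:", rescale_vbbu(vbbu_pid, predicted_load=0.6))
```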
Key Objectives
- To meet user demands for resources
- To avoid overprovisioning
- To minimize carbon emissions
- To minimize energy consumption
- To make 5G mobile networks more scalable and robust
- To optimize the use of available resources
- To reduce CAPEX and OPEX
Contact
Ehsan Elahi (eelahi@tssg.org)