Bristol Smart City Safety Testbed

Given the critical importance of security in cities, advances in wireless communication systems are increasingly improving the safety of city inhabitants. New services such as audio and video monitoring of public areas and automated detection of municipal rule infractions allow a quicker response to threats and anomalies and help prevent their recurrence. In this context, UNIVBRIS has deployed a smart city safety use case, as a proof of concept, to identify suspicious activities in the city. The basic components of this use case are listed below; they are connected to the Internet through a WiFi interface.

  • Bike rider helmet
  • Raspberry PI
  • 360-degree camera and audio

Figure 1: High level smart city safety architecture

Figure 1 shows the high-level architecture of the smart city safety use case. The bike rider wears a helmet with the Raspberry Pi and the 360-degree camera attached. Along the rider's path, video and audio are captured and sent via WiFi to the Mobile Edge Computing (MEC) node or the Cloud to be processed. Once the audio and video have been processed and any suspicious activity has been detected, a notification is generated and sent to the different security agents.
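The capture-process-notify flow described above can be sketched as follows. This is a minimal illustration, not the actual UNIVBRIS implementation; the function and field names (`detect_suspicious_activity`, `notify_agents`, frame labels) are hypothetical.

```python
# Hypothetical sketch of the MEC/Cloud-side pipeline: scan the incoming
# stream, flag suspicious frames, fan out one notification per agent.

def detect_suspicious_activity(frame):
    """Placeholder analytic: flag frames whose label marks an anomaly."""
    return frame.get("label") == "anomaly"

def notify_agents(frame, agents):
    """Build one notification per security agent for a flagged frame."""
    return [{"agent": a, "frame_id": frame["id"]} for a in agents]

def process_stream(frames, agents):
    """Process the stream and collect all generated notifications."""
    notifications = []
    for frame in frames:
        if detect_suspicious_activity(frame):
            notifications.extend(notify_agents(frame, agents))
    return notifications

stream = [{"id": 1, "label": "normal"}, {"id": 2, "label": "anomaly"}]
alerts = process_stream(stream, agents=["police", "council"])
```

In a real deployment the placeholder analytic would be replaced by the audio/video processing running at the MEC or in the Cloud.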


Many of today’s municipalities are becoming test beds for smart city experimentation, where technological capabilities address daily needs ranging from parking and water treatment to city security. The University of Bristol, together with Bristol Is Open (BIO), is working to provide, through the 5GinFIRE platform, a smart city safety use case deployed according to the architecture shown in Figure 2, which presents the main building blocks that make the use case a reality. Note that only open-source frameworks (OpenStack, OpenDaylight, etc.) are used to deploy the use case.

Figure 2: The main building blocks of UNIVBRIS/BIO Testbed Architecture

The testbed is a physical and virtualized infrastructure deployed as part of the overall 5GinFIRE architecture. One of the main challenges in the design of the VNF architecture is how to cope with multi-site orchestration and end-to-end VNF connectivity. This testbed has been deployed to operate cross-domain with the ITAv and UC3M testbeds. The UNIVBRIS deployment will also provide VxFs for the FIRE testbed, yielding a unique facility able to assess the proposed VNFs in a variety of different domains and providing excellent testing sites for heterogeneous experimentation. In particular, the testbed's purpose is:

  • to provide computational resources and/or slicing for hosting, deploying, instantiating and supporting the VxF life cycle, serving as a platform for rigorous, transparent and replicable testing of the NFV ecosystem.

A brief description of the computational resources follows.

  • UNIVBRIS staging OpenStack testbed consists of a single control/network node and two compute nodes connected through two optical switches.
  • The control node
    • 2 Virtual CPUs
    • 2.4 GHz
    • 8 GB RAM
  • MVB compute node
    • 2 x 8 core 16 thread CPUs
    • 4 GHz
    • 32 GB RAM – to be expanded to 128GB
    • 2x RAID 1 900 GB hard disks
  • ENS compute node
    • 2 x 8 core 16 thread CPUs
    • 4 GHz
    • 16 GB RAM – to be expanded to 128GB
    • 2x RAID 1 900 GB hard disks
  • One MVB NEC Optical Switch
  • One ENS NEC Optical Switch
  • Targeted users/experiments
    • The testbed enables experimenters to create, deploy, instantiate, manage and destroy VxFs, providing an experimentation platform for VNFs.
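The VxF life cycle listed above can be pictured as a small state machine. The sketch below is illustrative only; the state names mirror the operations in the list, but the transition table is an assumption, not the actual orchestrator logic.

```python
# Toy state machine for the VxF life cycle: create -> deploy ->
# instantiate -> destroy. Invalid actions are rejected.

TRANSITIONS = {
    "created": {"deploy": "deployed"},
    "deployed": {"instantiate": "running"},
    "running": {"destroy": "destroyed"},
}

class VxF:
    def __init__(self, name):
        self.name = name
        self.state = "created"

    def apply(self, action):
        """Advance to the next life-cycle state, rejecting invalid actions."""
        allowed = TRANSITIONS.get(self.state, {})
        if action not in allowed:
            raise ValueError(f"{action!r} not allowed in state {self.state!r}")
        self.state = allowed[action]
        return self.state

vxf = VxF("demo-vxf")
vxf.apply("deploy")
vxf.apply("instantiate")
```

The "manage" operation would sit alongside these transitions (e.g. scaling or reconfiguring a running VxF) rather than changing the state.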

Figure 3: UNIVBRIS/BIO Testbed Architecture

All BIO OpenStack nodes run OpenStack Ocata on top of the CentOS 7 operating system.

The controller & network node hosts the following OpenStack components:

  • Nova-api
  • Nova-cert
  • Nova-scheduler
  • Nova-conductor
  • Neutron-server
  • Heat-engine
  • Memcached
  • Keystone
  • Glance
  • Rabbitmq
  • Mariadb
  • Designate-engine

API node:

  • Dashboard
  • Nova-spiceproxy
  • Bind9 designate-backend
  • OpenStack API proxy
  • Apache2
  • Memcached

Compute node:

  • Neutron-ovs-agent
  • Nova-compute
  • Nova-common
  • Cinder
  • Openvswitch
  • KVM
  • Neutron-ml2
  • Neutron-common
  • Nova-novnc-proxy
  • Neutron


Projects have a ‘self-service networks’ capability: new networks can be created by clients as needed and connected to their virtual routers in the same manner as the BIO-provided VLAN networks.

Self-service networks are implemented using VXLAN tunnels between the OpenStack compute and networking nodes.
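One practical consequence of VXLAN tunnelling is header overhead on the underlay: tenant-facing MTUs are typically reduced to compensate. A minimal sketch of the arithmetic, assuming a standard 1500-byte underlay MTU and VXLAN over IPv4:

```python
# VXLAN-over-IPv4 encapsulation overhead (cf. RFC 7348), as commonly
# accounted for by Neutron: outer IPv4 (20) + UDP (8) + VXLAN header (8)
# + inner Ethernet header (14) = 50 bytes.
OUTER_IPV4, UDP, VXLAN_HDR, INNER_ETH = 20, 8, 8, 14

def vxlan_overhead():
    return OUTER_IPV4 + UDP + VXLAN_HDR + INNER_ETH

def inner_mtu(underlay_mtu=1500):
    """Largest tenant-network MTU that fits in one underlay packet."""
    return underlay_mtu - vxlan_overhead()
```

This is why tenant networks on VXLAN-based clouds commonly advertise an MTU of 1450 when the physical network uses 1500.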

Network slicing capabilities

Projects are separated through VLANs: a trunk around the city carries the VLANs, which are then split out to individual SSIDs or electrical ports for the experimenters. The trunk VLANs terminate in an OpenStack project, but BIO can also route VLANs locally or separately depending on the experiment's needs.
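The per-project VLAN separation amounts to allocating each project a VLAN ID from a managed pool. A minimal sketch, under the assumption of a simple first-fit allocator (the range and bookkeeping are illustrative; 802.1Q VLAN IDs are valid from 1 to 4094, with 0 and 4095 reserved):

```python
# Illustrative VLAN allocator: each project gets a stable VLAN ID from a
# configured range, and exhaustion is reported explicitly.

class VlanPool:
    def __init__(self, first=100, last=199):
        self.free = list(range(first, last + 1))
        self.assigned = {}  # project name -> VLAN ID

    def allocate(self, project):
        """Return the project's VLAN ID, assigning a new one if needed."""
        if project in self.assigned:
            return self.assigned[project]
        if not self.free:
            raise RuntimeError("VLAN pool exhausted")
        vid = self.free.pop(0)
        self.assigned[project] = vid
        return vid

pool = VlanPool()
a = pool.allocate("smart-city-safety")
b = pool.allocate("other-experiment")
```

Re-requesting a VLAN for an already-known project returns the same ID, which mirrors how a trunk VLAN stays bound to its OpenStack project.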

Monitoring capabilities

Meru (BIO's WiFi controller) has a proprietary monitoring system. BIO also uses several network monitoring and visualization tools:

  • Nagios for connectivity monitoring
  • Munin for server performance monitoring
  • SmokePing for checking network latency
  • OpenNMS
  • Jenkins for monitoring OpenStack
  • Wireshark for packet analysis

BIO production OpenStack

Once the 5GinFIRE platform is deployed and validated in staging, it will be moved to the BIO production OpenStack environment. The production environment mimics staging, so the OpenStack components live on nodes equivalent to those listed earlier for the staging environment.

Server based resources

4 x OpenStack compute nodes, installed across 4 locations around the city, each with 2 x E5-2680v3 CPUs (48 threads), 192 GB RAM and 1 TB storage.


In the context of NFV experimentation, an experimenter in the testbed follows these phases:

  • Experiment design: the user defines the VNF at a high level using YAML description files.
  • Auto-configuration software (OSM MANO) maps this high-level description onto a virtual/physical topology.
  • According to resource availability, the testbed supports the specific VxF/VNF.
  • The orchestration tool is in charge of controlling the node booting process, disk image loading and execution of the VxF.
  • Data collection for analysis can also be programmed using the descriptor.
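As an illustration of the first phases, the sketch below shows what a parsed high-level descriptor might contain and how an orchestrator could map it onto compute nodes by resource availability. The field names and the first-fit strategy are hypothetical, not the real OSM VNFD schema or MANO placement algorithm.

```python
# Hypothetical parsed form of a high-level YAML descriptor (phase 1) and a
# toy first-fit mapping onto compute nodes (phases 2-3). Illustrative only.

descriptor = {
    "vnf": "video-analytics",
    "vdus": [
        {"name": "detector", "vcpus": 4, "ram_gb": 8},
        {"name": "notifier", "vcpus": 1, "ram_gb": 2},
    ],
}

# Node capacities loosely modelled on the staging compute nodes above.
nodes = {"MVB": {"vcpus": 32, "ram_gb": 32}, "ENS": {"vcpus": 32, "ram_gb": 16}}

def place(descriptor, nodes):
    """First-fit placement of each VDU onto a node with spare capacity."""
    placement = {}
    for vdu in descriptor["vdus"]:
        for name, cap in nodes.items():
            if cap["vcpus"] >= vdu["vcpus"] and cap["ram_gb"] >= vdu["ram_gb"]:
                cap["vcpus"] -= vdu["vcpus"]
                cap["ram_gb"] -= vdu["ram_gb"]
                placement[vdu["name"]] = name
                break
        else:
            raise RuntimeError(f"no capacity for {vdu['name']}")
    return placement

plan = place(descriptor, nodes)
```

In the real workflow the descriptor is YAML consumed by OSM MANO, which also drives node booting, image loading and VxF execution as listed above.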


Aloizio P. Silva and Isaac Fraser

Merchant Venturers Building, High Performance Network Laboratory
University of Bristol – UK
Woodland Road BS8 1UB
