Numerous Internet of Things (IoT) devices are being connected to networks to offer services. To cope with the large diversity and number of IoT services, operators must meet those needs with a more flexible and efficient network architecture. Network slicing in 5G promises a feasible solution to this issue through the network virtualization and programmability enabled by Network Functions Virtualization (NFV). In this research, we use virtualized IoT platforms as the Virtual Network Functions (VNFs) and customize NFV-enabled network slices with different QoS to support various kinds of IoT services for their best performance. We construct three different slicing systems: 1) a single slice system, 2) a multiple customized slices system and 3) a single but scalable network slice system to support IoT services. Our objective is to compare and evaluate these three systems in terms of their throughput, average response time and CPU utilization in order to identify the best system design. Our experiments validate that the multiple slicing system outperforms both single slice systems, with or without scalability.
5G networks flexibly meet the different needs of various vertical services through network slicing. The Third Generation Partnership Project (3GPP) defines four standardized slice/service types: enhanced Mobile BroadBand (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), massive Internet of Things (mIoT) and Vehicle-to-X (V2X) [
Both Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) technologies can be used to enable network slicing for IoT [
We propose to customize each network slice first with different bandwidth to handle different types of IoT services. In addition, because of the virtualization of IoT platforms, we can scale out/in their instances rapidly and dynamically to support the variation in service load [
The rest of the paper is organized as follows: Section 2 introduces background information on oneM2M, the ETSI NFV architectural framework and network slicing. Section 3 presents our system design and system workflow in OpenStack. Section 4 describes the three different systems and compares their performance. Section 5 presents our system design, implementation and evaluation in Kubernetes. Finally, Section 6 concludes the paper and outlines future work.
In this section, we explain the oneM2M IoT platform we used in our system, the NFV architectural framework and the concept of network slicing.
oneM2M [ ] is a global standards initiative for the Machine-to-Machine (M2M) and IoT service layer. Its architecture divides an M2M system into two domains:
· Field Domain is the domain where sensors, actuators, aggregators and gateways are deployed. It is the M2M area network and resides at the edge of the network.
· Infrastructure Domain is the M2M core network and is normally located in a cloud environment where IoT servers and applications reside.
There are four different kinds of nodes in the oneM2M network: Application Dedicated Node (ADN), Application Service Node (ASN), Middle Node (MN) and Infrastructure Node (IN). The Field Domain consists of ADNs, ASNs and MNs, while INs are located in the Infrastructure Domain. We only apply ASN and IN in our system. A Traffic Generator is designed as the ASN to simulate IoT devices sending traffic. Three types of ASN devices, a video camera, a light pole and a parking detector, are simulated for the three IoT services, respectively. On the other hand, virtualized IoT servers are used as INs in the cloud to receive application traffic from the Traffic Generator.
We utilize OM2M, which is an open source implementation of oneM2M developed by LAAS-CNRS [
NFV MANO (Management and Orchestration) [ ] is an architectural framework for managing and orchestrating the NFV Infrastructure (NFVI) and VNFs. Whenever there is a demand for virtual resources, NFV MANO will coordinate, verify and authorize requests for these resources. It is also responsible for managing the lifecycle of VNFs, such as instantiation, scaling, update and termination. In addition, it manages the policy of network services, the collection and transfer of performance measurements, and the allocation of infrastructure resources. The NFV MANO framework is adopted in our research to construct the network slicing environment.
As illustrated in the ETSI NFV reference architecture, NFV MANO consists of three functional blocks:
· NFV Orchestrator (NFVO), which is in charge of the lifecycle of Network Services (NSs) and responsible for onboarding Network Service Descriptors (NSDs).
· VNF Manager (VNFM), which is responsible for the lifecycle management of the VNFs including VNF scaling out/in and their performance and fault management.
· Virtualized Infrastructure Manager (VIM), which is in charge of allocating and releasing NFV infrastructure (NFVI) including compute, storage, and network resources upon requests of the VNFM and NFVO.
Virtualized Network Functions (VNFs) are software implementations of network functions such as firewalls and load balancers. VNFs can improve network scalability and agility and also make better use of network resources. A Network Service (NS) consists of one or more network functions, which can include VNFs and Physical Network Functions (PNFs).
The NFV Infrastructure (NFVI) is composed of hardware and software resources that allow VNFs to be deployed, managed and executed. The physical infrastructure includes compute, storage and network. The virtualization layer decouples the virtual resources from the underlying hardware resources. Virtual resources are abstractions of the physical resources through the virtualization layer.
According to 3GPP [ ], a network slice is a logical network that provides specific network capabilities and characteristics.
Network slicing enables the operator to divide a physical network into multiple virtual and logically independent end-to-end networks. Each network slice is tailored to fulfill different service requirements, such as delay, bandwidth, security, and reliability to cope with diverse network application scenarios [
In our design, we utilize Tacker [ ] as the NFVO and VNFM, and OpenStack as the VIM:
· OpenStack is an open-source cloud operating system for virtualizing and managing resources including compute, network and storage. It provides multiple management services such as Nova for compute and Neutron for networking [
· Tacker is an official OpenStack project. It is an open-source implementation of the ETSI MANO architecture. It provides a generic VNFM and NFVO to deploy and operate NSs and VNFs on top of a VIM. It supports both OpenStack and Kubernetes as its VIM.
There are other orchestrators that can be used as NFVO and VNFM, such as Open Network Automation Platform (ONAP) [
In our system, OM2M IN instances are deployed as the VNFs, while NSs are compositions of OM2M IN instances and a Load Balancer, to be introduced next.
We construct three slicing systems for our experiments to support IoT services: 1) a single slice system, 2) a multiple customized slices system and 3) a single but scalable network slice system. Our objective is to compare and evaluate these three systems in terms of their throughput, average response time and CPU utilization in order to identify the best system design.
We first explain the functional blocks of our three systems, then show the Network Service lifecycle management flows and the system workflows. The general architecture of our systems is illustrated in the accompanying architecture figure.
Note that we design three new system components, Master Node, Load Balancer and Traffic Generator, on top of the OpenStack and Tacker open-source projects in order to complete our systems.
· Master Node is incorporated in VNFM to monitor the CPU status of VNFs on each network slice in order to trigger scale-out or scale-in actions [
· Load Balancer is designed to fairly dispatch the incoming traffic to each VNF [
· Traffic Generator is a multi-threaded program that we design to simulate three types of IoT traffic (a sketch of the generator follows the traffic descriptions below). It can configure the number of each kind of ASN device and the frequency of sending data. The three types of IoT traffic generated are video, adaptive lighting and smart parking. Each traffic type is a stream of HTTP requests.
1) Video: This is to simulate a security surveillance service enabled by the video camera. It provides monitoring services for road traffic and crowd movement. This service has the highest bandwidth demand among all three types of traffic.
2) Adaptive lighting: This is to simulate an adaptive lighting service by the smart street light pole that monitors weather conditions and adapts the brightness of street lighting accordingly based on the inputs from temperature, humidity, air pollution and light sensors.
3) Smart parking: This is to simulate a smart parking service that monitors the availability of parking spaces based on geomagnetic sensors embedded in parking areas. This service has the lowest bandwidth requirement among these three types of IoT services.
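The following is a minimal sketch of such a generator, assuming an OM2M IN-CSE reachable over HTTP with OM2M's default credentials; the endpoint layout, container names and helper names are illustrative, not our exact implementation. The frequencies and payload sizes follow the Traffic Generator configuration table in the next section.

```python
import threading
import time
import requests

CSE_URL = "http://10.0.0.10:8080/~/in-cse/in-name"   # assumed IN-CSE address

# (requests per second, payload size in bytes) per application type
PROFILES = {
    "video":             (1, 20000),
    "adaptive_lighting": (1, 6500),
    "smart_parking":     (3, 700),
}

def send_stream(app: str, duration_s: int) -> None:
    """One user thread: posts oneM2M contentInstances at the profile's frequency."""
    freq, size = PROFILES[app]
    payload = {"m2m:cin": {"con": "x" * size}}        # dummy payload of the right size
    headers = {
        "X-M2M-Origin": "admin:admin",                # OM2M default credentials
        "Content-Type": "application/json;ty=4",      # ty=4 -> contentInstance
    }
    deadline = time.time() + duration_s
    while time.time() < deadline:
        requests.post(f"{CSE_URL}/{app}", json=payload, headers=headers)
        time.sleep(1.0 / freq)

def run_stage(threads_per_app: int, duration_s: int) -> None:
    """Spawns the given number of user threads for each application type."""
    workers = [
        threading.Thread(target=send_stream, args=(app, duration_s))
        for app in PROFILES
        for _ in range(threads_per_app)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```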
As depicted in the Network Service lifecycle figure, there are four phases to run a network slicing system.
In the preparation phase, we set up the environment by first registering an OpenStack VIM to the Tacker MANO system. This is done by setting up an account with an authentication URL, username, password, project name and certificate. After registration is complete, we onboard first the VNFD and then the NSD to NFVO. NFVO verifies the integrity and authenticity of the NSD and checks whether the VNFD required by the NSD exists. Once acknowledged, the NSD is successfully onboarded.
In the instantiation phase, NFVO receives a request to instantiate an NS. After receiving the request, NFVO validates it and checks with VNFM whether the required VNF instances exist. If any of them does not exist, NFVO will request VNFM to instantiate it. NFVO will also check with VIM about the availability of the required network resources and request the instantiation of the virtual resources needed by the NS. Then NFVO proceeds to instantiate the NS by instantiating all of its needed VNFs one by one. NFVO also sets up the connectivity among VNFs according to the VLDs and VNFFGDs in the NSD. After deploying the NS, we limit the bandwidth of each NS according to the bandwidth attribute in the NSD.
In the run-time phase, Master Node will keep monitoring the load status of each VNF and scale out/in VNFs according to the load. When there is only one VNF left, no scale-in action will be triggered.
In the termination phase, NFVO receives a request to terminate an NS instance and requests VNFM to terminate every required VNF in the NS if it is not used by another NS. VIM then deletes the compute, storage and network resources required by the VNFs. After NFVO acknowledges the completion of the Network Service termination, the lifecycle of NS ends in this phase.
The workflow of our system for scaling is shown in the corresponding workflow figure. Master Node keeps monitoring the CPU utilization of the VNFs on each slice: when the utilization exceeds the scale-out threshold, it triggers the scale-out action, and when the utilization falls below the scale-in threshold, it triggers the scale-in action. However, if there is only one VNF left for the NS, the scale-in action will not be triggered. In addition, if there is already an action being executed, the next scale-out or scale-in action will not be triggered until the previous one ends.
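A minimal sketch of this decision loop is shown below; the threshold values and the helper functions are hypothetical placeholders for the actual VNFM and VIM calls in our implementation.

```python
import time

SCALE_OUT_THRESHOLD = 80.0   # percent CPU; assumed threshold values
SCALE_IN_THRESHOLD = 20.0
POLL_INTERVAL_S = 5

# Hypothetical VNFM/VIM hooks; in our system these map onto Tacker operations.
def get_avg_cpu(ns_id: str) -> float: ...
def count_vnfs(ns_id: str) -> int: ...
def is_action_running(ns_id: str) -> bool: ...
def scale_out(ns_id: str) -> None: ...
def scale_in(ns_id: str) -> None: ...

def monitor(ns_id: str) -> None:
    """Master Node loop: at most one scaling decision per polling interval."""
    while True:
        cpu = get_avg_cpu(ns_id)                  # mean CPU of the NS's VNFs
        if is_action_running(ns_id):
            pass                                  # a previous scale action is still executing
        elif cpu > SCALE_OUT_THRESHOLD:
            scale_out(ns_id)                      # add one VNF instance
        elif cpu < SCALE_IN_THRESHOLD and count_vnfs(ns_id) > 1:
            scale_in(ns_id)                       # never scale in the last remaining VNF
        time.sleep(POLL_INTERVAL_S)
```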
In this section, we show our test environment setup and experimental results. Three types of traffic are simulated through Traffic Generator designed to evaluate the performance of each system. The evaluation metrics include throughput, average response time and CPU utilization.
Our test environment consists of two servers. Tacker and OpenStack run on these two servers, configured as shown in the server configuration table below.
In our experiment, we use Traffic Generator to simulate three types of traffic and send the HTTP requests to each VNF on the network slice. For the single slice system, we send the traffic to the OM2M IoT platform directly. For the multiple slicing system, each type of IoT traffic will be sent to the VNF on the corresponding network slice. For the single slice scalable system, we send all HTTP requests to the load balancer on the network slice first; the load balancer then distributes the requests to each VNF.
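Since our Load Balancer runs RabbitMQ (see the VNF table below), its fair dispatch can be sketched with RabbitMQ's classic work-queue pattern: the front end pushes every incoming request onto one queue, each OM2M VNF runs one consumer, and a prefetch of 1 ensures a busy VNF is not handed another request. The queue name, broker address and forwarding callback are illustrative, not our exact implementation.

```python
import pika

def enqueue_request(body: bytes) -> None:
    """Front end: push each incoming HTTP request body onto a shared queue."""
    conn = pika.BlockingConnection(pika.ConnectionParameters("lb-host"))
    ch = conn.channel()
    ch.queue_declare(queue="iot_requests", durable=True)
    ch.basic_publish(
        exchange="",
        routing_key="iot_requests",
        body=body,
        properties=pika.BasicProperties(delivery_mode=2),  # persist messages
    )
    conn.close()

def consume_on_vnf(forward_to_om2m) -> None:
    """Each OM2M VNF runs one consumer; prefetch=1 yields fair dispatch."""
    conn = pika.BlockingConnection(pika.ConnectionParameters("lb-host"))
    ch = conn.channel()
    ch.queue_declare(queue="iot_requests", durable=True)
    ch.basic_qos(prefetch_count=1)                # one unacked message per VNF at a time
    def on_message(channel, method, properties, body):
        forward_to_om2m(body)                     # replay the request to the local OM2M
        channel.basic_ack(delivery_tag=method.delivery_tag)
    ch.basic_consume(queue="iot_requests", on_message_callback=on_message)
    ch.start_consuming()
```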
In order to meet the different requirements of each type of IoT request we simulate, we set the required bandwidth of each network slice according to the settings in the bandwidth table below.
The required bandwidth is set to twice the expected traffic throughput to absorb temporary traffic bursts. For the single slice system and the single slice scalable system, we set the bandwidth to 1400 Kbps. For the multiple slicing system, the bandwidth limits are 1000 Kbps, 300 Kbps and 100 Kbps respectively, for a total of 1400 Kbps, the same as the other two systems to ensure a fair comparison. The configuration of Traffic Generator is listed in the Traffic Generator configuration table below.
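As one way to realize such per-slice caps, the sketch below applies Neutron QoS bandwidth-limit policies through openstacksdk; in our system the limit actually comes from the NSD attribute, and the cloud profile and network names here are illustrative.

```python
import openstack

conn = openstack.connect(cloud="mano-cloud")      # assumed clouds.yaml entry

# Max bandwidth per slice = 2x expected throughput, per the table below.
SLICE_LIMITS_KBPS = {"video": 1000, "adaptive-lighting": 300, "smart-parking": 100}

for slice_name, max_kbps in SLICE_LIMITS_KBPS.items():
    policy = conn.network.create_qos_policy(name=f"{slice_name}-bw-limit")
    conn.network.create_qos_bandwidth_limit_rule(policy, max_kbps=max_kbps)
    # Attach the policy to the slice's network so all its ports inherit the cap.
    net = conn.network.find_network(f"{slice_name}-net")
    conn.network.update_network(net, qos_policy_id=policy.id)
```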
To test each system, there are three stages in our experiments, described below after the configuration tables. The whole process takes a total of 240 seconds. The payload size of each request sent by Traffic Generator is based on the settings defined in the Traffic Generator configuration table.
| Entity | Operating System | CPU | RAM | Version |
|---|---|---|---|---|
| Tacker | Ubuntu 18.04 | Intel E5-2678 v3, 2.5 GHz, 10 cores | 128 GB | Stable Rocky |
| OpenStack | Ubuntu 18.04 | Intel E5-2678 v3, 2.5 GHz, 10 cores | 128 GB | Stable Train |
| Entity | Image | vCPU | RAM | Disk Size |
|---|---|---|---|---|
| IoT Platform (OM2M) | xenial-server-cloudimg-amd64-disk1 | 1 | 1 GB | 10 GB |
| Load Balancer (RabbitMQ) | xenial-server-cloudimg-amd64-disk1 | 2 | 4 GB | 40 GB |
| System | Traffic Type | Expected Traffic Throughput / Max Bandwidth |
|---|---|---|
| Single Slice System | | 700 Kbps / 1400 Kbps |
| Multiple Slicing System | Video | 500 Kbps / 1000 Kbps |
| | Adaptive Lighting | 150 Kbps / 300 Kbps |
| | Smart Parking | 50 Kbps / 100 Kbps |
| Single Slice Scalable System | | 700 Kbps / 1400 Kbps |
| Application | Data Frequency | User Threads (Stages 1 & 3 / Stage 2) | Payload Size (Bytes) |
|---|---|---|---|
| Video | 1 request/s | 1 / 3 | 20,000 |
| Adaptive Lighting | 1 request/s | 1 / 3 | 6500 |
| Smart Parking | 3 requests/s | 1 / 3 | 700 |
· In the first stage, we follow the configuration in the Traffic Generator table above.
· In the second stage, we triple the number of user threads as shown in the same table. During this stage, the scalable system will trigger the scale-out action.
· In the final stage, we return to the same configuration as the first stage for 90 seconds. During this stage, the scalable system will trigger the scale-in action to return to its original status.
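Using the run_stage helper from the Traffic Generator sketch above, these stages could be driven as follows; the 90/60/90-second split is an assumption consistent with the stated 240-second total and the 90-second final stage.

```python
# Stage driver for the OpenStack experiment (durations are assumptions).
run_stage(threads_per_app=1, duration_s=90)   # stage 1: baseline load
run_stage(threads_per_app=3, duration_s=60)   # stage 2: tripled user threads
run_stage(threads_per_app=1, duration_s=90)   # stage 3: back to baseline
```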
Comparing the single slice system with the single slice scalable system, it is clear that the average response time of the single slice system with scalability is better than that of the one without scalability. In the first stage, since the single slice scalable system must go through Load Balancer, which is an additional VNF, its response time is longer than that of the single slice system. However, when the traffic load increases in the second stage, the response time of the single slice scalable system is similar to that of the single slice system. Moreover, the result of the single slice scalable system is even better in the final stage. This is because the single slice scalable system can deal with increasing traffic loads better than the single slice system. However, the total CPU utilization of the single slice scalable system is always higher than that of the other two systems due to the overhead of Load Balancer and scalability.
Throughput (Kbits/second):

| System | Type | Stage 1 | Stage 2 | Stage 3 |
|---|---|---|---|---|
| Single Slice System | Video | 168.6 | 506.9 | 168.4 |
| | Adaptive Lighting | 51.3 | 154.1 | 51.4 |
| | Smart Parking | 16.4 | 49.1 | 16.4 |
| Multiple Slicing System | Video | 168.8 | 505.9 | 168.9 |
| | Adaptive Lighting | 51.5 | 154.2 | 51.6 |
| | Smart Parking | 16.4 | 48.9 | 16.4 |
| Single Slice Scalable System | Video | 169.5 | 506.3 | 169.1 |
| | Adaptive Lighting | 51.1 | 153.8 | 50.6 |
| | Smart Parking | 16.0 | 47.9 | 15.9 |
The average response times of each application type in the single slice system over all testing stages are shown in the corresponding response-time figures.
According to the above results, we speculate that implementing horizontal scalability across the multiple slicing system may improve its performance and stability, which will be our future work. The research in [
In this section, we report our research results of building the NFV MANO framework with Tacker as the NFVO/VNFM and Kubernetes [ ] as the VIM.
Kubernetes is an open-source system for automating application deployment, scaling, and management. It provides a platform for deploying, managing, and scaling containerized applications across clusters of hosts. It works with a variety of container tools, including Docker.
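For illustration, the sketch below deploys the containerized OM2M VNF (image tingan0531/om2m, per the table below) with the official Kubernetes Python client; the object names and namespace are illustrative, and in our system Tacker drives this step through its Kubernetes VIM driver rather than direct API calls.

```python
from kubernetes import client, config

def deploy_om2m(name: str = "om2m-in", replicas: int = 1) -> None:
    config.load_kube_config()                     # use the local kubeconfig
    labels = {"app": name}
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="om2m",
                        image="tingan0531/om2m",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment)
```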
We construct only two slicing systems in this experiment: 1) a single slice system and 2) a multiple slicing system. At the end of this section, we compare and evaluate these two systems in terms of their average response time and CPU utilization.
Traffic Generator, which is our design, simulates the same three types of traffic as in our OpenStack experiments, as depicted in the corresponding architecture figure.
Our test environment consists of two servers. Tacker and Kubernetes each run on one server, configured as shown in the server configuration table below.
In this experiment, we use Traffic Generator to simulate three types of traffic and send the HTTP requests to each containerized VNF on the network slice. For the single slice system, we send all three types of traffic to the OM2M IoT platform directly. For the multiple slicing system, each type of IoT traffic is sent to the containerized VNF on the corresponding network slice.
The configuration of Traffic Generator is the same as the one used for OpenStack, as shown in the Traffic Generator configuration table above.
To test each system, there are three stages in our experiment. The whole process takes a total of 90 seconds. The payload size of each request sent by Traffic Generator is based on the settings defined in the same table.
· In the first stage, we follow the same Traffic Generator configuration as before.
· In the second stage, we triple the number of user threads as before.
· In the final stage, we return to the same configuration as the one in the first stage for 30 seconds. During this stage, the systems will approach stability.
Because Kubernetes has its own scaling functions and policies, we only construct a single slice system and a multiple slicing system. Also, the time spent in this experiment with Kubernetes as the VIM is different from the previous one with OpenStack as the VIM: since there was no need to test scalability, we shortened the total duration of the experiment.
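For reference, the built-in scaling policy mentioned above can be expressed as a HorizontalPodAutoscaler; we did not enable it in this experiment, and the target names and thresholds below are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="om2m-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="om2m-in"),
        min_replicas=1,
        max_replicas=5,
        target_cpu_utilization_percentage=80,   # scale out above 80% average CPU
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```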
| Entity | Operating System | CPU | RAM | Version |
|---|---|---|---|---|
| Tacker | Ubuntu 18.04 | Intel E5-2678 v3, 2.5 GHz, 10 cores | 128 GB | Stable Rocky |
| Kubernetes | Ubuntu 18.04 | Intel Core i7-8700, 3.2 GHz, 6 cores | 64 GB | v1.15.9 |
| Entity | Image | vCPU | RAM | Disk Size |
|---|---|---|---|---|
| IoT Platform (OM2M) | tingan0531/om2m | 1 | 1 GB | 10 GB |
Throughput generated through Traffic Generator (Kbits/second):

| Type | Stage 1 | Stage 2 | Stage 3 |
|---|---|---|---|
| Video | 168.2 | 507.4 | 170.3 |
| Adaptive Lighting | 51.4 | 154.1 | 51.4 |
| Smart Parking | 16.4 | 49.1 | 16.1 |
This result shows that the multiple slicing system has better performance when it encounters high traffic.
On the other hand, as depicted in the CPU utilization chart, the total CPU utilization of the multiple slicing system is only slightly higher than that of the single slice system.
Integrating the information from these two charts, we conclude that the performance of the multiple slicing system is better in general as its total CPU utilization is only slightly higher than that of the single slice system but it can achieve faster response time than the single slice system.
In this paper, we propose three different slicing systems enabled by NFV, based on the MANO framework: 1) a single slice system, 2) a multiple customized slices system and 3) a single but scalable network slice system to support IoT services. We utilize several open-source projects, including OpenStack, Tacker, Kubernetes, OM2M and RabbitMQ, to construct our systems. In our system, we leverage Tacker as the NFVO and VNFM, OpenStack/Kubernetes as the VIM, and OM2M instances as the VNFs to set up our NFV-enabled network slicing system for IoT. To support different kinds of IoT services, we customize each network slice with a specific QoS. Moreover, we design a Master Node to monitor the CPU usage of each VNF and scale out or scale in VNFs on the slice according to this information. Also, Load Balancer is designed for the single slice scalable system to dispatch traffic fairly.
In our experiment, we design Traffic Generator to simulate three types of IoT traffic: video, adaptive lighting and smart parking. The test traffic consists of three stages with different traffic loads. We measure the average response time and the CPU utilization of these three systems to identify the best system design. Comparing the results of these three systems, the multiple slicing system has the best performance among them. In addition, the single slice system with scalability is more stable than the one without scalability, with the tradeoff of higher CPU utilization.
Combining the results of the two experiments, the multiple slicing system is the best system design. Although we constructed only the first two systems in our experiment with Kubernetes as the VIM, the results likewise show that the performance of the multiple slicing system is better than that of the single slice system.
In the future, we plan to construct a network slicing system with vertical scalability by adapting to changing QoS requirements dynamically. We also plan to experiment with horizontal scalability across multiple slices rather than just on a single slice. Moreover, constructing a hybrid system of horizontal and vertical scalability to meet more diverse requirements of IoT services is also a potential future research direction [
This work was financially supported by the Ministry of Science and Technology (MOST) of Taiwan under Project Number MOST 109-2221-E-009-083, and by the Center for Open Intelligent Connectivity from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project of the Ministry of Education (MOE) in Taiwan.
The authors declare no conflicts of interest regarding the publication of this paper.