Digital twins enable dynamic control of 5G slicing

One of the most frequently touted features of 5G, beyond vastly increased bandwidth, is the opportunity for carriers to deliver secure, isolated network services to enterprises and smart cities. A customer can avoid the complexity of a dedicated or wired network for remote or temporary sites, or ensure that traffic from employees' mobile devices is cleanly isolated from the Internet.

Network slicing offers an opportunity for carriers to eliminate the complexity of VPN management and to offer services that isolate and secure enterprise traffic: a significant opportunity given the ever-present risk of cyber-attacks and the rapid growth of distributed workforces.

A consequence of slicing is that each base station effectively becomes a multiplexer: it must juggle transmissions from endpoints on multiple slices dynamically, buffering them temporarily until they can be forwarded onto the carrier's core network. Deciding the order in which buffered transmissions from different slices are forwarded is complicated.
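To make the multiplexing concrete, here is a minimal sketch of one way a base station might drain per-slice buffers: strict priority between slices, with round-robin among slices of equal priority. The class and field names are illustrative assumptions, not part of any 3GPP-defined scheduler.

```python
from collections import deque

class SliceQueue:
    """Buffer for one network slice at a base station (illustrative)."""
    def __init__(self, slice_id, priority):
        self.slice_id = slice_id
        self.priority = priority      # lower value = higher priority
        self.buffer = deque()         # transmissions awaiting the core network

    def enqueue(self, transmission):
        self.buffer.append(transmission)

class SliceMultiplexer:
    """Picks which slice's buffered transmission to forward next."""
    def __init__(self, slices):
        self.slices = list(slices)
        self._rr_index = 0            # round-robin cursor for equal-priority slices

    def next_transmission(self):
        backlogged = [s for s in self.slices if s.buffer]
        if not backlogged:
            return None
        top = min(s.priority for s in backlogged)
        candidates = [s for s in backlogged if s.priority == top]
        chosen = candidates[self._rr_index % len(candidates)]
        self._rr_index += 1
        return chosen.buffer.popleft()
```

In practice the forwarding decision would also weigh each slice's QoS commitments, as discussed next, but the queue-per-slice structure is the essential point.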

Each slice has an associated quality of service (QoS), and the multiplexer has the job of satisfying the aggregate set of constraints across all slices. QoS is not new in mobile networking: the 4G standards specify up to 256 different QoS levels. Few providers have adopted 4G QoS, however, because the primary use of 4G has been consumer mobile networking, and enterprises have adopted other forms of isolation (such as VPNs) and managed to get by, given the huge capacity carriers have deployed to serve consumers.

Network slicing provides a new business opportunity for carriers because it offers both isolation and QoS, which in turn deliver a guarantee against denial-of-service attacks from third parties and an assurance that enterprise traffic will receive priority.

However, two challenges result. First, the base station needs to ensure that QoS guarantees are met for each slice, given that traffic is variable-bitrate. It also needs to ensure that the resources allocated to each slice (bandwidth and QoS) are feasible; in other words, that the base station can meet the aggregate local QoS commitments.

Second, for each customer, the provider must ensure that each base station has the capacity to meet network-wide demand. The QoS parameters in 5G networks support guaranteed and maximum bitrates per slice, in addition to numerous other attributes, including priority level, packet delay budget, packet error rate, and the maximum burst size and window size for window-based rate guarantees. Management is therefore complex, and real-time measurements of capacity and performance are needed to solve the capacity-allocation problem and to prove that the network is performing.
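As a rough illustration of these parameters and of the feasibility question raised above, the sketch below models a per-slice QoS profile and a naive admission check that the aggregate guaranteed bitrate fits within a base station's usable capacity. The field names and the headroom factor are assumptions for illustration, not 3GPP-defined structures, and real admission control is considerably more involved.

```python
from dataclasses import dataclass

@dataclass
class SliceQoSProfile:
    """Per-slice QoS attributes (illustrative field names)."""
    slice_id: str
    priority_level: int
    guaranteed_bitrate_mbps: float
    maximum_bitrate_mbps: float
    packet_delay_budget_ms: float
    packet_error_rate: float
    max_burst_size_bytes: int

def is_feasible(profiles, cell_capacity_mbps, headroom=0.8):
    """True if the sum of guaranteed bitrates fits within the share of
    cell capacity reserved for guaranteed traffic (headroom is assumed)."""
    committed = sum(p.guaranteed_bitrate_mbps for p in profiles)
    return committed <= headroom * cell_capacity_mbps
```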

Adding complexity, the 5G networking standards apply to the device-to-base-station protocols and not to the network-wide problem of capacity assignment. This creates a challenging problem for network providers: both per-base-station and network-wide capacity-allocation problems must be solved. Finally, mobile devices can move between base stations at will.

Real-time analysis and capacity assignment are crucially important but very hard to achieve: data volumes are large, moving raw data is expensive, data is generated by widely distributed base stations and mobile devices, and raw data is of ephemeral value. Solving the capacity management problem means that analysis can't wait, so the "store then analyze" approach will fail. This is a problem that demands in-memory, "on-the-fly" analysis of data to enable per-base-station and network-wide performance optimization.
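A small sketch of what "on-the-fly" analysis means in practice: each raw measurement is folded into a constant-size running summary and then discarded, rather than being stored for later batch analysis. The class below is hypothetical and only shows the pattern.

```python
class RunningStats:
    """Constant-memory summary of a stream of measurements."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.maximum = float("-inf")

    def update(self, value):
        self.count += 1
        self.mean += (value - self.mean) / self.count   # incremental mean
        self.maximum = max(self.maximum, value)
        # the raw sample is not retained after this point
```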

A recent innovation in streaming data analysis is key to rapidly solving the capacity management problem in both 4G and 5G networks: digital twins that analyze streaming data on the fly. The approach relies on a distributed fabric of concurrent digital twins, one for each base station and device, that process raw data (measurements of connection quality, bitrate, transmission rate, QoS committed and received, and so on) from each real-world device in parallel.

Digital twins run on a distributed execution platform that includes both per-base-station and regional compute. Each digital twin represents a single entity, and the graph of digital twins represents proximity, containment, and even correlation.

In addition, each digital twin processes its own real-world data and shares its state over the graph with related digital twins to enable real-time contextual analysis. The state of every real-world device and virtual object is mirrored, in real time, by the set of digital twins. The distributed graph enables real-time state sharing, a bit like a "LinkedIn for things."

Digital twins analyze their own state and the states of the twins they are linked to, enabling analysis, learning, and even prediction. Each digital twin statefully evolves, analyzes its own state and the state of its linked "neighbors," and then streams its insights in real time to the capacity management application. Most raw data is thus discarded at the base stations where it originates, and the distributed computation helps to solve the capacity assignment and monitoring problem using the states of the digital twins. This reduces the data volume and speeds up analysis, because computation can be done at memory speeds. The net result is that the computing capacity required is substantially reduced (typically less than 10% of the resources required for a store-then-analyze approach) and results are continually available.
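The following toy sketch (not the SWIM.AI API; all names are assumptions) illustrates the shape of such a fabric: one twin per base station or device that ingests its own measurements, keeps a mirrored state, reads linked neighbors' state for context, and emits a compact insight for the capacity management application while the raw data is dropped at the edge.

```python
class DigitalTwin:
    """Toy digital twin for one base station or device."""
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.state = {}        # mirrored state of the real-world entity
        self.neighbors = []    # linked twins: proximity, containment, correlation

    def link(self, other):
        self.neighbors.append(other)

    def ingest(self, measurement):
        """Fold a raw measurement (bitrate, QoS received, etc.) into local state."""
        self.state.update(measurement)   # raw measurement is discarded after this call

    def insight(self):
        """Summarize own state plus neighbor context for the capacity manager."""
        neighbor_load = [n.state.get("bitrate_mbps", 0.0) for n in self.neighbors]
        return {
            "entity": self.entity_id,
            "bitrate_mbps": self.state.get("bitrate_mbps", 0.0),
            "neighborhood_load_mbps": sum(neighbor_load),
        }
```

Only the small dictionaries returned by insight() travel beyond the edge, which is what keeps the data volume and compute requirements low.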

— Simon Crosby, CTO SWIM.AI

Photo by Erik Eastman on Unsplash
