Read the media coverage around 5G, and you might think you've picked up a science fiction novel. Augmented reality, driverless vehicles, remote surgery: it all sounds like Star Trek. It's easy to get excited about the growing list of 5G use cases. And we should be excited. But there's still a big gap between the world envisioned by these next-generation applications and where 5G networks are today.
5G is a radically different network technology than anything that's come before, and it introduces all sorts of new wrinkles we're still ironing out. Among the biggest: testing and validation. How do you test networks that are exponentially more complicated than the ones your methodologies were designed for? When a bad outcome could be caused by any of hundreds of virtual components and microservices, most of which have never run at scale before? When key pieces of your network, even parts of the 5G specification itself, don't actually exist yet?
We will, ultimately, get to the 5G future we're all anticipating. But these are the questions we need to answer before we do.
Grappling with complexity
As much hype as 5G gets, much of the coverage still glosses over how different it is from previous cellular standards. From network slicing to cloud-native infrastructure to continuous integration/continuous delivery (CI/CD) pipelines, core aspects of 5G represent entirely new ground for the service providers delivering it. Operators must contend with:
- More complexity: Yesterday's network environments consisted mostly of monolithic network appliances that operated in predictable ways. 5G replaces the old, proven standards and components with thousands of deconstructed, decomposed little virtualized mysteries, all of which have to work really well together to do what we want them to do. That's an exponential increase in the number of variables we have to test.
- More vendors: Historically, operators dealt with a relatively small portfolio of network equipment from an even smaller number of vendors. 5G turns that status quo on its head, introducing new vendors and innovations to every part of the network. That multivendor openness is a big reason why 5G can enable so many amazing new things. But it also makes interoperability testing much, much more complicated.
- More frequent change: Networks aren't static, and testing for performance, security and interoperability can't be either. Any time you update network software, you need to retest everything: the performance of individual components, how those components interoperate with adjacent components at scale, and how they behave at the level of end-to-end services. If you work with two or three vendors, releasing updates two to three times per year, that process is relatively predictable. Now though, you might have 30 vendors, all releasing updates at their own cadences. How are legacy, largely manual testing methodologies supposed to keep up?
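The scaling problem in that last bullet is easy to underestimate, so here is a back-of-envelope sketch of it in Python. The figures and the counting model (one interop pass against each other vendor's component, plus one end-to-end pass, per release) are hypothetical illustrations, not measurements from any real operator.

```python
# Rough estimate of annual retest load, illustrating why manual testing
# stops scaling as vendor count and release cadence grow.
# The counting model below is a simplified assumption for illustration.

def annual_retests(vendors: int, releases_per_vendor: int) -> int:
    """Each release triggers an interop pass against every other vendor's
    component, plus one end-to-end regression pass."""
    releases = vendors * releases_per_vendor
    interop_pairs_per_release = vendors - 1
    return releases * (interop_pairs_per_release + 1)

print(annual_retests(3, 2))   # a legacy-style portfolio -> 18 passes/year
print(annual_retests(30, 4))  # a multivendor 5G portfolio -> 3600 passes/year
```

Even under this deliberately simple model, going from three vendors to thirty multiplies the yearly test burden by a factor of 200, which is why the manual status quo breaks down.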
When 5G goes wrong
Given these challenges, it shouldn't be surprising when things don't work perfectly on the first try. And that's exactly what we're seeing in the field in early 5G rollouts. More than once, we've field-tested devices across live networks and measured perfectly acceptable statistics, until we forced the device to communicate only with the 5G network and saw those KPIs fall off the map. Other times, we've seen 5G-to-4G handoffs for mission-critical services suffer half-minute delays.
In almost all cases, these problems aren't caused by fatal flaws in the technology. Some component was out of compliance with the 3GPP specification, or one bad configuration somewhere caused unexpected issues. It's all fixable, and it's all information we need to know. But the point is, these aren't problems on the level of trying to squeeze 5% better performance from a telesurgery network slice. These are fundamental issues with how 5G networks operate. Before we start blazing a trail to the future, we need to get these fundamentals right.
Getting 5G ready for primetime
We need to be realistic about setting expectations for 5G networks, and we need to be methodical about how we approach testing from lab to live and back again. Here are three basic guidelines to do it:
- Plan for significant, ongoing effort: Given the issues we're still finding in live networks, we should be reevaluating our testing approach top to bottom, making sure we apply the rigor 5G demands. That means testing in the lab, testing as you go to preproduction, testing as you begin rolling out infrastructure, testing as you scale users. And this won't be a one-time effort. Whenever anything changes, we need to go through the whole exercise again.
- Assume you'll need to automate: With all the new variables 5G introduces, all the new vendors and components and software updates, it's just not possible to conduct testing the way we have in the past. Legacy manual approaches won't work in dynamic cloud-native environments. Nor will designing your own test cases for everything you need to evaluate, certainly not at scale. If you can't use prebuilt, automated testing, you won't be able to keep up.
- Don't try to do it yourself: In the past, it was possible for operators and vendors to conduct their own testing. As we've seen though, testing 5G networks and services is much more than a full-time job. Just keeping up with ongoing changes to the specification is a major undertaking, never mind validating products from new and legacy vendors. And remember, while new 5G startups are doing amazing things, many have little experience testing telco equipment, certainly not at scale. Working with third parties that focus exclusively on testing is the quickest way to get objective, vendor-agnostic validation.
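The "retest everything whenever anything changes" discipline above can be sketched as a simple change-triggered loop. This is a minimal illustration only: the component names (taken from 5G core function acronyms), version strings, and the `run_suite` callback are all hypothetical stand-ins for whatever a real CI/CD pipeline would wire in.

```python
# Minimal sketch of change-triggered regression testing: any component whose
# version changed gets its full suite re-run. Names and versions are
# hypothetical; a real pipeline would hook into a CI system instead.

from typing import Callable, Dict

def retest_on_change(
    deployed: Dict[str, str],          # component -> currently deployed version
    published: Dict[str, str],         # component -> newly published version
    run_suite: Callable[[str], bool],  # runs component, interop and e2e tests
) -> Dict[str, bool]:
    """Re-run the full test suite for every component whose version changed,
    then record the new version as deployed."""
    results = {}
    for component, version in published.items():
        if deployed.get(component) != version:
            results[component] = run_suite(component)
            deployed[component] = version
    return results

# Usage: two of three vendor components shipped updates, so two suites run.
deployed = {"amf": "1.0", "smf": "2.1", "upf": "0.9"}
published = {"amf": "1.1", "smf": "2.1", "upf": "1.0"}
out = retest_on_change(deployed, published, run_suite=lambda c: True)
print(sorted(out))  # ['amf', 'upf']
```

The point of the sketch is the trigger condition, not the test logic: with 30 vendors releasing on their own cadences, this loop fires constantly, which is exactly why prebuilt, automated suites are the only way to keep pace.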
We will, eventually, get to the Star Trek future that we're all anticipating, and it promises to be as exciting as advertised. But just because we can picture it, that doesn't mean we're there yet. Let's make sure we're taking the basic steps we need to take to make our 5G dreams reality.
Doug Roberts, General Manager, Lifecycle Service Assurance, Spirent