By Leigh Chinitz, Spirent Communications
Until recently, service providers were unable to predict in-home performance of Wi-Fi devices because they lacked standardized test cases. Now, industry groups have advanced three new standards, each geared toward different Wi-Fi use cases.
The Wi-Fi industry has long had standardized conformance testing to confirm that a device meets standards and will interoperate with other Wi-Fi-certified devices. Until recently, though, there was no standardized way to measure Wi-Fi performance or make comparisons across devices. For service providers, this lack of industrywide, vendor-agnostic performance testing has been an ongoing source of frustration.
According to one 2023 study, 92% of U.S. households use Wi-Fi for home Internet, and more than 25% of those surveyed value quality of experience (QoE) over price. Given that many subscribers rely on Wi-Fi routers or gateways provided by their service provider, operators get the blame — and support calls — when customers have a bad experience. Providers want as much information as possible when evaluating equipment to deploy to their customers. Service providers have been among the loudest voices advocating for standardized performance testing.
Why has it taken so long for the industry to respond? The answer is a story with a happy ending: There are now three different standardized test sets available, each examining different aspects of Wi-Fi performance in different contexts. As Wi-Fi technology grows more sophisticated with each new release, and more important to both residential and business users, these efforts couldn’t have come at a better time.
Why so hard?
Conformance testing and performance testing are different. Think of it this way: “Will this device interoperate with that device?” as opposed to “How well will this device perform while interoperating with that device?” Both questions are important, and industry efforts to answer the first have been enormously consequential. The fact that you can buy any Wi-Fi-enabled smart TV, for example, without worrying if it will work with your home router is a testament to the Wi-Fi Alliance’s conformance testing regime. Arguably, the confidence consumers have that Wi-Fi devices will interoperate is the biggest reason for Wi-Fi’s global success.
Performance testing is, however, another matter. There are, of course, methods to measure how a device performs in a given location, but they typically require RF experts to conduct onsite walk tests with specialized equipment. Such tests are useful when outfitting a new enterprise campus but not scalable to millions of subscriber homes. Figure 1 shows just one of countless possible configurations. Coming up with a standardized test set that can be run in the lab and still yield consistent data for meaningful comparisons has proven much more difficult, for several reasons.
First, Wi-Fi performance is extremely sensitive to environmental factors. A location’s layout, construction materials, and, especially, other devices sharing its RF airspace can all affect performance. Even in the same location, airspace is highly dynamic, so the same test set can yield different results from one hour to the next. A device’s performance can also fluctuate depending on the types of traffic different devices transmit and the specific interference conditions at that moment — factors difficult to recreate or control in the lab.
With so much variability, Wi-Fi access point (AP) vendors have been skeptical of any attempt to define testing standards that would rank one device’s performance over another in a purportedly objective way. It’s hard to blame them. We don’t even have a universally accepted definition of what “good Wi-Fi performance” means. What a given customer considers good represents a mix of data rates, range, reliability, supported features, and price. A device that’s an excellent choice for one use case (say, home broadband) might be poorly suited to another (a busy office environment), and vice versa.
Use-case testing
If navigating Wi-Fi variability has been the biggest challenge in developing standardized performance testing, embracing that variability has enabled us to finally find a solution. Rather than trying to devise a single, universal test set to compare devices, the industry has adopted a “fit-for-purpose” approach, advancing three different standards to measure performance for specific Wi-Fi use cases. They are:
- Broadband Forum (BBF) TR-398: Released in 2019, BBF’s Wi-Fi In-Premises Performance Testing plan was the first to define specific performance test cases and methodologies for evaluating residential Wi-Fi. True to the fit-for-purpose model, TR-398 focuses exclusively on residential APs (not client devices), measuring in-premises performance in single-AP environments, which is the most common home deployment. The test plan covers RF capability, coverage, stability, and performance, both at baseline and with multiple connected clients, with pass/fail criteria for each test.
- Wi-Fi Alliance Device Metrics Test Plan: This 2022 effort from Wi-Fi Alliance’s Customer Experience Group is also geared towards residential Wi-Fi APs. Instead of establishing baseline pass/fail performance tests, however, the Device Metrics Test Plan aims to provide consistent statistical analysis so that testers can compare devices for a specific purpose. Test cases include rate vs. range, AP latency, channel switching, roaming, and augmented reality/virtual reality (AR/VR) performance. The standard focuses on analyzing and presenting data in a clear and consistent way so that prospective buyers can decide for themselves, based on their specific market and end-user requirements, whether a device meets their needs.
- European Telecommunications Standards Institute (ETSI) specification TS 103 754: Also released in 2022, the ETSI Broadband Radio Access Network (BRAN) Multiple Access Points Performance Testing Plan provides a framework specifically for evaluating multi-AP environments, such as Wi-Fi mesh or extender scenarios. It covers roaming time and throughput, one- and two-hop throughput, band steering, and network configuration and self-healing.
While focusing on different use cases, these standards share some commonalities. They all require that tests be conducted in a controlled RF environment (not open air) and that testers use multiple independent, interconnected RF chambers (at least for some of the test cases) to create more complicated and lifelike testing topologies.
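To make that concrete, here is a minimal sketch of how a lab-automation script might grade a rate-vs-range sweep, where programmable attenuation between chambers stands in for distance from the AP. The attenuation steps and throughput thresholds below are hypothetical placeholders, not values from TR-398 or any other published plan; a Device Metrics-style analysis would keep the raw curve rather than reducing it to pass/fail.

```python
# Minimal sketch of a rate-vs-range evaluation in the spirit of the
# chamber-based test plans described above. Attenuation steps and
# throughput thresholds are hypothetical placeholders, not values
# taken from TR-398 or any other published test plan.

# Hypothetical pass thresholds (Mbps) at each programmed attenuation step (dB).
# In a chamber-based test bed, attenuation stands in for distance.
THRESHOLDS_MBPS = {10: 600, 20: 450, 30: 300, 40: 150, 50: 40}

def evaluate_rate_vs_range(measured_mbps: dict[int, float]) -> dict[int, bool]:
    """Compare measured downlink throughput against the threshold at each step."""
    results = {}
    for atten_db, threshold in THRESHOLDS_MBPS.items():
        measured = measured_mbps.get(atten_db, 0.0)
        results[atten_db] = measured >= threshold
    return results

if __name__ == "__main__":
    # Example measurements from a hypothetical device under test.
    measured = {10: 812.0, 20: 540.0, 30: 335.0, 40: 120.0, 50: 55.0}
    for atten_db, passed in evaluate_rate_vs_range(measured).items():
        print(f"{atten_db} dB attenuation: {'PASS' if passed else 'FAIL'}")
```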
New features, new challenges
The release of these standards is good news for service providers and Wi-Fi customers. For the industry groups advancing them, however, the work is only beginning. As useful as these initial test cases are, they still provide only a partial picture of Wi-Fi performance. They don’t address several of the more advanced features in Wi-Fi 6 or Wi-Fi 7, and new features are constantly in development.
In many cases, new Wi-Fi capabilities not only expand the way we define “good performance,” they also create new use-case-specific testing considerations. Consider two features introduced in Wi-Fi 6:
- Orthogonal frequency-division multiple access (OFDMA) adds new channel and multi-user capabilities, including allowing an AP to communicate with multiple end devices simultaneously. BBF’s TR-398 Issue 2 added new testing focused on Wi-Fi 6’s higher throughput, but the standard does not yet address OFDMA performance.
- Target Wake Time (TWT) enables more flexible sleep schedules for Wi-Fi devices. So far, there is no standardized test to measure TWT performance, or that of any other Wi-Fi power-saving feature.
As Wi-Fi becomes more important for a wider range of devices and applications, the need to understand how these features perform will grow. For instance, Wi-Fi devices will increasingly use OFDMA to improve throughput and quality for streaming and real-time videoconferencing. Power-saving features such as TWT will help keep door locks, alarm systems, home appliances, and other IoT devices connected without draining their batteries. Service providers will need to test device performance with and without these features, under different conditions, and while sharing a network with other devices. Figure 2 compares capacity, latency, throughput, and coverage parameters and shows where various testing services and technologies fall.
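As a rough sketch of what such a with/without comparison could look like, the snippet below aggregates repeated trial results for a device tested with a feature (OFDMA or TWT, say) enabled and then disabled. All trial numbers are invented, and the assumption that the feature can be toggled through the AP’s management interface is just that, an assumption.

```python
from statistics import median

# Hypothetical trial results for the same device and client mix, measured
# with a feature such as OFDMA or TWT enabled and then disabled.
# All numbers below are invented for illustration only.
trials = {
    "feature_on":  [{"throughput_mbps": 410, "latency_ms": 9},
                    {"throughput_mbps": 395, "latency_ms": 11},
                    {"throughput_mbps": 402, "latency_ms": 10}],
    "feature_off": [{"throughput_mbps": 350, "latency_ms": 18},
                    {"throughput_mbps": 362, "latency_ms": 16},
                    {"throughput_mbps": 341, "latency_ms": 19}],
}

def summarize(runs):
    """Return median throughput (Mbps) and latency (ms) across repeated trials."""
    return (median(r["throughput_mbps"] for r in runs),
            median(r["latency_ms"] for r in runs))

on_tput, on_lat = summarize(trials["feature_on"])
off_tput, off_lat = summarize(trials["feature_off"])
print(f"Feature on : {on_tput} Mbps median, {on_lat} ms median latency")
print(f"Feature off: {off_tput} Mbps median, {off_lat} ms median latency")
print(f"Delta      : {on_tput - off_tput:+} Mbps, {on_lat - off_lat:+} ms")
```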
Fortunately, the groups that developed the first wave of performance testing standards continue to expand them. For example, TR-398 Issue 2 added dual-band and bidirectional throughput tests, channel auto-selection tests, and initial “roaming” and “repeater” tests. Issue 3, which BBF will release in 2024, will add testing to address the new 6-GHz spectrum, quality of service (QoS), latency, and more.
Looking ahead
With the release of Wi-Fi 7, we can expect even more advanced features and more complex testing challenges. Changes to Wi-Fi modulation, new OFDMA spectrum-allocation functions, and especially multi-link operation (MLO), which enables devices to combine channels across bands, promise to dramatically improve throughput and QoE for end users.
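As a back-of-the-envelope illustration of why MLO matters for testing, the sketch below sums per-link rates across bands and applies an assumed efficiency factor. Both the per-link rates and the 65% efficiency figure are placeholders chosen for illustration, not values taken from the 802.11be specification or from any measurement.

```python
# Back-of-the-envelope MLO throughput estimate. Per-link PHY rates and the
# efficiency factor are assumed placeholders, not values from 802.11be.

PHY_RATES_MBPS = {"2.4 GHz": 574, "5 GHz": 2402, "6 GHz": 2882}
EFFICIENCY = 0.65  # assumed MAC/airtime efficiency, purely illustrative

def estimated_goodput(links: list[str]) -> float:
    """Rough goodput if the device aggregates the given links simultaneously."""
    return sum(PHY_RATES_MBPS[band] for band in links) * EFFICIENCY

single_link = estimated_goodput(["5 GHz"])
multi_link = estimated_goodput(["5 GHz", "6 GHz"])
print(f"Single link (5 GHz): ~{single_link:.0f} Mbps")
print(f"MLO (5 + 6 GHz):     ~{multi_link:.0f} Mbps")
```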
Defining use cases, test methodologies, and metrics for these new features will be a major area of focus moving forward. Equally important, as customers use Wi-Fi to support more mission-critical applications in both homes and businesses (a primary area of focus for Wi-Fi 7), we will need the ability to measure and guarantee performance in more granular ways.
The good news is that we finally have an approach to Wi-Fi performance testing that everyone can agree on. By adopting fit-for-purpose testing, we can sidestep the challenge of creating a single, universal standard for measuring performance. Instead, we can provide something much more useful: clear data for the use cases that matter most to each customer.