Governments, policy analysts and advocates routinely rely on user-generated tests of home broadband speeds. Most of those tests are conducted from customer smartphones, which is hardly surprising, since 70 percent of the world's people use a smartphone as their only form of home broadband access.
But there is more. Even where home broadband is available and purchased over a fixed network, the internet service provider's connection must first pass through a Wi-Fi router, which supplies the signal actually used in the home.
And there is a significant difference between the ISP-delivered speed and the Wi-Fi router’s output speed.
On this plot of home broadband speed measurements, the blue line shows the difference between output from the home Wi-Fi router and the speed delivered to the location by the internet service provider.
The point is that, on consumer devices (PCs, tablets, smartphones), the gap between the ISP-delivered speed and the Wi-Fi output speed can be as high as 40 percent, particularly on higher-speed connections and on mobile devices.
Also, many people likely test their connections only when the connection feels slower than they typically expect. That subjective trigger suggests that many tests are run precisely when congestion or other problems are present.
On top of that, many consumers now use virtual private networks, which can degrade performance further, commonly by 20 percent. In my own experience, degradation of 50 percent or 60 percent is routine.
And then consider another obvious shaper of test results: the buying decisions consumers make about service plans. Since home broadband providers offer a choice between cheaper, slower plans and more expensive, faster plans, the maximum measurable speed is capped by the tier each consumer has purchased.
If half of consumers buy the slowest plans, that will pull measured outcomes down; if half buy the fastest tiers, it will pull them up. Either way, the distribution of purchased plans shapes the aggregate results.
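To make that arithmetic concrete, here is a minimal Python sketch of how the mix of purchased tiers caps the average measurable speed. The tier speeds and market shares are hypothetical, chosen only for illustration.

```python
# Minimal sketch: how the mix of purchased plan tiers caps aggregate test results.
# The tier speeds and market shares below are hypothetical, for illustration only.

plan_mix = {
    100: 0.5,   # half of consumers on a 100 Mbps tier
    500: 0.5,   # half of consumers on a 500 Mbps tier
}

# Even with perfect delivery, the average measured speed cannot exceed
# the share-weighted average of the tiers people actually bought.
weighted_avg = sum(speed * share for speed, share in plan_mix.items())
print(f"Ceiling on average measured speed: {weighted_avg:.0f} Mbps")
```

Shift the shares toward the slower tier and the ceiling drops accordingly, no matter how well the ISP performs.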
The point is that user-generated tests of internet access speed are, for a number of reasons, weighted to the low side of actual ISP performance.
In some cases, the speeds customers report could easily be as much as 60 percent lower than the actual ISP-delivered speed, when the test is run on a mobile phone; connected over Wi-Fi; with a VPN active on the device; and on a service plan that is slower than average.
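As a rough illustration of how those factors compound, the Python sketch below multiplies a hypothetical Wi-Fi loss and a hypothetical VPN loss against a hypothetical delivered speed. The specific percentages are assumptions drawn loosely from the figures above, not measurements.

```python
# Minimal sketch: compounding the degradation factors discussed above.
# The delivered speed and loss percentages are illustrative assumptions.

isp_delivered = 300          # Mbps actually delivered by the ISP (hypothetical)
wifi_loss = 0.40             # up to ~40% lost between router and device
vpn_loss = 0.30              # VPN overhead (reported range roughly 20-60%)

measured = isp_delivered * (1 - wifi_loss) * (1 - vpn_loss)
gap = 1 - measured / isp_delivered

print(f"Speed seen by the test app: {measured:.0f} Mbps")
print(f"Gap versus ISP-delivered:   {gap:.0%}")   # ~58% lower in this scenario
```

Add a slower-than-average plan or a congested test window on top of that, and the reported figure drifts even further from what the ISP actually delivers.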
And if a significant percentage of tests are taken by consumers who are experiencing a slowdown in performance, the gap can be even greater.
All that might not matter so much when the point of testing is to compare user experience across different ISPs, different geographies or different groups of customers.