The end-user experience of the internet is virtually impossible to quantify in terms of total availability; in practical terms, it cannot be measured objectively. And whatever the objective estimates might be, they are conditioned by subjective issues.
For example, even if we can estimate that the total availability of all internet apps is collectively 90 percent, meaning that something, somewhere, is unavailable about 10 percent of the time, no single user is actually trying to use all of those apps.
Outages do not matter to users who are not using the affected app, data center, internet service provider or backbone network. Furthermore, nobody is actively interacting with internet apps 24 hours a day, seven days a week, so outages that happen when a person is not interacting with an app effectively “do not matter” for that user.
We have long known that anything related to the internet is not as “reliable,” in terms of service availability, as the old public switched telephone network. The telephone network uptime standard was 99.999 percent availability, representing annual downtime of about five minutes.
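The “five nines” arithmetic is easy to verify. A minimal sketch (the function name is mine, for illustration):

```python
def annual_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year implied by an availability percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - availability_pct / 100)

print(round(annual_downtime_minutes(99.999), 1))  # "five nines": about 5.3 minutes
print(round(annual_downtime_minutes(99.0)))       # 99 percent: 5,256 minutes, about 3.7 days
```

The same function shows how quickly downtime grows as the nines fall away: each lost nine multiplies annual downtime by ten.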
Availability for consumer internet apps and services tends to be far lower, in large part because the entire end-to-end transmission chain is not under any single entity’s control. Consider recent availability data for U.K. internet service providers. In all cases, availability was in the range of 90 percent to 97 percent.
But that was only for the local access link. Those availability figures do not take into account any other sources of availability loss: far end access; app availability; device availability; local power availability or any other platform availability. Since there is no “chain of custody,” it is virtually impossible to estimate average availability for any particular configuration of hardware, software and platforms at any single location or for any single device.
It is safe to say that 99.999 percent availability for consumer internet “anything,” end to end, is impossible.
It probably goes without saying that the Internet is a complex system, with lots of servers, transmission paths, networks, devices and software all working together to create a complete value chain.
And since the availability of any complex system is the combined performance of all cumulative potential element failures, it should not come as a surprise that a complete end-to-end user experience is not “five nines.”
Consider a 24×7 e-commerce site with lots of single points of failure. Note that no single part of the whole delivery chain has availability of more than 99.99 percent, and some portions have availability as low as 85 percent.
In principle, availability could be quite low, without redundancy built in.
The expected availability of the site would be 85% × 90% × 99.9% × 98% × 85% × 99% × 99.99% × 95%, or about 59.87 percent. Redundancy is the way performance typically is enhanced at a data center or on a transmission network, to avoid such a state.
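That product, and the effect of adding redundancy, can be sketched in a few lines of Python. The component figures are the illustrative ones from the example above; the parallel formula is the standard one for n independent redundant copies, not anything specific to this article:

```python
def serial_availability(components):
    """Availability of a chain where every component must be up (no redundancy)."""
    total = 1.0
    for a in components:
        total *= a
    return total

def parallel_availability(a, n):
    """Availability of n independent redundant copies; only one must be up."""
    return 1 - (1 - a) ** n

# The eight illustrative components from the example above.
chain = [0.85, 0.90, 0.999, 0.98, 0.85, 0.99, 0.9999, 0.95]
print(f"{serial_availability(chain):.2%}")  # about 59.87%

# Duplicating just the two weakest (85 percent) elements lifts the whole chain.
improved = [parallel_availability(0.85, 2), 0.90, 0.999, 0.98,
            parallel_availability(0.85, 2), 0.99, 0.9999, 0.95]
print(f"{serial_availability(improved):.2%}")  # about 79.17%
```

The sketch makes the general point concrete: a serial chain is never better than its weakest link, and redundancy pays off fastest where availability is lowest.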
In practice, cloud data centers have made great strides when it comes to availability, so that dire situation virtually never happens. The other issue is that “user-experienced” availability is better than actual end-to-end availability, since users are not connected to and engaged with internet experiences and apps all day, every day.
Outages can occur without any user-experienced downtime if a particular user is not actually interacting with the internet at the moments when the outages occur. Even if end-to-end experience is at the 90-percent level, implying outages about 10 percent of the time, a user is unaffected if those outages occur while that person is sleeping or otherwise logged off.