Sunday, July 31, 2016

India Mobile Subscriptions Drop in May 2016

One does not often see mobile subscriptions decline in a developing country market in any particular quarter or month, but that seems to be what happened in India in May 2016, when total mobile subscriptions dropped about one percent.

In urban areas, mobile subscriptions dropped two percent. In rural areas, subscriptions grew.

But most of the dip comes from share losses at a few mobile providers, not an across-the-board decline in net subscribers at most mobile service providers.

Fixed voice subscriptions also dropped, but that is not entirely unexpected in many markets.


Consumer Revenues Propel Service Provider Revenue Growth

Among the big changes in the telecom business over the last three to four decades, one of the most prominent is the growing revenue contribution (and profit contribution, in many cases) made by consumer accounts, as opposed to business customers.

That was not always the case. In the past, super-high profits from business customers funded operation of networks also serving consumers.

These days, it is consumer revenues that lead growth for most service providers.

The other big shift is the relative contributions made by mobile, as opposed to fixed, revenues.

As I mentioned during a recent keynote address for Telegration business partners (a U.S.-based sales organization focusing on enterprise and mid-market customers), when conducting analyses of Internet adoption on a global basis, one can essentially ignore all fixed network access, look only at mobile Internet access, and still get the trend right, and the magnitudes of usage about right.

That is quite a change from historical patterns, where business user revenues accounted for about half of total revenues in developed countries. But the direction of change is mostly towards greater consumer revenues (what EY calls the “smart operator” model).

A few service providers might opt for becoming “mostly” wholesale providers for retail partners, in which case the revenue contributors shift dramatically to “business” revenue (wholesale). So far, we see little evidence that service providers are willing to retrench as wholesale capacity suppliers, however.

And that means the percentage of revenue earned from consumers is going to dominate in the future.

Except for the roughly 10.5 percent of Internet users in Asia, for example, who get access using a fixed network, substantially all the rest do so using mobile networks.

In some Asian countries--such as the Philippines and Thailand--perhaps 20 percent to 25 percent of people use a fixed Internet connection (primarily urban residents). But in many other countries, usage of fixed Internet access can be in single digits, or less than one percent.

As we start to connect the half of all people who do not presently use the Internet, the percentage of mobile or wireless users will climb, while the percentage of fixed network users shrinks. And we are talking about roughly two billion new Internet users in Asia alone.

That is not to say other access methods will not emerge, but it is hard to ignore the fact that nearly 90 percent of Internet access in Asia now is provided by mobile networks.

Rural coverage, language relevance, device prices and recurring access costs all are issues. Still, mobile has to be reckoned the primary delivery vehicle.

As the 80/20 rule suggests, "20 percent of activities produce 80 percent of the results." For Internet access, the practical application is that only mobile really matters when it comes to consumer Internet access.

Will that change in the future? It is possible. If the Internet of Things develops as a key revenue driver, then enterprise or business revenue contributors will grow again.

source: ITU

Bringing stakeholders together to understand changing supply and demand issues, and the business model for Internet access, is a key focus of the Spectrum Futures conference. Here’s a fact sheet and Spectrum Futures schedule.

Saturday, July 30, 2016

90% of Asia Internet Access Is Provided by Mobile Networks

Across Asia, about 58 percent of people still do not use the Internet, according to the International Telecommunication Union.

If you exclude China, Japan and Korea, plus the city-state of Singapore and the island of Taiwan, the percentage of Internet non-users can, in some cases, range upwards of 76 percent.

Except for the roughly 10.5 percent of Internet users in Asia who get access using a fixed network, substantially all the rest do so using mobile networks. That is not to say other access methods will not emerge, but it is hard to ignore the fact that nearly 90 percent of Internet access in Asia now is provided by mobile networks.

Rural coverage, language relevance, device prices and recurring access costs all are issues. Still, mobile has to be reckoned the primary delivery vehicle.

As I mentioned during a keynote address for Telegration business partners (a U.S.-based sales organization focusing on enterprise and mid-market customers), when conducting analyses of Internet adoption, one can essentially ignore all fixed network access, look only at mobile Internet access, and still get the trend right, and the magnitudes of usage about right.

As the 80/20 rule suggests, "20 percent of activities produce 80 percent of the results." For Internet access, the practical application is that only mobile really matters when it comes to consumer Internet access.

Bringing stakeholders together to do something about that is the mission of the Spectrum Futures conference. Here’s a fact sheet and Spectrum Futures schedule.



Why Bundling?

U.K. service provider bundled subscriptions will grow by 20 percent from 2015 to 2020, with quadruple-play revenues growing 300 percent and total multiplay market revenues growing by 34 percent, Strategy Analytics predicts.

Dual-play subscriptions will peak in 2016 and begin to decline as customers move to triple-play and quadruple-play bundles, the firm estimates. By 2020, quadruple-play subscriptions will represent more than 21 percent of bundled subscriptions in the U.K. market.

The bundled services trend--in any market--tells you quite a lot about the state of the telecommunications market. There are some advantages in terms of customer acquisition and retention.

Simply, most bundles are a form of discounting: “buy more, save money.” Once consumers have made those decisions, churn tends to drop, because dropping any one service means losing the cost savings of the whole bundle.

But bundling also tells you how hard it is (perhaps how “nearly impossible” it is) to build a modern fixed network and build a business model on any single service.

Competition, more than anything, is the cause. Assume a single-purpose network (voice and entertainment video being the historical examples). Under monopoly conditions, the addressable market is nearly 100 percent of locations. Actual take rates were above 80 percent for video and above 90 percent for voice.

But add competition and the business model is unsustainable. Even if only two competitors split a market, neither is likely to gain more than half the available customers. That means a provider actually gets subscribers at only about 40 percent to 45 percent of locations.

In other words, subscription gross revenue--all other things being equal--is cut in half. Add more competitors and the numbers get worse. And things never are equal: new competition also leads to price cuts. So gross revenue falls by more than half.

Bundling allows any single provider to create a new revenue model based on smaller shares of bigger markets. Even if no single provider has more than half the market share in any product segment, any successful provider can maintain gross revenue by selling multiple products to the same customer.

A simple illustration: a supplier that formerly had 100 percent of the market for one product might instead have 33 percent of the market for each of three products. In such a case, the total number of sold units remains the same as under monopoly conditions.
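
A minimal sketch of that arithmetic, using hypothetical round numbers rather than any real operator’s figures:

```python
# Hypothetical market of 1,000 locations, uniform pricing assumed.
locations = 1000

# Monopoly: nearly 100% addressable, ~90% take rate (historical voice levels).
monopoly_units = locations * 0.90           # 900 subscriptions

# Duopoly, single product: each provider wins roughly half the buyers,
# so subscribers come from only ~45% of locations.
duopoly_units = locations * 0.90 * 0.50     # 450 subscriptions

# Bundling: a 33% share of each of three products sold over the same network
# roughly restores the monopoly-era unit volume.
bundled_units = locations * 0.33 * 3        # ~990 units

print(monopoly_units, duopoly_units, bundled_units)
```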

Gross revenue per unit, and profit margins, are issues, but you see the logic.

The most important observation might be that bundling now is necessary because no fixed network can survive on revenues from a single product.

More change is coming, since virtually all existing legacy products in the bundle face product substitution or outright abandonment.

Friday, July 29, 2016

AT&T, Comcast Boost Usage Caps for Internet Access Customers

As with virtually everything else connected with Internet access, speeds keep getting faster, and usage allowances (where there are any limits at all) keep getting bigger for U.S. customers of leading fixed network Internet service providers.

AT&T has used a variety of usage caps for customers of its U-verse Internet access service, with caps based on user access speed. Users at 768 kbps to 6 Mbps had a 300 GB cap.

U-verse users on connections of 12 Mbps to 75 Mbps had 600 GB caps. Customers on anything between 100 Mbps and 1 Gbps had a terabyte (1,000 GB) monthly usage cap.

But AT&T now appears to have bumped usage allowances up to 1,000 GB for all customers on access connections of 768 kbps or faster.

As a practical matter, that means all U-verse customers, and customers of AT&T’s gigabit service, effectively have no caps at all.


Charter Communications, as a condition of the approval of its purchase of Time Warner Cable, will not be allowed to impose any usage caps for seven years.

In markets where it is testing usage caps for its Internet access customers, Comcast has raised the monthly usage allowance from 300 GB to 1,000 GB (a terabyte).

Comcast says its typical customer uses about 60 gigabytes of data in a month.

To be sure, some advocate no caps of any kind, ever, arguing that unlimited use actually has no cost implications. ISPs would disagree, of course. All networks are dimensioned for some amount of expected usage, and further designed to support peak loads.

Capital has to be invested when peak loads grow substantially. So the amount of usage by typical users, and by the heaviest users, does matter.

The terabyte cap allows for viewing of 700 hours of high-definition video, or about 23 hours each day. No single person watches that much video, and even if shared among four people, that amounts to nearly six hours per day, per person.

Power users (less than one percent of Comcast’s high speed access customer base) who want more than a terabyte can sign up for an unlimited plan for an additional $50 a month, or they have the option to purchase additional buckets of 50 gigabytes of data for $10 each.
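
A quick sanity check of those numbers (the roughly 1.4 GB per hour of high-definition video is simply the rate implied by Comcast’s own 700-hour figure, not a published specification):

```python
cap_gb = 1000                        # terabyte monthly cap

# Rate implied by Comcast's 700-hour figure: ~1.43 GB per HD hour.
gb_per_hd_hour = cap_gb / 700

hours_per_day = 700 / 30             # ~23.3 hours of HD video every day
per_person = hours_per_day / 4       # ~5.8 hours/day in a four-person household

# Overage: $10 buys a 50 GB bucket, while unlimited costs a flat $50 more,
# so unlimited pays off only past five buckets (250 GB) beyond the cap.
breakeven_gb = (50 / 10) * 50

print(round(gb_per_hd_hour, 2), round(hours_per_day, 1),
      round(per_person, 1), breakeven_gb)
```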

Nearly anyone who has actually kept track of data usage under both “unlimited” plans and plans with large usage buckets would likely agree that, as a practical matter, a reasonably sized usage bucket is virtually indistinguishable from an “unlimited” plan.

Usage limits vary on U.S. mobile services as well. Sprint and T-Mobile US have been bigger proponents of unlimited usage than have Verizon and AT&T.

But AT&T recently has been offering unlimited mobile data usage for mobile customers who bundle their service with AT&T’s DirecTV service.

U.S. Mobile Customers More Satisfied, AT&T Leads, But Billing Issues Still Drive 44% of Inbound Customer Service Traffic

In its latest study of mobile service provider customer care satisfaction, J.D. Power found U.S. consumers were more satisfied with customer care experiences.

But some things do not seem to change. Questions about billing are huge drivers of inbound customer service activity.

Among wireless customers contacting their carrier by telephone, billing (44 percent) is the most frequently reported reason for contact.

The same is true for customers contacting using the online channel (52 percent).

On the other hand, the most common reason mobile customers visit a retail facility for customer service is questions about service options and equipment (40 percent).

AT&T ranks highest among full-service carriers, with an overall score of 820. So some things do change in the area of mobile consumer satisfaction.

AT&T performs particularly well in the walk-in (retail stores) and online channels, and performs above the full-service average in all four service channels, J.D. Power said.

Consumer Cellular ranks highest for the first time among wireless non-contract carriers, scoring 878.

Consumer Cellular performs above the non-contract average in all four service channels, especially in the automated response system channel, followed by the customer service representative channel.

Overall satisfaction among mobile full-service customers is 804, an improvement of 16 points from the 2016 Full-Service Study—Vol 1, J.D. Power said.

Satisfaction among non-contract wireless customers is 761, a significant 23-point increase from the 2016 Non-Contract Study—Vol 1, J.D. Power added.

Overall satisfaction is highest among customers whose online contact was via a user forum (838), followed by social media (836), email (827), carrier website (826) and online chat (813).

Researching information on the carrier's website (51%) is the most common activity via online contact, followed by online chat (44%), email (26%), user forums (23%), and social media (7%).

One More Proposed Low Earth Orbit Satellite Constellation Ultimately Featuring 2,956 Satellites

Boeing Co. has applied for a license from the U.S. Federal Communications Commission to launch and operate a network of thousands of satellites in low earth orbit, enabling high speed Internet access and communication services that likely will reach every inch of the earth’s surface.

That, of course, would help maritime applications, but also could bring high speed access to isolated areas at new, and lower, price points.

Of course, Boeing is not alone. SpaceX and OneWeb also plan to launch LEO constellations, and O3b (using a medium earth orbit constellation) already is in commercial service.

Boeing said it planned to initially deploy 1,396 satellites into low-earth orbit within six years of the license approval.

Eventually, the aerospace giant said its system would total 2,956 satellites designed to provide Internet and communications services for commercial and government users around the globe.

There still is some possibility Boeing--if successful--might take a wholesale approach, launching the constellation but then leasing capacity to third parties.

And, as was the case in the 1990s, the business models might not work for some, or even most, of the potential contestants. How extensive demand will be is the issue. With mobile operators expected to step up their Internet access efforts; with new backhaul methods using balloons or unmanned aerial vehicles; and with new options based on either 5G mobile or fixed wireless, there will be many ways to supply high speed Internet access to isolated places.

So the LEO constellations are racing all the other would-be providers. The biggest areas of natural advantage for the LEO providers are the traditional maritime, government and commercial users, as well as isolated areas such as South Pacific islands and island archipelagos including Indonesia and the Philippines.

As always, we are likely to overshoot on investment, meaning there will not be commercially viable niches for all of the would-be suppliers. It might also be reasonable to suggest that, eventually, all of the surviving LEO constellations will be sold to incumbent satellite services companies, which themselves are looking to move beyond legacy video backhaul services threatened by the rise of over-the-top video consumption ill suited to satellite delivery.

Still, some idea of the value of the advances is clear enough. O3b, for example, provides the backhaul for mobile operator Digicel’s service in Samoa, delivering significantly higher retail end user speeds than were possible in the past using geostationary satellites, and better latency performance for applications such as voice.

Mariah Shuman, O3b Networks maritime and international regulatory affairs director, will speak about such constellations, and their value, at the Spectrum Futures conference. Here’s a fact sheet and Spectrum Futures schedule.

Eventually, Net Neutrality Rules Will Not Be Needed

The argument for regulation of communications services of any type always is justified on the basis of scarcity: that some product or service is supplied by too few suppliers, on too limited a basis, to allow competition to act as the regulator.

Eliminate scarcity and the argument for government regulation goes away. Eventually, even network neutrality rules justified on the basis of scarcity (the idea that a few ISPs are so powerful they can shape or limit competition) are going to go away.

Huge amounts of new spectrum, including 29 gigahertz of new wireless communications capacity (as much as seven gigahertz of it to be made available on an unlicensed basis); new competition from Google Fiber, municipal networks and independent ISPs; the new economics of fixed wireless; spectrum sharing; and next-generation mobile networks are going to eliminate scarcity.

Thursday, July 28, 2016

"Five Nines" Now is Effectively Impossible for Consumer Web Experience

It probably goes without saying that the Internet is a complex system, with lots of servers, transmission paths, networks, devices and software all working together to create a complete value chain.

And since the availability of any complex system is the combined result of the potential failures of all of its elements, it should not come as a surprise that the complete end-to-end user experience is not “five nines.”

Consider a 24×7 e-commerce site with lots of single points of failure. Note that no single part of the whole delivery chain has availability of more than 99.99 percent, and some portions have availability as low as 85 percent.

The expected availability of the site would be 85% * 90% * 99.9% * 98% * 85% * 99% * 99.99% * 95%, or about 59.87 percent. Redundancy is the way performance typically is enhanced at a data center or on a transmission network.
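
A minimal sketch of that calculation, using the illustrative component values from the table below (not measured data):

```python
import math

# Availability of each single point of failure in the delivery chain.
components = {
    "web": 0.85, "application": 0.90, "database": 0.999, "dns": 0.98,
    "firewall": 0.85, "switch": 0.99, "data_center": 0.9999, "isp": 0.95,
}

# Serial chain: end-to-end availability is the product of the parts.
end_to_end = math.prod(components.values())
print(f"{end_to_end:.2%}")  # ~59.87%
```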

For consumers, “hot” redundancy generally is not possible for devices. One can keep spare devices around, but manual restoration (switch to a different device, power it up) is required. Most often, “rebooting” is the restoration protocol, as “I will call you back” is the restoration protocol for a dropped mobile call.

Component       Availability
Web             85%
Application     90%
Database        99.9%
DNS             98%
Firewall        85%
Switch          99%
Data Center     99.99%
ISP             95%

Some of us are old enough to remember joking about “rebooting your TV,” a quip meant to suggest what would happen as TV signal formats switched from analog to digital, from standard to high-definition formats, from playback devices to Internet-connected devices.

Of course, we sometimes find we actually must reboot our TVs, set-top decoders, Wi-Fi and other access routers, so the quip was not without foundation.

In the past, some might have contrasted the availability (uptime) of televisions with that of computing devices. There are many issues.

Software with lots of code, and little fault isolation, can lead to some amount of crashing, and therefore lower availability. Drivers are known to cause faults.

One study of server availability found that 58 percent of IBM servers operated at 99.999 percent availability, but only 46 percent of Hewlett-Packard servers and 40 percent of Oracle servers did so. Such issues normally are dealt with by building in automatic failover to redundant machines.

But many servers offer only “two nines” (99 percent) availability, off the shelf.

Still, although a 79 percent majority of corporations now require a minimum of 99.99 percent uptime or better for mission-critical hardware, operating systems and main line-of-business applications, that target obviously is less than the “five nines” standard for telecom services.

On the other hand, IBM “fault tolerant” servers are supposed to operate at “six nines” of availability, higher than the telecom standard.

Whether software is as reliable as, or less reliable than, a “five nines” network is debatable. But most would agree that software and hardware (without redundancy) operate at less than 99.999 percent availability.

There is a big difference between 99 percent availability (88 hours of downtime per year) and 99.9 percent availability (8.8 hours of downtime per year); or 99.99 percent availability (53 minutes each year) and 99.999 percent availability (a bit more than five minutes a year).
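
Those figures follow directly from the definition of availability; a small helper makes the conversion explicit:

```python
def downtime_per_year(availability: float) -> float:
    """Hours of downtime per year implied by a given availability."""
    return (1 - availability) * 365 * 24

for a in (0.99, 0.999, 0.9999, 0.99999):
    hours = downtime_per_year(a)
    print(f"{a * 100:g}% -> {hours:.2f} hours ({hours * 60:.0f} minutes) per year")
```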

It is a myth that “five nines” remains the operational definition of availability for modern IP-based systems supporting voice, web and other over-the-top applications, even if service providers can produce reams of data proving that their core networks actually perform at that level.

In other words, even if networks are highly reliable, human beings use devices and applications that never work close to “five nines” in terms of availability.

The fundamental problem is that end user appliances, applications and operating systems cannot reach “five nines” levels of performance. And the whole calculation of availability is based on concatenated chains of devices. Element A might operate at “five nines.”

But, without redundancy, any transmission chain with three elements would be calculated as 99.999 percent times 99.999 percent times 99.999 percent, or about 99.997 percent. By definition, the total chain includes the downtime caused by any single element in the chain.

Traditionally, telecom networks have considered 99.999 percent availability the standard for fixed network voice services.

These days, it is hard to find anyone arguing that actual end user application or service experience actually ever approaches “five nines.” The reason is that most of the applications people want access to on the Internet actually are processed in data centers whose servers cannot operate at five nines availability.

To cope with that issue, data centers use redundancy. In other words, the issue is not how reliable any single server is; the issue is how fast an entity can detect a fault and switch to a backup server.

That same approach (redundancy) is used by transport networks and business access networks.
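
The logic of redundancy can be expressed the same way. Under the idealized assumption of instant, perfect failover, N parallel units fail only when all N fail at once:

```python
def redundant_availability(single: float, n: int) -> float:
    """Availability of n parallel units, assuming instant, perfect failover."""
    return 1 - (1 - single) ** n

# Two "two nines" (99%) servers in parallel already reach "four nines."
print(f"{redundant_availability(0.99, 2):.4%}")  # 99.9900%
print(f"{redundant_availability(0.99, 3):.4%}")  # 99.9999%
```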

But many apps still are delivered over networks that are unmanaged end to end, even if the portion of the delivery chain any single network can control operates at “five nines.”

A new way of thinking about reliability or availability is that modern application delivery systems cannot actually meet the old “five nines” standard, end to end. The actual end-to-end systems crash often enough that five nines is not possible, even when there are redundant “five nines” access and transport systems.

In other words, loss of local power alone is a threat to five nines for end user experience. Operating systems crash, access to websites hiccups, mobile phone calls drop. Devices run out of battery power.

Wi-Fi is the typical device connection in homes and offices, and no matter how well other elements and systems work, Wi-Fi operations alone would crash “five nines” performance, in terms of the actual experience of application and service availability.


The point is that “five nines” is a myth, when considered from the standpoint of a consumer end user of any Internet service or app, on any consumer device.

"Tokens" are the New "FLOPS," "MIPS" or "Gbps"

Modern computing has some virtually-universal reference metrics. For Gemini 1.5 and other large language models, tokens are a basic measure...