Friday, July 31, 2015

SoftBank Remains Committed to Sprint Turnaround; Will Take Time

SoftBank President Nikesh Arora says the company still is committed to turning around Sprint Corp. While it might be the case that a person in that position must say such things, Arora also seems to acknowledge it will take some time.

“Telecom business is a long-term business...It takes a while to shift the direction in the industry,” said Arora.

SoftBank bought Sprint in 2013 for about $22 billion. Today Sprint is worth about $13.7 billion, in terms of equity value. In other words, Sprint’s equity value has to climb $8 billion just to reach the original purchase price.

To do that, Sprint will have to start making money. In the first quarter of 2015, Sprint still was losing money.
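The size of the required recovery is easy to quantify. A quick sketch, using the purchase price and equity value cited above (figures rounded, in billions of U.S. dollars):

```python
# Sprint equity-value gap: what appreciation would be needed to recover
# SoftBank's original purchase price? Figures are the approximate values
# cited in the text, in billions of USD.
purchase_price = 22.0   # approximate 2013 purchase price
current_value = 13.7    # approximate mid-2015 equity value

gap = purchase_price - current_value
required_gain = gap / current_value  # fractional appreciation required

print(f"Gap: ${gap:.1f}B")
print(f"Required appreciation: {required_gain:.0%}")
```

In other words, Sprint's equity value would have to appreciate by roughly 60 percent just to get back to the purchase price.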

At least so far, Sprint has remained committed to competing as a full-service, national provider. Indeed, it is hard to see how a long-term turnaround could succeed with any other strategy. A retreat to some sort of niche role, at this point, likely would ensure that Sprint never would recover its full original value.

The reason is that a niche strategy would not supply the revenue scale Sprint needs. Today, Sprint is among the top four “full-line generalists” in the U.S. mobile market, competing across a full range of products and markets.

That strategy requires volume and market share, as financial performance tends to improve with additional scale. In that regard, Sprint presently has about 15 percent share, far above the level it might otherwise have as a niche provider, if it could uncover such a role.

Specialists tend to be margin-driven players, and their financial performance deteriorates as they increase their share of the broad market.

Comcast, Australian NBN to Deploy DOCSIS 3.1 in 2016, 2017

It isn’t clear precisely when, and in what quantities, the DOCSIS 3.1 standard will be commercially deployed, but Australia and the United States likely will be early to do so.

Comcast has said it will deploy DOCSIS 3.1, designed to support 10 Gbps in the downlink, early in 2016.

NBN Co in Australia has said it will introduce DOCSIS 3.1 (Data Over Cable Service Interface Specification 3.1) by about 2017.

DOCSIS 3.1 supports download speeds of up to 10 Gbps and upstream speeds of up to 1 Gbps.

More than three million homes and businesses in Brisbane, Melbourne, Perth, Adelaide, Sydney and the Gold Coast are earmarked to receive the National Broadband Network over the Hybrid Fibre Coaxial (HFC) network that presumably will be engineered to support DOCSIS 3.1, even if DOCSIS 3.0 initially is used by many consumers.

The issue is largely dictated by marketplace realities. Not many consumers will happily pay hundreds of dollars to buy a multi-gigabit service, when lower-cost megabit services that work well can be purchased.

100 Million in Asia Now Buy FTTH Subscriptions

Asia is a place of large numbers. More people means more of almost everything, including fiber-to-home connections. About 100 million people in the Asia-Pacific region now subscribe to Fiber-to-the-Home (FTTH) services, according to Ovum.

That growth is a continuation of 2014 trends. Fiber to home subscribers in Asia Pacific rose 35 percent to 115.8 million at the end of 2014, according to other statistics published by IDATE.

The number of homes connected with fiber, meanwhile, increased 36.8 percent from 2013, reaching 338 million by the end of 2014.

Japan leads the region with 100 percent of homes passed, followed by South Korea and Singapore with 95 percent each.

Taiwan, Hong Kong, Malaysia and China also are among the top eight.

#maketechhuman

Sometimes we forget why we do things. This is a reminder.

Nokia invites you to a debate on how technology can truly serve humanity.

Twenty years ago, on this very day, an epic journey was started: a journey of connecting millions of Indians with the help of a revolutionary technology. It was the day when the first mobile call was made in India on a Nokia handset, using a network built by Nokia.

Today, as Nokia celebrates the 20th anniversary of mobile telephony in India and the 150th year of its existence, the torchbearer of mobile technology is turning a new leaf. Nokia, in association with The Economic Times, is bringing #maketechhuman, a global debate about the human possibilities of technology and the positive and even negative impacts that technological developments may have on people's lives, to India. The aim is to engage major influencers in the technology field in India, adding their own voices and perspectives to the conversation in the context of emerging and unique economies like India.

Myanmar to Test "Rule of Three"

That mobile and fixed network communications industries structurally are oligopolies might irritate many, but that structure has proven an enduring feature of communications industry dynamics globally since the great wave of privatizations and competition began in the 1980s.

Some argue stable oligopolies are possible with somewhere between two and four providers, with many contending that three strong contestants is the optimal sustainable outcome. In that view, generally called the rule of three, the existence of four or more providers in many markets is considered a “problem.”

Most big markets eventually take a rather stable shape where a few firms at the top are difficult to dislodge.

Some call that the rule of three or four. Others think telecom markets could be stable, long term, with just three leading providers. The reasons are very simple.

In most cases, an industry structure becomes stable when three firms dominate a market, and when the market share pattern reaches a ratio of approximately 4:2:1.

A stable competitive market never has more than three significant competitors, the largest of which has no more than four times the market share of the smallest, according to the rule of three.

In other words, the market share of each contestant is half that of the next-largest provider, according to Bruce Henderson, founder of the Boston Consulting Group (BCG).

Those ratios also have been seen in a wide variety of industries tracked by the Marketing Science Institute and Profit Impact of Market Strategies (PIMS) database.
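The stability conditions described above are easy to illustrate. A minimal sketch; the market shares below are hypothetical, chosen only to match the halving ratio Henderson describes, not data from any real market:

```python
# Rule-of-three sketch: a stable 4:2:1 share pattern, where each
# contestant holds roughly half the share of the next-largest player.
# These shares are illustrative, not data from any real market.
shares = [40.0, 20.0, 10.0]  # percent of market held by the top three

# Each firm holds about half the next-largest firm's share...
ratios = [shares[i] / shares[i + 1] for i in range(len(shares) - 1)]
print(ratios)  # [2.0, 2.0]

# ...and the largest has no more than four times the smallest's share,
# the stability condition stated by the rule of three.
print(shares[0] / shares[-1] <= 4)  # True
```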

Myanmar aims to test the thesis.

Myanmar has formally invited proposals from local public companies to create a fourth mobile operator, in partnership with a foreign company. Myanmar also is planning to auction off additional spectrum.

The Rule of Three applies wherever competitive market forces are allowed to determine market structure with only minor regulatory and technological impediments. But there are some circumstances where market structure does not take that stable “rule of three” shape.

Regulatory policies can hinder market consolidation or allow for the existence of “natural” monopolies. In some cases, major barriers to trade and foreign ownership of assets can have the same effect.

We shall see what happens, long term, in Myanmar. In other instances, four providers have proven to create an unstable market. But instability can last for long periods of time, so the outcome cannot yet be predicted.

In July 2015, Telenor had grown its subscriber base to more than 10 million, while Ooredoo reported 3.3 million at the end of April 2015.

Myanmar also is allocating more sub-1-GHz spectrum in the 700 MHz, 850 MHz and 900 MHz bands.

As you would guess, as mobile adoption--especially of smartphones--grows, new demand will be created for subsea bandwidth to Myanmar.

As of the end of the first quarter of this year, mobile penetration in Myanmar stood at 25 percent, up from less than 19 percent at the end of 2014.

In a significant development, half of the people buying a mobile phone also buy a data plan, while 70 percent of all the phones sold are smartphones.

Myanmar has about 30 Gbps of international bandwidth, with Telenor and Ooredoo adding another 10 Gbps, and another order of magnitude to come over several years.

There are currently 3,000 mobile towers, but the country needs 15,000 to 20,000, plus 25,000 km of additional transport facilities.

Thursday, July 30, 2015

5G: Breath-Taking Stuff

You might think some particular element of the proposed fifth generation network is the “defining” characteristic.

Perhaps it is the ultra-low latency that makes 5G distinctive. For others, the access speeds will matter most.

Connectivity transparency or consistent experience might be seen as the “key” features. In other cases, it might be the new business models or applications that seem most distinctive.

For some, “network slicing,” the ability to optimize general-purpose network features in different ways to support specific use cases and applications, will seem key.

For others, the growing inability to clearly separate “access” from “transport,” or “connectivity” from “apps,” will be the uniquely new character.

In most instances, the flexibility or customization of the new network will seem the most-striking new feature.

5G will bring multiple propositions to all customers while, at the same time, providing an enhanced and unique proposition tailored to each of them. The definition of the customer is not limited to consumers and enterprises, as in today’s environment, but also expands to include verticals and other partnerships.

Any of those choices will make sense, if the 5G vision articulated by the Next Generation Mobile Network Alliance is correct.

For decades, network architects have strived to create a network so flexible, and so powerful, that it can actually customize what used to be thought of as “vanilla” communications services for specific applications, users, devices and business models.

We used to call that “bandwidth on demand,” for example. But 5G will advance far beyond that, creating a network able to create and customize end user experiences when using network resources.

The clear hope is that such flexibility, customization and control will enable many huge new business models. In other words, the general purpose network can be customized for specific types of devices, users or apps.

Smartphones used by people for voice, Internet access and messaging might use one set of network features, “tuned” for people using smartphones in different settings, at different times or day or places.

Autonomous vehicles will require a different set of performance parameters, and the network should be able to supply just the needed performance, while not burdening what might be thought of as a complete virtual network with unnecessary features or capabilities.

Likewise, sensor networks should be optimized in ways that best support their intended functions, without unnecessary cost and overhead.

Where content delivery is the key business function, the network should be able to support linear or on-demand content efficiently.

Service features for airplane passengers will be distinct from service features for pedestrians in shopping malls or sports stadiums. High-density service requirements and low-density rural apps, with a mix of latency requirements, access speeds or mobility support should be supported.
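One way to picture the slicing idea running through the examples above is as a mapping from use cases to per-slice performance targets. The sketch below uses use cases echoing those discussed here, but the specific numbers and the `pick_slice` helper are illustrative assumptions, not figures from any 5G specification:

```python
# Hypothetical per-slice performance targets, illustrating how one
# general-purpose 5G network could expose differently-tuned virtual
# networks. All numbers are illustrative assumptions.
slices = {
    #                 (max latency ms, downlink Mbps, mobility support)
    "smartphones":    (20,   100,  True),
    "autonomous_car": (1,     10,  True),   # ultra-low latency is the constraint
    "sensor_network": (1000,  0.1, False),  # cheap, low-overhead connectivity
    "video_delivery": (50,    25,  False),  # throughput matters, mobility less so
}

def pick_slice(max_latency_ms, min_mbps):
    """Return the slices meeting an app's latency and bandwidth needs."""
    return sorted(
        name for name, (lat, mbps, _) in slices.items()
        if lat <= max_latency_ms and mbps >= min_mbps
    )

print(pick_slice(max_latency_ms=5, min_mbps=1))    # ['autonomous_car']
print(pick_slice(max_latency_ms=100, min_mbps=20)) # ['smartphones', 'video_delivery']
```

The point of the sketch is that one physical network publishes many logical profiles, and each application binds only to the profile it actually needs.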



Almost by definition, the 5G network should provide many more ways for “access providers,” “transport providers” or “data center providers” to create additional value and distinctiveness.

There should be many new possibilities for combining computing and application assets with connectivity functions and application capabilities and features.  

All of that will provide many new ways to “move up the value chain” and craft distinctive, non-commodity positions within the market.



The point is that 5G presupposes a next generation network that is programmable in an entirely new way, built from inception to support a wide range of potential new business cases.

It is breath-taking stuff.

New Spanish Copyright Law Reduces Traffic, Revenue for Content Producers

Rule number one: you get less buying of any desired product when the price is raised.


A corollary to rule one: anything that restricts a consumer’s finding out about a desired product will lessen consumption.


So you will not be surprised to learn that a Spanish law limiting news aggregator use of content snippets has reduced content provider traffic and business prospects, according to a study conducted by NERA Economic Consulting on behalf of the Spanish Association of Publishers of Periodicals.


Article 32.2 of the Spanish Copyright Act mandated payment of a copyright fee by online news aggregators to publishers for linking their content within their aggregation services.


The study says the new ban on use of snippets by news aggregators resulted in less variety of content for consumers and an increase in news search time estimated, in the short term, to represent approximately 1,750 million euros per year.

Content viewership dropped 14 percent. As a result, content producers lost about 10 million euros worth of revenue per year.

You might argue this is a case of unintended consequences. That probably is formally correct: the law did not have the stated purpose of harming Spanish content creators. But many would have predicted precisely the harm that NERA says happened.




For content providers, there are smaller audiences and correspondingly lower advertising revenues.


On average, traffic dropped 16 percent and small and emerging media were most affected.

Content providers lost an estimated seven million euros per year.

One Clear Way Net Neutrality Rules are Driving Up ISP Costs

One practical concern some had about reclassifying Internet access as a common carrier service was that there would be an explosion of complaints, leading to higher overhead costs simply to deal with the administrative compliance.


That appears to be happening.


In just the first month that net neutrality regulations have been in effect, consumers have filed about 2,000 complaints to the Federal Communications Commission against Comcast, AT&T, and other Internet service providers, according to records obtained by National Journal.


Officials in the FCC's Enforcement Bureau can choose whether to investigate any of the complaints for further action or penalties.


Justified or not, Internet service providers must respond to both the FCC and the complaining consumer within 30 days.

Harold Feld, the senior vice president of Public Knowledge, a consumer-advocacy group, acknowledged that most of the complaints probably do not identify real violations of the FCC's net-neutrality rules.

ISPs will have to hope the torrent of new paperwork does not become an avalanche. All the costs of responding will be borne by legal and support staffs. And all those costs will have to be recovered from ISP subscribers and business partners.

You can read the complaints, if you like.

Why Consumers Won't Pay Much for Any Single Show

There is a logical reason why most consumers likely consider the fair price for any single piece of content (a story, a song, a picture, a video, a TV show) normally purchased as part of a subscription service to be a fairly low number.

Consumers “logically” compared the cost of a compact disc, with 10 to 12 songs, to the download of a single song. It wasn’t very hard to figure out a “fair price” of about $1 to $1.30 each.

Much the same sort of logic is likely to play out as TV content services are unbundled. Consider a single channel with a wholesale price to the distributor of less than $1 a month. Triple that number, to cover network, marketing and other overhead costs, plus profit and taxes, to $3 a month.

Consumers are going to conclude that any single show should not cost very much.

Nobody will be able to rationally figure out how much attributed cost is the “right number.” And delivery costs in a fully unbundled market are likely to be much higher, actually.

It won’t matter. Consumers simply are going to assume that if a whole channel costs “X” for a whole month, then a single show should not cost much at all.

X divided by 30 will be the assumed “all you want to watch for a day” cost. Then assume 48 discrete shows in a day, and the assumed cost for one show is going to be X/30/48. It will be a small number.

The reality of costs in a new unbundled environment won’t matter much.
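That back-of-the-envelope arithmetic is easy to run. A quick sketch, using the $3-per-month retail channel price assumed above:

```python
# Implied "fair price" of one show, by the consumer logic sketched above.
monthly_channel_price = 3.00  # dollars; the $3/month figure from the text
days_per_month = 30
shows_per_day = 48            # assumed count of discrete shows in a day

per_day = monthly_channel_price / days_per_month  # $0.10 per day
per_show = per_day / shows_per_day                # about $0.002 per show

print(f"Implied per-day price: ${per_day:.2f}")
print(f"Implied per-show price: ${per_show:.4f}")
```

A fifth of a cent per show: whatever the true attributed costs turn out to be, that is the anchor consumers will reason from.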

Based on its Churn Rates, Netflix is Way Ahead of its Competition

Are churn rates for the leading over-the-top video services high or low? The answer, of course, is always “in relation to what?”

Netflix subscriber churn, for example, might be considered quite low for a consumer service.

Over the last year, four percent of U.S. broadband households cancelled their Netflix service, representing nearly nine percent of Netflix’s current subscriber base. That works out to monthly churn of roughly 0.75 percent, a rate lower than that experienced by AT&T and Verizon, for example.

Hulu Plus customers churn at higher rates. Over a year, Hulu Plus churn is in the 50 percent range, or about four percent a month, which most observers would consider high churn.

Other services fare even worse, with churn rates as high as 60 percent annually, or about five percent a month.

Currently, 85 percent of U.S. broadband households subscribe to a linear video subscription service, while 59 percent of U.S. broadband households have an over the top video subscription.

Of those who use OTT video, a bit fewer than half use two or more OTT services.

Half a decade ago, for example, mobile subscriber churn was far higher than it is today. About 2010, monthly churn of two percent to three percent among the four largest U.S. mobile service providers was not unusual.
(Chart: 2009 Q4 subscriber churn)
And churn has consequences, tending to decrease the average lifetime value of any account.
(Chart: 2009 Q4 average subscriber months)
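The lifetime-value point follows directly from the churn rate: expected customer lifetime is roughly the reciprocal of monthly churn. A quick sketch, using the annual churn figures cited above; the compounding conversion is a standard approximation, not from the source:

```python
# Convert annual churn to a monthly rate (with compounding), then
# estimate expected customer lifetime as 1 / monthly_churn.
annual_churn = {          # fraction of subscribers lost per year
    "Netflix": 0.09,      # ~9% of the base, per the text
    "Hulu Plus": 0.50,    # ~50%, per the text
}

for service, annual in annual_churn.items():
    monthly = 1 - (1 - annual) ** (1 / 12)   # compounded monthly rate
    lifetime_months = 1 / monthly            # expected subscriber lifetime
    print(f"{service}: {monthly:.1%}/month, ~{lifetime_months:.0f} months lifetime")
```

On those assumptions, a Netflix-style nine percent annual churn implies an average subscriber life of more than a decade, while 50 percent annual churn implies well under two years.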

High churn rates might be explained in different ways for apps (especially free or low cost apps), as compared to low-cost or high-cost subscription services. Very high rates of churn--well beyond three to five percent a month--might be normal for mobile apps, for example, where consumers sample lots of apps and find most not to have long-term value, leading to high churn rates.

Churn also tends to be higher for new services consumers are unfamiliar with, since there is lots of sampling.

Churn is a more-serious problem for established products that consumers understand well. High churn for those sorts of products tends to indicate some consumer issue with the product.

The point is that Netflix seems to be a service with very low churn, even when one might otherwise predict high levels of churn.

Netflix--especially the streaming product--is a new type of product, competing against many other functional substitutes.

Ultra-Low Latency Not Needed by All, or Even Most, Apps, But Mobile Networks Will Be Designed to Support It

One often hears it said that one-millisecond latency is a useful or necessary attribute of network performance for a relatively small number of applications. That is true. 

But the applications requiring such responsiveness are important because lives are at stake, as in the case of autonomous vehicles, or for the realism of an experience, as for augmented reality.

Networks, though, are designed and dimensioned for the most-stringent apps, even when many--or even most--other apps do not require that level of performance.

So future fifth generation networks will be designed to support the most-stringent apps, in terms of latency and bandwidth, despite being a case of "overkill" for most other apps.

(Chart: bandwidth and latency requirements of potential 5G use cases. Source: GSMA Intelligence)

92% of 2014 Mobile Device Models Produced by Asian Suppliers

(Chart: Asia Pacific set to fuel growth in the app economy)

Why this matters: if you are a policymaker or regulator in any country where you believe mobile devices or mobile apps represent a growth industry, you will feel pressure to encourage the device or app segment of the mobile and Internet ecosystem, possibly even to the detriment of the Internet service provider or other segments of the ecosystem.

If, on the other hand, one believes a domestic device or app industry is unlikely to develop, it is rational to take steps to encourage the ISP segment of the ecosystem. 

The device and apps business tends to develop on a "winner take all" pattern.

If so,  the tasks in most countries will tend to center on supporting ISPs and access availability, there being little possibility of fostering a globally-significant device or apps industry. 



Consumer Segment a Bright Spot for BT Revenue

BT faces many issues in common with other European Community telecom service providers, with flat to declining revenue growth being the most salient issue.

BT also faces the possible divestiture of its wholesale Openreach business, representing about 28 percent of total revenue. BT obviously considers the ownership of that chunk of the business important, in part because it represents a stable revenue segment.

Virtually all other revenue drivers have fallen since the 2009/2010 financial year, with the exception of the consumer segment, where sales of video subscriptions are growing.

The consumer segment represents about 24 percent of total revenue.

Mobile net additions and high speed access also show growth.

In the most-recent quarter ending June 30, 2015, revenue was down about two percent.

Global Services dropped about six percent, while BT Business dropped about two percent.

The consumer segment grew three percent while wholesale shrank about one percent.

Openreach was essentially flat.

Wednesday, July 29, 2015

Why 15% Non-Internet User Base is Not a Long Term Problem

(Chart: offline population has declined substantially since 2000)

A stubborn 15 percent of U.S. adults do not use the Internet, according to the Pew Research Center.


The size of this group has changed little over the past three years, despite recent government and social service programs to encourage internet adoption.


Two observations: perhaps we sometimes forget that people have the right to exercise lawful choice. If people do not want to use the Internet, as much as we might think they “should,” they have the right to refuse, for any reason.


At some point, in any business or endeavor, the last increment of progress is so costly it is rational to consider not bothering to achieve it, and allocating effort and resources to some other important problem where the input will achieve greater results.


The other observation is that we have seen such technology laggard behavior before. Such problems fix themselves, assuming the innovation is widely perceived to have value.


(Chart: who's not online?)

The point is that the 15 percent of people who do not use the Internet may not wish to use it, and that the number of non-users likely will fall to insignificance over a relatively short period of time, if only because “using the Internet” will take so many forms that people may not even recognize them.


Traditionally, some non-users have said they simply do not want to use the Internet.
A 2013 Pew Research survey found 34 percent of non-users did not go online because they had no interest in doing so or did not think the internet was relevant to their lives.


Some 32 percent of non-internet users said the internet was too difficult to use.


Cost was also a barrier for some adults who were offline: 19 percent cited the expense of internet service or owning a computer.


One might argue that “owning a computer” will be a minimal problem in the future, since all smartphones will provide that function, as will tablets. With the spread of Wi-Fi, private and public, we also can reasonably assume that the cost of access will cease to be a real problem.


The latest Pew Research analysis continues to show that internet non-adoption is correlated to a number of demographic variables, including age, educational attainment, household income, race and ethnicity, and community type, as most would expect.


Seniors are the group most likely to say they never go online. About 39 percent of people 65 and older do not use the internet, compared with only three percent of 18- to 29-year-olds.


Over time, that implies non-use will be a status three percent or fewer people actually have. We can assume that free Wi-Fi will be plentiful enough that the cost of access will not be a real barrier. Nor will devices, as smartphone adoption will be nearly universal.




Rural Americans are about twice as likely as those who live in urban or suburban settings to “never” use the internet.


Racial and ethnic differences are also evident. One in five blacks and 18 percent of Hispanics do not use the internet, compared with 14 percent of whites and only five percent of English-speaking Asian-Americans, the racial or ethnic group least likely to be offline.


Despite some groups having persistently lower rates of internet adoption, the vast majority of Americans are online.


Over time, the offline population has been shrinking, and for some groups that change has been especially dramatic.


For example, 86 percent of adults 65 and older did not go online in 2000; today that figure has been cut in half.


And among those without a high school diploma, the share not using the internet dropped from 81 percent to 33 percent in the same time period.

The other issue is whether mobile apps “count” as Internet use, since many population groups over-index for smartphone usage.

The point is that, over time, the percentage of people who do not use the Internet will naturally drop nearly to zero. Chasing the last increment of change, when change will come even if we do almost nothing, might not be the best use of resources and effort.  

"Tokens" are the New "FLOPS," "MIPS" or "Gbps"

Modern computing has some virtually-universal reference metrics. For Gemini 1.5 and other large language models, tokens are a basic measure...