Friday, January 9, 2015

With Shared, Licensed, Unlicensed Spectrum, Who will Not be Able to Become a "Mobile" Service Provider?

Since 2010, there has been movement toward freeing up 500 MHz of spectrum, now primarily under the control of federal government agencies, for potential use by private sector users on a shared basis.

Such shared spectrum access would allow existing licensees to retain primary use of their licensed frequencies, but also allow commercial users access when primary licensees do not need the capacity.

Such spectrum sharing approaches are the newest idea in spectrum allocation policies that primarily have relied on exclusive licenses, and partly on unlicensed approaches.

The big innovation is the concept that a shared access system will deliver results faster, and at lower cost, than clearing spectrum, moving licensed users to new bands, and then allowing new uses of the cleared spectrum.

Current thinking is that existing licensed users would have priority, while other users could use spectrum when it was available and not needed by primary licensees. Among the ideas for allowing such access is that perhaps new users could pay for secondary rights, while fully unlicensed use would be possible for users willing to forgo quality of service guarantees.

At the moment, the National Telecommunications and Information Administration (NTIA) is working on a plan that would make about 100 megahertz of spectrum available for shared small cell use in the 3.5 GHz band currently used primarily for military radar systems.

NTIA also is evaluating additional unlicensed use in the 5 GHz band.

The plan has not been universally well received. Traditional telecom, cable TV and satellite firms prefer the exclusive licensee approach, for reasons of quality of service control, and, some would say, for reasons of promoting communications spectrum scarcity.

Some have noted that signal propagation issues in the 3.5-GHz and other similar bands would likely mean that shared spectrum is most helpful in urban areas, where small cells are practical.

But that might suit some mobile service providers just fine. Iliad’s Free Mobile relies on Wi-Fi access where it can, as a way of reducing the cost of sourcing capacity from other mobile operators. Republic Wireless and Scratch Wireless do the same.

Comcast is deploying Wi-Fi hotspots as part of its consumer fixed network broadband service, in an effort to create a huge footprint of potential public Wi-Fi hotspots that likewise could be used to reduce the cost of creating a mobile virtual network operator operation.

Other ISPs with fixed network assets, including Google Fiber, might be able to use such shared spectrum assets in similar ways, to reduce the cost of mobile service that relies on wholesale-sourced facilities.

Some have argued that a separate Google initiative to supply Wi-Fi gear for businesses, and centrally manage all the routers, could play an infrastructure role as well.

The point is that new ways of combining licensed and unlicensed; exclusive and shared; carrier, enterprise and consumer network assets are coming. All of that is going to create new possibilities for varieties of Internet access and mobile service.

That would be the fulfillment of a hope that has been raised for decades, namely that it will be possible for any entity to become a mobile service provider. 

In an earlier iteration, sports brands (ESPN), family brands (Disney), electronics brands (Best Buy) and others experimented with custom mobile service provider brands. In other markets, supermarkets have considered offering their own service, and Walmart already does so.

With many more federated public Wi-Fi networks, much more spectrum and new contenders with fixed network assets, the possibilities will reach a new level, at lower retail price points than previously possible.

If most mobile device use occurs in the home, then some believe new mobile providers such as Comcast could operate as MVNOs with 30 percent lower retail costs.
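The claimed savings come down to offload arithmetic: the less traffic an MVNO must buy wholesale, the lower its blended cost. Here is a minimal sketch of that arithmetic; all of the numbers (the wholesale rate, the offload share) are hypothetical assumptions for illustration, not figures from any operator.

```python
# Hypothetical illustration of MVNO offload economics.
# The wholesale rate and Wi-Fi offload share below are assumptions, not real figures.
def blended_cost_per_gb(wholesale_per_gb, wifi_share, wifi_cost_per_gb=0.0):
    """Blended cost per GB when some share of traffic rides owned Wi-Fi."""
    return wholesale_per_gb * (1 - wifi_share) + wifi_cost_per_gb * wifi_share

full_wholesale = blended_cost_per_gb(5.0, wifi_share=0.0)    # all traffic bought wholesale
mostly_offloaded = blended_cost_per_gb(5.0, wifi_share=0.7)  # 70% rides Wi-Fi at ~zero marginal cost
savings = 1 - mostly_offloaded / full_wholesale
print(round(savings, 2))  # 0.7: offloading 70% of traffic cuts wholesale spend ~70%
```

The real-world savings are smaller than the raw offload share, of course, since Wi-Fi capacity is not actually free to build and operate.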


AT&T Introduces "Rollover Data"

AT&T Mobility has introduced "Rollover Data," a program that allows customers on Mobile Share Value plans to roll over unused data in one billing cycle, for use in the next cycle, at no extra charge. 

If you have a 15GB AT&T Mobile Share Value plan and only use 10GB, the remaining 5GB (the Rollover Data balance) can be used the next month (a total of 20GB). 

There’s no cap on the amount of unused plan data within a given month that’s eligible for rollover, but one month's rollover data lasts only through the next month.
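The rule as described is simple enough to sketch in a few lines; this is only an illustration of the arithmetic above, not AT&T's actual billing logic:

```python
# Sketch of the rollover rule described above: unused plan data from one
# billing cycle is usable in the next cycle, then expires.
def next_cycle_allowance(plan_gb, used_gb):
    """Total data available next cycle: the plan amount plus this cycle's unused data."""
    rollover = max(plan_gb - used_gb, 0)  # no negative balance if you ran over
    return plan_gb + rollover

print(next_cycle_allowance(15, 10))  # the example above: 15 GB plan, 10 GB used -> 20
print(next_cycle_allowance(15, 15))  # nothing unused -> back to the base 15
```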

Some of you might remember that Cingular, the brand AT&T once used, offered a rollover feature for voice minutes that, as I recall, expired if unused after about a year. 

After T-Mobile US launched its Data Stash plan, it was inevitable that there would be a response. 

The difference, at least for the moment, is that unused data banked under Data Stash remains available for a full year.




Internet Access Speed Growth is Linear, but in a Moore's Law Way

You might not know it from the stream of quarterly updates on “average” Internet connection speeds around the world, but a long history of speed advances confirms that consumer Internet access grows about as fast as Moore’s Law would suggest.

So even if it seems very little is happening, quite a lot is happening, all the time. You couldn’t tell that from quarterly or even annual changes in typical access speeds.

In the third quarter of 2014, for example, global average mobile Internet connection speeds dropped 2.8 percent to 4.5 Mbps, while global average peak connection speeds fell 2.3 percent to 24.8 Mbps, according to Akamai.

On an annual basis, average mobile Internet connection speeds globally were up 25 percent from the third quarter of 2013, though. That implies a doubling of speed roughly every three years.

Most people would likely agree that usage grows faster than that. Based on traffic data collected by Ericsson, the volume of mobile data traffic grew by approximately 10 percent between the second and third quarters of 2014, implying annual growth of more than 40 percent. But that’s usage, not average speed.
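The compounding arithmetic in the two paragraphs above is easy to check with a few lines:

```python
import math

# Doubling time implied by 25 percent annual speed growth.
annual_speed_growth = 0.25
doubling_time_years = math.log(2) / math.log(1 + annual_speed_growth)
print(round(doubling_time_years, 1))  # 3.1 years to double at 25 percent per year

# Annualized growth implied by ~10 percent quarter-over-quarter traffic growth.
quarterly_traffic_growth = 0.10
annualized = (1 + quarterly_traffic_growth) ** 4 - 1
print(round(annualized, 3))  # 0.464, i.e. more than 40 percent per year
```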

The global average fixed Internet connection speed saw a slight decline in the third quarter of 2014, dropping 2.8 percent to 4.5 Mbps. Global average peak connection speeds declined slightly in the third quarter, dropping 2.3 percent to 24.8 Mbps.

Those sorts of figures are hard to square with the notion that typical speed doubles about every 18 months to two years.

Logic seemingly would suggest that is unlikely. Communications networks--especially those of the fixed variety--are expensive construction projects. Such networks also are subject to local, state and national regulations, interest rates, economic conditions, changes in tax laws and changes in demand curves, all of which should slow rates of change, compared to rates of change for semiconductor products that follow Moore’s Law.

Shockingly, then, some studies have shown that even on twisted-pair copper telephone networks, speed doubled about every 1.9 years.

Other studies show similar results: some cite Edholm's Law, which holds that Internet access bandwidth increases as Moore’s Law would predict.

Of course, experts have argued for decades about whether Moore’s Law would end. That debate still hasn’t been settled. But some argue that communications bandwidth would continue to improve on a Moore’s Law pattern, even if classic Moore’s Law slowed or flattened.

That’s a foundational assumption. If access bandwidth really does grow at Moore’s Law rates, then gigabit access networks are inevitable, no matter how crazy that seems.

But that is going to be obvious first in the developed regions that have been at it the longest: North America, some portions of Asia (Japan, Korea, Taiwan, Singapore) and parts of Europe.

Other regions with tougher economics might still be on the curve, but will start at slower speeds, as did Internet access in the more-developed regions.

The global broadband adoption rate (at least 4 Mbps) edged up slightly in the third quarter, gaining one percent and growing to 60 percent.

The global adoption rate of access at speeds of at least 10 Mbps was up 22 percent in the third quarter, following 65 percent increases seen in both the first and second quarters of 2014.

South Korea had the highest average connection speed at 25.3 Mbps, but Hong Kong again had the highest average peak connection speed at 84.6 Mbps.

Demand is going to grow as well, given both streaming popularity and new video formats, including 4K video. With 4K adaptive bitrate streams generally requiring between 10 Mbps and 20 Mbps of bandwidth, markets where 4K streaming is widespread will face new investment requirements.
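A quick back-of-envelope conversion shows what those bitrates mean for data consumption per viewing hour:

```python
# Back-of-envelope data consumption for the 4K bitrates cited above.
def gb_per_hour(mbps):
    """Convert a steady stream rate in Mbps to gigabytes per hour (decimal GB)."""
    return mbps * 3600 / 8 / 1000  # seconds per hour, bits to bytes, MB to GB

print(gb_per_hour(10))  # 4.5 GB per hour at the low end of the 4K range
print(gb_per_hour(20))  # 9.0 GB per hour at the high end
```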

Though it seems improbable, and even when quarterly or annual statistics do not fully show the progress, Internet access speeds do grow about as fast as Moore’s Law would suggest. It’s astounding, really.

Video Autoplay Drives 60% to 200% Facebook Data Consumption Growth

Autoplay video is having a huge impact on Internet access networks, and also likely is going to shape end user behavior in ways beyond new concerns about uncontrollable bandwidth usage.

Over the 12 months preceding September 2014, Facebook traffic increased by 60 percent on the mobile network, and by over 200 percent on the fixed network, driven mainly by the addition of autoplay video to the Facebook feed, according to Sandvine.

Autoplay on Instagram and many other sites probably is having similar effects, namely increasing data consumption in an involuntary way.  

Users probably are starting to shut off audio when working in quiet places, or all the time. People might start reducing use of, or avoiding, sites that are aggressive about autoplay video. Third-party apps that disable such features are likely to become more popular.

Here are some ways to disable autoplay in your browser.

Thursday, January 8, 2015

Did an Understanding of Moore's Law "Save" the 1980s Cable TV Business?

Did an understanding of Moore's Law "save" the U.S. cable TV industry in the mid-1980s, in the same way Moore's Law enabled Microsoft and Netflix?

Maybe so.

Most will agree that it matters greatly whether Moore’s Law continues at historic rates or whether bandwidth advances continue at historic rates. 

The reason is that so many businesses implicitly or explicitly embed such assumptions into business models and expected or potential rates of growth.

And the continuation of that trend has been highly contestable. For three decades, observers have predicted that the rate of improvement simply could not continue, as we would reach the limits of our ability to etch smaller pathways onto silicon substrates. Optimists have countered that we would begin working with different substrates.

Stubbornly, Moore’s Law has, so far, defied projections. Over the past three decades, big businesses, and big bets, have been contingent on Moore’s Law.

Perhaps the biggest early bet was made by a few in the U.S. cable TV industry.

Way back in the 1980s, proposed high definition TV standards threatened to choke off growth of the U.S. cable TV industry, for example.

Cable was still a smallish industry, with perhaps $30 billion in annual revenues, and the initial standards proposed by Japanese electronics interests and over-the-air broadcasters would have severely disrupted its business model.

At that time, Japanese suppliers dominated and led the TV set business, and cable operators were struggling with the cost of supporting complicated in-home installations of cable, plus TVs, off-air antenna service, videocassette recorders and multiple remotes.

All that complexity generated consumer unhappiness.

At the time, the proposed HDTV standards were partly analog, partly digital and might have required about 45 Mbps of bandwidth per channel, at a time when cable access networks were set up to use 6-MHz channels.

In addition to requiring completely new electronics across the network, some also suggested the initial standard would not last more than five to 10 years, requiring yet another “rip and replace” investment cycle at the end of that period.

Astute cable TV industry executives knew they could not easily afford to make major upgrades twice within 15 years. Consumer electronics suppliers would win, because they could expect two waves of device replacement (consumer and industrial) within 15 years.

Broadcasters might also have reasonably assumed they would gain strategic advantages over cable TV, then seen as a direct competitor. Also, given the growing trend to greater realism in TV image quality, the quality of the existing product would be enhanced, at less cost per TV station than a cable operation would face.

Enter Moore’s Law. Few experts at the time believed it was possible to move directly to an “all-digital” form of HDTV, in one step, and yet retain the standard channelization. The reason was simple enough.

Decoding such a signal, massively compressed and processed, would require the equivalent of a mainframe computer in the home.

The issue, though, was whether Moore’s Law actually would continue to improve at historic rates, and therefore provide affordable mainframe computing capabilities. Most believed that unlikely. But a few did bet on Moore’s Law continuing, which would make possible a consumer decoder at a price that, while significant, would still allow cable operators to support HDTV.

To make a longish story short, Moore’s Law remained intact, and it indeed was possible to compress a 50-Mbps “raw” data stream into 6 MHz of bandwidth.

Some would say, in that instance, Moore’s Law “saved” the economics of the whole U.S. cable TV industry.

Some also would note that Reed Hastings of Netflix made a bet on Moore’s Law as well. Earlier, some would argue, Microsoft was built on an understanding of the implications of Moore’s Law. What a business could look like if computing or bandwidth were free is the key question.

For Gates, the key assumption was that Moore’s Law would make the cost of computing hardware a non-problem for a software supplier, and also would create huge new markets for computers.

For Hastings, Moore’s Law, as embedded in Internet access prices, would make possible streaming services even lower in cost than mailing DVDs using the postal service.

The point is that, sometimes, a big forecast on a key trend can enable a whole new industry or business, or perhaps save a whole industry or business.

Most other attempts to quantify the future also are subject to uncertainty. So forecasting errors always are possible. In fact, they might be the normal state of affairs.

Philip Tetlock's Expert Political Judgment: How Good Is It? How Can We Know? found that “specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study.”

Sam L. Savage’s The Flaw of Averages points out that plans based on average assumptions are wrong on average, because uncertainty in life is much more pronounced than people generally assume to be the case.

Nassim Taleb’s The Black Swan likewise deals with the powerful impact of unpredictable and unexpected developments.

In fact, some would go so far as to say that forecasts always are wrong, to some degree. That isn’t necessarily a bad thing, as minor fluctuations along a predicted trend line nearly always happen. That is true of most economic forecasting, some argue.

That doesn't mean people will stop listening to forecasts, or that experts will fail to make them. Occasionally, though, big bets are made based on such forecasts, no matter how inaccurate forecasts might be.

Sprint, T-Mobile US Tout Subscriber Growth: Will AT&T and Verizon Tout ARPU?

Average revenue per user, or average revenue per account are important measures of mobile service provider operational performance. The total number of accounts, and the composition of accounts (prepaid or postpaid; tablet or phone) also are important.

With a mobile marketing war going on in the U.S. market, the value of each metric might change. At a high level, Sprint and T-Mobile US are likely to tout subscriber growth. AT&T and Verizon are likely to soon emphasize ARPU or ARPA.

The reasons are prosaic as well as strategic. No company likes to report negative numbers, and eventually, it is possible AT&T or Verizon or both might actually have to report negative account figures. It hasn’t happened yet, but it is possible.

T-Mobile US and Sprint, on the other hand, are gambling on a furious pursuit of account growth, at the expense of ARPU or ARPA. Executives at those firms will be no happier to report negative numbers for ARPU or ARPA, and so will tout account growth.

Sprint preannounced net account growth in the calendar fourth quarter (Sprint’s third fiscal quarter) of 2014.

During the quarter, Sprint platform net additions totaled 967,000 including postpaid net additions of 30,000, prepaid net additions of 410,000 and wholesale net additions of 527,000. That reversed a trend of account losses.

“The growth in postpaid customers was driven by the highest number of postpaid gross additions in three years, and postpaid phone gross additions increased 20 percent for the quarter year-over-year,” Sprint said. “In addition, the percentage of prime customers was the highest on record.”

Non-prime customers are those subject to involuntary churn, being disconnected for non-payment of their bills. In fact, Sprint's financial performance is blamed for a 10-percent drop in profits at Sprint parent SoftBank.

T-Mobile US, for its part, likewise touted high account growth for its fourth quarter of 2014.

In the fourth quarter, T-Mobile US added 2.1 million net customers, the seventh quarter in a row that T-Mobile US has generated more than one million net customer additions.

Branded postpaid net customer additions were 1.3 million, a 47 percent improvement compared to the fourth quarter of 2013, T-Mobile US said.   

Branded postpaid phone net additions were over one million, implying T-Mobile US activated about 300,000 tablet connections.   

For the full-year 2014, the Company reported branded postpaid net customer additions of 4.9 million and 8.3 million net total customers, an 89 percent increase from the prior year.

Killer App for LTE Might be Entertainment Video

What is the “killer app” for fourth generation Long Term Evolution networks, if in fact there is such a killer app? The easy answer is to reiterate that there is no killer app for 4G or 3G, only many apps and services that contribute to the total value, and of course, faster speeds.


That answer, even if arguably correct, is not granular enough to be of any value to practitioners and companies in the ecosystem. If faster speed alone were the key driver, and if retail pricing and packaging allowed mobile to become a viable substitute for fixed network access, a variety of revenue opportunities based on “substitution” would immediately become relevant.


If mobile video, specifically, were to become a lead app, that would imply there is a new opportunity for mobile streaming services. The same is true if LTE creates a better mobile gaming experience (latency performance, not just speed).


Other opportunities arise if cloud-based apps, in general, are widely accessible on mobile devices.


But it always has seemed as if entertainment video was likely to be the key application distinguishing 4G from 3G. A study by the Office of Communications (Ofcom), the U.K. communications regulator, suggests that, in most countries, 4G Long Term Evolution networks lead to more video streaming than other mobile networks do.


More video consumption also was among the predicted benefits of 4G networks, according to the conclusions of a study prepared for Ofcom.


Among U.S. internet users polled, 50 percent of respondents who used 4G streamed or downloaded mobile video, according to a study by eMarketer.


About 32 percent of non-4G users reported they downloaded or streamed video. And new smartphone users on 4G networks say video is among the new apps they use most.


When a July 2014 Deloitte study asked subscribers in the US about which activities they conducted more often on their mobile networks since signing up for 4G, 33 percent said they watched more video.


Another example is skyrocketing video on Facebook, an app used exclusively on a mobile phone by about 30 percent of Facebook users. About 78 percent of Facebook users use the app on their mobile devices at least some of the time.  
So a shift towards visual content on Facebook, especially video, automatically means more usage on mobile networks.  


In one year, the number of Facebook video posts per person has increased 75 percent globally and 94 percent in the United States.


Globally, the amount of video from people and brands in News Feed has increased 3.6 times year-over-year.


Since June 2014, Facebook has averaged more than one billion video views every day, the company says.  


On average, more than 50 percent of people visiting Facebook in the United States every day watch at least one video daily, and 76 percent of people in the United States who use Facebook say they tend to discover the videos they watch on Facebook.


A Sandvine report shows that Facebook now accounts for 19.43 percent of all smartphone data consumed in North America.


Facebook leads in “upstream” data, accounting for 22.4 percent of that traffic, and is behind only YouTube in “downstream” data.


YouTube accounts for 19.8 percent of that traffic, compared to Facebook’s 19 percent.


But Facebook has a 19.4 percent share of aggregate upstream and downstream data, exceeding YouTube’s 18 percent share.

Facebook-owned Instagram also accounts for an additional 2.6 percent of upstream data, 4.5 percent of downstream data, and 4.3 percent of total smartphone data consumption.

Wednesday, January 7, 2015

Facebook Video Traffic Grows 94% in U.S. Market

Facebook says it increasingly is seeing a shift towards visual content on Facebook, especially video.

In one year, the number of video posts per person has increased 75 percent globally and 94 percent in the United States.

Globally, the amount of video from people and brands in News Feed has increased 3.6 times year-over-year.

Since June 2014, Facebook has averaged more than one billion video views every day, the company says.  

On average, more than 50 percent of people visiting Facebook in the United States every day watch at least one video daily, and 76 percent of people in the United States who use Facebook say they tend to discover the videos they watch on Facebook.

A Sandvine report shows that Facebook now accounts for 19.43 percent of all smartphone data consumed in North America.

Facebook leads in “upstream” data, accounting for 22.4 percent of that traffic, and is behind only YouTube in “downstream” data.

YouTube accounts for 19.8 percent of that traffic, compared to Facebook’s 19 percent.

But Facebook has a 19.4 percent share of aggregate upstream and downstream data, exceeding YouTube’s 18 percent share.

Facebook-owned Instagram also accounts for an additional 2.6 percent of upstream data, 4.5 percent of downstream data, and 4.3 percent of total smartphone data consumption.

Major Innovations--Even IoT--Take Decades to Produce Clear Productivity Increases

Vodafone notes it has been 30 years since the first mobile call was carried on the Vodafone network in January 1985. About that time, Vodafone forecast it would sell about a million subscriptions. BT predicted the market at only about 500,000 subscriptions.

In 1995, after a decade of availability, U.K. mobile adoption had reached seven percent. By 1998 adoption reached 25 percent. By 1999 adoption had reached 46 percent. Just five years later, adoption exceeded 100 percent.

We might argue about when mobility became an essential service for consumers or businesses. But we might all agree that point has been reached, and that the bigger question is how much more vital mobility will become, and how it displaces older modes of communication, computing, shopping, working and learning.

Mobile usage in the U.S. market followed a similar trajectory, with 340,000 subscribers in 1985, growing to 33.8 million by 1995. By 2005, mobile adoption had grown exponentially to about 208 million accounts.

Those figures hint at the perception of value by consumers and businesses. Growing abandonment of fixed line voice, greater volumes of mobile-initiated Internet sessions, use of websites, email and social media provide other bits of evidence about mobile’s perceived value.

But even looking only at ubiquity of usage, it took 20 years for mobility to become something “everybody” uses. It took 40 years for electrification to change productivity in measurable ways.

Keep that in mind when thinking about the “Internet of Things.” Despite the fact that U.S. businesses and organizations made huge investments in information technology in the 1980s, many would argue the benefits did not appear until much later in the 1990s.

Most of us likely instinctively believe that applying more computing and communications necessarily improves productivity, even when we can’t really measure the gains.

But investments do not always immediately translate into effective productivity results. This productivity paradox was apparent for much of the 1980s and 1990s, when one might have struggled to identify clear evidence of productivity gains from a rather massive investment in information technology.

Some would say the uncertainty covers a wider span of time, dating back to the 1970s and including even the “Internet” years from 2000 to the present.

Computing power in the U.S. economy increased by more than two orders of magnitude between 1970 and 1990, for example, yet productivity, especially in the service sector, stagnated.

And though it seems counter-intuitive, some argue the Internet has not clearly affected economy-wide productivity.

Whether that is simply because we cannot yet measure the changes is part of the debate. To be sure, it is hard to assign a value to activities that have no incremental cost, such as listening to a streamed song instead of buying a compact disc. And many of the potential productivity gains we might be seeing are of that sort.

The other issue is that revenue is decreasing, in many industries, even if most users and buyers would say value is much higher.

A productivity gain, by definition, means getting more output from less input.

In other words, it is “how” technology is used productively that counts, not the amount of raw computing power or connectivity. And there is good reason to believe that new technology does not reshape productivity until whole processes are changed. Automating typing is helpful. But changing the content production ecosystem arguably is where the biggest productivity gains come, for example.

Windows Phone Bigger Than Apple iOS?

source: Statista
Forecasting is a tough challenge, even if it is natural to wonder "what will happen this year, or beyond," and something we always hear lots about at the start of a year.  

Most of us would not wish to be reminded of how wrong our own forecasts have been. 

Consider projections about market share in the mobile market made just a couple of years ago, when Microsoft announced its purchase of Nokia's handset business in 2013 and made a renewed push into the mobile operating system business.

That move of course required analysts to make estimates of potential changes in the mobile operating system market. Many expected Windows Phone to become the third largest OS in terms of market share.

At least a few major firms forecast that Windows Phone would eclipse Apple iOS to take the second largest share of the OS market.

That is not to diminish the accuracy of other predictions made by those firms, or the overall accuracy of forecasts made by market researchers, weather experts, economists and executives in general. Still, forecasting is a tough business.

In 2011, Gartner expected that Microsoft would have nearly 20 percent OS market share by 2015.

But Windows Phone seems unable to grow beyond about three percent of the installed base, even if its recent share of sales was as high as 13 percent in Italy. In the same period in Italy, Android grew 68 percent, by way of comparison.

It's just hard to predict the future.
