Monday, January 12, 2015

Full Video Cord Cutting Remains Relatively Rare, So Far

Just 55 percent of Millennials use TVs as their primary video entertainment viewing platform, according to a research study sponsored by NATPE and the Consumer Electronics Association.

On the other hand, only about 2.7 percent of U.S. households (across all age groups) buy high speed Internet access but do not buy a linear video subscription service, a pattern considered an indication that those households rely solely on streaming video services.

As you might expect, Millennials (in this case defined as those 13 to 34) are significantly more likely to consume full-length TV programs from a streaming source (84 percent streamed in the past six months) than live TV programming at its original air time (54 percent), or recorded content from a DVR (33 percent).

About half of Millennials say they watch TV programming on a laptop, and for 19 percent, it’s their preferred TV viewing screen.  About 28 percent watch television on a tablet and 22 percent on a smartphone.  

While 90 percent of all TV viewers say they watch on a television set, about 85 percent of Millennials say they do so. But some would argue full video cord cutting remains relatively rare.

But challenges keep coming.

Dish Network, at long last, is launching a $20-a-month TV streaming service that notably includes ESPN and 11 other channels.

Sling TV is the first stand-alone streaming service that does not require a prior subscription to a linear video service, and importantly will include ESPN. That matters because, up to this point, live sports programming has been known as a “firewall” against greater cord cutting. Pre-recorded video is available from the major streaming services and from some of the networks directly.

But live sports have been unavailable in a streaming service. So ESPN will provide a major test of live sports exclusivity on linear subscription services, and the ability of live sports to glue subscribers to linear video.

Sling TV will offer live feeds of sports, news and scripted shows on TVs, computers and mobile devices, with programming  from ESPN, ESPN2, TNT, ABC Family, Food Network, HGTV and Travel Channel as part of the 12-channel package.

But, so far, no broadcast TV networks or the most-watched cable news channel, Fox News, are part of the package.

The $20 Sling TV base package features add-on packs with additional kids and news programming, available for $5 each.

Most observers would say a package including the major local TV networks plus sports and perhaps HBO is the likeliest candidate for a winning, but stripped-down, streaming package. So Sling TV will not be a full test of that thesis.

So far, linear video, as a business, has plateaued, and has only just begun what looks like the declining part of its life cycle. The real shift is yet to arrive.

Austrian Mobile Prices Rise Significantly After Consolidation

As much as consumers legitimately enjoy low costs, or ultra-low costs, for products they buy, it sometimes is possible that prices are too low. One might argue that prices are too low when they imperil the ability of an industry to keep supplying better versions of products consumers really want.

Ironically, the significant price increases seen in mobile markets that have recently consolidated from four suppliers to three might be instances where substantially higher prices are needed to allow an industry to keep investing.

Some might point to similar strategic issues in the energy industry, where low oil prices suppress the development of alternative energy supplies, as well as demand for all-electric or hybrid vehicles.

Global oil prices that are too low suppress the development of domestic energy supplies. So, sometimes, price increases arguably are a good thing for consumers, long term. Consider the Austrian mobile market, long among the most-competitive in Europe.

Tariffs for mobile service in Austria grew substantially in 2014, according to a report by the Arbeiterkammer Wien, the Vienna Chamber of Labor. The analysis deals only with posted tariffs, and does not appear to analyze which plans people buy most. But posted prices are higher.  

Retail plan costs increased 29 percent to 78 percent from September 2013 to December 2014, the group says. Of course, some would note that Austria has had ultra-competitive prices that are unsustainable over the long term.  

The average price rise for voice-only prepaid subscriber identity module (SIM) accounts was 29 percent. The average increase for postpaid voice-only contract plans was 56 percent.

Prices for mobile Internet plans for frequent users grew 58 percent. Higher mobile Internet access prices in Austria are a result of the market consolidation, some argue, and other studies also show a 60 percent increase in mobile Internet plan prices in 2013 in the Austrian market.

Some argue the Austrian market remains highly competitive, even after the increases, where it earlier had been “ultra competitive.”  

Also, some argue it is the presence of an attacker dedicated to competing on price that matters most, not whether the number of leading providers is three or four.

Tariffs for plans with voice and text, but no mobile Internet service, grew 78 percent, according to Arbeiterkammer Wien. In certain cases, rates had even doubled, the group said.

Some will say the price increases are a direct result of consolidation of the Austrian mobile market to three facilities-based providers, from four. Others argue prices will be better, longer term, as investments are made in facilities and spectrum.

Consumers will benefit from the consolidation of providers from four to three, a study prepared for the GSMA suggests. Other studies suggest consumer prices will increase when a market consolidates from four to three operators.

But as many as 16 new mobile virtual network operators are set to enter the Austrian market, potentially promising more pricing changes.

The Arbeiterkammer Wien said that registration fees have risen by about 40 percent at A1 Telekom Austria, T-Mobile Austria and 3 Austria.

Prices have been falling in key parts of Europe’s mobile industry for some time, and also revenue, so some might argue a sustainable industry requires higher prices. That is never a popular idea.

Friday, January 9, 2015

With Shared, Licensed, Unlicensed Spectrum, Who will Not be Able to Become a "Mobile" Service Provider?

Since 2010, there has been movement toward possibly freeing up 500 MHz of spectrum now primarily under the control of federal government agencies for potential use by private sector users on a shared basis.

Such shared spectrum access would allow existing licensees to retain primary use of their licensed frequencies, but also allow commercial users access when primary licensees do not need the capacity.

Such spectrum sharing approaches are the newest idea in spectrum allocation policies that primarily have relied on exclusive licenses, and partly on unlicensed approaches.

The big innovation is the concept that a shared access system will deliver results faster, at lower cost, than clearing spectrum, moving  licensed users to new bands, and then allowing new uses of cleared spectrum.

Current thinking is that current licensed users would have priority, while other users could use spectrum when it was available and not needed by primary licensees. Among the ideas for allowing such access is that perhaps new users could pay for secondary rights, while fully non-licensed use would be possible for users who do not have any quality of service guarantees.
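The tiered-priority idea described above can be sketched in a few lines of code. This is purely illustrative: the tier names and rules here are assumptions drawn from the description, not any adopted standard. Incumbents pre-empt everyone, paid "priority" users pre-empt opportunistic users, and opportunistic users get no quality-of-service guarantee.

```python
# Illustrative sketch of tiered shared-spectrum access; tier names and
# rules are assumptions based on the text, not an actual standard.
# Lower rank = higher priority.
TIERS = {"incumbent": 0, "priority": 1, "general": 2}

def may_transmit(requester, active_users):
    """A requester may transmit only if no currently active user
    holds a strictly higher-priority tier."""
    rank = TIERS[requester]
    return all(TIERS[user] >= rank for user in active_users)

# An opportunistic ("general") user must yield to a priority user:
print(may_transmit("general", ["priority"]))    # False
# A priority user may pre-empt general users, but not incumbents:
print(may_transmit("priority", ["general"]))    # True
print(may_transmit("priority", ["incumbent"]))  # False
```

A real system would also need sensing or a database to know which users are active in a band at a given place and time; this sketch only captures the precedence rule.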

At the moment, the National Telecommunications and Information Administration (NTIA) is working on a plan that would make about 100 megahertz of spectrum available for shared small cell use in the 3.5 GHz band currently used primarily for military radar systems.

NTIA also is evaluating additional unlicensed use in the 5 GHz band.

The plan has not been universally well received. Traditional telecom, cable TV and satellite firms prefer the exclusive licensee approach, for reasons of quality of service control, and, some would say, for reasons of promoting communications spectrum scarcity.

Some have noted that signal propagation issues in the 3.5-GHz and other similar bands would likely mean that shared spectrum is most helpful in urban areas, where small cells are practical.

But that might suit some mobile service providers just fine. Iliad’s Free Mobile relies on Wi-Fi access where it can, as a way of reducing the cost of sourcing capacity from other mobile operators. Republic Wireless and Scratch Wireless do the same.

Comcast is deploying Wi-Fi hotspots as part of its consumer fixed network broadband service, in an effort to create a huge footprint of potential public Wi-Fi hotspots that likewise could be used to reduce the cost of creating a mobile virtual network operator operation.

Other ISPs with fixed network assets, including Google Fiber, might be able to use such shared spectrum assets in similar ways, to reduce the cost of mobile service that relies on wholesale-sourced facilities.

Some have argued that a separate Google initiative to supply Wi-Fi gear for businesses, and centrally manage all the routers, could play an infrastructure role as well.

The point is that new ways of combining licensed and unlicensed; exclusive and shared; carrier, enterprise and consumer network assets are coming. All of that is going to create new possibilities for varieties of Internet access and mobile service.

That would be the fulfillment of a hope that has been raised for decades, namely that it will be possible for any entity to become a mobile service provider. 

In an earlier iteration, sports brands (ESPN), family brands (Disney), electronics brands (Best Buy) and others experimented with custom mobile service provider brands. In other markets, supermarkets have considered offering their own service, and Walmart already does so.

With many more federated public Wi-Fi networks, much more spectrum and new contenders holding fixed network assets, the possibilities will reach a new level, at lower retail price points than were possible before.

If most mobile device use occurs in the home, then some believe new mobile providers such as Comcast could operate as MVNOs with 30 percent lower retail costs.
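A back-of-envelope sketch shows how home Wi-Fi offload could translate into lower retail costs. Every number below is an illustrative assumption, not a figure from the studies cited:

```python
# Hypothetical per-gigabyte costs; all inputs are assumed values
# chosen only to illustrate the offload arithmetic.
home_wifi_share = 0.6          # assumed share of usage carried on own Wi-Fi
wifi_cost_per_gb = 2.00        # assumed marginal cost on the fixed network
wholesale_cost_per_gb = 4.00   # assumed MVNO wholesale rate for cellular data

blended_cost = (home_wifi_share * wifi_cost_per_gb
                + (1 - home_wifi_share) * wholesale_cost_per_gb)
savings = 1 - blended_cost / wholesale_cost_per_gb

print(round(blended_cost, 2))  # 2.8
print(round(savings, 2))       # 0.3, i.e. about 30 percent lower cost
```

The point of the sketch is the structure, not the inputs: the more usage an MVNO can carry on facilities it already owns, the less it pays at wholesale rates, and the lower its sustainable retail price.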


AT&T Introduces "Rollover Data"

AT&T Mobility has introduced "Rollover Data," a program that allows customers on Mobile Share Value plans to roll over unused data in one billing cycle, for use in the next cycle, at no extra charge. 

If you have a 15GB AT&T Mobile Share Value plan and only use 10GB, the remaining 5GB (the Rollover Data balance) can be used the next month (a total of 20GB). 

There’s no cap on the amount of unused plan data within a given month that’s eligible for rollover, but one month's rollover data lasts only through the next month.
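The rule as described can be modeled in a couple of lines. This is a sketch of the stated mechanics only; the order in which plan and rollover data are consumed is an assumption here, not a documented AT&T policy detail:

```python
def next_cycle(plan_gb, used_gb):
    """Model the stated Rollover Data rule: unused plan data from this
    cycle rolls into the next cycle, and any prior rollover data has
    already expired. Assumes usage draws on plan data first (an
    assumption, not a documented policy detail)."""
    rollover = max(plan_gb - used_gb, 0)
    return plan_gb + rollover, rollover

# The example above: a 15GB plan with 10GB used yields 5GB of rollover,
# for a 20GB total allowance the next month.
print(next_cycle(15, 10))  # (20, 5)
```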

Some of you might remember that Cingular, the brand AT&T once used, offered a rollover feature for voice minutes that, as I recall, expired if unused after about a year. 

After T-Mobile US launched its Data Stash plan, it was inevitable that there would be a response. 

The difference, at least for the moment, is that the Data Stash unused mobile data usage will be available for a year.  




Internet Access Speed Growth is Linear, but in a Moore's Law Way

You might not know it from the stream of quarterly updates on “average” Internet connection speeds around the world, but a long history of speed advances confirms that consumer Internet access grows about as fast as Moore’s Law would suggest.

So even if it seems very little is happening, quite a lot is happening, all the time. You couldn’t tell that from quarterly or even annual changes in typical access speeds.

In the third quarter of 2014, for example, the global average Internet connection speed dropped 2.8 percent to 4.5 Mbps, and the global average peak connection speed fell 2.3 percent to 24.8 Mbps, according to Akamai.

On an annual basis, average connection speeds globally were up 25 percent from the third quarter of 2013, though. That implies a doubling of speed roughly every three years.

Most people would likely agree that usage grows faster than that.  Based on traffic data collected by Ericsson, the volume of mobile data traffic grew by approximately 10 percent between the second and third quarters of 2014, implying annual growth of more than 40 percent. But that’s usage, not average speed.
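The rate conversions above are simple compounding arithmetic, and can be sketched directly:

```python
import math

def doubling_time_years(annual_growth):
    """Years needed to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

def annualize(quarterly_growth):
    """Compound a quarterly growth rate into an annual rate."""
    return (1 + quarterly_growth) ** 4 - 1

print(round(doubling_time_years(0.25), 1))  # 3.1 years at 25% per year
print(round(annualize(0.10), 2))            # 0.46, i.e. about 46% per year
```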

Those sorts of figures are hard to square with the notion that typical speed doubles about every 18 months to two years.

Logic seemingly would suggest that is unlikely. Communications networks--especially those of the fixed variety--are expensive construction projects. Such networks also are subject to local, state and national regulations, interest rates, economic conditions, changes in tax laws and changes in demand curves, all of which should slow rates of change, compared to rates of change for semiconductor products that follow Moore’s Law.

Shockingly, then, some studies have shown that even on twisted-pair copper telephone networks, speed doubled about every 1.9 years.

Other studies show similar results: some say Edholm's Law shows that Internet access bandwidth does increase as Moore’s Law would predict.

Of course, experts have argued for decades about whether Moore’s Law would end. That debate still hasn’t been settled. But some argue that communications bandwidth would continue to improve on a Moore’s Law pattern, even if classic Moore’s Law slowed or flattened.

That’s a foundational assumption. If access bandwidth really does grow at Moore’s Law rates, then gigabit access networks are inevitable, no matter how crazy that seems.
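If the 1.9-year doubling figure cited above holds, the time to gigabit speeds from any starting point is straightforward to project. The starting speed below is just an illustrative input:

```python
import math

def years_to_gigabit(current_mbps, doubling_years=1.9):
    """Project years until a given average speed reaches 1 Gbps,
    assuming speed doubles every `doubling_years` years."""
    return doubling_years * math.log2(1000 / current_mbps)

# From a 25 Mbps average (roughly the fastest national average today),
# gigabit arrives in about a decade on this curve.
print(round(years_to_gigabit(25), 1))  # 10.1
```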

But that is going to be obvious first in the developed regions that have been at it the longest: North America, some portions of Asia (Japan, Korea, Taiwan, Singapore) and parts of Europe.

Other regions with tougher economics might still be on the curve, but will start at slower speeds, as did Internet access in the more-developed regions.

The global broadband adoption rate (at least 4 Mbps) edged up slightly in the third quarter, gaining one percent and growing to 60 percent.

The global adoption rate of access at speeds of at least 10 Mbps was up 22 percent in the third quarter, following 65 percent increases seen in both the first and second quarters of 2014.

South Korea had the highest average connection speed at 25.3 Mbps, but Hong Kong again had the highest average peak connection speed at 84.6 Mbps.

Demand is going to grow as well, given both streaming popularity and new video formats including 4K video. With 4K adaptive bitrate streams generally requiring between 10 Mbps and 20 Mbps of bandwidth, markets where 4K streaming is widespread will face new investment requirements.
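The investment implication is easy to quantify with rough arithmetic. The household figures below are illustrative assumptions, not source data:

```python
# Illustrative household capacity arithmetic for 4K streaming.
streams_per_home = 2    # assumed concurrent 4K streams per household
mbps_per_stream = 15    # midpoint of the 10-20 Mbps range cited above
headroom_factor = 1.5   # assumed allowance for all other traffic

required_mbps = streams_per_home * mbps_per_stream * headroom_factor
print(required_mbps)    # 45.0 Mbps of downstream capacity per household
```

Even this conservative sketch lands well above the 4.5 Mbps global average connection speed cited earlier, which is the investment gap the paragraph above points to.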

Though it seems improbable, and even when quarterly or annual statistics do not fully show the progress, Internet access speeds do grow about as fast as Moore’s Law would suggest. It’s astounding, really.

Video Autoplay Drives 60% to 200% Facebook Data Consumption Growth

Autoplay video is having a huge impact on Internet access networks, and also likely is going to shape end user behavior in ways beyond new concerns about uncontrollable bandwidth usage.

Over the 12 months preceding September 2014, Facebook traffic increased by 60 percent on the mobile network, and by over 200 percent on the fixed network, driven mainly by the addition of autoplay video to the Facebook feed, according to Sandvine.

Autoplay on Instagram and many other sites probably is having similar effects, namely increasing data consumption in an involuntary way.  

Users probably are starting to shut off audio when working in quiet places, or all the time. People might start reducing visits to, or avoiding, sites that are aggressive about autoplay video. And third-party apps to disable such features are going to become more popular.

Here are some ways to disable autoplay in your browser.

Thursday, January 8, 2015

Did an Understanding of Moore's Law "Save" the 1980s Cable TV Business?

Did an understanding of Moore's Law "save" the U.S. cable TV industry in the mid-1980s, in the same way Moore's Law enabled Microsoft and Netflix?

Maybe so.

Most will agree that it matters greatly whether Moore’s Law continues at historic rates or whether bandwidth advances continue at historic rates. 

The reason is that so many businesses implicitly or explicitly embed such assumptions into business models and expected or potential rates of growth.

And the continuation of that trend has been highly contestable. For three decades, observers have predicted that the rate of improvement simply could not continue, as we would reach the limits of our ability to etch smaller pathways onto silicon substrates. Optimists have countered that we would begin working with different substrates.

Stubbornly, Moore’s Law has, so far, defied projections. Over the past three decades, big businesses, and big bets, have been contingent on Moore’s Law.

Perhaps the biggest early bet was made by a few in the U.S. cable TV industry.

Way back in the 1980s, proposed high definition TV standards threatened to choke off growth of the U.S. cable TV industry, for example.

The cable business then was a smallish industry, with perhaps $30 billion in annual revenues, and the initial standards proposed by Japanese electronics interests and over-the-air broadcasters would have severely disrupted its business model.

At that time, Japanese suppliers dominated and led the TV set business, and cable operators were struggling with the cost of supporting complicated in-home installations of cable, plus TVs, off-air antenna service, videocassette recorders and multiple remotes.

All that complexity generated consumer unhappiness.

At the time, the proposed HDTV standards were partly analog, partly digital and might have required about 45 Mbps of bandwidth per channel, at a time when cable access networks were set up to use 6-MHz channels.

In addition to requiring completely new electronics across the network, some also suggested the initial standard would not last more than five to 10 years, requiring yet another “rip and replace” investment cycle at the end of that period.

Astute cable TV industry executives knew they could not easily afford to make major upgrades twice within 15 years. Consumer electronics suppliers would win, because they could expect two waves of device replacement (consumer and industrial) within 15 years.

Broadcasters might also have reasonably assumed they would gain strategic advantages over cable TV, then seen as a direct competitor. Also, given the growing trend to greater realism in TV image quality, the quality of the existing product would be enhanced, at less cost per TV station than a cable operation would face.

Enter Moore’s Law. Few experts at the time believed it was possible to move directly to an “all-digital” form of HDTV, in one step, and yet retain the standard channelization. The reason was simple enough.

Decoding such a signal, massively compressed and processed, would require the equivalent of a mainframe computer in the home.

The issue, though, was whether Moore’s Law actually would continue to improve at historic rates, and therefore provide affordable mainframe computing capabilities. Most believed that unlikely. But a few did bet on Moore’s Law continuing, which would make possible a consumer decoder at a price that, while significant, would still allow cable operators to support HDTV.

To make a longish story short, Moore’s Law remained intact, and it indeed was possible to compress a 50-Mbps “raw” data stream into 6 MHz of bandwidth.

Some would say, in that instance, Moore’s Law “saved” the economics of the whole U.S. cable TV industry.

Some also would note that Reed Hastings of Netflix made a bet on Moore’s Law as well. Earlier, some would argue, Microsoft was built on an understanding of the implications of Moore’s Law. What a business could look like if computing or bandwidth were free is the key question.

For Gates, the key assumption was that Moore’s Law would make the cost of computing hardware a non-problem for a software supplier, and also would create huge new markets for computers.

For Hastings, Moore’s Law, as embedded in Internet access prices, would make possible streaming services even lower in cost than mailing DVDs using the postal service.

The point is that, sometimes, a big forecast on a key trend can enable a whole new industry or business, or perhaps save a whole industry or business.

Most other attempts to quantify the future also are subject to uncertainty. So forecasting errors always are possible. In fact, they might be the normal state of affairs.

Philip Tetlock's Expert Political Judgment: How Good Is It? How Can We Know? found that “specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study.”

Sam L. Savage’s The Flaw of Averages points out that plans based on average assumptions are wrong on average, because uncertainty in life is much more pronounced than people generally assume to be the case.

Nassim Taleb’s The Black Swan likewise deals with the powerful impact of unpredictable and unexpected developments.

In fact, some would go so far as to say that forecasts always are wrong, to some degree. That isn’t necessarily a bad thing, as minor fluctuations along a predicted trend line nearly always happen. That is true of most economic forecasting, some argue.

That doesn't mean people will stop listening to forecasts, or that experts will fail to make them. Occasionally, though, big bets are made based on such forecasts, no matter how inaccurate forecasts might be.
