Sunday, February 9, 2014

IP Interconnection is Changing, Because the Commercial Relationships and Traffic Flows are Changing

IP network interconnection periodically erupts as a business issue between two or more interconnecting IP domains, and the problems will grow as the types of interconnecting domains diversify.

The interconnection issue is further complicated by the types of domains involved. Interconnections can occur between tens of thousands of “autonomous systems,” also called “routing domains.”

Though most autonomous systems are Internet service providers, interconnections also occur between enterprises, governmental and educational institutions, large content providers with mostly outbound traffic (such as Google, Yahoo, and YouTube), and overlay content distribution networks such as Akamai and Limelight.

In other words, end users, application providers, service providers, and “access” and “wide area network” providers now are among the entities interconnecting, complicating any potential framework for regulating such diverse entities in ways that promote investment and innovation.

Where separate “common carrier” regulation arguably was easier to apply, in the sense that only licensed “carriers” could interconnect, these days application providers including Google, Apple and Netflix operate their own IP networks, interconnecting with carriers and non-profit entities alike.

The interconnection of IP networks historically has been a matter of bilateral agreements between IP network owners, with a tendency to interconnect without settlement payments so long as traffic flows were roughly balanced (the same amount of sent and received traffic on each interconnecting network).

As you can imagine, highly asymmetrical traffic flows such as streaming video will upset those assumptions. That matters, as a practical business matter, since interconnection costs money if traffic flows are not equivalent, or if domains are of disparate size.
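
To illustrate, consider a toy sketch (in Python) of the traditional symmetry test. The 2:1 ratio threshold here is a hypothetical stand-in; real peering policies vary widely and are privately negotiated.

```python
# Toy model of the traditional settlement-free peering test.
# The 2:1 threshold is a hypothetical illustration, not any network's policy.

def classify_interconnect(sent_gbps: float, received_gbps: float,
                          max_ratio: float = 2.0) -> str:
    """Classify a bilateral interconnect by traffic symmetry."""
    lo, hi = sorted((sent_gbps, received_gbps))
    ratio = hi / max(lo, 1e-9)  # avoid division by zero
    return "settlement-free peering" if ratio <= max_ratio else "settlement required"

print(classify_interconnect(40, 35))  # roughly balanced: settlement-free peering
print(classify_interconnect(90, 10))  # video-style asymmetry: settlement required
```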

Historically, the major distinction among different ISPs was their size, based on geographic scope, traffic volume across network boundaries or the number of attached customers. But symmetry encouraged the “peering” or “settlement-free interconnection” model.

Those assumptions increasingly are challenged, though. Today, a smaller number of large networks exchange traffic with many smaller networks. And there are cost implications.

In an uncongested state, a packet that originates on a network with smaller geographic scope and ends up on the larger network might be expected to impose higher delivery costs on the larger network (which must typically carry the packet a greater distance).

A larger network presumably has more customers, which might be seen as giving the larger network more value, because of the larger positive network externalities associated with being part of its network.

Perhaps more important, even networks of similar size have different characteristics. Consumer-focused “access” providers (cable and telcos) are “eyeball aggregators.” Other networks, such as Netflix’s, are “content stores.” That has practical implications, namely highly asymmetrical traffic flows between the eyeball aggregators and content stores.

Also, wide area network-based ISPs enjoy greater natural economies of scale than “access” ISPs that must supply last mile connections. Even when traffic flows actually are symmetrical, network costs are unequal.

The point is that settlement-free peering worked best when networks were homogenous, not heterogeneous as they now have become. Like it or not, the traditional peering and transit arrangements are less well suited to today’s interconnection environment.

For that reason, “partial transit” deals have arisen, where a network Z sells access to and from only a subset of Internet prefixes.

For instance, Z may sell A only the ability to send traffic to part of the Internet, but not receive traffic. The reverse may also occur: a network may be allowed to receive traffic but not send traffic.
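
A minimal sketch of how such an agreement might be modeled, assuming hypothetical prefix lists (the addresses below are reserved documentation prefixes, not real routes):

```python
# Toy model of a "partial transit" agreement: transit sold only for a
# subset of prefixes, and possibly in only one direction.
from dataclasses import dataclass

@dataclass
class PartialTransit:
    prefixes: set              # the subset of Internet prefixes covered
    can_send: bool = True      # customer may send traffic toward these prefixes
    can_receive: bool = False  # customer may receive traffic from them

    def permits(self, prefix: str, direction: str) -> bool:
        if prefix not in self.prefixes:
            return False
        return self.can_send if direction == "send" else self.can_receive

# Z sells A send-only reachability to part of the Internet:
deal = PartialTransit(prefixes={"203.0.113.0/24", "198.51.100.0/24"})
print(deal.permits("203.0.113.0/24", "send"))     # True
print(deal.permits("203.0.113.0/24", "receive"))  # False
print(deal.permits("192.0.2.0/24", "send"))       # False: outside the deal
```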

That arrangement is intended to reflect asymmetrical traffic flows between content store and eyeball aggregator networks.

Those changes in traffic flows, which bifurcate along content store and eyeball aggregator roles, inevitably will disrupt interconnection economics and business arrangements, leading to demand for imposition of common carrier rules for interconnection of IP networks.

Oddly enough, the logic of common carrier rules might lead to precisely the opposite “benefit” some expect.

Disagreements between parties to a bilateral interconnection agreement can lead to service disruptions, if one network refuses to accept traffic from another network on a “settlement-free” basis.

So some might call for mandatory interconnection rules, to end peering disputes. Such rules could make the problem worse.

Interconnection disagreements today are about business models and revenue flows. Content stores benefit from settlement-free peering, since they deliver far more traffic than they receive.

Eyeball aggregators often benefit from transit agreements, since they would be paid for the asymmetric traffic flows.

Unless the assumption is that network economics are to be disregarded, common carrier rules applied to IP networks in a manner consistent with common carrier regulation would mean that a network imposing an asymmetric load on a receiving network would have to pay for such access.

Disputes over “peering” between IP domains sometimes lead to service disruptions viewed in some quarters as “throttling” of traffic. It is not “throttling,” but a contract dispute.

The relationships between discrete IP networks take three forms. Large networks with equal traffic flows “peer” without payment of settlement fees.

Networks of unequal size tend to use “transit” agreements, where the smaller network pays to connect with the larger network, but also gets access to all other Internet domains. Also, in many cases one network pays a third party to provide interconnection.

Peering and transit rules are going to change, if only because the business interests of IP domain owners are distinct. The issue is whether such change will reflect actual commercial interests, or take some form that is economically irrational.

Saturday, February 8, 2014

Internet Access Prices are Dropping, in "Real" Terms


Historically, as most observers will readily agree, Internet access prices per bit have dropped.

But many would argue that absolute prices have not dropped.

In many cases, many would argue, consumers have paid about the same amount of money on a recurring basis but have gotten better performance in terms of access speed.

It is a nuanced issue. In some cases, absolute prices might have climbed, on average.

So how can one claim that prices have declined? Prices declined 82 percent, globally, between 2007 and 2012, according to the International Telecommunication Union, measured as a percentage of income.

That's the key: "as a percentage of income." In some cases, higher absolute prices might represent a lower percentage of household income. So, in "real" terms, prices dropped.
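
A quick worked example of that distinction; the numbers are illustrative, not ITU data:

```python
# Worked example: the absolute price rises, but income rises faster,
# so the "real" price (as a share of income) falls.
# All figures below are illustrative, not ITU data.

def share_of_income(monthly_price: float, monthly_income: float) -> float:
    return 100 * monthly_price / monthly_income

before = share_of_income(40, 2_000)  # $40 on $2,000 income: 2.0 percent
after = share_of_income(45, 3_000)   # $45 on $3,000 income: 1.5 percent
print(f"{before:.1f}% -> {after:.1f}% of income")
```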

That trend is clear enough, globally, for Long Term Evolution prices, which have dropped in about 73 percent of markets. There also is evidence that U.S. Internet access prices dropped between 2004 and 2009, for example.

A 2011 study by the International Telecommunication Union, for example, found consumers and businesses globally paying on average 18 percent less for entry-level services than they did in 2009, and more than 50 percent less for high-speed Internet connections.

Relative prices for mobile cellular services decreased by almost 22 percent from 2008 to 2010, while fixed telephony costs declined by an average of seven percent.  




Greater Scale Leads to Lower Prices, Even in a More Concentrated Mobile Business?

If telecommunications really is a business with scale characteristics, then additional scale should lead to lower retail prices. And there is evidence that higher concentration levels in the U.S. mobile business have happened at the same time that retail prices have dropped. 

2013 Began a Reset of Consumer Expectations about Internet Access

Major Internet service providers long have argued that demand for very high speed Internet access (50 Mbps, 100 Mbps, 300 Mbps and faster) is limited. For a very long time, those ISPs have had the numbers on their side.

But that is changing.

By the end of fourth-quarter 2013, 46 percent of Verizon Communications consumer FiOS Internet customers subscribed to FiOS Quantum, which provides speeds ranging from 50 Mbps to 500 Mbps, up from 41 percent at the end of third quarter 2013.

In the fourth quarter of 2013, 55 percent of consumer FiOS Internet sales were for speeds of at least 50 megabits per second. That is a big change, as historically, consumers have tended not to buy services operating at 50 Mbps or faster.

ISPs in the United Kingdom also have found demand challenges for very high speed services in the past.

Major ISPs would have been on firm ground in arguing that most consumers were happy enough with the 20 Mbps to 30 Mbps speeds they already were buying, and that demand for 50 Mbps, 100 Mbps, 300 Mbps or 1 Gbps services was largely limited to business users or early adopters.

But something very important changed in 2013, namely the price-value relationship for very high speed Internet access services. The Verizon data provides one example. Google Fiber was the other big change.

Previously, where triple-digit speeds were available, the price-value relationship had been anchored around $100 or so for 100 Mbps, each month.

In the few locations where gigabit service actually was available, it tended to sell for $300 a month.

Then came Google Fiber, resetting the value-price relationship dramatically, to a gigabit for $70 a month. Later in 2013, other providers of gigabit access lowered prices to the $70 a month or $80 a month level, showing that Google Fiber indeed is resetting expectations.

Sooner or later, as additional deployments, especially by other ISPs, continue to grow, that pricing umbrella will settle over larger parts of the market, reshaping consumer expectations about the features, and the cost, of such services.

That price umbrella also should reshape expectations for lower-speed services as well. If a gigabit costs $70 a month, what should a 100-Mbps service cost?
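
The implied arithmetic, as a minimal sketch; real tiers are not priced linearly, so this captures only the “umbrella” logic:

```python
# Implied per-Mbps pricing under the gigabit-for-$70 umbrella:
# $70 / 1,000 Mbps = $0.07 per Mbps, scaled linearly for illustration.

def implied_monthly_price(speed_mbps: float, anchor_price: float = 70.0,
                          anchor_speed_mbps: float = 1_000.0) -> float:
    return anchor_price * speed_mbps / anchor_speed_mbps

print(implied_monthly_price(100))  # $7.00 a month
print(implied_monthly_price(5))    # $0.35 a month
```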

So the big change in 2013 was that the high end of the Internet access or broadband access market was fundamentally reset, even if the practical implications will take some time to be realized on a fairly ubiquitous basis.

Google Fiber’s 1 Gbps for $70 a month pricing now is reflected in most other competing gigabit offers anywhere in the United States.

And those changes will ripple down through the rest of the ecosystem. Where Google Fiber now offers 5 Mbps for free, all other offers will have to accommodate the pricing umbrella of a gigabit for $70 a month.

To be clear, Google Fiber has sown the seeds of destruction for the prevailing price-value relationship for Internet access.

Eventually, all consumers will benchmark what they can buy locally against the “gigabit for $70” standard. And those expectations will affect demand for all other products.

Where alternatives are offered, many consumers will opt for hundreds of megabits per second at prices of perhaps $35 a month, because that satisfies their needs, and is congruent with the gigabit for $70 pricing umbrella.

One might also predict that, on the low end, 5 Mbps will be seen as a product with a retail price of perhaps a few cents per month.

Friday, February 7, 2014

One Reason Why U.S. Vehicle Communications (Machine to Machine) Market HAS to Grow

The U.S. Department of Transportation (specifically the National Highway Traffic Safety Administration, or NHTSA) is preparing regulatory proposals to make vehicle-to-vehicle communications (part of the broader "machine to machine" market) compulsory, to prevent crashes, reduce traffic congestion and save fuel.

NHTSA believes vehicle-to-vehicle communication technology for light vehicles, which allows cars to talk to each other by exchanging basic safety data such as speed and position ten times per second, would avoid many crashes altogether.
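
As a rough illustration of the data exchange involved (field names below are hypothetical; the actual over-the-air message format is defined by standards such as SAE J2735, not reproduced here):

```python
# Toy sketch of a vehicle broadcasting basic safety data ten times per second.
# Field names are illustrative; printing stands in for the 802.11p radio.
import time
from dataclasses import dataclass

@dataclass
class BasicSafetyData:
    vehicle_id: str
    speed_mps: float    # speed in meters per second
    lat: float          # position, decimal degrees
    lon: float
    heading_deg: float

def broadcast(msg: BasicSafetyData, hz: int = 10, seconds: int = 1) -> None:
    """Send the message `hz` times per second for `seconds` seconds."""
    for _ in range(hz * seconds):
        print(msg)
        time.sleep(1 / hz)

broadcast(BasicSafetyData("veh-42", 26.8, 38.8977, -77.0365, 90.0))
```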



That is one way to create a market: mandate it. The major mobile service providers possibly stand to benefit, even if the actual communications will use the 5.9 GHz band and the Wi-Fi-based 802.11p air interface, in part because any such systems will benefit from wide area communications as well.

But most of the revenue likely will be earned by application providers, in a complicated ecosystem.

The Department of Transportation's Intelligent Transportation System Architecture document attempts to bring some order to a fiendishly complex collection of technologies.

60% of All Internet Devices Exchange Traffic with Google Every Day

About 60 percent of all Internet end devices and end users exchange traffic with Google servers during the course of an average day, according to Deepfield.  In 2010, Google represented just six percent of Internet traffic.

In the summer of 2013, Google accounted for nearly 25 percent of Internet traffic on average. Perhaps as significantly, Google has deployed thousands of Google servers  (Google Global Cache) in Internet service provider operations around the world, accelerating performance and improving end user experience.

Aside from all the other things that presence could mean, one might argue that Google might be able to leverage all of that to better compete with Amazon Web Services, the clear market leader in the cloud infrastructure business.



Mid-2013 research by Synergy Research Group  indicated Amazon Web Services (AWS) had 27 percent market share of the infrastructure as a service and platform as a service segments of the cloud computing business.

At that point in time, North America accounted for well over half of the worldwide market, while the Asia-Pacific region accounted for 21 percent of revenue and Europe, the Middle East and Africa accounted for 20 percent.

Setting aside Salesforce.com, which competes in the software as a service segment, Microsoft, IBM, Google and Fujitsu arguably were positioned in a clear second tier of providers, with market shares between four percent and five percent.

AT&T and Verizon each had about two percent share. The question is what any of the other contenders can do to catch up to AWS. Some might argue Google is the firm best positioned to leverage other assets in that regard.

Some argue that Google is Amazon's only competition. Other cloud infrastructure providers might disagree, but few would doubt Google’s ability to challenge AWS, in ways other cloud infrastructure providers would find difficult and expensive.




By some estimates, since 2005 Google has spent $20.9 billion on its infrastructure. Microsoft has invested about $18 billion and Amazon about $12 billion.


TV is Turning Out to be Quite a Strategic Asset for Telcos

There's a good reason why Vodafone bought Kabel Deutschland, and might be considering additional purchases of video subscription service assets in the United Kingdom: video services are among the areas where telco market share is growing.

And while profit margins at smaller providers will be slim to non-existent, larger telcos likely are seeing profit margins in the 20-percent range. 

Video also has emerged as a core application complementing a broadband access service. According to Bernstein Research, where U.S. cable TV and satellite TV providers are losing customers, U.S. telcos are gaining them.

(chart: U.S. pay-TV subscribers)


Would You Rather Be HBO or Netflix?

HBO generated $1.8 billion in operating profit in 2013, propelled by revenue growth of four percent (to reach $4.9 billion). 



Netflix's revenue rose 21 percent in 2013, to $4.37 billion, producing $228 million in operating income. 



Netflix is growing faster, and already generates more gross revenue than HBO, though HBO has much higher profit margins. 
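
The margin arithmetic, worked from the figures above:

```python
# Operating margin comparison, using the 2013 figures cited above
# (operating income and revenue, in billions of dollars).
def operating_margin(operating_income_b: float, revenue_b: float) -> float:
    return 100 * operating_income_b / revenue_b

print(f"HBO:     {operating_margin(1.8, 4.9):.0f}%")     # roughly 37 percent
print(f"Netflix: {operating_margin(0.228, 4.37):.0f}%")  # roughly 5 percent
```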



You might argue Netflix is more exposed to a slowdown in growth, as it will have to increase spending on original content. 




Verizon to offer $100 for New Lines in February

One big question observers have had about the escalating mobile marketing wars in the U.S. market is whether Verizon Wireless would have to respond. It appears that is happening, on at least a limited basis. 



Verizon will offer $100 for new lines from Feb. 7, 2014 to Feb. 28, 2014, on a two-year contract plan. 



What remains to be seen is how the offers and counter-offers develop over time, with or without any merger between Sprint and T-Mobile US. 


50% LTE Coverage in Africa by 2018

Though 2G and 3G likely will continue to represent the mobile networks most consumers connect to, by end-2018, half the African population will be covered by Long Term Evolution networks, according to ABI Research.

African LTE mobile subscriptions will grow at a 128 percent compound annual growth rate to surpass 50 million subscribers at the end of 2018, ABI Research forecasts.
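
A quick sanity check on that forecast arithmetic, as a sketch that simply inverts five years of 128 percent compound growth from the end-2018 figure:

```python
# Sanity check: at a 128 percent CAGR over the five years to end-2018,
# the implied end-2013 base for ~50 million LTE subscriptions is small.
END_2018_SUBS = 50e6
CAGR = 1.28
YEARS = 5

implied_2013_base = END_2018_SUBS / (1 + CAGR) ** YEARS
print(f"Implied end-2013 base: {implied_2013_base / 1e6:.1f} million")  # ~0.8 million
```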

“LTE handset shipments will increase by 75 percent annually on average in the next five years,” said Jake Saunders, ABI Research VP and practice director. “Given the poor fixed-line infrastructure, people will depend on the wireless network for Internet access.”

LTE base stations will grow at a compound annual growth rate of 40 percent over the next five years, ABI Research forecasts.

However, LTE network population coverage will be far from homogenous across the region, with a few countries such as Angola and Namibia nearing the halfway point already while wealthier nations like Botswana and Gabon have yet to deploy the advanced technology.

“Part of the underlying reason for this digital divide is the different types of initiatives driving LTE roll-out,” said Ying Kang Tan, ABI Research research associate. “We expect wholesale or shared networks, such as the joint venture between the Rwandan government and Korea Telecom and the public-private partnership proposed by the Kenyan government, to spur LTE deployment.”

(Chart: Projected 2018 network subscriptions by type of network. Source: Ericsson)



Thursday, February 6, 2014

We Should Know in a Few Weeks if Sprint Really is Going to Launch a Takeover Bid for T-Mobile US

SoftBank and Sprint reportedly will have to decide over the next few weeks whether to launch a bid to acquire T-Mobile US.

The argument essentially has to be that the U.S. market is becoming a duopoly, a condition that historically has resulted in high prices and low innovation in the U.S. mobile market, and that only a stronger number three provider (Sprint fortified by T-Mobile US) can check the growing market power of Verizon Wireless and AT&T Mobility.

U.S. regulators and antitrust officials likely already have signaled to Sprint a continuing belief that the U.S. mobile market already is too concentrated. Indeed, Sprint itself argued against the AT&T acquisition of T-Mobile US, on the grounds that competition would be harmed if the number of national providers dropped from four to three, the step it now might propose.

“Removing T-Mobile from the market would substantially reduce the likelihood of market disruption by a maverick,” Sprint said in a 2011 filing asking the FCC to block AT&T’s proposed purchase of T-Mobile. “T-Mobile, as one of only four national carriers, provides a critical constraint on AT&T’s consumer retail prices.”

SoftBank CEO Masayoshi Son and Sprint CEO Dan Hesse have argued a combined entity would be a “super maverick” in the mold of T-Mobile US. On the other hand, regulators also might view a resurgent T-Mobile US as evidence that competition is increasing.

T-Mobile US has reversed a nearly decade-long slide in subscribers, and in 2013 had the second-highest net subscriber gains in the U.S. industry, adding 2.1 million net new customers over the last three quarters, compared to net additions of 4.1 million at Verizon Wireless and 1.8 million at AT&T Mobility, Bloomberg notes.

Arguments can be made, either way, that competition will be most effectively promoted if four carriers remain in the market, or if Sprint and T-Mobile US combine. The former argument will rely on empirical evidence of what T-Mobile US is doing; the latter on economies of scale that might be needed if a larger Sprint plus T-Mobile US wants to disrupt industry prices and packaging more than at present.

Where to Share Spectrum Might be the Issue, Not Sharing as Such

To say communications spectrum policy is contentious is an understatement, and contention exists over the concept of “shared spectrum” as well.

The notion is that instead of the traditional method of reallocating spectrum, namely compensating licensed users to clear blocks of spectrum for supplemental auction, it might be less costly and get new communications spectrum to market faster if licensed users and commercial users share spectrum.

Put simply, incumbent mobile service providers probably have good reasons to oppose the concept, while challengers likely have good reasons to support the concept.

“The burden of proof should be on those who argue for spectrum exclusivity over sharing,” argues Kevin Werbach, Wharton School, University of Pennsylvania associate professor of Legal Studies and Business Ethics, and founder of the Supernova Group, a technology consulting firm.

“Both licensed and unlicensed spectrum provides significant value to consumers,” argues Dr. George Ford, Phoenix Center for Advanced Legal & Economic Public Policy Studies chief economist. That said, Ford is skeptical about the economics of shared spectrum, as an exclusive method of allocating a scarce resource.

“The allocation decision should be made based on which licensing approach is expected to generate the greatest value for the spectrum being allocated,” Ford argues.

One key contextual issue is where additional spectrum will be needed, what applications will need to be supported and whether spectrum sharing is suited to new uses and business models, compared to the traditional mobile business model.

“Shared spectrum is largely for low-power uses only,” Ford argues. “Sharing spectrum that covers greater distances per unit of power—like the TV broadcasting spectrum—is counterproductive and economically senseless.”

But some might also note that the biggest potential use of new spectrum and networks is for high bandwidth but low power applications, precisely where a shared spectrum approach would make most sense.

The legacy use of Wi-Fi provides a useful model. Where traditional mobile networks logically have been optimized for mobility over wide areas, making high-power licensed spectrum a logical choice, tomorrow’s networks might well be built around low power, local access. And that is precisely the area where shared spectrum might make lots of sense.

There might be less disagreement than first appears.

Even if Gigabit Connections are Available, Will Most Consumers Buy 100 Mbps?

Whether the typical consumer will buy a gigabit service, by about 2020, is not so clear. What is more clear is that such speeds will be generally available to most U.S. consumers, by about that point.

Perhaps the bigger question is what gigabit services will cost in 2020. Generally speaking, the trend in the U.S. market has been for average speeds to grow at about a 50 percent a year clip while absolute prices remain roughly stable, though premium services have been priced higher.

The issue is whether a gigabit service will remain the premium offering in 2020. Some predict that half of U.S. households will be buying 100 Mbps in 2020, for example.

The other complication is that broadband speeds keep changing, so the product a consumer bought in 1998 is different from the one bought in 2008, which in turn will be different from what is purchased in 2018.

In 2002, only about 10 percent of U.S. households were buying broadband service. Back then, where a dial-up connection might have cost about $20 a month, a then-current broadband connection would have cost much more. Some of us were buying 768 kbps connections for $100 a month, for example.

So one might argue either that monthly prices will remain roughly constant, while speeds grow, or that prices will grow as speeds increase. The “natural limit” would seem to be Google Fiber’s gigabit for $70 a month price point. It is hard to see triple-digit broadband prices for gigabit services in 2020, if $70 or $80 a month is the current price of a gigabit connection.

Back when modems operated at 56 kbps, Netflix took a look at Moore’s Law and plotted what that would mean for bandwidth, over time.

“We took out our spreadsheets and we figured we’d get 14 megabits per second to the home by 2012, which turns out is about what we will get,” says Reed Hastings, Netflix CEO. “If you drag it out to 2021, we will all have a gigabit to the home.”
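
A rough reconstruction of that spreadsheet exercise; the 1999 start year and roughly 55 percent annual growth rate are my assumptions, chosen to land near the figures Hastings cites, not Netflix's actual inputs:

```python
# Compound bandwidth growth from 56 kbps dial-up. The start year and
# growth rate are assumptions for illustration, not Netflix's model.

def projected_mbps(year: int, start_kbps: float = 56.0,
                   start_year: int = 1999, annual_growth: float = 0.55) -> float:
    return start_kbps * (1 + annual_growth) ** (year - start_year) / 1_000

print(f"2012: ~{projected_mbps(2012):.0f} Mbps")  # about 17 Mbps (post cites ~14)
print(f"2021: ~{projected_mbps(2021):.0f} Mbps")  # about 864 Mbps (post cites ~1 Gbps)
```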

Remember Economics of Dial-Up ISP Business? Get Ready. You Might See it Again

For those of you who actually remember the economics of the dial-up Internet access business, you will recall that a profitable smallish business became unsustainable with the advent of broadband access. Profit margin was the key issue.

When Internet access was an app that rode on top of a standard unlimited use local voice connection, small ISPs could make a business case for service because they were not leasing access connections.

With broadband, independent ISPs suddenly found themselves required to lease wholesale capacity from facilities-based providers in order to provide service, which wiped out profit margins.

One wonders whether independent ISPs eventually will face a similar problem, on the assumption that the market-standard Internet access offer becomes 1 Gbps in urban and suburban markets.

Cable companies and telcos will have to spend more, but can adjust.

Whether that will also be true for wireless ISPs is not so clear. To be sure, there traditionally is a gap between price-performance of urban Internet access and rural Internet access. That will allow rural and independent providers a shot at survival, if they can offer 100 Mbps or more, even if 1 Gbps proves prohibitive.

But nothing is certain, and the precedent of the dial-up to broadband transition might provide a warning. As broadband initially meant a transition from kilobits per second to megabits per second, or an order of magnitude increase, so might a jump of two orders of magnitude likewise pose a calamitous challenge for wireless providers, who might not have access to enough bandwidth to compete.

At least for the moment, what many service providers (cable and telco) will have to do, when faced with the reality of a 1-Gbps competitor, is drop prices. Sooner or later, though, even that is going to be a tough proposition, since the immediate steps have tended to be offers of something like “100 Mbps for $70,” when Google Fiber offers 1 Gbps for $70.

At an implied price of seven cents per Mbps (1,000 Mbps at $70), a 100 Mbps service “should” cost $7 a month. That is why Google Fiber prices 5 Mbps at the level of “free.” Using the same metric, 5 Mbps would cost 35 cents a month.

Mobile service providers might eventually face the same challenge. If enough people have very high speed connections, and there is even more Wi-Fi available than there is at present, it will make sense for people to buy relatively small mobile data access packages, and default to Wi-Fi most of the time.

One might argue that is what people already do.

To be sure, mobile access likely always will come at a price premium to fixed service. But the premium will shift as the price per megabit per second for a fixed connection climbs towards the gigabit for $70 a month level.

Even if mobile data continues to cost about an order of magnitude more than the equivalent amount of bandwidth delivered by a fixed connection, absolute cost has to adjust. If 50 Mbps on a fixed network costs about $3.50 a month, for the sake of argument, then 50 Mbps on a mobile network might be expected to cost $35 a month.
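
The arithmetic in that example, sketched out; the 10x mobile premium and the seven-cents-per-Mbps figure are the assumptions stated above, not market data:

```python
# Mobile premium arithmetic from the example above: fixed-network price
# per Mbps times an assumed order-of-magnitude (10x) mobile premium.
FIXED_PRICE_PER_MBPS = 0.07  # dollars, from the gigabit-for-$70 umbrella
MOBILE_PREMIUM = 10          # assumed order-of-magnitude premium

fixed_50 = 50 * FIXED_PRICE_PER_MBPS   # $3.50 a month on a fixed network
mobile_50 = fixed_50 * MOBILE_PREMIUM  # $35.00 a month on a mobile network
print(f"fixed 50 Mbps: ${fixed_50:.2f}, mobile 50 Mbps: ${mobile_50:.2f}")
```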

Bandwidth economics are going to be interesting going forward. Where we normally operate within “scarcity” constraints, we might in the future actually be facing relative abundance. And that is going to have financial implications for ISPs.

Wednesday, February 5, 2014

Seeing "Throttling" Where it Does Not Exist

It probably was inevitable: some will see content “throttling” in the wake of a U.S. appeals court overturning of Federal Communications Commission “network neutrality” rules.

Verizon Communications was swift to say it was not blocking or impeding packets. Peering congestion is the likely culprit. Verizon is not dumb enough to block or impede lawful packets, a clear violation of still-in-force FCC rules.

U.S. Consumers Still Buy "Good Enough" Internet Access, Not "Best"

Optical fiber always is pitched as the “best” or “permanent” solution for fixed network internet access, and if the economics of a specific...