Monday, February 10, 2014

Sprint Execs "Surprised" by Opposition to T-Mobile US Bid?

The odds against a Sprint bid to acquire T-Mobile US seem as long as ever. Sprint Chairman Masayoshi Son and Chief Executive Dan Hesse reportedly were "surprised" by opposition to the merger from the U.S. Justice Department and the Federal Communications Commission.

They really should not have been surprised. The Justice Department signaled clearly its conviction that the U.S. mobile market already is too concentrated when AT&T tried to buy T-Mobile US. None of that has changed over the past two years.

In fact, some might say a T-Mobile US resurgence works against any attempted acquisition, as it suggests meaningful competition is possible under the present market structure.

Whether heightened competition is possible over the longer term is likely the bigger issue. Many would argue that neither Sprint nor T-Mobile US has the financial ability to weather a prolonged marketing war that reduces average revenue per account and gross revenues.

If that proves to be true, then a merger eventually might be viewed differently, but only after both Sprint and T-Mobile US have become financially more weakened than they are at present. 

Almost perversely, an eventual merger of a weaker Sprint and a weaker T-Mobile US would make success in the competition with Verizon Wireless and AT&T Mobility even tougher.

But that is the likely outcome if antitrust officials will not allow a merger at present. The old adage about bankers making loans only when the customer doesn't need one probably applies here.

Antitrust officials will approve a merger between Sprint and T-Mobile US only when it is too late to prevent creation of an effective mobile duopoly. 

As sometimes, perhaps often, happens, current policies will create precisely the outcomes those policies hope to avoid.



Sunday, February 9, 2014

IP Interconnection is Changing, Because the Commercial Relationships and Traffic Flows are Changing

IP network interconnection periodically erupts as a business issue between two or more interconnecting IP domains, and the problems will grow as the types of interconnecting domains diversify.

The interconnection issue further is complicated by the types of domains. Interconnections can occur between tens of thousands of “autonomous systems,” also called “routing domains.”

Though most of the autonomous systems are Internet service providers, interconnections also occur between enterprises, governmental and educational institutions, large content providers with mostly outbound traffic such as Google, Yahoo, and YouTube, and overlay content distribution networks such as Akamai and Limelight.

In other words, end users, application, service and “access” and “wide area network” providers now are among the entities interconnecting, complicating any potential frameworks for regulating such diverse entities in ways that promote investment and innovation.

Separate “common carrier” regulation arguably was easier, in the sense that only licensed “carriers” could interconnect. These days, application providers including Google, Apple and Netflix operate their own IP networks, interconnecting with carriers and non-profit entities alike.

The interconnection of IP networks historically has been a matter of bilateral agreements between IP network owners, with a tendency to interconnect without settlement payments so long as traffic flows were roughly balanced (the same amount of sent and received traffic on each interconnecting network).

As you can imagine, highly asymmetrical traffic flows such as streaming video will upset those assumptions. That matters, as a practical business matter, since interconnection costs money if traffic flows are not equivalent, or if domains are of disparate size.
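The balanced-flows rule can be sketched in a few lines of Python. The 2:1 ratio threshold and the function name are illustrative assumptions, not any operator's actual peering policy:

```python
def interconnect_type(sent_gbps, received_gbps, max_ratio=2.0):
    """Classify a bilateral interconnect as settlement-free peering or paid,
    based on how balanced the traffic flows are.

    The 2:1 threshold is illustrative; real peering policies set their own
    limits, and consider geography, capacity and other factors as well.
    """
    if min(sent_gbps, received_gbps) == 0:
        return "paid"
    ratio = max(sent_gbps, received_gbps) / min(sent_gbps, received_gbps)
    return "settlement-free peering" if ratio <= max_ratio else "paid"

# Roughly balanced flows qualify for settlement-free peering.
print(interconnect_type(100, 80))   # settlement-free peering
# A content store pushing far more traffic than it receives does not.
print(interconnect_type(500, 20))   # paid
```

The point of the sketch is only that a streaming-video-heavy sender blows through any plausible ratio test, which is exactly why the settlement-free assumption breaks down.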

Historically, the major distinction among different ISPs was their size, based on geographic scope, traffic volume across network boundaries or the number of attached customers. But symmetry encouraged the “peering” or “settlement-free interconnection” model.

Those assumptions increasingly are challenged, though. Today, a smaller number of large networks exchange traffic with many smaller networks. And there are cost implications.

In an uncongested state, a packet that originates on a network with smaller geographic scope and ends up on the larger network might be expected to impose higher delivery costs on the larger network (which must typically carry the packet a greater distance).

A larger network would presumably have more customers, which might be seen as giving the larger network more value, because of the larger positive network externalities associated with being part of its network.

Perhaps more important, even networks of similar size have different characteristics. Consumer-focused “access” providers (cable companies and telcos) are “eyeball aggregators.” Other networks, such as Netflix’s, are content stores. That has practical implications, namely highly asymmetrical traffic flows between the “eyeball aggregators” and the “content stores.”

Also, there are greater economies of scale for a wide area network-based ISP than for an “access” ISP that has to supply last mile connections. Even when traffic flows actually are symmetrical, network costs are unequal.

The point is that settlement-free peering worked best when networks were homogenous, not heterogeneous as they now have become. Like it or not, the traditional peering and transit arrangements are less well suited to today’s interconnection environment.

For that reason, “partial transit” deals have arisen, where a network Z sells access to and from a subset of Internet prefixes.

For instance, Z may sell A only the ability to send traffic to part of the Internet, but not receive traffic. The reverse may also occur: a network may be allowed to receive traffic but not send traffic.
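A hypothetical sketch of such a one-directional, prefix-limited deal; the prefix, function name and flags are invented for illustration, not drawn from any actual transit contract:

```python
# Sketch of a "partial transit" policy: network Z sells customer A the
# ability to SEND traffic toward a subset of prefixes, but not to
# RECEIVE traffic from them (or the reverse).
from ipaddress import ip_address, ip_network

# Example prefixes covered by the deal (TEST-NET range, for illustration).
SOLD_PREFIXES = [ip_network("203.0.113.0/24")]

def allowed(direction, dest_ip, can_send=True, can_receive=False):
    """Return True if traffic in `direction` ('send' or 'receive')
    involving dest_ip is covered by the partial-transit deal."""
    in_scope = any(ip_address(dest_ip) in p for p in SOLD_PREFIXES)
    if not in_scope:
        return False
    return can_send if direction == "send" else can_receive

print(allowed("send", "203.0.113.7"))      # send-only deal covers this
print(allowed("receive", "203.0.113.7"))   # receiving was not purchased
```

The asymmetry of the flags mirrors the asymmetry of the traffic: a content store mostly needs to send, an eyeball network mostly needs to receive.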

That arrangement is intended to reflect asymmetrical traffic flows between content store and eyeball aggregator networks.

Those changes in traffic flows, which bifurcate along content store and eyeball aggregator roles, inevitably will disrupt interconnection economics and business arrangements, leading to demand for imposition of common carrier rules for interconnection of IP networks.

Oddly enough, the logic of common carrier rules might lead to precisely the opposite “benefit” some expect.

Disagreements by parties to a bilateral interconnection agreement can lead to service disruptions, if one network refuses to accept traffic from another network on a “settlement free” basis.

So some might call for mandatory interconnection rules, to end peering disputes. Such rules could make the problem worse.

Interconnection disagreements today are about business models and revenue flows. Content stores benefit from settlement-free peering, since they deliver far more traffic than they receive.

Eyeball aggregators often benefit from transit agreements, since they would be paid for the asymmetric traffic flows.

Unless the assumption is that network economics are to be disregarded, common carrier rules applied to IP networks would mean that a network imposing an asymmetric load on a receiving network would have to pay for such access.

Disputes over “peering” between IP domains sometimes lead to service disruptions, viewed in some quarters as “throttling” of traffic. It is not “throttling,” but a contract dispute.

The relationships between discrete IP networks take three forms. Large networks with equal traffic flows “peer” without payment of settlement fees.

Networks of unequal size tend to use “transit” agreements, where the smaller network pays to connect with the larger network, but also gets access to all other Internet domains. Also, in many cases one network pays a third party to provide interconnection.

Peering and transit rules are going to change, if only because the business interests of IP domain owners are distinct. The issue is whether such change will reflect the actual commercial interests, or take some form that is economically irrational.

Saturday, February 8, 2014

Internet Access Prices are Dropping, in "Real" Terms


Historically, as most observers will readily agree, Internet access prices per bit have dropped.

But many would argue that absolute prices have not dropped.

In many cases, consumers have paid about the same amount of money on a recurring basis but have gotten better performance, in terms of access speed, many would argue.

It is a nuanced issue. In some cases, absolute prices might have climbed, on average.

So how can one claim that prices have declined? Prices declined 82 percent, globally, between 2007 and 2012, according to the International Telecommunication Union, measured as a percentage of income.

That's the key: "as a percentage of income." In some cases, higher absolute prices might represent a lower percentage of household income. So, in "real" terms, prices dropped.
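The arithmetic is simple enough to sketch; the dollar figures below are invented for illustration, not drawn from the ITU data:

```python
def real_price_change(price_before, price_after, income_before, income_after):
    """Fractional change in price measured as a share of income.

    Absolute price may rise while the 'real' (income-relative)
    price falls, which is the distinction the ITU figures rest on.
    """
    share_before = price_before / income_before
    share_after = price_after / income_after
    return (share_after - share_before) / share_before

# Illustrative: the monthly price rises from $40 to $45, but household
# income rises from $40,000 to $50,000 over the same period.
change = real_price_change(40, 45, 40_000, 50_000)
print(f"{change:.1%}")  # -10.0%
```

The absolute price went up 12.5 percent, yet the price as a share of income fell 10 percent, so in "real" terms the service got cheaper.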

That trend is clear enough, globally, for Long Term Evolution prices, which have dropped in about 73 percent of markets. There also is evidence that U.S. Internet access prices dropped between 2004 and 2009, for example.

A 2011 study by the International Telecommunication Union, for example, found consumers and businesses globally were paying on average 18 percent less for entry-level services than they did in 2009, and more than 50 percent less for high-speed Internet connections.

Relative prices for mobile cellular services decreased by almost 22 percent from 2008 to 2010, while fixed telephony costs declined by an average of seven percent.  




Greater Scale Leads to Lower Prices, Even in a More Concentrated Mobile Business?

If telecommunications really is a business with scale characteristics, then additional scale should lead to lower retail prices. And there is evidence that higher concentration levels in the U.S. mobile business have happened at the same time that retail prices have dropped. 

2013 Began a Reset of Consumer Expectations about Internet Access

Major Internet service providers long have argued that demand for very high speed Internet access (50 Mbps, 100 Mbps, 300 Mbps and faster) is limited. For a very long time, those ISPs have had the numbers on their side.

But that is changing.

By the end of fourth-quarter 2013, 46 percent of Verizon Communications consumer FiOS Internet customers subscribed to FiOS Quantum, which provides speeds ranging from 50 Mbps to 500 Mbps, up from 41 percent at the end of third quarter 2013.

In the fourth quarter of 2013, 55 percent of consumer FiOS Internet sales were for speeds of at least 50 megabits per second. That is a big change, as historically, consumers have tended not to buy services operating at 50 Mbps or faster.

ISPs in the United Kingdom have in the past also found demand challenges for very high speed services.

Major ISPs would have been on firm ground in arguing that most consumers were happy enough with the 20 Mbps to 30 Mbps speeds they already were buying, and that demand for 50 Mbps, 100 Mbps, 300 Mbps or 1 Gbps services was largely limited to business users or early adopters.

But something very important changed in 2013, namely the price-value relationship for very high speed Internet access services. The Verizon data provides one example. Google Fiber was the other big change.

Previously, where triple-digit speeds were available, the price-value relationship had been anchored around $100 or so for 100 Mbps, each month.

In the few locations where gigabit service actually was available, it tended to sell for $300 a month.

Then came Google Fiber, resetting the value-price relationship dramatically, to a gigabit for $70 a month. Later in 2013, other providers of gigabit access lowered prices to the $70 a month or $80 a month level, showing that Google Fiber indeed is resetting expectations.

Sooner or later, as additional deployments, especially by other ISPs, continue to grow, that pricing umbrella will settle over larger parts of the market, reshaping consumer expectations about the features, and the cost, of such services.

That price umbrella also should reshape expectations for lower-speed services as well. If a gigabit costs $70 a month, what should a 100-Mbps service cost?
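One illustrative answer, assuming (unrealistically) that price scales linearly with speed under the $70-per-gigabit umbrella; real tier pricing is rarely linear, so treat these as an anchor, not a forecast:

```python
# If a gigabit sells for $70 a month, what does linear per-Mbps pricing
# imply for the slower tiers? Purely illustrative arithmetic.
GIGABIT_PRICE = 70.0
PRICE_PER_MBPS = GIGABIT_PRICE / 1000  # $0.07 per Mbps

for mbps in (100, 300, 500):
    monthly = mbps * PRICE_PER_MBPS
    print(f"{mbps:>4} Mbps -> ${monthly:.2f}/month at linear pricing")
```

Even if actual tiers settle well above the linear figure, the gigabit anchor drags expectations for every lower tier down with it.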

So the big change in 2013 was that the high end of the Internet access or broadband access market was fundamentally reset, even if the practical implications will take some time to be realized on a fairly ubiquitous basis.

Google Fiber’s 1 Gbps for $70 a month pricing now is reflected in most other competing offers, anywhere in the United States.

And those changes will ripple down through the rest of the ecosystem. Where Google Fiber now offers 5 Mbps for free, all other offers will have to accommodate the pricing umbrella of a gigabit for $70 a month.

To be clear, Google Fiber has sown the seeds of destruction of the prevailing price-value relationship for Internet access.

Eventually, all consumers will benchmark what they can buy locally against the “gigabit for $70” standard. And those expectations will affect demand for all other products.

Where alternatives are offered, many consumers will opt for hundreds of megabits per second at prices of perhaps $35 a month, because that satisfies their needs, and is congruent with the gigabit for $70 pricing umbrella.

One might also predict that, on the low end, 5 Mbps will be seen as a product with a retail price of perhaps a few cents per month.

Friday, February 7, 2014

One Reason Why U.S. Vehicle Communications (Machine to Machine) Market HAS to Grow

The U.S. Department of Transportation (specifically the National Highway Traffic Safety Administration) is preparing regulatory proposals to make vehicle-to-vehicle communications (part of the broader "machine to machine" market) compulsory, to prevent crashes, reduce traffic congestion and save fuel.

The U.S. Department of Transportation's (DOT) National Highway Traffic Safety Administration believes vehicle-to-vehicle communication technology for light vehicles, allowing cars to talk to each other, would avoid many crashes altogether by exchanging basic safety data, such as speed and position, ten times per second.
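A hedged sketch of that broadcast pattern: each vehicle emits its speed and position at 10 Hz. The field names and class below are illustrative, not the actual SAE J2735 basic safety message layout:

```python
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    """Illustrative safety payload: real V2V messages carry more fields
    (heading, acceleration, brake status, vehicle size, and so on)."""
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float

def broadcast_schedule(msg, hz=10, duration_s=1.0):
    """Return the payloads a vehicle would emit over `duration_s` at `hz`.
    At the mandated 10 Hz, that is ten broadcasts per second."""
    count = int(hz * duration_s)
    return [asdict(msg) for _ in range(count)]

msgs = broadcast_schedule(BasicSafetyMessage("veh-1", 38.90, -77.04, 26.8))
print(len(msgs))  # 10 broadcasts in one second
```

At ten small messages per second per vehicle, the aggregate load is modest per car but enormous across a fleet, which is part of why the spectrum and air interface choices matter.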



That is one way to create a market: mandate it. The major mobile service providers possibly stand to benefit, even if the actual communications will use the 5.9 GHz band and the Wi-Fi air interface (802.11p), in part because any such systems will benefit from wide area communications as well.

But most of the revenue likely will be earned by application providers, in a complicated ecosystem.

The Department of Transportation's Intelligent Transportation System Architecture document attempts to bring some order to a fiendishly complex collection of technologies.


60% of All Internet Devices Exchange Traffic with Google Every Day

About 60 percent of all Internet end devices and end users exchange traffic with Google servers during the course of an average day, according to Deepfield.  In 2010, Google represented just six percent of Internet traffic.

In the summer of 2013, Google accounted for nearly 25 percent of Internet traffic on average. Perhaps as significantly, Google has deployed thousands of Google servers  (Google Global Cache) in Internet service provider operations around the world, accelerating performance and improving end user experience.

Aside from all the other things that presence could mean, one might argue that Google might be able to leverage all of that to better compete with Amazon Web Services, the clear market leader in the cloud infrastructure business.



Mid-2013 research by Synergy Research Group indicated Amazon Web Services (AWS) had 27 percent market share of the infrastructure as a service and platform as a service segments of the cloud computing business.

At that point in time, North America accounted for well over half of the worldwide market, while the Asia-Pacific region accounted for 21 percent of revenue and Europe, the Middle East and Africa accounted for 20 percent.

Ignoring Salesforce.com, which is in the applications as a service segment, Microsoft, IBM, Google and Fujitsu arguably were positioned in a clear second tier of providers, with market share between four percent and five percent.

AT&T and Verizon each had about two percent share. The question is what any of the other contenders can do to catch up to AWS. Some might argue Google is the firm best positioned to leverage other assets in that regard.

Some argue that Google is Amazon's only competition. Other cloud infrastructure providers might disagree, but few would doubt Google’s ability to challenge AWS, in ways other cloud infrastructure providers would find difficult and expensive.




By some estimates, since 2005, Google has spent $20.9 billion on its infrastructure. Microsoft has invested about $18 billion and Amazon about $12 billion.


We Might Have to Accept Some Degree of AI "Not Net Zero"

An argument can be made that artificial intelligence operations will consume vast quantities of electricity and water, as well as create lot...