Tuesday, February 11, 2014

Grande Communications Launches Gigabit Service in Austin, Texas in February 2014

Grande Communications plans to launch gigabit access service in parts of Austin, Texas by Feb. 18, 2014, beating both Google Fiber and AT&T in launching 1 Gbps service in Austin.

Grande will sell gigabit service for $64.99 per month, undercutting AT&T’s “GigaPower” service at $70 a month, as well as Google Fiber, which also sells for $70 a month.

Grande says its service will have no contracts or bandwidth usage caps, and initially will be available in west Austin neighborhoods, representing about 25 percent of Grande’s addressable market in Austin.

If take rates are as high as Grande expects, the gigabit service will be rolled out to other Grande-served neighborhoods as well.

What isn’t immediately clear is what impact the gigabit offer will have on take rates for other Grande offers in Austin.

Grande has launched faster speed tiers in Austin, Corpus Christi, Dallas, Midland, Odessa, San Antonio, San Marcos and Waco. Though markets outside Austin will not be affected directly, demand in west Austin clearly will be affected.

The new Internet speed options begin at 15 megabits per second for $35 a month and include other offers at 50 Mbps ($45 a month), 75 Mbps ($55 a month) and 110 Mbps ($65 a month).

In the west Austin neighborhoods where gigabit service is available, the 110 Mbps service obviously will be cannibalized by the gigabit service, which costs essentially the same. The other issue is how many customers will decide that gigabit service for $65 a month is preferable to 75 Mbps service for $55 a month.
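
A back-of-the-envelope comparison of cost per megabit across the published tiers makes the point. This is a rough sketch using only the prices cited above; it ignores promotions, taxes and bundling.

```python
# Rough cost-per-Mbps comparison of Grande's published Austin tiers,
# using the prices cited above (ignores promotions, taxes and bundles).
tiers = {
    15: 35.00,    # 15 Mbps for $35 a month
    50: 45.00,    # 50 Mbps for $45 a month
    75: 55.00,    # 75 Mbps for $55 a month
    110: 65.00,   # 110 Mbps for $65 a month
    1000: 64.99,  # 1 Gbps (gigabit) for $64.99 a month
}

for mbps, price in tiers.items():
    print(f"{mbps:>4} Mbps: ${price:6.2f}/month = ${price / mbps:.4f} per Mbps")
```

At roughly six and a half cents per megabit, the gigabit tier is an order of magnitude cheaper, per unit of speed, than any of the slower offers.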

Longer term, one has to expect a resetting of price and value expectations for all the slower-speed services, even if there is no wholesale resetting of prices in the near term.

8 ISPs Respond to Gigabit Network RFI

With the caveat that "interest" might not actually represent "intent to provide," eight Internet service providers have responded to a request for information on gigabit Internet access networks issued by the Louisville-Jefferson County Metro Government.

The request sought information about how a gigabit-capable network could be provided across the city, or in targeted commercial corridors and residential areas, and how gigabit service could be delivered at prices comparable to those in other gigabit fiber communities across the nation.

City officials hope to lure a vendor who will provide commercial gigabit Internet service at symmetrical speeds, up and down. Time Warner Cable and AT&T are said to be among the firms that responded to the request for information.

City officials apparently are willing to entice would-be providers by allowing access to city rights of way, including waterlines, sewer lines and alleys.

In some ways, the RFI illustrates new thinking about ways municipalities can offer inducements to ISPs interested in creating new gigabit access networks. Municipal officials in this case do not seem to envision a full public-private partnership, but instead simply easier and presumably low-cost access to rights of way and conduit that would allow ISPs to cut the cost of building any such new networks.

What will be interesting is whether municipal officials are willing to allow partial builds only in parts of the metro area, instead of mandating 100-percent coverage, which might not be feasible.

In essence, that would mimic the way competitive local exchange carriers have tended to create new networks, building only where there is clear customer demand. In the case of CLECs focused on business customers, that has meant focusing on business districts and office parks, for example.

In a consumer context, the same approach might allow ISPs to build only in residential neighborhoods where customer demand was high enough to promise a potential financial return.

Though the concept clashes with historic notions about universal access, such approaches have proven effective at stimulating new network capacity.

Monday, February 10, 2014

Sprint Execs "Surprised" by Opposition to T-Mobile US Bid?

The odds against a Sprint bid to acquire T-Mobile US seem to be as long as ever. Sprint Chairman Masayoshi Son and Chief Executive Dan Hesse reportedly were "surprised" by opposition to the merger from the U.S. Justice Department and the Federal Communications Commission.

They really should not have been surprised. The Justice Department signaled clearly its conviction that the U.S. mobile market already is too concentrated when AT&T tried to buy T-Mobile US. None of that has changed over the past two years.

In fact, some might say a T-Mobile US resurgence works against any attempted acquisition, as it suggests meaningful competition is possible under the present market structure.

Whether heightened competition is possible over the longer term is likely the bigger issue. Many would argue that neither Sprint nor T-Mobile US has the financial ability to weather a prolonged marketing war that reduces average revenue per account and gross revenues.

If that proves to be true, then a merger eventually might be viewed differently, but only after both Sprint and T-Mobile US have become financially weaker than they are at present.

Almost perversely, an eventual merger of a weaker Sprint and a weaker T-Mobile US would make success in the competition with Verizon Wireless and AT&T Mobility even tougher.

But that is the likely outcome if antitrust officials will not allow a merger at present. The old adage about bankers making loans only when the customer doesn't need one probably applies here.

Antitrust officials will approve a merger between Sprint and T-Mobile US only when it is too late to prevent creation of an effective mobile duopoly. 

As sometimes, perhaps often, happens, current policy will create precisely the outcome that policy hopes to avoid.



Sunday, February 9, 2014

IP Interconnection is Changing, Because the Commercial Relationships and Traffic Flows are Changing

IP network interconnection periodically erupts as a business issue between two or more interconnecting IP domains, and the problems will grow as the types of interconnecting domains diversify.

The interconnection issue further is complicated by the types of domains. Interconnections can occur between scores of thousands of “autonomous systems,” also called “routing domains.”

Though most of the autonomous systems are Internet service providers, interconnections also occur between enterprises, governmental and educational institutions, large content providers with mostly outbound traffic such as Google, Yahoo, and YouTube, as well as overlay content distribution networks such as Akamai and Limelight.

In other words, end users, application, service and “access” and “wide area network” providers now are among the entities interconnecting, complicating any potential frameworks for regulating such diverse entities in ways that promote investment and innovation.

Where separate “common carrier” regulation arguably was easier, in the sense that only licensed “carriers” could interconnect, these days, application providers including Google, Apple, Netflix and others operate their own IP networks, interconnecting with carriers and non-profit entities alike.

The interconnection of IP networks historically has been a matter of bilateral agreements between IP network owners, with a tendency to interconnect without settlement payments so long as traffic flows were roughly balanced (the same amount of sent and received traffic on each interconnecting network).

As you can imagine, highly asymmetrical traffic flows such as streaming video will upset those assumptions. That matters, as a practical business matter, since interconnection costs money if traffic flows are not equivalent, or if domains are of disparate size.
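
A rough sketch of that rule of thumb, assuming a hypothetical 2:1 traffic-ratio threshold. Real peering policies set their own ratios and weigh many other factors, so this is illustrative only.

```python
def peering_candidate(sent_tb: float, received_tb: float, max_ratio: float = 2.0) -> bool:
    """Treat an interconnect as a settlement-free peering candidate only if
    exchanged traffic is roughly balanced. The 2:1 threshold is hypothetical,
    not any ISP's actual policy."""
    if min(sent_tb, received_tb) == 0:
        return False
    ratio = max(sent_tb, received_tb) / min(sent_tb, received_tb)
    return ratio <= max_ratio

# A roughly balanced exchange looks like a peering candidate...
print(peering_candidate(sent_tb=120, received_tb=100))  # True
# ...while a streaming-video style flow (heavily outbound) does not.
print(peering_candidate(sent_tb=900, received_tb=50))   # False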

Historically, the major distinction among different ISPs was their size, based on geographic scope, traffic volume across network boundaries or the number of attached customers. But symmetry encouraged the “peering” or “settlement-free interconnection” model.

Those assumptions increasingly are challenged, though. Today, a smaller number of large networks exchange traffic with many smaller networks. And there are cost implications.

In an uncongested state, a packet that originates on a network with smaller geographic scope and ends up on the larger network might be expected to impose higher delivery costs on the larger network (which must typically carry the packet a greater distance).

A larger network presumably would have more customers, which might be seen as giving the larger network more value, because of the larger positive network externalities associated with being part of that network.

Perhaps more important, even networks of similar size have different characteristics. Consumer-focused “access” providers (cable and telcos) are “eyeball aggregators.” Other ISPs, such as Netflix, are content stores. That has practical implications, namely highly asymmetrical traffic flows between the “eyeball aggregators” and “content stores.”

Also, there are greater natural economies of scale for a wide area network-based ISP than for an “access” ISP that has to supply last mile connections. Even when traffic flows actually are symmetrical, network costs are unequal.

The point is that settlement-free peering worked best when networks were homogenous, not heterogeneous as they now have become. Like it or not, the traditional peering and transit arrangements are less well suited to today’s interconnection environment.

For that reason, “partial transit” deals have arisen, where a network Z sells access to and from only a subset of Internet prefixes.

For instance, Z may sell A only the ability to send traffic to part of the Internet, but not receive traffic. The reverse may also occur: a network may be allowed to receive traffic but not send traffic.
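
A minimal sketch of how such a policy might be expressed. The networks, prefixes and permissions below are hypothetical, chosen only to illustrate the send-only and receive-only cases.

```python
# Hypothetical partial-transit policy sold by network Z to network A:
# A may SEND traffic toward a subset of prefixes via Z, but not receive.
policy = {
    "customer": "Network A",
    "prefixes": ["203.0.113.0/24", "198.51.100.0/24"],  # documentation prefixes (RFC 5737)
    "allow_send": True,      # A -> Internet via Z, for the listed prefixes only
    "allow_receive": False,  # inbound traffic must arrive over some other path
}

def permitted(policy: dict, direction: str, prefix: str) -> bool:
    """Return True if the given direction ('send' or 'receive') is allowed
    for a prefix under this partial-transit policy."""
    if prefix not in policy["prefixes"]:
        return False
    return policy["allow_send"] if direction == "send" else policy["allow_receive"]

print(permitted(policy, "send", "203.0.113.0/24"))     # True
print(permitted(policy, "receive", "203.0.113.0/24"))  # False
```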

That arrangement is intended to reflect asymmetrical traffic flows between content store and eyeball aggregator networks.

Those changes in traffic flows, which bifurcate along content store and eyeball aggregator roles, inevitably will disrupt interconnection economics and business arrangements, leading to demand for imposition of common carrier rules for interconnection of IP networks.

Oddly enough, the logic of common carrier rules might lead to precisely the opposite “benefit” some expect.

Disagreements by parties to a bilateral interconnection agreement can lead to service disruptions, if one network refuses to accept traffic from another network on a “settlement free” basis.

So some might call for mandatory interconnection rules, to end peering disputes. Such rules could make the problem worse.

Interconnection disagreements today are about business models and revenue flows. Content stores benefit from settlement-free peering, since they deliver far more traffic than they receive.

Eyeball aggregators often benefit from transit agreements, since they would be paid for the asymmetric traffic flows.

Unless the assumption is that network economics are to be disregarded, the way common carrier rules would work, if applied to IP networks in a manner consistent with traditional common carrier regulation, is that a network imposing an asymmetric load on a receiving network would have to pay for such access.

Disputes over “peering” between IP domains sometimes lead to service disruptions viewed in some quarters as “throttling” of traffic. It is not “throttling,” but a contract dispute.

The relationships between discrete IP networks take three forms. Large networks with equal traffic flows “peer” without payment of settlement fees.

Networks of unequal size tend to use “transit” agreements, where the smaller network pays to connect with the larger network, but also gets access to all other Internet domains. Also, in many cases one network pays a third party to provide interconnection.

Peering and transit rules are going to change, if only because the business interests of IP domain owners are distinct. The issue is whether such arrangements will change to reflect actual commercial interests, or take some form that is economically irrational.

Saturday, February 8, 2014

Internet Access Prices are Dropping, in "Real" Terms


Historically, as most observers will readily agree, Internet access prices per bit have dropped.

But many would argue that absolute prices have not dropped.

In many cases, consumers have paid about the same amount of money on a recurring basis but have gotten better performance, in terms of access speed, many would argue.

It is a nuanced issue. In some cases, absolute prices might have climbed, on average.

So how can one claim that prices have declined, as some do? Measured as a percentage of income, prices declined 82 percent, globally, between 2007 and 2012, according to the International Telecommunication Union.

That's the key: "as a percentage of income." In some cases, higher absolute prices might represent a lower percentage of household income. So, in "real" terms, prices dropped.
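
A simple worked example shows how that can happen. The dollar and income figures below are illustrative only, not ITU data.

```python
# Illustrative only: a higher absolute price can still fall as a share of income.
price_2007, income_2007 = 40.0, 3500.0   # monthly access price, monthly income
price_2012, income_2012 = 45.0, 5000.0

share_2007 = price_2007 / income_2007 * 100
share_2012 = price_2012 / income_2012 * 100

print(f"2007: {share_2007:.2f}% of income")   # about 1.14%
print(f"2012: {share_2012:.2f}% of income")   # about 0.90%
print(f"Relative change: {(share_2012 - share_2007) / share_2007 * 100:.1f}%")  # roughly -21%
```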

That trend is clear enough, globally, for Long Term Evolution prices, which have dropped in about 73 percent of markets. There also is evidence that U.S. Internet access prices dropped between 2004 and 2009, for example.

A 2011 study by the International Telecommunication Union, for example, found that consumers and businesses globally were paying on average 18 percent less for entry-level services than they did in 2009, and more than 50 percent less for high-speed Internet connections.

Relative prices for mobile cellular services decreased by almost 22 percent from 2008 to 2010, while fixed telephony costs declined by an average of seven percent.  




Greater Scale Leads to Lower Prices, Even in a More Concentrated Mobile Business?

If telecommunications really is a business with scale characteristics, then additional scale should lead to lower retail prices. And there is evidence that higher concentration levels in the U.S. mobile business have happened at the same time that retail prices have dropped. 

2013 Began a Reset of Consumer Expectations about Internet Access

Major Internet service providers long have argued that demand for very high speed Internet access (50 Mbps, 100 Mbps, 300 Mbps and faster) is limited. For a very long time, those ISPs have had the numbers on their side.

But that is changing.

By the end of fourth-quarter 2013, 46 percent of Verizon Communications consumer FiOS Internet customers subscribed to FiOS Quantum, which provides speeds ranging from 50 Mbps to 500 Mbps, up from 41 percent at the end of third quarter 2013.

In the fourth quarter of 2013, 55 percent of consumer FiOS Internet sales were for speeds of at least 50 megabits per second. That is a big change, as historically, consumers have tended not to buy services operating at 50 Mbps or faster.

ISPs in the United Kingdom have in the past also found demand challenges for very high speed services.

Major ISPs would have been on firm ground in arguing that most consumers were happy enough with the 20 Mbps to 30 Mbps speeds they already were buying, and that demand for 50 Mbps, 100 Mbps, 300 Mbps or 1 Gbps services was largely limited to business users or early adopters.

But something very important changed in 2013, namely the price-value relationship for very high speed Internet access services. The Verizon data provides one example. Google Fiber was the other big change.

Previously, where triple-digit speeds were available, the price-value relationship had been anchored around $100 or so for 100 Mbps, each month.

In the few locations where gigabit service actually was available, it tended to sell for $300 a month.

Then came Google Fiber, resetting the value-price relationship dramatically, to a gigabit for $70 a month. Later in 2013, other providers of gigabit access lowered prices to the $70 a month or $80 a month level, showing that Google Fiber indeed is resetting expectations.

Sooner or later, as additional deployments, especially by other ISPs, continue to grow, that pricing umbrella will settle over larger parts of the market, reshaping consumer expectations about the features, and the cost, of such services.

That price umbrella should reshape expectations for lower-speed services as well. If a gigabit costs $70 a month, what should a 100-Mbps service cost?
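
One crude way to frame that question is to look at what strictly linear per-megabit pricing would imply. This is purely illustrative arithmetic; real tiers never are priced linearly per megabit, which is exactly why the umbrella matters.

```python
# Purely illustrative: if pricing were strictly linear per Mbps, a gigabit
# at $70 a month would imply very low prices for slower tiers.
gigabit_price = 70.00
price_per_mbps = gigabit_price / 1000          # $0.07 per Mbps

for speed in (5, 100, 300):
    print(f"{speed:>3} Mbps at linear pricing: ${speed * price_per_mbps:.2f}/month")
```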

So the big change in 2013 was that the high end of the Internet access or broadband access market was fundamentally reset, even if the practical implications will take some time to be realized on a fairly ubiquitous basis.

Google Fiber’s 1 Gbps for $70 a month pricing now is reflected in most other competing gigabit offers across the United States.

And those changes will ripple down through the rest of the ecosystem. Google Fiber now offers 5 Mbps for free, so all other offers will have to accommodate the pricing umbrella of a gigabit for $70 a month.

To be clear, Google Fiber has sown the seeds of the destruction of the prevailing price-value relationship for Internet access.

Eventually, all consumers will benchmark what they can buy locally against the “gigabit for $70” standard. And those expectations will affect demand for all other products.

Where alternatives are offered, many consumers will opt for hundreds of megabits per second at prices of perhaps $35 a month, because that satisfies their needs, and is congruent with the gigabit for $70 pricing umbrella.

One might also predict that, on the low end, 5 Mbps will be seen as a product with a retail price of mere cents per month.

Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...