Saturday, January 27, 2018

Federal Preemption Coming in Internet Access Business?

Communications that cross state lines generally have been regulated differently than communications confined within a single state, or parts of a state. In the internet era, regulators mostly have taken a “hands off” approach--data communications tend not to be regulated very much--which fits the generally highly-distributed nature of modern computing.

In more recent times--in the wake of the Telecommunications Act of 1996--there was a perhaps-necessary clarification of state and federal roles, mostly in the area of federal preemption of state and local rules.

The logic has been that, for clear efficiency reasons, it does not make sense to have potentially 50 sets of rules for communications that are, almost exclusively, interstate or global in nature.

It seems almost inevitable that we will have some form of the federal preemption debate as policy on internet access potentially fractures with imposition of state rules on internet access. AT&T, for example, already has started calling for federal rules to re-establish or preserve a single national policy.

That comes as some states and localities create their own policies for internet access, once again raising the issue of fractured policies across the nation. As those of you who work in tariff and taxation areas know, it is devilishly complex to comply with all local and state regulations when you are running a nationwide business.

That, in fact, is behind the whole European Union project: ending the friction that comes with multiple regulatory and currency regimes within what increasingly is a single market.

“It is time for Congress to end the debate once and for all, by writing new laws that govern the internet and protect consumers,” AT&T says.

Given the obscurity of network neutrality in general, and its weaponization, it might be reasonable simply to point out the areas where nearly everybody continues to agree.

We all agree that consumers must be able to use all lawful services and applications. There cannot be blocking of lawful applications.

Such applications cannot be throttled or downgraded based solely on the ownership of specific sites and content.

Everyone has agreed on these principles for more than a decade. And even if most people seem not to understand this, AT&T and other major internet service providers agree as well.

“We don’t block websites; we don’t censor online content. And we don’t throttle, discriminate, or degrade network performance based on content. Period,” AT&T says.

But “Congressional action is needed to establish an ‘Internet Bill of Rights’ that applies to all internet companies and guarantees neutrality, transparency, openness, non-discrimination and privacy protection for all internet users.

“Legislation would not only ensure consumers’ rights are protected, but it would provide consistent rules of the road for all internet companies across all websites, content, devices and applications,” AT&T argues.

At this point, and ironically, it is as much the major app providers--not just ISPs--that probably have to worry about what that means. If the objection to changing the “best effort only” level of consumer internet access is about preventing the emergence of gatekeepers, we have problems far beyond “who owns the access pipe.”

Actual instances of “commercial blocking” have been happening, but by Amazon and Google, for instance, not by ISPs.

In the coming debate, the need for predictable rules, across the whole country, will be stressed, as we have seen in the past, and for the same reasons. To be sure, AT&T’s concern is about future services whose performance does matter, and which might clearly benefit from optimization, as do consumer apps whose performance is assured by use of content delivery networks.

Ironically, most larger content and app providers already use content delivery networks, precisely for the purpose of optimizing performance of their consumer apps.


“In the very near future, technological advances like self-driving cars, remote surgery and augmented reality will demand even greater performance from the internet,” AT&T says. “Without predictable rules for how the internet works, it will be difficult to meet the demands of these new technology advances.”

To be sure, the issue all along has not been “lawful use of apps” and “no blocking” but the development of quality-assured or other services whose costs are defrayed by an enterprise.

Some ISPs and app providers have argued for the freedom to offer “toll free” services--offered at no charge to end users--alongside the for-fee models. Internet.org, for example, has tried to create no-charge internet access programs for mobile customers in developing markets.

Some ISPs want the freedom to create toll-free or tariff-free services that provide internet access in the same way that toll-free calling is offered.

To be sure, business services are not covered by network neutrality rules. The problem is that the line between enterprise and consumer services increasingly is blurred. Virtual private networks, for example, can be used by business or consumer end users.

The fear in some quarters, perhaps logically, is that quality-assured internet access eventually becomes what high definition is to standard-definition video, or 4K to HDTV: a “better” level of service that forces app providers to upgrade, possibly with the implication that app providers pay money to a transport provider, as already happens with content delivery network payments.

The point is that CDNs are lawful and routinely used. Why would a CDN extending “to the end user” not be lawful? And if it is lawful, does that business require uniform national rules, given that CDNs almost intrinsically operate across state lines?

Friday, January 26, 2018

Most Firms, in Most Industries, Must Recreate Their Business Models

The internet has almost uniformly been positive for consumers--generating new value--while allowing some firms to ride new value propositions to huge business success. On the other hand, the internet has generally been difficult, financially, for nearly all incumbent firms.

“Digital is confounding the best-laid plans to capture surplus by creating—on average—more value for customers than for firms,” McKinsey consultants say.

Telecom service providers know the process well. A shift to over-the-top, internet-based applications allows consumers to use product substitutes (WhatsApp, Skype, Netflix) instead of buying service provider products.

That both makes telco markets smaller, and reduces revenue and profit potential for the amount of consumer demand that remains. At least for intangible or software products (voice, messaging, content, apps and features), the cost of incremental usage is close to zero.

Prices become much more transparent, while new alternative suppliers emerge to provide lower-cost or free substitutes.

In other words, as with most other industries, use of direct internet distribution reduces the need for, and value of, intermediaries and distributors.

To the extent that the marginal cost of supplying the next unit of any product is nearly zero, retail prices will trend toward zero. But the problem is not exclusively faced by telcos.

Internet-based competition has “siphoned off 40 percent of incumbents’ revenue growth and 25 percent of their growth in earnings before interest and taxes (EBIT), as they cut prices to defend what they still have or redouble their innovation investment in a scramble to catch up,” McKinsey argues.


The point is that telcos and other internet service providers necessarily must replace legacy businesses and products with new business models and products.

That is why some of us believe retail service providers (business-to-consumer) must move up the stack. The incumbent business models are breaking down.

Suppliers in the business-to-business segments of the market might have other constraints or opportunities. It is hard to see how most capacity suppliers, for example, actually can move “up the stack,” though all such firms now have moved from a “voice capacity” to “data capacity” revenue model.

The arguably more-important growth has mostly been “new geographies” or “new and redundant capacity in existing geographies.”

Building or acquiring new routes outside the current footprint is an example of the former. Building new cables across the Pacific or Atlantic oceans is an example of the latter strategy.

The main point is that virtually every business faces similar challenges in making the transition from legacy to next-generation business models.

S&P 500 4Q 2017 Telecom Earnings Uniformly Below Expectations

It is just a snapshot, but the telecom segment of the Standard & Poor’s 500 Index fared dead last among industry segments when it came to earnings that were above expectations in the fourth quarter of 2017.

Perhaps no single market is experiencing greater shocks than the Indian telecom market, as rapid consolidation follows dramatically lower earnings. Vodafone saw a 39-percent drop in profits in the first half of 2017. Bharti Airtel profit dropped 39 percent (consolidated net profit fell 77 percent) in the third quarter.

In Europe, it appears that profit is stabilizing, if there is little revenue growth.

Source: @FactSet

Most Big Data Projects Fail to Some Extent

According to Resulticks, only 21 percent of marketers say their big data software delivers on all its big data promises. About 52 percent of surveyed respondents believe big data projects deliver “some” of what vendors promise.

That is not a new story, for virtually any type of enterprise computing initiative. Few big new initiatives actually succeed on the level originally promised. Most likely fail outright.

According to some studies, enterprise “digital transformation” success rates have been as low as 13 percent.

That reflects the larger story that major investments in new technology platforms have tended to lag in producing measurable gains in productivity, sometimes for a decade or more.

That seems to be the broader pattern for some systemically-important technologies such as electricity, steam power, internal combustion engines and other general purpose technologies.

That also has tended to be the trend when enterprises have invested heavily in new computing technologies. There are many theories about “why” the pattern exists. Some think the problem is that we cannot measure the changes.

That is unsatisfying, so many believe the issue is that technology platforms deliver measurable advantages only after business practices are reimagined and refashioned to take advantage of the new technology. Time after time, we have found that big new investments in new technology do not produce measurable results for a decade or even more.

If that were routinely expected, nobody would make the investments. So the expectation is that the payoff will come within three years. Measurable value creation, generally speaking, takes much longer.

Tuesday, January 23, 2018

How Many "IoT" Devices Already are in Use?

Is it possible there already are as many as 27 billion internet of things devices in use globally? Most of us would say “no way.” But it all depends on the definition one uses for “internet of things.” Some definitions arguably are too broad.

For example, there is a difference between “connected devices” and “internet of things” devices. There might be 16 billion mobile phones and PCs--all “connected”--in use in 2017. But that seems to stretch the definition of IoT too far.

Using a narrower definition that excludes mobile phones, PCs and tablets, IoT would include all manner of other sensors that communicate. Using that narrower definition, there might well be as many as 10 billion IoT devices already in use, including more than four billion industrial and commercial sensors. Medical devices and sensors used in transportation might represent about a billion more.



Sunday, January 21, 2018

Reliance Jio Earns "Profit" in Less than 2 Years (Arbitrage, Accounting Rules Help)

It has been two decades since I’ve seen anything like the regulation-assisted business model changes that apparently have helped Reliance Jio earn a profit within two years of launching its attack on the India mobile market.

The profit also is based on accounting rules, as Jio still has negative cash flow. In other words, Reliance Jio is able to capitalize some operating expenses.

Still, it is fair to note that some regulatory changes have simultaneously harmed Reliance Jio’s biggest competitors, and helped Jio reduce its own operating expenses.

The last time I saw this sort of regulatory arbitrage was before 2000, when incumbent and upstart telecom firms sparred over reciprocal compensation fees paid to firms for terminating calls on their networks from other service providers.

Basically, because such fees were very generous in a few locales, long distance conferencing services started businesses in those areas, charging very low calling fees and essentially making their money on the reciprocal compensation fees earned on calls inbound to the conference calling centers.
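The per-minute economics of that arbitrage can be sketched simply. The rates below are invented for illustration, not actual tariffs: the point is only that when the terminating fee exceeds operating cost, the service can profitably charge callers almost nothing.

```python
# Hypothetical illustration of reciprocal-compensation arbitrage.
# All rates are invented for illustration, not actual tariffs.

def arbitrage_margin_per_minute(retail_price, recip_comp_rate, cost_per_minute):
    """Margin per inbound minute for a service that collects both a
    (low) retail fee and a per-minute reciprocal compensation payment
    from the originating carrier."""
    return retail_price + recip_comp_rate - cost_per_minute

# A conferencing service charging almost nothing per minute still
# profits if the terminating fee it collects exceeds its costs.
margin = arbitrage_margin_per_minute(
    retail_price=0.002,      # $0.002/min charged to callers
    recip_comp_rate=0.010,   # $0.010/min paid by the originating network
    cost_per_minute=0.006,   # $0.006/min operating cost
)
print(f"margin per inbound minute: ${margin:.3f}")
```

The same arithmetic explains why traffic direction mattered so much: every business model in this story maximized inbound minutes.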

The same idea was used by call center operations, where most of the traffic, by definition, is inbound, rather than outbound.

The same arbitrage could be used by dial-up internet access providers, since--again by definition--the customer traffic was inbound from other networks (customers dialing in to create an access session).

Disparities in traffic flow also underpin the economics of rural and other small telecom companies, where long distance calls (disproportionately important in rural areas) generate an originating access fee paid by the long distance carrier to the originating network.

The point is that, at crucial times, regulatory arbitrage can provide a bit of breathing room while erstwhile upstarts sprint to gain market share and reach sustainability. Arbitrage likely is not a sustainable strategy for Jio, any more than it has proven sustainable for many other service providers.

But, at least in principle, such arbitrage can help in the formative years.

Saturday, January 20, 2018

FCC Definitions are Floors, Not Ceilings

Defining what broadband means is now an arbitrary exercise, if a necessary one for measuring progress. According to the current minimum definition for fixed networks--25 Mbps in the downstream--many internet access services actually cannot be marketed as “broadband,” using the Federal Communications Commission definition.

People, app experience and markets are not affected by any such definitions, of course. It probably does not matter at all that fixed network 10 Mbps Ethernet is not “broadband,” using the FCC definition.

The definitions do not apply to wireless or mobile networks, though, a nuance that often is missed.

Still, for most users, it does not matter that most of their Wi-Fi and mobile internet access sessions are not “broadband,” using the fixed network definition. What matters is that user experience is good enough to provide satisfactory interactions.

“Satisfactory” often hinges on the actual use case, of course. Relatively modest speeds are required for most consumer apps, including video, somewhere between 5 Mbps and 25 Mbps. “Twitch” gamers mostly will need more.

Also, floors are not ceilings. Availability is not usage. In fact, U.S. consumer internet access speeds double about as fast as Moore’s Law would predict, and grow by an order of magnitude about every five years.
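The compounding behind that claim is worth making explicit: if speeds double roughly every 18 months (a Moore’s-Law-like pace, assumed here for illustration), five years of growth works out to about an order of magnitude.

```python
# Sketch of the compounding claim: doubling roughly every 18 months
# yields about 10x growth over five years, since 2 ** (5 / 1.5) ≈ 10.
# The 18-month doubling period is an assumption for illustration.

def speed_after(initial_mbps, years, doubling_period_years=1.5):
    """Projected speed after compounding doublings."""
    return initial_mbps * 2 ** (years / doubling_period_years)

start = 25  # Mbps, the FCC's current fixed-network floor
print(speed_after(start, 5))  # roughly 10x the starting speed
```

A longer doubling period, say two years, would stretch the tenfold gain to closer to seven years, which is why the exact pace matters when floors are revisited.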

By some measures, U.S. average speeds are in the range of 19 Mbps. By other tests, even mobile access speeds are in the 23 Mbps range. Some other tests show 2017 average speeds of 55 Mbps.  


Though we tend not to pay much attention, U.S. fixed network internet access speeds used by consumers have grown about as fast as Moore’s Law would predict, at least on cable TV networks.
