Monday, January 16, 2017

Verizon Lowers Prices for 150-Mbps Internet Access, Replaces 500-Mbps Tier with 750-Mbps Tier

It has been expected that the new Verizon 750-Mbps symmetrical service would create a new pricing reference that would lead to lower prices for the other existing Verizon plans below 750 Mbps. That appears to be the case.

Prior to the 750-Mbps launch, the base tier was a symmetrical 50 Mbps tier. The 100-Mbps tier was $10 extra each month. The 150-Mbps tier was an extra $10 above the 100-Mbps tier, and then there were $100 a month increases for the symmetrical 300-Mbps tier and 500-Mbps symmetrical tier.

50/50 Base
100/100 +$10 from 50/50
150/150 +$10 from 100/100
300/300 +$100 from 150/150
500/500 +$100 from 300/300

Now, in areas where 750-Mbps symmetrical service is available, the 300-Mbps tier of service will cost $30 a month more than the 150-Mbps service, and the 750-Mbps tier will cost $50 more than the 300-Mbps service.

In other words, prices now will be lower for the 300 Mbps tier, and for the 750-Mbps tier, which replaces the former 500-Mbps tier.

50/50 Base
100/100 +$10 from 50/50
150/150 +$10 from 100/100
300/300 +$30 from 150/150
750/750 +$50 from 300/300
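Summing the step increases makes the effective price cut concrete. A quick sketch (the base tier's own price varies by market, so only cumulative increments above the base are shown):

```python
# Cumulative price above the base 50/50 tier, before and after the change
# (only the step increases from the post are used; base price is not assumed).
old_steps = {"50/50": 0, "100/100": 10, "150/150": 10, "300/300": 100, "500/500": 100}
new_steps = {"50/50": 0, "100/100": 10, "150/150": 10, "300/300": 30, "750/750": 50}

def cumulative(steps):
    """Running total of the step increases, keyed by tier."""
    total, out = 0, {}
    for tier, step in steps.items():
        total += step
        out[tier] = total
    return out

print(cumulative(old_steps))  # top tier (500/500) sits $220 above base
print(cumulative(new_steps))  # top tier (750/750) sits only $100 above base
```

Under the new ladder, the 300-Mbps tier lands $70 a month below its old price point, and the new 750-Mbps top tier sits $120 below what the slower 500-Mbps tier used to cost.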

Why Internet Access Will Eventually Reach Nearly 100%

It has been my contention that, some day, internet access adoption would eventually reach close to 100 percent, driven by television and media consumption, not use of personal computers.

Some 90 percent of U.S. residents use the internet and about 75 percent buy fixed network service to use at their homes, according to the Pew Research Center. About 10 percent rely exclusively on mobile phones for internet access.

The logic is simple enough: adoption rates for linear video, in the U.S. market, reached about 87 percent at the peak of the adoption cycle.

Counting satellite-delivered and telco-delivered television as well, household adoption of linear video reached nearly 100 percent. The analogy is that, as video delivery shifted to the internet, those levels of adoption would help drive internet access up to nearly 100 percent, since such access supports computing, mobile and television consumption.

The other prediction that has been easy to make is that mobility would become a more important part of media consumption, of “time spent with apps or the internet,” and of access network market share as well.


Mobile messaging and social apps remain dominant; time spent in social and messaging apps grew by 394 percent in 2016, Flurry Analytics reports.

Also, in large part, consumers use their social and messaging apps as their voice and video calling utilities, and those apps now are becoming gateways for consumption of media content as well.



source: Flurry

Sunday, January 15, 2017

Next Generations of "Computing" Might Change "Computing Market"

It is not easy to figure out what the next generation of computing will look like, or who will lead it.


What is clear is that computing moves through eras that have been defined by the archetypical machines in each era: mainframes, followed by mini-computers, then personal computers (first stand alone and then connected to local area networks, then to the internet), then mobiles. Now we are nearing an era where machine-to-machine apps or connected consumer devices might be the defining devices.


It also is fair to note that a focus on the archetypical “devices” might miss the shift as seen through the applications, business models and purposes computing supports. In the mainframe era, computing supported enterprise business or large organization purposes. In the mini-computer era, computing tools spread to organizations and entities of smaller size. In the PC era, small businesses and then consumers began using computing devices.


In what we might call the internet era, computing shifted away from enterprises and has largely been driven by consumer apps, new business models and roles, growing in pervasiveness, going mobile or untethered (ambient) and increasingly becoming embedded in consumer apps and life.


Facebook and Google, for example, have become computing leaders whose revenue models are based on advertising. Amazon is a computing leader whose revenue model is based, in part, on retailing.


Also, computing increasingly has become something that is remote, distributed and connected, as cloud computing increasingly shows with more “core” computing handled on a remote device, not locally resident, as was the original pattern.


So far, no clear and universally accepted term defines the recent evolution of computing. In fact, it is becoming very hard to clearly delineate where computing ends and communications begins. Once upon a time “computing” tallied money spent on “computers and software,” as well as services supporting users of computers.


These days, one has to talk about internet applications and activities, smartphones and “connected life” to understand how, why and where core computing happens.




But a next generation will come, and the way it comes might make irrelevant the terms we use to understand and track “computing.” We certainly do not track the “electricity-using” appliances “industry,” but we do track electricity generation and delivery.


That might some day happen to “computing” as well. There will be some firms we track on parts of the computing business (data centers, semiconductors, enterprise and consumer app suppliers, support services). But large parts of the “computing” industry might be tracked in other categories, such as media or commerce.


It seems clear that a change in devices that use computing offers hints, as does the increasingly distributed nature of computing, which implies that “communications” will underpin future computing in a pervasive and fundamental sense. That is why “cloud computing” (now conducted in large and mega-scale data centers, perhaps in the future also conducted at edge locations) is of such interest to “communications” professionals.

It also seems clear it is getting harder to define "computing" or "information technology" as clearly as once seemed possible.




Friday, January 13, 2017

Common Carrier Regulation Might Have Helped Reduce Telco Capex, Though not Cable Capex

It’s hard to have fruitful discussions when we do not agree on the “facts” of the matter at hand. So it is with the amount of capital investment in access networks in the wake of regulation of such investments under common carrier rules.


According to economist Hal Singer, common carrier regulation has depressed investment, even if many claim the reverse to be true. Singer compares the first six months of 2016 with the same period in 2014, the last year in which ISPs were not subject to Title II regulation, and finds a decline of eight percent.


Still, it is not a simple matter to determine the specific impact of the rules, as distinct from background economic factors or changes in company strategies. Generally speaking, big economic shocks (the popping of the internet bubble in 2000 and the Great Recession of 2008) will drive capex declines.


Also, it is an undeniable fact that most U.S. telcos have shifted capex to mobility, and away from the fixed networks, so that is another factor. Prior to imposition of the rules, fixed network capital investment had dropped very sharply from 2000 peak levels.


Conversely, mobile networks clearly were the drivers of firm revenue growth since 2000, and saw generally higher investments, especially compared to fixed network investment.


http://marketrealist.com/2015/01/key-costs-wireless-wired-telecom/


You can see that telcos generally spent less, while cable companies spent more. In fact, those who argue that common carrier regulation did not depress investment invariably point to higher levels of investment by cable companies.

You might argue that cable investment flows from a correct understanding that those investments would generate incremental revenue for cable companies, while telco restraint flows from a similar understanding that even high investment would not produce favorable financial returns. In fact, those simple understandings largely would account for high cable investment and low telco investment, for basic reasons related to return on investment.

It remains difficult to say what might have happened, in terms of telco investment, if telcos had expected much higher revenues from such investment.


“Aggregate capital expenditure (capex) declined by nearly $2.7 billion relative to the same period in 2014,” Singer argues.


While Title II can’t be blamed for all of the capex decline, it is reasonable to attribute some portion to the FCC’s rules, he argues.


The rules bar ISPs from creating new revenue streams from content providers, and (needlessly) expose ISPs to price controls, Singer argues. Both measures truncate an ISP’s return on investment, which makes investment less attractive at the margin, he argues.
source: Hal Singer

Automation Will Affect 60% of All Jobs, Representing $16 Trillion in Wages, Says McKinsey

McKinsey Global Institute researchers estimate that automation (artificial intelligence) will affect as much as 60 percent of all occupations, defined as those in which at least 30 percent of present activities can be automated. To put some context on that prediction, McKinsey Global Institute says “almost half the activities people are paid almost $16 trillion in wages to do in the global economy have the potential to be automated by adapting currently demonstrated technology, according to our analysis of more than 2,000 work activities across 800 occupations.”

While less than five percent of all occupations can be automated entirely using demonstrated technologies, about 60 percent of all occupations have at least 30 percent of constituent activities that could be automated, McKinsey analysts say.
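The screen McKinsey describes is a simple threshold count over occupations. A minimal sketch with invented data (the occupation names and fractions below are illustrative placeholders, not McKinsey's figures):

```python
# Hypothetical occupations and the fraction of their activities judged
# automatable (invented numbers for illustration only).
occupations = {
    "data entry clerk": 0.85,
    "line cook": 0.60,
    "registered nurse": 0.30,
    "teacher": 0.15,
    "software developer": 0.25,
}

# "Automated entirely" means all activities are automatable.
fully_automatable = [o for o, f in occupations.items() if f >= 1.0]

# McKinsey's broader screen: at least 30 percent of activities automatable.
substantially_affected = [o for o, f in occupations.items() if f >= 0.30]

share_affected = len(substantially_affected) / len(occupations)
print(f"{share_affected:.0%} of occupations have >= 30% automatable activities")
```

With these invented fractions, none of the occupations clears the "automated entirely" bar, yet most clear the 30-percent bar, which is the shape of McKinsey's finding.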

More occupations will change than will be automated away, in other words. But the study also suggests that half of today’s work activities could be automated by 2055, plus or minus 20 years.

McKinsey notes that the scale of labor force shifts that automation technologies could unleash over many decades is not without precedent.

The order of magnitude impact will be similar to the long-term technology-enabled shifts away from agriculture in developed countries’ workforces in the 20th century.

“Those shifts did not result in long-term mass unemployment, because they were accompanied by the creation of new types of work,” McKinsey notes. That’s the positive spin on the matter.

“Long term” is not the same thing as “near term” for the specific individuals involved in the change. Such change often shifts jobs from some regions to others, and from some age groups to others, with many changes in employability that are poorly addressed, if addressed at all. As with all major shifts of industry models and fortunes, the impact will be highly disruptive at the level of discrete workers.

The activities most susceptible to automation are physical ones in highly structured and predictable environments, as well as data collection and processing. In the United States, these activities make up 51 percent of activities in the economy, accounting for almost $2.7 trillion in wages. They are most prevalent in manufacturing, accommodation and food service, and retail trade.

And it’s not just low-skill, low-wage work that could be automated; middle-skill and high-paying, high-skill occupations, too, have a degree of automation potential, McKinsey says. As processes are transformed by the automation of individual activities, people will perform activities that complement the work that machines do, and vice versa.

Even if just five percent of current jobs are totally eliminated, perhaps 60 percent will be reshaped.

Source: McKinsey

Top 10 Internet Themes for 2017

The 10 top internet themes for 2017, as outlined by Brian Fitzgerald, Jefferies equity analyst, mostly center on enterprise apps, advertising and media and internet of things.

A couple of important changes will affect the U.S. industry, including network neutrality rules and foreign earnings repatriation.

Among the enterprise themes, fulfillment and last-mile logistics and the travel sharing economy are key.

In the advertising and media area, online advertising, mobile sports, video games and virtual reality are top of mind.

In the IoT area, personal technology and autonomous driving are important.

Big Telecom Horizontal Mega-Mergers Likely Would Not be Approved, Hope Notwithstanding

There is giddiness in some quarters about big merger prospects in the U.S. telecom market under a new federal administration. Some of that optimism is realistic, as a reasonable person would expect at least some amount of regulatory rollback. On the other hand, some of the touted or hoped for deals seem to so badly violate a basic antitrust formula related to industry concentration that the horizontal mergers (providers in the same industry segments merging) remain unthinkable.

One can argue that the rules will be changed, and that antitrust tests will shift. If so, the Herfindahl-Hirschman index (HHI), a commonly accepted measure of market concentration, would have to be discarded. Lots of things might change. But some of us would guess that the HHI will not be made irrelevant to big merger reviews. And by the HHI tests, nearly every horizontal mega-merger touted (Comcast-Verizon, Sprint with T-Mobile US, Comcast with Charter Communications) would so badly fail the HHI antitrust screen that the mergers would not be approved.

Hope, as they say, is not a strategy. Most who tout such mega-mergers undoubtedly know the odds of HHI-busting mega-mergers are slim to non-existent, but might hope to shape the climate for other deals.

The HHI is calculated by squaring the market share of each firm competing in a market and then summing the resulting numbers; it can range from close to zero to 10,000. The U.S. Department of Justice uses the HHI for evaluating potential merger issues.

The U.S. Department of Justice considers a market with an HHI of less than 1,500 to be a competitive marketplace, an HHI of 1,500 to 2,500 to be a moderately concentrated marketplace, and an HHI of 2,500 or greater to be a highly concentrated marketplace. Clearly, all the horizontal mega-mergers now floated would so grossly skew toward the higher range as to make the combinations untenable, in antitrust circles.

As a general rule, mergers that increase the HHI by more than 200 points in highly concentrated markets raise antitrust concerns, as they are presumed to enhance market power under section 5.3 of the Horizontal Merger Guidelines jointly issued by the department and the Federal Trade Commission.

So look at one obvious generic example. Consider a “hypothetical” industry segment with four total firms, where the market leader has 40 percent share, provider number two has 30 percent share, the number three provider has 15 percent share and the fourth provider also has 15 percent share.

The HHI for that market (40^2 + 30^2 + 15^2 + 15^2 = 1,600 + 900 + 225 + 225) yields an HHI score of 2,950, a level already deemed “highly concentrated.”

That hypothetical example roughly corresponds to the U.S. mobile market. The application to the fixed network segment is a bit more complicated, but that market also would be deemed highly concentrated, albeit on the lower ranges of that determination, as is the case for the U.S. mobile market.
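The HHI arithmetic, and the 200-point merger screen, are easy to verify directly. A minimal sketch using the hypothetical shares above:

```python
def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared market shares, in percent."""
    return sum(s ** 2 for s in shares)

def merger_delta(s1, s2):
    """HHI increase when firms with shares s1 and s2 merge:
    (s1 + s2)^2 - s1^2 - s2^2 = 2 * s1 * s2."""
    return 2 * s1 * s2

shares = [40, 30, 15, 15]
print(hhi(shares))           # 2950: already "highly concentrated" (>= 2500)
print(merger_delta(15, 15))  # 450: even merging the two smallest firms
                             # exceeds the 200-point screen
```

Note that even the least concentrating horizontal combination in the hypothetical market, a merger of the two 15-percent firms, adds 450 points, more than double the 200-point threshold.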

The fixed network internet access and fixed network communications markets tend to show concentration similar to the hypothetical example: highly concentrated.

So unless the HHI, used globally as a test of market concentration, is ignored, the big horizontal mega-mergers are almost certainly precluded.



