Wednesday, May 23, 2018

Why 4K/8K TV is a Waste for Most People

For most consumers, 4K and 8K TVs are unlikely to provide a noticeable improvement in viewing experience, despite the denser pixel count. The reason is that the human eye cannot distinguish 4K or 8K from 1080p unless a person sits uncomfortably close to the screen, or unless the screen is very large. Simply put, 4K is a waste of money for most people, as 8K will be.

Most people simply do not sit close enough to the screen to perceive the difference 4K or 8K can provide.  
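
That claim can be checked with arithmetic. A common rule of thumb treats one arcminute as the limit of normal visual acuity; the sketch below applies that rule to an assumed 65-inch screen (both the rule and the screen size are illustrative assumptions, not measurements):

    import math

    def max_resolvable_distance_ft(diagonal_in, vertical_pixels, aspect=16/9):
        # Distance (feet) beyond which one pixel subtends less than one
        # arcminute, a common approximation of normal visual acuity.
        height_in = diagonal_in / math.sqrt(1 + aspect ** 2)  # screen height
        pixel_in = height_in / vertical_pixels                # pixel pitch
        one_arcmin = math.radians(1 / 60)
        return pixel_in / math.tan(one_arcmin) / 12           # inches to feet

    for pixels, label in [(1080, "1080p"), (2160, "4K"), (4320, "8K")]:
        print(f"{label}: ~{max_resolvable_distance_ft(65, pixels):.1f} ft")

For a 65-inch screen, the sketch returns roughly 8.5 feet for 1080p, 4.2 feet for 4K and 2.1 feet for 8K. At commonly cited living-room viewing distances of eight to ten feet, the extra 4K and 8K detail simply cannot be resolved.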

It is obvious why consumer electronics companies want to sell you new TVs. TVs no longer break, and manufacturers need new reasons for you to buy a new screen and move the existing one to a bedroom or elsewhere in the house.

Content developers have their own reasons for wanting higher resolution: it is part of the decades-long effort to create greater realism and experiential immersion.


The trend to bigger screens therefore makes sense. Either people have to move closer to their screens, or screens have to get much bigger. Bigger screens probably are the only realistic option.

But 4K and 8K really do make sense for business, medical, industrial and other applications where a human operator actually sits very close to a screen displaying very rich detail.

Tuesday, May 22, 2018

Is Proposed Hillsboro, Ore. Municipal ISP Network Viable?

Building a $66 million, municipal ISP network would be "marginally" viable at a 28 percent "take" rate, a study by Colorado firm Uptown Services predicted in 2015.

That might be an optimistic expectation of market share for any well-run ISP operating a fiber-to-home network and competing against a telco and a cable TV operator where one or both of those competitors are vulnerable because they have not invested, or cannot invest, in their own networks.

Much hinges on whether the Hillsboro network plans also to sell video service or voice. If not, actual take rates might be as low as 20 percent, and possibly lower.

Many municipal ISPs report adoption rates (penetration, or the percentage of homes passed that actually buy service) that are boosted by their sales of video and voice services. In those cases, the adoption rate is based on “units of service sold,” not the “number of homes buying service.”

At least so far, where a municipal ISP offers only internet access, early adoption rates, even with highly competitive prices, have been in the single digits.

Penetration: Units Sold or Homes Buying Service?

                                      Morristown  Chattanooga  Bristol  Cedar Falls  Longmont
Homes passed                              14,500      140,000   16,800       15,000     4,000
Units sold (subscribers)                   5,600       70,000   12,700       13,000       500
Units sold / homes passed                    39%          50%      76%          87%       13%
Services offered                               3            3        5            3         2
Avg. units per home (0.66 x services)          2            2        3            2         1
Homes served                               2,828       35,354    3,848        6,566       379
Penetration (homes served / passed)          20%          25%      23%          44%        9%
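
The “homes served” and “penetration” rows can be reproduced from the raw counts. Here is a minimal sketch of that arithmetic, assuming, as the table does, that each subscribing household buys an average of 0.66 units of each service offered:

    # Reproduce the table: a "units sold" rate overstates household
    # penetration when each home buys several services (video, voice, data).
    systems = {
        # name: (homes_passed, units_sold, services_offered)
        "Morristown":  (14_500,  5_600, 3),
        "Chattanooga": (140_000, 70_000, 3),
        "Bristol":     (16_800, 12_700, 5),
        "Cedar Falls": (15_000, 13_000, 3),
        "Longmont":    (4_000,     500, 2),
    }

    for name, (passed, units, services) in systems.items():
        units_rate = units / passed          # the headline "units sold" rate
        units_per_home = services * 0.66     # assumed average buys per home
        homes_served = units / units_per_home
        penetration = homes_served / passed
        print(f"{name}: units {units_rate:.0%}, "
              f"homes served {homes_served:,.0f}, penetration {penetration:.0%}")

The output matches the table within rounding, and it makes the core point explicit: Bristol's 76 percent “units sold” rate translates into only about 23 percent of homes actually buying service.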

Some private ISPs would, and have, taken such chances. Numerous cities and towns seem to be considering the option as well.

The consultants estimate the Hillsboro municipal ISP operation would reach cash-positive operations in 13 or 14 years, using the $50-per-month benchmark. That might be too optimistic. Higher prices seem to be part of the business model for other municipal broadband networks.
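
A simple capital-payback sketch shows why the timeline is so long. The $66 million cost, the 28 percent take rate and the $50 price come from the study; the footprint and per-subscriber operating cost below are illustrative assumptions, chosen so the arithmetic lands near the consultants' 13-to-14-year horizon:

    # Hypothetical payback sketch; not the consultants' actual model.
    capex = 66_000_000        # network build cost (from the study)
    homes_passed = 35_000     # assumed footprint
    take_rate = 0.28          # the study's "marginally viable" rate
    arpu = 50                 # monthly price benchmark
    opex_per_sub = 10         # assumed monthly cost per subscriber

    subs = homes_passed * take_rate                   # 9,800 subscribers
    annual_cash = subs * (arpu - opex_per_sub) * 12   # ~$4.7 million/year
    print(f"Simple payback: {capex / annual_cash:.0f} years")  # ~14 years

Small changes in the take rate or per-subscriber costs move the payback by years in either direction, which is one reason the $50 benchmark looks optimistic.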

But city officials have decided to build the municipal broadband network anyway. It will not be easy.

Municipal ISPs enjoy no advantages in capital investment and perhaps only marginal advantages in make-ready and pole attachment costs. Any hope of enough operating efficiency to sell service at $50 a month would presumably have to come from marketing and operating costs comparable to the best practices seen at some private ISPs (Sonic, Tucows).

If successful, such networks generally result in lower prices, to be sure. But the proposed Hillsboro network might be seen as a key test of whether such networks can compete in suburban markets.

Traditionally, the opportunity for municipal broadband has seemed more realistic in rural markets and for smaller towns. The Hillsboro network might be likened to the network Ting is building in Centennial, Colo., a reasonably prosperous suburb of Denver.

Hillsboro possibly will be an important test case of the business model. Few private investors would be able to wait more than a decade simply to reach cash-flow-positive status, to say nothing of waiting two decades or so to earn an actual profit.

New Thinking on Mobile Market Structure?

Very few regulators or would-be competitors in most places in the world would consider a facilities-based approach to fixed-line telecom services workable in the present market. Simply put, the revenue upside is too limited and the investment burdens too high to support two or more facilities-based fixed network providers.


There are a few exceptions, especially small countries, city-states or North America, where cable TV networks were transformed from one-way video broadcast networks into full-duplex communications networks.


In the mobile area, most regulators have preferred four providers to three contestants. That thesis will soon be tested in the U.S. and possibly a few other markets as antitrust and telecom regulators have to make a decision about whether to allow the merger of Sprint and T-Mobile US.


The larger policy environment is challenging for Sprint and T-Mobile US in that regard. Some argue the present administration is going to be more merger friendly than the last. Then there is the Department of Justice opposition to the vertical merger of AT&T and Time Warner, which many observers found surprising, as such vertical mergers have not tended to raise antitrust issues.


The present administration also has blocked other deals, including the purchase of Qualcomm by Broadcom, the proposed merger of DraftKings and FanDuel, and Otto Bock Healthcare's acquisition of rival Freedom Innovations.


Also, antitrust opposition to a merger between AT&T and T-Mobile US in 2011, and Sprint and T-Mobile US in 2014, was strong and evident enough that those proposed mergers were scuttled.


Beyond that, there is growing talk of antitrust in the application provider space, with many believing Google, Facebook or Amazon have grown too large.


Tough new issues will have to be resolved if the antitrust sentiment turns into antitrust action. Among the thornier issues is that it almost cannot be demonstrated that there has been consumer harm, since Google and Facebook offer their services and apps at no cost.


So proponents of regulation and antitrust will have to find some new, lawful argument about the magnitude of harm. Up to this point, the arguments center on potential harm to would-be competitors, not consumers.


Still, as many properly note, antitrust authority approval of the Sprint merger with T-Mobile US would trigger other mergers in the U.S. market, between other access and content providers.


There might be other repercussions. French regulators have opposed consolidation of the French mobile market from four providers to just three, but there are some signs authorities are willing to reconsider, in light of 5G investment costs. That is the argument of the day, it appears.


T-Mobile US and Sprint argue they should be allowed to merge to speed up and increase investment in 5G. However popular that argument might be, studies conducted of European mobile mergers have not found that investment increased.


Still, among the larger questions is the maximum feasible amount of facilities-based competition that can happen in telecom service provider markets. In most countries, the answer is that only one physical network seems sustainable. In a few, two fixed networks have worked, but that is rare.


In mobile markets, multiple facilities have been the rule, but the issue remains: what pattern is sustainable? Most observers would agree a monopoly provider is not optimal. But how many other firms can sustain themselves? Two? Three? Or four?


Approval of a Sprint merger with T-Mobile US would undoubtedly trigger some amount of new thinking on market structure.

Monday, May 21, 2018

Google, Facebook Antitrust: Correlation is Not Causation

For some, it might be clear that the Microsoft antitrust action led directly to the rise of Google and Facebook. Others are not so sure. Some argue the Telecommunications Act of 1996 “succeeded” in bringing innovation and competition to the U.S. telecom market. Others might argue most of the change came from mobile and internet sources.

Those precedents are more relevant today, now that calls are being made for antitrust action against Google and Facebook.

The problem, some might argue, is that it is not entirely clear that any of those earlier “pro-competitive” actions actually were the drivers of innovation and new competition.

There arguably remains significant disagreement about the actual effect of the antitrust action against Microsoft, which prohibited Microsoft from bundling its browser with its operating system. Some argue the action created the climate that allowed Google and Facebook to emerge. Others dispute that notion.

The precedent matters as many now argue Google and Facebook are stifling the emergence of big new application firms. As was the case in the Microsoft situation, competitors are among those who protest the most, arguing that Google and Facebook are stifling innovation.

That does not mean claims raised by would-be competitors to Google and Facebook are irrelevant. The problem is that we cannot be sure the arguments about stifled innovation are correct.

In some part, firms that say they want to challenge Google and Facebook simply do not get funding. That does not mean either Google or Facebook is stifling innovation. It means that potential investors are declining to fund the challengers.

On the other hand, some argue, tying practices by Google and Facebook arguably do make it hard for new competitors to emerge in the product categories being tied. Antitrust actions have been taken in the past to prevent such tying of products in a bundle.

Much will ultimately hinge on what new end user demand could develop in the future, and whether that demand emerges in new ways. Some might argue that the Microsoft antitrust action “failed,” rather than succeeded, as control of the browser has failed to have unique value-creating power.

Others will argue that it was precisely the blocking of any tying of Internet Explorer with Windows that allowed Google to emerge, even if Google’s rise was based on search, not its command of the browser market.

Others would point to Chrome browser share and argue that preventing the tying of Windows to Internet Explorer did, in fact, lead to Google Chrome market share dominance. The issue is “why” Chrome rose to dominance.


These big regulatory changes are not science experiments. We cannot claim that correlation is causation. It is not completely clear that Google and Facebook rose for reasons directly related to Microsoft antitrust.

The internet “changed everything,” including the relative value of app, platform and device products. Some would argue that Chrome won out because it was a superior product to Internet Explorer.

But some also argue that because of the antitrust action, Microsoft moved slower, while competitors moved faster. In that sense, the antitrust action might have aided Google.

There are arguments to be made about creating incentives for more innovation in a market where a few providers have such dominance. Regulators rightly believe they can play a role.

Barring a big business from growing in new areas is an almost-certain prescription for producing a slower-growth profile for the proscribed businesses. That might, or might not, aid other new firms in rising to dominance in new categories.

What remains unclear is whether, in markets with scale effects (ad-supported business models, for example), the emergence of scale leaders is inevitable. Our choice could well be scale leaders in more categories, but not an end to scale leaders.

Will New Internet Access Platforms Disrupt the Market?

Among the bigger questions coming to the fore in the internet access business is whether 5G can become an effective replacement for the fixed network, and whether fixed wireless can do the same to the cabled networks.

The corollary is that some contestants have more motivation to ask such questions than others. Verizon, for example, has the smallest fixed network footprint among tier-one internet access suppliers in the U.S. market.

Comcast, for example, passes (that is, can actually sell service to) about 54 million homes. Charter Communications passes some 50 million home locations.

AT&T’s fixed network passes perhaps 62 million U.S. homes. Verizon, on the other hand, passes perhaps 27 million locations.

If fixed wireless proves to be a more-affordable way to create high-speed internet access at gigabit rates, Verizon can use the platform inside and outside its present fixed network territory. That is important in several ways.

Use of 5G fixed wireless could allow Verizon to offer fiber to home speeds without the cost, in major urban areas where it has not yet ubiquitously deployed Fios FTTH.

Just as important, out-of-region fixed wireless offers a brand new, and sizable, revenue opportunity. Verizon today is unable to compete, out of region, for perhaps 102 million fixed network internet access locations it cannot reach. Verizon itself has argued the 5G fixed wireless opportunity is about 30 percent of U.S. household locations, or perhaps 43 million locations.
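
The arithmetic implied by those figures is worth making explicit (a back-of-the-envelope sketch; the U.S. total is inferred from the numbers above, not independently sourced):

    # Back-of-the-envelope on Verizon's fixed wireless upside.
    in_region = 27_000_000       # locations Verizon's fixed network passes
    out_of_region = 102_000_000  # locations it cannot reach today
    us_locations = in_region + out_of_region   # ~129 million implied

    fw_share = 0.30   # Verizon's claimed 5G fixed wireless opportunity
    print(f"Fixed wireless target: ~{us_locations * fw_share / 1e6:.0f}M locations")

That yields roughly 39 million locations on the implied 129-million base; Verizon's own 43-million figure implies a somewhat larger household base, but the scale is the same.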

Similar questions will be raised about the use of unlicensed spectrum to support commercial access operations. Though proven, that approach traditionally has been eschewed by tier-one service providers.

That does not mean every potential contestant will have the same predilections. Wi-Fi obviously has been deemed commercially feasible in a number of deployment situations. Other access platforms of a non-traditional nature are theoretically possible as well, and arguably will be studied much more seriously by potential new challengers to tier-one access providers.

Just as obviously, the tier-one providers will move to deploy their own solutions that obviate the “need” for other solutions.

That means choices made by some would-be competitors can, and likely will, be different from choices made by tier-one providers. Typically, no single choice is “best” for every deployment scenario. So mixed platform choices are common, even when one platform is preferred.

Small rural ISPs have used fixed wireless. Tier-one telcos have used cabled networks (all copper, fiber to node or fiber to home). Cable companies have used hybrid fiber coax. Mobile operators have used radio networks. Satellite operators have used their satellite networks.

Platform possibilities are multiplying, though. Wide availability of new radios, lots of new unlicensed spectrum, ways to aggregate licensed and unlicensed spectrum and commercialized millimeter wave frequencies all will make a difference. The ability to create private access networks could well emerge as well, especially for venue access.

The point is that we are likely to see new debates about the “best” access technology, or at least debates about “commercially viable” access platforms.

The context there is the extent to which any platform choice works well enough to support the existing business model, is flexible enough to evolve, and offers a hook to the better platforms that will be needed in the future.

Platform and standards wars are anything but unusual in the technology business, and quite common in the telecom and networking businesses as well.

In recent decades, we have seen big debates about:
  • fiber to home versus fiber to curb versus hybrid fiber coax
  • whether metro Wi-Fi can compete with mobile access
  • the value of CDMA versus GSM in the U.S. market
  • Wi-Fi as an access technology to rival a mobile network
  • whether a 5G network can be an effective substitute for fixed network access
  • fixed wireless using unlicensed 60-GHz spectrum (Terragraph) as a cable substitute
  • whether low earth orbit satellites and perhaps other methods (unmanned aerial vehicles, balloons) can be substitutes for traditional cabled networks

In all likelihood, the outcome will not be decided on technological grounds. Virtually always, the business model (deployment cost; fit with existing operations; user acceptance) drives the decision.

Sunday, May 20, 2018

Amazon is Key to Selling Direct-to-Consumer Premium Channels, Study Finds

One traditional concern of programming networks pondering a shift to streaming distribution is the additional cost of marketing direct to consumer. In a linear environment, that marketing effort is undertaken by the distributors (cable, telco, satellite).

As it turns out, the solution for retailing video services is akin to the retailing of many other consumer products: Amazon.

Amazon Prime Channels account for more than half of all direct-to-consumer premium-channel subscriptions, according to The Diffusion Group (TDG).

More than 53 percent of HBO direct-to-consumer accounts were purchased from Amazon Prime Channels. Some 72 percent of Showtime direct-to-consumer accounts were sold that way. Some 70 percent of Starz DTC accounts also were sold by Amazon Prime.



The Era of Zero-Touch, One-Touch and Low-Touch Activation is Coming

It is possible that mobile revenue, globally, will peak as soon as 2021, a prospect that drives mobile operator and fixed-line telco interest in possible internet of things revenue streams, to say nothing of video entertainment and advertising revenues.

In fact, revenue sources will have to change so much that we might call the era beginning with 5G the post-mobile era, in the sense that consumers using phones no longer will be the growth engine.

In fact, much 5G consumer revenue will simply displace existing 4G account revenue, which is why the emphasis has to be on entirely new sources of revenue, coming from someplace other than demand for mobile phone usage.

So connectivity revenue growth, for any tier-one mobile operator, will not, in the future, be driven by consumers and businesses using mobile phone service and mobile internet access.

That same trend can be seen in the Wi-Fi space, where the new WPA3 Wi-Fi security protocol is expected to include features that enable one-touch setup of devices with no screens. In fact, many devices might require zero-touch activation.

If you have had to set up a smart speaker recently, you understand the issue of configuring an appliance with no screen or other obvious direct input features.

That implies something important about where device and usage growth might occur, namely in the internet of things area. For Wi-Fi interests, growth also is seen as shifting to new areas such as IoT and sensor connections that are part of the move towards pervasive computing.

Some people refer to that as the “internet of everything.” The point is that sensors and computing appliances will be ubiquitous, and connected.

Still, connectivity revenue for IoT sensors, appliances and devices might represent about three percent of the annual value of IoT spending, most of which will occur for devices, installation services, apps and platform purchases.

And while better security for any internet-connected device is an obvious intended outcome of WPA3, the ability to easily configure devices shows the coming importance of many low-cost, small internet of things appliances and devices that will be using IP and the internet to connect with remote servers.

It is possible that, by about 2024, almost $1.3 trillion worth of IoT devices will be sold. That is a lot of appliances that will require zero-touch or one-touch activation.
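
Putting those two figures together, in a rough sketch that treats the device-sales forecast as a proxy for total annual IoT spending (a simplification):

    iot_spend = 1.3e12          # forecast annual IoT device sales by ~2024
    connectivity_share = 0.03   # connectivity's approximate share of value
    revenue = iot_spend * connectivity_share
    print(f"Implied connectivity revenue: ~${revenue / 1e9:.0f}B per year")

Roughly $39 billion a year: real money, but a small slice of the total.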

Many Winners and Losers from Generative AI

Perhaps there is no contradiction between low historical total factor annual productivity gains and high expected generative artificial inte...