Thursday, January 30, 2020

Is Private 5G a Threat to Mobile Operator Revenue?

Some believe private 5G or private 4G networks are an elephant in the room, a big potential threat to public network revenue models. Others see an opportunity to supply enterprises with private networks. 

A couple of proven models might clarify the potential upside and downside. Generally speaking, private networks have not historically been a threat to service provider revenue models. The examples include both cabled local area networks and Wi-Fi. In each case, the public network terminates at the side of the building and the internal network is owned and operated by the occupants. 

The LAN business always has been separate from the telecommunications business, and has either stimulated or been neutral in terms of revenue. 

To be sure, an obvious potential business model might have a telco operating a services integration business, building, operating and maintaining either cabled LANs or Wi-Fi networks on behalf of enterprise clients. This has proven difficult, both for telcos and for the few firms that have tried to build a business doing so. 



Boingo, arguably the largest third-party supplier of enterprise venue Wi-Fi and neutral host mobile access in the U.S. market, has total annual revenue in the range of $275 million. As significant as that might be for many firms, it indicates a total addressable market simply too small to support a tier-one telco effort. 

In fact, in recent years, Boingo revenue growth has shifted to supplying distributed antenna system access to venues for mobile service providers. Basically, Boingo supplies the indoor or premises radio network for mobile phone service. 

Another example is the private branch exchange (PBX, or enterprise telephony) business. Telcos historically have preferred not to operate in this segment, as gross revenue and profit margins are close to non-existent. Instead, ecosystem partners, including system integrators and interconnect firms, have occupied this niche in the market. 

The enterprise PBX market has not been large enough, or profitable enough, for the typical telco to pursue. 

Network slicing provides a new wrinkle, however. In principle, a private 5G enterprise network could in turn use a network slice for WAN connectivity. That still leaves the issue of which entity owns and operates the premises network, however. In principle, a network slice is simply another way the enterprise buys a connectivity service, while maintaining its own private local network.

The big takeaway might be that private network markets are not large enough for most telcos to pursue. The costs of building and operating a Wi-Fi network, enterprise telephony or indoor mobile network are not prohibitive for enterprises. So the opportunity for managed services might not be so large for any would-be third party suppliers. 

It remains to be seen whether private 4G or private 5G networks could break from those prior models. But suppliers will have to explore the possibilities. So Ericsson and Capgemini have partnered to explore their opportunities in the private 4G and private 5G network area. 

Telia in Sweden is the first service provider to join Capgemini and Ericsson in looking for commercial projects in the Scandinavian market. 

In terms of business models, Ericsson might hope to sell infrastructure and software. Capgemini might look to provide consulting, implementation and operations. Telia might seek mostly to garner the access revenues. 

But note the possible roles: private 4G or private 5G can be undertaken directly by an enterprise, or might be outsourced to a third party. The issue is whether the third parties might include telcos operating in the system integrator role, or whether that function will, as past patterns suggest, mostly be an opportunity for third parties. 

Beyond that, there is the question of how big the market opportunity might be for third parties. History might suggest the opportunity for telcos is limited, while the upside for third party integrators ultimately also is somewhat limited. 

At low volume, a managed service often is more affordable, but that advantage tends to disappear at scale. Most large enterprises ultimately find that the cost of using a managed service provider exceeds the cost of building and operating a private local network themselves. That is why many enterprises still find they save money by operating their own LANs and PBXes. 

The takeaway might be that private 4G and private 5G will ultimately not prove to be disruptive for mobile service providers, even if significant private network activity occurs.

Wednesday, January 29, 2020

NTT Global Data Centers Execs on What is Happening in Data Center Market



Joe Goldsmith, Chief Revenue Officer, NTT Global Data Centers, Americas, and Steve Manos, Vice President of Global Accounts, NTT Global Data Centers, Americas, talk about what they see happening in the data center market and what NTT is doing to meet customer requirements. 

Tuesday, January 28, 2020

How Many Streaming Services Will Consumers Ultimately Buy?

Virtually nobody believes Netflix will be unaffected by new competition coming from Disney+, Peacock, Apple and HBO Max, as well as existing competition from Amazon Prime and Hulu. So far, Disney+ has gained about 20 million accounts in about a quarter. 

Still, Netflix penetration remained steady throughout 2019, while the penetration rates for both Hulu and Amazon Prime grew about six percent, according to MoffettNathanson and HarrisX.

Some 90 percent of Disney+ customers appear to buy Netflix as well. At least so far, it appears that Disney+ is complementary to Netflix, not cannibalizing it.

It is of course possible that market share could start to shift more significantly when HBO Max launches in May 2020 and Peacock launches in July. The issue is how many different services customers will be willing to buy. 

At the moment, that number seems to hover between two and three subscriptions per household in some studies, and between three and four in others. 

So the issue is whether the market saturates at five or more subscriptions per household. 


That might seem a stretch, but some households, looking at the landscape and at which services have “must see” content, already estimate they might need to buy five different subscriptions. 


The other issue is that Netflix is the first global service. The others arguably will compete primarily in the U.S. market, to start. 


What Will 5G Mean for the Rest of the Ecosystem?



What might 5G mean for all those in the ecosystem aside from mobile service providers? What will change, and what possibly could happen in the rest of the ecosystem, ranging from chips to apps; use cases to business models? How does 5G bring those changes? Where is the upside and the downside?


Dean Bubley, Director, Disruptive Analysis, United Kingdom 
John Ghirardelli, Director, U.S. Innovation, American Tower Corporation, USA 
Ramy Katrib, CEO & Founder, DigitalFilm Tree, USA 
Yang Yang, Co-Director and Professor, School of Information Science and Technology, SHIFT, and ShanghaiTech University, People's Republic of China 
Gary Kim, Consultant, IP Carrier, USA 

Are Wi-Fi Routers Dangerous to Your Health?

As long as I can remember, there have been periodic and generally low-level concerns about non-ionizing radiation--the type of energy radio signals represent. By non-ionizing, we mean the signals are not capable of dislodging electrons from atoms or molecules, as X-rays or gamma rays are. Ionizing radiation, in high doses, is carcinogenic, though useful in low doses. 

Non-ionizing radiation can cause tissue heating, as you can experience with food in a microwave oven. The health concerns about non-ionizing radiation center on potential long-term exposure. As with any form of natural radiation (sunlight, for example), the key is exposure level. 

The key thing about non-ionizing radiation is that it occurs, in real-world communications cases, at very low power levels. Also, signal power falls off rapidly with distance, following an inverse-square law in free space. 

Consider how a Wi-Fi router’s power levels drop with distance. Power levels fall by more than half over the first two meters. Once people are about four meters from the router, signal levels have dropped from milliwatts to microwatts, roughly three orders of magnitude (1,000 times). 

Some people are concerned about power emitted from mobile cell towers. Keep in mind that mobile radios on cell towers have power levels that decay just as Wi-Fi signals do. Some liken the power level of a mobile radio on a tower to that of a light bulb.

Radio signals weaken (attenuate) rapidly: in free space, received power falls with the square of the distance from the transmitter, which is why engineers express the loss on a logarithmic (decibel) scale.

Basically, doubling the distance of a receiver from a transmitter cuts the power density at the new location to one quarter of its previous value. Just three meters from the antenna, a cell tower radio’s power density has dropped by roughly an order of magnitude (10 times).

At 10 meters--perhaps at the base of the tower--power density is down two orders of magnitude. At 500 meters, a distance at which a human actually might be using the signals, power density has dropped by more than five orders of magnitude.
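The inverse-square behavior described above is easy to check numerically. The sketch below treats the transmitter as an ideal isotropic radiator and assumes a 100-milliwatt transmit power, a figure chosen purely for illustration (typical of a Wi-Fi router); real antennas, antenna gain and obstructions change the absolute numbers, but not the basic decay pattern.

```python
import math

def power_density_w_per_m2(tx_power_watts: float, distance_m: float) -> float:
    """Power density of an isotropic radiator at a given distance:
    the transmit power spread over the surface of a sphere of that
    radius (the inverse-square law)."""
    return tx_power_watts / (4 * math.pi * distance_m ** 2)

tx = 0.1  # 100 mW transmit power -- an assumption for illustration
for d in (1, 2, 4, 10, 500):
    s = power_density_w_per_m2(tx, d)
    print(f"{d:>4} m: {s:.2e} W/m^2")
```

Doubling the distance cuts power density to one quarter; going from 1 meter to 500 meters cuts it by a factor of 250,000, which is why exposure at realistic distances from a tower is so small.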


Though there is no scientific evidence that such low levels of non-ionizing radiation have health effects such as causing cancer, a prudent person will limit exposure, just as one takes prudent precautions such as wearing a seat belt in an automobile or limiting time spent in the sun.

Would It Have Made a Difference If Telcos Had Stuck with ATM?

It is no longer a question, but there was a time not so long ago when global telcos argued for asynchronous transfer mode (ATM, the foundation of broadband ISDN) as the next-generation protocol, rather than TCP/IP.  

A report issued by Ovum in 1999 argued that “telecommunications providers are expected to reinvent themselves as operators of packet-switched communications networks by 2005.”

“Growth in Internet Protocol (IP) services is expected to fuel the transition,” Ovum argued.

Of course, all that was before the internet basically recreated the business, making connectivity providers a part of the broader ecosystem of applications, services and devices requiring internet connectivity to function. 

In retrospect, it might seem obvious that the shift of all media types (voice, video, image, text) to digital formats would make TCP/IP a rational choice. But that was not the case in the telecommunications industry from the 1980s to the first decade of the 21st century. Telco engineers argued that ATM was the better choice to handle all media types. 

But the internet, cheap bandwidth and cheap computing all were key forces changing the economics and desirability of IP, compared to ATM. 


Once internet apps became mass market activities, network priorities shifted from “voice optimized” to “data optimized.”

Connection-oriented protocols historically were favored by wide area network managers, while connectionless protocols were favored by local area network managers. The relative cost of bandwidth drove much of the decision making.

WAN bandwidth was relatively expensive; LAN bandwidth was not. That meant the header overhead associated with connectionless protocols such as TCP/IP did not matter on LANs. On WANs, packet overhead mattered more, so lower header overhead was an advantage. 

It is by no means clear that the choice of a connectionless transmission system, instead of the connection-oriented ATM, would have changed the strategic position of the connectivity provider part of the internet ecosystem. Indeed, one key argument for IP was simply cost: IP devices and network elements were much cheaper than ATM-capable devices. 

One might argue the global telecom industry simply had no choice but to go with IP, no matter what its historic preferences might have been.

Monday, January 27, 2020

Applications and Use Cases are Big 6G Challenge

The whole point of any access network is to send and receive user and device data as quickly as possible, as affordably as possible, to and from the core network and all the computing resources attached to it. The future 6G network, no less than the new 5G network, is likely to feature advancements of that type.

Bandwidth will be higher, network ability to support unlimited numbers of devices and sensors will be greater, latency will be even lower and the distance between edge devices and users and computing resources will shrink.

The biggest unknowns are use cases, applications and revenue models, as has been true since 3G. The best analogy is gigabit fixed network internet access. Users often can buy service running at speeds up to a gigabit per second. Few customers presently have use cases that require anything close to those speeds; most applications run comfortably at far lower speeds.

So it is likely that 6G, like 5G, often will feature capabilities that exceed consumer use cases, at least initially.

NTT Docomo has released a white paper with an initial vision of what 6G will entail. Since every mobile generation since 2G has increased speeds and lowered latency, while connection density grew dramatically between 4G and 5G, we might look first to those metrics for change.


Docomo suggests peak data rates of 100 Gbps, latency under one millisecond and device connection density of 10 million devices in each square kilometer would be design goals. 


Along with continued progress on the coverage dimension, 6G standards might extend to space, sky and sea communications as well. Docomo also believes quality-of-service mechanisms exceeding “five nines” and better device performance (devices that never need charging, cheaper devices) would be parts of the standard. 




Looking at commercial, economic or social impact, since the 3G era we have tended to see a lag of execution compared to expectations. In other words, many proposed 3G use cases did not emerge until 4G. Some might say a few key 4G use cases will not flourish until 5G is well underway.


For that reason, we might also discover that many proposed 5G innovations will not actually become typical until the 6G era. Autonomous vehicles are likely to provide an example. 


Docomo also focuses on 6G outcomes, not just network performance metrics. Docomo talks about “solving social problems” as much as about “every place on the ground, sky, and sea” having communications capability. Likewise, 6G might be expected to support the cyber-physical dimensions of experience. 


Also, 5G is the first mobile platform to include key support for machine-to-machine communication, instead of primarily focusing on ways to improve communication between humans. Docomo believes 6G will deepen that trend. 




It is worth noting that the 5G spec for the air interface entails availability higher than the traditional telecom standard of “five nines” (availability of 99.999 percent). 5G networks are designed to run at “six nines.” So 6G might well run at up to “seven nines” (99.99999 percent availability). 


The legacy telecom standard of five nines meant outages or service unavailability of 5.26 minutes a year. The 5G standard equates to less than 32 seconds of network unavailability each year. A seven nines standard means 3.16 seconds of unavailability each year. 
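Those downtime figures follow directly from the availability percentages. A quick sketch of the arithmetic, using a 365.25-day year:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31,557,600 seconds

def downtime_seconds_per_year(nines: int) -> float:
    """Seconds of unavailability per year implied by an availability
    of 99.9...% with the given number of nines."""
    availability = 1 - 10 ** (-nines)
    return SECONDS_PER_YEAR * (1 - availability)

for n in (5, 6, 7):
    print(f"{n} nines: {downtime_seconds_per_year(n):.2f} seconds/year")
```

Five nines allows about 316 seconds (5.26 minutes) of downtime per year, six nines about 32 seconds, and seven nines about 3.2 seconds, matching the figures above.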




Some might say 4G was the first of the digital-era platforms to design in support for the internet of things (machines and computers talking to machines and computers) instead of only the traditional human mobile phone user. That trend is likely to be extended in the 6G era, with more design emphasis on applications and use cases, and with artificial intelligence support as a key design goal. 


In part, that shift to applications and use cases matters because wringing ever-more raw performance out of the network is becoming less critical than creating new use cases that take advantage of the performance boosts. 


Just as almost no consumer users actually “need” gigabit speeds, or even speeds in the hundreds of megabits per second, so few human users or sensors will actually “need” 6G levels of throughput and latency. 


Architecturally, the evolution toward smaller cells will continue, in part to support millimeter wave frequencies, in part to assure better connectivity. Where traditional cell architectures have emphasized non-overlapping coverage, 6G networks might deliberately overlap cells to assure connectivity. 


That almost certainly will require more development of low-cost beam forming and signal path control. Having cheap artificial intelligence is going to help, one might suggest.

Digital Real Estate Destroys Physical Real Estate in Advertising

The “real estate” metaphor long has been applied in the “virtual” spaces created by operating systems ( homescreens and notifications), app...