Wednesday, September 18, 2019

Live Esports Streaming Requires a Different Network than the Esports Competitions

Live game streaming (esports), where viewers watch other gamers compete with each other, is said to be one possible application for network slicing. The notion is that point-to-multipoint (multicast) content delivery benefits from a network optimized for low latency and high bandwidth. 


Of course, there are other ways to support such networks. Traditional live video networks, including those featuring 4K content, have relied on satellite transport to local hubs (cable TV headends) instead of unicast delivery. 


It is not so clear that multicast gaming feeds require transport that is materially different from live broadcast (multicast) video, though the intended display screen is a PC screen. So the constraint might not be the wide area delivery network itself but the business arrangements around episodic use of the network. Is the content delivered as a consumer-facing “channel” that is programmed “all the time,” or as an episodic event more akin to a podcast, videoconference or other discrete event? 


source: Rethink Technology Research


Multicast ABR, for example, is a delivery format proposed by CableLabs for multicasting adaptive bitrate video content instead of using the more capacity-intensive unicast method of delivery.  
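To make the capacity argument concrete, here is a minimal sketch of aggregate wide area load under unicast versus multicast delivery. The stream bitrate and viewer count are hypothetical figures chosen only for illustration, not numbers from CableLabs or any operator.

```python
# Rough comparison of aggregate WAN load for unicast vs. multicast delivery.
# STREAM_MBPS and VIEWERS_PER_HUB are hypothetical, illustrative values.

STREAM_MBPS = 15          # assumed bitrate of one live stream profile
VIEWERS_PER_HUB = 10_000  # assumed concurrent viewers behind one local hub

# Unicast: every viewer receives an individual copy across the WAN.
unicast_wan_mbps = STREAM_MBPS * VIEWERS_PER_HUB

# Multicast (including multicast ABR): one copy of each stream profile
# crosses the WAN per hub; replication happens closer to the viewer.
multicast_wan_mbps = STREAM_MBPS

print(f"Unicast WAN load:   {unicast_wan_mbps:,} Mbps")
print(f"Multicast WAN load: {multicast_wan_mbps:,} Mbps per stream profile")
```

The same arithmetic explains why satellite distribution to headends, which is inherently point-to-multipoint, has long been used for live channels.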


Satellite networks might still work for esports packaged as a live channel. Other WANs might work for more episodic events, especially when edge caching is available. 


Many say 5G will be an enabler for live game streaming, and that is true in some fundamental senses: the 5G network will use a virtualized core network that makes network slicing possible. 


It also is true that the 5G core network is designed to support distributed computing, and is therefore an enabler of edge computing and caching. That might be another approach for supporting live gaming streams, much as content delivery networks have been used to speed up app performance generally. 


On the other hand, 5G as an access technology might, or might not, be necessary. A custom VPN is one approach. But satellite delivery to edge locations such as headends also is an option. Multicasting using a network slice also works. 


In that scenario, 5G latency performance might contribute to the experience, but it really is the edge computing or the network slicing that contributes most to the low-latency performance. 


Also, creating a low-latency, high-bandwidth network for the actual playing of esports games is a different matter from streaming such matches. The former requires a high-performance unicast network; the latter, being a multicast operation, might or might not rely on such a network. 


Where most present internet operations are unicast (one-to-one) sessions, streaming of video or live esports content is a multicast (one-to-many or many-to-many) operation. 



WAN latency performance is key, though it typically is not so much the optical WAN performance as the capabilities of the IP networks riding on those optical WANs that dictate the limits of experience. Also, large venues are needed for such competitions, so premises networking and high-bandwidth access facilities are a must. 


The ability to handle episodic surges in the actual gaming might be an issue for the access connection or the WAN transport. That is an issue either network slicing or raw bandwidth provisioning might address. 


The streaming of selected portions of the gaming competitions is a separate matter. 

Streaming live esports, in other words, is one networking problem. Supporting an esports tournament is another. And streaming selected portions of such competitions arguably is a third issue. Network slicing is one potential way of handling the streaming. But there are likely going to be other ways, as well.

Tuesday, September 17, 2019

Wholesale Business Models Work, Up to a Point

There is a reason many service providers prefer a facilities-based approach to sourcing their key network platforms: it is a way to control development of features and costs. On the other hand, there are many instances when service providers of many types either must use wholesale leased facilities, or prefer to do so. 

Some common examples are market entry into new geographies where market share is expected to be low; where regulatory barriers to facilities ownership or operation might exist; or where network services are ancillary to some other business model. 

All that said, there are some clear dangers for any service provider that expects it might have major market share, or wishes to price and package its services in a manner different from the prevailing market norms. Consider Altice Mobile, entering the U.S. mobile market using disruptive pricing, and using both wholesale leased access and owned facilities to enable that strategy. 

Altice Mobile offers unlimited data, text and talk nationwide; unlimited mobile hotspot; unlimited video streaming; unlimited international text and talk from the U.S. to more than 35 countries, including Canada, Mexico, the Dominican Republic, Israel, most of Europe and more; and unlimited data, text and talk while traveling abroad in those same countries, for $20 per device for Altice fixed network customers. 

It will be able to do so because it has a deal with Sprint giving Sprint no-charge access to Altice fixed networks to place small cells and other cell sites; in return, Altice gets favorable MVNO terms for roaming access to Sprint’s network. 

That is one good example of why a facilities approach allows more freedom to create disruptive and differentiated offers. 

That has been true in the internet access business as well. Back in the days of dial-up access, ISPs could get into the business by using the voice lines customers already were paying for to supply internet access over those same circuits. 

All that changed in the broadband era, as ISPs suddenly had to buy wholesale access from the underlying service providers, and had to invest heavily in central office gear and space. For a time, when the Federal Communications Commission was mandating big wholesale discounts, and before broadband changed capital requirements, the wholesale access model worked. 

When the FCC switched its emphasis to facilities-based competition, wholesale rates were allowed to rise to market levels and capex to support broadband soared, and the wholesale business model collapsed. 

As recently as 2005, major independent U.S. dial-up ISPs had at least 49 percent market share. Many smaller independent providers had most of the remaining 51 percent. 


In the early days of the broadband market, firms such as Covad, Northpoint and Rhythms Netconnections were getting significant market share. 


More recently, Google Fi has been marketing internet access in a distinctive way, basically a “pay only for what you use” model at about $10 a gigabyte, with unused capacity rolling over to the next billing period. It works at low usage levels, but appears to become unworkable when most users start consuming 15 Gbytes per month, close to the 22 Gbyte point at which Google Fi wholesale suppliers throttle speeds. 

Also, Google Fi caps fees at a maximum of $80 per month, no matter how much data is consumed. The model becomes unworkable when Google Fi faces the danger of charging its retail customers less than the wholesale fees it has to pay the wholesale capacity providers. 
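A back-of-the-envelope sketch makes the squeeze clear. The $10-per-gigabyte retail rate and the $80 monthly cap come from the description above; the wholesale cost per gigabyte is a purely hypothetical assumption for illustration, since actual MVNO rates are not public.

```python
# Sketch of the retail-versus-wholesale squeeze in a capped pay-per-GB model.
# RETAIL_PER_GB and MONTHLY_CAP reflect the figures cited above;
# ASSUMED_WHOLESALE_PER_GB is a hypothetical illustration, not a real rate.

RETAIL_PER_GB = 10.0             # dollars per gigabyte, retail
MONTHLY_CAP = 80.0               # retail fee ceiling per month
ASSUMED_WHOLESALE_PER_GB = 3.0   # hypothetical wholesale cost per gigabyte

def retail_revenue(gb_used: float) -> float:
    """Retail fee under a pay-per-GB model with a monthly cap."""
    return min(gb_used * RETAIL_PER_GB, MONTHLY_CAP)

def wholesale_cost(gb_used: float) -> float:
    """What the reseller might owe its wholesale capacity supplier."""
    return gb_used * ASSUMED_WHOLESALE_PER_GB

for usage in (2, 8, 15, 22, 30):
    rev, cost = retail_revenue(usage), wholesale_cost(usage)
    print(f"{usage:>2} GB: revenue ${rev:6.2f}, cost ${cost:6.2f}, margin ${rev - cost:7.2f}")
```

Once typical usage climbs past the point where the cap binds, every additional gigabyte adds wholesale cost without adding retail revenue, which is the dynamic described above.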

The larger point is that wholesale-based business models work, up to a point. In volume, owned facilities become important to contain costs. Network ownership also provides more flexibility in creating unusual or disruptive retail packages.

There Always are Trade-Offs, Whether it is Edge Computing, Phone Design, Human Health or Greenhouse Gases

Engineers who build communication networks and devices always are aware that design, performance and cost trade-offs must be made. Network architects know there are trade-offs in building computing networks. Marketers know price and take rates are inversely related. CEOs have to balance the rival objectives of investment, debt reduction and shareholder return, allocating resources among customers, employees, shareholders, bankers and other stakeholders.

Likewise, economists emphasize that policy decisions always involve choices, and choices have unwanted consequences, whether intended or unintended. 

As it turns out, the same seems to be true for human nutrition and greenhouse gas reduction. 

Though plant-based diets are touted as a way of reducing greenhouse gases, there are always trade-offs. In fact, “achieving an adequate, healthy diet in most low- and middle-income countries will require a substantial increase in greenhouse gas emissions and water use due to food production,” according to new research from the Johns Hopkins Center for a Livable Future based at the Johns Hopkins Bloomberg School of Public Health.

Consider the trade-offs for dairy products. “Our data indicate that it is actually dairy product consumption that explains much of the differences in greenhouse gas footprints across diets,” says Martin Bloem, Johns Hopkins Center for a Livable Future director. “Yet, at the same time, nutritionists recognize the important role dairy products can have in stunting prevention. The goals of lowering greenhouse gas emissions by limiting dairy intake conflict directly with the benefits of dairy products for human growth.” 

"There will always be tradeoffs,” he says. “Environmental impact alone cannot be a guide for what people eat.”

Most business strategy involves trade-offs of these types.

Friday, September 13, 2019

Computing Architectures Now are Dependent on WAN Performance, Not LAN

These days, computing performance mostly hinges on the wide area network, not the "local" area network, a big change from earlier eras of computing. If you think about it, devices now communicate even with other devices in the same room, building or campus using the WAN, not the LAN. Yes, we use Wi-Fi, but that is simply a local extension of WAN access and transport.

Local area network design parameters have changed over the past 50 years, along with changes in computing architecture. Prior to 1980, the computing devices were mainframes or mini-computers and the local networks simply needed to connect dumb terminals with the mainframe. There was limited requirement for wide area communications. 


The personal computer changed all that. Relatively low-cost PCs replaced mainframes as the standard computing appliance, creating a need to connect PCs at relatively low speed (10 Mbps or less). There still was limited need for WAN communications. 

Client-server computing dramatically increased local network communications requirements, adding servers as centralized network elements that PCs needed to communicate with frequently. That also drove requirements for LAN speed up to 100 Mbps. 

In the next era of distributed or network computing, perhaps the salient new requirement was for wide area communications, as more of the computing load moved to remote data centers and required networking of branch locations. 

In the present “web” era, broadband communications over wide area networks are mandatory, as processing load shifts to the far end of the WAN. There is less need for networking local devices, as most local computers interact directly with the far-end cloud data centers. 

In fact, many “local” communications actually rely on connections with the far-end data centers. Cloud printing, email, messaging provide examples: the communications between local devices actually are routed across the WAN to far-end servers and back again. The “local” network uses Wi-Fi, but the logical connections are from device to far-end servers and data centers. 
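As an illustration of why the logical path matters, consider a “local” operation, such as sending a document to a printer in the same building via a cloud service, compared with sending it directly over the LAN. All latency figures below are hypothetical assumptions chosen only to show the shape of the comparison.

```python
# Hypothetical latency budget: direct LAN path vs. a cloud-routed path for a
# nominally "local" operation. All values are illustrative assumptions.

LAN_ONE_WAY_MS = 1       # assumed device-to-device latency on the local network
ACCESS_ONE_WAY_MS = 10   # assumed latency across the access network
WAN_ONE_WAY_MS = 30      # assumed latency to a far-end cloud data center
SERVER_PROCESS_MS = 20   # assumed processing time at the cloud service

# Direct LAN path: device -> printer
lan_path_ms = LAN_ONE_WAY_MS

# Cloud-routed path: device -> WAN -> cloud service -> WAN -> printer
cloud_path_ms = (ACCESS_ONE_WAY_MS + WAN_ONE_WAY_MS) + SERVER_PROCESS_MS \
                + (WAN_ONE_WAY_MS + ACCESS_ONE_WAY_MS)

print(f"Direct LAN path:   ~{lan_path_ms} ms")
print(f"Cloud-routed path: ~{cloud_path_ms} ms")
```

The absolute numbers are not the point; the point is that even same-room communications now inherit the latency and bandwidth characteristics of the WAN.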

Local network performance requirements have changed with the architectures. Early networks did not require LAN speeds much faster than 4 Mbps to 10 Mbps. Use of distributed networks contacting far-end servers created a new need for higher-speed WAN connections. Today’s web architectures require high-speed local and wide area connections. 

The Wi-Fi “local area network” access is mostly an on-ramp to the WAN. In essence, the broad global network (private IP or public internet) replaces the LAN, and each local device communicates with other local devices using the WAN. Wi-Fi, essentially, is simply the local connection to the WAN. 

Application performance requirements also have changed. Early character-based apps (whether mainframe or mini-computer) did not require high-bandwidth or low-latency performance. Today’s multi-media apps do require high bandwidth as a matter of course. And some apps--especially voice or video conferencing--require low latency as well. 

Coming sensor apps (internet of things) will create more-stringent demands for latency and bandwidth performance exceeding parameters for voice and video conferencing, in some use cases. Virtual reality is probably the best example of bandwidth requirements. Sensors involved with auto safety provide the best examples of latency-dependent communications. 

The point is that, these days, LAN performance arguably is secondary to WAN performance.

Wednesday, September 11, 2019

5G Offers Incremental Revenue Upside, IF...

New 5G services for business and enterprise customers might boost connectivity revenues for mobile operators in a variety of use cases, assuming 5G becomes a favored connectivity choice, compared to other available choices, including 4G and unlicensed networks. Consider connected vehicles. 

By 2022, Ford expects most of its cars to come with built-in C-V2X technology, an LTE-based (4G) platform. By definition, that is a rival platform to 5G. Also, connected car platforms include DSRC-based V2V technology, which might be used more broadly by GM and Volkswagen. 

Other use cases, including industrial internet of things, might also use a variety of platforms other than 5G, including 4G platforms, wide area low-power platforms or Wi-Fi. 

So 5G represents potential enterprise connectivity revenue upside for mobile service providers, but only if 5G proves more attractive than other connectivity options, some based on 4G, others supplied by rival platforms using unlicensed spectrum. 

Optimists also see incremental upside from mobile operator participation in internet of things or other smart device use cases where the mobile operator owns a solution or can partner with a solution provider. But “network access” is not the only new battlefield. 

Custom networks based on network slicing and edge computing also will be new potential value drivers for enterprise buyers, and, as always, enterprise solution architects will have other supplier choices. In some markets, mobile operators might have new wholesale opportunities as some enterprises seek to create private 5G networks. 

In fact, 5G access, in and of itself, might prove to be less a driver than the other features (customized networks, edge computing). And in some countries, private 5G networks will not create a new wholesale opportunity for mobile service providers, as enterprises will be able to use new spectrum specifically allowing them to create private 5G networks on their own. 

The point is that 5G will have to prove its value for enterprise customers who will have other choices, including mobile operator 4G solutions.

Monday, September 9, 2019

Streaming Video an Opportunity for Some Connectivity Providers



Lars Larsson, CEO of Varnish Software, and Richard Craig-McFeely, Strategy & Marketing Director of Digital Media at Interxion, discuss the upside. 


AT&T Has to Change, Says Elliott Management

Perhaps it never is a good sign when a major institutional investor calls for changes. That virtually always is an indicator that a firm is not performing as well--as an asset--as the market expects. AT&T now is getting such attention from Elliott Management.

It is never difficult to find investor, equity analyst or commentator criticism of AT&T's major acquisitions since the failed effort to acquire T-Mobile US. Others might point to the deals since the DirecTV acquisition. Though execution sometimes is the main complaint, strategic error is the most common charge. By way of comparison, such critics say, Verizon has done better by sticking to its connectivity knitting. 

It is not an easy matter to decipher, at this point. AT&T equity performance has lagged the market, Verizon and T-Mobile US. Debt taken on to fuel the acquisitions is a frequent concern. 

Granting the concerns, there is another way to view the acquisitions (execution notwithstanding), and that is as a rational strategy. When an industry reaches a key turning point in its lifecycle, “doing more of the same” might well be called a questionable strategy, especially for the leaders in that industry. 

Without much question, core products ranging from voice to messaging to internet access and linear video are either past their peak or getting close to it. The huge attention being paid to 5G, edge computing and internet of things can be interpreted as evidence of that maturation. 

With the caveat that some might fault the execution rather than the strategy, or the choice of assets rather than the strategy, AT&T has diversified its position in the internet ecosystem substantially since 2010. Verizon has not done so. One might well make the argument that none of these businesses is a fast grower. But one might also argue that some of these lines of business stand a better chance of holding their own, while providing exposure to multiple roles within the ecosystem, not simply the “connectivity” role. 

Facing natural limits in its core market, any firm might rationally consider a change of business model and product set. Where would Facebook or Google be today had they not pivoted to a “mobile-first” strategy? What if PayPal had stuck to its original cryptography plan? What if Apple had stuck to PCs? What if YouTube had remained a site for quirky videos?

What if Singtel had remained a “Singapore only” telecom firm? The point is that the connectivity business, traditionally a no-growth or slow-growth business with guaranteed profits, now is a no-growth or slow-growth business in a competitive environment. 

In fact, the U.S. Justice Department quashing the AT&T effort to buy T-Mobile US--on market concentration grounds--might validate the “limited growth in the core business” argument. Justice Department officials made it clear that AT&T would not be allowed to get bigger in the U.S. mobile market. 

Even in its failure to win antitrust approval, AT&T was confronted by the absolute need to look for growth somewhere other than its traditional core businesses. The damage, one might argue, came from the terms of the deal's breakup fee, which allowed T-Mobile US to fund its market attack. 

One can argue with the precise assets acquired, the strategy behind those acquisitions or the execution and integration, without disagreeing that AT&T has to find massive new revenue sources to fuel its growth and pay its dividends. 

By implication, many other tier-one service providers will have to undertake similar diversification moves. The issue is not “conglomerate or not.” It is hard to escape the conclusion that tomorrow’s tier-one connectivity providers will have transformed, much as Comcast has changed from a video distributor into a firm active in many parts of the internet ecosystem (theme parks, movie production, network ownership, business services, wireless and mobile). 

One might argue alternative major assets should have been purchased, though it is hard to see how the free cash flow from DirecTV could have been gotten from any other feasible acquisition. And if content ownership as a building block of tomorrow’s video subscription business is necessary, few assets other than Time Warner were available, providing both segment scale and free cash flow, within the constraints of the debt burden to be imposed. 

The point is that the U.S. connectivity business is growing very slowly, while every major product category is either in the declining phase of its life cycle, or will be, soon. 

Other firms have other options, as often is the case in any market with a few dominant providers and many upstarts and specialists. “Grow by taking share” still makes sense for T-Mobile US and many other smaller connectivity providers. It is not an option for AT&T, for the most part. 

As the largest U.S. connectivity services provider, AT&T has to expect to lose share, even if it executes well. 

None of that will spare AT&T criticism over its equity valuation, strategic choices, acquisitions or execution. But the move to occupy different roles within the ecosystem (“moving up the stack,” some might say) is hardly foolhardy. It is a rational response to market circumstances and product life cycles.

Will AI Fuel a Huge "Services into Products" Shift?

As content streaming has disrupted music and is disrupting video and television, so might AI potentially disrupt industry leaders ranging from ...