Wednesday, September 18, 2019

Live Esports Streaming Requires a Different Network than the Esports Competitions

Live game streaming (esports), where viewers watch other gamers compete with each other, is said to be one possible application for network slicing. The notion is that this sort of point-to-multipoint (multicast) content delivery benefits from a network optimized for low latency and high bandwidth. 


Of course, there are other ways to support such networks. Traditional live video networks, including those featuring 4K content, have relied on satellite transport to local hubs (cable TV headends) instead of unicast delivery. 


It is not so clear that multicast gaming feeds require transport that is materially different from live broadcast (multicast) video, though the intended display is a PC screen rather than a TV. So the constraint might not be the wide area delivery network itself but the business arrangements around episodic use of the network. Is the content delivered as a consumer-facing “channel” that is programmed “all the time,” or as an episodic event more akin to a podcast, videoconference or other discrete event? 


source: Rethink Technology Research


Multicast ABR (adaptive bitrate), for example, is a new delivery method proposed by CableLabs for multicasting video content instead of the more capacity-consumptive unicast method of delivery.  


Satellite networks might still work for esports packaged as a live channel. Other WANs might work--especially when edge caching is available--for more episodic events. 


Many say 5G will be an enabler for live game streaming, and that is true in some fundamental senses: the 5G network will use a virtualized core network that makes network slicing possible. 


It also is true that the 5G core network is designed to support distributed computing, and is therefore also an enabler of edge computing and caching, which might also be an approach for supporting live gaming streams, as content delivery networks have been used to speed up app performance generally. 


On the other hand, 5G as an access technology might, or might not, be necessary. A custom VPN is one approach. But satellite delivery to edge locations such as headends also is an option. Multicasting using a network slice also works. 


In that scenario, 5G latency performance might contribute to the experience, but it really is the edge computing or the network slicing that contributes most to the low-latency performance. 


Also, creating a low-latency, high-bandwidth network for the actual playing of esports games is a different matter from streaming such matches. The former requires a high-performance unicast network; the latter might, or might not, rely on such a network, since streaming is a multicast operation. 


Where most present internet operations are unicast, one-to-one sessions, streaming of video or live esports content is multicast, a one-to-many or many-to-many operation. 
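
To make the difference concrete, here is a minimal back-of-the-envelope sketch in Python comparing the aggregate WAN load of unicast versus multicast delivery of one live stream. The bitrate, viewer count and edge-site count are illustrative assumptions, not measured figures.

    # Back-of-the-envelope comparison of unicast vs. multicast WAN delivery load.
    # All figures are illustrative assumptions, not measurements.

    STREAM_BITRATE_MBPS = 8      # assumed bitrate of one live esports stream
    VIEWERS = 100_000            # assumed concurrent viewers
    EDGE_SITES = 200             # assumed headends or edge caches receiving one multicast copy

    # Unicast: the origin or CDN tier sends a separate copy to every viewer.
    unicast_wan_load_gbps = STREAM_BITRATE_MBPS * VIEWERS / 1_000

    # Multicast: the WAN carries one copy per edge site; replication to viewers
    # happens in the access or local network.
    multicast_wan_load_gbps = STREAM_BITRATE_MBPS * EDGE_SITES / 1_000

    print(f"Unicast WAN load:   {unicast_wan_load_gbps:,.0f} Gbps")
    print(f"Multicast WAN load: {multicast_wan_load_gbps:,.1f} Gbps")

Under those assumptions, unicast delivery of a single stream consumes hundreds of gigabits per second across the WAN, while multicast delivery to edge sites consumes a few gigabits per second.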



WAN latency performance is key, though it typically is not so much the optical WAN itself as the capabilities of the IP networks running over it that dictate the limits of experience. Also, large venues are needed for such competitions, so premises networking and high-bandwidth access facilities are a must. 


The ability to handle episodic surges in the actual gaming might be an issue for the access connection or the WAN transport. That is an issue either network slicing or raw bandwidth provisioning might address. 


The streaming of selected portions of the gaming competitions is a separate matter. 

Supporting the actual play at an esports tournament, in other words, is one networking problem. Streaming those matches live is another. And streaming selected portions of such competitions arguably is a third. Network slicing is one potential way of handling the streaming. But there are likely going to be other ways, as well.

Tuesday, September 17, 2019

Wholesale Business Models Work, Up to a Point

There is a reason many service providers prefer a facilities-based approach to sourcing their key network platforms: it is a way to control development of features and costs. On the other hand, there are many instances when service providers of many types either must use wholesale leased facilities, or prefer to do so. 

Some common examples are market entry into new geographies where market share is expected to be low; where regulatory barriers to facilities ownership or operation might exist; or where network services are ancillary to some other business model. 

All that said, there are some clear dangers for any service provider that expects it might have major market share, or wishes to price and package its services in a manner different from the prevailing market norms. Consider Altice Mobile, entering the U.S. mobile market using disruptive pricing, and using both wholesale leased access and owned facilities to enable that strategy. 

Altice Mobile offers unlimited data, text and talk nationwide; unlimited mobile hotspot; unlimited video streaming; unlimited international text and talk from the U.S. to more than 35 countries, including Canada, Mexico, the Dominican Republic, Israel, most of Europe and more; and unlimited data, text and talk while traveling abroad in those same countries, all for $20 per device for Altice fixed network customers. 

It will be able to do so because it has a deal with Sprint that gives Sprint no-charge access to Altice's fixed networks to place small cells and other cell sites; in return, Altice gets favorable MVNO terms for access to Sprint's mobile network. 

That is one good example of why a facilities approach allows more freedom to create disruptive and differentiated offers. 

That has been true in the internet access business as well. Back in the days of dial-up access, ISPs could get into the business and use the voice bandwidth customers already were paying for to supply internet access over the voice circuit. 

All that changed in the broadband era, as ISPs suddenly had to buy wholesale access from the underlying service providers, and had to invest heavily in central office gear and space. For a time, when the Federal Communications Commission was mandating big wholesale discounts, and before broadband changed capital requirements, the wholesale access model worked. 

When the FCC switched its emphasis to facilities-based competition, wholesale rates were allowed to reach market rates, and capex to support broadband soared, the wholesale business model collapsed. 

As recently as 2005, major independent U.S. dial-up ISPs had at least 49 percent market share. Many smaller independent providers had most of the remaining 51 percent. 


In the early days of the broadband market, firms such as Covad, Northpoint and Rhythms Netconnections were getting significant market share. 


More recently, Google Fi has been marketing internet access in a distinctive way, basically a “pay only for what you use” model at about $10 a gigabyte, with unused capacity rolling over to the next billing period. It works at low usage levels, but appears to become unworkable when most users start consuming 15 Gbytes per month, close to the 22 Gbyte point at which Google Fi wholesale suppliers throttle speeds. 

Also, Google Fi capped fees at a maximum of $80 per month, no matter how much data was consumed. The model becomes unworkable when Google Fi faces the danger of charging its retail customers less than the wholesale fees it has to pay the wholesale capacity providers. 
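
To see why, consider a rough sketch of the per-subscriber arithmetic. The retail terms ($10 per gigabyte, capped at $80 per month) are those described above; the wholesale cost per gigabyte is a purely hypothetical figure used only for illustration.

    # Rough, illustrative arithmetic for a "pay only for what you use" plan.
    # Retail terms ($10 per GB, $80 cap) are as described above; the wholesale
    # rate is a hypothetical assumption, not a known figure.

    RETAIL_PER_GB = 10.0
    RETAIL_CAP = 80.0
    ASSUMED_WHOLESALE_PER_GB = 4.0   # hypothetical cost paid to host networks

    def monthly_margin(usage_gb: float) -> float:
        """Capped retail revenue minus assumed wholesale cost, per subscriber."""
        revenue = min(usage_gb * RETAIL_PER_GB, RETAIL_CAP)
        cost = usage_gb * ASSUMED_WHOLESALE_PER_GB
        return revenue - cost

    for gb in (2, 8, 15, 22, 40):
        print(f"{gb:>3} GB/month -> margin ${monthly_margin(gb):8,.2f}")

At low usage the cap never binds and margins are positive; as typical usage approaches and passes the cap, each additional gigabyte adds wholesale cost but no retail revenue.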

The larger point is that wholesale-based business models work, up to a point. At volume, owned facilities become important to contain costs. Network ownership also provides more flexibility in creating unusual or disruptive retail packages.

There Always are Trade-Offs, Whether it is Edge Computing, Phone Design, Human Health or Greenhouse Gases

Engineers who build communication networks and devices always are aware that design, performance and cost trade-offs must be made. Network architects know there are trade-offs in building computing networks. Marketers know price and take rates are inversely related. CEOs have to balance the rival objectives of investment, debt reduction and shareholder return, allocating resources to customers, employees, shareholders, bankers and other stakeholders.

Likewise, economists emphasize that policy decisions always involve choices, and choices have unwanted consequences, whether intended or unintended. 

As it turns out, the same seems to be true for human nutrition and greenhouse gas reduction. 

Though plant-based diets are touted as a way of reducing greenhouse gases, there are always trade-offs. In fact, “achieving an adequate, healthy diet in most low- and middle-income countries will require a substantial increase in greenhouse gas emissions and water use due to food production,” according to new research from the Johns Hopkins Center for a Livable Future based at the Johns Hopkins Bloomberg School of Public Health.

Consider the trade-offs for dairy products. “Our data indicate that it is actually dairy product consumption that explains much of the differences in greenhouse gas footprints across diets,” says Martin Bloem, Johns Hopkins Center for a Livable Future director. “Yet, at the same time, nutritionists recognize the important role dairy products can have in stunting prevention. The goals of lowering greenhouse gas emissions by limiting dairy intake conflict directly with the benefits of dairy products for human growth.” 

"There will always be tradeoffs,” he says. “Environmental impact alone cannot be a guide for what people eat.”

Most business strategy involves trade-offs of these types.

Friday, September 13, 2019

Computing Architectures Now are Dependent on WAN Performance, Not LAN

These days, computing performance mostly hinges on the wide area network, not the "local" area network, a big change from earlier eras of computing. If you think about it, devices now communicate even with other devices in the same room, building or campus using the WAN, not the LAN. Yes, we use Wi-Fi, but that is simply a local extension of WAN access and transport.

Local area network design parameters have changed over the past 50 years, along with changes in computing architecture. Prior to 1980, the computing devices were mainframes or mini-computers and the local networks simply needed to connect dumb terminals with the mainframe. There was limited requirement for wide area communications. 


The personal computer changed all that. Relatively low-cost PCs replaced mainframes as the standard computing appliance, creating a need to connect PCs at relatively low speed (10 Mbps or less). There still was limited need for WAN communications. 

Client-server computing dramatically increased local network communications requirements, adding servers as centralized network elements with which PCs needed to communicate frequently. That drove requirements for LAN speed up to 100 Mbps. 

In the next era of distributed or network computing, perhaps the salient new requirement was for wide area communications, as more of the computing load moved to remote data centers and required networking of branch locations. 

In the present “web” era, broadband communications over wide area networks are mandatory, as processing load shifts to the far end of the WAN. There is less need for networking local devices, as most local computers interact directly with the far-end cloud data centers. 

In fact, many “local” communications actually rely on connections with the far-end data centers. Cloud printing, email and messaging provide examples: the communications between local devices actually are routed across the WAN to far-end servers and back again. The “local” network uses Wi-Fi, but the logical connections run from device to far-end servers and data centers. 

Local network performance requirements have changed with the architectures. Early networks did not require LAN speeds much faster than 4 Mbps to 10 Mbps. Use of distributed networks contacting far-end servers created a new need for higher-speed WAN connections. Today’s web architectures require high-speed local and wide area connections. 

The Wi-Fi “local area network” access is mostly an on-ramp to the WAN. In essence, the broad global network (private IP or public internet) replaces the LAN, and each local device communicates with other local devices using the WAN. Wi-Fi, essentially, is simply the local connection to the WAN. 

Application performance requirements also have changed. Early character-based apps (whether on mainframes or mini-computers) did not require high-bandwidth or low-latency performance. Today's multimedia apps do require high bandwidth as a matter of course. And some apps--especially voice or video conferencing--require low latency as well. 

Coming sensor apps (internet of things) will create more-stringent demands for latency and bandwidth, exceeding the parameters for voice and video conferencing in some use cases. Virtual reality is probably the best example of high bandwidth requirements. Sensors involved with auto safety provide the best examples of latency-dependent communications. 

The point is that, these days, LAN performance arguably is secondary to WAN performance.

Wednesday, September 11, 2019

5G Offers Incremental Revenue Upside, IF...

New 5G services for business and enterprise customers might boost connectivity revenues for mobile operators in a variety of use cases, assuming 5G becomes a favored connectivity choice, compared to other available choices, including 4G and unlicensed networks. Consider connected vehicles. 

By 2022, Ford expects most of its cars to come with built-in C-V2X technology, an LTE-based (4G) platform. By definition, that is a rival platform to 5G. Also, connected car platforms include DSRC-based V2V technology, which might be used more broadly by GM and Volkswagen. 

Other use cases, including industrial internet of things, might also use a variety of platforms other than 5G, including 4G platforms, wide area low-power platforms or Wi-Fi. 

So 5G represents potential enterprise connectivity revenue upside for mobile service providers, but only if 5G proves more attractive than other connectivity options, some based on 4G, others supplied by rival platforms using unlicensed spectrum. 

Optimists also see incremental upside from mobile operator participation in internet of things or other smart device use cases where the mobile operator owns a solution or can partner with a solution provider. But “network access” is not the only new battlefield. 

Custom networks based on network slicing and edge computing also will be new potential value drivers for enterprise buyers, and, as always, enterprise solution architects will have other supplier choices. In some markets, mobile operators might have new wholesale opportunities as some enterprises seek to create private 5G networks. 

In fact, 5G access, in and of itself, might prove to be less a driver than the other features (customized networks, edge computing). And in some countries, private 5G networks will not create a new wholesale opportunity for mobile service providers, as enterprises will be able to use new spectrum specifically allowing them to create private 5G networks on their own. 

The point is that 5G will have to prove its value for enterprise customers who will have other choices, including mobile operator 4G solutions.

Monday, September 9, 2019

Streaming Video an Opportunity for Some Connectivity Providers



Lars Larsson, CEO of Varnish Software, and Richard Craig-McFeely, Strategy & Marketing Director for Digital Media at Interxion, discuss the upside. 


AT&T Has to Change, Says Elliott Management

Perhaps it never is a good sign when a major institutional investor calls for changes. That virtually always is an indicator that a firm is not performing as well--as an asset--as the market expects. AT&T now is getting such attention from Elliott Management.

It is never difficult to find investor, equity analyst or commentator criticism of AT&T's major acquisitions since the failed effort to acquire T-Mobile US. Others might point to the deals since the DirecTV acquisition. Though execution sometimes is the main complaint, strategic error is the most common charge. By way of comparison, such critics say, Verizon has done better by sticking to its connectivity knitting. 

It is not an easy matter to decipher, at this point. AT&T equity performance has lagged the market, Verizon and T-Mobile US. Debt taken on to fuel the acquisitions is a frequent concern. 

Granting the concerns, there is another way to view the acquisitions (execution notwithstanding), and that is as a rational strategy. When an industry reaches a key turning point in its lifecycle, “doing more of the same” might well be called a questionable strategy, especially for the leaders in that industry. 

Without much question, core products ranging from voice to messaging to internet access and linear video are either past their peak or getting close to one. The huge attention being paid to 5G, edge computing and internet of things can be interpreted as evidence of that maturation. 

With the caveat that some might fault the execution, not the strategy, or the choice of assets, rather than the strategy, AT&T has diversified its position in the internet ecosystem substantially since 2010. Verizon has not done so. One might well make the argument that none of these businesses is a fast grower. But one might also argue that some of these lines of business stand a better chance of holding their own, while providing exposure to multiple roles within the ecosystem, not simply the “connectivity” role. 

Facing natural limits in its core market, any firm might rationally consider a change of business model and product set. Where would Facebook or Google be today had they not pivoted to a “mobile-first” strategy? What if PayPal had stuck to its original cryptography plan? What if Apple had stuck to PCs? What if YouTube had remained a site for quirky videos?

What if Singtel had remained a “Singapore only” telecom firm? The point is that the connectivity business, traditionally a no-growth or slow-growth business with guaranteed profits now is a no-growth or slow-growth business in a competitive environment. 

In fact, the U.S. Justice Department quashing the AT&T effort to buy T-Mobile US--on market concentration grounds--might validate the ”limited growth in the core business” argument. Justice Department officials made it clear that AT&T would not be allowed to get bigger in the U.S. mobile market. 

Even in failing to get antitrust approval, AT&T was confronted by the absolute need to look for growth elsewhere than in its traditional core businesses. The damage, one might argue, was the deal's breakup fee, which allowed T-Mobile to fund its market attack. 

One can argue with the precise assets acquired, the strategy behind those acquisitions or the execution and integration, without disagreeing that AT&T has to find massive new revenue sources to fuel its growth and pay its dividends. 

By implication, many other tier-one service providers will have to undertake similar diversification moves. The issue is not “conglomerate or not.” It is hard to escape the conclusion that tomorrow’s tier-one connectivity providers will have transformed, much as Comcast has changed from a video distributor into a firm active in many parts of the internet ecosystem (theme parks, movie production, network ownership, business services, wireless and mobile). 

One might argue alternative major assets should have been purchased, though it is hard to see how the free cash flow from DirecTV could have been obtained from any other feasible acquisition. And if content ownership is a necessary building block of tomorrow's video subscription business, few assets other than Time Warner were available that provided both segment scale and free cash flow, within the constraints of the debt burden to be imposed. 

The point is that the U.S. connectivity business is growing very slowly, while every major product category is either in the declining phase of its life cycle, or will be, soon. 

Other firms have other options, as often is the case in any market with a few dominant providers and many upstarts and specialists. “Grow by taking share” still makes sense for T-Mobile US and many other smaller connectivity providers. It is not an option for AT&T, for the most part. 

As the largest U.S. connectivity services provider, AT&T has to expect to lose share, even if it executes well. 

None of that will spare AT&T criticism over its equity valuation, strategic choices, acquisitions or execution. But the move to occupy different roles within the ecosystem (“moving up the stack,” some might say) is hardly foolhardy. It is a rational response to market circumstances and product life cycles.

Friday, September 6, 2019

Are Mobile and Fixed Network Services Public Utilities?

The language generally used by proponents of municipal broadband is that internet access is a necessity or a utility. A survey conducted by Openet suggests many consumers and citizens also believe mobile service is a utility, in the United Kingdom, Colombia, Canada, Indonesia and Singapore.

More than half of all respondents consider mobile service “a utility along the lines of gas, water or electricity.” But only about 21 percent believe mobile operators “always will be” utilities. About 31 percent of those who view mobile service as a utility believe mobile operators can add more value. 

Those findings do not necessarily correspond to the regulatory framework for mobile services in the United States, nor did the survey poll U.S. consumers. Still, the findings do point out some degree of market and regulatory exposure, to the extent that U.S. consumers have similar views. 

Broadband internet services (both fixed-line and mobile) are increasingly being included within the definition of public utilities, according to Wikipedia.

Worst of All Worlds: Telecom Now is a Regulated Non-Monopoly

If being an unregulated monopoly is the best of all possible worlds for an industry, then being a regulated non-monopoly arguably is among the worst of situations. And that might be where the legacy providers in the global telecom industry find themselves.

Considered carriers of last resort, many legacy service providers are compelled to sell wholesale capabilities to competitors, meaning the underlying carriers cannot reap all the rewards of investments in their networks. 

In other cases, the legacy carriers also have service obligations none of the other competitors face. 

Always a slow-growth business, telecom at least had the luxury of guaranteed rates of return and a bar on lawful competition. Today, telecom increasingly is becoming a slow-growth business without legal barriers to entry by competitors. 

STL Partners forecasts less than 1% CAGR in telecoms revenues


There are many potential business implications of slow or negative growth. Unchecked, the enterprise simply becomes unprofitable and then shrinks, before being acquired by a stronger and larger firm, or simply going out of business. Revenue growth rates of one percent are troubling, especially if general rates of inflation are higher than one percent. 

The answer many connectivity providers pursue, aside from mergers and acquisitions to boost gross revenue and cut costs, is to diversify into other lines of business that can more than replace any losses in the legacy business. 

Wednesday, September 4, 2019

FastWeb, Wind Tre Sign Extensive 5G Network Sharing Agreement

Fastweb (Swisscom) and Wind Tre (Hutchison) have signed a 5G network sharing agreement including spectrum, radios and backhaul in Italy. The deal will allow both firms to reach 90 percent of the population by about 2026, while also lowering capital investment costs and speeding market entry, as attacker Iliad enters the Italian mobile market. 


Also, the deal gives Fastweb, a fixed network services supplier, a mobility capability, while Wind Tre gets to sell fixed network services (fiber-to-home and fiber-to-curb) to businesses and consumers. 

The shared network will be managed by Wind Tre and use Fastweb’s fixed network backhaul facilities to connect small cells and towers across Italy.

The big trade-off with wholesale network sharing is that the partners are unable to differentiate on coverage or speed dimensions, the two major potential differentiators for a mobile network. On the other hand, this deal also allows each partner to create a converged services business with mobile and fixed network services, at a fraction of the cost and time to market of each firm moving alone. 

Fastweb also becomes a nationwide mobile provider while Wind Tre immediately becomes a supplier of faster fixed network internet access services. Wind Tre has been under pressure recently, losing accounts as Iliad has entered the market.

Tuesday, September 3, 2019

Dunbar's Number and the Size of Mobile and Social Networks

One issue for designers of social network apps and mobile networks is the effective number of people any particular user interacts with, can interact with, and at what degree of intensity (time or emotional commitment). And it might be fair to say that face-to-face human relationships are one thing, while online social networks are another. 

LinkedIn might be a social network, but mostly of people one never sees, does not really know and does not spend time with in any way other than occasional online messages. One analogy is that most of us on LinkedIn could not consistently match the faces and names of our own connections. 

Simply put, there are clear and sharp limits to emotional closeness and the number of meaningful relationships any person can have, in a face-to-face, real-world context. 

The absolute limit of people for whom any single person can even put a name to a face numbers about 1,500, according to Robin Dunbar, who developed a theory on the size of human groups now called Dunbar's number. 

Few would consider that an effective social or communication network; at that level, one simply recognizes a person. 

The Dunbar number suggests there are clear limits to the size of any single person's face-to-face social network. Casual friends--the people you'd invite to a large party--might number only about 150. 

Dunbar discovered that the number grows and decreases according to a precise formula, roughly a “rule of three,” where each group of more intimate friends is about a third the size of the larger group. 

The number of people you might call close friends--perhaps the people you'd invite to a group dinner--numbers a maximum of 50. You see them often, but not so much that you consider them to be true intimates. 

There’s a smaller circle of fifteen, who are the friends that you can turn to for sympathy when you need it, the ones you can confide in about most things. 

The most intimate Dunbar number, five, is your close support group. These are your best friends (and often family members). 
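
The layering can be summarized in a short sketch; the figures are the approximate layer sizes discussed here and below, not a precise formula.

    # Dunbar's layered "rule of three": each more intimate layer is roughly one
    # third the size of the layer outside it. Figures are the approximate values
    # cited in the text, not exact constants.

    DUNBAR_LAYERS = {
        "named faces": 1500,
        "acquaintances": 500,
        "casual friends (large party)": 150,
        "close friends (group dinner)": 50,
        "sympathy group": 15,
        "support clique": 5,
    }

    previous = None
    for label, size in DUNBAR_LAYERS.items():
        ratio = f" (~1/{round(previous / size)} of the layer above)" if previous else ""
        print(f"{label}: ~{size}{ratio}")
        previous = size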

Looking at mobile network communications, some researchers have found that whether any user has a large or small network of contacts, the amount of time spent communicating was about the same. 

Dunbar and a research team found, after analyzing some six billion calls made by 35 million people in an unnamed European country throughout 2007, that the rule holds for mobile communications. 

The team assumed that the frequency of calls between two individuals is a measure of the strength of their relationship. To screen out business and casual calls, the researchers included only individuals who made reciprocated calls, and focused on individuals who called at least 100 other people. 

The team found some 27,000 people who call on average 130 other people. 
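
A minimal sketch of that screening step, using made-up call records and field names (the actual study worked on billions of records), might look like this:

    # Hypothetical sketch of the screening described above: keep only users whose
    # ties are reciprocated, and who call at least 100 distinct other people.
    # The records and field layout are made up for illustration.

    from collections import defaultdict

    calls = [
        ("alice", "bob"), ("bob", "alice"),
        ("alice", "carol"), ("carol", "alice"),
        ("dave", "alice"),                      # not reciprocated; dropped below
    ]

    outgoing = defaultdict(set)
    for caller, callee in calls:
        outgoing[caller].add(callee)

    # A tie counts only if both parties call each other (reciprocated).
    reciprocated = {
        user: {other for other in others if user in outgoing.get(other, set())}
        for user, others in outgoing.items()
    }

    MIN_CONTACTS = 100   # the study's threshold; the toy data above will not meet it
    qualifying = [user for user, ties in reciprocated.items() if len(ties) >= MIN_CONTACTS]
    print(qualifying)    # [] for the toy data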

“Compared to those with smaller networks, those with large networks did not devote proportionally more time to communication and had on average weaker ties (as measured by time spent communicating),” say researchers Giovanna Miritello, Esteban Moro, Rubén Lara, Rocío Martínez-López, John Belchamber, Sam G. B. Roberts and Robin Dunbar. 

Mobile users tend to distribute their time very unevenly across their network, with a large proportion of calls going to a small number of individuals, they note. “These results suggest that there are time constraints which limit tie strength in large personal networks.” 

A study of Facebook and Twitter networks likewise found that contact frequency matched “real world” communications closely. The absolute sizes of these layers and the mean frequencies of contact with alters within each layer match very closely the observed values from offline networks, say R. I. M. Dunbar, Valerio Arnaboldi, Marco Conti and Andrea Passarella. 

“Our analyses indicate that online communities have very similar structural characteristics to offline face-to-face networks,” they say. 

A study by Pew Research Center found that the median number of Facebook friends is 200. How that compares to “real world” human networks is debatable. One study of LinkedIn first-level contacts found that 27 percent of LinkedIn users had between 500 and 999 first-degree connections. 


As with Facebook “friends,” LinkedIn first-level connections are not the equivalent of friends one knows on a face-to-face basis; in many cases, far from it. If you use LinkedIn, and think about it, nearly all the connections are business or commerce related, and almost never involve ongoing relationships with people you eat with, for example. 

And even some who doubt the general premise of the Dunbar limit do note that any bucket of people (a tagged group) on a very large contact list tends to number fewer than 150. 

Dunbar's number suggests no human can maintain more than 150 stable social relationships. Operationally, a stable relationship is one where you would not feel uncomfortable joining that person for a drink at a bar. Most of us do not reach the "Dunbar limit."

The Dunbar number is actually a series of maximum limits. Some people might be able to know 500 acquaintances, or 1,500 people for whom they can at least put a name to a face. 

At the upper limit, 100 to 200 people is the most you'd ever be able to personally invite to a large party. Perhaps 50 is the limit of those who can be close friends. 

About 15 is the number of people you rely on for sympathy and confide in about most things. Five is your close support group (typically best friends and family). 

Social networks online arguably are different from other human networks. Most people might have social sets online that resemble their real world interactions. But a few power users on social media also exist. The issue is that such social networks are not the same as human face-to-face networks. 

Yes, Follow the Data. Even if it Does Not Fit Your Agenda

When people argue we need to “follow the science” that should be true in all cases, not only in cases where the data fits one’s political pr...