Wednesday, September 18, 2019

Why 8K TVs Will Not--Unless Very Large--Improve Your Experience of Resolution

There is a simple rule for TV screens: for most people, bigger is better, no matter what the resolution. Something similar was true of HDTV: most people preferred the wider screen aspect ratio, irrespective of the higher resolution. That said, most people watching a TV from a standard viewing distance can perceive the difference between HDTV and NTSC, for example.

But it is possible to argue that, for many viewers, 4K will not bring benefits as obvious as HDTV did, unless screens get much bigger or people are willing to move their furniture much closer to their TV sets.

Higher-definition display formats also face a chicken-and-egg problem: until content production catches up with the new higher-resolution formats, consumers will find only limited content that actually takes advantage of the higher resolution. Think of older film or video shot in the 4:3 aspect ratio, displayed on modern 16:9 screens. 


Add to that the fact that linear and streaming video providers have choices of their own: they can deliver signals in standard-definition or HDTV formats. Layering on 4K content and delivery formats adds yet another set of decisions for video subscription services. 

But even assuming there eventually is much more 4K content available to view, there will be other nuances for buyers of 4K and 8K displays. What human eyes can resolve at a given distance is a matter of physics. 

Beyond a certain viewing distance, the human eye is unable to discriminate between content with 4K and HDTV resolution. A person with 20/20 vision sitting two feet from a screen (a PC screen, typically) can perceive 4K resolution on a screen of up to about 28 inches diagonal. 


Ideal maximum monitor size for a 24" viewing distance, by visual acuity:

Resolution           20/30    20/20    20/15    20/10
1080p (1920x1080)    21"      14"      10.5"    7"
2K (2560x1440)       28"      18.5"    14"      9"
4K (3840x2160)       42"      28"      21"      14"
5K (5120x2880)       57"      37.5"    28"      18.5"
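
For readers who want to check the logic behind figures like these, the arithmetic is simple: 20/20 vision corresponds to resolving roughly one arcminute of visual angle, with sharper or weaker vision scaling that threshold. The sketch below is a rough reconstruction of that calculation, not the exact method used to build the table above; its results land within roughly 10 percent of the published figures, since assumptions about acuity thresholds and rounding vary.

```python
import math

def max_diagonal_inches(h_px, v_px, viewing_distance_in, arcmin_resolvable):
    """Largest screen diagonal (inches) at which individual pixels remain
    unresolvable for an eye that resolves `arcmin_resolvable` arcminutes
    at `viewing_distance_in` inches."""
    # Smallest pixel pitch the eye can resolve at this distance.
    pitch_in = viewing_distance_in * math.tan(math.radians(arcmin_resolvable / 60.0))
    # Number of pixels along the panel diagonal.
    diag_px = math.hypot(h_px, v_px)
    return diag_px * pitch_in

# Rough conventions: 20/20 ~ 1 arcminute; better vision resolves finer detail.
acuities = {"20/30": 1.5, "20/20": 1.0, "20/15": 0.75, "20/10": 0.5}
panels = {
    "1080p": (1920, 1080),
    "2K":    (2560, 1440),
    "4K":    (3840, 2160),
    "5K":    (5120, 2880),
}

for name, (h, v) in panels.items():
    sizes = ", ".join(f'{label}: {max_diagonal_inches(h, v, 24, a):.1f}"'
                      for label, a in acuities.items())
    print(f"{name}: {sizes}")
```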

That should immediately tip you off to something important about TV screens. People do not sit two feet from their TVs. Six to eight feet is probably typical. When it comes to televisions touting new 4K technology, "a regular human isn't going to see a difference," said Raymond Soneira, head of display-testing firm DisplayMate Technologies.

To be sure, a 4K screen displays four times as many pixels as a 1080p screen. But whether the human eye can actually resolve all of those pixels depends on the size of the screen and where a person is sitting. 

From a distance, it is virtually impossible for someone to tell the difference in quality between a 1080p and 4K screen. The advantage arguably is most clear on the largest TV screens, as those allow people to sit at a normal distance and still be close enough so that a person with good vision can perceive the improved resolution. 




As a practical matter, Sony recommends that viewers sit at a distance of about 1.5 times the vertical screen height of the TV, roughly twice as close as people tend to sit when watching a standard HDTV screen. Buyers of the largest screens will have an easier time of the transition, but people buying smaller 4K TVs will have to scoot their couches uncomfortably close to the screen to perceive the enhanced definition. 

People looking at a 55-inch screen must sit no further away than 3.3 feet to perceive the resolution, Sony says. 

TV Size    Viewing Distance (Approx.)
55 inch    39 inches (3.29 feet)
65 inch    47 inches (3.92 feet)
75 inch    55 inches (4.58 feet)
85 inch    63 inches (5.25 feet)
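
Sony's rule of thumb (sit at about 1.5 times the screen's vertical height for 4K) can be reproduced with simple geometry for a 16:9 panel. The sketch below is an illustration of that rule, not Sony's own calculation, and it comes out within an inch or two of the figures in the table above.

```python
import math

def recommended_4k_distance_inches(diagonal_in, aspect=(16, 9), height_multiple=1.5):
    """Approximate 4K viewing distance as a multiple of screen height
    for a panel of the given diagonal and aspect ratio."""
    w, h = aspect
    screen_height_in = diagonal_in * h / math.hypot(w, h)
    return height_multiple * screen_height_in

for size in (55, 65, 75, 85):
    d = recommended_4k_distance_inches(size)
    print(f'{size}" screen: about {d:.0f} inches ({d / 12:.1f} feet)')
```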

Others suggest an appropriate viewing distance for an HDTV screen of 55 inches is seven to 11.5 feet away. For a 4K screen, that distance drops to 4.5 to seven feet, according to electronic product retailer Crutchfield. Me, I’d go with Sony’s recommendations. 


The point is that many buyers of 4K TVs will discover they really cannot perceive the difference from an HDTV screen, unless they are willing to move their furniture much closer to the TV than they are used to.

Live Esports Streaming Requires a Different Network than the Esports Competitions

Live game streaming (esports), where viewers watch other gamers compete with each other, is said to be one possible application for network slicing. The notion is that point-to-multipoint (multicast) content delivery benefits from a network slice optimized for low latency and high bandwidth. 


Of course, there are other ways to support such networks. Traditional live video networks, including those featuring 4K content, have relied on satellite transport to local hubs (cable TV headends) instead of unicast delivery. 


It is not so clear that multicast gaming feeds require transport materially different from live broadcast (multicast) video, though the intended display is a PC screen. So the constraint might not be the wide area delivery network itself but the business arrangements around episodic use of the network: is the content delivered as a consumer-facing “channel” that is programmed all the time, or as an episodic event more akin to a podcast, videoconference or other discrete event? 


(chart source: Rethink Technology Research)


Multicast ABR, for example, is a new delivery format proposed by CableLabs for multicasting video content instead of using the more capacity-consumptive unicast method of delivery.  
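
The capacity argument is easy to see with rough arithmetic: unicast delivery scales with the number of concurrent viewers, while a multicast stream is carried once per network link regardless of audience size. The figures below are hypothetical, chosen only to illustrate the scaling; they are not CableLabs numbers.

```python
# Hypothetical illustration of unicast vs. multicast aggregate load on a
# shared aggregation link. Bitrate and viewer counts are assumptions for
# the sake of the example, not measured values.

STREAM_MBPS = 15  # assumed bitrate of a single live high-definition stream

def unicast_load_mbps(viewers, bitrate=STREAM_MBPS):
    # Each viewer receives an individual copy of the stream.
    return viewers * bitrate

def multicast_load_mbps(viewers, bitrate=STREAM_MBPS):
    # One copy traverses the shared link, however many viewers join.
    return bitrate if viewers > 0 else 0

for viewers in (100, 10_000, 1_000_000):
    print(f"{viewers:>9,} viewers: "
          f"unicast {unicast_load_mbps(viewers):>12,.0f} Mbps, "
          f"multicast {multicast_load_mbps(viewers):>5,.0f} Mbps")
```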


Satellite networks might still work for esports packaged as a live channel. Other WANs might work as well, especially when edge caching is available, for more episodic events. 


Many say 5G will be an enabler for live game streaming, and that is true in some fundamental senses: the 5G network will use a virtualized core network that makes network slicing possible. 


It also is true that the 5G core network is designed to support distributed computing, and is therefore also an enabler of edge computing and caching, which might also be an approach for supporting live gaming streams, as content delivery networks have been used to speed up app performance generally. 


On the other hand, 5G as an access technology might, or might not, be necessary. A custom VPN is one approach. But satellite delivery to edge locations such as headends also is an option. Multicasting using a network slice also works. 


In that scenario, 5G latency performance might contribute to the experience, but it really is the edge computing or the network slicing that contributes most to the low-latency performance. 


Also, creating a low-latency, high-bandwidth network for the actual playing of esports games is a different matter from streaming those matches. The former requires a high-performance unicast network; the latter, being a multicast operation, might or might not rely on such a network. 


Where most present internet operations are unicast, one-to-one sessions, streaming of video or live esports content is a multicast, one-to-many (or many-to-many) operation. 



WAN latency performance is key, though it typically is not so much the optical WAN itself as the capabilities of the IP networks riding on it that dictate the limits of experience. Also, large venues are needed for such competitions, so premises networking and high-bandwidth access facilities are a must. 
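
A bit of arithmetic shows why the optical transport itself is rarely the binding constraint: light in fiber propagates at roughly 200,000 km per second, so the physical path sets only a floor on delay, while queuing, routing, protocol handshakes and server processing on the IP networks riding the optical layer account for most of the latency users actually experience. The sketch below computes that propagation floor; the route distances are arbitrary examples.

```python
# Propagation-delay floor for fiber WAN paths. The ~200,000 km/s figure
# (about two-thirds of the speed of light in vacuum) is a standard
# approximation for light in optical fiber.

FIBER_KM_PER_MS = 200.0  # roughly 200 km of fiber per millisecond, one way

def propagation_rtt_ms(route_km):
    """Round-trip propagation delay in milliseconds for a fiber route."""
    return 2 * route_km / FIBER_KM_PER_MS

for route_km in (100, 1_000, 4_000):  # metro, regional, transcontinental (examples)
    print(f"{route_km:>5} km route: ~{propagation_rtt_ms(route_km):.0f} ms RTT floor")
```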


The ability to handle episodic surges in the actual gaming might be an issue for the access connection or the WAN transport. That is an issue either network slicing or raw bandwidth provisioning might address. 


The streaming of selected portions of the gaming competitions is a separate matter. 

Streaming live esports, in other words, is one networking problem. Supporting the gameplay at an esports tournament is another. And streaming those competitions to viewers arguably is a third. Network slicing is one potential way of handling the streaming, but there likely will be other ways as well.

Tuesday, September 17, 2019

Wholesale Business Models Work, Up to a Point

There is a reason many service providers prefer a facilities-based approach to sourcing their key network platforms: it is a way to control development of features and costs. On the other hand, there are many instances when service providers of many types either must use wholesale leased facilities, or prefer to do so. 

Some common examples are market entry into new geographies where market share is expected to be low; where regulatory barriers to facilities ownership or operation might exist; or where network services are ancillary to some other business model. 

All that said, there are some clear dangers for any service provider that expects it might have major market share, or wishes to price and package its services in a manner different from the prevailing market norms. Consider Altice Mobile, which is entering the U.S. mobile market with disruptive pricing, relying on both wholesale leased access and owned facilities to enable that strategy. 

For $20 per device for Altice fixed network customers, Altice Mobile offers unlimited data, text and talk nationwide; unlimited mobile hotspot use; unlimited video streaming; unlimited international text and talk from the U.S. to more than 35 countries, including Canada, Mexico, the Dominican Republic, Israel and most of Europe; and unlimited data, text and talk while traveling abroad in those same countries. 

It is able to do so because it has a deal with Sprint: Sprint gets no-charge access to Altice fixed networks to place small cells and other cell sites, and in return Altice gets favorable MVNO terms for roaming access to Sprint’s network. 

That is one good example of why a facilities approach allows more freedom to create disruptive and differentiated offers. 

That has been true in the internet access business as well. Back in the days of dial-up access, ISPs could enter the business by riding on the voice circuits customers already were paying for, supplying internet access over that same bandwidth. 

All that changed in the broadband era, as ISPs suddenly had to buy wholesale access from the underlying service providers, and had to invest heavily in central office gear and space. For a time, when the Federal Communications Commission was mandating big wholesale discounts, and before broadband changed capital requirements, the wholesale access model worked. 

When the FCC switched its emphasis to facilities-based competition, wholesale rates were allowed to reach market rates, and capex to support broadband soared, the wholesale business model collapsed. 

As recently as 2005, major independent U.S. dial-up ISPs had at least 49 percent market share. Many smaller independent providers had most of the remaining 51 percent. 


In the early days of the broadband market, firms such as Covad, Northpoint and Rhythms Netconnections were getting significant market share. 


More recently, Google Fi has been marketing mobile internet access in a distinctive way, basically a “pay only for what you use” model at about $10 a gigabyte, with unused capacity rolling over to the next billing period. It works at low usage levels, but appears to become unworkable as users start consuming 15 GB or more per month, close to the 22 GB point at which Google Fi’s wholesale suppliers throttle speeds. 

Also, Google Fi caps fees at a maximum of $80 per month, no matter how much data is consumed. The model becomes unworkable when Google Fi risks charging its retail customers less than the wholesale fees it must pay its capacity providers. 
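
The squeeze is simple arithmetic: with retail pricing of roughly $10 per gigabyte capped at $80 per month, retail revenue stops growing at 8 GB of usage, while the cost of carrying a heavy user keeps rising. The sketch below uses a hypothetical wholesale rate purely to illustrate where the margin flips negative; the actual rates Google pays are not public.

```python
RETAIL_PER_GB = 10.0     # roughly $10 per GB of usage
RETAIL_CAP = 80.0        # bill-protection cap, per month
WHOLESALE_PER_GB = 4.0   # hypothetical wholesale cost per GB (illustrative only)

def monthly_margin(gb_used, wholesale_per_gb=WHOLESALE_PER_GB):
    """Retail revenue minus assumed wholesale cost for one subscriber-month."""
    revenue = min(gb_used * RETAIL_PER_GB, RETAIL_CAP)
    cost = gb_used * wholesale_per_gb
    return revenue - cost

for gb in (2, 8, 15, 22, 30):
    print(f"{gb:>2} GB used: margin ${monthly_margin(gb):>7.2f}")
```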

The larger point is that wholesale-based business models work, up to a point. At volume, owned facilities become important to contain costs, and network ownership also provides more flexibility in creating unusual or disruptive retail packages.

There Always are Trade-Offs, Whether it is Edge Computing, Phone Design, Human Health or Greenhouse Gases

Engineers who build communication networks and devices always are aware that design, performance and cost trade-offs must be made. Network architects know there are trade-offs in building computing networks. Marketers know price and take rates are inversely related. CEOs have to balance the rival objectives of investment, debt reduction and shareholder return, allocating resources among customers, employees, shareholders, bankers and other stakeholders.

Likewise, economists emphasize that policy decisions always involve choices, and choices have consequences, intended and unintended, some of them unwanted. 

As it turns out, the same seems to be true for human nutrition and greenhouse gas reduction. 

Though plant-based diets are touted as a way of reducing greenhouse gases, there are always trade-offs. In fact, “achieving an adequate, healthy diet in most low- and middle-income countries will require a substantial increase in greenhouse gas emissions and water use due to food production,” according to new research from the Johns Hopkins Center for a Livable Future based at the Johns Hopkins Bloomberg School of Public Health.

Consider the trade-offs for dairy products. "Our data indicate that it is actually dairy product consumption that explains much of the differences in greenhouse gas footprints across diets,” says Martin Bloem, Johns Hopkins Center for a Livable Future director. “Yet, at the same time, nutritionists recognize the important role dairy products can have in stunting prevention. The goals of lowering greenhouse gas emissions by limiting dairy intake conflict directly with the benefits of dairy products for human growth.” 

"There will always be tradeoffs,” he says. “Environmental impact alone cannot be a guide for what people eat.”

Most business strategy involves trade-offs of these types.

Friday, September 13, 2019

Computing Architectures Now are Dependent on WAN Performance, Not LAN

These days, computing performance mostly hinges on the wide area network, not the "local" area network, a big change from earlier eras of computing. If you think about it, devices now communicate even with other devices in the same room, building or campus using the WAN, not the LAN. Yes, we use Wi-Fi, but that is simply a local extension of WAN access and transport.

Local area network design parameters have changed over the past 50 years, along with changes in computing architecture. Prior to 1980, the computing devices were mainframes or mini-computers and the local networks simply needed to connect dumb terminals with the mainframe. There was limited requirement for wide area communications. 


The personal computer changed all that. Relatively low-cost PCs replaced mainframes as the standard computing appliance, creating a need to connect PCs at relatively low speed (10 Mbps or less). There still was limited need for WAN communications. 

Client-server computing dramatically increased local network communications requirements, adding servers as centralized network elements that PCs needed to communicate with frequently. That drove LAN speed requirements up to 100 Mbps. 

In the next era of distributed or network computing, perhaps the salient new requirement was for wide area communications, as more of the computing load moved to remote data centers and required networking of branch locations. 

In the present “web” era, broadband communications over wide area networks are mandatory, as processing load shifts to the far end of the WAN. There is less need for networking local devices, as most local computers interact directly with the far-end cloud data centers. 

In fact, many “local” communications actually rely on connections with the far-end data centers. Cloud printing, email, messaging provide examples: the communications between local devices actually are routed across the WAN to far-end servers and back again. The “local” network uses Wi-Fi, but the logical connections are from device to far-end servers and data centers. 

Local network performance requirements have changed with the architectures. Early networks did not require LAN speeds much faster than 4 Mbps to 10 Mbps. Use of distributed networks contacting far-end servers created a new need for higher-speed WAN connections. Today’s web architectures require high-speed local and wide area connections. 

The Wi-Fi “local area network” access is mostly an on-ramp to the WAN. In essence, the broad global network (private IP or public internet) replaces the LAN, and each local device communicates with other local devices using the WAN. Wi-Fi, essentially, is simply the local connection to the WAN. 

Application performance also has changed. Early character-based apps (either mainframe, mini-computer) did not require high-bandwidth or low-latency performance. Today’s multi-media apps do require high bandwidth as a matter of course. And some apps--especially voice or video conferencing--require low latency as well. 

Coming sensor apps (internet of things) will create more-stringent demands for latency and bandwidth performance exceeding parameters for voice and video conferencing, in some use cases. Virtual reality is probably the best example of bandwidth requirements. Sensors involved with auto safety provide the best examples of latency-dependent communications. 

The point is that, these days, LAN performance arguably is secondary to WAN performance.

Wednesday, September 11, 2019

5G Offers Incremental Revenue Upside, IF...

New 5G services for business and enterprise customers might boost connectivity revenues for mobile operators in a variety of use cases, assuming 5G becomes a favored connectivity choice, compared to other available choices, including 4G and unlicensed networks. Consider connected vehicles. 

By 2022, Ford expects most of its cars to come with built-in C-V2X technology, an LTE-based (4G) platform. By definition, that is a rival platform to 5G. Connected car platforms also include DSRC-based V2V technology, which might be used more broadly by GM and Volkswagen. 

Other use cases, including industrial internet of things, might also use a variety of platforms other than 5G, including 4G platforms, wide area low-power platforms or Wi-Fi. 

So 5G represents potential enterprise connectivity revenue upside for mobile service providers, but only if 5G proves more attractive than other connectivity options, some based on 4G, others supplied by rival platforms using unlicensed spectrum. 

Optimists also see incremental upside from mobile operator participation in internet of things or other smart device use cases where the mobile operator owns a solution or can partner with a solution provider. But “network access” is not the only new battlefield. 

Custom networks based on network slicing and edge computing also will be new potential value drivers for enterprise buyers, and, as always, enterprise solution architects will have other supplier choices. In some markets, mobile operators might have new wholesale opportunities as some enterprises seek to create private 5G networks. 

In fact, 5G access, in and of itself, might prove to be less a driver than the other features (customized networks, edge computing). And in some countries, private 5G networks will not create a new wholesale opportunity for mobile service providers, as enterprises will be able to use new spectrum specifically allowing them to create private 5G networks on their own. 

The point is that 5G will have to prove its value for enterprise customers who will have other choices, including mobile operator 4G solutions.

Monday, September 9, 2019

Streaming Video an Opportunity for Some Connectivity Providers



Lars Larsson, CEO of Varnish Software, and Richard Craig-McFeely, Strategy & Marketing Director, Digital Media, at Interxion, discuss the upside. 


Don't Expect Measurable AI Productivity Boost in the Short Term

Many have high expectations for the impact artificial intelligence could have on productivity. Longer term, that seems likely, even if it mi...