
Friday, January 5, 2024

Unicast Video Accounts for Most of the Internet Bandwidth Increases We See

Constant and significant increases in bandwidth consumption are among the fateful implications of switching from linear TV broadcasting to unicast video streaming. Consider that video now constitutes 52 percent to 88 percent of all internet traffic. 


Not all that increase is the direct result of video streaming services. Video now is an important part of social media interactions and advertising on web sites supporting consumer applications, though some studies suggest social media sites overall represent only seven percent to about 15 percent of video traffic consumed by end users. 


Also, there is some amount of internet video traffic between data centers, not intended directly for end users, possibly representing five percent of global internet traffic. 


Study | Date | Video Traffic Share (%)
Cisco Annual Internet Report (2023) | Dec 2022 | 88%
Sandvine Global Internet Phenomena Report (Q3 2023) | Sep 2023 | 83%
Limelight Networks State of the Real-Time Web Report (Q3 2023) | Oct 2023 | 76%
Ericsson Mobility Report (Nov 2023) | Nov 2023 | 72%
ITU Global Video Traffic Forecasts | Feb 2023 | 70% (2022)
Ookla Global Video Report (Q2 2023) | Aug 2023 | 65%
Akamai State of the Internet / Security Report (Q3 2023) | Oct 2023 | 60%
Statista: Global Internet Traffic Distribution by Content Type (2023) | Oct 2023 | 58%
GlobalWebIndex Social Video Trends Report (Q3 2023) | Sep 2023 | 55%
Juniper Networks Visual Networking Index (2023) | Feb 2023 | 52% (2022)


Ignoring for the moment the impact of video resolution on bandwidth consumption (higher resolution requires more bandwidth), the key change is that broadcasting essentially uses a “one-to-many” architecture, while streaming uses a unicast architecture. 


The clearest example is a scheduled broadcast TV show, which can essentially send one copy of the content to every viewer (multicast or broadcast delivery). The same number of views, using internet delivery, essentially requires sending a separate copy to each viewer (unicast delivery). 


In other words, 10 homes watching one multicast or broadcast program, on one channel, at one time, consume X amount of network bandwidth. If 10 homes each stream a program of the same file size, whether simultaneously or not, bandwidth consumption is 10X. 
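The X-versus-10X arithmetic above can be sketched in a few lines. The 8 Mbps per-stream rate is an assumed illustrative value, not a figure from any of the studies cited here.

```python
# Illustrative broadcast (multicast) vs. unicast bandwidth comparison.
# The 8 Mbps per-stream rate is an assumed example value.
STREAM_RATE_MBPS = 8
HOMES = 10

# Broadcast/multicast: one copy of the program serves every home.
broadcast_bandwidth = STREAM_RATE_MBPS * 1

# Unicast streaming: one copy per home, simultaneous or not.
unicast_bandwidth = STREAM_RATE_MBPS * HOMES

print(broadcast_bandwidth)  # 8 Mbps (X)
print(unicast_bandwidth)    # 80 Mbps (10X)
```

The multiplier scales linearly with audience size, which is why unicast delivery of mass-audience content is so much more bandwidth-intensive than broadcast.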


There are some nuances for real-world data consumption, such as the fact that consumption of linear video is declining or the fact that broadcasting uses a constant amount of bandwidth, no matter how many viewers in an area might be watching or not watching. 


Study | Comparison | Bandwidth Ratio (Streaming/Broadcasting)
"A Comparative Analysis of Video Streaming and Broadcasting for Live Sports Events" (2023) | Live sports streaming vs. multicast | 10x - 15x
"Bandwidth Efficiency of IPTV vs. Traditional Broadcasting" (2022) | IPTV unicasting vs. terrestrial broadcasting | 2x - 4x
"The Impact of Unicast Video Delivery on Network Traffic" (2021) | Unicasting video vs. multicast video | 1.5x - 3x
"Comparing the Bandwidth Consumption of Live Streaming and P2P Delivery" (2020) | Live streaming vs. P2P for live events | 3x - 6x
"The Bandwidth Efficiency of Video Streaming Protocols" (2019) | HTTP streaming vs. RTMP streaming | 1.2x - 2x
"A Study of User-Generated Video Delivery on Social Media Platforms" (2018) | User-generated video streaming vs. traditional video streaming | 2x - 4x
"The Bandwidth Implications of 4K and 8K Video Streaming" (2017) | Higher resolution streaming vs. standard definition | 4x - 8x
"The Impact of Mobile Video Streaming on Network Congestion" (2016) | Mobile video streaming vs. fixed-line streaming | 1.5x - 3x
"The Future of Video Delivery: A Cost Comparison of Streaming and Broadcasting" (2015) | Streaming vs. broadcasting for future content delivery | 2x - 4x
"The Bandwidth Efficiency of Video-on-Demand Services" (2014) | Video-on-demand streaming vs. linear broadcasting | 1.5x - 2.5x


There are other nuances as well. Since a broadcast video stream often is viewed on a television set, it is possible that multiple viewers “share” viewing of the same content. If one TV is receiving a program, and five people are watching, the “single delivery” supports five views. 


On a “per viewer” basis, X amount of delivery bandwidth works out to X/5 for each viewer of the shared program. 


If those five people instead each stream a program of equivalent file size at the same time, data consumption is 5X. 
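The shared-screen arithmetic above works out as follows; again, X = 8 Mbps is an assumed illustrative rate.

```python
# Per-viewer bandwidth: one shared TV vs. five individual unicast streams.
# X = 8 Mbps is an assumed illustrative stream rate.
X_MBPS = 8.0
VIEWERS = 5

# One TV, five viewers sharing a single delivery: X/5 per viewer.
per_viewer_shared = X_MBPS / VIEWERS

# Five viewers each streaming their own unicast copy: 5X on the network.
total_unicast = X_MBPS * VIEWERS

print(per_viewer_shared)  # 1.6 Mbps per viewer
print(total_unicast)      # 40.0 Mbps total
```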


Study | Year | Methodology | Streaming Bandwidth (Mbps) | Linear Broadcasting Bandwidth (Mbps)
Nielsen | 2022 | Network traffic analysis | 3.1-4.7 (average) | 0.1-0.2 (average)
OpenVault | 2023 | ISP data analysis | 1.8-2.5 (average) | 0.05-0.15 (average)
Pew Research Center | 2021 | Survey and network analysis | 2.3-3.8 (average) | 0.1-0.2 (average)
University of Zurich | 2019 | Network monitoring and simulation | 2.0-3.5 (average) | 0.08-0.18 (average)
Akamai | 2020 | Global traffic analysis | 1.6-2.8 (average) | 0.04-0.12 (average)
Sandvine | 2022 | Network traffic analysis report | 3.5-5.0 (peak) | 0.15-0.25 (peak)
Netflix | 2021 | Open Connect content delivery platform report | 0.5-1.5 (average) | N/A
BBC Research & Development | 2018 | HbbTV hybrid broadcasting analysis | 1.0-2.0 (combined) | 0.03-0.08 (combined)
Bitmovin | 2023 | Video encoding and delivery technology report | 0.8-1.8 (efficient encoding) | N/A
Ericsson | 2022 | Mobile network video traffic report | 0.5-2.0 (mobile average) | N/A


The point is that the shift from broadcasting (multicasting) to unicast entertainment video was destined to dramatically increase internet data consumption.


Thursday, March 2, 2023

Choice of TCP/IP Was Fateful

Choices have consequences. When the global connectivity industries decided that TCP/IP was the next-generation network, they also embraced other foundations. Functions now are layered. We use application programming interfaces to communicate between layers, but the layers themselves are disaggregated. 


That means monolithic value chains or vertically-integrated value chains are also disaggregated. One can own and operate an asset at one layer without owning and operating all the other layers. App ownership and creation is separated from delivery, for example. 


That is why “over the top” is essentially the way all apps now are created. Which entity “owns” an app can vary, but development is permissionless. 


In a similar and earlier way, the digitization of all media types has had key consequences. Digital media means digital delivery. And that also means delivery in a permissionless way over layered networks. Where content owners once had to create or own their own delivery networks, now any media type can be accessed by any user (so long as it is lawful in the eyes of a government) without a specific business relationship between content owner and distributor. 


That enables content streaming, which has created traffic imbalances between inbound and outbound data. Only kilobytes need be sent upstream to request delivery of a two-hour movie, whose delivery entails gigabytes. 


And though the magnitude of data transfer is less for other types of content and interactions, the same imbalance exists. It only takes kilobytes to request a page or an object. The delivered content, ranging from websites with auto-start video to streaming audio, requires much more data to be delivered and displayed. 
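The scale of that imbalance is easy to estimate. In this sketch, the ~1 KB of upstream request traffic and the 5 Mbps average stream rate are both assumed illustrative values.

```python
# Rough upstream/downstream asymmetry for a streamed two-hour movie.
# Request size and bitrate are assumed illustrative values.
request_bytes = 1_000       # ~1 KB of upstream HTTP request traffic
bitrate_bps = 5_000_000     # 5 Mbps assumed average stream rate
duration_s = 2 * 60 * 60    # two-hour movie

# Downstream payload: bits delivered over the movie, converted to bytes.
downstream_bytes = bitrate_bps * duration_s // 8
ratio = downstream_bytes // request_bytes

print(downstream_bytes)  # 4,500,000,000 bytes (~4.5 GB)
print(ratio)             # downstream is ~4.5 million times the upstream request
```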


The public communications networks long have had a way of dealing with imbalances of this type. At the end of a year, carriers true up usage. If one network has landed more traffic than it has sent--and in principle “used” more resources--payments are made by the sending network. 


And that is the logic behind proposals to tax a few hyperscalers for landing traffic on ISP networks. 


Think about the impact on networks of making the switch from analog to digital, and from linear to on-demand, delivery. Content delivery networks are most efficient when using a broadcast or multicast model, where essentially one copy is delivered at the same time to many thousands or millions of viewers or listeners. That allowed one-to-many networks to be built and operated by the content companies (radio stations, TV broadcast stations, cable TV). 


On-demand delivery requires a very different kind of network. Unicast delivery then must be supported. Any-to-any networks require overbuilding capacity, compared to a broadcast network. Where in a linear environment one copy is sent at the same time to many consumers, a unicast network requires sending one copy to one consumer, each requiring consumption of additional network resources. 


Of course, it is complicated. It is the ISP’s own customers who are invoking the remote data. If the product were electricity, water, natural gas, toll road access, landing rights, docking rights, lodging nights or most other retail products, buyers pay for usage. Business-to-business usage winds up--ultimately--paid by end user consumers, even if intermediate costs are borne by other business partners. 


Intermediate funding can come in various other ways. Advertisers or sponsors can defray some costs, those costs being recouped in sales of whatever products advertisers are hawking. 


Some participants can be taxed or subsidies can be applied. Taxes on a few hyperscalers illustrate the former; universal service subsidies represent the latter approach. 


The larger point is that choosing TCP/IP has had business model consequences. Permissionless app development means the financial interests of app owners and network owners can be disaggregated. And, as in any value chain, one participant’s revenue is another participant’s costs. 


Ultimately, all long-term value chain costs are reflected in retail end user prices. But costs and revenues within the value chain always are contentious to some degree. And any long-term increase in producer prices will be reflected in higher consumer prices, reduced output, lower producer profits or other changes in features. 


No matter how debates over funding mechanisms are resolved, retail consumers will pay, in the form of higher connectivity fees, higher retail product costs or changes in feature sets. 


Producers could see pressure on profit margins if their own capital investment and operating cost parameters are not adjusted. 


But it all goes back to the fateful choice of TCP/IP as the next-generation network. Layered and disaggregated models have consequences.


Friday, January 6, 2023

Taxing Hyperscalers to Fund ISP Networks has Losers, Including End Users

For every public policy decision, there are winners and losers. That is no different for proposals to tax a few hyperscalers to support home broadband networks. ISPs would gain; app providers would lose. Ultimately, so would users of internet-delivered apps and services.


Communications policy almost always is based on precedent and prior conceptions. All this is relevant when thinking about how public networks are funded, especially now that regulators are looking at unprecedented funding mechanisms, such as levying fees on third parties that are not “customers” of connectivity providers. 


It’s a bit like taxing appliance makers whose products create demand for electricity. Today, the electrical networks are common carriers, all the devices are private and the cost of using electricity is borne by the actual end user customers. 


But some regulators want to essentially tax device manufacturers for the amount of electricity use they generate. 


There are simpler solutions, such as charging customers on a usage basis, based on their consumption. That would have a possible added benefit of not disturbing the data communications regulatory framework. 


And that matters, at least for observers who care about freedom of expression. Data networks have always separated the movement of data from the content of data. Devices and software do not require the permission of the data infra owner to traverse the network, once access rights are paid for. 


The important point is that all networks now are computer networks. 


To be clear, some will argue that changes in how networks are built (architecture, media, protocols) do not matter. It is the function that matters, not the media. If a network is used for broadcast TV or radio, that is the crucial distinction, not whether broadcasting uses analog or digital modulation, or particular protocols or radios. 


If a network is a public communications carrier, the types of switches, routers, cables, protocols and software used to operate that business do not matter. What is regulated is the function. 


The function of a public network is to allow paying customers to communicate with each other. Each account is an active node on the network, and pays to become a node (a customer and user of the network). 


Service providers are allowed to set policies that include usage volume and payment for other features. In principle, a connectivity provider may charge some customers more than others based on usage. 


But one element is quite different in the internet era. Connectivity providers have customers, but generally do not own the applications those customers use their networks to interact with. There is no business relationship between the access provider and the other application providers in their role as app providers. Every app provider is simply a customer of one or more local access providers. 


Operators of different domains can charge each other for use of their networks, which is where the intercarrier settlements function comes into play. And volume does matter in that regard. 


The point is that it is the networks who settle up on any discontinuities in traffic exchange. Arbitrage always is possible whenever traffic flows are unequal, and where rules are written in ways that create an arbitrage opportunity. The classic example is a call center, which features lots of inbound traffic, compared to outbound. 


So some might liken video streaming services to a form of arbitrage, in that video streaming creates highly unequal traffic flows: little outgoing traffic and lots incoming, for the consumer of streaming content. 


But that also depends on where the servers delivering the content are located. In principle, traffic flows might well balance out--between connectivity domains-- if streaming customers and server sites are distributed evenly. 


Historically, big networks and small networks also have different dynamics. When the media type is voice, for example, bigger networks will get more inbound traffic from smaller networks, while smaller networks should generate more outbound traffic to the larger networks. 


For streaming and other content, traffic flows on public networks might largely balance, since the biggest content firms build and operate their own private networks to handle the large amount of traffic within any single data center and between data centers. Actual distribution to retail customers (home broadband users of streaming video, for example) likewise is conditioned by the existence of server farms entirely located within a single domain (servers and users are all on one service provider’s network). 


The point is that inter-domain traffic flows, and any compensation different domains might “owe” each other, are a complicated matter, and arguably should apply only to domains and their traffic exchange. 


In other words, one might argue that traditional inter-carrier settlements, traffic peering and transit are sufficient to accommodate unequal traffic flows between the domains. 


Put another way, the claim by internet service providers that a few hyperscale app providers send far more traffic than they receive “should” or “could” be settled between the access provider domains, as it always has been. 


If the argument goes beyond that, into notions of broadband cost recovery, then we arguably are dealing with something different. Going beyond inter-carrier settlements, such notions add a new idea, that traffic sources (content providers and streaming services)  should pay for traffic demand generated by their traffic sinks (users and subscribers of streaming services).  


This is a new concept that conceptually is not required. If ISPs claim they cannot afford to build and operate their own access networks, they are free to change charging mechanisms for their own customers. Customers who use more can pay more. It’s simpler, arguably more fair and does not require new layers of business arrangements that conflict with the “permissionless” model.


Data networks (wide area and local area) all are essentially considered private, even when using some public network resources. Data networks using public network resources pay whatever the prevailing tariffs are, and that is that. Entities using data networks do not contribute, beyond that, to the building and operating of the public underlying networks. 


Public transport and access providers might argue that they cannot raise prices, or if they did, would simply drive customers to build their own private networks for WAN transport.


That obviously would not happen often in the access function. Local networks are expensive. But there already exists a mechanism for networks to deal with unequal traffic flows between access domains. 


So there is a clash here between private data networking and public communications models. What is new is that, in the past, the applications supported by the network were entirely owned by the network services provider. 


Now, the assumption is that almost none of the applications used by any ISP’s customers are owned by the ISP itself. So the business model has to be built on an ISP’s own data access customer payments. Application revenue largely does not factor into the business model. 


But that is the way private computer networks work. Cost is incurred to create the network. Revenue might be created when public network access and transport is required. But all those payments are made by an ISP’s local customers, even when the ISP bundles in access to other ISP domains required to construct the private network. 


“Permissionless” development and operation now is foundational for software design and computing networks. All networks now are computing networks, and all now rely on functional layers. 


The whole design allows changes and innovation at each functional layer without disturbing all the functions of the other layers. What we sometimes forget is that below the physical layer is layer 0, the networks of cables that create the physical pathways to carry data. 


Of course, any connectivity network must operate at several layers: physical, data link and network. From the “transport” layer up, functions tend to be embedded in edge devices. 


source: Comparitech 


To be sure, connectivity networks--especially access networks that sell home broadband and other connectivity services to businesses--must operate at many layers, including the modems used to support broadband access. 


So some might add, in addition to a “layer zero” network of cables, a layer eight for software and applications that run on networks. 

source: NetworkWalks 


Local area networks typically are less complex, but still use the layered architecture. The difference is that LANs (Wi-Fi, Ethernet or other) primarily rely on layers one to three of the model. 


source: Electricalfundablog 


“Permissionless” access and transport have sparked enormous innovation. That should remain the case. Additional taxes, which means higher costs, will not help that process. Other networks charge for usage. Public IP networks could do the same. Settlement policies between access domains already exist. And, to be clear, app domains can create facilities that do not cross access domains, if they choose. 


So ISPs can charge for usage if they choose. Unlimited usage could be a higher price. Lower amounts of usage can still be sold in tiers. Problem essentially solved.
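A usage-tier scheme of the kind suggested above can be sketched in a few lines; the tier boundaries and prices here are entirely hypothetical, chosen only to illustrate the mechanism.

```python
# A minimal sketch of usage-based pricing tiers.
# Tier caps and prices are hypothetical illustration values.
TIERS = [
    (100, 30.00),           # up to 100 GB/month
    (500, 50.00),           # up to 500 GB/month
    (float("inf"), 80.00),  # unlimited usage, at a higher price
]

def monthly_price(usage_gb: float) -> float:
    """Return the price of the cheapest tier covering the month's usage."""
    for cap_gb, price in TIERS:
        if usage_gb <= cap_gb:
            return price
    raise ValueError("unreachable: the last tier is unlimited")

print(monthly_price(80))   # 30.0
print(monthly_price(750))  # 80.0
```

Heavier users pay more, lighter users pay less, and no new business relationship with third-party app providers is required.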


AI Will Improve Productivity, But That is Not the Biggest Possible Change

Many would note that the internet impact on content media has been profound, boosting social and online media at the expense of linear form...