
Thursday, August 10, 2023

AI Will Drive Data Center Capabilities: How Much Seems the Only Real Issue

Though artificial intelligence and generative AI training and inference operations are widely expected to drive data center requirements for processing power, storage and energy consumption, it seems unlikely that edge computing will get a similar boost, principally because AI training and inference operations are not especially latency-dependent.


The value of edge computing is latency reduction as well as bandwidth avoidance. And while it still makes economic sense to store frequently requested content at the edge (content delivery use cases), AI operations likely will not be so routinized that edge caching adds much value.


Operations requiring large language model access likely will still need to happen at larger data centers, for reasons of access to processing and storage resources. Think about processing to train AI models for self-driving cars, fraud detection, and other applications that require the analysis of massive datasets.


To be sure, support of self-driving cars also involves stringent latency requirements. The problem is simply that the requirement for high-performance computing and access to data stores is more crucial for performance, so processing is likely to be located “onboard” the vehicle. Again, the key observation is the split between on-device and remote data center functions; edge might not play much of a role.


The debate will likely be over the value of regional data centers, which some might consider “edge,” but others will say is a traditional large data center function. 


Operations that can be conducted on a device likewise will not benefit much, if at all, from edge compute capabilities. Think real-time language translation, facial recognition, and other applications that require quick responses.


And Digital Bridge CEO Marc Ganzi believes AI will mean a new or additional market about the size of the whole public cloud computing market, eventually. 


If public cloud now represents about 13 gigawatts of capacity, AI might eventually require 38 gigawatts, says Ganzi. 


The whole global data center installed base might represent something on the order of 700 gigawatts, according to IDC. Other estimates by the Uptime Institute suggest capacity is on the order of 180 GW. 


According to a report by Synergy Research Group, the global public cloud computing industry now represents 66.8 gigawatts (GW) of capacity. 


So AI should have a significant impact on both cloud computing capacity and data center requirements over time.
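
The arithmetic behind that conclusion is easy to check. A minimal sketch in Python, using only the estimates quoted above (Ganzi's 13 GW and 38 GW figures, plus the divergent installed-base estimates), shows both the roughly threefold growth Ganzi implies and how much the answer depends on whose installed-base number is used:

```python
# Back-of-the-envelope comparison of the capacity estimates cited above.
# Every figure is a quoted estimate, not a measurement.
public_cloud_gw = 13            # Ganzi's estimate of public cloud capacity
ai_gw_projected = 38            # Ganzi's eventual AI capacity projection
installed_base_gw = {"IDC": 700, "Uptime Institute": 180}

print(f"AI vs. public cloud (Ganzi): {ai_gw_projected / public_cloud_gw:.1f}x")
for source, gw in installed_base_gw.items():
    print(f"AI as share of {source} installed base: {ai_gw_projected / gw:.1%}")
```

On the IDC estimate, 38 GW of AI capacity would be about five percent of the installed base; on the Uptime Institute estimate it would be more than 20 percent. The spread in base estimates matters as much as the AI projection itself.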


Tuesday, July 26, 2022

Business Case Drives Platform Choices, Always

Fixed network next-generation platform choices are virtually never as “easy” as the choices possible for mobile networks. For starters, mobile platform choices are global and unitary: one standard replaces the prior standard. 4G is replaced by 5G, just as 5G will be replaced by 6G.


In the fixed networks segment there always are multiple choices. Cable operators have multiple architecture choices when driving optical fiber deeper into the network, or can switch platforms entirely and replace hybrid fiber coax with fiber to the premises. Then there is the timetable: upgrade HFC now and then switch to FTTP later versus moving directly to FTTP. 


Fixed network operators have several choices of FTTP platform as well. Capabilities are one matter, but deployment cost, business strategy, expected financial return, capital investment and the expected level of competition all must be considered.


source: Broadband Library 


When an immediate upgrade from copper is envisioned, some might argue XGS-PON is more future-proof, if more costly. But scale matters in some instances: capex for one choice that runs four to five times that of the other relevant choice can be a powerful incentive.
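
To make the scale point concrete, here is a minimal sketch with hypothetical per-home costs (the dollar figures are placeholders, not vendor pricing; only the four-to-five-times multiple comes from the text):

```python
# Illustrative capex comparison for two FTTP platform choices, where one
# option costs four to five times the other. Dollar figures are hypothetical.
homes_passed = 100_000
capex_low = 300                  # hypothetical per-home cost, cheaper platform
capex_high = 4.5 * capex_low     # the "four to five times" multiple cited above

for label, per_home in (("Lower-cost option", capex_low),
                        ("Higher-cost option", capex_high)):
    total = per_home * homes_passed
    print(f"{label}: ${per_home:,.0f}/home, ${total / 1e6:.1f}M for {homes_passed:,} homes")
```

At that spread, the costlier platform's extra capability has to earn back more than $100 million per 100,000 homes passed before it becomes the better business decision.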


Split ratios and the number of wavelengths also might be considerations if one platform has a reach significantly different from the others. A network architect might require more wavelengths per fiber when longer distances are covered, to support dropping wavelengths along the route to serve enterprise or other high-volume customers.


source: Medium 


In all cases, though, the business case influences platform choices. "First installed cost" is never the only consideration.


Thursday, November 18, 2021

Big Strategic Shift for FTTH?

The strategic context for U.S. home broadband is evolving. For two decades, cable TV operators have consistently maintained installed base share close to 70 percent, in most years capturing the majority, if not all, of the net new account additions.


That remains the case in 2021, as cable continues to hold its installed base lead and also continues to win the net new additions battle.  


All that now seems set for change, though. The biggest change is an up-tempo pace of fiber-to-home conversions by telcos. But new 5G high-bandwidth fixed wireless offerings should claim some share as well.


source: New Street Research 


Also important is the way some telcos are positioning their upgrades. In the past, they might have been content to match cable offers. Now some are aiming to surpass cable offers, with symmetrical upstream bandwidth a weapon.  


Frontier Communications, for example, is preparing the rollout of a 2-Gbps offer, in addition to its standard 1-Gbps and entry-level 500-Mbps offers. That offer will likely feature symmetrical bandwidth.


To be sure, cable is working on its own 10-Gbps capabilities, as well as methods to add more upstream bandwidth. But many of those solutions are not graceful upgrades from the existing hybrid fiber coax platform. The choice is whether to revamp HFC in significant ways or switch to FTTH as the replacement. 


More upstream bandwidth could be provided, to some extent, by pushing fiber deeper into the HFC network. Alternatively, cable operators can swap frequency plans, moving to mid-split or high-split designs. But all those moves require disruption of the physical plant, and cannot be accomplished by swapping out end user gear, as has been the case in the past. 


And any shift to fiber-deeper networks, mid-split or high-split architectures (or some combination of the above) essentially delays an eventual shift to FTTH in any case, many would argue. So the decision comes down to “spend less now, but more in the long term, while undertaking a major network disruption twice” or “spend more now, be done with it, and disrupt operations only once.”


The larger point is that upgrading to FTTH comes with other choices that can confer advantage. Bandwidth can be symmetrical, or not. Bandwidth can top out at various levels: higher or relatively lower. And retail pricing, terms and conditions also make a difference. 


Much thinking now seems to be going into how to tweak those parameters to gain advantage over cable operator competitors. Many might assume FTTH means gigabit speeds. It does not. FTTH is physical media. Service providers still must decide how much bandwidth they want those networks to supply. 


Historically, FTTH might have meant speeds in the hundreds of megabits. Some U.S. FTTH networks installed in the mid-1990s to late 1990s offered speeds only up to 10 Mbps. User experience might be an order of magnitude less than advertised, however, even on FTTH platforms.  


What seems to be changing is a willingness to leverage FTTH to gain a speed advantage. 

 

“Our network is already 10-gig capable end-to-end, so we can carry on driving up speed tiers, as demand requires, in a very low-cost, very quick way, again, in a way that cable can't,” says Nick Jeffery, Frontier Communications CEO.


But that only matters if most Frontier customers can buy the service. 


“Our plan (is) to reach a total of five million fiber locations by the end of 2022 and 10 million locations by the end of 2025,” Jeffery says.


Frontier has 15.2 million locations passed, so 10 million total FTTH passings means about 66 percent of the potential customer base would be able to buy FTTH services. 


Of course, a higher installed base does take time. “Our 2020 expansion cohort continues to show strong penetration of 30 percent at the 12-month mark,” says Jeffery, though noting that figure is based on a small sample. 


“For the overall build plan, we continue to expect a 15 percent to 20 percent penetration rate at the 12-month mark, and with penetration continuing to rise in subsequent years toward a terminal penetration of 45 percent,” he added. 
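
Jeffery's figures can be combined into a rough subscriber forecast. A minimal sketch, using only the passings and penetration rates quoted above:

```python
# Working through Frontier's stated figures: footprint coverage and the
# subscriber counts implied by the quoted penetration rates.
total_locations = 15.2e6       # total locations passed by Frontier
ftth_passings_2025 = 10e6      # FTTH passings targeted by year-end 2025

print(f"FTTH coverage of footprint: {ftth_passings_2025 / total_locations:.0%}")

for rate, label in ((0.15, "12-month penetration, low end"),
                    (0.20, "12-month penetration, high end"),
                    (0.45, "terminal penetration")):
    subs = rate * ftth_passings_2025
    print(f"{label}: {subs / 1e6:.1f}M subscribers")
```

At the stated 45 percent terminal penetration, the 10 million passings imply roughly 4.5 million FTTH accounts.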


Government subsidies also are expected to improve the business case for FTTH and other high-speed services, as they are increasing substantially. 


George Ford, economist at the Phoenix Center for Advanced Legal and Economic Public Policy Studies, argues that about 9.1 million U.S. locations are “unserved” by any fixed network provider. 


Though specifics remain unclear, it is possible that a wide range of locations might see their deployment costs sliced by $2,000 or more. Lower per-location subsidies would enable many more locations to be upgraded to FTTH: not only the unserved locations but possibly also many millions of locations that have been deemed “not feasible” for FTTH.


Much hinges on the actual rules that are adopted for disbursement. Simple political logic might dictate that aid for as many locations as possible is desirable, though many will argue for targeting the assistance to “unserved” locations. 


But there also will be logic for increasing FTTH services as widely as possible, which will entail smaller amounts of subsidy but across many millions of connections. The issue is whether to enable 50 million more FTTH locations or nine million to 15 million of the most-rural locations. 


Astute politicians will instinctively prefer subsidies that add 65 million locations (support for the most-rural locations plus many other locations in cities and towns where FTTH has not proven obviously suitable). 


The issue is the level of subsidy in various areas. 


“According to my calculations, if the average subsidy is $2,000 (which is the average of the RDOF auction), then the additional subsidy required to reach unserved households is $18.2 billion,” Ford argues. “If the average subsidy level is $3,000, then $22.8 billion is needed. And at a very high average subsidy of $5,000, getting broadband to every location requires approximately $45.5 billion.”
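
The method is simple multiplication: unserved locations times average subsidy. A minimal sketch reproducing it (note that the quoted $22.8 billion mid-range figure evidently reflects different assumptions than a straight 9.1-million-location multiply, so this reproduces the method rather than Ford's exact model):

```python
# Ford's estimate structure: total subsidy cost is unserved locations
# times the assumed average subsidy per location.
unserved_locations = 9.1e6     # Ford's count of unserved U.S. locations

for avg_subsidy in (2_000, 3_000, 5_000):
    total = unserved_locations * avg_subsidy
    print(f"${avg_subsidy:,} average subsidy -> ${total / 1e9:.1f} billion total")
```

The $2,000 and $5,000 cases reproduce Ford's $18.2 billion and $45.5 billion figures exactly.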


The point is that, compared to the business case 20 years ago, FTTH is better in a number of ways. Strategically, copper facilities simply are outmoded. Any fixed network operator clinging to that platform is destined for death. 


Financially, the older triple-play model--with its cost structure and complexity--now is out of favor. The new model is based on home broadband: the sole service for an independent ISP, and the growth driver for an incumbent telco. 


Oddly enough, the older justification for FTTH--that it allows telcos to support many services--now is eclipsed by the simple value of internet access. The value of the “do anything” platform still remains. 


Only these days the primary value driver for an incumbent telco or independent ISP is “access.” Voice or video entertainment might contribute additional revenue and value, but where there is a choice, new providers simply build on home broadband, leaving apps to be supplied by others. 


All that is a big potential change.


Tuesday, September 14, 2021

How Will Cable Operators Re-Architect to Add Upstream Bandwidth?

Hybrid fiber coax upgrades intended to increase upstream bandwidth can take a number of forms. Shrinking the serving areas, switching to fiber-to-home or re-architecting the network around a different frequency plan are the typical choices.


For operators who want to delay the shift to FTTH, moving from the standard HFC low-split design to either a mid-split or a high-split frequency plan are the two architectural choices, other than shrinking fiber node serving areas or moving to an entirely new FTTH network.


As always, incrementalism is favored. Comcast appears to prefer the mid-split option, while Charter seems to be leaning towards a more-radical high-split approach. In terms of capital investment, the mid-split choice might be a shorter-window bridge to FTTH, while high-split might allow a longer window before FTTH is required. 


More symmetrical bandwidth is a large part of the thinking.  


DOCSIS 4.0 is going to force decisions about which path to take to support symmetrical multi-gigabit-per-second speeds of as much as 10 Gbps downstream and up to 6 Gbps upstream.

source: CommScope



Hybrid fiber coax networks still use frequency division, separating upstream and downstream traffic by frequency. So when a cable operator contemplates adopting mid-split or high-split designs, there are implications for active and passive network elements, especially for the more-radical high-split design. 


At this point, executives also will ask themselves whether, if radical changes are required, it would not be better to simply switch to fiber-to-home.


source: Broadband Library 


Our notions of mid-split and high-split frequency plans have shifted a bit over the years, as total bandwidth has grown beyond 450 MHz up to 1.2 GHz. The “mid-split” designation made more sense in an era when total bandwidth was capped at about 450 MHz or 550 MHz. In those days, 108 MHz to 116 MHz of return bandwidth was perhaps 42 percent of the usable bandwidth.


Hence the “mid-split” designation. 


Likewise for high-split designations: where as much as 186 MHz was designated for the return path, the return bandwidth represented as much as 67 percent of usable bandwidth on a 450-MHz coaxial cable system.


source: Broadband Library  


Definitions remain, though with some new standardization of return bandwidths. “Mid-split” now features 85 MHz of return bandwidth, while “high-split” offers 204 MHz of upstream bandwidth. 


source: Broadband Library  
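
Those designations can be turned into a quick upstream-share calculator. A minimal sketch, using the return-path ceilings named above and treating the plant's top frequency as the usable total (a simplification: guard bands, the diplexer crossover region and the sub-5 MHz band are ignored, so the percentages differ from the “usable bandwidth” shares discussed earlier):

```python
# Upstream share of total spectrum for the frequency plans discussed above.
# Return-path ceilings follow the text; legacy low-split (5-42 MHz) is
# included for comparison. Plant totals are typical plan ceilings.
return_ceilings_mhz = {
    "low-split": 42,    # legacy 5-42 MHz return path
    "mid-split": 85,    # 5-85 MHz return path
    "high-split": 204,  # 5-204 MHz return path
}

for plant_mhz in (450, 1218):   # legacy plant vs. modern 1.2 GHz plant
    for plan, up_mhz in return_ceilings_mhz.items():
        print(f"{plan} on {plant_mhz} MHz plant: {up_mhz / plant_mhz:.0%} upstream")
```

On a 1.2 GHz plant, even a high-split design devotes only about 17 percent of spectrum to the upstream, which is why ultra-high-split and full duplex options stay under consideration.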


“Ultra-high-split” designs also are being investigated, where the upstream spectrum’s upper frequency limit can be 300 MHz, 396 MHz, 492 MHz, or 684 MHz, says Ron Hranac, consulting engineer. 


What remains true is that the ability to wring more performance out of hybrid fiber coax plant has proven more robust than many expected a decade ago. 


Also being considered are full duplex designs, which break with frequency division by letting upstream and downstream traffic share the same spectrum. That is an option for DOCSIS 4.0 networks, and a departure from the frequency division multiplexing HFC has always used.




source: CableLabs 


Full duplex networks would allow the upstream and downstream traffic to use the same spectrum at the same time. That would require an HFC upgrade to a “node-plus-zero” (N+0) design that is similar to fiber to the curb: the drop to the user location still uses coaxial cable, but without any radio frequency amplifiers.

source: CableLabs 


The whole point of all these interventions is to supply more upstream or return bandwidth than HFC presently provides. 


source: Qorvo


Cable operators are a practical bunch, and will prefer gradualism when possible. So one might hypothesize that either mid- or high-split designs will be preferred. 


Thursday, August 12, 2021

Next HFC Upgrade Will be Driven by Business Assumptions

Cable operators and mobile operators share one business commonality: capacity improvements hinge on the availability of spectrum and the degree of frequency reuse (smaller cells or serving area sizes). 


Both mobile and cable operators can effectively boost capacity by using different modulation techniques as well. But cable operators face a bigger problem, architecturally. “At some point” in the future a shift to fiber to home designs seems inevitable. 


But there are many ways to upgrade the hybrid fiber coax network before then, with varying degrees of capital investment and complexity, as well as capacity improvements. So each upgrade path embeds assumptions about what the market will require in terms of both upstream and downstream capacity, and for how long.


DOCSIS 4.0 is going to force decisions about which path to take to support symmetrical multi-gigabit-per-second speeds of as much as 10 Gbps downstream and up to 6 Gbps upstream.

source: CommScope



Hybrid fiber coax networks still use frequency division, separating upstream and downstream traffic by frequency. So when a cable operator contemplates adopting mid-split or high-split designs, there are implications for active and passive network elements, especially for the more-radical high-split design. 


At this point, executives also will ask themselves whether, if radical changes are required, it would not be better to simply switch to fiber-to-home.


source: Broadband Library 


Our notions of mid-split and high-split frequency plans have shifted a bit over the years, as total bandwidth has grown beyond 450 MHz up to 1.2 GHz. The “mid-split” designation made more sense in an era when total bandwidth was capped at about 450 MHz or 550 MHz. In those days, 108 MHz to 116 MHz of return bandwidth was perhaps 42 percent of the usable bandwidth.


Hence the “mid-split” designation. 


Likewise for high-split designations: where as much as 186 MHz was designated for the return path, the return bandwidth represented as much as 67 percent of usable bandwidth on a 450-MHz coaxial cable system.


source: Broadband Library  


Definitions remain, though with some new standardization of return bandwidths. “Mid-split” now features 85 MHz of return bandwidth, while “high-split” offers 204 MHz of upstream bandwidth. 


source: Broadband Library  


“Ultra-high-split” designs also are being investigated, where the upstream spectrum’s upper frequency limit can be 300 MHz, 396 MHz, 492 MHz, or 684 MHz, says Ron Hranac, consulting engineer. 


What remains true is that the ability to wring more performance out of hybrid fiber coax plant has proven more robust than many expected a decade ago. 


Also being considered are full duplex designs, which break with frequency division by letting upstream and downstream traffic share the same spectrum.


source: CableLabs  


Each technology upgrade path has business implications, especially the cost to upgrade HFC in some way without shifting to FTTH. The other assumption is the competitive environment and how long each alternative upgrade can support the expected business model.


Tuesday, January 19, 2021

SMBs in Industries with Moats Spend Much More on IT Than SMBs in Risky Sectors

U.S. smaller businesses (defined as “small” when there are up to 99 employees and “medium” with 100 to 499 employees) can be split into four market segments, according to Analysys Mason. 


You likely would not be surprised to find that bigger spenders also are firms that face less competition--and therefore are more profitable--while also being larger firms overall. That has traditionally been the case. 


It would not surprise you that firms in financial distress, or in declining markets, spend less than growing or prosperous firms. You also would not likely be surprised if growing firms were more focused on growth--and investing for growth--while most declining firms are more interested in harvesting revenue for as long as possible and spending as little as possible. 


That is one way to interpret the way SMBs can be characterized in terms of their information technology spending volumes. 


Analysys Mason sorts SMBs into four spending groups: super spenders, ahead of the curve, constrained strugglers and disengaged.


What also might catch your attention is that firms that spend more tend to be in well-protected industries with moats of some sort that keep competitors out. They tend to be larger, growing and likely more profitable. 


The firms that invest less tend to be smaller and financially stressed, operating in high-risk or declining industries.



Source: Analysys Mason, 2020


The super spenders make up roughly one third of the Analysys Mason sample and tend to be found in well-protected industries such as professional services.


The constrained strugglers and disengaged SMBs are mostly small businesses in challenged sectors that focus on cost cutting and survival. These businesses tend to be found in high-risk industries such as retail, construction and manufacturing.


Wednesday, February 5, 2020

Big Changes in Product Strategy Must Come, for Fixed Network Service Providers

Google Fiber will no longer offer a linear TV product to new customers, essentially migrating the business model to a more-typical independent internet service provider model that is based on one product, not a triple play or dual play (video or voice). 

Comcast and Charter Communications, though noting that entertainment video still is a highly-significant revenue source, also say they now focus on core communications--especially internet access, business services and the new mobility segment--as their revenue drivers. 

Some small cable TV operators and small telcos might be moving the same way, essentially letting their video customer base dwindle, as mobile phone and internet access become the ubiquitous services consumers want. 

The big problem for small and rural service providers always has been that the linear TV business model is quite challenged. Few such operators have ever actually claimed they make a profit on entertainment video. 

In rural areas, where there essentially is no business model, subsidies always have made the difference between no service and “some level of service.”

At times, some rural service providers that have taken on too much debt might find themselves unsustainable.

There are clear business model and strategy implications. In a competitive market, the reasonable assumption must be that two excellent providers will split market share; three excellent providers might expect only 33 percent share. 

There are two really important implications. Without a lower cost structure, no single firm can expect to prosper if it has been built to support two, three or more key products. In the monopoly days, one service could support a full network, whether that service was TV, radio, voice or telegraph.

In a competitive market featuring skilled competitors, suppliers might have to sustain themselves with customer share of 50 percent, 33 percent or less. That is why the triple play became important. The revenue impact of serving one out of three locations is mitigated if three services are sold to each customer location.
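
The arithmetic of that mitigation is worth spelling out. A minimal sketch with hypothetical numbers (the per-service revenue figure is a placeholder; only the one-in-three share logic comes from the text):

```python
# Why the triple play mattered: revenue per homes passed under monopoly
# versus competitive share, with one or three services per customer.
homes_passed = 30_000
arpu_per_service = 50          # hypothetical monthly revenue per service

def monthly_revenue(share, services):
    return homes_passed * share * arpu_per_service * services

print(f"Monopoly, one service:  ${monthly_revenue(1.0, 1):,.0f}/month")
print(f"1/3 share, one service: ${monthly_revenue(1/3, 1):,.0f}/month")
print(f"1/3 share, triple play: ${monthly_revenue(1/3, 3):,.0f}/month")
```

Selling three services to a third of the homes recovers the same gross revenue as selling one service to all of them, which is precisely the logic now breaking down as voice and video revenue dwindle.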

All that changes if we start seeing one key product supplied by most firms in competitive markets. Lower costs are necessary. That is why most independent internet service providers have such low overhead, compared to the larger telcos and cable companies.

But it also is likely that no bigger firm can get by simply by slashing costs. Additional revenue must be generated from other products and roles, to replace the dwindling legacy sources. 

So we might predict two developments. Single-product fixed networks using cables are only feasible if internet access is the product. Voice and video will not generate enough revenue to support a fixed network. 

In the past, it was possible for a wireless network to support itself on the strength of TV, radio or internet access alone.


That changed in the competitive era, when single-product strategies based on high adoption (market share of 80 percent or more) began to fail. Multi-product sales came to rest on selling more things to fewer customers (scope), rather than one product to nearly everyone (scale).

All that seems to be changing as revenue upside from voice and linear video dwindles. Video arguably is the more-difficult challenge, as cost of goods is quite high. 

It certainly will make more sense, in a growing number of settings, for highly-focused, low-overhead firms to try to make a go of things based solely on internet access, abandoning the multi-product strategy.

But other changes must follow. Either overhead, capital and operating costs must be drastically pared, or new revenue sources found, or both. The decades where product bundles compensated for lower market share seem to be ending.

Something equally challenging now emerges: how to make a profit from a one-product network with much lower take rates.

Monday, April 1, 2019

Is a 50-50 Revenue Split for Apple News+ Unreasonable?

Content suppliers for Apple’s news-based subscriptions complain about revenue splits (as did app and game suppliers about similar distribution costs in the App Store). The channel conflict is real enough: unless a content supplier can go direct to consumer, distribution represents a healthy chunk of the total cost to deliver a product.

In principle, distribution costs include direct sales, advertising, packaging, incentives for distribution partners, credit and bad debt costs, market research, warehousing, shipping and delivery, invoice processing, customer service and returns processing, for example.

In some industries, the “cost of goods” can range from 30 percent to 80 percent of total retail cost. That might be likened to the digital content Apple will distribute.


Granted, traditional distribution operations have been oriented around physical products, not software, streaming and non-tangible products. One study suggests direct supply chain costs represent four percent to 10 percent of cost; direct transportation costs a couple of percent to 10 percent of revenue; and warehouse or distribution center costs perhaps two percent to 16 percent of revenue. The larger point is that distribution can range from a low of 10 percent to a high of 35 percent of total retail cost.


The point is that any content supplier can go direct or indirect. Apple’s News+ is an indirect distribution or sales channel. What that is worth is a matter of perceived value and market power, played out in contract negotiations.

So much of the disagreement about revenue splits harkens back to the older arguments between content owners and distributors generally. In the U.S. linear video business, some argue sports content alone represents half of the retail cost of the service.

It might therefore be the case that distribution (everything required to get the content to the end user) represents 40 percent or so of total end user price.  

The point is that a 50-50 split of revenues between Apple and any specific content owner might seem out of whack. The alternative is the cost to sell the product direct versus indirect, using Apple. And that is far from an insignificant cost for any supplier, even of digital goods.
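
A minimal sketch makes the comparison concrete, with a hypothetical retail price (the 50 percent split and the roughly 40 percent direct-distribution cost come from the discussion above):

```python
# Publisher economics: Apple News+ 50-50 split versus direct sales where
# distribution costs roughly 40 percent of the retail price.
retail_price = 10.00                       # hypothetical monthly price

via_apple = retail_price * 0.50            # publisher keeps half
direct = retail_price * (1 - 0.40)         # publisher bears ~40% distribution cost

print(f"Via Apple News+:    publisher nets ${via_apple:.2f}")
print(f"Direct to consumer: publisher nets ${direct:.2f}")
```

On those assumptions, the gap between the two channels is about a dollar on a $10 subscription, which is why the 50-50 split, however galling, may not be wildly out of line.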

AI Will Improve Productivity, But That is Not the Biggest Possible Change

Many would note that the internet impact on content media has been profound, boosting social and online media at the expense of linear form...