
Friday, February 2, 2024

On-Device Edge Computing Will be Important for Cost Reasons

Generally speaking, edge computing facilities such as those envisioned by the multi-access edge computing (MEC) model impose higher costs than hyperscale data centers do, in terms of both capital investment efficiency and operating costs. That does not mean MEC is unfeasible, but the business cases need to be worked out, as there are alternatives, including on-board or on-device computing.


Assume 20 percent to 40 percent of edge computing requirements might be suited for on-device processing, especially for simple, real-time tasks and applications with tight latency constraints.


Assume the remaining 60 percent to 80 percent of processing tasks might use either remote edge computing or cloud processing for more complex analysis, data aggregation, or situations where device limitations are significant.


Even in those cases, it is presently unclear how much latency improvement might be needed, and therefore when edge facilities are required. The answer matters, since, generally speaking, MEC or other edge computing facilities will not be as capital-efficient as hyperscale data centers. 
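
As a sketch of how that split might be operationalized, consider a task-placement heuristic that routes work by latency budget and task complexity. The millisecond thresholds below are illustrative assumptions, not industry figures:

```python
# Hypothetical task-placement heuristic: route work to on-device, edge,
# or cloud processing based on latency budget and task complexity.
# The millisecond thresholds are illustrative assumptions only.
def place_task(latency_budget_ms: float, complex_analysis: bool) -> str:
    if latency_budget_ms < 10 and not complex_analysis:
        return "on-device"   # simple, real-time tasks with tight latency constraints
    if latency_budget_ms < 50:
        return "edge (MEC)"  # latency-sensitive, but too heavy for the device
    return "cloud"           # complex analysis, data aggregation, loose latency

for budget_ms, heavy in [(5, False), (30, True), (200, True)]:
    print(f"{budget_ms} ms budget -> {place_task(budget_ms, heavy)}")
```

The point of such a model is that the edge tier only earns its keep for workloads whose latency budgets are too tight for the cloud but too heavy for the device.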


| Challenge | Edge Computing | Hyperscale Data Centers | Cost Implication |
| --- | --- | --- | --- |
| Infrastructure diversity | Diverse hardware needs based on specific edge locations (e.g., ruggedized for remote areas, low-power for battery-operated devices) | Standardized hardware for bulk purchase and deployment | Higher upfront costs for edge |
| Geographical distribution | Managing equipment across geographically dispersed locations | Centralized infrastructure with economies of scale | Higher logistics and deployment costs for edge |
| Smaller scale | Lower capacity per unit compared to large data centers | High capacity per unit due to bulk purchase and deployment | Lower cost per unit of compute for hyperscale |

Add to that the operating cost profile, which likewise tends to be higher than for hyperscale sites. 


| Challenge | Edge Computing | Hyperscale Data Centers | Cost Implication |
| --- | --- | --- | --- |
| Remote monitoring and maintenance | Managing and maintaining equipment across diverse locations | Centralized monitoring and maintenance | Increased labor and service costs |
| Power and cooling | Diverse power and cooling requirements based on location (e.g., solar panels for remote areas) | Standardized power and cooling infrastructure | Increased energy and infrastructure costs |
| Security and compliance | Diversified security needs based on specific edge locations and regulations | Standardized security protocols across centralized infrastructure | Increased security and compliance costs |


All of that means that MEC and other edge computing facilities are likely to be relatively costly investments for a data center services provider, simply because of lower scale at each facility, as well as the need for many such distributed facilities. 


That includes hardware costs, deployment costs, energy profiles, cooling requirements, and monitoring and maintenance, as well as security.


| Cost Factor | Edge Computing | Hyperscale Data Centers | Relative Cost |
| --- | --- | --- | --- |
| Hardware | Higher upfront cost per unit, diverse needs | Lower upfront cost per unit, standardized needs | Edge > Hyperscale |
| Deployment | Higher logistics and deployment cost per unit | Lower deployment cost per unit due to scale | Edge > Hyperscale |
| Energy | Diverse power needs, potentially higher cost per unit | Standardized power infrastructure, lower cost per unit | Edge > Hyperscale (depending on location) |
| Cooling | Diverse cooling needs, potentially higher cost per unit | Standardized cooling infrastructure, lower cost per unit | Edge > Hyperscale (depending on location) |
| Monitoring & Maintenance | Higher labor and service cost per unit | Lower cost per unit due to centralized management | Edge > Hyperscale |
| Security & Compliance | Higher cost per unit due to diverse needs | Lower cost per unit due to standardized protocols | Edge > Hyperscale (depending on regulations) |
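
To make the scale argument concrete, here is a toy cost model, with entirely assumed numbers, comparing annual cost per server across many small edge sites versus a single hyperscale facility:

```python
# Toy cost model comparing many small edge sites to one hyperscale site.
# All figures are illustrative assumptions, chosen only to show how
# fixed per-site costs dominate at low scale.
def annual_cost_per_server(sites: int, servers_per_site: int,
                           fixed_cost_per_site: float, cost_per_server: float) -> float:
    total = sites * (fixed_cost_per_site + servers_per_site * cost_per_server)
    return total / (sites * servers_per_site)

edge = annual_cost_per_server(sites=500, servers_per_site=10,
                              fixed_cost_per_site=100_000, cost_per_server=5_000)
hyper = annual_cost_per_server(sites=1, servers_per_site=50_000,
                               fixed_cost_per_site=20_000_000, cost_per_server=4_000)

print(f"Edge:       ${edge:,.0f} per server per year")    # ~$15,000
print(f"Hyperscale: ${hyper:,.0f} per server per year")   # ~$4,400
```

Even under assumptions favorable to the edge, the fixed per-site overhead spread over a handful of servers drives per-unit costs well above the hyperscale figure, which is the core of the capital-efficiency argument.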



Thursday, December 7, 2023

T-Mobile Preparing to Use Millimeter Wave for High-Traffic Urban Areas

T-Mobile, in a test, aggregated eight channels of millimeter-wave spectrum to reach download speeds topping 4.3 Gbps without relying on low-band or mid-band spectrum to anchor the connection, the company says.  T-Mobile also aggregated four channels of mmWave spectrum on the uplink, reaching speeds above 420 Mbps.
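
As rough arithmetic (assuming, for illustration only, that throughput splits evenly across aggregated channels), the implied per-channel rates are:

```python
# Implied average throughput per aggregated mmWave channel, assuming an
# even split across channels, which is a simplification for illustration.
downlink_gbps, dl_channels = 4.3, 8
uplink_mbps, ul_channels = 420, 4

print(f"Downlink: ~{downlink_gbps / dl_channels:.2f} Gbps per channel")  # ~0.54 Gbps
print(f"Uplink:   ~{uplink_mbps / ul_channels:.0f} Mbps per channel")    # ~105 Mbps
```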


Though T-Mobile has not relied on mmWave spectrum to support 5G, it is testing 5G mmWave for use in crowded areas such as stadiums. T-Mobile also suggests mmWave might--and likely will--support its fixed wireless home broadband services.


Verizon has used millimeter wave to support usage in dense urban areas and high-traffic locations such as stadiums, airports, and business districts. In large part, Verizon has been more aggressive about using its millimeter wave assets because it has had the smallest allotment of crucial mid-band spectrum.


AT&T has the same strategy, supporting usage in dense urban areas and select business locations, but it has been more cautious than Verizon in deploying its mmWave assets.


Competitors criticize fixed wireless as unsustainable, arguing the platform will eventually be unable to keep pace with the capacity demands of its users.


At the moment, Verizon and T-Mobile are careful to offer fixed wireless home broadband in areas where they have lots of capacity on the 5G network, allowing them to devote spectrum to fixed wireless without impairing mobile experience. 


Average monthly data consumption for 5G fixed wireless ranges from about 300 gigabytes on Verizon’s network to 450 GB on T-Mobile’s fixed wireless network. 


By comparison, average monthly mobile data consumption in North America is about 8.6 GB. Basically, fixed wireless, when used as a home broadband platform, consumes 35 to 50 times as much data as a typical mobile phone customer.
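
The ratio is simple to check; a minimal calculation using the consumption figures cited above:

```python
# Ratio of fixed wireless home broadband usage to typical mobile usage,
# using the monthly figures cited in the text.
fwa_gb_per_month = {"Verizon": 300, "T-Mobile": 450}
mobile_gb_per_month = 8.6  # typical North American mobile user

for carrier, gb in fwa_gb_per_month.items():
    print(f"{carrier} FWA: {gb / mobile_gb_per_month:.0f}x typical mobile usage")
# Verizon FWA: 35x typical mobile usage
# T-Mobile FWA: 52x typical mobile usage
```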


Thursday, October 12, 2023

How "Fair Share" Tries to Recreate the Old Closed Network

Sometimes what is important about a report or statement is what is not said. In a report discussing network infrastructure issues in the European Union, the European Commission did not issue an explicit decision on whether a few large app providers (Netflix, Meta, Google) should be required to pay “fair share” fees to internet access providers.


Some observers might argue that likely means no action, one way or the other, will be taken in the next year or so. 


Though the report noted other issues, including the coming roles of network virtualization, artificial intelligence, edge computing, a unified EU market, and open networks, the immediate battle is over revenue. ISPs and mobile operators say their revenues and profit margins are declining, in large part, they argue, because a few large content and app providers benefit from ISP networks without contributing to the cost of creating the needed capacity on those networks.


Critics might note that internet domains--including the targeted hyperscale firms--already pay such fees for traffic asymmetry in the form of interconnection payments.


| Hyperscale App Provider | ISP | Interconnection Payment |
| --- | --- | --- |
| Netflix | Comcast | $1 billion |
| Netflix | Verizon | $750 million |
| Amazon Web Services | Comcast | $1.2 billion |
| Amazon Web Services | Verizon | $900 million |
| Microsoft Azure | Comcast | $1 billion |
| Microsoft Azure | Verizon | $750 million |
| Google Cloud | Comcast | $800 million |
| Google Cloud | Verizon | $600 million |
| Microsoft Azure | AT&T | $75 million per year |
| Alphabet | Charter | $100 million per year |
| Amazon | AT&T | $150 million per year |
| Microsoft | Charter | $75 million per year |
| Google | AT&T | $125 million per year |
| Meta | Charter | $50 million per year |
| Meta | AT&T | $75 million per year |
| Alphabet | China Telecom | $150 million per year |
| Amazon | NTT | $125 million per year |
| Microsoft | Deutsche Telekom | $100 million per year |
| Google | Telefónica | $75 million per year |
| Meta | Singtel | $50 million per year |
| Meta | Orange | $75 million per year |


In addition, one can clearly argue that ISPs charge their own customers for internet access service, and can set those fees at levels that support their own customers’ data consumption. Already, some heavier users pay more than lighter users, and that is a business decision any ISP is free to make. 


In other businesses, when the cost of a product goes up, so does the retail price. In other words, if an ISP’s own consumers are consuming more capacity than present retail prices recover, it can raise prices.
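
A toy sketch of that cost-recovery logic, with entirely assumed numbers for network cost per gigabyte and margin:

```python
# Toy cost-recovery pricing: as per-user consumption grows, the retail
# price needed to recover network costs grows with it. The cost-per-GB
# and margin figures are assumptions for illustration only.
def breakeven_price(cost_per_gb: float, gb_per_month: float, margin: float = 0.20) -> float:
    return cost_per_gb * gb_per_month * (1 + margin)

for usage_gb in (300, 450, 600):
    print(f"{usage_gb} GB/month -> ${breakeven_price(0.05, usage_gb):.2f}/month")
```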


The other angle is that, traditionally, traffic imbalances between domains were assumed to be created by the initiating party. For example, the party placing an international call pays for the call. The party sending a text message pays for it. 


The claimed traffic imbalance ISPs complain about is created by requests from their own customers for app providers to send data. 


That noted, in the past it was the telcos themselves that created interconnection payments for traffic asymmetries. Even in the internet era, where the hyperscale app providers often create their own end-to-end networks, traffic imbalances result only when a customer on one ISP network requests content from a firm on another ISP’s network. 


In other words, interconnection obligations happen when one party invokes the use of another party on a different network. At a retail level, the initiating customer creates a session that generates revenue for the initiating and terminating networks. At a wholesale level, imbalances between networks and domains are resolved either by settlement-free peering or true-ups at the end of a year. 
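
A simplified sketch of such a year-end true-up, assuming a hypothetical rule in which traffic within a tolerance ratio settles for free and only the excess is billed (both the ratio and the rate are assumptions, not actual peering terms):

```python
# Hypothetical year-end true-up between two interconnected networks.
# Settlement-free if traffic is within an assumed tolerance ratio;
# otherwise the heavier sender pays for the excess.
def true_up(sent_pb: float, received_pb: float,
            tolerance: float = 2.0, rate_per_pb: float = 10_000) -> str:
    heavier, lighter = max(sent_pb, received_pb), min(sent_pb, received_pb)
    if lighter == 0 or heavier / lighter <= tolerance:
        return "settlement-free peering"
    excess = heavier - tolerance * lighter
    return f"payment due: ${excess * rate_per_pb:,.0f}"

print(true_up(sent_pb=120, received_pb=100))  # settlement-free peering
print(true_up(sent_pb=500, received_pb=100))  # payment due: $3,000,000
```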


This is arguably more complicated than in the past, in part because internet sessions are not based on nailing up circuits for finite, easily measured periods of time. Also, the locations of requesting and fulfilling parties are not fixed.


The party requesting content can be on the same ISP network as the fulfilling party, in which case no inter-domain traffic is invoked. The requesting party can be on network A while the fulfilling party is on network B, but, on balance, requesting and fulfilling parties exist, in many cases, on both networks A and B.


At a high level, traffic might roughly balance over a year’s time, especially in cases where ISPs A and B have hyperscale data centers as customers “on their own networks,” as well as retail customers invoking delivery of content from firms at those data centers, both on networks A and B. 


At one level, one might argue that peering makes more sense on transit and wide area networks since the actual path any packet might take is indeterminate. Any WAN provider might be able to cite the total number of packets flowing over the network, but be unable to identify which originating networks were involved in creating the traffic. 


On any access network, in contrast, the use of network resources is definite. No matter what path packets might take across the interconnected wide area networks, they use only one physical path at the receiving party’s location on the local access network. 


But again, one might ask: is the “cause” of traffic imbalance the originator or the fulfiller of a session? Is the ISP customer asking for a Netflix movie stream the traffic initiator, or is it Netflix, fulfilling the request?


Beyond that, is it the ISP supporting Netflix content delivery that is “creating the traffic asymmetry,” or is it Netflix, or the Netflix customer on the ISP network who asks for the content?


Without question, domain interconnection, what constitutes a “session,” and “who initiated the session?” are different questions in the internet era than they were in the voice era.


But the issues are bigger than that. In the internet era, transport and access service providers have, by and large, lost their “closed” networks and the ability to control the presence of every app that uses the network.


In a “permissionless” era, where no app creator needs a formal business relationship with a capacity provider to reach a customer, ISPs have also lost the ability to monetize app traffic as owners of those apps.


In a real sense, the proposed “fair share” payments from a few hyperscale app providers to ISPs are an effort to recreate that revenue mechanism.


Or, in other words, “fair share” is an effort by ISPs to participate in the revenue earned by a few hyperscale app providers, as would have been the case in the older world of “closed” networks, where all apps needed permission from the network owner to operate.

